(gentle music)

- [Instructor] All right, I'm going to speak about the digital camera data. Here's a sample image we took over Boulder one year, and you are right there: that's NEON headquarters, in the classroom. This image has been orthorectified, it covers roughly 800 by 600 meters on the ground, and it's at 10 centimeter resolution.

All right, the camera itself, the camera back, is made by PhaseOne. You can see the number of pixels. At best it can take one frame every two seconds. It's got a forward motion compensator, so that as the plane is flying along at 100 knots, it minimizes the blurring on the ground. The output format of this camera is something called IIQ, which is proprietary to PhaseOne, meaning that in order to deal with the truly raw images you need to use their software, called CaptureOne, which is free to use for this particular image format. The nominal resolution when you're flying at 1000 meters above ground level is 8.5 centimeters for the raw data.

All right, so what do you use the camera images for? Well, it's a complement to the spectral data. On the left here you see spectrometer data; we selected three bands, 52, 34, and 18, which mimic RGB, so it's typical of what you're looking at on the ground. Over on the right we see the camera image (inaudible) camera image, and you can see that there are roads, trees, shadows, and then right here, this by the way is San Joaquin, right here is the central tower, the San Joaquin central tower. You notice that it looks like it's leaning over at 45 degrees. Obviously it's not; that's an artifact of the fact that you're at 1000 meters above ground and this is toward the edge of the image, so it's distorted.

All right, in operations the camera is coincident with the spectrometer. We have about a 50% overlap along track and 33% cross track. Nominally the frame rate is about one image every four seconds, and at each site it may collect between 2,000 and 11,000 images over several days.

There are three major steps to processing. The first one is to adjust the color balance and exposure. Since the images are taken on different days, and on the same day over a period of several hours during which conditions change, you may have to adjust the color balance and exposure separately over the time period. The second step is orthorectification: you have to remap the image from the camera frame to a regular fixed grid on the ground, the same grid that the spectrometer is projected onto, except at 10 centimeter resolution rather than the spectrometer's one meter resolution. And finally, mosaicking. Like I said, you can have 11,000 images over one site; mosaicking takes all of those images, overlaps them, and creates one single image. And then, because that single image is so large, you subdivide the mosaic into separate tiles, which are one kilometer on a side.

All right, preprocessing. You can see on the left a raw image and on the right a processed image. We try to make it appear as close as possible to what you would see if you were actually up there looking down. You may have to adjust these images separately, so it can be a very tedious process.
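As a rough illustration of that first step, here is a minimal sketch of a per-image color balance and exposure adjustment: a generic gray-world white balance followed by a single global gain. This is not NEON's actual CaptureOne workflow; the function names, the target brightness, and the `raw_rgb` input are hypothetical.

```python
import numpy as np

def gray_world_balance(rgb):
    """Scale each channel so its mean matches the overall mean (gray-world assumption)."""
    # rgb: float array in [0, 1] with shape (rows, cols, 3)
    channel_means = rgb.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(rgb * gains, 0.0, 1.0)

def adjust_exposure(rgb, target_mean=0.45):
    """Apply one global gain so the image brightness hits a target mean."""
    gain = target_mean / max(rgb.mean(), 1e-6)
    return np.clip(rgb * gain, 0.0, 1.0)

# Hypothetical usage: adjust each image (or each block of images from the same
# time window) separately, since the lighting drifts over the flight day.
# balanced = adjust_exposure(gray_world_balance(raw_rgb))
```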
Orthorectification: we remap from the camera frame down to a regular UTM grid on the ground, which is what's shown in the picture. To do this we require several other pieces of data. One is called a smoothed best estimate of trajectory, which we get from the lidar system, and that tells you exactly where the plane is at any second and exactly how it's oriented: the roll, pitch, and yaw. From this we can trace any line of sight from the camera down to the ground. Also from the lidar we get a digital elevation model, a DEM, and that's the grid you see there. So we shoot a ray from each camera pixel down to the ground, see where it intersects the DEM, and then project that onto the UTM grid. We also need a camera model, which describes the distortions in the camera image due to the lens, and also the offset of the camera itself from the lidar (inaudible) site.

Here's an example of an image before orthorectification, that is, the raw image after preprocessing, and over here it's been orthorectified. The plane wasn't flying exactly north-south, and you can see here where there's curvature on the sides of the image; that's due to the uneven ground surface.

Now, the orthorectification process introduces other distortions, and that's because of a mismatch between the camera resolution, which is a tenth of a meter, and the lidar DEM resolution, which is one meter. Here's an image of an intersection, and you can see where these straight lines are distorted. That's because of trees and holes nearby: the DEM, again, has a one meter resolution, so it may trace a ray down to the top of this tree when the pixel really belongs over here. Over on the right of the screen is the edge of an image where you see the tree canopy, and you see a lot of swirls in the canopy. Again, you're seeing partly down to the ground and partly to the top of the trees, so you get these artifacts that look kind of weird, especially when you compare them to, say, a satellite image, where the line of sight is very nearly vertical and you don't see this kind of distortion.
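As a rough sketch of the ray tracing just described, the snippet below traces a single camera pixel down to the DEM, assuming a simplified nadir-pointing pinhole camera and an already-interpolated trajectory sample. It omits the lens-distortion terms and the camera-to-lidar offset that the real camera model handles, and the names and axis conventions are illustrative rather than NEON's actual code.

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Body-to-world rotation from roll, pitch, yaw in radians.
    The axis convention here is illustrative; a real pipeline must match the IMU's."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def pixel_to_ground(row, col, cam_pos, attitude, focal_pix, pp_col, pp_row,
                    dem_lookup, n_iter=5):
    """Trace one camera pixel to the ground and return its UTM easting/northing.

    cam_pos    -- (easting, northing, altitude) at exposure time (from the SBET)
    attitude   -- (roll, pitch, yaw) at exposure time (from the SBET)
    focal_pix, pp_col, pp_row -- pinhole model: focal length and principal point, in pixels
    dem_lookup -- function (easting, northing) -> terrain elevation, i.e. the lidar DEM
    """
    R = rotation_matrix(*attitude)
    # Ray through this pixel in the camera frame; the optical axis points straight
    # down, so its vertical component is -1 in an east-north-up world frame.
    ray = R @ np.array([(col - pp_col) / focal_pix, (row - pp_row) / focal_pix, -1.0])
    ray /= np.linalg.norm(ray)

    # Intersect the ray with the DEM iteratively: guess the ground height,
    # drop down to it along the ray, re-read the DEM there, and repeat.
    ground_z = dem_lookup(cam_pos[0], cam_pos[1])
    for _ in range(n_iter):
        t = (cam_pos[2] - ground_z) / -ray[2]      # ray[2] is negative (pointing down)
        easting = cam_pos[0] + t * ray[0]
        northing = cam_pos[1] + t * ray[1]
        ground_z = dem_lookup(easting, northing)
    return easting, northing
```

In practice you would run this over every pixel of every frame and then resample the results onto the shared 10 centimeter UTM grid.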
Finally, mosaicking. A single survey will produce between 2,000 and 11,000 images, and mosaicking combines all of these into one image. From all of the overlapping images, we select the pixel with the smallest zenith angle, the most vertical view, to minimize the distortions that we talked about earlier. Then you tile the result into a set of images which are one kilometer on a side, and you can end up with between 100 and 450 of these tiles. Now, one ongoing issue is how you blend images from different times and days, with different solar zenith angles, into something that looks uniform. You'll see that in a bit.

All right, here's an example of a mosaic we made earlier this year. Here is the full mosaic, including all the different images, and over here is one tile from that mosaic. You can see these seams along here, and that's because you're combining different images with slightly different lighting conditions, which shows up at these boundaries.

All right, now, how do you deal with 11,000 images or 450 tiles? To deal with all of these, we have created a KMZ file, which you can load into Google Earth. Here is Google Earth centered over the NEON hangar at Boulder Airport, about two miles just north of here. So we create these KMZ files, and here's an example for San Joaquin this year. If you load it into Google Earth and then double-click on it, here's what you see initially. This outermost purple boundary is the limit of a digital elevation model that we use for processing. But let me turn that off.

You'll see this other purple boundary, and that's the limit of the actual DEM from the lidar. We've included all this other area, but that comes from a USGS DEM at eight meter resolution; we just use it for filler, and it's necessary for some of the analysis. If we blow it up, you'll see that there are some interior portions outlined in purple, and that's where there was no good lidar data, so those have also been filled with the USGS DEM. Typically you'll see that over water bodies, where you don't get a decent lidar return. But the water bodies are flat, so there are no features that you'd see anyway. You'll also see here the location of the central tower.

All right, then, if you click on mosaic tiles, this shows you the location of each tile. So suppose you're interested in the central site: blow it up there, and if you click on the tile, it gives you the name of the file that corresponds to that tile. The name consists of the year; the site, San Joaquin Experimental Range; the visit number, (inaudible) this was the second visit we have ever made to San Joaquin, we went there once before one year; and these two numbers, which are the Universal Transverse Mercator coordinates, in meters, of the lower left corner of the image. And that's just the way that (inaudible) is; that's for something else. So if you're interested in this area, this tells you how to find this tile.

You can also go over to this button, and that shows the location of each individual image. So, like this one, #0499: if we click on it, it gives you the file name of that image plus the exact location, the altitude, and the heading of the plane when it took the image.

And then finally there is a five meter resolution (inaudible). This is taken from the mosaic, but at reduced resolution, because the full thing was just too big; this image by itself is already 10 megabytes. It gives you a sort of overview and perspective of what you're looking at, and it includes things like cloud shadows, and you can also see those boundaries between individual images.

Okay, one more feature of Google Earth. Now, this is Google Earth Pro, and Google Earth Pro is now free for anybody to download and use. Google isn't going to support it anymore, so while you can still get it, I'd use it. So, one feature of Google Earth Pro is that you can actually take some of these images, well, let's go to this tile here. It's tile 3257411. We go over here to the natural images and find that one, there it is, 257411, and we drag that into Google Earth. Now, the problem is that this image is too large to fit into Google Earth, so you've got two choices: you can either look at the whole image scaled down in resolution, or you can crop it and get full resolution over a limited area. So let's crop it and center it right on the tower. Then you can blow it up and look at the image in detail. Again, here is that tower we saw at the very beginning, plus the roads, trees, and so forth, and this is how you locate things of interest. And again, you can do the same thing with scaling. There's also this super overlay option, but don't push that button. (audience laughs) That shows you the full image, again at reduced resolution.
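Since a KMZ file is just a zipped KML document, here is a minimal sketch of wrapping one reduced-resolution tile image as a Google Earth ground overlay. The file names and the latitude/longitude bounds are made up, a real script would first convert the tile's UTM corners to latitude and longitude (for example with pyproj), and this is not how NEON's own KMZ files are generated.

```python
import zipfile

def write_kmz(kmz_path, image_path, overlay_name, north, south, east, west):
    """Bundle one overlay image plus a KML GroundOverlay into a KMZ (a zip archive)."""
    kml = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <GroundOverlay>
    <name>{overlay_name}</name>
    <Icon><href>{image_path}</href></Icon>
    <LatLonBox>
      <north>{north}</north><south>{south}</south>
      <east>{east}</east><west>{west}</west>
    </LatLonBox>
  </GroundOverlay>
</kml>
"""
    with zipfile.ZipFile(kmz_path, "w", zipfile.ZIP_DEFLATED) as kmz:
        kmz.writestr("doc.kml", kml)   # the KML document, conventionally named doc.kml
        kmz.write(image_path)          # the overlay image itself, e.g. a downsampled tile

# Hypothetical usage with made-up WGS84 bounds for a 1 km tile:
# write_kmz("sjer_tile.kmz", "tile_preview.png", "SJER mosaic tile",
#           north=37.115, south=37.106, east=-119.727, west=-119.738)
```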