DIG IMAGE PROC & ANALY GEOG 661
This 11 page Class Notes was uploaded by Jamarcus Ritchie V on Wednesday October 21, 2015. The Class Notes belongs to GEOG 661 at Texas A&M University taught by Staff in Fall. Since its upload, it has received 17 views. For similar materials see /class/225772/geog-661-texas-a-m-university in Geography at Texas A&M University.
Geometric Rectification

Sources of Geometric Distortion

1. rotation of the earth during image acquisition
2. the finite scan rate of some sensors
3. the wide field of view of some sensors
4. the curvature of the earth
5. non-ideal behavior of sensors
6. variations in platform altitude and velocity
7. panoramic effects related to the imaging geometry

Earth Rotation Effects

[Fig. 2.6: The effect of earth rotation on scanner imagery. (a) Image formed according to Fig. 2.5, in which lines are arranged on a square grid. (b) Offset of successive lines to the west to correct for the rotation of the earth's surface during the frame acquisition time.]

Finite Sensor Scan Rate

Because sensors like MSS and TM require a finite time to make a single scan, the satellite moves forward between the start of a line and its end, so the end of the scan is farther ahead in the along-track direction than the beginning. For MSS this effect is about 213 meters per scan line (Richards and Jia, 1999).

Panoramic Distortion (Wide Field of View)

For scanners, the angular IFOV is constant. As a result, as a scanner sweeps from side to side, the pixel size is larger at the ends of the scan than at nadir. If the pixel size at nadir is p, the dimension in the scan direction at scan angle θ is

    p_θ = β h sec²θ = p sec²θ

where β is the angular IFOV and h is the altitude of the sensor.

[Fig. 2.7: Effect of scan angle on pixel size at constant angular instantaneous field of view.]

Earth Curvature

For satellite sensors with a large field of view (e.g., NOAA AVHRR), the assumption that the earth can be adequately represented as a flat surface is incorrect; the deviation of the earth's surface from a plane is substantial over an AVHRR swath. The major problem is that the inclination of the surface makes the area viewed for a given IFOV larger than it would be if no curvature existed:

    p_c = β [h + r_e(1 − cos φ)] sec θ sec(θ + φ)

where r_e is the radius of the earth and φ is the angle subtended at the earth's center between nadir and the viewed point.

[Fig. 2.9: Effect of earth curvature on the size of a pixel in the scan direction (across track).]
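The growth of pixel size with scan angle can be sketched numerically. The altitude and nadir pixel size below are illustrative values only (roughly MSS-like), and the relation used to obtain φ from the scan angle is ordinary spherical geometry (law of sines on the sensor/earth-center/target triangle), not something given in the notes:

```python
import math

def pixel_size_flat(p_nadir, theta):
    """Across-track pixel size on a flat earth: p_theta = p * sec^2(theta)."""
    return p_nadir / math.cos(theta) ** 2

def pixel_size_curved(beta, h, theta, r_e=6371e3):
    """Across-track pixel size including earth curvature.

    phi is the angle subtended at the earth's center, found from
    sin(theta + phi) = (r_e + h) / r_e * sin(theta).
    """
    phi = math.asin((r_e + h) / r_e * math.sin(theta)) - theta
    return beta * (h + r_e * (1 - math.cos(phi))) / (
        math.cos(theta) * math.cos(theta + phi))

# Illustrative numbers (assumed): 79 m nadir pixel from ~919 km altitude
h = 919e3
p = 79.0
beta = p / h
for deg in (0, 15, 30, 45):
    t = math.radians(deg)
    print(deg, round(pixel_size_flat(p, t), 1),
          round(pixel_size_curved(beta, h, t), 1))
```

At nadir the two formulas agree; away from nadir the curved-earth pixel is larger still than the sec²θ value, which is the point of Fig. 2.9.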
Correction of Geometric Distortion

There are two major techniques for correcting the geometric distortion present in an image. The first is to model the nature and magnitude of the sources of distortion and then use these models to correct for it. The second is to establish a mathematical relationship between the locations of pixels in an image (x', y') and their locations (x, y) in the real world (e.g., on a map). The latter is by far the more common, though many standard sources of satellite images apply at least some model-based corrections before the user receives the data.

[Figure: (a) original input image; (b) rectified output image.]

The idea is to develop mathematical functions that relate the image and real-world coordinate systems, e.g., x' = f(x, y) and y' = g(x, y). While in the end we want to transform our images from the pixel coordinate system into some real-world coordinate system, in actuality this is accomplished in reverse: once a real-world grid is established, the mapping functions are used to determine which pixels in the image fall closest to each real-world grid center.

Rectification Types

There are two general types of rectification: (1) image-to-image and (2) image-to-map. In many cases it is convenient to place an image directly in a map projection so that it can be compared to other data. However, if it is only necessary to compare two images, it may make sense to simply rectify one image (the slave) to another (the master). Apologies for the non-politically-correct language, but those are the terms commonly in use. A little later we will return to why this approach may be preferred.

Ground Control Points

Seldom is it the case that the functions relating pixel coordinates to real-world coordinates are known. Usually the function is approximated by a least-squares polynomial regression.
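The reverse-mapping idea can be illustrated with a minimal sketch. The image, the mapping functions, and the shift values here are all hypothetical; in practice f and g would be the fitted polynomials:

```python
import numpy as np

def rectify_inverse(image, f, g, out_shape):
    """Fill an output (map) grid by mapping each output cell back into
    image coordinates with f and g, then taking the nearest input pixel."""
    out = np.zeros(out_shape, dtype=image.dtype)
    rows, cols = image.shape
    for y in range(out_shape[0]):
        for x in range(out_shape[1]):
            xi = int(round(f(x, y)))   # image column for this map cell
            yi = int(round(g(x, y)))   # image row for this map cell
            if 0 <= yi < rows and 0 <= xi < cols:
                out[y, x] = image[yi, xi]
    return out

# Hypothetical mapping: a pure shift of 2 columns and 1 row
img = np.arange(25).reshape(5, 5)
shifted = rectify_inverse(img, lambda x, y: x + 2, lambda x, y: y + 1, (5, 5))
```

Working in reverse this way guarantees every output cell receives a value; mapping forward from input pixels would leave holes in the output grid.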
Some software packages are beginning to allow other rectification methods, including triangulation, rubber sheeting, and orthorectification, but in this discussion we will limit ourselves to the more common polynomial warp. The idea is to find points, called ground control points (GCPs), that can be precisely located both in the real world and in the image. These features should be of limited spatial extent so that their positions can be precisely determined; road and runway intersections are the best. The GCPs are then used to statistically determine a relationship between the image and real-world coordinate systems using standard least-squares regression techniques. Usually polynomials of 1st, 2nd, or 3rd degree are used. Be warned! If you use a higher-order polynomial, large errors can be introduced, especially in areas outside of, and unconstrained by, the selected GCPs.

[Figure: screenshot of GCP collection in image-processing software, with GCP locations reported in the UTM projection; the remaining details are not recoverable.]

Once enough points have been collected, the software will determine two equations that relate the map coordinates to the image coordinates. Here are examples of simple first-order polynomial equations:

    x' = a0 + a1·x + a2·y
    y' = b0 + b1·x + b2·y

Since there are three pairs of unknowns in the above equations, a minimum of three ground control points is required, but more are better. Once more than the minimum number of points has been collected, the system is overdetermined and a least-squares best fit is used to determine the coefficients. For each point, a predicted image location is computed, and the difference between the predicted and known image locations is used in the calculation of an RMS error:

    RMS = sqrt((x_pred − x_known)² + (y_pred − y_known)²)

The RMS error gives an indication of how well the polynomial equation represents the functional relationship between the image and map coordinates.
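A minimal sketch of the least-squares fit and per-point RMS calculation, assuming NumPy. The GCP map coordinates and the affine coefficients are invented for illustration; the image coordinates are generated from a known transform so the fit can be checked exactly:

```python
import numpy as np

# Hypothetical GCP map coordinates (e.g., UTM easting/northing)
map_xy = np.array([[500.0, 4200.0], [900.0, 4180.0], [520.0, 3900.0],
                   [880.0, 3920.0], [700.0, 4050.0]])

# Design matrix for a first-order polynomial: columns [1, x, y]
A = np.column_stack([np.ones(len(map_xy)), map_xy])

# For illustration, image coordinates generated from a known affine
# transform (columns: x' coefficients, y' coefficients); in practice
# these would be measured on the image.
true = np.array([[5.0, 800.0], [0.4, -0.01], [-0.02, -0.2]])
img_xy = A @ true

# Least-squares fit of x' = a0 + a1*x + a2*y and y' = b0 + b1*x + b2*y
coeffs, *_ = np.linalg.lstsq(A, img_xy, rcond=None)

# Per-point RMS error between predicted and known image locations
resid = A @ coeffs - img_xy
rms = np.sqrt((resid ** 2).sum(axis=1))
```

With five points and six unknowns the system is overdetermined, exactly the situation described above; real GCPs would leave nonzero residuals like those in the RMS report.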
Once a large number of ground control points have been collected, it is common practice to remove points with high RMS errors, or to adjust points to improve the overall RMS error. However, do not fall into the trap of over-adjusting points just to lower the RMS: many effects, such as topography, introduce complexity into the actual functional relationship that cannot be modeled with a low-order polynomial equation. An example of an RMS error report for the points above is shown below.

RMS error report (Warp Type: Polynomial)

            ACTUAL                PREDICTED
Point    Cell X     Cell Y     Cell X     Cell Y     RMS
1      1109.000   2849.000   1106.895   2847.225   2.7533
2       987.900   2888.500    987.134   2886.807   1.8588
3      1039.730   2738.020   1041.878   2739.113   2.4103
4       784.022   2670.040    782.814   2669.497   1.3249
5      1339.000   2649.000   1336.555   2647.092   3.1008

Interpolation

Seldom do the centers of the real-world grid cells project exactly onto the image pixel centers. It therefore becomes necessary to decide which pixels from the input image will be used in calculating the value to place at each real-world grid center. Three techniques are commonly used: nearest neighbor, bilinear interpolation, and cubic convolution.

Nearest Neighbor Resampling

In nearest-neighbor resampling, the value of the image pixel whose center ends up nearest that of the output grid cell is placed in the output grid. This interpolation technique has two major advantages: it is computationally very efficient, and it retains the DN values of the original image. It is therefore preferred if the resulting image is to be classified or if geophysical parameters are to be extracted.

Bilinear Interpolation

In bilinear interpolation, three linear interpolations over the four pixels that surround the point in the output image are used to determine the value in the resulting image. Once the four surrounding pixels are
identified, their positions in the output space are determined. Two linear interpolations are then done along the scan lines; once those two interpolants are determined, a third linear interpolation between them gives the value at the grid center.

[Figure: geometry of bilinear interpolation: the required brightness value lies among pixels (i, j), (i, j+1), (i+1, j), and (i+1, j+1), at a position determined from the map grid.]

Bilinear interpolation will usually result in a smoother image than nearest neighbor, but at the expense of a higher computational load.

Cubic Convolution

In cubic convolution, the nearest 16 pixels, instead of the nearest 4, are used. Cubic convolution results in an even smoother image than bilinear interpolation and is even more computationally intensive. Neither cubic convolution nor bilinear interpolation should be used if the original DN values are of interest.

[Figure: geometry of cubic convolution: cubic polynomials are fitted along scan lines j to j+3, then interpolated across lines.]

[Examples of the various interpolation techniques: the original image, bilinear interpolation, and cubic convolution.]
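The nearest-neighbor and bilinear schemes described above can be sketched as follows; the 2×2 test image is hypothetical, and (x, y) are fractional image coordinates obtained from the inverse mapping:

```python
import numpy as np

def nearest_neighbor(image, x, y):
    """Value of the input pixel whose center is nearest (x, y):
    preserves the original DN values."""
    return image[int(round(y)), int(round(x))]

def bilinear(image, x, y):
    """Three linear interpolations over the four surrounding pixels:
    two along the scan lines, then one between those results."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    top = (1 - dx) * image[y0, x0] + dx * image[y0, x0 + 1]            # line y0
    bottom = (1 - dx) * image[y0 + 1, x0] + dx * image[y0 + 1, x0 + 1] # line y0+1
    return (1 - dy) * top + dy * bottom                                # across lines

img = np.array([[10.0, 20.0], [30.0, 40.0]])
```

At the center of the four pixels, bilinear interpolation returns their mean, a value not present in the input, which is why nearest neighbor is preferred when the original DNs must survive for classification.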