DIG IMAGE PROC & ANALY (GEOG 661)
This 52-page set of class notes was uploaded by Jamarcus Ritchie V on Wednesday, October 21, 2015. The notes belong to GEOG 661 at Texas A&M University, taught by Staff in Fall.
Geometric Enhancement Using Image Domain Techniques

Introduction
Image-domain (convolution) filtering passes a template, or kernel, over an image and computes each output pixel as a weighted sum of the input pixels under the template. Kernels can be any size in the x and y directions and can take various shapes; a 3 x 3 kernel is the most common example. The output value is computed mathematically as follows:

output(i,j) = Σ_{m,n} input(m,n) × template(m,n)

In the equation above, the template simply refers to the weight of each pixel in the template. An example of such weights for a low pass filter might be:

1 1 1
1 1 1
1 1 1

Smoothing Operations (Low Pass Filters)
Images will contain random noise that is superimposed on the DN values. In some instances it can be quite useful to damp out this random component, and this is done using low pass filters. Be aware that using low pass filters carries a penalty: some frequency information will be lost.

Mean Filter
The most common low pass filter is the average (mean) filter, which is mathematically

template(m,n) = 1/(MN)

which for our 3 x 3 template would simply be

1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9

Now let us look at a 1-D case to show the effect of the mean filter. [Figure: the original 1-D function (DN vs. pixel), adapted from Richards and Jia (1999), and the same function after smoothing with a 3 x 1 window (dashed blue line).] The filter has two effects. The first is our intended smoothing of the high frequency variation; the second is an unintentional degradation of the edge between the groups of DN values.

A way to combat the degradation of the edges is to use an edge-preserving smoothing filter. The simplest of these is a smoothing filter with a threshold: if the difference between the original and smoothed DN values is less than a certain threshold, the smoothed value is used; otherwise the original value is retained. This requires some a priori estimation of the threshold value. [Figure: results of the thresholded smoothing kernel shown in green, with the threshold set at 1 DN value; note the preserved edge versus the degraded edge produced by plain smoothing.]
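As a sketch of the two smoothing operations described above (the 3 x 3 mean filter and its thresholded, edge-preserving variant), assuming DN values in a NumPy array; the function names are illustrative, not from any particular package:

```python
import numpy as np

def mean_filter(img, size=3):
    """Smooth with a size x size averaging kernel (all weights 1/(M*N))."""
    pad = size // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].mean()
    return out

def edge_preserving_smooth(img, size=3, threshold=1.0):
    """Use the smoothed DN only where it differs from the original
    by less than the threshold; otherwise retain the original DN."""
    smoothed = mean_filter(img, size)
    return np.where(np.abs(img - smoothed) < threshold, smoothed, img)

# A flat region (DN 10) with a sharp edge to DN 50 in the last column
dn = np.array([[10, 10, 10, 50],
               [10, 10, 10, 50],
               [10, 10, 10, 50],
               [10, 10, 10, 50]], dtype=float)
print(mean_filter(dn)[0, 0])             # interior of the flat region stays 10
print(edge_preserving_smooth(dn)[0, 3])  # edge pixel is retained at 50
```

With the threshold at 1 DN, pixels near the step edge differ from their smoothed values by far more than 1, so the original DNs survive, which is exactly the edge-preserving behavior described above.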
Median Filter
Another common method to smooth remotely sensed images is the median filter. A median filter also works well where spurious data values exist that would negatively impact a mean, and it works well with radar data, which often have a speckled appearance. [Figure: a median filter applied to our 1-D example.]

Now we will look at the effects of these three filters on 2-dimensional images.

Edge Detection and Enhancement
There are essentially three computationally efficient methods by which edges can be detected in remote sensing images. As we will see, these techniques are also useful for general sharpening of the image:
1. application of an edge detection template
2. calculation of spatial derivatives
3. subtracting a smoothed image from its original

Linear Edge Detection
One method of detecting edges is to apply a directional first-difference algorithm, which approximates the first derivative between two adjacent pixels as the difference between them:

Vertical: DN'(i,j) = DN(i,j) − DN(i,j+1)
Horizontal: DN'(i,j) = DN(i,j) − DN(i−1,j)
NE Diagonal: DN'(i,j) = DN(i,j) − DN(i+1,j+1)
SE Diagonal: DN'(i,j) = DN(i,j) − DN(i−1,j+1)

Edge enhancement can also be accomplished via a convolution technique. For instance, here is a kernel that detects vertical edges in an image:

−1 0 1
−1 0 1
−1 0 1

[Figure: a simple 11 x 11 test image and the result of applying the above template to it.] Here is a template for detecting horizontal edges:

−1 −1 −1
 0  0  0
 1  1  1

[Figure: the resulting image.] Here is a template for detecting diagonal edges (which ones?):

 1  1  0
 1  0 −1
 0 −1 −1

[Figure: the resulting image.] There are many other filters that will accomplish similar things, and I suggest that you read both Jensen and Richards and Jia to examine these different examples.

Spatial Derivative Techniques
Another way of attempting to locate edges in remotely sensed images is to look at the first derivative in x and y. If an image is a continuous function in x and y, then it has a vector gradient that can be defined as

∇DN(x,y) = ( ∂DN(x,y)/∂x , ∂DN(x,y)/∂y )

The direction of the gradient is the direction of maximum upward slope, and its amplitude is the slope value itself. In edge detection we are interested only in the magnitude, which we decompose into two components:

∇₁ = ∂DN(x,y)/∂x  and  ∇₂ = ∂DN(x,y)/∂y

In image processing our function is not continuous but discrete, so we must develop techniques based on discrete approximations to the continuous derivatives.

The Roberts Operator
In the Roberts operator the continuous derivatives are replaced by differences:

∇₁ = DN(i,j) − DN(i+1,j+1)
∇₂ = DN(i+1,j) − DN(i,j+1)

These are the discrete components of the vector derivative at the point (i+1/2, j+1/2), taken in the diagonal directions. [Figure: the two Roberts gradient images for our simple test image.]

The Sobel Operator
A better edge estimator than the Roberts operator is the Sobel operator, which computes a discrete gradient in the horizontal and vertical directions at the pixel location (i,j). The cost is that the operator is more complex. In template notation the Sobel operator is equivalent to the simultaneous application of the templates

−1 −2 −1        −1 0 1
 0  0  0   and  −2 0 2
 1  2  1        −1 0 1

[Figure: the Sobel result for our simple test image.]

In addition to simply detecting edges, we might be interested in improving the high-frequency detail in an image, including edges, lines, and points. This enhancement is often referred to as sharpening. A very efficient way to accomplish this is quite simple: subtract a smoothed image from the original. Let us consider why this works. The original image contains all the information possible for that image, while the smoothed image contains information only about lower spatial frequencies. Subtracting the smoothed image from the original will therefore leave only the high frequency components of the image.
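The Sobel gradient described above can be sketched as follows (illustrative NumPy code, not from any particular package; borders are simply left at zero):

```python
import numpy as np

# Sobel templates: horizontal-edge and vertical-edge responses
GY = np.array([[-1, -2, -1],
               [ 0,  0,  0],
               [ 1,  2,  1]], dtype=float)
GX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)

def convolve3(img, kernel):
    """Apply a 3x3 template; border pixels are left at zero for brevity."""
    out = np.zeros_like(img, dtype=float)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.sum(img[i-1:i+2, j-1:j+2] * kernel)
    return out

def sobel_magnitude(img):
    """Gradient magnitude sqrt(g1^2 + g2^2) from the two Sobel templates."""
    g1 = convolve3(img, GX)
    g2 = convolve3(img, GY)
    return np.sqrt(g1**2 + g2**2)

# A vertical edge: DN steps from 0 to 10 between columns 1 and 2
img = np.array([[0, 0, 10, 10]] * 4, dtype=float)
mag = sobel_magnitude(img)
print(mag[1, 1])  # strong response on the edge
```

On the step edge the horizontal template responds strongly while the vertical one is zero, so the magnitude image highlights exactly the vertical edge, as described above.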
Laplacian Operator
Another template operator that can be used to highlight high frequency information is the Laplacian. Unlike the Roberts and Sobel operators, the Laplacian is a second-order derivative, i.e. a measure of the change in slope, or curvature. It is also insensitive to direction and invariant to rotation. There exists a family of Laplacian template operators, but here is one of the most common:

 0  1  0
 1 −4  1
 0  1  0

In some instances it may be desirable to subtract the Laplacian edges from the original image. This can be accomplished using the following filter:

 1  1  1
 1 −7  1
 1  1  1

By now you should have a good working knowledge of filter operations.

Physical Basis for Environmental Remote Sensing

Introduction
Remote sensing refers to the acquisition of information about objects without physical contact between the object and the sensor. In terrestrial remote sensing this is accomplished via propagation of energy from an energy source to the target and then to the sensor. Electromagnetic radiation is of primary interest, though acoustical techniques are commonly used in marine applications, and the propagation of seismic energy, from both natural and artificial sources, is a basis for remote sensing of the solid earth. This discussion will be limited to electromagnetic radiation.

Fundamental Equations
All objects with absolute temperatures above 0 K emit radiation. All radiation propagates at a constant speed through a vacuum; this is the speed of light, c, equal to 3 × 10⁸ m/sec:

c = νλ

where ν is the frequency and λ is the wavelength. Studying the particle nature of electromagnetic radiation, it was determined that the energy of an individual photon is quantized, that is, it can take only specific values, often referred to as states. The energy of a photon is

E_p = nhν = nhc/λ

where n is an integer (1, 2, 3, ...) and h is Planck's constant (6.626 × 10⁻³⁴ J sec). From this equation it is possible to see that the energy of a photon is inversely related to its wavelength. In 1900 Max Planck determined the spectral exitance S_λ of energy emitted from an object with temperature above 0 K:

S_λ = 2πc²h / [λ⁵ (e^(ch/λkT) − 1)]
where k is the Boltzmann constant (1.3805 × 10⁻²³ J K⁻¹). [Figure: Planck curves of spectral exitance for several temperatures.]

By differentiating Planck's function with respect to wavelength and setting the derivative to zero, we find the wavelength of maximum exitance for a given temperature. This is Wien's displacement law:

λ_max = A / T

where A = 2898 μm K. For example, the sun has its peak exitance at a wavelength of 0.475 μm (475 × 10⁻⁹ m), in the middle blue portion of the visible spectrum, corresponding to a radiant temperature of approximately 6100 K. A more familiar example is the standard incandescent light bulb: the temperature of the filament is roughly 2900 K, so by Wien's law its peak emission falls near 1 μm, in the near-infrared, which is why an incandescent bulb emits much more energy as heat than as visible light.

To determine how much total energy an object emits, we integrate Planck's function over all wavelengths. The result is the Stefan-Boltzmann law:

S = σT⁴

where σ is the Stefan-Boltzmann constant, 5.67 × 10⁻⁸ W m⁻² K⁻⁴. Until now we have treated surfaces that emit radiation perfectly; such objects are known as blackbodies. Most natural objects are not blackbodies: they emit more energy at particular wavelengths than at others, just as their reflectance varies as a function of wavelength. The Stefan-Boltzmann relationship for such graybodies is

S = εσT⁴

where ε is the emissivity.

Photon Interaction
When a photon comes into contact with matter there are three possible fates: it can be absorbed (A), reflected (R), or transmitted (T), and

A + R + T = 1

As shown in the figure below, for a body at a given temperature emission equals absorption (E = A), which is known as Kirchhoff's law. [Figure: Kirchhoff's law for a blackbody (ε = 1) and a graybody (ε < 1); at the same temperature, absorption equals emission, with emission = εσT⁴.]

Solar Radiation
Let us get a little more practical. We all know that the sun emits energy, and we now know that the amount of energy it emits is a function of its temperature. The amount of energy the earth receives at the top of the atmosphere from the sun is known as the solar constant. This energy is the major source of energy running the earth system; radioactive decay within the earth is the other source, but it is of a much smaller magnitude.
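A quick numeric sketch of the blackbody relations above (Wien's displacement law and the Stefan-Boltzmann law), in plain Python with the constants given in the notes:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
A = 2898.0        # Wien's displacement constant, um K

def peak_wavelength_um(T):
    """Wien's displacement law: wavelength of maximum exitance, in um."""
    return A / T

def total_exitance(T, emissivity=1.0):
    """Stefan-Boltzmann law S = e*sigma*T^4, in W m^-2 (e = 1 for a blackbody)."""
    return emissivity * SIGMA * T**4

print(round(peak_wavelength_um(6100), 3))  # sun: ~0.475 um (blue)
print(round(peak_wavelength_um(2900), 2))  # incandescent filament: ~1 um (near-IR)
print(f"{total_exitance(5770):.3g}")       # solar surface: ~6.28e7 W m^-2
```

The 5770 K exitance computed here is the same value used in the solar radiation discussion that follows.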
In terms of radiative emission, let us consider the sun, with an equivalent blackbody temperature of 5770 K. According to the Stefan-Boltzmann equation, the amount of energy emitted from the sun is

E = 5.67 × 10⁻⁸ W m⁻² K⁻⁴ × (5770 K)⁴ ≈ 6.28 × 10⁷ W m⁻²

Note that this is the amount of energy emitted from every square meter of the sun. To find the total amount of energy emitted, we multiply this value by the total surface area of the sun, 4πr_sun², where the radius of the sun is 7 × 10⁸ m (about 6.16 × 10¹⁸ m²):

E_total = E × 4πr_sun² ≈ 3.9 × 10²⁶ W

This energy propagates through space; however, as the distance from the sun increases, the energy incident on a square-meter area drops off proportionally to the square of the distance. You might remember this inverse-square relationship from high school physics. The mean distance from the sun to the earth is 1.5 × 10¹¹ m, so by the time the energy reaches the earth, the amount incident on a square-meter area is

E_earth = E_total / (4π d_sun-earth²) ≈ 1367 W m⁻²

Examination of the figure below demonstrates how accurately this approximates the solar constant as measured by the ERBS instrument. [Figure: ERBS Solar Irradiance Measurements, Solar Monitor Channel, Jan '85 to Dec '97; solar irradiance in W m⁻² by year, with a 1367 W m⁻² reference level and a running mean.]

The Electromagnetic Spectrum
The electromagnetic spectrum is continuous, from short wavelengths (< 10⁻⁹ m) to wavelengths greater than 1 m. [Figure: the electromagnetic spectrum, with wavelength regions from gamma rays through radio.] The transparency of Earth's atmosphere varies considerably as a function of wavelength. The table below illustrates the general regions of the electromagnetic spectrum; be aware that these are general regions and specific usage may vary.

Terminology
In remote sensing, both wavelength and frequency are commonly used in the literature. Historically, in research in the microwave portion of the spectrum frequency is commonly used (though wavelength is also employed), while wavelength is almost always used for the other regions of the spectrum.
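The inverse-square dilution used in the solar-constant calculation above can be checked numerically in plain Python, using the values from the notes:

```python
import math

SIGMA = 5.67e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
T_SUN = 5770.0        # equivalent blackbody temperature of the sun, K
R_SUN = 7e8           # radius of the sun, m
D_SUN_EARTH = 1.5e11  # mean sun-earth distance, m

# Exitance per square meter of the solar surface (~6.28e7 W m^-2)
exitance = SIGMA * T_SUN**4
# Total emitted power: exitance times the sun's surface area
total_power = exitance * 4 * math.pi * R_SUN**2
# Spread that power over a sphere with the radius of the earth's orbit
solar_constant = total_power / (4 * math.pi * D_SUN_EARTH**2)
print(round(solar_constant))  # ~1369 W m^-2, close to the measured 1367
```

Note the 4π factors cancel, so the solar constant is simply the surface exitance scaled by (r_sun / d_sun-earth)².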
Because of the short wavelengths commonly used in remote sensing, the standard SI unit of length (the meter) is seldom used directly. Instead, depending on the region of the electromagnetic spectrum of interest, different length units, all based on the meter, are used. The table below lists two of the most commonly used units.

Unit             Definition
nanometer (nm)   10⁻⁹ m
micrometer (μm)  10⁻⁶ m

Wavelength-Frequency Relationships
In remote sensing it is occasionally necessary to transform between wavelength and frequency units. This is accomplished via the equation

c = νλ

where c is the speed of light (3 × 10⁸ m sec⁻¹). For example, suppose I am working with radar data with a frequency of 1500 MHz and I wish to know its wavelength. The wavelength is simply

λ = c/ν = (3 × 10⁸) / (1500 × 10⁶) = 0.2 m, or 20.0 cm

Credits: Material for this lecture was drawn from Shannon Crum's lecture "Energy Sources and Radiometric Principles," which is part of the Remote Sensing Core Curriculum.

Thermal Remote Sensing

Introduction and theoretical background
"Heat energy is the kinetic energy of the random motion of particles of matter, and the concentration of heat energy in a substance is measured by its temperature" (Manual of Remote Sensing, p. 70). All bodies above 0 K emit radiant energy. An ideal thermal emitter is known as a blackbody: it transforms heat energy to radiant energy at the maximum conversion rate permitted by thermodynamic laws. Max Planck developed the formula relating heat to spectral exitance, which you have seen before. Just in case you missed it, Planck's function is given as follows:

S_λ = 2πc²h / [λ⁵ (e^(ch/λkT) − 1)]

where k is the Boltzmann constant (1.3805 × 10⁻²³ J K⁻¹) and h is Planck's constant (6.626 × 10⁻³⁴ J sec). The total exitance over all wavelengths is given by the Stefan-Boltzmann law:

S = σT⁴

where σ is the Stefan-Boltzmann constant (5.67 × 10⁻⁸ W m⁻² K⁻⁴). However, natural bodies are not blackbodies; instead they emit radiant energy at a rate less than the maximum, and are thus known as graybodies.
The exact rate at which heat energy is converted to radiant energy is an intrinsic property of the material. The capability with which a graybody emits energy is commonly expressed as its spectral emissivity, usually designated ε_λ. This is simply the ratio of the spectral exitance of a material to that of a blackbody at the same temperature:

ε_λ = M_λ(material, T) / M_λ(blackbody, T)

For many applications, especially energy-balance studies, emissivity is often not considered to vary by wavelength, and a single emissivity ε is used. Thus, for most natural materials, the Stefan-Boltzmann equation is commonly given as

S = εσT⁴

For most terrestrial surfaces 0.7 ≤ ε ≤ 1.0, and most natural surfaces have emissivities greater than 0.85, the exception being desert environments. A few examples of the spectral emissivity of rock types are shown in the following two figures. [Figures: spectral emissivity of arkosic sandstone, limestone, and siltstone over roughly 3-6 μm and 8-14 μm; spectra from Salisbury and D'Aria (1992, 1994).]

It is important to recognize that two factors, the temperature and the emissivity of the surface of interest, control the thermal energy measured at a satellite. It is also important to recognize that temperature is not an intrinsic property of the surface: it varies with the irradiance history and meteorological conditions. Emissivity, however, is an intrinsic property of materials, and the radiance measured at the sensor is the product of both temperature and emissivity. For remote sensing purposes this gives rise to the concept of brightness temperature, the temperature a blackbody would need in order to produce the observed radiance. Because a single radiance measurement contains two unknowns, temperature and emissivity, separating them requires additional assumptions or measurements; temperature-emissivity separation (TES) algorithms address this problem.
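A small sketch of the graybody relation above, in plain Python; the 0.95 emissivity is an illustrative assumption, chosen as a typical value for a natural surface:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def graybody_exitance(T, emissivity):
    """Total exitance S = e*sigma*T^4 for a graybody (e <= 1)."""
    return emissivity * SIGMA * T**4

# A surface at 300 K with an assumed emissivity of 0.95 emits
# 95% of what a blackbody at the same temperature would:
bb = graybody_exitance(300.0, 1.0)
gb = graybody_exitance(300.0, 0.95)
print(round(bb, 1), round(gb, 1), round(gb / bb, 2))
```

This is exactly why a radiometer cannot tell a warm low-emissivity surface from a cooler high-emissivity one without extra information: both can produce the same exitance.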
Online Tutorials on Thermal Remote Sensing
I highly recommend the following online tutorials:
htto r thr M a m Sect9 nicktutor 971html
htto nnlln oencrnr h erln will rsccthrm rsch 8html

Applications of Thermal Imagery
- Determination of land/ocean surface temperature
- Geologic applications

Credits: Information about thermal remote sensing and most of the images presented here are from the ASTER web site, http://asterweb.jpl.nasa.gov.

Atmospheric Correction

Description of the Problem
Only a fraction of the photons coming from the sun illuminate the surface, and only a fraction of these reach the sensor: roughly 85% at 0.85 μm and 50% at 0.45 μm. Two atmospheric processes affect the radiance received at the sensor: gaseous absorption, and scattering by molecules and aerosols.

The Culprits
Ozone (O₃), Oxygen (O₂), Carbon Dioxide (CO₂), Methane (CH₄), Nitrous Oxide (N₂O), Water Vapor (H₂O).

Optical Thickness
The optical thickness τ depends on the length of the path through the atmosphere, which is set by the sun zenith angle θ₀ and the sensor view angle θᵥ. [Figure: sun-target-sensor geometry.] The total optical thickness of the absorbers is

τ_a = τ_H2O + τ_O2 + τ_O3 + τ_CO2

Transmittance
The transmittance is the percentage of radiant energy reaching the ground relative to that with no atmosphere:

T_θ = e^(−τ/cos θ)

Scattering
The interaction of photons with molecules or non-absorbing aerosols is elastic: photons are immediately re-emitted in a direction other than the incident one. There are three main types: Rayleigh, Mie, and non-selective. [Figure 2-13: types of scattering encountered in the atmosphere; the type of scattering is a function of (1) the wavelength of the incident radiant energy and (2) the size of the gas molecule, dust particle, and/or water vapor droplet encountered.]

Rayleigh Scattering
Also known as molecular scattering: interactions with molecules whose effective diameter is much smaller than the wavelength (d < 0.1λ). The amount of scattering is inversely related to the fourth power of the wavelength (1/λ⁴). It occurs in the upper 4.5 km of the atmosphere.
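The 1/λ⁴ dependence of Rayleigh scattering means short (blue) wavelengths are scattered far more strongly than long (red) ones; a quick sketch in plain Python, comparing relative amounts only:

```python
def rayleigh_relative(wavelength_um):
    """Relative amount of Rayleigh scattering, proportional to 1/lambda^4."""
    return wavelength_um ** -4

blue, red = 0.45, 0.65  # band-center wavelengths in um
ratio = rayleigh_relative(blue) / rayleigh_relative(red)
print(round(ratio, 1))  # blue light is scattered roughly 4.4x more than red
```

This wavelength dependence is why clear sky appears blue and why path radiance is largest in the shortest-wavelength bands, a point that returns in the histogram-adjustment discussion below.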
Mie Scattering
Also known as non-molecular scattering. The effective diameter of the particles is on the order of the wavelength (0.1λ < d < 10λ). It takes place in the lower 4.5 km of the atmosphere. The amount of scattering is typically greater than Rayleigh scattering, and the wavelengths affected are longer. [Figures: Rayleigh and Mie atmospheric scattering as a function of sun elevation angle for several wavelength bands (0.5-0.6 μm through 0.8-1.1 μm).]

Non-selective Scattering
All wavelengths are approximately equally affected. It occurs in the lowermost portion of the atmosphere, where the effective diameter of the particles is much greater than the wavelength (d > 10λ), for example water droplets and ice crystals.

[Figure 6-4: effects of atmospheric water vapor absorption on radiation transmitted to Earth's surface; direct transmittance vs. precipitable water (cm) for several bands.]

Absorption
Gases absorb radiation by changes of rotational, vibrational, or electronic states. Rotational transitions are weak and correspond to emission and absorption of low-energy, low-frequency photons (the microwave or far-infrared range). Vibrational transitions have greater energy, which leads to absorption spectra in the near-IR. Electronic transitions correspond to higher energies and give rise to absorption or emission bands in the visible and ultraviolet. Since these transitions occur at discrete values, the absorption coefficients vary rapidly with frequency and give rise to complex absorption spectra.

[Figure 6-3: atmospheric absorption bands (H₂O, CO₂, CO, O₃) in the ultraviolet, visible, near- and mid-infrared, and thermal infrared portions of the electromagnetic spectrum, 0-15 μm.]

[Figure: solar radiation spectrum over the region from 0.1 to 3.0 μm.]
[Figure 2-15: the absorption of the Sun's incident electromagnetic energy by various atmospheric gases; the first graphics depict the absorption characteristics of N₂O, O₂ and O₃, CO₂, and H₂O, while the final graphic depicts the cumulative result of all these gases on the energy reaching ground level.]

[Figure 6-5: combined effects of scattering and absorption on the brightness values produced by the Landsat MSS sensor system over 0.4-1.1 μm; brightness values are increased by scattering at short wavelengths and reduced by absorption at longer wavelengths.]

Back to the Problem
Some photons do not reach the target and therefore contain no useful information about it. Some photons reflected from areas outside the target are scattered into the surface-satellite path; these are not useful either.

The Whole Picture
[Figure: the various paths of satellite-received radiance, L_S = ΣL_T + L_P.] Atmospheric correction attempts to remove or minimize the path radiance L_P. The remaining scattered paths contribute to the illumination of the ground, and this diffuse component is part of the useful signal. Lastly, a fraction of the photons will be backscattered by the atmosphere to the target.

Let's look at a few of the paths. The radiance reaching the sensor from the target can be written in terms of the surface reflectance R, the atmospheric transmittances along the sun and view paths, and the direct and diffuse irradiance:

L_T = (R/π) T_θv (E₀ T_θ0 cos θ₀ + E_d)

and the total signal at the sensor is

L_S = L_T + L_P

So what are we after? That is to say, what do we want an image of?

Atmospheric Correction Approaches
- Forget about it
- Relative radiometric correction of atmospheric attenuation:
  - single-image normalization using histogram adjustment
  - multidate normalization using regression
  - multidate empirical radiometric normalization
  - multidate deterministic radiometric normalization
- Atmospheric modeling

Histogram Adjustment
[Figure 2-3: illustration of the effect of path radiance resulting from atmospheric scattering on the four band histograms (number of pixels vs. brightness value) of Landsat MSS image data.]
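Histogram (dark-object) adjustment assumes the darkest pixels in a band should be near zero, so any offset in the histogram minimum is attributed to path radiance and subtracted; a sketch with NumPy (illustrative, not any particular package's routine):

```python
import numpy as np

def dark_object_subtract(band):
    """Subtract the histogram minimum (assumed path-radiance offset),
    clipping at zero so DNs stay non-negative."""
    offset = band.min()
    return np.clip(band - offset, 0, None)

# A band whose histogram starts at DN 12 instead of 0 (a haze offset)
band = np.array([12, 15, 40, 80, 255])
print(dark_object_subtract(band))  # histogram minimum shifted back to 0
```

Because Rayleigh path radiance is strongest at short wavelengths, the offset found this way is largest in band 1 and shrinks toward the infrared bands, mirroring the histogram figure above.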
6S Input
[Example 6S radiative transfer code input deck for a Landsat TM acquisition: acquisition month, day, time, latitude, and longitude; tropical atmosphere; continental aerosol model; good visibility (10 km); target elevation; satellite simulation for TM band 1; homogeneous surface with no directional effects; surface reflectance of 0.3 as the target value.]

An Example Correction
1. Convert DN to spectral radiance:

L = L_min + ((L_max − L_min) / DN_max) × DN

where L is radiance (W m⁻² ster⁻¹ μm⁻¹), L_min is the spectral radiance at DN = 0, L_max is the spectral radiance at DN = DN_max, and DN_max is the maximum DN value.

2. Convert radiance to in-band planetary albedo:

ρ = π L d_es² / (E_sun cos θ_s)

where d_es is the earth-sun distance in astronomical units, L is the spectral radiance, E_sun is the exoatmospheric solar irradiance, and θ_s is the solar zenith angle. [Table: exoatmospheric spectral irradiance by band.]

6S Output
[Example 6S output (version 4.1): geometrical conditions for the observation, including month, day, universal time, latitude and longitude, solar zenith and azimuth angles, view zenith and azimuth angles (0.00° for a nadir view), scattering angle, and azimuth angle difference.]

Now do a haze correction.

Accuracy Assessment of Computer Classification of Remotely Sensed Images

Introduction
Once a classification map has been created, the fun is only beginning. Some methodology must be used to provide a quantitative assessment of the accuracy of a remote-sensing-derived land cover classification. Unfortunately, throughout the 1970s and 1980s accuracy assessments were not an integrated part of the classification process, and too often they remain an afterthought. It is not enough to provide a single number representing the accuracy of a remote sensing classification, as the accuracies of individual classes may deviate considerably from the overall accuracy. In addition, users of a classification need more detailed information on both errors of commission and omission to fully utilize the remote sensing classification map.
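The two-step conversion described above (DN to radiance, radiance to planetary albedo) can be sketched as follows; the calibration values in the example are illustrative placeholders, not actual TM band constants:

```python
import math

def dn_to_radiance(dn, l_min, l_max, dn_max=255):
    """L = Lmin + ((Lmax - Lmin)/DNmax) * DN, in W m^-2 sr^-1 um^-1."""
    return l_min + (l_max - l_min) / dn_max * dn

def radiance_to_albedo(radiance, esun, solar_zenith_deg, d_au=1.0):
    """In-band planetary albedo rho = pi * L * d^2 / (Esun * cos(theta_s))."""
    theta = math.radians(solar_zenith_deg)
    return math.pi * radiance * d_au**2 / (esun * math.cos(theta))

# Hypothetical calibration: Lmin = -1.5, Lmax = 152.1, Esun = 1957 W m^-2 um^-1
L = dn_to_radiance(100, -1.5, 152.1)
rho = radiance_to_albedo(L, 1957.0, solar_zenith_deg=30.0)
print(round(L, 2), round(rho, 3))
```

For real data the L_min, L_max, and E_sun values must come from the sensor's published calibration tables, band by band.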
Example
In our example we will compare the results of a classification performed for the Lakeview Marsh Wildlife Management Area, a small wetland area on the eastern edge of Lake Ontario in New York State. According to field surveys the area has nineteen different land covers, listed in Table 1 at the back of the handout. Using a portion of a non-atmospherically-corrected Landsat image acquired on July 16th, 1992 (Figure 1), a maximum likelihood classification was performed assuming equal prior probabilities. A field map (Figure 2) showing vegetation in the area provided the training data: from the field map, 10 percent of the pixels in each class were randomly selected to serve as training data (Figure 3). The resulting classification is shown in Figure 4. It should be noted that any classification scheme could replace the maximum likelihood approach used here; the accuracy assessment is similar regardless of the classification approach used. [Figures 1-4: the image, the field map, the training pixels, and the resulting classification.]

Classification Accuracy Assessment
Classification accuracy assessment is accomplished through comparison of the resulting classification maps with reference data, which itself may be incorrect. Class-by-class comparisons between the classification maps and reference data are accomplished through the use of a confusion (or error) matrix. Columns in a confusion matrix typically represent the reference data, and rows represent the classification data. The number of test sites required for accurate comparisons is discussed in detail below. In our example the test data were determined by randomly drawing a second 10% of the pixels from the field map to serve as the test sites. In reality this could be done using field checking, aerial photographs, etc. The confusion matrix for our classification is below. From the confusion matrix it is possible to compute a number of metrics that assess the accuracy of the classification.
The first of these is the overall accuracy. It is simply the sum of the correctly classified pixels (the diagonal elements) divided by the total number of samples in the comparison. For our example the overall accuracy is 0.6176, or 61.76%.

Two other metrics are traditionally calculated: the producer's and consumer's accuracies. The producer's accuracy is a measure of how well a certain area is classified; it is calculated as the number of pixels of the reference class that were correctly classified (the diagonal element for that class) divided by the total number of pixels of that reference class (the column total). It is a measure of omission errors. The consumer's (or user's) accuracy is a measure of the reliability of the classification, i.e. the probability that a pixel on the map actually represents that category on the ground. It is calculated by dividing the number of pixels correctly classified (the diagonal element) by the total number of pixels assigned to that class (the row total). It is a measure of commission errors. The class producer's and consumer's accuracies are illustrated in our example below.

There is one commonly used overall measure of classification accuracy called the Kappa coefficient, K-hat. It can be defined in terms of the confusion matrix:

K̂ = ( N Σ_{k=1..r} x_kk − Σ_{k=1..r} x_{k+} x_{+k} ) / ( N² − Σ_{k=1..r} x_{k+} x_{+k} )

where r is the number of rows in the matrix, x_kk is the number of observations in row k and column k, x_{k+} and x_{+k} are the marginal totals for row k and column k respectively, and N is the total number of observations. For our example the Kappa coefficient is 0.5367. The Kappa coefficient, unlike the overall accuracy, includes errors of omission and commission. Computation of the Kappa coefficient may be used to determine whether the results in the error matrix are significantly better than a random result (K̂ = 0), or to compare whether two similar matrices are significantly different.

Confusion Matrix (rows = classification data, columns = reference data):

Class   1   2    3    4   5   6   7   8   9   10  11   12  13  14  15  16  17    18 | Total
  1     5   0   10   64   0   0   2   0   0    2   2    0   0   0   0   1   2     0 |   88
  2     0  13   16   33   0   0   2   0   0    4   3    1   0   1   0   0   1     4 |   78
  3     2   3  179    9   0   0   0   0   0    2   0    0   0   0   0   1   0     8 |  204
  4     1   1    0  340   1   0   0   0   0    6   6    2   2   5   0   2  14     1 |  381
  5     0   0    1   61  15   0  10   2   0    7  17    5   0  15   2   1  24     0 |  160
  6     1   0    0   94   2   6   2   0   0    6  10    4   0   7   0   3  29     0 |  164
  7     0   1    1   17   2   0  14   1   0    0   3    9   0  10   0   0   6     0 |   64
  8     0   0    0    4   0   0   5   6   0    0   5    3   0   8   0   0   8     0 |   39
  9     0   1    0    2   0   0   0   0  21   23   1    1   0   1   0   0   1    22 |   73
 10     0   1    0    5   0   2   0   0   3   51   9    3   0   1   0   0  10     1 |   86
 11     0   0    1    8   1   0   1   0   0    1  13    1   0  10   1   1  44     0 |   82
 12     0   0    0   18   1   2   8   0   0   18   1  105   3  89   1   1   5     0 |  252
 13     1   0    0   41   3   1   2   0   0    6   8    4   2   8   0   1   7     0 |   84
 14     0   0    1   18   1   0   1   0   0    7   2    5   0  10   0   0   5     0 |   50
 15     0   0    0   15   2   3   6   1   0    1  12    6   1   7   3   0  11     0 |   68
 16     1   2    5   24   0   0  10   0   0    0   0    8   1   2   0  10   2     0 |   65
 17     0   0    2   17   2   0   2   0   0   12   5    0   0   8   1   1  42     0 |   92
 18     0   2    7    1   0   0   0   0   0    0   0    0   0   0   0   0   0  1111 | 1121
Total  11  24  223  771  30  14  65  10  24  146  97  157   9 182   8  22 211  1147 | 3151

Producer's accuracy (%): 45.45 54.17 80.27 44.10 50.00 42.86 21.54 60.00 87.50 34.93 13.40 66.88 22.22 5.49 37.50 45.45 19.91 96.86
Omission error (%): 54.55 45.83 19.73 55.90 50.00 57.14 78.46 40.00 12.50 65.07 86.60 33.12 77.78 94.51 62.50 54.55 80.09 3.14
User's (consumer's) accuracy (%): 5.68 16.67 87.75 89.24 9.38 3.66 21.88 15.38 28.77 59.30 15.85 41.67 2.38 20.00 4.41 15.38 45.65 99.11
Commission error (%): 94.32 83.33 12.25 10.76 90.63 96.34 78.13 84.62 71.23 40.70 84.15 58.33 97.62 80.00 95.59 84.62 54.35 0.89

Number of Training Pixels Required
Sufficient training samples must be provided to allow reasonable estimates of the elements of the mean vector and the covariance matrix to be determined. For an N-dimensional multispectral space, at least N+1 samples are required to keep the covariance matrix from being singular; should that happen, its inverse, which is needed in the discriminant functions, cannot be determined. While it can generally be stated that the more training pixels the better, it is especially important to have as many training pixels as possible because, as the dimensionality of the pixel vector increases (e.g. more bands), there is a greater chance that some individual dimensions are poorly represented. Swain and Davis (1978) recommend a practical minimum of 10N samples per spectral class, with 100N being desirable if possible.
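The overall, producer's, user's, and Kappa statistics described above can be computed from any confusion matrix; here is a NumPy sketch using a small 3-class matrix with illustrative numbers:

```python
import numpy as np

def accuracy_metrics(cm):
    """cm: square confusion matrix, rows = classification, cols = reference."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    diag = np.diag(cm)
    overall = diag.sum() / n                # fraction on the diagonal
    producers = diag / cm.sum(axis=0)       # 1 - omission error, per class
    users = diag / cm.sum(axis=1)           # 1 - commission error, per class
    chance = (cm.sum(axis=1) * cm.sum(axis=0)).sum()
    kappa = (n * diag.sum() - chance) / (n**2 - chance)
    return overall, producers, users, kappa

cm = [[50,  5,  5],
      [10, 40, 10],
      [ 5,  5, 70]]
overall, prod, users, kappa = accuracy_metrics(cm)
print(round(overall, 3), round(kappa, 3))
```

Note that kappa comes out below the overall accuracy, since it discounts the agreement expected by chance from the marginal totals.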
Number of Test Pixels Required
Determining the actual number of pixels on the ground that need to be sampled to assess the accuracy of individual categories in classification maps is difficult to establish theoretically. Most approaches are based on the binomial distribution, or a normal approximation to it (each test pixel being correct or incorrect, i.e. 1 or 0). The probability of x pixels being correct in a random sample of n pixels drawn from a class with an accuracy of θ is given by the binomial probability

p(x) = C(n, x) θˣ (1 − θ)ⁿ⁻ˣ,  x = 0, 1, ..., n

where θ is the map accuracy for the class. Van Genderen et al. (1978) determined the minimum sample size by considering that if the number of samples is too small, there will be a finite chance that all the pixels could be labeled correctly, which would result in an unreliable estimate of map accuracy. Such a situation occurs when x = n, giving the probability that all pixels are correct:

p(n) = θⁿ

They noted that p(n) is unacceptably high if it is greater than 0.05, i.e. if more than 5% of the time there is a chance of selecting a perfect sample from a population with an accuracy of θ. Some of their results are shown in the table below.

Required Classification Accuracy   Sample Size
0.95                               60
0.90                               30
0.85                               20
0.80                               15
0.60                               10
0.50                               7

Rosenfield et al. (1982) also studied the number of test pixels required. Their approach is based on determining the number of samples needed to ensure that the sample mean (the number of correct classifications divided by the number of samples per category) is within 10% of the population mean (the classification accuracy) at a 95% confidence level. Like Van Genderen et al., they assume a binomial distribution. Some of their results are shown in the following table.

Required Classification Accuracy   Sample Size
0.85                               19
0.80                               30
0.60                               60
0.50                               60

A third binomial-based approach is that of Fitzpatrick-Lins (1981), who suggests the number of samples N can be calculated as

N = Z² p q / E²

where p is the expected accuracy (in percent), q = 100 − p, E is the allowable error, and Z is the standard normal deviate (1.96 for the 95% two-sided confidence level).
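Fitzpatrick-Lins' sample-size formula can be sketched in plain Python; note that the tabulated sample sizes that follow are reproduced by rounding Z up to 2 (Z² = 4) rather than using 1.96:

```python
def samples_needed(p, e=5.0, z=1.96):
    """N = Z^2 * p * q / E^2, with p and E expressed in percent, q = 100 - p."""
    q = 100.0 - p
    return z**2 * p * q / e**2

print(round(samples_needed(85)))          # ~196 with Z = 1.96
print(round(samples_needed(85, z=2.0)))   # 204, matching the table below
```

The required sample size grows as p approaches 50%, where the binomial variance p*q is largest; that is why lower expected accuracies demand more test pixels.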
error (all expressed as percentages), and Z = 2, from the standard deviate of 1.96 for the 95% two-sided confidence level. The table below indicates the number of samples for several required classification accuracies, assuming an allowable error of 5%.

Required Classification Accuracy   Sample Size
0.95                               76
0.90                               144
0.85                               204
0.80                               256
0.60                               384
0.50                               400

Table 1. Field Classes

Number   Class
1        Marsh Headwater Stream
2        Main Channel Stream
3        Eutrophic Pond
4        Deep Emergent Marsh
5        Shallow Emergent Marsh
6        Shrub Swamp
7        Red Maple Hardwood Swamp
8        Impounded Marsh
9        Sand Beach
10       Great Lakes Dunes
11       Successional Old Field
12       Beech-Maple Mesic Forest
13       Successional Red Cedar Woodland
14       Successional Northern Hardwoods
15       Successional Shrublands
16       Pine Plantation
17       Cropland/Row Crops
18       Lake Ontario
         Unclassified

Principal Component Analysis

Introduction

Principal Component Analysis (PCA) is a method for deriving a set of new images from a set of original multispectral or multitemporal images. The idea is that these new images may be more easily interpreted than the originals. PCA is also commonly used to reduce the dimensionality of the data.

Basics

At a minimum you should read the treatment of PCA in Jensen; if you are more interested in the matrix algebra behind the actual transformation, consult Richards and Jia. In essence, PCA depends on either the covariance or the correlation matrix developed from the original images. PCA accomplished using the covariance matrix is referred to as unstandardized PCA, while PCA accomplished using the correlation matrix is referred to as standardized PCA. An example of a covariance matrix for our Bolivian image is shown below.

[7 × 7 band covariance matrix for the Bolivian image — the values were garbled in extraction and are not reproduced here.]

Here is the correlation
matrix for the same image.

[7 × 7 band correlation matrix — garbled in extraction and not reproduced here.]

Using matrix algebra (not presented here for brevity), a set of eigenvalues and eigenvectors can be computed from either matrix. The eigenvalues are the variances of the corresponding principal components. For our Bolivian example, the eigenvalues are shown below.

Covariance eigenvalues: 3898652, 1042393, 412746, 85431, 66675, 10790, 3927

The total variance for all p principal components is 5520614. By dividing each covariance eigenvalue by the total variance, we arrive at the percentage of the variance in all bands of the original image explained by each principal component:

0.706, 0.189, 0.075, 0.015, 0.012, 0.002, 0.0007

We could repeat the same process using the correlation rather than the covariance matrix.

Correlation eigenvalues: 4.399, 1.588, 0.659, 0.263, 0.071, 0.013, 0.006

Total correlation value: 7.0. Percentage of variance explained: 0.629, 0.227, 0.094, 0.038, 0.010, 0.002, 0.0009

The eigenvectors are used to determine each principal component from the original bands. [The tables of covariance and correlation eigenvectors were garbled in extraction and are not reproduced here.]

Determining the value of a principal component for each pixel in an image using the eigenvectors is quite easy: simply multiply the DN (or reflectance) vector by the associated eigenvector. For instance, for PC2:

PC2 = −0.307 DN(band 1) − 0.297 DN(band 2) − 0.192 DN(band 3) − 0.053 DN(band 4) + 0.402 DN(band 5) + 0.708 DN(band 6) + 0.339 DN(band 7)

However, what does each of these principal components represent? That is entirely dependent on the information content of the images. Knowing the band variances, eigenvalues, and eigenvectors, however, it is possible to determine the correlations, or factor loadings, between each principal component and each band. Jensen has a complete description of how factor loadings are determined:

R(band, pc) = eigenvector(band, pc) × SQRT(eigenvalue(pc)) / SQRT(variance of band in covariance matrix)

[Table of factor loadings for PC1 — garbled in extraction and not reproduced here.]

Kauth-Thomas or Tasseled-Cap Transformation

Once you have a basic
understanding of principal component analysis, it is possible to explore one of the most important vegetation indexes: the Tasseled-Cap Transformation developed by Kauth and Thomas (1976). The Kauth-Thomas transformation has been extensively studied and is widely used in agricultural research. The technique applies a Gram-Schmidt sequential orthogonal transformation to the original four-channel MSS imagery to produce a set of images that can be directly interpreted in terms of vegetation dynamics. The Kauth-Thomas technique differs from PCA in that, while PCA places an a priori order on the principal directions in the data, the Gram-Schmidt approach allows the user to choose the order in which the calculations are done, based on a physical interpretation of the images. I think the Kauth-Thomas Transformation is a great example of how an understanding of physical or biological processes can be used to develop an appropriate remote sensing approach to enhance an image.

[The original Tasseled-Cap transformation matrix for Landsat MSS, and the Tasseled-Cap transformation coefficients for TM (Crist and Cicone, 1984), were garbled in extraction and are not reproduced here.]

Here is the MSS band 2 (Band 5) and MSS band 3 (Band 6) combination, showing the movement of crops through this multispectral space.

[Figure: crop trajectories through emergence, maturation, and senescence in MSS band 3 (Band 6) vs. MSS band 2 (Band 5) spectral space.]

For the same crops, here is the MSS band 1 (Band 4) and MSS band 2 (Band 5) spectral space; this combination is good for delineating the yellowing of senescent crops.

[Figure: yellowing of senescent crops in MSS band 2 (Band 5) vs. MSS band 1 (Band 4) spectral space.]

[Figure: the Tasseled Cap in a three-dimensional multispectral space, from Richards and Jia (1999), showing the "fold of green stuff," the "badge of trees," and the "plane of soils."]
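The PCA mechanics described above (eigen-decomposition of the band covariance matrix, percent variance explained, projection of each pixel's DN vector onto the eigenvectors, and factor loadings) can be sketched with NumPy. This is a minimal sketch: the data are random stand-ins for a seven-band image, not the Bolivian image, and all variable names are my own. `np.linalg.eigh` is used because a covariance matrix is symmetric.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in image: 1000 pixels x 7 correlated "DN" bands
# (a shared base signal plus per-band noise).
base = rng.normal(50.0, 10.0, size=(1000, 1))
dn = base + rng.normal(0.0, 3.0, size=(1000, 7))

# Unstandardized PCA works from the band covariance matrix.
cov = np.cov(dn, rowvar=False)          # 7 x 7, symmetric

# eigh returns eigenvalues in ascending order; reorder descending
# so that PC1 explains the most variance.
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Eigenvalues are the PC variances; normalize by the total variance
# to get the fraction of variance explained by each component.
pct_var = eigvals / eigvals.sum()

# Project every pixel onto the eigenvectors: each PC value is a
# weighted sum of the original band DNs (as in the PC2 example above).
pcs = dn @ eigvecs                      # 1000 x 7

# Factor loading of band b on component p (formula as in Jensen):
# R = eigenvector(b, p) * sqrt(eigenvalue(p)) / sqrt(variance of band b)
loadings = eigvecs * np.sqrt(eigvals) / np.sqrt(np.diag(cov))[:, None]

print(np.round(pct_var, 3))
```

Because the stand-in bands share one strong common signal, PC1 captures most of the variance, mirroring the Bolivian example where PC1 explained about 71% of the covariance-based total.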