Advanced Photogrammetry SURE 440
These 445 pages of class notes were uploaded by Mrs. April Bechtelar on Monday, October 12, 2015. The notes belong to SURE 440 at Ferris State University, taught by Robert Burtch in Fall 2015.
PRINCIPLES OF AIRBORNE GPS
Center for Photogrammetric Training, Ferris State University

INTRODUCTION

The utilization of the Global Positioning System (GPS) in photogrammetric mapping began almost from the inception of this technology. Initially, GPS offered a major improvement in the control needed for mapping. It provided coordinate values that were of higher quality and more reliable than those obtained using conventional field surveying techniques. At the same time, the cost and labor required for that control were lower than for conventional surveying. Experience from using GPS control showed several improvements (Salsig and Grissim, 1995):

a. There was a better fit between the control and the aerotriangulation results, particularly for large-area projects.

b. Surveyors were not concerned with issues like intervisibility between control points; therefore, the photogrammetrist often received the control points in locations advantageous to them instead of the locations determined from the execution of a conventional field survey.

c. Visibility of the ground control point to the aerial camera is always important. Fortunately, those points that are visible using the GPS receivers are also free of major obstructions that would prevent the image from appearing in the photography. This led to a better recovery rate for the control.

Unfortunately, the window during which GPS observations could be made was not always at the most desirable time of day. This changed as the satellite constellation began to reach its current operational status. With these increasing observation windows came the idea of placing a GPS receiver within the mapping aircraft. Airborne GPS is now a practical and operational technology that can be used to enhance the efficiency of photogrammetry, although Abdullah et al. (2000) report that only about 30% of photogrammetry companies are using this technology at this time. This does, however, account for about 40% of the projects undertaken by photogrammetric firms. These figures are based on anecdotal information.
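The sampling rates mentioned above translate directly into how far the aircraft travels between recorded GPS positions, which is why the camera position at the instant of exposure must later be interpolated between epochs. A minimal sketch (the aircraft speed is illustrative, not from the source):

```python
def meters_between_epochs(speed_kmh: float, sample_interval_s: float) -> float:
    """Distance flown by the aircraft between consecutive GPS epochs."""
    return speed_kmh / 3.6 * sample_interval_s  # km/h -> m/s, times interval

# A survey aircraft at 200 km/h (about 56 m/s):
print(round(meters_between_epochs(200, 1.0), 1))   # 1.0 s epoch -> 55.6 m
print(round(meters_between_epochs(200, 0.5), 1))   # 0.5 s epoch -> 27.8 m
```

Even at the faster 0.5-second rate, tens of meters separate successive positions, so the exposure station coordinates cannot simply be read from the nearest epoch.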
Airborne GPS can be used for:

> precise navigation during the photo flight,
> centered or pinpoint photography, and
> determination of the coordinates of the nodal point for aerial triangulation.

To achieve the first two applications, the user requires real-time differential GPS positioning (Habib and Novak, 1994). Because the accuracy of position for navigation and centered photography ranges from one to five meters, C/A-code or P-code pseudorange is all that is required. The important capability is the real-time processing. For aerotriangulation, a higher accuracy is needed, which means observing both pseudorange and phase. Here, real-time processing is not as important in terms of functionality.

Airborne GPS is used to measure the location of the camera at the instant of exposure. This gives the photogrammetrist XL, YL, and ZL. GPS can also be used to derive the orientation angles by using multiple antennas. Unfortunately, the derived angular relationships only have a precision of about 1 minute of arc, while photogrammetrists need to obtain these values to better than 10 seconds of arc.

To compute the position of the camera during the project, two dual-frequency geodetic GPS receivers are commonly employed. One is placed over a point whose location is known and the other is mounted on the aircraft. Carrier phase data are collected by both receivers during the flight, with sampling rates generally at either 0.5 or 1 second. The integer ambiguity must be taken into account; this will be discussed later. Generally, on-the-fly integer ambiguity resolution techniques are employed.

ADVANTAGES OF AIRBORNE GPS

The main limitation of photogrammetry is the need to obtain ground control to fix the exterior orientation elements. The necessity of obtaining ground control is costly and time-consuming. In addition, there are many instances where gathering control is not feasible. Corbett and Short (1995) identify situations where this exists:

a. Time. Because
phenomena change with time, it is possible that the subject of the mapping has either changed or disappeared by the time the control has been collected. Another limitation occurs when the results of the mapping need to be completed in a very short time period.

b. Location. The physical location of the survey site may restrict access because of geography, or the logistics of completing a field survey may be such as to make the survey prohibitive.

c. Safety. The phenomena of interest may be hazardous, or the subject may be located in an area that is dangerous for field surveys.

d. Cost. Tied to the other problems is that of cost. The necessity of obtaining control under the conditions outlined above may make the cost of the project prohibitive, because control surveys are a labor-intensive activity. Even under normal conditions the charge for procuring control is high and, if too much is needed, could negate the economic advantages that photogrammetry offers.

GPS gives the photogrammetrist the opportunity to minimize, or even eliminate, the amount of ground control and still maintain the accuracy needed for a mapping project. Lapine (n.d.) points out that almost all of the National Oceanic and Atmospheric Administration (NOAA) aerial mapping projects utilize airborne GPS, because they have found efficiencies due to a reduction in the amount of ground control required for their mapping.

While airborne GPS can be used to circumvent the necessity of ground control, it offers the photogrammetrist additional advantages. These include (Abdullah et al., 2000; Lucas, 1994):

- It has a stabilizing effect on the geometry.
- The attainable accuracy meets most mapping standards.
- Substantial cost reductions for medium- and large-scale projects are possible.
- There is an increase in productivity by decreasing the amount of ground control necessary for a project.
- It reduces the hazards due to traffic, particularly for highway corridor mapping.
- Precise flight navigation and
pinpoint photography are possible with this technology.

It is now possible, at least theoretically, to use GPS aerotriangulation without any ground control. This requires (Lucas, 1996) a near-perfect system, an unlikely scenario. Moreover, it would be extremely prudent to have control if for no other reason than to check the results.

While airborne GPS is operational and being used for more mapping projects, there are some concerns that must be addressed for a successful project. These include (Abdullah et al., 2000):

- Risk is greater if the project is not properly planned and executed.
- There is less ground control.
- As the amount of ground control gets smaller, datum transformation problems become more important.
- There is some initial financial investment by the mapping organization.
- It requires nontraditional technical support.

ERROR SOURCES

The use of GPS in photogrammetry contains two sets of error sources, plus additional errors inherent in the integration of these two technologies. For precise work these errors need to be accounted for. Photogrammetric errors include the following:

a. Errors associated with the placement of targets. The Texas Department of Transportation has determined that an error of 1 cm can be expected in centering the target over the point (Bains, 1995). This is based on a 10 cm wide cross target. The main problem is that the center of the target is not precisely defined.

b. Errors inherent in the pug device used to mark control on the diapositives. If the pug is not properly adjusted, then the point transfer may locate pass and tie points erroneously. Regardless, the process of marking control introduces another source of error into the photogrammetric process.

c. Camera calibration is crucial in determining the distortion parameters of the aerial camera used in
photogrammetry. Bains (1995) has found that the current USGS calibration certificate does not provide the information needed for GPS-assisted photogrammetry. Merchant (1992) states that a system calibration is more important with airborne GPS.

d. The camera shutter can contain large random variability in the time the shutter is open. Most of the time this error source is not that important, but if this irregularity is too great, contrast within the image could be lost. The major problem with this nonuniformity arises when trying to synchronize the time of exposure to the epoch at which the GPS signal is collected.

Error sources for GPS are well identified. A loss or disruption of the GPS signal could cause problems in resolving the integer ambiguities and could result in erroneous positioning of the camera location, thereby invalidating the project. The GPS error sources include:

a. Software problems. These can affect a GPS mission, particularly in the kinematic mode. Some software cannot resolve cycle slips in a robust fashion, although newer on-the-fly ambiguity resolution software will help. There is also a limitation on the accuracy of different receivers used in kinematic surveys. Geodetic-quality receivers with 1-2 cm relative accuracy should be employed for projects where high precision is required.

b. Datum problems. The GPS position is determined in the WGS 84 system, whereas the survey coordinates are in some local coordinate system or in NAD 27 coordinates, where there is no exact mathematical relationship between the systems.

c. Signal interruption. This is critical if continuous tracking is necessary in order to process the GPS signal. Interruption may occur during sharp banking turns during the flight.

d. Geometry of the satellite constellation.

e. Receiver clock drift. Although this error is relatively small, the drift should be accounted for in the processing of GPS observations.

f. Multipath. This is
particularly problematic on surfaces such as the fuselage or the wings. This error is due to reception of a reflected signal, which represents a delay in the reception time.

Errors that can be found in the integration of GPS with the aerial camera and photogrammetry are (Bains, 1995; Merchant, 1992; Lapine, n.d.):

a. The configuration of airborne GPS implies that the two data collectors are not physically in the same location. The GPS antenna must be located outside and on top of the aircraft to receive the satellite signals. The aerial camera is located within the aircraft and is situated on the bottom of the craft. The separation distance between the antenna and the camera (the nodal point) needs to be accurately determined. This distance is found through a calibration process prior to the flight. This value can also be introduced in the adjustment by constraining the solution or by treating it in the stochastic process.

b. Prior to beginning a GPS photogrammetric mission, the height between the ground control point and the antenna needs to be measured. Experience has shown that there can be variability in this height based on the quantity of fuel in the aircraft. This problem occurs only when the airborne GPS system is based on an initialization process when solving for the integer ambiguities.

c. The camera shutter can cause problems, as was identified above. The effect of this error creates a time bias. Of concern is the ability to trip the shutter on demand. In the worst case, Merchant (1992) points out that the delay from making the demand for an exposure to the midpoint of the actual exposure could be several seconds. For large-scale photography this could cause serious problems because of the turbulent air in the lower atmosphere and the interpolation from the GPS signal to the effective exposure time. Early experiments with the Wild RC10 with an external pulse generator showed wide variability in time between maximum
aperture and shutter release (van der Vegt, 1989). The values ranged from 10-100 msec. Traveling at 100 m/sec, positional errors of 1-10 m could be expected.

d. Interpolation algorithm used to compute the position of the phase center of the antenna. Since the instant of exposure does not coincide with the sampling time in the GPS receiver, the position of the antenna at the instant of exposure must be interpolated. Different algorithms have varying characteristics, which could introduce error into the position. Related to this uncertainty is the sampling rate used to capture the GPS signal. Too high a rate will increase the processing, whereas too low a rate will degrade the accuracy of the interpolation model.

e. Radio frequency interference can cause problems, particularly onboard the airplane. A receiver that can filter out this noise should be used. One example is the Trimble 4000 SST with SuperTrak signal processing, which has been used successfully in airborne GPS (Salsig and Grissim, 1995).

f. Camera calibration. One of the weak links in airborne GPS involves the camera calibration. As was pointed out earlier, the traditional camera calibration may not provide the information needed when GPS is used to locate the exposure station. What should be considered is a system calibration, whereby the whole process is calibrated and exercised under normal operating conditions (Lapine, 1991; Merchant, 1992). Because of the complex nature of combining different measurement systems within airborne GPS, two important drawbacks are identified with the traditional component approach to camera calibration (Lapine, 1991):

1. The environment is different. In the laboratory, calibration can be performed under ideal and controlled conditions, situations that are not possible in practice. This leads to different atmospheric conditions and variations in the noise found in photo measurements.

2. The effect of correlation between
the different components of the total system is not considered.

Traditionally, survey control on the ground had the effect of compensating for residual systematic errors in the photogrammetric process (Lapine, 1991; Merchant, 1992). This is due to the projective transformation, where ground control is transformed into the photo coordinate system. The exposure station coordinates are free parameters that are allowed to float during the adjustment, thereby enforcing the collinearity condition. With GPS-observed exposure coordinates, the space position of the nodal point of the camera is fixed and the ground coordinates become extrapolated variables. Because of this, calibration of the photogrammetric system under operating conditions becomes critical if high accuracy is to be maintained.

GPS Signal Measurements

There are several different methods of measuring with GPS: static, fast static, and kinematic. Static surveying requires leaving the antennas over the points for an hour or more. It is the most accurate method of obtaining GPS surveying data. Fast static is a newer approach that yields high accuracies while increasing productivity, since the roving antenna need only be left over a point for 10-15 minutes. The high accuracies are possible because the receiver revisits each point after an elapsed time of about an hour. Of course, neither of these approaches is possible in airborne GPS. Kinematic GPS measures the position of a point at the instant of the measurement. At the next epoch the GPS antenna has moved and continues to move. Because of this measurement process, baseline accuracies determined from kinematic GPS will be about 1 cm ± 2 ppm of the baseline distance from the base station to the receiver (Curry and Schuckman, 1993).

Flight Planning for Airborne GPS

When planning an airborne GPS project, special consideration must be given to the addition of the GPS receivers that will be used to record the
location of the camera. The first issue is the form of initialization of the receiver to fix the integer ambiguities. Next, when planning the flight lines, the potential loss of lock on the satellites has to be accounted for. Depending on the location of the airborne receiver, wide banking turns by the pilot may result in a loss of the GPS signal. Banking angles of 25° or less are recommended, which results in longer flight lines (Abdullah et al., 2000).

The location of the base receiver must also be considered during the planning. Will it be at the airport or near the job site? The longer the distance between the base receiver and the rover on the plane, the more uncertain the positioning results will be. It is assumed that the relative positioning of the rover is based upon similar atmospheric conditions; the longer the distance, the less valid this assumption. Deploying at the site requires additional manpower to deploy the receiver and assurances that the person occupying the base station is collecting data at the same time the rover is collecting data.

When planning, try to find those times when the satellite coverage consists of 6 or more satellites with minimum change in coverage (Abdullah et al., 2000). Also plan for a PDOP that is less than 3 to ensure optimal geometry. Additionally, one might have to arrive at a compromise between favorable sun angle and favorable satellite availability. Make sure that the GPS receiver has enough memory to store the satellite data. This is particularly true when a static initialization is performed and satellite data are collected from the airport. There may also be some consideration of the amount of sidelap and overlap when the camera is locked down during the flight. This will be important when a combined GPS/INS system is used. Finally, a flight management system should be used to precalculate the exposure station locations during the flight. The limitations attributed to the loss of lock on the satellites place additional demands on proper planning.
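The PDOP criterion above can be checked ahead of time from a predicted satellite almanac. A minimal sketch of how PDOP follows from satellite geometry, using unit line-of-sight vectors in a local east-north-up frame (the azimuth/elevation values and the `pdop` helper are illustrative, not from the source):

```python
import numpy as np

def pdop(sats_az_el_deg):
    """PDOP from satellite (azimuth, elevation) pairs in degrees."""
    rows = []
    for az_deg, el_deg in sats_az_el_deg:
        az, el = np.radians(az_deg), np.radians(el_deg)
        e = np.cos(el) * np.sin(az)   # east component of line-of-sight
        n = np.cos(el) * np.cos(az)   # north component
        u = np.sin(el)                # up component
        rows.append([e, n, u, 1.0])   # 1.0 models the receiver clock term
    G = np.array(rows)
    Q = np.linalg.inv(G.T @ G)        # cofactor matrix of position + clock
    return float(np.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2]))

# Hypothetical constellation: six satellites spread in azimuth and elevation.
sats = [(0, 60), (60, 30), (120, 40), (180, 25), (240, 35), (300, 50)]
print(round(pdop(sats), 2))
```

Widely spread azimuths and a mix of high and low elevations drive PDOP down; clustered satellites drive it up, which is why the planning guidance asks for six or more satellites with stable coverage.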
These problems can be alleviated to some degree if additional drift parameters are used in the photogrammetric block adjustment.

Antenna Placement

To achieve acceptable results using airborne GPS, it is essential that the offset between the GPS antenna and the perspective center of the camera be accurately known in the image coordinate system (figure 1). The measurement of this offset is performed by leveling the aircraft using jacks above the wheels. Then either conventional surveying or close-range photogrammetry can be used to determine the actual offset.

[Figure 1: GPS offset, showing the calibrated offset between the GPS antenna phase center and the perspective center in the film focal plane (x, y) coordinate system.]

For simplicity, the camera can be locked in place during the flight. This helps maintain the geometric relationship of the offset vector. But the effect is that tilt and crab in the aircraft could result in a loss of coverage on the ground unless more sidelap is accounted for in the planning. If the camera is to be leveled during the flight, then the amount of movement should be measured in order to achieve higher accuracy.

The location of the antenna on the aircraft should be carefully considered. Although any point on the top side of the plane could be considered a candidate site, two locations are worth studying further because of their advantages over other sites: on the fuselage directly above the camera, and the tip of the vertical stabilizer.

The location on the fuselage over the camera has the advantage of aligning the phase center along the optical axis of the camera, thereby making the measurement of the offset, as well as the mathematical modeling, easier (Curry and Schuckman, 1993). Moreover, the crab angle is hardly affected and the tilt corrections are negligible for large image scales (Abdullah et al., 2000). The disadvantages are as follows. First, the
fuselage location increases the probability of multipath. Second, this location, coupled with the wing placement, may lead to a loss of signal because of shadowing. Antenna shadowing is the blockage of the GPS signal, which could occur during sharp banking turns. Finally, mounting on the fuselage may require special modification of the aircraft by certified airplane mechanics.

Placing the antenna on the vertical stabilizer will require more work in determining the offset vector between the antenna and the camera (Curry and Schuckman, 1993). But once determined, it should not have to be remeasured unless some changes to the aircraft suggest that a remeasurement be undertaken. The advantages are that both multipath and shadowing are less likely to occur. Moreover, the actual installation might be far simpler, since many aircraft already have a strobe light on the stabilizer which could easily be adapted to accommodate an antenna.

Determining the Exposure Station Coordinates

The GPS receiver is preset to sample data at a certain rate, e.g., 1-second intervals. This sample time may not coincide with the actual exposure time. Therefore, it is necessary to interpolate the position of the exposure station between GPS observations. An error in timing will result in a change in the coordinates of the exposure station. For example, if a plane is traveling at 200 km/hr (56 m/sec), then a one-millisecond timing error will result in about 6 cm of coordinate error.

With the rotary shutters used in aerial cameras, the time between when the shutter release signal is sent (see figure 2) and the midpoint of the exposure varies (Jacobsen, 1991). Therefore, a sensor must be installed to record the time of exposure. Then, through a calibration process, the offset from the recorded time to the effective instant of exposure can be determined and taken into account. Without calibration, the photographer should not change the exposure during the flight, thereby maintaining a constant offset which can be accounted for in the processing. This,
though, can only be done approximately.

Many of the cameras now in use for airborne GPS will send a signal to the receiver when the exposure is taken. The receiver then records the GPS time for this event marker within the data. Merchant (1993) points out that some cameras can determine the mid-exposure pulse time to 0.1 ms, whereas some other cameras use a TTL pulse that can be calibrated to accurately measure the midpoint of the exposure. Accuracies better than 1 msec have been reported for time intervals by using a light-sensitive device within the aerial camera (van der Vegt, 1989). This device creates an electrical pulse when the shutter is at maximum aperture.

[Figure 2: Shutter release diagram for rotary shutters, from Jacobsen (1991), showing the nominal exposure time, the shutter open and closed times, the release time, the recorded instant of exposure, and the effective instant of exposure.]

Prior to determining the exposure station coordinates, the location of the phase center of the antenna must be interpolated. Since the receiver clock contains a small drift of about 1 μs/sec, Lapine (n.d.) suggests that the position of the antenna be time-shifted so that the positions are equally spaced. Several different interpolation models can be employed to determine the trajectory of the aircraft, including the linear model, the polynomial approach, the spline function, and the quadratic time-dependent polynomial. Some field results found very little difference between these methods (Forlani and Pinto, 1994). This may have been because they used the GPS receiver PPS (pulse per second) signal to trip the shutter on the aerial camera, which meant that the effective instant of exposure was very close to the GPS time signal.

One of the simplest interpolation models is the linear approach. The assumption is made that the change in trajectory from one epoch to another is linear. Thus one can write a simple ratio:
    i / ΔX,Y,Z = di / dX,Y,Z

where

    i       = time interval between GPS epochs,
    ΔX,Y,Z  = changes in GPS coordinates between two epochs,
    di      = time difference between the start of the epoch and the instant of exposure, and
    dX,Y,Z  = changes in GPS coordinates from the start of the epoch to the exposure time.

The advantage of this model is its simplicity. On the other hand, it assumes that the change in position is linear, which may not be true. Sudden changes in direction are very common at the lower altitudes where large-scale mapping missions are flown. For example, figure 3 shows a sudden change in the Z-direction during the flight. Assuming a linear change, the computed location of the receiver could be considerably different from the actual location at the instant of exposure. One alternative would be to decrease the sampling interval to, say, 0.5 seconds. This would reduce the effect of the error but increase the number of observations taken and the time needed to process those data.

[Figure 3: Effects of the linear interpolation model when the aircraft experiences sudden changes in its trajectory between GPS epochs.]

Because of the nonlinear nature of the aircraft motion, Jacobsen (1993) suggests that a least-squares polynomial fitting algorithm be used to determine the space position of the perspective center. By varying the degree of the polynomial and the number of neighboring epochs included in the interpolation process, a more realistic trajectory should be obtained. The degree and number of points will depend on the time interval between GPS epochs. An added advantage of this method is that if a cycle slip is experienced, it can estimate the exposure station coordinates better than a linear model.

A second-order polynomial is used by Lapine (n.d.) to determine the position, velocity, and acceleration of the aircraft in all three axes. This is done by
fitting a curve to a five-epoch period around the exposure time. The effect of this polynomial is to smooth the trajectory of the aircraft over the five epochs. The following model is used for X, and similar equations can be generated for Y and Z. In general form the three models look like:

    X = aX + bX·t + cX·t²
    Y = aY + bY·t + cY·t²
    Z = aZ + bZ·t + cZ·t²

where

    t = ti - t3, with i = 1, 2, ..., 5,
    a = distance from the origin,
    b = velocity, and
    c = twice the acceleration.

From this, the observation equations can be written as:

    vX = aX + bX·t + cX·t² - X0
    vY = aY + bY·t + cY·t² - Y0
    vZ = aZ + bZ·t + cZ·t² - Z0

The design (coefficient) matrix is found by differentiating the model with respect to the unknown parameters. All three models have the same coefficient matrix:

        | 1  (t1 - t3)  (t1 - t3)² |
        | 1  (t2 - t3)  (t2 - t3)² |
    B = | 1  (t3 - t3)  (t3 - t3)² |
        | 1  (t4 - t3)  (t4 - t3)² |
        | 1  (t5 - t3)  (t5 - t3)² |

The observation vectors f are:

         | X1 |        | Y1 |        | Z1 |
         | X2 |        | Y2 |        | Z2 |
    fX = | X3 |   fY = | Y3 |   fZ = | Z3 |
         | X4 |        | Y4 |        | Z4 |
         | X5 |        | Y5 |        | Z5 |

In matrix form the observation equations become:

    vX = B·ΔX - fX
    vY = B·ΔY - fY
    vZ = B·ΔZ - fZ

where Δ represents the parameters, Δ = [a b c]ᵀ. The solution becomes:

    ΔX = (BᵀWB)⁻¹ BᵀWfX
    ΔY = (BᵀWB)⁻¹ BᵀWfY
    ΔZ = (BᵀWB)⁻¹ BᵀWfZ

where W is the weight matrix. Assuming weights of 1, the weight matrix becomes the identity matrix, and

                    |  5    Σt   Σt² |
    BᵀWB = BᵀIB =   |  Σt   Σt²  Σt³ |
                    |  Σt²  Σt³  Σt⁴ |

For the X observations, as an example,

                        |  ΣXi     |
    BᵀWfX = BᵀIfX =     |  ΣXi·ti  |
                        |  ΣXi·ti² |

The weighting scheme is important in the adjustment, because an inappropriate choice of weights may bias or unduly influence the results. Lapine looked at assigning equal weights, but this choice was rejected because the trajectory of the aircraft may be nonuniform. The final weighting scheme used a binomial expansion technique whereby times further from the central time epoch, t3, were weighted less than those closest to the middle. Using a variance of 1.0 cm² for the central time epoch, the variance scheme looks like
    Σ = diag(σ1², σ2², σ3², σ4², σ5²)

where the off-diagonal values are all zero and the variances increase for epochs further from the central epoch t3. A basic assumption made in Lapine's study was that the observations are independent; therefore there is no covariance.

Once the coefficients are solved for, the position of the antenna phase center can be computed using the following expressions:

    Xexp = aX + bX·(texp - t3) + cX·(texp - t3)²
    Yexp = aY + bY·(texp - t3) + cY·(texp - t3)²
    Zexp = aZ + bZ·(texp - t3) + cZ·(texp - t3)²

Determination of Integer Ambiguity

The important error concern in airborne GPS is the determination of the integer ambiguity. Unlike ground-based measurements, the whole photogrammetric mission could be lost if a cycle slip occurs and the receiver cannot resolve the ambiguity problem. There are two principal methods of solving for this integer ambiguity: static initialization over a known reference point, or using a dual-frequency receiver with on-the-fly ambiguity resolution techniques (Habib and Novak, 1994).

Static initialization can be performed in two basic modes (Abdullah et al., 2000). The first method of resolving the integer ambiguities is to place the aircraft over a point on a baseline with known coordinates. Only a few observations are required, because the vector from the reference receiver to the aircraft is known. The accuracy of the baseline must be better than 6-7 cm.

The second approach is a static determination of the vector over a known baseline, or from the reference station to the antenna on the aircraft. The integer ambiguities are solved for in a conventional static solution. This method may require a longer time period to complete, varying from 5 minutes to one hour, depending on the length of the vector, the type of GPS receiver, the post-processing software, the satellite geometry, and ionospheric stability. When static initialization is performed, it requires that the receiver onboard the aircraft maintain a constant lock on at
least 4, and preferably 5, GPS satellites.

Abdullah et al. (2000) identify several weaknesses of static initialization:

- The methods add time to the project and are cumbersome to perform.
- GPS data collection begins at the airport during this initialization. Since data are collected for so long, large amounts of data must be processed, about 7 Mbytes per hour.
- The receiver is susceptible to cycle slips or loss of lock.
- It is possible that the initial solution of the integers was incorrect, thereby invalidating the entire photo mission.

The use of on-the-fly (OTF) integer ambiguity resolution makes the process much easier. The newer GPS receivers and post-processing software are much more robust and easier to use while the receiver is in flight. OTF requires P-code receivers, where carrier phase data are collected using both the L1 and L2 frequencies. The solution requires about 10-15 minutes of measurements before entering the project area.

Component integration can also create problems. For example, a test conducted by the National Land Survey of Sweden experienced cycle slips when using the aircraft communication transmitter (Jonsson and Jivall, 1990). Receiving information was not a problem, just transmissions. This test involved pre-flight initialization with the goal of re-observation over the reference station at the end of the mission; this was not possible.

GPS Aided Navigation

One of the exciting applications of airborne GPS is its utilization in flight navigation. The ability to precisely locate the exposure station and activate the shutter at a predetermined interval along the flight line is beneficial for centering the photography over a geographic region, such as in quad-centered photography for orthophoto production. An early test by the Swedish National Land Survey (Jonsson and Jivall, 1990) showed early progress in this endeavor. The system configuration is shown in figure 4. Two personal computers (PCs) were
used in the early test: one for navigation and the other for determination of the exposure time.

Figure 4. Configuration of navigation-mode GPS equipment (from Jonsson and Jivall, 1990): a navigation PC holding the preselected positions for the exposures receives time-lagged positions from the GPS receiver and sends a timing pulse for exposure to the Zeiss Jena LMK aerial camera, while a second PC records the time of the pulse from the camera at exposure.

The test consisted of orientation of the receiver on the plane over a ground reference mark prior to the mission. This initialization is performed to solve for the integer ambiguity. This method of fixing the ambiguity requires no loss of lock during the flight, thus necessitating long banking turns, which adds to the amount of data collected. A flight plan was computed with the location of each exposure station identified. The PC used for navigation activated a pulse that was sent to the aerial camera to trip the shutter. The test showed that this approach yielded about a 0.5-second delay; thus the exposure station locations were 20-40 meters too late. An accuracy of about 6 meters was found at the preselected positions along the strip. When compared to the photogrammetrically derived exposure station coordinates, the relative carrier phase measurements agreed to within about 0.15 meters.

The Texas Department of Transportation (TDOT) had a different problem (Bains, 1992). Using airborne GPS gave TDOT the ability to reduce the amount of ground control for their design mapping. With GPS, one paneled control point was placed at the beginning of the project and a second at the end; if the site was greater than 10 km in length, then a third paneled control point was placed near the center. For their low-altitude flights (photo scale of 1 cm = 30 m), the desire was to control the sidelap to within 50 m. Using real-time differential GPS, accuracies of better than 10 m were realistic at that time. Using this 10 m error value, this amount of error would only cause a variation in sidelap of 7%; TDOT uses 60% sidelap for their large-scale mapping. For the high-altitude mapping (photo scale of 1 cm = 300 m and 30% sidelap), it was determined that the 50 m tolerance was not really necessary; this 50 m value would cause a variation of only about 2%.

PROCESSING AIRBORNE GPS OBSERVATIONS

The mathematical model utilized in analytical photogrammetry is the collinearity model, which simply states that the line from object space through the lens cone to the negative plane is a straight line. The functional representation of this model is:

    F_x = x_ij - x₀ + c(ΔX_i / ΔZ_i) = 0
    F_y = y_ij - y₀ + c(ΔY_i / ΔZ_i) = 0

where x_ij, y_ij are the observed photo coordinates of point i on photo j; x₀, y₀ are the coordinates of the principal point; c is the camera constant; and ΔX_i, ΔY_i, ΔZ_i are the transformed ground coordinates. This mathematical model is often presented in the following form:

    x_ij + v_xij - x₀ + c · [m₁₁(X_i - X_L) + m₁₂(Y_i - Y_L) + m₁₃(Z_i - Z_L)] / [m₃₁(X_i - X_L) + m₃₂(Y_i - Y_L) + m₃₃(Z_i - Z_L)] = 0

    y_ij + v_yij - y₀ + c · [m₂₁(X_i - X_L) + m₂₂(Y_i - Y_L) + m₂₃(Z_i - Z_L)] / [m₃₁(X_i - X_L) + m₃₂(Y_i - Y_L) + m₃₃(Z_i - Z_L)] = 0

where v_x, v_y are the residuals in x and y, respectively, for point i on photo j; X_i, Y_i, Z_i are the ground coordinates of point i; X_L, Y_L, Z_L are the space rectangular coordinates of the exposure station for photo j; and m₁₁ … m₃₃ are the elements of the 3×3 rotation matrix that transforms the ground coordinates to a photo-parallel system.

The model implies that the difference between the observed photo coordinates, corrected for the location of the principal point, should equal the predicted values of the photo coordinates based upon the current estimates of the parameters. These parameters include the location of the exposure station and the orientation of the photo at the instant of exposure; the former values could be observed quantities from on-board GPS. These central projective equations form the basis for the aerotriangulation.

It is common to treat observations as stochastic variables. This is done by expanding the
mathematical model. For example, Merchant (1973) gives the additional mathematical model, when observations are made on the exterior orientation elements, as:

    F_XL = X_L - X_L⁰ = 0
    F_YL = Y_L - Y_L⁰ = 0
    F_ZL = Z_L - Z_L⁰ = 0

The mathematical model for observations on survey control can be similarly formed.

Figure 5. Position ambiguity for a single photo resection: possible camera positions (from Lucas, 1996, p. 125).

Using GPS to determine the exposure station coordinates without ground control is not applicable to all photogrammetric problems; ground control is needed for a single photo resection and orientation (Lucas, 1996). If the exposure station coordinates are precisely known, then the only thing known is that the camera lies on some sphere with a radius equal to the offset distance from the GPS antenna to the camera's nodal point (Figure 5). The antenna is located at the center of the sphere. All positions on the sphere are theoretically possible, but from a practical viewpoint one knows that the camera, being located below the aircraft and pointing to the ground, is below the antenna. The antenna, naturally, is located on top of the aircraft to receive the satellite signals.

Adding a second photo reduces some of the uncertainty. This is due to the additional constraint of the collinearity condition that is placed on the rays from the control to the image position. The collinearity theory will provide the relative orientation between the two photos (Lucas, 1996). Without ground control, the camera is then free to rotate about a line that passes through the two antenna locations (see Figure 6). Without ground control, or some other mechanism to constrain the roll angle along the strip, this situation could be found throughout a single strip of photography.

Figure 6. Ambiguity of the camera position for a pair of aerial photos (from Lucas, 1996, p. 125).

While independent model triangulation continues to be employed in practice, the usual iterative adjustment cannot be used with the recommended four corner control points (Jacobsen, 1993). Moreover, the 7-parameter solution to independent model triangulation results in a loss of accuracy in the solution.

Determining the coordinates of the exposure stations can be easily visualized in the following model (Merchant, 1992). Assume that the photo coordinate system (x, y, z) is aligned with the coordinate system (U, V, W). Further assume that the survey control (X, Y, Z) is reported in the WGS 84 system. Then it remains to transform the offset between the receiver's phase center and the nodal point of the aerial camera (D_U, D_V, D_W) into the corresponding survey coordinate system. This is shown as:

    | X_L |   | X_A |           | D_U |
    | Y_L | = | Y_A | + M_E M_M | D_V |
    | Z_L |   | Z_A |           | D_W |

where D_U, D_V, D_W are the offset distances, M_M is the camera mount orientation matrix, and M_E is the matrix formed from the exterior orientation elements of the camera.

The camera mount orientation is necessary to ensure that the camera is heading correctly down the flight path. In the normal acquisition of aerial photos, the camera is leveled prior to each exposure. This is done so that the photography is nearly vertical at the instant of exposure, even though the aircraft is experiencing pitch, roll, and swing (crab or drift). When the coordinate offsets between the antenna and camera were surveyed, the orientation angles on the mount were leveled. A problem occurs if there is an offset between the location of the nodal point and the gimbal's rotational center on the mount: when the camera is rotated, the relationship between the two points must be considered. The simplest way to ensure that the relationship between the receiver and the camera remains consistent would be to forgo any rotation of the camera during the flight. With this rigid relationship fixed, the antenna coordinates can be rotated into a system parallel to the ground by using the tilts experienced during the
flight. Alternatively, Lapine (n.d.) points out that the transformation of the offsets to the local coordinate system can easily be performed using the standard gimbal form. In this situation, the pitch and swing angles between the aircraft and the camera are measured. Then one can simply algebraically sum the camera mount angles with the appropriate measured pitch and swing angles: here κ and swing are added to form one rotational element, and φ and pitch are similarly combined. Since roll was not measured during the test, ω is treated independently.

Using the Wild RC10 camera mount, Lapine found that the optical axis of the camera coincided with the vertical axis of the mount. That meant that the combination of κ and swing would not produce any eccentricity. Testing revealed, however, that the gimbal center was located approximately 27 mm from the nodal point; thus an eccentricity error could be introduced. During the flight, a 1.5° maximum pitch angle between the aircraft and the camera mount was found. Thus the maximum error from neglecting this effect in the flight direction would be:

    maximum pitch error = 0.027 m × sin 1.5° ≈ 0.0007 meters

Experiences from tests in Europe (Jacobsen, 1991) indicate that the GPS positions of the projection centers differ from the coordinates obtained from a bundle adjustment. Moreover, many of the data sets have shown a time-dependent drift pattern in the GPS values. When this systematic tendency is accounted for in the adjustment, excellent results are possible: about 4 cm can be reached for relative positioning, whereas 60 cm is possible using absolute positioning.

A second approach to performing airborne-GPS aerial triangulation is sometimes referred to as the Stuttgart method. In this technique certain physical conditions are assumed or accepted (Ackermann, 1993). First, it is accepted that loss of lock will occur. This means that low banking angles will not be flown, as in those methods where a loss of lock means a thwarted mission. Because loss of lock is accepted, it is also unnecessary to perform a stationary observation prior to takeoff to resolve the integer ambiguities. These ambiguities are solved on-the-fly and can be determined anew for each strip if loss of lock occurs during banking or at other times during the photo mission; seldom will loss of lock happen along a strip, though. Second, it is assumed that single-frequency receivers will be used on the aircraft. Finally, the ground or base receiver will probably be located at a great distance from the photo mission.

The solution of the integer ambiguities is performed using C/A-code pseudorange positioning. These positions can be affected by selective availability (SA); because of this, there will be bias in the solution. These drift errors, which can include other effects such as datum effects, are systematic in nature and consist of a constant and a linear, time-dependent component. The block adjustment is used to solve for these biases.

Early test results added confusion to the drift-error biases. In a test by the Survey Department of Rijkswaterstaat (Netherlands), a systematic effect was not noticeable on all photo strips (van der Vegt, 1989). Evaluation of the results indicated that this was probably due to the GPS processing of the cycle slips. The accuracy of the position in the differential mode is predicated on the accuracy of solving these integer ambiguities at both the base receiver and the rover. This test used a technique where the differences between the observed pseudoranges and the phase measurements were averaged. The accuracy of this approach will depend upon the accuracy of the measurements, the satellite geometry, and how many uncorrelated observations are used in the averaging.

If no loss of lock occurs during the photo mission, the aircraft trajectory will be continuous, and therefore only one set of drift parameters needs to be carried in the bundle adjustment.
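The constant-plus-linear drift model just described can be illustrated numerically. The sketch below (all numbers are hypothetical, chosen only for illustration) simulates GPS-minus-photogrammetry coordinate differences along one strip and recovers the two drift parameters for one coordinate axis by least squares:

```python
import numpy as np

# Strip-wise drift model: the GPS-derived exposure-station coordinate differs
# from the adjusted value by a constant offset a plus a linear term b*(t - t0).
rng = np.random.default_rng(seed=1)

t0 = 0.0                           # time at the start of the strip (s)
t = np.linspace(0.0, 120.0, 13)    # exposure times along one strip (s)

a_true, b_true = 0.45, 0.002       # hypothetical drift: 45 cm offset, 2 mm/s
noise = rng.normal(0.0, 0.05, t.size)   # 5 cm GPS coordinate noise

# Simulated (GPS - photogrammetric) coordinate differences for one axis:
d = a_true + b_true * (t - t0) + noise

# Least-squares estimate of the two drift parameters for this strip --
# exactly the constant and linear terms carried in the block adjustment:
A = np.column_stack([np.ones_like(t), t - t0])
(a_hat, b_hat), *_ = np.linalg.lstsq(A, d, rcond=None)

print(f"estimated offset a = {a_hat:.3f} m, drift rate b = {b_hat:.5f} m/s")
```

With a continuous trajectory, one such parameter pair per coordinate suffices for the whole mission; each loss of lock introduces a fresh pair, which is why the strip-wise (or trajectory-segment-wise) parameterization is used.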
Unfortunately, banking turns could have an adverse effect by blocking the signal to some of the satellites, causing cycle slips. Hoghlen (1993) states that, as an alternative to the strip-wise application of the biases, the block may be split into parts where the aircraft trajectory is continuous, thereby decreasing the number of unknown parameters within the adjustment.

The advantage of modeling these drift parameters is that the ground receiver does not have to be situated near the site; it could be 500 km or farther away (Ackermann, 1993). This is important because it can decrease the costs associated with photo missions. Logistical concerns include not only the deployment of the aircraft but also the ground personnel on the site to operate the base. When projects are located at great distances from the airplane's home base, uncertainty in the weather could mean field crews already on the site but the photo mission canceled. It is also an asset to flight planning, in that on-site GPS ground receivers would require fixing the flight lines at least one day before the mission; during the flying season this could be a problem (Jacobsen, 1994). In Germany the problem is solved by the existence of permanent reference stations throughout the country that can be occupied by the ground receiver.

Using the mathematical model for additional stochastic observations within the adjustment, as outlined earlier (Merchant, 1973), a new set of observations can be written for the perspective center coordinates as (Blankenberg, 1992):

    | X_L^GPS |   | v_X |   | X_L |
    | Y_L^GPS | + | v_Y | = | Y_L |
    | Z_L^GPS |   | v_Z |   | Z_L |

where X_L, Y_L, Z_L (GPS) are the perspective center coordinates observed with GPS; v_X, v_Y, v_Z are the residuals on the observed perspective center coordinates; and X_L, Y_L, Z_L are the adjusted perspective center coordinates from the bundle adjustment.

Figure 7. Geometry of the GPS antenna phase centre with respect to the aerial camera.

As pointed out earlier, the antenna does not occupy the same location as the camera nodal point; the geometry is shown in Figure 7. Relating the antenna offset to the ground depends upon the rotation of the camera, so the offset will only be dependent upon the orientation elements ω, φ, κ. The additional observation equations to the collinearity model are then given as (Ackermann, 1993; Hoghlen, 1993):

    | X_A^GPS |   | v_XA |   | X_L |              | dx |   | a_X |        | b_X |
    | Y_A^GPS | + | v_YA | = | Y_L | + R(ω,φ,κ) | dy | + | a_Y | + dt · | b_Y |
    | Z_A^GPS |   | v_ZA |   | Z_L |              | dz |   | a_Z |        | b_Z |

where
    X_A, Y_A, Z_A (GPS): ground coordinates of the GPS antenna for photo i;
    v_XA, v_YA, v_ZA: residuals for the GPS antenna coordinates;
    X_L, Y_L, Z_L: coordinates of the exposure station of photo i;
    dx, dy, dz: offsets from the projection center to the GPS antenna;
    a_X, a_Y, a_Z: GPS drift parameters for strip j representing the constant term;
    dt: difference between the exposure time of photo i and the time at the start of strip j;
    b_X, b_Y, b_Z: GPS drift parameters for strip j representing the linear, time-dependent terms;
    R(ω,φ,κ): orthogonal rotation matrix.

It is recognized in analytical photogrammetry that adding parameters to the adjustment weakens the solution. To strengthen the problem one can introduce more ground control, but this defeats one of the advantages of airborne GPS. Introducing the strip-wise drift parameters and using four ground control points located at the corners of the project, there are three approaches to reducing the instability of the block (Ackermann, 1993). These are shown in Figure 8 and are: (i) using both 60% endlap and 60% sidelap; (ii) using 60% endlap and 20% sidelap and adding an additional vertical control point at both ends of each strip; and (iii) using the conventional amount of overlap as indicated in (ii) and flying at least two cross-strips of photography.

Figure 8. Idealized block schemes (horizontal and vertical control points shown).

The block schemes shown in Figure 8 are idealized depictions. The Figure 8(i) scheme can be used for airborne GPS when no drift parameters are employed in the block adjustment. It is important that the receiver maintain lock during the flight, which necessitates flat turns between flight lines. Maintaining lock
ensures that the phase history is recorded from takeoff to landing. Abdullah et al. (2000) point out that this is the most accurate type of configuration in a production environment. The same control scheme can also be used when block drift parameters are used in the bundle adjustment. If strip drift parameters are used, then a control configuration as shown in Figure 8(ii) should be used: here drift parameters are developed for each flight line (strip), which requires additional height control at the ends of each strip. The control configuration in Figure 8(iii) incorporates two cross strips of photography. This model strengthens the geometry and provides a check against any gross errors in the ground control, but it does add to the cost of the project because more photography must be taken and measured. For that reason it is not frequently utilized in a production environment.

More often the area is not rectangular but rather irregular. In this situation it is advisable to add additional cross-strips or provide more ground control; Figure 9 is an example. Theoretically, it is possible to perform the block adjustment without any ground control. This can easily be visualized if one considers supplanting the ground control with control located at the exposure stations. Nonetheless, it is prudent to include control on every job, if for nothing more than providing a check on the aerotriangulation. Using the four-control-point scheme just presented has the advantage of using the GPS positions for interpolation only within the strip.

As is known, conventional aerotriangulation requires ground control. As an example, for planimetric mapping, control is required at an interval of approximately every seventh photo on the edge of the block; topographic mapping requires vertical control within the block at about the same spacing. Using this background and simulated data, Lucas (1996) was able to develop error ellipses from a bundle adjustment showing the accumulation of error along the edges of the block (Figure 10). This is commonly referred to as the edge effect and stems from a weakened geometric configuration that exists because of a loss in redundancy: under normal circumstances a point in the middle of a block should be visible on at least nine photos, but on the edge the photos are taken from only one side.

Using the same simulated data, Lucas (1996) also showed the error ellipses one would expect using 60% end- and sidelap photography along with airborne GPS and no control. The results show that for planimetry the results are similar: larger error ellipses were found at the control points, but at every other point they were either smaller or nearly equivalent.

Figure 9. GPS block control configuration (GPS block "Fairfield", 16 × 25 km; xyz control points at the corners, xyz check points within the block).

Elevation errors were much different between the two simulations. Using just aerotriangulation without control, the error ellipses grew larger towards the center of the block; using kinematic GPS, on the other hand, kept the error from getting larger. Compared with the original simulation with vertical control within the block, each point showed improvement except the control points that were fixed in the conventional adjustment. Lucas (1996) states that the reason for the improvement lies in the fact that each exposure station is now a control point, and the distance between the control is less than one would find conventionally; it would not be practical to have the same density of control on the ground as one has in the air. These results are based on simulations and therefore reflect what is possible, not necessarily what one would find with real data.

Figure 10. Error ellipses with ground points positioned by
conventional aerotriangulation adjustment of a photo block (Lucas, 1996).

Accuracy considerations are important in determining the viability of using GPS observations within a combined bundle adjustment. Results of projects conducted with a combined GPS bundle adjustment show that this approach is not only feasible but also desirable. In conventional aerotriangulation, ground control points helped suppress the effects of block deformation. GPS-observed perspective center coordinates stabilize the adjustment, thus negating the necessity for extensive control; in fact, their main function now becomes one of assisting in the datum transformation problem (Ackermann, 1993). If the position of the exposure station can be ascertained to an accuracy of 10 cm or better, then the accuracy of the adjustment becomes primarily dependent upon the precision of the measurement of the photo coordinates (Ackermann, 1993). Designating the standard error of the photo observations as σ₀, and its projected value expressed in ground units as σ̄₀, then, as long as σ_GPS ≤ σ̄₀, Ackermann indicates that the following rule could apply: the expected horizontal accuracy (X, Y) will be approximately 1.5·σ̄₀, and the vertical accuracy (Z) around 2.0·σ̄₀. This assumes using the six drift parameters for each strip, four control points, and cross strips.

Strip Airborne GPS

For route surveys, such as transportation systems, there is a problem with airborne GPS when the GPS measurements are used exclusively to control the flight. Theoretically, a solution is possible if the exposure stations are distributed over a block and are non-collinear. In the case of strip photography, the exposure station coordinates will lie nearly on a line, making the system ill-conditioned or singular. Therefore some kind of control needs to be provided on the ground to eliminate the weak solution that would otherwise exist. As an example, Lucas (1996) shows the error ellipses one would expect with only ground control and then with kinematic GPS; these are shown in Figure 11 for horizontal values and Figure 12 for vertical control. Merchant (1994) states that to solve this adjustment problem, existing ground control could be utilized in the adjustment: most transportation projects have monumented points throughout the project, and intervisible control can reasonably be expected.

A test was performed to evaluate the idea of using control for strip photography (Merchant, 1994). A strip of three photos was taken with a Wild RC20 aerial camera in a Cessna Citation over the Transportation Research Center test site in Ohio. The aircraft was pressurized, and the flying height above the ground was approximately 1800 m. A Trimble SSE receiver was used, with a distance to the ground-based receiver of approximately 35 km. The photography was acquired with 60% endlap. Corrections applied to the measured photo coordinates included lens distortion compensation (both radial distortion from Seidel's aberrations and decentering distortion, using the Brown-Conrady model), atmospheric refraction (also accounting for the refraction due to the pressurized cabin), and film deformation (USC&GS 8-parameter model).

Figure 11. Error ellipses for the horizontal positions along the strip.

Figure 12. Error ellipses for the vertical positions along the strip.

The middle photo had 80 targeted image points. For this test only one or two were used as control, while the remaining control values were withheld. The results are shown in the following table. The full-field method utilized all of the checkpoints within the photography; the corridor method used only a narrow band of points along the route, which is typical of the area of interest for many transportation departments (Merchant, 1994). The results are expressed in terms of the root mean square error (rmse), defined as the measure of variability between the observed and "true" (withheld) values for the checkpoints:

    rmse = sqrt( Σ(true - observed)² / n )
where n is the number of test points.

    Number of Test Points | Using 2 targeted ground | Using 1 targeted ground
    Corridor: 11          | 0.034                   | 0.033, 0.082

The results indicate that accuracies in elevation are better than 1:20,000 of the flying height, which is comparable to results found from conventional block adjustments. It should also be noted that the pass points were targeted; therefore errors that may occur due to the marking of conjugate imagery are not present. Moreover, the adjustment also included calibration of the system. Nonetheless, good results can be expected by using ground control to alleviate the ill-conditioning of the normal equations. A minimum of one point is needed, with additional points being used as a check.

Another approach, other than including additional control, would be to fly a cross strip perpendicular to the strip of photography. This has the effect of anchoring the strip, thereby preventing it from accumulating large amounts of error. If the photography consists of only a single strip, then it is recommended that a cross strip be obtained at both ends of the strip (Lucas, 1996).

Combined INS and GPS Surveying

The combination of an inertial navigation system (INS) with GPS gives the surveyor the ability to exploit the advantages of both systems. INS has a very high short-term accuracy, which can be used to eliminate multipath effects and aid in the solution of the ambiguity problem. The long-term accuracy of GPS can be used to correct for the time-dependent drift found within inertial systems. Used together, they give the surveyor not only good relative accuracies but good absolute accuracies as well. Moreover, within the bundle adjustment only the shift parameters need to be included in the adjustment model (Jacobsen, 1993), thereby increasing, at least theoretically, the accuracy of the aerotriangulation.

Texas DOT Accuracy Assessment Project

The Texas Department of Transportation undertook a project to assess the accuracy level achievable using GPS and photogrammetry; Bains (1995) describes the project at length. Three considerations were addressed in this project: the system description, airborne GPS kinematic processing, and statistical analysis.

The system description can be summarized as follows. The site selected was an abandoned U.S. Air Force base located near Bryan, Texas. This site was selected because the targets could be permanently set and there would be minimal obstructions due to traffic. Being an abandoned facility, expansion of the test facility was possible; in addition, the facility could handle the King Air airplane.

Target design is important for the aerial triangulation. A 60 × 60 cm cross target with a pin in the center was selected, based on a photo scale of 1:3000. The location of the center of the target allowed for precise centering of the ground receiver over the point. In areas where there was no hard surface on which to paint the target, a prefabricated painted wafer-board target was employed.

All of the targets were measured using static GPS; each target was observed at least once. Using 8 receivers, two occupied master control points while the remaining six simultaneously observed the satellites over the photo control points. The goal was to achieve Order B accuracy in 3D of 1:1,000,000. In addition, differential levels were run over all targets to test the accuracy of the GPS-derived heights.

The offset between the antenna and the camera was measured four times and the mean values determined. Prior to the measurement, the aircraft was jacked up and leveled; the aerial camera was then leveled and locked into place, and the offset distances were measured.

The flight specifications were designed to optimize the accuracy of the test. They were:

    Photo Scale: 1:3000
    Flying Height: 500 meters
    Flight Direction: North-South
    Forward Overlap: 60% minimum
    Sidelap: 60%
    Number of Strips: 3
    Exposures per Strip: 12
    Focal Length: 152 mm
    Format: 230 × 230 mm
    Camera: Wild RC 20
    Film Type: Kodak Panatomic 2412 (black and white)
    Sun Angle: 30° minimum
    Cloud Cover: None
    GDOP: 4

The mission began by measuring the height of the antenna while the aircraft was parked. The ground receiver was turned on with a sampling rate of 1 second. The rover receiver in the aircraft was then turned on and tracked the satellites for five minutes at the same one-second sampling rate; then the aircraft took off and flew its mission.

The processing steps involved the kinematic solution of the GPS observations. The PNAV software was used for on-the-fly ambiguity resolution. The software vendor recommended that the processing be done both forward and backward for better accuracy, but the test indicated that, at least for this project, there was no increase in accuracy from that kind of processing.

The photogrammetry was processed using softcopy photogrammetry with a 15 µm pixel size. The aerial triangulation was then performed with the GAPP software using only four ground control stations: two at the start and two at the end. The results were then statistically processed using SAS (Statistical Analysis System).

The results of this study showed that the accuracy achieved fell within specifications; in fact, the GPS results were either equal to or better than the accuracy of conventional positioning systems. The results also indicated the need for a reference point within the site to aid in the transformation to State Plane Coordinates. As an example, Table 1 shows the comparison between the GPS-derived control and the values from the ground truth. These results show that airborne GPS can meet the accuracy specifications for photogrammetric mapping.

    Variable  | Minimum | Maximum | Mean  | Standard Deviation
    Elevation | 0.068   | 0.105   | 0.008 |

Table 1. Comparison of airborne GPS
-assisted triangulation with ground truth on day 279, 1993, over a long strip (from Bains, 1995, p. 40).

ECONOMICS OF AIRBORNE GPS

While no studies have been conducted that describe the economic advantages of airborne GPS, some general findings are available (Ackermann, 1993). Utilization of airborne GPS does increase the aerotriangulation costs by about 25% over the conventional approach. This increase includes the flying (additional cross-strips, film), GPS equipment, GPS base observations, processing of the GPS data and computation of the aircraft trajectories, aerotriangulation (point transfer and photo observations), and the combined block adjustment. The real savings accrue in the control, where the costs are 10% or less of those required using conventional aerotriangulation. The overall net savings will be about 40% when looking at total project costs. If higher-order accuracy is required (Ackermann uses the example of cadastral photogrammetry, which needs 1-2 cm accuracy), then the savings will decrease because additional ground control is necessary.

REFERENCES

Abdullah, Q., M. Hussain and R. Munjy, 2000. "Airborne GPS-Controlled Aerial Triangulation: Theory and Practical Concepts", workshop notes.

Ackermann, F., 1993. "GPS for Photogrammetry", The Photogrammetric Journal of Finland, 13(2):7-15.

Bains, H.S., 1992. "Photogrammetric Surveying by GPS Navigation", Proceedings of the 6th International Geodetic Symposium on Satellite Positioning, Vol. II, Columbus, OH, March 17-20, pp. 731-738.

Bains, H.S., 1995. "Airborne GPS Performance on a Texas Project", ACSM/ASPRS Annual Convention and Exposition Technical Papers, Vol. 2, February 27 - March 2, pp. 31-42.

Corbett, S.J. and T.M. Short, 1995. "Development of an Airborne Positioning System", Photogrammetric Record, 15(85):3-15.

Curry, S. and K. Schuckman, 1993. "Practical Guidelines for the Use of GPS Photogrammetry", ACSM/ASPRS Annual Convention and Exposition Technical Papers, Vol. 3, New Orleans, LA, pp. 79-88.

Forlani, G. and L. Pinto, 1994. "Experiences of Combined Block Adjustment with GPS Data", International Archives of Photogrammetry and Remote Sensing, Vol. 30, Part 3, Munich, Germany, September 5-9, pp. 219-226.

Habib, A. and K. Novak, 1994. "GPS Controlled Aerial Triangulation of Single Flight Lines", Proceedings of ASPRS/ACSM Annual Convention and Exposition, Vol. 1, Reno, NV, April 25-28, pp. 225-235; also International Archives of Photogrammetry and Remote Sensing, Vol. 30, Part 2, Ottawa, Canada, June 6-10, pp. 203-210.

Hoghlen, A., 1993. "GPS-Supported Aerotriangulation in Finland: The Eura Block", The Photogrammetric Journal of Finland, 13(2):68-77.

Jacobsen, K., 1991. "Trends in GPS Photogrammetry", Proceedings of ACSM-ASPRS Annual Convention, Vol. 5, Baltimore, MD, pp. 208-217.

Jacobsen, K., 1993. "Correction of GPS Antenna Position for Combined Block Adjustment", ACSM/ASPRS Annual Convention and Exposition Technical Papers, Vol. 3, New Orleans, LA, pp. 152-158.

Jacobsen, K., 1994. "Combined Block Adjustment with Precise Differential GPS Data", International Archives of Photogrammetry and Remote Sensing, Vol. 30, Part 3, Munich, Germany, September 5-9, pp. 422-426.

Jonsson, B. and A. Jivall, 1990. "Experiences from Kinematic GPS Measurements", paper presented at the Nordic Geodetic Commission 11th General Meeting, Copenhagen, 12 p.

Lapine, L.A., 1991. Analytical Calibration of the Airborne Photogrammetric System Using A Priori Knowledge of the Exposure Station Obtained from Kinematic Global Positioning System Techniques, Department of Geodetic Science and Surveying Report No. 411, The Ohio State University, Columbus, OH, 188 p.

Lapine, L.A., n.d. "Airborne Kinematic GPS Positioning for Photogrammetry: The Determination of the Camera Exposure Station", xerox copy, source unknown.

Lucas, J.R., 1996. "Covariance Propagation in Kinematic GPS Photogrammetry", in Digital Photogrammetry: An Addendum to the Manual of Photogrammetry, ASPRS, pp. 124-129.

Merchant, D.C., 1992. "GPS-Controlled Aerial Photogrammetry", ASPRS/ACSM/RT 92 Technical Papers,
Vol. 2, Washington, DC, August 3-8, pp. 76-85.

Merchant, D.C., 1994. "Airborne GPS/Photogrammetry for Transportation Systems", Proceedings of ASPRS/ACSM Annual Convention and Exposition, Vol. 1, Reno, NV, April 25-28, pp. 392-395.

Salsig, G. and T. Grissim, 1995. "GPS in Aerial Mapping", Proceedings of Trimble Surveying and Mapping Users Conference, Santa Clara, CA, August 9-11, pp. 48-53.

SURE 440 Advanced Photogrammetry
OVERVIEW OF BURSA-WOLF DEVELOPMENT
Surveying Engineering Department, Ferris State University

3D Coordinate Transformation

THREE-DIMENSIONAL CONFORMAL COORDINATE TRANSFORMATION
Converting from one three-dimensional system to another while preserving true shape. This type of coordinate transformation is essential in analytical photogrammetry to transform arbitrary stereo model coordinates to a ground or object space system. It is often used in geodesy to convert GPS coordinates in WGS 84 to State Plane Coordinates.

APPLICATIONS OF 3D CONFORMAL COORDINATE TRANSFORMATIONS
- Mobile mapping systems: relating the different coordinate frames involved (sensor frame, body frame, mapping frame) and the different sensors (camera, differential GPS, odometer).
- Homeland security, e.g. facial pattern recognition and image processing.

3D CONFORMAL COORDINATE TRANSFORMATION
Also known as the 7-parameter transformation, since it involves:
- three rotation angles: omega (ω), phi (φ), and kappa (κ),
- three translation parameters TX, TY, TZ, and
- a scale factor s.

ROTATION ANGLES: OMEGA
In general form:
X2 = X1 + 0·Y1 + 0·Z1
Y2 = 0·X1 + Y1 cos ω + Z1 sin ω
Z2 = 0·X1 − Y1 sin ω + Z1 cos ω

In matrix form:
| X2 |   | 1    0       0     | | X1 |
| Y2 | = | 0    cos ω   sin ω | | Y1 |
| Z2 |   | 0   −sin ω   cos ω | | Z1 |

More concisely: C2 = Mω C1

ROTATION ANGLES: PHI
In general form:
X3 = X2 cos φ + 0·Y2 − Z2 sin φ
Y3 = 0·X2 + Y2 + 0·Z2
Z3 = X2 sin φ + 0·Y2 + Z2 cos φ

In matrix form:
| X3 |   | cos φ   0   −sin φ | | X2 |
| Y3 | = | 0       1    0     | | Y2 |
| Z3 |   | sin φ   0    cos φ | | Z2 |

More concisely: C3 = Mφ C2
ROTATION ANGLES: KAPPA
In general form:
X′ = X3 cos κ + Y3 sin κ + 0·Z3
Y′ = −X3 sin κ + Y3 cos κ + 0·Z3
Z′ = 0·X3 + 0·Y3 + Z3

In matrix form:
| X′ |   |  cos κ   sin κ   0 | | X3 |
| Y′ | = | −sin κ   cos κ   0 | | Y3 |
| Z′ |   |  0       0       1 | | Z3 |

More concisely: C′ = Mκ C3

COMBINED ROTATION MATRIX
If we combine all the rotation matrices:
C′ = MG C1, with MG = Mκ Mφ Mω = | m11 m12 m13 |
                                 | m21 m22 m23 |
                                 | m31 m32 m33 |

After multiplication, MG becomes:
     |  cos φ cos κ     cos ω sin κ + sin ω sin φ cos κ    sin ω sin κ − cos ω sin φ cos κ |
MG = | −cos φ sin κ     cos ω cos κ − sin ω sin φ sin κ    sin ω cos κ + cos ω sin φ sin κ |
     |  sin φ          −sin ω cos φ                        cos ω cos φ                     |

COMPUTING ROTATION ANGLES
If the rotation matrix is known, the rotation angles can be computed from its elements:
tan ω = −m32 / m33
sin φ = m31
tan κ = −m21 / m11

PROPERTIES OF THE ROTATION MATRIX
The rotation matrix is an orthogonal matrix, which has the property that its inverse is equal to its transpose:
M^-1 = M^T
This can be used for the inverse relationship.

THREE-DIMENSIONAL CONFORMAL COORDINATE TRANSFORMATION
Finally, the 3D conformal transformation is derived by multiplying the system by a scale factor s and adding the translation factors TX, TY, and TZ:
C = s M C1 + T
where C = [X Y Z]^T and T = [TX TY TZ]^T.

BURSA-WOLF TRANSFORMATION
In geodesy the rotation angles are assumed small, so cos θ ≈ 1, sin θ ≈ θ (in radians), and products of two sines are neglected. The rotation matrix R then becomes:
    |  1     Rκ   −Rφ |
R = | −Rκ    1     Rω |
    |  Rφ   −Rω    1  |

3D similarity transformation:
| X |       |  1     Rκ   −Rφ | | x |   | TX |
| Y | = s · | −Rκ    1     Rω | | y | + | TY |
| Z |       |  Rφ   −Rω    1  | | z |   | TZ |

Observation equation: v = BΔ − f

Coefficient matrix B (rows for the first and the n-th common point shown; parameter order TX, TY, TZ, s, Rω, Rφ, Rκ):
    | 1 0 0   x1    0   −z1    y1 |
    | 0 1 0   y1    z1   0    −x1 |
B = | 0 0 1   z1   −y1   x1    0  |
    | ...                         |
    | 1 0 0   xn    0   −zn    yn |
    | 0 1 0   yn    zn   0    −xn |
    | 0 0 1   zn   −yn   xn    0  |

Vector of parameters Δ and discrepancy vector f:
Δ^T = [TX  TY  TZ  s  Rω  Rφ  Rκ]
f^T = [X1 − x1,  Y1 − y1,  Z1 − z1,  ...]
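As a numerical check on the expressions above, the combined matrix MG = Mκ·Mφ·Mω and the angle-recovery formulas can be sketched in a few lines. This is an illustrative sketch only; the function names are mine, not from the notes.

```python
import math

def rotation_matrix(omega, phi, kappa):
    """Combined photogrammetric rotation MG = M_kappa * M_phi * M_omega (angles in radians)."""
    so, co = math.sin(omega), math.cos(omega)
    sp, cp = math.sin(phi), math.cos(phi)
    sk, ck = math.sin(kappa), math.cos(kappa)
    return [
        [cp * ck,  co * sk + so * sp * ck,  so * sk - co * sp * ck],
        [-cp * sk, co * ck - so * sp * sk,  so * ck + co * sp * sk],
        [sp,       -so * cp,                co * cp],
    ]

def rotation_angles(m):
    """Recover the angles using tan(omega) = -m32/m33, sin(phi) = m31, tan(kappa) = -m21/m11."""
    omega = math.atan2(-m[2][1], m[2][2])
    phi = math.asin(m[2][0])
    kappa = math.atan2(-m[1][0], m[0][0])
    return omega, phi, kappa
```

For small angles, MG reduces numerically to the skew-symmetric Bursa-Wolf form I + [0, κ, −φ; −κ, 0, ω; φ, −ω, 0] used above, which is a convenient way to check the sign convention of the 7-parameter model.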
Advanced Photogrammetry Three Dimensional Coordinates l Transformation General polynomial approach transformation is not conformal Xna0a1xa2ya32a4x2 a5y2a622 axyayz 2 2 2 agzxawxy a11xya12xz quot39 Yn b0 blxbzyb3zb4x2 175y2 17622 b7xyb8yz bgzxway2 bllxzyblzxz2 Zn CO clxczyc3zc4x2 05y2 0622 c7xycsyz 2 2 2 cgzxcwxy cllxycuxz Three Dimensional Coordinates Transformation Alternative that is conformal in the three planes XnA0A1xA2yA32A5x2 y2 220aA72x2A6xy erB0 142cllyl4214l6 x2 y2 222A7yz02A5xy ZrzC0 143x A4yllz147 x2 y2 222A6yz2A52x0 3D Coordinate Transformation SURE 440 Advanced Photogrammetry Three Dimensional Coordinates Transformation alxc12ycz32cz4 Xn d1xd2yd3zl Polynomial projective transformation 15 parameters Yn blx b2y b32 b4 d1xd2yd3zl clxc2yc32c4 Zn d1xd2yd3zl BursaWolf Transformation From Krakiwsky and Thomson 1974 FP 1kR R X X39 1 kRwR RK Y1 7 Y39 1P Z39J 3D Coordinate Transformation SURE 440 Advanced Photogrammetry BursaWolf Transformation Rotation matrix can be shown as R RwRwRK D IQx Functional model becomes FPrT EkZQRE BursaWolf Transformation Linearize Taylor series AV BA 2 f 0 where WP fl r l Af1lil 10 0 0 72 Y X B 0 1 0 Z 0 7X Y 0AXAYAzw Kk 0 01 Y X 0 Z 3D Coordinate Transformation SURE 440 Advanced Photogrammetry l MolodenskyBadekas l A I TI dl Suited for terrestrial to satellite transrormations Legacy coordinates barycentric GENERALIZED LEAST SQUARES Surveying Engineering Ferris State University 3D Coordinate Transformation SURE 440 Advanced Photogrammetry INTRODUCHON For condition equation of indirect obsenations each condition equation contains only 1 obsenation V BA f In adjustment of observations only no parameters included in condition AV f Generalized least squares handle combined obsenations IAV REDUNDANCY Redundancy r n n O n no of measurements no min observations for unique sol To carry unknown parameters in adjustment need to write additional condition equations for each parameter b r u For u arameters no of conditions e uations is OSuSn 
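The generalized least squares solution introduced above reduces to the normal equations N = B^T·We·B and t = B^T·We·f, with Δ = N^-1·t. The sketch below demonstrates this on a toy line-fitting problem, assuming a diagonal weight matrix for the equivalent observations; the helper names are hypothetical.

```python
def solve(M, v):
    """Solve a small linear system by Gaussian elimination with partial pivoting."""
    n = len(M)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= factor * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def combined_ls(B, f, We):
    """Parameter estimate Delta = N^-1 t with N = B^T We B, t = B^T We f (We diagonal)."""
    n, u = len(B), len(B[0])
    N = [[sum(B[k][i] * We[k] * B[k][j] for k in range(n)) for j in range(u)] for i in range(u)]
    t = [sum(B[k][i] * We[k] * f[k] for k in range(n)) for i in range(u)]
    return solve(N, t)
```

With B rows [x_i, 1], f the observed y_i, and unit weights, the estimate recovers the slope and intercept of a line through the points.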
r ≤ c ≤ n

CONDITION EQUATIONS
A v + B Δ = f
where
A is the c × n coefficient matrix (c ≤ n),
ℓ is the n × 1 observational vector,
v is the n × 1 vector of residuals,
B is the c × u coefficient matrix (c > u),
Δ is the u × 1 vector of parameters, and
d is the c × 1 vector of constants.

DERIVATION OF THE LS ESTIMATE OF Δ
Two methods of derivation:
1. The condition equations combining observations and parameters are transformed into the form of an indirect observation adjustment.
2. The minimum criterion is applied directly.
Precision is expressed by the covariance matrix Σ, the cofactor matrix Q, or the weight matrix W.

CONDITION EQUATIONS FROM COMBINATION
Form a vector of c equivalent observations, each a linear combination of the n original observations:
ℓe = A ℓ
The residuals for the corresponding observations are ve = A v, and the cofactor matrix of the equivalent observations is
Qe = A Q A^T      (c × c) = (c × n)(n × n)(n × c)

The weight matrix of the equivalent observations is
We = Qe^-1 = (A Q A^T)^-1
The normal equations for the condition equations are
N = B^T We B = B^T (A Q A^T)^-1 B
t = B^T We f = B^T (A Q A^T)^-1 f
with solution Δ = N^-1 t.

APPLYING THE MINIMUM CRITERION DIRECTLY
Minimum criterion: φ = v^T W v → minimum.
Linearized condition equation: A v + B Δ = f, where A = ∂F/∂ℓ and B = ∂F/∂x, evaluated at the approximations. When the model is linear, f = d − A ℓ; when the conditions are nonlinear, f = −F(ℓ, x0), where x is the vector of u parameters.

Using a vector k of c Lagrangian multipliers, the minimum criterion is
φ = v^T W v − 2 k^T (A v + B Δ − f) → minimum
To find the minimum, differentiate with respect to v and Δ and set the results equal to zero:
∂φ/∂v = 2 v^T W − 2 k^T A = 0
∂φ/∂Δ = −2 k^T B = 0

Transposing and rearranging:
W v − A^T k = 0,   B^T k = 0
Solving for v in the first equation:
v = W^-1 A^T k = Q A^T k
Substituting into the linearized condition equation:
A Q A^T k + B Δ = f
which becomes, in terms of the cofactor matrix of the equivalent observations,
Qe k = f − B Δ

Solving for k:
k = Qe^-1 (f − B Δ) = We (f − B Δ)
Substituting into the equation
BTk 0 or BTWef BA 0 BTWCBA BTWef NAt or N BTWEB BT AQAT I1 B t BTW BTAQAT1f 3D Coordinate Transformation SURE 440 Advanced Photogrammetry APPLYING MINIMUM CRITERION DIRECTLY Solve for A Ifcondition equations nonlinear vector estimate of parameters is 32 Xo A Vector of residuals vQATWef BA Vector of adjusted observations A EV GENERALIZED BURSAWOLF TRANSFORMATION x X 0 Rx 7R x x TX 3D Similarity W iRk 0 R y H y TY transformation m 2 Re 7R 0 z z TZ where X o 72 m g g g g g g g o OOHO B 6AXAYAZSm arr OHOO lt N o i Ev X2 0 7Z2 Y2 3D Coordinate Transformation SURE 440 Advanced Photogrammetry Fifiiiii ms 7K n u K ms u u v w a u u 7 u u u u u u u u u u u u u u u u u u u u u u u u u 4 4n u an 1n an an an uuuuuuuuuuuuuuuu 4 GENERALIZED BURSAWOLF TRANSFORMATION Vector of parameters TY T2 s R Ry RKY fmatrix X x f Y7 Z72 3D Coordinate Transformation SURE 440 Advanced Photogrammetry GENERALIZED BU RSAWOLF TRANSFORMATION If no common points in both systems requires observations Additional observation equations FM S G dx 0 Where M X Y Z coordinates classical G X Y Z coordinates in new datum dx X Y Z coordinate difference in connecting survey 8 scale factor ExpandedFull Model Angles not small nonlinear S 7 cos 05in K cosmcosxr sin msin 05in K sin mcosK cos msin 05in K Y X39 cos pcosK cosmsinKsin msin pcosK sin msinxrcosmsin pcosK X Y39 Z39 Z sinqp rsinmcosw cos mcosqp AX AY Ml 3D Coordinate Transformation 20 SURE 440 Advanced Photogrammetry ExpandedFull Model H estimates Solution Generalized Full Model Solution AV BA f Sr 572 S73 71 0 0 0 0 0 0 0 Sr 5722 5732 0 71 0 0 0 0 0 0 v Sr 372 Sr 0 0 71 0 0 0 0 0 0 0 0 0 0 0 0 Sr 572 Sr 71 in 0 0 0 0 0 0 Sr Syn syn 0 71 v2quot dS b 0 b b 1 0 0 b b bu b 0 1 0 dR 2i 2 23 24 dRW by 2 g 3 LR f 4 Z 4 dAX b b bi b 0 0 1 d mi m2 m3 m4 dAZ 3D Coordinate Transformation 21 SURE 440 Advanced Photogrammetry Generalized Full Model Z711 r11X1rzlYl r31Z1 b13 S7X1sinqpcos1Y1 sin sinK Z1 cosqp Z714 Sr21X17r11Y1 bztr12X1rzzY1 Jrr3221 bzz S 13X1rzzyt rKKZI 
b23 SXl sinwcoszpCOSK7Yl sinwcoszpsinkZl sinwsinzp by Srsz1 rth1 b31 r13X1 r23Y1 r33Z1 b32 Sr12X1 r22Y1 Jrr3221 b33 S7X1cosmoosgpcosY1 coswcosgpsinkiZ1 coswsingp by Sr23X1 r13Y1 Polynomial Approach Another approach x a0 a1x a2ya3za4x2 asy2 11622 a7xya8yz agzx amxy2 a11x2y 1112sz y be b1x b2yb3zb4x2 by2 b622 b7xy bgyz 2 2 2 bgzx bmxy bnx y buxz t7 2 2 2 zicoclxczyc3z04x csy 062 c7xycgyz cgzx cmxy2 cllxzy 012x22 3D Coordinate Transformation SURE 440 Advanced Photogrammetry MULTIPLE REGRESSION EQUATIONS MRE NGA Research Grant Surveying Engineering Ferris State University Why MRE Developed Need for better accuracy than available from Molodensky transformation Need to simplify procedure for eld use automates transformation process 3D Coordinate Transformation 23 SURE 440 Advanced Photogrammetry W68 84 Coordinates Obtained Using DWGSS4 DLGS AC0 chsm 2 kLGS A HWGS84 HLGS AH Where WGSS4 relates to the new datum and LGS is the local geodetic system Geodetic coordinates ltp A H referenced to ellipsoid Acp AA AH obtained from MRE transformation Multiple Regression Equations Ag 2 A0 A1UA2V A3U2 A4UVA5V2 A54V9 A55U9V A56U8V2 A64U9V2 A65U8V3 A72U9V3 A73U8V A99U9V9 Ai MRE coef cient arrived at in stepwise fashion U Normalized geodetic latitude U quJ pm V Normalized geodetic longitude V k l m k scale factor and degreetoradian conversion p A local geodetic latitude and longitude in degrees pm Am midlatitude and midlongitude of local geodetic area in degrees 3D Coordinate Transformation SURE 440 Advanced Photogrammetry 11252008 ragtimetitled Rectification orthogonal projection of all points ofthe ground to a reference surface Orthophoto resulting imagebased map RECTIFICATION OF DIGITAL G R f G d eo eerencmg eoco Ing IMAGERY routines to perform rectification Mathematical models wide range from simple affine transformations utilizing higher order polynomial amp projective transformations to differential rectification with relief displacement Kurt Novak correction Photogrammetric 
Engineering & Remote Sensing, 58(3):339-344.

Rectification of Digital Imagery

TRANSFORMATION OF DIGITAL IMAGES

Direct approach
- Starts from a pixel location in the original image, transforms its coordinates into the result, and places the gray value at the nearest integer pixel.
- Contrast and density are not changed by the transformation.
- Some pixels might not receive any gray value; these have to be filled after the transformation in a second pass through the resulting image.

Indirect approach
- Takes each pixel location of the result (e.g. the orthophoto), determines its position in the original image by the selected transformation, and interpolates the gray value by a given resampling method.
- Uses resampling: the gray value has to be computed by interpolation from the original gray values, because the location transformed into the original image does not correspond to an integer pixel position.
- Gray values are altered in dependency on the resampling algorithm.

Resampling methods
- Nearest neighbor: 1 pixel
- Bilinear: 2 × 2 pixels
- Cubic convolution: 4 × 4 pixels

POLYNOMIAL RECTIFICATION
Transformation and rectification are done by polynomials:
x = Σ Σ a_ij x′^i y′^j,   y = Σ Σ b_ij x′^i y′^j
where x′, y′ are the coordinates of the original image, x, y are the coordinates of the rectification, and A = (a_ij), B = (b_ij) are the coefficient matrices of the polynomials.

- Corrects for the distortion of the image relative to a dense set of control points.
- The order of the polynomial depends on the number of control points available; more control gives a more accurate result, independent of the geometry of the imaging sensor.
- Can be used for both satellite images and aerial photographs.
- The original image is shifted, rotated, scaled, and squeezed so it fits best to the given reference points. This is useful for satellite images, where geometry and distortions are sometimes difficult to model.
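The indirect approach needs a resampling step at each transformed position. The two simplest methods listed above can be sketched as follows; this is illustrative only (images are lists of rows, and the caller must keep x, y inside the image bounds).

```python
import math

def nearest_neighbor(img, x, y):
    """Nearest-neighbor resampling: take the gray value of the closest pixel (1 pixel used)."""
    return img[int(round(y))][int(round(x))]

def bilinear(img, x, y):
    """Bilinear resampling: distance-weighted mean of the 2 x 2 neighborhood around (x, y)."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0][x0] +
            dx * (1 - dy) * img[y0][x0 + 1] +
            (1 - dx) * dy * img[y0 + 1][x0] +
            dx * dy * img[y0 + 1][x0 + 1])
```

Nearest neighbor leaves the original gray values unchanged (important for later classification), while bilinear smooths them; cubic convolution extends the same idea to a 4 x 4 neighborhood.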
11252008 SURE 440 Advanced Photogrammetry 11252008 Rectification of Digital Imagery 5 SURE 440 Advanced Photogrammetry 11252008 all i aim t Rectification of Digital Imagery 6 SURE 440 Advanced Photogrammetry EROJJEQWE IRANSEQRMAW lfy given in flying direction and x represents pixel in scan line projective transformation modified arhnvha x7 x f clx39lrczy39lrl Jfxyvyyyb1xybzyyb3 Method has little practical significance for satellite scenes Due to earth curvature one can hardly de ne a plane area on ground which can be related to satellite scene in above equation Even forverv flatterrain Earth curvature cannot be neglected DIFFERENTIAL RECTIFICATION For digital differential rectification each pixel transferred using indirectapproach DEM needed to correct for relief displacement Stored in same format as digtal Image Related to map projection of resulting orthophoto Rectification of Digital Imagery 11252008 SURE 440 Advanced Photogrammetry 11252008 mggeagmmnacmm DEEERMI A i DIFFERENTlAL BEQTIFQATIQN Transform X Y Z coordinates defined by DEM pixel into the Density stored at X Y location of image by collinearity digital orthophoto equal to position of DEM point At image position x y gray value interpolated by resampling r 1 Onhiiphmo V digital eicmriun model Rectification of Digital Imagery 8 SURE 440 Adyanced Photogrammetry available 0 20 o p K camera units pX py in mm units g g in m DlEEERENIJALREQIlElQtu Following parameters must be 0N interior orientation of camera xp yp Exterior orientation of camera X0 Y0 Pixelspacing of digital image in Cellsize of DEM pixels in ground Reference coordinates of one DEM pixel in given mapprojection usually the upper left corner of the DEM file DLFEERENIlAlmREQIlElQAIIONI SPOT math model modified for lineperspective x C Villa X10 r 1Y39KnV1Z39Zmr l llle39Xplt lY39Yplt HZZn I l M C 512X39Xx1V Y39Yh zz39zp 0 513X39XinV Y39Zn az39zin wilful where x is the coordinate in scan line i orthogonal to the direction of travel and x0 y0 0 
Basically orientation parameters of each scan line are different Rectification of Digital Imagery 11252008 SURE 440 Advanced Photogrammetry PIEEERENIIAiRECIIEIQAIIQIEII Due to smooth satellite trajectory amp relatively short time to capture a scene neighboring perspective centers highly correlated Generally can be approximated by linear fu nctio ns Where XUj k are the X0139 X0 kXyi exterior orientation arameters of line39 X0 k Y Y k p 0 0 Yy are the exterior orientation parameters of 20139 Z 0 kZ yi the centerline of the 00 scene kx liqare the linear coeffICIents of t he 00 Z 00 kwyl exterior orientation 111in is the line number 39K0klyi DIFFERENTIAL RECTIFICATION 12 exterior orientation parameters for each image versus 6 for regular frame photography Additionally earth curvature necessary for satellite imagery and small scale photography X Y Z coordinates in collinearity equations related to tangential plane touching ellipsoid at center Largest influence in elevation Zdirection Rectification of Digital Imagery 11252008 SURE 440 Advanced Photogrammetry The Center for Photogrammetric Training CORRECTIONS TO PHOTO COORDINATES Surveying Engineering Department Ferns State Umversity ANALYTICAL INS TRUlVlENTATION Design characteristics 7 High accuracy 7 High reliability 7 High measuring ef ciency 7 Low rst cost 7 Low cost of maintenance 7 Operational ef ciency Does operator need specialized training Comfort of individual when operating instrument Center for Photogrammetric 39 39aining Corrections to Photo Coordinates SURE 440 Advanced Photogrammetry ANALYTICAL INSTRUMENTATION Systematic errors associated with comparators 7 Instrument system errors Scaling and periodic errors spindles coordinate counter Affinity errors different scales Rectilinearity bending errors Lack of orthogonality 7 Backlash and tracking errors 7 Dynamic errors microscope velocity does not drop to zero at points to be approached during operation 7 System automation errors Digital resolution Errors 
due to deviation of direction Center for Photogrammetric naming ARTIFICIAL TARGETS If scale of photo is l 500 and the desired target width is m 0004 about 100 um I m 2 Artificial Target white on dark background r ing IAL D S D 1 D l j D Ground Distance Corrections to Photo Coordinates SURE 440 Advanced Photogrammetry GROUND TARGETS Three types 7 Signalized 7 Detail points 7 PUG points 0 Suggested patterns from MDOT for standard mapping utl ine 739 Background Target highlighted 2539 by 7 Background Bull seye 7 Bull s eye 7 outline Standard Target GROUND TARGETS T Target Chevron Center for Photogrammetric 39 39aining Corrections to Photo Coordinates SURE 440 Advanced Photogranimetry GROUND TARGETS MDOT high level target design criteria T Target Chevron Center for Photogrammetric naming GROUND TARGETS OSm MDOT low level target design required to be square Low Level Target Center for Photogrammetric naming Corrections to Photo Coordinates SURE 440 Advanced Photogrammetry ABBE S COIVIPARATOR PRINCIPLES To exclusively base To always design the the measurement in measuring apparatus all cases on a in such a way that the longitudinal distance to be graduation with which measured will be the the distance to be rectilinear extension measured is directly of the graduation used compared and as a scale Center for Photogrammetric naming ABBE S COIVIPARATOR PRINCIPLES W Simple Linear Scale 7 Full Compliance lt gt Drum micrometer 7 Least Count Level Daes Not Comply I I Sliaina micrometer r Complies with l But NaL ma 2 Center for Photogrammetric naming Corrections to Photo Coordinates SURE 440 Advanced Photogramnietry BASIC ANALYTICAL PHOTOGRAMMETRY THEORY First Order Theory basic collineaiity concept Straight line from object space to inner space Second Order Theory corrects for most significant errors unaccounted in First Order Theory Third Order Theory other sources of error generally not accounted for Center for Photogrammetric naming COMPARATOR READINGS Rotary Stage 
Photographic plate; arbitrary origin of the comparator readings.

COMPARATOR READINGS
Measure the coordinates of the fiducial marks. The arbitrary coordinates of the principal point, assuming the photo is oriented to the stage, are taken as the mean of the readings of opposite fiducials.

Photo coordinates:
xP = rXP − rX0
yP = rYP − rY0

If the photos are placed on the stage with no orientation, compute the rotation angle
tan θ = (rY2 − rY1) / (rX2 − rX1)
and apply the 2D rotation
x′ = rX cos θ + rY sin θ
y′ = −rX sin θ + rY cos θ

INTERIOR ORIENTATION
In the photogrammetric coordinate system, the vector from the perspective center to an image point is
[xp − xo,  yp − yo,  −f]^T

FILM DEFORMATION
Use and processing make film susceptible to dimensional change. Correction models include:

Isogonal affine transformation: a conformal model (one scale, one rotation, and two translations) fitted to the measured fiducial coordinates.

8-parameter projective transformation:
x = (a1 x′ + a2 y′ + a3) / (c1 x′ + c2 y′ + 1)
y = (b1 x′ + b2 y′ + b3) / (c1 x′ + c2 y′ + 1)

Polynomial correction, 4-fiducial model:
Δx = x − x′ = a0 + a1 x + a2 y + a3 xy
Δy = y − y′ = b0 + b1 x + b2 y + b3 xy

8-fiducial model:
Δx = a0 + a1 x + a2 y + a3 xy + a4 x² + a5 y² + a6 x²y + a7 xy²
Δy = b0 + b1 x + b2 y + b3 xy + b4 x² + b5 y² + b6 x²y + b7 xy²

SEIDEL ABERRATION (RADIAL LENS) DISTORTION
δr = k0 r + k1 r³ + k2 r⁵
Conrady's intuitive development: by similar triangles,
δx / x = δy / y = δr / r
so the corrections to the x and y coordinates are
δx = (δr/r) x,   δy = (δr/r) y
Corrected coordinates:
xc = x − δx = x (1 − δr/r) = x (1 − k0 − k1 r² − k2 r⁴)
yc = y − δy = y (1 − δr/r) = y (1 − k0 − k1 r² − k2 r⁴)

Example: A camera
calibration report displays the following information:

Field angle:                        7.5°   15°   22.5°   30°   35°   40°
Symmetric radial distortion (µm):   4      6     4       1     6     3
Decentering distortion (µm):        0      0     0       1     1     2

If the photo coordinates of a point are x = 33.148 mm and y = −14.921 mm, what are the coordinates corrected for radial lens distortion? The calibrated focal length of the camera is 152.560 mm.

Correcting Photographic Coordinates for Radial Lens Distortion
Given the following values: x = 33.148, y = −14.921, f = 152.560, and the factor π/180 to convert degrees into radians.
The radial distance of the 7.5° field angle from the principal point is
dist = f tan(7.5°) = 20.085 mm
The radial distance from the principal point to the point is
r = (x² + y²)^1/2 = 36.351 mm
Thus the point lies between the 7.5° and 15° field angles. Performing a linear interpolation to find the radial distortion at that point gives
δr = 0.0044 mm
The corrected photographic coordinates become
xc = x − (δr/r) x = 33.144
yc = y − (δr/r) y = −14.919

Alternatively, using the calibrated polynomial coefficients k0, k1, and k2 from the report:
xc = (1 − k0 − k1 r² − k2 r⁴) x = 33.142
yc = (1 − k0 − k1 r² − k2 r⁴) y = −14.919

DECENTERING DISTORTION
There is always one radial line, the axis of zero tangential distortion, which remains straight.

D. Brown thin prism model:
δx = (J1 r² + J2 r⁴) sin φ0
δy = −(J1 r² + J2 r⁴) cos φ0

Conrady-Brown model:
δx = (J1 r² + J2 r⁴) [(1 + 2x²/r²) sin φ0 − (2xy/r²) cos φ0]
δy = (J1 r² + J2 r⁴) [(2xy/r²) sin φ0 − (1 + 2y²/r²) cos φ0]

Revised Conrady-Brown model:
δx = [P1 (r² + 2x²) + 2 P2 xy] (1 + P3 r² + P4 r⁴)
δy = [2 P1 xy + P2 (r² + 2y²)] (1 + P3 r² + P4 r⁴)
Corrected photo coordinates: xc = x + δx, yc = y + δy

ATMOSPHERIC REFRACTION
[Figure: the refracted ray displaces the actual image point radially by dr from the corrected image location.]
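The procedure of the radial lens distortion example above, look up the distortion bracketing r in the calibration table, interpolate linearly, then scale both coordinates by δr/r, can be sketched generically. The table values used below are hypothetical, not the ones from the calibration report.

```python
import math

def radial_correction(x, y, r_table, dr_table):
    """Correct photo coordinates for symmetric radial lens distortion.
    r_table: radial distances (mm); dr_table: distortion at those distances (mm).
    Assumes r falls within the table range; linear interpolation between entries,
    then x_c = x - (dr/r) * x and likewise for y."""
    r = math.hypot(x, y)
    dr = 0.0
    for i in range(len(r_table) - 1):
        if r_table[i] <= r <= r_table[i + 1]:
            t = (r - r_table[i]) / (r_table[i + 1] - r_table[i])
            dr = dr_table[i] + t * (dr_table[i + 1] - dr_table[i])
            break
    return x - dr * x / r, y - dr * y / r
```

With a real calibration report the table would hold f·tan(field angle) as the radial distances and the reported symmetric radial distortion values.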
Snell's law:
(n_i + dn) sin θ_i = n_i sin(θ_i + dα)
Generalizing and simplifying,
dα = −(dn/n) tan θ
and integrating along the ray path from the ground point to the exposure station gives the total angular displacement
dθ = K tan θ

For a vertical photograph, dθ can be expressed with respect to the radial distance
r = f tan θ
Differentiating:
dr = f sec² θ dθ = f (1 + tan² θ) dθ = f (1 + r²/f²) dθ
Expressing dr as a function of K:
dr = K tan θ · f (1 + r²/f²) = K (r + r³/f²)

Cartesian components of the radial displacement:
δx = (x/r) dr = K x (1 + r²/f²)
δy = (y/r) dr = K y (1 + r²/f²)

1959 ARDC model:
K = [ 2410 H / (H² − 6H + 250) − 2410 h / (h² − 6h + 250) · (h/H) ] × 10⁻⁶
with the flying height H and the ground elevation h in kilometers.

Saastamoinen model:
K = [ (1 − 0.02257 h)^5.256 − (1 − 0.02257 H)^5.256 ] / [ 2770 H (1 − 0.02257 H)^4.256 ]

[Table: atmospheric refraction corrections (µm) by flying height (3000-9000 m), ground elevation (0-1500 m above sea level), and radial distance r of the image point from the photo center (mm).]
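The ARDC refraction coefficient and the radial correction dr = K(r + r³/f²) derived above can be sketched as follows. Units are assumed to be kilometres for H and h and millimetres for the image coordinates and f; the function names are mine.

```python
def refraction_k(H, h):
    """1959 ARDC model refraction coefficient (radians).
    H = flying height, h = ground elevation, both in kilometres above sea level."""
    return (2410.0 * H / (H * H - 6.0 * H + 250.0)
            - 2410.0 * h / (h * h - 6.0 * h + 250.0) * (h / H)) * 1e-6

def refraction_displacement(x, y, f, K):
    """Cartesian components of the radial refraction displacement,
    dx = K*x*(1 + r^2/f^2), dy = K*y*(1 + r^2/f^2); subtract these from x, y."""
    r2 = x * x + y * y
    scale = K * (1.0 + r2 / (f * f))
    return x * scale, y * scale
```

At sea level (h = 0) the second term vanishes, so K depends only on the flying height; the displacement grows rapidly toward the photo edge because of the r³/f² term.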
EARTH CURVATURE
[Figure: for a vertical photograph, earth curvature displaces the actual image location radially by dE from the corrected image location; the corrected ray path runs from the exposure station to the ground point on the curved datum.]

The Center for Photogrammetric Training
PROJECTIVE EQUATIONS
Surveying Engineering Department, Ferris State University

PHOTO COORDINATE SYSTEM
Origin at the principal point. Defined as:
X = x − xo
Y = y − yo
Z = −f
Translating the ground coordinates to the exposure station:
X1 = X − XL
Y1 = Y − YL
Z1 = Z − ZL

DIRECTION COSINES
P has coordinates (XP, YP, ZP). The length of the vector OP is
OP = (XP² + YP² + ZP²)^1/2
The direction cosines of the vector with respect to the axes are
cos α = XP / OP
cos β = YP / OP
cos γ = ZP / OP

The vector from P to Q is defined as
PQ = [XQ − XP,  YQ − YP,  ZQ − ZP]^T
with length
PQ = [(XQ − XP)² + (YQ − YP)² + (ZQ − ZP)²]^1/2
The direction cosines become
cos α = (XQ − XP) / PQ
cos β = (YQ − YP) / PQ
cos γ = (ZQ − ZP) / PQ

Looking at the unit vector from O to P, it is xi + yj + zk, where P has coordinates (x, y, z)^T. Given a second set of coordinate axes I, J, K, similar relationships can be formed.

With a rotation between the two sets of axes, writing the unit vectors in terms of direction cosines, the vector from O to P expressed in the second system is
| X |   | cos xX   cos yX   cos zX | | x |
| Y | = | cos xY   cos yY   cos zY | | y |
| Z |   | cos xZ   cos yZ   cos zZ | | z |
that is, X = R x, where each element of R is the cosine of the angle between one axis of each system.
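The direction-cosine relations above amount to normalizing the vector between two points; a minimal sketch (hypothetical function name):

```python
import math

def direction_cosines(p, q):
    """Direction cosines (cos alpha, cos beta, cos gamma) of the vector from P to Q."""
    d = [q[i] - p[i] for i in range(3)]
    length = math.sqrt(sum(c * c for c in d))
    return [c / length for c in d]
```

By construction the three cosines satisfy cos²α + cos²β + cos²γ = 1.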
Fhmummmetn Tmmmg DERIVATION OF GIMBAL ANGLE S 0 Coordinate transformation shown as XP cosoc sinoc UP YP 7 isinoc cosoc VP 0 t erIorm a planar rotatlon oI axes 1n sequence 03 primary p secondary K tertiary The c enter fur Fhmummmetn Tmmmg SURE 440 Advanced Photo grammetly Proj ective Equaitons ROTATION ANGLES IN PHOTOGRAMIVIETRY we Rotation lt07 Rotation K7 Rotation about X1 about Y2 about Z3 ZZZ1 ZEZZ Y Ys Y X M Y1 X2 2 K X3 X3 The Center for Photogrammetric 39n39aining 03 ROTATION 0 In general form X2X1Y10Z10 Y2 X10Y1cosmZ1sinoo Z2 X10 Y1 7sinmZ1 cosoo 0 In matrix form X2 1 0 0 X1 Y2 cosoa sinoa Y1 Z2 0 7 sin OJ cosoa Z1 0 More concisely C2 MmC The Center for Photogrammetric 39n39aining SURE 440 Advanced Photo grammetry Proj ective Equaitons p ROTATION In general form X3 X2 coscpY2 0Z2 7Sinp Y3 X20Y2Z20 Z3 X2 sincpY2 0Z2 coscp In matrix form X3 coscp 0 isincp X2 Y3 0 1 0 Y2 Z3 sincp 0 coscp ZZ More concisely C3 ch2 The Center for Photogrammetric mining K ROTATION 0 In general form X39X3COSKY3sin1ltZ30 Y39X3SanY3COSKZ30 Z X3OY30Z3 0 In matrix form X39 cos K sin K 0 X3 Y 7 sin K cos K 0 Y3 Z 0 0 1 Z 3 More concisely C MKC3 The Center for Photogrammetric mining SURE 440 Advanced Photo grammetry Proj ective Equaitons TRANSFORMATION FROM SURVEY PARALLEL SYSTEM X X1 X1 Y MG Y1 MKMWMQ Y1 239 z1 z1 MG becomes after multiplication COSpCOSK cosmsinKsinmsinltp005K sinmsmkicosmsintpCOSK MG COSpSan cosmCOSKisinmsintpsinK smm005KcosmsintpsinK sinq isinmcosq cosmcosq The Center for Photogrammetric 39n39aining COlVlPUTlNG ROTATION ANGLES 0 If rotation tan 0 111 matrix known In rotation angles 33 can be computed as 51111 11131 shown on the right tan K 11121 mm The Center for Photogrammetric 39n39aining SURE 440 Advanced Photo grammetry Proj ective Equaitons COLLINEARITY CONCEPT 0 Line from object space to perspective center is same as line from perspective center to image point 0 Relationship shown as kMK The Center for Photogrammetric mining COLLINEARITY CONCEPT 0 Collinearity 
condition:
| x − xo |       | m11 m12 m13 | | X − XL |
| y − yo | = k · | m21 m22 m23 | | Y − YL |
|  −f    |       | m31 m32 m33 | | Z − ZL |

The ground coordinates are translated to the exposure station, rotated to a photo-parallel system, and then scaled to the photograph, giving the predicted photo coordinates.

COLLINEARITY CONCEPT
Expressing the collinearity concept algebraically:
x − xo = k [m11 (X − XL) + m12 (Y − YL) + m13 (Z − ZL)]
y − yo = k [m21 (X − XL) + m22 (Y − YL) + m23 (Z − ZL)]
−f = k [m31 (X − XL) + m32 (Y − YL) + m33 (Z − ZL)]

Dividing the first two equations by the last:
x = xo − f [m11 (X − XL) + m12 (Y − YL) + m13 (Z − ZL)] / [m31 (X − XL) + m32 (Y − YL) + m33 (Z − ZL)]
y = yo − f [m21 (X − XL) + m22 (Y − YL) + m23 (Z − ZL)] / [m31 (X − XL) + m32 (Y − YL) + m33 (Z − ZL)]

The rotation matrix in the collinearity equation must satisfy the orthogonality conditions, e.g.:
m11 m12 + m21 m22 + m31 m32 = 0
m11² + m21² + m31² = m12² + m22² + m32² = 1

Since (X − XL), (Y − YL), and (Z − ZL) are proportional to the direction cosines of the ray, the collinearity equation can be shown in terms of the direction cosines:
x − xo = −f (m11 cos α + m12 cos β + m13 cos γ) / (m31 cos α + m32 cos β + m33 cos γ)
y − yo = −f (m21 cos α + m22 cos β + m23 cos γ) / (m31 cos α + m32 cos β + m33 cos γ)

The inverse relationship follows by premultiplying with M^T = M^-1.

LINEARIZATION OF THE COLLINEARITY EQUATION
For simplicity, let the projective equations be shown in the following form:
F1 = x − xo + f U/W = 0
F2 = y − yo + f V/W = 0

The condition equation is shown as
A v + B Δ + F = 0
The design matrix B contains the partial derivatives of F1 and F2 with respect to each parameter, evaluated at the approximations.

Partial derivatives with respect to the interior orientation (xo, yo, and f) only:
∂F1/∂xo = −1,   ∂F1/∂yo = 0,    ∂F1/∂f = U/W
∂F2/∂xo = 0,    ∂F2/∂yo = −1,   ∂F2/∂f = V/W

Partials taken with respect to the
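The reduced collinearity equations x = xo − f·U/W and y = yo − f·V/W can be evaluated directly; a minimal sketch (hypothetical function name), checked mentally against a vertical photo over flat terrain:

```python
def collinearity_xy(ground, exposure, m, f, x0=0.0, y0=0.0):
    """Predicted photo coordinates (mm) of a ground point from the collinearity equations.
    ground, exposure: (X, Y, Z) tuples; m: 3x3 rotation matrix; f: focal length."""
    dX = ground[0] - exposure[0]
    dY = ground[1] - exposure[1]
    dZ = ground[2] - exposure[2]
    U = m[0][0] * dX + m[0][1] * dY + m[0][2] * dZ
    V = m[1][0] * dX + m[1][1] * dY + m[1][2] * dZ
    W = m[2][0] * dX + m[2][1] * dY + m[2][2] * dZ
    return x0 - f * U / W, y0 - f * V / W
```

For a truly vertical photograph (m = identity) this collapses to the familiar scale relation x = f·X/H, with H the flying height above the point.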
exposure station E M if 0P 0P f0V an 0P W2 W 0P w 0P 0P waP W Eff 0P 0P f av vow 0P7 W2 W Where P are the parameters The Center for Photogrammetric mining SURE 440 Advanced Photo grammetry Proj ective Equaitons aU aXL av aXL 0W 0X L LINEARIZATION OF COLLINEARITY EQUATION For the exposure station Coordinates the partial derivatives of functions U V W 0U 7mm aYL 7 7m av 21 aYL 0W 7mg aYL 7mm 7mm 32 0U 7mm aZL 0V 7 7m aZL 23 0W 7 7m aZL 33 The Center for Photogrammetric mining 6F1 BXL 5E aYL 6E azL LINEARIZATION OF COLLINEARITY EQUATION Partials of the functions F1 x 1392 is f U W mii Wmn f U W m12 Wmn U 7 m1 Wm 2m 612 f V m21 m31 BXL W W The Center for Photogrammetric mining SURE 440 Advanced Photo grammetiy Proj ective Equaitons LINEARIZATION OF COLLINEARITY EQUATION 0 Partial of orientation matrix wrt the angles 0 0 0 BMG B39Mm MKMW MG 0 0 1 Bo 803 0 71 0 0 smog icosoa aMG 8MW MK MmMG 75mm 0 0 av aw c0503 0 0 3M 3 010 G M MM7100MG ax ax W 000 The Center for Photogrammetric mining LINEARIZATION OF COLLINEARITY EQUATION Partials wrt the 94 XFXL av aMG orientation angles a aw YrYL aw z 72L E aw 60 X XL BU aM G Yth a X 7XL 6m 60 av aMG E Z 7ZL a BK Y YL 6m 31 Z 72L BK The Center for Photogrammetric mining SURE 440 Advanced Photo grammetry LINEARIZATION OF COLLINEARITY EQUATION 0 Evaluate the i PartialSOfF1amp 00 W 00 W 00 F2 wrt orientation angles 2 aUEaW 0p W 0p W 0p Ei 621 ex w 6K w 6K The Center for Photogrammetric mining LINEARIZATION OF COLLINEARITY EQUATION 0 Evaluate the 0ii partials ofF1 amp am W 60 W 00 F2 wrt orientation angles 6amp2 0p W 0p W 0p 61 0V10 0K w 0K w 0K The Center for Photogrammetric mining Proj ective Equaitons INTRODUCTION TO DIGITAL ORTHOPHOTOGRAPHY 8 Center for Photogrammetric Training Ferris State University Intr0 du cti0n Digital photogrammetry and particularly digital orthophotography is making a significant impact in the mapping sciences To paraphrase a famous photogrammetrist Dr F Ackermann softcopy photogrammetry is taking 
photogrammetry out of photogrammetry. What this means is that this tool is available to the masses. It is no longer necessary to have a highly skilled photogrammetric technician create the orthophoto. Using digital imagery and a companion digital elevation model (DEM), non-photogrammetrists with little knowledge of the science of photogrammetry can create adequate map products for many applications.

The one caveat to Dr. Ackermann's statement is that with this power comes the possibility of misuse. Put another way, without the proper input data the derived output map may not be accurate enough for its intended use: garbage in, garbage out. Understanding what is "garbage" still requires someone knowledgeable of the limitations of particular data sets.

What is a Digital Orthophoto?

An orthophotograph is an image that has been processed so that the features on the image represent an orthographic projection. This is done through a differential rectification process whereby the effects of tilt and relief displacement are removed from the image. The amount of correction, though, is limited. For example, in urban settings or other areas where there are features with very sharp vertical relief, it is impossible to create a truly orthographic projection for all features. Buildings viewed off-nadir will obscure features. Moreover, the sides of those buildings facing the center of the photograph will be displayed in the image (Figure 1). This cannot be eliminated.

There are some techniques one can use to minimize the effect of building lean. For example, Merrick & Company was awarded a digital orthophoto project for Cook County, Illinois, which included Chicago. To reduce the effects of tall buildings, Merrick modified its normal acquisition of the aerial photography. One method employed was the use of 80% endlap and sidelap in the photography, which means that only about the center 1/12 of each photograph was used for the orthophoto. Moreover, they also employed a 12-inch focal-length
camera over the central business district of Chicago. To maintain the same scale of photography used with the normal 6-inch focal-length camera, the longer focal-length imagery was flown at a higher altitude. Finally, they also acquired spot, or pinpoint, photography over the taller buildings (Jacoby, 1999). All of these modifications do add to the cost of the project.

[Figure 1. Effects of buildings on an orthophotograph.]

Despite these limitations, an orthophoto is a very useful mapping tool. It has the interpretative qualities inherent in an image, with the property that features on the image can be accurately measured, just as one might do with a conventional line map. Because of this, orthophotographs form an excellent base or control layer for a GIS. They are also relatively inexpensive, especially when one considers the costs incurred in conventional line mapping.

A digital orthophoto is nothing more than an ortho-image stored in digital form. The image consists of an array of pixels that record the ground reflectance values for each pixel. The resolution of the image is dictated in part by the size of the pixels.

There are basically four data sources needed to create a digital orthophoto (FGDC, 1999). These are: (1) an unrectified raster image file, acquired either by scanning an image or collected directly by a digital sensor; (2) a digital elevation model covering the area imaged by the photography; (3) ground control; and (4) sensor calibration data.

The sensor calibration data account for the distortion inherent in any measurement system. These data are the interior orientation parameters that were discussed in the last lesson. The ground control provides the absolute orientation of the image and allows us to georeference each of the pixels within the image. The DEM/DTM is used to compensate for the effects of relief displacement. It can be obtained from a number of different sources, but one must be careful that the density of the ground sampling is consistent with the area being mapped. Finally, one needs the unrectified image file, the
picture itself. An example digital orthophoto is shown in Figure 2.

[Figure 2. Example digital orthophoto of Las Vegas, Nevada.]

An important consideration when obtaining any kind of imagery is when the imagery should be acquired; in other words, one must consider the season of the year. If terrain features are important, then leaf-off imagery should be collected. The best time to acquire such imagery is the spring, because snow has melted and the tall grasses that might otherwise be present are matted. If the purpose is to analyze vegetation, then leaf-on imagery will be desired.

Image data are commonly stored in files called tiles. When the tiles are brought together, they should form a seamless map of the project area. For proper data management, an image catalog should be created and provided to the user (this may be transparent to the user). The catalog locates all of the tiles of orthophotography. Some systems use what is called an image pyramid. This consists of a series of images sampled at different ground resolutions, such as 1, 2, 5, and 10. The idea is to provide rapid image display by automatically loading only those images that are needed for the current view's extent, at the appropriate pixel resolution. Applying a filter over the data during a resampling routine forms the pyramid. While a number of different types of filters are available, the Gaussian filter seems to be the best at preserving the content of the original image while reducing computation time.

Imagery used in the creation of a digital orthophoto can be of different types: black-and-white (B&W), color, color infrared (CIR), and other imagery captured in different regions of the electromagnetic spectrum (URISA, 2001). Black-and-white imagery consists of shades of gray extending from pure white to pure black. It is very versatile and yields excellent resolution if properly exposed, and B&W film can accommodate large-scale enlargements.
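The image-pyramid scheme described above can be sketched in a few lines. This is a minimal illustration with helper names of my own choosing, using simple 2x2 block averaging as a stand-in for the Gaussian filtering the text recommends:

```python
import numpy as np

def downsample_2x(img):
    """Average 2x2 blocks: a crude stand-in for Gaussian filtering
    before decimation when building a pyramid level."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def build_pyramid(img, levels=3):
    """Return [full, 1/2, 1/4, ...] resolution copies of the image."""
    pyramid = [img]
    for _ in range(levels - 1):
        pyramid.append(downsample_2x(pyramid[-1]))
    return pyramid

pyr = build_pyramid(np.arange(16.0).reshape(4, 4), levels=2)
# pyr[0].shape == (4, 4); pyr[1].shape == (2, 2)
```

A display system would then pick the coarsest pyramid level whose pixel size still satisfies the current zoom, loading far less data for wide views.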
Additionally, B&W imagery requires only about one-third the storage space of color. Its disadvantage is that it may not be as helpful for analyses such as vegetation monitoring, or when color or heat is important. If used for interpretation purposes, more training of the analyst is usually required.

Color film is often the medium users prefer to work with because it yields a picture that closely resembles how humans view the scene, and it does not require as much training for interpretation. Additionally, detail that may be lost in shadows on black-and-white film, particularly light shadows, may still be visible in color. It is more expensive than B&W film and requires more storage space.

Color infrared, or false color, film is similar to color film except that it is sensitive to the green, red, and near-infrared regions of the electromagnetic spectrum. For example, vegetation will appear red in CIR film, although one can change the colors assigned to the different bands when the image is displayed in digital form. This type of film is particularly helpful in delineating differences in vegetation, since reflectance between vegetation features is markedly different in this part of the electromagnetic spectrum.

While color and black-and-white films are the most common means of creating a digital orthophoto, other parts of the electromagnetic spectrum can also be used. Radar, as an example, is an active sensor that is very useful in obtaining a digital elevation model of the earth's surface. Moreover, it is helpful in obtaining an image of the surface under many different weather conditions.

Some examples of aerial negative scale, ground coverage, flying height, map scale, contour interval, and size of uncompressed TIF file are given in Table 1.¹

Photogrammetric Mapping Scales and Digital Orthophoto Resolutions

| Negative Scale | 1" = 250' | 1" = 300' | 1" = 417' | 1" = 833' | 1" = 1,667' | 1" = 2,640' |
| Representative Fraction | 1:3,000 | 1:3,600 | 1:5,000 | 1:10,000 | 1:20,000 | 1:31,680 |
| Flying Height Above Mean Terrain | 1,500' | 1,800' | 2,500' | 5,000' | 10,000' | 15,840' |
| Ground Coverage | 1,575' x 1,575' | 1,890' x 1,890' | 2,621' x 2,621' | 5,280' x 5,280' | 2 mi x 2 mi | 3 mi x 3 mi |
| Coverage in PLS | 1/4 1/4 Section | 40 acres | 1/4 Section | 1 Section | 2 x 2 Sections | 3 x 3 Sections |
| Map Scale | 1" = 40' | 1" = 50' | 1" = 100' | 1" = 200' | 1" = 400' | 1" = 500' |
| Contour Interval | 1' | 1' | 1' | 2' | 4' | 10' |
| 15 µm Pixel Size (Ground) | 1.8" | 2.1" | 3.0" | 5.9" | 11.8" | 18.7" |
| 30 µm Pixel Size (Ground) | 3.5" | 4.3" | 5.9" | 11.8" | 23.6" | 37.4" |
| 15 µm TIF Image Size | 980 MB | 1,440 MB | 1,440 MB | 1,440 MB | 1,440 MB | 1,200 MB |
| 30 µm TIF Image Size | 240 MB | 360 MB | 360 MB | 360 MB | 360 MB | 300 MB |

Table 1. Example values for coverage, pixel size, etc., for given photographic scales.

¹ From the Northwestern Illinois Planning Commission.

Digital Orthophoto Problems

The creation of a digital orthophoto brings with it competing issues. These include accuracy, quality, cost, and the hardware/software display and manipulation capabilities. Image quality is dependent upon a number of production components, such as (Manzer, 1996):

- camera quality
- photo-to-orthophoto map scale magnification
- orthophoto diapositive density range, or bits in the scanner
- scan pixel radiometric resolution
- sample scan rate, in micrometers² or dots per inch (dpi), and the photo scale
- rectification procedures
- final pixel size in ground units (pixel ground resolution)
- electronic autododging, or radiometric image smoothing after the rectification process
- selection of control points
- DEM data density

² The unit micrometer is often called the micron.

Assuming that the correct inputs are used, the accuracy that can be achieved in orthophotography is comparable to that found in line maps. Accuracy of a digital orthophoto is a function of:

- magnification
- geometric accuracy of the scanner
- quality of the DEM
- control
- focal length of the taking camera
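The flying heights in Table 1 follow directly from the photo scale and the 6-inch camera focal length. A quick check (the helper name is mine):

```python
def flying_height_ft(scale_denominator, focal_length_in=6.0):
    """Flying height above mean terrain for a given photo scale:
    H = f * S, with the focal length converted from inches to feet."""
    return (focal_length_in / 12.0) * scale_denominator

# Checks against Table 1 for a 6-inch camera:
assert flying_height_ft(3000) == 1500.0    # 1" = 250'   -> 1,500 ft
assert flying_height_ft(31680) == 15840.0  # 1" = 2,640' -> 15,840 ft
```

The same relation runs in reverse: dividing a required flying height by the focal length (in feet) gives the photo scale denominator.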
One of the most abused aspects of digital data on the computer is the use of scale, or magnification. Computers can zoom in or out very simply, and this may give the user a false sense of the accuracy of the map product. As an example, field measurements may be taken of features with one-meter positioning capability, such as with resource-grade global positioning system (GPS) receivers, but in the computer these positions could be displayed at the millimeter level. Clearly, displaying data at that range is inappropriate for data collected at the coarse meter range. The same applies to orthophoto imagery. Remember that the farther the camera is from the ground, the more detail is lost in the features imaged on the photo. For example, a manhole might not be imaged on the photo because it is too small at the scale at which the photography was taken.

Magnification also affects the image quality. The recommended magnification range is 8 or 9 times enlargement. Magnification of ten times or more will degrade the image quality because the distance between the silver crystals on the film becomes noticeable, while below five times enlargement the image quality is not noticeably improved. Therefore, a range of 5 to 9 times enlargement is the optimum, depending upon the area being mapped. This means that if the desired final orthophoto scale is 1" = 100', then the photo scale should not be less than 1" = 900'. Note that this would be for optimum terrain; larger photo scales, such as 1" = 700', may be required to meet the needs of the client.

Radiometric resolution relates to the ability to discern small tonal changes within an image. The Content Standard recommends that 8-bit binary data be used for black-and-white imagery and 24-bit (3-byte) data for color imagery. This gives the user 256 gray levels over the image (0 to 255), where the value zero represents black and 255 is white. Radiometric corrections such as contrast stretching, analog dodging, noise filtering, destriping, and edge matching are frequently applied to the data before they are given to the user. The standard recommends that these processing techniques be used sparingly to
minimize the amount of data loss.

Image quality is also affected by the resolution of the scanner. The scanner and the scanning process have inherent errors associated with them. It is important that high-precision scanners be used in converting the image into digital form. Additionally, the scanner must be calibrated to ensure that the performance of the equipment is within the minimum specifications for the mapping. Many of the softcopy instruments used today can accept the scanner calibration parameters into the program, to correct for the distortion scanning introduces.

It should be evident that the coarser the resolution (the larger the pixel size), the more step-like lines and features become. The important issue is the relationship among the size of the scan pixel, the scale of the photography, and the desired output orthophoto scale. One suggestion is to scan the photo at about 240 dpi for each unit of the magnification range. This means that if the desired photo-to-final-orthophoto magnification is 5 times, then the photo should be scanned at 5 x 240 = 1,200 dpi as a minimum. This represents a pixel size of approximately 20 µm (micrometers) at the photo scale. Taking the magnification recommendation to its limit of 9 times yields a sampling rate of 2,160 dpi, with a pixel size of roughly 12 micrometers.

While a smaller pixel size may yield better resolution, it does not necessarily mean higher accuracy, since accuracy is affected by a number of factors, such as the survey control, flying height, and focal length of the camera, along with pixel size. These results are consistent with other studies. For example, it has been pointed out that an approximately 15 µm resolution is needed to maintain the photographic resolution of aerial film (Michael and Krolikowski, n.d.); higher resolution does little to enhance the interpretability of the image. In fact, 20 to 30 µm scan rates are commonly utilized in industry.
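The scan-rate rule of thumb above reduces to two one-line computations. A small sketch (helper names are mine):

```python
def scan_dpi(magnification, dpi_per_mag=240):
    """Suggested minimum scan rate: about 240 dpi per unit of magnification."""
    return magnification * dpi_per_mag

def pixel_size_um(dpi):
    """Pixel size at photo scale, in micrometers (1 inch = 25,400 um)."""
    return 25400.0 / dpi

assert scan_dpi(5) == 1200                      # 5x magnification
assert round(pixel_size_um(1200), 1) == 21.2    # roughly 20 um, as in the text
assert round(pixel_size_um(scan_dpi(9))) == 12  # 9x -> 2,160 dpi -> ~12 um
```

Note that 25,400/1,200 is about 21 µm, so the "approximately 20 µm" figure in the text is a rounded value.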
These levels are both economical and meet the needs of most mapping applications.

Another issue affecting image quality is the pixel size expressed in ground units. Resampling of the pixel values is frequently performed to create a smoother image in terms of tone. When this is done, the preference is to resample to a coarser resolution, such as sampling at half a foot and resampling to the one-foot level. As a rule of thumb, a factor of 1.2 times or larger should be applied to the scanned pixel; for example, applying this factor to a one-foot scan, the final orthophoto would have at least a 1.2-foot pixel size. Subsampling should only be applied within the limits defined by the Nyquist theorem (FGDC, 1999), which limits the resampling to a maximum of 2x. This limit is arrived at to avoid undesirable aliasing.

The accuracy of the orthophoto is dependent upon two primary factors: control and DEM accuracy. Survey control is required to fix the map to the ground. It is used to remove or reduce many of the random errors associated with the imagery, such as those arising from terrain relief, platform position and orientation, and faulty elevation data. Photogrammetrists often use aerotriangulation to provide control between the primary ground control on a project. In some instances, control for the orthophoto is derived from existing maps of the area; significant errors can be introduced into the process this way, thereby degrading the orthophoto. For large-scale mapping, ground targets that will be imaged on the photo should be used. The control needs to meet the specifications for the mapping.

DEM accuracy is critical to the final quality of the orthophoto. The appropriate DEM must be selected to match the scale of the orthophoto, the terrain conditions, the focal length of the camera used to acquire the photography, and the magnification. The sampling interval for collecting the elevation data depends upon the terrain conditions. Where the ground is relatively flat, a coarser DEM can be used; on the other hand, if there is a lot of elevation change or surface roughness in the area, a denser sampling rate is required.
It is generally acknowledged that the density and accuracy of the DEM for orthophotography do not need to be as high as for a DEM used for contouring or 3-D modeling. For large-scale mapping it is important to also include break lines in the data collection. A break line is where the terrain changes direction in slope, such as the bottom, or toe, of a hill. These break lines control the modeling of the characteristics of the surface and fix the placement of contour lines on the site. While density is important, the quality of the break lines is more significant; in fact, experience indicates that the sample rate can be very coarse provided that sufficient break lines exist to correctly capture the characteristics of the terrain surface (Michael and Krolikowski, n.d.).

It is not uncommon to process the elevation data prior to importing it into the orthophoto rectification process. Generally a rectangular grid is created, which makes the estimation of elevation easier in the orthophoto production process. But this interpolation does degrade and smooth the DEM, which affects both the vertical and horizontal accuracy of the orthophoto (Michael and Krolikowski, n.d.).

The effect of a DEM on the accuracy of the orthophoto can be shown by the following example. Assume that the map is to meet National Map Accuracy Standards (to be discussed later). Then the shift in the placement of well-defined points should not exceed 1/50" at the map scale. Therefore, for a map at a scale of 1" = 100' (1:1,200), the maximum shift that should occur is 2' at the ground scale. This is found from the formula (1/50") x (100'/1") = 2'. Then, if the aerial camera has a 6-inch focal-length lens, basic trigonometry shows that a 2.25' error in the DEM would bring about a 2' error at the extreme edge of a 9" x 9" format aerial photo.
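The displacement relation used in this example can be checked numerically. A minimal sketch (function name is mine), with the viewing angle measured outward from the center of the photo:

```python
import math

def ortho_error(e_dem, view_angle_deg):
    """Horizontal error in the orthophoto caused by a DEM error:
    e_ortho = e_DEM * tan(A), A measured outward from the photo center."""
    return e_dem * math.tan(math.radians(view_angle_deg))

# At a 45-degree viewing angle the DEM error passes straight through:
assert abs(ortho_error(3.0, 45.0) - 3.0) < 1e-9
# Toward the photo center (smaller A), the same DEM error displaces less:
assert ortho_error(3.0, 20.0) < ortho_error(3.0, 40.0)
```

This is why the tolerance grows toward the nadir point: tan(A) shrinks, so a given DEM error produces a smaller horizontal shift.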
Since only the neat model area is used in mapping, and using dimensions of 6.3" x 7.2" for the neat model, the DEM can be in error by 3.0' at the extremity of the neat model and still fall within the 1/50" criterion. It can be seen that the closer to the center, the greater the tolerance. The formula for computing the errors in digital orthophotos can be expressed as

e_ortho = e_DEM x tan A

where e_ortho is the error in the digital orthophoto, e_DEM is the error in the digital elevation model, and A is the viewing angle, in degrees, outward from the center of the photo.

Another factor to consider in DEM accuracy is the magnification ratio and the density of the DEM. Generally, the DEM needs to be denser with smaller magnification ratios. As a rule of thumb, if the magnification is less than 3 times, then the spacing for the DEM needs to be 4-8 mm at the final map scale. If the magnification is 3-8 times, then the spacing at the final map scale should be 8-16 mm. Over 8 times magnification allows a grid spacing of 12-24 mm at the final map scale. The operator needs to be aware that the density must be greater where the terrain changes rapidly on the site, and can be relaxed, or spread farther, where the terrain is flat.

DEM characteristics change with the terrain; therefore, it is impossible to outline minimum criteria that would be applicable to all surfaces. It is even possible to find a lot of variability within a single map sheet or tile. Because terrain variability can exist within a project, the map itself may meet accuracy specifications while local anomalies exist where the map can fail stipulated testing. This is particularly true in areas where elevation changes are abrupt or where bridges, elevated highways, and the like are present (Michael and Krolikowski, n.d.).

Problems with digital orthophotos that need to be looked at include:

- Obstructions. The image may contain obstructions that yield an inaccurate orthophoto. One of the biggest problems is cloud cover; the user must ensure that, if the image contains cloud cover, the percentage of obstruction is acceptable for the intended purpose.
- Spikes. A blunder may exist within the DEM data, resulting in a spike, or large error. Excessive relief at the edge of the photography can also cause this problem. The result is that ground image data are hidden from view.
- Smearing. Smearing can occur when an interpolation program is used. It is up to the user to determine whether the amount of these smear artifacts affects the image data for their intended use.
- Distortion. For large-scale orthophotos, local distortions can exist, as was discussed earlier. Figure 3a shows a distortion along a bridge deck due to reliance on a regular grid of elevations. These distortions can be reduced, but not eliminated.

[Figure 3. Local distortion over a bridge deck, with the distortion eliminated by image editing (Michael and Krolikowski, n.d.).]

- Double Image. A feature is mapped on both photos when this should not occur. This results from lower accuracy in the DEM, where ground elevations are given that are larger than reality.
- Missing Image. The causes of this error are the same as for the double image, except that the DEM gives elevations lower than the real ground values. This error is hard to detect, but is clearly evident when looking at linear features where sections may be missing.
- Inaccurate Planimetry. If the planimetric positions of the pixels are in error, look at the control by comparing the visible control on the orthophoto with the photogrammetric control used to control the project.
- Image Replication. Some clients have experienced problems with tone in their digital imagery, in that it appears to vary across different computing environments. While a contractor may adjust the tone over the entire map, when it is ported to the client's computer environment the tone quality may be noticeably different.
- File Size. More data results in larger data files. It is necessary to ensure that the computing environment can handle the image data. The size of the data file is a function of the resolution and the size of the project area.
For example, using a 6-inch ground resolution for the pixel size and 2,500 x 2,500-pixel tiles, it will take about 100 MB per square mile to store the image data. This means that one CD can hold approximately 6 square miles, or about the size of a township.

Digital Orthophoto Program

A partnership among the U.S. Department of Agriculture Natural Resources Conservation Service, the Farm Services Agency, the U.S. Forest Service, and the U.S. Geological Survey (USGS) created the National Digital Orthophoto Program. The purpose of this program is to provide national coverage of digital orthophoto quadrangle (DOQ) data. The photography used for the orthophoto creation of the 1-meter digital orthophoto quarter-quadrangles (3.75-minute by 3.75-minute) consists of 1:40,000-scale National Aerial Photography Program (NAPP) photographs or images of a similar nature. The quadrangles are generally created by mosaicking the digital orthophoto quarter-quadrangles. The photography is scanned using a 25-micrometer scan width. The DOQ can be a black-and-white, color infrared, or natural color image.

Like other digital orthophoto products, the DOQ requires six pieces of input that are used to register the image to the camera/scanner sensor, determine the location and orientation of the sensor, and remove the effects of relief displacement. These items are:

1. The unrectified raster image, either acquired directly from a digital sensor or converted to raster format by scanning the film diapositive.
2. A DEM of the area.
3. Image and ground coordinates of photo-identifiable control points located on the ground.
4. Calibration information on the scanner.
5. A user parameter file.

Resolution is the minimum distance between two adjacent features, or the minimum size of a feature, that can be detected by a remote sensing system. The resolution is generally larger than the computed ground sample distance of the DOQ. The ground sample distance (GSD) is the distance on the ground represented by each pixel in
the x and y components. The ground sample distance of the digital orthophoto is a result of the scanning aperture of the microdensitometer used to capture the digital image and of the resampling algorithm. For example, if a scanning aperture of 25 micrometers is used on a 1:40,000 photo-scale image, the ground pixel sample distance is approximately 1 meter. A 7.5-micrometer scan yields a pixel size of 0.3 meters, while a 15-micrometer scan equates to 0.6 meters. For the processed DOQ, the GSD is 1 meter for quarter-quadrangle digital orthophotos and 2 meters for quadrangle digital orthophotos. If digital orthophotos are produced at a finer sampling distance than 1 or 2 meters, they may be processed by resampling to a 1- or 2-meter horizontal GSD. Digital orthophotos produced at coarser sample distances are not resampled to a finer horizontal ground sample distance.³

The USGS identifies the following characteristics of their gray-scale digital orthophotos:⁴

- The data consist of an ASCII header followed by 8-bit binary image data.
- Radiometric image brightness data are stored as 256 gray levels, represented as integers in the range 0-255.
- The ground sample distance of the 3.75-minute quarter-quadrangle is 1 meter.
- The geographic extent of the digital orthophoto is equivalent to an orthophoto quarter-quadrangle: 3.75' of latitude and longitude.
- A minimum of 50 meters to a maximum of 300 meters of overedge is included, sufficient to offer coverage between adjacent orthophotos.
- The image is cast on the Universal Transverse Mercator (UTM) projection on the North American Datum of 1983 (NAD83), with coordinates in meters.

OBJECTIVE: The objective of this standard is to define the orthoimage element of the digital geospatial data framework envisioned by the FGDC. It is the intent of this standard to set a baseline that will ensure the widest utility of digital orthoimagery for the user and producer communities through enhanced data sharing and the reduction of redundant data production.
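The aperture-to-GSD relation described earlier on this page reduces to a one-line computation (the helper name is mine):

```python
def gsd_m(scan_aperture_um, photo_scale_denominator):
    """Ground sample distance in meters: scan aperture times photo scale."""
    return scan_aperture_um * 1e-6 * photo_scale_denominator

# Checks against the 1:40,000 NAPP examples in the text:
assert abs(gsd_m(25, 40000) - 1.0) < 1e-9    # 25 um  -> ~1.0 m
assert abs(gsd_m(7.5, 40000) - 0.3) < 1e-9   # 7.5 um -> ~0.3 m
assert abs(gsd_m(15, 40000) - 0.6) < 1e-9    # 15 um  -> ~0.6 m
```

The same relation inverts to choose a scan aperture for a target GSD: aperture (in µm) equals the desired GSD in meters divided by the scale denominator, times 10⁶.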
The image data are stored as a series of lines, each line containing a series of pixels ordered from west to east; the order of the lines is from north to south. When displayed on a computer, the image projection grid north is at the top. The four primary datum (NAD83) corners are imprinted into the image as four solid white crosses with an image value of 255, and the four secondary datum corners as four dashed white crosses with similar intensity values.⁴

³ http://edc… /doq
⁴ http://edc… /doq

Content Standards for Digital Orthoimagery⁵

The Federal Geographic Data Committee has developed a set of standards for digital orthoimagery (FGDC-STD-008-1999) to set a benchmark that will facilitate the widest use of these products by the user community. If properly done, it would encourage data sharing and reduce redundancy in the acquisition of digital orthoimagery. Refer to the web page listed in the footnote for the technical information on this standard.

The content standard is a part of the National Spatial Data Infrastructure (NSDI)⁶ and as such applies to Federal Government-produced orthoimagery, or imagery the government may disseminate. To facilitate the sharing of data, the structure of the orthoimage is very well defined. The image is a two-dimensional array of pixels consisting of rows (or lines) and columns (samples). The pixels are ordered from the upper-left-hand pixel in the image, which is designated within the array as (0,0). Rows are ordered from top to bottom and columns from left to right. The image will have a rectangular or square format, which means that irregular areas will have images padded with non-image pixels.⁷

Georeferencing of the pixels is given wide latitude within the Content Standard. While the North American Datum of 1983 is the preferred horizontal datum, it is recognized that other reference systems may be used. Georegistration is described in the metadata using a 4-tuple that defines the first pixel (0,0) along with its X
and Y ground coordinates. All other pixels can then be georeferenced, since the pixel resolution is known. The center of the pixel is selected as the point of referencing.

Accuracy is always an issue with any mapped product. The standard mandates that the National Standard for Spatial Data Accuracy (NSSDA) be employed. The NSSDA requires reporting positional accuracy using the root-mean-square error (RMSE), reported at the 95% confidence level, but no threshold is specified; this is left to the contracting agency. Common thresholds that may be employed include the National Map Accuracy Standards and the Accuracy Standards for Large-Scale Maps.⁸

⁵ http://www…
⁶ The NSDI is a national initiative that is designed to elevate the role of spatial data as a critical element of our nation's infrastructure, like the road network, as an example. The concept of the NSDI will be discussed later in this class.
⁷ The values given to these padded pixels are zero (black).
⁸ The American Society for Photogrammetry and Remote Sensing developed these standards.

Frequently it is required that the image be resampled. Figure 4 depicts the situation where the image pixel orientation does not coincide with the ground coordinate system: the red squares indicate the desired locations of the image pixels based on the ground coordinate system, while the black squares represent the actual pixels that comprise the acquired image. Resampling is the process of determining the digital numbers for the new image (in red) given the existing image. There are a number of ways in which this can be done; the most common methods are the nearest neighbor, bilinear interpolation, and cubic convolution algorithms. The Content Standard recommends methods that are analogous to the results one would find using the cubic convolution method, but recognizes that bilinear interpolation methods often lead to acceptable results.
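Of the resampling methods just named, bilinear interpolation can be sketched in a few lines. This is a minimal illustration (function name is mine) for a single fractional sample position:

```python
import numpy as np

def bilinear(image, x, y):
    """Bilinear interpolation of a grayscale image at fractional (x, y),
    where x is the column and y is the row coordinate."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, image.shape[1] - 1)          # clamp at the image edge
    y1 = min(y0 + 1, image.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * image[y0, x0] + fx * image[y0, x1]
    bot = (1 - fx) * image[y1, x0] + fx * image[y1, x1]
    return (1 - fy) * top + fy * bot

img = np.array([[0.0, 10.0], [20.0, 30.0]])
value = bilinear(img, 0.5, 0.5)
# value = 15.0, the average of the four neighbors
```

A full resampling pass would evaluate this at every output-pixel location after transforming it into the source image's row/column system; nearest neighbor simply rounds (x, y) instead, which is why it can produce the disjointed appearance noted below.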
From a practical point of view, it is very difficult to notice any difference between these two methods. The nearest neighbor method is not recommended, particularly for large-scale imagery, since the resulting image may have a disjointed appearance.

[Figure 4. Example showing the need to resample an image.]

The critical issue throughout the Content Standard for Digital Orthoimagery is that everything be well documented within the metadata. This way, users can determine for themselves whether the data are acceptable for their intended purpose. An example of an FGDC-compliant metadata file is presented within the standard document.

Orthophotography Partnerships

When planning a digital orthophotographic project, it is recommended to investigate whether other agencies are also interested in developing a partnership. An example of this type of project is the Year 2000 Digital Orthophotography Consortium undertaken by the Northeastern Illinois Planning Commission (NIPC).⁹ The NIPC obtains aerial photography over its jurisdictional area on a five-year cycle. It has developed a cooperative project model by inviting counties, municipalities, state and federal agencies, nongovernmental organizations, and the private sector that desire digital orthophotography to participate in the acquisition of new imagery.

⁹ See http://www.geoanalytics.com/nipc for more detailed information.

The advantages of this model are:

- NIPC will contribute its entire line budget toward the project. With cost sharing, the overall cost of the project is reduced.
- Overhead costs, such as procurement costs and project management, are also reduced, since these items are spread among all participants.
- Duplication of image acquisition is eliminated.
- Because of the economies of scale, the costs for flying, scheduling, data processing, and contracting will be lower than if each project were undertaken individually.
- A large project like this may provide incentives to the contractors, who may use the project as a showcase, thereby making
the price structure very competitive.

- The products provided to the participants have enhanced value, because they include more data than a participant might otherwise have contracted for. There are a lot of data required by planners, land managers, environmentalists, and public works officials that extend beyond municipal boundaries, and these would be included.
- Quality assurance/quality control should be improved, because it can be better coordinated and more resources are available to undertake checking of the data.
- Cooperative projects offer greater flexibility to the different participants. While NIPC has established a set of standards and specifications, each participating group has the ability to use its own specifications and would pay only for the additional expenses it requires. This includes things like lower flying altitudes, more data, an enhanced digital terrain model, or development of a digital elevation model. By meeting the needs of the most stringent user at a reduced cost, better functionality in the data products is available to all users, from those who are just beginning to develop a GIS to those who have created sophisticated mapping systems.

In general, we can see that these advantages (higher resolution, greater accuracy, and larger area of coverage, all at a lower cost) can be great incentives to GIS creation.

One of the nice features of the NIPC web site is that it shows some example specifications for the acquisition of digital orthophotography. These are shown in Table 2, and they give agencies an example of the costs, accuracy, mapping scale, and other specifications that would be required to meet their expected needs.

The State of Tennessee embarked on a statewide base mapping project in 1996-97.¹⁰ The goal is to provide digital orthophotography for each county in the state. Two scales of photography are acquired, 1:30,000 and 1:7,500, to support 1:4,800 (1" = 400') and 1:1,200 (1" = 100') scale orthophotos. The former scale has a 2-foot ground resolution, while the latter has a 6-inch ground resolution.
Photography is acquired together with airborne GPS and an inertial measuring unit (IMU), along with GPS ground control. The accuracy of the scanned imagery after calibration is less than or equal to 15 µm in each axis. The following are the specifications adopted by Tennessee (state base mapping web site, accessed 6/17/02):

"The scanned images shall be in black-and-white. The radiometric quality of the scanned images is critical to subsequent processes. There shall be no bad scan lines, visible scratches, dust, lint, dirt, smudges, or other cosmetic blemishes. Automatic scratch removal software may be used. The scanned images will be compared with the source photograph to verify that the gray scale is acceptable. Dark tones and highlights shall be examined to assure that the full range of gray shades in the original negative is preserved.

[Table 2. Example specifications from the Northeastern Illinois Planning Commission, listing a base product and alternative products at 1" = 400' and 1" = 200' mapping scales.]

The 1:30,000-scale photography shall be used to produce digital orthophoto sheets that are 14,000' east-west by 8,000' north-south. Each orthophoto shall have a ground pixel resolution of 2.0' x 2.0'. The 1:7,500-scale photography shall be used to produce digital orthophoto sheets that are 3,500' east-west by 2,000' north-south. Each final orthophoto shall have a ground pixel resolution of 0.5' x 0.5'.

The digital orthophotos shall have a horizontal accuracy of ±2 pixels RMSE (4 feet at 1" = 400', 1 foot at 1" = 100') on all check points taken on clearly defined image detail. The mismatch between two adjoining orthophoto sheet edges shall not exceed five (5) pixels.

The final digital orthophoto shall cover the entire neat area of each sheet with no overedge, even though the DTM will have been compiled overedge. The neatline shall be orthogonal and the extent shall be an even number of pixels. Neatlines are inherent in the vector data and are superimposed on the ortho image. The cutline between orthophotos made from 1:30,000-scale photography may be straight, coinciding with the sheet neatline. The orthophotos made from the 1:7,500-scale photography should be mosaicked to minimize the undesirable effect of building lean.

All orthophotos shall be delivered on CD-ROM. The files are to be written and delivered in map sheet order. The image format shall be striped GeoTIFF with a World file. The image must be uncompressed and have horizontal scan lines with a top-left origin. In addition to the required GeoTIFF format identified above, all orthophotos for a county shall be compressed into a multiresolution seamless image database."

AIRBORNE GPS
SURE 440 Advanced Photogrammetry
Surveying Engineering Department, Center for Photogrammetric Training, Ferris State University

FLIGHT PLANNING FOR AIRBORNE GPS

- Consideration of GPS receivers: what form of initialization will be used?
- Potential loss of lock during aircraft banking: bank angles of 25° or less are recommended.
- Where will the base receiver be deployed? At the airport, or on site (which requires additional manpower)?
- Plan for times when the satellite coverage consists of 6 or more satellites; a compromise may be needed between a favorable sun angle and favorable satellite availability.
- Receiver memory should be large enough to store the satellite data.
- Consider the amount of sidelap when planning the mission.
- The camera is locked down during flight.

ANTENNA PLACEMENT

- The offset between the camera perspective center and the antenna phase center must be accurately known. The offset vector is expressed in the image coordinate system.
- The aircraft can be leveled using jacks, and the offset measured using conventional ground surveying or close-range photogrammetry.
- Locking the camera down during flight maintains the geometric relationship of the offset vector, but there is a potential loss of ground coverage due to tilt and crab, so more sidelap is needed in planning.
- Alternatively, the camera can be leveled during flight, but then its movement needs
to be measured to achieve higher accuracy.

ANTENNA PLACEMENT (continued)

- Two locations are desirable: on the fuselage directly over the camera, or on the vertical stabilizer.
- Fuselage placement, advantages: the phase center can be aligned along the optical axis of the camera; measuring the offset and the mathematical modeling are simpler; the crab angle hardly affects the offset; and the tilt correction is negligible for large-scale mapping.
- Fuselage placement, disadvantages: it increases the probability of multipath and, when coupled with the wing placement, may lead to loss of signal because of shadowing; it may also require special modification of the aircraft.
- Vertical stabilizer placement: more work is needed in determining the offset vector, but multipath and shadowing are less likely to occur and the installation may be simpler.

DETERMINING EXPOSURE STATION COORDINATES

Linear interpolation is the simplest approach. It assumes that the change in the GPS trajectory from one epoch to the next is linear, with the exposure made within the epoch interval. With

- ΔX,Y,Z: the change in the GPS coordinates between the two epochs,
- Δt: the time difference between the two epochs, and
- dt: the time from epoch i to the instant of exposure,

the model is

  [X, Y, Z]exp = [X, Y, Z]i + (dt/Δt)·ΔX,Y,Z

The assumption of a linear trajectory may not be true. Sudden changes in direction are common at lower altitudes, so the interpolated receiver location could be considerably different from the actual location at the instant of exposure. An alternative is to shorten the sampling interval: this reduces the effect of the error, but it increases the number of observations taken and the time required to process the data.

[Figure: aircraft trajectory, GPS measurement epochs, and the position of the antenna at exposure estimated with the linear interpolation model.]

A better suggestion is to use a least-squares polynomial fitting algorithm. By varying the degree of the polynomial and the number of neighboring epochs used, a more realistic trajectory should be obtained; the degree and the number of points are a function of the GPS sampling rate. The advantage is that if a cycle slip is experienced, the polynomial can be used to estimate the exposure station coordinates better than the linear model.

As an example, take a second-order polynomial fitted to a five-epoch period around the exposure. The effect is to smooth the trajectory over the five epochs:

  X1 = aX + bX(t1 − t3) + cX(t1 − t3)²
  X2 = aX + bX(t2 − t3) + cX(t2 − t3)²
  X3 = aX + bX(t3 − t3) + cX(t3 − t3)²
  X4 = aX + bX(t4 − t3) + cX(t4 − t3)²
  X5 = aX + bX(t5 − t3) + cX(t5 − t3)²

Similar equations can be developed for Y and Z. With t̄ = ti − t3 (i = 1, 2, ..., 5), the three models look like

  X = aX + bX·t̄ + cX·t̄²
  Y = aY + bY·t̄ + cY·t̄²
  Z = aZ + bZ·t̄ + cZ·t̄²

where a is the distance from the origin, b is the velocity, and c is one-half the acceleration (the second derivative of the model is 2c). The observation equations are

  vX = aX + bX·t̄ + cX·t̄² − X
  vY = aY + bY·t̄ + cY·t̄² − Y
  vZ = aZ + bZ·t̄ + cZ·t̄² − Z

The coefficient (design) matrix is found by differentiating the model with respect to the parameters:

  B = [ 1  (t1 − t3)  (t1 − t3)²
        1  (t2 − t3)  (t2 − t3)²
        1  (t3 − t3)  (t3 − t3)²
        1  (t4 − t3)  (t4 − t3)²
        1  (t5 − t3)  (t5 − t3)² ]

and the observation vectors are

  fX = [X1 X2 X3 X4 X5]ᵀ,  fY = [Y1 Y2 Y3 Y4 Y5]ᵀ,  fZ = [Z1 Z2 Z3 Z4 Z5]ᵀ

The normal equations give the solution vectors Δ = [a b c]ᵀ, with residuals v = BΔ − f:

  ΔX = (BᵀWB)⁻¹BᵀWfX
  ΔY = (BᵀWB)⁻¹BᵀWfY
  ΔZ = (BᵀWB)⁻¹BᵀWfZ

Assuming unit weights (W = I),

  BᵀWB = BᵀB = [ 5    Σt̄   Σt̄²
                 Σt̄   Σt̄²  Σt̄³
                 Σt̄²  Σt̄³  Σt̄⁴ ]

and, taking the X observations as an example,

  BᵀWfX = BᵀfX = [ ΣXi,  ΣXi·t̄i,  ΣXi·t̄i² ]ᵀ

Weighting is important; if it is inappropriate, it may bias or influence the results. Once the coefficients are solved for, the position of the antenna phase center at the instant of photography can be found using

  Xexp = aX + bX(texp − t3) + cX(texp − t3)²
  Yexp = aY + bY(texp − t3) + cY(texp − t3)²
  Zexp = aZ + bZ(texp − t3) + cZ(texp − t3)²

DETERMINING INTEGER AMBIGUITY

If a cycle slip occurs and the integer ambiguity cannot be solved for, the whole photogrammetric mission can be lost. There are two methods:

- static initialization over a known reference point, and
- using a dual-frequency receiver with on-the-fly (OTF) ambiguity resolution techniques.

Static initialization: the aircraft is placed over a point on a known baseline. Only a few observations are required because the vector is known. The integer ambiguities are solved in a conventional static solution, either over the known baseline or from the reference station; this may require longer observation periods (5 min to 1 h). Its weaknesses:

- It adds time to the project and is cumbersome to perform.
- GPS data collection begins at the airport during initialization, so large amounts of data are collected and need to be processed, about 7 Mbytes per hour.
- The receiver is susceptible to cycle slips or loss of lock.
- It is possible that the initial solution of the integers is incorrect, which invalidates the entire photo mission.

On-the-fly integer ambiguity resolution: newer receivers and post-processing software are more robust and easier to use while the receiver is in flight. It requires P-code receivers, and the solution requires about 10-15 minutes of measurement before entering the project area.

PROCESSING AIRBORNE GPS OBSERVATIONS

Ground control is required for single-photo resection and orientation: from the antenna position alone, the camera lies somewhere on a sphere whose radius equals the offset distance.

[Figure: possible camera positions on the sphere around the antenna.]

Adding a second photo reduces some of the uncertainty, since collinearity theory provides the relative orientation between the photos. Without ground control, however, the camera is still free to rotate around the line passing through the antenna positions, so roll must be controlled through the flight.

Determining the exposure station coordinates can be visualized as follows. Assume the photo coordinate system xyz is aligned with a coordinate system UVW, and the survey control XYZ is in the WGS 84 system. The offset DU, DV, DW is transformed into the survey coordinate system by the model

  [XL]   [XA]           [DU]
  [YL] = [YA] − ME·MM·[DV]
  [ZL]   [ZA]           [DW]

where DU, DV, DW are the offset distances, MM is the camera mount orientation, and ME contains the exterior orientation elements. It is necessary to ensure that the camera is heading correctly down the flight path.

In normal photography the camera is leveled before each exposure, and when the offset is measured, the orientation angles on the mount are leveled. A problem occurs if there is an offset between the nodal point and the rotational center of the gimbals on the mount: when the camera is rotated, the relationship between the two points should be considered. The simplest way to accommodate this offset is to keep the relationship between receiver and camera consistent, that is, do not rotate the camera during flight. Alternatively, transform the offset to the local coordinate system using the gimbal form.

The Stuttgart method of GPS aerotriangulation accepts certain physical conditions:

- loss of lock will occur;
- it is unnecessary to perform static initialization prior to flight (solve OTF);
- single-frequency receivers will be used; and
- the base receiver may be located at some distance from the photo mission.

The integer ambiguities are found using C/A-code pseudorange positioning, so there may be bias and drift errors. The approach was originally developed because of errors due to Selective Availability, and the biases can also include datum effects. A block adjustment is used to solve for these biases.

If no loss of lock occurs during the mission, the aircraft trajectories will be continuous and one set of drift parameters is carried in the bundle adjustment. If loss of lock occurs (not common along a strip), each strip can have its own drift parameters. Instead of strip-wise parameters, the block could also be broken into the parts where the aircraft trajectories are continuous, which decreases the number of unknown parameters.

The advantage of drift parameters is that the ground receiver does not have to be situated near the site; it can be up to 500 km away. This can decrease the costs associated with the photo mission, and there are logistical gains in the deployment of ground personnel to the mission site (weather may cancel a photo mission with survey personnel already on site).

A new set of observations is written for the perspective center coordinates:

  XL,GPS + vX = XL
  YL,GPS + vY = YL
  ZL,GPS + vZ = ZL

where (XL, YL, ZL)GPS are the GPS-observed perspective center coordinates and vX, vY, vZ are the residuals on the observed perspective center coordinates.

Relating the antenna offset to the ground depends on the rotation of the camera with respect to the aircraft and the orientation of the aircraft with respect to the ground. The bundle adjustment can be used to correct the camera offset if the camera remains fixed during the mission; if this condition is not met, the orientation of the camera offset depends on the angular exterior orientation elements. The additional observation equations appended to the collinearity model for photo i in strip j are

  [XA]GPS   [vX]   [XL]            [xApc]   [aX]      [bX]
  [YA]    + [vY] = [YL] + R(φ,ω,κ)·[yApc] + [aY] + dt·[bY]
  [ZA]      [vZ]   [ZL]            [zApc]   [aZ]      [bZ]

where

- (XA, YA, ZA)GPS are the ground coordinates of the GPS antenna for photo i;
- vX, vY, vZ are the residuals for the GPS antenna coordinates for photo i;
- XL, YL, ZL are the exposure station coordinates of photo i;
- xApc, yApc, zApc are the eccentricity components to the GPS antenna;
- aX, aY, aZ are the GPS drift parameters for strip j representing the constant terms;
- dt is the difference between the exposure time for photo i and the time at the start of strip j;
- bX, bY, bZ are the GPS drift parameters for strip j representing the linear, time-dependent terms; and
- R(φ, ω, κ) is the orthogonal rotation matrix.

It is recognized that adding parameters weakens the solution. To strengthen it, more ground control could be introduced, but that defeats the purpose. Instead, one of three overlap/control schemes is used:

(i) using both 60% endlap and 60% sidelap;
(ii) using 60% endlap and 20% sidelap and adding additional vertical control points at the ends of each strip; or
(iii) using conventional overlap and flying at least two cross strips of photography.

Scheme (i) is used when no drift parameters are carried in the block adjustment; the receiver must maintain lock during flight. This control scheme can also be used when block-invariant drift parameters are used. Scheme (ii) is used when strip drift parameters are used: parameters are developed for each flight line, thus requiring more vertical control at the ends of the strips. This strengthens the geometry and provides a check against gross errors in the ground control, but it does add to the cost of the project (more photography). Sometimes it is necessary to add more cross strips to tie the control into the adjustment.

Conventional aerotriangulation requires ground control approximately every 7th photo on the edge of the block. Using simulated data, the error ellipses from a bundle adjustment show the accumulation of error along the edges (the "edge effect"). For strip airborne GPS, control is needed to eliminate a weak solution: the exposure station coordinates will lie nearly on a line, making the solution ill-conditioned or singular.

COORDINATE TRANSFORMATIONS
Surveying Engineering Department, Ferris State University

Basic Principles

A coordinate transformation is a mathematical process whereby coordinate values expressed in one system are converted, or transformed, into
coordinate values for a second coordinate system. This does not change the physical location of the point. An example is when the field surveyor sets up an arbitrary coordinate system, such as orienting the axes along two perpendicular roads. Later, the office may want to place this survey onto the state plane coordinate system. This can be done by a simple transformation. The geometry is shown in Figure 1.

[Figure 1. Geometry of the simple linear transformation in two dimensions.]

In Figure 1, a point P can be expressed in a U,V coordinate system as UP and VP. Likewise, in the X,Y coordinate system the point is defined by XP and YP. Assume for the time being that both systems share the same origin. The X-axis is then oriented at some angle α from the U-axis, and the same applies to the Y and V axes.

From the geometry of Figure 1, the X-coordinate of the point can be written as the sum of two segments,

  XP = de + eP,  where de = YP·tanα and eP = UP/cosα

Hence

  XP = UP/cosα + YP·tanα
  XP·cosα = UP + YP·sinα
  UP = XP·cosα − YP·sinα

In a similar fashion, the VP coordinate can be developed as VP = ab + Pb, where ab = (XP − YP·tanα)·sinα and Pb·cosα = YP, so that Pb = YP/cosα. Therefore

  VP = XP·sinα − YP·(sin²α/cosα) + YP/cosα
     = XP·sinα + YP·(1 − sin²α)/cosα
     = XP·sinα + YP·cosα

In these derivations the angle of rotation α is a rotation to the right. The conversion from U,V to X,Y takes on a similar form: since the angle would be in the opposite direction, one can insert a negative value for α and, recognizing that sin(−α) = −sinα, arrive at

  XP = UP·cosα + VP·sinα
  YP = −UP·sinα + VP·cosα

or, in matrix form,

  [XP]   [ cosα  sinα] [UP]
  [YP] = [−sinα  cosα] [VP]

Next, these transformation equations are expanded into different forms, all related to what is called the affine transformation.

General Affine Transformation

The general affine transformation is normally shown as

  x' = a1·x + b1·y + c1
  y' = a2·x + b2·y + c2        (1)

which provides a unique solution if

  | a1  b1 |
  | a2  b2 | ≠ 0

This is a two-dimensional linear transformation that is used in photogrammetry to:

(a) transform comparator coordinates to photo coordinates, correcting for film distortion;
(b) connect stereo models; and
(c) transform model coordinates to survey coordinates.

The property of the affine transformation is that it carries parallel lines into parallel lines: two lines that are parallel to each other prior to the transformation remain parallel after it. It does not, however, preserve orthogonality.

[Figure 2. Physical interpretation of the affine transformation.]

Figure 2 shows the physical interpretation involved in this transformation. The x and y axes represent the original axis system, while x', y' represent the newly transformed coordinate system. The transformation can be written as

  x' = Cx·x·cosα − Cy·y·sinα + Δx
  y' = Cx·x·sin(α + ε) + Cy·y·cos(α + ε) + Δy        (2)

where Δx, Δy are the translation elements in moving from the origin of the original coordinate system to the origin of the transformed coordinate system; Cx, Cy are scale factors in the x and y directions; α is the angle of rotation; and ε is the angle of nonorthogonality between the axes of the transformed coordinate system. Note that there are six parameters in this transformation: Cx, Cy, α, ε, Δx, and Δy. Comparing equations (1) and (2), one can see that

  a1 = Cx·cosα          b1 = −Cy·sinα
  a2 = Cx·sin(α + ε)    b2 = Cy·cos(α + ε)        (3)
  c1 = Δx               c2 = Δy

The solution of the general affine transformation requires at least three points with coordinate pairs in both coordinate systems. Three points will yield a unique solution; more
points provide more redundancy and, generally, a better solution. The design matrix can be developed from equation (1) by differentiating the model with respect to the unknown parameters:

  ∂x'/∂a1 = x   ∂x'/∂b1 = y   ∂x'/∂c1 = 1   ∂x'/∂a2 = ∂x'/∂b2 = ∂x'/∂c2 = 0
  ∂y'/∂a2 = x   ∂y'/∂b2 = y   ∂y'/∂c2 = 1   ∂y'/∂a1 = ∂y'/∂b1 = ∂y'/∂c1 = 0

so that each point contributes the two rows

  [ x  y  1  0  0  0 ]
  [ 0  0  0  x  y  1 ]

to B. Because the model is linear, the f-matrix is composed of the known values in the second coordinate system:

  f = [ x'1  y'1  x'2  y'2  ...  x'n  y'n ]ᵀ

The normal equations are computed as

  BᵀB·Δ − Bᵀf = 0    or    N·Δ − t = 0

with the solution Δ = N⁻¹t. The residuals can be found from v = B·Δ − f, and the a posteriori reference variance is calculated using

  σ0² = vᵀv / (n − r)

where n − r represents the number of redundant observations. The variance-covariance matrix is Qxx = N⁻¹ (scaled by the reference variance).

Example of a General Affine Transformation

Four fiducial marks (1-4) and two image points (a and b) were measured on a comparator. The comparator observations (x, y) and the known values from the camera calibration report (X, Y), in mm, are:

  Point 1:  x = −111.734,  y = −114.293;   X = −113.007,  Y = −112.997
  Point 2:  x =  111.734,  y =  114.293;   X =  113.001,  Y =  112.989
  Point 3:  x = −114.289,  y =  111.699;   X = −112.997,  Y =  113.004
  Point 4:  x =  114.280,  y = −111.749;   X =  112.985,  Y = −112.997

The measured image points are

  xa = −74.794,  ya = 12.202;   xb = −67.123,  yb = 53.432

The problem is to compute the transformed coordinates of points a and b. Note that in this example the upper-case variables X and Y play the role of x' and y' in equation (1). Forming B and f as above, computing N = BᵀB and t = Bᵀf, the solution vector, in the form [a1 b1 c1 a2 b2 c2]ᵀ, is

  Δ = N⁻¹t = [ 0.99977  −0.01134  −0.00211  0.01140  0.99977  0.01222 ]ᵀ

The diagonal elements of Qxx = N⁻¹ are about 1.96x10⁻⁶ for the scale/rotation terms and 2.50x10⁻³ for the translation terms. The residuals are about ±0.001 mm in x and ±0.016 mm in y, and the reference variance for the adjustment is approximately 0.001. The transformed coordinates become

  Xa = a1·xa + b1·ya + c1 = −74.913     Ya = a2·xa + b2·ya + c2 = 11.359
  Xb = a1·xb + b1·yb + c1 = −66.504     Yb = a2·xb + b2·yb + c2 = 54.197

These results are consistent with those found using available transformation software. For an example, see the Appendix.

Orthogonal Affine Transformation

To the general affine case one can impose the condition of orthogonality, i.e. ε → 0. This results in a five-parameter transformation (Cx, Cy, α, Δx, Δy). This transformation is useful when one takes into account the difference in the magnitude of film shrinkage along the length of the film versus its width. The transformation is shown as

  x' = Cx·x·cosα − Cy·y·sinα + Δx
  y' = Cx·x·sinα + Cy·y·cosα + Δy        (4)

Note that this transformation is nonlinear in its parameters, so the formation of the design matrix B requires the Jacobian matrix (often depicted as J). The derivatives of the transformation model with respect to the five parameters are

  ∂x'/∂Cx = x·cosα   ∂x'/∂Cy = −y·sinα   ∂x'/∂α = −Cx·x·sinα − Cy·y·cosα   ∂x'/∂Δx = 1   ∂x'/∂Δy = 0
  ∂y'/∂Cx = x·sinα   ∂y'/∂Cy =  y·cosα   ∂y'/∂α =  Cx·x·cosα − Cy·y·sinα   ∂y'/∂Δx = 0   ∂y'/∂Δy = 1

and each point contributes two such rows to B. The solution also requires initial estimates of the parameters. Generally, in conventional analytical photogrammetry the rotation angle is close to zero and the scale factors are close to one; the translations need to be estimated in some fashion.

The discrepancy vector is found by comparing the observed values to the estimates of the observed values, using the back solution with the current estimates of the parameters. Rearranging equation (4),

  Cx·cosα·x = x' + Cy·sinα·y − Δx        (5a)
  Cy·cosα·y = y' − Cx·sinα·x − Δy        (5b)

Solving for y in (5b), substituting into (5a), and simplifying yields

  x = (x' + y'·tanα − Δy·tanα − Δx) / [Cx·(cosα + sinα·tanα)]        (6)

In a similar vein, solving (5a) for x, substituting into (5b), and simplifying yields

  y = (y' − x'·tanα + Δx·tanα − Δy) / [Cy·(cosα + sinα·tanα)]        (7)

Equations (6) and (7) are used to estimate what the observed quantities should be, given the current estimates of the parameters and the known quantities. The discrepancy vector f is found by differencing the observed quantities and these estimates; with the subscript 0 indicating values computed from the current parameter estimates,

  f = [ x1 − x1,0,  y1 − y1,0,  x2 − x2,0,  ...,  yn − yn,0 ]ᵀ

Just as in any least squares solution, the current estimates of the parameters must be updated.
The alteration vector Δ represents the corrections to be added to the current estimates of the parameters. The parameters are updated and a new solution is found for the problem, and the problem iterates until the corrections within the alteration vector are small enough to have no meaningful effect on the solution, that is, until the solution converges.

Example of an Orthogonal Affine Transformation

Using the same data as in the general affine example, the five-parameter problem can be solved iteratively. The initial estimates of the parameters were α = 0, Cx = 1, Cy = 1, Δx = 0, and Δy = 0. The translations could be estimated as zero because the comparator measurements were made with respect to a fairly good estimate of the location of the principal point; this was also borne out in the 6-parameter example, where c1 and c2, the translational elements, were close to zero.

In the first iteration, B and f are formed using the initial estimates (the discrepancies are on the order of ±1.3 mm), the normal equations are solved, and the parameter estimates are updated to

  Cx = 0.9998   Cy = 0.9998   α = 0.011368 rad   Δx = −0.0021   Δy = 0.0122

In the second iteration the corrections become sufficiently small (of order 10⁻⁵ or less), so the current estimates are accepted based on the discrepancy values. The residuals range from a few micrometres to about 0.02 mm, and the reference variance is approximately 0.0003. The transformed coordinates become

  Xa = Cx·xa·cosα − Cy·ya·sinα + Δx = −74.908     Ya = Cx·xa·sinα + Cy·ya·cosα + Δy = 11.361
  Xb = Cx·xb·cosα − Cy·yb·sinα + Δx = −66.498     Yb = Cx·xb·sinα + Cy·yb·cosα + Δy = 54.191

Isogonal Affine Transformation

To the general case of the affine transformation one can impose two conditions: orthogonality (ε → 0) and uniform scale (C = Cx = Cy). The isogonal affine transformation is also called the Helmert transformation, similarity transformation, Euclidean transformation, or conformal transformation. It is shown as

  x' = C·x·cosα − C·y·sinα + Δx        (8a)
  y' = C·x·sinα + C·y·cosα + Δy        (8b)

If one recalls the equalities expressed in equation (3), we can see that

  C·cosα = a1 = b2        C·sinα = −b1 = a2

Therefore equation (8) can be expressed as

  x' = a1·x + b1·y + c1
  y' = −b1·x + a1·y + c2

or, as normally shown,

  x' = a·x + b·y + c
  y' = −b·x + a·y + d        (9)

In this form (9) the transformation is linear, each point contributing the two rows

  [ x   y  1  0 ]
  [ y  −x  0  1 ]

to the design matrix. The back solution is developed in a similar manner as discussed under the orthogonal transformation. From equation (9),

  x' − c = a·x + b·y
  y' − d = −b·x + a·y

and solving this pair for x and y gives the back solution

  x = [a·(x' − c) − b·(y' − d)] / (a² + b²)
  y = [b·(x' − c) + a·(y' − d)] / (a² + b²)        (10)

Example of an Isogonal Affine Transformation

The example used for the last two transformation types is also given here for the isogonal affine transformation. Solving the linear system with the same four fiducial points yields the solution vector

  Δ = [ a  b  c  d ]ᵀ = [ 0.99977  −0.01137  −0.00211  0.01222 ]ᵀ

The diagonal elements of Qxx = N⁻¹ are about 9.79x10⁻⁶ for a and b and 2.50x10⁻³ for the translations. The residuals are about ±0.002 to ±0.004 mm in x and ±0.013 to ±0.020 mm in y, and the reference variance for the adjustment is approximately 0.0003. The transformed coordinates become

  Xa = a·xa + b·ya + c = −74.913       Ya = −b·xa + a·ya + d = 11.361
  Xb = a·xb + b·yb + c = −66.502       Yb = −b·xb + a·yb + d = 54.195

For simplicity, the linear form of the isogonal affine transformation was used. It would be interesting to see whether the nonlinear model would
result in the same answers The solution will require the formation of the Jacobian matrix using equation 8 The design matrix is thus 6X 6X 6X 6C 60c 6AX 1 1 5Y1 6C 60c 6AX B 39in2 6sz 6sz 6C 60c 6AX 5y 6L 6ylz 6C 60c 6AX where 6X Xicosocyis1noc 6C 6 f i CXi sinoc Cyi cosoc 60c 6y Xis1nocyicosoc 6C CXicosoc Cyisinoc 60c 6X 6Ay39 ii 6Ay39 39in2 6Ay39 6y 6Ay39 6X 6AX 6X 6Ay39 6y BAX ay z 1 My Like we have done previously use equation 8 to isolate the variables X and y Thus we can write X X Cysinoc AX39 Ccosoc y y39CXsinoc Ay Ccosoc Taking the value for y substitute this into equation 8a and solve for X meemr lmmmc Tmlnlm Coordinate T f Page 20 39 CX sin 0c A 39 XCcos0c X Cs1n0c AX Ccos 0c XCcos0cCsin0ctan0c X y tan0c Ay39tanoc AX X y tan0cAy39tan0c AX39 X Ccos at sin octan at In a similar vein substitute the value for X computed above into equation 8b and solve for y yC cos at y Csin a Ay39 yCcos0cCsin0ctan0c y X39tanoc Ax39tanoc Ay39 y X39tanoc Ax tanoc Ay39 y Ccosocs1n0ctan0c The discrepancy vector f is then found using the following relationship yi tan0cAy tan0c AX39 X1 Ccos at sin octan 0c y1 y1 X1tan0cAx39tan0c Ay Ccosocs1n0ctan0c f X2 X2 y2 tan0cIAy tan0c AX Ccosocs1n0ctan 0c y4 y tandAX tan0c Ay39 Ccosocsin0ctan0c The following MathCAD program shows the results of the adjustment For the first iteration the initial estimates of the parameters are C 10 or 00 AX 00 and Ay 00 The solution converged on the second iteration and the results are the same in both cases as one would expect Note that A in the linear model yields the solution to the parameters directly whereas in the non linear model it represents alterations here expressed in terms of an error which is why it is subtracted to be applied to the current estimates of the parameters 4Parameter Coordinate Transformation Program The 6er In Fhotonrwnnm Tmlnlm Forming the Bmatrix and f matrix 7X1 sinoc yl X1 39 COS0 yl 39 x2 cos 01 7X2 sinoc X3 sinoc y3 xscosocy3 Lx400501 y4 7X4 sinoc y4 Computing intermediate 
values

a = Δy·tan α − Δx    b = Δx·tan α + Δy    d = C(cos α + sin α·tan α)

the f-vector entries become fx,i = xi − (x'i − y'i·tan α + a)/d and fy,i = yi − (y'i + x'i·tan α − b)/d. For the first iteration t = Bᵀf, and the solution vector A = N⁻¹t gives the updated estimates of the parameters:

C = C − A1 = 0.9998       α = α − A2 = 0.01137
Δx = Δx − A3 = −0.0021    Δy = Δy − A4 = 0.0122

The Second Iteration

The B-matrix and f-vector are re-formed with the updated parameter estimates. The variance-covariance matrix Qxx = N⁻¹ is essentially unchanged from the first iteration, the entries of the recomputed f-vector are now small (on the order of 0.02 or less), and the solution vector A is essentially zero, so the parameters are unchanged:

C = 0.9998    α = 0.01137    Δx = −0.0021    Δy = 0.0122

The residuals v = BA − f are all smaller than about 0.02, and the reference variance for the adjustment is σ0² = vᵀv/4 = 0.0003. The transformed coordinates
become

Xa = C·xa·cos α + C·ya·sin α + Δx = −74.913    Ya = −C·xa·sin α + C·ya·cos α + Δy = 11.361
Xb = C·xb·cos α + C·yb·sin α + Δx = −66.502    Yb = −C·xb·sin α + C·yb·cos α + Δy = 54.195

with C = 0.9998, α = 0.01137, Δx = −0.0021 and Δy = 0.0122, the same results as from the linear model.

The 4-parameter transformation is a very common type of transformation. Here is another example, where comparator observations are made on a reseau grid pattern. The following are the measured comparator values:

xUL = 70.057 mm    yUL = 40.014 mm
xLR = 80.067 mm    yLR = 50.026 mm
xPT = −60.985 mm   yPT = 41.9810 mm

The true photo coordinates of the reseau are

x'UL = 70.107 mm   y'UL = 39.843 mm
x'LR = 80.133 mm   y'LR = 49.820 mm

Recall that the transformation formulas can be written as

x'UL = a·xUL + b·yUL + c    y'UL = a·yUL − b·xUL + d
x'LR = a·xLR + b·yLR + c    y'LR = a·yLR − b·xLR + d

The unknowns are a, b, c and d, while the measured values are xUL, yUL, xLR and yLR. The true values are x'UL, y'UL, x'LR and y'LR. Differentiating the transformation formulas with respect to the parameters gives the design matrix B:

B = [ 70.057   40.014   1   0 ]
    [ 40.014  −70.057   0   1 ]
    [ 80.067   50.026   1   0 ]
    [ 50.026  −80.067   0   1 ]

The discrepancy vector f and the vector containing the parameters A are

f = (70.107, 39.843, 80.133, 49.820)ᵀ    A = (a, b, c, d)ᵀ

The normal equation is found using N = BᵀB, where N is the normal coefficient matrix:

N = [ 15422.429       0       150.124    90.040 ]
    [      0      15422.429    90.040  −150.124 ]
    [  150.124      90.040      2          0    ]
    [   90.040    −150.124      0          2    ]

The constant vector t = Bᵀf and the solution A = N⁻¹t are computed as

a = 0.999051    b = 0.002547    c = 0.014579    d = 0.045424

The transformed coordinates of the point are then computed as

x'PT = a·xPT + b·yPT + c = 0.999051(−60.985) + 0.002547(41.9810) + 0.014579 = −61.48 mm
y'PT = −b·xPT + a·yPT + d = −0.002547(−60.985) + 0.999051(41.9810) +
0.045424 = 41.793 mm

Rigid Body Transformation

To the general affine transformation one further set of conditions can be imposed: orthogonality and unit scale (Cx = Cy = 1). In this case the transformation can be shown with only three parameters, α, Δx and Δy:

x' = x·cos α + y·sin α + Δx    (11)
y' = −x·sin α + y·cos α + Δy

Example of a Rigid Body Transformation

Again we can use the same example used for the previous transformation types. Being nonlinear, we begin by forming the design matrix by differentiating the transformation model with respect to the parameters. The design matrix looks like

B = [ ∂x'i/∂α   ∂x'i/∂Δx   ∂x'i/∂Δy ]
    [ ∂y'i/∂α   ∂y'i/∂Δx   ∂y'i/∂Δy ]   (one row pair per point)

where

∂x'/∂α = −x·sin α + y·cos α    ∂x'/∂Δx = 1    ∂x'/∂Δy = 0
∂y'/∂α = −x·cos α − y·sin α    ∂y'/∂Δx = 0    ∂y'/∂Δy = 1

The back substitution is found in the same way as shown in the discussion of the orthogonal transformation. It is

x = (x' − y'·tan α + Δy·tan α − Δx) / (cos α + sin α·tan α)
y = (y' + x'·tan α − Δx·tan α − Δy) / (cos α + sin α·tan α)

The discrepancy vector takes the following form:

f = [ xi − (x'i − y'i·tan α + Δy·tan α − Δx) / (cos α + sin α·tan α) ]
    [ yi − (y'i + x'i·tan α − Δx·tan α − Δy) / (cos α + sin α·tan α) ]

Following are the results of a rigid body transformation given the same example data used for the previous transformation discussions. Being a nonlinear model, initial estimates of the parameters are required; in this case α, Δx and Δy (the rotation and two translations, respectively) were assumed to be zero.

3-Parameter Coordinate Transformation Program Solution

Forming the B-matrix with α = 0 reduces each row pair to [yi 1 0] and [−xi 0 1], and N = BᵀB. Computing intermediate values for the f-vector,

a = Δy·tan α − Δx    c = Δx·tan α + Δy    d = cos α + sin α·tan α

the entries are
fx,i = xi − (x'i − y'i·tan α + a)/d    fy,i = yi − (y'i + x'i·tan α − c)/d

For the first iteration the f-vector entries are approximately (1.273, −1.296, −1.267, 1.304, −1.292, −1.305, 1.295, 1.248), and with t = Bᵀf the solution vector A = N⁻¹t gives the updated estimates

α = α − A1 = 0.01137    Δx = Δx − A2 = −0.0021    Δy = Δy − A3 = 0.0122

The Second Iteration

The B-matrix is re-formed with rows [−xi·sin α + yi·cos α, 1, 0] and [−xi·cos α − yi·sin α, 0, 1], and the variance-covariance matrix Qxx = N⁻¹ is computed. Recomputing the intermediate values and the f-vector with the updated parameters leaves entries on the order of a few hundredths or less, and the solution vector A is essentially zero, so the parameters are unchanged:

α = 0.01137    Δx = −0.0021    Δy = 0.0122

The residuals v = BA − f are all smaller than about 0.04, and the reference variance for the adjustment is σ0² = vᵀv/5 = 0.0001. The transformed coordinates become

Xa = xa·cos α + ya·sin α + Δx = −74.926    Ya = −xa·sin α + ya·cos α + Δy = 11.363
Xb = xb·cos α + yb·sin α + Δx = −66.513    Yb = −xb·sin α + yb·cos α + Δy = 54.204

Polynomial Transformations

A polynomial can also be used to perform a transformation. This is given as

x' = a0 + a1·x + a2·y + a3·x² + a4·y² + a5·xy    (12)
y' = b0 + b1·x + b2·y + b3·x² + b4·y² + b5·xy

An example of a bilinear polynomial transformation, using the same example that has been used in the previous sections, is shown in the following MathCAD program.

Bilinear Polynomial Coordinate Transformation Program

Forming the
B-matrix and f-vector: for each point i the bilinear model x' = A1 + A2·x + A3·y + A4·xy, y' = A5 + A6·x + A7·y + A8·xy contributes the row pair

[1  xi  yi  xi·yi  0  0  0   0   ]
[0  0   0   0      1  xi  yi  xi·yi]

to the B-matrix, with the corresponding Xi and Yi as the f-vector entries. The normal matrix N = BᵀB and its inverse Qxx = N⁻¹ are formed as before, t = Bᵀf, and the solution vector is

A = N⁻¹t = (−0.0021, 0.9998, 0.0113, −0.0000, 0.0122, −0.0114, 0.9998, −0.0000)ᵀ

With four points and eight parameters the solution is exactly determined, so the residuals v = BA − f are essentially zero and the reference variance for the adjustment is 0.0000. The transformed coordinates become

Xa = A1 + A2·xa + A3·ya + A4·xa·ya = −74.913    Ya = A5 + A6·xa + A7·ya + A8·xa·ya = 11.358
Xb = A1 + A2·xb + A3·yb + A4·xb·yb = −66.503    Yb = A5 + A6·xb + A7·yb + A8·xb·yb = 54.201

An alternative from Mikhail (Ghosh 1979) can also be used:

x' = A0 + A1·x + A2·y + A3(x² − y²) + A4(2xy)    (13)
y' = B0 − A2·x + A1·y − A4(x² − y²) + A3(2xy)

Projective Transformation

The projective equations are frequently used in photogrammetry. Shown here without derivation is the form of the 2D projective transformation:

x' = (a1·x + a2·y + a3) / (d1·x + d2·y + 1)    (14)
y' = (b1·x + b2·y + b3) / (d1·x + d2·y + 1)

It should be fairly evident from equation (14) that if d1 = d2 = 0 then the affine transformation is formed. The solution to the projective transformation requires linearization since the equations are nonlinear. For a point i this is shown as

x' − x'0 = (∂x'/∂a1)da1 + (∂x'/∂a2)da2 + (∂x'/∂a3)da3 + (∂x'/∂d1)dd1 + (∂x'/∂d2)dd2    (15)
y' − y'0 = (∂y'/∂b1)db1 + (∂y'/∂b2)db2 + (∂y'/∂b3)db3 + (∂y'/∂d1)dd1 + (∂y'/∂d2)dd2

where, with D = d1·x + d2·y + 1,

∂x'/∂a1 = x/D    ∂x'/∂a2 = y/D    ∂x'/∂a3 = 1/D
∂y'/∂b1 = x/D    ∂y'/∂b2 = y/D    ∂y'/∂b3 = 1/D
∂x'/∂d1 = −x(a1·x + a2·y + a3)/D²    ∂x'/∂d2 = −y(a1·x + a2·y + a3)/D²
∂y'/∂d1 = −x(b1·x + b2·y + b3)/D²    ∂y'/∂d2 = −y(b1·x + b2·y + b3)/D²

The design matrix B contains these partial derivatives, one row pair (Fx,i, Fy,i) per point, taken with respect to the eight parameters a1, a2, a3, b1, b2, b3, d1 and d2. The discrepancy vector f is the difference between the observed values and the values computed from the initial approximations:

f = (x'1 − Fx,1, y'1 − Fy,1, …, x'n − Fx,n, y'n − Fy,n)ᵀ

The solution is shown as follows:

A = (BᵀB)⁻¹Bᵀf = N⁻¹t,  where N = BᵀB and t = Bᵀf

For weighted observations the solution is A = (BᵀWB)⁻¹BᵀWf = N⁻¹t.

Using the same example that we have used throughout these notes, the following MathCAD program shows the results of the projective transformation. The initial estimates of the parameters are all zero except a1 and b2, which are 1.0. With those values the denominator D equals 1 for every point; the design matrix and discrepancy vector are formed as above, and the first iteration yields the new parameter estimates

a1 = 0.99977    a2 = 0.01134    a3 = 0.01409
b1 = −0.0114    b2 = 0.99977    b3 = 0.01348
d1 = 0          d2 = 0

Second Iteration

The denominator and design matrix are re-formed with the updated parameters. The alterations are essentially zero except for small changes to a3 and b3, giving the converged estimates

a1 = 0.99977    a2 = 0.01134    a3 = 0.01411
b1 = −0.0114    b2 = 0.99977    b3 = 0.01311
d1 = 0          d2 = 0

Another example of this solution is shown in the appendix using the Adjust software package.

The developments that have already been presented represent the transformation in 2D space. Surveying measurements are increasingly being performed in a 3D mode, basically the same as above
except for the addition of one more axis about which the transformation takes place. Discussion on the use of the projective equations will be given in a later section.

The transformed coordinates from the projective adjustment are

Xa = (a1·xa + a2·ya + a3)/(d1·xa + d2·ya + 1) = −74.92187    Ya = (b1·xa + b2·ya + b3)/(d1·xa + d2·ya + 1) = 11.35877
Xb = (a1·xb + a2·yb + a3)/(d1·xb + d2·yb + 1) = −66.49273    Yb = (b1·xb + b2·yb + b3)/(d1·xb + d2·yb + 1) = 54.20205

Transformations in Three Dimensions

Instead of using the projective equations, polynomials may be used to perform the 3D transformation. Ghosh (1979) gives the general form of this type of transformation:

x' = a0 + a1x + a2y + a3z + a4x² + a5y² + a6z² + a7xy + a8yz + a9zx + a10xy² + a11x²y + a12xz²
y' = b0 + b1x + b2y + b3z + b4x² + b5y² + b6z² + b7xy + b8yz + b9zx + b10xy² + b11x²y + b12xz²
z' = c0 + c1x + c2y + c3z + c4x² + c5y² + c6z² + c7xy + c8yz + c9zx + c10xy² + c11x²y + c12xz²

This transformation is not conformal; therefore it should only be used where the rotation angles are very small. Mikhail presents another form of the 3D polynomial which is conformal in the three planes. This is given as (Ghosh 1979)

x' = A0 + A1x + A2y + A3z + A5(x² − y² − z²) + 0·yz + 2A7zx + 2A6xy
y' = B0 + A2x + A1y + A4z + A6(x² − y² − z²) + 2A7yz + 0·zx + 2A5xy
z' = C0 + A3x + A4y + A1z + A7(x² − y² − z²) + 2A6yz + 2A5zx + 0·xy

The 0's here indicate that the coefficients for the terms yz in x', zx in y', and xy in z' are zero. A polynomial projective transformation can be shown without derivation as (Ghosh 1979)

x' = (a1x + a2y + a3z + a4) / (d1x + d2y + d3z + 1)
y' = (b1x + b2y + b3z + b4) / (d1x + d2y + d3z + 1)
z' = (c1x + c2y + c3z + c4) / (d1x + d2y + d3z + 1)

A solution is possible provided that

| a1  a2  a3  a4 |
| b1  b2  b3  b4 |   ≠ 0
| c1  c2  c3  c4 |
| d1  d2  d3  d4 |

References

Ghosh, S.K. 1979. Analytical Photogrammetry. Pergamon Press, New York, 203p.

Wolf, P. and C. Ghilani 1997. Adjustment Computations: Statistics and Least Squares in Surveying and GIS. John Wiley & Sons, New York, 564p.

Appendix

There are several free or public domain transformation software packages available to the user. One such package is Adjust for Windows, which was written to accompany the text by Wolf and Ghilani (1997) and is available at
httpsurveyingwbpsuedupsusurvfreehtm

General Affine Transformation

The general affine 6-parameter transformation example presented in the notes was run using the Adjust for Windows software. The results, which are consistent with the MathCAD program given in the notes, are as follows:

Two Dimensional Affine Coordinate Transformation of File: 6par.dat
Transformation Example
  ax + by + c = X + VX
  dx + ey + f = Y + VY

Transformation parameters, estimated errors, and t-values:
  a =  0.99977 ± 0.00010    t = 9835.84
  b =  0.01134 ± 0.00010    t = 111.56
  c = −0.002   ± 0.011      t = 0.18
  d = −0.01140 ± 0.00010    t = 112.12
  e =  0.99977 ± 0.00010    t = 9835.78
  f =  0.012   ± 0.011      t = 1.06
Adjustment's reference variance: 0.0005
Transformed points: A = (−74.913, 11.359) ± 0.014, B = (−66.504, 54.197) ± 0.014

Isogonal Affine Transformation

The isogonal affine 4-parameter transformation example presented in the notes was run using the Adjust for Windows software. The results, which are consistent with the MathCAD program given in the notes, are as follows:

Two Dimensional Conformal Coordinate Transformation from file: 4par.dat
Transformation Example
  ax + by + Tx = X + VX
  −bx + ay + Ty = Y + VY

Transformation parameters, estimated errors, and t-values:
  a  =  0.99977 ± 0.00005    t = 18930.71
  b  =  0.01137 ± 0.00005    t = 215.26
  Tx = −0.002   ± 0.008      t = 0.25
  Ty =  0.012   ± 0.008      t = 1.45
Rotation: 359°02'03.9547"    Scale: 0.99983
Adjustment's reference variance: 0.0003
Transformed points: a = (−74.913, 11.361) ± 0.009, b = (−66.502, 54.195) ± 0.010

Projective Transformation

The projective transformation example presented in the notes was run using the Adjust for Windows software. The results, which are consistent with the MathCAD program given in the notes, are as follows:

Two Dimensional Projective Coordinate Transformation of File: proj.dat
Transformation Example
  X + VX = (a1·x + b1·y + c1)/(a3·x + b3·y + 1)
  Y + VY = (a2·x + b2·y + c2)/(a3·x + b3·y + 1)

Transformation parameters:
  a1 =  0.99977    b1 = 0.01134    c1 = 0.014
  a2 = −0.01140    b2 = 0.99977    c2 = 0.013
  a3 =  0.00000    b3 = 0.00000
Unique solution obtained; number of iterations: 1
Transformed points: A = (−74.922, 11.359), B = (−66.493, 54.202)

SURE 440 Advanced Photogrammetry, 1/14/2008

LIDAR PRINCIPLES AND APPLICATIONS
Center for Photogrammetric Training, Ferris State University

What LIDAR Is
- LIght Detection And Ranging
- Active sensing system: uses its own energy source rather than reflected natural light or naturally emitted energy
- Detection of features from light energy reflected from an internal source
- Ranging of the reflecting object based on the time difference between emission and reflection
- Direct terrain data acquisition, not inferential like photogrammetry
- Day or night operation

What LIDAR Is Not
- Light- or laser-assisted RADAR: RADAR uses electromagnetic (EM) energy in the radio range; LIDAR does not
- All-weather: the target must be visible in the instrument's EM range; some haze is manageable, fog is not
- Able to see through trees: LIDAR sees around trees, not through them; fully closed canopies (rain forests) cannot be penetrated
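The ranging idea that runs through these slides, namely that the range follows directly from the time difference between emission and return, can be sketched in a couple of lines. This is an illustrative sketch, not code from the notes; the 1000 m example value is invented:

```python
# Minimal sketch of the LIDAR ranging principle: the range to the
# reflecting surface is half the round-trip travel time multiplied
# by the speed of light.

C = 299_792_458.0  # speed of light in vacuum, m/s


def lidar_range(round_trip_time_s: float) -> float:
    """Range to the reflecting object from the emit/return time difference."""
    return C * round_trip_time_s / 2.0


# A pulse that travels 2000 m round trip corresponds to a 1000 m range.
```

The division by two is the whole trick: the measured interval covers the path to the target and back, so only half of it maps to distance.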
LASER
- Emitting diode producing a light source at a very specific frequency
- Signal sent to the ground and reflected back
- Compute range

TYPES OF LIDAR
- Topographic system: laser pulse in the near-infrared region
- Bathymetric system: the infrared signal is partially absorbed by water, so the blue-green portion of the electromagnetic spectrum is utilized

INSTANTANEOUS FIELD OF VIEW
- Scan angle up to 20°, some up to 30°
- Footprint called the IFOV
- The circular footprint changes to an ellipse toward the swath edges

LASER SCANNING LIDAR
- Rotating mirror
- Oscillating mirror

LASER SCAN
- Generally the laser scanner utilizes a rotating mirror transverse to the direction of flight
- Disadvantage: the rate of movement is not constant; the mirror slows down near the end of the scan, stops, reverses direction, and finally speeds up
  - Software corrects for the effect and disregards data at the ends of the swath
- A rotating prism is one alternative: it fixes the speed of the prism to a constant rate
  - Disadvantages: timing of when data are to be collected; since data are collected in only one direction, there may be a bias that is not distinguishable unless there is a field check

Mirror Scan Systems; Palmer Scanner and Scan Pattern (figures)

LASER SCAN
- Laser scan signal rate v (Hz): 10 kHz = 10,000 individual pulses per second; 50 kHz = 50,000 individual pulses per second

GPS
- Satellite-based measurement system
- Distance to the satellites calculated
- Pseudorange: one satellite locates the receiver on a sphere about that satellite
- A minimum of 4 satellites is required to determine position
- GPS provides the position of the receiver

Satellite Ranging; Multiple Satellite Ranges (figures)

IMU
- Measures angular changes, i.e., the orientation of the scanner

TIMING
- Each component's sampling interval is fixed, so timing becomes important
- GPS may sample at 1 second; the laser may sample at 20,000 pulses per second
- The location of each laser pulse must be interpolated; the same situation holds for the IMU
- The timing system is sometimes called the 4th component of LIDAR

PROCESSING
- Post-processing: data related to the particular frame of reference required by the user
  - Recommend 2 GPS ground receivers; with the GPS position known, georeferencing is possible
- Elimination of irrelevant data, often called "bald earth," is often automatic; 90% of the data may be removed, and the remaining 10% may represent 90% of the time

PROCESSING
- LIDAR data are processed after the initial flight
- Slant ranges computed
- GPS computed separately, then imported into the LIDAR processing system (sensor platform X, Y, H)

PROCESSING
- A LIDAR signal pulse may reflect off more than one object, so a single pulse may produce multiple returns

LIDAR Over Open Ground (figures): LIDAR return signals; fitting a curve/surface through the elevation LIDAR points; comparing the curve-fitted surface to the LIDAR geometry at the time of data collection: good fit

LIDAR Over Tree Canopy (figures): fitting a curve/surface through the elevation LIDAR points; comparing the curve-fitted surface to the LIDAR geometry at the time of data collection: poor fit

ADVANTAGES
- Versatile technology used for atmospheric studies, bathymetric surveys, glacial ice investigations, etc.
- Cost-effective method of terrain data collection
- High precision and high point density
- Accelerates the project schedule upward of 30% because
processing begins almost immediately
- Not restricted to daylight or cloud cover
- Cuts down on the amount of obscured terrain

DISADVANTAGES
- Problems with data collection over water; water boundary delineation is suspect
- LIDAR is not capable of determining break lines and may miss data; it is often augmented with break-line data from photogrammetry
- No standards yet, since it is a relatively new technology
- In photogrammetry the operator has cartographic license, which is not available with LIDAR

ACCURACIES
- Typical: 15 cm in elevation and horizontal position
- Rule of thumb: 1/2000th of the flying height
- Vertical accuracies are valid for H below 1200 m, and are about 25 cm when flying between 1200 m and 2500 m
- Spot spacing is much denser for slower aircraft; a more reliable, accurate DTM results from denser spot spacing (more data collected)
- Heights are most accurate at nadir and decrease in accuracy as the swath angle increases; a smaller swath width will yield denser spot spacing

Effects of Positional and Angular Error on Footprint
- The positional and attitude errors depend on ω, φ and κ, which are utilized to form the rotation matrix R
- Two systematic errors: a is the angular error, q is the positional error
- Total error: e = q + a
- Defining the angular error as a = Ra·r, the total error becomes e = q + Ra·r
- The reconstructed ground point is defined as p' = p + q + RaReRi·r

Effect of Timing Bias
- A laser profile over horizontal terrain is shifted in the flight direction by a timing bias (figure)
- Flight lines flown back and forth create different effects: a timing bias creates an angular distortion unless all flight lines are flown in the same direction

Effect of Sloping Terrain
- The effect of a positional error over sloping terrain is a positional shift s and an elevation error ΔZ along the slope of maximum gradient
- When the flight is along a line oblique to the maximum gradient, the positional error becomes s' = q·cos(β − α)

Effect of Angular Error for the Laser System
- With the angular error designated as δ and the range as r, and assuming the angular error is constant during the flight, the magnitude of the positional change depends on the range
- The magnitude is greater for distances farther away, so the reconstructed flight path is not a straight line

Effect of Angular Error Over a Sloping Surface
- Defining tan γ' = tan γ·cos δ / (1 − tan γ·sin δ), the slope error becomes the difference γ' − γ

Data Acquisition; Reconstruction (figures)

Lidar Products
- Level 1: Basic, or All Point
- Level 2: Low Fidelity, or First Pass
- Level 3: High Fidelity, or Cleaned
- Level 4: Feature Layers
- Level 5: Fused
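The sloping-terrain effect discussed a few slides back reduces to simple trigonometry: a horizontal positional shift s on terrain of slope angle γ produces an elevation error of roughly s·tan γ. A minimal sketch of that relationship follows; the function name and the example values are mine, not from the notes:

```python
import math


def elevation_error(horizontal_shift_m: float, slope_deg: float) -> float:
    """Elevation error caused by a horizontal positional shift on a
    uniform slope: dZ = s * tan(slope)."""
    return horizontal_shift_m * math.tan(math.radians(slope_deg))


# Illustration: a 0.30 m horizontal error on flat ground costs nothing in
# height, while the same shift on steeper terrain grows with tan(slope).
```

This is why the notes stress that LIDAR height accuracy figures quoted for flat terrain degrade on slopes: the horizontal error budget leaks into the vertical through the terrain gradient.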
the LIDAR data requires supporting imagery 0 Removal of the remaining 20 of the vegetation and features will account for about 80 of the time budget Before gtgt Lidar Principles and Applicatations 21 SURE 440 Advanced Photogrammetry 1142008 Lidar Principles and Applicatations 22 SURE 440 Advanced Photogrammetry 1142008 Building Height Extraction Building Height Extraction Lidar Principles and Applicatations 23 SURE 440 Advanced Photogrammetry 1142008 Building Height Extraction Power Line MappingInspection Lidar Principles and Applicatations 24 SURE 440 Advanced Photogramnietry 1072009 Center for Photogrammetric Training Ferris State University The Center for Photogrammetric Training INTRODUCTION Case I Compute exterior orientation K p 03 XL YL ZL Observe photo coordinates xi yi Treat survey control as known Xi Yi Zi Case 11 Extension of Case I exterior orientation treated as observed quantities The Center for Photogrammetn39c aining Numerican Resection and Orientation 1 SURE 440 Advanced Photogrammetry 1072009 INTRODUCTION 0 Case III Extension of Case II Observed quantities include photo coordinates eo survey coordinates unknown survey points Survey control given Find adjusted co and survey coordinates 0 Case IV Extension of Case III 7 interior orientation observed Adjustment adjusted eo io survey coordinates The Center for Photogrammetn39c naming INTRODUCTION General mathematical model F F0bs X Y 0 0 Taylor s series linearizes equation 6F 6F FF A V0 l u moi l u admin nu The Center for Photogrammetn39c 39D39aining Numerican Resection and Orientation 2 SURE 440 Advanced Photogrammetry 1072009 INTRODUCTKHJ 0 Observation equation FfBAAV0 AVBAf0 0 Where V residuals on the observations A alteration vector to parameters f discrepancy vector The Center for Photogrammetn39c naming CASEI Estimate variancecovariance matrix 200 Compute adjusted eo parameters and variancecovariance matrix on adjusted parameters 22 00 0 Math model Fxx x0 c 0 central projective AZ equations 
Fyyy0 c0 The Center for Photogrammetn39c aining Numerican Resection and Orientation 1072009 SURE 440 Advanced Photogrammetiy CASE 1 0 Observation equations AV BA f 0 6Fxl 39 Where A 6E axquoty 1 0I am 0 1 6xyJ F69 V 7 Vi f 7 My L The Center for Photogrammetn39c 39 39aining CASE 1 Observation equations AV BAf 0 Where aFxj 6Kgt gtQgtXLgtYLgtZL 2 BF BJ MPararrieters y 6F J 6Kgt gt wgtXLgt YZRZL A 51a 5 5a XL an 32 The Center for Photogrammetn39c 39D39aining Numerican Resection and Orientation SURE 440 Advanced Photogramnietry 1072009 CASE 1 0 General form of observation equation 9 e VBAfO 0 Function to be minimized FVTWV 21V1 Xf The Center for Photogrammetn39c naming CASE 1 0 Differentiate the function a F2WV 2t0 6V e T 6172 43 20 6A The Center for Photogrammetn39c aining Numerican Resection and Orientation 5 SURE 440 Advanced Photogramnietiy 1072009 CASE 1 0 Collecting observation equation and differentiated function W o 171 V 0 0 0 13 2 0 0 e t f 1 B o The Center for Photogrammetn39c 39n39aining CASEI 0 Eliminating V and 9t and substituting V l W B A Wf Nonnal equation found by substituting 9 T T B WBA BJ Wf 0 N z0 The Center for Photogrammetn39c 39D39aining Numerican Resection and Orientation SURE 440 Advanced Photogrammetry 1072009 CASE 1 0 Corrections to parameters found by 8 A N 1t 0 Adjusted parameters found by adding corrections to current estimates XX a 00 The Center for Photogrammetn39c naming CASE 1 Residuals computed as Vf 0 V Fg 0 Unit variance 2 VTWV 00 2n 6 Variancecovariance 2 2 71 matrix 2 O39ON 00 The Center for Photogrammetn39c aining Numerican Resection and Orientation 7 SURE 440 Advanced Photogrammetiy EXAMPLE Photo observations Point No x y 1 61982 79018 2 73147 78240 3 54934 65899 4 26046 29449 5 34893 71287 6 23980 31889 7 11783 88922 8 85047 105836 9 26468 6082 10 12523 79026 11 27972 85027 12 12094 69861 13 80458 70012 Survey Control Points PointNo Y Z 1 4464675000 11129553700 27386600 2 4552720300 10993263000 27553100 3 4553670500 
11019301300   27510100
 4        4632243000   11108631900   25499000
 5        4679722300   11126100100   26321400
 6        4633426800   11112289000   25485000
 7        4501989000   11047518200   26284500
 8        4532804500   10965087600   29136500
 9        4608713500   11093334300   25565500
10        4512621800   11053117400   26197300
11        4481580000   11091016300   28832000
12        4648927900   11172917600   26685200
13        4706142300   11079542700   26863900

Exterior Orientation Elements (Estimated):
   XL          YL          ZL        Kappa    Phi     Omega
459000000   1111500000   20900000   21500   00000   00000

Iteration Nos. 1-4, Alteration Vector Δ: the corrections shrink with each iteration and are effectively zero by Iteration 4, indicating convergence.

Exterior Orientation Elements (Adjusted):
   XL          YL          ZL        Kappa   Phi     Omega
458924624   1111467719   20905445   21281   00195   00098

Residuals on Photo Observations:
Point No.     x        y
 1         0.002    0.009
 2         0.004    0.007
 3         0.002    0.002
 4         0.001    0.002
 5         0.002    0.004
 6         0.000    0.000
 7         0.006    0.011
 8         0.006    0.001
 9         0.011    0.000
10         0.007    0.001
11         0.002    0.006
12         0.001    0.007
13         0.004    0.006

The a posteriori unit variance is 3471294.

The variance-covariance matrix of the adjusted parameters (6 x 6, symmetric; columns 1-5 by rows, then column 6):
0.233948622   0.011026685   0.020985439   0.000016307   0.000104961
0.011026685   0.154028192   0.034834200   0.000000932   0.000001678
0.020985439   0.034834200   0.025329779   0.000001566   0.000009114
0.000016307   0.000000932   0.000001566   0.000000005   0.000000007
0.000104961   0.000001678   0.000009114   0.000000007   0.000000048
0.000002099   0.000075937   0.000018958   0.000000000   0.000000001
Column 6 (rows 1-6):
0.000002099   0.000075937
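The iteration listings that follow are the output of a loop whose structure can be sketched generically. The sketch below is a synthetic stand-in (a made-up, mildly nonlinear toy model in place of the real collinearity equations and their partials); it illustrates only the Case I mechanics: re-evaluate B and f at the current estimates, solve NΔ + t = 0 for the alteration vector, add it to the parameters, and stop when the corrections become negligible.

```python
import numpy as np

# Synthetic stand-in for the Case I iteration loop. evaluate() returns the
# design matrix B and discrepancy vector f at the current estimates; the
# model is a hypothetical toy, NOT the real collinearity equations, so only
# the mechanics of the loop carry over.
def evaluate(p, A, obs):
    f = A @ (p + 0.01 * p**2) - obs     # model minus observations
    B = A * (1.0 + 0.02 * p)            # Jacobian dF/dp (column-wise scaling)
    return B, f

rng = np.random.default_rng(3)
A = rng.normal(size=(12, 6))            # 12 observation equations, 6 parameters
truth = rng.normal(size=6)
obs = A @ (truth + 0.01 * truth**2)     # consistent synthetic observations

W = np.eye(12)                          # observation weight matrix
x = np.zeros(6)                         # initial estimates X0
for iteration in range(20):
    B, f = evaluate(x, A, obs)
    N = B.T @ W @ B                     # normal matrix N = B'WB
    t = B.T @ W @ f                     # t = B'Wf
    delta = -np.linalg.solve(N, t)      # alteration vector: N*delta + t = 0
    x = x + delta                       # adjusted parameters
    if np.max(np.abs(delta)) < 1e-10:   # converged: corrections negligible
        break
```

Each pass of this loop corresponds to one "Iteration No. / Alteration Vector Δ" block in the listing; convergence is declared once the alteration vector is effectively zero.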
Advanced Photogramnietry 1072009 CASE III 0 Observation equations become o E Af0 VEAEAf0 The Center for Photogrammetn39c naming CASE III Observational residuals defined as v7 le x1 vw VYI vyl I v I v21 V f2 VXL VXZ 39 vYL x VZL vZn y The Center for Photogrammetn39c aining Numerican Resection and Orientation 16 SURE 440 Advanced Photogramnietry 1072009 CASE III 0 Discrepancy vectors found by evaluating functions using current estimates FX1 if Fm ff Jeri jf FZJ 7 7 FXL 7 Fyj 3930 F02 FZL lgg FZ CASE III 0 Alteration vectors defined as 71 5K SY 5gp 621 2 660 S 1 A A XL 6X SYL SYV 62L 62 The Center for Photogrammetn39c aining Numerican Resection and Orientation 17 SURE 440 Advanced Photogrammetry 1072009 CASE III Design matrices CASE III Observation equations V E fag f V I 0l 1f0 V 0 VEX7O The Center for Photogrammetn39c aining Numerican Resection and Orientation 18 SURE 440 Advanced Photogrammetry 1072009 CASE Ill 0 Function to be minimized T F V WV 21TVBAf 0 Leads to normal equations 3mg 3W7 0 The Center for Photogrammetn39c naming CASE Ill 0 Expanded form of normal equations 2 BETWf VIEjf A BTWf Vfjf BTWEHIg BTWE 0 BETWgHIS 0 Or more simply NXE O 71 0 Solution Az N I The Center for Photogrammetn39c aining Numerican Resection and Orientation l9 SURE 440 Advanced Photogrammetry Case IV 9 Sometimes referred to as calibration case 9 Additional observations interior orientation elements ie camera constant calibrated focal F X 0 lt 0 x0 x0 0 a Fcc C0 length and principal point coordinates Can include other items like lens distortion etc 0 Math model expanded for c o a O X0 Yo The Center for Photogrammetric Case IV 0 Observation equations are 0 Or collectively Numerican Resection and Orientation 1072009 20 6 INTRODUCTION TO LASER SCANNING Center for Photogrammetric Training Ferris State University Introduction The creation of a digital terrain model DTMdigital elevation model DEM can be performed in a number of ways These include 0 Direct measurement onsite using 
conventional surveying including the global positioning system GPS 0 Photogrammetric techniques IFSAR lnterEerometric ynthetic Aperture Radar and Lidar ght Detection And Banging Lidar is one of the newer technologies in use today Not only can it be used for DTlWDEM data collection it has been used for a myriad of other studies including determining vegetation volume and atmospheric studies to feature extraction LASER SCANNING Y OBI Figure 1 Lidar sensing system geometry Laser ScanningLidar Page 2 Lidar is also referred to as laser1 ranging laser altimetry laser scanning and LADAR Mser Detection And Ranging While airborne laser ranging systems are not newz development of a system comprised of a laser scanner global positioning system GPS and an inertial measuring unit IMU that can meet map accuracy guidelines has been a relatively recent occurrence see gure 1 This is a technology at its infancy and as such it will probably experience rapid evolution as the market for this technology grows Laser altimetry has been used in photogrammetry for decades In the early years the laser was used to record ground pro les along the ight path in a near nadir view These pro les were used in photogrammetric processing to strengthen the solution to the adjustment of photogrammetric measurements There are basically two types of laser scanning systems pulsed laser and waveform The pulsed laser system is the predominant form used for topographic mapping A discrete signal is emitted from the laser and one or more return signals are recorded Waveform systems use a continuous signal and a continuous or nearly so signal is received AeroScan System Figure 2 AeroScan lidar System Different scan frequencies can be used by lidar scan systems see gure 2 for an example system The selected frequency will depend on the application Project requirements as well as 1 Laser is an acronym for Light Ampli cation by timulated Emission of Eadiation Z In fact the technology used in lidar has been used for 
numerous scienti c studies for over 20 years Laser ScanningLidar Page 3 eye safety will determine the ight altitude Typically this falls within a range of 100 to 5000 meters at speeds of 75 to 250 kilometers per hour3 Basic Principles Lidar as it has been already pointed out is really a system which integrates three basic data collection tools laser scanner GPS and IMU4 The laser scanner is mounted in an aircraft just like an aerial camera It sends out an infrared laser signal actually there can be anywhere from 10000 7 150000 pulses per second to the ground that is then re ected back to the instrument The number of pulses sent out by the scanner is referred to as the pulse repetition rate PRF and is measured in kHz Thus 10 kHz means that 10000 pulses are being emitted per second The time it takes the laser light to complete this trip is recorded Hence objects closer to the aircraft will return faster than the signals that are re ected from objects farther away from the vehicle Since the signal made the trip from the vehicle to the ground twice once to the ground and then back to the aircraft the total distance is divided by two and this is then multiplied by the speed of light to obtain the distance There are two different kinds of lasers used in laser altimetry depending on the surface being measured Systems used over ground so called typographic lasers utilize the infrared portion of the electromagnetic EM energy spectrum For bathymetric laser altimetry surveys the bluegreen portion of the EM spectrum is used The reason is that little or no re ectance would be sensed by the lidar unit Another difference is that the bluegreen laser usually frequencydouble the wavelength This makes the determination of the bathymetric depth easy to calculate since one pulse is re ected off the surface and the other off the seabed oor The depth of the water is the difference in the two returns Maume 2001 There are several types of scanners used in the industry today Wehr and Lohr 
1999 describe four such systems as shown in figure 3 These include the oscillating mirror figure 3a the nutating mirror which is also called the Palmer scan figure 3b the fiber scan figure 3c and the rotating polygon figure 3d The nutating mirror Palmer scan consists of a de ecting mirror oriented such that the angle between the laser beam and the scanner shaft is oriented at 450 see figure 4 from Wehr and Lohr 1999 The mirror is attached to the scanner shaft but at an angle SN As the scanner shaft rotates the mirror is nutated The result is an elliptical scan pattern on the ground The size of the scan is also shown in figure 4 where the units are based on the angle SN By multiplying the coordinate by SN one can obtain the actual angle The advantages of the nutating mirror is that most of the ground points are scanned twice which adds redundancy The ground point is imaged in the forward and backwards view This is very useful in calibrating the scanner and the sensor orientation 3 httpwww airborne w quotP htm accessed 5312001 4 An important piece of equipment used by the scanner is a cooling system which is sometimes identified as the fourth component of a lidar system Laser ScanningLidar Page 4 3 Fiber Swimh Fiber f Polygon Emittcd Re ected Fxnittcd Signal Signal 0 d Figure 3 Mirror scan systems for laser scanning from Wehr and Lohr 1999 Ranging Q Unit Figure 4 Palmer scanner and scan pattern from Wehr and Lohr 1999 Most lidar systems utilize a rotating mirror to collect their scanned data The mirror sits in front of the laser and rotates in a sweep motion perpendicular to the direction of ight When the mirror rotates left to right and then back it is referred to as a sawtooth scanner One of the disadvantages of the moving mirror is that the rate of movement is not a constant As the mirror nears the end of the scan it slows down then stops reverses direction and nally speeds up This type of movement besides adding strain to the mechanics of the system affects the 
positional accuracy of the system (Fowler, 2001).

One way around this problem is to use a rotating polygon, which turns in only one direction, thereby fixing the speed of the prism movement; the polygon moves at a constant rate (see figure 3). The fixed stop positions indicate the extent of the swath. Additionally, since all the data are collected in one direction, there might be a bias in the measurements that would not be distinguishable unless there was a field check (Fowler, 2001).

We should recognize that the lidar signal is not a point but rather a footprint. One of the advantages of a laser signal is that the beam is very narrow, but it does get larger the farther away the system is from the source, and it also becomes distorted away from nadir. The signal footprint on the ground is typically 2 to 10 feet across. The collected data will consist of a herringbone pattern of spot elements (figure 5)⁵. The scan rate must be sufficiently fast to prevent any unwanted gaps in the data; this allows for a good, uniform distribution of data over the project site.

[Figure 5: Sample lidar scan.]

Many vendors feel that the mirror-type laser scanning system yields more accurate data. To compensate for the slowing-down effect, software is used to correct for it and to disregard data at the ends of the swath. For example, most data providers may advertise that they have a swath width of 3000' but use only the middle 2500' (Fowler, 2001).

For a lidar system to function properly, it is important that accurate timing devices be employed in the collection process. It is important to know the time when the GPS position was measured, the time the IMU data were recorded, and the times that the laser signal was sent and, of course, returned.

⁵ http://www.[…].htm, accessed 5/21/2001.

Since lidar consists of three separate and distinct components, it will be impossible to set the timing of each component to match the other components. This will require processing. The GPS receiver will
generally use a 1-second sampling rate, meaning that the location of the receiver will be determined every second during the flight. But the aircraft could be traveling at more than 50 m per second, so the location of the laser scan must be interpolated between the sample intervals. The same situation occurs with the inertial measuring unit, although its sampling rate will be much higher. The IMU provides the sensor orientation data, namely the pitch, roll, and yaw angular values.

The problem of using three different components in the measuring process is exacerbated by the fact that they measure in different reference frames, while results need to be reported in a ground reference frame. It is beyond the scope of this course to discuss these differences. Suffice it to say that the data can be transformed to the ground reference frame provided that control exists on the ground; this control is used to determine the parameters required to transform the airborne coordinates to the ground system.

Laser Scanner

The laser is an important component of the lidar system. It takes electrical and chemical energy to create an optical energy output. The biggest problem in this conversion is the loss of energy, since the outputted laser signal will represent only about 1/10 of the inputted energy. Nonetheless, it will produce a signal with several desired properties. These include (Sizgoric, 2002):
• high radiance L (high energy density, highly collimated)
• short wavelength λ
• narrow spectral width Δλ (pure color)
• short pulse duration τ
• high pulse repetition rate (PRF)
• small laser beam divergence

The advantages of these properties for lidar are as follows (Sizgoric, 2002). A large radiance L and a small wavelength yield a small sample target area; a large L also allows a higher flying height. Higher vertical accuracy is a product of the shorter pulse duration τ. A high pulse repetition rate provides a high sampling density. Finally, using the narrow spectral width Δλ and controlling the radiation source yields the ability to operate either during the day or at night.

Processing

Raw lidar data are post-processed after the initial aerial flight is completed. The slant distance, as described above, is calculated for each returned signal. These data are then corrected
ability to operate either during the day or night Processing Raw lidar data are post processed after the initial aerial ight is completed The slant distance as described above is calculated for each returned signal This data is then corrected Laser ScanningLida Page 7 for atmospheric effects The roll pitch and yaw are determined from the IMU and these angles are then applied to the slant distances to correct for the orientation of the scanner during data collection The GPS data are processed separately and are then imported into the lidar processing system Using the position of the sensor and the swath angle during the individual scan the elevation of the ground point can be easily computed For example look at the geometry in figure 6 XXjH Sensor Platform Elev Datum Figure 6 Geometry of laser scan pulse Lets assume that the laser signal was sent out at a 100 angle from the nadir along the swath width at 10 Further assume that the sensor orientation is perfect no pitch roll or yaw effects and that the distance measured by the laser scanner was found to be 138750 m Then using simple trigonometry the vertical distance from the sensor to the ground at the elevation of A VA is VA DA cos OLA 138750 m cos 10 136642m If the GPS on board the aircraft determined the location of the sensor at that instant the signal was sent with state plane coordinates and orthometric height as Xsensor 126847113 m YSensor 58861447 m Zsensor 180659 m then the ground elevation of point A is Laser ScanningLida Page 8 ElevA HSensor VA 180659 m 136642 m 44017 m In a similar fashion the X and Y state plane coordinates can also be determined For example the horizontal distance HA from the vertical line to the ground point can be computed using basic trigonometry HA DA s1n 01A which for our example becomes HA 138750 msin 10 24094m If we assume that the aircraft is ying due north along the Yaxis and the scan angle is to the right to the east of the vertical line then the Ycoordinate would remain the 
same and the X coordinate would become X XSensm HA 126847113 m24094m 126871207 m A The X Y H coordinates of ground point A are then 126847113 m 58861447 m and 24094 m respectively Hence each return has been georeferenced Most of the time the aircraft is not necessarily ying in a cardinal direction In this case the AX and AY coordinates that would be applied to the sensor coordinates is found using simple latitude and departure easting and northing computations This example of how the imaged point is georeferenced is a very simplistic view of what is happening within this measurement system The reality can be shown in gure 7 The coordinates x y 2 represent the laser beam coordinate system The origin is at the point where the laser signal was emitted ring point and the z axis is de ned as being in the opposite direction of the emitted laser beam to the ground point Schenk 1999 The distance from the laser to the ground point is the range which can be de ned by the range vector This relationship would be ne if the laser measurement system was xed Since it moves for every range measurement one needs to de ne the laser reference system as x y 2 As with the laser beam system the origin of the laser reference system is the ring point of the Laser ScanningLidar Page 9 laser The orientation of the laser reference system depends upon the type of scanner being used For example with the nutating mirror system the zaxis is de ned as being collinear to the scanner shaft rotation axis The xaxis is de ned as the direction towards the starting position of the rotating mirror With an oscillating mirror scanner the xz plane is defined by the scan plane where the zaxis defines the line where the scan angles is zero Schenk 1999 Laser Scanner quotZero Positionquot P Ground Point Origin of Mapping Frame Coordinate System X Figure 7 Relationship between the laser scanner and ground point from Schenk 1999 Transforming the data from the laser beam system to the laser reference system is 
performed using the rotation matrix R. No translation is required because both systems share the same origin. The scan angle at the instant the laser pulse is emitted defines the rotation angle for R; for the nutating mirror system, two angles are required (Schenk, 1999).

The final process is to locate the center of the footprint of the laser signal on the ground (point P in figure 7). This is done with another rotation matrix, Re, which defines the angular components of the exterior orientation of the laser; this is the same as the exterior orientation used in photogrammetry. One therefore takes the vector from the laser to the ground point, transforms it into the laser reference system, and then transforms that into the ground coordinate system. This is then added to the vector between the origin of the ground coordinate system and the laser to obtain the vector from the origin of the ground reference system to the point. This is represented mathematically as

  p = c + Re R r

where c is the positional component of the exterior orientation of the laser, i.e. the vector between the origin of the ground reference system and the laser platform. The exterior orientation elements are usually known from the GPS and inertial measurements (Schenk, 1999).

This equation for reconstructing the surface point P is almost identical to how photogrammetry solves the same problem. Without derivation, the corresponding expression from photogrammetry can be shown to be (Schenk, 1999)

  p = c + λ Re r

Thus the only difference is the scale factor λ.

When a laser signal is sent to the earth, it can easily hit more than one object. For example, figure 8 shows a pulse heading toward the ground: part of the signal first encounters the foliage, while the rest of the signal hits the ground. Depending on how the system is set up, the sensor can collect both of these data pulses. These are commonly called the first pulse, or first return (the portion of the signal striking the foliage), and the last pulse
or last return (the portion of the signal hitting the ground). In some systems it is possible to collect up to 5 different returns. For topographic mapping purposes it is generally the last return that is provided to the client.

[Figure 8: A laser signal hitting multiple objects during its travel.]

Collecting just the first return can lead to problems. For example, figure 9(a) shows the situation where laser data are collected over bare ground: the corresponding elevation data and the surface fitted through them match the lidar geometry at the time of data collection well. Figure 10 shows the same sequence over a tree canopy, where the returns come from the canopy as well as the ground. The elevation data are shown in figure 10(b), and the surface is fitted to the points as indicated in figure 10(c); the comparison in figure 10(d) shows that the fitted surface does not match the original geometry. For DEM/DTM data products, a "bare earth" (or "bald earth") ground surface is desired. This requires the removal of vegetation and man-made objects from the data, which is not a trivial problem. In many instances it has been reported that automated removal of data points that are above the ground can be up to 90% effective. This would represent a somewhat ideal situation, since steep terrain with a lot of vegetation is much more time consuming to train the software for. Removing point data after the automatic cleaning has been performed may require upwards of 90% of the post-processing time.

Not only can a laser altimetry system measure multiple signal returns; some are even capable of measuring the intensity of the return signal. The significance of this capability is that the user can now determine how much energy an object reflects. Since different features reflect different proportions of energy, it could be possible to use the intensity to differentiate features, which could be an important component of automated feature extraction.

[Figure 9: Lidar data collected over bare ground: (a) lidar data acquired over open ground; (b) fitting a curve/surface through the elevation lidar points; (c) comparing the curve-fitted surface; (d) the lidar geometry at the time of data collection (good match).]
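The automated removal of above-ground points described above can be illustrated with a minimal sketch. This is a deliberately crude, hypothetical filter (grid the last returns and keep only points near the lowest elevation in each cell), not any vendor's production algorithm; real bare-earth classifiers are far more sophisticated, especially in steep, vegetated terrain.

```python
import numpy as np

# Crude bare-earth screening sketch (illustrative only): grid the last-return
# points, take the lowest elevation in each cell as a provisional ground
# estimate, and reject points more than a tolerance above that minimum as
# likely vegetation/structure returns.
def screen_bare_earth(xyz, cell=5.0, tol=0.5):
    xyz = np.asarray(xyz, dtype=float)
    ix = np.floor(xyz[:, 0] / cell).astype(int)   # grid column of each point
    iy = np.floor(xyz[:, 1] / cell).astype(int)   # grid row of each point
    keep = np.ones(len(xyz), dtype=bool)
    for key in set(zip(ix.tolist(), iy.tolist())):
        mask = (ix == key[0]) & (iy == key[1])
        zmin = xyz[mask, 2].min()                 # provisional ground in cell
        keep[mask] = xyz[mask, 2] <= zmin + tol   # flag points above tolerance
    return keep

# Example: three returns in one 5 m cell; the high return is a canopy hit.
pts = [[1.0, 1.0, 440.2], [2.0, 2.0, 440.4], [3.0, 1.5, 452.3]]
```

Shrinking the cell size follows the terrain more closely but lets more low vegetation through, while growing the tolerance keeps more true ground on slopes at the cost of canopy returns; these are the same trade-offs that make automated cleaning only partially effective.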
[Figure 10: Lidar data collected over a tree canopy: (a) lidar data acquired over the tree canopy; (b) lidar return signals; (c) fitting a curve/surface through the elevation lidar points; (d) comparing the curve-fitted surface to the lidar geometry at the time of data collection (poor match).]

Advantages and Disadvantages of Lidar

There are several advantages of lidar data. First, it is a very versatile technology that has been used for many purposes, such as atmospheric studies, bathymetric surveys, and glacial ice investigations, just to mention a few. It is finding a lot of application in terrain mapping. Here we see that this technology is a very cost-effective method of terrain data collection. It offers high-precision, high-point-density data for DTM modeling. Moreover, it has been shown to accelerate the project schedule by upwards of 30%, because the DTM data processing can begin almost immediately. It is theoretically restricted neither to daylight nor by cloud cover, unlike aerial photography, although if aerial imagery is being collected simultaneously, as is commonly done, then those limitations will affect the particular project. Unlike photogrammetric methods, it is capable of mapping areas characterized by low contrast, low relief, and relatively dense vegetation (Flood, 2002). In coastal zones and forest areas, where it is extremely difficult to locate terrain points in the imagery, lidar is considered a superior data collection tool over conventional photogrammetric techniques.

There are several disadvantages as well. While the data collection appears to be cost competitive, the up-front cost of equipment acquisition is very significant, on the order of $1 million. This could be a hard sell, since amortization would have to be spread over a very short period: the technology, like that of computers, will probably experience a lot of change over the next two to three years. That is a lot of imagery to collect over a short period of time. On a
project basis, this means it is a relatively high-cost method of terrain data collection. Being a relatively new terrain measurement tool, no real industry standards exist, although there are current efforts under the American Society for Photogrammetry and Remote Sensing to define a set of best practices, and the Federal Emergency Management Agency (FEMA) has developed guidelines and specifications for the use of laser altimetry in its Flood Hazard Mapping program (FEMA, 2003). Moreover, there is a general lack of knowledge of the technology, especially of its capabilities and limitations, although this is changing as the technology matures.

For example, lidar provides surface data at a regular sampling rate⁸; in other words, it is not possible for the user to point the laser toward a predefined area to collect the data. The implication is that shorelines, stream channels, ridge lines, and other types of breaklines within the terrain may be missed, which, if not accounted for in post-processing, could yield a surface that departs from the true terrain characteristics. In terms of processing, there are some problems related to robust, efficient feature extraction, such as bald earth and breaklines (Flood, 2002). For example, if there is a lot of vegetation on the terrain, removal of last-return points that are not on the bald earth will be required. This will create holes within the terrain model, which results in an even more irregular ground point spacing.

⁸ A regular sampling rate does not equate to a regular interval on the ground.

Misconceptions

Lidar is in some respects similar to RADAR, and some have attributed characteristics associated with radar to lidar systems. While lidar data, like RADAR data, can be collected 24 hours a day, very often aerial photography is collected simultaneously, which limits the time in which data collection can take place. While both RADAR and lidar are active sensing systems, they use different signals; therefore, atmospheric conditions will affect
the two different sensors in totally different manners. Hence, laser altimetry is not an all-weather data collection system: the terrain must be visible within the electromagnetic range of the scanner. While some fog is manageable, the system is generally not usable under heavy fog or extended cloud cover.

It is often stated that lidar can see through vegetation. This is not true at all. What it can do very well is see around the vegetation by exploiting holes that may exist within the canopy. It is important to recognize that a hole in the canopy may exist at one angle but be completely blocked only a few inches off in any direction. As the laser system becomes more inclined, it is more likely that branches and tree trunks will be encountered, thereby blocking the signal to the bare earth (Turner, 2001). As a general rule of thumb, if a person can stand below the tree canopy and see sky through the foliage, then there is a good possibility that a laser measurement can be made through that hole in the trees.

Laser Altimetry Errors

Schenk (1999) presents a very good look at the effects of systematic errors on a laser system. While the approach he uses is grossly simplified, it provides an excellent conceptualization and visualization of how these errors affect surface reconstruction. He looks at two systematic errors: a positional error, which he defines as q, and an attitude error (figure 11). The attitude error involves two angles, which are utilized to form the rotation matrix Ra. An example source of a positional error is an error in the time synchronization between the GPS clock and the laser signal generator: an aircraft traveling at 100 m/s with a 5 ms timing error will introduce a positional error of 0.5 m. An example of an angular error is the mounting bias.

The effects of these errors are also shown in figure 11. The angular error is represented by the vector a; it moves the laser footprint from A to B. The positional error is depicted by the vector q. As one can see from the
figure, it moves the footprint from B to C. The vector e is the error due to both of these effects (Schenk, 1999). The total error can be shown to be

  e = q + a

Defining the angular error as a = (Ra − I)r, the total error becomes

  e = q + (Ra − I)r

[Figure 11: Effects of positional and angular error on the lidar footprint (from Schenk, 1999).]

The reconstructed ground point can now be defined, using the formula presented earlier, as

  p′ = p + q + (Ra − I)Rr

[Figure 12: Effect of a timing bias in the laser measurements; real-world depiction versus the results due to the timing bias (from Schenk, 1999).]

Schenk (1999) first looks at the effects of positional errors from a laser profile over horizontal terrain. Figure 12 depicts a vertical object, like a building, and shows that the effect of a timing error results in a shift s. This shift can be defined as

  s = Δt · v

where Δt is the timing error and v is the aircraft velocity. While the effect looks very simple to correct, the way a flight mission is actually run produces a compounded effect. Since imagery is acquired in flight paths that go back and forth over the project area, the positional error has a much different shape than along a single flightline (figure 13): the shifts are in opposite directions. Designating the azimuth of the flightline by α, the positional error can be shown to be (Schenk, 1999)

  q = |q| [sin α, cos α]ᵀ

Thus this positional error causes an angular distortion, unless the unlikely situation occurs where all of the flight lines are flown in the same direction.

[Figure 13: Typical flight pattern for data acquisition (from Schenk, 1999).]

While a positional error over flat terrain creates a shift of the reconstructed surface within the same plane, the effect over sloping terrain is much different. Schenk (1999) shows this effect in figure 14: the result is both a positional shift s′ and an elevation error ΔZ, represented by

  ΔZ = s′ · tan γ

where γ is the slope gradient. In this situation, the positional shift is along the slope of maximum gradient. When the flight is along any line oblique to the line of maximum gradient (figure 15), the positional shift can be expressed as

  s′ = |q| cos(α − β)

[Figure 15: Flight trajectory and slope gradient relationship (from Schenk, 1999).]

where α is the azimuth of the flight line and β is the azimuth of the slope gradient. The elevation error becomes

  ΔZ = |q| cos(α − β) tan γ

As Schenk (1999) points out, the greatest elevation error occurs when cos(α − β) = ±1; this happens when the flight line coincides with the maximum slope gradient. Additionally, as one would intuitively suspect, when the flight direction is perpendicular to the maximum slope gradient, the elevation error is zero.

As with the positional error, Schenk (1999) evaluated the effects of angular errors for laser systems by looking at the laser profile. The effects on a horizontal surface are shown in figure 16. The angular error is designated as δ and the range as r. If we assume that the angular error is constant throughout the flight, then the magnitude of the positional change, depicted by the vector a, will depend on the range: the magnitude of a will be greater for distances farther away from the laser system. Thus we see that the shift in figure 16(b) will be less for points on the top of the building than for those at ground level. This means that the reconstructed flight path is not a straight line (Schenk, 1999). It should also be evident that the direction of the shift in the reconstructed surface will be a function of the flight direction.

[Figure 16: Effects from a systematic angular error (from Schenk, 1999).]

In figure 17 the effect of an angular error from a laser scanner profile is depicted as the aircraft flies uphill over a sloping surface. The aircraft maintains a constant flying height during the flight, and the direction of flight is along the maximum gradient of the slope. The effect is that the range measurements become smaller, so the footprint displacement caused by the angular error likewise becomes
smaller The effect is that the elevation error also gets smaller and this results in a slope error Av This is found from the following relationship Schenk 1999 Laser ScanningLidar Page 20 tanycos 5 tany ltan ys1n5 The slope error becomes A7 tany tany cos539 tany l ltanys1n539 If we assume that the angular error is very small then one can use the approximations cos 8 m l andsin 539 539 As Schenk 1999 indicates there are no additional errors introduced with this approximation tanzy539 ltany539 Figure 17 As a pro le laser scanner moves uphill a slope error A7 in the reconstructed surface is created by an angular error 5 from Schenk 1999 The effects of an angular error shown on a horizontal surface is given in figure 18 This angular error 5 is defined as the angle measured in the error plane which is comprised of the range vector and the displacement vector This angular error is shown geometrically in figure Laser ScanningLida Page 21 19 Designating the azimuth of the trace of the error plane by s and the azimuth of the scan direction ast the effective angular error is de ned as the projection of the error plane into the scan plane Schenk 1999 Mathematically this is de ned as Figure 18 Slope error due to angular error in the scanner from Schenk 1999 539 5008 E Referring to gure 18 the displacement vector is shown by the vector a As one can see as the range becomes longer the corresponding displacement vector also increases The error is nearly perpendicular to the range vector Therefore the slope error Av can be shown to be Schenk 1999 A7 sin 5 The last example that Schenk 1999 discusses is the effect of a systematic error over a tilted surface gure 20a The amount of displacement designated by a is a function of the range r and angular error 5 The problem is that the error is not a constant but changes from point to point This means that the slope of the reconstructed surface is incorrect gure 10a The displacement error can be depicted as a lrl cos 8 ttan 5 Laser 
ScanningLida Page 22 Flight Trajectory Figure 20 Angular error effect on tilted ground from Schenk 1999 From this equation it should be evident that the maximum error will occur when the angular offset coincides with the scan plane 8 1 Similarly when the angular offset and the scan plane Laser ScanningLida Page 23 are perpendicular to each other then there is no slope error Figure 20 b shows the effect of the reconstructed surface when the slope on the ground changes direction Since the magnitude of the error is a function of range one can see from figure 20 that as the range increases the amount of error increases The slope error can be defined as Schenk 1999 A7 tan 5 tan2 7 From this discussion Schenk 1999 drew the following conclusions The simplest situation involves a pro ling system and horizontal surfaces Intuition tells that causes a horizontal shift that has no in uence on the reconstructed surface The positional error is a vector quantity and as such depends on the ight direction This in turn causes shift errors of variable direction and magnitude That is reconstructed horizontal surfaces are distorted We also recognized that horizontal surfaces reconstructed from a scanning system with angular errors mounting bias have a slope error Sloped surfaces present a more complex scenario Here the reconstruction error depends on several factors such as slope gradient its spatial orientation ight direction and systematic positional andor angular error In most cases the errors cause a deformation of the surface That is the relationship between the true and reconstructed surface cannot be described by a simple similarity transformation Calibration In order to provide an accurate and acceptable product it is essential that the errors within the lidar system be known and accounted for in the processing of the data This is achieved through calibration The importance of calibration is very evident once one realizes that the system has no inherent redundancies The INSGPS 
provides the position and orientation of the lidar system A single range measurement is then used to georeference the ground point Accuracies There are a number of very optimistic claims as to the accuracy of lidar data To fully assess the accuracy one must consider the errors inherent in the three components of the system laser scan GPS and IMU It is conservatively estimated that the accuracy of lidar as determined from error propagation is about 15 centimeters in elevation It is very difficult to assess horizontal accuracy with laser altimetry but the general thought within the industry is that it is about 15 times the vertical accuracy This can be thought of as typical results from lidar Laser ScanningLidar Page 24 surveys assuming that the system is properly calibrated and functioning correctly and that the surface terrain conditions are ideal This latter assumption is almost never correct As a rule of thumb horizontal accuracy is often claimed to be l2000th of the ying height Vertical accuracies of better than 15 cm are obtainable when the sensor altitude is below 1200 m and up to 25 cm when the operating altitude is between 1200 m and 2500 m Brinkman and O Neill 2000 There are some additional general rules that pertain to the accuracy of lidar Brinkman and O Neill 2000 The spot spacing is much denser for slower aircraft 0 A more reliable or accurate DTM is available through a denser spot spacing since there is more data collected by the system 0 The highest accuracy heights occur at the nadir and decrease as the swath angle increases 0 A smaller width will yield a denser spot spacing problem But the fact of the matter is that lidar does not give the user any redundant information This means that the accuracy of a point can only be inferred Thus it is critical that the system is well calibrated and that errors are understood and taken into account in the postprocessing Schenk 1999 In the last section lidar errors were discussed Here we will look at the errors 
found in a DTM/DEM derived from lidar measurements. There are basically two sources of error (Raber et al. 2002): data interpolation, and the effect of non-terrain points on the digital terrain model.

As it relates to interpolation error, there are two major sources: post spacing and vegetation point removal. Post spacing pertains to the distance between lidar returns; this is a semi-systematic error (Raber et al. 2002). The distance is a result of the flying height of the aircraft, the velocity of the aircraft, the laser pulse rate, and the laser scan angle.

Interpolation errors are also affected by the vegetation point removal process. Raber et al. (2002) indicate that many of the algorithms used for removal of vegetation points use a statistical trend surface to remove the cover. The process implies that the interaction is with the vegetation cover instead of the bare earth, which is the desired output of the laser survey. The point removal means that data voids are present in the terrain data, thereby making the interpolation process weaker. This becomes even more uncertain when terrain points are mistakenly removed from the data set because the algorithm classifies them as vegetation points. The interpolation usually results in an underprediction of the elevations, since steep slopes become smoothed and small peaks are removed; additionally, these errors exhibit a systematic form due to the smoothing. Finally, if vegetation points are not removed from the data set, then the reconstructed DTM will be overestimated (Raber et al. 2002).

Lidar Products

Level 1 - Basic (All Points). All of the post-processed lidar data, properly georeferenced, but with no additional filtering or analysis. Suitable for organizations with in-house data processing tools and capabilities, or who work with a third-party data processing service bureau. Cheapest and fastest produced.

Level 2 - Low Fidelity (First Pass). Using either proprietary or third-party software tools, the data provider automatically filters the point cloud into points on the ground (the bare earth) and points that are not ground. There is generally no classification of the non-ground points into separate feature types (buildings, trees, etc.), and the ground points generally include some percentage of residual features not extracted by the automated classification algorithms. Suitable for organizations with in-house data processing tools and capabilities, or who work with a third-party data processing service bureau. A common deliverable; usually the same cost and schedule as All Points.

Level 3 - High Fidelity (Cleaned). A fully edited data set that has been extensively reviewed by an experienced data analyst to remove any artifacts left by the automated classification routine and provide a 99% clean terrain model. The low fidelity data are analyzed and classified manually, usually with supporting imagery. A labor-intensive product: moderate cost, but with longer delivery schedules, especially on larger projects.

Level 4 - Feature Layers. Further processing using a combination of automated and manual classification to identify features of interest, such as power lines or building footprints. Generally completed in-house, or using a service bureau or third-party data processor that specializes in the desired application and has experience or customized tools for specific types of feature extraction. Usually a more expensive product than the high fidelity terrain model.

Level 5 - Fused. A further refinement of the lidar data product, achieved by the fusion of the lidar-derived elevation data set with information from other sensors. This can include digital imagery, hyperspectral data, thermal imagery, planimetric data, or similar sources. Generally the most information-rich product, with the highest cost.

Table 1. Lidar products (from Flood 2002).

Flood (2002) has stated that there are currently no accepted definitions of data products for laser scan data. Nonetheless, he has defined five different standard lidar products,
identified according to the level of processing performed for product delivery (see table 1).

The Level 1 data product is the most basic data set, consisting of all the points collected in the lidar project. The client receives a data set of point clouds that have been georeferenced; no other processing is performed on the data. This data type is finding increased usage due to the fact that a number of software packages are now available to give the user the opportunity to manipulate and extract information from the point cloud data. In the past this capability was only possible through proprietary software controlled by either the instrument manufacturer or the data collector. As Flood (2002) indicates, this product should find even more appeal once a binary data exchange format and analysis tools are adopted as industry standards.

The next class is Level 2 data, described as low fidelity or first pass data. Here the data collector performs a preliminary classification on the point cloud data. The client receives two data sets: ground (also called bare earth or bald earth) and non-ground. Generally the non-ground data set does not contain any further classification of feature type (buildings, vegetation, etc.). This data product is considered low fidelity because classification anomalies could still exist within the data. Since this level is almost fully automated, this type of processing is most efficiently done by the data provider. There is a great deal of effort within the industry to create algorithms that are more robust and efficient; the automated filters therefore often integrate object information such as the intensity of the laser pulse return, simultaneously acquired digital imagery, or direct spectral tagging of elevation data (Flood 2002).

An additional issue related to the filtering is its aggressiveness. For example, a more aggressive filter setting will result in a cleaner data set and hence a higher level of fidelity in the terrain data.
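The aggressiveness trade-off just described can be sketched with a toy ground filter. This is only an illustration, not any vendor's algorithm: the grid-cell scheme, the `classify_ground` function, and the `threshold` parameter are all assumptions made for the example. A smaller threshold is the "more aggressive" setting: it returns a cleaner ground set but starts rejecting real terrain points on slopes.

```python
# Toy ground/non-ground filter (illustration only, not a production algorithm).
# A point is labeled "ground" if its elevation is within `threshold` of the
# lowest return in its grid cell; shrinking the threshold makes the filter
# more aggressive (cleaner bare earth, more misclassified terrain points).

def classify_ground(points, cell=5.0, threshold=0.5):
    """Label each (x, y, z) point as ground (True) or non-ground (False)."""
    lowest = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        lowest[key] = min(lowest.get(key, z), z)
    return [(p, p[2] - lowest[(int(p[0] // cell), int(p[1] // cell))] <= threshold)
            for p in points]

points = [
    (1.0, 1.0, 10.0),   # bare earth return
    (2.0, 1.5, 10.3),   # low vegetation on a gentle rise
    (3.0, 2.0, 18.0),   # tree canopy return
]
for (x, y, z), is_ground in classify_ground(points, threshold=0.5):
    print((x, y, z), "ground" if is_ground else "non-ground")
```

With `threshold=0.5` the low-vegetation point survives as "ground" (a residual feature, i.e., lower fidelity); with the aggressive `threshold=0.1` it is stripped out, at the cost of also stripping genuine terrain points in similar positions.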
The price of this is a higher level of misclassification. On the other hand, a less aggressive filter setting will decrease the misclassification errors but create a lower level of fidelity in the terrain data. If the user will be performing a lot of processing in-house, then Flood (2002) recommends this last level of data processing. He also recommends that the client request documentation on the filter settings; without it, it would be very difficult to repeat the results of the processing.

Unless the area contains only a minimal amount of ground cover or man-made features, artifacts and data misclassification will exist within the data. Thus Level 3 pertains to the high fidelity, or cleaned, data. Flood (2002) states that common problems remaining in the preliminary product include poor ground model fidelity in areas of low, dense ground cover; the inability to accurately capture sharp grade breaks, such as low ridges or sharp cuts; misclassification of man-made features, such as bridges; and an inability to discriminate tree cover from topography in areas of sharp relief. Correcting these problems is a very labor-intensive operation. While 80-90% of the lidar processing can be automated, what remains takes the bulk of the effort to clean. This requires either ancillary data or a highly trained technician. This level will not only increase the cost of the project and add to the schedule, but it can easily create an impediment to the throughput of lidar data.

The next level is Level 4 - Feature Extraction. Upon completion of Level 3 processing, important above-ground information can be extracted from the point cloud data. This is done using automated and/or manual application-specific feature extraction tools. As Flood (2002) indicates, it is important to specify in the contract what the deliverables will be and to verify the data collector's capabilities and experience.

The Level 5 processing stage is called Fused. This is the most information-rich product, where lidar data are fused with other sensor data (Flood 2002).

Comparison of Lidar with Photogrammetry

Schenk (1999) presents a comparison of lidar and photogrammetry; a summary is shown in table 2. When comparing the flying height, photogrammetry offers a significant advantage: generally lidar is restricted to about 1000 m. Because of the wide-angle lenses used in aerial cameras, the angular field of view is much wider than the swath width in lidar. These factors mean that lidar will require significantly more data acquisition, increasing the flying time by a factor of three to five. The sampling size is also very different. Assuming a flying height of 1000 m and a focal length of 150 mm, Schenk (1999) shows that the sample size (called the ground pixel size) for photogrammetry is 15 cm, whereas the laser scan footprint could be on the order of one meter. Moreover, the distance between footprints is much larger than the footprint size, which results in an irregular ground pattern. If one considers the cost to purchase lidar and the disadvantages it has when compared to photogrammetry, it is clear that its data acquisition costs will be higher than the same costs using photogrammetry. The one advantage that lidar has over photogrammetry is that weather conditions are much less restrictive; thus some of the cost advantage that photogrammetry has in data acquisition is diminished, since aircraft crews will not have to wait nearly as long for the best flying conditions.

Table 2. Comparison between lidar and photogrammetry, contrasting data acquisition (flying height, swath, sample size) and surface reconstruction (from Schenk 1999).

One of the more significant advantages of lidar over photogrammetry is the method of surface reconstruction. Height measurements in photogrammetry require that the surface point be identifiable on at least two photographs; in other words, stereoscopy is needed. This can be very difficult when vegetation or tall buildings are present within the imagery. Laser scanning, on the other hand, can determine the height of a surface point
through a single range measurement. This means that the object point needs only to be visible along a single line. This difference is a double-edged sword, in that photogrammetry inherently provides redundancy whereas there is no redundancy using lidar. On the other hand, measuring surface points by photogrammetry requires reasonably good contrast, or texture, to make the measurement. As Schenk (1999) points out, surfaces like sand, snow, ice, and water bodies can create formidable obstacles for surface measurement using photogrammetry; lidar works very well under these conditions.

Photogrammetry can employ two different methods for determining the exterior orientation of the camera. Traditionally, ground control points are established on and around the project area, and new points are determined in the object space through an essentially interpolative process; Schenk (1999) calls this indirect orientation. Today, on many photogrammetric flights, direct sensor orientation is employed: GPS and INS are used to measure the exterior orientation. Determination of new points using direct orientation without ground control can be considered an extrapolation process. The significance lies in the error propagation, where extrapolation behaves much worse than interpolation (Schenk 1999). Another important difference between these two methods is the effect of the interior orientation. In the conventional approach, the adjustment of the observations partially compensates for interior orientation effects. This is not true with direct orientation, where the results are fully affected by the interior orientation (Schenk 1999).

Both photogrammetry and laser altimetry are competitive in many ways, since both are used for building and road extraction, open field detection, and a host of other tasks. But neither by itself is useful in all situations. For example, difficulties encountered when using imagery include:

- Colors of certain features may not be distinguishable from the background colors, making feature identification difficult.
- Shadows can hide features.
- The extracted features are often incomplete.
- Grouping of extracted features can be very hard.
- It can be difficult to create a stereo or 3D relationship.

Similarly, difficulties in using lidar include:

- Trees can distort road and building shapes.
- The resolution may not be high enough.
- The extracted features are often incomplete.
- Certain objects, like roads, show no unique characteristics.

Conclusion

By all indications, lidar appears to be a technology that will considerably change mapping in the future. While it has been around for a long time, from a commercial applications point of view it is in its infancy. As such, it is difficult to gauge how it will impact the geospatial industry.

SURE 440 Advanced Photogrammetry: IKONOS Stereo Imagery

Geometric Characteristics

- Accuracy depends on the availability and usage of GCPs.
- Without ground control, accuracy depends on knowledge of the satellite ephemeris and attitude.
- Ephemeris is determined using on-board GPS and sophisticated ground processing of the GPS data.
- Attitude is determined by optimally combining star tracker data with measurements taken by on-board gyros, which measure relative attitude changes.

IKONOS Stereometric Accuracy

Stereo Product Type                          Horizontal (CE90)   Vertical (LE90)
Single stereo pair without ground control    25 m                22 m
Single stereo pair with ground control       2 m                 3 m

IKONOS Sensor Model

- Relates image coordinate space to object coordinate space.
- Can determine object coordinates from image coordinates.
- Pushbroom sensor: each image line is taken at a different instant of time.
- Attitude angles and perspective center position change from scan line to scan line.

Physical Sensor Model

- Complex, and difficult to implement in a COTS program.
- The Rational Polynomial Camera (RPC) model is used in lieu of the physical sensor
model.
- The RPC model accomplishes the objectives with great efficiency and no discernible loss of accuracy.
- The RPC relates object space to image space in the form of a ratio of two cubic functions of the object space coordinates.
- Separate rational functions are used to express the object-space-to-line and object-space-to-sample coordinate relationships.

Line RPC Model

l = Num_L(U, V, W) / Den_L(U, V, W)

where Num_L and Den_L are cubic polynomials in the normalized object space coordinates:

Num_L(U,V,W) = a1 + a2·V + a3·U + a4·W + a5·V·U + a6·V·W + a7·U·W + a8·V² + a9·U² + a10·W² + a11·U·V·W + a12·V³ + a13·V·U² + a14·V·W² + a15·V²·U + a16·U³ + a17·U·W² + a18·V²·W + a19·U²·W + a20·W³

Den_L(U,V,W) = b1 + b2·V + ... + b20·W³ (the same twenty terms, with coefficients b1 through b20)

Sample RPC Model

s = Num_S(U, V, W) / Den_S(U, V, W)

where Num_S and Den_S are cubic polynomials of the same form, with coefficients c1 through c20 and d1 through d20, respectively.

- U, V, W are normalized object space coordinates (latitude φ, longitude λ, height h):

U = (φ − o_φ)/SF_φ     V = (λ − o_λ)/SF_λ     W = (h − o_h)/SF_h

- l and s are normalized image space coordinates (line L and sample S):

l = (L − o_L)/SF_L     s = (S − o_S)/SF_S

- o_φ, o_λ, o_h, o_L, and o_S are the mean (offset) values; SF_φ, SF_λ, SF_h, SF_L, and SF_S are scale factors, for example

SF_λ = max( |λ_max − o_λ|, |λ_min − o_λ| )

and similarly for the other coordinates.

RPC Accuracy

- The loss of accuracy with the RPC model, compared with the physical IKONOS camera model as reference, is negligible in image space.

RPC Generation

- A least squares adjustment is used to determine the coefficients ai, bi, ci, and di from a 3D grid of points generated using the physical model.
- The 3D grid is generated by intersecting rays from a 2D grid of image points (from the physical camera model) with a number of constant-elevation planes.

COORDINATE TRANSFORMATIONS

The Center for Photogrammetric Training, Ferris State
University

BASIC PRINCIPLES

Two coordinate systems, UV and XY, share a common origin; the UV axes are rotated from the XY axes by an angle α. The slides carry the derivation out geometrically, resolving the coordinates of a point P along the rotated axes with the auxiliary distances ab, bc, and Pb, repeated use of tan α and cos α, and the identity sin²α + cos²α = 1. The result is

U_P = X_P·cos α + Y_P·sin α
V_P = −X_P·sin α + Y_P·cos α

The conversion from UV to XY can be developed in a similar fashion:

X_P = U_P·cos α − V_P·sin α
Y_P = U_P·sin α + V_P·cos α

or, in matrix form,

[X_P]   [cos α   −sin α] [U_P]
[Y_P] = [sin α    cos α] [V_P]

GENERAL AFFINE TRANSFORMATION

Normally shown as

x' = a1·x + b1·y + c1
y' = a2·x + b2·y + c2

There is a unique solution if three non-collinear control points are given (six equations in six unknowns).

Used in photogrammetry to:
- transform comparator coordinates to photo coordinates, and to correct film distortion;
- connect stereo models;
- transform model coordinates to survey coordinates.

Properties: parallel lines are carried into parallel lines; orthogonality does not have to be preserved.

Physical interpretation, with six parameters Cx, Cy, α, ε, Δx, Δy (two scales, a rotation, a skew, and two shifts):

x' = Cx·x·cos α + Cy·y·sin(α + ε) + Δx
y' = −Cx·x·sin α + Cy·y·cos(α + ε) + Δy

This can be related to the linear form of the model by

a1 = Cx·cos α      b1 = Cy·sin(α + ε)     c1 = Δx
a2 = −Cx·sin α     b2 = Cy·cos(α + ε)     c2 = Δy

GENERAL AFFINE TRANSFORMATION EXAMPLE

Four fiducial marks (1-4) and two image points, a and b, were measured on a comparator. The comparator observations and the known values from the camera calibration report are given below; lower case values represent observed comparator coordinates, while upper case represents the known camera calibration coordinates for the respective fiducials:

x1 = −111.734   y1 = −114.293   X1 = −113.007   Y1 = −112.997
x2 =  111.734   y2 =  114.293   X2 =  113.001   Y2 =  112.989
x3 = −114.289   y3 =  111.699   X3 = −112.997   Y3 =  113.004
x4 =  114.280   y4 = −111.749   X4 =  112.985   Y4 = −112.997

The measured points are

xa = −74.794   ya = 12.202   xb = −67.123   yb = 53.432

Solution: form the B-matrix and f-matrix, each fiducial contributing two observation equations:

    [x1  y1  1  0   0   0]       [X1]
    [0   0   0  x1  y1  1]       [Y1]
    [x2  y2  1  0   0   0]       [X2]
B = [0   0   0  x2  y2  1]   f = [Y2]
    [x3  y3  1  0   0   0]       [X3]
    [0   0   0  x3  y3  1]       [Y3]
    [x4  y4  1  0   0   0]       [X4]
    [0   0   0  x4  y4  1]       [Y4]

N = (BᵀB)⁻¹, and the variance-covariance matrix is Qxx = N. It is block-diagonal, with each 3×3 block approximately

[ 1.9573E−06   −1.603E−09    4.4019E−09 ]
[ −1.603E−09    1.9573E−06   2.44661E−09 ]
[ 4.4019E−09    2.44661E−09  2.50E−03   ]

With t = Bᵀf, the solution vector is Â = N·t, giving approximately

â1 = 0.99977   b̂1 = −0.01134   ĉ1 = −0.00211
â2 = 0.01140   b̂2 = 0.99977    ĉ2 = 0.01222

The residuals are v = B·Â − f, with magnitudes of about 0.001 in x and 0.016 in y, and the reference variance for the adjustment is σ̂0² ≈ 0.0005.

The transformed coordinates become

Xa = â1·xa + b̂1·ya + ĉ1 = −74.913      Ya = â2·xa + b̂2·ya + ĉ2 = 11.359
Xb = â1·xb + b̂1·yb + ĉ1 = −66.504      Yb = â2·xb + b̂2·yb + ĉ2 = 54.197

ORTHOGONAL AFFINE TRANSFORMATION

Imposing the condition of orthogonality (ε = 0) yields five parameters Cx, Cy, α, Δx, Δy:

x' = Cx·x·cos α + Cy·y·sin α + Δx
y' = −Cx·x·sin α + Cy·y·cos α + Δy

ORTHOGONAL AFFINE TRANSFORMATION EXAMPLE

This model is nonlinear in the parameters, so the adjustment is iterated. Each point i contributes two rows of partial derivatives (with respect to Cx, Cy, α, Δx, Δy) to the B-matrix:

[ xi·cos α    yi·sin α   −Cx·xi·sin α + Cy·yi·cos α   1   0 ]
[ −xi·sin α   yi·cos α   −Cx·xi·cos α − Cy·yi·sin α   0   1 ]

Introducing the intermediate values

a = Δy·tan α − Δx     c = Δx·tan α + Δy     d = cos α + sin α·tan α

the entries of the f-matrix take the forms (Xi − Yi·tan α + a)/d and (Yi + Xi·tan α − c)/d; numerically they come out to approximately

f = [1.273, −1.296, −1.267, 1.304, −1.292, −1.305, 1.295, 1.248]ᵀ

Then t = Bᵀf ≈ [11.85, 11.932, −1161.611, 0.009, −0.049]ᵀ, and the corrections Δ = N·t are applied to update the estimates of the parameters:

Ĉx = 0.9998   Ĉy = 0.9998   α̂ = −0.01137   Δx̂ = −0.0021   Δŷ = 0.0122

Second iteration (abridged): the solution converged. The variance-covariance matrix Qxx = N has diagonal terms of about 1.96E−06 for the scale parameters, 9.79E−06 for the rotation, and 2.50E−03 for the two shifts, with negligible off-diagonal terms (1E−09 and smaller). The second-iteration corrections are sufficiently small to assume that the current estimates are correct, based on the discrepancy values.

The residuals v = B·Δ − f have magnitudes of about 0.004 and 0.013, and the reference variance for the adjustment is σ̂0² ≈ 0.0004. The transformed coordinates become

Xa = −74.908   Ya = 11.361   Xb = −66.498   Yb = 54.191

ISOGONAL AFFINE TRANSFORMATION

Imposing the additional condition of equal scale, C = Cx = Cy, yields four parameters C, α, Δx, Δy:

x' = C·x·cos α + C·y·sin α + Δx
y' = −C·x·sin α + C·y·cos α + Δy

Recall that C·cos α = a1 = b2 and C·sin α = b1 = −a2. Then, dropping subscripts, the transformation is

x' = a·x + b·y + c
y' = −b·x + a·y + d

Back solution:

x = [ a·(x' − c) − b·(y' − d) ] / (a² + b²)
y = [ b·(x' − c) + a·(y' − d) ] / (a² + b²)

ISOGONAL AFFINE TRANSFORMATION EXAMPLE

In this linear form the solution is direct. Forming the B-matrix and f-matrix, each point contributes the rows

[ xi   yi   1  0 ]   with f-entries Xi
[ yi  −xi   0  1 ]   and Yi
NBTB71 The variancecovariance matrix is QXX N 9 787E006 0E000 22 02E009 122 332E009 0E000 9 787E006 122 332E009 r22 02E009 Qai 7 22 02E009 122 332E009 250E003 0E000 122 332E009 r22 02E009 0E000 25013003 t BT f 102157371 t 1161 611 The Center 7 70018 lcr Fhmogrammetric Training 70001 ISOGONAL AFFINE TRANSFORMATION EXAMPLE iiifioi Coordinate Transformations 17 SURE 440 Advanced Photogrammetry ISOGONAL AFFINE TRANSFORMATION EXAMPLE The reference variance forthe adjustment is 4 cs cs 00003 The Transformed coordinates become X Ayx Afy M Xa74913 YaAzXaAlyaA4 Ya11361 XbA1XbAzybA3 Xb766502 YbAzXbAlybA4 Yb54195 ISOGONAL AFFINE TRANSFORMATION EXAMPLE Another approach where the model is not linear X xiiy ltanoHAy39tanociAx39 8C 8a 8AX39 8Ay39 1 C0050lt sinoctanoc y lix ltanoHAx39tanociAy39 8C 8 1 aAX39 8A yl Ccosoc sinoctanoc B ax z f X Ziy tanoHAy39tanociAx39 i i X aC 80 8AX 8A 2 Ccosoc sinoctanoc g 8y 8y 8y 8C 8a an39 aAyV yzt T y 7X tanoc Ax39tanociAy39 Ccosoc sinoctanoc Coordinate Transformations SURE 440 Advanced Photogrammetry Forming the Bmatrix and f matrix xl cos on yl sinoL 7Cx1 SiJ1DL C yl COSDL ixlsinon y1cos0L 7Cx1coson7Cy1sinon xz cos 0L yz sinoL 7C Q SiJ1DL C yz cos 0L ixzsinon y2coson 7CxZcoson7Cy2sin B x3cos ixgsinon y3cos0L 7Cgcoson7Cy3 s1n 0L 4COSDL y4sinon 7Cx4sinonCy4cos 0L on y3 sin0L 7C g sin0L C y3 cos 7x4 sinoL y4 COSDL 7C 4 COSDL 7 C y4 sinoL ISOGONAL AFFINE TRANSFORMATION EXAMPLE on on 1 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 Computing intermediate values aAytancz7Ax bAxtancz7Ay x 7 Y Lana a T d T Y x mum b quotf X 7 Y2 Lana a f Y2 X2 mnab f X3 7 Y3 Lana a f Y3 X3 mnab d X47 Y4 Lana a d Y4X4 mnab er f ISOGONAL AFFINE TRANSFORMATION EXAMPLE d C cosu sincz tancz 1 2730 71 2960 71 2670 13040 71 2920 71 3050 1 2950 1 2480 Coordinate Transformations 19 SURE 440 Advanced Photogrammetry 71 N ET B t BT f 23781 71161611 0009 70049 The solution vector is A Nt Updating the current estimates ofthe 0 0002 parameters 70 0114 A 00021 CFCaAl C09998 700122 1 LrAZ 7001137 Ax 
AxrAK Ax700021 Ay AyrAA Ay00122 ISOGONAL AFFINE TRANSFORMATION EXAMPLE Second iteration abridged Converged to solution 71 N ET B The variancecovariance matrix is QXX N 9787E006 0E000 23409E009 122074E009 0E000 9791E006 122102E009 723414E009 23409E009 122102E009 250E003 0E000 122074E009 723414E009 0E000 250E003 QXX Coordinate Transformations SURE 440 Advanced Photogrammetry ISOGONAL AFFINE TRANSFORMATION EXAMPLE The solution vector is A N 1 Updating the current estimates ofthe 70 0001 parameters 0 0000 A00000 ccrAl C09998 00000 110iAZ 7001137 Ax AxrAK Ax700021 Ay Ay 7 A4 Ay 0 0122 The Center lcr Fhmogrammetric Training ISOGONAL AFFINE TRANSFORMATION EXAMPLE The resisuals are 0003 70013 VB39A f 70004 V 70019 70003 0019 0004 L 0013 The reference variance for the adjustment is 4 The Center 6 00003 Fhologrammetric Training Coordinate Transformations SURE 440 Advanced Photogrammetry ISOGONAL AFFINE The Transformed coordinates become XaCXa cosa C ya sinx AX YEA CXa 51110 C ya cosa Ay XbCXbcosx C ybsinx AX Yb7Cq sinx Cybcosa Ay The Center lcr Fhmogrammetric Training TRANSFORMATION EXAMPLE X3 74913 Ya 11361 Xb766502 Yb 54195 ANOTHER EXAMPLE 0 Measured gtltUL 70057 mm ISOGONAL AFFINE TRANSFORMATION yUL 40014 mm XLR 80067 mm yLR 50026 mm XPT 760985 mm ypT 419810 mm True X39UL 70107 mm Reseau is 19 Grid Paint 07 20 y UL 39843 mm X39LR 80133 mm y LR 49820 mm ThrnsCentsr Phdhonrammetic Training Coordinate Transformations 22 SURE 440 Advanced Photogrammetry ISOGONAL AFFINE TRANSFORMATION Differentiating transformation formulas with respect to the parameters 36 826 aX UL 1 ax UL 0 80 6d The Center lcr Fhmogrammetric Training ISOGONAL AFFINE TRANSFORMATION Design matrix 6X51 6abcd ayvm xUl yUl 1 0 70057 7400141 0 B 6abcd ya ixm 0 1 7 740014 770057 0 1 7 Wm xLR yLR 1 0 7 80067 750026 1 0 aked yLR 7ka 0 1 750026 780067 0 1 ay39LR 6abcd The Center Fhologrammetric Training Coordinate Transformations SURE 440 Advanced Photogrammetry ISOGONAL AFFINE TRANSFORMATION Discrepancy vector and 
vector containing the parameters are x UL 70107 a f y UL 39843 A b x LR 80133 0 y LR 49820 d ThaCantsr galogrammetric Training ISOGONAL AFFINE TRANSFORMATION Normal equations N BTB 15422429 0 150124 90040 N 15422429 90040 150124 2 0 2 TheCsntsr Fhologrammetric Training Coordinate Transformations 24 SURE 440 Advanced Photogrammetry ISOGONAL AFFINE TRANSFORMATION Constant vector and solution vector are computed as 1514068 0999051 21 7 T 7 733776 7 1 7 70002547 7 b 113 f AiN t 150240 0014579 0 7 89663 7 0045424 d The Center lcr Fhmogrammetric Training ISOGONAL AFFINE TRANSFORMATION Transformed coordinates become X39PT axPT byPT c 0999051760985 7 00025477 419810 0014579 76148 mm ibeT ayPT d 77 0002547760985 09990517 419810 7 0045424 741793 y39PT The Center Fhologrammetric Training Coordinate Transformations SURE 440 Advanced Photogrammetry RIGID BODY TRANSFORMATION Condition onthogonality and no scale CX Cy 1 X XcosotysinotAX y XsinotycosotAy 3 parameters oc Ax Ay EaCamsr Fhmogrammetric Training RIGID BODY TRANSFORMATION EXAMPLE 3Parameter Coordinate Transformation Program Solution Forming the Bmatrix and f matrix 7x1 5111DL yl cos on 1 0 XICOSDLy1 in on 01 ixz 5111DL yz cos on 1 0 1 s ixz cos on 7 yz s39 s n on 0 1 B 7x3 5111DL y3 cos on 1 0 ixgcosotiy3 in on 0 1 The Comer 7x4 5111DL y4 cos on 1 0 nmogrammetric Training 7x4 cos on 7 y sin on 0 1 Coordinate Transformations SURE 440 Advanced Photogrammetry RIGID BODY TRANSFORMATION EXAM PLE N 7 13THl Computing intermediate values for the computation ofthe f matrix 21 Ay tano 7 Ax c 7 Ax tano Ay d 7 cosa sina tano x 7 Y mna a X Y x Lana 7 c Y1 X27Y2 mnaa XI Yz X2 WU C 71 296 yr f 7 d 71267 X3 7 Y3 mna a f 1304 I I 71292 13 X3 Lana 7 c 71305 yr 1295 X47 Y4 mna a 1248 x4 d Y4X4 Lana7c If RIGID BO DY TRAN SFORMATIO N EXAM PLE 3225 L 70049 J AJL L LZTI 21112 235 L7001222J AyAy7A3 Ay00122 Coordinate Transformations SURE 440 Advanced Photogrammetry RIGID BODY TRANSFORMATION EXAMPLE 2nd Iteration abbreviated N ET B71 The 
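The least-squares pattern this example repeats (form B and f, build the normal equations N = BᵀB and t = Bᵀf, then solve for the parameters) can be sketched end-to-end in Python. This is only an illustrative sketch, not the course software: it assumes the standard four-parameter conformal form x' = a·x − b·y + c, y' = b·x + a·y + d, and the check data are synthetic rather than the reseau measurements above.

```python
import math

def fit_conformal(src, dst):
    """Least-squares fit of x' = a*x - b*y + c, y' = b*x + a*y + d.

    src, dst: lists of (x, y) pairs.  The model is linear in (a, b, c, d),
    so the normal equations N t = B^T f are solved directly, no iteration."""
    B, f = [], []
    for (x, y), (X, Y) in zip(src, dst):
        B.append([x, -y, 1.0, 0.0]); f.append(X)
        B.append([y,  x, 0.0, 1.0]); f.append(Y)
    n = 4
    # Normal equations: N = B^T B, t = B^T f
    N = [[sum(B[k][i] * B[k][j] for k in range(len(B))) for j in range(n)]
         for i in range(n)]
    t = [sum(B[k][i] * f[k] for k in range(len(B))) for i in range(n)]
    # Gaussian elimination with partial pivoting, then back substitution
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(N[r][col]))
        N[col], N[piv] = N[piv], N[col]
        t[col], t[piv] = t[piv], t[col]
        for r in range(col + 1, n):
            m = N[r][col] / N[col][col]
            for j in range(col, n):
                N[r][j] -= m * N[col][j]
            t[r] -= m * t[col]
    sol = [0.0] * n
    for r in range(n - 1, -1, -1):
        sol[r] = (t[r] - sum(N[r][j] * sol[j] for j in range(r + 1, n))) / N[r][r]
    return sol  # a, b, c, d

# Synthetic check data: scale 0.9995, rotation 0.5 deg, small shifts
s, ang = 0.9995, math.radians(0.5)
a0, b0 = s * math.cos(ang), s * math.sin(ang)
c0, d0 = -0.02, 0.01
src = [(-110.0, -110.0), (110.0, -110.0), (0.0, 110.0)]
dst = [(a0 * x - b0 * y + c0, b0 * x + a0 * y + d0) for x, y in src]
a, b, c, d = fit_conformal(src, dst)
```

Because this model is linear in its parameters, one solve recovers them exactly; the rotation-parameterized examples in these notes need the iterated linearization shown above instead.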
variancecovariance matrix is QXX N 9787E006 122074E009 723409E009 QXX 122074E009 250E003 72919943012 723409E009 72919943012 250E003 The Center lcr Fhmogrammetric Training RIGID BODY TRANSFORMATION EXAMPLE The solution vector is A N 1 4100000 DLDL7A1 11001137 A 000000 AXAX7AZ AX 700021 000000 Ay Ay 7 A3 Ay 00122 The resisuals are VB A 7 f The Center Fhologrammetric Training A 0 032 J Coordinate Transformations SURE 440 Advanced Photogrammetry RIGID BO DY TRAN SFORMATIO N EXAM PLE The reference variance for the adjustment is 0 o 0001 The Transformed coordinates become Xa xa 0050 ya sinx Ax Xa 74926 Ya ix sinx ya 0050 Ay Ya 11363 Xbxb 005m yb sina AX Xb66513 Yb 7x sinx yb 0051 Ay Yb 54204 EaCantsr Fhmogrammetric Training POLYNOMIAL TRANSFORMATION General form X39a0 a1xa2ya3x2 a4y2 a5xy y39b0 b1Xbb2yb3X2 b4y2 bsxy Alternatively x A0 A1xAzyA3xZ 7yZA42xy iB 7A A A Z Z A 2 mecsmsr y 0 ZX 1yJr 4X y T 3 XYT Fhologrammetric Training Coordinate Transformations SURE 440 Advanced Photogrammet I39Y Bilinear Polynomial Coordinate Transformation Program 1X1Y1X139 000 0 1X2 Y2 X239 1X3Y3 X339 y100 y200 1X2 y300 13 Forming the Bmatrix and f matrix 0 0 Y1 X139Y1 0 0 Y2 X239Y2 Bilinear Polynomial Coordinate Transformation Program N ET B71 700000000 00000000 QXX 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 700000000 00000000 00000000 00000000 00000000 The variancecovariance matrix is 00000000 00000000 00000000 00000000 00000000 QXX 1 N 02500000 00000000 00000002 700000000 00000000 00000000 00000196 700000000 00000000 00000000 00000002 700000000 00000196 700000000 00000000 00000000 02500000 00000000 00000002 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 I 00000000 00000002 700000000 00000196 700000000 00000000 700000000 00000196 700000000 700000000 00000000 700000000 00000000 Coordinate Transformations 30 SURE 440 Advanced Photogrammetry Bilinear Polynomial Coordinate Transformation Program 1 BTf 
70018 51079018 58352 455444 1 70001 7578092 51078353 340545 The solution vector is A N 1 700021 09998 W 00113 A 700000 00122 700114 09998 700000 Bilinear Polynomial Coordinate Transformation Program 70000 0000 0000 0000 70000 0000 0000 0000 The resisuals are V B A e f The reference variance for the adjustment is 4 cs cs 00000 Coordinate Transformations SURE 440 Advanced Photogrammetry Bilinear Polynomial Coordinate Transformation Program The Transformed coordinates become XaA1A2anrA3yaJrA4xaya Xa74913 YaA5A5XaA7YaAs39Xa39Ya YEN358 XbA1A2A3ybA439Xb39Yb Xb766503 YbA5A6A7ybAg3939yb Yb54201 PROJ ECT IVE TRANSFORMATION Frequently used in photogrammetry General form alx 32y a3 dlx dzy 1 blxb2yb3 d1xd2y1 The Center Fhologrammetric Training Coordinate Transformations SURE 440 Advanced Photogrammetry PROJECTIVE TRANSFORMATION 8FX 1 8FX 1 8FX 1 631 63 adz lt 1 lt gt lt gt I F XX 6F y 3F yl 5F yl B gal aaz adz f g 2 g 2n1 F Xn myquot anyquot myquot 3 2n Fyn L 631 832 adz PROJECTIVE TRANSFORMATION EXAM PLE Initial estimates of he parameters are given as follows The vector of observations 11 1 0 X1 1 0 0 2 Y1 a 39 0 0 3 b1 0 0 X2 Y b2 1 0 L 2 7 X b3 0 0 3 d1 0 0 Y3 d2 0 0 X4 The denominator ofthe functional model is dlx1 dzy11w dlxz dzyz 1 dlX3 dzy3 1 dlx4 dzy4 1 den Coordinate Transformations 33 SURE 440 Advanced Photogrammetry The demgn matnx B 15 formed as foHoWs 7 X1 Y1 1 a1K132Y1a3 a1K132Y1a3 7 o o o 7 x1 7 y1 den den den deny den2 x1 Y1 1 b1K1 rsz1 rb3 b1K1 rsz1 rb3 o o o 7 x1 7 y1 den den den 1181 dem2 x2 Y2 1 a1K232Y2a3 a1K232Y2a3 0 0 7 x2 7 yZ den den2 dmz den dew X2 Y2 1 b1xzbzyzb3 b1xzbzyzb3 o o o 7 x2 7 yZ am am den2 day dew B x3 Y3 1 a1K332Y3a3 a1K332Y3a3 o o o 7 x3 7 y3 den den den3 deny dang2 x3 Y3 1 b1x3sz3b3 b1x3sz3b3 o o o 7 x3 7 y3 den den den 11mg 11mg2 X4 Y4 a1 4quotquot 2Y1quotquot 3 a1K432Y433 o o o 7 x4 7 y4 denA denA dam den dew2 x4 Y4 1 b1x4bzy4b3 b1x4bzy4b3 o o 7 X4 7 Y4 d8 14 W 01814 dew PROJECTIVE TRANSFORMATION EXAMPLE alx1 a2y1 as T blx1 b2y1 b3 T 
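Before working through the numbers, the projective model itself, x' = (a1·x + a2·y + a3)/(d1·x + d2·y + 1) and likewise for y', is easy to state in code. A minimal sketch with made-up coefficients (not the example's values):

```python
def projective(x, y, a1, a2, a3, b1, b2, b3, d1, d2):
    """Eight-parameter projective transformation:
       x' = (a1*x + a2*y + a3) / (d1*x + d2*y + 1)
       y' = (b1*x + b2*y + b3) / (d1*x + d2*y + 1)"""
    den = d1 * x + d2 * y + 1.0
    return (a1 * x + a2 * y + a3) / den, (b1 * x + b2 * y + b3) / den

# With d1 = d2 = 0 the denominator is 1 and the model reduces to the
# six-parameter affine case:
xp, yp = projective(2.0, 3.0, 1.0, 0.0, 5.0, 0.0, 1.0, -2.0, 0.0, 0.0)
# -> (7.0, 1.0)
```

The shared denominator is what makes the model nonlinear in its parameters, which is why the example linearizes with partial derivatives and iterates.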
alX2 a2y2 as T2 blxz b2y2 b3 T alx3 a2y3 as T f L 7 FX 71 304 blx3b2y3b3 1292 den 1305 a1X4 a239Y4 a3 71 295 T 71 248 blx4 b2y4 b3 den4 Coordinate Transformations SURE 440 Advanced Photogrammetry PROJECT IVE TRANSFORMATION EXAMPLE NBTB tB f AN t 7000023 001134 001409 700114 7000023 001348 0 Lo PROJECTIVE TRANSFORMATION EXAM PLE Update estimates of parameters and iterate towards a solution The transform ed coordinates are allxa 212ya a3 a xa 74 92187 dlxa d2ya1 b b y b aM Ya1135877 dlxa d2ya1 a a y 21 Kb 1Xb 2 b 3 Kb 75549273 dlxb d2yb 1 b b y b Yb aw Yb 54 20205 dlxb d2yb 1 Coordinate Transformations SURE 440 Advanced Photogrammetry TRANSFORMATIONS IN TH REE DIMENSIONS General polynomial approach transformation is not conformal X39 a0 alx a2y a32 a4X2 asy2 a622 a7xy agyz 2 2 2 agzxawxy anx ya12xz y39 b0 bx b2y b32 b4x2 by2 b z2 b7xy bgyz 2 2 2 bgzxway bnx yb12xz 239 c0 c1x 02y 032 04x2 05y2 c z2 c7xy cgyz cgzx cwxy2 c11x2y cnxz2 The Center for Fhmogrammetric Training TRANSFORMATIONS IN TH REE DIMENSIONS Alternative that is conformal in the three planes X39A0 A1XA2yA3ZA5X2 7y2 7220aA7zx2A6xy y39B0 7A2xA1yA42A67x2 y2 7222A7yz02A5xy z39C0 A3X7A4yAlzA7ix2 7y2 222A6yz2A52X0 The Center Fhologrammetric Training Coordinate Transformations SURE 440 Advanced Photogrammetiy PRINCIPLES OF DIGITAL IMAGE PROCESSING Surveying Engineering Department Ferris State University INTRODUCTION El Types of digital image processes Ei Preprocessing operations correct for distortions in image acquisition process Ei Image enhancement improves the visual qualitI of images El Image classification tries to replace manual human visual analysis with automated procedures for recognizing and identifying objects El Data merging combines image data for certain geographic areas with other geographically referenced information of some area Principles of Digital Image Processing SURE 440 Advanced Photogrammetry DIGITAL IMAGE MODEL l El Image mathematical expression yielding density as function of row 
and column, d = f(x, y), much as a DEM gives elevation as a function of (X, Y).
- A raw image is an imperfect rendition of the object: it is affected by the imaging system, signal noise, atmospheric scatter, and shadows. [Figure: a raw image and stretched, sharpened, and smoothed versions.]

DIGITAL IMAGE MODEL
- Primary image degradation is the combined systematic blurring from lens aberration, the resolution of the recording medium, and atmospheric scatter.
- The combined effect is called the point-spread function: the blurred image produced by a perfect point light source.
- It can be represented mathematically and applied through a process called convolution. The cumulative effect of systematic degradation is i(x, y) = o(x, y) * p(x, y), the object function convolved with the point-spread function.

SPATIAL FREQUENCY
- The number of cycles of a sinusoidal wave per unit of distance. Objects have a virtually unlimited range of spatial frequencies, but most high-frequency content is lost in image capture: brightness variations are averaged together, which attenuates the high frequencies.
- Nyquist frequency: the highest frequency that can be represented in a digital image; equal to 1/2 the sampling frequency.
- For an image, spatial frequency is the number of changes in brightness value per unit distance: few changes mark a low-frequency area, many changes a high-frequency area.
- It describes brightness values over a spatial region. The spatial approach to extracting quantitative information looks at local/neighboring pixel brightness values rather than each pixel value independently.
- Spatial frequencies are enhanced or subdued using two different approaches: spatial convolution filtering (based on convolution masks) and Fourier analysis (which mathematically separates the image into its spatial frequency components, yielding the Fourier transform of the image).

CONTRAST ENHANCEMENT
- If overall scene brightness is low, the majority of pixels will have low values and contrast over the features is diminished.
- A histogram of the area plots digital numbers on the abscissa and number of occurrences on the ordinate.
- Linear stretch: DNs linearly
expanded to full available range of values eg 0255 Histogram equalization expand DNs in nonlinear fashion across available range so values are more evenly distributed Principles of Digital Image Processing SURE 440 Advanced Photogrammetry Image in Shadow and Corresponding Histogram mum 41 minimum cl u in mm is Air 250 Linear Stretch Contrast Enhancement and Associated Histogram mu Iii 2m SU mi m Principles of Digital Image Processing SURE 440 Advanced Photogrammetry Histogram Equalization Contrast Enhancement and Histogram Number of Occurrences min n w mo ISU Jun 250 1m 1 Contrast Enhancement by Histogram 2 a 5 n E a n E s 2 St r etc h in 9 Law I CONVISI D Plxcl B lhlnua 255 W B Plxul ErlngIsa 256 0 F gure 5 Principles of Digital Image Processing SURE 440 Advanced Photogrammetry Contrast Enhancement IllllxlIlLlllllllllllllllllllllillt1l v t twmmrmmnmmmwu m n WWW m mmquot mm 1 Mum mi runmum m mm mmMmmlamluwm mmmmm mm mm w 5 Cum m m humr1 cum Mm mmwm SPECTRAL TRANSFORMATION El Spatial domain natural form of image 13 Frequency domain abstraction of original image Pixels whose positions rows and columns relate to frequency instead of spatial location Principles of Digital Image Processing SURE 440 Advanced Photogrammetry DISCRETE FOURIER TRANSFORM DFT El Converts brightness values from digital image to set of coefficients for sine and cosine functions at frequencies ranging from O to Nyquist frequency El With additional step amplitude and phase information can be obtained from the coefficients El Fast Fourier Transform efficient implementation of discrete Fourier Transform FOURIER TRANSFORM El Greater amplitudes correspond to brighter tones El Lowest spatial frequencies represented at center of star El Discernable spikes radiating from center Principles of Digital Image Processing SURE 440 Advanced Photogrammetry FOURIER TRANSFORM El Certain operations easier in frequency domain Example High Pass Filter Principles ofDigital Image Processing SURE 440 Advanced 
Photogrammetry

[Figure: example of a low-pass filter.]

CONVOLUTION
- Two functions: a signal and a response function. When the two are convolved, the smearing effect of the response function is applied to the signal, which has the effect of filtering out certain high frequencies.
- Deconvolution is the inverse operation.

CORRELATION
- Two functions, both of which can be assumed to be signals. A measure of the similarity of two images, obtained by comparing them while superimposed.
- Useful in matching patterns. Drawback: sensitive to noise in the image.

MOVING-WINDOW OPERATIONS
- Also used for convolution; useful when the response function is highly localized. Convolving via the Fourier transform requires the image and the response function to have the same dimensions, which is inefficient when the vast majority of the response function is zero; a moving window is more efficient.
- Two primary inputs: the original digital image and a localized response function called the kernel.

KERNEL
- A small array, usually with an odd number of rows and columns. It is overlaid on a same-size area of the input image, and the resulting value is placed at the center of the corresponding location in the output image.
- At the edges the kernel extends beyond the image. Assuming the image is a periodic function, kernel values extending beyond the image "wrap around" so as to overlay image pixels on the opposite side.
- Mathematical operation, e.g. for the output pixel at row 3, column 4:

    o(3,4) = k1*x(2,3) + k2*x(2,4) + k3*x(2,5) + k4*x(3,3) + k5*x(3,4)
           + k6*x(3,5) + k7*x(4,3) + k8*x(4,4) + k9*x(4,5)

LOW-PASS FILTER
- The simplest computes the average of the nearby pixels; this is equivalent to a weighted average with all nine 3x3 kernel values equal (1/9).
- Effect: (a) original image, (b) filtered image.

WHY CREATE A BLURRED IMAGE?
- It could be used as a simple high-frequency noise filter, though better methods exist for that.
- Use it as a precursor to a high-pass filter: pixel-by-pixel subtraction of the low-pass
filtered image from the original image gives the high-frequency detail that was filtered out. Take the absolute value after subtraction to remove negatives.

SIMPLE HIGH-PASS FILTER
- The method is good for edge detection.

MEDIAN FILTER
- Useful for noise filtering. A 3x3 moving window is passed through the input, and the 9 pixels in the immediate neighborhood are extracted at each step; the 9 digital numbers are sorted and the median is placed in the corresponding location of the output image.
- Not sensitive to the extremely high or low values that result from image noise: effective for most random noise, but not as effective at removing systematic noise.

LOW FREQUENCY FILTER
- A standard low-frequency filter de-emphasizes or blocks high spatial-frequency detail. The simplest uses the mean of the surrounding pixels via a convolution mask, or kernel, where n is the size of the neighborhood.
- Mask template and an equal-weight example:

    c1 c2 c3        1 1 1
    c4 c5 c6        1 1 1
    c7 c8 c9        1 1 1

- The coefficients ci are multiplied by the brightness values in the input image:

    c1*BV1 + c2*BV2 + c3*BV3 + c4*BV4 + c5*BV5 + c6*BV6 + c7*BV7 + c8*BV8 + c9*BV9

  where BV1 = BV(i-1,j-1), BV2 = BV(i-1,j), BV3 = BV(i-1,j+1), BV4 = BV(i,j-1), BV5 = BV(i,j), BV6 = BV(i,j+1), BV7 = BV(i+1,j-1), BV8 = BV(i+1,j), BV9 = BV(i+1,j+1).
- Applying the mask to the original data yields the low-frequency-filtered (LFF) image:

    LFF(5,out) = INT[ (sum of ci*BVi) / n ] = INT[ (BV1 + BV2 + ... + BV9) / 9 ]

  [Figure: result of applying the low-frequency convolution mask to hypothetical data; filtered image.]
- The spatial moving average then shifts to the next pixel, where the average of all 9 brightness values is again computed.
- Image smoothing is useful for removing periodic "salt and pepper" noise from electronic systems, but it blurs the image, especially at edges; blurring becomes more severe as the kernel size is increased.
- To reduce blurring, unequal-weighted
smoothing masks developed Examples 025 050 025 mask 2 050 100 050 025 050 025 100 100 100 mask100 200 100 100 100 100 Principles of Digital Image Processing SURE 440 Advanced Photogrammetiy LOW FREQUENCY FILTER El 3 X 3 kernel may result in image being 2 lines and 2 columns smaller El To solve problem DArtificially extend original image beyond borders by repeating original border pier brightness values DReplicating averaged brightness values near borders base on image behavior within a few pixels of border D While preserving original dimensions introduces some spurious information that should not be interpreted LOW FREQUENCY FILTER I Median filter Neighborhood ranking Useful for removing noise especially shot noise I Pixels corrupted or missing Ranks pixels in neighborhood from lowest to highest I Central value of mask replaced by median value Only original pixel values used in creation of median filter Principles of Digital Image Processing SURE 440 Advanced Photogrammetry ll Median filter DAdvantages wrt weighted convolution filter IDoes not shift boundaries IMinimal degradation to edges allows filters to be applied repeatedly I Allows line delail lo be erased lAllows posterization where large regions lake on same brightness values DStandard median filter erase some lines in image narrower than halfwidth of neighborhood and round or clips corners ll Edgepreserving median filter Preserves edges and corners Procedure ll Compute median value of black pixels 2 Compute median value of gray pixels 3 The center on minal brightness value ranked with the 2 other median values and placed in ascending order 4 Final median selected to replace central pixel Principles of Digital Image Processing 20 SURE 440 Advanced Photogrammetiy LOW FREQUENCY FILTER El Minimum or Maximum filter Replace brightness value of current pixel with minimum or maximum bright value in kernel These filters used for visual analysis only should not be applied prior to quantitative data analysis El 
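The moving-window filters described in this section (the equal-weight low-frequency mean mask, the median ranking filter, and the 2*BV5 minus LFF high-pass complement) can be sketched in a few lines of Python. The 4x4 test image is hypothetical, and for simplicity the border pixels are simply copied unchanged, which is one of the border strategies mentioned above:

```python
def window9(img, i, j):
    """The nine brightness values BV1..BV9 in the 3x3 neighborhood of (i, j)."""
    return [img[i + di][j + dj] for di in (-1, 0, 1) for dj in (-1, 0, 1)]

def filter3x3(img, op):
    """Apply op to every interior 3x3 neighborhood; copy border pixels unchanged."""
    out = [row[:] for row in img]
    for i in range(1, len(img) - 1):
        for j in range(1, len(img[0]) - 1):
            out[i][j] = op(window9(img, i, j))
    return out

mean9   = lambda w: int(sum(w) / 9)             # equal-weight low-frequency mask
median9 = lambda w: sorted(w)[4]                # rank 9 values, take the median
hff     = lambda w: 2 * w[4] - int(sum(w) / 9)  # HFF(out) = 2*BV5 - LFF(out)

img = [[40,  40, 40, 40],
       [40, 200, 40, 40],   # 200 simulates one-pixel "shot" noise
       [40,  40, 40, 40],
       [40,  40, 40, 40]]
smoothed = filter3x3(img, mean9)    # noise spread into the mean: 57 at (1, 1)
cleaned  = filter3x3(img, median9)  # noise removed entirely: 40 at (1, 1)
edges    = filter3x3(img, hff)
```

Note how the median leaves the flat background untouched while deleting the noise spike, the behavior the notes describe, whereas the mean smears the spike into its neighbors.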
Olympic filter Mimics scoring of Olympic events Instead of using full 3 x 3 matrix highest and lowest valued dropped and result averaged Useful for removing most shot noise LOW FREQUENCY FILTER El Adaptive box filters Significant value for removing noise 2 types I Remove random bit error shot error I Smooth noisy data pixels related to image scene but with an additive or multiplicative component of noise Procedures rely on computation of standard deviation 6 of only those pixels in box surrounding central pixel Principles of Digital Image Processing 21 SURE 440 Advanced Photogrammetiy LOW FREQUENCY FILTER El Adaptive box filters BV5 central pixel considered bit error if it deviated from box mean of 8 surrounding values by more than 10 to 206 I When true central value replaced by box mean Called adaptive because it is based on standard deviation for 3 x 3 window instead of standard deviation of whole scene Even minor bit error removed from lowvariance areas I but data along sharp edges and corners not replaced LOW FREQUENCY FILTER El Lee sigma filter DCompute standard deviation of entire scene DEach BV5 in 3 x 3 moving window replaced by average of only those neighboring pixels with intensity within fixed 6 range of central pier El Adaptive Lee sigma filter DUses oca adaptive 0 rather than fixed from entire scene I Filter averages only those pixels within the box with intensities within 120 of central pixel I Effective in reducing speckle in radar images without eliminating fine deta39ls El Both filters can be combined to eliminate random bit error and noisy data Principles of Digital Image Processing 22 SURE 440 Advanced Photogrammetiy HIGH FREQUENCY FILTER El Applied to image to 39 Remove slowly varying components 39 Enhance highfrequency local variations El One filter HFFslom subtracts output of low frequency filter LFFslom from 2times value of original central pixel HFF 2xBV5 LFF 50m 50ut HIGH FREQUENCY FILTER El Brightness values tend to be highly correlated in 
9 x 9 kernel Highfrequency filtered image relatively narrow intensity histogram Output from most HFF images must be contrast stretched before visual analysis Principles of Digital Image Processing 23 SURE 440 Advanced Photogrammetiy HIGH FREQUENCY FILTER El Highpass filters that accentuate or sharpen edges produced from convolution masks 1 1 1 maskz l 9 1 1 1 1 1 2 1 maskz Z 5 2 1 2 1 EDGE ENHANCEMENT El Delineates edges surrounding various obiects of interest D Makes shapes and details more conspicuous and easier to analyze El Edges enhanced using Linear edge technique Nonlinear edge technique Principles of Digital Image Processing 24 SURE 440 Advanced Photogrammetiy LINEAR EDGE ENHANCEMENT El Directional Firstdifference algorithm Approximates first derivative between 2 adiacent pixels Produces first difference in horizontal vertical and diagonal directions Algorithm Vertical BIIJBVlj BVIJ1K Horizontal BVU BVW BVH K NE Diagonal BVM BVW BVHLJ1K SEDiagonal BVUBViJ BVFUHK LINEAR ED 3 E ENHANCEMENT D Subtraction yields value Add constant K usually l27 to make all values positive and centered between 0 255 I Causes adiacent pixels with little change to be around i 27 and any dramatic change between adiacent pixels to migrate from 127 in either direction El Resultant image normally minmax contrast stretched enhances edges more Causes uniform areas to appear in shades of gray Important edges become black or white Principles of Digital Image Processing 25 SURE 440 Advanced Photogrammetiy 0 0 mask1 0 B 5 I W H O O 0 1 0 LINEAR EDGE ENHANCEMENT El Embossing causes edges to appear in plastic shadedrelief format El Embossed edges obtained from emboss East emboss NW El Embossing stretched LINEAR EDGE ENHANCEMENT Offset of i 27 often added to result and data contrast Direction of embossing controlled by changing location of coefficients around periphery of mask Principles of Digital Image Processing 26 SURE 440 Advanced Photogrammetry LINEAR EDGE ENHANCEMENT 1 1 1 mask 1 7 2 1 
North 1 1 1 Compass gradient masks 71 1 1 mask 71 72 1 East Used to perform 2 D 7 1 1 discrete differentiation directional edge enhancement 1 1 1 mask 7 1 7 2 1 NE 7 1 7 1 1 LINEAR EDGE ENHANCEMENT 71 71 1 71 71 1 mask 71 72 1 SE mask 1 7 2 1 South 1 1 1 1 1 1 mask 1 72 1 SW mask 1 72 1 West Principles of Digital Image Processing SURE 440 Advanced Photogrammetiy LINEAR EDGE ENHANCEMENT 1 1 1 mask 1 2 1 NW 1 1 1 Compass names suggest slope direction of max response U I East gradient mask produces max output for horizontal brightness values change from west to n Gradient masks have zero weighting sum of convolution masks are zero I No output response over regions with constant bright values no edges present LINEAR EDGE ENHANCEMENT mask 0 0 O horizonta1 1 1 1 1 1 1 mask 1 0 1 139 g 1 1 1 Principles of Digital Image Processing SURE 440 Advanced Photogrammetry n nmmwmw mmmam unansm r i c 0161mm MMNWEsYmnm LIMEMama E mammaoomnmm we mum mmwnWnumnrmn m mmeIwummm LAPLACIAN FILTER a Performs edge enhancement 1 Laplacian is second derivative as opposed to gradient first derivative Ulnvariant to rotation I Insensitive to direction in which discontinuities points line and edgesrun 0 1 0 1 1 1 mask 4 1 maskz l 8 1 0 10 121111 mask 2 4 2 1 2 1 Principles of Digital Image Processing SURE 440 Advanced Photogrammetry LAP LACIAN FILTER D Subtracting Laplacian edge from original restores overall grayscale variation 1 1 H f t bl I umun can CDI I I DI a Y mask 1 77 Interpret El Sharpens image by locally 1 1 increasing contrast at discontinuities subwct operatorfrom original image LAPLACIAN FILTER 1 Generally highlights point lines and edges and suppresses uniform and smoothly varying regions Human vision behaves same way D Has more natural look than many other edge enhanced images Principles of Digital Image Processing SURE 440 Advanced Photogrammetiy NONLINEAR EDGE ENHANCEMENT El Many algorithms use either 2 X 2 or 3 X 3 kernels El Sobel edge detector Based on 3 x 3 window and computed 
as

    Sobel(out) = [X^2 + Y^2]^(1/2)

where

    X = (BV3 + 2*BV6 + BV9) - (BV1 + 2*BV4 + BV7)
    Y = (BV1 + 2*BV2 + BV3) - (BV7 + 2*BV8 + BV9)

SOBEL OPERATOR
- Also computed by simultaneously applying these two templates over the image:

    X:  -1  0  1        Y:   1   2   1
        -2  0  2             0   0   0
        -1  0  1            -1  -2  -1

- Detects horizontal, vertical, and diagonal edges.
- A pixel is declared an edge if its Sobel value exceeds some user-specified threshold. May be used to create edge maps, which appear as white lines on a black background, or vice versa.

PREDAWN THERMAL-INFRARED DATA
[Figure panels: (a) original, contrast stretched; (b) low-frequency; (c) minimum; (d) maximum; (e) high-frequency; Laplacian edge enhancements; edge maps from the Sobel and Roberts operators. Companion image: Charleston, SC, TM band 4.]

PRINCIPLES OF SOFTCOPY PHOTOGRAMMETRY
Surveying Engineering Department, Ferris State University (12/2/2008)

SYSTEM HARDWARE
- High-powered computer: high processing speed, large random-access memory, and mass storage (digital files can be larger than several hundred megabytes).
- Controls for x, y, z motions within the model: a mouse with thumbwheel, or handwheels and a footwheel.

STEREO VIEWING CAPABILITIES
- Anaglyphic filters: colored filters; disadvantage: can't use color imagery; multiple viewers are possible.
- Polarizing filters: the screen can alternate between horizontal and vertical polarity at up to 120 Hz; the viewer wears polarized filters.
- Alternating shutters: the viewing glasses have liquid-crystal display (LCD) masks; the monitor shows the left and right images at 120 Hz, and the LCDs receive signals from an infrared device mounted on top of the monitor to alternately control left/right masking.
- Split screen with stereoscope: a mirror stereoscope mounted in front of the monitor.

Principles of Softcopy Photogrammetry, SURE 440 Advanced
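Returning briefly to the edge-detection material above, the Sobel detector is straightforward to sketch in Python. The test image is hypothetical (a hard vertical edge), the BV1..BV9 numbering follows the slides, and borders are ignored for brevity:

```python
def sobel(img, i, j):
    """Sobel edge magnitude at interior pixel (i, j), using the BV1..BV9
    numbering of the slides (BV1 = upper-left ... BV9 = lower-right):
      X = (BV3 + 2*BV6 + BV9) - (BV1 + 2*BV4 + BV7)
      Y = (BV1 + 2*BV2 + BV3) - (BV7 + 2*BV8 + BV9)"""
    b = [img[i + di][j + dj] for di in (-1, 0, 1) for dj in (-1, 0, 1)]
    X = (b[2] + 2 * b[5] + b[8]) - (b[0] + 2 * b[3] + b[6])
    Y = (b[0] + 2 * b[1] + b[2]) - (b[6] + 2 * b[7] + b[8])
    return (X * X + Y * Y) ** 0.5

# Vertical edge: dark left half, bright right half
img = [[10, 10, 10, 90, 90, 90] for _ in range(4)]
flat = sobel(img, 1, 1)   # inside the flat region: 0.0
edge = sobel(img, 1, 3)   # straddling the edge: 320.0
is_edge = edge > 100.0    # edge declared once a user-chosen threshold is exceeded
```

The zero response over the flat region illustrates the zero-sum property of gradient masks noted earlier: constant brightness produces no output.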
Photogrammetry

[Figure: alternating shutter, with left and right images displayed in turn and viewed through LCD glasses; split screen with stereoscope.]

STEREO VIEWING CAPABILITIES
- Advantages/disadvantages:
- Split screen: tends to be the least expensive option other than anaglyphic, and can work with a standard computer display at a 60 Hz refresh rate; but it restricts the viewer's head motion and precludes multiple simultaneous viewers.
- Anaglyphic, polarizing screen, and alternating shutter: all allow more than one person to view stereo, though LCD glasses tend to be more expensive; all allow the operator more freedom of movement.

DIGITAL PHOTOGRAMMETRIC SYSTEMS
- Examples: ImageStation 2002; SocetStation.

IMAGE MEASUREMENTS
- Manual measurements use a floating mark: two half marks, each a single pixel or a small pattern of pixels forming a shape. Two approaches for half-mark movement: fixed mark with moving image, or fixed image with moving mark.
- Automatic point measurements use some form of pattern matching: a small subarray is matched on both photos.

ORIENTATION PROCEDURES
- Softcopy systems allow for greater automation.
- Interior orientation: operator control or pattern matching; the positions of the fiducials are matched against a standard image of the fiducial called a template.
- Relative orientation: greatly assisted by automatic pattern matching; accuracy is improved by matching additional points, giving greater redundancy in the least-squares solution.
- Absolute orientation: mostly done manually, since control comes in varying shapes and can occur anywhere within the model; it can be automated when block aerotriangulation has been done and the exterior orientation parameters are known.

EPIPOLAR GEOMETRY
- Pattern matching is computationally intensive; the burden is reduced by constraining the search area.
- Coplanarity concept: the condition in which the left and right exposure stations, the object point, and the left and right photo images all lie in a common plane.
- If relative orientation is known for a stereopair, coplanarity can be used to define
epipolar lines Principles of Softcopy Photogrammetry 1222008 SURE 440 Advanced Photogrammetry 1222008 EPIPOLAR GEOMETRY Left epipolar line EPIPOLAR RESAMPLING o Also called image normalization or pairwise recti cation Done after ro Resample image so rows of pixels in both images lie along epipolar lines 39 Increases efficiency of pattern matching Guarantees that conjugate points have zero yparallax Principles ofSoftcopyPhotogrammetry 6 SURE 440 Advanced Photogrammetry 1222008 EPIPOLAR RESAMPLING EPIPOLAR RESAMPLING Rotation matrices relatin object space Xv Yv Zv and two oriinal photoraphs are M1 and M2 Rotate XV to lie in vertical plane throuh base B tan 391 Y BX Make once rotated X v parallel to base B eY tan391 z Zz IBX BY Principles of Softcopy Photogrammetry 7 SURE 440 Advanced Photogrammetry 1222008 EPIPOLAR RESAMPLING Make twice rotated Z V close to oriinal direction of view Not unique step Rotation matrices MX MY MZ correspond to 6X Y 02 Product is MB MX MY MZ EPIPOLAR RESAMPLING Composite rotation matrices MNI MB M1T MNZ ME ME Coordinates of oriinal photos transformed to normalized counterparts by mN xpmN ypmN f KN f 2 n mNn x1 mNnyp mNn f mNn xv mNnyp mNn f yN Principles of Softcopy Photogrammetry 8 SURE 440 Advanced Photogrammetry 1222008 EX II IE EPPO IAR RESAM LN 6 Input Values XLI 5000 11L1 5000 2L1 610 omegal 14145001 pml 1414070 kappal 44 982543 XL2 5260 yL2 5260 0 2L2 630 omega2 0 707143 ph12 0 707089 kappaz 44 995636 The vames x and y Wm subscmpts 1 and 21nd1cateme same pomt mo 1 1nd1cat1ng me pom 15 on me on photo and 2 bemg onmer1gmphoto f 1524 x149843 yl 13 860 x2 48418 Y2 21285 KP 5080 Y 5180 Z 50 From Mikhail Bethel amp McGlone EXAMPLE EPIPOLAR RESAMPLING Solution mm i 180 K1 kappal mm K2 kappazlumd 61 me mm 61 phzlumd m1 umegzl ma m2 umegazlumd 5051 11 0 ous 2 smK2 0 M smxl cum 0 Mg smxz ous q 0 0 0 1 0 0 1 1205021 0 sn6l ousg n sm62 MM 0 1 W2 0 1 61 n 505M sm gt2 0 120507 1 0 0 1 0 MM 1 0 cusml smml MM 0 cusm2 12 0 smml cusml 0 sm 2 cusmz 
Principles of Softcopy Photogrammetry 9 SURE 440 Advanced Photogrammetly M1MK139M 139MOJI 0 707107 0 707107 BX XLz XL1 BY 1 YL2 YL1 32 1 ZL2 ZL1 1 iasinM1173 M1 70706676 0706676 0034899 0024678 70024678 0999391 EXAMPLE EPIPOLAR RESAMPLING M2 MKZM Dzsz 70 0707107 0707107 70 M2 70706999 0706999 70017452 70012341 0012341 0999848 BX 260000000 BY 260 000000 BZ 20000000 M w 39 atan2 239 3 ml 2 000000 1 I cos Kb 7 cos Kb torad I M M 23 3 22 3 0 2 02 atan2 71000000 cos Kb cos Kb torad e Z atanzBX BY 7 m1m2 EXAMPLE EPIPOLAR RESAMPLING e Z 45000000 torad 9 2 2 y 9 atan2 B B 7B 73113412 y X Y Z torad e x torad 0500000 Principles of Softcopy Photogrammetly 1222008 SURE 440 Advanced Photogrammetly 1222008 EXAMPLE EPIPOLAR RESAMPLING MB MX39MYMZ 0706063 0706063 0054313 MB 70707415 0706745 0008714 70032233 70044574 0998486 MN1 MBMlT 0998524 0001895 0054279 MNl 70000474 0999657 70026190 70054310 0026125 0998182 MN2 MBM2T 0998524 70000948 0054304 MNZ 70000474 0999658 0026164 70054310 70026151 0998182 EXAMPLE EPIPOLAR RESAMPLING M x M y M 7f N1171 1 N1172 1 N1173 xn 4 1 7 M X M y M f ml 740968 N1371 1 N1372 1 N1373 M x M y M 7f f levl 1 N1272 1 N1273 ynl 7 M X M y M f 91115 1W584 N1371 1 N1372 1 N1373 uh MN21 139th MN21 239Y2 MN21 54 KHZ f M M M m2757530 N2371X2 N2372Y2 N2373 f M M y M 7f f N2271X2 N212 2 N213 Y 2 39 PET MN23 th MN23 239Y2 MN23 31quot ET 3958 Note that now both ycoordinates highlighted for the same point are on the same line Principles of Softcopy Photogrammetly 1222008 SURE 440 Advanced Photogrammetry DIGITAL IMAGE MATCHING 3 eneral cateories of techniques Area based Use small numerical comparison of DNs in small subarrays Feature based Complex requires extraction of features that are com prised of edge I subsequent comparison based on feature characteristics like size and shape Hybrid methods Combination preprocessing images to highlight edges of features then after features located match by areabased methods NORMALIZED CROSS CORRELATION Statistical comparison of 
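The angle computations in this example can be sketched directly from the base components. This is a hedged illustration: the exact sign convention for the rotations varies between texts, and the one below is chosen to reproduce the example's magnitudes (theta_Z = 45 deg, theta_Y near -3.11 deg); theta_X is the "not unique" free choice, set here to 0.5 deg:

```python
import math

def rot_x(t):
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, s], [0, -s, c]]

def rot_y(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, -s], [0, 1, 0], [s, 0, c]]

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, s, 0], [-s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Base components from the worked example
BX, BY, BZ = 260.0, 260.0, 20.0
theta_z = math.atan2(BY, BX)                   # swing the base into the x-z plane
theta_y = math.atan2(-BZ, math.hypot(BX, BY))  # level the base
theta_x = math.radians(0.5)                    # free choice (step is not unique)
MB = matmul(rot_x(theta_x), matmul(rot_y(theta_y), rot_z(theta_z)))
```

Whatever the sign convention, MB must remain orthonormal, so MB times its transpose should return the identity; that is a useful sanity check on any hand-built rotation matrix in these examples.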
DNs from subarrays on left and riht imae Correlation coefficient iiKAu KXBu El Principles of Softcopy Photogrammetry SURE 440 Advanced Photogrammetry 1222008 CORRELATION COEFFICIENT Ranes from 1 to 1 1 indicates perfect correlation exact match 1 indicates negative correlation Would occur if identical images from positive and negative compared Because of image noise 1 very rare Generally use a threshold like 07 above which the subarrays assumed to match LINEAR REGRESSION Normalized crosscorrelation uses basically same operation as linear regression Set of ordered pairs statistically evaluated to see how they correspond to a straight line Find best fit line slope and intercept through the data Simple table can be used to assist in computations Principles of Softcopy Photogrammetry 13 SURE 440 Advanced Photogrammetry 33705 7 reression S ZXi 2X S 26 7 ZY 346197 SKY 21x 7y 71 many 2552 500511 9 Il Followin terms used to find M 332107 54322 53161 LINEAR REGRESSION aXi byi 32309 b2 byi a x b xiyi 25 33 625 1089 825 48 56 2304 3136 2688 89 81 7921 6561 7209 43 40 1849 1600 1720 94 98 8836 9604 9212 47 54 2209 2916 2538 76 84 5776 7056 6384 21 16 441 256 336 57 49 3249 2401 2793 2500 2511 233210 234619 233705 LINEAR REGRESSION 5112 56056 Principles of Softcopy Photogrammetry 1222008 SURE 440 Advanced Photogrammetry a 1 1 Continued S 53161 8 54322 33705 21034619 sxy 1 1Isis J33 LINEAR REGRESSION 0979 amp I 0979500 241 0963 y 100 y intercept LINEAR REGRESSION y Regresswn hne N 39 a dy Slope I 50 100 Pn39nciples of Softcopy Photo grammetry 1222008 SURE 440 Advanced Photogrammetry right image search array DIGITAL IMAGE MATCHING Candidate subarray chosen from left photo and search performed on corresponding subarray on Search array larger than candidate subarray Moving window used to compare candidate subarray with all possible window locations in Correlation coefficient computed at each window location results in correlation matrix C Largest correlation in C tested against 
If the largest coefficient exceeds the threshold, the corresponding window location in the search array is considered the match. (Figure: computing correlation coefficients using a moving window within a search array — candidate subarray A, moving window subarray B, search array.)

DIGITAL IMAGE MATCHING — EXAMPLE

Given the candidate array A (an ideal fiducial-cross template) and the search array S containing a fiducial mark, compute the position of the fiducial.

A (5×5):
 0  0 50  0  0
 0  0 50  0  0
50 50 50 50 50
 0  0 50  0  0
 0  0 50  0  0

S (9×9):
41 43 43 49 60 43 41 40 44
43 44 45 50 64 45 43 43 45
42 43 44 48 63 49 45 42 42
42 45 47 50 65 45 45 41 41
59 62 62 64 69 64 62 63 60
50 48 48 51 68 55 50 54 53
42 41 44 48 63 42 47 47 45
42 44 42 45 62 44 44 45 43
42 43 44 48 60 47 44 38 35

SOLUTION — Extract subarray B from the search array at position (1, 1):

B = 41 43 43 49 60
    43 44 45 50 64
    42 43 44 48 63
    42 45 47 50 65
    59 62 62 64 69

Compute the average DNs of the subarrays:
Ā = (0 + 0 + 50 + … + 50)/25 = 18
B̄ = (41 + 43 + 43 + … + 62 + 64 + 69)/25 = 51.48

Compute the summation terms for the correlation coefficient:
Σ(A − Ā)(B − B̄) = (0 − 18)(41 − 51.48) + (0 − 18)(43 − 51.48) + … + (50 − 18)(69 − 51.48) = 1316
Σ(A − Ā)² = (0 − 18)² + (0 − 18)² + (50 − 18)² + … = 14400
Σ(B − B̄)² = (41 − 51.48)² + (43 − 51.48)² + … + (69 − 51.48)² = 2102.24

Compute the correlation coefficient:
c = 1316 / √(14400 × 2102.24) = 0.24

SOLUTION (cont.) — Compute the remaining coefficients in a similar manner:

C = 0.24 0.09 0.35 0.19 0.19
    0.24 0.16 0.32 0.21 0.32
    0.25 0.37 0.94 0.29 0.27
    0.08 0.02 0.50 0.06 0.03
    0.28 0.23 0.27 0.18 0.22

The largest value in C, 0.94, exceeds the 0.7 threshold, so that window location is taken as the position of the fiducial.

LEAST SQUARES MATCHING
- Conceptually related to the correlation method, but with the advantage of obtaining the match location to a fraction of a pixel.
- Common implementation approach:

A(x, y) = h0 + h1·B(x′, y′)
x′ = a0 + a1x + a2y
y′ = b0 + b1x + b2y
LEAST SQUARES MATCHING

(Figure: position of the subarrays for least squares matching — candidate subarray A on the left image, subarray B on the right image; coordinates in units of pixels.)

Least squares observation equation:
h0 + h1·B(a0 + a1x + a2y, b0 + b1x + b2y) − A(x, y) = vA

Linearize the observation equation using Taylor's series about the current approximations (superscript 0):

dh0 + B·dh1 + h1Bx′(da0 + x·da1 + y·da2) + h1By′(db0 + x·db1 + y·db2) − [A(x, y) − (h0⁰ + h1⁰B)] = vA

where Bx′ = ∂B/∂x′ and By′ = ∂B/∂y′ are the gradients of the right-image patch, and the d-terms are corrections to the radiometric (h) and geometric (a, b) parameters.

NUMERICAL RESECTION AND ORIENTATION
Center for Photogrammetric Training, Ferris State University

INTRODUCTION
- Case I: Compute the exterior orientation (κ, φ, ω, XL, YL, ZL). The photo coordinates (xi, yi) are observed; the survey control (Xi, Yi, Zi) is treated as known.
- Case II: Extension of Case I — the exterior orientation elements are also treated as observed quantities.
- Case III: Extension of Case II — the observed quantities include the photo coordinates, the e.o., and the survey coordinates of unknown survey points; survey control is given. Find the adjusted e.o. and survey coordinates.
- Case IV: Extension of Case III — the interior orientation is also observed. The adjustment yields adjusted e.o., i.o., and survey coordinates.

General mathematical model:
F = F(obs, X̄) = 0

Taylor's series linearizes the equation:
F⁰ + (∂F/∂obs)|₀·V + (∂F/∂X̄)|₀·Δ = 0

Observation equation:
AV + BΔ + f = 0
where V = residuals on the observations, Δ = alteration (correction) vector to the parameters, f = discrepancy vector.

CASE I
- Estimate the variance-covariance matrix Σ of the observations; compute the adjusted e.o. parameters and the variance-covariance matrix of the adjusted parameters.
- Math model — the central projective (collinearity) equations:

Fx = x − x0 + c·(m11ΔX + m12ΔY + m13ΔZ)/(m31ΔX + m32ΔY + m33ΔZ) = 0
Fy = y − y0 + c·(m21ΔX + m22ΔY + m23ΔZ)/(m31ΔX + m32ΔY + m33ΔZ) = 0

where ΔX = X − XL, ΔY = Y − YL, ΔZ = Z − ZL.
CASE I (continued)

Observation equations: AV + BΔ + f = 0, where

A = ∂F/∂(x, y) = | 1 0 | = I
                 | 0 1 |

so each image point contributes the pair V + BΔ + f = 0, with V = [vx vy]ᵀ and f = [Fx⁰ Fy⁰]ᵀ evaluated at the current estimates;

B = ∂F/∂(κ, φ, ω, XL, YL, ZL) — the 2×6 block of partial derivatives of Fx and Fy with respect to the six parameters — and Δ = [δκ δφ δω δXL δYL δZL]ᵀ.

Function to be minimized:
φ = VᵀWV − 2λᵀ(V + BΔ + f)

Differentiate the function:
∂φ/∂V = 2WV − 2λ = 0
∂φ/∂Δ = −2Bᵀλ = 0

Collecting the observation equation and the differentiated functions, then eliminating V and λ (λ = WV, V = −(BΔ + f)), gives the normal equations:

(BᵀWB)Δ + BᵀWf = 0, generally written NΔ + t = 0

Corrections to the parameters: Δ = −N⁻¹t. The adjusted parameters are found by adding the corrections to the current estimates: X̄a = X̄⁰ + Δ.

Residuals: V = −(BΔ + f).
Unit variance: σ̂0² = VᵀWV/(2n − 6).
Variance-covariance matrix of the adjusted parameters: Σ = σ̂0²·N⁻¹.

EXAMPLE

Photo observations (mm):
Pt   x        y
1    61.982   79.018
2    73.147   78.240
3    54.934   65.899
4    26.046   29.449
5    34.893   71.287
6    23.980   31.889
7    11.783   88.922
8    85.047   105.836
9    26.468   6.082
10   12.523   79.026
11   27.972   85.027
12   12.094   69.861
13   80.458   70.012

Survey control points:
Pt   X            Y             Z
1    446467.5000  1112955.3700  2738.6600
2    455272.0300  1099326.3000  2755.3100
3    455367.0500  1101930.1300  2751.0100
4    463224.3000  1110863.1900  2549.9000
5    467972.2300  1112610.0100  2632.1400
6    463342.6800  1111228.9000  2548.5000
7    450198.9000  1104751.8200  2628.4500
8    453280.4500  1096508.7600  2913.6500
9    460871.3500  1109333.4300  2556.5500
10   451262.1800  1105311.7400  2619.7300
11   448158.0000  1109101.6300  2883.2000
12   464892.7900  1117291.7600  2668.5200
13   470614.2300  1107954.2700  2686.3900

Estimated exterior orientation elements:
XL = 459000.000, YL = 1111500.000, ZL = 20900.000, κ = 2.1500, φ = 0.0000, ω = 0.0000

(Printouts of the normal-equation coefficients and of the alteration vector Δ for iterations 1 through 4 followed here; by iteration 4 the corrections to all six parameters are effectively zero.)

Adjusted exterior orientation elements:
XL = 458924.624, YL = 1111467.719, ZL = 20905.445, κ = 2.1281, φ = 0.0195, ω = 0.0098

Residuals on the photo observations (mm):
Pt   vx      vy
1    0.002   0.009
2    0.004   0.007
3    0.002   0.002
4    0.001   0.002
5    0.002   0.004
6    0.000   0.000
7    0.006   0.011
8    0.006   0.001
9    0.011   0.000
10   0.007   0.001
11   0.002   0.006
12   0.001   0.007
13   0.004   0.006

The a posteriori unit variance is 3.471294. The variance-covariance matrix of the adjusted parameters is a symmetric 6×6 matrix whose diagonal terms (the variances of XL, YL, ZL, κ, φ, ω) are 0.233948622, 0.154028192, 0.025329779, 0.000000005, 0.000000048, and 0.000000039.
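The Case I iteration just shown can be sketched as a small Gauss–Newton loop on the collinearity equations. This is a sketch, not the notes' own program: it uses a numeric Jacobian in place of the analytic partials, and the rotation order and the synthetic data in the usage/test below are assumptions.

```python
import math

def rot(omega, phi, kappa):
    """M = M_omega * M_phi * M_kappa, angles in radians (assumed convention)."""
    cw, sw = math.cos(omega), math.sin(omega)
    cp, sp = math.cos(phi), math.sin(phi)
    ck, sk = math.cos(kappa), math.sin(kappa)
    Mw = [[1, 0, 0], [0, cw, sw], [0, -sw, cw]]
    Mp = [[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]]
    Mk = [[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]]
    mul = lambda A, B: [[sum(A[i][t] * B[t][j] for t in range(3)) for j in range(3)] for i in range(3)]
    return mul(mul(Mw, Mp), Mk)

def project(p, pt, f):
    """Collinearity: photo coordinates of ground point pt for e.o. parameters p."""
    M = rot(p[0], p[1], p[2])
    d = [pt[i] - p[3 + i] for i in range(3)]
    u, v, w = (sum(M[r][i] * d[i] for i in range(3)) for r in range(3))
    return (-f * u / w, -f * v / w)

def solve(N, t):
    """Gauss-Jordan elimination with partial pivoting for a small system."""
    n = len(N)
    M = [row[:] + [tv] for row, tv in zip(N, t)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                fac = M[r][c] / M[c][c]
                M[r] = [a - fac * b for a, b in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def resect(obs, ctrl, f, p0, iters=10):
    """Case I: recover the six e.o. parameters from photo obs of known control."""
    p = list(p0)
    for _ in range(iters):
        B, fv = [], []
        for (xo, yo), pt in zip(obs, ctrl):
            xc, yc = project(p, pt, f)
            fv += [xo - xc, yo - yc]              # discrepancy: observed - computed
            rows = [[0.0] * 6, [0.0] * 6]
            for k in range(6):                    # numeric partial derivatives
                h = 1e-6 * max(1.0, abs(p[k]))
                q = p[:]; q[k] += h
                xh, yh = project(q, pt, f)
                rows[0][k] = (xh - xc) / h
                rows[1][k] = (yh - yc) / h
            B += rows
        N = [[sum(B[r][i] * B[r][j] for r in range(len(B))) for j in range(6)] for i in range(6)]
        t = [sum(B[r][i] * fv[r] for r in range(len(B))) for i in range(6)]
        p = [pv + d for pv, d in zip(p, solve(N, t))]
    return p
```

With exact synthetic observations the loop converges in a handful of iterations from a coarsely perturbed initial guess, mirroring the four-iteration behaviour of the numeric example.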
CASE II
The exterior orientation parameters are now treated as directly observed quantities, and a new math model is introduced into the adjustment — one condition per observed parameter:

Fκ = κ̄ − κ = 0   Fφ = φ̄ − φ = 0   Fω = ω̄ − ω = 0
FXL = X̄L − XL = 0   FYL = ȲL − YL = 0   FZL = Z̄L − ZL = 0

The observations carry residuals, so the adjusted parameters are only estimated initially:

κ̄a = κ̄ + v̄κ = κ⁰ + δκ, … , Z̄La = Z̄L + v̄ZL = ZL⁰ + δZL

Rearranged, and grouped with the observation equations from the projective equations:

V + BΔ + f = 0          (projective equations)
V̄ = Δ + f̄              (parameter observations, f̄ = X⁰ − X̄)

Function to be minimized: φ = VᵀWV + V̄ᵀW̄V̄, where the combined weight matrix is block-diagonal, diag(W, W̄).

Normal equations:
(BᵀWB + W̄)Δ + (BᵀWf + W̄f̄) = 0, generally written N̄Δ + t̄ = 0

- The initial estimate of the parameters is taken as the observed values, X⁰ = X̄, so f̄ = 0 in the first iteration; thereafter f̄ = X⁰ − X̄.
- Solution: Δ = −N̄⁻¹t̄.
- Adjusted parameters: X̄a = X⁰ + Δ. Residuals: V = −(BΔ + f), V̄ = Δ + f̄.
- Unit variance σ̂0² = (VᵀWV + V̄ᵀW̄V̄)/r; a posteriori variance-covariance matrix σ̂0²·N̄⁻¹.

CASE III
- The spatial (survey) coordinates are introduced as observed. The math model is expanded with conditions for the survey points:

FXj = X̄j − Xj = 0   FYj = Ȳj − Yj = 0   FZj = Z̄j − Zj = 0

- The observation equations become

V + BΔ + B̈Δ̈ + f = 0
V̄ + Δ + f̄ = 0
V̿ + Δ̈ + f̿ = 0

with the observational residuals defined as V = [vx1 vy1 …]ᵀ for the photo coordinates, V̄ = [v̄κ … v̄ZL]ᵀ for the e.o. parameters, and V̿ = [vX1 vY1 vZ1 … vZn]ᵀ for the survey coordinates.
CASE III (continued)
- Discrepancy vectors are found by evaluating the functions at the current estimates: f from the projective equations, f̄ from the parameter-observation conditions F(κ⁰), …, and f̿ from the survey-coordinate conditions F(Xj⁰), ….
- Alteration vectors: Δ = [δκ δφ δω δXL δYL δZL]ᵀ for the e.o. parameters and Δ̈ = [δX1 δY1 δZ1 …]ᵀ for the survey points.
- Design matrices: B (partials of the projective equations with respect to the e.o. parameters) and B̈ (partials with respect to the survey coordinates).
- Observation equations, in block form:

| V  |   | B  B̈ |         | f  |
| V̄  | + | I  0 | |Δ |  +  | f̄  | = 0
| V̿  |   | 0  I | |Δ̈ |     | f̿  |

CORRECTIONS TO PHOTO COORDINATES
Surveying Engineering Department, Ferris State University

Analytical Photogrammetry Instrumentation

Analytical photogrammetry is performed on specialized instruments whose cost is very high because their market is limited. With the onset of digital photogrammetry the instrumentation has become cheaper — it is essentially a computer — but the software remains expensive for these specialized applications. The design characteristics of analytical instrumentation include (Merchant 1979):
- High accuracy
- High reliability
- High measuring efficiency
- Low first cost
- Low cost of maintenance

In addition, operational efficiency becomes an important consideration. This factor involves the training required of the operator: if the instrument requires an individual with a basic theoretical background in photogrammetry along with experience, then there is a limited pool from which to draw operators. Operational efficiency also depends on the comfort of the operator when using the equipment. One of the advantages of digital photogrammetry is that it has the capability — at least theoretically — to completely automate the whole process, so that even an individual with no basic understanding of photogrammetric principles can carry it out.

There are various kinds of photogrammetric instrumentation that can be used in analytical photogrammetry. At the low end
, precision analog or semi-analytical (computer-aided) stereoplotters can be used, in either monoscopic or stereoscopic mode. When a stereoplotter is used, it is important to put all of the elements in their zero positions (ω, φ, κ, by, bz = 0) (Ghosh 1979); the base (bx), scale, and Z-column readings should be set to some realistic value. Analytical plotters can also be used for analytical photogrammetric measurements; these instruments are generally linked to analytical photogrammetry software that helps the operator complete the photo measurements. Comparators are designed specifically for precise photo measurements in analytical photogrammetry; they can be either monocomparators or stereocomparators — the diapositives are placed on the stages and all points imaged on the photo are measured. The last type of instrument is the digital or softcopy plotter: photos are scanned or captured directly in digital form and points are measured on screen. With autocorrelation techniques the whole process of aerotriangulation can be automated, with the solution containing more points than could be measured manually.

To achieve the high accuracy demanded by many analytical photogrammetric applications, it is important that the instrument upon which the measurements are made is well calibrated and maintained. There are many systematic error sources associated with the comparator:
a) Errors of the instrument system — scaling and periodic errors of the x, y measuring systems (involving scales, spindles, coordinate counters, etc.); affinity errors (a scale difference between the x and y directions); errors of rectilinearity (bending of the guide rails); lack of orthogonality between the x and y axes (also known as rectangularity error).
b) Backlash and tracking errors.
c) Dynamic errors — e.g., the microscope velocity does not drop to zero at points to be approached during the operation.
d) Errors of automation in the system — digital resolution (the smallest incremental interval) and errors due to deviation
of the scanning direction; this is because the control system may not provide for a continuously variable scanning direction (Ghosh 1979, p. 30).

One could determine the corrections for each of these error sources, although from a practical perspective these errors are accounted for by transforming the photo measurements to the true photo system, which is based on calibration.

Ground Targets

Ground targets can be one of three different types. Signalized points are targeted on the ground prior to the flight; several different target designs are used in photogrammetry. Detail points are well-defined physical features that are imaged on the photography — things like road intersections (for small-scale mapping), intersections of sidewalks, manholes, etc. The last type of control point is the artificial point, which is added to the photography after the film is processed: using a point-transfer instrument, such as the PUG by Wild, points are marked on the emulsion of the film. Example target designs employed by the Michigan Department of Transportation (MDOT) are shown in Figures 1–3.

(Figure 1: Standard MDOT target design — standard aerial photography targets (standard target, MagNail, T-target, chevron); all painted targets must be highlighted in black by one of the following methods: (a) background, (b) outline, (c) bullseye.)

(Figure 2: MDOT target design for high-altitude photography — MagNail, T-target, chevron; all painted targets must be highlighted in black by background, outline, or bullseye.)

(Figure 3: Low-level MDOT target design — low-level targets must have the square black outline shown.)

Abbe's Comparator Principle

Abbe's comparator principle states that the object that is to be
measured and the measuring instrument must be in contact, or lie in the same plane. The design is based on the following requirements (refer to Figure 4):
i) "To exclusively base the measurement in all cases on a longitudinal graduation with which the distance to be measured is directly compared, and
ii) To always design the measuring apparatus in such a way that the distance to be measured will be the rectilinear extension of the graduation used as a scale" (Manual of Photogrammetry, ASP, in Ghosh 1979, p. 7).

(Figure 4: Examples of Abbe's comparator principle with simple measurement systems — a simple linear scale (full compliance), a drum micrometer (least-count level; does not comply), and a sliding micrometer (complies with (i) but not with (ii)).)

Photo measurements can be made on many different types of instruments. In the past the most accurate methods involved the use of a comparator, and different types of comparators were created to improve the accuracy of these measurements. Today, digital photogrammetric techniques can be employed for photo measurements with a very high degree of accuracy. While comparators come in many different configurations, the procedure described here illustrates an approach to determining photo coordinates from comparator measurements; the same approach can be applied to the Mann monocomparator. The geometry is depicted in Figure 5.

The simplest method, computationally, is to place the diapositive on the rotary stage and align the fiducial marks to the coordinate system of the comparator. This is done by rotating the stage so that the line between the fiducial marks labeled 1 and 2 is perfectly aligned with the comparator x-axis. The comparator coordinates of the indicated principal point can then be found from the fiducial readings as

rx0 = (rx1 + rx2)/2,  ry0 = (ry1 + ry2)/2

The corresponding photo coordinates are found by subtracting the readings of the indicated principal point from the comparator measurements made at each point; for point p this is

xp = rxp − rx0,  yp = ryp − ry0
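A sketch of this comparator-to-photo reduction, including the rotation-based variant in which the stage is only approximately aligned (the fiducial layout and stage misalignment in the usage below are hypothetical):

```python
import math

def photo_coords(fids, points):
    """Reduce comparator readings to photo coordinates.

    fids: comparator readings (rx, ry) of the fiducials, with fiducials
    1 and 2 (fids[0], fids[1]) defining the photo x-axis.
    points: additional readings to reduce.
    """
    (x1, y1), (x2, y2) = fids[0], fids[1]
    theta = math.atan2(y2 - y1, x2 - x1)        # stage misalignment angle
    c, s = math.cos(theta), math.sin(theta)
    rot = lambda rx, ry: (rx * c + ry * s, -rx * s + ry * c)  # 2-D rotation
    fr = [rot(rx, ry) for rx, ry in fids]
    x0 = (fr[0][0] + fr[1][0]) / 2.0            # indicated principal point:
    y0 = (fr[0][1] + fr[1][1]) / 2.0            # midpoint of fiducials 1 and 2
    return [(x - x0, y - y0) for x, y in (rot(rx, ry) for rx, ry in points)]
```

In a quick self-check, readings simulated by rotating hypothetical fiducials at (±113, 0) and (0, ±113) mm through 2° and offsetting them by an arbitrary stage origin are reduced back to the true photo coordinates.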
made at each point For point p this is Tho Cantor gmmmmmcwrlm Corrections to Photo Coordinates Page 7 Ty Rotary Stage Photographic Plate Arbitrary origin of comparator readings IX Figure 5 Geometry of a rotary stage comparator The process of aligning the fiducial marks to the comparator XaXis is a laborious procedure that is not necessary Simply place the diapositive onto the comparator rotary stage such that it is approximately aligned to the comparator coordinate system Then observe the coordinates at each of the fiducial coordinates and perform a transformation from the comparator coordinate system to the photo coordinate system The rotation angle can be found using the arctangent function Then apply a 2dimensional transformation X rX cosGry sine y rX sine ry c059 Tho Cantor gmmmmmcwrlm Corrections to Photo Coordinates Page 8 Using this basic relationship the y coordinates of fiducial points 1 or 2 and the X coordinates of fiducial points 3 and 4 need only be computed Then X1 2 X5 X1 2 Y Yi Y2 and XpXp Xo ypzyifyio If as is the normal situation the coordinates of the fiducial points on the camera calibration report are in the photo coordinate system then there is no need to determine the transformed coordinates of the indicated principal point and perform the translation to the origin The transformed coordinates will represent the photo coordinates directly Basic Analytical Photogrammetric Theory Analytical photogrammetry can be broken down into three fundamental categories First Order Theory Second Order Theory and Third Order Theory Fist Order Theory is the basic collinearity concept where the light rays from the object space pass through the atmosphere and the camera lens to the lm in a straight line Second Order Theory corrects for the most significant errors that are not accounted for in First Order Theory Those items that are normally covered include lens distortion atmospheric refraction film deformation and earth curvature Third Order Theory consists of 
all the other sources of error in the imposition of the collinearity condition that are not included in Second Order Theory. These errors are usually not accounted for except under special circumstances; they include platen unflatness, transient thermal gradients across the lens cone, etc.

Interior Orientation

The first phase of analytical photogrammetric processing is the determination of the interior orientation of the photograph. The photographic coordinate system is shown in Figure 6. The point p is imaged on the photograph with coordinates (xp, yp, 0). The principal point is determined through camera calibration; it is generally reported with respect to the center of the photograph as defined by the intersection of opposite fiducial marks (the indicated principal point), and has coordinates (x0, y0, 0). The perspective center is the location of the lens elements, and it has coordinates (x0, y0, f). The vector from the perspective center to the position on the photo is given as

a = [xp − x0, yp − y0, −f]ᵀ

(Figure 6: Photographic coordinate system — fiducial marks and principal point.)

Interior orientation involves the determination of film deformation, lens distortion, atmospheric refraction, and earth curvature. The purpose is to correct the image rays so that the line from the object space to the image space is a straight line, thereby fulfilling the basic assumption used in the collinearity condition.

Film Deformation

When film is processed and used, it is susceptible to dimensional change due to the tension applied to the film as it is wound during both the picture-taking and processing stages. In addition, the introduction of water-based chemicals to the emulsion during processing, and the subsequent drying of the film, may cause the emulsion to change dimensionally. These effects therefore need to be compensated. The simplest approach is to use an appropriate transformation model, as discussed in the previous section. One of the problems with this approach is that
it is possible that unmodelled distortion can still be present when only four or fewer fiducial marks are employed. To overcome this problem, reseau photography is commonly employed for applications requiring a higher degree of accuracy. A reseau grid consists of a grid of targets that are fixed to the camera lens and imaged on the film; one simple approach is to put a piece of glass in front of the film with the targets etched on its surface. The reseau grid is calibrated so that the positions of the targets are accurately known. By observing the reseau targets that surround the imaged points and using one of the transformation models discussed earlier, the results should more accurately depict the dimensional changes that occur due to film deformation.

For example, the isogonal affine model can be used. Taking into consideration the coordinates of the principal point (x0, y0), it has the form

| x′ |   | Δx |   |  cos α   sin α | | x − x0 |
| y′ | = | Δy | + | −sin α   cos α | | y − y0 |

In its linear form it looks like

x′ = a·x + b·y + c − x0
y′ = −b·x + a·y + d − y0

Using 4 fiducials, an 8-parameter projective transformation can be used. Its advantage is that linear scale changes can be found in any direction. The correction for film deformation is given as

x′ = (a1x + a2y + a3)/(c1x + c2y + 1)
y′ = (b1x + b2y + b3)/(c1x + c2y + 1)

Measurement of the four fiducials yields 8 observations; therefore this model provides a unique solution.

Another approach to compensating film deformation is to use a polynomial. One model used by the U.S. Coast and Geodetic Survey (now the National Geodetic Survey) when four fiducials are used is

Δx = x′ − x = a0 + a1x + a2y + a3xy
Δy = y′ − y = b0 + b1x + b2y + b3xy

This model can be expanded to an eight-fiducial observational scheme as

Δx = x′ − x = a0 + a1x + a2y + a3xy + a4x² + a5y² + a6x²y + a7xy²
Δy = y′ − y = b0 + b1x + b2y + b3xy + b4x² + b5y² + b6x²y + b7xy²

Lens Distortion

The effect of lens distortion is to move an image from its theoretically correct location to its actual position.
There are two components of lens distortion: radial distortion (a Seidel aberration) and decentering distortion. Radial lens distortion is caused by faulty grinding of the lens; with today's computer-controlled lens manufacturing process this distortion is almost negligible, at least to the accuracy of the camera calibration itself. Decentering distortion is caused by faulty placement of the individual lens elements in the camera cone and other manufacturing defects; its effects are also small with today's lens systems. The values for lens distortion are determined from camera calibration, and are generally reported either in a table or in terms of a polynomial (see the example at the end of this section).

Seidel Aberration Distortion

Seidel identified five lens aberrations. These include astigmatism, chromatic aberration (sometimes broken into lateral and longitudinal chromatic aberration), spherical aberration, coma, curvature of field, and distortion. An aberration is the "failure of an optical system to bring all light rays received from a point object to a single image point or to a prescribed geometric position" (ASPRS 1980). It is caused by faulty grinding of the lens. Generally, aberrations do not affect the geometry of the image but instead affect image quality; the exception is Seidel's fifth aberration, distortion. Here the geometric position of the image point is moved in image space, and this change in position must be accounted for in analytical photogrammetry. The effect of this distortion is radial from the principal point. Conrady's intuitive development for handling this radial distortion is expressed in the following polynomial form:

δr = k0r + k1r³ + k2r⁵ + k3r⁷ + k4r⁹

This is based on three general hypotheses:
a) the axial ray passes through the lens undeviated;
b) the distortion can be represented by a continuous function; and
c) the sense of the distortion should be positive for all outward displacements of the image (Ghosh 1979, p. 88).
to Photo Coordlnates Page 12 y A V A Figure 7 Radial lens distortion geometry From Figure 7 recall that r2 X2 yz By similar triangles the following relationship can be shown E33251 r X y the X and y Cartesian coordinate components of the effects of this distortion are thus found by 5 5Xrxk0 klr2 er4 r 5 5y ryk0 k1r2 k2r4 y r Tho Gamer gmvnmmm e Tralnlnu Correctrons to Photo Coordrnates Page 13 The corrected photo coordinates can then be computed using the form1 xcx 5xl Exl k0 k1r2 k2r4 x r ycy5y1 Ey1 ko klr2k2r4y r An example using two different methods of applying the lens distortion are as follows The rst example uses a linear interpolation using the values given in the table on radial lens distortion from a camera calibration report The second example is the same as the first except that this time the polynomial correction is employed The problem is stated as follows A camera calibration report displays the following information Field Angle 750 15 2270 30 35 400 Symmetric radial distortion Hm 4 6 4 l 6 3 Decentering distortion pm 0 0 0 1 1 2 If the photo coordinates of a point are x 33l48 mm and y l4921 mm what are the coordinates corrected for radial lens distortion The calibrated focal length of the camera is 152560 mm 1 Note that the US Geological Survey gives the polynomial coefficients as correction terms instead of error terms as presented here Therefore the corrected photo coordinates are computed from the data given in the calibration report as follows xc 1k0 k1r2 k2r4 yc 1k0 k1r2 er4 y Tho cm gongrlmmot c Tralnlnu Correctlons to Photo Coordlnates Page 14 Correcting Photographic Coordinates for Radial Lens Distortion Given the following values x 33148 y714921 f 152560 Factor to convert degrees into radians torad L 180 75 4 15 6 227 ang dtstott 30 35 76 40 3 To compute the distortion at the point we first need to compute the radial distance from the principal point 20085 40878 39 63817 dtst ftanang torad dist 88081 106824 128013 The radial distance from 
the principal pointto the point is r IX2 y2 r 36351 Thus the point lies between the 75 and 150 field angles Perform a linear interpolation to find the radial distortion at that point distort1 7 distort 0 4 dist 7 1 dtst1 7 dtst0 1 5r 5r 00044 1000 The corrected photographic coordinates become 3 17E X 7 r gci33144 5r yc 17 y yc714919 The cm In Fhmmmmm e Training Corrections to Photo Coordinates Page 15 Correcting Photographic Coordinates for Radial Lens Distortion Given the following values X 33148 y 714921 f 152560 r quotx2 y2 k0 702231gtlt 10 3 k1 0450110 7 k2 70181710711 Using the polynomial to compute the photographic coordinates corrected for radial lens distortion 1 k0 klr2 k2r4x 33142 yc1 k0 klr2 k2r4y yc 14919 DE CENTERING DISTORTION Decentering lens distortion is asymmetric about the principal point of autocollimation When the value is quotonequot then the radial line remains straight This is called the axis of zero tangential distortion see gures 8 and 9 y ll Tangential Pro le m of Maxxmu qxgential Q s lming gDO Figure 8 Geometry of tangential distortion showing the tangential pro le Tho Gamer gammymural Tralnlnu Correctrons to Photo Coordmates Page 16 Aly Figure 9 Effects of decentering distortion Duane Brown using the developments by Washer designed the corrections for the lens distortion due to decentering Brown called this the quotThin Prism Modelquot and it is shown as 5x Jlr2 erA sin p0 Jsin p0 5y Jlr2 erA cos p0 Jcos p0 where J1 J2 are the coefficients of the pro le function of the decentering distortion and p0 is the angle subtended by the axis of the maximum tangential distortion with the photo x axis The concept of the thin prism was found to be inadequate to fully describe the effects of decentering distortion Therefore the ConradyBrown model was developed to find the effects of decentering on the xy encoders 2 5x J1r2 J2r4l 2 Jsintpo 2X2yCOSp0 r r 2 2 2 5yJlr2J2r4 iysintpo l 3 JCOS 0 r r Tho Gamer gammymural Tralnlnu Correctrons to Photo 
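A sketch combining both lens corrections: the radial polynomial of the worked example (in the USGS correction-term convention), plus a decentering term in the P-coefficient form of the Conrady–Brown family (with P1 = J1 sin φ0, P2 = J1 cos φ0 and the profile terms set to zero). The decentering values in the test are hypothetical.

```python
def radial_correct(x, y, k):
    """Radial correction, coefficients given as correction terms (USGS style)."""
    r2 = x * x + y * y
    s = 1.0 + k[0] + k[1] * r2 + k[2] * r2 * r2
    return x * s, y * s

def decentering(x, y, p1, p2):
    """Decentering displacement (Conrady-Brown P-form, profile terms omitted)."""
    r2 = x * x + y * y
    dx = p1 * (r2 + 2 * x * x) + 2 * p2 * x * y
    dy = 2 * p1 * x * y + p2 * (r2 + 2 * y * y)
    return dx, dy

# Worked example from the notes: x = 33.148 mm, y = -14.921 mm
k = (-0.2231e-3, 0.4501e-7, -0.1817e-11)
xc, yc = radial_correct(33.148, -14.921, k)
```

The radial call reproduces the example's polynomial result, xc ≈ 33.142 mm and yc ≈ −14.919 mm; the decentering displacement would then be subtracted from the coordinates in the same way.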
A revised Conrady–Brown model made further refinements to the computation of decentering distortion; this model is

δx = [P1(r² + 2x²) + 2P2xy]·(1 + P3r² + P4r⁴)
δy = [2P1xy + P2(r² + 2y²)]·(1 + P3r² + P4r⁴)

where

P1 = J1 sin φ0
P2 = J1 cos φ0
P3 = J2/J1
P4 = J3/J1

The P's define the tangential profile function — the tangential distortion along the axis of maximum tangential distortion. The corrected photo coordinates due to the effects of decentering distortion are found by subtracting the errors computed in the previous equations:

xc = x − δx
yc = y − δy

Atmospheric Refraction

Light rays bend due to refraction. The amount of refraction is a function of the refractive index of the air along the path of the light ray; this index depends upon the temperature, pressure, and composition (including humidity, dust, carbon dioxide, etc.) of the air. The light rays from object space to image space must pass through layers of differing density, bending at the layer boundaries along the path. From Snell's law we can express the law of refraction across a boundary in differential form as

(ni + dn)·sin θi = ni·sin(θi + dα)
2d9 f2 r2 f dr d9 5r can also be expressed as a function of K using d9Ktan9 Ktan9 3 drKr 2 The radial component can also be expressed using a simpli ed power series Z Z Z drzf r f f r2K r f f drk1rk2r3 k3r5 Where the k s are constants The Cartesian components of atmospheric refraction are Tho Gamer gmvnmmm e Tralnlnu Correctlons to Photo Coordmates Page 20 Z 5XX EJK1r 2X r f 5r r2 6 K1 y Yr fzjy K is a constant determined from some model atmosphere For example the 1959 ARDC Air Rome Development Center model developed from Bertram is shown as K 2410H 2410h h 1 6 H2 6H250 h2 6h250 H The atmospheric model developed by Saastamoinen for an altitude of up to eleven kilometers is given by 256 K 1 0022576h 1 002257H5 27701 002257H 5 10 6 For altitudes up to nine kilometers this equation can be simpli ed as K 13 H h1 002 2H h10 6 There are several other atmospheric models Ghosh 1979 also identi es the US Standard Atmosphere and the ICAO Standard atmosphere He also states that up to about 20 km these models are almost the same Table 1 shows the amount of distortion using a focal length of 153 mm and the ICAO Standard atmosphere from Ghosh 1979 p95 The tabulated values dr are in micrometers Tho Gamer gmvnmmm e Tralnlnu Corrections to Photo Coordinates Page 21 Flying For Radial Distance r of the Image Point from the Photo Center in mm Coefficients Height in m 12 24 50 63 78 94 111 131 153 1910392 k210396 For Ground Elevation 7 0 m above sea level 3000 04 09 19 26 34 45 59 79 107 34 153 6000 07 15 33 44 59 77 101 135 183 61 250 9000 09 19 42 57 75 99 130 173 234 77 223 For Ground Elevation 7 500 m above sea level 3000 03 07 16 21 28 37 49 64 88 28 125 6000 07 13 30 40 53 69 91 122 154 54 23 9000 09 18 39 53 70 92 120 160 217 72 299 For Ground Elevation 7 1000 m above sea level 3000 03 06 13 17 22 29 39 51 69 22 099 6000 06 12 27 36 48 63 82 109 145 48 208 9000 08 16 36 49 65 85 112 149 201 67 276 For Ground Elevation 7 1500 m above sea level 3000 02 04 08 12 16 22 28 38 51 16 
074 6000 05 11 24 32 42 55 73 97 131 42 187 9000 07 15 34 45 60 78 103 138 186 61 259 Table 1 Radial image distortion due to atmospheric refraction EARTH CURVATURE Earth curvature causes a displacement of a point due to the curvature of the earth The point when projected onto a plane tangent to the ground nadir point will occupy a position on that plane at a distance of AH from the earth39s surface The image displacement as shown in the gure is always radially inward towards the principal point From the geometry we can see that 9222 R R D D2 cosezcos R 2R2 AHR7Rcos0 Rl cos9 2 R 171 D 2R2 D2 Tho Gamer gammymmch Tralnlnu Correctlons to Photo Coordlnates Page 22 r dli Vertical Photograph if Corrected Image Location Actual Image Location Exposure Station Corrected Ray Path H Actual Ray Path E AH arm Surface Dy R D 9 Figure 11 Earth curvature correction From which we can write dE i H But dE m 2AH Hy Therefore Tho Gamer gmvnmmm e Tralnlnu Corrections to Photo Coordinates Page 23 f D D2 dE H39 H39 2 R fDS 2 HZR But D m r Yielding H 3 dB r 2 2 R f Since H392Rf 2 is constant for any photograph dE Kr3 where 141 2Rf2 The effects of earth curvature are shown in the Table 2 with respect to the ying height H and the radial distance from the nadir point Ghosh 1979 Doyle 1981 Looking at the formula for earth curvature and the intuitive evaluation of the gure one can see that the effects will increase rapidly at higher ying heights and the farther one moves from the nadir point En Comm thmmmmue Tralnlnu Correct10ns to Photo Coordmates Page 24 Rmm H Table 2 Amount of earth curvature in mm for vertical photography assuming a focal length of 150 mm from Ghosh 1979 p98 EXAMPLE A vertical aerial photograph is taken with an aerial camera having the following calibration data Calibrated focal length 152212 mm Fiducial mark amp principal point coordinates are shown in the next gure of the fiducial marks The radial lens distortion is shown from the following diagram delineating the 
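The refraction and earth-curvature corrections just derived can be sketched numerically. The inputs below anticipate the worked example that follows (38,000 ft flight, 400 ft terrain, f = 152.212 mm, earth radius 20,906,000 ft), so the outputs can be compared against the notes' own results.

```python
def ardc_k(H_km, h_km):
    """1959 ARDC (Bertram) atmospheric refraction constant K; H, h in kilometers."""
    term_flight = 2410.0 * H_km / (H_km**2 - 6.0 * H_km + 250.0)
    term_ground = 2410.0 * h_km / (h_km**2 - 6.0 * h_km + 250.0) * (h_km / H_km)
    return (term_flight - term_ground) * 1.0e-6

def refraction_dxdy(x_mm, y_mm, f_mm, K):
    """Cartesian refraction corrections dx = K(1 + r^2/f^2)x, dy = K(1 + r^2/f^2)y."""
    r2 = x_mm**2 + y_mm**2
    scale = K * (1.0 + r2 / f_mm**2)
    return scale * x_mm, scale * y_mm

def earth_curvature_dE(r_mm, Hprime, R, f_mm):
    """Earth-curvature displacement dE = r^3 * H' / (2 R f^2); H' and R in the same unit."""
    return r_mm**3 * Hprime / (2.0 * R * f_mm**2)

# Worked-example values: 38,000 ft -> 11.58 km, 400 ft -> 0.12 km.
K = ardc_k(11.58, 0.12)                                         # ~8.87e-5, as in the notes
dx, dy = refraction_dxdy(95.576, 84.655, 152.212, K)            # ~0.014 mm, ~0.013 mm
dE = earth_curvature_dE(127.653, 38000.0 - 400.0, 20906000.0, 152.212)  # feet cancel
```

Because H′ and R share the same unit in the earth-curvature formula, the feet cancel and dE comes out in millimeters (about 0.081 mm here), matching the example.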
Figure 12 Example showing calibration values for fiducials and principal point (calibrated fiducial coordinates: 1: x = 26.274 mm, y = 14.648 mm; 2: x = 238.257 mm, y = 16.973 mm; 3: x = 235.928 mm, y = 228.974 mm; 4: x = 23.984 mm, y = 226.622 mm; principal point: x0 = 131.104 mm, y0 = 121.814 mm)

Figure 13 Camera calibration graph of distortion in micrometers (±15 µm scale) using both polynomials (dashed) and radial lens distortion (solid)

  Radial distance (mm):  20    40    60    80    100   120   140   160
  Distortion (µm):        6     9     6     1     7     9     1    13
  Polynomial (µm):       5.3   7.9   5.2   0.5   7.1  10.5   0.6  12.3

  Table 3 Radial lens distortion for the camera in the example

The decentering lens distortion values are

  J1 = 8.10×10⁻⁴   J2 = 1.40×10⁻⁸   φ0 = 108° 00′ 39″

The flying height is 38,000 ft above mean sea level. The average height of the terrain is 400 ft above mean sea level. The photograph is placed in the comparator and the following image coordinates are measured:

  Pt   x (mm)     y (mm)
  1    28.202     13.032
  2    240.341    16.260
  3    237.068    228.432
  4    24.980     225.160
  p    228.640    36.426

Questions

1. What are the image coordinates of point p corrected for film deformation and reduced to put the origin at the principal point? Use a 6-parameter general affine transformation and compute the residuals.
2. What are the image coordinates of p corrected additionally for radial and decentering lens distortion?
3. What are the image coordinate corrections at p for atmospheric refraction and earth curvature?
4. What are the final corrected image coordinates of p?

SOLUTION

1. The observed photo coordinates are x = 228.640 mm, y = 36.426 mm. The design matrix B is

  B = [ 28.202   13.031   1 ]
      [ 240.341  16.260   1 ]
      [ 237.068  228.432  1 ]
      [ 24.980   225.160  1 ]

The discrepancy vectors (the calibrated fiducial coordinates) are

  f_x = (26.274  238.257  235.928  23.984)ᵀ
  f_y = (14.648  16.973   228.974  226.622)ᵀ

The normal coefficient matrix inverse N⁻¹ is

  N⁻¹ = [ 0.0000222209  0.0000000002  0.0029475275 ]
        [ 0.0000000002  0.0000222132  0.0026815783 ]
        [ 0.0029475275  0.0026815783  0.9647056994 ]

The parameters are a1 = 0.99923, a2 =
0.00428, b1 = 0.00441, b2 = 0.99917, c1 = 1.96656, c2 = 1.75196. The residuals are

  v1 = (0.00294, 0.00430)   v2 = (0.00294, 0.00429)
  v3 = (0.00294, 0.00430)   v4 = (0.00294, 0.00430)

The transformed coordinates are x = 226.657 mm, y = 37.168 mm. The photo coordinates translated to the principal point become

  x = 226.657 mm − 131.104 mm = 95.553 mm
  y = 37.168 mm − 121.814 mm = −84.646 mm

2. Lens distortions are computed as follows:

  r = √(x² + y²) = √(95.553² + 84.646²) = 127.653 mm
  Δr = 0.286r − 5.794×10⁻⁵ r³ + 2.223×10⁻⁹ r⁵ = −8.663 µm

The Seidel radial distortion in terms of the rectangular coordinate values gives

  xc = x(1 − Δr/r) = 95.553 (1 + 0.008663/127.653) = 95.559 mm
  yc = y(1 − Δr/r) = 84.646 (1 + 0.008663/127.653) = 84.652 mm

The decentering distortion using the revised Conrady-Brown model is shown as follows:

  P1 = J1 sin φ0 = 8.10×10⁻⁴ sin 108° = 0.00077
  P2 = J1 cos φ0 = 8.10×10⁻⁴ cos 108° = −0.00025
  P3 = J2/J1 = 1.40×10⁻⁸ / 8.10×10⁻⁴ = 0.000017

  δx = [P1(r² + 2x²) + 2P2xy](1 + P3r²)
     = [0.00077(127.653² + 2(95.559)²) + 2(−0.00025)(95.559)(84.652)](1 + 0.000017(127.653)²)
     = 0.016 mm
  δy = [2P1xy + P2(r² + 2y²)](1 + P3r²)
     = [2(0.00077)(95.559)(84.652) + (−0.00025)(127.653² + 2(84.655)²)](1 + 0.000017(127.653)²)
     = 0.003 mm

The coordinates corrected for decentering distortion then become

  xc = x + δx = 95.559 + 0.016 = 95.576 mm
  yc = y + δy = 84.652 + 0.003 = 84.655 mm

3. Using the 1959 ARDC model,

  H = 38,000 ft × (1200 m / 3937 ft) × (1 km / 1000 m) = 11.58 km
  h = 400 ft × (1200 m / 3937 ft) × (1 km / 1000 m) = 0.12 km

  K = [ 2410(11.58)/(11.58² − 6(11.58) + 250) − 2410(0.12)/(0.12² − 6(0.12) + 250) · (0.12/11.58) ] × 10⁻⁶ = 0.0000887

  δx = K(1 + r²/f²) x = 0.0000887 (1 + 127.653²/152.212²)(95.576) = 0.014 mm
  δy = K(1 + r²/f²) y = 0.0000887 (1 + 127.653²/152.212²)(84.655) = 0.013 mm

The effects of earth curvature are presented as

  dE = r³(H − h)/(2Rf²) = [127.653³ × (38000 − 400)] / [2 × 20,906,000 × 152.212²] = 0.0807 mm

where the heights and the earth radius R = 20,906,000 are in feet, so the feet cancel. The corrected photo coordinates due to the effects of refraction are

SURE 440 Advanced Photogrammetry

DIGITAL ORTHOPHOTOGRAPHY
Center for Photogrammetric Training
Ferris State University

PRINCIPLES OF ORTHOPHOTOGRAPHY
- Desire a picture where the perspective aspect of the picture is removed
- Eliminate relief and tilt displacement
- Process called differential rectification
Small segments of photograph rectified individually o Rectification removal of tilt displacement Digital Orthophotography SURE 440 Advanced Photogrammetry PERSPECTIVE VS ORTHOGRAPHIC PROJECTION Pers ective Center Perspecuve p Image A ReliefDisplacement u Orthoimage PRINCIPLES OF ORTHOPHOTOGRAPHY I Aerial photo a Streets displaced outward crossing over a hill I Orthophoto b Digital Orthophotography SURE 440 Advanced Photogrammetry PRINCIPLES OF n One of earliest instruments GallusFerber Photorestituteur Not economical PRINCIPLES OF ORTHOPHOTOGRAPHY GigasZeiss Orthoprojector GZl Uses components of C8 Stereoplanigraph Exposure slit moves in strips across the projection surface Scale of image continuously varied according to the relief by means of z motion Digital Orthophotography SURE 440 Advanced Photogrammetry Er Digital camera gt Orthophotograph Software rocessing DTM Ground Control Points DIGITAL ORTHOPHOTOGRAPHY I Differential rectification performed on pixels I Problems in urban and other areas with sharp vertical relief Impossible to obtain truly orthographic projections Areas hidden from Image Digital Orthophotography SURE 440 Advanced Photogrammetry BUILDING LEAN I Use 80 endlap and sidelap I Only about 1 12quot of photo used in orthophoto I Use longer focal length camera I Eg Merrick amp Co used 12 focal length camera over central Chicago I To maintain scale with 6 photography fly at higher altitude I Acquired spot or pinpoint photography over buildings I All these items added to cost of project DATA SOURCES FOR DIGITAL ORTHOPHOTO I Unrectified raster image I Scanned aerial photo I Image collected with digital sensor I Digital elevation model DEM or digital terrain model DTM over area I Used to compensate for effects of relief I Ground control I Provides absolute orientation if image and provides means to georeference each pixel in the image I Sensor calibration data I Compensate for distortions within sensor interior orientation Digital Orthophotography 
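The data-sources slide notes that ground control "provides means to georeference each pixel in the image." For a north-up orthophoto with square ground pixels, that per-pixel georeferencing is simple arithmetic; a minimal sketch, in which the upper-left corner coordinate and ground sample distance are hypothetical, and columns are assumed to run west-to-east with rows running north-to-south:

```python
def pixel_to_ground(row, col, x_ul, y_ul, gsd):
    """Ground coordinates of pixel (row, col) given the upper-left corner
    coordinate (x_ul, y_ul) and the ground sample distance gsd.
    Columns increase eastward; rows increase southward (northing decreases)."""
    return x_ul + col * gsd, y_ul - row * gsd
```

The same two lines, inverted, convert a ground coordinate back to a pixel index, which is the core lookup performed during differential rectification of each pixel.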
SURE 440 Advanced Photogrammetry ORTHOPHOTOGRAPHY I Image data commonly stored in files called tiles I Tiles merged to create seamless map I Image catalog should be used for data management I Locates all tiles I Some systems use image pyramid I Consist of series of images sampled at different ground resolutions I Provides rapid image display by automatically loading only those images needed for current view Image pyramid Raster Data Reduced resolution images I 1 Original image V cii iull resolution Fig 115 Construction of an image pyramid by successively averaging groups of 2 x 2 pixels Digital Orthophotography SURE 440 Advanced Photogrammetry IMAGERY BIackandWhite Consists of shades of gray extending from pure white to pure black Versatile Yields excellent resolution Can accommodate large scale enlargements Requires 13rcl storage space of color I May not be that helpful for analysis like vegetation monitoring If used for interpretation analyst needs more training One can overlay thematic information easily IMAGERY I Color film Pictures closely resemble how humans view scene Does not require as much training for interpretation Detail lost in shadows in BW film may still be visible More expensive Requires more storage space how much Digital Orthophotography SURE 440 Advanced Photogrammetry IMAGERY I Color Infrared False Color I Uses nearinfrared portion of EM spectrum I Particularly helpful in delineating differences in vegetation I Radar I Image can be formed under many different weather conditions RELATIONSHIP BETWEEN MAP S LE AND RE L N Photogramme tric Scales and Orthophoto Resolutions Fraction Mean Terrain X X X 2 mile 5 miles X X Section Sections Sections Digital Orthophotography SURE 440 Advanced Photogrammetry RELATIONSHIP BETWEEN MAP S LE AND RE L N Photogrammetric Mapping Scales and Digital Orthophoto Resolutions Fraction X X 2 mile s miles X X Section Sections Sections IMAGE QUALITY I Depends on Camera quality Photo to orthophoto map scale magnification 
Orthophoto diapositive density range bits in scanner scan pixel Sample scan rate Micrometer or dots per inch dpi Rectification procedures Pixel ground resolution pixel size on ground Radiometric image smoothingelectronic auto dodging Selection of control points DEM data density Digital Orthophotography SURE 440 Advanced Photogrammetry ACCU RACY Function of Magnification l Geometric accuracy of scanner Quality of DEM Control Focal length of the taking camera MAGNIFICATION Affects image quality I Recommended range 89 times enlargement 10 times degrades image quality distance between silver crystals on film noticeable Below 5 times no noticeable quality improvement 59 times is optimal range 0 For final orthophoto scale 1 100 photo scale should no be less than 1 900 0 Guides valid for optimum terrain Digital Orthophotography 10 SURE 440 Advanced Photogrammetry RADIOMETRIC RESOLUTION I Ability to discern small tonal changes I Content Standard recommends 8bit binary data for BW and 24bit 3byte data for color I 8bits gives 256 gray levels 0255 I Radiometric corrections that may be appHed I Contrast stretching analog dodging noise filtering destriping edge matching TONE MATCHING I Diapositive or negative scanning I Tone matching between photos complex 0 Changing light conditions from flight line to flight line and from frame to frame I Within single frame hot spots dark to light trends I Most software use 8bit imagery 0 Aerial negative film wider dynamic range 0 Need to compress range of possible values into gray shaded constraints imposed by 8 bits 1 Ortho diapositive produced to specific and restricted density range on electronic auto dodge contact printer 2 Air negatives scanned with wider density range 10 12 bits then software restricts final output by truncating values or running logarithmic function Digital Orthophotography SURE 440 Advanced Photogrammetry Photogrammetric scanner 25000 capable of 5pm scanning accurately SCAN N ER RESOLUTION l Scanner and scan process 
have inherent errors I High precision scanners used Calibrated to ensure performance meets minimum specifications for mapping l Most softcopy instruments capable of adding scanner calibration to program to correct for scanner distortions Digital Orthophotography 12 SURE 440 Advanced Photogrammetry SCAN NER RESOLUTION l Important relationship size of scan pixel to scale of photography and desired output orthophoto scale One suggestion 240 dpi for each magnification range Ex if desired photo to final orthophoto magnification is 5 times 0 Scan photo is 5 X 240 1200 dpi as minimum 0 Using 9 times magnification yields 2160 dpi or roughly 12 micrometers SCAN N ER RESOLUTION Smaller pixel size may give better resolution but not necessarily higher accuracy 0 Accuracy function of survey control flying height focal length pixel size etc Approximately 15 um resolution required to maintain photographic resolution of aerial film 0 2030 pm scan rates common in industry Digital Orthophotography 13 SURE 440 Advanced Photogrammetry PIXEL SIZE GROUND UNITS I Most important factor magnification ratio I Generally better to resample to coarser pixel than finer I Do not scan at 1 and resample to 05 I Rule of thumb resample by multiplication factor of 12 or greater I Scan at 1 finished orthophoto should be at least a 12 pixel PIXEL SIZE GROUND UNITS I Orthophoto rectification should be resampled using cubic convolution resampling process I Subsampling should only be applied within limits defined by the Nyquist theorem I Limits resampling to maximum of 2x I Limit avoids undesirable aliasing Digital Orthophotography 14 SURE 440 Advanced Photogrammetry ACCURACY OF ORTHOPHOTO I Relative accuracy directly related to photo scale I Absolute accuracy related to quality of ground control as well as photo scale I Primary factors for absolute accuracy I Survey control I DEM accuracy ACCURACY OF ORTHOPHOTO I Controlling image to be scanned I For largescale orthophotos control should be surveyed 
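The scan-resolution suggestion above (240 dpi per unit of photo-to-orthophoto magnification, e.g. 5 × 240 = 1200 dpi) and the conversion from dpi to a pixel size in micrometers (25.4 mm per inch) reduce to two one-liners:

```python
def min_scan_dpi(magnification, dpi_per_mag=240):
    """Suggested minimum scan resolution for a given photo-to-orthophoto magnification."""
    return magnification * dpi_per_mag

def dpi_to_micrometers(dpi):
    """Scan pixel size in micrometers for a given dots-per-inch rate."""
    return 25400.0 / dpi
```

At the 9× magnification mentioned above this gives 2160 dpi, i.e. about 11.8 µm per pixel — the "roughly 12 micrometers" quoted in the notes, and finer than the 15 µm needed to hold the photographic resolution of aerial film.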
ground targets 0 More commonly aerotriangulation control used I Effects of deriving control from maps or other inaccurate methods 0 Significant errors introduced 0 Accuracy of orthophoto accordingly degraded Digital Orthophotography 15 SURE 440 Advanced Photogrammetry ACCURACY OF DEM DEM important component I Appropriateness of DEM related to l Scale specification to which the orthophoto is being created I Roughness of terrain I Focal length of aerial camera I Magnification I Creation of DEM possibly most expensive I Density of DEM depends on I Roughness of terrain coarser sampling for flatter terrain I Accuracy required for largescale include break lines I scale EXAMPLE OF EFFECT OF DEM ON ACCURACY Assume producing orthophoto at 1quot100 Shift in placement of welldefined point cannot exceed 150quot at map scale I Corresponds to 2 on ground I If f 6 I DEM errors at extreme corner of 9 format of 225quot results in horizontal error of 2 I If made only from neat model 63 x 72quot DEM error of 30quot at extremity less than 150quot criteria If f 12 acceptable errors in DEM would double I More tolerant of error in DEM than shorter focal length Digital Orthophotography 16 SURE 440 Advanced Photogrammetry ERRORS IN DIGITAL ORTHOPHOTOS Formula expressed as eOrtho eDEM gtlt tanA where eOrtho error in digital orthophoto eDEIVI error in the digital elevation model and u A viewing angle in degrees outward from the center of the photo ORTHOPHOTO DEFECTS Image Completeness If area not adequately covered by DEM image will be inaccurate and digital orthophoto will not be complete Image Stretch Blurring Typical causes Anomaliesspikes in DEM 0 Excessive relief especially near edge of photo Result small amount of information stretched to fill out area required by shift from perspective to orthogonal projection Digital Orthophotography 17 SURE 440 Advanced Photogrammetry I Image Distortions a Distortions along bridge deck due to reliance on regular grid of elevations b Break lines used 
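The error-propagation formula above, e_Ortho = e_DEM × tan A, is a one-liner:

```python
import math

def ortho_error(e_dem, view_angle_deg):
    """Horizontal orthophoto error from a DEM error and the viewing angle A,
    in degrees outward from the center of the photo: e_ortho = e_dem * tan(A)."""
    return e_dem * math.tan(math.radians(view_angle_deg))
```

At A = 45° a DEM error maps one-for-one into horizontal error, while near the photo center (A → 0) the orthophoto is insensitive to DEM error — which is why the earlier example found a longer focal length (smaller A at the format edge) more tolerant of DEM error.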
to give more faithful rendition of terrain ORTHOPHOTO DEFECTS I Double Image I When adjacent orthophotos compared and the same features are shown on both should not occur I Caused by c Improper orientation of c0ntrol Inaccurate DEM where elevations are higher than reality I Missing Image I Identified by missing sections of linear sections I Same cause as double image except DEM is under representing real ground elevations Digital Orthophotography 18 SURE 440 Advanced Photogrammetry ORTHOPHOTO DEFECTS I Inaccurate Planimetry I Planimetric positions of pixels in error I Look at control I Image Replication I Problems in tone in digital imagery varies depending on different computing environments ORTHOPHOTO DEFECTS SCRATCH ES Digital Orthophotography 19 SURE 440 Advanced Photogrammetry ORTHOPHOTO DEFECTS I Discrepancies found by overlaying digital file over orthophoto I Alignment problems between two digital images I More noticeable with linear features Digital Orthophotography 20 SURE 440 Advanced Photogrammetry Effects of relief displacem ent caused part of tank to be cut out Optical artifact TO DEFECTS y 39 x 4 47quot I Example of l tone differences specular reflector and missing data black line due to operator error Digital Orthophotography 21 SURE 440 Advanced Photogrammetry ORTHOPHOTO DEFECTS DIRT AND HAIR EFFECTS Shadow and time of day Digital Orthophotography 22 SURE 440 Advanced Photogrammetry NATIONAL DIGITAL ORTHOPHOTO PROGRAM I Purpose provide national coverage of digital orthophoto quadrangle DOQ data I Photography 140000 National Aerial Photography Program NAPP photos or comparable I Scanned at 25 um resolution I Blackwhite color IR or natural color I Free at httpwwwmichigangovcgi NATIONAL DIGITAL ORTHOPHOTO PROGRAM I Characteristics I Data consist of ASCII header followed by 8bit binary image data I Radiometric image brightness data 256 gray levels I Ground sample distance 1 meter I Geographic extent 375 X 375 p 9 extent 0 Min of 50 meters and max 300 
meters overedge to encompass primary and secondary horizontal datum corner points Digital Orthophotography SURE 440 Advanced Photogrammetry NATIONAL DIGITAL ORTHOPHOTO PROGRAM I Characteristics continued I Use UTM projection on NAD83 datum with coordinates in meters I Ordering of data by lines rows and samples columns from west to east and from north to south I Four primary datum corners imprinted as solid white crosses and four secondary datum corners as dashed white crosses CONTENT STANDARD FOR DIGITAL ORTHOIMAGERY I Digital orthoimage georeferenced image prepared from perspective photo or other remote sensing data I Distortion due to sensor orientation and terrain relief removed I Composed of array of georeferenced pixels that encode reflectance at a discrete value Digital Orthophotography 24 SURE 440 Advanced Photogrammetry CONTENT STANDARD FOR DIGITAL ORTHOIMAGERY Digital structure I Resolution Rows lines amp Pixel ground columns samples resolution defines Ordered from top ground represented to bottom left to In eaCh DIXEI right Radiometric Start at 00 resolution defines sensitivity of detector to differences in wavelength File has equal record lengths Rectangular or squared image CONTENT STANDARD FOR DIGITAL ORTHOIMAGERY I Accuracy Shall employ National Standard for Spatial Data Accuracy NSSDA Uses rmse to estimate positional accuracy 0 Accuracy reported at 95 confidence level 0 Accuracy reflects uncertainties including geodetic control compilation and final computations No threshold defined producers may use threshold like National Map Accuracy Standard 0 Will be reported according to NSSDA Digital Orthophotography SURE 440 Advanced Photogrammetry CONTENT STANDARD FOR DIGITAL ORTHOIMAGERY Geometric correction All systematic and random errors removed to extent required to meet accuracy requirements Distortions may be 0 Systematic predictable errors that follow some definite mathematical or physical law or pattern 0 Random errors due on to chance and do not 
recur Most orthoimagery errors random CONTENT STANDARD FOR DIGITAL ORTHOIMAGERY Common Y to fit Image orientation values to fit map geo location values Digital Orthophotography 26 PROJECTIVE EQUATIONS 5 Surveying Engineering Department Ferris State University Intro du cti0n In the rst section we were introduced to coordinate transformations The numerical resection problem involves the transformation rotation and translation of the ground coordinates to photo coordinates for comparison purposes in the least squares adjustment Before we begin this process lets derive the rotation matrix that will be used to form the collinearity condition In photogrammetry the coordinates of the points imaged on the photograph are determined through observations The next procedure is to compare these photo coordinates with the ground coordinates On the photograph the positive xaxis is taken in the direction of ight For any number of reasons this will most probably never coincide with the ground Xaxis The origin of the photographic coordinates is at the principal point which can be expressed as X x x0 Y yn Z f where xy are the photo coordinates of the imaged point with reference to the intersection of the fiducial axes xy0 are the coordinates from the intersection of the fiducial axes to the principal point f is the focal length Since the origin of the ground coordinates does not coincide with the origin of the photographic coordinate system a translation is necessary We can write this as L r amp xx Y1 Y Y z1 zzL where X Y Z are the ground coordinates of the point XL YL ZL are the ground coordinates of the ground nadir point mean1r EllwogrwnmeTmlnlm Prejectlve Equatlons Page 2 Thus in the comparison both ground coordinates and photo coordinates are referenced to the same origin separated only by the ying height Note that the ground nadir coordinates would correspond to the principal point coordinates in X and Y if the photograph was truly vertical Direction Cosines If we look at 
figure 1 we can see that point P has coordinates Xp Yp Zp The length of the vector distance can be de ned as Z Figure 1 Vector OP in 3D space 1 OP x3 sz z3F The direction of the vector can be written with respect to the 3 axes as XP 00sec OP YP cos 6 OP ZP cos y OP These cosines are called the direction cosines ofthe vector from O to P This concept can be extended to any line in space For example gure 2 shows the line PQ Here we can readily see that the vector PQ can be de ned as The 6er 39 mmmc Tmlnlm Prejectlve Equatlons Page 3 p XP PQ YQ Y1 QP Z Q Z1 The length of the vector becomes PQ XQ XP2 YQ YP2 ZQ ZPH and the direction cosines are Z Y O X W X Figure 2 Line vector PQ in space X COS X Q P PQ Y Y cos Q P PQ ZQ Z1 cos y PQ If we look at the unit vector as shown in figure 3 one can see that the vector from O to P can be de ned as The 6er lmmmc Tmlnlm Prejectwe Equat1ons Page 4 xiyj21 Z Figure 3 Unit vectors and the point P has coordinates x y zT Given a second set of coordinates axes I J K one can write similar relationships for the same point P Each coordinate axes has an angular gtlt relationship to each of the i j k coordinate axes For example gure 4 shows the relationship between 3 and The angle between the axes is de ned as N xY Since i has similar angles to the other two axes J 7777777777 7 Y one can write the unit vector in terms 0f the direCtion COSineS Figure 4 Rotation between Y and x axes as Similarly we have for and E The 6er l0 Fhmmmc In Proj ective Equations Page 5 cosyX 005ZX 3 005yY R coszY cosyZ 005ZZ Then the vector from O to P can be written as cosXX cosyX coszX OPX cosXY y cosyY z coszY cosXZ cosyZ coszZ This can be written more generally as X RX To solve these unknowns using only three angles 6 orthogonal conditions must be applied to the rotation matrix R All vectors must have a length of l and any combination of the two must be orthogonal Novak 1993 Thus designating R as three column vectors R r1 r2 r3 we have Sequential Rotations 
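The direction-cosine relations above are easy to exercise numerically. A minimal sketch for the vector from O to P, whose three cosines must satisfy cos²α + cos²β + cos²γ = 1:

```python
import math

def direction_cosines(x, y, z):
    """Length of the vector OP and its direction cosines
    (cos alpha, cos beta, cos gamma) with respect to the X, Y, Z axes."""
    length = math.sqrt(x * x + y * y + z * z)
    return length, (x / length, y / length, z / length)
```

The same function applied to the component differences (XQ − XP, YQ − YP, ZQ − ZP) gives the length and direction cosines of any line PQ in space.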
Combination Axes of Rotation 1Roll03 Pitch p Yaw K X y z 2 Pitch p Roll 0 Yaw K y X 3 Heading H Roll 0 Pitch p z X 4 Heading H Pitch p Roll 0 z y Z X Z X 5 Azimuth 0c Tilt t Swing s 6 Azimuth 0c Elevation h Swing s NNXquotltN Table 1 Rotation combinations The 6er lmmmc Tmlnlm Prejectrve Equatrons Page 6 Applying three sequential rotations about three different axes forms the rotation matrix Doyle 1981 identi es a series of different combinations These are shown in Table l and they all presume a local space coordinate system Roll 03 is a rotation about the xaxis where a positive rotation moves the yaxis in the direction of the zaxis Pitch p is a rotation about the yaxis When the zaxis is moved towards the xaxis then the rotation is positive A rotation about the zaxis is called yaw K with a positive rotation occurring when the xaxis is rotated towards the yaxis All of these angles have a range from 180 to 180 Heading H is a clockwise rotation about the Zaxis from the Yaxis to the Xaxis Azimuth 0c is a clockwise rotation about the Zaxis from the Yaxis to the principal plane Tilt t is a rotation about the xaxis and is de ned as the angle between the camera axis and the nadir or Zaxis This rotation is positive when the xaxis is moved towards the zaxis Swing is a clockwise angle in the plane of the photograph measured about the zaxis from the yaxis to the nadir side of the principal line Heading azimuth and swing have a range from 0 to 360 while the tilt angle will vary between 0 to 180 Finally elevation h is a rotation in the vertical plane about the xaxis from the XY plane to the camera axis The rotation is positive when the camera axis is above the XY plane The combinations 1 and 2 are frequently used in stereoplotters while 3 and 4 are common in navigation Professor Earl Church developed 5 in his photogrammetric research whereas the ballistic cameras often used the 6th combination Derivation of the Gimbal Angles For a physical interpretation of the rotation matrix 
written in terms of the directions cosines we can look at the planar rotations of the axes in sequence In the rst section we saw that the coordinate transformation can be written in the following form X1 cosoc sinoc UP Y1 sin0c cosoc VP In the photogrammetric approach we rotate the ground coordinates to a photo parallel system This involves three rotations n primary q secondary and K tertiary If we look at the to rotation about the X1 axis we should realize that the Xcoordinate does not change but the Y and Z coordinates do change figure 5 Moreover the new values for Y and Z are not affected by the Xcoordinate Thus one can write The 6er 39Fuh39owgrimmc Tmlnlm Projective Equations Page 7 Off Rotatkna agt47 Rotatna Kquot RotatRJH about X1 Zozt 0 W about Y2 about Z3 Y Y ZSZ2 3 X2 K X1 X3 Figure 5 Rotation angles in photogrammetry or in matrix form or more concisely X2 X1 Y10Z10 Y2 X10Y1cosoaZ1sinn Z2 X10Y1 sinoaZ1cosoa X2 1 0 0 X1 Y2 0 cosoa sinoa Y1 Z2 0 sinoa cosoa Z1 C2 MwC The next rotation is a p rotation about the once rotated Yzaxis One can write or in matrix form X3 X2 COSpY2 0Z2 sinp Y3 X20Y2Z20 Z3 X2 sinqY2 0Z2 cosq X3 cos q 0 sin q X2 Y3 0 1 0 Y2 Z3 sinq 0 cosq Z2 The 6er lmmmc Tmlnlm Prejectlve Equatlons Page 8 or more concisely C3 M qCZ Finally we have the Krotation about the twicerotated Z3aXis see gure 5 This becomes X X3cOSKY3sinKZ30 Y39X3 sinKY3cOSKZ3 0 Z X30Y30Z3 which in matrix form is X cos K sin K 0 X3 Y sin K cos K 0 Y3 Z 0 0 l Z3 or more concisely as C M KC3 Thus the transformation from the survey parallel X1 Y1 Z1 system is shown as X X1 X1 Y MG Y1 MKM Mw Y1 2 z1 z1 Performing the multiplication the elements of MG are shown as COSpCOSK COSOJSanSlIIOJSlnpCOSK sinoasinK cosoasintpcOSK MG cosqsinK coschSK sinmsintpsinK sinoacOSKcosoasintpsinK sinq sinncosq cosmcosq If the rotation matrix is known then the angles K p n can be computed as Doyle 1981 The 6er In Flimmmm Tmlnlm Prejectlve Equatlons Page 9 In tan n 32 m33 sincl m31 In tan K 21 If the 
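The MG element formulas and the angle-recovery relations just given (tan ω = −m32/m33, sin φ = m31, tan κ = −m21/m11) can be checked by a numeric round trip; a minimal sketch with angles in radians:

```python
import math

def rotation_matrix(omega, phi, kappa):
    """M_G = M_kappa * M_phi * M_omega for the omega-phi-kappa sequence."""
    cw, sw = math.cos(omega), math.sin(omega)
    cp, sp = math.cos(phi), math.sin(phi)
    ck, sk = math.cos(kappa), math.sin(kappa)
    return [
        [cp * ck,  cw * sk + sw * sp * ck,  sw * sk - cw * sp * ck],
        [-cp * sk, cw * ck - sw * sp * sk,  sw * ck + cw * sp * sk],
        [sp,       -sw * cp,                cw * cp],
    ]

def angles_from_matrix(m):
    """Recover (omega, phi, kappa) from the rotation matrix:
    tan w = -m32/m33, sin p = m31, tan k = -m21/m11."""
    omega = math.atan2(-m[2][1], m[2][2])
    phi = math.asin(m[2][0])
    kappa = math.atan2(-m[1][0], m[0][0])
    return omega, phi, kappa
```

Using atan2 rather than a bare tangent keeps the recovered ω and κ in the correct quadrant over the full −180° to 180° range quoted for these angles.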
socalled Church angles t s 0c are being used then the rotation matrix can be derived in a similar fashion The values for M are cosscos0c costsin0csins cosssinoc costcosocsins sintsins M sinscosoc costsinoccoss sinssin0c costcos0ccoss sintcoss sintsinoc sintcosoc cost If the rotation matrix is known then the Church angles can be found using the following relationships Doyle 1981 m tanoc 31 m32 39 2 2 2 2 cost m33 or s1nt 1lmm m32 1mu m23 m tans A m The collinearity concept means that the line form object space to the perspective center is the same as the line from the perspective center to the image point figure 6 The only difference is a scale factor Since the comparison is performed in image space the object space coordinates are rotated into a parallel coordinate system This relationship can be written as kMX Recall that we wrote two basic equations relating the location of a point in the photo coordinate system and ground nadir position r X X Xo X1 X X Y y y0 and Y1 Y Y Z f Z1 Z ZL r The 6er lmmmc Tmlnlm Prejectwe Equat1ons Page 10 Figure 6 Collinearity condition Then XXo mu In12 In13 XXL y yo k In21 In22 In23 YYL f In31 In32 In33 Z ZL where k is the scale factor This equation takes the ground coordinates and translates them to the ground nadir position The rotation matrix Mg takes those translated coordinates and rotates them into a system that is parallel to the photograph Finally these coordinates are scaled to the photograph The result is the predicted photo coordinates of the ground points given the exposure station coordinates XL YL ZL and the tilt that exists in the photography K p D If we express this last equation algebraically then we have XXo km11XXLm12YYLm13ZZL y yo km21X XLmZZYYLm23ZZL fkm31X XLm3 2Y YLm33ZZL To eliminate the unknown scale factor diVide the rst two equations by the third Thus The 6er lmmmc Tmlnlm Prejectlve Equatlons Page 11 Otto von Gruber first introduced this equation in 1930 This equation must satisfy two conditions Novak 1993 
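Dividing the first two projective equations by the third eliminates the scale factor k and leaves x = x0 − f·U/W, y = y0 − f·V/W, where U, V, W are the three rotated ground-coordinate differences. A sketch of that projection (the test uses the identity rotation, i.e. a truly vertical photograph, for which the result reduces to the familiar scale relation f/(H − h)):

```python
def collinearity(ground, exposure, m, f, x0=0.0, y0=0.0):
    """Photo coordinates of a ground point (X, Y, Z) for an exposure station
    (XL, YL, ZL), rotation matrix m, and focal length f:
        x = x0 - f*U/W,  y = y0 - f*V/W
    where (U, V, W) = m * (X-XL, Y-YL, Z-ZL)."""
    dX = [g - e for g, e in zip(ground, exposure)]
    U, V, W = (sum(m[i][j] * dX[j] for j in range(3)) for i in range(3))
    return x0 - f * U / W, y0 - f * V / W
```

In the test, f = 150 mm and a flying height of 1500 m above the point give a scale of 1/10,000 of a meter per millimeter, so a point 100 m from the nadir images 10 mm from the principal point.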
In1111112 In21m22 In31m32 0 2 2 2 2 2 2 m11m21m31m12 m22 m32 If we look at the equation for MG above lets see if the first condition is met mum12 m21m22 m31m32 cos pcos Kcos to sin K sin oasin pcos K cos psin Kcos n cos K sin suintpsin K cos psin psin n cos pCOS Ksin Kcos n cos psin pcos2 Ksin n cos pCOS Ksin Kcos n cos psin psin2 Ksin n cos psin psin n cos pcos Ksin Kcos n cos psin psin 03cos 2 K sin2 K COSpCOS Ksin Kcos n cos psin psin n 0 Thus the first condition is met For the second constraint lets first look at the left hand side of the equation 2 2 2 2 2 2 39 2 39 2 m11m21m31 cos pcos Kcos psm Ks1n p cos2 qcos2 Ksin2 Ksin2 q 1 The right side of the equation becomes 2 2 2 2 2 2 2 2 m12m22m32cos oasm K2s1ntpcosKs1anosoas1noas1n pcos Ks1n n 2 2 2 2 cos Kcos D 281npCOSKSIIlKCOSOJSIIIOJSIII psm Ks1n n 2 2 cos psm n 2 2 2 2 2 2 2 2 2 2 s1n Kcos oas1n pcos Ks1n oacos Kcos oas1n psm Ks1n n 2 2 cos psm n 2 2 2 2 2 2 2 s1n psm 03cos Ks1n KCOS oacos psm n 2 2 2 2 s1n sm pcos pcos 031 Thus both sides of the equation equal are equal to one and to each other Since X 7 XL Y 7 YL and Z 7 ZL are proportional to the direction cosines of X these equations can also be presented as Doyle 1981 The 6er lmmmc Tmlnlm Prejectlve Equatlons Page 12 XX f m11 cosocm12 cos m13cosy o m31 cos at m32 cos 5 m33 cosy yy f m21 cos0cm22 cos m23 cosy o m31 cosoc m32 cos m33 cosy Here cos 0c cos 5 and cos y are the direction cosines of A The inverse relationship is These equations are referred to as the collinearity equations It would be interesting to see how these equations stand up to the basic principles learned in basic photogrammetry Recall that for a truly vertical photograph that the scale at a point can be written using 3 H h X Here we assumed that the principal point coincided with the indicated principal point and that the X and Y ground coordinates were related to the origin being at the nadir point with the XaXis coinciding with the line from opposite ducials in the ight direction If 
we look at the collinearity equations the rotation matrix for a truly vertical photo would be the identity matrix Thus M Vert l 0 0 0 l 0 0 0 1 Then the projective equations become x xokX XL yyokY YL fk ZJ The 6er lmmmc Tmlnlm Prejectlve Equatlons Page 13 If we further assume that the principal point is located at the intersection of opposite fiducials and if we substitute H for ZL and h for Z then xkX XL ykYYL fkH h Dividing the first two equations by the third ad manipulating the equation yields the identical scale relationships given in basic photogrammetry LINEARIZATION OF THE COLLINEARITY EQUATION The linearization of the collinearity equations are given in a number of different textbooks The developments presented here follow that outlined by Doyle 1981 For simplicity lets define the projective equations in the following form U F1 x x0fW0 v F2 yyofW0 where U and V are the numerators in the projective equations given earlier and W is the denominator From adjustments we know that the general form of the condition equations can be written as AVBAF0 The deign matrix B is found by taking the partial derivative of the projective equations with respect to the parameters Thus it will appear as 6F1 6F1 6F1 6F1 6F1 6F1 6F1 6F1 6F1 6F1 6F1 6F1 6y0 6f BXL 6YL 6ZL 6m q 6K BXi 6Yi 6Zi 6F 6F 6F 6F 6F 6F 6F 6F 6F 6F 6F 6F 6x0 dye araxL aYL azL 603 am am axi avi azi The first section contains the partial derivatives with respect to the interior orientation the second group are the partials with respect to the exterior orientation and the third The 6er In Fhotonrwnnm Tmlnlm Projective Equations Page 14 group are the partials with respect to the ground coordinates The partial derivatives of the interior orientation X0 yo and f only are very basic 6110 E 6y 6f W 6124 21 6yo 6f W For the partial derivatives taken with respect to the exposure station coordinates we will use the following general differentiation formulas W U 6 6F1 6P f av an f 2 6P legj f av vaw 6P W6P where P are 
the parameters For the exposure station coordinates XL YL ZL the partial derivatives of the functions U V and W become 6U m 6XL 11 6V m 6XL 21 5W m 6XL 31 6U m 6U m aYL 12 azL 13 6V m 5V m aYL 22 azL 23 5W m E m aYL 32 azL 33 Then the partial derivatives of the functions F1 and F2 can be shown to be The 6er Ellwogmnlrme Tmlnlm Projective Equations Page 15 6F1 f U an f m11 m31 m21 m31 6XL W W 6XL W 6131 f 6132 f aYL W miz msz aYL W mzz m32 6F1 f U an f v m13 m33 m23 m33 6ZL W W 6ZL W W Recall that the rotation matrix is given in the sequential form as M G M KM qM m then the partial derivatives of the orientation matrix with respect to the angles can be shown to be 6M 6M 0 0 0 G MKM MG l 603 603 l 0 6M 6M 0 sinoa cosoa a G MK6 l MwMG sinoa 0 0 p p cosoa 0 0 6M 6M 0 1 0 G K Mwa l 0 0 MG 6K 6K 0 0 0 Then the partial derivatives of the functions U V and W taken with respect to the orientation angles becomes XL Y r X MG Yi w zi z L Yielding The 6er lmmmc Tmlnlm Projective Equations Page 16 6 U 60 Xi XL 6M 6V G Yi YL 60 60 6w zi ZL 6m 6 U 61 Xi XL 6V 6MG Yi YL 61 61 6W Zi ZL 61 6 U 6K Xi XL 6M 6V G Yi YL 6K 6 6W Z ZL 6K Now one can evaluate the partial derivatives of F1 and F2 with respect to the orientation angles 6F1 f 6U U 6W 60 W 1 6F1 6U U6W W 61 W61 6if 6U U6W 6K W
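The linearization partials above are easy to validate against a finite difference. A sketch checking ∂F1/∂XL = −(f/W)[m11 − (U/W)m31] for the condition F1 = x − x0 + f·U/W; to stay self-contained it hard-codes a vertical-photo geometry (identity rotation) with hypothetical coordinates:

```python
def F1(XL, ground, m, f, x, x0=0.0):
    """Collinearity condition F1 = x - x0 + f*U/W for one ground point."""
    dX = [ground[j] - XL[j] for j in range(3)]
    U = sum(m[0][j] * dX[j] for j in range(3))
    W = sum(m[2][j] * dX[j] for j in range(3))
    return x - x0 + f * U / W

def dF1_dXL(XL, ground, m, f):
    """Analytic partial: dF1/dXL = -(f/W) * (m11 - (U/W)*m31)."""
    dX = [ground[j] - XL[j] for j in range(3)]
    U = sum(m[0][j] * dX[j] for j in range(3))
    W = sum(m[2][j] * dX[j] for j in range(3))
    return -(f / W) * (m[0][0] - (U / W) * m[2][0])

# Central-difference check on a vertical photo (identity rotation):
I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
XL, P, f = [0.0, 0.0, 1500.0], [100.0, -50.0, 0.0], 150.0
eps = 1e-4
num = (F1([XL[0] + eps, XL[1], XL[2]], P, I, f, 0.0)
       - F1([XL[0] - eps, XL[1], XL[2]], P, I, f, 0.0)) / (2 * eps)
ana = dF1_dXL(XL, P, I, f)
```

The same pattern — perturb one parameter, difference the condition equation, compare with the analytic expression — is a standard sanity check when coding the full design matrix B for a bundle adjustment.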