GNR607 Principles of Satellite Image Processing
Instructor: Prof. B. Krishna Mohan, CSRE, IIT Bombay (bkmohan@csre.iitb.ac.in)
Slot 4, Lectures 21-23: Image Corrections
Sept. 15-18, 2014; 9.30-10.25 AM, 10.35-11.30 AM, 11.35 AM-12.30 PM
Slide 47: Computation of Spatial Transformation
• The first-order affine transformation is adequate to account for several forms of distortion:
  – Skew
  – Rotation
  – Scale changes in the x and y directions
  – Translation in the x and y directions
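The affine coefficients are typically estimated from the GCP pairs by least squares. A minimal sketch in Python/NumPy, assuming the GCPs are supplied as (N, 2) arrays of image and map coordinates; the helper names fit_affine and apply_affine are illustrative, not from the lecture:

```python
# Sketch: least-squares fit of the first-order affine transformation
#   x' = a*x + b*y + c,   y' = d*x + e*y + f
# from ground control points (at least 3 GCPs are required).
import numpy as np

def fit_affine(xy_image, xy_map):
    x, y = xy_image[:, 0], xy_image[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])               # design matrix [x y 1]
    coef_x, *_ = np.linalg.lstsq(A, xy_map[:, 0], rcond=None)  # a, b, c
    coef_y, *_ = np.linalg.lstsq(A, xy_map[:, 1], rcond=None)  # d, e, f
    return coef_x, coef_y

def apply_affine(coef_x, coef_y, x, y):
    """Transform image coordinates (x, y) into map coordinates (x', y')."""
    a, b, c = coef_x
    d, e, f = coef_y
    return a * x + b * y + c, d * x + e * y + f
```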
Slide 46: Control Point Selection
[Figure: control point selection example. Reproduced with permission from the lecture notes of Prof. John Jensen, University of South Carolina]
Slide 45: Sources of Ground Control Points
• GCPs are obtained from:
  – Survey of India topographic maps (digital or paper) at 1:25,000 or 1:50,000 scale
  – Other maps with ground reference
  – Global Positioning System (GPS) measurements
• It is important to choose GCPs that are invariant with time, since the map and the image are often acquired years apart
Slide 48: Computation of Spatial Transformation
• Given a map reference, the pixel size is defined such that, after geometric correction, the image aligns with the map reference at a pixel size chosen by the user
• Note that the pixel size after geometric correction may be chosen to differ from the pixel size at which the satellite acquired the image
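As a small illustration of this choice, a sketch (variable names assumed, not from the lecture) that sizes the output grid from the map extent covered by the image and a user-chosen pixel size:

```python
# Sketch: size of the geometrically corrected (map-aligned) output grid,
# given the map extent covered by the image and a user-chosen pixel size.
import math

def output_grid(x_min, x_max, y_min, y_max, pixel_size):
    n_cols = math.ceil((x_max - x_min) / pixel_size)
    n_rows = math.ceil((y_max - y_min) / pixel_size)
    return n_rows, n_cols
```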
Slide 49: Spatial Transformation
[Figure: spatial transformation. Reproduced with permission from the lecture notes of Prof. John Jensen, University of South Carolina]
Slide 50: Errors in Transformation
• If the selected GCPs are in error, the transformation maps the image points onto the reference inaccurately. The error can be measured in terms of the Root Mean Squared (RMS) error:

$$\text{RMS error} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left[(x'_{orig,i}-x'_{comp,i})^2+(y'_{orig,i}-y'_{comp,i})^2\right]}$$
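A small sketch of this check, reusing the hypothetical fit_affine / apply_affine helpers from the earlier sketch:

```python
# Sketch: RMS error of the fitted transformation over the GCPs
# (uses the illustrative apply_affine helper defined earlier).
import numpy as np

def rms_error(coef_x, coef_y, xy_image, xy_map):
    x_comp, y_comp = apply_affine(coef_x, coef_y, xy_image[:, 0], xy_image[:, 1])
    sq_err = (xy_map[:, 0] - x_comp) ** 2 + (xy_map[:, 1] - y_comp) ** 2  # per-point squared error
    return np.sqrt(sq_err.mean())
```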
Slide 51: Effect of Errors in Transformation Coefficients
• The error for each point is given by

$$\sqrt{(x'_{orig}-x'_{comp})^2+(y'_{orig}-y'_{comp})^2}$$

• It is common to initially select more GCPs than required and retain those that result in the smallest RMS error
Slide 52: Higher Order Transformations
• Sometimes the first-order affine transformation may not accurately transform the image onto the map, in which case one can choose a higher-order polynomial transformation such as

$$x' = a_1 x^2 + b_1 xy + c_1 y^2 + d_1 x + e_1 y + f_1$$
$$y' = a_2 x^2 + b_2 xy + c_2 y^2 + d_2 x + e_2 y + f_2$$

• The number of coefficients varies with the order of the transformation, and accordingly the minimum number of GCPs also varies. Commercial products support 1st- to 5th-order transformations.
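A sketch of the corresponding least-squares fit for the second-order case shown above (at least 6 GCPs are needed; names are illustrative):

```python
# Sketch: least-squares fit of the 2nd-order polynomial transformation above.
import numpy as np

def fit_poly2(xy_image, xy_map):
    x, y = xy_image[:, 0], xy_image[:, 1]
    # design matrix with the six terms x^2, xy, y^2, x, y, 1
    A = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    coef_x, *_ = np.linalg.lstsq(A, xy_map[:, 0], rcond=None)  # a1, b1, c1, d1, e1, f1
    coef_y, *_ = np.linalg.lstsq(A, xy_map[:, 1], rcond=None)  # a2, b2, c2, d2, e2, f2
    return coef_x, coef_y
```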
Slide 53: Resampling or Intensity Interpolation
• The transformation is applied in one of two ways:
  – Forward mapping (input-to-output mapping): for every pixel in the input image, find the corresponding location in the reference map according to the determined transformation
  – Reverse mapping (output-to-input mapping): for every pixel in the output frame, find the corresponding location in the input image according to the determined transformation
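A skeleton of the reverse-mapping approach, written as a sketch: out_to_in stands for the fitted spatial transformation and interpolate for one of the intensity interpolation methods discussed below; both names are assumptions.

```python
# Sketch: reverse (output-to-input) mapping. Every output pixel is projected
# back into the input image and filled by intensity interpolation.
import numpy as np

def rectify(input_img, out_shape, out_to_in, interpolate):
    out = np.zeros(out_shape, dtype=input_img.dtype)
    n_rows, n_cols = input_img.shape
    for r in range(out_shape[0]):
        for c in range(out_shape[1]):
            x, y = out_to_in(r, c)                        # spatial transformation
            if 0 <= x <= n_cols - 1 and 0 <= y <= n_rows - 1:
                out[r, c] = interpolate(input_img, x, y)  # intensity interpolation
    return out
```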
Slide 54: Intensity Transformation
[Figure: intensity interpolation example. Reproduced with permission from the lecture notes of Prof. John Jensen, University of South Carolina]
Slide 55: Intensity Interpolation
• In this phase, gray level values are computed for the transformed pixels, since they now lie at locations different from those where the reflected energy was originally measured
• This step involves intensity interpolation, since the computed values are weighted averages of the existing measured values
Slide 56: Interpolation Strategy
• It is more convenient to use reverse mapping (output-to-input mapping) when geometrically correcting multispectral images
• The reference frame can be assigned a given pixel size, and each of its pixels can then be located in the input image through the spatial transformation
Slide 57: Intensity Interpolation
[Diagram: pixels of the reference frame overlaid on the image to be corrected]
Slide 58: Nearest Neighbor Interpolation
Standard interpolation methods:
• Nearest neighbor
• Bilinear interpolation
• Higher-order interpolation (bicubic)
[Diagram: point P surrounded by the four known pixels A, B, C and D]
Slide 59: Nearest Neighbor Interpolation
• P is the location to which a point from the reference frame gets transformed
• Measured values exist at A, B, C and D
• Let D_AP be the distance of P from A, and likewise D_BP, D_CP and D_DP
• In nearest neighbor interpolation, P is assigned the value of the element K ∈ {A, B, C, D} for which D_KP = min{D_AP, D_BP, D_CP, D_DP}
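A minimal sketch of nearest neighbor interpolation at a back-projected location (x, y), with x along columns and y along rows; the function name and indexing convention are assumptions:

```python
# Sketch: nearest neighbor interpolation - copy the value of the closest
# known pixel, so no new gray levels are introduced.
def nearest_neighbour(img, x, y):
    row, col = int(round(y)), int(round(x))
    return img[row, col]
```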
Slide 60: Issues in NN Interpolation
• Fastest to compute
• No new values are introduced: only the values recorded by the sensor are retained
• Renders the image blocky when resampling from a large pixel size to a much smaller pixel size, e.g., resampling an IRS-1D LISS-III image to a 1-metre pixel size
Slide 61: Bilinear Interpolation
• In contrast to nearest neighbor interpolation, all four known points are employed in estimating the value at the unknown point
• The weights assigned to the four points depend on the proximity of the unknown point to these known points
Slide 62: Bilinear Interpolation Principle
[Diagram: point P surrounded by the known pixels A, B, C and D; d(C,P) marks the distance from C to P]
Slide 63: Bilinear Interpolation
• Denoting the estimated gray level at point P by f(P), and the known values by f(A), f(B), f(C) and f(D),

$$f(P) = \frac{w_A f(A) + w_B f(B) + w_C f(C) + w_D f(D)}{w_A + w_B + w_C + w_D}$$

• The weight w_A = 1/d(A,P), where d(A,P) is the distance between point A and point P; w_B, w_C and w_D are defined likewise
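A sketch of this four-point weighted average, implemented exactly as written above with inverse-distance weights; the function name and array indexing convention are assumptions:

```python
# Sketch: weighted average of the four surrounding pixels with weights 1/d(., P).
import numpy as np

def four_point_weighted(img, x, y):
    c0, r0 = int(np.floor(x)), int(np.floor(y))  # top-left known pixel
    num, den = 0.0, 0.0
    for r in (r0, r0 + 1):
        for c in (c0, c0 + 1):
            d = np.hypot(x - c, y - r)            # distance from P to the known pixel
            if d == 0.0:                          # P coincides with a known pixel
                return float(img[r, c])
            w = 1.0 / d                           # w_A = 1/d(A,P), etc.
            num += w * img[r, c]
            den += w
    return num / den
```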
Slide 64: Cubic Convolution
• Using a bigger neighborhood to estimate the pixel gray level gives a smoother image, since local differences are averaged out
• For the location (X_R, Y_R) within pixel (i, j), marked by the colored circle in the diagram, the neighboring 16 elements are employed
[Diagram: 4 x 4 grid of pixels surrounding the location (X_R, Y_R)]
Slide 65: Cubic Convolution Technique
• The estimated value at location (X_R, Y_R) is given by

$$V_R = \sum_{n=1}^{4}\Big[\, V(i-1,\,j+n-2)\,f\big(d(i-1,\,j+n-2)+1\big) + V(i,\,j+n-2)\,f\big(d(i,\,j+n-2)\big) + V(i+1,\,j+n-2)\,f\big(d(i+1,\,j+n-2)-1\big) + V(i+2,\,j+n-2)\,f\big(d(i+2,\,j+n-2)-2\big) \Big]$$

where V(m,n) is the value of the pixel at location (m,n), f(x) is the weight function, and d(x,y) is the (Euclidean) distance between pixels x and y
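The slides do not spell out the weight function f; a common choice is the Keys cubic kernel. A separable sketch under that assumption, not necessarily the exact formulation used in the lecture:

```python
# Sketch: cubic convolution over the 4x4 neighborhood, assuming the Keys
# cubic kernel (a = -0.5) as the weight function f.
import numpy as np

def cubic_kernel(t, a=-0.5):
    t = abs(t)
    if t < 1:
        return (a + 2) * t**3 - (a + 3) * t**2 + 1
    if t < 2:
        return a * t**3 - 5 * a * t**2 + 8 * a * t - 4 * a
    return 0.0

def cubic_convolution(img, x, y):
    j, i = int(np.floor(x)), int(np.floor(y))  # pixel (i, j): row i, column j
    dx, dy = x - j, y - i                      # fractional offsets within the pixel
    value = 0.0
    for m in range(-1, 3):                     # rows i-1 .. i+2
        for n in range(-1, 3):                 # columns j-1 .. j+2
            value += img[i + m, j + n] * cubic_kernel(m - dy) * cubic_kernel(n - dx)
    return value
```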