4 Pre Processing of Satellite Images
Swati Katiyar
Learning Outcome
- Students will acquire an understanding of the processing of satellite data.
- Students will acquire the skill to analyse errors that can occur at the time of registration of data, the sources of these errors, and how to remove them.
- Students will be equipped with the knowledge to study geometric and radiometric corrections further.
Outline
- Remote sensing data suffer from a variety of radiometric and geometric errors.
- These errors diminish the accuracy of the information extracted and thereby reduce the utility of the data.
- Image Registration/Preprocessing involves removal of distortions, degradation and noise.
- Types of error: internal and external.
- Geometric and Radiometric corrections.
Introduction:
A digital remotely sensed image is typically composed of picture elements (pixels), each having a Digital Number (DN) or Brightness Value (BV) and located at the intersection of row i and column j in band k of the imagery. A small number indicates low radiance from the area, while a high number indicates high radiance.
Raw digital images usually contain distortions and various types of error, so they cannot be used directly as a map without proper processing. Sources of these distortions range from variations in the altitude and velocity of the sensor to Earth rotation and curvature. Rectification operations aim to correct distorted or degraded image data to create a faithful representation of the original scene: the raw image data are processed to correct geometric distortion, to calibrate the data radiometrically, and to eliminate noise present in the data. Image rectification and restoration procedures are often termed preprocessing operations because they normally precede the manipulation and analysis of image data. Enhancement procedures are then applied to display the data effectively for subsequent visual interpretation. Finally, classification categorizes all pixels in a digital image into one of several land cover classes or themes; the classified data may be used to produce thematic maps of the land cover present in an image.
Types of Error
Internal errors/Systematic errors:
Internal errors are introduced by the remote sensing system itself. They are generally systematic (predictable) and may be identified and then corrected based on pre-launch or in-flight calibration measurements (Figure 1). For example, n-line striping in the imagery may be caused by a single detector that has become uncalibrated. In many instances, radiometric correction can adjust for detector miscalibration.
Figure.1 Internal distortion
Source: www.jars1974.net/pdf/10_Chapter09.pdf
External error/Non-Systematic errors:
External errors are introduced by phenomena that vary in nature through space and time. External variables that can cause remote sensor data to exhibit radiometric and geometric error include the atmosphere, terrain elevation, slope and aspect. Some external errors may be corrected by relating empirical ground observations (i.e. radiometric and geometric ground control points) to the sensor measurements (Figure 2).
Key characteristics of external errors:
- Caused by platform perturbations and by changes in atmospheric and scene characteristics.
- They are unpredictable.
- They are variable.
- They can be determined by relating points on the ground to sensor system measurements.
- E.g.: spacecraft velocity, attitude, altitude, atmospheric effects.
Figure.2 External distortion
Source: www.jars1974.net/pdf/10_Chapter09.pdf
Several of the most common remote sensing system–induced radiometric errors are described below.
Line start or stop problems
Occasionally, scanning systems fail to collect data at the beginning or end of a scan line, or they place the pixel data at inappropriate locations along the scan line. For example, all the pixels in a scan line might be systematically shifted one pixel to the right. This is called a line-start problem (Figure 3).
Figure3: Line start error
Source: http://www.csre.iitb.ac.in/~avikb/GNR401/DIP/DIP_401_lecture_1.pdf
Line or column dropout
If one of the six detectors of the Landsat MSS (or one of the 16 detectors of the Landsat Thematic Mapper) fails to function during a scan, the result is a brightness value of zero for every pixel j in a particular line i, and the line appears completely black in band k. Similarly, if a detector in a linear array (e.g., SPOT XS, IRS-1C, QuickBird) fails to function, an entire column of data contains no spectral information. The bad line or column is commonly called a line or column drop-out and contains brightness values BVi,j,k = 0. This is a serious condition because there is no way to restore data that were never acquired; however, it is possible to improve the visual interpretability of the data by introducing an estimated brightness value for each bad line.
Correction of Line drop out
a) Replacement by preceding or succeeding line
Here the brightness value of each pixel along the dropped scan line is replaced by the value of the corresponding pixel on the immediately preceding or succeeding line:
BVi,j,k = BVi-1,j,k or BVi,j,k = BVi+1,j,k
Where,
BVi,j,k = missing pixel value at line i, pixel j, band k
b) Replacement by Averaging
A thresholding algorithm can flag any scan line having a mean brightness value at or near zero. The correction then computes the output pixel value as the average of the corresponding pixels on the preceding line (BVi-1,j,k) and the succeeding line (BVi+1,j,k) and assigns it to the output pixel (BVi,j,k) in the dropped line:
BVi,j,k = (BVi-1,j,k + BVi+1,j,k) / 2
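The averaging correction above can be sketched in a few lines of NumPy; the function name and the zero-mean flagging threshold are illustrative assumptions, not from the text.

```python
import numpy as np

def repair_line_dropout(band, zero_mean_tol=1e-6):
    """Replace dropped-out scan lines (mean brightness at or near zero)
    with the average of the preceding and succeeding lines."""
    fixed = band.astype(float).copy()
    for i in range(1, band.shape[0] - 1):
        if band[i].mean() <= zero_mean_tol:      # flagged as a dropped line
            fixed[i] = (fixed[i - 1] + fixed[i + 1]) / 2.0
    return fixed

# A toy 5x4 band with scan line 2 dropped out (all zeros).
band = np.array([[10, 12, 11, 13],
                 [14, 15, 16, 15],
                 [ 0,  0,  0,  0],
                 [18, 19, 20, 19],
                 [21, 22, 21, 23]])
print(repair_line_dropout(band)[2])   # average of lines 1 and 3
```

Edge lines (the first and last) are skipped here for brevity; a fuller implementation would fall back to single-neighbour replacement there.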
Striping:
Sometimes a detector does not fail completely, but simply goes out of radiometric adjustment. For example, a detector might record spectral measurements over a dark, deep body of water that are almost uniformly 20 brightness values greater than those of the other detectors for the same band. The result is an image with systematic, noticeable lines that are brighter than adjacent lines (Figure 4). This is referred to as n-line striping. The maladjusted lines contain valuable information, but they should be corrected to approximately the same radiometric scale as the data collected by the properly calibrated detectors for the same band. Bad lines are identified by computing histograms of the values produced by each of the n detectors over a homogeneous area such as a water body; if one detector's response is significantly different from the others, it is probably out of adjustment.
Figure 4. Striping Error
Source: http://www.mdpi.com/2072-4292/6/10/10131/html
Correction of Striping
Yk(i, j) = (σ/σk) × [Xk(i, j) − Mk] + M
Where,
Yk(i, j) = output pixel gray value.
Xk(i, j) = input pixel gray value.
M = mean of the full image.
Mk = mean of the kth detector.
σ = standard deviation of the full image.
σk = standard deviation of the kth detector.
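A minimal sketch of this destriping formula, assuming scan lines are interleaved across n detectors (the function name and the interleaved layout are assumptions for illustration):

```python
import numpy as np

def destripe(band, n_detectors=6):
    """Rescale each detector's scan lines so their mean and standard
    deviation match those of the full image (the Yk formula)."""
    out = band.astype(float).copy()
    M, sigma = band.mean(), band.std()
    for k in range(n_detectors):
        lines = band[k::n_detectors].astype(float)  # every n-th scan line
        Mk, sk = lines.mean(), lines.std()
        if sk > 0:                                   # avoid division by zero
            out[k::n_detectors] = (sigma / sk) * (lines - Mk) + M
    return out
```

After correction, each detector's lines share the global image mean and standard deviation, removing the systematic brightness offset between detectors.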
Random bad pixels (shot noise)
When a pixel error occurs randomly, it is called a bad pixel. When numerous random bad pixels are found within the scene, the effect is called shot noise because the image appears to have been shot by a shotgun. Normally these bad pixels contain values of 0 or 255 (in 8-bit data) in one or more of the bands. Shot noise is identified and repaired as follows (Figure 5). It is first necessary to locate each bad pixel in the band-k dataset: a simple thresholding algorithm makes a pass through the dataset and flags any pixel (BVi,j,k) having a brightness value of zero (assuming values of 0 represent shot noise and not a real land cover such as water). Once a bad pixel is identified, the eight pixels surrounding it are evaluated and their average replaces the flagged value.
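The thresholding-and-neighbour-averaging repair can be sketched as follows (interior pixels only, for brevity; the function name is illustrative):

```python
import numpy as np

def repair_shot_noise(band, bad_value=0):
    """Replace bad pixels (e.g. value 0 in 8-bit data) with the mean
    of their eight neighbours."""
    fixed = band.astype(float).copy()
    rows, cols = band.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            if band[i, j] == bad_value:
                window = band[i - 1:i + 2, j - 1:j + 2].astype(float)
                # sum of the 3x3 window minus the centre = sum of 8 neighbours
                fixed[i, j] = (window.sum() - band[i, j]) / 8.0
    return fixed
```

A production version would also handle border pixels and the 255 saturation case, and would exclude neighbouring bad pixels from the average.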
Figure.5 Random bad pixel error
Source: http://www.csre.iitb.ac.in/~avikb/GNR401/DIP/DIP_401_lecture_1.pdf
Radiometric preprocessing:
The radiometric correction applied to any given digital data set varies widely among sensors. The radiance measured by a given system over a given object is influenced by factors such as changes in scene illumination, atmospheric conditions, viewing geometry, and instrument response characteristics. Some of these effects, such as viewing geometry variations, are greater for airborne data collection than for satellite image acquisition. Radiometric corrections can be divided into two parts:
• Absolute Radiometric Correction
• Relative Radiometric Correction
Absolute Radiometric Correction:
This method uses a model atmosphere along with in-situ atmospheric measurements acquired at the time of data acquisition; the atmospheric model can then be refined to match local conditions. However, because the atmospheric model is complicated and exact measurement of atmospheric conditions is difficult, absolute correction cannot be applied in most applications and relative correction is used instead. The atmosphere affects the radiance measured at any point in the scene in two ways:
a) It attenuates the energy illuminating the ground object.
b) It acts as a reflector itself, adding scattered, extraneous "path radiance" to the signal detected by the sensor (Figure 6).
Ls = (R Eg Tθ) / π + Lp
Where,
Ls = Total spectral radiance measured by sensor.
R = Reflectance of object.
Eg =Irradiance on object.
Tθ=Transmittance of atmosphere.
Lp = Path radiance.
The first term in the above equation contains valid information about ground reflectance. The second term represents the scattered path radiance, which introduces haze into the imagery and reduces image contrast.
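Solving the equation above for the ground reflectance R gives a simple sketch; all numeric values here are hypothetical, not real sensor calibration data:

```python
import math

def surface_reflectance(Ls, Eg, T_theta, Lp):
    """Invert Ls = (R * Eg * T) / pi + Lp for the ground reflectance R."""
    return math.pi * (Ls - Lp) / (Eg * T_theta)

# Hypothetical values for sensor radiance, irradiance, transmittance, path radiance.
R = surface_reflectance(Ls=80.0, Eg=1500.0, T_theta=0.8, Lp=10.0)
```

Note how subtracting the path radiance Lp before inverting is exactly the haze-removal idea used later in dark object subtraction.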
Figure.6 Radiance received by RS System
Source: http://www.utsa.edu/LRSG/Teaching/EES5083/L4-Radiom.pdf
Another absolute radiometric correction method is the conversion of DNs to absolute radiance values. Such conversions are necessary when changes in the absolute reflectance of objects are to be measured over time using different sensors.
Such conversions are also important in the development of mathematical models that physically relate image data to quantitative ground measurements (e.g. water quality data). Each spectral band of a sensor has its own response function, and its characteristics are monitored using on-board calibration lamps. The absolute spectral radiance output of the calibration sources is known from prelaunch calibration and is assumed to be stable over the life of the sensor. The onboard calibration sources relate known radiance values incident on the detectors to the resulting DNs.
Figure. 7 Graph of Slope distance against DN value
Source: Self
DN = GL + B
Where,
DN = digital number value recorded.
G = slope of response function (gain).
L = Spectral radiance measured.
B = intercept of response function (offset).
Figure.8 Graphical plot of DN against spectral radiance
Source: Self
This equation can be inverted to convert any DN in a particular band to absolute units of spectral radiance in that band, provided LMAX and LMIN (the radiances corresponding to the maximum and minimum DN) are known from sensor calibration (Figure 8).
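Assuming a linear response between LMIN and LMAX over an 8-bit DN range, the conversion can be sketched as follows (the calibration limits in the example are illustrative, not from any particular sensor):

```python
def dn_to_radiance(dn, lmin, lmax, dn_max=255):
    """Convert a DN to spectral radiance using the band's calibration
    limits: L = LMIN + ((LMAX - LMIN) / DN_max) * DN."""
    return lmin + (lmax - lmin) / dn_max * dn

# Illustrative calibration limits for an 8-bit band.
L = dn_to_radiance(128, lmin=-1.5, lmax=152.1)
```

Here the gain G of the response function corresponds to (LMAX − LMIN)/DN_max and the offset B to LMIN.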
Relative Radiometric Correction
Relative radiometric correction is used to normalize multi-temporal data taken on different dates to a selected reference scene acquired at a specific time. Atmospheric attenuation is minimized by using multiple looks at the same object, or by viewing the same object in multiple bands. The multiple-look method suffers from the drawback that the atmospheric path changes between looks.
Relative radiometric correction may be used for:
• Single-image normalization using histogram adjustment
• Multiple-date image normalization using regression
Single-image normalization using histogram adjustment: This method is based on examination of the spectral characteristics of objects of known or assumed brightness recorded in multispectral imagery. The approach is often known as "image-based atmospheric correction" because it adjusts for atmospheric effects mainly from evidence available within the image itself. The strategy is implemented by identifying a dark object or feature within the scene, which may be a large water body or possibly shadows cast by clouds or by large topographic features. In the infrared portion of the spectrum, both water bodies and shadows should have brightness values at or near zero, because clear water absorbs strongly in the near infrared and very little infrared energy is scattered to the sensor from shadowed pixels. However, histograms of the DN values for a scene show that the lowest values (for dark areas, such as clear water bodies) are not zero but somewhat larger. These values, assumed to represent the contribution of atmospheric scattering in each band, are then subtracted from all DN values for that scene and band. Thus the lowest value in each band is set to zero, the dark black tone assumed to be correct for a dark object in the absence of atmospheric scattering. This simplest method is known as the histogram minimum method (HMM) or the dark object subtraction (DOS) technique.
DOS/HMM Correction: This procedure has the advantages of simplicity, directness, and almost universal applicability, as it exploits information present within the image itself. But the atmosphere can cause dark pixels to become brighter and bright pixels to become darker, so applying a single correction to all pixels provides only a rough adjustment for atmospheric effects. The DOS technique can correct for the additive effects of atmospheric scattering, but not for the multiplicative effects of absorption.
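A minimal DOS sketch follows; the band values are toy data, and a real implementation would estimate the haze value per band from the histogram rather than taking a raw minimum:

```python
import numpy as np

def dark_object_subtraction(band):
    """Subtract the band's minimum DN (assumed to be pure atmospheric
    path radiance over a dark object) from every pixel."""
    haze = int(band.min())
    return np.clip(band.astype(int) - haze, 0, None)

band = np.array([[12, 40],
                 [55, 13]])
print(dark_object_subtraction(band))   # minimum (12) removed from every pixel
```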
Multi-Date Image Normalization: Applications such as change detection involve the use of multi-date historical images, so the historical temporal images must be radiometrically corrected. Two methods can be applied:
• Multi-date empirical radiometric normalization
• Multi-date deterministic radiometric normalization
Under unknown atmospheric conditions, pseudo-invariant ground targets may be used to normalize multi-temporal datasets to a single reference scene. Non-anniversary-date imagery is a major problem when using temporal images. Image normalization is achieved by applying regression equations to the multi-date imagery that predict what the brightness of a pixel would have been had it been acquired under the same conditions as the reference image.
Regression Method: This method generally involves the calculation of regression lines for a number of surface materials of contrasting spectral properties. The regression line method (RLM) determines a best-fit line for multispectral plots of pixels within homogeneous cover types. If no atmospheric scattering has taken place, the line would be expected to pass through the origin. The slope of the plot is proportional to the ratio of the reflectances of the materials, while the intercepts on the x and y axes produce two offset values that represent the amount of bias caused by atmospheric scattering, as shown in Fig (9).
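The regression normalization can be sketched by fitting a least-squares line on pseudo-invariant target pixels and applying it to the whole subject band (function name and values are illustrative):

```python
import numpy as np

def normalize_to_reference(subject_px, reference_px, subject_band):
    """Fit reference = m * subject + b on pseudo-invariant target pixels,
    then apply the fitted line to the whole subject band."""
    m, b = np.polyfit(subject_px, reference_px, 1)
    return m * subject_band + b

# Toy pseudo-invariant targets observed on both dates.
subject   = np.array([10.0, 20.0, 30.0])
reference = np.array([25.0, 45.0, 65.0])   # exactly reference = 2*subject + 5
normalized = normalize_to_reference(subject, reference, np.array([15.0]))
```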
Figure.9 Regression Graph
Source: http://www.degreedays.net/regression-analysis
Target Properties: The target should be at approximately the same elevation as other land within the scene. It should contain minimal amounts of vegetation and should be relatively flat. The patterns of the target should not change over time.
Multi-date Deterministic Radiometric Normalization
Solar Elevation Correction: For satellite scenes taken at different times of the year, sun elevation correction and earth-sun distance correction must be applied. The sun elevation correction accounts for the seasonal position of the sun relative to the earth: the image data are normalized by assuming that the sun was at the zenith on each date of sensing. The correction is applied by dividing each pixel value by the cosine of the sun's angle from the zenith. This correction ignores topographic and atmospheric effects (Fig 10).
Source: http://mikeyharris.weebly.com/energy-efficient-house.html
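The zenith-cosine normalization described above can be sketched as follows (function name is illustrative; sun elevation in degrees):

```python
import math

def sun_elevation_correct(dn, sun_elevation_deg):
    """Normalize a pixel value as if the sun were at the zenith: divide
    by the cosine of the solar zenith angle (90 deg minus elevation)."""
    zenith = math.radians(90.0 - sun_elevation_deg)
    return dn / math.cos(zenith)

# At 30 deg sun elevation the zenith angle is 60 deg, cos = 0.5,
# so brightness values are doubled to simulate overhead illumination.
corrected = sun_elevation_correct(50.0, 30.0)
```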
Earth Sun Distance Correction:
Earth-sun distance correction is applied to normalize for the seasonal changes in the distance between the earth and the sun. The earth-sun distance is usually expressed in astronomical units; one astronomical unit is the mean distance between the earth and the sun, approximately 149.6 × 10^6 km. The irradiance from the sun decreases as the square of the earth-sun distance increases.
Figure.11 Topographic Normalization
The normalized irradiance is given by

E = (E0 cos Θ0) / d²

Where,
E = normalized solar irradiance.
E0 = solar irradiance at mean earth-sun distance.
Θ0 = sun's angle from the zenith.
d = earth-sun distance, in astronomical units.
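A sketch of this normalization (the numeric values are illustrative):

```python
import math

def normalized_irradiance(E0, zenith_deg, d_au):
    """E = E0 * cos(theta0) / d^2, with d in astronomical units."""
    return E0 * math.cos(math.radians(zenith_deg)) / d_au ** 2

# Illustrative: exoatmospheric irradiance 1000, sun 60 deg from zenith,
# earth-sun distance 1 AU.
E = normalized_irradiance(1000.0, 60.0, 1.0)
```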
Topographic Normalization: Variations in topographic parameters (slope, aspect and altitude) cause variations in the brightness of satellite images of rugged mountainous terrain: an object lying in shadow receives less solar irradiance than one on a sunny slope (Figure 11). A surface facing away from the sun at a low sun elevation will receive less radiation than a surface facing the sun at a high solar elevation (Ekstrand, 1996). South-facing (sun-facing, illuminated) slopes show higher reflectance, whereas the effect is the opposite on north-facing slopes (Warren et al. 1998; Riano et al. 2003). These differential illumination effects in satellite imagery restrain the information available on north-facing slopes, negatively affecting the results of various quantitative methods of snow cover mapping, especially classification, change analysis and related analyses (Mishra et al. 2010). Effective removal or minimization of topographic effects is therefore necessary for satellite image data of mountainous regions.
Variations in illumination affect land cover discrimination because the same land cover has different spectral responses in shadowed and non-shadowed areas. The correction of illumination variations is referred to as topographic normalization or topographic correction. Techniques are grouped into two major categories:
(a) Band ratios
(b) Modeling of illumination conditions
Source: Remote Sensing of the Environment, John R. Jensen
Techniques under group (b) model illumination to compute the flat-normalized radiance of each pixel. They fall into two further sub-categories, Lambertian and non-Lambertian, depending on whether they assume Lambertian or non-Lambertian surface behaviour:
Correction for Slope and Aspect Effects
Slope-aspect corrections are given by Teillet et al. (1982):
1. Cosine correction
2. Modified cosine correction
3. Semi-empirical methods (Minnaert correction, C-correction)
4. Statistical-empirical methods
Each of these corrections is based on illumination, defined by the cosine of the incident solar angle, which depends on the orientation of each pixel relative to the sun's actual position. A digital elevation model (DEM) of the study area is required for these corrections; the DEM and the satellite sensor data must be geometrically registered and resampled to the same spatial resolution.
Lambertian Techniques: These techniques are the easiest to implement, but are based on somewhat unrealistic assumptions, such as:
- The surface reflects energy equally in all directions,
- The correction is wavelength independent
- Constant distance between the Earth and the Sun, and
- Constant amount of solar energy illuminating the Earth.
Cosine Correction (Lambertian): The amount of irradiance reaching a pixel on a slope is directly proportional to the cosine of the incidence angle i, defined as the angle between the normal on the pixel and the zenith direction. The correction is

LH = LT (cos Θz / cos i)

Where,
LH = radiance observed for a horizontal surface.
LT = radiance observed for a sloped surface.
Θz = sun's angle from the zenith.
i = sun's incidence angle in relation to the normal on the pixel.

This method has the limitation that it accounts only for the direct part of the irradiance that illuminates a pixel on the ground; it does not take into account the diffuse skylight or the light reflected from surrounding mountainsides that may also illuminate the pixel. Weakly illuminated areas of the terrain therefore receive a disproportionate brightening effect when the cosine correction is applied (Figure 12): the smaller the value of the cosine, the greater the slope correction.
Figure.12 Cosine Correction
Source: Self
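A sketch of the cosine correction, with angles in degrees (function name is illustrative):

```python
import math

def cosine_correction(LT, zenith_deg, incidence_deg):
    """LH = LT * cos(theta_z) / cos(i): project the radiance observed on
    a slope to the equivalent horizontal-surface radiance."""
    cos_z = math.cos(math.radians(zenith_deg))
    cos_i = math.cos(math.radians(incidence_deg))
    return LT * cos_z / cos_i

# On flat terrain (incidence equals zenith angle) the value is unchanged;
# a weakly illuminated slope (large i, small cos i) is strongly brightened.
flat = cosine_correction(100.0, 30.0, 30.0)
```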
Modified Cosine Correction (Lambertian): To moderate the over-correction of faintly illuminated pixels, a modified cosine correction based on the average illumination of the scene is used; one commonly cited form is

LH = LT + LT (ILm − IL) / ILm

Where,
IL = illumination of the pixel (cos i).
ILm = average IL value for the study area.
Non Lambertian Technique: These techniques try to model the roughness of the surface, or the degree to which it is Lambertian. Usually, they require the calculation of correction coefficients which are wavelength dependent. Therefore, each band is processed separately. The computation of the correction factors will be done using a subset of pixels from the same land cover class. First, we will extract the pixel values from the image bands and illumination layers, and then we will compute the correction factors.
Minnaert Correction: The Minnaert method is non-Lambertian and is implemented for topographic corrections that depend on the type of surface and the spectral wavelength band. The Minnaert constant varies from 0 (ideally non-Lambertian surface) to 1 (perfect Lambertian surface). The method is an improvement of the cosine correction and is commonly written as

LH = LT (cos Θz / cos i)^k
Where,
LH = radiance observed for a horizontal surface.
LT=radiance observed for a sloped surface.
Θz=sun’s angle from zenith.
I= sun’s incidence angle in relation to normal on pixel.
K= the Minnaert constant.
The value of the constant k varies between 0 and 1 and measures the extent to which a surface is Lambertian. A perfectly Lambertian surface has k = 1, which reduces the method to the traditional cosine correction.
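A sketch of the Minnaert correction in the form given above; in practice k would be estimated per band by regression, and the value used here is illustrative:

```python
import math

def minnaert_correction(LT, zenith_deg, incidence_deg, k):
    """LH = LT * (cos(theta_z) / cos(i)) ** k.
    k = 1 reduces to the cosine correction; k < 1 models a surface
    that is not perfectly Lambertian."""
    ratio = math.cos(math.radians(zenith_deg)) / math.cos(math.radians(incidence_deg))
    return LT * ratio ** k

corrected = minnaert_correction(100.0, 30.0, 60.0, k=0.7)
```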
C-Correction
In this method an additional adjustment constant C is added to the cosine correction, which modifies it to the C-correction:

LH = LT (cos Θ0 + C) / (cos i + C)
Where,
LH = radiance observed for a horizontal surface.
LT = radiance observed for a sloped surface.
Θ0 = sun’s angle from zenith.
I = sun’s incidence angle in relation to normal on pixel.
C = b/m, where m and b are the slope and intercept of the regression of LT on cos i.
Adding C to cos i increases the denominator and thus weakens the correction applied to faintly illuminated pixels.
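A sketch of the C-correction; the value of C is illustrative, since in practice it comes from the per-band regression of radiance on cos i:

```python
import math

def c_correction(LT, zenith_deg, incidence_deg, c):
    """LH = LT * (cos(theta_0) + c) / (cos(i) + c), with c = b/m from
    the band's regression of radiance on cos(i)."""
    cos_z = math.cos(math.radians(zenith_deg))
    cos_i = math.cos(math.radians(incidence_deg))
    return LT * (cos_z + c) / (cos_i + c)

# With c = 0 this is the plain cosine correction; larger c damps it.
corrected = c_correction(100.0, 30.0, 60.0, c=0.5)
```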
Statistical-Empirical Correction: A DEM-based topographic correction model is used to decrease the divergence caused by solar illumination on the same land cover located on different (north and south) aspects of a mountain, so that a land cover class with the same reflectance shows the same spectral response in the optical remote sensing image regardless of solar azimuth. For this, the atmospheric transmittance from the ground surface to the sensor, as well as along the path from the sun to the ground, is calculated (Pandya et al. 2002) and an improved dark object subtraction (DOS) technique is implemented. The topographic correction is then based on an empirical statistical analysis of the radiance values of remotely sensed data acquired over rugged terrain against the cosine of the solar illumination angle. It is possible to correlate the illumination predicted from the DEM with the actual remote sensing data and to fit a regression line; the slope of the regression line indicates how the same class can appear differently on different terrain slopes.
LH = LT − (m cos i + b) + L̄T
Where,
LH = radiance observed for a horizontal surface.
LT = radiance observed for a sloped surface.
L̄T = average of LT over the image.
i = sun's incidence angle in relation to the normal on the pixel.
m = slope of the regression line.
b = y-intercept of the regression line.
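A sketch of the statistical-empirical correction, fitting m and b by least squares (toy data; in practice the regression would use pixels of a single land cover class):

```python
import numpy as np

def statistical_empirical_correction(LT, cos_i):
    """LH = LT - (m * cos(i) + b) + mean(LT), with m and b fitted by
    regressing the observed radiance on the illumination cos(i)."""
    m, b = np.polyfit(cos_i, LT, 1)
    return LT - (m * cos_i + b) + LT.mean()

# Radiance perfectly linear in illumination: the correction flattens it
# to the image mean, removing the illumination dependence.
cos_i = np.array([0.2, 0.5, 0.8])
LT = 2.0 * cos_i + 5.0
LH = statistical_empirical_correction(LT, cos_i)
```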
Radiometric correction is further classified into the following three types
1) Radiometric correction due to sensor sensitivity: In this case the image corners appear darker than the central area; this is called vignetting. Vignetting can be expressed by cos^n Θ, where Θ is the angle of the ray with respect to the optical axis and n depends on the lens characteristics. For electro-optical sensors, the calibration between irradiance and the sensor output signal can be measured and used for radiometric correction.
2) Radiometric correction for Sun angle and topography
a. Sun spot: Solar radiance reflected diffusely from the ground surface can produce a lighter area in the image, called a sun spot. The sun spot and vignetting effects can be corrected together by estimating a shading curve, determined by Fourier analysis to extract the low-frequency component.
b. Shading: The shading effect due to topographic relief can be corrected using the angle between the solar radiation direction and the normal vector to the ground surface.
3) Atmospheric correction: Atmospheric effects cause absorption and scattering of the solar radiation. Reflected or emitted radiation from an object and path radiance should be corrected.
Geometric Correction
Geometric correction is the process of transforming imagery to remove undesirable or misleading geometric distortion (due to sensor pitch, roll, height, etc.). Simple applications such as land use / land cover mapping do not require atmospheric correction, since the signals of features such as soil, water, vegetation and urban areas are strong and distinct enough that atmospheric attenuation can be neglected. But when biophysical properties are extracted within a specific class, the differences in reflectance of the various constituents may be so small that atmospheric attenuation makes them inseparable. When the atmospheric attenuation is small compared to the signal from the terrain being sensed, a model atmosphere can be used: an assumed atmosphere calculated from the time of year and the altitude, latitude and longitude of the study area. Geometric correction involves three steps: collecting GCPs, pre-registration checking, and registration. Raw digital images with non-systematic distortion need to be rectified using either image-to-map or image-to-image registration. These distortions may be due to several factors such as:
(i) Scan Skew Distortion
(ii) Space Craft Velocity Distortion
(iii) Panoramic Effect
(iv) Earth Rotation Correction
(v) Altitude and Attitude
Scan Skew Distortion:
During the time the scan mirror completes one active scan, the satellite moves along the ground track. Therefore, scanning is not at right angles to the satellite velocity vector (ground track) but is slightly skewed, which produce along track geometric distortion, if not corrected (Figure 13).
Figure. 13 Scan skew effect
Earth Rotation Correction
The rotation of the earth during the time required to scan one frame results in distortion in the scan direction; the offset accumulates along the track and is a function of spacecraft latitude and orbit (Figure 14).
Figure 14 Earth rotation error
Source: http://www.geos.ed.ac.uk/~rharwood/teaching/msc/adv_ip/corr.pdf
Altitude and Attitude
Deviation of the spacecraft from its nominal altitude causes scale distortion in remote sensing data: as the height of the spacecraft decreases, the pixel size decreases, and vice versa. For the Landsat system the distortion is primarily along the scan line; for IRS systems the distortion occurs in both directions, since the scanning mechanism is different from Landsat's. Nominally, the satellite is stable in space with its three axes mutually perpendicular. Rotation about the longitudinal (along-track) axis of the sensing platform is called "roll" (Fig 15). Rotation about the transverse (across-track) axis is called "pitch" (Fig 17). Rotation about the axis orthogonal to the previous two, i.e. the line passing vertically through the sensing platform toward the centre of the earth, is called "yaw" (Fig 16). If the spacecraft departs from its normal position, geometric distortions are introduced into the remotely sensed data; these distortions are unpredictable and uncertain.
Figure 15 Roll
Source: Self
Figure 16 Yaw
Source: Self
Figure 17 Pitch
Source: Self
Geometric Correction for Unsystematic Error
Random/non-systematic distortions are corrected by analysing well-distributed ground control points (GCPs). GCPs are features of known ground location that can be accurately located on digital imagery, e.g. highway intersections, building corners, distinct shorelines. In the correction process, numerous GCPs are located both in terms of the image coordinate system (column and row numbers) of the distorted image and in terms of their ground coordinates (measured from a map, from GPS readings, or from an already projected image in a projected or geographic coordinate system). The minimum number of GCPs required for a transformation model of order N is given by:
[(N+1)*(N+2)]/2
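The GCP-count formula can be checked with a small helper (the function name is illustrative):

```python
def gcps_required(order):
    """Minimum number of GCPs for a polynomial transformation of the
    given order: (N + 1) * (N + 2) / 2."""
    return (order + 1) * (order + 2) // 2

# First-, second- and third-order transforms.
print([gcps_required(n) for n in (1, 2, 3)])   # [3, 6, 10]
```

In practice many more GCPs than the minimum are collected so that the least-squares fit and its residual (RMS) error can be assessed.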
An undistorted output matrix of empty map cells is defined. The coordinates of each element in the undistorted output matrix are transformed to determine their corresponding location in the original input (distorted) matrix. A cell in the output matrix will not overlay exactly on a pixel in the input matrix, so the intensity value (digital number) assigned to a cell in the output matrix is determined on the basis of the surrounding pixels in the input image.
Resampling: Resampling is the technique of extracting gray values from locations in the original input image and relocating them to the appropriate coordinates of the rectified output image.
Three resampling techniques can be identified, based on their degree of precision and computational cost:
A) Nearest neighborhood
B) Bilinear interpolation
C) Cubic Convolution
(A) Nearest Neighborhood
The DN for an output pixel is assigned simply on the basis of the DN of the closest pixel in the input matrix; in the example of Fig. 19, the DN of the labelled input pixel would be transferred to the shaded output pixel. This approach, called nearest neighbour resampling, offers the advantage of computational simplicity and avoids altering the original input pixel values. Because it preserves the original pixel values it does not alter the thematic information, but it can introduce a spatial shift of up to half a pixel, and the geometry of features such as roads, canals and railways may appear disjointed in the rectified output.
Fig (19): Matrix of geometrically correct output pixels superimposed on matrix of original, distorted input pixels
Source: Remote Sensing and Image Interpretation, Lillesand and Kiefer
(B) Bilinear Interpolation
The bilinear interpolation technique takes a distance-weighted average of the DNs of the four nearest pixels (labelled a and b in the distorted image matrix of Figure 19); it is the two-dimensional equivalent of linear interpolation. The new pixel gets a value from the weighted average of the 4 (2 × 2) nearest pixels, so the resampled image is much smoother than the nearest-neighbour output, though the values are synthetic. Linear features such as roads and canals remain continuous, and the method has one quarter of the mean-square resampling error of nearest neighbour. It requires more computational time because of the four multiplications per pixel, and because it alters the DN values it may cause confusion in a subsequent classification process.
BVwt = [ Σ(k=1..4) Zk / Dk² ] / [ Σ(k=1..4) 1 / Dk² ]
Where,
Zk = DN value of each of the four nearest pixels.
Dk² = squared distance between the output data point and the kth nearest pixel.
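As presented here, both the 4-neighbour (bilinear-style) and 16-neighbour (cubic-style) formulas reduce to the same inverse-squared-distance weighted average; a sketch (names illustrative):

```python
import numpy as np

def inverse_distance_resample(values, distances):
    """Distance-weighted average of the nearest pixels:
    BV = sum(Z_k / D_k^2) / sum(1 / D_k^2)."""
    values = np.asarray(values, dtype=float)
    d2 = np.asarray(distances, dtype=float) ** 2
    return (values / d2).sum() / (1.0 / d2).sum()

# Four equidistant neighbours reduce to a plain mean; closer
# neighbours dominate when distances differ.
bv = inverse_distance_resample([10, 20, 30, 40], [1, 1, 1, 1])
```

Note that true cubic convolution, as implemented in most image processing packages, uses a piecewise cubic kernel rather than inverse-distance weights; the formula above follows the simplified presentation in this chapter.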
(C) Cubic Convolution
An improved restoration of the image is provided by the cubic convolution method of resampling. In this approach, the transferred "synthetic" pixel values are determined by evaluating the block of 16 pixels in the input matrix that surrounds each output pixel (labelled a, b and c in Fig 19). Cubic convolution avoids the disjointed appearance of the nearest neighbour method and provides a slightly sharper image than bilinear interpolation. It takes a distance-weighted average of the DNs of the 16 nearest pixels, requires more computational time because of the 16 multiplications per pixel, and alters the DN values, which may cause confusion in a classification process.
BVwt = [ Σ(k=1..16) Zk / Dk² ] / [ Σ(k=1..16) 1 / Dk² ]