5 IMAGE ENHANCEMENT

Kumari Anamika


 

 

 

Objectives

 

• Students will get to know why enhancement is required.

• Students will acquire the skill of studying the statistics of image enhancement.

• Students will be equipped with the knowledge to study further the background functioning of image enhancement.

 

    Outline

  • Image enhancement is a technique which is widely used in computer graphics.
  • These operations are performed in order to modify the brightness, contrast or the distribution of the grey levels of a satellite image.
  • The aim of image enhancement is to improve the interpretability or perception of information in an image for human viewers, or to provide "better" input for other automated image processing techniques.
  • These techniques have been tested on a large number of images and have shown significant results.
  • This module discusses the concept of image enhancement, its approaches and its different techniques.

 

Image enhancement means improving the visibility of a portion or feature of an image while suppressing the information in other portions or features. Information extraction techniques help to obtain statistical information about any specific feature or portion of the image. Nowadays image enhancement is applied in fields such as medical imaging and the analysis of satellite images. A satellite image may contain insufficient information (detail) or a lot of unwanted extra information. Enhancement techniques may reduce the noise level, and filtering techniques may remove it; indeed, filtering techniques are themselves a part of image enhancement. These techniques are discussed in detail and illustrated in this module.

 

Introduction:

 

Digital Image Processing (DIP) involves the modification of digital data to improve image quality with the aid of a computer. The processing helps in maximising the clarity, sharpness and detail of features of interest for information extraction and further analysis. Image enhancement plays a vital role in the analysis and interpretation of remotely sensed data. In particular, data obtained from satellites, which is in digital form, can best be utilised with the help of digital image processing. Image enhancement algorithms are applied to improve the appearance of an image for human visual analysis. The principal objective of image enhancement is to modify the attributes of an image to make it more suitable for a given task and a specific observer. During this process, one or more attributes of the image are modified. The technique is widely applied to geophysical images to make visual interpretation and geological understanding easier.

Fig (1): Process of image enhancement (Input Image → Enhancement Techniques → Enhanced Image)

    Source: self

 

Specific Application

For example, you can remove noise, sharpen or brighten an image, and improve perceptual aspects such as image quality, intelligibility or visual appearance, making it easier to identify key features. Enhancement techniques change the original digital number (DN) values permanently; therefore, enhanced images cannot be used for many kinds of digital analysis, e.g. image classification. These techniques are applied either to a single-band image or separately to the individual bands of a multi-band image set.

 

Image Reduction and Magnification:

Image Reduction: Image reduction techniques allow the analyst to obtain a regional perspective of the remotely sensed data. A computer screen cannot display the entire image unless the visual representation of the image is reduced. This operation is commonly known as zooming out.

 

Fig (2): Logic of a simple 2x integer reduction (original image and reduced image).

Source: Self

 

Fig (2) shows a hypothetical example of 2x image reduction achieved by sampling every other row and column of the original data. The result shows that the new image consists of only a quarter (25%) of the original data.
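As a hedged illustration of this 2x reduction, the sketch below samples every other row and column with NumPy; the test array and reduction factor are assumptions for demonstration only.

```python
import numpy as np

def reduce_image(image: np.ndarray, factor: int = 2) -> np.ndarray:
    """Simple integer reduction: keep every factor-th row and column."""
    return image[::factor, ::factor]

# Hypothetical 4x4 single-band image reduced by a factor of 2
image = np.arange(16).reshape(4, 4)
reduced = reduce_image(image, 2)
print(reduced.shape)  # (2, 2) -> only 25% of the original pixels remain
```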

 

Image Magnification:

 

Image magnification is a very useful technique when the analyst is trying to obtain detailed information about the spectral reflectance or emittance characteristics of a relatively small geographic area of interest; it is also used to match the display scale of another image. It is commonly known as zooming in.

 

Fig (3): Logic of a simple 2x integer magnification (original image and magnified image).

Source: Self

 

Fig (3) shows a hypothetical example of 2x image magnification achieved by replicating every row and column of the original image. The new image consists of four times as many pixels as the original scene. Row and column replication is the simplest form of image magnification. To magnify an image by an integer factor m² (in terms of pixel count), each pixel in the original image is replaced by an m × m block of pixels, all with the value of the original pixel.
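A comparable hedged sketch of 2x magnification by pixel replication, using NumPy's repeat along both axes (the test array and factor m are assumptions):

```python
import numpy as np

def magnify_image(image: np.ndarray, m: int = 2) -> np.ndarray:
    """Integer magnification: replace each pixel by an m x m block of the same value."""
    return np.repeat(np.repeat(image, m, axis=0), m, axis=1)

# Hypothetical 2x2 single-band image magnified by a factor of 2
image = np.arange(4).reshape(2, 2)
magnified = magnify_image(image, 2)
print(magnified.shape)  # (4, 4) -> four times as many pixels
```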

 

Methods of Enhancement:

 

• Spatial Domain enhancement techniques

• Frequency Domain enhancement techniques

 

• Spatial domain enhancement techniques modify the grey scale or intensity values of an image and are based on the direct manipulation of the pixels in the image.

 

A spatial domain process is denoted by the expression g(x, y) = T[f(x, y)]

Where,

f(x, y) is the input image.

g(x, y) is the processed (output) image.

f is the intensity value at the pixel located at (x, y).

T is an operator on f defined over some neighbourhood of (x, y).
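As a hedged sketch of such an operator T acting on a neighbourhood, the code below implements a 3x3 mean filter in the spatial domain; the image array and window size are assumptions chosen for illustration.

```python
import numpy as np

def mean_filter(f: np.ndarray, size: int = 3) -> np.ndarray:
    """g(x, y) = T[f(x, y)], where T averages the size x size neighbourhood
    around each pixel (edges handled by zero padding)."""
    pad = size // 2
    padded = np.pad(f, pad, mode="constant")
    g = np.zeros(f.shape, dtype=float)
    rows, cols = f.shape
    for x in range(rows):
        for y in range(cols):
            g[x, y] = padded[x:x + size, y:y + size].mean()
    return g

f = np.random.randint(0, 256, (5, 5))
g = mean_filter(f)  # each output pixel depends on its 3x3 neighbourhood
```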

 

The intensity value at a pixel located at (x, y) after processing does not depend on the intensity value at (x, y) alone; it also depends on the intensities in the neighbourhood of (x, y). In other words, the value of a pixel with coordinates (x, y) in the enhanced image is the result of performing some operation on the pixels in the neighbourhood of (x, y) in the input image f. In frequency domain methods, the image is first transferred into the frequency domain; that is, the Fourier Transform of the image is computed first. Fourier analysis is a mathematical technique used to separate a satellite image into its various spatial frequency components. Frequency domain filtering operations are shown in Fig (4).

 

 

Fig (4): Filtering operation in frequency domain

Source: http://www.slideshare.net/diwakerpant/frequency-domain-image-enhancement-techniques

 

 

 

Contrast Enhancement:

 

Remote sensing images play an important role in many fields such as meteorology, agriculture, geology and education. Contrast enhancement techniques are required for better visual perception and colour reproduction. The range of brightness values present in an image is referred to as contrast. Contrast enhancement techniques have been widely used in many applications of image processing where the subjective quality of images is important for human interpretation. A common problem in remote sensing is that the range of reflectance values collected by a sensor may not match the capabilities of the colour display monitor. In digital image processing, a lot of work has been done on the contrast enhancement of satellite images in the field of remote sensing to improve image quality, using techniques such as histogram equalization, multi-histogram equalization and pixel-dependent contrast preserving. Contrast generally refers to the difference in luminance or grey level values in an image and is an important characteristic. It can be defined as the ratio of the maximum intensity to the minimum intensity over an image:

 

Contrast Ratio = BVmax / BVmin

Where,

BVmax is the maximum brightness value.

BVmin is the minimum brightness value.

 

In this module we discuss contrast enhancement. Linear and non-linear transformation functions such as image negatives, logarithmic transformations, power-law transformations and piecewise linear transformations will be discussed. The histogram process and the histograms corresponding to four basic grey-level characteristics will also be introduced.

 

Fig (5): Image Histogram

Source: https://hexagongeospatial.fluidtopics.net/book#!book;uri=a0316196704acf3e68dc2909785a5f77;breadcrumb=1959db97f38d1f1c5b119813d9287f6a-60efbd0962e05deb5a71261633febc8d

 

The key to understanding contrast enhancement is the concept of an image histogram. A histogram is a graphical representation of the distribution of brightness values in satellite data.
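A minimal sketch of computing such a histogram for an 8-bit band with NumPy (the band array is a random placeholder, not real satellite data):

```python
import numpy as np

# Hypothetical 8-bit band with values 0-255
band = np.random.randint(0, 256, (100, 100))

# One bin per possible brightness value
hist, bin_edges = np.histogram(band, bins=256, range=(0, 256))
print(hist.sum())  # equals the total number of pixels (10000)
```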

 

Linear Contrast Enhancement: Linear contrast enhancement linearly expands the original digital values of the remotely sensed data into a new distribution. It is also referred to as contrast stretching. These enhancements are best applied to remotely sensed images with Gaussian or near-Gaussian histograms, meaning that all the brightness values fall within a narrow range of the histogram and only one mode is apparent. The commonly used methods of linear contrast enhancement are listed below, after Fig (6).

 

Fig (6): Linear Contrast Stretch

Source: Lillesand and Kiefer, 1993

 

Fig(6) shows the graphical representation of linear contrast stretch.

 

• Minimum-Maximum Linear Contrast Stretch

• Piecewise Linear Contrast Stretch

• Saturation Linear Contrast Stretch

 

Minimum-Maximum Linear Contrast Stretch: In this technique the original minimum and maximum values of the data are assigned to a newly specified set of values that utilizes the full range of available brightness values of the display unit. To perform the linear contrast enhancement, the analyst examines the satellite image statistics and determines the minimum and maximum brightness values in band k, mink and maxk respectively.

 

BVout = [(BVin − mink) / (maxk − mink)] × quantk

 

Where, BVout is the output brightness value, BVin is the original input brightness value, and quantk is the maximum value of the display range of brightness values (e.g. 255).
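A minimal NumPy sketch of this minimum-maximum stretch, assuming an 8-bit display range (quantk = 255) and a hypothetical narrow-range band:

```python
import numpy as np

def minmax_stretch(band: np.ndarray, quant: int = 255) -> np.ndarray:
    """BVout = [(BVin - mink) / (maxk - mink)] * quantk."""
    bv_min, bv_max = band.min(), band.max()
    stretched = (band.astype(float) - bv_min) / (bv_max - bv_min) * quant
    return stretched.astype(np.uint8)

band = np.random.randint(40, 120, (50, 50))  # brightness values in a narrow range
out = minmax_stretch(band)                   # now spans the full 0-255 range
```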

 

 

Piecewise Linear Contrast Stretch: This involves the identification of a number of linear enhancement steps that expand the brightness ranges in the modes of the histogram. The piecewise linear contrast stretch is similar to the minimum-maximum linear contrast stretch, except that this method uses specified minimum and maximum values that lie at a certain percentage of pixels from the mean of the histogram. In the piecewise stretch, a series of small min-max stretches are set up within a single histogram. It is a very powerful enhancement technique.

 

A piecewise linear contrast stretch normally follows two rules:

1) Data values are continuous; there can be no break in the values between the high, middle, and low ranges.

2) Data values specified can go only in an upward, increasing direction, as shown in Fig (7) .

Fig (7): Logic of a piecewise linear contrast stretch.

Source: http://www.r-s-c-c.org/node/244

 

 

In the piecewise linear contrast stretch, several breakpoints are defined that increase or decrease the contrast of the image for a given range of values. The minimum and maximum values are stretched to the values of 0 and 255 at a constant level of intensity.
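As a hedged sketch, a piecewise linear stretch can be written as interpolation between breakpoints; the breakpoint values below are assumptions chosen only for illustration.

```python
import numpy as np

def piecewise_stretch(band, in_breaks, out_breaks):
    """Map input breakpoints to output breakpoints, with linear segments
    in between (both breakpoint lists must be increasing)."""
    return np.interp(band, in_breaks, out_breaks).astype(np.uint8)

band = np.random.randint(0, 256, (50, 50))
# Expand contrast in the 60-180 range and compress the tails
out = piecewise_stretch(band, in_breaks=[0, 60, 180, 255],
                        out_breaks=[0, 20, 235, 255])
```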

 

Saturation Linear Contrast Stretch: The saturation contrast stretch is also referred to as a percentage linear contrast stretch or tail trim. It is similar to the minimum-maximum linear contrast stretch except that this method uses specified minimum and maximum values that lie at a certain percentage of pixels from the tails of the histogram. The main idea of this method is that the tails of the histogram are sometimes enhanced more prominently. The information content of the pixels that saturate at 0 and 255 is lost; however, the remainder of the histogram is enhanced more than with the minimum-maximum linear stretch.
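A minimal sketch of a percentage (saturation) stretch that clips, for example, the lowest and highest 2% of pixels before stretching; the percentile choice and band array are assumptions.

```python
import numpy as np

def saturation_stretch(band, low_pct=2, high_pct=98, quant=255):
    """Clip the histogram tails at the given percentiles, then linearly
    stretch the remaining range to 0-quant (tails saturate at 0 and quant)."""
    lo, hi = np.percentile(band, [low_pct, high_pct])
    clipped = np.clip(band, lo, hi).astype(float)
    return ((clipped - lo) / (hi - lo) * quant).astype(np.uint8)

band = np.random.randint(0, 256, (50, 50))
out = saturation_stretch(band)  # the clipped tails saturate at 0 and 255
```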

 

Non-Linear Contrast Enhancement: Nonlinear contrast enhancement often involves histogram equalization through the use of an algorithm. In nonlinear contrast enhancement techniques, each value in the input image can have several values in the output image, so that objects in the original image lose their correct relative brightness values. Usually, nonlinear enhancements bring out the contrast in one range while decreasing the contrast in other ranges.

 

There are three types of nonlinear enhancement techniques:

• Histogram Equalization

• Adaptive Histogram Equalization

• Homomorphic Filter

 

Histogram equalization is another non-linear contrast enhancement technique and one of the most useful of the nonlinear methods. It usually increases the global contrast of satellite images. It creates an output version of the satellite image that maximizes the contrast of the data by applying a nonlinear stretch that redistributes pixel values so that there are approximately the same number of pixels with each value within a range. When the histogram values of a satellite image are equalized, all pixel values of the image are redistributed. Histogram equalization can also separate pixels into distinct groups if there are few output values over a wide range. Contrast is not guaranteed to increase: in some cases histogram equalization can make an image worse, the contrast may be decreased, or the method may fail to force the distribution to be "flat", i.e. the number of pixels in each intensity level is not distributed equally.

 

The total number of pixels is divided by the number of bins, giving the number of pixels per bin, as shown in the following equation:

A = T / N

Where,

 

N= number of bins (If there are many bins or many pixels with the same value or values, some bins may be empty).

 

T= total number of pixels in the image.

 

A= equalized number of pixels per bin.

 

Histogram Equalization Example: In this example, 10 bins are rescaled to the range 0 to 9, because the input values ranged from 0 to 9, so that the equalized histogram can be compared to the original. The output histogram of this equalized image is shown in Fig 8(b). In the output, the histogram is not exactly flat, since the pixels can rarely be grouped together into bins with an equal number of pixels.

 

 

 

Fig (8): Histogram equalization example. Source: ERDAS (2014), ERDAS Field Guide.

 

There are 240 pixels represented by this histogram. To equalize this histogram to 10 bins, 240 pixels / 10 bins = 24 pixels per bin = A

 

The following equation is used to assign pixels to bins:

 

Source: ERDAS (2014), ERDAS Field Guide.

 

Where,

 

A = equalized number of pixels per bin.

 

Hi = number of values with the value i.

 

Hk = number of pixel per bin.

 

k = a particular bin number.

 

int = integer function.

 

Bi= bin number for pixels with value i.

 

There is also one important thing to note here: during histogram equalization the overall shape of the histogram changes, whereas in histogram stretching the overall shape of the histogram remains the same.
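As a hedged, general sketch of histogram equalization (using the standard cumulative-distribution mapping rather than the exact ERDAS binning described above; the low-contrast test band is an assumption):

```python
import numpy as np

def histogram_equalize(band: np.ndarray, levels: int = 256) -> np.ndarray:
    """Redistribute pixel values so that the cumulative distribution is
    approximately linear, i.e. the output histogram is roughly flat."""
    hist, _ = np.histogram(band, bins=levels, range=(0, levels))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalise to 0-1
    lut = (cdf * (levels - 1)).astype(np.uint8)        # lookup table
    return lut[band]

band = np.random.randint(50, 150, (100, 100))  # low-contrast input
equalized = histogram_equalize(band)           # values now span most of 0-255
```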

 

 

Adaptive Histogram Equalization: In adaptive histogram equalization the satellite image is divided into several rectangular domains, an equalizing histogram is computed for each domain and the grey levels are modified accordingly. This method enhances the contrast of images by transforming the values in the intensity image. Contrast is increased at the most populated range of brightness values of the histogram, and contrast is automatically reduced in the very light or dark parts of the image associated with the tails of a normally distributed histogram. According to this method, we partition the given image into blocks of suitable size and equalize the histogram of each sub-block. In order to eliminate artificial boundaries created by the process, the intensities are interpolated across the block regions using bicubic interpolating functions (Al-amri, Kalyankar, & Khamitkar, 2010).
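A hedged sketch of adaptive (contrast-limited) histogram equalization using scikit-image, assuming that library is available; the kernel size and clip limit are illustrative parameters, and scikit-image interpolates across tile boundaries to avoid artificial block edges.

```python
import numpy as np
from skimage import exposure  # assumes scikit-image is installed

band = np.random.randint(0, 256, (256, 256)).astype(np.uint8)

# Equalize the histogram locally in tiles, limiting contrast amplification
equalized = exposure.equalize_adapthist(band, kernel_size=64, clip_limit=0.02)
equalized_8bit = (equalized * 255).astype(np.uint8)  # result is float in [0, 1]
```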

 

Homomorphic Filter: In image enhancement, the homomorphic filter is used to remove multiplicative noise. It is used in the log-spectral domain to separate filter effects from excitation effects; enhancements in the log-spectral domain can improve sound intelligibility. For a homomorphic filter to be effective, it needs to affect the low- and high-frequency components of the Fourier transform in different ways.

 

Fig (9): Homomorphic Filter

Source: http://debian.fmi.unisofia.bg/~blizzard/download/Image%20Processing/6.Image%20Enhancement%203.pdf

 


 

Filters: There are two types of enhancement techniques, spatial domain and frequency domain techniques, which are further categorized into smoothing and sharpening filters. Here we consider filtering in the frequency domain using the FFT; the use of the terms frequency domain and frequency components is really no different from the terms time domain and time components, which we would use to express the domain and values of f(x) if x were a time variable.

 

Lowpass filter (smoothing): A lowpass filter passes low-frequency signals and attenuates signals with frequencies higher than the cut-off frequency. The actual amount of attenuation for each frequency varies depending on the specific filter design. Smoothing is fundamentally a lowpass operation in the frequency domain. Several standard forms of lowpass filters exist: the Ideal, Butterworth and Gaussian lowpass filters.

 

Highpass filters (sharpening): A highpass filter passes high frequencies well but attenuates frequencies lower than the cut-off frequency. Sharpening is fundamentally a highpass operation in the frequency domain. Several standard forms of highpass filters exist, such as the Ideal, Butterworth and Gaussian highpass filters. A highpass filter (Hhp) is often expressed through its relationship to the lowpass filter (Hlp):

 

Hhp(u, v) = 1 − Hlp(u, v)
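As a hedged sketch of frequency-domain filtering, the code below builds a Gaussian lowpass transfer function Hlp, derives the highpass as Hhp = 1 − Hlp, and applies both via the FFT; the cut-off value and test image are assumptions.

```python
import numpy as np

def gaussian_lowpass(shape, cutoff):
    """Gaussian lowpass transfer function Hlp(u, v), centred on zero frequency."""
    rows, cols = shape
    u = np.fft.fftfreq(rows).reshape(-1, 1)
    v = np.fft.fftfreq(cols).reshape(1, -1)
    return np.exp(-(u**2 + v**2) / (2 * cutoff**2))

def frequency_filter(image, H):
    """Multiply the image spectrum by the transfer function H and invert."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))

image = np.random.rand(128, 128)
Hlp = gaussian_lowpass(image.shape, cutoff=0.05)
Hhp = 1 - Hlp                                # Hhp(u, v) = 1 - Hlp(u, v)
smoothed = frequency_filter(image, Hlp)      # lowpass: smoothing
sharp_detail = frequency_filter(image, Hhp)  # highpass: sharpening detail
```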

 

Indices: Band ratioing of satellite images is the arithmetic operation most widely applied to images in remote sensing applications such as geology, forestry and agriculture. In this enhancement technique the DN value of one band is divided by that of another band in the sensor array. Ratio images are created using the following general formula:

 

BVi,j,r = BVi,j,k / BVi,j,l

 

Where,

 

BVi,j,r = output ratio value for the pixel at row i, column j.

 

BVi,j,k and BVi,j,l are the brightness values at the same location in bands k and l, respectively.

 

If BVi,j,k and BVi,j,l are similar, the resulting ratio is a number close to 1.

If the numerator is low and the denominator high, the quotient approaches zero.

If this is reversed (high numerator, low denominator), the ratio is well above 1.

 

For a given surface, the ratio of reflectance between the two bands remains very similar regardless of overall illumination, which is why ratioing suppresses illumination and topographic effects. Three band-ratio images can be combined as colour composites which highlight certain features in distinctive colours. Commonly used ratios/indices are as follows:

 

Vegetation Index = DNNIR / DNR

 

Where,

 

DNNIR  = Brightness value of pixel in NIR band

 

DNR  = Brightness value of pixel in R band

 

 

Normalized Difference Vegetation Index (NDVI): In the NDVI, the difference between the near-infrared and red (or visible) reflectance is divided by their sum. The NDVI has a range limited to values from -1 to 1. Vegetated areas yield positive NDVI values due to high near-infrared and low red (visible) reflectance. NDVI is calculated as follows:

 

NDVI = (NIR – RED) / (NIR + RED)
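A minimal sketch computing the simple band ratio and the NDVI from two hypothetical reflectance bands (the random arrays and the small epsilon guarding against division by zero are assumptions):

```python
import numpy as np

nir = np.random.rand(100, 100)  # hypothetical NIR reflectance band
red = np.random.rand(100, 100)  # hypothetical red reflectance band
eps = 1e-10                     # avoids division by zero

ratio = nir / (red + eps)               # simple band ratio (vegetation index)
ndvi = (nir - red) / (nir + red + eps)  # NDVI, bounded between -1 and 1
print(ndvi.min() >= -1, ndvi.max() <= 1)
```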

 

Normalized Difference Snow Index (NDSI): The normalized difference of two bands (one in the visible and one in the near-infrared or short-wave infrared part of the spectrum) is used to map snow:

NDSI = (Green − SWIR) / (Green + SWIR)

Values of NDSI greater than 0.4 indicate the presence of snow. The NDSI was originally developed for use with Landsat TM/ETM+ bands 2 and 5 or MODIS bands 4 and 6. However, it will work with any multispectral sensor with a green band between 0.5-0.6 µm and a SWIR band between 1.55-1.75 µm.

 

Reference: Riggs, G., D. Hall, and V. Salomonson. "A Snow Index for the Landsat Thematic Mapper and Moderate Resolution Imaging Spectroradiometer." Geoscience and Remote Sensing Symposium, IGARSS '94, Volume 4: Surface and Atmospheric Remote Sensing: Technologies, Data Analysis, and Interpretation (1994), pp. 1942-1944.

 

Normalized Difference Built-up Index (NDBI): This index highlights urban areas where there is typically a higher reflectance in the shortwave-infrared (SWIR) region, compared to the near-infrared (NIR) region. Applications include watershed runoff predictions and land-use planning.

 

The NDBI was originally developed for use with Landsat TM bands 5 and 4. However, it will work with any multispectral sensor with a SWIR band between 1.55-1.75 µm and a NIR band between 0.76-0.9 µm.

 

Reference: Zha, Y., J. Gao, and S. Ni. “Use of Normalized Difference Built-Up Index in Automatically Mapping Urban Areas from TM Imagery.” International Journal of Remote Sensing 24, no. 3 (2003): 583-594.

 

Principal Component Analysis: Principal component analysis (PCA) is one of the statistical techniques frequently used in signal processing for data dimension reduction or data decorrelation. There are two distinct applications of PCA in image processing: a) image colour reduction and b) determination of object orientation with the help of eigenvectors; the quality of image segmentation also affects the results of the object-orientation evaluation based on PCA. Principal component analysis is a technique in which the original remotely sensed dataset is transformed into a new dataset which may better capture the essential information. This transform is known as the principal component transformation (PCT) or principal component analysis (PCA). PCA (the Karhunen-Loeve or Hotelling transform) belongs to the linear transforms based on statistical techniques. The method provides a powerful tool for data analysis and pattern recognition and is often used in signal and image processing as a technique for data compression, data dimension reduction or decorrelation. There are various algorithms based on multivariate analysis or neural networks that can perform PCA on a given data set. The objective of the transform is to reduce the number of bands in the data and to compress as much of the information in the original bands as possible into fewer bands. The new bands that result from this statistical procedure are called principal components.

 

The PCA Theory

 

Principal component analysis in signal processing can be described as a transform of a given set of n input vectors (variables) of the same length K, formed into the n-dimensional vector x, into an output vector y of principal components:

y = A (x − mx)     (1)

where mx is the mean vector of x. Matrix A in Eq. (1) is determined by the covariance matrix Cx. The rows of the A matrix are formed from the eigenvectors of Cx, ordered according to the corresponding eigenvalues in descending order. The evaluation of the Cx matrix is possible according to the relation:

Cx = (1 / K) Σ k=1..K (xk − mx)(xk − mx)^T

As the vector of input variables is n-dimensional, it is obvious that the size of Cx is n×n. The elements Cx(i, i) lying on its main diagonal are the variances of the input variables.
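A minimal NumPy sketch of this transform applied to a multi-band image: each band is flattened into a variable, the covariance matrix Cx is formed, and the data are projected onto its eigenvectors (the synthetic three-band image is an assumption).

```python
import numpy as np

# Hypothetical three-band image, shape (rows, cols, bands)
image = np.random.rand(100, 100, 3)
rows, cols, n_bands = image.shape

X = image.reshape(-1, n_bands).T    # n variables, each of length K = rows * cols
mx = X.mean(axis=1, keepdims=True)  # mean vector
Cx = np.cov(X)                      # n x n covariance matrix

# Rows of A are the eigenvectors of Cx, ordered by descending eigenvalue
eigvals, eigvecs = np.linalg.eigh(Cx)
order = np.argsort(eigvals)[::-1]
A = eigvecs[:, order].T

Y = A @ (X - mx)                             # y = A (x - mx)
pc_image = Y.T.reshape(rows, cols, n_bands)  # principal component bands
```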

 

PCA Use for Image Compression

 

Data volume reduction is a common task in image processing. There is a huge number of algorithms, based on various principles, leading to image compression. Algorithms based on image colour reduction are mostly lossy, but their results are still acceptable for some applications. The transformation from a colour image to a grey-level (intensity) image I belongs to the most common of these algorithms. Its implementation is usually based on a weighted sum of the three colour components R, G and B, according to the relation:

I = wR R + wG G + wB B

 

The R, G and B matrices contain the image colour components; the weights wi were determined with regard to the properties of human perception. The PCA method provides an alternative to this approach. The idea is based on Eq. (6), where the matrix A is replaced by a matrix Al in which only the l largest (instead of n) eigenvalues are used for its formation. The vector x of reconstructed variables is then given by the corresponding relation.

 

   PCA Use for Determination of Object Rotation

 

Properties of PCA can be used for the determination of a selected object's orientation or rotation; edge detection must be used first. A binary image containing the object boundary or its area in black (or white) pixels on the inverse background results from this process. After that, two vectors a and b containing the cartesian coordinates of the object's pixels can be formed. The vector x in Eq. (1) is in this case a 2-dimensional vector consisting of a and b respectively. The mean vector mx and the covariance matrix Cx are computed, as well as the eigenvectors of Cx. Its two eigenvectors e1 and e2 enable the evaluation of the object's rotation with respect to the cartesian axes, or of the object's rotation around the centre given by mx. Fig (10) illustrates the use of PCA for the determination of the orientation of a selected object. The object boundary was first detected by means of a LoG filter in the original grey-level image. The original was then rotated by a given angle using bilinear interpolation, and the process of image segmentation and PCA was applied again. The resulting eigenvectors e1 and e2 are drawn in each binary image, and their orientations were compared with the rotation angle.

 

Fig (10): Object Rotation

 

REVIEW OF PCA IN SATELLITE IMAGE ANALYSIS:

 

To perform principal component analysis, a transformation is applied to a correlated set of multispectral satellite images. The technique is widely used in the digital processing of multispectral imagery. PCA and image fusion techniques are often used to enhance an image, particularly in the land-use and land-cover classification of satellite images, where they help to improve classification accuracy. In the field of remote sensing, PCA may also be useful for reducing the dimensionality of a hyperspectral dataset without much loss of information. There are certain vectors, called characteristic vectors or eigenvectors, that give the directions of the new axes; this is a general method for determining the axes of the new coordinate system.

 

Fig (11): Principal Component Analysis

Source: https://www.google.co.in/search?q=PCA+ANALYSIS+in+remote+sensing&biw=1821&bih=830&source=lnms&tbm=isch&sa=X&ved=0ahUKEwii0dPetrnMAhXMG44KHXbPD54Q_AUICCgC&dpr=0.75

 

 
