6 Filtering
Kumari Anamika
Objectives
- Students will understand the need for filtering.
- Students will acquire the skill to categorize and apply various types of filters appropriately.
- Students will further study the types of filters, their uses and their specific applications.
Outlines
- Introduction
- Image Enhancement Technique: Spatial Filtering
-Convolution
- Types of Spatial Filtering
- Basics of Spatial Filtering
- Smoothing Spatial Filters
-Smoothing Linear Filters
-Order-Statistics Filters
- Sharpening Spatial Filters
-Foundation
-The Laplacian
- Unsharp Masking and High Boost Filter
- Smoothing Frequency Domain Filters
-Ideal Low Pass Filter
-Butterworth Low Pass Filter
-Gaussian Low Pass Filter
- Sharpening Frequency Domain Filters
-Ideal High Pass Filter
-Butterworth High Pass Filter
-Gaussian High Pass Filter
-Laplacian Filter
-High Boost Filter and High Frequency Emphasis Filters
- Restoration in the Presence of Noise by Spatial Filtering
-Mean Filter
-Order-Statistics Filter
-Adaptive Filter
- Periodic Noise Reduction by Frequency Domain Filtering
-Band Reject Filter
-Band Pass Filter
-Notch Filter
-Optimum Notch Filter
Introduction
Filtering is an operation designed to improve the readability of images and to extract certain information from them. A filter modifies the numerical value of each pixel as a function of the values of the neighbouring pixels. For example, if the value of each pixel is replaced by the average of its value and those of its eight neighbours, the image is smoothed: the finer details disappear and the image appears fuzzier.
Image enhancement is the process of improving the visual interpretability of an image by increasing the distinction between the features in a scene. Enhancement optimizes the abilities of the human mind and the computer for visual interpretation. Enhancement techniques can be categorized into two types of operations:
Point Operations: Modify the value of each pixel in an image data set independently, e.g. contrast stretching, histogram equalization.
Local Operations: Modify the value of each pixel based on neighbouring brightness values, e.g. spatial filtering. They can be performed on either a single band or multiple bands.
Image enhancement techniques can be categorized as:
A) Contrast Manipulation
(i) Gray Level Thresholding
(ii) Level Slicing/ Density Slicing
(iii) Contrast Stretching ( Linear/Non-Linear)
B) Spatial Feature Manipulation
(i) Spatial Filtering
(ii) Edge enhancements
(iii)Fourier Analysis
C) Multi- Image Manipulation
(i) Band Ratioing
(ii) Image Differencing
(iii)Principal Components
(iv)Vegetation Component
Image Enhancement Technique: Spatial frequency is defined as the number of changes in brightness value per unit distance for any particular part of an image. It refers to the roughness of the tonal variation occurring in the image: an area of high spatial frequency is tonally "rough", and the DN values in such areas change abruptly over a relatively small number of pixels (e.g. across roads, field borders). Smooth image areas are those of low spatial frequency, where DN values vary gradually over a relatively large number of pixels (e.g. large agricultural fields, water bodies).
Spatial Filtering operation: The concept of filtering has its roots in the use of the Fourier transform for signal processing in the frequency domain. Filtering operations are performed directly on the pixels of an image. Spatial frequency in remotely sensed imagery may be enhanced or subdued using two different approaches: spatial convolution filtering, based primarily on the use of convolution masks, and Fourier analysis, which mathematically separates an image into its spatial frequency components. Spatial filtering is a local operation in that pixel values in an original image are modified on the basis of the gray levels of neighbouring pixels. Spatial domain filtering involves a discrete implementation of convolution filtering in the spatial domain. It is an important tool for processing images and is widely used as a preliminary step in a variety of image processing applications.

Spatial filtering is implemented using a moving mask (also known as a kernel or a neighbourhood), which is typically a small rectangular grid of pixel locations. The filtering operation is performed by placing the mask on a pixel and processing only the image pixels which lie within the mask. The output of the filtering operation generally gives an intensity value which is used as the new intensity value for the pixel at the centre of the mask. The mask is moved over all the pixels in the image and the process is repeated to compute the new intensity value for each and every pixel. Satellite images are digital images characterized by spatial frequency, defined as the number of changes in the brightness values of pixels across an image: if there are only small variations in DN values the image is called a low-frequency image, and if the DN values change frequently it is called a high-frequency image. Spectral enhancement, by contrast, relies on changing the gray scale representation of pixels to give an image with more contrast for interpretation.
It applies the same spectral transformation to all pixels with a given gray scale in an image. However, it does not take full advantage of human recognition capabilities, even though it may allow better interpretation of an image by a user. These enhancement techniques use the concept of spatial frequency within an image. Spatial frequency is the manner in which gray-scale values change relative to their neighbors within an image. Many natural and manmade features in images have high spatial frequency, such as the boundaries of water bodies, roads, cropped areas, forest boundaries and built-up areas. Spatial enhancement involves the enhancement of either low or high frequency information within an image. Algorithms that enhance low frequency image information employ a "blurring" filter (commonly called a low pass filter) that emphasizes the low frequency parts of an image while de-emphasizing the high frequency components. The enhancement of high frequency information within an image is often called edge enhancement; it emphasizes edges in the image while retaining overall image quality. The main purposes of spatial enhancement are to improve the interpretability of image data, to support automated feature extraction, and to remove or reduce errors due to sensor degradation.
Fig.1: Mechanism of Spatial Filtering
Source:https://www.slideshare.net/gichelleamon/5-spatial-filtering-p1
The two major methods commonly used in spatial enhancement are:
- Convolution
- Fourier Transform
Convolution involves the passing of a moving window over an image and creating a new image where each pixel in the new image is a function of the original pixel values within the moving window and the coefficients of the moving window as specified by the user. The window, a convolution operator, may be considered as a matrix (or mask) of coefficients that are to be multiplied by image pixel values to derive a new pixel value for a resultant enhanced image. This matrix may be of any size in pixels and does not necessarily have to be square.
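To make this concrete, here is a minimal pure-Python sketch of 2-D convolution with a 3Χ3 averaging mask. The function name, the toy image and the choice to leave border pixels unchanged are illustrative assumptions, not taken from the text:

```python
def convolve2d(image, kernel):
    # Convolve a 2-D image (list of lists) with an odd-sized square
    # kernel; border pixels, where the kernel would fall outside the
    # image, are simply left unchanged in this sketch.
    n = len(kernel)
    r = n // 2
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(r, rows - r):
        for x in range(r, cols - r):
            acc = 0.0
            for j in range(n):
                for i in range(n):
                    acc += kernel[j][i] * image[y + j - r][x + i - r]
            out[y][x] = acc
    return out

# 3x3 smoothing kernel: all coefficients equal to 1/9 (an averaging mask).
smooth_kernel = [[1 / 9] * 3 for _ in range(3)]
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
smoothed = convolve2d(img, smooth_kernel)
# each interior pixel becomes the mean of its 3x3 neighbourhood
```

Replacing the averaging kernel with a different set of coefficients yields the smoothing and sharpening filters discussed in the rest of this module.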
Examples
As an example of the convolution methodology, take a 3 by 3 matrix of coefficients and examine its effect on an example image subset. A set of coefficients used for image smoothing and noise removal is given below:
If we have a sample image, given directly above, in which the gray scale varies smoothly except for the bottom-right region, which exhibits a sharp brightness change, we can see the effects of the convolution filter on a pixel-by-pixel basis.
Because we do not wish to consider edge effects, we will start the overlay of the moving window on the x=2, y=2 pixel of the input image and end at the x=6, y=5 position of the original image.
Filtering is an enhancement operation that alters pixel values on the basis of the grey values of the surrounding neighbors; for this reason it is considered a spatial operation. Spatial operations are called filters because, like their physical counterparts, they pass only the image frequencies considered to be important. High pass filters allow only the high frequency information of an image to be sieved into a new image channel; high frequency information contains all of the local detail in an image (i.e. its edges). Low pass filters, in contrast, pass only low frequency information, producing images that appear smooth or blurred in comparison to the original data.
The general concept for any filter is to define a filter window (kernel) whose dimensions are an odd number of pixels and lines. Each pixel of this window contains a coefficient or weighting factor representing some mathematical relationship. A filtered image is generated by multiplying each coefficient in the window by the grey level in the original image corresponding to the window's current location and assigning the result to the central pixel location of the window in the filtered channel. The window is moved throughout the image one pixel at a time. This window multiplication process is known as convolution.
The process of evaluating the weighted neighbouring pixel values is called two-dimensional convolution. Emphasizing high spatial frequencies sharpens the edges in the image, whereas emphasizing low spatial frequencies may be used to reduce noise.
Spatial filtering can be of two types:
Fig.2.Flow Chart for Filter Classification
Source: Self
Linear Filtering: A filtering method is linear when the output is a weighted sum of the input pixels. The operation performed on the pixels within the filter mask is linear, e.g. a weighted averaging of pixel intensities.
Fig.3: 3Χ3 filter mask
Non-linear Filtering: Methods that do not satisfy the above property are called non-linear. The operation performed on the pixels within the filter mask is non-linear, e.g. max filtering, min filtering, median filtering, etc.
Basics of Spatial Filtering
Neighborhood operations calculate new pixel values from pixel neighborhoods by using a mask convolution, where the mask is a 2-D array of weights. The mask, also called the template or the kernel, is generally taken as a square array of odd size, so that it can be positioned around a given pixel with symmetry of weights (coefficients) on both sides. A neighborhood operation computes a new pixel value by centering the mask on a given pixel, multiplying each pixel's intensity value with the corresponding mask coefficient, and taking the sum of all products. This value is divided by the sum of all weights of the mask to get a weighted average of the pixel intensities falling within the mask window. The mask is moved to cover all the pixels in the input image. For pixels near the image boundary, where the mask may be partially outside the image, the sum of products is computed by considering only those mask coefficients which happen to overlap with the image pixels.
We illustrate the process of linear filtering with a 3Χ3 mask. In Fig.4 we show w1, w2, …, w9 as the weights (or coefficients) to be used for the different pixel locations within the mask. The intensity values at the corresponding pixel locations are indicated as z1, z2, …, z9. The convolution response at the mask centre is then the sum of products w1z1 + w2z2 + … + w9z9.
Figure.4: 3Χ3 filter mask showing the weights at each pixel location
Smoothing Spatial Filtering
Smoothing filters are used for blurring and for noise reduction. Blurring is used in preprocessing steps, such as removal of small details from an image prior to (large) object extraction, and bridging of small gaps in lines or curves. Noise reduction can be accomplished by blurring with a linear filter and also by nonlinear filtering.
Smoothing Linear Filters: The output (response) of a smoothing linear spatial filter is simply the average of the pixels contained in the neighborhood of the filter mask. These filters are sometimes called averaging filters; they are also referred to as lowpass filters. The idea behind smoothing filters is straightforward: by replacing the value of every pixel in an image with the average of the gray levels in the neighborhood defined by the filter mask, this process results in an image with reduced "sharp" transitions in gray levels. Because random noise typically consists of sharp transitions in gray levels, the most obvious application of smoothing is noise reduction. However, edges (which almost always are desirable features of an image) are also characterized by sharp transitions in gray levels, so averaging filters have the undesirable side effect that they blur edges. Another application of this type of process is the smoothing of false contours that result from using an insufficient number of gray levels.
Fig.5: Results of smoothing filtering
Source: http://slideplayer.com/slide/8579594/
Order-Statistics Filters: Order-statistics filters are nonlinear spatial filters whose response is based on ordering (ranking) the pixels contained in the image area encompassed by the filter, and then replacing the value of the center pixel with the value determined by the ranking result. The best-known example in this category is the median filter.
Sharpening Spatial Filters
The principal objective of sharpening is to highlight fine detail in an image or to enhance detail that has been blurred, either in error or as a natural effect of a particular method of image acquisition. Uses of image sharpening vary and include applications ranging from electronic printing and medical imaging to industrial inspection and autonomous guidance in military systems. Image blurring could be accomplished in the spatial domain by pixel averaging in a neighborhood. Since averaging is analogous to integration, it is logical to conclude that sharpening could be accomplished by spatial differentiation. This, in fact, is the case, and the discussion in this section deals with various ways of defining and implementing operators for sharpening by digital differentiation. Fundamentally, the strength of the response of a derivative operator is proportional to the degree of discontinuity of the image at the point at which the operator is applied. Thus, image differentiation enhances edges and other discontinuities (such as noise) and deemphasizes areas with slowly varying gray-level values.
Foundation: Here we are interested in the behavior of derivatives in areas of constant gray level (flat segments), at the onset and end of discontinuities (step and ramp discontinuities), and along gray-level ramps. These types of discontinuities can be used to model noise points, lines, and edges in an image. The behavior of derivatives during transitions into and out of these image features is also of interest. The derivatives of a digital function are defined in terms of differences. There are various ways to define these differences; however, any definition used for a first derivative (1) must be zero in flat segments (areas of constant gray-level values); (2) must be nonzero at the onset of a gray-level step or ramp; and (3) must be nonzero along ramps. Because gray-level values are finite, the maximum possible gray-level change is also finite, and the shortest distance over which that change can occur is between adjacent pixels. A basic definition of the first-order derivative of a one-dimensional function f(x) is the difference
∂f/∂x = f(x + 1) − f(x)
Laplacian
The Laplacian uses second derivatives for enhancement: it is a second-derivative operator (as opposed to the gradient, which is a first-derivative operator).
It is invariant to rotation, meaning that it is insensitive to the direction in which the discontinuities (point, line, and edges) run.
It highlights points, lines and edges in the image and suppresses uniform and smoothly varying regions.
By itself, the Laplacian image may be difficult to interpret. Therefore, a Laplacian edge enhancement may be added back to the original image using the following mask:
The approach basically consists of defining a discrete formulation of the second-order derivative and then constructing a filter mask based on that formulation. We are interested in isotropic filters, whose response is independent of the direction of the discontinuities in the image to which the filter is applied. In other words, isotropic filters are rotation invariant, in the sense that rotating the image and then applying the filter gives the same result as applying the filter to the image first and then rotating the result.
Fig.6: Process of Laplacian Filtering
Source: https://www.google.com/patents/EP0645737B1
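The mechanics can be sketched in pure Python using the common 4-neighbour Laplacian mask (centre −4, the four direct neighbours 1); the function name and the toy images below are our own illustrative choices, not from the text:

```python
def laplacian_sharpen(image):
    # Sharpen interior pixels by subtracting the 4-neighbour Laplacian:
    # g(x,y) = f(x,y) - [f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4 f(x,y)]
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            lap = (image[y + 1][x] + image[y - 1][x] +
                   image[y][x + 1] + image[y][x - 1] - 4 * image[y][x])
            out[y][x] = image[y][x] - lap
    return out

flat = [[7] * 3 for _ in range(3)]     # a uniform region is untouched
spike = [[0, 0, 0],
         [0, 10, 0],
         [0, 0, 0]]                    # an isolated bright point is boosted
```

Note how the result matches the rotation-invariance property described above: the mask treats all four direct neighbours identically.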
Unsharp masking and high-boost filtering: A process used for many years in the publishing industry to sharpen images consists of subtracting a blurred version of an image from the image itself. This process, called unsharp masking, is expressed as
fs(x, y) = f(x, y) − f̄(x, y)
where
fs(x, y) denotes the sharpened image obtained by unsharp masking, and
f̄(x, y) is a blurred version of f(x, y).
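A hedged pure-Python sketch of this idea (the 3Χ3 box blur, the function names and the toy image are illustrative assumptions, not from the text):

```python
def box_blur3(image):
    # 3x3 mean blur on interior pixels; border pixels copied unchanged.
    rows, cols = len(image), len(image[0])
    out = [[float(v) for v in row] for row in image]
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            out[y][x] = sum(image[y + j][x + i]
                            for j in (-1, 0, 1) for i in (-1, 0, 1)) / 9.0
    return out

def unsharp_mask(image, k=1.0):
    # g = f + k * (f - blurred): the blurred version is subtracted from
    # the original to form the mask, which is then added back.
    blurred = box_blur3(image)
    return [[image[y][x] + k * (image[y][x] - blurred[y][x])
             for x in range(len(image[0]))] for y in range(len(image))]

img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
sharp = unsharp_mask(img, k=1.0)
# centre: blurred value is 1.0, mask is 9 - 1 = 8, result 9 + 8 = 17.0
```

With k = 1 this is classic unsharp masking; choosing k > 1 weights the mask more heavily and gives high-boost filtering.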
Smoothing Frequency Domain Filters
Smoothing is achieved in the frequency domain by dropping out the high frequency components.
The basic model for filtering is:
G (u, v) = H(u, v)F(u, v)
Where,
F (u, v) is the Fourier transform of the image being filtered and
H (u, v) is the filter transform function.
Frequency filters process an image in the frequency domain: the image is Fourier transformed, multiplied with the filter function, and then re-transformed into the spatial domain. Attenuating high frequencies results in a smoother image in the spatial domain; attenuating low frequencies enhances the edges. All frequency filters can also be implemented in the spatial domain and, if there exists a simple kernel for the desired filter effect, it is computationally less expensive to perform the filtering in the spatial domain. Frequency filtering is more appropriate if no straightforward kernel can be found in the spatial domain, and may also be more efficient. Frequency filtering is based on the Fourier Transform. The operator usually takes an image and a filter function in the Fourier domain; the image is then multiplied with the filter function in a pixel-by-pixel fashion.
G(k, l) = F(k, l)H(k,l)
Where,
F (k, l) is the input image in the Fourier domain,
H(k, l) the filter function
G(k, l) is the filtered image.
To obtain the resulting image in the spatial domain, G(k,l) has to be re-transformed using the inverse Fourier Transform.
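The transform–multiply–invert pipeline can be illustrated in one dimension with a naive discrete Fourier transform; the signal and cutoff below are made-up values for demonstration only:

```python
import cmath

def dft(x):
    # Naive discrete Fourier transform: F(k) = sum_n f(n) e^(-2*pi*i*k*n/N).
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    # Inverse DFT; the test signal is real, so keep only the real part.
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

# Constant level 5 plus an alternating +/-1 ripple (the highest frequency).
x = [5 + (1 if n % 2 == 0 else -1) for n in range(8)]
X = dft(x)
# Ideal low pass: keep frequencies below the cutoff, zero out the rest.
cutoff = 2
H = [1 if min(k, 8 - k) < cutoff else 0 for k in range(8)]
g = idft([H[k] * X[k] for k in range(8)])
# the ripple lives at frequency N/2 = 4 and is removed: g is numerically flat
```

Real implementations would use an FFT rather than this O(N²) loop, but the G = H·F structure is exactly the model above.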
Ideal Low Pass Filter: A low-pass filter attenuates high frequencies and retains low frequencies unchanged. The result in the spatial domain is equivalent to that of a smoothing filter, since the blocked high frequencies correspond to sharp intensity changes, i.e. to the fine-scale details and noise in the spatial domain image.
The simplest lowpass filter is the ideal lowpass filter. It suppresses all frequencies higher than the cut-off frequency D0 and leaves lower frequencies unchanged:
H(k, l) = 1 if √(k² + l²) ≤ D0
H(k, l) = 0 if √(k² + l²) > D0
In most implementations, D0 is given as a fraction of the highest frequency represented in the Fourier domain image
Applications of LPF
Useful for reducing noise and eliminating small details. The elements of the mask must be positive.
The sum of the mask elements is 1 (after normalization). It is also called a blurring or smoothing filter.
In the spatial domain, the equivalent mean mask replaces each pixel with the average of the pixel and its eight neighbours.
Fig.7:Ideal Low Pass Filter
Source:https://dsp.stackexchange.com/questions/29331/graph-of-lowpass-filter
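As an illustrative sketch (the grid size and cutoff are arbitrary values of our own), the ideal low-pass transfer function can be tabulated on a centred frequency grid:

```python
def ideal_lpf(M, N, D0):
    # Ideal low-pass transfer function on a centred M x N frequency grid:
    # H = 1 where the distance from the centre is <= D0, else 0.
    H = [[0.0] * N for _ in range(M)]
    for u in range(M):
        for v in range(N):
            D = ((u - M // 2) ** 2 + (v - N // 2) ** 2) ** 0.5
            H[u][v] = 1.0 if D <= D0 else 0.0
    return H

H = ideal_lpf(8, 8, 2.0)
# the centre of the spectrum passes; distant corners are blocked
```

The hard 1/0 edge of this transfer function is what causes the well-known ringing artifacts of the ideal filter in the spatial domain.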
Butterworth Low Pass Filter: This is a type of signal processing filter designed to have a frequency response that is as flat as possible in the passband; it is also called a maximally flat magnitude filter. It was first described in 1930 by the British engineer and physicist Stephen Butterworth in his paper entitled "On the Theory of Filter Amplifiers". Butterworth had a reputation for solving "impossible" mathematical problems. The transfer function of the Butterworth low pass filter of order n with cutoff frequency at distance D0 from the origin is
H(u, v) = 1 / (1 + [D(u, v)/D0]^(2n))
where D(u, v) is the distance from the origin of the centred Fourier transform.
Fig.8: Butterworth Low Pass Filter
Source: http://slideplayer.com/slide/10780340/
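This transfer function can be sketched as a small helper (the parameter values used below are arbitrary illustrations):

```python
def butterworth_lpf(D, D0, n):
    # Butterworth low-pass transfer function of order n:
    # H(u, v) = 1 / (1 + (D(u, v) / D0) ** (2 * n))
    return 1.0 / (1.0 + (D / D0) ** (2 * n))

# At D = 0 the filter passes fully; at the cutoff D = D0 the response is
# exactly 0.5 for any order n, and it rolls off smoothly beyond that --
# no sharp edge, hence far less ringing than the ideal filter.
```

Increasing the order n makes the roll-off steeper, approaching the ideal low pass in the limit.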
Gaussian Low Pass Filter: Gaussian filters are used in image processing because a Gaussian is its own Fourier transform, so its support in the spatial (time) domain matches its support in the frequency domain. The Gaussian is used as a low pass filter if only one SIGMSQ (square of the Gaussian filter parameter) is specified; it can also be used as a band pass filter to detect sudden intensity changes if two SIGMSQ values are specified. FGA computes the weighted sum of grey levels within a kernel surrounding the central pixel. The weights are the results of the Gaussian function with the given deviation. The kernel is a square of size (2 x SIGMSQH).
(i, j) is a pixel within filter window,
(u, v) is the centre of kernel and sigma square is a parameter.
The filter weights w(i, j) are the normalised values of G(i, j) over the entire kernel, hence the sum of all weights is 1. The grey level of a filtered pixel is the sum of w(i, j) * v(i, j) over the window, where v(i, j) is the original value at location (i, j). Edge pixel values are replicated to give sufficient data at the borders.
- Adjusts the kernel coefficients based upon their proximity to the pixel being altered:
- Weighting decreases away from the centre
- Smooths more gently than FAV
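The construction of normalised Gaussian weights described above can be sketched as follows (this is a generic illustration, not the FGA implementation itself; the function name and parameter values are our own):

```python
import math

def gaussian_kernel(sigma, radius):
    # Build a (2*radius + 1)^2 grid of Gaussian weights
    # G(i, j) = exp(-(i^2 + j^2) / (2*sigma^2)), then normalise so
    # that the weights sum to 1 (as described in the text).
    raw = [[math.exp(-(i * i + j * j) / (2.0 * sigma * sigma))
            for i in range(-radius, radius + 1)]
           for j in range(-radius, radius + 1)]
    total = sum(sum(row) for row in raw)
    return [[w / total for w in row] for row in raw]

k = gaussian_kernel(1.0, 1)
# the weights sum to 1 and the centre weight is the largest,
# decreasing with distance from the centre
```

Convolving an image with this kernel gives the gentle smoothing behaviour noted above: nearer pixels dominate the weighted sum.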
Sharpening Frequency Domain Filters
Ideal High Pass Filters:
- Enhance differences between values of neighboring pixels, these changes in values are represented by edges and lines
- Edge – a border between two types of surface (forest – field); it has no width
- Lines – rivers, streams, roads. High-pass filters enhance objects which are smaller than half of the filter window; wider objects are suppressed.
- These filters are used for sharpening of images, edge and line detection, to increase the difference between pixels
A high-pass filter passes signals with a frequency higher than a certain cutoff frequency and attenuates signals with frequencies lower than the cutoff frequency. The amount of attenuation for each frequency depends on the filter design. A high-pass filter is usually modeled as a linear time-invariant system. It is sometimes called a low-cut filter or bass-cut filter. The ideal high pass filter (IHPF) is defined as
H(u, v) = 0 if D(u, v) ≤ D0
H(u, v) = 1 if D(u, v) > D0
Where,
Do = cutoff distance measured from origin
Fig.9: Ideal High Pass Filter
Source:https://www.slideshare.net/VinayGupta6/digital-image-processing-img-smoothnin
Butterworth High Pass Filter: The transfer function of the Butterworth high pass filter of order n and cutoff frequency at distance D0 from the origin is given by
H(u, v) = 1 / (1 + [D0/D(u, v)]^(2n))
Fig.10: Butterworth High Pass Filter
Source:https://www.slideshare.net/VinayGupta6/digital-image-processing-img-smoothning
Gaussian High Pass Filter:
The transfer function of the Gaussian High Pass Filter (GHPF) with cutoff distance D0 from the origin is given by
H(u, v) = 1 − e^(−D²(u, v)/(2D0²))
Where,
Do = Cutoff distance from origin.
Fig.11: Gaussian High Pass Filter
Source: https://www.slideshare.net/VinayGupta6/digital-image-processing-img-smoothnin
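A small sketch of the GHPF transfer function and its complement relationship to the Gaussian low pass (the function names are our own):

```python
import math

def gaussian_lpf(D, D0):
    # Gaussian low-pass: H = exp(-D^2 / (2 * D0^2))
    return math.exp(-(D * D) / (2.0 * D0 * D0))

def gaussian_hpf(D, D0):
    # Gaussian high-pass is its complement: H = 1 - exp(-D^2 / (2 * D0^2))
    return 1.0 - gaussian_lpf(D, D0)

# at the origin (D = 0) nothing passes the high pass;
# far from the origin almost everything does
```

Because the Gaussian has no sharp transition, this filter produces no ringing, at the cost of a slower roll-off than the Butterworth filter of the same cutoff.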
Laplacian
Fig.12: Laplacian in Frequency Domain
Source:http://graphics.cs.cmu.edu/courses/15-463/2011_fall/Lectures/FreqDomain.pdf
High Boost Filter and High Frequency Emphasis Filters
In image processing it is often desirable to emphasize the high frequency components representing image details without eliminating the low frequency components. The high-boost filter can be used to enhance the high frequency components while keeping the low frequency components. It is composed of an all pass filter and an edge detection filter (the Laplacian filter); thus it emphasizes edges and results in a sharpened image. The high-boost filter is a simple sharpening operator in signal and image processing, used for amplifying the high frequency components of signals and images. The amplification is achieved via a procedure which subtracts a smoothed version of the data from the original.
Fig.13: High Boost Filtering.
Source: http://slideplayer.com/slide/8229155
Restoration in the Presence of Noise by Spatial Filtering
Restoration attempts to reconstruct or recover an image that has been degraded by using a priori knowledge of the degradation phenomenon.
If H is a linear, position-invariant process, then the degraded image is given in the spatial domain by
g(x, y) = h(x, y) * f(x, y) + η(x, y)
Where,
h(x, y) is the spatial representation of the degradation function. The model can be written in an equivalent frequency domain representation:
G (u, v) = H (u, v) F(u, v) + N(u, v)
Fig.14: Model of Image Restoration
Source: http://www.tjucs.win/faculty/hyh/image%20analysis/Chapter%205.pdf
Mean Filter
It computes the mean of the grey levels within a kernel and thereby suppresses noise. The filter window need not be square, and the input and output channel can be the same. The mean filter is a simple sliding-window spatial filter that replaces the center value in the window with the average (mean) of all the pixel values in the window. The window, or kernel, is usually square but can be any shape. An example of mean filtering of a single 3Χ3 window of values is shown below.
Fig.15:Mean Filter.
Source: http://prog3.com/sbdm/blog/ebowtang/article/details/38960271
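For instance, the mean of one 3Χ3 window can be computed directly (the pixel values below are made-up toy numbers, not those of Fig.15):

```python
# A single 3x3 window of pixel values:
window = [[5, 3, 6],
          [2, 1, 9],
          [8, 4, 7]]

# The centre value (1) is replaced by the mean of all nine values.
mean = sum(sum(row) for row in window) / 9.0
# 45 / 9 = 5.0
```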
Order Statistics Filter
This type of filter is based on order statistics: estimators based on the "order" of quantities such as the minimum (the first order statistic) and the maximum (the largest order statistic). The best-known examples of order-statistics filters are the median filter and the max and min filters.
The Median Filter
The median filter is normally used to reduce noise in an image, somewhat like the mean filter. However, it often does a better job than the mean filter of preserving useful detail in the image.
Like the mean filter, the median filter considers each pixel in the image in turn and looks at its nearby neighbors to decide whether or not it is representative of its surroundings. Instead of simply replacing the pixel value with the mean of neighboring pixel values, it replaces it with the median of those values. The median is calculated by first sorting all the pixel values from the surrounding neighborhood into numerical order and then replacing the pixel being considered with the middle pixel value.
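A pure-Python sketch of this sort-and-take-the-middle procedure (the function name, the toy image and the choice to leave border pixels unchanged are our own):

```python
def median_filter3(image):
    # Replace each interior pixel by the median of its 3x3 neighbourhood;
    # border pixels are left unchanged in this sketch.
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            vals = sorted(image[y + j][x + i]
                          for j in (-1, 0, 1) for i in (-1, 0, 1))
            out[y][x] = vals[4]  # the middle of the 9 sorted values
    return out

noisy = [[5, 5, 5],
         [5, 255, 5],
         [5, 5, 5]]
# the isolated "salt" pixel is replaced by the neighbourhood median, 5
```

Note how the outlier is removed entirely rather than merely averaged down, which is why the median filter preserves edges better than the mean filter.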
Fig.16: Median Filter
Source: http://blog.malintha.org/implement-a-median-filter-with-opencv/
Min Filter: The intensity of the pixel at the center of the mask is replaced by the minimum intensity value of any pixel within the mask. This filter is used to find the dark points in an image.
Max Filter: The intensity of the pixel at the center of the mask is replaced by the maximum intensity value of any pixel within the mask. This filter is used to find the bright points in an image.
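Min, median and max are all special cases of one rank (order-statistics) operation, which can be sketched as follows (the function name and toy values are illustrative assumptions):

```python
def rank_filter3(image, rank):
    # 3x3 order-statistics filter: rank 0 is the min filter, rank 8 the
    # max filter (rank 4 would be the median); borders left unchanged.
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            vals = sorted(image[y + j][x + i]
                          for j in (-1, 0, 1) for i in (-1, 0, 1))
            out[y][x] = vals[rank]
    return out

img = [[1, 2, 3],
       [4, 9, 5],
       [6, 7, 8]]
# the min filter pulls the centre down to the darkest neighbour (finding
# dark points); the max filter pushes it up to the brightest (bright points)
```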
Periodic Noise Reduction by Frequency Domain Filtering
Removal of periodic and quasi-periodic patterns from photographs is an important problem. There are many sources of this periodic noise: e.g. the resolution of the scanner used to scan the image affects the high frequency noise pattern in the acquired image and can produce moiré patterns. It is also characteristic of gray scale images obtained from single-chip video cameras. Usually periodic and quasi-periodic noise results in peaks in the image spectrum amplitude. Considering this, processing in the frequency domain is a much better solution than spatial domain operations (blurring, for example, can hide the periodic patterns, but only at the cost of reduced edge sharpness). In a video stream, periodic noise is typically caused by the presence of electrical or electromechanical interference during video acquisition or transmission.
Fig.17: Block Diagram of Periodic noise Reduction
Source: https://in.mathworks.com/help/vision/examples/periodic-noise-reduction.html?requestedDomain=www.mathworks.com
Band Reject Filter
A band reject filter removes or attenuates a band of frequencies about the origin of the Fourier transform. An ideal band reject filter is given by the equation
H(u, v) = 0 if D0 − W/2 ≤ D(u, v) ≤ D0 + W/2
H(u, v) = 1 otherwise
Where,
D(u, v) is the distance from the origin, W is the width of the band, and
D0 is the radial centre of the band.
Fig.18: Band Reject Filter
Source: http://www.electronics-tutorials.ws/filter/band-stop-filter.html
Band Pass Filter
A band pass filter attenuates very low and very high frequencies but retains a middle band of frequencies. Band pass filtering can be used to enhance edges (suppressing low frequencies) while reducing the noise at the same time (attenuating high frequencies). We obtain the filter function of a band pass by multiplying the filter functions of a low pass and of a high pass in the frequency domain, where the cut-off frequency of the low pass is higher than that of the high pass. Therefore, in theory, one can derive a band pass filter function if the low pass filter function is available. Band pass filtering is attractive, but there is always a trade-off between blurring and noise: lowpass filtering reduces noise but accentuates blurring; highpass filtering reduces blurring but accentuates noise. The ideal band pass filter passes only frequencies within the pass band and gives an output in the spatial domain that is in most cases blurred and/or ringed. It is the easiest band pass filter to simulate, but its vertical edges and sharp corners are not realizable in the physical world. An ideal band pass filter is defined as follows
Hbp(u,v) = 1 – Hbr(u,v)
Where,
Hbp(u,v) is transfer function of band pass filter.
Hbr(u,v) is transfer function of band reject filter.
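This complement relationship can be sketched directly for the ideal case (the function names and parameter values are our own illustrations):

```python
def ideal_band_reject(D, D0, W):
    # 0 inside the band [D0 - W/2, D0 + W/2] around radius D0, 1 outside.
    return 0.0 if D0 - W / 2.0 <= D <= D0 + W / 2.0 else 1.0

def ideal_band_pass(D, D0, W):
    # Hbp(u, v) = 1 - Hbr(u, v)
    return 1.0 - ideal_band_reject(D, D0, W)

# frequencies inside the band pass through the band pass filter;
# everything else, including the dc component at D = 0, is blocked
```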
Notch Filter
A notch filter rejects frequencies in a predefined neighborhood about a centre frequency. Notch filters are used to remove repetitive "spectral" noise from an image; like a narrow high pass filter, they "notch" out selected frequencies other than the dc component. They attenuate a selected frequency (and some of its neighbors) and leave the other frequencies of the Fourier transform relatively unchanged.
The transfer function of an ideal notch reject filter of radius D0, with notch centres at (u0, v0) and (−u0, −v0), is
H(u, v) = 0 if D1(u, v) ≤ D0 or D2(u, v) ≤ D0
H(u, v) = 1 otherwise
where D1(u, v) and D2(u, v) are the distances from (u0, v0) and (−u0, −v0) respectively.
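A sketch of this ideal notch reject transfer function (the function name and the sample frequencies are our own):

```python
def ideal_notch_reject(u, v, u0, v0, D0):
    # 0 within radius D0 of the notch centre (u0, v0) or of its symmetric
    # point (-u0, -v0); 1 everywhere else.
    d1 = ((u - u0) ** 2 + (v - v0) ** 2) ** 0.5
    d2 = ((u + u0) ** 2 + (v + v0) ** 2) ** 0.5
    return 0.0 if d1 <= D0 or d2 <= D0 else 1.0

# both the noise spike at (u0, v0) and its mirror at (-u0, -v0) are
# rejected, while frequencies far from the notches pass unchanged
```

The symmetric pair of notches is required because the Fourier transform of a real image is conjugate-symmetric: every noise peak at (u0, v0) has a twin at (−u0, −v0).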
Optimum Notch Filtering
In a real system contaminated by periodic noise, the output image tends to contain a 2-D periodic structure superimposed on the input image, with patterns more complex than a few simple sinusoidal signals. In this condition the two previously mentioned methods are not always acceptable, because they may remove too much image information in the filtering process. The optimum notch filter tries to minimize the local variance of the restored image. At the first stage, the principal contribution of the interference (repetitive) pattern is extracted from the noisy image; then the output image is restored by subtracting a variable, weighted portion of the repetitive pattern from the contaminated image. The extraction of the repetitive pattern is implemented in the frequency domain by applying a proper notch-pass filter at every periodic noise frequency, and then applying the inverse 2-D Fourier transform to recover the repetitive pattern in the spatial domain. The 2-D Fourier transform of the interference repetitive pattern is given by
N(u, v) = Hnp(u, v)G(u, v)
Where,
Hnp (u,v) is superimposed response of all necessary notch-pass filters
G(u, v) is the 2-D Fourier transform of the contaminated image. The proper selection of Hnp(u, v) is a challenging problem in optimum notch filtering.
Fig.19: Notch Filter
Source: http://www.radioelectronics.com/info/circuits/opamp_notch_filter/opamp_notch_filter.php