An image signal can have noise introduced at many stages, including digitization or transmission. There are two types of noise that are of specific interest in image analysis.
I' = I + N

where I is the perfect image and N is the added noise, resulting in the noisy image I'.
(Figure: the noisy image I', the noise N, and the original image I)
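As a concrete illustration of the additive model, here is a minimal sketch in Python, assuming NumPy and treating the image as a grayscale array; the noise parameters and image contents are arbitrary choices for the example:

import numpy as np

def add_gaussian_noise(image, std=10.0, seed=None):
    # N: zero-mean Gaussian noise with the given standard deviation
    rng = np.random.default_rng(seed)
    noise = rng.normal(loc=0.0, scale=std, size=image.shape)
    noisy = image.astype(np.float64) + noise      # I' = I + N
    return np.clip(noisy, 0, 255)                 # keep intensities in a valid range

I = np.full((256, 256), 128.0)                    # stand-in "perfect" image I
I_noisy = add_gaussian_noise(I, std=15.0)         # corrupted image I'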
Noise is commonly characterized by two parameters: its mean m and its standard deviation std. The mean is the average value of the noise signal, whereas the standard deviation is the square root of the variance. Given a set of values Vi, i = 1..n, these are calculated as:

m = (1/n) * sum(Vi)
variance = (1/n) * sum((Vi - m)^2)
std = sqrt(variance)
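A quick numerical check of these definitions, sketched with NumPy (the sample values Vi are made up for illustration):

import numpy as np

V = np.array([12.0, 15.0, 9.0, 14.0, 10.0])       # hypothetical noise samples Vi
n = V.size
m = V.sum() / n                                   # mean: average of the Vi
variance = ((V - m) ** 2).sum() / n               # mean squared deviation from m
std = variance ** 0.5                             # standard deviation: sqrt of variance
print(m, std)                                     # same results as V.mean() and V.std()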
We can consider the picture P(r,c) as a random variable and sample an ensemble of images from the space of all possibilities. This ensemble has a mean (average) image, which we'll denote Pmean(r,c). As with most stochastic processes, if the noise has zero mean and we sample enough images, the ensemble mean approaches the noise-free original signal.
So, one way to reduce noise is to take many pictures of the same scene and average them.
However, this usually isn't feasible.
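A small simulation of this idea, assuming zero-mean Gaussian noise added to a synthetic image; as more noisy copies are averaged, the result converges toward the clean image:

import numpy as np

rng = np.random.default_rng(0)
clean = rng.uniform(0, 255, size=(128, 128))       # stand-in for the noise-free image

def average_noisy_copies(k, std=20.0):
    # Average k independently corrupted copies of the clean image
    copies = [clean + rng.normal(0.0, std, clean.shape) for _ in range(k)]
    return np.mean(copies, axis=0)

for k in (1, 4, 16, 64):
    estimate = average_noisy_copies(k)
    rmse = np.sqrt(np.mean((estimate - clean) ** 2))
    print(k, round(rmse, 2))                       # error falls roughly as std / sqrt(k)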
If we compare the strength of a signal or image (the mean of the ensemble) to the variation (standard deviation) between individual acquired images, we get the signal-to-noise ratio, SNR:
SNR = mean / standard deviation

A high signal-to-noise ratio indicates a relatively clean signal or image; a low signal-to-noise ratio indicates that the noise is great enough to impair our ability to discern the signal in it.
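One way to estimate this ratio from a stack of repeated acquisitions is sketched below; the function name and the choice of averaging the per-pixel statistics are assumptions for illustration, not a prescribed method:

import numpy as np

def snr(image_stack):
    # image_stack: shape (k, rows, cols) -- k noisy acquisitions of the same scene
    signal = image_stack.mean(axis=0)              # ensemble mean: estimate of the signal
    noise = image_stack.std(axis=0)                # per-pixel spread: estimate of the noise
    return signal.mean() / noise.mean()

rng = np.random.default_rng(1)
clean = rng.uniform(50, 200, size=(64, 64))
stack = np.stack([clean + rng.normal(0, 20, clean.shape) for _ in range(8)])
print(snr(stack))                                  # larger values mean a cleaner image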
In-Class Assignment/Exercise: The files Orig_N*.jpg contain a sequence of images corrupted by noise. Average any number (2..4) of these images together to reduce the noise and display the result (a possible starting point is sketched below). Why does this work? How many is enough? How can you tell?
(Figure: the original image)
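One possible starting point for the exercise, assuming the Orig_N*.jpg files sit in the working directory and that NumPy and Pillow are available; how many images are "enough" is left to the exercise questions:

import glob
import numpy as np
from PIL import Image

files = sorted(glob.glob("Orig_N*.jpg"))[:4]       # choose 2..4 of the noisy files
stack = np.stack([np.asarray(Image.open(f).convert("L"), dtype=np.float64)
                  for f in files])
average = stack.mean(axis=0)                       # ensemble average reduces the noise
Image.fromarray(np.clip(average, 0, 255).astype(np.uint8)).show()   # display the result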