Adding RMS noise to an image - python

I have a two-dimensional array representing an image. I have to add background Gaussian noise with an RMS of 2 units to the image. I am unfamiliar with the RMS measurement of noise and how to add it. Can you give me some insight into how to do this?

The way I understand it, you want to add white noise following a Gaussian distribution at every pixel. That could be achieved by something like this:
from scipy import stats
my_noise = stats.distributions.norm.rvs(0, 2, size=your_array.shape)
your_array += my_noise
Here, 0 is the mean and 2 the standard deviation of your distribution. Since the noise has zero mean, its RMS value equals its standard deviation, which is why a standard deviation of 2 gives the requested RMS of 2.
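As a quick sanity check, here is a minimal NumPy sketch (the array below is a hypothetical stand-in for your image) that generates the same kind of noise and verifies its RMS:
import numpy as np

# Hypothetical example image; replace with your own 2-D array.
your_array = np.zeros((256, 256), dtype=np.float64)

rng = np.random.default_rng()
my_noise = rng.normal(loc=0.0, scale=2.0, size=your_array.shape)  # mean 0, std 2

# For zero-mean noise, RMS equals the standard deviation, so this prints ~2.
print("noise RMS:", np.sqrt(np.mean(my_noise**2)))

noisy = your_array + my_noise  # keeps the original array intact
Note that adding float noise in place to an integer-typed image may fail or truncate the noise, so casting the image to float first is safer.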

Related

What is the best measurement for validating a denoising function in image processing? The signal-to-noise ratio seems to fail me

I'm using BrainWeb, a simulated dataset of normal brain MR images. I want to validate my MyDenoise function, which calls denoise_nl_means from the skimage.restoration package. To do so, I downloaded two sets of images from BrainWeb: an original image with 0% noise and 0% intensity non-uniformity, and a noisy image with the same options but 9% noise and 40% intensity non-uniformity. I calculate the signal-to-noise ratio (SNR) based on a deprecated version of scipy.stats as follows:
import numpy as np

def signaltonoise(a, axis=0, ddof=0):
    # Deprecated scipy.stats.signaltonoise: mean divided by standard deviation.
    a = np.asanyarray(a)
    m = a.mean(axis)
    sd = a.std(axis=axis, ddof=ddof)
    return np.where(sd == 0, 0, m / sd)
I assume that after denoising we should have a higher SNR, and that always holds. However, when comparing to the original image, the noisy image has a higher SNR. I guess that is because the overall mean of the image has increased more than the standard deviation. So it seems SNR cannot be a good measurement for validating whether my denoised image is closer to the original image, since the noisy image already has a higher SNR than the original. I want to know if there are better measurements for validating denoising functions on images.
Here is my result:
Original image SNR: 1.23
Noisy image SNR: 1.41
Denoised image SNR: 1.44
Thank you.
This is not how you calculate SNR.
The core concept is that, for any one given image, you don’t know what is noise and what is signal. If we did, denoising wouldn’t be a problem. Therefore, it is impossible to measure the noise level from one image (it is possible to estimate it, but we cannot compute it).
The solution is to use that noise-free image. This is the ground truth, the objective of the denoising operation. We can thus estimate the noise by comparing any one image to this ground truth; the difference is the noise:
noise = image - ground_truth
You can now compute the mean square error (MSE):
mse = np.mean(noise**2)
Or the signal to noise ratio:
snr = np.mean(ground_truth) / np.mean(noise)
(Note that this is one of many possible definitions of the signal to noise ratio; often we use the power of the signals rather than just their means, and often it is measured in dB.)
In general, MSE is a really good way to talk about the error in denoising. You’ll see most scientific papers in the field use the peak signal to noise ratio (PSNR) instead, which is just a scaled, logarithmic mapping of the MSE, so it is pointless to report both.
You can also look at the mean absolute error (MAE), which is less sensitive to individual pixels with a large error.
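As a rough sketch of how these metrics could be computed side by side (the function name and the data_range used for the PSNR are assumptions; adjust them to your images):
import numpy as np

def denoise_metrics(denoised, ground_truth, data_range=255.0):
    # Error image: everything that differs from the noise-free ground truth.
    noise = denoised.astype(np.float64) - ground_truth.astype(np.float64)
    mse = np.mean(noise**2)                      # mean square error
    mae = np.mean(np.abs(noise))                 # mean absolute error
    psnr = 10.0 * np.log10(data_range**2 / mse)  # peak signal-to-noise ratio, in dB
    return mse, mae, psnr
Recent versions of scikit-image also ship a ready-made skimage.metrics.peak_signal_noise_ratio, if you prefer not to roll your own.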

Remove outliers in an image after applying a threshold

Here's the deal. I want to create a mask that visualizes all the changes between two images (GeoTIFFs which are converted to 2D NumPy arrays).
For that I simply subtract the pixel values and normalize the absolute value of the subtraction.
Since the result will be covered in noise, I use a threshold and remove all pixels with a value below a certain limit:
def treshold(array, thresholdLimit):
    # Keep pixels above the limit, zero out everything else.
    print("Threshold...")
    result = (array > thresholdLimit) * array
    return result
This works without a problem. Now comes the issue: when applying the threshold, outliers remain, which is not intended.
What is a good way to remove those outliers?
Sometimes the outliers are small chunks of pixels, like 5-6 pixels together; how could those be removed?
Additionally, the images I use are about 10000x10000 pixels.
I would appreciate all advice!
EDIT:
Both images are Landsat satellite images covering the exact same area.
The difference here is that one image shows cloud coverage and the other one is free of clouds.
The bright snaky line in the top right is part of a river that has been covered by a cloud. Since water bodies like the ocean or rivers appear black in these images, the difference between the bright cloud and the dark river makes the river show a high degree of change.
I hope the following images make this clear:
Source TIFFs:
Subtraction result:
I also tried to smooth the result of the thresholding by using a median filter, but the result was still covered in outliers:
import numpy as np
from scipy.ndimage import median_filter

def filter(array, limit):
    # Median filter with a (limit x limit) neighbourhood.
    print("Median-Filter...")
    filteredImg = median_filter(array, size=limit).astype(np.float32)
    return filteredImg
I would suggest the following (a rough code sketch follows after the list):
Before proceeding, please double-check that the two images are 100% registered. To check that, you should overlay them using e.g. different color channels. Even minimal registration errors can render your task impossible.
Smooth both input images slightly (before the subtraction). For that I would suggest you use standard implementations. Play around with the filter parameters to find an acceptable compromise between smoothness (or reduction of the graininess of source image 1) and resolution.
Then try to match the image statistics by applying histogram normalization, using the histogram of image 2 as the target for the histogram of image 1. For this you can also use e.g. the OpenCV implementation.
Subtract the images.
If you then still observe obvious noise, look at the histogram of the subtraction result and see if you can relate the noise to intensity outliers. If you can clearly separate signal and noise based on intensity, apply thresholding again (informed by your histogram). Alternatively (or additionally), if the noise is structurally different from your signal (e.g. clustered), you could look into morphological operations to remove it.
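Putting these steps together, a rough sketch could look like the following; the sigma, threshold, and minimum cluster size are placeholder values, and skimage.exposure.match_histograms / skimage.morphology.remove_small_objects are used here instead of OpenCV, assuming scikit-image is available:
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.exposure import match_histograms
from skimage.morphology import remove_small_objects

def change_mask(img1, img2, sigma=1.0, thresholdLimit=0.2, min_size=50):
    # 1. Slight smoothing of both inputs to reduce graininess.
    a = gaussian_filter(img1.astype(np.float32), sigma=sigma)
    b = gaussian_filter(img2.astype(np.float32), sigma=sigma)

    # 2. Match the intensity statistics of image 1 to image 2.
    a = match_histograms(a, b)

    # 3. Subtract and normalize the absolute difference to [0, 1].
    diff = np.abs(a - b)
    diff /= diff.max()

    # 4. Threshold, then drop small clusters of outlier pixels.
    mask = diff > thresholdLimit
    mask = remove_small_objects(mask, min_size=min_size)
    return mask
For 10000x10000 images this stays within plain array operations, but you may need to process the data in tiles if memory becomes tight.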

Gaussian Noise vs Gaussian White Noise

How does Gaussian noise differ from white Gaussian noise? As I read, Gaussian noise has the PDF of a normal distribution. Does white Gaussian noise have it too?
How can I manually (without built-in functions) generate each type of noise for an image using Python? Which parameters do I need to consider?
Let's examine the phrase "white Gaussian noise", starting from the end.
Noise - This only describes the usage; it says nothing about the signal's properties.
Gaussian - The values are drawn from a Gaussian (normal) distribution.
White - The values are uncorrelated, namely you can infer nothing about one sample from any other sample (for a Gaussian distribution, no correlation implies independence). It also tells us that the power spectrum, i.e. the Fourier transform of the autocorrelation function, is flat (or equivalently, the autocorrelation itself is a delta function).
Now, regarding how to generate them:
Basically, most random number generators generate uniform data, to which some transformation is then applied to produce any other desired distribution (see https://en.wikipedia.org/wiki/Probability_density_function#Dependent_variables_and_change_of_variables for some idea of how it is done).
To create non-white data you need to create some linear connection between samples, namely, mix a few samples with linear weights. This is usually done by applying some kind of filter to the data.
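As an illustration of both points, the sketch below first generates white Gaussian noise "by hand" from uniform samples using the Box-Muller transform (one standard choice among several), then destroys the whiteness by linearly mixing neighbouring samples; the image size is arbitrary:
import numpy as np

rng = np.random.default_rng()
shape = (256, 256)  # hypothetical image size

# White Gaussian noise via the Box-Muller transform:
# two independent uniform samples -> one standard normal sample.
u1 = rng.uniform(1e-12, 1.0, size=shape)   # lower bound avoids log(0)
u2 = rng.uniform(0.0, 1.0, size=shape)
white = np.sqrt(-2.0 * np.log(u1)) * np.cos(2.0 * np.pi * u2)  # mean 0, std 1

# Non-white Gaussian noise: linearly mix each sample with two of its
# neighbours. The values are still Gaussian, but neighbouring samples
# are now correlated, so the power spectrum is no longer flat.
non_white = (white + np.roll(white, 1, axis=0) + np.roll(white, 1, axis=1)) / np.sqrt(3)
To put noise of a chosen standard deviation on an image, scale the noise array by that standard deviation and add it to the image.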
If each sample has a normal distribution with zero mean, the signal is said to be Gaussian white noise. (Wikipedia)
White noise = noise with a constant power spectral density. The term comes from light: if you have all wavelengths of light present, the resulting light is white.
Gaussian noise = noise that follows a normal distribution
Getting good-quality randomness is rather difficult, but for simple purposes, look at the random module, especially random.gauss(mu, sigma).
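For example, a pure-standard-library sketch (the image size and sigma here are arbitrary):
import random

height, width, sigma = 64, 64, 2.0
# One Gaussian sample per pixel, with mean 0 and standard deviation sigma.
noise = [[random.gauss(0.0, sigma) for _ in range(width)] for _ in range(height)]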

Skewed Gaussian distribution within an ellipse with Python

Okay, so I've been pulling my hair out over this for the last couple of days and haven't made much progress.
I want to generate a 2-D array (grid) with a Gaussian-like distribution on an elliptical domain. Why do I say Gaussian-like? Well, I want an asymmetric Gaussian, a.k.a. a skewed Gaussian, where the peak of the Gaussian-like surface is at some point (x0, y0) within the ellipse and the values on the perimeter of the ellipse are zero (or approaching zero...).
The attached picture might describe what I mean a little better.
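One possible construction, sketched below under several assumptions (the ellipse axes, peak position, widths, and the piecewise-width model of skewness are all made up for illustration): build an asymmetric Gaussian bump centred at (x0, y0) and multiply it by a taper that falls to zero on the ellipse boundary.
import numpy as np

a, b = 3.0, 2.0                 # ellipse semi-axes
x0, y0 = 0.8, -0.3              # peak location inside the ellipse
sx_left, sx_right = 0.6, 1.4    # different x-widths on each side of the peak -> skew
sy = 0.8

y, x = np.mgrid[-b:b:401j, -a:a:401j]

# Asymmetric ("skewed") Gaussian: a different x-width left and right of the peak.
sx = np.where(x < x0, sx_left, sx_right)
bump = np.exp(-((x - x0) ** 2 / (2 * sx**2) + (y - y0) ** 2 / (2 * sy**2)))

# Taper that is 1 at the ellipse centre and 0 on/outside the ellipse boundary.
r2 = (x / a) ** 2 + (y / b) ** 2
taper = np.clip(1.0 - r2, 0.0, None)

surface = bump * taper  # Gaussian-like peak near (x0, y0), zero on the perimeter
Note that the taper pulls the maximum slightly toward the ellipse centre; if the peak must sit exactly at (x0, y0), the taper would need to be adjusted.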

Finding the vertical and the horizontal gradients of an image using Python

I'm just starting off with image processing in Python using the SciPy, NumPy, and Image libraries. I need to find the gradient field of the image in order to divide the pixels into bins. For that, I applied a low-pass Gaussian filter to reduce pixel-by-pixel noise. Now I have to calculate the horizontal and vertical gradients by convolving 2x2-pixel horizontal and vertical masks across the image.
I couldn't find the exact resources for accomplishing this.
scipy.signal.convolve2d should work for this.
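For instance, a minimal sketch with 2x2 forward-difference masks (these particular kernels are one common choice, not necessarily the ones your assignment prescribes):
import numpy as np
from scipy.signal import convolve2d

# 2x2 forward-difference masks (note: convolution flips the kernel,
# so signs are mirrored compared with plain correlation).
kx = 0.5 * np.array([[1, -1],
                     [1, -1]])   # horizontal gradient (difference along x)
ky = 0.5 * np.array([[1, 1],
                     [-1, -1]])  # vertical gradient (difference along y)

image = np.random.rand(128, 128)  # placeholder for your smoothed image

gx = convolve2d(image, kx, mode='same', boundary='symm')
gy = convolve2d(image, ky, mode='same', boundary='symm')

magnitude = np.hypot(gx, gy)      # gradient magnitude
direction = np.arctan2(gy, gx)    # gradient direction, handy for binning pixels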
