Why is my 2D FFT convolution displacing the result? [duplicate] - python

I'm zero-padding my image and convolution kernel, transforming both to the Fourier domain, multiplying them, and inverting the product back to get the convolved image; see the code below. The result, however, is wrong. I was expecting a blurred image, but the output is four shifted quarters. Why is the output wrong, and how can I fix the code?
Input image:
Result of convolution:
from PIL import Image,ImageDraw,ImageOps,ImageFilter
import numpy as np
from scipy import fftpack
from copy import deepcopy
import imageio
## STEP 1 ##
im1=Image.open("pika.jpeg")
im1=ImageOps.grayscale(im1)
im1.show()
print("s",im1.size)
## working on this image array
im_W=np.array(im1).T
print("before",im_W.shape)
if(im_W.shape[0]%2==0):
    im_W=np.pad(im_W, ((1,0),(0,0)), 'constant')
if(im_W.shape[1]%2==0):
    im_W=np.pad(im_W, ((0,0),(1,0)), 'constant')
print("after",im_W.shape)
Boxblur=np.array([[1/9,1/9,1/9],[1/9,1/9,1/9],[1/9,1/9,1/9]])
dim=Boxblur.shape[0]
##padding before frequency domain multiplication
pad_size=(Boxblur.shape[0]-1)/2
pad_size=int(pad_size)
##padded the image(starts here)
p_im=np.pad(im_W, ((pad_size,pad_size),(pad_size,pad_size)), 'constant')
t_b=(p_im.shape[0]-dim)/2
l_r=(p_im.shape[1]-dim)/2
t_b=int(t_b)
l_r=int(l_r)
##padded the image(ends here)
## padded the kernel(starts here)
k_im=np.pad(Boxblur, ((t_b,t_b),(l_r,l_r)), 'constant')
print("hjhj",k_im)
print("kernel",k_im.shape)
##fourier transforms image and kernel
fft_im = fftpack.fftshift(fftpack.fft2(p_im))
fft_k = fftpack.fftshift(fftpack.fft2(k_im))
con_in_f=fft_im*fft_k
ifft2 = abs(fftpack.ifft2(fftpack.ifftshift(con_in_f)))
convolved=(np.log(abs(ifft2))* 255 / np.amax(np.log(abs(ifft2)))).astype(np.uint8)
final=Image.fromarray(convolved.T)
final.show()
u=im1.filter(ImageFilter.Kernel((3,3), [1/9,1/9,1/9,1/9,1/9,1/9,1/9,1/9,1/9], scale=None, offset=0))
u.show()

The Discrete Fourier transform (DFT) and, by extension, the FFT (which computes the DFT) have the origin in the first element (for an image, the top-left pixel) for both the input and the output. This is the reason we often use the fftshift function on the output, so as to shift the origin to a location more familiar to us (the middle of the image).
This means that we need to transform a 3x3 uniform weighted blurring kernel to look like this before passing it to the FFT function:
1/9 1/9 0 0 ... 0 1/9
1/9 1/9 0 0 ... 0 1/9
0 0 0 0 ... 0 0
... ... ...
0 0 0 0 ... 0 0
1/9 1/9 0 0 ... 0 1/9
That is, the middle of the kernel is at the top-left corner of the image, with the pixels above and to the left of the middle wrapping around and appearing at the right and bottom ends of the image.
We can do this using the ifftshift function, applied to the kernel after padding. When padding the kernel, we need to take care that the origin (middle of the kernel) is at location k_im.shape // 2 (integer division), within the kernel image k_im. Initially the origin is at [3,3]//2 == [1,1]. Usually, the image whose size we're matching is even in size, for example [256,256]. The origin there will be at [256,256]//2 == [128,128]. This means that we need to pad a different amount to the left and to the right (and bottom and top). We need to be careful computing this padding:
sz = img.shape # the sizes we're matching
kernel = np.ones((3,3)) / 9
sz = (sz[0] - kernel.shape[0], sz[1] - kernel.shape[1]) # total amount of padding
kernel = np.pad(kernel, (((sz[0]+1)//2, sz[0]//2), ((sz[1]+1)//2, sz[1]//2)), 'constant')
kernel = fftpack.ifftshift(kernel)
Note that the input image, img, does not need to be padded (though you can do this if you want to enforce a size for which the FFT is cheaper). There is also no need to apply fftshift to the result of the FFT before the multiplication and then reverse this shift right after; these two shifts cancel and are redundant. You should use fftshift only if you want to display the Fourier-domain image. Finally, applying logarithmic scaling to the filtered image is wrong.
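A quick way to convince yourself that the shift pair is redundant (a minimal check of my own, not from the original code):
import numpy as np
from scipy import fftpack

a = np.random.rand(8, 8)
# ifftshift undoes fftshift; since the product of two identically
# shifted spectra is the shifted product, unshifting that product
# recovers exactly the unshifted product
assert np.allclose(fftpack.ifftshift(fftpack.fftshift(a)), a)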
The resulting code is (I'm using pyplot for display, not using PIL at all):
import numpy as np
from scipy import misc
from scipy import fftpack
import matplotlib.pyplot as plt
img = misc.face()[:,:,0]
kernel = np.ones((3,3)) / 9
sz = (img.shape[0] - kernel.shape[0], img.shape[1] - kernel.shape[1]) # total amount of padding
kernel = np.pad(kernel, (((sz[0]+1)//2, sz[0]//2), ((sz[1]+1)//2, sz[1]//2)), 'constant')
kernel = fftpack.ifftshift(kernel)
filtered = np.real(fftpack.ifft2(fftpack.fft2(img) * fftpack.fft2(kernel)))
plt.imshow(filtered, vmin=0, vmax=255)
plt.show()
Note that I am taking the real part of the inverse FFT. The imaginary part should contain only values very close to zero, which are the result of rounding errors in the computations. Taking the absolute value, though common, is incorrect. For example, you might want to apply a filter to an image that contains negative values, or apply a filter that produces negative values. Taking the absolute value here would create artefacts. If the output of the inverse FFT contains imaginary values significantly different from zero, then there is an error in the way that the filtering kernel was padded.
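For example, a quick sanity check along these lines (a sketch, assuming img and kernel as prepared in the code above):
import numpy as np
from scipy import fftpack

result = fftpack.ifft2(fftpack.fft2(img) * fftpack.fft2(kernel))
# the imaginary part should sit at floating-point rounding level
print(np.max(np.abs(result.imag)))  # expect something tiny, e.g. ~1e-13
filtered = result.real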
Also note that the kernel here is tiny, and consequently the blurring effect is tiny too. To better see the effect of the blurring, make a larger kernel, for example np.ones((7,7)) / 49.

Related

How do I normalize the pixel value of an image to 0~1?

The type of my train_data is 'Array of uint16'. The size is (96108,7,7). Therefore, there are 96108 images.
The images are different from general images. They come from a 7x7 sensor, and each of the 49 pixels contains the number of detected lights. One image is the number of lights detected over a period of 0 to 1 second. Since the sensor detects randomly over a unit time, the maximum pixel values are all different.
If the max value of all images were 255, I could do 'train_data/255', but I can't use a single division because the max value of each image is different.
I want to make the pixel value of all images 0 to 1.
What should I do?
Contrast Normalization (or contrast stretching) should not be confused with Data Normalization, which maps data into the 0.0-1.0 range.
Data Normalization
We use the following formula to normalize data, where min() and max() are the minimum and maximum values supported by the data type:
x_i' = (x_i - min()) / (max() - min())
When we use it with images, x is the whole image and i is an individual pixel of that image. If you are using an 8-bit image, the min() and max() values become 0 and 255 respectively. This should not be confused with the minimum and maximum values actually present within your image.
To convert an 8-bit image into a floating-point image, since min() is 0, the math simplifies to image/255.
img = img/255
NumPy methods like to output arrays in 64-bit floating point by default. To effectively test methods applied to 8-bit images with NumPy, an 8-bit array is required as the input:
image = np.random.randint(0,255, (7,7), dtype=np.uint8)
normalized_image = image/255
When we examine the output of the above two lines we can see the maximum value of the image is 252 which has now mapped to 0.9882352941176471 on the 64-bit normalized image.
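A quick way to inspect this yourself (a small check; the exact maximum depends on the random draw):
print(image.max())             # e.g. 252
print(normalized_image.max())  # e.g. 0.9882352941176471 (252/255)
print(normalized_image.dtype)  # float64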
However, in most cases, you wouldn't need a 64-bit image. You can output (or in other words cast) it to 32-bit (or 16-bit) using the following code. If you try to cast it to 8-bit it will throw an error. Using '/' for division is a shorthand for np.true_divide but lacks the ability to define the output data format.
normalized_image_2 = np.true_divide(image, 255, dtype=np.float32)
The properties of the new array are shown below. You can see that the number of digits is now reduced, and 252 has been remapped to 0.9882353.
Contrast Normalization
The method shown by @3dSpatialUser effectively does a partial contrast normalization, meaning it stretches the intensities of the image within the available intensity range. Test it with an 8-bit array using the following code.
c_image = np.random.randint(64,128, (7,7), dtype=np.uint8)
cn_image = (c_image - c_image.min()) / (c_image.max()- c_image.min())
Contrast is now stretched, mapping the minimum value of 64 to 0.0 and the maximum of 127 to 1.0.
The formula for contrast normalization is:
x_i' = (x_i - min(x)) / (max(x) - min(x))
Using the above formula with NumPy, to remap the data back to the 8-bit input format after contrast normalization, the image should be multiplied by 255 and the data type changed back to uint8:
cn_image_correct = (c_image - c_image.min()) / (c_image.max() - c_image.min()) * 255
cn_image_correct = cn_image_correct.astype(np.uint8)
64 is now mapped to 0 and 127 is mapped to 255, stretching the contrast.
Where the confusion arises
In most applications, the intensity values of an image are spread within a band between the image's own minimum and maximum rather than across the full range of the data type. Hence, when we apply the normalization formula using the min and max values present within the image, instead of the min and max of the available range, it will output a better-looking image (in most cases) within the 0.0-1.0 range, which effectively normalizes both data and contrast at the same time. Also, image editing software performs gamma correction or remapping when switching between 8/16/32-bit image data types.
import numpy as np
data = np.random.normal(loc=0, scale=1, size=(96108, 7, 7))
data_min = np.min(data, axis=(1,2), keepdims=True)
data_max = np.max(data, axis=(1,2), keepdims=True)
scaled_data = (data - data_min) / (data_max - data_min)
EDIT: I have voted for the other answer since that is a cleaner way (in my opinion) to do it, but the principles are the same.
EDIT v2: I saw the comment and I see the difference. I will rewrite my code so it is "cleaner", with fewer extra variables, but still correct using min/max:
data -= data.min(axis=(1,2), keepdims=True)
data /= data.max(axis=(1,2), keepdims=True)
First the minimum value is moved to zero; thereafter one can divide by the maximum value to get the full range (max-min) of the specific image.
After this step, np.array_equal(data, scaled_data) is True.
You can gather the maximum values with np.ndarray.max across multiple axes: here axis=1 and axis=2 (i.e. on each image individually). Then normalize the initial array with it. To avoid having to broadcast this array of maxima yourself, you can use the keepdims option:
>>> x = np.random.rand(96108,7,7)
>>> x.max(axis=(1,2), keepdims=True).shape
(96108, 1, 1)
While x.max(axis=(1,2)) alone would have returned an array shaped (96108,)...
Such that you can then do:
>>> x /= x.max(axis=(1,2), keepdims=True)

apply filters on images when there are no-data pixels

I have an image that contains many no-data pixels. The image is a 2D numpy array and the no-data values are "None". Whenever I try to apply filters to it, it seems like the None values are taken into account by the kernel and make my pixels disappear.
For example, I have this image:
I have tried to apply the Lee filter to it with this function (taken from Speckle (Lee Filter) in Python):
from scipy.ndimage.filters import uniform_filter
from scipy.ndimage.measurements import variance
def lee_filter(img, size):
    img_mean = uniform_filter(img, (size, size))
    img_sqr_mean = uniform_filter(img**2, (size, size))
    img_variance = img_sqr_mean - img_mean**2

    overall_variance = variance(img)

    img_weights = img_variance / (img_variance + overall_variance)
    img_output = img_mean + img_weights * (img - img_mean)
    return img_output
but the results looks like this:
with the warning:
UserWarning: Warning: converting a masked element to nan. dv =
np.float64(self.norm.vmax) - np.float64(self.norm.vmin)
I have also tried to use the library findpeaks.
from findpeaks import findpeaks
import findpeaks
#lee enhanced filter
image_lee_enhanced = findpeaks.lee_enhanced_filter(img, win_size=3, cu=0.25)
but I get the same blank image.
When I used a median filter on the same image with ndimage, it worked with no problem.
My question is how can I run those filters on the image without letting the None values interrupt the results?
edit: I prefer not to set the no-data pixels to 0 because the pixel values range between -50 and 1 (it is an index value). In addition, I'm afraid that if I change it to any other value (e.g. 9999) it will also influence the filter (am I wrong?)
Edit 2:
I have read Cris Luengo's answer and I have tried to apply something similar with the scipy.ndimage median filter, as I have realized that its result is distorted as well.
This is the original image:
I have tried masking the Null values:
idx = np.ma.masked_where(img,img!=None)[:,1]
median_filter_img = ndimage.median_filter(img[idx].reshape(491, 473), size=10)
zeros = np.zeros([img.shape[0],img.shape[1]])
zeros[idx] = median_filter_img
The results looks like this (color is darker to see the problem in the edges):
As can be seen, it seems like the edge values are influenced by the None values.
I have done this also with img!=0 but got the same problem.
(just to add: the pixel values are between -35 and 1)
If you want to apply a linear smoothing filter, then you can use the Normalized Convolution.
The basic recipe is:
Create a mask image that is 1 for the pixels with data, and 0 for the pixels without data.
Set the pixels without data to any number, for example 0. NaN is not valid because it spreads in the computations.
Apply the linear smoothing filter to the image multiplied by the mask.
Apply the linear smoothing filter to the mask.
Divide the two results.
Basically, we normalize the result of the linear smoothing filter (convolution) by the number of pixels with data within the filter window.
In regions where the smoothed mask is 0 (far away from data), we will divide 0 by 0, so special care needs to be taken there.
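Here is a minimal sketch of this recipe, using scipy.ndimage.uniform_filter as the linear smoothing filter (the variable names and the use of NaN as the missing-pixel marker are my assumptions, not part of the recipe):
import numpy as np
from scipy.ndimage import uniform_filter

# img: float array where missing pixels are marked with np.nan
mask = np.isfinite(img).astype(float)           # 1 where data, 0 where missing
filled = np.where(np.isfinite(img), img, 0.0)   # missing pixels set to 0

smoothed_data = uniform_filter(filled, size=7)
smoothed_mask = uniform_filter(mask, size=7)

# divide, guarding against 0/0 far away from any data
result = np.where(smoothed_mask > 0,
                  smoothed_data / np.maximum(smoothed_mask, 1e-12),
                  np.nan)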
Note that normalized convolution can be used also for uncertain data, where the mask image gets values in between 0 and 1 indicating the confidence we have in each pixel. Pixels thought to be noisy can be set to a value closer to 0 than the other pixels, for example.
The recipe above is only valid for linear smoothing filters. Normalized convolution can be done with other linear filters, for example derivative filters, but the resulting recipe is different. See, for example, here for the equation for Normalized Convolution used to compute the derivative.
For non-linear filters, other approaches are necessary. Non-linear smoothing filters, for example, will often avoid affecting edges, and so will work quite well in images with missing data, if the missing pixels are set to 0, or some value far outside of the data range. The concept of keeping a mask image that indicates which pixels have data and which don't is always a good idea.
It seems like a simple solution is to set the None values to zero. I don't know how you would get around this otherwise, because most image processing kernels require some value to operate on.
a[a == None] = 0

Python fractal box count - fractal dimension

I have some images for which I want to calculate the Minkowski/box count dimension to determine the fractal characteristics in the image. Here are 2 example images:
10.jpg:
24.jpg:
I'm using the following code to calculate the fractal dimension:
import numpy as np
import scipy
def rgb2gray(rgb):
    r, g, b = rgb[:,:,0], rgb[:,:,1], rgb[:,:,2]
    gray = 0.2989 * r + 0.5870 * g + 0.1140 * b
    return gray

def fractal_dimension(Z, threshold=0.9):
    # Only for 2d image
    assert(len(Z.shape) == 2)

    # From https://github.com/rougier/numpy-100 (#87)
    def boxcount(Z, k):
        S = np.add.reduceat(
            np.add.reduceat(Z, np.arange(0, Z.shape[0], k), axis=0),
            np.arange(0, Z.shape[1], k), axis=1)
        # We count non-empty (0) and non-full boxes (k*k)
        return len(np.where((S > 0) & (S < k*k))[0])

    # Transform Z into a binary array
    Z = (Z < threshold)
    # Minimal dimension of image
    p = min(Z.shape)
    # Greatest power of 2 less than or equal to p
    n = 2**np.floor(np.log(p)/np.log(2))
    # Extract the exponent
    n = int(np.log(n)/np.log(2))
    # Build successive box sizes (from 2**n down to 2**1)
    sizes = 2**np.arange(n, 1, -1)
    # Actual box counting with decreasing size
    counts = []
    for size in sizes:
        counts.append(boxcount(Z, size))
    # Fit the successive log(sizes) with log(counts)
    coeffs = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -coeffs[0]
I = rgb2gray(scipy.misc.imread("24.jpg"))
print("Minkowski–Bouligand dimension (computed): ", fractal_dimension(I))
From the literature I've read, it has been suggested that natural scenes (e.g. 24.jpg) are more fractal in nature, and thus should have a larger fractal dimension value.
The results it gives me go in the opposite direction from what the literature would suggest:
10.jpg: 1.259
24.jpg: 1.073
I would expect the fractal dimension of the natural image to be larger than that of the urban one.
Am I calculating the value incorrectly in my code? Or am I just interpreting the results incorrectly?
With the fractal dimension of something physical, the dimension might converge to different values at different stages. For example, a very thin line (but of finite width) would initially seem one dimensional, then eventually two dimensional as its width becomes of comparable size to the boxes used.
Let's look at the dimensions that you have produced:
What do you see? Well, the linear fits are not so good, and the dimension is heading towards a value of two.
To diagnose, let's take a look at the grey-scale images produced with the threshold that you have (that is, 0.9):
The nature picture has almost become an ink blob. The dimension would go to a value of 2 very soon, as the graphs told us; that is because we have pretty much lost the image.
And now with a threshold of 50?
With new linear fits that are much better, the dimensions are 1.6 and 1.8 for urban and nature respectively. Keep in mind that the urban picture actually has a lot of structure to it, in particular on the textured walls.
In the future, good threshold values would be ones closer to the mean of the grey-scale image; that way your image does not turn into a blob of ink!
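For example, a threshold derived from the image itself rather than a fixed constant (a sketch, assuming I is the grayscale array from the question's code):
# use the mean grey level as the binarization threshold
print(fractal_dimension(I, threshold=I.mean()))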
A good text book on this is "Fractals everywhere" by Michael F. Barnsley.

Adding poisson noise to an image

I have some images that I need to add incremental amounts of Poisson noise to in order to more thoroughly analyze them. I know you can do this in MATLAB, but how do you go about doing it in Python? Searches have yielded nothing so far.
Helder's answer is correct. I just want to add the fact that Poisson noise is not additive, so you cannot add it to an image the way you would Gaussian noise.
Depending on what you want to achieve, here are some suggestions:
Simulate a low-light noisy image (if PEAK = 1, it will be really noisy)
import numpy as np
image = read_image("YOUR_IMAGE") # need a rescale to be more realistic
noisy = np.random.poisson(image / 255.0 * PEAK) / PEAK * 255 # noisy image
Add a noise layer on top of the clean image
import numpy as np
image = read_image("YOUR_IMAGE")
noisemap = create_noisemap()
noisy = image + np.random.poisson(noisemap)
Then you can crop the result to 0 - 255 if you like (I use PIL so I use 255 instead of 1).
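For instance, a minimal clipping step (a sketch, assuming noisy from one of the snippets above):
import numpy as np

# clip to the valid 8-bit range and cast back for display/saving
noisy_u8 = np.clip(noisy, 0, 255).astype(np.uint8)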
Actually, Paul's answer doesn't make sense.
Poisson noise is signal dependent! Using the commands he provided, the noise later added to the image is not signal dependent.
To make it signal dependent, you should pass the image to NumPy's poisson function:
import numpy
import scipy.misc

filename = 'myimage.png'
img = (scipy.misc.imread(filename)).astype(float)
noise_mask = numpy.random.poisson(img)
noisy_img = img + noise_mask
You could use skimage.util.random_noise:
from skimage.util import random_noise
noisy = random_noise(img, mode="poisson")
From the item 1.4.4 - "Gaussian Approximation of the Poisson Distribution" of Chapter 1 of this book:
For large mean values, the Poisson distribution is well approximated by a Gaussian distribution with mean and variance equal to the mean of the Poisson random variable:
P(μ) ≈ N (μ,μ)
Then, we can generate Poisson noise from a normal distribution N(0,1), scale its standard deviation by the square root of μ, and add it to the image (which holds the μ values):
import numpy as np
import matplotlib.pyplot as plt

# Image size
M, N = 1000, 1000
# Generate synthetic image
image = np.tile(np.arange(0,N,dtype='float64'),(M,1)) * 20
# -- sqrt(mu) * normal(0,1) --
poisson_noise = np.sqrt(image) * np.random.normal(0, 1, image.shape)
# Add the noise to the mu values
noisy_image = image + poisson_noise
plt.figure(figsize=(10,10))
plt.subplot(2,2,1)
plt.title('Image')
plt.imshow(image,'gray')
plt.subplot(2,2,2)
plt.title('Noisy image noise')
plt.imshow(noisy_image,'gray')
plt.subplot(2,2,3)
plt.title('Image profile')
plt.plot(image[0,:])
plt.subplot(2,2,4)
plt.title('Noisy image profile')
plt.plot(noisy_image[0,:])
print("Synthetic image mean: {}".format(image[:,1].mean()))
print("Synthetic image variance: {}".format(image[:,1].var()))
print("Noisy image mean: {}".format(noisy_image[:,1].mean()))
print("Noisy image variance: {}".format(noisy_image[:,1].var()))
Since Poisson noise is signal dependent, as we increase the underlying signal the noise variance also increases, as we can see in these row profiles:
Output for statistics in a single column:
Synthetic image mean: 20.0
Synthetic image variance: 0.0
Noisy image mean: 19.931120555821597
Noisy image variance: 19.39456713877459
Further references: [1][2]
If numpy/scipy are available to you, then the following should help. I recommend that you cast the arrays to float for intermediate computations, then cast back to uint8 for output/display purposes. As Poisson noise is all >= 0, you will need to decide how you want to handle overflow of your arrays as you cast back to uint8. You could scale or truncate depending on your goals.
import numpy
import scipy.misc

filename = 'myimage.png'
imagea = (scipy.misc.imread(filename)).astype(float)
poissonNoise = numpy.random.poisson(imagea).astype(float)
noisyImage = imagea + poissonNoise
# here care must be taken to re-cast the result to uint8 if needed, or scale to 0-1 etc...
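As one example of the scaling option (a sketch; truncating with numpy.clip would be the alternative):
# rescale so the maximum lands at 255, then cast back to uint8
noisyImage = (noisyImage / noisyImage.max() * 255).astype(numpy.uint8)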

Peak detection in a noisy 2d array

I'm trying to get python to return, as close as possible, the center of the most obvious clustering in an image like the one below:
In my previous question I asked how to get the global maximum and the local maxima of a 2d array, and the answers given worked perfectly. The issue is that the center estimation I can get by averaging the global maximum obtained with different bin sizes is always slightly off from the one I would set by eye, because I'm only accounting for the biggest bin instead of a group of the biggest bins (as one does by eye).
I tried adapting the answer to this question to my problem, but it turns out my image is too noisy for that algorithm to work. Here's my code implementing that answer:
import numpy as np
from scipy.ndimage.filters import maximum_filter
from scipy.ndimage.morphology import generate_binary_structure, binary_erosion
import matplotlib.pyplot as pp
from os import getcwd
from os.path import join, realpath, dirname
# Save path to dir where this code exists.
mypath = realpath(join(getcwd(), dirname(__file__)))
myfile = 'data_file.dat'
x, y = np.loadtxt(join(mypath,myfile), usecols=(1, 2), unpack=True)
xmin, xmax = min(x), max(x)
ymin, ymax = min(y), max(y)
rang = [[xmin, xmax], [ymin, ymax]]
paws = []
for d_b in range(25, 110, 25):
    # Number of bins in x,y given the bin width 'd_b'
    binsxy = [int((xmax - xmin) / d_b), int((ymax - ymin) / d_b)]
    H, xedges, yedges = np.histogram2d(x, y, range=rang, bins=binsxy)
    paws.append(H)
def detect_peaks(image):
    """
    Takes an image and detects the peaks using the local maximum filter.
    Returns a boolean mask of the peaks (i.e. 1 when
    the pixel's value is the neighborhood maximum, 0 otherwise)
    """
    # define an 8-connected neighborhood
    neighborhood = generate_binary_structure(2,2)

    # apply the local maximum filter; all pixels of maximal value
    # in their neighborhood are set to 1
    local_max = maximum_filter(image, footprint=neighborhood)==image
    # local_max is a mask that contains the peaks we are
    # looking for, but also the background.
    # In order to isolate the peaks we must remove the background from the mask.

    # we create the mask of the background
    background = (image==0)

    # a little technicality: we must erode the background in order to
    # successfully subtract it from local_max, otherwise a line will
    # appear along the background border (artifact of the local maximum filter)
    eroded_background = binary_erosion(background, structure=neighborhood, border_value=1)

    # we obtain the final mask, containing only peaks, by removing the
    # background from the local_max mask (logical AND with the negated
    # background; boolean arrays do not support the '-' operator)
    detected_peaks = local_max & ~eroded_background

    return detected_peaks
# applying the detection and plotting results
for i, paw in enumerate(paws):
    detected_peaks = detect_peaks(paw)
    pp.subplot(4,2,(2*i+1))
    pp.imshow(paw)
    pp.subplot(4,2,(2*i+2))
    pp.imshow(detected_peaks)

pp.show()
and here's the result of that (varying the bin size):
Clearly my background is too noisy for that algorithm to work, so the question is: how can I make that algorithm less sensitive? If an alternative solution exists then please let me know.
EDIT
Following Bi Rico's advice, I attempted to smooth my 2d array before passing it to the local maximum finder, like so:
H, xedges, yedges = np.histogram2d(x, y, range=rang, bins=binsxy)
H1 = gaussian_filter(H, 2, mode='nearest')
paws.append(H1)
These were the results with a sigma of 2, 4 and 8:
EDIT 2
mode='constant' seems to work much better than 'nearest'. It converges to the right center with sigma=2 for the largest bin size:
So, how do I get the coordinates of the maximum that shows in the last image?
Answering the last part of your question: whenever you have points in an image, you can find their coordinates by searching, in some order, for the local maxima of the image. In case your data is not a point source, you can apply a mask to each peak in order to prevent the peak's neighborhood from being picked up as a maximum in a later search. I propose the following code:
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
import copy
def get_std(image):
    return np.std(image)

def get_max(image,sigma,alpha=20,size=10):
    i_out = []
    j_out = []
    image_temp = copy.deepcopy(image)
    while True:
        k = np.argmax(image_temp)
        j,i = np.unravel_index(k, image_temp.shape)
        if(image_temp[j,i] >= alpha*sigma):
            i_out.append(i)
            j_out.append(j)
            x = np.arange(i-size, i+size)
            y = np.arange(j-size, j+size)
            xv,yv = np.meshgrid(x,y)
            # zero out the neighborhood of the peak (clipped at the image edges)
            image_temp[yv.clip(0,image_temp.shape[0]-1),
                       xv.clip(0,image_temp.shape[1]-1)] = 0
        else:
            break
    return i_out,j_out
#reading the image
image = mpimg.imread('ggd4.jpg')
#computing the standard deviation of the image
sigma = get_std(image)
#getting the peaks
i,j = get_max(image[:,:,0],sigma, alpha=10, size=10)
#let's see the results
plt.imshow(image, origin='lower')
plt.plot(i,j,'ro', markersize=10, alpha=0.5)
plt.show()
The image ggd4 for the test can be downloaded from:
http://www.ipac.caltech.edu/2mass/gallery/spr99/ggd4.jpg
The first part is to get some information about the noise in the image. I did it by computing the standard deviation of the full image (actually, it is better to select a small rectangle without signal). This tells us how much noise is present in the image.
The idea for getting the peaks is to ask for successive maxima which are above a certain threshold (say, 3, 4, 5, 10, or 20 times the noise). This is what the function get_max is actually doing. It searches for maxima until one of them falls below the threshold imposed by the noise. In order to avoid finding the same maximum many times it is necessary to remove the peaks from the image. In general, the shape of the mask used to do so depends strongly on the problem one wants to solve. For the case of stars, it would be good to remove the star by using a Gaussian function, or something similar. For simplicity I have chosen a square function, and the size of the function (in pixels) is the variable "size".
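As a sketch of the Gaussian alternative to the square mask (the helper below is hypothetical, not part of the original code; it assumes image_temp is a float array):
import numpy as np

def subtract_gaussian(image_temp, j, i, amplitude, sigma_pix=3.0):
    # subtract a Gaussian centered on the detected peak (j, i) instead of
    # zeroing out a square window around it
    yy, xx = np.mgrid[0:image_temp.shape[0], 0:image_temp.shape[1]]
    g = amplitude * np.exp(-((xx - i)**2 + (yy - j)**2) / (2 * sigma_pix**2))
    image_temp -= g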
I think that from this example, anybody can improve the code by adding more general things.
EDIT:
The original image looks like:
While the image after identifying the luminous points looks like this:
I'm too much of a n00b on Stack Overflow to comment on Alejandro's answer here. I would refine his code a bit to use a preallocated numpy array for the output:
def get_max(image,sigma,alpha=3,size=10):
    from copy import deepcopy
    import numpy as np
    # preallocate a lot of peak storage
    k_arr = np.zeros((10000,2))
    image_temp = deepcopy(image)
    peak_ct = 0
    while True:
        k = np.argmax(image_temp)
        j,i = np.unravel_index(k, image_temp.shape)
        if(image_temp[j,i] >= alpha*sigma):
            k_arr[peak_ct]=[j,i]
            # this is the part that masks already-found peaks.
            x = np.arange(i-size, i+size)
            y = np.arange(j-size, j+size)
            xv,yv = np.meshgrid(x,y)
            # the clip here handles edge cases where the peak is near the
            # image edge
            image_temp[yv.clip(0,image_temp.shape[0]-1),
                       xv.clip(0,image_temp.shape[1]-1)] = 0
            peak_ct += 1
        else:
            break
    # trim the output for only what we've actually found
    return k_arr[:peak_ct]
Profiling this and Alejandro's code using his example image, this code is about 33% faster (0.03 sec for Alejandro's code, 0.02 sec for mine). I expect that on images with larger numbers of peaks it would be even faster: appending the output to a list gets slower and slower as more peaks are found.
I think the first step needed here is to express the values in H in terms of the standard deviation of the field:
import numpy as np
H = H / np.std(H)
Now you can put a threshold on the values of this H. If the noise is assumed to be Gaussian, then picking a threshold of 3 means you can be quite sure (99.7%) that a pixel above it is associated with a real peak and not noise. See here.
Now the further selection can start. It is not exactly clear to me what exactly you want to find. Do you want the exact location of peak values? Or do you want one location for a cluster of peaks which is in the middle of this cluster?
Anyway, starting from this point with all pixel values expressed in standard deviations of the field, you should be able to get what you want. If you want to find clusters you could perform a nearest neighbour search on the >3-sigma gridpoints and put a threshold on the distance. I.e. only connect them when they are close enough to each other. If several gridpoints are connected you can define this as a group/cluster and calculate some (sigma-weighted?) center of the cluster.
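A minimal sketch of this clustering idea using scipy.ndimage (connected-component labelling in place of an explicit nearest-neighbour search; H is assumed to be the 2D histogram from the question):
import numpy as np
from scipy import ndimage

H_sigma = H / np.std(H)   # express bins in units of the field's standard deviation
above = H_sigma > 3       # keep only the >3-sigma bins

# connect neighbouring above-threshold bins into clusters
labels, n_clusters = ndimage.label(above)

# sigma-weighted center of each cluster, in bin coordinates
centers = ndimage.center_of_mass(H_sigma * above, labels, list(range(1, n_clusters + 1)))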
Hope my first contribution on Stackoverflow is useful for you!
The way I would do it:
1) normalize H between 0 and 1.
2) pick a threshold value, as tcaswell suggests. It could be between .9 and .99, for example.
3) use masked arrays to keep only the x,y coordinates with H above the threshold:
import numpy.ma as ma
x_masked = ma.masked_array(x, mask=H < threshold)
y_masked = ma.masked_array(y, mask=H < threshold)
4) now you can take a weighted average of the masked coordinates, with a weight like (H-threshold)^2, or any other power greater than or equal to one, depending on your taste/tests (see the sketch after the comments below).
Comments:
1) This is not robust with respect to the type of peaks you have, since you may have to adapt the threshold. This is the minor problem;
2) this DOES NOT work with two peaks as it is, and will give wrong results if the 2nd peak is above the threshold.
Nonetheless, it will always give you an answer without crashing (with the pros and cons that entails).
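A minimal sketch of step 4 (assuming x, y, and H have been flattened to matching 1-D arrays of bin centers and counts, and threshold was chosen in step 2):
import numpy as np

w = np.clip(H - threshold, 0, None) ** 2   # zero weight below the threshold
x_center = np.average(x, weights=w)
y_center = np.average(y, weights=w)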
I'm adding this answer because it's the solution I ended up using. It's a combination of Bi Rico's comment here (May 30 at 18:54) and the answer given in this question: Find peak of 2d histogram.
As it turns out, using the peak detection algorithm from the question Peak detection in a 2D array only complicates matters. After applying the Gaussian filter to the image, all that needs to be done is to ask for the maximum bin (as Bi Rico pointed out) and then obtain the maximum in x,y coordinates.
So instead of using the detect-peaks function as I did above, I simply add the following code after the Gaussian 2D histogram is obtained:
# Get 2D histogram.
H, xedges, yedges = np.histogram2d(x, y, range=rang, bins=binsxy)
# Get Gaussian filtered 2D histogram.
H1 = gaussian_filter(H, 2, mode='nearest')
# Get center of maximum in bin coordinates.
x_cent_bin, y_cent_bin = np.unravel_index(H1.argmax(), H1.shape)
# Get center in x,y coordinates.
x_cent_coord, y_cent_coord = np.average(xedges[x_cent_bin:x_cent_bin + 2]), np.average(yedges[y_cent_bin:y_cent_bin + 2])
