Setting pixel value to its percentile in a neighbourhood - python

For spatial analysis purposes, I am trying to set up a filter that, for each pixel, gives the percentile rank of that pixel within its neighbourhood (defined by a structuring element).
Below is my best shot so far:
import numpy as np
import scipy.ndimage as ndimage
import scipy.stats as sp
def get_percentile(values, radius=3):
    # Retrieve central pixel and neighbour values
    cur_value = values[4]
    other_values = np.delete(values, 4)
    return sp.percentileofscore(other_values, cur_value) / 100
def percentiles(image):
    # Definition of the neighbourhood (structuring element)
    footprint = np.array([[1, 1, 1],
                          [1, 1, 1],
                          [1, 1, 1]])
    # Use generic_filter to apply my user-defined function
    # (`get_percentile`) over each neighbourhood
    results = ndimage.generic_filter(
        image,
        get_percentile,
        footprint=footprint,
        mode='constant',
        cval=np.nan)
    return results
# Pick dimensions for a dummy example
dims = [12,15]
# Generate dummy example
df = np.random.randn(np.prod(dims)).reshape(dims[0], dims[1])
percentiles(df)
It sort of works, but:
1. I'm sure the code is not really optimal and could run faster.
2. The dimensions of my neighbourhood are hard-coded. What I would like is a better way to identify the central pixel the filter is being applied to, as distinct from its neighbours, for an arbitrary footprint (see the sketch below).
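A hedged sketch of the second point (an assumption on my part, not a tested answer): generic_filter passes the function the pixels where the footprint is nonzero, in row-major order, so for an odd-sized footprint whose centre cell is set, the position of the centre value can be derived by counting the nonzero cells before the middle of the footprint.
import numpy as np
import scipy.ndimage as ndimage
import scipy.stats as sp
def percentiles_general(image, footprint):
    # index of the centre pixel within the values handed to the filter function
    centre = np.count_nonzero(footprint.ravel()[:footprint.size // 2])
    def get_percentile(values):
        cur_value = values[centre]
        other_values = np.delete(values, centre)
        return sp.percentileofscore(other_values, cur_value) / 100
    return ndimage.generic_filter(
        image, get_percentile, footprint=footprint,
        mode='constant', cval=np.nan)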


Calculate pixel distance from centre of image

import numpy as np
import pandas as pd
Problem
I have an image, n pixel wide, m pixel tall. Both of these numbers are even. Pixels are squares. Pixel values are in a numpy array.
I need to calculate the distance from each pixel's centre to the centre of the image. I.e. the pixels just next to the centre should have the associated value sqrt(2)/2. If the image is like a chess-board, the pixel corresponding to the g6 square should have the associated distance value (2.5^2+1.5^2)^0.5=2.91.
My solution
I have achieved the task by the following code:
image = np.zeros([2, 4])  # let's say this is my image
df = pd.DataFrame(image)
distances = \
    pd.DataFrame(
        df.apply(
            lambda row: (row.index + 0.5 - len(df.index) / 2)**2, axis=0).to_numpy() +
        df.T.apply(
            lambda row: (row.index + 0.5 - len(df.columns) / 2)**2, axis=0).T.to_numpy())\
    .apply(np.sqrt).to_numpy()
distances will be:
array([[1.58113883, 0.70710678, 0.70710678, 1.58113883],
[1.58113883, 0.70710678, 0.70710678, 1.58113883]])
As expected.
Question
Is there a better way? I would appreciate a shorter, more-numpy oriented, or more transparent method.
A more transparent method would be to first define the center of your image, i.e. something like:
Read your array as an image in OpenCV:
img = cv2.imread("inputImage")
height, width = img.shape
x_center=(width/2)
y_center=(height/2)
Then for each pixel in your numpy/image array you can compute the distance between that pixel and the above center by computing the Euclidean distance:
D = dist.euclidean((xA, yA), (x_center, y_center))
PS: You can simply use img.shape from numpy, but OpenCV gives you many methods related to distance computation.
I do not know any specific algorithm for this other than the straightforward Numpy implementation below, using the indices function (which creates an array of indices from the shape of an array) and linalg.norm (which computes norms). Note we also use broadcasting by indexing new dimensions in center[:,None,None] (necessary because of the intrinsic output shape of indices).
import numpy as np
import pandas as pd

# Numpy function
def np_function(image):
    center = np.array([image.shape[0] / 2, image.shape[1] / 2])
    distances = np.linalg.norm(np.indices(image.shape) - center[:, None, None] + 0.5, axis=0)
    return distances

# Pandas function
def pd_function(image):
    df = pd.DataFrame(image)
    distances = \
        pd.DataFrame(
            df.apply(
                lambda row: (row.index + 0.5 - len(df.index) / 2)**2, axis=0).to_numpy() +
            df.T.apply(
                lambda row: (row.index + 0.5 - len(df.columns) / 2)**2, axis=0).T.to_numpy())\
        .apply(np.sqrt).to_numpy()
    return distances
For a 4000x6000 image, Numpy's method is more than one order of magnitude faster than the original function on my computer. You could also compute the distances from the center for one octant only and then copy the results to the remaining octants (exploiting symmetry), but this will probably only be worthwhile for big images imho.
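For illustration, a minimal sketch of that symmetry idea, computing a single quadrant (rather than an octant, for simplicity) and mirroring it; it assumes even height and width as in the question:
import numpy as np
def quadrant_distances(shape):
    # distances from each pixel centre to the image centre, computed for one
    # quadrant only and mirrored to the other three (even dimensions assumed)
    m, n = shape
    dy = np.arange(m // 2) + 0.5    # vertical offsets within the lower-right quadrant
    dx = np.arange(n // 2) + 0.5    # horizontal offsets within the lower-right quadrant
    quad = np.hypot(dy[:, None], dx[None, :])
    bottom = np.hstack([quad[:, ::-1], quad])   # mirror left-right
    return np.vstack([bottom[::-1], bottom])    # mirror top-bottom
Calling quadrant_distances((2, 4)) reproduces the array shown in the question.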

apply filters on images when there is no data pixels

I have an image that contains many no-data pixels. The image is a 2d numpy array and the no-data values are None. Whenever I try to apply filters to it, it seems like the None values are taken into account by the kernel and make my pixels disappear.
For example, I have this image:
I have tried to apply the Lee filter to it with this function (taken from Speckle (Lee Filter) in Python):
from scipy.ndimage.filters import uniform_filter
from scipy.ndimage.measurements import variance
def lee_filter(img, size):
    img_mean = uniform_filter(img, (size, size))
    img_sqr_mean = uniform_filter(img**2, (size, size))
    img_variance = img_sqr_mean - img_mean**2

    overall_variance = variance(img)

    img_weights = img_variance / (img_variance + overall_variance)
    img_output = img_mean + img_weights * (img - img_mean)
    return img_output
but the result looks like this:
with the warning:
UserWarning: Warning: converting a masked element to nan. dv =
np.float64(self.norm.vmax) - np.float64(self.norm.vmin)
I have also tried to use the library findpeaks.
from findpeaks import findpeaks
import findpeaks
#lee enhanced filter
image_lee_enhanced = findpeaks.lee_enhanced_filter(img, win_size=3, cu=0.25)
but I get the same blank image.
When I used a median filter on the same image with ndimage it worked with no problem.
My question is: how can I run those filters on the image without letting the None values disturb the results?
edit: I prefer not to set the no-data pixels to 0 because the pixel values range between -50 and 1 (they are index values). In addition, I'm afraid that if I change them to any other value (e.g. 9999) it will also influence the filter (am I wrong?)
Edit 2:
I have read Cris Luengo's answer and I have tried to apply something similar with the scipy.ndimage median filter, as I have realized that its result is distorted as well.
This is the original image:
I have tried masking the Null values:
idx = np.ma.masked_where(img,img!=None)[:,1]
median_filter_img = ndimage.median_filter(img[idx].reshape(491, 473), size=10)
zeros = np.zeros([img.shape[0],img.shape[1]])
zeros[idx] = median_filter_img
The results looks like this (color is darker to see the problem in the edges):
As can be seen, it seems like the edge values are influenced by the None values.
I have done this also with img != 0 but got the same problem.
(Just to add: the pixel values are between -35 and 1.)
If you want to apply a linear smoothing filter, then you can use the Normalized Convolution.
The basic recipe is:
Create a mask image that is 1 for the pixels with data, and 0 for the pixels without data.
Set the pixels without data to any number, for example 0. NaN is not valid because it spreads in the computations.
Apply the linear smoothing filter to the image multiplied by the mask.
Apply the linear smoothing filter to the mask.
Divide the two results.
Basically, we normalize the result of the linear smoothing filter (convolution) by the number of pixels with data within the filter window; a minimal sketch of this recipe is given at the end of this answer.
In regions where the smoothed mask is 0 (far away from data), we will divide 0 by 0, so special care needs to be taken there.
Note that normalized convolution can be used also for uncertain data, where the mask image gets values in between 0 and 1 indicating the confidence we have in each pixel. Pixels thought to be noisy can be set to a value closer to 0 than the other pixels, for example.
The recipe above is only valid for linear smoothing filters. Normalized convolution can be done with other linear filters, for example derivative filters, but the resulting recipe is different. See for example here the equation for Normalized Convolution to compute the derivative.
For non-linear filters, other approaches are necessary. Non-linear smoothing filters, for example, will often avoid affecting edges, and so will work quite well in images with missing data, if the missing pixels are set to 0, or some value far outside of the data range. The concept of keeping a mask image that indicates which pixels have data and which don't is always a good idea.
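As an illustration of the linear recipe above (not part of the original answer), here is a minimal sketch using a uniform (box) filter as the linear smoother, assuming missing pixels are marked with NaN:
import numpy as np
from scipy.ndimage import uniform_filter
def normalized_box_smooth(img, size):
    # mask is 1 where there is data, 0 where data is missing
    mask = np.isfinite(img).astype(float)
    filled = np.where(mask > 0, img, 0.0)       # missing pixels set to 0, not NaN
    num = uniform_filter(filled * mask, size)   # smoothed (image * mask)
    den = uniform_filter(mask, size)            # smoothed mask
    out = np.full(img.shape, np.nan)
    np.divide(num, den, out=out, where=den > 1e-12)   # avoid dividing 0 by 0 far from data
    return out
Regions where the smoothed mask is (numerically) zero stay NaN, which is the special care mentioned above.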
Seems like a simple solution is to set the None values to zero. I don't know how you would get around this, because most image processing kernels require some value to operate on.
a[a == None] = 0

Smoothing without filling missing values with zeros

I'd like to smooth a map that doesn't cover the full sky. This map is neither Gaussian nor zero-mean, so the default behaviour of healpy, which fills missing values with 0, leads to a bias towards lower values at the edges of the mask:
import numpy as np
import healpy as hp
nside = 128
npix = hp.nside2npix(nside)
arr = np.ones(npix)
mask = np.zeros(npix, dtype=bool)
mask[:mask.size//2] = True
arr[~mask] = hp.UNSEEN
arr_sm = hp.smoothing(arr, fwhm=np.radians(5.))
hp.mollview(arr, title='Input array')
hp.mollview(arr_sm, title='Smoothed array')
I would like to preserve the sharp edge by setting the weight of the masked values to zero, instead of setting the values to zero. This appears to be difficult because healpy performs the smoothing in harmonic space.
To be more specific, I'd like to mimic the mode keyword in scipy.ndimage.gaussian_filter(). healpy.smoothing() implicitly uses mode='constant' with cval=0, but I'd require something like mode='reflect'.
Is there any reasonable way to overcome this issue?
The easiest way to handle this is to remove the mean of the map, perform the smoothing with hp.smoothing, then add the offset back.
This works around the issue because now the map is zero-mean so zero-filling does not create a border effect.
def masked_smoothing(m, fwhm_deg=5.0):
    # make sure m is a masked healpy array
    m = hp.ma(m)
    offset = m.mean()
    smoothed = hp.smoothing(m - offset, fwhm=np.radians(fwhm_deg))
    return smoothed + offset
The other option I can think of is some iterative algorithm to fill the map in "reflect" mode before smoothing, possibly implemented in Cython or Numba. The main issue is how complex your boundary is: if it is something easy like a latitude cut then all of this is easy, but the general case is very complex and could have a lot of corner cases you need to handle:
Identify "border layers"
get all the missing pixels
find the neighbors and find which one has a valid neighbor and mark it as the "first border"
repeat this algorithm and find pixels that have a "first border" pixel neighbor and mark it as "second border"
repeat until you have all the layers you need
Fill reflected values
loop on border layers
loop on each layer pixel
find the valid neighbors, compute their barycenter, now assume that the line between the border pixel center and the barycenter is going perpendicular through the mask boundary and the mask boundary is halfway
now extend this line by doubling it in the direction inside the mask, take the interpolated value of the map at that location and assign it to the current missing pixel
repeat this for the other layers by playing with the length of the line.
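A partial, untested sketch of only the first step (identifying border layers) using healpy's neighbour lookup; the reflection and filling step is not shown, and the function name is my own:
import numpy as np
import healpy as hp
def label_border_layers(valid, nside, n_layers=3):
    # valid: boolean array, True where the map has data
    layers = np.zeros(valid.size, dtype=int)   # 0 = not (yet) part of a border layer
    known = valid.copy()
    for layer in range(1, n_layers + 1):
        missing = np.where(~known)[0]
        if missing.size == 0:
            break
        # 8 neighbours of every still-missing pixel; -1 marks "no neighbour"
        neigh = hp.get_all_neighbours(nside, missing)
        touches_known = np.any((neigh >= 0) & known[neigh.clip(0)], axis=0)
        layers[missing[touches_known]] = layer
        # the next pass builds the following layer on top of this one
        known[missing[touches_known]] = True
    return layers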
This problem is related to the following question and answer (disclaimer: from me):
https://stackoverflow.com/a/36307291/5350621
It can be transferred to your case as follows:
import numpy as np
import healpy as hp
nside = 128
npix = hp.nside2npix(nside)
# using random numbers here to see the actual smoothing
arr = np.random.rand(npix)
mask = np.zeros(npix, dtype=bool)
mask[:mask.size//2] = True
def masked_smoothing(U, rad=5.0):
    V = U.copy()
    V[U != U] = 0
    VV = hp.smoothing(V, fwhm=np.radians(rad))
    W = 0 * U.copy() + 1
    W[U != U] = 0
    WW = hp.smoothing(W, fwhm=np.radians(rad))
    return VV / WW
# setting array to np.nan to exclude the pixels in the smoothing
arr[~mask] = np.nan
arr_sm = masked_smoothing(arr)
arr_sm[~mask] = hp.UNSEEN
hp.mollview(arr, title='Input array')
hp.mollview(arr_sm, title='Smoothed array')

Straighten B-Spline

I've interpolated a spline to fit pixel data from an image with a curve that I would like to straighten. I'm not sure what tools are appropriate to solve this problem. Can someone recommend an approach?
Here's how I'm getting my spline:
import numpy as np
from skimage import io
from scipy import interpolate
import matplotlib.pyplot as plt
from sklearn.neighbors import NearestNeighbors
import networkx as nx
# Read a skeletonized image, return an array of points on the skeleton, and divide them into x and y coordinates
skeleton = io.imread('skeleton.png')
curvepoints = np.where(skeleton==False)
xpoints = curvepoints[1]
ypoints = -curvepoints[0]
# reformats x and y coordinates into a 2-dimensional array
inputarray = np.c_[xpoints, ypoints]
# runs a nearest neighbors algorithm on the coordinate array
clf = NearestNeighbors(2).fit(inputarray)
G = clf.kneighbors_graph()
T = nx.from_scipy_sparse_matrix(G)
# sorts coordinates according to their nearest neighbors order
order = list(nx.dfs_preorder_nodes(T, 0))
xx = xpoints[order]
yy = ypoints[order]
# Loops over all points in the coordinate array as origin, determining which results in the shortest path
paths = [list(nx.dfs_preorder_nodes(T, i)) for i in range(len(inputarray))]
mindist = np.inf
minidx = 0
for i in range(len(inputarray)):
    p = paths[i]             # order of nodes
    ordered = inputarray[p]  # ordered nodes
    # find cost of that order by the sum of euclidean distances between points (i) and (i+1)
    cost = (((ordered[:-1] - ordered[1:])**2).sum(1)).sum()
    if cost < mindist:
        mindist = cost
        minidx = i
opt_order = paths[minidx]
xxx = xpoints[opt_order]
yyy = ypoints[opt_order]
# fits a spline to the ordered coordinates
tckp, u = interpolate.splprep([xxx, yyy], s=3, k=2, nest=-1)
xpointsnew, ypointsnew = interpolate.splev(np.linspace(0,1,270), tckp)
# prints spline variables
print(tckp)
# plots the spline
plt.plot(xpointsnew, ypointsnew, 'r-')
plt.show()
My broader project is to follow the approach outlined in A novel method for straightening curved text-lines in stylistic documents. That article is reasonably detailed about finding the line that describes curved text, but much less so where straightening the curve is concerned. The only reference to straightening that I can find, and which I have trouble visualizing, is in the abstract:
find the angle between the normal at a point on the curve and the vertical line, and finally visit each point on the text and rotate by their corresponding angles.
I also found Geometric warp of image in python, which seems promising. If I could rectify the spline, I think that would allow me to set a range of target points for the affine transform to map to. Unfortunately, I haven't found an approach to rectify my spline and test it.
Finally, this program implements an algorithm to straighten splines, but the paper on the algorithm is behind a paywall and I can't make sense of the JavaScript.
Basically, I'm lost and in need of pointers.
Update
The affine transformation was the only approach I had any idea how to start exploring, so I've been working on that since I posted. I generated a set of destination coordinates by performing an approximate rectification of the curve based on the euclidean distance between points on my b-spline.
From where the last code block left off:
from scipy import spatial
from skimage.transform import PiecewiseAffineTransform

# calculate euclidean distances between adjacent points on the curve
newcoordinates = np.c_[xpointsnew, ypointsnew]
l = len(newcoordinates) - 1
pointsteps = []
for index, obj in enumerate(newcoordinates):
    if index < l:
        ord1 = np.c_[newcoordinates[index][0], newcoordinates[index][1]]
        ord2 = np.c_[newcoordinates[index + 1][0], newcoordinates[index + 1][1]]
        length = spatial.distance.cdist(ord1, ord2)
        pointsteps.append(length)
# calculate euclidean distance between first point and each consecutive point
xpositions = np.asarray(pointsteps).cumsum()
# compose target coordinates for the line after the transform
targetcoordinates = [(0, 0), ]
for element in xpositions:
    targetcoordinates.append((element, 0))
# perform affine transformation with newcoordinates as control points and targetcoordinates as target coordinates
tform = PiecewiseAffineTransform()
tform.estimate(newcoordinates, targetcoordinates)
I'm presently hung up on errors with the affine transform (scipy.spatial.qhull.QhullError: QH6154 Qhull precision error: Initial simplex is flat (facet 1 is coplanar with the interior point)), but I'm not sure whether it's because of a problem with how I'm feeding the data in, or because I'm abusing the transform to do my projection.
I got the same error as you when using scipy.spatial.ConvexHull.
First, let me explain my project: what I wanted to do was to segment people from the background (image matting). In my code, I first read an image and a trimap, then according to the trimap I segment the original image into foreground, background and unknown pixels. Here is part of the code:
img = scipy.misc.imread('sweater_black.png')  # color image
trimap = scipy.misc.imread('sw_trimap.png', flatten=True)  # trimap
bg = trimap == 0  # background
fg = trimap == 255  # foreground
unknown = True ^ np.logical_or(fg, bg)  # unknown pixels
fg_px = img[fg]  # here I got the rgb values of the foreground pixels, then sent them to ConvexHull
fg_hull = scipy.spatial.ConvexHull(fg_px)
But I got an error here. So I checked the array fg_px and found that it is n*4, which means every point I send to ConvexHull has four values. However, the input to ConvexHull here should be 3-dimensional.
I traced the error and found that the input color image is 32-bit (rgb channels plus an alpha channel). After converting the image to 24-bit (rgb channels only), the code works.
In one sentence: check the shape of your ConvexHull input - mine was n*4 because of the alpha channel. Hope this works for you~
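For reference, the equivalent conversion can also be done in numpy by dropping the alpha channel before building the hull (a minimal sketch, assuming img is an n*m*4 RGBA array):
rgb = img[:, :, :3]                        # keep only the RGB channels
fg_px = rgb[fg]                            # foreground pixels, now n*3
fg_hull = scipy.spatial.ConvexHull(fg_px)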

Peak detection in a noisy 2d array

I'm trying to get python to return, as close as possible, the center of the most obvious clustering in an image like the one below:
In my previous question I asked how to get the global maximum and the local maxima of a 2d array, and the answers given worked perfectly. The issue is that the center estimation I can get by averaging the global maximum obtained with different bin sizes is always slightly off from the one I would set by eye, because I'm only accounting for the biggest bin instead of a group of the biggest bins (like one does by eye).
I tried adapting the answer to this question to my problem, but it turns out my image is too noisy for that algorithm to work. Here's my code implementing that answer:
import numpy as np
from scipy.ndimage.filters import maximum_filter
from scipy.ndimage.morphology import generate_binary_structure, binary_erosion
import matplotlib.pyplot as pp
from os import getcwd
from os.path import join, realpath, dirname
# Save path to dir where this code exists.
mypath = realpath(join(getcwd(), dirname(__file__)))
myfile = 'data_file.dat'
x, y = np.loadtxt(join(mypath,myfile), usecols=(1, 2), unpack=True)
xmin, xmax = min(x), max(x)
ymin, ymax = min(y), max(y)
rang = [[xmin, xmax], [ymin, ymax]]
paws = []
for d_b in range(25, 110, 25):
    # Number of bins in x,y given the bin width 'd_b'
    binsxy = [int((xmax - xmin) / d_b), int((ymax - ymin) / d_b)]

    H, xedges, yedges = np.histogram2d(x, y, range=rang, bins=binsxy)
    paws.append(H)
def detect_peaks(image):
    """
    Takes an image and detects the peaks using the local maximum filter.
    Returns a boolean mask of the peaks (i.e. 1 when
    the pixel's value is the neighborhood maximum, 0 otherwise)
    """
    # define an 8-connected neighborhood
    neighborhood = generate_binary_structure(2, 2)

    # apply the local maximum filter; all pixels of maximal value
    # in their neighborhood are set to 1
    local_max = maximum_filter(image, footprint=neighborhood) == image
    # local_max is a mask that contains the peaks we are
    # looking for, but also the background.
    # In order to isolate the peaks we must remove the background from the mask.

    # we create the mask of the background
    background = (image == 0)

    # a little technicality: we must erode the background in order to
    # successfully subtract it from local_max, otherwise a line will
    # appear along the background border (artifact of the local maximum filter)
    eroded_background = binary_erosion(background, structure=neighborhood, border_value=1)

    # we obtain the final mask, containing only peaks,
    # by removing the background from the local_max mask
    detected_peaks = local_max & ~eroded_background

    return detected_peaks
# applying the detection and plotting results
for i, paw in enumerate(paws):
    detected_peaks = detect_peaks(paw)
    pp.subplot(4, 2, (2*i+1))
    pp.imshow(paw)
    pp.subplot(4, 2, (2*i+2))
    pp.imshow(detected_peaks)

pp.show()
and here's the result of that (varying the bin size):
Clearly my background is too noisy for that algorithm to work, so the question is: how can I make that algorithm less sensitive? If an alternative solution exists then please let me know.
EDIT
Following Bi Rico's advice I attempted smoothing my 2d array before passing it on to the local maximum finder, like so:
from scipy.ndimage import gaussian_filter

H, xedges, yedges = np.histogram2d(x, y, range=rang, bins=binsxy)
H1 = gaussian_filter(H, 2, mode='nearest')
paws.append(H1)
These were the results with a sigma of 2, 4 and 8:
EDIT 2
mode='constant' seems to work much better than 'nearest'. It converges to the right center with sigma=2 for the largest bin size:
So, how do I get the coordinates of the maximum that shows in the last image?
Answering the last part of your question: whenever you have point sources in an image, you can find their coordinates by searching, in some order, for the local maxima of the image. In case your data is not a point source, you can apply a mask to each peak so that its neighborhood is not picked up as a maximum in a later search. I propose the following code:
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
import copy
def get_std(image):
    return np.std(image)

def get_max(image, sigma, alpha=20, size=10):
    i_out = []
    j_out = []
    image_temp = copy.deepcopy(image)
    while True:
        k = np.argmax(image_temp)
        j, i = np.unravel_index(k, image_temp.shape)
        if image_temp[j, i] >= alpha*sigma:
            i_out.append(i)
            j_out.append(j)
            x = np.arange(i-size, i+size)
            y = np.arange(j-size, j+size)
            xv, yv = np.meshgrid(x, y)
            image_temp[yv.clip(0, image_temp.shape[0]-1),
                       xv.clip(0, image_temp.shape[1]-1)] = 0
            print(xv)
        else:
            break
    return i_out, j_out

# reading the image
image = mpimg.imread('ggd4.jpg')
# computing the standard deviation of the image
sigma = get_std(image)
# getting the peaks
i, j = get_max(image[:, :, 0], sigma, alpha=10, size=10)
# let's see the results
plt.imshow(image, origin='lower')
plt.plot(i, j, 'ro', markersize=10, alpha=0.5)
plt.show()
The image ggd4 for the test can be downloaded from:
http://www.ipac.caltech.edu/2mass/gallery/spr99/ggd4.jpg
The first part is to get some information about the noise in the image. I did it by computing the standard deviation of the full image (actually it is better to select a small rectangle without signal). This tells us how much noise is present in the image.
The idea for getting the peaks is to ask for successive maxima which are above a certain threshold (let's say, 3, 4, 5, 10, or 20 times the noise). This is what the function get_max is actually doing. It performs the search for maxima until one of them falls below the threshold imposed by the noise. In order to avoid finding the same maximum many times it is necessary to remove the peaks from the image. In general, the shape of the mask used to do so depends strongly on the problem one wants to solve. For the case of stars, it should be good to remove the star by using a Gaussian function, or something similar. For simplicity I have chosen a square mask, and its size (in pixels) is the variable "size".
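For the noise estimate, the "small rectangle without signal" variant mentioned above could look like the following; the patch coordinates are an arbitrary assumption and should be chosen from a signal-free region of your own image:
# use a signal-free corner of the image instead of the full frame
noise_patch = image[:50, :50, 0]
sigma = np.std(noise_patch)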
I think that from this example, anybody can improve the code by adding more general things.
EDIT:
The original image looks like:
While the image after identifying the luminous points looks like this:
Too much of a n00b on Stack Overflow to comment on Alejandro's answer elsewhere here. I would refine his code a bit to use a preallocated numpy array for output:
def get_max(image, sigma, alpha=3, size=10):
    from copy import deepcopy
    import numpy as np
    # preallocate a lot of peak storage
    k_arr = np.zeros((10000, 2))
    image_temp = deepcopy(image)
    peak_ct = 0
    while True:
        k = np.argmax(image_temp)
        j, i = np.unravel_index(k, image_temp.shape)
        if image_temp[j, i] >= alpha*sigma:
            k_arr[peak_ct] = [j, i]
            # this is the part that masks already-found peaks.
            x = np.arange(i-size, i+size)
            y = np.arange(j-size, j+size)
            xv, yv = np.meshgrid(x, y)
            # the clip here handles edge cases where the peak is near the
            # image edge
            image_temp[yv.clip(0, image_temp.shape[0]-1),
                       xv.clip(0, image_temp.shape[1]-1)] = 0
            peak_ct += 1
        else:
            break
    # trim the output for only what we've actually found
    return k_arr[:peak_ct]
Profiling this against Alejandro's code using his example image, this code is about 33% faster (0.03 sec for Alejandro's code, 0.02 sec for mine). I expect that on images with larger numbers of peaks it would be even faster - appending the output to a list gets slower and slower as more peaks are found.
I think the first step needed here is to express the values in H in terms of the standard deviation of the field:
import numpy as np
H = H / np.std(H)
Now you can put a threshold on the values of this H. If the noise is assumed to be Gaussian, picking a threshold of 3 you can be quite sure (99.7%) that this pixel can be associated with a real peak and not noise. See here.
Now the further selection can start. It is not exactly clear to me what exactly you want to find. Do you want the exact location of peak values? Or do you want one location for a cluster of peaks which is in the middle of this cluster?
Anyway, starting from this point with all pixel values expressed in standard deviations of the field, you should be able to get what you want. If you want to find clusters you could perform a nearest-neighbour search on the >3-sigma gridpoints and put a threshold on the distance, i.e. only connect them when they are close enough to each other. If several gridpoints are connected you can define this as a group/cluster and calculate some (sigma-weighted?) center of the cluster.
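A minimal sketch of that clustering step, substituting scipy.ndimage.label for the explicit nearest-neighbour search (my assumption, not part of the original suggestion) and taking a sigma-weighted centre per group:
import numpy as np
from scipy import ndimage
# H is the 2D histogram already divided by np.std(H) as above
mask = H > 3                                   # keep only the >3-sigma bins
labels, n_groups = ndimage.label(mask)         # connect adjacent bins into groups
# sigma-weighted centre of each group, in bin coordinates
centres = ndimage.center_of_mass(H, labels, index=range(1, n_groups + 1))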
Hope my first contribution on Stackoverflow is useful for you!
The way I would do it:
1) normalize H between 0 and 1.
2) pick a threshold value, as tcaswell suggests. It could be between .9 and .99 for example
3) use masked arrays to keep only the x,y coordinates with H above threshold:
import numpy.ma as ma
x_masked = ma.masked_array(x, mask=H < threshold)
y_masked = ma.masked_array(y, mask=H < threshold)
4) now you can take a weighted average over the masked coordinates, with weights something like (H-threshold)^2, or any other power greater than or equal to one, depending on your taste/tests.
Comment:
1) This is not robust with respect to the type of peaks you have, since you may have to adapt the threshold. This is the minor problem;
2) This DOES NOT work with two peaks as it is, and will give wrong results if the 2nd peak is above threshold.
Nonetheless, it will always give you an answer without crashing (with pros and cons of the thing..)
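A rough sketch of steps 1-4 applied to the binned histogram (using the bin centres rather than the raw x, y samples, which is a simplification on my part):
import numpy as np
# 1) normalise H between 0 and 1
Hn = (H - H.min()) / (H.max() - H.min())
# 2) pick a threshold
threshold = 0.95
# 3) keep only the bins above the threshold
xc = 0.5 * (xedges[:-1] + xedges[1:])         # bin centres along x
yc = 0.5 * (yedges[:-1] + yedges[1:])         # bin centres along y
XX, YY = np.meshgrid(xc, yc, indexing='ij')   # H from histogram2d is indexed [x, y]
keep = Hn > threshold
# 4) weighted average of the surviving bin centres
w = (Hn[keep] - threshold) ** 2
x_cent = np.average(XX[keep], weights=w)
y_cent = np.average(YY[keep], weights=w)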
I'm adding this answer because it's the solution I ended up using. It's a combination of Bi Rico's comment here (May 30 at 18:54) and the answer given in this question: Find peak of 2d histogram.
As it turns out using the peak detection algorithm from this question Peak detection in a 2D array only complicates matters. After applying the Gaussian filter to the image all that needs to be done is to ask for the maximum bin (as Bi Rico pointed out) and then obtain the maximum in coordinates.
So instead of using the detect-peaks function as I did above, I simply add the following code after the Gaussian 2D histogram is obtained:
# Get 2D histogram.
H, xedges, yedges = np.histogram2d(x, y, range=rang, bins=binsxy)
# Get Gaussian filtered 2D histogram.
H1 = gaussian_filter(H, 2, mode='nearest')
# Get center of maximum in bin coordinates.
x_cent_bin, y_cent_bin = np.unravel_index(H1.argmax(), H1.shape)
# Get center in x,y coordinates.
x_cent_coord, y_cent_coord = np.average(xedges[x_cent_bin:x_cent_bin + 2]), np.average(yedges[y_cent_bin:y_cent_bin + 2])
