Visualize SimpleITK coordinates on Paraview - python

I am trying to make a SimpleITK image where specific voxel values are 1 and the rest are 0. I am new to SimpleITK, so I feel I am missing something.
Anyway, I have some generated indices to which I assign a voxel value of 1. However, I want to be able to visualise how these samples are oriented with respect to each other in space. I have tried multiple approaches, from transforming an array full of zeroes (with the required indices set to 1) to a NIfTI image, but I am still not able to visualise it and see how these points look.
Below is a basic code snippet I have tried
import numpy as np
import SimpleITK as sitk

def WriteSampleToDisk():
    """Creates an empty image, assigns the generated samples a voxel value of 1
    and writes it to disk. Returns the image written to disk."""
    img = sitk.Image(512, 512, 416, sitk.sitkInt16)
    img.SetOrigin((0, 0, 0))
    img.SetSpacing((1, 1, 1))
    # Some code to get indices (dimx, dimy, dimz)
    for i in range(len(dimx)):  # Same number of elements in every index dimension
        # The indices were generated in numpy (z, y, x) order, so they are
        # reversed here for SimpleITK's (x, y, z) indexing convention.
        img.SetPixel(dimz[i], dimy[i], dimx[i], 1)
    arr = sitk.GetArrayFromImage(img)
    print(np.argwhere(arr == 1))  # This prints the indices where the voxel value was set to 1
    sitk.WriteImage(img, "image.nii")
    return img
However, when I try to view it in ParaView, even after setting the threshold, I still get nothing. What could be the reason for this? Is there a way to circumvent this problem?

Your voxel type is Int16, which has a range of -32768 to 32767, but you're setting your voxels to 1. Given that intensity range, 1 is not that different from 0, so it's pretty much going to look the same as 0.
Try setting your 'on' voxels to 32767. Also, you might want to dilate your image after setting the voxels: a one-voxel dot will be very small and difficult to see. Run the BinaryDilateImageFilter to grow the size of your dots, as sketched below.
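A minimal sketch of that dilation step (assuming img is the image from the question, with its 'on' voxels set to 32767):
import SimpleITK as sitk

dilate = sitk.BinaryDilateImageFilter()
dilate.SetKernelRadius([3, 3, 3])   # grow each dot by 3 voxels in every direction
dilate.SetForegroundValue(32767)    # the value the dots were set to
dilated = dilate.Execute(img)
sitk.WriteImage(dilated, "image_dilated.nii")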
UPDATE: OK, here's an example that I've written. It creates a UInt8 volume and sets random dots in it to 255. Then it creates a distance map volume from the dots volume.
import random
import SimpleITK as sitk

random.seed()
img = sitk.Image([512, 512, 512], sitk.sitkUInt8)
for i in range(1000):
    x = random.randrange(512)
    y = random.randrange(512)
    z = random.randrange(512)
    print(x, y, z)
    img[x, y, z] = 255
sitk.WriteImage(img, "dots.nrrd")
dist = sitk.SignedMaurerDistanceMap(img)
sitk.WriteImage(dist, "dots.dist.nrrd")
sitk.Show(dist)
For the SignedMaurerDistanceMap, the voxels can be any value, as long as it's not the background value.
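If the raw distance map is still hard to inspect in ParaView, one option (my suggestion, not part of the original answer) is to threshold it into solid spheres around each dot:
# Keep everything within 5 voxels of a dot; the distance map values are in voxels.
shells = sitk.BinaryThreshold(dist, -5.0, 5.0, 255, 0)
sitk.WriteImage(shells, "dots.shells.nrrd")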

Related

How do I normalize the pixel value of an image to 0~1?

The type of my train_data is 'array of uint16'. The size is (96108, 7, 7), so there are 96108 images.
The images are different from general images. Each image comes from a 7x7 sensor, so its 49 pixels contain counts of detected light, and one image is the number of detections over 0 to 1 second. Since the sensor detects randomly over that unit of time, the maximum pixel values are all different.
If the max value of all images were 255, I could do train_data/255, but I can't use a single division because the max value of each of my images is different.
I want to make the pixel value of all images 0 to 1.
What should I do?
Contrast Normalization (or contrast stretching) should not be confused with Data Normalization, which maps data into the 0.0-1.0 range.
Data Normalization
We use the following formula to normalize data:

x_i' = (x_i - min()) / (max() - min())

The min() and max() values are the possible minimum and maximum values supported by the data type. When we use it with images, x is the whole image and x_i is an individual pixel of that image. If you are using an 8-bit image, min() and max() become 0 and 255 respectively. These should not be confused with the minimum and maximum values present within your particular image.
To convert an 8-bit image into a floating-point image, since min() is 0, the math simplifies to image/255:
img = img/255
NumPy methods like to output arrays as 64-bit floating point by default. To effectively test methods applied to 8-bit images with NumPy, an 8-bit array is required as the input:
image = np.random.randint(0,255, (7,7), dtype=np.uint8)
normalized_image = image/255
When we examine the output of the above two lines, we can see that the maximum value of the image is 252, which has now been mapped to 0.9882352941176471 in the 64-bit normalized image.
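A quick way to check this (the exact values depend on the random array):
print(image.max())             # e.g. 252
print(normalized_image.max())  # e.g. 0.9882352941176471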
However, in most cases you won't need a 64-bit image. You can output (in other words, cast) it to 32-bit (or 16-bit) using the following code. If you try to cast it to 8-bit it will throw an error. Using '/' for division is shorthand for np.true_divide, but it lacks the ability to define the output data format.
normalized_image_2 = np.true_divide(image, 255, dtype=np.float32)
The properties of the new array are shown below. You can see the number of digits is now reduced, and 252 has been remapped to 0.9882353.
Contrast Normalization
The method shown by @3dSpatialUser effectively does a partial contrast normalization, meaning it stretches the intensities of the image within the available intensity range. Test it with an 8-bit array using the following code:
c_image = np.random.randint(64,128, (7,7), dtype=np.uint8)
cn_image = (c_image - c_image.min()) / (c_image.max()- c_image.min())
Contrast is now stretched, mapping the minimum of 64 to 0.0 and the maximum of 127 to 1.0.
The formula for contrast normalization is:

x' = (x - min(x)) / (max(x) - min(x))

Using the above formula with NumPy, and to remap the data back to the 8-bit input format after contrast normalization, the image should be multiplied by 255 and the data type changed back to uint8:
cn_image_correct = (c_image - c_image.min()) / (c_image.max()- c_image.min()) * 255
cn_image_correct = cn_image_correct.astype(np.uint8)
64 is now mapped to 0 and 127 is mapped to 255, stretching the contrast.
Where the confusion arises
In most applications, the intensity values of an image are spread between the image's own minimum and maximum rather than the full range of the data type. Hence, when we apply the normalization formula using the min and max values present within the image, instead of the min and max of the available range, it will output a better-looking image (in most cases) within the 0.0-1.0 range, effectively normalizing both data and contrast at the same time. Also, image-editing software performs gamma corrections or remapping when switching between 8/16/32-bit image data types.
import numpy as np

# Example data standing in for the (96108, 7, 7) array.
data = np.random.normal(loc=0, scale=1, size=(96108, 7, 7))
# Per-image min and max (keepdims=True so they broadcast back over the images).
data_min = np.min(data, axis=(1, 2), keepdims=True)
data_max = np.max(data, axis=(1, 2), keepdims=True)
scaled_data = (data - data_min) / (data_max - data_min)
EDIT: I have voted for the other answer since that is a cleaner way (in my opinion) to do it, but the principles are the same.
EDIT v2: I saw the comment and I see the difference. I will rewrite my code so it is "cleaner", with fewer extra variables, but still correct, using min/max:
data -= data.min(axis=(1,2), keepdims=True)
data /= data.max(axis=(1,2), keepdims=True)
First the minimum value is shifted to zero; thereafter the maximum of the shifted data gives the full range (max - min) of the specific image.
After this step, np.array_equal(data, scaled_data) evaluates to True.
You can gather the maximum values with np.ndarray.max across multiple axes: here axis=1 and axis=2 (i.e. on each image individually). Then normalize the initial array with it. To avoid having to broadcast this array of maxima yourself, you can use the keepdims option:
>>> x = np.random.rand(96108,7,7)
>>> x.max(axis=(1,2), keepdims=True).shape
(96108, 1, 1)
While x.max(axis=(1,2)) alone would have returned an array shaped (96108,)...
Such that you can then do:
>>> x /= x.max(axis=(1,2), keepdims=True)

Converting point cloud data (.ply) into a range image

I have a point cloud with XYZ data. I have read the .ply file using pyntcloud and converted it into a numpy array of shape (553181, 3).
I want to convert the point cloud into, e.g., an 800x600x3 matrix that can also be treated as an RGB image. For this, I scaled the XY coordinates to the [0, 800] and [0, 600] ranges.
So far:
I have normalized the x and y coordinates to the ranges (0, 800) and (0, 600).
I have created data bins of size 800 and 600 and stored the respective x and y coordinate points.
I don't know how to map these points to get a range image.
I am new to Python and would greatly appreciate help and guidance.
I don't understand how you could possibly treat a 3D point cloud as a 2D image. Anyway, you could use open3d to visualize your point cloud and store it in an .xyzrgb file if you have RGB data, which it seems you don't, since you converted the file into a NumPy array with 3 columns that have to be the x, y and z values. Therefore, you may need RGB values, or you can give random values to the point cloud. A good way to do that is using pptk, which allows you to generate RGB colors on a point cloud and to render images (screenshots) in a built-in viewer (I assume that is what you need).
A simple workflow could be this:
import random
import numpy as np
import pptk

xyz = pptk.rand(100, 3)  # generates a 3-dimensional array; this could also be
                         # a numpy array - your (553181, 3) array
v = pptk.viewer(xyz)
rgb = pptk.rand(100, 3)  # same here, must be the same size as what you used
# ...or you can create a single random color of the shape you need:
r = np.full(shape=100, fill_value=random.randint(1, 255))
g = np.full(shape=100, fill_value=random.randint(1, 255))
b = np.full(shape=100, fill_value=random.randint(1, 255))
rgb = np.column_stack((r, g, b))  # shape (100, 3); np.dstack would give (1, 100, 3)
colors = rgb / 256  # the values need to be in the 0-1 interval
v.attributes(colors)
v.capture('screenshot.png')
Perhaps you would like to read massive-3d-point-clouds-visualization-in-python. It describes more or less what I put in the example above, which is just a short form of it.
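For the range-image mapping the question actually asks about, here is a minimal sketch of one way to do it (my own illustration, not part of the original answer): scale XY to pixel indices and keep the nearest Z per pixel.
import numpy as np

# `points` stands in for the (553181, 3) array read with pyntcloud.
points = np.random.rand(553181, 3)
w, h = 800, 600
x, y, z = points[:, 0], points[:, 1], points[:, 2]

# Scale x to [0, w-1] and y to [0, h-1] integer pixel indices.
u = ((x - x.min()) / (x.max() - x.min()) * (w - 1)).astype(int)
v = ((y - y.min()) / (y.max() - y.min()) * (h - 1)).astype(int)

# Range image: keep the smallest z (nearest point) that falls in each pixel.
range_img = np.full((h, w), np.inf)
np.minimum.at(range_img, (v, u), z)
range_img[np.isinf(range_img)] = 0  # pixels with no points -> 0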

2D X-ray reconstruction from 3D DICOM images

I need to write a python function or class with the following Input/Output
Input :
The position of the X-rays source (still not sure why it's needed)
The position of the board (still not sure why it's needed)
A three dimensional CT-Scan
Output :
A 2D X-ray Scan (simulate an X-Ray Scan which is a scan that goes through the whole body)
A few important remarks to what I'm trying to achieve:
You don’t need additional information from the real world or any advanced knowledge.
You can add any input parameter that you see fit.
If your method produces artifacts, you are expected to fix them.
Please explain every step of your method.
What I've done until now: (.py file added)
I've read the .dicom files, which are located in "Case2" folder.
These .dicom files can be downloaded from my Google Drive:
https://drive.google.com/file/d/1lHoMJgj_8Dt62JaR2mMlK9FDnfkesH5F/view?usp=sharing
I've sorted the files by their position.
Finally, I've created a 3D array and added all the images to that array in order to plot the results (you can see them in the added image) - they are slices of the CT scan. (Reference: https://pydicom.github.io/pydicom/stable/auto_examples/image_processing/reslice.html#sphx-glr-auto-examples-image-processing-reslice-py)
Here's the full code:
import pydicom as dicom
import os
import matplotlib.pyplot as plt
import sys
import glob
import numpy as np

path = "./Case2"
ct_images = os.listdir(path)
slices = [dicom.read_file(path + '/' + s, force=True) for s in ct_images]
slices[0].ImagePositionPatient[2]
slices = sorted(slices, key=lambda x: x.ImagePositionPatient[2])
#print(slices)

# Read a dicom file with a ctx manager
with dicom.dcmread(path + '/' + ct_images[0]) as ds:
    # plt.imshow(ds.pixel_array, cmap=plt.cm.bone)
    print(ds)
    #plt.show()

fig = plt.figure()
for num, each_slice in enumerate(slices[:12]):
    y = fig.add_subplot(3, 4, num + 1)
    #print(each_slice)
    y.imshow(each_slice.pixel_array)
plt.show()

for i in range(len(ct_images)):
    with dicom.dcmread(path + '/' + ct_images[i], force=True) as ds:
        plt.imshow(ds.pixel_array, cmap=plt.cm.bone)
        plt.show()

# pixel aspects, assuming all slices are the same
ps = slices[0].PixelSpacing
ss = slices[0].SliceThickness
ax_aspect = ps[1]/ps[0]
sag_aspect = ps[1]/ss
cor_aspect = ss/ps[0]

# create 3D array
img_shape = list(slices[0].pixel_array.shape)
img_shape.append(len(slices))
img3d = np.zeros(img_shape)

# fill 3D array with the images from the files
for i, s in enumerate(slices):
    img2d = s.pixel_array
    img3d[:, :, i] = img2d

# plot 3 orthogonal slices
a1 = plt.subplot(2, 2, 1)
plt.imshow(img3d[:, :, img_shape[2]//2])
a1.set_aspect(ax_aspect)

a2 = plt.subplot(2, 2, 2)
plt.imshow(img3d[:, img_shape[1]//2, :])
a2.set_aspect(sag_aspect)

a3 = plt.subplot(2, 2, 3)
plt.imshow(img3d[img_shape[0]//2, :, :].T)
a3.set_aspect(cor_aspect)

plt.show()
The result isn't what I wanted because:
These are slices of the CT scan. I need to simulate an X-ray scan, which is a scan that goes through the whole body.
I would love your help to simulate an X-ray scan that goes through the body.
I've read that it could be done in the following way: "A normal 2D X-ray image is a sum projection through the volume. Send parallel rays through the volume and add up the densities." I'm just not sure how to accomplish that in code.
References that may help: https://pydicom.github.io/pydicom/stable/index.html
EDIT: as further answers noted, this solution yields a parallel projection, not a perspective projection.
From what I understand of the definition of "A normal 2D X-ray image", this can be done by summing each density for each pixel, for each slice of a projection in a given direction.
With your 3D volume, this means performing a sum over a given axis, which can be done with ndarray.sum(axis) in numpy.
# plot 3 orthogonal slices
a1 = plt.subplot(2, 2, 1)
plt.imshow(img3d.sum(2), cmap=plt.cm.bone)
a1.set_aspect(ax_aspect)
a2 = plt.subplot(2, 2, 2)
plt.imshow(img3d.sum(1), cmap=plt.cm.bone)
a2.set_aspect(sag_aspect)
a3 = plt.subplot(2, 2, 3)
plt.imshow(img3d.sum(0).T, cmap=plt.cm.bone)
a3.set_aspect(cor_aspect)
plt.show()
This yields the following result:
Which, to me, looks like an X-ray image.
EDIT: the result is a bit too "bright", so you may want to apply gamma correction. With matplotlib, import matplotlib.colors as colors and pass a colors.PowerNorm(gamma_value) as the norm parameter to plt.imshow:
plt.imshow(img3d.sum(0).T, norm=colors.PowerNorm(gamma=3), cmap=plt.cm.bone)
Result:
The way I understand the task, you are expected to write a ray tracer that follows the X-rays from the source (that's why you need its position) to the projection plane (that's why you need its position).
Sum up the values as you go and map them to the allowed grey values at the end.
Take a look at line drawing algorithms to see how you can do this.
It is really no black magic, I have done this kind of stuff more than 30 years ago. Damn, I'm old...
What you want is a perspective projection instead of a parallel projection. In order to obtain this, you need to know which values to sum for each point on the projection plane. There are multiple considerations to keep in mind:
We are talking about voxels, so you need a method to determine whether a certain point in space belongs to a certain voxel in your volume.
A line between two points is straight, but because voxels are a discrete representation of space, different methods of determining the above can lead to different (mostly minor) results. This difference will ultimately also lead to slightly different images depending on the algorithms used. This is expected.
Let's say you have a CT scan volume comprising 256 slices of 512x512 pixels. This gives you a volume of 512x512x256 voxels. For each of these voxels you need to know its position in x, y, z coordinates. You can do this as follows:
- Use the ImagePositionPatient attribute to find out the x,y,z coordinate of the upper left hand corner pixel in mm for a given slice.
- Use the PixelSpacing attribute to calculate the x,y,z coordinates of the other pixels in your slice. Repeat for all slices
Edit: I just found a counterexample against the method below; the rest is still helpful. Will update.
Now to find out for a given point (Xa, Ya, Za) what voxel values need to be summed if the source is at (Xb, Yb, Zb):
Find the voxel that belongs to (Xa,Ya, Za). Keep pixel/voxel data.
Calculate (you can do this with NumPy) the distance between voxel (Xa, Ya, Za) and (Xb, Yb, Zb). There is an optimization possible here :)
For all directly surrounding voxels (that will be 3x3x3-1 = 26 voxels), also calculate this distance. This can also be optimized :)
Take the voxel with the shortest distance as the starting point for the next iteration of the above. Add its pixel/voxel data.
Repeat until you are out of bounds of your CT volume.
In order to obtain a projection repeat these steps for all points on your projection plane and visualize the result. Good luck with your assignment! :)
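As an illustration only, here is a minimal sketch of that idea. It uses a simpler fixed-step ray sampling with nearest-voxel lookup rather than the exact shortest-distance walk described above, and it assumes the source and detector are already expressed in voxel index coordinates; the function and parameter names are my own.
import numpy as np

def perspective_projection(vol, source, det_origin, det_u, det_v,
                           det_shape=(128, 128), n_samples=256):
    # vol: 3D density array; source and det_origin: (3,) arrays in voxel
    # index coordinates; det_u and det_v: (3,) step vectors spanning the plane.
    out = np.zeros(det_shape)
    for r in range(det_shape[0]):
        for c in range(det_shape[1]):
            target = det_origin + r * det_v + c * det_u   # detector pixel position
            pts = np.linspace(source, target, n_samples)  # sample points along the ray
            idx = np.round(pts).astype(int)               # nearest-voxel indices
            # keep only samples that fall inside the volume
            inside = np.all((idx >= 0) & (idx < vol.shape), axis=1)
            out[r, c] = vol[tuple(idx[inside].T)].sum()   # sum densities along the ray
    return out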

Generate a gradient from extrema of a float Image

As the title states, I need to generate a gradient from the lowest value to the highest value of a given float image. This shall serve as a legend for the image.
My idea is to create an image and then fill every pixel with a value within the range of the extrema.
I am still not a pro in Python, so any help would be nice.
What I've got so far:
from PIL import Image

im = Image.open('path_to.tiff')
extrw = im.getextrema()
grad = Image.new('F', (10, 100))
pix = grad.load()
for i in range(grad.size[0]):      # for every pixel:
    for j in range(grad.size[1]):
        pix[i, j] = (some_float)   # a value derived from the extrema
As you can see, I need to somehow use the extrema to get the float values into the pixels accordingly, to create a gradient.
It would be nice if I could stay within the PIL library.
Thank you!
After some research I found that this could be a linear function. All I had to do was put the minimum, maximum and length into the mathematical form y = m*x + b.
With the help of a colleague we figured out: y (the pixel value) = ((minvalue - maxvalue) / h) * index_of_pixel + maxvalue, where h is the height of the image. In code form:
extw = im.getextrema()
grad_b = 10                    # width of the legend strip
im_h = im.size[1]              # height of the image
grad = Image.new('F', (grad_b, im_h))
pix = grad.load()
for i in range(grad.size[0]):  # for every pixel:
    for j in range(grad.size[1]):
        pix[i, j] = ((extw[0] - extw[1]) / im_h) * j + extw[1]
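A possible alternative sketch: the same linear ramp built with NumPy instead of per-pixel loops (assuming im is the opened float image):
import numpy as np
from PIL import Image

lo, hi = im.getextrema()
# One column running from max at the top to min at the bottom,
# tiled into a 10-pixel-wide strip.
ramp = np.linspace(hi, lo, im.size[1], dtype=np.float32)
grad = Image.fromarray(np.tile(ramp[:, None], (1, 10)), mode='F')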

Peak detection in a noisy 2d array

I'm trying to get python to return, as close as possible, the center of the most obvious clustering in an image like the one below:
In my previous question I asked how to get the global maximum and the local maxima of a 2D array, and the answers given worked perfectly. The issue is that the center estimate I get by averaging the global maxima obtained with different bin sizes is always slightly off from the one I would set by eye, because I'm only accounting for the biggest bin instead of a group of the biggest bins (as one does by eye).
I tried adapting the answer to this question to my problem, but it turns out my image is too noisy for that algorithm to work. Here's my code implementing that answer:
import numpy as np
from scipy.ndimage.filters import maximum_filter
from scipy.ndimage.morphology import generate_binary_structure, binary_erosion
import matplotlib.pyplot as pp
from os import getcwd
from os.path import join, realpath, dirname

# Save path to dir where this code exists.
mypath = realpath(join(getcwd(), dirname(__file__)))
myfile = 'data_file.dat'
x, y = np.loadtxt(join(mypath, myfile), usecols=(1, 2), unpack=True)

xmin, xmax = min(x), max(x)
ymin, ymax = min(y), max(y)
rang = [[xmin, xmax], [ymin, ymax]]

paws = []
for d_b in range(25, 110, 25):
    # Number of bins in x,y given the bin width 'd_b'
    binsxy = [int((xmax - xmin) / d_b), int((ymax - ymin) / d_b)]
    H, xedges, yedges = np.histogram2d(x, y, range=rang, bins=binsxy)
    paws.append(H)

def detect_peaks(image):
    """
    Takes an image and detects the peaks using the local maximum filter.
    Returns a boolean mask of the peaks (i.e. 1 when
    the pixel's value is the neighborhood maximum, 0 otherwise)
    """
    # define an 8-connected neighborhood
    neighborhood = generate_binary_structure(2, 2)
    # apply the local maximum filter; all pixels of maximal value
    # in their neighborhood are set to 1
    local_max = maximum_filter(image, footprint=neighborhood) == image
    # local_max is a mask that contains the peaks we are
    # looking for, but also the background.
    # In order to isolate the peaks we must remove the background from the mask.
    # We create the mask of the background
    background = (image == 0)
    # a little technicality: we must erode the background in order to
    # successfully subtract it from local_max, otherwise a line will
    # appear along the background border (artifact of the local maximum filter)
    eroded_background = binary_erosion(background, structure=neighborhood, border_value=1)
    # we obtain the final mask, containing only peaks, by removing the
    # background from the local_max mask (boolean masks, so XOR rather
    # than subtraction, which newer numpy versions reject)
    detected_peaks = local_max ^ eroded_background
    return detected_peaks

# applying the detection and plotting results
for i, paw in enumerate(paws):
    detected_peaks = detect_peaks(paw)
    pp.subplot(4, 2, (2 * i + 1))
    pp.imshow(paw)
    pp.subplot(4, 2, (2 * i + 2))
    pp.imshow(detected_peaks)
pp.show()
and here's the result of that (varying the bin size):
Clearly my background is too noisy for that algorithm to work, so the question is: how can I make that algorithm less sensitive? If an alternative solution exists then please let me know.
EDIT
Following Bi Rico's advice, I attempted smoothing my 2D array before passing it to the local maximum finder, like so:
from scipy.ndimage import gaussian_filter

H, xedges, yedges = np.histogram2d(x, y, range=rang, bins=binsxy)
H1 = gaussian_filter(H, 2, mode='nearest')
paws.append(H1)
These were the results with a sigma of 2, 4 and 8:
EDIT 2
Using mode='constant' seems to work much better than 'nearest'. It converges to the right center with sigma=2 for the largest bin size:
So, how do I get the coordinates of the maximum that shows in the last image?
Answering the last part of your question: whenever you have point sources in an image, you can find their coordinates by searching, in some order, for the local maxima of the image. In case your data is not from a point source, you can apply a mask to each peak in order to avoid its neighborhood being picked up as a maximum in a future search. I propose the following code:
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
import copy

def get_std(image):
    return np.std(image)

def get_max(image, sigma, alpha=20, size=10):
    i_out = []
    j_out = []
    image_temp = copy.deepcopy(image)
    while True:
        k = np.argmax(image_temp)
        j, i = np.unravel_index(k, image_temp.shape)
        if image_temp[j, i] >= alpha * sigma:
            i_out.append(i)
            j_out.append(j)
            # mask the neighborhood of the found peak
            x = np.arange(i - size, i + size)
            y = np.arange(j - size, j + size)
            xv, yv = np.meshgrid(x, y)
            image_temp[yv.clip(0, image_temp.shape[0] - 1),
                       xv.clip(0, image_temp.shape[1] - 1)] = 0
        else:
            break
    return i_out, j_out

# reading the image
image = mpimg.imread('ggd4.jpg')
# computing the standard deviation of the image
sigma = get_std(image)
# getting the peaks
i, j = get_max(image[:, :, 0], sigma, alpha=10, size=10)
# let's see the results
plt.imshow(image, origin='lower')
plt.plot(i, j, 'ro', markersize=10, alpha=0.5)
plt.show()
The image ggd4 for the test can be downloaded from:
http://www.ipac.caltech.edu/2mass/gallery/spr99/ggd4.jpg
The first part is to get some information about the noise in the image. I did it by computing the standard deviation of the full image (actually it is better to select a small rectangle without signal). This tells us how much noise is present in the image.
The idea for getting the peaks is to ask for successive maxima that are above a certain threshold (say, 3, 4, 5, 10, or 20 times the noise). This is what the function get_max is actually doing. It performs the search for maxima until one of them falls below the threshold imposed by the noise. In order to avoid finding the same maximum many times, it is necessary to remove the peaks from the image. In general, the shape of the mask used to do so depends strongly on the problem one wants to solve. In the case of stars, it would be good to remove the star using a Gaussian function or something similar. For simplicity I have chosen a square function, whose size (in pixels) is the variable "size".
I think that from this example, anybody can improve the code by adding more general things.
EDIT:
The original image looks like:
While the image after identifying the luminous points looks like this:
Too much of a n00b on Stack Overflow to comment on Alejandro's answer elsewhere here. I would refine his code a bit to use a preallocated numpy array for the output:
def get_max(image, sigma, alpha=3, size=10):
    from copy import deepcopy
    import numpy as np
    # preallocate a lot of peak storage
    k_arr = np.zeros((10000, 2))
    image_temp = deepcopy(image)
    peak_ct = 0
    while True:
        k = np.argmax(image_temp)
        j, i = np.unravel_index(k, image_temp.shape)
        if image_temp[j, i] >= alpha * sigma:
            k_arr[peak_ct] = [j, i]
            # this is the part that masks already-found peaks.
            x = np.arange(i - size, i + size)
            y = np.arange(j - size, j + size)
            xv, yv = np.meshgrid(x, y)
            # the clip here handles edge cases where the peak is near the
            # image edge
            image_temp[yv.clip(0, image_temp.shape[0] - 1),
                       xv.clip(0, image_temp.shape[1] - 1)] = 0
            peak_ct += 1
        else:
            break
    # trim the output for only what we've actually found
    return k_arr[:peak_ct]
In profiling this and Alejandro's code using his example image, this code is about 33% faster (0.03 sec for Alejandro's code, 0.02 sec for mine). I expect that on images with larger numbers of peaks it would be even faster, as appending the output to a list gets slower and slower with more peaks.
I think the first step needed here is to express the values in H in terms of the standard deviation of the field:
import numpy as np
H = H / np.std(H)
Now you can put a threshold on the values of this H. If the noise is assumed to be Gaussian, picking a threshold of 3 means you can be quite sure (99.7%) that a pixel above it is associated with a real peak and not noise. See here.
Now the further selection can start. It is not entirely clear to me what exactly you want to find. Do you want the exact location of the peak values? Or do you want one location for a cluster of peaks, in the middle of that cluster?
Anyway, starting from this point with all pixel values expressed in standard deviations of the field, you should be able to get what you want. If you want to find clusters, you could perform a nearest-neighbour search on the >3-sigma gridpoints and put a threshold on the distance, i.e. only connect them when they are close enough to each other. If several gridpoints are connected you can define this as a group/cluster and calculate some (sigma-weighted?) center of the cluster, as sketched below.
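A minimal sketch of that clustering idea (my own illustration, assuming H is the 2D histogram; scipy's label uses 4-connectivity by default, pass a structure for 8-connectivity):
import numpy as np
from scipy import ndimage

Hs = H / np.std(H)                  # express bins in units of the field's sigma
mask = Hs > 3                       # keep only the >3-sigma bins
labels, n = ndimage.label(mask)     # connect neighbouring above-threshold bins
# one (sigma-weighted) centre per connected group, in bin coordinates
centers = ndimage.center_of_mass(Hs, labels, np.arange(1, n + 1))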
Hope my first contribution on Stackoverflow is useful for you!
The way I would do it:
1) normalize H between 0 and 1.
2) pick a threshold value, as tcaswell suggests. It could be between .9 and .99 for example
3) use masked arrays to keep only the x,y coordinates with H above threshold:
import numpy.ma as ma
x_masked = ma.masked_array(x, mask=H < threshold)
y_masked = ma.masked_array(y, mask=H < threshold)
4) now you can take a weighted average of the masked coordinates, with a weight like (H - threshold)^2, or any other power greater than or equal to one, depending on your taste/tests; see the sketch below.
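A sketch of step 4 (assuming x and y here are bin-centre grids with the same shape as H; the names are mine):
import numpy.ma as ma

# weights: (H - threshold)^2; entries masked in x_masked/y_masked are ignored
w = (H - threshold) ** 2
x_peak = ma.average(x_masked, weights=w)
y_peak = ma.average(y_masked, weights=w)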
Comment:
1) This is not robust with respect to the type of peaks you have, since you may have to adapt the threshold. This is the minor problem;
2) This DOES NOT work with two peaks as it is, and will give wrong results if the 2nd peak is above the threshold.
Nonetheless, it will always give you an answer without crashing (with pros and cons of the thing..)
I'm adding this answer because it's the solution I ended up using. It's a combination of Bi Rico's comment here (May 30 at 18:54) and the answer given in this question: Find peak of 2d histogram.
As it turns out, using the peak detection algorithm from the question Peak detection in a 2D array only complicates matters. After applying the Gaussian filter to the image, all that needs to be done is to ask for the maximum bin (as Bi Rico pointed out) and then obtain the maximum in coordinates.
So instead of using the detect_peaks function as I did above, I simply add the following code after the Gaussian 2D histogram is obtained:
# Get 2D histogram.
H, xedges, yedges = np.histogram2d(x, y, range=rang, bins=binsxy)
# Get Gaussian filtered 2D histogram.
H1 = gaussian_filter(H, 2, mode='nearest')
# Get center of maximum in bin coordinates.
x_cent_bin, y_cent_bin = np.unravel_index(H1.argmax(), H1.shape)
# Get center in x,y coordinates.
x_cent_coord, y_cent_coord = np.average(xedges[x_cent_bin:x_cent_bin + 2]), np.average(yedges[y_cent_bin:y_cent_bin + 2])
