I have a problem where, in a grid of size x*y, I am given a single dot and need to find its nearest neighbour. In practice, I am trying to find the dot closest to the cursor in pygame that crosses a color distance threshold, calculated as follows:
sqrt(((rgb1[0]-rgb2[0])**2)+((rgb1[1]-rgb2[1])**2)+((rgb1[2]-rgb2[2])**2))
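For reference, this is just the Euclidean distance in RGB space; as a small helper function (a sketch, assuming rgb1 and rgb2 are 3-tuples of 0-255 channel values):

from math import sqrt

def color_distance(rgb1, rgb2):
    # Euclidean distance between two RGB triples
    return sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(rgb1, rgb2)))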
So far I have a function that computes the different resolutions of the grid, reducing it by a factor of two each iteration while always keeping the darkest pixel. It looks as follows:
from PIL import Image
from typing import Dict
import numpy as np

# We input a Pillow image object and retrieve a dictionary with every grid
# version of the 3-dimensional array:
def calculate_resolutions(image: Image.Image) -> Dict[int, np.ndarray]:
    resolutions = {}
    # We start with the highest resolution image, the size of which we
    # initially divide by 1, then 2, then 4, etc.:
    divisor = 1
    # Reduce the grid over 5 iterations:
    resolution_iterations = 5
    for i in range(resolution_iterations):
        pixel_lookup = image.load()  # PixelAccess object, allows pixel lookup via [x, y]
        # Calculate the resolution of the new grid, rounding upwards:
        resolution = (int((image.size[0] - 1) // divisor + 1),
                      int((image.size[1] - 1) // divisor + 1))
        # Generate a 3D array with the new grid resolution, fill in values
        # that are darker than white:
        new_grid = np.full((resolution[0], resolution[1], 3), np.array([255, 255, 255]))
        for x in range(image.size[0]):
            for y in range(image.size[1]):
                if not x % divisor and not y % divisor:
                    darkest_pixel = (255, 255, 255)
                    x_range = divisor if x + divisor < image.size[0] else max(image.size[0] - x, 0)
                    y_range = divisor if y + divisor < image.size[1] else max(image.size[1] - y, 0)
                    for x_ in range(x, x + x_range):
                        for y_ in range(y, y + y_range):
                            if (pixel_lookup[x_, y_][0] + pixel_lookup[x_, y_][1] + pixel_lookup[x_, y_][2]
                                    < darkest_pixel[0] + darkest_pixel[1] + darkest_pixel[2]):
                                darkest_pixel = pixel_lookup[x_, y_]
                    if darkest_pixel != (255, 255, 255):
                        new_grid[x // divisor][y // divisor] = np.array(darkest_pixel)
        resolutions[i] = new_grid
        divisor = divisor * 2
    return resolutions
This is the most performance-efficient solution I was able to come up with. If this function is run on a grid that changes continually, like a video at x fps, it will be very performance-intensive. I also considered a kd-tree that simply adds and removes any dots that change on the grid; for finding individual nearest neighbours on a static grid, that solution has the potential to be more resource-efficient. I am open to any suggestions on how this function's performance could be improved.
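For comparison, a minimal sketch of the kd-tree alternative with scipy.spatial.cKDTree (the dot coordinates and cursor position here are illustrative placeholders):

import numpy as np
from scipy.spatial import cKDTree

# hypothetical (N, 2) array of coordinates of all dots that cross the
# color distance threshold:
dot_coords = np.array([[10, 12], [40, 3], [77, 58]])

tree = cKDTree(dot_coords)        # rebuilt whenever the grid changes
cursor = (35, 20)                 # e.g. pygame.mouse.get_pos()
dist, idx = tree.query(cursor)    # nearest qualifying dot to the cursor
nearest_dot = dot_coords[idx]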
Now, I am in a position where, for example, I try to find the nearest neighbour of the current cursor position in a 100x100 grid. The resulting reduced grids are 50^2, 25^2, 13^2, and 7^2. In a situation where a part of the grid looks as follows:
And I am on the aggregation step where a part of the grid consists of six large squares, the black one being the current cursor position and the orange dots being dots where the color distance threshold is crossed, I would not know which diagonally located closest neighbour to pick for the next search. In this case, going one aggregation step down shows that the lower left would be the right choice. Depending on how many grid layers I have, this could result in a very large error in the nearest neighbour search. Is there a good way to solve this problem? If there are multiple squares that report a relevant location, do I have to search them all in the next step to be sure? And if that is the case, the further away I get, the more I would need math such as the Pythagorean theorem to check whether two positive squares overlap in distance and could both contain the closest neighbour, which would become performance-intensive again if the function is called frequently. Would it still make sense to pursue this solution over a regular kd-tree? For now the grid size is still fairly small (~800x600), but if the grid gets larger the performance may start suffering again. Is there a good, scalable solution to this problem that could be applied here?
I have an image that contains many no-data pixels. The image is a 2D numpy array and the no-data values are None. Whenever I try to apply filters to it, the None values seem to be taken into account by the kernel and make my pixels disappear.
For example, I have this image:
I have tried to apply the Lee filter to it with this function (taken from Speckle (Lee Filter) in Python):
from scipy.ndimage import uniform_filter, variance  # formerly scipy.ndimage.filters / .measurements

def lee_filter(img, size):
    img_mean = uniform_filter(img, (size, size))
    img_sqr_mean = uniform_filter(img**2, (size, size))
    img_variance = img_sqr_mean - img_mean**2
    overall_variance = variance(img)
    img_weights = img_variance / (img_variance + overall_variance)
    img_output = img_mean + img_weights * (img - img_mean)
    return img_output
but the result looks like this:
with the warning:
UserWarning: Warning: converting a masked element to nan. dv =
np.float64(self.norm.vmax) - np.float64(self.norm.vmin)
I have also tried to use the findpeaks library.
from findpeaks import findpeaks
import findpeaks
#lee enhanced filter
image_lee_enhanced = findpeaks.lee_enhanced_filter(img, win_size=3, cu=0.25)
but I get the same blank image.
When I used a median filter on the same image with ndimage, it worked with no problem.
My question is: how can I run those filters on the image without letting the None values corrupt the results?
Edit: I prefer not to set the no-data pixels to 0, because the pixel values range between -50 and 1 (it is an index value). In addition, I'm afraid that if I change them to any other value (e.g. 9999) it will also influence the filter (am I wrong?)
Edit 2:
I have read Cris Luengo's answer and tried to apply something similar with the scipy.ndimage median filter, as I realized that its result is distorted as well.
This is the original image:
I have tried masking the Null values:
idx = np.ma.masked_where(img,img!=None)[:,1]
median_filter_img = ndimage.median_filter(img[idx].reshape(491, 473), size=10)
zeros = np.zeros([img.shape[0],img.shape[1]])
zeros[idx] = median_filter_img
The result looks like this (the color is darker, to make the problem at the edges visible):
As can be seen, it seems the edge values are influenced by the None values.
I have done this also with img != 0 but got the same problem.
(Just to add: the pixel values are between 1 and -35.)
If you want to apply a linear smoothing filter, then you can use the Normalized Convolution.
The basic recipe is:
Create a mask image that is 1 for the pixels with data, and 0 for the pixels without data.
Set the pixels without data to any number, for example 0. NaN is not valid because it spreads in the computations.
Apply the linear smoothing filter to the image multiplied by the mask.
Apply the linear smoothing filter to the mask.
Divide the two results.
Basically, we normalize the result of the linear smoothing filter (convolution) by the number of pixels with data within the filter window.
In regions where the smoothed mask is 0 (far away from data), we will divide 0 by 0, so special care needs to be taken there.
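As a minimal sketch of this recipe with scipy (an assumption here: the None no-data values have first been converted to NaN in a float array):

import numpy as np
from scipy.ndimage import uniform_filter

def normalized_uniform_filter(img, size):
    mask = np.isfinite(img).astype(float)   # 1 where we have data, 0 where we don't
    filled = np.where(mask > 0, img, 0.0)   # missing pixels contribute nothing
    num = uniform_filter(filled, size)      # filter applied to image * mask
    den = uniform_filter(mask, size)        # filter applied to the mask
    out = np.full_like(num, np.nan)
    np.divide(num, den, out=out, where=den > 0)  # leave NaN far away from any data
    return out

The same normalization works for any linear smoothing kernel, e.g. by swapping uniform_filter for gaussian_filter.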
Note that normalized convolution can be used also for uncertain data, where the mask image gets values in between 0 and 1 indicating the confidence we have in each pixel. Pixels thought to be noisy can be set to a value closer to 0 than the other pixels, for example.
The recipe above is only valid for linear smoothing filters. Normalized convolution can be done with other linear filters, for example derivative filters, but the resulting recipe is different. See for example here the equation for Normalized Convolution to compute the derivative.
For non-linear filters, other approaches are necessary. Non-linear smoothing filters, for example, will often avoid affecting edges, and so will work quite well in images with missing data, if the missing pixels are set to 0, or some value far outside of the data range. The concept of keeping a mask image that indicates which pixels have data and which don't is always a good idea.
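For the median specifically, one slow-but-simple workaround (again assuming the no-data pixels are NaN in a float array) is scipy's generic_filter with a NaN-aware reducer:

import numpy as np
from scipy.ndimage import generic_filter

def nan_median_filter(img, size):
    # np.nanmedian ignores NaN pixels inside each window; windows that are
    # all NaN produce NaN (with a RuntimeWarning)
    return generic_filter(img, np.nanmedian, size=size)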
A simple solution seems to be to set the None values to zero. I don't know how you would get around this, because most image-processing kernels require some value for you to apply.
a[a == None] = 0  # boolean mask selecting the None entries
I have an image mask (differenceM) like this:
For every single white pixel (pixel value != 0), I want to calculate the minimum distance from that pixel to a set of points. The set of points are points on the external contour which will be stored as a numpy array of [x_val y_val]. I was thinking of doing this:
...
def calcMinDist(dilPoints):
    ...

# returns 2d array (same shape as image)
def allMinDistDil(dilMask):
    dilPoints = getPoints(dilMask)
    ...
    return arrayOfMinValues

# more code here
blkImg = np.zeros(maskImage.shape, dtype=np.uint8)
blkImg.fill(0)
img_out = np.where(differenceM, allMinDistDil(dilatedMask), blkImg)
....
But the problem with this is: in order to calculate the minimum distance from a pixel to the set of points (obtained from the getPoints function), I'll need to pass in the pixel's position (index?) as well. But (if my understanding is correct) np.where() only checks for true and false values in its first parameter, so the way I wrote the np.where() call won't work.
I've considered using nested for loops for this problem but I'm trying to avoid using for loops because I have a lot of images to process.
May I ask for suggestions to solve this? Any help would be greatly appreciated!
(Not enough rep to comment.) For the distance you probably want scipy.spatial.distance.cdist(X, Y). You can calculate the minimum distance with something as simple as:
import numpy as np
from scipy.spatial import distance

def min_distance(point, set_of_points):
    # cdist expects 2D arrays, so promote the single (x, y) point to shape (1, 2)
    return distance.cdist(np.atleast_2d(point), set_of_points).min()
As for np.where: can you provide a bit more on your data structure? Most of the time a simple boolean mask will do the job...
Instead of using the np.where() function to find the specific pixels that are not zero, I applied:
diffMaskNewArray = np.transpose(np.nonzero(binaryThreshMask))
to get the points where the values are not zero. With this array of points, I iterated through each point and compared it with the array of boundary points of the masks, using:
shortestDistDil = np.amin(distance.cdist(a, b, 'euclidean'))
to find the min distance between the point and the set of boundary points.
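The remaining per-point loop can also be removed by handing cdist all pixels at once. A sketch (binaryThreshMask is from above; boundary_points stands in for the array of contour points):

import numpy as np
from scipy.spatial import distance

pixel_points = np.transpose(np.nonzero(binaryThreshMask))  # (N, 2) white-pixel coords
all_dists = distance.cdist(pixel_points, boundary_points, 'euclidean')
min_dists = all_dists.min(axis=1)  # one minimum distance per white pixel, no Python loop
# note: all_dists has shape (N, M) and can get large for big masks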
I have this binary image where each 'curve' represents a hat from a pile of these objects. It was obtained by thresholding a region of the original image of stacked straw hats.
As you can see, these curves have many gaps and holes inside their shapes, which makes it difficult to use a technique like cv2.connectedComponents to obtain the number of objects in the image, which is my goal.
I think that if there were some technique to fill in these gaps and, mainly, the holes in smaller parts of the original binary image, like the ones I'm showing below, maybe by connecting nearby elements or by detecting and filling contours, it would be possible to segment each curve as an individual element.
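A hedged sketch of that gap-filling idea using a morphological closing (the filename and kernel size are placeholders to tune):

import cv2
import numpy as np

bw = cv2.imread('hats_binary.png', cv2.IMREAD_GRAYSCALE)  # hypothetical filename
kernel = np.ones((15, 15), np.uint8)                      # structuring element, size to tune
closed = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, kernel)    # bridge small gaps and holes
n_labels, labels = cv2.connectedComponents(closed)
print(n_labels - 1)                                       # label 0 is the background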
Not the most elegant way, but it should be simple enough.
Consider a vertical slice of width w (the same as the slices you posted in your question). If you sum the white pixels along the rows of the slice, you should get six nice "peaks" corresponding to the six rims of the hats:
However, since the rims are rounded, some vertical slices would be better than others for this sort of estimation.
Therefore, I suggest looking at all slices of width w and counting the peaks for each slice.
Here's Matlab code that does this:
img = imread('http://i.stack.imgur.com/69FfJ.jpg'); % read the image
bw = img(:,:,1)>128; % convert to binary
w = 75; % width of slice
all_slices = imfilter(single(bw), ones(1,w)/w, 'symmetric')>.5; % compute horizontal sum of all slices using filter
% a peak is a slice with more than 50% "white" pixels
peaks = diff( all_slices, 1, 1 ) > 0; % detect the peaks using vertical diff
count_per_slice = sum( peaks, 1 ); % how many peaks each slice think it sees
Looking at the distribution of the count_per_slice:
You can see that although many slices predict the wrong number of hats (between 4 and 9), the majority votes for the correct number, 6:
num_hats = mode(count_per_slice); % take the mode of the distribution.
Python code that does the same (assuming bw is a numpy array of shape (h, w) and dtype bool):
from scipy import signal, stats
import numpy as np

w = 75
all_slices = signal.convolve2d(bw.astype('f4'), np.ones((1, w), dtype='f4') / float(w),
                               mode='same', boundary='symmetric') > 0.5
peaks = np.diff(all_slices, n=1, axis=0) > 0
count_per_slice = peaks.sum(axis=0)
num_hats = stats.mode(count_per_slice).mode  # mode of the distribution, as in the Matlab code
I'm trying to get python to return, as close as possible, the center of the most obvious clustering in an image like the one below:
In my previous question I asked how to get the global maximum and the local maxima of a 2d array, and the answers given worked perfectly. The issue is that the center estimate I get by averaging the global maximum obtained with different bin sizes is always slightly off from the one I would set by eye, because I'm only accounting for the biggest bin instead of a group of the biggest bins (as one does by eye).
I tried adapting the answer to this question to my problem, but it turns out my image is too noisy for that algorithm to work. Here's my code implementing that answer:
import numpy as np
from scipy.ndimage import maximum_filter, generate_binary_structure, binary_erosion
import matplotlib.pyplot as pp
from os import getcwd
from os.path import join, realpath, dirname

# Save path to dir where this code exists.
mypath = realpath(join(getcwd(), dirname(__file__)))
myfile = 'data_file.dat'
x, y = np.loadtxt(join(mypath, myfile), usecols=(1, 2), unpack=True)

xmin, xmax = min(x), max(x)
ymin, ymax = min(y), max(y)
rang = [[xmin, xmax], [ymin, ymax]]

paws = []
for d_b in range(25, 110, 25):
    # Number of bins in x,y given the bin width 'd_b'
    binsxy = [int((xmax - xmin) / d_b), int((ymax - ymin) / d_b)]
    H, xedges, yedges = np.histogram2d(x, y, range=rang, bins=binsxy)
    paws.append(H)

def detect_peaks(image):
    """
    Takes an image and detects the peaks using the local maximum filter.
    Returns a boolean mask of the peaks (i.e. 1 when
    the pixel's value is the neighborhood maximum, 0 otherwise)
    """
    # define an 8-connected neighborhood
    neighborhood = generate_binary_structure(2, 2)
    # apply the local maximum filter; all pixels of maximal value
    # in their neighborhood are set to 1
    local_max = maximum_filter(image, footprint=neighborhood) == image
    # local_max is a mask that contains the peaks we are
    # looking for, but also the background.
    # In order to isolate the peaks we must remove the background from the mask.
    # We create the mask of the background:
    background = (image == 0)
    # a little technicality: we must erode the background in order to
    # successfully subtract it from local_max, otherwise a line will
    # appear along the background border (artifact of the local maximum filter)
    eroded_background = binary_erosion(background, structure=neighborhood, border_value=1)
    # we obtain the final mask, containing only peaks, by removing the
    # background from the local_max mask (boolean arrays cannot be
    # subtracted with '-', so use '& ~'):
    detected_peaks = local_max & ~eroded_background
    return detected_peaks

# applying the detection and plotting results
for i, paw in enumerate(paws):
    detected_peaks = detect_peaks(paw)
    pp.subplot(4, 2, 2 * i + 1)
    pp.imshow(paw)
    pp.subplot(4, 2, 2 * i + 2)
    pp.imshow(detected_peaks)

pp.show()
and here's the result of that (varying the bin size):
Clearly my background is too noisy for that algorithm to work, so the question is: how can I make that algorithm less sensitive? If an alternative solution exists then please let me know.
EDIT
Following Bi Rico's advice I attempted smoothing my 2d array before passing it to the local maximum finder, like so:
from scipy.ndimage import gaussian_filter

H, xedges, yedges = np.histogram2d(x, y, range=rang, bins=binsxy)
H1 = gaussian_filter(H, 2, mode='nearest')
paws.append(H1)
These were the results with a sigma of 2, 4 and 8:
EDIT 2
mode='constant' seems to work much better than 'nearest'. It converges to the right center with sigma=2 for the largest bin size:
So, how do I get the coordinates of the maximum that shows in the last image?
Answering the last part of your question: whenever you have point sources in an image, you can find their coordinates by searching, in some order, for the local maxima of the image. In case your data is not a point source, you can apply a mask to each peak so that its neighborhood is not picked up as a maximum in a later search. I propose the following code:
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
import copy

def get_std(image):
    return np.std(image)

def get_max(image, sigma, alpha=20, size=10):
    i_out = []
    j_out = []
    image_temp = copy.deepcopy(image)
    while True:
        k = np.argmax(image_temp)
        j, i = np.unravel_index(k, image_temp.shape)
        if image_temp[j, i] >= alpha * sigma:
            i_out.append(i)
            j_out.append(j)
            # zero out a square around the found peak, clipped to the borders:
            x = np.arange(i - size, i + size)
            y = np.arange(j - size, j + size)
            xv, yv = np.meshgrid(x, y)
            image_temp[yv.clip(0, image_temp.shape[0] - 1),
                       xv.clip(0, image_temp.shape[1] - 1)] = 0
            print(xv)
        else:
            break
    return i_out, j_out

# reading the image
image = mpimg.imread('ggd4.jpg')
# computing the standard deviation of the image
sigma = get_std(image)
# getting the peaks
i, j = get_max(image[:, :, 0], sigma, alpha=10, size=10)
# let's see the results
plt.imshow(image, origin='lower')
plt.plot(i, j, 'ro', markersize=10, alpha=0.5)
plt.show()
The image ggd4 for the test can be downloaded from:
http://www.ipac.caltech.edu/2mass/gallery/spr99/ggd4.jpg
The first part is to get some information about the noise in the image. I did this by computing the standard deviation of the full image (actually it is better to select a small rectangle without signal). This tells us how much noise is present in the image.
The idea for getting the peaks is to ask for successive maxima which are above a certain threshold (say 3, 4, 5, 10 or 20 times the noise). This is what the function get_max is actually doing: it searches for maxima until one of them falls below the threshold imposed by the noise. To avoid finding the same maximum many times, it is necessary to remove each found peak from the image. In general, the shape of the mask used to do so depends strongly on the problem one wants to solve. For the case of stars, it would be good to remove the star using a Gaussian function or something similar. For simplicity I have chosen a square function, whose size (in pixels) is the variable "size".
I think that from this example, anybody can improve the code by adding more general things.
EDIT:
The original image looks like:
While the image after identifying the luminous points looks like this:
Too much of a n00b on Stack Overflow to comment on Alejandro's answer elsewhere here. I would refine his code a bit to use a preallocated numpy array for the output:
def get_max(image, sigma, alpha=3, size=10):
    from copy import deepcopy
    import numpy as np
    # preallocate a lot of peak storage
    k_arr = np.zeros((10000, 2))
    image_temp = deepcopy(image)
    peak_ct = 0
    while True:
        k = np.argmax(image_temp)
        j, i = np.unravel_index(k, image_temp.shape)
        if image_temp[j, i] >= alpha * sigma:
            k_arr[peak_ct] = [j, i]
            # this is the part that masks already-found peaks
            x = np.arange(i - size, i + size)
            y = np.arange(j - size, j + size)
            xv, yv = np.meshgrid(x, y)
            # the clip here handles edge cases where the peak is near the
            # image edge
            image_temp[yv.clip(0, image_temp.shape[0] - 1),
                       xv.clip(0, image_temp.shape[1] - 1)] = 0
            peak_ct += 1
        else:
            break
    # trim the output to only what we've actually found
    return k_arr[:peak_ct]
Profiling this and Alejandro's code using his example image, this code is about 33% faster (0.03 s for Alejandro's code, 0.02 s for mine). I expect that on images with larger numbers of peaks it would be even faster: appending the output to a list gets slower and slower as more peaks are found.
I think the first step needed here is to express the values in H in terms of the standard deviation of the field:
import numpy as np
H = H / np.std(H)
Now you can put a threshold on the values of this H. If the noise is assumed to be Gaussian, picking a threshold of 3 you can be quite sure (99.7%) that this pixel can be associated with a real peak and not noise. See here.
Now the further selection can start. It is not exactly clear to me what you want to find: the exact location of the peak values, or one location for a cluster of peaks, in the middle of that cluster?
Anyway, starting from this point, with all pixel values expressed in standard deviations of the field, you should be able to get what you want. If you want to find clusters, you could perform a nearest-neighbour search on the >3-sigma gridpoints and put a threshold on the distance, i.e. only connect them when they are close enough to each other. If several gridpoints are connected you can define this as a group/cluster and calculate some (sigma-weighted?) center of the cluster; a sketch of this is below.
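A minimal sketch of that clustering step, using scipy's connected-component labelling in place of an explicit nearest-neighbour search (H is the 2D histogram from the question):

import numpy as np
from scipy import ndimage

H_sig = H / np.std(H)                     # bins expressed in standard deviations
mask = H_sig > 3                          # keep only the significant bins
labels, n_clusters = ndimage.label(mask)  # connect adjacent significant bins
# sigma-weighted center of each cluster, in bin coordinates:
centers = ndimage.center_of_mass(H_sig, labels, range(1, n_clusters + 1))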
Hope my first contribution on Stackoverflow is useful for you!
The way I would do it:
1) normalize H between 0 and 1.
2) pick a threshold value, as tcaswell suggests. It could be between .9 and .99, for example.
3) use masked arrays to keep only the x, y coordinates with H above the threshold:
import numpy.ma as ma

x_masked = ma.masked_array(x, mask=H < threshold)
y_masked = ma.masked_array(y, mask=H < threshold)
4) now you can take a weighted average of the masked coordinates, with a weight like (H - threshold)^2, or any other power greater than or equal to one, depending on your taste/tests, as sketched below.
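A sketch of step 4 in the same notation (keeping this answer's assumption that x, y and H are aligned arrays):

weights = ma.masked_array((H - threshold) ** 2, mask=H < threshold)
x_center = ma.average(x_masked, weights=weights)
y_center = ma.average(y_masked, weights=weights)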
Comments:
1) This is not robust with respect to the type of peaks you have, since you may have to adapt the threshold. This is the minor problem;
2) this DOES NOT work with two peaks as it is, and will give wrong results if the 2nd peak is above the threshold.
Nonetheless, it will always give you an answer without crashing (with the pros and cons of that...)
I'm adding this answer because it's the solution I ended up using. It's a combination of Bi Rico's comment here (May 30 at 18:54) and the answer given in this question: Find peak of 2d histogram.
As it turns out using the peak detection algorithm from this question Peak detection in a 2D array only complicates matters. After applying the Gaussian filter to the image all that needs to be done is to ask for the maximum bin (as Bi Rico pointed out) and then obtain the maximum in coordinates.
So instead of using the detect_peaks function as I did above, I simply add the following code after the Gaussian-filtered 2D histogram is obtained:
# Get 2D histogram.
H, xedges, yedges = np.histogram2d(x, y, range=rang, bins=binsxy)
# Get Gaussian filtered 2D histogram.
H1 = gaussian_filter(H, 2, mode='nearest')
# Get center of maximum in bin coordinates.
x_cent_bin, y_cent_bin = np.unravel_index(H1.argmax(), H1.shape)
# Get center in x, y coordinates.
x_cent_coord = np.average(xedges[x_cent_bin:x_cent_bin + 2])
y_cent_coord = np.average(yedges[y_cent_bin:y_cent_bin + 2])