Removing completely isolated cells from a Python array?

I'm trying to reduce noise in a binary python array by removing all completely isolated single cells, i.e. setting "1" value cells to 0 if they are completely surrounded by other "0"s. I have been able to get a working solution by removing blobs with sizes equal to 1 using a loop, but this seems like a very inefficient solution for large arrays:
import numpy as np
import scipy.ndimage as ndimage
import matplotlib.pyplot as plt

# Generate sample data
square = np.zeros((32, 32))
square[10:-10, 10:-10] = 1
np.random.seed(12)
x, y = (32 * np.random.random((2, 20))).astype(int)
square[x, y] = 1

# Plot original data with many isolated single cells
plt.imshow(square, cmap=plt.cm.gray, interpolation='nearest')

# Assign unique labels
id_regions, number_of_ids = ndimage.label(square, structure=np.ones((3, 3)))

# Set blobs of size 1 to 0
for i in range(number_of_ids + 1):
    if id_regions[id_regions == i].size == 1:
        square[id_regions == i] = 0

# Plot desired output, with all isolated single cells removed
plt.imshow(square, cmap=plt.cm.gray, interpolation='nearest')
In this case, eroding and dilating my array won't work as it will also remove features with a width of 1. I feel the solution lies somewhere within the scipy.ndimage package, but so far I haven't been able to crack it. Any help would be greatly appreciated!

A belated thanks to both Jaime and Kazemakase for their replies. The manual neighbour-checking method did remove all isolated patches, but also removed patches attached to others by one corner (i.e. to the upper-right of the square in the sample array). The summed area table works perfectly and is very fast on the small sample array, but slows down on larger arrays.
I ended up following an approach using ndimage which seems to work efficiently for very large and sparse arrays (0.91 s for a 5000 x 5000 array vs 1.17 s for the summed area table approach). I first generate a labelled array of unique IDs for each discrete region, calculate the sizes of each region, mask the size array to select only regions of size 1, then index the original array and set those cells to 0:
def filter_isolated_cells(array, struct):
    """ Return array with completely isolated single cells removed
    :param array: Array with completely isolated single cells
    :param struct: Structure array for generating unique regions
    :return: Array with minimum region size > 1
    """
    filtered_array = np.copy(array)
    id_regions, num_ids = ndimage.label(filtered_array, structure=struct)
    id_sizes = np.array(ndimage.sum(array, id_regions, range(num_ids + 1)))
    area_mask = (id_sizes == 1)
    filtered_array[area_mask[id_regions]] = 0
    return filtered_array
# Run function on sample array
filtered_array = filter_isolated_cells(square, struct=np.ones((3,3)))
# Plot output, with all isolated single cells removed
plt.imshow(filtered_array, cmap=plt.cm.gray, interpolation='nearest')
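If the ndimage.sum call ever becomes a bottleneck, one possible variation (a sketch, not benchmarked here) is to compute the region sizes with np.bincount, which counts the occurrences of every label ID directly:
def filter_isolated_cells_bincount(array, struct):
    """Variation of the function above: region sizes via np.bincount."""
    filtered_array = np.copy(array)
    id_regions, num_ids = ndimage.label(filtered_array, structure=struct)
    # bincount over the label image gives the pixel count of every region ID
    id_sizes = np.bincount(id_regions.ravel())
    area_mask = (id_sizes == 1)
    filtered_array[area_mask[id_regions]] = 0
    return filtered_array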
Result: (plot of the filtered array, with all isolated single cells removed)

You can manually check the neighbors and avoid the loop using vectorization.
has_neighbor = np.zeros(square.shape, bool)
has_neighbor[:, 1:] = np.logical_or(has_neighbor[:, 1:], square[:, :-1] > 0) # left
has_neighbor[:, :-1] = np.logical_or(has_neighbor[:, :-1], square[:, 1:] > 0) # right
has_neighbor[1:, :] = np.logical_or(has_neighbor[1:, :], square[:-1, :] > 0) # above
has_neighbor[:-1, :] = np.logical_or(has_neighbor[:-1, :], square[1:, :] > 0) # below
square[np.logical_not(has_neighbor)] = 0
That way the looping over the square is performed internally by numpy, which is rather more efficient than looping in Python. There are two drawbacks to this solution:
If your array is very sparse there may be more efficient ways to check the neighborhood of non-zero points.
If your array is very large the has_neighbor array might consume too much memory. In this case you could loop over sub-arrays of smaller size (trade-off between python loops and vectorization).
I have no experience with ndimage, so there may be a better solution built in somewhere.
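Note that the checks above only consider the four edge neighbours, which is why (as mentioned in the update at the top) patches attached to others only by a corner also get removed. A sketch of the same idea extended to the four diagonal neighbours, to be placed before the final assignment that zeros the isolated cells:
has_neighbor[1:, 1:] = np.logical_or(has_neighbor[1:, 1:], square[:-1, :-1] > 0)    # upper-left
has_neighbor[1:, :-1] = np.logical_or(has_neighbor[1:, :-1], square[:-1, 1:] > 0)   # upper-right
has_neighbor[:-1, 1:] = np.logical_or(has_neighbor[:-1, 1:], square[1:, :-1] > 0)   # lower-left
has_neighbor[:-1, :-1] = np.logical_or(has_neighbor[:-1, :-1], square[1:, 1:] > 0)  # lower-right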

The typical way of getting rid of isolated pixels in image processing is to do a morphological opening, for which you have a ready-made implementation in scipy.ndimage.morphology.binary_opening (exposed as scipy.ndimage.binary_opening in current SciPy). This would affect the contours of your larger areas as well, though.
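For reference, a minimal sketch of that approach; it removes the isolated pixels, but also erodes any feature thinner than the structuring element, which is exactly the concern raised in the question:
from scipy import ndimage

# Opening = erosion followed by dilation with the same structuring element;
# returns a boolean array
opened = ndimage.binary_opening(square, structure=np.ones((3, 3)))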
As for a DIY solution, I would use a summed area table to count the number of items in every 3x3 subimage, subtract from that the value of the central pixel, then zero all center points where the result came out to zero. To properly handle the borders, first pad the array with zeros:
sat = np.pad(square, pad_width=1, mode='constant', constant_values=0)
sat = np.cumsum(np.cumsum(sat, axis=0), axis=1)
sat = np.pad(sat, ((1, 0), (1, 0)), mode='constant', constant_values=0)
# These are all the possible overlapping 3x3 windows sums
sum3x3 = sat[3:, 3:] + sat[:-3, :-3] - sat[3:, :-3] - sat[:-3, 3:]
# This takes away the central pixel value
sum3x3 -= square
# This zeros all the isolated pixels
square[sum3x3 == 0] = 0
The implementation above works, but is not especially careful about not creating intermediate arrays, so you can probably shave off some execution time by refactoring adequately.
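For comparison, the same neighbour count can be obtained with a single 3x3 convolution; a short sketch (likely not faster than the summed area table for large arrays, but arguably clearer):
# Sum each cell's 8 neighbours with one convolution (kernel minus the centre)
kernel = np.ones((3, 3))
kernel[1, 1] = 0
neighbor_sum = ndimage.convolve(square, kernel, mode='constant', cval=0)
square[neighbor_sum == 0] = 0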

Related

Optimize this linear transformation for images with Numpy

Good evening,
I'm trying to learn NumPy and have written a simple linear transformation that applies to an image using for loops:
import numpy as np
import matplotlib.pyplot as plt

M = np.array([
    [width, 0],
    [0, height]
])
T = np.array([
    [1, 3],
    [0, 1]
])

def transform_image(M, T):
    T_rel_M = abs(M @ T)
    new_img = np.zeros(T_rel_M.sum(axis=1).astype("int")).T
    for i in range(0, 440):
        for j in range(0, 440):
            x = np.array([j, i])
            coords = (T @ x)
            x = coords[0]
            y = coords[1]
            new_img[y, -x] = image[i, -j]
    return new_img

plt.imshow(transform_image(M, T))
It does what I want and spits out a transformation that is correct, except that I think there is a way to do this without the loops.
I tried doing some stuff with meshgrid but I couldn't figure out how to get the pixels from the image in the same way I do it in the loop (using i and j). I think I figured out how to apply the transformation but then getting the pixels from the image in the correct spots wouldn't work.
Any ideas?
EDIT:
Great help with the solutions below; lezaf's solution was very similar to what I tried before. The only step missing that I couldn't figure out was assigning the pixels from the old to the new image. I made some changes to the code to exclude transposing, and also added an astype("int") so it works with float values in the T matrix:
def transform_image(M, T):
    T_rel_M = abs(M @ T)
    new_img = np.zeros(T_rel_M.sum(axis=1).astype("int")).T
    x_combs = np.array(np.meshgrid(np.arange(width), np.arange(height))).reshape(2, -1)
    coords = (T @ x_combs).astype("int")
    new_img[coords[1, :], -coords[0, :]] = image[x_combs[1, :], -x_combs[0, :]]
    return new_img
A more efficient solution is the following:
def transform_image(M, T):
    T_rel_M = abs(M @ T)
    new_img = np.zeros(T_rel_M.sum(axis=1).astype("int")).T
    # This one replaces the double for-loop
    x_combs = np.array(np.meshgrid(np.arange(440), np.arange(440))).T.reshape(-1, 2)
    # Calculate the new coordinates
    coords = (T @ x_combs.T)
    # Apply changes to new_img
    new_img[coords[1, :], -coords[0, :]] = image[x_combs[:, 1], -x_combs[:, 0]]
    return new_img
I updated my solution, removing the for-loop, so now it is a lot more straightforward.
After this change, the optimized code runs in 50 ms, compared to the initial 3.06 s of the code in the question.
There seem to be some confusions between width/height, x/y, ..., so I'm not 100% sure my code won't need adaptation. But I think the main idea is the one you are looking for:
def transform_image(M, T):
    T_rel_M = abs(M @ T)
    j, i = np.meshgrid(range(width), range(height))
    ji = np.array((j.flatten(), i.flatten()))
    coords = (T @ ji).astype(int)
    new_img = np.zeros((coords[1].max() + 1, coords[0].max() + 1), dtype=np.uint8)
    new_img[coords[1], coords[0]] = image.flatten()
    return new_img
The main idea here is to build the set of coordinates of the input image with meshgrid. I don't want a 2D array of coordinates, just a list of coordinates (a list of pairs j, i); hence the flatten. So ji is a huge 2×N array, N being the number of pixels (so width×height).
coords is the transformation of all those coordinates.
Since your original code seemed to have some inconsistency with sizes (the rotated image did not fit in new_img), I chose the easy way to compute the size of new_img and just compute the max of those coordinates (a bit overkill: the max over the four corners would be enough).
And then I use this set of coordinates as indexes into new_img, to which I assign the matching pixel values, that is, the flattened image.
So, no for loop at all.
(Note that I've also dropped the -x thing, just because I struggled to understand it. I could have put it back now that I have a working solution, but I am not 100% sure it wasn't there because you were also trial-and-erroring some strange adjustment. Anyway, I think what you were looking for is how to use meshgrid to create a set of coordinates and process them without a loop. Even if you need to adapt my solution, you have it: flatten the coordinates of meshgrid, transform them with a matrix multiplication, and use them as indexes for the places of all pixels of the original image.)
Edit: variant
def transform_image(M, T):
    T_rel_M = abs(M @ T)
    ji = np.array(np.meshgrid(range(width), range(height)))
    coords = np.einsum('ik,kjl', T, ji).astype(int)
    new_img = np.zeros((max(coords[1, 0, -1], coords[1, -1, 0], coords[1, -1, -1]) + 1,
                        max(coords[0, 0, -1], coords[0, -1, 0], coords[0, -1, -1]) + 1), dtype=np.uint8)
    new_img[coords[1].flatten(), coords[0].flatten()] = image.flatten()
    return new_img
The idea is the same, but instead of flattening the original ji coordinates directly, I keep them as is, then use einsum to perform the matrix multiplication on a 3D array. It returns a 2×height×width array whose each [:, j, i] value is just the transformation of [j, i]; so it is the same as the previous @, except that it works even if, instead of a 2×N set of coordinates, we have a 2×height×width one.
This has two advantages:
Apparently it is noticeably faster to create ji that way.
It allows using just the corners to find the size of the new image, as mentioned before (that was harder when coords was flattened from its creation); see the sketch below.
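For illustration, a hedged sketch of that corner trick (assuming, as in the code above, non-negative transformed coordinates so the maximum is reached at a corner):
# Transform only the four corners to size the output image
corners = np.array([[0, 0], [width - 1, 0], [0, height - 1], [width - 1, height - 1]]).T
tc = T @ corners  # shape (2, 4): transformed corner coordinates
new_shape = (tc[1].max() + 1, tc[0].max() + 1)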
Timing:

Solution       Timing
-----------    ------
Yours          4.5 s
lezaf's        3.2 s
This one       49 ms
The variant    41 ms

How to find if numpy array contains vector along 3rd dimension?

I want to find if a 3D numpy array contains a specific 1D vector along the 3rd dimension. (I need to check if an image contains pixel(s) of a specific color.)
I need to return true if and only if any of the pixels match exactly with the target.
I've tried the following:
import numpy as np

target = np.array([255, 0, 0])
search_area = np.array([[[0, 0, 0], [1, 1, 1], [2, 2, 2]],
                        [[3, 3, 3], [4, 4, 4], [5, 5, 5]],
                        [[6, 6, 6], [7, 7, 7], [8, 8, 255]]])
contains_target = np.isin(target, search_area).all()  # Returns True
Which returns True since each element can be found individually somewhere within the entire array.
Next I tried:
target = np.array([255, 0, 0])
search_area = np.array([[[0, 0, 0], [1, 1, 1], [2, 2, 2]],
                        [[3, 3, 3], [4, 4, 4], [5, 5, 5]],
                        [[6, 6, 6], [7, 7, 7], [8, 0, 255]]])
contains_target = (target == search_area.all(2)).any()  # Returns True
This works better since it matches the elements of target for each pixel individually, but it still returns True when they aren't in order or in the right numbers.
Lastly, I tried:
def pixel_matches_target(pixel_to_match):
    return (target == pixel_to_match).all()

contains_target = np.apply_along_axis(pixel_matches_target, 2, search_area).any()
But it's too slow to be used (about 1 second per pass).
How can I find if a numpy array contains a vector along a specific axis?
EDIT:
I ended up circumventing the issue by converting the RGB images to binary masks using cv2.inRange() and checking if the resulting 2D array contains True values. This resulted in several orders of magnitude faster execution.
One somewhat decent possibility to solve your problem would be the following (if you can afford the extra temp memory):
import numpy as np

target = np.array([255, 0, 0])
search_area = np.array([[[0, 0, 0], [1, 1, 1], [2, 2, 2]],
                        [[3, 3, 3], [4, 4, 4], [5, 5, 5]],
                        [[6, 6, 6], [7, 7, 7], [8, 0, 255]]])

# works for general N-D sub-arrays
adjusted_shape = search_area.reshape((-1, *target.shape))
contains_target = target.tolist() in adjusted_shape.tolist()  # False
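For the specific case of matching a 1D vector along the last axis, a fully vectorized check is also possible (a standard NumPy idiom, sketched here on the arrays above): broadcast the comparison over the last axis, require all three channels to match, then ask whether any pixel matched.
contains_target = (search_area == target).all(axis=-1).any()  # False for the data above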
If your arrays are integers you may use numpy.array_equal() to check that the arrays match (if using floats see numpy.allclose() instead). Assuming the sub-arrays to match are always in the 3rd sub-row you may do the following:
if sum(np.array_equal(target, a) for a in arr[:, 2]):
    print("Contains the target!")
Should the sub-array occur anywhere you can use:
sum(np.array_equal(target, item) for sublist in arr for item in sublist)
Note: This doesn't answer the generalized question, but is several orders of magnitude faster for the specific problem of finding if an image contains pixels of a specific color.
import cv2
import numpy as np

target_lower = np.array([250, 0, 0])
target_upper = np.array([255, 5, 5])
search_area = np.array([[[0, 0, 0], [1, 1, 1], [2, 2, 2]],
                        [[3, 3, 3], [4, 4, 4], [5, 5, 5]],
                        [[6, 6, 6], [7, 7, 7], [8, 0, 255]]], dtype=np.uint8)  # cv2.inRange expects uint8
mask = cv2.inRange(search_area, target_lower, target_upper)
mask = mask.astype(bool)
contains_target = (True in mask)
Additionally, it has the benefit of allowing for a bit of flexibility for the target color.

Efficiently creating masks - NumPy / Python

Suppose I have a NumPy array with shape (50, 10000, 10000) with 1000 distinct "clusters". For example, there would be a small volume somewhere with just 1s, another small volume with 2s, etc. I would like to iterate through each cluster to create a mask like so:
for i in np.unique(arr)[1:]:
    mask = arr == i
    # do other stuff with mask
Creating each mask takes about 15 seconds, and iterating through 1000 clusters would take more than 4 hours. Is there a possible way to speed up the code or is this the best there is since there is no avoiding iterating through each element of the array?
EDIT: the dtype of the array is uint16
I'm assuming arr is sparse:
you say the clusters are small, and 1000 clusters isn't going to tile an array that big
you iterate over np.unique(arr)[1:], so I assume the first unique value is 0
In this case I would recommend leveraging a scipy.sparse.csr_matrix
from scipy.sparse import csr_matrix
sp_arr = csr_matrix(arr.reshape(1,-1))
This turns your big dense array into a one-row compressed sparse row array. Since sparse arrays don't like more than 2 dimensions, this tricks it into using ravelled indices. Now sp_arr has data (the cluster labels), indices (the ravelled indices), and indptr (which is trivial here since we only have one row). So,
for i in np.unique(sp_arr.data):  # as a bonus this `unique` call should be faster too
    x, y, z = np.unravel_index(sp_arr.indices[sp_arr.data == i], arr.shape)
This should much more efficiently give coordinates equivalent to
for i in np.unique(arr)[1:]:
    x, y, z = np.nonzero(arr == i)
where x, y, z are the indices of the True values in mask. From there you can either reconstruct mask or work off the indices (recommended).
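If you do end up needing the boolean mask for a given cluster, rebuilding it from those coordinates is cheap; a minimal sketch:
# Rebuild the boolean mask for cluster i from its coordinates
mask = np.zeros(arr.shape, dtype=bool)
mask[x, y, z] = True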
You could also do this purely with numpy, and still have a boolean mask at the end, though it is a bit less memory efficient:
all_mask = arr != 0   # points assigned to any cluster
data = arr[all_mask]  # all cluster labels
for i in np.unique(data):
    mask = all_mask.copy()
    mask[mask] = data == i  # now mask is the same as before
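The mask[mask] = data == i line is the subtle part: assigning through a boolean mask writes only to the True positions, in order. A tiny illustrative example (hypothetical values, 1D for brevity):
arr = np.array([0, 2, 0, 1, 2])  # two clusters: 1 and 2
all_mask = arr != 0              # [False, True, False, True, True]
data = arr[all_mask]             # [2, 1, 2]
mask = all_mask.copy()
mask[mask] = data == 2           # the three True slots receive [True, False, True]
# mask is now [False, True, False, False, True], i.e. arr == 2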

Improving performance of a Python function that outputs the pixels that are different between two images

I'm working on a computer vision project and am looking to build a fast function that compares two images and outputs only the pixels where the two images differ sufficiently; all other pixels get set to (0, 0, 0). In practice, I want the camera to detect objects and ignore the background.
My issue is the function doesn't run fast enough. What are some ways to speed things up?
def get_diff_image(fixed_image):
    # get new image
    new_image = current_image()
    # get diff
    diff = abs(fixed_image - new_image)
    # creating a filter array
    filter_array = np.empty(shape=(fixed_image.shape[0], fixed_image.shape[1]))
    for idx, row in enumerate(diff):
        for idx2, pixel in enumerate(row):
            mag = np.linalg.norm(pixel)
            if mag > 40:
                filter_array[idx][idx2] = True
            else:
                filter_array[idx][idx2] = False
    # applying filter
    filter_image = np.copy(new_image)
    filter_image[filter_array == False] = [0, 0, 0]
    return filter_image
As others have mentioned, your biggest slow down in this code is iterating over every pixel in Python. Since Python is an interpreted language, these iterations take much longer than their equivalents in C/C++, which numpy uses under the hood.
Conveniently, you can specify an axis for numpy.linalg.norm, so you can get all the magnitudes in one numpy command. In this case, your pixels are on axis 2, so we'll take the norm on that axis, like this:
mags = np.linalg.norm(diff, axis=2)
Here, mags will have the same shape as filter_array, and each location will hold the magnitude of the corresponding pixel.
Using a boolean operator on a numpy array returns an array of bools, so:
filter_array = mags > 40
With the loops removed, the whole thing looks like this:
def get_diff_image(fixed_image):
    # get new image
    new_image = current_image()
    # get diff
    diff = abs(fixed_image - new_image)
    # creating a filter array
    mags = np.linalg.norm(diff, axis=2)
    filter_array = mags > 40
    # applying filter
    filter_image = np.copy(new_image)
    filter_image[filter_array == False] = [0, 0, 0]
    return filter_image
But there is still more efficiency to be gained.
As noted by pete2fiddy, the magnitude of a vector doesn't depend on its direction. The absolute value operator only changes the signs of the components, not the magnitude, so we simply don't need it here. Sweet!
The biggest remaining performance gain is to avoid copying the image. If you need to preserve the original image, start by allocating zeros for the output array since zeroing memory is often hardware accelerated. Then, copy only the required pixels. If you don't need the original and only plan to use the filtered one, then modifying the image in-place will give much better performance.
Here's an updated function with those changes in place:
def get_diff_image(fixed_image):
    # get new image
    new_image = current_image()
    # Compute difference magnitudes
    mags = np.linalg.norm(fixed_image - new_image, axis=2)
    # Preserve original image
    filter_image = np.zeros_like(new_image)
    filter_image[mags > 40] = new_image[mags > 40]
    return filter_image

    # Avoid the copy entirely (overwrites the original!)
    # new_image[mags <= 40] = [0, 0, 0]
    # return new_image
The slow part is your loop.
for idx, row in enumerate(diff):
    for idx2, pixel in enumerate(row):
        mag = np.linalg.norm(pixel)
        if mag > 40:
            filter_array[idx][idx2] = True
        else:
            filter_array[idx][idx2] = False
Doing this in Python code is much slower than doing it in numpy's implicit loop over an array. The same applies to the norm call: with mag = np.linalg.norm(diff, axis=2) you would create the whole array mag in one call. You could then use numpy's greater function, applied to the whole array, to get filter_array.
The loop over the array will then be done in numpy's C code which is an order of magnitude faster (in general).
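Putting that description into code, a sketch of the vectorized version this answer has in mind:
# Per-pixel magnitudes in one call, then one elementwise comparison
mag = np.linalg.norm(diff, axis=2)
filter_array = np.greater(mag, 40)  # equivalent to: mag > 40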
Nested loops in Python run very slowly. However, this can often be mitigated by letting numpy, which is largely written in C, do the looping. Try giving this a go:
def get_image_difference_magnitudes(image1, image2):
    '''Order of subtraction does not matter, because the magnitude
    of any vector is always a positive scalar.'''
    image_subtractions = image1 - image2
    '''Checks whether the image has channels or not, i.e. whether the
    two images are grayscale or not. Calculating the magnitude of
    change differs for the two cases.'''
    if len(image1.shape) == 2:
        return np.abs(image_subtractions)
    '''Returns the magnitude of change for each pixel. The "axis=2"
    argument of np.linalg.norm specifies that, rather than a single
    norm over the whole array, it will return an image that replaces
    each pixel (which holds the vector of differences between the two
    images) with the magnitude of the vector at that pixel.'''
    return np.linalg.norm(image_subtractions, axis=2)

def difference_magnitude_threshold_images(image1, image2, threshold):
    image_subtraction_magnitudes = get_image_difference_magnitudes(image1, image2)
    '''Creates a numpy array of "False" with the same width and height
    as the image. Slicing the shape to 2 makes the mask only
    width x height, and not width x height x 3 in the case of RGB.'''
    threshold_mask = np.zeros(image1.shape[:2], dtype=bool)
    '''Checks each element of image_subtraction_magnitudes, and for
    all pixels where the magnitude of difference is greater than
    threshold, sets the mask to True.'''
    threshold_mask[image_subtraction_magnitudes > threshold] = True
    return threshold_mask
The above also has the benefit of being only a few lines long (were you to fit it into one function). The way you apply the filter looks fine to me; I'm not sure there is a faster way to do it.

numpy: unique list of colors in the image

I have an image img:
>>> img.shape
(200, 200, 3)
On pixel (100, 100) I have a nice color:
>>> img[100,100]
array([ 0.90980393, 0.27450982, 0.27450982], dtype=float32)
Now my question is: How many different colors are there in this image, and how do I enumerate them?
My first idea was numpy.unique(), but somehow I am using this wrong.
Your initial idea to use numpy.unique() actually can do the job perfectly with the best performance:
numpy.unique(img.reshape(-1, img.shape[2]), axis=0)
First, we flatten the rows and columns of the matrix, so the matrix has as many rows as there are pixels in the image; the columns are the color components of each pixel. Then we count the unique rows of the flattened matrix.
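If you also want to know how often each color occurs, np.unique accepts a return_counts flag:
# Unique colors plus the number of pixels of each color
colors, counts = np.unique(img.reshape(-1, img.shape[2]), axis=0, return_counts=True)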
You could do this:
set( tuple(v) for m2d in img for v in m2d )
One straightforward way to do this is to leverage the de-duplication that occurs when casting a list of all pixels as a set:
unique_pixels = np.vstack({tuple(r) for r in img.reshape(-1,3)})
Another way that might be of practical use, depending on your reasons for extracting unique pixels, would be to use Numpy’s histogramdd function to bin image pixels to some pre-specified fidelity as follows (where it is assumed pixel values range from 0 to 1 for a given image channel):
n_bins = 10
bin_edges = np.linspace(0, 1, n_bins + 1)
bin_centres = (bin_edges[0:-1] + bin_edges[1::]) / 2.
hist, _ = np.histogramdd(img.reshape(-1, 3), bins=np.vstack(3 * [bin_edges]))
unique_pixels = np.column_stack([bin_centres[dim] for dim in np.where(hist)])
If for any reason you will need to count the number of times each unique color appears, you can use this:
from collections import Counter
Counter([tuple(colors) for i in img for colors in i])
The question about unique colors (or more generally unique values along a given axis) has been also asked here (in particular, see this answer). If you're seeking for the fastest available option then "void view" would be your weapon of choice:
axis = 2
np.unique(
    img.view(np.dtype((np.void, img.dtype.itemsize * img.shape[axis])))
).view(img.dtype).reshape(-1, img.shape[axis])
For any questions related to what the script actually does, I refer the reader to the links above.
