Suppose I have a NumPy array with shape (50, 10000, 10000) containing 1000 distinct "clusters". For example, there would be a small volume somewhere with just 1s, another small volume with 2s, etc. I would like to iterate through each cluster to create a mask like so:
for i in np.unique(arr)[1:]:
    mask = arr == i
    # do other stuff with mask
Creating each mask takes about 15 seconds, so iterating through 1000 clusters would take more than 4 hours. Is there a way to speed up this code, or is this the best there is, given that there is no avoiding visiting each element of the array?
EDIT: the dtype of the array is uint16
I'm assuming arr is sparse:
you say the clusters are small, and 1000 clusters isn't going to tile an array that big
you iterate over np.unique(arr)[1:], so I assume the first unique value is 0
In this case I would recommend leveraging a scipy.sparse.csr_matrix:
from scipy.sparse import csr_matrix
sp_arr = csr_matrix(arr.reshape(1,-1))
This turns your big dense array into a one-row compressed sparse row array. Since sparse arrays don't like more than 2 dimensions, this tricks it into using ravelled indices. Now sp_arr has data (the cluster labels), indices (the ravelled indices), and indptr (which is trivial here since we only have one row). So,
for i in np.unique(sp_arr.data):  # as a bonus, this `unique` call should be faster too
    x, y, z = np.unravel_index(sp_arr.indices[sp_arr.data == i], arr.shape)
This should give, much more efficiently, coordinates equivalent to
for i in np.unique(arr)[1:]:
    x, y, z = np.nonzero(arr == i)
where x, y, z are the indices of the True values in mask. From there you can either reconstruct mask or work off the indices (recommended).
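If you do need the full boolean mask for a given cluster, a minimal sketch of rebuilding it from the ravelled indices (reusing sp_arr, arr, and i from the loop above) could look like:
flat_idx = sp_arr.indices[sp_arr.data == i]  # ravelled positions belonging to cluster i
mask = np.zeros(arr.shape, dtype=bool)
mask.flat[flat_idx] = True                   # same mask as `arr == i`, built from the indices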
You could also do this purely with numpy and still end up with a boolean mask, though it is a bit less memory efficient:
all_mask = arr != 0   # points assigned to any cluster
data = arr[all_mask]  # all cluster labels
for i in np.unique(data):
    mask = all_mask.copy()
    mask[mask] = data == i  # now mask is the same as before
I am constructing a transition matrix from an n1 x n2 x ... x nN x nN array. For concreteness let N = 3, e.g.,
import numpy as np
# example with N = 3
n1, n2, n3 = 3, 2, 5
dim = (n1, n2, n3)
arr = np.random.random_sample(dim + (n3,))
Here arr contains transition probabilities between two states, where the "from"-state is indexed by the first 3 dimensions, and the "to"-state is indexed by the first 2 and the last dimension. I want to construct a transition matrix, which expresses these probabilities raveled into a sparse (n1*n2*n3) x (n1*n2*n3) matrix.
To clarify, let me provide my current approach that does what I want to do. Unfortunately, it's slow and doesn't work when N and n1, n2, ... are large. So I am looking for a more efficient way of doing the same that scales better for larger problems.
My approach
import numpy as np
from scipy import sparse as sparse
## step 1: get the index corresponding to each dimension of the from- and to-state
# ravel axes 1 to 3 into single axis and make sparse
spmat = sparse.coo_matrix(arr.reshape(np.prod(dim), -1))
data = spmat.data
row = spmat.row
col = spmat.col
# use unravel_index to get the per-dimension indices of the from- and to-states
row_unravel = np.array(np.unravel_index(row, dim))
col_unravel = np.array(np.unravel_index(col, n3))
## step 2: combine "to" index with rows 1 and 2 of "from"-index to get "to"-coordinates in full state space
row_unravel[-1, :] = col_unravel # first 2 dimensions of state do not change
colnew = np.ravel_multi_index(row_unravel, dim) # ravel back to 1d
## step 3: assemble transition matrix
out = sparse.coo_matrix((data, (row, colnew)), shape=(np.prod(dim), np.prod(dim)))
Final thought
I will be running this code many times. Across iterations, the data of arr may change, but the dimensions will stay the same. So one thing I could do is to save and load row and colnew from a file, skipping everything between the definition of data (line 2) and the final line of my code. Do you think this would be the best approach?
Edit: One problem I see with this strategy is that if some elements of arr are zero (which is possible) then the size of data will change across iterations.
Here is one approach that beats the one posted in the OP. I'm not sure whether it's the most efficient.
import numpy as np
from scipy import sparse
# get col and row indices
idx = np.arange(np.prod(dim))
row = idx.repeat(dim[-1])
col = idx.reshape(-1, dim[-1]).repeat(dim[-1], axis=0).ravel()
# get the data
data = arr.ravel()
# construct the sparse matrix
out = sparse.coo_matrix((data, (row, col)), shape=(np.prod(dim), np.prod(dim)))
Two things that could be improved:
(1) if arr contains zeros, the output matrix out will store them as explicit nonzero entries (see the sketch below).
(2) The approach relies on the new state being the last dimension of dim. It would be nice to generalize so that the last axis of arr can replace any of the originating axes, not just the last one.
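Regarding (1), one way would be to drop the explicit zeros before constructing the matrix. A rough sketch, reusing data, row, col, and dim from the snippet above:
nz = data != 0  # keep only genuinely nonzero transition probabilities
out = sparse.coo_matrix((data[nz], (row[nz], col[nz])),
                        shape=(np.prod(dim), np.prod(dim)))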
I need an efficient way to create a numpy array of shape (x, y, 3) where, for each tuple (x, y), only one randomly chosen element out of the 3 has a value randomly selected from [-1, 0, 1].
np.random.randint(-1, 2, (x,y,3))
only does the job for the second half of my requirements.
I could use a nested loop to iterate over each (x, y) and multiply its values by a random mask, but that would not be efficient at all.
Here is the loop implementation:
a = np.random.randint(-1, 2, (x, y, 3))
for i in range(a.shape[0]):
    for j in range(a.shape[1]):
        mask = np.array(np.random.permutation([0, 1, 0]))
        a[i][j] = a[i][j] * mask
Rather than generating a whole bunch of extra numbers and turning most of them off, I'd approach this from the point of view of only generating the numbers you need. You want to assign to a random index between 0 and 2 for each x-y pair. So generate a random index, and the random values, and assign:
indices = np.random.randint(3, size=(x, y))
values = np.random.randint(-1, 2, size=(x, y))
result = np.zeros((x, y, 3), dtype=int)
result[(*np.ogrid[:x, :y], indices)] = values
The indexing expression is an advanced index because indices is an array of integers. Using ... or :, : for the first two indices won't do what you want in that case. Instead, np.ogrid generates ranges of the correct shape to force the elements of indices to correspond to the correct x-y coordinates.
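As a quick sanity check, here is a small self-contained usage sketch (the sizes are made up just for illustration):
import numpy as np
x, y = 4, 5                                            # small made-up sizes
indices = np.random.randint(3, size=(x, y))
values = np.random.randint(-1, 2, size=(x, y))
result = np.zeros((x, y, 3), dtype=int)
result[(*np.ogrid[:x, :y], indices)] = values
assert (np.count_nonzero(result, axis=2) <= 1).all()   # at most one element per (x, y) is set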
I'm quite new to programming in general and have not been able to figure this problem out so far.
I've got a two-dimensional numpy array mask, let's say mask.shape is (3800, 3500), which is filled with 0s and 1s representing the spatial resolution of a 2D image, where a 1 represents a visible pixel and 0 represents background.
I've got a second two-dimensional array data with data.shape of (909, x), where x is exactly the number of 1s in the first array. I now want to replace each 1 in the first array with a vector of length 909 from the second array, resulting in a final 3D array of shape (3800, 3500, 909), which is basically a 2D x-by-y image where selected pixels have a spectrum of 909 values in the z direction.
I tried
mask_vector = mask.flatten()
ones = np.ones((909, 1))
mask_909 = mask_vector[:, None].dot(ones.T)  # results in a 13300000 by 909 2d array
count = 0
for i in mask_vector:
    if i == 1:
        mask_909[i, :] = data[:, count]
        count += 1
result = mask_909.reshape((3800, 3500, 909))
This results in a viable 3D array giving a 2D picture when doing plt.imshow(result.mean(axis=2))
But the values are still only 1s and 0s, not the wanted spectral data in the z direction.
I also tried using np.where but broadcasting fails as the two 2D arrays have clearly different shapes.
Has anybody got a solution? I am sure that there must be an easy way...
Basically, you simply need to use np.where to locate the 1s in your mask array. Then initialize your result array to zero and replace the third dimension with your data using the outputs of np.where:
import numpy as np
m, n, k = 380, 350, 91
mask = np.round(np.random.rand(m, n))
x = np.sum(mask == 1)
data = np.random.rand(k, x)
result = np.zeros((m, n, k))
row, col = np.where(mask == 1)
result[row,col] = data.transpose()
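Applied to the question's variable names, a hedged sketch would look like the following (note that the full (3800, 3500, 909) result is roughly 97 GB as float64, so a smaller dtype or a subset of the data may be needed in practice):
row, col = np.where(mask == 1)                        # mask: (3800, 3500) array of 0s and 1s
result = np.zeros(mask.shape + (data.shape[0],), dtype=data.dtype)
result[row, col] = data.T                             # each 1-pixel gets its 909-value spectrum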
I am trying to vectorize an operation using numpy. It is used in a python script that I have profiled, and I found this operation to be the bottleneck, so it needs to be optimized since I will run it many times.
The operation is on a data set of two parts. First, a large set (n) of 1D vectors of different lengths (with maximum length, Lmax) whose elements are integers from 1 to maxvalue. The set of vectors is arranged in a 2D array, data, of size (num_samples,Lmax) with trailing elements in each row zeroed. The second part is a set of scalar floats, one associated with each vector, that I have a computed and which depend on its length and the integer-value at each position. The set of scalars is made into a 1D array, Y, of size num_samples.
The desired operation is to form the average of Y over the n samples, as a function of (value,position along length,length).
This entire operation can be vectorized in MATLAB using the accumarray function, with 3 2D arrays of the same size as data, whose elements are the corresponding value, position, and length indices of the desired final array:
sz_Y = num_samples;
sz_len = Lmax
sz_pos = Lmax
sz_val = maxvalue
ind_len = repmat( 1:sz_len ,1 ,sz_samples);
ind_pos = repmat( 1:sz_pos ,sz_samples,1 );
ind_val = data
ind_Y = repmat((1:sz_Y)',1 ,Lmax );
copiedY=Y(ind_Y);
mask = data>0;
finalarr=accumarray({ind_val(mask),ind_pos(mask),ind_len(mask)},copiedY(mask), [sz_val sz_pos sz_len])/sz_val;
I was hoping to emulate this implementation with np.bincount. However, np.bincount differs from accumarray in two relevant ways:
both arguments must be of same 1D size, and
there is no option to choose the shape of the output array.
In the above usage of accumarray, the indices argument, {ind_val(mask),ind_pos(mask),ind_len(mask)}, is a cell array of three index vectors that together act as index tuples, while np.bincount only accepts a single 1D array of flat indices as far as I understand. I expect np.ravel may be useful but am not sure how to use it here to do what I want. I am coming to python from matlab and some things do not translate directly, e.g. the colon operator, which ravels in the opposite order to ravel. So my question is: how might I use np.bincount or any other numpy method to achieve an efficient python implementation of this operation?
EDIT: To avoid wasting time: for these multi-D index problems with complicated index manipulation, is the recommended route to just use cython to implement the loops explicitly?
EDIT2: Alternative Python implementation I just came up with.
Here is a RAM-heavy solution:
First precalculate:
Using index units for length (i.e., length 1 = index 0), make a 4D bool array of size (num_samples, Lmax+1, Lmax+1, maxvalue+1), holding where the conditions are satisfied for each value in Y.
ALLcond = np.zeros((num_samples, Lmax+1, Lmax+1, maxvalue+1), dtype='bool')
for l in range(Lmax+1):
    for i in range(Lmax+1):
        for v in range(maxvalue+1):
            ALLcond[:, l, i, v] = (data[:, i] == v) & (Lvec == l)
Where Lvec=[len(row) for row in data]. Then get the indices for these using np.where and initialize a 4D float array into which you will assign the values of Y:
[ind_Y, ind_len, ind_pos, ind_val] = np.where(ALLcond)
Yval = np.zeros(np.shape(ALLcond), dtype='float')
Now in the loop in which I have to perform the operation, I compute it with the two lines:
Yval[ind_Y, ind_len, ind_pos, ind_val] = Y[ind_Y]
Y_avg = Yval.sum(axis=0) / num_samples
This gives a factor-of-4 or so speed-up over the direct loop implementation. I was expecting more. Perhaps this is a more tangible implementation for Python heads to digest. Any faster suggestions are welcome :)
One way is to convert the 3 "indices" to a linear index and then apply bincount. Numpy's ravel_multi_index is essentially the same as MATLAB's sub2ind. So the ported code could be something like:
shape = (Lmax+1, Lmax+1, maxvalue+1)
posvec = np.arange(1, Lmax+1)
ind_len = np.tile(Lvec[:,None], [1, Lmax])
ind_pos = np.tile(posvec, [n, 1])
ind_val = data
Y_copied = np.tile(Y[:,None], [1, Lmax])
mask = posvec <= Lvec[:,None] # fill-value independent
lin_idx = np.ravel_multi_index((ind_len[mask], ind_pos[mask], ind_val[mask]), shape)
Y_avg = np.bincount(lin_idx, weights=Y_copied[mask], minlength=np.prod(shape)) / n
Y_avg.shape = shape
This is assuming data has shape (n, Lmax), Lvec is a Numpy array, etc. You may need to adapt the code a little to get rid of off-by-one errors.
One could argue that the tile operations are not very efficient and not very "numpythonic". Something with broadcast_arrays could be nice, but I think I prefer this way:
shape = (Lmax+1, Lmax+1, maxvalue+1)
posvec = np.arange(1, Lmax+1)
mask = posvec <= Lvec[:,None]  # fill-value independent; define before it is used below
len_idx = np.repeat(Lvec, Lvec)
pos_idx = np.broadcast_to(posvec, data.shape)[mask]
val_idx = data[mask]
Y_copied = np.repeat(Y, Lvec)
lin_idx = np.ravel_multi_index((len_idx, pos_idx, val_idx), shape)
Y_avg = np.bincount(lin_idx, weights=Y_copied, minlength=np.prod(shape)) / n
Y_avg.shape = shape
Note broadcast_to was added in Numpy 1.10.0.
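For reference, here is a tiny self-contained toy example of the same ravel_multi_index + bincount accumulation pattern (all numbers made up, just to illustrate how weights sharing an index tuple are summed):
import numpy as np
shape = (2, 3)                              # toy output shape
i = np.array([0, 1, 1, 0])                  # first index of each sample
j = np.array([2, 0, 0, 2])                  # second index of each sample
w = np.array([1.0, 2.0, 3.0, 4.0])          # weights to accumulate (plays the role of Y)
lin = np.ravel_multi_index((i, j), shape)
acc = np.bincount(lin, weights=w, minlength=np.prod(shape)).reshape(shape)
# acc[0, 2] == 5.0 and acc[1, 0] == 5.0: weights sharing an index tuple are summed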
I'm trying to reduce noise in a binary python array by removing all completely isolated single cells, i.e. setting "1" value cells to 0 if they are completely surrounded by other "0"s. I have been able to get a working solution by removing blobs with sizes equal to 1 using a loop, but this seems like a very inefficient solution for large arrays:
import numpy as np
import scipy.ndimage as ndimage
import matplotlib.pyplot as plt
# Generate sample data
square = np.zeros((32, 32))
square[10:-10, 10:-10] = 1
np.random.seed(12)
x, y = (32*np.random.random((2, 20))).astype(int)
square[x, y] = 1
# Plot original data with many isolated single cells
plt.imshow(square, cmap=plt.cm.gray, interpolation='nearest')
# Assign unique labels
id_regions, number_of_ids = ndimage.label(square, structure=np.ones((3,3)))
# Set blobs of size 1 to 0
for i in range(number_of_ids + 1):
    if id_regions[id_regions == i].size == 1:
        square[id_regions == i] = 0
# Plot desired output, with all isolated single cells removed
plt.imshow(square, cmap=plt.cm.gray, interpolation='nearest')
In this case, eroding and dilating my array won't work as it will also remove features with a width of 1. I feel the solution lies somewhere within the scipy.ndimage package, but so far I haven't been able to crack it. Any help would be greatly appreciated!
A belated thanks to both Jaime and Kazemakase for their replies. The manual neighbour-checking method did remove all isolated patches, but also removed patches attached to others by one corner (i.e. to the upper-right of the square in the sample array). The summed area table works perfectly and is very fast on the small sample array, but slows down on larger arrays.
I ended up following an approach using ndimage which seems to work efficiently for very large and sparse arrays (0.91 s for a 5000 x 5000 array vs 1.17 s for the summed area table approach). I first generate a labelled array of unique IDs for each discrete region, calculate the size of each ID, mask the size array to focus only on size == 1 blobs, then index the original array and set IDs with a size of 1 to 0:
def filter_isolated_cells(array, struct):
    """ Return array with completely isolated single cells removed
    :param array: Array with completely isolated single cells
    :param struct: Structure array for generating unique regions
    :return: Array with minimum region size > 1
    """
    filtered_array = np.copy(array)
    id_regions, num_ids = ndimage.label(filtered_array, structure=struct)
    id_sizes = np.array(ndimage.sum(array, id_regions, range(num_ids + 1)))
    area_mask = (id_sizes == 1)
    filtered_array[area_mask[id_regions]] = 0
    return filtered_array
# Run function on sample array
filtered_array = filter_isolated_cells(square, struct=np.ones((3,3)))
# Plot output, with all isolated single cells removed
plt.imshow(filtered_array, cmap=plt.cm.gray, interpolation='nearest')
You can manually check the neighbors and avoid the loop using vectorization.
has_neighbor = np.zeros(square.shape, bool)
has_neighbor[:, 1:] = np.logical_or(has_neighbor[:, 1:], square[:, :-1] > 0) # left
has_neighbor[:, :-1] = np.logical_or(has_neighbor[:, :-1], square[:, 1:] > 0) # right
has_neighbor[1:, :] = np.logical_or(has_neighbor[1:, :], square[:-1, :] > 0) # above
has_neighbor[:-1, :] = np.logical_or(has_neighbor[:-1, :], square[1:, :] > 0) # below
square[np.logical_not(has_neighbor)] = 0
That way looping over the square is performed internally by numpy, which is rather more efficient than looping in python. There are two drawbacks of this solution:
If your array is very sparse there may be more efficient ways to check the neighborhood of non-zero points.
If your array is very large the has_neighbor array might consume too much memory. In this case you could loop over sub-arrays of smaller size (trade-off between python loops and vectorization).
I have no experience with ndimage, so there may be a better solution built in somewhere.
The typical way of getting rid of isolated pixels in image processing is to do a morphological opening, for which you have a ready-made implementation in scipy.ndimage.morphology.binary_opening. This would affect the contours of your larger areas as well though.
As for a DIY solution, I would use a summed area table to count the number of items in every 3x3 subimage, subtract from that the value of the central pixel, then zero all center points where the result came out to zero. To properly handle the borders, first pad the array with zeros:
sat = np.pad(square, pad_width=1, mode='constant', constant_values=0)
sat = np.cumsum(np.cumsum(sat, axis=0), axis=1)
sat = np.pad(sat, ((1, 0), (1, 0)), mode='constant', constant_values=0)
# These are all the possible overlapping 3x3 windows sums
sum3x3 = sat[3:, 3:] + sat[:-3, :-3] - sat[3:, :-3] - sat[:-3, 3:]
# This takes away the central pixel value
sum3x3 -= square
# This zeros all the isolated pixels
square[sum3x3 == 0] = 0
The implementation above works, but is not especially careful about not creating intermediate arrays, so you can probably shave off some execution time by refactoring adequately.
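For example, here is a rough sketch of one such refactor, which should give the same result as the code above (assuming in-place accumulation via the out argument of np.cumsum is acceptable, which ufunc accumulation supports): build one padded buffer, cumsum it in place, and slice it:
buf = np.zeros((square.shape[0] + 3, square.shape[1] + 3))
buf[2:-1, 2:-1] = square                      # zero margin: 2 on top/left, 1 on bottom/right
np.cumsum(buf, axis=0, out=buf)               # in-place cumulative sums, no extra copies
np.cumsum(buf, axis=1, out=buf)
sum3x3 = buf[3:, 3:] + buf[:-3, :-3] - buf[3:, :-3] - buf[:-3, 3:]
sum3x3 -= square
square[sum3x3 == 0] = 0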