Renumber/Relabel a Numpy array based on coordinates - python

I have a segmentation map (numpy.ndarray) that contains objects labeled with unique numbers. I want to combine objects across multiple slices by labeling them with the same number. Specifically, I want to renumber objects based on a DataFrame containing centroid positions and the desired label value.
First, I created some mock labels and a DataFrame:
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "slice": [0, 0, 0, 0, 1, 1, 1, 2, 2, 2],
    "number": [1, 2, 3, 4, 1, 2, 3, 1, 2, 3],
    "x": [10, 20, 30, 40, 11, 21, 31, 12, 22, 32],
    "y": [10, 20, 30, 40, 11, 21, 31, 12, 22, 32]
})
def make_segmap(df):
    x, y = np.indices((50, 50))
    maps = []
    # Iterate over slices and coordinates
    for n_slice in df["slice"].unique():
        masks = []
        for row in df[df["slice"] == n_slice].iterrows():
            # Create circle
            mask_circle = (x - row[1]["x"])**2 + (y - row[1]["y"])**2 < 5**2
            # Random index number (here just a multiple)
            masks.append(mask_circle * row[1]["number"]*3)
        maps.append(np.max(masks, axis=0))
    return np.stack(maps, axis=0)
segmap = make_segmap(df)
For renumbering, this is what I came up with so far:
new_maps = []
# Iterate over slices
for n_slice in df["slice"].unique():
    new_labels = []
    for row in df[df["slice"] == n_slice].iterrows():
        # Find current value at position
        original_label = segmap[n_slice, row[1]["y"], row[1]["x"]]
        # Replace all label occurrences with the desired label from the DataFrame
        replaced_label = np.where(segmap[n_slice] == original_label, row[1]["number"], 0)
        new_labels.append(replaced_label)
    new_maps.append(np.max(new_labels, axis=0))
new_segmap = np.stack(new_maps, axis=0)
This works reasonably well but doesn't scale to larger datasets. The real dataset has thousands of objects across hundreds of slices, and this approach takes a very long time to run (an hour or so). Are there any suggestions on how to replace multiple values at once to improve performance?
Thanks in advance.

You can use groupby to replace the current quadratic search algorithm with a (quasi-)linear one. Moreover, you can take advantage of NumPy's vectorization and broadcasting to remove the inner loop and make the computation faster.
Here is a faster implementation:
def make_segmap_fast(df):
    x, y = np.indices((50, 50))
    maps = []
    # Iterate over slices and coordinates
    for n_slice, subDf in df.groupby("slice"):
        subDf_x = subDf["x"].to_numpy()[:, None, None]
        subDf_y = subDf["y"].to_numpy()[:, None, None]
        subDf_number = subDf["number"].to_numpy()[:, None, None]
        # Create circle
        mask_circle = (x - subDf_x)**2 + (y - subDf_y)**2 < 5**2
        # Random index number (here just a multiple)
        masks = mask_circle * subDf_number
        maps.append(np.max(masks, axis=0)*3)
    return np.stack(maps, axis=0)
On my machine, this is 2 times faster on this very small example (and much more on bigger DataFrames).
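The same groupby-plus-broadcasting idea can also be applied to the renumbering loop itself. Here is a minimal sketch (not part of the original answer), assuming the DataFrame's centroid coordinates land inside their objects, as in the mock data:
new_maps = []
for n_slice, sub in df.groupby("slice"):
    ys = sub["y"].to_numpy()
    xs = sub["x"].to_numpy()
    numbers = sub["number"].to_numpy()
    # Labels currently found at this slice's centroid positions, shape (k,)
    original_labels = segmap[n_slice, ys, xs]
    # Boolean stack of shape (k, H, W): one mask per object
    matches = segmap[n_slice][None, :, :] == original_labels[:, None, None]
    # Assign the desired number to every matching pixel, then collapse over objects
    new_maps.append(np.max(matches * numbers[:, None, None], axis=0))
new_segmap = np.stack(new_maps, axis=0)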

Related

How to efficiently split nested list into left and right based on a specific condition for a decision tree function

I am trying to implement a decision tree algorithm in Python from scratch, which will include 3 parts: splitting the data, calculating entropy / information gain, and training the tree.
Currently, I am having trouble with splitting the data into X_left, X_right, y_left, y_right based on a specific condition (split attribute is a column and split value is a value to split on). I've implemented the code below and it works fine, but my actual data is very large and it takes forever to execute. I was wondering if there is a way to simplify my code and make it more efficient?
FYI, I know there are multiple packages I could use to split the data, like sklearn, but I am trying to do it from scratch first. Appreciate your help in advance!
def parts(X, y, split_attribute, split_val):
    X_left = []
    X_right = []
    y_left = []
    y_right = []
    count = 0
    for x in X:
        count += len(x)
    attribute_count = count / len(X)
    # if split_attribute < len of list, then add to X_left and X_right, else pass
    if split_attribute < attribute_count:
        X_left = [x for x in X if x[split_attribute] <= split_val]
        X_right = [x for x in X if x[split_attribute] > split_val]
    else:
        pass
    # get indices of left and right lists after split
    left_index = [X.index(item) for item in X_left]
    right_index = [X.index(item) for item in X_right]
    # get y values based on X_left and X_right indices
    y_left = [y[i] for i in left_index]
    y_right = [y[i] for i in right_index]
    #############################################
    return (X_left, X_right, y_left, y_right)
Inputs:
X = [[3, 10], [1, 22], [2, 28], [5, 32], [4, 32]]
y = [1, 1, 0, 0, 1]
split_attribute = 0
split_val = 1
parts(X, y, split_attribute, split_val)
Output:
([[1, 22]], [[3, 10], [2, 28], [5, 32], [4, 32]], [1], [1, 0, 0, 1])
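For reference, a common way to speed this kind of split up (not from the original post) is to convert X and y to NumPy arrays and split with a single boolean mask instead of list comprehensions and X.index lookups; a minimal sketch:
import numpy as np

def parts_np(X, y, split_attribute, split_val):
    X = np.asarray(X)
    y = np.asarray(y)
    # One boolean mask decides left/right membership for every row at once
    mask = X[:, split_attribute] <= split_val
    return X[mask], X[~mask], y[mask], y[~mask]

X = [[3, 10], [1, 22], [2, 28], [5, 32], [4, 32]]
y = [1, 1, 0, 0, 1]
print(parts_np(X, y, split_attribute=0, split_val=1))
This returns NumPy arrays instead of nested lists, but the grouping matches the example output above.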

returning elements in bins as arrays in python

I have x, y, v arrays of data points and I am binning v on the x-y plane. I am trying to get the x, y, v values back after binning, but I want them as arrays corresponding to each bin. My code can get them individually, but that will not work for large data sets with many bins. Maybe I need to use loops of some kind, but my understanding of loops is weak. Code:
from scipy import stats
import numpy as np
x = np.array([-10,-2,4,12,3,6,8,14,3])
y = np.array([5,5,-6,8,-20,10,2,2,8])
v = np.array([4,-6,-10,40,22,-14,20,8,-10])
ret = stats.binned_statistic_2d(x,
                                y,
                                v,
                                'count',
                                bins=2,
                                expand_binnumbers=True)
print('counts=',ret.statistic)
print('binnumber=', ret.binnumber)
binnumber = ret.binnumber
statistic = ret.statistic
# get the bin numbers according to some condition
idx_bin_x, idx_bin_y = np.where(statistic==statistic[1][1])#[0]
print('idx_binx=',idx_bin_x)
print('idx_bin_y=',idx_bin_y)
# A binnumber of i means the corresponding value is
# between (bin_edges[i-1], bin_edges[i]).
# -> increment the bin indices by one
idx_bin_x += 1
idx_bin_y += 1
print('idx_binx+1=',idx_bin_x)
print('idx_bin_y+1=',idx_bin_y)
# get the boolean mask and apply it
is_event_x = np.in1d(binnumber[0], idx_bin_x)
print('eventx=',is_event_x)
is_event_y = np.in1d(binnumber[1], idx_bin_y)
print('eventy=',is_event_y)
is_event_xy = np.logical_and(is_event_x, is_event_y)
print('event_xy=', is_event_xy)
events_x = x[is_event_xy]
events_y = y[is_event_xy]
event_v = v[is_event_xy]
print('x=', events_x)
print('y=', events_y)
print('v=', event_v)
This outputs x, y, v for the bin with count=5, but I want all 4 bins, returning 4 arrays for each of x, y, v, e.g. for bin 1: x_bin1=[...], y_bin1=[...], v_bin1=[...] and so on for the 4 bins.
Also, feel free to suggest if you think there are easier ways to bin 2D planes (x, y) with values (v) like mine and to get the binned values back. Thank you!
Using np.array facilitates a compact way to recover the arrays you are after:
from scipy import stats
import numpy as np
# coordinates
x = np.array([-10,-2,4,12,3,6,8,14,3])
y = np.array([5,5,-6,8,-20,10,2,2,8])
v = np.array([4,-6,-10,40,22,-14,20,8,-10])
ret = stats.binned_statistic_2d(x, y, None, 'count', bins=2, expand_binnumbers=True)
b = ret.binnumber
for i in [1, 2]:
    for j in [1, 2]:
        m = (b[0] == i) & (b[1] == j)  # mask
        print((list(x[m]), list(y[m]), list(v[m])))
which gives for each of the four bins a tuple of 3 lists corresponding to x, y and v values:
([], [], [])
([-10, -2], [5, 5], [4, -6])
([4, 3], [-6, -20], [-10, 22])
([12, 6, 8, 14, 3], [8, 10, 2, 2, 8], [40, -14, 20, 8, -10])
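If you would rather have the four groups as named arrays than printed tuples, one option (an illustrative extension of the snippet above, not part of the original answer) is to collect them in a dictionary keyed by the bin indices:
from scipy import stats
import numpy as np

x = np.array([-10, -2, 4, 12, 3, 6, 8, 14, 3])
y = np.array([5, 5, -6, 8, -20, 10, 2, 2, 8])
v = np.array([4, -6, -10, 40, 22, -14, 20, 8, -10])

ret = stats.binned_statistic_2d(x, y, v, 'count', bins=2, expand_binnumbers=True)
b = ret.binnumber

# One entry per bin: (i, j) -> (x values, y values, v values)
bins = {}
for i in (1, 2):
    for j in (1, 2):
        m = (b[0] == i) & (b[1] == j)
        bins[(i, j)] = (x[m], y[m], v[m])

x_bin, y_bin, v_bin = bins[(2, 2)]  # any of the four bins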

How to use np.where multiple times without iteration?

Note that this question is not about multiple conditions within a single np.where(), see this thread for that.
I have a numpy array arr0 with some numbers (without a particular structure):
arr0 = np.array([[0, 3, 0],
                 [1, 3, 2],
                 [1, 2, 0]])
and a list of all the entries in this array:
entries = [0,1,2,3]
I also have another array, arr1:
arr1 = np.array([[4, 5, 6],
                 [6, 2, 4],
                 [3, 7, 9]])
I would like to perform some function on multiple subsets of elements of arr1. A subset consists of the numbers which are at the same positions as arr0 entries with a certain value. Let this function be finding the max value. Performing the function on each subset via a list comprehension:
res = [np.where(arr0==index,arr1,0).max() for index in entries]
res is [9, 6, 7, 5]
As expected: 0 appears in arr0 at the top-left, top-right, and bottom-right corners, and the biggest of the corresponding arr1 entries (i.e. 4, 6, 9) is 9. The rest follow a similar logic.
How can I achieve this without iteration?
My actual arrays are much bigger than these examples.
With broadcasting
res = np.where(arr0[...,None] == entries, arr1[...,None], 0).max(axis=(0, 1))
The result of np.where(...) is a (3, 3, 4) array, where slicing [...,0] would give you the same 3x3 array you get by manually doing the np.where with just entries[0], etc. Then taking the max of each 3x3 subarray leaves you with the desired result.
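A small check of that claim, using the arrays from the question (illustrative only):
import numpy as np

arr0 = np.array([[0, 3, 0], [1, 3, 2], [1, 2, 0]])
arr1 = np.array([[4, 5, 6], [6, 2, 4], [3, 7, 9]])
entries = [0, 1, 2, 3]

stacked = np.where(arr0[..., None] == entries, arr1[..., None], 0)
print(stacked.shape)                                                           # (3, 3, 4)
# Slice [..., 0] equals the per-entry np.where for entries[0]
print(np.array_equal(stacked[..., 0], np.where(arr0 == entries[0], arr1, 0)))  # True
print(stacked.max(axis=(0, 1)))                                                # [9 6 7 5]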
Timings
Apparently this method doesn't scale well to bigger arrays. The other answer using np.unique is more efficient because it reduces the maximum operation down to a few unique values regardless of how big the original arrays are.
import timeit
import matplotlib.pyplot as plt
import numpy as np

def loops():
    return [np.where(arr0 == index, arr1, 0).max() for index in entries]

def broadcast():
    return np.where(arr0[..., None] == entries, arr1[..., None], 0).max(axis=(0, 1))

def numpy_1d():
    arr0_1D = arr0.ravel()
    arr1_1D = arr1.ravel()
    arg_idx = np.argsort(arr0_1D)
    u, idx = np.unique(arr0_1D[arg_idx], return_index=True)
    return np.maximum.reduceat(arr1_1D[arg_idx], idx)

sizes = (3, 10, 25, 50, 100, 250, 500, 1000)
lengths = (4, 10, 25, 50, 100)
methods = (loops, broadcast, numpy_1d)

fig, ax = plt.subplots(len(lengths), sharex=True)
for i, M in enumerate(lengths):
    entries = np.arange(M)
    times = [[] for _ in range(len(methods))]
    for N in sizes:
        arr0 = np.random.randint(1000, size=(N, N))
        arr1 = np.random.randint(1000, size=(N, N))
        for j, method in enumerate(methods):
            times[j].append(np.mean(timeit.repeat(method, number=1, repeat=10)))
    for t in times:
        ax[i].plot(sizes, t)
    ax[i].legend(['loops', 'broadcasting', 'numpy_1d'])
    ax[i].set_title(f'Entries size {M}')
plt.xticks(sizes)
fig.text(0.5, 0.04, 'Array size (NxN)', ha='center')
fig.text(0.04, 0.5, 'Time (s)', va='center', rotation='vertical')
plt.show()
It's more convenient to work in the 1D case. You need to sort your arr0, then find the starting index of every group and use np.maximum.reduceat.
arr0_1D = np.array([[0,3,0],[1,3,2],[1,2,0]]).ravel()
arr1_1D = np.array([[4,5,6],[6,2,4],[3,7,9]]).ravel()
arg_idx = np.argsort(arr0_1D)
>>> arr0_1D[arg_idx]
array([0, 0, 0, 1, 1, 2, 2, 3, 3])
u, idx = np.unique(arr0_1D[arg_idx], return_index=True)
>>> idx
array([0, 3, 5, 7], dtype=int64)
>>> np.maximum.reduceat(arr1_1D[arg_idx], idx)
array([9, 6, 7, 5], dtype=int32)

Finding local maxima in large 3D Numpy arrays

I'm processing some large volumetric image data that are present in three dimensional numpy arrays. I'll explain my task with two small 1D arrays. I have one image:
img = [5, 6, 70, 80, 3, 4, 80, 90]
and one segmented and labeled version of that image:
labels = [0, 0, 1, 1, 0, 0, 2, 2]
Each number in labels represents an object in img. Both arrays have the same dimensions. So in this example there are two objects in img: [70, 80] (label 1) and [80, 90] (label 2).
What I'm trying to do now is find the location of the maximum value of each object, which in this case would be indices 3 and 7. Currently I loop over all labels, create a version of img which contains only the object corresponding to the current label, and look for the maximum value:
for label in range(1, num_labels + 1):
    imgcp = np.copy(img)
    imgcp[labels != label] = 0
    max_pos = np.argmax(imgcp)
    max_coords = np.unravel_index(max_pos, imgcp.shape)
One problem with this approach is that copying img in every step tends to create memory errors. I feel like memory management should prevent this, but is there a more memory efficient and possibly faster way to do this task?
Here is a method using argpartition.
# small 2d example
>>> data = np.array([[0,1,4,0,0,2,1,0],[0,4,1,3,0,0,0,0]])
>>> segments = np.array([[0,1,1,0,0,2,2,0],[0,1,1,1,0,0,0,0]])
>>>
# discard zeros
>>> nz = np.where(segments)
>>> segc = segments[nz]
>>> dac = data[nz]
# count object sizes
>>> cnts = np.bincount(segc)
>>> bnds = np.cumsum(cnts)
# use counts to partition into objects
>>> idx = segc.argpartition(bnds[1:-1])
>>> dai = dac[idx]
# find maxima per object
>>> mx = np.maximum.reduceat(dai, bnds[:-1])
# find their positions
>>> am, = np.where(dai==mx.repeat(cnts[1:]))
# translate positions back to coordinate space
>>> im = idx[am]
>>> am = *(n[im] for n in nz),
>>>
>>>
# result
# coordinates, note that there are more points than objects because
# the maximum 4 occurs twice in object 1
>>> am
(array([1, 0, 0]), array([1, 2, 5]))
# maxima
>>> data[am]
array([4, 4, 2])
# labels
>>> segments[am]
array([1, 1, 2])
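For comparison, an alternative not used in this answer is scipy.ndimage, which ships a labelled maximum-position routine; a short sketch on the same example data (for ties it reports a single position per label):
import numpy as np
from scipy import ndimage

data = np.array([[0, 1, 4, 0, 0, 2, 1, 0],
                 [0, 4, 1, 3, 0, 0, 0, 0]])
segments = np.array([[0, 1, 1, 0, 0, 2, 2, 0],
                     [0, 1, 1, 1, 0, 0, 0, 0]])

labels = np.unique(segments[segments > 0])  # [1, 2]
# One (row, col) coordinate per label, taken at that label's maximum value
positions = ndimage.maximum_position(data, labels=segments, index=labels)
print(positions)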

Adding a unique value filter to strided moving windows in Python

I already found two solutions for strided moving windows which can compute mean, max, min, variance, etc. Now, I would like to add a count-of-unique-values function by axis. By axis, I mean computing all 2D windows in a single pass.
len(numpy.unique(array)) can do it, but a lot of iterations would be needed to compute all the windows. I may work with images as big as 2000 x 2000, so iteration is not a good option. It's all about performance and memory effectiveness.
Here are the two solutions for the strided moving windows:
The first is taken directly from Erik Rigtorp's post at http://www.mail-archive.com/numpy-discussion#scipy.org/msg29450.html
import numpy as np

def rolling_window_lastaxis(a, window):
    if window < 1:
        raise ValueError("`window` must be at least 1.")
    if window > a.shape[-1]:
        raise ValueError("`window` is too long.")
    shape = a.shape[:-1] + (a.shape[-1] - window + 1, window)
    strides = a.strides + (a.strides[-1],)
    return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)

def rolling_window(a, window):
    if not hasattr(window, '__iter__'):
        return rolling_window_lastaxis(a, window)
    for i, win in enumerate(window):
        if win > 1:
            a = a.swapaxes(i, -1)
            a = rolling_window_lastaxis(a, win)
            a = a.swapaxes(-2, i)
    return a

filtsize = (3, 3)
a = np.zeros((10, 10), dtype=float)
a[5:7, 5] = 1
b = rolling_window(a, filtsize)
blurred = b.mean(axis=-1).mean(axis=-1)
The second is from Alex Rogozhnikov at http://gozhnikov.github.io/2015/09/30/NumpyTipsAndTricks2.html.
def compute_window_mean_and_var_strided(image, window_w, window_h):
    w, h = image.shape
    strided_image = np.lib.stride_tricks.as_strided(
        image,
        shape=[w - window_w + 1, h - window_h + 1, window_w, window_h],
        strides=image.strides + image.strides)
    # important: trying to reshape the strided image would create a complete 4-dimensional copy
    means = strided_image.mean(axis=(2, 3))
    mean_squares = (strided_image ** 2).mean(axis=(2, 3))
    maximums = strided_image.max(axis=(2, 3))
    variations = mean_squares - means ** 2
    return means, maximums, variations

image = np.random.random([500, 500])
compute_window_mean_and_var_strided(image, 20, 20)
Is there a way to add/implement a count-of-unique-values function in one or both solutions?
Clarification: Basically, I need a Unique Value filter for a 2D array, just like numpy.ndarray.mean.
Thank you,
Alex
Here's one approach with scikit-image's view_as_windows for efficient sliding window extraction.
Steps involved :
Get sliding windows.
Reshape into 2D array. Note that this would make a copy and thus we would lose the efficiency of views, but keep it vectorized.
Sort along the axis of merged block axes.
Take the difference along that axis and count the number of changes, which, when incremented by 1, gives the count of unique values in each of those sliding windows and hence the final expected result.
The implementation would be like so -
from skimage.util import view_as_windows as viewW

def sliding_uniq_count(a, BSZ):
    out_shp = np.asarray(a.shape) - BSZ + 1
    a_slid4D = viewW(a, BSZ)
    a_slid2D = np.sort(a_slid4D.reshape(-1, np.prod(BSZ)), axis=1)
    return ((a_slid2D[:, 1:] != a_slid2D[:, :-1]).sum(1) + 1).reshape(out_shp)
Sample run -
In [233]: a = np.random.randint(0,10,(6,7))

In [234]: a
Out[234]:
array([[6, 0, 5, 7, 0, 8, 5],
       [3, 0, 7, 1, 5, 4, 8],
       [5, 0, 5, 1, 7, 2, 3],
       [5, 1, 3, 3, 7, 4, 9],
       [9, 0, 7, 4, 9, 1, 1],
       [7, 0, 4, 1, 6, 3, 4]])

In [235]: sliding_uniq_count(a, [3,3])
Out[235]:
array([[5, 4, 4, 7, 7],
       [5, 5, 4, 6, 7],
       [6, 6, 6, 6, 6],
       [7, 5, 6, 6, 6]])
Hybrid approach
To make it work with very large arrays and still fit everything into memory, we might have to keep one loop that iterates along each row of the input data, like so -
def sliding_uniq_count_oneloop(a, BSZ):
    S = np.prod(BSZ)
    out_shp = np.asarray(a.shape) - BSZ + 1
    a_slid4D = viewW(a, BSZ)
    out = np.empty(out_shp, dtype=int)
    for i in range(a_slid4D.shape[0]):
        a_slid2D_i = np.sort(a_slid4D[i].reshape(-1, S), -1)
        out[i] = (a_slid2D_i[:, 1:] != a_slid2D_i[:, :-1]).sum(-1) + 1
    return out
Hybrid approach - Version II
Another version of the hybrid one, with explicit usage of np.lib.stride_tricks.as_strided -
def sliding_uniq_count_oneloop(a, BSZ):
    S = np.prod(BSZ)
    out_shp = np.asarray(a.shape) - BSZ + 1
    strd = np.lib.stride_tricks.as_strided
    m, n = a.strides
    N = out_shp[1]
    out = np.empty(out_shp, dtype=int)
    for i in range(out_shp[0]):
        a_slid3D = strd(a[i], shape=((N,) + tuple(BSZ)), strides=(n, m, n))
        a_slid2D_i = np.sort(a_slid3D.reshape(-1, S), -1)
        out[i] = (a_slid2D_i[:, 1:] != a_slid2D_i[:, :-1]).sum(-1) + 1
    return out
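A quick sanity check (illustrative only, reusing the functions defined above) that the looped version agrees with the fully vectorized one:
a = np.random.randint(0, 10, (6, 7))
assert np.array_equal(sliding_uniq_count(a, [3, 3]),
                      sliding_uniq_count_oneloop(a, [3, 3]))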
np.mean operates on a given axis without making any copies. Looking at just the shape of the as_strided array it looks much bigger than the original array. But because each 'window' is a view, it doesn't take up any additional space. Reduction operators like mean work fine with that kind of view.
But note that your second example warns about reshape. That creates a copy; it replicates the values in all of those windows.
unique starts with
ar = np.asanyarray(ar).flatten()
so right off the bat it is making a flattened copy. It's a copy, and 1D. Then it sorts the elements, looks for duplicates, etc.
There are ways of finding unique rows, but they require converting the rows into large structured array elements, in effect turning a 2D array into a 1D array that unique can work with.
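As an illustration of that structured-array trick (not from this answer), here is a minimal sketch that views each row as one opaque element so unique can compare whole rows; on newer NumPy versions np.unique(a, axis=0) does the same thing directly:
import numpy as np

a = np.array([[1, 2], [3, 4], [1, 2]])

# View each row as a single void element so np.unique compares whole rows
row_view = np.ascontiguousarray(a).view(
    np.dtype((np.void, a.dtype.itemsize * a.shape[1])))
_, idx = np.unique(row_view, return_index=True)
unique_rows = a[np.sort(idx)]

print(unique_rows)           # rows [1 2] and [3 4]
print(np.unique(a, axis=0))  # same rows on NumPy >= 1.13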
