I have a 100 by 100 2D NumPy array, and I also have the indexes of that array. Is there any way I can extract the "unique indexes" along each side of the array (North_Bound, East_Bound, West_Bound, South_Bound)? Below is my code attempt, but something is wrong: each side's index list should have size 99, yet it doesn't, and on my actual (much bigger) data it sometimes generates erroneous indexes! Is there a more reliable way to do this that does not generate wrong results?
import numpy as np
my_array = np.random.rand(100, 100)
indexes = np.argwhere(my_array[:, :] == my_array[:, :])
indexes = list(indexes)
NBound_indexes = np.argwhere(my_array[:, :] == my_array[0, :])
NBound_indexes = list(NBound_indexes)
SBound_indexes = np.argwhere(my_array[:, :] == my_array[99, :])
SBound_indexes = list(SBound_indexes)
WBound_indexes = []
for element in range(0, 100):
    #print(element)
    WB_index = np.argwhere(my_array[:, :] == my_array[element, 0])
    WB_index = WB_index[0]
    WBound_indexes.append(WB_index)
EBound_indexes = []
for element in range(0, 100):
    #print(element)
    EB_index = np.argwhere(my_array[:, :] == my_array[element, 99])
    EB_index = EB_index[0]
    EBound_indexes.append(EB_index)
outer_belt_ind = NBound_indexes
NBound_indexes.extend(EBound_indexes) #, SBound_index, WBound_index)
NBound_indexes.extend(SBound_indexes)
NBound_indexes.extend(WBound_indexes)
outer_bound = []
for i in NBound_indexes:
    i_list = list(i)
    outer_bound.append(i_list)
outer_bound = [outer_bound[i] for i in range(len(outer_bound)) if i == outer_bound.index(outer_bound[i]) ]
This wouldn't extract them directly from my_array, but you could use list comprehensions to generate the same results as your code above. Matching by value (==) is the source of the wrong results: any repeated value in the array makes np.argwhere return extra indexes, so it is more robust to build the boundary indexes from the shape alone:
y, x = my_array.shape
WBound_indexes = [np.array((i, 0)) for i in range(1, y-1)]
EBound_indexes = [np.array((i, x-1)) for i in range(1, y-1)]
NBound_indexes = [np.array((0, i)) for i in range(x)]
SBound_indexes = [np.array((y-1, i)) for i in range(x)]
outer_bound = NBound_indexes + WBound_indexes + SBound_indexes + EBound_indexes
For example, WBound_indexes would look like:
[array([1, 0]), array([2, 0]), ..., array([97, 0]), array([98, 0])]
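If you also need the values rather than just the indexes, the list of pairs can be turned into a fancy index in one step, for example:
rows, cols = np.array(outer_bound).T
boundary_values = my_array[rows, cols]  # 396 boundary values for a 100x100 array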
I want to perform element-wise multiplication between two tensors, where most of the elements are zero.
For two example tensors:
test1 = np.zeros((2, 3, 5, 6))
test1[0, 0, :, 2] = 4
test1[0, 1, [2, 4], 1] = 7
test1[0, 2, 2, :] = 2
test1[1, 0, 4, 1:3] = 5
test1[1, :, 0, 1] = 3
and,
test2 = np.zeros((5, 6, 4, 7))
test2[2, 2, 2, 4] = 4
test2[0, 1, :, 1] = 3
test2[4, 3, 2, :] = 6
test2[1, 0, 3, 1:3] = 1
test2[3, :, 0, 1] = 2
the calculation I need is:
result = test1[..., None, None] * test2[None, None, ...]
In the actual use case I am coding for, the tensors can have more dimensions and much longer lengths in some of the dimensions, so while the multiplication is reasonably quick, I would like to utilise the fact that most of the elements are zero.
My first thought was to make a sparse representation of each tensor.
coords1 = np.nonzero(test1)
shape1 = test1.shape
test1_squished = test1[coords1]
coords1 = np.array(coords1)
coords2 = np.nonzero(test2)
shape2 = test2.shape
test2_squished = test2[coords2]
coords2 = np.array(coords2)
Here there is enough information to perform the multiplication, by comparing the coordinates along the equal axes and multiplying if they are the same.
I have a function for adding a new axis,
def new_axis(coords, shape, axis):
    new_coords = np.zeros((len(coords)+1, len(coords[0])))
    new_index = np.delete(np.arange(0, len(coords)+1), axis)
    new_coords[new_index] = coords
    coords = new_coords
    new_shape = np.zeros(len(new_coords), dtype=int)
    new_shape[new_index] = shape
    new_shape[axis] = 1
    new_shape = np.array(new_shape)
    return coords, new_shape
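For example, to mirror test1[..., None, None] from the dense calculation, my understanding is that this would be called twice (a usage sketch, not from the original post):
coords1, shape1 = new_axis(coords1, shape1, 4)
coords1, shape1 = new_axis(coords1, shape1, 5)
# shape1 is now (2, 3, 5, 6, 1, 1)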
and a function for performing the multiplication,
def multiply(coords1, shape1, array1, coords2, shape2, array2): # all inputs should be numpy arrays
    if np.array_equal(shape1, shape2):
        index1 = np.nonzero((coords1.T[:, None, :] == coords2.T).all(-1).any(-1))[0]
        index2 = np.nonzero((coords2.T[:, None, :] == coords1.T).all(-1).any(-1))[0]
        array = array1[index1] * array2[index2]
        coords = (coords1.T[index1]).T
        shape = shape1
    else:
        if len(shape1) == len(shape2):
            equal_index = np.nonzero((shape1 == shape2))[0]
            not_equal_index = np.nonzero(~(shape1 == shape2))[0]
            if np.logical_or((shape1[not_equal_index] == 1), (shape2[not_equal_index] == 1)).all():
                # if where not equal, one of them = 1 -> can broadcast
                # compare dimensions with same length, if equal then multiply corresponding elements
                multiply_index1 = np.nonzero(
                    (coords1[equal_index].T[:, None, :] == coords2[equal_index].T).all(-1).any(-1)
                )[0]
                # would like a vectorised version of the loop below
                array = []
                coords = []
                for index in multiply_index1:
                    multiply_index2 = np.nonzero(((coords2[equal_index]).T == (coords1[equal_index]).T[index]).all(-1))[0]
                    array.append(array1[index] * array2[multiply_index2])
                    temp = np.zeros((6, len(multiply_index2)))  # 6 = number of dimensions of the broadcast result in this example
                    temp[not_equal_index] = ((coords1[not_equal_index].T[index]).T + (coords2[not_equal_index].T[multiply_index2])).T
                    if len(multiply_index2) == 1:
                        temp[equal_index] = coords1[equal_index].T[index].T[:, None]
                    else:
                        temp[equal_index] = np.repeat(coords1[equal_index].T[index].T[:, None], len(multiply_index2), axis=-1)
                    coords.append(temp)
                array = np.concatenate(array)
                coords = np.concatenate(coords, axis=-1)
                shape = shape1
                shape[np.where(shape == 1)] = shape2[np.where(shape == 1)]
            else:
                print("error")
        else:
            print("error")
    return array, coords, shape
However the multiply function is very inefficient and so I lose any gain of going to the sparse representation.
Is there an elegant vectorised approach to the multiply function? Or is there a better solution than this sparse tensor idea?
Thanks in advance.
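One direction that might help (my suggestion, untested on your full use case): the third-party pydata sparse package stores COO tensors and supports NumPy-style broadcasting in element-wise operations, so the zeros never have to be materialised:
import numpy as np
import sparse  # pip install sparse

s1 = sparse.COO.from_numpy(test1).reshape(test1.shape + (1, 1))
s2 = sparse.COO.from_numpy(test2).reshape((1, 1) + test2.shape)
result_sparse = s1 * s2  # stays sparse throughout
# sanity check against the dense computation
assert np.allclose(result_sparse.todense(),
                   test1[..., None, None] * test2[None, None, ...])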
I have two 1D NumPy arrays x = [x[0], x[1], ..., x[n-1]] and y = [y[0], y[1], ..., y[n-1]]. The array x is known, and I need to determine the values for array y. For every index in np.arange(n), the value of y[index] depends on x[index] and on x[index + 1: ]. My code is this:
import numpy as np
n = 5
q = 0.5
x = np.array([1, 2, 0, 1, 0])
y = np.empty(n, dtype=int)
for index in np.arange(n):
    if (x[index] != 0) and (np.any(x[index + 1:] == 0)):
        y[index] = np.random.choice([0, 1], 1, p=(1-q, q))
    else:
        y[index] = 0
print(y)
The problem with the for loop is that the size of n in my experiment can become very large. Is there any vectorized way to do this?
Randomly generate the array y with the full shape.
Generate a bool array indicating where to set zeros.
Use np.where to set zeros.
Try this,
import numpy as np
n = 5
q = 0.5
x = np.array([1, 2, 0, 1, 0])
y = np.random.choice([0, 1], n, p=(1-q, q))
condition = (x != 0) & (x[::-1].cumprod() == 0)[::-1] # equivalent to the posted one
y = np.where(condition, y, 0)
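The reversed cumulative product is zero at every position that has a zero at or after it; combined with x != 0, that matches the loop's "any zero strictly after index" test. A quick way to convince yourself (a check I added, not part of the original answer):
cond_loop = np.array([(x[i] != 0) and np.any(x[i+1:] == 0) for i in range(n)])
cond_vec = (x != 0) & (x[::-1].cumprod() == 0)[::-1]
assert np.array_equal(cond_loop, cond_vec)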
I want to get the bin index of each element of a 1D array called value within another 1D array called bin, and calculate the minimum index in each bin.
Here's my current method:
import numpy as np
bin = np.array([1, 2, 3, 4])
value = np.array([1.2, 1.3, 2.1, 3.1])
extend_bin = np.append(bin[1:], np.iinfo('int').max)
mask = (bin[:, None] <= value[None, :]) & (value[None, :] < extend_bin[:, None])
res = np.argmax(mask, axis=-1)[:-1]
However, when the two 1D arrays are much longer, I get a memory error because of the large 2D mask array:
import random
import numpy as np
length = int(4e6)
a = np.random.rand(length)
order = np.argsort(a)
bin = a[order]
value = np.random.rand(length)
random.shuffle(value)
extend_bin = np.append(bin[1:], np.iinfo('int').max)
mask = (a[:, None] <= value[None, :]) & (value[None, :] < extend_bin[:, None])
res = np.argmax(mask, axis=-1)[:-1]
Memory error:
mask = (a[:, None] <= value[None, :]) & (value[None, :] < extend_bin[:, None])
numpy.core._exceptions.MemoryError: Unable to allocate 14.6 TiB for an array with shape (4000000, 4000000) and data type bool
Is there a simpler method that can deal with this problem without running out of memory?
Similar to the pd.cut proposed in the comments, one could use numpy.digitize:
import numpy as np
bin = np.array([1, 2, 3, 4])
value = np.array([1.2, 1.3, 2.1, 3.1])
extend_bin = np.append(bin[1:], np.iinfo('int').max)
binned_values = np.digitize(value, extend_bin)
# The above returns [0, 0, 1, 2]
_, res = np.unique(binned_values, return_index=True)
# res equals [0, 2, 3], i.e. the first index where each value happens
This also works in the extended case:
from time import time
length = int(4e6)
a = np.random.rand(length)
bin = np.concatenate([[0], np.sort(a), [1]])
value = np.random.rand(length)
tstart = time()
binned_values = np.digitize(value, bin)
print(time() - tstart)
# On my machine, the above takes about 4 seconds
tstart = time()
_, res = np.unique(binned_values, return_index=True)
print(time() - tstart)
# And this takes less than one second
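One caveat worth noting (my addition, not in the original answer): np.unique only reports bins that actually contain values. If you need an entry for every bin, scatter the first-occurrence indexes into a full-length array, e.g. with -1 marking empty bins:
first_index = np.full(len(extend_bin) + 1, -1)  # np.digitize can return 0..len(extend_bin)
uniq, idx = np.unique(binned_values, return_index=True)
first_index[uniq] = idx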
I have two sorted NumPy arrays similar to these ones:
x = np.array([1, 2, 8, 11, 15])
y = np.array([1, 8, 15, 17, 20, 21])
Elements never repeat in the same array. I want to figure out a Pythonic way of producing a list of the index pairs at which the same element exists in both arrays.
For instance, 1 exists in x and y at index 0. Element 2 in x doesn't exist in y, so I don't care about that item. However, 8 does exist in both arrays: at index 2 in x but index 1 in y. Similarly, 15 exists in both, at index 4 in x but index 2 in y. So the outcome of my function would in this case be [[0, 0], [2, 1], [4, 2]].
So far what I'm doing is:
def get_indexes(x, y):
    indexes = []
    for i in range(len(x)):
        # Find index where item x[i] is in y:
        j = np.where(x[i] == y)[0]
        # If it exists, save it:
        if len(j) != 0:
            indexes.append([i, j[0]])
    return indexes
But the problem is that arrays x and y are very large (millions of items), so it takes quite a while. Is there a better pythonic way of doing this?
Without Python loops
Code
def get_indexes_darrylg(x, y):
    ' darrylg answer '
    # Use intersect1d to find common elements between the two arrays
    overlap = np.intersect1d(x, y)
    # Indexes of common elements in each array
    loc1 = np.searchsorted(x, overlap)
    loc2 = np.searchsorted(y, overlap)
    # Result zips the two 1d numpy arrays into a 2d array
    return np.dstack((loc1, loc2))[0]
Usage
x = np.array([1, 2, 8, 11, 15])
y = np.array([1, 8, 15, 17, 20, 21])
result = get_indexes_darrylg(x, y)
# result: array([[0, 0],
#                [2, 1],
#                [4, 2]], dtype=int64)
Timing Posted Solutions
Results show that the darrylg code has the fastest run time.
Code Adjustment
Each posted solution wrapped as a function.
Slight mod so that each solution outputs a NumPy array.
Curves are named after the posters.
Code
import numpy as np
import perfplot
def create_arr(n):
    ' Creates pair of 1d numpy arrays with half the elements equal '
    max_val = 100000 # One more than largest value in output arrays
    arr1 = np.random.randint(0, max_val, (n,))
    arr2 = arr1.copy()
    # Change half the elements in arr2
    all_indexes = np.arange(0, n, dtype=int)
    indexes = np.random.choice(all_indexes, size=n//2, replace=False) # locations to make changes
    np.put(arr2, indexes, np.random.randint(0, max_val, (n//2,))) # assign new random values at change locations
    arr1 = np.sort(arr1)
    arr2 = np.sort(arr2)
    return (arr1, arr2)
def get_indexes_lllrnr101(x, y):
    ' lllrnr101 answer '
    ans = []
    i = 0
    j = 0
    while (i < len(x) and j < len(y)):
        if x[i] == y[j]:
            ans.append([i, j])
            i += 1
            j += 1
        elif (x[i] < y[j]):
            i += 1
        else:
            j += 1
    return np.array(ans)
def get_indexes_joostblack(x, y):
    ' joostblack answer '
    indexes = []
    for idx, val in enumerate(x):
        idy = np.searchsorted(y, val)
        try:
            if y[idy] == val:
                indexes.append([idx, idy])
        except IndexError:
            continue # ignore index errors
    return np.array(indexes)
def get_indexes_mustafa(x, y):
    ' mustafa answer '
    indices_in_x = np.flatnonzero(np.isin(x, y)) # array([0, 2, 4])
    indices_in_y = np.flatnonzero(np.isin(y, x[indices_in_x])) # array([0, 1, 2])
    return np.array(list(zip(indices_in_x, indices_in_y)))
def get_indexes_darrylg(x, y):
    ' darrylg answer '
    # Use intersect1d to find common elements between the two arrays
    overlap = np.intersect1d(x, y)
    # Indexes of common elements in each array
    loc1 = np.searchsorted(x, overlap)
    loc2 = np.searchsorted(y, overlap)
    # Result zips the two 1d numpy arrays into a 2d array
    return np.dstack((loc1, loc2))[0]
def get_indexes_akopcz(x, y):
    ' akopcz answer '
    return np.array([
        [i, j]
        for i, nr in enumerate(x)
        for j in np.where(nr == y)[0]
    ])
perfplot.show(
    setup=create_arr, # tuple of two 1D random arrays
    kernels=[
        lambda a: get_indexes_lllrnr101(*a),
        lambda a: get_indexes_joostblack(*a),
        lambda a: get_indexes_mustafa(*a),
        lambda a: get_indexes_darrylg(*a),
        lambda a: get_indexes_akopcz(*a),
    ],
    labels=["lllrnr101", "joostblack", "mustafa", "darrylg", "akopcz"],
    n_range=[2 ** k for k in range(5, 21)],
    xlabel="Array Length",
    # More optional arguments with their default values:
    # logx="auto", # set to True or False to force scaling
    # logy="auto",
    equality_check=None, # set to np.allclose to enable a "correctness" assertion
    # show_progress=True,
    # target_time_per_measurement=1.0,
    # time_unit="s", # set to one of ("auto", "s", "ms", "us", or "ns") to force plot units
    # relative_to=1, # plot the timings relative to one of the measurements
    # flops=lambda n: 3*n, # FLOPS plots
)
What you are doing is O(n log n), which is decent enough.
If you want, you can do it in O(n) by iterating over both arrays with two pointers: since they are sorted, advance the pointer of the array whose current element is smaller.
See below:
x = [1, 2, 8, 11, 15]
y = [1, 8, 15, 17, 20, 21]
def get_indexes(x, y):
    ans = []
    i = 0
    j = 0
    while (i < len(x) and j < len(y)):
        if x[i] == y[j]:
            ans.append([i, j])
            i += 1
            j += 1
        elif (x[i] < y[j]):
            i += 1
        else:
            j += 1
    return ans
print(get_indexes(x,y))
which gives me:
[[0, 0], [2, 1], [4, 2]]
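If the pure-Python loop becomes the bottleneck at millions of elements, the same two-pointer scan can be JIT-compiled; here is a sketch of what that might look like (assuming numba is available; this is an addition, not part of the original answer):
import numpy as np
import numba

@numba.njit
def get_indexes_nb(x, y):
    # preallocate for the worst case, then trim
    out = np.empty((min(len(x), len(y)), 2), dtype=np.int64)
    k = 0
    i = 0
    j = 0
    while i < len(x) and j < len(y):
        if x[i] == y[j]:
            out[k, 0] = i
            out[k, 1] = j
            k += 1
            i += 1
            j += 1
        elif x[i] < y[j]:
            i += 1
        else:
            j += 1
    return out[:k]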
This function will search for all the occurrences of x[i] in the y array; since duplicates are not allowed in y, it will find x[i] at most once.
def get_indexes(x, y):
    return [
        [i, j]
        for i, nr in enumerate(x)
        for j in np.where(nr == y)[0]
    ]
You can use numpy.searchsorted:
def get_indexes(x, y):
    indexes = []
    for idx, val in enumerate(x):
        idy = np.searchsorted(y, val)
        # guard against idy == len(y) when val is larger than every element of y
        if idy < len(y) and y[idy] == val:
            indexes.append([idx, idy])
    return indexes
One solution is to first look from x's side to see which of its values are included in y, getting their indices through np.isin and np.flatnonzero, and then use the same procedure from the other side; but instead of passing x entirely, we pass only the (already found) intersected elements to save time:
indices_in_x = np.flatnonzero(np.isin(x, y)) # array([0, 2, 4])
indices_in_y = np.flatnonzero(np.isin(y, x[indices_in_x])) # array([0, 1, 2])
Now you can zip them to get the result:
result = list(zip(indices_in_x, indices_in_y)) # [(0, 0), (2, 1), (4, 2)]
Let's say I have a 2D array of (N, N) shape:
import numpy as np
my_array = np.random.random((N, N))
Now I want to do some computations only on some "cells" of this array, for instance the ones inside the central part of the array. To avoid doing computations on cells I'm not interested in, what I usually do here is create a Boolean mask, in this spirit:
my_mask = np.zeros_like(my_array, bool)
my_mask[40:61,40:61] = True
my_array[my_mask] = some_twisted_computations(my_array[my_mask])
But what if some_twisted_computations() involves values of the neighboring cells when they are inside the mask? Performance-wise, would it be a good idea to create an "adjacency array" of shape (number of masked cells, 4), storing the index of the 4-connected neighbor cells in the flat my_array[mask] array that I will use in some_twisted_computations()? If yes, what are the efficient options for computing such an adjacency array? Should I switch to a lower-level language/other data structures?
My real-world arrays' shapes are around (1000, 1000, 1000), the mask concerns only a small subset (~100000) of these values and has rather complex geometry. I hope my questions make sense...
EDIT: the very dirty and slow solution I've worked out:
wall = mask
i = 0
top_neighbors = []
down_neighbors = []
left_neighbors = []
right_neighbors = []
indices = []
for index, val in np.ndenumerate(wall):
    if not val:
        continue
    indices += [index]
    if wall[index[0] + 1, index[1]]:
        down_neighbors += [(index[0] + 1, index[1])]
    else:
        down_neighbors += [i]
    if wall[index[0] - 1, index[1]]:
        top_neighbors += [(index[0] - 1, index[1])]
    else:
        top_neighbors += [i]
    if wall[index[0], index[1] - 1]:
        left_neighbors += [(index[0], index[1] - 1)]
    else:
        left_neighbors += [i]
    if wall[index[0], index[1] + 1]:
        right_neighbors += [(index[0], index[1] + 1)]
    else:
        right_neighbors += [i]
    i += 1
top_neighbors = [i if type(i) is int else indices.index(i) for i in top_neighbors]
down_neighbors = [i if type(i) is int else indices.index(i) for i in down_neighbors]
left_neighbors = [i if type(i) is int else indices.index(i) for i in left_neighbors]
right_neighbors = [i if type(i) is int else indices.index(i) for i in right_neighbors]
The best answer will probably depend on the nature of the computations you want to do. For example, if they can be expressed as summations over neighboring pixels, then something like np.convolve or scipy.signal.fftconvolve can be a really nice solution.
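For instance, a 4-neighbour sum over the whole array is a single convolution (a quick sketch, assuming SciPy is available; this is not tied to your mask):
from scipy.signal import convolve2d

kernel = np.array([[0, 1, 0],
                   [1, 0, 1],
                   [0, 1, 0]]) # 4-connected neighbourhood
neighbor_sum = convolve2d(my_array, kernel, mode='same')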
For your specific question of efficiently generating arrays of neighbor indices, you might try something like this:
x = np.random.rand(100, 100)
mask = x > 0.9
i, j = np.where(mask)
i_neighbors = i[:, np.newaxis] + [0, 0, -1, 1]
j_neighbors = j[:, np.newaxis] + [-1, 1, 0, 0]
# need to do something with the edge cases
# the best choice will depend on your application
# here we'll change out-of-bounds neighbors to the
# central point itself.
i_neighbors = np.clip(i_neighbors, 0, 99)
j_neighbors = np.clip(j_neighbors, 0, 99)
# compute some vectorized result over the neighbors
# as a concrete example, here we'll do a standard deviation
result = x[i_neighbors, j_neighbors].std(axis=1)
The result is an array of values corresponding to the masked region, containing the standard deviation of neighboring values.
Hopefully that approach will work for whatever specific problem you have in mind!
Edit: given the edited question above, here's how my response can be adapted to generate arrays of indices in a vectorized manner:
x = np.random.rand(100, 100)
mask = x > -0.9
i, j = np.where(mask)
i_neighbors = i[:, np.newaxis] + [0, 0, -1, 1]
j_neighbors = j[:, np.newaxis] + [-1, 1, 0, 0]
i_neighbors = np.clip(i_neighbors, 0, 99)
j_neighbors = np.clip(j_neighbors, 0, 99)
indices = np.zeros(x.shape, dtype=int)
indices[mask] = np.arange(len(i))
neighbor_in_mask = mask[i_neighbors, j_neighbors]
neighbors = np.where(neighbor_in_mask,
                     indices[i_neighbors, j_neighbors],
                     np.arange(len(i))[:, None])
left_indices, right_indices, top_indices, bottom_indices = neighbors.T
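The four index arrays index into the flattened masked data, so neighbour lookups become plain fancy indexing; for example (a usage sketch, not part of the original answer):
values = x[mask] # masked cells, flattened in the same order as the indices
left_values = values[left_indices] # each cell's left neighbour, or the cell itself when the neighbour is outside the mask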