Vectorizing operations using 3D numpy arrays - Python

I'm working on an algorithm to match two kinds of objects (let's say balls and buckets). Each object is modeled as a 4D numpy array, and each kind is grouped within another array. My method is based on calculating all possible differences between each pair (ball, bucket) and applying a similarity function to that difference.
I'm trying to avoid for loops since speed is really relevant for what I'm doing, so I'm creating those differences by reshaping one of the initial arrays, broadcasting numpy operations, and creating a 3D numpy array (diff_map). I'm not finding any good tutorial about this, so I'd like to know if there is a more "proper" way to do it. I'd also like to see any good references about this kind of operation (multidimensional reshape and broadcast) if possible.
My code:
import numpy as np
balls = np.random.rand(3,4)
buckets = np.random.rand(6,4)
buckets = buckets.reshape(len(buckets), 1, 4)
buckets
array([[[ 0.38382622,  0.27114067,  0.63856317,  0.51360638]],
       [[ 0.08709269,  0.21659216,  0.31148519,  0.99143705]],
       [[ 0.03659845,  0.78305241,  0.87699971,  0.78447545]],
       [[ 0.11652137,  0.49490129,  0.76382286,  0.90313785]],
       [[ 0.62681395,  0.10125169,  0.61131263,  0.15643676]],
       [[ 0.97072113,  0.56535597,  0.39471204,  0.24798229]]])
diff_map = buckets - balls  # (6, 1, 4) - (3, 4) broadcasts to (6, 3, 4), matching the loop below
diff_map.shape
(6, 3, 4)
For Loop
As requested, this is the for loop I'm trying to avoid:
diff_map_for = np.zeros((len(buckets), len(balls), 4))
for i in range(len(buckets)):
    for j in range(len(balls)):
        diff_map_for[i, j] = buckets[i] - balls[j]
Just to be sure, let's compare the two results:
np.all(diff_map == diff_map_for)
True

Does this work for you?
import numpy as np
balls = np.random.rand(3,4)
buckets = np.random.rand(6,4)
diff_map = buckets[:, np.newaxis, :] - balls[np.newaxis, :, :]
print(diff_map.shape)
# output: (6, 3, 4)
# ... compared to for loop
diff_map_for = np.zeros((len(buckets), len(balls), 4))
for i in range(len(buckets)):
    for j in range(len(balls)):
        diff_map_for[i, j] = buckets[i] - balls[j]
print(np.sum(diff_map - diff_map_for))
# output: 0.0
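As a side note, None is an alias for np.newaxis, and the trailing axes of balls already line up with buckets[:, None, :], so the explicit newaxis on balls is optional. A minimal sketch (using np.allclose, which is a safer float comparison than summing the difference):
# balls broadcasts against buckets[:, None] because trailing axes align
diff_map_short = buckets[:, None] - balls
print(np.allclose(diff_map_short, diff_map))
# output: True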

Related

Efficient way of constructing a 3D stack of block diagonal matrix in numpy/scipy from a 3D stack of matrices

I am trying to construct a stack of block diagonal matrices of shape nxMxM in numpy/scipy from k given stacks of matrices (each nxmxm), where M = k*m. At the moment, I'm using the scipy.linalg.block_diag function in a for loop to perform this task:
import numpy as np
import scipy.linalg as linalg
a = np.ones((5,2,2))
b = np.ones((5,2,2))
c = np.ones((5,2,2))
result = np.zeros((5,6,6))
for k in range(0, 5):
    result[k, :, :] = linalg.block_diag(a[k, :, :], b[k, :, :], c[k, :, :])
However, since n in my case gets quite large, I'm looking for a more efficient way than a for loop. I found 3D numpy array into block diagonal matrix, but it does not really solve my problem. The only approach I could come up with was transforming each stack of matrices into block diagonals
import numpy as np
import scipy.linalg as linalg
a = np.ones((5,2,2))
b = np.ones((5,2,2))
c = np.ones((5,2,2))
a = linalg.block_diag(*a)
b = linalg.block_diag(*b)
c = linalg.block_diag(*c)
and constructing the resulting matrix from it by reshaping
result = linalg.block_diag(a,b,c)
result = result.reshape((5,6,6))
which fails to reshape (the (30, 30) block diagonal has 900 elements, not the 180 needed for (5, 6, 6)). I don't even know if this approach would be more efficient, so I'm asking whether I'm on the right track, whether somebody knows a better way of constructing this block diagonal 3D matrix, or whether I have to stick with the for loop solution.
Edit:
Since I'm new to this platform, I don't know where to leave this (edit or answer?), but I want to share my final solution: the highlighted solution from panadestein worked very nicely and easily, but I'm now using higher-dimensional arrays, where my matrices reside in the last two dimensions. Additionally, my matrices are no longer all of the same dimension (mostly a mixture of 1x1, 2x2, 3x3), so I adapted V. Ayrat's solution with minor changes:
def nd_block_diag(arrs):
    shapes = np.array([i.shape for i in arrs])
    out = np.zeros(np.append(np.amax(shapes[:, :-2], axis=0),
                             [shapes[:, -2].sum(), shapes[:, -1].sum()]))
    r, c = 0, 0
    for i, (rr, cc) in enumerate(shapes[:, -2:]):
        out[..., r:r + rr, c:c + cc] = arrs[i]
        r += rr
        c += cc
    return out
which also works with array broadcasting, provided the input arrays are shaped properly (i.e. the dimensions to be broadcast are not added automatically). Thanks to panadestein and V. Ayrat for your kind and fast help; I've learned a lot about the possibilities of list comprehensions and array indexing/slicing!
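A quick usage sketch of nd_block_diag (the block sizes below are made-up example values, assuming all inputs share the same leading batch dimension):
# hypothetical batch of 5 stacks mixing 1x1, 2x2 and 3x3 blocks
blocks = [np.ones((5, 1, 1)), 2 * np.ones((5, 2, 2)), 3 * np.ones((5, 3, 3))]
stacked = nd_block_diag(blocks)
print(stacked.shape)
# (5, 6, 6)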
block_diag also just iterates through the shapes. Almost all the time is spent copying data, so you can do it whichever way you want, for example with a little change to the source code of block_diag:
arrs = a, b, c
shapes = np.array([i.shape for i in arrs])
out = np.zeros([shapes[0, 0], shapes[:, 1].sum(), shapes[:, 2].sum()])
r, c = 0, 0
for i, (_, rr, cc) in enumerate(shapes):
    out[:, r:r + rr, c:c + cc] = arrs[i]
    r += rr
    c += cc
print(np.allclose(result, out))
# True
I don't think that you can escape all possible loops to solve your problem. One way that I find convenient and perhaps more efficient than your for loop is to use a list comprehension:
import numpy as np
from scipy.linalg import block_diag
# Define input matrices
a = np.ones((5, 2, 2))
b = np.ones((5, 2, 2))
c = np.ones((5, 2, 2))
# Generate block diagonal matrices
mats = np.stack([a, b, c], axis=1)  # shape (5, 3, 2, 2); keeps a[k], b[k], c[k] together
result = np.array([block_diag(*bmats) for bmats in mats])
Maybe this can give you some ideas to improve your implementation.
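As a sanity check, here is a sketch comparing the list-comprehension result against the question's loop-based construction (reusing a, b, c and result from above):
# rebuild the reference result with the original loop and compare
expected = np.zeros((5, 6, 6))
for k in range(5):
    expected[k] = block_diag(a[k], b[k], c[k])
print(np.allclose(result, expected))
# True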

Alternative to loop for for boolean / nonzero indexing of numpy array

I need to select only the non-zero 3D portions of a 3D binary array (or, alternatively, the True values of a boolean array). Currently I am able to do so with a series of for loops that use np.any; this works, but it seems awkward and slow, so I am investigating a more direct way to accomplish the task.
I am rather new to numpy, so the approaches I have tried include a) using np.nonzero, which returns indices that I am at a loss to make use of for my purposes, b) boolean array indexing, and c) boolean masks. I can generally understand each of those approaches for simple 2D arrays, but am struggling to understand the differences between them, and I cannot get them to return the right values for a 3D array.
Here is my current function that returns a 3D array with nonzero values:
def real_size(arr3):
    true_0 = []
    true_1 = []
    true_2 = []
    print(f'The input array shape is: {arr3.shape}')
    for zero_ in range(arr3.shape[0]):
        if arr3[zero_].any():
            true_0.append(zero_)
    for one_ in range(arr3.shape[1]):
        if arr3[:, one_, :].any():
            true_1.append(one_)
    for two_ in range(arr3.shape[2]):
        if arr3[:, :, two_].any():
            true_2.append(two_)
    arr4 = arr3[min(true_0):max(true_0) + 1,
                min(true_1):max(true_1) + 1,
                min(true_2):max(true_2) + 1]
    print(f'The nonzero area is: {arr4.shape}')
    return arr4
# Then use it on a small test array:
test_array = np.zeros([2, 3, 4], dtype = int)
test_array[0:2, 0:2, 0:2] = 1
#The function call works and prints out as expected:
non_zero = real_size(test_array)
>> The input array shape is: (2, 3, 4)
>> The nonzero area is: (2, 2, 2)
# So, the array is correct, but likely not the best way to get there:
non_zero
>> array([[[1, 1],
           [1, 1]],
          [[1, 1],
           [1, 1]]])
The code works appropriately, but I am using this on much larger and more complex arrays, and I don't think this is an appropriate approach. Any thoughts on a more direct method would be greatly appreciated. I am also concerned about errors and odd results if the input array has, for example, two separate non-zero 3D areas within it.
To clarify the problem: starting from an original larger array, I need to return one or more 3D portions as one or more 3D arrays. The returned arrays should not include extraneous zeros (or False values) in any exterior plane in three-dimensional space. Just getting the indices of the nonzero values (or vice versa) doesn't by itself solve the problem.
Assuming you want to eliminate all rows, columns, etc. that contain only zeros, you could do the following:
nz = (test_array != 0)
non_zero = test_array[nz.any(axis=(1, 2))][:, nz.any(axis=(0, 2))][:, :, nz.any(axis=(0, 1))]
An alternative solution using np.nonzero:
i = [np.unique(_) for _ in np.nonzero(test_array)]
non_zero = test_array[i[0]][:, i[1]][:, :, i[2]]
This can also be generalized to arbitrary dimensions, but requires a bit more work (only showing the first approach here):
def real_size(arr):
    nz = (arr != 0)
    result = arr
    axes = np.arange(arr.ndim)
    for axis in range(arr.ndim):
        # True for each index along `axis` whose cross-section contains a nonzero
        keep = nz.any(axis=tuple(np.delete(axes, axis)))
        result = result[(slice(None),) * axis + (keep,)]
    return result
non_zero = real_size(test_array)
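Regarding the concern about two separate non-zero 3D areas: one option (a sketch, assuming scipy is acceptable here) is to label the connected non-zero regions with scipy.ndimage and extract a bounding box for each one:
from scipy import ndimage
# label connected non-zero regions, then slice out one bounding box per region
labeled, num_regions = ndimage.label(test_array)
regions = [test_array[sl] for sl in ndimage.find_objects(labeled)]
print(num_regions, [r.shape for r in regions])
# 1 [(2, 2, 2)]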

Applying matrix functions like scipy.linalg.eigh to higher dimensional arrays

I am new to numpy but have been using Python for quite a while as an engineer.
I am writing a program that currently stores stress tensors as 3x3 numpy arrays within another NxM array which represents values through time and through the thickness of a wall, so overall it is an NxMx3x3 numpy array. I want to efficiently calculate the eigenvalues and eigenvectors of each 3x3 array within this larger array. So far I have tried using "fromiter", but this doesn't seem to work because the function returns two arrays. I have also tried apply_along_axis, which doesn't work because it says the inner 3x3 is not a square matrix. I can do it with a list comprehension, but resorting to lists doesn't seem ideal.
Example just calculating eigenvalues using a list comprehension:
import numpy as np
from scipy import linalg
a = np.random.random((2, 2, 3, 3))
f = linalg.eigvalsh
ans = np.asarray([f(x) for x in a.reshape((4, 3, 3))])
ans.shape = (2, 2, 3)
I thought something like this would work but I have played around with it and can't get it working:
np.apply_along_axis(f,0,a)
BTW the 2x2 bit could be up to 5000x100, and this code is repeated ~50x50x200 times, hence the need for efficiency. Any help would be greatly appreciated.
You can use numpy.linalg.eigh. It accepts an array like your example a.
Here's an example. First, create an array of 3x3 symmetric arrays:
In [96]: a = np.random.random((2, 2, 3, 3))
In [97]: a = a + np.transpose(a, axes=(0, 1, 3, 2))
In [98]: a[0, 0]
Out[98]:
array([[0.61145048, 0.85209618, 0.03909677],
       [0.85209618, 1.79309413, 1.61209077],
       [0.03909677, 1.61209077, 1.55432465]])
Compute the eigenvalues and eigenvectors of all the 3x3 arrays:
In [99]: evals, evecs = np.linalg.eigh(a)
In [100]: evals.shape
Out[100]: (2, 2, 3)
In [101]: evecs.shape
Out[101]: (2, 2, 3, 3)
Take a look at the result for a[0, 0]:
In [102]: evals[0, 0]
Out[102]: array([-0.31729364, 0.83148477, 3.44467813])
In [103]: evecs[0, 0]
Out[103]:
array([[-0.55911658,  0.79634401,  0.23070516],
       [ 0.63392772,  0.23128064,  0.73800062],
       [-0.53434473, -0.55887877,  0.63413738]])
Verify that it is the same as computing the eigenvalues and eigenvectors for a[0, 0] separately:
In [104]: np.linalg.eigh(a[0, 0])
Out[104]:
(array([-0.31729364,  0.83148477,  3.44467813]),
 array([[-0.55911658,  0.79634401,  0.23070516],
        [ 0.63392772,  0.23128064,  0.73800062],
        [-0.53434473, -0.55887877,  0.63413738]]))
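If only the eigenvalues are needed (as in the question's eigvalsh example), np.linalg.eigvalsh broadcasts over the leading axes in the same way; a minimal sketch:
# eigvalsh also operates on the last two axes of a stacked array
evals_only = np.linalg.eigvalsh(a)
print(evals_only.shape)
# (2, 2, 3)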

Apply bincount to each row of a 2D numpy array

Is there a way to apply bincount with "axis = 1"? The desired result would be the same as the list comprehension:
import numpy as np
A = np.array([[1,0],[0,0]])
np.array([np.bincount(r, minlength=np.max(A) + 1) for r in A])
# array([[1, 1],
#        [2, 0]])
np.bincount doesn't work with a 2D array along a given axis. To get the desired effect with a single vectorized call to np.bincount, one can create a 1D array of IDs such that different rows get different IDs even when their elements are the same. This keeps elements from different rows from being binned together in a single call to np.bincount with those IDs. Such an ID array can be built with linear indexing in mind, like so:
N = A.max() + 1
ids = A + N * np.arange(A.shape[0])[:, None]  # `ids` avoids shadowing the builtin `id`
Then, feed the IDs to np.bincount and finally reshape back to 2D:
np.bincount(ids.ravel(), minlength=N * A.shape[0]).reshape(-1, N)
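Put together as a small helper (a sketch; bincount_rows is a hypothetical name, not a numpy function):
def bincount_rows(A):
    # row-wise bincount of a 2D array of non-negative ints,
    # using the ID-offset trick described above
    N = A.max() + 1
    ids = A + N * np.arange(A.shape[0])[:, None]
    return np.bincount(ids.ravel(), minlength=N * A.shape[0]).reshape(-1, N)
print(bincount_rows(np.array([[1, 0], [0, 0]])))
# [[1 1]
#  [2 0]]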
If the data is too large for this to be efficient, then the issue is more likely to be the memory usage of the dense matrix than the numerical operations themselves. Here is an example of using a sklearn HashingVectorizer on a matrix which is too large for the bincount method (the result is a sparse matrix):
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer
h = HashingVectorizer()
A = np.random.randint(100,size=(1000,100))*10000
A_str = [" ".join([str(v) for v in i]) for i in A]
%timeit h.fit_transform(A_str)
#10 loops, best of 3: 110 ms per loop
You can use apply_along_axis. Here is an example:
import numpy as np
test_array = np.array([[0, 0, 1], [0, 0, 1]])
print(test_array)
np.apply_along_axis(np.bincount, axis=1, arr=test_array,
                    minlength=np.max(test_array) + 1)
Note that the final shape of this array depends on the number of bins; minlength ensures every row produces the same number of bins, which apply_along_axis requires. You can also specify other arguments along with apply_along_axis, as in the sketch below.
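For instance, without minlength, rows with different maxima would yield counts of different lengths and apply_along_axis would raise an error; a sketch with made-up data:
# minlength pads every row's counts to the same number of bins
ragged = np.array([[0, 0, 2], [1, 1, 1]])
print(np.apply_along_axis(np.bincount, axis=1, arr=ragged,
                          minlength=np.max(ragged) + 1))
# [[2 0 1]
#  [0 3 0]]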

optimize python code for memory efficiency

I have Python code as follows:
import numpy as np
sizes = 2000
array1 = np.empty((sizes, sizes, sizes, 3), dtype=np.float32)
for i in range(sizes):
    array1[i, :, :, 0] = 1.5 * i
    array1[:, i, :, 1] = 2.5 * i
    array1[:, :, i, 2] = 3.5 * i
array2 = array1.reshape(sizes*sizes*sizes, 3)
#do something with array2
array3 = array2.reshape(sizes*sizes*sizes, 3)
I want to optimize this code for memory efficiency, but I have no idea how. Could I use numpy.reshape in a more memory-efficient way?
I think your code is already memory efficient.
When possible, np.reshape returns a view of the original array. That is so in this case and therefore np.reshape is already as memory efficient as can be.
Here is how you can tell np.reshape is returning a view:
import numpy as np
# Let's make array1 smaller; it won't change our conclusions
sizes = 5
array1 = np.arange(sizes*sizes*sizes*3).reshape((sizes, sizes, sizes, 3))
for i in range(sizes):
    array1[i, :, :, 0] = 1.5 * i
    array1[:, i, :, 1] = 2.5 * i
    array1[:, :, i, 2] = 3.5 * i
array2 = array1.reshape(sizes*sizes*sizes, 3)
Note the value of array2 at a certain location:
assert array2[0,0] == 0
Change the corresponding value in array1:
array1[0,0,0,0] = 100
Note that the value of array2 changes.
assert array2[0,0] == 100
Since array2 changes due to a modification of array1, you can conclude that array2 is a view of array1. Views share the underlying data. Since there is no copy being made, the reshape is memory efficient.
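A more direct check (assuming a numpy recent enough to provide np.shares_memory) is:
# True means array1 and array2 share the same underlying buffer,
# i.e. the reshape made no copy
print(np.shares_memory(array1, array2))
# True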
array2 is already of shape (sizes*sizes*sizes, 3), so this reshape does nothing.
array3 = array2.reshape(sizes*sizes*sizes, 3)
Finally, the assert below shows array3 was also affected by the modification made to array1. So that proves conclusively that array3 is also a view of array1.
assert array3[0,0] == 100
So really your problem depends on what you are doing with the array. You are currently storing a large amount of redundant information: three 1D arrays of length sizes suffice to reconstruct every entry, a tiny fraction of what is currently stored.
For instance, if we define the following three one dimensional arrays
a = np.linspace(0, (sizes - 1) * 1.5, sizes).astype(np.float32)
b = np.linspace(0, (sizes - 1) * 2.5, sizes).astype(np.float32)
c = np.linspace(0, (sizes - 1) * 3.5, sizes).astype(np.float32)
we can recreate any entry (i.e. any length-3 vector along the fastest-varying last axis) of your array1:
In [235]: array1[4][3][19] == np.array([a[4],b[3],c[19]])
Out[235]: array([ True, True, True], dtype=bool)
The usefulness of this all depends on what you are doing with the array, as it will be less performant to remake array1 from a, b and c. However, if you are nearing the limits of what your machine can handle, sacrificing performance for memory efficiency may be a necessary step. Also, moving a, b and c around will have a much lower overhead than moving array1 around.
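For illustration, here is a sketch of how the entries could be rebuilt from a, b and c when needed (rebuilding the full array would of course reintroduce the memory cost, so in practice you would apply this to slices):
# meshgrid with indexing='ij' makes a vary along axis 0, b along axis 1
# and c along axis 2, matching how array1 was filled
i, j, k = np.meshgrid(a, b, c, indexing='ij')
rebuilt = np.stack([i, j, k], axis=-1)
# rebuilt[x, y, z] == [a[x], b[y], c[z]] == array1[x, y, z]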
