Apply custom sum over numpy ndarray - python

I would like to do this particular computation: given a square 4-dimensional ndarray A of shape (N,)*4, I would like to compute the 2-dimensional array B such that
for n in range(N):
    for m in range(N):
        B[n, m] = sum(A[i, j, n-i, m-j] for i in range(n) for j in range(m))
Is it possible to vectorize this computation with numpy?
It looks somewhat like a kind of convolution, but over a single array.

It's hard to visualize the action on the whole array, so set up a small test case:
import numpy as np

N = 4
A = np.arange(N**4).reshape(N, N, N, N)
B = np.zeros((N, N), dtype=A.dtype)
This computes the same thing as your loop:
for n in range(N):
    for m in range(N):
        I = np.arange(n)[:, None]
        J = np.arange(m)
        B[n, m] = A[I, J, n-I, m-J].sum()
It's harder to "vectorize" the n,m since the indexed portion of A changes with n and m. At the last iteration:
In [248]: n,m
Out[248]: (3, 3)
In [249]: A[I,J,n-I,m-J]
Out[249]:
array([[ 15,  30,  45],
       [ 75,  90, 105],
       [135, 150, 165]])
while if either n or m is 0, it's an "empty" array with sum of 0.
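To answer the vectorization question directly: one way to vectorize over n and m as well is to gather every shifted term at once with broadcasted fancy indexing and mask out the invalid combinations. A sketch (it materializes an (N, N, N, N) intermediate, so it trades memory for speed):

import numpy as np

N = 4
A = np.arange(N**4).reshape(N, N, N, N)

# broadcastable index grids for the output axes (n, m) and summation axes (i, j)
n = np.arange(N)[:, None, None, None]
m = np.arange(N)[None, :, None, None]
i = np.arange(N)[None, None, :, None]
j = np.arange(N)[None, None, None, :]

# gather A[i, j, n-i, m-j] for every combination; clip keeps the shifted
# indices legal, and the mask zeroes out the terms with i >= n or j >= m
valid = (i < n) & (j < m)
G = A[i, j, np.clip(n - i, 0, N - 1), np.clip(m - j, 0, N - 1)]
B = np.where(valid, G, 0).sum(axis=(2, 3))

# check against the loop version
B_loop = np.zeros((N, N), dtype=A.dtype)
for nn in range(N):
    for mm in range(N):
        B_loop[nn, mm] = sum(A[ii, jj, nn-ii, mm-jj]
                             for ii in range(nn) for jj in range(mm))
assert np.array_equal(B, B_loop)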

Related

Python - How can one vectorize the dot product of two matrices in a stochastic process?

I have a stochastic process with Mmax trajectories. For each trajectory, I have to take the dot product of two matrices, A and B.
With a loop, it works great
A = np.zeros((2, Mmax), dtype=np.complex64)
B = np.zeros((2, 2, Mmax), dtype=np.complex64)
C = np.zeros((2, Mmax), dtype=np.complex64)
for m in range(Mmax):
    C[:, m] = B[:, :, m].dot(A[:, m])
(the matrices are just 2x2 here to keep the example simple; in reality they are much larger)
However, this loop is slow for a large number of trajectories. I want to optimize it by vectorizing it, but I run into problems when I try to implement it:
B[:,:,:].dot(A[:,:])
It gives me the error 'shapes (2,2,10) and (2,10) not aligned: 10 (dim 2) != 2 (dim 0)', which makes sense. However, I would really need to vectorize this process, or at least optimize it as much as possible.
Is there any way to do this?
If speed is your concern, there is a way to keep that multiplication non-vectorized and yet make it extremely fast, usually even significantly faster than the vectorized version. It needs numba though:
import numpy as np
import numba as nb

@nb.njit
def mat_mul(A, B):
    n, Mmax = A.shape
    C = np.zeros((n, Mmax))
    for m in range(Mmax):
        for j in range(n):
            for i in range(n):
                C[j, m] += B[j, i, m] * A[i, m]
    return C

Mmax = 100
A = np.ones((2, Mmax))
B = np.ones((2, 2, Mmax))
C = mat_mul(A, B)
Define sample arrays that aren't all zeros. We want to verify values as well as shapes.
In [82]: m = 5
In [83]: A = np.arange(2*m).reshape(2,m)
In [84]: B = np.arange(2*2*m).reshape(2,2,m)
Your iteration:
In [85]: C = np.zeros((2,m))
In [86]: for i in range(m):
    ...:     C[:,i] = B[:,:,i].dot(A[:,i])
    ...:
In [87]: C
Out[87]:
array([[ 25.,  37.,  53.,  73.,  97.],
       [ 75., 107., 143., 183., 227.]])
It's fairly easy to express that in einsum:
In [88]: np.einsum('ijk,jk->ik', B, A)
Out[88]:
array([[ 25,  37,  53,  73,  97],
       [ 75, 107, 143, 183, 227]])
matmul/@ is a variation on np.dot that handles 'batches' nicely. But the batch dimension has to be first (of 3). Your batch dimension, m, is last, so we have to do some transposing to get the same result:
In [90]: (np.matmul(B.transpose(2,0,1), A.transpose(1,0)[...,None])[...,0]).T
Out[90]:
array([[ 25,  37,  53,  73,  97],
       [ 75, 107, 143, 183, 227]])
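The same transposing can be written with np.moveaxis, which some find easier to read. A small sketch using the sample arrays above (equivalent to the transpose version, and checked against the einsum result):

import numpy as np

m = 5
A = np.arange(2 * m).reshape(2, m)
B = np.arange(2 * 2 * m).reshape(2, 2, m)

# move the batch axis m to the front, batch-matmul, then transpose back
C = np.matmul(np.moveaxis(B, -1, 0), np.moveaxis(A, -1, 0)[..., None])[..., 0].T

assert np.array_equal(C, np.einsum('ijk,jk->ik', B, A))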

Charm-crypto how to do elementwise exponentiation in matrix?

I am working on a project for a cryptography course, working with the Charm (jhuisi) library.
I have two numpy matrices: V of shape (2,3) with elements in ZR, and M of shape (3,2) with elements in G1. I want to bring V to G1, so I can exponentiate M^V. To perform this operation in Charm I cannot simply use M**V; I have to do it element by element.
from charm.toolbox.pairinggroup import PairingGroup, ZR, G1, G2, GT, pair
import numpy as np

group = PairingGroup('SS512')  # assumed setup; the original snippet does not show how group was created

M = np.array([[group.random(G1) for i in range(2)] for j in range(3)])
V_t = np.transpose(np.array([[group.random(ZR) for i in range(2)] for j in range(3)]))
matrix = np.array([[M[i][j] ** V[j][i] for j in range(3)] for i in range(2)])
but it returns the error "IndexError: index 2 is out of bounds for axis 0 with size 2".
Can someone who has used Charm before help me, please?
Your code is a confused mix of python lists and numpy arrays. Let's first do the calculation with lists, paying particular attention to keeping the indices right.
Make 2 lists:
In [358]: M = [[1,2,3],[4,5,6]]
In [359]: V = [[1,2],[3,4],[5,6]]
Starting with an empty list, fill it with a new list:
In [360]: res = []
In [361]: for i in range(2):
     ...:     res1 = []
     ...:     for j in range(3):
     ...:         res1.append(M[i][j]**V[j][i])
     ...:     res.append(res1)
     ...:
In [362]: res
Out[362]: [[1, 8, 243], [16, 625, 46656]]
Note how i range is 2, j range is 3, matching the lengths of the lists.
The same calculation using numpy arrays:
In [363]: np.array(M)**np.array(V).T
Out[363]:
array([[    1,     8,   243],
       [   16,   625, 46656]])
np.array(M) is (2,3) shape; np.array(V) is (3,2). To perform the elementwise power, V has to be transposed to (2,3).
The nested loop can be written as a comprehension - again with the same care over indices:
In [364]: [[M[i][j]**V[j][i] for j in range(3)] for i in range(2)]
Out[364]: [[1, 8, 243], [16, 625, 46656]]
V_t
What is V_t? I see
In [365]: from random import random
In [366]: V = np.transpose(np.array([[random() for i in range(2)] for j in range(3)]))
In [367]: V
Out[367]:
array([[0.8748556 , 0.10373381, 0.23399403],
       [0.95388354, 0.24060715, 0.38468676]])
In [368]: V.shape
Out[368]: (2, 3)
Have you done some undocumented transpose to produce a (3,2)? If so then you need to use V_t[i][j]. Are your problems just the result of a sloppy use of the transpose?
Aren't your indices just the wrong way around?
import numpy as np
from numpy.random import random
M = np.array([[random() for i in range(2)] for j in range(3)])
V = np.transpose(np.array([[random() for i in range(2)] for j in range(3)]))
matrix = [[M[i][j] ** V[j][i] for j in range(2)] for i in range(3)]
Edit
Here's a wild idea. Try:
import numpy as np
from god_knows_what import random
M = np.array([[random() for i in range(2)] for j in range(3)], dtype=object)
V = np.transpose(np.array([[random() for i in range(2)] for j in range(3)], dtype=object))
matrix = np.array([[M[i][j] ** V[j][i] for j in range(2)] for i in range(3)], dtype=object)
If the last line fails, try
matrix = np.array([[M[i][j] ** V[j][i] for j in range(3)] for i in range(2)], dtype=object)
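Putting the two answers together for the original Charm snippet: keep the loop ranges aligned with the shapes, and build object arrays so numpy treats the group elements opaquely. A sketch, assuming the pairing group was initialized somewhere (the question doesn't show it), and untested without Charm installed:

from charm.toolbox.pairinggroup import PairingGroup, ZR, G1
import numpy as np

group = PairingGroup('SS512')  # assumed initialization

M = np.array([[group.random(G1) for i in range(2)] for j in range(3)], dtype=object)    # shape (3, 2)
V_t = np.transpose(np.array([[group.random(ZR) for i in range(2)] for j in range(3)], dtype=object))  # shape (2, 3)

# i runs over M's 3 rows, j over its 2 columns; the transposed V_t is indexed [j][i]
matrix = np.array([[M[i][j] ** V_t[j][i] for j in range(2)] for i in range(3)], dtype=object)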

How to find the nearest neighbour index from one series to another

I have a target array A, which represents isobaric pressure levels in NCEP reanalysis data.
I also have the pressure at which a cloud is observed as a long time series, B.
What I am looking for is a k-nearest neighbour lookup that returns the indices of those nearest neighbours, something like knnsearch in Matlab that could be represented the same in python such as: indices, distance = knnsearch(A, B, n)
where indices contains the nearest n indices in A for every value in B, and distance is how far removed each value in B is from those nearest values in A. A and B can be of different lengths (this is the bottleneck I have found with most solutions so far, whereby I would have to loop over each value in B to return my indices and distances).
import numpy as np
A = np.array([1000, 925, 850, 700, 600, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30, 20, 10]) # this is a fixed 17-by-1 array
B = np.array([923, 584.2, 605.3, 153.2]) # this can be any n-by-1 array
n = 2
What I would like returned from indices, distance = knnsearch(A, B, n) is this:
indices = [[1, 2],[4, 5] etc...]
where 923 in A is matched to first A[1]=925 and then A[2]=850
and 584.2 in A is matched to first A[4]=600 and then A[5]=500
distance = [[2, 73],[15.8, 84.2] etc...]
where 2 represents the distance between the queried value in B and the nearest value in A, e.g. distance[0, 0] == np.abs(B[0] - A[1])
The only solution I have been able to come up with is:
import numpy as np

def knnsearch(A, B, n):
    indices = np.zeros((len(B), n))
    distances = np.zeros((len(B), n))
    for i in range(len(B)):
        a = A
        for N in range(n):
            dif = np.abs(a - B[i])
            ind = np.argmin(dif)
            indices[i, N] = ind + N
            distances[i, N] = dif[ind + N]
            # remove this neighbour from future consideration
            np.delete(a, ind)
    return indices, distances
array_A = np.array([1000, 925, 850, 700, 600, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30, 20, 10])
array_B = np.array([923, 584.2, 605.3, 153.2])
neighbours = 2
indices, distances = knnsearch(array_A, array_B, neighbours)
print(indices)
print(distances)
returns:
[[ 1.  2.]
 [ 4.  5.]
 [ 4.  3.]
 [10. 11.]]
[[ 2.  73. ]
 [15.8  84.2]
 [ 5.3  94.7]
 [ 3.2  53.2]]
There must be a way to remove the for loops, as I need the performance should my A and B arrays contain many thousands of elements with many nearest neighbours...
Please help! Thanks :)
The second loop can easily be vectorized. The most straightforward way is to use np.argsort and select the indices corresponding to the n smallest dif values. However, for large arrays, since only n values need to be sorted, it is better to use np.argpartition.
Therefore, the code would look something like this:
def vector_knnsearch(A, B, n):
    indices = np.empty((len(B), n))
    distances = np.empty((len(B), n))
    for i, b in enumerate(B):
        dif = np.abs(A - b)
        min_ind = np.argpartition(dif, n)[:n]    # indexes of the n smallest values,
                                                 # but not necessarily sorted
        ind = min_ind[np.argsort(dif[min_ind])]  # sort the output of argpartition just in case
        indices[i, :] = ind
        distances[i, :] = dif[ind]
    return indices, distances
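Running this on the sample arrays reproduces the first rows and also corrects the last one: for 153.2 the second-nearest level is 200 at index 9 (distance 46.8), not 100 at index 11.

indices, distances = vector_knnsearch(array_A, array_B, 2)  # arrays from the question
print(indices)    # [[ 1.  2.] [ 4.  5.] [ 4.  3.] [10.  9.]]
print(distances)  # [[ 2.  73. ] [15.8 84.2] [ 5.3 94.7] [ 3.2 46.8]]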
As said in the comments, the first loop can also be removed using a meshgrid. However, the extra memory and computation time needed to construct the meshgrid makes this approach slower for the dimensions I tried (and this will probably get worse for large arrays and may end up in a MemoryError). In addition, the readability of the code decreases. Overall, this probably makes the approach less pythonic.
def mesh_knnsearch(A, B, n):
    m = len(B)
    rng = np.arange(m).reshape((m, 1))
    Amesh, Bmesh = np.meshgrid(A, B)
    dif = np.abs(Amesh - Bmesh)
    min_ind = np.argpartition(dif, n, axis=1)[:, :n]
    ind = min_ind[rng, np.argsort(dif[rng, min_ind], axis=1)]
    return ind, dif[rng, ind]
Note that it is important to define this rng as a 2d array in order to retrieve a[rng[0],ind[0]], a[rng[1],ind[1]], etc. and maintain the dimensions of the array, as opposed to a[:,ind], which retrieves a[:,ind[0]], a[:,ind[1]], etc.
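If SciPy is available, another way to drop both loops is to hand the search to a k-d tree: scipy.spatial.cKDTree.query returns exactly the (distances, indices) pair asked for here. A sketch (a different technique from the answer above, but a standard one for nearest-neighbour lookups):

import numpy as np
from scipy.spatial import cKDTree

A = np.array([1000, 925, 850, 700, 600, 500, 400, 300, 250, 200,
              150, 100, 70, 50, 30, 20, 10])
B = np.array([923, 584.2, 605.3, 153.2])

# cKDTree expects 2-D points, so lift the 1-D series into shape (len, 1)
tree = cKDTree(A[:, None])
distances, indices = tree.query(B[:, None], k=2)
print(indices)    # [[ 1  2] [ 4  5] [ 4  3] [10  9]]
print(distances)  # [[ 2.  73. ] [15.8 84.2] [ 5.3 94.7] [ 3.2 46.8]]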

Numpy: Add each row of a matrix to the matrix (one at a time), then find the min for each row in each new matrix. Wish to speed the code up

I am adding each row of a matrix to the matrix, then computing the min of each row in the new matrix.
My current python code, with a test case, is:
import numpy as np

# Compute distances to all other nodes using landmarks
distToLM = np.array([[1,2,3],[4,5,6],[7,8,9]])
m = len(distToLM)
count = 1
dist = np.zeros((m,m))
for i in range(m):
    findMin = distToLM[i,:] + distToLM.take(range(count,m), axis=0)
    dist[i,count:] = np.min(findMin, axis=1)
    count = count + 1
Note: I am slicing the matrix each time as I only require the upper triangular values of the matrix
So the first iteration would add [1,2,3] to [4,5,6] and [7,8,9] to make a matrix:
[[ 5,  7,  9],
 [ 8, 10, 12]]
From here I want the min of each row, so 5 and 8.
Next iteration I would take [4,5,6] and add it to all rows beneath it i.e [7,8,9] and take the min of each row.
This code is rather slow, around 3 seconds for a 4000x4000 matrix.
I've also tried a Cython version; there was not much of a speed increase, likely due to the heavy dependence on calling numpy functions versus executing the main code in C:
import numpy as np
cimport numpy as np
cimport cython

DTYPE = np.int
ctypedef np.int_t DTYPE_t

@cython.boundscheck(False)
@cython.wraparound(False)
def findDist(np.ndarray[DTYPE_t, ndim=2] distToLM):
    cdef int m = distToLM.shape[0]
    count = 1
    cdef np.ndarray[DTYPE_t, ndim=2] dist = np.zeros((m, m), dtype=DTYPE)
    cdef np.ndarray[DTYPE_t, ndim=2] findMin
    for i in range(m):
        findMin = distToLM[i, :] + distToLM.take(range(count, m), axis=0)
        dist[i, count:] = np.min(findMin, axis=1)
        count = count + 1
    return dist
I assume if there was some way to vectorize this it would be much faster.
I am open to any suggestions.
Changing it a bit helps me visualize the action better (I don't use take much):
distToLM = np.array([[1,2,3],[4,5,6],[7,8,9]])
m = distToLM.shape[0]
dist = np.zeros((m,m), distToLM.dtype)
for i in range(m):
    findMin = distToLM[i,:] + distToLM[i+1:,:]
    dist[i, i+1:] = np.min(findMin, axis=1)
In fact the double iteration is even clearer:
distToLM = np.array([[1,2,3],[4,5,6],[7,8,9]])
m = distToLM.shape[0]
dist = np.zeros((m,m), distToLM.dtype)
for i in range(m):
    for j in range(i+1, m):
        dist[i,j] = np.min(distToLM[i,:] + distToLM[j,:])
That reveals a symmetry in the 2 dimensions that is obscured in your code. It's not faster, but will be easier to implement with Cython memoryviews.
That symmetry also shows that I can perform an 'outer' sum on these rows:
In [512]: np.min(distToLM[:,None,:] + distToLM[None,:,:], axis=-1)
Out[512]:
array([[ 2,  5,  8],
       [ 5,  8, 11],
       [ 8, 11, 14]])
The upper tri is the desired dist.
In [518]: np.triu(_, k=1)
Out[518]:
array([[ 0,  5,  8],
       [ 0,  0, 11],
       [ 0,  0,  0]])
This calculates more values than the iterative approach, but can be faster. Unfortunately for your big problem, the intermediate size (4000,4000,4000) array may be too big for memory.
I could pick the triu indices beforehand with:
In [530]: I, J = np.triu_indices(3, 1)
In [531]: I, J
Out[531]: (array([0, 0, 1], dtype=int32), array([1, 2, 2], dtype=int32))
In [532]: np.min(distToLM[I,:] + distToLM[J,:], axis=1)
Out[532]: array([ 5,  8, 11])
I don't have a feel for how that will perform with large arrays.
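If the full (m, m, n) broadcast is too big for memory, one compromise is to chunk the first axis so the intermediate stays at (chunk, m, n). A sketch of my own variation on the broadcast approach (not benchmarked; pick chunk so chunk*m*n values fit in memory):

import numpy as np

def chunked_dist(distToLM, chunk=8):
    m = distToLM.shape[0]
    dist = np.zeros((m, m), distToLM.dtype)
    for start in range(0, m, chunk):
        stop = min(start + chunk, m)
        # broadcast only a block of rows against all rows
        dist[start:stop] = np.min(distToLM[start:stop, None, :] + distToLM[None, :, :], axis=-1)
    return np.triu(dist, k=1)

distToLM = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(chunked_dist(distToLM, chunk=2))
# [[ 0  5  8]
#  [ 0  0 11]
#  [ 0  0  0]]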
This reminds me that scipy.spatial has what it calls squareform and compact representations of pairwise distances.
https://docs.scipy.org/doc/scipy/reference/spatial.distance.html
Maybe there's some useful stuff there.

Index numpy nd array along last dimension

Is there an easy way to index a numpy multidimensional array along the last dimension, using an array of indices? For example, take an array a of shape (10, 10, 20). Let's assume I have an array of indices b, of shape (10, 10) so that the result would be c[i, j] = a[i, j, b[i, j]].
I've tried the following example:
a = np.ones((10, 10, 20))
b = np.tile(np.arange(10) + 10, (10, 1))
c = a[b]
However, this doesn't work because it then tries to index like a[b[i, j], b[i, j]], which is not the same as a[i, j, b[i, j]]. And so on. Is there an easy way to do this without resorting to a loop?
There are several ways to do this. Let's first generate some test data:
In [1]: a = np.random.rand(10, 10, 20)
In [2]: b = np.random.randint(20, size=(10,10)) # random integers in range 0..19
One way to solve the question would be to create two index vectors, where one is a row vector and the other a column vector of 0..9 using meshgrid:
In [3]: i1, i0 = np.meshgrid(range(10), range(10), sparse=True)
In [4]: c = a[i0, i1, b]
This works because i0, i1 and b will all be broadcasted to 10x10 matrices. Quick test for correctness:
In [5]: all(c[i, j] == a[i, j, b[i, j]] for i in range(10) for j in range(10))
Out[5]: True
Another way would be to use choose and rollaxis:
# choose needs a sequence of length 20, so move last axis to front
In [22]: aa = np.rollaxis(a, -1)
In [23]: c = np.choose(b, aa)
In [24]: all(c[i, j] == a[i, j, b[i, j]] for i in range(10) for j in range(10))
Out[24]: True
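On newer numpy (1.15+), np.take_along_axis does this directly; the index array just needs the same number of dimensions as a, hence the added trailing axis:

# take along the last axis, then drop the trailing length-1 axis
c = np.take_along_axis(a, b[..., None], axis=-1)[..., 0]
assert all(c[i, j] == a[i, j, b[i, j]] for i in range(10) for j in range(10))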
