I have an array A with shape (5,7,3) and an array B with shape (5,3,8). I want to multiply them, C = A.B, so that C has shape (5,7,8).
That is, each 2D submatrix of shape (7,3) in A is multiplied with the corresponding 2D submatrix of shape (3,8) in B, so I have to do the multiplication 5 times.
The simplest way is to use a loop with numpy:
import numpy
C = numpy.empty((5, 7, 8))
for u in range(5):
    C[u] = numpy.dot(A[u], B[u])
Is there any way to do this without using a loop?
Is there any equivalent method in Theano to do this without using scan?
This can be done pretty simply with np.einsum in numpy:
C = numpy.einsum('ijk,ikl->ijl', A, B)
It can also simply be:
C = numpy.matmul(A,B)
Since the docs state:
If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly
Theano has similar functionality in batched_dot, so it would be:
C = theano.tensor.batched_dot(A, B)
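For reference, here is a minimal cross-check (using randomly generated inputs as a stand-in for the real data, which is an assumption) that the loop, the einsum call and np.matmul all produce the same (5, 7, 8) result:
import numpy as np
A = np.random.rand(5, 7, 3)
B = np.random.rand(5, 3, 8)
C_loop = np.empty((5, 7, 8))
for u in range(5):
    C_loop[u] = np.dot(A[u], B[u])          # baseline loop
C_einsum = np.einsum('ijk,ikl->ijl', A, B)  # batched product via einsum
C_matmul = np.matmul(A, B)                  # equivalently A @ B
print(np.allclose(C_loop, C_einsum), np.allclose(C_loop, C_matmul))  # True True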
I have two numpy arrays of different dimensions, let's say A and B, defined as follows:
A = np.random.rand(3,3,10)
B = np.random.rand(3,3)
I'm trying to calculate the sum of the elementwise product between B and A along the third dimension of A:
ans = []
for i in range(10):
    ans.append(np.sum(B * A[:, :, i]))
Is there a better way to do this? It feels slow when the data gets larger.
Try broadcasting the missing dimension:
(A * B[...,None]).sum(axis=(0,1))
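An einsum formulation does the same reduction in one call; here is a quick sanity check against the original loop and the broadcasting answer (the random inputs are only for illustration):
import numpy as np
A = np.random.rand(3, 3, 10)
B = np.random.rand(3, 3)
ref = np.array([np.sum(B * A[:, :, i]) for i in range(10)])  # original loop
via_broadcast = (A * B[..., None]).sum(axis=(0, 1))
via_einsum = np.einsum('ijk,ij->k', A, B)                    # sum over i, j for each k
print(np.allclose(ref, via_broadcast), np.allclose(ref, via_einsum))  # True True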
I have a matrix A with shape=(N, N) and a matrix B with the same shape=(N, N).
I am constructing a matrix M using the following einsum (using the opt_einsum library):
M = oe.contract('nm,in,jm,pn,qm->ijpq', A, B, B, B, B)
This is evaluating the following sum: M[i,j,p,q] = sum over n and m of A[n,m] * B[i,n] * B[j,m] * B[p,n] * B[q,m].
This yields a matrix M with shape (N, N, N, N). I then reshape this to a 2D array of shape (N**2, N**2):
M = M.reshape((N**2, N**2))
This must be 2D as it is treated as a linear operator.
I want to use the sparse library, as M is sparse and becomes too large to store for large N. I can make A and B sparse and pass them into oe.contract.
The problem is, sparse only supports 2D arrays and so fails to produce the 4D output of shape (N, N, N, N). Is there a way to combine the einsum and reshape steps so that sparse can be used in this way, given that the final shape of M is 2D?
This may not help with your use of opt_einsum, but with a bit of reorganizing I can speed up np.einsum quite a bit, at least for small arrays.
Do a partial product with two of the B factors:
c1 = np.einsum('in,jm->ijnm',B,B).reshape(N*N,N,N)
The pq pair is the same, so we don't need to recalculate it:
c2 = np.einsum('nm,onm,pnm->op',A,c1,c1)
I verified that this works for two (3,3) arrays, and the speedup is about 10x.
We can even reshape the nm to 1d, though this doesn't improve speed:
c1 = np.einsum('in,jm->ijnm',B,B).reshape(N*N,N*N)
c3 = np.einsum('n,on,pn->op',A.reshape(N*N),c1,c1)
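As a quick sanity check (with small random (3,3) inputs), the reorganized contraction agrees with the direct five-operand einsum:
import numpy as np
N = 3
A = np.random.rand(N, N)
B = np.random.rand(N, N)
M_ref = np.einsum('nm,in,jm,pn,qm->ijpq', A, B, B, B, B).reshape(N**2, N**2)
c1 = np.einsum('in,jm->ijnm', B, B).reshape(N*N, N, N)
c2 = np.einsum('nm,onm,pnm->op', A, c1, c1)
print(np.allclose(M_ref, c2))  # True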
I did not correctly interpret the error given by opt_einsum.
The problem is not that sparse does not support ND sparse arrays (it does!), but that I was not using a true einsum, since the indices summed over (n and m) appear more than twice. As stated in the opt_einsum documentation, this results in a call to a sparse.einsum function, which does not exist. Using each index only once or twice works. Using a different contraction path, such as the one suggested by hpaulj, solves the problem.
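For illustration, the same two-step idea can be written without any repeated-index einsum at all, using only an outer product, broadcasting and tensordot; this is a sketch with dense numpy arrays, and whether it carries over unchanged to sparse containers depends on that library's support for these operations (an assumption here):
import numpy as np
N = 3
A = np.random.rand(N, N)
B = np.random.rand(N, N)
c1 = np.tensordot(B, B, axes=0)                   # shape (N, N, N, N), index order (i, n, j, m)
c1 = c1.transpose(0, 2, 1, 3).reshape(N*N, N, N)  # merge i, j into one axis: c1[o, n, m] = B[i, n]*B[j, m]
d = c1 * A                                        # broadcast A[n, m] over the merged o axis
M2d = np.tensordot(d, c1, axes=([1, 2], [1, 2]))  # sum over n, m -> shape (N**2, N**2)
ref = np.einsum('nm,in,jm,pn,qm->ijpq', A, B, B, B, B).reshape(N**2, N**2)
print(np.allclose(M2d, ref))  # True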
I have a numpy array of matrices which I am trying to multiply together in the form A * B * C * D, where A is the first matrix, B is the second, and so on. I have tried this code:
matrix = matrices[0]
for m in matrices[1:]:
    matrix = np.matmul(matrix, m)
However, I believe this multiplication is wrong, as I get incorrect output values, and I have triple-checked the rest of my code, so I believe this is the issue. How can I multiply all the matrices in this array together? Also, the array length will vary depending on the input file, so I can't hard-code an A * B * C approach.
Your code for multiplying a series of matrices together should work. Here is an example using your method with some simple matrices.
import numpy as np
matrices = []
matrices.append(np.eye(3,dtype=float))
matrices.append(np.matrix('1.0,2.0,3.0;4.0,5.0,6.0;7.0,8.0,8.0'))
matrices.append(np.eye(3,dtype=float))
matrices.append(np.linalg.inv(np.matrix('1.0,2.0,3.0;4.0,5.0,6.0;7.0,8.0,8.0')))
matrix = matrices[0]
for m in matrices[1:]:
    matrix = np.matmul(matrix, m)
print(matrix)
directmul = np.matmul(matrices[1], matrices[3])  # the identity factors drop out, so this should equal matrix
print(np.subtract(matrix, directmul))            # should be all zeros, up to floating-point error
Your problem is somewhere else: maybe in how you are filling the list of matrices, or in how you are filling the matrices themselves. Have you tried unit testing your code? Have you given the Python debugger a try?
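As an aside, the chained product over a variable-length list can also be written without an explicit loop, for example with functools.reduce or np.linalg.multi_dot (the random matrices below are just placeholders):
import numpy as np
from functools import reduce
matrices = [np.random.rand(3, 3) for _ in range(4)]  # stands in for A, B, C, D
prod_reduce = reduce(np.matmul, matrices)            # ((A @ B) @ C) @ D
prod_multidot = np.linalg.multi_dot(matrices)        # also chooses an efficient multiplication order
print(np.allclose(prod_reduce, prod_multidot))  # True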
I need to multiply each row of an array A with all rows of an array B element-wise. For instance, let's say we have the following arrays:
A = np.array([[1,5],[3,6]])
B = np.array([[4,2],[8,2]])
I want to get the following array C:
C = np.array([[4,10],[8,10],[12,12],[24,12]])
I could do this using a for loop, but I think there could be a better way to do it.
EDIT: I thought of repeating and tiling, but my arrays are not that small, so it could create memory problems.
Leverage broadcasting by extending A to 3D with None/np.newaxis, perform the elementwise multiplication and reshape back to 2D -
(A[:,None]*B).reshape(-1,B.shape[1])
which essentially would be -
(A[:,None,:]*B[None,:,:]).reshape(-1,B.shape[1])
Schematically put, it's:
A : M x 1 x N
B : 1 x K x N
out : M x K x N
The final reshape merges the first two axes, giving us a (M*K x N) shaped 2D array.
We can also use einsum to perform the extension to 3D and elementwise multiplication in one function call -
np.einsum('ij,kj->ikj',A,B).reshape(-1,B.shape[1])
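A quick check with the arrays from the question confirms that both formulations produce the requested C:
import numpy as np
A = np.array([[1, 5], [3, 6]])
B = np.array([[4, 2], [8, 2]])
out_broadcast = (A[:, None] * B).reshape(-1, B.shape[1])
out_einsum = np.einsum('ij,kj->ikj', A, B).reshape(-1, B.shape[1])
print(out_broadcast)                              # [[ 4 10] [ 8 10] [12 12] [24 12]]
print(np.array_equal(out_broadcast, out_einsum))  # True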
I'm wondering if there is a simple way to multiply a numpy matrix by a scalar. Essentially I want all values to be multiplied by the constant 40. This would be an n x n matrix with 40's on the diagonal, but I'm wondering if there is a simpler function to use to scale this matrix. Or how would I go about making a matrix with the same shape as my other matrix and filling in its diagonal?
Sorry if this seems a bit basic, but for some reason I couldn't find this in the docs.
If you want a matrix with 40 on the diagonal and zeros everywhere else, you can use NumPy's function fill_diagonal() on a matrix of zeros. You can thus directly do:
N = 100; value = 40
b = np.zeros((N, N))
np.fill_diagonal(b, value)
This involves only setting elements to a certain value, and is therefore likely to be faster than code involving multiplying all the elements of a matrix by a constant. This approach also has the advantage of showing explicitly that you fill the diagonal with a specific value.
If you want the diagonal matrix b to be of the same size as another matrix a, you can use the following shortcut (no need for an explicit size N):
b = np.zeros_like(a)
np.fill_diagonal(b, value)
Easy:
N = 100
a = np.eye(N) # Diagonal Identity 100x100 array
b = 40*a # Multiply by a scalar
If you actually want a numpy matrix vs an array, you can do a = np.asmatrix(np.eye(N)) instead. But in general * is element-wise multiplication in numpy.
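For completeness, a small check that scaling the identity and filling the diagonal give the same result; np.diag on a constant vector is shown as one more option, purely as an illustration:
import numpy as np
N, value = 100, 40
b1 = np.zeros((N, N))
np.fill_diagonal(b1, value)
b2 = value * np.eye(N)
b3 = np.diag(np.full(N, float(value)))
print(np.allclose(b1, b2) and np.allclose(b1, b3))  # True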