I have two numpy arrays of different dimensions, let's say A and B, defined as follows:
A = np.random.rand(3,3,10)
B = np.random.rand(3,3)
I'm trying to calculate, for each slice of A along its third dimension, the sum of its elementwise product with B:
ans = []
for i in range(10):
    ans.append(np.sum(B*A[:,:,i]))
Is there a better way to do this? It feels slow as the data gets larger.
Try broadcasting the missing dimension:
(A * B[...,None]).sum(axis=(0,1))
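For example, a quick sanity check that the broadcasted expression matches the loop (a small sketch using the shapes from the question):

import numpy as np

A = np.random.rand(3, 3, 10)
B = np.random.rand(3, 3)

# Loop version from the question
ans = [np.sum(B * A[:, :, i]) for i in range(10)]

# B[..., None] has shape (3, 3, 1), so it broadcasts against A's shape (3, 3, 10);
# summing over the first two axes leaves one value per slice along the third axis
vectorized = (A * B[..., None]).sum(axis=(0, 1))

print(np.allclose(ans, vectorized))  # True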
I have a numpy array of matrices which I am trying to multiply together in the form A * B * C * D, where A is the first matrix, B is the second, and so on. I have tried this code:
matrix = matrices[0]
for m in matrices[1:]:
    matrix = np.matmul(matrix, m)
However, I believe this multiplication is wrong, as I get incorrect output values, and I have triple-checked the rest of my code, so I believe this is the issue. How can I multiply all the matrices in this array together? Also, the array length varies depending on the input file, so I can't use an A * B * C approach.
Your code for multiplying a series of matrices together should work. Here is an example using your method with some simple matrices.
import numpy as np

matrices = []
matrices.append(np.eye(3, dtype=float))
matrices.append(np.matrix('1.0,2.0,3.0;4.0,5.0,6.0;7.0,8.0,8.0'))
matrices.append(np.eye(3, dtype=float))
matrices.append(np.linalg.inv(np.matrix('1.0,2.0,3.0;4.0,5.0,6.0;7.0,8.0,8.0')))

# Chain the multiplications exactly as in your code
matrix = matrices[0]
for m in matrices[1:]:
    matrix = np.matmul(matrix, m)
print(matrix)

# The identity matrices are no-ops, so the chained product should equal matrices[1] times matrices[3]
directmul = np.matmul(matrices[1], matrices[3])
print(np.subtract(matrix, directmul))
Your problem is somewhere else: maybe in how you are filling the list of matrices, or in how you are filling the matrices themselves. Have you tried unit testing your code? Have you given the Python debugger a try?
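If you would rather avoid the explicit loop, here is a minimal sketch using np.linalg.multi_dot, which takes the whole list and also chooses an efficient multiplication order; the random matrices are just placeholders for whatever your input file produces:

import numpy as np

# Placeholder list of square matrices, just for illustration
matrices = [np.random.rand(3, 3) for _ in range(4)]

# Explicit loop, as in the question
chained = matrices[0]
for m in matrices[1:]:
    chained = np.matmul(chained, m)

# Same product in a single call
direct = np.linalg.multi_dot(matrices)

print(np.allclose(chained, direct))  # True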
I am trying to implement the QR decomposition via Householder reflectors. While attempting this on a very simple array, I am getting weird numbers. Major bonus points to anyone who can also tell me why using the @ vs. the * operator between vec and vec.T on the last line of the function definition matters.
This has stumped two math/comp-sci PhDs as of this morning.
import numpy as np

def householder(vec):
    vec[0] += np.sign(vec[0])*np.linalg.norm(vec)
    vec = vec/vec[0]
    gamma = 2/(np.linalg.norm(vec)**2)
    return np.identity(len(vec)) - gamma*(vec*vec.T)

array = np.array([1, 3, 4])
Q = householder(array)
print(Q @ array)
Output:
array([-4.06557377, -7.06557377, -6.06557377])
Where it should be:
array([5.09, 0, 0])
* is elementwise multiplication, @ is matrix multiplication. Both have their uses, but for matrix calculations you most likely want the matrix product.
vec.T for a 1-D array returns the same array: with only one dimension, there is nothing to transpose. vec*vec.T therefore just returns the elementwise-squared array.
You might want to use vec = vec.reshape(-1, 1) to get a proper column vector, i.e. a one-column matrix. Then vec*vec.T happens to do the correct thing, because broadcasting produces the outer product. You might want to use the matrix multiplication operator there anyway.
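A minimal sketch of that fix (it also works on a float copy, since the in-place update of vec[0] would otherwise be truncated for the integer example array and would modify the caller's array):

import numpy as np

def householder(vec):
    # Work on a float copy so the in-place update below does not truncate
    # integer input and does not modify the caller's array
    vec = np.array(vec, dtype=float)
    vec[0] += np.sign(vec[0]) * np.linalg.norm(vec)
    vec = vec / vec[0]
    gamma = 2 / (np.linalg.norm(vec) ** 2)
    # Reshape to a column vector so vec @ vec.T is a proper outer product
    vec = vec.reshape(-1, 1)
    return np.identity(len(vec)) - gamma * (vec @ vec.T)

array = np.array([1, 3, 4])
Q = householder(array)
print(Q @ array)  # approximately [-5.099, 0, 0]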
I need to multiply each row of an array A with all rows of an array B element-wise. For instance, let's say we have the following arrays:
A = np.array([[1,5],[3,6]])
B = np.array([[4,2],[8,2]])
I want to get the following array C:
C = np.array([[4,10],[8,10],[12,12],[24,12]])
I could do this with a for loop, but I think there could be a better way to do it.
EDIT: I thought of repeating and tiling, but my arrays are not that small, so that could create memory problems.
Leverage broadcasting by extending A to 3D with None/np.newaxis, perform the elementwise multiplication, and reshape back to 2D:
(A[:,None]*B).reshape(-1,B.shape[1])
which is essentially:
(A[:,None,:]*B[None,:,:]).reshape(-1,B.shape[1])
Schematically put, it's:
A : M x 1 x N
B : 1 x K x N
out : M x K x N
The final reshape merges the first two axes, giving us a 2D array of shape (M*K) x N.
We can also use einsum to perform the extension to 3D and the elementwise multiplication in one function call:
np.einsum('ij,kj->ikj',A,B).reshape(-1,B.shape[1])
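As a quick check with the arrays from the question, both forms produce the expected C:

import numpy as np

A = np.array([[1, 5], [3, 6]])
B = np.array([[4, 2], [8, 2]])

C_broadcast = (A[:, None] * B).reshape(-1, B.shape[1])
C_einsum = np.einsum('ij,kj->ikj', A, B).reshape(-1, B.shape[1])

print(C_broadcast)
# [[ 4 10]
#  [ 8 10]
#  [12 12]
#  [24 12]]
print(np.array_equal(C_broadcast, C_einsum))  # True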
I have a matrix A with size (5,7,3) and a matrix B with size (5,3,8). I want to multiply them, C = A.B, so that the size of C is (5,7,8).
That is, each 2D submatrix of size (7,3) in A is multiplied with the corresponding 2D submatrix of size (3,8) in B, so I have to do 5 multiplications.
The simplest way is to use a loop with numpy:
for u in range(5):
    C[u] = numpy.dot(A[u], B[u])
Is there any way to do this without using a loop?
Is there any equivalent method in Theano to do this without using scan?
This can be done pretty simply with np.einsum in numpy:
C = numpy.einsum('ijk,ikl->ijl', A, B)
It can also simply be:
C = numpy.matmul(A,B)
Since the docs state:
If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly
Theano has similar functionality in batched_dot, so it would be:
C = theano.tensor.batched_dot(A, B)
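A quick self-contained check (using the shapes from the question) that the loop, einsum, and matmul versions agree:

import numpy as np

A = np.random.rand(5, 7, 3)
B = np.random.rand(5, 3, 8)

# Loop version from the question
C_loop = np.empty((5, 7, 8))
for u in range(5):
    C_loop[u] = np.dot(A[u], B[u])

C_einsum = np.einsum('ijk,ikl->ijl', A, B)
C_matmul = np.matmul(A, B)  # equivalently, A @ B

print(np.allclose(C_loop, C_einsum), np.allclose(C_loop, C_matmul))  # True True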
I'm wondering if there is a simple way to multiply a numpy matrix by a scalar. Essentially I want all values to be multiplied by the constant 40. This would be an n x n matrix with 40s on the diagonal, but I'm wondering if there is a simpler function to use to scale this matrix. Or how would I go about making a matrix with the same shape as my other matrix and filling in its diagonal?
Sorry if this seems a bit basic, but for some reason I couldn't find this in the docs.
If you want a matrix with 40 on the diagonal and zeros everywhere else, you can use NumPy's function fill_diagonal() on a matrix of zeros. You can thus directly do:
N = 100; value = 40
b = np.zeros((N, N))
np.fill_diagonal(b, value)
This involves only setting elements to a certain value, and is therefore likely to be faster than code that multiplies all the elements of a matrix by a constant. This approach also has the advantage of showing explicitly that you are filling the diagonal with a specific value.
If you want the diagonal matrix b to be of the same size as another matrix a, you can use the following shortcut (no need for an explicit size N):
b = np.zeros_like(a)
np.fill_diagonal(b, value)
Easy:
N = 100
a = np.eye(N) # Diagonal Identity 100x100 array
b = 40*a # Multiply by a scalar
If you actually want a numpy matrix vs an array, you can do a = np.asmatrix(np.eye(N)) instead. But in general * is element-wise multiplication in numpy.
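For comparison, a small sketch showing that scaling the identity and building the diagonal directly from a constant vector give the same matrix (not a benchmark, just an equivalence check):

import numpy as np

N = 100
a = 40 * np.eye(N)             # scale the identity by the constant
b = np.diag(np.full(N, 40.0))  # build the diagonal matrix directly
print(np.array_equal(a, b))    # True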