So I have 3 NumPy arrays with the following dimensions:
a.shape = (704, 528)
b.shape = (704, 528)
c.shape = (704, 528)
And I have a square matrix that looks like this:
mat = np.array([[a, b], [b, c]])
I need to find the eigenvalues of this. I'm aware that it's going to be a matrix of eigenvalues. But when I use numpy.linalg.eig(), it gives me an error: numpy.linalg.LinAlgError: Last 2 dimensions of the array must be square.
I haven't found many resources on how to do this; could someone point me to any sources or give me a solution? Thank you!
Eigenvalues are only defined for square matrices.
Your intended block matrix would have 2*704 = 1408 rows and 2*528 = 1056 columns, so it is not square. In fact, np.array([[a, b], [b, c]]) does not build that block matrix at all: it produces an array of shape (2, 2, 704, 528), and numpy.linalg.eig() operates on the last two dimensions, which are (704, 528) and not square either; hence the error.
Depending on why you want eigenvalues, you might consider the SVD, which is defined for non-square matrices as well. You might also want to check that the matrix you constructed is indeed the matrix you intend to construct.
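As a quick sanity check (a minimal sketch using random stand-ins for a, b, c), you can print the shape NumPy actually built and run an SVD, which works on the non-square trailing dimensions:
import numpy as np

a = np.random.rand(704, 528)  # stand-ins for the arrays in the question
b = np.random.rand(704, 528)
c = np.random.rand(704, 528)

mat = np.array([[a, b], [b, c]])
print(mat.shape)  # (2, 2, 704, 528) -- not a 1408x1056 block matrix

# SVD is applied to each (704, 528) matrix in the last two dimensions;
# the singular values come back with shape (2, 2, 528):
s = np.linalg.svd(mat, compute_uv=False)
print(s.shape)  # (2, 2, 528)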
If I have a 2D matrix and I want to assign the vector [1, 1, 1] to each cell of my matrix M:
import numpy as np

vector = np.array([1, 1, 1])
M = np.zeros((4, 4), dtype=object)  # np.object is deprecated/removed in modern NumPy
M[:] = vector
This will obviously give me the error:
ValueError: could not broadcast input array from shape (3,) into shape (4,4)
So is there any way I can store my 3-element vector in each cell of my 4x4 matrix M?
Thanks!
I know that if I iterate over the ndarray I can do it:
for i in range(np.shape(M)[0]):
    for j in range(np.shape(M)[1]):
        M[i][j] = vector
I am just wondering whether there's a simpler syntax for this.
You need to declare what the entries of your matrix should contain via the dtype argument, namely vector.dtype.
This link might help: Numpy - create matrix with rows of vector
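A minimal sketch of that idea (assuming a plain (4, 4, 3) numeric array is acceptable instead of an object matrix): give the array a trailing axis of length 3 and let broadcasting fill every cell:
import numpy as np

vector = np.array([1, 1, 1])

# One copy of `vector` per (i, j) cell, stored as a (4, 4, 3) array:
M = np.zeros((4, 4, 3), dtype=vector.dtype)
M[:] = vector  # broadcasts (3,) across (4, 4, 3)

print(M[2, 1])  # [1 1 1]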
import numpy as np

w, b, X, Y = np.array([[1.], [2.]]), 2., np.array([[1., 2., -1.], [3., 4., -3.2]]), np.array([[1, 0, 1]])
w1 = w.T
print(np.matmul(X*w1))
This code gives the following error:
ValueError: operands could not be broadcast together with shapes (2,3) (1,2)
How can I solve it?
Matrix multiplication is not your problem here; it is the elementwise multiplication X*w1 that fails. To multiply two arrays elementwise, they must have the same shape, or broadcasting must apply: each pair of axes must either have the same length or have length 1 on one side. Here X has shape (2, 3) and w1 has shape (1, 2); the trailing axes are 3 and 2, which neither match nor include a 1, so broadcasting is not possible.
It seems what you are actually trying to do is matrix multiplication. That takes the two matrices as separate arguments, so you must not multiply them first. Also, for two matrices to be multiplied this way, the number of columns of the first must equal the number of rows of the second. The following works and is probably what you are trying to do:
np.matmul(w1, X)
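As a shape check (a small runnable example with the arrays from the question): w1 has shape (1, 2) and X has shape (2, 3), so the product has shape (1, 3):
import numpy as np

w = np.array([[1.], [2.]])
X = np.array([[1., 2., -1.], [3., 4., -3.2]])

w1 = w.T                 # shape (1, 2)
print(np.matmul(w1, X))  # shape (1, 3): [[ 7.  10.  -7.4]]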
I may be misunderstanding how broadcasting works in Python, but I am still running into errors.
SciPy offers a number of "special functions" which take two arguments, in particular the eval_XX(n, x[, out]) functions.
See http://docs.scipy.org/doc/scipy/reference/special.html
My program uses many orthogonal polynomials, so I must evaluate these polynomials at distinct points. Let's take the concrete example scipy.special.eval_hermite(n, x, out=None).
I would like the x argument to be a matrix of shape (50, 50). Then, I would like to evaluate each entry of this matrix at a number of points. Let's define n to be a NumPy array narr = np.arange(10) (assuming import numpy as np).
So, calling
scipy.special.eval_hermite(narr, matrix)
should return the Hermite polynomials H_0(matrix), H_1(matrix), H_2(matrix), etc. Each H_n(matrix) has shape (50, 50), the shape of the original input matrix.
Then, I would like to sum these values. So, I call
matrix1 = np.sum([scipy.special.eval_hermite(narr, matrix)], axis=0)
but I get a broadcasting error!
ValueError: operands could not be broadcast together with shapes (10,) (50,50)
I can solve this with a for loop, i.e.
matrix2 = np.sum([scipy.special.eval_hermite(i, matrix) for i in narr], axis=0)
This gives me the correct answer, and the output matrix2.shape = (50,50). But using this for loop slows down my code, big time. Remember, we are working with entries of matrices.
Is there a way to do this without a for loop?
eval_hermite broadcasts n with x, then evaluates H_n(x) at each point. Thus, the output shape will be the result of broadcasting n with x. So, if you want to make this work, you'll have to give n and x compatible shapes:
import scipy.special as ss
import numpy as np
matrix = np.ones([100,100]) # example
narr = np.arange(10) # example
ss.eval_hermite(narr[:,None,None], matrix).shape # => (10, 100, 100)
But note that this might actually be faster:
out = np.zeros_like(matrix)
for n in narr:
    out += ss.eval_hermite(n, matrix)
In testing, it appears to be 5-10% faster than the np.sum(...) approach above.
The documentation for these functions is skimpy, and a lot of the code is compiled, so this is just based on experimentation:
special.eval_hermite(n, x, out=None)
n apparently is a scalar or array of integers. x can be an array of floats.
special.eval_hermite(np.ones(5,int)[:,None],np.ones(6)) gives me a (5,6) result. This is the same shape as what I'd get from np.ones(5,int)[:,None] * np.ones(6).
The np.ones(5,int)[:,None] is a (5,1) array, and np.ones(6) a (6,), which for this purpose is equivalent to (1,6). Both can be expanded to (5,6).
So as best I can tell, the broadcasting rules in these special functions are the same as for operators like *.
Since special.eval_hermite(narr[:,None,None], x) produces a (10,50,50), you just apply sum to axis 0 of that to produce the (50,50):
special.eval_hermite(narr[:,None,None], x).sum(axis=0)
Like I wrote before, the same broadcasting (and summing) rules apply for this hermite as they do for a basic operation like *.
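To confirm the two routes agree (a small runnable check with random data of the shapes from the question):
import numpy as np
import scipy.special as ss

matrix = np.random.rand(50, 50)
narr = np.arange(10)

vectorized = ss.eval_hermite(narr[:, None, None], matrix).sum(axis=0)
looped = sum(ss.eval_hermite(n, matrix) for n in narr)
print(np.allclose(vectorized, looped))  # True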
In a current project I have a large multidimensional array of shape (I,J,K,N) and a square matrix of dimension N.
I need to perform a matrix vector multiplication of the last axis of the array with the square matrix.
So the obvious solution would be:
for i in range(I):
    for j in range(J):
        for k in range(K):
            arr[i,j,k] = mat.dot(arr[i,j,k])
but of course this is rather slow. So I also tried numpy's tensordot but had little success.
I would expect that something like:
arr = tensordot(mat,arr,axes=((0,1),(3)))
should do the trick but I get a shape mismatch error.
Has someone a better solution or knows how to correctly use tensordot?
Thank you!
This should do what your loops do, but with vectorized looping:
from numpy.core.umath_tests import matrix_multiply
arr[..., np.newaxis] = matrix_multiply(mat, arr[..., np.newaxis])
matrix_multiply and its sister inner1d are hidden, undocumented gems of numpy, although a full set of linear algebra gufuncs should see the light with numpy 1.8. matrix_multiply does matrix multiplication on the last two dimensions of its inputs, and broadcasts on the rest. The only tricky part is adding an extra dimension, so that it sees column vectors when multiplying, and adding it also on assignment back into the array, so that there is no shape mismatch.
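On recent NumPy versions, where numpy.core.umath_tests may no longer be available, the same computation can be written with einsum or the @ operator; a minimal sketch with made-up sizes:
import numpy as np

I, J, K, N = 2, 3, 4, 5            # example sizes, not from the question
arr = np.random.rand(I, J, K, N)
mat = np.random.rand(N, N)

# out[i,j,k,:] = mat @ arr[i,j,k,:] for every (i, j, k):
out = np.einsum('mn,ijkn->ijkm', mat, arr)

# Equivalent via broadcast matrix multiplication on a trailing column vector:
out2 = (mat @ arr[..., np.newaxis])[..., 0]
print(np.allclose(out, out2))  # True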
I think your for loop is wrong, and for this case dot seems to be enough:
# a is your (I, J, K, N) array
# b is your (N, N) matrix
c = np.dot(a, b)
Here c will be an (I, J, K, N) array. If you want to sum over the last dimension to get the (I, J, K) array:
arr = np.dot(a, b).sum(axis=3)
BUT I'm NOT SURE IF THIS IS WHAT YOU WANT...
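For reference, a small runnable check of what np.dot computes here (with example sizes): it contracts a's last axis with b's first axis, so per vector it applies b.T rather than b, which matches the question's loop only when the matrix is symmetric:
import numpy as np

a = np.random.rand(2, 3, 4, 5)  # (I, J, K, N), example sizes
b = np.random.rand(5, 5)        # (N, N)

c = np.dot(a, b)
print(c.shape)  # (2, 3, 4, 5)

# c[i,j,k,m] = sum_n a[i,j,k,n] * b[n,m], i.e. b.T applied to each vector:
print(np.allclose(c, np.einsum('mn,ijkn->ijkm', b.T, a)))  # True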
I have a matrix P with shape MxN and a 3d tensor T with shape KxNxR. I want to multiply P with every NxR matrix in T, resulting in a KxMxR 3d tensor.
P.dot(T).transpose(1,0,2) gives the desired result. Is there a nicer solution (i.e. one that gets rid of the transpose)? This must be quite a common operation, so I assume others have found different approaches, e.g. using tensordot (which I tried, but failed to get the desired result with). Opinions/views would be highly appreciated!
np.tensordot(P, T, axes=[1, 1]).swapaxes(0, 1)
You could also use Einstein summation notation:
import numpy

P = numpy.random.randint(1, 10, (5, 3))
print(P.shape)  # (5, 3)
T = numpy.random.randint(1, 10, (2, 3, 4))
print(T.shape)  # (2, 3, 4)
numpy.einsum('ij,kjl->kil', P, T)
which should give you the same results as:
P.dot(T).transpose(1,0,2)
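A quick standalone check that the two expressions agree:
import numpy

P = numpy.random.randint(1, 10, (5, 3))
T = numpy.random.randint(1, 10, (2, 3, 4))

out_einsum = numpy.einsum('ij,kjl->kil', P, T)
out_dot = P.dot(T).transpose(1, 0, 2)
print(out_einsum.shape)                     # (2, 5, 4)
print(numpy.allclose(out_einsum, out_dot))  # True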