I am new to numpy but have been using python for quite a while as an engineer.
I am writing a program that currently stores stress tensors as 3x3 numpy arrays within another NxM array which represents values through time and through the thickness of a wall, so overall it is an NxMx3x3 numpy array. I want to efficiently calculate the eigenvalues and eigenvectors of each 3x3 array within this larger array. So far I have tried using fromiter, but this doesn't seem to work because the function returns two arrays. I have also tried apply_along_axis, which also doesn't work because it says the inner 3x3 is not a square matrix. I can do it with a list comprehension, but resorting to lists doesn't seem ideal.
Example, just calculating eigenvalues using a list comprehension:
import numpy as np
from scipy import linalg
a = np.random.random((2, 2, 3, 3))
f = linalg.eigvalsh
ans = np.asarray([f(x) for x in a.reshape((4, 3, 3))])
ans.shape = (2, 2, 3)
I thought something like this would work, but I have played around with it and can't get it working:
np.apply_along_axis(f, 0, a)
BTW, the 2x2 bit could be up to 5000x100, and this code is repeated ~50x50x200 times, hence the need for efficiency. Any help would be greatly appreciated!
You can use numpy.linalg.eigh. It accepts an array like your example a.
Here's an example. First, create an array of 3x3 symmetric arrays:
In [96]: a = np.random.random((2, 2, 3, 3))
In [97]: a = a + np.transpose(a, axes=(0, 1, 3, 2))
In [98]: a[0, 0]
Out[98]:
array([[0.61145048, 0.85209618, 0.03909677],
       [0.85209618, 1.79309413, 1.61209077],
       [0.03909677, 1.61209077, 1.55432465]])
Compute the eigenvalues and eigenvectors of all the 3x3 arrays:
In [99]: evals, evecs = np.linalg.eigh(a)
In [100]: evals.shape
Out[100]: (2, 2, 3)
In [101]: evecs.shape
Out[101]: (2, 2, 3, 3)
Take a look at the result for a[0, 0]:
In [102]: evals[0, 0]
Out[102]: array([-0.31729364, 0.83148477, 3.44467813])
In [103]: evecs[0, 0]
Out[103]:
array([[-0.55911658,  0.79634401,  0.23070516],
       [ 0.63392772,  0.23128064,  0.73800062],
       [-0.53434473, -0.55887877,  0.63413738]])
Verify that it is the same as computing the eigenvalues and eigenvectors for a[0, 0] separately:
In [104]: np.linalg.eigh(a[0, 0])
Out[104]:
(array([-0.31729364,  0.83148477,  3.44467813]),
 array([[-0.55911658,  0.79634401,  0.23070516],
        [ 0.63392772,  0.23128064,  0.73800062],
        [-0.53434473, -0.55887877,  0.63413738]]))
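To gauge scale, here is a minimal sketch (the 5000x100 shape is just the upper bound mentioned in the question) showing that a single vectorized eigh call handles the full stack with no Python-level loop:
import numpy as np

# Hypothetical full-size stack: 5000 time steps x 100 wall positions,
# each entry a symmetric 3x3 stress tensor.
a = np.random.random((5000, 100, 3, 3))
a = a + np.transpose(a, axes=(0, 1, 3, 2))  # symmetrize

evals, evecs = np.linalg.eigh(a)  # one vectorized call
print(evals.shape)   # (5000, 100, 3)
print(evecs.shape)   # (5000, 100, 3, 3)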
What are the advantages and disadvantages of each?
From what I've seen, either one can work as a replacement for the other if need be, so should I bother using both or should I stick to just one of them?
Will the style of the program influence my choice? I am doing some machine learning using numpy, so there are indeed lots of matrices, but also lots of vectors (arrays).
Numpy matrices are strictly 2-dimensional, while numpy arrays (ndarrays) are
N-dimensional. Matrix objects are a subclass of ndarray, so they inherit all
the attributes and methods of ndarrays.
The main advantage of numpy matrices is that they provide a convenient notation
for matrix multiplication: if a and b are matrices, then a*b is their matrix
product.
import numpy as np
a = np.mat('4 3; 2 1')
b = np.mat('1 2; 3 4')
print(a)
# [[4 3]
# [2 1]]
print(b)
# [[1 2]
# [3 4]]
print(a*b)
# [[13 20]
# [ 5 8]]
On the other hand, as of Python 3.5, NumPy supports infix matrix multiplication using the @ operator, so you can achieve the same convenience of matrix multiplication with ndarrays in Python >= 3.5.
import numpy as np
a = np.array([[4, 3], [2, 1]])
b = np.array([[1, 2], [3, 4]])
print(a @ b)
# [[13 20]
# [ 5 8]]
Both matrix objects and ndarrays have .T to return the transpose, but matrix
objects also have .H for the conjugate transpose, and .I for the inverse.
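If you give up matrix objects, the same operations are still available for plain ndarrays through methods and functions rather than attributes; a minimal sketch of the equivalents (assuming a 2-D complex array m):
import numpy as np

m = np.array([[1, 1j], [0, 2]])
m.T               # transpose, same as on matrix objects
m.conj().T        # conjugate transpose, the ndarray equivalent of .H
np.linalg.inv(m)  # inverse, the ndarray equivalent of .I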
In contrast, numpy arrays consistently abide by the rule that operations are
applied element-wise (except for the new @ operator). Thus, if a and b are numpy arrays, then a*b is the array
formed by multiplying the components element-wise:
c = np.array([[4, 3], [2, 1]])
d = np.array([[1, 2], [3, 4]])
print(c*d)
# [[4 6]
# [6 4]]
To obtain the result of matrix multiplication, you use np.dot (or @ in Python >= 3.5, as shown above):
print(np.dot(c,d))
# [[13 20]
# [ 5 8]]
The ** operator also behaves differently:
print(a**2)
# [[22 15]
# [10 7]]
print(c**2)
# [[16 9]
# [ 4 1]]
Since a is a matrix, a**2 returns the matrix product a*a.
Since c is an ndarray, c**2 returns an ndarray with each component squared
element-wise.
There are other technical differences between matrix objects and ndarrays
(having to do with np.ravel, item selection and sequence behavior).
The main advantage of numpy arrays is that they are more general than
2-dimensional matrices. What happens when you want a 3-dimensional array? Then
you have to use an ndarray, not a matrix object. Thus, learning to use matrix
objects is more work -- you have to learn matrix object operations, and
ndarray operations.
Writing a program that mixes both matrices and arrays makes your life difficult
because you have to keep track of what type of object your variables are, lest
multiplication return something you don't expect.
In contrast, if you stick solely with ndarrays, then you can do everything
matrix objects can do, and more, except with slightly different
functions/notation.
If you are willing to give up the visual appeal of NumPy matrix product
notation (which can be achieved almost as elegantly with ndarrays in Python >= 3.5), then I think NumPy arrays are definitely the way to go.
PS. Of course, you really don't have to choose one at the expense of the other,
since np.asmatrix and np.asarray allow you to convert one to the other (as
long as the array is 2-dimensional).
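For instance, a quick round trip (a sketch, assuming a 2-dimensional input):
import numpy as np

arr = np.array([[4, 3], [2, 1]])
mat = np.asmatrix(arr)   # 2-D ndarray viewed as a matrix
back = np.asarray(mat)   # and back to an ndarray
print(type(mat), type(back))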
There is a synopsis of the differences between NumPy arrays and NumPy matrices here.
Scipy.org recommends that you use arrays:
'array' or 'matrix'? Which should I use? - Short answer
Use arrays.
- They support the multidimensional array algebra that is supported in MATLAB.
- They are the standard vector/matrix/tensor type of NumPy. Many NumPy functions return arrays, not matrices.
- There is a clear distinction between element-wise operations and linear algebra operations.
- You can have standard vectors or row/column vectors if you like.
Until Python 3.5 the only disadvantage of using the array type was that you had to use dot instead of * to multiply (reduce) two tensors (scalar product, matrix vector multiplication etc.). Since Python 3.5 you can use the matrix multiplication @ operator.
Given the above, we intend to deprecate matrix eventually.
Just to add one case to unutbu's list.
One of the biggest practical differences for me between numpy ndarrays and numpy matrices (or matrix languages like MATLAB) is that the number of dimensions is not preserved in reduce operations. Matrices are always 2d, while the mean of an array, for example, has one dimension less.
For example, demeaning the rows of a matrix or array:
With a matrix:
>>> m = np.mat([[1,2],[2,3]])
>>> m
matrix([[1, 2],
        [2, 3]])
>>> mm = m.mean(1)
>>> mm
matrix([[ 1.5],
        [ 2.5]])
>>> mm.shape
(2, 1)
>>> m - mm
matrix([[-0.5,  0.5],
        [-0.5,  0.5]])
With an array:
>>> a = np.array([[1,2],[2,3]])
>>> a
array([[1, 2],
       [2, 3]])
>>> am = a.mean(1)
>>> am.shape
(2,)
>>> am
array([ 1.5,  2.5])
>>> a - am  # wrong
array([[-0.5, -0.5],
       [ 0.5,  0.5]])
>>> a - am[:, np.newaxis]  # right
array([[-0.5,  0.5],
       [-0.5,  0.5]])
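If you want the dimension-preserving behavior from an array, note that mean (like other reductions) accepts keepdims, which makes the subtraction broadcast correctly without the newaxis step; a small sketch continuing the session above:
>>> am2 = a.mean(1, keepdims=True)
>>> am2.shape
(2, 1)
>>> a - am2
array([[-0.5,  0.5],
       [-0.5,  0.5]])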
I also think that mixing arrays and matrices gives rise to many "happy" debugging hours.
However, scipy.sparse matrices are always matrices in terms of operators like multiplication.
As per the official documentation, it is no longer advisable to use the matrix class, since it will be removed in the future.
https://numpy.org/doc/stable/reference/generated/numpy.matrix.html
As other answers already state, you can achieve all the operations with NumPy arrays.
As others have mentioned, perhaps the main advantage of matrix was that it provided a convenient notation for matrix multiplication.
However, in Python 3.5 there is finally a dedicated infix operator for matrix multiplication: @.
With recent NumPy versions, it can be used with ndarrays:
A = numpy.ones((1, 3))
B = numpy.ones((3, 3))
A @ B
So nowadays, even more so, when in doubt you should stick to ndarray.
Matrix Operations with Numpy Arrays:
I would like to keep updating this answer about matrix operations with numpy arrays, for users interested in information about matrices and numpy.
As the accepted answer and numpy-ref.pdf say:
class numpy.matrix will be removed in the future.
So matrix algebra operations now have to be done with NumPy arrays.
a = np.array([[1,3],[-2,4]])
b = np.array([[3,-2],[5,6]])
Matrix Multiplication (infix matrix multiplication):
a @ b
array([[18, 16],
       [14, 28]])
Transpose:
ab = a @ b
ab.T
array([[18, 14],
       [16, 28]])
Inverse of a matrix:
np.linalg.inv(ab)
array([[ 0.1       , -0.05714286],
       [-0.05      ,  0.06428571]])
ab_i = np.linalg.inv(ab)
ab @ ab_i  # proof of inverse
array([[1., 0.],
       [0., 1.]])  # identity matrix
Determinant of a matrix:
np.linalg.det(ab)
279.9999999999999
Solving a Linear System:
x + y = 3,
x + 2y = -8
b = np.array([3,-8])
a = np.array([[1,1], [1,2]])
x = np.linalg.solve(a,b)
x
array([ 14., -11.])
# Solution x=14, y=-11
Eigenvalues and Eigenvectors:
a = np.array([[10,-18], [6,-11]])
np.linalg.eig(a)
(array([ 1., -2.]),
 array([[0.89442719, 0.83205029],
        [0.4472136 , 0.5547002 ]]))
An advantage of using matrices is easier instantiation through text rather than nested square brackets.
With matrices you can do
np.matrix("1, 1+1j, 0; 0, 1j, 0; 0, 0, 1")
and get the desired output directly:
matrix([[1.+0.j, 1.+1.j, 0.+0.j],
        [0.+0.j, 0.+1.j, 0.+0.j],
        [0.+0.j, 0.+0.j, 1.+0.j]])
If you use arrays, this does not work:
np.array("1, 1+1j, 0; 0, 1j, 0; 0, 0, 1")
output:
array('1, 1+1j, 0; 0, 1j, 0; 0, 0, 1', dtype='<U29')
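If you want an ndarray result, the direct route is nested lists; and if you really prefer the string syntax, one stopgap sketch is to parse with the (deprecated) matrix class and convert immediately:
import numpy as np

# Direct ndarray construction with nested lists:
a = np.array([[1, 1 + 1j, 0],
              [0, 1j, 0],
              [0, 0, 1]])

# Or parse the string form and convert (relies on the deprecated matrix class):
b = np.array(np.matrix("1, 1+1j, 0; 0, 1j, 0; 0, 0, 1"))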
Part of my code inverts a matrix (really an ndarray) using numpy.linalg.inv. However, this frequently errors out as follows:
numpy.linalg.linalg.LinAlgError: Singular matrix
That would be fine if the matrix was actually singular. But that doesn't seem to be the case.
For example, I'm printing the matrix before trying to invert it. So right before the error it prints this:
[[ 0.76400334  0.22660491]
 [ 0.22660491  0.06721147]]
... and then returns the above singularity error when it tries to invert that matrix. But from what I can tell this matrix is invertible. Numpy even seems to agree when asked later.
>>> numpy.linalg.inv([[0.76400334, 0.22660491], [0.22660491, 0.06721147]])
array([[  2.88436275e+07,  -9.72469076e+07],
       [ -9.72469076e+07,   3.27870046e+08]])
Here's the exact code snippet:
print(np.dot(np.transpose(X), X))
print(np.linalg.inv(np.dot(np.transpose(X), X)))
The first line prints the matrix above; the second line fails.
So what distinguishes the two actions above? Why does the stand-alone code work even though it errors out in my script?
EDIT: Per Colonel Beauvel's request, if I do
try:
    print(np.dot(np.transpose(X), X))
    z = np.linalg.inv(np.dot(np.transpose(X), X))
except:
    z = "whoops"
print(z)
it outputs
[[ 0.01328185  0.1092696 ]
 [ 0.1092696   0.89895982]]
whoops
but trying this on its own I get
>>> numpy.linalg.inv([[0.01328185, 0.1092696], [0.1092696, 0.89895982]])
array([[  2.24677775e+08,  -2.73098420e+07],
       [ -2.73098420e+07,   3.31954382e+06]])
It's a matter of printing precision. The IEEE 754 doubles that you're most likely using have about 16 decimal digits of precision, and you need to write out 17 to preserve the binary value exactly.
Here's a small example. First, create a singular matrix:
In [1]: import numpy as np
In [2]: np.random.seed(0)
In [3]: a, b, c = np.random.rand(3)
In [4]: d = b*c / a
In [5]: X = np.array([[a, b],[c, d]])
Print and try to invert it:
In [6]: X
Out[6]:
array([[ 0.5488135 ,  0.71518937],
       [ 0.60276338,  0.78549444]])
In [7]: np.linalg.inv(X)
LinAlgError: Singular matrix
Try to invert the printed matrix:
In [8]: Y = np.array([[ 0.5488135 ,  0.71518937],
   ...:               [ 0.60276338,  0.78549444]])
In [9]: np.linalg.inv(Y)
Out[9]:
array([[-85805775.2940297 ,  78125795.99532071],
       [ 65844615.19517545, -59951242.76033063]])
Success!
Increase printing precision and try again:
In [10]: np.set_printoptions(precision=17)
In [11]: X
Out[11]:
array([[ 0.54881350392732475,  0.71518936637241948],
       [ 0.60276337607164387,  0.78549444195576024]])
In [12]: Z = np.array([[ 0.54881350392732475,  0.71518936637241948],
    ...:               [ 0.60276337607164387,  0.78549444195576024]])
In [13]: np.linalg.inv(Z)
LinAlgError: Singular matrix
I just compute the determinant:
In [130]: m = np.array([[ 0.76400334, 0.22660491],[ 0.22660491,0.06721147]])
In [131]: np.linalg.det(m)
Out[131]: 2.3302017068132921e-09
# which for a 2x2 matrix is just 0.76400334*0.06721147 - 0.22660491*0.22660491
This is already quite close to 0.
If a matrix m can be inverted, mathematically you can compute the adjugate and divide by the determinant to get the inverse.
Numerically, if the determinant is too small, this can produce the kind of error you are seeing.
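A common way to detect this situation up front is to check the condition number rather than the determinant; a sketch (the pinv fallback is my suggestion, not part of the original code):
import numpy as np

m = np.array([[0.76400334, 0.22660491],
              [0.22660491, 0.06721147]])

# A huge condition number means inv() results are numerically untrustworthy,
# even when the inversion happens to succeed.
print(np.linalg.cond(m))

# The pseudo-inverse is one fallback for (near-)singular matrices.
m_pinv = np.linalg.pinv(m)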
I have a large matrix A of shape (n, n, 3, 3), with n about 5000. Now I want to find the inverse and transpose of each 3x3 block of A:
import numpy as np
A = np.random.rand(1000, 1000, 3, 3)
identity = np.identity(3, dtype=A.dtype)
Ainv = np.zeros_like(A)
Atrans = np.zeros_like(A)
for i in range(1000):
    for j in range(1000):
        Ainv[i, j] = np.linalg.solve(A[i, j], identity)
        Atrans[i, j] = np.transpose(A[i, j])
Is there a faster, more efficient way to do this?
This is taken from a project of mine, where I also do vectorized linear algebra on many 3x3 matrices.
Note that there is only a loop over 3, not a loop over n, so the code is vectorized in the important dimensions. I don't want to vouch for how this compares, performance-wise, to a C/numba extension doing the same thing; that is likely to be substantially faster still, but at least this blows the loops over n out of the water.
import numpy as np

def adjoint(A):
    """compute inverse without division by det; ...x3x3 input, or array of matrices assumed"""
    AI = np.empty_like(A)
    for i in range(3):
        AI[..., i, :] = np.cross(A[..., i-2, :], A[..., i-1, :])
    return AI

def inverse_transpose(A):
    """efficiently compute the inverse-transpose for a stack of 3x3 matrices"""
    I = adjoint(A)
    det = dot(I, A).mean(axis=-1)
    return I / det[..., None, None]

def inverse(A):
    """inverse of a stack of 3x3 matrices"""
    return np.swapaxes(inverse_transpose(A), -1, -2)

def dot(A, B):
    """dot arrays of vecs; contract over last indices"""
    return np.einsum('...i,...i->...', A, B)

A = np.random.rand(2, 2, 3, 3)
I = inverse(A)
print(np.einsum('...ij,...jk', A, I))
for the transpose:
testing a bit in ipython showed:
In [1]: import numpy
In [2]: x = numpy.ones((5,6,3,4))
In [3]: numpy.transpose(x,(0,1,3,2)).shape
Out[3]: (5, 6, 4, 3)
so you can just do
Atrans = numpy.transpose(A,(0,1,3,2))
to transpose the last two dimensions (while leaving dimensions 0 and 1 the same).
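Equivalently, since only the last two axes swap, np.swapaxes should give the same result regardless of how many leading stack dimensions there are:
Atrans = np.swapaxes(A, -2, -1)  # same as np.transpose(A, (0, 1, 3, 2)) for 4-D A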
for the inversion:
the last example of http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.inv.html#numpy.linalg.inv
Inverses of several matrices can be computed at once:
>>> from numpy.linalg import inv
>>> a = np.array([[[1., 2.], [3., 4.]], [[1, 3], [3, 5]]])
>>> inv(a)
array([[[-2.  ,  1.  ],
        [ 1.5 , -0.5 ]],

       [[-1.25,  0.75],
        [ 0.75, -0.25]]])
So I guess in your case, the inversion can be done with just
Ainv = inv(A)
and it will know that the last two dimensions are the ones it is supposed to invert over, and that the first dimensions are just how you stacked your data. This should be much faster.
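If you want to sanity-check the vectorized inversion, a quick sketch using einsum to batch-multiply each 3x3 block by its computed inverse:
import numpy as np
from numpy.linalg import inv

A = np.random.rand(4, 4, 3, 3)  # small stack just for the check
Ainv = inv(A)

# Batched matrix product; every 3x3 block should come out as the identity.
prod = np.einsum('...ij,...jk->...ik', A, Ainv)
print(np.allclose(prod, np.eye(3)))  # True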
speed difference
for the transpose: your method needs 3.77557015419 sec, and mine needs 2.86102294922e-06 sec (a speedup of over a million times; numpy.transpose just returns a view without copying any data)
for the inversion: I guess my numpy version is not high enough to try that numpy.linalg.inv trick with (n, n, 3, 3) shape, to see the speedup there (my version is 1.6.2, and the docs I based my solution on are for 1.8, but it should work on 1.8; can someone else test that?)
NumPy arrays have the .T property, which is a shortcut for transpose.
For inversions, you use np.linalg.inv(A).
As posted by wim, A.I also works on matrix objects, e.g.
print(A.I)
For a numpy matrix object, use matrix.getI, e.g.
A = numpy.matrix('1 3; 5 6')
print(A.getI())