EDIT
I realized that I did not check my MWE very well and, as a result, asked the wrong question. The main problem arises when the numpy array is passed in as a 2d array instead of 1d (or even when a python list is passed in nested instead of flat). So if we have
x = np.array([[1], [2], [3]])
then indexing it returns length-1 arrays rather than scalars (using .item() does not). The same thing also applies to nested standard python lists.
Sorry about the confusion.
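For illustration, here is a minimal sketch of the difference (assuming numpy is imported as np): indexing the column vector returns length-1 arrays, while .item() returns plain Python scalars.

import numpy as np

x = np.array([[1], [2], [3]])  # shape (3, 1): a column vector, not a 1d array
print(x[2])       # array([3]) -- indexing gives back a length-1 array
print(x.item(2))  # 3          -- .item() gives back a plain scalar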
Original
I am trying to form a new numpy array from something that may be a numpy array or may be a standard python list.
For example:
import numpy as np
x = [2, 3, 1]
y = np.array([[0, -x[2], x[1]], [x[2], 0, -x[0]], [-x[1], x[0], 0]])
Now I would like to form a function such that I can make y easily.
def skew(vector):
    """
    This function returns a numpy array with the skew symmetric cross product matrix for vector.
    The skew symmetric cross product matrix is defined such that
    np.cross(a, b) = np.dot(skew(a), b)

    :param vector: An array like vector to create the skew symmetric cross product matrix for
    :return: A numpy array of the skew symmetric cross product matrix
    """
    return np.array([[0, -vector[2], vector[1]],
                     [vector[2], 0, -vector[0]],
                     [-vector[1], vector[0], 0]])
This works great and I can now write (assuming the above function is included)
import numpy as np
x = [2, 3, 1]
y = skew(x)
However, I would also like to be able to call skew on existing 1d or 2d numpy arrays. For instance
import numpy as np
x = np.array([2, 3, 1])
y = skew(x)
Unfortunately, doing this returns a numpy array where the elements are also numpy arrays, not python floats as I would like them to be.
Is there an easy way to form a new numpy array like I have done from something that is either a python list or a numpy array and have the result be just a standard numpy array with floats in each element?
Now obviously one solution is to check to see if the input is a numpy array or not:
def skew(vector):
    """
    This function returns a numpy array with the skew symmetric cross product matrix for vector.
    The skew symmetric cross product matrix is defined such that
    np.cross(a, b) = np.dot(skew(a), b)

    :param vector: An array like vector to create the skew symmetric cross product matrix for
    :return: A numpy array of the skew symmetric cross product matrix
    """
    if isinstance(vector, np.ndarray):
        return np.array([[0, -vector.item(2), vector.item(1)],
                         [vector.item(2), 0, -vector.item(0)],
                         [-vector.item(1), vector.item(0), 0]])
    else:
        return np.array([[0, -vector[2], vector[1]],
                         [vector[2], 0, -vector[0]],
                         [-vector[1], vector[0], 0]])
However, it gets very tedious having to write these instance checks all over the place.
Another solution would be to cast everything to an array first and then just use the array call
def skew(vector):
    """
    This function returns a numpy array with the skew symmetric cross product matrix for vector.
    The skew symmetric cross product matrix is defined such that
    np.cross(a, b) = np.dot(skew(a), b)

    :param vector: An array like vector to create the skew symmetric cross product matrix for
    :return: A numpy array of the skew symmetric cross product matrix
    """
    vector = np.array(vector)
    return np.array([[0, -vector.item(2), vector.item(1)],
                     [vector.item(2), 0, -vector.item(0)],
                     [-vector.item(1), vector.item(0), 0]])
but I feel like this is inefficient as it requires creating a new copy of vector (in this case not a big deal since vector is small but this is just a simple example).
My question is, is there a different way to do this outside of what I've discussed or am I stuck using one of these methods?
Arrays are iterable and can be indexed just like lists. You can write your skew function as:
def skew(x):
    return np.array([[0, -x[2], x[1]],
                     [x[2], 0, -x[0]],
                     [-x[1], x[0], 0]])
x = [1,2,3]
y = np.array([1,2,3])
>>> skew(y)
array([[ 0, -3, 2],
[ 3, 0, -1],
[-2, 1, 0]])
>>> skew(x)
array([[ 0, -3, 2],
[ 3, 0, -1],
[-2, 1, 0]])
In any case, your methods ended up with the first-dimension elements being numpy arrays containing floats; you would still need a second-dimension access to get the floats out.
Regarding what you told me in the comments, you may add an if condition for 2d arrays:
def skew(x):
    if isinstance(x, np.ndarray) and len(x.shape) >= 2:
        return np.array([[0, -x[2][0], x[1][0]],
                         [x[2][0], 0, -x[0][0]],
                         [-x[1][0], x[0][0], 0]])
    else:
        return np.array([[0, -x[2], x[1]],
                         [x[2], 0, -x[0]],
                         [-x[1], x[0], 0]])
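A variant that avoids the branch entirely is to flatten the input first. This is only a sketch (not from the answer above), and it assumes a 2d input is a single column or row vector:

def skew_flat(x):
    # works for python lists, 1d arrays, and (3, 1) or (1, 3) arrays
    x = np.asarray(x).ravel()
    return np.array([[0, -x[2], x[1]],
                     [x[2], 0, -x[0]],
                     [-x[1], x[0], 0]])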
You can implement the last idea efficiently using numpy.asarray():
vector = np.asarray(vector)
Then, if vector is already a NumPy array, no copying occurs.
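Applied to the skew function from the question, that might look like the sketch below; plain indexing then works the same for lists and 1d arrays, and no copy is made when the input is already an ndarray:

def skew(vector):
    vector = np.asarray(vector)  # no copy if vector is already a numpy array
    return np.array([[0, -vector[2], vector[1]],
                     [vector[2], 0, -vector[0]],
                     [-vector[1], vector[0], 0]])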
You can keep the first version of your function and convert the numpy array to list:
def skew(vector):
    if isinstance(vector, np.ndarray):
        vector = vector.tolist()
    return np.array([[0, -vector[2], vector[1]],
                     [vector[2], 0, -vector[0]],
                     [-vector[1], vector[0], 0]])
In [58]: skew([2, 3, 1])
Out[58]:
array([[ 0, -1, 3],
[ 1, 0, -2],
[-3, 2, 0]])
In [59]: skew(np.array([2, 3, 1]))
Out[59]:
array([[ 0, -1, 3],
[ 1, 0, -2],
[-3, 2, 0]])
This is not an optimal solution but is a very easy one.
You can just convert the vector into a list by default.
def skew(vector):
    vector = list(vector)
    return np.array([[0, -vector[2], vector[1]],
                     [vector[2], 0, -vector[0]],
                     [-vector[1], vector[0], 0]])
Related
Take in two 3-dimensional vectors, each represented as an array, and tell whether they are linearly independent. I tried to use np.linalg.solve() to get the solution of x, and tried to find whether x is trivial or nontrivial. But it shows 'LinAlgError: Last 2 dimensions of the array must be square'. Can anyone help me figure that out?
from sympy import *
import numpy as np
from scipy import linalg
from numpy import linalg
v1 = np.array([0, 5, 0])
v2 = np.array([0, -10, 0])
a = np.array([v1,v2])
b = np.zeros(3)
x = np.linalg.solve(a, b)
As your final matrix is rectangular, a simple eigenvalue approach will not work. You can use the sympy library instead:
import sympy
import numpy as np
matrix = np.array([
[0, 5, 0],
[0, -10, 0]
])
_, indexes = sympy.Matrix(matrix).T.rref() # T is for transpose
print(indexes)
This will print the indexes of linearly independent rows. To further print them from the matrix, use
print(matrix[indexes,:])
To answer your specific question, whether two vectors are linearly dependent or not: you can simply use an if statement afterwards if it is always two vectors you are checking.
if len(indexes) == 2:
    print("linearly independent")
else:
    print("linearly dependent")
If one eigenvalue of the matrix is zero, its corresponding eigenvector is linearly dependent.
So the following code would work for a simple case:
import numpy as np

matrix = np.array([[0, 1, 0, 0], [0, 0, 1, 0], [0, 1, 1, 0], [1, 0, 0, 1]])
(lambdas, V) = np.linalg.eig(matrix.T)
print(matrix[lambdas == 0, :])
Output: [[0 1 1 0]]
Getting the inverse of a diagonal matrix is very simple and does not require complex methods. Does scipy.linalg.inv check whether the matrix is diagonal before it applies more complex methods or do I need to check this myself?
As you can see in the GitHub source of scipy.linalg.inv, the function inv first calls
getrf, getri, getri_lwork = get_lapack_funcs(('getrf', 'getri','getri_lwork'),
Then the getrf function does its job to compute the LU decomposition, and so on. Now we have to investigate how getrf computes the LU decomposition, because if it checks whether the matrix is diagonal before processing it, there is no need to check it yourself.
Function getrf is obtained by calling _get_funcs but I can't go further from there (_get_funcs is called with the following arguments _get_funcs(names, arrays, dtype, "LAPACK", _flapack, _clapack, "flapack", "clapack", _lapack_alias)).
I suggest running an experiment with a large diagonal matrix, comparing the time it takes linalg to produce the output against an inversion done by hand.
Update (by question author):
import numpy as np
from scipy.linalg import inv
a = np.diag(np.random.random(19999))
b = a.copy()
np.fill_diagonal(a, 1/a.diagonal())
c = inv(b)
does not even require a time measuring tool: it is very obvious that inv is much slower... (that is surprisingly disappointing).
Please check scipy.linalg.inv. If you put scipy.linalg.inv in a try/except, it raises LinAlgError when the matrix a is singular. The determinant of a singular matrix is zero.
try:
    # your code that will (maybe) throw, e.g.:
    inverse = scipy.linalg.inv(your_matrix)
except np.linalg.LinAlgError as err:
    # It shows your matrix is singular:
    # its determinant is equal to zero and
    # the matrix does not have an inverse.
    # You can conclude whether the matrix is diagonal or not from here.
    pass
If the determinant of a matrix is equal to zero: the matrix is less than full rank, the matrix is singular, and the matrix does not have an inverse.
To check manually:
def is_diagonal(matrix):
    # create a dummy matrix of ones
    dummy_matrix = np.ones(matrix.shape, dtype=np.uint8)
    # fill the diagonal of the dummy matrix with 0
    np.fill_diagonal(dummy_matrix, 0)
    # the matrix is diagonal if masking out the diagonal leaves only zeros
    return np.count_nonzero(np.multiply(dummy_matrix, matrix)) == 0

diagonal_matrix = np.array([[3, 0, 0],
                            [0, 7, 0],
                            [0, 0, 4]])
print(is_diagonal(diagonal_matrix))
>>> True

random_matrix = np.array([[3, 8, 0],
                          [1, 7, 8],
                          [5, 0, 4]])
print(is_diagonal(random_matrix))
>>> False
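A shorter check in the same spirit (a sketch, assuming a square 2d array) compares the matrix against the diagonal matrix rebuilt from its own diagonal:

def is_diagonal_alt(matrix):
    # keep only the diagonal and check that nothing else is non-zero
    return np.count_nonzero(matrix - np.diag(np.diagonal(matrix))) == 0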
scipy.sparse.dia_matrix.diagonal returns the k-th diagonal of the matrix.
from scipy.sparse import csr_matrix
A = csr_matrix([[1, 2, 0], [0, 0, 3], [4, 0, 5]])
A.diagonal()
array([1, 0, 5])
A.diagonal(k=1)
array([2, 3])
Also, scipy.linalg.block_diag creates a block diagonal matrix from the input arrays; if the inputs are not square, it cannot create a diagonal matrix.
Also note that in Jupyter you can time a function with %timeit your_function_name().
I have not found a solution to this problem after searching the site. It's quite simple: I would like to update an already existing coo sparse matrix. So let's say I have created a coo matrix:
from scipy.sparse import coo_matrix
import numpy as np
row = np.array([0, 3, 1, 0])
col = np.array([0, 3, 1, 2])
data = np.array([4, 5, 7, 9])
a=coo_matrix((data, (row, col)), shape=(4, 4)).toarray()
array([[4, 0, 9, 0],
[0, 7, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 5]])
Fine, but what if I just want an empty sparse array, initialize it with only the shape, and then update the values many times? The only way I have succeeded is to add a new coo matrix to my old one:
a=coo_matrix((4, 4), dtype=np.int8)
a=a+coo_matrix((data, (row, col)), shape=(4, 4))
a.toarray()
array([[4, 0, 9, 0],
[0, 7, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 5]])
And I would like to update this sparse array many times. But this takes quite a while since I am calling the coo_matrix constructor for each update. There has to be a better way, but I feel like the documentation is a little light (at least what I have read) or that I am just not seeing it.
Thanks very much
When you make a coo matrix this way, it uses your input arrays as the attributes of the matrix (provided they are the correct type):
In [923]: row = np.array([0, 3, 1, 0])
...: col = np.array([0, 3, 1, 2])
...: data = np.array([4, 5, 7, 9])
...: A=sparse.coo_matrix((data, (row, col)), shape=(4, 4))
In [924]: A
Out[924]:
<4x4 sparse matrix of type '<class 'numpy.int32'>'
with 4 stored elements in COOrdinate format>
In [925]: A.row
Out[925]: array([0, 3, 1, 0])
In [926]: id(A.row)
Out[926]: 3071951160
In [927]: id(row)
Out[927]: 3071951160
Similarly for A.col, and A.data.
For display and calculations the matrix will probably be converted to csr format, since many of those operations are not defined for a coo format.
And as you've no doubt seen coo format does not implement indexing, either for fetching or setting.
lil format is designed for easier incremental changes. Indexed changes to csr are also possible but it will issue a warning.
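For example, a minimal sketch of incremental updates using the lil format (the values here are just illustrative):

import numpy as np
from scipy.sparse import lil_matrix

a = lil_matrix((4, 4), dtype=np.int8)
a[0, 0] = 4      # indexed assignment is supported and cheap for lil
a[1, 1] = 7
a[0, 2] = 9
a[3, 3] = 5
print(a.tocsr().toarray())  # convert to csr/coo once the updates are done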
But coo is often used for building new matrices. For example in the sparse.bmat function, the coo attributes of the component matrices are combined into new arrays, which are then used to construct a new coo matrix.
A good way of building a coo incrementally is to keep concatenating new values to your row, col, and data arrays, and then periodically build a new coo from those.
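A sketch of that idea, accumulating the coordinate arrays in plain Python lists and rebuilding the coo matrix only when it is actually needed (the values are just illustrative):

import numpy as np
from scipy.sparse import coo_matrix

rows, cols, vals = [], [], []

# inside your update loop, just append the new entries
rows.extend([0, 3, 1, 0])
cols.extend([0, 3, 1, 2])
vals.extend([4, 5, 7, 9])

# build the matrix only when it is needed; duplicate (row, col)
# pairs are summed together when the matrix is used
A = coo_matrix((vals, (rows, cols)), shape=(4, 4))
print(A.toarray())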
On updating a dok format:
How to incrementally create an sparse matrix on python?
putting column into empty sparse matrix
creating a scipy.lil_matrix using a python generator efficiently
I first thought that the coo_matrix is immutable, because it doesn't support any indexing, nor indexed assignment. Turns out you can directly mutate the underlying structure of your empty sparse matrix:
from scipy.sparse import coo_matrix
import numpy as np
row = np.array([0, 3, 1, 0])
col = np.array([0, 3, 1, 2])
data = np.array([4, 5, 7, 9])
a = coo_matrix((4, 4), dtype=np.int8)
print(a.toarray())
a.row = row
a.col = col
a.data = data
print(a.toarray())
That being said, there might be other sparse formats that are more suitable for this approach.
I'm trying to generate a 2d numpy array with the help of generators:
x = [[f(a) for a in g(b)] for b in c]
And if I try to do something like this:
x = np.array([np.array([f(a) for a in g(b)]) for b in c])
I, as expected, get an np.array of np.arrays. But I do not want this; I want a plain 2d ndarray, so I can get, for example, a column like this:
y = x[:, 1]
So, I'm curious whether there is a way to generate it like that.
Of course it is possible by creating an ndarray of the required size and filling it with the required values, but I want a way to do it in one line of code.
This works:
a = [[1, 2, 3], [4, 5, 6]]
nd_a = np.array(a)
So this should work too:
nd_a = np.array([[x for x in y] for y in a])
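As long as the inner lists all have the same length, the result is a proper 2d array and column slicing works; a quick check with illustrative values:

import numpy as np

a = [[1, 2, 3], [4, 5, 6]]
nd_a = np.array([[x for x in y] for y in a])
print(nd_a.shape)   # (2, 3)
print(nd_a[:, 1])   # [2 5]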
To create a new array, it seems numpy.zeros is the way to go
import numpy as np
a = np.zeros(shape=(x, y))
You can also set a datatype to allocate it sensibly
>>> np.zeros(shape=(5,2), dtype=np.uint8)
array([[0, 0],
[0, 0],
[0, 0],
[0, 0],
[0, 0]], dtype=uint8)
>>> np.zeros(shape=(5,2), dtype="datetime64[ns]")
array([['1970-01-01T00:00:00.000000000', '1970-01-01T00:00:00.000000000'],
['1970-01-01T00:00:00.000000000', '1970-01-01T00:00:00.000000000'],
['1970-01-01T00:00:00.000000000', '1970-01-01T00:00:00.000000000'],
['1970-01-01T00:00:00.000000000', '1970-01-01T00:00:00.000000000'],
['1970-01-01T00:00:00.000000000', '1970-01-01T00:00:00.000000000']],
dtype='datetime64[ns]')
See also
How do I create an empty array/matrix in NumPy?
np.full(size, 0) vs. np.zeros(size) vs. np.empty()
It's very simple; do it like this:
import numpy as np
arr=np.arange(50)
arr_2d = arr.reshape(10, 5)  # Reshapes the 1d array into 2d, with 10 rows and 5 columns.
print(arr_2d)
It should be a standard question but I am not able to find the answer :(
I have a numpy ndarray with n samples (rows) and p variables (columns).
I would like to count how many times each variable is non-zero.
I would use a function like
sum([1 for i in column if i!=0])
but how can I apply this function to all the columns of my matrix?
from this post: How to apply numpy.linalg.norm to each row of a matrix?
If the operation supports axis, use the axis parameter; it's usually faster.
Otherwise, np.apply_along_axis could help.
There is numpy.count_nonzero for this.
So here is the simple answer:
import numpy as np
arr = np.eye(3)
np.apply_along_axis(np.count_nonzero, 0, arr)
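Recent NumPy versions (1.12 and later, if I recall correctly) also accept an axis argument on count_nonzero directly, which avoids apply_along_axis:

import numpy as np

arr = np.eye(3)
print(np.count_nonzero(arr, axis=0))  # per-column counts -> [1 1 1]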
You can use np.sum over a boolean array created from comparing your original array to zero, using the axis keyword argument to indicate whether you want to count over rows or columns. In your case:
>>> a = np.array([[0, 1, 1, 0],[1, 1, 0, 0]])
>>> a
array([[0, 1, 1, 0],
[1, 1, 0, 0]])
>>> np.sum(a != 0, axis=0)
array([1, 2, 1, 0])