My understanding is that 1-D arrays in numpy can be interpreted as either a column-oriented vector or a row-oriented vector. For instance, a 1-D array with shape (8,) can be viewed as a 2-D array of shape (1,8) or shape (8,1) depending on context.
The problem I'm having is that the functions I write to manipulate arrays tend to generalize well in the 2-D case to handle both vectors and matrices, but not so well in the 1-D case.
As such, my functions end up doing something like this:
if arr.ndim == 1:
    # Do it this way
    ...
else:
    # Do it that way
    ...
Or even this:
# Reshape the 1-D array to a 2-D array
if arr.ndim == 1:
    arr = arr.reshape((1, arr.shape[0]))
# ... Do it the 2-D way ...
That is, I find I can generalize code to handle 2-D cases (r,1), (1,c), (r,c), but not the 1-D cases without branching or reshaping.
It gets even uglier when the function operates on multiple arrays, since I have to check and convert each argument.
So my question is: am I missing some better idiom? Is the pattern I've described above common to numpy code?
Also, as a related matter of API design principles, if the caller passes a 1-D array to some function that returns a new array, and the return value is also a vector, is it common practice to reshape a 2-D vector (r,1) or (1,c) back to a 1-D array or simply document that the function returns a 2-D array regardless?
Thanks
I think in general NumPy functions that require an array of shape (r,c) make no special allowance for 1-D arrays. Instead, they expect the user to either pass an array of shape (r,c) exactly, or for the user to pass a 1-D array that broadcasts up to shape (r,c).
If you pass such a function a 1-D array of shape (c,) it will broadcast to shape (1,c), since broadcasting adds new axes on the left. It can also broadcast to shape (r,c) for an arbitrary r (depending on what other array it is being combined with).
On the other hand, if you have a 1-D array, x, of shape (r,) and you need it to broadcast up to shape (r,c), then NumPy expects the user to pass an array of shape (r,1) since broadcasting will not add the new axes on the right for you.
To do that, the user must pass x[:,np.newaxis] instead of just x.
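A quick sketch of both cases:
import numpy as np

A = np.ones((4, 3))      # shape (r, c) = (4, 3)
x = np.arange(3)         # shape (3,)
y = np.arange(4)         # shape (4,)

A + x                    # works: x broadcasts to (1, 3), then to (4, 3)
A + y[:, np.newaxis]     # works: y[:, np.newaxis] is (4, 1), a column
# A + y                  # ValueError: (4, 3) and (4,) are not compatible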
Regarding return values: I think it better to always return a 2-D array. If the user knows the output will be of shape (1,c), and wants a 1-D array, let her slice off the 1-D array x[0] herself.
By making the return value always the same shape, it will be easier to understand code that uses this function, since it is not always immediately apparent what the shape of the inputs are.
Also, broadcasting blurs the distinction between a 1-D array of shape (c,) and a 2-D array of shape (r,c). If your function returns a 1-D array when fed 1-D input, and a 2-D array when fed 2-D input, then your function makes the distinction strict instead of blurred. Stylistically, this reminds me of checking if isinstance(obj,type), which goes against the grain of duck-typing. Don't do it if you don't have to.
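In other words (f here stands in for your hypothetical vector-returning function):
out = f(x)    # always 2-D, e.g. shape (1, c)
vec = out[0]  # the caller slices off a 1-D array of shape (c,) if that is what she wants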
unutbu's explanation is good, but I disagree on the return dimension.
The pattern to use inside the function depends on the type of function.
Reduce operations with an axis argument can often be written so that the number of dimensions doesn't matter.
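For example, a sketch of a reduce-style function written against a keyword axis, so the number of input dimensions doesn't matter:
import numpy as np

def rms(arr, axis=-1):
    # root mean square along the given axis; works for 1d, 2d, nd input
    arr = np.asarray(arr, dtype=float)
    return np.sqrt((arr ** 2).mean(axis=axis))

rms(np.arange(4))                  # 0-d result from 1d input
rms(np.arange(12).reshape(3, 4))   # shape (3,) from 2d input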
NumPy also has atleast_2d (and atleast_1d) functions, which are commonly used when you need an explicit 2d array. In statistics, I sometimes use a function like atleast_2d_cols, which reshapes 1d (r,) to 2d (r,1), for code that expects 2d arrays or when a 1d input should be interpreted as a column vector for the linear algebra. (Reshaping is cheap, so this is not a problem; see the sketch below.)
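A minimal sketch of such a helper (atleast_2d_cols is not part of NumPy; the name and behavior follow the description above):
import numpy as np

def atleast_2d_cols(x):
    # view 1d input (r,) as a column (r, 1); pass higher-dim input through
    x = np.asarray(x)
    return x[:, None] if x.ndim == 1 else x

atleast_2d_cols([1, 2, 3]).shape   # (3, 1)
atleast_2d_cols(np.eye(2)).shape   # (2, 2), unchanged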
In a third case, I might have different code paths if the lower dimensional case can be done cheaper or simpler than the higher dimensional case. (example: if 2d requires several dot products.)
return dimension
I think not following the numpy convention with the return dimension can be very confusing to users for general functions. (topic specific functions can be different.)
For example, reduce operations lose one dimension.
For many other functions the output dimension matches the input dimension. I think a 1d input should have a 1d output and not an extra redundant dimension. Except for functions in linalg, I don't remember any functions that would return a redundant extra dimension. (The scalar versus 1-element array case is not always consistent.)
Stylistically this reminds me of an isinstance check:
Try going without it if you allow, for example, numpy matrices and masked arrays: you will get funny results that are not easy to debug. That said, for most numpy and scipy functions the user has to know whether the array type will work with them, since there are few isinstance checks and asarray might not always do the right thing.
As a user, I always know what kind of "array_like" I have (a list, a tuple, or which array subclass), especially when I use multiplication.
np.array(np.eye(3).tolist() * 3)  # list repetition: three stacked copies of eye(3), shape (9, 3)
np.matrix(range(3)) * np.eye(3)   # matrix product: matrix([[0., 1., 2.]]), shape (1, 3)
np.arange(3) * np.eye(3)          # elementwise broadcast: diag(0, 1, 2), shape (3, 3)
Another example: what does this do?
>>> x = np.array(tuple(range(3)), [('',int)]*3)
>>> x
array((0, 1, 2),
      dtype=[('f0', '<i4'), ('f1', '<i4'), ('f2', '<i4')])
>>> x * np.eye(3)
This question already has very good answers. Here I just want to add what I usually do (which somewhat summarizes the other responses) when I want to write functions that accept a wide range of inputs while the operations I perform on them require a 2d row or column vector.
If I know the input is always 1d (array or list):
a. if I need a row: x = np.asarray(x)[None,:]
b. if I need a column: x = np.asarray(x)[:,None]
If the input can be either 2d (array or list) with the right shape or 1d (which needs to be converted to 2d row/column):
a. if I need a row: x = np.atleast_2d(x)
b. if I need a column: x = np.atleast_2d(np.asarray(x).T).T or x = np.reshape(x, (len(x),-1)) (the latter seems faster)
This is a good use for decorators
import numpy as np

def atmost_2d(func):
    def wrapr(x):
        return func(np.atleast_2d(x)).squeeze()
    return wrapr
For example, this function will pick out the last column of its input.
@atmost_2d
def g(x):
    return x[:, -1]
It works for:
1d:
In [46]: b
Out[46]: array([0, 1, 2, 3, 4, 5])
In [47]: g(b)
Out[47]: array(5)
2d:
In [49]: A
Out[49]:
array([[0, 1],
       [2, 3],
       [4, 5]])
In [50]: g(A)
Out[50]: array([1, 3, 5])
0d:
In [51]: g(99)
Out[51]: array(99)
This answer builds on the previous two.
My goal is to to turn a row vector into a column vector and vice versa. The documentation for numpy.ndarray.transpose says:
For a 1-D array, this has no effect. (To change between column and row vectors, first cast the 1-D array into a matrix object.)
However, when I try this:
my_array = np.array([1, 2, 3])
my_array_T = np.transpose(np.matrix(my_array))
I do get the wanted result, albeit in matrix form (matrix([[1], [2], [3]])), but I also get this warning:
PendingDeprecationWarning: the matrix subclass is not the recommended way to represent matrices or deal with linear algebra (see https://docs.scipy.org/doc/numpy/user/numpy-for-matlab-users.html). Please adjust your code to use regular ndarray.
my_array_T = np.transpose(np.matrix(my_array))
How can I properly transpose an ndarray then?
A 1D array is its own transpose; this is unlike Matlab, where 1D arrays don't exist and everything is at least 2D.
What you want is to reshape it:
my_array.reshape(-1, 1)
Or:
my_array.reshape(1, -1)
Depending on what kind of vector you want (column or row vector).
The -1 tells reshape to infer that dimension from the total number of elements, and the 1 creates the second required dimension.
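A quick illustration of the difference:
import numpy as np

a = np.array([1, 2, 3])
a.T.shape               # (3,)   -- transpose is a no-op on a 1D array
a.reshape(-1, 1).shape  # (3, 1) -- column vector
a.reshape(1, -1).shape  # (1, 3) -- row vector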
If your array is my_array and you want to convert it to a column vector you can do:
my_array.reshape(-1, 1)
For a row vector you can use
my_array.reshape(1, -1)
Both of these can also be transposed and that would work as expected.
IIUC, use reshape
my_array.reshape(my_array.size, -1)
I am looking for a nice way to "clean up" the dimensions of two arrays which I would like to combine using broadcasting. In particular, I would like to broadcast a one-dimensional array up to the shape of a multidimensional array and then add the two arrays. My understanding of the broadcasting rules tells me that this should work fine if the last dimension of the multidimensional array matches that of the one-dimensional array. For example, arrays with shapes (3,) and (10,3) would add fine.
My problem is that, given how the array I have is built, the matching dimension happens to be the first dimension of the array, so the broadcasting rules are not met. For reference, my one-d array has shape (3,) and the multi-dimensional array has shape (3,10,10,50).
I could correct this by reshaping the multi-dimensional array so that the compatible dimension is the last dimension but I'd like to avoid this as I find the logic of reshaping tricky to follow when the different dimensions have specific meaning.
I can also add empty dimensions to the one-dimensional array until it has as many dimensions as the high-dimensional array, as in the code snippet below.
>>> import numpy as np
>>> a = np.array([[1, 2],
...               [3, 4],
...               [5, 6]])
>>> b = np.array([10, 20, 30])
>>> a + b[:, None]
array([[11, 12],
       [23, 24],
       [35, 36]])
This gives me my desired output; however, in my case my high-dimensional array has 4 different axes, so I would need to add multiple empty dimensions, which starts to feel inelegant. I can do something like
b = b[(slice(None),) + 3 * (np.newaxis,)]
and then proceed, but that doesn't seem great. More generally, one could imagine needing an arbitrary number of new axes on both sides of the original dimension and writing a helper function to generalize the above logic. Is there a nicer/clearer way to achieve this?
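One way to generalize this, sketched under the shapes from the question (np.expand_dims accepts a tuple of axes in NumPy 1.18+):
import numpy as np

b = np.arange(3)                 # shape (3,)
big = np.zeros((3, 10, 10, 50))  # the high-dimensional array

# add one trailing axis per remaining dimension of `big`:
b4 = np.expand_dims(b, axis=tuple(range(1, big.ndim)))  # shape (3, 1, 1, 1)
(big + b4).shape                 # (3, 10, 10, 50)

# equivalently, with reshape:
b4 = b.reshape(b.shape + (1,) * (big.ndim - 1))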
I have a 4x4x256 tensor and a 128x256 matrix. I need to multiply each 256-d depth-wise vector of the tensor by the matrix, such that I get a 4x4x128 tensor as a result.
Working in Numpy it's not clear to me how to do this. In their current shape it doesn't look like any variant of np.dot exists to do this. Manipulating the shapes to take advantage of broadcasting rules doesn't seem to provide any help. np.tensordot and np.einsum may be useful but looking at the documentation is going right over my head.
Is there an efficient way to do this?
You can use np.einsum to do this operation. An example with test values:
a = np.arange(4096.).reshape(4,4,256)
b = np.arange(32768.).reshape(128,256)
c = np.einsum('ijk,lk->ijl',a,b)
print(c.shape)
Here, the subscripts argument is: ijk,lk->ijl
From your requirement, i=4, j=4, k=256, l=128
The comma separates the subscripts of the two operands, and the shared subscript k (the last subscript of each operand, and absent from the output) is the one multiplied and summed over.
The subscripts after the -> state that the resulting tensor should have the shape (i,j,l). Depending on the operation you are performing, you might keep this output order or change it (for example to jil), but the rest of the subscripts remain the same.
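Since the question also mentions np.tensordot: the same contraction can be written by naming the axes to sum over, which may read more directly (a sketch, equivalent to the einsum call above):
c2 = np.tensordot(a, b, axes=([2], [1]))  # contract a's axis 2 with b's axis 1
print(c2.shape)                           # (4, 4, 128)
np.allclose(c, c2)                        # True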
I read something about NumPy and its matrix class. In the documentation the authors write that we can create only a 2-dimensional matrix. So I think they mean you can only write something like this:
input = numpy.matrix( ((1,2), (3,4)) )
Is this right?
But when I write code like this:
input = numpy.matrix( ((1,2), (3,4), (4,5)) )
it also works ...
Normally I would say ok, why not, I'm not interested in why it works. But I have to write an exam for my university, so I need to know whether I've understood it right, or whether they mean something else by a 2D matrix?
Thanks for your help
Both are 2D matrices. The first one is a 2x2 2D matrix and the second one is a 3x2 2D matrix. It is very similar to 2D arrays in programming. The second matrix would be defined as int matrix[3][2]; in C, for example.
A 3D matrix would then have a definition like int matrix3d[3][2][3];.
In numpy, if I try this with a 3d matrix:
>>> input = numpy.matrix((((2, 3), (4, 5)), ((6, 7), (8, 9))))
ValueError: matrix must be 2-dimensional
emre.'s answer is correct, but I would still like to address the use of numpy matrices, which might be the root of your confusion.
When in doubt about using numpy.matrix, go for ndarrays:
Matrix is actually an ndarray subclass: everything a matrix can do, an ndarray can do (the reverse is not exactly true).
Matrix overrides * and ** operators, and any operation between a Matrix and a ndarray will return a matrix, which is problematic for some algorithms.
More on the ndarray vs matrix debate on this SO post, and specifically this short answer
From the NumPy documentation:
matrix objects inherit from the ndarray and therefore, they have the same attributes and methods of ndarrays. There are six important differences of matrix objects, however, that may lead to unexpected results when you use matrices but expect them to act like arrays:
Matrix objects can be created using a string notation to allow Matlab-style syntax where spaces separate columns and semicolons (‘;’) separate rows.
Matrix objects are always two-dimensional. This has far-reaching implications, in that m.ravel() is still two-dimensional (with a 1 in the first dimension) and item selection returns two-dimensional objects so that sequence behavior is fundamentally different than arrays.
Matrix objects over-ride multiplication to be matrix-multiplication. Make sure you understand this for functions that you may want to receive matrices. Especially in light of the fact that asanyarray(m) returns a matrix when m is a matrix.
Matrix objects over-ride power to be matrix raised to a power. The same warning about using power inside a function that uses asanyarray(...) to get an array object holds for this fact.
The default __array_priority__ of matrix objects is 10.0, and therefore mixed operations with ndarrays always produce matrices.
Matrices have special attributes which make calculations easier. [...]
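A short session illustrating the two-dimensionality and the overridden * (standard matrix behavior, shown here for contrast with ndarrays):
import numpy as np

m = np.matrix([[1, 2], [3, 4]])
m.ravel()                      # matrix([[1, 2, 3, 4]]) -- still 2-D
m[0]                           # matrix([[1, 2]])       -- item selection stays 2-D
m * m                          # matrix([[ 7, 10], [15, 22]]) -- matrix product
np.asarray(m) * np.asarray(m)  # array([[ 1,  4], [ 9, 16]]) -- elementwise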
I may be misunderstanding how broadcasting works in Python, but I am still running into errors.
scipy offers a number of "special functions" which take in two arguments, in particular the eval_XX(n, x[,out]) functions.
See http://docs.scipy.org/doc/scipy/reference/special.html
My program uses many orthogonal polynomials, so I must evaluate these polynomials at distinct points. Let's take the concrete example scipy.special.eval_hermite(n, x, out=None).
I would like the x argument to be a matrix of shape (50, 50). Then, I would like to evaluate each entry of this matrix at a number of points. Let's define n to be a numpy array, narr = np.arange(10) (having imported numpy as np).
So, calling
scipy.special.eval_hermite(narr, matrix)
should return Hermite polynomials H_0(matrix), H_1(matrix), H_2(matrix), etc. Each H_X(matrix) is of shape (50,50), the shape of the original input matrix.
Then, I would like to sum these values. So, I call
matrix1 = np.sum( [scipy.special.eval_hermite(narr, matrix)], axis=0 )
but I get a broadcasting error!
ValueError: operands could not be broadcast together with shapes (10,) (50,50)
I can solve this with a for loop, i.e.
matrix2 = np.sum( [scipy.special.eval_hermite(i, matrix) for i in narr], axis=0 )
This gives me the correct answer, and the output matrix2.shape = (50,50). But using this for loop slows down my code, big time. Remember, we are working with entries of matrices.
Is there a way to do this without a for loop?
eval_hermite broadcasts n with x, then evaluates Hn(x) at each point. Thus, the output shape will be the result of broadcasting n with x. So, if you want to make this work, you'll have to make n and x have compatible shapes:
import scipy.special as ss
import numpy as np
matrix = np.ones([100,100]) # example
narr = np.arange(10) # example
ss.eval_hermite(narr[:,None,None], matrix).shape # => (10, 100, 100)
But note that this might actually be faster:
out = np.zeros_like(matrix)
for n in narr:
    out += ss.eval_hermite(n, matrix)
In testing, it appears to be about 5-10% faster than the np.sum(...) version above.
The documentation for these functions is skimpy, and a lot of the code is compiled, so this is just based on experimentation:
special.eval_hermite(n, x, out=None)
n apparently is a scalar or array of integers. x can be an array of floats.
special.eval_hermite(np.ones(5,int)[:,None],np.ones(6)) gives me a (5,6) result. This is the same shape as what I'd get from np.ones(5,int)[:,None] * np.ones(6).
The np.ones(5,int)[:,None] is a (5,1) array, np.ones(6) a (6,), which for this purpose is equivalent of (1,6). Both can be expanded to (5,6).
So as best I can tell, broadcasting rules in these special functions is the same as for operators like *.
Since special.eval_hermite(narr[:,None,None], x) produces a (10,50,50), you just apply sum to axis 0 of that to produce the (50,50):
special.eval_hermite(narr[:,None,None], x).sum(axis=0)
Like I wrote before, the same broadcasting (and summing) rules apply for this hermite as they do for a basic operation like *.