I have a 2 x 2 matrix as a numpy.array(), and an N x 2 array X containing N 2-dimensional vectors.
I want to multiply each vector in X by the 2 x 2 matrix. Below I use a for loop, but I am sure there is a faster way. Could someone show me what it is? I assume there is a numpy function for this.
# the matrix I want to multiply X by
matrix = np.array([[0, 1], [-1, 0]])
# initialize empty solution
Y = np.empty((N, 2))
# loop over each vector in X and create a new vector Y with the result
for i in range(0, N):
    Y[i] = np.dot(matrix, X[i])
For example, these arrays:
matrix = np.array([
    [0, 1],
    [0, -1]
])
X = np.array([
    [0, 0],
    [1, 1],
    [2, 2]
])
Should result in:
Y = np.array([
    [0, 0],
    [1, -1],
    [2, -2]
])
The one-liner is (matrix @ X.T).T
Just transpose X, so your vectors become columns. Then matrix @ X.T (or np.dot(matrix, X.T) if you prefer, but now that the @ notation exists, why not use it) is a matrix made of columns, each being matrix times X[i]. Just transpose the result back if you need Y to have the results as rows.
matrix = np.array([[0, 1], [-1, 0]])
X = np.array([[1,2],[3,4],[5,6]])
Y = (matrix @ X.T).T
Y is
array([[ 2, -1],
[ 4, -3],
[ 6, -5]])
As expected, I guess.
In detail:
X is
array([[1, 2],
[3, 4],
[5, 6]])
so X.T is
array([[1, 3, 5],
[2, 4, 6]])
So you can multiply your 2x2 matrix by this 2x3 matrix, and the result is a 2x3 matrix whose columns are matrix times the corresponding columns of X.T. matrix @ X.T is
array([[ 2, 4, 6],
[-1, -3, -5]])
And transposing back this gives the already given result.
So, tl;dr: the one-liner answer is (matrix @ X.T).T
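A small addition (equivalent reformulations, not part of the answer above): since transposing a product reverses the factors, you can get the same Y without the double transpose by multiplying X on the right by matrix.T, or by spelling the contraction out with np.einsum. A minimal sketch:
import numpy as np

matrix = np.array([[0, 1], [-1, 0]])
X = np.array([[1, 2], [3, 4], [5, 6]])

# (matrix @ X.T).T equals X @ matrix.T, because (A @ B).T == B.T @ A.T
Y1 = X @ matrix.T

# np.einsum makes the index contraction explicit: Y[n, i] = sum_j matrix[i, j] * X[n, j]
Y2 = np.einsum('ij,nj->ni', matrix, X)

print(np.array_equal(Y1, (matrix @ X.T).T))  # True
print(np.array_equal(Y2, Y1))                # True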
You are doing a matrix multiplication of a (2, 2) matrix with each row of X treated as a (2, 1) column.
You need to give all the operands compatible dimensions to compute this directly. Add a dimension with None and compute Y in one shot like this:
matrix = np.array([[3, 1], [-1, 0.1]])
N = 10
Y = np.empty((N, 2))
X = np.ones((N, 2))
X[0][0] = 2
X[5][1] = 3
# loop over each vector in X and create a new vector Y with the result
for i in range(0, N):
    Y[i] = np.dot(matrix, X[i])
Ydirect = matrix[None, :] @ X[:, :, None]
print(Y)
print(Ydirect[:,:,0])
You can vectorize Adrien's result and remove the for loop, which improves performance, especially as the arrays get bigger.
matrix = np.array([[3, 1], [-1, 0.1]])
N = 10
X = np.ones((N, 2))
X[0][0] = 2
X[5][1] = 3
# calculate the product of matrix with every row of X using the @ operator
Y = (matrix @ X.T).T
print(Y)
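A quick sanity check (my addition, not part of the answer above) that the vectorized result matches the original loop, assuming the same matrix, X, and N as in the snippet just above:
# compare the loop-based Y with the vectorized one; they should agree elementwise
Y_loop = np.empty((N, 2))
for i in range(N):
    Y_loop[i] = np.dot(matrix, X[i])

Y_vec = (matrix @ X.T).T
print(np.allclose(Y_loop, Y_vec))  # True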
I'd like to copy a numpy 2D array into a third dimension. For example, given the 2D numpy array:
import numpy as np
arr = np.array([[1, 2], [1, 2]])
# arr.shape = (2, 2)
convert it into a 3D matrix with N such copies in a new dimension. Acting on arr with N=3, the output should be:
new_arr = np.array([[[1, 2], [1, 2]],
                    [[1, 2], [1, 2]],
                    [[1, 2], [1, 2]]])
# new_arr.shape = (3, 2, 2)
Probably the cleanest way is to use np.repeat:
a = np.array([[1, 2], [1, 2]])
print(a.shape)
# (2, 2)
# indexing with np.newaxis inserts a new 3rd dimension, which we then repeat
# the array along (you can achieve the same effect by indexing with None, see below)
b = np.repeat(a[:, :, np.newaxis], 3, axis=2)
print(b.shape)
# (2, 2, 3)
print(b[:, :, 0])
# [[1 2]
# [1 2]]
print(b[:, :, 1])
# [[1 2]
# [1 2]]
print(b[:, :, 2])
# [[1 2]
# [1 2]]
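If you want the copies along a new first axis instead, so the result has shape (3, 2, 2) as in the question, the same np.repeat idea works with the new axis inserted in front (a small variation, not part of the original snippet):
# insert the new axis first, then repeat along axis 0
b0 = np.repeat(a[np.newaxis, :, :], 3, axis=0)
print(b0.shape)
# (3, 2, 2)
print(np.array_equal(b0[0], a))  # True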
Having said that, you can often avoid repeating your arrays altogether by using broadcasting. For example, let's say I wanted to add a (3,) vector:
c = np.array([1, 2, 3])
to a. I could copy the contents of a 3 times in the third dimension, then copy the contents of c twice in both the first and second dimensions, so that both of my arrays were (2, 2, 3), then compute their sum. However, it's much simpler and quicker to do this:
d = a[..., None] + c[None, None, :]
Here, a[..., None] has shape (2, 2, 1) and c[None, None, :] has shape (1, 1, 3)*. When I compute the sum, the result gets 'broadcast' out along the dimensions of size 1, giving me a result of shape (2, 2, 3):
print(d.shape)
# (2, 2, 3)
print(d[..., 0]) # a + c[0]
# [[2 3]
# [2 3]]
print(d[..., 1]) # a + c[1]
# [[3 4]
# [3 4]]
print(d[..., 2]) # a + c[2]
# [[4 5]
# [4 5]]
Broadcasting is a very powerful technique because it avoids the additional overhead involved in creating repeated copies of your input arrays in memory.
* Although I included them for clarity, the None indices into c aren't actually necessary - you could also do a[..., None] + c, i.e. broadcast a (2, 2, 1) array against a (3,) array. This is because if one of the arrays has fewer dimensions than the other then only the trailing dimensions of the two arrays need to be compatible. To give a more complicated example:
a = np.ones((6, 1, 4, 3, 1)) # 6 x 1 x 4 x 3 x 1
b = np.ones((5, 1, 3, 2)) # 5 x 1 x 3 x 2
result = a + b # 6 x 5 x 4 x 3 x 2
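If you want to check broadcast compatibility without actually building the arrays, np.broadcast_shapes (available in newer NumPy versions, 1.20+) computes the resulting shape directly; a minimal sketch for the example above:
import numpy as np

# the broadcast result of (6, 1, 4, 3, 1) and (5, 1, 3, 2), aligned from the right
print(np.broadcast_shapes((6, 1, 4, 3, 1), (5, 1, 3, 2)))
# (6, 5, 4, 3, 2)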
Another way is to use numpy.dstack. Supposing that you want to repeat the matrix a num_repeats times:
import numpy as np
b = np.dstack([a]*num_repeats)
The trick is to wrap the matrix a in a list with a single element, then use the * operator to duplicate the elements of this list num_repeats times.
For example, if:
a = np.array([[1, 2], [1, 2]])
num_repeats = 5
This repeats the array of [1 2; 1 2] 5 times in the third dimension. To verify (in IPython):
In [110]: import numpy as np
In [111]: num_repeats = 5
In [112]: a = np.array([[1, 2], [1, 2]])
In [113]: b = np.dstack([a]*num_repeats)
In [114]: b[:,:,0]
Out[114]:
array([[1, 2],
[1, 2]])
In [115]: b[:,:,1]
Out[115]:
array([[1, 2],
[1, 2]])
In [116]: b[:,:,2]
Out[116]:
array([[1, 2],
[1, 2]])
In [117]: b[:,:,3]
Out[117]:
array([[1, 2],
[1, 2]])
In [118]: b[:,:,4]
Out[118]:
array([[1, 2],
[1, 2]])
In [119]: b.shape
Out[119]: (2, 2, 5)
At the end we can see that the shape of the matrix is 2 x 2, with 5 slices in the third dimension.
Use a view and get free runtime! Extend generic n-dim arrays to n+1-dim
We can leverage numpy.broadcast_to, introduced in NumPy 1.10.0, to simply generate a 3D view into the 2D input array. The benefit is no extra memory overhead and virtually free runtime. This is essential in cases where the arrays are big and we are okay to work with views. Also, this works for generic n-dim cases.
I would use the word stack in place of copy, as readers might otherwise confuse it with the copying of arrays that actually creates memory copies.
Stack along first axis
If we want to stack the input arr along the first axis, the solution with np.broadcast_to to create a 3D view would be -
np.broadcast_to(arr,(3,)+arr.shape) # N = 3 here
Stack along third/last axis
To stack the input arr along the third axis, the solution to create a 3D view would be -
np.broadcast_to(arr[...,None],arr.shape+(3,))
If we actually need a memory copy, we can always append .copy() there. Hence, the solutions would be -
np.broadcast_to(arr,(3,)+arr.shape).copy()
np.broadcast_to(arr[...,None],arr.shape+(3,)).copy()
Here's how the stacking works for the two cases, shown with their shape information for a sample case -
# Create a sample input array of shape (4,5)
In [55]: arr = np.random.rand(4,5)
# Stack along first axis
In [56]: np.broadcast_to(arr,(3,)+arr.shape).shape
Out[56]: (3, 4, 5)
# Stack along third axis
In [57]: np.broadcast_to(arr[...,None],arr.shape+(3,)).shape
Out[57]: (4, 5, 3)
The same solutions work to extend an n-dim input to an n+1-dim view output along the first and last axes. Let's explore some higher-dim cases -
3D input case :
In [58]: arr = np.random.rand(4,5,6)
# Stack along first axis
In [59]: np.broadcast_to(arr,(3,)+arr.shape).shape
Out[59]: (3, 4, 5, 6)
# Stack along last axis
In [60]: np.broadcast_to(arr[...,None],arr.shape+(3,)).shape
Out[60]: (4, 5, 6, 3)
4D input case :
In [61]: arr = np.random.rand(4,5,6,7)
# Stack along first axis
In [62]: np.broadcast_to(arr,(3,)+arr.shape).shape
Out[62]: (3, 4, 5, 6, 7)
# Stack along last axis
In [63]: np.broadcast_to(arr[...,None],arr.shape+(3,)).shape
Out[63]: (4, 5, 6, 7, 3)
and so on.
Timings
Let's use a large 2D sample case, get the timings, and verify that the output is indeed a view.
# Sample input array
In [19]: arr = np.random.rand(1000,1000)
Let's prove that the proposed solution is indeed a view. We will use stacking along the first axis (results would be very similar for stacking along the third axis) -
In [22]: np.shares_memory(arr, np.broadcast_to(arr,(3,)+arr.shape))
Out[22]: True
Let's get the timings to show that it's virtually free -
In [20]: %timeit np.broadcast_to(arr,(3,)+arr.shape)
100000 loops, best of 3: 3.56 µs per loop
In [21]: %timeit np.broadcast_to(arr,(3000,)+arr.shape)
100000 loops, best of 3: 3.51 µs per loop
Being a view, increasing N from 3 to 3000 changes nothing in the timings, and both are negligible. Hence, efficient both on memory and performance!
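One caveat worth adding (my note, not from the answer above): the view returned by np.broadcast_to is read-only, because all N "copies" share the same underlying memory. If you need to modify the stacked result, append .copy() as mentioned earlier:
view = np.broadcast_to(arr, (3,) + arr.shape)
print(view.flags.writeable)   # False - writing to the shared view would be ambiguous
stacked = view.copy()         # real memory copy, safe to modify
stacked[0, 0, 0] = 42.0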
This can now also be achieved using np.tile as follows:
import numpy as np
a = np.array([[1,2],[1,2]])
b = np.tile(a, (3, 1, 1))
b.shape
(3, 2, 2)
b
array([[[1, 2],
[1, 2]],
[[1, 2],
[1, 2]],
[[1, 2],
[1, 2]]])
A = np.array([[1, 2], [3, 4]])
B = np.asarray([A]*N)
Edit, per @Mr.F, to preserve the dimension order:
B = B.T
Here's a broadcasting example that does exactly what was requested.
a = np.array([[1, 2], [1, 2]])
a = a[:, :, None]
b = np.array([1]*5)[None, None, :]
Then b*a is the desired result and (b*a)[:,:,0] produces array([[1, 2],[1, 2]]), which is the original a, as does (b*a)[:,:,1], etc.
Summarizing the solutions above:
a = np.arange(9).reshape(3,-1)
b = np.repeat(a[:, :, np.newaxis], 5, axis=2)
c = np.dstack([a]*5)
d = np.tile(a, [5,1,1])
e = np.array([a]*5)
f = np.repeat(a[np.newaxis, :, :], 5, axis=0) # np.repeat again
print('b='+ str(b.shape), b[:,:,-1].tolist())
print('c='+ str(c.shape),c[:,:,-1].tolist())
print('d='+ str(d.shape),d[-1,:,:].tolist())
print('e='+ str(e.shape),e[-1,:,:].tolist())
print('f='+ str(f.shape),f[-1,:,:].tolist())
b=(3, 3, 5) [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
c=(3, 3, 5) [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
d=(5, 3, 3) [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
e=(5, 3, 3) [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
f=(5, 3, 3) [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
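As a quick check (my addition) that the five variants really contain the same data, only with the repeat axis in a different place, one can compare them after moving the axes into a common order:
# b and c stack along the last axis, d/e/f along the first, so move the last
# axis of b and c to the front before comparing
print(np.array_equal(np.moveaxis(b, -1, 0), d))  # True
print(np.array_equal(np.moveaxis(c, -1, 0), e))  # True
print(np.array_equal(d, f))                      # True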
Good luck
How to convert (5,) numpy array to (5,1)?
And how to convert backwards from (5,1) to (5,)?
What is the purpose of a (5,) array, and why is one dimension omitted? I mean, why don't we always use the (5, 1) form?
Does this happen only with 1D and 2D arrays, or does it happen with 3D arrays too, e.g. can a (2, 3,) array exist?
UPDATE:
I managed to convert from (5,) to (5,1) by
a = np.reshape(a, (a.shape[0], 1))
but the suggested variant looks simpler:
a = a[:, None] or a = a[:, np.newaxis]
To convert back from (5,1) to (5,), np.ravel can be used:
a = np.ravel(a)
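A couple of other common ways to go back from (5, 1) to (5,), for completeness (a small sketch, not part of the original post):
import numpy as np

a = np.zeros((5, 1))
print(np.squeeze(a).shape)    # (5,)  - removes all length-1 dimensions
print(a.reshape(-1).shape)    # (5,)  - explicit reshape back to 1-D
print(a.flatten().shape)      # (5,)  - like ravel, but always returns a copy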
A numpy array with shape (5,) is a 1-dimensional array, while one with shape (5,1) is a 2-dimensional array. The difference is subtle, but it can alter some computations in a major way. One has to be especially careful, since these differences can be bulldozed over by operations which flatten all dimensions, like np.mean or np.sum.
In addition to @m-massias's answer, consider the following as an example:
In [2]: import numpy as np
In [3]: a = np.array([1, 2])
In [4]: b = np.array([[1, 2], [3, 4]])
In [6]: b * a
Out[6]:
array([[1, 4],
[3, 8]])
In [7]: b * a[:, None]  # Different result!
Out[7]:
array([[1, 2],
[6, 8]])
a has shape (2,) and it is broadcast over the second dimension. So the result you get is that each row (the first dimension) is multiplied by the vector:
In [10]: b * np.array([[1, 2], [1, 2]])
Out[10]:
array([[1, 4],
[3, 8]])
On the other hand, a[:,None] has the shape (2,1) and so the orientation of the vector is known to be a column. Hence, the result you get is from the following operation (where each column is multiplied by a):
In [11]: b * np.array([[1, 1], [2, 2]])
Out[11]:
array([[1, 2],
[6, 8]])
I hope that sheds some light on how the two arrays will behave differently.
You can add a new axis to an array a by doing a = a[:, None] or a = a[:, np.newaxis].
As far as "one dimension omitted" goes, I don't really understand your question, because there is no end to it: the array could also be (5, 1, 1), etc.
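To illustrate that last point, you can keep inserting length-1 axes as far as you like; a short sketch with a hypothetical 1-D array v:
v = np.arange(5)
print(v[:, None].shape)        # (5, 1)
print(v[:, None, None].shape)  # (5, 1, 1)
print(v[None, :, None].shape)  # (1, 5, 1)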
Use the reshape() function.
For example, open a Python terminal and type the following:
>>> import numpy as np
>>> a = np.random.random(5)
>>> a
array([0.85694461, 0.37774476, 0.56348081, 0.02972139, 0.23453958])
>>> a.shape
(5,)
>>> b = a.reshape(5, 1)
>>> b.shape
(5, 1)
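A small addition to this answer: reshape also accepts -1 for a dimension NumPy should infer, which avoids hard-coding the length, and reshaping back recovers the (5,) form (continuing the session above):
>>> b = a.reshape(-1, 1)      # -1 is inferred, same as a.reshape(5, 1)
>>> b.shape
(5, 1)
>>> b.reshape(-1).shape
(5,)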
How do I select a row from an NxM numpy array as an array of size 1xM:
> import numpy
> a = numpy.array([[1,2], [3,4], [5,6]])
> a
array([[1, 2],
[3, 4],
[5, 6]])
> a.shape
(3, 2)
> a[0]
array([1, 2])
> a[0].shape
(2,)
I'd like
a[0].shape == (1,2)
I'm doing this because a library I want to use seems to require this.
If you have something of shape (2,) and you want to add a new axis so that the shape is (1,2), the easiest way is to use np.newaxis:
a = np.array([1,2])
a.shape
#(2,)
b = a[np.newaxis, :]
print(b)
# [[1 2]]
b.shape
#(1,2)
If you have something of shape (N,2) and want to slice it with the same dimensionality to get a slice with shape (1,2), then you can use a range of length 1 as your slice instead of one index:
a = np.array([[1,2], [3,4], [5,6]])
a[0:1]
#array([[1, 2]])
a[0:1].shape
#(1,2)
Another trick is that some functions have a keepdims option, for example:
a
#array([[1, 2],
# [3, 4],
# [5, 6]])
a.sum(1)
#array([ 3, 7, 11])
a.sum(1, keepdims=True)
#array([[ 3],
# [ 7],
# [11]])
If you already have it, call .reshape():
>>> a = numpy.array([[1, 2], [3, 4]])
>>> b = a[0]
>>> c = b.reshape((1, -1))
>>> c
array([[1, 2]])
>>> c.shape
(1, 2)
You can also use a range to keep the array two-dimensional in the first place:
>>> b = a[0:1]
>>> b
array([[1, 2]])
>>> b.shape
(1, 2)
Note that all of these will have the same backing store.
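To double-check that note, np.shares_memory confirms that the b and c from the session above are views of a rather than copies (a quick check, assuming those same variables):
>>> numpy.shares_memory(a, b)
True
>>> numpy.shares_memory(a, c)
True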