Convert real-valued numpy array to binary array by sign - python

I am looking for a fast way to compute the following:
import numpy as np
a = np.array([-1,1,2,-4,5.5,-0.1,0])
Now I want to cast a to an array of binary values such that it has a 1 for every positive entry of a and a 0 otherwise. So the result I want is this:
array([ 0., 1., 1., 0., 1., 0., 0.])
One way to achieve this would be
np.array([x if x >=0 else 0 for x in np.sign(a)])
array([ 0., 1., 1., 0., 1., 0., 0.])
But I am hoping someone can point out a faster solution.
%timeit np.array([x if x >=0 else 0 for x in np.sign(a)])
100000 loops, best of 3: 11.4 us per loop
EDIT: timing the great solutions from the answers
%timeit (a > 0).astype(int)
100000 loops, best of 3: 3.47 us per loop

You can do this using a mask:
(a > 0).astype(int)
I do not know how to properly use timeit, but even
import numpy as np
from datetime import datetime
n = 50000000
a = np.random.rand(1, n).ravel()
startTime = datetime.now()
np.array([x if x >= 0 else 0 for x in np.sign(a)])
print(datetime.now() - startTime)
startTime = datetime.now()
(a > 0).astype(int)
print(datetime.now() - startTime)
shows a dramatic difference: 26 seconds vs 0.5 seconds.
P.S. Based on your comment
I'll be computing distances, like hamming
you do not really need an integer array; a > 0 will be enough. It will save you memory and make things slightly faster.
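For example (a sketch with a made-up second vector b, since the comment doesn't give one), a Hamming-style distance works directly on the boolean masks:
mb = np.array([1, -1, 2, -4, -5.5, 0.1, 0]) > 0   # hypothetical second vector
ma = a > 0
np.count_nonzero(ma ^ mb)   # XOR counts the positions where the signs differ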

You can check where a is greater than 0 and cast the boolean array to an integer array:
>>> (a > 0).astype(int)
array([0, 1, 1, 0, 1, 0, 0])
This should be significantly faster than the method proposed in the question (especially over larger arrays) because it avoids looping over the array at the Python level.
Faster still is to simply view the boolean array as the int8 dtype - this avoids creating a new array from the boolean one:
>>> (a > 0).view(np.int8)
array([0, 1, 1, 0, 1, 0, 0], dtype=int8)
Timings:
>>> b = np.random.rand(1000000)
>>> %timeit np.array([ x if x >=0 else 0 for x in np.sign(b)])
1 loops, best of 3: 420 ms per loop
>>> %timeit (b > 0).astype(int)
100 loops, best of 3: 4.63 ms per loop
>>> %timeit (b > 0).view(np.int8)
1000 loops, best of 3: 1.12 ms per loop
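The int8 view works because a NumPy boolean is stored as a single byte, which you can check:
>>> np.dtype(bool).itemsize
1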

Is there a better way to perform calculations using an array of indices on a numpy array?

I have an array of values arr with shape (N,) and an array of coordinates coords with shape (2,N). I want to represent this in an (M,M) array grid such that grid takes the value 0 at coordinates that are not in coords, and for the coordinates that are included it should store the sum of all values in arr that have that coordinate. So if M=3, arr = np.arange(4)+1, and coords = np.array([[0,0,1,2],[0,0,2,2]]) then grid should be:
array([[3., 0., 0.],
[0., 0., 3.],
[0., 0., 4.]])
The reason this is nontrivial is that I need to be able to repeat this step many times and the values in arr change each time, and so can the coordinates. Ideally I am looking for a vectorized solution. I suspect that I might be able to use np.where somehow but it's not immediately obvious how.
Timing the solutions
I have timed the solutions present at this time and it appears that the accumulator method is slightly faster than the sparse matrix method, with the second accumulation method being the slowest for the reasons explained in the comments:
%timeit for x in range(100): accumulate_arr(np.random.randint(100,size=(2,10000)),np.random.normal(0,1,10000))
%timeit for x in range(100): accumulate_arr_v2(np.random.randint(100,size=(2,10000)),np.random.normal(0,1,10000))
%timeit for x in range(100): sparse.coo_matrix((np.random.normal(0,1,10000),np.random.randint(100,size=(2,10000))),(100,100)).A
47.3 ms ± 1.79 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
103 ms ± 255 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
48.2 ms ± 36 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
One way would be to create a sparse.coo_matrix and convert that to dense:
from scipy import sparse
sparse.coo_matrix((arr,coords),(M,M)).A
# array([[3, 0, 0],
# [0, 0, 3],
# [0, 0, 4]])
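This works because duplicate coordinates in a COO matrix are summed when it is converted to dense, which is exactly the accumulation the question asks for. A minimal check:
sparse.coo_matrix(([1, 2], ([0, 0], [0, 0])), (1, 1)).A
# array([[3]]) - the two entries at (0, 0) merge to 3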
With np.bincount -
def accumulate_arr(coords, arr):
    # Get output array shape
    m, n = coords.max(1) + 1
    # Get linear indices to be used as IDs with bincount
    lidx = np.ravel_multi_index(coords, (m, n))
    # Or lidx = coords[0]*(coords[1].max()+1) + coords[1]
    # Accumulate arr with IDs from lidx
    return np.bincount(lidx, arr, minlength=m*n).reshape(m, n)
Sample run -
In [58]: arr
Out[58]: array([1, 2, 3, 4])
In [59]: coords
Out[59]:
array([[0, 0, 1, 2],
[0, 0, 2, 2]])
In [60]: accumulate_arr(coords, arr)
Out[60]:
array([[3., 0., 0.],
[0., 0., 3.],
[0., 0., 4.]])
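To see what bincount receives, the linear indices for the sample coords on the (3, 3) grid are (a quick check):
In [61]: np.ravel_multi_index(coords, (3, 3))
Out[61]: array([0, 0, 5, 8])
so bincount sums arr at IDs 0, 0, 5 and 8, and the reshape recovers the grid.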
Another option along similar lines, with np.add.at, might be easier to follow -
def accumulate_arr_v2(coords, arr):
    m, n = coords.max(1) + 1
    out = np.zeros((m, n), dtype=arr.dtype)
    np.add.at(out, tuple(coords), arr)
    return out
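np.add.at is unbuffered, which is why duplicate coordinates accumulate correctly here, and its element-wise loop is also the likely reason this version is the slowest in the timings above. Plain fancy-indexed += applies each duplicate only once (a quick demonstration):
out = np.zeros(3)
np.add.at(out, [0, 0, 1], 1.0)   # out is now [2., 1., 0.]
out2 = np.zeros(3)
out2[[0, 0, 1]] += 1.0           # out2 is [1., 1., 0.] - the duplicate is lost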

Add copies of a dimension in Python - numpy.ndarray

Sometimes it is useful to "clone" a row or column vector to a matrix. By cloning I mean converting a row vector such as
[1, 2, 3]
Into a matrix
[[1, 2, 3],
[1, 2, 3],
[1, 2, 3]]
or a column vector such as
[[1],
[2],
[3]]
into
[[1, 1, 1]
[2, 2, 2]
[3, 3, 3]]
In MATLAB or octave this is done pretty easily:
x = [1, 2, 3]
a = ones(3, 1) * x
a =
1 2 3
1 2 3
1 2 3
b = (x') * ones(1, 3)
b =
1 1 1
2 2 2
3 3 3
I tried to repeat this in numpy, but without success:
In [14]: x = array([1, 2, 3])
In [14]: ones((3, 1)) * x
Out[14]:
array([[ 1., 2., 3.],
[ 1., 2., 3.],
[ 1., 2., 3.]])
# so far so good
In [16]: x.transpose() * ones((1, 3))
Out[16]: array([[ 1., 2., 3.]])
# DAMN
# I end up with
In [17]: (ones((3, 1)) * x).transpose()
Out[17]:
array([[ 1., 1., 1.],
[ 2., 2., 2.],
[ 3., 3., 3.]])
Why wasn't the first method (In [16]) working? Is there a way to achieve this task in python in a more elegant way?
Use numpy.tile:
>>> tile(array([1,2,3]), (3, 1))
array([[1, 2, 3],
[1, 2, 3],
[1, 2, 3]])
or for repeating columns:
>>> tile(array([[1,2,3]]).transpose(), (1, 3))
array([[1, 1, 1],
[2, 2, 2],
[3, 3, 3]])
Here's an elegant, Pythonic way to do it:
>>> array([[1,2,3],]*3)
array([[1, 2, 3],
[1, 2, 3],
[1, 2, 3]])
>>> array([[1,2,3],]*3).transpose()
array([[1, 1, 1],
[2, 2, 2],
[3, 3, 3]])
The problem with [16] seems to be that the transpose has no effect on a 1-D array. You are probably wanting a matrix instead:
>>> x = array([1,2,3])
>>> x
array([1, 2, 3])
>>> x.transpose()
array([1, 2, 3])
>>> matrix([1,2,3])
matrix([[1, 2, 3]])
>>> matrix([1,2,3]).transpose()
matrix([[1],
[2],
[3]])
First note that with numpy's broadcasting operations it's usually not necessary to duplicate rows and columns. See this and this for descriptions.
But to do this, repeat and newaxis are probably the best way
In [12]: x = array([1,2,3])
In [13]: repeat(x[:,newaxis], 3, 1)
Out[13]:
array([[1, 1, 1],
[2, 2, 2],
[3, 3, 3]])
In [14]: repeat(x[newaxis,:], 3, 0)
Out[14]:
array([[1, 2, 3],
[1, 2, 3],
[1, 2, 3]])
This example is for a row vector, but applying this to a column vector is hopefully obvious. repeat seems to spell this well, but you can also do it via multiplication as in your example
In [15]: x = array([[1, 2, 3]]) # note the double brackets
In [16]: (ones((3,1))*x).transpose()
Out[16]:
array([[ 1., 1., 1.],
[ 2., 2., 2.],
[ 3., 3., 3.]])
Let:
>>> n = 1000
>>> x = np.arange(n)
>>> reps = 10000
Zero-cost allocations
A view does not take any additional memory. Thus, these declarations are instantaneous:
# New axis
x[np.newaxis, ...]
# Broadcast to specific shape
np.broadcast_to(x, (reps, n))
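Note that the broadcast view is read-only, since many output elements share the same memory:
>>> np.broadcast_to(x, (reps, n)).flags.writeable
False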
Forced allocation
If you want to force the contents to reside in memory:
>>> %timeit np.array(np.broadcast_to(x, (reps, n)))
10.2 ms ± 62.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
>>> %timeit np.repeat(x[np.newaxis, :], reps, axis=0)
9.88 ms ± 52.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
>>> %timeit np.tile(x, (reps, 1))
9.97 ms ± 77.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
All three methods are roughly the same speed.
Computation
>>> a = np.arange(reps * n).reshape(reps, n)
>>> x_tiled = np.tile(x, (reps, 1))
>>> %timeit np.broadcast_to(x, (reps, n)) * a
17.1 ms ± 284 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
>>> %timeit x[np.newaxis, :] * a
17.5 ms ± 300 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
>>> %timeit x_tiled * a
17.6 ms ± 240 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
All three methods are roughly the same speed.
Conclusion
If you want to replicate before a computation, consider using one of the "zero-cost allocation" methods. You won't suffer the performance penalty of "forced allocation".
I think using broadcasting in numpy is best, and faster.
I did a comparison as follows:
import numpy as np
b = np.random.randn(1000)
In [105]: %timeit c = np.tile(b[:, np.newaxis], (1,100))
1000 loops, best of 3: 354 µs per loop
In [106]: %timeit c = np.repeat(b[:, np.newaxis], 100, axis=1)
1000 loops, best of 3: 347 µs per loop
In [107]: %timeit c = np.array([b,]*100).transpose()
100 loops, best of 3: 5.56 ms per loop
tile and repeat are about 15 times faster than building the array from a Python list
One clean solution is to use NumPy's outer-product function with a vector of ones:
np.outer(np.ones(n), x)
gives n repeating rows. Switch the argument order to get repeating columns. To get an equal number of rows and columns you might do
np.outer(np.ones_like(x), x)
You can use
np.tile(x, 3).reshape((3, 3))
tile will generate the reps of the vector
and reshape will give it the shape you want
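With x = np.array([1, 2, 3]) this gives:
>>> np.tile(x, 3).reshape((3, 3))
array([[1, 2, 3],
       [1, 2, 3],
       [1, 2, 3]])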
Returning to the original question
In MATLAB or octave this is done pretty easily:
x = [1, 2, 3]
a = ones(3, 1) * x
...
In numpy it's pretty much the same (and easy to memorize too):
x = [1, 2, 3]
a = np.tile(x, (3, 1))
If you have a pandas dataframe and want to preserve the dtypes, even the categoricals, this is a fast way to do it:
import numpy as np
import pandas as pd
df = pd.DataFrame({1: [1, 2, 3], 2: [4, 5, 6]})
number_repeats = 50
new_df = df.reindex(np.tile(df.index, number_repeats))
Another solution
>>> x = np.array([1,2,3])
>>> y = x[None, :] * np.ones((3,))[:, None]
>>> y
array([[ 1., 2., 3.],
[ 1., 2., 3.],
[ 1., 2., 3.]])
Why? Sure, repeat and tile are the correct way to do this. But None indexing is a powerful tool that has many times let me quickly vectorize an operation (though it can quickly be very memory expensive!).
An example from my own code:
# trajectory is a sequence of xy coordinates [n_points, 2]
# xy_obstacles is a list of obstacles' xy coordinates [n_obstacles, 2]
# to compute dx, dy distance between every obstacle and every pose in the trajectory
deltas = trajectory[:, None, :2] - xy_obstacles[None, :, :2]
# we can easily convert x-y distance to a norm
distances = np.linalg.norm(deltas, axis=-1)
# distances is now [timesteps, obstacles]. Now we can for example find the closest obstacle at every point in the trajectory by doing
closest_obstacles = np.argmin(distances, axis=1)
# we could also find how safe the trajectory is, by finding the smallest distance over the entire trajectory
danger = np.min(distances)
To answer the actual question, now that nearly a dozen workarounds have been posted: x.transpose() reverses the order of the axes of x. One of the interesting side-effects is that if x.ndim == 1, the transpose does nothing.
This is especially confusing for people coming from MATLAB, where all arrays implicitly have at least two dimensions. The correct way to transpose a 1D numpy array is not x.transpose() or x.T, but rather
x[:, None]
or
x.reshape(-1, 1)
From here, you can multiply by a matrix of ones, or use any of the other suggested approaches, as long as you respect the (subtle) differences between MATLAB and numpy.
import numpy as np
x = np.array([1, 2, 3])
y = np.multiply(np.ones((len(x), len(x))), x).T
print(y)
yields:
[[ 1. 1. 1.]
[ 2. 2. 2.]
[ 3. 3. 3.]]

Translating arrays from MATLAB to numpy

I am defining an array of twos, with ones on either end. In MATLAB this can be achieved by
x = [1 2*ones(1,3) 1]
In Python, however, numpy gives something quite different:
import numpy
numpy.array([[1],2*numpy.ones(3),[1]])
What is the most efficient way to perform this MATLAB command in Python?
In [33]: import numpy as np
In [34]: np.r_[1, 2*np.ones(3), 1]
Out[34]: array([ 1., 2., 2., 2., 1.])
Alternatively, you could use hstack:
In [42]: np.hstack(([1], 2*np.ones(3), [1]))
Out[42]: array([ 1., 2., 2., 2., 1.])
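np.concatenate does the same job if you prefer to name the operation explicitly (for 1-D inputs, hstack is essentially a thin wrapper around it):
In [43]: np.concatenate(([1], 2*np.ones(3), [1]))
Out[43]: array([ 1.,  2.,  2.,  2.,  1.])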
In [45]: %timeit np.r_[1, 2*np.ones(300), 1]
10000 loops, best of 3: 27.5 us per loop
In [46]: %timeit np.hstack(([1], 2*np.ones(300), [1]))
10000 loops, best of 3: 26.4 us per loop
In [48]: %timeit np.append([1],np.append(2*np.ones(300)[:],[1]))
10000 loops, best of 3: 28.2 us per loop
Thanks to DSM for pointing out that pre-allocating the right-sized array from the very beginning can be much, much faster than building it with append, r_ or hstack from smaller arrays:
In [49]: %timeit a = 2*np.ones(300+2); a[0] = 1; a[-1] = 1
100000 loops, best of 3: 6.79 us per loop
In [50]: %timeit a = np.empty(300+2); a.fill(2); a[0] = 1; a[-1] = 1
1000000 loops, best of 3: 1.73 us per loop
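On newer NumPy you can also spell the pre-allocation in a single call with np.full (a sketch, not timed here):
a = np.full(300 + 2, 2.0)  # allocate and fill with 2 at once
a[0] = 1
a[-1] = 1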
Use numpy.ones instead of just ones, and concatenate the pieces instead of nesting them:
numpy.hstack(([1], 2*numpy.ones(3), [1]))

Good ways to "expand" a numpy ndarray?

Are there good ways to "expand" a numpy ndarray? Say I have an ndarray like this:
[[1 2]
[3 4]]
And I want each row to contain more elements, filled with zeros:
[[1 2 0 0 0]
[3 4 0 0 0]]
I know there must be some brute-force ways to do so (say, construct a bigger array of zeros, then copy the elements from the old smaller array), but I am wondering whether there are more pythonic ways to do so. I tried numpy.reshape but it didn't work:
import numpy as np
a = np.array([[1, 2], [3, 4]])
np.reshape(a, (2, 5))
Numpy complains that: ValueError: total size of new array must be unchanged
You can use numpy.pad, as follows:
>>> import numpy as np
>>> a=[[1,2],[3,4]]
>>> np.pad(a, ((0,0),(0,3)), mode='constant', constant_values=0)
array([[1, 2, 0, 0, 0],
[3, 4, 0, 0, 0]])
Here np.pad says, "Take the array a and add 0 rows above it, 0 rows below it, 0 columns to the left of it, and 3 columns to the right of it. Fill these columns with a constant specified by constant_values".
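The pad_width tuple generalizes per axis; for instance, one zero row above and below as well as the three columns on the right would be (a sketch):
np.pad(a, ((1, 1), (0, 3)), mode='constant', constant_values=0)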
There are the index tricks r_ and c_.
>>> import numpy as np
>>> a = np.array([[1, 2], [3, 4]])
>>> z = np.zeros((2, 3), dtype=a.dtype)
>>> np.c_[a, z]
array([[1, 2, 0, 0, 0],
[3, 4, 0, 0, 0]])
If this is performance critical code, you might prefer to use the equivalent np.concatenate rather than the index tricks.
>>> np.concatenate((a,z), axis=1)
array([[1, 2, 0, 0, 0],
[3, 4, 0, 0, 0]])
There are also np.resize and np.ndarray.resize, but they have some limitations (due to the way numpy lays out data in memory), so read their docstrings. You will probably find that simply concatenating is better.
By the way, when I've needed to do this I usually just do it the basic way you've already mentioned (create an array of zeros and assign the smaller array inside it); I don't see anything wrong with that!
Just to be clear: there's no "good" way to extend a NumPy array, as NumPy arrays are not expandable. Once the array is defined, the space it occupies in memory, a combination of the number of its elements and the size of each element, is fixed and cannot be changed. The only thing you can do is to create a new array and replace some of its elements by the elements of the original array.
A lot of functions are available for convenience (the np.concatenate function and its np.*stack shortcuts, np.column_stack, the index routines np.r_ and np.c_...), but they are just that: convenience functions. Some of them are optimized at the C level (np.concatenate and others, I think), some are not.
Note that there's nothing wrong at all with your initial suggestion of creating a large array 'by hand' (possibly filled with zeros) and filling it yourself with your initial array. It might be more readable than more complicated solutions.
A simple way:
# what you want to expand
x = np.ones((3, 3))
# expand to what shape
target = np.zeros((6, 6))
# do expand
target[:x.shape[0], :x.shape[1]] = x
# print target
array([[ 1., 1., 1., 0., 0., 0.],
[ 1., 1., 1., 0., 0., 0.],
[ 1., 1., 1., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0.]])
Functional way:
borrowed from https://stackoverflow.com/a/35751427/1637673, with a little modification.
def pad(array, reference_shape, offsets=None):
    """
    array: array to be padded
    reference_shape: tuple giving the shape of the array to create
    offsets: list of offsets (number of elements must be equal to the dimension of the array);
             will throw a ValueError if the offsets are too big for reference_shape to handle
    """
    if offsets is None:
        offsets = np.zeros(array.ndim, dtype=np.int32)
    # Create an array of zeros with the reference shape
    result = np.zeros(reference_shape, dtype=np.float32)
    # Create a list of slices from offset to offset + shape in each dimension
    insertHere = [slice(offsets[dim], offsets[dim] + array.shape[dim]) for dim in range(array.ndim)]
    # Insert the array in the result at the specified offsets (a tuple is needed for ndarray indexing)
    result[tuple(insertHere)] = array
    return result
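A quick usage check (the function hard-codes float32, so the output dtype is float32 regardless of the input):
>>> pad(np.array([[1, 2], [3, 4]]), (2, 5))
array([[1., 2., 0., 0., 0.],
       [3., 4., 0., 0., 0.]], dtype=float32)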
You should use np.column_stack or append
import numpy as np
p = np.array([ [1,2] , [3,4] ])
p = np.column_stack([p, [0, 0], [0, 0]])
p
Out[277]:
array([[1, 2, 0, 0],
[3, 4, 0, 0]])
Append seems to be faster though:
timeit np.column_stack([p, [0, 0], [0, 0]])
10000 loops, best of 3: 61.8 us per loop
timeit np.append(p, [[0,0],[0,0]],1)
10000 loops, best of 3: 48 us per loop
And a comparison with np.c_ and np.hstack [append still seems to be the fastest]:
In [295]: z = np.zeros((2, 2), dtype=p.dtype)
In [296]: timeit np.c_[p, z]
10000 loops, best of 3: 47.2 us per loop
In [297]: timeit np.append(p, z,1)
100000 loops, best of 3: 13.1 us per loop
In [305]: timeit np.hstack((p,z))
10000 loops, best of 3: 20.8 us per loop
and np.concatenate [which is even a bit faster than append]:
In [307]: timeit np.concatenate((p, z), axis=1)
100000 loops, best of 3: 11.6 us per loop
There are also similar methods like np.vstack, np.hstack, np.dstack. I like these over np.concatenate as they make it clear which dimension is being "expanded".
temp = np.array([[1, 2], [3, 4]])
np.hstack((temp, np.zeros((2,3))))
It's easy to remember because numpy's first axis is vertical, so vstack expands the first axis, and the second axis is horizontal, so hstack.

"Cloning" row or column vectors

Sometimes it is useful to "clone" a row or column vector to a matrix. By cloning I mean converting a row vector such as
[1, 2, 3]
Into a matrix
[[1, 2, 3],
[1, 2, 3],
[1, 2, 3]]
or a column vector such as
[[1],
[2],
[3]]
into
[[1, 1, 1]
[2, 2, 2]
[3, 3, 3]]
In MATLAB or octave this is done pretty easily:
x = [1, 2, 3]
a = ones(3, 1) * x
a =
1 2 3
1 2 3
1 2 3
b = (x') * ones(1, 3)
b =
1 1 1
2 2 2
3 3 3
I want to repeat this in numpy, but unsuccessfully
In [14]: x = array([1, 2, 3])
In [14]: ones((3, 1)) * x
Out[14]:
array([[ 1., 2., 3.],
[ 1., 2., 3.],
[ 1., 2., 3.]])
# so far so good
In [16]: x.transpose() * ones((1, 3))
Out[16]: array([[ 1., 2., 3.]])
# DAMN
# I end up with
In [17]: (ones((3, 1)) * x).transpose()
Out[17]:
array([[ 1., 1., 1.],
[ 2., 2., 2.],
[ 3., 3., 3.]])
Why wasn't the first method (In [16]) working? Is there a way to achieve this task in python in a more elegant way?
Use numpy.tile:
>>> tile(array([1,2,3]), (3, 1))
array([[1, 2, 3],
[1, 2, 3],
[1, 2, 3]])
or for repeating columns:
>>> tile(array([[1,2,3]]).transpose(), (1, 3))
array([[1, 1, 1],
[2, 2, 2],
[3, 3, 3]])
Here's an elegant, Pythonic way to do it:
>>> array([[1,2,3],]*3)
array([[1, 2, 3],
[1, 2, 3],
[1, 2, 3]])
>>> array([[1,2,3],]*3).transpose()
array([[1, 1, 1],
[2, 2, 2],
[3, 3, 3]])
the problem with [16] seems to be that the transpose has no effect for an array. you're probably wanting a matrix instead:
>>> x = array([1,2,3])
>>> x
array([1, 2, 3])
>>> x.transpose()
array([1, 2, 3])
>>> matrix([1,2,3])
matrix([[1, 2, 3]])
>>> matrix([1,2,3]).transpose()
matrix([[1],
[2],
[3]])
First note that with numpy's broadcasting operations it's usually not necessary to duplicate rows and columns. See this and this for descriptions.
But to do this, repeat and newaxis are probably the best way
In [12]: x = array([1,2,3])
In [13]: repeat(x[:,newaxis], 3, 1)
Out[13]:
array([[1, 1, 1],
[2, 2, 2],
[3, 3, 3]])
In [14]: repeat(x[newaxis,:], 3, 0)
Out[14]:
array([[1, 2, 3],
[1, 2, 3],
[1, 2, 3]])
This example is for a row vector, but applying this to a column vector is hopefully obvious. repeat seems to spell this well, but you can also do it via multiplication as in your example
In [15]: x = array([[1, 2, 3]]) # note the double brackets
In [16]: (ones((3,1))*x).transpose()
Out[16]:
array([[ 1., 1., 1.],
[ 2., 2., 2.],
[ 3., 3., 3.]])
Let:
>>> n = 1000
>>> x = np.arange(n)
>>> reps = 10000
Zero-cost allocations
A view does not take any additional memory. Thus, these declarations are instantaneous:
# New axis
x[np.newaxis, ...]
# Broadcast to specific shape
np.broadcast_to(x, (reps, n))
Forced allocation
If you want force the contents to reside in memory:
>>> %timeit np.array(np.broadcast_to(x, (reps, n)))
10.2 ms ± 62.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
>>> %timeit np.repeat(x[np.newaxis, :], reps, axis=0)
9.88 ms ± 52.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
>>> %timeit np.tile(x, (reps, 1))
9.97 ms ± 77.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
All three methods are roughly the same speed.
Computation
>>> a = np.arange(reps * n).reshape(reps, n)
>>> x_tiled = np.tile(x, (reps, 1))
>>> %timeit np.broadcast_to(x, (reps, n)) * a
17.1 ms ± 284 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
>>> %timeit x[np.newaxis, :] * a
17.5 ms ± 300 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
>>> %timeit x_tiled * a
17.6 ms ± 240 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
All three methods are roughly the same speed.
Conclusion
If you want to replicate before a computation, consider using one of the "zero-cost allocation" methods. You won't suffer the performance penalty of "forced allocation".
I think using the broadcast in numpy is the best, and faster
I did a compare as following
import numpy as np
b = np.random.randn(1000)
In [105]: %timeit c = np.tile(b[:, newaxis], (1,100))
1000 loops, best of 3: 354 µs per loop
In [106]: %timeit c = np.repeat(b[:, newaxis], 100, axis=1)
1000 loops, best of 3: 347 µs per loop
In [107]: %timeit c = np.array([b,]*100).transpose()
100 loops, best of 3: 5.56 ms per loop
about 15 times faster using broadcast
One clean solution is to use NumPy's outer-product function with a vector of ones:
np.outer(np.ones(n), x)
gives n repeating rows. Switch the argument order to get repeating columns. To get an equal number of rows and columns you might do
np.outer(np.ones_like(x), x)
You can use
np.tile(x,3).reshape((4,3))
tile will generate the reps of the vector
and reshape will give it the shape you want
Returning to the original question
In MATLAB or octave this is done pretty easily:
x = [1, 2, 3]
a = ones(3, 1) * x
...
In numpy it's pretty much the same (and easy to memorize too):
x = [1, 2, 3]
a = np.tile(x, (3, 1))
If you have a pandas dataframe and want to preserve the dtypes, even the categoricals, this is a fast way to do it:
import numpy as np
import pandas as pd
df = pd.DataFrame({1: [1, 2, 3], 2: [4, 5, 6]})
number_repeats = 50
new_df = df.reindex(np.tile(df.index, number_repeats))
Another solution
>> x = np.array([1,2,3])
>> y = x[None, :] * np.ones((3,))[:, None]
>> y
array([[ 1., 2., 3.],
[ 1., 2., 3.],
[ 1., 2., 3.]])
Why? Sure, repeat and tile are the correct way to do this. But None indexing is a powerful tool that has many times let me quickly vectorize an operation (though it can quickly be very memory expensive!).
An example from my own code:
# trajectory is a sequence of xy coordinates [n_points, 2]
# xy_obstacles is a list of obstacles' xy coordinates [n_obstacles, 2]
# to compute dx, dy distance between every obstacle and every pose in the trajectory
deltas = trajectory[:, None, :2] - xy_obstacles[None, :, :2]
# we can easily convert x-y distance to a norm
distances = np.linalg.norm(deltas, axis=-1)
# distances is now [timesteps, obstacles]. Now we can for example find the closest obstacle at every point in the trajectory by doing
closest_obstacles = np.argmin(distances, axis=1)
# we could also find how safe the trajectory is, by finding the smallest distance over the entire trajectory
danger = np.min(distances)
To answer the actual question, now that nearly a dozen approaches to working around a solution have been posted: x.transpose reverses the shape of x. One of the interesting side-effects is that if x.ndim == 1, the transpose does nothing.
This is especially confusing for people coming from MATLAB, where all arrays implicitly have at least two dimensions. The correct way to transpose a 1D numpy array is not x.transpose() or x.T, but rather
x[:, None]
or
x.reshape(-1, 1)
From here, you can multiply by a matrix of ones, or use any of the other suggested approaches, as long as you respect the (subtle) differences between MATLAB and numpy.
import numpy as np
x=np.array([1,2,3])
y=np.multiply(np.ones((len(x),len(x))),x).T
print(y)
yields:
[[ 1. 1. 1.]
[ 2. 2. 2.]
[ 3. 3. 3.]]

Categories

Resources