Translating arrays from MATLAB to numpy - python

I am defining an array of twos, with ones on either end. In MATLAB this can be achieved by
x = [1 2*ones(1,3) 1]
In Python, however, numpy gives something quite different:
import numpy
numpy.array([[1],2*numpy.ones(3),[1]])
What is the most efficient way to perform this MATLAB command in Python?

In [33]: import numpy as np
In [34]: np.r_[1, 2*np.ones(3), 1]
Out[34]: array([ 1., 2., 2., 2., 1.])
Alternatively, you could use hstack:
In [42]: np.hstack(([1], 2*np.ones(3), [1]))
Out[42]: array([ 1., 2., 2., 2., 1.])
In [45]: %timeit np.r_[1, 2*np.ones(300), 1]
10000 loops, best of 3: 27.5 us per loop
In [46]: %timeit np.hstack(([1], 2*np.ones(300), [1]))
10000 loops, best of 3: 26.4 us per loop
In [48]: %timeit np.append([1],np.append(2*np.ones(300)[:],[1]))
10000 loops, best of 3: 28.2 us per loop
Thanks to DSM for pointing out that pre-allocating the right-sized array from the very beginning can be much, much faster than appending, or than using r_ or hstack, on smaller arrays:
In [49]: %timeit a = 2*np.ones(300+2); a[0] = 1; a[-1] = 1
100000 loops, best of 3: 6.79 us per loop
In [50]: %timeit a = np.empty(300+2); a.fill(2); a[0] = 1; a[-1] = 1
1000000 loops, best of 3: 1.73 us per loop
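If you prefer a single call for the pre-allocation, np.full expresses the same idea (a sketch; I'd expect performance close to the empty/fill variant above, though it isn't timed here):
a = np.full(300 + 2, 2.0)   # pre-allocate, filled with 2s
a[0] = a[-1] = 1            # overwrite the endpoints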

Use numpy.concatenate rather than wrapping the pieces in a list; numpy.array([[1], 2*numpy.ones(3), [1]]) builds a ragged object array instead of joining the parts:
numpy.concatenate(([1], 2*numpy.ones(3), [1]))


Is there a better way to perform calculations using an array of indices on a numpy array? [duplicate]

I have an array of values arr with shape (N,) and an array of coordinates coords with shape (2,N). I want to represent this in an (M,M) array grid such that grid takes the value 0 at coordinates that are not in coords, and for the coordinates that are included it should store the sum of all values in arr that have that coordinate. So if M=3, arr = np.arange(4)+1, and coords = np.array([[0,0,1,2],[0,0,2,2]]), then grid should be:
array([[3., 0., 0.],
       [0., 0., 3.],
       [0., 0., 4.]])
The reason this is nontrivial is that I need to be able to repeat this step many times and the values in arr change each time, and so can the coordinates. Ideally I am looking for a vectorized solution. I suspect that I might be able to use np.where somehow but it's not immediately obvious how.
Timing the solutions
I have timed the solutions present at this time, and it appears that the accumulator method is slightly faster than the sparse-matrix method, with the second accumulation method being the slowest for the reasons explained in the comments:
%timeit for x in range(100): accumulate_arr(np.random.randint(100,size=(2,10000)),np.random.normal(0,1,10000))
%timeit for x in range(100): accumulate_arr_v2(np.random.randint(100,size=(2,10000)),np.random.normal(0,1,10000))
%timeit for x in range(100): sparse.coo_matrix((np.random.normal(0,1,10000),np.random.randint(100,size=(2,10000))),(100,100)).A
47.3 ms ± 1.79 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
103 ms ± 255 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
48.2 ms ± 36 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
One way would be to create a sparse.coo_matrix and convert that to dense; coo_matrix sums the values at duplicate coordinates during the conversion, which is exactly the accumulation you want:
from scipy import sparse
sparse.coo_matrix((arr,coords),(M,M)).A
# array([[3, 0, 0],
#        [0, 0, 3],
#        [0, 0, 4]])
With np.bincount -
def accumulate_arr(coords, arr):
    # Get output array shape
    m,n = coords.max(1)+1
    # Get linear indices to be used as IDs with bincount
    lidx = np.ravel_multi_index(coords, (m,n))
    # Or lidx = coords[0]*(coords[1].max()+1) + coords[1]
    # Accumulate arr with IDs from lidx
    return np.bincount(lidx,arr,minlength=m*n).reshape(m,n)
Sample run -
In [58]: arr
Out[58]: array([1, 2, 3, 4])
In [59]: coords
Out[59]:
array([[0, 0, 1, 2],
       [0, 0, 2, 2]])
In [60]: accumulate_arr(coords, arr)
Out[60]:
array([[3., 0., 0.],
       [0., 0., 3.],
       [0., 0., 4.]])
Another approach, with np.add.at, works along similar lines and might be easier to follow -
def accumulate_arr_v2(coords, arr):
    m,n = coords.max(1)+1
    out = np.zeros((m,n), dtype=arr.dtype)
    np.add.at(out, tuple(coords), arr)
    return out
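A quick check with the question's data; note that out inherits arr's integer dtype here, so the result is integer rather than float:
In [61]: accumulate_arr_v2(coords, arr)
Out[61]:
array([[3, 0, 0],
       [0, 0, 3],
       [0, 0, 4]])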

Split sorted array into list with sublists

I have a sorted array of float32 values, and I want to split this array into a list of lists containing only equal values, like this:
>>> split_sorted(array) # [1., 1., 1., 2., 2., 3.]
[[1., 1., 1.], [2., 2.], [3.]]
My current approach is this function:
def split_sorted(array):
    split = [[array[0]]]
    s_index = 0
    a_index = 1
    while a_index < len(array):
        while a_index < len(array) and array[a_index] == split[s_index][0]:
            split[s_index].append(array[a_index])
            a_index += 1
        else:
            if a_index < len(array):
                s_index += 1
                a_index += 1
                split.append([array[a_index]])
    return split
My question now is: is there a more Pythonic way to do this, maybe even with numpy? And is this the most performant way?
Thanks a lot!
Approach #1
With a as the array, we can use np.split -
np.split(a,np.flatnonzero(a[:-1] != a[1:])+1)
Sample run -
In [16]: a
Out[16]: array([1., 1., 1., 2., 2., 3.])
In [17]: np.split(a,np.flatnonzero(a[:-1] != a[1:])+1)
Out[17]: [array([1., 1., 1.]), array([2., 2.]), array([3.])]
Approach #2
Another, more performant way would be to get the splitting indices and then slice the array by zipping consecutive indices -
idx = np.flatnonzero(np.r_[True, a[:-1] != a[1:], True])
out = [a[i:j] for i,j in zip(idx[:-1],idx[1:])]
Approach #3
If you need a list of sublists as output, we could re-create them with list duplication -
mask = np.r_[True, a[:-1] != a[1:], True]
c = np.diff(np.flatnonzero(mask))
out = [[i]*j for i,j in zip(a[mask[:-1]],c)]
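For the sample input this produces [[1.0, 1.0, 1.0], [2.0, 2.0], [3.0]], i.e. plain Python lists rather than arrays, which is what the question asked for.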
Benchmarking
Timings for vectorized approaches on 1000000 elements with 10000 unique elements -
In [145]: np.random.seed(0)
...: a = np.sort(np.random.randint(1,10000,(1000000)))
In [146]: x = a
# Approach #1 from this post
In [147]: %timeit np.split(a,np.flatnonzero(a[:-1] != a[1:])+1)
100 loops, best of 3: 10.5 ms per loop
# Approach #2 from this post
In [148]: %%timeit
...: idx = np.flatnonzero(np.r_[True, a[:-1] != a[1:], True])
...: out = [a[i:j] for i,j in zip(idx[:-1],idx[1:])]
100 loops, best of 3: 5.18 ms per loop
# Approach #3 from this post
In [197]: %%timeit
...: mask = np.r_[True, a[:-1] != a[1:], True]
...: c = np.diff(np.flatnonzero(mask))
...: out = [[i]*j for i,j in zip(a[mask[:-1]],c)]
100 loops, best of 3: 11.1 ms per loop
# @RafaelC's soln
In [149]: %%timeit
...: v,c = np.unique(x, return_counts=True)
...: out = [[a]*b for (a,b) in zip(v,c)]
10 loops, best of 3: 25.6 ms per loop
You can use numpy.unique and zip
v,c = np.unique(x, return_counts=True)
[[a]*b for (a,b) in zip(v,c)]
Outputs
[[1.0, 1.0, 1.0], [2.0, 2.0], [3.0]]
Timings for a 6,000,000-element array
%timeit v,c = np.unique(x, return_counts=True); [[a]*b for (a,b) in zip(v,c)]
18.2 ms ± 236 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit np.split(x,np.flatnonzero(x[:-1] != x[1:])+1)
424 ms ± 11.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit [list(group) for value, group in itertools.groupby(x)]
180 ms ± 4.42 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
The function itertools.groupby has this exact behavior.
>>> from itertools import groupby
>>> [list(group) for value, group in groupby(array)]
[[1.0, 1.0, 1.0], [2.0, 2.0], [3.0]]
>>> from itertools import groupby
>>> a = [1., 1., 1., 2., 2., 3.]
>>> for k, g in groupby(a):
...     print(k, list(g))
...
1.0 [1.0, 1.0, 1.0]
2.0 [2.0, 2.0]
3.0 [3.0]
You may join the lists, if you like:
>>> result = []
>>> for k, g in groupby(a):
...     result.append(list(g))
...
>>> result
[[1.0, 1.0, 1.0], [2.0, 2.0], [3.0]]
I improved your code a bit; it is not Pythonic, but it does not use external libraries (and also your code did not work on the last element in the array):
def split_sorted(array):
    splitted = [[]]
    standard = array[0]
    li = 0  # inner lists index
    n = len(array)
    for i in range(n):
        if standard != array[i]:
            standard = array[i]
            splitted.append([])  # start a new empty inner list
            li += 1
        splitted[li].append(array[i])
    return splitted
# test
array = [1, 2, 2, 2, 3]
a = split_sorted(array)
print(a)  # [[1], [2, 2, 2], [3]]

python location of elements in one numpy array with location of equal elements in another array

I need not just the values, but also the locations of the elements in one numpy array that appear in a second numpy array, along with the locations in that second array.
Here's an example of the best I've been able to do:
>>> a=np.arange(0.,15.)
>>> a
array([  0.,   1.,   2.,   3.,   4.,   5.,   6.,   7.,   8.,   9.,  10.,
        11.,  12.,  13.,  14.])
>>> b=np.arange(4.,8.,.5)
>>> b
array([ 4. , 4.5, 5. , 5.5, 6. , 6.5, 7. , 7.5])
>>> [ (i,j) for (i,alem) in enumerate(a) for (j,blem) in enumerate(b) if alem==blem]
[(4, 0), (5, 2), (6, 4), (7, 6)]
Anybody have anything faster, numpy specific, or more "pythonic"?
Here is an O((n+k)log(n+k)) solution (the naive algorithm is O(nk)) with np.unique:
# pool both arrays and get the unique values plus inverse indices
uniq, inv = np.unique(np.r_[a, b], return_inverse=True)
# for each unique value, store its index in a (-1 if it is not in a)
mapping = -np.ones((len(uniq),), dtype=int)
mapping[inv[:len(a)]] = np.arange(len(a))
# look up the a-index of every element of b
bina = mapping[inv[len(a):]]
inds_in_b = np.where(bina != -1)[0]
elements, inds_in_a = b[inds_in_b], bina[inds_in_b]
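With the question's a and b this gives elements = array([4., 5., 6., 7.]), inds_in_a = array([4, 5, 6, 7]) and inds_in_b = array([0, 2, 4, 6]), matching the pairs [(4, 0), (5, 2), (6, 4), (7, 6)] from the question.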
or you could simply sort a, for O((n+k)log(k)):
inds = np.argsort(a)
aso = a[inds]                            # a in sorted order
bina = np.searchsorted(aso[:-1], b)      # candidate positions, clipped into range
inds_in_b = np.where(b == aso[bina])[0]  # keep only the exact matches
elements, inds_in_a = b[inds_in_b], inds[bina[inds_in_b]]
For a sorted array a, here's another approach with np.searchsorted, making use of its optional side argument set to 'left' and 'right' -
lidx = np.searchsorted(a,b,'left')
ridx = np.searchsorted(a,b,'right')
mask = lidx != ridx
out = lidx[mask], np.flatnonzero(mask)
# for zipped o/p : zip(lidx[mask], np.flatnonzero(mask))
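To see why this works, here are the intermediate values for the question's a and b; an exact match is precisely the case where the left and right insertion points differ:
lidx = np.searchsorted(a, b, 'left')   # array([4, 5, 5, 6, 6, 7, 7, 8])
ridx = np.searchsorted(a, b, 'right')  # array([5, 5, 6, 6, 7, 7, 8, 8])
mask = lidx != ridx                    # True exactly where b[j] occurs in a
lidx[mask], np.flatnonzero(mask)       # (array([4, 5, 6, 7]), array([0, 2, 4, 6]))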
Runtime test
Approaches -
def searchsorted_where(a,b): # @Paul Panzer's soln
    inds = np.argsort(a)
    aso = a[inds]
    bina = np.searchsorted(aso[:-1], b)
    inds_in_b = np.where(b == aso[bina])[0]
    return b[inds_in_b], inds_in_b

def in1d_masking(a,b): # @Psidom's soln
    logic = np.in1d(b, a)
    return b[logic], np.where(logic)[0]

def searchsorted_twice(a,b): # Proposed in this post
    lidx = np.searchsorted(a,b,'left')
    ridx = np.searchsorted(a,b,'right')
    mask = lidx != ridx
    return lidx[mask], np.flatnonzero(mask)
Timings -
Case #1 (using the sample data from the question, scaled up):
In [2]: a=np.arange(0.,15000.)
...: b=np.arange(4.,15000.,0.5)
...:
In [3]: %timeit searchsorted_where(a,b)
...: %timeit in1d_masking(a,b)
...: %timeit searchsorted_twice(a,b)
...:
1000 loops, best of 3: 721 µs per loop
1000 loops, best of 3: 1.76 ms per loop
1000 loops, best of 3: 1.28 ms per loop
Case #2 (same as case #1, but with comparatively fewer elements in b than in a):
In [4]: a=np.arange(0.,15000.)
...: b=np.arange(4.,15000.,5)
...:
In [5]: %timeit searchsorted_where(a,b)
...: %timeit in1d_masking(a,b)
...: %timeit searchsorted_twice(a,b)
...:
10000 loops, best of 3: 77.4 µs per loop
1000 loops, best of 3: 428 µs per loop
10000 loops, best of 3: 128 µs per loop
Case #3 (and far fewer elements in b):
In [6]: a=np.arange(0.,15000.)
...: b=np.arange(4.,15000.,10)
...:
In [7]: %timeit searchsorted_where(a,b)
...: %timeit in1d_masking(a,b)
...: %timeit searchsorted_twice(a,b)
...:
10000 loops, best of 3: 42.8 µs per loop
1000 loops, best of 3: 392 µs per loop
10000 loops, best of 3: 71.9 µs per loop
You can use numpy.in1d to find the elements of b that are also in a; logical indexing and numpy.where then give you the elements and the corresponding indices:
logic = np.in1d(b, a)
list(zip(b[logic], np.where(logic)[0]))
# [(4.0, 0), (5.0, 2), (6.0, 4), (7.0, 6)]
b[logic], np.where(logic)[0]
# (array([ 4., 5., 6., 7.]), array([0, 2, 4, 6]))

Add copies of a dimension in Python - numpy.ndarray [duplicate]

Sometimes it is useful to "clone" a row or column vector to a matrix. By cloning I mean converting a row vector such as
[1, 2, 3]
Into a matrix
[[1, 2, 3],
[1, 2, 3],
[1, 2, 3]]
or a column vector such as
[[1],
[2],
[3]]
into
[[1, 1, 1]
[2, 2, 2]
[3, 3, 3]]
In MATLAB or Octave this is done pretty easily:
x = [1, 2, 3]
a = ones(3, 1) * x
a =
1 2 3
1 2 3
1 2 3
b = (x') * ones(1, 3)
b =
1 1 1
2 2 2
3 3 3
I tried to repeat this in numpy, but without success:
In [14]: x = array([1, 2, 3])
In [15]: ones((3, 1)) * x
Out[15]:
array([[ 1.,  2.,  3.],
       [ 1.,  2.,  3.],
       [ 1.,  2.,  3.]])
# so far so good
In [16]: x.transpose() * ones((1, 3))
Out[16]: array([[ 1.,  2.,  3.]])
# DAMN
# I end up with
In [17]: (ones((3, 1)) * x).transpose()
Out[17]:
array([[ 1.,  1.,  1.],
       [ 2.,  2.,  2.],
       [ 3.,  3.,  3.]])
Why didn't the first method (In [16]) work? Is there a more elegant way to achieve this task in Python?
Use numpy.tile:
>>> tile(array([1,2,3]), (3, 1))
array([[1, 2, 3],
       [1, 2, 3],
       [1, 2, 3]])
or for repeating columns:
>>> tile(array([[1,2,3]]).transpose(), (1, 3))
array([[1, 1, 1],
       [2, 2, 2],
       [3, 3, 3]])
Here's an elegant, Pythonic way to do it:
>>> array([[1,2,3],]*3)
array([[1, 2, 3],
       [1, 2, 3],
       [1, 2, 3]])
>>> array([[1,2,3],]*3).transpose()
array([[1, 1, 1],
       [2, 2, 2],
       [3, 3, 3]])
The problem with In [16] seems to be that the transpose has no effect on a 1D array; you probably want a matrix instead:
>>> x = array([1,2,3])
>>> x
array([1, 2, 3])
>>> x.transpose()
array([1, 2, 3])
>>> matrix([1,2,3])
matrix([[1, 2, 3]])
>>> matrix([1,2,3]).transpose()
matrix([[1],
        [2],
        [3]])
First note that with numpy's broadcasting operations it's usually not necessary to duplicate rows and columns. See this and this for descriptions.
But to do this, repeat and newaxis are probably the best way
In [12]: x = array([1,2,3])
In [13]: repeat(x[:,newaxis], 3, 1)
Out[13]:
array([[1, 1, 1],
       [2, 2, 2],
       [3, 3, 3]])
In [14]: repeat(x[newaxis,:], 3, 0)
Out[14]:
array([[1, 2, 3],
       [1, 2, 3],
       [1, 2, 3]])
This example is for a row vector, but applying this to a column vector is hopefully obvious. repeat seems to spell this well, but you can also do it via multiplication as in your example
In [15]: x = array([[1, 2, 3]]) # note the double brackets
In [16]: (ones((3,1))*x).transpose()
Out[16]:
array([[ 1.,  1.,  1.],
       [ 2.,  2.,  2.],
       [ 3.,  3.,  3.]])
Let:
>>> n = 1000
>>> x = np.arange(n)
>>> reps = 10000
Zero-cost allocations
A view does not take any additional memory. Thus, these declarations are instantaneous:
# New axis
x[np.newaxis, ...]
# Broadcast to specific shape
np.broadcast_to(x, (reps, n))
Forced allocation
If you want to force the contents to reside in memory:
>>> %timeit np.array(np.broadcast_to(x, (reps, n)))
10.2 ms ± 62.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
>>> %timeit np.repeat(x[np.newaxis, :], reps, axis=0)
9.88 ms ± 52.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
>>> %timeit np.tile(x, (reps, 1))
9.97 ms ± 77.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
All three methods are roughly the same speed.
Computation
>>> a = np.arange(reps * n).reshape(reps, n)
>>> x_tiled = np.tile(x, (reps, 1))
>>> %timeit np.broadcast_to(x, (reps, n)) * a
17.1 ms ± 284 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
>>> %timeit x[np.newaxis, :] * a
17.5 ms ± 300 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
>>> %timeit x_tiled * a
17.6 ms ± 240 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
All three methods are roughly the same speed.
Conclusion
If you want to replicate before a computation, consider using one of the "zero-cost allocation" methods. You won't suffer the performance penalty of "forced allocation".
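One way to convince yourself that the broadcast really is zero-cost: the repeated dimension gets a stride of 0, so every row reads the same underlying memory, and the view is read-only (a quick sketch; the 8-byte stride assumes the default 64-bit integer dtype):
v = np.broadcast_to(x, (reps, n))
v.strides           # (0, 8): stepping to the next row does not move through memory
v.flags.writeable   # False: you must copy before modifying in place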
I think using broadcasting in numpy is the best, and faster. I ran the following comparison:
import numpy as np
b = np.random.randn(1000)
In [105]: %timeit c = np.tile(b[:, np.newaxis], (1,100))
1000 loops, best of 3: 354 µs per loop
In [106]: %timeit c = np.repeat(b[:, np.newaxis], 100, axis=1)
1000 loops, best of 3: 347 µs per loop
In [107]: %timeit c = np.array([b,]*100).transpose()
100 loops, best of 3: 5.56 ms per loop
about 15 times faster than building the matrix from a Python list
One clean solution is to use NumPy's outer-product function with a vector of ones:
np.outer(np.ones(n), x)
gives n repeating rows. Switch the argument order to get repeating columns. To get an equal number of rows and columns you might do
np.outer(np.ones_like(x), x)
You can use
np.tile(x, 3).reshape((3, 3))
tile will generate the reps of the vector, and reshape will give it the shape you want.
Returning to the original question
In MATLAB or Octave this is done pretty easily:
x = [1, 2, 3]
a = ones(3, 1) * x
...
In numpy it's pretty much the same (and easy to memorize too):
x = [1, 2, 3]
a = np.tile(x, (3, 1))
If you have a pandas dataframe and want to preserve the dtypes, even the categoricals, this is a fast way to do it:
import numpy as np
import pandas as pd
df = pd.DataFrame({1: [1, 2, 3], 2: [4, 5, 6]})
number_repeats = 50
new_df = df.reindex(np.tile(df.index, number_repeats))
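For reference, with the 3-row frame above and number_repeats = 50, new_df comes out with 150 rows; checking new_df.dtypes confirms the columns keep their original dtypes, which is the point of reindexing by label rather than tiling the values.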
Another solution
>>> x = np.array([1,2,3])
>>> y = x[None, :] * np.ones((3,))[:, None]
>>> y
array([[ 1.,  2.,  3.],
       [ 1.,  2.,  3.],
       [ 1.,  2.,  3.]])
Why? Sure, repeat and tile are the correct ways to do this. But None indexing is a powerful tool that has often let me quickly vectorize an operation (though it can quickly become very memory expensive!).
An example from my own code:
# trajectory is a sequence of xy coordinates [n_points, 2]
# xy_obstacles is a list of obstacles' xy coordinates [n_obstacles, 2]
# to compute dx, dy distance between every obstacle and every pose in the trajectory
deltas = trajectory[:, None, :2] - xy_obstacles[None, :, :2]
# we can easily convert x-y distance to a norm
distances = np.linalg.norm(deltas, axis=-1)
# distances is now [timesteps, obstacles]. Now we can for example find the closest obstacle at every point in the trajectory by doing
closest_obstacles = np.argmin(distances, axis=1)
# we could also find how safe the trajectory is, by finding the smallest distance over the entire trajectory
danger = np.min(distances)
To answer the actual question, now that nearly a dozen approaches to working around a solution have been posted: x.transpose() reverses the shape of x. One of the interesting side effects is that if x.ndim == 1, the transpose does nothing.
This is especially confusing for people coming from MATLAB, where all arrays implicitly have at least two dimensions. The correct way to transpose a 1D numpy array is not x.transpose() or x.T, but rather
x[:, None]
or
x.reshape(-1, 1)
From here, you can multiply by a matrix of ones, or use any of the other suggested approaches, as long as you respect the (subtle) differences between MATLAB and numpy.
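For instance, a minimal sketch of the column-clone case using this idiom:
x = np.array([1, 2, 3])
x[:, None] * np.ones((1, 3))
# array([[1., 1., 1.],
#        [2., 2., 2.],
#        [3., 3., 3.]])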
import numpy as np
x = np.array([1, 2, 3])
# broadcast x across a square matrix of ones, then transpose to clone columns
y = np.multiply(np.ones((len(x), len(x))), x).T
print(y)
yields:
[[ 1.  1.  1.]
 [ 2.  2.  2.]
 [ 3.  3.  3.]]

Convert real-valued numpy array to binary array by sign

I am looking for a fast way to compute the following:
import numpy as np
a = np.array([-1,1,2,-4,5.5,-0.1,0])
Now I want to cast a to an array of binary values such that it has a 1 for every positive entry of a and a 0 otherwise. So the result I want is this:
array([ 0., 1., 1., 0., 1., 0., 0.])
One way to achieve this would be
np.array([x if x >=0 else 0 for x in np.sign(a)])
array([ 0., 1., 1., 0., 1., 0., 0.])
But I am hoping someone can point out a faster solution.
%timeit np.array([x if x >=0 else 0 for x in np.sign(a)])
100000 loops, best of 3: 11.4 us per loop
EDIT: timing the great solutions from the answers
%timeit (a > 0).astype(int)
100000 loops, best of 3: 3.47 us per loop
You can do this using a boolean mask:
(a > 0).astype(int)
I do not know how to use timeit properly, but even a crude comparison:
import numpy as np
from datetime import datetime

n = 50000000
a = np.random.rand(1, n).ravel()

startTime = datetime.now()
np.array([x if x >= 0 else 0 for x in np.sign(a)])
print(datetime.now() - startTime)

startTime = datetime.now()
(a > 0).astype(int)
print(datetime.now() - startTime)
shows a dramatic difference: 26 seconds vs. 0.5 seconds.
P.S. Based on your comment
I'll be computing distances, like hamming
you do not really need an integer array; a > 0 will be enough. It will save you memory and make things slightly faster.
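A rough illustration of the memory point, using the 7-element a from the question (sizes assume numpy's default 8-byte integer on a 64-bit build):
mask = a > 0
mask.nbytes                # 7: one byte per element for bool
mask.astype(int).nbytes    # 56: eight bytes per element for int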
You can check where a is greater than 0 and cast the boolean array to an integer array:
>>> (a > 0).astype(int)
array([0, 1, 1, 0, 1, 0, 0])
This should be significantly faster than the method proposed in the question (especially over larger arrays) because it avoids looping over the array at the Python level.
Faster still is to simply view the boolean array as the int8 dtype - this prevents the need to create a new array from the boolean array:
>>> (a > 0).view(np.int8)
array([0, 1, 1, 0, 1, 0, 0], dtype=int8)
Timings:
>>> b = np.random.rand(1000000)
>>> %timeit np.array([ x if x >=0 else 0 for x in np.sign(b)])
1 loops, best of 3: 420 ms per loop
>>> %timeit (b > 0).astype(int)
100 loops, best of 3: 4.63 ms per loop
>>> %timeit (b > 0).view(np.int8)
1000 loops, best of 3: 1.12 ms per loop
