How to get a 2D array containing indices of another 2D array - python

Problem
import numpy as np
I have an array, without any prior information about its contents. For example:
ourarray = \
np.array([[0,1],
          [2,3],
          [4,5]])
I want to get the pairs of numbers which can be used for indexing ourarray, i.e. I want to get:
array([[0, 0, 1, 1, 2, 2],
       [0, 1, 0, 1, 0, 1]])
(0,0, 0,1, 1,0, etc., all the possible indices of ourarray are in this array.)
Similar but different posts
how to find indices of a 2d numpy array occuring in another 2d array: there they search for one array within another, rather than returning the indices of the entire array.
Find indices of rows of numpy 2d array in another 2D array: there they start with two arrays; the objective isn't to create a second array containing the indices of the first.
Attempt 1 (Successful but inefficient)
I can get this array by:
np.array(np.where(np.ones(ourarray.shape)))
This gives the desired result, but it requires creating np.ones(ourarray.shape), which does not seem like an efficient way of doing it.
Attempt 2 (Failed)
I also tried:
np.array(np.where(ourarray))
which does not work because no indices are returned for the 0 entry of ourarray.
Question
Attempt 1 works, but I am looking for a more efficient way. How can I do this more efficiently?

You can use numpy.argwhere and then .T to get what you want.
try this:
>>> ourarray = np.array([[0,1],[2,3], [4,5]])
>>> np.argwhere(ourarray>=0).T
array([[0, 0, 1, 1, 2, 2],
       [0, 1, 0, 1, 0, 1]])
If your array may contain values for which the >= 0 comparison fails (such as np.nan), you can use this instead:
ourarray = np.array([[np.nan,1],[2,np.inf], [-4,-5]])
np.argwhere(np.ones(ourarray.shape)==1).T
# array([[0, 0, 1, 1, 2, 2],
#        [0, 1, 0, 1, 0, 1]])
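A variation that avoids the comparison entirely (a minimal sketch, not from the answer above): in a boolean all-True array every position counts as non-zero, so np.nan and np.inf need no special handling.
# every element of a boolean all-True array is "non-zero"
np.argwhere(np.ones(ourarray.shape, dtype=bool)).T
# array([[0, 0, 1, 1, 2, 2],
#        [0, 1, 0, 1, 0, 1]])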

How do you intend to use this index?
The tuple produced by nonzero (where) is designed for convenient indexing:
In [54]: idx = np.nonzero(np.ones_like(ourarray))
In [55]: idx
Out[55]: (array([0, 0, 1, 1, 2, 2]), array([0, 1, 0, 1, 0, 1]))
In [56]: ourarray[idx]
Out[56]: array([0, 1, 2, 3, 4, 5])
or equivalently using the 2 arrays explicitly:
In [57]: ourarray[idx[0], idx[1]]
Out[57]: array([0, 1, 2, 3, 4, 5])
Your np.array(idx) can be used as in [57] but not as in [56]. The use of a tuple in [56] is important.
If we apply transpose to this we get an array.
In [58]: tidx = np.transpose(idx)
In [59]: tidx
Out[59]:
array([[0, 0],
       [0, 1],
       [1, 0],
       [1, 1],
       [2, 0],
       [2, 1]])
to use that for indexing we have to iterate:
In [60]: [ourarray[i,j] for i,j in tidx]
Out[60]: [0, 1, 2, 3, 4, 5]
argwhere as proposed in the other answer is just the transpose. Using ourarray>=0 is really no different from the np.ones expression. Both make an array that is True/1 for all elements.
In [61]: np.argwhere(np.ones_like(ourarray))
Out[61]:
array([[0, 0],
       [0, 1],
       [1, 0],
       [1, 1],
       [2, 0],
       [2, 1]])
There are other ways of generating indices, np.indices, np.meshgrid, np.mgrid, np.ndindex, but they will require some sort of reshaping and/or transpose to get exactly what you want (short sketches of mgrid and ndindex follow the np.indices example below):
In [71]: np.indices(ourarray.shape)
Out[71]:
array([[[0, 0],
        [1, 1],
        [2, 2]],
       [[0, 1],
        [0, 1],
        [0, 1]]])
In [72]: np.indices(ourarray.shape).reshape(2,6)
Out[72]:
array([[0, 0, 1, 1, 2, 2],
       [0, 1, 0, 1, 0, 1]])
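As quick sketches of the other generators mentioned above (hard-coding the (3, 2) shape for brevity):
np.mgrid[0:3, 0:2].reshape(2, -1)   # dense mesh over the two index axes
# array([[0, 0, 1, 1, 2, 2],
#        [0, 1, 0, 1, 0, 1]])
np.array(list(np.ndindex(ourarray.shape))).T   # ndindex yields index tuples
# array([[0, 0, 1, 1, 2, 2],
#        [0, 1, 0, 1, 0, 1]])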
timings
If ourarray>=0 works, it is faster than np.ones:
In [79]: timeit np.ones_like(ourarray)
6.22 µs ± 11.5 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [80]: timeit ourarray>=0
1.43 µs ± 15 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
np.where/nonzero adds a non-trivial time to that:
In [81]: timeit np.nonzero(ourarray>=0)
6.43 µs ± 8.15 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
and a bit more time to convert the tuple to array:
In [82]: timeit np.array(np.nonzero(ourarray>=0))
10.4 µs ± 35.7 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
The transpose round trip of argwhere adds more time:
In [83]: timeit np.argwhere(ourarray>=0).T
16.9 µs ± 35.4 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
indices is about the same as [82], though it may scale differently.
In [84]: timeit np.indices(ourarray.shape).reshape(2,-1)
10.9 µs ± 33.4 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

Related

How do I convert Results of a loop to an array in Python? [duplicate]

So let's say I have a 2d array. How can I apply a function to every single item in the array and replace that item with the return value? Also, the function's return will be a tuple, so the array will become 3d.
Here is the code in mind.
def filter_func(item):
    if 0 <= item < 1:
        return (1, 0, 1)
    elif 1 <= item < 2:
        return (2, 1, 1)
    elif 2 <= item < 3:
        return (5, 1, 4)
    else:
        return (4, 4, 4)
myarray = np.array([[2.5, 1.3], [0.4, -1.0]])
# Apply the function to an array
print(myarray)
# Should be array([[[5, 1, 4],
#                   [2, 1, 1]],
#                  [[1, 0, 1],
#                   [4, 4, 4]]])
Any ideas how I could do it? One way is to do np.array(list(map(filter_func, myarray.reshape((4,))))).reshape((2, 2, 3)) but that's quite slow, especially when I need to do it on an array of shape (1024, 1024).
I've also seen people use np.vectorize, but it somehow ends up as (array([[5, 2], [1, 4]]), array([[1, 1], [0, 4]]), array([[4, 1], [1, 4]])). Then it has a shape of (3, 2, 2).
No need to change anything in your function.
Just apply the vectorized version of your function to your array
and stack the result:
np.stack(np.vectorize(filter_func)(myarray), axis=2)
The result is:
array([[[5, 1, 4],
        [2, 1, 1]],
       [[1, 0, 1],
        [4, 4, 4]]])
Your list-map:
In [4]: np.array(list(map(filter_func, myarray.reshape((4,))))).reshape((2, 2, 3))
Out[4]:
array([[[5, 1, 4],
        [2, 1, 1]],
       [[1, 0, 1],
        [4, 4, 4]]])
A variation using nested list comprehension:
In [5]: np.array([[filter_func(j) for j in row] for row in myarray])
Out[5]:
array([[[5, 1, 4],
        [2, 1, 1]],
       [[1, 0, 1],
        [4, 4, 4]]])
Using vectorize, the result is one array for each element returned by the function.
In [6]: np.vectorize(filter_func)(myarray)
Out[6]:
(array([[5, 2],
        [1, 4]]),
 array([[1, 1],
        [0, 4]]),
 array([[4, 1],
        [1, 4]]))
As @Vladi shows, these can be combined with stack (or np.array followed by a transpose):
In [7]: np.stack(np.vectorize(filter_func)(myarray),2)
Out[7]:
array([[[5, 1, 4],
        [2, 1, 1]],
       [[1, 0, 1],
        [4, 4, 4]]])
Your list-map is fastest. I've never found vectorize to be faster:
In [8]: timeit np.array(list(map(filter_func, myarray.reshape((4,))))).reshape((2, 2, 3))
17.2 µs ± 47.7 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [9]: timeit np.array([[filter_func(j) for j in row] for row in myarray])
20.5 µs ± 78.1 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [10]: timeit np.stack(np.vectorize(filter_func)(myarray),2)
75.2 µs ± 297 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Taking the np.vectorize(filter_func) out of the timing loop helps just a bit.
frompyfunc is similar to vectorize, but returns object dtype. It usually is faster:
In [29]: timeit np.stack(np.frompyfunc(filter_func, 1,3)(myarray),2).astype(int)
28.7 µs ± 125 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Generally if you have a function that only takes scalar inputs, it's hard to do better than simple iteration. vectorize/frompyfunc don't improve on that. Optimal use of numpy requires rewriting the function to work directly with arrays, as @Hammad demonstrates.
Though with this small example, even this proper numpy solution isn't faster. I expect it will scale better:
In [32]: timeit func(myarray)
25 µs ± 60.8 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
You could use this function, with a vectorised implementation:
def func(arr):
    # lookup table: one row per branch of filter_func, else-case last
    elements = np.array([
        [1, 0, 1],
        [2, 1, 1],
        [5, 1, 4],
        [4, 4, 4],
    ])
    # truncate to int so each value selects a row of the table
    arr = arr.astype(int)
    # anything outside 0..2 maps to the last ("else") row
    mask = (arr != 0) & (arr != 1) & (arr != 2)
    arr[mask] = -1
    return elements[arr]
You won't be able to write the result back into your array in place because of the shape mismatch, but you can overwrite the variable myarray:
myarray = func(myarray)
myarray
>>> [[[5, 1, 4],
      [2, 1, 1]],
     [[1, 0, 1],
      [4, 4, 4]]]
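For reference, the same lookup-table idea can be written without truncating to int, sketched here with np.digitize (filter_func_vec and its bin handling are my illustration, not from the answers above); digitize also handles values such as -0.5 correctly, which astype(int) would truncate to 0:
import numpy as np

def filter_func_vec(arr):
    # one row per branch of filter_func; the last row is the "else" case
    elements = np.array([[1, 0, 1],
                         [2, 1, 1],
                         [5, 1, 4],
                         [4, 4, 4]])
    # digitize maps 0<=x<1 -> 1, 1<=x<2 -> 2, 2<=x<3 -> 3,
    # x<0 -> 0 and x>=3 -> 4
    bins = np.digitize(arr, [0, 1, 2, 3])
    # translate bin numbers to table rows; 0 and 4 both hit the else row
    rows = np.where((bins == 0) | (bins == 4), 3, bins - 1)
    return elements[rows]

filter_func_vec(np.array([[2.5, 1.3], [0.4, -1.0]]))
# array([[[5, 1, 4],
#         [2, 1, 1]],
#        [[1, 0, 1],
#         [4, 4, 4]]])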

Mean of the last two entries of non-zero elements in a 3D array

I have an (n by i by j) 3D numpy array: a_3d_array (2 by 5 by 3)
array([[[1, 2, 3],
        [1, 1, 1],
        [2, 2, 2],
        [0, 3, 3],
        [0, 0, 4]],
       [[1, 2, 3],
        [2, 2, 2],
        [3, 3, 3],
        [0, 4, 4],
        [0, 0, 5]]])
For each column j of each slice n, I want to extract the last 2 non-zero elements and calculate their mean, then put the results in an (n by j) array. What I currently do is use a for loop:
import numpy as np
a_3d_array = np.array([[[1, 2, 3],
                        [1, 1, 1],
                        [2, 2, 2],
                        [0, 3, 3],
                        [0, 0, 4]],
                       [[1, 2, 3],
                        [2, 2, 2],
                        [3, 3, 3],
                        [0, 4, 4],
                        [0, 0, 5]]])
aveCol = np.zeros([2,3])
for n in range(2):
    for j in range(3):
        temp = a_3d_array[n,:,j]
        nonzero_array = temp[np.nonzero(temp)]
        aveCol[n, j] = np.mean(nonzero_array[-2:])
to get the desired results
print(aveCol)
[[1.5 2.5 3.5]
 [2.5 3.5 4.5]]
That works fine, but I wonder if there is a better, more Pythonic way of doing the same thing?
The most similar thing I found to my problem is here, but I don't quite understand the answer, which is explained in a slightly different context.
TL;DR As far as I can tell, Ann's answer is the fastest
Each m is an i×j 2D array; next we take a row of its transpose, i.e., the "column" on which to perform the computation. On this "column" we discard ALL the zeros, then sum the last two non-zero elements and divide by 2 to take the mean:
In [17]: np.array([[sum(r[r!=0][-2:])/2 for r in m.T] for m in a])
Out[17]:
array([[1.5, 2.5, 3.5],
       [2.5, 3.5, 4.5]])
Edit1
It looks like it's faster than your loop
In [19]: %%timeit
    ...: avg = np.zeros([2,3])
    ...: for n in range(2):
    ...:     for j in range(3):
    ...:         temp = a[n,:,j]
    ...:         nz = temp[np.nonzero(temp)]
    ...:         avg[n, j] = np.mean(nz[-2:])
95.1 µs ± 596 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [20]: %timeit np.array([[sum(r[r!=0][-2:])/2 for r in m.T] for m in a])
45.5 µs ± 394 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Edit2
In [22]: %timeit np.array([[np.mean(list(filter(None, a[n,:,j]))[-2:]) for j in range(3)] for n in range(2)])
145 µs ± 689 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Edit3
In [25]: %%timeit
...: i = np.indices(a.shape)
...: i[:, a == 0] = -1
...: i = np.sort(i, axis=2)
...: i = i[:, :, -2:, :]
...: a[tuple(i)].mean(axis=1)
64 µs ± 239 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Edit4 Breaking News Info
The culprit in Ann's answer is np.mean!!
In [29]: %timeit np.array([[sum(list(filter(None, a[n,:,j]))[-2:])/2 for j in range(3)] for n in range(2)])
32.7 µs ± 111 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
You can use the built-in filter function to filter out the 0s from the arrays.
Here is a list comprehension approach:
import numpy as np
a_3d_array = np.array([[[1, 2, 3],
                        [1, 1, 1],
                        [2, 2, 2],
                        [0, 3, 3],
                        [0, 0, 4]],
                       [[1, 2, 3],
                        [2, 2, 2],
                        [3, 3, 3],
                        [0, 4, 4],
                        [0, 0, 5]]])
aveCol = np.array([[np.mean(list(filter(None, a_3d_array[n,:,j]))[-2:]) for j in range(3)] for n in range(2)])
print(aveCol)
Output:
[[1.5 2.5 3.5]
 [2.5 3.5 4.5]]
Note from @gboffi: for efficiency, use
aveCol = np.array([[sum([i for i in a_3d_array[n,:,j] if i][-2:])/2 for j in range(3)] for n in range(2)])
instead of
aveCol = np.array([[np.mean([i for i in a_3d_array[n,:,j] if i][-2:]) for j in range(3)] for n in range(2)])
You can get the indices of your array a, mark zero items with a negative number, sort, keep the last two, and then use the result as an index:
i = np.indices(a.shape)   # index grids, shape (3, 2, 5, 3)
i[:, a == 0] = -1         # indices of zero entries sort to the front
i = np.sort(i, axis=2)    # sort along the length-5 axis
i = i[:, :, -2:, :]       # keep the indices of the last two non-zero elements
a[tuple(i)].mean(axis=1)  # fancy-index and average the pair
# array([[1.5, 2.5, 3.5],
#        [2.5, 3.5, 4.5]])
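As a quick sanity check, the result agrees with the loop from the question (assuming a is a_3d_array and aveCol holds the loop's result):
np.allclose(a[tuple(i)].mean(axis=1), aveCol)
# True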

finding zero values in numpy 3-D array [duplicate]

NumPy has the efficient function/method nonzero() to identify the indices of non-zero elements in an ndarray object. What is the most efficient way to obtain the indices of the elements that do have a value of zero?
numpy.where() is my favorite.
>>> x = numpy.array([1,0,2,0,3,0,4,5,6,7,8])
>>> numpy.where(x == 0)[0]
array([1, 3, 5])
The method where returns a tuple of ndarrays, each corresponding to a different dimension of the input. Since the input is one-dimensional, the [0] unboxes the tuple's only element.
There is np.argwhere,
import numpy as np
arr = np.array([[1,2,3], [0, 1, 0], [7, 0, 2]])
np.argwhere(arr == 0)
which returns all found indices as rows:
array([[1, 0],   # Indices of the first zero
       [1, 2],   # Indices of the second zero
       [2, 1]],  # Indices of the third zero
      dtype=int64)
You can search for any scalar condition with:
>>> a = np.asarray([0,1,2,3,4])
>>> a == 0  # or whatever
array([ True, False, False, False, False], dtype=bool)
This gives back a boolean mask of the condition.
You can also use nonzero() by using it on a boolean mask of the condition, because False is also a kind of zero.
>>> x = numpy.array([1,0,2,0,3,0,4,5,6,7,8])
>>> x==0
array([False, True, False, True, False, True, False, False, False, False, False], dtype=bool)
>>> numpy.nonzero(x==0)[0]
array([1, 3, 5])
It's doing exactly the same as mtrw's way, but it is more related to the question ;)
You can use numpy.nonzero to find zeros.
>>> import numpy as np
>>> x = np.array([1,0,2,0,3,0,0,4,0,5,0,6]).reshape(4, 3)
>>> np.nonzero(x==0) # this is what you want
(array([0, 1, 1, 2, 2, 3]), array([1, 0, 2, 0, 2, 1]))
>>> np.nonzero(x)
(array([0, 0, 1, 2, 3, 3]), array([0, 2, 1, 1, 0, 2]))
If you are working with a one-dimensional array there is a syntactic sugar:
>>> x = numpy.array([1,0,2,0,3,0,4,5,6,7,8])
>>> numpy.flatnonzero(x == 0)
array([1, 3, 5])
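Per the NumPy documentation, flatnonzero is equivalent to calling nonzero on the raveled mask:
>>> numpy.nonzero(numpy.ravel(x == 0))[0]
array([1, 3, 5])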
I would do it the following way:
>>> x = np.array([[1,0,0], [0,2,0], [1,1,0]])
>>> x
array([[1, 0, 0],
[0, 2, 0],
[1, 1, 0]])
>>> np.nonzero(x)
(array([0, 1, 2, 2]), array([0, 1, 0, 1]))
# the corresponding non-zero values
>>> x[np.nonzero(x)]
array([1, 2, 1, 1])
# if you want the indices as coordinate pairs
>>> np.transpose(np.nonzero(x))
array([[0, 0],
       [1, 1],
       [2, 0],
       [2, 1]])
import numpy as np
arr = np.arange(10000)
arr[8000:8900] = 0
%timeit np.where(arr == 0)[0]
# 23.4 µs ± 1.5 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit np.argwhere(arr == 0)
# 34.5 µs ± 680 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit np.nonzero(arr==0)[0]
# 23.2 µs ± 447 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit np.flatnonzero(arr==0)
# 27 µs ± 506 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit np.amin(np.extract(arr != 0, arr))
# 109 µs ± 669 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
import numpy as np
x = np.array([1,0,2,3,6])
non_zero_arr = np.extract(x > 0, x)
min_value = np.amin(non_zero_arr)    # smallest non-zero value
min_index = np.argmin(non_zero_arr)  # its index within non_zero_arr, not within x

Efficient way to compute the probability distribution of a vector of pairs in python

Suppose we have a numpy array v
v=np.array([3, 5])
Now we use the below code to find a new vector say w
import itertools
v1 = np.array(range(v[0]+1))
v2 = np.array(range(v[1]+1))
w = np.array(list(itertools.product(v1, v2)))
So w looks like this,
array([[0, 0],
       [0, 1],
       [0, 2],
       [0, 3],
       [0, 4],
       [0, 5],
       [1, 0],
       [1, 1],
       [1, 2],
       [1, 3],
       [1, 4],
       [1, 5],
       [2, 0],
       [2, 1],
       [2, 2],
       [2, 3],
       [2, 4],
       [2, 5],
       [3, 0],
       [3, 1],
       [3, 2],
       [3, 3],
       [3, 4],
       [3, 5]])
Now, we need to find the probability vector corresponding to each pair in w knowing that the first element in each pair follows a Binomial distribution Bin(v[0], 0.1) and the second element of each pair follows a Binomial distribution Bin(v[1], 0.05). One way to do it is by this one liner
import scipy.stats as ss
prob_vector=np.array(list((ss.binom.pmf(i[0],v[0], 0.1) * ss.binom.pmf(i[1],v[1], 0.05)) for i in w))
output:
array([5.64086303e-01, 1.48443764e-01, 1.56256594e-02, 8.22403125e-04,
2.16421875e-05, 2.27812500e-07, 1.88028768e-01, 4.94812547e-02,
5.20855312e-03, 2.74134375e-04, 7.21406250e-06, 7.59375000e-08,
2.08920853e-02, 5.49791719e-03, 5.78728125e-04, 3.04593750e-05,
8.01562500e-07, 8.43750000e-09, 7.73780938e-04, 2.03626563e-04,
2.14343750e-05, 1.12812500e-06, 2.96875000e-08, 3.12500000e-10])
But it takes too much time to compute, especially since I am iterating over many v vectors!
Is there an efficient way to compute prob_vector?
Thanks
You're redoing a lot of pmf calls, as well as doing a lot on the Python side instead of the numpy side. We can save those computations by computing on your v1 and v2 arrays, and then multiplying those instead.
import numpy as np
import scipy.stats as ss
import itertools
def orig(x, y):
    v = np.array([x, y])
    v1 = np.array(range(v[0]+1))
    v2 = np.array(range(v[1]+1))
    w = np.array(list(itertools.product(v1, v2)))
    prob_vector = np.array(list((ss.binom.pmf(i[0], v[0], 0.1) * ss.binom.pmf(i[1], v[1], 0.05)) for i in w))
    return prob_vector

def faster(x, y):
    b0 = ss.binom.pmf(np.arange(x+1), x, 0.1)
    b1 = ss.binom.pmf(np.arange(y+1), y, 0.05)
    prob_array = b0[:, None] * b1
    prob_vector = prob_array.ravel()
    return prob_vector
which gives me:
In [61]: %timeit orig(3, 5)
4.46 ms ± 82.5 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [62]: %timeit faster(3, 5)
192 µs ± 4.33 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [63]: %timeit orig(30, 50)
311 ms ± 24.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [64]: %timeit faster(30, 50)
209 µs ± 8.43 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [65]: (orig(30, 50) == faster(30, 50)).all()
Out[65]: True
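The b0[:, None] * b1 line in faster is just an outer product, so an equivalent spelling uses np.outer (faster_outer is my name for this sketch, not from the answer):
def faster_outer(x, y):
    b0 = ss.binom.pmf(np.arange(x + 1), x, 0.1)
    b1 = ss.binom.pmf(np.arange(y + 1), y, 0.05)
    # outer product of the two marginal pmfs; ravel matches the
    # row-major order of itertools.product
    return np.outer(b0, b1).ravel()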

How to apply function which returns vector to each numpy array element (and get array with higher dimension)

Let's write it directly in code
Note: I edited mapper (the original example used x -> (x, 2 * x, 3 * x) just as an example) to a generic blackbox function, which is what causes the trouble.
import numpy as np

def blackbox_fn(x):  # I can't be changed!
    assert np.array(x).shape == (), "I'm a fussy little function!"
    return np.array([x, 2*x, 3*x])

# let's have a 2d array
arr2d = np.array(list(range(4)), dtype=np.uint8).reshape(2, 2)

# each element should be mapped to a vector
def mapper(x, blackbox_fn):
    # there is some 3rd-party non-trivial function returning an np.array
    # (in the examples it returns np.array((x, 2 * x, 3 * x))),
    # but this 3rd-party function operates only on scalar values
    return vectorized_blackbox_fn(x)
So for the input 2d array
array([[0, 1],
       [2, 3]], dtype=uint8)
I would like to get the 3d array
array([[[0, 0, 0],
        [1, 2, 3]],
       [[2, 4, 6],
        [3, 6, 9]]], dtype=uint8)
I can write a naive algorithm using a for loop:
# result should be a 3d array; the last dimension matches the mapper result size
arr3d = np.empty(arr2d.shape + (3,), dtype=np.uint8)
for y in range(arr2d.shape[1]):
    for x in range(arr2d.shape[0]):
        arr3d[x, y] = mapper(arr2d[x, y], blackbox_fn)
But it seems quite slow for large arrays.
I know there is np.vectorize, but using
np.vectorize(mapper)(arr2d)
does not work, because of
ValueError: setting an array element with a sequence.
(it seems that vectorize can't change the dimension)
Is there some better (numpy idiomatic and faster) solution?
np.vectorize with the new signature option can handle this. It doesn't improve the speed, but makes the dimensional bookkeeping easier.
In [159]: def blackbox_fn(x): # I can't be changed!
     ...:     assert np.array(x).shape == (), "I'm a fussy little function!"
     ...:     return np.array([x, 2*x, 3*x])
     ...:
The documentation for signature is a bit cryptic. I've worked with it before, so made a good first guess:
In [161]: f = np.vectorize(blackbox_fn, signature='()->(n)')
In [162]: f(np.ones((2,2)))
Out[162]:
array([[[ 1.,  2.,  3.],
        [ 1.,  2.,  3.]],
       [[ 1.,  2.,  3.],
        [ 1.,  2.,  3.]]])
With your array:
In [163]: arr2d = np.array(list(range(4)), dtype=np.uint8).reshape(2, 2)
In [164]: f(arr2d)
Out[164]:
array([[[0, 0, 0],
        [1, 2, 3]],
       [[2, 4, 6],
        [3, 6, 9]]])
In [165]: _.dtype
Out[165]: dtype('int32')
The dtype is not preserved, because your blackbox_fn doesn't preserve it. As a default vectorize makes a test calculation with the first element, and uses its dtype to determine the result's dtype. It is possible to specify return dtype with the otypes parameter.
It can handle arrays other than 2d:
In [166]: f(np.arange(3))
Out[166]:
array([[0, 0, 0],
[1, 2, 3],
[2, 4, 6]])
In [167]: f(3)
Out[167]: array([3, 6, 9])
With a signature vectorize uses Python-level iteration. Without a signature it uses np.frompyfunc, with a bit better performance. But as long as blackbox_fn has to be called for each element of the input, we can't improve the speed by much (at most 2x).
np.frompyfunc returns a object dtype array:
In [168]: fpy = np.frompyfunc(blackbox_fn, 1,1)
In [169]: fpy(1)
Out[169]: array([1, 2, 3])
In [170]: fpy(np.arange(3))
Out[170]: array([array([0, 0, 0]), array([1, 2, 3]), array([2, 4, 6])], dtype=object)
In [171]: np.stack(_)
Out[171]:
array([[0, 0, 0],
[1, 2, 3],
[2, 4, 6]])
In [172]: fpy(arr2d)
Out[172]:
array([[array([0, 0, 0]), array([1, 2, 3])],
       [array([2, 4, 6]), array([3, 6, 9])]], dtype=object)
stack can't remove the array nesting in this 2d case:
In [173]: np.stack(_)
Out[173]:
array([[array([0, 0, 0]), array([1, 2, 3])],
       [array([2, 4, 6]), array([3, 6, 9])]], dtype=object)
but we can ravel it and stack; getting back the original shape then needs a reshape (shown after the stack below):
In [174]: np.stack(__.ravel())
Out[174]:
array([[0, 0, 0],
[1, 2, 3],
[2, 4, 6],
[3, 6, 9]])
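One more reshape then recovers the (2, 2, 3) result for the 2d case (a sketch continuing with fpy and arr2d from above):
np.stack(fpy(arr2d).ravel()).reshape(arr2d.shape + (3,))
# array([[[0, 0, 0],
#         [1, 2, 3]],
#        [[2, 4, 6],
#         [3, 6, 9]]])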
Speed tests:
In [175]: timeit f(np.arange(1000))
14.7 ms ± 322 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [176]: timeit fpy(np.arange(1000))
4.57 ms ± 161 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [177]: timeit np.stack(fpy(np.arange(1000).ravel()))
6.71 ms ± 207 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [178]: timeit np.array([blackbox_fn(i) for i in np.arange(1000)])
6.44 ms ± 235 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Having your function return a list instead of an array might make reassembling the result easier, and maybe even faster:
def foo(x):
    return [x, 2*x, 3*x]
or playing about with the frompyfunc parameters:
def foo(x):
    return x, 2*x, 3*x  # return a tuple
In [204]: np.stack(np.frompyfunc(foo, 1,3)(arr2d),2)
Out[204]:
array([[[0, 0, 0],
        [1, 2, 3]],
       [[2, 4, 6],
        [3, 6, 9]]], dtype=object)
10x speed up - I'm surprised:
In [212]: foo1 = np.frompyfunc(foo, 1,3)
In [213]: timeit np.stack(foo1(np.arange(1000)),1)
428 µs ± 17.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
You can use basic NumPy broadcasting for these kinds of "outer products":
np.arange(3)[:, None] * np.arange(2)
# array([[0, 0],
#        [0, 1],
#        [0, 2]])
In your case it would be
def mapper(x):
    return (np.arange(3)[:, None, None] * x).transpose((1, 2, 0))
note the .transpose() is only needed if you specifically need the new axis to be at the end.
And it is almost 3x as fast as stacking 3 separate multiplications:
def mapper(x):
    return (np.arange(3)[:, None, None] * x).transpose((1, 2, 0))

def mapper2(x):
    return np.stack((x, 2 * x, 3 * x), axis=-1)
a = np.arange(30000).reshape(-1, 30)
%timeit mapper(a) # 48.2 µs ± 417 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit mapper2(a) # 137 µs ± 3.57 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
I might be getting this wrong, but a comprehension does the job:
a = np.array([[0, 1],
              [2, 3]])
np.array([[[j, j*2, j*3] for j in i] for i in a])
# [[[0 0 0]
#   [1 2 3]]
#
#  [[2 4 6]
#   [3 6 9]]]
