I have a very large 2D array which looks something like this:
a=
[[a1, b1, c1],
[a2, b2, c2],
...,
[an, bn, cn]]
Using numpy, is there an easy way to get a new 2D array with, e.g., 2 random rows from the initial array a (without replacement)?
e.g.
b=
[[a4, b4, c4],
[a99, b99, c99]]
>>> A = np.random.randint(5, size=(10,3))
>>> A
array([[1, 3, 0],
[3, 2, 0],
[0, 2, 1],
[1, 1, 4],
[3, 2, 2],
[0, 1, 0],
[1, 3, 1],
[0, 4, 1],
[2, 4, 2],
[3, 3, 1]])
>>> idx = np.random.randint(10, size=2)
>>> idx
array([7, 6])
>>> A[idx,:]
array([[0, 4, 1],
[1, 3, 1]])
Putting it together for the general case:
A[np.random.randint(A.shape[0], size=2), :]
For sampling without replacement (NumPy 1.7.0+):
A[np.random.choice(A.shape[0], 2, replace=False), :]
I do not believe there is a good way to generate a random sample without replacement before 1.7. Perhaps you could set up a small helper function that ensures the two indices are not the same.
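A minimal sketch of such a helper (the function name sample_two_rows is my own, assuming pre-1.7 NumPy where np.random.choice is unavailable):
import numpy as np

def sample_two_rows(A):
    # draw two distinct row indices by redrawing until they differ
    i = np.random.randint(A.shape[0])
    j = np.random.randint(A.shape[0])
    while j == i:
        j = np.random.randint(A.shape[0])
    return A[[i, j], :]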
This is an old post, but this is what works best for me:
A[np.random.choice(A.shape[0], num_rows_2_sample, replace=False)]
Change replace=False to replace=True to get the same thing, but with replacement.
Another option is to create a random mask if you just want to down-sample your data by a certain factor. Say I want to down-sample to 25% of my original data set, which is currently held in the array data_arr:
# generate a random boolean mask the length of the data
# using p=0.75 for False and p=0.25 for True
mask = np.random.choice([False, True], len(data_arr), p=[0.75, 0.25])
Now you can call data_arr[mask] to get back ~25% of the rows, randomly sampled.
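For example, with a placeholder data_arr (note the kept fraction is only approximately 25%, since each row is kept or dropped independently):
import numpy as np

data_arr = np.arange(40).reshape(10, 4)   # placeholder data
mask = np.random.choice([False, True], len(data_arr), p=[0.75, 0.25])
sampled = data_arr[mask]                  # roughly 25% of the rows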
This is a similar answer to the one Hezi Rasheff provided, but simplified so newer Python users understand what's going on (I noticed many new data science students fetch random samples in convoluted ways because they don't know what they are doing in Python).
You can get a number of random indices from your array by using:
indices = np.random.choice(A.shape[0], number_of_samples, replace=False)
You can then use fancy indexing with your numpy array to get the samples at those indices:
A[indices]
This will get you the specified number of random samples from your data.
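Putting the two steps together, a minimal runnable sketch (number_of_samples chosen arbitrarily):
import numpy as np

A = np.random.randint(5, size=(10, 3))
number_of_samples = 2
indices = np.random.choice(A.shape[0], number_of_samples, replace=False)
samples = A[indices]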
I see permutation has been suggested. In fact it can be made into one line:
>>> A = np.random.randint(5, size=(10,3))
>>> np.random.permutation(A)[:2]
array([[0, 3, 0],
[3, 1, 2]])
If you want to generate multiple random subsets of rows, for example if you're doing RANSAC:
import numpy as np

num_pop = 10         # population size
num_samples = 2      # number of subsets to draw
pop_in_sample = 3    # rows per subset
rows_to_sample = np.random.random([num_pop, 5])
random_numbers = np.random.random([num_samples, num_pop])
# argsort of a row of random numbers is a random permutation,
# so the first pop_in_sample columns form a sample without replacement
samples = np.argsort(random_numbers, axis=1)[:, :pop_in_sample]
# will be shape [num_samples, pop_in_sample, 5]
row_subsets = rows_to_sample[samples, :]
An alternative way of doing it is to use the choice method of the Generator class (see https://github.com/numpy/numpy/issues/10835):
import numpy as np
# generate the random array
A = np.random.randint(5, size=(10,3))
# use the choice method of the Generator class
rng = np.random.default_rng()
A_sampled = rng.choice(A, 2)
which yields sampled data such as:
array([[1, 3, 2],
       [1, 2, 1]])
(Note: Generator.choice samples with replacement by default; pass replace=False to sample without replacement, as the original question requires.)
The running times are also profiled for comparison:
%timeit rng.choice(A, 2)
15.1 µs ± 115 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit np.random.permutation(A)[:2]
4.22 µs ± 83.9 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit A[np.random.randint(A.shape[0], size=2), :]
10.6 µs ± 418 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
But when the array grows big, e.g. A = np.random.randint(10, size=(1000, 300)), working on the index is the best way:
%timeit A[np.random.randint(A.shape[0], size=50), :]
17.6 µs ± 657 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit rng.choice(A, 50)
22.3 µs ± 134 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit np.random.permutation(A)[:50]
143 µs ± 1.33 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
So the permutation method seems to be the most efficient when the array is small, while working on the index is the optimal solution when the array grows big.
If you just need a random sample of the rows, the standard library works as well:
import random
new_array = random.sample(list(old_array), x)
Here x has to be an int defining the number of rows you want to randomly pick.
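Note that random.sample returns a Python list of row arrays rather than a 2D array; a sketch with placeholder data showing the conversion back:
import random
import numpy as np

old_array = np.arange(12).reshape(4, 3)   # placeholder data
x = 2
new_array = np.array(random.sample(list(old_array), x))   # a 2D array again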
One can generate a random sample from a given array with a random number generator:
rng = np.random.default_rng()
b = rng.choice(a, 2, replace=False)
b
>>> [[a4, b4, c4],
[a99, b99, c99]]
I am quite surprised that this much easier to read solution has not been proposed after more than 10 years:
import random
b = np.array(
    random.choices(a, k=2)
)
Edit: ah, maybe because it was only introduced in Python 3.6, but still… (Note that random.choices samples with replacement; use random.sample if you need rows without replacement.)
I want to index an array with a boolean mask via multiple boolean arrays, without a loop.
This is what I want to achieve but without a loop and only with numpy.
import numpy as np
a = np.array([[0, 1],[2, 3]])
b = np.array([[[1, 0], [1, 0]], [[0, 0], [1, 1]]], dtype=bool)
r = []
for x in b:
    print(a[x])
    r.extend(a[x])
# => array([0, 2])
# => array([2, 3])
print(r)
# => [0, 2, 2, 3]
# what I would like to do is something like this
r = some_fancy_indexing_magic_with_b_and_a
print(r)
# => [0, 2, 2, 3]
Approach #1
Simply broadcast a to b's shape with np.broadcast_to and then mask it with b -
In [15]: np.broadcast_to(a,b.shape)[b]
Out[15]: array([0, 2, 2, 3])
Approach #2
Another would be to get the flat indices of the True values, take them modulo the size of a (which is also the size of each 2D block in b), and then index into the flattened a -
a.ravel()[np.flatnonzero(b)%a.size]
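A quick check with the question's arrays (the flat True positions in b are [0, 2, 6, 7] and a.size is 4; with this particular a, indices and values happen to coincide):
np.flatnonzero(b) % a.size
# => array([0, 2, 2, 3])
a.ravel()[np.flatnonzero(b) % a.size]
# => array([0, 2, 2, 3])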
Approach #3
Along the same lines as Approach #2, but keeping the 2D format and using the non-zero indices along the last two axes of b -
_,r,c = np.nonzero(b)
out = a[r,c]
Timings on large arrays (given sample shapes scaled up by 100x) -
In [50]: np.random.seed(0)
...: a = np.random.rand(200,200)
...: b = np.random.rand(200,200,200)>0.5
In [51]: %timeit np.broadcast_to(a,b.shape)[b]
45.5 ms ± 381 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [52]: %timeit a.ravel()[np.flatnonzero(b)%a.size]
94.6 ms ± 1.64 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [53]: %%timeit
...: _,r,c = np.nonzero(b)
...: out = a[r,c]
128 ms ± 1.46 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
I realize there are quite a number of 'how to sort a numpy array' questions on here already, but I could not find how to do it in this specific way.
I have an array similar to this:
array([[1, 0, 1],
       [0, 0, 1],
       [1, 1, 1],
       [1, 1, 0]])
I want to sort the rows, keeping the order within the rows the same. So I expect the following output:
array([[0, 0, 1],
       [1, 0, 1],
       [1, 1, 0],
       [1, 1, 1]])
You can use dot and argsort:
a[a.dot(2**np.arange(a.shape[1])[::-1]).argsort()]
# array([[0, 0, 1],
# [1, 0, 1],
# [1, 1, 0],
# [1, 1, 1]])
The idea is to convert the rows into integers by treating each row as a binary number:
a.dot(2**np.arange(a.shape[1])[::-1])
# array([5, 1, 7, 6])
Then, find the sorted indices and use that to reorder a:
a.dot(2**np.arange(a.shape[1])[::-1]).argsort()
# array([1, 0, 3, 2])
My tests show this is slightly faster than lexsort.
a = a.repeat(1000, axis=0)
%timeit a[np.lexsort(a.T[::-1])]
%timeit a[a.dot(2**np.arange(a.shape[1])[::-1]).argsort()]
230 µs ± 18.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
192 µs ± 4.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Verify correctness:
np.array_equal(a[a.dot(2**np.arange(a.shape[1])[::-1]).argsort()],
a[np.lexsort(a.T[::-1])])
# True
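One caveat (my note): the dot trick packs each row into one integer, so with the default int64 weights it overflows once the array has more than ~63 columns; np.lexsort has no such limit:
import numpy as np

a = np.array([[1, 0, 1], [0, 0, 1], [1, 1, 1], [1, 1, 0]])
sorted_a = a[np.lexsort(a.T[::-1])]   # a.T[::-1] makes the first column the primary key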
I have several numpy arrays; I want to build a groupby method that would have group ids for these arrays. It will then allow me to index these arrays on the group id to perform operations on the groups.
For an example:
import numpy as np
import pandas as pd
a = np.array([1,1,1,2,2,3])
b = np.array([1,2,2,2,3,3])
def group_np(groupcols):
    groupby = np.array([''.join([str(b) for b in bs]) for bs in zip(*[c for c in groupcols])])
    _, groupby = np.unique(groupby, return_inverse=True)
    return groupby
def group_pd(groupcols):
    df = pd.DataFrame(groupcols[0])
    for i in range(1, len(groupcols)):
        df[i] = groupcols[i]
    for i in range(len(groupcols)):
        df[i] = df[i].fillna(-1)
    return df.groupby(list(range(len(groupcols)))).grouper.group_info[0]
Outputs:
group_np([a,b]) -> [0, 1, 1, 2, 3, 4]
group_pd([a,b]) -> [0, 1, 1, 2, 3, 4]
Is there a more efficient way of implementing it, ideally in pure numpy? The bottleneck currently seems to be building a vector that would have unique values for each group - at the moment I am doing that by concatenating the values for each vector as strings.
I want this to work for any number of input vectors, which can have millions of elements.
Edit: here is another testcase:
a = np.array([1,2,1,1,1,2,3,1])
b = np.array([1,2,2,2,2,3,3,2])
Here, group elements 2,3,4,7 should all be the same.
Edit2: adding some benchmarks.
a = np.random.randint(1, 1000, 30000000)
b = np.random.randint(1, 1000, 30000000)
c = np.random.randint(1, 1000, 30000000)
def group_np2(groupcols):
    _, groupby = np.unique(np.stack(groupcols), return_inverse=True, axis=1)
    return groupby
%timeit group_np2([a,b,c])
# 25.1 s +/- 1.06 s per loop (mean +/- std. dev. of 7 runs, 1 loop each)
%timeit group_pd([a,b,c])
# 21.7 s +/- 646 ms per loop (mean +/- std. dev. of 7 runs, 1 loop each)
After using np.stack on the arrays a and b, if you set the parameter return_inverse to True in np.unique, its output is exactly what you are looking for:
a = np.array([1,2,1,1,1,2,3,1])
b = np.array([1,2,2,2,2,3,3,2])
_, inv = np.unique(np.stack([a,b]), axis=1, return_inverse=True)
print (inv)
array([0, 2, 1, 1, 1, 3, 4, 1], dtype=int64)
and you can replace [a,b] in np.stack by a list of all the vectors.
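For example, with three key arrays (a sketch, assuming c is built like a and b):
_, inv = np.unique(np.stack([a, b, c]), axis=1, return_inverse=True)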
Edit: a faster solution is to call np.unique on the sum of the arrays, each multiplied by the cumulative product (np.cumprod) of the max-plus-one of all previous arrays in groupcols, such as:
def group_np_sum(groupcols):
    groupcols_max = np.cumprod([ar.max()+1 for ar in groupcols[:-1]])
    return np.unique(sum([groupcols[0]] +
                         [ar*m for ar, m in zip(groupcols[1:], groupcols_max)]),
                     return_inverse=True)[1]
To check:
a = np.array([1,2,1,1,1,2,3,1])
b = np.array([1,2,2,2,2,3,3,2])
print (group_np_sum([a,b]))
array([0, 2, 1, 1, 1, 3, 4, 1], dtype=int64)
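To see why this works: multiplying each array by the running product of (max+1) of the arrays before it builds a mixed-radix encoding, so every distinct tuple of values maps to a distinct integer. A minimal two-array sketch:
import numpy as np

a = np.array([1, 2, 1, 1, 1, 2, 3, 1])
b = np.array([1, 2, 2, 2, 2, 3, 3, 2])
m = a.max() + 1        # radix contributed by a
codes = a + b * m      # one unique integer per (a, b) pair
print(np.unique(codes, return_inverse=True)[1])
# => [0 2 1 1 1 3 4 1]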
Note: the label assigned to each group may differ between the two methods (here I changed the first element of a to 3):
a = np.array([3,2,1,1,1,2,3,1])
b = np.array([1,2,2,2,2,3,3,2])
print(group_np2([a,b]))
print (group_np_sum([a,b]))
array([3, 1, 0, 0, 0, 2, 4, 0], dtype=int64)
array([0, 2, 1, 1, 1, 3, 4, 1], dtype=int64)
but groups themselves are the same.
Now to check for timing:
a = np.random.randint(1, 100, 30000)
b = np.random.randint(1, 100, 30000)
c = np.random.randint(1, 100, 30000)
groupcols = [a,b,c]
%timeit group_pd(groupcols)
#13.7 ms ± 1.22 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit group_np2(groupcols)
#34.2 ms ± 6.88 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit group_np_sum(groupcols)
#3.63 ms ± 562 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
The numpy_indexed package (disclaimer: I am its author) covers this type of use case:
import numpy_indexed as npi
npi.group_by((a, b))
Passing a tuple of index-arrays like this avoids creating a copy; but if you don't mind making the copy you can use stacking as well:
npi.group_by(np.stack([a, b], axis=-1))
I am calling a function several times, and each call returns a numpy array:
for x in range(0, 100):
    d = simulation3()
# first call:  d = [0, 1, 2, 3]
# second call: d = [4, 5, 6, 7]
# ...and many more
I want to take each list and append it to a 2D array.
final_array = [[0, 1, 2, 3],[4, 5, 6, 7]...and so forth]
I tried creating an empty array (final_array = np.zeros((4, 4))) and appending to it, but the values are appended after the 4x4 matrix of zeros is created.
Can anyone help me with this? thank you!
You can use np.fromiter to create an array from an iterable. Since, by default, this function only works with scalars, you can use itertools.chain to help:
import numpy as np
from itertools import chain

np.random.seed(0)

def simulation3():
    return np.random.randint(0, 10, 4)

n = 5
d = np.fromiter(chain.from_iterable(simulation3() for _ in range(n)), dtype='i')
d.shape = n, 4
print(d)
array([[5, 0, 3, 3],
[7, 9, 3, 5],
[2, 4, 7, 6],
[8, 8, 1, 6],
[7, 7, 8, 1]], dtype=int32)
But this is relatively inefficient. NumPy performs best with fixed-size arrays. If you know the size of your array in advance, you can define an empty array and update its rows sequentially. See the alternatives described by @norok2.
There are multiple ways to do it in numpy; the easiest is to use vstack. For example:
# you have these arrays you want to concatenate
d1 = [0, 1, 2, 3]
d2 = [4, 5, 6, 7]
d3 = [4, 5, 6, 7]
# initialize your variable with zero rows
X = np.zeros((0, 4))
# then each time you call your function, use np.vstack like this:
X = np.vstack((np.array(d1), X))
X = np.vstack((np.array(d2), X))
X = np.vstack((np.array(d3), X))
# and finally you have your array like below
# array([[4., 5., 6., 7.],
#        [4., 5., 6., 7.],
#        [0., 1., 2., 3.]])
The optimal solution depends on the numbers / sizes you are dealing with.
My favorite solution (which only works if you already know the size of the final result) is to initialize the array that will contain the results and then fill it row by row using views.
This is the most memory-efficient solution.
If you do not know the size of the final result, then you are better off by generating a list of lists, which can be converted (or stacked) as a NumPy array at the end of the process.
Here are some examples, where gen_1d_list() is used to generate some random numbers to mimic the result of simulation3() (meaning that in the following code, you should replace gen_1d_list(n, dtype) with simulation3()):
stacking1() implements the filling using views
stacking2() implements the list generation and converting to NumPy array
stacking3() implements the list generation and stacking to NumPy array
stacking4() implements the dynamic modification of a NumPy array using vstack() as proposed earlier.
import numpy as np

def gen_1d_list(n, dtype=int):
    return list(np.random.randint(1, 100, n, dtype))

def stacking1(n, m, dtype=int):
    arr = np.empty((n, m), dtype=dtype)
    for i in range(n):
        arr[i] = gen_1d_list(m, dtype)
    return arr

def stacking2(n, m, dtype=int):
    items = [gen_1d_list(m, dtype) for i in range(n)]
    arr = np.array(items)
    return arr

def stacking3(n, m, dtype=int):
    items = [gen_1d_list(m, dtype) for i in range(n)]
    arr = np.stack(items)
    return arr

def stacking4(n, m, dtype=int):
    arr = np.zeros((0, m), dtype=dtype)
    for i in range(n):
        arr = np.vstack((gen_1d_list(m, dtype), arr))
    return arr
Time-wise, stacking1() and stacking2() are more or less equally fast, while stacking3() and stacking4() are slower (and, in proportion, much slower for small size inputs).
Some numbers, for small size inputs:
n, m = 4, 10
%timeit stacking1(n, m)
# 15.7 µs ± 182 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit stacking2(n, m)
# 14.2 µs ± 141 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit stacking3(n, m)
# 22.7 µs ± 282 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit stacking4(n, m)
# 31.8 µs ± 270 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
and for larger size inputs:
n, m = 4, 1000000
%timeit stacking1(n, m)
# 344 ms ± 1.64 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit stacking2(n, m)
# 350 ms ± 1.65 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit stacking3(n, m)
# 370 ms ± 2.75 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit stacking4(n, m)
# 369 ms ± 3.01 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Given input
A = np.array([[1,2,3],[4,5,6],[7,8,9]])
array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
Needed output:
array([[2, 3],
[4, 6],
[7, 8]])
It is easy to do this with iteration or a loop, but there should be a neat way to do it without loops. Thanks
Approach #1
One approach with masking -
A[~np.eye(A.shape[0],dtype=bool)].reshape(A.shape[0],-1)
Sample run -
In [395]: A
Out[395]:
array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
In [396]: A[~np.eye(A.shape[0],dtype=bool)].reshape(A.shape[0],-1)
Out[396]:
array([[2, 3],
[4, 6],
[7, 8]])
Approach #2
Using the regular pattern of non-diagonal elements that could be traced with broadcasted additions with range arrays -
m = A.shape[0]
idx = (np.arange(1,m+1) + (m+1)*np.arange(m-1)[:,None]).reshape(m,-1)
out = A.ravel()[idx]
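A quick check of the index pattern for m = 3:
import numpy as np

m = 3
idx = (np.arange(1, m+1) + (m+1)*np.arange(m-1)[:, None]).reshape(m, -1)
print(idx)
# [[1 2]
#  [3 5]
#  [6 7]]
# exactly the flat positions of the off-diagonal elements of a 3x3 array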
Approach #3 (Strides Strikes!)
Abusing the regular pattern of non-diagonal elements from the previous approach, we can introduce np.lib.stride_tricks.as_strided and some slicing help, like so -
m = A.shape[0]
strided = np.lib.stride_tricks.as_strided
s0,s1 = A.strides
out = strided(A.ravel()[1:], shape=(m-1,m), strides=(s0+s1,s1)).reshape(m,-1)
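To see why the strided view works (a small demonstration on the 3x3 example): the row stride s0+s1 advances m+1 elements per view row, so one element, the next diagonal entry, is skipped between consecutive rows of the view:
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
m = A.shape[0]
s0, s1 = A.strides
out = np.lib.stride_tricks.as_strided(
    A.ravel()[1:], shape=(m-1, m), strides=(s0+s1, s1)).reshape(m, -1)
print(out)
# [[2 3]
#  [4 6]
#  [7 8]]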
Runtime test
Approaches as funcs :
def skip_diag_masking(A):
    return A[~np.eye(A.shape[0], dtype=bool)].reshape(A.shape[0], -1)

def skip_diag_broadcasting(A):
    m = A.shape[0]
    idx = (np.arange(1, m+1) + (m+1)*np.arange(m-1)[:, None]).reshape(m, -1)
    return A.ravel()[idx]

def skip_diag_strided(A):
    m = A.shape[0]
    strided = np.lib.stride_tricks.as_strided
    s0, s1 = A.strides
    return strided(A.ravel()[1:], shape=(m-1, m), strides=(s0+s1, s1)).reshape(m, -1)
Timings -
In [528]: A = np.random.randint(11,99,(5000,5000))
In [529]: %timeit skip_diag_masking(A)
...: %timeit skip_diag_broadcasting(A)
...: %timeit skip_diag_strided(A)
...:
10 loops, best of 3: 56.1 ms per loop
10 loops, best of 3: 82.1 ms per loop
10 loops, best of 3: 32.6 ms per loop
I know I'm late to this party, but I have what I believe is a simpler solution. So you want to remove the diagonal? Okay cool:
replace it with NaN
filter out the NaNs (this flattens to one dimension, as it can't assume the result will be square)
reset the dimensionality
arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]).astype(float)
np.fill_diagonal(arr, np.nan)
arr[~np.isnan(arr)].reshape(arr.shape[0], arr.shape[1] - 1)
(Note: this converts the array to float, since integer arrays cannot hold NaN.)
Solution steps:
Flatten your array
Delete the diagonal elements, which sit at the flat positions range(0, len(x_no_diag), len(x) + 1)
Reshape your array to (num_rows, num_columns - 1)
The function:
import numpy as np
def remove_diag(x):
    x_no_diag = np.ndarray.flatten(x)
    x_no_diag = np.delete(x_no_diag, range(0, len(x_no_diag), len(x) + 1), 0)
    x_no_diag = x_no_diag.reshape(len(x), len(x) - 1)
    return x_no_diag
Example:
>>> x = np.random.randint(5, size=(3, 3))
>>> x
array([[0, 2, 3],
       [3, 4, 1],
       [2, 4, 0]])
>>> remove_diag(x)
array([[2, 3],
[3, 1],
[2, 4]])
Just with numpy, assuming a square matrix:
new_A = numpy.delete(A,range(0,A.shape[0]**2,(A.shape[0]+1))).reshape(A.shape[0],(A.shape[1]-1))
If you do not mind creating a new array, then you can use a list comprehension:
A = np.array([A[i][A[i] != A[i][i]] for i in range(len(A))])
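One caveat (my note): the comprehension removes elements by value rather than by position, so if a diagonal value is repeated elsewhere in its row, extra elements get dropped:
import numpy as np

A = np.array([[1, 1, 2], [3, 4, 5], [6, 7, 8]])
[A[i][A[i] != A[i][i]] for i in range(len(A))]
# => [array([2]), array([3, 5]), array([6, 7])]  (row 0 loses both 1s)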
Rerunning the same methods as @Divakar, with the list comprehension added:
A = np.random.randint(11,99,(5000,5000))
skip_diag_masking
85.7 ms ± 1.55 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
skip_diag_broadcasting
163 ms ± 1.77 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
skip_diag_strided
52.5 ms ± 2.54 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
skip_diag_list_comp
101 ms ± 347 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Perhaps the cleanest way, based on Divakar's first solution but using len(array) instead of array.shape[0], is:
array_without_diagonal = array[~np.eye(len(array), dtype=bool)].reshape(len(array), -1)
I love all the answers here, but would like to add one in case your numpy object has more than 2 dimensions. In that case, you can use the following adjustment of Divakar's Approach #1:
def remove_diag(A):
    removed = A[~np.eye(A.shape[0], dtype=bool)].reshape(A.shape[0], A.shape[0] - 1, -1)
    return np.squeeze(removed)
Another approach is to use numpy.delete(). Assuming a square matrix, the diagonal sits at every (A.shape[0]+1)-th flat position, so:
numpy.delete(A, range(0, A.shape[0]**2, A.shape[0]+1)).reshape(A.shape[0], A.shape[1]-1)
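A quick check on the question's 3x3 example (my addition):
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(np.delete(A, range(0, A.shape[0]**2, A.shape[0]+1)).reshape(A.shape[0], A.shape[1]-1))
# [[2 3]
#  [4 6]
#  [7 8]]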