How to test if all rows are equal in a numpy array - python

In numpy, is there a nice idiomatic way of testing if all rows are equal in a 2d array?
I can do something like
np.all([np.array_equal(M[0], M[i]) for i in xrange(1,len(M))])
This seems to mix python lists with numpy arrays which is ugly and presumably also slow.
Is there a nicer/neater way?

One way is to check that every row of the array arr is equal to its first row arr[0]:
(arr == arr[0]).all()
Using equality == is fine for integer values, but if arr contains floating point values you could use np.isclose instead to check for equality within a given tolerance:
np.isclose(arr, arr[0]).all()
If your array contains NaN and you want to avoid the tricky NaN != NaN issue, you could combine this approach with np.isnan:
(np.isclose(arr, arr[0]) | np.isnan(arr)).all()
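For illustration, a minimal self-contained example (the sample array here is made up):
import numpy as np

arr = np.array([[1.0, 2.0, 3.0],
                [1.0, 2.0, 3.0],
                [1.0, 2.0, 3.0]])

(arr == arr[0]).all()          # True (exact comparison)
np.isclose(arr, arr[0]).all()  # True (tolerance-based comparison)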

Simply check whether the number of unique items in the array is 1:
>>> arr = np.array([[1]*10 for _ in xrange(5)])
>>> len(np.unique(arr)) == 1
True
A solution inspired from unutbu's answer:
>>> arr = np.array([[1]*10 for _ in xrange(5)])
>>> np.all(np.all(arr == arr[0,:], axis = 1))
True
One problem with your code is that it builds the entire list before np.all() is applied to it, so no short-circuiting can happen. Instead, it would be better to use Python's built-in all() with a generator expression, which can stop at the first mismatching row:
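all(np.array_equal(M[0], M[i]) for i in xrange(1, len(M)))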
Timing comparisons:
>>> M = arr = np.array([[3]*100] + [[2]*100 for _ in xrange(1000)])
>>> %timeit np.all(np.all(arr == arr[0,:], axis = 1))
1000 loops, best of 3: 272 µs per loop
>>> %timeit (np.diff(M, axis=0) == 0).all()
1000 loops, best of 3: 596 µs per loop
>>> %timeit np.all([np.array_equal(M[0], M[i]) for i in xrange(1,len(M))])
100 loops, best of 3: 10.6 ms per loop
>>> %timeit all(np.array_equal(M[0], M[i]) for i in xrange(1,len(M)))
100000 loops, best of 3: 11.3 µs per loop
>>> M = arr = np.array([[2]*100 for _ in xrange(1000)])
>>> %timeit np.all(np.all(arr == arr[0,:], axis = 1))
1000 loops, best of 3: 330 µs per loop
>>> %timeit (np.diff(M, axis=0) == 0).all()
1000 loops, best of 3: 594 µs per loop
>>> %timeit np.all([np.array_equal(M[0], M[i]) for i in xrange(1,len(M))])
100 loops, best of 3: 9.51 ms per loop
>>> %timeit all(np.array_equal(M[0], M[i]) for i in xrange(1,len(M)))
100 loops, best of 3: 9.44 ms per loop

It is worth mentioning that the versions above will not work for multidimensional arrays.
For example, for a three-dimensional image tensor img of shape [256, 256, 3], we may want to check whether the three [256, 256] RGB layers of the image are identical.
In this case, we need to use broadcasting:
(img == img[:, :, 0, np.newaxis]).all()
This is because plain img[:, :, 0] has shape [256, 256], but we need shape [256, 256, 1] for it to broadcast across the layers.
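A small sketch of the shapes involved (the 2x2x3 array here is made up for illustration):
import numpy as np

img = np.zeros((2, 2, 3))
img[:, :, 0] = img[:, :, 1] = img[:, :, 2] = [[1, 2], [3, 4]]  # identical channels

img[:, :, 0].shape                       # (2, 2) - cannot broadcast against (2, 2, 3)
img[:, :, 0, np.newaxis].shape           # (2, 2, 1) - broadcasts across the channel axis
(img == img[:, :, 0, np.newaxis]).all()  # True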

Following up on Alex's answer about NaN, both np.isclose and np.allclose now accept an equal_nan argument:
np.isclose([1.0, np.nan], [1.0, np.nan], equal_nan=True)
np.allclose([1.0, np.nan], [1.0, np.nan], equal_nan=True)
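Applied to the all-rows-equal question, that might look like this (the array is made up for illustration):
import numpy as np

arr = np.array([[1.0, np.nan, 2.0],
                [1.0, np.nan, 2.0]])

np.isclose(arr, arr[0], equal_nan=True).all()  # True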


Is there a function to calculate the sum of numpy ndarray elements returning a 1d array result?

simple example:
a = array([[[1, 0, 0],
            [0, 2, 0],
            [0, 0, 3]],
           [[1, 0, 0],
            [0, 1, 0],
            [0, 0, 1]]])
result = []
for i in a:
    result.append(i.sum())
result = [6, 3]
Is there a numpy function doing this faster? If it helps: a contains only diagonal matrices.
Edit:
I just realized that a contains scipy CSC sparse matrices, i.e. it's a 1D numpy array containing matrices, so I cannot apply the sum function with axis=(1, 2).
A proper use of the axis parameter of np.sum() would do:
import numpy as np
np.sum(a, axis=(1, 2))
# [6, 3]
While the above should be the generically preferred method, if your input is actually diagonal over axes 1 and 2, then summing all the zeros is bound to be inefficient (read O(n² k), with the same n and k as in the gen_a() function below). Using np.sum() on np.diag() inside a loop can be much better (read O(n k)). Possibly, a list comprehension is the way to go:
import numpy as np
np.array([np.sum(np.diag(x)) for x in a])
# [6, 3]
To give some idea of the relative speed, let's write a function to generate inputs of arbitrary size:
def gen_a(n, k):
    return np.array([
        np.diag(np.ones(n, dtype=int))
        if i % 2 else
        np.diag(np.arange(1, n + 1, dtype=int))
        for i in range(k)])

print(gen_a(3, 2))
# [[[1 0 0]
#   [0 2 0]
#   [0 0 3]]
#  [[1 0 0]
#   [0 1 0]
#   [0 0 1]]]
Now, we can time it for different input sizes. I have also included a list comprehension without the np.diag() call, which is fundamentally a slightly more concise version of your approach.
a = gen_a(3, 2)
%timeit np.array([np.sum(np.diag(x)) for x in a])
# 100000 loops, best of 3: 16 µs per loop
%timeit np.sum(a, axis=(1, 2))
# 100000 loops, best of 3: 4.51 µs per loop
%timeit np.array([np.sum(x) for x in a])
# 100000 loops, best of 3: 10 µs per loop
a = gen_a(3000, 2)
%timeit np.array([np.sum(np.diag(x)) for x in a])
# 10000 loops, best of 3: 20.5 µs per loop
%timeit np.sum(a, axis=(1, 2))
# 100 loops, best of 3: 17.8 ms per loop
%timeit np.array([np.sum(x) for x in a])
# 100 loops, best of 3: 17.8 ms per loop
a = gen_a(3, 2000)
%timeit np.array([np.sum(np.diag(x)) for x in a])
# 100 loops, best of 3: 14.8 ms per loop
%timeit np.sum(a, axis=(1, 2))
# 10000 loops, best of 3: 34 µs per loop
%timeit np.array([np.sum(x) for x in a])
# 100 loops, best of 3: 8.93 ms per loop
a = gen_a(300, 200)
%timeit np.array([np.sum(np.diag(x)) for x in a])
# 1000 loops, best of 3: 1.67 ms per loop
%timeit np.sum(a, axis=(1, 2))
# 100 loops, best of 3: 17.8 ms per loop
%timeit np.array([np.sum(x) for x in a])
# 100 loops, best of 3: 19.3 ms per loop
And we observe that, depending on the values of n and k, one or the other solution is faster.
For larger n, the list comprehension gets faster, but only if np.diag() is used.
Conversely, for smaller n and larger k, the raw speed of np.sum() can outperform the explicit looping.
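Another option that may be worth benchmarking for the dense diagonal case (not timed above) is np.trace, which sums the diagonal of every matrix in the stack in a single vectorized call:
import numpy as np

a = gen_a(3, 2)
np.trace(a, axis1=1, axis2=2)  # array([6, 3])
For the sparse case mentioned in the question's edit, axis=(1, 2) is indeed unavailable, so a per-matrix loop is probably unavoidable. A minimal sketch, assuming a is a list (or object array) of scipy.sparse.csc_matrix instances (the matrices below are made up for illustration):
import numpy as np
from scipy.sparse import csc_matrix

a = [csc_matrix(np.diag([1, 2, 3])),
     csc_matrix(np.eye(3, dtype=int))]

np.array([m.sum() for m in a])  # array([6, 3])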

Iterative subtraction of elements in array in Python

I have a large numpy array. Is there a way to subtract from each element every element that comes after it, and store the results in a new list/array, without using a loop?
A simple example of what I mean:
a = numpy.array([4,3,2,1])
result = [4-3, 4-2, 4-1, 3-2, 3-1, 2-1] = [1, 2, 3, 1, 2, 1]
Note that the 'real' array I am working with doesn't contain numbers in sequence. This is just to make the example simple.
I know the result should have n*(n-1)/2 elements, where n is the size of the array.
Is there a way to do this without using a loop, but by repeating the array in a 'smart' way?
Thanks!
temp = a[:, None] - a
result = temp[np.triu_indices(len(a), k=1)]
Perform all pairwise subtractions to produce temp, including subtracting elements from themselves and subtracting earlier elements from later elements, then use triu_indices to select the results we want. (a[:, None] adds an extra length-1 axis to a.)
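As a quick sanity check on the example from the question:
import numpy as np

a = np.array([4, 3, 2, 1])
temp = a[:, None] - a
temp[np.triu_indices(len(a), k=1)]  # array([1, 2, 3, 1, 2, 1])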
Note that almost all of the runtime is spent constructing result from temp (because triu_indices is slow and using indices to select the upper triangle of an array is slow). If you can use temp directly, you can save a lot of time:
In [13]: a = numpy.arange(2000)
In [14]: %%timeit
....: temp = a[:, None] - a
....:
100 loops, best of 3: 6.99 ms per loop
In [15]: %%timeit
....: temp = a[:, None] - a
....: result = temp[numpy.triu_indices(len(a), k=1)]
....:
10 loops, best of 3: 51.7 ms per loop
Here's a masking-based approach for the extraction after the broadcasted subtractions; for the mask creation we are again making use of broadcasting (double-broadcasting powered, so to speak) -
r = np.arange(a.size)
out = (a[:, None] - a)[r[:,None] < r]
Runtime test
Vectorized approaches -
# @user2357112's solution
def pairwise_diff_triu_indices_based(a):
    return (a[:, None] - a)[np.triu_indices(len(a), k=1)]

# Proposed in this post
def pairwise_diff_masking_based(a):
    r = np.arange(a.size)
    return (a[:, None] - a)[r[:, None] < r]
Timings -
In [109]: a = np.arange(2000)
In [110]: %timeit pairwise_diff_triu_indices_based(a)
10 loops, best of 3: 36.1 ms per loop
In [111]: %timeit pairwise_diff_masking_based(a)
100 loops, best of 3: 11.8 ms per loop
A closer look at the performance parameters involved
Let's dig a bit deeper into the timings on this setup to study how much the mask-based approach helps. For comparison there are two parts - mask creation vs. indices creation, and mask-based boolean indexing vs. integer-based indexing.
How much mask creation helps?
In [37]: r = np.arange(a.size)
In [38]: %timeit np.arange(a.size)
1000000 loops, best of 3: 1.88 µs per loop
In [39]: %timeit r[:,None] < r
100 loops, best of 3: 3 ms per loop
In [40]: %timeit np.triu_indices(len(a), k=1)
100 loops, best of 3: 14.7 ms per loop
About 5x improvement on mask creation over index setup.
How much boolean indexing helps against integer based indexing?
In [41]: mask = r[:,None] < r
In [42]: idx = np.triu_indices(len(a), k=1)
In [43]: subs = a[:, None] - a
In [44]: %timeit subs[mask]
100 loops, best of 3: 4.15 ms per loop
In [45]: %timeit subs[idx]
100 loops, best of 3: 10.9 ms per loop
About 2.5x improvement here.
a = [4, 3, 2, 1]
differences = ((x - y) for i, x in enumerate(a) for y in a[i+1:])
for diff in differences:
    # do something with the difference.
    pass
Check out itertools.combinations:
from itertools import combinations
l = [4, 3, 2, 1]
result = []
for n1, n2 in combinations(l, 2):
    result.append(n1 - n2)
print(result)
Results in:
[1, 2, 3, 1, 2, 1]
combinations returns an iterator, so this is good for very large lists :)

Number of elements of array less than each element of cutoff array in python

I've got a numpy array of strictly increasing "cutoff" values of length m, and a pandas series of values of length n (though the index isn't important and it could be cast to a numpy array).
I need to come up with an efficient way of spitting out a length m vector of counts of the number of elements in the pandas series less than the jth element of the "cutoff" array.
I could do this via a list comprehension:
output = array([(pan_series < cutoff_val).sum() for cutoff_val in cutoff_ar])
but I was wondering if there were any way to do this that leveraged more of numpy's magic speed, as I have to do this quite a few times inside multiple loops and it keeps crashing my computer.
Thanks!
Is this what you are looking for?
In [36]: a = np.random.random(20)
In [37]: a
Out[37]:
array([ 0.68574307, 0.15743428, 0.68006876, 0.63572484, 0.26279663,
0.14346269, 0.56267286, 0.47250091, 0.91168387, 0.98915746,
0.22174062, 0.11930722, 0.30848231, 0.1550406 , 0.60717858,
0.23805205, 0.57718675, 0.78075297, 0.17083826, 0.87301963])
In [38]: b = np.array((0.3,0.7))
In [39]: np.sum(a[:,None]<b[None,:], axis=0)
Out[39]: array([ 8, 16])
In [40]: np.sum(a[:,None]<b, axis=0) # b's new axis above is unnecessary...
Out[40]: array([ 8, 16])
In [41]: (a[:,None]<b).sum(axis=0) # even simpler
Out[41]: array([ 8, 16])
Timings are always well received (for a longish, 2E6 elements array)
In [47]: a = np.random.random(2000000)
In [48]: %timeit (a[:,None]<b).sum(axis=0)
10 loops, best of 3: 78.2 ms per loop
In [49]: %timeit np.searchsorted(a, b, 'right',sorter=a.argsort())
1 loop, best of 3: 448 ms per loop
For a smaller array
In [50]: a = np.random.random(2000)
In [51]: %timeit (a[:,None]<b).sum(axis=0)
10000 loops, best of 3: 89 µs per loop
In [52]: %timeit np.searchsorted(a, b, 'right',sorter=a.argsort())
The slowest run took 4.86 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 141 µs per loop
Edit
Divakar says that things may be different for lengthy b arrays; let's see:
In [71]: a = np.random.random(2000)
In [72]: b =np.random.random(200)
In [73]: %timeit (a[:,None]<b).sum(axis=0)
1000 loops, best of 3: 1.44 ms per loop
In [74]: %timeit np.searchsorted(a, b, 'right',sorter=a.argsort())
10000 loops, best of 3: 172 µs per loop
Quite different indeed! Thank you for prompting my curiosity.
The OP should probably test for their use case: is the sample very long with respect to the cutoff sequence or not, and where is the balance point?
Edit #2
I made a blooper in my timings: I forgot the axis=0 argument to .sum().
I've edited the timings with the corrected statement and, of course, the corrected timings. My apologies.
You can use np.searchsorted for some NumPy magic -
# Convert to numpy array for some "magic"
pan_series_arr = np.array(pan_series)
# Let the magic begin!
sortidx = pan_series_arr.argsort()
out = np.searchsorted(pan_series_arr,cutoff_ar,'right',sorter=sortidx)
Explanation
You are performing [(pan_series < cutoff_val).sum() for cutoff_val in cutoff_ar], i.e. for each element in cutoff_ar we are counting the number of pan_series elements that are less than it. With np.searchsorted, we instead look for the positions at which the elements of cutoff_ar would be inserted into a sorted pan_series_arr, taking the position to the 'right' of any equal elements. These insertion indices essentially represent the number of pan_series elements that fall before the current cutoff_ar element, thus giving us our desired output.
Sample run
In [302]: cutoff_ar
Out[302]: array([ 1, 3, 9, 44, 63, 90])
In [303]: pan_series_arr
Out[303]: array([ 2, 8, 69, 55, 97])
In [304]: [(pan_series_arr < cutoff_val).sum() for cutoff_val in cutoff_ar]
Out[304]: [0, 1, 2, 2, 3, 4]
In [305]: sortidx = pan_series_arr.argsort()
...: out = np.searchsorted(pan_series_arr,cutoff_ar,'right',sorter=sortidx)
...:
In [306]: out
Out[306]: array([0, 1, 2, 2, 3, 4])
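One detail worth noting: the original list comprehension counts elements strictly less than each cutoff, which corresponds to side='left' in np.searchsorted, whereas side='right' also counts elements equal to the cutoff. With no ties (as in the sample run above) both give the same answer; with ties they differ. A small made-up example:
import numpy as np

vals = np.array([2, 8, 8, 55])
cutoffs = np.array([8, 60])

[(vals < c).sum() for c in cutoffs]              # [1, 4]
np.searchsorted(np.sort(vals), cutoffs, 'left')   # array([1, 4]) - strictly less than
np.searchsorted(np.sort(vals), cutoffs, 'right')  # array([3, 4]) - less than or equal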

Faster way to do `A[tuple(B.T)]` in Numpy

I have an M by N array I, each row of which is an index into an N-dimensional array A. I want a vectorized expression to get the 1-d array of the M indexed values from A. I found that A[tuple(I.T)] does the right thing, but profiling shows it to be very expensive despite being vectorized. It is also not particularly elegant or "natural", and A[I] and A[I.T] do something completely different.
What is the right way to do this?
It should also work for assignment, like
A[tuple(I.T)] = 1
I think you are talking about something like:
In [398]: A=np.arange(24).reshape(4,6)
In [401]: I=np.array([[0,1],[1,2],[3,4],[0,0],[2,5]])
In [402]: tuple(I.T)
Out[402]: (array([0, 1, 3, 0, 2]), array([1, 2, 4, 0, 5]))
In [403]: A[tuple(I.T)]
Out[403]: array([ 1, 8, 22, 0, 17])
This is http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#purely-integer-array-indexing - purely integer array advanced indexing.
This is always going to be slower than basic indexing, which returns a view. Basic indexing picks contiguous blocks of data, or values that can be selected with strides. That isn't possible with your indexing.
Look at some timings:
In [404]: timeit tuple(I.T)
100000 loops, best of 3: 3.4 µs per loop
In [405]: timeit A[tuple(I.T)]
100000 loops, best of 3: 10 µs per loop
In [406]: %%timeit i,j=tuple(I.T)
.....: A[i,j]
.....:
100000 loops, best of 3: 4.86 µs per loop
Constructing the tuple takes about 1/3 of the time. i,j=I.T is just a bit faster. But that indexing is the largest piece.
A[i,j] is the same as A[(i,j)] (as is A.__getitem__((i,j))). So wrapping I.T in tuple just produces the 2 indexing arrays, one for each dimension.
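A quick illustration of that equivalence, using the A and I from In [398] and In [401] above:
i, j = tuple(I.T)
np.array_equal(A[i, j], A[(i, j)])              # True
np.array_equal(A[i, j], A.__getitem__((i, j)))  # True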
It is faster to index on a flattened version of the array:
In [420]: J= np.ravel_multi_index(tuple(I.T),A.shape)
In [421]: J
Out[421]: array([ 1, 8, 22, 0, 17], dtype=int32)
In [422]: A.flat[J]
Out[422]: array([ 1, 8, 22, 0, 17])
In [425]: timeit A.flat[J]
1000000 loops, best of 3: 1.56 µs per loop
In [426]: %%timeit
.....: J= np.ravel_multi_index(tuple(I.T),A.shape)
.....: A.flat[J]
.....:
100000 loops, best of 3: 11.2 µs per loop
So being able to precompute and reuse the indexes will save you time, but there's no way of getting around the fact that selecting a bunch of individual values from A takes extra time.
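Since the question also asks about assignment, the same precomputed linear index can be reused there too; a small sketch with the A and I from above:
J = np.ravel_multi_index(tuple(I.T), A.shape)
A.flat[J] = 1     # same effect as A[tuple(I.T)] = 1
A[tuple(I.T)]     # array([1, 1, 1, 1, 1])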
Just for fun, compare the time it takes to index A with each row of I:
In [442]: timeit np.array([A[tuple(i)] for i in I])
100000 loops, best of 3: 17.3 µs per loop
In [443]: timeit np.array([A[i,j] for i,j in I])
100000 loops, best of 3: 15.7 µs per loop
You can use linear indexing another way, like so -
def ravel_einsum(A, I):
    # Get A's shape and calculate cumulative dimensions based on it
    shp = np.asarray(A.shape)
    cumdims = np.append(1, shp[::-1][:-1].cumprod())[::-1]
    # Use linear indexing of A to extract the elements of A that correspond
    # to the rows of I
    return A.ravel()[np.einsum('ij,j->i', I, cumdims)]
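A quick check (reusing the small A and I from the earlier answer) that ravel_einsum matches the tuple-based indexing:
import numpy as np

A = np.arange(24).reshape(4, 6)
I = np.array([[0, 1], [1, 2], [3, 4], [0, 0], [2, 5]])

A[tuple(I.T)]       # array([ 1,  8, 22,  0, 17])
ravel_einsum(A, I)  # array([ 1,  8, 22,  0, 17])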
Runtime tests
Case #1:
In [84]: # Inputs
...: A = np.random.randint(0,10,(3,5,2,4,5,2,6,8,2,5,3,4,3))
...: I = np.mod(np.random.randint(0,10,(5,A.ndim)),A.shape)
...:
In [85]: %timeit A[tuple(I.T)]
10000 loops, best of 3: 27.7 µs per loop
In [86]: %timeit ravel_einsum(A,I)
10000 loops, best of 3: 48.3 µs per loop
Case #2:
In [87]: # Inputs
...: A = np.random.randint(0,10,(3,5,4,2))
...: I = np.mod(np.random.randint(0,5,(10000,A.ndim)),A.shape)
...:
In [88]: %timeit A[tuple(I.T)]
1000 loops, best of 3: 357 µs per loop
In [89]: %timeit ravel_einsum(A,I)
1000 loops, best of 3: 240 µs per loop
Case #3:
In [90]: # Inputs
...: A = np.random.randint(0,10,(30,50,40,20))
...: I = np.mod(np.random.randint(0,50,(5000,A.ndim)),A.shape)
...:
In [91]: %timeit A[tuple(I.T)]
1000 loops, best of 3: 220 µs per loop
In [92]: %timeit ravel_einsum(A,I)
10000 loops, best of 3: 168 µs per loop

Is there a better way to broadcast arrays?

I want to broadcast an array b to the shape it would take if it were in an arithmetic operation with another array a.
For example, if a.shape == (3,3) and b were a scalar, I want to get an array whose shape is (3,3) and is filled with that scalar.
One way to do this is like this:
>>> import numpy as np
>>> a = np.arange(9).reshape((3,3))
>>> b = 1 + a*0
>>> b
array([[1, 1, 1],
[1, 1, 1],
[1, 1, 1]])
Although this works in practice, I can't help feeling it looks a bit weird, and it wouldn't be obvious to someone else looking at the code what I was trying to do.
Is there any more elegant way to do this? I've looked at np.broadcast in the documentation, but using it is orders of magnitude slower:
In [1]: a = np.arange(10000).reshape((100,100))
In [2]: %timeit 1 + a*0
10000 loops, best of 3: 31.9 us per loop
In [3]: %timeit bc = np.broadcast(a,1);np.fromiter((v for u, v in bc),float).reshape(bc.shape)
100 loops, best of 3: 5.2 ms per loop
In [4]: 5.2e-3/32e-6
Out[4]: 162.5
If you just want to fill an array with a scalar, fill is probably the best choice. But it sounds like you want something more generalized. Rather than using broadcast you can use broadcast_arrays to get the result that (I think) you want.
>>> a = numpy.arange(9).reshape(3, 3)
>>> numpy.broadcast_arrays(a, 1)[1]
array([[1, 1, 1],
[1, 1, 1],
[1, 1, 1]])
This generalizes to any two broadcastable shapes:
>>> numpy.broadcast_arrays(a, [1, 2, 3])[1]
array([[1, 2, 3],
[1, 2, 3],
[1, 2, 3]])
It's not quite as fast as your ufunc-based method, but it's still on the same order of magnitude:
>>> %timeit 1 + a * 0
10000 loops, best of 3: 23.2 us per loop
>>> %timeit numpy.broadcast_arrays(a, 1)[1]
10000 loops, best of 3: 52.3 us per loop
But for scalars, fill is still the clear front-runner:
>>> %timeit b = numpy.empty_like(a, dtype='i8'); b.fill(1)
100000 loops, best of 3: 6.59 us per loop
Finally, further testing shows that the fastest approach, in at least some cases, is to multiply by ones:
>>> %timeit numpy.broadcast_arrays(a, numpy.arange(100))[1]
10000 loops, best of 3: 53.4 us per loop
>>> %timeit (1 + a * 0) * numpy.arange(100)
10000 loops, best of 3: 45.9 us per loop
>>> %timeit b = numpy.ones_like(a, dtype='i8'); b * numpy.arange(100)
10000 loops, best of 3: 28.9 us per loop
The fastest and cleanest solution I know is:
b_arr = numpy.empty(a.shape) # Empty array
b_arr.fill(b) # Filling with one value
fill sounds like the simplest way:
>>> a = np.arange(9).reshape((3,3))
>>> a
array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
>>> a.fill(10)
>>> a
array([[10, 10, 10],
[10, 10, 10],
[10, 10, 10]])
EDIT: As @EOL points out, you don't need arange if you want to create a new array; np.empty((100,100)) (or whatever shape) is better for this.
Timings:
In [3]: a = np.arange(10000).reshape((100,100))
In [4]: %timeit 1 + a*0
100000 loops, best of 3: 19.9 us per loop
In [5]: a = np.arange(10000).reshape((100,100))
In [6]: %timeit a.fill(1)
100000 loops, best of 3: 3.73 us per loop
If you just need to broadcast a scalar to some arbitrary shape, you can do something like this:
a = b*np.ones(shape=(3,3))
Edit: np.tile is more general. You can use it to duplicate any scalar/vector in any number of dimensions:
b = 1
N = 100
a = np.tile(b, reps=(N, N))
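On reasonably recent numpy versions (a sketch assuming numpy >= 1.10, where np.broadcast_to was added), np.full and np.broadcast_to cover both use cases directly:
import numpy as np

a = np.arange(9).reshape(3, 3)

np.full(a.shape, 1)                  # new (3, 3) array filled with 1
np.broadcast_to([1, 2, 3], a.shape)  # read-only broadcast view, no copy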
