I have a function that produces an array like this:
my_array = np.array([list(str(i).zfill(4)) for i in range(10000)], dtype=int)
Which outputs:
array([[0, 0, 0, 0],
[0, 0, 0, 1],
[0, 0, 0, 2],
...,
[9, 9, 9, 7],
[9, 9, 9, 8],
[9, 9, 9, 9]])
As you can see, by converting ints to strings and lists and then back to int, this is highly inefficient, and my real need is for a much larger array (a larger range). I tried looking into NumPy for a more efficient way to generate this array, but could not find one. The best I've got so far is arange, which gives the range 0...9999 but not split into lists of digits.
Any ideas?
Here's one based on cartesian_product_broadcasted -
import functools
import numpy as np

def cartesian_product_ranges(shape, out_dtype='int'):
    # Build one aranged axis per requested range and broadcast them
    # against each other to get every combination.
    arrays = [np.arange(s, dtype=out_dtype) for s in shape]
    broadcastable = np.ix_(*arrays)
    broadcasted = np.broadcast_arrays(*broadcastable)
    rows, cols = functools.reduce(np.multiply, broadcasted[0].shape), len(broadcasted)
    # Fill a flat output buffer one column block at a time.
    out = np.empty(rows * cols, dtype=out_dtype)
    start, end = 0, rows
    for a in broadcasted:
        out[start:end] = a.reshape(-1)
        start, end = end, end + rows
    N = len(shape)
    return np.moveaxis(out.reshape((-1,) + tuple(shape)), 0, -1).reshape(-1, N)
Sample run -
In [116]: cartesian_product_ranges([3,2,4])
Out[116]:
array([[0, 0, 0],
[0, 0, 1],
[0, 0, 2],
[0, 0, 3],
[0, 1, 0],
[0, 1, 1],
[0, 1, 2],
[0, 1, 3],
[1, 0, 0],
[1, 0, 1],
[1, 0, 2],
[1, 0, 3],
[1, 1, 0],
[1, 1, 1],
[1, 1, 2],
[1, 1, 3],
[2, 0, 0],
[2, 0, 1],
[2, 0, 2],
[2, 0, 3],
[2, 1, 0],
[2, 1, 1],
[2, 1, 2],
[2, 1, 3]])
Run and timings on 10-ranged array with 4 cols -
In [119]: cartesian_product_ranges([10]*4)
Out[119]:
array([[0, 0, 0, 0],
[0, 0, 0, 1],
[0, 0, 0, 2],
...,
[9, 9, 9, 7],
[9, 9, 9, 8],
[9, 9, 9, 9]])
In [120]: cartesian_product_ranges([10]*4).shape
Out[120]: (10000, 4)
In [121]: %timeit cartesian_product_ranges([10]*4)
10000 loops, best of 3: 105 µs per loop
In [122]: %timeit np.array([list(str(i).zfill(4)) for i in range(10000)], dtype=int)
100 loops, best of 3: 16.7 ms per loop
In [123]: 16700.0/105
Out[123]: 159.04761904761904
Around 160x speedup!
For 10-ranged array with 9 columns, we can use lower-precision uint8 dtype -
In [7]: %timeit cartesian_product_ranges([10]*9, out_dtype=np.uint8)
1 loop, best of 3: 3.36 s per loop
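To see why the lower precision matters at this size: [10]*9 means 10**9 rows by 9 columns. A back-of-envelope check (my own arithmetic, not from the original answer):
rows, cols = 10 ** 9, 9
print(rows * cols * 1 / 1e9)  # uint8: 9.0 GB
print(rows * cols * 8 / 1e9)  # int64: 72.0 GB
So uint8 keeps the output around 9 GB, where the default int64 would need roughly 72 GB.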
You can use itertools.product for this.
Simply provide range(10) as an argument, and the number of digits you want as the argument for repeat.
Conveniently, the itertools iterator returns the elements in sorted order, so you do not have to perform a secondary sorting step by yourself.
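In code, the direct construction might look like this (a minimal sketch of the approach described above):
import itertools
import numpy as np

# Every 4-digit tuple of the digits 0-9, already in ascending order.
my_array = np.array(list(itertools.product(range(10), repeat=4)))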
Below is an evaluation of my code:
import timeit
if __name__ == "__main__":
    # time run: 14.20635
    print(timeit.timeit("np.array([list(str(i).zfill(4)) for i in range(10000)], dtype=int)",
                        "import numpy as np",
                        number=1000))
    # time run: 5.00319
    print(timeit.timeit("np.array(list(itertools.product(range(10), repeat=4)))",
                        "import itertools; import numpy as np",
                        number=1000))
I would solve this with a combination of np.tile and np.repeat, assembling the columns and then np.column_stack-ing them.
This pure NumPy solution then becomes nearly a one-liner:
import numpy as np

n = 10000
x = np.arange(10)
a = [np.tile(np.repeat(x, 10 ** k), n // 10 ** (k + 1)) for k in range(int(np.log10(n)))]
y = np.column_stack(a[::-1])  # flip the list; the first entry is the rightmost column
A more verbose version, to see what happens, can be written like this:
n = 10000
x = np.arange(10)
x0 = np.tile(np.repeat(x, 1), n // 10)
x1 = np.tile(np.repeat(x, 10), n // 100)
x2 = np.tile(np.repeat(x, 100), n // 1000)
x3 = np.tile(np.repeat(x, 1000), n // 10000)
Now replace the numbers with exponents and get the number of columns using the log10.
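As a sketch, the same pattern generalizes to any base and digit count (the function name digit_table is mine, not from the answer):
import numpy as np

def digit_table(base=10, n_digits=4):
    # Column k (counting from the right) repeats each digit base**k times,
    # then tiles that block to fill all base**n_digits rows.
    n = base ** n_digits
    x = np.arange(base)
    cols = [np.tile(np.repeat(x, base ** k), n // base ** (k + 1))
            for k in range(n_digits)]
    return np.column_stack(cols[::-1])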
Speed test:
import timeit
s = """
n = 10000
x = np.arange(10)
a = [np.tile(np.repeat(x, 10 ** k), n/(10 ** (k+1))) for k in range(int(np.log10(n)))]
y = np.column_stack(a[::-1])
"""
n_runs = 100000
t = timeit.timeit(s,
"import numpy as np",
number=n_runs)
print(t, t/n_runs)
About 260 µs on my slow machine (7 years old).
A fast solution is to use np.meshgrid to create all the columns, then sort the columns on, for instance, element 123 or 1234 so that they are in the right order, and then make an array out of them.
n_digits = 4
digits = np.arange(10)
columns = [c.ravel() for c in np.meshgrid(*[digits] * n_digits)]
# list.sort() sorts in place and returns None, so don't assign its result
columns.sort(key=lambda x: x[int("".join(str(d) for d in range(n_digits)))])
out_array = np.array(columns).T
np.all(out_array == my_array)
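As a side note, np.meshgrid also takes an indexing='ij' argument that makes the first axis vary slowest, so the raveled columns come out in the desired order and the sorting step can be skipped entirely (a small sketch based on the code above):
import numpy as np

n_digits = 4
digits = np.arange(10)
columns = [c.ravel() for c in np.meshgrid(*[digits] * n_digits, indexing='ij')]
out_array = np.array(columns).T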
There are other one-liners to solve this
import numpy as np
y = np.array([index for index in np.ndindex(10, 10, 10, 10)])
This seems to be much slower.
Or
import numpy as np
from sklearn.utils.extmath import cartesian
x = np.arange(10)
y = cartesian((x, x, x, x))
This seems to be slightly slower than the accepted answer.
I want to build a numpy array from the result of itertools.product. My first approach was simple:
from itertools import product
import numpy as np
max_init = 6
init_values = range(1, max_init + 1)
repetitions = 12
result = np.array(list(product(init_values, repeat=repetitions)))
This code works well for "small" repetitions (like <=4), but with "large" values (>= 12) it completely hogs the memory and crashes. I assumed that building the list was the thing eating all the RAM, so I searched how to make it directly with an array. I found Numpy equivalent of itertools.product and Using numpy to build an array of all combinations of two arrays.
So, I tested the following alternatives:
Alternative #1:
results = np.empty((max_init**repetitions, repetitions))
for i, row in enumerate(product(init_values, repeat=repetitions)):
    results[i] = row
Alternative #2:
init_values_args = [init_values] * repetitions
results = np.array(np.meshgrid(*init_values_args)).T.reshape(-1, repetitions)
Alternative #3:
results = np.indices([max_init] * repetitions).reshape(repetitions, -1).T + 1
#1 is extremely slow. I didn't have enough patience to let it finish (after a few minutes of processing on a 2017 MacBook Pro). #2 and #3 eat all the memory until the python interpreter crashes, as with the initial approach.
After that, I thought that I could express the same information in a different way that was still useful for me: a dict where the keys would be all the possible (sorted) combinations, and the values would be the counting of these combinations. So I tried:
Alternative #4:
from collections import Counter
def sorted_product(iterable, repeat=1):
    for el in product(iterable, repeat=repeat):
        yield tuple(sorted(el))

def count_product(repetitions=1, max_init=6):
    init_values = range(1, max_init + 1)
    sp = sorted_product(init_values, repeat=repetitions)
    counted_sp = Counter(sp)
    return np.array(list(counted_sp.values())), \
           np.array(list(counted_sp.keys()))

cnt, values = count_product(repetitions=repetitions, max_init=max_init)
But the line counted_sp = Counter(sp), which consumes the whole generator, is also too slow for my needs (it also ran for several minutes before I canceled it).
Is there another way to generate the same data (or a different data structure containing the same information) that does not have the mentioned shortcomings of being too slow or using too much memory?
PS: I tested all the implementations above against my tests with a small repetitions value, and all the tests passed, so they give consistent results.
I hope that editing the question is the best way to expand it. Otherwise, let me know and I'll post where I should.
After reading the first two answers below and thinking about it, I agree that I am approaching the issue from the wrong angle. Instead of going with a "brute force" approach, I should use probabilities and work with those.
My intention is, later on, for each combination:
- Count how many values are under a threshold X.
- Count how many values are equal to or over threshold X and not over a threshold Y.
- Count how many values are over threshold Y.
And group the combinations that have the same counts.
As an illustrative example:
If I roll 12 six-sided dice, what's the probability of having M dice with a value < 3, N dice with a value >= 3 and <= 5, and P dice with a value > 5, for all possible combinations of M, N, and P?
So, I think that I'll close this question in a few days while I go with this new approach. Thank you for all the feedback and your time!
The number of tuples that list(product(range(1,7), repeat=12)) makes is 6**12 = 2,176,782,336. Whether as a list or an array, that's probably too large for most computers.
In [119]: len(list(product(range(1,7),repeat=12)))
....
MemoryError:
Trying to make an array of that size directly:
In [129]: A = np.ones((6**12,12),int)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-129-e833a9e859e0> in <module>()
----> 1 A = np.ones((6**12,12),int)
/usr/local/lib/python3.5/dist-packages/numpy/core/numeric.py in ones(shape, dtype, order)
190
191 """
--> 192 a = empty(shape, dtype, order)
193 multiarray.copyto(a, 1, casting='unsafe')
194 return a
ValueError: Maximum allowed dimension exceeded
Array memory size, at 4 bytes per item:
In [130]: 4*12*6**12
Out[130]: 104485552128
That's about 100 GB.
Why do you need to generate 2 billion combinations of 12 numbers?
So with your Counter you reduce the number of items
In [138]: sp = sorted_product(range(1,7), 2)
In [139]: counter=Counter(sp)
In [140]: counter
Out[140]:
Counter({(1, 1): 1,
(1, 2): 2,
(1, 3): 2,
(1, 4): 2,
(1, 5): 2,
(1, 6): 2,
(2, 2): 1,
(2, 3): 2,
(2, 4): 2,
(2, 5): 2,
(2, 6): 2,
(3, 3): 1,
(3, 4): 2,
(3, 5): 2,
(3, 6): 2,
(4, 4): 1,
(4, 5): 2,
(4, 6): 2,
(5, 5): 1,
(5, 6): 2,
(6, 6): 1})
from 36 to 21 (for 2 repetitions). It shouldn't be hard to generalize this to more repetitions (these sorted tuples are really combinations with replacement, not permutations), as shown in the sketch below, though it will still push time and/or memory boundaries.
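A hedged sketch of that generalization: each sorted tuple is a combination with replacement, and its multiplicity in the full product is a multinomial coefficient, so the counts can be computed without enumerating all 6**12 tuples (the function name counted_sorted_product is mine):
from collections import Counter
from itertools import combinations_with_replacement
from math import factorial

def counted_sorted_product(values, repeat):
    # Each sorted tuple occurs repeat! / (m1! * m2! * ...) times in the full
    # product, where the m_i are the multiplicities of its distinct values.
    for combo in combinations_with_replacement(values, repeat):
        count = factorial(repeat)
        for m in Counter(combo).values():
            count //= factorial(m)
        yield combo, count

# dict(counted_sorted_product(range(1, 7), 2)) reproduces the Counter above,
# e.g. (1, 1) -> 1 and (1, 2) -> 2.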
A variant on meshgrid using mgrid:
In [175]: n=7; A=np.mgrid[[slice(1,7)]*n].reshape(n,-1).T
In [176]: A.shape
Out[176]: (279936, 7)
In [177]: B=np.array(list(product(range(1,7),repeat=7)))
In [178]: B.shape
Out[178]: (279936, 7)
In [179]: A[:10]
Out[179]:
array([[1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 2],
[1, 1, 1, 1, 1, 1, 3],
[1, 1, 1, 1, 1, 1, 4],
[1, 1, 1, 1, 1, 1, 5],
[1, 1, 1, 1, 1, 1, 6],
[1, 1, 1, 1, 1, 2, 1],
[1, 1, 1, 1, 1, 2, 2],
[1, 1, 1, 1, 1, 2, 3],
[1, 1, 1, 1, 1, 2, 4]])
In [180]: B[:10]
Out[180]:
array([[1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 2],
[1, 1, 1, 1, 1, 1, 3],
[1, 1, 1, 1, 1, 1, 4],
[1, 1, 1, 1, 1, 1, 5],
[1, 1, 1, 1, 1, 1, 6],
[1, 1, 1, 1, 1, 2, 1],
[1, 1, 1, 1, 1, 2, 2],
[1, 1, 1, 1, 1, 2, 3],
[1, 1, 1, 1, 1, 2, 4]])
In [181]: np.allclose(A,B)
Out[181]: True
mgrid is quite a bit faster:
In [182]: timeit B=np.array(list(product(range(1,7),repeat=7)))
317 ms ± 3.58 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [183]: timeit A=np.mgrid[[slice(1,7)]*n].reshape(n,-1).T
13.9 ms ± 242 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
but, yes, it will have the same overall memory usage and limit.
With n=10,
ValueError: array is too big; `arr.size * arr.dtype.itemsize` is larger than the maximum possible size.
The right answer is: Don't. Whatever you want to do with all these combinations, adjust your approach so that you either generate them one at a time and use them immediately without storing them, or better yet, find a way to get the job done without inspecting every combination. Your current solution works for toy problems, but is not suitable for larger parameters. Explain what you are up to and maybe someone here can help.
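For the dice example in the edited question, that "without inspecting every combination" route is a multinomial calculation. A hedged sketch (the buckets {1,2}, {3,4,5}, {6} are my reading of the thresholds, and bucket_probability is my own name):
from math import factorial

def bucket_probability(m, n, p, n_dice=12):
    # P(M=m, N=n, P=p) for a fair d6 with buckets
    # {1,2} (prob 2/6), {3,4,5} (prob 3/6) and {6} (prob 1/6).
    assert m + n + p == n_dice
    coeff = factorial(n_dice) // (factorial(m) * factorial(n) * factorial(p))
    return coeff * (2 / 6) ** m * (3 / 6) ** n * (1 / 6) ** p

# Summing bucket_probability(m, n, p) over all m + n + p == 12 gives 1.0.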
I need to find unique rows in a numpy.array.
For example:
>>> a # I have
array([[1, 1, 1, 0, 0, 0],
[0, 1, 1, 1, 0, 0],
[0, 1, 1, 1, 0, 0],
[1, 1, 1, 0, 0, 0],
[1, 1, 1, 1, 1, 0]])
>>> new_a # I want to get to
array([[1, 1, 1, 0, 0, 0],
[0, 1, 1, 1, 0, 0],
[1, 1, 1, 1, 1, 0]])
I know that I can create a set and loop over the array, but I am looking for an efficient pure NumPy solution. I believe there is a way to set the data type to void and then use numpy.unique, but I couldn't figure out how to make it work.
As of NumPy 1.13, one can simply choose the axis for selection of unique values in any N-dim array. To get unique rows, one can do:
unique_rows = np.unique(original_array, axis=0)
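Applied to the array a from the question (note that np.unique also sorts the rows lexicographically):
>>> np.unique(a, axis=0)
array([[0, 1, 1, 1, 0, 0],
       [1, 1, 1, 0, 0, 0],
       [1, 1, 1, 1, 1, 0]])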
Yet another possible solution
np.vstack({tuple(row) for row in a})
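Note that recent NumPy versions require a proper sequence rather than a set here, so you may need to wrap the comprehension in list(...):
np.vstack(list({tuple(row) for row in a}))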
An alternative to structured arrays is using a view of a void type that joins the whole row into a single item:
a = np.array([[1, 1, 1, 0, 0, 0],
[0, 1, 1, 1, 0, 0],
[0, 1, 1, 1, 0, 0],
[1, 1, 1, 0, 0, 0],
[1, 1, 1, 1, 1, 0]])
b = np.ascontiguousarray(a).view(np.dtype((np.void, a.dtype.itemsize * a.shape[1])))
_, idx = np.unique(b, return_index=True)
unique_a = a[idx]
>>> unique_a
array([[0, 1, 1, 1, 0, 0],
[1, 1, 1, 0, 0, 0],
[1, 1, 1, 1, 1, 0]])
EDIT
Added np.ascontiguousarray following @seberg's recommendation. This will slow the method down if the array is not already contiguous.
EDIT
The above can be slightly sped up, perhaps at the cost of clarity, by doing:
unique_a = np.unique(b).view(a.dtype).reshape(-1, a.shape[1])
Also, at least on my system, performance wise it is on par, or even better, than the lexsort method:
a = np.random.randint(2, size=(10000, 6))
%timeit np.unique(a.view(np.dtype((np.void, a.dtype.itemsize*a.shape[1])))).view(a.dtype).reshape(-1, a.shape[1])
100 loops, best of 3: 3.17 ms per loop
%timeit ind = np.lexsort(a.T); a[np.concatenate(([True],np.any(a[ind[1:]]!=a[ind[:-1]],axis=1)))]
100 loops, best of 3: 5.93 ms per loop
a = np.random.randint(2, size=(10000, 100))
%timeit np.unique(a.view(np.dtype((np.void, a.dtype.itemsize*a.shape[1])))).view(a.dtype).reshape(-1, a.shape[1])
10 loops, best of 3: 29.9 ms per loop
%timeit ind = np.lexsort(a.T); a[np.concatenate(([True],np.any(a[ind[1:]]!=a[ind[:-1]],axis=1)))]
10 loops, best of 3: 116 ms per loop
If you want to avoid the memory expense of converting to a series of tuples or another similar data structure, you can exploit numpy's structured arrays.
The trick is to view your original array as a structured array where each item corresponds to a row of the original array. This doesn't make a copy, and is quite efficient.
As a quick example:
import numpy as np
data = np.array([[1, 1, 1, 0, 0, 0],
[0, 1, 1, 1, 0, 0],
[0, 1, 1, 1, 0, 0],
[1, 1, 1, 0, 0, 0],
[1, 1, 1, 1, 1, 0]])
ncols = data.shape[1]
dtype = data.dtype.descr * ncols
struct = data.view(dtype)
uniq = np.unique(struct)
uniq = uniq.view(data.dtype).reshape(-1, ncols)
print(uniq)
To understand what's going on, have a look at the intermediary results.
Once we view things as a structured array, each element in the array is a row in your original array. (Basically, it's a similar data structure to a list of tuples.)
In [71]: struct
Out[71]:
array([[(1, 1, 1, 0, 0, 0)],
[(0, 1, 1, 1, 0, 0)],
[(0, 1, 1, 1, 0, 0)],
[(1, 1, 1, 0, 0, 0)],
[(1, 1, 1, 1, 1, 0)]],
dtype=[('f0', '<i8'), ('f1', '<i8'), ('f2', '<i8'), ('f3', '<i8'), ('f4', '<i8'), ('f5', '<i8')])
In [72]: struct[0]
Out[72]:
array([(1, 1, 1, 0, 0, 0)],
dtype=[('f0', '<i8'), ('f1', '<i8'), ('f2', '<i8'), ('f3', '<i8'), ('f4', '<i8'), ('f5', '<i8')])
Once we run numpy.unique, we'll get a structured array back:
In [73]: np.unique(struct)
Out[73]:
array([(0, 1, 1, 1, 0, 0), (1, 1, 1, 0, 0, 0), (1, 1, 1, 1, 1, 0)],
dtype=[('f0', '<i8'), ('f1', '<i8'), ('f2', '<i8'), ('f3', '<i8'), ('f4', '<i8'), ('f5', '<i8')])
That we then need to view as a "normal" array (_ stores the result of the last calculation in ipython, which is why you're seeing _.view...):
In [74]: _.view(data.dtype)
Out[74]: array([0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0])
And then reshape back into a 2D array (-1 is a placeholder that tells numpy to calculate the correct number of rows, given the number of columns):
In [75]: _.reshape(-1, ncols)
Out[75]:
array([[0, 1, 1, 1, 0, 0],
[1, 1, 1, 0, 0, 0],
[1, 1, 1, 1, 1, 0]])
Obviously, if you wanted to be more concise, you could write it as:
import numpy as np
def unique_rows(data):
    uniq = np.unique(data.view(data.dtype.descr * data.shape[1]))
    return uniq.view(data.dtype).reshape(-1, data.shape[1])
data = np.array([[1, 1, 1, 0, 0, 0],
[0, 1, 1, 1, 0, 0],
[0, 1, 1, 1, 0, 0],
[1, 1, 1, 0, 0, 0],
[1, 1, 1, 1, 1, 0]])
print(unique_rows(data))
Which results in:
[[0 1 1 1 0 0]
[1 1 1 0 0 0]
[1 1 1 1 1 0]]
When I run np.unique on np.random.random(100).reshape(10,10), it returns all the unique individual elements, but you want the unique rows, so first you need to put them into tuples:
array = #your numpy array of lists
new_array = [tuple(row) for row in array]
uniques = np.unique(new_array)
That is the only way I see to change the types to do what you want, and I am not sure whether the list iteration to convert to tuples is okay with your "not looping through".
np.unique works by sorting a flattened array, then looking at whether each item is equal to the previous. This can be done manually without flattening:
ind = np.lexsort(a.T)
a[ind[np.concatenate(([True],np.any(a[ind[1:]]!=a[ind[:-1]],axis=1)))]]
This method does not use tuples, and should be much faster and simpler than other methods given here.
NOTE: A previous version of this did not have the ind right after a[, which meant that the wrong indices were used. Also, Joe Kington makes a good point that this does make a variety of intermediate copies. The following method makes fewer, by making a sorted copy and then using views of it:
b = a[np.lexsort(a.T)]
b[np.concatenate(([True], np.any(b[1:] != b[:-1],axis=1)))]
This is faster and uses less memory.
Also, if you want to find unique rows in an ndarray regardless of how many dimensions are in the array, the following will work:
b = a[np.lexsort(a.reshape((a.shape[0], -1)).T)]
b[np.concatenate(([True], np.any(b[1:] != b[:-1], axis=tuple(range(1, a.ndim)))))]
An interesting remaining issue would be if you wanted to sort/unique along an arbitrary axis of an arbitrary-dimension array, something that would be more difficult.
Edit:
To demonstrate the speed differences, I ran a few tests in ipython of the three different methods described in the answers. With your exact a, there isn't too much of a difference, though this version is a bit faster:
In [87]: %timeit unique(a.view(dtype)).view('<i8')
10000 loops, best of 3: 48.4 us per loop
In [88]: %timeit ind = np.lexsort(a.T); a[np.concatenate(([True], np.any(a[ind[1:]]!= a[ind[:-1]], axis=1)))]
10000 loops, best of 3: 37.6 us per loop
In [89]: %timeit b = [tuple(row) for row in a]; np.unique(b)
10000 loops, best of 3: 41.6 us per loop
With a larger a, however, this version ends up being much, much faster:
In [96]: a = np.random.randint(0,2,size=(10000,6))
In [97]: %timeit unique(a.view(dtype)).view('<i8')
10 loops, best of 3: 24.4 ms per loop
In [98]: %timeit b = [tuple(row) for row in a]; np.unique(b)
10 loops, best of 3: 28.2 ms per loop
In [99]: %timeit ind = np.lexsort(a.T); a[np.concatenate(([True],np.any(a[ind[1:]]!= a[ind[:-1]],axis=1)))]
100 loops, best of 3: 3.25 ms per loop
I've compared the suggested alternatives for speed and found that, surprisingly, the void-view unique solution is even a bit faster than NumPy's native unique with the axis argument. If you're looking for speed, you'll want
numpy.unique(
    a.view(numpy.dtype((numpy.void, a.dtype.itemsize * a.shape[1])))
).view(a.dtype).reshape(-1, a.shape[1])
I've implemented that fastest variant in npx.unique_rows.
There is a bug report on GitHub for this, too.
Code to reproduce the plot:
import numpy
import perfplot

def unique_void_view(a):
    return (
        numpy.unique(a.view(numpy.dtype((numpy.void, a.dtype.itemsize * a.shape[1]))))
        .view(a.dtype)
        .reshape(-1, a.shape[1])
    )

def lexsort(a):
    ind = numpy.lexsort(a.T)
    return a[
        ind[numpy.concatenate(([True], numpy.any(a[ind[1:]] != a[ind[:-1]], axis=1)))]
    ]

def vstack(a):
    return numpy.vstack([tuple(row) for row in a])

def unique_axis(a):
    return numpy.unique(a, axis=0)

perfplot.show(
    setup=lambda n: numpy.random.randint(2, size=(n, 20)),
    kernels=[unique_void_view, lexsort, vstack, unique_axis],
    n_range=[2 ** k for k in range(15)],
    xlabel="len(a)",
    equality_check=None,
)
Here is another variation on @Greg's pythonic answer:
np.vstack(set(map(tuple, a)))
I didn’t like any of these answers because none handle floating-point arrays in a linear algebra or vector space sense, where two rows being “equal” means “within some 𝜀”. The one answer that has a tolerance threshold, https://stackoverflow.com/a/26867764/500207, took the threshold to be both element-wise and decimal precision, which works for some cases but isn’t as mathematically general as a true vector distance.
Here’s my version:
import numpy as np
from scipy.spatial.distance import squareform, pdist

def uniqueRows(arr, thresh=0.0, metric='euclidean'):
    "Returns subset of rows that are unique, in terms of Euclidean distance"
    # Pairwise distance matrix between all rows.
    distances = squareform(pdist(arr, metric=metric))
    # Group the indices of rows that lie within thresh of each other.
    idxset = {tuple(np.nonzero(v)[0]) for v in distances <= thresh}
    # Keep one representative row per cluster.
    return arr[[x[0] for x in idxset]]

# With this, unique columns are super-easy:
def uniqueColumns(arr, *args, **kwargs):
    return uniqueRows(arr.T, *args, **kwargs)
The public-domain function above uses scipy.spatial.distance.pdist to find the Euclidean (customizable) distance between each pair of rows. Then it compares each distance to a threshold to find the rows that are within thresh of each other, and returns just one row from each thresh-cluster.
As hinted, the distance metric needn’t be Euclidean—pdist can compute sundry distances including cityblock (Manhattan-norm) and cosine (the angle between vectors).
If thresh=0 (the default), then rows have to be bit-exact to be considered “unique”. Other good values for thresh use scaled machine-precision, i.e., thresh=np.spacing(1)*1e3.
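A hypothetical usage example (the array values are invented for illustration):
arr = np.array([[1.0, 2.0],
                [1.0, 2.0 + 1e-14],
                [3.0, 4.0]])
print(len(uniqueRows(arr)))                              # 3: rows must be bit-exact
print(len(uniqueRows(arr, thresh=np.spacing(1) * 1e3)))  # 2: first two rows merge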
Why not use drop_duplicates from pandas:
>>> timeit pd.DataFrame(image.reshape(-1,3)).drop_duplicates().values
1 loops, best of 3: 3.08 s per loop
>>> timeit np.vstack({tuple(r) for r in image.reshape(-1,3)})
1 loops, best of 3: 51 s per loop
The numpy_indexed package (disclaimer: I am its author) wraps the solution posted by Jaime in a nice and tested interface, plus many more features:
import numpy_indexed as npi
new_a = npi.unique(a) # unique elements over axis=0 (rows) by default
Based on the answers on this page, I have written a function that replicates the capability of MATLAB's unique(input,'rows') function, with the additional feature of accepting a tolerance for checking uniqueness. It also returns the indices such that c = data[ia,:] and data = c[ic,:]. Please report if you see any discrepancies or errors.
import numpy as np

def unique_rows(data, prec=5):
    # Round to `prec` decimals so nearly-equal rows compare equal.
    d_r = np.fix(data * 10 ** prec) / 10 ** prec + 0.0
    b = np.ascontiguousarray(d_r).view(np.dtype((np.void, d_r.dtype.itemsize * d_r.shape[1])))
    _, ia = np.unique(b, return_index=True)
    _, ic = np.unique(b, return_inverse=True)
    return np.unique(b).view(d_r.dtype).reshape(-1, d_r.shape[1]), ia, ic
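A quick hypothetical round-trip check (values invented; note that the returned c holds the rounded rows):
data = np.array([[1.0, 2.0],
                 [1.0, 2.0],
                 [3.0, 4.0]])
c, ia, ic = unique_rows(data)
assert np.array_equal(data[ia, :], c)  # c = data[ia, :]
assert np.array_equal(c[ic, :], data)  # data = c[ic, :]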
Beyond @Jaime's excellent answer, another way to collapse a row is to use a.strides[0] (assuming a is C-contiguous), which is equal to a.dtype.itemsize * a.shape[1]. Furthermore, void(n) is a shortcut for dtype((void, n)). We finally arrive at this shortest version:
a[unique(a.view(void(a.strides[0])),1)[1]]
which gives:
[[0 1 1 1 0 0]
[1 1 1 0 0 0]
[1 1 1 1 1 0]]
np.unique works given a list of tuples:
>>> np.unique([(1, 1), (2, 2), (3, 3), (4, 4), (2, 2)])
Out[9]:
array([[1, 1],
[2, 2],
[3, 3],
[4, 4]])
With a list of lists it raises a TypeError: unhashable type: 'list'
For general purposes such as 3D or higher-dimensional nested arrays, try this:
import numpy as np

def unique_nested_arrays(ar):
    origin_shape = ar.shape
    origin_dtype = ar.dtype
    # Flatten everything after the first axis so each row is 1D.
    ar = ar.reshape(origin_shape[0], np.prod(origin_shape[1:]))
    ar = np.ascontiguousarray(ar)
    # View each row as one structured item so np.unique compares whole rows.
    unique_ar = np.unique(ar.view([('', origin_dtype)] * np.prod(origin_shape[1:])))
    return unique_ar.view(origin_dtype).reshape((unique_ar.shape[0],) + origin_shape[1:])
which satisfies your 2D dataset:
a = np.array([[1, 1, 1, 0, 0, 0],
[0, 1, 1, 1, 0, 0],
[0, 1, 1, 1, 0, 0],
[1, 1, 1, 0, 0, 0],
[1, 1, 1, 1, 1, 0]])
unique_nested_arrays(a)
gives:
array([[0, 1, 1, 1, 0, 0],
[1, 1, 1, 0, 0, 0],
[1, 1, 1, 1, 1, 0]])
But also 3D arrays like:
b = np.array([[[1, 1, 1], [0, 1, 1]],
[[0, 1, 1], [1, 1, 1]],
[[1, 1, 1], [0, 1, 1]],
[[1, 1, 1], [1, 1, 1]]])
unique_nested_arrays(b)
gives:
array([[[0, 1, 1], [1, 1, 1]],
[[1, 1, 1], [0, 1, 1]],
[[1, 1, 1], [1, 1, 1]]])
None of these answers worked for me. I'm assuming it's because my unique rows contained strings and not numbers. However, this answer from another thread did work:
Source: https://stackoverflow.com/a/38461043/5402386
You can use the list methods .count() and .index():
coor = np.array([[10, 10], [12, 9], [10, 5], [12, 9]])
coor_tuple = [tuple(x) for x in coor]
unique_coor = sorted(set(coor_tuple), key=lambda x: coor_tuple.index(x))
unique_count = [coor_tuple.count(x) for x in unique_coor]
unique_index = [coor_tuple.index(x) for x in unique_coor]
We can actually turn an m x n numeric numpy array into an m x 1 numpy string array. Please try using the following function; it provides counts, inverse_idx, etc., just like numpy.unique:
import numpy as np

def uniqueRow(a):
    # This function turns an m x n numpy array into an m x 1 numpy array of
    # strings, so that np.unique can be used.
    # Input: an m x n numpy array (a)
    # Output: unique m' x n numpy array (unique), inverse_idx, and counts
    s = np.chararray((a.shape[0], 1))
    s[:] = '-'
    b = a.astype(str)
    # Join the columns of each row into one '-'-separated string.
    s2 = np.expand_dims(b[:, 0], axis=1) + s + np.expand_dims(b[:, 1], axis=1)
    n = a.shape[1] - 2
    for i in range(0, n):
        s2 = s2 + s + np.expand_dims(b[:, i + 2], axis=1)
    s3, idx, inv_, c = np.unique(s2, return_index=True, return_inverse=True, return_counts=True)
    return a[idx], inv_, c
Example:
A = np.array([[3.17, 9.502, 3.291],
              [9.984, 2.773, 6.852],
              [1.172, 8.885, 4.258],
              [9.73, 7.518, 3.227],
              [8.113, 9.563, 9.117],
              [9.984, 2.773, 6.852],
              [9.73, 7.518, 3.227]])
B, inv_, c = uniqueRow(A)
Results:
B:
[[ 1.172 8.885 4.258]
[ 3.17 9.502 3.291]
[ 8.113 9.563 9.117]
[ 9.73 7.518 3.227]
[ 9.984 2.773 6.852]]
inv_:
[3 4 1 0 2 4 0]
c:
[2 1 1 1 2]
Let's get the entire numpy matrix as a list, drop the duplicates from this list, and finally turn our unique list back into a numpy matrix:
matrix_as_list=data.tolist()
matrix_as_list:
[[1, 1, 1, 0, 0, 0], [0, 1, 1, 1, 0, 0], [0, 1, 1, 1, 0, 0], [1, 1, 1, 0, 0, 0], [1, 1, 1, 1, 1, 0]]
uniq_list = list()
uniq_list.append(matrix_as_list[0])
for item in matrix_as_list:
    if item not in uniq_list:
        uniq_list.append(item)
unique_matrix=np.array(uniq_list)
unique_matrix:
array([[1, 1, 1, 0, 0, 0],
[0, 1, 1, 1, 0, 0],
[1, 1, 1, 1, 1, 0]])
The most straightforward solution is to make the rows a single item by turning them into strings. Each row can then be compared as a whole for uniqueness using numpy. This solution is generalizable; you just need to reshape and transpose your array for other combinations. Here is the solution for the problem provided.
import numpy as np
original = np.array([[1, 1, 1, 0, 0, 0],
[0, 1, 1, 1, 0, 0],
[0, 1, 1, 1, 0, 0],
[1, 1, 1, 0, 0, 0],
[1, 1, 1, 1, 1, 0]])
uniques, index = np.unique([str(i) for i in original], return_index=True)
cleaned = original[index]
print(cleaned)
which gives:
array([[0, 1, 1, 1, 0, 0],
[1, 1, 1, 0, 0, 0],
[1, 1, 1, 1, 1, 0]])
Send my Nobel Prize in the mail.
import numpy as np
original = np.array([[1, 1, 1, 0, 0, 0],
[0, 1, 1, 1, 0, 0],
[0, 1, 1, 1, 0, 0],
[1, 1, 1, 0, 0, 0],
[1, 1, 1, 1, 1, 0]])
# Create a view that treats each row as a single structured item, then
# get the indices of the unique rows.
_, unique_index = np.unique(original.view(original.dtype.descr * original.shape[1]),
                            return_index=True)
# Select the unique rows from the original array.
print(original[unique_index])