Remove first occurrence of elements in a numpy array - python

I have a numpy array:
a = np.array([-1,2,3,-1,5,-2,2,9])
I want to keep only the values in the array that occur at least twice, so the result should be:
a = np.array([-1,2,-1,2])
Is there a way to do this only using numpy?
I have a solution using a dictionary and dictionary filtering, but it is rather slow, and I was wondering whether there is a faster solution using only numpy.
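For reference, a dictionary-based version along the lines described might look like this (a sketch; the original code was not posted):
import numpy as np
from collections import Counter

a = np.array([-1, 2, 3, -1, 5, -2, 2, 9])
counts = Counter(a.tolist())                         # value -> number of occurrences
result = np.array([x for x in a if counts[x] >= 2])  # Python-level loop, hence slow
# array([-1,  2, -1,  2])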
Thanks!

import numpy as np
a = np.array([-1, 2, 3, -1, 5, -2, 2, 9])
values, counts = np.unique(a, return_counts=True)
values_filtered = values[counts >= 2]
result = a[np.isin(a, values_filtered)]
print(result)  # prints [-1  2 -1  2]

import numpy as np
arr = np.array([1, 2, 3, 4, 4, 4, 1])
# keep each element whose value appears more than once in arr
filter_arr = [np.count_nonzero(arr == i) > 1 for i in arr]
newarr = arr[filter_arr]
print(filter_arr)         # [True, False, False, True, True, True, True]
print(np.unique(newarr))  # [1 4]
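A fully vectorised variant of the same idea (a sketch, not one of the posted answers): have np.unique return the inverse mapping and index the counts with it, so every element gets its own count without a per-element membership test:
import numpy as np

a = np.array([-1, 2, 3, -1, 5, -2, 2, 9])
values, inverse, counts = np.unique(a, return_inverse=True, return_counts=True)
result = a[counts[inverse] >= 2]  # counts[inverse] is the per-element occurrence count
# array([-1,  2, -1,  2])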

Thanks a lot!
All answers solved the problem, but the solution from Matvey_coder3 was the fastest.
KR

Related

Python: how to delete duplicates in Nx3 numpy array

I have an Nx3 numpy array, let's say:
a=[[1,1,1],[1,2,3],...,[2,1,3],[2,2,2]]
In my case, I don't care about the position of the elements within each sub-array, and I consider the following to be duplicates:
[1,2,3] == [2,1,3] == [3,1,2] == ...
I would like to delete these duplicates and so get:
a_new = [[1,1,1],[1,2,3],...,[2,2,2]]
The problem is that I have no idea how to do this job.
Any help is welcome, and thanks in advance :)
Use sort and unique:
import numpy as np
a=np.array([[1,1,1],[1,2,3],[2,1,3],[2,2,2]])
np.unique(np.sort(a, axis=1), axis=0)
array([[1, 1, 1],
       [1, 2, 3],
       [2, 2, 2]])
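If you would rather keep the original (unsorted) rows than their sorted canonical forms, return_index gives you the row positions (a sketch along the same lines, not part of the original answer):
import numpy as np

a = np.array([[1, 1, 1], [1, 2, 3], [2, 1, 3], [2, 2, 2]])
_, idx = np.unique(np.sort(a, axis=1), axis=0, return_index=True)
a_new = a[np.sort(idx)]  # first representative of each permutation group, in input order
# array([[1, 1, 1],
#        [1, 2, 3],
#        [2, 2, 2]])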

Change first column of numpy array

How to convert this numpy array:
array = np.array([[np.array([1]),2],[np.array([1]),2],[np.array([1]),2]])
print (array)
[[array([1]) 2]
 [array([1]) 2]
 [array([1]) 2]]
to this numpy array:
print(array)
[[1 2]
 [1 2]
 [1 2]]
How can I achieve this without a for loop?
This is what I tried but it doesn't work:
first_col = array[:,0]
first_col = np.array([i[0] for i in first_col])
I don't even know if answering this is a good idea, since there must be a fundamental flaw in the design to end up in a situation like this, and the correct solution would be to fix that rather than to mitigate the issue by converting the output.
Nevertheless, given the data, it is, interestingly enough, possible to 'unpack' the structure using the numpy array method .astype():
import numpy as np
array = np.array([[np.array([1]),2],[np.array([1]),2],[np.array([1]),2]])
array = array.astype(int)  # alternatively: array.astype(float)
But, as stated above, this is treating the symptom of the problem, rather than the problem itself.
Try using map; the lambda has to pull the scalar out of each length-1 array, and in Python 3 map is lazy, so wrap it in list():
list(map(lambda x: [x[0][0], x[1]], array))
This is one way.
import numpy as np
arr = np.array([[np.array([1]), 2],
                [np.array([1]), 2],
                [np.array([1]), 2]])
np.array([[i[0][0], i[1]] for i in arr])
# array([[1, 2],
#        [1, 2],
#        [1, 2]])
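Another route that avoids the Python-level loop entirely (a sketch, not one of the original answers): take the columns apart and glue them back together with np.concatenate and np.column_stack:
import numpy as np

# the array from the question; dtype=object spelled out so it also
# constructs on newer NumPy versions
array = np.array([[np.array([1]), 2],
                  [np.array([1]), 2],
                  [np.array([1]), 2]], dtype=object)

first = np.concatenate(array[:, 0])       # flattens the length-1 arrays: [1 1 1]
second = array[:, 1].astype(first.dtype)  # the plain integer column
result = np.column_stack((first, second))
# array([[1, 2],
#        [1, 2],
#        [1, 2]])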

Getting the indices of several elements in a NumPy array at once

Is there any way to get the indices of several elements in a NumPy array at once?
E.g.
import numpy as np
a = np.array([1, 2, 4])
b = np.array([1, 2, 3, 10, 4])
I would like to find the index of each element of a in b, namely: [0,1,4].
I find the solution I am using a bit verbose:
import numpy as np
a = np.array([1, 2, 4])
b = np.array([1, 2, 3, 10, 4])
c = np.zeros_like(a)
for i, aa in np.ndenumerate(a):
    c[i] = np.where(b == aa)[0]
print('c: {0}'.format(c))
Output:
c: [0 1 4]
You could use in1d and nonzero (or where for that matter):
>>> np.in1d(b, a).nonzero()[0]
array([0, 1, 4])
This works fine for your example arrays, but in general the array of returned indices does not honour the order of the values in a. This may be a problem depending on what you want to do next.
In that case, a much better answer is the one @Jaime gives here, using searchsorted:
>>> sorter = np.argsort(b)
>>> sorter[np.searchsorted(b, a, sorter=sorter)]
array([0, 1, 4])
This returns the indices for values as they appear in a. For instance:
a = np.array([1, 2, 4])
b = np.array([4, 2, 3, 1])
>>> sorter = np.argsort(b)
>>> sorter[np.searchsorted(b, a, sorter=sorter)]
array([3, 1, 0]) # the other method would return [0, 1, 3]
This is a simple one-liner using the numpy-indexed package (disclaimer: I am its author):
import numpy_indexed as npi
idx = npi.indices(b, a)
The implementation is fully vectorized, and it gives you control over the handling of missing values. Moreover, it works for nd-arrays as well (for instance, finding the indices of rows of a in b).
Most of the solutions here rely on a linear search. You can use np.argsort and np.searchsorted to speed things up dramatically for large arrays:
sorter = b.argsort()
i = sorter[np.searchsorted(b, a, sorter=sorter)]
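One caveat worth spelling out (an addition, not part of the answer): searchsorted returns an insertion point even for values absent from b, so the indices are only meaningful when every element of a actually occurs in b. A cheap sanity check:
import numpy as np

a = np.array([1, 2, 4])
b = np.array([1, 2, 3, 10, 4])

sorter = b.argsort()
pos = np.searchsorted(b, a, sorter=sorter)
i = sorter[pos]                 # raises IndexError if a value sorts past the end of b
assert np.array_equal(b[i], a)  # fails if some value of a is missing from b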
For an order-agnostic solution, you can use np.flatnonzero with np.isin (v 1.13+).
import numpy as np
a = np.array([1, 2, 4])
b = np.array([1, 2, 3, 10, 4])
res = np.flatnonzero(np.isin(b, a))  # NumPy v1.13+
res = np.flatnonzero(np.in1d(b, a))  # earlier versions
# array([0, 1, 4], dtype=int64)
There are a bunch of approaches for getting the index of multiple items at once mentioned in passing in answers to this related question: Is there a NumPy function to return the first index of something in an array?. The wide variety and creativity of the answers suggests there is no single best practice, so if your code above works and is easy to understand, I'd say keep it.
I personally found this approach to be both performant and easy to read: https://stackoverflow.com/a/23994923/3823857
Adapting it for your example:
import numpy as np
a = np.array([1, 2, 4])
b_list = [1, 2, 3, 10, 4]
b_array = np.array(b_list)
indices = [b_list.index(x) for x in a]
vals_at_indices = b_array[indices]
I personally like adding a little bit of error handling in case a value in a does not exist in b.
import numpy as np
a = np.array([1, 2, 4])
b_list = [1, 2, 3, 10, 4]
b_array = np.array(b_list)
b_set = set(b_list)
indices = [b_list.index(x) if x in b_set else np.nan for x in a]
vals_at_indices = b_array[indices]  # beware: this indexing raises if any entry is np.nan
For my use case, it's pretty fast, since it relies on parts of Python that are fast (list comprehensions, .index(), sets, numpy indexing). Would still love to see something that's a NumPy equivalent to VLOOKUP, or even a Pandas merge. But this seems to work for now.
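For what it's worth, the pandas merge wished for above might look like this (a sketch, not from the original thread; it assumes the values in b are unique, since duplicate keys would fan out the join):
import numpy as np
import pandas as pd

a = np.array([1, 2, 4])
b = np.array([1, 2, 3, 10, 4])

# build a lookup table mapping value -> position in b, then left-join a against it
lookup = pd.DataFrame({'val': b, 'idx': np.arange(len(b))})
indices = pd.DataFrame({'val': a}).merge(lookup, on='val', how='left')['idx'].to_numpy()
# array([0, 1, 4]); values of a missing from b come back as NaN instead of raising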

Vectorised code for selecting elements of a 2D array

Well, the following code obviously returns the element at position ind in the matrix:
def select_coord(a, ind):
    return a[ind]
However I don't know how to vectorise this. In other words:
b=np.asarray([[2,3,4,5],[7,6,8,10]])
indices=np.asarray([2,3])
select_coord(b,indices)
Should return [4,10].
Which can be written with a for loop:
def new_select_record(a, indices):
    ret = []
    for i in range(a.shape[0]):
        ret.append(a[i, indices[i]])
    return np.asarray(ret)
Is there a way to write this in a vectorised manner?
To get b[0, 2], b[1, 3]:
>>> import numpy as np
>>> b = np.array([[2,3,4,5], [7,6,8,10]])
>>> indices = np.array([2, 3])
>>> b[np.arange(len(indices)), indices]
array([ 4, 10])
how about: np.diag(b[:,[2,3]])?
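On NumPy 1.15+ there is also np.take_along_axis, which expresses the same gather without building an arange by hand (an addition, not from the thread):
import numpy as np

b = np.asarray([[2, 3, 4, 5], [7, 6, 8, 10]])
indices = np.asarray([2, 3])

# take_along_axis wants the index array to have the same ndim as b,
# hence the [:, None] to give it shape (2, 1)
out = np.take_along_axis(b, indices[:, None], axis=1).ravel()
# array([ 4, 10])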

Determining duplicate values in an array

Suppose I have an array
a = np.array([1, 2, 1, 3, 3, 3, 0])
How can I (efficiently, Pythonically) find which elements of a are duplicates (i.e., non-unique values)? In this case the result would be array([1, 3, 3]) or possibly array([1, 3]) if efficient.
I've come up with a few methods that appear to work:
Masking
m = np.zeros_like(a, dtype=bool)
m[np.unique(a, return_index=True)[1]] = True
a[~m]
Set operations
a[~np.in1d(np.arange(len(a)), np.unique(a, return_index=True)[1], assume_unique=True)]
This one is cute but probably illegal (as a isn't actually unique):
np.setxor1d(a, np.unique(a), assume_unique=True)
Histograms
u, i = np.unique(a, return_inverse=True)
u[np.bincount(i) > 1]
Sorting
s = np.sort(a, axis=None)
s[:-1][s[1:] == s[:-1]]
Pandas
s = pd.Series(a)
s[s.duplicated()]
Is there anything I've missed? I'm not necessarily looking for a numpy-only solution, but it has to work with numpy data types and be efficient on medium-sized data sets (up to 10 million in size).
Conclusions
Testing with a 10 million size data set (on a 2.8GHz Xeon):
a = np.random.randint(10**7, size=10**7)
The fastest is sorting, at 1.1s. The dubious xor1d is second at 2.6s, followed by masking and Pandas Series.duplicated at 3.1s, bincount at 5.6s, and in1d and senderle's setdiff1d both at 7.3s. Steven's Counter is only a little slower, at 10.5s; trailing behind are Burhan's Counter.most_common at 110s and DSM's Counter subtraction at 360s.
I'm going to use sorting for performance, but I'm accepting Steven's answer because the performance is acceptable and it feels clearer and more Pythonic.
Edit: discovered the Pandas solution. If Pandas is available it's clear and performs well.
As of numpy version 1.9.0, np.unique has an argument return_counts which greatly simplifies your task:
u, c = np.unique(a, return_counts=True)
dup = u[c > 1]
This is similar to using Counter, except you get a pair of arrays instead of a mapping. I'd be curious to see how they perform relative to each other.
It's probably worth mentioning that even though np.unique is quite fast in practice due to its numpyness, it has worse algorithmic complexity than the Counter solution. np.unique is sort-based, so runs asymptotically in O(n log n) time. Counter is hash-based, so has O(n) complexity. This will not matter much for anything but the largest datasets.
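A small harness to satisfy that curiosity (a sketch; run it to get numbers for your own machine and data):
import numpy as np
from collections import Counter
from timeit import timeit

a = np.random.randint(10**7, size=10**7)

def dup_unique(a):
    u, c = np.unique(a, return_counts=True)
    return u[c > 1]

def dup_counter(a):
    return [x for x, n in Counter(a.tolist()).items() if n > 1]

print(timeit(lambda: dup_unique(a), number=1))
print(timeit(lambda: dup_counter(a), number=1))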
I think this is done most clearly outside of numpy. You'll have to time it against your numpy solutions if you are concerned with speed.
>>> import numpy as np
>>> from collections import Counter
>>> a = np.array([1, 2, 1, 3, 3, 3, 0])
>>> [item for item, count in Counter(a).items() if count > 1]
[1, 3]
Note: this is similar to Burhan Khalid's answer, but the use of items without subscripting in the condition should be faster.
People have already suggested Counter variants, but here's one which doesn't use a listcomp:
>>> from collections import Counter
>>> a = [1, 2, 1, 3, 3, 3, 0]
>>> (Counter(a) - Counter(set(a))).keys()
[1, 3]
[Posted not because it's efficient -- it's not -- but because I think it's cute that you can subtract Counter instances.]
For Python 2.7+
>>> import numpy
>>> from collections import Counter
>>> n = numpy.array([1,1,2,3,3,3,0])
>>> [x[0] for x in Counter(n).most_common() if x[1] > 1]
[3, 1]
Here's another approach using set operations that I think is a bit more straightforward than the ones you offer:
>>> indices = np.setdiff1d(np.arange(len(a)), np.unique(a, return_index=True)[1])
>>> a[indices]
array([1, 3, 3])
I suppose you're asking for numpy-only solutions, since if that's not the case, it's very difficult to argue with just using a Counter instead. I think you should make that requirement explicit though.
If a is made up of small integers you can use numpy.bincount directly:
import numpy as np
a = np.array([3, 2, 2, 0, 4, 3])
counts = np.bincount(a)
print(np.where(counts > 1)[0])
# [2 3]
This is very similar to your "histogram" method, which is the one I would use if a was not made up of small integers.
If the array is a sorted numpy array, then just do:
a = np.array([1, 2, 2, 3, 4, 5, 5, 6])
rep_el = a[np.diff(a) == 0]
I'm adding my solution to the pile for this 3-year-old question because none of the existing solutions fit what I wanted, or they used libraries besides numpy. This method finds both the indices of duplicates and the values for distinct sets of duplicates.
import numpy as np
A = np.array([1,2,3,4,4,4,5,6,6,7,8])
# Record the indices where each unique element occurs.
list_of_dup_inds = [np.where(a == A)[0] for a in np.unique(A)]
# Filter out non-duplicates.
list_of_dup_inds = list(filter(lambda inds: len(inds) > 1, list_of_dup_inds))
for inds in list_of_dup_inds: print(inds, A[inds])
# >> [3 4 5] [4 4 4]
# >> [7 8] [6 6]
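A more array-native way to get the same groups (a sketch, not part of the answer): sort the indices once, then split wherever the sorted value changes:
import numpy as np

A = np.array([1, 2, 3, 4, 4, 4, 5, 6, 6, 7, 8])

order = np.argsort(A, kind='stable')
splits = np.flatnonzero(np.diff(A[order])) + 1  # boundaries between runs of equal values
for inds in np.split(order, splits):
    if len(inds) > 1:
        print(inds, A[inds])
# [3 4 5] [4 4 4]
# [7 8] [6 6]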
>>> import numpy as np
>>> a=np.array([1,2,2,2,2,3])
>>> uniques, uniq_idx, counts = np.unique(a,return_index=True,return_counts=True)
>>> duplicates = a[ uniq_idx[counts>=2] ] # <--- Get duplicates
If you also want to get the orphans:
>>> orphans = a[ uniq_idx[counts==1] ]
A combination of Pandas and NumPy (utilizing value_counts()):
import pandas as pd
import numpy as np
arr=np.array(('a','b','b','c','a'))
pd.Series(arr).value_counts()
OUTPUT:
a 2
b 2
c 1
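Continuing that snippet, reducing the counts to the duplicated values themselves might look like this (a follow-up, not part of the original answer):
vc = pd.Series(arr).value_counts()
dupes = vc.index[vc > 1].to_numpy()
# array(['a', 'b'], dtype=object)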
