Python Pandas: remove entries based on the number of occurrences

I'm trying to remove entries from a data frame which occur less than 100 times.
The data frame data looks like this:
pid tag
1 23
1 45
1 62
2 24
2 45
3 34
3 25
3 62
Now I count the number of tag occurrences like this:
bytag = data.groupby('tag').aggregate(np.count_nonzero)
But then I can't figure out how to remove those entries which have low count...

New in 0.12, groupby objects have a filter method, allowing you to do these types of operations:
In [11]: g = data.groupby('tag')
In [12]: g.filter(lambda x: len(x) > 1) # pandas 0.13.1
Out[12]:
pid tag
1 1 45
2 1 62
4 2 45
7 3 62
The function (the first argument of filter) is applied to each group (subframe), and the results include elements of the original DataFrame belonging to groups which evaluated to True.
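Applied to the threshold in the original question, that would look like the following (same data frame as above):
data.groupby('tag').filter(lambda x: len(x) >= 100)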
Note: in 0.12 the ordering is different than in the original DataFrame; this was fixed in 0.13+:
In [21]: g.filter(lambda x: len(x) > 1) # pandas 0.12
Out[21]:
pid tag
1 1 45
4 2 45
2 1 62
7 3 62

Edit: Thanks to @WesMcKinney for showing this much more direct way:
data[data.groupby('tag').pid.transform(len) > 1]
import pandas
import numpy as np

data = pandas.DataFrame({
    'pid': [1, 1, 1, 2, 2, 3, 3, 3],
    'tag': [23, 45, 62, 24, 45, 34, 25, 62],
})
bytag = data.groupby('tag').aggregate(np.count_nonzero)
tags = bytag[bytag.pid >= 2].index
print(data[data['tag'].isin(tags)])
yields
pid tag
1 1 45
2 1 62
4 2 45
7 3 62
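For the threshold in the original question (tags occurring at least 100 times), the transform approach is usually spelled with 'size' in current pandas; a sketch:
data[data.groupby('tag')['tag'].transform('size') >= 100]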

Here are some run times for a couple of the solutions posted here, along with one that was not posted (using value_counts()) and is much faster than the other solutions:
Create the data:
import pandas as pd
import numpy as np
# Generate some 'users'
np.random.seed(42)
df = pd.DataFrame({'uid': np.random.randint(0, 500, 500)})
# Show that some users occur only once
print("{:,} users only occur once in dataset".format(sum(df.uid.value_counts() == 1)))
Output:
171 users only occur once in dataset
Time a few different ways of removing users with only one entry. These were run in separate cells in a Jupyter Notebook:
%%timeit
df.groupby(by='uid').filter(lambda x: len(x) > 1)
%%timeit
df[df.groupby('uid').uid.transform(len) > 1]
%%timeit
vc = df.uid.value_counts()
df[df.uid.isin(vc.index[vc.values > 1])].uid.value_counts()
These gave the following outputs:
10 loops, best of 3: 46.2 ms per loop
10 loops, best of 3: 30.1 ms per loop
1000 loops, best of 3: 1.27 ms per loop
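Applied to the original question's data, the value_counts() approach would look something like this (keeping only tags seen at least 100 times):
vc = data['tag'].value_counts()
data = data[data['tag'].isin(vc[vc >= 100].index)]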

df = pd.DataFrame([(1, 2), (1, 3), (1, 4), (2, 1), (2, 2)], columns=['col1', 'col2'])
In [36]: df
Out[36]:
col1 col2
0 1 2
1 1 3
2 1 4
3 2 1
4 2 2
gp = df.groupby('col1').aggregate(np.count_nonzero)
In [38]: gp
Out[38]:
col2
col1
1 3
2 2
Let's get the groups where the count > 2:
tf = gp[gp.col2 > 2].reset_index()
df[df.col1.isin(tf.col1)]
Out[41]:
col1 col2
0 1 2
1 1 3
2 1 4

Return All Values of Column A and Put them in Column B until Specific Value Is reached

I am still having trouble with this and nothing seems to work for me. I have a data frame with two columns. I am trying to return all of the values in column A in a new column, B. However, I want to loop through column A and stop returning those values, returning 0 instead, once the cumulative sum reaches 8 or the next value would make it greater than 8.
max_val = 8

df:
A
1
2
2
3
4
5
1
The output should look something like this:
A B
1 1
2 2
2 2
3 3
4 0
5 0
1 0
I thought something like this would work:
def func(x):
    if df['A'].cumsum() <= max_val:
        return x
    else:
        return 0
This doesn't work:
df['B'] = df['A'].apply(func, axis=1)
Neither does this:
df['B'] = func(df['A'])
You can use Series.where:
df['B'] = df['A'].where(df['A'].cumsum() <= max_val, 0)
print (df)
A B
0 1 1
1 2 2
2 2 2
3 3 3
4 4 0
5 5 0
6 1 0
Approach #1: One approach using np.where -
df['B'] = np.where(df.A.cumsum() <= max_val, df.A, 0)
Sample output -
In [145]: df
Out[145]:
A B
0 1 1
1 2 2
2 2 2
3 3 3
4 4 0
5 5 0
6 1 0
Approach #2: Another approach using array initialization -
def app2(df, max_val):
    a = df.A.values
    colB = np.zeros(df.shape[0], dtype=a.dtype)
    # position of the first element whose running total exceeds max_val
    idx = np.searchsorted(a.cumsum(), max_val, 'right')
    colB[:idx] = a[:idx]
    df['B'] = colB
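A quick usage sketch on the sample data from the question (values reconstructed here):
df = pd.DataFrame({'A': [1, 2, 2, 3, 4, 5, 1]})
app2(df, 8)                    # fills column 'B' in place
print(df['B'].tolist())        # [1, 2, 2, 3, 0, 0, 0]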
Runtime test
Seems like @jezrael's Series.where based one is the closest, so timing against it on a bigger dataset -
In [293]: df = pd.DataFrame({'A':np.random.randint(0,9,(1000000))})
In [294]: max_val = 1000000
# @jezrael's solution
In [295]: %timeit df['B1'] = df['A'].where(df['A'].cumsum() <= max_val, 0)
100 loops, best of 3: 8.22 ms per loop
# Proposed in this post
In [296]: %timeit df['B2']= np.where((df.A.cumsum()<=max_val), df.A ,0)
100 loops, best of 3: 6.45 ms per loop
# Proposed in this post
In [297]: %timeit app2(df, max_val)
100 loops, best of 3: 4.47 ms per loop
df['B']=[x if x<=8 else 0 for x in df['A'].cumsum()]
df
Out[7]:
A B
0 1 1
1 2 3
2 2 5
3 3 8
4 4 0
5 5 0
6 1 0
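Note that, as written, this stores the cumulative sums themselves in B (which is what the output above shows) rather than the original A values. To reproduce the desired output, you could pair each value with its running total, e.g.:
df['B'] = [a if c <= 8 else 0 for a, c in zip(df['A'], df['A'].cumsum())]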
Why don't you accumulate into a variable as you go? For example:
total = 0
B = []
for x in df['A']:
    total += x
    B.append(x if total <= max_val else 0)
df['B'] = B
Splitting it into multiple lines:
import pandas as pd
A = [1, 2, 2, 3, 4, 5, 1]
MAXVAL = 8
df = pd.DataFrame(data=A, columns=['A'])
df['cumsumA'] = df['A'].cumsum()
df['B'] = df['A'] * (df['cumsumA'] <= MAXVAL).astype(int)
You can then drop the 'cumsumA' column.
The below will work fine -
import numpy as np
max_val = 8
df['B'] = np.where(df['A'].cumsum() <= max_val, df['A'], 0)
I hope this helps.
Just a way to do it with .loc:
df['C'] = df['A'].cumsum()
df['B'] = df['A']
df.loc[df['C'] > 8, 'B'] = 0
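A related spelling that avoids the helper column, assuming the uppercase column names from the question, is Series.mask (the complement of Series.where):
df['B'] = df['A'].mask(df['A'].cumsum() > max_val, 0)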

Pandas np.where with matching range of values on a row

Test data:
In [1]:
import pandas as pd
import numpy as np
df = pd.DataFrame({
    'AAA': [4, 5, 6, 7, 9, 10],
    'BBB': [10, 20, 30, 40, 11, 10],
    'CCC': [100, 50, 25, 10, 10, 11]})
In [2]: df
Out[2]:
AAA BBB CCC
0 4 10 100
1 5 20 50
2 6 30 25
3 7 40 10
4 9 11 10
5 10 10 11
In [3]: thresh = 2
df['aligned'] = np.where(df.AAA == df.BBB,max(df.AAA)|(df.BBB),np.nan)
The np.where statement above provides max(df.AAA or df.BBB) when df.AAA and df.BBB are exactly aligned. I would like to have the max when the columns are within the value in thresh of each other, and also to consider all columns. It does not have to be via np.where. Can you please show me ways of approaching this?
So for row 5 it should be 11.0 in df.aligned as this is the max value and within thresh of df.AAA and df.BBB.
Ultimately I am looking for ways to find levels across multiple columns where the values are closely aligned.
Current Output with my code:
df
AAA BBB CCC aligned
0 4 10 100 NaN
1 5 20 50 NaN
2 6 30 25 NaN
3 7 40 10 NaN
4 9 11 10 NaN
5 10 10 11 10.0
Desired Output:
df
AAA BBB CCC aligned
0 4 10 100 NaN
1 5 20 50 NaN
2 6 30 25 NaN
3 7 40 10 NaN
4 9 11 10 11.0
5 10 10 11 11.0
The desired output shows rows 4 and 5 with values on df.aligned. As these have values within thresh of each other (values 10 and 11 are within the range specified in thresh variable).
"Within thresh distance" to me means that the difference between the max
and the min of a row should be less than thresh. We can use DataFrame.apply with parameter axis=1 so that we apply the lambda function on each row.
In [1]: filt_thresh = df.apply(lambda x: (x.max() - x.min())<thresh, axis=1)
100 loops, best of 3: 1.89 ms per loop
Alternatively there's a faster solution, as pointed out by @root:
filt_thresh = np.ptp(df.values, axis=1) < thresh
10000 loops, best of 3: 48.9 µs per loop
Or, staying with pandas:
filt_thresh = df.max(axis=1) - df.min(axis=1) < thresh
1000 loops, best of 3: 943 µs per loop
We can now use boolean indexing and calculate the max of each row that matches (hence the axis=1 parameter in max() again):
In [2]: df.loc[filt_thresh, 'aligned'] = df[filt_thresh].max(axis=1)
In [3]: df
Out[3]:
AAA BBB CCC aligned
0 4 10 100 NaN
1 5 20 50 NaN
2 6 30 25 NaN
3 7 40 10 NaN
4 9 11 10 NaN
5 10 10 11 11.0
Update:
If you wanted to calculate the minimum distance between elements for each row, that'd be equivalent to sorting the array of values (np.sort()), calculating the difference between consecutive numbers (np.diff), and taking the min of the resulting array. Finally, compare that to thresh.
Here's the apply way that has the advantage of being a bit clearer to understand.
filt_thresh = df.apply(lambda row: np.min(np.diff(np.sort(row))) < thresh, axis=1)
1000 loops, best of 3: 713 µs per loop
And here's the vectorized equivalent:
filt_thresh = np.diff(np.sort(df)).min(axis=1) < thresh
The slowest run took 4.31 times longer than the fastest.
This could mean that an intermediate result is being cached.
10000 loops, best of 3: 67.3 µs per loop
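Putting the pieces together for the desired output, here is a sketch that restricts the row-wise sort explicitly to the three value columns, so that an already-present aligned column does not interfere:
cols = ['AAA', 'BBB', 'CCC']
filt_thresh = np.diff(np.sort(df[cols].values), axis=1).min(axis=1) < thresh
df.loc[filt_thresh, 'aligned'] = df.loc[filt_thresh, cols].max(axis=1)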

Append a list of arrays as column to pandas Data Frame with same column indices

I have a list of arrays (one-dimensional numpy array) (a_) and a list (l_) and want to have a DataFrame with them as its columns. They look like this:
a_: [array([381]), array([376]), array([402]), array([400])...]
l_: [1.5,2.34,4.22,...]
I can do it by:
df_l = pd.DataFrame(l_)
df_a = pd.DataFrame(a_)
df = pd.concat([df_l, df_a], axis=1)
Is there a shorter way of doing it? I tried to use pd.append:
df_l = pd.DataFrame(l_)
df_l = df_l.append(a_)
However, because the column indices are both 0, it appends a_ to the end of the existing column, resulting in a single column. Is there something like this:
l_ = l_.append(a_).reset(columns)
that sets a new column index for the appended array? Well, obviously this does not work!
the desired output is like:
0 0
0 1.50 381
1 2.34 376
2 4.22 402
...
Thanks.
Suggestion:
df_l = pd.DataFrame(l_)
df_l['a_'] = pd.Series(a_, index=df_l.index)
Example #1:
L = list(data)
A = list(data)
data_frame = pd.DataFrame(L)
data_frame['A'] = pd.Series(A, index=data_frame.index)
Example #2 - Same Series length (create series and set index to the same as existing data frame):
In [33]: L = list(item for item in range(10))
In [34]: A = list(item for item in range(10,20))
In [35]: data_frame = pd.DataFrame(L,columns=['L'])
In [36]: data_frame['A'] = pd.Series(A, index=data_frame.index)
In [37]: print(data_frame)
L A
0 0 10
1 1 11
2 2 12
3 3 13
4 4 14
5 5 15
6 6 16
7 7 17
8 8 18
9 9 19
Example #3 - Different Series lengths (create series and let pandas handle index matching):
In [45]: not_same_length = list(item for item in range(50,55))
In [46]: data_frame['nsl'] = pd.Series(not_same_length)
In [47]: print(data_frame)
L A nsl
0 0 10 50
1 1 11 51
2 2 12 52
3 3 13 53
4 4 14 54
5 5 15 NaN
6 6 16 NaN
7 7 17 NaN
8 8 18 NaN
9 9 19 NaN
Based on your comments, it looks like you want to flatten your list of lists. I'm assuming plain lists here, but the same comprehension works if the inner items are one-dimensional numpy arrays. To do that you would do the following:
In [63]: A = [[381],[376], [402], [400]]
In [64]: A = [inner_item for item in A for inner_item in item]
In [65]: print(A)
[381, 376, 402, 400]
Then create the Series using the new array and follow the steps above to add to your data frame.
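For the data in the question specifically (a list of one-element numpy arrays plus a list of floats), a compact alternative is to flatten with numpy and build the frame in one go. A sketch, with a made-up fourth value for l_ so the lengths match, and illustrative column names:
import numpy as np
import pandas as pd

a_ = [np.array([381]), np.array([376]), np.array([402]), np.array([400])]
l_ = [1.50, 2.34, 4.22, 3.10]   # last value is hypothetical

df = pd.DataFrame({'l': l_, 'a': np.concatenate(a_)})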

Optimizing pandas groupby on many small groups

I have a pandas DataFrame with many small groups:
In [84]: n=10000
In [85]: df=pd.DataFrame({'group':sorted(range(n)*4),'val':np.random.randint(6,size=4*n)}).sort(['group','val']).reset_index(drop=True)
In [86]: df.head(9)
Out[86]:
group val
0 0 0
1 0 0
2 0 1
3 0 2
4 1 1
5 1 2
6 1 2
7 1 4
8 2 0
I want to do something special for groups where val==1 appears but not val==0. E.g. replace the 1 in the group by 99 only if the val==0 is in that group.
But for DataFrames of this size it is quite slow:
In [87]: def f(s):
   ....:     if (0 not in s) and (1 in s): s[s==1] = 99
   ....:     return s
   ....:
In [88]: %timeit df.groupby('group')['val'].transform(f)
1 loops, best of 3: 11.2 s per loop
Looping through the data frame is much uglier but much faster:
In [89]: %paste
def g(df):
    df.sort(['group','val'], inplace=True)
    last_g = -1
    for i in xrange(len(df)):
        if df.group.iloc[i] != last_g:
            last_g = df.group.iloc[i]
            has_zero = False
        if df.val.iloc[i] == 0:
            has_zero = True
        elif has_zero and df.val.iloc[i] == 1:
            df.val.iloc[i] = 99
    return df
## -- End pasted text --
In [90]: %timeit g(df)
1 loops, best of 3: 2.53 s per loop
But I would like to optimize it further if possible.
Any idea of how to do so?
Thanks
Based on Jeff's answer, I got a solution that is very fast. I'm putting it here in case others find it useful:
In [122]: def do_fast(df):
   .....:     has_zero_mask = df.group.isin(df[df.val==0].group.unique())
   .....:     df.loc[(df.val==1) & has_zero_mask, 'val'] = 99
   .....:     return df
   .....:
In [123]: %timeit do_fast(df)
100 loops, best of 3: 11.2 ms per loop
Not 100% sure this is what you are going for, but it should be simple to swap in a different filtering/setting criterion.
In [37]: pd.set_option('max_rows',10)
In [38]: np.random.seed(1234)
In [39]: def f():
    # create the frame
    df = pd.DataFrame({'group': sorted(range(n)*4),
                       'val': np.random.randint(6, size=4*n)}).sort(['group','val']).reset_index(drop=True)
    df['result'] = np.nan
    # create a count per group
    df['counter'] = df.groupby('group').cumcount()
    # select which values you want, returning the indexes of those
    mask = df[df.val==1].groupby('group').grouper.group_info[0]
    # set em
    df.loc[df.index.isin(mask) & df['counter'] == 1, 'result'] = 99
In [40]: %timeit f()
10 loops, best of 3: 95 ms per loop
In [41]: df
Out[41]:
group val result counter
0 0 3 NaN 0
1 0 4 99 1
2 0 4 NaN 2
3 0 5 99 3
4 1 0 NaN 0
... ... ... ... ...
39995 9998 4 NaN 3
39996 9999 0 NaN 0
39997 9999 0 NaN 1
39998 9999 2 NaN 2
39999 9999 3 NaN 3
[40000 rows x 4 columns]
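For reference, the has-a-zero logic from do_fast above can also be written with a groupby transform in current pandas; a sketch, not from the original thread:
has_zero = (df['val'] == 0).groupby(df['group']).transform('any')
df.loc[(df['val'] == 1) & has_zero, 'val'] = 99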

Pandas group by operations on a data frame

I have a pandas data frame like the one below.
UsrId JobNos
1 4
1 56
2 23
2 55
2 41
2 5
3 78
1 25
3 1
I group by the data frame based on the UsrId. The grouped data frame will conceptually look like below.
UsrId JobNos
1 [4,56,25]
2 [23,55,41,5]
3 [78,1]
Now, I'm looking for a built-in API that will give me the UsrId with the maximum job count. For the above example, UsrId-2 has the maximum count.
UPDATE:
Instead of the UsrID with maximum job count, I want n UserIds with maximum job counts. For the above example, if n=2 then the output is [2,1]. Can this be done?
Something like df.groupby('UsrId').JobNos.sum().idxmax() should do it:
In [1]: import pandas as pd
In [2]: from StringIO import StringIO
In [3]: data = """UsrId JobNos
...: 1 4
...: 1 56
...: 2 23
...: 2 55
...: 2 41
...: 2 5
...: 3 78
...: 1 25
...: 3 1"""
In [4]: df = pd.read_csv(StringIO(data), sep='\s+')
In [5]: grouped = df.groupby('UsrId')
In [6]: grouped.JobNos.sum()
Out[6]:
UsrId
1 85
2 124
3 79
Name: JobNos
In [7]: grouped.JobNos.sum().idxmax()
Out[7]: 2
If you want your results based on the number of items in each group:
In [8]: grouped.size()
Out[8]:
UsrId
1 3
2 4
3 2
In [9]: grouped.size().idxmax()
Out[9]: 2
Update: To get ordered results you can use the .order method (renamed sort_values in later pandas versions):
In [10]: grouped.JobNos.sum().order(ascending=False)
Out[10]:
UsrId
2 124
1 85
3 79
Name: JobNos
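For the update in the question (the n UsrIds with the most rows), a short sketch in current pandas, where nlargest is available on a Series:
top_n = df.groupby('UsrId').size().nlargest(2).index.tolist()
print(top_n)   # [2, 1]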
