I am looking to populate a new dataframe column with True if two cell values match a row of another, smaller subset dataframe in pandas, and otherwise with False.
For instance, this is the original output dataframe I am constructing:
ID Type
1 A
2 B
3 A
4 A
5 C
6 A
7 D
8 A
9 B
10 A
And the smaller subset of the dataframe selected based on some criteria:
ID Type
1 A
3 A
4 A
5 C
7 D
10 A
What I am trying to accomplish: when ID and Type in the output dataframe match the smaller subset dataframe, I want to populate a new column called 'Result' with the value True. Otherwise, the value should be False.
ID Type Result
1 A True
2 B False
3 A True
4 A True
5 C True
6 A False
7 D True
8 A False
9 B False
10 A True
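(For reference, here is a minimal reconstruction of the two frames above; the names df1 for the full frame and df2 for the subset are assumptions carried into the answer below.)

import pandas as pd

# Full output frame and the criteria-selected subset from the question.
df1 = pd.DataFrame({'ID': range(1, 11), 'Type': list('ABAACADABA')})
df2 = df1[df1['ID'].isin([1, 3, 4, 5, 7, 10])]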
You can .merge() the two dataframes using a left merge with the original dataframe as the base, and turn on the indicator= parameter to record the merge result. Then change the merge result to True for the rows that appear in both dataframes and to False otherwise.
df_out = df1.merge(df2, on=['ID', 'Type'], how='left', indicator='Result')
df_out['Result'] = (df_out['Result'] == 'both')
Explanation:
With the indicator= parameter turned on, pandas will show you, for each row, which dataframe it came from (in terms of both, left_only and right_only):
df_out = df1.merge(df2, on=['ID', 'Type'], how='left', indicator='Result')
print(df_out)
ID Type Result
0 1 A both
1 2 B left_only
2 3 A both
3 4 A both
4 5 C both
5 6 A left_only
6 7 D both
7 8 A left_only
8 9 B left_only
9 10 A both
Then we transform 'both' and the other values to True/False with a boolean mask, as follows:
df_out['Result'] = (df_out['Result'] == 'both')
print(df_out)
ID Type Result
0 1 A True
1 2 B False
2 3 A True
3 4 A True
4 5 C True
5 6 A False
6 7 D True
7 8 A False
8 9 B False
9 10 A True
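For completeness, the same Result column can also be built without a merge, using the index-isin pattern that appears in a later answer on this page; a minimal sketch, assuming the same df1/df2 as above:

# True where the (ID, Type) pair of a df1 row also occurs in df2.
df1['Result'] = df1.set_index(['ID', 'Type']).index.isin(
    df2.set_index(['ID', 'Type']).index)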
I have two pandas DataFrames of unequal sizes. For example:
Df1
id value
a 2
b 3
c 22
d 5
Df2
id value
c 22
a 2
Now I want to extract from DF1 those rows which have the same id as in DF2. My first approach was to run 2 for loops, with something like:
x = []
for i in range(len(DF2)):
    for j in range(len(DF1)):
        if DF2['id'][i] == DF1['id'][j]:
            x.append(DF1.iloc[j])
Now this is okay, but for two files of 400,000 lines in one and 5,000 in the other, I need an efficient, Pythonic pandas way.
import pandas as pd

data1 = {'id': ['a', 'b', 'c', 'd'],
         'value': [2, 3, 22, 5]}
data2 = {'id': ['c', 'a'],
         'value': [22, 2]}
df1 = pd.DataFrame(data1)
df2 = pd.DataFrame(data2)
finaldf = pd.concat([df1, df2], ignore_index=True)
Output after concat
id value
0 a 2
1 b 3
2 c 22
3 d 5
4 c 22
5 a 2
Final output:
finaldf.drop_duplicates()
id value
0 a 2
1 b 3
2 c 22
3 d 5
You can concat the dataframes, then check whether all the elements are duplicated or not, then drop_duplicates and keep just the first occurrence:
m = pd.concat((df1,df2))
m[m.duplicated('id',keep=False)].drop_duplicates()
id value
0 a 2
2 c 22
You can try this:
df = df1[df1.set_index(['id']).index.isin(df2.set_index(['id']).index)]
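Since id is a single column here, the same filter can be written without building indexes at all; a minimal equivalent:

# Keep only the df1 rows whose id also appears somewhere in df2.
df = df1[df1['id'].isin(df2['id'])]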
In the following dataset, what's the best way to duplicate rows so that any Type whose groupby(['Type']) count is < 3 is brought up to 3? df is the input, and df1 is my desired outcome. You can see that row 3 from df was duplicated 2 times at the end. This is only an example deck; the real data has approximately 20 million lines and 400K unique Types, so a method that does this efficiently is desired.
>>> df
Type Val
0 a 1
1 a 2
2 a 3
3 b 1
4 c 3
5 c 2
6 c 1
>>> df1
Type Val
0 a 1
1 a 2
2 a 3
3 b 1
4 c 3
5 c 2
6 c 1
7 b 1
8 b 1
I thought about using something like the following, but I do not know the best way to write func:
df.groupby('Type').apply(func)
Thank you in advance.
Use value_counts with map and repeat:
counts = df.Type.value_counts()
repeat_map = 3 - counts[counts < 3]
df['repeat_num'] = df.Type.map(repeat_map).fillna(0, downcast='infer')
df = df.append(df.set_index('Type')['Val'].repeat(df['repeat_num']).reset_index(),
               sort=False, ignore_index=True)[['Type','Val']]
print(df)
Type Val
0 a 1
1 a 2
2 a 3
3 b 1
4 c 3
5 c 2
6 c 1
7 b 1
8 b 1
Note: sort=False for append is present in pandas>=0.23.0; remove it if using a lower version.
EDIT: If the data contains multiple value columns, set all of the columns except one as the index, then repeat and reset_index, as:
df = df.append(df.set_index(['Type','Val_1','Val_2'])['Val'].repeat(df['repeat_num']).reset_index(),
               sort=False, ignore_index=True)
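As a side note, DataFrame.append was removed in pandas 2.0. A sketch of the same idea using pd.concat with Index.repeat instead, keeping the answer's per-row repeat counts:

counts = df.Type.value_counts()
repeat_map = 3 - counts[counts < 3]
# Number of extra copies each row needs (0 for Types already at count >= 3).
repeat_num = df.Type.map(repeat_map).fillna(0).astype(int)
extra = df.loc[df.index.repeat(repeat_num)]  # the duplicated rows
df = pd.concat([df, extra], ignore_index=True)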
I have a dataframe with two columns: "Agent" and "Client"
Each row corresponds to an interaction between an Agent and a client.
I want to keep only the rows if a client had interactions with at least 2 agents.
How can I do that?
Worth adding that you can now use df.duplicated():
df = df.loc[df.duplicated(subset='Agent', keep=False)]
Use groupby and transform by value_counts.
df[df.Agent.groupby(df.Agent).transform('value_counts') > 1]
Note that, as mentioned here, you might have one agent interacting with the same client multiple times, which would be retained as a false positive. If you do not want this, you could add a drop_duplicates call before filtering:
df = df.drop_duplicates()
df = df[df.Agent.groupby(df.Agent).transform('value_counts') > 1]
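To see what the transform is doing, here is a small demonstration; the toy frame below is reconstructed from the printout that follows, with column B standing in for Agent:

import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3, 4, 5, 6],
                   'B': [2, 5, 1, 1, 5, 1]})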
print(df)
A B
0 1 2
1 2 5
2 3 1
3 4 1
4 5 5
5 6 1
mask = df.B.groupby(df.B).transform('value_counts') > 1
print(mask)
0 False
1 True
2 True
3 True
4 True
5 True
Name: B, dtype: bool
df = df[mask]
print(df)
A B
1 2 5
2 3 1
3 4 1
4 5 5
5 6 1
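Note that the question as worded filters on the Client side (clients seen by at least 2 agents), while the snippets above filter on Agent. A direct transcription of the stated requirement, assuming the two columns are literally named Agent and Client, would be a groupby-transform on nunique:

# Keep rows whose Client interacted with at least 2 distinct Agents.
df = df[df.groupby('Client')['Agent'].transform('nunique') >= 2]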
I have this DataFrame (df1) in Pandas:
df1 = pd.DataFrame(np.random.rand(10,4),columns=list('ABCD'))
print df1
A B C D
0.860379 0.726956 0.394529 0.833217
0.014180 0.813828 0.559891 0.339647
0.782838 0.698993 0.551252 0.361034
0.833370 0.982056 0.741821 0.006864
0.855955 0.546562 0.270425 0.136006
0.491538 0.445024 0.971603 0.690001
0.911696 0.065338 0.796946 0.853456
0.744923 0.545661 0.492739 0.337628
0.576235 0.219831 0.946772 0.752403
0.164873 0.454862 0.745890 0.437729
I would like to check if any row (all columns) from another dataframe (df2) are present in df1. Here is df2:
df2 = df1.ix[4:8]
df2.reset_index(drop=True,inplace=True)
df2.loc[-1] = [2, 3, 4, 5]
df2.loc[-2] = [14, 15, 16, 17]
df2.reset_index(drop=True,inplace=True)
print df2
A B C D
0.855955 0.546562 0.270425 0.136006
0.491538 0.445024 0.971603 0.690001
0.911696 0.065338 0.796946 0.853456
0.744923 0.545661 0.492739 0.337628
0.576235 0.219831 0.946772 0.752403
2.000000 3.000000 4.000000 5.000000
14.000000 15.000000 16.000000 17.000000
I tried using df.lookup to search for one row at a time. I did it this way:
list1 = df2.ix[0].tolist()
cols = df1.columns.tolist()
print df1.lookup(list1, cols)
but I got this error message:
File "C:\Users\test.py", line 19, in <module>
print df1.lookup(list1, cols)
File "C:\python27\lib\site-packages\pandas\core\frame.py", line 2217, in lookup
raise KeyError('One or more row labels was not found')
KeyError: 'One or more row labels was not found'
I also tried .all() using:
print (df2 == df1).all(1).any()
but I got this error message:
File "C:\Users\test.py", line 12, in <module>
print (df2 == df1).all(1).any()
File "C:\python27\lib\site-packages\pandas\core\ops.py", line 884, in f
return self._compare_frame(other, func, str_rep)
File "C:\python27\lib\site-packages\pandas\core\frame.py", line 3010, in _compare_frame
raise ValueError('Can only compare identically-labeled '
ValueError: Can only compare identically-labeled DataFrame objects
I also tried isin() like this:
print df2.isin(df1)
but I got False everywhere, which is not correct:
A B C D
False False False False
False False False False
False False False False
False False False False
False False False False
False False False False
False False False False
False False False False
False False False False
False False False False
Is it possible to search for a set of rows in a DataFrame, by comparing it to another dataframe's rows?
EDIT:
Is it possible to drop df2 rows if those rows are also present in df1?
One possible solution to your problem would be to use merge. Checking if any row (all columns) from another dataframe (df2) is present in df1 is equivalent to determining the intersection of the two dataframes. This can be accomplished with the following:
pd.merge(df1, df2, on=['A', 'B', 'C', 'D'], how='inner')
For example, if df1 was
A B C D
0 0.403846 0.312230 0.209882 0.397923
1 0.934957 0.731730 0.484712 0.734747
2 0.588245 0.961589 0.910292 0.382072
3 0.534226 0.276908 0.323282 0.629398
4 0.259533 0.277465 0.043652 0.925743
5 0.667415 0.051182 0.928655 0.737673
6 0.217923 0.665446 0.224268 0.772592
7 0.023578 0.561884 0.615515 0.362084
8 0.346373 0.375366 0.083003 0.663622
9 0.352584 0.103263 0.661686 0.246862
and df2 was defined as:
A B C D
0 0.259533 0.277465 0.043652 0.925743
1 0.667415 0.051182 0.928655 0.737673
2 0.217923 0.665446 0.224268 0.772592
3 0.023578 0.561884 0.615515 0.362084
4 0.346373 0.375366 0.083003 0.663622
5 2.000000 3.000000 4.000000 5.000000
6 14.000000 15.000000 16.000000 17.000000
The function pd.merge(df1, df2, on=['A', 'B', 'C', 'D'], how='inner') produces:
A B C D
0 0.259533 0.277465 0.043652 0.925743
1 0.667415 0.051182 0.928655 0.737673
2 0.217923 0.665446 0.224268 0.772592
3 0.023578 0.561884 0.615515 0.362084
4 0.346373 0.375366 0.083003 0.663622
The results are all of the rows (all columns) that are both in df1 and df2.
We can also modify this example if the columns are not the same in df1 and df2, and compare only the row values for the columns the two frames share. If we modify the original example:
df1 = pd.DataFrame(np.random.rand(10,4),columns=list('ABCD'))
df2 = df1.ix[4:8]
df2.reset_index(drop=True,inplace=True)
df2.loc[-1] = [2, 3, 4, 5]
df2.loc[-2] = [14, 15, 16, 17]
df2.reset_index(drop=True,inplace=True)
df2 = df2[['A', 'B', 'C']] # df2 has only columns A B C
Then we can find the columns the two dataframes have in common using common_cols = list(set(df1.columns) & set(df2.columns)) and merge on them:
pd.merge(df1, df2, on=common_cols, how='inner')
EDIT: A new question from the comments: having identified the rows from df2 that are also present in the first dataframe (df1), is it possible to take the result of the pd.merge() and then drop the rows from df2 that are also present in df1?
I do not know of a straightforward way to accomplish the task of dropping the rows from df2 that are also present in df1. That said, you could use the following:
ds1 = set(tuple(line) for line in df1.values)
ds2 = set(tuple(line) for line in df2.values)
df = pd.DataFrame(list(ds2.difference(ds1)), columns=df2.columns)
There probably exists a better way to accomplish that task, but I am unaware of such a method / function. Note that round-tripping through Python sets discards duplicate rows and the original row order.
EDIT 2: How to drop the rows from df2 that are also present in df1, as shown in @WR's answer.
The method provided, df2[~df2['A'].isin(df12['A'])], does not account for all situations. Consider the following DataFrames:
df1:
A B C D
0 6 4 1 6
1 7 6 6 8
2 1 6 2 7
3 8 0 4 1
4 1 0 2 3
5 8 4 7 5
6 4 7 1 1
7 3 7 3 4
8 5 2 8 8
9 3 2 8 4
df2:
A B C D
0 1 0 2 3
1 8 4 7 5
2 4 7 1 1
3 3 7 3 4
4 5 2 8 8
5 1 1 1 1
6 2 2 2 2
df12:
A B C D
0 1 0 2 3
1 8 4 7 5
2 4 7 1 1
3 3 7 3 4
4 5 2 8 8
Using the above DataFrames with the goal of dropping rows from df2 that are also present in df1 would result in the following:
A B C D
0 1 1 1 1
1 2 2 2 2
Rows (1, 1, 1, 1) and (2, 2, 2, 2) are in df2 and not in df1. Unfortunately, using the provided method (df2[~df2['A'].isin(df12['A'])]) results in:
A B C D
6 2 2 2 2
This occurs because the value 1 in column A is found both in the intersection DataFrame (in the row (1, 0, 2, 3)) and in df2, so the filter removes both (1, 0, 2, 3) and (1, 1, 1, 1). This is unintended, since the row (1, 1, 1, 1) is not in df1 and should not be removed.
I think the following will provide a solution. It creates a dummy column that is later used to subset the DataFrame to the desired results:
df12['key'] = 'x'
temp_df = pd.merge(df2, df12, on=df2.columns.tolist(), how='left')
temp_df[temp_df['key'].isnull()].drop('key', axis=1)
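For what it's worth, the indicator= parameter of merge (used in the first answer on this page) gives a built-in version of that dummy-key trick; a sketch under the same column assumptions:

# 'left_only' marks df2 rows with no full-row match in df1.
temp_df = df2.merge(df1.drop_duplicates(), on=df2.columns.tolist(),
                    how='left', indicator=True)
result = temp_df[temp_df['_merge'] == 'left_only'].drop(columns='_merge')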
@Andrew: I believe I found a way to drop the rows of one dataframe that are already present in another (i.e. to answer my EDIT) without using loops - let me know if you disagree and/or if my OP + EDIT did not clearly state this:
THIS WORKS
The columns for both dataframes are always the same - A, B, C and D. With this in mind, based heavily on Andrew's approach, here is how to drop the rows from df2 that are also present in df1:
common_cols = df1.columns.tolist() #generate list of column names
df12 = pd.merge(df1, df2, on=common_cols, how='inner') #extract common rows with merge
df2 = df2[~df2['A'].isin(df12['A'])]
Line 3 does the following:
Extract only the rows from df2 that do not match rows in df1. In order for two rows to be different, ANY one column of one row must necessarily be different from the corresponding column in the other row. Here, I picked column A to make this comparison - it is possible to use any one of the column names, but not ALL of the column names.
NOTE: this method is essentially the equivalent of SQL's NOT IN().