Matching on basis of a pair of columns in pandas - python

I have a data frame df1 with multiple columns. I have df2 with same set of columns. I want to get the records of df1 which aren't present in df2. I am able to perform this task as below:
df1[~df1['ID'].isin(df2['ID'])]
Now I want to do the same operation, but on the combination of NAME and ID. This means that if a NAME and ID pair from df1 also exists as the same pair in df2, then that whole record should not be part of my result.
How do I accomplish this task using pandas?

I don't think that the currently accepted answer is actually correct. It was my impression that you would like to drop a value pair in df1 if that pair also exists in the other dataframe, independent of the row position that they take in the respective dataframes.
Consider the following dataframes
df1 = pd.DataFrame({'a': list('ABC'), 'b': list('CDF')})
df2 = pd.DataFrame({'a': list('ABAC'), 'b': list('CFFF')})
df1
a b
0 A C
1 B D
2 C F
df2
a b
0 A C
1 B F
2 A F
3 C F
So you would like to drop rows 0 and 2 in df1. However, with the above suggestion you get
df1.isin(df2)
a b
0 True True
1 True False
2 False True
What you can do instead is
compare_cols = ['a','b']
mask = pd.Series(list(zip(*[df1[c] for c in compare_cols]))).isin(list(zip(*[df2[c] for c in compare_cols])))
mask
0 True
1 False
2 True
dtype: bool
That is, you construct a Series of tuples from the columns you would like to compare coming from the first dataframe, and then check whether these tuples exist in the list of tuples obtained in the same way from the respective columns in the second dataframe.
Final step: df1 = df1.loc[~mask.values]
As pointed out by @rvrvrv in the comments, it is best to use mask.values instead of just mask in case df1 and mask do not have the same index (or one uses the df1 index in the construction of mask).
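Putting it together, a minimal runnable sketch of this approach on the example frames above:
import pandas as pd

df1 = pd.DataFrame({'a': list('ABC'), 'b': list('CDF')})
df2 = pd.DataFrame({'a': list('ABAC'), 'b': list('CFFF')})

compare_cols = ['a', 'b']

# Build row-wise tuples from the comparison columns of each frame.
pairs1 = pd.Series(list(zip(*[df1[c] for c in compare_cols])))
pairs2 = list(zip(*[df2[c] for c in compare_cols]))

# True where the (a, b) pair from df1 also occurs somewhere in df2.
mask = pairs1.isin(pairs2)

result = df1.loc[~mask.values]
print(result)
#    a  b
# 1  B  D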

It's actually pretty easy.
df1[(~df1[['ID', 'Name']].isin(df2[['ID', 'Name']])).any(axis=1)]
You pass the column names that you want to compare as a list. The interesting part is what it outputs.
Let's say df1 equals:
ID Name
0 0 0
1 1 1
2 2 2
3 3 3
4 4 4
5 5 5
6 6 6
7 7 7
8 8 8
9 1 1
And df2 equals:
ID Name
0 0 0
1 1 1
2 2 2
3 3 3
4 4 4
5 5 5
6 6 6
7 7 7
8 8 8
9 1 9
Every (ID, Name) pair between df1 and df2 matches except for row 9. The result of my answer will return:
ID Name
9 1 1
Which is exactly what you want.
In more detail, when you do the mask:
~df1[['ID', 'Name']].isin(df2[['ID', 'Name']])
You get this:
ID Name
0 False False
1 False False
2 False False
3 False False
4 False False
5 False False
6 False False
7 False False
8 False False
9 False True
And we want to select the rows where at least one of those columns is True. For this, we can add .any(axis=1) onto the end, which creates:
0 False
1 False
2 False
3 False
4 False
5 False
6 False
7 False
8 False
9 True
And then when you index using this series, it will only select row 9.
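For reference, a minimal sketch that rebuilds the example frames above and applies the same expression (ID and Name held as plain integers):
import pandas as pd

df1 = pd.DataFrame({'ID': list(range(9)) + [1], 'Name': list(range(9)) + [1]})
df2 = pd.DataFrame({'ID': list(range(9)) + [1], 'Name': list(range(9)) + [9]})

# isin against a DataFrame compares element-wise and aligns on index and
# column labels, so this flags positions where df1 differs from df2.
result = df1[(~df1[['ID', 'Name']].isin(df2[['ID', 'Name']])).any(axis=1)]
print(result)
#    ID  Name
# 9   1     1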

isin() would not work here, as it also compares against the index.
Let's have a look at a super powerful pandas tool: merge()
If we consider the nice example given by user3820991, we have :
df1 = pd.DataFrame({'a': list('ABC'), 'b': list('CDF')})
df2 = pd.DataFrame({'a': list('ABAC'), 'b': list('CFFF')})
df1
a b
0 A C
1 B D
2 C F
df2
a b
0 A C
1 B F
2 A F
3 C F
The basic merge method of pandas is the 'inner' join. This gives you the equivalent of the isin() method for two columns:
df1.merge(df2[['a','b']], how='inner')
a b
0 A C
1 C F
If you would like the equivalent of not(isin()), just change the merge method to an 'outer' join (a left join would also work, but for the beauty of the example, the outer join shows more possibilities).
This gives you all the rows from both dataframes; we only have to add indicator=True to be able to select the ones we want:
df1.merge(df2[['a','b']], how='outer', indicator=True)
a b _merge
0 A C both
1 B D left_only
2 C F both
3 B F right_only
4 A F right_only
We want the rows that are in df1 but not in df2, so 'left_only'. As a one-liner, you have:
pd.merge(df1, df2, on=['a','b'], how="outer", indicator=True
).query('_merge=="left_only"').drop(columns='_merge')
a b
1 B D

You can create a new column by concatenating NAME and ID and use this new column the same way you used ID in your question:
df1['temp'] = df1['NAME'].astype(str)+df1['ID'].astype(str)
df2['temp'] = df2['NAME'].astype(str)+df2['ID'].astype(str)
df1[~df1['temp'].isin(df2['temp'])].drop('temp', axis=1)
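One hedged caveat with plain concatenation: different (NAME, ID) pairs can produce the same string (for example '1' + '23' and '12' + '3'). A small variant with a separator, assuming the question's df1/df2:
# Separator prevents accidental key collisions between different pairs.
df1['temp'] = df1['NAME'].astype(str) + '|' + df1['ID'].astype(str)
df2['temp'] = df2['NAME'].astype(str) + '|' + df2['ID'].astype(str)
result = df1[~df1['temp'].isin(df2['temp'])].drop('temp', axis=1)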

Related

Populate a new dataframe column with True if two cell values match another smaller subset dataframe in pandas

I am looking to populate a new dataframe column with True if two cell values match another smaller subset dataframe in pandas, otherwise with a value of False.
For instance, this is the original output dataframe I am constructing:
ID Type
1 A
2 B
3 A
4 A
5 C
6 A
7 D
8 A
9 B
10 A
And the smaller subset of the dataframe selected based on some criteria:
ID Type
1 A
3 A
4 A
5 C
7 D
10 A
What I am trying to accomplish is: when ID and Type in the output dataframe match the smaller subset dataframe, I want to populate a new column called 'Result' with the value True. Otherwise, the value should be False.
ID Type Result
1 A True
2 B False
3 A True
4 A True
5 C True
6 A False
7 D True
8 A False
9 B False
10 A True
You can .merge() the 2 dataframes using a left merge with the original dataframe as the base, turning on the indicator= parameter to show the merge result. Then change the merge result to True for the rows that appear in both dataframes and False otherwise.
df_out = df1.merge(df2, on=['ID', 'Type'] , how='left', indicator='Result')
df_out['Result'] = (df_out['Result'] == 'both')
Explanation:
With the indicator= parameter turned on, pandas shows which dataframe each row came from (both, left_only, or right_only):
df_out = df1.merge(df2, on=['ID', 'Type'] , how='left', indicator='Result')
print(df_out)
ID Type Result
0 1 A both
1 2 B left_only
2 3 A both
3 4 A both
4 5 C both
5 6 A left_only
6 7 D both
7 8 A left_only
8 9 B left_only
9 10 A both
Then, we transform 'both' and the others into True/False with a boolean mask, as follows:
df_out['Result'] = (df_out['Result'] == 'both')
print(df_out)
ID Type Result
0 1 A True
1 2 B False
2 3 A True
3 4 A True
4 5 C True
5 6 A False
6 7 D True
7 8 A False
8 9 B False
9 10 A True

Compare columns in Pandas between two unequal size Dataframes for condition check

I have two pandas DataFrames of unequal sizes. For example:
Df1
id value
a 2
b 3
c 22
d 5
Df2
id value
c 22
a 2
Now I want to extract from DF1 those rows which have the same id as in DF2. My first approach is to run 2 for loops, with something like:
x = []
for i in range(len(DF2)):
    for j in range(len(DF1)):
        if DF2['id'][i] == DF1['id'][j]:
            x.append(DF1.iloc[j])
This is okay, but for 2 files with 400,000 lines in one and 5,000 in the other, I need an efficient Pythonic + pandas way.
import pandas as pd
data1={'id':['a','b','c','d'],
'value':[2,3,22,5]}
data2={'id':['c','a'],
'value':[22,2]}
df1=pd.DataFrame(data1)
df2=pd.DataFrame(data2)
finaldf=pd.concat([df1,df2],ignore_index=True)
Output after concat
id value
0 a 2
1 b 3
2 c 22
3 d 5
4 c 22
5 a 2
Final Output
finaldf.drop_duplicates()
id value
0 a 2
1 b 3
2 c 22
3 d 5
You can concat the dataframes, then check whether the elements are duplicated or not, then drop_duplicates to keep just the first occurrence:
m = pd.concat((df1,df2))
m[m.duplicated('id',keep=False)].drop_duplicates()
id value
0 a 2
2 c 22
You can try this:
df = df1[df1.set_index(['id']).index.isin(df2.set_index(['id']).index)]
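Since only the id column is being compared here, a simpler equivalent sketch (the same isin pattern used in the very first question above) would be:
# Rows of df1 whose id also appears in df2; the set_index(...).index.isin(...)
# form above generalizes this to multiple key columns.
df = df1[df1['id'].isin(df2['id'])]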

Duplicate row of low occurrence in pandas dataframe

In the following dataset, what's the best way to duplicate the rows of any Type whose groupby(['Type']) count is less than 3, until that count reaches 3? df is the input, and df1 is my desired outcome. You can see that row 3 from df was duplicated 2 times at the end. This is only an example; the real data has approximately 20 million lines and 400K unique Types, so a method that does this efficiently is desired.
>>> df
Type Val
0 a 1
1 a 2
2 a 3
3 b 1
4 c 3
5 c 2
6 c 1
>>> df1
Type Val
0 a 1
1 a 2
2 a 3
3 b 1
4 c 3
5 c 2
6 c 1
7 b 1
8 b 1
I thought about using something like the following, but I do not know the best way to write the func.
df.groupby('Type').apply(func)
Thank you in advance.
Use value_counts with map and repeat:
counts = df.Type.value_counts()
repeat_map = 3 - counts[counts < 3]
df['repeat_num'] = df.Type.map(repeat_map).fillna(0,downcast='infer')
df = df.append(df.set_index('Type')['Val'].repeat(df['repeat_num']).reset_index(),
sort=False, ignore_index=True)[['Type','Val']]
print(df)
Type Val
0 a 1
1 a 2
2 a 3
3 b 1
4 c 3
5 c 2
6 c 1
7 b 1
8 b 1
Note: sort=False for append is available in pandas >= 0.23.0; remove it if using a lower version.
EDIT: If the data contains multiple value columns, then set all columns as the index except one, repeat, and then reset_index, as in:
df = df.append(df.set_index(['Type','Val_1','Val_2'])['Val'].repeat(df['repeat_num']).reset_index(),
sort=False, ignore_index=True)
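Note: DataFrame.append was removed in pandas 2.0, so on newer versions the same step can be written with pd.concat. A minimal self-contained sketch, rebuilding the example df from above:
import pandas as pd

df = pd.DataFrame({'Type': list('aaabccc'), 'Val': [1, 2, 3, 1, 3, 2, 1]})
counts = df['Type'].value_counts()
repeat_map = 3 - counts[counts < 3]
df['repeat_num'] = df['Type'].map(repeat_map).fillna(0).astype(int)

# Repeat the under-represented rows and append them with pd.concat.
extra = df.set_index('Type')['Val'].repeat(df['repeat_num']).reset_index()
df = pd.concat([df, extra], ignore_index=True)[['Type', 'Val']]
print(df)  # rows 7 and 8 are the duplicated 'b' row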

Retain only duplicated rows in a pandas dataframe

I have a dataframe with two columns: "Agent" and "Client"
Each row corresponds to an interaction between an Agent and a client.
I want to keep only the rows if a client had interactions with at least 2 agents.
How can I do that?
Worth adding that now you can use df.duplicated()
df = df.loc[df.duplicated(subset='Agent', keep=False)]
Use groupby and transform by value_counts.
df[df.Agent.groupby(df.Agent).transform('value_counts') > 1]
Note that, as mentioned here, you might have one agent interacting with the same client multiple times. This might be retained as a false positive. If you do not want that, you could add a drop_duplicates call before filtering:
df = df.drop_duplicates()
df = df[df.Agent.groupby(df.Agent).transform('value_counts') > 1]
print(df)
A B
0 1 2
1 2 5
2 3 1
3 4 1
4 5 5
5 6 1
mask = df.B.groupby(df.B).transform('value_counts') > 1
print(mask)
0 False
1 True
2 True
3 True
4 True
5 True
Name: B, dtype: bool
df = df[mask]
print(df)
A B
1 2 5
2 3 1
3 4 1
4 5 5
5 6 1
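Taken literally, the question asks to keep the rows of clients that interacted with at least 2 distinct agents. A hedged sketch of that exact condition, using the question's Agent and Client column names rather than the example columns above:
# Keep only rows belonging to clients with 2 or more distinct agents.
df = df[df.groupby('Client')['Agent'].transform('nunique') >= 2]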

Compare Python Pandas DataFrames for matching rows

I have this DataFrame (df1) in Pandas:
df1 = pd.DataFrame(np.random.rand(10,4),columns=list('ABCD'))
print df1
A B C D
0.860379 0.726956 0.394529 0.833217
0.014180 0.813828 0.559891 0.339647
0.782838 0.698993 0.551252 0.361034
0.833370 0.982056 0.741821 0.006864
0.855955 0.546562 0.270425 0.136006
0.491538 0.445024 0.971603 0.690001
0.911696 0.065338 0.796946 0.853456
0.744923 0.545661 0.492739 0.337628
0.576235 0.219831 0.946772 0.752403
0.164873 0.454862 0.745890 0.437729
I would like to check if any row (all columns) from another dataframe (df2) is present in df1. Here is df2:
df2 = df1.ix[4:8]
df2.reset_index(drop=True,inplace=True)
df2.loc[-1] = [2, 3, 4, 5]
df2.loc[-2] = [14, 15, 16, 17]
df2.reset_index(drop=True,inplace=True)
print df2
A B C D
0.855955 0.546562 0.270425 0.136006
0.491538 0.445024 0.971603 0.690001
0.911696 0.065338 0.796946 0.853456
0.744923 0.545661 0.492739 0.337628
0.576235 0.219831 0.946772 0.752403
2.000000 3.000000 4.000000 5.000000
14.000000 15.000000 16.000000 17.000000
I tried using df.lookup to search for one row at a time. I did it this way:
list1 = df2.ix[0].tolist()
cols = df1.columns.tolist()
print df1.lookup(list1, cols)
but I got this error message:
File "C:\Users\test.py", line 19, in <module>
print df1.lookup(list1, cols)
File "C:\python27\lib\site-packages\pandas\core\frame.py", line 2217, in lookup
raise KeyError('One or more row labels was not found')
KeyError: 'One or more row labels was not found'
I also tried .all() using:
print (df2 == df1).all(1).any()
but I got this error message:
File "C:\Users\test.py", line 12, in <module>
print (df2 == df1).all(1).any()
File "C:\python27\lib\site-packages\pandas\core\ops.py", line 884, in f
return self._compare_frame(other, func, str_rep)
File "C:\python27\lib\site-packages\pandas\core\frame.py", line 3010, in _compare_frame
raise ValueError('Can only compare identically-labeled '
ValueError: Can only compare identically-labeled DataFrame objects
I also tried isin() like this:
print df2.isin(df1)
but I got False everywhere, which is not correct:
A B C D
False False False False
False False False False
False False False False
False False False False
False False False False
False False False False
False False False False
False False False False
False False False False
False False False False
Is it possible to search for a set of rows in a DataFrame, by comparing it to another dataframe's rows?
EDIT:
Is it possible to drop df2 rows if those rows are also present in df1?
One possible solution to your problem would be to use merge. Checking if any row (all columns) from another dataframe (df2) is present in df1 is equivalent to determining the intersection of the two dataframes. This can be accomplished using the following function:
pd.merge(df1, df2, on=['A', 'B', 'C', 'D'], how='inner')
For example, if df1 was
A B C D
0 0.403846 0.312230 0.209882 0.397923
1 0.934957 0.731730 0.484712 0.734747
2 0.588245 0.961589 0.910292 0.382072
3 0.534226 0.276908 0.323282 0.629398
4 0.259533 0.277465 0.043652 0.925743
5 0.667415 0.051182 0.928655 0.737673
6 0.217923 0.665446 0.224268 0.772592
7 0.023578 0.561884 0.615515 0.362084
8 0.346373 0.375366 0.083003 0.663622
9 0.352584 0.103263 0.661686 0.246862
and df2 was defined as:
A B C D
0 0.259533 0.277465 0.043652 0.925743
1 0.667415 0.051182 0.928655 0.737673
2 0.217923 0.665446 0.224268 0.772592
3 0.023578 0.561884 0.615515 0.362084
4 0.346373 0.375366 0.083003 0.663622
5 2.000000 3.000000 4.000000 5.000000
6 14.000000 15.000000 16.000000 17.000000
The function pd.merge(df1, df2, on=['A', 'B', 'C', 'D'], how='inner') produces:
A B C D
0 0.259533 0.277465 0.043652 0.925743
1 0.667415 0.051182 0.928655 0.737673
2 0.217923 0.665446 0.224268 0.772592
3 0.023578 0.561884 0.615515 0.362084
4 0.346373 0.375366 0.083003 0.663622
The result is all of the rows (all columns) that are in both df1 and df2.
We can also modify this example if the columns are not the same in df1 and df2 and just compare the row values that are the same for a subset of the columns. If we modify the original example:
df1 = pd.DataFrame(np.random.rand(10,4),columns=list('ABCD'))
df2 = df1.ix[4:8]
df2.reset_index(drop=True,inplace=True)
df2.loc[-1] = [2, 3, 4, 5]
df2.loc[-2] = [14, 15, 16, 17]
df2.reset_index(drop=True,inplace=True)
df2 = df2[['A', 'B', 'C']] # df2 has only columns A B C
Then we can find the common columns between the two dataframes using common_cols = list(set(df1.columns) & set(df2.columns)) and then merge:
pd.merge(df1, df2, on=common_cols, how='inner')
EDIT: New question (from the comments): having identified the rows from df2 that are also present in the first dataframe (df1), is it possible to take the result of the pd.merge() and then drop the rows from df2 that are also present in df1?
I do not know of a straightforward way to accomplish the task of dropping the rows from df2 that are also present in df1. That said, you could use the following:
ds1 = set(tuple(line) for line in df1.values)
ds2 = set(tuple(line) for line in df2.values)
df = pd.DataFrame(list(ds2.difference(ds1)), columns=df2.columns)
There probably exists a better way to accomplish that task, but I am unaware of such a method/function.
EDIT 2: How to drop the rows from df2 that are also present in df1, as shown in @WR's answer.
The method provided df2[~df2['A'].isin(df12['A'])] does not account for all types of situations. Consider the following DataFrames:
df1:
A B C D
0 6 4 1 6
1 7 6 6 8
2 1 6 2 7
3 8 0 4 1
4 1 0 2 3
5 8 4 7 5
6 4 7 1 1
7 3 7 3 4
8 5 2 8 8
9 3 2 8 4
df2:
A B C D
0 1 0 2 3
1 8 4 7 5
2 4 7 1 1
3 3 7 3 4
4 5 2 8 8
5 1 1 1 1
6 2 2 2 2
df12:
A B C D
0 1 0 2 3
1 8 4 7 5
2 4 7 1 1
3 3 7 3 4
4 5 2 8 8
Using the above DataFrames with the goal of dropping rows from df2 that are also present in df1 would result in the following:
A B C D
0 1 1 1 1
1 2 2 2 2
Rows (1, 1, 1, 1) and (2, 2, 2, 2) are in df2 and not in df1. Unfortunately, using the provided method (df2[~df2['A'].isin(df12['A'])]) results in:
A B C D
6 2 2 2 2
This occurs because the value of 1 in column A is found in both the intersection DataFrame (i.e. (1, 0, 2, 3)) and df2 and thus removes both (1, 0, 2, 3) and (1, 1, 1, 1). This is unintended since the row (1, 1, 1, 1) is not in df1 and should not be removed.
I think the following will provide a solution. It creates a dummy column that is later used to subset the DataFrame to the desired results:
df12['key'] = 'x'
temp_df = pd.merge(df2, df12, on=df2.columns.tolist(), how='left')
temp_df[temp_df['key'].isnull()].drop('key', axis=1)
@Andrew: I believe I found a way to drop the rows of one dataframe that are already present in another (i.e. to answer my EDIT) without using loops - let me know if you disagree and/or if my OP + EDIT did not clearly state this:
THIS WORKS
The columns for both dataframes are always the same - A, B, C and D. With this in mind, based heavily on Andrew's approach, here is how to drop the rows from df2 that are also present in df1:
common_cols = df1.columns.tolist() #generate list of column names
df12 = pd.merge(df1, df2, on=common_cols, how='inner') #extract common rows with merge
df2 = df2[~df2['A'].isin(df12['A'])]
Line 3 does the following:
Extract only the rows from df2 that do not match rows in df1. In order for 2 rows to be different, ANY one column of one row must necessarily be different from the corresponding column in the other row.
Here, I picked column A to make this comparison - it is possible to use any one of the column names, but not ALL of the column names.
NOTE: this method is essentially the equivalent of the SQL NOT IN().
