I have this DataFrame (df1) in Pandas:
df1 = pd.DataFrame(np.random.rand(10,4),columns=list('ABCD'))
print df1
          A         B         C         D
0  0.860379  0.726956  0.394529  0.833217
1  0.014180  0.813828  0.559891  0.339647
2  0.782838  0.698993  0.551252  0.361034
3  0.833370  0.982056  0.741821  0.006864
4  0.855955  0.546562  0.270425  0.136006
5  0.491538  0.445024  0.971603  0.690001
6  0.911696  0.065338  0.796946  0.853456
7  0.744923  0.545661  0.492739  0.337628
8  0.576235  0.219831  0.946772  0.752403
9  0.164873  0.454862  0.745890  0.437729
I would like to check whether any rows (all columns) from another dataframe (df2) are present in df1. Here is df2:
df2 = df1.ix[4:8]
df2.reset_index(drop=True,inplace=True)
df2.loc[-1] = [2, 3, 4, 5]
df2.loc[-2] = [14, 15, 16, 17]
df2.reset_index(drop=True,inplace=True)
print df2
           A          B          C          D
0   0.855955   0.546562   0.270425   0.136006
1   0.491538   0.445024   0.971603   0.690001
2   0.911696   0.065338   0.796946   0.853456
3   0.744923   0.545661   0.492739   0.337628
4   0.576235   0.219831   0.946772   0.752403
5   2.000000   3.000000   4.000000   5.000000
6  14.000000  15.000000  16.000000  17.000000
I tried using df.lookup to search for one row at a time. I did it this way:
list1 = df2.ix[0].tolist()
cols = df1.columns.tolist()
print df1.lookup(list1, cols)
but I got this error message:
File "C:\Users\test.py", line 19, in <module>
print df1.lookup(list1, cols)
File "C:\python27\lib\site-packages\pandas\core\frame.py", line 2217, in lookup
raise KeyError('One or more row labels was not found')
KeyError: 'One or more row labels was not found'
I also tried .all() using:
print (df2 == df1).all(1).any()
but I got this error message:
File "C:\Users\test.py", line 12, in <module>
print (df2 == df1).all(1).any()
File "C:\python27\lib\site-packages\pandas\core\ops.py", line 884, in f
return self._compare_frame(other, func, str_rep)
File "C:\python27\lib\site-packages\pandas\core\frame.py", line 3010, in _compare_frame
raise ValueError('Can only compare identically-labeled '
ValueError: Can only compare identically-labeled DataFrame objects
I also tried isin() like this:
print df2.isin(df1)
but I got False everywhere, which is not correct:
       A      B      C      D
0  False  False  False  False
1  False  False  False  False
2  False  False  False  False
3  False  False  False  False
4  False  False  False  False
5  False  False  False  False
6  False  False  False  False
Is it possible to search for a set of rows in a DataFrame by comparing it to another dataframe's rows?
EDIT:
Is it possible to drop df2 rows if those rows are also present in df1?
One possible solution to your problem would be to use merge. Checking whether any rows (all columns) from another dataframe (df2) are present in df1 is equivalent to determining the intersection of the two dataframes. This can be accomplished with:
pd.merge(df1, df2, on=['A', 'B', 'C', 'D'], how='inner')
For example, if df1 was
A B C D
0 0.403846 0.312230 0.209882 0.397923
1 0.934957 0.731730 0.484712 0.734747
2 0.588245 0.961589 0.910292 0.382072
3 0.534226 0.276908 0.323282 0.629398
4 0.259533 0.277465 0.043652 0.925743
5 0.667415 0.051182 0.928655 0.737673
6 0.217923 0.665446 0.224268 0.772592
7 0.023578 0.561884 0.615515 0.362084
8 0.346373 0.375366 0.083003 0.663622
9 0.352584 0.103263 0.661686 0.246862
and df2 was defined as:
A B C D
0 0.259533 0.277465 0.043652 0.925743
1 0.667415 0.051182 0.928655 0.737673
2 0.217923 0.665446 0.224268 0.772592
3 0.023578 0.561884 0.615515 0.362084
4 0.346373 0.375366 0.083003 0.663622
5 2.000000 3.000000 4.000000 5.000000
6 14.000000 15.000000 16.000000 17.000000
Calling pd.merge(df1, df2, on=['A', 'B', 'C', 'D'], how='inner') produces:
A B C D
0 0.259533 0.277465 0.043652 0.925743
1 0.667415 0.051182 0.928655 0.737673
2 0.217923 0.665446 0.224268 0.772592
3 0.023578 0.561884 0.615515 0.362084
4 0.346373 0.375366 0.083003 0.663622
The result is all of the rows (all columns) that appear in both df1 and df2.
We can also adapt this example when the columns are not the same in df1 and df2, comparing row values only on a shared subset of the columns. If we modify the original example:
df1 = pd.DataFrame(np.random.rand(10,4),columns=list('ABCD'))
df2 = df1.ix[4:8]
df2.reset_index(drop=True,inplace=True)
df2.loc[-1] = [2, 3, 4, 5]
df2.loc[-2] = [14, 15, 16, 17]
df2.reset_index(drop=True,inplace=True)
df2 = df2[['A', 'B', 'C']] # df2 has only columns A B C
Then we can find the common columns with common_cols = list(set(df1.columns) & set(df2.columns)) and merge on those:
pd.merge(df1, df2, on=common_cols, how='inner')
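Note that a plain set intersection does not preserve column order. If order matters, a sketch of an order-preserving alternative using Index.intersection:
common_cols = df1.columns.intersection(df2.columns).tolist()  # keeps df1's column order
pd.merge(df1, df2, on=common_cols, how='inner')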
EDIT: New question (from the comments): having identified the rows from df2 that are also present in the first dataframe (df1), is it possible to take the result of the pd.merge() and then drop those rows from df2?
I do not know of a straightforward way to accomplish the task of dropping the rows from df2 that are also present in df1. That said, you could use the following:
ds1 = set(tuple(line) for line in df1.values)  # rows of df1 as hashable tuples
ds2 = set(tuple(line) for line in df2.values)  # rows of df2 as hashable tuples
df = pd.DataFrame(list(ds2.difference(ds1)), columns=df2.columns)  # rows only in df2
There probably exists a better way to accomplish that task, but I am unaware of such a method / function.
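For illustration, a minimal self-contained sketch of this set-difference approach (the frames here are made up for the example):
import pandas as pd
df1 = pd.DataFrame({'A': [1, 8], 'B': [0, 4], 'C': [2, 7], 'D': [3, 5]})
df2 = pd.DataFrame({'A': [1, 1], 'B': [0, 1], 'C': [2, 1], 'D': [3, 1]})
ds1 = set(tuple(line) for line in df1.values)
ds2 = set(tuple(line) for line in df2.values)
print(pd.DataFrame(list(ds2.difference(ds1)), columns=df2.columns))
# Only the (1, 1, 1, 1) row survives; note that set operations do not preserve row order.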
EDIT 2: How to drop the rows from df2 that are also present in df1, as shown in #WR's answer.
The method provided, df2[~df2['A'].isin(df12['A'])], does not account for all situations. Consider the following DataFrames:
df1:
A B C D
0 6 4 1 6
1 7 6 6 8
2 1 6 2 7
3 8 0 4 1
4 1 0 2 3
5 8 4 7 5
6 4 7 1 1
7 3 7 3 4
8 5 2 8 8
9 3 2 8 4
df2:
A B C D
0 1 0 2 3
1 8 4 7 5
2 4 7 1 1
3 3 7 3 4
4 5 2 8 8
5 1 1 1 1
6 2 2 2 2
df12:
A B C D
0 1 0 2 3
1 8 4 7 5
2 4 7 1 1
3 3 7 3 4
4 5 2 8 8
With the above DataFrames, dropping the rows from df2 that are also present in df1 should yield:
A B C D
0 1 1 1 1
1 2 2 2 2
Rows (1, 1, 1, 1) and (2, 2, 2, 2) are in df2 and not in df1. Unfortunately, using the provided method (df2[~df2['A'].isin(df12['A'])]) results in:
A B C D
6 2 2 2 2
This occurs because the value 1 appears in column A of both the intersection DataFrame (in row (1, 0, 2, 3)) and df2, so the filter removes both (1, 0, 2, 3) and (1, 1, 1, 1). This is unintended, since the row (1, 1, 1, 1) is not in df1 and should not be removed.
I think the following will provide a solution. It creates a dummy column that is later used to subset the DataFrame to the desired results:
df12['key'] = 'x'  # dummy marker on every intersection row
temp_df = pd.merge(df2, df12, on=df2.columns.tolist(), how='left')  # 'key' is NaN where df2 has no match in df12
df2_unique = temp_df[temp_df['key'].isnull()].drop('key', axis=1)  # keep only the unmatched rows
#Andrew: I believe I found a way to drop the rows of one dataframe that are already present in another (i.e. to answer my EDIT) without using loops - let me know if you disagree and/or if my OP + EDIT did not clearly state this:
THIS WORKS
The columns for both dataframes are always the same - A, B, C and D. With this in mind, based heavily on Andrew's approach, here is how to drop the rows from df2 that are also present in df1:
common_cols = df1.columns.tolist() #generate list of column names
df12 = pd.merge(df1, df2, on=common_cols, how='inner') #extract common rows with merge
df2 = df2[~df2['A'].isin(df12['A'])]
Line 3 does the following:
Extract only rows from df2 that do not match rows in df1:
In order for 2 rows to be different, ANY one column of one row must
necessarily be different from the corresponding column in the other
row.
Here, I picked column A to make this comparison - it is
possible to use any of the column names, but not ALL of the
column names.
NOTE: this method is essentially the equivalent of the SQL NOT IN().
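As EDIT 2 above shows, comparing on a single column can drop the wrong rows; a sketch of an all-columns equivalent using merge with indicator=True:
merged = df2.merge(df1, on=common_cols, how='left', indicator=True)
df2 = merged[merged['_merge'] == 'left_only'].drop(columns='_merge')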
Related
I have a DataFrame that looks like this:
Image of DataFrame
What I would like to do is compare the values in all four columns (A, B, C, and D) for every row, count how many of A, B, and C are greater than D in each row, and store that number in the 'Count' column. So, for instance, 'Count' should be 1 for the second and third rows, and 2 for the last row.
Thank you in advance!
You can vectorize the operation using the gt and sum methods along an axis:
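For instance, with a small sample frame (values assumed here, since the original data was only posted as an image):
import pandas as pd
df = pd.DataFrame({'A': [1, 4, 2], 'B': [2, 3, 1], 'C': [3, 2, 4], 'D': [4, 1, 3]})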
df['Count'] = df[['A', 'B', 'C']].gt(df['D'], axis=0).sum(axis=1)
print(df)
# Output
A B C D Count
0 1 2 3 4 0
1 4 3 2 1 3
2 2 1 4 3 1
In the future, please do not post data as an image.
Use a lambda function and compare across all columns, then sum across the columns.
import pandas as pd

data = {'A': [1, 47, 4316, 8511],
        'B': [4, 1, 3, 4],
        'C': [2, 7, 9, 1],
        'D': [32, 17, 1, 0]}
df = pd.DataFrame(data)
# For each row, mark the columns whose value exceeds D (D < D is False), then sum the marks
df['Count'] = df.apply(lambda x: x['D'] < x, axis=1).sum(axis=1)
Output:
A B C D Count
0 1 4 2 32 0
1 47 1 7 17 1
2 4316 3 9 1 3
3 8511 4 1 0 3
I'd like to concatenate two dataframes A, B to a new one without duplicate rows (if rows in B already exist in A, don't add):
Dataframe A:
I II
0 1 2
1 3 1
Dataframe B:
I II
0 5 6
1 3 1
New Dataframe:
I II
0 1 2
1 3 1
2 5 6
How can I do this?
The simplest way is to just do the concatenation, and then drop duplicates.
>>> df1
A B
0 1 2
1 3 1
>>> df2
A B
0 5 6
1 3 1
>>> pandas.concat([df1,df2]).drop_duplicates().reset_index(drop=True)
A B
0 1 2
1 3 1
2 5 6
The reset_index(drop=True) is to fix up the index after the concat() and drop_duplicates(). Without it you will have an index of [0,1,0] instead of [0,1,2]. This could cause problems for further operations on this dataframe down the road if it isn't reset right away.
In case you already have a duplicate row in DataFrame A, concatenating and then dropping duplicate rows will remove rows from DataFrame A that you might want to keep.
In this case, you will need to create a new column with a cumulative count and then drop duplicates. It all depends on your use case, but this is common in time-series data.
Here is an example:
df_1 = pd.DataFrame([
{'date':'11/20/2015', 'id':4, 'value':24},
{'date':'11/20/2015', 'id':4, 'value':24},
{'date':'11/20/2015', 'id':6, 'value':34},])
df_2 = pd.DataFrame([
{'date':'11/20/2015', 'id':4, 'value':24},
{'date':'11/20/2015', 'id':6, 'value':14},
])
df_1['count'] = df_1.groupby(['date','id','value']).cumcount()
df_2['count'] = df_2.groupby(['date','id','value']).cumcount()
df_tot = pd.concat([df_1,df_2], ignore_index=False)
df_tot = df_tot.drop_duplicates()
df_tot = df_tot.drop(['count'], axis=1)
>>> df_tot
date id value
0 11/20/2015 4 24
1 11/20/2015 4 24
2 11/20/2015 6 34
1 11/20/2015 6 14
I'm surprised that pandas doesn't offer a native solution for this task.
I don't think that it's efficient to just drop the duplicates (as Rian G suggested) if you work with large datasets.
It is probably most efficient to use sets to find the non-overlapping indices, then use a list comprehension to translate indices into boolean row locations, which you need in order to access rows with iloc. Below you find a function that performs the task. If you don't choose a specific column (col) to check for duplicates, then indexes will be used, as you requested. If you choose a specific column, be aware that existing duplicate entries in 'a' will remain in the result.
import pandas as pd
def append_non_duplicates(a, b, col=None):
    if ((a is not None and not isinstance(a, pd.DataFrame))
            or (b is not None and not isinstance(b, pd.DataFrame))):
        raise ValueError('a and b must be of type pandas.DataFrame.')
    if a is None:
        return b
    if b is None:
        return a
    if col is not None:
        aind = a.iloc[:, col].values  # compare on the values of one column
        bind = b.iloc[:, col].values
    else:
        aind = a.index.values         # compare on the index, as requested
        bind = b.index.values
    take_rows = list(set(bind) - set(aind))     # values present only in b
    take_rows = [i in take_rows for i in bind]  # boolean row locations for b
    # DataFrame.append was removed in pandas 2.0; pd.concat is the equivalent
    return pd.concat([a, b.iloc[take_rows, :]])
# Usage
a = pd.DataFrame([[1,2,3],[1,5,6],[1,12,13]], index=[1000,2000,5000])
b = pd.DataFrame([[1,2,3],[4,5,6],[7,8,9]], index=[1000,2000,3000])
append_non_duplicates(a,b)
# 0 1 2
# 1000 1 2 3 <- from a
# 2000 1 5 6 <- from a
# 5000 1 12 13 <- from a
# 3000 7 8 9 <- from b
append_non_duplicates(a,b,0)
# 0 1 2
# 1000 1 2 3 <- from a
# 2000 1 5 6 <- from a
# 5000 1 12 13 <- from a
# 2000 4 5 6 <- from b
# 3000 7 8 9 <- from b
Another option:
concatenation = pd.concat([
dfA,
    dfB[dfB['I'].isin(dfA['I']) == False],  # <-- get all the data in dfB that doesn't show up in dfA (based on values in column 'I')
])
The object concatenation will be:
I II
0 1 2
1 3 1
2 5 6
I have a data frame df1 with multiple columns. I have df2 with same set of columns. I want to get the records of df1 which aren't present in df2. I am able to perform this task as below:
df1[~df1['ID'].isin(df2['ID'])]
Now I want to do the same operation, but on the combination of NAME and ID. This means that if a NAME and ID pair from df1 also exists as the same pair in df2, then that whole record should not be part of my result.
How do I accomplish this task using pandas?
I don't think that the currently accepted answer is actually correct. It was my impression that you would like to drop a value pair in df1 if that pair also exists in the other dataframe, independent of the row positions they occupy in the respective dataframes.
Consider the following dataframes
df1 = pd.DataFrame({'a': list('ABC'), 'b': list('CDF')})
df2 = pd.DataFrame({'a': list('ABAC'), 'b': list('CFFF')})
df1
a b
0 A C
1 B D
2 C F
df2
a b
0 A C
1 B F
2 A F
3 C F
So you would like to drop rows 0 and 2 in df1. However, with the above suggestion you get
df1.isin(df2)
a b
0 True True
1 True False
2 False True
What you can do instead is
compare_cols = ['a','b']
mask = pd.Series(list(zip(*[df1[c] for c in compare_cols]))).isin(list(zip(*[df2[c] for c in compare_cols])))
mask
0 True
1 False
2 True
dtype: bool
That is, you construct a Series of tuples from the columns you would like to compare coming from the first dataframe, and then check whether these tuples exist in the list of tuples obtained in the same way from the respective columns in the second dataframe.
Final step: df1 = df1.loc[~mask.values]
As pointed out by #rvrvrv in the comments, it is best to use mask.values instead of just mask in case df1 and mask do not have the same index (or one uses the df1 index in the construction of mask.)
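Putting the pieces together, a minimal end-to-end sketch of this approach:
import pandas as pd
df1 = pd.DataFrame({'a': list('ABC'), 'b': list('CDF')})
df2 = pd.DataFrame({'a': list('ABAC'), 'b': list('CFFF')})
compare_cols = ['a', 'b']
# Build row tuples from the comparison columns of each frame
rows1 = pd.Series(list(zip(*[df1[c] for c in compare_cols])))
rows2 = list(zip(*[df2[c] for c in compare_cols]))
mask = rows1.isin(rows2)
print(df1.loc[~mask.values])
#    a  b
# 1  B  D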
It's actually pretty easy.
df1[(~df1[['ID', 'Name']].isin(df2[['ID', 'Name']])).any(axis=1)]
You pass the column names that you want to compare as a list. The interesting part is what it outputs.
Let's say df1 equals:
ID Name
0 0 0
1 1 1
2 2 2
3 3 3
4 4 4
5 5 5
6 6 6
7 7 7
8 8 8
9 1 1
And df2 equals:
ID Name
0 0 0
1 1 1
2 2 2
3 3 3
4 4 4
5 5 5
6 6 6
7 7 7
8 8 8
9 1 9
Every (ID, Name) pair between df1 and df2 matches except for row 9. The result of my answer will return:
ID Name
9 1 1
Which is exactly what you want.
In more detail, when you do the mask:
~df1[['ID', 'Name']].isin(df2[['ID', 'Name']])
You get this:
ID Name
0 False False
1 False False
2 False False
3 False False
4 False False
5 False False
6 False False
7 False False
8 False False
9 False True
We want to select the rows where at least one of those columns is True. For this, we can add any(axis=1) onto the end, which creates:
0 False
1 False
2 False
3 False
4 False
5 False
6 False
7 False
8 False
9 True
And then when you index using this series, it will only select row 9.
isin() would not work here, as it also compares the index.
Let's have a look at a super powerful pandas tool: merge()
If we consider the nice example given by user3820991, we have:
df1 = pd.DataFrame({'a': list('ABC'), 'b': list('CDF')})
df2 = pd.DataFrame({'a': list('ABAC'), 'b': list('CFFF')})
df1
a b
0 A C
1 B D
2 C F
df2
a b
0 A C
1 B F
2 A F
3 C F
The default merge method in pandas is the 'inner' join. This gives you the equivalent of the isin() method for two columns:
df1.merge(df2[['a','b']], how='inner')
a b
0 A C
1 C F
If you would like the equivalent of not(isin()), then just change the merge method to an 'outer' join (a left join would also work, but for the beauty of the example, the outer join shows more possibilities).
This gives you all the rows in both dataframes; we only have to add indicator=True to be able to select the ones we want:
df1.merge(df2[['a','b']], how='outer', indicator=True)
a b _merge
0 A C both
1 B D left_only
2 C F both
3 B F right_only
4 A F right_only
We want the rows that are in df1 but not in df2, so 'left_only'. As a one-liner, you have:
pd.merge(df1, df2, on=['a','b'], how="outer", indicator=True
).query('_merge=="left_only"').drop(columns='_merge')
a b
1 B D
You can create a new column by concatenating NAME and ID and use this new column the same way you used ID in your question:
df1['temp'] = df1['NAME'].astype(str)+df1['ID'].astype(str)
df2['temp'] = df2['NAME'].astype(str)+df2['ID'].astype(str)
df1[~df1['temp'].isin(df2['temp'])].drop('temp', axis=1)
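One caveat: plain string concatenation can collide, e.g. NAME='AB' with ID=1 and NAME='A' with ID='B1' both yield 'AB1'. A sketch of a defensive tweak, assuming '|' never appears in NAME:
df1['temp'] = df1['NAME'].astype(str) + '|' + df1['ID'].astype(str)
df2['temp'] = df2['NAME'].astype(str) + '|' + df2['ID'].astype(str)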
I have 3 pandas data frames with matching indices. Some operations have trimmed data frames in different ways (removed rows), so that some indices in one data frame may not exist in the other.
I'd like to consolidate all 3 data frames, so they all contain rows with indices that are present in all 3 of them. How is this achievable?
import pandas as pd
data = pd.DataFrame.from_dict({'a': [1,2,3,4], 'b': [3,4,5,6], 'c': [6,7,8,9]})
a = pd.DataFrame(data['a'])
b = pd.DataFrame(data['b'])
c = pd.DataFrame(data['c'])
a = a[a['a'] <= 3]
b = b[b['b'] >= 4]
# some operation here that removes rows that aren't present in all (intersection of all dataframe's indices)
print a
a
1 2
2 3
print b
b
1 4
2 5
print c
c
1 7
2 8
Update
Sorry, I got carried away and forgot what I wanted to achieve when I wrote the examples. The actual intent was to keep the 3 dataframes separate. Apologies for the misleading example (I corrected it now).
Use merge and pass the params left_index=True and right_index=True. The default type of merge is inner, so only values that exist in both left and right will be merged.
In [6]:
a.merge(b, left_index=True, right_index=True).merge(c, left_index=True, right_index=True)
Out[6]:
a b c
1 2 4 7
2 3 5 8
[2 rows x 3 columns]
To modify the original dataframes so that they only contain the rows that exist in all of them, you can do this:
In [12]:
merged = a.merge(b, left_index=True, right_index=True).merge(c, left_index=True, right_index=True)
merged
Out[12]:
a b c
1 2 4 7
2 3 5 8
In [14]:
a = a.loc[merged.index]
b = b.loc[merged.index]
c = c.loc[merged.index]
In [15]:
print(a)
print(b)
print(c)
a
1 2
2 3
b
1 4
2 5
c
1 7
2 8
So we merge all of them on index values that are present in all of them and then use the index to filter the original dataframes.
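Equivalently, the shared index can be computed directly; a sketch using Index.intersection:
common = a.index.intersection(b.index).intersection(c.index)
a, b, c = a.loc[common], b.loc[common], c.loc[common]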
Take a look at concat, which can be used for a variety of combination operations. Here you want the join type set to inner (because you want the intersection) and axis set to 1 (combining columns).
In [123]: pd.concat([a,b,c], join='inner', axis=1)
Out[123]:
a b c
1 2 4 7
2 3 5 8