How can I pick out the difference between two columns of the same name in two dataframes?
I mean, I have dataframe A with a column named X and dataframe B with a column named X. If I do pd.merge(A, B, on=['X']), I'll get the common X values of A and B, but how can I get the "non-common" ones?
If you change the merge type to how='outer' and pass indicator=True, this will add a _merge column telling you whether each value is left_only, both, or right_only:
In [2]:
import numpy as np
import pandas as pd
A = pd.DataFrame({'x': np.arange(5)})
B = pd.DataFrame({'x': np.arange(3, 8)})
print(A)
print(B)
x
0 0
1 1
2 2
3 3
4 4
x
0 3
1 4
2 5
3 6
4 7
In [3]:
pd.merge(A, B, how='outer', indicator=True)
Out[3]:
x _merge
0 0.0 left_only
1 1.0 left_only
2 2.0 left_only
3 3.0 both
4 4.0 both
5 5.0 right_only
6 6.0 right_only
7 7.0 right_only
You can then filter the resultant merged df on the _merge column:
In [4]:
merged = pd.merge(A, B, how='outer', indicator=True)
merged[merged['_merge'] == 'left_only']
Out[4]:
x _merge
0 0.0 left_only
1 1.0 left_only
2 2.0 left_only
You can also use isin and negate the mask to find values not in B:
In [5]:
A[~A['x'].isin(B['x'])]
Out[5]:
x
0 0
1 1
2 2
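If you need the values unique to either side (the symmetric difference), a minimal sketch along the same lines, reusing A and B from above:
# values only in A, values only in B, stacked together
only_a = A[~A['x'].isin(B['x'])]
only_b = B[~B['x'].isin(A['x'])]
print(pd.concat([only_a, only_b]))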
The accepted answer gives what is called a LEFT JOIN IF NULL in SQL terms. If you want all the rows except the matching ones from both DataFrames, not only the left ones, you have to add another condition to the filter, since you want to exclude all rows which are in both.
In this case we use DataFrame.merge & DataFrame.query:
df1 = pd.DataFrame({'A':list('abcde')})
df2 = pd.DataFrame({'A':list('cdefgh')})
print(df1, '\n')
print(df2)
A
0 a # <- only df1
1 b # <- only df1
2 c # <- both
3 d # <- both
4 e # <- both
A
0 c # both
1 d # both
2 e # both
3 f # <- only df2
4 g # <- only df2
5 h # <- only df2
df = (
    df1.merge(df2, on='A', how='outer', indicator=True)
       .query('_merge != "both"')
       .drop(columns='_merge')
)
print(df)
A
0 a
1 b
5 f
6 g
7 h
How do you combine 2 dataframes so that one is repeated over and over, combined with every line of the other dataframe? For example:
d1 = pd.DataFrame([[1,3],[2,4]])
print(d1)
0 1
0 1 3
1 2 4
and
d2 = pd.DataFrame([['A','D'],['B','E'],['C','F']])
print(d2)
0 1
0 A D
1 B E
2 C F
combining in :
d3 = pd.DataFrame([[1,3,'A','D'],[1,3,'B','E'],[1,3,'C','F'],[2,4,'A','D'],[2,4,'B','E'],[2,4,'C','F']])
print(d3)
0 1 2 3
0 1 3 A D
1 1 3 B E
2 1 3 C F
3 2 4 A D
4 2 4 B E
5 2 4 C F
I can loop over d1 and concat, but is there any implemented functionality that already does this?
Thanks
I believe what you are searching for is a cross join.
You can use the following code to get your answer; you will just need to clean up the column naming (one way is sketched after the snippet). Note that how='cross' requires pandas >= 1.2.
df1 = pd.DataFrame([[1, 3], [2, 4]])
df2 = pd.DataFrame([['A', 'D'], ['B', 'E'], ['C', 'F']])
df1.merge(df2, how='cross')
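A minimal cleanup sketch, assuming plain positional labels are what you want afterwards; since both frames share the integer column names 0 and 1, merge suffixes the overlap as 0_x, 1_x, 0_y, 1_y:
import pandas as pd

df1 = pd.DataFrame([[1, 3], [2, 4]])
df2 = pd.DataFrame([['A', 'D'], ['B', 'E'], ['C', 'F']])

out = df1.merge(df2, how='cross')  # pandas >= 1.2
out.columns = range(out.shape[1])  # 0_x, 1_x, 0_y, 1_y -> 0, 1, 2, 3
print(out)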
I hope this works for you. Create a key column with a value of 1 in both dataframes, join on that key, and then drop it.
import pandas as pd
d1 = pd.DataFrame([[1,3],[2,4]])
print(d1)
d2 = pd.DataFrame([['A','D'],['B','E'],['C','F']])
print(d2)
d1['key'] = 1
d2['key'] = 1
d1.merge(d2, on='key').drop('key', axis=1)
Here is an alternative solution using pd.merge() and df.assign():
d2.columns = ['2', '3']
d3 = pd.merge(d1.assign(key=1), d2.assign(key=1), on='key', suffixes=('', '')).drop('key', axis=1)
print(d3)
0 1 2 3
0 1 3 A D
1 1 3 B E
2 1 3 C F
3 2 4 A D
4 2 4 B E
5 2 4 C F
I have a pandas dataframe (here represented using Excel):
Now I would like to delete all duplicates (1) of a specific column (B).
How can I do it?
For this example, the result would look like this:
You can use duplicated to build a boolean mask and then set NaNs by loc, mask, or numpy.where:
import numpy as np

df.loc[df['B'].duplicated(), 'B'] = np.nan                 # via loc
df['B'] = df['B'].mask(df['B'].duplicated())               # via mask
df['B'] = np.where(df['B'].duplicated(), np.nan, df['B'])  # via numpy.where
Alternatively, if you need to remove the duplicate rows by column B:
df = df.drop_duplicates(subset=['B'])
Sample:
df = pd.DataFrame({
    'B': [1, 2, 1, 3],
    'A': [1, 5, 7, 9]
})
print (df)
A B
0 1 1
1 5 2
2 7 1
3 9 3
df.loc[df['B'].duplicated(), 'B'] = np.nan
print (df)
A B
0 1 1.0
1 5 2.0
2 7 NaN
3 9 3.0
df = df.drop_duplicates(subset=['B'])
print (df)
A B
0 1 1
1 5 2
3 9 3
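For completeness, both duplicated and drop_duplicates accept a keep parameter ('first' by default, 'last', or False to mark or drop every occurrence). A quick sketch:
# keep=False marks all occurrences of a duplicated value, not just the later ones
df = pd.DataFrame({'B': [1, 2, 1, 3], 'A': [1, 5, 7, 9]})
print(df[~df['B'].duplicated(keep=False)])  # only rows whose B value is unique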
I am trying to merge two dataframes and replace the NaNs in the left df with values from the right df. I can do it with three lines of code as below, but I want to know if there is a better/shorter way:
# Example data (my actual df is ~500k rows x 11 cols)
df1 = pd.DataFrame({'a': [1,2,3,4], 'b': [0,1,np.nan, 1], 'e': ['a', 1, 2,'b']})
df2 = pd.DataFrame({'a': [1,2,3,4], 'b': [np.nan, 1, 0, 1]})
# Merge the dataframes...
df = df1.merge(df2, on='a', how='left')
# Fillna in 'b' column of left df with right df...
df['b'] = df['b_x'].fillna(df['b_y'])
# Drop the columns no longer needed
df = df.drop(['b_x', 'b_y'], axis=1)
The problem confusing merge is that both dataframes have a 'b' column, and the left and right versions have NaNs in mismatched places. You want to avoid getting the unwanted duplicate columns 'b_x', 'b_y' from merge in the first place:
slice the non-shared columns 'a', 'e' from df1
do merge(df2, 'left'); this will pick up 'b' from the right dataframe (since within the merge it now only exists in the right df)
finally, do df1.update(...); update overwrites df1 wherever the merged result has non-NA values, so the NaNs in df1['b'] are filled from df2, while positions where df2 has NaN keep df1's original values
Solution:
df1.update(df1[['a', 'e']].merge(df2, 'left'))
df1
a b e
0 1 0.0 a
1 2 1.0 1
2 3 0.0 2
3 4 1.0 b
Note: Because I used merge(..., how='left'), I preserve the row order of the calling dataframe. If my df1 had values of a that were not in order:
a b e
0 1 0.0 a
1 2 1.0 1
2 4 1.0 b
3 3 NaN 2
The result would be:
df1.update(df1[['a', 'e']].merge(df2, 'left'))
df1
a b e
0 1 0.0 a
1 2 1.0 1
2 4 1.0 b
3 3 0.0 2
Which is as expected.
Further...
If you want to be more explicit when there may be more columns involved:
df1.update(df1.drop(columns='b').merge(df2, how='left', on='a'))
Even Further...
If you don't want to mutate the dataframe with update, we can use combine_first instead:
Quick
df1.combine_first(df1[['a', 'e']].merge(df2, 'left'))
Explicit
df1.combine_first(df1.drop(columns='b').merge(df2, how='left', on='a'))
EVEN FURTHER!...
The 'left' merge may preserve order but NOT the index. This is the ultra conservative approach:
df3 = df1.drop(columns='b').merge(df2, how='left', on='a').set_index(df1.index)
df1.combine_first(df3)
Short version
df1.b.fillna(df1.a.map(df2.set_index('a').b), inplace=True)
df1
Out[173]:
a b e
0 1 0.0 a
1 2 1.0 1
2 3 0.0 2
3 4 1.0 b
Since you mentioned there will be multiple columns:
df = df1.combine_first(df1[['a']].merge(df2, on='a', how='left'))
df
Out[184]:
a b e
0 1 0.0 a
1 2 1.0 1
2 3 0.0 2
3 4 1.0 b
We can also pass a DataFrame to fillna:
df1.fillna(df1[['a']].merge(df2, on='a', how='left'))
Out[185]:
a b e
0 1 0.0 a
1 2 1.0 1
2 3 0.0 2
3 4 1.0 b
Only if the indices are aligned (important note) can we use update:
df1['b'].update(df2['b'])
a b e
0 1 0.0 a
1 2 1.0 1
2 3 0.0 2
3 4 1.0 b
Or simply fillna:
df1['b'].fillna(df2['b'], inplace=True)
If your indices are not aligned, see WenNYoBen's answer or the comment underneath.
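For illustration, a minimal sketch of aligning on the key first, assuming 'a' uniquely identifies rows in df2 (hypothetical reshuffled data):
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'a': [1, 2, 3, 4], 'b': [0, 1, np.nan, 1], 'e': ['a', 1, 2, 'b']})
df2 = pd.DataFrame({'a': [4, 3, 2, 1], 'b': [1.0, 0.0, 1.0, np.nan]})  # same keys, shuffled order

# align both frames on the shared key 'a', fill the gaps, restore the layout
aligned = df1.set_index('a')
aligned['b'] = aligned['b'].fillna(df2.set_index('a')['b'])
df1 = aligned.reset_index()
print(df1)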
You can mask the data.
Original data:
print(df1)
one two three
0 1 1.0 1.0
1 2 NaN 2.0
2 3 3.0 NaN
print(df2)
one two three
0 4 4 4
1 4 2 4
2 4 4 3
See below; mask replaces values where the condition holds (here, where the left df has NaNs) with the corresponding values from df2.
# mask values where isna()
cols = ['two', 'three']
df1[cols] = df1[cols].mask(df1[cols].isna(), df2[cols])
output:
one two three
0 1 1.0 1.0
1 2 2.0 2.0
2 3 3.0 3.0
I have a df:
df = pd.DataFrame([[1,1],[3,4],[3,4]], columns=["a", 'b'])
a b
0 1 1
1 3 4
2 3 4
I have to filter this df based on a query. The query can be complex, but here I'm using a simple one:
items = [3,4]
df.query("a in @items and b == 4")
a b
1 3 4
2 3 4
To these rows only, I would like to add some values in new columns:
configuration = {'c': 'action', 'd': 'non-action'}
for k, v in configuration.items():
    df[k] = v
The rest of the rows should have an empty value or np.nan. So my end df should look like:
a b c d
0 1 1 np.nan np.nan
1 3 4 action non-action
2 3 4 action non-action
The issue is that the query gives me a copy of the dataframe, and then I have to somehow merge the two and replace the modified rows by index. How can I do this without manually replacing rows in the original df by index with the queried ones?
Using combine_first with assign:
df.query("a in @items and b == 4").assign(**configuration).combine_first(df)
Out[138]:
a b c d
0 1.0 1.0 NaN NaN
1 3.0 4.0 action non-action
2 3.0 4.0 action non-action
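If you would rather write into the original df and avoid the float upcast that combine_first applies to 'a' and 'b' above, a hedged alternative sketch using the filtered index:
# grab the index of the matching rows, then assign the new columns in place;
# rows outside idx automatically get NaN in the new columns
idx = df.query("a in @items and b == 4").index
for k, v in configuration.items():
    df.loc[idx, k] = v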
Let's say I have a dataframe that looks like this:
df = pd.DataFrame(index=list('abcde'), data={'A': range(5), 'B': range(5)})
df
Out[92]:
A B
a 0 0
b 1 1
c 2 2
d 3 3
e 4 4
Assuming that this dataframe already exists, how can I simply add a level 'C' to the column index so I get this:
df
Out[92]:
A B
C C
a 0 0
b 1 1
c 2 2
d 3 3
e 4 4
I saw SO answers like python/pandas: how to combine two dataframes into one with hierarchical column index? but these concat different dataframes instead of adding a column level to an already existing dataframe.
As suggested by @StevenG himself, a better answer:
df.columns = pd.MultiIndex.from_product([df.columns, ['C']])
print(df)
# A B
# C C
# a 0 0
# b 1 1
# c 2 2
# d 3 3
# e 4 4
option 1
set_index and T
df.T.set_index(np.repeat('C', df.shape[1]), append=True).T
option 2
pd.concat, keys, and swaplevel
pd.concat([df], axis=1, keys=['C']).swaplevel(0, 1, 1)
A solution which adds a name to the new level and is easier on the eyes than other answers already presented:
df['newlevel'] = 'C'
df = df.set_index('newlevel', append=True).unstack('newlevel')
print(df)
# A B
# newlevel C C
# a 0 0
# b 1 1
# c 2 2
# d 3 3
# e 4 4
You could just assign the columns like:
>>> df.columns = [df.columns, ['C', 'C']]
>>> df
A B
C C
a 0 0
b 1 1
c 2 2
d 3 3
e 4 4
>>>
Or for an unknown number of columns:
>>> df.columns = [df.columns.get_level_values(0), np.repeat('C', df.shape[1])]
>>> df
A B
C C
a 0 0
b 1 1
c 2 2
d 3 3
e 4 4
>>>
Another way for an existing two-level MultiIndex (appending 'E' as a middle level):
df.columns = pd.MultiIndex.from_tuples(map(lambda x: (x[0], 'E', x[1]), df.columns))
A B
E E
C D
a 0 0
b 1 1
c 2 2
d 3 3
e 4 4
I like this one because it is explicit (using MultiIndex) and chain-friendly (.set_axis):
df.set_axis(pd.MultiIndex.from_product([df.columns, ['C']]), axis=1)
This is particularly convenient when merging DataFrames with different column level numbers, where Pandas (1.4.2) raises a FutureWarning (FutureWarning: merging between different levels is deprecated and will be removed ... ):
import pandas as pd
df1 = pd.DataFrame(index=list('abcde'), data={'A': range(5), 'B': range(5)})
df2 = pd.DataFrame(index=list('abcde'), data=range(10, 15), columns=pd.MultiIndex.from_tuples([("C", "x")]))
# df1:
A B
a 0 0
b 1 1
# df2:
C
x
a 10
b 11
# merge while giving df1 another column level:
pd.merge(df1.set_axis(pd.MultiIndex.from_product([df1.columns, ['']]), axis=1),
         df2,
         left_index=True, right_index=True)
# result:
A B C
x
a 0 0 10
b 1 1 11
Another method, but using a list comprehension of tuples as the arg to pandas.MultiIndex.from_tuples():
df.columns = pd.MultiIndex.from_tuples([(col, 'C') for col in df.columns])
df
A B
C C
a 0 0
b 1 1
c 2 2
d 3 3
e 4 4