I have two dataframes, DF1 and DF2, as shown below. I need to compare DF2 with DF1 so that I can identify all the Matching, Different, and Missing values for every column in DF2 that also exists in DF1 (Col1, Col2 & Col3 in this case), for rows with the same EID value (A, B, C & D). I do not want to iterate over each row of a dataframe, as that can be time-consuming.
Note: There can be around 70-100 columns. This is just a sample dataframe I am using.
DF1
EID Col1 Col2 Col3 Col4
0 A a1 b1 c1 d1
1 B a2 b2 c2 d2
2 C None b3 c3 d3
3 D a4 b4 c4 d4
4 G a5 b5 c5 d5
DF2
EID Col1 Col2 Col3
0 A a1 b1 c1
1 B a2 b2 c9
2 C a3 b3 c3
3 D a4 b4 None
Expected output dataframe
EID Col1 Col2 Col3 New_Col
0 A a1 b1 c1 Match
1 B a2 b2 c2 Different
2 C None b3 c3 Missing in DF1
3 D a4 b4 c4 Missing in DF2
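For reference, the sample frames can be constructed like this:
import pandas as pd

# DF1 and DF2 as shown in the tables above
DF1 = pd.DataFrame({
    'EID':  list('ABCDG'),
    'Col1': ['a1', 'a2', None, 'a4', 'a5'],
    'Col2': ['b1', 'b2', 'b3', 'b4', 'b5'],
    'Col3': ['c1', 'c2', 'c3', 'c4', 'c5'],
    'Col4': ['d1', 'd2', 'd3', 'd4', 'd5'],
})
DF2 = pd.DataFrame({
    'EID':  list('ABCD'),
    'Col1': ['a1', 'a2', 'a3', 'a4'],
    'Col2': ['b1', 'b2', 'b3', 'b4'],
    'Col3': ['c1', 'c9', 'c3', None],
})
df1, df2 = DF1, DF2  # the answers below use either spelling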
First, you will need to filter df1 down to the EIDs and columns present in df2.
new_df = df1.loc[df1['EID'].isin(df2['EID']), df2.columns]
EID Col1 Col2 Col3
0 A a1 b1 c1
1 B a2 b2 c2
2 C None b3 c3
3 D a4 b4 c4
Next, since you have a big dataframe to compare, you can convert both new_df and df2 to NumPy arrays.
import numpy as np

array1 = new_df.to_numpy()
array2 = df2.to_numpy()
Now you can compare them row-wise using np.where:
new_df['New Col'] = np.where((array1 == array2).all(axis=1), 'Match', 'Different')
EID Col1 Col2 Col3 New Col
0 A a1 b1 c1 Match
1 B a2 b2 c2 Different
2 C None b3 c3 Different
3 D a4 b4 c4 Different
Finally, to label the rows that contain None values, you can use df.loc and df.isnull:
new_df.loc[new_df.isnull().any(axis=1), ['New Col']] = 'Missing in DF1'
new_df.loc[df2.isnull().any(axis=1), ['New Col']] = 'Missing in DF2'
EID Col1 Col2 Col3 New Col
0 A a1 b1 c1 Match
1 B a2 b2 c2 Different
2 C None b3 c3 Missing in DF1
3 D a4 b4 c4 Missing in DF2
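One caveat: the element-wise array comparison assumes new_df and df2 list the EIDs in the same positional order. If that is not guaranteed, a hedged sketch that sorts both frames on EID before converting to arrays:
# Sort both frames by EID so positions line up before the comparison
new_df = (df1.loc[df1['EID'].isin(df2['EID']), df2.columns]
             .sort_values('EID')
             .reset_index(drop=True))
df2 = df2.sort_values('EID').reset_index(drop=True)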
One thing to note is that "Match", "Different", "Missing in DF1", and "Missing in DF2" are not mutually exclusive.
You can have some values missing in DF1, but also missing in DF2.
However, based on your post, the priority seems to be:
"Match" > "Missing in DF1" > "Missing in DF2" > "Different".
Also, it seems like you're using EID as an index, so it makes more sense to use it as the dataframe index. You can call .reset_index() if you want it as a column.
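For example, assuming EID starts out as a regular column:
df1 = df1.set_index('EID')
df2 = df2.set_index('EID')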
The approach is to use the equality operator / null check element-wise, then call .all and .any across columns.
import numpy as np
import pandas as pd
def compare_dfs(df1, df2):
    # output dataframe has df2 dimensions, but df1 values
    result = df1.reindex(index=df2.index, columns=df2.columns)
    # check if values match; note that None == None, but np.nan != np.nan
    eq_check = (result == df2).all(axis=1)
    # null values are understood to be "missing";
    # change the condition otherwise
    null_check1 = result.isnull().any(axis=1)
    null_check2 = df2.isnull().any(axis=1)
    # create New_Col based on inferred priority
    result.loc[:, "New_Col"] = None
    result.loc[result["New_Col"].isnull() & eq_check, "New_Col"] = "Match"
    result.loc[
        result["New_Col"].isnull() & null_check1, "New_Col"
    ] = "Missing in DF1"
    result.loc[
        result["New_Col"].isnull() & null_check2, "New_Col"
    ] = "Missing in DF2"
    result["New_Col"].fillna("Different", inplace=True)
    return result
You can test your inputs in a jupyter notebook:
import itertools as it

df1 = pd.DataFrame(
    np.array(["".join(i) for i in it.product(list("abcd"), list("12345"))])
    .reshape((4, 5))
    .T,
    index=pd.Index(list("ABCDG"), name="EID"),
    columns=[f"Col{i + 1}" for i in range(4)],
)
df1.loc["C", "Col1"] = None

df2 = df1.iloc[:4, :3].copy()
df2.loc["B", "Col3"] = "c9"
df2.loc["D", "Col3"] = None

display(df1)
display(df2)
display(compare_dfs(df1, df2))
Which should give these results:
Col1 Col2 Col3 Col4
EID
A a1 b1 c1 d1
B a2 b2 c2 d2
C None b3 c3 d3
D a4 b4 c4 d4
G a5 b5 c5 d5
Col1 Col2 Col3
EID
A a1 b1 c1
B a2 b2 c9
C None b3 c3
D a4 b4 None
Col1 Col2 Col3 New_Col
EID
A a1 b1 c1 Match
B a2 b2 c2 Different
C None b3 c3 Missing in DF1
D a4 b4 c4 Missing in DF2
On my i7-6600U local machine, the function takes ~1 sec for a dataset with 1 million rows and 100 columns (df2 keeps 80 of them).
rng = np.random.default_rng(seed=0)
test_size = (1_000_000, 100)

df1 = (
    pd.DataFrame(rng.random(test_size))
    .rename_axis(index="EID")
    .rename(columns=lambda x: f"Col{x + 1}")
)
df2 = df1.sample(frac=0.8, axis=1)
# add differences
df2 += rng.random(df2.shape) > 0.9
# add nulls
df1[rng.random(df1.shape) > 0.99] = np.nan
df2[rng.random(df2.shape) > 0.99] = np.nan
%timeit compare_dfs(df1, df2)
953 ms ± 199 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Underneath it all, you're still going to be doing iterations. However, what you can do is merge the two dataframes on EID with an outer join, and then use an apply function to generate your new column.
df3 = pd.merge(df1, df2, on='EID', how='outer', suffixes=('_df1', '_df2'))
df3['comparison'] = df3.apply(comparison_function, axis=1)
# your comparison_function will hold the checks that yield Match, Different, Missing in DF1/DF2, etc.
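The answer leaves comparison_function undefined; a minimal sketch of what it could look like, assuming the suffixed column layout from the merge above and treating nulls as "missing" (the rules and priority mirror the earlier answers and are illustrative, not from this answer):
import pandas as pd

def comparison_function(row):
    # Recover the shared columns from the merge suffixes
    cols = [c[:-4] for c in row.index if c.endswith('_df2')]
    left = row[[c + '_df1' for c in cols]].to_numpy()
    right = row[[c + '_df2' for c in cols]].to_numpy()
    if (left == right).all():  # None == None holds element-wise for object arrays
        return 'Match'
    if pd.isnull(left).any():
        return 'Missing in DF1'
    if pd.isnull(right).any():
        return 'Missing in DF2'
    return 'Different'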
You can try one of the following merge-based approaches:
#1.
DF1 = DF1.drop('Col4', axis=1)
df = pd.merge(DF2, DF1.loc[DF1['EID'].ne('G')], on=['EID', 'Col1', 'Col2', 'Col3'], how='left', indicator='New Col')
df['New Col'] = np.where(df['New Col'] == 'left_only', 'Missing in DF1', df['New Col'])
df = df.merge(pd.merge(DF2[['EID', 'Col1', 'Col2']], DF1.loc[DF1['EID'].ne('G'), ['EID', 'Col1', 'Col2']], on=['EID', 'Col1', 'Col2'], how='left', indicator='col1_col2'), on=['EID', 'Col1', 'Col2'], how='left')
df = df.merge(pd.merge(DF2[['EID', 'Col2', 'Col3']], DF1.loc[DF1['EID'].ne('G'), ['EID', 'Col2', 'Col3']], on=['EID', 'Col2', 'Col3'], how='left', indicator='col2_col3'), on=['EID', 'Col2', 'Col3'], how='left')
df = df.merge(pd.merge(DF2[['EID', 'Col1', 'Col3']], DF1.loc[DF1['EID'].ne('G'), ['EID', 'Col1', 'Col3']], on=['EID', 'Col1', 'Col3'], how='left', indicator='col1_col3'), on=['EID', 'Col1', 'Col3'], how='left')
a1 = df['New Col'].eq('both') #match
a2 = df['col1_col2'].eq('both') & df['New Col'].eq('Missing in DF1') #same by Col1 & Col2 --> Different
a3 = df['col2_col3'].eq('both') & df['New Col'].eq('Missing in DF1') #same by Col2 & Col3 --> Different
a4 = df['col1_col3'].eq('both') & df['New Col'].eq('Missing in DF1') #same by Col1 & Col3 --> Different
df['New Col'] = np.select([a1, a2, a3, a4], ['match', 'Different/ same Col1 & Col2', 'Different/ same Col2 & Col3', 'Different/ same Col1 & Col3'], df['New Col'])
df = df.drop(columns=['col1_col2', 'col2_col3', 'col1_col3'])
EID Col1 Col2 Col3 New Col
0 A a1 b1 c1 match
1 B a2 b2 c9 Different/ same Col1 & Col2
2 C a3 b3 c3 Different/ same Col2 & Col3
3 D a4 b4 None Different/ same Col1 & Col2
or
#2.
DF1 = DF1.drop('Col4', axis=1)
df = pd.merge(DF2, DF1.loc[DF1['EID'].ne('G')], on=['EID', 'Col1', 'Col2', 'Col3'], how='left', indicator='New Col')
df['New Col'] = np.where(df['New Col'] == 'left_only', 'Missing in DF1', df['New Col'])
df = df.merge(pd.merge(DF2[['EID', 'Col1', 'Col2']], DF1.loc[DF1['EID'].ne('G'), ['EID', 'Col1', 'Col2']], on=['EID', 'Col1', 'Col2'], how='left', indicator='col1_col2'), on=['EID', 'Col1', 'Col2'], how='left')
a1 = df['New Col'].eq('both') #match
a2 = df['col1_col2'].eq('both') & df['New Col'].eq('Missing in DF1') #Different
df['New Col'] = np.select([a1, a2], ['match', 'Different'], df['New Col'])
df = df.drop(columns=['col1_col2'])
EID Col1 Col2 Col3 New Col
0 A a1 b1 c1 match
1 B a2 b2 c9 Different
2 C a3 b3 c3 Missing in DF1
3 D a4 b4 None Different
Note 1: no iteration.
Note 2: the goal of this solution is to compare DF2 with DF1 so that you can identify all the Matching, Different, and Missing values for all the columns in DF2 that match columns in DF1 (Col1, Col2 & Col3 in this case) for rows with the same EID value (A, B, C & D).
temp_df1 = df1[df2.columns]  # compare only the columns available in df2
joined_df = df2.merge(temp_df1, on='EID')  # default suffixes are '_x' for the left table (df2) and '_y' for the right table (df1)
# getting the columns that need to be compared
cols = list(df2.columns)
cols.remove('EID')
cols_left = [i+'_x' for i in cols]
cols_right = [i+'_y' for i in cols]
# getting back the table
temp_df2 = joined_df[cols_left]
temp_df2.columns = cols
temp_df1 = joined_df[cols_right]
temp_df1.columns = cols
output_df = joined_df[['EID']].copy()
output_df[cols] = temp_df1
filt = (temp_df2 == temp_df1).all(axis=1)
output_df.loc[filt, 'New_Col'] = 'Match'
output_df.loc[~filt, 'New_Col'] = 'Different'
output_df.loc[temp_df2.isna().any(axis=1), 'New_Col'] = 'Missing in df2' # getting missing values in df2
output_df.loc[temp_df1.isna().any(axis=1), 'New_Col'] = 'Missing in df1' # getting missing values in df1
output_df
EID Col1 Col2 Col3 New_Col
0 A a1 b1 c1 Match
1 B a2 b2 c2 Different
2 C NaN b3 c3 Missing in df1
3 D a4 b4 c4 Missing in df2
I am creating a "design of experiments" matrix from a DataFrame that represents the possible choices for each element.
I would like to create a column for each unique combination of elements in a DataFrame, which will represent one experimental set.
Constraint: the elements are not all the same size; each row can have a different number of choices.
Input:
index Column1 Column2 Column3
a a1
b b1 b2 b3
c c1 c2
d d1
Desired Output:
index Column1 Column2 Column3 Column4 Column5 Column6
a a1 a1 a1 a1 a1 a1
b b1 b2 b3 b1 b2 b3
c c1 c1 c1 c2 c2 c2
d d1 d1 d1 d1 d1 d1
I have looked at zipping lists but am hoping to find a more elegant way.
Maybe some itertools action? :-)
from itertools import cycle, islice
import pandas as pd

idx = ['a', 'b', 'c', 'd']
df = pd.DataFrame([['a1', None, None], ['b1', 'b2', 'b3'], ['c1', 'c2', None], ['d1', None, None]],
                  index=idx,
                  columns=['Column1', 'Column2', 'Column3'])

NUM_OF_COLUMNS = 6
result = []
for r in df.values:
    # Filter None or other types of "empty" values you have:
    filtered = [x for x in r if x is not None]
    # Create a row by repeating the elements:
    rep_list = list(islice(cycle(filtered), NUM_OF_COLUMNS))
    result.append(rep_list)

res_df = pd.DataFrame(result,
                      index=idx,
                      columns=['Column' + str(i) for i in range(1, NUM_OF_COLUMNS + 1)])
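Note that NUM_OF_COLUMNS = 6 is hardcoded; here it is the least common multiple of the per-row choice counts (1, 3, 2, 1 → 6), which is what makes every row complete whole cycles together. A hedged sketch computing it (math.lcm needs Python 3.9+):
import math

# LCM of the number of non-empty choices in each row
counts = df.notna().sum(axis=1)
NUM_OF_COLUMNS = math.lcm(*counts)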
Suppose I have a main dataframe
main_df
Cri1 Cri2 Cri3 total
0 A1 A2 A3 4
1 B1 B2 B3 5
2 C1 C2 C3 6
I also have 3 dataframes
df_1
Cri1 Cri2 Cri3 value
0 A1 A2 A3 1
1 B1 B2 B3 2
df_2
Cri1 Cri2 Cri3 value
0 A1 A2 A3 9
1 C1 C2 C3 10
df_3
Cri1 Cri2 Cri3 value
0 B1 B2 B3 15
1 C1 C2 C3 17
What I want is to add the value from each df to the total in main_df, matching rows on the Cri columns,
i.e. main_df will become
main_df
Cri1 Cri2 Cri3 total
0 A1 A2 A3 14
1 B1 B2 B3 22
2 C1 C2 C3 33
Of course I can do it using a for loop, but eventually I want to apply the method to a large amount of data, say 50,000 rows in each dataframe.
Are there other ways to solve it?
Thank you!
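For reference, the sample frames can be reconstructed like this (the question calls the first one main_df; the answer below renames it to df_main):
import pandas as pd

main_df = pd.DataFrame({'Cri1': ['A1', 'B1', 'C1'], 'Cri2': ['A2', 'B2', 'C2'],
                        'Cri3': ['A3', 'B3', 'C3'], 'total': [4, 5, 6]})
df_1 = pd.DataFrame({'Cri1': ['A1', 'B1'], 'Cri2': ['A2', 'B2'],
                     'Cri3': ['A3', 'B3'], 'value': [1, 2]})
df_2 = pd.DataFrame({'Cri1': ['A1', 'C1'], 'Cri2': ['A2', 'C2'],
                     'Cri3': ['A3', 'C3'], 'value': [9, 10]})
df_3 = pd.DataFrame({'Cri1': ['B1', 'C1'], 'Cri2': ['B2', 'C2'],
                     'Cri3': ['B3', 'C3'], 'value': [15, 17]})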
First you should align your numeric column names. In this case:
df_main = main_df.rename(columns={'total': 'value'})
Then you have a couple of options.
concat + groupby
You can concatenate and then perform a groupby with sum:
res = (pd.concat([df_main, df_1, df_2, df_3])
         .groupby(['Cri1', 'Cri2', 'Cri3']).sum()
         .reset_index())
print(res)
Cri1 Cri2 Cri3 value
0 A1 A2 A3 14
1 B1 B2 B3 22
2 C1 C2 C3 33
set_index + reduce / add
Alternatively, you can create a list of dataframes indexed by your criteria columns. Then use functools.reduce with pd.DataFrame.add to sum these dataframes.
from functools import reduce
dfs = [df.set_index(['Cri1', 'Cri2', 'Cri3']) for df in [df_main, df_1, df_2, df_3]]
res = reduce(lambda x, y: x.add(y, fill_value=0), dfs).reset_index()
print(res)
Cri1 Cri2 Cri3 value
0 A1 A2 A3 14.0
1 B1 B2 B3 22.0
2 C1 C2 C3 33.0
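The reduce/add route yields floats because fill_value=0 promotes the dtype; if integer totals are wanted, a small follow-up cast (assuming no NaNs remain):
# cast the summed column back to integers
res['value'] = res['value'].astype(int)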