I am using pandas 1.4.3 and Python 3.9.13.
I am creating some identical data frames as follows:
d = {'col1': [1, 2], 'col2': [3, 4]}
df_1 = pd.DataFrame(data=d)
df_2 = pd.DataFrame(data=d)
df_3 = pd.DataFrame(data=d)
df_4 = pd.DataFrame(data=d)
datasets = [df_1, df_2, df_3, df_4]
Now I am trying to merge them all into a single data frame on col1, so I do the following:
from functools import reduce
df_merged = reduce(lambda left, right: pd.merge(left, right, on=['col1'], how='outer', suffixes=["_x", "_y"]), datasets)
I basically want to keep all the columns and just use suffixes to keep them unique. However, because there are more than two data frames, this ends up producing duplicated column names:
   col1  col2_x  col2_y  col2_x  col2_y
0     1       3       3       3       3
1     2       4       4       4       4
What would be the best way to do such a merge so that no columns are dropped and duplicate names are kept distinct, e.g. by adding incrementing suffixes?
EDIT
At the moment, I am doing it with a loop:
merged = datasets[0]
for i in range(1, len(datasets)):
    merged = pd.merge(merged, datasets[i], how='outer', on=['col1'], suffixes=[None, f"_{i}"])
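For what it's worth, here is a sketch of an alternative (not from the original post): rename each frame's non-key columns with its own suffix before merging, so the merge itself never has to disambiguate:
from functools import reduce
import pandas as pd

d = {'col1': [1, 2], 'col2': [3, 4]}
datasets = [pd.DataFrame(data=d) for _ in range(4)]

# Give every non-key column a per-frame suffix up front
renamed = [
    df.rename(columns={c: f"{c}_{i}" for c in df.columns if c != 'col1'})
    for i, df in enumerate(datasets, start=1)
]
df_merged = reduce(lambda left, right: pd.merge(left, right, on='col1', how='outer'), renamed)
print(df_merged.columns.tolist())  # ['col1', 'col2_1', 'col2_2', 'col2_3', 'col2_4']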
A somewhat cumbersome solution:
import pandas as pd
d = {'col1': [1, 2], 'col2': [3, 4]}
df_1 = pd.DataFrame(data=d)
df_1.attrs['name'] = '1'
df_2 = pd.DataFrame(data=d)
df_2.attrs['name'] = '2'
df_3 = pd.DataFrame(data=d)
df_3.attrs['name'] = '3'
df_4 = pd.DataFrame(data=d)
df_4.attrs['name'] = '4'
datasets = [df_1, df_2, df_3, df_4]
from functools import reduce
def mrg(left, right):
    return pd.merge(left, right, on=['col1'], how='outer',
                    suffixes=["_" + str(left.attrs.get('name')), "_" + str(right.attrs.get('name'))])
df_merged = reduce(mrg, datasets)
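As an aside (a sketch, not part of the original answer): when every frame joins on the same unique key, the outer merge can also be expressed as a column-wise concat on that key as the index, with keys labelling each frame instead of suffixes:
merged = pd.concat(
    [df.set_index('col1') for df in datasets],
    axis=1,
    keys=[f"df{i}" for i in range(1, len(datasets) + 1)],  # hypothetical labels
)
# Flatten the resulting MultiIndex columns back into suffixed names
merged.columns = [f"{col}_{key}" for key, col in merged.columns]
merged = merged.reset_index()
print(merged.columns.tolist())  # ['col1', 'col2_df1', 'col2_df2', 'col2_df3', 'col2_df4']
Note this assumes col1 values are unique within each frame; with duplicate keys, merge and concat behave differently.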
Related
I have two large DataFrames that I don't want to make copies of, but I want to apply the same change to both. How can I do this properly? For example, the following is similar to what I want to do, but on a smaller scale: it only rebinds the temporary variable df to the result for each DataFrame, whereas I want the DataFrames themselves to be changed:
import pandas as pd
df1 = pd.DataFrame({'a':[1,2,3]})
df2 = pd.DataFrame({'a':[0,1,5,7]})
for df in [df1, df2]:
    df = df[df['a'] < 3]
We can use query with inplace=True:
df1 = pd.DataFrame({'a':[1,2,3]})
df2 = pd.DataFrame({'a':[0,1,5,7]})
for df in [df1, df2]:
    df.query('a<3', inplace=True)
df1
a
0 1
1 2
df2
a
0 0
1 1
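(Why this works where the original loop does not: df = df[df['a'] < 3] rebinds the local name df to a brand-new object, leaving df1 and df2 untouched, whereas query('a<3', inplace=True) mutates the frame the loop variable currently refers to.)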
Don't think this is the best solution, but should do the job.
import pandas as pd
df1 = pd.DataFrame({'a':[1,2,3]})
df2 = pd.DataFrame({'a':[0,1,5,7]})
dfs = [df1, df2]
for i, df in enumerate(dfs):
    dfs[i] = df[df['a'] < 3]
dfs[0]
a
0 1
1 2
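One caveat worth noting (an aside, not from the original answer): this only rebinds the list elements, so the names df1 and df2 still refer to the original, unfiltered frames:
print(len(df1))  # still 3; only dfs[0] holds the filtered frame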
I have two data frames that I would like to compare for equality in a row-wise manner. I am interested in computing the number of rows that have the same values for non-joined attributes.
For example,
import pandas as pd
df1 = pd.DataFrame({'a': [1,2,3,5], 'b': [2,3,4,6], 'c':[60,20,40,30], 'd':[50,90,10,30]})
df2 = pd.DataFrame({'a': [1,2,3,5], 'b': [2,3,4,6], 'c':[60,20,40,30], 'd':[50,90,40,40]})
I will be joining these two data frames on column a and b. There are two rows (first two) that have the same values for c and d in both the data frames.
I am currently using the following approach, where I first join the two data frames and then check each row's values for equality.
df = df1.merge(df2, on=['a','b'])
cols1 = [c for c in df.columns.tolist() if c.endswith("_x")]
cols2 = [c for c in df.columns.tolist() if c.endswith("_y")]
num_rows_equal = 0
for index, row in df.iterrows():
    not_equal = False
    for col1, col2 in zip(cols1, cols2):
        if row[col1] != row[col2]:
            not_equal = True
            break
    if not not_equal:  # row values are equal
        num_rows_equal += 1
num_rows_equal
Is there a more efficient (pythonic) way to achieve the same result?
A shorter way of achieving that:
import pandas as pd
df1 = pd.DataFrame({'a': [1,2,3,5], 'b': [2,3,4,6], 'c':[60,20,40,30], 'd':[50,90,10,30]})
df2 = pd.DataFrame({'a': [1,2,3,5], 'b': [2,3,4,6], 'c':[60,20,40,30], 'd':[50,90,40,40]})
df = df1.merge(df2, on=['a','b'])
comparison_cols = [c.removesuffix('_x') for c in df.columns.tolist() if c.endswith('_x')]  # removesuffix (Python 3.9+) avoids str.strip's character-set pitfall
num_rows_equal = (df1[comparison_cols][df1[comparison_cols] == df2[comparison_cols]].isna().sum(axis=1) == 0).sum()
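A vectorized alternative (a sketch, not part of the original answer) that works directly on the merged frame, so it does not depend on df1 and df2 sharing an aligned index:
cols1 = [c for c in df.columns if c.endswith('_x')]
cols2 = [c for c in df.columns if c.endswith('_y')]
num_rows_equal = int((df[cols1].to_numpy() == df[cols2].to_numpy()).all(axis=1).sum())
print(num_rows_equal)  # 2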
Use pandas merge_ordered, merging with how='inner'. From there, you can get the dataframe's shape and, by extension, the number of rows.
df_r = pd.merge_ordered(df1, df2, how='inner')
a b c d
0 1 2 60 50
1 2 3 20 90
no_of_rows = df_r.shape[0]
#print(no_of_rows)
#2
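For what it's worth (an aside, not part of the original answer), a plain inner merge on all shared columns gives the same count, since a row only survives when every column matches:
no_of_rows = len(pd.merge(df1, df2))  # inner join on all common columns by default
print(no_of_rows)  # 2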
I have 2 Excel CSV files, loaded as below:
df1 = {'Transaction_Name':['SC-001_Homepage', 'SC-002_Homepage', 'SC-001_Signinlink'], 'Count': [1, 0, 2]}
df1 = pd.DataFrame(df1, columns=df1.keys())
df2 = {'Transaction_Name':['SC-001_Homepage', 'SC-002_Homepage', 'SC-001_Signinlink', 'SC-002_Signinlink'], 'Count': [2, 1, 2, 1]}
df2 = pd.DataFrame(df2, columns=df2.keys())
In df2, I can see there is one extra transaction, 'SC-002_Signinlink', which is not in df1. Can someone help me find only those extra transactions and print them to a file?
So far I have done the following to get the transactions:
merged_df = pd.merge(df1, df2, on = 'Transaction_Name', suffixes=('_df1', '_df2'), how='outer')
Use indicator=True in your merge:
df1 = {'Transaction_Name':['SC-001_Homepage', 'SC-002_Homepage', 'SC-001_Signinlink'], 'Count': [1, 0, 2]}
df1 = pd.DataFrame(df1, columns=df1.keys())
df2 = {'Transaction_Name':['SC-001_Homepage', 'SC-002_Homepage', 'SC-001_Signinlink', 'SC-002_Signinlink'], 'Count': [2, 1, 2, 1]}
df2 = pd.DataFrame(df2, columns=df2.keys())
df = pd.merge(df1, df2, on='Transaction_Name', how='outer', indicator=True)
# As we do not merge on Count, we have 2 count columns (Count_x & Count_y)
# So we create a Count column which is the addition of the 2
df.Count_x = df.Count_x.fillna(0)
df.Count_y = df.Count_y.fillna(0)
print(df.dtypes)
df['Count'] = df.Count_x + df.Count_y
df = df.loc[df._merge != 'both', ['Transaction_Name', 'Count']]
print(df)
# Missing transactions list :
print(df.Transaction_Name.values.tolist())
output for print(df.dtypes)
Transaction_Name object
Count_x float64
Count_y int64
_merge category
dtype: object
output for print(df)
Transaction_Name Count
3 SC-002_Signinlink 1.0
output for print(df.Transaction_Name.values.tolist())
['SC-002_Signinlink']
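To finish the "print it to a file" part of the question, here is a sketch (the file name is hypothetical) that selects the right-only rows straight from the indicator column and writes them out:
merged = pd.merge(df1, df2, on='Transaction_Name', how='outer', indicator=True)
extra = (merged.loc[merged['_merge'] == 'right_only', ['Transaction_Name', 'Count_y']]
               .rename(columns={'Count_y': 'Count'}))
extra.to_csv('extra_transactions.csv', index=False)  # hypothetical output path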
I have an object whose type is Pandas, and print(object) gives the output below:
print(type(recomen_total))
print(recomen_total)
Output is
<class 'pandas.core.frame.Pandas'>
Pandas(Index=12, instrument_1='XXXXXX', instrument_2='XXXX', trade_strategy='XXX', earliest_timestamp='2016-08-02T10:00:00+0530', latest_timestamp='2016-08-02T10:00:00+0530', xy_signal_count=1)
I want to convert this object to a pd.DataFrame. How can I do it?
I tried pd.DataFrame(object) and from_dict too; they throw errors.
Interestingly, it will not convert to a DataFrame directly, but to a Series. Once it is converted to a Series, use the to_frame method of Series to convert it to a DataFrame:
import pandas as pd
df = pd.DataFrame({'col1': [1, 2], 'col2': [0.1, 0.2]},
                  index=['a', 'b'])
for row in df.itertuples():
    print(pd.Series(row).to_frame())
Hope this helps!!
EDIT
In case you want to keep the column names, use the _asdict() method like this:
import pandas as pd
df = pd.DataFrame({'col1': [1, 2], 'col2': [0.1, 0.2]},
                  index=['a', 'b'])
for row in df.itertuples():
    d = dict(row._asdict())
    print(pd.Series(d).to_frame())
Output:
0
Index a
col1 1
col2 0.1
0
Index b
col1 2
col2 0.2
To create a new DataFrame from itertuples namedtuples, you can use list() or Series too:
import pandas as pd
# source DataFrame
df = pd.DataFrame({'a': [1,2], 'b':[3,4]})
# empty DataFrame
df_new_fromAppend = pd.DataFrame(columns=['x','y'], data=None)
for r in df.itertuples():
    # create new DataFrame from itertuples() via list() ([1:] for skipping the index):
    df_new_fromList = pd.DataFrame([list(r)[1:]], columns=['c', 'd'])
    # or create new DataFrame from itertuples() via Series (drop(0) to remove index, T to transpose column to row)
    df_new_fromSeries = pd.DataFrame(pd.Series(r).drop(0)).T
    # or use .loc to insert the row into an existing DataFrame ([1:] for skipping the index):
    df_new_fromAppend.loc[df_new_fromAppend.shape[0]] = list(r)[1:]
print('df_new_fromList:')
print(df_new_fromList, '\n')
print('df_new_fromSeries:')
print(df_new_fromSeries, '\n')
print('df_new_fromAppend:')
print(df_new_fromAppend, '\n')
Output:
df_new_fromList:
c d
0 2 4
df_new_fromSeries:
1 2
0 2 4
df_new_fromAppend:
x y
0 1 3
1 2 4
To omit the index, use the param index=False (but I mostly need the index for the iteration):
for r in df.itertuples(index=False):
    # the [1:] needn't be used, for example:
    df_new_fromAppend.loc[df_new_fromAppend.shape[0]] = list(r)
The following works for me:
import pandas as pd
df = pd.DataFrame({'col1': [1, 2], 'col2': [0.1, 0.2]}, index=['a', 'b'])
for row in df.itertuples():
    row_as_df = pd.DataFrame.from_records([row], columns=row._fields)
    print(row_as_df)
The result is:
Index col1 col2
0 a 1 0.1
Index col1 col2
0 b 2 0.2
Sadly, AFAIU, there's no simple way to keep the column names without explicitly using "protected attributes" such as _fields.
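(An aside, not from the original answer: for collections.namedtuple, _fields and _asdict() are documented public API; the leading underscore exists only to avoid collisions with user-defined field names.)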
With some tweaks to @Igor's answer,
I ended up with this satisfactory code, which preserves the column names and uses as little pandas code as possible.
import pandas as pd
df = pd.DataFrame({'col1': [1, 2], 'col2': [0.1, 0.2]})
# Or initialize another dataframe above
# Get list of column names
column_names = df.columns.values.tolist()
filtered_rows = []
for row in df.itertuples(index=False):
    # Some code logic to filter rows
    filtered_rows.append(row)
# Convert pandas.core.frame.Pandas rows to a pandas.core.frame.DataFrame
# Combine filtered rows into a single dataframe
concatenated_df = pd.DataFrame.from_records(filtered_rows, columns=column_names)
concatenated_df.to_csv("path_to_csv", index=False)
The result is a CSV containing:
col1,col2
1,0.1
2,0.2
To convert a list of objects returned by Pandas .itertuples to a DataFrame, while preserving the column names:
# Example source DF
data = [['cheetah', 120], ['human', 44.72], ['dragonfly', 54]]
source_df = pd.DataFrame(data, columns=['animal', 'top_speed'])
      animal  top_speed
0    cheetah     120.00
1      human      44.72
2  dragonfly      54.00
Since pandas does not recommend building DataFrames by adding single rows in a for loop, we will iterate and build the DataFrame at the end:
WOW_THAT_IS_FAST = 50
list_ = list()
for animal in source_df.itertuples(index=False, name='animal'):
    if animal.top_speed > WOW_THAT_IS_FAST:
        list_.append(animal)
Now build the DF in a single command and without manually recreating the column names.
filtered_df = pd.DataFrame(list_)
      animal  top_speed
0    cheetah     120.00
1  dragonfly      54.00
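Of course (a side note, not part of the original answer), for a simple predicate like this, boolean indexing avoids the loop entirely:
filtered_df = source_df[source_df['top_speed'] > WOW_THAT_IS_FAST]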
What's the Python/pandas way to merge multilevel dataframes on column "t" under "cell 1" and "cell 2"?
import pandas as pd
import numpy as np
df1 = pd.DataFrame(np.arange(4).reshape(2, 2),
                   columns=[['cell 1'] * 2, ['t', 'sb']])
df2 = pd.DataFrame([[1, 5], [2, 6]],
                   columns=[['cell 2'] * 2, ['t', 'sb']])
Now when I try to merge on "t", the Python REPL errors out:
ddf = pd.merge(df1, df2, on='t', how='outer')
What's a good way to handle this?
pd.merge(df1, df2, left_on=[('cell 1', 't')], right_on=[('cell 2', 't')])
One solution is to drop the top level (i.e. 'cell 1' and 'cell 2') from the dataframes and then merge.
If you want, you can save these columns to reinstate them after the merge.
c1 = df1.columns
c2 = df2.columns
df1.columns = df1.columns.droplevel()
df2.columns = df2.columns.droplevel()
df_merged = df1.merge(df2, on='t', how='outer', suffixes=['_df1', '_df2'])
df1.columns = c1
df2.columns = c2
>>> df_merged
   t  sb_df1  sb_df2
0  0     1.0     NaN
1  2     3.0     6.0
2  1     NaN     5.0