I have multiple dataframes, all with different column names and lengths. For example, df1 has columns ['c1', 'c2', 'c3'], df2 has columns ['d1', 'd2', 'd3', 'd4'], and so on.
I want to concatenate all the dfs one under another. I don't care about preserving the column names; the resulting df should simply contain all the values of df1, df2, and so on.
Right now I'm doing pd.concat([df1, df2], axis=0), which puts the df1 and df2 columns side by side in the result. I want them one under another.
If the column names aren't important, we can also take the numpy values of the dataframes via DataFrame.values and concatenate them using pd.concat(), like below:
pd.concat([pd.DataFrame(dfi.values) for dfi in [df1, df2]], ignore_index=True)
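A minimal runnable sketch of that approach, with two small made-up frames (the data here is purely illustrative):
import pandas as pd
# hypothetical frames with different column names and widths
df1 = pd.DataFrame({'c1': [1, 2], 'c2': [3, 4], 'c3': [5, 6]})
df2 = pd.DataFrame({'d1': [7], 'd2': [8], 'd3': [9], 'd4': [10]})
# Wrapping each underlying array in a fresh DataFrame yields positional
# column labels 0..n-1, so the frames stack one under another instead of
# aligning by column name.
out = pd.concat([pd.DataFrame(d.values) for d in [df1, df2]],
                ignore_index=True)
print(out)
#    0  1  2     3
# 0  1  3  5   NaN
# 1  2  4  6   NaN
# 2  7  8  9  10.0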
Since your DataFrames can have different numbers of columns, rename the labels to their integer positions so that they align underneath one another in the concat. The result will have an Int64Index on the columns, up to the width of the widest DataFrame you pass to the concat.
Sample Data
import pandas as pd
import numpy as np
np.random.seed(123)
df1 = pd.DataFrame(np.random.choice(['foo', 'bar'], (2, 3)),
                   columns=['c1', 'c2', 'c3'])
df2 = pd.DataFrame(np.random.randint(11, 20, (3, 4)),
                   columns=['d1', 'd2', 'd3', 'd4'])
Code
df = pd.concat([x.rename(columns=dict(zip(x.columns, range(x.shape[1]))))
                for x in [df1, df2]],
               ignore_index=True)
#      0    1    2     3
# 0  foo  bar  foo   NaN
# 1  foo  foo  foo   NaN
# 2   17   12   14  17.0
# 3   12   11   12  11.0
# 4   11   14   15  11.0
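Equivalently, if you'd rather not build the rename mapping by hand, DataFrame.set_axis can assign the positional labels directly; a sketch of the same idea:
# relabel each frame's columns with their integer positions, then stack
df = pd.concat([x.set_axis(range(x.shape[1]), axis=1)
                for x in [df1, df2]],
               ignore_index=True)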
I'm merging some dataframes which have a time index.
import pandas as pd
df1 = pd.DataFrame(['a', 'b', 'c'],
                   columns=pd.MultiIndex.from_product([['target'], ['key']]),
                   index=[
                       '2022-04-15 20:20:20.000000',
                       '2022-04-15 20:20:21.000000',
                       '2022-04-15 20:20:22.000000'])
df2 = pd.DataFrame(['a2', 'b2', 'c2', 'd2', 'e2'],
                   columns=pd.MultiIndex.from_product([['feature2'], ['keys']]),
                   index=[
                       '2022-04-15 20:20:20.100000',
                       '2022-04-15 20:20:20.500000',
                       '2022-04-15 20:20:20.900000',
                       '2022-04-15 20:20:21.000000',
                       '2022-04-15 20:20:21.100000'])
df3 = pd.DataFrame(['a3', 'b3', 'c3', 'd3', 'e3'],
                   columns=pd.MultiIndex.from_product([['feature3'], ['keys']]),
                   index=[
                       '2022-04-15 20:20:19.000000',
                       '2022-04-15 20:20:19.200000',
                       '2022-04-15 20:20:20.000000',
                       '2022-04-15 20:20:20.200000',
                       '2022-04-15 20:20:23.100000'])
then I use this merge procedure:
def merge(dfs: list[pd.DataFrame], targetColumn: 'str|tuple[str]'):
    from functools import reduce
    if len(dfs) == 0:
        return None
    if len(dfs) == 1:
        return dfs[0]
    for df in dfs:
        df.index = pd.to_datetime(df.index)
    merged = reduce(
        lambda left, right: pd.merge(
            left,
            right,
            how='outer',
            left_index=True,
            right_index=True),
        dfs)
    for col in merged.columns:
        if col != targetColumn:
            merged[col] = merged[col].fillna(method='ffill')
    return merged[merged[targetColumn].notna()]
like this:
merged = merge([df1, df2, df3], targetColumn=('target', 'key'))
which produces this:
                    target feature2 feature3
                       key     keys     keys
2022-04-15 20:20:20      a      NaN       c3
2022-04-15 20:20:21      b       d2       d3
2022-04-15 20:20:22      c       e2       d3
And it all works great. The problem is efficiency: notice that in the merge() procedure I use reduce and an outer merge to join the dataframes together, which can create a HUGE interim dataframe that then gets filtered down. But what if my PC doesn't have enough RAM to hold that huge dataframe in memory? Well, that's the problem I'm trying to avoid.
I'm wondering if there's a way to avoid expanding the data out into a huge dataframe while merging.
Of course a regular old merge isn't sufficient because it only merges on exactly matching indexes rather than the latest temporal index before the target variable's observation:
df1.merge(df2, how='left', left_index=True, right_index=True)
Has this kind of thing been solved efficiently? Seems like a common data science issue, since no one wants to leak future information into their models, and everyone has various inputs to merge together...
You're in luck: pandas.merge_asof does exactly what you need!
We use the default direction='backward' argument:
A “backward” search selects the last row in the right DataFrame whose
‘on’ key is less than or equal to the left’s key.
Using your three example DataFrames:
import pandas as pd
from functools import reduce

# Convert all indexes to datetime
for df in [df1, df2, df3]:
    df.index = pd.to_datetime(df.index)

# Perform the as-of merges
res = reduce(lambda left, right:
                 pd.merge_asof(left, right, left_index=True, right_index=True),
             [df1, df2, df3])
print(res)
                    target feature2 feature3
                       key     keys     keys
2022-04-15 20:20:20      a      NaN       c3
2022-04-15 20:20:21      b       d2       d3
2022-04-15 20:20:22      c       e2       d3
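If you also want to cap how stale a carried-forward feature may be, merge_asof accepts a tolerance parameter; a sketch assuming a one-second staleness limit:
# Only match right-hand rows within 1 second before the left key;
# older observations are left as NaN instead of being pulled forward.
res = reduce(lambda left, right:
                 pd.merge_asof(left, right, left_index=True, right_index=True,
                               tolerance=pd.Timedelta('1s')),
             [df1, df2, df3])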
Here's some code that works for your example. I'm not sure about more general cases of multi-indexed columns, but in any event it contains the basic ideas for merging on a single temporal index.
merged = df1.copy(deep=True)
for df in [df2, df3]:
    # position in df of the last row at or before each timestamp in merged
    idxNew = df.index.get_indexer(merged.index, method='pad')
    # keep only the timestamps that actually found an earlier match
    idxMerged = [i for i, x in enumerate(idxNew) if x != -1]
    idxNew = [x for x in idxNew if x != -1]
    n = len(merged.columns)
    merged[df.columns] = None
    # align the matched rows of df to merged's index, then place their values
    merged.iloc[idxMerged, n:] = df.iloc[idxNew, :].set_index(merged.index[idxMerged])
print(merged)
Output:
                           target feature2 feature3
                              key     keys     keys
2022-04-15 20:20:20.000000      a     None       c3
2022-04-15 20:20:21.000000      b       d2       d3
2022-04-15 20:20:22.000000      c       e2       d3
I have trouble with some pandas dataframes.
It's very simple: I have 4 columns, and I want to reshape them into 2...
For 'practical' reasons, I don't want to refer to the columns by their header names; I need to select them by index position.
I have:
df = pd.DataFrame({'a': [1,2,3],'b': [4,5,6],'c': [7,8,9],'d':[10,11,12]})
I want as a result:
df_res = pd.DataFrame({'NewName1': [1,2,3,4,5,6],'NewName2': [7,8,9,10,11,12]})
(in fact, the name NewName1 doesn't matter; it can stay 'a' or whatever the name is...)
I tried with for loops, append, and concat, but couldn't figure it out...
Any suggestions ?
Thanks for your help !
Bina
You can extract the desired columns and create a new pandas.DataFrame like so:
import numpy as np
import pandas as pd
df = pd.DataFrame({'a': [1,2,3],'b': [4,5,6],'c': [7,8,9],'d':[10,11,12]})
first_col = np.concatenate((df.a.to_numpy(), df.b.to_numpy()))
second_col = np.concatenate((df.c.to_numpy(), df.d.to_numpy()))
df2 = pd.DataFrame({"NewName1": first_col, "NewName2": second_col})
>>> df2
   NewName1  NewName2
0         1         7
1         2         8
2         3         9
3         4        10
4         5        11
5         6        12
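The same result can be had without dropping to numpy, by concatenating the column Series directly in pandas; a small variant of the above:
# stack the column pairs with pandas; ignore_index renumbers the rows 0..5
first_col = pd.concat([df['a'], df['b']], ignore_index=True)
second_col = pd.concat([df['c'], df['d']], ignore_index=True)
df2 = pd.DataFrame({"NewName1": first_col, "NewName2": second_col})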
This is probably not the most elegant solution, but I would isolate the two dataframes and then concatenate them. I needed to rename the column axis so that the four columns could be aligned correctly.
import pandas as pd
df = pd.DataFrame({'a': [1,2,3],'b': [4,5,6],'c': [7,8,9],'d':[10,11,12]})
af = df[['a', 'c']]
bf = df[['b', 'd']]
frames = (
    af.rename({'a': 'NewName1', 'c': 'NewName2'}, axis=1),
    bf.rename({'b': 'NewName1', 'd': 'NewName2'}, axis=1),
)
out = pd.concat(frames, ignore_index=True)
[EDIT] Replying to the comment.
I'm not that familiar with indexing, but this might be one solution. You could avoid column names by using .iloc. Replace the af and bf frames above with these lines.
af = df.iloc[:, ::2]
bf = df.iloc[:, 1::2]
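Building on that, a fully position-based sketch with no column names at all is possible via a numpy reshape, assuming (as in the question) that the first half of the columns feeds NewName1 and the second half feeds NewName2:
import numpy as np
arr = df.to_numpy()  # shape (3, 4): columns a, b, c, d
# transpose to (4, 3), fold into two halves, transpose back:
# each output column is two input columns stacked end to end
df_res = pd.DataFrame(arr.T.reshape(2, -1).T,
                      columns=['NewName1', 'NewName2'])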
I have a dataframe df and a column df['table'] such that each item in df['table'] is another dataframe with the same headers/number of columns. I was wondering if there's a way to do a groupby like this:
Original dataframe:
name   table
Bob    Pandas df1
Joe    Pandas df2
Bob    Pandas df3
Bob    Pandas df4
Emily  Pandas df5
After groupby:
name   table
Bob    Pandas df containing the appended df1, df3, and df4
Joe    Pandas df2
Emily  Pandas df5
I found this code snippet to do a groupby and lambda for strings in a dataframe, but haven't been able to figure out how to append entire dataframes in a groupby.
df['table'] = df.groupby(['name'])['table'].transform(lambda x : ' '.join(x))
I've also tried df['table'] = df.groupby(['name'])['HTML'].apply(list), but that gives me a df['table'] of all NaN.
Thanks for your help!!
Given 3 dataframes
import pandas as pd
dfa = pd.DataFrame({'a': [1, 2, 3]})
dfb = pd.DataFrame({'a': ['a', 'b', 'c']})
dfc = pd.DataFrame({'a': ['pie', 'steak', 'milk']})
Given another dataframe, with dataframes as the values in the table column
df = pd.DataFrame({'name': ['Bob', 'Joe', 'Bob', 'Bob', 'Emily'], 'table': [dfa, dfa, dfb, dfc, dfb]})
# print the type for the first value in the table column, to confirm it's a dataframe
print(type(df.loc[0, 'table']))
[out]:
<class 'pandas.core.frame.DataFrame'>
Each group of dataframes can be combined into a single dataframe by using .groupby to aggregate a list for each group, then combining the dataframes in each list with pd.concat
# if there is only one column, or if there are multiple columns of dataframes to aggregate
dfg = df.groupby('name').agg(lambda x: pd.concat(list(x)).reset_index(drop=True))
# display(dfg.loc['Bob', 'table'])
       a
0      1
1      2
2      3
3      a
4      b
5      c
6    pie
7  steak
8   milk
# to specify a single column, or specify multiple columns, from many columns
dfg = df.groupby('name')[['table']].agg(lambda x: pd.concat(list(x)).reset_index(drop=True))
Not a duplicate
Originally, I had marked this question as a duplicate of How to group dataframe rows into list in pandas groupby, thinking the dataframes could be aggregated into a list, and then combined with pd.concat.
df.groupby('name')['table'].apply(list)
df.groupby('name').agg(list)
df.groupby('name')['table'].agg(list)
df.groupby('name').agg({'table': list})
df.groupby('name').agg(lambda x: list(x))
However, these all result in a StopIteration error when there are dataframes to aggregate.
Here, let's create a dataframe with dataframes as column values:
First, I start with three dataframes:
import pandas as pd
# create dataframes that we will assign to Bob and Joe; notice the b's and j's:
df1 = pd.DataFrame({'var1': [12, 34, -4, None], 'letter': ['b1', 'b2', 'b3', 'b4']})
df2 = pd.DataFrame({'var1': [1, 23, 44, 0], 'letter': ['j1', 'j2', 'j3', 'j4']})
df3 = pd.DataFrame({'var1': [22, -3, 7, 78], 'letter': ['b5', 'b6', 'b7', 'b8']})

# let's make a list of dictionaries:
list_of_dfs = [
    {'name': 'Bob', 'table': df1},
    {'name': 'Joe', 'table': df2},
    {'name': 'Bob', 'table': df3},
]

# construct the main dataframe:
original_df = pd.DataFrame(list_of_dfs)
print(original_df)
original_df.shape  # shows (3, 2)
Now that we have the original dataframe created as the input, we can produce the resulting new dataframe. In doing so, we use groupby(), agg(), and pd.concat(). We also reset the index.
new_df = original_df.groupby('name')['table'].agg(lambda series: pd.concat(series.tolist())).reset_index()
print(new_df)
#check that Bob's table is now a concatenated table of df1 and df3:
new_df[new_df['name']=='Bob']['table'][0]
The output to the last line of code is:
   var1 letter
0  12.0     b1
1  34.0     b2
2  -4.0     b3
3   NaN     b4
0  22.0     b5
1  -3.0     b6
2   7.0     b7
3  78.0     b8
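If the repeated 0-3 index inside Bob's combined table is unwanted, pass ignore_index=True to the pd.concat inside the lambda; a small variant of the line above:
# ignore_index renumbers each group's combined table 0..n-1
new_df = original_df.groupby('name')['table'].agg(
    lambda series: pd.concat(series.tolist(), ignore_index=True)).reset_index()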
Given two dataframes, df1 and df2, I want to take the last column of df2 and add it to df1 based on column 'a' they both have.
That is, for every row in df2, if df1['a'] contains it, I want to add it to the new column. For the rows of df1['a'] that aren't in df2['a'], we set NaN. If there is a value in df2['a'] that isn't in df1['a'], we ignore it.
Additionally, while adding the column, I was hoping to update df1['b'] and df1['c'] to the values of df2['b'] and df2['c'].
For the first part, this is the best I've gotten:
df1 = df1.merge(df2, how='outer', on='a')
df1 = df1.drop_duplicates('a')
This needlessly creates duplicates, and it doesn't even handle the update of df1['b'] and df1['c'].
Try using a left join:
import pandas as pd
df1 = pd.DataFrame({'a':[1,2,3],'b':[4,5,6],'c':[7,8,9]})
df2 = pd.DataFrame({'a':[2,3,4],'b':[5,6,7],'c':[8,9,10],'new_column_from_df2':[11,12,13]})
df1['a'].to_frame().merge(df2, how='left', on='a')
Output:
Out[190]:
   a    b    c  new_column_from_df2
0  1  NaN  NaN                  NaN
1  2  5.0  8.0                 11.0
2  3  6.0  9.0                 12.0
Note the last row of df2 being ignored/excluded because it is not in df1['a']. Columns 'b' and 'c' are "updated" with df2 values.
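If you would rather keep df1's original 'b' and 'c' values where df2 has no match (instead of NaN), one sketch is to merge with suffixes and fill the gaps afterwards; the '_new' suffix is just an illustrative choice:
# left-join df2 onto df1, keeping df1's b/c and adding df2's as b_new/c_new
merged = df1.merge(df2, on='a', how='left', suffixes=('', '_new'))
for col in ['b', 'c']:
    # prefer df2's value where it matched, fall back to df1's original
    merged[col] = merged[col + '_new'].fillna(merged[col])
merged = merged.drop(columns=['b_new', 'c_new'])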
I'm searching and haven't found an answer to this question, can you perform a merge of pandas dataframes using OR logic? Basically, the equivalent of a SQL merge using "where t1.A = t2.A OR t1.A = t2.B".
I have a situation where I am pulling information from one database into a dataframe (df1) and I need to merge it with information from another database, which I pulled into another dataframe (df2), merging based on a single column (col1). If these always used the same value when they matched, it would be very straightforward. The situation I have is that sometimes they match and sometimes they use a synonym. There is a third database that has a table that provides a lookup between synonyms for this data entity (col1 and col1_alias), which could be pulled into a third dataframe (df3). What I am looking to do is merge the columns I need from df1 and the columns I need from df2.
As stated above, in cases where df1.col1 and df2.col1 match, this would work...
df = df1.merge(df2, on='col1', how='left')
However, they don't always have the same value, and sometimes they use synonyms. I thought about creating df3 based on when df3.col1 was in df1.col1 OR df3.col1_alias was in df1.col1; then creating a single list of values from df3.col1 and df3.col1_alias (list1) and selecting df2 based on df2.col1 being in list1. That would give me the rows from df2 I need, but it still wouldn't put me in a position to merge df1 and df2 matching the appropriate rows. I think if there is an OR merge option, I can step through this and make it work, but all of the following threw a syntax error:
df = df1.merge((df3, left_on='col1', right_on='col1', how='left')|(df3, left_on='col1', right_on='col1_alias', how='left'))
and
df = df1.merge(df3, (left_on='col1', right_on='col1')|(left_on='col1', right_on='col1_alias'), how='left')
and
df = df1.merge(df3, left_on='col1', right_on='col1'|right_on='col1_alias', how='left')
and several other variations. Any guidance on how to perform an OR merge or suggestions on a completely different approach to merging df1 and df2 using the synonyms in two columns in df3?
I think I would do this as two merges:
In [11]: df = pd.DataFrame([[1, 2], [3, 4], [5, 6]], columns=["A", "B"])
In [12]: df2 = pd.DataFrame([[1, 7], [2, 8], [4, 9]], columns=["C", "D"])
In [13]: res = df.merge(df2, left_on="B", right_on="C", how="left")
In [14]: res.update(df.merge(df2, left_on="A", right_on="C", how="left"))
In [15]: res
Out[15]:
   A  B    C    D
0  1  2  1.0  7.0
1  3  4  4.0  9.0
2  5  6  NaN  NaN
As you can see this picks A = 1 -> D = 7 rather than B = 2 -> D = 8.
Note: For more extensibility (matching different columns) it might make sense to pull out a single column, although they're both the same in this example:
In [21]: res = df.merge(df2, left_on="B", right_on="C", how="left")["C"]
In [22]: res.update(df.merge(df2, left_on="A", right_on="C", how="left")["C"])
In [23]: res
Out[23]:
0    1.0
1    4.0
2    NaN
Name: C, dtype: float64
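Applied to the original synonym setup (df1 and df2 share 'col1', and df3 maps 'col1' to 'col1_alias'), the two-pass idea might look like the sketch below. It assumes df1, df2, and df3 share no column names other than the keys, and at most one alias per value, so each left merge preserves df1's row order:
# pass 1: direct matches on col1
direct = df1.merge(df2, on='col1', how='left')
# pass 2: look up the alias, then match df2 on it
alias = (df1.merge(df3, on='col1', how='left')
            .merge(df2.rename(columns={'col1': 'col1_alias'}),
                   on='col1_alias', how='left'))
# fill the direct result's gaps from the alias-based match
for col in df2.columns.drop('col1'):
    direct[col] = direct[col].fillna(alias[col])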
# will this work?
df = pd.concat([df1.merge(df3, left_on='col1', right_on='col1', how='left'),
                df1.merge(df3, left_on='col1', right_on='col1_alias', how='left')])