Input:
Expected Output:
LEVEL LEVEL-1 LEVEL-2 LEVEL-3 LEVEL-4 LEVEL-5 LEVEL-6 Value Result Explanation
0 1 A01 NaN NaN NaN NaN NaN 10 550 = I3 + H2
1 2 NaN A011 NaN NaN NaN NaN 20 540 = I4 + H3
2 3 NaN NaN AO111 NaN NaN NaN 30 520 = I5 + H4
3 4 NaN NaN NaN A01111 NaN NaN 40 490 = I6 + I9 + H5
4 5 NaN NaN NaN NaN AO11111 NaN 50 180 = I8 + I7 + H6
5 6 NaN NaN NaN NaN NaN AO111111 60 60 = H7
6 6 NaN NaN NaN NaN NaN AO111112 70 70 = H8
7 5 NaN NaN NaN NaN AO11112 NaN 80 270 = I11+I10+H9
8 6 NaN NaN NaN NaN NaN AO111121 90 90 = H10
9 6 NaN NaN NaN NaN NaN AO111122 100 100 = H11
Explanation:
I have to compute the Result column on the basis of the Explanation column, which encodes a tree structure: each node's Result is its own Value plus the sum of its children's Results. For example, AO111121 and AO111122 are children of their immediate parent AO11112, so AO11112's Result is 90 + 100 + 80 = 270.
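For reference, the input frame can be reconstructed like this (a sketch based on the table above; each row carries its node label in the LEVEL-n column that matches its LEVEL, and NaN everywhere else):
import numpy as np
import pandas as pd

levels = [1, 2, 3, 4, 5, 6, 6, 5, 6, 6]
nodes = ["A01", "A011", "AO111", "A01111", "AO11111",
         "AO111111", "AO111112", "AO11112", "AO111121", "AO111122"]
df = pd.DataFrame({"LEVEL": levels, "Value": range(10, 101, 10)})
for n in range(1, 7):
    df[f"LEVEL-{n}"] = [node if lvl == n else np.nan
                        for node, lvl in zip(nodes, levels)]
df = df[["LEVEL"] + [f"LEVEL-{n}" for n in range(1, 7)] + ["Value"]]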
You could do
# Consolidating nodes and finding parents
df["Node"] = df["LEVEL-1"]
for level in range(2, 7):
    col, last_col = df[f"LEVEL-{level}"], df[f"LEVEL-{level - 1}"]
    df.loc[col.notna(), "Node"] = col
    df.loc[col.notna(), "Parent"] = last_col.ffill()
df = df.drop(columns=[col for col in df.columns if col.startswith("LEVEL-")])
# Identifying the children of each node (kept as sets in a "Childs" column)
df = df.merge(
    df.Node.groupby(df.Parent).apply(set).rename("Childs"),
    left_on="Node", right_on="Parent", how="left"
)
# Adding up results from the bottom level upwards
def result(childs):
    return df.loc[df.Node.isin(childs), "Result"].sum()
df["Result"] = df.Value
for level in range(5, 0, -1):
    add_results = df.loc[df.LEVEL.eq(level), "Childs"].map(result)
    df.loc[df.LEVEL.eq(level), "Result"] += add_results
Result for df
LEVEL LEVEL-1 LEVEL-2 LEVEL-3 LEVEL-4 LEVEL-5 LEVEL-6 Value Result_exp
0 1 A01 NaN NaN NaN NaN NaN 10 550
1 2 NaN A011 NaN NaN NaN NaN 20 540
2 3 NaN NaN AO111 NaN NaN NaN 30 520
3 4 NaN NaN NaN A01111 NaN NaN 40 490
4 5 NaN NaN NaN NaN AO11111 NaN 50 180
5 6 NaN NaN NaN NaN NaN AO111111 60 60
6 6 NaN NaN NaN NaN NaN AO111112 70 70
7 5 NaN NaN NaN NaN AO11112 NaN 80 270
8 6 NaN NaN NaN NaN NaN AO111121 90 90
9 6 NaN NaN NaN NaN NaN AO111122 100 100
is
LEVEL Value Result_exp Node Parent Childs Result
0 1 10 550 A01 NaN {A011} 550
1 2 20 540 A011 A01 {AO111} 540
2 3 30 520 AO111 A011 {A01111} 520
3 4 40 490 A01111 AO111 {AO11111, AO11112} 490
4 5 50 180 AO11111 A01111 {AO111111, AO111112} 180
5 6 60 60 AO111111 AO11111 NaN 60
6 6 70 70 AO111112 AO11111 NaN 70
7 5 80 270 AO11112 A01111 {AO111122, AO111121} 270
8 6 90 90 AO111121 AO11112 NaN 90
9 6 100 100 AO111122 AO11112 NaN 100
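As a quick sanity check (a sketch; the expected values are copied from the Result_exp column above):
# hypothetical check: the computed results should match the expected ones
expected = [550, 540, 520, 490, 180, 60, 70, 270, 90, 100]
assert df["Result"].tolist() == expected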
Be aware that in the dataframe you've provided you're using the digit 0 and the letter O inconsistently (e.g. A01 and A01111 versus AO111 and AO11111).
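If those mixups are typos, you could normalize the labels before building the tree (a sketch, assuming every label is meant to start with A0, i.e. with a zero):
# sketch: force the digit 0 in place of the letter O after the leading A
for n in range(1, 7):
    df[f"LEVEL-{n}"] = df[f"LEVEL-{n}"].str.replace("AO", "A0", regex=False)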
I have one DataFrame, df, with the four columns shown below:
IDP1 IDP1Number IDP2 IDP2Number
1 100 1 NaN
3 110 2 150
5 120 3 NaN
7 140 4 160
9 150 5 190
NaN NaN 6 130
NaN NaN 7 NaN
NaN NaN 8 200
NaN NaN 9 90
NaN NaN 10 NaN
I want to map values from df.IDP1Number into IDP2Number by matching IDP2 against IDP1: if an IDP2 value also appears in IDP1, replace IDP2Number with the corresponding IDP1Number; otherwise leave the value in IDP2Number alone.
The error message that appears reads: "Reindexing only valid with uniquely valued Index objects".
The Dataframe below is what I wish to have:
IDP1 IDP1Number IDP2 IDP2Number
1 100 1 100
3 110 2 150
5 120 3 110
7 140 4 160
9 150 5 120
NaN NaN 6 130
NaN NaN 7 140
NaN NaN 8 200
NaN NaN 9 150
NaN NaN 10 NaN
Here's a way to do it:
import pandas as pd

# filter the rows where IDP1 exists and build an IDP1 -> IDP1Number dict
maps = (df.query("IDP1.notna()")[['IDP1', 'IDP1Number']]
          .set_index('IDP1')['IDP1Number'].to_dict())
# overwrite IDP2Number wherever IDP2 has a match (or IDP2Number is missing)
df['IDP2Number'] = df.apply(
    lambda x: maps.get(x['IDP2'], None)
    if (pd.isna(x['IDP2Number']) or x['IDP2'] in maps)
    else x['IDP2Number'],
    axis=1)
print(df)
IDP1 IDP1Number IDP2 IDP2Number
0 1.0 100.0 1 100.0
1 3.0 110.0 2 150.0
2 5.0 120.0 3 110.0
3 7.0 140.0 4 160.0
4 9.0 150.0 5 120.0
5 NaN NaN 6 130.0
6 NaN NaN 7 140.0
7 NaN NaN 8 200.0
8 NaN NaN 9 150.0
9 NaN NaN 10 NaN
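A vectorized alternative (a sketch reusing the maps dict from above; Series.map yields NaN for unmatched keys, and fillna then falls back to the existing values):
# map IDP2 through IDP1 -> IDP1Number; keep the old IDP2Number elsewhere
df['IDP2Number'] = df['IDP2'].map(maps).fillna(df['IDP2Number'])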
Editing my original post to hopefully simplify my question... I'm merging multiple DataFrames into one, SomeData.DataFrame, which gives me the following:
Key 2019-02-17 2019-02-24_x 2019-02-24_y 2019-03-03
0 A 80 NaN NaN 80
1 B NaN NaN 45 36
2 C 44 NaN 39 NaN
3 D 80 NaN NaN 12
4 E 49 2 NaN NaN
What I'm trying to do now is efficiently merge the columns ending in "_x" and "_y" while keeping everything else in place so that I get:
Key 2019-02-17 2019-02-24 2019-03-03
0 A 80 NaN 80
1 B NaN 45 36
2 C 44 39 NaN
3 D 80 NaN 12
4 E 49 2 NaN
The other issue I'm trying to account for is that the data contained in SomeData.DataFrame changes weekly, so my column headers are unpredictable. Some weeks I may not have the above issue at all, and other weeks there may be multiple instances, for example:
Key 2019-02-17 2019-02-24_x 2019-02-24_y 2019-03-10_x 2019-03-10_y
0 A 80 NaN NaN 80 NaN
1 B NaN NaN 45 36 NaN
2 C 44 NaN 39 NaN 12
3 D 80 NaN NaN 12 NaN
4 E 49 2 NaN NaN 17
So that again the desired result would be:
Key 2019-02-17 2019-02-24 2019-03-10
0 A 80 NaN 80
1 B NaN 45 36
2 C 44 39 12
3 D 80 NaN 12
4 E 49 2 17
Is what I'm asking reasonable or am I venturing outside the bounds of Pandas' limits? I can't find anyone trying to do anything similar so I'm not sure anymore. Thank you in advance!
Edited answer to updated question:
df = df.set_index('Key')
df.groupby(df.columns.str.split('_').str[0], axis=1).sum()
Output:
2019-02-17 2019-02-24 2019-03-03
Key
A 80.0 0.0 80.0
B 0.0 45.0 36.0
C 44.0 39.0 0.0
D 80.0 0.0 12.0
E 49.0 2.0 0.0
For the second dataframe:
df.groupby(df.columns.str.split('_').str[0], axis=1).sum()
Output:
2019-02-17 2019-02-24 2019-03-10
Key
A 80.0 0.0 80.0
B 0.0 45.0 36.0
C 44.0 39.0 12.0
D 80.0 0.0 12.0
E 49.0 2.0 17.0
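Note that sum() turns groups that are all NaN into 0.0, whereas the desired output keeps NaN there. GroupBy.sum accepts a min_count parameter, so this sketch should preserve those gaps:
df.groupby(df.columns.str.split('_').str[0], axis=1).sum(min_count=1)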
You could try something like this:
df_t = df.T
df_t.set_index(df_t.groupby(level=0).cumcount(), append=True)\
.unstack().T\
.sort_values(df.columns[0])[df.columns.unique()]\
.reset_index(drop=True)
Output:
val03-20 03-20 val03-24 03-24
0 a 1 d 5
1 b 6 e 7
2 c 4 f 10
3 NaN NaN g 5
4 NaN NaN h 6
5 NaN NaN i 1
For example:
If I have a data frame like this:
20 40 60 80 100 120 140
1 1 1 1 NaN NaN NaN NaN
2 1 1 1 1 1 NaN NaN
3 1 1 1 1 NaN NaN NaN
4 1 1 NaN NaN 1 1 1
How do I find the last filled column of each run of values in a row and then count the column distance from each cell back to it, so I get something like this?
20 40 60 80 100 120 140
1 40 20 0 NaN NaN NaN NaN
2 80 60 40 20 0 NaN NaN
3 60 40 20 0 NaN NaN NaN
4 20 0 NaN NaN 40 20 0
You can work row by row: reverse each row, take a cumulative count of the non-null run each cell belongs to, then scale by the column step.
import numpy as np

# Per-row procedure: counting on the reversed row makes each run of 1s
# count down to 0 at its last (rightmost) element.
def fill_values(row):
    row = row[::-1]
    a = row == 1
    b = a.cumsum()
    # length of the current run of 1s at each position (0 outside runs)
    runs = b - b.mask(a).ffill().fillna(0).astype(int)
    return runs[::-1] * 20  # assumes a fixed step of 20 between columns

df.apply(fill_values, 1).replace(0, np.nan) - 20
Out:
20 40 60 80 100 120 140
1 40.0 20.0 0.0 NaN NaN NaN NaN
2 80.0 60.0 40.0 20.0 0.0 NaN NaN
3 60.0 40.0 20.0 0.0 NaN NaN NaN
4 20.0 0.0 NaN NaN 40.0 20.0 0.0
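For reference, the example frame can be built like this (a sketch; the column labels are treated as plain integers):
import numpy as np
import pandas as pd

df = pd.DataFrame(
    [[1, 1, 1, np.nan, np.nan, np.nan, np.nan],
     [1, 1, 1, 1, 1, np.nan, np.nan],
     [1, 1, 1, 1, np.nan, np.nan, np.nan],
     [1, 1, np.nan, np.nan, 1, 1, 1]],
    index=[1, 2, 3, 4],
    columns=[20, 40, 60, 80, 100, 120, 140],
)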
I am attempting to transpose and merge two pandas dataframes: one contains accounts, the segment in which each deposit was received, the deposit amount, and the day it was received; the other has the accounts and withdrawal information. The issue is that, for indexing purposes, the segment columns of one dataframe should line up with those of the other, regardless of whether a withdrawal occurred.
Notes:
There will always be an account for every person
There will not always be a withdrawal for every person
The accounts and data for the withdrawal dataframe only exist if a withdrawal occurs
Account Dataframe Code
from pandas import DataFrame

accounts = DataFrame({'person': [1, 1, 1, 1, 1, 2, 2, 2, 2, 2],
                      'segment': [1, 2, 3, 4, 5, 1, 2, 3, 4, 5],
                      'date_received': [10, 20, 30, 40, 50, 11, 21, 31, 41, 51],
                      'amount_received': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]})
accounts = accounts.pivot_table(index=["person"], columns=["segment"])
Account Dataframe
amount_received date_received
segment 1 2 3 4 5 1 2 3 4 5
person
1 1 2 3 4 5 10 20 30 40 50
2 6 7 8 9 10 11 21 31 41 51
Withdrawal Dataframe Code
withdrawals = DataFrame({'person': [1, 1, 1, 2, 2],
                         'withdrawal_segment': [1, 1, 5, 2, 3],
                         'withdraw_date': [1, 2, 3, 4, 5],
                         'withdraw_amount': [10, 20, 30, 40, 50]})
withdrawals = withdrawals.reset_index().pivot_table(index=['index', 'person'],
                                                    columns=['withdrawal_segment'])
Each segment number must appear only once in the columns while still holding all of the data (note that person 1 withdraws from segment 1 twice, hence the extra index level), which is why this dataframe looks so different.
Withdrawal Dataframe
withdraw_date withdraw_amount
withdrawal_segment 1 2 3 5 1 2 3 5
index person
0 1 1.0 NaN NaN NaN 10.0 NaN NaN NaN
1 1 2.0 NaN NaN NaN 20.0 NaN NaN NaN
2 1 NaN NaN NaN 3.0 NaN NaN NaN 30.0
3 2 NaN 4.0 NaN NaN NaN 40.0 NaN NaN
4 2 NaN NaN 5.0 NaN NaN NaN 50.0 NaN
Merge
merge = accounts.merge(withdrawals, on='person', how='left')
amount_received date_received withdraw_date withdraw_amount
segment 1 2 3 4 5 1 2 3 4 5 1 2 3 5 1 2 3 5
person
1 1 2 3 4 5 10 20 30 40 50 1.0 NaN NaN NaN 10.0 NaN NaN NaN
1 1 2 3 4 5 10 20 30 40 50 2.0 NaN NaN NaN 20.0 NaN NaN NaN
1 1 2 3 4 5 10 20 30 40 50 NaN NaN NaN 3.0 NaN NaN NaN 30.0
2 6 7 8 9 10 11 21 31 41 51 NaN 4.0 NaN NaN NaN 40.0 NaN NaN
2 6 7 8 9 10 11 21 31 41 51 NaN NaN 5.0 NaN NaN NaN 50.0 NaN
The problem with the merged dataframe is that segments from the withdrawal dataframe aren't lined up with the accounts segments.
The desired dataframe should look something like:
amount_received date_received withdraw_date withdraw_amount
segment 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5
person
1 1 2 3 4 5 10 20 30 40 50 1.0 NaN NaN NaN NaN 10.0 NaN NaN NaN NaN
1 1 2 3 4 5 10 20 30 40 50 2.0 NaN NaN NaN NaN 20.0 NaN NaN NaN NaN
1 1 2 3 4 5 10 20 30 40 50 NaN NaN NaN NaN 3.0 NaN NaN NaN NaN 30.0
2 6 7 8 9 10 11 21 31 41 51 NaN 4.0 NaN NaN NaN NaN 40.0 NaN NaN NaN
2 6 7 8 9 10 11 21 31 41 51 NaN NaN 5.0 NaN NaN NaN NaN 50.0 NaN NaN
My problem is that I can't seem to merge across both person and segments. I've thought about inserting a row and column, but because I don't know which segments are and aren't going to have a withdrawal this gets difficult. Is it possible to merge the dataframes so that they line up across both people and segments? Thanks!
Method 1, using reindex
import pandas as pd

withdrawals = withdrawals.reindex(
    pd.MultiIndex.from_product([withdrawals.columns.levels[0],
                                accounts.columns.levels[1]]),
    axis=1)
merge = accounts.merge(withdrawals, on='person', how='left')
merge
Out[79]:
amount_received date_received \
segment 1 2 3 4 5 1 2 3 4 5
person
1 1 2 3 4 5 10 20 30 40 50
1 1 2 3 4 5 10 20 30 40 50
1 1 2 3 4 5 10 20 30 40 50
2 6 7 8 9 10 11 21 31 41 51
2 6 7 8 9 10 11 21 31 41 51
withdraw_amount withdraw_date
segment 1 2 3 4 5 1 2 3 4 5
person
1 10.0 NaN NaN NaN NaN 1.0 NaN NaN NaN NaN
1 20.0 NaN NaN NaN NaN 2.0 NaN NaN NaN NaN
1 NaN NaN NaN NaN 30.0 NaN NaN NaN NaN 3.0
2 NaN 40.0 NaN NaN NaN NaN 4.0 NaN NaN NaN
2 NaN NaN 50.0 NaN NaN NaN NaN 5.0 NaN NaN
Method 2, using stack and unstack
merge = accounts.merge(withdrawals, on='person', how='left')
merge.stack(dropna=False).unstack()
Out[82]:
amount_received date_received \
segment 1 2 3 4 5 1 2 3 4 5
person
1 1 2 3 4 5 10 20 30 40 50
1 1 2 3 4 5 10 20 30 40 50
1 1 2 3 4 5 10 20 30 40 50
2 6 7 8 9 10 11 21 31 41 51
2 6 7 8 9 10 11 21 31 41 51
withdraw_amount withdraw_date
segment 1 2 3 4 5 1 2 3 4 5
person
1 10.0 NaN NaN NaN NaN 1.0 NaN NaN NaN NaN
1 20.0 NaN NaN NaN NaN 2.0 NaN NaN NaN NaN
1 NaN NaN NaN NaN 30.0 NaN NaN NaN NaN 3.0
2 NaN 40.0 NaN NaN NaN NaN 4.0 NaN NaN NaN
2 NaN NaN 50.0 NaN NaN NaN NaN 5.0 NaN NaN
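Both methods produce the same aligned frame: Method 1 fixes the column index up front by reindexing the withdrawals to the full cross product of withdrawal fields and account segments, while Method 2 merges first and then round-trips through stack(dropna=False)/unstack so the missing segment columns are materialized as NaN.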