Pandas - Sum Previous Rows if Value In Column Meets Condition - python

I have a dataframe that is of the following type. I have all the columns except the final column, "Total Previous Points P1", which I am hoping to create:
The data is sorted by the "Date" column.
Date     | Points_P1 | P1_id | P2_id | Total_Previous_Points_P1
---------|-----------|-------|-------|-------------------------
10/08/15 | 5         | 100   | 90    | 500
11/09/16 | 5         | 100   | 90    | 500
20/09/19 | 10        | 10000 | 360   | 4,200
...      | ...       | ...   | ...   | ...
n        |           |       |       |
Now, the column I want to create is the "Total_Previous_Points_P1" column shown above.
The way to create it:
For each row, check the date (call this DATE_VAL) and P1_id (call this ID_VAL).
Then, for all rows with a date before DATE_VAL AND with P1_id == ID_VAL, sum up the points.
Put this sum in the final column of the current row.
Is there a fast pandas pythonic way to do this? My data set is very large.
Thank you!

The solution by SIA computes the sum of Points_P1 including the
current value of Points_P1, whereas the requirement is to sum the
previous points (for all rows before...).
Assuming that dates in each group are unique (in your sample they are),
the proper, pandasonic solution should include the following steps:
Sort by Date.
Group by P1_id, then for each group:
Take Points_P1 column.
Compute cumulative sum.
Subtract the current value of Points_P1.
So the whole code should be:
df['Total_Previous_Points_P1'] = df.sort_values('Date')\
    .groupby(['P1_id']).Points_P1.cumsum() - df.Points_P1
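Equivalently (just a sketch, under the same assumption that dates are unique per P1_id), you can shift inside each group before cumulating, which avoids the subtraction:

# Equivalent variant: in date order, shift within each P1_id group so the current
# row is excluded, then take the cumulative sum. transform() keeps the original
# index, so the result assigns straight back onto df.
df['Total_Previous_Points_P1'] = (
    df.sort_values('Date')
      .groupby('P1_id')['Points_P1']
      .transform(lambda s: s.shift(fill_value=0).cumsum())
)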
Edit
If Date is not unique within a group of rows sharing some P1_id, the case
is more complicated, as can be shown with the following source DataFrame:
Date Points_P1 P1_id
0 2016-11-09 5 100
1 2016-11-09 3 100
2 2015-10-08 5 100
3 2019-09-20 10 10000
4 2019-09-21 7 100
5 2019-07-10 12 10000
6 2019-12-10 12 10000
Note that for P1_id == 100 there are two rows with Date 2016-11-09.
In this case, start by computing "group" sums of previous points
for each P1_id and Date:
sumPrev = df.groupby(['P1_id', 'Date']).Points_P1.sum()\
    .groupby(level=0).apply(lambda gr: gr.shift(fill_value=0).cumsum())\
    .rename('Total_Previous_Points_P1')
The result is:
P1_id Date
100 2015-10-08 0
2016-11-09 5
2019-09-21 13
10000 2019-07-10 0
2019-09-20 12
2019-12-10 22
Name: Total_Previous_Points_P1, dtype: int64
Then merge df with sumPrev on P1_id and Date (which in sumPrev form the index):
df = pd.merge(df, sumPrev, left_on=['P1_id', 'Date'], right_index=True)
To show the result, it is more instructive to sort df also on ['P1_id', 'Date']:
Date Points_P1 P1_id Total_Previous_Points_P1
2 2015-10-08 5 100 0
0 2016-11-09 5 100 5
1 2016-11-09 3 100 5
4 2019-09-21 7 100 13
5 2019-07-10 12 10000 0
3 2019-09-20 10 10000 12
6 2019-12-10 12 10000 22
As you can see:
The first sum for each P1_id is 0 (no points from previous dates).
E.g. for both rows with Date == 2016-11-09, the sum of previous
points is 5 (which comes from the row with Date == 2015-10-08).
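One small follow-up, not part of the original answer: since the merge keeps df's original index (visible in the leftmost column above), the initial row order can be restored afterwards with a plain index sort:

# Optional: restore the original row order after the merge.
df = df.sort_index()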

Try:
df['Total_Previous_Points_P1'] = df.groupby(['P1_id'])['Points_P1'].cumsum()
How It Works
First, it groups the data using the P1_id column.
Then it accesses the Points_P1 values of the grouped dataframe and applies the cumulative sum function cumsum(), which returns the sum of points up to and including the current row of each group.

Related

Very simple pandas column/row transform that I cannot figure out

I need to do a simple calculation on values in a dataframe, but I need some columns transposed first. Once they are transposed, I want to take the most recent amount divided by the 2nd most recent amount, and then a binary flag for whether that ratio is less than or equal to .5.
By most recent I mean closest to the date in the Date 2 column.
Have This
| Name | Amount | Date 1 | Date 2 |
| -----| ---- |------------------------|------------|
| Jim | 100 | 2021-06-10 | 2021-06-15 |
| Jim | 200 | 2021-05-11 | 2021-06-15 |
| Jim | 150 | 2021-03-5 | 2021-06-15 |
| Bob | 350 | 2022-06-10 | 2022-08-30 |
| Bob | 300 | 2022-08-12 | 2022-08-30 |
| Bob | 400 | 2021-07-6 | 2022-08-30 |
I Want this
| Name | Amount | Date 2| Most Recent Amount(MRA) | 2nd Most Recent Amount(2MRA) | MRA / 2MRA| Less than or equal to .5 |
| -----| -------|------------------------|----------------|--------------------|-------------|--------------------------|
| Jim | 100 | 2021-06-15 | 100 | 200 | .5 | 1 |
| Bob | 300 | 2022-08-30 | 300 | 350 | .85 | 0 |
This is the original dataframe.
df = pd.DataFrame({'Name':['Jim','Jim','Jim','Bob','Bob','Bob'],
'Amount':[100,200,150,350,300,400],
'Date 1':['2021-06-10','2021-05-11','2021-03-05','2022-06-10','2022-08-12','2021-07-06'],
'Date 2':['2021-06-15','2021-06-15','2021-06-15','2022-08-30','2022-08-30','2022-08-30']
})
And this is the result.
# here we take the groupby of the 'Name' column
g = df.sort_values('Date 1', ascending=False).groupby(['Name'])
# then we use the agg function to get the first of the 'Date 2' and 'Amount' columns
# and rename the result of the 'Amount' column to 'MRA'
first = g.agg({'Date 2':'first','Amount':'first'}).rename(columns={'Amount':'MRA'}).reset_index()
# similarly, we take the second values by applying a lambda function
second = g.agg({'Date 2':'first','Amount':lambda t: t.iloc[1]}).rename(columns={'Amount':'2MRA'}).reset_index()
df_T = pd.merge(first, second, on=['Name','Date 2'], how='left')
# then we use this function to add the two desired columns
def operator(x):
    return x['MRA']/x['2MRA'], 1 if x['MRA']/x['2MRA'] <= .5 else 0
# we apply the operator function to add the 'MRA/2MRA' and 'Less than or equal to .5' columns
df_T['MRA/2MRA'], df_T['Less than or equal to .5'] = zip(*df_T.apply(operator, axis=1))
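As a side note, the same two columns can also be added without a row-wise apply, which tends to be faster on large frames; a minimal vectorized sketch of that last step:

# Vectorized equivalent of the operator() step above.
df_T['MRA/2MRA'] = df_T['MRA'] / df_T['2MRA']
df_T['Less than or equal to .5'] = (df_T['MRA/2MRA'] <= 0.5).astype(int)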
Hope this helps. :)
One way to do what you've asked is:
df = ( df[df['Date 1'] <= df['Date 2']]
       .groupby('Name', sort=False)['Date 1'].nlargest(2)
       .reset_index(level=0)
       .assign(**{
           'Amount': df.Amount,
           'Date 2': df['Date 2'],
           'recency': ['MRA','MRA2']*len(set(df.Name.tolist()))
       })
       .pivot(index=['Name','Date 2'], columns='recency', values='Amount')
       .reset_index().rename_axis(columns=None) )
df = df.assign(**{'Amount':df.MRA, 'MRA / MRA2': df.MRA/df.MRA2})
df = df.assign(**{'Less than or equal to .5': (df['MRA / MRA2'] <= 0.5).astype(int)})
df = pd.concat([df[['Name', 'Amount']], df.drop(columns=['Name', 'Amount'])], axis=1)
Input:
Name Amount Date 1 Date 2
0 Jim 100 2021-06-10 2021-06-15
1 Jim 200 2021-05-11 2021-06-15
2 Jim 150 2021-03-05 2021-06-15
3 Bob 350 2022-06-10 2022-08-30
4 Bob 300 2022-08-12 2022-08-30
5 Bob 400 2021-07-06 2022-08-30
Output:
Name Amount Date 2 MRA MRA2 MRA / MRA2 Less than or equal to .5
0 Bob 300 2022-08-30 300 350 0.857143 0
1 Jim 100 2021-06-15 100 200 0.500000 1
Explanation:
Filter only for rows where Date 1 <= Date 2
Use groupby() and nlargest() to get the 2 most recent Date 1 values per Name
Use assign() to add back the Amount and Date 2 columns and create a recency column containing MRA and MRA2 for the pair of rows corresponding to each Name value
Use pivot() to turn the recency values MRA and MRA2 into column labels
Use reset_index() to restore Name and Date 2 to columns, and use rename_axis() to make the columns index anonymous
Use assign() once to restore Amount and add column MRA / MRA2, and again to add column named Less than or equal to .5
Use concat(), [] and drop() to rearrange the columns to match the output sequence shown in the question.
Here's the rough procedure you want:
sort_values by Name and Date 1 to get the data in order.
shift to get the previous date and 2nd most recent amount fields
Filter the dataframe for Date 1 <= Date 2.
groupby Name and use head to get only the first row.
Now, your Amount column is your Most Recent Amount and your shifted Amount column is the 2nd Most Recent Amount. From there, you can do a simple division to get the ratio, as sketched below.
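A rough, untested sketch of that procedure (column names taken from the question; the dates are assumed to be comparable, e.g. already datetimes or ISO-formatted strings):

import pandas as pd

# Sort descending within each Name so head(1) picks the most recent row.
out = df.sort_values(['Name', 'Date 1'], ascending=[True, False])
out['2MRA'] = out.groupby('Name')['Amount'].shift(-1)   # next row = 2nd most recent amount
out = out[out['Date 1'] <= out['Date 2']]               # drop rows dated after Date 2
out = out.groupby('Name', sort=False).head(1)           # first remaining row per Name = most recent
out = out.rename(columns={'Amount': 'MRA'})
out['MRA / 2MRA'] = out['MRA'] / out['2MRA']
out['Less than or equal to .5'] = (out['MRA / 2MRA'] <= 0.5).astype(int)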

Filtering, transposing and concatenating with Pandas

I'm trying something I've never done before and I'm in need of some help.
Basically, I need to filter sections of a pandas dataframe, transpose each filtered section, and then concatenate the resulting sections together.
Here's a representation of my dataframe:
df:
id | text_field | text_value
1 Date 2021-06-23
1 Hour 10:50
2 Position City
2 Position Countryside
3 Date 2021-06-22
3 Hour 10:45
I can then use some filtering method to isolate parts of my data:
df.groupby('id').filter(lambda x: True)
test = df.query(' id == 1 ')
test = test[["text_field","text_value"]]
test_t = test.set_index("text_field").T
test_t:
text_field | Date | Hour
text_value | 2021-06-23 | 10:50
If I repeat the process looking for the rows with id == 3 and then concatenate the result with test_t, I'll have the following:
text_field | Date | Hour
text_value | 2021-06-23 | 10:50
text_value | 2021-06-22 | 10:45
I'm aware that doing this with the rows where id == 2 will give me other columns, and that's alright too; it's what I want as well.
What I can't figure out is how to do this for every "id" in my dataframe. I wasn't able to create a function or for loop that works. Can somebody help me?
To summarize:
1 - I need to separate my dataframe into sections according to the values in the "id" column
2 - After that I need to remove the "id" column and transpose the result
3 - I need to concatenate every resulting dataframe into one big dataframe
You can use pivot_table:
df.pivot_table(
    index='id', columns='text_field', values='text_value', aggfunc='first')
Output:
text_field Date Hour Position
id
1 2021-06-23 10:50 NaN
2 NaN NaN City
3 2021-06-22 10:45 NaN
It's not exactly clear how you want to deal with repeating values, though; it would be great to have some description of that (id == 2 would make a good example).
Update: If you want to ignore the ids and simply concatenate all the values:
pd.DataFrame(df.groupby('text_field')['text_value'].apply(list).to_dict())
Output:
Date Hour Position
0 2021-06-23 10:50 City
1 2021-06-22 10:45 Countryside
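If you do want the explicit per-id loop described in the question, a minimal sketch (roughly equivalent to the pivot_table call above) could look like this; the cumcount step is only there so that repeated text_field labels within a single id, like the two Position rows of id == 2, don't collide as duplicate columns:

import pandas as pd

# Manual approach from the question: one transposed block per id, concatenated at the end.
pieces = []
for _, section in df.groupby('id'):
    section = section[['text_field', 'text_value']].copy()
    # make repeated text_field labels within one id unique before transposing
    dup = section.groupby('text_field').cumcount()
    section['text_field'] = section['text_field'].where(
        dup == 0, section['text_field'] + '_' + (dup + 1).astype(str))
    pieces.append(section.set_index('text_field').T)
result = pd.concat(pieces, ignore_index=True)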

Calculate aggregate value of column row by row

My apologies for the vague title; it's complicated to put what I want into words.
I'm trying to build a filled line chart with the date on the x axis and the total transactions over time on the y axis.
My data
The object is a pandas dataframe.
date | symbol | type | qty | total
----------------------------------------------
2020-09-10 ABC Buy 5 10
2020-10-18 ABC Buy 2 20
2020-09-19 ABC Sell 3 15
2020-11-05 XYZ Buy 10 8
2020-12-03 XYZ Buy 10 9
2020-12-05 ABC Buy 2 5
What I want
date | symbol | type | qty | total | aggregate_total
------------------------------------------------------------
2020-09-10 ABC Buy 5 10 10
2020-10-18 ABC Buy 2 20 10+20 = 30
2020-09-19 ABC Sell 3 15 10+20-15 = 15
2020-11-05 XYZ Buy 10 8 8
2020-12-03 XYZ Buy 10 9 8+9 = 17
2020-12-05 ABC Buy 2 5 10+20-15+5 = 20
Where I am now
I'm working with two nested for loops: one iterating over the symbols, one iterating over each row. I store the temporary results in lists. I'm still unsure how I will add the results to the final dataframe. I could reorder the dataframe by symbol and date, then append the temp lists together and finally assign that combined list to a new column.
The code below is just the inner loop over the rows.
af = df.loc[df['symbol'] == 'ABC']
for i in range(0, af.shape[0]):
    # print(af.iloc[0:i, [2, 4]])
    # if type is a buy, we add the last operation to the aggregate
    if af.iloc[i, 2] == "BUY":
        temp_agg_total.append(temp_agg_total[i] + af.iloc[i, 4])
        temp_agg_qty.append(temp_agg_qty[i] + af.iloc[i, 3])
    else:
        temp_agg_total.append(temp_agg_total[i] - af.iloc[i, 4])
        temp_agg_qty.append(temp_agg_qty[i] - af.iloc[i, 3])
# Remove first element of list (0)
temp_agg_total.pop(0)
temp_agg_qty.pop(0)
af = af.assign(agg_total=temp_agg_total,
               agg_qty=temp_agg_qty)
My question
Is there a better way to do this in pandas or numpy? It feels really heavy for something relatively simple.
The presence of the Buy/Sell type of operation complicates things.
Regards
# negate the total of Sells
df.loc[df['type'] == 'Sell', 'total'] *= -1
# cumulative sum of the total within each symbol
df['aggregate_total'] = df.groupby('symbol')['total'].cumsum()
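If you'd rather not overwrite the original total column in place, a small variant of the same idea (my sketch, not part of the answer above) uses a temporary signed series:

# Same cumulative-sum idea, without modifying df['total'] in place.
signed = df['total'].where(df['type'] != 'Sell', -df['total'])
df['aggregate_total'] = signed.groupby(df['symbol']).cumsum()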
Is this what you're looking for?
df['Agg'] = 1
df.loc[df['type'] == 'Sell', 'Agg'] = -1
df['Agg'] = df['Agg']*df['total']
df['Agg'].cumsum()
df["Type_num"] = df["type"].map({"Buy":1,"Sell":-1})
df["Num"] = df.Type_num*df.total
df.groupby(["symbol"],as_index=False)["Num"].cumsum()
pd.concat([df,df.groupby(["symbol"],as_index=False)["Num"].cumsum()],axis=1)
date symbol type qty total Type_num Num CumNum
0 2020-09-10 ABC Buy 5 10 1 10 10
1 2020-10-18 ABC Buy 2 20 1 20 30
2 2020-09-19 ABC Sell 3 15 -1 -15 15
3 2020-11-05 XYZ Buy 10 8 1 8 8
4 2020-12-03 XYZ Buy 10 9 1 9 17
5 2020-12-05 ABC Buy 2 5 1 5 20
The most important thing here is the cumulative sum. The groupby is used to make sure that the cumulative sum is performed separately for each symbol. The renaming and dropping of columns should be easy for you.
The trick is that I mapped {Buy, Sell} to {1, -1}.
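Putting the remaining renaming and column assignment together, a compact sketch of the same approach (writing the result into the aggregate_total column the question asks for):

# Map the sign of each operation, then cumulate the signed totals per symbol.
sign = df['type'].map({'Buy': 1, 'Sell': -1})
df['aggregate_total'] = (sign * df['total']).groupby(df['symbol']).cumsum()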

Calculate new column in pandas dataframe based only on grouped records

I have a dataframe with various events (id) and the following structure; the df is grouped by id and sorted on timestamp:
id | timestamp | A | B
1 | 02-05-2016|bla|bla
1 | 04-05-2016|bla|bla
1 | 05-05-2016|bla|bla
2 | 11-02-2015|bla|bla
2 | 14-02-2015|bla|bla
2 | 18-02-2015|bla|bla
2 | 31-03-2015|bla|bla
3 | 02-08-2016|bla|bla
3 | 07-08-2016|bla|bla
3 | 27-09-2016|bla|bla
Each timestamp-id combo indicates a different stage in the process of the event with that particular id. Each new record for a specific id indicates the start of a new stage for that event-id.
I would like to add a new column Duration that calculates the duration of each stage for each event (see the desired df below). This is easy, as I can simply calculate the difference between the timestamp of the next stage for the same event id and the timestamp of the current stage, as follows:
df['Start'] = pd.to_datetime(df['timestamp'])
df['End'] = pd.to_datetime(df['timestamp'].shift(-1))
df['Duration'] = df['End'] - df['Start']
My problem appears at the last stage of each event id, where I want to simply display NaNs or dashes, since that stage has not finished yet and the end time is unknown. My solution simply takes the timestamp of the next row, which is not always correct, as it might belong to a completely different event.
Desired output:
id | timestamp | A | B | Duration
1 | 02-05-2016|bla|bla| 2 days
1 | 04-05-2016|bla|bla| 1 days
1 | 05-05-2016|bla|bla| ------
2 | 11-02-2015|bla|bla| 3 days
2 | 14-02-2015|bla|bla| 4 days
2 | 18-02-2015|bla|bla| 41 days
2 | 31-03-2015|bla|bla| -------
3 | 02-08-2016|bla|bla| 5 days
3 | 07-08-2016|bla|bla| 50 days
3 | 27-09-2016|bla|bla| -------
I think this does what you want:
df['timestamp'] = pd.to_datetime(df['timestamp'])
df['Duration'] = df.groupby('id')['timestamp'].diff().shift(-1)
If I understand correctly: groupby('id') tells pandas to apply .diff().shift(-1) to each group as if it were a miniature DataFrame independent of the other rows. I tested it on this fake data:
import pandas as pd
import numpy as np
# Generate some fake data
df = pd.DataFrame()
df['id'] = [1]*5 + [2]*3 + [3]*4
df['timestamp'] = pd.to_datetime('2017-01-1')
duration = sorted(np.random.randint(30,size=len(df)))
df['timestamp'] += pd.to_timedelta(duration)
df['A'] = 'spam'
df['B'] = 'eggs'
but double-check just to be sure I didn't make a mistake!
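If you want the literal dashes from the desired output instead of NaT, one optional extra step (not part of the answer above) is to format the column as strings at the end:

# Purely cosmetic: show dashes instead of NaT. Note this turns the column into strings,
# and the timedelta text may include a time component, e.g. '2 days 00:00:00'.
df['Duration'] = df['Duration'].astype(str).replace('NaT', '------')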
Here is one approach using apply
def timediff(row):
    row['timestamp'] = pd.to_datetime(row['timestamp'], format='%d-%m-%Y')
    return pd.DataFrame(row['timestamp'].diff().shift(-1))

res = df.assign(duration=df.groupby('id').apply(timediff))
Output:
id timestamp duration
0 1 02-05-2016 2 days
1 1 04-05-2016 1 days
2 1 05-05-2016 NaT
3 2 11-02-2015 3 days
4 2 14-02-2015 4 days
5 2 18-02-2015 41 days
6 2 31-03-2015 NaT
7 3 02-08-2016 5 days
8 3 07-08-2016 51 days
9 3 27-09-2016 NaT

Conditional expanding group aggregation pandas

For some data preprocessing, I have a huge dataframe where I need historical performance within groups. However, since it is for a predictive model that runs a week before the target, I cannot use any data from that week in between. There is a variable number of rows per day per group, which means I cannot always discard the last 7 values by using a shift on the expanding functions; I have to somehow condition on the datetime of the rows before. I could write my own function to apply to the groups, but in my experience that is usually very slow (albeit flexible). This is how I did it without conditioning on the date, just looking at previous records:
df.loc[:, 'new_col'] = df_gr['old_col'].apply(lambda x: x.expanding(5).mean().shift(1))
The 5 means that I want a sample size of at least 5; otherwise the result should be NaN.
Small example with aggr_mean looking at the mean of all samples within group A at least a week earlier:
group | dt       | value | aggr_mean
A     | 01-01-16 | 5     | NaN
A     | 03-01-16 | 4     | NaN
A     | 08-01-16 | 12    | 5 (only looks at the first row)
A     | 17-01-16 | 11    | 7 (looks at the first three rows since all are at least a week earlier)
new answer
using @JulienMarrec's better example
dt group value
2016-01-01 A 5
2016-01-03 A 4
2016-01-08 A 12
2016-01-17 A 11
2016-01-04 B 10
2016-01-05 B 5
2016-01-08 B 12
2016-01-17 B 11
Condition df to be more useful
d1 = df.drop('group', 1)
d1.index = [df.group, df.groupby('group').cumcount().rename('gidx')]
d1
Create a custom function that does what the old answer did, then apply it within a groupby:
def lag_merge_asof(df, lag):
    d = df.set_index('dt').value.expanding().mean()
    d.index = d.index + pd.offsets.Day(lag)
    d = d.reset_index(name='aggr_mean')
    return pd.merge_asof(df, d)
d1.groupby(level='group').apply(lag_merge_asof, lag=7)
we can get some formatting with this
d1.groupby(level='group').apply(lag_merge_asof, lag=7) \
.reset_index('group').reset_index(drop=True)
old answer
Create a lookback dataframe by offsetting the dates by 7 days, then use it with pd.merge_asof:
lookback = df.set_index('dt').value.expanding().mean()
lookback.index += pd.offsets.Day(7)
lookback = lookback.reset_index(name='aggr_mean')
lookback
pd.merge_asof(df, lookback, left_on='dt', right_on='dt')
Given this dataframe where I added another group in order to more clearly see what's happening:
dt group value
2016-01-01 A 5
2016-01-03 A 4
2016-01-08 A 12
2016-01-17 A 11
2016-01-04 B 10
2016-01-05 B 5
2016-01-08 B 12
2016-01-17 B 11
Let's load it:
df = pd.read_clipboard(index_col=0, sep='\s+', parse_dates=True)
Now we can group by group, resample daily, shift by 7 days, and take the expanding mean:
x = df.groupby('group')['value'].apply(lambda gp: gp.resample('1D').mean().shift(7).expanding().mean())
Now you can left-join that back into your df:
merged = df.reset_index().set_index(['group','dt']).join(x, rsuffix='_aggr_mean', how='left')
merged
