cumulative month to date and year to date sum - python

I am having issues finding a solution for the cumulative sum for MTD and YTD.
I need help to get this result.

Use groupby.cumsum, grouping on periods obtained with to_period:
# ensure datetime
s = pd.to_datetime(df['date'], dayfirst=False)
# group by year
df['ytd'] = df.groupby(s.dt.to_period('Y'))['count'].cumsum()
# group by month
df['mtd'] = df.groupby(s.dt.to_period('M'))['count'].cumsum()
Example (with dummy data):
date count ytd mtd
0 2022-08-26 6 6 6
1 2022-08-27 1 7 7
2 2022-08-28 4 11 11
3 2022-08-29 4 15 15
4 2022-08-30 8 23 23
5 2022-08-31 4 27 27
6 2022-09-01 6 33 6
7 2022-09-02 3 36 9
8 2022-09-03 5 41 14
9 2022-09-04 8 49 22
10 2022-09-05 7 56 29
11 2022-09-06 9 65 38
12 2022-09-07 9 74 47
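
For reference, a minimal sketch that reproduces the table above (the dates and counts are copied from it; nothing else is assumed):

import pandas as pd

# rebuild the dummy data shown in the table above
df = pd.DataFrame({
    'date': pd.date_range('2022-08-26', periods=13, freq='D'),
    'count': [6, 1, 4, 4, 8, 4, 6, 3, 5, 8, 7, 9, 9],
})

s = pd.to_datetime(df['date'])
# year-to-date and month-to-date running totals, as in the answer above
df['ytd'] = df.groupby(s.dt.to_period('Y'))['count'].cumsum()
df['mtd'] = df.groupby(s.dt.to_period('M'))['count'].cumsum()
print(df)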

Related

Create timeseries data - Pandas

I have a multi-index dataframe of timeseries data which looks like the following:
A B C
1 1 21 32 4
2 4 2 23
3 12 9 10
4 1 56 37
.
.
.
.
30 63 1 27
31 32 2 32
.
.
.
12 1 2 3 23
2 23 1 12
3 32 3 23
.
.
.
31 23 2 32
It is essentially a multi-index of months and dates with three columns.
I need to turn this into daily data: a dataframe with a single date index, where each value in the above dataframe corresponds to its respective date over 10 years.
For example, the desired output:
A B C
01/01/2017 21 32 4
.
.
31/12/2017 23 2 32
.
.
01/01/2022 21 32 4
.
.
31/12/2022 23 2 32
I hope this is clear! It's essentially turning daily/monthly data into daily/monthly/yearly data.
You can use:
# interpret the (month, day) index levels as dates of a single year (2022)
df.index = pd.to_datetime(df.index.rename(['month', 'day']).to_frame().assign(year=2022))
Output:
A B C
2022-01-01 21 32 4
2022-01-02 4 2 23
2022-01-03 12 9 10
2022-01-04 1 56 37
2022-01-30 63 1 27
2022-01-31 32 2 32
2022-12-01 2 3 23
2022-12-02 23 1 12
2022-12-03 32 3 23
2022-12-31 23 2 32
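For a runnable toy example, a frame with the same (month, day) MultiIndex can be built like this (the values are copied from a few of the rows shown in the question; the exact construction is an assumption for illustration):

import pandas as pd

# hypothetical (month, day) MultiIndex mirroring the question's layout
idx = pd.MultiIndex.from_tuples([(1, 1), (1, 2), (1, 3), (1, 4), (12, 31)])
df = pd.DataFrame({'A': [21, 4, 12, 1, 23],
                   'B': [32, 2, 9, 56, 2],
                   'C': [4, 23, 10, 37, 32]}, index=idx)

# same idea as above: treat the two index levels as month/day of a fixed year
df.index = pd.to_datetime(df.index.rename(['month', 'day']).to_frame().assign(year=2022))
print(df)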
spanning several years
There is no absolutely foolproof way to handle years if they are missing. What we can do is infer a year change whenever a date goes back in time, and add 1 year in that case:
# let's assume the starting year is 2017
date = pd.to_datetime(df.index.rename(['month', 'day']).to_frame().assign(year=2017))
# each time the date goes backwards, bump the accumulated year offset by one
df.index = date + date.diff().lt('0').cumsum().mul(pd.DateOffset(years=1))
output:
A B C
2017-01-01 21 32 4
2017-01-02 4 2 23
2017-06-03 12 9 10
2017-06-04 1 56 37
2018-01-30 63 1 27 # added 1 year
2018-01-31 32 2 32
2018-12-01 2 3 23
2018-12-02 23 1 12
2018-12-03 32 3 23
2018-12-31 23 2 32
used input:
A B C
1 1 21 32 4
2 4 2 23
6 3 12 9 10
4 1 56 37
1 30 63 1 27 # here we go back from month 1 after month 6
31 32 2 32
12 1 2 3 23
2 23 1 12
3 32 3 23
31 23 2 32

Pandas expand date range with multiple times and forward filling

I have a dataframe like this:
DATE MIN_AMOUNT MAX_AMOUNT MIN_DAY MAX_DAY
01/09/2022 10 20 1 2
01/09/2022 15 25 4 5
01/09/2022 30 50 7 10
05/09/2022 10 20 1 2
05/09/2022 15 25 4 5
07/09/2022 15 25 4 5
I want to expand the dataframe to cover the full date range between the dates in the DATE column, with forward filling. The desired output is:
DATE MIN_AMOUNT MAX_AMOUNT MIN_DAY MAX_DAY
01/09/2022 10 20 1 2
01/09/2022 15 25 4 5
01/09/2022 30 50 7 10
02/09/2022 10 20 1 2
02/09/2022 15 25 4 5
02/09/2022 30 50 7 10
03/09/2022 10 20 1 2
03/09/2022 15 25 4 5
03/09/2022 30 50 7 10
04/09/2022 10 20 1 2
04/09/2022 15 25 4 5
04/09/2022 30 50 7 10
05/09/2022 10 20 1 2
05/09/2022 15 25 4 5
06/09/2022 10 20 1 2
06/09/2022 15 25 4 5
07/09/2022 15 25 4 5
Could you please help me with this?
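For a runnable setup, the sample frame can be reconstructed like this (values copied from the table above) before trying either answer below:

import pandas as pd

df = pd.DataFrame({
    'DATE': ['01/09/2022', '01/09/2022', '01/09/2022',
             '05/09/2022', '05/09/2022', '07/09/2022'],
    'MIN_AMOUNT': [10, 15, 30, 10, 15, 15],
    'MAX_AMOUNT': [20, 25, 50, 20, 25, 25],
    'MIN_DAY': [1, 4, 7, 1, 4, 4],
    'MAX_DAY': [2, 5, 10, 2, 5, 5],
})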
First convert the values to datetimes, then create a helper counter Series g with GroupBy.cumcount so the data can be reshaped with DataFrame.set_index and DataFrame.unstack. Use DataFrame.asfreq with method='ffill' to fill the missing days, reshape back with DataFrame.stack, remove the helper level with DataFrame.droplevel, convert the DatetimeIndex to a column, change the datetime format, and finally restore the same dtypes as the original DataFrame:
df['DATE'] = pd.to_datetime(df['DATE'], dayfirst=True)
g = df.groupby('DATE').cumcount()

df = (df.set_index(['DATE', g])
        .unstack()
        .asfreq('D', method='ffill')
        .stack()
        .droplevel(-1)
        .reset_index()
        .assign(DATE=lambda x: x['DATE'].dt.strftime('%d/%m/%Y'))
        .astype(df.dtypes)
      )
print(df)
DATE MIN_AMOUNT MAX_AMOUNT MIN_DAY MAX_DAY
0 2022-01-09 10 20 1 2
1 2022-01-09 15 25 4 5
2 2022-01-09 30 50 7 10
3 2022-02-09 10 20 1 2
4 2022-02-09 15 25 4 5
5 2022-02-09 30 50 7 10
6 2022-03-09 10 20 1 2
7 2022-03-09 15 25 4 5
8 2022-03-09 30 50 7 10
9 2022-04-09 10 20 1 2
10 2022-04-09 15 25 4 5
11 2022-04-09 30 50 7 10
12 2022-05-09 10 20 1 2
13 2022-05-09 15 25 4 5
14 2022-06-09 10 20 1 2
15 2022-06-09 15 25 4 5
16 2022-07-09 15 25 4 5
A couple of merges should help with this, and should still be efficient as the data size increases:
Get the unique dates and build a new dataframe from that:
out = df.DATE.drop_duplicates()
dates = pd.date_range(out.min(), out.max(), freq='D')
dates = pd.DataFrame(dates, columns=['dates'])
Merge dates with out, and subsequently merge the outcome with the original dataframe:
(dates
 .merge(out, left_on='dates', right_on='DATE', how='left')
 # faster to fill on a Series than a DataFrame
 .assign(DATE=lambda df: df.DATE.ffill())
 .merge(df, on='DATE', how='left')
 .drop(columns='DATE')
 .rename(columns={'dates': 'DATE'})
)
DATE MIN_AMOUNT MAX_AMOUNT MIN_DAY MAX_DAY
0 2022-09-01 10 20 1 2
1 2022-09-01 15 25 4 5
2 2022-09-01 30 50 7 10
3 2022-09-02 10 20 1 2
4 2022-09-02 15 25 4 5
5 2022-09-02 30 50 7 10
6 2022-09-03 10 20 1 2
7 2022-09-03 15 25 4 5
8 2022-09-03 30 50 7 10
9 2022-09-04 10 20 1 2
10 2022-09-04 15 25 4 5
11 2022-09-04 30 50 7 10
12 2022-09-05 10 20 1 2
13 2022-09-05 15 25 4 5
14 2022-09-06 10 20 1 2
15 2022-09-06 15 25 4 5
16 2022-09-07 15 25 4 5

Group columns based on the headers if they are found in the same list. Pandas Python

So I have a data frame that is something like this
Resource 2020-06-01 2020-06-02 2020-06-03
Name1 8 7 8
Name2 7 9 9
Name3 10 10 10
Imagine that the header is literally all the days of the month, and that there are far more names than just three.
I need to reduce the columns to five: the first column covers the days from 2020-06-01 to 2020-06-05, and each following column covers Saturday through Friday of the same week, or up to the last day of the month if it comes before Friday. So for June the weeks would be:
week 1: 2020-06-01 to 2020-06-05
week 2: 2020-06-06 to 2020-06-12
week 3: 2020-06-13 to 2020-06-19
week 4: 2020-06-20 to 2020-06-26
week 5: 2020-06-27 to 2020-06-30
I have no problem defining these weeks. The problem is grouping the columns based on them.
I couldn't come up with anything.
Does someone have any ideas about this?
I had to use this code to generate your dataframe:
import numpy as np
import pandas as pd

dates = pd.date_range(start='2020-06-01', end='2020-06-30')
df = pd.DataFrame({
    'Name1': np.random.randint(1, 10, size=len(dates)),
    'Name2': np.random.randint(1, 10, size=len(dates)),
    'Name3': np.random.randint(1, 10, size=len(dates)),
})
df = df.set_index(dates).transpose().reset_index().rename(columns={'index': 'Resource'})
Then, the solution starts from here.
# Set the first column as index
df = df.set_index(df['Resource'])
# Remove the unused column
df = df.drop(columns=['Resource'])
# Transpose the dataframe
df = df.transpose()
# Output:
Resource Name1 Name2 Name3
2020-06-01 00:00:00 3 2 7
2020-06-02 00:00:00 5 6 8
2020-06-03 00:00:00 2 3 6
...
# Bring "Resource" from index to column
df = df.reset_index()
df = df.rename(columns={'index': 'Resource'})
# Add a "week of year" column
# (dt.weekofyear is deprecated in newer pandas; dt.isocalendar().week is the replacement)
df['week_no'] = df['Resource'].dt.weekofyear
# You can simply group by the week no column
df.groupby('week_no').sum().reset_index()
# Output:
Resource week_no Name1 Name2 Name3
0 23 38 42 41
1 24 37 30 43
2 25 38 29 23
3 26 29 40 42
4 27 2 8 3
I don't know what you want to do next; if you want your original form, just transpose() it back.
EDIT: OP clarified that the week should start on Saturday and end on Friday.
# 0: Monday
# 1: Tuesday
# 2: Wednesday
# 3: Thursday
# 4: Friday
# 5: Saturday
# 6: Sunday
df['weekday'] = df['Resource'].dt.weekday.apply(lambda day: 0 if day <= 4 else 1)
df['customised_weekno'] = df['week_no'] + df['weekday']
Output:
Resource Resource Name1 Name2 Name3 week_no weekday customised_weekno
0 2020-06-01 4 7 7 23 0 23
1 2020-06-02 8 6 7 23 0 23
2 2020-06-03 5 9 5 23 0 23
3 2020-06-04 7 6 5 23 0 23
4 2020-06-05 6 3 7 23 0 23
5 2020-06-06 3 7 6 23 1 24
6 2020-06-07 5 4 4 23 1 24
7 2020-06-08 8 1 5 24 0 24
8 2020-06-09 2 7 9 24 0 24
9 2020-06-10 4 2 7 24 0 24
10 2020-06-11 6 4 4 24 0 24
11 2020-06-12 9 5 7 24 0 24
12 2020-06-13 2 4 6 24 1 25
13 2020-06-14 6 7 5 24 1 25
14 2020-06-15 8 7 7 25 0 25
15 2020-06-16 4 3 3 25 0 25
16 2020-06-17 6 4 5 25 0 25
17 2020-06-18 6 8 2 25 0 25
18 2020-06-19 3 1 2 25 0 25
So, you can use customised_weekno for grouping.
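As a final step, the grouping itself might look like this (a sketch assuming the frame shown above, with the Name columns and customised_weekno present):

# sum each name's daily values within every custom Saturday-to-Friday week
weekly = df.groupby('customised_weekno')[['Name1', 'Name2', 'Name3']].sum()
print(weekly)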

assign a number id for every 4 rows in pandas dataframe

I have a pandas dataframe like this:
df = pd.DataFrame({'week': ['2019-w01', '2019-w02', '2019-w03', '2019-w04',
                            '2019-w05', '2019-w06', '2019-w07', '2019-w08',
                            '2019-w9', '2019-w10', '2019-w11', '2019-w12'],
                   'value': [11, 22, 33, 34, 57, 88, 2, 9, 10, 1, 76, 14],
                   'period': [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3]})
week value
0 2019-w1 11
1 2019-w2 22
2 2019-w3 33
3 2019-w4 34
4 2019-w5 57
5 2019-w6 88
6 2019-w7 2
7 2019-w8 9
8 2019-w9 10
9 2019-w10 1
10 2019-w11 76
11 2019-w12 14
What I need is shown below: I would like to assign a period ID for every 4-week interval.
week value period
0 2019-w01 11 1
1 2019-w02 22 1
2 2019-w03 33 1
3 2019-w04 34 1
4 2019-w05 57 2
5 2019-w06 88 2
6 2019-w07 2 2
7 2019-w08 9 2
8 2019-w9 10 3
9 2019-w10 1 3
10 2019-w11 76 3
11 2019-w12 14 3
What is the best way to achieve that? Thanks.
Try with:
# extract the numeric week, integer-divide by 4, shift one row down so weeks 1-4
# share the first block, and add 1 so the period IDs start at 1
df['period'] = (pd.to_numeric(df['week'].str.split('-').str[-1].str.replace('w', ''))
                // 4).shift(fill_value=0).add(1)
print(df)
week value period
0 2019-w01 11 1
1 2019-w02 22 1
2 2019-w03 33 1
3 2019-w04 34 1
4 2019-w05 57 2
5 2019-w06 88 2
6 2019-w07 2 2
7 2019-w08 9 2
8 2019-w9 10 3
9 2019-w10 1 3
10 2019-w11 76 3
11 2019-w12 14 3
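If the rows are already sorted by week and always come in complete blocks of four, a simpler positional alternative (an assumption, not part of the answer above) is:

import numpy as np

# label every block of 4 consecutive rows: 1, 1, 1, 1, 2, 2, 2, 2, ...
df['period'] = np.arange(len(df)) // 4 + 1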

Python how to get values in one dataframe from the other dataframe

import pandas as pd
import numpy as np
df1 = pd.DataFrame(np.arange(25).reshape((5, 5)),
                   index=pd.date_range('2015/01/01', periods=5, freq='D'))
df1['trading_signal'] = [1, -1, 1, -1, 1]
df1
0 1 2 3 4 trading_signal
2015-01-01 0 1 2 3 4 1
2015-01-02 5 6 7 8 9 -1
2015-01-03 10 11 12 13 14 1
2015-01-04 15 16 17 18 19 -1
2015-01-05 20 21 22 23 24 1
and
df2
0 1 2 3 4
Date Time
2015-01-01 22:55:00 0 1 2 3 4
23:55:00 5 6 7 8 9
2015-01-02 00:55:00 10 11 12 13 14
01:55:00 15 16 17 18 19
02:55:00 20 21 22 23 24
How would I get the value of trading_signal from df1 and send it to df2?
I want an output like this:
0 1 2 3 4 trading_signal
Date Time
2015-01-01 22:55:00 0 1 2 3 4 1
23:55:00 5 6 7 8 9 1
2015-01-02 00:55:00 10 11 12 13 14 -1
01:55:00 15 16 17 18 19 -1
02:55:00 20 21 22 23 24 -1
You need to either merge or join. If you merge you need to reset_index, which is less memory efficient and slower than using join. Please read the docs on Joining a single index to a multi index:
New in version 0.14.0.
You can join a singly-indexed DataFrame with a level of a multi-indexed DataFrame. The level will match on the name of the index of the singly-indexed frame against a level name of the multi-indexed frame.
If you want to use join, you must name the index of df1 to be Date so that it matches the name of the first level of df2:
df1.index.names = ['Date']
df1[['trading_signal']].join(df2, how='right')
trading_signal 0 1 2 3 4
Date Time
2015-01-01 22:55:00 1 0 1 2 3 4
23:55:00 1 5 6 7 8 9
2015-01-02 00:55:00 -1 10 11 12 13 14
01:55:00 -1 15 16 17 18 19
02:55:00 -1 20 21 22 23 24
I'm joining right for a reason; if you don't understand what this means, please read Brief primer on merge methods (relational algebra).
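
If you do want the merge route instead, a self-contained sketch might look like this (df2 is reconstructed from the display above, and the level names Date and Time are assumptions based on that output):

import numpy as np
import pandas as pd

df1 = pd.DataFrame(np.arange(25).reshape((5, 5)),
                   index=pd.date_range('2015/01/01', periods=5, freq='D'))
df1['trading_signal'] = [1, -1, 1, -1, 1]

# rebuild df2 with a (Date, Time) MultiIndex
dates = pd.to_datetime(['2015-01-01', '2015-01-01', '2015-01-02',
                        '2015-01-02', '2015-01-02'])
times = ['22:55:00', '23:55:00', '00:55:00', '01:55:00', '02:55:00']
df2 = pd.DataFrame(np.arange(25).reshape((5, 5)),
                   index=pd.MultiIndex.from_arrays([dates, times],
                                                   names=['Date', 'Time']))

# merge requires resetting the MultiIndex, which is why join is preferred above
out = (df2.reset_index()
          .merge(df1[['trading_signal']], left_on='Date',
                 right_index=True, how='left')
          .set_index(['Date', 'Time']))
print(out)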
