what does first() do in pandas resample? - python

as the title implies:
IN[1] :
dates = pd.date_range('10/10/2018', periods=11, freq='D')
close_prices = np.arange(len(dates))
close = pd.Series(close_prices, dates)
close
OUT[1]:
2018-10-10 0
2018-10-11 1
2018-10-12 2
2018-10-13 3
2018-10-14 4
2018-10-15 5
2018-10-16 6
2018-10-17 7
2018-10-18 8
2018-10-19 9
2018-10-20 10
IN[2] : close.resample('W').first()
OUT[2] :
2018-10-14 0
2018-10-21 5
Freq: W-SUN, dtype: int64
First, what do resample and first do?
And why do we have the date 2018-10-21, which did not exist in the series? And based on what do we get the 0 and 5?
Thanks

You have resampled your data by week. '2018-10-14' and '2018-10-21' are the last dates of each resampled week (each a Sunday). So by resampling, you have aggregated your data into weekly samples labelled by the Sundays 10-14 and 10-21. 0 and 5 are the first values recorded within each respective week: 0 on 10-10 (the earliest date in the data, which falls in the week ending Sunday 10-14) and 5 on 10-15 (the Monday beginning the week ending Sunday 10-21).

resample('W') groups the dates into weekly bins (labelled by default with the Sunday that ends each bin).
first() then returns the first value that falls inside each bin.
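To make the pairing concrete, here is a small sketch using pd.Grouper (the time-based grouper behind resample) that reproduces the same result and exposes the weekly bins:

import numpy as np
import pandas as pd

dates = pd.date_range('10/10/2018', periods=11, freq='D')
close = pd.Series(np.arange(len(dates)), dates)

# group into W-SUN bins and take the first value inside each bin;
# this matches close.resample('W').first()
print(close.groupby(pd.Grouper(freq='W')).first())
# 2018-10-14    0
# 2018-10-21    5
# Freq: W-SUN, dtype: int64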

Related

Calculating values from time series in pandas multi-indexed pivot tables

I've got a dataframe in pandas that stores the Id of a person, the quality of interaction, and the date of the interaction. A person can have multiple interactions across multiple dates, so to help visualise and plot this I converted it into a pivot table grouping first by Id then by date to analyse the pattern over time.
e.g.
import pandas as pd
df = pd.DataFrame({'Id': ['A4G8','A4G8','A4G8','P9N3','P9N3','P9N3','P9N3','C7R5','L4U7'],
                   'Date': ['2016-1-1','2016-1-15','2016-1-30','2017-2-12','2017-2-28','2017-3-10','2019-1-1','2018-6-1','2019-8-6'],
                   'Quality': [2,3,6,1,5,10,10,2,2]})
pt = df.pivot_table(values='Quality', index=['Id','Date'])
print(pt)
Leads to this:
                Quality
Id   Date
A4G8 2016-1-1         2
     2016-1-15        4
     2016-1-30        6
P9N3 2017-2-12        1
     2017-2-28        5
     2017-3-10       10
     2019-1-1        10
C7R5 2018-6-1         2
L4U7 2019-8-6         2
However, I'd also like to...
Measure the time from the first interaction for each interaction per Id
Measure the time from the previous interaction with the same Id
So I'd get a table similar to the one below
Id   Date       Quality Time From First Time To Prev
A4G8 2016-1-1         2          0 days      NA days
     2016-1-15        4         14 days      14 days
     2016-1-30        6         29 days      14 days
P9N3 2017-2-12        1          0 days      NA days
     2017-2-28        5         15 days      15 days
     2017-3-10       10         24 days       9 days
The Id column is a string type; I've converted the Date column to datetime and the Quality column to an integer.
The dataframe is rather large (>10,000 unique Ids), so for performance reasons I'm trying to avoid using for loops. I'm guessing the solution somehow uses pd.eval, but I'm stuck as to how to apply it correctly.
Apologies, I'm a Python, pandas, & Stack Overflow noob, and I haven't found the answer anywhere yet, so even some pointers on where to look would be great :-).
Many thanks in advance
Convert Date to datetimes, then subtract the per-group minimum datetime (obtained with GroupBy.transform) from the Date column; for the second new column use DataFrameGroupBy.diff:
df['Date'] = pd.to_datetime(df['Date'])
df['Time From First'] = df['Date'].sub(df.groupby('Id')['Date'].transform('min'))
df['Time To Prev'] = df.groupby('Id')['Date'].diff()
print (df)
     Id       Date  Quality Time From First Time To Prev
0  A4G8 2016-01-01        2          0 days          NaT
1  A4G8 2016-01-15        3         14 days      14 days
2  A4G8 2016-01-30        6         29 days      15 days
3  P9N3 2017-02-12        1          0 days          NaT
4  P9N3 2017-02-28        5         16 days      16 days
5  P9N3 2017-03-10       10         26 days      10 days
6  P9N3 2019-01-01       10        688 days     662 days
7  C7R5 2018-06-01        2          0 days          NaT
8  L4U7 2019-08-06        2          0 days          NaT
df["Date"] = pd.to_datetime(df.Date)
df = df.merge(
    df.groupby(["Id"]).Date.first(),
    on="Id",
    how="left",
    suffixes=["", "_first"]
)
df["Time From First"] = df.Date - df.Date_first
df['Time To Prev'] = df.groupby('Id').Date.diff()
df.set_index(["Id", "Date"], inplace=True)
df
output:

                 Quality Date_first Time From First Time To Prev
Id   Date
A4G8 2016-01-01        2 2016-01-01          0 days          NaT
     2016-01-15        3 2016-01-01         14 days      14 days
     2016-01-30        6 2016-01-01         29 days      15 days
P9N3 2017-02-12        1 2017-02-12          0 days          NaT
     2017-02-28        5 2017-02-12         16 days      16 days
     2017-03-10       10 2017-02-12         26 days      10 days
     2019-01-01       10 2017-02-12        688 days     662 days
C7R5 2018-06-01        2 2018-06-01          0 days          NaT
L4U7 2019-08-06        2 2019-08-06          0 days          NaT

Reverse rows in time series dataframe

I have a time series that looks like this:
value date
63.85 2017-01-15
63.95 2017-01-22
63.88 2017-01-29
64.02 2017-02-05
63.84 2017-02-12
62.13 2017-03-05
65.36 2017-03-25
66.45 2017-04-25
And I would like to reverse the order of the rows so they look like this:
value date
66.45 2000-01-01
65.36 2000-02-01
62.13 2000-02-20
63.84 2000-03-12
64.02 2000-03-19
63.88 2000-03-26
63.95 2000-04-02
63.85 2000-04-09
As you can see, the "value" column simply flips the row values, but for the date column I would like to keep the same "difference in days" between dates. It doesn't really matter what the start date is, as long as the differences in days are flipped correctly too. In the second dataframe of the example, the start date is 2000-01-01 and the second value is 2000-02-01, which is 31 days later; this is the same 31-day gap as between the last date (2017-04-25) and the penultimate date (2017-03-25) of the first dataframe. Likewise for the second (2000-02-01) and third (2000-02-20) values of the second dataframe: the difference is 20 days, the same as between the penultimate date (2017-03-25) and the antepenultimate date (2017-03-05) of the first dataframe. And so on.
I believe the first step would be to calculate these "day differences", but I would like to know how to do it efficiently. Thank you :)
NumPy has support for this via its datetime and timedelta data types.
First you reverse both columns in your time series as follows:
import pandas as pd
import numpy as np
df2 = df.iloc[::-1]  # reverse the row order of the original time series
df2
where df is your original time series data and df2 (shown below) is the reversed time series.
value date
7 66.45 2017-04-25
6 65.36 2017-03-25
5 62.13 2017-03-05
4 63.84 2017-02-12
3 64.02 2017-02-05
2 63.88 2017-01-29
1 63.95 2017-01-22
0 63.85 2017-01-15
Next you find the day differences and store them as timedelta objects:
dates_np = np.array(df2.date).astype(np.datetime64)   # convert dates to np.datetime64 objects
timeDeltas = np.insert(abs(np.diff(dates_np)), 0, 0)  # np.insert pads the first row, since np.diff returns one fewer element
d2 = {'value': df2.value, 'day_diff': timeDeltas}     # data for the new dataframe (df3)
df3 = pd.DataFrame(data=d2)
df3
where df3 (the day differences table) looks like this:
value day_diff
7 66.45 0 days
6 65.36 31 days
5 62.13 20 days
4 63.84 21 days
3 64.02 7 days
2 63.88 7 days
1 63.95 7 days
0 63.85 7 days
Lastly, to get back to dates accumulating from a start date, you do the following:
startDate = np.datetime64('2000-01-01')                   # you can change this if you like
df4 = df2.copy()                                          # copy the column data from df2
df4.date = np.array(np.cumsum(df3.day_diff)) + startDate  # np.cumsum accumulates the day_diff sums
df4
where df4 (the start date accumulation) looks like this:
value date
7 66.45 2000-01-01
6 65.36 2000-02-01
5 62.13 2000-02-21
4 63.84 2000-03-13
3 64.02 2000-03-20
2 63.88 2000-03-27
1 63.95 2000-04-03
0 63.85 2000-04-10
I noticed there is a 1-day discrepancy in my final table; however, this is most likely due to the inclusivity/exclusivity conventions of the timedeltas.
Here's how I did it:
Creating the DataFrame:
value = [63.85, 63.95, 63.88, 64.02, 63.84, 62.13, 65.36, 66.45]
date = ["2017-01-15", "2017-01-22", "2017-01-29", "2017-02-05", "2017-02-12", "2017-03-05", "2017-03-25", "2017-04-25",]
df = pd.DataFrame({"value": value, "date": date})
Creating a second DataFrame with the values reversed and converting the date column to datetime
new_df = df.astype({'date': 'datetime64[ns]'})
new_df.sort_index(ascending=False, inplace=True, ignore_index=True)
new_df
value date
0 66.45 2017-04-25
1 65.36 2017-03-25
2 62.13 2017-03-05
3 63.84 2017-02-12
4 64.02 2017-02-05
5 63.88 2017-01-29
6 63.95 2017-01-22
7 63.85 2017-01-15
I then used pandas.Series.diff to calculate the time delta between each row and converted those values to absolute values.
time_delta_series = new_df['date'].diff().abs()
time_delta_series
0 NaT
1 31 days
2 20 days
3 21 days
4 7 days
5 7 days
6 7 days
7 7 days
Name: date, dtype: timedelta64[ns]
Then you need to convert those values to a cumulative time delta.
But to use the cumsum() method you need to first remove the missing values (NaT).
time_delta_series = time_delta_series.fillna(pd.Timedelta(seconds=0)).cumsum()
time_delta_series
0 0 days
1 31 days
2 51 days
3 72 days
4 79 days
5 86 days
6 93 days
7 100 days
Name: date, dtype: timedelta64[ns]
Then you can create your starting date and create the date column for the second DataFrame we created before:
from datetime import date
start = date(2000, 1, 1)
new_df['date'] = start
new_df['date'] = new_df['date'] + time_delta_series
new_df
value date
0 66.45 2000-01-01
1 65.36 2000-02-01
2 62.13 2000-02-21
3 63.84 2000-03-13
4 64.02 2000-03-20
5 63.88 2000-03-27
6 63.95 2000-04-03
7 63.85 2000-04-10
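For reference, the whole operation can also be condensed into a few lines. A minimal sketch, assuming df holds the original value/date columns as above:

import pandas as pd

# flip the rows, then rebuild the dates from the reversed day gaps
flipped = df.iloc[::-1].reset_index(drop=True)
gaps = pd.to_datetime(flipped['date']).diff().abs().fillna(pd.Timedelta(0))
flipped['date'] = pd.Timestamp('2000-01-01') + gaps.cumsum()

This is the same reverse / diff / cumsum idea as above, just chained together.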

Pandas Dataframe resample week, starting first day of the year

I have a dataframe containing hourly data. I want to get the max for each week of the year, so I used resample to group the data by week:
weeks = data.resample("W").max()
The problem is that the week max is calculated starting from the first Monday of the year, while I want it calculated starting from the first day of the year.
I obtain the following result, where you can notice that there are 53 weeks, and the last week falls in 2017 even though 2017 doesn't exist in the data:
                dots
Date
2016-01-03  0.647786
2016-01-10  0.917071
2016-01-17  0.667857
2016-01-24  0.669286
2016-01-31  0.645357
...
2016-12-04  0.646786
2016-12-11  0.857714
2016-12-18  0.670000
2016-12-25  0.674571
2017-01-01  0.654571
Is there a way to calculate weeks for a pandas dataframe starting from the first day of the year?
Find the starting day of the year. For example, let's say it's a Friday; then you can pass an anchoring suffix to resample in order to calculate weeks starting from the first day of the year:
weeks = data.resample("W-FRI").max()
One quick remedy, given your data lies within one year, is to group it by day first, then take groups of 7 days:
new_df = df.resample("D", on='Date').dots.max().reset_index()
new_df = new_df.groupby(new_df.index // 7).agg({'Date': 'min', 'dots': 'max'})
new_df
Output:
Date dots
0 2016-01-01 0.996387
1 2016-01-08 0.999775
2 2016-01-15 0.997612
3 2016-01-22 0.979376
4 2016-01-29 0.998240
5 2016-02-05 0.995030
6 2016-02-12 0.987500
and tail:
Date dots
48 2016-12-02 0.999910
49 2016-12-09 0.992910
50 2016-12-16 0.996877
51 2016-12-23 0.992986
52 2016-12-30 0.960348

Difference between datetimes in terms of number of business days using pandas

Is there a (more) convenient/efficient method to calculate the number of business days between two dates using pandas?
I could do
len(pd.bdate_range(start='2018-12-03',end='2018-12-14'))-1 # minus one only if end date is a business day
but for longer distances between the start and end day this seems rather inefficient.
There are a couple of suggestions for how to use the BDay offset object, but they all seem to refer to the creation of date ranges or something similar.
I am thinking more in terms of a Timedelta object that is represented in business-days.
Say I have two series, s1 and s2, containing datetimes. If pandas had something along the lines of
s1.dt.subtract(s2,freq='B')
# giving a new series containing timedeltas where the number of days calculated
# use business days only
would be nice.
(numpy has a busday_count() method. But I would not want to convert my pandas Timestamps to numpy, as this can get messy.)
I think np.busday_count is a good idea here; also, converting to numpy arrays is not necessary:
s1 = pd.Series(pd.date_range(start='05/01/2019',end='05/10/2019'))
s2 = pd.Series(pd.date_range(start='05/04/2019',periods=10, freq='5d'))
s = pd.Series([np.busday_count(a, b) for a, b in zip(s1, s2)])
print (s)
0 3
1 5
2 7
3 10
4 14
5 17
6 19
7 23
8 25
9 27
dtype: int64
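As an aside: np.busday_count itself accepts array-likes, so under the assumption that both series already hold datetimes, the Python-level zip loop can be avoided entirely:

# busday_count broadcasts over arrays, but wants datetime64[D] values
# rather than pandas Timestamps
s = pd.Series(np.busday_count(s1.values.astype('datetime64[D]'),
                              s2.values.astype('datetime64[D]')))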
from xone import calendar

def business_dates(start, end):
    us_cal = calendar.USTradingCalendar()
    kw = dict(start=start, end=end)
    return pd.bdate_range(**kw).drop(us_cal.holidays(**kw))
In [1]: business_dates(start='2018-12-20', end='2018-12-31')
Out[1]: DatetimeIndex(['2018-12-20', '2018-12-21', '2018-12-24', '2018-12-26',
'2018-12-27', '2018-12-28', '2018-12-31'],
dtype='datetime64[ns]', freq=None)
Source: Get business days between start and end date using pandas
# create a dataframe with the dates
df = pd.DataFrame({'dates': pd.date_range(start='05/01/2019', end='05/31/2019')})
# keep only the dates that fall on business days
df[df['dates'].isin(pd.bdate_range(df['dates'].get(0), df['dates'].get(len(df) - 1)))]
Out[]:
0 2019-05-01
1 2019-05-02
2 2019-05-03
5 2019-05-06
6 2019-05-07
7 2019-05-08
8 2019-05-09
9 2019-05-10
12 2019-05-13
13 2019-05-14
14 2019-05-15
15 2019-05-16
16 2019-05-17
19 2019-05-20
20 2019-05-21
21 2019-05-22
22 2019-05-23
23 2019-05-24
26 2019-05-27
27 2019-05-28
28 2019-05-29
29 2019-05-30
30 2019-05-31

Pandas DataFrame.resample monthly offset from particular day of month

I have a DataFrame df with sporadic daily business day rows (i.e., there is not always a row for every business day.)
For each row in df I want to create a historical resampled mean dfm going back one month at a time. For example, if I have a row for 2018-02-22 then I want rolling means for rows in the following date ranges:
2018-01-23 : 2018-02-22
2017-12-23 : 2018-01-22
2017-11-23 : 2017-12-22
etc.
But I can't see a way to keep this pegged to the particular day of the month using conventional offsets. For example, if I do:
dfm = df.resample('30D').mean()
Then we see two problems:
It references the beginning of the DataFrame. In fact, I can't find a way to force .resample() to peg itself to the end of the DataFrame – even if I have it operate on df_reversed = df.loc[:'2018-02-22'].iloc[::-1]. Is there a way to "peg" the resampling to something other than the earliest date in the DataFrame? (And ideally pegged to each particular row as I run some lambda on the associated historical resampling from each row's date?)
It will drift over time, because not every month is 30 days long. So as I go back in time I will find that the interval 12 "months" prior ends 2017-02-27, not 2017-02-22 like I want.
Knowing that I want to resample by non-overlapping "months," the second problem can be well-defined for month days 29-31: For example, if I ask to resample for '2018-03-31' then the date ranges would end at the end of each preceding month:
2018-03-01 : 2018-03-31
2018-02-01 : 2018-02-28
2018-01-01 : 2018-01-31
etc.
Though again, I don't know: is there a good or easy way to do this in pandas?
tl;dr:
Given something like the following:
someperiods = 20 # this can be a number of days covering many years
somefrequency = '8D' # this can vary from 1D to maybe 10D
rng = pd.date_range('2017-01-03', periods=someperiods, freq=somefrequency)
df = pd.DataFrame({'x': rng.day}, index=rng) # x in practice is exogenous data
from pandas.tseries.offsets import *
df['MonthPrior'] = df.index.to_pydatetime() + DateOffset(months=-1)
Now:
For each row in df: calculate df['PreviousMonthMean'] = rolling average of all df.x in range [df.MonthPrior, df.index). In this example the resulting DataFrame would be:
Index x MonthPrior PreviousMonthMean
2017-01-03 3 2016-12-03 NaN
2017-01-11 11 2016-12-11 3
2017-01-19 19 2016-12-19 7
2017-01-27 27 2016-12-27 11
2017-02-04 4 2017-01-04 19
2017-02-12 12 2017-01-12 16.66666667
2017-02-20 20 2017-01-20 14.33333333
2017-02-28 28 2017-01-28 12
2017-03-08 8 2017-02-08 20
2017-03-16 16 2017-02-16 18.66666667
2017-03-24 24 2017-02-24 17.33333333
2017-04-01 1 2017-03-01 16
2017-04-09 9 2017-03-09 13.66666667
2017-04-17 17 2017-03-17 11.33333333
2017-04-25 25 2017-03-25 9
2017-05-03 3 2017-04-03 17
2017-05-11 11 2017-04-11 15
2017-05-19 19 2017-04-19 13
2017-05-27 27 2017-04-27 11
2017-06-04 4 2017-05-04 19
If we can get that far, then I need to find an efficient way to iterate that so that for each row in df I can aggregate consecutive but non-overlapping df['PreviousMonthMean'] values going back one calendar month at a time from the given DateTimeIndex....
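A minimal sketch of the PreviousMonthMean step described above, assuming df carries the DatetimeIndex and MonthPrior column from the snippet; note it is a Python-level loop, so it trades speed for clarity:

# mean of x over the half-open window [MonthPrior, index) for each row
df['PreviousMonthMean'] = [
    df.loc[(df.index >= prior) & (df.index < ts), 'x'].mean()
    for ts, prior in zip(df.index, df['MonthPrior'])
]

On the sample data this reproduces the PreviousMonthMean column in the table above (NaN for the first row, 3 for the second, and so on).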
