python pandas: vectorized time series window function

I have a pandas dataframe in the following format:
'customer_id','transaction_dt','product','price','units'
1,2004-01-02,thing1,25,47
1,2004-01-17,thing2,150,8
2,2004-01-29,thing2,150,25
3,2017-07-15,thing3,55,17
3,2016-05-12,thing3,55,47
4,2012-02-23,thing2,150,22
4,2009-10-10,thing1,25,12
4,2014-04-04,thing2,150,2
5,2008-07-09,thing2,150,43
I have written the following to create two new fields indicating 30 day windows:
import numpy as np
import pandas as pd

start_date_period = pd.period_range('2004-01-01', '12-31-2017', freq='30D')
end_date_period = pd.period_range('2004-01-30', '12-31-2017', freq='30D')

def find_window_start_date(x):
    window_start_date_idx = np.argmax(x < start_date_period.end_time)
    return start_date_period[window_start_date_idx]

df['window_start_dt'] = df['transaction_dt'].apply(find_window_start_date)

def find_window_end_date(x):
    window_end_date_idx = np.argmin(x > end_date_period.start_time)
    return end_date_period[window_end_date_idx]

df['window_end_dt'] = df['transaction_dt'].apply(find_window_end_date)
Unfortunately, the row-wise apply is far too slow for my application. I would greatly appreciate any tips on vectorizing these functions if possible.
EDIT:
The resultant dataframe should have this layout:
'customer_id','transaction_dt','product','price','units','window_start_dt','window_end_dt'
It does not need to be resampled or windowed in the formal sense. It just needs 'window_start_dt' and 'window_end_dt' columns added. The current code works; it just needs to be vectorized if possible.
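For reference, one possible vectorized version (a hedged sketch, not the accepted solution: it assumes 'transaction_dt' has already been converted to datetime64 and that no transaction predates the 2004-01-01 anchor) replaces the row-wise apply with a single searchsorted call:

starts = pd.date_range('2004-01-01', '2017-12-31', freq='30D')
# index of the 30-day window containing each transaction
pos = starts.searchsorted(df['transaction_dt'].values, side='right') - 1
df['window_start_dt'] = starts[pos]
df['window_end_dt'] = df['window_start_dt'] + pd.Timedelta(days=29)  # inclusive 30-day window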

EDIT 2: a solution using the built-in pandas.cut:
tt=[[1,'2004-01-02',0.1,25,47],
[1,'2004-01-17',0.2,150,8],
[2,'2004-01-29',0.2,150,25],
[3,'2017-07-15',0.3,55,17],
[3,'2016-05-12',0.3,55,47],
[4,'2012-02-23',0.2,150,22],
[4,'2009-10-10',0.1,25,12],
[4,'2014-04-04',0.2,150,2],
[5,'2008-07-09',0.2,150,43]]
start_date_period = pd.date_range('2004-01-01', '12-01-2017', freq='MS')
end_date_period = pd.date_range('2004-01-30', '12-31-2017', freq='M')
df = pd.DataFrame(tt, columns=['customer_id', 'transaction_dt', 'product', 'price', 'units'])
df['transaction_dt'] = pd.Series([pd.to_datetime(sub_t[1], format='%Y-%m-%d') for sub_t in tt])
the_cut = pd.cut(df['transaction_dt'], bins=start_date_period, right=True, labels=False, include_lowest=True)
df['win_start_test'] = pd.Series([start_date_period[int(x)] if not np.isnan(x) else 0 for x in the_cut])
df['win_end_test'] = pd.Series([end_date_period[int(x)] if not np.isnan(x) else 0 for x in the_cut])
print(df.head())
win_start_test and win_end_test should be equal to their counterparts computed using your function.
The ValueError was coming from not casting x to int in the relevant line. I also added a NaN check, though it wasn't needed for this toy example.
Note the change to pd.date_range and the use of the month-start (MS) and month-end (M) frequency aliases, as well as the conversion of the date strings into datetimes.

Related

Python pandas rolling computations with custom step size

I have a pandas dataframe with daily data. On the last day of each month, I would like to compute a quantity that depends on the daily data of the previous n months (e.g., n=3).
My current solution is to use the pandas rolling function to compute this quantity for every day and then keep only the quantities for the last day of each month (discarding all the others). This, however, means I perform a lot of unnecessary computations.
Does anybody know how I can improve this?
Thanks a lot in advance!
EDIT:
In the following, I add two examples. In both cases, I compute rolling regressions of stock returns. The first (short) example shows the problem described above and is a sub-problem of my actual problem. The second (long) example shows my actual problem. Therefore, I would either need a solution to the first example that can be embedded in my algorithm for solving the second example, or a completely different solution to the second example. Note: the dataframe that I'm using is very large, which means that multiple copies of the entire dataframe are not feasible.
Example 1:
import pandas as pd
import numpy as np
import random
import statsmodels.api as sm

# Generate a time index
dates = pd.date_range("2018-01-01", periods=365, freq="D", name='date')
df = pd.DataFrame(index=dates, columns=['Y', 'X']).sort_index()

# Generate data
df['X'] = np.arange(365)
df['Y'] = 3.1 * df['X'] - 2.5
df = df.iloc[random.sample(range(365), 280)]  # some days are missing
df.iloc[random.sample(range(280), 20), 0] = np.nan  # some observations are missing
df = df.sort_index()

# Compute beta
def estimate_beta(ser):
    return sm.OLS(df.loc[ser.index, 'Y'],
                  sm.add_constant(df.loc[ser.index, 'X']),
                  missing='drop').fit().params[-1]

# use the last 60 days and require at least 10 observations
df['beta'] = df['Y'].rolling('60D', min_periods=10).apply(estimate_beta)

# Get last entries per month
df_monthly = df[['beta']].groupby([pd.Grouper(freq='M', level='date')]).agg('last')
df_monthly
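Not from the thread, but one way to avoid the discarded daily computations in Example 1 is to evaluate the regression only at each month's last available date by slicing the trailing window directly. A hedged sketch follows: the helper name beta_at_month_ends is made up, and the slice includes day t-60 itself, so results can differ from rolling('60D') by one day at the window edge.

import pandas as pd
import statsmodels.api as sm

def beta_at_month_ends(df, window='60D', min_obs=10):
    '''Hypothetical helper: trailing-window OLS beta of Y on X,
    evaluated only at the last available date of each month.'''
    results = {}
    for end in df.groupby(pd.Grouper(freq='M')).tail(1).index:
        win = df.loc[end - pd.Timedelta(window):end].dropna()
        if len(win) >= min_obs:
            model = sm.OLS(win['Y'].astype(float),
                           sm.add_constant(win['X'].astype(float))).fit()
            results[end] = model.params.iloc[-1]
    return pd.Series(results, name='beta')

beta_at_month_ends(df) should then roughly reproduce df_monthly above with about 12 regressions instead of 365; a per-stock version for Example 2 is sketched after that example.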
Example 2:
import pandas as pd
import numpy as np
from pandas import IndexSlice as idx
import random
import statsmodels.api as sm

# Generate a time index
dates = pd.date_range("2018-01-01", periods=365, freq="D", name='date')
arrays = [dates.tolist() + dates.tolist(), ["10000"] * 365 + ["10001"] * 365]
index = pd.MultiIndex.from_tuples(list(zip(*arrays)), names=["Date", "Stock"])
df = pd.DataFrame(index=index, columns=['Y', 'X']).sort_index()

# Generate data
df.loc[idx[:, "10000"], 'X'] = X = np.arange(365).astype(float)
df.loc[idx[:, "10000"], 'Y'] = 3 * X - 2
df.loc[idx[:, "10001"], 'X'] = X
df.loc[idx[:, "10001"], 'Y'] = -X + 1
df = df.iloc[random.sample(range(365 * 2), 360 * 2)]  # some days are missing
df.iloc[random.sample(range(280 * 2), 20 * 2), 0] = np.nan  # some observations are missing

# Estimate beta
def estimate_beta_grouped(df_in):
    def estimate_beta(ser):
        return sm.OLS(df.loc[ser.index, 'Y'].astype(float),
                      sm.add_constant(df.loc[ser.index, 'X'].astype(float)),
                      missing='drop').fit().params[-1]
    df = df_in.droplevel('Stock').reset_index().set_index(['Date']).sort_index()
    df['beta'] = df['Y'].rolling('60D', min_periods=10).apply(estimate_beta)
    return df[['beta']]

df_beta = df.groupby(level='Stock').apply(estimate_beta_grouped)

# Extract beta at last day per month
df_monthly = df.groupby([pd.Grouper(freq='M', level='Date'),
                         df.index.get_level_values(1)]).agg('last')  # get last observations
df_monthly = df_monthly.merge(df_beta, left_index=True, right_index=True, how='left')  # merge beta on df_monthly
df_monthly
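The same hypothetical beta_at_month_ends helper from the earlier sketch could then be applied per stock for Example 2 (again hedged, not from the thread; groupby reattaches the Stock level to the result):

df_beta_monthly = (df.groupby(level='Stock')
                     .apply(lambda g: beta_at_month_ends(g.droplevel('Stock'))))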

Resampling of Weather Data for variable time periods using a Pandas Dataframe

I've been trying to create a generic weather importer that can resample data to set intervals (e.g. from 20 min to hourly or the like; I've used 60 min in the code below).
For this I wanted to use the pandas resample function. After a bit of puzzling I came up with the code below (which is not the prettiest). I had one problem with averaging the wind direction over the set periods, which I've tried to solve with pandas' resampler.apply.
However, I've hit a problem with the function definition, which gives the following error:
TypeError: can't convert complex to float
I realise I'm trying to force a square peg into a round hole, but I have no idea how to overcome this. Any hints would be appreciated.
raw data
import pandas as pd
import os
from datetime import datetime
from pandas import ExcelWriter
from math import sin, cos, asin, radians, degrees

os.chdir('C:\\test')
file = 'bom.csv'
df = pd.read_csv(file, skiprows=0, low_memory=False)

# custom resampler (for .resample().apply()); the formula is the Yamartino
# approximation of the standard deviation of wind direction
def custom_resampler(thetalist):
    try:
        s = 0
        c = 0
        n = 0.0
        for theta in thetalist:
            s = s + sin(radians(theta))
            c = c + cos(radians(theta))
            n += 1
        s = s / n
        c = c / n
        eps = (1 - (s**2 + c**2))**0.5
        sigma = asin(eps) * (1 + (2.0 / 3.0**0.5 - 1) * eps**3)
    except ZeroDivisionError:
        sigma = 0
    return degrees(sigma)

# create time index and format dataframes
df['DateTime'] = pd.to_datetime(df['DateTime'], format='%d/%m/%Y %H:%M')
df.index = df['DateTime']
df = df.drop(['Year', 'Month', 'Date', 'Hour', 'Minutes', 'DateTime'], axis=1)
dfws = df.drop(['WDD'], axis=1)   # wind speed
dfwdd = df.drop(['WS'], axis=1)   # wind direction

# resample data to 60 min and merge
dfwdd = dfwdd.resample('60T').apply(custom_resampler)
dfws = dfws.resample('60T').mean()
dfoutput = pd.merge(dfws, dfwdd, right_index=True, left_index=True)

# write series to Excel
writer = pd.ExcelWriter('bom_out.xlsx', engine='openpyxl')
dfoutput.to_excel(writer, sheet_name='bom_out')
writer.save()
After a bit more research, I found that changing the definition worked best.
However, this gave a weird outcome for opposing angles (180 degrees apart), which I discovered by accident. I had to subtract a small value, which introduces a small error in degrees in the actual outcome.
I would still be interested to know:
what was wrong with the complex math (my best guess: floating-point rounding can push 1 - (s**2 + c**2) slightly below zero, and in Python a negative float raised to the power 0.5 yields a complex number, which pandas then cannot convert to float)
a better solution for opposing angles (180 degrees)
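If that guess is right, a one-line guard in the original custom_resampler (my suggestion, not something from the thread) would avoid the complex intermediate:

eps = max(0.0, 1 - (s**2 + c**2))**0.5  # clamp floating-point noise before the square root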
# changed the imports
from math import sin, cos, atan2, pi
import numpy as np

# changed the definition
def custom_resampler(angles, weights=0, setting='degrees'):
    '''computes the mean angle'''
    if weights == 0:
        weights = np.ones(len(angles))
    sumsin = 0
    sumcos = 0
    if setting == 'degrees':
        angles = np.array(angles) * pi / 180
    for i in range(len(angles)):
        sumsin += weights[i] / sum(weights) * sin(angles[i])
        sumcos += weights[i] / sum(weights) * cos(angles[i])
    average = atan2(sumsin, sumcos)
    if setting == 'degrees':
        average = average * 180 / pi
    if average == 180 or average == -180:  # added since averaging 290 and 110 degrees gave a weird outcome
        average -= 0.1
    elif average < 0:
        average += 360
    return round(average, 1)
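For what it's worth, the same circular mean can also be written without the explicit loop (a hedged sketch, not from the thread; the modulo makes the negative-angle branch unnecessary, though an exactly opposite pair of angles remains inherently ambiguous):

import numpy as np

def circular_mean_deg(angles):
    '''vectorized mean direction in degrees, mapped into [0, 360)'''
    rad = np.deg2rad(np.asarray(angles, dtype=float))
    mean = np.degrees(np.arctan2(np.sin(rad).mean(), np.cos(rad).mean()))
    return round(mean % 360, 1)

# usage, as above: dfwdd.resample('60T').apply(circular_mean_deg)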

Equivalent in DataFrame.rolling of ngroups from DataFrame.groupby

Is there an equivalent of ngroups from DataFrame.groupby in DataFrame.rolling?
If the window is numeric, I get that it is
nwindows = len(DataFrame) - min_periods + 1
but what happens when the window is some frequency? Is it lazily evaluated, or is there a variable that contains the number of windows that are going to be used, i.e. some kind of property of the Rolling object?
EDIT: Added example
import pandas as pd
import numpy as np

N = 10
dates = pd.date_range(start="2017-01-01", periods=N, freq="10s").values
vals = np.random.rand(N)
df = pd.DataFrame(data=list(zip(dates, vals)), columns=['date', 'rnd'])
roll = df.rolling(window='1min', min_periods=2, on='date')
roll.mean()
Rephrasing my question: Can I know beforehand how many times mean() is going to be called?
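One hedged way to check empirically (my sketch, not from the thread): count the calls. With a frequency-based window, rolling forms one window per row; as far as I know, windows with fewer than min_periods observations produce NaN without the function being invoked, so for the data above the count should come out as N - 1 (the first row has only one observation):

calls = 0
def counting_mean(x):
    global calls
    calls += 1
    return x.mean()

df.rolling(window='1min', min_periods=2, on='date')['rnd'].apply(counting_mean, raw=False)
print(calls)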

Parallelize pandas apply

New to pandas, I already want to parallelize a row-wise apply operation. So far I found Parallelize apply after pandas groupby; however, that only seems to work for grouped data frames.
My use case is different: I have a list of holidays, and for my current row/date I want to find the number of days before and after this day to the nearest holiday.
This is the function I call via apply:
def get_nearest_holiday(x, pivot):
    nearestHoliday = min(x, key=lambda d: abs(d - pivot))
    difference = abs(nearestHoliday - pivot)
    return difference / np.timedelta64(1, 'D')
How can I speed it up?
EDIT:
I experimented a bit with Python's pools, but it was neither nice code, nor did I get my computed results.
For the parallel approach, this is the answer based on Parallelize apply after pandas groupby:
from joblib import Parallel, delayed
import multiprocessing

def get_nearest_dateParallel(df):
    df['daysBeforeHoliday'] = df.myDates.apply(lambda x: get_nearest_date(holidays.day[holidays.day < x], x))
    df['daysAfterHoliday'] = df.myDates.apply(lambda x: get_nearest_date(holidays.day[holidays.day > x], x))
    return df

def applyParallel(dfGrouped, func):
    retLst = Parallel(n_jobs=multiprocessing.cpu_count())(delayed(func)(group) for name, group in dfGrouped)
    return pd.concat(retLst)

print('parallel version: ')
# 4 min 30 seconds
%time result = applyParallel(datesFrame.groupby(datesFrame.index), get_nearest_dateParallel)
but I prefer @NinjaPuppy's approach because it does not require O(n * number_of_holidays) work.
I think going down the route of trying stuff in parallel is probably overcomplicating this. I haven't tried this approach on a large sample, so your mileage may vary, but it should give you an idea...
Let's just start with some dates...
import pandas as pd
dates = pd.to_datetime(['2016-01-03', '2016-09-09', '2016-12-12', '2016-03-03'])
We'll use some holiday data from pandas.tseries.holiday - note that in effect we want a DatetimeIndex...
from pandas.tseries.holiday import USFederalHolidayCalendar
holiday_calendar = USFederalHolidayCalendar()
holidays = holiday_calendar.holidays('2016-01-01')
This gives us:
DatetimeIndex(['2016-01-01', '2016-01-18', '2016-02-15', '2016-05-30',
               '2016-07-04', '2016-09-05', '2016-10-10', '2016-11-11',
               '2016-11-24', '2016-12-26',
               ...
               '2030-01-01', '2030-01-21', '2030-02-18', '2030-05-27',
               '2030-07-04', '2030-09-02', '2030-10-14', '2030-11-11',
               '2030-11-28', '2030-12-25'],
              dtype='datetime64[ns]', length=150, freq=None)
Now we find the indices of the next holiday on or after each of the original dates using searchsorted:
indices = holidays.searchsorted(dates)
# array([1, 6, 9, 3])
next_nearest = holidays[indices]
# DatetimeIndex(['2016-01-18', '2016-10-10', '2016-12-26', '2016-05-30'], dtype='datetime64[ns]', freq=None)
Then take the difference between the two:
next_nearest_diff = pd.to_timedelta(next_nearest.values - dates.values).days
# array([15, 31, 14, 88])
You'll need to be careful about the indices so you don't wrap around, and for the previous holiday, do the calculation with indices - 1, but it should act as (I hope) a relatively good base.
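For the previous-holiday side just mentioned, a hedged sketch (my addition; the clip keeps dates earlier than the first holiday from wrapping to the end of the index):

prev_indices = (indices - 1).clip(min=0)
prev_nearest = holidays[prev_indices]
prev_diff = pd.to_timedelta(dates.values - prev_nearest.values).days
# should give array([ 2,  4, 18, 17]) for the example dates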
I think the pandarallel package makes this much easier now. I have not looked into it much, but it should do the trick.
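A hedged sketch of what that would look like (untested here; the pattern of a one-time initialize call followed by parallel_apply in place of apply is the one pandarallel documents):

from pandarallel import pandarallel
pandarallel.initialize()

df['daysBeforeHoliday'] = df.myDates.parallel_apply(
    lambda x: get_nearest_date(holidays.day[holidays.day < x], x))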
You can also easily parallelize your calculations using the parallel-pandas library. Only two additional lines of code!
# pip install parallel-pandas
import pandas as pd
import numpy as np
from parallel_pandas import ParallelPandas

# initialize parallel-pandas
ParallelPandas.initialize(n_cpu=8, disable_pr_bar=True)

def foo(x):
    """Your awesome function"""
    return np.sqrt(np.sum(x ** 2))

df = pd.DataFrame(np.random.random((1000, 1000)))

%%time
res = df.apply(foo, raw=True)
Wall time: 5.3 s

# p_apply is the parallel analogue of the apply method
%%time
res = df.p_apply(foo, raw=True, executor='processes')
Wall time: 1.2 s

Pass a function with parameters specified for resample() method on a pandas dataframe

I want to pass a function to resample() on a pandas dataframe with certain parameters specified when it is passed (as opposed to defining several separate functions).
This is the function:
import itertools
import numpy as np

def spell(X, kind='wet', how='mean', threshold=0.5):
    if kind == 'wet':
        condition = X > threshold
    else:
        condition = X <= threshold
    length = [sum(1 if x == True else np.nan for x in group)
              for key, group in itertools.groupby(condition)]
    if not length:
        res = 0
    elif how == 'mean':
        res = np.mean(length)
    else:
        res = np.max(length)
    return res
Here is a dataframe:
idx = pd.date_range(start='1960-01-01', periods=100, freq='D')
values = np.random.random(100)
df = pd.DataFrame(values, index=idx)
And here's roughly what I want to do with it:
df.resample('M', how=spell(kind='dry', how='max', threshold=0.7))
But I get the error TypeError: spell() takes at least 1 argument (3 given), because spell is called immediately at that point rather than being passed as a callable. I want to be able to pass this function with these parameters specified, except for the input array. Is there a way to do this?
EDIT:
X is the input array that is passed to the function when calling the resample method on a dataframe object like so df.resample('M', how=my_func) for a monthly resampling interval.
If I try df.resample('M', how=spell) I get:
0
1960-01-31 1.875000
1960-02-29 1.500000
1960-03-31 1.888889
1960-04-30 3.000000
which is exactly what I want for the default parameters but I want to be able to specify the input parameters to the function before passing it. This might include storing the definition in another variable but I'm not sure how to do this with the default parameters changed.
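For the record, the standard way to freeze parameters like this (a hedged note, not from the thread) is functools.partial, which returns a new callable that still expects only the input array:

from functools import partial

dry_max = partial(spell, kind='dry', how='max', threshold=0.7)
df.resample('M', how=dry_max)    # the old resample API used in the question
# df.resample('M').agg(dry_max)  # newer pandas, where the how= keyword was removed

An equivalent lambda, how=lambda X: spell(X, kind='dry', how='max', threshold=0.7), behaves the same way.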
I think this may be what you're looking for, though it's a little hard to tell.. Let me know if this helps. First, the example dataframe:
idx = pd.date_range(start='1960-01-01', periods=100, freq='D')
values = np.random.random(100)
df = pd.DataFrame(values, index=idx)
EDIT: I originally had a greater-than where a less-than-or-equal-to belonged...
Next, the function:
def spell(df, column='', kind='wet', rule='M', how='mean', threshold=0.5):
    if kind == 'wet':
        df = df[df[column] > threshold]
    else:
        df = df[df[column] <= threshold]
    df = df.resample(rule=rule, how=how)
    return df
So, you would call it by:
spell(df, 0)
To get:
0
1960-01-31 0.721519
1960-02-29 0.754054
1960-03-31 0.746341
1960-04-30 0.654872
You can change around the parameters as well:
spell(df, 0, kind='something else', rule='W', how='max', threshold=0.7)
0
1960-01-03 0.570638
1960-01-10 0.529357
1960-01-17 0.565959
1960-01-24 0.682973
1960-01-31 0.676349
1960-02-07 0.379397
1960-02-14 0.680303
1960-02-21 0.654014
1960-02-28 0.546587
1960-03-06 0.699459
1960-03-13 0.626460
1960-03-20 0.611464
1960-03-27 0.685950
1960-04-03 0.688385
1960-04-10 0.697602
