Pandas date_range() for middle and end of month - python

I can specify a date range of month ends using:
import datetime
import pandas as pd
monthend_range = pd.date_range(datetime.date(2017, 12, 10), datetime.date(2018, 2, 2), freq='BM')
Is there a straightforward way to include the middle of the month in the range above, to form a middle-and-end-of-month index? Say the logic we want is: take the successive month ends from the code above and find the day right in the middle between them; if that day is not a business day, try the following day, and keep going until we hit a business day.
The expected output is
['2017-12-29', '2018-01-16', '2018-01-31']
This might seem a bit inconsistent, as 2017-12-15 is a mid-month date that falls within the date range. But the procedure is: get the month ends first, then interpolate between them. Unless, of course, there is a better approach to this question.

The idea is to create a business-day range for each month end (omitting the first), select the value in the middle, and finally use Index.union to join everything together:
a = []
for x in monthend_range[1:]:
    r = pd.date_range(x.to_period('m').to_timestamp(), x, freq='B')
    a.append(r[len(r)//2])

print(a)
[Timestamp('2018-01-16 00:00:00', freq='B')]
out = monthend_range.union(a)
print(out)
DatetimeIndex(['2017-12-29', '2018-01-16', '2018-01-31'], dtype='datetime64[ns]', freq=None)

Related

Replacing dates that fall on weekends to next business day in dataframe

I have a dataframe with a bunch of dates in it. I would like to check whether each entry is a weekday or a weekend; if it is a weekend, I would like to move the date forward to the next weekday. What is the most pythonic way of doing this? I was thinking of using a list comprehension, something like:
days = pd.date_range(start='1/1/2020', end='1/08/2020')
dates = pd.DataFrame(days,columns=['dates'])
dates['dates'] = [day+pd.DateOffset(days=1) if day.weekday() >4 else day for day in dates['dates']]
How could I adjust the code to cover both day+pd.DateOffset(days=1) (for Sunday) and day+pd.DateOffset(days=2) (for Saturday) and get an updated column with the shifted dates? Running the code twice with +1 should work, but is certainly not pretty.
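No answer is included in this scrape, but here is a minimal vectorized sketch (my own, not from the thread) that mirrors the +2/+1 logic for Saturday/Sunday in one pass; it assumes the dates frame built in the snippet above:
import numpy as np
import pandas as pd

days = pd.date_range(start='1/1/2020', end='1/08/2020')
dates = pd.DataFrame(days, columns=['dates'])

# Saturday (weekday 5) moves forward 2 days, Sunday (weekday 6) moves 1 day,
# and weekdays stay put -- no Python-level loop needed.
wd = dates['dates'].dt.weekday
shift = np.where(wd == 5, 2, np.where(wd == 6, 1, 0))
dates['dates'] = dates['dates'] + pd.to_timedelta(shift, unit='D')
Adding pd.offsets.BDay(0) should achieve the same effect, since an offset with n=0 rolls non-business days forward to the next business day while leaving business days untouched.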

Find if there is any holidays between two dates in a large dataset?

I am working on a dataset that has some 26 million rows and 13 columns, including two datetime columns, arr_date and dep_date. I am trying to create a new boolean column that checks whether there are any US holidays between these dates.
I am applying a function over the entire dataframe, but the execution time is too slow. The code has been running for more than 48 hours now on Google Cloud Platform (24GB RAM, 4 cores). Is there a faster way to do this?
The dataset looks like this:
[sample data shown as an image in the original post]
The code I am using is:
import pandas as pd
import numpy as np
from pandas.tseries.holiday import USFederalHolidayCalendar as calendar
df = pd.read_pickle('dataGT70.pkl')
cal = calendar()

def mark_holiday(df):
    df['holiday_in_range'] = df.apply(
        lambda x: True if (len(cal.holidays(start=x['dep_date'], end=x['arr_date'])) > 0
                           and x['num_days'] < 20) else False,
        axis=1)
    return df

df = mark_holiday(df)
This took me about two minutes to run on a sample dataframe of 30m rows with two columns, start_date and end_date.
The idea is to get a sorted list of all holidays occurring on or after the minimum start date, and then to use bisect_left from the bisect module to determine the next holiday occurring on or after each start date. This holiday is then compared to the end date. If it is less than or equal to the end date, then there must be at least one holiday in the date range between the start and end dates (both inclusive).
from bisect import bisect_left
import numpy as np
import pandas as pd
from pandas.tseries.holiday import USFederalHolidayCalendar as calendar

# Create a sample dataframe of 10k rows with an interval of 1-19 days.
np.random.seed(0)
n = 10000  # Sample size, e.g. 10k rows.
years = np.random.randint(2010, 2019, n)
months = np.random.randint(1, 13, n)
days = np.random.randint(1, 29, n)
df = pd.DataFrame({'start_date': [pd.Timestamp(*x) for x in zip(years, months, days)],
                   'interval': np.random.randint(1, 20, n)})
df['end_date'] = df['start_date'] + pd.to_timedelta(df['interval'], unit='d')
df = df.drop('interval', axis=1)

# Get a sorted list of holidays since the first start date.
hols = calendar().holidays(df['start_date'].min())

# Determine if there is a holiday between the start and end dates (both inclusive).
df['holiday_in_range'] = df['end_date'].ge(
    df['start_date'].apply(lambda x: bisect_left(hols, x)).map(lambda x: hols[x]))
>>> df.head(6)
start_date end_date holiday_in_range
0 2015-07-14 2015-07-31 False
1 2010-12-18 2010-12-30 True # 2010-12-24
2 2013-04-06 2013-04-16 False
3 2013-09-12 2013-09-24 False
4 2017-10-28 2017-10-31 False
5 2013-12-14 2013-12-29 True # 2013-12-25
So, for a given start_date timestamp (e.g. 2013-12-14), bisect_left(hols, pd.Timestamp('2013-12-14')) yields 39, and hols[39] is 2013-12-25, the next holiday falling on or after the 2013-12-14 start date. That next holiday is calculated as df['start_date'].apply(lambda x: bisect_left(hols, x)).map(lambda x: hols[x]). It is then compared to the end_date: holiday_in_range is True if the end_date is greater than or equal to this holiday value; otherwise the holiday must fall after the end_date.
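A minimal standalone illustration of that lookup (my own, hedged: it anchors the holiday list at 2010-01-01 rather than at the sample's minimum start date, so only the returned holiday, not the exact index, matches the numbers above):
from bisect import bisect_left
import pandas as pd
from pandas.tseries.holiday import USFederalHolidayCalendar

hols = USFederalHolidayCalendar().holidays('2010-01-01')

start = pd.Timestamp('2013-12-14')
i = bisect_left(hols, start)                   # index of the first holiday >= start
print(hols[i])                                 # 2013-12-25 (Christmas Day)
print(hols[i] <= pd.Timestamp('2013-12-29'))   # True -> holiday in range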
Have you already considered using pandas.merge_asof for this?
I could imagine that map and apply with lambda functions cannot be executed that efficiently.
UPDATE: Ah sorry, I just read that you only need a boolean telling you whether there are any holidays in between; that makes it much easier. If that's enough, you only need to perform steps 1-5 below, then group the DataFrame resulting from step 5 by start/end date and use count as the aggregate function to get the number of holidays in each range. You can join this result back to your original dataset in the same way as step 8 below, then fill the remaining values with fillna(0) and do something like joined_df['includes_holiday'] = joined_df['joined_count_column'] > 0. After that, you can drop the joined count column from your DataFrame again, if you like.
If you use pandas.merge_asof, you could work through these steps (steps 6 and 7 are only necessary if you also need all the holidays between start and end in your result DataFrame, not just the booleans):
1. Load your holiday records into a DataFrame and index it on the date. The holidays should be one date per line (storing ranges, such as 24th-26th for Christmas, in one row would make this much more complex).
2. Create a copy of your dataframe with just the start and end date columns. UPDATE: every start/end date pair should occur only once in it, e.g. by using groupby.
3. Use merge_asof with a reasonable tolerance value (if you join on the start of the period, use direction='forward'; if you join on the end date, use direction='backward'). Note that merge_asof always performs a left join, so afterwards drop the rows for which no holiday was found within the tolerance.
4. As a result you have a merged DataFrame with your start and end columns plus the date column from your holiday dataframe. You keep only records for which a holiday was found within the given tolerance, and you can later merge this data back with your original DataFrame. You will probably now have duplicates of your original records.
5. Then check the joined holiday for each record by comparing it with the start and end columns, and remove the holidays that are not in between.
6. Sort the dataframe you obtained from step 5 (use something like df.sort_values(['start', 'end', 'holiday'], inplace=True)). Now insert a number column that numbers the holidays within each period (the ones you obtained after step 5) from 1 to n, restarting at 1 for each period. This is necessary to use unstack in the next step to get the holidays into columns.
7. Add an index on your dataframe based on the period start date, the period end date, and the count column you inserted in step 6, then use df.unstack(level=-1) on the DataFrame you prepared in steps 1-6. What you now have is a condensed DataFrame of your original periods, with the holidays arranged column-wise.
8. Now you only have to merge this DataFrame back onto your original data using original_df.merge(df_from_step7, left_on=['start', 'end'], right_index=True, how='left').
The result is your original data with its date ranges, and for each date range the holidays that lie within the period stored in separate columns behind the data. Loosely speaking, the numbering in step 6 assigns the holidays to the columns and has the effect that the holidays are always filled in from left to right (you wouldn't have a holiday in column 3 if column 1 is empty).
Step 6 is probably also a bit tricky, but you can do it, for example, by adding a series filled with a range and then fixing it so the numbering starts at 0 or 1 in each group, either by using shift or by grouping by start/end with agg({'idcol': 'min'}) and joining the result back to subtract it from the value assigned by the range sequence.
In all, I think it sounds more complicated than it is, and it should perform quite efficiently, especially if your periods are not that large, because then after step 5 your result set should be much smaller than your original dataframe. But even if that is not the case, it should still be quite fast, since it can use compiled code.
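A minimal sketch of the boolean variant from the UPDATE above (my reconstruction, not the answerer's code; it assumes the start_date/end_date columns from the earlier answer, skips the tolerance, and simply checks whether the first holiday at or after each start date falls on or before the end date):
import pandas as pd
from pandas.tseries.holiday import USFederalHolidayCalendar

hols = pd.DataFrame({'holiday': USFederalHolidayCalendar().holidays('2010-01-01', '2019-12-31')})

# One row per unique period; merge_asof needs the left frame sorted on its key.
pairs = df[['start_date', 'end_date']].drop_duplicates().sort_values('start_date')

# direction='forward' attaches the first holiday on or after each start date.
merged = pd.merge_asof(pairs, hols,
                       left_on='start_date', right_on='holiday',
                       direction='forward')
merged['includes_holiday'] = merged['holiday'].le(merged['end_date'])  # NaT compares False

# Join the flags back onto the original (possibly duplicated) rows.
df = df.merge(merged.drop(columns='holiday'),
              on=['start_date', 'end_date'], how='left')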

Python: Date conversion to year-weeknumber, issue at switch of year

I am trying to convert a dataframe column with a date and timestamp to a year-weeknumber format, i.e., 01-05-2017 03:44 = 2017-1. This is pretty easy; however, I am stuck on dates that fall in a new year yet whose week number still belongs to the last week of the previous year.
I did the following:
df['WEEK_NUMBER'] = df.date.dt.year.astype(str).str.cat(df.date.dt.week.astype(str), sep='-')
Here, df['date'] is a very large column of dates and times spanning multiple years.
A date that gives a problem is, for example:
Timestamp('2017-01-01 02:11:27')
The output of my code is 2017-52, while it should be 2016-52. Since the data covers multiple years, and week numbers and their corresponding dates change every year, I cannot simply subtract a few days.
Does anybody have an idea of how to fix this? Thanks!
Replace df.date.dt.year with this:
(df.date.dt.year - ((df.date.dt.week > 50) & (df.date.dt.month == 1)))
Basically, it means that you subtract 1 from the year value if the week number is greater than 50 and the month is January.
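As an aside (my addition, not from the original answer): on pandas 1.1+, Series.dt.isocalendar() exposes the ISO year directly, which sidesteps the manual adjustment:
# dt.isocalendar() returns a DataFrame with 'year', 'week' and 'day' columns;
# the ISO year of 2017-01-01 is 2016, so the boundary case is handled for free.
iso = df.date.dt.isocalendar()
df['WEEK_NUMBER'] = iso.year.astype(str) + '-' + iso.week.astype(str)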

Use Python/Pandas indexed date as condition in holiday list

I'm opening a CSV file with two columns and about 10,000 rows. The first column has a unique date and time stamp (ascending in 30-minute intervals, called 'date_time') and the second column has an integer, 'intnum'. I use the date_time column as my index and then use conditions to sum only the integers that fall into specific date ranges. All of the conditions work perfectly, EXCEPT the last condition is based on matching those dates with the USFederalHolidayCalendar.
Here's the rub: the indexed date is more complex (e.g. '2015-02-16 12:30:00.00000') than the holiday list date (e.g. '2015-02-16', President's Day). So when I run an 'isin' function against the holiday list, it doesn't find all of the integers associated with the whole day, because '2015-02-16 12:30:00.00000' is not equal to '2015-02-16', despite being the same day.
Code snippet:
import numpy as np
import pandas as pd
from pandas.tseries.holiday import USFederalHolidayCalendar, get_calendar

newcal = get_calendar('USFederalHolidayCalendar')
holidays = newcal.holidays(start='2010-01-01', end='2016-12-31')

filename = "/Users/Me/Desktop/test.csv"
int_array = pd.read_csv(filename, header=0, parse_dates=['date_time'], index_col='date_time')

intnum_total = int(int_array['intnum'][(int_array.index.month >= 2) &
                                       (int_array.index.month <= 3) &
                                       (int_array.index.hour >= 12) &
                                       (int_array.index.isin(holidays) == True)].sum())

print intnum_total
Now, I get no errors, so the syntax and functions work "properly", but I know for a fact the holiday match is not working.
Any thoughts?
Thanks ahead of time - this is my first post, so hopefully the formatting and question are clear.
Here are some thoughts...
Say you have a list of holidays for 2016:
cal = USFederalHolidayCalendar()
holidays = cal.holidays(start='2016-01-01', end='2016-12-31')
print holidays.size
Which yields:
10
So there are 10 holidays in 2016 based on USFederalHolidayCalendar.
You also have your DatetimeIndex which, let's say, covers 2015 and 2016:
idx = pd.DatetimeIndex(pd.date_range(start='2015-1-1',
                                     end='2016-12-31', freq='30min'))
print idx.size
Which shows:
35041
Now, if I want to see how many holidays are in my 30-minute-based idx, I take the date part of the DatetimeIndex and compare it to the date part of the holidays:
idx[pd.DatetimeIndex(idx.date).isin(holidays.date)].size
Which would give me:
480
Which is 10 holidays * 24 hours * 2 half-hours in an hour.
Does that sound correct?
Note that when you do index.isin(other_index) you get back a boolean array, which is sufficient for indexing; you don't need the extra comparison index.isin(other_index) == True.
Can't you just access the date from your timestamp and see if it is in your list of federal holidays? I don't know why you need your second integer index column; I would think a boolean value should suffice (e.g. fed_holiday).
df = pd.DataFrame(pd.date_range(start='2016-1-1', end='2016-12-31', freq='30min', name='ts'))
df['fed_holiday'] = [ts.date() in holidays for ts in df.ts]
>>> df.fed_holiday.sum() / (24 * 2.)
10.0
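One more route worth noting (my suggestion, not from either answer): DatetimeIndex.normalize() zeroes out the time component, so the index compares equal to the plain holiday dates and the original condition can stay almost unchanged:
# Sketch, reusing int_array and holidays from the question above.
mask = ((int_array.index.month >= 2) &
        (int_array.index.month <= 3) &
        (int_array.index.hour >= 12) &
        int_array.index.normalize().isin(holidays))
intnum_total = int(int_array.loc[mask, 'intnum'].sum())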

Get last date in each month of a time series pandas

Currently I'm generating a DatetimeIndex using a certain function, zipline.utils.tradingcalendar.get_trading_days. The time series is roughly daily, but with some gaps.
My goal is to get the last date in the DateTimeIndex for each month.
.to_period('M') and .to_timestamp('M') don't work, since they give the last day of the month rather than the last value of the variable in each month.
As an example, if this is my time series I would want to select '2015-05-29' while the last day of the month is '2015-05-31'.
['2015-05-18', '2015-05-19', '2015-05-20', '2015-05-21',
'2015-05-22', '2015-05-26', '2015-05-27', '2015-05-28',
'2015-05-29', '2015-06-01']
Condla's answer came closest to what I needed, except that my time index stretched over more than a year, so I needed to group by both year and month and then select the maximum date. Below is the code I ended up with.
# tempTradeDays is the initial DatetimeIndex
dateRange = []
tempYear = None
dictYears = tempTradeDays.groupby(tempTradeDays.year)
for yr in dictYears.keys():
    tempYear = pd.DatetimeIndex(dictYears[yr]).groupby(pd.DatetimeIndex(dictYears[yr]).month)
    for m in tempYear.keys():
        dateRange.append(max(tempYear[m]))
dateRange = pd.DatetimeIndex(dateRange).order()
Suppose your data frame looks like this:
[original dataframe shown as an image]
Then the following code will give you the last day of each month:
df_monthly = df.reset_index().groupby([df.index.year, df.index.month], as_index=False).last().set_index('index')
[transformed dataframe shown as an image]
This one-liner does the job :)
My strategy would be to group by month and then select the "maximum" of each group:
If "dt" is your DatetimeIndex object:
last_dates_of_the_month = []
dt_month_group_dict = dt.groupby(dt.month)
for month in dt_month_group_dict:
    last_date = max(dt_month_group_dict[month])
    last_dates_of_the_month.append(last_date)
The list "last_date_of_the_month" contains all occuring last dates of each month in your dataset. You can use this list to create a DatetimeIndex in pandas again (or whatever you want to do with it).
This is an old question, but none of the existing answers is perfect. Here is the solution I came up with (assuming that the date is a sorted index); it could even be written in one line, but I split it up for readability:
month1 = pd.Series(apple.index.month)
month2 = pd.Series(apple.index.month).shift(-1)
mask = (month1 != month2)
apple[mask.values].head(10)
A few notes here:
Shifting a datetime series requires another pd.Series instance (see here)
Boolean mask indexing requires .values (see here)
By the way, when the dates are business days, it'd be easier to use resampling: apple.resample('BM')
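A hedged note of mine, not the answerer's: on modern pandas, resample returns a Resampler object, so an explicit aggregation is needed to reproduce this:
# 'apple' is the hypothetical business-day series from the answer above.
last_per_month = apple.resample('BM').last()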
Maybe the answer is no longer needed, but while searching for an answer to the same question I found what may be a simpler solution:
import pandas as pd
sample_dates = pd.date_range(start='2010-01-01', periods=100, freq='B')
month_end_dates = sample_dates[sample_dates.is_month_end]
Try this to create a new diff column, in which the value 1 marks the change from one month to the next:
df['diff'] = np.where(df['Date'].dt.month.diff() != 0, 1, 0)
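One caveat (my note, not the answerer's): month.diff() flags the first row of each new month. To mark the last date in each month instead, compare against a backward shift, as in this sketch (assuming df['Date'] is sorted):
# True on the final row of each month (the frame's last row included,
# since shift(-1) leaves a NaN there and ne() treats it as unequal).
m = df['Date'].dt.month
df['is_last_of_month'] = m.ne(m.shift(-1))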
