I have a column of data (in pandas) that contains a time of day:
0 8:00 AM
1 11:00 AM
2 8:00 AM
3 4:00 PM
4 9:00 AM
5
6 9:00 AM
7
8 9:00 AM
9
10 9:00 AM
11
12 9:00 AM
13
14 8:00 AM
15 11:00 AM
16 8:00 AM
17 11:00 AM
18 9:00 AM
19
20 9:00 AM
21
22 9:00 AM
23
24 9:00 AM
25
26 9:00 AM
27
28 9:00 AM
I would like to convert this to something similar to this:
0 2015-11-11 08:00:00
1 2015-11-11 11:00:00
2 2015-11-11 08:00:00
3 2015-11-11 16:00:00
4 2015-11-11 09:00:00
5 NaT
6 2015-11-11 09:00:00
7 NaT
8 2015-11-11 09:00:00
9 NaT
10 2015-11-11 09:00:00
11 NaT
12 2015-11-11 09:00:00
13 NaT
14 2015-11-11 08:00:00
15 2015-11-11 11:00:00
16 2015-11-11 08:00:00
17 2015-11-11 11:00:00
18 2015-11-11 09:00:00
19 NaT
20 2015-11-11 09:00:00
21 NaT
22 2015-11-11 09:00:00
23 NaT
24 2015-11-11 09:00:00
25 NaT
26 2015-11-11 09:00:00
27 NaT
28 2015-11-11 09:00:00
29 NaT
But without the date added to it. I am then trying to combine my pandas columns into a single column so I can iterate through it. I have tried converting them with astype(str), with no success in a pd.merge.
Any ideas on how to use the to_datetime function in pandas while just keeping it as UTC time?
Considering the following input data:
data = ['8:00 AM',
'11:00 AM',
'8:00 AM',
'4:00 PM',
'9:00 AM',
'',
'9:00 AM',
'',
'9:00 AM']
Code:
import pandas as pd
x = pd.to_datetime(data).time
pd.Series(x)
Output:
0 08:00:00
1 11:00:00
2 08:00:00
3 16:00:00
4 09:00:00
5 NaN
6 09:00:00
7 NaN
8 09:00:00
dtype: object
If you have other data in another series you would like to join into the same dataframe:
x = pd.Series(x)
y = pd.Series(range(9))
pd.concat([x, y], axis=1)
0 1
0 08:00:00 0
1 11:00:00 1
2 08:00:00 2
Finally, if you prefer the columns merged as strings, try this:
z = pd.concat([x, y], axis=1)
z[0].astype(str) + ' foo ' + z[1].astype(str)
0 08:00:00 foo 0
1 11:00:00 foo 1
2 08:00:00 foo 2
3 16:00:00 foo 3
4 09:00:00 foo 4
5 nan foo 5
6 09:00:00 foo 6
7 nan foo 7
8 09:00:00 foo 8
dtype: object
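If you would rather keep real missing values (NaT) for the blanks, matching the desired output in the question, here is a minimal sketch using an explicit format and errors='coerce'; the parsed timestamps carry a date, which .dt.time drops:
import pandas as pd

s = pd.Series(['8:00 AM', '11:00 AM', '8:00 AM', '4:00 PM', '9:00 AM',
               '', '9:00 AM', '', '9:00 AM'])

# errors='coerce' turns the empty strings into NaT instead of raising
dt = pd.to_datetime(s, format='%I:%M %p', errors='coerce')
times = dt.dt.time  # time-of-day only; missing entries stay missing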
I have data about how many messages each account sends, aggregated to an hourly level. For each row, I would like to add a column with the rolling average of the previous 7 days' messages. I know I can group by account and date and aggregate the number of messages to the daily level, but I'm having a hard time calculating the rolling average because there isn't a row in the data if the account didn't send any messages that day (and I'd like not to balloon my data by adding these rows in, if at all possible). If I could figure out a way to calculate the rolling 7-day average for each day that each account sent messages, I could then re-join that number back to the hourly data (is my hope). Any suggestions?
Note: For any day not in the data, assume 0 messages sent.
Raw Data:

Account  Messages        Date      Hour
     12         5  2022-07-11  09:00:00
     12         6  2022-07-13  10:00:00
     12        10  2022-07-13  11:00:00
     12         9  2022-07-15  16:00:00
     12         1  2022-07-19  13:00:00
     15         2  2022-07-12  10:00:00
     15        13  2022-07-13  11:00:00
     15         3  2022-07-17  16:00:00
     15         4  2022-07-22  13:00:00
Desired Output:

Account  Messages        Date      Hour  Rolling Previous 7 Day Average
     12         5  2022-07-11  09:00:00  0
     12         6  2022-07-13  10:00:00  0.714
     12        10  2022-07-13  11:00:00  0.714
     12         9  2022-07-15  16:00:00  3
     12         1  2022-07-19  13:00:00  3.571
     15         2  2022-07-12  10:00:00  0
     15        13  2022-07-13  11:00:00  0.286
     15         3  2022-07-17  16:00:00  2.143
     15         4  2022-07-22  13:00:00  0.429
I hope I've understood your question right:
df["Date"] = pd.to_datetime(df["Date"])
df["Messages_tmp"] = df.groupby(["Account", "Date"])["Messages"].transform(
"sum"
)
df["Rolling Previous 7 Day Average"] = (
df.set_index("Date")
.groupby("Account")["Messages_tmp"]
.rolling("7D")
.apply(lambda x: x.loc[~x.index.duplicated()].shift().sum() / 7)
).values
df = df.drop(columns="Messages_tmp")
print(df)
Prints:
Account Messages Date Hour Rolling Previous 7 Day Average
0 12 5 2022-07-11 09:00:00 0.000000
1 12 6 2022-07-13 10:00:00 0.714286
2 12 10 2022-07-13 11:00:00 0.714286
3 12 9 2022-07-15 16:00:00 3.000000
4 12 1 2022-07-19 13:00:00 3.571429
5 15 2 2022-07-12 10:00:00 0.000000
6 15 13 2022-07-13 11:00:00 0.285714
7 15 3 2022-07-17 16:00:00 2.142857
8 15 4 2022-07-22 13:00:00 0.428571
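For reference, a minimal sketch of the input frame the snippet above assumes, reconstructed from the question's raw data:
import pandas as pd

df = pd.DataFrame({
    "Account": [12, 12, 12, 12, 12, 15, 15, 15, 15],
    "Messages": [5, 6, 10, 9, 1, 2, 13, 3, 4],
    "Date": ["2022-07-11", "2022-07-13", "2022-07-13", "2022-07-15", "2022-07-19",
             "2022-07-12", "2022-07-13", "2022-07-17", "2022-07-22"],
    "Hour": ["09:00:00", "10:00:00", "11:00:00", "16:00:00", "13:00:00",
             "10:00:00", "11:00:00", "16:00:00", "13:00:00"],
})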
I have a data table that looks like this:
ID ARRIVAL_DATE_TIME DISPOSITION_DATE
1 2021-11-07 08:35:00 2021-11-07 17:58:00
2 2021-11-07 13:16:00 2021-11-08 02:52:00
3 2021-11-07 15:12:00 2021-11-07 21:08:00
I want to be able to count the number of patients in our location by date/hour and hour. I imagine I would eventually have to transform this data into a format seen below and then create a pivot table, but I'm not sure how to first transform this data. So for example, ID 1 would have a row for each date/hour and hour between '2021-11-07 08:35:00' and '2021-11-07 17:58:00'.
ID DATE_HOUR_IN_ED HOUR_IN_ED
1 2021-11-07 08:00:00 8:00
1 2021-11-07 09:00:00 9:00
1 2021-11-07 10:00:00 10:00
1 2021-11-07 11:00:00 11:00
...
2 2021-11-07 13:00:00 13:00
2 2021-11-07 14:00:00 14:00
2 2021-11-07 15:00:00 15:00
....
Use to_datetime with Series.dt.floor to truncate the arrival times to the hour, then concat one repeated date_range per row, and finally create the DataFrame with the constructor:
df['ARRIVAL_DATE_TIME'] = pd.to_datetime(df['ARRIVAL_DATE_TIME']).dt.floor('H')
s = pd.concat([pd.Series(r.ID,pd.date_range(r.ARRIVAL_DATE_TIME,
r.DISPOSITION_DATE, freq='H'))
for r in df.itertuples()])
df1 = pd.DataFrame({'ID':s.to_numpy(),
'DATE_HOUR_IN_ED':s.index,
'HOUR_IN_ED': s.index.strftime('%H:%M')})
print (df1)
ID DATE_HOUR_IN_ED HOUR_IN_ED
0 1 2021-11-07 08:00:00 08:00
1 1 2021-11-07 09:00:00 09:00
2 1 2021-11-07 10:00:00 10:00
3 1 2021-11-07 11:00:00 11:00
4 1 2021-11-07 12:00:00 12:00
5 1 2021-11-07 13:00:00 13:00
6 1 2021-11-07 14:00:00 14:00
7 1 2021-11-07 15:00:00 15:00
8 1 2021-11-07 16:00:00 16:00
9 1 2021-11-07 17:00:00 17:00
10 2 2021-11-07 13:00:00 13:00
11 2 2021-11-07 14:00:00 14:00
12 2 2021-11-07 15:00:00 15:00
13 2 2021-11-07 16:00:00 16:00
14 2 2021-11-07 17:00:00 17:00
15 2 2021-11-07 18:00:00 18:00
16 2 2021-11-07 19:00:00 19:00
17 2 2021-11-07 20:00:00 20:00
18 2 2021-11-07 21:00:00 21:00
19 2 2021-11-07 22:00:00 22:00
20 2 2021-11-07 23:00:00 23:00
21 2 2021-11-08 00:00:00 00:00
22 2 2021-11-08 01:00:00 01:00
23 2 2021-11-08 02:00:00 02:00
24 3 2021-11-07 15:00:00 15:00
25 3 2021-11-07 16:00:00 16:00
26 3 2021-11-07 17:00:00 17:00
27 3 2021-11-07 18:00:00 18:00
28 3 2021-11-07 19:00:00 19:00
29 3 2021-11-07 20:00:00 20:00
30 3 2021-11-07 21:00:00 21:00
Alternative solution:
df['ARRIVAL_DATE_TIME'] = pd.to_datetime(df['ARRIVAL_DATE_TIME']).dt.floor('H')
L = [pd.date_range(s,e, freq='H')
for s, e in df[['ARRIVAL_DATE_TIME','DISPOSITION_DATE']].to_numpy()]
df['DATE_HOUR_IN_ED'] = L
df = (df.drop(['ARRIVAL_DATE_TIME','DISPOSITION_DATE'], axis=1)
.explode('DATE_HOUR_IN_ED')
.reset_index(drop=True)
.assign(HOUR_IN_ED = lambda x: x['DATE_HOUR_IN_ED'].dt.strftime('%H:%M')))
Try this:
import pandas as pd
import numpy as np
df = pd.read_excel('test.xls')
df1 = (df.set_index(['ID'])
.assign(DATE_HOUR_IN_ED=lambda x: [pd.date_range(s,d, freq='H')
for s,d in zip(x.ARRIVAL_DATE_TIME, x.DISPOSITION_DATE)])
['DATE_HOUR_IN_ED'].explode()
.reset_index()
)
df1['DATE_HOUR_IN_ED'] = df1['DATE_HOUR_IN_ED'].dt.floor('H')
df1['HOUR_IN_ED'] = df1['DATE_HOUR_IN_ED'].dt.strftime('%H:%M')
print(df1)
Output:
ID DATE_HOUR_IN_ED HOUR_IN_ED
0 1 2021-11-07 08:00:00 08:00
1 1 2021-11-07 09:00:00 09:00
2 1 2021-11-07 10:00:00 10:00
3 1 2021-11-07 11:00:00 11:00
4 1 2021-11-07 12:00:00 12:00
5 1 2021-11-07 13:00:00 13:00
6 1 2021-11-07 14:00:00 14:00
7 1 2021-11-07 15:00:00 15:00
8 1 2021-11-07 16:00:00 16:00
9 1 2021-11-07 17:00:00 17:00
10 2 2021-11-07 13:00:00 13:00
11 2 2021-11-07 14:00:00 14:00
12 2 2021-11-07 15:00:00 15:00
13 2 2021-11-07 16:00:00 16:00
14 2 2021-11-07 17:00:00 17:00
15 2 2021-11-07 18:00:00 18:00
16 2 2021-11-07 19:00:00 19:00
17 2 2021-11-07 20:00:00 20:00
18 2 2021-11-07 21:00:00 21:00
19 2 2021-11-07 22:00:00 22:00
20 2 2021-11-07 23:00:00 23:00
21 2 2021-11-08 00:00:00 00:00
22 2 2021-11-08 01:00:00 01:00
23 2 2021-11-08 02:00:00 02:00
24 3 2021-11-07 15:00:00 15:00
25 3 2021-11-07 16:00:00 16:00
26 3 2021-11-07 17:00:00 17:00
27 3 2021-11-07 18:00:00 18:00
28 3 2021-11-07 19:00:00 19:00
29 3 2021-11-07 20:00:00 20:00
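Whichever variant you use, the per-hour patient census the question is ultimately after is one more groupby on the long-format result. A sketch, where PATIENT_COUNT is a hypothetical column name; a pivot table over df1 would work just as well:
counts = (df1.groupby('DATE_HOUR_IN_ED')['ID']
             .nunique()
             .reset_index(name='PATIENT_COUNT'))
print (counts)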
I'm creating a Python program using pandas and datetime libraries that will calculate the pay from my casual job each week, so I can cross reference my bank statement instead of looking through payslips.
The data that I am analysing is from the Google Calendar API that is synced with my work schedule. It prints the events in that particular calendar to a csv file in this format:
   Start             End               Title  Hours
0  02.12.2020 07:00  02.12.2020 16:00  Shift  9.0
1  04.12.2020 18:00  04.12.2020 21:00  Shift  3.0
2  05.12.2020 07:00  05.12.2020 12:00  Shift  5.0
3  06.12.2020 09:00  06.12.2020 18:00  Shift  9.0
4  07.12.2020 19:00  07.12.2020 23:00  Shift  4.0
5  08.12.2020 19:00  08.12.2020 23:00  Shift  4.0
6  09.12.2020 10:00  09.12.2020 15:00  Shift  5.0
As I am a casual at this job I have to take a few things into consideration, like penalty rates (the base rate, after 6pm on Monday - Friday, Saturday, and Sunday all attract different rates). I'm wondering if I can analyse this csv using datetime and calculate how many hours fall before 6pm and how many after. Using a shift as an example, the input and desired output would look like this:
   Start             End               Title  Hours
1  04.12.2020 15:00  04.12.2020 21:00  Shift  6.0

   Start             End               Title  Total Hours  Hours before 6pm  Hours after 6pm
1  04.12.2020 15:00  04.12.2020 21:00  Shift  6.0          3.0               3.0
I can use this to get the day of the week but I'm just not sure how to analyse certain bits of time for penalty rates:
df['day_of_week'] = df['Start'].dt.day_name()
I appreciate any help in Python or even other coding languages/techniques this can be applied to:)
Edit:
This is how my dataframe is looking at the moment:

   Start                End                  Title  Hours  day_of_week     Pay  week_of_year
0  2020-12-02 07:00:00  2020-12-02 16:00:00  Shift    9.0    Wednesday  337.30            49
EDIT
In response to David Erickson's comment.
                  value variable   bool
0   2020-12-02 07:00:00    Start  False
1   2020-12-02 08:00:00    Start  False
2   2020-12-02 09:00:00    Start  False
3   2020-12-02 10:00:00    Start  False
4   2020-12-02 11:00:00    Start  False
5   2020-12-02 12:00:00    Start  False
6   2020-12-02 13:00:00    Start  False
7   2020-12-02 14:00:00    Start  False
8   2020-12-02 15:00:00    Start  False
9   2020-12-02 16:00:00      End  False
10  2020-12-04 18:00:00    Start  False
11  2020-12-04 19:00:00    Start   True
12  2020-12-04 20:00:00    Start   True
13  2020-12-04 21:00:00      End   True
14  2020-12-05 07:00:00    Start  False
15  2020-12-05 08:00:00    Start  False
16  2020-12-05 09:00:00    Start  False
17  2020-12-05 10:00:00    Start  False
18  2020-12-05 11:00:00    Start  False
19  2020-12-05 12:00:00      End  False
20  2020-12-06 09:00:00    Start  False
21  2020-12-06 10:00:00    Start  False
22  2020-12-06 11:00:00    Start  False
23  2020-12-06 12:00:00    Start  False
24  2020-12-06 13:00:00    Start  False
25  2020-12-06 14:00:00    Start  False
26  2020-12-06 15:00:00    Start  False
27  2020-12-06 16:00:00    Start  False
28  2020-12-06 17:00:00    Start  False
29  2020-12-06 18:00:00      End  False
30  2020-12-07 19:00:00    Start  False
31  2020-12-07 20:00:00    Start   True
32  2020-12-07 21:00:00    Start   True
33  2020-12-07 22:00:00    Start   True
34  2020-12-07 23:00:00      End   True
35  2020-12-08 19:00:00    Start  False
36  2020-12-08 20:00:00    Start   True
37  2020-12-08 21:00:00    Start   True
38  2020-12-08 22:00:00    Start   True
39  2020-12-08 23:00:00      End   True
40  2020-12-09 10:00:00    Start  False
41  2020-12-09 11:00:00    Start  False
42  2020-12-09 12:00:00    Start  False
43  2020-12-09 13:00:00    Start  False
44  2020-12-09 14:00:00    Start  False
45  2020-12-09 15:00:00      End  False
46  2020-12-11 19:00:00    Start  False
47  2020-12-11 20:00:00    Start   True
48  2020-12-11 21:00:00    Start   True
49  2020-12-11 22:00:00    Start   True
UPDATE: (2020-12-19)
I have simply filtered out the Start rows, as you were correct that an extra row was being calculated. I also passed dayfirst=True to pd.to_datetime() to convert the dates correctly, and added some extra columns to make the output cleaner.
import numpy as np
import pandas as pd

higher_pay = 40
lower_pay = 30

df['Start'], df['End'] = pd.to_datetime(df['Start'], dayfirst=True), pd.to_datetime(df['End'], dayfirst=True)
start = df['Start']
# melt Start/End into one Date column and number each shift via cumcount
df1 = df[['Start', 'End']].melt(value_name='Date').set_index('Date')
s = df1.groupby('variable').cumcount()
# resample each shift hourly, carrying the shift number along
df1 = df1.groupby(s, group_keys=False).resample('1H').asfreq().join(s.rename('Shift').to_frame()).ffill().reset_index()
# drop the Start row of each shift so an n-hour shift contributes n rows
df1 = df1[~df1['Date'].isin(start)]
df1['Day'] = df1['Date'].dt.day_name()
df1['Week'] = df1['Date'].dt.isocalendar().week
# penalty condition: after 6pm or on a weekend
m = (df1['Date'].dt.hour > 18) | (df1['Day'].isin(['Saturday', 'Sunday']))
df1['Higher Pay Hours'] = np.where(m, 1, 0)
df1['Lower Pay Hours'] = np.where(m, 0, 1)
df1['Pay'] = np.where(m, higher_pay, lower_pay)
df1 = df1.groupby(['Shift', 'Day', 'Week']).sum().reset_index()
df2 = df.merge(df1, how='left', left_index=True, right_on='Shift').drop('Shift', axis=1)
df2
Out[1]:
Start End Title Hours Day Week \
0 2020-12-02 07:00:00 2020-12-02 16:00:00 Shift 9.0 Wednesday 49
1 2020-12-04 18:00:00 2020-12-04 21:00:00 Shift 3.0 Friday 49
2 2020-12-05 07:00:00 2020-12-05 12:00:00 Shift 5.0 Saturday 49
3 2020-12-06 09:00:00 2020-12-06 18:00:00 Shift 9.0 Sunday 49
4 2020-12-07 19:00:00 2020-12-07 23:00:00 Shift 4.0 Monday 50
5 2020-12-08 19:00:00 2020-12-08 23:00:00 Shift 4.0 Tuesday 50
6 2020-12-09 10:00:00 2020-12-09 15:00:00 Shift 5.0 Wednesday 50
Higher Pay Hours Lower Pay Hours Pay
0 0 9 270
1 3 0 120
2 5 0 200
3 9 0 360
4 4 0 160
5 4 0 160
6 0 5 150
There are probably more concise ways to do this, but I thought resampling the dataframe and then counting the hours would be a clean approach. You can melt the dataframe to have Start and End in the same column and fill in the gap hours with resample, making sure to group by the 'Start' and 'End' values that were initially on the same row. The easiest way to figure out which rows were initially together is to get the cumulative count with cumcount of the values in the new dataframe grouped by 'Start' and 'End'. I'll show you how this works later in the answer.
Full Code:
df['Start'], df['End'] = pd.to_datetime(df['Start']), pd.to_datetime(df['End'])
df = df[['Start', 'End']].melt().set_index('value')
df = df.groupby(df.groupby('variable').cumcount(), group_keys=False).resample('1H').asfreq().ffill().reset_index()
m = (df['value'].dt.hour > 18) | (df['value'].dt.day_name().isin(['Saturday', 'Sunday']))
print('Higher Rate No. of Hours', df[m].shape[0])
print('Normal Rate No. of Hours', df[~m].shape[0])
Higher Rate No. of Hours 20
Normal Rate No. of Hours 26
Adding some more details...
Step 1: Melt the dataframe. You only need the two columns 'Start' and 'End' to get your desired output:
df = df[['Start', 'End']].melt().set_index('value')
df
Out[1]:
variable
value
2020-02-12 07:00:00 Start
2020-04-12 18:00:00 Start
2020-05-12 07:00:00 Start
2020-06-12 09:00:00 Start
2020-07-12 19:00:00 Start
2020-08-12 19:00:00 Start
2020-09-12 10:00:00 Start
2020-02-12 16:00:00 End
2020-04-12 21:00:00 End
2020-05-12 12:00:00 End
2020-06-12 18:00:00 End
2020-07-12 23:00:00 End
2020-08-12 23:00:00 End
2020-09-12 15:00:00 End
Step 2: Create the groups in preparation for resample. As you can see, groups 0-6 line up with each other, representing the 'Start' and 'End' values that were previously on the same row:
df.groupby('variable').cumcount()
Out[2]:
value
2020-02-12 07:00:00 0
2020-04-12 18:00:00 1
2020-05-12 07:00:00 2
2020-06-12 09:00:00 3
2020-07-12 19:00:00 4
2020-08-12 19:00:00 5
2020-09-12 10:00:00 6
2020-02-12 16:00:00 0
2020-04-12 21:00:00 1
2020-05-12 12:00:00 2
2020-06-12 18:00:00 3
2020-07-12 23:00:00 4
2020-08-12 23:00:00 5
2020-09-12 15:00:00 6
Step 3: Resample the data per group by hour to fill in the gaps for each group:
df.groupby(df.groupby('variable').cumcount(), group_keys=False).resample('1H').asfreq().ffill().reset_index()
Out[3]:
value variable
0 2020-02-12 07:00:00 Start
1 2020-02-12 08:00:00 Start
2 2020-02-12 09:00:00 Start
3 2020-02-12 10:00:00 Start
4 2020-02-12 11:00:00 Start
5 2020-02-12 12:00:00 Start
6 2020-02-12 13:00:00 Start
7 2020-02-12 14:00:00 Start
8 2020-02-12 15:00:00 Start
9 2020-02-12 16:00:00 End
10 2020-04-12 18:00:00 Start
11 2020-04-12 19:00:00 Start
12 2020-04-12 20:00:00 Start
13 2020-04-12 21:00:00 End
14 2020-05-12 07:00:00 Start
15 2020-05-12 08:00:00 Start
16 2020-05-12 09:00:00 Start
17 2020-05-12 10:00:00 Start
18 2020-05-12 11:00:00 Start
19 2020-05-12 12:00:00 End
20 2020-06-12 09:00:00 Start
21 2020-06-12 10:00:00 Start
22 2020-06-12 11:00:00 Start
23 2020-06-12 12:00:00 Start
24 2020-06-12 13:00:00 Start
25 2020-06-12 14:00:00 Start
26 2020-06-12 15:00:00 Start
27 2020-06-12 16:00:00 Start
28 2020-06-12 17:00:00 Start
29 2020-06-12 18:00:00 End
30 2020-07-12 19:00:00 Start
31 2020-07-12 20:00:00 Start
32 2020-07-12 21:00:00 Start
33 2020-07-12 22:00:00 Start
34 2020-07-12 23:00:00 End
35 2020-08-12 19:00:00 Start
36 2020-08-12 20:00:00 Start
37 2020-08-12 21:00:00 Start
38 2020-08-12 22:00:00 Start
39 2020-08-12 23:00:00 End
40 2020-09-12 10:00:00 Start
41 2020-09-12 11:00:00 Start
42 2020-09-12 12:00:00 Start
43 2020-09-12 13:00:00 Start
44 2020-09-12 14:00:00 Start
45 2020-09-12 15:00:00 End
Step 4: From there, you can calculate the boolean series I have called m. True values represent conditions met for the 'Higher Rate':
m = (df['value'].dt.hour > 18) | (df['value'].dt.day_name().isin(['Saturday', 'Sunday']))
m
Out[4]:
0 False
1 False
2 False
3 False
4 False
5 False
6 False
7 False
8 False
9 False
10 True
11 True
12 True
13 True
14 False
15 False
16 False
17 False
18 False
19 False
20 False
21 False
22 False
23 False
24 False
25 False
26 False
27 False
28 False
29 False
30 True
31 True
32 True
33 True
34 True
35 True
36 True
37 True
38 True
39 True
40 True
41 True
42 True
43 True
44 True
45 True
Step 5: Filter the dataframe with the mask (and its negation) to count the total hours at the higher and normal rates, and print the values:
print('Higher Rate No. of Hours', df[m].shape[0])
print('Normal Rate No. of Hours', df[~m].shape[0])
Higher Rate No. of Hours 20
Normal Rate No. of Hours 26
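If you only need the before/after 6pm split per shift, here is an alternative sketch that avoids resampling entirely, assuming shifts never cross midnight (weekend rates would still need the day_name() check):
import pandas as pd

df['Start'] = pd.to_datetime(df['Start'], dayfirst=True)
df['End'] = pd.to_datetime(df['End'], dayfirst=True)

# 6pm on the day each shift starts
six_pm = df['Start'].dt.normalize() + pd.Timedelta(hours=18)

# overlap of [Start, End] with the part of the day before 18:00, floored at 0
end_capped = df['End'].where(df['End'] < six_pm, six_pm)
df['Hours before 6pm'] = ((end_capped - df['Start'])
                          .dt.total_seconds().div(3600).clip(lower=0))
df['Hours after 6pm'] = ((df['End'] - df['Start'])
                         .dt.total_seconds().div(3600) - df['Hours before 6pm'])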
I have a dataframe like:
Timestamp Sold
10.01.2017 10:00:20 10
10.01.2017 10:01:55 20
10.01.2017 11:02:11 15
11.01.2017 11:04:30 10
11.01.2017 11:15:35 35
12.01.2017 10:02:01 22
How can I resample it by hour? An ordinary resample fills in every hour from the first row to the last, but what I need is to define a timeframe (hours 10-11) and resample only within that timeframe. The final df should look like this:
Timestamp Sold
10.01.2017 10:00:00 30
10.01.2017 11:00:00 15
11.01.2017 10:00:00 NAN
11.01.2017 11:00:00 45
12.01.2017 10:00:00 22
12.01.2017 11:00:00 NAN
You could do something like this:
df_out = df.groupby(df.Timestamp.dt.floor('H')).sum()
df_out.reset_index()
Output:
Timestamp Sold
0 2017-10-01 10:00:00 30
1 2017-10-01 11:00:00 15
2 2017-11-01 11:00:00 45
3 2017-12-01 10:00:00 22
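Note the groupby output above skips the missing 10:00/11:00 slots instead of showing them as NaN like the desired output. A sketch that builds the full date x hour grid for the 10-11 timeframe and reindexes; dayfirst=True is assumed since the dates look like DD.MM.YYYY:
import pandas as pd

df = pd.DataFrame({
    'Timestamp': pd.to_datetime(
        ['10.01.2017 10:00:20', '10.01.2017 10:01:55', '10.01.2017 11:02:11',
         '11.01.2017 11:04:30', '11.01.2017 11:15:35', '12.01.2017 10:02:01'],
        dayfirst=True),
    'Sold': [10, 20, 15, 10, 35, 22]})

hourly = df.groupby(df['Timestamp'].dt.floor('H'))['Sold'].sum()

# one slot per date for each hour in the 10-11 timeframe; absent slots become NaN
dates = hourly.index.normalize().unique()
grid = pd.DatetimeIndex(sorted(d + pd.Timedelta(hours=h) for d in dates for h in (10, 11)))
out = hourly.reindex(grid).rename_axis('Timestamp').reset_index()
print (out)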
I have a Pandas DF with a datetime column 'SeriesDate'.
I want to filter the DF using a locally defined int parameter, 'days'. Such as when days = 10, my filtered DF only has the data for the last available 10 dates.
Until now, I have tried the following:
days=10
cutoff_date = df["SeriesDate"][-1:] - datetime.timedelta(days=days)
However, then trying to output the filtered DF using:
df[df['SeriesDate'] > cutoff_date]
I get the following error:
ValueError: Can only compare identically-labeled Series objects
I am still learning Python, so I will appreciate any help that I can get with this.
I think you need to select the last value of column SeriesDate with iloc. Your df['SeriesDate'][-1:] returns a one-element Series rather than a scalar, and comparing a whole column against a differently-indexed Series is what raises the 'identically-labeled' error; iloc[-1] returns a scalar:
import numpy as np
import pandas as pd

start = pd.to_datetime('2015-02-24')
rng = pd.date_range(start, periods=15, freq='20H')
df = pd.DataFrame({'SeriesDate': rng, 'Value_1': np.random.random(15)})
print (df)
SeriesDate Value_1
0 2015-02-24 00:00:00 0.849160
1 2015-02-24 20:00:00 0.332487
2 2015-02-25 16:00:00 0.687638
3 2015-02-26 12:00:00 0.310326
4 2015-02-27 08:00:00 0.660795
5 2015-02-28 04:00:00 0.354475
6 2015-03-01 00:00:00 0.061312
7 2015-03-01 20:00:00 0.443908
8 2015-03-02 16:00:00 0.708326
9 2015-03-03 12:00:00 0.257419
10 2015-03-04 08:00:00 0.618363
11 2015-03-05 04:00:00 0.121625
12 2015-03-06 00:00:00 0.637324
13 2015-03-06 20:00:00 0.058292
14 2015-03-07 16:00:00 0.047624
days=10
cutoff_date = df["SeriesDate"].iloc[-1] - pd.Timedelta(days=days)
print (cutoff_date)
2015-02-25 16:00:00
df1 = df[df['SeriesDate'] > cutoff_date]
print (df1)
SeriesDate Value_1
3 2015-02-26 12:00:00 0.310326
4 2015-02-27 08:00:00 0.660795
5 2015-02-28 04:00:00 0.354475
6 2015-03-01 00:00:00 0.061312
7 2015-03-01 20:00:00 0.443908
8 2015-03-02 16:00:00 0.708326
9 2015-03-03 12:00:00 0.257419
10 2015-03-04 08:00:00 0.618363
11 2015-03-05 04:00:00 0.121625
12 2015-03-06 00:00:00 0.637324
13 2015-03-06 20:00:00 0.058292
14 2015-03-07 16:00:00 0.047624
Another alternative is to use max, thanks Pocin:
cutoff_date = df["SeriesDate"].max() - pd.Timedelta(days=days)
print (cutoff_date)
2015-02-25 16:00:00
And if you want filter by dates only:
days=10
cutoff_date = df["SeriesDate"].dt.date.iloc[-1] - pd.Timedelta(days=days)
print (cutoff_date)
2015-02-25
EDIT:
You can filter out the weekend dates with dayofweek and then use isin:
start = pd.to_datetime('2015-02-24')
rng = pd.date_range(start, periods=15)
df = pd.DataFrame({'SeriesDate': rng, 'Value_1': np.random.random(15)})
print (df)
SeriesDate Value_1
0 2015-02-24 0.498387
1 2015-02-25 0.435767
2 2015-02-26 0.299233
3 2015-02-27 0.489286
4 2015-02-28 0.892167
5 2015-03-01 0.507436
6 2015-03-02 0.360427
7 2015-03-03 0.903886
8 2015-03-04 0.718148
9 2015-03-05 0.645489
10 2015-03-06 0.251285
11 2015-03-07 0.139275
12 2015-03-08 0.756845
13 2015-03-09 0.565863
14 2015-03-10 0.148077
days=10
last_day = df["SeriesDate"].dt.date.iloc[-1]
cutoff_date = last_day - pd.Timedelta(days=days)
rng = pd.date_range(cutoff_date, last_day)
rng = rng[(rng.dayofweek != 5) & (rng.dayofweek != 6)]
print (rng)
DatetimeIndex(['2015-03-02', '2015-03-03', '2015-03-04', '2015-03-05',
               '2015-03-06', '2015-03-09', '2015-03-10'],
              dtype='datetime64[ns]', freq=None)
df1 = df[df['SeriesDate'].isin(rng)]
print (df1)
   SeriesDate   Value_1
6  2015-03-02  0.360427
7  2015-03-03  0.903886
8  2015-03-04  0.718148
9  2015-03-05  0.645489
10 2015-03-06  0.251285
13 2015-03-09  0.565863
14 2015-03-10  0.148077
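Equivalently, you can skip building rng and filter with a boolean mask directly; a sketch reusing cutoff_date from above (pd.Timestamp makes the plain date comparable with the datetime column):
# keep rows on or after the cutoff that fall on a weekday (Monday=0 ... Friday=4)
m = (df['SeriesDate'] >= pd.Timestamp(cutoff_date)) & (df['SeriesDate'].dt.dayofweek < 5)
df1 = df[m]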