Why doesn't pandas .last('1W') show the last 7 days? - python

Trying to extract a max value from a Pandas DataFrame with a datetime index, I'm using .last('1W').
My data starts on the first day of the month (2020-09-01 00:00:00). It seems to work properly until I reach today (Monday, 07/09/2020). At first I supposed that .last() takes the last calendar week from a fixed starting day (Sunday, I guess) instead of the last 7 days (as I assumed), but what confuses me is that if I extend the hours, the first sample of the resulting dataframe shifts too...
I tried to simulate this with:
import pandas as pd
i = pd.date_range('2020-09-01', periods=24*6+5, freq='1H')
values = range(0, 24*6+5 )
df = pd.DataFrame({'A': values}, index=i)
print(df)
print(df.last('1W'))
With output:
A
2020-09-01 00:00:00 0
2020-09-01 01:00:00 1
2020-09-01 02:00:00 2
2020-09-01 03:00:00 3
2020-09-01 04:00:00 4
... ...
2020-09-07 00:00:00 144
2020-09-07 01:00:00 145
2020-09-07 02:00:00 146
2020-09-07 03:00:00 147
2020-09-07 04:00:00 148
[149 rows x 1 columns]
A
2020-09-06 05:00:00 125
2020-09-06 06:00:00 126
2020-09-06 07:00:00 127
2020-09-06 08:00:00 128
2020-09-06 09:00:00 129
2020-09-06 10:00:00 130
2020-09-06 11:00:00 131
2020-09-06 12:00:00 132
2020-09-06 13:00:00 133
2020-09-06 14:00:00 134
2020-09-06 15:00:00 135
2020-09-06 16:00:00 136
2020-09-06 17:00:00 137
2020-09-06 18:00:00 138
2020-09-06 19:00:00 139
2020-09-06 20:00:00 140
2020-09-06 21:00:00 141
2020-09-06 22:00:00 142
2020-09-06 23:00:00 143
2020-09-07 00:00:00 144
2020-09-07 01:00:00 145
2020-09-07 02:00:00 146
2020-09-07 03:00:00 147
2020-09-07 04:00:00 148
The first value in df is 0 at 2020-09-01 00:00:00
But,
When I apply .last('1W'), the selection goes from 2020-09-06 05:00:00 to the last value, instead of covering the last 7 days as I assumed, nor starting from 2020-09-06 00:00:00 as it would if the operator worked from Sunday to Sunday.

If you're looking for an offset of 7 days, why not use the Day offset, rather than the Week?
"1W" offset isn't the same as "7D" because "1W" starting on a Monday in a two-week dataset where the last row is Tuesday will have only 2 days. "2W" will include previous week (Monday-Sunday) + (Monday-Tuesday).
You can see the effects of changing the start day of the week by calling the offset class directly, like so:
week_offset = pd.tseries.offsets.Week(n=1, weekday=0) # week starting Monday
day_offset = pd.tseries.offsets.Day(n=7) # or simply "7D"
df.last(day_offset)
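If you want the literal last 7 days regardless of how the week is anchored, a boolean mask against the index makes the cutoff explicit (a minimal sketch on the df built above; the cutoff arithmetic is my own suggestion, not part of the original answer):
# keep only the rows within 7 days of the most recent timestamp
cutoff = df.index.max() - pd.Timedelta(days=7)
last_week = df[df.index > cutoff]
print(last_week['A'].max())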

Related

Counting each day in a dataframe (Not resetting on new year)

I have two years' worth of data in a DataFrame called df, with an additional column called dayNo which labels what day of the year it is. See below:
Code which handles dayNo:
df['dayNo'] = pd.to_datetime(df['TradeDate'], dayfirst=True).dt.day_of_year
I would like to amend dayNo so that when 2023 begins, dayNo doesn't reset to 1 but continues with 366, 367 and so on. Expected output below:
Maybe a completely different approach will have to be taken from what I've done above. Any help greatly appreciated, thanks!
You could define a start day to count days from, and use the number of days from that point forward as your column. An example using self-generated data to illustrate the point:
df = pd.DataFrame({"dates": pd.date_range("2022-12-29", "2023-01-03", freq="8H")})
start = pd.Timestamp("2021-12-31")
df["dayNo"] = df["dates"].sub(start).dt.days
dates dayNo
0 2022-12-29 00:00:00 363
1 2022-12-29 08:00:00 363
2 2022-12-29 16:00:00 363
3 2022-12-30 00:00:00 364
4 2022-12-30 08:00:00 364
5 2022-12-30 16:00:00 364
6 2022-12-31 00:00:00 365
7 2022-12-31 08:00:00 365
8 2022-12-31 16:00:00 365
9 2023-01-01 00:00:00 366
10 2023-01-01 08:00:00 366
11 2023-01-01 16:00:00 366
12 2023-01-02 00:00:00 367
13 2023-01-02 08:00:00 367
14 2023-01-02 16:00:00 367
15 2023-01-03 00:00:00 368
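If you'd rather not hard-code the anchor date, it can be derived from the data itself; a sketch under the assumption that the column is already a datetime (the variable names mirror the example above):
import pandas as pd

df = pd.DataFrame({"dates": pd.date_range("2022-12-29", "2023-01-03", freq="8H")})
# anchor on 31 December of the year before the earliest date in the data
start = pd.Timestamp(year=int(df["dates"].dt.year.min()) - 1, month=12, day=31)
df["dayNo"] = df["dates"].sub(start).dt.days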
You are nearly there with your solution; just apply the following for the final result:
df['dayNo'] = df['dayNo'].apply(lambda x: x if x >= df.loc[0].dayNo else x + df.loc[0].dayNo)
df
Out[108]:
dates TradeDate dayNo
0 2022-12-31 00:00:00 2022-12-31 365
1 2022-12-31 01:00:00 2022-12-31 365
2 2022-12-31 02:00:00 2022-12-31 365
3 2022-12-31 03:00:00 2022-12-31 365
4 2022-12-31 04:00:00 2022-12-31 365
.. ... ... ...
68 2023-01-02 20:00:00 2023-01-02 367
69 2023-01-02 21:00:00 2023-01-02 367
70 2023-01-02 22:00:00 2023-01-02 367
71 2023-01-02 23:00:00 2023-01-02 367
72 2023-01-03 00:00:00 2023-01-03 368
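The same adjustment can also be done without apply, using a vectorized where; a sketch that, like the lambda above, assumes the data spans a single year boundary:
first = df['dayNo'].iloc[0]
# rows that wrapped into the new year have a smaller day-of-year; shift them up by the offset
df['dayNo'] = df['dayNo'].where(df['dayNo'] >= first, df['dayNo'] + first)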
Let's suppose we have a pandas DataFrame built with the following script (inspired by Chrysophylaxs' DataFrame):
import pandas as pd
df = pd.DataFrame({'TradeDate': pd.date_range("2022-12-29", "2030-01-03", freq="8H")})
The DataFrame then has dates from 2022 to 2030:
TradeDate
0 2022-12-29 00:00:00
1 2022-12-29 08:00:00
2 2022-12-29 16:00:00
3 2022-12-30 00:00:00
4 2022-12-30 08:00:00
... ...
7682 2030-01-01 16:00:00
7683 2030-01-02 00:00:00
7684 2030-01-02 08:00:00
7685 2030-01-02 16:00:00
7686 2030-01-03 00:00:00
[7687 rows x 1 columns]
I propose the following code, commented inline, to reach our target:
import pandas as pd
df = pd.DataFrame({'TradeDate': pd.date_range("2022-12-29", "2030-01-03", freq="8H")})
# Initialize Days counter
dyc = df['TradeDate'].iloc[0].dayofyear
# Initialize Previous day of Year
prv_dof = dyc
def func(row):
    global dyc, prv_dof
    # Get the day of the year
    dof = row.iloc[0].dayofyear
    # If new day then increment the days counter
    if dof != prv_dof:
        dyc += 1
        prv_dof = dof
    return dyc
df['dayNo'] = df.apply(func, axis=1)
Resulting DataFrame:
TradeDate dayNo
0 2022-12-29 00:00:00 363
1 2022-12-29 08:00:00 363
2 2022-12-29 16:00:00 363
3 2022-12-30 00:00:00 364
4 2022-12-30 08:00:00 364
... ... ...
7682 2030-01-01 16:00:00 2923
7683 2030-01-02 00:00:00 2924
7684 2030-01-02 08:00:00 2924
7685 2030-01-02 16:00:00 2924
7686 2030-01-03 00:00:00 2925
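The same running count can be obtained without apply or globals by numbering the distinct calendar days; a sketch under the same setup as above (the factorize-based numbering is my suggestion, not part of the original answer):
import pandas as pd

df = pd.DataFrame({'TradeDate': pd.date_range("2022-12-29", "2030-01-03", freq="8H")})
# number each distinct calendar day in order of appearance, starting from the first row's day of year
first_doy = df['TradeDate'].iloc[0].dayofyear
df['dayNo'] = first_doy + pd.factorize(df['TradeDate'].dt.normalize())[0]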

Slicing pandas DateTimeIndex with steps

I often deal with pandas DataFrames with DateTimeIndexes, where I want to - for example - select only the parts where the hour of the index = 6. The only way I currently know how to do this is with reindexing:
df.reindex(pd.date_range(*df.index.to_series().agg([min, max]).apply(lambda ts: ts.replace(hour=6)), freq="24H"))
But this is quite unreadable and complex, which gets even worse when there is a MultiIndex with multiple DateTimeIndex levels. I know of methods that use .reset_index() and then either df.where or df.loc with conditional statements, but is there a simpler way to do this with regular IndexSlicing? I tried it as follows
df.loc[df.index.min().replace(hour=6)::pd.Timedelta(24, unit="H")]
but this gives a TypeError:
TypeError: '>=' not supported between instances of 'Timedelta' and 'int'
If your index is a DatetimeIndex, you can use:
>>> df[df.index.hour == 6]
val
2022-03-01 06:00:00 7
2022-03-02 06:00:00 31
2022-03-03 06:00:00 55
2022-03-04 06:00:00 79
2022-03-05 06:00:00 103
2022-03-06 06:00:00 127
2022-03-07 06:00:00 151
2022-03-08 06:00:00 175
2022-03-09 06:00:00 199
2022-03-10 06:00:00 223
2022-03-11 06:00:00 247
2022-03-12 06:00:00 271
2022-03-13 06:00:00 295
2022-03-14 06:00:00 319
2022-03-15 06:00:00 343
2022-03-16 06:00:00 367
2022-03-17 06:00:00 391
2022-03-18 06:00:00 415
2022-03-19 06:00:00 439
2022-03-20 06:00:00 463
2022-03-21 06:00:00 487
Setup:
dti = pd.date_range('2022-3-1', '2022-3-22', freq='1H')
df = pd.DataFrame({'val': range(1, len(dti)+1)}, index=dti)
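A related convenience, if the filter is only ever on the time of day, is DataFrame.at_time, which returns the same rows here (a short sketch against the setup above):
# select all rows whose index time-of-day is exactly 06:00
df.at_time('06:00')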

Pandas, insert datetime values that increase one hour for each row

I made predictions with an ARIMA model that predicts the next 168 hours (one week) of cars on the road. I also want to add a column called "datetime" that starts at 00:00 01-01-2021 and increases by one hour for each row.
Is there an intelligent way of doing this?
You can do:
x=pd.to_datetime('2021-01-01 00:00')
y=pd.to_datetime('2021-01-07 23:59')
pd.Series(pd.date_range(x,y,freq='H'))
Output:
Out[153]:
0 2021-01-01 00:00:00
1 2021-01-01 01:00:00
2 2021-01-01 02:00:00
3 2021-01-01 03:00:00
4 2021-01-01 04:00:00
163 2021-01-07 19:00:00
164 2021-01-07 20:00:00
165 2021-01-07 21:00:00
166 2021-01-07 22:00:00
167 2021-01-07 23:00:00
Length: 168, dtype: datetime64[ns]
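To attach this directly to a predictions frame, the range can also be built by length instead of by end timestamp; a sketch where preds_df and its 168 hourly rows are hypothetical placeholders for the ARIMA output:
import pandas as pd

# hypothetical frame holding 168 hourly predictions
preds_df = pd.DataFrame({'cars': range(168)})
preds_df['datetime'] = pd.date_range('2021-01-01 00:00', periods=len(preds_df), freq='H')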

Python Pandas: Aggregate data by hour and display it instead of the index

I would like to aggregate some data by hour using pandas and display the date instead of an index.
The code I have right now is the following:
import pandas as pd
import numpy as np
dates = pd.date_range('1/1/2011', periods=20, freq='25min')
data = pd.Series(np.random.randint(100, size=20), index=dates)
result = data.groupby(data.index.hour).sum().reset_index(name='Sum')
print(result)
Which displays something along the lines of:
index Sum
0 0 131
1 1 116
2 2 180
3 3 62
4 4 95
5 5 107
6 6 89
7 7 169
The problem is that instead of the index I want to display the date associated with that hour.
The result I'm trying to achieve is the following:
index Sum
0 2011-01-01 01:00:00 131
1 2011-01-01 02:00:00 116
2 2011-01-01 03:00:00 180
3 2011-01-01 04:00:00 62
4 2011-01-01 05:00:00 95
5 2011-01-01 06:00:00 107
6 2011-01-01 07:00:00 89
7 2011-01-01 08:00:00 169
Is there any way I can do that easily using pandas?
data.groupby(data.index.strftime('%Y-%m-%d %H:00:00')).sum().reset_index(name='Sum')
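Run against the question's setup, the group keys produced by strftime are plain strings; if real Timestamps are needed afterwards they can be converted back (a small sketch, assuming data is the Series from the question; the conversion step is my addition):
result = data.groupby(data.index.strftime('%Y-%m-%d %H:00:00')).sum().reset_index(name='Sum')
# the 'index' column holds strings; convert back to Timestamps if needed
result['index'] = pd.to_datetime(result['index'])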
You could use resample.
data.resample('H').sum()
Output:
2011-01-01 00:00:00 84
2011-01-01 01:00:00 121
2011-01-01 02:00:00 160
2011-01-01 03:00:00 70
2011-01-01 04:00:00 88
2011-01-01 05:00:00 131
2011-01-01 06:00:00 56
2011-01-01 07:00:00 109
Freq: H, dtype: int32
Option #2
data.groupby(data.index.floor('H')).sum()
Output:
2011-01-01 00:00:00 84
2011-01-01 01:00:00 121
2011-01-01 02:00:00 160
2011-01-01 03:00:00 70
2011-01-01 04:00:00 88
2011-01-01 05:00:00 131
2011-01-01 06:00:00 56
2011-01-01 07:00:00 109
dtype: int32
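If the goal is the asker's exact layout, with the hour shown as a column rather than as the index, either result can be flattened with reset_index (a short sketch, again assuming data is the Series from the question):
# hourly sums with the timestamp as a regular column instead of the index
result = data.resample('H').sum().reset_index(name='Sum')
print(result)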

Subset selected days data in Python

I have some time series data as:
import pandas as pd
index = pd.date_range('06/01/2014',periods=24*30,freq='H')
df1 = pd.DataFrame(range(len(index)),index=index)
Now I want to subset data of below dates
selec_dates = ['2014-06-10','2014-06-15','2014-06-20']
I tried the following statement but it is not working:
sub_data = df1.loc[df1.index.isin(pd.to_datetime(selec_dates))]
Where am I going wrong? Is there any other approach to subset selected days' data?
You need to compare dates, and for the membership test use numpy.in1d:
import numpy as np
sub_data = df1.loc[np.in1d(df1.index.date, pd.to_datetime(selec_dates).date)]
print (sub_data)
a
2014-06-10 00:00:00 216
2014-06-10 01:00:00 217
2014-06-10 02:00:00 218
2014-06-10 03:00:00 219
2014-06-10 04:00:00 220
2014-06-10 05:00:00 221
2014-06-10 06:00:00 222
2014-06-10 07:00:00 223
2014-06-10 08:00:00 224
2014-06-10 09:00:00 225
2014-06-10 10:00:00 226
...
If you want to use isin, it is necessary to create a Series with the same index:
sub_data = df1.loc[pd.Series(df1.index.date, index=df1.index)
.isin(pd.to_datetime(selec_dates).date)]
print (sub_data)
a
2014-06-10 00:00:00 216
2014-06-10 01:00:00 217
2014-06-10 02:00:00 218
2014-06-10 03:00:00 219
2014-06-10 04:00:00 220
2014-06-10 05:00:00 221
2014-06-10 06:00:00 222
2014-06-10 07:00:00 223
2014-06-10 08:00:00 224
2014-06-10 09:00:00 225
2014-06-10 10:00:00 226
2014-06-10 11:00:00 227
...
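On newer pandas versions the same membership test can be written directly on the index, without numpy or an auxiliary Series; a sketch using the question's setup (normalize() strips the time of day so whole calendar days match):
import pandas as pd

index = pd.date_range('06/01/2014', periods=24*30, freq='H')
df1 = pd.DataFrame(range(len(index)), index=index)
selec_dates = ['2014-06-10', '2014-06-15', '2014-06-20']
# normalize() drops the time component, so each hourly stamp compares as its calendar day
sub_data = df1[df1.index.normalize().isin(pd.to_datetime(selec_dates))]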
I'm sorry, I misunderstood your question.
df1[pd.Series(df1.index.date, index=df1.index).isin(pd.to_datetime(selec_dates).date)]
This should perform what is needed.
original answer
Please check the pandas documentation on selection
You can easily do
sub_data = df1.loc[pd.to_datetime(selec_dates)]
You can use the .query() method:
In [202]: df1.query('index.normalize() in @selec_dates')
Out[202]:
0
2014-06-10 00:00:00 216
2014-06-10 01:00:00 217
2014-06-10 02:00:00 218
2014-06-10 03:00:00 219
2014-06-10 04:00:00 220
2014-06-10 05:00:00 221
2014-06-10 06:00:00 222
2014-06-10 07:00:00 223
2014-06-10 08:00:00 224
2014-06-10 09:00:00 225
... ...
2014-06-20 14:00:00 470
2014-06-20 15:00:00 471
2014-06-20 16:00:00 472
2014-06-20 17:00:00 473
2014-06-20 18:00:00 474
2014-06-20 19:00:00 475
2014-06-20 20:00:00 476
2014-06-20 21:00:00 477
2014-06-20 22:00:00 478
2014-06-20 23:00:00 479
[72 rows x 1 columns]
Edit: I have been made aware this only works if you are working with a daterange in the same month and year as in your query. For a more general (and better answer) see #jezrael solution.
You can use np.in1d and .day on your index if you wanted to do it as you tried:
selec_dates = ['2014-06-10','2014-06-15','2014-06-20']
df1.loc[np.in1d(df1.index.day, (pd.to_datetime(selec_dates).day))]
This gives you as you require:
2014-06-10 00:00:00 216
2014-06-10 01:00:00 217
2014-06-10 02:00:00 218
2014-06-10 03:00:00 219
2014-06-10 04:00:00 220
2014-06-10 05:00:00 221
2014-06-10 06:00:00 222
2014-06-10 07:00:00 223
2014-06-10 08:00:00 224
2014-06-10 09:00:00 225
2014-06-10 10:00:00 226
2014-06-10 11:00:00 227
2014-06-10 12:00:00 228
2014-06-10 13:00:00 229
2014-06-10 14:00:00 230
2014-06-10 15:00:00 231
2014-06-10 16:00:00 232
2014-06-10 17:00:00 233
2014-06-10 18:00:00 234
2014-06-10 19:00:00 235
2014-06-10 20:00:00 236
2014-06-10 21:00:00 237
2014-06-10 22:00:00 238
2014-06-10 23:00:00 239
2014-06-15 00:00:00 336
2014-06-15 01:00:00 337
2014-06-15 02:00:00 338
2014-06-15 03:00:00 339
2014-06-15 04:00:00 340
2014-06-15 05:00:00 341
...
2014-06-15 18:00:00 354
2014-06-15 19:00:00 355
2014-06-15 20:00:00 356
2014-06-15 21:00:00 357
2014-06-15 22:00:00 358
2014-06-15 23:00:00 359
2014-06-20 00:00:00 456
2014-06-20 01:00:00 457
2014-06-20 02:00:00 458
2014-06-20 03:00:00 459
2014-06-20 04:00:00 460
2014-06-20 05:00:00 461
2014-06-20 06:00:00 462
2014-06-20 07:00:00 463
2014-06-20 08:00:00 464
2014-06-20 09:00:00 465
2014-06-20 10:00:00 466
2014-06-20 11:00:00 467
2014-06-20 12:00:00 468
2014-06-20 13:00:00 469
2014-06-20 14:00:00 470
2014-06-20 15:00:00 471
2014-06-20 16:00:00 472
2014-06-20 17:00:00 473
2014-06-20 18:00:00 474
2014-06-20 19:00:00 475
2014-06-20 20:00:00 476
2014-06-20 21:00:00 477
2014-06-20 22:00:00 478
2014-06-20 23:00:00 479
[72 rows x 1 columns]
I used these Sources for this answer:
- Selecting a subset of a Pandas DataFrame indexed by DatetimeIndex with a list of TimeStamps
- In Python-Pandas, How can I subset a dataframe by specific datetime index values?
- return pandas DF column with the number of days elapsed between index and today's date
- Get weekday/day-of-week for Datetime column of DataFrame
- https://stackoverflow.com/a/36893416/2254228
Use the string repr of the date, leaving out the time component of the day.
pd.concat([df1['2014-06-10'], df1['2014-06-15'], df1['2014-06-20']])
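On recent pandas versions, selecting rows with a bare date string like df1['2014-06-10'] is deprecated in favour of .loc; a hedged equivalent of the line above:
pd.concat([df1.loc['2014-06-10'], df1.loc['2014-06-15'], df1.loc['2014-06-20']])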
