Dear experienced community,
I can't find an elegant solution to my problem.
I have a subsample of my dataset which I want to resample weekly, but starting some weeks before the first entry in my data frame (so there are a few weeks with 0 counts).
A sample of the data:
In:
print(df_pec.head())
Out:
Count Image_Sequence_DateTime
18 1 2015-11-06 03:22:19
21 1 2015-11-11 01:48:51
22 1 2015-11-11 07:30:47
37 1 2015-11-25 09:42:23
48 1 2015-12-05 12:12:34
With the earliest image sequence at:
In:
df_pec.Image_Sequence_DateTime.min()
Out:
2015-09-30 15:16:38
I have another function that gives me the starting point of the first week and the last point of the last week ever measured in that experiment, which are:
In:
print(s_startend)
Out:
Start 2015-09-28
End 2017-12-25
dtype: datetime64[ns]
My problem is that I want to resample df_pec weekly, but starting on the very first second of the very first day of the very first week of the experimental deployment (using s_startend as the reference).
I try:
df_pec = df_pec.resample('1W', on='Image_Sequence_DateTime').sum()
print(df_pec.head(),'\n',df_pec.tail())
Out:
Count
Image_Sequence_DateTime
2015-10-04 26.0
2015-10-11 92.0
2015-10-18 204.0
2015-10-25 193.0
2015-11-01 187.0
Count
Image_Sequence_DateTime
2017-11-19 20.0
2017-11-26 34.0
2017-12-03 16.0
2017-12-10 11.0
2017-12-17 3.0
This is pretty weird because it is even skipping the first days of data in df_pec (which start 2015-09-30 15:16:38).
And even if it worked, I have no way of telling the resampling to start and end at specified values (s_startend in my example), even when there are no records in the earliest and latest weeks of my subsample df_pec.
I thought about artificially adding two entries to df_pec with the real start and real end, but that is not very elegant and I don't want to add meaningless rows to my df.
Thank you very much for your wisdom!
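A sketch of one way around both issues, without adding fake rows (my assumption, not the original poster's code): since the Start in s_startend (2015-09-28) is a Monday, the anchored 'W-MON' frequency pins every bin to Monday-to-Monday, and a reindex against the full deployment range fills the empty leading and trailing weeks with 0. Note also that the original output is not actually skipping data: the default 'W' is 'W-SUN' with label='right', so the 2015-10-04 row is the week ending that Sunday, which contains 2015-09-30.
# Sketch: bins run [Monday, next Monday), labelled by the week's first day.
weekly = (df_pec.resample('W-MON', on='Image_Sequence_DateTime',
                          closed='left', label='left')
                .sum())
# Extend to the full experimental deployment; empty weeks become 0.
full_weeks = pd.date_range(s_startend['Start'], s_startend['End'], freq='W-MON')
weekly = weekly.reindex(full_weeks, fill_value=0)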
I have a large dataset with daily stock prices and company codes. I need to compute the 52 week high for each stock at every point in time, based on the previous 52 weeks. The problem is that some companies do not have data for certain periods, so if I use a fixed window size for a rolling max, the results are not correct.
First I tried this:
df['52wh'] = df["PRC"].groupby(df['id']).shift(1).rolling(253).max()
However, this doesn't work since it does not take into account the dates but only the previous 253 entries.
I also tried this:
df['date'] = pd.to_datetime(df['date'])
df['52wh'] = df.set_index('date').groupby('id').rolling(window=365, freq='D', min_periods=1).max()['PRC']
But this gives me this error:
ValueError: cannot handle a non-unique multi-index!
I am thinking a rolling function with custom window bounds (get_window_bounds) could work, but I don't know how to write a good one.
Here is an example of what the data frame looks like:
date id PRC
0 2010-01-09 10158 11.87
1 2010-01-10 10158 12.30
2 2010-01-11 10158 12.37
3 2010-01-12 10158 12.89
4 2010-02-08 10158 10.13
... ... ... ...
495711 2018-12-12 93188 14.48
495712 2018-12-13 93188 14.48
495713 2018-12-14 93188 14.48
495714 2018-12-17 93188 14.48
495715 2018-12-18 93188 NaN
Can someone help? Thanks in advance guys! :)
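Not from the original thread, but one way to sketch this is a time-based rolling window: when the index is a DatetimeIndex, rolling accepts an offset string such as '365D', which measures calendar time rather than row counts, so gaps in a company's history are handled correctly. This version includes the current day's price; to mimic the shift(1) in the question, shift within each group before rolling.
import pandas as pd

df['date'] = pd.to_datetime(df['date'])
df = df.sort_values(['id', 'date'])

# '365D' is a calendar window, so missing trading days are not a problem.
rolled = (df.set_index('date')
            .groupby('id')['PRC']
            .rolling('365D', min_periods=1)
            .max())

# rolled is ordered by (id, date), the same order as the sorted df,
# so its values can be assigned back positionally.
df['52wh'] = rolled.values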
I have a dataframe of transactions. One of my columns is the date (datetime64[ns]). I'm grouping by users (email as id). Something I'm interested in is the variability of the time between orders for each user. So what I'm looking for in the group by is the standard deviation of the difference between dates (in days) for each user. If the user has two or fewer transactions the answer should be 0. This is some of the dataframe (I changed some things manually):
df
email date
0 cuadros.paolo#gmail.com 2018-05-01 12:29:59
1 rlez_1202#hotmail.com 2018-07-11 13:43:22
2 cuadros.paolo#gmail.com 2018-09-21 12:29:23
3 paola.alvarado#rumah.com.pe 2018-09-01 09:21:43
4 luchosuito#gmail.com 2018-04-30 12:29:30
5 paola.alvarado#rumah.com.pe 2018-03-22 12:29:23
6 davida.alvarado.703#gmail.com 2018-07-21 12:29:17
7 cuadros.paolo#gmail.com 2018-08-11 12:29:41
8 rlez_1202#hotmail.com 2018-05-23 12:29:14
9 luchosuito#gmail.com 2018-06-01 12:29:17
10 jessica26011#hotmail.com 2018-07-18 12:29:20
11 cuadros.paolo#gmail.com 2018-08-21 12:29:40
12 rlez_1202#hotmail.com 2018-10-01 12:29:31
13 paola.alvarado#rumah.com.pe 2018-06-01 12:29:20
14 miluska-paico#hotmail.com 2018-05-21 12:29:18
15 cinthia_leon87#hotmail.com 2018-07-20 12:29:59
I've tried many ways, but still can't get it. Please help.
For sequential differences, which seems to make the most sense given your explanation:
df.sort_values('date').groupby('email').apply(lambda x: x.date.diff().std()).fillna(0)
Output:
email
cinthia_leon87#hotmail.com 0 days 00:00:00
cuadros.paolo#gmail.com 48 days 05:04:12.988006
davida.alvarado.703#gmail.com 0 days 00:00:00
jessica26011#hotmail.com 0 days 00:00:00
luchosuito#gmail.com 0 days 00:00:00
miluska-paico#hotmail.com 0 days 00:00:00
paola.alvarado#rumah.com.pe 14 days 18:10:16.764069
rlez_1202#hotmail.com 23 days 06:17:04.453408
dtype: timedelta64[ns]
.std() will be null for groups with only one non-null value, and since .diff() reduces the number of non-null observations by 1, this automatically returns NaN for any group with 2 or fewer measurements, which we then fill with 0.
Also be aware that pandas' default is the sample standard deviation (ddof=1, i.e. N-1 degrees of freedom).
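If you need the result as a number of days (float) rather than a Timedelta, one option (my addition, not part of the answer above) is to divide by pd.Timedelta(days=1):
# Hypothetical variant of the answer's expression, yielding float days:
std_days = (df.sort_values('date')
              .groupby('email')['date']
              .apply(lambda s: s.diff().std())
              .fillna(pd.Timedelta(0))
            / pd.Timedelta(days=1))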
I have a DataFrame df with sporadic daily business day rows (i.e., there is not always a row for every business day).
For each row in df I want to create a historical resampled mean dfm going back one month at a time. For example, if I have a row for 2018-02-22 then I want rolling means for rows in the following date ranges:
2018-01-23 : 2018-02-22
2017-12-23 : 2018-01-22
2017-11-23 : 2017-12-22
etc.
But I can't see a way to keep this pegged to the particular day of the month using conventional offsets. For example, if I do:
dfm = df.resample('30D').mean()
Then we see two problems:
It references the beginning of the DataFrame. In fact, I can't find a way to force .resample() to peg itself to the end of the DataFrame – even if I have it operate on df_reversed = df.loc[:'2018-02-22'].iloc[::-1]. Is there a way to "peg" the resampling to something other than the earliest date in the DataFrame? (And ideally pegged to each particular row as I run some lambda on the associated historical resampling from each row's date?)
It will drift over time, because not every month is 30 days long. So as I go back in time I will find that the interval 12 "months" prior ends 2017-02-27, not 2017-02-22 like I want.
Knowing that I want to resample by non-overlapping "months," the second problem can be well-defined for month days 29-31: For example, if I ask to resample for '2018-03-31' then the date ranges would end at the end of each preceding month:
2018-03-01 : 2018-03-31
2018-02-01 : 2018-02-28
2018-01-01 : 2018-01-31
etc.
Though again, I don't know: is there a good or easy way to do this in pandas?
tl;dr:
Given something like the following:
import pandas as pd
from pandas.tseries.offsets import DateOffset

someperiods = 20      # this can be a number of days covering many years
somefrequency = '8D'  # this can vary from 1D to maybe 10D
rng = pd.date_range('2017-01-03', periods=someperiods, freq=somefrequency)
df = pd.DataFrame({'x': rng.day}, index=rng)  # x in practice is exogenous data
df['MonthPrior'] = df.index - DateOffset(months=1)
Now:
For each row in df: calculate df['PreviousMonthMean'] = rolling average of all df.x in range [df.MonthPrior, df.index). In this example the resulting DataFrame would be:
Index x MonthPrior PreviousMonthMean
2017-01-03 3 2016-12-03 NaN
2017-01-11 11 2016-12-11 3
2017-01-19 19 2016-12-19 7
2017-01-27 27 2016-12-27 11
2017-02-04 4 2017-01-04 19
2017-02-12 12 2017-01-12 16.66666667
2017-02-20 20 2017-01-20 14.33333333
2017-02-28 28 2017-01-28 12
2017-03-08 8 2017-02-08 20
2017-03-16 16 2017-02-16 18.66666667
2017-03-24 24 2017-02-24 17.33333333
2017-04-01 1 2017-03-01 16
2017-04-09 9 2017-03-09 13.66666667
2017-04-17 17 2017-03-17 11.33333333
2017-04-25 25 2017-03-25 9
2017-05-03 3 2017-04-03 17
2017-05-11 11 2017-04-11 15
2017-05-19 19 2017-04-19 13
2017-05-27 27 2017-04-27 11
2017-06-04 4 2017-05-04 19
If we can get that far, then I need an efficient way to iterate this so that, for each row in df, I can aggregate consecutive but non-overlapping df['PreviousMonthMean'] values going back one calendar month at a time from the given DatetimeIndex...
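As a sketch for the first step only (my code, not a full answer to the iteration part): the [MonthPrior, index) mean can be computed with a plain loop over rows. It is not vectorized, but it reproduces the table above, including the NaN for the first row, whose window is empty.
import pandas as pd
from pandas.tseries.offsets import DateOffset

def previous_month_mean(frame):
    # Mean of frame.x over [t - 1 calendar month, t) for each timestamp t.
    means = []
    for t in frame.index:
        start = t - DateOffset(months=1)
        window = frame['x'][(frame.index >= start) & (frame.index < t)]
        means.append(window.mean())  # empty window -> NaN
    return pd.Series(means, index=frame.index)

df['PreviousMonthMean'] = previous_month_mean(df)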
Is it somehow possible to use resample on irregularly spaced data? (I know that the documentation says it's for "resampling of regular time-series data", but I wanted to try if it works on irregular data, too. Maybe it doesn't, or maybe I am doing something wrong.)
In my real data, I have generally 2 samples per hour, the time difference between them ranging usually from 20 to 40 minutes. So I was hoping to resample them to a regular hourly series.
To test if I am using it right, I used a random list of dates that I already had, so it may not be the best example, but at least a solution that works for it will be very robust. Here it is:
fraction number time
0 0.729797 0 2014-10-23 15:44:00
1 0.141084 1 2014-10-30 19:10:00
2 0.226900 2 2014-11-05 21:30:00
3 0.960937 3 2014-11-07 05:50:00
4 0.452835 4 2014-11-12 12:20:00
5 0.578495 5 2014-11-13 13:57:00
6 0.352142 6 2014-11-15 05:00:00
7 0.104814 7 2014-11-18 07:50:00
8 0.345633 8 2014-11-19 13:37:00
9 0.498004 9 2014-11-19 22:47:00
10 0.131665 10 2014-11-24 15:28:00
11 0.654018 11 2014-11-26 10:00:00
12 0.886092 12 2014-12-04 06:37:00
13 0.839767 13 2014-12-09 00:50:00
14 0.257997 14 2014-12-09 02:00:00
15 0.526350 15 2014-12-09 02:33:00
Now I want to resample these for example monthly:
df_new = df.set_index(pd.DatetimeIndex(df['time']))
df_new['fraction'] = df.fraction.resample('M',how='mean')
df_new['number'] = df.number.resample('M',how='mean')
But I get TypeError: Only valid with DatetimeIndex, TimedeltaIndex or PeriodIndex, but got an instance of 'RangeIndex' - unless I did something wrong with assigning the datetime index, it must be due to the irregularity?
So my questions are:
Am I using it correctly?
If 1==True, is there no straightforward way to resample the data?
(I only see a solution in first reindexing the data to get finer intervals, interpolating the values in between, and then reindexing to an hourly interval. If that is so, a question regarding the correct implementation of reindex will follow shortly.)
You don't need to explicitly use DatetimeIndex, just set 'time' as the index and pandas will take care of the rest, so long as your 'time' column has been converted to datetime using pd.to_datetime or some other method. Additionally, you don't need to resample each column individually if you're using the same method; just do it on the entire DataFrame.
# Convert to datetime, if necessary.
df['time'] = pd.to_datetime(df['time'])
# Set the index and resample (using month start freq for compact output).
df = df.set_index('time')
df = df.resample('MS').mean()
The resulting output:
fraction number
time
2014-10-01 0.435441 0.5
2014-11-01 0.430544 6.5
2014-12-01 0.627552 13.5
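For the hourly series the question is ultimately after (roughly two samples per hour), the same pattern should apply to the time-indexed frame before the monthly resample above; hours without a sample come out as NaN rows:
# Applied right after df.set_index('time'), before the monthly resample:
hourly = df.resample('H').mean()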
I am trying to use pandas to resample vessel tracking data from seconds to minutes using how='first'. The dataframe is called hg1s. The unique ID is called MMSI. The datetime index is TX_DTTM. Here is a data sample:
TX_DTTM MMSI LAT LON NS
2013-10-01 00:00:02 367542760 29.660550 -94.974195 15
2013-10-01 00:00:04 367542760 29.660550 -94.974195 15
2013-10-01 00:00:07 367451120 29.614161 -94.954459 0
2013-10-01 00:00:15 367542760 29.660210 -94.974069 15
2013-10-01 00:00:13 367542760 29.660210 -94.974069 15
The code to resample:
hg1s1min = hg1s.groupby('MMSI').resample('1Min', how='first')
And a data sample of the output:
hg1s1min[20000:20004]
MMSI TX_DTTM NS LAT LON
367448060 2013-10-21 00:42:00 NaN NaN NaN
2013-10-21 00:43:00 NaN NaN NaN
2013-10-21 00:44:00 NaN NaN NaN
2013-10-21 00:45:00 NaN NaN NaN
It's safe to assume that there are several data points within each minute, so I don't understand why this isn't picking up the first record with that method. I looked at this link: Pandas Downsampling Issue, because it seemed similar to my problem. I tried passing label='left' and label='right', but neither worked.
How do I return the first record in every minute for each MMSI?
As it turns out, the problem isn't with the method, but with my assumption about the data. The large data set covers a month, or 44640 minutes. While every record in my dataset has the relevant values, there isn't 100% overlap in time: MMSI = 367448060 is present at 2013-10-17 23:24:31 and again at 2013-10-29 20:57:32. Between those two data points there is no data to sample, which results in NaN rows, and that is correct.
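For reference, the how= keyword has since been removed from resample; a modern equivalent of the code above (a sketch, assuming TX_DTTM is the DatetimeIndex as described) is:
hg1s1min = (hg1s.groupby('MMSI')[['LAT', 'LON', 'NS']]
                .resample('1Min')
                .first())
# Optionally drop the all-NaN filler minutes where a vessel sent no data:
hg1s1min = hg1s1min.dropna(how='all')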