I am trying to merge specific rows of df2 into df1 based on their datetime indices, in a way that avoids lookahead bias, so that I can add external features (df2) to my main dataset (df1) for ML applications. The dataframes have different lengths, and their datetime indices do not increase at a constant rate. My current idea is to use nested loops and if statements, but that would be too slow, since both dataframes have over 30,000 rows. Is there a faster way to do this?
df1
index                a  b
2015-06-02 16:00:00  0  5
2015-06-05 16:00:00  1  6
2015-06-06 16:00:00  2  7
2015-06-11 16:00:00  3  8
2015-06-12 16:00:00  4  9
df2
index                 c   d
2015-06-02 09:03:00  10  16
2015-06-02 15:12:00  11  17
2015-06-02 16:07:00  12  18
...                  ..  ..
2015-06-12 15:29:00  13  19
2015-06-12 16:02:00  14  20
2015-06-12 17:33:00  15  21
df_combined
(Because the df2 rows at 06-05, 06-06, and 06-11 are hidden behind the "..." above, I have put NaN in the c and d columns for those rows to make the example easier to interpret.)
index                a  b    c    d
2015-06-02 16:00:00  0  5   11   17
2015-06-05 16:00:00  1  6  NaN  NaN
2015-06-06 16:00:00  2  7  NaN  NaN
2015-06-11 16:00:00  3  8  NaN  NaN
2015-06-12 16:00:00  4  9   13   19
df_combined.loc['2015-06-02 16:00:00', ['c', 'd']] and df_combined.loc['2015-06-12 16:00:00', ['c', 'd']] are (11, 17) and (13, 19) rather than (12, 18) and (14, 20), to avoid lookahead bias: in a live scenario, the df2 rows at 16:07:00 and 16:02:00 would not have been observed yet at 16:00:00.
IIUC, you need merge_asof. Assuming your indices are sorted in time, use direction='backward' (the default):
print(pd.merge_asof(df1, df2, left_index=True, right_index=True, direction='backward'))
# a b c d
# 2015-06-02 16:00:00 0 5 11 17
# 2015-06-05 16:00:00 1 6 12 18
# 2015-06-06 16:00:00 2 7 12 18
# 2015-06-11 16:00:00 3 8 12 18
# 2015-06-12 16:00:00 4 9 13 19
Note that the rows for 06-05, 06-06, and 06-11 are not NaN: they take the last df2 values available before those dates in your sample data (the row at 2015-06-02 16:07:00).
Note: if your dates are actually in a column named index rather than in the index itself, then do:
print(pd.merge_asof(df1, df2, on='index', direction='backward'))
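If df2 could ever contain a row stamped at exactly the same instant as a df1 row, and that value would not truly be observable yet, you can tighten this further with allow_exact_matches (a sketch; your sample data has no exact-timestamp collisions, so the output is unchanged here):

import pandas as pd

# Strictly earlier matches only: an observation stamped exactly at a df1
# timestamp is treated as not yet available.
combined = pd.merge_asof(df1, df2, left_index=True, right_index=True,
                         direction='backward', allow_exact_matches=False)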
I have a Pandas dataset with a monthly Date-time index and a column of outstanding orders (like below):
Date        orders
1991-01-01  nan
1991-02-01  nan
1991-03-01  24
1991-04-01  nan
1991-05-01  nan
1991-06-01  nan
1991-07-01  nan
1991-08-01  34
1991-09-01  nan
1991-10-01  nan
1991-11-01  22
1991-12-01  nan
I want to linearly interpolate the values to fill the nans. However, the interpolation has to be applied within non-rolling 6-month blocks. For example, one block is the rows from 1991-01-01 through 1991-06-01; within each block we interpolate both forward and backward, and where a block starts or ends with nans, the interpolation should descend to a value of 0 just outside the block. For the same dataset above, here is how I would like the end result to look:
Date        orders
1991-01-01  8
1991-02-01  16
1991-03-01  24
1991-04-01  18
1991-05-01  12
1991-06-01  6
1991-07-01  17
1991-08-01  34
1991-09-01  30
1991-10-01  26
1991-11-01  22
1991-12-01  11
I am lost on how to do this in Pandas however. Any ideas?
The idea is to group by 6-month blocks, prepend and append a 0 to each group's values, interpolate, and then drop the leading and trailing 0 from each group:
df['Date'] = pd.to_datetime(df['Date'])
f = lambda x: pd.Series([0] + x.tolist() + [0]).interpolate().iloc[1:-1]
df['orders'] = (df.groupby(pd.Grouper(freq='6MS', key='Date'))['orders']
                  .transform(f))
print (df)
Date orders
0 1991-01-01 8.0
1 1991-02-01 16.0
2 1991-03-01 24.0
3 1991-04-01 18.0
4 1991-05-01 12.0
5 1991-06-01 6.0
6 1991-07-01 17.0
7 1991-08-01 34.0
8 1991-09-01 30.0
9 1991-10-01 26.0
10 1991-11-01 22.0
11 1991-12-01 11.0
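For reference, a minimal construction of the sample frame above, so the snippet can be run end to end (values copied from the question's table):

import numpy as np
import pandas as pd

df = pd.DataFrame({
    'Date': pd.date_range('1991-01-01', periods=12, freq='MS'),
    'orders': [np.nan, np.nan, 24, np.nan, np.nan, np.nan,
               np.nan, 34, np.nan, np.nan, 22, np.nan],
})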
I have a dataframe, df, that looks like this:
Date Value
10/1/2019 5
10/2/2019 10
10/3/2019 15
10/4/2019 20
10/5/2019 25
10/6/2019 30
10/7/2019 35
I would like to calculate the delta for a period of 7 days
Desired output:
Date Delta
10/1/2019 30
This is what I am doing; a user has helped me with a variation of the code below:
df['Delta'] = df.iloc[0:, 1].sub(df.iloc[6:, 1]), Date=pd.Series(
    pd.date_range(pd.Timestamp('2019-10-01'), periods=7, freq='7d'))[['Delta', 'Date']]
Any suggestions are appreciated.
Let us try shift, using its freq argument to move the index back 6 days (Date must be datetime64):
s = df.set_index('Date')['Value']
# Shift the index back 6 days, then realign to the original dates:
# the value from 10/7 lands on 10/1; dates with no match become NaN.
df['New'] = s.shift(freq = '-6 D').reindex(s.index).values
df['DIFF'] = df['New'] - df['Value']
df
Out[39]:
Date Value New DIFF
0 2019-10-01 5 35.0 30.0
1 2019-10-02 10 NaN NaN
2 2019-10-03 15 NaN NaN
3 2019-10-04 20 NaN NaN
4 2019-10-05 25 NaN NaN
5 2019-10-06 30 NaN NaN
6 2019-10-07 35 NaN NaN
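An equivalent way to express the same lookup, if the negative-frequency shift reads awkwardly (a sketch; assumes Date is already datetime64 as above):

import pandas as pd

s = df.set_index('Date')['Value']
# For each date, look up the value 6 days later; dates with no match give NaN
ahead = s.reindex(s.index + pd.Timedelta(days=6))
df['DIFF'] = ahead.to_numpy() - s.to_numpy()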
I have two dfs; one is longer than the other, but they both have one column containing the same values.
Here is my first df called weather:
DATE AWND PRCP SNOW WT01 WT02 TAVG
0 2017-01-01 5.59 0.00 0.0 NaN NaN 46
1 2017-01-02 9.17 0.21 0.0 1.0 NaN 40
2 2017-01-03 10.74 0.58 0.0 1.0 NaN 42
3 2017-01-04 8.05 0.00 0.0 1.0 NaN 47
4 2017-01-05 7.83 0.00 0.0 NaN NaN 34
Here is my 2nd df called bike:
DATE LENGTH ID AMOUNT
0 2017-01-01 3 1 5
1 2017-01-01 6 2 10
2 2017-01-02 9 3 100
3 2017-01-02 12 4 250
4 2017-01-03 15 5 45
So I want to copy columns from the weather df onto my bike df, matching rows on the shared DATE column:
DATE LENGTH ID AMOUNT AWND SNOW TAVG
0 2017-01-01 3 1 5 5.59 0 46
1 2017-01-01 6 2 10 5.59 0 46
2 2017-01-02 9 3 100 9.17 0 40
3 2017-01-02 12 4 250 9.17 0 40
4 2017-01-03 15 5 45 10.74 0 42
Please help! Maybe some type of join can be used.
Use merge
In [93]: bike.merge(weather[['DATE', 'AWND', 'SNOW', 'TAVG']], on='DATE')
Out[93]:
DATE LENGTH ID AMOUNT AWND SNOW TAVG
0 2017-01-01 3 1 5 5.59 0.0 46
1 2017-01-01 6 2 10 5.59 0.0 46
2 2017-01-02 9 3 100 9.17 0.0 40
3 2017-01-02 12 4 250 9.17 0.0 40
4 2017-01-03 15 5 45 10.74 0.0 42
Just use the same indexes and simple slicing (using the question's frame names):
bike = bike.set_index('DATE')
bike[['SNOW', 'TAVG']] = weather.set_index('DATE')[['SNOW', 'TAVG']]
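This works through index alignment: weather has one row per DATE, so each repeated date in bike picks up the single matching weather value.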
If you check the pandas docs, they explain all the different types of "merges" (joins) that you can do between two dataframes.
The common syntax for a merge looks like: pd.merge(weather, bike, on='DATE')
You can also make the merge more fancy by adding any of the arguments listed below to your merge call (e.g. specifying whether you want an inner vs. a right join).
Here are the arguments the function takes based on the current pandas docs:
pandas.merge(left, right, how='inner', on=None, left_on=None, right_on=None, left_index=False, right_index=False, sort=False, suffixes=('_x', '_y'), copy=True, indicator=False, validate=None)
Hope it helps!
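For instance, a left join keeps every bike row even when a date has no weather record (a sketch; the default how='inner' drops unmatched dates):

import pandas as pd

combined = pd.merge(bike, weather[['DATE', 'AWND', 'SNOW', 'TAVG']],
                    on='DATE', how='left')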
I have a dataframe with PERIOD_START_TIME at 15-minute intervals, and I need to aggregate it to 1 hour, calculating the sum, average, and other statistics for almost every column in the dataframe (it has about 20 columns):
PERIOD_START_TIME ID val1 val2
06.21.2017 22:15:00 12 3 0
06.21.2017 22:30:00 12 5 6
06.21.2017 22:45:00 12 0 3
06.21.2017 23:00:00 12 5 2
...
06.21.2017 22:15:00 15 9 2
06.21.2017 22:30:00 15 0 2
06.21.2017 22:45:00 15 1 5
06.21.2017 23:00:00 15 0 1
...
Desired output:
PERIOD_START_TIME ID val1(avg) val1(sum) val1(max) ...
06.21.2017 22:00:00 12 3.25 13 5
...
06.21.2017 23:00:00 15 2.25 10 9 ...
And the same for val2 and every other column in the dataframe.
I have no idea how to group by period start time for each hour rather than for the whole day, or where to start.
I believe you need Series.dt.floor to round down to hours and then aggregate with agg:
# assumes PERIOD_START_TIME is already datetime64; if not, convert with pd.to_datetime first
df = df.groupby([df['PERIOD_START_TIME'].dt.floor('H'), 'ID']).agg(['mean', 'sum', 'max'])
# flatten the resulting MultiIndex columns
df.columns = df.columns.map('_'.join)
print (df)
val1_mean val1_sum val1_max val2_mean val2_sum \
PERIOD_START_TIME ID
2017-06-21 22:00:00 12 2.666667 8 5 3 9
15 3.333333 10 9 3 9
2017-06-21 23:00:00 12 5.000000 5 5 2 2
15 0.000000 0 0 1 1
val2_max
PERIOD_START_TIME ID
2017-06-21 22:00:00 12 6
15 5
2017-06-21 23:00:00 12 2
15 1
df = df.reset_index()
print (df)
PERIOD_START_TIME ID val1_mean val1_sum val1_max val2_mean val2_sum \
0 2017-06-21 22:00 12 2.666667 8 5 3 9
1 2017-06-21 22:00 15 3.333333 10 9 3 9
2 2017-06-21 23:00 12 5.000000 5 5 2 2
3 2017-06-21 23:00 15 0.000000 0 0 1 1
val2_max
0 6
1 5
2 2
3 1
Very similarly you can convert PERIOD_START_TIME to a pandas Period.
df['PERIOD_START_TIME'] = df['PERIOD_START_TIME'].dt.to_period('H')
df.groupby(['PERIOD_START_TIME', 'ID']).agg(['max', 'min', 'mean']).reset_index()
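If you only want specific statistics for specific columns, named aggregation is a tidy alternative (a sketch for two of the columns; extend the pattern to the rest):

out = (df.groupby([df['PERIOD_START_TIME'].dt.floor('H'), 'ID'])
         .agg(val1_mean=('val1', 'mean'),
              val1_sum=('val1', 'sum'),
              val2_max=('val2', 'max'))
         .reset_index())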
I have two solutions for this question already, but I'm not happy with either of them. The files I'm trying to read have about 12 million rows, and both solutions take a huge amount of time to process them, mainly because they are row-by-row operations.
So, I read the file like this:
In [1]: df = pd.read_csv('C:/Projects/NPMRDS/FHWA_TASK2-4_NJ_09_2013_TT.CSV')
df.head()
Out [1]: TMC DATE EPOCH Travel_TIME_ALL_VEHICLES Travel_TIME_PASSENGER_VEHICLES Travel_TIME_FREIGHT_TRUCKS
0 103N04152 9252013 211 12 12 NaN
1 103N04152 9262013 0 7 7 NaN
2 103N04152 9032013 177 8 8 NaN
3 103N04152 9042013 176 8 9 7
My problem is with the DATE and EPOCH columns. I want to merge them into a single datetime column.
DATE is in '%m%d%Y' format (with the leading zero missing)
EPOCH is the index of the 5-minute interval within the day:
Time EPOCH
00:00:00 => 0
00:05:00 => 1
...
...
12:00:00 => 144
12:05:00 => 145
...
...
23:50:00 => 286
23:55:00 => 287
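For example, EPOCH 211 corresponds to 211 × 5 = 1055 minutes after midnight, i.e. 17:35:00, which matches the first row below.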
What I want is something like this:
In [2]: df.head()
Out [2]: TMC DATE_TIME DATE EPOCH Travel_TIME_ALL_VEHICLES Travel_TIME_PASSENGER_VEHICLES Travel_TIME_FREIGHT_TRUCKS
0 103N04152 2013-09-25 17:35:00 9252013 211 12 12 NaN
1 103N04152 2013-09-26 00:00:00 9262013 0 7 7 NaN
2 103N04152 2013-09-03 14:45:00 9032013 177 8 8 NaN
3 103N04152 2013-09-04 14:40:00 9042013 176 8 9 7
Now, I can do this row-by-row, as I mentioned earlier, in any of three ways:
In [3]: df = pd.read_csv('C:/Projects/NPMRDS/FHWA_TASK2-4_NJ_09_2013_TT.CSV',
                         converters={'DATE': lambda x: datetime.datetime.strptime(x, '%m%d%Y'),
                                     'EPOCH': lambda x: str(datetime.timedelta(minutes=int(x)*5))},
                         parse_dates={'date_time': ['DATE', 'EPOCH']},
                         keep_date_col=True)
df.head()
Out [3]: date_time TMC DATE EPOCH Travel_TIME_ALL_VEHICLES Travel_TIME_PASSENGER_VEHICLES Travel_TIME_FREIGHT_TRUCKS
0 2013-09-25 17:35:00 103N04152 2013-09-25 17:35:00 12 12 NaN
1 2013-09-26 00:00:00 103N04152 2013-09-26 00:00:00 7 7 NaN
2 2013-09-03 14:45:00 103N04152 2013-09-03 14:45:00 8 8 NaN
3 2013-09-04 14:40:00 103N04152 2013-09-04 14:40:00 8 9 7
4 2013-09-05 09:35:00 103N04152 2013-09-05 09:35:00 10 10 NaN
In this method I lose the original formatting of DATE and EPOCH, but it doesn't really affect further computations on the dataframe. Instead of passing converters, I could have used date_parser. Or, after reading the data as in In [1], I could have done something like this:
In [4]: df = pd.read_csv('C:/Projects/NPMRDS/FHWA_TASK2-4_NJ_09_2013_TT.CSV')
df['date_time'] = pd.to_datetime([datetime.datetime.strptime(str(df['DATE'][x]), '%m%d%Y') + datetime.timedelta(minutes = int(df['EPOCH'][x]*5)) for x in range(len(df))])
df.head()
Out [4]: TMC DATE EPOCH Travel_TIME_ALL_VEHICLES Travel_TIME_PASSENGER_VEHICLES Travel_TIME_FREIGHT_TRUCKS DATE_TIME
0 103N04152 9252013 211 12 12 NaN 2013-09-25 17:35:00
1 103N04152 9262013 0 7 7 NaN 2013-09-26 00:00:00
2 103N04152 9032013 177 8 8 NaN 2013-09-03 14:45:00
3 103N04152 9042013 176 8 9 7 2013-09-04 14:40:00
4 103N04152 9052013 115 10 10 NaN 2013-09-05 09:35:00
This gives a more desirable result (don't worry about the column order), but it is still row-by-row and takes a huge amount of time.
Then there are pandas.to_datetime and pandas.to_timedelta, which run much faster than the methods described above. But I cannot merge the results together without resorting to string functions, which are again mainly row-by-row.
Does anyone know a better way to do this?
Edit: Solution!!!
In addition to chrisb's answer, I found a way to do it as well. The trick lies in setting the box parameter to False in pandas.to_datetime(). Like so:
df['DATE_TIME'] = pd.to_datetime(df['DATE'], format='%m%d%Y', box=False) + pd.to_timedelta(df['EPOCH']*5*60, unit='s')
Setting box to False returns a numpy.datetime64 array instead of a pandas.DatetimeIndex; more information can be found in the pandas.to_datetime() documentation. Also, pandas.to_timedelta() did not work for me with unit='m', hence the *5*60 with unit='s'.
Try this out; it reduced the runtime for me to about 1 s (compared to 15 s) on 4M rows of test data.
df = pd.read_csv('temp.csv')
# Parse the whole DATE column at once
df['DATE'] = pd.to_datetime(df['DATE'], format='%m%d%Y')
# Convert the 5-minute epoch index to a timedelta in minutes, column-wise
df['EPOCH'] = pd.to_timedelta((df['EPOCH'].astype(int) * 5).astype('timedelta64[m]'))
df['DATE_TIME'] = df['DATE'] + df['EPOCH']
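Both conversions operate on whole columns at once, which is why this is so much faster than the row-by-row converters and list comprehensions above.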