Linearly interpolating Pandas Time Series - python

I am pulling exchange-rate data using pandas. The data does not have values for every single day. I'd like to fill in the missing dates using pandas' interpolate function so that all dates are included in the index. For example, 2010-01-09 and 2010-01-10 are both missing. The interpolate call seems not to be doing anything, but I can't figure out why.
from pandas_datareader import data
can = data.get_data_fred('DEXCAUS')
can = can.interpolate(method='linear')
can = can.dropna()
print(can.head(10))
Output:
DEXCAUS
DATE
2010-01-04 1.0377
2010-01-05 1.0371
2010-01-06 1.0333
2010-01-07 1.0351
2010-01-08 1.0345
2010-01-11 1.0317
2010-01-12 1.0374
2010-01-13 1.0319
2010-01-14 1.0260
2010-01-15 1.0287
Desired Output:
DEXCAUS
DATE
2010-01-04 1.0377
2010-01-05 1.0371
2010-01-06 1.0333
2010-01-07 1.0351
2010-01-08 1.0345
2010-01-09 some value..
2010-01-10 some value..
2010-01-11 1.0317
2010-01-12 1.0374
2010-01-13 1.0319
2010-01-14 1.0260
2010-01-15 1.0287

You need to resample first: interpolate can only fill rows that already exist, and your index is simply missing 2010-01-09 and 2010-01-10. Resampling to daily frequency inserts those rows as NaN, which interpolate then fills:
can.resample('D').interpolate(method='linear')
Out:
DEXCAUS
DATE
2010-01-04 1.037700
2010-01-05 1.037100
2010-01-06 1.033300
2010-01-07 1.035100
2010-01-08 1.034500
2010-01-09 1.033567
2010-01-10 1.032633
2010-01-11 1.031700
2010-01-12 1.037400
2010-01-13 1.031900
2010-01-14 1.026000
2010-01-15 1.028700
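To see the two steps end to end, here is a minimal, self-contained sketch (a two-point toy series standing in for the FRED data, so no network access is needed): resample('D') inserts the missing calendar days as NaN rows, and interpolate then fills them linearly.

```python
import pandas as pd

# Toy series with two missing calendar days (2010-01-09, 2010-01-10),
# mimicking the gap in the FRED data above.
idx = pd.to_datetime(['2010-01-08', '2010-01-11'])
s = pd.Series([1.0345, 1.0317], index=idx, name='DEXCAUS')

# Resample to daily frequency (inserts NaN rows for the missing days),
# then fill the new rows by linear interpolation.
filled = s.resample('D').interpolate(method='linear')
print(filled)
```

The two interpolated values land a third and two thirds of the way between the surrounding observations, matching the output shown above.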

Related

Python resample to only keep every 5th day by group

I have a dataframe consisting of daily stock observations, with date and PERMNO (identifier) columns. I want to resample the dataframe so that it contains only every 5th trading day's observation for each stock. The dataframe looks something like the below:
[10610 rows x 3 columns]
PERMNO date RET gret cumret_5d
0 10001.0 2010-01-04 -0.004856 0.995144 NaN
1 10001.0 2010-01-05 -0.005856 0.994144 NaN
2 10001.0 2010-01-06 0.011780 1.011780 NaN
3 10001.0 2010-01-07 -0.033940 0.966060 NaN
4 10001.0 2010-01-08 0.038150 1.038150 3.888603e-03
5 10001.0 2010-01-11 0.015470 1.015470 2.439321e-02
6 10001.0 2010-01-12 -0.004760 0.995240 2.552256e-02
7 10001.0 2010-01-13 -0.003350 0.996650 1.018706e-02
8 10001.0 2010-01-14 -0.001928 0.998072 4.366128e-02
9 10001.0 2010-01-15 -0.007730 0.992270 -2.462285e-03
10 10002.0 2010-01-05 -0.011690 0.988310 NaN
11 10002.0 2010-01-06 0.011826 1.011826 NaN
12 10002.0 2010-01-07 -0.021420 0.978580 NaN
13 10002.0 2010-01-08 0.004974 1.004974 NaN
14 10002.0 2010-01-11 -0.023760 0.976240 -3.992141e-02
15 10002.0 2010-01-12 0.002028 1.002028 -2.659527e-02
16 10002.0 2010-01-13 0.009780 1.009780 -2.856358e-02
17 10002.0 2010-01-14 0.017380 1.017380 9.953183e-03
18 10002.0 2010-01-15 -0.008865 0.991135 -3.954383e-03
19 10002.0 2010-02-18 -0.006958 0.993042 1.318849e-02
The result I want to produce is:
[10610 rows x 3 columns]
PERMNO date RET gret cumret_5d
4 10001.0 2010-01-08 0.038150 1.038150 3.888603e-03
9 10001.0 2010-01-15 -0.007730 0.992270 -2.462285e-03
13 10002.0 2010-01-08 0.004974 1.004974 NaN
18 10002.0 2010-01-15 -0.008865 0.991135 -3.954383e-03
I.e. I want to keep observations for the dates (2010-01-08), (2010-01-15), (2010-01-22), ... continuing up until today. The problem is that not every stock contains the same dates (some may have their first trading day in the middle of a month). Further, every 5th trading day is not simply every 7th calendar day, due to holidays.
I have tried using
crsp_daily = crsp_daily.groupby('PERMNO').resample('5D',on='date')
which just returned a lazy resampler object rather than a resampled dataframe:
Out:
DatetimeIndexResamplerGroupby [freq=<Day>, axis=0, closed=left, label=left, convention=e, origin=start_day]
Any ideas on how to solve this problem?
You could loop through the values of PERMNO and then for each subset use .iloc[::5] to get every 5th row. Then concat each resulting DataFrame together:
dfs = []
for val in crsp_daily['PERMNO'].unique():
    dfs.append(crsp_daily[crsp_daily['PERMNO'] == val].iloc[::5])
result = pd.concat(dfs)
For future reference, I solved it by:
def remove_nonrebalancing_dates(df, gap):
    count = pd.DataFrame(df.set_index('date').groupby('date'), columns=['date', 'tmp']).reset_index()
    del count['tmp']
    count['index'] = count['index'] + 1
    count = count[count['index'].isin(range(gap, len(count['index']) + 1, gap))]
    df = df[df['date'].isin(count['date'])]
    return df
A dataframe containing only every 5th trading day can then be obtained with:
df = remove_nonrebalancing_dates(df, 5)
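As a further alternative (not from either answer above), groupby plus cumcount can select each group's 5th, 10th, ... trading day without an explicit loop, which matches the desired output's row positions. A minimal sketch with made-up PERMNO/date data:

```python
import pandas as pd

# Made-up data: two stocks with different trading-day coverage.
crsp_daily = pd.DataFrame({
    'PERMNO': [10001.0] * 7 + [10002.0] * 6,
    'date': pd.to_datetime(
        ['2010-01-04', '2010-01-05', '2010-01-06', '2010-01-07',
         '2010-01-08', '2010-01-11', '2010-01-12',
         '2010-01-05', '2010-01-06', '2010-01-07', '2010-01-08',
         '2010-01-11', '2010-01-12']),
})

# cumcount numbers rows 0, 1, 2, ... within each PERMNO; keeping rows
# where cumcount % 5 == 4 selects each group's 5th, 10th, ... observation,
# counted in trading days rather than calendar days.
every_5th = crsp_daily[crsp_daily.groupby('PERMNO').cumcount() % 5 == 4]
print(every_5th)
```

Because the count restarts per PERMNO, stocks that start trading later still get their own 5th trading day, and holidays don't matter.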

Mapping ranges of date in pandas dataframe

I would like to map values defined in a dictionary of date: value pairs onto a DataFrame indexed by dates.
Consider the following example:
import pandas as pd
df = pd.DataFrame(range(19), index=pd.date_range(start="2010-01-01", end="2010-01-10", freq="12H"))
dct = {
    "2009-01-01": 1,
    "2010-01-05": 2,
    "2020-01-01": 3,
}
I would like to get something like this:
df
0 test
2010-01-01 00:00:00 0 1.0
2010-01-01 12:00:00 1 1.0
2010-01-02 00:00:00 2 1.0
2010-01-02 12:00:00 3 1.0
2010-01-03 00:00:00 4 1.0
2010-01-03 12:00:00 5 1.0
2010-01-04 00:00:00 6 1.0
2010-01-04 12:00:00 7 1.0
2010-01-05 00:00:00 8 2.0
2010-01-05 12:00:00 9 2.0
2010-01-06 00:00:00 10 2.0
2010-01-06 12:00:00 11 2.0
2010-01-07 00:00:00 12 2.0
2010-01-07 12:00:00 13 2.0
2010-01-08 00:00:00 14 2.0
2010-01-08 12:00:00 15 2.0
2010-01-09 00:00:00 16 2.0
2010-01-09 12:00:00 17 2.0
2010-01-10 00:00:00 18 2.0
I have tried the following, but I get a column of NaN:
df["test"] = pd.Series(df.index.map(dct), index=df.index).ffill()
Any suggestions?
The values are missing because the types don't match: the dict keys are strings, while the DataFrame has datetimes in a DatetimeIndex. Both sides need the same type. Here, build a helper Series from the dictionary with a datetime index, and use Series.asfreq to add the datetimes in between:
dct = {
    "2009-01-01": 1,
    "2010-01-05": 2,
    "2020-01-01": 3,
}
s = pd.Series(dct).rename(lambda x: pd.to_datetime(x)).asfreq('d', method='ffill')
df["test"] = df.index.to_series().dt.normalize().map(s)
print(df)
0 test
2010-01-01 00:00:00 0 1
2010-01-01 12:00:00 1 1
2010-01-02 00:00:00 2 1
2010-01-02 12:00:00 3 1
2010-01-03 00:00:00 4 1
2010-01-03 12:00:00 5 1
2010-01-04 00:00:00 6 1
2010-01-04 12:00:00 7 1
2010-01-05 00:00:00 8 2
2010-01-05 12:00:00 9 2
2010-01-06 00:00:00 10 2
2010-01-06 12:00:00 11 2
2010-01-07 00:00:00 12 2
2010-01-07 12:00:00 13 2
2010-01-08 00:00:00 14 2
2010-01-08 12:00:00 15 2
2010-01-09 00:00:00 16 2
2010-01-09 12:00:00 17 2
2010-01-10 00:00:00 18 2
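A related sketch (an alternative approach, not the answer above): Series.asof looks up, for each timestamp, the most recent dictionary key at or before it, which avoids materializing every intermediate date with asfreq.

```python
import pandas as pd

df = pd.DataFrame(range(19),
                  index=pd.date_range(start="2010-01-01", end="2010-01-10", freq="12h"))
dct = {"2009-01-01": 1, "2010-01-05": 2, "2020-01-01": 3}

s = pd.Series(dct)
s.index = pd.to_datetime(s.index)  # string keys -> datetimes so lookups match

# asof returns, for each timestamp, the value of the last key <= it.
df["test"] = s.asof(df.index).to_numpy()
print(df)
```

Note that asof requires the helper Series' index to be sorted, which pd.to_datetime on these keys already satisfies.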

Pandas fillna() method not filling all missing values

I have rain and temp data sourced from Environment Canada but it contains some NaN values.
start_date = '2015-12-31'
end_date = '2021-05-26'
mask = (data['date'] > start_date) & (data['date'] <= end_date)
df = data.loc[mask]
print(df)
date time rain_gauge_value temperature
8760 2016-01-01 00:00:00 0.0 -2.9
8761 2016-01-01 01:00:00 0.0 -3.4
8762 2016-01-01 02:00:00 0.0 -3.6
8763 2016-01-01 03:00:00 0.0 -3.6
8764 2016-01-01 04:00:00 0.0 -4.0
... ... ... ... ...
56107 2021-05-26 19:00:00 0.0 22.0
56108 2021-05-26 20:00:00 0.0 21.5
56109 2021-05-26 21:00:00 0.0 21.1
56110 2021-05-26 22:00:00 0.0 19.5
56111 2021-05-26 23:00:00 0.0 18.5
[47352 rows x 4 columns]
Find the rows with a NaN value
null = df[df['rain_gauge_value'].isnull()]
print(null)
date time rain_gauge_value temperature
11028 2016-04-04 12:00:00 NaN -6.9
11986 2016-05-14 10:00:00 NaN NaN
11987 2016-05-14 11:00:00 NaN NaN
11988 2016-05-14 12:00:00 NaN NaN
11989 2016-05-14 13:00:00 NaN NaN
... ... ... ... ...
49024 2020-08-04 16:00:00 NaN NaN
49025 2020-08-04 17:00:00 NaN NaN
50505 2020-10-05 09:00:00 NaN 11.3
54083 2021-03-03 11:00:00 NaN -5.1
54084 2021-03-03 12:00:00 NaN -4.5
[6346 rows x 4 columns]
This is my dataframe I want to use to fill the NaN values
print(rain_df)
date time rain_gauge_value temperature
0 2015-12-28 00:00:00 0.1 -6.0
1 2015-12-28 01:00:00 0.0 -7.0
2 2015-12-28 02:00:00 0.0 -8.0
3 2015-12-28 03:00:00 0.0 -8.0
4 2015-12-28 04:00:00 0.0 -7.0
... ... ... ... ...
48043 2021-06-19 19:00:00 0.6 20.0
48044 2021-06-19 20:00:00 0.6 19.0
48045 2021-06-19 21:00:00 0.8 18.0
48046 2021-06-19 22:00:00 0.4 17.0
48047 2021-06-19 23:00:00 0.0 16.0
[48048 rows x 4 columns]
But when I use the fillna() method, some of the values don't get substituted.
null = null.fillna(rain_df)
null = null[null['rain_gauge_value'].isnull()]
print(null)
date time rain_gauge_value temperature
48057 2020-06-25 09:00:00 NaN NaN
48058 2020-06-25 10:00:00 NaN NaN
48059 2020-06-25 11:00:00 NaN NaN
48060 2020-06-25 12:00:00 NaN NaN
48586 2020-07-17 10:00:00 NaN NaN
48587 2020-07-17 11:00:00 NaN NaN
48588 2020-07-17 12:00:00 NaN NaN
49022 2020-08-04 14:00:00 NaN NaN
49023 2020-08-04 15:00:00 NaN NaN
49024 2020-08-04 16:00:00 NaN NaN
49025 2020-08-04 17:00:00 NaN NaN
50505 2020-10-05 09:00:00 NaN 11.3
54083 2021-03-03 11:00:00 NaN -5.1
54084 2021-03-03 12:00:00 NaN -4.5
How can I resolve this issue?
With fillna you generally need to say how to fill: from the previous/next value, with the column mean, and so on. For example:
nulls_index = df['rain_gauge_value'].isnull()
df = df.ffill()  # forward-fill as an example
nulls_after_fill = df[nulls_index]
take a look at:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html
You need to tell pandas how you want to patch. It may be obvious to you that you want to use the "patch" dataframe's values where the dates and times line up, but it won't be obvious to pandas. See my dummy example:
from datetime import date, time

import numpy as np
import pandas as pd

raw = pd.DataFrame(dict(date=[date(2015,12,28), date(2015,12,28)], time=[time(0,0,0), time(0,0,1)], temp=[1., np.nan], rain=[4., np.nan]))
raw
date time temp rain
0 2015-12-28 00:00:00 1.0 4.0
1 2015-12-28 00:00:01 NaN NaN
patch = pd.DataFrame(dict(date=[date(2015,12,28), date(2015,12,28)], time=[time(0,0,0),time(0,0,1)],temp=[5.,5.],rain=[10.,10.]))
patch
date time temp rain
0 2015-12-28 00:00:00 5.0 10.0
1 2015-12-28 00:00:01 5.0 10.0
You need the indexes of raw and patch to correspond to how you want to patch the raw data (in this case, based on date and time):
raw.set_index(['date','time']).fillna(patch.set_index(['date','time']))
returns
temp rain
date time
2015-12-28 00:00:00 1.0 4.0
00:00:01 5.0 10.0
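As a side note, combine_first expresses the same patching as a one-liner: it keeps the caller's values where present and falls back to the other frame where they are NaN. A self-contained sketch with the same dummy data:

```python
from datetime import date, time

import numpy as np
import pandas as pd

raw = pd.DataFrame({'date': [date(2015, 12, 28)] * 2,
                    'time': [time(0, 0, 0), time(0, 0, 1)],
                    'temp': [1.0, np.nan],
                    'rain': [4.0, np.nan]})
patch = pd.DataFrame({'date': [date(2015, 12, 28)] * 2,
                      'time': [time(0, 0, 0), time(0, 0, 1)],
                      'temp': [5.0, 5.0],
                      'rain': [10.0, 10.0]})

# Align both frames on (date, time); raw's values win, patch fills the NaNs.
fixed = raw.set_index(['date', 'time']).combine_first(patch.set_index(['date', 'time']))
print(fixed)
```

Unlike fillna, combine_first also unions the two indexes, so patch rows absent from raw would be appended rather than dropped.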

Drop nan rows in pandas that are not in the middle

I have a pandas dataframe which are indexed by time,
For example:
Time Value
2010-01-01 nan
2010-01-02 nan
2010-01-03 3
2010-01-04 4
2010-01-05 5
2010-01-06 3
2010-01-07 nan
2010-01-08 nan
2010-01-09 3
2010-01-10 3
2010-01-11 4
2010-01-12 5
2010-01-13 3
2010-01-14 nan
2010-01-15 nan
In this example, I would like to drop the first two and last two rows, but not the rows with NaN in the middle. Is there a way to do this?
You can use the index of the first valid value and the last valid value to filter the dataframe:
df.loc[df.Value.first_valid_index(): df.Value.last_valid_index()]
Result:
Value
Time
2010-01-03 3.0
2010-01-04 4.0
2010-01-05 5.0
2010-01-06 3.0
2010-01-07 NaN
2010-01-08 NaN
2010-01-09 3.0
2010-01-10 3.0
2010-01-11 4.0
2010-01-12 5.0
2010-01-13 3.0
Supposing data is your dataframe:
a, b = data.dropna().index[[0, -1]]
You could also consider selecting a specific column, e.g. using data['Value'] instead of data.
This way you get the first and last indices that contain a non-NaN value. Then you just have to take that slice (being careful to include the last row):
data[a:b+1]
Result:
Time Value
2010-01-03 3
2010-01-04 4
2010-01-05 5
2010-01-06 3
2010-01-07 nan
2010-01-08 nan
2010-01-09 3
2010-01-10 3
2010-01-11 4
2010-01-12 5
2010-01-13 3
Single-row solution following #unutbu's tip to use loc:
data.loc[slice(*data.dropna().index[[0, -1]])]
Using bfill and ffill
df[df.Value.ffill().notnull()&df.Value.bfill().notnull()]
Out[464]:
Time Value
2 2010-01-03 3.0
3 2010-01-04 4.0
4 2010-01-05 5.0
5 2010-01-06 3.0
6 2010-01-07 NaN
7 2010-01-08 NaN
8 2010-01-09 3.0
9 2010-01-10 3.0
10 2010-01-11 4.0
11 2010-01-12 5.0
12 2010-01-13 3.0
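A minimal runnable sketch of the first approach (made-up data on a daily index): first_valid_index/last_valid_index mark the first and last non-NaN rows, and a label slice between them keeps the interior NaN run intact.

```python
import numpy as np
import pandas as pd

# Leading NaNs, an interior NaN, and trailing NaNs.
df = pd.DataFrame(
    {'Value': [np.nan, np.nan, 3, 4, np.nan, 5, np.nan, np.nan]},
    index=pd.date_range('2010-01-01', periods=8, name='Time'))

# .loc label slicing is inclusive of both endpoints, so the last valid
# row is kept without any off-by-one adjustment.
trimmed = df.loc[df.Value.first_valid_index(): df.Value.last_valid_index()]
print(trimmed)
```

Only the leading and trailing NaN rows are dropped; the NaN on 2010-01-05 survives because it lies between the first and last valid values.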

Choosing time from 2300-0000 for different days

So I'm having an issue with the 23:00-00:00 times across different days in Python.
times A B C D
2003-01-08 00:00:00 NaN 0.086215 0.086135 0.090659
2003-01-08 23:00:00 NaN 0.060930 0.059008 0.057293
2003-01-09 23:00:00 NaN 0.102374 0.101441 0.100743
2003-01-10 00:00:00 NaN 0.078799 0.077739 0.076138
2003-01-10 23:00:00 NaN 0.207653 0.205911 0.202886
2003-01-11 00:00:00 NaN 0.203436 0.201588 0.197515
...
What I'm looking for is mainly to select the 00:00:00 hour, which is why I've applied df = df.reset_index().groupby(df.index.date).first().set_index('times'), but when that hour doesn't exist it should use the previous day's 23:00:00 in place of the next day's 00:00:00. The following is wrong:
times A B C D
2003-01-08 00:00:00 NaN 0.086215 0.086135 0.090659
2003-01-09 23:00:00 NaN 0.102374 0.101441 0.100743
2003-01-10 00:00:00 NaN 0.078799 0.077739 0.076138
2003-01-11 00:00:00 NaN 0.203436 0.201588 0.197515
...
How do I get it to fall back to the previous day's 23:00:00 when a day's 00:00:00 is missing, to achieve this solution?
2003-01-08 00:00:00 NaN 0.086215 0.086135 0.090659
2003-01-08 23:00:00 NaN 0.060930 0.059008 0.057293
2003-01-10 00:00:00 NaN 0.078799 0.077739 0.076138
2003-01-11 00:00:00 NaN 0.203436 0.201588 0.197515
...
