How to apply a function/impute on an interval in Pandas - python

I have a Pandas dataset with a monthly Date-time index and a column of outstanding orders (like below):
Date        orders
1991-01-01  NaN
1991-02-01  NaN
1991-03-01  24
1991-04-01  NaN
1991-05-01  NaN
1991-06-01  NaN
1991-07-01  NaN
1991-08-01  34
1991-09-01  NaN
1991-10-01  NaN
1991-11-01  22
1991-12-01  NaN
I want to linearly interpolate the values to fill the NaNs. However, the interpolation has to be applied within 6-month blocks (non-rolling). For example, one 6-month block would be all the rows between 1991-01-01 and 1991-06-01. Within each block we do forward and backward linear imputation, and at the block edges the interpolation should descend to a final value of 0. For the same dataset above, here is how I would like the end result to look:
Date        orders
1991-01-01  8
1991-02-01  16
1991-03-01  24
1991-04-01  18
1991-05-01  12
1991-06-01  6
1991-07-01  17
1991-08-01  34
1991-09-01  30
1991-10-01  26
1991-11-01  22
1991-12-01  11
However, I am lost on how to do this in Pandas. Any ideas?

The idea is to group per 6 months, prepend and append a 0 value in each group, interpolate, and then drop the first and last (padding) values per group:
df['Date'] = pd.to_datetime(df['Date'])

# pad each group with 0 on both ends so interpolate() descends to 0 at the block edges
f = lambda x: pd.Series([0] + x.tolist() + [0]).interpolate().iloc[1:-1]

df['orders'] = (df.groupby(pd.Grouper(freq='6MS', key='Date'))['orders']
                  .transform(f))
print(df)
         Date  orders
0  1991-01-01     8.0
1  1991-02-01    16.0
2  1991-03-01    24.0
3  1991-04-01    18.0
4  1991-05-01    12.0
5  1991-06-01     6.0
6  1991-07-01    17.0
7  1991-08-01    34.0
8  1991-09-01    30.0
9  1991-10-01    26.0
10 1991-11-01    22.0
11 1991-12-01    11.0
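To see why the zero padding produces the descending edges, here is a minimal standalone sketch of what the lambda does to a single 6-month block (values taken from the Jan-Jun block of the sample):

import numpy as np
import pandas as pd

# one 6-month block: only March (24) is known
block = pd.Series([np.nan, np.nan, 24.0, np.nan, np.nan, np.nan])

# pad with 0 on both ends, interpolate linearly, then drop the padding
padded = pd.Series([0] + block.tolist() + [0])
print(padded.interpolate().iloc[1:-1].tolist())
# [8.0, 16.0, 24.0, 18.0, 12.0, 6.0]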


maximum sum of consecutive n-days using pandas

I've seen solutions in different languages (e.g. SQL, Fortran, or C++) which mainly use for loops.
I am hoping that someone can help me solve this task using pandas instead.
If I have a data frame that looks like this:
date        pcp   sum_count  sumcum
7/13/2013   0.1   3.0        48.7
7/14/2013   48.5
7/15/2013   0.1
7/16/2013
8/1/2013    1.5   1.0        1.5
8/2/2013
8/3/2013
8/4/2013    0.1   2.0        3.6
8/5/2013    3.5
9/22/2013   0.3   3.0        26.3
9/23/2013   14.0
9/24/2013   12.0
9/25/2013
9/26/2013
10/1/2014   0.1   11.0
10/2/2014   96.0             135.5
10/3/2014   2.5
10/4/2014   37.0
10/5/2014   9.5
10/6/2014   26.5
10/7/2014   0.5
10/8/2014   25.5
10/9/2014   2.0
10/10/2014  5.5
10/11/2014  5.5
And I was hoping I could do the following:
STEP 1: create the sum_count column by determining the total count of consecutive non-zeros in the 'pcp' column.
STEP 2: create the sumcum column by calculating the sum of each run of consecutive 'pcp' values.
STEP 3: create a pivot table that will look like this:

   year  max_sum_count
   2013           48.7
   2014          135.5

BUT!! the max_sum_count is based on the condition when sum_count = 3.
I'd appreciate any help! Thank you!
UPDATED QUESTION:
I previously emphasized that the sum_count should only return the maximum of 3 consecutive pcps, but I mistakenly gave the wrong data frame and had to edit it. Sorry.
The sumcum of 135.5 came from 96.0 + 2.5 + 37.0. It is the maximum over 3 consecutive pcps within the sum_count-11 run.
Thank you
Use:
# filtering + rolling sum by days
N = 3
df['date'] = pd.to_datetime(df['date'])
df = df.set_index('date')

# mask of NaNs
m = df['pcp'].isna()

# label runs of consecutive non-NaNs
df['g'] = m.cumsum()[~m]

# extract years
df['year'] = df.index.year

# drop the NaN rows
df = df[~m].copy()

# keep only runs of at least N consecutive values
df['sum_count1'] = df.groupby(['g', 'year'])['g'].transform('size')
df = df[df['sum_count1'].ge(N)].copy()

# rolling N-day sum within each run
df['sumcum1'] = (df.groupby(['g', 'year'])
                   .rolling(f'{N}D')['pcp']
                   .sum()
                   .reset_index(level=[0, 1], drop=True))

# maximum per year, reindexed to include any missing years
r = range(df['year'].min(), df['year'].max() + 1)
df1 = df.groupby('year')['sumcum1'].max().reindex(r).reset_index(name='max_sum_count')
print(df1)
   year  max_sum_count
0  2013           48.7
1  2014          135.5
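The key trick is m.cumsum()[~m]: the cumulative sum of the NaN mask increases at every NaN, so all rows of a run of consecutive non-NaN values share the same label. A minimal standalone sketch on a hypothetical series:

import numpy as np
import pandas as pd

s = pd.Series([0.1, 48.5, np.nan, 1.5, np.nan, np.nan, 0.3, 14.0])
m = s.isna()
# the counter increments at every NaN, so each non-NaN run shares one label
print(m.cumsum()[~m].tolist())
# [0, 0, 1, 3, 3]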
First, convert date to a real datetime dtype and create a boolean mask which keeps the rows where pcp is not null. Then you can create the groups and compute your variables:
Input data:
>>> df
          date   pcp
0    7/13/2013   0.1
1    7/14/2013  48.5
2    7/15/2013   0.1
3    7/16/2013   NaN
4     8/1/2013   1.5
5     8/2/2013   NaN
6     8/3/2013   NaN
7     8/4/2013   0.1
8     8/5/2013   3.5
9    9/22/2013   0.3
10   9/23/2013  14.0
11   9/24/2013  12.0
12   9/25/2013   NaN
13   9/26/2013   NaN
14   10/1/2014   0.1
15   10/2/2014  96.0
16   10/3/2014   2.5
17   10/4/2014  37.0
18   10/5/2014   9.5
19   10/6/2014  26.5
20   10/7/2014   0.5
21   10/8/2014  25.5
22   10/9/2014   2.0
23  10/10/2014   5.5
24  10/11/2014   5.5
Code:
df['date'] = pd.to_datetime(df['date'])
mask = df['pcp'].notna()

# start a new group whenever a date is not exactly one day
# after the previous non-null date
grp = df.loc[mask, 'date'] \
        .ne(df.loc[mask, 'date'].shift().add(pd.Timedelta(days=1))) \
        .cumsum()

# aggregate each group and join the result back onto the first row of the group
df = df.join(df.reset_index()
               .groupby(grp)
               .agg(index=('index', 'first'),
                    sum_count=('pcp', 'size'),
                    sumcum=('pcp', 'sum'))
               .set_index('index'))

pivot = df.groupby(df['date'].dt.year)['sumcum'].max() \
          .rename('max_sum_count').reset_index()
Output results:
>>> df
         date   pcp  sum_count  sumcum
0  2013-07-13   0.1        3.0    48.7
1  2013-07-14  48.5        NaN     NaN
2  2013-07-15   0.1        NaN     NaN
3  2013-07-16   NaN        NaN     NaN
4  2013-08-01   1.5        1.0     1.5
5  2013-08-02   NaN        NaN     NaN
6  2013-08-03   NaN        NaN     NaN
7  2013-08-04   0.1        2.0     3.6
8  2013-08-05   3.5        NaN     NaN
9  2013-09-22   0.3        3.0    26.3
10 2013-09-23  14.0        NaN     NaN
11 2013-09-24  12.0        NaN     NaN
12 2013-09-25   NaN        NaN     NaN
13 2013-09-26   NaN        NaN     NaN
14 2014-10-01   0.1       11.0   210.6
15 2014-10-02  96.0        NaN     NaN
16 2014-10-03   2.5        NaN     NaN
17 2014-10-04  37.0        NaN     NaN
18 2014-10-05   9.5        NaN     NaN
19 2014-10-06  26.5        NaN     NaN
20 2014-10-07   0.5        NaN     NaN
21 2014-10-08  25.5        NaN     NaN
22 2014-10-09   2.0        NaN     NaN
23 2014-10-10   5.5        NaN     NaN
24 2014-10-11   5.5        NaN     NaN
>>> pivot
   date  max_sum_count
0  2013           48.7
1  2014          210.6
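The grouping line is the heart of this answer: a new group starts whenever a date is not exactly one day after the previous non-null date. A minimal standalone sketch with hypothetical dates:

import pandas as pd

s = pd.Series(pd.to_datetime(['2013-07-13', '2013-07-14', '2013-07-16', '2013-07-17']))
# True whenever the date does not equal the previous date plus one day
grp = s.ne(s.shift().add(pd.Timedelta(days=1))).cumsum()
print(grp.tolist())
# [1, 1, 2, 2] -> the one-day gap before 2013-07-16 starts a new group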

How would you flip and fold a matrix diagonally with pandas?

I have some data I would like to organize for visualization and statistics, but I don't know how to proceed.
The data come from a pairwise comparison test and are in a pandas DataFrame with 3 columns (stimA, stimB and subjectAnswer) and 10 rows (one per pair). Example:
stimA  stimB  subjectAnswer
1      2      36
3      1      55
5      3      98
...    ...    ...
My goal is to organize them as a matrix with each row and column corresponding to one stimulus, with the subjectAnswer data grouped on the left side of the matrix's diagonal (in my example, the subjectAnswer 36 corresponding to stimA 1 and stimB 2 should go to index [2][1]), like this:
stimA/stimB    1    2    3    4    5
1            ...
2             36
3             55
4            ...
5            ...  ...   98
I succeeded in pivoting the first table to a matrix, but I couldn't manage the arrangement on the left side of the diagonal. Here is my code:
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
session1 = pd.read_csv(filepath, names=['stimA', 'stimB', 'subjectAnswer'])
pivoted = session1.pivot('stimA','stimB','subjectAnswer')
Which gives:
session1:
   stimA  stimB  subjectAnswer
0      1      3              6
1      4      3             21
2      4      5             26
3      2      3             10
4      1      2              6
5      1      5              6
6      4      1              6
7      5      2             13
8      3      5             15
9      2      4             26
pivoted:
stimB    1     2     3     4     5
stimA
1      NaN   6.0   6.0   NaN   6.0
2      NaN   NaN  10.0  26.0   NaN
3      NaN   NaN   NaN   NaN  15.0
4      6.0   NaN  21.0   NaN  26.0
5      NaN  13.0   NaN   NaN   NaN
The expected output for pivoted:
stimB    1     2     3     4    5
stimA
1      NaN   NaN   NaN   NaN  NaN
2      6.0   NaN   NaN   NaN  NaN
3      6.0  10.0   NaN   NaN  NaN
4      6.0  26.0  21.0   NaN  NaN
5      6.0  13.0  15.0  26.0  NaN
Thanks a lot for your help!
If I understand you correctly, the stimuli A and B are interchangeable. So to get the matrix layout you want, you can swap A with B in those rows where A is smaller than B. In other words, you don't use the original A and B for the pivot table, but the maximum and minimum of A and B:
session1['stim_min'] = np.min(session1[['stimA', 'stimB']], axis=1)
session1['stim_max'] = np.max(session1[['stimA', 'stimB']], axis=1)
pivoted = session1.pivot('stim_max', 'stim_min', 'subjectAnswer')
pivoted
stim_min    1     2     3     4
stim_max
2         6.0   NaN   NaN   NaN
3         6.0  10.0   NaN   NaN
4         6.0  26.0  21.0   NaN
5         6.0  13.0  15.0  26.0
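Note: in recent pandas versions (2.0+), DataFrame.pivot only accepts keyword arguments, so the same call would be written as:

pivoted = session1.pivot(index='stim_max', columns='stim_min', values='subjectAnswer')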
Sort the values of the columns stimA and stimB along the columns axis and assign them to two temporary columns, x and y, in the dataframe. Sorting is required to ensure that the resulting matrix is clipped on the upper-right side.
Pivot the dataframe with index y, columns x and values subjectAnswer, then reindex the reshaped frame to ensure that all the available unique stim names are present in both the index and the columns of the matrix:
session1[['x', 'y']] = np.sort(session1[['stimA', 'stimB']], axis=1)
i = np.union1d(session1['x'], session1['y'])
session1.pivot('y', 'x', 'subjectAnswer').reindex(index=i, columns=i)
x      1     2     3     4    5
y
1    NaN   NaN   NaN   NaN  NaN
2    6.0   NaN   NaN   NaN  NaN
3    6.0  10.0   NaN   NaN  NaN
4    6.0  26.0  21.0   NaN  NaN
5    6.0  13.0  15.0  26.0  NaN
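The row-wise np.sort is what folds every pair onto one side of the diagonal. A minimal standalone demonstration with a few hypothetical pairs:

import numpy as np

# each row becomes (smaller stim, larger stim)
print(np.sort([[1, 2], [3, 1], [5, 3]], axis=1))
# [[1 2]
#  [1 3]
#  [3 5]]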

Daily rate of return based on limited values - pandas

I wanted to calculate the daily log rate of return for Optionvalue, but only for the first 252 days in the data. I'm getting KeyError: 'log return'.
import pandas as pd
import numpy as np
EUR = pd.read_csv('C:eurpln_d.csv', sep = ",", parse_dates=['Date'])
USD = pd.read_csv('C:usdpln_d.csv', sep = ",", parse_dates=['Date'])
w_1 = 0.5
w_2 = 1-w_1
EUR.merge(USD, on="Date")
EUR["Optionvalue"] = EUR["Close"]*w_1 + EUR["Close"]*w_2
So what I would like to have is the log return, but only on the first 252 days (which is to say I need to take only the first 252 occurrences in this daily log return calculation): log(y_t) - log(y_{t-1}). I've tried the below.
EUR['log return'].iloc[0:252] = np.log(EUR["Optionvalue"]) - np.log(EUR["Optionvalue"].iloc[0])
Is my np.log(EUR["Optionvalue"].iloc[0]) correctly taking the previous value when calculating the log return?
How can I limit the data so I can calculate the daily log return based only on the first 252 dates? The .iloc[0:252] above seems to not work... Please help!
Small example
iloc[0] will just give you the first row of something, not the previous value. You can use shift(1) (shown below) to get the previous value.
When taking the previous value, the first item will be NA or NaN, since there is no previous value for the first value. You can use fillna to provide an "artificial" value (1 in the example below).
Note that the first value in the last column is therefore artificial. Remove the fillna to keep this value as NaN.
You should use iloc on an existing column. You can initialize a new column with a fixed value (e.g. -1, as below).
You can remove the prev column below and use its value directly in the last assignment, if desired.
import pandas as pd
import numpy as np
from datetime import datetime

d = {'date': [datetime(2020, 5, d) for d in range(1, 30)],
     'current': [x for x in range(1, 30)]}
df = pd.DataFrame(data=d)

# previous value via shift(1); the artificial 1 fills the first row
df['prev'] = df.shift(1).fillna(1)['current']
# initialize the new column with a fixed value, then fill only the first 20 rows
df['logdiff'] = -1
df['logdiff'].iloc[0:20] = np.log(df['current']) - np.log(df['prev'])
print(df)
         date  current  prev   logdiff
0  2020-05-01        1   1.0  0.000000
1  2020-05-02        2   1.0  0.693147
2  2020-05-03        3   2.0  0.405465
3  2020-05-04        4   3.0  0.287682
4  2020-05-05        5   4.0  0.223144
5  2020-05-06        6   5.0  0.182322
6  2020-05-07        7   6.0  0.154151
7  2020-05-08        8   7.0  0.133531
8  2020-05-09        9   8.0  0.117783
9  2020-05-10       10   9.0  0.105361
10 2020-05-11       11  10.0  0.095310
11 2020-05-12       12  11.0  0.087011
12 2020-05-13       13  12.0  0.080043
13 2020-05-14       14  13.0  0.074108
14 2020-05-15       15  14.0  0.068993
15 2020-05-16       16  15.0  0.064539
16 2020-05-17       17  16.0  0.060625
17 2020-05-18       18  17.0  0.057158
18 2020-05-19       19  18.0  0.054067
19 2020-05-20       20  19.0  0.051293
20 2020-05-21       21  20.0 -1.000000
21 2020-05-22       22  21.0 -1.000000
22 2020-05-23       23  22.0 -1.000000
23 2020-05-24       24  23.0 -1.000000
24 2020-05-25       25  24.0 -1.000000
25 2020-05-26       26  25.0 -1.000000
26 2020-05-27       27  26.0 -1.000000
27 2020-05-28       28  27.0 -1.000000
28 2020-05-29       29  28.0 -1.000000
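As a side note, the log return can also be computed in one step with diff(), which subtracts the previous row and leaves the first value NaN. Assuming the same df as above (logdiff2 is just a hypothetical new column name):

# log(y_t) - log(y_{t-1}) in one step, without the helper prev column
df['logdiff2'] = np.log(df['current']).diff()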

Pandas : Replace NaNs with mean of 'n' nearest non-empty values in column

Suppose I have the following dataframe.
      A    B
0   NaN   12
1   NaN  NaN
2    24  NaN
3   NaN  NaN
4   NaN   13
5   NaN   11
6   NaN   13
7    18  NaN
8    19  NaN
9    17  NaN
In column 'A', the missing values need to be replaced with the mean of, say, the 3 nearest non-empty values if they occur in a sequence. For example, the NaN at index 5 has 18 as its nearest non-empty value, and after 18 the next two values are also non-empty. Therefore the NaN at index 5 is replaced with (18+19+17)/3.
The NaN at index 4 has 24 as its nearest non-empty value, but the two values prior to 24 are empty. Therefore the NaN at index 4 is not replaced with any value.
Similarly it needs to be done with the rest of the columns. Does anyone know a vectorized way of doing this?
Thanks!
I believe you need to combine a rolling mean with another rolling mean computed from the back, then use DataFrame.interpolate to spread the means to the nearest NaNs, with forward filling for the last groups of NaNs and backfilling for the first groups. This builds a helper DataFrame c, which is then used to replace the missing values of the original DataFrame:

# rolling mean over the current and previous 2 values
a = df.rolling(3).mean()
# the same computed from the back (current and next 2 values)
b = df.iloc[::-1].rolling(3).mean()
# combine both, fall back to the original values, interpolate to the
# nearest non-NaN, then ffill/bfill the trailing and leading gaps
c = a.fillna(b).fillna(df).interpolate(method='nearest').ffill().bfill()
print(c)
      A          B
0  24.0  12.000000
1  24.0  12.000000
2  24.0  12.000000
3  24.0  12.333333
4  24.0  12.333333
5  18.0  11.000000
6  18.0  12.333333
7  18.0  12.333333
8  19.0  12.333333
9  18.0  12.333333
df = df.fillna(c)
print(df)
      A          B
0  24.0  12.000000
1  24.0  12.000000
2  24.0  12.000000
3  24.0  12.333333
4  24.0  13.000000
5  18.0  11.000000
6  18.0  13.000000
7  18.0  12.333333
8  19.0  12.333333
9  17.0  12.333333
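For reference, here is a minimal standalone sketch (on a hypothetical series) of the reversed rolling mean used for b above; reversing back afterwards shows that it acts as a forward-looking window:

import pandas as pd

s = pd.Series([1, 2, 3, 4])
# rolling over the reversed series averages each value with the next one
print(s.iloc[::-1].rolling(2).mean().iloc[::-1].tolist())
# [1.5, 2.5, 3.5, nan]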

How to add conditions to columns at grouped by pivot table Pandas

I've used groupby and pivot from the pandas package in order to create the following table:
Input:
q4 = q1[['category','Month']].groupby(['category','Month']).Month.agg({'Count':'count'}).reset_index()
q4 = pd.DataFrame(q4.pivot(index='category',columns='Month').reset_index())
then the output:
                 category   Count
Month                           6       7       8
0           adult-classes    29.0   109.0   162.0
1           air-pollution    27.0    43.0    13.0
2     babies-and-toddlers     4.0    51.0     2.0
3                 bicycle   210.0    96.0    23.0
4                building     NaN    17.0     NaN
5   buildings-maintenance    23.0    12.0     NaN
6                catering  1351.0  4881.0  1040.0
7               childcare     9.0     NaN     NaN
8           city-planning   105.0    81.0    23.0
9           city-services  2461.0  2130.0  1204.0
10             city-taxes     1.0     4.0    42.0
I'm trying to add a condition on the months.
The problem I'm having is that after pivoting I can't access the columns.
How can I show only the rows where the counts satisfy month 6 < month 7 < month 8?
To flatten your MultiIndex, you can rename your columns (check out this answer).
q4.columns = [''.join([str(c) for c in col]).strip() for col in q4.columns.values]
To remove NaNs:
q4.fillna(0, inplace=True)
To select according to your constraint:
result = q4[(q4['Count6'] < q4['Count7']) & (q4['Count7'] < q4['Count8'])]
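A minimal standalone sketch of what the flattening line does, using a hypothetical two-column frame:

import pandas as pd

df = pd.DataFrame([[1, 2]],
                  columns=pd.MultiIndex.from_tuples([('Count', 6), ('Count', 7)]))
# join each (level0, level1) tuple into a single string label
df.columns = [''.join([str(c) for c in col]).strip() for col in df.columns.values]
print(df.columns.tolist())
# ['Count6', 'Count7']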
