DataFrame interpolate not working in pandas + multidimensional interpolation - python

Multidimensional interpolation with a DataFrame is not working.
import pandas as pd
import numpy as np
raw_data = {'CCY_CODE': ['SGD','USD','USD','USD','USD','USD','USD','EUR','EUR','EUR','EUR','EUR','EUR','USD'],
'END_DATE': ['16/03/2018','17/03/2018','17/03/2018','17/03/2018','17/03/2018','17/03/2018','17/03/2018',
'17/03/2018','17/03/2018','17/03/2018','17/03/2018','17/03/2018','17/03/2018','17/03/2018'],
'STRIKE':[0.005,0.01,0.015,0.02,0.025,0.03,0.035,0.04,0.045,0.05,0.55,0.06,0.065,0.07],
'VOLATILITY':[np.nan,np.nan,0.3424,np.nan,0.2617,0.2414,np.nan,np.nan,0.215,0.212,0.2103,np.nan,0.2092,np.nan]
}
df_volsurface = pd.DataFrame(raw_data,columns = ['CCY_CODE','END_DATE','STRIKE','VOLATILITY'])
df_volsurface['END_DATE'] = pd.to_datetime(df_volsurface['END_DATE'])
df_volsurface.interpolate(method='akima',limit_direction='both')
Output:
    CCY_CODE   END_DATE  STRIKE  VOLATILITY
0        SGD  3/16/2018   0.005         NaN
1        USD  3/17/2018   0.01          NaN
2        USD  3/17/2018   0.015      0.3424
3        USD  3/17/2018   0.02     0.296358
4        USD  3/17/2018   0.025      0.2617
5        USD  3/17/2018   0.03       0.2414
6        USD  3/17/2018   0.035    0.230295
7        EUR  3/17/2018   0.04     0.220911
8        EUR  3/17/2018   0.045       0.215
9        EUR  3/17/2018   0.05        0.212
10       EUR  3/17/2018   0.55       0.2103
11       EUR  3/17/2018   0.06     0.209471
12       EUR  3/17/2018   0.065      0.2092
13       USD  3/17/2018   0.07          NaN
Expected Result:
    CCY_CODE   END_DATE  STRIKE  VOLATILITY
0        SGD  3/16/2018   0.005  NaN
1        USD  3/17/2018   0.01   Expected some logical value
2        USD  3/17/2018   0.015  0.3424
3        USD  3/17/2018   0.02   0.296358
4        USD  3/17/2018   0.025  0.2617
5        USD  3/17/2018   0.03   0.2414
6        USD  3/17/2018   0.035  0.230295
7        EUR  3/17/2018   0.04   0.220911
8        EUR  3/17/2018   0.045  0.215
9        EUR  3/17/2018   0.05   0.212
10       EUR  3/17/2018   0.55   0.2103
11       EUR  3/17/2018   0.06   0.209471
12       EUR  3/17/2018   0.065  0.2092
13       USD  3/17/2018   0.07   Expected some logical value
The linear interpolation method simply copies the last available value to all backward and forward missing values, without considering CCY_CODE:
df_volsurface.interpolate(method='linear',limit_direction='both')
Output:
    CCY_CODE   END_DATE  STRIKE  VOLATILITY
0        SGD  3/16/2018   0.005      0.3424
1        USD  3/17/2018   0.01       0.3424
2        USD  3/17/2018   0.015      0.3424
3        USD  3/17/2018   0.02      0.30205
4        USD  3/17/2018   0.025      0.2617
5        USD  3/17/2018   0.03       0.2414
6        USD  3/17/2018   0.035      0.2326
7        EUR  3/17/2018   0.04       0.2238
8        EUR  3/17/2018   0.045       0.215
9        EUR  3/17/2018   0.05        0.212
10       EUR  3/17/2018   0.55       0.2103
11       EUR  3/17/2018   0.06      0.20975
12       EUR  3/17/2018   0.065      0.2092
13       USD  3/17/2018   0.07       0.2092
Any help is appreciated! Thanks!

I'd like to point out that this is still one-dimensional interpolation. We have one independent variable ('STRIKE') and one dependent variable ('VOLATILITY'). The interpolation is simply done under different conditions, e.g. for each day, each currency, each scenario, etc. The following is an example of how the interpolation can be done based on 'END_DATE' and 'CCY_CODE'.
import scipy.interpolate

# set all the conditions as index
df_volsurface.set_index(['END_DATE', 'CCY_CODE', 'STRIKE'], inplace=True)
df_volsurface.sort_index(level=['END_DATE', 'CCY_CODE', 'STRIKE'], inplace=True)
# create separate columns for all criteria except the independent variable
df_volsurface = df_volsurface.unstack(level=['END_DATE', 'CCY_CODE'])
for ccy in df_volsurface:
    indices = df_volsurface[ccy].notna()
    if not any(indices):
        continue  # we are not interested in a column with only NaN
    x = df_volsurface.index.get_level_values(level='STRIKE')  # independent var
    y = df_volsurface[ccy]  # dependent var
    # create interpolation function
    f_interp = scipy.interpolate.interp1d(x[indices], y[indices], kind='linear',
                                          bounds_error=False, fill_value='extrapolate')
    df_volsurface['VOL_INTERP', ccy[1], ccy[2]] = f_interp(x)
print(df_volsurface)
The interpolation for the other conditions should work analogously. This is the resulting DataFrame:
VOLATILITY VOL_INTERP
END_DATE 2018-03-16 2018-03-17 2018-03-17
CCY_CODE SGD EUR USD EUR USD
STRIKE
0.005 NaN NaN NaN 0.23900 0.42310
0.010 NaN NaN NaN 0.23600 0.38275
0.015 NaN NaN 0.3424 0.23300 0.34240
0.020 NaN NaN NaN 0.23000 0.30205
0.025 NaN NaN 0.2617 0.22700 0.26170
0.030 NaN NaN 0.2414 0.22400 0.24140
0.035 NaN NaN NaN 0.22100 0.22110
0.040 NaN NaN NaN 0.21800 0.20080
0.045 NaN 0.2150 NaN 0.21500 0.18050
0.050 NaN 0.2120 NaN 0.21200 0.16020
0.055 NaN 0.2103 NaN 0.21030 0.13990
0.060 NaN NaN NaN 0.20975 0.11960
0.065 NaN 0.2092 NaN 0.20920 0.09930
0.070 NaN NaN NaN 0.20865 0.07900
Use df_volsurface.stack() to return to a MultiIndex of your choice. There are also several pandas interpolation methods to choose from. However, I have not found a satisfactory solution for your problem using method='akima', because it only interpolates between the given data points and does not seem to extrapolate beyond them.
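As an alternative, the same per-condition interpolation can be done directly on the original long-format frame with groupby; here is a minimal sketch, assuming scipy is available (interp_group is a helper name introduced for illustration, not part of the code above):
import numpy as np
import pandas as pd
import scipy.interpolate

def interp_group(g):
    # interpolate/extrapolate VOLATILITY against STRIKE within one (END_DATE, CCY_CODE) group
    mask = g['VOLATILITY'].notna()
    if mask.sum() < 2:
        return g  # need at least two known points
    f = scipy.interpolate.interp1d(g.loc[mask, 'STRIKE'], g.loc[mask, 'VOLATILITY'],
                                   kind='linear', bounds_error=False, fill_value='extrapolate')
    g = g.copy()
    g['VOL_INTERP'] = f(g['STRIKE'])
    return g

# df_volsurface here is the original long-format frame, before set_index/unstack
df_interp = (df_volsurface
             .groupby(['END_DATE', 'CCY_CODE'], group_keys=False)
             .apply(interp_group))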

Related

Pandas dataframe merge row by addition

I want to create a dataframe from census data. I want to calculate the number of people that filed a tax return for each specific earnings group.
For now, I wrote this:
census_df = pd.read_csv('../zip code data/19zpallagi.csv')
sub_census_df = census_df[['zipcode', 'agi_stub', 'N02650', 'A02650', 'ELDERLY', 'A07180']].copy()
num_of_returns = ['Number_of_returns_1_25000', 'Number_of_returns_25000_50000', 'Number_of_returns_50000_75000',
'Number_of_returns_75000_100000', 'Number_of_returns_100000_200000', 'Number_of_returns_200000_more']
for i, column_name in zip(range(1, 7), num_of_returns):
    sub_census_df[column_name] = sub_census_df[sub_census_df['agi_stub'] == i]['N02650']
I have 6 groups attached to each specific zip code. I want to get one row per zip code, with the number of returns for each group appearing just once as a column. I already tried changing NaNs to 0 and using groupby('zipcode').sum(), but I get a total of about 50 million for zip code 0, where it seems that only around 800k should exist.
Here is the dataframe that I currently get:
zipcode agi_stub N02650 A02650 ELDERLY A07180 Number_of_returns_1_25000 Number_of_returns_25000_50000 Number_of_returns_50000_75000 Number_of_returns_75000_100000 Number_of_returns_100000_200000 Number_of_returns_200000_more Amount_1_25000 Amount_25000_50000 Amount_50000_75000 Amount_75000_100000 Amount_100000_200000 Amount_200000_more
0 0 1 778140.0 10311099.0 144610.0 2076.0 778140.0 NaN NaN NaN NaN NaN 10311099.0 NaN NaN NaN NaN NaN
1 0 2 525940.0 19145621.0 113810.0 17784.0 NaN 525940.0 NaN NaN NaN NaN NaN 19145621.0 NaN NaN NaN NaN
2 0 3 285700.0 17690402.0 82410.0 9521.0 NaN NaN 285700.0 NaN NaN NaN NaN NaN 17690402.0 NaN NaN NaN
3 0 4 179070.0 15670456.0 57970.0 8072.0 NaN NaN NaN 179070.0 NaN NaN NaN NaN NaN 15670456.0 NaN NaN
4 0 5 257010.0 35286228.0 85030.0 14872.0 NaN NaN NaN NaN 257010.0 NaN NaN NaN NaN NaN 35286228.0 NaN
And here is what I want to get:
zipcode Number_of_returns_1_25000 Number_of_returns_25000_50000 Number_of_returns_50000_75000 Number_of_returns_75000_100000 Number_of_returns_100000_200000 Number_of_returns_200000_more
0 0 778140.0 525940.0 285700.0 179070.0 257010.0 850.0
Here is one way to do it, using groupby and summing the desired columns:
num_of_returns = ['Number_of_returns_1_25000', 'Number_of_returns_25000_50000', 'Number_of_returns_50000_75000',
'Number_of_returns_75000_100000', 'Number_of_returns_100000_200000', 'Number_of_returns_200000_more']
df.groupby('zipcode', as_index=False)[num_of_returns].sum()
zipcode Number_of_returns_1_25000 Number_of_returns_25000_50000 Number_of_returns_50000_75000 Number_of_returns_75000_100000 Number_of_returns_100000_200000 Number_of_returns_200000_more
0 0 778140.0 525940.0 285700.0 179070.0 257010.0 0.0
This question needs more information to actually give a proper answer. For example, you leave out what is meant by certain columns in your data frame:
- `N1: Number of returns`
- `agi_stub: Size of adjusted gross income`
According to the IRS, this has the following levels:
Size of adjusted gross income:
0 = No AGI Stub
1 = 'Under $1'
2 = '$1 under $10,000'
3 = '$10,000 under $25,000'
4 = '$25,000 under $50,000'
5 = '$50,000 under $75,000'
6 = '$75,000 under $100,000'
7 = '$100,000 under $200,000'
8 = '$200,000 under $500,000'
9 = '$500,000 under $1,000,000'
10 = '$1,000,000 or more'
I got the above from https://www.irs.gov/pub/irs-soi/16incmdocguide.doc
With this information, I think what you want to find is the number of people who filed a tax return for each of the income levels of agi_stub. If that is what you mean, then this can be achieved by:
import pandas as pd

data = pd.read_csv("./data/19zpallagi.csv")

## select only the desired columns
data = data[['zipcode', 'agi_stub', 'N1']]

## solution to your problem?
df = data.pivot_table(
    index='zipcode',
    values='N1',
    columns='agi_stub',
    aggfunc=['sum']
)

## bit of cleaning up.
PREFIX = 'agi_stub_level_'
df.columns = [PREFIX + level for level in df.columns.get_level_values(1).astype(str)]
Here's the output.
In [77]: df
Out[77]:
agi_stub_level_1 agi_stub_level_2 ... agi_stub_level_5 agi_stub_level_6
zipcode ...
0 50061850.0 37566510.0 ... 21938920.0 8859370.0
1001 2550.0 2230.0 ... 1420.0 230.0
1002 2850.0 1830.0 ... 1840.0 990.0
1005 650.0 570.0 ... 450.0 60.0
1007 1980.0 1530.0 ... 1830.0 460.0
... ... ... ... ... ...
99827 470.0 360.0 ... 170.0 40.0
99833 550.0 380.0 ... 290.0 80.0
99835 1250.0 1130.0 ... 730.0 190.0
99901 1960.0 1520.0 ... 1030.0 290.0
99999 868450.0 644160.0 ... 319880.0 142960.0
[27595 rows x 6 columns]

Rolling window produces no effect on dataframe

So I have to apply a rolling window to a set of rows inside a dataframe. The problem is that when I do full_df = full_df.rolling(window=5).mean(), the output of full_df.head(2000) shows all NaN values. Does anyone know why this happens? I have to perform a time series exercise with this.
This is the dataset: https://github.com/plotly/datasets/blob/master/all_stocks_5yr.csv
This is what I have:
df = pd.read_csv('all_stocks_5yr.csv', usecols=["date", "close", "Name"])
gp = df.groupby("Name")
my_dict = {key: group['close'].to_numpy() for key, group in gp}
full_df = pd.DataFrame.from_dict(my_dict, orient='index')
for i in full_df:
    full_df = full_df.rolling(window=5).mean()
An image of the output (all NaN values) was shown here.
First off, your loop for i in full_df is not doing what you think; instead of taking a rolling mean within each row, you're running it over and over again on the whole dataframe, averaging down each column.
If we just do the rolling average once, the way you're implementing it:
full_df = full_df.rolling(window=5).mean()
print(full_df)
0 1 2 3 ... 1255 1256 1257 1258
A NaN NaN NaN NaN ... NaN NaN NaN NaN
AAL NaN NaN NaN NaN ... NaN NaN NaN NaN
AAP NaN NaN NaN NaN ... NaN NaN NaN NaN
AAPL NaN NaN NaN NaN ... NaN NaN NaN NaN
ABBV 48.56684 48.37228 47.95056 48.07312 ... 102.590 98.768 101.212 100.510
... ... ... ... ... ... ... ... ... ...
XYL 45.58400 45.60000 45.74000 45.96200 ... 64.504 61.854 61.596 61.036
YUM 51.14200 51.01800 51.17400 51.28400 ... 66.902 64.420 63.914 63.668
ZBH 48.59000 48.49200 48.57000 48.75000 ... 75.154 73.112 72.704 72.436
ZION 44.84400 44.76600 44.89400 45.08200 ... 73.972 71.734 71.516 71.580
ZTS 45.08600 45.02600 45.27400 45.39200 ... 83.002 80.224 80.000 80.116
[505 rows x 1259 columns]
The first four rows are all NaN because the rolling mean isn't defined for fewer than 5 rows.
If we do it again (making a total of two times):
full_df = full_df.rolling(window=5).mean()
print(full_df.head(9))
0 1 2 ... 1256 1257 1258
A NaN NaN NaN ... NaN NaN NaN
AAL NaN NaN NaN ... NaN NaN NaN
AAP NaN NaN NaN ... NaN NaN NaN
AAPL NaN NaN NaN ... NaN NaN NaN
ABBV NaN NaN NaN ... NaN NaN NaN
ABC NaN NaN NaN ... NaN NaN NaN
ABT NaN NaN NaN ... NaN NaN NaN
ACN NaN NaN NaN ... NaN NaN NaN
ADBE 49.619072 49.471424 49.192048 ... 108.3420 110.4848 110.4976
You can see the first 8 rows are all NaN, since the second pass needs 5 non-NaN rows and the first 4 rows were already NaN after the first pass. Given the size of your data frame (505 rows), if you ran the rolling mean 127 times the entire df would be consumed with NaN values, and your for loop is doing it even more times than that, which is why your df is filled with NaN values.
Also, note that you're averaging across different stock tickers, which doesn't make sense. What I believe you want to be doing is averaging along the rows, not down the columns, in which case you simply need to do:
full_df = full_df.rolling(axis = 'columns', window=5).mean()
print(full_df)
0 1 2 3 4 5 ... 1253 1254 1255 1256 1257 1258
A NaN NaN NaN NaN 44.72600 44.1600 ... 73.926 73.720 73.006 71.744 70.836 69.762
AAL NaN NaN NaN NaN 14.42600 14.3760 ... 53.142 53.308 53.114 52.530 52.248 51.664
AAP NaN NaN NaN NaN 78.74000 78.7600 ... 120.742 120.016 118.074 115.468 114.054 112.642
AAPL NaN NaN NaN NaN 67.32592 66.9025 ... 168.996 168.330 166.128 163.834 163.046 161.468
ABBV NaN NaN NaN NaN 35.87200 36.1380 ... 116.384 117.992 116.384 113.824 112.888 113.168
... ... ... ... ... ... ... ... ... ... ... ... ... ...
XYL NaN NaN NaN NaN 27.84600 28.0840 ... 73.278 73.598 73.848 73.698 73.350 73.256
YUM NaN NaN NaN NaN 64.58000 64.3180 ... 85.504 85.168 84.454 83.118 82.316 81.424
ZBH NaN NaN NaN NaN 75.85600 75.8660 ... 126.284 126.974 126.886 126.044 125.316 124.048
ZION NaN NaN NaN NaN 24.44200 24.4820 ... 53.838 54.230 54.256 53.748 53.466 53.464
ZTS NaN NaN NaN NaN 33.37400 33.5600 ... 78.720 78.434 77.772 76.702 75.686 75.112
Again, the first four columns are still all NaN here.
To correct for that, we add one more argument:
full_df = full_df.rolling(axis = 'columns', window=5, min_periods = 1).mean()
print(full_df)
0 1 2 3 4 5 ... 1253 1254 1255 1256 1257 1258
A 45.0800 44.8400 44.766667 44.7625 44.72600 44.1600 ... 73.926 73.720 73.006 71.744 70.836 69.762
AAL 14.7500 14.6050 14.493333 14.5350 14.42600 14.3760 ... 53.142 53.308 53.114 52.530 52.248 51.664
AAP 78.9000 78.6450 78.630000 78.7150 78.74000 78.7600 ... 120.742 120.016 118.074 115.468 114.054 112.642
AAPL 67.8542 68.2078 67.752800 67.4935 67.32592 66.9025 ... 168.996 168.330 166.128 163.834 163.046 161.468
ABBV 36.2500 36.0500 35.840000 35.6975 35.87200 36.1380 ... 116.384 117.992 116.384 113.824 112.888 113.168
... ... ... ... ... ... ... ... ... ... ... ... ... ...
XYL 27.0900 27.2750 27.500000 27.6900 27.84600 28.0840 ... 73.278 73.598 73.848 73.698 73.350 73.256
YUM 65.3000 64.9250 64.866667 64.7525 64.58000 64.3180 ... 85.504 85.168 84.454 83.118 82.316 81.424
ZBH 75.8500 75.7500 75.646667 75.7350 75.85600 75.8660 ... 126.284 126.974 126.886 126.044 125.316 124.048
ZION 24.1400 24.1750 24.280000 24.3950 24.44200 24.4820 ... 53.838 54.230 54.256 53.748 53.466 53.464
ZTS 33.0500 33.1550 33.350000 33.4000 33.37400 33.5600 ... 78.720 78.434 77.772 76.702 75.686 75.112
In the above data frame the first column is just the value at time 0, the second is the average of times 0 and 1, the third is the average of times 0, 1, and 2, etc. The window keeps growing until it reaches your value of window=5, at which point it moves along with your rolling average. Note that you can also center the rolling window if you want, rather than have a trailing one. You can see the documentation here.
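For instance, a centered window along the same axis would look like the sketch below (full_df_centered is just an illustrative name; note that newer pandas versions deprecate the axis argument to rolling, in which case transposing the frame first is an option):
full_df_centered = full_df.rolling(axis='columns', window=5, center=True, min_periods=1).mean()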
I'm not quite sure what you are trying to do. Could you explain in more detail what the goal of your operation is? I assume you are trying to build a moving (rolling) average with a 5-day interval across each asset and calculate the mean prices for each interval.
But first, let me answer why you see all the NaNs:
What you are doing with the code below is just repeating the same operation over and over again, and its result is always NaNs. That is because you are doing something odd with the dict and the first rows all have NaNs, so the average will also be NaN. And since you overwrite the variable full_df with the result of this computation, your dataframe shows only NaNs.
for i in full_df:
    full_df = full_df.rolling(window=5).mean()
Let me explain in more detail. You were (probably) trying to iterate over the dataframe (using a window of 5 days) and compute the mean. The function full_df.rolling(window=5).mean() already does exactly that, and the output is a new dataframe with the mean of each window over the entire dataframe full_df. By running this function in a loop, without additional indexing, you are only running the same function across the entire dataframe over and over again.
Maybe this will get you what you want:
import pandas as pd
df = pd.read_csv("all_stocks_5yr.csv", index_col=[0,6])
means = df.rolling(window=5).mean()
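If the goal is a 5-day rolling average of the closing price per ticker, a groupby-based sketch on the long-format CSV may be closer to what you want (close_rolling_5 is just an illustrative column name):
import pandas as pd

df = pd.read_csv("all_stocks_5yr.csv", usecols=["date", "close", "Name"])
df["date"] = pd.to_datetime(df["date"])
df = df.sort_values(["Name", "date"])
# rolling mean of close within each ticker; min_periods=1 keeps the first four rows defined
df["close_rolling_5"] = (df.groupby("Name")["close"]
                           .transform(lambda s: s.rolling(window=5, min_periods=1).mean()))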

One-way Anova loop through pandas dataframe - results in a single table

I have a pandas dataframe containing 16 columns, of which 14 represent variables on which I perform a looped ANOVA test using statsmodels. My dataframe looks something like this (simplified):
ID Cycle_duration Average_support_phase Average_swing_phase Label
1 23.1 34.3 47.2 1
2 27.3 38.4 49.5 1
3 25.8 31.1 45.7 1
4 24.5 35.6 41.9 1
...
So far this is what I'm doing:
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
df = pd.read_csv('features_total.csv')
for variable in df.columns:
    model = ols('{} ~ Label'.format(variable), data=df).fit()
    anova_table = sm.stats.anova_lm(model, typ=2)
    print(anova_table)
Which yields:
sum_sq df F PR(>F)
Label 0.124927 2.0 2.561424 0.084312
Residual 1.731424 71.0 NaN NaN
sum_sq df F PR(>F)
Label 62.626057 2.0 4.969491 0.009552
Residual 447.374788 71.0 NaN NaN
sum_sq df F PR(>F)
Label 62.626057 2.0 4.969491 0.009552
Residual 447.374788 71.0 NaN NaN
I'm getting an individual table printed for each variable on which the ANOVA is performed. Basically, what I want is to print one single table with the summarized results, something like this:
sum_sq df F PR(>F)
Cycle_duration 0.1249270 2.0 2.561424 0.084312
Residual 1.7314240 71.0 NaN NaN
Average_support_phase 62.626057 2.0 4.969491 0.009552
Residual 447.374788 71.0 NaN NaN
Average_swing_phase 62.626057 2.0 4.969491 0.009552
Residual 447.374788 71.0 NaN NaN
I can already see a problem, because this method always outputs the 'Label' nomenclature before the actual values, and not the variable name in question (as I've shown above, I would like to have the variable name above each 'Residual'). Is this even possible with the statsmodels approach?
I'm fairly new to Python, and excuse me if this has nothing to do with statsmodels; in that case, please do enlighten me on what I should be trying.
You can collect the tables and concatenate them at the end of your loop. This method will create a hierarchical index, but I think that makes it a bit more clear. Something like this:
keys = []
tables = []
for variable in df.columns:
    model = ols('{} ~ Label'.format(variable), data=df).fit()
    anova_table = sm.stats.anova_lm(model, typ=2)
    keys.append(variable)
    tables.append(anova_table)
df_anova = pd.concat(tables, keys=keys, axis=0)
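The variable name then appears as the outer index level above each 'Residual' row, which is the layout you asked for. If you prefer a flat table instead, something like this sketch works (the level names 'variable' and 'term' are just illustrative):
df_anova = df_anova.rename_axis(['variable', 'term']).reset_index()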
Somewhat related, I would also suggest correcting for multiple comparisons. This is more a statistical suggestion than a coding suggestion, but considering you are performing numerous statistical tests, it would make sense to account for the probability that one of the test would result in a false positive.
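For example, here is a sketch of a multiple-comparison correction applied to the p-values collected in the loop above (the choice of method, e.g. Holm, is an assumption):
from statsmodels.stats.multitest import multipletests

# one ANOVA p-value per tested variable
pvals = [table.loc['Label', 'PR(>F)'] for table in tables]
reject, pvals_corrected, _, _ = multipletests(pvals, alpha=0.05, method='holm')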

How to remove missing data and 0s whilst keeping the dataframe the same shape using Pandas?

I have a dataframe and I want to reformat it to remove any missing values or zeros that occur before the first non-zero value in each row. However, I do not want to delete any rows or columns, and I do not want to remove any 0s or missing values that appear after the first non-zero value.
Below is the dataframe I am working with:
> data =[['Adam',2.55,4.53,3.45,2.12,3.14],['Bill',np.NaN,2.14,3.65,4.12],['Chris',np.NaN,0,2.82,0,6.04],['David',np.NaN,0,7.42,3.52]]
> df = pd.DataFrame(data, columns = ['Name', 'A','B','C','D','E'])
Moreover, here is the expected outcome:
> data1 =[['Adam',2.55,4.53,3.45,2.12,3.14],['Bill',2.14,3.65,4.12],['Chris',2.82,0,6.04],['David',7.42,3.52]]
> df1 = pd.DataFrame(data1, columns = ['Name', 'A','B','C','D','E'])
This is not a trivial problem. Here is the solution:
m=df.set_index('Name')
m=m[m.isin(m.mask(m.le(0)).bfill(axis=1).iloc[:,0]).cumsum(axis=1).astype(bool)]
print(m)
A B C D E
Name
Adam 2.55 4.53 3.45 2.12 3.14
Bill NaN 2.14 3.65 4.12 NaN
Chris NaN NaN 2.82 0.00 6.04
David NaN NaN 7.42 3.52 NaN
Then using justify:
pd.DataFrame(justify(m.values,np.nan),columns=m.columns,index=m.index).reset_index()
Name A B C D E
0 Adam 2.55 4.53 3.45 2.12 3.14
1 Bill 2.14 3.65 4.12 NaN NaN
2 Chris 2.82 0.00 6.04 NaN NaN
3 David 7.42 3.52 NaN NaN NaN
Explanation:
Step 1: Set the Name column as index so we deal with numeric values only.
Step 2: m.mask(m.le(0)).bfill(axis=1).iloc[:,0] gives the first value in each row which is greater than 0.
Step 3: Then isin() returns True wherever that value appears in each row.
Step 4: cumsum(axis=1).astype(bool) makes all the remaining elements True, so we keep only those values; the other values become NaN.
Then use the justify function from the linked post.
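The justify function itself is not shown in this answer; here is a minimal sketch of the NumPy-based helper it refers to (assumed from the commonly linked implementation, so treat it as illustrative):
import numpy as np

def justify(a, invalid_val=0, axis=1, side='left'):
    # push all valid values of a 2D array to one side, filling the rest with invalid_val
    if invalid_val is np.nan:
        mask = ~np.isnan(a)
    else:
        mask = a != invalid_val
    justified_mask = np.sort(mask, axis=axis)
    if (side == 'up') | (side == 'left'):
        justified_mask = np.flip(justified_mask, axis=axis)
    out = np.full(a.shape, invalid_val, dtype=float)
    if axis == 1:
        out[justified_mask] = a[mask]
    else:
        out.T[justified_mask.T] = a.T[mask.T]
    return out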

How to get the rolling(window = 3).max() on a numpy.ndarray?

I have a numpy.ndarray as follows. It's the output from talib.RSI, and its type is numpy.ndarray. I want to get the rolling(window=3).max() and the rolling(window=3).min() of it.
How do I do that?
[ nan nan nan nan nan
nan nan nan nan nan
nan nan nan nan 56.50118203
60.05461743 56.99068148 55.70899949 59.2299361 64.19044898
60.62186599 53.96346826 44.06538636 52.04519976 51.32884016
58.65240379 60.44789401 58.94743634 59.75308787 53.56534397
54.22091468 47.22502341 51.5425848 50.0923126 49.80608264
45.69087847 50.07778871 54.21701441 58.79268406 63.59307774
66.08195696 65.49255218 65.11035657 68.47403716 70.70530564
73.21955929 76.57474822 65.89852612 66.51497688 72.42658468
73.80944844 69.56561001]
If you can afford to add a new dependency, I would rather do that with pandas.
import numpy
import pandas
x = numpy.array([0, 1, 2, 3, 4])
s = pandas.Series(x)
print(s.rolling(3).min())
print(s.rolling(3).max())
print(s.rolling(3).mean())
print(s.rolling(3).std())
Note that converting your NumPy array to a Pandas series does not create a copy of the array, as Pandas uses NumPy arrays internally for its series.
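If you need plain NumPy arrays back afterwards, the rolling results convert cheaply; a small usage sketch:
rolling_max = s.rolling(3).max().to_numpy()
rolling_min = s.rolling(3).min().to_numpy()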
You can use np.lib.stride_tricks.as_strided:
# a smaller example
import numpy as np
import numpy.random as npr

npr.seed(123)
arr = npr.randn(10)
arr[:4] = np.nan
# 8 is the itemsize of float64, so strides=(8, 8) advances each window by one element
windows = np.lib.stride_tricks.as_strided(arr, shape=(8, 3), strides=(8, 8))
print(windows.max(axis=1))
print(windows.sum(axis=1))
[ nan nan nan nan 1.65143654 1.65143654
1.26593626 1.26593626]
[ nan nan nan nan -1.35384296 -1.20415534
-1.58965561 -0.02971677]
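On NumPy 1.20 or newer, numpy.lib.stride_tricks.sliding_window_view builds the same windows without hand-computed strides; a sketch using the same arr as above:
windows = np.lib.stride_tricks.sliding_window_view(arr, window_shape=3)
print(windows.max(axis=1))  # rolling max over each length-3 window
print(windows.min(axis=1))  # rolling min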
