One-way Anova loop through pandas dataframe - results in a single table - python

I have a pandas dataframe containing 16 columns, of which 14 represent variables on which I perform a looped ANOVA test using statsmodels. My dataframe looks something like this (simplified):
ID Cycle_duration Average_support_phase Average_swing_phase Label
1 23.1 34.3 47.2 1
2 27.3 38.4 49.5 1
3 25.8 31.1 45.7 1
4 24.5 35.6 41.9 1
...
So far this is what I'm doing:
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv('features_total.csv')

for variable in df.columns:
    model = ols('{} ~ Label'.format(variable), data=df).fit()
    anova_table = sm.stats.anova_lm(model, typ=2)
    print(anova_table)
Which yields:
sum_sq df F PR(>F)
Label 0.124927 2.0 2.561424 0.084312
Residual 1.731424 71.0 NaN NaN
sum_sq df F PR(>F)
Label 62.626057 2.0 4.969491 0.009552
Residual 447.374788 71.0 NaN NaN
sum_sq df F PR(>F)
Label 62.626057 2.0 4.969491 0.009552
Residual 447.374788 71.0 NaN NaN
I'm getting an individual table printed for each variable on which the ANOVA is performed. Basically, what I want is to print one single table with the summarized results, something like this:
sum_sq df F PR(>F)
Cycle_duration 0.1249270 2.0 2.561424 0.084312
Residual 1.7314240 71.0 NaN NaN
Average_support_phase 62.626057 2.0 4.969491 0.009552
Residual 447.374788 71.0 NaN NaN
Average_swing_phase 62.626057 2.0 4.969491 0.009552
Residual 447.374788 71.0 NaN NaN
I can already see a problem: this method always prints the 'Label' nomenclature before the actual values rather than the variable name in question (as I've shown above, I would like to have the variable name above each 'Residual'). Is this even possible with the statsmodels approach?
I'm fairly new to Python, and excuse me if this has nothing to do with statsmodels - in that case, please point me to what I should be trying instead.

You can collect the tables and concatenate them at the end of your loop. This method will create a hierarchical index, but I think that makes it a bit more clear. Something like this:
keys = []
tables = []

for variable in df.columns:
    model = ols('{} ~ Label'.format(variable), data=df).fit()
    anova_table = sm.stats.anova_lm(model, typ=2)
    keys.append(variable)
    tables.append(anova_table)

df_anova = pd.concat(tables, keys=keys, axis=0)
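If you prefer the flat layout sketched in the question, with each variable's name shown in place of 'Label', one way (a minimal sketch built on the df_anova frame above) is to rewrite the index after concatenating:
# Flatten the hierarchical index so each 'Label' row carries the variable
# name instead (df_anova comes from the loop above).
df_anova_flat = df_anova.copy()
df_anova_flat.index = [var if row == 'Label' else row
                       for var, row in df_anova.index]
print(df_anova_flat)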
Somewhat related, I would also suggest correcting for multiple comparisons. This is more a statistical suggestion than a coding suggestion, but considering you are performing numerous statistical tests, it makes sense to account for the probability that at least one of the tests will produce a false positive.
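As a hedged sketch of what that correction could look like with statsmodels (assuming the df_anova frame built above; the correction method is your choice):
from statsmodels.stats.multitest import multipletests

# Pull the 'Label' p-value for each variable from the combined table
# and apply a Benjamini-Hochberg (FDR) correction.
pvals = df_anova.xs('Label', level=1)['PR(>F)']
reject, pvals_corrected, _, _ = multipletests(pvals, alpha=0.05, method='fdr_bh')
corrected = pd.DataFrame({'p_raw': pvals, 'p_fdr_bh': pvals_corrected,
                          'reject_h0': reject}, index=pvals.index)
print(corrected)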

Related

Polynomial regression predicted values as dataframe (Python)

A lot of questions have been answered on this topic; however, I could not figure out one thing.
I have a dataframe and I am performing regression; afterwards the results are stored in new columns of the Test dataframe. To compare methods I need to do both linear and polynomial regression.
I have found a way to do this nicely with linear regression, where the result is a new column of predicted values in the dataframe Test. But I cannot make this work within the same loop using polynomial regression, because the final Test dataframe contains multiple null values: in the polynomial_features.fit_transform(X) step, the values somehow lose the corresponding Test index.
import pandas as pd
import statsmodels.api as sm
from sklearn.preprocessing import PolynomialFeatures

Test = pd.read_csv(r'D:\myfile.csv')
df_coef = []
values = list(set(Test['Value']))

for value in values:
    df_redux = Test[Test['Value'] == value]
    Y = df_redux['Y']
    X = df_redux[['X1', 'A', 'B', 'G1']]
    X = sm.add_constant(X)
    # linear
    model_1 = sm.OLS(Y, X).fit()
    predictions_1 = model_1.predict(X)
    # polynomial
    polynomial_features = PolynomialFeatures(degree=2)
    xp = polynomial_features.fit_transform(X)
    model_2 = sm.OLS(Y, xp).fit()
    predictions_2 = model_2.predict(xp)

    stats_1 = pd.read_html(model_1.summary().tables[1].as_html(), header=0, index_col=0)[0]
    stats_2 = pd.read_html(model_2.summary().tables[1].as_html(), header=0, index_col=0)[0]

    predictions_1 = pd.DataFrame(predictions_1, columns=['lin'])
    predictions_2 = pd.DataFrame(predictions_2, columns=['poly'])
    # ??? how to concat and append both predictions_1 and predictions_2 into the same df_coef dataframe?
    gf = pd.concat([predictions_1, df_redux], axis=1)
    df_coef.append(gf)

all_coef = pd.concat(df_coef)
type(all_coef)
Out[234]: pandas.core.frame.DataFrame
The problem is that the transformed xp is of type <class 'numpy.ndarray'>, while X is of type <class 'pandas.core.frame.DataFrame'>. The question is: how can I get the polynomial regression predicted values into a new column of Test, next to the linear regression results? This is probably really simple, but I could not figure it out.
print(type(X))
print(type(xp))
print(X.sample(2))
print()
print(xp)
<class 'pandas.core.frame.DataFrame'>
<class 'numpy.ndarray'>
X1 A B G1
962 4.334912 1.945910 3.135494 3.258097
1365 4.197888 2.197225 3.135494 3.332205
[[ 1. 4.77041663 1.94591015 ... 35.74106743 34.52550933
33.35129251]
[ 1. 4.43240629 1.94591015 ... 33.28387641 32.03140262
30.82605947]
[ 1. 3.21669428 1.94591015 ... 29.95821572 30.38903979
30.82605947]
This is the result I get with the polynomial regression predicted values appended to the original Test dataframe:
0 6.178542 3.0 692 ... 2.079442 4.783216 6.146329
1 6.156108 11.0 692 ... 2.197225 4.842126 6.113682
2 6.071453 12.0 692 ... 2.197225 4.814595 6.052089
3 5.842053 NaN NaN ... NaN NaN NaN
4 4.625762 30.0 692 ... 1.945910 5.018201 5.828946
This is the correct result I obtained using only linear regression, without NaN and with a value in each row, as it is supposed to be:
0 6.151675 3 692 5 ... 3.433987 2.079442 4.783216 6.146329
1 6.132077 11 692 5 ... 3.401197 2.197225 4.842126 6.113682
2 6.068450 12 692 5 ... 3.332205 2.197225 4.814595 6.052089
4 5.819535 30 692 5 ... 3.258097 1.945910 5.018201 5.828946
8 4.761362 61 692 5 ... 2.564949 1.945910 3.889585 4.624973
Solved this by adding a line for the numpy-to-Series transformation, keeping df_redux's index. For the model statistics, use the statsmodels summary:
import pandas as pd
from statsmodels.api import OLS
predictions_2 = model_2.predict(xp)
predictions_2_series = pd.Series(predictions_2, index=df_redux.index.values)
print(OLS(Y, xp).fit().summary())
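For completeness, here is a minimal sketch of how both prediction sets can then be appended side by side inside the loop (names such as df_redux, predictions_1, predictions_2 and df_coef follow the question's code; wrapping both in a Series carrying df_redux's index keeps everything aligned):
import numpy as np
import pandas as pd

# Both series share df_redux's index, so concat lines the rows up correctly.
lin = pd.Series(np.asarray(predictions_1), index=df_redux.index, name='lin')
poly = pd.Series(np.asarray(predictions_2), index=df_redux.index, name='poly')
gf = pd.concat([lin, poly, df_redux], axis=1)
df_coef.append(gf)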

Pandas Way of Weighted Average in a Large DataFrame

I have a large dataset (around 8 million rows x 25 columns) in Pandas, and I am struggling to find a way to compute a weighted average over this dataframe, which in turn creates another dataframe.
Here is what my dataset looks like (a very simplified version of it):
prec temp
location_id hours
135 1 12.0 4.0
2 14.0 4.1
3 14.3 3.5
4 15.0 4.5
5 15.0 4.2
6 15.0 4.7
7 15.5 5.1
136 1 12.0 4.0
2 14.0 4.1
3 14.3 3.5
4 15.0 4.5
5 15.0 4.2
6 15.0 4.7
7 15.5 5.1
I have a multi-index on [location_id, hours]. I have around 60k locations and 140 hours for each location (making up the 8 million rows).
The rest of the data is numeric (float) or categorical. I have only included 2 columns here, normally there are around 20 columns.
What I want to do is create a new dataframe that is basically a weighted average of this dataframe. The requirements indicate that 12 of these location_ids should be averaged with specified weights to form each combined_location_id's values.
For example, location_ids 1,3,5,7,9,11,13,15,17,19,21,23 with their appropriate weights (separate data coming in from another dataframe) should be weight-averaged to form the combined_location_id CL_1's data.
That is a lot of data to handle and I wasn't able to find a purely Pandas way of solving it. Therefore, I went with a for-loop approach. It is extremely slow and I am sure this is not the right way to do it:
def __weighted(self, ds, weights):
    return np.average(ds, weights=weights)

f = {'hours': 'first', 'location_id': 'first',
     'temp': lambda x: self.__weighted(x, weights),
     'prec': lambda x: self.__weighted(x, weights)}

data_frames = []
for combined_location in all_combined_locations:
    mapped_location_ids = combined_location.location_ids
    weights = combined_location.weights_of_location_ids
    data_for_this_combined_location = pd.concat(df_data.loc[df_data.index.get_level_values(0) == location_id] for location_id in mapped_location_ids)
    data_grouped_by_distance = data_for_this_combined_location.groupby("hours", as_index=False)
    data_grouped_by_distance = data_grouped_by_distance.agg(f)
    data_frames.append(data_grouped_by_distance)

df_combined_location_data = pd.concat(data_frames)
df_combined_location_data.set_index(['location_id', 'hours'], inplace=True)
This works functionally; however, the performance and memory consumption are horrible. It takes over 2 hours on my dataset, and that is currently not acceptable. The existence of the for loop is an indicator that this could be handled better.
Is there a better/faster way to implement this?
From what I saw, you can remove the inner loop over mapped_location_ids by selecting all of them at once with isin on the index:
data_for_this_combined_location = df_data.loc[df_data.index.get_level_values(0).isin(mapped_location_ids)]
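Beyond that, the whole computation can be expressed without the outer loop by turning the combined-location definitions into a long mapping table and letting groupby do the weighted average. This is only a sketch under assumptions about your objects (it presumes each combined location exposes an id plus the location_ids and weights_of_location_ids attributes shown above, and that the value columns are numeric):
import pandas as pd

# Hypothetical long mapping: one row per (combined_location_id, location_id)
# pair with its weight, built from the combined-location objects.
mapping = pd.DataFrame(
    [(cl.id, loc, w)
     for cl in all_combined_locations
     for loc, w in zip(cl.location_ids, cl.weights_of_location_ids)],
    columns=['combined_location_id', 'location_id', 'weight'])

value_cols = ['prec', 'temp']  # extend to all numeric columns

# Join the weights onto the raw rows, then compute sum(w*x) / sum(w)
# per (combined_location_id, hours) in a single grouped aggregation.
merged = df_data.reset_index().merge(mapping, on='location_id', how='inner')
weighted = merged[value_cols].multiply(merged['weight'], axis=0)
weighted['weight'] = merged['weight']
weighted[['combined_location_id', 'hours']] = merged[['combined_location_id', 'hours']]

sums = weighted.groupby(['combined_location_id', 'hours']).sum()
df_combined = sums[value_cols].div(sums['weight'], axis=0)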

How to remove missing data and 0s whilst keeping the dataframe the same shape using Pandas?

I have a dataframe and I want to reformat it to remove any missing values or zeros that occur before the first non-zero value in each row. However, I do not want to delete any rows or columns, and I do not want to remove any 0s or missing values that appear after the first non-zero value.
Below is the dataframe I am working with:
> data =[['Adam',2.55,4.53,3.45,2.12,3.14],['Bill',np.NaN,2.14,3.65,4.12],['Chris',np.NaN,0,2.82,0,6.04],['David',np.NaN,0,7.42,3.52]]
> df = pd.DataFrame(data, columns = ['Name', 'A','B','C','D','E'])
Moreover, here is the expected outcome:
> data1 =[['Adam',2.55,4.53,3.45,2.12,3.14],['Bill',2.14,3.65,4.12],['Chris',2.82,0,6.04],['David',7.42,3.52]]
> df1 = pd.DataFrame(data1, columns = ['Name', 'A','B','C','D','E'])
This is not a trivial problem. Here is the solution:
m=df.set_index('Name')
m=m[m.isin(m.mask(m.le(0)).bfill(axis=1).iloc[:,0]).cumsum(axis=1).astype(bool)]
print(m)
A B C D E
Name
Adam 2.55 4.53 3.45 2.12 3.14
Bill NaN 2.14 3.65 4.12 NaN
Chris NaN NaN 2.82 0.00 6.04
David NaN NaN 7.42 3.52 NaN
Then using justify:
pd.DataFrame(justify(m.values,np.nan),columns=m.columns,index=m.index).reset_index()
Name A B C D E
0 Adam 2.55 4.53 3.45 2.12 3.14
1 Bill 2.14 3.65 4.12 NaN NaN
2 Chris 2.82 0.00 6.04 NaN NaN
3 David 7.42 3.52 NaN NaN NaN
Explanation:
Step 1: Set the Name column as index so we deal only with the numeric values.
Step 2: m.mask(m.le(0)).bfill(axis=1).iloc[:,0] gives the first value in each row that is greater than 0.
Step 3: isin() then returns True wherever that value appears in each row.
Step 4: cumsum(axis=1).astype(bool) makes that value and everything after it True, so we keep only those values; everything else becomes NaN.
Then use the justify function from the linked post; a sketch of it is included below for reference.
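The linked post isn't reproduced here, but a NumPy-based justify helper of roughly this shape does the left-shifting (a sketch; treat the linked answer as the authoritative version):
import numpy as np

def justify(a, invalid_val=0, axis=1, side='left'):
    # Push the valid entries of `a` to one side along `axis`,
    # filling the remaining positions with `invalid_val`.
    if invalid_val is np.nan:
        mask = ~np.isnan(a)
    else:
        mask = a != invalid_val
    justified_mask = np.sort(mask, axis=axis)
    if side in ('up', 'left'):
        justified_mask = np.flip(justified_mask, axis=axis)
    out = np.full(a.shape, invalid_val)
    if axis == 1:
        out[justified_mask] = a[mask]
    else:
        out.T[justified_mask.T] = a.T[mask.T]
    return out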

DataFrame interpolate not working in Pandas + multidimensional interpolation

Multidimensional interpolation with a dataframe is not working as expected:
import pandas as pd
import numpy as np
raw_data = {'CCY_CODE': ['SGD','USD','USD','USD','USD','USD','USD','EUR','EUR','EUR','EUR','EUR','EUR','USD'],
            'END_DATE': ['16/03/2018','17/03/2018','17/03/2018','17/03/2018','17/03/2018','17/03/2018','17/03/2018',
                         '17/03/2018','17/03/2018','17/03/2018','17/03/2018','17/03/2018','17/03/2018','17/03/2018'],
            'STRIKE': [0.005,0.01,0.015,0.02,0.025,0.03,0.035,0.04,0.045,0.05,0.55,0.06,0.065,0.07],
            'VOLATILITY': [np.nan,np.nan,0.3424,np.nan,0.2617,0.2414,np.nan,np.nan,0.215,0.212,0.2103,np.nan,0.2092,np.nan]
            }
df_volsurface = pd.DataFrame(raw_data,columns = ['CCY_CODE','END_DATE','STRIKE','VOLATILITY'])
df_volsurface['END_DATE'] = pd.to_datetime(df_volsurface['END_DATE'])
df_volsurface.interpolate(method='akima',limit_direction='both')
Output:
    CCY_CODE   END_DATE  STRIKE  VOLATILITY
0        SGD  3/16/2018   0.005         NaN
1        USD  3/17/2018   0.010         NaN
2        USD  3/17/2018   0.015    0.342400
3        USD  3/17/2018   0.020    0.296358
4        USD  3/17/2018   0.025    0.261700
5        USD  3/17/2018   0.030    0.241400
6        USD  3/17/2018   0.035    0.230295
7        EUR  3/17/2018   0.040    0.220911
8        EUR  3/17/2018   0.045    0.215000
9        EUR  3/17/2018   0.050    0.212000
10       EUR  3/17/2018   0.550    0.210300
11       EUR  3/17/2018   0.060    0.209471
12       EUR  3/17/2018   0.065    0.209200
13       USD  3/17/2018   0.070         NaN
Expected Result:
    CCY_CODE   END_DATE  STRIKE  VOLATILITY
0        SGD  3/16/2018   0.005  NaN
1        USD  3/17/2018   0.010  Expected some logical value
2        USD  3/17/2018   0.015  0.342400
3        USD  3/17/2018   0.020  0.296358
4        USD  3/17/2018   0.025  0.261700
5        USD  3/17/2018   0.030  0.241400
6        USD  3/17/2018   0.035  0.230295
7        EUR  3/17/2018   0.040  0.220911
8        EUR  3/17/2018   0.045  0.215000
9        EUR  3/17/2018   0.050  0.212000
10       EUR  3/17/2018   0.550  0.210300
11       EUR  3/17/2018   0.060  0.209471
12       EUR  3/17/2018   0.065  0.209200
13       USD  3/17/2018   0.070  Expected some logical value
The linear interpolation method just copies the last available value to all backward and forward missing values, without considering CCY_CODE:
df_volsurface.interpolate(method='linear',limit_direction='both')
Output:
    CCY_CODE   END_DATE  STRIKE  VOLATILITY
0        SGD  3/16/2018   0.005     0.34240
1        USD  3/17/2018   0.010     0.34240
2        USD  3/17/2018   0.015     0.34240
3        USD  3/17/2018   0.020     0.30205
4        USD  3/17/2018   0.025     0.26170
5        USD  3/17/2018   0.030     0.24140
6        USD  3/17/2018   0.035     0.23260
7        EUR  3/17/2018   0.040     0.22380
8        EUR  3/17/2018   0.045     0.21500
9        EUR  3/17/2018   0.050     0.21200
10       EUR  3/17/2018   0.550     0.21030
11       EUR  3/17/2018   0.060     0.20975
12       EUR  3/17/2018   0.065     0.20920
13       USD  3/17/2018   0.070     0.20920
Any help is appreciated! Thanks!
I'd like to point out that this is still one-dimensional interpolation. We have one independent variable ('STRIKE') and one dependent variable ('VOLATILITY'). The interpolation is simply done under different conditions, e.g. for each day, each currency, each scenario, etc. The following is an example of how the interpolation can be done based on 'END_DATE' and 'CCY_CODE'.
import scipy.interpolate

# set all the conditions as index
df_volsurface.set_index(['END_DATE', 'CCY_CODE', 'STRIKE'], inplace=True)
df_volsurface.sort_index(level=['END_DATE', 'CCY_CODE', 'STRIKE'], inplace=True)

# create separate columns for all criteria except the independent variable
df_volsurface = df_volsurface.unstack(level=['END_DATE', 'CCY_CODE'])

for ccy in df_volsurface:
    indices = df_volsurface[ccy].notna()
    if not any(indices):
        continue  # we are not interested in a column with only NaN
    x = df_volsurface.index.get_level_values(level='STRIKE')  # independent variable
    y = df_volsurface[ccy]  # dependent variable
    # create interpolation function
    f_interp = scipy.interpolate.interp1d(x[indices], y[indices], kind='linear',
                                          bounds_error=False, fill_value='extrapolate')
    df_volsurface['VOL_INTERP', ccy[1], ccy[2]] = f_interp(x)

print(df_volsurface)
The interpolation for the other conditions should work analogously. This is the resulting DataFrame:
VOLATILITY VOL_INTERP
END_DATE 2018-03-16 2018-03-17 2018-03-17
CCY_CODE SGD EUR USD EUR USD
STRIKE
0.005 NaN NaN NaN 0.23900 0.42310
0.010 NaN NaN NaN 0.23600 0.38275
0.015 NaN NaN 0.3424 0.23300 0.34240
0.020 NaN NaN NaN 0.23000 0.30205
0.025 NaN NaN 0.2617 0.22700 0.26170
0.030 NaN NaN 0.2414 0.22400 0.24140
0.035 NaN NaN NaN 0.22100 0.22110
0.040 NaN NaN NaN 0.21800 0.20080
0.045 NaN 0.2150 NaN 0.21500 0.18050
0.050 NaN 0.2120 NaN 0.21200 0.16020
0.055 NaN 0.2103 NaN 0.21030 0.13990
0.060 NaN NaN NaN 0.20975 0.11960
0.065 NaN 0.2092 NaN 0.20920 0.09930
0.070 NaN NaN NaN 0.20865 0.07900
Use df_volsurface.stack() to return to a multiindex of your choice. There are also several pandas interpolation methods to choose from. However, I have not found a satisfactory solution for your problem using method='akima' because it only interpolates between the given data points, but does not seem to extrapolate beyond.
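For example, a minimal sketch of going back to a long, multi-indexed layout (assuming the column level names set by the unstack call above):
# Move END_DATE and CCY_CODE back from the columns into the index;
# the remaining column level holds VOLATILITY and VOL_INTERP.
df_long = df_volsurface.stack(['END_DATE', 'CCY_CODE']).sort_index()
print(df_long)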

Pandas vectorize statistical odds-ratio test

I'm looking for a faster way to do odds-ratio tests on a large dataset. I have about 1200 variables (see var_col) that I want to test against each other for mutual exclusion/co-occurrence. An odds ratio is defined as (a * d) / (b * c), where a, b, c, d are the numbers of samples that are (a) altered in neither site x nor y, (b) altered in site x but not y, (c) altered in y but not x, and (d) altered in both. I'd also like to calculate the Fisher exact test to determine statistical significance. The scipy function fisher_exact can calculate both of these (see below).
#here's a sample of my original dataframe
sample_id_no var_col
0 258.0
1 -24.0
2 -150.0
3 149.0
4 108.0
5 -126.0
6 -83.0
7 2.0
8 -177.0
9 -171.0
10 -7.0
11 -377.0
12 -272.0
13 66.0
14 -13.0
15 -7.0
16 0.0
17 189.0
18 7.0
13 -21.0
19 80.0
20 -14.0
21 -76.0
3 83.0
22 -182.0
import pandas as pd
import numpy as np
from scipy.stats import fisher_exact
import itertools
#create a dataframe with each possible pair of variables
var_pairs = pd.DataFrame(list(itertools.combinations(df.var_col.unique(),2) )).rename(columns = {0:'alpha_site', 1: 'beta_site'})
#create a cross-tab with samples and vars
sample_table = pd.crosstab(df.sample_id_no, df.var_col)
odds_ratio_results = var_pairs.apply(getOddsRatio, axis=1, args = (sample_table,))
#where the function getOddsRatio is defined as:
def getOddsRatio(pairs, sample_table):
    alpha_site, beta_site = pairs
    oddsratio, pvalue = fisher_exact(pd.crosstab(sample_table[alpha_site] > 0, sample_table[beta_site] > 0))
    return [oddsratio, pvalue]
This code runs very slowly, especially on large datasets. In my actual dataset, I have around 700k variable pairs. Since the getOddsRatio() function is applied to each pair individually, it is definitely the main source of the slowness. Is there a more efficient solution?
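One direction worth trying (a sketch, not a drop-in replacement) is to precompute all four counts for every pair at once with boolean matrix products, so the per-pair work reduces to the Fisher exact call itself; sample_table here is the crosstab from the question:
import numpy as np

# Boolean sample x variable indicator matrix.
B = (sample_table > 0).to_numpy().astype(int)
n_samples = B.shape[0]
var_labels = sample_table.columns

# Pairwise counts via matrix products:
# d = altered in both, b = altered in x only, c = altered in y only, a = neither.
d = B.T @ B
totals = B.sum(axis=0)
b = totals[:, None] - d
c = totals[None, :] - d
a = n_samples - d - b - c

# Odds ratios for every pair in one shot (0/0 and x/0 come out as nan/inf).
with np.errstate(divide='ignore', invalid='ignore'):
    odds = (a * d) / (b * c)
The Fisher exact p-values still need a loop, since scipy's fisher_exact is not vectorized, but it can now be fed the precomputed table [[a, b], [c, d]] for each pair instead of building a crosstab every time.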
