Interpolating a multi-index pandas DataFrame - python

I need to interpolate a multi-index DataFrame.
For example, this is the main DataFrame:
a b c result
1 1 1 6
1 1 2 9
1 2 1 8
1 2 2 11
2 1 1 7
2 1 2 10
2 2 1 9
2 2 2 12
I need to find the result for:
1.3 1.7 1.55
What I've been doing so far is appending a pd.Series of NaNs
for each index individually.
As you can see, this seems like a VERY inefficient way.
I would be happy if someone could enlighten me.
P.S.
I spent some time looking over SO, and if the answer is in there, I missed it:
Fill multi-index Pandas DataFrame with interpolation
Resampling Within a Pandas MultiIndex
pandas multiindex dataframe, ND interpolation for missing values
Algorithm (a rough code sketch of this stage-by-stage approach follows stage 3):
stage 1:
a b c result
1 1 1 6
1 1 2 9
1 2 1 8
1 2 2 11
1.3 1 1 6.3
1.3 1 2 9.3
1.3 2 1 8.3
1.3 2 2 11.3
2 1 1 7
2 1 2 10
2 2 1 9
2 2 2 12
stage 2:
a b c result
1 1 1 6
1 1 2 9
1 2 1 8
1 2 2 11
1.3 1 1 6.3
1.3 1 2 9.3
1.3 1.7 1 7.7
1.3 1.7 2 10.7
1.3 2 1 8.3
1.3 2 2 11.3
2 1 1 7
2 1 2 10
2 2 1 9
2 2 2 12
stage 3:
a b c result
1 1 1 6
1 1 2 9
1 2 1 8
1 2 2 11
1.3 1 1 6.3
1.3 1 2 9.3
1.3 1.7 1 7.7
1.3 1.7 1.55 9.35
1.3 1.7 2 10.7
1.3 2 1 8.3
1.3 2 2 11.3
2 1 1 7
2 1 2 10
2 2 1 9
2 2 2 12
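A rough sketch of this stage-by-stage approach in code (an illustration of the manual method, assuming df is the main frame above with a sorted MultiIndex on 'a', 'b', 'c', and using numpy's 1-D np.interp instead of appending NaN rows):
import numpy as np

# Stage 1: for each (b, c) pair, interpolate 'result' along 'a' at a = 1.3.
stage1 = (df.reset_index()
            .groupby(['b', 'c'])
            .apply(lambda g: np.interp(1.3, g['a'], g['result']))
            .rename('result'))
# Stage 2: for each c, interpolate the stage-1 values along 'b' at b = 1.7.
stage2 = (stage1.reset_index()
                .groupby('c')
                .apply(lambda g: np.interp(1.7, g['b'], g['result']))
                .rename('result'))
# Stage 3: interpolate the stage-2 values along 'c' at c = 1.55.
print(np.interp(1.55, stage2.index, stage2.values))  # 9.35 for the sample data
This only produces the single target value rather than the augmented tables above; the answer below avoids the manual staging entirely.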

You can use scipy.interpolate.LinearNDInterpolator to do what you want. If the DataFrame has a MultiIndex with the levels 'a', 'b' and 'c', then:
from scipy.interpolate import LinearNDInterpolator as lNDI
print (lNDI(points=df.index.to_frame().values, values=df.result.values)([1.3, 1.7, 1.55]))
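For the sample frame above this should print [9.35] (up to floating-point rounding), matching stage 3 of the walk-through; the sample result is linear in a, b and c, so the triangulation-based linear interpolation reproduces it exactly.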
Now, if you have a DataFrame whose index contains all the tuples (a, b, c) you want to calculate, you can do, for example:
import pandas as pd
from scipy.interpolate import LinearNDInterpolator as lNDI

def pd_interpolate_MI(df_input, df_toInterpolate):
    # create the interpolation function from the known points
    func_interp = lNDI(points=df_input.index.to_frame().values, values=df_input.result.values)
    # calculate the values for the unknown indices
    df_toInterpolate['result'] = func_interp(df_toInterpolate.index.to_frame().values)
    # return the dataframe with the new values
    return pd.concat([df_input, df_toInterpolate]).sort_index()
Then, for example, with your df and
df_toI = pd.DataFrame(index=pd.MultiIndex.from_tuples([(1.3, 1.7, 1.55), (1.7, 1.4, 1.9)], names=df.index.names))
you get:
print (pd_interpolate_MI(df, df_toI))
result
a b c
1.0 1.0 1.00 6.00
2.00 9.00
2.0 1.00 8.00
2.00 11.00
1.3 1.7 1.55 9.35
1.7 1.4 1.90 10.20
2.0 1.0 1.00 7.00
2.00 10.00
2.0 1.00 9.00
2.00 12.00

Related

In pandas, how to assign the result of a groupby aggregate to the next group in the original df?

Using pandas, I would like to use groupby with an aggregate function, e.g. mean,
and then put the results back into the original DataFrame, but in the next group and not in the group itself. How can I do this in a vectorized way?
I have a pandas dataframe like this:
data = {'Group': ['A', 'A', 'B', 'B', 'B', 'B', 'C', 'C', 'D', 'D'],
        'Value': [1.1, 1.3, 9.1, 9.2, 9.5, 9.4, 6.2, 6.4, 2.2, 2.3]}
df = pd.DataFrame(data, columns = ['Group','Value'])
print (df)
Group Value
0 A 1.1
1 A 1.3
2 B 9.1
3 B 9.2
4 B 9.5
5 B 9.4
6 C 6.2
7 C 6.4
8 D 2.2
9 D 2.3
I would like to get this, where each group has the mean value of the previous group.
Group Value
0 A NaN
1 A NaN
2 B 1.2
3 B 1.2
4 B 1.2
5 B 1.2
6 C 9.3
7 C 9.3
8 D 6.3
9 D 6.3
I tried this, but it does not shift the result to the next group:
df.groupby('Group')['Value'].transform('mean')
Easy, use map on a groupby result:
df['Value'] = df['Group'].map(df.groupby('Group')['Value'].mean().shift())
df
Group Value
0 A NaN
1 A NaN
2 B 1.2
3 B 1.2
4 B 1.2
5 B 1.2
6 C 9.3
7 C 9.3
8 D 6.3
9 D 6.3
How It Works
Get the mean
df.groupby('Group')['Value'].mean()
Group
A 1.20
B 9.30
C 6.30
D 2.25
Name: Value, dtype: float64
Shift it down by 1
df.groupby('Group')['Value'].mean().shift()
Group
A NaN
B 1.2
C 9.3
D 6.3
Name: Value, dtype: float64
Map it back.
df['Group'].map(df.groupby('Group')['Value'].mean().shift())
0 NaN
1 NaN
2 1.2
3 1.2
4 1.2
5 1.2
6 9.3
7 9.3
8 6.3
9 6.3
Name: Group, dtype: float64
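If the original values should be kept, the same mapping can go into a new column instead (a small variation on the answer, starting again from the original df; the column name PrevGroupMean is just an illustration):
df['PrevGroupMean'] = df['Group'].map(df.groupby('Group')['Value'].mean().shift())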
You can calculate the aggregated GroupBy.mean of each group, shift it with pd.Series.shift, and take advantage of pandas index alignment.
df.set_index('Group').assign(value=df.groupby('Group')['Value'].mean().shift()).reset_index()
Group Value value
0 A 1.1 NaN
1 A 1.3 NaN
2 B 9.1 1.2
3 B 9.2 1.2
4 B 9.5 1.2
5 B 9.4 1.2
6 C 6.2 9.3
7 C 6.4 9.3
8 D 2.2 6.3
9 D 2.3 6.3

Pandas: Iterate by two column for each iteration

Does anyone know how to iterate over a pandas DataFrame taking two columns for each iteration?
Say I have
a b c d
5.1 3.5 1.4 0.2
4.9 3.0 1.4 0.2
4.7 3.2 1.3 0.2
4.6 3.1 1.5 0.2
5.0 3.6 1.4 0.2
5.4 3.9 1.7 0.4
So something like
for x, y in ...:
correlation of x and y
So the output would be:
corr_ab corr_bc corr_cd
0.1 0.3 -0.4
You can use zip with the columns offset by one to get adjacent pairs of column names, create a dictionary of one-element lists with Series.corr and f-strings for the column names, and pass it to the DataFrame constructor:
L = {f'corr_{col1}{col2}': [df[col1].corr(df[col2])]
     for col1, col2 in zip(df.columns, df.columns[1:])}
df = pd.DataFrame(L)
print (df)
corr_ab corr_bc corr_cd
0 0.860108 0.61333 0.888523
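If every column pair is wanted rather than only adjacent ones (an assumption about the requirement, not part of the question), itertools.combinations is a possible drop-in for the zip:
from itertools import combinations
import pandas as pd

L = {f'corr_{col1}{col2}': [df[col1].corr(df[col2])]
     for col1, col2 in combinations(df.columns, 2)}
print(pd.DataFrame(L))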
You can use df.corr to get the correlation matrix of the DataFrame. You can then use mask to drop the repeated correlations, and after that stack the result to make it more readable. Assuming you have data like this:
0 1 2 3 4
0 11 6 17 2 3
1 3 12 16 17 5
2 13 2 11 10 0
3 8 12 13 18 3
4 4 3 1 0 18
Finding the correlation,
dataCorr = data.corr(method='pearson')
We get,
0 1 2 3 4
0 1.000000 -0.446023 0.304108 -0.136610 -0.674082
1 -0.446023 1.000000 0.563112 0.773013 -0.258801
2 0.304108 0.563112 1.000000 0.494512 -0.823883
3 -0.136610 0.773013 0.494512 1.000000 -0.545530
4 -0.674082 -0.258801 -0.823883 -0.545530 1.000000
Masking out repeated correlations,
dataCorr = dataCorr.mask(np.tril(np.ones(dataCorr.shape)).astype(bool))
We get
0 1 2 3 4
0 NaN -0.446023 0.304108 -0.136610 -0.674082
1 NaN NaN 0.563112 0.773013 -0.258801
2 NaN NaN NaN 0.494512 -0.823883
3 NaN NaN NaN NaN -0.545530
4 NaN NaN NaN NaN NaN
Stacking the correlated data
dataCorr = dataCorr.stack().reset_index()
The stacked data will look as shown
level_0 level_1 0
0 0 1 -0.446023
1 0 2 0.304108
2 0 3 -0.136610
3 0 4 -0.674082
4 1 2 0.563112
5 1 3 0.773013
6 1 4 -0.258801
7 2 3 0.494512
8 2 4 -0.823883
9 3 4 -0.545530
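As a possible final step (not part of the answer above), the stacked pairs can be reshaped into the single-row corr_xy layout the question asks for; note this keeps every pair, not only adjacent columns, since that is what df.corr produces:
import pandas as pd

dataCorr.columns = ['col1', 'col2', 'corr']
labels = 'corr_' + dataCorr['col1'].astype(str) + dataCorr['col2'].astype(str)
print(pd.DataFrame([dataCorr['corr'].values], columns=labels))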

Grouping dataframe based on consecutive occurrence of values

I have a pandas DataFrame with one column that is either true or false (titled 'condition' in the example below). I would like to group the DataFrame by consecutive true or false values. I have tried to use pandas.groupby but haven't succeeded with that method, although I think that's down to my lack of understanding. An example of the DataFrame can be found below:
df = pd.DataFrame(df)
print df
index condition H t
0 1 2 1.1
1 1 7 1.5
2 0 1 0.9
3 0 6.5 1.6
4 1 7 1.1
5 1 9 1.8
6 1 22 2.0
Ideally the output of the program would be something along the lines of what can be found below. I was thinking of using some sort of 'grouping' method to make it easier to call each set of results, but I am not sure if this is the best method. Any help would be greatly appreciated.
index condition H t group
0 1 2 1.1 1
1 1 7 1.5 1
2 0 1 0.9 2
3 0 6.5 1.6 2
4 1 7 1.1 3
5 1 9 1.8 3
6 1 22 2.0 3
Since you're dealing with 0/1s, here's another alternative using diff + cumsum -
df['group'] = df.condition.diff().abs().cumsum().fillna(0).astype(int) + 1
df
condition H t group
index
0 1 2.0 1.1 1
1 1 7.0 1.5 1
2 0 1.0 0.9 2
3 0 6.5 1.6 2
4 1 7.0 1.1 3
5 1 9.0 1.8 3
6 1 22.0 2.0 3
If you don't mind floats, this can be made a little faster.
df['group'] = df.condition.diff().abs().cumsum() + 1
df.loc[0, 'group'] = 1
df
index condition H t group
0 0 1 2.0 1.1 1.0
1 1 1 7.0 1.5 1.0
2 2 0 1.0 0.9 2.0
3 3 0 6.5 1.6 2.0
4 4 1 7.0 1.1 3.0
5 5 1 9.0 1.8 3.0
6 6 1 22.0 2.0 3.0
Here's the version with numpy equivalents -
df['group'] = 1
df.loc[1:, 'group'] = np.cumsum(np.abs(np.diff(df.condition))) + 1
df
condition H t group
index
0 1 2.0 1.1 1
1 1 7.0 1.5 1
2 0 1.0 0.9 2
3 0 6.5 1.6 2
4 1 7.0 1.1 3
5 1 9.0 1.8 3
6 1 22.0 2.0 3
On my machine, here are the timings -
df = pd.concat([df] * 100000, ignore_index=True)
%timeit df['group'] = df.condition.diff().abs().cumsum().fillna(0).astype(int) + 1
10 loops, best of 3: 25.1 ms per loop
%%timeit
df['group'] = df.condition.diff().abs().cumsum() + 1
df.loc[0, 'group'] = 1
10 loops, best of 3: 23.4 ms per loop
%%timeit
df['group'] = 1
df.loc[1:, 'group'] = np.cumsum(np.abs(np.diff(df.condition))) + 1
10 loops, best of 3: 21.4 ms per loop
%timeit df['group'] = df['condition'].ne(df['condition'].shift()).cumsum()
100 loops, best of 3: 15.8 ms per loop
Compare with ne (!=) by shifted column and then use cumsum:
df['group'] = df['condition'].ne(df['condition'].shift()).cumsum()
print (df)
condition H t group
index
0 1 2.0 1.1 1
1 1 7.0 1.5 1
2 0 1.0 0.9 2
3 0 6.5 1.6 2
4 1 7.0 1.1 3
5 1 9.0 1.8 3
6 1 22.0 2.0 3
Detail:
print (df['condition'].ne(df['condition'].shift()))
index
0 True
1 False
2 True
3 False
4 True
5 False
6 False
Name: condition, dtype: bool
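Note that, unlike the diff-based versions above, this comparison does not rely on the column holding 0/1 values; it works equally well for booleans, strings or any other comparable labels.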
Timings:
df = pd.concat([df]*100000).reset_index(drop=True)
In [54]: %timeit df['group'] = df['condition'].ne(df['condition'].shift()).cumsum()
100 loops, best of 3: 12.2 ms per loop
In [55]: %timeit df['group'] = df.condition.diff().abs().cumsum().fillna(0).astype(int) + 1
10 loops, best of 3: 24.5 ms per loop
In [56]: %%timeit
...: df['group'] = 1
...: df.loc[1:, 'group'] = np.cumsum(np.abs(np.diff(df.condition))) + 1
...:
10 loops, best of 3: 26.6 ms per loop
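Whichever variant is used, the resulting group column feeds an ordinary groupby, which is probably the 'grouping' the question had in mind for calling each set of results (a usage sketch, assuming df carries the group column from any of the versions above):
for key, run in df.groupby('group'):
    print(key, run['condition'].iat[0], run['H'].tolist())

# or a per-run summary
print(df.groupby('group').agg({'condition': 'first', 'H': 'mean', 't': 'max'}))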

Find the intersection of 2 pandas DataFrames according to 2 columns

I would like to find the intersection of 2 pandas DataFrames according to 2 columns, 'x' and 'y', and combine them into 1 DataFrame. The data are:
df[1]:
x y id fa
0 4 5 9283222 3.1
1 4 5 9283222 3.1
2 10 12 9224221 3.2
3 4 5 9284332 1.2
4 6 1 51249 11.2
df[2]:
x y id fa
0 4 5 19283222 1.1
1 9 3 39224221 5.2
2 10 12 29284332 6.2
3 6 1 51242 5.2
4 6 2 51241 9.2
5 1 1 51241 9.2
The expected output is something like (can ignore index):
x y id fa
0 4 5 9283222 3.1
1 4 5 9283222 3.1
2 10 12 9224221 3.2
3 4 5 9284332 1.2
4 6 1 51249 11.2
0 4 5 19283222 1.1
2 10 12 29284332 6.2
3 6 1 51242 5.2
Thank you very much!
You can find the intersection by inner-joining the x, y columns of df1 and df2, use it to filter df1 and df2 with another inner join, and then concatenate the two results with pd.concat, which should give what you need:
intersection = df1[['x', 'y']].merge(df2[['x', 'y']]).drop_duplicates()
pd.concat([df1.merge(intersection), df2.merge(intersection)])
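Note that drop_duplicates matters here: df1 contains the key (4, 5) three times, so without it the intersection frame would repeat that key and the merges back would multiply those rows.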
The simplest solution (note that this returns the intersection of the column labels, not of the rows):
df1.columns.intersection(df2.columns)

Fill in missing rows from columns after groupby in python pandas

I have a dataset that looks something like this but is much larger.
Column A Column B Result
1 1 2.4
1 4 2.9
1 1 2.8
2 5 9.3
3 4 1.2
df.groupby(['Column A','Column B'])['Result'].mean()
Column A Column B Result
1 1 2.6
4 2.9
2 5 9.3
3 4 1.2
I want to have a range from 1-10 for Column B, with the result for each missing row being the average of the corresponding Column A group mean and Column B group mean (falling back to the Column A mean when that Column B value never occurs in the data). So this is my desired table:
Column A Column B Result
1 1 2.6
2 2.75
3 2.75
4 2.9
5 6.025
2 1 5.95
2 9.3
3 9.3
...
Hopefully the point is getting across. I know the average thing is pretty confusing so I would settle with just being able to fill in the missing values of my desired range. I appreciate the help!
You need to reindex by a new index created with MultiIndex.from_product, and then groupby the first level Column A and fillna with the mean per group:
df = df.groupby(['Column A','Column B'])['Result'].mean()
mux = pd.MultiIndex.from_product([df.index.get_level_values(0).unique(),
                                  np.arange(1, 10)], names=('Column A','Column B'))
df = df.reindex(mux)
df = df.groupby(level='Column A').apply(lambda x: x.fillna(x.mean()))
print (df)
Column A Column B
1 1 2.60
2 2.75
3 2.75
4 2.90
5 2.75
6 2.75
7 2.75
8 2.75
9 2.75
2 1 9.30
2 9.30
3 9.30
4 9.30
5 9.30
6 9.30
7 9.30
8 9.30
9 9.30
3 1 1.20
2 1.20
3 1.20
4 1.20
5 1.20
6 1.20
7 1.20
8 1.20
9 1.20
Name: Result, dtype: float64
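The apply step can also be written as a fillna against a per-group transform (an equivalent formulation, not from the answer itself, assuming the same reindexed df as above):
# equivalent to the groupby(...).apply(lambda x: x.fillna(x.mean())) line
df = df.fillna(df.groupby(level='Column A').transform('mean'))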
