I have a dataset that looks something like this but is much larger.
Column A  Column B  Result
1         1         2.4
1         4         2.9
1         1         2.8
2         5         9.3
3         4         1.2
df.groupby(['Column A','Column B'])['Result'].mean()
Column A  Column B  Result
1         1         2.6
          4         2.9
2         5         9.3
3         4         1.2
I want Column B to cover a range from 1-10, with the results for the newly added rows being an average based on the existing values for that Column A and Column B. So this is my desired table:
Column A  Column B  Result
1         1         2.6
          2         2.75
          3         2.75
          4         2.9
          5         6.025
2         1         5.95
          2         9.3
          3         9.3
...
Hopefully the point is getting across. I know the averaging part is pretty confusing, so I would settle for just being able to fill in the missing values of my desired range. I appreciate the help!
You need to reindex with a new index created by MultiIndex.from_product, then groupby the first level (Column A) and fillna with the mean per group:
import numpy as np
import pandas as pd

df = df.groupby(['Column A','Column B'])['Result'].mean()
# build the full grid of (Column A, Column B) pairs; use np.arange(1, 11) if Column B should run 1-10
mux = pd.MultiIndex.from_product([df.index.get_level_values(0).unique(),
                                  np.arange(1, 10)], names=('Column A','Column B'))
df = df.reindex(mux)
# fill the newly introduced NaNs with each Column A group's mean
df = df.groupby(level='Column A').apply(lambda x: x.fillna(x.mean()))
print (df)
Column A  Column B
1         1           2.60
          2           2.75
          3           2.75
          4           2.90
          5           2.75
          6           2.75
          7           2.75
          8           2.75
          9           2.75
2         1           9.30
          2           9.30
          3           9.30
          4           9.30
          5           9.30
          6           9.30
          7           9.30
          8           9.30
          9           9.30
3         1           1.20
          2           1.20
          3           1.20
          4           1.20
          5           1.20
          6           1.20
          7           1.20
          8           1.20
          9           1.20
Name: Result, dtype: float64
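If you prefer to avoid the lambda, the final fill step can also be written with GroupBy.transform, which broadcasts each Column A group's mean back onto the full index so fillna can align on it directly (a sketch of the same idea, applied right after the reindex step above):
df = df.fillna(df.groupby(level='Column A').transform('mean'))
print (df.reset_index())   # back to a flat DataFrame with columns Column A, Column B, Result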
Using pandas, I'd like to use groupby with an aggregate function, e.g. mean,
and then put the results back in the original dataframe, but in the next group and not in the group itself. How can I do this in a vectorized way?
I have a pandas dataframe like this:
import pandas as pd

data = {'Group': ['A', 'A', 'B', 'B', 'B', 'B', 'C', 'C', 'D', 'D'],
        'Value': [1.1, 1.3, 9.1, 9.2, 9.5, 9.4, 6.2, 6.4, 2.2, 2.3]}
df = pd.DataFrame(data, columns=['Group', 'Value'])
print (df)
Group Value
0 A 1.1
1 A 1.3
2 B 9.1
3 B 9.2
4 B 9.5
5 B 9.4
6 C 6.2
7 C 6.4
8 D 2.2
9 D 2.3
I'd like to get this, where each group gets the mean value of the previous group.
Group Value
0 A NaN
1 A NaN
2 B 1.2
3 B 1.2
4 B 1.2
5 B 1.2
6 C 9.3
7 C 9.3
8 D 6.3
9 D 6.3
I tried this, but it is missing the shift to the next group:
df.groupby('Group')['Value'].transform('mean')
Easy, use map on a groupby result:
df['Value'] = df['Group'].map(df.groupby('Group')['Value'].mean().shift())
df
Group Value
0 A NaN
1 A NaN
2 B 1.2
3 B 1.2
4 B 1.2
5 B 1.2
6 C 9.3
7 C 9.3
8 D 6.3
9 D 6.3
How It Works
Get the mean
df.groupby('Group')['Value'].mean()
Group
A 1.20
B 9.30
C 6.30
D 2.25
Name: Value, dtype: float64
Shift it down by 1
df.groupby('Group')['Value'].mean().shift()
Group
A NaN
B 1.2
C 9.3
D 6.3
Name: Value, dtype: float64
Map it back.
df['Group'].map(df.groupby('Group')['Value'].mean().shift())
0 NaN
1 NaN
2 1.2
3 1.2
4 1.2
5 1.2
6 9.3
7 9.3
8 6.3
9 6.3
Name: Group, dtype: float64
You can calculate the aggregated GroupBy.mean of each group, shift it down with pd.Series.shift, and take advantage of pandas index alignment.
df.set_index('Group').assign(value=df.groupby('Group')['Value'].mean().shift()).reset_index()
Group Value value
0 A 1.1 NaN
1 A 1.3 NaN
2 B 9.1 1.2
3 B 9.2 1.2
4 B 9.5 1.2
5 B 9.4 1.2
6 C 6.2 9.3
7 C 6.4 9.3
8 D 2.2 6.3
9 D 2.3 6.3
I am trying to calculate rolling averages within groups. For this task I want a rolling average of the rows above, so I thought the easiest way would be to use shift() and then do rolling(). The problem is that shift() shifts in data from the previous group, which makes the first rows of groups 2 and 3 incorrect. Column 'ma' should have NaN in rows 4 and 7. How can I achieve this?
import pandas as pd

df = pd.DataFrame(
    {"Group": [1, 2, 3, 1, 2, 3, 1, 2, 3],
     "Value": [2.5, 2.9, 1.6, 9.1, 5.7, 8.2, 4.9, 3.1, 7.5]})
df = df.sort_values(['Group'])
df.reset_index(inplace=True)
df['ma'] = df.groupby('Group', as_index=False)['Value'].shift(1).rolling(3, min_periods=1).mean()
print(df)
I get this:
index Group Value ma
0 0 1 2.5 NaN
1 3 1 9.1 2.50
2 6 1 4.9 5.80
3 1 2 2.9 5.80
4 4 2 5.7 6.00
5 7 2 3.1 4.30
6 2 3 1.6 4.30
7 5 3 8.2 3.65
8 8 3 7.5 4.90
I tried answers from a couple of similar questions but nothing seems to work.
If I understand the question correctly, then the solution you require can be achieved in 2 steps using the following:
df['sa'] = df.groupby('Group', as_index=False)['Value'].transform(lambda x: x.shift(1))
df['ma'] = df.groupby('Group', as_index=False)['sa'].transform(lambda x: x.rolling(3, min_periods=1).mean())
I get the output below, where 'ma' is the desired column:
index Group Value sa ma
0 0 1 2.5 NaN NaN
1 3 1 9.1 2.5 2.5
2 6 1 4.9 9.1 5.8
3 1 2 2.9 NaN NaN
4 4 2 5.7 2.9 2.9
5 7 2 3.1 5.7 4.3
6 2 3 1.6 NaN NaN
7 5 3 8.2 1.6 1.6
8 8 3 7.5 8.2 4.9
Edit: Example with one groupby
def shift_ma(x):
    return x.shift(1).rolling(3, min_periods=1).mean()

df['ma'] = df.groupby('Group', as_index=False)['Value'].apply(shift_ma).reset_index(drop=True)
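The same shift-then-roll logic can also be expressed as a single transform call on the grouped column, which keeps the original index alignment and avoids the helper function (a minimal sketch of the same idea):
df['ma'] = df.groupby('Group')['Value'].transform(
    lambda x: x.shift(1).rolling(3, min_periods=1).mean())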
Does anyone know how to iterate over a pandas DataFrame taking two columns per iteration?
Say I have
a b c d
5.1 3.5 1.4 0.2
4.9 3.0 1.4 0.2
4.7 3.2 1.3 0.2
4.6 3.1 1.5 0.2
5.0 3.6 1.4 0.2
5.4 3.9 1.7 0.4
So something like
for x, y in ...:
correlation of x and y
So output will be
corr_ab corr_bc corr_cd
0.1 0.3 -0.4
You can use zip with slicing to pair up adjacent columns, build a dictionary of one-element lists with Series.corr and f-strings for the column names, and pass it to the DataFrame constructor:
d = {f'corr_{col1}{col2}': [df[col1].corr(df[col2])]
     for col1, col2 in zip(df.columns, df.columns[1:])}
df = pd.DataFrame(d)
print (df)
corr_ab corr_bc corr_cd
0 0.860108 0.61333 0.888523
You can use df.corr to get the correlation matrix of the dataframe. You then use mask to drop the repeated (mirror-image) correlations. After that you can stack the new dataframe to make it more readable. Assuming you have data like this:
0 1 2 3 4
0 11 6 17 2 3
1 3 12 16 17 5
2 13 2 11 10 0
3 8 12 13 18 3
4 4 3 1 0 18
Finding the correlation,
dataCorr = data.corr(method='pearson')
We get,
0 1 2 3 4
0 1.000000 -0.446023 0.304108 -0.136610 -0.674082
1 -0.446023 1.000000 0.563112 0.773013 -0.258801
2 0.304108 0.563112 1.000000 0.494512 -0.823883
3 -0.136610 0.773013 0.494512 1.000000 -0.545530
4 -0.674082 -0.258801 -0.823883 -0.545530 1.000000
Masking out repeated correlations,
import numpy as np

# np.bool is deprecated in recent NumPy; plain bool works across versions
dataCorr = dataCorr.mask(np.tril(np.ones(dataCorr.shape)).astype(bool))
We get
0 1 2 3 4
0 NaN -0.446023 0.304108 -0.136610 -0.674082
1 NaN NaN 0.563112 0.773013 -0.258801
2 NaN NaN NaN 0.494512 -0.823883
3 NaN NaN NaN NaN -0.545530
4 NaN NaN NaN NaN NaN
Stacking the correlated data
dataCorr = dataCorr.stack().reset_index()
The stacked data will look as shown
level_0 level_1 0
0 0 1 -0.446023
1 0 2 0.304108
2 0 3 -0.136610
3 0 4 -0.674082
4 1 2 0.563112
5 1 3 0.773013
6 1 4 -0.258801
7 2 3 0.494512
8 2 4 -0.823883
9 3 4 -0.545530
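If you want friendlier column names on the stacked result, you can rename them and, say, sort by absolute correlation (a small follow-up sketch; the names below are arbitrary, not from the original answer):
dataCorr.columns = ['var1', 'var2', 'corr']
dataCorr = dataCorr.sort_values('corr', key=abs, ascending=False)  # strongest correlations first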
I need to interpolate a multi-index dataframe.
For example, this is the main dataframe:
a b c result
1 1 1 6
1 1 2 9
1 2 1 8
1 2 2 11
2 1 1 7
2 1 2 10
2 2 1 9
2 2 2 12
I need to find the result for:
1.3 1.7 1.55
What I've been doing so far is appending a pd.Series with NaN
for each index individually.
As you can see, this seems like a VERY inefficient way.
I would be happy if someone could enlighten me.
P.S.
I spent some time looking over SO, and if the answer is in there, I missed it:
Fill multi-index Pandas DataFrame with interpolation
Resampling Within a Pandas MultiIndex
pandas multiindex dataframe, ND interpolation for missing values
Algorithm:
stage 1:
a b c result
1 1 1 6
1 1 2 9
1 2 1 8
1 2 2 11
1.3 1 1 6.3
1.3 1 2 9.3
1.3 2 1 8.3
1.3 2 2 11.3
2 1 1 7
2 1 2 10
2 2 1 9
2 2 2 12
stage 2:
a b c result
1 1 1 6
1 1 2 9
1 2 1 8
1 2 2 11
1.3 1 1 6.3
1.3 1 2 9.3
1.3 1.7 1 7.7
1.3 1.7 2 10.7
1.3 2 1 8.3
1.3 2 2 11.3
2 1 1 7
2 1 2 10
2 2 1 9
2 2 2 12
stage 3:
a b c result
1 1 1 6
1 1 2 9
1 2 1 8
1 2 2 11
1.3 1 1 6.3
1.3 1 2 9.3
1.3 1.7 1 7.7
1.3 1.7 1.55 9.35
1.3 1.7 2 10.7
1.3 2 1 8.3
1.3 2 2 11.3
2 1 1 7
2 1 2 10
2 2 1 9
2 2 2 12
You can use scipy.interpolate.LinearNDInterpolator to do what you want. If the dataframe has a MultiIndex built from the columns 'a', 'b' and 'c', then:
from scipy.interpolate import LinearNDInterpolator as lNDI
print (lNDI(points=df.index.to_frame().values, values=df.result.values)([1.3, 1.7, 1.55]))
Now, if you have a dataframe whose index holds all the tuples (a, b, c) you want to calculate, you can do, for example:
import pandas as pd

def pd_interpolate_MI(df_input, df_toInterpolate):
    from scipy.interpolate import LinearNDInterpolator as lNDI
    # create the interpolation function from the known points
    func_interp = lNDI(points=df_input.index.to_frame().values, values=df_input.result.values)
    # calculate the values for the unknown index
    df_toInterpolate['result'] = func_interp(df_toInterpolate.index.to_frame().values)
    # return the dataframe with the new values merged in and sorted
    return pd.concat([df_input, df_toInterpolate]).sort_index()
Then, for example, with your df and df_toI = pd.DataFrame(index=pd.MultiIndex.from_tuples([(1.3, 1.7, 1.55), (1.7, 1.4, 1.9)], names=df.index.names)),
then you get
print (pd_interpolate_MI(df, df_toI))
               result
a   b   c
1.0 1.0 1.00     6.00
        2.00     9.00
    2.0 1.00     8.00
        2.00    11.00
1.3 1.7 1.55     9.35
1.7 1.4 1.90    10.20
2.0 1.0 1.00     7.00
        2.00    10.00
    2.0 1.00     9.00
        2.00    12.00
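For completeness, here is one way to go from the flat table in the question to the call above. The DataFrame construction below just mirrors the sample data; the set_index step is an assumption about how your data is stored, not part of the original answer:
import pandas as pd

df = pd.DataFrame({'a': [1, 1, 1, 1, 2, 2, 2, 2],
                   'b': [1, 1, 2, 2, 1, 1, 2, 2],
                   'c': [1, 2, 1, 2, 1, 2, 1, 2],
                   'result': [6, 9, 8, 11, 7, 10, 9, 12]}).set_index(['a', 'b', 'c'])

df_toI = pd.DataFrame(index=pd.MultiIndex.from_tuples([(1.3, 1.7, 1.55), (1.7, 1.4, 1.9)],
                                                      names=df.index.names))
print(pd_interpolate_MI(df, df_toI))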
I would like to find the intersection of 2 pandas DataFrames according to the 2 columns 'x' and 'y' and combine them into 1 DataFrame. The data are:
df[1]:
x y id fa
0 4 5 9283222 3.1
1 4 5 9283222 3.1
2 10 12 9224221 3.2
3 4 5 9284332 1.2
4 6 1 51249 11.2
df[2]:
x y id fa
0 4 5 19283222 1.1
1 9 3 39224221 5.2
2 10 12 29284332 6.2
3 6 1 51242 5.2
4 6 2 51241 9.2
5 1 1 51241 9.2
The expected output is something like this (the index can be ignored):
x y id fa
0 4 5 9283222 3.1
1 4 5 9283222 3.1
2 10 12 9224221 3.2
3 4 5 9284332 1.2
4 6 1 51249 11.2
0 4 5 19283222 1.1
2 10 12 29284332 6.2
3 6 1 51242 5.2
Thank you very much!
You can find the intersection by joining the x, y columns of df1 and df2, filter df1 and df2 against it with an inner join, and then concatenate the two results with pd.concat:
intersection = df1[['x', 'y']].merge(df2[['x', 'y']]).drop_duplicates()
pd.concat([df1.merge(intersection), df2.merge(intersection)])
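For a self-contained check, here is the same two-liner run against the sample data from the question (df1 and df2 correspond to df[1] and df[2]):
import pandas as pd

df1 = pd.DataFrame({'x': [4, 4, 10, 4, 6],
                    'y': [5, 5, 12, 5, 1],
                    'id': [9283222, 9283222, 9224221, 9284332, 51249],
                    'fa': [3.1, 3.1, 3.2, 1.2, 11.2]})
df2 = pd.DataFrame({'x': [4, 9, 10, 6, 6, 1],
                    'y': [5, 3, 12, 1, 2, 1],
                    'id': [19283222, 39224221, 29284332, 51242, 51241, 51241],
                    'fa': [1.1, 5.2, 6.2, 5.2, 9.2, 9.2]})

# common (x, y) pairs, then keep only matching rows from each frame
intersection = df1[['x', 'y']].merge(df2[['x', 'y']]).drop_duplicates()
print(pd.concat([df1.merge(intersection), df2.merge(intersection)]))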
The simplest solution (this gives the intersection of the column names):
df1.columns.intersection(df2.columns)