Is there a way to apply a function over a moving window centred on the current row? For example:
>>> df = pd.DataFrame({'A': {0: 'a', 1: 'b', 2: 'c'},
... 'B': {0: 1, 1: 3, 2: 5},
... 'C': {0: 2, 1: 4, 2: 6}})
>>> df
   A  B  C
0  a  1  2
1  b  3  4
2  c  5  6
The desired result is a new column D holding the average of the values of column C over the previous, current and following rows, that is:
row 0 => D = (2 + 4)/2 = 3
row 1 => D = (2 + 4 + 6)/3 = 4
row 2 => D = (4 + 6)/2 = 5
>>> df_final
   A  B  C  D
0  a  1  2  3
1  b  3  4  4
2  c  5  6  5
It looks like you just want a rolling mean, with a centred window of 3. For example:
>>> df["D"] = pd.rolling_mean(df["C"], window=3, center=True, min_periods=2)
>>> df
A B C D
0 a 1 2 3
1 b 3 4 4
2 c 5 6 5
Updated answer: pd.rolling_mean was deprecated in 0.18 and is no longer available as of pandas 0.23.4.
Window functions are now methods
Window functions have been refactored into methods on Series/DataFrame objects rather than top-level functions, which are deprecated. This gives the window functions an API similar to that of .groupby.
The rolling method can either be called on the Series/DataFrame directly:
In [55]: df['D'] = df['C'].rolling(window=3, center=True, min_periods=2).mean()
In [56]: df
Out[56]:
A B C D
0 a 1 2 3.0
1 b 3 4 4.0
2 c 5 6 5.0
Or, less idiomatically, via the internal pandas.core.window.Rolling class:
In [57]: df['D'] = pd.core.window.Rolling(df['C'], window=3, center=True, min_periods=2).mean()
In [58]: df
Out[58]:
A B C D
0 a 1 2 3.0
1 b 3 4 4.0
2 c 5 6 5.0
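Worth noting in either API: min_periods is what keeps the edge rows from turning into NaN. A minimal sketch of the difference, using just the C column from the question:

```python
import pandas as pd

df = pd.DataFrame({'C': [2, 4, 6]})

# By default min_periods equals the window size, so the edge rows,
# which only see 2 of the 3 rows, come out as NaN:
strict = df['C'].rolling(window=3, center=True).mean()
print(strict.tolist())   # [nan, 4.0, nan]

# min_periods=2 lets those partial edge windows count:
relaxed = df['C'].rolling(window=3, center=True, min_periods=2).mean()
print(relaxed.tolist())  # [3.0, 4.0, 5.0]
```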
I have a table like the one below
Column 1 Column 2 Column 3 ...
0 a 1 2
1 b 1 3
2 c 2 1
and I want to convert it to look like this:
Column 1 Column 2
0 a 1
1 a 2
2 b 1
3 b 3
4 c 2
5 c 1
...
I want to take each value from Column 2 (and so on) and pair it with the value in Column 1. I have no idea how to do this in pandas, or even where to start.
You can use pd.melt to do this:
>>> df = pd.DataFrame({'A': {0: 'a', 1: 'b', 2: 'c'},
... 'B': {0: 1, 1: 3, 2: 5},
... 'C': {0: 2, 1: 4, 2: 6}})
>>> df
A B C
0 a 1 2
1 b 3 4
2 c 5 6
>>> pd.melt(df, id_vars=['A'], value_vars=['B', 'C'])
A variable value
0 a B 1
1 b B 3
2 c B 5
3 a C 2
4 b C 4
5 c C 6
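To map this onto the asker's actual Column 1/Column 2/Column 3 layout, something like the following sketch should work (the column names are assumed from the example above; the stable mergesort keeps each key's Column 2 value ahead of its Column 3 value):

```python
import pandas as pd

# Hypothetical frame mirroring the asker's layout
df = pd.DataFrame({'Column 1': ['a', 'b', 'c'],
                   'Column 2': [1, 1, 2],
                   'Column 3': [2, 3, 1]})

out = (pd.melt(df, id_vars=['Column 1'])
         .sort_values('Column 1', kind='mergesort')  # stable: preserves melt's row order per key
         .drop(columns='variable')
         .rename(columns={'value': 'Column 2'})
         .reset_index(drop=True))
print(out)
```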
Here's my approach, hope it helps:
import pandas as pd

df = pd.DataFrame({'col1': ['a', 'b', 'c'], 'col2': [1, 1, 2], 'col3': [2, 3, 1]})
new_df = pd.DataFrame(columns=['col1', 'col2'])
for index, row in df.iterrows():
    # pair every value after the first column with that row's col1 value
    for element in row.values[1:]:
        new_df.loc[len(new_df)] = [row.iloc[0], element]
print(new_df)
Output:
col1 col2
0 a 1
1 a 2
2 b 1
3 b 3
4 c 2
5 c 1
I'm conducting an experiment (using Python 2.7, pandas 0.23.4) where I have three levels of a stimulus {a, b, c} and present all the different combinations to participants, who have to choose which one was rougher (example: Stimulus 1 = a, Stimulus 2 = b; the participant chooses 1, indicating Stimulus 1 was rougher).
After the experiment, I have a data frame with three columns like this:
import pandas as pd
d = {'Stim1': ['a', 'b', 'a', 'c', 'b', 'c'],
'Stim2': ['b', 'a', 'c', 'a', 'c', 'b'],
'Answer': [1, 2, 2, 1, 2, 1]}
df = pd.DataFrame(d)
Stim1 Stim2 Answer
0 a b 1
1 b a 2
2 a c 2
3 c a 1
4 b c 2
5 c b 1
For my analysis, the order in which the stimuli came doesn't matter: Stim1 = a, Stim2 = b is the same as Stim1 = b, Stim2 = a. I'm trying to figure out how I can swap Stim1 and Stim2 and flip their Answer, to get this:
Stim1 Stim2 Answer
0 a b 1
1 a b 1
2 a c 2
3 a c 2
4 b c 2
5 b c 2
I read that np.where can be used, but it would do one thing at a time, where I want to do two (swap and flip).
Is there some way to use another function to do swap and flip at the same time?
Can you try if this works for you?
import pandas as pd
import numpy as np
df = pd.DataFrame(d)
# keep a copy of the original Stim1 column
s = df['Stim1'].copy()
# sort the values
df[['Stim1', 'Stim2']] = np.sort(df[['Stim1', 'Stim2']].values)
# exchange the Answer if the order has changed
df['Answer'] = df['Answer'].where(df['Stim1'] == s, df['Answer'].replace({1:2,2:1}))
output:
Stim1 Stim2 Answer
0 a b 1
1 a b 1
2 a c 2
3 a c 2
4 b c 2
5 b c 2
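Since the question mentions np.where: it can in fact do the swap and the flip in one pass. A sketch re-using the frame from the question:

```python
import pandas as pd
import numpy as np

# Frame from the question
d = {'Stim1': ['a', 'b', 'a', 'c', 'b', 'c'],
     'Stim2': ['b', 'a', 'c', 'a', 'c', 'b'],
     'Answer': [1, 2, 2, 1, 2, 1]}
df = pd.DataFrame(d)

swap = df['Stim1'] > df['Stim2']  # rows whose stimuli are out of order

# Both columns read from the originals because the right-hand side
# is evaluated before anything is written back:
df['Stim1'], df['Stim2'] = (np.where(swap, df['Stim2'], df['Stim1']),
                            np.where(swap, df['Stim1'], df['Stim2']))
# 3 - answer maps 1 -> 2 and 2 -> 1 on the swapped rows
df['Answer'] = np.where(swap, 3 - df['Answer'], df['Answer'])
print(df)
```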
You can start by building a boolean series that indicates which rows should be swapped or not:
>>> swap = df['Stim1'] > df['Stim2']
>>> swap
0 False
1 True
2 False
3 True
4 False
5 True
dtype: bool
Then build the fully swapped dataframe as follows:
>>> swapped_df = pd.concat([
... df['Stim1'].rename('Stim2'),
... df['Stim2'].rename('Stim1'),
... 3 - df['Answer'],
... ], axis='columns')
>>> swapped_df
Stim2 Stim1 Answer
0 a b 2
1 b a 1
2 a c 1
3 c a 2
4 b c 1
5 c b 2
Finally, use .mask() to select initial rows or swapped rows:
>>> df.mask(swap, swapped_df)
Stim1 Stim2 Answer
0 a b 1
1 a b 1
2 a c 2
3 a c 2
4 b c 2
5 b c 2
NB: .mask is the mirror image of .where — it replaces the rows where the series is True instead of keeping them. The following is exactly equivalent (up to column order):
>>> swapped_df.where(swap, df)
Stim2 Stim1 Answer
0 b a 1
1 b a 1
2 c a 2
3 c a 2
4 c b 2
5 c b 2
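To make that equivalence concrete, here is a quick sanity sketch (re-using the frame and the swap logic from above) checking that the .mask and .where calls produce the same values:

```python
import pandas as pd

d = {'Stim1': ['a', 'b', 'a', 'c', 'b', 'c'],
     'Stim2': ['b', 'a', 'c', 'a', 'c', 'b'],
     'Answer': [1, 2, 2, 1, 2, 1]}
df = pd.DataFrame(d)

swap = df['Stim1'] > df['Stim2']
swapped_df = pd.concat([df['Stim1'].rename('Stim2'),
                        df['Stim2'].rename('Stim1'),
                        3 - df['Answer']], axis='columns')

a = df.mask(swap, swapped_df)    # keep df rows where swap is False
b = swapped_df.where(swap, df)   # keep swapped_df rows where swap is True
# Same values, only the column order differs:
assert a.values.tolist() == b[a.columns].values.tolist()
```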
Recently I have been learning groupby and stack, and I encountered a pandas method called melt. I would like to know how to achieve the same result given by melt using groupby and stack.
Here is the MWE:
import numpy as np
import pandas as pd
df = pd.DataFrame({'A': [1, 1, 1, 2, 2],
'B': [1, 1, 2, 2, 1],
'C': [10, 20, 30, 40, 50],
'D': ['X', 'Y', 'X', 'Y', 'Y']})
df1 = pd.melt(df, id_vars='A',value_vars=['B','C'],var_name='variable',value_name='value')
print(df1)
A variable value
0 1 B 1
1 1 B 1
2 1 B 2
3 2 B 2
4 2 B 1
5 1 C 10
6 1 C 20
7 1 C 30
8 2 C 40
9 2 C 50
How can I get the same result using groupby and stack?
My attempt
df.groupby('A')[['B','C']].count().stack(0).reset_index()
This is not quite correct; I'm looking for suggestions.
I guess you do not need groupby, just stack + sort_values:
result = df[['A', 'B', 'C']].set_index('A').stack().reset_index().sort_values(by='level_1', kind='mergesort')  # stable sort keeps the original row order within each variable
result.columns = ['A', 'variable', 'value']
Output
A variable value
0 1 B 1
2 1 B 1
4 1 B 2
6 2 B 2
8 2 B 1
1 1 C 10
3 1 C 20
5 1 C 30
7 2 C 40
9 2 C 50
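If you want the stack result to match melt exactly, index included, a small extension of the same idea works; the final reset_index here is an addition to reproduce melt's clean 0..9 index:

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 1, 1, 2, 2],
                   'B': [1, 1, 2, 2, 1],
                   'C': [10, 20, 30, 40, 50]})

result = (df.set_index('A')[['B', 'C']]
            .stack()
            .reset_index()
            .sort_values('level_1', kind='mergesort')  # stable: keeps row order per variable
            .reset_index(drop=True))
result.columns = ['A', 'variable', 'value']

expected = pd.melt(df, id_vars='A', value_vars=['B', 'C'])
assert result.equals(expected)
```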
Given the following data frame:
import pandas as pd
import numpy as np
df=pd.DataFrame({'A':['A','A','A','B','B','B'],
'B':['a','a','b','a','a','a'],
})
df
A B
0 A a
1 A a
2 A b
3 B a
4 B a
5 B a
I'd like to create column 'C', which numbers the rows within each group in columns A and B like this:
A B C
0 A a 1
1 A a 2
2 A b 1
3 B a 1
4 B a 2
5 B a 3
I've tried this so far:
df['C']=df.groupby(['A','B'])['B'].transform('rank')
...but it doesn't work!
Use groupby/cumcount:
In [25]: df['C'] = df.groupby(['A','B']).cumcount()+1; df
Out[25]:
A B C
0 A a 1
1 A a 2
2 A b 1
3 B a 1
4 B a 2
5 B a 3
Use the groupby.rank function.
Here is a working example.
df = pd.DataFrame({'C1': ['a', 'a', 'a', 'b', 'b'], 'C2': [1, 2, 3, 4, 5]})
df
  C1  C2
0  a   1
1  a   2
2  a   3
3  b   4
4  b   5
df["RANK"] = df.groupby("C1")["C2"].rank(method="first", ascending=True)
df
  C1  C2  RANK
0  a   1   1.0
1  a   2   2.0
2  a   3   3.0
3  b   4   1.0
4  b   5   2.0
Consider this:
df = pd.DataFrame({'B': ['a', 'a', 'b', 'b'], 'C': [1, 2, 6,2]})
df
Out[128]:
B C
0 a 1
1 a 2
2 b 6
3 b 2
I want to create a variable that corresponds to the ordering of observations after sorting by 'C' within each groupby('B') group. The desired result (shown after df.sort_values(['B','C'])) is:
Out[129]:
B C order
0 a 1 1
1 a 2 2
3 b 2 1
2 b 6 2
How can I do that? I'm thinking about creating a column of ones and using cumsum, but that seems too clunky...
I think you can use range with len(df):
import pandas as pd
df = pd.DataFrame({'A': [1, 2, 3],
'B': ['a', 'a', 'b'],
'C': [5, 3, 2]})
print(df)
A B C
0 1 a 5
1 2 a 3
2 3 b 2
df.sort_values(by='C', inplace=True)
#or without inplace
#df = df.sort_values(by='C')
print(df)
A B C
2 3 b 2
1 2 a 3
0 1 a 5
df['order'] = range(1,len(df)+1)
print(df)
A B C order
2 3 b 2 1
1 2 a 3 2
0 1 a 5 3
EDIT by comment:
I think you can use groupby with cumcount:
import pandas as pd
df = pd.DataFrame({'B': ['a', 'a', 'b', 'b'], 'C': [1, 2, 6,2]})
df.sort_values(['B','C'], inplace=True)
#or without inplace
#df = df.sort_values(['B','C'])
print(df)
B C
0 a 1
1 a 2
3 b 2
2 b 6
df['order'] = df.groupby('B', sort=False).cumcount() + 1
print(df)
B C order
0 a 1 1
1 a 2 2
3 b 2 1
2 b 6 2
Nothing wrong with Jezrael's answer, but there's a simpler (though less general) method for this particular example: just add groupby to JohnGalt's suggestion of using rank.
>>> df['order'] = df.groupby('B')['C'].rank()
B C order
0 a 1 1.0
1 a 2 2.0
2 b 6 2.0
3 b 2 1.0
In this case you don't really need the ['C'], but it makes the ranking a little more explicit, and if you had other unrelated columns in the dataframe you would need it.
If you are ranking by more than one column, though, you should use Jezrael's method.
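One tie-related caveat: with duplicate values inside a group, the default rank averages the ties, so the result is no longer a clean 1..n sequence the way cumcount produces. A small sketch:

```python
import pandas as pd

# One group with a tied pair of values
df = pd.DataFrame({'B': ['b', 'b', 'b'], 'C': [2, 2, 6]})

# The default method='average' shares the rank between the tied rows:
print(df.groupby('B')['C'].rank().tolist())                # [1.5, 1.5, 3.0]

# method='first' breaks ties by position, like cumcount would:
print(df.groupby('B')['C'].rank(method='first').tolist())  # [1.0, 2.0, 3.0]
```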