pandas - Apply mean to a specific row in grouped dataframe [duplicate] - python
This should be straightforward, but the closest thing I've found is this post:
pandas: Filling missing values within a group, and I still can't solve my problem....
Suppose I have the following dataframe
import pandas as pd
import numpy as np

df = pd.DataFrame({'value': [1, np.nan, np.nan, 2, 3, 1, 3, np.nan, 3],
                   'name': ['A', 'A', 'B', 'B', 'B', 'B', 'C', 'C', 'C']})
name value
0 A 1
1 A NaN
2 B NaN
3 B 2
4 B 3
5 B 1
6 C 3
7 C NaN
8 C 3
and I'd like to fill in the NaN values with the mean value within each "name" group, i.e.
name value
0 A 1
1 A 1
2 B 2
3 B 2
4 B 3
5 B 1
6 C 3
7 C 3
8 C 3
I'm not sure where to go after:
grouped = df.groupby('name').mean()
Thanks a bunch.
One way would be to use transform:
>>> df
name value
0 A 1
1 A NaN
2 B NaN
3 B 2
4 B 3
5 B 1
6 C 3
7 C NaN
8 C 3
>>> df["value"] = df.groupby("name").transform(lambda x: x.fillna(x.mean()))
>>> df
name value
0 A 1
1 A 1
2 B 2
3 B 2
4 B 3
5 B 1
6 C 3
7 C 3
8 C 3
fillna + groupby + transform + mean
This seems intuitive:
df['value'] = df['value'].fillna(df.groupby('name')['value'].transform('mean'))
The groupby + transform syntax maps the groupwise mean onto the index of the original dataframe. This is roughly equivalent to @DSM's solution, but it avoids the need to define an anonymous lambda function.
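As a small illustrative sketch using the question's dataframe, the transform call returns a Series of group means aligned to the original nine-row index, which fillna then consumes row by row:
import pandas as pd
import numpy as np

df = pd.DataFrame({'value': [1, np.nan, np.nan, 2, 3, 1, 3, np.nan, 3],
                   'name': ['A', 'A', 'B', 'B', 'B', 'B', 'C', 'C', 'C']})

# Per-group means broadcast back to the original index
group_means = df.groupby('name')['value'].transform('mean')
print(group_means.tolist())  # [1.0, 1.0, 2.0, 2.0, 2.0, 2.0, 3.0, 3.0, 3.0]

# fillna only touches the rows that were NaN
df['value'] = df['value'].fillna(group_means)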
@DSM has IMO the right answer, but I'd like to share my generalization and optimization of the question: multiple columns to group by and multiple value columns:
df = pd.DataFrame(
{
'category': ['X', 'X', 'X', 'X', 'X', 'X', 'Y', 'Y', 'Y'],
'name': ['A','A', 'B','B','B','B', 'C','C','C'],
'other_value': [10, np.nan, np.nan, 20, 30, 10, 30, np.nan, 30],
'value': [1, np.nan, np.nan, 2, 3, 1, 3, np.nan, 3],
}
)
... gives ...
category name other_value value
0 X A 10.0 1.0
1 X A NaN NaN
2 X B NaN NaN
3 X B 20.0 2.0
4 X B 30.0 3.0
5 X B 10.0 1.0
6 Y C 30.0 3.0
7 Y C NaN NaN
8 Y C 30.0 3.0
In this generalized case we would like to group by category and name, and impute only on value.
This can be solved as follows:
df['value'] = df.groupby(['category', 'name'])['value']\
.transform(lambda x: x.fillna(x.mean()))
Notice the column list in the group-by clause, and that we select the value column right after the group-by. This makes the transformation run only on that particular column. You could add the column selection at the end, but then you would run the transformation for all columns only to throw away all but one measure column afterwards. A standard SQL query planner might have been able to optimize this, but pandas (0.19.2) doesn't seem to do it.
A performance test, increasing the dataset by doing ...
big_df = None
for _ in range(10000):
if big_df is None:
big_df = df.copy()
else:
big_df = pd.concat([big_df, df])
df = big_df
... confirms that this increases the speed in proportion to how many columns you don't have to impute:
import pandas as pd
from datetime import datetime
def generate_data():
...
t = datetime.now()
df = generate_data()
df['value'] = df.groupby(['category', 'name'])['value']\
.transform(lambda x: x.fillna(x.mean()))
print(datetime.now()-t)
# 0:00:00.016012
t = datetime.now()
df = generate_data()
df["value"] = df.groupby(['category', 'name'])\
.transform(lambda x: x.fillna(x.mean()))['value']
print(datetime.now()-t)
# 0:00:00.030022
On a final note, you can generalize even further if you want to impute more than one column, but not all (note the double brackets: recent pandas versions require a list when selecting multiple columns after a groupby):
df[['value', 'other_value']] = df.groupby(['category', 'name'])[['value', 'other_value']]\
    .transform(lambda x: x.fillna(x.mean()))
Shortcut:
Groupby + Apply + Lambda + Fillna + Mean
>>> df['value'] = df.groupby('name')['value'].apply(lambda x: x.fillna(x.mean()))
>>> df.isnull().sum().sum()
0
This solution still works if you want to group by multiple columns to replace missing values.
>>> df = pd.DataFrame({'value': [1, np.nan, np.nan, 2, 3, np.nan,np.nan, 4, 3],
'name': ['A','A', 'B','B','B','B', 'C','C','C'],'class':list('ppqqrrsss')})
>>> df['value']=df.groupby(['name','class'])['value'].apply(lambda x:x.fillna(x.mean()))
>>> df
value name class
0 1.0 A p
1 1.0 A p
2 2.0 B q
3 2.0 B q
4 3.0 B r
5 3.0 B r
6 3.5 C s
7 4.0 C s
8 3.0 C s
I'd do it this way
df.loc[df.value.isnull(), 'value'] = df.groupby('name').value.transform('mean')
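For reference, a minimal sketch of how this plays out on the question's dataframe: the boolean mask restricts the assignment to the NaN rows, while transform('mean') supplies a Series of group means aligned to the original index.
import pandas as pd
import numpy as np

df = pd.DataFrame({'value': [1, np.nan, np.nan, 2, 3, 1, 3, np.nan, 3],
                   'name': ['A', 'A', 'B', 'B', 'B', 'B', 'C', 'C', 'C']})

mask = df.value.isnull()
df.loc[mask, 'value'] = df.groupby('name').value.transform('mean')
print(df.loc[mask, 'value'].tolist())  # [1.0, 2.0, 3.0]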
The featured, highly-ranked answer only works for a pandas DataFrame with just two columns. If you have more columns, use instead:
df['Crude_Birth_rate'] = df.groupby("continent").Crude_Birth_rate.transform(
lambda x: x.fillna(x.mean()))
To summarize all of the above concerning the efficiency of the possible solutions:
I have a dataset with 97,906 rows and 48 columns.
I want to fill 4 columns with the median of each group.
The column I want to group by has 26,200 groups.
The first solution
start = time.time()
x = df_merged[continuous_variables].fillna(df_merged.groupby('domain_userid')[continuous_variables].transform('median'))
print(time.time() - start)
0.10429811477661133 seconds
The second solution
start = time.time()
for col in continuous_variables:
df_merged.loc[df_merged[col].isnull(), col] = df_merged.groupby('domain_userid')[col].transform('median')
print(time.time() - start)
0.5098445415496826 seconds
I only ran the next solution on a subset, since it was taking too long.
start = time.time()
for col in continuous_variables:
x = df_merged.head(10000).groupby('domain_userid')[col].transform(lambda x: x.fillna(x.median()))
print(time.time() - start)
11.685635566711426 seconds
The following solution follows the same logic as above.
start = time.time()
x = df_merged.head(10000).groupby('domain_userid')[continuous_variables].transform(lambda x: x.fillna(x.median()))
print(time.time() - start)
42.630549907684326 seconds
So it's quite important to choose the right method.
Bear in mind that I noticed that once a column was not numeric, the times went up dramatically (which makes sense, as I was computing the median).
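If your value columns might include non-numeric dtypes, one guard (a sketch reusing the continuous_variables and domain_userid names from the timings above) is to keep only the numeric columns before computing the group medians:
# Sketch: restrict the imputation to numeric columns only
numeric_cols = df_merged[continuous_variables].select_dtypes(include='number').columns
df_merged[numeric_cols] = df_merged[numeric_cols].fillna(
    df_merged.groupby('domain_userid')[numeric_cols].transform('median'))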
def groupMeanValue(group):
    # transform passes each group's non-key columns as a sub-DataFrame,
    # so the 'value' column can be filled with that group's own mean
    group['value'] = group['value'].fillna(group['value'].mean())
    return group

# dft holds only the transformed 'value' column; the group key 'name' is dropped
dft = df.groupby("name").transform(groupMeanValue)
I know this is an old question, but I am quite surprised by the unanimity of apply/lambda answers here.
Generally speaking, that is the second worst thing to do after iterating rows, from a timing point of view.
What I would do here is
df.loc[df['value'].isna(), 'value'] = df.groupby('name')['value'].transform('mean')
Or using fillna
df['value'] = df['value'].fillna(df.groupby('name')['value'].transform('mean'))
I've checked with timeit (because, again, the unanimity of apply/lambda-based solutions made me doubt my instinct). And this is indeed about 2.5 times faster than the most upvoted solutions.
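For anyone who wants to reproduce that comparison, a rough timeit sketch along these lines should do (the data here is made up, and exact timings will vary with data size and pandas version):
import timeit
import numpy as np
import pandas as pd

df = pd.DataFrame({'value': np.random.choice([1.0, 2.0, 3.0, np.nan], size=100_000),
                   'name': np.random.choice(list('ABCDE'), size=100_000)})

def with_transform(d):
    # vectorized: group means broadcast back to the original index
    return d['value'].fillna(d.groupby('name')['value'].transform('mean'))

def with_apply_lambda(d):
    # apply/lambda: the fill runs once per group in Python
    return d.groupby('name')['value'].apply(lambda x: x.fillna(x.mean()))

print(timeit.timeit(lambda: with_transform(df), number=20))
print(timeit.timeit(lambda: with_apply_lambda(df), number=20))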
To fill all the numeric null values with the mean grouped by "name"
num_cols = df.select_dtypes(exclude='object').columns
df[num_cols] = df.groupby("name").transform(lambda x: x.fillna(x.mean()))
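Note that the bare groupby().transform() above returns every non-grouping column, so the assignment only lines up because "name" is the lone object column here. Selecting the numeric columns explicitly keeps both sides in sync; a minimal sketch:
# Sketch: transform only the numeric columns so the assignment always matches
num_cols = df.select_dtypes(exclude='object').columns
df[num_cols] = df.groupby("name")[num_cols].transform(lambda x: x.fillna(x.mean()))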
df.fillna(df.groupby(['name'], as_index=False).mean(), inplace=True)
Be careful with this one: fillna with a DataFrame argument aligns on index labels, and the grouped means carry a fresh 0..n_groups-1 index, so in general this does not line up with the original rows. The transform-based approaches above are the reliable way to broadcast group means back to the original index.
You can also call .apply(lambda x: x.fillna(x.mean())) on a dataframe directly, but note that without a preceding groupby this fills each column with its overall mean rather than the per-group mean.
Related
Pandas groupby diff removes column
I have a dataframe like this:
d = {'id': ['101_i','101_e','102_i','102_e'], 1: [3, 4, 5, 7], 2: [5, 9, 10, 11], 3: [8, 4, 3, 7]}
df = pd.DataFrame(data=d)
I want to subtract all rows which have the same prefix id, i.e. subtract all values of rows 101_i with 101_e or vice versa. The code I use for that is:
df['new_identifier'] = [x.upper().replace('E', '').replace('I', '').replace('_', '') for x in df['id']]
df = df.groupby('new_identifier')[df.columns[1:-1]].diff().dropna()
In the output I see that I lose the new column that I create, new_identifier. Is there a way I can retain that?
You can define a specific aggregation function (in this case np.diff for columns 1, 2, and 3) for the columns whose types you know (int or float in this case).
import numpy as np
df.groupby('new_identifier').agg({i: np.diff for i in range(1, 4)}).dropna()
Result:
                1  2  3
new_identifier
101             1  4 -4
102             2  1  4
Use Series.str.split to get the groups, DataFrame.set_axis() before the GroupBy, and then GroupBy.diff:
cols = df.columns.difference(['id'])
groups = df['id'].str.split('_').str[0]
new_df = (
    df.set_axis(groups, axis=0)
      .groupby(level=0)[cols]
      .diff()
      .dropna()
)
print(new_df)
       1    2    3
id
101  1.0  4.0 -4.0
102  2.0  1.0  4.0
Detail: the groups are
df['id'].str.split('_').str[0]
0    101
1    101
2    102
3    102
Name: id, dtype: object
Best way to add multiple lists to an existing dataframe [duplicate]
I'm trying to figure out how to add multiple columns to a pandas DataFrame simultaneously. I would like to do this in one step rather than multiple repeated steps.
import pandas as pd
import numpy as np
df = {'col_1': [0, 1, 2, 3], 'col_2': [4, 5, 6, 7]}
df = pd.DataFrame(df)
df[['column_new_1', 'column_new_2', 'column_new_3']] = [np.nan, 'dogs', 3]  # I thought this would work here...
I would have expected your syntax to work too. The problem arises because when you create new columns with the column-list syntax (df[[new1, new2]] = ...), pandas requires that the right hand side be a DataFrame (note that it doesn't actually matter if the columns of the DataFrame have the same names as the columns you are creating).
Your syntax works fine for assigning scalar values to existing columns, and pandas is also happy to assign scalar values to a new column using the single-column syntax (df[new1] = ...). So the solution is either to convert this into several single-column assignments, or to create a suitable DataFrame for the right-hand side.
Here are several approaches that will work:
import pandas as pd
import numpy as np

df = pd.DataFrame({
    'col_1': [0, 1, 2, 3],
    'col_2': [4, 5, 6, 7]
})
Then one of the following:
1) Three assignments in one, using list unpacking:
df['column_new_1'], df['column_new_2'], df['column_new_3'] = [np.nan, 'dogs', 3]
2) DataFrame conveniently expands a single row to match the index, so you can do this:
df[['column_new_1', 'column_new_2', 'column_new_3']] = pd.DataFrame([[np.nan, 'dogs', 3]], index=df.index)
3) Make a temporary data frame with new columns, then combine with the original data frame later:
df = pd.concat(
    [
        df,
        pd.DataFrame(
            [[np.nan, 'dogs', 3]],
            index=df.index,
            columns=['column_new_1', 'column_new_2', 'column_new_3']
        )
    ], axis=1
)
4) Similar to the previous, but using join instead of concat (may be less efficient):
df = df.join(pd.DataFrame(
    [[np.nan, 'dogs', 3]],
    index=df.index,
    columns=['column_new_1', 'column_new_2', 'column_new_3']
))
5) Using a dict is a more "natural" way to create the new data frame than the previous two, but the new columns will be sorted alphabetically (at least before Python 3.6 or 3.7):
df = df.join(pd.DataFrame(
    {
        'column_new_1': np.nan,
        'column_new_2': 'dogs',
        'column_new_3': 3
    }, index=df.index
))
6) Use .assign() with multiple column arguments. I like this variant on @zero's answer a lot, but like the previous one, the new columns will always be sorted alphabetically, at least with early versions of Python:
df = df.assign(column_new_1=np.nan, column_new_2='dogs', column_new_3=3)
7) This is interesting (based on https://stackoverflow.com/a/44951376/3830997), but I don't know when it would be worth the trouble:
new_cols = ['column_new_1', 'column_new_2', 'column_new_3']
new_vals = [np.nan, 'dogs', 3]
df = df.reindex(columns=df.columns.tolist() + new_cols)  # add empty cols
df[new_cols] = new_vals  # multi-column assignment works for existing cols
8) In the end it's hard to beat three separate assignments:
df['column_new_1'] = np.nan
df['column_new_2'] = 'dogs'
df['column_new_3'] = 3
Note: many of these options have already been covered in other answers: Add multiple columns to DataFrame and set them equal to an existing column, Is it possible to add several columns at once to a pandas DataFrame?, Add multiple empty columns to pandas DataFrame
You could use assign with a dict of column names and values.
In [1069]: df.assign(**{'col_new_1': np.nan, 'col2_new_2': 'dogs', 'col3_new_3': 3})
Out[1069]:
   col_1  col_2 col2_new_2  col3_new_3  col_new_1
0      0      4       dogs           3        NaN
1      1      5       dogs           3        NaN
2      2      6       dogs           3        NaN
3      3      7       dogs           3        NaN
My goal when writing Pandas is to write efficient readable code that I can chain. I won't go into why I like chaining so much here; I expound on that in my book, Effective Pandas.
I often want to add new columns in a succinct manner that also allows me to chain. My general rule is that I update or create columns using the .assign method.
To answer your question, I would use the following code:
(df
 .assign(column_new_1=np.nan,
         column_new_2='dogs',
         column_new_3=3)
)
To go a little further: I often have a dataframe that has new columns that I want to add to my dataframe. Let's assume it looks like, say... a dataframe with the three columns you want:
df2 = pd.DataFrame({'column_new_1': np.nan,
                    'column_new_2': 'dogs',
                    'column_new_3': 3},
                   index=df.index)
In this case I would write the following code:
(df
 .assign(**df2)
)
With the use of concat:
In [128]: df
Out[128]:
   col_1  col_2
0      0      4
1      1      5
2      2      6
3      3      7
In [129]: pd.concat([df, pd.DataFrame(columns=['column_new_1', 'column_new_2', 'column_new_3'])])
Out[129]:
   col_1  col_2 column_new_1 column_new_2 column_new_3
0    0.0    4.0          NaN          NaN          NaN
1    1.0    5.0          NaN          NaN          NaN
2    2.0    6.0          NaN          NaN          NaN
3    3.0    7.0          NaN          NaN          NaN
Not very sure of what you wanted to do with [np.nan, 'dogs', 3]. Maybe now set them as default values?
In [142]: df1 = pd.concat([df, pd.DataFrame(columns=['column_new_1', 'column_new_2', 'column_new_3'])])
In [143]: df1[['column_new_1', 'column_new_2', 'column_new_3']] = [np.nan, 'dogs', 3]
In [144]: df1
Out[144]:
   col_1  col_2 column_new_1 column_new_2 column_new_3
0    0.0    4.0          NaN         dogs            3
1    1.0    5.0          NaN         dogs            3
2    2.0    6.0          NaN         dogs            3
3    3.0    7.0          NaN         dogs            3
Dictionary mapping with .assign():
This is the most readable and dynamic way to assign new column(s) with value(s) when working with many of them.
import pandas as pd
import numpy as np

new_cols = ["column_new_1", "column_new_2", "column_new_3"]
new_vals = [np.nan, "dogs", 3]
# Map new columns as keys and new values as values
col_val_mapping = dict(zip(new_cols, new_vals))
# Unpack new column/new value pairs and assign them to the data frame
df = df.assign(**col_val_mapping)
If you're just trying to initialize the new column values to be empty, as you either don't know what the values are going to be or you have many new columns:
import pandas as pd
import numpy as np

new_cols = ["column_new_1", "column_new_2", "column_new_3"]
new_vals = [None for item in new_cols]
# Map new columns as keys and new values as values
col_val_mapping = dict(zip(new_cols, new_vals))
# Unpack new column/new value pairs and assign them to the data frame
df = df.assign(**col_val_mapping)
Use of list comprehension, pd.DataFrame and pd.concat:
pd.concat(
    [
        df,
        pd.DataFrame(
            [[np.nan, 'dogs', 3] for _ in range(df.shape[0])],
            df.index,
            ['column_new_1', 'column_new_2', 'column_new_3']
        )
    ], axis=1)
If adding a lot of missing columns (a, b, c, ...) with the same value (here 0), I did this:
new_cols = ["a", "b", "c"]
df[new_cols] = pd.DataFrame([[0] * len(new_cols)], index=df.index)
It's based on the second variant of the accepted answer.
Just want to point out that option 2 in @Matthias Fripp's answer:
(2) I wouldn't necessarily expect DataFrame to work this way, but it does
df[['column_new_1', 'column_new_2', 'column_new_3']] = pd.DataFrame([[np.nan, 'dogs', 3]], index=df.index)
is already documented in pandas' own documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#basics
"You can pass a list of columns to [] to select columns in that order. If a column is not contained in the DataFrame, an exception will be raised. Multiple columns can also be set in this manner. You may find this useful for applying a transform (in-place) to a subset of the columns."
You can use tuple unpacking:
df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
df['col3'], df['col4'] = 'a', 10
Result:
   col1  col2 col3  col4
0     1     3    a    10
1     2     4    a    10
If you just want to add empty new columns, reindex will do the job:
df
   col_1  col_2
0      0      4
1      1      5
2      2      6
3      3      7
df.reindex(list(df)+['column_new_1', 'column_new_2', 'column_new_3'], axis=1)
   col_1  col_2  column_new_1  column_new_2  column_new_3
0      0      4           NaN           NaN           NaN
1      1      5           NaN           NaN           NaN
2      2      6           NaN           NaN           NaN
3      3      7           NaN           NaN           NaN
Full code example:
import numpy as np
import pandas as pd

df = {'col_1': [0, 1, 2, 3], 'col_2': [4, 5, 6, 7]}
df = pd.DataFrame(df)
print('df', df, sep='\n')
print()
df = df.reindex(list(df)+['column_new_1', 'column_new_2', 'column_new_3'], axis=1)
print('''df.reindex(list(df)+['column_new_1', 'column_new_2', 'column_new_3'], axis=1)''', df, sep='\n')
Otherwise go for zero's answer with assign.
I am not comfortable using "Index" and so on... I came up with something like the below:
df.columns
Index(['A123', 'B123'], dtype='object')
df = pd.concat([df, pd.DataFrame(columns=list('CDE'))])
df.rename(columns={'C': 'C123', 'D': 'D123', 'E': 'E123'}, inplace=True)
df.columns
Index(['A123', 'B123', 'C123', 'D123', 'E123'], dtype='object')
You could instantiate the values from a dictionary if you wanted different values for each column & you don't mind making a dictionary on the line before.
>>> import pandas as pd
>>> import numpy as np
>>> df = pd.DataFrame({
...     'col_1': [0, 1, 2, 3],
...     'col_2': [4, 5, 6, 7]
... })
>>> df
   col_1  col_2
0      0      4
1      1      5
2      2      6
3      3      7
>>> cols = {
...     'column_new_1': np.nan,
...     'column_new_2': 'dogs',
...     'column_new_3': 3
... }
>>> df[list(cols)] = pd.DataFrame(data={k: [v]*len(df) for k, v in cols.items()})
>>> df
   col_1  col_2  column_new_1 column_new_2  column_new_3
0      0      4           NaN         dogs             3
1      1      5           NaN         dogs             3
2      2      6           NaN         dogs             3
3      3      7           NaN         dogs             3
Not necessarily better than the accepted answer, but it's another approach not yet listed.
import pandas as pd
df = pd.DataFrame({
    'col_1': [0, 1, 2, 3],
    'col_2': [4, 5, 6, 7]
})
df['col_3'], df['col_4'] = [df.col_1]*2

>> df
   col_1  col_2  col_3  col_4
0      0      4      0      0
1      1      5      1      1
2      2      6      2      2
3      3      7      3      3
How can I merge the columns into a single column in Python?
I want to merge 3 columns into a single column. I have tried changing the column types. However, I could not do it.
For example, I have 3 columns such as A: {1,2,4}, B: {3,4,4}, C: {1,1,1}
Output expected: ABC column {131, 241, 441}
My inputs are like this:
df['ABC'] = df['A'].map(str) + df['B'].map(str) + df['C'].map(str)
df.head()
ABC {13.01.0, 24.01.0, 44.01.0}
The type of ABC seems to be object and I could not change it via str or int.
df['ABC'].apply(str)
Also, I realized that there are NaN values in the A, B, C columns. Is it possible to merge these even with NaN values?
# Example
import pandas as pd
import numpy as np

df = pd.DataFrame()
# Considering NaN's in the data-frame
df['colA'] = [1, 2, 4, np.NaN, 5]
df['colB'] = [3, 4, 4, 3, np.NaN]
df['colC'] = [1, 1, 1, 4, 1]

# Using pd.isna() to check for NaN values in the columns
df['colA'] = df['colA'].apply(lambda x: x if pd.isna(x) else str(int(x)))
df['colB'] = df['colB'].apply(lambda x: x if pd.isna(x) else str(int(x)))
df['colC'] = df['colC'].apply(lambda x: x if pd.isna(x) else str(int(x)))

# Filling the NaN values with a blank space
df = df.fillna('')
# Transform columns into string
df = df.astype(str)
# Concatenating all together
df['ABC'] = df.sum(axis=1)
A workaround for your NaN problem could look like this, but note that NaN will become 0:
import pandas as pd
import numpy as np

df = pd.DataFrame({'A': [1, 2, 4, np.nan], 'B': [3, 4, 4, 4], 'C': [1, np.nan, 1, 3]})
df = df.replace(np.nan, 0, regex=True).astype(int).applymap(str)
df['ABC'] = df['A'] + df['B'] + df['C']
Output:
   A  B  C  ABC
0  1  3  1  131
1  2  4  0  240
2  4  4  1  441
3  0  4  3  043
Reassigning Entries in a Column of Pandas DataFrame
My goal is to conditionally index a data frame and change the values in a column for these indexes. I intend to look through column 'A' to find entries equal to 'a' and update their column 'B' with the word 'okay'.
import pandas as pd
import numpy as np

group = ['a']
df = pd.DataFrame({"A": ['a', 'b', 'a', 'a', 'c'], "B": [np.nan, np.nan, np.nan, np.nan, np.nan]})
>>> df
   A    B
0  a  NaN
1  b  NaN
2  a  NaN
3  a  NaN
4  c  NaN
df[df['A'].apply(lambda x: x in group)]['B'].fillna('okay', inplace=True)
This gives me the following error:
SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
  self._update_inplace(new_data)
Following the documentation (what I understood of it) I tried the following instead:
df[df['A'].apply(lambda x: x in group)].loc[:, 'B'].fillna('okay', inplace=True)
I can't figure out why the reassignment of NaN to 'okay' is not occurring in place, and how this can be rectified. Thank you.
Try this with lambda:
Solution first:
>>> df
   A    B
0  a  NaN
1  b  NaN
2  a  NaN
3  a  NaN
4  c  NaN
Using lambda + map or apply:
>>> df["B"] = df["A"].map(lambda x: "okay" if "a" in x else "NaN")
# OR
df["B"] = df["A"].map(lambda x: "okay" if "a" in x else np.nan)
# OR
df['B'] = df['A'].apply(lambda x: 'okay' if x == 'a' else np.nan)
>>> df
   A     B
0  a  okay
1  b   NaN
2  a  okay
3  a  okay
4  c   NaN
Solution second:
>>> df
   A    B
0  a  NaN
1  b  NaN
2  a  NaN
3  a  NaN
4  c  NaN
Another fancy way is to create a dictionary and map it across the column:
>>> frame = {'a': "okay"}
>>> df['B'] = df['A'].map(frame)
>>> df
   A     B
0  a  okay
1  b   NaN
2  a  okay
3  a  okay
4  c   NaN
Solution third:
This has already been posted by @d_kennetz, but I just want to club it together here; you can also do the assignment to both columns (A & B) in one shot:
>>> df.loc[df.A == 'a', 'B'] = "okay"
If I understand this correctly, you simply want to replace the value for a column on those rows matching a given condition (i.e. where the A column belongs to a certain group, here with a single value 'a'). The following should do the trick:
import pandas as pd

group = ['a']
df = pd.DataFrame({"A": ['a', 'b', 'a', 'a', 'c'], "B": [None, None, None, None, None]})
print(df)
df.loc[df['A'].isin(group), 'B'] = 'okay'
print(df)
What we're doing here is using the .loc filter, which just returns a view on the existing dataframe. The first argument (df['A'].isin(group)) filters on those rows matching a given criterion. (Notice you can use the equality operator (==) but not the in operator, and therefore have to use .isin() instead.) The second argument selects only the 'B' column. Then you just assign the desired value (which is a constant).
Here's the output:
   A     B
0  a  None
1  b  None
2  a  None
3  a  None
4  c  None
   A     B
0  a  okay
1  b  None
2  a  okay
3  a  okay
4  c  None
If you wanted to do fancier stuff, you might do the following:
import pandas as pd

group = ['a', 'b']
df = pd.DataFrame({"A": ['a', 'b', 'a', 'a', 'c'], "B": [None, None, None, None, None]})
df.loc[df['A'].isin(group), 'B'] = "okay, it was " + df['A'] + df['A']
print(df)
Which gives you:
   A                B
0  a  okay, it was aa
1  b  okay, it was bb
2  a  okay, it was aa
3  a  okay, it was aa
4  c             None
applying a function to a subset of columns in pandas groupby
I have a df with many columns. I would like to group by id and transform a subset of those columns, leaving the rest untouched. What is the optimal way to do this?
In particular, I have a df with a bunch of ids and I would like to z-score columns a and b within each id. Column c should remain untouched. In my actual problem I have many more columns.
The best I can think of is passing a dict of {col_name: function_name} to transform. For some reason this raises a TypeError.
MWE:
import pandas as pd
import numpy as np

np.random.seed(123)  # reproducible ex
df = pd.DataFrame(data={"a": np.arange(10),
                        "b": np.arange(10)[::-1],
                        "c": np.random.choice(a=np.arange(10), size=10)},
                  index=pd.Index(data=np.random.choice(a=[1, 2, 3], size=10), name="id"))

# create a dict for all columns other than "c" and the function to do the transform
fmap = {k: lambda x: (x - x.mean()) / x.std() for k in df.columns if k != "c"}
df.groupby("id").transform(fmap)  # yields error that "dict" is unhashable
Turns out this is a known bug: https://github.com/pandas-dev/pandas/issues/17309
One possible solution is to filter the column names first with difference, because a dict cannot be used with transform yet:
cols = df.columns.difference(['c'])
print(cols)
Index(['a', 'b'], dtype='object')

fmap = lambda x: (x - x.mean()) / x.std()
df[cols] = df.groupby("id")[cols].transform(fmap)
print(df)
           a         b  c
id
3  -1.000000  1.000000  2
2  -1.091089  1.091089  2
1  -1.134975  1.134975  6
3   0.000000  0.000000  1
1  -0.529655  0.529655  3
2   0.218218 -0.218218  9
3   1.000000 -1.000000  6
2   0.872872 -0.872872  1
1   0.680985 -0.680985  0
1   0.983645 -0.983645  1