I have an initial column (A) with no missing data but with repeated values, and a second column (B) with missing data. How do I fill B so that each value in A is always paired with the same value in B? I would also like any other columns (C) to remain unchanged.
For example, this is what I have
A B C
1 1 20 4
2 2 NaN 8
3 3 NaN 2
4 2 30 9
5 3 40 1
6 1 NaN 3
And this is what I want
A B C
1 1 20 4
2 2 30* 8
3 3 40* 2
4 2 30 9
5 3 40 1
6 1 20* 3
Asterisks mark the filled values.
This needs to be scalable with a very large dataframe.
Additionally, if a value in the left column corresponded to more than one value on the right across separate observations, how would I fill with the mean instead?
You can group by 'A' and use first to find the first corresponding value in 'B' for each group (first skips NaN).
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3, 2, 3, 1],
                   'B': [20, None, None, 30, 40, None],
                   'C': [4, 8, 2, 9, 1, 3]})
# find the first non-null 'B' value for each 'A'
lookup = df[['A', 'B']].groupby('A').first()['B']
# only fill rows where 'B' is NaN
nan_mask = df['B'].isnull()
# replace NaN values in 'B' with the lookup value keyed on 'A'
df.loc[nan_mask, 'B'] = df.loc[nan_mask, 'A'].map(lookup)
print(df)
Which outputs:
A B C
0 1 20.0 4
1 2 30.0 8
2 3 40.0 2
3 2 30.0 9
4 3 40.0 1
5 1 20.0 3
If there are many NaN values in 'B' you might want to exclude them before you use groupby.
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3, 2, 3, 1],
                   'B': [20, None, None, 30, 40, None],
                   'C': [4, 8, 2, 9, 1, 3]})
# only fill rows where 'B' is NaN
nan_mask = df['B'].isnull()
# find the first 'B' value for each 'A', excluding the NaN rows
lookup = df[~nan_mask][['A', 'B']].groupby('A').first()['B']
df.loc[nan_mask, 'B'] = df.loc[nan_mask, 'A'].map(lookup)
print(df)
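The follow-up about filling with the mean, when one 'A' value has several different 'B' values, can be handled with the same idea via groupby/transform. A minimal sketch, reusing the example data (transform('mean') ignores NaN by default):

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3, 2, 3, 1],
                   'B': [20, None, None, 30, 40, None],
                   'C': [4, 8, 2, 9, 1, 3]})

# mean of the non-null 'B' values per 'A', broadcast back to every row
group_mean = df.groupby('A')['B'].transform('mean')

# fill only the missing entries; existing values are left untouched
df['B'] = df['B'].fillna(group_mean)
print(df)
```

When each 'A' has only one non-null 'B' (as in this data), the mean reduces to that single value, so the result matches the first/lookup approach.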
You could call sort_values first, then forward-fill column B based on column A:
import pandas as pd
import numpy as np
x = {'A': [1, 2, 3, 2, 3, 1],
     'B': [20, np.nan, np.nan, 30, 40, np.nan],
     'C': [4, 8, 2, 9, 1, 3]}
df = pd.DataFrame(x)

# sort_values first, then forward-fill; this gets the right values
# while maintaining the original order of the dataframe
df['B'] = df.sort_values(by=['A', 'B'])['B'].ffill()
print(df)
Output (original and updated shown for comparison):
Original data:
A B C
0 1 20.0 4
1 2 NaN 8
2 3 NaN 2
3 2 30.0 9
4 3 40.0 1
5 1 NaN 3
Updated data:
A B C
0 1 20.0 4
1 2 30.0 8
2 3 40.0 2
3 2 30.0 9
4 3 40.0 1
5 1 20.0 3
Related
I have 2 different dataframes: df1, df2
df1:
index a
0 10
1 2
2 3
3 1
4 7
5 6
df2:
index a
0 1
1 2
2 4
3 3
4 20
5 5
I want to find the index of maximum values with a specific lookback in df1 (let's consider lookback=3 in this example). To do this, I use the following code:
tdf['a'] = df1.rolling(lookback).apply(lambda x: x.idxmax())
And the result would be:
id a
0 nan
1 nan
2 0
3 2
4 4
5 4
Now I need to save, in tdf['b'], the values of df2 at each index found by idxmax().
So if tdf['a'].iloc[3] == 2, I want tdf['b'].iloc[3] == df2['a'].iloc[2]. I expect the final result to be like this:
id b
0 nan
1 nan
2 1
3 4
4 20
5 20
I'm guessing that I can do this using the .loc indexer, like this:
tdf['b'] = df2.loc[tdf['a']]
But it throws an exception because there are NaN values in tdf['a']. If I use dropna() before passing tdf['a'] to .loc, the indices get messed up (for example, in tdf['b'] index 0 has to be NaN, but it would have a value after dropna()).
Is there any way to get what I want?
Simply use a map:
lookback = 3
s = df1['a'].rolling(lookback).apply(lambda x: x.idxmax())
s.map(df2['a'])
Output:
0 NaN
1 NaN
2 1.0
3 4.0
4 20.0
5 20.0
Name: a, dtype: float64
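Putting it together with the names from the question, the mapped series can be assigned straight into tdf; map simply leaves the NaN positions as NaN, which is why no dropna() is needed (a sketch using the question's data):

```python
import pandas as pd

df1 = pd.DataFrame({'a': [10, 2, 3, 1, 7, 6]})
df2 = pd.DataFrame({'a': [1, 2, 4, 3, 20, 5]})

lookback = 3
tdf = pd.DataFrame()
tdf['a'] = df1['a'].rolling(lookback).apply(lambda x: x.idxmax())
# NaN lookback positions stay NaN; valid indices are looked up in df2['a']
tdf['b'] = tdf['a'].map(df2['a'])
print(tdf)
```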
I am dealing with pandas DataFrames like this:
id x
0 1 10
1 1 20
2 2 100
3 2 200
4 1 NaN
5 2 NaN
6 1 300
7 1 NaN
I would like to replace each NAN 'x' with the previous non-NAN 'x' from a row with the same 'id' value:
id x
0 1 10
1 1 20
2 2 100
3 2 200
4 1 20
5 2 200
6 1 300
7 1 300
Is there some slick way to do this without manually looping over rows?
You could perform a groupby/forward-fill operation on each group:
import numpy as np
import pandas as pd

df = pd.DataFrame({'id': [1, 1, 2, 2, 1, 2, 1, 1],
                   'x': [10, 20, 100, 200, np.nan, np.nan, 300, np.nan]})
df['x'] = df.groupby(['id'])['x'].ffill()
print(df)
yields
id x
0 1 10.0
1 1 20.0
2 2 100.0
3 2 200.0
4 1 20.0
5 2 200.0
6 1 300.0
7 1 300.0
df
id val
0 1 23.0
1 1 NaN
2 1 NaN
3 2 NaN
4 2 34.0
5 2 NaN
6 3 2.0
7 3 NaN
8 3 NaN
df.sort_values(['id','val']).groupby('id').ffill()
id val
0 1 23.0
1 1 23.0
2 1 23.0
4 2 34.0
3 2 34.0
5 2 34.0
6 3 2.0
7 3 2.0
8 3 2.0
Use sort_values, groupby, and ffill so that groups whose first value (or first several values) is NaN also get filled.
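The same leading-NaN case can also be handled without reordering, by chaining a backward fill inside each group with transform. A sketch on the example data above:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'id': [1, 1, 1, 2, 2, 2, 3, 3, 3],
                   'val': [23, np.nan, np.nan, np.nan, 34, np.nan,
                           2, np.nan, np.nan]})

# forward-fill within each group, then backward-fill to catch leading NaNs
df['val'] = df.groupby('id')['val'].transform(lambda s: s.ffill().bfill())
print(df)
```

Unlike the sort-based version, transform keeps the rows in their original order.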
Solution for multi-key problem:
In this example, the data has the key [date, region, type]. Date is the index on the original dataframe.
import os
import pandas as pd

# sort to make indexing faster
df.sort_values(by=['date', 'region', 'type'], inplace=True)
# collect all possible regions and types
regions = list(set(df['region']))
types = list(set(df['type']))
# record column names
df_cols = df.columns
# delete ffill_df.csv so we can begin anew
try:
    os.remove('ffill_df.csv')
except FileNotFoundError:
    pass
# steps:
# 1) grab rows with a particular region and type
# 2) forward-fill to fill nulls
# 3) backward-fill to fill remaining nulls
# 4) append to file
for r in regions:
    for t in types:
        group_df = df[(df.region == r) & (df.type == t)].copy()
        group_df = group_df.ffill().bfill()
        group_df.to_csv('ffill_df.csv', mode='a', header=False, index=True)
Checking the result:
# load in the ffill_df; the saved index comes back as the first column
ffill_df = pd.read_csv('ffill_df.csv', header=None, index_col=None)
ffill_df.columns = ['date', *df_cols]
ffill_df.index = ffill_df.date
ffill_df.drop('date', axis=1, inplace=True)
ffill_df.head()
#compare new and old dataframe
print(df.shape)
print(ffill_df.shape)
print()
print(pd.isnull(ffill_df).sum())
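For datasets that fit in memory, the explicit loop and CSV round-trip above can usually be replaced by a single grouped transform. A sketch with made-up data using the same region/type keys:

```python
import numpy as np
import pandas as pd

# small made-up frame with the same key structure as the example
df = pd.DataFrame({'region': ['east', 'east', 'west', 'west'],
                   'type': ['a', 'a', 'b', 'b'],
                   'value': [np.nan, 1.0, 2.0, np.nan]})

# forward- then backward-fill inside each (region, type) group
df['value'] = (df.groupby(['region', 'type'])['value']
                 .transform(lambda s: s.ffill().bfill()))
print(df)
```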
I hope you are doing well.
I need help to perform a complex "NaN replace" on my dataframe.
What is the best way to replace NaN values in a pandas column with the mode of other values in that column, filtered by other columns?
Let me illustrate my problem:
import numpy as np
import pandas as pd

data = {'Region': [1, 1, 1, 2, 2, 2, 1, 2, 2, 2, 2, 1, 1, 1, 2, 1],
        'Country': ['a', 'a', 'a', 'a', 'a', 'a', 'a', 'a',
                    'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b'],
        'GDP': [100, 100, 101, 105, 105, 110, np.nan, np.nan,
                200, 200, 100, 150, 100, 150, np.nan, np.nan]}
df = pd.DataFrame.from_dict(data)
df:
Region Country GDP
0 1 a 100.0
1 1 a 100.0
2 1 a 101.0
3 2 a 105.0
4 2 a 105.0
5 2 a 110.0
6 1 a NaN
7 2 a NaN
8 2 b 200.0
9 2 b 200.0
10 2 b 100.0
11 1 b 150.0
12 1 b 100.0
13 1 b 150.0
14 2 b NaN
15 1 b NaN
I would like to replace the nan values of the GDP column with the mode of other GDP values for the same country and region.
In the case of the NaN value of the GDP column of index 6, I wish to replace it with 100 (as it is the mode for GDP values for Region 1 & Country a)
The desired output should look like this:
Region Country GDP
0 1 a 100
1 1 a 100
2 1 a 101
3 2 a 105
4 2 a 105
5 2 a 110
6 1 a 100
7 2 a 105
8 2 b 200
9 2 b 200
10 2 b 100
11 1 b 150
12 1 b 100
13 1 b 150
14 2 b 200
15 1 b 150
Thank you for your help, I hope you have an excellent day!
Pandas' fillna allows for filling missing values from another series. So we need another series that contains the mode of each Country/Region at the corresponding indices.
To get this series, we can use Pandas' groupby().transform() operation. It groups the dataframe, and then broadcasts the results back to the original shape.
If we use this operation with mode as is, it will give an error: mode can return multiple values, which prevents pandas from broadcasting them back to the original shape. We therefore force it to return a single value by picking the first one (or the last, or whichever).
df["GDP"] = df["GDP"].fillna(
    df.groupby(["Country", "Region"])["GDP"].transform(lambda x: x.mode()[0])
)
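One caveat with indexing the mode: if a group's values are all NaN, x.mode() is empty and x.mode()[0] raises a KeyError. A slightly more defensive sketch (the first_mode helper is hypothetical, not part of the original answer):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Region': [1, 1, 2, 2],
                   'Country': ['a', 'a', 'a', 'a'],
                   'GDP': [100, np.nan, 105, np.nan]})

def first_mode(s):
    # hypothetical helper: first mode, or NaN when the group is all-NaN
    m = s.mode()
    return m.iloc[0] if not m.empty else np.nan

df['GDP'] = df['GDP'].fillna(
    df.groupby(['Country', 'Region'])['GDP'].transform(first_mode))
print(df)
```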
I have a csv like
A,B,C,D
1,2,,
1,2,30,100
1,2,40,100
4,5,,
4,5,60,200
4,5,70,200
8,9,,
In rows 1 and 4 the C value is missing (NaN). I want to take it from rows 2 and 5 respectively (the first occurrence of the same A,B values).
If no matching row is found, just put 0 (like in last line)
Expected output:
A,B,C,D
1,2,30,
1,2,30,100
1,2,40,100
4,5,60,
4,5,60,200
4,5,70,200
8,9,0,
Using fillna I found bfill ("use NEXT valid observation to fill gap"), but the next observation has to be chosen logically (looking at the values in columns A and B), not just as the next value in column C.
You'll have to call df.groupby on A and B first and then apply the bfill function:
In [501]: df.C = df.groupby(['A', 'B']).apply(lambda x: x.C.bfill()).reset_index(drop=True)
In [502]: df
Out[502]:
A B C D
0 1 2 30 NaN
1 1 2 30 100.0
2 1 2 40 100.0
3 4 5 60 NaN
4 4 5 60 200.0
5 4 5 70 200.0
6 8 9 0 NaN
You can also group and then call GroupBy.bfill directly (I think this would be faster):
In [508]: df.C = df.groupby(['A', 'B']).C.bfill().fillna(0).astype(int); df
Out[508]:
A B C D
0 1 2 30 NaN
1 1 2 30 100.0
2 1 2 40 100.0
3 4 5 60 NaN
4 4 5 60 200.0
5 4 5 70 200.0
6 8 9 0 NaN
If you wish to get rid of NaNs in D, you could do:
df.D.fillna('', inplace=True)
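For completeness, a self-contained sketch of the second approach, building the frame directly from the question's CSV text:

```python
import io

import pandas as pd

csv_text = """A,B,C,D
1,2,,
1,2,30,100
1,2,40,100
4,5,,
4,5,60,200
4,5,70,200
8,9,,
"""
df = pd.read_csv(io.StringIO(csv_text))

# back-fill C within each (A, B) group, then 0 where no later value exists
df['C'] = df.groupby(['A', 'B'])['C'].bfill().fillna(0).astype(int)
print(df)
```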
Here is a dataframe
a b c d
nan nan 3 5
nan 1 2 3
1 nan 4 5
2 3 7 9
nan nan 2 3
I want to replace the values in columns 'a' and 'b' with 0s only in rows where both of them are NaN. Here the first and last rows have NaN in both 'a' and 'b', so only those cells should become 0.
so my output must be
a b c d
0 0 3 5
nan 1 2 3
1 nan 4 5
2 3 7 9
0 0 2 3
There might be an easier builtin function in Pandas, but this one should work.
mask = np.isnan(df.a) & np.isnan(df.b)
df.loc[mask, ['a', 'b']] = df.loc[mask, ['a', 'b']].fillna(0)
Actually, the solution from @Psidom is much easier to read.
You can create a boolean series based on the conditions on columns a/b, and then use loc to modify corresponding columns and rows:
df.loc[df[['a','b']].isnull().all(1), ['a','b']] = 0
df
# a b c d
#0 0.0 0.0 3 5
#1 NaN 1.0 2 3
#2 1.0 NaN 4 5
#3 2.0 3.0 7 9
#4 0.0 0.0 2 3
Or:
df.loc[df.a.isnull() & df.b.isnull(), ['a','b']] = 0