I have several DataFrames that all share the same index and column structure. The problem is that they contain NaN values.
I want to replace each NaN with the mean of the corresponding values in the other DataFrames.
For example, let's look at 3 DataFrames.
DataFrame1 with a NaN at row 1, column M2:
M1 M2 M3
0 1 1 2
1 8 NaN 9
2 4 2 7
3 9 6 3
DataFrame2 with a NaN at row 0, column M3:
M1 M2 M3
0 2 3 NaN
1 1 1 6
2 1 2 9
3 4 6 2
DataFrame3 with a NaN at row 3, column M2:
M1 M2 M3
0 1 4 2
1 2 9 1
2 1 6 5
3 1 NaN 4
So the NaN in the first DataFrame is replaced by 5, i.e. (9+1)/2; the second NaN by 2, since (2+2)/2 = 2; the third by 6; and so on.
Is there any good and elegant way to do it?
This is one way, using numpy.nanmean:
avg = np.nanmean([df1.values, df2.values, df3.values], axis=0)
for df in (df1, df2, df3):
    df[df.isnull()] = avg
df1, df2, df3 = (df.astype(int) for df in (df1, df2, df3))
Note: since np.nan is a float, the patched columns come back as float, so we cast back to int explicitly. The cast must reassign df1/df2/df3; rebinding the loop variable with df = df.astype(int) inside the loop would not change the original frames.
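As a self-contained check, here is the whole approach run against the question's three sample frames (reconstructed here from the tables above):

```python
import numpy as np
import pandas as pd

# Sample frames reconstructed from the question.
df1 = pd.DataFrame({'M1': [1, 8, 4, 9], 'M2': [1, np.nan, 2, 6], 'M3': [2, 9, 7, 3]})
df2 = pd.DataFrame({'M1': [2, 1, 1, 4], 'M2': [3, 1, 2, 6], 'M3': [np.nan, 6, 9, 2]})
df3 = pd.DataFrame({'M1': [1, 2, 1, 1], 'M2': [4, 9, 6, np.nan], 'M3': [2, 1, 5, 4]})

# Element-wise mean across the three frames, ignoring NaN cells.
avg = np.nanmean([df1.values, df2.values, df3.values], axis=0)

# Replace only the NaN cells with the corresponding mean, then cast back to int.
df1, df2, df3 = (df.mask(df.isnull(), avg).astype(int) for df in (df1, df2, df3))
```

Here `mask` patches only the NaN positions, so the untouched cells keep their original integer values.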
We can concat with keys, fill the NaNs per group with groupby + fillna, then split back to get what you need:
s = pd.concat([df1, df2, df3], keys=[1, 2, 3])
s = s.groupby(level=1).apply(lambda x: x.fillna(x.mean()))
df1, df2, df3 = [x.reset_index(level=0, drop=True) for _, x in s.groupby(level=0)]
df1
Out[1737]:
M1 M2 M3
0 1 1.0 2.0
1 8 5.0 9.0
2 4 2.0 7.0
3 9 6.0 3.0
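For reference, a self-contained sketch of the same concat-and-group idea, written with transform instead of apply so the MultiIndex is left untouched regardless of pandas version (the sample frames are reconstructed from the question):

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'M1': [1, 8, 4, 9], 'M2': [1, np.nan, 2, 6], 'M3': [2, 9, 7, 3]})
df2 = pd.DataFrame({'M1': [2, 1, 1, 4], 'M2': [3, 1, 2, 6], 'M3': [np.nan, 6, 9, 2]})
df3 = pd.DataFrame({'M1': [1, 2, 1, 1], 'M2': [4, 9, 6, np.nan], 'M3': [2, 1, 5, 4]})

# Stack the frames under keys 1/2/3, fill each row-position group with its
# own mean, then split back into three frames.
s = pd.concat([df1, df2, df3], keys=[1, 2, 3])
filled = s.groupby(level=1).transform(lambda g: g.fillna(g.mean()))
df1, df2, df3 = (x.droplevel(0) for _, x in filled.groupby(level=0))
```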
Related
I have 2 different dataframes: df1, df2
df1:
index a
0 10
1 2
2 3
3 1
4 7
5 6
df2:
index a
0 1
1 2
2 4
3 3
4 20
5 5
I want to find the index of the maximum value within a rolling lookback window over df1 (let's use lookback=3 in this example). To do this, I use the following code:
tdf['a'] = df1.rolling(lookback).apply(lambda x: x.idxmax())
And the result would be:
id a
0 nan
1 nan
2 0
3 2
4 4
5 4
Now, for each index found by idxmax(), I need to save the corresponding value from df2 into tdf['b'].
So if tdf['a'].iloc[3] == 2, I want tdf['b'].iloc[3] == df2['a'].iloc[2]. I expect the final result to be like this:
id b
0 nan
1 nan
2 1
3 4
4 20
5 20
I'm guessing that I can do this with the .loc indexer, like this:
tdf['b'] = df2.loc[tdf['a']]
But it throws an exception because there are nan values in tdf['a']. If I use dropna() before passing tdf['a'] to the .loc() function, then the indices get messed up (for example in tdf['b'], index 0 has to be nan but it'll have a value after dropna()).
Is there any way to get what I want?
Simply use a map:
lookback = 3
s = df1['a'].rolling(lookback).apply(lambda x: x.idxmax())
s.map(df2['a'])
Output:
0 NaN
1 NaN
2 1.0
3 4.0
4 20.0
5 20.0
Name: a, dtype: float64
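A self-contained version of this answer, rebuilding the question's df1/df2 (column names taken from the question):

```python
import pandas as pd

df1 = pd.DataFrame({'a': [10, 2, 3, 1, 7, 6]})
df2 = pd.DataFrame({'a': [1, 2, 4, 3, 20, 5]})

lookback = 3
# Index of the maximum inside each rolling window of df1['a'] ...
s = df1['a'].rolling(lookback).apply(lambda x: x.idxmax())
# ... used as the lookup key into df2['a'].
tdf = pd.DataFrame({'b': s.map(df2['a'])})
```

Because map aligns on the lookup Series' index, the NaN entries produced by the incomplete first windows simply map to NaN, so no dropna is needed and the row positions stay intact.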
Let's say we want to compute variable D in the dataframe below from the time values in variables B and C.
Here, the second row of D is C2 - B1, a difference of 4 minutes;
the third row is C3 - B2 = 4 minutes, and so on.
There is no reference value for the first row of D, so it is NA.
Issue:
We also want an NA value in the first row whenever the category value in variable A changes from 1 to 2. In other words, the value -183 must be replaced by NA.
A B C D
1 5:43:00 5:24:00 NA
1 6:19:00 5:47:00 4
1 6:53:00 6:23:00 4
1 7:29:00 6:55:00 2
1 8:03:00 7:31:00 2
1 8:43:00 8:05:00 2
2 6:07:00 5:40:00 -183
2 6:42:00 6:11:00 4
2 7:15:00 6:45:00 3
2 7:53:00 7:17:00 2
2 8:30:00 7:55:00 2
2 9:07:00 8:32:00 2
2 9:41:00 9:09:00 2
2 10:17:00 9:46:00 5
2 10:52:00 10:20:00 3
You can use:
# Compute delta
df['D'] = (pd.to_timedelta(df['C']).sub(pd.to_timedelta(df['B'].shift()))
.dt.total_seconds().div(60))
# Fill nan
df.loc[df['A'].ne(df['A'].shift()), 'D'] = np.nan
Output:
>>> df
A B C D
0 1 5:43:00 5:24:00 NaN
1 1 6:19:00 5:47:00 4.0
2 1 6:53:00 6:23:00 4.0
3 1 7:29:00 6:55:00 2.0
4 1 8:03:00 7:31:00 2.0
5 1 8:43:00 8:05:00 2.0
6 2 6:07:00 5:40:00 NaN
7 2 6:42:00 6:11:00 4.0
8 2 7:15:00 6:45:00 3.0
9 2 7:53:00 7:17:00 2.0
10 2 8:30:00 7:55:00 2.0
11 2 9:07:00 8:32:00 2.0
12 2 9:41:00 9:09:00 2.0
13 2 10:17:00 9:46:00 5.0
14 2 10:52:00 10:20:00 3.0
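A runnable sketch of the same two steps on a shortened version of the sample data (times copied from the first rows of each A group):

```python
import numpy as np
import pandas as pd

# Shortened sample: first rows of each A group from the question.
df = pd.DataFrame({
    'A': [1, 1, 1, 2, 2],
    'B': ['5:43:00', '6:19:00', '6:53:00', '6:07:00', '6:42:00'],
    'C': ['5:24:00', '5:47:00', '6:23:00', '5:40:00', '6:11:00'],
})

# Minutes between this row's C and the previous row's B.
df['D'] = (pd.to_timedelta(df['C']).sub(pd.to_timedelta(df['B'].shift()))
           .dt.total_seconds().div(60))

# NaN out the first row of each A group (where A differs from the row above).
df.loc[df['A'].ne(df['A'].shift()), 'D'] = np.nan
```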
You can use the difference between datetime columns in pandas.
Having
df['B_dt'] = pd.to_datetime(df['B'])
df['C_dt'] = pd.to_datetime(df['C'])
Makes the following possible
>>> df['D'] = (df.groupby('A')
.apply(lambda s: (s['C_dt'] - s['B_dt'].shift()).dt.seconds / 60)
.reset_index(drop=True))
You can always drop these helper columns later. (Using .dt.total_seconds() instead of .dt.seconds would also be safer in case any delta were negative.)
df:
index a b c d
-
0 1 2 NaN NaN
1 2 NaN 3 NaN
2 5 NaN 6 NaN
3 1 NaN NaN 5
df expect:
index one two
-
0 1 2
1 2 3
2 5 6
3 1 5
The output example above is self-explanatory. Basically, I just need to shift each row's two non-NaN values from columns [a, b, c, d] into a new pair of columns ["one", "two"].
Back-fill the missing values along the rows, then select the first 2 columns:
df = df.bfill(axis=1).iloc[:, :2].astype(int)
df.columns = ["one", "two"]
print (df)
one two
index
0 1 2
1 2 3
2 5 6
3 1 5
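Put together, a minimal runnable sketch of the bfill approach, rebuilding the sample frame (here with a default RangeIndex instead of the question's explicit index column):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 5, 1],
                   'b': [2, np.nan, np.nan, np.nan],
                   'c': [np.nan, 3, 6, np.nan],
                   'd': [np.nan, np.nan, np.nan, 5]})

# Pull each row's non-NaN values leftward, then keep the first two columns.
out = df.bfill(axis=1).iloc[:, :2].astype(int)
out.columns = ['one', 'two']
```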
Or combine_first (pop already removes each column as it is consumed, so no separate drop is needed):
df['two'] = df.pop('b').combine_first(df.pop('c')).combine_first(df.pop('d'))
df.columns = ['index', 'one', 'two']
Or fillna:
df['two'] = df.pop('b').fillna(df.pop('c')).fillna(df.pop('d'))
df.columns = ['index', 'one', 'two']
Both cases:
print(df)
Is:
index one two
0 0 1 2.0
1 1 2 3.0
2 2 5 6.0
3 3 1 5.0
If you want output like jezrael's, add (works for both cases):
df=df.set_index('index')
And then:
print(df)
Is:
one two
index
0 1 2.0
1 2 3.0
2 5 6.0
3 1 5.0
Given the following dataframe df, where df['B']=df['M1']+df['M2']:
A M1 M2 B
1 1 2 3
1 2 NaN NaN
1 3 6 9
1 4 8 12
1 NaN 10 NaN
1 6 12 18
I want the NaN in column B to equal the corresponding value in M1 or M2 provided that the latter is not NaN:
A M1 M2 B
1 1 2 3
1 2 NaN 2
1 3 6 9
1 4 8 12
1 NaN 10 10
1 6 12 18
This answer suggested using:
df.loc[df['B'].isnull(), 'B'] = df['M1']
but this line can only fall back to a single column (M1 or M2), not both at the same time.
Ideas on how I should change it to consider both columns?
EDIT
Not a duplicate question! For ease of understanding I claimed that df['B'] = df['M1'] + df['M2'], but in my real case df['B'] is not a sum and comes from a rather complicated computation. So I cannot apply a simple formula to df['B']; all I can do is change the NaN values to match the corresponding value in either M1 or M2.
Based on our discussion in the comments above:
df.B = df.B.fillna(df[['M1', 'M2']].max(axis=1))
df
Out[52]:
A M1 M2 B
0 1 1.0 2.0 3.0
1 1 2.0 NaN 2.0
2 1 3.0 6.0 9.0
3 1 4.0 8.0 12.0
4 1 NaN 10.0 10.0
5 1 6.0 12.0 18.0
From jezrael:
df['B'] = (df['M1'] + df['M2']).fillna(df[['M2', 'M1']].sum(axis=1))
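A quick self-contained check of the fillna + row-max idea on the question's sample data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1, 1, 1, 1, 1, 1],
                   'M1': [1, 2, 3, 4, np.nan, 6],
                   'M2': [2, np.nan, 6, 8, 10, 12],
                   'B': [3, np.nan, 9, 12, np.nan, 18]})

# Wherever B is NaN, exactly one of M1/M2 is present,
# so the row-wise max (which skips NaN) is just that surviving value.
df['B'] = df['B'].fillna(df[['M1', 'M2']].max(axis=1))
```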
Here is a dataframe
a b c d
nan nan 3 5
nan 1 2 3
1 nan 4 5
2 3 7 9
nan nan 2 3
I want to replace the observations in columns 'a' and 'b' with 0s, but only in rows where both of them are NaN. The first and last rows have NaN in both 'a' and 'b', so only those rows should get 0s in those two matching columns.
so my output must be
a b c d
0 0 3 5
nan 1 2 3
1 nan 4 5
2 3 7 9
0 0 2 3
There might be an easier builtin in Pandas, but this one should work. (Note that .ix has been removed from modern pandas; use .loc instead, and assign 0 directly so rows outside the mask are left untouched.)
df.loc[np.isnan(df.a) & np.isnan(df.b), ['a', 'b']] = 0
Actually the solution from @Psidom is much easier to read.
You can create a boolean series based on the conditions on columns a/b, and then use loc to modify corresponding columns and rows:
df.loc[df[['a','b']].isnull().all(1), ['a','b']] = 0
df
# a b c d
#0 0.0 0.0 3 5
#1 NaN 1.0 2 3
#2 1.0 NaN 4 5
#3 2.0 3.0 7 9
#4 0.0 0.0 2 3
Or:
df.loc[df.a.isnull() & df.b.isnull(), ['a','b']] = 0
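A self-contained run of Psidom's one-liner against the sample frame from the question:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [np.nan, np.nan, 1, 2, np.nan],
                   'b': [np.nan, 1, np.nan, 3, np.nan],
                   'c': [3, 2, 4, 7, 2],
                   'd': [5, 3, 5, 9, 3]})

# Zero out a and b only in rows where BOTH are NaN.
df.loc[df[['a', 'b']].isnull().all(axis=1), ['a', 'b']] = 0
```

Rows where only one of the two columns is NaN are left exactly as they were.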