Combine_first and null values in Pandas

df1:
       0     1
0    nan  3.00
1  -4.00   nan
2    nan  7.00
df2:
        0    1     2
1  -42.00  nan  8.00
2   -5.00  nan  4.00
df3 = df1.combine_first(df2)
df3:
       0     1     2
0    nan  3.00   nan
1  -4.00   nan  8.00
2   -5.00  7.00  4.00
This is what I'd like df3 to be:
       0     1     2
0    nan  3.00   nan
1  -4.00   nan  8.00
2    nan  7.00  4.00
(The difference is the cell at row 2, column 0, i.e. df3.loc[2, 0].)
That is, if the column and index are the same for any cell in both df1 and df2, I'd like df1's value to prevail, even if that value is nan. combine_first does that, except when the value in df1 is nan.

Here's a bit of a hacky way to do it. First, align df2 with df1, which creates a frame indexed with the union of df1/df2, filled with df2's values. Then assign back df1's values.
In [325]: df3, _ = df2.align(df1)
In [327]: df3.loc[df1.index, df1.columns] = df1
In [328]: df3
Out[328]:
     0    1    2
0  NaN    3  NaN
1   -4  NaN    8
2  NaN    7    4
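An equivalent sketch that stays with combine_first (assuming the same df1 and df2 as above): let combine_first build the union-shaped frame first, then overwrite df1's region so that its NaNs win too:
df3 = df1.combine_first(df2)           # union of both indexes/columns
df3.loc[df1.index, df1.columns] = df1  # df1's values prevail, NaN included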

Related

Create Dataframe by calling indices of df1 that are listed in df2

I'm new to Python Pandas and have been struggling with the following problem for a while now.
The values of dataframe df1 are the indices of the rows in df2 that should be looked up:
Name1 Name2 ... Name160 Name161
0 62 18 ... NaN 75
1 79 46 ... NaN 5
2 3 26 ... NaN 0
df2 contains the values that belong to those indices:
Name1 Name2 ... Name160 Name161
0 152.0 204.0 ... NaN 164.0
1 175.0 308.0 ... NaN 571.0
2 252.0 695.0 ... NaN 577.0
3 379.0 722.0 ... NaN 655.0
4 398.0 834.0 ... NaN 675.0
.. ... ... ... ... ...
213 NaN NaN ... NaN NaN
214 NaN NaN ... NaN NaN
215 NaN NaN ... NaN NaN
216 NaN NaN ... NaN NaN
217 NaN NaN ... NaN NaN
For example, df1 shows the value 0 in column 'Name161', so df3 should show the value listed in df2 at index 0, in this case 164.0.
So far, I've got df3 showing the first 3 rows of df2, but of course that's not what I would like to achieve.
Input:
df3 = df1*0
for c in df1.columns:
    df3[c] = df2[c]
print(df3)
Output:
Name1 Name2 ... Name160 Name161
0 152.0 204.0 ... NaN 164.0
1 175.0 308.0 ... NaN 571.0
2 252.0 695.0 ... NaN 577.0
Any help would be much appreciated, thanks!
Use DataFrame.stack with Series.reset_index to reshape both DataFrames, then merge them with DataFrame.merge using a left join, and finally pivot with DataFrame.pivot:
# index values in df1 changed here so they match sample rows of df2
print (df1)
   Name1  Name2  Name160  Name161
0      2      4      NaN        4
1      0    213      NaN      216
2      3      2      NaN        0
df11 = df1.stack().reset_index(name='idx')
df22 = df2.stack().reset_index(name='val')
df = (df11.merge(df22,
                 left_on=['idx', 'level_1'],
                 right_on=['level_0', 'level_1'],
                 how='left')
          .pivot(index='level_0_x', columns='level_1', values='val')
          .reindex(df1.columns, axis=1)
          .rename_axis(None))
print (df)
   Name1  Name2  Name160  Name161
0  252.0  834.0      NaN    675.0
1  152.0    NaN      NaN      NaN
2  379.0  695.0      NaN    164.0
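A more direct sketch, assuming the values in df1 are valid row labels of df2 (NaN lookups simply stay NaN): reindex each column of df2 by the matching column of df1:
df3 = df1.copy()
for c in df1.columns:
    # take df2[c] at the row labels stored in df1[c]; unmatched/NaN labels give NaN
    df3[c] = df2[c].reindex(df1[c]).to_numpy()
print (df3)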

In Pandas, how to replace a zero value with the nearest non-NaN value?

I have a dataframe where the column looks like:
NaN
859.0
NaN
NaN
0.0
NaN
and I would like to replace the zero with the previous non-NaN value, without changing the other NaNs, so I'd get this:
NaN
859.0
NaN
NaN
859.0
NaN
I've tried replace with ffill, but can't manage to get the right output.
Any help welcome!
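For reference, an assumed construction of the column as a Series, matching the values shown above (the answers below refer to it as both s and df):
import numpy as np
import pandas as pd

s = pd.Series([np.nan, 859.0, np.nan, np.nan, 0.0, np.nan])
df = s.to_frame('col')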
.ffill() propagates the last non-null value forward, and .shift() then moves everything down one row, so each position holds the previous non-null value; you can then assign any rows where the value is 0 from that:
In [42]: s.ffill().shift()
Out[42]:
0 NaN
1 NaN
2 859.0
3 859.0
4 859.0
5 0.0
dtype: float64
In [43]: s[s==0] = s.ffill().shift()
In [44]: s
Out[44]:
0 NaN
1 859.0
2 NaN
3 NaN
4 859.0
5 NaN
dtype: float64
First replace 0 with a missing value using Series.mask, forward fill the gaps with ffill, and finally restore the original missing values with a second mask:
df['col'] = df['col'].mask(df['col'].eq(0)).ffill().mask(df['col'].isna())
print (df)
col
0 NaN
1 859.0
2 NaN
3 NaN
4 859.0
5 NaN
You could also do this with last_valid_index, say your column is in df['col']:
for i, _ in df.iterrows():
    if df.loc[i, 'col'] == 0:
        # assumes a default integer index: look at the rows before i and take
        # the value at the last index that holds a non-NaN value
        df.at[i, 'col'] = df.loc[df.loc[:i-1, 'col'].last_valid_index(), 'col']
output:
col
0 NaN
1 859.0
2 NaN
3 NaN
4 859.0
5 NaN

How to replace NaN in single column with 0 based on index

Very new to coding so please excuse my lack of knowledge
I currently have a dataframe that looks like this:
date A B C
2006-11-01 NaN 1 NaN
2016-11-02 NaN NaN 1
2016-11-03 1 NaN NaN
2016-11-04 NaN 1 NaN
2016-11-05 NaN 1 NaN
2016-11-06 NaN NaN NaN
2016-11-07 NaN 1 NaN
What I want to do, for example, is:
replace all NaN's in column A with 0 for all dates after 2016-11-03 and be able to do this same thing for each column but with different corresponding dates.
I have tried
for col in df:
    if col == 'A' & 'date' > '2016-11-03':
        value_1 = {'A': 0}
        df = df.fillna(value=value_1)
but I received this error: TypeError: unsupported operand type(s) for &: 'str' and 'str'
I'm sure this has to do with my lack of knowledge but I'm not sure how to proceed.
EDIT: what I am looking for is something like this:
date A B C
2006-11-01 NaN 1 NaN
2016-11-02 NaN NaN 1
2016-11-03 1 NaN NaN
2016-11-04 0 1 NaN
2016-11-05 0 1 NaN
2016-11-06 0 NaN NaN
2016-11-07 0 1 NaN
The condition consists of two parts: being after a certain date and being a NaN.
condition = (df['date'] > '2016-11-03') & df['A'].isnull()
Now, choose the rows that match the condition and make the respective items in the column A equal 0:
df.loc[condition, 'A'] = 0
Or, as a one-liner:
df.loc[(df.date > '2016-11-03') & (df['A'].isnull()), 'A'] = 0
Output:
A B C date
0 NaN 1.0 NaN 2006-11-01
1 NaN NaN 1.0 2016-11-02
2 1.0 NaN NaN 2016-11-03
3 0.0 1.0 NaN 2016-11-04
4 0.0 1.0 NaN 2016-11-05
5 0.0 NaN NaN 2016-11-06
6 0.0 1.0 NaN 2016-11-07
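Since the question also asks to repeat this per column with different dates, one possible sketch is a mapping of column name to cutoff date (the dates for B and C below are hypothetical placeholders):
cutoffs = {'A': '2016-11-03', 'B': '2016-11-05', 'C': '2016-11-02'}
for col, cutoff in cutoffs.items():
    df.loc[(df['date'] > cutoff) & df[col].isnull(), col] = 0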
You can also use fillna to replace the empty rows with zero, applied only to the slice that matches the date condition:
mask = df['date'] > '2016-11-03'
df.loc[mask, 'A'] = df.loc[mask, 'A'].fillna(0)

count total, total nulls, mean and median

Let us say I have a data frame with a column called values, and for this column, I want to calculate the total observations, total null observations, mean and median values per group.
I.e.,
mydf = pd.DataFrame({'date_ym': ['2018-01', '2018-01', '2018-01', '2018-01',
                                 '2018-02', '2018-02', '2018-03'],
                     'category': ['A', 'A', 'A', 'B', 'A', 'B', 'B'],
                     'values': [np.nan, 4.0, 5.1, np.nan, 6.2, np.nan, np.nan]})
mydf
Out[134]:
category date_ym values
0 A 2018-01 NaN
1 A 2018-01 4.0
2 A 2018-01 5.1
3 B 2018-01 NaN
4 A 2018-02 6.2
5 B 2018-02 NaN
6 B 2018-03 NaN
If I use groupby and agg, I get the following output:
mydf.groupby(['date_ym','category']).agg(['count', 'mean', 'median']).reset_index()
Out[135]:
date_ym category values
count mean median
0 2018-01 A 2 4.55 4.55
1 2018-01 B 0 NaN NaN
2 2018-02 A 1 6.20 6.20
3 2018-02 B 0 NaN NaN
4 2018-03 B 0 NaN NaN
But the output I'd really want is as follows:
date_ym category values
count countNAs mean median
0 2018-01 A 2 1 4.55 4.55
1 2018-01 B 0 1 NaN NaN
2 2018-02 A 1 0 6.20 6.20
3 2018-02 B 0 1 NaN NaN
4 2018-03 B 0 1 NaN NaN
You can use a custom function alongside the built-in aggregations:
def countNAs(x):
    return x.isnull().sum()

mydf.groupby(['date_ym','category']).agg(['count', countNAs, 'mean', 'median']).reset_index()
Out[647]:
date_ym category values
count countNAs mean median
0 2018-01 A 2 1.0 4.55 4.55
1 2018-01 B 0 1.0 NaN NaN
2 2018-02 A 1 0.0 6.20 6.20
3 2018-02 B 0 1.0 NaN NaN
4 2018-03 B 0 1.0 NaN NaN
This is not a straightforward approach, but it does the job.
# 'size' counts all rows per group (NaN included); 'count' counts non-NaN only
mydf = mydf.groupby(['date_ym','category']).agg(['size', 'count', 'mean', 'median']).reset_index()
# flatten the column names; 'size' lands in the countNA slot
mydf.columns = ['date_ym','category', 'countNA', 'count', 'mean', 'median']
# countNA = size - count
mydf['countNA'] = mydf['countNA'] - mydf['count']
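On pandas 0.25 or later, named aggregation can produce the same table in one pass; a sketch assuming the same mydf (the keyword names become the column labels):
out = (mydf.groupby(['date_ym', 'category'])['values']
           .agg(count='count',
                countNAs=lambda x: x.isnull().sum(),
                mean='mean',
                median='median')
           .reset_index())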

Pandas merge_asof tolerance must be integer

I have searched around but could not find the answer I was looking for. I have two dataframes: one has fairly discrete integer values in column A (df2), the other does not (df1). I would like to merge the two such that where column A is within 1, the values in columns C and D are merged once, and are NaN otherwise.
df1=
A B
0 30.00 -52.382420
1 33.14 -50.392513
2 36.28 -53.699646
3 39.42 -49.228439
.. ... ...
497 1590.58 -77.646561
498 1593.72 -77.049423
499 1596.86 -77.711639
500 1600.00 -78.092979
df2=
A C D
0 0.009 NaN NaN
1 0.036 NaN NaN
2 0.100 NaN NaN
3 10.000 12.4 0.29
4 30.000 12.82 0.307
.. ... ... ...
315 15000.000 NaN 7.65
316 16000.000 NaN 7.72
317 17000.000 NaN 8.36
318 18000.000 NaN 8.35
I would like the output to be
merged=
A B C D
0 30.00 -52.382420 12.82 0.29
1 33.14 -50.392513 NaN NaN
2 36.28 -53.699646 NaN NaN
3 39.42 -49.228439 NaN NaN
.. ... ... ... ...
497 1590.58 -77.646561 NaN NaN
498 1593.72 -77.049423 NaN NaN
499 1596.86 -77.711639 NaN NaN
500 1600.00 -78.092979 28.51 2.5
I tried:
merged = pd.merge_asof(df1, df2, left_on='A', tolerance=1, direction='nearest')
This gives me a MergeError: key must be integer or timestamp.
So far the only way I've been able to successfully merge the dataframes is with:
merged = pd.merge_asof(df1, df2, on='A')
But this takes whatever value was close enough in columns C and D and fills in the NaN values.
For anyone else facing a similar problem: the column that the merge is performed on must be an integer (at least with an integer tolerance in the pandas version I was using). In my case this meant having to change column A to an int.
df1['A Int'] = df1['A'].astype(int)
df2['A Int'] = df2['A'].astype(int)
merged = pd.merge_asof(df1, df2, on='A Int', direction='nearest', tolerance=1)
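Note that recent pandas versions also accept float keys with a numeric tolerance, so the integer cast may be avoidable; a hedged sketch, assuming a pandas version with float-key support (merge_asof also requires both frames to be sorted on the key):
merged = pd.merge_asof(df1.sort_values('A'), df2.sort_values('A'),
                       on='A', direction='nearest', tolerance=1.0)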
