How to implement SQL COALESCE in pandas (Python)

I have a data frame like
df = pd.DataFrame({"A":[1,2,np.nan],"B":[np.nan,10,np.nan], "C":[5,10,7]})
A B C
0 1.0 NaN 5
1 2.0 10.0 10
2 NaN NaN 7
I want to add a new column 'D'. Expected output is
A B C D
0 1.0 NaN 5 1.0
1 2.0 10.0 10 2.0
2 NaN NaN 7 7.0
Thanks in advance!

Another way is to explicitly fill column D with A,B,C in that order.
df['D'] = np.nan
df['D'] = df.D.fillna(df.A).fillna(df.B).fillna(df.C)
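Note that seeding D with NaN first isn't strictly necessary; starting the chain from A gives the same result:
df['D'] = df['A'].fillna(df['B']).fillna(df['C'])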

Another approach is to use the combine_first method of a pd.Series. Using your example df,
>>> import pandas as pd
>>> import numpy as np
>>> df = pd.DataFrame({"A":[1,2,np.nan],"B":[np.nan,10,np.nan], "C":[5,10,7]})
>>> df
A B C
0 1.0 NaN 5
1 2.0 10.0 10
2 NaN NaN 7
we have
>>> df.A.combine_first(df.B).combine_first(df.C)
0 1.0
1 2.0
2 7.0
Name: A, dtype: float64
We can use reduce to abstract this pattern to work with an arbitrary number of columns.
>>> from functools import reduce
>>> cols = [df[c] for c in df.columns]
>>> reduce(lambda acc, col: acc.combine_first(col), cols)
0 1.0
1 2.0
2 7.0
Name: A, dtype: float64
Let's put this all together in a function.
>>> def coalesce(*args):
...     return reduce(lambda acc, col: acc.combine_first(col), args)
...
>>> coalesce(*cols)
0 1.0
1 2.0
2 7.0
Name: A, dtype: float64
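To add the result as a new column, pass the columns in priority order (a minimal sketch using the function above):
>>> df['D'] = coalesce(df.A, df.B, df.C)
>>> df
A B C D
0 1.0 NaN 5 1.0
1 2.0 10.0 10 2.0
2 NaN NaN 7 7.0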

I think you need bfill along the columns, then select the first column with iloc:
df['D'] = df.bfill(axis=1).iloc[:,0]
print (df)
A B C D
0 1.0 NaN 5 1.0
1 2.0 10.0 10 2.0
2 NaN NaN 7 7.0
same as:
df['D'] = df.fillna(method='bfill',axis=1).iloc[:,0]
print (df)
A B C D
0 1.0 NaN 5 1.0
1 2.0 10.0 10 2.0
2 NaN NaN 7 7.0
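Note: in recent pandas versions, fillna(method='bfill') is deprecated in favor of calling bfill directly, so the first form is the safer choice going forward.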

option 1
pandas
df.assign(D=df.lookup(df.index, df.isnull().idxmin(1)))
A B C D
0 1.0 NaN 5 1.0
1 2.0 10.0 10 2.0
2 NaN NaN 7 7.0
option 2
numpy
v = df.values
j = np.isnan(v).argmin(1)
df.assign(D=v[np.arange(len(v)), j])
A B C D
0 1.0 NaN 5 1.0
1 2.0 10.0 10 2.0
2 NaN NaN 7 7.0
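Note: DataFrame.lookup was deprecated (pandas 1.2) and removed in pandas 2.0, so on modern versions prefer option 2's NumPy indexing.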
Naive time test: results over the given data and over larger data (timing plots omitted).

There is already a Series method in pandas that does this: combine_first fills the nulls in the calling Series with values from another.
df['D'] = df['A'].combine_first(df['C'])
Or chain the calls if you want to look up values sequentially across several columns:
df['D'] = df['A'].combine_first(df['B']).combine_first(df['C'])
This outputs the following:
>>> df
A B C D
0 1.0 NaN 5 1.0
1 2.0 10.0 10 2.0
2 NaN NaN 7 7.0

Related

create new pandas column based on if and else rule

I have this dataframe and I want to create column e:
df
a b c d
1 2 1 2
NaN NaN 3 1
NaN NaN NaN 5
4 5 0 2
I want to create a new column based on these criteria:
1. Take the highest of column a vs column b.
2. If columns a and b both have no value, then look at column c.
3. If there is no value in column c, then look at column d.
df
a b c d e
1 2 1 2 2
NaN NaN 3 1 3
NaN NaN NaN 5 5
4 5 0 2 5
My idea only gets as far as step 2:
def e(x):
    if x['a'] >= x['b']:
        return x['a']
    elif x['a'] <= x['b']:
        return x['b']
    else:
        return x['c']
df['e'] = df.apply(e, axis=1)
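For completeness, a sketch extending this row-wise approach to cover steps 2 and 3 as well (it relies on the fact that Series.max skips NaN; the vectorized answers below are preferable for large frames):
def e(x):
    ab_max = pd.Series([x['a'], x['b']]).max()  # NaN-skipping max of a and b (step 1)
    if pd.notna(ab_max):
        return ab_max
    if pd.notna(x['c']):                        # step 2: fall back to c
        return x['c']
    return x['d']                               # step 3: fall back to d

df['e'] = df.apply(e, axis=1)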
IIUC, use pandas.DataFrame.bfill:
df["e"] = df.bfill(1)[["a", "b"]].max(1)
print(df)
Output:
a b c d e
0 1 2 1 2 2.0
1 NaN NaN 3 1 3.0
2 NaN NaN NaN 5 5.0
3 4 5 0 2 5.0
You can always use np.where()
df['e'] = df['d']
df['e'] = np.where((df['a'].isna()) & (df['b'].isna()) & (df['c'].notnull()), df['c'], df['e'])
df['e'] = np.where((df['a'].notnull()) & (df['b'].notnull()) & (df['a'] > df['b']), df['a'], df['e'])
df['e'] = np.where((df['a'].notnull()) & (df['b'].notnull()) & (df['b'] >= df['a']), df['b'], df['e'])
df
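The same cascade can be written more compactly with np.select, which evaluates the conditions in order (a sketch; the NaN-skipping max of a and b also covers ties and the case where only one of them is present):
conditions = [
    df[['a', 'b']].notna().any(axis=1),  # step 1: at least one of a/b present
    df['c'].notna(),                     # step 2: fall back to c
    df['d'].notna(),                     # step 3: fall back to d
]
choices = [df[['a', 'b']].max(axis=1), df['c'], df['d']]
df['e'] = np.select(conditions, choices, default=np.nan)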
First get the maximum of the a and b values and assign it back to column a, then back-fill missing values along the rows and select the first column, so that c is prioritized over d:
df['e'] = df.assign(a = df[['a','b']].max(axis=1)).bfill(axis=1).iloc[:, 0]
print (df)
a b c d e
0 1.0 2.0 1.0 2 2.0
1 NaN NaN 3.0 1 3.0
2 NaN NaN NaN 5 5.0
3 4.0 5.0 0.0 2 5.0
If you want to test only the a, b, c, d columns (and the frame possibly contains some other columns):
df['e'] = df[['a','b']].max(axis=1).fillna(df.c).fillna(df.d)
print (df)
a b c d e
0 1.0 2.0 1.0 2 2.0
1 NaN NaN 3.0 1 3.0
2 NaN NaN NaN 5 5.0
3 4.0 5.0 0.0 2 5.0
If the second row is changed to c=3, d=5, the output still prioritizes c:
df['e'] = df.assign(a = df[['a','b']].max(axis=1)).bfill(axis=1).iloc[:, 0]
print (df)
a b c d e
0 1.0 2.0 1.0 2 2.0
1 NaN NaN 3.0 5 3.0 <- changed d=5
2 NaN NaN NaN 5 5.0
3 4.0 5.0 0.0 2 5.0

Perform arithmetic operations on null values

When I try to perform an arithmetic operation involving two or more columns, I run into a problem with null values.
I should also mention that I don't want to fill the missing/null values.
I want something like 1 + np.nan = 1, but it gives np.nan. I tried to solve it with np.nansum, but it didn't work.
df = pd.DataFrame({"a":[1,2,3,4],"b":[1,2,np.nan,np.nan]})
df["c"] = df.a + df.b
df
Out[6]:
a b c
0 1 1.0 2.0
1 2 2.0 4.0
2 3 NaN NaN
3 4 NaN NaN
And,
df["d"] = np.nansum([df.a + df.b])
df
Out[13]:
a b d
0 1 1.0 6.0
1 2 2.0 6.0
2 3 NaN 6.0
3 4 NaN 6.0
But i want actually like,
df
Out[10]:
a b c
0 1 1.0 2.0
1 2 2.0 4.0
2 3 NaN 3.0
3 4 NaN 4.0
The np.nansum here calculated the sum of the entire column, collapsing it to a single scalar. You do not want that; you probably want to call np.nansum on the two columns as separate inputs, summing pairwise over axis=0:
df['d'] = np.nansum((df.a, df.b), axis=0)
This then yields the expected:
>>> df
a b d
0 1 1.0 2.0
1 2 2.0 4.0
2 3 NaN 3.0
3 4 NaN 4.0
Simply use DataFrame.sum over axis=1 (NaN values are skipped by default):
df['c'] = df.sum(axis=1)
Output
a b c
0 1 1.0 2.0
1 2 2.0 4.0
2 3 NaN 3.0
3 4 NaN 4.0
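A related idiom for pairwise NaN-aware addition is Series.add with fill_value (a sketch; if both operands are NaN the result stays NaN, which is fine here because column a has no nulls):
df['c'] = df['a'].add(df['b'], fill_value=0)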

Pandas: split/group dataframe by row values

I have a dataframe of the following form
In [1]: df
Out [1]:
A B C D
1 0 2 6 0
2 6 1 5 2
3 NaN NaN NaN NaN
4 9 3 2 2
...
15 2 12 5 23
16 NaN NaN NaN NaN
17 8 1 5 3
I'm interested in splitting the dataframe into multiple dataframes (or grouping it) by the NaN rows.
So resulting in something as follows
In [2]: df1
Out [2]:
A B C D
1 0 2 6 0
2 6 1 5 2
In [3]: df2
Out [3]:
A B C D
1 9 3 2 2
...
12 2 12 5 23
In [4]: df3
Out [4]:
A B C D
1 8 1 5 3
You could use the compare-cumsum-groupby pattern, where we find the all-null rows, cumulative sum those to get a group number for each subgroup, and then iterate over the groups:
In [114]: breaks = df.isnull().all(axis=1)
In [115]: groups = [group.dropna(how='all') for _, group in df.groupby(breaks.cumsum())]
In [116]: for group in groups:
...: print(group)
...: print("--")
...:
A B C D
1 0.0 2.0 6.0 0.0
2 6.0 1.0 5.0 2.0
--
A B C D
4 9.0 3.0 2.0 2.0
15 2.0 12.0 5.0 23.0
--
A B C D
17 8.0 1.0 5.0 3.0
--
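To see why this works: breaks is True exactly on the all-NaN rows, so the cumulative sum assigns a constant label to each block of rows, with the separator row itself taking the new label before dropna(how='all') removes it. For the example frame the group labels look like:
breaks.cumsum()
# 1     0
# 2     0
# 3     1  <- all-NaN separator row
# 4     1
# ...
# 15    1
# 16    2  <- all-NaN separator row
# 17    2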
You can use locals() with groupby to split the frame into separately named variables:
variables = locals()
for x, y in df.dropna(0).groupby(df.isnull().all(1).cumsum()[~df.isnull().all(1)]):
    variables["df{0}".format(x + 1)] = y
df1
Out[768]:
A B C D
1 0.0 2.0 6.0 0.0
2 6.0 1.0 5.0 2.0
df2
Out[769]:
A B C D
4 9.0 3.0 2.0 2.0
15 2.0 12.0 5.0 23.0
I'd use a dictionary with groupby and cumsum:
dictofdfs = {}
for n, g in df.groupby(df.isnull().all(1).cumsum()):
    dictofdfs[n] = g.dropna()
Output:
dictofdfs[0]
A B C D
1 0.0 2.0 6.0 0.0
2 6.0 1.0 5.0 2.0
dictofdfs[1]
A B C D
4 9.0 3.0 2.0 2.0
15 2.0 12.0 5.0 23.0
dictofdfs[2]
A B C D
17 8.0 1.0 5.0 3.0

Pandas: convert first value in group to np.nan

I have the following DataFrame:
df = pd.DataFrame({'series1':['A','A','A','A','B','B','B','C','C','C','C'],
'series2':[0,1,10,99,-9,9,0,10,20,10,10]})
series1 series2
0 A 0.0
1 A 1.0
2 A 10.0
3 A 99.0
4 B -9.0
5 B 9.0
6 B 0.0
7 C 10.0
8 C 20.0
9 C 10.0
10 C 10.0
What I want:
df2 = pd.DataFrame({'series1':['A','A','A','A','B','B','B','C','C','C','C'],
'series2':[np.nan,1,10,99,np.nan,9,0,np.nan,20,10,10]})
series1 series2
0 A NaN
1 A 1.0
2 A 10.0
3 A 99.0
4 B NaN
5 B 9.0
6 B 0.0
7 C NaN
8 C 20.0
9 C 10.0
10 C 10.0
I have a feeling this can be done with pandas' .groupby function:
df.groupby('series1').first()
series2
series1
A 0
B -9
C 10
which gives me the observations I want to convert to NaNs, but I can't figure out a way to easily replace this in the original DataFrame.
This is just a simple example; the actual dataframe I'm working with has >8,000,000 observations.
There's probably a slicker way to do this, but the first element in each group is the 0th element in that group, and cumcount numbers the elements within each group. So:
In [19]: df.loc[df.groupby('series1').cumcount() == 0, 'series2'] = np.nan
In [20]: df
Out[20]:
series1 series2
0 A NaN
1 A 1.0
2 A 10.0
3 A 99.0
4 B NaN
5 B 9.0
6 B 0.0
7 C NaN
8 C 20.0
9 C 10.0
10 C 10.0
You want to locate discontinuities in series1 by shifting it down and comparing to itself:
df.loc[df['series1'].shift() != df['series1'], 'series2'] = np.nan
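Equivalent one-liners, assuming (as above) that each series1 value forms a single contiguous block: Series.mask blanks positions where the condition is True, and Series.duplicated marks everything after the first occurrence of each value:
df['series2'] = df['series2'].mask(df['series1'] != df['series1'].shift())
# or
df.loc[~df['series1'].duplicated(), 'series2'] = np.nan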
Another option, shifting the column up and then back down within each group:
df['series2'] = df.groupby('series1').series2.transform(lambda x: x.shift(-1).shift())
df
# series1 series2
#0 A NaN
#1 A 1.0
#2 A 10.0
#3 A 99.0
#4 B NaN
#5 B 9.0
#6 B 0.0
#7 C NaN
#8 C 20.0
#9 C 10.0
#10 C 10.0
Or you can use head(1) (or, in recent pandas, nth(0)), which keep the original row index, so index slicing picks the first row of each group. Note that nth is 0-based, and that first() aggregates per group and resets the index, so its index cannot be used this way:
df.loc[df.groupby('series1', as_index=False).head(1).index, 'series2'] = np.nan
#df.loc[df.groupby('series1', as_index=False).nth(0).index, 'series2'] = np.nan

Pandas: concatenate and reindex dataframes

I would like to combine two pandas dataframes into a new third dataframe using a new index. Suppose I start with the following:
df = pd.DataFrame(np.ones(25).reshape((5,5)),index = ['A','B','C','D','E'])
df1 = pd.DataFrame(np.ones(25).reshape((5,5))*2,index = ['A','B','C','D','E'])
df[2] = np.nan
df1[3] = np.nan
df[4] = np.nan
df1[4] = np.nan
I would like the least convoluted way to achieve the following result:
NewIndex OldIndex df df1
1 A 1 2
2 B 1 2
3 C 1 2
4 D 1 2
5 E 1 2
6 A 1 2
7 B 1 2
8 C 1 2
9 D 1 2
10 E 1 2
11 A NaN 2
12 B NaN 2
13 C NaN 2
14 D NaN 2
15 E NaN 2
16 A 1 NaN
17 B 1 NaN
18 C 1 NaN
19 D 1 NaN
20 E 1 NaN
What's the best way to do this?
You have to unstack your dataframes and then reindex the concatenated dataframe.
import numpy as np
import pandas as pd
# test data
df = pd.DataFrame(np.ones(25).reshape((5,5)),index = ['A','B','C','D','E'])
df1 = pd.DataFrame(np.ones(25).reshape((5,5))*2,index = ['A','B','C','D','E'])
df[2] = np.nan
df1[3] = np.nan
df[4] = np.nan
df1[4] = np.nan
# unstack tables and concat
newdf = pd.concat([df.unstack(),df1.unstack()], axis=1)
# reset multiindex for level 1
newdf.reset_index(1, inplace=True)
# rename columns
newdf.columns = ['OldIndex','df','df1']
# drop old index
newdf = newdf.reset_index().drop('index', axis=1)
# set index from 1
newdf.index = np.arange(1, len(newdf) + 1)
# rename new index
newdf.index.name='NewIndex'
print(newdf)
Output:
OldIndex df df1
NewIndex
1 A 1.0 2.0
2 B 1.0 2.0
3 C 1.0 2.0
4 D 1.0 2.0
5 E 1.0 2.0
6 A 1.0 2.0
7 B 1.0 2.0
8 C 1.0 2.0
9 D 1.0 2.0
10 E 1.0 2.0
11 A NaN 2.0
12 B NaN 2.0
13 C NaN 2.0
14 D NaN 2.0
15 E NaN 2.0
16 A 1.0 NaN
17 B 1.0 NaN
18 C 1.0 NaN
19 D 1.0 NaN
20 E 1.0 NaN
21 A NaN NaN
22 B NaN NaN
23 C NaN NaN
24 D NaN NaN
25 E NaN NaN
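Note that this output keeps the five rows (21-25) where both columns are NaN, whereas the desired output stops at row 20. A sketch of how to match it exactly, dropping the all-NaN rows before renumbering:
newdf = newdf.dropna(how='all', subset=['df', 'df1'])
newdf.index = np.arange(1, len(newdf) + 1)
newdf.index.name = 'NewIndex'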
