Pandas countif based on multiple conditions, result in new column - python

How can I add a field that returns 1/0 depending on whether the value in any specified column is not NaN?
Example:
import numpy as np
import pandas as pd

df = pd.DataFrame({'id': [1,2,3,4,5,6,7,8,9,10],
                   'val1': [2,2,np.nan,np.nan,np.nan,1,np.nan,np.nan,np.nan,2],
                   'val2': [7,0.2,5,8,np.nan,1,0,np.nan,1,1],
                   })
display(df)
mycols = ['val1', 'val2']
# if the entry in any of mycols is not np.nan, then df.loc[row, 'countif'] = 1; else 0
Desired output dataframe:

We do not need COUNTIF logic in pandas; try notna + any:
df['out'] = df[['val1','val2']].notna().any(axis=1).astype(int)
df
Out[381]:
id val1 val2 out
0 1 2.0 7.0 1
1 2 2.0 0.2 1
2 3 NaN 5.0 1
3 4 NaN 8.0 1
4 5 NaN NaN 0
5 6 1.0 1.0 1
6 7 NaN 0.0 1
7 8 NaN NaN 0
8 9 NaN 1.0 1
9 10 2.0 1.0 1

Using the iloc accessor, filter the last two columns. Check whether the count of non-NaN values in each row is greater than zero, then convert the resulting Boolean to an integer.
df['countif']=df.iloc[:,1:].notna().sum(axis=1).gt(0).astype(int)
id val1 val2 countif
0 1 2.0 7.0 1
1 2 2.0 0.2 1
2 3 NaN 5.0 1
3 4 NaN 8.0 1
4 5 NaN NaN 0
5 6 1.0 1.0 1
6 7 NaN 0.0 1
7 8 NaN NaN 0
8 9 NaN 1.0 1
9 10 2.0 1.0 1
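If you also want a literal COUNTIF-style count (how many of the listed columns are non-NaN per row), here is a minimal sketch building on the question's df and mycols (the new column names are just for illustration):
import numpy as np
import pandas as pd

df = pd.DataFrame({'id': range(1, 11),
                   'val1': [2, 2, np.nan, np.nan, np.nan, 1, np.nan, np.nan, np.nan, 2],
                   'val2': [7, 0.2, 5, 8, np.nan, 1, 0, np.nan, 1, 1]})
mycols = ['val1', 'val2']
# count of non-NaN values among mycols per row
df['count_notna'] = df[mycols].notna().sum(axis=1)
# 1 if at least one of mycols is non-NaN, else 0 (same result as the answers above)
df['countif'] = (df['count_notna'] > 0).astype(int)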

Related

How do I get the index of a row where max value is not duplicated?

Given this df
from io import StringIO
import pandas as pd
data = StringIO('''gene_variant gene val1 val2 val3
b1 b 1 1 1
b2 b 2 11 1
b3 b 3 11 1
c2 c 1 1 1
t1 t 1 1 1
t2 t 12 2 2
t4 t 12 3 2
t5 t 1 4 3
d2 d 11 1 2
d4 d 11 1 1''')
df = pd.read_csv(data, sep='\t')
How do I get the gene_variant for each gene, where the gene_variant corresponds to the max value of val1 if that max is not duplicated; if it is duplicated, the gene_variant corresponds to the max value of val2 if that max is not duplicated, or otherwise to the max of val3? I.e., any tiebreakers are decided by the next column, up to the third option.
EDIT: The column val2 is only considered if the max value in val1 is duplicated (a tie); the same goes for val3. If the max value in val1/val2 is a tie, the values in those columns are no longer considered. Only the values of one column at a time are compared.
I've been trying solutions based on:
df.groupby('gene').agg(max)
and:
df.groupby('gene').rank('max')
But I can't get there without dropping out into iteration...
The correct answer would be:
b3 3
c2 1
t5 4
d2 2
Thanks in advance!
If you need the maximum only from columns that have no duplicated values within a group, you can use:
#per group, count the number of unique values in every column
df1 = df.groupby('gene').transform('nunique')
#compare each column's counts with the `gene_variant` counts and set NaN where a column has duplicates per group
s = df1.pop('gene_variant')
#option 1: if the maximum may come from any non-duplicated column, take the row maximum
max1 = df.where(df1.eq(s, axis=0)).max(axis=1)
#option 2: if the columns must be checked in order (first val1, then val2, ...),
#back fill the missing values and select the first column
max1 = df.where(df1.eq(s, axis=0)).bfill(axis=1).iloc[:, 0]
#assign column by maximum
df = df.assign(max1 = max1)
#get rows from original with maximum max1 per groups
df = df.loc[df.groupby('gene', sort=False)['max1'].idxmax(), ['gene_variant','max1']]
print (df)
gene_variant max1
2 b3 3.0
3 c2 1.0
7 t5 4.0
8 d2 2.0
How it works:
df1 = df.groupby('gene').transform('nunique')
s = df1.pop('gene_variant')
print (df.where(df1.eq(s, axis=0)))
gene_variant gene val1 val2 val3
0 NaN NaN 1.0 NaN NaN
1 NaN NaN 2.0 NaN NaN
2 NaN NaN 3.0 NaN NaN
3 NaN NaN 1.0 1.0 1.0
4 NaN NaN NaN 1.0 NaN
5 NaN NaN NaN 2.0 NaN
6 NaN NaN NaN 3.0 NaN
7 NaN NaN NaN 4.0 NaN
8 NaN NaN NaN NaN 2.0
9 NaN NaN NaN NaN 1.0
#max of all columns
print (df.where(df1.eq(s, axis=0)).max(axis=1))
0 1.0
1 2.0
2 3.0
3 1.0
4 1.0
5 2.0
6 3.0
7 4.0
8 2.0
9 1.0
dtype: float64
#back fill NaNs
print (df.where(df1.eq(s, axis=0)).bfill(axis=1))
gene_variant gene val1 val2 val3
0 1.0 1.0 1.0 NaN NaN
1 2.0 2.0 2.0 NaN NaN
2 3.0 3.0 3.0 NaN NaN
3 1.0 1.0 1.0 1.0 1.0
4 1.0 1.0 1.0 1.0 NaN
5 2.0 2.0 2.0 2.0 NaN
6 3.0 3.0 3.0 3.0 NaN
7 4.0 4.0 4.0 4.0 NaN
8 2.0 2.0 2.0 2.0 2.0
9 1.0 1.0 1.0 1.0 1.0
#selected first column
print (df.where(df1.eq(s, axis=0)).bfill(axis=1).iloc[:, 0])
0 1.0
1 2.0
2 3.0
3 1.0
4 1.0
5 2.0
6 3.0
7 4.0
8 2.0
9 1.0
Name: gene_variant, dtype: float64
You could use .sort_values() to get the maximum values. If you pass it multiple columns, later columns are used to break ties in the earlier ones.
In [9]: df.sort_values(["val1", "val2", "val3"])
Out[9]:
gene_variant gene val1 val2 val3
0 b1 b 1 1 1
3 c2 c 1 1 1
4 t1 t 1 1 1
9 d4 d 1 1 1
8 d2 d 1 1 2
7 t5 t 1 4 3
1 b2 b 2 1 1
5 t2 t 2 2 2
6 t4 t 2 3 2
2 b3 b 3 1 1
Now, in order to do this for each gene you can groupby('gene') and apply a custom function.
In [11]: df.groupby("gene").apply(
...: lambda _df: _df.sort_values(["val1", "val2", "val3"], ascending=False)
...: .head(1)
...: .squeeze()
...: )
Out[11]:
gene_variant gene val1 val2 val3
gene
b b3 b 3 1 1
c c2 c 1 1 1
d d2 d 1 1 2
t t4 t 2 3 2
However, this does not tell you which val won the tiebreaker. Note also that sorting breaks ties on val2 only among the rows that are tied on val1 (which picks t4 here), whereas the question's rule compares val2 across the whole group (which gives t5).
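A literal translation of the question's rule (check one column at a time over the whole group, and move to the next column only when its maximum is tied) could look like the sketch below. The pick helper is hypothetical and not from either answer; df is the frame built in the question.
import pandas as pd

def pick(group):
    # try val1, then val2, then val3; stop at the first column whose max is not tied
    for col in ['val1', 'val2', 'val3']:
        top = group[col].max()
        winners = group[group[col] == top]
        if len(winners) == 1 or col == 'val3':
            return pd.Series({'gene_variant': winners.iloc[0]['gene_variant'],
                              'max_val': top})

print(df.groupby('gene', sort=False).apply(pick))
This reproduces the expected output: b3 3, c2 1, t5 4, d2 2.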

Concatenate columns skipping pasted rows and columns

I hope I can describe well what I need. I have a data frame with the same column names and another column that works as an index. The data frame looks as follows:
df = pd.DataFrame({'ID':[1,1,1,1,1,2,2,2,3,3,3,3],'X':[1,2,3,4,5,2,3,4,1,3,4,5],'Y':[1,2,3,4,5,2,3,4,5,4,3,2]})
df
Out[21]:
ID X Y
0 1 1 1
1 1 2 2
2 1 3 3
3 1 4 4
4 1 5 5
5 2 2 2
6 2 3 3
7 2 4 4
8 3 1 5
9 3 3 4
10 3 4 3
11 3 5 2
My intention is to copy X as an index or as a column (it doesn't matter) and append the Y columns from each ID in the following way (the desired layout is the one shown in the answers below):
You can try
out = pd.concat([group.rename(columns={'Y': f'Y{name}'}) for name, group in df.groupby('ID')])
out.columns = out.columns.str.replace(r'\d+$', '', regex=True)
print(out)
ID X Y Y Y
0 1 1 1.0 NaN NaN
1 1 2 2.0 NaN NaN
2 1 3 3.0 NaN NaN
3 1 4 4.0 NaN NaN
4 1 5 5.0 NaN NaN
5 2 2 NaN 2.0 NaN
6 2 3 NaN 3.0 NaN
7 2 4 NaN 4.0 NaN
8 3 1 NaN NaN 5.0
9 3 3 NaN NaN 4.0
10 3 4 NaN NaN 3.0
11 3 5 NaN NaN 2.0
Here's another way to do it:
df_org = pd.DataFrame({'ID':[1,1,1,1,1,2,2,2,3,3,3,3],
                       'X':[1,2,3,4,5,2,3,4,1,3,4,5],
                       'Y':[1,2,3,4,5,2,3,4,5,4,3,2]})
df = df_org.drop(columns='Y')
for i in set(df_org['ID']):
    col = 'Y' + str(i)
    df1 = df_org.loc[df_org['ID'] == i, ['Y']].rename(columns={'Y': col})
    df = pd.concat([df, df1], axis=1)
df.columns = df.columns.str.replace(r'\d+$', '', regex=True)
print(df)
Output:
ID X Y Y Y
0 1 1 1.0 NaN NaN
1 1 2 2.0 NaN NaN
2 1 3 3.0 NaN NaN
3 1 4 4.0 NaN NaN
4 1 5 5.0 NaN NaN
5 2 2 NaN 2.0 NaN
6 2 3 NaN 3.0 NaN
7 2 4 NaN 4.0 NaN
8 3 1 NaN NaN 5.0
9 3 3 NaN NaN 4.0
10 3 4 NaN NaN 3.0
11 3 5 NaN NaN 2.0
Another solution could be as follows.
Get the unique values of column ID (stored in array s).
Use np.transpose to repeat column ID n times (n == len(s)) and compare the resulting array with s.
Use np.where to replace True with the values from df.Y and False with NaN.
Finally, drop the original df.Y and rename the new columns as required.
import pandas as pd
import numpy as np
df = pd.DataFrame({'ID':[1,1,1,1,1,2,2,2,3,3,3,3],
                   'X':[1,2,3,4,5,2,3,4,1,3,4,5],
                   'Y':[1,2,3,4,5,2,3,4,5,4,3,2]})
s = df.ID.unique()
df[s] = np.where((np.transpose([df.ID]*len(s))==s),
                 np.transpose([df.Y]*len(s)),
                 np.nan)
df.drop('Y', axis=1, inplace=True)
df.rename(columns={k:'Y' for k in s}, inplace=True)
print(df)
ID X Y Y Y
0 1 1 1.0 NaN NaN
1 1 2 2.0 NaN NaN
2 1 3 3.0 NaN NaN
3 1 4 4.0 NaN NaN
4 1 5 5.0 NaN NaN
5 2 2 NaN 2.0 NaN
6 2 3 NaN 3.0 NaN
7 2 4 NaN 4.0 NaN
8 3 1 NaN NaN 5.0
9 3 3 NaN NaN 4.0
10 3 4 NaN NaN 3.0
11 3 5 NaN NaN 2.0
If performance is an issue, this method should be faster than the answers above, especially as the number of unique values in ID increases.
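A compact variant of the same masking idea (not taken from any of the answers above, just a sketch starting again from the question's original df) builds one masked Y column per ID and concatenates them:
wide = pd.concat(
    [df[['ID', 'X']]] +
    [df['Y'].where(df['ID'] == i).rename('Y') for i in df['ID'].unique()],
    axis=1)
print(wide)
Each Series.where call keeps Y only on the rows belonging to that ID and leaves NaN elsewhere, which gives the same staggered layout.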

How to transform weekly data to daily for specific columns using Python

I am a newbie at Python and programming in general, so I hope the following question is well explained.
I have a big dataset with 80+ columns, and some of these columns only have data on a weekly basis. I would like to transform these columns to daily values by simply dividing the weekly value by 7 and assigning the result to the value itself and to the six other days of that week.
This is what my input dataset looks like:
date col1 col2 col3
02-09-2019 14 NaN 1
09-09-2019 NaN NaN 2
16-09-2019 NaN 7 3
23-09-2019 NaN NaN 4
30-09-2019 NaN NaN 5
07-10-2019 NaN NaN 6
14-10-2019 NaN NaN 7
21-10-2019 21 NaN 8
28-10-2019 NaN NaN 9
04-11-2019 NaN 14 10
11-11-2019 NaN NaN 11
..
This is what the output should look like:
date col1 col2 col3
02-09-2019 2 NaN 1
09-09-2019 2 NaN 2
16-09-2019 2 1 3
23-09-2019 2 1 4
30-09-2019 2 1 5
07-10-2019 2 1 6
14-10-2019 2 1 7
21-10-2019 3 1 8
28-10-2019 3 1 9
04-11-2019 3 2 10
11-11-2019 3 2 11
..
I can't come up with a solution, but here is what I thought might work:
def convert_to_daily(df):
    for column in df.columns.tolist():
        if df[column].isna().any():  # the column holds sparse weekly data
            for line in range(len(df[column])):
                # check if the value is not empty and is followed by 6 empty values,
                # or some better logic; I don't know how to do that
                pass
I believe you need to select the columns that contain at least one missing value, forward fill the missing values, and divide by 7:
m = df.isna().any()
df.loc[:, m] = df.loc[:, m].ffill(limit=7).div(7)
print (df)
date col1 col2 col3
0 02-09-2019 2.0 NaN 1
1 09-09-2019 2.0 NaN 2
2 16-09-2019 2.0 1.0 3
3 23-09-2019 2.0 1.0 4
4 30-09-2019 2.0 1.0 5
5 07-10-2019 2.0 1.0 6
6 14-10-2019 2.0 1.0 7
7 21-10-2019 3.0 1.0 8
8 28-10-2019 3.0 1.0 9
9 04-11-2019 3.0 2.0 10
10 11-11-2019 3.0 2.0 11
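If the weekly columns are known by name up front, the same idea can be written without the isna-based mask. A small sketch, starting from the original frame and assuming col1 and col2 are the weekly columns:
weekly_cols = ['col1', 'col2']  # assumption: these hold the weekly values
df[weekly_cols] = df[weekly_cols].ffill(limit=7).div(7)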

Fill NaN with mean of a group for each column [duplicate]

This question already has answers here:
Pandas: filling missing values by mean in each group
(12 answers)
Closed last year.
I know that the fillna() method can be used to fill NaN values in a whole dataframe.
df.fillna(df.mean()) # fill with mean of column.
How can I limit the mean calculation to the group (and the column) where the NaN is?
Example:
import pandas as pd
import numpy as np
df = pd.DataFrame({
    'a': pd.Series([1,1,1,2,2,2]),
    'b': pd.Series([1,2,np.nan,1,np.nan,4])
})
print(df)
Input
a b
0 1 1
1 1 2
2 1 NaN
3 2 1
4 2 NaN
5 2 4
Output (after groupby('a') and replacing NaN with the group mean)
a b
0 1 1.0
1 1 2.0
2 1 1.5
3 2 1.0
4 2 2.5
5 2 4.0
IIUC then you can call fillna with the result of groupby on 'a' and transform on 'b':
In [44]:
df['b'] = df['b'].fillna(df.groupby('a')['b'].transform('mean'))
df
Out[44]:
a b
0 1 1.0
1 1 2.0
2 1 1.5
3 2 1.0
4 2 2.5
5 2 4.0
If you have NaN values in multiple columns, then I think the following should work:
In [47]:
df.fillna(df.groupby('a').transform('mean'))
Out[47]:
a b
0 1 1.0
1 1 2.0
2 1 1.5
3 2 1.0
4 2 2.5
5 2 4.0
EDIT
In [49]:
df = pd.DataFrame({
'a': pd.Series([1,1,1,2,2,2]),
'b': pd.Series([1,2,np.NaN,1,np.NaN,4]),
'c': pd.Series([1,np.NaN,np.NaN,1,np.NaN,4]),
'd': pd.Series([np.NaN,np.NaN,np.NaN,1,np.NaN,4])
})
df
Out[49]:
a b c d
0 1 1 1 NaN
1 1 2 NaN NaN
2 1 NaN NaN NaN
3 2 1 1 1
4 2 NaN NaN NaN
5 2 4 4 4
In [50]:
df.fillna(df.groupby('a').transform('mean'))
Out[50]:
a b c d
0 1 1.0 1.0 NaN
1 1 2.0 1.0 NaN
2 1 1.5 1.0 NaN
3 2 1.0 1.0 1.0
4 2 2.5 2.5 2.5
5 2 4.0 4.0 4.0
You get all NaN for 'd' because all of its values are NaN in group 1.
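If those leftover NaNs are unwanted, one possible follow-up (an addition, not part of the original answer) is to chain a second fillna with the overall column means:
df = df.fillna(df.groupby('a').transform('mean')).fillna(df.mean(numeric_only=True))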
We first compute the group means, ignoring the missing values:
group_means = df.groupby('a')['b'].agg(lambda v: np.nanmean(v))
Next, we use groupby again, this time fetching the corresponding values:
df_new = df.groupby('a').apply(lambda t: t.fillna(group_means.loc[t['a'].iloc[0]]))

Missing data, insert rows in Pandas and fill with NAN

I'm new to Python and Pandas so there might be a simple solution which I don't see.
I have a number of discontinuous datasets which look like this:
ind A B C
0 0.0 1 3
1 0.5 4 2
2 1.0 6 1
3 3.5 2 0
4 4.0 4 5
5 4.5 3 3
I now look for a solution to get the following:
ind A B C
0 0.0 1 3
1 0.5 4 2
2 1.0 6 1
3 1.5 NAN NAN
4 2.0 NAN NAN
5 2.5 NAN NAN
6 3.0 NAN NAN
7 3.5 2 0
8 4.0 4 5
9 4.5 3 3
The problem is that the gap in A varies from dataset to dataset in position and length...
set_index and reset_index are your friends.
df = pd.DataFrame({"A":[0,0.5,1.0,3.5,4.0,4.5], "B":[1,4,6,2,4,3], "C":[3,2,1,0,5,3]})
First move column A to the index:
In [64]: df.set_index("A")
Out[64]:
B C
A
0.0 1 3
0.5 4 2
1.0 6 1
3.5 2 0
4.0 4 5
4.5 3 3
Then reindex with a new index; here the missing data is filled in with NaNs. We use the Index object since we can name it; the name will be used in the next step.
In [66]: new_index = pd.Index(np.arange(0, 5, 0.5), name="A")
In [67]: df.set_index("A").reindex(new_index)
Out[67]:
B C
0.0 1 3
0.5 4 2
1.0 6 1
1.5 NaN NaN
2.0 NaN NaN
2.5 NaN NaN
3.0 NaN NaN
3.5 2 0
4.0 4 5
4.5 3 3
Finally move the index back to the columns with reset_index. Since we named the index, it all works magically:
In [69]: df.set_index("A").reindex(new_index).reset_index()
Out[69]:
A B C
0 0.0 1 3
1 0.5 4 2
2 1.0 6 1
3 1.5 NaN NaN
4 2.0 NaN NaN
5 2.5 NaN NaN
6 3.0 NaN NaN
7 3.5 2 0
8 4.0 4 5
9 4.5 3 3
Using the answer by EdChum below, I created the following function:
def fill_missing_range(df, field, range_from, range_to, range_step=1, fill_with=0):
    return df\
        .merge(how='right', on=field,
               right=pd.DataFrame({field: np.arange(range_from, range_to, range_step)}))\
        .sort_values(by=field).reset_index().fillna(fill_with).drop(['index'], axis=1)
Example usage:
fill_missing_range(df, 'A', 0.0, 5.0, 0.5, np.nan)
Note the end value 5.0: np.arange excludes the stop value, so pass one step past the last A value you want to keep.
In this case I am overwriting your A column with a newly generated dataframe and merging this with your original df; I then re-sort it:
In [177]:
df.merge(how='right', on='A', right = pd.DataFrame({'A':np.arange(df.iloc[0]['A'], df.iloc[-1]['A'] + 0.5, 0.5)})).sort_values(by='A').reset_index().drop(['index'], axis=1)
Out[177]:
A B C
0 0.0 1 3
1 0.5 4 2
2 1.0 6 1
3 1.5 NaN NaN
4 2.0 NaN NaN
5 2.5 NaN NaN
6 3.0 NaN NaN
7 3.5 2 0
8 4.0 4 5
9 4.5 3 3
So in the general case you can adjust the arange function, which takes start and end values; note that I added 0.5 to the end because np.arange is half-open (the stop value is excluded), and pass a step value.
A more general method could be like this:
In [197]:
df = df.set_index(keys='A', drop=False).reindex(np.arange(df.iloc[0]['A'], df.iloc[-1]['A'] + 0.5, 0.5))
df.reset_index(inplace=True)
# the original A column is all NaN for the inserted rows, so drop it and keep the 'index' column (the new grid) instead
df.drop(['A'], axis=1, inplace=True)
df.reset_index().drop(['level_0'], axis=1)
Out[197]:
index B C
0 0.0 1 3
1 0.5 4 2
2 1.0 6 1
3 1.5 NaN NaN
4 2.0 NaN NaN
5 2.5 NaN NaN
6 3.0 NaN NaN
7 3.5 2 0
8 4.0 4 5
9 4.5 3 3
Here we set the index to column A but don't drop it and then reindex the df using the arange function.
This question was asked a long time ago, but I have a simple solution that's worth mentioning. You can simply assign NumPy's NaN to a cell with .loc. For instance:
import numpy as np
df.loc[i, j] = np.nan
will do the trick, where i is the row label and j the column name.
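As a small illustration (a sketch, assuming the A values are used as the index, as in the accepted answer), a whole missing row can be inserted the same way:
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": [0.0, 0.5, 1.0, 3.5], "B": [1, 4, 6, 2], "C": [3, 2, 1, 0]}).set_index("A")
df.loc[1.5] = np.nan  # new row labelled 1.5, all values NaN
df = df.sort_index()
print(df)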
