I want to mask out the values in a Pandas DataFrame where the index is the same as the column name. For example:
import pandas as pd
import numpy as np
a = pd.DataFrame(np.arange(12).reshape((4, 3)),
index=["a", "b", "c", "d"],
columns=["a", "b", "c"])
a b c
a 0 1 2
b 3 4 5
c 6 7 8
d 9 10 11
After masking:
a b c
a NaN 1 2
b 3 NaN 5
c 6 7 NaN
d 9 10 11
Seems simple enough but I'm not sure how to do it in a Pythonic way, without iteration.
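(For contrast, the explicit loop being avoided might look like the sketch below; the frame is cast to float first so NaN can be assigned into the integer columns.)
a_loop = a.astype(float)  # NaN is a float, so upcast the int columns first
for col in a_loop.columns:
    if col in a_loop.index:
        a_loop.loc[col, col] = np.nan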
Try using pd.DataFrame.apply to apply a condition to each DataFrame column, comparing the column's name against the index. The result of the apply is a boolean DataFrame, which you can pass to pd.DataFrame.mask:
a.mask(a.apply(lambda x: x.name == x.index))
Output:
a b c
a NaN 1.0 2.0
b 3.0 NaN 5.0
c 6.0 7.0 NaN
d 9.0 10.0 11.0
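For reference, the intermediate boolean frame produced by the apply is True exactly where the index label equals the column name:
a.apply(lambda x: x.name == x.index)
       a      b      c
a   True  False  False
b  False   True  False
c  False  False   True
d  False  False  False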
Also, inspired by @QuangHoang, you can use np.equal.outer:
a.mask(np.equal.outer(a.index, a.columns))
Output:
a b c
a NaN 1.0 2.0
b 3.0 NaN 5.0
c 6.0 7.0 NaN
d 9.0 10.0 11.0
You can use broadcasting as well:
a.mask(a.index[:,None] == a.columns[None,:])
Or, since an Index no longer supports multi-dimensional indexing in newer pandas versions, the safer variant uses the underlying numpy arrays:
a.mask(a.index.values[:,None] == a.columns.values[None,:])
Output:
a b c
a NaN 1.0 2.0
b 3.0 NaN 5.0
c 6.0 7.0 NaN
d 9.0 10.0 11.0
Using DataFrame.stack and index.get_level_values:
st = a.stack()
m = st.index.get_level_values(0) == st.index.get_level_values(1)
a = st.mask(m).unstack()
a b c
a NaN 1.0 2.0
b 3.0 NaN 5.0
c 6.0 7.0 NaN
d 9.0 10.0 11.0
Related
Consider the pd.DataFrame below:
temp = pd.DataFrame({'label_0':[1,1,1,2,2,2],'label_1':['a','b','c',np.nan,'c','b'], 'values':[0,2,4,np.nan,8,5]})
print(temp)
label_0 label_1 values
0 1 a 0.0
1 1 b 2.0
2 1 c 4.0
3 2 NaN NaN
4 2 c 8.0
5 2 b 5.0
my desired output is
label_1 1 2
0 a 0.0 NaN
1 b 2.0 5.0
2 c 4.0 8.0
3 NaN NaN NaN
I have tried pd.pivot and wrangling around with pd.groupby but cannot get to the desired output due to duplicate entries. Any help is most appreciated.
d = {}
# iterate over the rows as (label_0, label_1, values) triples
for _0, _1, v in zip(*map(temp.get, temp)):
    # nest: outer key is label_1, inner key is label_0
    d.setdefault(_1, {})[_0] = v
pd.DataFrame.from_dict(d, orient='index')
1 2
a 0.0 NaN
b 2.0 5.0
c 4.0 8.0
NaN NaN NaN
OR
pd.DataFrame.from_dict(d, orient='index').rename_axis('label_1').reset_index()
label_1 1 2
0 a 0.0 NaN
1 b 2.0 5.0
2 c 4.0 8.0
3 NaN NaN NaN
Another way is to use set_index and unstack:
temp.set_index(['label_0','label_1'])['values'].unstack(0)
Output:
label_0 1 2
label_1
NaN NaN NaN
a 0.0 NaN
b 2.0 5.0
c 4.0 8.0
You can do fillna and then pivot (note that pivot's arguments became keyword-only in pandas 2.0, so the positional unpacking below needs an older version):
temp.fillna('NaN').pivot(*temp.columns).T
Out[251]:
label_0 1 2
label_1
NaN NaN NaN
a 0 NaN
b 2 5
c 4 8
Seems like a straightforward pivot works:
temp.pivot(columns='label_0', index='label_1', values='values')
Output:
label_0 1 2
label_1
NaN NaN NaN
a 0.0 NaN
b 2.0 5.0
c 4.0 8.0
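If you also want label_1 back as a regular column with a default integer index, as in the desired output, a small follow-up (a sketch; row order may differ, since pivot sorts the index) is:
out = temp.pivot(columns='label_0', index='label_1', values='values')
out = out.rename_axis(columns=None).reset_index()  # drop the 'label_0' columns name and restore a RangeIndex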
I know how to select all the numeric columns and fill their NaNs with the mean, but how do I fillna numeric columns with the mean and character columns with the mode?
Use select_dtypes to take the mean of the numeric columns, get the non-numeric ones with difference and take their mode, join the two Series together with append, and finally call fillna:
Notice (thanks @jpp): mode can return multiple values, so select the first row with iloc.
df = pd.DataFrame({
'A':list('ebcded'),
'B':[np.nan,np.nan,4,5,5,4],
'C':[7,np.nan,9,4,2,3],
'D':[1,3,5,np.nan,1,0],
'F':list('aaabbb')
})
df.loc[[0,1], 'F'] = np.nan
df.loc[[2,1], 'A'] = np.nan
print (df)
A B C D F
0 e NaN 7.0 1.0 NaN
1 NaN NaN NaN 3.0 NaN
2 NaN 4.0 9.0 5.0 a
3 d 5.0 4.0 NaN b
4 e 5.0 2.0 1.0 b
5 d 4.0 3.0 0.0 b
a = df.select_dtypes(np.number).mean()
b = df[df.columns.difference(a.index)].mode().iloc[0]
#alternative
#b = df.select_dtypes(object).mode().iloc[0]
print (df[df.columns.difference(a.index)].mode())
A F
0 d b
1 e NaN
df = df.fillna(a.append(b))
print (df)
A B C D F
0 e 4.5 7.0 1.0 b
1 d 4.5 5.0 3.0 b
2 d 4.0 9.0 5.0 a
3 d 5.0 4.0 2.0 b
4 e 5.0 2.0 1.0 b
5 d 4.0 3.0 0.0 b
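Note that Series.append was deprecated in pandas 1.4 and removed in pandas 2.0; on newer versions the two fill Series can be combined with pd.concat instead, a minimal sketch:
fill = pd.concat([a, b])  # means for the numeric columns, modes for the rest
df = df.fillna(fill)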
I have a dictionary of the form:
data = {'A':[(1,2),(3,4),(5,6),(7,8),(8,9)],
        'B':[(3,4),(4,5),(5,6),(6,7)],
        'C':[(10,11),(12,13)]}
I create a DataFrame by:
df = pd.DataFrame({k: pd.Series(v) for k, v in data.items()})
which in turn becomes;
A B C
(1,2) (3,4) (10,11)
(3,4) (4,5) (12,13)
(5,6) (5,6) NaN
(7,8) (6,7) NaN
(8,9) NaN NaN
Is there a way to go from the dataframe above to the one below:
A B C
one two one two one two
1 2 3 4 10 11
3 4 4 5 12 13
5 6 5 6 NaN NaN
7 8 6 7 NaN NaN
8 9 NaN NaN NaN NaN
You can use a list comprehension with the DataFrame constructor, converting each column to a list of tuples with tolist, and then concat:
cols = ['A','B','C']
L = [pd.DataFrame(df[x].dropna().tolist(), columns=['one','two']) for x in cols]  # dropna so the scalar NaN placeholders don't break the row-wise constructor
df = pd.concat(L, axis=1, keys=cols)
print (df)
A B C
one two one two one two
0 1 2 3.0 4.0 10.0 11.0
1 3 4 4.0 5.0 12.0 13.0
2 5 6 5.0 6.0 NaN NaN
3 7 8 6.0 7.0 NaN NaN
4 8 9 NaN NaN NaN NaN
EDIT:
A similar solution with a dict comprehension; integer values are converted to floats in any column that contains NaN, because NaN is itself a float.
data = {'A':[(1,2),(3,4),(5,6),(7,8),(8,9)],
'B':[(3,4),(4,5),(5,6),(6,7)],
'C':[(10,11),(12,13)]}
cols = ['A','B','C']
d = {k: pd.DataFrame(v, columns=['one','two']) for k,v in data.items()}
df = pd.concat(d, axis=1)
print (df)
A B C
one two one two one two
0 1 2 3.0 4.0 10.0 11.0
1 3 4 4.0 5.0 12.0 13.0
2 5 6 5.0 6.0 NaN NaN
3 7 8 6.0 7.0 NaN NaN
4 8 9 NaN NaN NaN NaN
EDIT:
To multiply all the 'one' columns by one of them, you can use slicers:
s = df[('A', 'one')]
print (s)
0 1
1 3
2 5
3 7
4 8
Name: (A, one), dtype: int64
df.loc(axis=1)[:, 'one'] = df.loc(axis=1)[:, 'one'].mul(s, axis=0)
print (df)
A B C
one two one two one two
0 1.0 2 3.0 4.0 10.0 11.0
1 9.0 4 12.0 5.0 36.0 13.0
2 25.0 6 25.0 6.0 NaN NaN
3 49.0 8 42.0 7.0 NaN NaN
4 64.0 9 NaN NaN NaN NaN
Another solution:
idx = pd.IndexSlice
df.loc[:, idx[:, 'one']] = df.loc[:, idx[:, 'one']].mul(s, axis=0)
print (df)
A B C
one two one two one two
0 1.0 2 3.0 4.0 10.0 11.0
1 9.0 4 12.0 5.0 36.0 13.0
2 25.0 6 25.0 6.0 NaN NaN
3 49.0 8 42.0 7.0 NaN NaN
4 64.0 9 NaN NaN NaN NaN
I have a number of similar dataframes where I would like to standardize the nans across all the dataframes. For instance, if a nan exists in df1.loc[0,'a'] then ALL other dataframes should be set to nan at the same index location.
I am aware that I could group the dataframes to create one big multiindexed dataframe but sometimes I find it easier to work with a group of dataframes of the same structure.
Here is an example:
import pandas as pd
import numpy as np
df1 = pd.DataFrame(np.reshape(np.arange(12), (4,3)), columns=['a', 'b', 'c'])
df2 = pd.DataFrame(np.reshape(np.arange(12), (4,3)), columns=['a', 'b', 'c'])
df3 = pd.DataFrame(np.reshape(np.arange(12), (4,3)), columns=['a', 'b', 'c'])
df1.loc[3,'a'] = np.nan
df2.loc[1,'b'] = np.nan
df3.loc[0,'c'] = np.nan
print(df1)
print(' ')
print(df2)
print(' ')
print(df3)
Output:
a b c
0 0.0 1 2
1 3.0 4 5
2 6.0 7 8
3 NaN 10 11
a b c
0 0 1.0 2
1 3 NaN 5
2 6 7.0 8
3 9 10.0 11
a b c
0 0 1 NaN
1 3 4 5.0
2 6 7 8.0
3 9 10 11.0
However, I would like df1, df2 and df3 to have nans in the same locations:
print(df1)
a b c
0 0.0 1.0 NaN
1 3.0 NaN 5.0
2 6.0 7.0 8.0
3 NaN 10.0 11.0
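(As an aside, the single multiindexed dataframe mentioned above could be built with pd.concat and keys, a minimal sketch:
big = pd.concat([df1, df2, df3], keys=['df1', 'df2', 'df3'])
but the solutions below keep the dataframes separate.)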
Using the answer provided by piRSquared, I was able to extend it for dataframes of different sizes. Here is the function:
def set_nans_over_every_df(df_list):
# Find unique index and column values
complete_index = sorted(set([idx for df in df_list for idx in df.index]))
complete_columns = sorted(set([idx for df in df_list for idx in df.columns]))
# Ensure that every df has the same indexes and columns
df_list = [df.reindex(index=complete_index, columns=complete_columns) for df in df_list]
# Find the nans in each df and set nans in every other df at the same location
mask = np.isnan(np.stack([df.values for df in df_list])).any(0)
df_list = [df.mask(mask) for df in df_list]
return df_list
And an example using different sized dataframes:
df1 = pd.DataFrame(np.reshape(np.arange(15), (5,3)), index=[0,1,2,3,4], columns=['a', 'b', 'c'])
df2 = pd.DataFrame(np.reshape(np.arange(12), (4,3)), index=[0,1,2,3], columns=['a', 'b', 'c'])
df3 = pd.DataFrame(np.reshape(np.arange(16), (4,4)), index=[0,1,2,3], columns=['a', 'b', 'c', 'd'])
df1.loc[3,'a'] = np.nan
df2.loc[1,'b'] = np.nan
df3.loc[0,'c'] = np.nan
df1, df2, df3 = set_nans_over_every_df([df1, df2, df3])
print(df1)
a b c d
0 0.0 1.0 NaN NaN
1 3.0 NaN 5.0 NaN
2 6.0 7.0 8.0 NaN
3 NaN 10.0 11.0 NaN
4 NaN NaN NaN NaN
I'd set up a mask in numpy, then use this mask with the pd.DataFrame.mask method (this assumes all the frames share the same shape, index, and columns, since np.stack requires equal shapes):
mask = np.isnan(np.stack([d.values for d in [df1, df2, df3]])).any(0)
print(df1.mask(mask))
a b c
0 0.0 1.0 NaN
1 3.0 NaN 5.0
2 6.0 7.0 8.0
3 NaN 10.0 11.0
print(df2.mask(mask))
a b c
0 0.0 1.0 NaN
1 3.0 NaN 5.0
2 6.0 7.0 8.0
3 NaN 10.0 11.0
print(df3.mask(mask))
a b c
0 0.0 1.0 NaN
1 3.0 NaN 5.0
2 6.0 7.0 8.0
3 NaN 10.0 11.0
You can create a mask and then apply it to all dataframes:
mask = df1.notnull() & df2.notnull() & df3.notnull()
print (mask)
a b c
0 True True False
1 True False True
2 True True True
3 False True True
You can also build the mask dynamically with reduce:
import functools
masks = [df1.notnull(),df2.notnull(),df3.notnull()]
mask = functools.reduce(lambda x,y: x & y, masks)
print (mask)
a b c
0 True True False
1 True False True
2 True True True
3 False True True
print (df1[mask])
a b c
0 0.0 1.0 NaN
1 3.0 NaN 5.0
2 6.0 7.0 8.0
3 NaN 10.0 11.0
print (df2[mask])
a b c
0 0.0 1.0 NaN
1 3.0 NaN 5.0
2 6.0 7.0 8.0
3 NaN 10.0 11.0
print (df3[mask])
a b c
0 0.0 1.0 NaN
1 3.0 NaN 5.0
2 6.0 7.0 8.0
3 NaN 10.0 11.0
Assuming that all your DataFrames have the same shape and the same indexes:
In [196]: df2[df1.isnull()] = df3[df1.isnull()] = np.nan
In [197]: df1[df3.isnull()] = df2[df3.isnull()] = np.nan
In [198]: df1[df2.isnull()] = df3[df2.isnull()] = np.nan
In [199]: df1
Out[199]:
a b c
0 0.0 1.0 NaN
1 3.0 NaN 5.0
2 6.0 7.0 8.0
3 NaN 10.0 11.0
In [200]: df2
Out[200]:
a b c
0 0.0 1.0 NaN
1 3.0 NaN 5.0
2 6.0 7.0 8.0
3 NaN 10.0 11.0
In [201]: df3
Out[201]:
a b c
0 0.0 1.0 NaN
1 3.0 NaN 5.0
2 6.0 7.0 8.0
3 NaN 10.0 11.0
One simple method is to add the DataFrames together and multiply the result by 0, then add this all-zero (or NaN) DataFrame to each of the others individually; since any sum involving NaN is NaN, the NaNs propagate to every frame. (This assumes numeric dtypes and aligned indexes.)
df_zero = (df1 + df2 + df3) * 0
df1 + df_zero
df2 + df_zero
df3 + df_zero
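For the example frames above, each sum reproduces the masked result shown earlier (note the integer columns become float):
print(df1 + df_zero)
     a     b     c
0  0.0   1.0   NaN
1  3.0   NaN   5.0
2  6.0   7.0   8.0
3  NaN  10.0  11.0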
I need to rid myself of all rows with a null value in column C. Here is the code:
infile="C:\****"
df=pd.read_csv(infile)
A B C D
1 1 NaN 3
2 3 7 NaN
4 5 NaN 8
5 NaN 4 9
NaN 1 2 NaN
There are two basic methods I have attempted.
method 1:
source: How to drop rows of Pandas DataFrame whose value in certain columns is NaN
df.dropna()
The result is an empty dataframe, which makes sense because there is a NaN value in every row.
df.dropna(subset=[3])
For this method I tried to play around with the subset value using both column index number and column name. The dataframe is still empty.
method 2:
source: Deleting DataFrame row in Pandas based on column value
df = df[df.C.notnull()]
Still results in an empty dataframe!
What am I doing wrong?
import numpy as np
import pandas as pd

df = pd.DataFrame([[1,1,np.nan,3],[2,3,7,np.nan],[4,5,np.nan,8],[5,np.nan,4,9],[np.nan,1,2,np.nan]], columns = ['A','B','C','D'])
df = df[df['C'].notnull()]
df
It's just proof that your method 2 works properly (at least with pandas 0.18.0):
In [100]: df
Out[100]:
A B C D
0 1.0 1.0 NaN 3.0
1 2.0 3.0 7.0 NaN
2 4.0 5.0 NaN 8.0
3 5.0 NaN 4.0 9.0
4 NaN 1.0 2.0 NaN
In [101]: df.dropna(subset=['C'])
Out[101]:
A B C D
1 2.0 3.0 7.0 NaN
3 5.0 NaN 4.0 9.0
4 NaN 1.0 2.0 NaN
In [102]: df[df.C.notnull()]
Out[102]:
A B C D
1 2.0 3.0 7.0 NaN
3 5.0 NaN 4.0 9.0
4 NaN 1.0 2.0 NaN
In [103]: df = df[df.C.notnull()]
In [104]: df
Out[104]:
A B C D
1 2.0 3.0 7.0 NaN
3 5.0 NaN 4.0 9.0
4 NaN 1.0 2.0 NaN
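If the frame still comes back empty on your actual data, one common cause (an assumption, since the CSV isn't shown) is that the missing entries are literal placeholder strings such as 'NULL' or 'n/a', which pandas does not treat as NaN by default; read_csv's na_values parameter can handle that:
df = pd.read_csv(infile, na_values=['NULL', 'null', 'n/a'])
df = df[df['C'].notnull()]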