I have a pandas DataFrame as below:
itm Date Amount
67 420 2012-09-30 00:00:00 65211
68 421 2012-09-09 00:00:00 29424
69 421 2012-09-16 00:00:00 29877
70 421 2012-09-23 00:00:00 30990
71 421 2012-09-30 00:00:00 61303
72 485 2012-09-09 00:00:00 71781
73 485 2012-09-16 00:00:00 NaN
74 485 2012-09-23 00:00:00 11072
75 485 2012-09-30 00:00:00 113702
76 489 2012-09-09 00:00:00 64731
77 489 2012-09-16 00:00:00 NaN
When I try to apply a function to the Amount column, I get the following error:
ValueError: cannot convert float NaN to integer
I have tried applying a function using math.isnan.
I have tried the pandas .replace() method.
I tried the .sparse data attribute from pandas 0.9.
I have also tried an if NaN == NaN comparison in a function.
I have also looked at this article How do I replace NA values with zeros in an R dataframe? whilst looking at some other articles.
All the methods I have tried either have not worked or do not recognise NaN.
Any Hints or solutions would be appreciated.
I believe DataFrame.fillna() will do this for you.
Link to Docs for a dataframe and for a Series.
Example:
In [7]: df
Out[7]:
0 1
0 NaN NaN
1 -0.494375 0.570994
2 NaN NaN
3 1.876360 -0.229738
4 NaN NaN
In [8]: df.fillna(0)
Out[8]:
0 1
0 0.000000 0.000000
1 -0.494375 0.570994
2 0.000000 0.000000
3 1.876360 -0.229738
4 0.000000 0.000000
To fill the NaNs in only one column, select just that column. In this case I'm using inplace=True to actually change the contents of df.
In [12]: df[1].fillna(0, inplace=True)
Out[12]:
0 0.000000
1 0.570994
2 0.000000
3 -0.229738
4 0.000000
Name: 1
In [13]: df
Out[13]:
0 1
0 NaN 0.000000
1 -0.494375 0.570994
2 NaN 0.000000
3 1.876360 -0.229738
4 NaN 0.000000
EDIT:
To avoid a SettingWithCopyWarning, use the built in column-specific functionality:
df.fillna({1:0}, inplace=True)
Slicing is not guaranteed to return a view rather than a copy, so you can instead do
df['column'] = df['column'].fillna(value)
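For example, a minimal self-contained sketch of that assignment pattern (the frame and column names here are invented for illustration):

```python
import numpy as np
import pandas as pd

# Toy frame; column names "A" and "B" are made up for the sketch.
df = pd.DataFrame({"A": [1.0, np.nan, 3.0], "B": [np.nan, 5.0, np.nan]})

# Assigning the filled Series back to the column avoids depending on
# whether the column selection returned a view or a copy.
df["B"] = df["B"].fillna(0)
```

Column "B" ends up as [0.0, 5.0, 0.0] while the NaN in "A" is left alone.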
You could use replace to change NaN to 0:
import pandas as pd
import numpy as np
# for column
df['column'] = df['column'].replace(np.nan, 0)
# for whole dataframe
df = df.replace(np.nan, 0)
# inplace
df.replace(np.nan, 0, inplace=True)
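As a quick check, a runnable sketch with a made-up two-column frame; note that without inplace=True both calls return new objects and leave df untouched:

```python
import numpy as np
import pandas as pd

# Illustrative frame with one NaN in each column.
df = pd.DataFrame({"x": [1.0, np.nan], "y": [np.nan, 2.0]})

col_filled = df["x"].replace(np.nan, 0)  # one column, returned as a new Series
all_filled = df.replace(np.nan, 0)       # whole frame, returned as a new DataFrame
```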
The below code worked for me.
import pandas
df = pandas.read_csv('somefile.txt')
df = df.fillna(0)
I just wanted to provide a bit of an update/special case, since it looks like people still come here. If you're using a multi-index or otherwise using an index-slicer, the inplace=True option may not be enough to update the slice you've chosen. For example, in a 2x2-level multi-index this will not change any values (as of pandas 0.15):
idx = pd.IndexSlice
df.loc[idx[:,mask_1],idx[mask_2,:]].fillna(value=0,inplace=True)
The "problem" is that the chaining breaks the fillna ability to update the original dataframe. I put "problem" in quotes because there are good reasons for the design decisions that led to not interpreting through these chains in certain situations. Also, this is a complex example (though I really ran into it), but the same may apply to fewer levels of indexes depending on how you slice.
The solution is DataFrame.update:
df.update(df.loc[idx[:,mask_1],idx[mask_2,:]].fillna(value=0))
It's one line, reads reasonably well (sort of) and eliminates any unnecessary messing with intermediate variables or loops while allowing you to apply fillna to any multi-level slice you like!
If anybody can find places this doesn't work please post in the comments, I've been messing with it and looking at the source and it seems to solve at least my multi-index slice problems.
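If it helps, here is a minimal self-contained sketch of the same update pattern on a much smaller two-level row index (the labels and the slice here are invented, simpler than the 2x2-level case above):

```python
import numpy as np
import pandas as pd

# A small two-level row index standing in for the real masks.
mi = pd.MultiIndex.from_product([["a", "b"], [1, 2]])
df = pd.DataFrame({"v": [1.0, np.nan, np.nan, 4.0]}, index=mi)

# fillna on the slice returns a new object; update writes it back
# into the original frame, aligned on the index.
sl = pd.IndexSlice
df.update(df.loc[sl[:, 2], :].fillna(0))
```

Only the NaN inside the chosen slice (row ("a", 2)) is filled; the NaN at ("b", 1), which is outside the slice, survives.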
You can also pass a dictionary to fill the NaN values of specific columns in the DataFrame, rather than filling the whole DataFrame with a single value.
import pandas as pd
df = pd.read_excel('example.xlsx')
df.fillna( {
'column1': 'Write your values here',
'column2': 'Write your values here',
'column3': 'Write your values here',
'column4': 'Write your values here',
.
.
.
'column-n': 'Write your values here'} , inplace=True)
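A runnable sketch of the same idea with made-up column names and fill values:

```python
import numpy as np
import pandas as pd

# Toy columns standing in for the spreadsheet columns above.
df = pd.DataFrame({"a": [np.nan, 1.0], "b": ["x", np.nan]})

# Each key fills only its own column; columns not named are untouched.
df.fillna({"a": 0, "b": "missing"}, inplace=True)
```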
An easy way to fill missing values:
Filling string columns (when a string column has NaN values):
df['string column name'].fillna(df['string column name'].mode().values[0], inplace = True)
Filling numeric columns (when a numeric column has NaN values):
df['numeric column name'].fillna(df['numeric column name'].mean(), inplace = True)
Filling NaN with zero:
df['column name'].fillna(0, inplace = True)
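Putting those together, a small self-contained example (column names and data are invented; assignment is used instead of the chained inplace=True pattern):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "city": ["NY", "NY", np.nan, "LA"],     # string column
    "price": [10.0, np.nan, 30.0, np.nan],  # numeric column
})

# Most frequent value for the string column, mean for the numeric one.
df["city"] = df["city"].fillna(df["city"].mode().values[0])
df["price"] = df["price"].fillna(df["price"].mean())
```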
To replace NA values in pandas:
df['column_name'].fillna(value_to_be_replaced, inplace=True)
If inplace=False, instead of updating the DataFrame in place, it returns a new object with the modified values.
Considering that the Amount column in the table above should be of integer type, the following would be a solution:
df['Amount'] = df.Amount.fillna(0).astype(int)
Similarly, you can cast it to other data types like float, str and so on.
In particular, I would check the dtype before comparing values of the same column.
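A runnable sketch mirroring the question's Amount column, which became float64 only because of the missing values:

```python
import numpy as np
import pandas as pd

# Mimics the Amount column from the question's table.
df = pd.DataFrame({"Amount": [65211.0, np.nan, 29877.0]})

# Fill the holes first, then cast back to integer.
df["Amount"] = df["Amount"].fillna(0).astype(int)
```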
There have been many contributions already, but since I'm new here, I will still give input.
There are two approaches to replace NaN values with zeros in Pandas DataFrame:
fillna(): function fills NA/NaN values using the specified method.
replace(): a simple method used to replace values; it accepts a string, regex, list, or dictionary.
Example:
#NaN with zero on all columns
df2 = df.fillna(0)
# inplace=True modifies df directly instead of returning a new copy
df.fillna(0, inplace = True)
# multiple-columns approach
df[["Student", "ID"]] = df[["Student", "ID"]].fillna(0)
Finally, the replace() method:
df["Student"] = df["Student"].replace(np.nan, 0)
Replace all NaN with 0:
df = df.fillna(0)
To replace NaN in different columns with different values:
replacement= {'column_A': 0, 'column_B': -999, 'column_C': -99999}
df.fillna(value=replacement)
This works for me, but no one's mentioned it. Could there be something wrong with it?
df.loc[df['column_name'].isnull(), 'column_name'] = 0
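As a quick sanity check, a runnable version with made-up data; boolean-mask assignment through .loc writes into df directly, so there is no view-versus-copy ambiguity here:

```python
import numpy as np
import pandas as pd

# Invented single-column frame for the sketch.
df = pd.DataFrame({"column_name": [1.0, np.nan, 3.0]})

# Select the rows where the column is null, assign 0 in place.
df.loc[df["column_name"].isnull(), "column_name"] = 0
```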
There are primarily two options available. For imputing or filling the missing NaN / np.nan values with a numerical replacement, across one or more columns:
df['Amount'].fillna(value=0) is sufficient.
From the Documentation:
value : scalar, dict, Series, or DataFrame
Value to use to fill holes (e.g. 0), alternately a
dict/Series/DataFrame of values specifying which value to use for
each index (for a Series) or column (for a DataFrame). (values not
in the dict/Series/DataFrame will not be filled). This value cannot
be a list.
This means a list is not permissible as the fill value; only scalars, dicts, Series, or DataFrames are.
For more specialized imputations use SimpleImputer():
from sklearn.impute import SimpleImputer
si = SimpleImputer(strategy='constant', missing_values=np.nan, fill_value='Replacement_Value')
df[['Col-1', 'Col-2']] = si.fit_transform(X=df[['C-1', 'C-2']])
If you were to convert it to a pandas dataframe, you can also accomplish this by using fillna.
import numpy as np
df=np.array([[1,2,3, np.nan]])
import pandas as pd
df=pd.DataFrame(df)
df.fillna(0)
Printing df shows the original is unchanged, since fillna returns a copy:
>>> df
     0    1    2    3
0  1.0  2.0  3.0  NaN
>>> df.fillna(0)
0 1 2 3
0 1.0 2.0 3.0 0.0
If you want to fill NaN in a specific column for particular rows, you can use loc:
d1 = {"Col1" : ['A', 'B', 'C'],
      "fruits": ['Avocado', 'Banana', np.nan]}
d1= pd.DataFrame(d1)
output:
Col1 fruits
0 A Avocado
1 B Banana
2 C NaN
d1.loc[ d1.Col1=='C', 'fruits' ] = 'Carrot'
output:
Col1 fruits
0 A Avocado
1 B Banana
2 C Carrot
I think it's also worth mentioning and explaining
the parameter configuration of fillna(),
like method, axis, limit, etc.
From the documentation we have:
Series.fillna(value=None, method=None, axis=None,
inplace=False, limit=None, downcast=None)
Fill NA/NaN values using the specified method.
Parameters

value [scalar, dict, Series, or DataFrame] Value to use to
fill holes (e.g. 0), alternately a dict/Series/DataFrame
of values specifying which value to use for each index
(for a Series) or column (for a DataFrame). Values not in
the dict/Series/DataFrame will not be filled. This
value cannot be a list.

method [{'backfill', 'bfill', 'pad', 'ffill', None},
default None] Method to use for filling holes in
reindexed Series. pad / ffill: propagate last valid
observation forward to next valid. backfill / bfill:
use next valid observation to fill gap.

axis [{0 or 'index'}] Axis along which to fill missing values.

inplace [bool, default False] If True, fill
in-place. Note: this will modify any other views
on this object (e.g., a no-copy slice for a
column in a DataFrame).

limit [int, default None] If method is specified,
this is the maximum number of consecutive NaN
values to forward/backward fill. In other words,
if there is a gap with more than this number of
consecutive NaNs, it will only be partially filled.
If method is not specified, this is the maximum
number of entries along the entire axis where NaNs
will be filled. Must be greater than 0 if not None.

downcast [dict, default is None] A dict of item->dtype
of what to downcast if possible, or the string 'infer'
which will try to downcast to an appropriate equal
type (e.g. float64 to int64 if possible).
OK, let's start with the method= parameter. It offers
forward fill (ffill) and backward fill (bfill).
ffill copies the previous non-missing value forward.
e.g. :
import pandas as pd
import numpy as np
inp = [{'c1': 10, 'c2': np.nan, 'c3': 200},
       {'c1': np.nan, 'c2': 110, 'c3': 210},
       {'c1': 12, 'c2': np.nan, 'c3': 220},
       {'c1': 12, 'c2': 130, 'c3': np.nan},
       {'c1': 12, 'c2': np.nan, 'c3': 240}]
df = pd.DataFrame(inp)
c1 c2 c3
0 10.0 NaN 200.0
1 NaN 110.0 210.0
2 12.0 NaN 220.0
3 12.0 130.0 NaN
4 12.0 NaN 240.0
Forward fill:
df.fillna(method="ffill")
c1 c2 c3
0 10.0 NaN 200.0
1 10.0 110.0 210.0
2 12.0 110.0 220.0
3 12.0 130.0 220.0
4 12.0 130.0 240.0
Backward fill:
df.fillna(method="bfill")
c1 c2 c3
0 10.0 110.0 200.0
1 12.0 110.0 210.0
2 12.0 130.0 220.0
3 12.0 130.0 240.0
4 12.0 NaN 240.0
The axis parameter helps us choose the direction of the fill:
Fill directions:
ffill:
Axis = 1
Method = 'ffill'
----------->
direction
df.fillna(method="ffill", axis=1)
c1 c2 c3
0 10.0 10.0 200.0
1 NaN 110.0 210.0
2 12.0 12.0 220.0
3 12.0 130.0 130.0
4 12.0 12.0 240.0
Axis = 0 # by default
Method = 'ffill'
|
| # direction
|
V
e.g: # This is the ffill default
df.fillna(method="ffill", axis=0)
c1 c2 c3
0 10.0 NaN 200.0
1 10.0 110.0 210.0
2 12.0 110.0 220.0
3 12.0 130.0 220.0
4 12.0 130.0 240.0
bfill:
axis= 0
method = 'bfill'
^
|
|
|
df.fillna(method="bfill", axis=0)
c1 c2 c3
0 10.0 110.0 200.0
1 12.0 110.0 210.0
2 12.0 130.0 220.0
3 12.0 130.0 240.0
4 12.0 NaN 240.0
axis = 1
method = 'bfill'
<-----------
df.fillna(method="bfill", axis=1)
c1 c2 c3
0 10.0 200.0 200.0
1 110.0 110.0 210.0
2 12.0 220.0 220.0
3 12.0 130.0 NaN
4 12.0 240.0 240.0
# aliases:
# 'ffill' == 'pad'
# 'bfill' == 'backfill'
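In newer pandas versions the same fills are also available as the dedicated DataFrame.ffill() and DataFrame.bfill() methods; a quick sketch on the frame from above:

```python
import numpy as np
import pandas as pd

inp = [{'c1': 10, 'c2': np.nan, 'c3': 200},
       {'c1': np.nan, 'c2': 110, 'c3': 210},
       {'c1': 12, 'c2': np.nan, 'c3': 220},
       {'c1': 12, 'c2': 130, 'c3': np.nan},
       {'c1': 12, 'c2': np.nan, 'c3': 240}]
df = pd.DataFrame(inp)

forward = df.ffill()   # same result as the ffill / 'pad' method
backward = df.bfill()  # same result as the bfill / 'backfill' method
```

A leading NaN survives a forward fill and a trailing NaN survives a backward fill, exactly as in the tables above.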
limit parameter:
df
c1 c2 c3
0 10.0 NaN 200.0
1 NaN 110.0 210.0
2 12.0 NaN 220.0
3 12.0 130.0 NaN
4 12.0 NaN 240.0
Only replace the first NaN element across columns:
df.fillna(value = 'Unavailable', limit=1)
c1 c2 c3
0 10.0 Unavailable 200.0
1 Unavailable 110.0 210.0
2 12.0 NaN 220.0
3 12.0 130.0 Unavailable
4 12.0 NaN 240.0
df.fillna(value = 'Unavailable', limit=2)
c1 c2 c3
0 10.0 Unavailable 200.0
1 Unavailable 110.0 210.0
2 12.0 Unavailable 220.0
3 12.0 130.0 Unavailable
4 12.0 NaN 240.0
downcast parameter:
df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5 entries, 0 to 4
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 c1 4 non-null float64
1 c2 2 non-null float64
2 c3 4 non-null float64
dtypes: float64(3)
memory usage: 248.0 bytes
df.fillna(method="ffill",downcast='infer').info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5 entries, 0 to 4
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 c1 5 non-null int64
1 c2 4 non-null float64
2 c3 5 non-null int64
dtypes: float64(1), int64(2)
memory usage: 248.0 bytes
I want to end up with 81 rows x 1 column.
How do I correct this distortion?
Use fillna. Basically, use the values in the second column to fill holes in the first column:
df['first_column'].fillna(df['second_column'])
For example, if you have DataFrame df:
a b
0 1.0 NaN
1 2.0 NaN
2 NaN 100.0
then
df['a'] = df['a'].fillna(df['b'])
df = df.drop(columns=['b'])
Output:
a
0 1.0
1 2.0
2 100.0
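The same steps as a self-contained, runnable sketch using the frame shown above:

```python
import numpy as np
import pandas as pd

# The example frame from the answer.
df = pd.DataFrame({"a": [1.0, 2.0, np.nan], "b": [np.nan, np.nan, 100.0]})

# Holes in "a" are taken from "b" row by row, then "b" is dropped.
df["a"] = df["a"].fillna(df["b"])
df = df.drop(columns=["b"])
```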
The data frame looks like this. I have tried pivot, stack, and unstack. Is there any method to achieve the output?
key attribute text_value numeric_value date_value
0 1 order NaN NaN 10/02/19
1 1 size NaN 43.0 NaN
2 1 weight NaN 22.0 NaN
3 1 price NaN 33.0 NaN
4 1 segment product NaN NaN
5 2 order NaN NaN 11/02/19
6 2 size NaN 34.0 NaN
7 2 weight NaN 32.0 NaN
8 2 price NaN 89.0 NaN
9 2 segment customer NaN NaN
I need the following output
key order size weight price segment
1 10/2/2019 43.0 22.0 33.0 product
2 11/2/2019 34.0 32.0 89.0 customer
Thanks in advance
I believe you don't want to change the dtypes in the output data, so one possible solution is to process each column separately with DataFrame.dropna and DataFrame.pivot and then join them together with concat:
df['date_value'] = pd.to_datetime(df['date_value'])
df1 = df.dropna(subset=['text_value']).pivot('key','attribute','text_value')
df2 = df.dropna(subset=['numeric_value']).pivot('key','attribute','numeric_value')
df3 = df.dropna(subset=['date_value']).pivot('key','attribute','date_value')
df = pd.concat([df1, df2, df3], axis=1).reindex(df['attribute'].unique(), axis=1)
print (df)
attribute order size weight price segment
key
1 2019-10-02 43.0 22.0 33.0 product
2 2019-11-02 34.0 32.0 89.0 customer
print (df.dtypes)
order datetime64[ns]
size float64
weight float64
price float64
segment object
dtype: object
Old answer - all values are cast to a common object type:
df['date_value'] = pd.to_datetime(df['date_value'])
df['text_value'] = df['text_value'].fillna(df['numeric_value']).fillna(df['date_value'])
df = df.pivot('key','attribute','text_value')
print (df)
attribute order price segment size weight
key
1 1569974400000000000 33 product 43 22
2 1572652800000000000 89 customer 34 32
print (df.dtypes)
order object
price object
segment object
size object
weight object
dtype: object
This is the solution I figured out:
attr_dict = {'order': 'date_value', 'size': 'numeric_value',
             'weight': 'numeric_value', 'price': 'numeric_value',
             'segment': 'text_value'}
output_table = pd.DataFrame()
for attr in attr_dict.keys():
    temp = input_table[input_table['attribute'] == attr][['key', attr_dict[attr]]]
    temp.rename(columns={attr_dict[attr]: attr}, inplace=True)
    output_table[attr] = list(temp.values[:, 1])
output_table
I have a DataFrame with an Ids column and several columns with data, like the column "value" in this example.
For this DataFrame I want to move all the values that correspond to the same id into new columns in the same row, as shown below:
I guess there is an opposite function to "melt" that allows this, but I'm not getting how to pivot this DF.
The dicts for the input and output DFs are:
d = {"id":[1,1,1,2,2,3,3,4,5],"value":[12,13,1,22,21,23,53,64,9]}
d2 = {"id":[1,2,3,4,5],"value1":[12,22,23,64,9],"value2":[1,21,53,"","",],"value3":[1,"","","",""]}
Create a MultiIndex with cumcount, reshape with unstack, and change the column names with add_prefix:
df = (df.set_index(['id',df.groupby('id').cumcount()])['value']
.unstack()
.add_prefix('value')
.reset_index())
print (df)
id value0 value1 value2
0 1 12.0 13.0 1.0
1 2 22.0 21.0 NaN
2 3 23.0 53.0 NaN
3 4 64.0 NaN NaN
4 5 9.0 NaN NaN
Missing values can be replaced with fillna, but this mixes numeric data with strings, so some functions may fail:
df = (df.set_index(['id',df.groupby('id').cumcount()])['value']
.unstack()
.add_prefix('value')
.reset_index()
.fillna(''))
print (df)
id value0 value1 value2
0 1 12.0 13 1
1 2 22.0 21
2 3 23.0 53
3 4 64.0
4 5 9.0
You can GroupBy to a list, then expand the series of lists:
df = pd.DataFrame(d) # create input dataframe
res = df.groupby('id')['value'].apply(list).reset_index() # groupby to list
res = res.join(pd.DataFrame(res.pop('value').values.tolist())) # expand lists to columns
print(res)
id 0 1 2
0 1 12 13.0 1.0
1 2 22 21.0 NaN
2 3 23 53.0 NaN
3 4 64 NaN NaN
4 5 9 NaN NaN
In general, such operations will be expensive as the number of columns is arbitrary. Pandas / NumPy solutions work best when you can pre-allocate memory, which isn't possible here.
I have a very simple dataframe, made of only one column and the indexes. This is a very long column (52 rows) and I would like to group the items in groups of, let's say, 5 and put indexes and values side by side. Something like going from this
value
index
1 123
2 345
...
...
...
...
...
...
52 567
to this
value value ....
index index ....
1 123 6 ###
2 345 7 ###
3 567 8 ###
4 678 9 ###
5 789 10 ###
All for visual clarity, so that then I can simply do df.to_latex() without having to arrange things in latex. Is that possible?
First create a new column from the index with reset_index, then build a MultiIndex from the modulo and floor division by 5 and reshape with unstack, changing the order of the columns with sort_index. Last, convert the MultiIndex to flat column names with map:
df = pd.DataFrame({
'value': list(range(10, 19))
})
df = (df.reset_index()
        .set_index([df.index % 5, df.index // 5])
        .unstack().sort_index(axis=1, level=1))
df.columns = df.columns.map('{0[0]}_{0[1]}'.format)
print (df)
index_0 value_0 index_1 value_1
0 0.0 10.0 5.0 15.0
1 1.0 11.0 6.0 16.0
2 2.0 12.0 7.0 17.0
3 3.0 13.0 8.0 18.0
4 4.0 14.0 NaN NaN
I have a DataFrame with a mixture of string and float columns. The float columns all hold whole numbers and were only changed to floats because there were missing values. I want to fill all the NaNs in the numeric columns with zero while leaving the NaNs in the string columns alone. Here is what I have currently.
df.select_dtypes(include=['int', 'float']).fillna(0, inplace=True)
This doesn't work, and I think it is because .select_dtypes() returns a view of the DataFrame, so the .fillna() doesn't take effect. Is there a method similar to this to fill the NaNs in only the float columns?
Use either DF.combine_first (does not act inplace):
df.combine_first(df.select_dtypes(include=[np.number]).fillna(0))
or DF.update (modifies inplace):
df.update(df.select_dtypes(include=[np.number]).fillna(0))
The reason why fillna fails is that DF.select_dtypes returns a completely new dataframe which, although it forms a subset of the original DF, is not really a part of it. It behaves as a completely new entity in itself, so any modifications done to it will not affect the DF it was derived from.
Note that np.number selects all numeric types.
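A minimal runnable sketch of the update variant (the frame here is made up): only the numeric subset is filled and written back in place, and the NaN in the string column survives.

```python
import numpy as np
import pandas as pd

# Mixed frame: "A" is an object column, "B" is float.
df = pd.DataFrame({"A": [np.nan, "s"], "B": [np.nan, 3.0]})

# Fill only the numeric subset, then write it back into df.
df.update(df.select_dtypes(include=[np.number]).fillna(0))
```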
Your pandas.DataFrame.select_dtypes approach is good; you've just got to cross the finish line:
>>> df = pd.DataFrame({'A': [np.nan, 'string', 'string', 'more string'], 'B': [np.nan, np.nan, 3, 4], 'C': [4, np.nan, 5, 6]})
>>> df
A B C
0 NaN NaN 4.0
1 string NaN NaN
2 string 3.0 5.0
3 more string 4.0 6.0
Don't try to perform the in-place fillna here (there's a time and place for inplace=True, but here is not one). What select_dtypes returns is a new DataFrame rather than a piece of the original, so an in-place fill on it won't propagate back. Create a new dataframe called filled and join the filled (or "fixed") columns back with your original data:
>>> filled = df.select_dtypes(include=['int', 'float']).fillna(0)
>>> filled
B C
0 0.0 4.0
1 0.0 0.0
2 3.0 5.0
3 4.0 6.0
>>> df = df.join(filled, rsuffix='_filled')
>>> df
A B C B_filled C_filled
0 NaN NaN 4.0 0.0 4.0
1 string NaN NaN 0.0 0.0
2 string 3.0 5.0 3.0 5.0
3 more string 4.0 6.0 4.0 6.0
Then you can drop whatever original columns you had to keep only the "filled" ones:
>>> df.drop([x[:x.find('_filled')] for x in df.columns if '_filled' in x], axis=1, inplace=True)
>>> df
A B_filled C_filled
0 NaN 0.0 4.0
1 string 0.0 0.0
2 string 3.0 5.0
3 more string 4.0 6.0
Consider a dataframe like this:
col1 col2 col3 id
0 1 1 1 a
1 0 NaN 1 a
2 NaN 1 1 NaN
3 1 0 1 b
You can select the numeric columns and fillna:
num_cols = df.select_dtypes(include=[np.number]).columns
df[num_cols] = df.select_dtypes(include=[np.number]).fillna(0)
col1 col2 col3 id
0 1 1 1 a
1 0 0 1 a
2 0 1 1 NaN
3 1 0 1 b
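For reference, the same approach as a self-contained sketch built from the table above; the numeric NaNs become 0 while the NaN in the id column is untouched:

```python
import numpy as np
import pandas as pd

# The frame from the answer's table.
df = pd.DataFrame({
    "col1": [1, 0, np.nan, 1],
    "col2": [1, np.nan, 1, 0],
    "col3": [1, 1, 1, 1],
    "id":   ["a", "a", np.nan, "b"],
})

# Fill only the numeric columns, leaving the string column alone.
num_cols = df.select_dtypes(include=[np.number]).columns
df[num_cols] = df[num_cols].fillna(0)
```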