My dataframe contains both NaT and NaN values
Date/Time_entry Entry Date/Time_exit Exit
0 2015-11-11 10:52:00 19.9900 2015-11-11 11:30:00 20.350
1 2015-11-11 11:36:00 20.4300 2015-11-11 11:38:00 20.565
2 2015-11-11 11:44:00 21.0000 NaT NaN
3 2009-04-20 10:28:00 13.7788 2009-04-20 10:46:00 13.700
I want to fill NaT with dates and NaN with numbers. The fillna(4) method replaces both NaT and NaN with 4. Is it possible to differentiate between NaT and NaN somehow?
My current workaround is to call fillna on each column separately, i.e. df[column].fillna(...).
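For reference, a minimal reconstruction of the example frame (column names taken from the printout above):
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'Date/Time_entry': pd.to_datetime(['2015-11-11 10:52', '2015-11-11 11:36',
                                       '2015-11-11 11:44', '2009-04-20 10:28']),
    'Entry': [19.99, 20.43, 21.00, 13.7788],
    'Date/Time_exit': pd.to_datetime(['2015-11-11 11:30', '2015-11-11 11:38',
                                      None, '2009-04-20 10:46']),
    'Exit': [20.35, 20.565, np.nan, 13.70]})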
Since NaTs pertain to datetime columns, you can exclude them when applying your filling operation.
u = df.select_dtypes(exclude=['datetime'])
df[u.columns] = u.fillna(4)
df
Date/Time_entry Entry Date/Time_exit Exit
0 2015-11-11 10:52:00 19.9900 2015-11-11 11:30:00 20.350
1 2015-11-11 11:36:00 20.4300 2015-11-11 11:38:00 20.565
2 2015-11-11 11:44:00 21.0000 NaT 4.000
3 2009-04-20 10:28:00 13.7788 2009-04-20 10:46:00 13.700
Similarly, to fill NaT values only, change "exclude" to "include" in the code above.
u = df.select_dtypes(include=['datetime'])
df[u.columns] = u.fillna(pd.to_datetime('today'))
df
Date/Time_entry Entry Date/Time_exit Exit
0 2015-11-11 10:52:00 19.9900 2015-11-11 11:30:00.000000 20.350
1 2015-11-11 11:36:00 20.4300 2015-11-11 11:38:00.000000 20.565
2 2015-11-11 11:44:00 21.0000 2019-02-17 16:11:09.407466 4.000
3 2009-04-20 10:28:00 13.7788 2009-04-20 10:46:00.000000 13.700
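As a one-pass alternative (a sketch, not part of the answer above): fillna also accepts a per-column dict, so both fill values can be chosen from each column's dtype:
from pandas.api.types import is_datetime64_any_dtype

fills = {c: pd.to_datetime('today') if is_datetime64_any_dtype(df[c]) else 4
         for c in df.columns}
df = df.fillna(fills)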
Try something like this, using pandas.DataFrame.select_dtypes:
>>> import pandas as pd, datetime, numpy as np
>>> df = pd.DataFrame({'a': [datetime.datetime.now(), np.nan], 'b': [5, np.nan], 'c': [1, 2]})
>>> df
a b c
0 2019-02-17 18:06:15.231557 5.0 1
1 NaT NaN 2
>>> fill_dt = datetime.datetime.now()
>>> fill_value = 4
>>> dt_filled_df = df.select_dtypes('datetime').fillna(fill_dt)
>>> dt_filled_df
a
0 2019-02-17 18:06:15.231557
1 2019-02-17 18:06:36.040404
>>> value_filled_df = df.select_dtypes('int').fillna(fill_value)
>>> value_filled_df
c
0 1
1 2
>>> dt_filled_df.columns = [col + '_notnull' for col in dt_filled_df]
>>> value_filled_df.columns = [col + '_notnull' for col in value_filled_df]
>>> df = df.join(value_filled_df)
>>> df = df.join(dt_filled_df)
>>> df
a b c c_notnull a_notnull
0 2019-02-17 18:06:15.231557 5.0 1 1 2019-02-17 18:06:15.231557
1 NaT NaN 2 2 2019-02-17 18:06:36.040404
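For completeness, a per-element sketch: pd.isna() is True for both kinds of missing value, but pd.NaT is a singleton, so an identity check distinguishes it from a float NaN:
>>> df.loc[1, 'a'] is pd.NaT
True
>>> np.isnan(df.loc[1, 'b'])
True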
I have two data frames; the first column is formed by taking the index values from the other data frame. This is tested and successfully returns 5 entries.
The second assignment executes, but it fills every row of the "StartPrice" column with NaN.
df = pd.DataFrame()
df["StartBar"] = df_rs["HighTrendStart"].dropna().index # Works
df["StartPrice"] = df_rs["HighTrendStart"].loc[df["StartBar"]] # Assigns Nan's to all rows
As pointed out by @YOBEN_S, the indexes do not match.
Date
2020-05-01 00:00:00 NaN
2020-05-01 00:15:00 NaN
2020-05-01 00:30:00 NaN
2020-05-01 00:45:00 NaN
2020-05-01 01:00:00 NaN
Freq: 15T, Name: HighTrendStart, dtype: float64
0 2020-05-01 02:30:00
1 2020-05-01 06:30:00
2 2020-05-01 13:45:00
3 2020-05-01 16:15:00
4 2020-05-01 20:00:00
Name: StartBar, dtype: datetime64[ns]
When you assign values from a different DataFrame, pandas aligns on the index, so you should make sure the indexes match. Here they don't, so strip the index by converting to a NumPy array:
df["StartPrice"] = df_rs["HighTrendStart"].loc[df["StartBar"]].to_numpy()
For example
df=pd.DataFrame({'a':[1,2,3,4,5,6]})
s=pd.Series([1,2,3,4,5,6],index=list('abcdef'))
df
Out[190]:
a
0 1
1 2
2 3
3 4
4 5
5 6
s
Out[191]:
a 1
b 2
c 3
d 4
e 5
f 6
dtype: int64
df['New']=s
df
Out[193]:
a New
0 1 NaN
1 2 NaN
2 3 NaN
3 4 NaN
4 5 NaN
5 6 NaN
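If you want the values to land by position regardless of the labels (the same idea as the .to_numpy() fix above), strip the index first:
df['New'] = s.to_numpy()
# or: df['New'] = s.reset_index(drop=True)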
I have a pandas dataframe:
import pandas as pnd
d = pnd.Timestamp('2013-01-01 16:00')
dates = pnd.bdate_range(start=d, end = d+pnd.DateOffset(days=10), normalize = False)
df = pnd.DataFrame(index=dates, columns=['a'])
df['a'] = 6
print(df)
a
2013-01-01 16:00:00 6
2013-01-02 16:00:00 6
2013-01-03 16:00:00 6
2013-01-04 16:00:00 6
2013-01-07 16:00:00 6
2013-01-08 16:00:00 6
2013-01-09 16:00:00 6
2013-01-10 16:00:00 6
2013-01-11 16:00:00 6
I am interested in finding the integer location of one of the labels, say,
ds = pnd.Timestamp('2013-01-02 16:00')
Looking at the index values, I know that the integer location of this label is 1. How can I get pandas to tell me what the integer location of this label is?
You're looking for the index method get_loc:
In [11]: df.index.get_loc(ds)
Out[11]: 1
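If you need the positions of several labels at once, Index.get_indexer does the same lookup vectorized (returning -1 for labels that are missing):
In [12]: df.index.get_indexer([ds, pnd.Timestamp('2013-01-04 16:00')])
Out[12]: array([1, 3])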
Get dataframe integer index given a date key:
>>> import pandas as pd
>>> import datetime
>>> df = pd.DataFrame(
index=pd.date_range(datetime.datetime(2008,1,1), datetime.datetime(2008,1,5)),
columns=("foo", "bar"))
>>> df["foo"] = [10,20,40,15,10]
>>> df["bar"] = [100,200,40,-50,-38]
>>> df
foo bar
2008-01-01 10 100
2008-01-02 20 200
2008-01-03 40 40
2008-01-04 15 -50
2008-01-05 10 -38
>>> df.index.get_loc(df["bar"].idxmax())
1
>>> df.index.get_loc(df["foo"].idxmax())
2
In column bar, the integer location of the maximum value is 1.
In column foo, the integer location of the maximum value is 2.
(idxmax returns the label of the row with the maximum value; in modern pandas, argmax returns the integer position directly, so get_loc would not be needed with it.)
http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.get_loc.html
get_loc can be used for rows and columns, as shown below:
import pandas as pnd
d = pnd.Timestamp('2013-01-01 16:00')
dates = pnd.bdate_range(start=d, end = d+pnd.DateOffset(days=10), normalize = False)
df = pnd.DataFrame(index=dates)
df['a'] = 5
df['b'] = 6
print(df.head())
a b
2013-01-01 16:00:00 5 6
2013-01-02 16:00:00 5 6
2013-01-03 16:00:00 5 6
2013-01-04 16:00:00 5 6
2013-01-07 16:00:00 5 6
#for rows
print(df.index.get_loc('2013-01-01 16:00:00'))
0
#for columns
print(df.columns.get_loc('b'))
1
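Combining the two gives scalar access by position (a small usage sketch):
print(df.iloc[df.index.get_loc('2013-01-01 16:00:00'), df.columns.get_loc('b')])
6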
Because get_loc returns a mask rather than a list of integer index locations when there are multiple instances of the key in the index, I was toying with an answer using reset_index():
# Add a duplicate!!!
dup = pd.Timestamp('2013-01-07 16:00')
# DataFrame.append was removed in pandas 2.0; pd.concat does the same job
df = pd.concat([df, pd.DataFrame([7], columns=['a'], index=[dup])])
df
a
2013-01-01 16:00:00 6
2013-01-02 16:00:00 6
2013-01-03 16:00:00 6
2013-01-04 16:00:00 6
2013-01-07 16:00:00 6
2013-01-08 16:00:00 6
2013-01-09 16:00:00 6
2013-01-10 16:00:00 6
2013-01-11 16:00:00 6
2013-01-07 16:00:00 7
# Only use this method if the key has duplicates
if df.loc[dup].index.has_duplicates:
    df.reset_index().loc[df.index.get_loc(dup)].index.to_list()
[4, 9]
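An alternative sketch: assuming get_loc returns a boolean mask here (as described above), np.flatnonzero converts that mask directly into integer positions, without the reset_index round-trip:
import numpy as np
np.flatnonzero(df.index.get_loc(dup))
array([4, 9])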
I have three dataframes. Each dataframe has a Date column. I want to left join the three using the Date column. Dates are present in the form 'yyyy-mm-dd'. I want to merge the dataframes using 'yyyy-mm' only.
df1
Date X
31-05-2014 1
30-06-2014 2
31-07-2014 3
31-08-2014 4
30-09-2014 5
31-10-2014 6
30-11-2014 7
31-12-2014 8
31-01-2015 1
28-02-2015 3
31-03-2015 4
30-04-2015 5
df2
Date Y
01-09-2014 1
01-10-2014 4
01-11-2014 6
01-12-2014 7
01-01-2015 2
01-02-2015 3
01-03-2015 6
01-04-2015 4
01-05-2015 3
01-06-2015 4
01-07-2015 5
01-08-2015 2
df3
Date Z
01-07-2015 9
01-08-2015 2
01-09-2015 4
01-10-2015 1
01-11-2015 2
01-12-2015 3
01-01-2016 7
01-02-2016 4
01-03-2016 9
01-04-2016 2
01-05-2016 4
01-06-2016 1
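Note that the answers below assume the Date columns already have datetime dtype. If they are still strings (the tables above display dd-mm-yyyy), convert first, for example:
for d in (df1, df2, df3):
    d['Date'] = pd.to_datetime(d['Date'], dayfirst=True)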
Try:
df4 = pd.merge(df1,df2, how='left', on='Date')
Result:
Date X Y
0 2014-05-31 1 NaN
1 2014-06-30 2 NaN
2 2014-07-31 3 NaN
3 2014-08-31 4 NaN
4 2014-09-30 5 NaN
5 2014-10-31 6 NaN
6 2014-11-30 7 NaN
7 2014-12-31 8 NaN
8 2015-01-31 1 NaN
9 2015-02-28 3 NaN
10 2015-03-31 4 NaN
11 2015-04-30 5 NaN
Use Series.dt.to_period with monthly periods, then merge the list of DataFrames with functools.reduce:
import functools
dfs = [df1, df2, df3]
dfs = [x.assign(per=x['Date'].dt.to_period('m')) for x in dfs]
df = functools.reduce(lambda left,right: pd.merge(left,right,on='per', how='left'), dfs)
print (df)
Date_x X per Date_y Y Date Z
0 2014-05-31 1 2014-05 NaT NaN NaT NaN
1 2014-06-30 2 2014-06 NaT NaN NaT NaN
2 2014-07-31 3 2014-07 NaT NaN NaT NaN
3 2014-08-31 4 2014-08 NaT NaN NaT NaN
4 2014-09-30 5 2014-09 2014-09-01 1.0 NaT NaN
5 2014-10-31 6 2014-10 2014-10-01 4.0 NaT NaN
6 2014-11-30 7 2014-11 2014-11-01 6.0 NaT NaN
7 2014-12-31 8 2014-12 2014-12-01 7.0 NaT NaN
8 2015-01-31 1 2015-01 2015-01-01 2.0 NaT NaN
9 2015-02-28 3 2015-02 2015-02-01 3.0 NaT NaN
10 2015-03-31 4 2015-03 2015-03-01 6.0 NaT NaN
11 2015-04-30 5 2015-04 2015-04-01 4.0 NaT NaN
Alternative:
df1['per'] = df1['Date'].dt.to_period('m')
df2['per'] = df2['Date'].dt.to_period('m')
df3['per'] = df3['Date'].dt.to_period('m')
df4 = pd.merge(df1,df2, how='left', on='per').merge(df3, how='left', on='per')
print (df4)
Date_x X per Date_y Y Date Z
0 2014-05-31 1 2014-05 NaT NaN NaT NaN
1 2014-06-30 2 2014-06 NaT NaN NaT NaN
2 2014-07-31 3 2014-07 NaT NaN NaT NaN
3 2014-08-31 4 2014-08 NaT NaN NaT NaN
4 2014-09-30 5 2014-09 2014-09-01 1.0 NaT NaN
5 2014-10-31 6 2014-10 2014-10-01 4.0 NaT NaN
6 2014-11-30 7 2014-11 2014-11-01 6.0 NaT NaN
7 2014-12-31 8 2014-12 2014-12-01 7.0 NaT NaN
8 2015-01-31 1 2015-01 2015-01-01 2.0 NaT NaN
9 2015-02-28 3 2015-02 2015-02-01 3.0 NaT NaN
10 2015-03-31 4 2015-03 2015-03-01 6.0 NaT NaN
11 2015-04-30 5 2015-04 2015-04-01 4.0 NaT NaN
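If you only want a single Date column in the result, the helper columns can be dropped afterwards (names taken from the output above):
df4 = df4.drop(columns=['per', 'Date_y', 'Date']).rename(columns={'Date_x': 'Date'})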
I have a DataFrame with columns = ['date','id','value'], where id represents different products. Assume that we have n products. I am looking to create a new dataframe with columns = ['date', 'valueid1', ..., 'valueidn'], where each value is assigned to the corresponding date row if it exists, and NaN is assigned if it doesn't. Many thanks
Assuming you have the following DF:
In [120]: df
Out[120]:
date id value
0 2001-01-01 1 10
1 2001-01-01 2 11
2 2001-01-01 3 12
3 2001-01-02 3 20
4 2001-01-03 1 20
5 2001-01-04 2 30
you can use the pivot_table() method:
In [121]: df.pivot_table(index='date', columns='id', values='value')
Out[121]:
id 1 2 3
date
2001-01-01 10.0 11.0 12.0
2001-01-02 NaN NaN 20.0
2001-01-03 20.0 NaN NaN
2001-01-04 NaN 30.0 NaN
or
In [122]: df.pivot_table(index='date', columns='id', values='value', fill_value=0)
Out[122]:
id 1 2 3
date
2001-01-01 10 11 12
2001-01-02 0 0 20
2001-01-03 20 0 0
2001-01-04 0 30 0
I think you need pivot:
df = df.pivot(index='date', columns='id', values='value')
Sample:
df = pd.DataFrame({'date':pd.date_range('2017-01-01', periods=5),
'id':[4,5,6,4,5],
'value':[7,8,9,1,2]})
print (df)
date id value
0 2017-01-01 4 7
1 2017-01-02 5 8
2 2017-01-03 6 9
3 2017-01-04 4 1
4 2017-01-05 5 2
df = df.pivot(index='date', columns='id', values='value')
#alternative solution
#df = df.set_index(['date','id'])['value'].unstack()
print (df)
id 4 5 6
date
2017-01-01 7.0 NaN NaN
2017-01-02 NaN 8.0 NaN
2017-01-03 NaN NaN 9.0
2017-01-04 1.0 NaN NaN
2017-01-05 NaN 2.0 NaN
but if you get:
ValueError: Index contains duplicate entries, cannot reshape
it is necessary to use an aggregating function like mean or sum with groupby or pivot_table:
df = pd.DataFrame({'date':['2017-01-01', '2017-01-02',
'2017-01-03','2017-01-05','2017-01-05'],
'id':[4,5,6,4,4],
'value':[7,8,9,1,2]})
df.date = pd.to_datetime(df.date)
print (df)
date id value
0 2017-01-01 4 7
1 2017-01-02 5 8
2 2017-01-03 6 9
3 2017-01-05 4 1 <- duplicate 2017-01-05 4
4 2017-01-05 4 2 <- duplicate 2017-01-05 4
df = df.groupby(['date', 'id'])['value'].mean().unstack()
#alternative solution (same result as the groupby, only slower on a big df)
#df = df.pivot_table(index='date', columns='id', values='value', aggfunc='mean')
print (df)
id 4 5 6
date
2017-01-01 7.0 NaN NaN
2017-01-02 NaN 8.0 NaN
2017-01-03 NaN NaN 9.0
2017-01-05 1.5 NaN NaN <- 1.5 is mean (1 + 2)/2
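To decide up front which path you need, you can check the pre-pivot frame for duplicate (date, id) pairs; a quick sketch:
if df.duplicated(subset=['date', 'id']).any():
    df = df.pivot_table(index='date', columns='id', values='value', aggfunc='mean')
else:
    df = df.pivot(index='date', columns='id', values='value')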
I have two DataFrames (with DatetimeIndex) and want to update the first frame (the older one) with data from the second frame (the newer one).
The new frame may contain more recent data for rows already contained in the old frame. In this case, data in the old frame should be overwritten with data from the new frame.
Also the newer frame may have more columns / rows, than the first one.
In this case the old frame should be enlarged by the data in the new frame.
Pandas docs state that
"The .loc/.ix/[] operations can perform enlargement when setting a non-existant key for that axis"
and
"a DataFrame can be enlarged on either axis via .loc"
However this doesn't seem to work and throws a KeyError. Example:
In [195]: df1
Out[195]:
A B C
2015-07-09 12:00:00 1 1 1
2015-07-09 13:00:00 1 1 1
2015-07-09 14:00:00 1 1 1
2015-07-09 15:00:00 1 1 1
In [196]: df2
Out[196]:
A B C D
2015-07-09 14:00:00 2 2 2 2
2015-07-09 15:00:00 2 2 2 2
2015-07-09 16:00:00 2 2 2 2
2015-07-09 17:00:00 2 2 2 2
In [197]: df1.loc[df2.index] = df2
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-197-74e630e87cf8> in <module>()
----> 1 df1.loc[df2.index] = df2
/.../pandas/core/indexing.pyc in __setitem__(self, key, value)
112
113 def __setitem__(self, key, value):
--> 114 indexer = self._get_setitem_indexer(key)
115 self._setitem_with_indexer(indexer, value)
116
/.../pandas/core/indexing.pyc in _get_setitem_indexer(self, key)
107
108 try:
--> 109 return self._convert_to_indexer(key, is_setter=True)
110 except TypeError:
111 raise IndexingError(key)
/.../pandas/core/indexing.pyc in _convert_to_indexer(self, obj, axis, is_setter)
1110 mask = check == -1
1111 if mask.any():
-> 1112 raise KeyError('%s not in index' % objarr[mask])
1113
1114 return _values_from_object(indexer)
KeyError: "['2015-07-09T18:00:00.000000000+0200' '2015-07-09T19:00:00.000000000+0200'] not in index"
What is the best way (with respect to performance, as my real data is much larger) to achieve the desired updated and enlarged DataFrame? This is the result I would like to see:
A B C D
2015-07-09 12:00:00 1 1 1 NaN
2015-07-09 13:00:00 1 1 1 NaN
2015-07-09 14:00:00 2 2 2 2
2015-07-09 15:00:00 2 2 2 2
2015-07-09 16:00:00 2 2 2 2
2015-07-09 17:00:00 2 2 2 2
df2.combine_first(df1) (documentation)
seems to serve your requirement; the code snippet and output are below.
import pandas as pd
print('pandas-version:', pd.__version__)
df1 = pd.DataFrame.from_records([('2015-07-09 12:00:00',1,1,1),
('2015-07-09 13:00:00',1,1,1),
('2015-07-09 14:00:00',1,1,1),
('2015-07-09 15:00:00',1,1,1)],
columns=['Dt', 'A', 'B', 'C']).set_index('Dt')
# print df1
df2 = pd.DataFrame.from_records([('2015-07-09 14:00:00',2,2,2,2),
('2015-07-09 15:00:00',2,2,2,2),
('2015-07-09 16:00:00',2,2,2,2),
('2015-07-09 17:00:00',2,2,2,2),],
columns=['Dt', 'A', 'B', 'C', 'D']).set_index('Dt')
res_combine1st = df2.combine_first(df1)
print(res_combine1st)
output
pandas-version: 0.15.2
A B C D
Dt
2015-07-09 12:00:00 1 1 1 NaN
2015-07-09 13:00:00 1 1 1 NaN
2015-07-09 14:00:00 2 2 2 2
2015-07-09 15:00:00 2 2 2 2
2015-07-09 16:00:00 2 2 2 2
2015-07-09 17:00:00 2 2 2 2
You can use the combine function.
import pandas as pd
import numpy as np
# your data
# ===========================================================
df1 = pd.DataFrame(np.ones(12).reshape(4,3), columns='A B C'.split(), index=pd.date_range('2015-07-09 12:00:00', periods=4, freq='H'))
df2 = pd.DataFrame(np.ones(16).reshape(4,4)*2, columns='A B C D'.split(), index=pd.date_range('2015-07-09 14:00:00', periods=4, freq='H'))
# processing
# =====================================================
# reindex to populate NaN
result = df2.reindex(np.union1d(df1.index, df2.index))
Out[248]:
A B C D
2015-07-09 12:00:00 NaN NaN NaN NaN
2015-07-09 13:00:00 NaN NaN NaN NaN
2015-07-09 14:00:00 2 2 2 2
2015-07-09 15:00:00 2 2 2 2
2015-07-09 16:00:00 2 2 2 2
2015-07-09 17:00:00 2 2 2 2
combiner = lambda x, y: np.where(x.isnull(), y, x)
# use df1 to update result
result.combine(df1, combiner)
Out[249]:
A B C D
2015-07-09 12:00:00 1 1 1 NaN
2015-07-09 13:00:00 1 1 1 NaN
2015-07-09 14:00:00 2 2 2 2
2015-07-09 15:00:00 2 2 2 2
2015-07-09 16:00:00 2 2 2 2
2015-07-09 17:00:00 2 2 2 2
# maybe fillna(method='ffill') if you like
In addition to the previous answer, after reindexing you can use
result.fillna(df1, inplace=True)
So, based on Jianxun Li's code (extended with one more column), you can try this:
import numpy as np
import pandas as pd

# your data
# ===========================================================
df1 = pd.DataFrame(np.ones(12).reshape(4,3), columns='A B C'.split(), index=pd.date_range('2015-07-09 12:00:00', periods=4, freq='H'))
df2 = pd.DataFrame(np.ones(20).reshape(4,5)*2, columns='A B C D E'.split(), index=pd.date_range('2015-07-09 14:00:00', periods=4, freq='H'))
# processing
# =====================================================
# reindex to populate NaN
result = df2.reindex(np.union1d(df1.index, df2.index))
# fill NaN from df1
result.fillna(df1, inplace=True)
Out[3]:
A B C D E
2015-07-09 12:00:00 1 1 1 NaN NaN
2015-07-09 13:00:00 1 1 1 NaN NaN
2015-07-09 14:00:00 2 2 2 2 2
2015-07-09 15:00:00 2 2 2 2 2
2015-07-09 16:00:00 2 2 2 2 2
2015-07-09 17:00:00 2 2 2 2 2