How to add NaN elements in a groupby on a pandas dataframe? - python

I have data:
id time w
0 39 2018-03-01 00:00:00 1176.000000
1 39 2018-03-01 01:45:00 1033.461538
2 39 2018-03-01 02:00:00 1081.066667
3 39 2018-03-01 02:15:00 1067.909091
4 39 2018-03-01 02:30:00 1026.600000
5 39 2018-03-01 02:45:00 1051.866667
I have grouped the original data into fifteen-minute bins.
But I want to present:
id time w
0 39 2018-03-01 00:00:00 1176.000000
1 39 2018-03-01 00:15:00 NaN
2 39 2018-03-01 00:30:00 NaN
. 39 ... ... ...
. 39 ... ... ...
. 39 2018-03-01 01:30:00 NaN
1 39 2018-03-01 01:45:00 1033.461538
2 39 2018-03-01 02:00:00 1081.066667
3 39 2018-03-01 02:15:00 1067.909091
4 39 2018-03-01 02:30:00 1026.600000
5 39 2018-03-01 02:45:00 1051.866667
I tried the following, but it did not work:
showData = Data.groupby(['id', pd.Grouper(key='time', freq='15T')])['w'] \
    .mean().replace('', np.nan).reset_index()
I really need your help. Many thanks.

Simply use resample:
df.resample('15min', on='time').mean()
id w
time
2018-03-01 00:00:00 39.0 1176.000000
2018-03-01 00:15:00 NaN NaN
2018-03-01 00:30:00 NaN NaN
2018-03-01 00:45:00 NaN NaN
2018-03-01 01:00:00 NaN NaN
2018-03-01 01:15:00 NaN NaN
2018-03-01 01:30:00 NaN NaN
2018-03-01 01:45:00 39.0 1033.461538
2018-03-01 02:00:00 39.0 1081.066667
2018-03-01 02:15:00 39.0 1067.909091
2018-03-01 02:30:00 39.0 1026.600000
2018-03-01 02:45:00 39.0 1051.866667
To fill in your id, you can just forward-fill it: fillna(method='ffill') in older pandas, or the equivalent ffill() in recent versions:
resampled_df = df.resample('15min', on='time').mean()
resampled_df['id'] = resampled_df['id'].ffill()
resampled_df
id w
time
2018-03-01 00:00:00 39.0 1176.000000
2018-03-01 00:15:00 39.0 NaN
2018-03-01 00:30:00 39.0 NaN
2018-03-01 00:45:00 39.0 NaN
2018-03-01 01:00:00 39.0 NaN
2018-03-01 01:15:00 39.0 NaN
2018-03-01 01:30:00 39.0 NaN
2018-03-01 01:45:00 39.0 1033.461538
2018-03-01 02:00:00 39.0 1081.066667
2018-03-01 02:15:00 39.0 1067.909091
2018-03-01 02:30:00 39.0 1026.600000
2018-03-01 02:45:00 39.0 1051.866667
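If the frame holds more than one id, the same idea extends by resampling within each id group so gaps are filled per id. A minimal sketch, reconstructing the sample frame above (the construction itself is assumed, not from the question):
import pandas as pd

# Hypothetical reconstruction of the sample frame shown above
df = pd.DataFrame({
    'id': [39] * 6,
    'time': pd.to_datetime(['2018-03-01 00:00:00', '2018-03-01 01:45:00',
                            '2018-03-01 02:00:00', '2018-03-01 02:15:00',
                            '2018-03-01 02:30:00', '2018-03-01 02:45:00']),
    'w': [1176.0, 1033.461538, 1081.066667, 1067.909091, 1026.6, 1051.866667],
})

# Resample within each id so the 15-minute gaps are filled per id,
# then flatten the (id, time) MultiIndex back into columns
out = (df.set_index('time')
         .groupby('id')['w']
         .resample('15min')
         .mean()
         .reset_index())
print(out)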

Related

Finding maximum null values in stretch and generating flag

I have a dataframe with a datetime column and two value columns. For each particular date, I have to find the maximum stretch of null values in column 'X' and replace it with zero in both columns for that date. In addition, I have to create a third column named 'Flag' that carries 1 for every zero imputation in the other two columns, and 0 otherwise. In the example below, on January 1st the maximum stretch of null values is 3 rows, so I have to replace those with zero. Similarly, I have to replicate the process for 2nd January.
Below is my sample data:
Datetime X Y
01-01-2018 00:00 1 1
01-01-2018 00:05 nan 2
01-01-2018 00:10 2 nan
01-01-2018 00:15 3 4
01-01-2018 00:20 2 2
01-01-2018 00:25 nan 1
01-01-2018 00:30 nan nan
01-01-2018 00:35 nan nan
01-01-2018 00:40 4 4
02-01-2018 00:00 nan nan
02-01-2018 00:05 2 3
02-01-2018 00:10 2 2
02-01-2018 00:15 2 5
02-01-2018 00:20 2 2
02-01-2018 00:25 nan nan
02-01-2018 00:30 nan 1
02-01-2018 00:35 3 nan
02-01-2018 00:40 nan nan
"Below is the result that I am expecting"
Datetime X Y Flag
01-01-2018 00:00 1 1 0
01-01-2018 00:05 nan 2 0
01-01-2018 00:10 2 nan 0
01-01-2018 00:15 3 4 0
01-01-2018 00:20 2 2 0
01-01-2018 00:25 0 0 1
01-01-2018 00:30 0 0 1
01-01-2018 00:35 0 0 1
01-01-2018 00:40 4 4 0
02-01-2018 00:00 nan nan 0
02-01-2018 00:05 2 3 0
02-01-2018 00:10 2 2 0
02-01-2018 00:15 2 5 0
02-01-2018 00:20 2 2 0
02-01-2018 00:25 nan nan 0
02-01-2018 00:30 nan 1 0
02-01-2018 00:35 3 nan 0
02-01-2018 00:40 nan nan 0
This question is an extension of a previous question. Here is the link: Python - Find maximum null values in stretch and replacing with 0
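The solution below assumes the Datetime column has been parsed and set as the index, since it groups on df1.index.date. A minimal setup sketch (the file name is hypothetical):
import pandas as pd

df = pd.read_csv('data.csv')  # hypothetical source file
df['Datetime'] = pd.to_datetime(df['Datetime'])
df = df.set_index('Datetime')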
First, create consecutive group ids for the NaN runs in each column (per day); multiplying the Y ids by the frame length keeps them distinct from the X ids:
df1 = df.isna()
df2 = df1.ne(df1.groupby(df1.index.date).shift()).cumsum().where(df1)
df2['Y'] *= len(df2)
print (df2)
X Y
Datetime
2018-01-01 00:00:00 NaN NaN
2018-01-01 00:05:00 2.0 NaN
2018-01-01 00:10:00 NaN 36.0
2018-01-01 00:15:00 NaN NaN
2018-01-01 00:20:00 NaN NaN
2018-01-01 00:25:00 4.0 NaN
2018-01-01 00:30:00 4.0 72.0
2018-01-01 00:35:00 4.0 72.0
2018-01-01 00:40:00 NaN NaN
2018-02-01 00:00:00 6.0 108.0
2018-02-01 00:05:00 NaN NaN
2018-02-01 00:10:00 NaN NaN
2018-02-01 00:15:00 NaN NaN
2018-02-01 00:20:00 NaN NaN
2018-02-01 00:25:00 8.0 144.0
2018-02-01 00:30:00 8.0 NaN
2018-02-01 00:35:00 NaN 180.0
2018-02-01 00:40:00 10.0 180.0
Then get the group id with the maximum count - here group 4.0:
a = df2.stack().value_counts().index[0]
print (a)
4.0
Get a mask of the rows matching that group, set them to 0, and create the Flag column by casting the mask to integer (True/False to 1/0):
mask = df2.eq(a).any(axis=1)
df.loc[mask,:] = 0
df['Flag'] = mask.astype(int)
print (df)
X Y Flag
Datetime
2018-01-01 00:00:00 1.0 1.0 0
2018-01-01 00:05:00 NaN 2.0 0
2018-01-01 00:10:00 2.0 NaN 0
2018-01-01 00:15:00 3.0 4.0 0
2018-01-01 00:20:00 2.0 2.0 0
2018-01-01 00:25:00 0.0 0.0 1
2018-01-01 00:30:00 0.0 0.0 1
2018-01-01 00:35:00 0.0 0.0 1
2018-01-01 00:40:00 4.0 4.0 0
2018-02-01 00:00:00 NaN NaN 0
2018-02-01 00:05:00 2.0 3.0 0
2018-02-01 00:10:00 2.0 2.0 0
2018-02-01 00:15:00 2.0 5.0 0
2018-02-01 00:20:00 2.0 2.0 0
2018-02-01 00:25:00 NaN NaN 0
2018-02-01 00:30:00 NaN 1.0 0
2018-02-01 00:35:00 3.0 NaN 0
2018-02-01 00:40:00 NaN NaN 0
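Note that value_counts() sorts by count, so index[0] is the id of the longest NaN stretch across both columns; if several stretches tie for the longest, only the one that value_counts() happens to rank first gets zeroed.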
EDIT:
Added a condition to only match dates from a list:
dates = df.index.floor('d')
filtered = ['2018-01-01','2019-01-01']
m = dates.isin(filtered)
df1 = df.isna() & m[:, None]
df2 = df1.ne(df1.groupby(dates).shift()).cumsum().where(df1)
df2['Y'] *= len(df2)
print (df2)
X Y
Datetime
2018-01-01 00:00:00 NaN NaN
2018-01-01 00:05:00 2.0 NaN
2018-01-01 00:10:00 NaN 36.0
2018-01-01 00:15:00 NaN NaN
2018-01-01 00:20:00 NaN NaN
2018-01-01 00:25:00 4.0 NaN
2018-01-01 00:30:00 4.0 72.0
2018-01-01 00:35:00 4.0 72.0
2018-01-01 00:40:00 NaN NaN
2018-02-01 00:00:00 NaN NaN
2018-02-01 00:05:00 NaN NaN
2018-02-01 00:10:00 NaN NaN
2018-02-01 00:15:00 NaN NaN
2018-02-01 00:20:00 NaN NaN
2018-02-01 00:25:00 NaN NaN
2018-02-01 00:30:00 NaN NaN
2018-02-01 00:35:00 NaN NaN
2018-02-01 00:40:00 NaN NaN
a = df2.stack().value_counts().index[0]
# variant that also works when the filtered rows contain no NaNs
# (prevents IndexError: index 0 is out of bounds):
# a = next(iter(df2.stack().value_counts().index), -1)
mask = df2.eq(a).any(axis=1)
df.loc[mask,:] = 0
df['Flag'] = mask.astype(int)
print (df)
X Y Flag
Datetime
2018-01-01 00:00:00 1.0 1.0 0
2018-01-01 00:05:00 NaN 2.0 0
2018-01-01 00:10:00 2.0 NaN 0
2018-01-01 00:15:00 3.0 4.0 0
2018-01-01 00:20:00 2.0 2.0 0
2018-01-01 00:25:00 0.0 0.0 1
2018-01-01 00:30:00 0.0 0.0 1
2018-01-01 00:35:00 0.0 0.0 1
2018-01-01 00:40:00 4.0 4.0 0
2018-02-01 00:00:00 NaN NaN 0
2018-02-01 00:05:00 2.0 3.0 0
2018-02-01 00:10:00 2.0 2.0 0
2018-02-01 00:15:00 2.0 5.0 0
2018-02-01 00:20:00 2.0 2.0 0
2018-02-01 00:25:00 NaN NaN 0
2018-02-01 00:30:00 NaN 1.0 0
2018-02-01 00:35:00 3.0 NaN 0
2018-02-01 00:40:00 NaN NaN 0

Adding empty dataframe rows based on missing datetime values

I am trying to add rows to my pandas dataframe as such:
import pandas as pd
import datetime as dt
d={'datetime':[dt.datetime(2018,3,1,0,0),dt.datetime(2018,3,1,0,10),dt.datetime(2018,3,1,0,40)],
'value':[4.,5.,1.]}
df=pd.DataFrame(d)
Which outputs:
datetime value
0 2018-03-01 00:00:00 4.0
1 2018-03-01 00:10:00 5.0
2 2018-03-01 00:40:00 1.0
What I want to do is add rows from 00:00:00 to 00:40:00, to show every 5 minutes. My desired output looks like this:
datetime value
0 2018-03-01 00:00:00 4.0
1 2018-03-01 00:05:00 NaN
2 2018-03-01 00:10:00 5.0
3 2018-03-01 00:15:00 NaN
4 2018-03-01 00:20:00 NaN
5 2018-03-01 00:25:00 NaN
6 2018-03-01 00:30:00 NaN
7 2018-03-01 00:35:00 NaN
8 2018-03-01 00:40:00 1.0
How do I get there?
You can use pd.DataFrame.resample:
df = df.resample('5Min', on='datetime').first()\
       .drop(columns='datetime', errors='ignore').reset_index()
print(df)
datetime value
0 2018-03-01 00:00:00 4.0
1 2018-03-01 00:05:00 NaN
2 2018-03-01 00:10:00 5.0
3 2018-03-01 00:15:00 NaN
4 2018-03-01 00:20:00 NaN
5 2018-03-01 00:25:00 NaN
6 2018-03-01 00:30:00 NaN
7 2018-03-01 00:35:00 NaN
8 2018-03-01 00:40:00 1.0
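Since the existing timestamps already fall on the 5-minute grid, asfreq is a lighter alternative that inserts the missing rows without aggregating (same df as above):
out = df.set_index('datetime').asfreq('5Min').reset_index()
print(out)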
First, you can create a dataframe with your final datetime index, and then assign the second one to it:
import numpy as np

df1 = pd.DataFrame({'value': np.nan},
                   index=pd.date_range('2018-03-01 00:00:00',
                                       periods=9, freq='5min'))
print(df1)
# Output:
value
2018-03-01 00:00:00 NaN
2018-03-01 00:05:00 NaN
2018-03-01 00:10:00 NaN
2018-03-01 00:15:00 NaN
2018-03-01 00:20:00 NaN
2018-03-01 00:25:00 NaN
2018-03-01 00:30:00 NaN
2018-03-01 00:35:00 NaN
2018-03-01 00:40:00 NaN
Now, say your dataframe is the second one; you can add this to the code above:
d={'datetime':
[dt.datetime(2018,3,1,0,0),dt.datetime(2018,3,1,0,10),dt.datetime(2018,3,1,0,40)],
'value':[4.,5.,1.]}
df2=pd.DataFrame(d)
df2.datetime = pd.to_datetime(df2.datetime)
df2.set_index('datetime',inplace=True)
print(df2)
# Output:
value
datetime
2018-03-01 00:00:00 4.0
2018-03-01 00:10:00 5.0
2018-03-01 00:40:00 1.0
Finally:
df1.value = df2.value
print(df1)
# Output:
value
2018-03-01 00:00:00 4.0
2018-03-01 00:05:00 NaN
2018-03-01 00:10:00 5.0
2018-03-01 00:15:00 NaN
2018-03-01 00:20:00 NaN
2018-03-01 00:25:00 NaN
2018-03-01 00:30:00 NaN
2018-03-01 00:35:00 NaN
2018-03-01 00:40:00 1.0
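The manual construction of df1 can also be avoided by reindexing df2 against a complete date_range, with the bounds taken from the data itself:
full_index = pd.date_range(df2.index.min(), df2.index.max(), freq='5min')
df1 = df2.reindex(full_index)
print(df1)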

Return maximum datetime value

This is my code
file = pd.read_excel(open("file name", 'rb'), sheet_name="data")
max_vol = file["Voltage"].max()
max_time = file.loc[file["Voltage"] == max_vol, "Timestamp"]
My Timestamp has data like this
0 2018-03-01 00:00:00
1 2018-03-01 00:05:00
2 2018-03-01 00:10:00
3 2018-03-01 00:15:00
4 2018-03-01 00:20:00
5 2018-03-01 00:25:00
6 2018-03-01 00:30:00
7 2018-03-01 00:35:00
8 2018-03-01 00:40:00
9 2018-03-01 00:45:00
10 2018-03-01 00:50:00
11 2018-03-01 00:55:00
12 2018-03-01 01:00:00
13 2018-03-01 01:05:00
14 2018-03-01 01:10:00
15 2018-03-01 01:15:00
16 2018-03-01 01:20:00
When printing max_time, I get a result like
624 2018-03-03 04:00:00
Name: Timestamp, dtype: datetime64[ns]
but I want only
2018-03-03 04:00:00
Can someone help me in this regard?
You can use idxmax (called argmax in older pandas versions) to extract the index of the largest element, and then use pd.DataFrame.loc:
df['datetime'] = pd.to_datetime(df['datetime'])  # convert to datetime
res = df['datetime'].loc[df['voltage'].idxmax()]
If you know your index is an integer range beginning at 0, e.g. [0, 1, 2], you can equivalently use the more efficient positional .iat or .iloc accessors.
pd.Series.idxmax returns the index label of the first occurrence of the maximum, and pd.DataFrame.loc permits indexing by index label, so linking the two gives the desired result.
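Applied to the frame in the question (column names taken from the code shown there), this becomes:
max_time = file.loc[file['Voltage'].idxmax(), 'Timestamp']
print(max_time)  # a scalar Timestamp, e.g. 2018-03-03 04:00:00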

How can I replace specific values in a time-series dataframe in pandas?

I have the dataframe below (date/time is a MultiIndex) and I want to replace the column's values between 00:00:00 and 07:00:00 with this numpy array:
[[ 21.63920663 21.62012822 20.9900515 21.23217008 21.19482458
21.10839656 20.89631935 20.79977166 20.99176729 20.91567565
20.87258765 20.76210464 20.50357827 20.55897631 20.38005033
20.38227309 20.54460993 20.37707293 20.08279925 20.09955877
20.02559575 20.12390737 20.2917257 20.20056711 20.1589065
20.41302289 20.48000767 20.55604102 20.70255192]]
date time
2018-01-26 00:00:00 21.65
00:15:00 NaN
00:30:00 NaN
00:45:00 NaN
01:00:00 NaN
01:15:00 NaN
01:30:00 NaN
01:45:00 NaN
02:00:00 NaN
02:15:00 NaN
02:30:00 NaN
02:45:00 NaN
03:00:00 NaN
03:15:00 NaN
03:30:00 NaN
03:45:00 NaN
04:00:00 NaN
04:15:00 NaN
04:30:00 NaN
04:45:00 NaN
05:00:00 NaN
05:15:00 NaN
05:30:00 NaN
05:45:00 NaN
06:00:00 NaN
06:15:00 NaN
06:30:00 NaN
06:45:00 NaN
07:00:00 NaN
07:15:00 NaN
07:30:00 NaN
07:45:00 NaN
08:00:00 NaN
08:15:00 NaN
08:30:00 NaN
08:45:00 NaN
09:00:00 NaN
09:15:00 NaN
09:30:00 NaN
09:45:00 NaN
10:00:00 NaN
10:15:00 NaN
10:30:00 NaN
10:45:00 NaN
11:00:00 NaN
Name: temp, dtype: float64
<class 'datetime.time'>
How can I do this?
You can use slicers:
idx = pd.IndexSlice
df1.loc[idx[:, '00:00:00':'02:00:00'],:] = 1
Or, if the second level holds datetime.time objects:
import datetime
idx = pd.IndexSlice
df1.loc[idx[:, datetime.time(0, 0, 0):datetime.time(2, 0, 0)],:] = 1
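Note that label-based slicing on a MultiIndex requires the index to be lexsorted; if pandas raises an UnsortedIndexError, call df1 = df1.sort_index() first.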
Sample:
print (df1)
aaa
date time
2018-01-26 00:00:00 21.65
00:15:00 NaN
00:30:00 NaN
00:45:00 NaN
01:00:00 NaN
01:15:00 NaN
01:30:00 NaN
01:45:00 NaN
02:00:00 NaN
02:15:00 NaN
02:30:00 NaN
02:45:00 NaN
03:00:00 NaN
2018-01-27 00:00:00 2.00
00:15:00 NaN
00:30:00 NaN
00:45:00 NaN
01:00:00 NaN
01:15:00 NaN
01:30:00 NaN
01:45:00 NaN
02:00:00 NaN
02:15:00 NaN
02:30:00 NaN
02:45:00 NaN
03:00:00 NaN
idx = pd.IndexSlice
df1.loc[idx[:, '00:00:00':'02:00:00'],:] = 1
print (df1)
aaa
date time
2018-01-26 00:00:00 1.0
00:15:00 1.0
00:30:00 1.0
00:45:00 1.0
01:00:00 1.0
01:15:00 1.0
01:30:00 1.0
01:45:00 1.0
02:00:00 1.0
02:15:00 NaN
02:30:00 NaN
02:45:00 NaN
03:00:00 NaN
2018-01-27 00:00:00 1.0
00:15:00 1.0
00:30:00 1.0
00:45:00 1.0
01:00:00 1.0
01:15:00 1.0
01:30:00 1.0
01:45:00 1.0
02:00:00 1.0
02:15:00 NaN
02:30:00 NaN
02:45:00 NaN
03:00:00 NaN
EDIT:
To assign an array, use numpy.tile to repeat it by the number of unique values in the first level:
df1.loc[idx[:, '00:00:00':'02:00:00'],:] = np.tile(np.arange(1, 10),len(df1.index.levels[0]))
print (df1)
aaa
date time
2018-01-26 00:00:00 1.0
00:15:00 2.0
00:30:00 3.0
00:45:00 4.0
01:00:00 5.0
01:15:00 6.0
01:30:00 7.0
01:45:00 8.0
02:00:00 9.0
02:15:00 NaN
02:30:00 NaN
02:45:00 NaN
03:00:00 NaN
2018-01-27 00:00:00 1.0
00:15:00 2.0
00:30:00 3.0
00:45:00 4.0
01:00:00 5.0
01:15:00 6.0
01:30:00 7.0
01:45:00 8.0
02:00:00 9.0
02:15:00 NaN
02:30:00 NaN
02:45:00 NaN
03:00:00 NaN
A more general solution generates the array from the length of the slice:
idx = pd.IndexSlice
len0 = df1.loc[idx[df1.index.levels[0][0], '00:00:00':'02:00:00'],:].shape[0]
len1 = len(df1.index.levels[0])
df1.loc[idx[:, '00:00:00':'02:00:00'],:] = np.tile(np.arange(1, len0 + 1), len1)
Tested with times:
import datetime
idx = pd.IndexSlice
arr =np.tile(np.arange(1, 10),len(df1.index.levels[0]))
df1.loc[idx[:, datetime.time(0, 0, 0):datetime.time(2, 0, 0)],:] = arr
print (df1)
aaa
date time
2018-01-26 00:00:00 1.0
00:15:00 2.0
00:30:00 3.0
00:45:00 4.0
01:00:00 5.0
01:15:00 6.0
01:30:00 7.0
01:45:00 8.0
02:00:00 9.0
02:15:00 NaN
02:30:00 NaN
02:45:00 NaN
03:00:00 NaN
2018-01-27 00:00:00 1.0
00:15:00 2.0
00:30:00 3.0
00:45:00 4.0
01:00:00 5.0
01:15:00 6.0
01:30:00 7.0
01:45:00 8.0
02:00:00 9.0
02:15:00 NaN
02:30:00 NaN
02:45:00 NaN
03:00:00 NaN
EDIT:
One last problem was found: the solution above works with a one-column DataFrame, but when working with a Series you need to drop the trailing ,: from the .loc indexer:
arr = np.array([[ 21.63920663, 21.62012822, 20.9900515, 21.23217008, 21.19482458, 21.10839656,
20.89631935, 20.79977166, 20.99176729, 20.91567565, 20.87258765, 20.76210464,
20.50357827, 20.55897631, 20.38005033, 20.38227309, 20.54460993, 20.37707293,
20.08279925, 20.09955877, 20.02559575, 20.12390737, 20.2917257, 20.20056711,
20.1589065, 20.41302289, 20.48000767, 20.55604102, 20.70255192]])
import datetime
idx = pd.IndexSlice
df1.loc[idx[:, datetime.time(0, 0, 0): datetime.time(7, 0, 0)]] = arr[0]
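Putting the pieces together, a minimal runnable sketch; the (date, time) index layout is assumed from the printed sample, and np.arange(29) stands in for the question's 29-element array (00:00 to 07:00 at 15-minute steps is 29 slots):
import datetime
import numpy as np
import pandas as pd

# Two days of 15-minute slots as a (date, time) MultiIndex Series
rng = pd.date_range('2018-01-26', periods=2 * 96, freq='15min')
s = pd.Series(np.nan,
              index=pd.MultiIndex.from_arrays([rng.date, rng.time],
                                              names=['date', 'time']),
              name='temp').sort_index()

idx = pd.IndexSlice
window = idx[:, datetime.time(0, 0): datetime.time(7, 0)]

# 29 values per day, tiled across both days; no trailing ",:"
# because s is a Series, not a DataFrame
arr = np.arange(29, dtype=float)
s.loc[window] = np.tile(arr, len(s.index.levels[0]))
print(s.loc[window])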

Create multiple columns in pandas aggregation function

I'd like to create multiple columns while resampling a pandas DataFrame like the built-in ohlc method.
def mhl(data):
    return pandas.Series([np.mean(data), np.max(data), np.min(data)],
                         index=['mean', 'high', 'low'])

ts.resample('30Min', how=mhl)
Dies with
Exception: Must produce aggregated value
Any suggestions? Thanks!
You can pass a dictionary of functions to the resample method:
In [35]: ts
Out[35]:
2013-01-01 00:00:00 0
2013-01-01 00:15:00 1
2013-01-01 00:30:00 2
2013-01-01 00:45:00 3
2013-01-01 01:00:00 4
2013-01-01 01:15:00 5
...
2013-01-01 23:00:00 92
2013-01-01 23:15:00 93
2013-01-01 23:30:00 94
2013-01-01 23:45:00 95
2013-01-02 00:00:00 96
Freq: 15T, Length: 97
Create a dictionary of functions:
mhl = {'m':np.mean, 'h':np.max, 'l':np.min}
Pass the dictionary to the how parameter of resample:
In [36]: ts.resample("30Min", how=mhl)
Out[36]:
h m l
2013-01-01 00:00:00 1 0.5 0
2013-01-01 00:30:00 3 2.5 2
2013-01-01 01:00:00 5 4.5 4
2013-01-01 01:30:00 7 6.5 6
2013-01-01 02:00:00 9 8.5 8
2013-01-01 02:30:00 11 10.5 10
2013-01-01 03:00:00 13 12.5 12
2013-01-01 03:30:00 15 14.5 14
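In modern pandas the how= argument has been removed; assuming the same ts, the equivalent is to pass the functions to agg after resampling and then rename:
out = ts.resample('30Min').agg(['mean', 'max', 'min'])
out.columns = ['mean', 'high', 'low']
print(out)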
