Adding empty dataframe rows based on missing datetime values - python

I am trying to add rows to my pandas dataframe as such:
import pandas as pd
import datetime as dt
d={'datetime':[dt.datetime(2018,3,1,0,0),dt.datetime(2018,3,1,0,10),dt.datetime(2018,3,1,0,40)],
'value':[4.,5.,1.]}
df=pd.DataFrame(d)
Which outputs:
datetime value
0 2018-03-01 00:00:00 4.0
1 2018-03-01 00:10:00 5.0
2 2018-03-01 00:40:00 1.0
What I want to do is add rows from 00:00:00 to 00:40:00 so that every 5 minutes is shown. My desired output looks like this:
datetime value
0 2018-03-01 00:00:00 4.0
1 2018-03-01 00:05:00 NaN
2 2018-03-01 00:10:00 5.0
3 2018-03-01 00:15:00 NaN
4 2018-03-01 00:20:00 NaN
5 2018-03-01 00:25:00 NaN
6 2018-03-01 00:30:00 NaN
7 2018-03-01 00:35:00 NaN
8 2018-03-01 00:40:00 1.0
How do I get there?

You can use pd.DataFrame.resample:
df = df.resample('5Min', on='datetime')['value'].first().reset_index()
print(df)
datetime value
0 2018-03-01 00:00:00 4.0
1 2018-03-01 00:05:00 NaN
2 2018-03-01 00:10:00 5.0
3 2018-03-01 00:15:00 NaN
4 2018-03-01 00:20:00 NaN
5 2018-03-01 00:25:00 NaN
6 2018-03-01 00:30:00 NaN
7 2018-03-01 00:35:00 NaN
8 2018-03-01 00:40:00 1.0
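An alternative, as a sketch starting again from the original df and assuming the datetime column is sorted: set it as the index and use asfreq, which reindexes to a fixed frequency and inserts NaN at the missing timestamps:
out = df.set_index('datetime').asfreq('5min').reset_index()
# one row per 5 minutes; NaN where no value existed
print(out)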

First, you can create a dataframe with your final datetime index, then assign values to it from the second one:
import numpy as np

df1 = pd.DataFrame({'value': np.nan},
                   index=pd.date_range('2018-03-01 00:00:00',
                                       periods=9, freq='5min'))
print(df1)
#Output:
value
2018-03-01 00:00:00 NaN
2018-03-01 00:05:00 NaN
2018-03-01 00:10:00 NaN
2018-03-01 00:15:00 NaN
2018-03-01 00:20:00 NaN
2018-03-01 00:25:00 NaN
2018-03-01 00:30:00 NaN
2018-03-01 00:35:00 NaN
2018-03-01 00:40:00 NaN
Now, assuming your dataframe is the second one, you can add this to the code above:
d = {'datetime': [dt.datetime(2018,3,1,0,0), dt.datetime(2018,3,1,0,10),
                  dt.datetime(2018,3,1,0,40)],
     'value': [4., 5., 1.]}
df2=pd.DataFrame(d)
df2.datetime = pd.to_datetime(df2.datetime)
df2.set_index('datetime',inplace=True)
print(df2)
#Output
value
datetime
2018-03-01 00:00:00 4.0
2018-03-01 00:10:00 5.0
2018-03-01 00:40:00 1.0
Finally:
df1.value = df2.value
print(df1)
#Output
value
2018-03-01 00:00:00 4.0
2018-03-01 00:05:00 NaN
2018-03-01 00:10:00 5.0
2018-03-01 00:15:00 NaN
2018-03-01 00:20:00 NaN
2018-03-01 00:25:00 NaN
2018-03-01 00:30:00 NaN
2018-03-01 00:35:00 NaN
2018-03-01 00:40:00 1.0
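A shorter variant of the same idea, as a sketch: instead of building df1 by hand, reindex df2 directly onto the full 5-minute range:
full_range = pd.date_range('2018-03-01 00:00:00', periods=9, freq='5min')
print(df2.reindex(full_range))  # missing timestamps appear as NaN rows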

Related

resampling and appending to same dataframe

I have a dataframe that I want to resample, appending the results to the original dataframe as a new column.
What I have:
index = pd.date_range('1/1/2000', periods=9, freq='T')
series = pd.Series(range(9), index=index)
series
time value
2000-01-01 00:00:00 0
2000-01-01 00:01:00 1
2000-01-01 00:02:00 2
2000-01-01 00:03:00 3
2000-01-01 00:04:00 4
2000-01-01 00:05:00 5
2000-01-01 00:06:00 6
2000-01-01 00:07:00 7
2000-01-01 00:08:00 8
What I want:
time                 value  mean_resampled
2000-01-01 00:00:00    0.0             2.0
2000-01-01 00:01:00    1.0             NaN
2000-01-01 00:02:00    2.0             NaN
2000-01-01 00:03:00    3.0             NaN
2000-01-01 00:04:00    4.0             NaN
2000-01-01 00:05:00    5.0             6.5
2000-01-01 00:06:00    6.0             NaN
2000-01-01 00:07:00    7.0             NaN
2000-01-01 00:08:00    8.0             NaN
Note: resampling frequency is '5T'
index = pd.date_range('1/1/2000', periods=9, freq='T')
series = pd.Series(range(9), index=index, name='values')
sample = series.resample('5T').mean() # create a sample at some frequency
df = series.to_frame() # convert series to frame
df.loc[sample.index.values, 'mean_resampled'] = sample # use loc to assign new values
values mean_resampled
2000-01-01 00:00:00 0 2.0
2000-01-01 00:01:00 1 NaN
2000-01-01 00:02:00 2 NaN
2000-01-01 00:03:00 3 NaN
2000-01-01 00:04:00 4 NaN
2000-01-01 00:05:00 5 6.5
2000-01-01 00:06:00 6 NaN
2000-01-01 00:07:00 7 NaN
2000-01-01 00:08:00 8 NaN
Use resample to compute the mean and concat to merge your Series with new values.
>>> pd.concat([series, series.resample('5T').mean()], axis=1) \
.rename(columns={0: 'value', 1: 'mean_resampled'})
value mean_resampled
2000-01-01 00:00:00 0 2.0
2000-01-01 00:01:00 1 NaN
2000-01-01 00:02:00 2 NaN
2000-01-01 00:03:00 3 NaN
2000-01-01 00:04:00 4 NaN
2000-01-01 00:05:00 5 6.5
2000-01-01 00:06:00 6 NaN
2000-01-01 00:07:00 7 NaN
2000-01-01 00:08:00 8 NaN
If you have a DataFrame instead of a Series in your real case, you just have to add a new column (selecting the value column keeps the right-hand side one-dimensional):
>>> df['mean_resampled'] = df['value'].resample('5T').mean()
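For reference, a sketch of the same concat approach that avoids the positional rename by naming the pieces up front (the names here are illustrative):
resampled = series.resample('5T').mean().rename('mean_resampled')
out = pd.concat([series.rename('value'), resampled], axis=1)
print(out)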

Finding maximum null values in stretch and generating flag

I have a dataframe with a datetime column and two value columns. For a particular date, I have to find the maximum stretch of null values in column 'X' and replace it with zero in both columns for that date. In addition, I have to create a third column named 'Flag' that carries 1 for every zero imputation in the other two columns and 0 otherwise. In the example below, on January 1st the maximum stretch of null values is 3 rows, so I have to replace those with zero. Similarly, I have to replicate the process for January 2nd.
Below is my sample data:
Datetime X Y
01-01-2018 00:00 1 1
01-01-2018 00:05 nan 2
01-01-2018 00:10 2 nan
01-01-2018 00:15 3 4
01-01-2018 00:20 2 2
01-01-2018 00:25 nan 1
01-01-2018 00:30 nan nan
01-01-2018 00:35 nan nan
01-01-2018 00:40 4 4
02-01-2018 00:00 nan nan
02-01-2018 00:05 2 3
02-01-2018 00:10 2 2
02-01-2018 00:15 2 5
02-01-2018 00:20 2 2
02-01-2018 00:25 nan nan
02-01-2018 00:30 nan 1
02-01-2018 00:35 3 nan
02-01-2018 00:40 nan nan
"Below is the result that I am expecting"
Datetime X Y Flag
01-01-2018 00:00 1 1 0
01-01-2018 00:05 nan 2 0
01-01-2018 00:10 2 nan 0
01-01-2018 00:15 3 4 0
01-01-2018 00:20 2 2 0
01-01-2018 00:25 0 0 1
01-01-2018 00:30 0 0 1
01-01-2018 00:35 0 0 1
01-01-2018 00:40 4 4 0
02-01-2018 00:00 nan nan 0
02-01-2018 00:05 2 3 0
02-01-2018 00:10 2 2 0
02-01-2018 00:15 2 5 0
02-01-2018 00:20 2 2 0
02-01-2018 00:25 nan nan 0
02-01-2018 00:30 nan 1 0
02-01-2018 00:35 3 nan 0
02-01-2018 00:40 nan nan 0
This question is an extension of a previous question: Python - Find maximum null values in stretch and replacing with 0.
First, create consecutive-group ids for each column's runs of NaN; multiplying the 'Y' ids by the frame length keeps them distinct from the 'X' ids once both columns are stacked together:
df1 = df.isna()
df2 = df1.ne(df1.groupby(df1.index.date).shift()).cumsum().where(df1)
df2['Y'] *= len(df2)
print (df2)
X Y
Datetime
2018-01-01 00:00:00 NaN NaN
2018-01-01 00:05:00 2.0 NaN
2018-01-01 00:10:00 NaN 36.0
2018-01-01 00:15:00 NaN NaN
2018-01-01 00:20:00 NaN NaN
2018-01-01 00:25:00 4.0 NaN
2018-01-01 00:30:00 4.0 72.0
2018-01-01 00:35:00 4.0 72.0
2018-01-01 00:40:00 NaN NaN
2018-02-01 00:00:00 6.0 108.0
2018-02-01 00:05:00 NaN NaN
2018-02-01 00:10:00 NaN NaN
2018-02-01 00:15:00 NaN NaN
2018-02-01 00:20:00 NaN NaN
2018-02-01 00:25:00 8.0 144.0
2018-02-01 00:30:00 8.0 NaN
2018-02-01 00:35:00 NaN 180.0
2018-02-01 00:40:00 10.0 180.0
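The ne/shift/cumsum idiom above is easier to see on a small boolean Series; a minimal sketch with made-up values:
s = pd.Series([True, True, False, True])
print(s.ne(s.shift()).cumsum())
# 1, 1, 2, 3 -> a new group id starts wherever the value changes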
Then get the group id with the maximum count - here group 4.0:
a = df2.stack().value_counts().index[0]
print (a)
4.0
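As a quick check of why 4.0 wins: stacking df2 pools the group ids of both columns, and value_counts orders them by frequency, so the id of the longest stretch comes first:
print(df2.stack().value_counts().head(3))
# 4.0 appears 3 times, more than any other group id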
Build a mask of the matching rows to set them to 0; for the Flag column, cast the mask to integer so that True/False maps to 1/0:
mask = df2.eq(a).any(axis=1)
df.loc[mask,:] = 0
df['Flag'] = mask.astype(int)
print (df)
X Y Flag
Datetime
2018-01-01 00:00:00 1.0 1.0 0
2018-01-01 00:05:00 NaN 2.0 0
2018-01-01 00:10:00 2.0 NaN 0
2018-01-01 00:15:00 3.0 4.0 0
2018-01-01 00:20:00 2.0 2.0 0
2018-01-01 00:25:00 0.0 0.0 1
2018-01-01 00:30:00 0.0 0.0 1
2018-01-01 00:35:00 0.0 0.0 1
2018-01-01 00:40:00 4.0 4.0 0
2018-02-01 00:00:00 NaN NaN 0
2018-02-01 00:05:00 2.0 3.0 0
2018-02-01 00:10:00 2.0 2.0 0
2018-02-01 00:15:00 2.0 5.0 0
2018-02-01 00:20:00 2.0 2.0 0
2018-02-01 00:25:00 NaN NaN 0
2018-02-01 00:30:00 NaN 1.0 0
2018-02-01 00:35:00 3.0 NaN 0
2018-02-01 00:40:00 NaN NaN 0
EDIT:
Added a new condition to match only dates from a list:
dates = df.index.floor('d')
filtered = ['2018-01-01','2019-01-01']
m = dates.isin(filtered)
df1 = df.isna() & m[:, None]
df2 = df1.ne(df1.groupby(dates).shift()).cumsum().where(df1)
df2['Y'] *= len(df2)
print (df2)
X Y
Datetime
2018-01-01 00:00:00 NaN NaN
2018-01-01 00:05:00 2.0 NaN
2018-01-01 00:10:00 NaN 36.0
2018-01-01 00:15:00 NaN NaN
2018-01-01 00:20:00 NaN NaN
2018-01-01 00:25:00 4.0 NaN
2018-01-01 00:30:00 4.0 72.0
2018-01-01 00:35:00 4.0 72.0
2018-01-01 00:40:00 NaN NaN
2018-02-01 00:00:00 NaN NaN
2018-02-01 00:05:00 NaN NaN
2018-02-01 00:10:00 NaN NaN
2018-02-01 00:15:00 NaN NaN
2018-02-01 00:20:00 NaN NaN
2018-02-01 00:25:00 NaN NaN
2018-02-01 00:30:00 NaN NaN
2018-02-01 00:35:00 NaN NaN
2018-02-01 00:40:00 NaN NaN
a = df2.stack().value_counts().index[0]
#this also works if the filtered rows contain no NaNs (prevents IndexError: index 0 is out of bounds)
#a = next(iter(df2.stack().value_counts().index), -1)
mask = df2.eq(a).any(axis=1)
df.loc[mask,:] = 0
df['Flag'] = mask.astype(int)
print (df)
X Y Flag
Datetime
2018-01-01 00:00:00 1.0 1.0 0
2018-01-01 00:05:00 NaN 2.0 0
2018-01-01 00:10:00 2.0 NaN 0
2018-01-01 00:15:00 3.0 4.0 0
2018-01-01 00:20:00 2.0 2.0 0
2018-01-01 00:25:00 0.0 0.0 1
2018-01-01 00:30:00 0.0 0.0 1
2018-01-01 00:35:00 0.0 0.0 1
2018-01-01 00:40:00 4.0 4.0 0
2018-02-01 00:00:00 NaN NaN 0
2018-02-01 00:05:00 2.0 3.0 0
2018-02-01 00:10:00 2.0 2.0 0
2018-02-01 00:15:00 2.0 5.0 0
2018-02-01 00:20:00 2.0 2.0 0
2018-02-01 00:25:00 NaN NaN 0
2018-02-01 00:30:00 NaN 1.0 0
2018-02-01 00:35:00 3.0 NaN 0
2018-02-01 00:40:00 NaN NaN 0
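To see the commented fallback from above in action, a sketch: next() with a default avoids the IndexError when the filtered rows contain no NaNs at all:
a = next(iter(df2.stack().value_counts().index), -1)
# -1 matches nothing in df2, so the mask stays all False and no rows are zeroed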

How to add NaN elements in a groupby on a pandas dataframe?

I have data:
id time w
0 39 2018-03-01 00:00:00 1176.000000
1 39 2018-03-01 01:45:00 1033.461538
2 39 2018-03-01 02:00:00 1081.066667
3 39 2018-03-01 02:15:00 1067.909091
4 39 2018-03-01 02:30:00 1026.600000
5 39 2018-03-01 02:45:00 1051.866667
I grouped the original data into fifteen-minute intervals.
But I want to present:
id time w
0 39 2018-03-01 00:00:00 1176.000000
1 39 2018-03-01 00:15:00 NaN
2 39 2018-03-01 00:30:00 NaN
. 39 ... ... ...
. 39 ... ... ...
. 39 2018-03-01 01:30:00 NaN
1 39 2018-03-01 01:45:00 1033.461538
2 39 2018-03-01 02:00:00 1081.066667
3 39 2018-03-01 02:15:00 1067.909091
4 39 2018-03-01 02:30:00 1026.600000
5 39 2018-03-01 02:45:00 1051.866667
I tried to use this, but it did not work:
showData = Data.groupby(['id', pd.Grouper(key='time', freq='15T')])['w'] \
               .mean().replace('', np.nan).reset_index()
I really need your help. Many thanks.
Simply use resample:
df.resample('15min', on='time').mean()
id w
time
2018-03-01 00:00:00 39.0 1176.000000
2018-03-01 00:15:00 NaN NaN
2018-03-01 00:30:00 NaN NaN
2018-03-01 00:45:00 NaN NaN
2018-03-01 01:00:00 NaN NaN
2018-03-01 01:15:00 NaN NaN
2018-03-01 01:30:00 NaN NaN
2018-03-01 01:45:00 39.0 1033.461538
2018-03-01 02:00:00 39.0 1081.066667
2018-03-01 02:15:00 39.0 1067.909091
2018-03-01 02:30:00 39.0 1026.600000
2018-03-01 02:45:00 39.0 1051.866667
To fill in your id column, you can just forward-fill it with ffill():
resampled_df = df.resample('15T', on='time').mean()
resampled_df['id'] = resampled_df['id'].ffill()
resampled_df
id w
time
2018-03-01 00:00:00 39.0 1176.000000
2018-03-01 00:15:00 39.0 NaN
2018-03-01 00:30:00 39.0 NaN
2018-03-01 00:45:00 39.0 NaN
2018-03-01 01:00:00 39.0 NaN
2018-03-01 01:15:00 39.0 NaN
2018-03-01 01:30:00 39.0 NaN
2018-03-01 01:45:00 39.0 1033.461538
2018-03-01 02:00:00 39.0 1081.066667
2018-03-01 02:15:00 39.0 1067.909091
2018-03-01 02:30:00 39.0 1026.600000
2018-03-01 02:45:00 39.0 1051.866667
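If your real data contains several ids, a sketch of the same idea applied per group (assuming the same column names as above):
out = (df.set_index('time')
         .groupby('id')['w']
         .resample('15min')
         .mean()
         .reset_index())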

Return maximum datetime value

This is my code
file = pd.read_excel(open("file name",'rb'),sheetname="data")
max_vol = file["Voltage"].max()
max_time = file.loc["Voltage"]==max_vol,"Timestamp"]
My Timestamp has data like this
0 2018-03-01 00:00:00
1 2018-03-01 00:05:00
2 2018-03-01 00:10:00
3 2018-03-01 00:15:00
4 2018-03-01 00:20:00
5 2018-03-01 00:25:00
6 2018-03-01 00:30:00
7 2018-03-01 00:35:00
8 2018-03-01 00:40:00
9 2018-03-01 00:45:00
10 2018-03-01 00:50:00
11 2018-03-01 00:55:00
12 2018-03-01 01:00:00
13 2018-03-01 01:05:00
14 2018-03-01 01:10:00
15 2018-03-01 01:15:00
16 2018-03-01 01:20:00
When printing max_time, I am getting a result like:
624 2018-03-03 04:00:00
Name: Timestamp, dtype: datetime64[ns]
but I want only
2018-03-03 04:00:00
Can someone help me in this regard?
You can use idxmax to extract the index label of the largest element, and then use pd.DataFrame.loc:
df['datetime'] = pd.to_datetime(df['datetime'])  # convert to datetime
res = df['datetime'].loc[df['voltage'].idxmax()]
If you know your index is an integer range beginning at 0, e.g. [0, 1, 2], you can equivalently use the more efficient .iat or .iloc accessors together with argmax, which returns a position rather than a label.
pd.Series.idxmax returns the index label of the first occurrence of the maximum, and pd.DataFrame.loc permits indexing by index label, so chaining the two gives the desired result.
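Applied to the variable names in the question, a minimal sketch (assuming the Excel columns are really named Voltage and Timestamp):
max_time = file.loc[file['Voltage'].idxmax(), 'Timestamp']
print(max_time)  # a single Timestamp scalar, not a Series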

How can I replace specific values in a time-series dataframe in pandas?

I have the dataframe below (date/time is a multi index) and I want to replace the column values between 00:00:00 and 07:00:00 with a numpy array:
[[ 21.63920663 21.62012822 20.9900515 21.23217008 21.19482458
21.10839656 20.89631935 20.79977166 20.99176729 20.91567565
20.87258765 20.76210464 20.50357827 20.55897631 20.38005033
20.38227309 20.54460993 20.37707293 20.08279925 20.09955877
20.02559575 20.12390737 20.2917257 20.20056711 20.1589065
20.41302289 20.48000767 20.55604102 20.70255192]]
date time
2018-01-26 00:00:00 21.65
00:15:00 NaN
00:30:00 NaN
00:45:00 NaN
01:00:00 NaN
01:15:00 NaN
01:30:00 NaN
01:45:00 NaN
02:00:00 NaN
02:15:00 NaN
02:30:00 NaN
02:45:00 NaN
03:00:00 NaN
03:15:00 NaN
03:30:00 NaN
03:45:00 NaN
04:00:00 NaN
04:15:00 NaN
04:30:00 NaN
04:45:00 NaN
05:00:00 NaN
05:15:00 NaN
05:30:00 NaN
05:45:00 NaN
06:00:00 NaN
06:15:00 NaN
06:30:00 NaN
06:45:00 NaN
07:00:00 NaN
07:15:00 NaN
07:30:00 NaN
07:45:00 NaN
08:00:00 NaN
08:15:00 NaN
08:30:00 NaN
08:45:00 NaN
09:00:00 NaN
09:15:00 NaN
09:30:00 NaN
09:45:00 NaN
10:00:00 NaN
10:15:00 NaN
10:30:00 NaN
10:45:00 NaN
11:00:00 NaN
Name: temp, dtype: float64
<class 'datetime.time'>
How can I do this?
You can use slicers:
idx = pd.IndexSlice
df1.loc[idx[:, '00:00:00':'02:00:00'],:] = 1
Or, if the second level holds time objects:
import datetime
idx = pd.IndexSlice
df1.loc[idx[:, datetime.time(0, 0, 0):datetime.time(2, 0, 0)],:] = 1
Sample:
print (df1)
aaa
date time
2018-01-26 00:00:00 21.65
00:15:00 NaN
00:30:00 NaN
00:45:00 NaN
01:00:00 NaN
01:15:00 NaN
01:30:00 NaN
01:45:00 NaN
02:00:00 NaN
02:15:00 NaN
02:30:00 NaN
02:45:00 NaN
03:00:00 NaN
2018-01-27 00:00:00 2.00
00:15:00 NaN
00:30:00 NaN
00:45:00 NaN
01:00:00 NaN
01:15:00 NaN
01:30:00 NaN
01:45:00 NaN
02:00:00 NaN
02:15:00 NaN
02:30:00 NaN
02:45:00 NaN
03:00:00 NaN
idx = pd.IndexSlice
df1.loc[idx[:, '00:00:00':'02:00:00'],:] = 1
print (df1)
aaa
date time
2018-01-26 00:00:00 1.0
00:15:00 1.0
00:30:00 1.0
00:45:00 1.0
01:00:00 1.0
01:15:00 1.0
01:30:00 1.0
01:45:00 1.0
02:00:00 1.0
02:15:00 NaN
02:30:00 NaN
02:45:00 NaN
03:00:00 NaN
2018-01-27 00:00:00 1.0
00:15:00 1.0
00:30:00 1.0
00:45:00 1.0
01:00:00 1.0
01:15:00 1.0
01:30:00 1.0
01:45:00 1.0
02:00:00 1.0
02:15:00 NaN
02:30:00 NaN
02:45:00 NaN
03:00:00 NaN
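One caveat worth flagging as an assumption: partial label slicing on a MultiIndex generally requires the index to be lexsorted, so if you hit an UnsortedIndexError, sort first:
df1 = df1.sort_index()  # required before slicing an unsorted MultiIndex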
EDIT:
To assign an array, you need numpy.tile to repeat it by the number of unique values in the first index level:
df1.loc[idx[:, '00:00:00':'02:00:00'],:] = np.tile(np.arange(1, 10),len(df1.index.levels[0]))
print (df1)
aaa
date time
2018-01-26 00:00:00 1.0
00:15:00 2.0
00:30:00 3.0
00:45:00 4.0
01:00:00 5.0
01:15:00 6.0
01:30:00 7.0
01:45:00 8.0
02:00:00 9.0
02:15:00 NaN
02:30:00 NaN
02:45:00 NaN
03:00:00 NaN
2018-01-27 00:00:00 1.0
00:15:00 2.0
00:30:00 3.0
00:45:00 4.0
01:00:00 5.0
01:15:00 6.0
01:30:00 7.0
01:45:00 8.0
02:00:00 9.0
02:15:00 NaN
02:30:00 NaN
02:45:00 NaN
03:00:00 NaN
A more general solution builds the array from the length of the slice:
idx = pd.IndexSlice
len0 = df1.loc[idx[df1.index.levels[0][0], '00:00:00':'02:00:00'],:].shape[0]
len1 = len(df1.index.levels[0])
df1.loc[idx[:, '00:00:00':'02:00:00'],:] = np.tile(np.arange(1, len0 + 1), len1)
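As a sanity check, a sketch: the tiled array must have exactly as many elements as the slice selects:
target_len = df1.loc[idx[:, '00:00:00':'02:00:00'], :].shape[0]
assert len0 * len1 == target_len  # 9 slots per day x 2 days = 18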
Tested with times:
import datetime
idx = pd.IndexSlice
arr =np.tile(np.arange(1, 10),len(df1.index.levels[0]))
df1.loc[idx[:, datetime.time(0, 0, 0):datetime.time(2, 0, 0)],:] = arr
print (df1)
aaa
date time
2018-01-26 00:00:00 1.0
00:15:00 2.0
00:30:00 3.0
00:45:00 4.0
01:00:00 5.0
01:15:00 6.0
01:30:00 7.0
01:45:00 8.0
02:00:00 9.0
02:15:00 NaN
02:30:00 NaN
02:45:00 NaN
03:00:00 NaN
2018-01-27 00:00:00 1.0
00:15:00 2.0
00:30:00 3.0
00:45:00 4.0
01:00:00 5.0
01:15:00 6.0
01:30:00 7.0
01:45:00 8.0
02:00:00 9.0
02:15:00 NaN
02:30:00 NaN
02:45:00 NaN
03:00:00 NaN
EDIT:
One last problem was found: the solution above works with a one-column DataFrame, but when working with a Series you need to remove the trailing ', :' from the indexer:
arr = np.array([[ 21.63920663, 21.62012822, 20.9900515, 21.23217008, 21.19482458, 21.10839656,
20.89631935, 20.79977166, 20.99176729, 20.91567565, 20.87258765, 20.76210464,
20.50357827, 20.55897631, 20.38005033, 20.38227309, 20.54460993, 20.37707293,
20.08279925, 20.09955877, 20.02559575, 20.12390737, 20.2917257, 20.20056711,
20.1589065, 20.41302289, 20.48000767, 20.55604102, 20.70255192]])
import datetime
idx = pd.IndexSlice
df1.loc[idx[:, datetime.time(0, 0, 0): datetime.time(7, 0, 0)]] = arr[0]
# no trailing ', :' in the indexer, because df1 is a Series here
