Python Pandas - Get Attributes Associated With Consecutive datetime

I have a data frame with a column of datetimes at minute resolution (generally in hour increments), for example 2018-01-14 03:00, 2018-01-14 04:00, etc.
What I want to do is capture the number of consecutive records at a minute increment that I define (it could be 60 minutes, 15, etc.). Then, I want to associate the first and last reading time in each block.
Take the following data for instance:
id reading_time type
1 1/6/2018 00:00 Interval
1 1/6/2018 01:00 Interval
1 1/6/2018 02:00 Interval
1 1/6/2018 03:00 Interval
1 1/6/2018 06:00 Interval
1 1/6/2018 07:00 Interval
1 1/6/2018 09:00 Interval
1 1/6/2018 10:00 Interval
1 1/6/2018 14:00 Interval
1 1/6/2018 15:00 Interval
I would like the output to look like the following:
id first_reading_time last_reading_time number_of_records type
1 1/6/2018 00:00 1/6/2018 03:00 4 Received
1 1/6/2018 04:00 1/6/2018 05:00 2 Missed
1 1/6/2018 06:00 1/6/2018 07:00 2 Received
1 1/6/2018 08:00 1/6/2018 08:00 1 Missed
1 1/6/2018 09:00 1/6/2018 10:00 2 Received
1 1/6/2018 11:00 1/6/2018 13:00 3 Missed
1 1/6/2018 14:00 1/6/2018 15:00 2 Received
Now, in this example there is only 1 day, and I can write the code for one day; many of the rows extend across multiple days.
What I've been able to do so far is capture this aggregation up to the point the first consecutive records come in, but not the next set, using this code:
df = pd.DataFrame(data=d)
df.reading_time = pd.to_datetime(df.reading_time)
df = df.sort_values('reading_time', ascending=True)
d = pd.Timedelta(60, 'm')
# True where the row is within 60 minutes of the previous one
consecutive = df.reading_time.diff().fillna(pd.Timedelta(0)).abs().le(d)
df['consecutive'] = consecutive
# position of the first non-consecutive row
idx_loc = df.index.get_loc(consecutive.idxmin())
df.iloc[:idx_loc]
first_reading_time = df['reading_time'][0]
last_reading_time = df['reading_time'][idx_loc-1]
where 'd' holds the more granular data shown up top. The line that sets the variable 'consecutive' tags each record as True or False based on the number of minutes' difference between the current row and the previous one. The variable idx_loc captures the number of rows that were consecutive, but it only captures the first set (in this case 1/6/2018 00:00 through 1/6/2018 03:00).
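(As an aside, those True/False flags can be turned into labels for every consecutive run, not just the first, by cumulative-summing their negation. A minimal sketch reusing df and d from the attempt above; note it only labels the received blocks, while the missing blocks still have to be synthesized, which is what the answer below does with asfreq:)
consecutive = df.reading_time.diff().fillna(pd.Timedelta(0)).abs().le(d)
df['run'] = (~consecutive).cumsum()  # a new label every time a gap appears
runs = df.groupby('run')['reading_time'].agg(['first', 'last', 'count'])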
Any help is appreciated.

import pandas as pd
df = pd.DataFrame({'id': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'reading_time': ['1/6/2018 00:00', '1/6/2018 01:00', '1/6/2018 02:00', '1/6/2018 03:00', '1/6/2018 06:00', '1/6/2018 07:00', '1/6/2018 09:00', '1/6/2018 10:00', '1/6/2018 14:00', '1/6/2018 15:00'], 'type': ['Interval', 'Interval', 'Interval', 'Interval', 'Interval', 'Interval', 'Interval', 'Interval', 'Interval', 'Interval']} )
df['reading_time'] = pd.to_datetime(df['reading_time'])
df = df.set_index('reading_time')
df = df.asfreq('1H')
df = df.reset_index()
df['group'] = (pd.isnull(df['id']).astype(int).diff() != 0).cumsum()
result = df.groupby('group')['reading_time'].agg(['first','last','count'])
types = pd.Categorical(['Missed', 'Received'])
result['type'] = types[result.index % 2]
yields
first last count type
group
1 2018-01-06 00:00:00 2018-01-06 03:00:00 4 Received
2 2018-01-06 04:00:00 2018-01-06 05:00:00 2 Missed
3 2018-01-06 06:00:00 2018-01-06 07:00:00 2 Received
4 2018-01-06 08:00:00 2018-01-06 08:00:00 1 Missed
5 2018-01-06 09:00:00 2018-01-06 10:00:00 2 Received
6 2018-01-06 11:00:00 2018-01-06 13:00:00 3 Missed
7 2018-01-06 14:00:00 2018-01-06 15:00:00 2 Received
You could use asfreq to expand the DataFrame to include missing rows:
df = df.set_index('reading_time')
df = df.asfreq('1H')
df = df.reset_index()
# reading_time id type
# 0 2018-01-06 00:00:00 1.0 Interval
# 1 2018-01-06 01:00:00 1.0 Interval
# 2 2018-01-06 02:00:00 1.0 Interval
# 3 2018-01-06 03:00:00 1.0 Interval
# 4 2018-01-06 04:00:00 NaN NaN
# 5 2018-01-06 05:00:00 NaN NaN
# 6 2018-01-06 06:00:00 1.0 Interval
# 7 2018-01-06 07:00:00 1.0 Interval
# 8 2018-01-06 08:00:00 NaN NaN
# 9 2018-01-06 09:00:00 1.0 Interval
# 10 2018-01-06 10:00:00 1.0 Interval
# 11 2018-01-06 11:00:00 NaN NaN
# 12 2018-01-06 12:00:00 NaN NaN
# 13 2018-01-06 13:00:00 NaN NaN
# 14 2018-01-06 14:00:00 1.0 Interval
# 15 2018-01-06 15:00:00 1.0 Interval
Next, use the NaNs in, say, the id column to identify groups:
df['group'] = (pd.isnull(df['id']).astype(int).diff() != 0).cumsum()
then group by the group values to find first and last reading_times for each group:
result = df.groupby('group')['reading_time'].agg(['first','last','count'])
# first last count
# group
# 1 2018-01-06 00:00:00 2018-01-06 03:00:00 4
# 2 2018-01-06 04:00:00 2018-01-06 05:00:00 2
# 3 2018-01-06 06:00:00 2018-01-06 07:00:00 2
# 4 2018-01-06 08:00:00 2018-01-06 08:00:00 1
# 5 2018-01-06 09:00:00 2018-01-06 10:00:00 2
# 6 2018-01-06 11:00:00 2018-01-06 13:00:00 3
# 7 2018-01-06 14:00:00 2018-01-06 15:00:00 2
Since the Missed and Received values alternate, they can be generated from the index:
types = pd.Categorical(['Missed', 'Received'])
result['type'] = types[result.index % 2]
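(This indexing trick assumes the group labels start at 1 and that odd-numbered groups are always the received ones, which holds here because asfreq always begins at an existing reading. An equivalent sketch without Categorical, using numpy:)
import numpy as np
result['type'] = np.where(result.index % 2 == 1, 'Received', 'Missed')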
To handle multiple frequencies on a per-id basis, you could use:
import pandas as pd
df = pd.DataFrame({'id': [1, 1, 1, 1, 1, 2, 2, 2, 2, 2], 'reading_time': ['1/6/2018 00:00', '1/6/2018 01:00', '1/6/2018 02:00', '1/6/2018 03:00', '1/6/2018 06:00', '1/6/2018 07:00', '1/6/2018 09:00', '1/6/2018 10:00', '1/6/2018 14:00', '1/6/2018 15:00'], 'type': ['Interval', 'Interval', 'Interval', 'Interval', 'Interval', 'Interval', 'Interval', 'Interval', 'Interval', 'Interval']} )
df['reading_time'] = pd.to_datetime(df['reading_time'])
df = df.sort_values(by='reading_time')
df = df.set_index('reading_time')
freqmap = {1:'1H', 2:'15T'}
df = df.groupby('id', group_keys=False).apply(
    lambda grp: grp.asfreq(freqmap[grp['id'].iloc[0]]))
df = df.reset_index(level='reading_time')
df['group'] = (pd.isnull(df['id']).astype(int).diff() != 0).cumsum()
grouped = df.groupby('group')
result = grouped['reading_time'].agg(['first','last','count'])
result['id'] = grouped['id'].agg('first')
types = pd.Categorical(['Missed', 'Received'])
result['type'] = types[result.index % 2]
which yields
first last count id type
group
1 2018-01-06 00:00:00 2018-01-06 03:00:00 4 1.0 Received
2 2018-01-06 04:00:00 2018-01-06 05:00:00 2 NaN Missed
3 2018-01-06 06:00:00 2018-01-06 07:00:00 2 1.0 Received
4 2018-01-06 07:15:00 2018-01-06 08:45:00 7 NaN Missed
5 2018-01-06 09:00:00 2018-01-06 09:00:00 1 2.0 Received
6 2018-01-06 09:15:00 2018-01-06 09:45:00 3 NaN Missed
7 2018-01-06 10:00:00 2018-01-06 10:00:00 1 2.0 Received
8 2018-01-06 10:15:00 2018-01-06 13:45:00 15 NaN Missed
9 2018-01-06 14:00:00 2018-01-06 14:00:00 1 2.0 Received
10 2018-01-06 14:15:00 2018-01-06 14:45:00 3 NaN Missed
11 2018-01-06 15:00:00 2018-01-06 15:00:00 1 2.0 Received
It seems plausible that "Missed" rows should not be associated with any id, but to bring the result a little closer to the one you posted, you could ffill to forward-fill NaN id values:
result['id'] = result['id'].ffill()
changes the result to
first last count id type
group
1 2018-01-06 00:00:00 2018-01-06 03:00:00 4 1 Received
2 2018-01-06 04:00:00 2018-01-06 05:00:00 2 1 Missed
3 2018-01-06 06:00:00 2018-01-06 07:00:00 2 1 Received
4 2018-01-06 07:15:00 2018-01-06 08:45:00 7 1 Missed
5 2018-01-06 09:00:00 2018-01-06 09:00:00 1 2 Received
6 2018-01-06 09:15:00 2018-01-06 09:45:00 3 2 Missed
7 2018-01-06 10:00:00 2018-01-06 10:00:00 1 2 Received
8 2018-01-06 10:15:00 2018-01-06 13:45:00 15 2 Missed
9 2018-01-06 14:00:00 2018-01-06 14:00:00 1 2 Received
10 2018-01-06 14:15:00 2018-01-06 14:45:00 3 2 Missed
11 2018-01-06 15:00:00 2018-01-06 15:00:00 1 2 Received
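For convenience, the per-id steps can be collected into a single function. A sketch under the same assumptions (the name summarize_readings is invented here; freqmap maps each id to its expected frequency string):
import pandas as pd

def summarize_readings(df, freqmap):
    # expand each id onto its own regular time grid
    df = df.sort_values('reading_time').set_index('reading_time')
    df = df.groupby('id', group_keys=False).apply(
        lambda grp: grp.asfreq(freqmap[grp['id'].iloc[0]]))
    df = df.reset_index(level='reading_time')
    # label alternating received/missed blocks and aggregate
    df['group'] = (pd.isnull(df['id']).astype(int).diff() != 0).cumsum()
    grouped = df.groupby('group')
    result = grouped['reading_time'].agg(['first', 'last', 'count'])
    result['id'] = grouped['id'].agg('first').ffill()
    result['type'] = pd.Categorical(['Missed', 'Received'])[result.index % 2]
    return result

# e.g. summarize_readings(df, {1: '1H', 2: '15T'})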

Related

How to extract the first and last value from a data sequence based on a column value?

I have a time series dataset that can be created with the following code.
idx = pd.date_range("2018-01-01", periods=100, freq="H")
ts = pd.Series(idx)
dft = pd.DataFrame(ts, columns=["date"])
dft["data"] = ""
dft["data"][0:5] = "a"
dft["data"][5:15] = "b"
dft["data"][15:20] = "c"
dft["data"][20:30] = "d"
dft["data"][30:40] = "a"
dft["data"][40:70] = "c"
dft["data"][70:85] = "b"
dft["data"][85:len(dft)] = "c"
In the data column, the unique values are a,b,c,d. These values are repeating in a sequence in different time windows. I want to capture the first and last value of that time window. How can I do that?
Compute a grouper for your changing values using shift to compare consecutive rows, then use groupby+agg to get the min/max per group:
group = dft.data.ne(dft.data.shift()).cumsum()
dft.groupby(group)['date'].agg(['min', 'max'])
output:
min max
data
1 2018-01-01 00:00:00 2018-01-01 04:00:00
2 2018-01-01 05:00:00 2018-01-01 14:00:00
3 2018-01-01 15:00:00 2018-01-01 19:00:00
4 2018-01-01 20:00:00 2018-01-02 05:00:00
5 2018-01-02 06:00:00 2018-01-02 15:00:00
6 2018-01-02 16:00:00 2018-01-03 21:00:00
7 2018-01-03 22:00:00 2018-01-04 12:00:00
8 2018-01-04 13:00:00 2018-01-05 03:00:00
Edit: combining with the original data:
dft.groupby(group).agg({'data': 'first', 'date': ['min', 'max']})
output:
data date
first min max
data
1 a 2018-01-01 00:00:00 2018-01-01 04:00:00
2 b 2018-01-01 05:00:00 2018-01-01 14:00:00
3 c 2018-01-01 15:00:00 2018-01-01 19:00:00
4 d 2018-01-01 20:00:00 2018-01-02 05:00:00
5 a 2018-01-02 06:00:00 2018-01-02 15:00:00
6 c 2018-01-02 16:00:00 2018-01-03 21:00:00
7 b 2018-01-03 22:00:00 2018-01-04 12:00:00
8 c 2018-01-04 13:00:00 2018-01-05 03:00:00
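(On pandas 0.25+, the same combined aggregation can be written with named aggregation, which also yields flat column names; a sketch:)
dft.groupby(group).agg(data=('data', 'first'), start=('date', 'min'), end=('date', 'max'))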

How to add a new categorical column with numbering as per time Interval in Pandas

Value
2021-07-15 00:00:00 10
2021-07-15 06:00:00 10
2021-07-15 12:00:00 10
2021-07-15 18:00:00 10
2021-07-16 00:00:00 20
2021-07-16 06:00:00 10
2021-07-16 12:00:00 10
2021-07-16 18:00:00 20
I want to add a column that numbers each row by its time of day:
00:00:00 1
06:00:00 2
12:00:00 3
18:00:00 4
Eventually, I want something like this
Value Number
2021-07-15 00:00:00 10 1
2021-07-15 06:00:00 10 2
2021-07-15 12:00:00 10 3
2021-07-15 18:00:00 10 4
2021-07-16 00:00:00 20 1
2021-07-16 06:00:00 10 2
2021-07-16 12:00:00 10 3
2021-07-16 18:00:00 20 4
and so on
I want the Number column to always be 1 whenever the time is 00:00:00, 2 whenever it is 06:00:00, 3 whenever it is 12:00:00, and 4 whenever it is 18:00:00. That way I will have a categorical column holding only the values 1, 2, 3, 4.
Sorry, new here, so I don't have enough rep to comment. But @Keiku's solution is closer than you realise. If you replace .time by .hour, you get the hour of the day. Integer-divide that by 6 to get 0-3 categories for 0:00 to 18:00. If you must have them in the range 1-4 specifically, simply add 1.
To borrow @Keiku's example code:
import pandas as pd
df = pd.DataFrame([
    '2021-07-15 00:00:00 0.48',
    '2021-07-15 06:00:00 80.00',
    '2021-07-15 12:00:00 6.10',
    '2021-07-15 18:00:00 1400.00',
    '2021-07-16 00:00:00 1400.00'
], columns=['value'])
df['date'] = pd.to_datetime(df['value'].str[:19])
df.sort_values(['date'], ascending=[True], inplace=True)
df['category'] = df['date'].dt.hour // 6  # + 1 if you want this to be 1-4
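(If you specifically want a categorical dtype, as the question asks, the computed numbers can be cast; a sketch:)
df['category'] = (df['date'].dt.hour // 6 + 1).astype('category')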
You can use pd.to_datetime to convert to datetime and .dt.time to extract the time. You can use pd.factorize for 1,2,3,4 categories.
import pandas as pd
df = pd.DataFrame([
    '2021-07-15 00:00:00 0.48',
    '2021-07-15 06:00:00 80.00',
    '2021-07-15 12:00:00 6.10',
    '2021-07-15 18:00:00 1400.00',
    '2021-07-16 00:00:00 1400.00'
], columns=['value'])
df
#                          value
# 0     2021-07-15 00:00:00 0.48
# 1    2021-07-15 06:00:00 80.00
# 2     2021-07-15 12:00:00 6.10
# 3  2021-07-15 18:00:00 1400.00
# 4  2021-07-16 00:00:00 1400.00
df['date'] = pd.to_datetime(df['value'].str[:19])
df.sort_values(['date'], ascending=[True], inplace=True)
df['time'] = df['date'].dt.time
df['index'], _ = pd.factorize(df['time'])
df['index'] += 1
df
#                          value                date      time  index
# 0     2021-07-15 00:00:00 0.48 2021-07-15 00:00:00  00:00:00      1
# 1    2021-07-15 06:00:00 80.00 2021-07-15 06:00:00  06:00:00      2
# 2     2021-07-15 12:00:00 6.10 2021-07-15 12:00:00  12:00:00      3
# 3  2021-07-15 18:00:00 1400.00 2021-07-15 18:00:00  18:00:00      4
# 4  2021-07-16 00:00:00 1400.00 2021-07-16 00:00:00  00:00:00      1
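(Note that pd.factorize numbers values in order of first appearance, so the 1-4 result above depends on the rows being sorted and starting at midnight. An explicit mapping avoids that dependency; a sketch:)
df['index'] = df['date'].dt.hour.map({0: 1, 6: 2, 12: 3, 18: 4})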

How to find occurrence of consecutive events in python timeseries data frame?

I have got a time series of meteorological observations with date and value columns:
import numpy as np
import pandas as pd

df = pd.DataFrame({'date': ['11/10/2017 0:00', '11/10/2017 03:00', '11/10/2017 06:00', '11/10/2017 09:00', '11/10/2017 12:00',
                            '11/11/2017 0:00', '11/11/2017 03:00', '11/11/2017 06:00', '11/11/2017 09:00', '11/11/2017 12:00',
                            '11/12/2017 00:00', '11/12/2017 03:00', '11/12/2017 06:00', '11/12/2017 09:00', '11/12/2017 12:00'],
                   'value': [850, np.nan, np.nan, np.nan, np.nan, 500, 650, 780, np.nan, 800, 350, 690, 780, np.nan, np.nan],
                   'consecutive_hour': [3, 0, 0, 0, 0, 3, 6, 9, 0, 3, 3, 6, 9, 0, 0]})
With this DataFrame, I want a third column, consecutive_hour, such that whenever the value at a timestamp is less than 1000, that row gets a consecutive_hour value of 3 hours, and consecutive such occurrences accumulate to 6, 9, etc., as above.
Lastly, I want to summarize the table by counting, for each consecutive-hours value, the number of days on which it occurs, so that the summary table looks like:
df_summary = pd.DataFrame({'consecutive_hours':[3,6,9,12],
'number_of_day':[2,0,2,0]})
I tried several online solutions and methods like shift(), diff(), etc., as mentioned in How to groupby consecutive values in pandas DataFrame
and more; I spent several days on it but no luck yet.
I would highly appreciate help on this issue.
Thanks!
Input data:
>>> df
date value
0 2017-11-10 00:00:00 850.0
1 2017-11-10 03:00:00 NaN
2 2017-11-10 06:00:00 NaN
3 2017-11-10 09:00:00 NaN
4 2017-11-10 12:00:00 NaN
5 2017-11-11 00:00:00 500.0
6 2017-11-11 03:00:00 650.0
7 2017-11-11 06:00:00 780.0
8 2017-11-11 09:00:00 NaN
9 2017-11-11 12:00:00 800.0
10 2017-11-12 00:00:00 350.0
11 2017-11-12 03:00:00 690.0
12 2017-11-12 06:00:00 780.0
13 2017-11-12 09:00:00 NaN
14 2017-11-12 12:00:00 NaN
The cumcount_reset function is adapted from this answer by @jezrael:
Python pandas cumsum with reset everytime there is a 0
# Count consecutive True values, resetting to 0 at each False
cumcount_reset = \
    lambda b: b.cumsum().sub(b.cumsum().where(~b).ffill().fillna(0)).astype(int)

# Flag readings below 1000 (NaN compares as False), count consecutive
# flags within each day, and scale by the 3-hour step
df["consecutive_hour"] = (df.set_index("date")["value"] < 1000) \
    .groupby(pd.Grouper(freq="D")) \
    .apply(lambda b: cumcount_reset(b)).mul(3) \
    .reset_index(drop=True)
Output result:
>>> df
date value consecutive_hour
0 2017-11-10 00:00:00 850.0 3
1 2017-11-10 03:00:00 NaN 0
2 2017-11-10 06:00:00 NaN 0
3 2017-11-10 09:00:00 NaN 0
4 2017-11-10 12:00:00 NaN 0
5 2017-11-11 00:00:00 500.0 3
6 2017-11-11 03:00:00 650.0 6
7 2017-11-11 06:00:00 780.0 9
8 2017-11-11 09:00:00 NaN 0
9 2017-11-11 12:00:00 800.0 3
10 2017-11-12 00:00:00 350.0 3
11 2017-11-12 03:00:00 690.0 6
12 2017-11-12 06:00:00 780.0 9
13 2017-11-12 09:00:00 NaN 0
14 2017-11-12 12:00:00 NaN 0
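To see what cumcount_reset does in isolation, a small check on a toy Series:
s = pd.Series([True, True, False, True, True, True])
cumcount_reset(s).tolist()
# [1, 2, 0, 1, 2, 3]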
Summary table
# Keep the peak of each daily run (rows where the count stops increasing),
# then count how many runs reach each consecutive-hour total
df_summary = df.loc[df.groupby(pd.Grouper(key="date", freq="D"))["consecutive_hour"] \
                      .apply(lambda h: (h - h.shift(-1).fillna(0)) > 0),
                    "consecutive_hour"] \
                .value_counts().reindex([3, 6, 9, 12], fill_value=0) \
                .rename("number_of_day") \
                .rename_axis("consecutive_hour") \
                .reset_index()
>>> df_summary
consecutive_hour number_of_day
0 3 2
1 6 0
2 9 2
3 12 0

Flagging list of datetimes within date ranges in pandas dataframe

I've looked around (eg.
Python - Locating the closest timestamp) but can't find anything on this.
I have a list of datetimes, and a dataframe containing 10k + rows, of start and end times (formatted as datetimes).
The dataframe is effectively listing parameters for runs of an instrument.
The list describes times from an alarm event.
The datetime list items are all within a row (i.e. between a start and end time) in the dataframe. Is there an easy way to locate the rows which would contain the timeframe within which the alarm time would be? (sorry for poor wording there!)
eg.
for i in alarms:
    df.loc[(df.start_time < i) & (df.end_time > i), 'Flag'] = 'Alarm'
(this didn't work but shows my approach)
Example datasets
# making list of datetimes for the alarms
df = pd.DataFrame({'Alarms':["18/07/19 14:56:21", "19/07/19 15:05:15", "20/07/19 15:46:00"]})
df['Alarms'] = pd.to_datetime(df['Alarms'])
alarms = list(df.Alarms.unique())
# dataframe of runs containing start and end times
n=33
rng1 = pd.date_range('2019-07-18', '2019-07-22', periods=n)
rng2 = pd.date_range('2019-07-18 03:00:00', '2019-07-22 03:00:00', periods=n)
df = pd.DataFrame({ 'start_date': rng1, 'end_Date': rng2})
Herein a flag would go against line (well, index) 4, 13 and 21.
You can use pandas.IntervalIndex here:
# Create and set IntervalIndex
intervals = pd.IntervalIndex.from_arrays(df.start_date, df.end_Date)
df = df.set_index(intervals)
# Update using loc
df.loc[alarms, 'flag'] = 'alarm'
# Finally, reset_index
df = df.reset_index(drop=True)
[out]
start_date end_Date flag
0 2019-07-18 00:00:00 2019-07-18 03:00:00 NaN
1 2019-07-18 03:00:00 2019-07-18 06:00:00 NaN
2 2019-07-18 06:00:00 2019-07-18 09:00:00 NaN
3 2019-07-18 09:00:00 2019-07-18 12:00:00 NaN
4 2019-07-18 12:00:00 2019-07-18 15:00:00 alarm
5 2019-07-18 15:00:00 2019-07-18 18:00:00 NaN
6 2019-07-18 18:00:00 2019-07-18 21:00:00 NaN
7 2019-07-18 21:00:00 2019-07-19 00:00:00 NaN
8 2019-07-19 00:00:00 2019-07-19 03:00:00 NaN
9 2019-07-19 03:00:00 2019-07-19 06:00:00 NaN
10 2019-07-19 06:00:00 2019-07-19 09:00:00 NaN
11 2019-07-19 09:00:00 2019-07-19 12:00:00 NaN
12 2019-07-19 12:00:00 2019-07-19 15:00:00 NaN
13 2019-07-19 15:00:00 2019-07-19 18:00:00 alarm
14 2019-07-19 18:00:00 2019-07-19 21:00:00 NaN
15 2019-07-19 21:00:00 2019-07-20 00:00:00 NaN
16 2019-07-20 00:00:00 2019-07-20 03:00:00 NaN
17 2019-07-20 03:00:00 2019-07-20 06:00:00 NaN
18 2019-07-20 06:00:00 2019-07-20 09:00:00 NaN
19 2019-07-20 09:00:00 2019-07-20 12:00:00 NaN
20 2019-07-20 12:00:00 2019-07-20 15:00:00 NaN
21 2019-07-20 15:00:00 2019-07-20 18:00:00 alarm
22 2019-07-20 18:00:00 2019-07-20 21:00:00 NaN
23 2019-07-20 21:00:00 2019-07-21 00:00:00 NaN
24 2019-07-21 00:00:00 2019-07-21 03:00:00 NaN
25 2019-07-21 03:00:00 2019-07-21 06:00:00 NaN
26 2019-07-21 06:00:00 2019-07-21 09:00:00 NaN
27 2019-07-21 09:00:00 2019-07-21 12:00:00 NaN
28 2019-07-21 12:00:00 2019-07-21 15:00:00 NaN
29 2019-07-21 15:00:00 2019-07-21 18:00:00 NaN
30 2019-07-21 18:00:00 2019-07-21 21:00:00 NaN
31 2019-07-21 21:00:00 2019-07-22 00:00:00 NaN
32 2019-07-22 00:00:00 2019-07-22 03:00:00 NaN
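(One caveat: .loc raises a KeyError if any alarm falls outside every interval. IntervalIndex.get_indexer returns -1 for such misses instead; a sketch using it before the set_index step:)
intervals = pd.IntervalIndex.from_arrays(df.start_date, df.end_Date)
pos = intervals.get_indexer(alarms)  # -1 where an alarm matches no run
df.loc[df.index[pos[pos >= 0]], 'flag'] = 'alarm'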
You were calling your columns start_date and end_Date, but in your for loop you used start_time and end_time.
try this:
import pandas as pd
df = pd.DataFrame({'Alarms': ["18/07/19 14:56:21", "19/07/19 15:05:15", "20/07/19 15:46:00"]})
df['Alarms'] = pd.to_datetime(df['Alarms'])
alarms = list(df.Alarms.unique())
# dataframe of runs containing start and end times
n = 33
rng1 = pd.date_range('2019-07-18', '2019-07-22', periods=n)
rng2 = pd.date_range('2019-07-18 03:00:00', '2019-07-22 03:00:00', periods=n)
df = pd.DataFrame({'start_date': rng1, 'end_Date': rng2})
for i in alarms:
    df.loc[(df.start_date < i) & (df.end_Date > i), 'Flag'] = 'Alarm'

print(df[df['Flag'] == 'Alarm']['Flag'])
Output:
4 Alarm
13 Alarm
21 Alarm
Name: Flag, dtype: object
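(For a large number of alarms, the row-by-row loop can be replaced with one vectorized pass using pd.merge_asof, which matches each alarm to the last run starting at or before it; a sketch, assuming both frames are sorted by their time columns:)
alarms_df = pd.DataFrame({'alarm': sorted(alarms)})
matched = pd.merge_asof(alarms_df, df, left_on='alarm', right_on='start_date')
hits = matched[matched['alarm'] <= matched['end_Date']]  # alarms that fall inside a run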

Python - filtering lines from data frame

I have a simple data frame:
ID Stime Etime
1 13:00:00 13:15:00
1 14:00:00 14:15:00
2 15:00:00 15:42:00
3 13:00:00 13:25:00
4 15:00:00 15:15:00
4 15:05:00 15:15:00
What I would like to do is merge the last two rows, because they belong to the same ID (ID=4) and the time range of the last row is contained within that of the penultimate row.
What I want the output to be is:
ID Stime Etime
1 13:00:00 13:15:00
1 14:00:00 14:15:00
2 15:00:00 15:42:00
3 13:00:00 13:25:00
4 15:00:00 15:15:00
Solution
def setup(df):
    # Start a new group whenever the gap from the previous row's Etime
    # to this row's Stime exceeds one second
    td = df.Stime - df.Etime.shift()
    td = td.apply(lambda x: x.total_seconds() > 1)
    td.iloc[0] = True
    return td.cumsum()

def collapse(df):
    # Collapse each group to a single row spanning min(Stime) to max(Etime)
    df_ = df.iloc[0, :]
    df_.loc['Stime'] = df.Stime.min()
    df_.loc['Etime'] = df.Etime.max()
    return df_

df['group id'] = df.groupby('ID').apply(setup).values

gbcols = ['ID', 'group id']
fcols = ['ID', 'Stime', 'Etime']
print(df.groupby(gbcols)[fcols].apply(collapse).reset_index(drop=True))
ID Stime Etime
0 1 2016-05-30 13:00:00 2016-05-30 13:15:00
1 1 2016-05-30 14:00:00 2016-05-30 14:15:00
2 2 2016-05-30 15:00:00 2016-05-30 15:42:00
3 3 2016-05-30 13:00:00 2016-05-30 13:25:00
4 4 2016-05-30 15:00:00 2016-05-30 15:15:00
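On current pandas the same collapse can be written without apply, by comparing each Stime to the running maximum Etime seen so far within the ID. A sketch, assuming Stime/Etime are parsed datetimes and rows are sorted by ID and Stime:
prev_end = df.groupby('ID')['Etime'].transform(lambda s: s.cummax().shift())
new_block = df['Stime'] > prev_end  # comparing with NaT gives False, so each ID starts a block
block = new_block.cumsum()
out = (df.groupby(['ID', block])
         .agg({'Stime': 'min', 'Etime': 'max'})
         .reset_index(level=0)
         .reset_index(drop=True))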
