I have a huge dataset of login logs spanning several days, with timestamps in UNIX (epoch) format.
The code is supposed to group logs by start and end time and report log counts and unique ID counts.
I am trying to get stats like:
total log count per hour and unique login IDs per hour;
log count for a configurable window, i.e. 24 hrs, 12 hrs, 6 hrs, 1 hr, etc., plus day-of-the-week and similar options.
I am able to split the data between a start and an end hour, but I am not able to get the counts of logs and unique IDs.
Code:
from datetime import datetime, time

# This keeps the rows whose start and stop times fall between `start` and `end`
start = time(8, 0, 0)
end = time(20, 0, 0)
with open('input', 'r') as infile, open('output', 'w') as outfile:
    for row in infile:
        col = row.strip().split(',')                        # comma-separated fields
        t1 = datetime.fromtimestamp(float(col[1])).time()   # StartTime
        t2 = datetime.fromtimestamp(float(col[2])).time()   # StopTime
        if t1 >= start and t2 <= end:
            outfile.write(row)
Input data format: the data has no headers; the fields are listed below. The number of days in the input is not known in advance.
UserID, StartTime, StopTime, GPS1, GPS2
00022d9064bc,1073260801,1073260803,819251,440006
00022d9064bc,1073260803,1073260810,819213,439954
00904b4557d3,1073260803,1073261920,817526,439458
00022de73863,1073260804,1073265410,817558,439525
00904b14b494,1073260804,1073262625,817558,439525
00022d1406df,1073260807,1073260809,820428,438735
00022d9064bc,1073260801,1073260803,819251,440006
00022dba8f51,1073260801,1073260803,819251,440006
00022de1c6c1,1073260801,1073260803,819251,440006
003065f30f37,1073260801,1073260803,819251,440006
00904b48a3b6,1073260801,1073260803,819251,440006
00904b83a0ea,1073260803,1073260810,819213,439954
00904b85d3cf,1073260803,1073261920,817526,439458
00904b14b494,1073260804,1073265410,817558,439525
00904b99499c,1073260804,1073262625,817558,439525
00904bb96e83,1073260804,1073265163,817558,439525
00904bf91b75,1073260804,1073263786,817558,439525
Expected output (example):
StartTime, EndTime, Day, LogCount, UniqueIDCount
00:00:00, 01:00:00, Mon, 349, 30
StartTime and EndTime are in human-readable format.
Splitting the data by a time range is already achieved; what I am still trying to do is round times off to the bucket and compute the counts of logs and unique IDs. A solution with Pandas is also welcome.
Edit One: more details.
StartTime --> EndTime
1/5/2004, 5:30:01 --> 1/5/2004, 5:30:03
That falls between 5:00:00 --> 6:00:00, so the count of all logs falling in each such time range is what I am trying to find. Similarly for the others:
5:00:00 --> 6:00:00 Hourly Count
00:00:00 --> 6:00:00 Every 6 hours
00:00:00 --> 12:00:00 Every 12 hours
5 Jan 2004, Mon --> count
6 Jan 2004, Tue --> Count
And so on. I am looking for a generic program where I can change the time/hours range as needed.
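For reference, counting each log only in the bucket its StartTime falls in is straightforward with a groupby; here is a rough sketch of that simple case (assuming the CSV layout above; `freq` is the knob to change). The part still missing is handling logs that span several buckets:
import pandas as pd

# sketch: attribute each log to the bucket of its StartTime only
cols = ['UserID', 'StartTime', 'StopTime', 'GPS1', 'GPS2']
df = pd.read_csv('input', header=None, names=cols)
df['ts'] = pd.to_datetime(df.StartTime, unit='s')
stats = df.groupby(pd.Grouper(key='ts', freq='1H')).agg(
    LogCount=('UserID', 'size'),
    UniqueIDCount=('UserID', 'nunique'))
print(stats[stats.LogCount > 0])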
Unfortunately I couldn't find an elegant solution.
Here is my attempt:
import numpy as np
import pandas as pd

fn = r'D:\temp\.data\dart_small.csv'
cols = ['UserID','StartTime','StopTime','GPS1','GPS2']
df = pd.read_csv(fn, header=None, names=cols)

df['m'] = df.StopTime + df.StartTime   # 2 * interval midpoint
df['d'] = df.StopTime - df.StartTime   # interval length

# 'start' and 'end' for the reporting DF: `r`,
# which will contain equal intervals (1 hour in this case)
start = pd.to_datetime(df.StartTime.min(), unit='s').date()
end = pd.to_datetime(df.StopTime.max(), unit='s').date() + pd.Timedelta(days=1)

# building reporting DF: `r`
freq = '1H'   # 1-hour frequency
idx = pd.date_range(start, end, freq=freq)
r = pd.DataFrame(index=idx)
r['start'] = (r.index - pd.Timestamp('1970-01-01')).total_seconds().astype(np.int64)

# 1 hour in seconds, minus one second (so that we will not count it twice)
interval = 60*60 - 1

r['LogCount'] = 0
r['UniqueIDCount'] = 0

for i, row in r.iterrows():
    # intervals overlap test:
    # https://en.wikipedia.org/wiki/Interval_tree#Overlap_test
    # two intervals overlap iff |m1 - m2| < d1 + d2, where m is twice the
    # midpoint and d the length; the usual division by 2 cancels out on both
    # sides, so it is dropped from the calculations of m and d
    u = df[np.abs(df.m - 2*row.start - interval) < df.d + interval].UserID
    r.loc[i, ['LogCount', 'UniqueIDCount']] = [len(u), u.nunique()]

r['Day'] = pd.to_datetime(r.start, unit='s').dt.day_name().str[:3]
r['StartTime'] = pd.to_datetime(r.start, unit='s').dt.time
r['EndTime'] = pd.to_datetime(r.start + interval + 1, unit='s').dt.time

print(r[r.LogCount > 0])
PS: the fewer periods you have in the reporting DF `r`, the faster it will run. So you may want to drop rows (times) if you know beforehand that those timeframes won't contain any data (for example weekends, holidays, etc.).
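For example, a small sketch of dropping weekend hours from the reporting index before the loop (DatetimeIndex.dayofweek has Monday=0):
# keep only Monday..Friday buckets in the reporting index
idx = idx[idx.dayofweek < 5]
r = pd.DataFrame(index=idx)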
Result:
start LogCount UniqueIDCount Day StartTime EndTime
2004-01-05 00:00:00 1073260800 24 15 Mon 00:00:00 01:00:00
2004-01-05 01:00:00 1073264400 5 5 Mon 01:00:00 02:00:00
2004-01-05 02:00:00 1073268000 3 3 Mon 02:00:00 03:00:00
2004-01-05 03:00:00 1073271600 3 3 Mon 03:00:00 04:00:00
2004-01-05 04:00:00 1073275200 2 2 Mon 04:00:00 05:00:00
2004-01-06 12:00:00 1073390400 22 12 Tue 12:00:00 13:00:00
2004-01-06 13:00:00 1073394000 3 2 Tue 13:00:00 14:00:00
2004-01-06 14:00:00 1073397600 3 2 Tue 14:00:00 15:00:00
2004-01-06 15:00:00 1073401200 3 2 Tue 15:00:00 16:00:00
2004-01-10 16:00:00 1073750400 20 11 Sat 16:00:00 17:00:00
2004-01-14 23:00:00 1074121200 218 69 Wed 23:00:00 00:00:00
2004-01-15 00:00:00 1074124800 12 11 Thu 00:00:00 01:00:00
2004-01-15 01:00:00 1074128400 1 1 Thu 01:00:00 02:00:00
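To report on coarser buckets, only freq and interval need to change; e.g. for 6-hour windows:
freq = '6H'              # 6-hour buckets
interval = 6*60*60 - 1   # 6 hours in seconds, minus one second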
Related
I am trying to create a proper bin for a timestamp interval column,
using code such as
df['Bin'] = pd.cut(df['interval_length'],
                   bins=pd.to_timedelta(['00:00:00','00:10:00','00:20:00',
                                         '00:30:00','00:40:00','00:50:00',
                                         '00:60:00']))
The resulting df looks like:
time_interval | bin
00:17:00 (0 days 00:10:00, 0 days 00:20:00]
01:42:00 NaN
00:15:00 (0 days 00:10:00, 0 days 00:20:00]
00:00:00 NaN
00:06:00 (0 days 00:00:00, 0 days 00:10:00]
Which is a little off: I want just the time value, without the days, and I want the upper (last) bin to run to 60 min or inf (or more).
Desired Output:
time_interval | bin
00:17:00 (00:10:00,00:20:00]
01:42:00 (00:60:00,inf]
00:15:00 (00:10:00,00:20:00]
00:00:00 (00:00:00,00:10:00]
00:06:00 (00:00:00,00:10:00]
Thanks for looking!
In pandas, inf does not exist for timedeltas, so the maximal Timedelta value is used instead. Also, the parameter include_lowest=True is needed to include the lowest value. If you want bins filled by timedeltas:
b = pd.to_timedelta(['00:00:00','00:10:00','00:20:00',
                     '00:30:00','00:40:00',
                     '00:50:00','00:60:00'])
b = b.append(pd.Index([pd.Timedelta.max]))
df['Bin'] = pd.cut(df['time_interval'], include_lowest=True, bins=b)
print(df)
time_interval Bin
0 00:17:00 (0 days 00:10:00, 0 days 00:20:00]
1 01:42:00 (0 days 01:00:00, 106751 days 23:47:16.854775]
2 00:15:00 (0 days 00:10:00, 0 days 00:20:00]
3 00:00:00 (-1 days +23:59:59.999999, 0 days 00:10:00]
4 00:06:00 (-1 days +23:59:59.999999, 0 days 00:10:00]
If you want strings instead of timedeltas, use zip to create the labels, appending 'inf' to the edge values:
vals = ['00:00:00','00:10:00','00:20:00',
        '00:30:00','00:40:00','00:50:00','00:60:00']
b = pd.to_timedelta(vals).append(pd.Index([pd.Timedelta.max]))
vals.append('inf')
labels = ['{}-{}'.format(i, j) for i, j in zip(vals[:-1], vals[1:])]
df['Bin'] = pd.cut(df['time_interval'], include_lowest=True, bins=b, labels=labels)
print(df)
time_interval Bin
0 00:17:00 00:10:00-00:20:00
1 01:42:00 00:60:00-inf
2 00:15:00 00:10:00-00:20:00
3 00:00:00 00:00:00-00:10:00
4 00:06:00 00:00:00-00:10:00
You could just pass labels to solve it (include_lowest=True is still needed so that 00:00:00 falls into the first bin):
df['Bin'] = pd.cut(df['interval_length'], include_lowest=True,
                   bins=pd.to_timedelta(['00:00:00','00:10:00','00:20:00','00:30:00',
                                         '00:40:00','00:50:00','00:60:00','24:00:00']),
                   labels=['(00:00:00,00:10:00]', '(00:10:00,00:20:00]',
                           '(00:20:00,00:30:00]', '(00:30:00,00:40:00]',
                           '(00:40:00,00:50:00]', '(00:50:00,00:60:00]',
                           '(00:60:00,inf]'])
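As a quick check of why include_lowest matters here: with pd.cut's default right-closed bins, zero is excluded from the first interval:
import pandas as pd

# 00:00:00 is not inside the right-closed interval (00:00:00, 00:10:00]
print(pd.cut(pd.to_timedelta(['00:00:00']),
             bins=pd.to_timedelta(['00:00:00', '00:10:00'])))
# -> [NaN]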
I have the following df
lst = [[1548828606206000000, 1548840373139000000],
       [1548841285708000000, 1548841458405000000],
       [1548842198276000000, 1548843109519000000],
       [1548844022821000000, 1548844934207000000],
       [1548845431090000000, 1548845539219000000],
       [1548845555332000000, 1548845846621000000],
       [1548847176147000000, 1548851020030000000],
       [1548851704053000000, 1548852256143000000],
       [1548852436514000000, 1548855900767000000],
       [1548856817770000000, 1548857162183000000],
       [1548858736931000000, 1548858979032000000]]
df = pd.DataFrame(lst,columns =['start','end'])
df['start'] = pd.to_datetime(df['start'])
df['end'] = pd.to_datetime(df['end'])
and I would like to get the duration of the event per hour of the day, e.g.:
in my dummy df the first interval starts at 06:10:06 and ends at 09:26:13, so the 6th hour should get 60 min (the hour maximum) - 00:10:06 = 00:49:54; the 7th and 8th hours should get 01:00:00 each; the 9th hour should get 00:26:13 plus the parts of the following rows that overlap the 9th hour (09:44 - 09:41 = 3 min, and 60 min - 00:56 ≈ 4 min), so the total for the 9th hour should be roughly 26 + 3 + 4 min ≈ 00:32:28.
My initial approach was to merge start and end, add dummy points every 3rd row, upsample to 1S, take the difference between rows, and sum up only the actual rows. There must be a more pythonic way of doing this. Any hint would be great.
IIUC, something like this:
df.apply(lambda x: pd.to_timedelta(pd.Series(1, index=pd.date_range(x.start, x.end, freq='S'))
                                     .groupby(pd.Grouper(freq='H')).count(), unit='S'),
         axis=1).sum()
Output:
2019-01-30 06:00:00 00:49:54
2019-01-30 07:00:00 01:00:00
2019-01-30 08:00:00 01:00:00
2019-01-30 09:00:00 00:32:28
2019-01-30 10:00:00 00:33:43
2019-01-30 11:00:00 00:40:24
2019-01-30 12:00:00 00:45:37
2019-01-30 13:00:00 00:45:01
2019-01-30 14:00:00 00:09:48
Freq: H, dtype: timedelta64[ns]
Or to get it down to hours, try:
df.apply(lambda r: pd.to_timedelta(pd.Series(1, index=pd.date_range(r.start, r.end, freq='S'))
                                     .pipe(lambda x: x.groupby(x.index.hour).count()), unit='S'),
         axis=1).sum()
Output:
6 00:49:54
7 01:00:00
8 01:00:00
9 00:32:28
10 00:33:43
11 00:40:24
12 00:45:37
13 00:45:01
14 00:09:48
dtype: timedelta64[ns]
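For larger frames, a vectorized sketch (an addition, not part of the answer above) clips every interval against the hourly bucket edges instead of materializing one row per second; it reuses the df defined in the question:
import numpy as np
import pandas as pd

# bucket edges spanning the data, one per hour
edges = pd.date_range(df.start.min().floor('H'), df.end.max().ceil('H'), freq='H')
# pairwise overlap of every row with every bucket: max(0, min(ends) - max(starts))
lo = np.maximum(df.start.values[:, None], edges[:-1].values[None, :])
hi = np.minimum(df.end.values[:, None], edges[1:].values[None, :])
overlap = (hi - lo).clip(min=np.timedelta64(0, 'ns')).sum(axis=0)
per_hour = pd.Series(overlap, index=edges[:-1])
print(per_hour[per_hour > pd.Timedelta(0)])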
I have a pandas dataframe with two timestamp columns, start and end:
start end
2014-08-28 17:00:00 | 2014-08-29 22:00:00
2014-08-29 10:45:00 | 2014-09-01 17:00:00
2014-09-01 15:00:00 | 2014-09-01 19:00:00
The intention is to aggregate the number of hours that were logged on a given date. So in the case of my example, I would be creating a date range and aggregating the hours over multiple entries:
2014-08-28 -> 7 hrs
2014-08-29 -> 10 hrs + 1 hr 15 min => 11 hrs 15 mins
2014-08-30 -> 24 hrs
2014-08-31 -> 24 hrs
2014-09-01 -> 17 hrs + 4 hrs => 21 hrs
I've tried using timedelta but it only splits into absolute hours, not on a per-day basis.
I've also tried to explode the rows (i.e. split each row on a day basis), but I could only get it to work at a date level, not at a timestamp level.
Any suggestions are greatly appreciated.
You can use pd.date_range to create a minute-by-minute range for each interval; after that, count the minutes spent per day and convert the count to a timedelta:
start end
0 2014-08-28 17:00:00 2014-08-29 22:00:00
1 2014-08-29 10:45:00 2014-09-01 17:00:00
2 2014-09-01 15:00:00 2014-09-01 19:00:00
# Create a minute-by-minute time range from the start to the end of each row,
# then flatten everything into one series of dates
a = pd.Series(sum(df.apply(lambda x: pd.date_range(x['start'], x['end'], freq='min').tolist(), 1).tolist(), [])).dt.date
# Count the minutes per date and convert the counts to timedeltas
a.value_counts().apply(lambda x: pd.to_timedelta(x, 'm'))
Out:
2014-08-29 1 days 11:16:00
2014-08-30 1 days 00:00:00
2014-08-31 1 days 00:00:00
2014-09-01 0 days 21:02:00
2014-08-28 0 days 07:00:00
dtype: timedelta64[ns]
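Note that pd.date_range includes both endpoints, so each row contributes one extra minute (hence 11:16 rather than 11:15, and 21:02 rather than 21:00). Assuming the timestamps are whole minutes, slicing the right endpoint off each per-row range removes the off-by-one:
# same as above, but dropping the inclusive right endpoint of each range
a = pd.Series(sum(df.apply(lambda x: pd.date_range(x['start'], x['end'],
                                                   freq='min')[:-1].tolist(), 1).tolist(), [])).dt.date
print(a.value_counts().apply(lambda x: pd.to_timedelta(x, 'm')))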
Hope this is useful; I guess you'll be able to adjust it to serve your purpose. The way of thinking is the following: store each day and its accumulated time in a dict. If start and end fall on the same day, just add the difference; otherwise add the time until the first midnight, add 24 hours for each full day in between, and add the time from the last midnight to the end. FYI, I guess for 2014-09-01 the result might be 21 hrs.
from datetime import datetime, timedelta
from collections import defaultdict

s = [('2014-08-28 17:00:00', '2014-08-29 22:00:00'),
     ('2014-08-29 10:45:00', '2014-09-01 17:00:00'),
     ('2014-09-01 15:00:00', '2014-09-01 19:00:00')]

def aggregate(time):
    store = defaultdict(timedelta)
    for slice in time:
        start = datetime.strptime(slice[0], "%Y-%m-%d %H:%M:%S")
        end = datetime.strptime(slice[1], "%Y-%m-%d %H:%M:%S")
        start_date = start.date()
        end_date = end.date()
        if start_date == end_date:
            store[start_date] += end - start
        else:
            # first midnight after `start` (using day + 1 directly would break
            # on the last day of a month)
            midnight = datetime.combine(start_date + timedelta(days=1), datetime.min.time())
            part1 = midnight - start
            store[start_date] += part1
            for i in range(1, (end_date - start_date).days):
                next_date = start_date + timedelta(days=i)
                store[next_date] += timedelta(hours=24)
            last_midnight = datetime.combine(end_date, datetime.min.time())
            store[end_date] += end - last_midnight
    return store

r = aggregate(s)
for i in r:
    print(i, r[i])
2014-08-28 7:00:00
2014-08-29 1 day, 11:15:00
2014-08-30 1 day, 0:00:00
2014-08-31 1 day, 0:00:00
2014-09-01 21:00:00
Given a Pandas dataframe which represents when some programs start to work and when finish (i.e. single row - single program):
starts finishes
2018-01-01 12:00 2018-01-01 15:00
2018-01-01 16:00 2018-01-01 20:00
2018-01-01 16:30 2018-01-01 20:00
2018-01-01 17:00 2018-01-01 21:00
...
I need to calculate the number of concurrent programs at every time represented in the table. The table above becomes the following:
time number_of_conc_progs
2018-01-01 12:00 1
2018-01-01 15:00 0
2018-01-01 16:00 1
2018-01-01 16:30 2
2018-01-01 17:00 3
2018-01-01 20:00 1
2018-01-01 21:00 0
...
If a program starts at 12:00 (e.g.) and the current number of processes is n, then at 12:00 the number has value n+1.
If a program finishes at 12:00 (e.g.) and the current number of processes is n, then at 12:00 the number has value n-1.
import pandas as pd

# creation of the dataframe
df = pd.DataFrame([
    ["2018-01-01 12:00", "2018-01-01 15:00"],
    ["2018-01-01 16:00", "2018-01-01 20:00"],
    ["2018-01-01 16:30", "2018-01-01 20:00"],
    ["2018-01-01 17:00", "2018-01-01 21:00"]])
df.columns = ["starts", "finishes"]

# the number of progs increases by 1 at each start time
starts = pd.DataFrame()
starts["time"] = df.starts
starts["number_of_conc_progs"] = 1

# the number of progs decreases by 1 at each finish time
finishes = pd.DataFrame()
finishes["time"] = df.finishes
finishes["number_of_conc_progs"] = -1

# merge the starts and finishes dataframes
result = pd.concat([starts, finishes])

# sort by the time values
result = result.sort_values(by=['time'])

# if there are several starts or finishes at the same time, sum them
result = result.groupby(['time']).sum()

# a cumulative sum gives the actual number of progs running
result.number_of_conc_progs = result.number_of_conc_progs.cumsum()
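With the four sample rows above, printing the result reproduces the expected table:
print(result)
#                   number_of_conc_progs
# time
# 2018-01-01 12:00                     1
# 2018-01-01 15:00                     0
# 2018-01-01 16:00                     1
# 2018-01-01 16:30                     2
# 2018-01-01 17:00                     3
# 2018-01-01 20:00                     1
# 2018-01-01 21:00                     0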
I have two columns: the time an event started and the duration of that event. Like so:
time, duration
1:22:51,41
1:56:29,36
2:02:06,12
2:32:37,38
2:34:51,24
3:24:07,31
3:28:47,59
3:31:19,32
3:42:52,37
3:57:04,58
4:21:55,23
4:40:28,17
4:52:39,51
4:54:48,26
5:17:06,46
6:08:12,1
6:21:34,12
6:22:48,24
7:04:22,1
7:06:28,46
7:19:12,51
7:19:19,4
7:22:27,27
7:32:25,53
I want to create a line chart that shows the number of concurrent events happening throughout the day. Renaming time to start_time and adding a new column that computes the end_time is easy enough (assuming that's the next step) -- what I'm not quite sure I understand is how, afterwards, I can resample this data so I can chart concurrents.
I imagine I want to wind up with something like (but bucketed by the minute):
time, events
1:30:00,1
2:00:00,2
2:30:00,1
3:00:00,1
3:30:00,2
First make it an actual time stamp:
df['time'] = pd.to_datetime('2014-03-14 ' + df['time'])
Now you can get the end times:
df['end_time'] = df['time'] + df['duration'] * pd.offsets.Minute(1)
A way to get the open events is to combine the start and end times, resample and cumsum:
In [11]: open = pd.concat([pd.Series(1, df.time),       # created: add 1
                           pd.Series(-1, df.end_time)   # closed: subtract 1
                          ]).resample('30Min').sum().cumsum()
In [12]: open
Out[12]:
2014-03-14 01:00:00 1
2014-03-14 01:30:00 2
2014-03-14 02:00:00 1
2014-03-14 02:30:00 1
2014-03-14 03:00:00 2
2014-03-14 03:30:00 4
2014-03-14 04:00:00 2
2014-03-14 04:30:00 2
2014-03-14 05:00:00 2
2014-03-14 05:30:00 1
2014-03-14 06:00:00 2
2014-03-14 06:30:00 0
2014-03-14 07:00:00 3
2014-03-14 07:30:00 2
2014-03-14 08:00:00 0
Freq: 30T, dtype: int64
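From here, a step plot gives the concurrency line chart the question asks for (a sketch; drawstyle='steps-post' keeps the count flat between bucket edges):
import matplotlib.pyplot as plt

open.plot(drawstyle='steps-post')   # concurrent events per 30-minute bucket
plt.show()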
You could create a list containing dictionary items with the keys "time" and "events".
Obviously you need to handle the evaluation and manipulation of the time data types accordingly, but you could do something like this:
event_bucket = []
time_interval = (end_time - start_time) / num_of_buckets

# one dict per bucket, each `time_interval` apart
for ii in range(num_of_buckets):
    event_bucket.append({"time": start_time + ii * time_interval, "events": 0})

# an event counts towards every bucket point that falls inside it
for entry in time_entry:
    for bucket in event_bucket:
        if entry["start_time"] <= bucket["time"] <= entry["end_time"]:
            bucket["events"] += 1
If you make num_of_buckets larger, the graph becomes more precise.
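For instance, assuming the times are datetime objects, the hypothetical inputs of the sketch above could be filled in with sample values like this:
from datetime import datetime

# hypothetical inputs for the bucket sketch above
start_time = datetime(2014, 3, 14, 1, 0)
end_time = datetime(2014, 3, 14, 8, 0)
num_of_buckets = 14                      # one bucket every 30 minutes
time_entry = [{"start_time": datetime(2014, 3, 14, 1, 22, 51),
               "end_time": datetime(2014, 3, 14, 2, 3, 51)}]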