How to convert X min, Y sec string to timestamp - python

I have a dataframe with a duration column of strings in a format like:
index  duration
0      26 s
1      24 s
2      4 min, 37 s
3      7 s
4      1 min, 1 s
Is there a pandas or strftime() / strptime() way to convert the duration column to a min/sec timestamp?
I've attempted converting the strings this way, but I run into multiple scenarios after the replacements:
for row in df['index']:
    if "min, " in df['duration'][row]:
        df['duration'][row] = df['duration'][row].replace(' min, ', ':').replace(' s', '')
    else:
        pass
Thanks in advance

Try:
pd.to_timedelta(df['duration'])
Output:
0 0 days 00:00:26
1 0 days 00:00:24
2 0 days 00:04:37
3 0 days 00:00:07
4 0 days 00:01:01
Name: duration, dtype: timedelta64[ns]
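If you specifically want a min/sec string rather than a Timedelta column, one possible follow-up is to format the Timedelta components; a minimal sketch, assuming the sample data above, durations under an hour, and an illustrative column name duration_mmss:
import pandas as pd

df = pd.DataFrame({'duration': ['26 s', '24 s', '4 min, 37 s', '7 s', '1 min, 1 s']})

td = pd.to_timedelta(df['duration'])
comp = td.dt.components

# Build zero-padded "MM:SS" strings from the minutes and seconds components.
df['duration_mmss'] = (comp['minutes'].map('{:02d}'.format)
                       + ':'
                       + comp['seconds'].map('{:02d}'.format))
print(df['duration_mmss'])
# 0    00:26
# 1    00:24
# 2    04:37
# 3    00:07
# 4    01:01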

Related

From unix timestamps to relative date based on a condition from another column in pandas

I have a column of dates as unix timestamps and I need to convert them into relative dates from the starting activity.
The final output should be column D, which expresses the relative time from the activity that has index = 1; in particular, the relative time always has to refer to the first activity (index = 1).
A          index  timestamp     D
activity1  1      1.612946e+09  0
activity2  2      1.614255e+09  80 hours
activity3  1      1.612181e+09  0
activity4  2      1.613045e+09  50 hours
activity5  3      1.637668e+09  430 hours
Any idea?
Use to_datetime with unit='s', then create groups that start whenever index equals 1, take each group's first timestamp, and finally subtract and convert to hours:
df['timestamp'] = pd.to_datetime(df['timestamp'], unit='s')
s = df.groupby(df['index'].eq(1).cumsum())['timestamp'].transform('first')
df['D1'] = df['timestamp'].sub(s).dt.total_seconds().div(3600)
print (df)
A index timestamp D D1
0 activity1 1 2021-02-10 08:33:20 0 0.000000
1 activity2 2 2021-02-25 12:10:00 80 hours 363.611111
2 activity3 1 2021-02-01 12:03:20 0 0.000000
3 activity4 2 2021-02-11 12:03:20 50 hours 240.000000
4 activity5 3 2021-11-23 11:46:40 430 hours 7079.722222
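For reference, a self-contained version of the snippet, rebuilding the frame from the (approximate) unix timestamps shown in the question:
import pandas as pd

df = pd.DataFrame({
    'A': ['activity1', 'activity2', 'activity3', 'activity4', 'activity5'],
    'index': [1, 2, 1, 2, 3],
    'timestamp': [1.612946e+09, 1.614255e+09, 1.612181e+09, 1.613045e+09, 1.637668e+09],
})

df['timestamp'] = pd.to_datetime(df['timestamp'], unit='s')

# Each run of rows starting at index == 1 forms one group; subtract the group's first timestamp.
s = df.groupby(df['index'].eq(1).cumsum())['timestamp'].transform('first')
df['D1'] = df['timestamp'].sub(s).dt.total_seconds().div(3600)
print(df)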

Get the mean of timedelta column

I have a column made of timedelta elements in a dataframe:
time_to_return_ask
0 0 days 00:00:00.046000
1 0 days 00:00:00.204000
2 0 days 00:00:00.336000
3 0 days 00:00:00.362000
4 0 days 00:00:00.109000
...
3240 0 days 00:00:00.158000
3241 0 days 00:00:00.028000
3242 0 days 00:00:00.130000
3243 0 days 00:00:00.035000
3244 0
Name: time_to_return_ask, Length: 3245, dtype: object
I tried to apply the solution from another question by taking the values of the different elements, but I am already stuck. Any idea? Thanks!
What I tried:
df['time_to_return_ask'].values.astype(np.int64)
means = dropped.groupby('ts').mean()
means['new'] = pd.to_timedelta(means['new'])
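No answer is shown above, but one common approach, sketched here under the assumption that the column mixes Timedelta values with bare zeros (which is what keeps its dtype as object), is to coerce it to a real timedelta64 column first, after which .mean() is available:
import pandas as pd

# Hypothetical three-row reproduction of the mixed column from the question.
df = pd.DataFrame({'time_to_return_ask': [pd.Timedelta('0 days 00:00:00.046000'),
                                          pd.Timedelta('0 days 00:00:00.204000'),
                                          0]})

# errors='coerce' turns anything unparseable into NaT instead of raising.
td = pd.to_timedelta(df['time_to_return_ask'], errors='coerce')

# .mean() now works on the timedelta64 column and returns a single Timedelta.
print(td.mean())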

Pandas duration groupby - Start group-range with defined value

I am trying to group a data set of travel durations into 5-minute intervals, starting from 0 to inf. How can I do that?
My sample dataFrame looks like:
Duration
0 00:01:37
1 00:18:19
2 00:22:03
3 00:41:07
4 00:11:54
5 00:21:34
I have used this code: df.groupby([pd.Grouper(key='Duration', freq='5T')]).size()
And I have found following result:
Duration
00:01:37 1
00:06:37 0
00:11:37 1
00:16:37 2
00:21:37 1
00:26:37 0
00:31:37 0
00:36:37 1
00:41:37 0
Freq: 5T, dtype: int64
My expected result is:
Duration Counts
00:00:00 0
00:05:00 1
00:10:00 0
00:15:00 1
00:20:00 1
........ ...
My expectation is the index will start from 00:00:00 instead of 00:01:37.
Or, showing bins will also work for me, I mean:
Duration Counts
0-5 1
5-10 0
10-15 1
15-20 1
20-25 2
........ ...
I need your help please. Thank you.
First, you need to round your times down to the previous 5-minute mark. Then simply count them.
I suppose this is what you are looking for -
import datetime

def round_to_5min(t):
    """Round a timestamp down to the preceding 5-minute mark (the date is an arbitrary placeholder)."""
    return datetime.datetime(1991, 2, 13, t.hour, t.minute - t.minute % 5, 0)

data['new_col'] = pd.to_datetime(data.Duration.map(round_to_5min)).dt.time
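Alternatively, assuming Duration is (or is first converted to) a timedelta column, a sketch along these lines floors each value to the start of its 5-minute bin and fills the empty bins, so the index runs from 00:00:00 in 5-minute steps (the variable names are just illustrative):
import pandas as pd

df = pd.DataFrame({'Duration': pd.to_timedelta(
    ['00:01:37', '00:18:19', '00:22:03', '00:41:07', '00:11:54', '00:21:34'])})

# Floor each duration to the start of its 5-minute bin, then count per bin.
binned = df['Duration'].dt.floor('5min')
full_range = pd.timedelta_range('00:00:00', binned.max(), freq='5min')
counts = binned.value_counts().reindex(full_range, fill_value=0).sort_index()
print(counts)
# 0 days 00:00:00    1
# 0 days 00:05:00    0
# 0 days 00:10:00    1
# ...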

Count String Values in Column across 30 Minute Time Bins using Pandas

I am looking to determine the count of string values in a column across a 3-month data sample. Samples were taken at random times throughout each day. I can group the data by hour, but I require the fidelity of 30-minute intervals (e.g. 0500-0530, 0530-0600) on roughly 10k rows of data.
An example of the data:
datetime stringvalues
2018-06-06 17:00 A
2018-06-07 17:30 B
2018-06-07 17:33 A
2018-06-08 19:00 B
2018-06-09 05:27 A
I have tried setting the datetime column as the index, but I cannot figure out how to group the data on anything other than 'hour', and I don't get the fidelity I need on the string value counts:
df['datetime'] = pd.to_datetime(df['datetime'])
df.index = df['datetime']
df.groupby(df.index.hour).count()
Which returns an output similar to:
datetime stringvalues
datetime
5 0 0
6 2 2
7 5 5
8 1 1
...
I researched multi-indexing and resampling to some length the past two days but I have been unable to find a similar question. The desired result would look something like this:
datetime A B
0500 1 2
0530 3 5
0600 4 6
0630 2 0
....
There is no straightforward way to do a TimeGrouper on the time component, so we do this in two steps:
v = (df.groupby([pd.Grouper(key='datetime', freq='30min'), 'stringvalues'])
.size()
.unstack(fill_value=0))
v.groupby(v.index.time).sum()
stringvalues A B
05:00:00 1 0
17:00:00 1 0
17:30:00 1 1
19:00:00 0 1
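For reference, a self-contained run of the two steps above on the question's sample rows, with one extra line that renders the index in the HHMM style from the desired output (that formatting step is an assumption about the wanted presentation):
import pandas as pd

df = pd.DataFrame({
    'datetime': pd.to_datetime(['2018-06-06 17:00', '2018-06-07 17:30',
                                '2018-06-07 17:33', '2018-06-08 19:00',
                                '2018-06-09 05:27']),
    'stringvalues': ['A', 'B', 'A', 'B', 'A'],
})

# Count per 30-minute calendar bin and per string value, then collapse onto time of day.
v = (df.groupby([pd.Grouper(key='datetime', freq='30min'), 'stringvalues'])
       .size()
       .unstack(fill_value=0))
counts = v.groupby(v.index.time).sum()

# Optional: show the time-of-day index as HHMM strings (e.g. '0500', '1730').
counts.index = [t.strftime('%H%M') for t in counts.index]
print(counts)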

Pandas: converting amount of seconds into timedeltas or times

I have an amount of seconds in a dataframe, let's say:
s = 122
I want to convert it to the following format:
00:02:02.0000
To do that I tried using to_datetime in the following way:
pd.to_datetime(s, format='%H:%M:%S.%f')
However this doesn't work:
ValueError: time data 122 does not match format '%H:%M:%S.%f' (match)
I also tried using unit='ms' instead of format, but then I get the date before the time.
How can I modify my code to get the desired conversion?
It needs to be done in the dataframe using pandas if possible.
EDIT: both jezrael's and MedAli's solutions below are valid; however, jezrael's solution has the advantage of working not only with integers but also with datetime.time as input!
Use to_timedelta after converting the seconds to nanoseconds:
df = pd.DataFrame({'sec':[122,3,5,7,1,0]})
df['t'] = pd.to_timedelta(df['sec'] * 10**9)
print (df)
sec t
0 122 00:02:02
1 3 00:00:03
2 5 00:00:05
3 7 00:00:07
4 1 00:00:01
5 0 00:00:00
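Equivalently, to_timedelta can take the seconds directly through its unit argument, which avoids the manual nanosecond scaling; a small variation on the snippet above:
import pandas as pd

df = pd.DataFrame({'sec': [122, 3, 5, 7, 1, 0]})

# unit='s' interprets the integers as seconds and yields the same timedelta64 column.
df['t'] = pd.to_timedelta(df['sec'], unit='s')
print(df)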
You can edit your code as follows to get the desired result:
df = pd.DataFrame({'sec':[122,3,5,7,1,0]})
df['time'] = pd.to_datetime(df.sec, unit="s").dt.time
Output:
In [10]: df
Out[10]:
sec time
0 122 00:02:02
1 3 00:00:03
2 5 00:00:05
3 7 00:00:07
4 1 00:00:01
5 0 00:00:00
