Pandas - Compute data from one column to another - python

Considering the following dataframe:
df = pd.read_json("""{"week":{"0":1,"1":1,"2":1,"3":1,"4":1,"5":1,"6":2,"7":2,"8":2,"9":2,"10":2,"11":2,"12":3,"13":3,"14":3,"15":3,"16":3,"17":3},"extra_hours":{"0":"01:00:00","1":"00:00:00","2":"01:00:00","3":"01:00:00","4":"00:00:00","5":"01:00:00","6":"01:00:00","7":"01:00:00","8":"01:00:00","9":"01:00:00","10":"00:00:00","11":"01:00:00","12":"01:00:00","13":"02:00:00","14":"01:00:00","15":"02:00:00","16":"00:00:00","17":"00:00:00"},"extra_hours_over":{"0":null,"1":null,"2":null,"3":null,"4":null,"5":null,"6":null,"7":null,"8":null,"9":null,"10":null,"11":null,"12":null,"13":null,"14":null,"15":null,"16":null,"17":null}}""")
df.tail(6)
    week extra_hours extra_hours_over
12     3    01:00:00              NaN
13     3    02:00:00              NaN
14     3    01:00:00              NaN
15     3    02:00:00              NaN
16     3    00:00:00              NaN
17     3    00:00:00              NaN
Now, in every week the maximum amount of extra_hours is 4h, meaning I have to subtract 30-minute blocks from the extra_hours column and move them into the extra_hours_over column, so that in every week the total sum of extra_hours is at most 4h.
So, given the example dataframe, a possible solution (for week 3) would be like this:
    week extra_hours extra_hours_over
12     3    01:00:00         00:00:00
13     3    01:30:00         00:30:00
14     3    00:30:00         00:30:00
15     3    01:00:00         01:00:00
16     3    00:00:00         00:00:00
17     3    00:00:00         00:00:00
I would need to aggregate total extra_hours per week, check which weeks exceed 4h, and then randomly subtract half-hour chunks from the days of those weeks.
What would be the easiest/most direct way to achieve this?

Here is one attempt at what you seem to be asking. The idea is simple, although the code is fairly verbose:
1) Create some helper columns (minutes, extra_minutes, the weekly total).
2) Loop while any week's total is above 240 minutes, working each iteration on the subset of offending weeks.
3) In the loop, use np.random.choice to select a day to remove 30 min from.
4) Apply the changes to minutes and extra_minutes.
The code:
import numpy as np
import pandas as pd

df = pd.read_json("""{"week":{"0":1,"1":1,"2":1,"3":1,"4":1,"5":1,"6":2,"7":2,"8":2,"9":2,"10":2,"11":2,"12":3,"13":3,"14":3,"15":3,"16":3,"17":3},"extra_hours":{"0":"01:00:00","1":"00:00:00","2":"01:00:00","3":"01:00:00","4":"00:00:00","5":"01:00:00","6":"01:00:00","7":"01:00:00","8":"01:00:00","9":"01:00:00","10":"00:00:00","11":"01:00:00","12":"01:00:00","13":"02:00:00","14":"01:00:00","15":"02:00:00","16":"00:00:00","17":"00:00:00"},"extra_hours_over":{"0":null,"1":null,"2":null,"3":null,"4":null,"5":null,"6":null,"7":null,"8":null,"9":null,"10":null,"11":null,"12":null,"13":null,"14":null,"15":null,"16":null,"17":null}}""")

# helper columns: extra time in minutes, and the minutes moved to "over"
df['minutes'] = pd.DatetimeIndex(df['extra_hours']).hour * 60 + pd.DatetimeIndex(df['extra_hours']).minute
df['extra_minutes'] = 0
df['tot_time'] = df.groupby('week')['minutes'].transform('sum')

# keep removing 30-minute chunks while any week exceeds 240 minutes (4h)
while not df[df['tot_time'] > 240].empty:
    # for each offending week, randomly pick one day that still has >= 30 min left
    mask = df[(df['minutes'] >= 30) & (df['tot_time'] > 240)].groupby('week').apply(lambda x: np.random.choice(x.index)).values
    df.loc[mask, 'minutes'] -= 30
    df.loc[mask, 'extra_minutes'] += 30
    df['tot_time'] = df.groupby('week')['minutes'].transform('sum')

df['extra_hours_over'] = df['extra_minutes'].apply(lambda x: pd.Timedelta(minutes=x))
df['extra_hours'] = df['minutes'].apply(lambda x: pd.Timedelta(minutes=x))
df.drop(['minutes', 'extra_minutes'], axis=1).tail(6)
Out[1]:
    week extra_hours extra_hours_over  tot_time
12     3    00:30:00         00:30:00       240
13     3    01:30:00         00:30:00       240
14     3    00:30:00         00:30:00       240
15     3    01:30:00         00:30:00       240
16     3    00:00:00         00:00:00       240
17     3    00:00:00         00:00:00       240
Note: Because I am using np.random.choice, the same observation can be picked in more than one iteration, so a single observation may end up losing more than 30 min in total.
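If the randomness is not essential, a deterministic variant is to always take the 30-minute chunk from the day with the most minutes left. This is a minimal sketch under that assumption; it reuses the minutes/extra_minutes/tot_time helper columns built above and replaces only the while loop:

# Deterministic variant (an assumption, not part of the original question):
# always remove the 30-minute chunk from the day with the most minutes left.
while not df[df['tot_time'] > 240].empty:
    over = df[(df['minutes'] >= 30) & (df['tot_time'] > 240)]
    # one row per offending week: the index of the day with the largest 'minutes'
    mask = over.groupby('week')['minutes'].idxmax().values
    df.loc[mask, 'minutes'] -= 30
    df.loc[mask, 'extra_minutes'] += 30
    df['tot_time'] = df.groupby('week')['minutes'].transform('sum')

This spreads the deduction across the largest entries first, so no single day is drained while others stay untouched.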

Related

Pandas / Databricks - Creating a new datetime column if another datetime column has x in minutes

I am new to python, so if there is a solution to this somewhere else, I apologize.
I have a dataframe with a column that consists of timestamps (y-m-d-h-m-s). I need to change the timestamps of that column depending on the minutes of the current value:
if 10 min then add 5 min
if 20 min add 10 min
if 30 min add 15 min
if 40 min add 20 min
if 50 min is null
if 60/00 min is null
I believe it would be something like:
df.loc[df['column'].dt.minute == 10, 'column'] = 15
In summary, I am trying to change a column that has 10-minute intervals into 15-minute intervals.
Thanks for your assistance!
For multiple conditions like this, np.select() is a convenient function:
import numpy as np
import pandas as pd
# if 10 min then add 5 min
# if 20 min add 10 min
# if 30 min add 15 min
# if 40 min add 20 min
# if 50 min is null
# if 60/00 min is nulll
df = pd.DataFrame({"date":pd.date_range("1-may-2021","1-may-2021 1:09", freq="5min")})
df["cond"] = np.select([df.date.dt.minute==c for c in [10,20,30,40,50,60,0]],[5,10,15,20,np.nan,np.nan,np.nan], 0)
        date             cond
0   2021-05-01 00:00:00   nan
1   2021-05-01 00:05:00     0
2   2021-05-01 00:10:00     5
3   2021-05-01 00:15:00     0
4   2021-05-01 00:20:00    10
5   2021-05-01 00:25:00     0
6   2021-05-01 00:30:00    15
7   2021-05-01 00:35:00     0
8   2021-05-01 00:40:00    20
9   2021-05-01 00:45:00     0
10  2021-05-01 00:50:00   nan
11  2021-05-01 00:55:00     0
12  2021-05-01 01:00:00   nan
13  2021-05-01 01:05:00     0

Python/Pandas: extract intervals from a large dataframe

I have two pandas DataFrames:
20 million rows of continuous time-series data with a DatetimeIndex (df)
20 thousand rows with two timestamps each (df_seq)
I want to use the second DataFrame to extract all sequences out of the first (all rows of the first between the two timestamps, for each row of the second); each sequence then needs to be transposed into 990 columns, and all sequences have to be combined in a new DataFrame.
So the new DataFrame has one row with 990 columns for each sequence (the case row gets added later).
Right now my code looks like this:
sequences = pd.DataFrame()
for row in df_seq.itertuples(index=True, name='Pandas'):
    sequences = sequences.append(df.loc[row.date:row.end_date].reset_index(drop=True)[:990].transpose())
sequences = sequences.reset_index(drop=True)
This code works, but is terribly slow --> 20-25 min execution time
Is there a way to rewrite this in vectorised operations? Or any other way to improve the performance of this code?
Here's a way to do it. The large dataframe is 'df', and the intervals one is called 'intervals':
inx = pd.date_range(start="2020-01-01", freq="1s", periods=1000)
df = pd.DataFrame(range(len(inx)), index=inx)
df.index.name = "timestamp"

intervals = pd.DataFrame([("2020-01-01 00:00:12", "2020-01-01 00:00:18"),
                          ("2020-01-01 00:01:20", "2020-01-01 00:02:03")],
                         columns=["start_time", "end_time"])
intervals.start_time = pd.to_datetime(intervals.start_time)
intervals.end_time = pd.to_datetime(intervals.end_time)
intervals
# attach to each timestamp the closest interval start at or before it ...
t = pd.merge_asof(df.reset_index(), intervals[["start_time"]], left_on="timestamp", right_on="start_time")
# ... and the closest interval end at or after it
t = pd.merge_asof(t, intervals[["end_time"]], left_on="timestamp", right_on="end_time", direction="forward")
# keep only the timestamps that actually fall inside an interval
t = t[(t.timestamp >= t.start_time) & (t.timestamp <= t.end_time)]
The result is:
timestamp 0 start_time end_time
12 2020-01-01 00:00:12 12 2020-01-01 00:00:12 2020-01-01 00:00:18
13 2020-01-01 00:00:13 13 2020-01-01 00:00:12 2020-01-01 00:00:18
14 2020-01-01 00:00:14 14 2020-01-01 00:00:12 2020-01-01 00:00:18
15 2020-01-01 00:00:15 15 2020-01-01 00:00:12 2020-01-01 00:00:18
16 2020-01-01 00:00:16 16 2020-01-01 00:00:12 2020-01-01 00:00:18
.. ... ... ... ...
119 2020-01-01 00:01:59 119 2020-01-01 00:01:20 2020-01-01 00:02:03
120 2020-01-01 00:02:00 120 2020-01-01 00:01:20 2020-01-01 00:02:03
121 2020-01-01 00:02:01 121 2020-01-01 00:01:20 2020-01-01 00:02:03
122 2020-01-01 00:02:02 122 2020-01-01 00:01:20 2020-01-01 00:02:03
123 2020-01-01 00:02:03 123 2020-01-01 00:01:20 2020-01-01 00:02:03
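The reason this is fast: merge_asof works in a single pass over the sorted timestamps, so the interval-membership test becomes two vectorised merges plus one boolean filter, instead of one .loc slice per interval as in the original loop.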
After the steps from the answer above, I added a groupby and an unstack, and the result is exactly the DataFrame I need.
Execution time is ~30 seconds!
The full code now looks like this:
sequences = pd.merge_asof(df, df_seq[["date"]], left_on="timestamp", right_on="date", )
sequences = pd.merge_asof(sequences, df_seq[["end_date"]], left_on="timestamp", right_on="end_date", direction="forward")
sequences = sequences[(sequences.timestamp >= sequences.date) & (sequences.timestamp <= sequences.end_date)]
sequences = sequences.groupby('date')['feature_1'].apply(lambda df_temp: df_temp.reset_index(drop=True)).unstack().loc[:,:990]
sequences = sequences.reset_index(drop=True)

How can I get different statistics for a rolling datetime range up to a current value in a pandas dataframe?

I have a dataframe that has four different columns and looks like the table below:
index_example  column_a  column_b  column_c  datetime_column
1              A         1,000     1         2020-01-01 11:00:00
2              A         2,000     2         2019-11-01 10:00:00
3              A         5,000     3         2019-12-01 08:00:00
4              B         1,000     4         2020-01-01 05:00:00
5              B         6,000     5         2019-01-01 01:00:00
6              B         7,000     6         2019-04-01 11:00:00
7              A         8,000     7         2019-11-30 07:00:00
8              B         500       8         2020-01-01 05:00:00
9              B         1,000     9         2020-01-01 03:00:00
10             B         2,000     10        2020-01-01 02:00:00
11             A         1,000     11        2019-05-02 01:00:00
Purpose:
For each row, get the different rolling statistics for column_b based on a window of time in the datetime_column, defined as the last N months. The window, however, only includes rows with the same value in column_a.
Code example using a for loop which is not feasible given the size:
from datetime import timedelta

mean_dict = {}
for index, value in enumerate(df.datetime_column):
    test_date = value
    test_column_a = df.column_a[index]
    subset_df = df[(df.datetime_column < test_date) &
                   (df.datetime_column >= test_date - timedelta(days=180)) &
                   (df.column_a == test_column_a)]
    mean_dict[index] = subset_df.column_b.mean()
For example for row #1:
Target date = 2020-01-01 11:00:00
Target value in column_a = A
Date Range: from 2019-07-01 11:00:00 to 2020-01-01 11:00:00
Average would be the mean of rows 2,3,7
If I wanted average for row #2 then it would be:
Target date = 2019-11-01 10:00:00
Target value in column_a = A
Date Range: from 2019-05-01 10:00:00 to 2019-11-01 10:00:00
Average would be the mean of row 11
and so on...
I cannot use the grouper since in reality I do not have dates but datetimes.
Has anyone encountered this before?
Thanks!
EDIT
The dataframe is big (~2M rows), which means that looping is not an option. I already tried looping and creating a subset based on conditional values, but it takes too long.
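One possible vectorised approach, sketched here on the assumption that the column names match the question and that, like the loop, the window should exclude the current row, is a time-based rolling mean per group:

import pandas as pd

# 180-day rolling mean of column_b within each column_a group
df = df.sort_values('datetime_column')
rolled = (df.set_index('datetime_column')
            .groupby('column_a')['column_b']
            .rolling('180D', closed='left')  # closed='left' excludes the current row
            .mean())
# rolled is indexed by (column_a, datetime_column) and can be joined back to df

Other rolling statistics follow by swapping .mean() for .std(), .count(), and so on.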

How do I classify or regroup a dataset based on time variation in python

I need to assign a number to values falling in different hourly ranges. How can I add a new column where each cell is grouped hourly? For instance, all transactions within 00:00:00 to 00:59:59 should be filled with 1, transactions within 01:00:00 to 01:59:59 with 2, and so on up to 23:00:00 to 23:59:59 with 24.
Time_duration = df['period']
print(Time_duration)
0 23:59:56
1 23:59:56
2 23:59:55
3 23:59:53
4 23:59:52
...
74187 00:00:18
74188 00:00:09
74189 00:00:08
74190 00:00:03
74191 00:00:02
# This is the result I desire: each cell grouped hourly, 1 for 00:00:00-00:59:59 up to 24 for 23:00:00-23:59:59.
0 23:59:56 24
1 23:59:56 24
2 23:59:55 24
3 23:59:53 24
4 23:59:52 24
...
74187 00:00:18 1
74188 00:00:09 1
74189 00:00:08 1
74190 00:00:03 1
74191 00:00:02 1
df = df.sort_values(by=["period"])
timeStamp_list = pd.to_datetime(list(df['period']))
df['Hour'] = timeStamp_list.hour + 1  # +1 so that 00:xx:xx maps to 1 and 23:xx:xx to 24
Try this code; it works for me.
You can use regular expressions and str.extract:
import pandas as pd
pattern = r'^(\d{1,2}):'  # capture the digits of the hour
df['hour'] = df['period'].str.extract(pattern).astype('int') + 1  # cast to int so that you can add 1
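An alternative that avoids regular expressions, sketched here on the assumption that period holds well-formed HH:MM:SS strings, is to parse them as timedeltas and read off the hour component:

import pandas as pd

# parse 'HH:MM:SS' strings as timedeltas and take the hours component;
# adding 1 maps 00:xx:xx to 1 and 23:xx:xx to 24, as the question asks
df['hour'] = pd.to_timedelta(df['period']).dt.components.hours + 1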

Reorder timestamps pandas

I have a pandas column that contains timestamps that are unordered. When I sort them, it works fine except for the H:MM:SS values.
d = {
    'A': ['8:00:00', '9:00:00', '10:00:00', '20:00:00', '24:00:00', '26:20:00'],
}
df = pd.DataFrame(data=d)
df = df.sort_values(by='A', ascending=True)
Out:
A
2 10:00:00
3 20:00:00
4 24:00:00
5 26:20:00
0 8:00:00
1 9:00:00
Ideally, I'd like to add a leading zero to the values with a single-digit hour. If I convert them all to timedelta, it converts the times of 24 hours or more into 1 day plus n hours, e.g.:
df['A'] = pd.to_timedelta(df['A'])
A
0 0 days 08:00:00
1 0 days 09:00:00
2 0 days 10:00:00
3 0 days 20:00:00
4 1 days 00:00:00
5 1 days 02:20:00
Intended Output:
A
0 08:00:00
1 09:00:00
2 10:00:00
3 20:00:00
4 24:00:00
5 26:20:00
If you only need to sort by the column as timedelta, you can convert the column to timedelta and use argsort on it to obtain the row order for sorting the data frame:
df.iloc[pd.to_timedelta(df.A).argsort()]
# A
#0 8:00:00
#1 9:00:00
#2 10:00:00
#3 20:00:00
#4 24:00:00
#5 26:20:00
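If you also want the leading zeros the question mentions, one possibility (a sketch, assuming every value is an H:MM:SS or HH:MM:SS string) is to zero-pad the strings to eight characters, after which plain lexicographic sorting gives the intended order:

# '8:00:00' (7 chars) becomes '08:00:00'; 8-char values like '26:20:00' are unchanged
df['A'] = df['A'].str.zfill(8)
df = df.sort_values(by='A')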
