I want to resample data by datetime with a three-day frequency and sum the weight column, but if the sum of the weights exceeds 1000, the summing should stop. How can I write this condition? I have already written code that resamples the data without any condition:
data1 = data.groupby('buyer_id').resample('3D',on='wh_inbound_sg_time').actual_weight.sum()
Table format:
order_sn|buyer_name|buyer_id|ordersn|whs_code|consignment_no|actual_weight|time
Result:
buyer_id time
19051 2021-08-04 32
2021-08-07 71
2021-08-10 0
2021-08-13 0
2021-08-16 0
2021-08-19 18
2021-08-22 0
2021-08-25 174
2021-08-28 0
2021-08-31 266
2021-09-03 0
2021-09-06 0
2021-09-09 372
2021-09-12 0
2021-09-15 192
2021-09-18 436
2021-09-21 456
64155 2021-09-06 1964
2021-09-09 0
2021-09-12 0
2021-09-15 0
2021-09-18 940
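One possible reading of "stop sum" is to cap the accumulation inside each 3-day bin once the running total goes over 1000. A minimal sketch of that idea, reusing the frame and column names from the question (the helper capped_sum is hypothetical, not part of pandas, and the exact stopping rule may need adjusting):
import pandas as pd

def capped_sum(s, cap=1000):
    # Add values in row order and stop once the running total has exceeded the cap.
    total = 0
    for v in s:
        if total > cap:
            break
        total += v
    return total

# Same resample as above, but with the capped aggregation instead of .sum()
data1 = (data.groupby('buyer_id')
             .resample('3D', on='wh_inbound_sg_time')
             .actual_weight
             .agg(capped_sum))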
How can I give a column the same number every 7 times in a dataframe?
In the last column, 'ww', I want to put 1 for the dates 1-21 through 1-27, 2 for 1-28 through 2-3, and so on:
2 for the next 7 days,
3 for the next 7 days, etc.
In short, I want a number that increases every 7 days, but I am not sure how to write the code.
date people ww
0 2020-01-21 0
1 2020-01-22 0
2 2020-01-23 0
3 2020-01-24 1
4 2020-01-25 0
... ... ...
616 2021-09-28 2289
617 2021-09-29 2883
618 2021-09-30 2564
619 2021-10-01 2484
620 2021-10-02 2247
Since you have daily data, you can do this with simple math:
df["ww"] = (df["date"]-df["date"].min()).dt.days//7+1
>>> df
date ww
0 2021-01-21 1
1 2021-01-22 1
2 2021-01-23 1
3 2021-01-24 1
4 2021-01-25 1
.. ... ..
250 2021-09-28 36
251 2021-09-29 36
252 2021-09-30 37
253 2021-10-01 37
254 2021-10-02 37
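As a quick sanity check of the formula, using the last date shown in the output above:
import pandas as pd

days = (pd.Timestamp("2021-10-02") - pd.Timestamp("2021-01-21")).days  # 254 days since the first date
print(days // 7 + 1)  # 37, matching the last 'ww' value above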
I currently have a list of tuples that look like this:
time_constraints = [
('001', '01/01/2020 10:00 AM', '01/01/2020 11:00 AM'),
('001', '01/03/2020 05:00 AM', '01/03/2020 06:00 AM'),
...
('999', '01/07/2020 07:00 AM', '01/07/2020 08:00 AM')
]
where:
each tuple contains an id, lower_bound, and upper_bound
none of the time frames overlap for a given id
len(time_constraints) can be on the order of 10^4 to 10^5.
My goal is to quickly and efficiently filter a relatively large (millions of rows) Pandas dataframe (df) to include only the rows that match on the id column and fall between the specified lower_bound and upper_bound times (inclusive).
My current plan is to do this:
import pandas as pd

output = []
for i, lower, upper in time_constraints:
    indices = list(df.loc[(df['id'] == i) & (df['timestamp'] >= lower) & (df['timestamp'] <= upper)].index)
    output.extend(indices)
output_df = df.loc[df.index.isin(output)].copy()
However, using a for-loop isn't ideal. I was wondering if there was a better solution (ideally vectorized) using Pandas or NumPy arrays that would be faster.
Edited:
Here's some sample rows of df:
id  timestamp
1   01/01/2020 9:56 AM
1   01/01/2020 10:32 AM
1   01/01/2020 10:36 AM
2   01/01/2020 9:42 AM
2   01/01/2020 9:57 AM
2   01/01/2020 10:02 AM
I already answered a similar case.
To test, I used 100,000 constraints (tc) and 5,000,000 records (df).
Is this what you expect?
>>> df
id timestamp
0 565 2020-08-16 05:40:55
1 477 2020-04-05 22:21:40
2 299 2020-02-22 04:54:34
3 108 2020-08-17 23:54:02
4 041 2020-09-10 10:01:31
... ... ...
4999995 892 2020-12-27 16:16:35
4999996 373 2020-08-29 05:44:34
4999997 659 2020-05-23 20:48:15
4999998 858 2020-09-08 22:58:20
4999999 710 2020-04-10 08:03:14
[5000000 rows x 2 columns]
>>> tc
id lower_bound upper_bound
0 000 2020-01-01 00:00:00 2020-01-04 14:00:00
1 000 2020-01-04 15:00:00 2020-01-08 05:00:00
2 000 2020-01-08 06:00:00 2020-01-11 20:00:00
3 000 2020-01-11 21:00:00 2020-01-15 11:00:00
4 000 2020-01-15 12:00:00 2020-01-19 02:00:00
... ... ... ...
99995 999 2020-12-10 09:00:00 2020-12-13 23:00:00
99996 999 2020-12-14 00:00:00 2020-12-17 14:00:00
99997 999 2020-12-17 15:00:00 2020-12-21 05:00:00
99998 999 2020-12-21 06:00:00 2020-12-24 20:00:00
99999 999 2020-12-24 21:00:00 2020-12-28 11:00:00
[100000 rows x 3 columns]
# from tqdm import tqdm
from itertools import chain

# df = pd.DataFrame(data, columns=['id', 'timestamp'])
tc = pd.DataFrame(time_constraints, columns=['id', 'lower_bound', 'upper_bound'])

g1 = df.groupby('id')
g2 = tc.groupby('id')

indexes = []
# for id_ in tqdm(tc['id'].unique()):
for id_ in tc['id'].unique():
    df1 = g1.get_group(id_)
    df2 = g2.get_group(id_)
    # Build one closed interval per constraint, then keep the timestamps that fall inside any of them.
    ii = pd.IntervalIndex.from_tuples(list(zip(df2['lower_bound'], df2['upper_bound'])),
                                      closed='both')
    indexes.append(pd.cut(df1['timestamp'], bins=ii).dropna().index)

out = df.loc[chain.from_iterable(indexes)]
Performance:
100%|█████████████████████████████████████████████████| 1000/1000 [00:17<00:00, 58.40it/s]
Output result:
>>> out
id timestamp
1326 000 2020-11-10 05:51:00
1685 000 2020-10-07 03:12:48
2151 000 2020-05-08 11:11:18
2246 000 2020-07-06 07:36:57
3995 000 2020-02-02 04:39:11
... ... ...
4996406 999 2020-02-19 15:27:06
4996684 999 2020-02-05 11:13:56
4997408 999 2020-07-09 09:31:31
4997896 999 2020-04-10 03:26:13
4999674 999 2020-04-21 22:57:04
[4942976 rows x 2 columns] # 57024 records filtered
You can use boolean indexing, like so:
output_df = df[pd.Series(list(zip(df['id'],
df['lower_bound'],
df['upper_bound']))).isin(time_constraints)]
The zip function builds a tuple from each row's columns, which is then compared against your list of tuples; wrapping the result in pd.Series produces a Boolean series that can be used to index the dataframe.
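As a further option (not from either answer above, just a hedged sketch): if the timestamps and bounds are real datetimes, the id columns share a dtype in both frames, and the intervals never overlap per id, the whole filter can be done without a Python loop using pd.merge_asof, which pairs each row with the latest lower_bound at or before its timestamp for the same id:
import pandas as pd

tc = pd.DataFrame(time_constraints, columns=['id', 'lower_bound', 'upper_bound'])
tc['lower_bound'] = pd.to_datetime(tc['lower_bound'])
tc['upper_bound'] = pd.to_datetime(tc['upper_bound'])
df['timestamp'] = pd.to_datetime(df['timestamp'])

# Both frames must be globally sorted on the join keys for merge_asof.
merged = pd.merge_asof(df.sort_values('timestamp'),
                       tc.sort_values('lower_bound'),
                       left_on='timestamp', right_on='lower_bound',
                       by='id', direction='backward')

# Keep a row only if it also falls at or before the matched upper bound.
output_df = merged[merged['timestamp'] <= merged['upper_bound']]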
I have a df of about 100000 rows, a sample of which is as follows:
id commodity frequency ms_id created modified measuring_type tariff overshoot_delta timestamp time_series_id quantity type
0 12188 1 900 12191 2019-03-25 12:40:00 2019-11-19 05:38:00 29 0 0 2019-03-16 23:00:00 12188 50.25 220
1 12858 1 900 12861 2019-04-08 15:13:00 2019-11-19 05:39:00 29 0 0 2019-03-16 23:00:00 12858 50.25 220
2 12858 7 900 12861 2019-04-08 15:13:00 2019-11-19 05:39:00 29 0 0 2019-03-16 23:00:00 12858 50.25 220
3 12188 1 900 12191 2019-03-25 12:40:00 2019-11-19 05:38:00 29 10 0 2019-03-16 23:00:00 12188 50.25 250
4 12188 1 900 12191 2019-03-25 12:41:00 2019-11-19 05:38:00 29 10 0 2019-03-16 23:00:00 12188 50.25 250
What I would like to do is to check the values in the columns: commodity, measuring_type, tariff, timestamp, type and see if there are duplicates in any rows. If the values in the above-mentioned columns are exactly the same for any 2 rows, then I want to take the last value (greatest time) from the created column. Such a check has to be done for all the rows in the df.
From the above example, the expected output:
id commodity frequency ms_id created modified measuring_type tariff overshoot_delta timestamp time_series_id quantity type
0 12858 1 900 12861 2019-04-08 15:13:00 2019-11-19 05:39:00 29 0 0 2019-03-16 23:00:00 12858 50.25 220
1 12858 7 900 12861 2019-04-08 15:13:00 2019-11-19 05:39:00 29 0 0 2019-03-16 23:00:00 12858 50.25 220
2 12188 1 900 12191 2019-03-25 12:41:00 2019-11-19 05:38:00 29 10 0 2019-03-16 23:00:00 12188 50.25 250
The first 2 rows had same values for the columns commodity, measuring_type, tariff, timestamp, type, so the time values in the created column have to be compared for those 2 rows and the greatest one (2019-04-08 15:13:00) has to be selected. Similarly for the last 2 rows.
Since the third row had a different value, it shouldn't be dropped and this must be added to the output.
How can this be done?
Thanks
Let us try sort_values then drop_duplicates
df=df.sort_values('created').drop_duplicates(['commodity', 'measuring_type', 'tariff', 'timestamp', 'type'], keep='last')
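A small follow-up sketch (assuming the frame is called df and that created might be stored as text): parsing created first guarantees the sort is chronological, and keep='last' then retains the row with the greatest created within each duplicate group:
import pandas as pd

df['created'] = pd.to_datetime(df['created'])
df = (df.sort_values('created')
        .drop_duplicates(['commodity', 'measuring_type', 'tariff', 'timestamp', 'type'],
                         keep='last')
        .reset_index(drop=True))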
I have a data frame which contains a date and a value. I have to compute the sum of the values for each month,
i.e., df.groupby(pd.Grouper(freq='M'))['Value'].sum()
But the problem is that in my data set each month starts on the 21st and ends on the 20th. Is there a way to tell pandas to group the months from the 21st of one month to the 20th of the next?
Assume the starting and ending dates of my data frame are:
starting_date=datetime.datetime(2015,11,21)
ending_date=datetime.datetime(2017,11,20)
So far I have tried:
starting_date = df['Date'].min()
ending_date = df['Date'].max()
month_wise_sum = []
while starting_date <= ending_date:
    temp = starting_date + datetime.timedelta(days=31)
    e_y = temp.year
    e_m = temp.month
    e_d = 20
    temp = datetime.datetime(e_y, e_m, e_d)
    month_wise_sum.append(df[df['Date'].between(starting_date, temp)]['Value'].sum())
    starting_date = temp + datetime.timedelta(days=1)
print(month_wise_sum)
My code above does the job, but I am still looking for a more pythonic way to achieve it.
My biggest problem is slicing the data frame month-wise,
for example,
2015-11-21 to 2015-12-20
Is there any pythonic way to achieve this?
Thanks in Advance.
For example, consider this as my dataframe. It contains dates from date_range(datetime.datetime(2017, 1, 21), datetime.datetime(2017, 10, 20)).
Input:
Date Value
0 2017-01-21 -1.055784
1 2017-01-22 1.643813
2 2017-01-23 -0.865919
3 2017-01-24 -0.126777
4 2017-01-25 -0.530914
5 2017-01-26 0.579418
6 2017-01-27 0.247825
7 2017-01-28 -0.951166
8 2017-01-29 0.063764
9 2017-01-30 -1.960660
10 2017-01-31 1.118236
11 2017-02-01 -0.622514
12 2017-02-02 -1.416240
13 2017-02-03 1.025384
14 2017-02-04 0.448695
15 2017-02-05 1.642983
16 2017-02-06 -1.386413
17 2017-02-07 0.774173
18 2017-02-08 -1.690147
19 2017-02-09 -1.759029
20 2017-02-10 0.345326
21 2017-02-11 0.549472
22 2017-02-12 0.814701
23 2017-02-13 0.983923
24 2017-02-14 0.551617
25 2017-02-15 0.001959
26 2017-02-16 -0.537112
27 2017-02-17 1.251595
28 2017-02-18 1.448950
29 2017-02-19 -0.452310
.. ... ...
243 2017-09-21 0.791439
244 2017-09-22 1.368647
245 2017-09-23 0.504924
246 2017-09-24 0.214994
247 2017-09-25 -3.020875
248 2017-09-26 -0.440378
249 2017-09-27 1.324862
250 2017-09-28 0.116897
251 2017-09-29 -0.114449
252 2017-09-30 -0.879000
253 2017-10-01 0.088985
254 2017-10-02 -0.849833
255 2017-10-03 1.136802
256 2017-10-04 -0.398931
257 2017-10-05 0.067660
258 2017-10-06 1.080505
259 2017-10-07 0.516830
260 2017-10-08 -0.755461
261 2017-10-09 1.367292
262 2017-10-10 1.444083
263 2017-10-11 -0.840497
264 2017-10-12 -0.090092
265 2017-10-13 0.193068
266 2017-10-14 -0.284673
267 2017-10-15 -1.128397
268 2017-10-16 1.029995
269 2017-10-17 -1.269262
270 2017-10-18 0.320187
271 2017-10-19 0.580825
272 2017-10-20 1.001110
[273 rows x 2 columns]
I want to slice this dataframe like below
Iter-1:
Date Value
0 2017-01-21 -1.055784
1 2017-01-22 1.643813
2 2017-01-23 -0.865919
3 2017-01-24 -0.126777
4 2017-01-25 -0.530914
5 2017-01-26 0.579418
6 2017-01-27 0.247825
7 2017-01-28 -0.951166
8 2017-01-29 0.063764
9 2017-01-30 -1.960660
10 2017-01-31 1.118236
11 2017-02-01 -0.622514
12 2017-02-02 -1.416240
13 2017-02-03 1.025384
14 2017-02-04 0.448695
15 2017-02-05 1.642983
16 2017-02-06 -1.386413
17 2017-02-07 0.774173
18 2017-02-08 -1.690147
19 2017-02-09 -1.759029
20 2017-02-10 0.345326
21 2017-02-11 0.549472
22 2017-02-12 0.814701
23 2017-02-13 0.983923
24 2017-02-14 0.551617
25 2017-02-15 0.001959
26 2017-02-16 -0.537112
27 2017-02-17 1.251595
28 2017-02-18 1.448950
29 2017-02-19 -0.452310
30 2017-02-20 0.616847
iter-2:
Date Value
31 2017-02-21 2.356993
32 2017-02-22 -0.265603
33 2017-02-23 -0.651336
34 2017-02-24 -0.952791
35 2017-02-25 0.124278
36 2017-02-26 0.545956
37 2017-02-27 0.671670
38 2017-02-28 -0.836518
39 2017-03-01 1.178424
40 2017-03-02 0.182758
41 2017-03-03 -0.733987
42 2017-03-04 0.112974
43 2017-03-05 -0.357269
44 2017-03-06 1.454310
45 2017-03-07 -1.201187
46 2017-03-08 0.212540
47 2017-03-09 0.082771
48 2017-03-10 -0.906591
49 2017-03-11 -0.931166
50 2017-03-12 -0.391388
51 2017-03-13 -0.893409
52 2017-03-14 -1.852290
53 2017-03-15 0.368390
54 2017-03-16 -1.672943
55 2017-03-17 -0.934288
56 2017-03-18 -0.154785
57 2017-03-19 0.552378
58 2017-03-20 0.096006
.
.
.
iter-n:
Date Value
243 2017-09-21 0.791439
244 2017-09-22 1.368647
245 2017-09-23 0.504924
246 2017-09-24 0.214994
247 2017-09-25 -3.020875
248 2017-09-26 -0.440378
249 2017-09-27 1.324862
250 2017-09-28 0.116897
251 2017-09-29 -0.114449
252 2017-09-30 -0.879000
253 2017-10-01 0.088985
254 2017-10-02 -0.849833
255 2017-10-03 1.136802
256 2017-10-04 -0.398931
257 2017-10-05 0.067660
258 2017-10-06 1.080505
259 2017-10-07 0.516830
260 2017-10-08 -0.755461
261 2017-10-09 1.367292
262 2017-10-10 1.444083
263 2017-10-11 -0.840497
264 2017-10-12 -0.090092
265 2017-10-13 0.193068
266 2017-10-14 -0.284673
267 2017-10-15 -1.128397
268 2017-10-16 1.029995
269 2017-10-17 -1.269262
270 2017-10-18 0.320187
271 2017-10-19 0.580825
272 2017-10-20 1.001110
So that I can calculate each month's sum of the value series:
[0.7536957367200978, -4.796100620186059, -1.8423374363366014, 2.3780759926221267, 5.753755441349653, -0.01072884830461407, -0.24877912707664018, 11.666305431020149, 3.0772592888909065]
I hope I have explained it thoroughly.
For the purpose of testing my solution, I generated some random data at a daily frequency, but it should work for any frequency.
index = pd.date_range('2015-11-21', '2017-11-20')
df = pd.DataFrame(index=index, data={0: np.random.rand(len(index))})
Here you see that I passed an array of datetimes as the index. Indexing with dates enables a lot of added functionality in pandas. With your data you should do (assuming the Date column already contains only datetime values):
df = df.set_index('Date')
Then I would artificially realign your data by subtracting 20 days from the index:
from datetime import timedelta
df.index -= timedelta(days=20)
and then resample the data to a monthly index, summing all data within the same month:
df.resample('M').sum()
The resulting dataframe is indexed by the last date of each (shifted) month; for me it looks something like:
0
2015-11-30 3.191098
2015-12-31 16.066213
2016-01-31 16.315388
2016-02-29 13.507774
2016-03-31 15.939567
2016-04-30 17.094247
2016-05-31 15.274829
2016-06-30 13.609203
but feel free to reindex it :)
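If you want the labels to show the actual end of each 21st-to-20th window rather than the shifted month end, one hedged way to reindex (monthly is just a placeholder name for the resampled frame) is to push the index forward again by the same 20 days:
from datetime import timedelta

monthly = df.resample('M').sum()
monthly.index = monthly.index + timedelta(days=20)  # e.g. 2015-11-30 becomes 2015-12-20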
Using pandas.cut() could be a quick solution for you:
import pandas as pd
import numpy as np
start_date = "2015-11-21"
# As @ALollz mentioned, the month containing the original end_date='2017-11-20' was missing:
# pd.date_range() only generates dates inside the specified range (between start= and end=),
# so '2017-11-30' (with freq='M') exceeds the original end='2017-11-20' and is cut off.
# A similar situation applies to start_date (with freq='MS'), where the start month might be cut off.
# An easy fix is to extend end_date into the next month, use the end date of its own
# month ('2017-11-30'), or replace end= with periods=25.
end_date = "2017-12-20"
# create a testing dataframe
df = pd.DataFrame({ "date": pd.date_range(start_date, periods=710, freq='D'), "value": np.random.randn(710)})
# set up bins to include all dates to create expected date ranges
bins = [ d.replace(day=20) for d in pd.date_range(start_date, end_date, freq="M") ]
# group and summary using the ranges from the above bins
df.groupby(pd.cut(df.date, bins)).sum()
value
date
(2015-11-20, 2015-12-20] -5.222231
(2015-12-20, 2016-01-20] -4.957852
(2016-01-20, 2016-02-20] -0.019802
(2016-02-20, 2016-03-20] -0.304897
(2016-03-20, 2016-04-20] -7.605129
(2016-04-20, 2016-05-20] 7.317627
(2016-05-20, 2016-06-20] 10.916529
(2016-06-20, 2016-07-20] 1.834234
(2016-07-20, 2016-08-20] -3.324972
(2016-08-20, 2016-09-20] 7.243810
(2016-09-20, 2016-10-20] 2.745925
(2016-10-20, 2016-11-20] 8.929903
(2016-11-20, 2016-12-20] -2.450010
(2016-12-20, 2017-01-20] 3.137994
(2017-01-20, 2017-02-20] -0.796587
(2017-02-20, 2017-03-20] -4.368718
(2017-03-20, 2017-04-20] -9.896459
(2017-04-20, 2017-05-20] 2.350651
(2017-05-20, 2017-06-20] -2.667632
(2017-06-20, 2017-07-20] -2.319789
(2017-07-20, 2017-08-20] -9.577919
(2017-08-20, 2017-09-20] 2.962070
(2017-09-20, 2017-10-20] -2.901864
(2017-10-20, 2017-11-20] 2.873909
# export the result
summary = df.groupby(pd.cut(df.date, bins)).value.sum().tolist()
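If the Interval labels in the summary are awkward downstream, one hedged tweak (not part of the original answer; period_sums is just a placeholder name) is to relabel each group by the end date of its window:
period_sums = df.groupby(pd.cut(df.date, bins)).value.sum()
period_sums.index = [interval.right.date() for interval in period_sums.index]  # e.g. 2015-12-20, 2016-01-20, ...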
I have the following df1 below, showing hhmm times. These values represent literal times but are in the wrong format, e.g. 845 should be 08:45, and 1125 should be 11:25.
CU Parameters 31-07-2017 01-08-2017 02-08-2017 03-08-2017
CU0111-039820-L Time of Full Charge 1125 0 1359 1112
CU0111-041796-H Time of Full Charge 1233 0 0 1135
CU0111-046907-0 Time of Full Charge 845 0 1229 1028
CU0111-046933-6 Time of Full Charge 1053 0 0 1120
CU0111-050103-K Time of Full Charge 932 0 1314 1108
CU0111-052525-J Time of Full Charge 1214 1424 1307 1254
CU0111-052534-M Time of Full Charge 944 0 0 1128
CU0111-052727-7 Time of Full Charge 1136 0 1443 1114
I need to convert all of these values into valid timestamps of hh:mm, and then work out the average of these timestamps, excluding the values that are '0'.
CU Parameters 31-07-2017 01-08-2017 02-08-2017 03-08-2017
CU0111-039820-L Time of Full Charge 11:25 0 13:59 11:12
CU0111-041796-H Time of Full Charge 12:33 0 0 11:35
CU0111-046907-0 Time of Full Charge 08:45 0 12:29 10:28
CU0111-046933-6 Time of Full Charge 10:53 0 0 11:20
CU0111-050103-K Time of Full Charge 09:32 0 13:14 11:08
CU0111-052525-J Time of Full Charge 12:14 14:24 13:07 12:54
CU0111-052534-M Time of Full Charge 09:44 0 0 11:28
CU0111-052727-7 Time of Full Charge 11:36 0 14:43 11:14
End result:
Average time of charge: hh:mm (excluding 0 values)
Number of no charges: =count(number of 0)
I have tried something along these lines, to no avail:
text = df1[col_list].astype(str)
df1[col_list] = text.str[:-2] + ':' + text.str[-2:]
hhmm = df1[col_list]
minutes = (hhmm / 100).astype(int) * 60 + hhmm % 100
df[col_list] = pd.to_timedelta(minutes, 'm')
I think you can convert all the values with to_timedelta first:
cols = df.columns.difference(['CU', 'Parameters'])
df[cols] = (df[cols].replace(0, '0000')
                    .astype(str)
                    .apply(lambda x: pd.to_timedelta(x.str[:-2] + ':' + x.str[-2:] + ':00')))
print (df)
CU Parameters 31-07-2017 01-08-2017 02-08-2017 \
0 CU0111-039820-L Time of Full Charge 11:25:00 00:00:00 13:59:00
1 CU0111-041796-H Time of Full Charge 12:33:00 00:00:00 00:00:00
2 CU0111-046907-0 Time of Full Charge 08:45:00 00:00:00 12:29:00
3 CU0111-046933-6 Time of Full Charge 10:53:00 00:00:00 00:00:00
4 CU0111-050103-K Time of Full Charge 09:32:00 00:00:00 13:14:00
5 CU0111-052525-J Time of Full Charge 12:14:00 14:24:00 13:07:00
6 CU0111-052534-M Time of Full Charge 09:44:00 00:00:00 00:00:00
7 CU0111-052727-7 Time of Full Charge 11:36:00 00:00:00 14:43:00
03-08-2017
0 11:12:00
1 11:35:00
2 10:28:00
3 11:20:00
4 11:08:00
5 12:54:00
6 11:28:00
7 11:14:00
And then create new columns: the mean of the non-zero timedeltas per row, and the count of zeros per row as the sum of True values:
df['avg'] = df[cols][df[cols].ne(0)].mean(axis=1)
df['number no charges'] = df[cols].eq(0).sum(axis=1)
print (df)
CU Parameters 31-07-2017 01-08-2017 02-08-2017 \
0 CU0111-039820-L Time of Full Charge 11:25:00 00:00:00 13:59:00
1 CU0111-041796-H Time of Full Charge 12:33:00 00:00:00 00:00:00
2 CU0111-046907-0 Time of Full Charge 08:45:00 00:00:00 12:29:00
3 CU0111-046933-6 Time of Full Charge 10:53:00 00:00:00 00:00:00
4 CU0111-050103-K Time of Full Charge 09:32:00 00:00:00 13:14:00
5 CU0111-052525-J Time of Full Charge 12:14:00 14:24:00 13:07:00
6 CU0111-052534-M Time of Full Charge 09:44:00 00:00:00 00:00:00
7 CU0111-052727-7 Time of Full Charge 11:36:00 00:00:00 14:43:00
03-08-2017 avg number no charges
0 11:12:00 12:12:00 1
1 11:35:00 12:04:00 2
2 10:28:00 10:34:00 1
3 11:20:00 11:06:30 2
4 11:08:00 11:18:00 1
5 12:54:00 13:09:45 0
6 11:28:00 10:36:00 2
7 11:14:00 12:31:00 1
print (df[cols][df[cols].ne(0)])
01-08-2017 02-08-2017 03-08-2017 31-07-2017
0 NaT 13:59:00 11:12:00 11:25:00
1 NaT NaT 11:35:00 12:33:00
2 NaT 12:29:00 10:28:00 08:45:00
3 NaT NaT 11:20:00 10:53:00
4 NaT 13:14:00 11:08:00 09:32:00
5 14:24:00 13:07:00 12:54:00 12:14:00
6 NaT NaT 11:28:00 09:44:00
7 NaT 14:43:00 11:14:00 11:36:00
print (df[cols].eq(0))
01-08-2017 02-08-2017 03-08-2017 31-07-2017
0 True False False False
1 True True False False
2 True False False False
3 True True False False
4 True False False False
5 False False False False
6 True True False False
7 True False False False
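To get the two overall numbers the question asks for (one average across the whole frame and one total count of zero entries), a minimal hedged sketch building on the converted frame above (pd.Timedelta(0) is used instead of a bare 0 to keep the comparison with timedelta columns explicit):
import pandas as pd

# Mean of all non-zero charge times across the frame, printed as hh:mm.
nonzero = df[cols][df[cols].ne(pd.Timedelta(0))]
avg = nonzero.stack().mean()                      # overall mean as a Timedelta (stack drops the NaT cells)
minutes = int(avg.total_seconds() // 60)
print('Average time of charge: {:02d}:{:02d}'.format(minutes // 60, minutes % 60))

# Total number of "no charge" (zero) entries across the date columns.
print('Number of no charges:', int(df[cols].eq(pd.Timedelta(0)).sum().sum()))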