Trouble obtaining counts using multiple datetime columns as conditionals - python

I am attempting to collect counts of occurrences of an id between two time periods in a dataframe. I have a moderately sized dataframe (about 400 unique ids and just short of 1m rows) containing a time of occurrence and an id for the account which caused the occurrence. I am attempting to get a count of occurrences for multiple time periods (1 hour, 6 hours, 1 day, etc.) prior to a specific occurrence and have run into lots of difficulties.
I am using Python 3.7, and for this instance I only have the pandas package loaded. I have tried using for loops and, while they likely would have worked (eventually), I am looking for something a bit more efficient time-wise. I have also tried using list comprehensions and have run into some errors that I did not anticipate when dealing with datetime columns. Examples of both are below.
## Sample data
data = {'id':[ 'EAED813857474821E1A61F588FABA345', 'D528C270B80F11E284931A7D66640965', '6F394474B8C511E2A76C1A7D66640965', '7B9C7C02F19711E38C670EDFB82A24A9', '80B409D1EC3D4CC483239D15AAE39F2E', '314EB192F25F11E3B68A0EDFB82A24A9', '68D30EE473FE11E49C060EDFB82A24A9', '156097CF030E4519DBDF84419B855E10', 'EE80E4C0B82B11E28C561A7D66640965', 'CA9F2DF6B82011E28C561A7D66640965', '6F394474B8C511E2A76C1A7D66640965', '314EB192F25F11E3B68A0EDFB82A24A9', 'D528C270B80F11E284931A7D66640965', '3A024345C1E94CED8C7E0DA3A96BBDCA', '314EB192F25F11E3B68A0EDFB82A24A9', '47C18B6B38E540508561A9DD52FD0B79', 'B72F6EA5565B49BBEDE0E66B737A8E6B', '47C18B6B38E540508561A9DD52FD0B79', 'B92CB51EFA2611E2AEEF1A7D66640965', '136EDF0536F644E0ADE6F25BB293DD17', '7B9C7C02F19711E38C670EDFB82A24A9', 'C5FAF9ACB88D4B55AB8196DBFFE5B3C0', '1557D4ECEFA74B40C718A4E5425F3ACB', '68D30EE473FE11E49C060EDFB82A24A9', '68D30EE473FE11E49C060EDFB82A24A9', 'CAF9D8CD627B422DFE1D587D25FC4035', 'C620D865AEE1412E9F3CA64CB86DC484', '47C18B6B38E540508561A9DD52FD0B79', 'CA9F2DF6B82011E28C561A7D66640965', '06E2501CB81811E290EF1A7D66640965', '68EEE17873FE11E4B5B90AFEF9534BE1', '47C18B6B38E540508561A9DD52FD0B79', '1BFE9CB25AD84B64CC2D04EF94237749', '7B20C2BEB82811E28C561A7D66640965', '261692EA8EE447AEF3804836E4404620', '74D7C3901F234993B4788EFA9E6BEE9E', 'CAF9D8CD627B422DFE1D587D25FC4035', '76AAF82EB8C511E2A76C1A7D66640965', '4BD38D6D44084681AFE13C146542A565', 'B8D27E80B82911E28C561A7D66640965' ], 'datetime':[ "24/06/2018 19:56", "24/05/2018 03:45", "12/01/2019 14:36", "18/08/2018 22:42", "19/11/2018 15:43", "08/07/2017 21:32", "15/05/2017 14:00", "25/03/2019 22:12", "27/02/2018 01:59", "26/05/2019 21:50", "11/02/2017 01:33", "19/11/2017 19:17", "04/04/2019 13:46", "08/05/2019 14:12", "11/02/2018 02:00", "07/04/2018 16:15", "29/10/2016 20:17", "17/11/2018 21:58", "12/05/2017 16:39", "28/01/2016 19:00", "24/02/2019 19:55", "13/06/2019 19:24", "30/09/2016 18:02", "14/07/2018 17:59", "06/04/2018 22:19", "25/08/2017 17:51", "07/04/2019 02:24", "26/05/2018 17:41", "27/08/2014 06:45", "15/07/2016 19:30", "30/10/2016 20:08", "15/09/2018 18:45", "29/01/2018 02:13", "10/09/2014 23:10", "11/05/2017 22:00", "31/05/2019 23:58", "19/02/2019 02:34", "02/02/2019 01:02", "27/04/2018 04:00", "29/11/2017 20:35"]}
import pandas as pd
from datetime import timedelta  # used in the attempts below

df = pd.DataFrame(data)
# parse the day-first timestamps so the comparisons below work on datetimes, not strings
df['datetime'] = pd.to_datetime(df['datetime'], dayfirst=True)
df = df.sort_values(['id', 'datetime'], ascending=True)
# for loop attempt
totalAccounts = df['id'].unique()
for account in totalAccounts:
    oneHourCount = 0
    subset = df[df['id'] == account]
    for i in range(len(subset)):
        # window start: one hour before occurrence i
        onehour = subset['datetime'].iloc[i] - timedelta(hours=1)
        for j in range(len(subset)):
            # count occurrences in the hour strictly before occurrence i
            if (subset['datetime'].iloc[j] >= onehour) and (subset['datetime'].iloc[j] < subset['datetime'].iloc[i]):
                oneHourCount += 1
# list comprehension attempt
df['onehour'] = df['datetime'] - timedelta(hours=1)
for account in totalAccounts:
    subset = df[df['id'] == account]
    # comparing a scalar x against a whole Series is what raises the
    # "truth value of a Series is ambiguous" ValueError mentioned below
    onehour = sum([1 for x in subset['datetime'] if x >= subset['onehour'] and x < subset['datetime']])
I am getting either 1) an incredibly long runtime with the for loop or 2) a ValueError about the truth value of a Series being ambiguous. I know the issue is dealing with the datetimes, and perhaps it is just going to be slow going, but I want to check here first just to make sure.

So I was able to figure this out using bisection. If you have a similar question please PM me and I'd be more than happy to help.
Solution:
from bisect import bisect_left, bisect_right

# keys is the sorted list of datetimes for the current id
left = bisect_left(keys, subset['start_time'].iloc[i])    # calculated window start
right = bisect_right(keys, subset['datetime'].iloc[i])    # actual time of occurrence
count = len(subset['datetime'][left:right])
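For anyone who lands here later, here is a minimal end-to-end sketch of that bisection idea applied to the sample data above. This is my reconstruction of the approach, not the exact production code; the onehourcount column name is just illustrative.

from bisect import bisect_left, bisect_right
from datetime import timedelta

one_hour_counts = []
for account in df['id'].unique():
    subset = df[df['id'] == account]          # already sorted by datetime above
    keys = list(subset['datetime'])           # sorted timestamps for this id
    for i in range(len(subset)):
        start_time = keys[i] - timedelta(hours=1)     # calculated window start
        left = bisect_left(keys, start_time)          # first occurrence inside the window
        right = bisect_right(keys, keys[i])           # first occurrence after the current one
        # right - left includes the occurrence itself; subtract 1 to count prior events only
        one_hour_counts.append(right - left - 1)

# df is still sorted by id and datetime, so the counts line up row by row
df['onehourcount'] = one_hour_counts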

Related

Optimization of this python function

def split_trajectories(df):
    trajectories_list = []
    count = 0
    for record in range(len(df)):
        if record == 0:
            continue
        if df['time'].iloc[record] - df['time'].iloc[record - 1] > pd.Timedelta('0 days 00:00:30'):
            temp_df = df[count:record].reset_index(drop=True)
            if not temp_df.empty:
                if len(temp_df) > 50:
                    trajectories_list.append(temp_df)
            count = record
    return trajectories_list
This is a Python function that receives a pandas dataframe and splits it into a list of dataframes wherever the time delta between consecutive records is greater than 30 seconds, keeping only the pieces that contain more than 50 records. In my case I need to execute this function thousands of times and I wonder if anyone can help me optimize it. Thanks in advance!
I tried to optimize it as far as I could.
You're doing a few things right here, like iterating using a range instead of .iterrows and using .iloc.
Simple things you can do:
Switch .iloc to .iat since you only need 'time' anyway
Don't recalculate Timedelta every time, just save it as a variable
The big thing is that you don't need to actually save each temp_df, or even create it. You can save a tuple (count, record) and retrieve from df afterwards, as needed. (Although, I have to admit I don't quite understand what you're accomplishing with the count:record logic; should one of your .iloc's involve count instead?)
def split_trajectories(df):
    trajectories_list = []
    td = pd.Timedelta('0 days 00:00:30')
    count = 0
    for record in range(1, len(df)):
        if df['time'].iat[record] - df['time'].iat[record - 1] > td:
            if record - count > 50:
                new_tuple = (count, record)
                trajectories_list.append(new_tuple)
            count = record  # restart the segment at this record, as in the original function
    return trajectories_list
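If you do need the actual dataframes afterwards, here is a small sketch (my addition, not part of the answer) of how you might materialise them from the returned (start, end) tuples:

boundaries = split_trajectories(df)
# slice the original frame only for the segments you actually need
trajectories = [df.iloc[start:end].reset_index(drop=True) for start, end in boundaries]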
Maybe you can try the one below; you could use count if you want to keep track of the number of df's, or else the length of trajectories_list will give you that info.
def split_trajectories(df):
    trajectories_list = []
    df_difference = df['time'].diff()
    if not (df_difference.empty) and (df_difference.shape[0] > 50):
        trajectories_list.append(df_difference)
    return trajectories_list
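For completeness, a fully vectorized sketch of the same splitting idea using diff and cumsum. This is my own suggestion, not from either answer above, and it assumes the dataframe has a 'time' column as in the question:

import pandas as pd

def split_trajectories_vectorized(df, gap='30s', min_len=50):
    # label each row with a segment id that increases whenever the gap is exceeded
    segment_id = (df['time'].diff() > pd.Timedelta(gap)).cumsum()
    segments = [g.reset_index(drop=True) for _, g in df.groupby(segment_id)]
    # keep only segments with more than min_len records, as in the original function
    return [g for g in segments if len(g) > min_len]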

How to calculate the total duration a field spent in a given value, using the field change history data?

I'm working with field change history data which has timestamps for when the field value was changed. In this example, I need to calculate the overall case duration in 'Termination in Progress' status.
The given case was changed from and to this status three times in total:
see screenshot (not reproduced here)
I need to add up all three durations in this case and in other cases it can be more or less than three.
Does anyone know how to calculate that in Python?
Welcome to Stack Overflow!
Based on the limited data you provided, here is a solution that should work, although the code makes some assumptions that could cause errors, so you will want to modify it to suit your needs. I avoided using list comprehensions or array math to keep it clearer, since you said you're new to Python.
Assumptions:
You're pulling this data into a pandas dataframe
All Old values of "Termination in Progress" have a matching new value for all Case Numbers
import datetime
import pandas as pd
import numpy as np
fp = r'<PATH TO FILE>\\'
f = '<FILENAME>.csv'
data = pd.read_csv(fp+f)
#convert ts to datetime for later use doing time delta calculations
data['Edit Date'] = pd.to_datetime(data['Edit Date'])
# sort by the same case number and date in opposing order to make sure values for old and new align properly
data.sort_values(by = ['CaseNumber','Edit Date'], ascending = [True,False],inplace = True)
#find timestamps where Termination in progress occurs
old_val_ts = data.loc[data['Old Value'] == 'Termination in progress']['Edit Date'].to_list()
new_val_ts = data.loc[data['New Value'] == 'Termination in progress']['Edit Date'].to_list()
#Loop over the timestamps and calc the time delta
ts_deltas = list()
for i in range(len(old_val_ts)):
    item = old_val_ts[i] - new_val_ts[i]
    ts_deltas.append(item)
# this loop could also be accomplished with a list comprehension like this:
# ts_deltas = [old_ts - new_ts for (old_ts, new_ts) in zip(old_val_ts, new_val_ts)]
print('Deltas between groups')
print(ts_deltas)
print()
#Sum the time deltas
total_ts_delta = sum(ts_deltas,datetime.timedelta())
print('Total Time Delta')
print(total_ts_delta)
Deltas between groups
[Timedelta('0 days 00:08:00'), Timedelta('0 days 00:06:00'), Timedelta('0 days 02:08:00')]
Total Time Delta
0 days 02:22:00
I've also attached a picture of the solution minus my file path for obvious reasons. Hope this helps. Please remember to mark as correct if this solution works for you. Otherwise let me know what issues you run into.
EDIT:
If you have multiple case numbers you want to look at, you could do it in various ways, but the simplest would be to just get a list of unique case numbers with data['CaseNumber'].unique() then iterate over that array filtering for each case number and appending the total time delta to a new list or a dictionary (not necessarily the most efficient solution, but it will work).
cases_total_td = {}
unique_cases = data['CaseNumber'].unique()
for case in unique_cases:
    temp_data = data[data['CaseNumber'] == case]
    # find timestamps where Termination in progress occurs, within this case only
    old_val_ts = temp_data.loc[temp_data['Old Value'] == 'Termination in progress']['Edit Date'].to_list()
    new_val_ts = temp_data.loc[temp_data['New Value'] == 'Termination in progress']['Edit Date'].to_list()
    # calc the time delta for each old/new pair
    ts_deltas = [old_ts - new_ts for (old_ts, new_ts) in zip(old_val_ts, new_val_ts)]
    # sum the time deltas
    total_ts_delta = sum(ts_deltas, datetime.timedelta())
    cases_total_td[case] = total_ts_delta
print(cases_total_td)
{1005222: Timedelta('0 days 02:22:00')}
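If the old/new timestamps are guaranteed to pair up per case (the answer's second assumption), the per-case loop can also be collapsed with a groupby. A hedged sketch, assuming the same column names (CaseNumber, Old Value, New Value, Edit Date) and the sorting applied above:

import datetime

status = 'Termination in progress'
entered = data.loc[data['New Value'] == status].groupby('CaseNumber')['Edit Date'].apply(list)
exited = data.loc[data['Old Value'] == status].groupby('CaseNumber')['Edit Date'].apply(list)
# pair the i-th exit with the i-th entry per case and sum the durations
cases_total_td = {case: sum((o - n for o, n in zip(exited[case], entered[case])),
                            datetime.timedelta())
                  for case in exited.index}
print(cases_total_td)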

Optimizing Nested Double Loops Python With Pandas

How do you optimize nested for loops?
let's say I have two tables of data with different datetimes.
Assume the datetimes are sorted in order.
The first table has a datetime, the second table has a start.datetime and end.datetime amongst other columns.
I can do the first loop over the first table, and the second loop to check whether the element from the first loop is between the start.datetime and end.datetime; if it is, write a row to the output file.
Then, move on to the next element in the first loop. Since I am coding in Python and it's plain iteration, I don't think dynamic programming applies (I only know memoization), but is there any way to optimize? I guess my code is O(n^2) in time since I need to loop twice, and O(n) in space.
I also need to ensure duplicates are removed if the user uploads the exact same first table.
The time to run for the code may be short, but since my resulting csv holds a lot of data (like 2-3 years worth of data), I am planning to optimize it as far as possible.
# open the output file with csv.writer and write the header row on first use, then compare and
# only keep rows where Start_time falls between Lot_starttime and Lot_endtime
# i is the start time index and j is the lot time index
with open(outfile, 'a', newline='') as h:
    writer = csv.writer(h)
    if os.stat(outfile).st_size == 0:
        writer.writerow(('Start TimeStamp', 'Alarm_text', 'Package', 'Recipe', 'Lot_starttime', 'UL/L', 'Network'))
    for i in range(0, len(Start_time)):
        for j in range(0, len(Lot_starttime)):
            if Start_time[i] > Lot_starttime[j] and Start_time[i] < Lot_endtime[j]:
                writer.writerow((Start_time[i], Alarm_text[i], Package[i], Recipe[j], Lot_starttime[j], Unload_Load[j], Network[j]))
# remove duplicate rows, keeping only the first occurrences
# sort according to time; earliest first
df = pd.read_csv(outfile)
df.drop_duplicates(keep='first', inplace=True)
df['Start TimeStamp'] = pd.to_datetime(df['Start TimeStamp'])
df.sort_values(by='Start TimeStamp', inplace=True)
df.to_csv(outfile, index=False)
end = time.time()
print(f'Duration of code for {first_excel}, {sheet_name} is {round((end - start), 2)} seconds.')
merge_data(first_excel,'A','AAA_Performance_Lot(2022)_N1.2.csv','A_AAA.csv')
The output is like
`What is your excel file name?: 2362022
Duration of code for A.xlsx, AAA is 2.24seconds.
...
Completed`
Thank You.
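For what it's worth, because the lot intervals are sorted (and assuming they do not overlap, which the post implies but never states), the inner loop could be replaced with a binary search, making the matching roughly O(n log n) instead of O(n^2). A rough sketch reusing the post's variable names, meant to sit inside the same with block so writer is available:

from bisect import bisect_left

for i in range(len(Start_time)):
    # index of the last lot that started strictly before this start time
    j = bisect_left(Lot_starttime, Start_time[i]) - 1
    if j >= 0 and Start_time[i] < Lot_endtime[j]:
        writer.writerow((Start_time[i], Alarm_text[i], Package[i], Recipe[j],
                         Lot_starttime[j], Unload_Load[j], Network[j]))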

Calculating On-Balance Volume (OBV) with Python Pandas

I have a trading Python Pandas DataFrame which includes the "close" and "volume". I want to calculate the On-Balance Volume (OBV). I've got it working over the entire dataset but I want it to be calculated on a rolling series of 10.
The current function looks as follows...
def calculateOnBalanceVolume(df):
    df['obv'] = 0
    index = 1
    while index <= len(df) - 1:
        if df.iloc[index]['close'] > df.iloc[index-1]['close']:
            df.at[index, 'obv'] += df.at[index-1, 'obv'] + df.at[index, 'volume']
        if df.iloc[index]['close'] < df.iloc[index-1]['close']:
            df.at[index, 'obv'] += df.at[index-1, 'obv'] - df.at[index, 'volume']
        index = index + 1
    return df
This creates the "obv" column and works out the OBV over the 300 entries.
Ideally I would like to do something like this...
data['obv10'] = data.volume.rolling(10, min_periods=1).apply(calculateOnBalanceVolume)
This looks like it has potential to work but the problem is the "apply" only passes in the "volume" column so you can't work out the change in closing price.
I also tried this...
data['obv10'] = data[['close','volume']].rolling(10, min_periods=1).apply(calculateOnBalanceVolume)
Which sort of works but it tries to update the "close" and "volume" columns instead of adding the new "obv10" column.
What is the best way of doing this or do you just have to iterate over the data in batches of 10?
I found a more efficient way of doing the code above from this link:
Calculating stocks's On Balance Volume (OBV) in python
import numpy as np

def calculateOnBalanceVolume(df):
    df['obv'] = np.where(df['close'] > df['close'].shift(1), df['volume'],
                         np.where(df['close'] < df['close'].shift(1), -df['volume'], 0)).cumsum()
    return df
The problem is this still does the entire data set. This looks pretty good but how can I cycle through it in batches of 10 at a time without looping or iterating through the entire data set?
*** UPDATE ***
I've got slightly closer to getting this working. I have managed to calculate the OBV in groups of 10.
for gid, df in data.groupby(np.arange(len(data)) // 10):
    df['obv'] = np.where(df['close'] > df['close'].shift(1), df['volume'],
                         np.where(df['close'] < df['close'].shift(1), -df['volume'], 0)).cumsum()
I want this to be calculated rolling not in groups. Any idea how to do this using Pandas in an efficient way?
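One hedged way to get a rolling version without looping (my own suggestion, not from the original post; assumes pandas is imported as pd, numpy as np, and data holds the close and volume columns): compute the signed volume once, then take a rolling sum, since an OBV that restarts 10 rows back is just the sum of the last 10 signed volumes.

# signed volume: +volume on up closes, -volume on down closes, 0 when unchanged
signed_volume = np.where(data['close'] > data['close'].shift(1), data['volume'],
                         np.where(data['close'] < data['close'].shift(1), -data['volume'], 0))
data['obv10'] = pd.Series(signed_volume, index=data.index).rolling(10, min_periods=1).sum()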
*** UPDATE ***
It turns out that OBV is supposed to be calculated over the entire data set. I've settled on the following code which looks correct now.
# calculate on-balance volume (obv)
self.df['obv'] = np.where(self.df['close'] > self.df['close'].shift(1), self.df['volume'],
                          np.where(self.df['close'] < self.df['close'].shift(1), -self.df['volume'], self.df.iloc[0]['volume'])).cumsum()

Speeding up pandas array calculation

I have working code that achieves the desired calculation result, but I am currently using an algorithm that iterates over the pandas array. This is obviously slower than pure pandas DataFrame calculations. I would like some advice on how I can use pandas functions to speed up this calculation.
Code to generate dummy data
df = pd.DataFrame(index=pd.date_range(start='2014-01-01', periods=365))
df['Month'] = df.index.month
df['MTD'] = (df.index.day+0.001)/10000
This is basically a pandas DataFrame with MTD figures for some value. This is purely given so that we have some data to play with.
Needed calculation
What I need is a new DataFrame that has starting (investment) dates as columns, populated with a few beginning-of-month values. The index is all possible dates and the values should be the YTD figure. I am using this DataFrame as a lookup/cache for investment dates.
pseudocode
YTD = (1 + last MTD of month 1) * (1 + last MTD of month 2) * ... * (1 + last MTD of the month containing the required date) - 1
Working function
def calculate_YTD(df):  # slow, takes 3.5s on my machine!!!!!!
    YTD_df = pd.DataFrame(index=df.index)
    for investment_date in [datetime.datetime(2014, x+1, 1) for x in range(12)]:
        YTD_df[investment_date] = 1.0  # pre-populate with dummy floats
        for date in df.index:  # iterate over all dates in period
            h = (df[investment_date:date].groupby('Month')['MTD'].max().fillna(0) + 1).product() - 1
            YTD_df[investment_date][date] = h
    return YTD_df
I have hardcoded the investment dates list to simplify the problem statement. On my machines this code takes 2.5 to 3.5 seconds. Any suggestions on how i can speed it up?
Here's an approach that should be reasonably quick. It's quite possible there is something faster/cleaner, but this should be an improvement.
import numpy as np

# assuming a fixed number of investment dates, build a list
investment_dates = pd.date_range('2014-1-1', periods=12, freq='MS')

# build a table, by month, which contains the cumulative MTD
# return for each investment date. Still have to loop over the investment dates,
# but don't need to loop over each daily value
running_mtd = []
for date in investment_dates:
    curr_mo = (df[df.index >= date].groupby('Month')['MTD'].last() + 1.).cumprod()
    curr_mo.name = date
    running_mtd.append(curr_mo)

running_mtd_df = pd.concat(running_mtd, axis=1)
running_mtd_df = running_mtd_df.shift(1).fillna(1.)

# merge running mtd returns with base dataframe
df = df.merge(running_mtd_df, left_on='Month', right_index=True)

# calculate ytd return for each column / day, by multiplying the running
# monthly return with the current MTD value
for date in investment_dates:
    df[date] = np.where(df.index < date, np.nan, df[date] * (1. + df['MTD']) - 1.)
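A quick usage check on the dummy data above (my addition; the specific dates are arbitrary): after running the answer's code, each investment-date column holds the YTD return for that start date.

# YTD return as of 30 June 2014 for an investment made on 1 March 2014
print(df.loc['2014-06-30', pd.Timestamp('2014-03-01')])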
