Optimizing nested double loops in Python with pandas

How do you optimize nested for loops?
Let's say I have two tables of data with different datetimes, and assume the datetimes in each table are sorted.
The first table has a datetime column; the second table has a start datetime and an end datetime, among other columns.
The outer loop iterates over the first table, and the inner loop checks whether the element from the outer loop falls between the start and end datetimes of the second table; if it does, the program writes a row to the output.
Then it moves on to the next element of the outer loop. Since I am coding this as plain iteration in Python, I don't think dynamic programming applies (I only know memoization), but is there any way to optimize? I believe the time complexity is O(n^2) since I loop over both tables, and the space complexity is O(n).
I also need to ensure duplicates are removed if the user uploads the exact same first table again.
The runtime may already be short, but since my resulting CSV holds a lot of data (around 2-3 years' worth), I want to optimize it as far as possible.
# Open the output file with csv.writer and write the header row on first use,
# then keep only rows where the alarm start time falls strictly between the
# lot start time and the lot end time.
# i indexes the alarm start times and j indexes the lot times.
with open(outfile, 'a', newline='') as h:
    writer = csv.writer(h)
    if os.stat(outfile).st_size == 0:
        writer.writerow(('Start TimeStamp', 'Alarm_text', 'Package', 'Recipe', 'Lot_starttime', 'UL/L', 'Network'))
    for i in range(len(Start_time)):
        for j in range(len(Lot_starttime)):
            if Start_time[i] > Lot_starttime[j] and Start_time[i] < Lot_endtime[j]:
                writer.writerow((Start_time[i], Alarm_text[i], Package[i], Recipe[j], Lot_starttime[j], Unload_Load[j], Network[j]))

# Remove duplicate rows, keeping only the first occurrence,
# then sort by time, earliest first.
df = pd.read_csv(outfile)
df.drop_duplicates(keep='first', inplace=True)
df['Start TimeStamp'] = pd.to_datetime(df['Start TimeStamp'])
df.sort_values(by='Start TimeStamp', inplace=True)
df.to_csv(outfile, index=False)
end = time.time()
print(f'Duration of code for {first_excel}, {sheet_name} is {round((end - start),2)}seconds.')
merge_data(first_excel,'A','AAA_Performance_Lot(2022)_N1.2.csv','A_AAA.csv')
The output looks like this:
`What is your excel file name?: 2362022
Duration of code for A.xlsx, AAA is 2.24seconds.
...
Completed`
Thank You.
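For reference, the interval check above can also be written as a single vectorised pandas join instead of a double loop. The following is only a sketch, not the original code: it assumes Start_time, Alarm_text, Package, Recipe, Lot_starttime, Lot_endtime, Unload_Load and Network are already parsed datetime/value sequences, and that lot intervals do not overlap, so each alarm start time falls inside at most one lot.
import pandas as pd

alarms = pd.DataFrame({'Start TimeStamp': Start_time,
                       'Alarm_text': Alarm_text,
                       'Package': Package}).sort_values('Start TimeStamp')
lots = pd.DataFrame({'Lot_starttime': Lot_starttime,
                     'Lot_endtime': Lot_endtime,
                     'Recipe': Recipe,
                     'UL/L': Unload_Load,
                     'Network': Network}).sort_values('Lot_starttime')

# for each alarm, find the most recent lot that started strictly before it
# (allow_exact_matches=False mirrors the strict '>' comparison in the loop)
merged = pd.merge_asof(alarms, lots,
                       left_on='Start TimeStamp', right_on='Lot_starttime',
                       direction='backward', allow_exact_matches=False)

# keep only alarms that fall inside that lot's window,
# then deduplicate, sort by time and write once
result = (merged[merged['Start TimeStamp'] < merged['Lot_endtime']]
          .drop_duplicates()
          .sort_values('Start TimeStamp'))
result.to_csv(outfile, index=False)
This replaces the O(n*m) double loop with sorted merges, roughly O((n+m) log(n+m)) including the sorts, and it covers the header, deduplication and time sorting in one pass.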

Related

How to calculate the total time a field spends in a given value from the field change history data?

I'm working with field change history data which has timestamps for when the field value was changed. In this example, I need to calculate the overall case duration in 'Termination in Progress' status.
The given case moved into and out of this status three times in total (see screenshot).
I need to add up all three durations for this case; in other cases there can be more or fewer than three.
Does anyone know how to calculate that in Python?
Welcome to Stack Overflow!
Based on the limited data you provided, here is a solution that should work. The code makes some assumptions that could cause errors, so you will want to modify it to suit your needs. I avoided list comprehensions and array math to keep it clear, since you said you're new to Python.
Assumptions:
You're pulling this data into a pandas dataframe
All Old values of "Termination in Progress" have a matching new value for all Case Numbers
import datetime
import pandas as pd
import numpy as np
fp = r'<PATH TO FILE>\\'
f = '<FILENAME>.csv'
data = pd.read_csv(fp+f)
#convert ts to datetime for later use doing time delta calculations
data['Edit Date'] = pd.to_datetime(data['Edit Date'])
# sort by the same case number and date in opposing order to make sure values for old and new align properly
data.sort_values(by = ['CaseNumber','Edit Date'], ascending = [True,False],inplace = True)
#find timestamps where Termination in progress occurs
old_val_ts = data.loc[data['Old Value'] == 'Termination in progress']['Edit Date'].to_list()
new_val_ts = data.loc[data['New Value'] == 'Termination in progress']['Edit Date'].to_list()
#Loop over the timestamps and calc the time delta
ts_deltas = list()
for i in range(len(old_val_ts)):
    item = old_val_ts[i] - new_val_ts[i]
    ts_deltas.append(item)
# this loop could also be accomplished with list comprehension like this:
#ts_deltas = [old_ts - new_ts for (old_ts, new_ts) in zip(old_val_ts, new_val_ts)]
print('Deltas between groups')
print(ts_deltas)
print()
#Sum the time deltas
total_ts_delta = sum(ts_deltas,datetime.timedelta())
print('Total Time Delta')
print(total_ts_delta)
Deltas between groups
[Timedelta('0 days 00:08:00'), Timedelta('0 days 00:06:00'), Timedelta('0 days 02:08:00')]
Total Time Delta
0 days 02:22:00
I've also attached a picture of the solution minus my file path for obvious reasons. Hope this helps. Please remember to mark as correct if this solution works for you. Otherwise let me know what issues you run into.
EDIT:
If you have multiple case numbers to look at, you could do this in various ways, but the simplest is to get the unique case numbers with data['CaseNumber'].unique(), then iterate over that array, filtering on each case number and storing the total time delta in a list or a dictionary (not necessarily the most efficient solution, but it will work).
cases_total_td = {}
unique_cases = data['CaseNumber'].unique()
for case in unique_cases:
    temp_data = data[data['CaseNumber'] == case]
    # find timestamps where Termination in progress occurs for this case
    old_val_ts = temp_data.loc[temp_data['Old Value'] == 'Termination in progress']['Edit Date'].to_list()
    new_val_ts = temp_data.loc[temp_data['New Value'] == 'Termination in progress']['Edit Date'].to_list()
    # loop over the timestamps and calc the time delta
    ts_deltas = list()
    for i in range(len(old_val_ts)):
        item = old_val_ts[i] - new_val_ts[i]
        ts_deltas.append(item)
    # equivalent list comprehension:
    # ts_deltas = [old_ts - new_ts for (old_ts, new_ts) in zip(old_val_ts, new_val_ts)]
    # sum the time deltas for this case
    total_ts_delta = sum(ts_deltas, datetime.timedelta())
    cases_total_td[case] = total_ts_delta
print(cases_total_td)
{1005222: Timedelta('0 days 02:22:00')}
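If there are many cases, the per-case loop can also be collapsed into a groupby. The following is only a sketch, assuming the same column names and the same descending sort within each case as above, and that every 'Old Value' entry has a matching 'New Value' entry per case:
import datetime

old_grouped = (data[data['Old Value'] == 'Termination in progress']
               .groupby('CaseNumber')['Edit Date'].apply(list))
new_grouped = (data[data['New Value'] == 'Termination in progress']
               .groupby('CaseNumber')['Edit Date'].apply(list))

# pair the i-th "left the status" timestamp with the i-th "entered the status"
# timestamp for each case and sum the differences
cases_total_td = {case: sum((old - new for old, new in zip(old_grouped[case], new_grouped[case])),
                            datetime.timedelta())
                  for case in old_grouped.index}
print(cases_total_td)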

for loop with the same dataframe on both sides of the operator

I have defined 10 different DataFrames, A06_df, A07_df, etc., each of which picks up six different data point inputs in a daily time series over a number of years. To be able to work with them I need to do some formatting operations such as:
A07_df=A07_df.fillna(0)
A07_df[A07_df < 0] = 0
A07_df.columns = col # col is defined
A07_df['oil']=A07_df['oil']*24
A07_df['water']=A07_df['water']*24
A07_df['gas']=A07_df['gas']*24
A07_df['water_inj']=0
A07_df['gas_inj']=0
A07_df=A07_df[['oil', 'water', 'gas','gaslift', 'water_inj', 'gas_inj', 'bhp', 'whp']]
etc., for a few more formatting operations.
Is there a nice way to use a for loop or something similar so I don't have to write each operation out for every dataframe (A06_df, A07_df, A08_df, ... etc.)?
As an example, I have tried
list = [A06_df, A07_df, A08_df, A10_df, A11_df, A12_df, A13_df, A15_df, A18_df, A19_df]
for i in list:
    i = i.fillna(0)
But this does not do the trick.
Any help is appreciated
As i.fillna() returns a new object (an updated copy of your original dataframe), i = i.fillna(0) only updates what i points to, not the original dataframes A06_df, A07_df, ... held in the list.
I suggest you copy the updated content in a new list like this:
list_raw = [A06_df, A07_df, A08_df, A10_df, A11_df, A12_df, A13_df, A15_df, A18_df, A19_df]
list_updated = []
for i in list_raw:
    i = i.fillna(0)
    # More code here
    list_updated.append(i)
To simplify your future processing, I would recommend using a dictionary of dataframes instead of a list of named variables.
dfs = {}
dfs['A0'] = ...
dfs['A1'] = ...
dfs_updated = {}
for k, i in dfs.items():
    i = i.fillna(0)
    # More code here
    dfs_updated[k] = i
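Putting the two suggestions together, the formatting steps from the question could then be written once and applied to every dataframe. This is a sketch only, assuming the A06_df ... A19_df dataframes and col are defined as in the question:
dfs = {'A06': A06_df, 'A07': A07_df, 'A08': A08_df, 'A10': A10_df,
       'A11': A11_df, 'A12': A12_df, 'A13': A13_df, 'A15': A15_df,
       'A18': A18_df, 'A19': A19_df}

dfs_updated = {}
for name, df in dfs.items():
    df = df.fillna(0)
    df[df < 0] = 0
    df.columns = col  # col is defined elsewhere, as in the question
    for rate_col in ['oil', 'water', 'gas']:
        df[rate_col] = df[rate_col] * 24
    df['water_inj'] = 0
    df['gas_inj'] = 0
    df = df[['oil', 'water', 'gas', 'gaslift', 'water_inj', 'gas_inj', 'bhp', 'whp']]
    dfs_updated[name] = df
Any further formatting operation only needs to be added once inside the loop, and the results are reachable as dfs_updated['A06'], dfs_updated['A07'], and so on.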

Trouble obtaining counts using multiple datetime columns as conditionals

I am attempting to count occurrences of an id between two points in time in a dataframe. I have a moderately sized dataframe (about 400 unique ids and just short of 1m rows) containing a time of occurrence and an id for the account that caused the occurrence. I am trying to get a count of occurrences for multiple time periods (1 hour, 6 hours, 1 day, etc.) prior to a specific occurrence and have run into lots of difficulties.
I am using Python 3.7, and for this instance I only have the pandas package loaded. I have tried using for loops, and while they likely would have worked (eventually), I am looking for something more time-efficient. I have also tried using a list comprehension and ran into some errors that I did not anticipate when dealing with datetime columns. Examples of both are below.
## Sample data
import pandas as pd
from datetime import timedelta
data = {'id':[ 'EAED813857474821E1A61F588FABA345', 'D528C270B80F11E284931A7D66640965', '6F394474B8C511E2A76C1A7D66640965', '7B9C7C02F19711E38C670EDFB82A24A9', '80B409D1EC3D4CC483239D15AAE39F2E', '314EB192F25F11E3B68A0EDFB82A24A9', '68D30EE473FE11E49C060EDFB82A24A9', '156097CF030E4519DBDF84419B855E10', 'EE80E4C0B82B11E28C561A7D66640965', 'CA9F2DF6B82011E28C561A7D66640965', '6F394474B8C511E2A76C1A7D66640965', '314EB192F25F11E3B68A0EDFB82A24A9', 'D528C270B80F11E284931A7D66640965', '3A024345C1E94CED8C7E0DA3A96BBDCA', '314EB192F25F11E3B68A0EDFB82A24A9', '47C18B6B38E540508561A9DD52FD0B79', 'B72F6EA5565B49BBEDE0E66B737A8E6B', '47C18B6B38E540508561A9DD52FD0B79', 'B92CB51EFA2611E2AEEF1A7D66640965', '136EDF0536F644E0ADE6F25BB293DD17', '7B9C7C02F19711E38C670EDFB82A24A9', 'C5FAF9ACB88D4B55AB8196DBFFE5B3C0', '1557D4ECEFA74B40C718A4E5425F3ACB', '68D30EE473FE11E49C060EDFB82A24A9', '68D30EE473FE11E49C060EDFB82A24A9', 'CAF9D8CD627B422DFE1D587D25FC4035', 'C620D865AEE1412E9F3CA64CB86DC484', '47C18B6B38E540508561A9DD52FD0B79', 'CA9F2DF6B82011E28C561A7D66640965', '06E2501CB81811E290EF1A7D66640965', '68EEE17873FE11E4B5B90AFEF9534BE1', '47C18B6B38E540508561A9DD52FD0B79', '1BFE9CB25AD84B64CC2D04EF94237749', '7B20C2BEB82811E28C561A7D66640965', '261692EA8EE447AEF3804836E4404620', '74D7C3901F234993B4788EFA9E6BEE9E', 'CAF9D8CD627B422DFE1D587D25FC4035', '76AAF82EB8C511E2A76C1A7D66640965', '4BD38D6D44084681AFE13C146542A565', 'B8D27E80B82911E28C561A7D66640965' ], 'datetime':[ "24/06/2018 19:56", "24/05/2018 03:45", "12/01/2019 14:36", "18/08/2018 22:42", "19/11/2018 15:43", "08/07/2017 21:32", "15/05/2017 14:00", "25/03/2019 22:12", "27/02/2018 01:59", "26/05/2019 21:50", "11/02/2017 01:33", "19/11/2017 19:17", "04/04/2019 13:46", "08/05/2019 14:12", "11/02/2018 02:00", "07/04/2018 16:15", "29/10/2016 20:17", "17/11/2018 21:58", "12/05/2017 16:39", "28/01/2016 19:00", "24/02/2019 19:55", "13/06/2019 19:24", "30/09/2016 18:02", "14/07/2018 17:59", "06/04/2018 22:19", "25/08/2017 17:51", "07/04/2019 02:24", "26/05/2018 17:41", "27/08/2014 06:45", "15/07/2016 19:30", "30/10/2016 20:08", "15/09/2018 18:45", "29/01/2018 02:13", "10/09/2014 23:10", "11/05/2017 22:00", "31/05/2019 23:58", "19/02/2019 02:34", "02/02/2019 01:02", "27/04/2018 04:00", "29/11/2017 20:35"]}
df = pd.DataFrame(data)
# parse the day-first timestamp strings so the datetime arithmetic below works
df['datetime'] = pd.to_datetime(df['datetime'], format='%d/%m/%Y %H:%M')
df = df.sort_values(['id', 'datetime'], ascending=True)
# for loop attempt
totalAccounts = df['id'].unique()
for account in totalAccounts:
    oneHourCount = 0
    subset = df[df['id'] == account]
    for i in range(len(subset)):
        onehour = subset['datetime'].iloc[i] - timedelta(hours=1)
        for j in range(len(subset)):
            if (subset['datetime'].iloc[j] >= onehour) and (subset['datetime'].iloc[j] < subset['datetime'].iloc[i]):
                oneHourCount += 1
#list comprehension attempt
df['onehour'] = df['datetime'] - timedelta(hours=1)
for account in totalAccounts:
    onehour = sum([1 for x in subset['datetime'] if x >= subset['onehour'] and x < subset['datetime']])
I am getting either 1) an incredibly long runtime with the for loop or 2) a ValueError about the truth value of a Series being ambiguous. I know the issue is in dealing with the datetimes, and perhaps it is just going to be slow going, but I want to check here first just to make sure.
So I was able to figure this out using bisection. If you have a similar question please PM me and I'd be more than happy to help.
Solution:
from bisect import bisect_left, bisect_right

left = bisect_left(keys, subset['start_time'].iloc[i])    ## calculated look-back time
right = bisect_right(keys, subset['datetime'].iloc[i])    ## actual time of occurrence
count = len(subset['datetime'][left:right])
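For completeness, here is one way the bisection idea could be wired into the full dataframe. This is only a sketch under assumptions not stated above: the per-id timestamps are sorted ascending, keys is that sorted list, start_time is the occurrence time minus the look-back window (one hour here), and the result column name one_hour_count is made up for illustration.
from bisect import bisect_left, bisect_right
import pandas as pd

df = df.sort_values(['id', 'datetime'])

counts = []
for _, subset in df.groupby('id'):
    keys = subset['datetime'].tolist()  # sorted timestamps for this id
    subset = subset.assign(start_time=subset['datetime'] - pd.Timedelta(hours=1))
    for i in range(len(subset)):
        left = bisect_left(keys, subset['start_time'].iloc[i])    # first event inside the window
        right = bisect_right(keys, subset['datetime'].iloc[i])    # events up to and including this one
        counts.append(right - left)                               # count includes the occurrence itself

# groupby('id') iterates groups in the same sorted order as the dataframe,
# so the collected counts line up with the rows positionally
df['one_hour_count'] = counts
Because bisect only does two binary searches per row, this stays close to O(n log n) per id instead of the O(n^2) pairwise comparison in the loop attempt.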

Creating a list of lists of lists to sort data from a text file

I'm trying to read and analyse data from a molecular dynamics simulation. The file looks like this, but has approximately 50,000 lines:
40 443.217134221125 -1167.16960983145 -930.540717277902 -945.149746592058 14.6090293141563 -76510.1177229871 4955.17798368798 17.0485096390963 17.0485096390963 17.0485096390963
80 659.39103652059 -923.638916369481 -963.088128935875 -984.822539088925 21.7344101530497 14390.2520385682 4392.18167603894 16.3767140226773 16.3767140226773 16.3767140226773
120 410.282687399253 -979.413482414461 -978.270613122515 -991.794079036891 13.5234659143754 -416.30808174241 4398.37322990079 16.3844056974088 16.3844056974088 16.3844056974088
The second column represents temperature. I want the entire contents of the file inside a main list, containing sub-lists that group the lines by temperature. So, for example, the first list in the main list would have every line where the temperature is 50 +/- 25 K, the second list every line where the temperature is 100 +/- 25 K, the third 150 +/- 25 K, etc.
Here's the code I have so far:
for nbligne in tqdm(range(0, len(LogFullText), 1), unit=" lignes", disable=False):
    string = LogFullText[nbligne]
    line = string.replace('\n', '')
    Values = line.split(' ')
    divider = float(Values[1])
    number = int(round(divider/ecart, 0))
    if number > 0 and number < (nbpts+1):
        numericValues = []
        for nbresultat in range(0, len(Values)-1, 1):
            numericValues = numericValues + [float(Values[nbresultat+1])]
        TotalResultats[number-1].append(numericValues)
The entire file is stored in the list LogFullText. For each line I strip the trailing \n and split the data with line.split(' '); the variable number then tells me in which "section" of the main list TotalResultats the line has to be stored. In my example, ecart has a value of 50.
From my testing in IDLE this should work, but in reality the list numericValues gets appended to every section of TotalResultats, which makes the entire "sorting" process pointless, as I simply end up with nbpts copies of the same list.
EDIT: A desired output would be, for example, for TotalResultats[0] to contain only these lines:
440 49.9911561170447 -1002.727121613 -1002.72088094757 -1004.36865629012 1.64777534254374 -2.30045369926927 4346.38067015602 16.319590369315 16.319590369315 16.319590369315
480 42.0678318129411 -1002.69068695093 -1003.09270361295 -1004.47931559314 1.38661198019398 148.219667654185 4345.58826561836 16.3185985476593 16.3185985476593 16.3185985476593
520 43.0855216044083 -1003.4761833678 -1003.33820025832 -1004.75835665467 1.42015639634654 -50.877194096845 4345.23364199522 16.3181546401367 16.3181546401367 16.3181546401367
Whereas TotalResultats[1] would contain these:
29480 109.504432929553 -980.560226069922 -998.958927113452 -1002.5683396275 3.6094125140473 6797.60091557441 4336.52501942717 16.3072458525354 16.3072458525354 16.3072458525354
29520 106.663291994583 -987.853629557979 -998.63436605413 -1002.15013076443 3.51576471029626 3975.43407740646 4344.84444478408 16.3176674266037 16.3176674266037 16.3176674266037
29560 112.712019757891 -1020.65735849343 -998.342638324154 -1002.05777718853 3.71513886437272 -8172.25412368794 4374.81748831773 16.3551041162317 16.3551041162317 16.3551041162317
And TotalResultats[2] would be:
52480 142.86322849701 -983.254970494784 -995.977110177167 -1000.68607319299 4.70896301582636 4687.60299340191 4348.30194824999 16.321994657312 16.321994657312 16.321994657312
52520 159.953459288754 -984.221801201968 -995.711657311665 -1000.9839371836 5.27227987193358 4233.04866428826 4348.82254074761 16.3226460049712 16.3226460049712 16.3226460049712
52560 161.624843851124 -1011.76969126636 -995.320907086768 -1000.64827802848 5.32737094170867 -6023.57133443538 4375.12133631739 16.3554827492176 16.3554827492176 16.3554827492176
In the first case,
TotalResultats[0][0] = [49.9911561170447, -1002.727121613, -1002.72088094757, -1004.36865629012, 1.64777534254374, -2.30045369926927, 4346.38067015602, 16.319590369315, 16.319590369315, 16.319590369315]
If it helps, I'm coding this in Visual Studio, using Python 3.6.8.
Thanks a whole lot!
I recommend using pandas. It's a very powerful tool for working with tabular data in Python, a bit like Excel or SQL inside Python. Suppose 1.csv contains the data you provided in the question. Then you can easily load the data, filter it, and save the results:
import pandas as pd
# load data from file into pandas dataframe
df = pd.read_csv('1.csv', header=None, delimiter=' ')
# filter by temperature, column named 0 since there is no header in the file
df2 = df[df[0].between(450, 550)]
# save filtered rows in the same format
df2.to_csv('2.csv', header=None, index=False, sep=' ')
Pandas may be harder to learn than plain python syntax but it is well worth it.
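Since the question asks for several temperature bins rather than a single range, the grouping itself can also be pushed into pandas. A minimal sketch, mirroring the question's rounding logic (bins centred on 50 K, 100 K, 150 K, ... with ecart = 50); the column name 'bin' and the dict groups are made up for illustration:
import pandas as pd

df = pd.read_csv('1.csv', header=None, delimiter=' ')

# assign every row to the nearest bin centre (50, 100, 150, ...), as in the question
ecart = 50
df['bin'] = (df[1] / ecart).round().astype(int) * ecart

# one DataFrame per temperature bin, e.g. groups[150] holds the 150 +/- 25 K rows
groups = {centre: part.drop(columns='bin') for centre, part in df.groupby('bin')}
Each entry of groups then plays the role of one sub-list of TotalResultats, without any manual index bookkeeping.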

Check logs with Spark

I'm new to Spark and I'm trying to develop a Python script that reads a CSV file with some logs:
userId,timestamp,ip,event
13,2016-12-29 16:53:44,86.20.90.121,login
43,2016-12-29 16:53:44,106.9.38.79,login
66,2016-12-29 16:53:44,204.102.78.108,logoff
101,2016-12-29 16:53:44,14.139.102.226,login
91,2016-12-29 16:53:44,23.195.2.174,logoff
And checks whether a user has shown strange behavior, for example two consecutive 'login' events without a 'logoff' in between. I've loaded the CSV as a Spark DataFrame and I want to compare the log rows of a single user, ordered by timestamp, checking whether two consecutive events are of the same type (login - login, logoff - logoff). I'm trying to do it in a 'map-reduce' way, but at the moment I can't figure out how to use a reduce function that compares consecutive rows.
The code I've written works, but the performance is very bad.
sc = SparkContext("local","Data Check")
sqlContext = SQLContext(sc)
LOG_FILE_PATH = "hdfs://quickstart.cloudera:8020/user/cloudera/flume/events/*"
RESULTS_FILE_PATH = "hdfs://quickstart.cloudera:8020/user/cloudera/spark/script_results/prova/bad_users.csv"
N_USERS = 10*1000
dataFrame = sqlContext.read.format("com.databricks.spark.csv").load(LOG_FILE_PATH)
dataFrame = dataFrame.selectExpr("C0 as userID","C1 as timestamp","C2 as ip","C3 as event")
wrongUsers = []
for i in range(0,N_USERS):
    userDataFrame = dataFrame.where(dataFrame['userId'] == i)
    userDataFrame = userDataFrame.sort('timestamp')
    prevEvent = ''
    for row in userDataFrame.rdd.collect():
        currEvent = row[3]
        if prevEvent == currEvent:
            wrongUsers.append(row[0])
        prevEvent = currEvent
badUsers = sqlContext.createDataFrame(wrongUsers)
badUsers.write.format("com.databricks.spark.csv").save(RESULTS_FILE_PATH)
First (not directly related, but still): make sure the number of entries per user is not too big, because the collect in for row in userDataFrame.rdd.collect(): is dangerous.
Second, you don't need to leave the DataFrame area here to use classical Python, just stick to Spark.
Now, your problem. It's basically "for each line I want to know something from the previous line": that belongs to the concept of Window functions and to be precise the lag function. Here are two interesting articles about Window functions in Spark: one from Databricks with code in Python and one from Xinh with (I think easier to understand) examples in Scala.
I have a solution in Scala, but I think you'll manage to translate it into Python:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.lag
import sqlContext.implicits._

val LOG_FILE_PATH = "hdfs://quickstart.cloudera:8020/user/cloudera/flume/events/*"
val RESULTS_FILE_PATH = "hdfs://quickstart.cloudera:8020/user/cloudera/spark/script_results/prova/bad_users.csv"

val data = sqlContext
  .read
  .format("com.databricks.spark.csv")
  .option("inferSchema", "true")
  .option("header", "true") // use the header from your csv
  .load(LOG_FILE_PATH)

val wSpec = Window.partitionBy("userId").orderBy("timestamp")

val badUsers = data
  .withColumn("previousEvent", lag($"event", 1).over(wSpec))
  .filter($"previousEvent" === $"event")
  .select("userId")
  .distinct

badUsers.write.format("com.databricks.spark.csv").save(RESULTS_FILE_PATH)
Basically you just retrieve the value from the previous line and compare it to the value on your current line; if they match, that is a wrong behavior and you keep the userId. For the first line in each userId's "block" of lines, the previous value will be null; when compared with the current value, the boolean expression is false, so there is no problem here.
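For reference, a rough PySpark translation of the Scala snippet might look like the following. This is only a sketch: it assumes a more recent Spark (2.x or later) where SparkSession and the built-in CSV reader are available, and it reuses the HDFS paths from the question.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("Data Check").getOrCreate()

LOG_FILE_PATH = "hdfs://quickstart.cloudera:8020/user/cloudera/flume/events/*"
RESULTS_FILE_PATH = "hdfs://quickstart.cloudera:8020/user/cloudera/spark/script_results/prova/bad_users.csv"

# read the logs, using the header row from the csv
data = (spark.read
        .option("inferSchema", "true")
        .option("header", "true")
        .csv(LOG_FILE_PATH))

# for each user, look at the events in timestamp order
w_spec = Window.partitionBy("userId").orderBy("timestamp")

# a user is "bad" if two consecutive events have the same type
bad_users = (data
             .withColumn("previousEvent", F.lag("event", 1).over(w_spec))
             .filter(F.col("previousEvent") == F.col("event"))
             .select("userId")
             .distinct())

bad_users.write.csv(RESULTS_FILE_PATH, header=True)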
