Python function to measure elapsed time

I want to create a Python function that measures the time spent in each function and prints its name along with the elapsed time. How can I print the function name, and if there is a better way to do this, please tell me.
import time

def measureTime(a):
    start = time.clock()
    a()
    elapsed = time.clock()
    elapsed = elapsed - start
    print "Time spent in (function name) is: ", elapsed

First and foremost, I highly suggest using a profiler, or at least timeit.
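For example, a minimal sketch of both built-in options (the do_work function is just a hypothetical workload):

import cProfile
import timeit

def do_work():
    return sum(range(100_000))

# Profile the call: prints per-function call counts and cumulative times
cProfile.run('do_work()')

# Time it with timeit: runs the callable 100 times and reports total seconds
print(timeit.timeit(do_work, number=100))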
However, if you want to write your own timing method strictly to learn, here is a starting point using a decorator.
Python 2:
import time

def timing(f):
    def wrap(*args):
        time1 = time.time()
        ret = f(*args)
        time2 = time.time()
        print '%s function took %0.3f ms' % (f.func_name, (time2-time1)*1000.0)
        return ret
    return wrap
And the usage is very simple, just use the @timing decorator:
@timing
def do_work():
    # code
Python 3:
import time

def timing(f):
    def wrap(*args, **kwargs):
        time1 = time.time()
        ret = f(*args, **kwargs)
        time2 = time.time()
        print('{:s} function took {:.3f} ms'.format(f.__name__, (time2-time1)*1000.0))
        return ret
    return wrap
Note I'm calling f.func_name to get the function name as a string (in Python 2), or f.__name__ in Python 3.

After playing with the timeit module, I don't like its interface, which is not so elegant compared to the following two methods.
The following code is in Python 3.
The decorator method
This is almost the same as @Mike's method. Here I add kwargs and functools.wraps to make it better.
import functools
import time

def timeit(func):
    @functools.wraps(func)
    def new_func(*args, **kwargs):
        start_time = time.time()
        result = func(*args, **kwargs)
        elapsed_time = time.time() - start_time
        print('function [{}] finished in {} ms'.format(
            func.__name__, int(elapsed_time * 1_000)))
        return result
    return new_func
@timeit
def foobar():
    mike = Person()
    mike.think(30)
The context manager method
import time
from contextlib import contextmanager

@contextmanager
def timeit_context(name):
    start_time = time.time()
    yield
    elapsed_time = time.time() - start_time
    print('[{}] finished in {} ms'.format(name, int(elapsed_time * 1_000)))
For example, you can use it like:
with timeit_context('My profiling code'):
    mike = Person()
    mike.think()
And the code within the with block will be timed.
Conclusion
Using the first method, you can easily comment out the decorator to get the normal code. However, it can only time a function. If you have some code that you don't want to turn into a function, then you can choose the second method.
For example, now you have
images = get_images()
big_image = ImagePacker.pack(images, width=4096)
drawer.draw(big_image)
Now you want to time the big_image = ... line. If you change it to a function, it will be:
images = get_images()
big_image = None

@timeit
def foobar():
    nonlocal big_image
    big_image = ImagePacker.pack(images, width=4096)
drawer.draw(big_image)
It doesn't look so great... and what if you are in Python 2, which has no nonlocal keyword?
Instead, the second method fits here very well:
images = get_images()
with timeit_context('foobar'):
    big_image = ImagePacker.pack(images, width=4096)
drawer.draw(big_image)

I don't see what the problem with the timeit module is. This is probably the simplest way to do it.
import timeit
timeit.timeit(a, number=1)
It's also possible to send arguments to the function. All you need is to wrap your function up using decorators. More explanation here: http://www.pythoncentral.io/time-a-python-function/
The only case where you might be interested in writing your own timing statements is if you want to run a function only once and also want to obtain its return value.
The advantage of using the timeit module is that it lets you repeat the number of executions. This might be necessary because other processes might interfere with your timing accuracy. So, you should run it multiple times and look at the lowest value.
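For example, a minimal sketch (the do_work function and its argument are hypothetical) of repeating a timed call and taking the minimum:

import timeit

def do_work(n):
    return sum(range(n))

# Wrap the call in a lambda so arguments can be passed; 5 runs of 100 calls each
runs = timeit.repeat(lambda: do_work(10_000), number=100, repeat=5)
print(min(runs))  # the lowest run is the least disturbed by other processes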

Timeit has two big flaws: it doesn't return the return value of the function, and it uses eval, which requires passing in extra setup code for imports. This solves both problems simply and elegantly:
import time

def timed(f):
    start = time.time()
    ret = f()
    elapsed = time.time() - start
    return ret, elapsed
timed(lambda: database.foo.execute('select count(*) from source.apachelog'))
(<sqlalchemy.engine.result.ResultProxy object at 0x7fd6c20fc690>, 4.07547402381897)

There is an easy tool for timing. https://github.com/RalphMao/PyTimer
It can work like a decorator:
from pytimer import Timer
import numpy as np

@Timer(average=False)
def matmul(a, b, times=100):
    for i in range(times):
        np.dot(a, b)
Output:
matmul:0.368434
matmul:2.839355
It can also work like a plug-in timer with namespace control (helpful if you are inserting it into a function which has a lot of code and may be called from anywhere else).
timer = Timer()

def any_function():
    timer.start()
    for i in range(10):
        timer.reset()
        np.dot(np.ones((100, 1000)), np.zeros((1000, 500)))
        timer.checkpoint('block1')
        np.dot(np.ones((100, 1000)), np.zeros((1000, 500)))
        np.dot(np.ones((100, 1000)), np.zeros((1000, 500)))
        timer.checkpoint('block2')
        np.dot(np.ones((100, 1000)), np.zeros((1000, 1000)))
    for j in range(20):
        np.dot(np.ones((100, 1000)), np.zeros((1000, 500)))
    timer.summary()

for i in range(2):
    any_function()
Output:
========Timing Summary of Default Timer========
block2:0.065062
block1:0.032529
========Timing Summary of Default Timer========
block2:0.065838
block1:0.032891
Hope it helps.

Decorator method using decorator Python library:
import decorator

@decorator.decorator
def timing(func, *args, **kwargs):
    '''Function timing wrapper
    Example of using:
    ``@timing``
    '''
    fn = '%s.%s' % (func.__module__, func.__name__)
    timer = Timer()  # Timer context manager from the author's own code (defined elsewhere)
    with timer:
        ret = func(*args, **kwargs)
    log.info(u'%s - %0.3f sec' % (fn, timer.duration_in_seconds()))  # log: a configured Logger
    return ret
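The Timer context manager and log object are not defined in the snippet above; a minimal sketch of hypothetical stand-ins (assumptions, since the originals live in the author's own code) could be:

import logging
import time

log = logging.getLogger(__name__)

class Timer:
    '''Hypothetical stand-in: measures the wall-clock duration of a with block.'''
    def __enter__(self):
        self._start = time.time()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self._end = time.time()

    def duration_in_seconds(self):
        return self._end - self._start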
See the post on my blog (mobilepro.pl) and my post on Google Plus.

My way of doing it:
from time import time

def printTime(start):
    end = time()
    duration = end - start
    if duration < 60:
        return "used: " + str(round(duration, 2)) + "s."
    else:
        mins = int(duration / 60)
        secs = round(duration % 60, 2)
        if mins < 60:
            return "used: " + str(mins) + "m " + str(secs) + "s."
        else:
            hours = int(duration / 3600)
            mins = mins % 60
            return "used: " + str(hours) + "h " + str(mins) + "m " + str(secs) + "s."
Set a variable start = time() before executing the function/loops, and call printTime(start) right after the block, and you get the answer.
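For example, a quick usage sketch (the loop is only a placeholder workload):

start = time()
for i in range(10_000_000):
    pass  # placeholder workload
print(printTime(start))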

Elaborating on @Jonathan Ray's answer, I think this does the trick a bit better:
import time
import inspect
import logging

logger = logging.getLogger(__name__)  # assumed to be configured elsewhere

def timed(f: callable):
    start = time.time()
    ret = f()
    elapsed = 1000 * (time.time() - start)
    source_code = inspect.getsource(f).strip('\n')
    logger.info(source_code + ": " + str(elapsed) + " ms")
    return ret
It allows you to take a regular line of code, say a = np.sin(np.pi), and transform it rather simply into
a = timed(lambda: np.sin(np.pi))
so that the timing is written to the logger and you keep the same assignment of the result to a variable you may need for further work.
I suppose in Python 3.8 one could use the walrus operator (:=), but I do not have 3.8 yet.

Below is a Timer class that:
Is easy to use: use it directly or as a decorator, in under 100 lines
Measures a lot: total calls, total time, average time, and standard deviation
Prints pretty-formatted times
Is thread-safe
This is how you use it:
# Create the timer
timer1 = Timer("a name", log_every=2)
# Use "with"
with timer1:
print("timer1")
# Reuse as a decorator
#timer1
def my_func():
print("my_func")
# Instantiate as a decorator
#Timer("another timer", log_every=1)
def my_func2():
print("my_func2")
my_func()
my_func2()
my_func()
Below is the class
from datetime import datetime
import time, logging, math, threading
import six

class Timer(object):
    '''A general timer class. Does not really belong in a judicata file here.'''
    def __init__(self, name, log_every=1):
        self.name = name
        self.log_every = log_every
        self.calls = 0
        self.total_time = 0
        self.total_squared_time = 0
        self.min, self.max = None, 0
        # Make timer thread-safe by storing the times in thread-local storage.
        self._local = threading.local()
        self._lock = threading.Lock()

    def __enter__(self):
        """Start a new timer"""
        self._local.start = datetime.utcnow()

    def __exit__(self, exc_type, exc_val, exc_tb):
        """Stop the timer, and report the elapsed time"""
        elapsed_time = (datetime.utcnow() - self._local.start).total_seconds()
        with self._lock:
            self.calls += 1
            self.total_time += elapsed_time
            if self.min is None or elapsed_time < self.min:
                self.min = elapsed_time
            if elapsed_time > self.max:
                self.max = elapsed_time
            self.total_squared_time += elapsed_time * elapsed_time
            if self.log_every and (self.calls % self.log_every) == 0:
                self.log()

    def __call__(self, fn):
        '''For use as a decorator.'''
        def decorated_timer_function(*args, **kwargs):
            with self:
                return fn(*args, **kwargs)
        return decorated_timer_function

    @classmethod
    def time_str(cls, secs):
        if isinstance(secs, six.string_types):
            try:
                secs = float(secs)
            except:
                return "(bad time: %s)" % secs
        sign = lambda x: x
        if secs < 0:
            secs = -secs
            sign = lambda x: ("-" + x)
        return sign("%d secs" % int(secs) if secs >= 120 else
                    "%.2f secs" % secs if secs >= 1 else
                    "%d ms" % int(secs * 1000) if secs >= .01 else
                    "%.2f ms" % (secs * 1000) if secs >= .0001 else
                    "%d ns" % int(secs * 1e9) if secs >= 1e-9 else
                    "%s" % secs)

    def log(self):
        if not self.calls:
            logging.info("<Timer %s: no calls>" % self.name)
            return
        avg = 1.0 * self.total_time / self.calls
        var = 1.0 * self.total_squared_time / self.calls - avg * avg
        std_dev = self.time_str(math.sqrt(var))
        total = self.time_str(self.total_time)
        min, max, avg = [self.time_str(t) for t in [self.min, self.max, avg]]
        logging.info("<Timer %s: N=%s, total=%s, avg=%s, min/max=%s/%s, std=%s>"
                     % (self.name, self.calls, total, avg, min, max, std_dev))

You can use timeit.default_timer along with a contextmanager:
from timeit import default_timer
from contextlib import contextmanager

@contextmanager
def timer():
    start_time = default_timer()
    try:
        yield
    finally:
        print("--- %s seconds ---" % (default_timer() - start_time))
Use it with a with statement:
def looper():
    for i in range(0, 100000000):
        pass

with timer():
    looper()
Output:
--- 2.651526927947998 seconds ---

Here is a generic solution
def timed(fn):
    # make sure wherever you use this, the imports will be ready
    from time import perf_counter
    from functools import wraps
    # wraps preserves the metadata of fn
    @wraps(fn)
    def inner(*args, **kwargs):
        start = perf_counter()
        result = fn(*args, **kwargs)
        end = perf_counter()
        elapsed = end - start
        args_ = [str(a) for a in args]
        kwargs_ = ["{0}={1}".format(k, v) for (k, v) in kwargs.items()]
        all_args = args_ + kwargs_
        args_str = ",".join(all_args)
        print("{0} ({1}) took {2:.6f}s to run.".format(fn.__name__, args_str, elapsed))
        return result
    return inner
Define a function:
@timed
def sum_up(a, b):
    return a + b
Now call it:
sum_up(2, 9)

For the case of using timeit.timeit, if the command
timeit.timeit(function_to_test, n=10000)
raises the error ValueError: stmt is neither a string nor callable,
or the command
timeit.timeit('function_to_test', n=10000)
raises the error name 'function_to_test' is not defined, then you need to:
replace function_to_test or 'function_to_test' with str(function_to_test), that is
timeit.timeit(str(function_to_test), n=10000)
or, if your Python version is >= 3.6, another way is to use an f-string:
timeit.timeit(f'{function_to_test}', n=10000)
As for the version using a lambda, i.e. timeit.timeit(lambda: function_to_test, n=10000), it works but, from my test, it takes a much longer time.
Here is a concrete example:
import timeit

def function_to_test(n):
    s = 1
    for i in range(n):
        s += 1
    return s

print("time run function_to_test: ", timeit.timeit(str(function_to_test(1000000)), number=10000))
print("time run function_to_test: ", timeit.timeit(f'{function_to_test(1000000)}', number=10000))

Related

Problems with Python decorator return values

I'm having some problems with the decorator's return value.
I would like to create a decorator to calculate a function's duration, so I built this code to learn how to work with decorators.
When I use the decorator with the print method, it works, but the intention of this code is to return a message giving the function name and the duration.
import time

def timer(function):
    def wrapper(*args, **kwargs):
        init_time = time.time()
        res = function(*args, **kwargs)
        end_time = time.time()
        Answer = str(f'The function {function.__name__} takes {end_time - init_time} seconds to be executed.')
        print(Answer)
        return res
    return wrapper

def timer2(function):
    def wrapper(*args, **kwargs):
        init_time = time.time()
        function(*args, **kwargs)
        end_time = time.time()
        Answer = str(f'The function {function.__name__} takes {end_time - init_time} seconds to be executed.')
        return Answer
    return wrapper

@timer
def calculator():
    soma_tot = 0
    for i in range(1, 1000000):
        soma_tot += 1
    return soma_tot

@timer2
def my_name(Name):
    print(f'Hello, my name is {Name}')

calculator()
my_name('Leonardo')
So I have two problems:
1 - If the function returns something, the decorator is not returning the function's return value;
2 - The decorator just prints the answer; it does not return it so that I can reuse the answer.
Take a look at timer2:
def timer2(function):
    def wrapper(*args, **kwargs):
        init_time = time.time()
        function(*args, **kwargs)  # Where does the result go?
        end_time = time.time()
        Answer = str(f'The function {function.__name__} takes {end_time - init_time} seconds to be executed.')
        return Answer  # You return an unrelated string
    return wrapper
Then,
calculator() # What do you do with the return value?
my_name('Leonardo')
In fact, timer2 should be exactly like timer. No need to create a new function.
When you use calculator(), check the return value and you'll see it works:
res = calculator()
print(res)
So guys, this is the code that I remade with your help, and now it's working.
import time

def timer(function):
    def wrapper(*args, **kwargs):
        init_time = time.time()
        funcReturn = function(*args, **kwargs)
        end_time = time.time()
        result_time = end_time - init_time
        if funcReturn is None:
            response = {"FunctionName": function.__name__, "TimeSpent": "{:.5f}".format(result_time)}
            return response
        else:
            response = {"FunctionReturn": funcReturn, "FunctionName": function.__name__, "TimeSpent": "{:.5f}".format(result_time)}
            return response
    return wrapper

@timer
def calculator():
    soma_tot = 0
    for i in range(1, 1000000):
        soma_tot += 1
    return soma_tot

@timer
def my_name(Name):
    print(f'Hello, my name is {Name}')

print(calculator())
print(my_name('Leonardo'))
In this way, I'm getting the result of the function (print or return) as well as its execution time.
Thanks all.

Alternative to global variables when logging stats about requests

I have a program that logs some messages about data that I download. Besides that, I would like to display some stats about the requests every k requests that I make to a site (k is 10 in my case), plus some overall stats at the end of the execution.
At the moment I have an implementation that I am not happy with, as it uses global variables. I am looking for a cleaner alternative. It looks like this (note: please ignore the fact that I am using print instead of logging, and that I am measuring the passing of time using time.time instead of time.perf_counter; I read that the latter would be a better option):
import time
import pprint

def f2(*args, **kwargs):
    global START_TIME
    global NO_REQUESTS
    global TOTAL_TIME_FOR_REQUESTS
    global MAX_TIME_FOR_REQUEST
    global AVERAGE_TIME_FOR_REQUESTS
    global TOTAL_TIME_FOR_DECODING
    global TOTAL_TIME_FOR_INTERSECT
    # ... logic that changes values of most of these global variables
    if NO_REQUESTS % 10 == 0:
        AVERAGE_TIME_FOR_REQUESTS = TOTAL_TIME_FOR_REQUESTS / NO_REQUESTS
        print()
        print('no requests so far: ' + str(NO_REQUESTS))
        print('average request time: {:.2f}s'.format(AVERAGE_TIME_FOR_REQUESTS))
        print('max request time: {:.2f}s'.format(MAX_TIME_FOR_REQUEST))
        elapsed = time.time() - START_TIME
        hours_elapsed = elapsed // 3600
        minutes_elapsed = (elapsed % 3600) // 60
        seconds_elapsed = ((elapsed % 3600) % 60)
        print('time elapsed so far: {}h {}m {:.2f}s'.format(hours_elapsed, minutes_elapsed, seconds_elapsed))
        print()
    time5 = time.time()
    decoded = some_module.decode(res.content)
    time6 = time.time()
    elapsed2 = time6 - time5
    TOTAL_TIME_FOR_DECODING += elapsed2
    return something

def f1(*args, **kwargs):
    global START_TIME
    global TOTAL_TIME_FOR_REQUESTS
    TOTAL_TIME_FOR_REQUESTS = 0
    global MAX_TIME_FOR_REQUEST
    MAX_TIME_FOR_REQUEST = 0
    global NO_REQUESTS
    NO_REQUESTS = 0
    global AVERAGE_TIME_FOR_REQUESTS
    AVERAGE_TIME_FOR_REQUESTS = 0
    global TOTAL_TIME_FOR_DECODING
    TOTAL_TIME_FOR_DECODING = 0
    global TOTAL_TIME_FOR_INTERSECT
    TOTAL_TIME_FOR_INTERSECT = 0
    f2()  # notice call to other function!
    # ... some logic
    return some_results

def output_final_stats(elapsed, results, precision='{:.3f}'):
    print()
    print('=============================')
    hours_elapsed = elapsed // 3600
    minutes_elapsed = (elapsed % 3600) // 60
    seconds_elapsed = ((elapsed % 3600) % 60)
    print("TIME ELAPSED: {:.3f}s OR {}h {}m {:.3f}s".format(
        elapsed, hours_elapsed, minutes_elapsed, seconds_elapsed))
    print("out of which:")
    # print((precision+'s for requests)'.format(TOTAL_TIME_FOR_REQUESTS)))
    print('{:.3f}s for requests'.format(TOTAL_TIME_FOR_REQUESTS))
    print('{:.3f}s for decoding'.format(TOTAL_TIME_FOR_DECODING))
    print('{:.3f}s for intersect'.format(TOTAL_TIME_FOR_INTERSECT))
    total = TOTAL_TIME_FOR_REQUESTS + TOTAL_TIME_FOR_DECODING + TOTAL_TIME_FOR_INTERSECT
    print('EXPECTED: {:.3f}s'.format(total))
    print('DIFF: {:.3f}s'.format(elapsed - total))
    print()
    print('AVERAGE REQUEST TIME: {:.3f}s'.format(AVERAGE_TIME_FOR_REQUESTS))
    print('TOTAL NO. REQUESTS: ' + str(NO_REQUESTS))
    print('MAX REQUEST TIME: {:.3f}s'.format(MAX_TIME_FOR_REQUEST))
    print('TOTAL NO. RESULTS: ' + str(len(results)))
    pprint('RESULTS: {}'.format(results), indent=4)

if __name__ == '__main__':
    START_TIME = time.time()
    results = f1(some_params)
    final_time = time.time()
    elapsed = final_time - START_TIME
    output_final_stats(elapsed, results)
The way I thought of it (not sure if it is the best option, open to alternatives) is to somehow have a listener on the NO_REQUESTS variable, and whenever that number reaches a multiple of 10, trigger the logging of the variables that I am interested in. Nonetheless, where would I store those variables, and what would their namespace be?
Another alternative would be to maybe have a parametrised decorator for one of my functions, but in this case I am not sure how easy it would be to pass the values that I am interested in from one function to another.
I think the cleanest way is to use a parametrized class decorator.
class LogEveryN:
    def __init__(self, n=10):
        self.n = n
        self.number_of_requests = 0
        self.total_time_for_requests = 0
        self.max_time_for_request = 0
        self.average_time_for_request = 0

    def __call__(self, func, *args, **kwargs):
        def wrapper(*args, **kwargs):
            self.number_of_requests += 1
            if self.number_of_requests % self.n == 0:
                # Do your computation and logging
                pass
            return func(*args, **kwargs)
        return wrapper

@LogEveryN(n=5)
def request_function():
    pass
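A minimal sketch of what the computation and logging inside wrapper could look like (the timing fields and the printed stats are assumptions, not part of the original answer):

import time

class LogEveryN:
    def __init__(self, n=10):
        self.n = n
        self.number_of_requests = 0
        self.total_time_for_requests = 0
        self.max_time_for_request = 0

    def __call__(self, func):
        def wrapper(*args, **kwargs):
            # Time the wrapped call and update the running statistics
            start = time.time()
            result = func(*args, **kwargs)
            elapsed = time.time() - start
            self.number_of_requests += 1
            self.total_time_for_requests += elapsed
            self.max_time_for_request = max(self.max_time_for_request, elapsed)
            # Report every n-th call
            if self.number_of_requests % self.n == 0:
                average = self.total_time_for_requests / self.number_of_requests
                print('no requests so far: {}'.format(self.number_of_requests))
                print('average request time: {:.2f}s'.format(average))
                print('max request time: {:.2f}s'.format(self.max_time_for_request))
            return result
        return wrapper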

Using Python multiprocessing library inside nested objects

I'm trying to use the multiprocessing library to parallelize some expensive calculations without blocking some other, much lighter ones. Both need to interact through some variables, although they may run at different paces.
To show this, I have created the following example, which works fine:
import multiprocessing
import time
import numpy as np

class SumClass:
    def __init__(self):
        self.result = 0.0
        self.p = None
        self.return_value = None

    def expensive_function(self, new_number, return_value):
        # Execute expensive calculation
        #######
        time.sleep(np.random.random_integers(5, 10, 1))
        return_value.value = self.result + new_number
        #######

    def execute_function(self, new_number):
        print(' New number received: %f' % new_number)
        self.return_value = multiprocessing.Value("f", 0.0, lock=True)
        self.p = multiprocessing.Process(target=self.expensive_function, args=(new_number, self.return_value))
        self.p.start()

    def is_executing(self):
        if self.p is not None:
            if not self.p.is_alive():
                self.result = self.return_value.value
                self.p = None
                return False
            else:
                return True
        else:
            return False

if __name__ == '__main__':
    sum_obj = SumClass()
    current_value = 0
    while True:
        if not sum_obj.is_executing():
            # Randomly determine whether the function must be executed or not
            if np.random.rand() < 0.25:
                print('Current sum value: %f' % sum_obj.result)
                new_number = np.random.rand(1)[0]
                sum_obj.execute_function(new_number)
        # Execute other (light) stuff
        #######
        print('Executing other stuff')
        current_value += sum_obj.result * 0.1
        print('Current value: %f' % current_value)
        time.sleep(1)
        #######
Basically, in the main loop some light work is executed and, depending on a random condition, some heavy work is sent to another process (if the previous one has already finished), carried out by an object which needs to store some data between executions. Although expensive_function takes some time, the light work keeps executing without being blocked.
Although the above code gets the job done, I'm wondering: is it the best/most appropriate method to do this?
Besides, let us suppose the class SumClass has an instance of another object, which also needs to store data. For example:
import multiprocessing
import time
import numpy as np

class Operator:
    def __init__(self):
        self.last_value = 1.0

    def operate(self, value):
        print(' Operation, last value: %f' % self.last_value)
        self.last_value *= value
        return self.last_value

class SumClass:
    def __init__(self):
        self.operator_obj = Operator()
        self.result = 0.0
        self.p = None
        self.return_value = None

    def expensive_function(self, new_number, return_value):
        # Execute expensive calculation
        #######
        time.sleep(np.random.random_integers(5, 10, 1))
        # Apply operation
        number = self.operator_obj.operate(new_number)
        # Apply other operation
        return_value.value = self.result + number
        #######

    def execute_function(self, new_number):
        print(' New number received: %f' % new_number)
        self.return_value = multiprocessing.Value("f", 0.0, lock=True)
        self.p = multiprocessing.Process(target=self.expensive_function, args=(new_number, self.return_value))
        self.p.start()

    def is_executing(self):
        if self.p is not None:
            if not self.p.is_alive():
                self.result = self.return_value.value
                self.p = None
                return False
            else:
                return True
        else:
            return False

if __name__ == '__main__':
    sum_obj = SumClass()
    current_value = 0
    while True:
        if not sum_obj.is_executing():
            # Randomly determine whether the function must be executed or not
            if np.random.rand() < 0.25:
                print('Current sum value: %f' % sum_obj.result)
                new_number = np.random.rand(1)[0]
                sum_obj.execute_function(new_number)
        # Execute other (light) stuff
        #######
        print('Executing other stuff')
        current_value += sum_obj.result * 0.1
        print('Current value: %f' % current_value)
        time.sleep(1)
        #######
Now, inside expensive_function, a member function of the Operator object is used, which needs to store the number it is passed.
As expected, the member variable last_value does not change, i.e. it does not keep any value between executions.
Is there any way of doing this properly?
I can imagine I could arrange everything so that I only need one class level, and it would work well. However, this is a toy example; in reality there are several levels of complex objects, and that would be hard.
Thank you very much in advance!
from concurrent.futures import ThreadPoolExecutor
from numba import jit
import requests
import timeit

def timer(number, repeat):
    def wrapper(func):
        runs = timeit.repeat(func, number=number, repeat=repeat)
        print(sum(runs) / len(runs))
    return wrapper

URL = "https://httpbin.org/uuid"

@jit(nopython=True, nogil=True, cache=True)
def fetch(session, url):
    with session.get(url) as response:
        print(response.json()['uuid'])

@timer(1, 1)
def runner():
    with ThreadPoolExecutor(max_workers=25) as executor:
        with requests.Session() as session:
            executor.map(fetch, [session] * 100, [URL] * 100)
            executor.shutdown(wait=True)
            executor._adjust_thread_count
Maybe this might help.
I'm using ThreadPoolExecutor for multithreading; you can also use ProcessPoolExecutor.
For your compute-expensive operation you can use numba to make cached byte code of your function for faster execution.
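As a minimal sketch of why a thread helps with the state problem from the question (an illustration, not the original code): threads share the parent process's memory, so an Operator-like object keeps its last_value between calls, whereas a separate process only mutates its own copy.

from concurrent.futures import ThreadPoolExecutor

class Operator:
    def __init__(self):
        self.last_value = 1.0

    def operate(self, value):
        self.last_value *= value
        return self.last_value

op = Operator()
with ThreadPoolExecutor(max_workers=1) as executor:
    executor.submit(op.operate, 2.0).result()
    executor.submit(op.operate, 3.0).result()
print(op.last_value)  # 6.0: the threads mutated the same object in shared memory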

Logging execution time with decorators [closed]

After I tried unsuccessfully for a while, I am seeking help from this miraculous website. Now for my problem: I want to create a decorator that writes the elapsed execution time of a function (during the execution of the function) into a logging file like:
#log_time("log.txt", 35)
def some_function(...):
...
return result
and
from functools import wraps
def log_time(path_to_logfile, interval):
    ...
so that log.txt would look something like
Time elapsed: 0h 0m 35s
Time elapsed: 0h 1m 10s
Time elapsed: 0h 1m 45s
Any ideas?
I'll give you a basic overview of what you must do to accomplish this. The following is a decorator that accepts two parameters and executes the function. The missing functionality is indicated by comments; fill them in:
def log_time(path_to_logfile, interval):
    def log(func):
        # 'wrap' this puppy up if needed
        def wrapped(*args, **kwargs):
            # start timing
            func(*args, **kwargs)
            # stop timing
            with open(path_to_logfile, 'a') as f:
                pass  # functionality
        return wrapped
    return log
You can now decorate functions and the output is going to be written in path_to_logfile. So, for example, decorating foo here:
@log_time('foo.txt', 40)
def foo(i, j):
    print(i, j)

foo(1, 2)
This will take foo and execute it. You need to time it appropriately and write the contents to your file. You should experiment with decorators even more and read up on them; a nice article on decorators exists at the Python Wiki.
Okay, I figured something out in the end with threads. Thanks for all the suggestions!
import codecs, threading, time
from functools import wraps

def log_time(logpath="log.txt", interval=5):
    def log_time_decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            t = threading.Thread(target=func, args=args, kwargs=kwargs)
            log_entries = 0
            with codecs.open(logpath, "wb", "utf-8") as logfile:
                start_time = time.time()
                t.start()
                while t.is_alive():
                    elapsed_time = (time.time() - start_time)
                    if elapsed_time > interval * log_entries:
                        m, s = divmod(elapsed_time, 60)
                        h, m = divmod(m, 60)
                        logfile.write("Elapsed time: %2dh %2dm %2ds\n" % (h, m, s))
                        log_entries += 1
        return wrapper
    return log_time_decorator
return log_time_decorator
One disadvantage might be that you cannot easily retrieve the return value of the function (at least I haven't figured it out yet).
EDIT1: Removed an unnecessary variable and added a nice format for log writing (see this).
EDIT2: Even though other users rejected his edit, I want to include a version from Piotr Dabkowski because it works with a return value:
def log_time(logpath="log.txt", interval=5):
    def log_time_decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            RESULT = [None]
            def temp():
                RESULT[0] = func(*args, **kwargs)
            t = threading.Thread(target=temp)
            log_entries = 0
            with codecs.open(logpath, "wb", "utf-8") as logfile:
                start_time = time.time()
                t.start()
                while t.is_alive():
                    elapsed_time = (time.time() - start_time)
                    if elapsed_time > interval * log_entries:
                        m, s = divmod(elapsed_time, 60)
                        h, m = divmod(m, 60)
                        logfile.write("Elapsed time: %2dh %2dm %2ds\n" % (h, m, s))
                        log_entries += 1
            return RESULT[0]
        return wrapper
    return log_time_decorator
Quickly put together, but it worked in a test with @timeit on a few functions.
import logging
logging.basicConfig(
    level=logging.DEBUG,
    filename='myProgramLog.txt',
    format=' %(asctime)s - %(levelname)s - %(message)s')

import time

def timeit(method):
    def timed(*args, **kw):
        ts = time.time()
        result = method(*args, **kw)
        te = time.time()
        logging.debug('%r (%r, %r) %2.2f sec' % \
            (method.__name__, args, kw, te - ts))
        return result
    return timed
Sources: https://www.andreas-jung.com/contents/a-python-decorator-for-measuring-the-execution-time-of-methods, https://automatetheboringstuff.com/chapter10/
EDIT: I find Python comes with a pretty good logging module; why re-invent the wheel?

Creating Class Stopwatch Python. Don't Understand Why it Works?

import time  # useful for measuring code execution

class StopWatch:
    def __init__(self, startTime=0, endTime=0, elapsedTime=0):
        self.__startTime = startTime
        self.__endTime = endTime
        self.__elapsedTime = elapsedTime

    def start(self):
        self.__startTime = time.clock()

    def stop(self):
        return self.getElapsedTime()

    def reset(self):
        self.__startTime = 0
        self.__elapsedTime = 0

    def getstarttime(self):
        return self.__startTime

    def getendtime(self):
        return self.__endTime

    def getElapsedTime(self):
        elapsedTime = self.__elapsedTime
        elapsedTime += ((time.clock() - self.__startTime) * 1000)
        return elapsedTime

def main():
    x = StopWatch()
    x.start
    a = time.clock()  # code only works with this line of code in place (I don't understand why?)
    sum = 0
    for i in range(1, 10000000):
        sum += i
    x.stop
    print("Elapsed execution time is", x.getElapsedTime())
    print(sum)
    x.reset

main()
The code fails to produce the correct result if I remove the
a = time.clock()
portion. With that in place it produces the correct result, but I am not really sure why this happens.
I realize there may be better ways to do this, but I'm kind of a beginner at Python, so I'd appreciate the help. Thanks! I am using a Windows system.
You wouldn't happen to be a rubyist, would you? x.start works to call methods in Ruby, but not in Python. You need x.start() - notice the parentheses. You have the same problem with x.stop and x.reset.
a = time.clock() is helping because time.clock() will sometimes (platform-dependent) return the time since the first call to clock(), instead of from process start. The actual assignment to a isn't doing anything; it's simply creating a start point for clock to reference later. Don't rely on this - the Python docs state "Return the CPU time or real time since the start of the process or since the first call to clock()."
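A minimal sketch of the corrected main() (calling the methods with parentheses, as noted above; time.perf_counter is substituted here for the deprecated time.clock as an assumption):

import time

class StopWatch:
    def __init__(self):
        self.__startTime = 0

    def start(self):
        self.__startTime = time.perf_counter()

    def getElapsedTime(self):
        # elapsed milliseconds since start()
        return (time.perf_counter() - self.__startTime) * 1000

def main():
    x = StopWatch()
    x.start()  # parentheses: actually call the method
    total = 0
    for i in range(1, 10000000):
        total += i
    print("Elapsed execution time is", x.getElapsedTime())
    print(total)

main()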
