Get Python unit test duration in seconds

Is there any way to get the total amount of time that unittest.TextTestRunner().run() takes to run a specific unit test?
I'm using a for loop to test modules against certain scenarios (some having to be used and some not, so they run a few times), and I would like to print the total time it has taken to run all the tests.
Any help would be greatly appreciated.

UPDATED, thanks to @Centralniak's comment.
How about simply:
from datetime import datetime
tick = datetime.now()
# run the tests here
tock = datetime.now()
diff = tock - tick # the result is a datetime.timedelta object
print(diff.total_seconds())

You could record the start time in setUp() and then print the elapsed time in tearDown().

Following Eric's one-line answer I have a little snippet I work with here:
from datetime import datetime

class SomeTests(unittest.TestCase):
    """
    ... write the rest yourself! ...
    """

    def setUp(self):
        self.tick = datetime.now()

    def tearDown(self):
        self.tock = datetime.now()
        diff = self.tock - self.tick
        print(diff.total_seconds() * 1000, "ms")

    # all the other tests below
This works well enough for me for now, but I still want to fix some minor formatting issues: the runner's ok now ends up on the next line, and FAIL takes priority over it. This is ugly.
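One way to keep the timing output from colliding with the runner's ok/FAIL lines is to collect per-test durations and print a summary once at the end, in tearDownClass. A minimal sketch:

import time
import unittest

class SomeTests(unittest.TestCase):
    durations = {}  # test id -> elapsed seconds

    def setUp(self):
        self._tick = time.perf_counter()

    def tearDown(self):
        self.durations[self.id()] = time.perf_counter() - self._tick

    @classmethod
    def tearDownClass(cls):
        # Printed after all tests in the class have run, so it no
        # longer interleaves with the runner's own output.
        for name, seconds in sorted(cls.durations.items()):
            print("%-60s %8.3f ms" % (name, seconds * 1000))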

I do this exactly as Eric postulated -- here's a decorator I use for tests (often more functional-test-y than strict unit tests)...
# -*- coding: utf-8 -*-
from __future__ import print_function

import time
from functools import wraps
from pprint import pprint

WIDTH = 60

print_separator = lambda fill='-', width=WIDTH: print(fill * width)

def timedtest(function):
    """
    Functions so decorated will print the time they took to execute.

    Usage:

    import unittest

    class MyTests(unittest.TestCase):

        @timedtest
        def test_something(self):
            assert something is something_else
            # … etc

            # An optional return value is pretty-printed,
            # along with the timing values:
            return another_thing
    """
    @wraps(function)
    def wrapper(*args, **kwargs):
        print()
        print("TESTING: %s(…)" % getattr(function, "__name__", "<unnamed>"))
        print_separator()
        print()
        t1 = time.time()
        out = function(*args, **kwargs)
        t2 = time.time()
        dtout = "%.3f" % (t2 - t1)  # elapsed seconds, three decimal places
        print_separator()
        if out is not None:
            print('RESULTS:')
            pprint(out, indent=4)
        print('Test finished in %s seconds' % dtout)
        print_separator('=')
        return out
    return wrapper
That's the core of it -- from there, if you want, you can stash the times in a database for analysis, or draw graphs, et cetera. A decorator like this (using @wraps(…) from the functools module) won't interfere with any of the dark magic that unit-test frameworks occasionally resort to.

Besides using datetime, you could also use time
from time import time
t0 = time()
# do your stuff here
print(time() - t0) # it will show in seconds
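If all you need is an elapsed duration rather than a wall-clock date, time.perf_counter() on Python 3 is a higher-resolution, monotonic alternative (it can't jump backwards if the system clock changes):

from time import perf_counter

t0 = perf_counter()
# do your stuff here
print(perf_counter() - t0)  # elapsed seconds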

Related

Add extra parameters to callback function loop

I have a wrapper to time the execution of certain functions in a list. Most of these functions have one and the same parameter: era. I run the functions as displayed below. However, some functions require an extra parameter, e.g. the function dummy_function(). I've been looking for a way to add this parameter in a Pythonic way. I found some solutions, but they are very ugly and not quite scalable. Any help or suggestions would be tremendously appreciated!
def dummy_function(self, period, letter='A'):
    """ Debugging purposes only """
    print(f'This function prints the letter {letter}.')
    from time import sleep
    sleep(3)

def timed_execution(callbacks, era):
    for callback in callbacks:
        start_time = time.time()
        callback(era)
        end_time = time.time()
        print(f'{callback.__name__} took {end_time-start_time:.3f}s')

def calculate_insights(era):
    timed_execution([
        dummy_function,
        another_function,
        yet_another_function,
    ], era)

calculate_insights(era)
Perhaps the best way is to actually pass the arguments along for each respective function (as sketched below), or to use a wrapper that times any function.
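For the first option, functools.partial can bind the extra parameter up front, so every entry in the list still takes only era. A sketch along the lines of the question's code (the era value and the letter binding are made up for illustration):

from functools import partial
from time import time, sleep

def dummy_function(era, letter='A'):
    """ Debugging purposes only """
    print(f'This function prints the letter {letter}.')
    sleep(1)

def timed_execution(callbacks, era):
    for callback in callbacks:
        start_time = time()
        callback(era)
        end_time = time()
        # partial objects have no __name__, so look through to the wrapped function
        name = callback.func.__name__ if isinstance(callback, partial) else callback.__name__
        print(f'{name} took {end_time - start_time:.3f}s')

timed_execution([
    dummy_function,                       # uses the default letter
    partial(dummy_function, letter='B'),  # extra parameter bound up front
], era='1920s')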
Code taken from another question
from functools import wraps
from time import time

def timing(f):
    @wraps(f)
    def wrap(*args, **kw):
        ts = time()
        result = f(*args, **kw)
        te = time()
        print('func:%r args:[%r, %r] took: %2.4f sec' %
              (f.__name__, args, kw, te - ts))
        return result
    return wrap
Then you can do something along the lines of
@timing
def dummy_function(self, period, letter='A'):
    """ Debugging purposes only """
    print(f'This function prints the letter {letter}.')
    from time import sleep
    sleep(3)

def calculate_insights():
    dummy_function(era)
Or you could just use a dict with all the parameters to pass into each callback, but that doesn't sound too Pythonic to me.

Running a Python web scraper every hour [duplicate]

I'm looking for a library in Python which will provide at and cron like functionality.
I'd quite like to have a pure Python solution, rather than relying on tools installed on the box; this way I run on machines with no cron.
For those unfamiliar with cron: you can schedule tasks based upon an expression like:
0 2 * * 7 /usr/bin/run-backup # run the backups at 0200 on Every Sunday
0 9-17/2 * * 1-5 /usr/bin/purge-temps # run the purge temps command, every 2 hours between 9am and 5pm on Mondays to Fridays.
The cron time expression syntax is less important, but I would like to have something with this sort of flexibility.
If there isn't something that does this for me out-the-box, any suggestions for the building blocks to make something like this would be gratefully received.
Edit
I'm not interested in launching processes, just "jobs" also written in Python - python functions. By necessity I think this would be a different thread, but not in a different process.
To this end, I'm looking for the expressivity of the cron time expression, but in Python.
Cron has been around for years, but I'm trying to be as portable as possible. I cannot rely on its presence.
If you're looking for something lightweight, check out schedule:
import schedule
import time

def job():
    print("I'm working...")

schedule.every(10).minutes.do(job)
schedule.every().hour.do(job)
schedule.every().day.at("10:30").do(job)

while 1:
    schedule.run_pending()
    time.sleep(1)
Disclosure: I'm the author of that library.
You could just use normal Python argument passing syntax to specify your crontab. For example, suppose we define an Event class as below:
from datetime import datetime, timedelta
import time

# Some utility classes / functions first
class AllMatch(set):
    """Universal set - match everything"""
    def __contains__(self, item):
        return True

allMatch = AllMatch()

def conv_to_set(obj):  # Allow single integer to be provided
    if isinstance(obj, int):
        return set([obj])  # Single item
    if not isinstance(obj, set):
        obj = set(obj)
    return obj

# The actual Event class
class Event(object):
    def __init__(self, action, min=allMatch, hour=allMatch,
                 day=allMatch, month=allMatch, dow=allMatch,
                 args=(), kwargs={}):
        self.mins = conv_to_set(min)
        self.hours = conv_to_set(hour)
        self.days = conv_to_set(day)
        self.months = conv_to_set(month)
        self.dow = conv_to_set(dow)
        self.action = action
        self.args = args
        self.kwargs = kwargs

    def matchtime(self, t):
        """Return True if this event should trigger at the specified datetime"""
        return ((t.minute in self.mins) and
                (t.hour in self.hours) and
                (t.day in self.days) and
                (t.month in self.months) and
                (t.weekday() in self.dow))

    def check(self, t):
        if self.matchtime(t):
            self.action(*self.args, **self.kwargs)
(Note: Not thoroughly tested)
Then your CronTab can be specified in normal python syntax as:
c = CronTab(
    Event(perform_backup, 0, 2, dow=6),
    Event(purge_temps, 0, range(9, 18, 2), dow=range(0, 5))
)
This way you get the full power of Python's argument mechanics (mixing positional and keyword args, and you can use symbolic names for days of the week and months).
The CronTab class would be defined as simply sleeping in minute increments, and calling check() on each event. (There are probably some subtleties with daylight savings time / timezones to be wary of though). Here's a quick implementation:
class CronTab(object):
    def __init__(self, *events):
        self.events = events

    def run(self):
        t = datetime(*datetime.now().timetuple()[:5])
        while 1:
            for e in self.events:
                e.check(t)
            t += timedelta(minutes=1)
            while datetime.now() < t:
                time.sleep((t - datetime.now()).seconds)
A few things to note: Python's weekday() is zero-indexed with Monday as 0 (unlike cron, where Sunday is 0), and range() excludes its last element, hence syntax like "1-5" becomes range(0,5), i.e. [0,1,2,3,4]. If you prefer cron syntax, parsing it shouldn't be too difficult, however.
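If you do want the cron strings, a single field such as "9-17/2" is straightforward to expand into the set that Event expects. A rough sketch:

def parse_cron_field(field, lo, hi):
    """Expand one cron field ('*', '3', '1,3,5', '9-17/2') into a set of ints."""
    values = set()
    for part in field.split(','):
        rng, _, step = part.partition('/')
        step = int(step) if step else 1
        if rng == '*':
            start, end = lo, hi
        elif '-' in rng:
            start, end = (int(x) for x in rng.split('-'))
        else:
            start = end = int(rng)
        values.update(range(start, end + 1, step))
    return values

# "every 2 hours between 9am and 5pm" -> {9, 11, 13, 15, 17}
print(parse_cron_field("9-17/2", 0, 23))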
More or less the same as above, but concurrent, using gevent :)
"""Gevent based crontab implementation"""
from datetime import datetime, timedelta
import gevent
# Some utility classes / functions first
def conv_to_set(obj):
"""Converts to set allowing single integer to be provided"""
if isinstance(obj, (int, long)):
return set([obj]) # Single item
if not isinstance(obj, set):
obj = set(obj)
return obj
class AllMatch(set):
"""Universal set - match everything"""
def __contains__(self, item):
return True
allMatch = AllMatch()
class Event(object):
"""The Actual Event Class"""
def __init__(self, action, minute=allMatch, hour=allMatch,
day=allMatch, month=allMatch, daysofweek=allMatch,
args=(), kwargs={}):
self.mins = conv_to_set(minute)
self.hours = conv_to_set(hour)
self.days = conv_to_set(day)
self.months = conv_to_set(month)
self.daysofweek = conv_to_set(daysofweek)
self.action = action
self.args = args
self.kwargs = kwargs
def matchtime(self, t1):
"""Return True if this event should trigger at the specified datetime"""
return ((t1.minute in self.mins) and
(t1.hour in self.hours) and
(t1.day in self.days) and
(t1.month in self.months) and
(t1.weekday() in self.daysofweek))
def check(self, t):
"""Check and run action if needed"""
if self.matchtime(t):
self.action(*self.args, **self.kwargs)
class CronTab(object):
"""The crontab implementation"""
def __init__(self, *events):
self.events = events
def _check(self):
"""Check all events in separate greenlets"""
t1 = datetime(*datetime.now().timetuple()[:5])
for event in self.events:
gevent.spawn(event.check, t1)
t1 += timedelta(minutes=1)
s1 = (t1 - datetime.now()).seconds + 1
print "Checking again in %s seconds" % s1
job = gevent.spawn_later(s1, self._check)
def run(self):
"""Run the cron forever"""
self._check()
while True:
gevent.sleep(60)
import os
def test_task():
"""Just an example that sends a bell and asd to all terminals"""
os.system('echo asd | wall')
cron = CronTab(
Event(test_task, 22, 1 ),
Event(test_task, 0, range(9,18,2), daysofweek=range(0,5)),
)
cron.run()
None of the listed solutions even attempt to parse a complex cron schedule string. So, here is my version, using croniter. Basic gist:
schedule = "*/5 * * * *" # Run every five minutes
nextRunTime = getNextCronRunTime(schedule)
while True:
roundedDownTime = roundDownTime()
if (roundedDownTime == nextRunTime):
####################################
### Do your periodic thing here. ###
####################################
nextRunTime = getNextCronRunTime(schedule)
elif (roundedDownTime > nextRunTime):
# We missed an execution. Error. Re initialize.
nextRunTime = getNextCronRunTime(schedule)
sleepTillTopOfNextMinute()
Helper routines:
from croniter import croniter
from datetime import datetime, timedelta
import time

# Round time down to the top of the previous minute
def roundDownTime(dt=None, dateDelta=timedelta(minutes=1)):
    roundTo = dateDelta.total_seconds()
    if dt is None:
        dt = datetime.now()
    seconds = (dt - dt.min).seconds
    rounding = (seconds + roundTo / 2) // roundTo * roundTo
    return dt + timedelta(0, rounding - seconds, -dt.microsecond)

# Get next run time from now, based on schedule specified by cron string
def getNextCronRunTime(schedule):
    return croniter(schedule, datetime.now()).get_next(datetime)

# Sleep till the top of the next minute
def sleepTillTopOfNextMinute():
    t = datetime.utcnow()
    sleeptime = 60 - (t.second + t.microsecond / 1000000.0)
    time.sleep(sleeptime)
I know there are a lot of answers, but another solution could be to go with decorators. This is an example that repeats a function every day at a specific time. The cool thing about this approach is that you only need to add the syntactic sugar to the function you want to schedule:
@repeatEveryDay(hour=6, minutes=30)
def sayHello(name):
    print(f"Hello {name}")

sayHello("Bob")  # Now this function will be invoked every day at 6:30 a.m.
And the decorator will look like:
import datetime
import functools
import time

def repeatEveryDay(hour, minutes=0, seconds=0):
    """
    Decorator that will run the decorated function every day at the given
    hour, minutes and seconds.
    :param hour: 0-23
    :param minutes: 0-59 (Optional)
    :param seconds: 0-59 (Optional)
    """
    def decoratorRepeat(func):
        @functools.wraps(func)
        def wrapperRepeat(*args, **kwargs):
            def getLocalTime():
                return datetime.datetime.fromtimestamp(time.mktime(time.localtime()))

            td = datetime.timedelta(days=1)  # one day between runs
            # Get the datetime of the first function call
            if wrapperRepeat.nextSent is None:
                now = getLocalTime()
                wrapperRepeat.nextSent = datetime.datetime(now.year, now.month, now.day, hour, minutes, seconds)
                if wrapperRepeat.nextSent < now:
                    wrapperRepeat.nextSent += td

            # Waiting till the next scheduled time
            while getLocalTime() < wrapperRepeat.nextSent:
                time.sleep(1)

            # Call the function
            func(*args, **kwargs)

            # Schedule the next function call
            wrapperRepeat.nextSent += td
            wrapperRepeat(*args, **kwargs)

        wrapperRepeat.nextSent = None
        return wrapperRepeat
    return decoratorRepeat
I like how the pycron package solves this problem.
import pycron
import time

while True:
    if pycron.is_now('0 2 * * 0'):   # True every Sunday at 02:00
        print('running backup')
        time.sleep(60)               # The process should take at least 60 sec
                                     # to avoid running twice in one minute
    else:
        time.sleep(15)               # Check again in 15 seconds
There isn't a "pure python" way to do this because some other process would have to launch python in order to run your solution. Every platform will have one or twenty different ways to launch processes and monitor their progress. On unix platforms, cron is the old standard. On Mac OS X there is also launchd, which combines cron-like launching with watchdog functionality that can keep your process alive if that's what you want. Once python is running, then you can use the sched module to schedule tasks.
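For completeness, a minimal sketch of that last suggestion: sched has no cron syntax, but a job that re-schedules itself covers the "every hour" case:

import sched
import time

scheduler = sched.scheduler(time.time, time.sleep)

def hourly_job():
    print("scraping...")
    scheduler.enter(3600, 1, hourly_job)  # re-schedule ourselves in an hour

scheduler.enter(3600, 1, hourly_job)  # first run, one hour from now
scheduler.run()                       # blocks, running jobs as they come due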
Another trivial solution would be:
from aqcron import At
from time import sleep
from datetime import datetime

# Event scheduling
event_1 = At(second=5)
event_2 = At(second=[0, 20, 40])

while True:
    now = datetime.now()
    # Event check
    if now in event_1: print("event_1")
    if now in event_2: print("event_2")
    sleep(1)
And the class aqcron.At is:
# aqcron.py
class At(object):
    def __init__(self, year=None, month=None,
                 day=None, weekday=None,
                 hour=None, minute=None,
                 second=None):
        loc = locals()
        loc.pop("self")
        self.at = dict((k, v) for k, v in loc.items() if v is not None)

    def __contains__(self, now):
        for k in self.at.keys():
            try:
                if getattr(now, k) not in self.at[k]:
                    return False
            except TypeError:
                if self.at[k] != getattr(now, k):
                    return False
        return True
I don't know if something like that already exists. It would be easy to write your own with the time, datetime and/or calendar modules; see http://docs.python.org/library/time.html
The only concern for a Python solution is that your job needs to be always running and possibly be automatically "resurrected" after a reboot, something for which you do need to rely on system-dependent solutions.

What's the best way to parallelize this process

I've been trying to parallelize a process inside a class method. When I try using Pool() from multiprocessing I get pickling errors. When I use Pool() from multiprocessing.dummy my execution is slower than serial execution.
I've attempted several variations of my code below, using Stack Overflow posts as a guide, but none of them were a successful workaround for the problem outlined above.
One example: if I move process_function above the class definition (globalizing it), it doesn't work because I can't access my object's attributes.
Anyway, my code is similar to:
from multiprocessing.dummy import Pool as ThreadPool
from my_other_module import other_module_class

class myClass:
    def __init__(self, some_list, number_iterations):
        self.my_interface = other_module_class
        self.relevant_list = []
        self.some_list = some_list
        self.number_iterations = number_iterations
        # self.other_attributes = stuff from import statements

    def load_relevant_data(self):
        self.relevant_list = self.interface.other_function

    def compute_foo(self, relevant_list_member_value):
        # math involving class attributes
        return foo_scalar

    def higher_function(self):
        self.relevant_list = self.load_relevant_data
        np.random.seed(0)
        pool = ThreadPool()  # I've tried different args here, no help
        pool.map(self.process_function, self.relevant_list)

    def process_function(self, dict_from_relevant_list):
        foo_bar = self.compute_foo(dict_from_relevant_list['key'])
        a = 0
        for i in some_other_list:
            # do other stuff involving class attributes and foo_bar
            # a = some of that
            dict_from_relevant_list['other_key'] = a

if __name__ == '__main__':
    import time
    import pprint as pp

    some_list = blah
    number_of_iterations = 10**4
    my_obj = myClass(some_list, number_of_iterations)
    my_obj.load_third_parties()

    start = time.time()
    my_obj.higher_function()
    execution_time = time.time() - start

    print()
    print("Execution time for %s simulation runs: %s" % (number_of_iterations, execution_time))
    print()
    pp.pprint(my_obj.relevant_list[0:5])
I have a few hundred dictionaries inside relevant_list. I just want to populate each dictionary's 'other_key' field from a computationally expensive simulation in my innermost loop, which yields a scalar value, like a above. It seems like there should be a simple way to do this, since in Matlab I could just write parfor and it's done automatically. Maybe that instinct is wrong for Python.
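One common workaround for the pickling errors is to move the worker to module level and pass everything it needs as explicit arguments, binding the per-instance state with functools.partial. A sketch with made-up stand-ins (process_item, coefficients) for the real attributes:

from functools import partial
from multiprocessing import Pool

def process_item(item, coefficients):
    """Module-level worker: all state arrives as arguments, so it pickles cleanly."""
    item = dict(item)  # work on a copy; workers can't mutate the parent's objects
    item['other_key'] = sum(c * item['key'] for c in coefficients)
    return item

class MyClass:
    def __init__(self, relevant_list, coefficients):
        self.relevant_list = relevant_list
        self.coefficients = coefficients  # stands in for the real class attributes

    def higher_function(self):
        worker = partial(process_item, coefficients=self.coefficients)
        with Pool() as pool:
            # collect the returned copies, since workers run in other processes
            self.relevant_list = pool.map(worker, self.relevant_list)

if __name__ == '__main__':
    obj = MyClass([{'key': i} for i in range(100)], [0.5, 1.5])
    obj.higher_function()
    print(obj.relevant_list[:3])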

How to slow down asynchronous API calls to match API limits?

I have a list of ~300K URLs for an API I need to get data from.
The API limit is 100 calls per second.
I have made a class for the asynchronous calls, but it is working too fast and I am hitting an error on the API.
How do I slow the calls down, so that I make at most 100 per second?
import grequests

lst = ['url.com', 'url2.com']

class Test:
    def __init__(self):
        self.urls = lst

    def exception(self, request, exception):
        print("Problem: {}: {}".format(request.url, exception))

    def async(self):
        return grequests.map((grequests.get(u) for u in self.urls),
                             exception_handler=self.exception, size=5)

    def collate_responses(self, results):
        return [x.text for x in results]

test = Test()

# here we collect the results returned by the async function
results = test.async()
response_text = test.collate_responses(results)
The first step that I took was to create an object that can distribute a maximum of n coins every t ms.
import time

class CoinsDistribution:
    """Object that distributes a maximum of maxCoins every timeLimit ms"""
    def __init__(self, maxCoins, timeLimit):
        self.maxCoins = maxCoins
        self.timeLimit = timeLimit
        self.coin = maxCoins
        self.time = time.perf_counter()

    def getCoin(self):
        if self.coin <= 0 and not self.restock():
            return False
        self.coin -= 1
        return True

    def restock(self):
        t = time.perf_counter()
        if (t - self.time) * 1000 < self.timeLimit:
            return False
        self.coin = self.maxCoins
        self.time = t
        return True
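Used directly, it looks something like this (urls and fetch are hypothetical stand-ins for your own request code):

bucket = CoinsDistribution(100, 1000)  # at most 100 calls per second

for url in urls:
    while not bucket.getCoin():
        time.sleep(0.01)  # wait for the next restock
    fetch(url)            # hypothetical request function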
Now we need a way of forcing functions to only get called if they can get a coin.
To do that, we can write a decorator function that we could use like this:
@limitCalls(callLimit=1, timeLimit=1000)
def uniqFunctionRequestingServer1():
    return 'response from s1'
But sometimes multiple functions are requesting the same server, so we would want them to get coins from the same CoinsDistribution object.
Therefore, another use of the decorator is to supply the CoinsDistribution object directly:
server_2_limit = CoinsDistribution(3, 1000)

@limitCalls(server_2_limit)
def sendRequestToServer2():
    return 'it worked !!'

@limitCalls(server_2_limit)
def sendAnOtherRequestToServer2():
    return 'it worked too !!'
We now have to create the decorator. It can take either a CoinsDistribution object or enough data to create a new one.
import functools

def limitCalls(obj=None, *, callLimit=100, timeLimit=1000):
    if obj is None:
        obj = CoinsDistribution(callLimit, timeLimit)

    def limit_decorator(func):
        @functools.wraps(func)
        def limit_wrapper(*args, **kwargs):
            if obj.getCoin():
                return func(*args, **kwargs)
            return 'limit reached, please wait'
        return limit_wrapper
    return limit_decorator
And it's done! Now you can limit the number of calls to any API that you use, and you can build a dictionary to keep track of your CoinsDistribution objects if you have to manage a lot of them (for different API endpoints or different APIs).
Note: here I have chosen to return an error message if there are no coins available. You should adapt this behaviour to your needs.
You can just keep track of how much time has passed and decide if you want to do more requests or not.
This will print 100 numbers per second, for example:
from datetime import datetime
import time

start = datetime.now()
time.sleep(1)
counter = 0
while True:
    end = datetime.now()
    s = (end - start).seconds
    if counter >= 100:
        if s <= 1:
            time.sleep(1)  # You can keep track of the time and sleep less, actually
        start = datetime.now()
        counter = 0
    print(counter)
    counter += 1
This other question in SO shows exactly how to do this. By the way, what you need is usually called throttling.
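Another way to throttle without counting by hand is a small generator that sleeps just enough between items. A sketch:

import time

def throttled(iterable, per_second):
    """Yield items no faster than per_second."""
    interval = 1.0 / per_second
    last = 0.0
    for item in iterable:
        wait = interval - (time.perf_counter() - last)
        if wait > 0:
            time.sleep(wait)
        last = time.perf_counter()
        yield item

for url in throttled(['url.com', 'url2.com'], per_second=100):
    pass  # issue the request for this url here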

How can I capture return value with Python timeit module?

I'm running several machine learning algorithms with sklearn in a for loop and want to see how long each of them takes. The problem is I also need to return a value, and I DON'T want to have to run it more than once because each algorithm takes so long. Is there a way to capture the return value 'clf' using Python's timeit module, or a similar one, with a function like this...
def RandomForest(train_input, train_output):
    clf = ensemble.RandomForestClassifier(n_estimators=10)
    clf.fit(train_input, train_output)
    return clf
when I call the function like this
t = Timer(lambda: RandomForest(trainX, trainy))
print(t.timeit(number=1))
P.S. I also don't want to set a global 'clf' because I might want to do multithreading or multiprocessing later.
For Python 3.5 you can override the value of timeit.template
timeit.template = """
def inner(_it, _timer{init}):
    {setup}
    _t0 = _timer()
    for _i in _it:
        retval = {stmt}
    _t1 = _timer()
    return _t1 - _t0, retval
"""
unutbu's answer works for Python 3.4 but not 3.5, as the _template_func function appears to have been removed in 3.5.
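With the template overridden as above, Timer.timeit() returns a (duration, return_value) tuple. A quick sketch of the resulting usage on Python 3.5+:

import time
import timeit

# assumes timeit.template has already been replaced as shown above

def foo():
    time.sleep(1)
    return 42

elapsed, retval = timeit.Timer(foo).timeit(number=1)
print(elapsed, retval)  # e.g. 1.001... 42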
The problem boils down to timeit._template_func not returning the function's return value:
def _template_func(setup, func):
    """Create a timer function. Used if the "statement" is a callable."""
    def inner(_it, _timer, _func=func):
        setup()
        _t0 = _timer()
        for _i in _it:
            _func()
        _t1 = _timer()
        return _t1 - _t0
    return inner
We can bend timeit to our will with a bit of monkey-patching:
import timeit
import time

def _template_func(setup, func):
    """Create a timer function. Used if the "statement" is a callable."""
    def inner(_it, _timer, _func=func):
        setup()
        _t0 = _timer()
        for _i in _it:
            retval = _func()
        _t1 = _timer()
        return _t1 - _t0, retval
    return inner

timeit._template_func = _template_func

def foo():
    time.sleep(1)
    return 42

t = timeit.Timer(foo)
print(t.timeit(number=1))
returns
(1.0010340213775635, 42)
The first value is the timeit result (in seconds), the second value is the function's return value.
Note that the monkey-patch above only affects the behavior of timeit when a callable is passed timeit.Timer. If you pass a string statement, then you'd have to (similarly) monkey-patch the timeit.template string.
Funnily enough, I'm also doing machine learning and have a similar requirement ;-)
I solved it as follows, by writing a function that:
runs your function
prints the running time, along with the name of your function
returns the results
Let's say you want to time:
clf = RandomForest(train_input, train_output)
Then do:
clf = time_fn( RandomForest, train_input, train_output )
Stdout will show something like:
mymodule.RandomForest: 0.421609s
Code for time_fn:
import time

def time_fn(fn, *args, **kwargs):
    start = time.perf_counter()  # the original used time.clock(), removed in Python 3.8
    results = fn(*args, **kwargs)
    end = time.perf_counter()
    fn_name = fn.__module__ + "." + fn.__name__
    print(fn_name + ": " + str(end - start) + "s")
    return results
If I understand it well, since Python 3.5 you can define globals for each Timer instance without having to define them in your block of code. I am not sure if it would have the same issues with parallelization.
My approach would be something like:
clf = ensemble.RandomForestClassifier(n_estimators=10)
myGlobals = globals()
myGlobals.update({'clf': clf})
t = Timer(stmt='clf.fit(trainX,trainy)', globals=myGlobals)
print(t.timeit(number=1))
print(clf)
As of 2020, in IPython or a Jupyter notebook it is
t = %timeit -n1 -r1 -o RandomForest(trainX, trainy)
t.best
If you don't want to monkey-patch timeit, you could try using a global list, as below. This will also work in Python 2.7, which doesn't have the globals argument in timeit():
from timeit import timeit
import time

# Function to time - plagiarised from the answer above :-)
def foo():
    time.sleep(1)
    return 42

result = []
print(timeit('result.append(foo())', setup='from __main__ import result, foo', number=1))
print(result[0])
will print the time and then the result.
An approach I'm using is to "append" the running time to the results of the timed function. So, I write a very simple decorator using the time module:
def timed(func):
    def func_wrapper(*args, **kwargs):
        import time
        s = time.perf_counter()  # the original used time.clock(), removed in Python 3.8
        result = func(*args, **kwargs)
        e = time.perf_counter()
        return result + (e - s,)
    return func_wrapper
And then I use the decorator for the function I want to time.
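For example, applied to the question's RandomForest function (note that the decorated function must return a tuple, so the elapsed time can be concatenated onto it):

@timed
def RandomForest(train_input, train_output):
    clf = ensemble.RandomForestClassifier(n_estimators=10)
    clf.fit(train_input, train_output)
    return (clf,)  # a tuple, so that (e - s,) can be appended

clf, elapsed = RandomForest(trainX, trainy)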
For Python 3.X I use this approach:
import timeit

# Redefining the default Timer template to make 'timeit' return
# the test's execution timing and the function's return value
new_template = """
def inner(_it, _timer{init}):
    {setup}
    _t0 = _timer()
    for _i in _it:
        ret_val = {stmt}
    _t1 = _timer()
    return _t1 - _t0, ret_val
"""
timeit.template = new_template
