Removing cached files after a pytest run - python

I'm using a joblib.Memory to cache expensive computations when running tests with py.test. The code I'm using reduces to the following:
from joblib import Memory

memory = Memory(cachedir='/tmp/')

@memory.cache
def expensive_function(x):
    return x**2  # some computationally expensive operation here

def test_other_function():
    input_ds = expensive_function(x=10)
    # run some tests with input_ds
which works fine. I'm aware this could possibly be done more elegantly with the tmpdir_factory fixture, but that's beside the point.
The issue I'm having is how to clean up the cached files once all the tests have run:
is it possible to share a global variable among all tests (which would contain e.g. a list of paths to the cached objects)?
is there a mechanism in py.test to call some command once all the tests are run (whether they succeed or not)?

is it possible to share a global variable among all tests (which would contain e.g. a list of paths to the cached objects)?
I wouldn't go down that path. Global mutable state is something best avoided, particularly in testing.
is there a mechanism in py.test to call some command once all the tests are run (whether they succeed or not)?
Yes, add an auto-used session-scoped fixture into your project-level conftest.py file:
# conftest.py
import pytest

@pytest.fixture(autouse=True, scope='session')
def test_suite_cleanup_thing():
    # setup
    yield
    # teardown - put your command here
The code after the yield will be run - once - at the end of the test suite, regardless of pass or fail.
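Applied to the question's joblib cache, a minimal sketch might look like the following (assuming the Memory instance from the question is defined in a module that conftest.py can import; the tests.cache module name is purely hypothetical). joblib's Memory.clear() removes the cached results:
# conftest.py
import pytest

from tests.cache import memory  # hypothetical module holding the Memory instance

@pytest.fixture(autouse=True, scope='session')
def cleanup_joblib_cache():
    # setup: nothing to do
    yield
    # teardown: runs once after the whole suite, pass or fail
    memory.clear(warn=False)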

is it possible to share a global variable among all tests (which would contain e.g. a list of paths to the cached objects)?
There are actually a couple of ways to do that, each with pros and cons. I think this SO answer sums them up quite nicely - https://stackoverflow.com/a/22793013/3023841 - but for example:
import pytest

def pytest_namespace():
    return {'my_global_variable': 0}

def test_namespace():
    assert pytest.my_global_variable == 0
is there a mechanism in py.test to call some command once all the tests are run (whether they succeed or not)?
Yes, py.test has teardown functions available:
def setup_module(module):
    """Set up any state specific to the execution of the given module."""

def teardown_module(module):
    """Tear down any state that was previously set up with a
    setup_module function.
    """

Calling multiple Python scripts from Python with a predefined environment

Probably related to: globals and locals in python exec(), Python 2 How to debug code injected by the exec block, and How to get local variables updated when using the `exec` call?
I am trying to develop a test framework for our desktop applications which uses click-bot-like functions.
My goal was to enable non-programmers to write small scripts which could work as test scripts. My idea is to structure the test scripts as files like:
tests-folder
| -> 01-first-test.py
| -> 02-second-test.py
| ... etc
| -> fixture.py
And then just execute these scripts in alphabetical order. However, I also wanted to have fixtures which would define functions, classes and variables and make them available to the different scripts without the scripts having to import that fixture explicitly. If that works, I could also extend that approach to two or more directory levels.
I could get it working-ish with some hacking around, but I am not entirely convinced. I have a test_sequence.py which looks like this:
from pathlib import Path
from copy import deepcopy

from my_module.test import Failure

def run_test_sequence(test_dir: str):
    error_occurred = False
    fixture = {
        'some_pre_defined_variable': 'this is available in all scripts and fixtures',
        'directory_name': test_dir,
    }

    # Check if fixture.py exists and load that first
    fixture_file = Path(test_dir) / 'fixture.py'
    if fixture_file.exists():
        with open(fixture_file.absolute(), 'r') as code:
            exec(code.read(), fixture, fixture)

    # Go over all files in the test sequence directory and execute them
    for test_file in sorted(Path(test_dir).iterdir()):
        if test_file.name == 'fixture.py':
            continue

        # Make a deepcopy, so scripts cannot influence one another
        fixture_copy = deepcopy(fixture)
        fixture_copy.update({
            'some_other_variable': 'this is available in all scripts but not in fixture'
        })

        try:
            with open(test_file.absolute(), 'r') as code:
                exec(code.read(), fixture_copy, fixture_copy)
        except Failure:
            error_occurred = True

    return error_occurred
This iterates over all files in the directory tests-folder and executes them in order (with fixture.py first). It also makes the local variables, functions and classes from fixture.py available to each test script.
A test script could then just be arbitrary code that will be executed, and if it raises my custom Failure exception, this will be noted as a failed test.
The whole sequence is started with a script that does
from my_module.test_sequence import run_test_sequence

if __name__ == '__main__':
    exit(run_test_sequence('tests-folder'))
This mostly works.
What it cannot do, and what leaves me unsatisfied with this approach:
I cannot debug the scripts themselves. Since the code is loaded as a string and then interpreted, breakpoints inside the test scripts are not recognized.
Calling fixture functions behaves oddly. When I define a function in fixture.py like:
from my_hello_module import print_hello

def printer():
    print_hello()
I will receive a message during execution that print_hello is undefined because the variables/modules/etc. in the scope surrounding printer are lost.
Stack traces are useless. On failure the stack trace of course only shows my line which says `exec(...)` and the insides of that function, but none of the code that has been loaded.
I am sure there are other drawbacks that I have not found yet, but these are the most annoying ones.
I also tried to find a solution through __import__ but I couldn't get it to inject my custom locals or globals into the imported script.
Is there a solution that I am too inexperienced to find, or another built-in Python function that actually does what I am trying to do? Or is there no way to achieve this, and should I rather have each test script import the fixture and the file/directory names itself? I want those scripts to have as few dependencies and as little Python boilerplate as possible. Ideally they are just:
from my_module.test import *

click(x, y, LEFT)
write('admin')
press('tab')
write('password')
press('enter')
if text_on_screen('Login successful'):
    succeed('Test successful')
else:
    fail('Could not login')
Additional note: I think I had the debugger working when I still used execfile, but it is not available in python3 environments.
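As an aside, and purely as a sketch: compiling the script with its real path before exec() usually makes stack traces (and file-based breakpoints) point at the script on disk rather than at the exec call, because the file exists for the traceback machinery to read:
# inside the loop of run_test_sequence, reusing the names from the code above
source = test_file.read_text()
code_object = compile(source, str(test_file.absolute()), 'exec')
exec(code_object, fixture_copy, fixture_copy)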

How to log inputs into a function inside a package in python

While using a profiler to look for where most of the execution time is spent in my Python code, I found that it is in a package used by the code. A function in that package is called hundreds of times with different input arguments, and in total this function takes the most time to execute.
So I want to implement some caching, so that if the same parameters are passed, I can reuse the already computed output from the cache. First, though, I want to check whether the same parameters are being passed multiple times at all.
Is there any way I can enable some Python-level configuration so that I can log the arguments passed to the function on each iteration?
I am not allowed to make any changes to the package Package1, so only something enabled outside the package (like a debug mode) may help.
Package1
    module1
        def function1():
            for i in range(10000):
                # Want to get the arguments passed to the function below,
                # for each iteration, into a logfile
                retvalue = function2(arg1, arg2, arg3)

My code:
    package1.module1.function1()
You can use functools.cache from the Python standard library to cache the values.
from functools import cache

@cache
def function2(*args):
    return function1(*args)  # function1 imported from the module you can't change
Instead of logging the input args, you can use the profiler to see whether the runtime has improved. If it has, you can be sure that some calls are duplicates.
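If you do want to see the actual arguments without touching the package, a minimal sketch is to wrap the inner function from the outside and monkeypatch the wrapper back into the module. This assumes function2 is looked up as a module-level name inside package1.module1, as the question's layout suggests:
import functools
import logging

import package1.module1 as module1

logging.basicConfig(filename='function2_args.log', level=logging.INFO)

_original_function2 = module1.function2

@functools.wraps(_original_function2)
def _logging_function2(*args, **kwargs):
    # record every call's arguments, then delegate to the real implementation
    logging.info('function2 called with args=%r kwargs=%r', args, kwargs)
    return _original_function2(*args, **kwargs)

module1.function2 = _logging_function2  # patch before calling function1
module1.function1()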

How can I set a default per test timeout in pytest?

I want to enforce that no test takes longer than 3 seconds in pytest.
pytest-timeout (https://pypi.python.org/pypi/pytest-timeout) almost does what I want, but it seems to only let me either set a global timeout (i.e. make sure the test suite takes less than 10 minutes) or set a decorator on each test manually.
Desired Behavior:
Configure pytest with a single setting to fail any individual test which exceeds 3 seconds.
From the pytest-timeout page:
You can set a global timeout in the py.test configuration file using
the timeout option. E.g.:
[pytest]
timeout = 300
You can use a local plugin. Place a conftest.py file into your project root or into your tests folder with something like the following to set the default timeout for each test to 3 seconds:
import pytest

def pytest_collection_modifyitems(items):
    for item in items:
        if item.get_marker('timeout') is None:
            item.add_marker(pytest.mark.timeout(3))
Pytest calls the pytest_collection_modifyitems function after it has collected the tests. This is used here to add the timeout marker to all of the tests.
Adding the marker only when it does not already exist (if item.get_marker...) ensures that you can still use the @pytest.mark.timeout decorator on those tests that need a different timeout.
Another possibility would be to assign to the special pytestmark variable somewhere at the top of a test module:
pytestmark = pytest.mark.timeout(3)
This has the disadvantage that you need to add it to each module, and in my tests I got an error message when I then attempted to use the @pytest.mark.timeout decorator anywhere in that module.

Avoid setUpClass to run every time for nose cherry picked tests

This is my tests class, in mymodule.foo:
class SomeTestClass(TestCase):
    @classmethod
    def setUpClass(cls):
        # Do the setup for my tests
        pass

    def test_Something(self):
        # Test something
        pass

    def test_AnotherThing(self):
        # Test another thing
        pass

    def test_DifferentStuff(self):
        # Test different stuff
        pass
I'm running the tests from Python with the following lines:
tests_to_run = ['mymodule.foo:test_AnotherThing', 'mymodule.foo:test_DifferentStuff']
result = nose.run(defaultTest=tests_to_run)
(This is obviously a bit more complicated and there's some logic to pick what tests I want to run)
Nose will run just the selected tests, as expected, but the setUpClass will run once for every test in tests_to_run. Is there any way to avoid this?
What I'm trying to achieve is to be able to run some dynamic set of tests while using nose in a Python script (not from the command line)
As @jonrsharpe mentioned, setUpModule is what I was after: it runs just once for the whole module where my tests reside.
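A minimal sketch of that arrangement, reusing the names from the question (the setup body itself is assumed):
# mymodule/foo.py
from unittest import TestCase

def setUpModule():
    # the expensive one-time setup goes here; it runs once for the module
    pass

class SomeTestClass(TestCase):
    def test_AnotherThing(self):
        pass  # test another thing

    def test_DifferentStuff(self):
        pass  # test different stuff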

py.test 2.3.5 does not run finalizer after fixture failure

I was trying py.test for its claimed better support than unittest for module and session fixtures, but I stumbled on behavior that is, at least for me, bizarre.
Consider the following code (don't tell me it's dumb, I know, it's just a quick and dirty hack to replicate the behavior). I'm running Python 2.7.5 x86 on Windows 7.
import os
import shutil

import pytest

test_work_dir = 'test-work-dir'
tmp = os.environ['tmp']
count = 0

@pytest.fixture(scope='module')
def work_path(request):
    global count
    count += 1
    print('test: ' + str(count))
    test_work_path = os.path.join(tmp, test_work_dir)

    def cleanup():
        print('cleanup: ' + str(count))
        if os.path.isdir(test_work_path):
            shutil.rmtree(test_work_path)

    request.addfinalizer(cleanup)
    os.makedirs(test_work_path)
    return test_work_path

def test_1(work_path):
    assert os.path.isdir(work_path)

def test_2(work_path):
    assert os.path.isdir(work_path)

def test_3(work_path):
    assert os.path.isdir(work_path)

if __name__ == "__main__":
    pytest.main(['-s', '-v', __file__])
If test_work_dir does not exist, then I obtain the expected behavior:
platform win32 -- Python 2.7.5 -- pytest-2.3.5 -- C:\Programs\Python\27-envs\common\Scripts\python.exe
collecting ... collected 4 items
py_test.py: [doctest] PASSED
py_test.py:34: test_1 test: 1
cleanup: 1
PASSED
py_test.py:38: test_2 PASSED
py_test.py:42: test_3 PASSEDcleanup: 1
The fixture is called once for the module and cleanup is called once at the end of the tests.
Then, if test_work_dir already exists, I would expect something similar to unittest: the fixture is called once, it fails with OSError, the tests that need it are not run, cleanup is called once, and world peace is established again.
But... here's what I see:
py_test.py: [doctest] PASSED
py_test.py:34: test_1 test: 1
ERROR
py_test.py:38: test_2 test: 2
ERROR
py_test.py:42: test_3 test: 3
ERROR
Despite the failure of the fixture, all the tests are run, the fixture that is supposed to be scope='module' is called once for each test, and the finalizer is never called!
I know that exceptions in fixtures are not good policy, but the real fixtures are complex and I'd rather avoid filling them with try blocks if I can count on the execution of each finalizer registered up to the point of failure. I don't want to go hunting for test artifacts after a failure.
Moreover, trying to run the tests when not all of the fixtures they need are in place makes no sense and can make them at best erratic.
Is this the intended behavior of py.test in case of failure in a fixture?
Thanks, Gabriele
Three issues here:
you should register the finalizer after you have performed the action that you want to be undone, so first call makedirs() and then register the finalizer (see the sketch at the end of this answer). That's a general issue with fixtures, because teardown code can usually only run if something was successfully created
pytest-2.3.5 has a bug in that it will not call finalizers if the fixture function fails. I've just fixed it and you can install the 2.4.0.dev7 (or higher) version with pip install -i http://pypi.testrun.org -U pytest. It ensures the fixture finalizers are called even if the fixture function partially fails. Actually it's a bit surprising this hasn't popped up earlier, but I guess people, including me, usually just go ahead and fix the fixtures instead of diving into what's happening specifically. So thanks for posting here!
if a module-scoped fixture function fails, the next test needing that fixture will still trigger execution of the fixture function again, as it might have been an intermittent failure. It stands to reason that pytest should memorize the failure for the given scope and not retry execution. If you think so, please open an issue, linking to this stackoverflow discussion.
thanks, holger
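A minimal sketch of the reordering from the first point, applied to the question's fixture (the surrounding code is unchanged):
@pytest.fixture(scope='module')
def work_path(request):
    test_work_path = os.path.join(tmp, test_work_dir)
    # perform the action first...
    os.makedirs(test_work_path)

    # ...and only then register the finalizer that undoes it
    def cleanup():
        if os.path.isdir(test_work_path):
            shutil.rmtree(test_work_path)
    request.addfinalizer(cleanup)

    return test_work_path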
