I want to enforce that no test takes longer than 3 seconds in pytest.
pytest-timeout (https://pypi.python.org/pypi/pytest-timeout) almost does what I want... but it seems to let me either set a global timeout (i.e. make sure the whole test suite takes less than 10 minutes) or add a decorator to each test manually.
Desired Behavior:
Configure pytest with a single setting to fail any individual test which exceeds 3 seconds.
From the pytest-timeout page:
You can set a global timeout in the py.test configuration file using
the timeout option. E.g.:
[pytest]
timeout = 300
You can use a local plugin. Place a conftest.py file into your project root or into your tests folder with something like the following to set the default timeout for each test to 3 seconds:
import pytest

def pytest_collection_modifyitems(items):
    for item in items:
        if item.get_closest_marker('timeout') is None:
            item.add_marker(pytest.mark.timeout(3))
Pytest calls the pytest_collection_modifyitems hook after it has collected the tests; it is used here to add the timeout marker to all of them.
Adding the marker only when it does not already exist (the item.get_closest_marker check) ensures that you can still use the @pytest.mark.timeout decorator on those tests that need a different timeout.
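For example, a test that genuinely needs more time can still carry its own marker, which the conftest.py hook above leaves in place (a minimal sketch; the 10-second limit and test name are just illustrations):

import pytest

@pytest.mark.timeout(10)
def test_known_to_be_slow():
    ...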
Another possibility would be to assign to the special pytestmark variable somewhere at the top of a test module:
pytestmark = pytest.mark.timeout(3)
This has the disadvantage that you need to add it to each module, and in my tests I got an error message when I then attempted to use the @pytest.mark.timeout decorator anywhere in that module.
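For completeness, a minimal sketch of that module-level approach (the module and test names are made up):

# test_fast_things.py
import pytest

pytestmark = pytest.mark.timeout(3)

def test_addition():
    assert 1 + 1 == 2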
Related
While using a profiler to find where most of the execution time is spent in my Python code, I found that it is in a package used by the code. A function in that package is called hundreds of times with different input arguments, and in total this function takes the most time to execute.
So I want to implement some caching, so that if the same parameters are passed I can reuse the output already stored in the cache. But first I want to check whether the same parameters are being passed multiple times at all.
Is there any way I can enable some Python-level configuration so that I can get the arguments passed to the function on each iteration?
I am not allowed to make any changes to this package (Package1), so only enabling something from outside the package (like a debug mode) would help.
Package1
    module1
        def function1():
            for i in range(10000):
                # Want to get the arguments passed to the function below
                # on each iteration, written to a logfile
                retvalue = function2(arg1, arg2, arg3)
My Code
package1.module1.function1()
You can use the cache decorator from functools to cache the values.

from functools import cache

@cache
def function2(*args):
    return function1(*args)  # function1 imported from the module you can't change
Instead of logging the input arguments, you can rerun the profiler to see whether the runtime has improved. If it has, you can be sure that some calls were duplicates.
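If you do want the arguments themselves, one option is to wrap the package's function with a logging shim from your own code before calling it. This is only a sketch, assuming function2 is looked up as a module-level attribute of module1 as the question's pseudocode suggests; all names come from that pseudocode:

import functools
import logging

import package1.module1 as module1

logging.basicConfig(filename='function2_args.log', level=logging.INFO)

_original_function2 = module1.function2

@functools.wraps(_original_function2)
def _logged_function2(*args, **kwargs):
    # record every argument set so duplicates can be spotted later
    logging.info('function2 called with args=%r kwargs=%r', args, kwargs)
    return _original_function2(*args, **kwargs)

module1.function2 = _logged_function2  # patch the attribute without touching the package
module1.function1()                    # every call inside the loop is now logged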
Assuming the following test suite:
# test_module.py
import unittest

class Tests(unittest.TestCase):
    @unittest.skip
    def test_1(self):
        print("This should run only if explicitly asked to but not by default")

    # assume many other test cases and methods with or without the skip marker
When invoking the unittest library via python -m unittest, are there any arguments I can pass to make it actually run (and not skip) Tests.test_1, without modifying the test code and without running any other skipped tests?
python -m unittest test_module.Tests.test_1 correctly selects this as the only test to run, but it still skips it.
If there is no way to do it without modifying the test code, what is the most idiomatic change I can make to conditionally undo the @unittest.skip and run one specific test case?
In all cases, I still want python -m unittest discover (or any other invocation that doesn't explicitly turn on the test) to skip the test.
If you want to skip some expensive tests you can use a conditional skip together with a custom environment variable:
@unittest.skipIf(int(os.getenv('TEST_LEVEL', 0)) < 1, "expensive test")
def expensive_test(self):
    ...
Then you can include this test by specifying the corresponding environment variable:
TEST_LEVEL=1 python -m unittest discover
TEST_LEVEL=1 python -m unittest test_module.Tests.test_1
If you want to skip a test because you expect it to fail, you can use the dedicated expectedFailure decorator.
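For reference, a minimal sketch of that decorator in use:

import unittest

class Tests(unittest.TestCase):
    @unittest.expectedFailure
    def test_known_bug(self):
        self.assertEqual(1, 2)  # fails, but is reported as an expected failure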
By the way, pytest supports custom markers (e.g. @pytest.mark.slow) that are commonly used for exactly this kind of selective skipping of slow tests.
I have a large python test file using unittest that I run from the command line. Some tests take a while to run. This is a mild pain point because I'm often only concerned with the last test I added. What I want is this:
add test.
run tests (one fails because I haven't written the code to make it pass)
implement the behaviour
run only the test that failed last time
fix the silly error I made when implementing the code
run only the failing test, which passes this time
run all the tests to find out what I broke.
Is it possible to do this from the command line?
(Not a fully automated solution, but better than the existing one)
If you pass the name of a test class as an argument to the test script, only the tests in that class will be run. For example, if you only want to run tests in the MyTest class in the script test_whatever.py:
python3 test_whatever.py MyTest
You can also specify an individual test as a member of that class. For example, suppose you want to run the test test_something in the class MyTest:
python3 test_whatever.py MyTest.test_something
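For this to work, the script has to hand its command-line arguments to unittest; a minimal sketch of test_whatever.py (the class and test names are placeholders):

# test_whatever.py
import unittest

class MyTest(unittest.TestCase):
    def test_something(self):
        self.assertTrue(True)

if __name__ == '__main__':
    unittest.main()  # parses MyTest / MyTest.test_something from sys.argv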
Every test function is declared like:
def test_something_something(self):
If you add an underscore in front, like:
def _test_something_something(self):
that test will be ignored. One thing you can do is a quick find-and-replace in vim: replace every "test_" with "_test_", then find the one test that failed and remove its leading underscore.
Just run the tests with the --last-failed option (this requires pytest as the runner, which can also run unittest-based tests).
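For example (assuming pytest is installed):

pytest                  # first full run records which tests failed
pytest --last-failed    # re-run only the failures (short form: pytest --lf)
pytest                  # full run again once the failing test passes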
I'm using a joblib.Memory to cache expensive computations when running tests with py.test. The code I'm using reduces to the following,
from joblib import Memory

memory = Memory(cachedir='/tmp/')

@memory.cache
def expensive_function(x):
    return x**2  # some computationally expensive operation here

def test_other_function():
    input_ds = expensive_function(x=10)
    ## run some tests with input_ds
which works fine. I'm aware this could possibly be done more elegantly with the tmpdir_factory fixture, but that's beside the point.
The issue I'm having is how to clean up the cached files once all the tests have run:
is it possible to share a global variable among all tests (which would contain e.g. a list of paths to the cached objects)?
is there a mechanism in py.test to call some command once all the tests are run (whether they succeed or not)?
is it possible to share a global variable among all tests (which would contain e.g. a list of paths to the cached objects)?
I wouldn't go down that path. Global mutable state is something best avoided, particularly in testing.
is there a mechanism in py.test to call some command once all the tests are run (whether they succeed or not)?
Yes, add an auto-used session-scoped fixture into your project-level conftest.py file:
# conftest.py
import pytest

@pytest.fixture(autouse=True, scope='session')
def test_suite_cleanup_thing():
    # setup
    yield
    # teardown - put your command here
The code after the yield will be run - once - at the end of the test suite, regardless of pass or fail.
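Applied to the joblib example above, the teardown can simply clear the cache; a sketch that reuses the Memory object from the question (Memory.clear removes the cached results):

# conftest.py
import pytest
from joblib import Memory

memory = Memory(cachedir='/tmp/')  # newer joblib versions use Memory(location='/tmp/')

@pytest.fixture(autouse=True, scope='session')
def cleanup_joblib_cache():
    yield
    memory.clear(warn=False)  # remove everything cached during the test run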
is it possible to share a global variable among all tests (which would contain e.g. a list of paths to the cached objects)?
There are actually a couple of ways to do that, each with pros and cons. I think this SO answer sums them up quite nicely - https://stackoverflow.com/a/22793013/3023841 - but for example:
def pytest_namespace():
    return {'my_global_variable': 0}

def test_namespace():
    assert pytest.my_global_variable == 0
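Note that the pytest_namespace hook was removed in newer pytest releases (4.0 and later). A sketch of an alternative that shares one mutable object through a session-scoped fixture (the names here are made up):

import pytest

@pytest.fixture(scope='session')
def cached_paths():
    # the same list object is handed to every test that requests it
    return []

def test_records_a_path(cached_paths):
    cached_paths.append('/tmp/joblib/some-entry')  # hypothetical path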
is there a mechanism in py.test to call some command once all the tests are run (whether they succeed or not)?
Yes, py.test has teardown functions available:
def setup_module(module):
    """Set up any state specific to the execution of the given module."""

def teardown_module(module):
    """Tear down any state that was previously set up with a
    setup_module method.
    """
I have a minor issue using py.test for my unit tests.
I use py.test to run my tests and output a junitxml report of the tests.
This xml report is imported in jenkins and generates nice statistics.
When I use a test class which derives from unittest.TestCase,
I skip expected failures using:
@unittest.skip("Bug 1234 : This does not work")
This message also shows up in jenkins when selecting this test.
When I don't use a unittest.TestCase class, e.g. to use py.test parametrize functionality,
I skip expected failures using:
@pytest.mark.xfail(reason="Bug 1234 : This does not work", run=False)
But then this reason is not actually displayed in jenkins, instead it will say:
Skip Message
expected test failure
How can I fix this?
I solved it using this line as the first line of the test:
pytest.skip("Bug 1234: This does not work")
I'd rather have used one of the pytest decorators, but this'll do.
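For reference, the decorator form that keeps a reason attached looks like this; whether the reason text reaches Jenkins may depend on the pytest and plugin versions in use:

import pytest

@pytest.mark.skip(reason="Bug 1234 : This does not work")
def test_broken_feature():
    ...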
I had a similar problem, except that I had a different Jenkins message and could not tell which test was skipped.
It turns out that if the only test in the module is a skipped test, then Jenkins would not show the test in the test result list (using either the decorator or jr-be's solution). You could see that there was a skipped test in the total results, but could not tell which test, or which module, the skipped test was in.
To solve this (OK, hack around it), I went back to using the decorator on my test and added a dummy test (so there is 1 test that runs and 1 test that gets skipped):
@pytest.mark.skip(reason='SONIC-3218')
def test_segments_create_delete(self, api):
    logging.info('TestCreateDeleteSegments.test_segments_create_delete')

def test_dummy(self, api):
    '''
    Dummy test to see if the suite will display in Jenkins if one
    test is run and 1 is skipped (instead of having only skipped tests)
    '''
    logging.info('TestCreateDeleteSegments.test_dummy')
For me that works since I would rather have 1 extra dummy test and be able to find my skipped tests.