Run every pytest test function simultaneously with another function - python

I want every pytest test to run simultaneously with a function that records video. I want to somehow make pytest do something like this:
test_function1
test_function2
test_function3
When pytest calls each function for execution, it would couple it with record_video(), like test_function{n} + record_video().
In the end it would return a report on how the test was executed and an .mp4 file with the recording.
How can I achieve that? I guess it should be possible somehow.

Fixtures are a perfect use case for this.
See the following hypothetical example --
import pytest

@pytest.fixture
def record_video():
    record.start()   # 'record' is a stand-in for your recording API
    yield
    record.stop()

@pytest.fixture(scope="session")
def generate_report():
    yield
    report.make()    # 'report' is likewise hypothetical
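To attach the recording to every test automatically, without naming the fixture in each test's signature, you can make it an autouse fixture. A minimal sketch, assuming a hypothetical recorder object with start() and stop(filename) methods:

import pytest

@pytest.fixture(autouse=True)
def record_video(request):
    # starts before each test; 'recorder' is a hypothetical API
    recorder.start()
    yield
    # name the .mp4 after the test that just ran
    recorder.stop(f"{request.node.name}.mp4")

request.node.name gives the current test's name, so each test produces its own video file.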


pytest exit on failure only for a specific file

I'm using the pytest framework to run tests that interface with a set of test instruments. I'd like to have a set of initial tests in a single file that will check the connection and configuration of these instruments. If any of these tests fail, I'd like to abort all future tests.
In the remaining tests and test files, I'd like pytest to continue if there are any test failures so that I get a complete report of all failed tests.
For example, the test run may look something like this:
pytest test_setup.py test_series1.py test_series2.py
In this example, I'd like pytest to exit if any tests in test_setup.py fail.
I would like to invoke the test session in a single call so that my session-scoped fixtures only get called once. For example, I have a fixture that will connect to a power supply and configure it for the tests.
Is there a way to tell pytest to exit on any test failure, but only in a specific file? If I use the -x option, it will not continue with subsequent tests.
Ideally, I'd prefer something like a decorator that tells pytest to exit if there is a failure. However, I have not seen anything like this.
Is this possible or am I thinking about this the wrong way?
Update
Based on the answer from pL3b, my final solution looks like this:
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    if 'critical' in [mark.name for mark in item.own_markers]:
        result = outcome.get_result()
        if result.when == "call" and result.failed:
            print('FAILED')
            pytest.exit('Exiting pytest due to critical test failure', 1)
I needed to inspect the test report in order to check whether the test actually failed. Otherwise this would exit on every marked test, pass or fail.
Additionally, I needed to register my custom marker. I chose to also put this in the conftest.py file, like so:
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "critical: mark test as critical"
    )
You may use the following hook in your conftest.py to solve your problem:
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    yield
    if 'critical' in [mark.name for mark in item.own_markers]:
        pytest.exit('Exiting pytest')
Then just add the @pytest.mark.critical decorator to the desired tests/classes in test_setup.py.
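For example, a marked test in test_setup.py might look like this (the test body is a hypothetical placeholder):

import pytest

@pytest.mark.critical
def test_instrument_connection():
    # the makereport hook in conftest.py looks for the 'critical' marker
    assert True  # replace with the real connection check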
Learn more about pytest hooks here so you can define desired output and so on.

Why do I only get a function as a return value by using a fixture (from pytest) in a test script?

I want to write test functions for my code and decided to use pytest. I had a look into this tutorial: https://semaphoreci.com/community/tutorials/testing-python-applications-with-pytest
My real code involves another script, written by me, so I made an example that reproduces the same problem but does not rely on my other code.
@pytest.fixture()
def example():
    value = 10
    return value

def test_value(example):
    print(example)
    assert example == 10

test_value(example)
When I run my script with this toy example, the print statement outputs a function:
<function example at 0x0391E540>
and the assertion fails.
If I try to call example() with parentheses, I get this:
Failed: Fixture "example_chunks" called directly. Fixtures are not meant to be called directly,
but are created automatically when test functions request them as parameters.
See https://docs.pytest.org/en/stable/fixture.html for more information about fixtures, and
https://docs.pytest.org/en/stable/deprecations.html#calling-fixtures-directly about how to update your code.
I am sure I am missing something important here, but searching Google did not help, which is why I hope somebody here can provide some assistance.
Remove this line from your script:
test_value(example)
Then run the file with pytest file.py.
Fixtures are resolved automatically by pytest.
In your example you run the code directly, so the fixtures are just plain functions.
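Put together, the corrected file looks like this; run it with pytest file.py rather than executing it with python:

import pytest

@pytest.fixture()
def example():
    return 10

def test_value(example):
    # pytest injects the fixture's return value here
    assert example == 10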

Python unittest, running only the tests that failed

I have a large python test file using unittest that I run from the command line. Some tests take a while to run. This is a mild pain point because I'm often only concerned with the last test I added. What I want is this:
add test.
run tests (one fails because I haven't written the code to make it pass)
implement the behaviour
run only the test that failed last time
fix the silly error I made when implementing the code
run only the failing test, which passes this time
run all the tests to find out what I broke.
Is it possible to do this from the command line?
(Not a fully automated solution, but better than the existing one)
If you pass the name of a test class as an argument to the test script, only the tests in that class will be run. For example, if you only want to run the tests in the MyTest class in the script test_whatever.py:
python3 test_whatever.py MyTest
You can also specify an individual test as a member of that class. For example, suppose you want to run the test test_something in the class MyTest:
python3 test_whatever.py MyTest.test_something
Every test function is declared like:
def test_something_something(self):
If you add an underscore in front, like:
def _test_something_something(self):
that test will be ignored. One thing you can do is a quick find-and-replace in vim: replace all occurrences of "test_" with "_test_", then find the one test that failed and remove its underscore.
Just run the tests with the --last-failed option (you may need pytest for this; it can run unittest-based tests as well).
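For example (pytest caches failures between runs, which is what makes this work):
pytest test_whatever.py                  # full run; failures are recorded
pytest --last-failed test_whatever.py    # re-run only the tests that failed
pytest --failed-first test_whatever.py   # run previous failures first, then the rest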

Avoid setUpClass to run every time for nose cherry picked tests

This is my test class, in mymodule.foo:
class SomeTestClass(TestCase):
    @classmethod
    def setUpClass(cls):
        # Do the setup for my tests
        ...

    def test_Something(self):
        # Test something
        ...

    def test_AnotherThing(self):
        # Test another thing
        ...

    def test_DifferentStuff(self):
        # Test different stuff
        ...
I'm running the tests from Python with the following lines:
tests_to_run = ['mymodule.foo:test_AnotherThing', 'mymodule.foo:test_DifferentStuff']
result = nose.run(defaultTest=tests_to_run)
(This is obviously a bit more complicated and there's some logic to pick what tests I want to run)
Nose will run just the selected tests, as expected, but the setUpClass will run once for every test in tests_to_run. Is there any way to avoid this?
What I'm trying to achieve is to be able to run some dynamic set of tests while using nose in a Python script (not from the command line)
As @jonrsharpe mentioned, setUpModule is what I was after: it runs just once for the whole module where my tests reside.
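For reference, a minimal sketch of how that looks in the test module (the setup body is a placeholder):

from unittest import TestCase

def setUpModule():
    # runs exactly once, before any test in this module is executed
    pass  # do the shared setup here

class SomeTestClass(TestCase):
    def test_AnotherThing(self):
        pass

    def test_DifferentStuff(self):
        pass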

Can tests with pytest fixtures be run interactively?

I have some tests written using pytest and fixtures, e.g.:
import os
import shutil
import tempfile

import pytest

class TestThing:
    @pytest.fixture()
    def temp_dir(self, request):
        my_temp_dir = tempfile.mkdtemp()
        def fin():
            shutil.rmtree(my_temp_dir)
        request.addfinalizer(fin)
        return my_temp_dir

    def test_something(self, temp_dir):
        with open(os.path.join(temp_dir, 'test.txt'), 'w') as f:
            f.write('test')
This works fine when the tests are invoked from the shell, e.g.
$ py.test
but I don't know how to run them from within a python/ipython session; trying e.g.
tt = TestThing()
tt.test_something(tt.temp_dir())
fails because temp_dir requires a request object to be passed on. So, how does one invoke a fixture with a request object injected?
Yes. You don't have to manually assemble any test fixtures or anything like that. Everything runs just like calling pytest in the project directory.
Method 1:
This is the best method because it gives you access to the debugger if your test fails.
In the ipython shell use:
ipython> run -m pytest prj/
This will run all your tests in the prj/tests directory.
It gives you access to the debugger and lets you set breakpoints if you have an
import ipdb; ipdb.set_trace() in your program (https://docs.pytest.org/en/latest/usage.html#setting-breakpoints).
Method 2:
Use !pytest while in the test directory. This won't give you access to the debugger. However, if you use
ipython> !pytest --pdb
and a test fails, it will drop you into the debugger (a subshell), so you can run your post-mortem analysis (https://docs.pytest.org/en/latest/usage.html#dropping-to-pdb-python-debugger-on-failures).
Using these methods you can even run individual modules/test functions/test classes in ipython (https://docs.pytest.org/en/latest/usage.html#specifying-tests-selecting-tests):
ipython> run -m pytest prj/tests/test_module1.py::TestClass1::test_function1
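The same selection can also be made programmatically with pytest.main(), which accepts the usual command-line arguments as a list:

import pytest

# equivalent to the command-line invocation above
pytest.main(['prj/tests/test_module1.py::TestClass1::test_function1', '--pdb'])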
You can bypass the pytest.fixture decorator and directly call the wrapped fixture function:
tmp = tt.temp_dir.__pytest_wrapped__.obj(request=...)
Accessing internals is bad, but when you need it...
The best method I have, which is far from ideal, is to just %run the test file, then manually assemble the fixtures, then simply call the tests. The problem with this is tracking down the modules where the default fixtures are defined and calling them in their order of dependencies.
You can use two cells for this.
First:
def test_something():
    assert True
Second:
from tempfile import mktemp
test_file = mktemp('.py', 'test_')
open(test_file, 'w').write(_i)  # write the last cell's input; 'w' because _i is a str
!pytest $test_file
You can also do this in one cell, but you won't have code highlighting:
from tempfile import mktemp

test_code = """
def test_something():
    assert True
"""
test_file = mktemp('.py', 'test_')
open(test_file, 'w').write(test_code)
!pytest $test_file
The simple answer is that you don't want to run py.test interactively from Python. Most people set up some integration with their text editor or IDE to run py.test and parse its output. But really it's a command-line tool, and that is how it should be used.
As a side note, you may want to check out the built-in tmpdir fixture: http://pytest.org/latest/tmpdir.html, because it seems like you're re-inventing it.
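With the built-in fixture, the test from the question shrinks to this (tmpdir is a py.path.local object that pytest creates and cleans up for you):

def test_something(tmpdir):
    # no manual mkdtemp/rmtree needed
    tmpdir.join('test.txt').write('test')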
