I have a test that sets the ground for a couple of other tests to run, so I need the first test to succeed and only then run the others. I know I could use pytest-dependency, but that would not allow me to run my tests in parallel. I'm also interested in the option of only having to run one test and have it initialize all the ones that depend on it if it succeeds (instead of running all of them and skipping them if the test they depend on fails).
I am using pytest in a way in which there are two execution paths: one to configure the test environment (create some build folders, such as a C application that needs to be cross-compiled and later executed on a remote target), and a second path that executes the tests.
I have this separation because we need to separate the build and execution processes so that they can be run separately by different machines in a CI environment.
So in the configuration phase (controlled using a pytest CLI argument), I am simply skipping the tests from a fixture that checks whether this is the configuration stage or not, using:
pytest.skip("configuration stage")
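A minimal sketch of that fixture, assuming a hypothetical --config-stage CLI flag as the stage switch:

# conftest.py - sketch only; the --config-stage option name is an assumption
import pytest

def pytest_addoption(parser):
    parser.addoption("--config-stage", action="store_true",
                     help="only configure the test environment, do not run the tests")

@pytest.fixture
def config_fixture(request):
    # During the configuration stage, skip every test that uses this fixture.
    if request.config.getoption("--config-stage"):
        pytest.skip("configuration stage")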
What I want to do is make the test appear as PASSED during the configuration stage, from the fixture context (without entering the test body). Putting an assert would not suffice, since the test function would be executed anyway, and I don't want to pollute all the tests with a check like this:
def test_function(config_fixture):
    if config_fixture:
        assert True
    else:
        # run the test
        ...
I have a large test suite that contains thousands of tests of an information extraction engine. There are about five hundred inputs, and I have to extract 10-90 items of information from each input.
For maximal transparency, I test each item from each input separately, so that I can look at the pytest log and tell exactly which tests flip after a code change. But since it's inefficient to run extraction within each test dozens of times, I organized my tests so that the extraction is done only once for each input, a test-global variable is set and then each test simply checks one aspect of the result.
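The set-up looks roughly like this (sketched with a module-scoped fixture rather than a bare global; the engine import, extract() call and input value are placeholders, not my real code):

# test_input_0042.py - sketch only
import pytest
from extraction_engine import extract  # hypothetical entry point into the engine

INPUT = "..."  # the one input this test file covers

@pytest.fixture(scope="module")
def result():
    # The expensive extraction runs exactly once per test file.
    return extract(INPUT)

def test_title(result):
    assert result["title"] == "expected title"

def test_author(result):
    assert result["author"] == "expected author"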
It seems that this set-up prevents any speedup gains when using pytest-xdist. Apparently, pytest will still initialize all 500 test files in one thread and then have individual workers execute the test methods - but the actual tests take no time, so I save nothing!
Is there a simple way to instruct pytest-xdist to distribute the test files rather than the test methods to the workers, so that I can parallelize the 500 calls to the engine?
According to the official docs (https://pytest-xdist.readthedocs.io/en/latest/distribution.html), you can use one of the following:
custom groups: mark all related tests with @pytest.mark.xdist_group and run the CLI with the --dist loadgroup flag. This lets you run an entire group (in your case, all methods of a class) on an isolated worker (sketched below).
if you use an OOP style (test classes with test methods, or modules with tests), use the pytest CLI flag --dist loadscope; this approach forms the groups automatically (by module or class) and guarantees that all tests in a group run in the same process.
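A minimal sketch of the first option (the group name is arbitrary; with --dist loadscope no marks are needed, because grouping falls back to the module or class):

# test_input_0042.py - all tests marked with the same group land on one worker
import pytest

@pytest.mark.xdist_group(name="input_0042")
def test_title():
    ...

@pytest.mark.xdist_group(name="input_0042")
def test_author():
    ...

Run it with something like pytest -n auto --dist loadgroup.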
I have tests which have a huge variance in their runtime. Most will take much less than a second, some maybe a few seconds, some of them could take up to minutes.
Can I somehow specify that in my Nosetests?
In the end, I want to be able to run only a subset of my tests which take e.g. less than 1 second (via my specified expected runtime estimate).
Have a look at this write-up about the attribute plugin for nose tests, where you can manually tag tests as @attr('slow') and @attr('fast'). You can then run nosetests -a '!slow' to run your tests quickly.
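A minimal sketch using nose's attrib plugin:

# test_speed.py
from nose.plugins.attrib import attr

@attr('slow')
def test_full_crawl():
    ...  # takes minutes

def test_parse_header():
    ...  # well under a second

Running nosetests -a '!slow' then executes only the untagged (fast) test.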
It would be great if you could do it automatically, but I'm afraid you would have to write additional code to do it on the fly. If you are into rapid development, I would run nose with xunit XML output enabled (which tracks the runtime of each test). Your test module can then dynamically read in the XML output file from previous runs and set attributes on tests accordingly to filter out the quick ones. This way you do not have to do it manually, albeit with more work (and you have to run all tests at least once).
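A rough sketch of the reading half of that idea, assuming the xunit file is named nosetests.xml and using a one-second threshold (both are assumptions):

# slow_tests.py - sketch only; file name and threshold are assumptions
import xml.etree.ElementTree as ET

def slow_test_names(xml_path="nosetests.xml", threshold=1.0):
    # Collect the names of test cases whose last recorded runtime exceeded the threshold.
    root = ET.parse(xml_path).getroot()
    return {
        case.get("name")
        for case in root.iter("testcase")
        if float(case.get("time", 0)) > threshold
    }

You would then tag (or skip) the tests whose names come back from this helper.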
I have a few tests written in Python with the unittest module. The tests work properly, but in Jenkins, even if a test fails, the build containing this test is still marked as successful. Is there a way to check the output of a Python test and return the needed result?
When you publish the unit test results in the post-build section (if you aren't already, you should), you set the thresholds for failure.
If you don't set thresholds, the build will never fail unless running the tests returns a non-zero exit code.
To always fail the build on any unit test failure, set all failure thresholds to zero.
Note that you can also set thresholds for skipped tests as well.
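If the job doesn't yet produce a result file for Jenkins to publish, one common way with plain unittest is the third-party unittest-xml-reporting package (imported as xmlrunner); a sketch, assuming that package is installed:

# run_tests.py - sketch only; assumes the unittest-xml-reporting package
import unittest
import xmlrunner

if __name__ == "__main__":
    # Writes JUnit-style XML into ./test-reports, which the Jenkins
    # "Publish JUnit test result report" post-build step can pick up.
    unittest.main(testRunner=xmlrunner.XMLTestRunner(output="test-reports"))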
I am new to this so please do not mind if the question is not specific enough.
I want to know how to combine unit tests into a single integration test in pytest.
Furthermore, I would like to repeat the integration test in a single test session a couple of times. Please let me know if there is a way to do this in pytest.
Scenario:
I have two unit tests named test_start_call and test_end_call that are invoked by pytest in that order.
Now I wanted to repeat the process a couple of times so I did this:
for i in range(0, c):
    pytest.main(some_command)  # some_command stands in for the actual pytest CLI arguments
This works fine: it starts and tears down a test session as many times as I want, with one call being made in each session.
But I want to make several calls in a single test session, and so far (after two days) I have not found any way to do this. I tried looking into xdist, but I don't want to start new processes in parallel. The integration test should serially execute the unit tests (start call and end call) as many times as I want in a single test session.
I am stuck. So any help would be great. Thank you!
Review https://docs.pytest.org/en/latest/parametrize.html first.
Then add a mult marker to each test and consume it in the pytest_generate_tests hook to generate multiple instances of each marked test; the extra fixture value will be visible with --collect-only --mult 3. Using the marker this way constrains the multiplication mechanism to only the marked tests.
# conftest.py
def pytest_addoption(parser):
    parser.addoption('--mult', default=0, help="run each marked test this many times")

def pytest_generate_tests(metafunc):
    count = int(metafunc.config.getoption('--mult'))
    if count and metafunc.definition.get_closest_marker('mult'):
        if 'mult' not in metafunc.fixturenames:
            metafunc.fixturenames.append('mult')
        metafunc.parametrize("mult", range(count))

# test_foo.py
import pytest

@pytest.mark.mult
def test_start_call():
    ...
From what you're saying, I'm not quite sure that you are using the right toolset.
It sounds like you are either trying to load test something (run it multiple times and see if it falls over), or trying to do something more "data driven" - i.e. given input values x through y, see how it behaves.
If you are trying to do something like load testing, I'd suggest looking into something like locust.
Here is a reasonable blog with different examples on driving unit tests via different data.
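If the data-driven route is what you are after, pytest's built-in parametrization (linked in the answer above) already covers it; a minimal sketch:

# test_calls.py
import pytest

@pytest.mark.parametrize("call_id", range(3))
def test_start_call(call_id):
    # Each parameter value becomes its own test case within a single session.
    ...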
Again, not sure if either of these are actually what you're looking for.