Is it possible to get the results of previous tests in pytest? Or to skip a test (or abort the run) if a few specific tests have failed?
for example:
@pytest.mark.skipif(test_failed("test1", "test2"), reason="Reason")
def test_the_unknown(): pass
There is nothing built in for that kind of behavior at the moment,
but it is possible to create tools for that as plugins.
If you are up to it, please get in touch on GitHub, the mailing list, or the IRC channels.
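For example, here is a minimal conftest.py sketch of such a plugin. The depends_on marker name is an assumption, not a pytest built-in; it simply records failed test names and skips any test that declares a dependency on them.

import pytest

_failed = set()

def pytest_configure(config):
    # Register the (assumed) marker so pytest does not warn about it.
    config.addinivalue_line("markers", "depends_on(*names): skip if any named test failed")

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # Record the names of tests whose call phase failed.
    outcome = yield
    report = outcome.get_result()
    if report.when == "call" and report.failed:
        _failed.add(item.name)

def pytest_runtest_setup(item):
    # Skip a test if any of its declared dependencies failed earlier.
    marker = item.get_closest_marker("depends_on")
    if marker:
        broken = _failed.intersection(marker.args)
        if broken:
            pytest.skip("skipped because these tests failed: " + ", ".join(sorted(broken)))

A test would then use it like this (again, the marker name is the assumption made above):

@pytest.mark.depends_on("test1", "test2")
def test_the_unknown():
    pass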
import unittest

import HTMLTestRunner
from teamcity import is_running_under_teamcity
from teamcity.unittestplugin import TeamcityTestRunner

if __name__ == '__main__':
    if is_running_under_teamcity():
        runner = TeamcityTestRunner()
    else:
        # assumed: the original snippet defines outfile elsewhere
        outfile = open('report.html', 'w')
        runner = HTMLTestRunner.HTMLTestRunner(
            stream=outfile,
            title='Test Report',
            description='This is an example.'
        )
    unittest.main(testRunner=runner)
I am currently running some tests using the unittest module in Python; my current code is above. I am deploying this test setup on TeamCity: the first module lets me convert the output into TeamCity service messages, and the second creates an HTML report of the results. Is there a way I can run both of these runners while only running one set of tests? The only option I can see at the minute is to either try to combine both of these modules into a hybrid runner or to switch to another testing module that TeamCity supports. However, I would like to keep the dependencies as low as possible.
Any ideas would be great :)
Looks like you'll have to hand-roll it. Looking at the code, TeamcityTestRunner is a pretty simple extension of the standard TextTestRunner, whereas HTMLTestRunner is a far more complex beast.
Sadly this is one area of the stdlib which is really badly architected: one could expect the test runner to be concerned solely with discovering and running tests; however, it is also tasked with part of the test reporting rather than there being an entirely separate test reporter (and that reporting responsibility is furthermore split with the test result object, which shouldn't be part of that one's job description either).
Frankly if you don't have any further customisation I'd suggest just using pytest as your test runner instead of unittest with a custom runner:
it should be able to run unittest tests fine
in my experience it has better separation of concerns and pluggability, so having multiple reporters / formatters should work out of the box
pytest-html certainly has no issue generating its reports without affecting the normal text output
according to its README, teamcity-messages is automatically enabled and used when pytest runs under TeamCity
so I'd assume generating HTML reports during your TeamCity builds would work fine (worth testing; see the sketch below)
and you can eventually migrate to using pytest tests (which are so much better it's not even funny)
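If you go that route, a minimal sketch could replace the custom runners entirely. It assumes pytest and pytest-html are installed, that the existing unittest.TestCase classes live in a "tests" directory, and that teamcity-messages (per its README) activates itself when it detects a TeamCity build:

import sys
import pytest

if __name__ == '__main__':
    # Run the existing unittest suite under pytest and write an HTML report;
    # the TeamCity output is added automatically by teamcity-messages on CI.
    sys.exit(pytest.main(['tests', '--html=report.html', '--self-contained-html']))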
I am using pytest-testrail in order to publish some Python test cases to TestRail. I have a couple of test cases that are flaky and use @pytest.mark.flaky in order to rerun the test cases that fail. After the rerun, some test cases will pass (meaning the case failed once and passed on the rerun), but pytest-testrail will publish the failed run on top (meaning the test case will be marked as failed). I can only think of two ways to fix this: either find a way to publish the passed run first, or find a way to publish only the passed run. I do not know how I could do either of these. Overall, I would like the test case to be published as passed if one of the reruns succeeds.
TestRail supports two methods for working with test results via the API:
get_results_for_case (to read the existing results for the case, e.g. the Failed run)
add_result_for_case (to add a Passed result for the existing case)
I hope this helps; a short sketch follows the reference links below.
Reference:
setup for Python: http://docs.gurock.com/testrail-api2/bindings-python
API: http://docs.gurock.com/testrail-api2/reference-results
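For illustration, a hedged sketch using the official TestRail Python binding linked above; the URL, credentials, and the run/case IDs are placeholders:

from testrail import APIClient

client = APIClient('https://example.testrail.io')
client.user = 'user@example.com'
client.password = 'api-key'

run_id, case_id = 1, 42  # placeholder IDs

# Inspect what is already recorded for this case in the run.
results = client.send_get('get_results_for_case/{}/{}'.format(run_id, case_id))

# After a successful rerun, push a Passed result (status_id 1) so it becomes
# the latest result shown for the case.
client.send_post('add_result_for_case/{}/{}'.format(run_id, case_id),
                 {'status_id': 1, 'comment': 'Passed on rerun'})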
Earlier I was using Python's unittest in my project, and with it came unittest.TextTestRunner and unittest.defaultTestLoader.loadTestsFromTestCase. I used them for the following reasons:
Control the execution of the unit tests using a wrapper function which calls the runner's run method; I did not want the command-line approach.
Read the unittest output from the result object and upload the results to a bug tracking system which allows us to generate some complex reports on code stability.
Recently a decision was made to switch to py.test; how can I do the above using py.test? I don't want to parse any CLI/HTML output to get the results from py.test. I also don't want to write too much code in my unit test files to do this.
Can someone help me with this?
You can use one of pytest's hooks to intercept the test result reporting:
conftest.py:
import pytest
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_logreport(report):
    yield
    # Decide when you want to report:
    # when=setup/call/teardown,
    # fields: .failed/.passed/.skipped
    if report.when == 'call' and report.failed:
        # Add to the database or an issue tracker or wherever you want.
        print(report.longreprtext)
        print(report.sections)
        print(report.capstdout)
        print(report.capstderr)
Similarly, you can intercept one of these hooks to inject your code at the needed stage (in some cases, with a try-except around the yield):
pytest_runtest_protocol(item, nextitem)
pytest_runtest_setup(item)
pytest_runtest_call(item)
pytest_runtest_teardown(item, nextitem)
pytest_runtest_makereport(item, call)
pytest_runtest_logreport(report)
Read more: Writing pytest plugins
All of this can be easily done either with a tiny plugin made as a simple installable library, or as a pseudo-plugin conftest.py which just lies around in one of the directories with the tests.
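If you prefer the installable-library route, a minimal setup.py sketch (package and module names here are assumptions) registers the hook module through the pytest11 entry point so pytest loads it automatically:

from setuptools import setup

setup(
    name='my-result-reporter',            # assumed package name
    version='0.1',
    py_modules=['my_result_reporter'],    # module containing the hook functions above
    entry_points={'pytest11': ['my_result_reporter = my_result_reporter']},
)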
It looks like pytest lets you launch from Python code instead of using the command line. It looks like you just pass the same arguments to the function call that would be on the command line.
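For example, a small sketch of a wrapper that runs pytest in-process and keeps the reports around for later uploading; the ResultCollector class is an assumption, not a pytest API:

import pytest

class ResultCollector:
    """Collects test reports so a wrapper script can post-process them."""
    def __init__(self):
        self.reports = []

    def pytest_runtest_logreport(self, report):
        if report.when == 'call':
            self.reports.append(report)

collector = ResultCollector()
exit_code = pytest.main(['tests'], plugins=[collector])

for report in collector.reports:
    print(report.nodeid, report.outcome)  # e.g. upload to the bug tracking system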
Pytest will create resultlog format files, but the feature is deprecated. The documentation suggests using the pytest-tap plugin that produces files in the Test Anything Protocol.
I have about 10 tests in the same file, and each one of them has the following marker set so they execute in order:
import pytest
@pytest.mark.order1
.
.
.
@pytest.mark.order10
But the tests never run in the order they are assigned; they always run in the order they are arranged in the file. Is there anything I am missing?
Even @pytest.mark.tryfirst didn't work. One thing I noticed is that @pytest.mark.order never shows up in suggestions, while at least @pytest.mark.tryfirst was there in PyCharm.
It looks like you're using pytest-ordering. That package is indeed "alpha quality" -- I wrote it and I haven't spent much time keeping it updated.
Instead of decorating with @pytest.mark.order1, try decorating with @pytest.mark.run(order=1). I believe the Read the Docs documentation is out of date.
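A tiny sketch, assuming pytest-ordering is installed:

import pytest

@pytest.mark.run(order=2)
def test_second():
    assert True

@pytest.mark.run(order=1)
def test_first():
    assert True

With the plugin active, test_first runs before test_second even though it is defined later in the file.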
I faced the same issue; you can use this decorator to order the tests:
@pytest.mark.run(order=1)
But install the pytest-ordering plugin first.
Then it will work fine.
The pytest marks do not do anything special by themselves, except for marking the tests. Out of the box, the marks can only be used to filter tests with the -m CLI option.
That is all pytest alone can do with the marks; well, that and a few little things like parametrization and skipifs.
Specifically, there is no such special mark as tryfirst. It is a parameter to the hook declaration, but this is not applicable for the tests/marks.
Some external or internal plugins can add special behavior which is dependent on the marks.
Pytest executes the tests in the order they were found (collected). In some cases, pytest can reorder (regroup) the tests for better fixture usage. At least, this is declared; not sure if actually done.
The tests are assumed to be completely independent by design. If your tests depend on each other, e.g. use the state of the system under test from the previous test cases, you have a problem with the test design. That state should somehow be converted into fixture(s); see the sketch below.
If you still want to force some dependencies or order of the tests (contrary to the test design principles), you have to install a plugin for the test ordering based on marks, e.g., http://pytest-ordering.readthedocs.io/en/develop/, and mark the tests according to its supported mark names.
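For example, a minimal sketch of turning shared state into a fixture, so each test receives the prepared state independently instead of relying on an earlier test having run (the dictionary stands in for whatever your real setup builds):

import pytest

@pytest.fixture
def prepared_state():
    # Build whatever the dependent tests used to inherit from earlier tests.
    state = {"logged_in": True}   # stand-in for real setup code
    yield state
    state.clear()                 # stand-in for real teardown

def test_uses_state(prepared_state):
    assert prepared_state["logged_in"]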
It looks like you're using pytest-ordering. Make sure you have pytest-ordering installed in your environment (docs: https://github.com/ftobia/pytest-ordering) and try to use the following decorator:
@pytest.mark.run(order=1)
I also ran into the same issue multiple times and tried everything available online. However, what worked for me was installing the plugin with pip3 install pytest-ordering and restarting PyCharm.
I have to test a piece of hardware using its provided Python API.
The hardware has two interfaces, one of which has to be programmed
through the API, and I have to check whether values are read/written correctly via the other interface.
Is there a Python library I can use?
It's something like this:
Test1
write using the interface under test
check that it was written correctly via the working interface
program the hardware using the working interface, then
Test2
write using the interface under test and check
Also try out a range of values for writing within the test, at various speeds set through the API,
and so on...
A log or results file should be created at the end of this series of tests, detailing each test, whether it passed or failed, and some other results from the tests.
Try the unittest module from the standard library (formerly known as PyUnit).
I'd recommend py.test. It features auto-discovery of tests, is non-invasive, and you can easily log test results to a file (though that should be possible with every test framework).
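As a rough illustration only, such a test could look like this with pytest; the device_api module and every call on it are placeholders for the hardware's real Python API:

import pytest
import device_api  # placeholder for the vendor-provided API

@pytest.fixture
def device():
    dev = device_api.connect()   # placeholder connection call
    yield dev
    dev.close()                  # placeholder cleanup

@pytest.mark.parametrize('value', [0, 1, 127, 255])
def test_write_via_interface_under_test(device, value):
    device.interface_under_test.write(value)           # placeholder write via the interface under test
    assert device.working_interface.read() == value    # verify via the working interface

Running with pytest --junitxml=results.xml (or with pytest-html) then produces a results file listing each case and whether it passed or failed.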
Just to be complete: another of these auto-discovery test frameworks is nose (http://code.google.com/p/python-nose/). I normally just use straight-up unittest (http://docs.python.org/library/unittest.html), but I am in a possibly more formal environment.
If you want a simple test library with auto-logging that also lets you control speed, retries, and other test-related settings, you could try the test_steps package, which can be used on its own or together with py.test / nose.