How to make Jenkins trigger a build failure if tests fail - Python

I have a few tests written in Python with the unittest module. The tests work properly, but in Jenkins, even if a test fails, the build that runs it is still marked as successful. Is there a way to check the output of a Python test and return the appropriate result?

When you publish the unit test results in the post-build section (if you aren't already, you should), you can set the thresholds for failure.
If you don't set thresholds, the build will never fail unless running the tests returns a non-zero exit code.
To always fail the build on any unit test failure, set all failure thresholds to zero.
Note that you can set thresholds for skipped tests as well.
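For example, if you invoke the suite from an "Execute shell" build step, you can make the step itself return a non-zero exit code when any test fails. A minimal sketch (the tests/ directory is an assumption; adjust to your layout):

    # run_tests.py - sketch: exit non-zero when any test fails, so the shell step
    # (and therefore the Jenkins build) fails even without JUnit thresholds.
    import sys
    import unittest

    if __name__ == "__main__":
        suite = unittest.defaultTestLoader.discover("tests")  # assumed test folder
        result = unittest.TextTestRunner(verbosity=2).run(suite)
        sys.exit(0 if result.wasSuccessful() else 1)

Running python -m unittest discover directly behaves the same way, since unittest exits with a non-zero status when tests fail.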

Related

pytest-testrail rerun publishing

I am using pytest-testrail in order to publish some Python test cases to TestRail. I have a couple of test cases that are flaky and use the @pytest.mark.flaky marker to rerun the test cases that fail. After the rerun, some test cases will pass (meaning the case failed once and passed on the rerun), but pytest-testrail will publish the failed run on top (meaning the test case will be marked as failed). I can only think of two ways to fix this: either find a way to publish the passed run first, or find a way to publish only the passed run. I do not know how I could do either of these. Overall, I would like the test case to be published as passed if one of the reruns succeeds.
The TestRail API gives you two relevant methods for this:
get_results_for_case (read the existing result for the case, e.g. the Failed run)
add_result_for_case (add a new result for the case, e.g. Passed after a successful rerun)
I hope this helps.
Reference:
setup for Python: http://docs.gurock.com/testrail-api2/bindings-python
API: http://docs.gurock.com/testrail-api2/reference-results
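A minimal sketch of using those endpoints through the official Python binding from the first link (the server URL, credentials, and the run/case IDs below are placeholders):

    # Sketch: record a Passed result for a flaky case after a successful rerun.
    # Assumes the official testrail.py APIClient binding; IDs and credentials are placeholders.
    from testrail import APIClient

    client = APIClient('https://yourcompany.testrail.io')
    client.user = 'user@example.com'
    client.password = 'your-api-key'

    run_id, case_id = 42, 1337  # hypothetical run and case IDs

    # Inspect what is currently recorded for the case in this run (optional).
    existing = client.send_get('get_results_for_case/{}/{}'.format(run_id, case_id))

    # Add a new result; status_id 1 is Passed, 5 is Failed in a default TestRail setup.
    client.send_post('add_result_for_case/{}/{}'.format(run_id, case_id),
                     {'status_id': 1, 'comment': 'Passed on rerun'})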

How can I start a test from another test in Pytest?

I have a test that sets the ground for a couple of other tests to run, so I need the first test to succeed and only then run the other ones. I know I could use pytest-dependency, but that would not allow me to run my tests in parallel. I'm also interested in the option of only having to run one test and have it trigger all the tests that depend on it if it succeeds (instead of running all of them and skipping them if the test they depend on fails).
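For context, the pytest-dependency approach the question rules out looks roughly like this; note that dependents are skipped rather than triggered when the base test fails, and the plugin does not coordinate across parallel workers (a sketch, not a solution to the parallel-run constraint):

    # Sketch of the pytest-dependency style mentioned in the question.
    import pytest

    @pytest.mark.dependency()
    def test_prepare_ground():
        # The test that sets the ground for the others.
        assert True

    @pytest.mark.dependency(depends=["test_prepare_ground"])
    def test_feature_a():
        # Skipped automatically if test_prepare_ground fails.
        assert True

    @pytest.mark.dependency(depends=["test_prepare_ground"])
    def test_feature_b():
        assert True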

How to pass a failed testsuite run (junit.xml parsing)?

We are working in an automated continuous integration/deployment pipeline. We have hundreds of test cases and 4 stages (Red, Orange, Yellow, Green).
The issue I'm facing is that a single test can fail (a bug, timing, a stuck process, etc.) and it will fail the entire regression run.
I think we need some sort of weighting that determines how many passed/failed tests should count as a 'failed' build.
Any ideas? Something you built into your own pipeline?
Thanks,
-M
A failed build does not always reflect the quality of the product, especially when the failures are caused by test infrastructure issues.
Reduce the risk of unwanted failures that are not related to application bugs (timing, stuck processes) by building a strong, stable framework that can be easily maintained and extended.
For failures that are related to application bugs, the type of test that failed matters more than the number of failures; this is defect severity. You can have 3 trivial failures with little impact, or a single failure that is critical. You need to flag your tests accordingly.
In addition, there is a Jenkins plugin that creates an easy-to-follow test run history, where you can see which tests have failed most often in recent runs.
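One way to implement the weighting idea from the question is to parse the generated junit.xml yourself and decide the stage result from a failure ratio rather than from any single failure. A minimal sketch (the junit.xml path and the 5% threshold are assumptions; severity flags could be folded in the same way):

    # Sketch: fail the stage only if the failure ratio exceeds a chosen threshold.
    import sys
    import xml.etree.ElementTree as ET

    THRESHOLD = 0.05  # tolerate up to 5% failed/errored tests (assumed value)

    root = ET.parse("junit.xml").getroot()
    total = failed = 0
    for suite in root.iter("testsuite"):  # works for <testsuites> and single <testsuite> roots
        total += int(suite.get("tests", 0))
        failed += int(suite.get("failures", 0)) + int(suite.get("errors", 0))

    ratio = failed / float(total) if total else 0.0
    print("{} of {} tests failed ({:.1%})".format(failed, total, ratio))
    sys.exit(1 if ratio > THRESHOLD else 0)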

how to handle time-consuming tests

I have tests which have a huge variance in their runtime. Most will take much less than a second, some maybe a few seconds, some of them could take up to minutes.
Can I somehow express that in my nose tests?
In the end, I want to be able to run only the subset of my tests that take, e.g., less than 1 second (based on my specified expected-runtime estimate).
Have a look at this write-up about the attrib plugin for nose, which lets you manually tag tests as @attr('slow') or @attr('fast'). You can then run nosetests -a '!slow' to run your tests quickly.
It would be great if you could do it automatically, but I'm afraid you would have to write additional code to do it on the fly. If you are into rapid development, I would run nose with xunit XML output enabled (which records the runtime of each test). Your test module can then dynamically read in the XML output from previous runs and set the attributes accordingly, so you can filter for the quick tests. This way you do not have to do it manually, albeit with more work (and you have to run all tests at least once).
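A minimal sketch of the manual tagging approach, using the attrib plugin that ships with nose:

    # Sketch: tag slow tests with nose's attrib plugin, then select by attribute.
    from nose.plugins.attrib import attr

    @attr('slow')               # expected to take minutes
    def test_full_reindex():
        assert True

    def test_fast_lookup():     # untagged tests are the quick ones
        assert True

    # Run only the quick tests:
    #   nosetests -a '!slow'
    # Run only the slow ones:
    #   nosetests -a slow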

Lettuce: Continue testing after an assertion

How can I continue testing after a test fails?
Feature: some feature
  Scenario Outline: some scenario outline
    Given I prepare everything
    Then there is a test that could fail
    And some other test I still want to run
I want "some other test I still want to run" to run, even though "there is a test that could fail" failed.
The unittest framework that ships with Python runs all the tests and reports a summary, e.g. 8 of 10 passed. So, for example, you could write 20 test scenarios and write your code against the tests, gradually bringing the coverage up. A testing framework should usually run all tests (unless they take very long, which should usually not be the case).
Have a look at: http://docs.python.org/library/unittest.html
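To illustrate that behaviour with unittest (a minimal, self-contained sketch): each test method runs independently, so one failure does not stop the rest, and the summary at the end counts the failures.

    # Sketch: the second test still runs even though the first one fails.
    import unittest

    class SomeFeatureTests(unittest.TestCase):
        def test_that_could_fail(self):
            self.assertEqual(1, 2)   # fails

        def test_i_still_want_to_run(self):
            self.assertTrue(True)    # still executed and reported

    if __name__ == "__main__":
        unittest.main()              # summary ends with e.g. "FAILED (failures=1)"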
