I am using pytest-testrail to publish some Python test cases to TestRail. A couple of my test cases are flaky, so I use the "@pytest.mark.flaky" marker to rerun the ones that fail. After the rerun, some test cases pass (the case failed once and passed on the rerun), but pytest-testrail publishes the failed run on top, so the test case is marked as failed. I can only think of two ways to fix this: either publish the passed run first, or publish only the passed run. I do not know how to do either. Overall, I would like the test case to be published as passed if one of the reruns succeeds.
TestRail's API exposes two methods that are relevant here:
get_results_for_case (read the existing results for a case, e.g. to see that it is currently marked Failed)
add_result_for_case (add a new result for the case, e.g. to record it as Passed)
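A minimal sketch of calling these from Python, assuming the testrail.py binding from the setup link in the references below; the server URL, credentials, and the run ID (1) and case ID (2) are placeholders:
from testrail import APIClient

client = APIClient('https://yourcompany.testrail.io/')
client.user = 'user@example.com'
client.password = 'your_api_key'

# Read the existing results for case 2 in run 1
results = client.send_get('get_results_for_case/1/2')

# Add a new result for the same case, marking it Passed (status_id 1 = Passed, 5 = Failed)
client.send_post('add_result_for_case/1/2',
                 {'status_id': 1, 'comment': 'Passed on rerun'})
Since add_result_for_case appends a new result, publishing the passing rerun last (or publishing only the passing rerun) should leave the case showing as Passed.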
I hope this helps.
Reference:
setup for Python: http://docs.gurock.com/testrail-api2/bindings-python
API: http://docs.gurock.com/testrail-api2/reference-results
I have multiple test cases that need to be run. With every release I get some new test cases as well as some old ones.
My problem is with old test cases that have failed: I have already created a problem ticket in JIRA for them, and in the next release that ticket number is added to the [Documentation] field of the .robot file.
What I want is that on the next release, if the bug has already been raised in JIRA (i.e. the [Documentation] section of the test contains the ticket number) and the test fails, it is labelled as WARN in yellow instead of being marked as a failure.
I have searched a lot and found a thread (a GitHub issue) saying this can't be done. Is there any other way?
You will need to process the [Documentation] of each test to see whether it contains a JIRA issue number and, if so, add a tag (for example TagName) to it. When launching the tests with Robot Framework 4.0, pass the option --skiponfailure TagName; those tests will then be marked as SKIPPED instead of FAILED when they fail.
The Documentation would need to be parsed before the actual tests run (for example in a helper test run); a sketch of one way to do this follows.
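Not part of the original answer, but one way to do the tagging without a separate helper run is a Robot Framework pre-run modifier written in Python; the tag name known_issue and the PROJECT-123 ticket pattern below are assumptions:
# AddJiraTag.py - hypothetical pre-run modifier
import re
from robot.api import SuiteVisitor

JIRA_TICKET = re.compile(r'[A-Z][A-Z0-9]+-\d+')

class AddJiraTag(SuiteVisitor):
    def visit_test(self, test):
        # Tag every test whose [Documentation] mentions a JIRA ticket number
        if JIRA_TICKET.search(test.doc or ''):
            test.tags.add('known_issue')
It could then be launched with something like robot --prerunmodifier AddJiraTag.py --skiponfailure known_issue tests/ so that tagged tests are reported as SKIPPED when they fail.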
I'm aware of the Run Keyword And Continue On Failure / Run Keyword And Ignore Error / Run Keyword And Return Status BuiltIn keywords, but I have a very wide set of test cases that should not be stopped for any reason in a specific scenario. I was wondering whether there is an option to keep execution from stopping on a failure by default, without having to manage it through these keywords and adding non-business-related syntax to my upper-layer keywords.
Generally speaking, Robot Framework simply isn't designed to work the way you want. It's designed to stop a test when a keyword fails unless you explicitly run that keyword with one of the special keywords (e.g. Run Keyword And Continue On Failure).
In some very limited cases, you can get this behavior by using a template that calls run keyword and continue on failure for every test step. This technique will only work if your test case is made up strictly of keywords, and doesn't try to save keyword results to variables.
For example, consider this test:
*** Test cases ***
Example
    log    step one
    log    step two
    fail    something went wrong
    fail    something else went wrong
    log    last step
If you run the above test, it will stop on the first failure. However, by adding a test template that uses run keyword and continue on failure, all the steps will run before continuing to the next test:
*** Test cases ***
Example
    [Template]    Run keyword and continue on failure
    log    step one
    log    step two
    fail    something went wrong
    fail    something else went wrong
    log    last step
In the report for this version of the test, all of the steps are shown as executed: the two failures are recorded against the test, but execution continues through the last step.
It feels a bit counter-intuitive to want to continue when you've encountered an erroneous situation, given that you may no longer be in control of the application; that is usually something to prevent in itself. That said, given that you are already familiar with the Run Keyword And ... family of keywords, there is not much else to suggest, and the direct answer to the question is: no.
The only approach is to wrap the keywords in one of those Run Keyword And ... keywords.
Is it possible to get the results of previous tests in pytest? Or to skip a test if a few specific tests have failed?
for example:
@pytest.mark.skipif(test_failed("test1", "test2"), reason="Reason")
def test_the_unknown():
    pass
There is nothing built-in for that kind of behavior at the moment, but it is possible to create tools for it as plugins.
If you are up to it, please get in touch on GitHub, the mailing list, or the IRC channels.
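For illustration only (this is not an existing plugin), a rough conftest.py sketch of that plugin idea, using a hypothetical depends_on marker and an in-memory record of failures:
# conftest.py - hypothetical sketch; the depends_on marker is not a pytest built-in
import pytest

_failed = set()  # names of tests that have failed so far in this session

def pytest_configure(config):
    # Register the custom marker so pytest does not warn about it
    config.addinivalue_line("markers", "depends_on(*names): skip if any named test failed")

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    # Remember every test whose call phase failed
    if report.when == "call" and report.failed:
        _failed.add(item.name)

def pytest_runtest_setup(item):
    marker = item.get_closest_marker("depends_on")
    if marker and _failed.intersection(marker.args):
        pytest.skip("prerequisite test(s) failed: " + ", ".join(sorted(_failed.intersection(marker.args))))
The example from the question would then be written as @pytest.mark.depends_on("test1", "test2") above def test_the_unknown().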
I am looking for a way to add a new "mode" to the PySys BaseRunner.
In particular, I would like to add a validate mode that just re-runs the validation portion. This is useful when you are writing your testcase and tuning the validation conditions to fit the current output, without having to re-run the complete testcase.
What is the best way to do this without having to change the original class?
This requires support from the framework, unfortunately. The issue is that the BaseRunner class always automatically purges the output directory, and there is no hook in the framework that lets you avoid this. You can, for instance, manually move the output subdirectory you want to re-run the validation over to, say, 'repeat' (at the same directory level), and then use:
import os
from pysys.constants import *
from pysys.basetest import BaseTest

class PySysTest(BaseTest):
    def execute(self):
        # In 'repeat' mode, skip the (omitted) execute steps entirely
        if self.mode == 'repeat':
            pass

    def validate(self):
        # In 'repeat' mode, validate against the previously preserved output
        if self.mode == 'repeat':
            self.output = os.path.join(self.descriptor.output, 'repeat')
where I have omitted the actual implementations of execute and validate. You would need to add the mode to the descriptor for the test:
<classification>
<groups>
<group></group>
</groups>
<modes>
<mode>repeat</mode>
</modes>
</classification>
and run using "pysys.py run -mrepeat". This would help with debugging if your execute takes a long time, but it is probably not what you want out of the box, i.e. a top-level option to the runner to just perform validation over a previously run test. I'll add a feature request for this.
Since the original discussion, a --validateOnly command line option was added to PySys (in v1.1.1) which does pretty much what you suggest - it skips the execute method and just runs validate.
This assumes you aren't running with --purge (which I imagine is a safe assumption for this use case), and that you don't have validation commands that try to read zero-byte files from the output dir (those always get deleted even if --purge is not specified). Assuming those conditions are met, your (non-empty) output files will still be there after the first run of the test, and you can re-run just the validation using the --validateOnly option.
To get this feature you can install the latest PySys version (1.4.0) - see https://pypi.org/project/PySys/
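Based on the run command shown in the earlier answer, the invocation would presumably be pysys.py run --validateOnly; check pysys.py run --help on your installed version for the exact spelling.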
I am new to this so please do not mind if the question is not specific enough.
I want to know how to combine ("club") unit tests into a single integration test in pytest.
Furthermore, I would like to repeat the integration test in a single test session a couple of times. Please let me know if there is a way to do this in pytest.
Scenario:
I have two unit tests named test_start_call and test_end_call that are invoked by pytest in that order.
Now I wanted to repeat the process a couple of times so I did this:
for i in range(0, c):
    pytest.main(some command)
This works fine: it starts and tears down a test session as many times as I want, with one call being made in each session.
But I want to make several calls in a single test session, and so far (over the last two days) I have not found any way to do this. I tried looking into xdist, but I don't want to start new processes in parallel. The integration test should serially execute the unit tests (start call and end call) as many times as I want in a single test session.
I am stuck. So any help would be great. Thank you!
Review https://docs.pytest.org/en/latest/parametrize.html.
Then add a mult marker to each test and consume it in the pytest_generate_tests hook to generate multiple copies of each marked test; the generated fixture values will be visible with --collect-only --mult 3. Using a marker this way constrains the multiplication mechanism to only the marked tests.
# conftest.py
def pytest_addoption(parser):
    parser.addoption('--mult', default=0, help="run many tests")

def pytest_generate_tests(metafunc):
    count = int(metafunc.config.getoption('--mult'))
    # Only parametrize tests that carry the 'mult' marker
    if count and metafunc.definition.get_closest_marker('mult'):
        if 'mult' not in metafunc.fixturenames:
            metafunc.fixturenames.append('mult')
        metafunc.parametrize("mult", range(count))

# testfoo
import pytest

@pytest.mark.mult
def test_start_call():
    ...
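With that conftest in place, running pytest --collect-only --mult 3 should show each marked test collected three times (e.g. test_start_call[0] through test_start_call[2]), and pytest --mult 3 should run all of those repetitions within a single test session.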
From what you're saying, I'm not quite sure that you are using the right toolset.
It sounds like you are either trying to load-test something (run it multiple times and see if it falls over), or trying to do something more "data driven", i.e. given input values x through y, see how it behaves.
If you are trying to do something like load testing, I'd suggest looking into something like locust.
Here is a reasonable blog with different examples on driving unit tests via different data.
Again, not sure if either of these are actually what you're looking for.