In my testing platform, if a test starts to fail we create a bug report for it and mark the test as xfail or skipif using pytest decorators. In the reason parameter we always include the bug report number. Once a new release comes out that is supposed to fix one of those bugs, we have to go in manually and re-enable the test. Manual work doesn't get done 100% of the time, though.
Is there any way to automate the removal of a pytest xfail or skipif marker, assuming that with each release I will get a list of bug report numbers that are "fixed"?
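One possible approach, sketched below under explicit assumptions: the fixed bug IDs arrive in a plain-text file (the fixed_bugs.txt name and the BUG-1234 reason format are invented for illustration), and a conftest.py collection hook strips any xfail/skip/skipif marker whose reason mentions a fixed bug, so the test has to pass again. It filters item.own_markers, which is a pytest internal detail and only covers markers applied directly to the test function, so treat it as a starting point rather than a supported API.

# conftest.py - a hedged sketch, not an official pytest feature
from pathlib import Path


def _load_fixed_bugs(path='fixed_bugs.txt'):
    # One bug ID per line, e.g. BUG-1234 (file name and format are assumptions)
    p = Path(path)
    return set(p.read_text().split()) if p.exists() else set()


def _marker_reason(marker):
    # skip("BUG-1234") passes the reason positionally; xfail/skipif usually use reason=...
    return ' '.join([str(a) for a in marker.args] + [str(marker.kwargs.get('reason', ''))])


def pytest_collection_modifyitems(config, items):
    fixed = _load_fixed_bugs()
    if not fixed:
        return
    for item in items:
        # Drop xfail/skip/skipif markers whose reason mentions a fixed bug ID
        item.own_markers = [
            m for m in item.own_markers
            if m.name not in ('xfail', 'skip', 'skipif')
            or not any(bug in _marker_reason(m) for bug in fixed)
        ]

An alternative that avoids pytest internals is to leave the markers in place and use the XPASS report (an xfail test that unexpectedly passes) as the signal to delete the decorator by hand.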
Related
I have multiple test cases to run. With every release, I get some new test cases as well as some old ones.
My problem is with old test cases that are failing: I have already created a problem ticket in JIRA, and in the next release this ticket number is added to the [Documentation] field of the .robot file.
What I want is that, on the next release, if the bug has already been raised in JIRA (meaning the [Documentation] section of the test contains the ticket number) and the test fails, it is labelled as WARN in yellow instead of being marked as a failure.
I have searched a lot and found this thread (a GitHub issue), but according to it this can't be done. Is there any other way?
You will need to process the Documentation of each test to see whether it contains a JIRA issue number, and if so, add a tag to it (for example TagName). When launching the tests with Robot Framework 4.0, pass the option --skiponfailure TagName; those tests will then be marked as SKIPPED instead of FAILED.
The Documentation parsing would need to happen before running the actual tests (in a helper test run).
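One way to do the parsing step in the same run, offered as a sketch rather than the only option, is a pre-run modifier: a SuiteVisitor that tags every test whose documentation mentions something that looks like a JIRA key. The PROJ-123 pattern, the known_issue tag name and the module name below are all assumptions for illustration, and it assumes Robot Framework 4.0 or newer.

# prerun_tagger.py - a minimal sketch
import re

from robot.api import SuiteVisitor

JIRA_KEY = re.compile(r'[A-Z][A-Z0-9]+-\d+')  # assumed ticket format, e.g. PROJ-123


class TagKnownIssues(SuiteVisitor):
    """Tag tests whose [Documentation] mentions a JIRA ticket."""

    def start_test(self, test):
        if JIRA_KEY.search(test.doc or ''):
            test.tags.add('known_issue')

It would then be launched as robot --prerunmodifier prerun_tagger.TagKnownIssues --skiponfailure known_issue tests/ so that the tagging and the skip-on-failure behaviour happen in one go.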
import unittest

from teamcity import is_running_under_teamcity
from teamcity.unittestpy import TeamcityTestRunner
import HTMLTestRunner

if __name__ == '__main__':
    if is_running_under_teamcity():
        runner = TeamcityTestRunner()
    else:
        # outfile was defined elsewhere in the original setup; opened here so the snippet runs
        outfile = open('test_report.html', 'w')
        runner = HTMLTestRunner.HTMLTestRunner(
            stream=outfile,
            title='Test Report',
            description='This is an example.'
        )
    unittest.main(testRunner=runner)
I am currently running some tests using the unittest module in Python; my current code is above. I am deploying this test setup on TeamCity: the first module allows me to convert the output into teamcity-messages and the second creates an HTML report of the results. Is there a way I can run both of these runners while only running one set of tests? The only option I can see at the minute is to either try to combine both of these modules into a hybrid, or to use another testing module that TeamCity supports. However, I would like to keep the dependencies as low as possible.
Any ideas would be great :)
Looks like you'll have to hand-roll it. Looking at the code, TeamcityTestRunner is a pretty simple extension of the standard TextTestRunner; HTMLTestRunner, however, is a much more complex beast.
Sadly this is one area of the stdlib which is really badly architected: one could expect the test runner to be concerned solely with discovering and running tests, yet it is also tasked with part of the test reporting, rather than there being an entirely separate test reporter (and that reporting is furthermore a responsibility split with the test result, which shouldn't be part of that one's job description either).
Frankly, if you don't have any further customisation, I'd suggest just using pytest as your test runner instead of unittest with a custom runner:
- it should be able to run unittest tests fine (see the sketch after this list)
- in my experience it has better separation of concerns and pluggability, so having multiple reporters/formatters should work out of the box
- pytest-html certainly has no issue generating its reports without affecting the normal text output
- according to its readme, teamcity-messages gets automatically enabled and used under pytest
- so I'd assume generating HTML reports during your TeamCity builds would work fine (to be tested)
- and you can eventually migrate to writing pytest tests (which are so much better it's not even funny)
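To illustrate the first point: an unchanged unittest-style test in a file matching pytest's default test_*.py pattern is collected as-is, and, assuming pytest-html and teamcity-messages are installed, the reporting is driven purely by plugins and CLI flags rather than a custom runner.

# test_example.py - a minimal sketch; the test itself is plain unittest
import unittest


class TestExample(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

# Running e.g.  pytest --html=report.html --self-contained-html  should produce the
# HTML report via pytest-html, while teamcity-messages emits TeamCity service
# messages automatically when the build runs under TeamCity (to be verified).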
I have about 10 tests in the same file, and each one of them has the following set so that they execute in order:
import pytest
@pytest.mark.order1
.
.
.
@pytest.mark.order10
But the tests never run in the order they are assigned; they always run in the order they appear in the file. Am I missing anything?
Even @pytest.mark.tryfirst didn't work. One thing I noticed is that @pytest.mark.order never shows up in PyCharm's suggestions, while at least @pytest.mark.tryfirst did.
It looks like you're using pytest-ordering. That package is indeed "alpha quality" -- I wrote it and I haven't spent much time keeping it updated.
Instead of decorating with @pytest.mark.order1, try decorating with @pytest.mark.run(order=1). I believe the Read the Docs documentation is out of date.
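For illustration, a minimal sketch of that decorator in use, assuming pytest-ordering is installed (pip install pytest-ordering); the test names are invented:

import pytest


@pytest.mark.run(order=2)
def test_second_step():
    assert True


@pytest.mark.run(order=1)
def test_first_step():
    assert True

Even though test_second_step is defined first in the file, pytest-ordering should run test_first_step before it because of the lower order value.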
I faced the same issue; you can use this decorator to order the tests:
@pytest.mark.run(order=1)
But before that, install the plugin from this site.
Then it will work fine.
The pytest marks do not do anything special on their own; they just mark the tests. Out of the box, the marks can only be used to filter the tests with the -m CLI option.
This is all pytest alone can do with the marks. Well, that and a few built-in things like parametrization and skipif.
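For illustration, a custom mark plus -m filtering might look like this (the slow name is arbitrary; on newer pytest versions it should also be registered under markers in pytest.ini to avoid an unknown-mark warning):

import pytest


@pytest.mark.slow
def test_full_rebuild():
    assert True


def test_quick_check():
    assert True

Running pytest -m "not slow" deselects test_full_rebuild, while pytest -m slow runs only it; nothing else about execution changes just because the mark is present.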
Specifically, there is no special mark called tryfirst. tryfirst is a parameter of hook implementation declarations, and it is not applicable to tests or marks.
Some external or internal plugins can add special behaviour that depends on the marks.
Pytest executes the tests in the order they were found (collected). In some cases pytest can reorder (regroup) the tests for better fixture usage; at least that is what is declared, though I'm not sure it is actually done.
The tests are assumed to be completely independent by design. If your tests depend on each other, e.g. rely on state of the system under test left behind by previous test cases, you have a problem with the test design. That state should instead be turned into fixture(s).
If you still want to force some dependencies or an order of the tests (contrary to the test design principles), you have to install a plugin that orders tests based on marks, e.g. http://pytest-ordering.readthedocs.io/en/develop/, and mark the tests with the mark names it supports.
It looks like you're using pytest-ordering. Make sure you have pytest-ordering installed in your env (here are the docs: https://github.com/ftobia/pytest-ordering) and try using the following decorator:
@pytest.mark.run(order=1)
I was also facing the same issue and had tried everything available online. What worked for me was installing the plugin with pip3 install pytest-ordering and restarting PyCharm.
I have tests with a huge variance in their runtime. Most take much less than a second, some maybe a few seconds, and some could take up to minutes.
Can I somehow specify that in my nose tests?
In the end, I want to be able to run only the subset of my tests that takes, e.g., less than 1 second (based on a runtime estimate I specify).
Have a look at this write-up about the attribute (attrib) plugin for nose tests, where you can manually tag tests as @attr('slow') or @attr('fast'). You can run nosetests -a '!slow' afterwards to run only your quick tests.
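A minimal sketch of the manual tagging, assuming nose with its built-in attrib plugin (the test names are made up):

from nose.plugins.attrib import attr


@attr('slow')
def test_full_sync():
    assert True


def test_parse_header():  # untagged tests stay in the fast subset
    assert True

With nosetests -a '!slow', only test_parse_header would run.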
It would be great if you could do it automatically, but I'm afraid you would have to write additional code to do it on the fly. If you are into rapid development, I would run nose with xunit XML output enabled (which records the runtime of each test). Your test module can then dynamically read in the XML output from previous runs and set attributes on the tests accordingly to filter out the quick ones. This way you do not have to do it manually, albeit with more work (and you have to run all tests at least once).
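A hedged sketch of that idea, under the assumptions that a previous nosetests --with-xunit run has left a nosetests.xml file next to the tests and that test function names are unique enough to match on; nothing here is part of nose itself:

import os
import xml.etree.ElementTree as ET

from nose.plugins.attrib import attr


def _slow_test_names(xml_path='nosetests.xml', threshold=1.0):
    """Collect names of tests whose recorded runtime reached the threshold (seconds)."""
    if not os.path.exists(xml_path):
        return set()
    tree = ET.parse(xml_path)
    return {
        case.get('name')
        for case in tree.getroot().iter('testcase')
        if float(case.get('time', '0')) >= threshold
    }


_SLOW = _slow_test_names()


def auto_attr(func):
    """Tag a test as 'slow' if the previous recorded run took >= 1 second."""
    return attr('slow')(func) if func.__name__ in _SLOW else func


@auto_attr
def test_big_import():
    assert True

Subsequent nosetests -a '!slow' runs would then skip whatever the last recorded run measured as slow.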
We are testing a Django application with a black-box (functional integration) testing approach, where a client performs tests via REST API calls to the Django application. The client runs on a different VM, so we cannot use the typical coverage.py workflow (I think).
Is there a way to compute the coverage of these black box tests? Can I somehow instruct Django to start and stop in test coverage mode and then report test coverage?
Coverage for functional integration tests is really a different layer of abstraction than unit-test coverage, which measures lines of code executed. In a true black-box test you likely care more about coverage of use cases.
But if you are looking for code coverage anyway (and there are certainly reasons why you might want it), it looks like you should be able to use coverage.py, provided you have access to the server to set up the test scenarios. You will need to implement a way to end the Django process so that coverage.py can write the coverage report.
From:
https://coverage.readthedocs.io/en/coverage-4.3.4/howitworks.html#execution
"At the end of execution, coverage.py writes the data it collected to
a data file"
This indicates that the Python process must come to completion naturally. Killing the process manually would also take out the coverage.py wrapper, preventing the write.
Some ideas for ending Django: stop the Django command using sys.exit().
See:
https://docs.djangoproject.com/en/1.10/topics/testing/advanced/#integration-with-coverage-py
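For a black-box setup like yours, one possible arrangement (a sketch under explicit assumptions, not something from the linked docs) is to start coverage programmatically in the server process and expose a test-only endpoint that the client VM calls once its suite has finished, so the data gets written without killing the process. The module name, data_file value and URL wiring are all invented for illustration.

# coverage_probe.py - a hedged sketch of a test-only coverage flush endpoint
import coverage
from django.http import HttpResponse

# Started as early as possible (e.g. imported from the WSGI module or manage.py)
# so that request-handling code is traced.
cov = coverage.Coverage(data_file='.coverage.blackbox')
cov.start()


def stop_coverage(request):
    """Called by the client VM as the last step of its black-box suite."""
    cov.stop()
    cov.save()
    return HttpResponse('coverage data written')

The view would be wired into urls.py only for test builds, and coverage report / coverage html can then be run on the server. Alternatively, launching the server with coverage run manage.py runserver --noreload and letting the process exit normally, as described above, also ends with the data file being written.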