We are testing a Django application with a black-box (functional integration) testing approach, where a client exercises the application through REST API calls. The client runs on a different VM, so we cannot use coverage.py in the usual way (I think).
Is there a way to compute the coverage of these black-box tests? Can I somehow instruct Django to start and stop in test-coverage mode and then report the coverage?
Coverage for functional integration tests is really a different layer of abstraction than unit-test coverage, which counts lines of code executed. In a true black-box test you likely care more about coverage of use cases.
But if you want code coverage anyway (and there are certainly reasons why you might), you should be able to use coverage.py as long as you have access to the server to set up the test scenario. You will need a way to end the Django process cleanly so that coverage.py can write its report.
From https://coverage.readthedocs.io/en/coverage-4.3.4/howitworks.html#execution:
"At the end of execution, coverage.py writes the data it collected to a data file."
This means the Python process must come to completion naturally; killing it manually would also take out the coverage.py wrapper and prevent the write.
One idea for ending Django cleanly: add a stop command to the application that calls sys.exit().
See:
https://docs.djangoproject.com/en/1.10/topics/testing/advanced/#integration-with-coverage-py
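Another possibility (a sketch, not taken from the linked docs): run coverage inside the Django process itself and expose start/stop endpoints, so the remote test client can trigger measurement over HTTP and the data file gets written without killing the process. The view names and URL wiring below are made up; only the coverage.Coverage start/stop/save calls are the real coverage.py API.

# Hypothetical views for controlling coverage from the remote test client.
# Assumes a single Django worker process; with multiple workers each process
# would need its own data file.
import coverage
from django.http import HttpResponse

_cov = coverage.Coverage(data_file=".coverage.blackbox")

def coverage_start(request):
    _cov.start()        # begin tracing in this process
    return HttpResponse("coverage started")

def coverage_stop(request):
    _cov.stop()         # stop tracing
    _cov.save()         # write the data file so `coverage report` can read it
    return HttpResponse("coverage saved")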
Related
I'm creating some automated tests for my app using pytest. Every test consists of some test actions and an assertion - nothing special. However, some of these tests are intentionally a bit disruptive. I would like to gather some metrics from different resources of my app and from my test environment while the tests are running, and write these metrics to a log file - I'm not interested in failing the tests based on these metrics, I just want them to help me understand my system better.
I'm thinking about writing a script that gathers the information I want and creating a fixture that runs it in the background of my test using subprocess.Popen() (a rough sketch of that idea follows below). I have also thought about writing a function that gathers the data and running it in parallel with my test code using multiprocessing. I don't know if there are other options.
I would like to know if there is a standard, simple way to do this. I want to avoid unnecessary complexity at all costs.
Thanks!
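A minimal sketch of the Popen-based fixture idea described in the question above; the collector script name and its arguments are made up, and the real metrics-gathering logic would live in that script:

import subprocess
import pytest

@pytest.fixture
def metrics_collector():
    # Start the (hypothetical) collector script in the background;
    # it samples the system and appends to metrics.log until terminated.
    proc = subprocess.Popen(["python", "collect_metrics.py", "--out", "metrics.log"])
    yield proc                  # the test runs while the collector keeps sampling
    proc.terminate()            # stop collecting once the test is done
    proc.wait(timeout=10)

def test_disruptive_action(metrics_collector):
    # ... perform the disruptive test actions and assert on the outcome ...
    assert True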
import unittest

import HTMLTestRunner
from teamcity import is_running_under_teamcity
from teamcity.unittestrunner import TeamcityTestRunner

if __name__ == '__main__':
    if is_running_under_teamcity():
        runner = TeamcityTestRunner()
    else:
        outfile = open('test_report.html', 'w')  # destination for the HTML report
        runner = HTMLTestRunner.HTMLTestRunner(
            stream=outfile,
            title='Test Report',
            description='This is an example.'
        )
    unittest.main(testRunner=runner)
I am currently running some tests using the unittest module in Python; my current code is above. I am deploying this test setup on TeamCity: the first runner lets me convert the output into teamcity-messages and the second creates an HTML report of the results. Is there a way I can run both of these runners while only running one set of tests? The only option I can see at the minute is to either try to combine both of these modules into a hybrid or use another testing module that TeamCity supports. However, I would like to keep the dependencies as low as possible.
Any ideas would be great :)
Looks like you'll have to hand-roll it: looking at the code, TeamcityTestRunner is a pretty simple extension of the standard TextTestRunner, whereas HTMLTestRunner is a far more complex beast.
Sadly this is one area of the stdlib that is really badly architected: you would expect the test runner to be concerned solely with discovering and running tests, yet it is also tasked with part of the test reporting rather than delegating to an entirely separate test reporter (and that reporting responsibility is furthermore split with the test result, which shouldn't be part of that one's job description either).
Frankly, if you don't have any further customisation, I'd suggest using pytest as your test runner instead of unittest with a custom runner (see the sketch after this list):
it should be able to run unittest tests fine
IME it has better separation of concerns and pluggability, so having multiple reporters / formatters should work out of the box
pytest-html certainly has no issue generating its reports without affecting the normal text output
according to the teamcity-messages readme, its pytest plugin is enabled automatically when running under TeamCity
so I'd assume generating HTML reports during your TeamCity builds would work fine (to be tested)
and you can eventually migrate to using pytest tests (which are so much better it's not even funny)
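A sketch of what that could look like, assuming the pytest-html and teamcity-messages packages are installed; the teamcity plugin switches itself on when it detects a TeamCity build, so only the HTML report has to be requested explicitly (the report file name is just an example):

import sys
import pytest

if __name__ == '__main__':
    # --html / --self-contained-html come from the pytest-html plugin.
    sys.exit(pytest.main(["--html=report.html", "--self-contained-html"]))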
I've seen this question: py.test logging messages and test results/assertions into a single file
I've also read documentation here: https://docs.pytest.org/en/latest/logging.html
Neither comes close to a satisfactory solution.
I don't need assertion results together with logs, but it's OK if they are both in the logs.
I need all the logs produced during the test, but I don't need them for the tests themselves. I need them for analyzing the test results after the tests failed / succeeded.
I need logs for both succeeding and for failing tests.
I need stdout to contain only the summary (e.g. test-name PASSED). It's OK if the summary also contains the stack trace for failing tests, but it's not essential.
Essentially, I need the test to produce 3 different outputs:
HTML / JUnit XML artifact for CI.
CLI minimal output for CI log.
Extended log for testers / automation team to analyze.
I tried the pytest-logs plugin. As far as I can tell, it can override some default pytest behavior by displaying all logging while the test runs. This is slightly better than the default behavior, but still very far from what I need. From the documentation I understand that pytest-catchlog will conflict with pytest, and I don't even want to explore that option.
Question
Is this achievable by configuring pytest, or should I write a plugin? Or perhaps even a plugin won't do it and I will have to patch pytest?
You can use the --junit-xml=xml-path switch to generate JUnit XML reports. If you want the report in HTML format, you can use the pytest-html plugin. Similarly, you can use the pytest-excel plugin to generate a report in Excel format.
You can also use tee to send the output both to the console and to a file, for example: pytest --junit-xml=report.xml | tee log_for_testers.log. This gives you the console output for the CI log, report.xml as the CI artifact, and log_for_testers.log for the team's analysis.
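A sketch combining those suggestions into one invocation, assuming pytest's built-in logging options (log_file, log_file_level) are enough for the testers' log; all file names are illustrative:

import sys
import pytest

if __name__ == '__main__':
    sys.exit(pytest.main([
        "-q",                            # keep console output to the summary
        "--junitxml=report.xml",         # artifact for CI
        "-o", "log_file=full_run.log",   # complete log for later analysis
        "-o", "log_file_level=DEBUG",
    ]))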
I have tests with a huge variance in their runtime: most take much less than a second, some a few seconds, and some could take up to minutes.
Can I somehow specify that in my nose tests?
In the end, I want to be able to run only the subset of my tests that take e.g. less than 1 second (based on my specified runtime estimate).
Have a look at this write-up about the attrib plugin for nose, which lets you manually tag tests as @attr('slow') and @attr('fast'). You can then run nosetests -a '!slow' to run only the quick tests.
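A minimal sketch of that manual tagging (the test names here are placeholders):

from nose.plugins.attrib import attr

@attr('fast')
def test_quick_check():
    assert 1 + 1 == 2

@attr('slow')
def test_long_running_import():
    # imagine a multi-minute operation here
    assert True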
It would be great if you could do this automatically, but I'm afraid you would have to write additional code to do it on the fly. If you are into rapid development, I would run nose with xunit XML output enabled (which records the runtime of each test). Your test module can then read the XML output from previous runs and set the attributes accordingly, so only the quick tests are selected. That way you don't have to do it manually, albeit with more work (and you have to run all the tests at least once).
I have to test a piece of hardware using its provided Python API. The hardware has two interfaces, one of which has to be programmed through the API; I then have to check whether values are read/written correctly via the other, known-good interface.
Is there a Python library I can use?
It's something like this:
Test1
write using Interface under Test
check if written correctly by working interface.
program the hardware using the working interface, then
Test2
write using Interface under Test and check
Also try out various ranges of values for writing within the test, at various speeds set through the API
and so on...
A log or results file should be created at the end of this series of tests, detailing each test, whether it passed or failed, and any other results from the test.
Try the unittest module from the standard library (formerly known as PyUnit).
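A hedged sketch of how the Test1 flow above could look with unittest; FakeHardware and FakeInterface are made-up stand-ins for whatever the vendor API actually provides, so the example is runnable without the device:

import unittest

class FakeHardware:
    """Stand-in for the device; both interfaces see the same registers."""
    def __init__(self):
        self.registers = {}

class FakeInterface:
    """Placeholder for the vendor API's interface objects (names are made up)."""
    def __init__(self, hardware):
        self.hardware = hardware
    def write(self, reg, value):
        self.hardware.registers[reg] = value
    def read(self, reg):
        return self.hardware.registers.get(reg)

class InterfaceUnderTest(unittest.TestCase):
    def setUp(self):
        hw = FakeHardware()
        self.under_test = FakeInterface(hw)  # interface being programmed
        self.reference = FakeInterface(hw)   # known-good interface used to verify

    def test_write_is_visible_on_reference_interface(self):
        for value in (0, 1, 127, 255):       # try a range of values
            self.under_test.write(reg=0x10, value=value)
            self.assertEqual(self.reference.read(reg=0x10), value)

if __name__ == '__main__':
    unittest.main()

Running it with python -m unittest -v prints a pass/fail line per test, and that output can be redirected into the results file mentioned above.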
I'd recommend py.test. It features auto-discovery of tests, is non-invasive, and you can easily log test results to a file (though that should be possible with every test framework).
Just to be complete, another of these auto-discovery test suites is nose (http://code.google.com/p/python-nose/). I normally use plain unittest (http://docs.python.org/library/unittest.html), but I am in a possibly more formal environment.
If you want a simple test library with automatic logging that also lets you vary speed, retries, and other test-related behaviour, you could try the test_steps package, which can be used independently or together with py.test / nose.