Read py.test's output as object - python

Earlier I was using Python's unittest in my project, and with it came unittest.TextTestRunner and unittest.defaultTestLoader.loadTestsFromTestCase. I used them for the following reasons:
Control the execution of the tests using a wrapper function which calls the runner's run method, rather than the command line approach.
Read the test output from the result object and upload the results to a bug tracking system, which allows us to generate complex reports on code stability.
Recently a decision was made to switch to py.test. How can I do the above using py.test? I don't want to parse any CLI/HTML output to get the results, and I also don't want to add much code to my test files to achieve this.
Can someone help me with this?

You can use one of pytest's hooks to intercept the test result reporting:
conftest.py:
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_logreport(report):
    yield
    # Decide when to report:
    # report.when is one of 'setup'/'call'/'teardown';
    # outcome fields: .failed/.passed/.skipped
    if report.when == 'call' and report.failed:
        # Add to the database or an issue tracker or wherever you want.
        print(report.longreprtext)
        print(report.sections)
        print(report.capstdout)
        print(report.capstderr)
Similarly, you can intercept one of these hooks to inject your code at the needed stage (in some cases, with a try-except around the yield); a sketch follows the list below:
pytest_runtest_protocol(item, nextitem)
pytest_runtest_setup(item)
pytest_runtest_call(item)
pytest_runtest_teardown(item, nextitem)
pytest_runtest_makereport(item, call)
pytest_runtest_logreport(report)
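For instance, a minimal sketch (my own, not from the original answer) of a wrapper around pytest_runtest_makereport that post-processes each report before it is logged:
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield              # all other hook implementations run here
    report = outcome.get_result()
    if report.when == 'call' and report.failed:
        # e.g. attach extra context to the report before it is logged
        report.sections.append(('my-plugin', 'extra failure context'))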
Read more: Writing pytest plugins
All of this can easily be done either as a tiny plugin packaged as a simple installable library, or as a pseudo-plugin conftest.py which simply sits in one of the directories with the tests.

It looks like pytest lets you launch tests from Python code instead of using the command line: you just pass the same arguments to the function call that would go on the command line.
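A minimal sketch of that invocation (the test path is an assumption for illustration):
import pytest

# Behaves as if "pytest -x tests/" had been run from the shell;
# the return value is the exit code, not a results object.
exit_code = pytest.main(['-x', 'tests/'])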
Pytest can also create resultlog-format files, but the feature is deprecated. The documentation suggests using the pytest-tap plugin, which produces files in the Test Anything Protocol format.

Related

Is it possible to implement multiple test runners in Python unittest while only running the test suite once?

import unittest

import HTMLTestRunner
from teamcity import is_running_under_teamcity
from teamcity.unittestpy import TeamcityTestRunner

if __name__ == '__main__':
    if is_running_under_teamcity():
        runner = TeamcityTestRunner()
    else:
        # destination for the HTML report (name is illustrative)
        outfile = open('test_report.html', 'w')
        runner = HTMLTestRunner.HTMLTestRunner(
            stream=outfile,
            title='Test Report',
            description='This is an example.'
        )
    unittest.main(testRunner=runner)
I am currently running some tests using the unittest module in Python; my current code is above. I am deploying this test setup on TeamCity: the first module allows me to convert the output into teamcity-messages and the second creates an HTML report of the results. Is there a way I can run both of these runners while only running one set of tests? The only options I can see at the minute are to try to combine both these modules into a hybrid, or to use another testing module that TeamCity supports. However, I would like to keep the dependencies as low as possible.
Any ideas would be great :)
Looks like you'll have to hand-roll it. Looking at the code, TeamcityTestRunner is a pretty simple extension of the standard TextTestRunner, but HTMLTestRunner is a way more complex beast.
Sadly this is one area of the stdlib which is really badly architected: one could expect the test runner to be concerned solely with discovering and running tests; however, it's also tasked with part of the test reporting rather than leaving that to an entirely separate test reporter (and this reporting responsibility is furthermore split with the test result, which shouldn't be part of that one's job description either).
Frankly, if you don't have any further customisation, I'd suggest just using pytest as your test runner instead of unittest with a custom runner:
it should be able to run unittest tests fine
IME it has better separation of concerns and pluggability, so having multiple reporters/formatters should work out of the box
pytest-html certainly has no issue generating its reports without affecting the normal text output
according to the readme, teamcity reporting gets automatically enabled and used for pytest
so I'd assume generating HTML reports during your TeamCity builds would work fine (untested; see the sketch after this list)
and you can eventually migrate to using pytest tests (which are so much better it's not even funny)
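For instance, with the pytest-html plugin installed, a single extra flag produces the HTML report (a sketch; the report name is illustrative):
pytest --html=report.html
and per its readme, the teamcity-messages reporter enables itself automatically under TeamCity, so both outputs come from a single test run.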

Storing pass/fail result after all tests are run

I am relatively new to pytest and was curious whether there is a way to store the pass/fail result of each test in a variable.
Essentially, what I want to do is run my full suite of tests and, after the tests are run, send the name of each test along with its pass/fail result to a server.
I understand that pytest provides options such as -r that will output each test with its pass or fail status after execution, but is there a way to store those results in variables or pass them along?
is there a way to store those into variables or pass those results along?
Pytest can natively output JUnitXML files:
To create result files which can be read by Jenkins or other Continuous integration servers, use this invocation:
pytest --junitxml=path
to create an XML file at path.
There is an available schema for this format, and there appear to be several Python libraries that can parse it with varying levels of support. This one looks like a good place to start.
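As a hedged sketch (file names are illustrative), the per-test outcomes can also be pulled out of the XML with just the standard library:
import subprocess
import xml.etree.ElementTree as ET

# Run the suite and write a JUnit XML report.
subprocess.run(['pytest', '--junitxml=report.xml'])

# Each <testcase> may contain a <failure>, <error> or <skipped> child.
results = {}
for case in ET.parse('report.xml').getroot().iter('testcase'):
    name = '{}::{}'.format(case.get('classname'), case.get('name'))
    if case.find('failure') is not None or case.find('error') is not None:
        results[name] = 'failed'
    elif case.find('skipped') is not None:
        results[name] = 'skipped'
    else:
        results[name] = 'passed'

print(results)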
There are also plugins that may be able to help. For example, pytest-json:
pytest-json is a plugin for py.test that generates JSON reports for test results

Running pytest tests against multiple backends?

I've built a series of tests (using pytest) for a codebase interacting with the Github API (both making calls to it, and receiving webhooks).
Currently, these tests run against a semi-realistic mock of GitHub: calls to GitHub are intercepted through Sentry's Responses and run through a fake github/git implementation (of the bits I need), which can also be interacted with directly from the test cases. Any webhook which needs to be triggered uses Werkzeug's test client to call back into the WSGI application used as the webhook endpoint.
This works nicely, is fast (enough), and is an excellent default, but I'd like the option to run these same tests against GitHub itself, and that's where I'm stumped: I need to switch out the current implementations of the systems under test (direct "library" access to the codebase & mock GitHub) with different ones (API access to an "externally" run codebase & the actual GitHub), and I'm not quite sure how to proceed.
I attempted to use pytest_generate_tests to switch out the fixture implementations (of the codebase & GitHub) via a plugin, but I don't quite know if that would even work, and so far my attempts to load a local file as a plugin in pytest via pytest -p <package_name> have not met with much success.
I'd like to know if I'm heading in the right direction, and in that case whether anyone can help with using "local" plugins (not installed via setuptools and not conftest.py-based) in pytest.
Not sure if that has any relevance, but I'm using pytest 3.6.0 running on CPython 3.6, requests 2.18.4, responses 0.9.0 and Werkzeug 0.14.1.
There are several ways to approach this. The one I would go for: by default, run your tests against the mock, and when a command line flag is present, test against both the mock and the real version.
So first the easier part, adding a command line option:
def pytest_addoption(parser):
    parser.addoption('--github', action='store_true',
                     help='Also test against real github')
Now this is available via the pytestconfig fixture as pytestconfig.getoption('github'), and often also indirectly, e.g. via the request fixture as request.config.getoption('github').
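A minimal sketch of reading the flag from a fixture (the fixture name is illustrative):
import pytest

@pytest.fixture
def use_real_github(pytestconfig):
    # True when the test run was invoked with --github.
    return pytestconfig.getoption('github')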
Now you need to use this to parametrize any test which needs to interact with the GitHub API, so that each one runs both with the mock and with the real instance. Without knowing your code, it sounds like a good point for this would be the Werkzeug client: make this into a fixture, and it can then be parametrized to return either the real client or the test client you mention:
import pytest

@pytest.fixture
def werkzeug_client(werkzeug_client_type):
    if werkzeug_client_type == 'real':
        return create_the_real_client()
    else:
        return create_the_mock_client()

def pytest_generate_tests(metafunc):
    if 'werkzeug_client_type' in metafunc.fixturenames:
        types = ['mock']
        if metafunc.config.getoption('github'):
            types.append('real')
        metafunc.parametrize('werkzeug_client_type', types)
Now if you write your test as:
def test_foo(werkzeug_client):
    assert werkzeug_client.whatever()
You will get one test normally and two tests when invoked with pytest --github.
(Be aware that hooks must live in conftest.py files or plugins, while fixtures can be anywhere. Be extra aware that the pytest_addoption hook should really only be used in the toplevel conftest file, to avoid confusion about when the hook is picked up by pytest and when it is not. So you should really put all this code in a toplevel conftest.py file.)

How to test a python binary in the python unit test?

I would like to test a Python binary "main.py" with command line arguments in a unit test. The usage of this binary is as follows.
main.py --input_file=input.txt --output_file=out.txt
When designing unit tests, I think it is better to test each component, like a class or a method.
However, in some cases like the one above, I would like to do end-to-end testing of the whole Python binary, especially when it was already created by someone else. In the above case, I want to make sure that "main.py" generates "out.txt" correctly.
One option is to use subprocess.check_call, write the output to a temporary directory, then load it and compare it with the golden (expected) output.
Is this a good way?
Or, if there is a better way, could you please advise me?
This is called blackbox testing, since the inner structure of the program is unknown to the tester. If you insist on testing the module without knowing what happens inside, you could (as you mentioned) use exec or subprocess to check the validity of the output. But the more logical way is to use the unittest library and test the code through the API it provides.
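A minimal sketch of the blackbox approach with subprocess.check_call and a temporary directory (the file names here are assumptions for illustration):
import subprocess
import tempfile
import unittest
from pathlib import Path

class TestMainEndToEnd(unittest.TestCase):
    def test_output_matches_golden(self):
        with tempfile.TemporaryDirectory() as tmp:
            out_path = Path(tmp) / 'out.txt'
            # Run the binary exactly as a user would.
            subprocess.check_call(
                ['python', 'main.py', '--input_file=input.txt',
                 '--output_file={}'.format(out_path)])
            # Compare the produced file with the golden (expected) output.
            self.assertEqual(out_path.read_text(),
                             Path('expected_out.txt').read_text())

if __name__ == '__main__':
    unittest.main()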
If you're testing the handling of arguments as part of the unit tests, just look inside main.py and see how it handles arguments.
For example, you might have a test case that sets sys.argv and then calls main:
import sys
import myapp.main

sys.argv = 'main.py --input_file=input.txt --output_file=out.txt'.split()
myapp.main.main()
# I have no idea what test you want to run:
# assert here on whatever out.txt is supposed to contain.

calling pytest from inside python code

I am writing a Python script for collecting data from running tests under different conditions. At the moment, I am interested in adding support for Py.Test.
The Py.Test documentation clearly states that running pytest inside Python code is supported:
You can invoke pytest from Python code directly... acts as if you would call “pytest” from the command line...
However, the documentation does not describe in detail the return value of calling pytest.main() as prescribed. It only seems to indicate how to read the exit code of the test run.
What are the limits of the data available through this interface? Does this method simply return a string indicating the results of the tests? Are friendlier data structures supported (e.g., the outcome of each test case assigned to a key/value pair)?
Update: Examining the return value in the REPL reveals that calling pytest.main yields an integer indicating the system exit code, with a side effect (a stream of text detailing the test results) directed to standard out. Given that, does py.test provide an alternate interface for accessing the results of tests run from within Python code through some native data structure (e.g., a dictionary)? I would like to avoid catching and parsing the stdout result because that approach seems error-prone.
I don't think so. The official documentation tells us that pytest.main
returns an OS exit code, as described in the examples
here.
You can use the pytest flags if you want, even the traceback option (--tb), to see if any of those help you.
On your other point, about parsing the stdout result seeming error-prone:
it really depends on what you are doing. Python has a lot of packages for this, subprocess for example.
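That said, pytest.main() does accept in-process plugin objects via its plugins argument, which lets you collect per-test outcomes into a native data structure instead of parsing stdout. A minimal sketch (the collector class and its dict layout are my own, not a pytest API):
import pytest

class ResultCollector:
    """Collects each test's outcome keyed by its node id."""

    def __init__(self):
        self.results = {}

    def pytest_runtest_logreport(self, report):
        # Only record the 'call' phase (skip setup/teardown reports).
        if report.when == 'call':
            self.results[report.nodeid] = report.outcome

collector = ResultCollector()
exit_code = pytest.main(['tests/'], plugins=[collector])
print(collector.results)  # e.g. {'tests/test_x.py::test_a': 'passed'}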
