Saving pytest logs in a file - python

I've seen this question: py.test logging messages and test results/assertions into a single file
I've also read documentation here: https://docs.pytest.org/en/latest/logging.html
Neither comes close to a satisfactory solution.
I don't need assertion results together with logs, but it's OK if they are both in the logs.
I need all the logs produced during the test, but I don't need them for the tests themselves. I need them for analyzing the test results after the tests failed / succeeded.
I need logs for both succeeding and for failing tests.
I need stdout to only contain the summary (e.g. test-name PASSED). It's OK if the summary also contains the stack trace for failing tests, but that's not essential.
Essentially, I need the test to produce 3 different outputs:
HTML / JUnit XML artifact for CI.
CLI minimal output for CI log.
Extended log for testers / automation team to analyze.
I tried the pytest-logs plugin. As far as I can tell, it can override some default pytest behavior by displaying all logging while the test runs. This is slightly better than the default behavior, but still very far from what I need. From the documentation I understand that pytest-catchlog will conflict with pytest, and I don't even want to explore that option.
Question
Is this achievable by configuring pytest or should I write a plugin, or, perhaps, even a plugin won't do it, and I will have to patch pytest?

You can use the --junit-xml=xml-path switch to generate JUnit XML reports. If you want the report in HTML format, you can use the pytest-html plugin. Similarly, you can use the pytest-excel plugin to generate a report in Excel format.
You can use tee to send the output both to stdout and to a file. For example: pytest --junit-xml=report.xml | tee log_for_testers.log This produces logs on stdout for the CI log, report.xml as the CI artifact, and log_for_testers.log for team analysis.
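For the third output (the extended log for analysis), pytest's built-in logging options may already be enough. A sketch of a pytest.ini, assuming a pytest version where the log_file ini keys exist (3.4+); the file name is arbitrary:

```ini
[pytest]
# write all log records (from every test, passed or failed) to a file
log_file = full_run.log
log_file_level = DEBUG
# keep the terminal quiet (-q gives the one-line-per-test summary)
# and emit JUnit XML for CI in the same run
addopts = -q --junitxml=report.xml
```
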

Related

pytest: only print summary on stdout, but capture test output to file

What I want to achieve is the following:
When running pytest, I want to see only one line per test, with nothing on stdout. If a test fails, then the output of the failing test should be printed. This is the default behavior of pytest and it suits me well.
However, I also want the stdout and stderr of all tests to be written to a log file.
Preferably, written so that the output from different tests can be distinguished.
I couldn't find an option for doing so.
In
Pytest capture stdout of a certain test
a hook is added which does something similar (but for a subset of tests). I could adapt this, but I was wondering if there was not something already in pytest? Or a plugin?
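I don't know of a built-in option, but a small conftest.py hook along these lines could do it. This is a sketch: test_output.log is a hypothetical path, and it relies on pytest's default capturing populating TestReport.capstdout / TestReport.capstderr:

```python
# conftest.py -- sketch: append each test's captured stdout/stderr
# to one log file, tagged with the test id so outputs are distinguishable.

LOG_PATH = "test_output.log"  # hypothetical path, adjust as needed

def pytest_runtest_logreport(report):
    # Only record the test body itself, not setup/teardown phases.
    if report.when != "call":
        return
    with open(LOG_PATH, "a") as fh:
        fh.write(f"===== {report.nodeid} ({report.outcome}) =====\n")
        if report.capstdout:
            fh.write("--- stdout ---\n" + report.capstdout)
        if report.capstderr:
            fh.write("--- stderr ---\n" + report.capstderr)
```

Because the hook fires for passing and failing tests alike, the terminal output stays at the default one line per test while the file accumulates everything.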

Is it possible to implement multiple test runners in unittest while only running the test suite once?

# Assuming the teamcity-messages and HTMLTestRunner packages are installed,
# with outfile opened for the HTML report:
import unittest

import HTMLTestRunner
from teamcity import is_running_under_teamcity
from teamcity.unittestpy import TeamcityTestRunner

if __name__ == '__main__':
    if is_running_under_teamcity():
        runner = TeamcityTestRunner()
    else:
        outfile = open('report.html', 'w')
        runner = HTMLTestRunner.HTMLTestRunner(
            stream=outfile,
            title='Test Report',
            description='This is an example.'
        )
    unittest.main(testRunner=runner)
I am currently running some tests using the unittest module in Python; my current code is above. I am deploying this test setup on TeamCity: the first module lets me convert the output into teamcity-messages, and the second creates an HTML report of the results. Is there a way I can run both of these runners while only running one set of tests? The only option I can see at the minute is to try to combine both modules into a hybrid, or to use another testing module that TeamCity supports. However, I would like to keep the dependencies as low as possible.
Any ideas would be great :)
Looks like you'll have to hand-roll it. Looking at the code, TeamcityTestRunner is a pretty simple extension of the standard TextTestRunner; HTMLTestRunner, however, is a much more complex beast.
Sadly this is one area of the stdlib which is really badly architected: one could expect the test runner to be concerned solely with discovering and running tests, yet it's also tasked with part of the test reporting rather than delegating to an entirely separate test reporter (and that reporting is furthermore a responsibility split with the test result, which shouldn't be part of that one's job description either).
Frankly if you don't have any further customisation I'd suggest just using pytest as your test runner instead of unittest with a custom runner:
it should be able to run unittest tests fine
IME it has better separation of concerns and pluggability so having multiple reporters / formatters should work out of the box
pytest-html certainly has no issue generating its reports without affecting the normal text output
according to the README, teamcity-messages gets automatically enabled and used for pytest
so I'd assume generating HTML reports during your TeamCity builds would work fine (untested)
and you can eventually migrate to using pytest tests (which are so much better it's not even funny)
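As a sketch of that setup, assuming pytest, pytest-html, and teamcity-messages are all installed (under TeamCity the teamcity-messages plugin activates itself, so no extra flag is needed for it):

```shell
# run the existing unittest suite with pytest, writing a standalone HTML report;
# TeamCity service messages are emitted automatically when running under TeamCity
pytest tests/ --html=report.html --self-contained-html
```
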

How to disable pytest dumping out source code?

When one of the tests fails, pytest dumps the source code of the function where the exception was raised. However, sometimes when the error is raised from another library, pytest still dumps that function's source code, flooding the output.
Is it possible to stop pytest from dumping source code and show only the stack trace? The stack trace is usually more than enough to track down the problem.
I have searched a bit but all I can find are posts related to --show-capture.
You can use the --tb option, choosing either --tb=short or --tb=native, whichever suits you. Check the detailed documentation here.
Here is how to put your chosen option in a pytest.ini file:
[pytest]
# --tb not given Produces reams of output, with full source code included in tracebacks
# --tb=no Just shows location of failure in the test file: no use for tracking down errors
# --tb=short Just shows vanilla traceback: very useful, but file names are incomplete and relative
# --tb=native Slightly more info than short: still works very well. The full paths may be useful for CI
addopts = --tb=native
The available values for --tb are:
--tb=style traceback print mode (auto/long/short/line/native/no).
Useful links:
pytest Configuration file formats
pytest Command-line Flags

Storing pass/fail result after all tests are run

I am relatively new to pytest at the moment and was curious if there is a way to store the pass/fail results of the test in a variable.
Essentially what I want to do is run my full suite of tests and after the tests are run, send the name of the tests run along with the pass/fail result to a server.
I understand that pytest provides options such as -r that will output the test run with pass or fail after execution, but is there a way to store those into variables or pass those results along?
is there a way to store those into variables or pass those results along?
Pytest can natively output JUnitXML files:
To create result files which can be read by Jenkins or other Continuous integration servers, use this invocation:
pytest --junitxml=path
to create an XML file at path.
There is an available schema for this format and there appear to be several Python libraries that can parse them with varying levels of support. This one looks like a good place to start.
There are also plugins that may be able to help. For example, pytest-json:
pytest-json is a plugin for py.test that generates JSON reports for test results
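Once the JUnit XML exists, pulling the test names and outcomes back out needs nothing beyond the stdlib. A sketch, assuming the tag layout pytest emits (testcase elements with classname/name attributes, and failure/error/skipped child elements); sending the results to a server is left to you:

```python
# Sketch: parse the JUnit XML produced by `pytest --junitxml=report.xml`
# into a {test name: outcome} mapping.
import xml.etree.ElementTree as ET

def collect_results(xml_path):
    results = {}
    root = ET.parse(xml_path).getroot()
    # iter() finds testcases whether the root is <testsuite> or <testsuites>
    for case in root.iter("testcase"):
        name = f"{case.get('classname')}::{case.get('name')}"
        if case.find("failure") is not None or case.find("error") is not None:
            results[name] = "failed"
        elif case.find("skipped") is not None:
            results[name] = "skipped"
        else:
            results[name] = "passed"
    return results
```

The resulting dict can then be serialized and posted to your server with whatever HTTP client you already use.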

How to generate Junit output report using Behave Python

I'm using Behave on Python to test a web application.
My test suite runs properly, but I'm not able to generate a JUnit report.
Here is my behave.ini file :
[behave]
junit=true
format=pretty
I only run behave using this command: behave
After run, the test result is print in the console, but no report is generated.
1 feature passed, 3 failed, 0 skipped
60 scenarios passed, 5 failed, 0 skipped
395 steps passed, 5 failed, 5 skipped, 0 undefined
Took 10m17.149s
What can I do?
Make sure you don't change the working directory in your steps definition (or, at the end of the test change it back to what it was before).
I was observing the same problem, and it turned out that the reports directory was created in the directory I changed into while executing one of the steps.
What may help, if you don't want to care about the working directory, is setting the --junit-directory option. This should help behave to figure out where to store the report, regardless of the working directory at the end of the test (I have not tested that though)
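Combining these ideas in a behave.ini might look like the sketch below; I'm assuming the ini key junit_directory mirrors the --junit-directory flag, and the absolute path is just an example:

```ini
[behave]
junit = true
# an absolute path sidesteps any working-directory changes made inside steps
junit_directory = /tmp/behave-reports
format = pretty
```
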
Try using
behave --junit
on the command line instead of just behave.
Also, you can show the options available by using:
behave --help
I have done a bit of searching, and it appears that the easiest way to do this is via the Jenkins JUnit plugin.
It seems to me like there ought to be a simple way to convert JUnit XML reports to a human-readable HTML format, but I have not found it in any of my searches. The best I can come up with are a few JUnit bash scripts, but they don't appear to have any publishing capability. They only generate the XML reports.
