What changed between two successive pytest runs?

Is there a way to figure out which tests flipped their status since the last run? Status is either fail, pass or xfail.

You can output the results in JUnit XML format with the --junit-xml command-line flag and compare the files afterwards in a reporting tool such as Allure. Alternatively, one of the various reporting plugins may help you, such as pytest-historic, which I haven't tried.
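If all you need is the diff between two runs, a small script over the two JUnit XML files may be enough. Below is a minimal sketch, assuming two reports produced with --junit-xml; the file names report_old.xml and report_new.xml and the outcomes helper are placeholders, and xfail is only detected to the extent that it is recorded as a skipped element in the XML:

import xml.etree.ElementTree as ET

def outcomes(path):
    # Map "classname::name" to an outcome derived from the testcase children.
    results = {}
    for case in ET.parse(path).iter("testcase"):
        key = f"{case.get('classname')}::{case.get('name')}"
        if case.find("failure") is not None or case.find("error") is not None:
            results[key] = "failed"
        elif case.find("skipped") is not None:
            results[key] = "skipped"
        else:
            results[key] = "passed"
    return results

old, new = outcomes("report_old.xml"), outcomes("report_new.xml")
for test in sorted(old.keys() & new.keys()):
    if old[test] != new[test]:
        print(f"{test}: {old[test]} -> {new[test]}")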

Related

Run Unittests in Python with highlighting of Keywords in Terminal (determined during the run)

I'm administering quite a large number of Python unit tests, and since the stack traces are sometimes weirdly long, I had an idea for optimizing the output so that I can find the file I'm interested in faster.
Currently I am running tests with ack:
python3 unittests.py | ack --passthru 'GraphTests.py|Versioning/Testing/Tests.py'
This works as I intended. But since the number of tests keeps growing and I want to keep the highlighting dynamic, I would like to read the classes from whatever I have added to the test suite.
suite.addTest(loader.loadTestsFromModule(UC3))
What would be the best way to accomplish this?
My first thought was to split the unit tests into two files, one loader and one caller. The loader adds all the unit tests as before and then runs caller.py through ack, passing it the list of files.
Is that a reasonable approach? Are there better ways? I don't think it is possible to fill in the ack patterns after the line has been executed.
There's also the idea of just piping the results into a file that I read afterwards, but from my experience so far I am not sure whether that will work as planned or just cause extra trouble. (I use an experimental unittest framework that adds colouring during test execution using escape sequences like '\033[91m' - see: https://stackoverflow.com/questions/15580303/python-output-complex-line-with-floats-colored-by-value)
Is there an option I don't see?
Edit (1): I also forgot to add: getting into the debugger is a different experience; with ack it doesn't seem to work properly any more.
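One possible way to keep the pattern dynamic is to derive it from the suite itself instead of hard-coding file names. A minimal sketch, assuming the suite has already been built with loader.loadTestsFromModule(...) calls; iter_tests and ack_pattern are hypothetical helper names:

import inspect
import os
import unittest

def iter_tests(suite):
    # Recursively flatten nested TestSuite objects into individual test cases.
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            yield from iter_tests(item)
        else:
            yield item

def ack_pattern(suite):
    # Collect the source file of each test case class, e.g. 'GraphTests.py|Tests.py'.
    files = {os.path.basename(inspect.getfile(type(test))) for test in iter_tests(suite)}
    return "|".join(sorted(files))

The loader script could print that pattern (or pass it along via subprocess) so the ack invocation always matches the modules currently registered in the suite.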

Saving pytest logs in a file

I've seen this question: py.test logging messages and test results/assertions into a single file
I've also read documentation here: https://docs.pytest.org/en/latest/logging.html
Neither comes close to a satisfactory solution.
I don't need assertion results together with logs, but it's OK if they are both in the logs.
I need all the logs produced during the test, but I don't need them for the tests themselves. I need them for analyzing the test results after the tests failed / succeeded.
I need logs for both succeeding and for failing tests.
I need stdout to only contain the summary (e.g. test-name PASSED). It's OK if the summary also contains the stack trace for failing tests, but it's not essential.
Essentially, I need the test to produce 3 different outputs:
HTML / JUnit XML artifact for CI.
CLI minimal output for CI log.
Extended log for testers / automation team to analyze.
I tried the pytest-logs plugin. As far as I can tell, it can override some default pytest behavior by displaying all logging while the test runs. This is slightly better than the default behavior, but it's still very far from what I need. From the documentation I understand that pytest-catchlog will conflict with pytest, and I don't even want to explore that option.
Question
Is this achievable by configuring pytest or should I write a plugin, or, perhaps, even a plugin won't do it, and I will have to patch pytest?
You can use the --junit-xml=<path> switch to generate JUnit logs. If you want the report in HTML format, you can use the pytest-html plugin. Similarly, you can use the pytest-excel plugin to generate the report in Excel format.
You can use tee to send the output to two destinations at once, for example:
pytest --junit-xml=report.xml | tee log_for_testers.log
This produces logs on stdout for the CI log, report.xml as the CI artifact, and log_for_testers.log for team analysis.
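pytest's built-in logging options can also write the extended log to a file directly, without tee. A minimal sketch of a pytest.ini, assuming a reasonably recent pytest; the file names are placeholders:

[pytest]
# Keep the console output short and write the JUnit artifact for CI.
addopts = -q --junit-xml=report.xml
# Write all captured log records, for passing and failing tests alike, to a file.
log_file = tests.log
log_file_level = DEBUG
log_file_format = %(asctime)s %(levelname)s %(name)s: %(message)s

This keeps stdout close to a bare summary while the full log goes to tests.log for later analysis.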

Read py.test's output as object

Earlier I was using Python's unittest in my project, and with it came unittest.TextTestRunner and unittest.defaultTestLoader.loadTestsFromTestCase. I used them for the following reasons:
Control the execution of the unit tests using a wrapper function which calls the unit tests' run method; I did not want the command-line approach.
Read the test output from the result object and upload the results to a bug tracking system, which allows us to generate some complex reports on code stability.
Recently a decision was made to switch to py.test; how can I do the above using py.test? I don't want to parse any CLI/HTML output to get the results from py.test. I also don't want to write too much code in my unit test files to do this.
Can someone help me with this?
You can use a pytest hook to intercept the test result reporting:
conftest.py:
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_logreport(report):
    yield
    # Decide when you want to report:
    # when = setup/call/teardown,
    # fields: .failed/.passed/.skipped
    if report.when == 'call' and report.failed:
        # Add to the database or an issue tracker or wherever you want.
        print(report.longreprtext)
        print(report.sections)
        print(report.capstdout)
        print(report.capstderr)
Similarly, you can intercept one of these hooks to inject your code at the needed stage (in some cases, with the try-except around yield):
pytest_runtest_protocol(item, nextitem)
pytest_runtest_setup(item)
pytest_runtest_call(item)
pytest_runtest_teardown(item, nextitem)
pytest_runtest_makereport(item, call)
pytest_runtest_logreport(report)
Read more: Writing pytest plugins
All of this can be easily done either with a tiny plugin made as a simple installable library, or as a pseudo-plugin conftest.py which just lies around in one of the directories with the tests.
It looks like pytest lets you launch tests from Python code instead of using the command line; you just pass the same arguments to the function call that you would pass on the command line.
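Building on that, per-test outcomes can be collected in-process through the documented plugins argument of pytest.main, without parsing any output. A minimal sketch; ResultCollector and the tests/ path are placeholder names:

import pytest

class ResultCollector:
    # Any object whose methods are named after pytest hooks can be registered as a plugin.
    def __init__(self):
        self.results = {}

    def pytest_runtest_logreport(self, report):
        if report.when == 'call':
            self.results[report.nodeid] = report.outcome  # 'passed' / 'failed' / 'skipped'

collector = ResultCollector()
exit_code = pytest.main(['tests/', '-q'], plugins=[collector])
for nodeid, outcome in collector.results.items():
    print(nodeid, outcome)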
Pytest will create resultlog format files, but the feature is deprecated. The documentation suggests using the pytest-tap plugin that produces files in the Test Anything Protocol.

calling pytest from inside python code

I am writing a Python script for collecting data from running tests under different conditions. At the moment, I am interested in adding support for Py.Test.
The Py.Test documentation clearly states that running pytest inside Python code is supported:
You can invoke pytest from Python code directly... acts as if you would call “pytest” from the command line...
However, the documentation does not describe in detail the return value of calling pytest.main() as prescribed. The documentation only seems to indicate how to read the exit code of calling the tests.
What are the limits of the data resolution available through this interface? Does this method simply return a string indicating the results of the tests? Are friendlier data structures supported (e.g., the outcome of each test case assigned to a key/value pair)?
Update: Examining the return value in the REPL reveals that calling pytest.main yields an integer indicating the system exit code and directs a side effect (a stream of text detailing the test results) to standard output. Given that this is the case, does py.test provide an alternate interface for accessing the results of tests run from within Python code through some native data structure (e.g., a dictionary)? I would like to avoid capturing and parsing the stdout output because that approach seems error-prone.
I don't think so; the official documentation tells us that pytest.main returns an OS exit code, as described in the example here.
You can use the pytest flags if you want, even the traceback option (--tb), to see if some of those help you.
On your other point, about parsing the stdout output seeming error-prone: it really depends on what you are doing. Python has plenty of packages for that, subprocess for example.
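If the overall outcome is all you need, the integer returned by pytest.main can be interpreted directly; in recent pytest versions the constants are exposed as pytest.ExitCode. A minimal sketch, with tests/ as a placeholder path:

import pytest

exit_code = pytest.main(['tests/', '-q'])
if exit_code == pytest.ExitCode.OK:
    print('all tests passed')
elif exit_code == pytest.ExitCode.TESTS_FAILED:
    print('some tests failed')
else:
    print('pytest did not run cleanly:', exit_code)

For per-test detail, registering a plugin object with pytest.main (as shown in the previous question) avoids parsing stdout entirely.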

How to generate Junit output report using Behave Python

I'm using Behave on Python to test a web application.
My test suite runs properly, but I'm not able to generate a JUnit report.
Here is my behave.ini file:
[behave]
junit=true
format=pretty
I run behave using just this command: behave
After the run, the test results are printed in the console, but no report is generated.
1 feature passed, 3 failed, 0 skipped
60 scenarios passed, 5 failed, 0 skipped
395 steps passed, 5 failed, 5 skipped, 0 undefined
Took 10m17.149s
What can I do?
Make sure you don't change the working directory in your step definitions (or change it back to what it was before at the end of the test).
I was observing the same problem, and it turned out that the reports directory was created in the directory I had changed into while executing one of the steps.
What may help, if you don't want to worry about the working directory, is setting the --junit-directory option. This should help behave figure out where to store the report, regardless of the working directory at the end of the test (I have not tested that, though).
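If you prefer to keep everything in the config file, the equivalent settings can go into behave.ini. A minimal sketch, assuming behave's junit and junit_directory configuration options; the reports path is a placeholder:

[behave]
junit = true
# Where to store the generated JUnit XML files.
junit_directory = reports
format = pretty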
Try using
behave --junit
on the command line instead of just behave.
Also, you can show the options available by using:
behave --help
I have done a bit of searching and it appears that the easiest way to do this is via the Jenkins junit plugin.
It seems to me like there ought to be a simple way to convert JUnit XML reports to a human-readable HTML format, but I have not found one in any of my searches. The best I can come up with are a few JUnit bash scripts, but they don't appear to have any publishing capability; they only generate the XML reports.
