I'm using Behave with Python to test a web application.
My test suite runs properly, but I'm not able to generate a JUnit report.
Here is my behave.ini file:
[behave]
junit=true
format=pretty
I only run behave using this command: behave
After the run, the test results are printed to the console, but no report is generated.
1 feature passed, 3 failed, 0 skipped
60 scenarios passed, 5 failed, 0 skipped
395 steps passed, 5 failed, 5 skipped, 0 undefined
Took 10m17.149s
What can I do?
Make sure you don't change the working directory in your step definitions (or, at the end of the test, change it back to what it was before).
I was observing the same problem, and it turned out that the reports directory was created in the directory I changed into while executing one of the steps.
What may help, if you don't want to worry about the working directory, is setting the --junit-directory option. That should tell behave where to store the report regardless of the working directory at the end of the test (I have not tested that, though).
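If the directory change happens inside a step, one way to follow the first suggestion is to save and restore the working directory from behave's environment.py hooks. A minimal sketch (the attribute name is just an example):

# environment.py
import os

def before_scenario(context, scenario):
    # Remember the directory behave was started from.
    context._original_cwd = os.getcwd()

def after_scenario(context, scenario):
    # Restore it so the JUnit reports are written where behave expects them.
    os.chdir(context._original_cwd)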
Try using
behave --junit
on the command line instead of just behave.
Also, you can show the options available by using:
behave --help
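If you prefer to keep this in configuration rather than on the command line, the same settings can live in behave.ini. A sketch, assuming the configuration key is junit_directory (the underscore form of --junit-directory) and with reports just an example directory name, so double-check against behave --help:

[behave]
junit=true
junit_directory=reports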
I have done a bit of searching, and it appears that the easiest way to do this is via the Jenkins JUnit plugin.
It seems to me like there ought to be a simple way to convert JUnit XML reports into a human-readable HTML format, but I have not found one in any of my searches. The best I can come up with are a few JUnit bash scripts, but they don't appear to have any publishing capability; they only generate the XML reports.
Related
Is there a way to figure out which tests flipped their status since the last run? Status is either fail, pass or xfail.
You can output the results in JUnit XML format with the --junit-xml command-line flag and compare them later in a reporting tool such as Allure. Or maybe one of the various reporting plugins can help you, such as pytest-historic, which I haven't tried.
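If you just want a quick local comparison rather than a full reporting tool, a small script can diff two JUnit XML files. A rough sketch (the file names and the way status is derived are assumptions; adjust them to whatever your runner actually emits, and note that pytest records xfail as a skip in the XML, so the two are not distinguished here):

import xml.etree.ElementTree as ET

def statuses(path):
    # Map "classname::name" to a coarse status derived from the XML.
    results = {}
    for case in ET.parse(path).getroot().iter("testcase"):
        name = f"{case.get('classname')}::{case.get('name')}"
        if case.find("failure") is not None or case.find("error") is not None:
            results[name] = "fail"
        elif case.find("skipped") is not None:
            results[name] = "skip"  # xfail lands here as well
        else:
            results[name] = "pass"
    return results

old, new = statuses("previous.xml"), statuses("latest.xml")
for test in sorted(old.keys() & new.keys()):
    if old[test] != new[test]:
        print(f"{test}: {old[test]} -> {new[test]}")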
I've seen this question: py.test logging messages and test results/assertions into a single file
I've also read documentation here: https://docs.pytest.org/en/latest/logging.html
Neither comes close to a satisfactory solution.
I don't need assertion results together with logs, but it's OK if they are both in the logs.
I need all the logs produced during the test, but I don't need them for the tests themselves. I need them for analyzing the test results after the tests failed / succeeded.
I need logs for both succeeding and for failing tests.
I need stdout to only contain the summary (e.g. test-name PASSED). It's OK if the summary also contains the stack trace for failing tests, but it's not essential.
Essentially, I need the test to produce 3 different outputs:
HTML / JUnit XML artifact for CI.
CLI minimal output for CI log.
Extended log for testers / automation team to analyze.
I tried the pytest-logs plugin. As far as I can tell, it can override some default pytest behavior by displaying all logging while the test runs. This is slightly better than the default behavior, but it's still very far from what I need. From the documentation I understand that pytest-catchlog would conflict with pytest, so I don't even want to explore that option.
Question
Is this achievable by configuring pytest or should I write a plugin, or, perhaps, even a plugin won't do it, and I will have to patch pytest?
You can use the --junit-xml=xml-path switch to generate JUnit XML logs. If you want the report in HTML format, you can use the pytest-html plugin. Similarly, you can use the pytest-excel plugin to generate a report in Excel format.
You can use tee to send the same output to a file and to stdout at once. Example: pytest --junit-xml=report.xml | tee log_for_testers.log. This gives you the console output for the CI log, report.xml as the CI artifact, and log_for_testers.log for the team to analyze.
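Another option, if you would rather avoid the shell pipeline, is pytest's built-in logging options, which can write a detailed log to a file while the console stays terse. A sketch of one possible pytest.ini (the file names are examples; check pytest --help for the exact option names in your version):

[pytest]
addopts = -ra --junit-xml=report.xml
log_file = detailed.log
log_file_level = DEBUG

That would roughly cover the three outputs: report.xml as the CI artifact, the terse console summary for the CI log, and detailed.log for the testers to analyze.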
I am writing a Python script for collecting data from running tests under different conditions. At the moment, I am interested in adding support for Py.Test.
The Py.Test documentation clearly states that running pytest inside Python code is supported:
You can invoke pytest from Python code directly... acts as if you would call “pytest” from the command line...
However, the documentation does not describe in detail the return value of calling pytest.main() this way. It only seems to explain how to read the exit code of the test run.
What are the limits of the data available through this interface? Does this method simply return a string indicating the results of the tests? Is returning a friendlier data structure supported (e.g., the outcome of each test case as a key-value pair)?
Update: Examining the return value in the REPL reveals that calling pytest.main yields an integer indicating the system exit code and sends a side effect (a stream of text detailing the test results) to standard output. Given that, does Py.Test provide an alternate interface for accessing the results of tests run from within Python code through some native data structure (e.g., a dictionary)? I would like to avoid capturing and parsing the stdout output because that approach seems error-prone.
I don't think so; the official documentation tells us that pytest.main returns an OS exit code, as described in the example here.
You can still pass the usual pytest flags if you want to, even the traceback (--tb) option, to see whether any of those help you.
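For example, a minimal sketch of reading that exit code (the test path is an assumption; on recent pytest versions the return value compares equal to members of pytest.ExitCode):

import pytest

# Run the suite and keep only the integer exit code pytest.main returns.
exit_code = pytest.main(["tests/", "-q", "--tb=short"])
if exit_code == pytest.ExitCode.OK:
    print("all tests passed")
else:
    print(f"pytest finished with exit code {int(exit_code)}")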
On your other point, about parsing the stdout output seeming error-prone:
It really depends on what you are doing. Python has standard modules for running a command and capturing its output, such as subprocess.
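A rough sketch of that subprocess route (parsing the captured text is still fragile, so this only grabs the exit code and the raw output; the test path is an assumption):

import subprocess

result = subprocess.run(
    ["pytest", "tests/", "-q"],
    capture_output=True,
    text=True,
)
print("exit code:", result.returncode)
print(result.stdout)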
I have tests which have a huge variance in their runtime. Most will take much less than a second, some maybe a few seconds, some of them could take up to minutes.
Can I somehow specify that in my nose tests?
In the end, I want to be able to run only the subset of my tests that take, e.g., less than 1 second (based on an expected-runtime estimate that I specify).
Have a look at this write-up about the attribute plugin for nose, where you can manually tag tests with @attr('slow') and @attr('fast'). You can run nosetests -a '!slow' afterwards to run your tests quickly.
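For example, a minimal sketch of tagging a test with the attrib plugin (the test name is just an example):

from nose.plugins.attrib import attr

@attr('slow')
def test_full_pipeline():
    # long-running work goes here
    ...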
It would be great if you could do it automatically, but I'm afraid you would have to write additional code to do it on the fly. If you are into rapid development, I would run nose with xunit XML output enabled (which records the runtime of each test). Your test module can then dynamically read the XML output file from previous runs and set the attributes accordingly, so only the quick tests are selected. That way you do not have to tag them manually, albeit with more work (and you have to run all tests at least once).
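A rough sketch of that idea, reading a previous xunit run and collecting the tests that finished under one second (the file name and the 1-second threshold are assumptions; how you then attach attributes to the matching tests is up to you):

import xml.etree.ElementTree as ET

fast_tests = set()
for case in ET.parse("nosetests.xml").getroot().iter("testcase"):
    # Each testcase element carries its runtime in the "time" attribute.
    if float(case.get("time", "0")) < 1.0:
        fast_tests.add(f"{case.get('classname')}.{case.get('name')}")
print(f"{len(fast_tests)} tests ran in under a second last time")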
I have a completely non-interactive python program that takes some command-line options and input files and produces output files. It can be fairly easily tested by choosing simple cases and writing the input and expected output files by hand, then running the program on the input files and comparing output files to the expected ones.
1) What's the name for this type of testing?
2) Is there a Python package to do this type of testing?
It's not difficult to set up by hand in the most basic form, and I did that already. But then I ran into cases like output files containing the date and other information that can legitimately change between runs. I considered writing something that would let me specify which sections of the reference files are allowed to differ while still having the test pass, and realized I might be getting into "reinventing the wheel" territory.
(I rewrote a good part of unittest functionality before I caught myself last time this happened...)
I guess you're referring to a form of system testing.
No package would know which parts can legitimately change. My suggestion is to mock out the sections of code that produce those changes so that you can be sure the output is always the same; you can use tools like the mock library for that. Comparing two files is then straightforward: just read each into a string and compare the strings.
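A sketch of that mocking idea, written as a pytest test; the module name myprog and its current_timestamp() and run() functions are hypothetical stand-ins for however your program obtains the date and is invoked:

from pathlib import Path
from unittest import mock

import myprog  # hypothetical module under test

def test_output_matches_golden(tmp_path):
    out_file = tmp_path / "out.txt"
    # Freeze the timestamp so the output is identical from run to run.
    with mock.patch("myprog.current_timestamp", return_value="2000-01-01 00:00:00"):
        myprog.run(["--input", "tests/data/simple.in", "--output", str(out_file)])
    # Compare against the hand-written expected output file.
    assert out_file.read_text() == Path("tests/data/simple.expected").read_text()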
Functional testing. Or regression testing, if that is its purpose. Or code coverage, if you structure your data to cover all code paths.