Using the Coverage Python API to get coverage results after a run - python

I want to generate a Markdown report after a coverage run, so I tried to use the Python API, particularly the CoverageData class. I can get the lines covered with CoverageData.lines(<file>), but I don't see how to get the percentage. Any pointers?

You should use the coverage json command to get a JSON data file, then process it however you like. It will be easier than using the API.
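If you prefer to stay in Python for the Markdown step, here is a minimal sketch that reads the file written by coverage json; it assumes the default coverage.json filename and the report's totals/files layout:

import json

# coverage json writes coverage.json by default; it has a "totals" section
# with the overall percentage and a per-file "files" section.
with open("coverage.json") as f:
    report = json.load(f)

print(f"Total: {report['totals']['percent_covered']:.1f}%")
for path, info in report["files"].items():
    # Emit one Markdown table row per file
    print(f"| {path} | {info['summary']['percent_covered']:.1f}% |")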

Related

Python unit-testing: view annotated coverage report in terminal

I'm using Python's unittest for testing, and I know I can view a per-source-file annotated coverage report in a browser by exporting it to HTML files with:
coverage run -m unittest *_test.py
coverage html
But I want to view this on the Linux/Unix command line: view a given source file with covered lines marked in green and missed lines marked in red.
I tested several terminal web browsers (w3m, links, elinks, links2) and none of them can display these HTML files in a readable manner.
Maybe I'm missing something, because it looks like a very obvious feature to have in "coverage" or "green" or other testing tools, but I can't find anything!
There isn't currently a way to get colored source-file reports in the terminal. For now, you can use coverage annotate to get annotated source files.
Perhaps it makes sense now to get rid of the old-style annotate, and replace it with a rich terminal report.
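In the meantime, one rough workaround is to post-process those annotated files yourself. A minimal sketch, assuming annotate's convention of prefixing executed lines with "> " and missed lines with "! " in the generated ,cover files:

import sys

GREEN, RED, RESET = "\033[32m", "\033[31m", "\033[0m"

# coverage annotate writes a <source>,cover file next to each source file,
# prefixing executed lines with "> " and missed lines with "! ".
with open(sys.argv[1]) as cover_file:
    for line in cover_file:
        text = line.rstrip("\n")
        if text.startswith("> "):
            print(GREEN + text[2:] + RESET)
        elif text.startswith("! "):
            print(RED + text[2:] + RESET)
        else:
            print(text)

Run it against one of the generated files, e.g. python colorize_cover.py mymodule.py,cover.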

Calculate test coverage percentage only using coverage.py to create gitlab badge using anybadge

GitLab only provides a way to visualize the coverage.py report for the default branch, using hard-coded logic: https://docs.gitlab.com/ee/ci/pipelines/settings.html#test-coverage-report-badge
Since I want to get the coverage value for any branch and show it in the README.md using the anybadge package, I only need the total test coverage as a percentage, so that I can create the badge manually and provide it as an artifact.
How can I calculate just this total coverage percentage, similar to GitLab's logic, using coverage.py?
Any hints are welcome!
You can use the coverage json command to get a JSON file with the results, then extract the total to use with anybadge. Share the code when you get it working!
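For example, a minimal sketch along those lines, assuming the default coverage.json filename and anybadge's Badge/write_badge interface; the thresholds and colors are just examples:

import json
import anybadge

# Read the overall percentage from the JSON report written by `coverage json`
with open("coverage.json") as f:
    total = json.load(f)["totals"]["percent_covered"]

# Threshold bands are illustrative; pick whatever cut-offs you like
badge = anybadge.Badge(
    label="coverage",
    value=round(total, 1),
    thresholds={50: "red", 65: "orange", 80: "yellow", 100: "green"},
)
badge.write_badge("coverage.svg")

The resulting coverage.svg can then be published as a CI artifact and referenced from the README.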

Using RobotFramework APIs for creating the testcases in Python

I have an automation repository coded in Python. Now I want to use some of the Robot Framework features, such as HTML logs and output and XML creation. Is it possible to somehow use the Robot features with my existing test cases written with Python's unittest library, without rewriting them? Please let me know if this is the wrong way to approach it.
Yes #rjha,
You can use your test cases written in Python. In Robot Framework we generally import libraries that are written in Python; using the same concept, your Python test cases can be used as well.
I'm using the RED editor in Eclipse. In my experience, the modules you create should be imported in your red.xml file; each method name then becomes a keyword, and when the test suite finishes executing, the log.html and report.html you want for results are generated.
For better test-case execution results, import the "Logging" module so you can use log.info, log.warn, etc. in your test cases; these messages will be shown in the generated HTML reports.
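As an illustration, here is a minimal sketch of such a keyword library that wraps existing unittest tests so Robot Framework can run them and report them in log.html/report.html; the file and class name (UnittestKeywords) and the dotted test names are hypothetical:

import unittest

class UnittestKeywords:
    """Robot Framework keyword library wrapping existing unittest test cases."""

    def run_unittest_case(self, dotted_name):
        """Run a unittest module, class, or single test given by its dotted name."""
        suite = unittest.defaultTestLoader.loadTestsFromName(dotted_name)
        result = unittest.TextTestRunner(verbosity=2).run(suite)
        if not result.wasSuccessful():
            # Raising here makes Robot Framework mark the keyword (and the test) as FAIL
            raise AssertionError(
                f"{len(result.failures)} failure(s), {len(result.errors)} error(s)"
            )

In a .robot suite you would then import it with Library    UnittestKeywords.py and call the Run Unittest Case keyword with a name such as my_tests.MyTestCase.test_login.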

Call python script from Jira while creating an issue

Let's say I'm creating an issue in Jira and write the summary and the description. Is it possible to call a Python script after these are written that sets the value of another field, depending on the values of the summary and the description?
I know how to create an issue and change fields from a Python script using the jira-python module, but I have not found a solution for running a Python script while editing/creating the issue manually in Jira. Does anyone have an idea of how to manage that?
I solved the problem by using the ScriptRunner add-on in Jira. There I created a scripted field in Groovy that calls the command prompt and runs the Python script from there. Here is a simple example of the script:
def process = ['cmd', '/c', 'filepathToPython.exe', 'filepathToPythonFile.py'].execute()
process.waitFor()
return process.text
process?.err?.text can be used instead of process.text if one wants to see any error messages.
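For reference, a rough sketch of what filepathToPythonFile.py could look like; passing the summary and description as command-line arguments, and the "urgent" rule, are purely illustrative assumptions, not part of the original setup:

import sys

# Hypothetical: the Groovy scripted field passes the summary and description
# as arguments; whatever this script prints becomes the field's value.
summary = sys.argv[1] if len(sys.argv) > 1 else ""
description = sys.argv[2] if len(sys.argv) > 2 else ""

# Illustrative rule: flag issues that mention "urgent"
if "urgent" in f"{summary} {description}".lower():
    print("High priority")
else:
    print("Normal priority")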
Take a look at JIRA webhooks calling a small Python-based web server.
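If you go the webhook route, here is a minimal sketch of a small Python web server reacting to an issue-created webhook and setting a field with jira-python; the server URL, credentials, custom field id (customfield_10100), and the derivation rule are all placeholders:

from flask import Flask, request
from jira import JIRA

app = Flask(__name__)
# Placeholder server URL and credentials
jira = JIRA(server="https://jira.example.com", basic_auth=("user", "api-token"))

@app.route("/jira-webhook", methods=["POST"])
def issue_created():
    payload = request.get_json()
    issue_key = payload["issue"]["key"]
    fields = payload["issue"]["fields"]
    summary = fields.get("summary") or ""
    description = fields.get("description") or ""

    # Placeholder rule: derive the extra field from summary/description
    value = "High priority" if "urgent" in (summary + description).lower() else "Normal priority"

    # customfield_10100 is a placeholder id for the field to set
    jira.issue(issue_key).update(fields={"customfield_10100": value})
    return "", 204

if __name__ == "__main__":
    app.run(port=5000)

You would then register the server's URL as a webhook for the "issue created" event in Jira's administration settings.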

python nose framework: A plugin to display results in a human friendly format

Any format that is targeted at humans (.html, .doc, whatever) would be good. I cannot find any plugin that provides it; all I found was xUnit or XML output.
I don't know of a stand-alone visualization tool, but Hudson can graph your test and coverage results. If there's a failure, it will list the problems on a web page with hyperlinks to each individual test result.
This blog post explains the setup: http://heisel.org/blog/2009/11/21/django-hudson/. There's a screenshot at the bottom that shows what's possible. It's geared toward Django, but the idea is applicable to any Python app.
A continuous integration server gives you many benefits beyond just graphing your test results. Hudson can automatically check out your code after a Subversion commit, run all your tests, email you if there's a failure, etc.
http://hudson-ci.org/
Nose has an HTML output module (the --cover-html option). See here: http://somethingaboutorange.com/mrl/projects/nose/0.11.1/plugins/cover.html
nosetests provides a way to dump results in xUnit XML format; use the options below:
--with-xunit --xunit-file <file.xml>
Once you have the results, you can use XSLT to convert your runs to XHTML.
I tried https://github.com/mungayree/nosetest-xunit-xslt
and it displays the results of your runs.
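As a concrete example of that last step, a minimal sketch using lxml; the stylesheet filename nosetests.xslt is an assumption, use whichever stylesheet you have (for example the one from that repository):

from lxml import etree

# Load the xunit XML produced by nosetests and an XSLT stylesheet,
# then write the transformed XHTML report.
xml = etree.parse("file.xml")
transform = etree.XSLT(etree.parse("nosetests.xslt"))  # stylesheet name is an assumption
html = transform(xml)

with open("report.html", "wb") as out:
    out.write(etree.tostring(html, pretty_print=True))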
