Test coverage tool for Behave test framework - python

We are using the Behave BDD tool for automating APIs. Is there any tool which gives code coverage for our Behave cases?
We tried using the coverage module, but it didn't work with Behave.

You can run any module with coverage to see the code usage.
In your case it should be close to: coverage run --source='.' -m behave
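For reference, a typical session might look like this (a sketch, assuming behave and coverage are installed and your feature files live in the default features/ directory):

coverage run --source='.' -m behave
coverage report -m
coverage html

The -m flag on coverage report adds a column listing the line numbers that were missed, and coverage html writes a browsable report to htmlcov/.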
Tracking code coverage for acceptance/integration/behaviour tests will easily give a high coverage number, but it can lead to the mistaken idea that the code is properly tested.
Those tests are for seeing things work together, not for tracking how much code is well 'covered'.
Tying unit tests and coverage together makes more sense to me.


Python - Difference between SonarQube vs Pylint

I'm evaluating test framework, lint, and code coverage options for a new Python project I'll be working on.
I've chosen pytest for the testing needs. After reading a bunch of resources, I'm confused about when to use SonarQube, SonarLint, Pylint, and coverage.py.
Are SonarLint and Pylint comparable? When would I use SonarQube?
I need to be able to use this in a Jenkins build. Thanks for helping!
SonarLint and Pylint are comparable, in a way.
SonarLint is a code linter, and Pylint is too. I haven't used SonarLint, but it seems to analyze the code a bit more deeply than Pylint does. From my experience, Pylint only follows a set of rules (which you can modify, by the way), while SonarLint goes a bit further, analyzing the inner workings of your code. They are both static analysis tools, however.
SonarQube, on the other hand, does a bit more. SonarQube is a CI/CD tool that runs static linters, but it also shows you code smells and does a security analysis. All of what I'm saying is based purely on their website.
If you would like to run CI/CD workflows or scripts, you would use SonarQube, but for local coding, SonarLint is enough. Pylint is the traditional way, though.
Nicholas has a great summary of Pylint vs SonarLint.
(Personally, I use SonarLint.)
Although the question is older, I thought I'd answer the other part of your question in case anyone else has the same one; the internet being eternal and all.
Coverage.py, as it sounds, runs code coverage for your package. SonarQube then takes the report that coverage.py produces and formats it in the way the Sonar team decided was useful. Coverage.py is needed if you want to use SonarQube for code coverage; however, if you just want the code smells from SonarQube, it is not needed.
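To make that concrete: the usual glue is to have coverage.py emit an XML report and point SonarQube at it. A sketch, where my_project is a placeholder and the property names follow the SonarQube Python plugin:

coverage run -m pytest
coverage xml

Then, in sonar-project.properties:

sonar.projectKey=my_project
sonar.sources=.
sonar.python.coverage.reportPaths=coverage.xml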
You were also asking about when to use SonarQube, coverage.py, and Jenkins.
In Jenkins, you would create a pipeline with several stages, something along the following lines (see the sketch after this list):
Check out the code (automatically done as the first step by Jenkins)
Build the code as it is intended to be used by the user/developer
Run the unit tests
Run coverage.py
Run SonarQube
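A declarative Jenkinsfile for those stages might look roughly like this. This is a sketch, not a drop-in file: the shell commands and the 'MySonarServer' name are assumptions, and the SonarQube stage needs the SonarQube Scanner plugin configured in Jenkins.

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'pip install -r requirements.txt'  // assumes a requirements.txt
            }
        }
        stage('Unit tests') {
            steps {
                sh 'coverage run -m pytest --junitxml=results.xml'
            }
        }
        stage('Coverage report') {
            steps {
                sh 'coverage xml'  // produces coverage.xml for SonarQube
            }
        }
        stage('SonarQube') {
            steps {
                withSonarQubeEnv('MySonarServer') {  // server name as configured in Jenkins
                    sh 'sonar-scanner'
                }
            }
        }
    }
    post {
        always {
            junit 'results.xml'  // publish the test results in Jenkins
        }
    }
}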

Executing doctest unit tests from Jenkins

I am struggling to find a good, well-documented approach for executing doctest-based Python unit tests from the Jenkins continuous integration server.
I have seen approaches where the doctests are executed by nose.
Does anyone know of an approach for executing Python doctests from Jenkins?
(Will try this approach)
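One approach that plays nicely with Jenkins is to collect the doctests into a regular unittest suite via the standard load_tests protocol. A minimal sketch, where mymodule is a placeholder for your own module:

import doctest
import unittest

import mymodule  # placeholder: the module whose docstrings contain the doctests

def load_tests(loader, tests, ignore):
    # unittest calls this hook automatically and merges the doctests in
    tests.addTests(doctest.DocTestSuite(mymodule))
    return tests

if __name__ == '__main__':
    unittest.main()

Alternatively, pytest can run doctests directly and produce the JUnit XML that Jenkins consumes:

pytest --doctest-modules --junitxml=results.xml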

Adding unit testing to a large python project

I started learning Python as I developed a project about a year ago. Since then, the project has become something of a (quite large) stable and useful tool for me. The project's arrangement is like so:
main.py
../functions/func1.py
../functions/func2.py
../functions/func3.py
../functions/func4.py
...
../functions/funcN.py
where the main.py file calls the rest of the functions sequentially.
The issue is that I did not write a single unit test for any of the functions. Not one.
I did not pay much attention to testing since at first I was just learning, and eventually it got out of hand.
I want to correct this and add proper unit tests. The question is: which testing method should I use for my project?
I've seen many different methods described:
unittest
Doctest
pytest
nose
tox
unittest2
mock
but I've no idea if one of those is more suited to something like my project than the rest.
unittest (unittest2 is essentially a backport of its newer features to older Pythons) is already in Python and the most standard; just start with that.
Think of nose as a set of extensions; use it when you want something not already in unittest. It's quite popular as well.
doctest puts unit tests into docstrings; I don't like it too much, but use it if you want to.
mock is a testing technique to use when interacting with interfaces/objects is not trivial.
tox runs your tests under different Python environments.
As an addition, integration tools like Travis/Jenkins allow you to run tox or sets of unit tests automatically; they're often used for multi-user projects, so everybody can see the test results on each commit.
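To make the "just start with unittest" advice concrete for the layout above, a first test might look like this (a sketch: the imported name and the expected value are made up, adjust them to whatever functions/func1.py actually defines):

import unittest

from functions import func1  # assumes functions/ is an importable package

class TestFunc1(unittest.TestCase):
    def test_known_input_gives_known_output(self):
        # hypothetical call; replace with a real function and a value you know is right
        self.assertEqual(func1.do_something(2), 4)

if __name__ == '__main__':
    unittest.main()

You can then run everything with python -m unittest discover from the project root.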

Python benchmark tool like nosetests?

What I want
I would like to create a set of benchmarks for my Python project. I would like to see the performance of these benchmarks change as I introduce new code. I would like to do this in the same way that I test Python, by running a utility command like nosetests and getting a nicely formatted readout.
What I like about nosetests
The nosetests tool works by searching through my directory structure for any files named test_foo.py and running all functions test_bar() contained within. It runs all of those functions and prints out whether or not they raised an exception.
I'd like something similar that searches for all files bench_foo.py, runs all contained functions bench_bar(), and reports their runtimes.
Questions
Does such a tool exist?
If not what are some good starting points? Is some of the nose source appropriate for this?
nosetests can run any type of test, so you can decide if they test functionality, input/output validity, etc., or performance or profiling (or anything else you'd like). The Python profiler is a great tool, and it comes with your Python installation.
import unittest
import cProfile

class ProfileTest(unittest.TestCase):
    def test_run_profiler(self):
        # foo, bar and baz are placeholders for your own code under test
        cProfile.run('foo(bar)')
        cProfile.run('baz(bar)')
You just add a line to the test, or add a test to the test case for all the calls you want to profile, and your main source is not polluted with test code.
If you only want to time the execution and don't need all the profiling information, timeit is another useful tool.
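For example (a sketch; foo and bar are the same placeholders as in the profiling example above):

import timeit

# time 1000 calls of foo(bar); globals=globals() makes the names visible to timeit
elapsed = timeit.timeit('foo(bar)', globals=globals(), number=1000)
print('%.4f seconds for 1000 runs' % elapsed)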
The wheezy documentation has a good example of how to do this with nose. The important part, if you just want the timings, is to use the options -q for a quiet run, -s for not capturing the output (so you will see the output of the report), and -m benchmark to only run the 'timing' tests.
I recommend using py.test for testing over nose. To run the example from wheezy with it, change the name of the runTest method to test_bench_run and run only this benchmark with:
py.test -qs -k test_bench benchmark_hello.py
(-q and -s have the same effect as with nose, and -k selects the pattern of the test names).
If you put your benchmark tests in a separate file or directory from the normal tests, they are of course easier to select and don't need special names.

While running some Java DBUnit tests which call a Python script, how can I test the code coverage for the Python script?

I have a Python script which generates some reports based on a DB.
I am testing the script using Java DBUnit tests which call the Python script.
My question is: how can I verify the code coverage for the Python script while I am running the DBUnit tests?
Coverage.py has an API that you can use to start and stop coverage measurement as you need.
I'm not sure how you are invoking your Python code from your Java code, but once in Python, you can use coverage.py to measure the Python execution, and then get reports on the results.
Drop me a line if you need help.
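For example, a minimal sketch based on the documented coverage.py API:

import coverage

cov = coverage.Coverage()
cov.start()

# ... run the report-generating code here ...

cov.stop()
cov.save()    # writes the .coverage data file
cov.report()  # prints a terminal summary; cov.html_report() is also available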
I don't know how you can check inter-language unit test coverage; you will have to tweak the framework yourself to achieve something like this.
That said, IMHO this is the wrong approach to take, for various reasons.
Being inter-language disqualifies the tests from being described as 'unit' tests. These are functional tests, and thereby shouldn't care about code coverage (see @Ned's comment below).
If you must (unit) test the Python code, then I suggest that you do it in Python. This will also solve your problem of checking test coverage.
If you do want to run functional tests, it would be a good idea to keep the Python code coverage checks away from Java. This would reduce the coupling between the Java and Python code. Tests are code, after all, and it is usually a good idea to reduce coupling between parts.
