Executing doctest unit tests from Jenkins - Python

I am struggling to find a good, well-documented approach for executing doctest-based Python unit tests from the Jenkins continuous integration server.
I have seen approaches where the doctests are executed by nose.
Does anyone know of an approach for executing Python doctests from Jenkins?
(I will try this approach.)
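One approach that seems workable is to expose the doctests through unittest's standard load_tests hook, so that whatever JUnit-XML-capable runner Jenkins invokes can discover and run them. A minimal sketch, with the module name assumed:

    import doctest
    import unittest

    import mypackage.mymodule  # hypothetical module containing doctests


    def load_tests(loader, tests, ignore):
        # unittest calls this hook during discovery; append the module's
        # doctests to the tests it has already collected.
        tests.addTests(doctest.DocTestSuite(mypackage.mymodule))
        return tests


    if __name__ == "__main__":
        unittest.main()

pytest can also collect doctests directly with --doctest-modules and write JUnit XML via --junitxml, which Jenkins can then publish.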

Related

IronPython Microsoft.Scripting.MutableTuple Exception

I am trying to run unit tests for IronPython code (Python with C#). I am facing the error below when running the unit tests. However, the weird thing is that the framework works for other Python code tests. The framework is written using unittest.defaultTestLoader.loadTestsFromName and then calls the test runner.
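The loader pattern of the framework is roughly the following (the dotted test name here is hypothetical):

    import unittest

    # Load the tests by dotted name, then hand the suite to a runner.
    suite = unittest.defaultTestLoader.loadTestsFromName("tests.test_reports")
    unittest.TextTestRunner(verbosity=2).run(suite)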
One reason I feel the code may not be running is a memory exception, as this Python code uses a lot more data structures than the others for which the tests run successfully.
I have tried changing defaultTestLoader to loadTestsFromModule and loadTestsFromTestCase, but I still get the same error.
Can someone please help me find a solution to this?
[Attachment: error on CMD when running the tests]
[Attachment: test framework code used to run the tests]
I faced the same problem in the Dynamo for Revit application.
As I understand it, the number of named variables in a scope has some memory limit.
I started using a dict instead of separate variables. That helped me, but it is not the best way out.
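For illustration (a hypothetical sketch, not the original code): IronPython can fail while compiling a scope that declares very many local variables, which surfaces as the MutableTuple error; packing the values into a single dict keeps the local-variable count small.

    # Before: hundreds of locals like value_001, value_002, ... in one scope.
    # After: a single local (the dict) holds everything.
    values = {}
    for i in range(500):
        values["value_%03d" % i] = i * 2  # stand-in for the real computations

    print(values["value_042"])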

Python - Difference between SonarQube and pylint

I'm evaluating test framework, lint and code coverage options for a new Python project I'll be working on.
I've chosen pytest for the testing needs. After reading a bunch of resources, I'm confused about when to use SonarQube, SonarLint, pylint, and coverage.py.
Are SonarLint and pylint comparable? When would I use SonarQube?
I need to be able to use this in a Jenkins build. Thanks for helping!
SonarLint and pylint are comparable, in a way.
SonarLint is a code linter, and pylint is too. I haven't used SonarLint, but it seems that it analyzes the code a bit more deeply than pylint does. From my experience, pylint only follows a set of rules (which you can modify, by the way), while SonarLint goes a bit further by analyzing the inner workings of your code. They are both static analysis tools, however.
SonarQube, on the other hand, does a bit more. SonarQube is a CI/CD tool that runs static linters, but it also shows you code smells and performs security analysis. All of what I'm saying is based purely on their website.
If you want to run CI/CD workflows or scripts, you would use SonarQube, but for local coding, SonarLint is enough. Pylint is the traditional way, though.
Nicholas has a great summary of pylint vs SonarLint.
(Personally, I use SonarLint.)
Although the question is older, I thought I'd answer the other part of it in case anyone else has the same question; the internet being eternal and all.
Coverage.py, as it sounds, measures code coverage for your package. SonarQube then takes the report that coverage.py produces and presents it in the format the Sonar team decided on. Coverage.py is needed if you want to use SonarQube for code coverage; however, if you just want the code smells from SonarQube, it is not needed.
You were also asking about when to use SonarQube, coverage.py, and Jenkins.
In Jenkins, you would create a pipeline with several stages, along the following lines (see the sketch after this list):
Check out the code (automatically done as the first step by Jenkins)
Build the code as it is intended to be used by the user/developer
Run the unit tests
Run coverage.py
Run SonarQube
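As a rough sketch of what those stages might invoke, expressed as Python subprocess calls to stay in this thread's language; every command and path here is an assumption to adapt:

    import subprocess

    # One command per pipeline stage; all commands/paths are assumptions.
    stages = [
        ("Build", ["pip", "install", "-e", "."]),
        ("Unit tests", ["pytest", "--junitxml=test-results.xml"]),
        ("Coverage", ["coverage", "run", "-m", "pytest"]),
        ("Coverage report", ["coverage", "xml", "-o", "coverage.xml"]),
        ("SonarQube", ["sonar-scanner"]),
    ]

    for name, cmd in stages:
        print("Stage:", name)
        subprocess.run(cmd, check=True)  # a non-zero exit fails the build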

Test coverage tool for Behave test framework

We are using the Behave BDD tool for automating APIs. Is there any tool which gives code coverage using our Behave cases?
We tried using the coverage module, but it didn't work with Behave.
You can run any module with coverage to see the code usage.
In your case it should be close to: coverage run --source='.' -m behave
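If that works, the usual follow-up (assuming coverage.py is installed) is coverage report -m for a terminal summary with missing line numbers, or coverage html for a browsable report.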
Tracking code coverage for acceptance/integration/behaviour tests will easily give a high coverage number, but it can lead to the mistaken idea that the code is properly tested.
Those tests are for seeing things work together, not for tracking how much code is well 'covered'.
Tying coverage to unit tests makes more sense to me.

Controlling the distribution of tests with py.test xdist

I have several thousand tests that I want to run in parallel. The tests are all compiled binaries that give a return code of 0 or non-zero (on failure). Some unknown subset of them try to use the same resources (files, ports, etc.). Each test assumes that it is running independently and just reports a failure if a resource isn't available.
I'm using Python to launch each test using the subprocess module, and that works great serially. I looked into Nose for parallelizing, but I need to autogenerate the tests (to wrap each of the 1000+ binaries in a Python class that uses subprocess), and Nose's multiprocessing module doesn't support parallelizing autogenerated tests.
I ultimately settled on PyTest because it can run autogenerated tests on remote hosts over SSH with the xdist plugin.
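Under pytest, the autogeneration is roughly a parametrized wrapper; a sketch, with the binary location assumed:

    import glob
    import subprocess

    import pytest

    BINARIES = sorted(glob.glob("tests/bin/*"))  # hypothetical location


    @pytest.mark.parametrize("binary", BINARIES)
    def test_binary(binary):
        # Each compiled test signals failure with a non-zero return code.
        assert subprocess.run([binary]).returncode == 0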
However, as far as I can tell, it doesn't look like xdist supports any control over how the tests get distributed. I want to give it a pool of N machines and have one test run per machine.
Is what I want possible with PyTest/xdist? If not, is there a tool out there that can do what I'm looking for?
I am not sure if this would help, but if you know ahead of time how you want to divide up your tests, then instead of having pytest distribute them, you could use your continuous integration server to call a different run of pytest on each machine. Using -k or -m to select a subset of tests, or simply specifying different test directory paths, you could control which tests run together (see the sketch below).
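For example (the marker names here are hypothetical; register them under markers in pytest.ini to avoid warnings):

    import pytest


    @pytest.mark.shard1
    def test_alpha():
        assert True


    @pytest.mark.shard2
    def test_beta():
        assert True

    # Each Jenkins node then runs its own slice, e.g.:
    #   pytest -m shard1 --junitxml=shard1.xml
    #   pytest -m shard2 --junitxml=shard2.xml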

While running some Java DbUnit tests which call a Python script, how can I test the code coverage for the Python script?

I have a Python script which generates some reports based on a DB.
I am testing the script using Java DbUnit tests which call the Python script.
My question is: how can I verify the code coverage for the Python script while I am running the DbUnit tests?
Coverage.py has an API that you can use to start and stop coverage measurement as you need.
I'm not sure how you are invoking your Python code from your Java code, but once in the Python, you can use coverage.py to measure Python execution, and then get reports on the results.
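A minimal sketch of that API (method names per the coverage.py docs; the module under test is hypothetical):

    import coverage

    cov = coverage.Coverage()
    cov.start()

    import reports      # hypothetical module the Java tests invoke
    reports.generate()  # hypothetical entry point that does the work

    cov.stop()
    cov.save()
    cov.report()  # terminal summary; cov.xml_report() writes coverage.xml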
Drop me a line if you need help.
I don't know how you can check for inter-language unit test coverage. You will have to tweak the framework yourself to achieve something like this.
That said, IMHO this is the wrong approach to take, for various reasons.
Being inter-language disqualifies the tests from being described as "unit" tests. These are functional tests, and thereby shouldn't care about code coverage (see @Ned's comment below).
If you must unit test the Python code, then I suggest that you do it in Python. This will also solve your problem of checking for test coverage.
If you do want to functional-test, then it would be a good idea to keep the Python code coverage checks away from Java. This reduces the coupling between the Java and Python code. Tests are code, after all, and it is usually a good idea to reduce coupling between parts.
