Difference between SonarQube vs pylint

I'm evaluating test framework, linting, and code coverage options for a new Python project I'll be working on.
I've chosen pytest for the testing needs. After reading a bunch of resources, I'm confused about when to use SonarQube, SonarLint, pylint, and coverage.py.
Are SonarLint and pylint comparable? When would I use SonarQube?
I need to be able to use this in a Jenkins build. Thanks for helping!

SonarLint and pylint are comparable, in a way.
SonarLint is a code linter and pylint is too. I haven't used SonarLint, but it seems that it analyzes the code a bit more deeply than pylint does. From my experience, pylint only follows a set of rules (that you can modify, by the way), while SonarLint goes a bit further, analyzing the inner workings of your code. They are both static analysis tools, however.
SonarQube, on the other hand, does a bit more. SonarQube is a CI/CD tool that runs static linters, but it also shows you code smells and does security analysis. All of what I'm saying is based purely on their website.
If you would like to run CI/CD workflows or scripts, you would use SonarQube, but for local coding, SonarLint is enough. Pylint is the traditional way, though.

Nicholas has a great summary of pylint vs SonarLint.
(Personally, I use SonarLint.)
Although the question is old, I thought I'd answer the other part of it in case anyone else has the same question; the internet being eternal and all.
Coverage.py, as it sounds, runs code coverage for your package. SonarQube then takes the report that coverage.py produces and formats it the way the Sonar team decided was necessary. Coverage.py is needed if you want to use SonarQube for code coverage; however, if you just want the code smells from SonarQube, it is not needed.
You were also asking about when to use SonarQube, coverage.py, and Jenkins.
In Jenkins, you would create a pipeline with several stages. Something along the following lines:
Check out the code (automatically done as the first step by Jenkins)
Build the code as it is intended to be used by the user/developer
Run the unit tests
Run coverage.py
Run SonarQube
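As a rough sketch of what the last three stages might run (the project key, server URL, and file names here are placeholder assumptions, not from the original question):

coverage run -m pytest        # run the test suite under coverage
coverage xml -o coverage.xml  # produce the report SonarQube can ingest
sonar-scanner -Dsonar.projectKey=my-project -Dsonar.host.url=http://sonarqube:9000 -Dsonar.python.coverage.reportPaths=coverage.xml

The sonar-scanner step is what pushes the analysis (and the coverage report) to the SonarQube server.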


PyCharm code style check via command line

Is it possible to run the PyCharm linter / code style checks from command line and get the warnings/errors?
As an extension to that: is it possible to integrate this into my Travis tests?
To put some of the comments and some further research into an answer:
PyCharm comes with a small command-line utility bin/inspect.sh, which is documented here. This tool is quite limited, though, and has some problems: e.g. it cannot run while the PyCharm IDE is running, and it reports somewhat incorrectly / differently than the IDE. Related code can be seen e.g. here.
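For reference, the documented invocation takes a project path, an inspection profile, and an output directory (the paths below are illustrative, not from the original post):

./bin/inspect.sh ~/myproject ~/myproject/.idea/inspectionProfiles/Project_Default.xml /tmp/inspection-results -v2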
PyCharm does not do PEP8 code style checks by itself but uses the (bundled) pycodestyle tool.
Maybe these shortcomings can be fixed upstream. See e.g. this report, or this, or this.
I'm using this in Travis and GitHub Actions now, via this script.
This also does the necessary setup of a project.
It also compensates for the warnings missing from inspect.sh by additionally running the pycodestyle tool, thus exactly matching the PyCharm IDE warnings.
Alternatively, I'm thinking about writing an extended simple utility which basically does this. All the relevant PyCharm code is open source. I created a project page, pycharm-inspect, for this. But this is just in the planning phase currently, and it may become obsolete once this is addressed upstream.

How to get the warning - 'unused function' in Eclipse Python (PyDev)?

I want to get warnings for unused functions in PyDev, the Python IDE for Eclipse.
I found no option for that in Code Analysis. How can I do this?
PyDev by default does not have such functionality... Finding unused functions may be extremely difficult due to Python's dynamic nature -- also, it would require whole-program analysis (the PyDev code analysis is meant to be fast and to produce as few false positives as possible without doing whole-program analysis, so this check doesn't really suit PyDev's goals).
Now, there is a project that seems to implement it: https://github.com/jendrikseipp/vulture -- so that may be an option... or maybe pylint (pylint itself is integrated in PyDev -- http://www.pydev.org/manual_adv_pylint.html -- but I think it may not have that functionality either).
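For illustration, vulture is run straight from the command line (the paths below are placeholders):

pip install vulture
vulture myscript.py mypackage/

It prints every function, class, variable, and attribute it believes is never used, along with a confidence estimate.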
Another option could be running with code coverage (http://www.pydev.org/manual_adv_coverage.html) -- everything not hit could potentially be unused. But that requires a suite of integration tests which actually exercise your whole app; unit tests may end up calling code not used in the real app, which can skew the results, so you really need a reliable suite of integration tests covering your whole app for this to work.

Why are there no Makefiles for automation in Python projects?

As a long-time Python programmer, I wonder if a central aspect of Python culture has eluded me for a long time: what do we do instead of Makefiles?
Most Ruby projects I've seen (not just Rails) use Rake; shortly after Node.js became popular, there was Cake. In many other (compiled and non-compiled) languages there are classic Makefiles.
But in Python, no one seems to need such infrastructure. I randomly picked Python projects on GitHub, and they had no automation besides the installation provided by setup.py.
What's the reason behind this?
Is there nothing to automate? Do most programmers prefer to run style checks, tests, etc. manually?
Some examples (a rough Makefile sketch follows the list):
dependencies sets up a virtualenv and installs the dependencies
check calls the pep8 and pylint command-line tools
the test task depends on dependencies, enables the virtualenv, starts selenium-server for the integration tests, and calls nosetest
the coffeescript task compiles all CoffeeScript files to minified JavaScript
the runserver task depends on dependencies and coffeescript
the deploy task depends on check and test and deploys the project
the docs task calls sphinx with the appropriate arguments
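A hedged reconstruction of what such a Makefile might look like (the target bodies are illustrative assumptions, not from the original project):

dependencies:
	virtualenv env
	env/bin/pip install -r requirements.txt

check:
	env/bin/pep8 src
	env/bin/pylint src

test: dependencies
	env/bin/nosetests    # selenium-server is started beforehand for the integration tests

docs:
	env/bin/sphinx-build -b html docs docs/_build

deploy: check test
	env/bin/python setup.py install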
Some of them are just one- or two-liners, but IMHO, they add up. Thanks to the Makefile, I don't have to remember them.
To clarify: I'm not looking for a Python equivalent of Rake; I'm happy with Paver. I'm looking for the reasons.
Actually, automation is useful to Python developers too!
Invoke is probably the closest tool to what you have in mind, for automation of common repetitive Python tasks: https://github.com/pyinvoke/invoke
With invoke, you can create a tasks.py like this one (borrowed from the invoke docs):
from invoke import run, task

@task
def clean(docs=False, bytecode=False, extra=''):
    patterns = ['build']
    if docs:
        patterns.append('docs/_build')
    if bytecode:
        patterns.append('**/*.pyc')
    if extra:
        patterns.append(extra)
    for pattern in patterns:
        run("rm -rf %s" % pattern)

@task
def build(docs=False):
    run("python setup.py build")
    if docs:
        run("sphinx-build docs docs/_build")
You can then run the tasks at the command line, for example:
$ invoke clean
$ invoke build --docs
Another option is to simply use a Makefile. For example, a Python project's Makefile could look like this:
docs:
	$(MAKE) -C docs clean
	$(MAKE) -C docs html
	open docs/_build/html/index.html

release: clean
	python setup.py sdist upload

sdist: clean
	python setup.py sdist
	ls -l dist
Setuptools can automate a lot of things, and for things that aren't built-in, it's easily extensible.
To run unit tests, you can use the setup.py test command after adding a test_suite argument to the setup() call. (documentation)
Dependencies (even if not available on PyPI) can be handled by adding install_requires/extras_require/dependency_links arguments to the setup() call. (documentation)
To create a .deb package, you can use the stdeb module.
For everything else, you can add custom setup.py commands.
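A minimal sketch of what these hooks look like together (the project name and dependency lists are placeholders):

from setuptools import setup, find_packages

setup(
    name='myproject',
    version='0.1',
    packages=find_packages(),
    test_suite='tests',                    # enables `python setup.py test`
    install_requires=['requests'],         # pulled in on install
    extras_require={'docs': ['sphinx']},   # optional, installed on demand
)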
But I agree with S.Lott: most of the tasks you'd wish to automate (except dependency handling, maybe; it's the only one I find really useful) are tasks you don't run every day, so there wouldn't be any real productivity improvement from automating them.
There are a number of options for automation in Python. I don't think there is a culture against automation; there is just not one dominant way of doing it. The common denominator is distutils.
The one closest to your description is buildout. This is mostly used in the Zope/Plone world.
I myself use a combination of the following: Distribute, pip, and Fabric. I mostly develop using Django, which has manage.py for automation commands.
Packaging is also being actively worked on in Python 3.3.
Any decent test tool has a way of running the entire suite in a single command, and nothing is stopping you from using rake, make, or anything else, really.
There is little reason to invent a new way of doing things when existing methods work perfectly well - why re-invent something just because YOU didn't invent it? (NIH).
The make utility is an optimization tool which reduces the time spent building a software image. The reduction in time is obtained when all of the intermediate materials from a previous build are still available, and only a small change has been made to the inputs (such as source code). In this situation, make is able to perform an "incremental build": rebuild only a subset of the intermediate pieces that are impacted by the change to the inputs.
When a complete build takes place, all that make effectively does is to execute a set of scripting steps. These same steps could just be deposited into a flat script. The -n option of make will in fact print these steps, which makes this possible.
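For example (sdist is just an illustrative target name):

make -n sdist    # print the recipe make would execute, without running it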
A Makefile isn't "automation"; it's "automation with a view toward optimized incremental rebuilds." Anything scripted with any scripting tool is automation.
So, why would Python projects eschew tools like make? Probably because Python projects don't struggle with long build times that they are eager to optimize. Also, the compilation of a .py file to a .pyc does not have the same web of dependencies as a .c to a .o.
A C source file can #include hundreds of dependent files; a one-character change in any one of these files can mean that the source file must be recompiled. A properly written Makefile will detect when that is or is not the case.
A big C or C++ project without an incremental build system would mean that a developer has to wait hours for an executable image to pop out for testing. Fast, incremental builds are essential.
In the case of Python, probably all you have to worry about is when a .py file is newer than its corresponding .pyc, which can be handled by simple scripting: loop over all the files, and recompile anything newer than its byte code. Moreover, compilation is optional in the first place!
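Incidentally, the standard library ships exactly that loop as the compileall module, so the entire "incremental build" for Python is one command:

python -m compileall -q .

or, equivalently, as an API call:

import compileall
compileall.compile_dir('.', quiet=1)   # recompiles only .py files newer than their .pyc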
So the reason Python projects tend not to use make is that their need to perform incremental rebuild optimization is low, and they use other tools for automation; tools that are more familiar to Python programmers, like Python itself.
The original PEP where this was raised can be found here. Distutils has become the standard method for distributing and installing Python modules.
Why? It just happens that Python is a wonderful language to perform the installation of Python modules with.
Here are few examples of makefile usage with python:
https://blog.horejsek.com/makefile-with-python/
https://krzysztofzuraw.com/blog/2016/makefiles-in-python-projects.html
I think most people are not aware of the "Makefile for Python" case. It could be useful, but the "sexiness ratio" is too small for it to propagate rapidly (just my personal point of view).
Is there nothing to automate?
Not really. All but two of the examples are one-line commands.
tl;dr Very little of this is really interesting or complex. Very little of this seems to benefit from "automation".
Due to documentation, I don't have to remember the commands to do this.
Do most programmers prefer to run style checks, tests, etc. manually?
Yes.
generating documentation:
the docs task calls sphinx with the appropriate arguments
It's one line of code. Automation doesn't help much.
sphinx-build -b html source build/html. That's a script. Written in Python.
We do this rarely. A few times a week. After "significant" changes.
running style checks (pylint, pyflakes, and the pep8 command-line tool):
check calls the pep8 and pylint command-line tools
We don't do this. We use unit testing instead of pylint.
You could automate that three-step process.
But I can see how SCons or make might help someone here.
tests
There might be space for "automation" here. It's two lines: the non-Django unit tests (python test/main.py) and the Django tests (manage.py test). Automation could be applied to run both lines.
We do this dozens of times each day. We never knew we needed "automation".
dependencies sets up a virtualenv and installs the dependencies
Done so rarely that a simple list of steps is all that we've ever needed. We track our dependencies very, very carefully, so there are never any surprises.
We don't do this.
the test task depends on dependencies enables the virtualenv, starts selenium-server for the integration tests, and calls nosetest
The start server & run nosetest as a two-step "automation" makes some sense. It saves you from entering the two shell commands to run both steps.
the coffeescript task compiles all coffeescripts to minified javascript
This is something that's very rare for us. I suppose it's a good example of something to be automated. Automating the one-line script could be helpful.
I can see how SCons or make might help someone here.
the runserver task depends on dependencies and coffeescript
Except. The dependencies change so rarely that this seems like overkill. I suppose it can be a good idea if you're not tracking dependencies well in the first place.
the deploy task depends on check and test and deploys the project.
It's an svn co and python setup.py install on the server, followed by a bunch of customer-specific copies from the Subversion area to the customer /www area. That's a script. Written in Python.
It's not a general make or SCons kind of thing. It has only one actor (a sysadmin) and one use case. We wouldn't ever mingle deployment with other development, QA or test tasks.

Finding unused Django code to remove

I've started working on a project with loads of unused legacy code in it. I was wondering if it might be possible to use a tool like coverage in combination with a crawler (like the django-test-utils one) to help me locate code which isn't getting hit, which we could then mark with deprecation warnings. I realise that something like this won't be foolproof, but I thought it might help.
I've tried running coverage.py with the Django debug server, but it doesn't work correctly (it seems to just profile the runserver machinery rather than my views, etc.).
We're improving our test coverage all the time, but there's a way to go, and I thought there might be a quicker way.
Any thoughts?
Thanks.
You can run the development server under coverage if you use the --noreload switch:
coverage run ./manage.py runserver --noreload
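Once you've exercised the pages you care about (by hand or with the crawler), stop the server and inspect the results with the usual coverage.py commands:

coverage report -m    # console summary with missing line numbers
coverage html         # annotated HTML report, written to htmlcov/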
pylint is a great tool for static code analysis (among other things, it will detect unused imports, variables, and arguments).
http://nedbatchelder.com/blog/200806/pylint.html
http://www.doughellmann.com/articles/pythonmagazine/completely-different/2008-03-linters/index.html

While running Java DbUnit tests which call a Python script, how can I test the code coverage of the Python script?

I have a Python script which generates some reports based on a DB.
I am testing the script using Java DbUnit tests which call the Python script.
My question is: how can I verify the code coverage of the Python script while running the DbUnit tests?
Coverage.py has an API that you can use to start and stop coverage measurement as you need.
I'm not sure how you are invoking your Python code from your Java code, but once in the Python, you can use coverage.py to measure Python execution, and then get reports on the results.
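A minimal sketch of that API (run_reports() is a placeholder for however your script's real entry point is invoked):

import coverage

cov = coverage.Coverage()   # on older coverage.py versions this is coverage.coverage()
cov.start()
run_reports()               # placeholder: the code you want measured
cov.stop()
cov.save()
cov.xml_report()            # or cov.report() for a console summary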
Drop me a line if you need help.
I don't know how you can check for inter-language unit test coverage. You will have to tweak the framework yourself to achieve something like this.
That said, IMHO this is a wrong approach to take for various reasons.
Being inter-language disqualifies these tests from being described as "unit" tests. They are functional tests, and thereby shouldn't care about code coverage (see @Ned's comment below).
If you must unit-test the Python code, then I suggest that you do it in Python. This will also solve your problem of checking for test coverage.
If you do want to functional-test, then it would be a good idea to keep Python code coverage checks away from Java. This would reduce the coupling between the Java and Python code. Tests are code, after all, and it is usually a good idea to reduce coupling between parts.
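For what the second point looks like in practice (assuming a standard tests/ directory that unittest can discover):

coverage run -m unittest discover tests
coverage report -m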
