I started learning Python as I developed a project about a year ago. Since then, the project has grown into a fairly large, stable, and useful tool for me. The project is laid out like so:
main.py
../functions/func1.py
../functions/func2.py
../functions/func3.py
../functions/func4.py
...
../functions/funcN.py
where the main.py file calls the rest of the functions sequentially.
The issue is that I did not write a single unit test for any of the functions. Not one.
I did not pay much attention to testing since at first I was just learning and eventually it got out of hand.
I want to correct this and add proper unit tests. The question is: which testing method should I use for my project?
I've seen many different methods described:
unittest
Doctest
pytest
nose
tox
unittest2
mock
but I've no idea if one of those is more suited to something like my project than the rest.
unittest is already in Python and the most standard choice (its newer features come from the unittest2 backport and are now part of the standard library); just start with that. A minimal example sketch follows below.
Think of nose as a set of extensions: use it when you want something not already in unittest. It's quite popular as well.
doctest puts unit tests into docstrings; I don't like it too much, but use it if you want to.
mock is a library rather than a test runner; use it when interacting with interfaces/objects is not trivial to set up directly in a test.
tox runs tests under different python environments.
As an addition, continuous integration tools like Travis/Jenkins allow you to run tox or sets of unit tests automatically. They're often used for multi-user projects, so everybody can see the test results on each commit.
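To make that concrete, here is a minimal, hypothetical unittest sketch for your layout; it assumes functions/func1.py exposes a function called run, and that name and the expected value are placeholders for illustration only:

# tests/test_func1.py -- hypothetical example, adjust names to your project
import unittest

from functions import func1  # assumes functions/ is an importable package (has an __init__.py)

class Func1Test(unittest.TestCase):
    def test_run_returns_expected_value(self):
        self.assertEqual(func1.run(), 'expected result')

if __name__ == '__main__':
    unittest.main()

You can then run the whole suite with python -m unittest discover from the project root.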
What I want
I would like to create a set of benchmarks for my Python project. I would like to see the performance of these benchmarks change as I introduce new code. I would like to do this in the same way that I test Python: by running a utility command like nosetests and getting a nicely formatted readout.
What I like about nosetests
The nosetests tool works by searching through my directory structure for any files named test_foo.py and running all functions named test_bar() contained within. It runs all of those functions and prints out whether or not they raised an exception.
I'd like something similar that searches for all files named bench_foo.py, runs all contained functions bench_bar(), and reports their runtimes.
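For illustration only, such a bench file might look like this (bench_foo.py and bench_bar are just the naming convention described above, not an existing tool's API; the bodies are placeholder workloads):

# bench_foo.py -- hypothetical example of the desired convention
import time

def bench_bar():
    sum(i * i for i in range(100000))

def bench_baz():
    time.sleep(0.01)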
Questions
Does such a tool exist?
If not what are some good starting points? Is some of the nose source appropriate for this?
nosetests can run any type of test, so you can decide whether they test functionality, input/output validity, performance, profiling, or anything else you'd like. The Python profiler is a great tool, and it comes with your Python installation.
import unittest
import cProfile

class ProfileTest(unittest.TestCase):
    def test_run_profiler(self):
        # foo, bar and baz are placeholders for your own functions and arguments
        cProfile.run('foo(bar)')
        cProfile.run('baz(bar)')
You just add a line to the test, or add a test to the test case for all the calls you want to profile, and your main source is not polluted with test code.
If you only want to time execution and not all the profiling information, timeit is another useful tool.
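For example, a quick timing sketch with timeit might look like this (mymodule, foo, and bar are placeholder names for illustration):

import timeit

# time 100 calls of foo(bar); the setup string imports the names the statement needs
duration = timeit.timeit('foo(bar)', setup='from mymodule import foo, bar', number=100)
print('100 runs took %.4f seconds' % duration)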
The wheezy documentation has a good example of how to do this with nose. The important part, if you just want the timings, is to use the options -q for a quiet run, -s for not capturing the output (so you will see the output of the report), and -m benchmark to run only the 'timing' tests.
I recommend using py.test for testing over nose. To run the example from wheezy with it, change the name of the runTest method to test_bench_run and run only this benchmark with:
py.test -qs -k test_bench benchmark_hello.py
(-q and -s have the same effect as with nose, and -k selects tests by name pattern).
If you put your benchmark tests in a separate file or directory from the normal tests, they are of course easier to select and don't need special names.
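As a rough sketch of such a timing test (the workload, the 1000 repetitions, and the 0.1-second threshold are all made up for illustration), something like this would be picked up by the -k test_bench pattern shown above:

# benchmark_hello.py -- hypothetical sketch, loosely in the spirit of the wheezy example
import timeit

def test_bench_run():
    # time 1000 runs of a placeholder workload and print the result (visible thanks to -s)
    elapsed = timeit.timeit("''.join(str(i) for i in range(100))", number=1000)
    print('1000 runs: %.4f s' % elapsed)
    assert elapsed < 0.1  # arbitrary threshold, only for illustration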
I've been seeing and reading about a lot of people using nose to run their Django tests. I haven't been able to figure out the added benefits of using Nose to run my Django tests. If someone could fill me in on what nose is and how it adds more to a Django project, it would be helpful.
I haven't been able to find a good document/article outlining these points.
Thank you
I was curious about this too, and it seems that the main advantage of django-nose (which uses the Python nose library) is test discovery.
In addition, from http://readthedocs.org/docs/nose/en/latest/testing.html
you can also write simple test functions, as well as test classes that are not
subclasses of unittest.TestCase. nose also supplies a number of
helpful functions for writing timed tests, testing for exceptions, and
other common use cases. See Writing tests and Testing tools for more.
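For instance, a plain nose-style test needs no TestCase subclass at all; a minimal sketch (the timed decorator is one of the 'timed tests' helpers mentioned above, and the one-second limit is arbitrary):

# test_simple.py -- picked up by nose's test discovery
from nose.tools import timed

def test_addition():
    # a bare function whose name starts with test_ is enough for nose
    assert 1 + 1 == 2

@timed(1.0)
def test_fast_enough():
    # fails if the body takes longer than one second
    sum(range(100000))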
From what I understand from other Python developers on freenode IRC, the Trial test runner from the Twisted framework has similar features to nose.
I am still not entirely convinced about using django-nose for Django development, but I am giving it a shot and will report back if I find out more!
There are a lot more features overall, but I think one major reason people use nose/django-nose is that it allows you to very easily do code coverage.
python manage.py test myapp --with-coverage --cover-package=myapp
As a long-time Python programmer, I wonder if a central aspect of Python culture has eluded me for a long time: what do we do instead of Makefiles?
Most Ruby projects I've seen (not just Rails) use Rake; shortly after Node.js became popular, there was Cake. In many other (compiled and non-compiled) languages there are classic Makefiles.
But in Python, no one seems to need such infrastructure. I randomly picked Python projects on GitHub, and they had no automation besides the installation provided by setup.py.
What's the reason behind this?
Is there nothing to automate? Do most programmers prefer to run style checks, tests, etc. manually?
Some examples:
dependencies sets up a virtualenv and installs the dependencies
check calls the pep8 and pylint command-line tools.
the test task depends on dependencies; it enables the virtualenv, starts selenium-server for the integration tests, and calls nosetests
the coffeescript task compiles all CoffeeScript files to minified JavaScript
the runserver task depends on dependencies and coffeescript
the deploy task depends on check and test and deploys the project.
the docs task calls Sphinx with the appropriate arguments
Some of them are just one- or two-liners, but IMHO, they add up. Thanks to the Makefile, I don't have to remember them.
To clarify: I'm not looking for a Python equivalent of Rake; I'm happy with Paver. I'm looking for the reasons.
Actually, automation is useful to Python developers too!
Invoke is probably the closest tool to what you have in mind, for automation of common repetitive Python tasks: https://github.com/pyinvoke/invoke
With invoke, you can create a tasks.py like this one (borrowed from the invoke docs)
from invoke import run, task

@task
def clean(docs=False, bytecode=False, extra=''):
    patterns = ['build']
    if docs:
        patterns.append('docs/_build')
    if bytecode:
        patterns.append('**/*.pyc')
    if extra:
        patterns.append(extra)
    for pattern in patterns:
        run("rm -rf %s" % pattern)

@task
def build(docs=False):
    run("python setup.py build")
    if docs:
        run("sphinx-build docs docs/_build")
You can then run the tasks at the command line, for example:
$ invoke clean
$ invoke build --docs
Another option is to simply use a Makefile. For example, a Python project's Makefile could look like this:
docs:
	$(MAKE) -C docs clean
	$(MAKE) -C docs html
	open docs/_build/html/index.html

# a minimal clean target (assumed here), since release and sdist depend on it
clean:
	rm -rf build dist *.egg-info

release: clean
	python setup.py sdist upload

sdist: clean
	python setup.py sdist
	ls -l dist
Setuptools can automate a lot of things, and for things that aren't built-in, it's easily extensible.
To run unittests, you can use the setup.py test command after having added a test_suite argument to the setup() call. (documentation)
Dependencies (even if not available on PyPI) can be handled by adding a install_requires/extras_require/dependency_links argument to the setup() call. (documentation)
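To make that concrete, here is a hedged sketch of such a setup() call (the project name, version, dependencies, and test package are invented for illustration):

# setup.py -- hypothetical example
from setuptools import setup, find_packages

setup(
    name='myproject',
    version='0.1',
    packages=find_packages(),
    install_requires=['requests>=2.0'],       # regular dependencies
    extras_require={'docs': ['sphinx']},      # optional extras
    test_suite='tests',                       # enables `python setup.py test`
)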
To create a .deb package, you can use the stdeb module.
For everything else, you can add custom setup.py commands.
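For example, a minimal custom command could look like this sketch (the command name and what it does are made up; you would run it with python setup.py hello):

# in setup.py -- hypothetical custom command
from setuptools import Command, setup

class HelloCommand(Command):
    description = "print a greeting (placeholder for a real task)"
    user_options = []

    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def run(self):
        print("hello from a custom setup.py command")

setup(
    name='myproject',
    cmdclass={'hello': HelloCommand},
)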
But I agree with S.Lott: most of the tasks you'd want to automate (except maybe dependency handling, which is the only one I find really useful) are tasks you don't run every day, so there wouldn't be any real productivity improvement from automating them.
There are a number of options for automation in Python. I don't think there is a culture against automation; there is just not one dominant way of doing it. The common denominator is distutils.
The one which is closest to your description is buildout. This is mostly used in the Zope/Plone world.
I myself use a combination of the following: Distribute, pip, and Fabric. I am mostly developing using Django, which has manage.py for automation commands.
Packaging is also being actively worked on for Python 3.3.
Any decent test tool has a way of running the entire suite in a single command, and nothing is stopping you from using rake, make, or anything else, really.
There is little reason to invent a new way of doing things when existing methods work perfectly well - why re-invent something just because YOU didn't invent it? (NIH).
The make utility is an optimization tool which reduces the time spent building a software image. The reduction in time is obtained when all of the intermediate materials from a previous build are still available, and only a small change has been made to the inputs (such as source code). In this situation, make is able to perform an "incremental build": rebuild only a subset of the intermediate pieces that are impacted by the change to the inputs.
When a complete build takes place, all that make effectively does is to execute a set of scripting steps. These same steps could just be deposited into a flat script. The -n option of make will in fact print these steps, which makes this possible.
A Makefile isn't "automation"; it's "automation with a view toward optimized incremental rebuilds." Anything scripted with any scripting tool is automation.
So, why would Python projects eschew tools like make? Probably because Python projects don't struggle with long build times that they are eager to optimize. And, also, the compilation of a .py to a .pyc file does not have the same web of dependencies as a .c to a .o.
A C source file can #include hundreds of dependent files; a one-character change in any one of these files can mean that the source file must be recompiled. A properly written Makefile will detect when that is or is not the case.
A big C or C++ project without an incremental build system would mean that a developer has to wait hours for an executable image to pop out for testing. Fast, incremental builds are essential.
In the case of Python, probably all you have to worry about is when a .py file is newer than its corresponding .pyc, which can be handled by simple scripting: loop over all the files, and recompile anything newer than its byte code. Moreover, compilation is optional in the first place!
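If you did want to script that loop, the standard library already covers it; a minimal sketch using the built-in compileall module:

# recompile any .py file whose byte code is missing or out of date
import compileall

# with the default force=False, compileall skips files whose byte code is already up to date
compileall.compile_dir('.', quiet=True)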
So the reason Python projects tend not to use make is that their need to perform incremental rebuild optimization is low, and they use other tools for automation; tools that are more familiar to Python programmers, like Python itself.
The original PEP where this was raised can be found here. Distutils has become the standard method for distributing and installing Python modules.
Why? It just happens that python is a wonderful language to perform the installation of Python modules with.
Here are few examples of makefile usage with python:
https://blog.horejsek.com/makefile-with-python/
https://krzysztofzuraw.com/blog/2016/makefiles-in-python-projects.html
I think most people are simply not aware of the "Makefile for Python" case. It could be useful, but the "sexiness ratio" is too small for it to spread rapidly (just my personal point of view).
Is there nothing to automate?
Not really. All but two of the examples are one-line commands.
tl;dr Very little of this is really interesting or complex. Very little of this seems to benefit from "automation".
Due to documentation, I don't have to remember the commands to do this.
Do most programmers prefer to run stylechecks, tests, etc. manually?
Yes.
generating documentation,
the docs task calls sphinx with the appropriate arguments
It's one line of code. Automation doesn't help much.
sphinx-build -b html source build/html. That's a script. Written in Python.
We do this rarely. A few times a week. After "significant" changes.
running stylechecks (Pylint, Pyflakes and the pep8-cmdtool).
check calls the pep8 and pylint command-line tools
We don't do this. We use unit testing instead of pylint.
You could automate that three-step process.
But I can see how SCons or make might help someone here.
tests
There might be space for "automation" here. It's two lines: the non-Django unit tests (python test/main.py) and the Django tests (manage.py test). Automation could be applied to run both lines.
We do this dozens of times each day. We never knew we needed "automation".
dependencies sets up a virtualenv and installs the dependencies
Done so rarely that a simple list of steps is all that we've ever needed. We track our dependencies very, very carefully, so there are never any surprises.
We don't do this.
the test task depends on dependencies; it enables the virtualenv, starts selenium-server for the integration tests, and calls nosetests
The start server & run nosetest as a two-step "automation" makes some sense. It saves you from entering the two shell commands to run both steps.
the coffeescript task compiles all CoffeeScript files to minified JavaScript
This is something that's very rare for us. I suppose it's a good example of something to be automated. Automating the one-line script could be helpful.
I can see how SCons or make might help someone here.
the runserver task depends on dependencies and coffeescript
Except that the dependencies change so rarely that this seems like overkill. I suppose it can be a good idea if you're not tracking dependencies well in the first place.
the deploy task depends on check and test and deploys the project.
It's an svn co and python setup.py install on the server, followed by a bunch of customer-specific copies from the subversion area to the customer /www area. That's a script. Written in Python.
It's not a general make or SCons kind of thing. It has only one actor (a sysadmin) and one use case. We wouldn't ever mingle deployment with other development, QA or test tasks.
I have a python script which generates some reports based on a DB.
I am testing the script using Java DbUnit tests which call the Python script.
My question is: how can I verify code coverage for the Python script while I am running the DbUnit tests?
Coverage.py has an API that you can use to start and stop coverage measurement as you need.
I'm not sure how you are invoking your Python code from your Java code, but once in the Python, you can use coverage.py to measure Python execution, and then get reports on the results.
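A minimal sketch of that API, wrapped around whatever your script actually does (generate_reports is a placeholder name for its real entry point):

# inside the Python script, wrap the work you want measured
import coverage

cov = coverage.Coverage()   # in older versions of coverage.py this class is coverage.coverage
cov.start()

generate_reports()          # placeholder for the script's real work

cov.stop()
cov.save()
cov.html_report(directory='covhtml')   # or cov.report() for a console summary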
Drop me a line if you need help.
I don't know how you can check for inter-language unit test coverage. You will have to tweak the framework yourself to achieve something like this.
That said, IMHO this is a wrong approach to take for various reasons.
Being inter-language disqualifies the tests from being described as "unit" tests; these are functional tests, and thereby shouldn't care about code coverage (see @Ned's comment below).
If you must (unit) test the Python code then I suggest that you do it using Python. This will also solve your problem of checking for test coverage.
If you do want to write functional tests, then it would be a good idea to keep the Python code coverage checks away from Java. This would reduce the coupling between the Java and Python code. Tests are code, after all, and it is usually a good idea to reduce coupling between parts.