I use unittest (actually unittest2) for Python testing, together with Python Mock for mocking objects and nose to run all tests in a single pass.
I miss being able to tell what is working and what's wrong at a glance from the green/red bars. Is there a way to get colored output from unittest?
(Changing the test suite at this point is not an option, and I actually like unittest.)
Using a method very similar to robert's answer, I have (today!) released a package that enables colour output in unittest test results. I have called it colour-runner.
To install it, run:
pip install colour-runner
Then, where you were using unittest.TextTestRunner, use colour_runner.runner.ColourTextTestRunner instead.
See how it looks with verbosity=1 and with verbosity=2.
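A minimal sketch of how it slots into an existing script (the TestCase here is just a placeholder; the import path is the one given above):

import unittest
from colour_runner.runner import ColourTextTestRunner

class SmokeTest(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

if __name__ == '__main__':
    suite = unittest.TestLoader().loadTestsFromTestCase(SmokeTest)
    ColourTextTestRunner(verbosity=2).run(suite)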
I'm having good success with nosetests and rednose. It's still maintained at the time of writing this.
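If memory serves, enabling it looks roughly like this (check the rednose page for the exact flag name):

pip install nose rednose
nosetests --rednose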
In Python 2.x you could try pyrg; it does not work in Python 3, though.
Make a class that inherits from unittest.TestResult (say, MyResults) and implements a bunch of methods. Then make a class that inherits from unittest.TextTestRunner (say, MyRunner) and override _makeResult() to return an instance of MyResults.
Then, construct a test suite (which you've probably already got working), and call MyRunner().run(suite).
You can put whatever behavior you like, including colors, into MyResults.
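A rough sketch of that idea using plain ANSI escape codes (Python 3 syntax; the formatting and colour choices are purely illustrative):

import sys
import unittest

GREEN, RED, RESET = '\033[92m', '\033[91m', '\033[0m'

class MyResults(unittest.TestResult):
    # Print one coloured line per test as it finishes.
    def addSuccess(self, test):
        super().addSuccess(test)
        sys.stdout.write('%sok%s    %s\n' % (GREEN, RESET, test))

    def addFailure(self, test, err):
        super().addFailure(test, err)
        sys.stdout.write('%sFAIL%s  %s\n' % (RED, RESET, test))

    def addError(self, test, err):
        super().addError(test, err)
        sys.stdout.write('%sERROR%s %s\n' % (RED, RESET, test))

class MyRunner(unittest.TextTestRunner):
    def _makeResult(self):
        return MyResults()

if __name__ == '__main__':
    suite = unittest.defaultTestLoader.discover('.')
    MyRunner().run(suite)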
If you are currently running your tests this way:
python -m unittest test_my.py
Change it to:
pytest test_my.py
And you get colors for free
pytest can do this with no changes needed to your unittest tests.
Just install pytest:
pip install --user pytest
And run the tests to see the color!
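For example, an existing unittest-style file needs no changes at all (test_my.py and its contents are just a stand-in):

# test_my.py
import unittest

class TestMaths(unittest.TestCase):
    def test_add(self):
        self.assertEqual(2 + 2, 4)

    def test_broken(self):
        self.assertEqual(2 + 2, 5)   # shows up red in the pytest output

Running pytest test_my.py (or just pytest, letting it discover the tests) prints green dots for passes and red Fs for failures; add -v for one coloured line per test.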
If you can change just the import line in your tests, you could use redgreenunittest. It's a clone I made of unittest, but it has colorized output.
If you want to use it without updating any of the meat of your code, you can just use it like so:
import redgreenunittest as unittest
It's not a clone of unittest2, so it wouldn't work out-of-the-box with Andrea's code, but its source is right there, so a unittest2 fork of redgreenunittest wouldn't be out of the question.
Also, any "you're doing it wrong" comments are welcome, so long as they contain some reasoning. I'd love to do it right instead.
I've also found another colouring plugin for nose: YANC at https://pypi.python.org/pypi/yanc
Works for me with Python 3.5 and nose 1.3.7 (I couldn't get any of the other options for nose listed above to work)
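For reference, usage is along these lines (check the plugin page for the exact flag name):

pip install yanc
nosetests --with-yanc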
Try the rudolf plugin for nosetests.
In my CI/CD environment I have multiple projects that use mostly the same tests, with a bit of variation. Since all of them are mostly the same and the different projects/builds just use them slightly differently, I am looking for a way (if there is one) to package the tests themselves and pass them around between projects. EDIT: Packaging the tested code is not possible.
The ultimate usage will be something like this:
pip install <test-package>
pytest -m <some-mark-depending-on-build/project> --<additional-variables>
Is there a way to do this?
EDIT: If there is, please point me toward a solution.
Thanks in advance.
Keeping this here for reference.
The way to do this is to create a test package that can be run as a Python module, from main.py.
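For illustration, a minimal sketch of such a package; all the names here (shared_tests, the smoke marker, the __main__.py entrypoint) are illustrative:

# Layout of the installable test package:
#
# shared_tests/
#     __init__.py
#     __main__.py     <- allows:  python -m shared_tests -m smoke --your-extra-options
#     test_api.py     <- tests marked e.g. @pytest.mark.smoke per project/build

# shared_tests/__main__.py
import os
import sys
import pytest

if __name__ == '__main__':
    here = os.path.dirname(os.path.abspath(__file__))
    # Forward any extra arguments (markers, custom options) straight to pytest.
    sys.exit(pytest.main([here] + sys.argv[1:]))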
After researching and some testing, I've concluded that in my case this would create more code to maintain than it would let me reuse.
I'm using nose to test a REST API written with Flask, and I'm also using script-manager. Every time I run manage test, it runs through all the tests. This is OK for CI, but not ideal when one just wants to fix a single thing. In Go there is a way to run a subset of tests by providing a regexp. Is there something similar in nose?
You can run
nosetests -m REGEX
as specified in nose's options page.
If you don't need a full regex you can also specify a path with globs after nosetests, e.g.:
nosetests tests/my_cool_subset*
Is it possible to use pytest with Django without using the pytest-django 3rd-party app?
I had tried to set this up, but kept running into random errors, like pytest couldn't find the DJANGO_SETTINGS_MODULE. Then I fixed the path, but the normal python manage.py runserver then couldn't find the DJANGO_SETTINGS_MODULE. I'm running:
Pytest 2.5.4
Python 3.4.0
Django 1.6.2
If it is possible, would you be able to provide a setup example of where to put the tests/ directory, etc... within the project so Pytest works?
Thanks
Hmm, py.test 2.5.4 does not exist afaik.
Anyway, assuming you mean to ask whether it is possible to avoid the pytest-django plugin to test Django using py.test: the short answer is no.
The long answer is yes, but it is extremely difficult to get it all to work and you will basically write at least a minimal version of pytest-django in the conftest.py file. The pytest-django plugin was created specifically to work around all the weirdness which Django does with much global state and other hidden magic.
OTOH, looking at the pytest-django source would probably help you kick-start such an effort. However, you might consider thinking about what it is about pytest-django that you don't like, and maybe file an enhancement request to improve it.
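If you do want to experiment anyway, the core of such a conftest.py is roughly the following ('mysite.settings' is a placeholder for your own settings module):

# conftest.py in the project root
import os

def pytest_configure(config):
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')
    # On Django >= 1.7 you would also call django.setup() here; test database
    # creation, client fixtures etc. all remain your problem without the plugin.

With that in place, plain py.test can at least import and run simple Django test cases, but everything pytest-django adds on top is still missing.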
What I want
I would like to create a set of benchmarks for my Python project. I would like to see the performance of these benchmarks change as I introduce new code. I would like to do this in the same way that I test Python: by running a utility command like nosetests and getting a nicely formatted readout.
What I like about nosetests
The nosetests tool works by searching through my directory structure for any files named like test_foo.py and running every function test_bar() contained within, printing out whether or not each one raised an exception.
I'd like something similar that searched for all files bench_foo.py and ran all contained functions bench_bar() and reported their runtimes.
Questions
Does such a tool exist?
If not what are some good starting points? Is some of the nose source appropriate for this?
nosetests can run any type of test, so you can decide if they test functionality, input/output validity etc., or performance or profiling (or anything else you'd like). The Python Profiler is a great tool, and it comes with your Python installation.
import unittest
import cProfile

class ProfileTest(unittest.TestCase):
    def test_run_profiler(self):
        # foo, baz and bar stand in for whatever calls you want to profile
        cProfile.run('foo(bar)')
        cProfile.run('baz(bar)')
You just add a line to the test, or add a test to the test case for all the calls you want to profile, and your main source is not polluted with test code.
If you only want to time execution and not all the profiling information, timeit is another useful tool.
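For example (mymodule, foo and bar are placeholders for whatever you want to time):

import timeit

# Time 1000 calls of foo(bar); the setup string imports them into timeit's namespace.
elapsed = timeit.timeit('foo(bar)', setup='from mymodule import foo, bar', number=1000)
print('%.4fs for 1000 calls' % elapsed)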
The wheezy documentation has a good example of how to do this with nose. The important part, if you just want the timings, is to use the options -q for a quiet run, -s for not capturing the output (so you will see the output of the report) and -m benchmark to only run the 'timing' tests.
I recommend using py.test over nose for testing. To run the example from wheezy with it, change the name of the runTest method to test_bench_run and run only this benchmark with:
py.test -qs -k test_bench benchmark_hello.py
(-q and -s have the same effect as with nose, and -k selects tests by name pattern).
If you put your benchmark tests in a separate file or directory from the normal tests, they are of course easier to select and don't need special names.
I have used the 2to3 utility to convert code from the command line. What I would like to do is run it basically as a unittest, even if it tests the whole file rather than its parts (functions, methods, ...) as would be normal for a unittest.
It does not need to be a unittest, and I don't want to automatically convert the files; I just want to monitor the Python 3 compliance of the files in a unittest-like manner. I can't seem to find any documentation or examples for this.
An example and/or documentation would be great.
Simply use the -3 option with Python 2.6+ to be informed about Python 3 compliance.
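For example (run_tests.py stands in for whatever script drives your suite):

python -3 run_tests.py

Add -W error if you want the warnings to fail the run instead of just being printed.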
If you are trying to verify the code will work in Python 3.x, I would suggest a script that copies the source files to a new directory, runs 2to3 on them, then copies the unit tests to the directory and runs them.
This may seem slightly inelegant, but is consistent with the spirit of unit testing. You are making a series of assertions that you believe ought to be true about the external behavior of the code, regardless of implementation. If the converted code passes your unit tests, you can consider your code to support Python 3.
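A rough sketch of such a script (the paths are illustrative):

rm -rf build_py3
cp -r src build_py3
2to3 --write --nobackups build_py3       # convert the copied sources in place
cp -r tests build_py3/tests              # convert these too if they aren't already Python 3 friendly
cd build_py3 && python3 -m unittest discover tests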