I'm using unittest and it prints ".", "E" or "F" for "ok", "error" and "fail" after each test it runs. How do I switch this off? I'm using Python 2.7, and these prints come from the runner class, which is built in.
It sounds very tough to override the classes because it's all nested.
edit:
I only want to remove the characters ., E and F so that they don't appear mixed in with other log output from my tests.
The output of unittest is written to the standard error stream, which you can redirect somewhere else. On a *nix box this would look like:
python -m unittest some_module 2> /dev/null
On Windows, it would look like this (thanks Karl Knechtel):
python -m unittest some_module 2> NUL
If you run the tests from Python, you can simply replace the stderr stream like this:
import sys, os
sys.stderr = open(os.devnull, 'w')
... # do your testing here
sys.stderr = sys.__stderr__ # if you still need the stderr stream
Since you just want to turn off the updates for the ., F, E symbols, you could also create your own TestResult class by overriding the default one. In my case (Python 2.6) this would look like this:
import unittest

class MyTestResult(unittest._TextTestResult):
    def addSuccess(self, test):
        # call the base TestResult method to keep the bookkeeping,
        # but skip _TextTestResult's "."/"ok" output
        unittest.TestResult.addSuccess(self, test)

    def addError(self, test, err):
        unittest.TestResult.addError(self, test, err)

    def addFailure(self, test, err):
        unittest.TestResult.addFailure(self, test, err)
This effectively turns off the printing of the characters while maintaining the default functionality.
Now we also need a new TestRunner class and override the _makeResult method:
class MyTestRunner(unittest.TextTestRunner):
    def _makeResult(self):
        return MyTestResult(self.stream, self.descriptions, self.verbosity)
With this runner you can now enjoy log-free testing.
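As an illustration (my sketch, not part of the original answer), the runner can be wired up like this, assuming the tests live in a hypothetical module called my_tests:
suite = unittest.TestLoader().loadTestsFromName('my_tests')  # hypothetical module name
MyTestRunner(verbosity=1).run(suite)  # summary still printed, but no . / E / F characters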
Just a note: this is not possible from the command line, unfortunately.
A bit of a late response, but someone may find it useful.
You can turn ., E and F off by setting the verbosity level to 0:
testRunner = unittest.TextTestRunner(verbosity=0)
You will still get the final summary and any errors/exceptions at the end of the test run on stderr.
Tested in Python 2.4 and 2.7.
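For completeness, a small sketch of wiring this up with test discovery (discovery itself needs Python 2.7+; the start directory '.' is an assumption):
import unittest

suite = unittest.TestLoader().discover('.')
unittest.TextTestRunner(verbosity=0).run(suite)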
Depending on the unittest framework you're using (standard, nose, ...), you have multiple ways to decrease the verbosity:
python -m unittest -h
...
-q, --quiet Minimal output
...
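For example, with the standard runner that would be:
python -m unittest -q some_module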
Is it possible to change the capturing behavior in pytest for just one test — i.e., within the test script?
I have a bunch of tests that I use with pytest. There are several useful quantities that I like to print during some tests, so I use the -s flag to show them in the pytest output. But I also test for warnings, which also get printed and look ugly and distracting. I've tried using warnings.simplefilter as usual to just not show the warnings, but that doesn't seem to do anything. (Maybe pytest overrides it?) Anyway, I'd like some way to quiet the warnings but still check that they are raised, while also being able to see the captured output from my other print statements. Is there any way to do this, e.g. by changing the capturing for just one test function?
With pytest 3.x there is an easy way to temporarily disable capturing (see the section about capsys.disabled()).
There's also the pytest-warnings plugin, which shows the warnings in a dedicated report section.
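For illustration, a rough sketch of what the capsys.disabled() approach could look like (the test name and printed value are made up):
import warnings

import pytest

def test_quiet_warning_with_visible_print(capsys):
    # Temporarily disable capturing so this print reaches the terminal
    # even without the -s flag.
    with capsys.disabled():
        print("useful quantity: 42")  # hypothetical debug output
    # The warning is still asserted; pytest.warns records it rather than
    # letting it spill into the output.
    with pytest.warns(UserWarning):
        warnings.warn("Warning!", UserWarning)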
I've done it by manually redirecting stderr:
import os
import sys
import warnings

import pytest

def test():
    stderr = sys.stderr
    sys.stderr = open(os.devnull, 'w')
    with pytest.warns(UserWarning):
        warnings.warn("Warning!", UserWarning)
    sys.stderr = stderr
For good measure, I could similarly redirect stdout to devnull, if other print statements are not wanted.
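A slightly more defensive variant of the same idea (my sketch, not from the original answer) restores stderr even if the assertion fails:
import os
import sys
import warnings

import pytest

def test_warns_quietly():
    saved_stderr = sys.stderr
    sys.stderr = open(os.devnull, 'w')
    try:
        with pytest.warns(UserWarning):
            warnings.warn("Warning!", UserWarning)
    finally:
        sys.stderr.close()   # close the devnull handle
        sys.stderr = saved_stderr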
I have a hundred or so unit tests I'm running with nose. When I change something in my models obviously I get fails, with some errors mixed in. Is there an easy way to tell nose to only log the errors? Then I don't have to go through pages of fails to look for one error log.
nose provides tools for testing exceptions (like unittest does). Try this example (and read about the other tools at Nose Testing Tools):
from nose.tools import *

l = []
d = dict()

@raises(Exception)
def test_Exception1():
    '''this test should pass'''
    l.pop()

@raises(KeyError)
def test_Exception2():
    '''this test should pass'''
    d[1]
An alternative is to redirect stderr to stdout and use grep (adjust the number of context lines, 15 in this example, to your liking):
nosetests tests.py 2>&1 | grep "ERROR" -A 15
Another alternative is to use --pdb-errors to stop on every error and open the debugger.
It's not what you asked, but it's what I ended up using.
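With the test file from the grep example above, that would be something like:
nosetests --pdb-errors tests.py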
I have some tests written using pytest and fixtures, e.g.:
import os
import shutil
import tempfile

import pytest

class TestThing:
    @pytest.fixture()
    def temp_dir(self, request):
        my_temp_dir = tempfile.mkdtemp()

        def fin():
            shutil.rmtree(my_temp_dir)
        request.addfinalizer(fin)
        return my_temp_dir

    def test_something(self, temp_dir):
        with open(os.path.join(temp_dir, 'test.txt'), 'w') as f:
            f.write('test')
This works fine when the tests are invoked from the shell, e.g.
$ py.test
but I don't know how to run them from within a python/ipython session; trying e.g.
tt = TestThing()
tt.test_something(tt.temp_dir())
fails because temp_dir requires a request object to be passed in. So, how does one invoke a fixture with a request object injected?
Yes. You don't have to manually assemble any test fixtures or anything like that. Everything runs just like calling pytest in the project directory.
Method 1:
This is the best method because it gives you access to the debugger if your test fails.
In the IPython shell, use:
ipython> run -m pytest prj/
This will run all your tests in the prj/tests directory.
This gives you access to the debugger and lets you set breakpoints if you have an import ipdb; ipdb.set_trace() in your program (https://docs.pytest.org/en/latest/usage.html#setting-breakpoints).
Method 2:
Use !pytest while in the test directory. This won't give you access to the debugger. However, if you use
ipython> !pytest --pdb
and a test fails, it will drop you into the debugger (subshell), so you can run your post-mortem analysis (https://docs.pytest.org/en/latest/usage.html#dropping-to-pdb-python-debugger-on-failures).
Using these methods you can even run individual modules/test_functions/TestClasses in IPython (https://docs.pytest.org/en/latest/usage.html#specifying-tests-selecting-tests):
ipython> run -m pytest prj/tests/test_module1.py::TestClass1::test_function1
You can bypass the pytest.fixture decorator and directly call the wrapped function:
tmp = tt.temp_dir.__pytest_wrapped__.obj(request=...)
Accessing internals is bad, but when you need it...
The best method I have, which is far from ideal, is to just %run the test file, then manually assemble the fixtures, and then simply call the tests. The problem with this is tracking down the modules where the default fixtures are defined and then calling them in their order of dependencies.
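For the simple fixture from the question, that manual assembly could look roughly like this (a sketch; the request-based finalizer is replaced by manual cleanup):
import shutil
import tempfile

my_temp_dir = tempfile.mkdtemp()   # stand-in for the temp_dir fixture value
tt = TestThing()
tt.test_something(my_temp_dir)     # pass the fixture value in directly
shutil.rmtree(my_temp_dir)         # do by hand what the finalizer would do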
You can use two cells for this.
First:
def test_something():
    assert True
Second:
from tempfile import mktemp

test_file = mktemp('.py', 'test_')
open(test_file, 'w').write(_i)  # write last cell input (text mode so a str can be written)
!pytest $test_file
You can also do this in one cell, but you won't have code highlighting:
from tempfile import mktemp

test_code = """
def test_something():
    assert True
"""
test_file = mktemp('.py', 'test_')
open(test_file, 'w').write(test_code)
!pytest $test_file
The simple answer is that you don't want to run py.test interactively from Python. Most people set up some integration with their text editor or IDE to run py.test and parse its output. But really it's a command-line tool, and that is how it should be used.
As a side note, you may want to check out the built-in tmpdir fixture: http://pytest.org/latest/tmpdir.html It seems like you're re-inventing it.
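A minimal sketch of the test from the question rewritten with the built-in tmpdir fixture (tmpdir is a py.path.local object that pytest creates and cleans up for you):
import os

class TestThing:
    def test_something(self, tmpdir):
        # no hand-rolled fixture needed; pytest manages the directory
        with open(os.path.join(str(tmpdir), 'test.txt'), 'w') as f:
            f.write('test')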
I'm working on a large codebase that uses print statements for logging rather than Python logging. I'm wondering if there is a recommended way of converting all these print statements to calls to logging.info? Many of these prints span several lines, so any solution needs to handle those cases and would ideally maintain formatting.
I've looked into python rope, but it doesn't seem to have the facility to convert a statement like print into a function call.
You could use 2to3 and only apply the fix for print statement -> print function.
2to3 --fix=print [yourfiles] # just displays the diff on stdout
2to3 --fix=print [yourfiles] --write # also saves the changes to disk
This should automatically handle all those strange cases, and then converting print functions to logging functions should be a straightforward find-and-replace with, e.g., sed.
If you don't have the shortcut for the 2to3 script for some reason, run lib2to3 as a module instead:
python -m lib2to3 --fix=print .
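For the sed step mentioned above, a deliberately crude sketch (yourfile.py is a placeholder; this assumes every print(...) should become logging.info(...) and that logging is already imported, and it will not catch multi-line calls or prints inside strings):
sed -i 's/print(/logging.info(/g' yourfile.py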
Just add these few lines before your code starts and it will log everything it prints. I think you are looking for something like this:
import logging
import sys

class writer:
    def __init__(self, *writers):
        self.writers = writers

    def write(self, text):
        logging.warning(text)

saved = sys.stdout
sys.stdout = writer(sys.stdout)
print "There you go."
print "There you go2."
I would like to write a unit test for a (rather complex) Bash completion script, preferably with Python - just something that gets the values of a Bash completion programmatically.
The test should look like this:
def test_completion():
    # trigger_completion should return what a user should get on triggering
    # Bash completion like this: 'pbt createkvm<TAB>'
    assert trigger_completion('pbt createkvm') == "module1 module2 module3"
How can I simulate Bash completion programmatically to check the completion values inside a testsuite for my tool?
Say you have a bash-completion script in a file called asdf-completion, containing:
_asdf() {
    COMPREPLY=()
    local cur prev
    cur=$(_get_cword)
    COMPREPLY=( $( compgen -W "one two three four five six" -- "$cur") )
    return 0
}
complete -F _asdf asdf
This uses the shell function _asdf to provide completions for the fictional asdf command. If we set the right environment variables (from the bash man page), then we can get the same result, which is the placement of the potential expansions into the COMPREPLY variable. Here's an example of doing that in a unittest:
import subprocess
import unittest

class BashTestCase(unittest.TestCase):
    def test_complete(self):
        completion_file = "asdf-completion"
        partial_word = "f"
        cmd = ["asdf", "other", "arguments", partial_word]
        cmdline = ' '.join(cmd)
        out = subprocess.Popen(
            ['bash', '-i', '-c',
             r'source {compfile}; COMP_LINE="{cmdline}" COMP_WORDS=({cmdline}) COMP_CWORD={cword} COMP_POINT={cmdlen} $(complete -p {cmd} | sed "s/.*-F \\([^ ]*\\) .*/\\1/") && echo ${{COMPREPLY[*]}}'.format(
                 compfile=completion_file, cmdline=cmdline, cmdlen=len(cmdline), cmd=cmd[0], cword=cmd.index(partial_word))],
            stdout=subprocess.PIPE)
        stdout, stderr = out.communicate()
        self.assertEqual(stdout, "four five\n")

if __name__ == '__main__':
    unittest.main()
This should work for any completions that use -F, but may work for others as well.
je4d's comment to use expect is a good one for a more complete test.
bonsaiviking's solution almost worked for me. I had to change the bash script string: I added an extra ';' separator to the executed bash script, otherwise the execution wouldn't work on Mac OS X. Not really sure why.
I also generalized the initialization of the various COMP_ arguments a bit to handle the various cases I ended up with.
The final solution is a helper class to test bash completion from Python, so that the above test can be written as:
import unittest

from completion import BashCompletionTest

class AdsfTestCase(BashCompletionTest):
    def test_orig(self):
        self.run_complete("other arguments f", "four five")

    def run_complete(self, command, expected):
        completion_file = "adsf-completion"
        program = "asdf"
        super(AdsfTestCase, self).run_complete(completion_file, program, command, expected)

if __name__ == '__main__':
    unittest.main()
The completion lib is located under https://github.com/lacostej/unity3d-bash-completion/blob/master/lib/completion.py