Maybe I am going about this the wrong way, because my search turned up nothing useful.
Adding the -b (-bb) option when calling the python interpreter will warn (raise) whenever an implicit bytes to string or bytes to int conversion takes place:
Issue a warning when comparing bytes or bytearray with str or bytes with int. Issue an error when the option is given twice (-bb).
I would like to write a unit test around this using pytest. I.e., I'd like to do
# foo.py
import pytest

def test_foo():
    with pytest.raises(BytesWarning):
        print(b"This is a bytes string.")
When calling the above as pytest foo.py the test will fail (no BytesWarning raised). When I call the above test as python -bb -m pytest foo.py it will pass, because BytesWarning is raised as an exception. So far so good.
What I can't work out (nor can I find anything useful online) is if/how it is possible to configure pytest to do this automatically, so that I can run pytest --some_arg foo.py and it will do the intended thing. Is this possible?
When you execute pytest foo.py, your shell will look for the pytest program. You can find out which one will be executed with the command which pytest. For me it is /home/stack_overflow/venv/bin/pytest, which looks like this:
#!/home/stack_overflow/venv/bin/python
# -*- coding: utf-8 -*-
import re
import sys
from pytest import console_main
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(console_main())
It just calls console_main() from the pytest library.
If you add a print(sys.argv), you can see how it was called. For me, it is ['/home/stack_overflow/venv/bin/pytest', 'so70782647.py']. It matches the path from the first line of the file, which is called a shebang: it instructs the system how your program should be invoked. Here, it says to run the file using the python from my venv.
If I modify the line:
#!/home/stack_overflow/venv/bin/python -bb
# ^^^^
now your test passes.
That may be a solution, although not a very elegant one.
You may notice that, even now, the -bb does not appear when printing sys.argv. The reason is explained in the doc you linked yourself:
An interface option terminates the list of options consumed by the interpreter, all consecutive arguments will end up in sys.argv [...]
So it is not possible to check if it was activated using sys.argv.
I found a question about how to retrieve them from the interpreter's internal state, in case you are interested in checking that as a precondition for your test: Retrieve the command line arguments of the Python interpreter. Checking sys.flags.bytes_warning is simpler in our case, though ({0: None, 1: '-b', 2: '-bb'}).
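For instance, a precondition check could be a skipif marker (a minimal sketch; the test body is the one from the question):

import sys
import pytest

# sys.flags.bytes_warning: 0 = default, 1 = -b, 2 = -bb
@pytest.mark.skipif(
    sys.flags.bytes_warning < 2,
    reason="re-run as 'python -bb -m pytest' so BytesWarning is raised as an error",
)
def test_foo():
    with pytest.raises(BytesWarning):
        print(b"This is a bytes string.")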
Continuing with your question: how do you run pytest with the -bb interpreter option?
You already have a solution : python -bb -m pytest foo.py.
If you prefer, it is possible to create a file pytest whose content is just python -bb -m pytest "$@" (don't forget to make it executable). Run it with ./pytest foo.py. Or make it an alias.
You can't tell pytest which Python interpreter options you want, because pytest would already be running in the interpreter itself, which would have already handled its own options.
As far as I know, these options are really not easy to change once the interpreter is running. I guess that if you could write into the PyConfig C struct, it would have the desired effect. For example, see the function bytes_richcompare, which does:
if (_Py_GetConfig()->bytes_warning && (op == Py_EQ || op == Py_NE)) {
    if (PyUnicode_Check(a) || PyUnicode_Check(b)) {
        if (PyErr_WarnEx(PyExc_BytesWarning,
                         "Comparison between bytes and string", 1))
            return NULL;
    }
}
then you could activate it from within your test, as in:
def test_foo():
    if sys.flags.bytes_warning < 2:
        # change PyConfig.bytes_warning here
        assert sys.flags.bytes_warning == 2
    with pytest.raises(BytesWarning):
        print(b"This is a bytes string.")
    # change back the PyConfig.bytes_warning
But I think how to do that should be another question.
As a workaround, you can use pytest.warns like so:
def test_foo():
    with pytest.warns(BytesWarning):
        print(b"This is a bytes string.")
and it only requires the -b option (although -bb works too).
Related
I am calling pytest.main() in my script, but I get no printout unless a test fails.
I know that pytest has a -s flag that solves the problem, and in fact if I call my test from the console using
python3 -m pytest -s mytest.py
it works just fine and I get the print statements printed correctly even if the tests pass; but I can't find out how to obtain the same outcome while calling pytest.main() from a Python script.
It's easy. Anything you pass from the command line can be put in the args list:
pytest.main(args=['-s', 'mytest.py'])
From the pytest.main docs:
Parameters:
args – list of command line arguments.
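As a usage note, pytest.main() also returns the exit code, so a minimal runner script could look like this (a sketch; mytest.py is the file from the question):

import sys
import pytest

# run the suite with capturing disabled and propagate the exit status
sys.exit(pytest.main(['-s', 'mytest.py']))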
When I issue git followed by Tab, it auto-completes with a list. I want to write a test.py such that, when I type test.py followed by Tab, it auto-completes with a given list defined in test.py. Is that possible?
$ git [tab]
add branch column fetch help mv reflog revert stash
am bundle commit filter-branch imap-send name-rev relink rm status
annotate checkout config format-patch init notes remote send-email submodule
apply cherry credential fsck instaweb p4 repack shortlog subtree
archive cherry-pick describe gc log pull replace show tag
bisect clean diff get-tar-commit-id merge push request-pull show-branch whatchanged
blame clone difftool grep mergetool rebase reset stage
The method you are looking for is readline.set_completer. It hooks into the GNU readline library, the same library bash uses for line editing. It's simple to implement. Examples: https://pymotw.com/2/readline/
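For illustration, here is a minimal sketch of readline.set_completer. Note that it wires Tab completion into the current Python process's own input(), not into the surrounding shell; the command list is made up:

import readline

COMMANDS = ['add', 'branch', 'commit', 'fetch', 'status']  # hypothetical word list

def completer(text, state):
    # return the state-th command starting with the typed prefix, or None
    matches = [c for c in COMMANDS if c.startswith(text)]
    return matches[state] if state < len(matches) else None

readline.set_completer(completer)
readline.parse_and_bind('tab: complete')  # bind Tab to the completer

line = input('> ')  # pressing Tab now completes against COMMANDS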
That's not a feature of the git binary itself; it's a bash completion 'hack' and as such has nothing to do with Python per se. But since you've tagged the question with Python, let's add a little twist. Let's say we create a script that is aware of its acceptable arguments, test.py:
#!/usr/bin/env python

import sys

# let's define some sample functions to be called on passed arguments
def f1():
    print("F1 called!")

def f2():
    print("F2 called!")

def f3():
    print("F3 called!")

def f_invalid():  # a simple invalid placeholder function
    print("Invalid command!")

def f_list():  # a function to list all valid arguments
    print(" ".join(sorted(arguments.keys())))

if __name__ == "__main__":  # make sure we're running this as a script
    arguments = {  # a simple argument map, use argparse or similar in real-world use
        "arg1": f1,
        "arg2": f2,
        "arg3": f3,
        "list_arguments": f_list,
    }
    if len(sys.argv) > 1:
        for arg in sys.argv[1:]:  # loop through all arguments
            arguments.get(arg, f_invalid)()  # call the mapped or the invalid function
    else:
        print("At least one argument required!")
NOTE: Make sure you add an executable flag to the script (chmod +x test.py) so its shebang is used for executing instead of providing it as an argument to the Python interpreter.
Apart from all the boilerplate, the important argument is list_arguments: it lists all available arguments to this script, and we'll use its output in our bash completion script to tell bash how to auto-complete. To do so, create another script; let's call it test-completion.bash:
#!/usr/bin/env bash

SCRIPT_NAME=test.py
SCRIPT_PATH=/path/to/your/script

_complete_script()
{
    local cursor options
    options=$(${SCRIPT_PATH}/${SCRIPT_NAME} list_arguments)
    cursor="${COMP_WORDS[COMP_CWORD]}"
    COMPREPLY=( $(compgen -W "${options}" -- ${cursor}) )
    return 0
}

complete -F _complete_script ${SCRIPT_NAME}
What it does is essentially register the _complete_script function with complete, so that it is called whenever completion for test.py is invoked. The _complete_script function itself first calls test.py with list_arguments to retrieve its acceptable arguments, and then uses compgen to build the structure complete needs to print them out.
To test, all you need is to source this script as:
source test-completion.bash
And then your bash will behave as:
$ ./test.py [tab]
arg1 arg2 arg3 list_arguments
And what's more, it's completely controllable from your Python script - whatever gets printed as a list on list_arguments command is what will be shown as auto-completion help.
To make the change permanent, you can simply add the source line to your .bashrc, or, if you want a more structured solution, you can follow the guidelines for your OS. There are a couple of ways described on the git-flow-completion page, for example. Of course, this assumes you actually have bash completion installed and enabled on your system, but your git autocompletion wouldn't work if you didn't.
Speaking of git autocompletion, you can see how it's implemented by checking git-completion.bash source - a word of warning, it's not for the fainthearted.
Sometimes I want to just insert some print statements in my code, and see what gets printed out when I exercise it. My usual way to "exercise" it is with existing pytest tests. But when I run these, I don't seem able to see any standard output (at least from within PyCharm, my IDE).
Is there a simple way to see standard output during a pytest run?
The -s switch disables per-test capturing; without it, captured output is shown only if a test fails.
-s is equivalent to --capture=no.
pytest captures the stdout from individual tests and displays it only under certain conditions, along with the summary of the tests it prints by default.
Extra summary info can be shown using the '-r' option:
pytest -rP
shows the captured output of passed tests.
pytest -rx
shows extra summary info for expected failures (xfail). The captured output of failed tests is already shown by default.
The formatting of the output is prettier with -r than with -s.
When running the test, use the -s option. All print statements in exampletest.py will then be printed to the console when the test is run.
py.test exampletest.py -s
In an upvoted comment to the accepted answer, Joe asks:
Is there any way to print to the console AND capture the output so that it shows in the junit report?
In UNIX, this is commonly referred to as teeing. Ideally, teeing rather than capturing would be the py.test default. Non-ideally, neither py.test nor any existing third-party py.test plugin (...that I know of, anyway) supports teeing – despite Python trivially supporting teeing out-of-the-box.
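To illustrate what "trivially" means here: a tee is just a file-like object that fans writes out to several underlying streams. The following is a generic sketch of that idea, not py.test integration (the class name is made up):

class TeeStream:
    """Write everything to all underlying streams, e.g. the real stdout plus a capture buffer."""
    def __init__(self, *streams):
        self.streams = streams

    def write(self, data):
        for stream in self.streams:
            stream.write(data)

    def flush(self):
        for stream in self.streams:
            stream.flush()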
Monkey-patching py.test to do anything unsupported is non-trivial. Why? Because:
Most py.test functionality is locked behind a private _pytest package not intended to be externally imported. Attempting to do so without knowing what you're doing typically results in the public pytest package raising obscure exceptions at runtime. Thanks a lot, py.test. Really robust architecture you've got there.
Even when you do figure out how to monkey-patch the private _pytest API in a safe manner, you have to do so before running the public pytest package run by the external py.test command. You cannot do this in a plugin (e.g., a top-level conftest module in your test suite). By the time py.test lazily gets around to dynamically importing your plugin, any py.test class you wanted to monkey-patch has long since been instantiated – and you do not have access to that instance. This implies that, if you want your monkey-patch to be meaningfully applied, you can no longer safely run the external py.test command. Instead, you have to wrap the running of that command with a custom setuptools test command that (in order):
Monkey-patches the private _pytest API.
Calls the public pytest.main() function to run the py.test command.
This answer monkey-patches py.test's -s and --capture=no options to capture stderr but not stdout. By default, these options capture neither stderr nor stdout. This isn't quite teeing, of course. But every great journey begins with a tedious prequel everyone forgets in five years.
Why do this? I shall now tell you. My py.test-driven test suite contains slow functional tests. Displaying the stdout of these tests is helpful and reassuring, preventing leycec from reaching for killall -9 py.test when yet another long-running functional test fails to do anything for weeks on end. Displaying the stderr of these tests, however, prevents py.test from reporting exception tracebacks on test failures. Which is completely unhelpful. Hence, we coerce py.test to capture stderr but not stdout.
Before we get to it, this answer assumes you already have a custom setuptools test command invoking py.test. If you don't, see the Manual Integration subsection of py.test's well-written Good Practices page.
Do not install pytest-runner, a third-party setuptools plugin providing a custom setuptools test command also invoking py.test. If pytest-runner is already installed, you'll probably need to uninstall that pip3 package and then adopt the manual approach linked to above.
Assuming you followed the instructions in Manual Integration highlighted above, your codebase should now contain a PyTest.run_tests() method. Modify this method to resemble:
class PyTest(TestCommand):
    .
    .
    .
    def run_tests(self):
        # Import the public "pytest" package *BEFORE* the private "_pytest"
        # package. While importation order is typically ignorable, imports can
        # technically have side effects. Tragicomically, that is the case here.
        # Importing the public "pytest" package establishes runtime
        # configuration required by submodules of the private "_pytest" package.
        # The former *MUST* always be imported before the latter. Failing to do
        # so raises obtuse exceptions at runtime... which is bad.
        import pytest
        from _pytest.capture import CaptureManager, FDCapture, MultiCapture

        # If the private method to be monkey-patched no longer exists, py.test
        # is either broken or unsupported. In either case, raise an exception.
        if not hasattr(CaptureManager, '_getcapture'):
            from distutils.errors import DistutilsClassError
            raise DistutilsClassError(
                'Class "pytest.capture.CaptureManager" method _getcapture() '
                'not found. The current version of py.test is either '
                'broken (unlikely) or unsupported (likely).'
            )

        # Old method to be monkey-patched.
        _getcapture_old = CaptureManager._getcapture

        # New method applying this monkey-patch. Note the use of:
        #
        # * "out=False", *NOT* capturing stdout.
        # * "err=True", capturing stderr.
        def _getcapture_new(self, method):
            if method == "no":
                return MultiCapture(
                    out=False, err=True, in_=False, Capture=FDCapture)
            else:
                return _getcapture_old(self, method)

        # Replace the old method with the new.
        CaptureManager._getcapture = _getcapture_new

        # Run py.test with all passed arguments.
        errno = pytest.main(self.pytest_args)
        sys.exit(errno)
To enable this monkey-patch, run py.test as follows:
python setup.py test -a "-s"
Stderr but not stdout will now be captured. Nifty!
Extending the above monkey-patch to tee stdout and stderr is left as an exercise to the reader with a barrel-full of free time.
According to the pytest documentation, version 3 of pytest can temporarily disable capture within a test:
def test_disabling_capturing(capsys):
    print('this output is captured')
    with capsys.disabled():
        print('output not captured, going directly to sys.stdout')
    print('this output is also captured')
pytest --capture=tee-sys was added in v5.4.0. You can capture the output as well as see it on stdout/stderr.
Try pytest -s -v test_login.py for more info in the console.
-v is short for --verbose
-s means 'disable all capturing'
You can also enable live logging by setting the following in pytest.ini or tox.ini in your project root.
[pytest]
log_cli = True
Or specify it directly on the CLI:
pytest -o log_cli=True
pytest test_name.py -v -s
Simple!
I would suggest running pytest -h; there are quite a few interesting options listed there. For this particular case, -s (the shortcut for --capture=no) is enough:
pytest <test_file.py> -s
If you are using logging, you need to turn on logging output explicitly, in addition to -s for generic stdout. Based on Logging within pytest tests, I am using:
pytest --log-cli-level=DEBUG -s my_directory/
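For example, a test like the following (a sketch; the logger name and the messages are arbitrary) produces both kinds of output with that invocation:

import logging

logger = logging.getLogger(__name__)

def test_logs_and_prints():
    logger.debug("shown because of --log-cli-level=DEBUG")
    print("shown because of -s")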
If you are using PyCharm IDE, then you can run that individual test or all tests using Run toolbar. The Run tool window displays output generated by your application and you can see all the print statements in there as part of test output.
If anyone wants to run tests from code with output:
import pytest

if __name__ == '__main__':
    pytest.main(['--capture=no'])
The capsys, capsysbinary, capfd, and capfdbinary fixtures allow access to stdout/stderr output created
during test execution. Here is an example test function that performs some output related checks:
import sys

def test_print_something_even_if_the_test_pass(capsys):
    text_to_be_printed = "Print me when the test pass."
    print(text_to_be_printed)
    p_t = capsys.readouterr()
    sys.stdout.write(p_t.out)
    # the two lines above will print the text even if the test passes
Here is the result:
test_print_something_even_if_the_test_pass PASSED [100%]Print me when the test pass.
I would like to write a unit test for a (rather complex) bash completion script, preferably with Python: just something that gets the values of a bash completion programmatically.
The test should look like this:
def test_completion():
    # trigger_completion should return what a user should get on triggering
    # bash completion like this: 'pbt createkvm<TAB>'
    assert trigger_completion('pbt createkvm') == "module1 module2 module3"
How can I simulate Bash completion programmatically to check the completion values inside a testsuite for my tool?
Say you have a bash-completion script in a file called asdf-completion, containing:
_asdf() {
    COMPREPLY=()
    local cur prev
    cur=$(_get_cword)
    COMPREPLY=( $( compgen -W "one two three four five six" -- "$cur") )
    return 0
}

complete -F _asdf asdf
This uses the shell function _asdf to provide completions for the fictional asdf command. If we set the right environment variables (documented in the bash man page), then we can get the same result: the placement of the potential expansions into the COMPREPLY variable. Here's an example of doing that in a unittest:
import subprocess
import unittest

class BashTestCase(unittest.TestCase):
    def test_complete(self):
        completion_file = "asdf-completion"
        partial_word = "f"
        cmd = ["asdf", "other", "arguments", partial_word]
        cmdline = ' '.join(cmd)
        out = subprocess.Popen(
            ['bash', '-i', '-c',
             r'source {compfile}; COMP_LINE="{cmdline}" COMP_WORDS=({cmdline}) COMP_CWORD={cword} COMP_POINT={cmdlen} $(complete -p {cmd} | sed "s/.*-F \\([^ ]*\\) .*/\\1/") && echo ${{COMPREPLY[*]}}'.format(
                compfile=completion_file, cmdline=cmdline, cmdlen=len(cmdline),
                cmd=cmd[0], cword=cmd.index(partial_word))],
            stdout=subprocess.PIPE)
        stdout, stderr = out.communicate()
        # note: Popen.communicate() returns bytes on Python 3
        self.assertEqual(stdout, b"four five\n")

if __name__ == '__main__':
    unittest.main()
This should work for any completions that use -F, but may work for others as well.
je4d's comment to use expect is a good one for a more complete test.
bonsaiviking's solution almost worked for me. I had to change the bash script string: I added an extra ';' separator to the executed bash script, otherwise the execution wouldn't work on Mac OS X. Not really sure why.
I also generalized the initialization of the various COMP_ arguments a bit to handle the various cases I ended up with.
The final solution is a helper class for testing bash completion from Python, so that the above test would be written as:
import unittest
from completion import BashCompletionTest

class AdsfTestCase(BashCompletionTest):
    def test_orig(self):
        self.run_complete("other arguments f", "four five")

    def run_complete(self, command, expected):
        completion_file = "adsf-completion"
        program = "asdf"
        super(AdsfTestCase, self).run_complete(completion_file, program, command, expected)

if __name__ == '__main__':
    unittest.main()
The completion lib is located under https://github.com/lacostej/unity3d-bash-completion/blob/master/lib/completion.py
I would like to be able to run a nose test script which accepts command line arguments. For example, something along the lines of:
test.py
import nose, sys

def test():
    # do something with the command line arguments
    print sys.argv

if __name__ == '__main__':
    nose.runmodule()
However, whenever I run this with a command line argument, I get an error:
$ python test.py arg
E
======================================================================
ERROR: Failure: ImportError (No module named arg)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.1-py2.6.egg/nose/loader.py", line 368, in loadTestsFromName
module = resolve_name(addr.module)
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.1-py2.6.egg/nose/util.py", line 334, in resolve_name
module = __import__('.'.join(parts_copy))
ImportError: No module named arg
----------------------------------------------------------------------
Ran 1 test in 0.001s
FAILED (errors=1)
Apparently, nose tries to do something with the arguments passed in sys.argv. Is there a way to make nose ignore those arguments?
Alright, I hate "why would you want to do that?" answers just as much as anyone, but I'm going to have to make one here. I hope you don't mind.
I'd argue that whatever you're trying to do isn't within the scope of the nose framework. Nose is intended for automated tests. If you have to pass in command-line arguments for the test to pass, then it isn't automated. Now, what you can do is something like this:
import sys

class test_something(object):
    def setUp(self):
        sys.argv[1] = 'arg'
        del sys.argv[2]  # remember that -s is in sys.argv[2], see below

    def test_method(self):
        print sys.argv
If you run that, you get this output:
[~] nosetests test_something.py -s
['/usr/local/bin/nosetests', 'arg']
.
----------------------------------------------------------------------
Ran 1 test in 0.001s
OK
(Remember to pass in the -s flag if you want to see what goes on stdout)
However, I'd probably still recommend against that, as it's generally a bad idea to mess with global state in automated tests if you can avoid it. What I would likely do is adapt whatever code I'm wanting to test to take an argv list. Then, you can pass in whatever you want during testing and pass in sys.argv in production.
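That refactoring can be as small as the following sketch (main and its argument handling are placeholders):

import sys

def main(argv=None):
    # fall back to the real command line arguments in production
    if argv is None:
        argv = sys.argv[1:]
    # ... do the real work with argv here ...
    return argv

def test_main():
    # inject arguments explicitly instead of mutating global state
    assert main(['arg']) == ['arg']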
UPDATE:

The reason why I need to do it is because I am testing multiple implementations of the same library. To test that those implementations are correct, I use a single nose script that accepts as a command line argument the library that it should import for testing.
It sounds like you may want to try your hand at writing a nose plugin. It's pretty easy to do. Here are the latest docs.
You could use another means of getting stuff into your code:
import os
print os.getenv('KEY_THAT_MIGHT_EXIST', default_value)
Then just remember to set your environment before running nose.
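For example, in a POSIX shell you could run KEY_THAT_MIGHT_EXIST=somevalue nosetests test_something.py, reusing the placeholder variable name from the snippet above.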
I think that is a perfectly acceptable scenario. I also needed to do something similar in order to run tests against different environments (dev, qa, prod, etc.), where I needed the right URLs and configurations for each environment.
The solution I found was to use the nose-testconfig plugin (link here). It does not exactly pass command line arguments; instead, you create a config file with all your parameters and then pass this config file as an argument when you execute your nose tests.
The config file has the following format:
[group1]
env=qa
[urlConfig]
address=http://something
[dbConfig]
user=test
pass=test
And you can read the arguments using:
from testconfig import config
print(config['dbConfig']['user'])
For now I am using the following hack:
args = sys.argv[1:]
sys.argv = sys.argv[0:1]
which just reads the arguments into a local variable and then removes the extra arguments from sys.argv so that nose does not get confused by them.
Just running nose and passing in parameters will not work, as nose will attempt to interpret the arguments as nose parameters, so you get the problems you are seeing.
I do not think nose supports parameter passing directly yet, but the nose plug-in nose-testconfig allows you to write tests like the one below:
from testconfig import config
def test_os_specific_code():
    os_name = config['os']['type']
    if os_name == 'nt':
        pass  # some nt specific tests
    else:
        pass  # tests for any other os