I would like to be able to run a nose test script which accepts command line arguments. For example, something along these lines:
test.py

import nose, sys

def test():
    # do something with the command line arguments
    print sys.argv

if __name__ == '__main__':
    nose.runmodule()
However, whenever I run this with a command line argument, I get an error:
$ python test.py arg
E
======================================================================
ERROR: Failure: ImportError (No module named arg)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.1-py2.6.egg/nose/loader.py", line 368, in loadTestsFromName
module = resolve_name(addr.module)
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.1-py2.6.egg/nose/util.py", line 334, in resolve_name
module = __import__('.'.join(parts_copy))
ImportError: No module named arg
----------------------------------------------------------------------
Ran 1 test in 0.001s
FAILED (errors=1)
Apparently, nose tries to do something with the arguments passed in sys.argv. Is there a way to make nose ignore those arguments?
Alright, I hate "why would you want to do that?" answers just as much as anyone, but I'm going to have to make one here. I hope you don't mind.
I'd argue that what you want to do isn't within the scope of the nose framework. Nose is intended for automated tests. If you have to pass in command-line arguments for the test to pass, then it isn't automated. Now, what you can do is something like this:
import sys

class test_something(object):
    def setUp(self):
        sys.argv[1] = 'arg'
        del sys.argv[2]  # remember that -s is in sys.argv[2], see below
    def test_method(self):
        print sys.argv
If you run that, you get this output:
[~] nosetests test_something.py -s
['/usr/local/bin/nosetests', 'arg']
.
----------------------------------------------------------------------
Ran 1 test in 0.001s
OK
(Remember to pass in the -s flag if you want to see what goes on stdout)
However, I'd probably still recommend against that, as it's generally a bad idea to mess with global state in automated tests if you can avoid it. What I would likely do is adapt whatever code I'm wanting to test to take an argv list. Then, you can pass in whatever you want during testing and pass in sys.argv in production.
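For instance, a minimal sketch of that refactor could look like the following (the function name main and the sample arguments are illustrative, not taken from the question):

import sys

def main(argv):
    # the real code under test would use argv here; for the example we just return it
    return list(argv)

def test_main():
    # the test hands in whatever arguments it needs, without touching sys.argv
    assert main(['arg1', 'arg2']) == ['arg1', 'arg2']

if __name__ == '__main__':
    main(sys.argv[1:])  # in production, pass the real command line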
UPDATE:

The reason why I need to do it is because I am testing multiple implementations of the same library. To test that those implementations are correct, I use a single nose script that accepts as a command line argument the library that it should import for testing.
It sounds like you may want to try your hand at writing a nose plugin. It's pretty easy to do. Here are the latest docs.
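As a rough, untested sketch (the plugin name, the --lib option, and the LIB_UNDER_TEST environment variable are all made up for the example), a plugin that adds its own command-line option could look like this:

import os
from nose.plugins import Plugin

class LibSelector(Plugin):
    # hypothetical plugin letting you pick a library implementation on the command line
    name = 'lib-selector'

    def options(self, parser, env):
        super(LibSelector, self).options(parser, env)
        parser.add_option('--lib', dest='lib', default='default_impl',
                          help='library implementation the tests should import')

    def configure(self, options, conf):
        super(LibSelector, self).configure(options, conf)
        self.enabled = True
        # expose the choice to the tests, e.g. through an environment variable
        os.environ['LIB_UNDER_TEST'] = options.lib

The plugin still has to be registered with nose (for example via the addplugins argument of nose.main/nose.run, or a setuptools entry point) before the option becomes available.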
You could use another means of getting stuff into your code:
import os
print os.getenv('KEY_THAT_MIGHT_EXIST', default_value)
Then just remember to set your environment before running nose.
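For example (the variable name LIB_UNDER_TEST and the module names are only placeholders), the test module could pick the implementation from the environment, and you would export the variable in the shell before invoking nosetests:

import os

def test_implementation():
    # pick which implementation to exercise; 'default_impl' is a placeholder
    lib_name = os.getenv('LIB_UNDER_TEST', 'default_impl')
    lib = __import__(lib_name)
    # ... run the shared assertions against lib here ...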
I think that is a perfectly acceptable scenario. I also needed to do something similar in order to run the tests against different scenarios (dev, qa, prod, etc.), and there I needed the right URLs and configurations for each environment.
The solution I found was to use the nose-testconfig plugin. It is not exactly passing command line arguments, but rather creating a config file with all your parameters and then passing this config file as an argument when you execute your nose tests.
The config file has the following format:
[group1]
env=qa
[urlConfig]
address=http://something
[dbConfig]
user=test
pass=test
And you can read the arguments using:
from testconfig import config
print(config['dbConfig']['user'])
For now I am using the following hack:
args = sys.argv[1:]
sys.argv = sys.argv[0:1]
which just reads the argument into a local variable, and then deletes all the additional arguments in sys.argv so that nose does not get confused by them.
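Put together with the script from the question, the hack might look roughly like this (a sketch; the test body is illustrative):

import sys
import nose

args = sys.argv[1:]       # grab the extra command line arguments for the tests
sys.argv = sys.argv[0:1]  # strip them so nose does not try to interpret them

def test_library():
    # the tests can now use the captured arguments
    print(args)

if __name__ == '__main__':
    nose.runmodule()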
Just running nose and passing in parameters will not work, as nose will attempt to interpret the arguments as nose parameters, so you get the problems you are seeing.
I do not think nose supports parameter passing directly yet, but the nose-testconfig plug-in allows you to write tests like the one below:
from testconfig import config

def test_os_specific_code():
    os_name = config['os']['type']
    if os_name == 'nt':
        pass  # some nt specific tests
    else:
        pass  # tests for any other os
Related
Maybe I am going about this the wrong way, because my search turned up nothing useful.
Adding the -b (-bb) option when calling the python interpreter will warn (raise) whenever an implicit bytes to string or bytes to int conversion takes place:
Issue a warning when comparing bytes or bytearray with str or bytes with int. Issue an error when the option is given twice (-bb).
I would like to write a unit test around this using pytest. I.e., I'd like to do
# foo.py
import pytest

def test_foo():
    with pytest.raises(BytesWarning):
        print(b"This is a bytes string.")
When calling the above as pytest foo.py the test will fail (no BytesWarning raised). When I call the above test as python -bb -m pytest foo.py it will pass, because BytesWarning is raised as an exception. So far so good.
What I can't work out (nor do I seem to be able to find anything useful on the internet), is if/how it is possible to configure pytest to do this automatically so that I can run pytest --some_arg foo.py and it will do the intended thing. Is this possible?
When you execute pytest foo.py, your shell will look for the pytest program. You can find out which one will be executed with the command which pytest. For me, it's /home/stack_overflow/venv/bin/pytest, which looks like this:
#!/home/stack_overflow/venv/bin/python
# -*- coding: utf-8 -*-
import re
import sys
from pytest import console_main
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(console_main())
It just calls console_main() from the pytest library.
If you add a print(sys.argv), you can see how it was called. For me, it is ['/home/stack_overflow/venv/bin/pytest', 'so70782647.py']. It matches the path from the first line, which is called a shebang. It instructs how your program should be invoked. Here, it indicates to run the file using the python from my venv.
If I modify the line:
#!/home/stack_overflow/venv/bin/python -bb
# ^^^^
now your test passes.
It may be a solution, although not very elegant.
You may notice that, even now, the -bb does not appear when printing sys.argv. The reason is explained in the doc you linked yourself:
An interface option terminates the list of options consumed by the interpreter, all consecutive arguments will end up in sys.argv [...]
So it is not possible to check if it was activated using sys.argv.
I found a question about how to retrieve them from the interpreter's internal state, in case you are interested in checking it as a pre-condition for your test: Retrieve the command line arguments of the Python interpreter. Checking sys.flags.bytes_warning is simpler in our case, though ({0: None, 1: '-b', 2: '-bb'}).
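If you want the test to be skipped rather than fail when the interpreter was not started with -bb, a small sketch using that flag could be:

import sys
import pytest

@pytest.mark.skipif(sys.flags.bytes_warning < 2,
                    reason="requires running the interpreter with -bb")
def test_foo():
    with pytest.raises(BytesWarning):
        print(b"This is a bytes string.")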
Continuing with your question: how do you run pytest with the -bb interpreter option?
You already have a solution : python -bb -m pytest foo.py.
If you prefer, it is possible to create a file pytest whose content is just python -bb -m pytest "$@" (don't forget to make it executable). Run it with ./pytest foo.py. Or make it an alias.
You can't tell pytest which Python interpreter options you want, because pytest would already be running in the interpreter itself, which would have already handled its own options.
As far as I know, these options are really not easy to change. I guess if you could write into the PyConfig C struct, it would have the desired effect. For example, see the function bytes_richcompare, which does:
if (_Py_GetConfig()->bytes_warning && (op == Py_EQ || op == Py_NE)) {
    if (PyUnicode_Check(a) || PyUnicode_Check(b)) {
        if (PyErr_WarnEx(PyExc_BytesWarning,
                         "Comparison between bytes and string", 1))
            return NULL;
    }
then you could activate it from within your test, as in:
def test_foo():
    if sys.flags.bytes_warning < 2:
        # change PyConfig.bytes_warning
        assert sys.flags.bytes_warning == 2
    with pytest.raises(BytesWarning):
        print(b"This is a bytes string.")
    # change back the PyConfig.bytes_warning
But I think how to do that should be another question.
As a workaround, you can use pytest.warns like so:
def test_foo():
    with pytest.warns(BytesWarning):
        print(b"This is a bytes string.")
and it only requires the -b option (although -bb works too).
I'm new to python tests so don't hesitate to provide any obvious information.
Basically I want to do some RESTful tests using python, and found the httpretty and sure libraries which look really nice.
I have a python file containing:
#!/usr/bin/python
from sure import expect
import requests, httpretty

@httpretty.activate
def RestTest():
    httpretty.register_uri(httpretty.GET, "http://localhost:8090/test.json",
                           body='{"status": "ok"}',
                           content_type="application/json")

    response = requests.get("http://localhost:8090/test.json")
    expect(response.json()).to.equal({"status": "ok"})
Which is basically the same as the example code provided at https://github.com/gabrielfalcao/HTTPretty
My question is: how do I simply run this test to see it either pass or fail? I tried just executing it using ./pythonFile but that doesn't work.
If your test is implemented as a Python function, then of course simply trying to execute the file isn't going to run the test: nothing in that file actually calls RestTest.
You need some sort of test framework that will call your tests and collate the results.
One such solution is python-nose, which will look for methods named test_* and run them. So if you were to rename RestTest to test_rest, you could run:
$ nosetests myfile.py
.
----------------------------------------------------------------------
Ran 1 test in 0.012s
OK
The nosetests command has a variety of options that control which tests are run, how errors are handled and reported, and more.
Python 3 includes similar functionality in the unittest module, which is also available as a backport for Python 2 called unittest2. You could modify your code to take advantage of unittest like this:
#!/usr/bin/python
from sure import expect
import requests, httpretty
import unittest

class RestTest(unittest.TestCase):
    @httpretty.activate
    def test_rest(self):
        httpretty.register_uri(httpretty.GET, "http://localhost:8090/test.json",
                               body='{"status": "ok"}',
                               content_type="application/json")

        response = requests.get("http://localhost:8090/test.json")
        expect(response.json()).to.equal({"status": "ok"})

if __name__ == '__main__':
    unittest.main()
Running your file would now provide output similar to what we saw with nosetests:
$ python myfile.py
.
----------------------------------------------------------------------
Ran 1 test in 0.012s
OK
Have you tried calling your method?
Or does the annotation mean you don't have to explicitly call your method?
If I call your method, it seems like it works. If I change the value on one side of the expect, it complains properly about the values not matching.
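For example, adding something like this at the bottom of the file makes ./pythonFile execute the check directly (a quick sketch, not a replacement for a real test runner):

if __name__ == '__main__':
    RestTest()  # raises if the expectation fails
    print("RestTest passed")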
I have some tests written using pytest and fixtures, e.g.:
import os
import shutil
import tempfile

import pytest

class TestThing:
    @pytest.fixture()
    def temp_dir(self, request):
        my_temp_dir = tempfile.mkdtemp()

        def fin():
            shutil.rmtree(my_temp_dir)
        request.addfinalizer(fin)

        return my_temp_dir

    def test_something(self, temp_dir):
        with open(os.path.join(temp_dir, 'test.txt'), 'w') as f:
            f.write('test')
This works fine when the tests are invoked from the shell, e.g.
$ py.test
but I don't know how to run them from within a python/ipython session; trying e.g.
tt = TestThing()
tt.test_something(tt.temp_dir())
fails because temp_dir requires a request object to be passed in. So, how does one invoke a fixture with a request object injected?
Yes. You don't have to manually assemble any test fixtures or anything like that. Everything runs just like calling pytest in the project directory.
Method 1:

This is the best method because it gives you access to the debugger if your test fails.

In the ipython shell use:

ipython> run -m pytest prj/

This will run all your tests in the prj/tests directory.

This will give you access to the debugger, or allow you to set breakpoints if you have an import ipdb; ipdb.set_trace() in your program (https://docs.pytest.org/en/latest/usage.html#setting-breakpoints).
Method 2:

Use !pytest while in the test directory. This won't give you access to the debugger. However, if you use

ipython> !pytest --pdb

and a test fails, it will drop you into the debugger (subshell), so you can run your post-mortem analysis (https://docs.pytest.org/en/latest/usage.html#dropping-to-pdb-python-debugger-on-failures).
Using these methods you can even run individual modules/test_functions/TestClasses in ipython (https://docs.pytest.org/en/latest/usage.html#specifying-tests-selecting-tests):

ipython> run -m pytest prj/tests/test_module1.py::TestClass1::test_function1
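Another option is to call pytest programmatically from the interactive session with pytest.main(), which accepts the same arguments as the command line (the path below is just an example):

import pytest

# equivalent to: pytest prj/tests/test_module1.py -x --pdb
pytest.main(['prj/tests/test_module1.py', '-x', '--pdb'])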
You can bypass the pytest.fixture decorator and directly call the wrapped test function.
tmp = tt.temp_dir.__pytest_wrapped__.obj(request=...)
Accessing internals is bad, but when you need it...
The best method I have, which is far from ideal, is to just %run the test file, then manually assemble the fixtures, then simply call the tests. The problem with this is tracking down the modules where the default fixtures are defined and then calling them in their order of dependencies.
You can use two cells for this.

First:

def test_something():
    assert True

Second:

from tempfile import mktemp

test_file = mktemp('.py', 'test_')
open(test_file, 'w').write(_i)  # write last cell input
!pytest $test_file
You can also do this in one cell, but you won't have code highlighting:
from tempfile import mktemp

test_code = """
def test_something():
    assert True
"""

test_file = mktemp('.py', 'test_')
open(test_file, 'w').write(test_code)
!pytest $test_file
The simple answer is that you don't want to run py.test interactively from Python. Most people set up some integration with their text editor or IDE to run py.test and parse its output. But really, it's a command line tool and that is how it should be used.
As a side note, you may want to check out the built-in tmpdir fixture (http://pytest.org/latest/tmpdir.html), because it seems like you're re-inventing it.
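A minimal sketch of the same test using the built-in tmpdir fixture (tmpdir is a py.path.local object, so str() gives the directory path):

import os

class TestThing:
    def test_something(self, tmpdir):
        # tmpdir is created per test by pytest itself, no manual cleanup needed
        with open(os.path.join(str(tmpdir), 'test.txt'), 'w') as f:
            f.write('test')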
I have a python script which takes a config file on the command line and produces an output.
I am trying to see how I can use nosetests to run all these files.
I read through the nosetests info on Google, but I could not follow how to run them with the config file.
Any ideas on where I could get started?
Something like this should work:
import sys
import nose

def test_me():
    assert True

if __name__ == '__main__':
    module_name = sys.modules[__name__].__file__

    config_name = 'nose.cfg'
    result = nose.run(
        argv=[sys.argv[0],
              module_name,
              '--config=' + config_name]
    )
You can also pass your config instance, as described in the docs for nose.run() arguments here.
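A rough sketch of that variant (hedged: check the nose docs for the exact Config attributes you want to set):

import sys
import nose
from nose.config import Config

def test_me():
    assert True

if __name__ == '__main__':
    cfg = Config()
    cfg.verbosity = 2  # settings you would otherwise keep in nose.cfg
    nose.run(argv=[sys.argv[0], sys.modules[__name__].__file__],
             config=cfg)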
I didn't have to do any of that. nosetests by itself executes any test file whose name begins with "test_" and ends in ".py". Make sure you use "--exe" if the files are executable; if not, you can skip that option. The nosetests help page on the wiki really helps.
I am using Python to simplify some commands in Maven. I have this script which calls mvn test in debug mode.
from subprocess import call
commands = []
commands.append("mvn")
commands.append("test")
commands.append("-Dmaven.surefire.debug=\"-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000 -Xnoagent -Djava.compiler=NONE\"")
call(commands)
The problem is with the -Dmaven.surefire.debug line, which accepts a parameter that has to be in quotes, and I don't know how to do that correctly. It looks fine when I print the list, but when I run the script I get Error translating CommandLine and the debugging line is never executed.
The quotes are only required for the shell executing the command.
If you issue this call directly from the shell, you probably run
mvn test -Dmaven.surefire.debug="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000 -Xnoagent -Djava.compiler=NONE"
With these " characters you are, simply put, telling the shell to ignore the spaces within.
The program is called with the arguments
mvn
test
-Dmaven.surefire.debug=-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000 -Xnoagent -Djava.compiler=NONE
so
from subprocess import call
commands = []
commands.append("mvn")
commands.append("test")
commands.append("-Dmaven.surefire.debug=-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000 -Xnoagent -Djava.compiler=NONE")
call(commands)
should be the way to go.
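As a quick illustration of that quoting rule, shlex.split shows how the shell would tokenize the original command line, which is why the inner quotes must not be included when you build the argument list yourself (the debug options are shortened here for readability):

import shlex

cmd = 'mvn test -Dmaven.surefire.debug="-Xdebug -Xnoagent -Djava.compiler=NONE"'
print(shlex.split(cmd))
# ['mvn', 'test', '-Dmaven.surefire.debug=-Xdebug -Xnoagent -Djava.compiler=NONE']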