How do you append options to the command Tox runs by passing those options to Tox itself? Specifically, how do you run a specific Django unittest with Tox?
I'm trying to wrap Tox around some Django unittests, and I can run all unittests with tox, which runs django-admin.py test --settings=myapp.tests.settings myapp.tests.Tests.
However, I'd like to run a specific test at myapp.tests.Tests.test_somespecificthing, which would mean telling Tox to append ".test_somespecificthing" to the end of the command it runs, but I can't figure out how to do this.
The docs say to use "-- " to pass in additional arguments to the underlying command, but this doesn't seem to work.
Try adding {posargs} in the commands section of your tox.ini, like this:
commands =
python manage.py test {posargs}
Then at the command line, something like:
tox -- --pattern='some_specific_test.py'
Everything after the -- will be substituted in as {posargs}.
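For the Django setup described in the question, a tox.ini sketch along these lines might work (the settings module and test path are taken from the question; the part after the colon is the default used when no positional arguments are given):
[testenv]
commands =
    django-admin.py test --settings=myapp.tests.settings {posargs:myapp.tests.Tests}
Running plain tox then runs the whole test class, while tox -- myapp.tests.Tests.test_somespecificthing runs just the one test.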
Read the official tox documentation on positional arguments for more detail.
Related
In a related question about running a single test from the command line when the tests live in a sibling folder, the answer suggests using the -v option alongside the module name and test name to run a specific test.
Why does the -v option make this work? Specifying the module name and the test name makes sense, since that corresponds to the unittest documentation and obviously you need to specify which test to run. However, from what I can tell, the -v option corresponds to verbose output, which shouldn't change the tests that the unittest module runs.
Apologies in advance if I've missed something obvious here.
The reason this wasn't working was a pretty obvious, but stupid, error on my part 😅.
tl;dr: Use the full command line to run the tests (e.g. python3 -m unittest tests.module_name.TestClass.test_func), or if you're using a bash function, make sure the function accepts and forwards its arguments.
I had set up a bash function called run_tests to run unittests, and I was trying to specify the module name and test name after calling that function, i.e. I had the following in .bash_profile:
run_tests ()
{
    python3 -m unittest   # no "$@" here, so any arguments passed to run_tests are silently ignored
}
and on the terminal, I did:
run_tests tests.module_name.TestClass.test_func
Since the bash function was not setup to accept arguments, the specific test I wanted to run wasn't actually being passed as an argument to unittest.
Obviously, using -v makes no difference if you use the run_tests function to try and run a specific test.
When I tested with the -v option, I used the full command python3 -m unittest -v tests.module_name.TestClass.test_func, which is why I thought the -v option made it work. When testing without it, I was lazy and reran run_tests tests.module_name.TestClass.test_func from my shell history instead of typing out the full command, which is what caused the confusion.
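For reference, a minimal fix to the bash function, so it forwards its arguments to unittest, would be something like:
run_tests ()
{
    python3 -m unittest "$@"   # "$@" forwards any arguments, e.g. a specific test path
}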
By default, pytest inflates the error traceback massively and prints redundant information to stdout. Considering that I'm using PyCharm, it is really confusing to see code snippets out of context when they are already available in the IDE and debugging interface.
As a result, I intend to set the pytest traceback to native permanently. However, according to the documentation, the only way to do so is to add an extra command-line argument when launching the test runner:
-tb=native
I would like my tests to always use the native traceback regardless of how they are run. Is it possible to use a TestCase API to do so?
Thanks a lot for your help.
You can add this option to the pytest.ini file and it will be picked up automatically by pytest. For your specific case, a pytest.ini with the following contents should work:
[pytest]
addopts = --tb=native
Note the double hyphens with tb; I am using pytest 4.6.4 and that is how it works for me.
Also, refer to the pytest docs for another alternative: setting the PYTEST_ADDOPTS environment variable.
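For example, something like this in your shell profile or CI environment should have the same effect as the addopts line above:
export PYTEST_ADDOPTS="--tb=native"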
I'm not sure how you can do this using pytest, nor am I familiar with this package. With that being said, you can always create a bash function to accomplish this:
function pyt() {
    pytest --tb=native "$@"
}
The "$@" expansion passes all arguments given to pyt along to pytest (kind of like *args in Python), so running pyt arg1 arg2 ... argn will be the same as running
pytest --tb=native arg1 arg2 ... argn
If you are unfamiliar with creating bash shortcuts, see this question.
Update
I misunderstood and thought the OP was calling pytest from the CLI. Instead of creating the pyt function, if you override pytest directly, PyCharm might invoke your bash version of it instead (I'm not really sure, though).
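If you do shadow pytest itself, a sketch like the following should avoid the function calling itself recursively (bash's command builtin skips shell functions and runs the real executable):
function pytest() {
    command pytest --tb=native "$@"
}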
That being said, yaniv's answer seems superior to this, if it works.
I have a test.sh that runs the python command on many different scripts. Is there a way to apply coverage -a to each python call without prepending coverage -a to every command?
See the coverage.py docs about subprocess measurement for a way to invoke coverage automatically when starting Python: http://coverage.readthedocs.io/en/latest/subprocess.html. It will require some fiddling.
It might be easier to alias in the shell script. For things like "nosetests", change it to "python -m nose".
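As a rough sketch of that aliasing approach (script names here are placeholders, and this uses the coverage run -a form of the command), test.sh could shadow python with a shell function for the rest of the script:
python() { coverage run -a "$@"; }   # every later "python foo.py" line now records coverage, appending to the same data file
python first_script.py
python second_script.py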
There is a command that calls Django's shell:
python manage.py shell
I would like to create a bash alias that will:
Start python manage.py shell
Execute print 'foo'
Something similar to the -i option (python -i /usr/bin/print_foo.py), but with manage.py shell
The reason for doing this is to speed up the debug process. Instead of importing all the relevant models and assigning variables by hand each time, I want to do it in a separate Python file, so that whenever I start manage.py shell I already have all the tools at hand.
EDIT: using python manage.py shell < /usr/bin/print_foo.py almost does the trick, but the shell exits afterwards. Is there a way to make it stay open?
You can use the following simple shell script:
#!/bin/bash
export PYTHONSTARTUP="$1" # Set the startup script Python will run when it starts.
shift # Remove the first argument, don't want to pass that.
python manage.py shell "$#" # Run manage.py with the startup script.
Just supply the python script you want to run first as the first argument to the script. All other arguments are directly passed to manage.py. The change to $PYTHONSTARTUP won't affect the environment in your shell.
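Usage could then look something like this (the file names are placeholders; startup.py would hold the imports and variable assignments you want preloaded):
./djshell.sh startup.py   # assuming the script above is saved as djshell.sh and made executable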
I also battled with this: the < test.py redirect closes the shell, and shell_plus doesn't add much here besides auto-loading the models; it doesn't load the code you want to debug.
However, I can do this: I make a test.py that initializes Django if it isn't run from a Django environment, and run it with python -i test.py
The basic idea is to not use manage.py shell
if __name__ == '__main__':
    # This will run e.g. from python -i test.py, but will be skipped if run from Django
    import django  # 1.7
    import os
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')
    django.setup()

# Your regular Django stuff here
# Init your vars etc. and prepare values to debug
# The Python prompt will remain active here so you can work on it
Another, less automated solution is to call execfile('test.py') (or %run test.py if you're in IPython).
The pro is that you can reload the test.py module without leaving the shell (faster, and preserves the context); the con is that you have to load it manually when the shell opens up.
Not a direct answer, but Django Extensions' (GitHub, Docs) shell_plus might be interesting for that. (The package has many useful tools, so it is a valuable dependency in many cases.)
The app's models are automatically imported. To configure further imports, see the additional imports section.
If you want to automatically execute code from arbitrary Python modules with shell_plus, you can use the SHELL_PLUS_PRE_IMPORTS and SHELL_PLUS_POST_IMPORTS settings. Any Python modules configured there are run either before or after Django's app models are auto-imported.
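A rough settings.py sketch, following the format from the django-extensions docs (the module and function names below are hypothetical):
SHELL_PLUS_PRE_IMPORTS = [
    ('myapp.debug_helpers', '*'),              # hypothetical helper module, import everything from it
]
SHELL_PLUS_POST_IMPORTS = [
    ('myapp.fixtures', ('load_test_data',)),   # hypothetical: import a single function
]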
I would like to write an alias in my setup.py file for multiple test commands for my project.
But I run into problems when I try to run multiple commands on one line and the 'nosetests' command is invoked before the other commands.
This works
$ python setup.py lint nosetests
pylint output
nosetests output
But if I exchange the commands, I only get the nosetests output.
I think the lint command is eaten by the nosetests argument parser.
$ python setup.py nosetests lint
nosetests output
# No pylint output
So, I would like to know if there is a way to explicitly separate the commands?
Thanks
New answer
By the looks of it, setuptools assumes all options begin with -- and all commands don't begin with --, so there's no explicit way to separate commands, because it's unnecessary.
If the custom nosetests command is accepting lint as an option, then it's a bug in that command, which ought to ignore anything which doesn't begin with --.
However, it might be possible to work around the bug with the traditional Unix idiom of using -- to indicate the end of options, so the following might work...
$ python setup.py nosetests -- lint
...otherwise you'll either have to fix the bug, or find an alternative to using that particular custom command.
Old answer
From the docs...
The basic usage of setup.py is:
$ python setup.py <some_command> <options>
...so it sounds like the fact that it executed both commands in your first example is a bug, or a fluke.
It's probably safest to run them as two separate commands...
$ python setup.py nosetests && python setup.py lint
nosetests output
pylint output
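If the end goal really is a single alias, setuptools also supports an [aliases] section in setup.cfg, which might cover it (untested with these particular custom commands, and the ordering caveat of running lint before nosetests presumably still applies):
[aliases]
check_all = lint nosetests
After which python setup.py check_all should expand to the two commands.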