By default, pytest inflates the error traceback massively and prints redundant information to stdout. Since I'm using PyCharm, it is really distracting to see code snippets out of context when they are already available in the IDE and its debugging interface.
As a result, I intend to set the pytest traceback to native permanently. However, according to the documentation, the only way to do so is to add an extra command-line argument when launching the test runner:
-tb=native
I would like my tests to always use the native traceback regardless of how they are run. Is it possible to use a TestCase API to do so?
Thanks a lot for your help.
You can add this option to the pytest.ini file and it will be picked up automatically by pytest. For your specific case, a pytest.ini with the following contents should work:
[pytest]
addopts = --tb=native
Note the double hyphens before tb; I am using pytest 4.6.4 and that is how it works for me.
Also, refer to the pytest docs for another alternative: setting the PYTEST_ADDOPTS environment variable.
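For example, a sketch of the environment-variable route for a POSIX shell (pytest reads PYTEST_ADDOPTS and prepends its value to the command line of every run):

```shell
# Set once in your shell profile or in PyCharm's run configuration;
# pytest will behave as if --tb=native were passed on every invocation.
export PYTEST_ADDOPTS="--tb=native"
```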
I'm not sure how you can do this using pytest, nor am I familiar with this package. With that being said, you can always create a bash function to accomplish this:
function pyt() {
    pytest --tb=native "$@"
}
The "$@" symbol passes all arguments given to pyt through to the function (much like *args in Python), so running pyt arg1 arg2 ... argn is the same as running
pytest --tb=native arg1 arg2 ... argn
If you are unfamiliar with creating bash shortcuts, see this question.
Update
I misunderstood and thought the OP was calling pytest from the CLI. Instead of creating the pyt function, if you override pytest directly, PyCharm might invoke your bash version of it instead (I'm not really sure, though).
That being said, yaniv's answer seems superior to this, if it works.
Related
I want to run pylint on all my modules, which are in different locations in a big directory. Because running pylint on this directory is still not supported, I assume I need to walk through each module in my own Python script and run pylint on each module from there.
To run pylint inside a Python script, the documentation seems clear:
It is also possible to call Pylint from another Python program, thanks
to the Run() function in the pylint.lint module (assuming Pylint
options are stored in a list of strings pylint_options) as:
import pylint.lint
pylint_opts = ['--version']
pylint.lint.Run(pylint_opts)
However, I cannot get this to run successfully on actual files. What is the correct syntax? Even if I copy-paste the arguments that worked on the command-line, using an absolute file path, the call fails:
import pylint.lint
pylint_opts = ["--load-plugins=pylint.extensions.docparams /home/user/me/mypath/myfile.py"]
pylint.lint.Run(pylint_opts)
The output is the default usage message of the command-line tool, with my script's name in place of pylint:
No config file found, using default configuration
Usage: myscript.py [options] module_or_package
Check that a module satisfies a coding standard (and more !).
myscript.py --help
[...]
What am I missing?
I know that epylint exists as an alternative, and I can get it to run, but it is extremely inconvenient that it overrides the --msg-format and --reports parameters, and I want to figure out what I am doing wrong.
The answer is to separate the options into a list, as shown in this related question:
pylint_opts = ["--load-plugins=pylint.extensions.docparams", "/home/user/me/mypath/myfile.py"]
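If you start from a single command-line string, the standard library's shlex module can do the splitting for you; a sketch (the path is the one from the question):

```python
import shlex

# Split a pylint command line into the list form that pylint.lint.Run expects.
cmdline = "--load-plugins=pylint.extensions.docparams /home/user/me/mypath/myfile.py"
pylint_opts = shlex.split(cmdline)
print(pylint_opts)
# → ['--load-plugins=pylint.extensions.docparams', '/home/user/me/mypath/myfile.py']
```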
From the question on running a single test via the command line when tests are located in a sibling folder, the answer suggests using the -v option alongside the module name and test name to run a specific test.
Why does the -v option make this work? Specifying the module name and the test name makes sense, since it corresponds to the unittest documentation and you obviously need to specify which test to run. However, from what I can tell, the -v option enables verbose output, which shouldn't change which tests the unittest module runs.
Apologies in advance if I've missed something obvious here.
So the reason this wasn't working was because of a pretty obvious, but stupid, error on my part 😅.
tl;dr: Use the full command line to run the tests (e.g. python3 -m unittest tests.module_name.TestClass.test_func), or if you're using a bash function, make sure the function accepts arguments.
I had set up a bash function called run_tests to run unit tests, and I was trying to specify the module name and test name after calling that function. I.e., I had the following in .bash_profile:
run_tests ()
{
python3 -m unittest
}
and on the terminal, I did:
run_tests tests.module_name.TestClass.test_func
Since the bash function was not setup to accept arguments, the specific test I wanted to run wasn't actually being passed as an argument to unittest.
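A corrected version of the function that forwards its arguments to unittest (a sketch) would be:

```shell
# "$@" forwards every argument given to run_tests on to unittest,
# so run_tests tests.module_name.TestClass.test_func now works as intended.
run_tests () {
    python3 -m unittest "$@"
}
```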
Obviously, using -v makes no difference if you use the run_tests function to try and run a specific test.
When I tested with the -v option, I used the full command python3 -m unittest -v tests.module_name.TestClass.test_func, which is why I thought the -v option made it work. To check whether -v actually mattered, I lazily reran run_tests tests.module_name.TestClass.test_func from my shell history instead of typing out the full command, which is what caused the confusion.
The traceback provided by pytest is great and super useful for debugging.
Is there a way to run a script using the pytest api even if the script itself does not contain any test modules? Essentially, I would like a way to pinpoint and run a certain function in a script as if it were a test, but get the pytest-formatted traceback.
The pytest documentation on test discovery states that normally only functions whose name begins with test_ are run. This behaviour can be changed however with the python_functions configuration option. Try entering in the command line:
pytest [script.py] -o python_functions=[script_function]
in which you should replace [script.py] with your python script file path and replace [script_function] with the name of the function that you want to be run.
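If you always want pytest to pick up that function without passing -o each time, the same option can live in a pytest.ini (a sketch; script_function is a placeholder for your actual function name):

```ini
[pytest]
python_functions = script_function
```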
I am trying to write a custom plugin for nosetests (a plugin to run as a custom selector), which works fine, and I can use it by calling
nose.run(plugins=[CustomSelectorPlugin()])
However, I also want to run nose with the built-in xunit plugin, but I am unsure how to do this.
I have tried
nose.main(plugins=[CustomSelectorPlugin()], argv=['--with-xunit'])
and calling my program with the --with-xunit option, but these do not seem to work (well, everything runs fine but no nosetests.xml is generated).
How do I run both my plugin and xunit programmatically?
Thanks
Solved the problem
I needed to call
nose.run(addplugins=[CustomSelectorPlugin()])
Note the addplugins (as opposed to plugins) argument. This allows me to call my program with the command-line arg --with-xunit. Having just plugins meant the default plugin manager was overridden and never invoked.
Also, I should mention that to specify the args in code, the first element of argv is ignored by nose, so something like this should be used:
nose.run(addplugins=[CustomSelectorPlugin()], argv=['foo', '--with-xunit'])
Hope this helps future googlers
We started writing our functional and unit test cases in Python using the nose framework. We started learning Python while writing these tests. Since there are a lot of dependencies between our test classes/functions, we decided to use the proboscis framework on top of nose to control the order of execution.
We have quite a few print statements in our tests, and proboscis seems to be ignoring them! Tests run in the expected order and all of them execute, but our print output never reaches the console. Any idea what we are missing here?
BTW, we stopped deriving our classes from unittest.TestCase once we moved to proboscis, and decorated all classes and their member functions with @test.
Note: According to the Proboscis documentation "unused arguments get passed along to Nose or the unittest module", so the following should apply to Proboscis by replacing nosetests with python run_tests.py.
As @Wooble has mentioned in his comment, by default nose captures stdout and only displays it for failed tests. You can override this behaviour with the nosetests -s or --nocapture switch:
$ nosetests --nocapture
Like @Wooble also mentions in his comment, I recommend using the logging module instead of print. Then you only need to pass nosetests the -l DEBUG or --debug=DEBUG switch, where DEBUG is replaced by a comma-separated list of the names of the loggers you want displayed, to enable the logging output from your modules:
$ nosetests --debug=your-logger-name
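For reference, a minimal sketch of the logging-based approach ("your-logger-name" and the test body are placeholders):

```python
import logging

# Module-level logger; nose's --debug=your-logger-name switch matches on this name.
logger = logging.getLogger("your-logger-name")

def test_something():
    # Shows up with --debug, unlike a bare print under nose's output capture.
    logger.debug("checking 1 + 1")
    assert 1 + 1 == 2
```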