I am using py.test for unit testing my Python program. I wish to debug my test code with the Python debugger the normal way (by which I mean pdb.set_trace() in the code), but I can't make it work.
Putting pdb.set_trace() in the code doesn't work (raises IOError: reading from stdin while output is captured). I have also tried running py.test with the option --pdb but that doesn't seem to do the trick if I want to explore what happens before my assertion. It breaks when an assertion fails, and moving on from that line means terminating the program.
Does anyone know a way to get debugging, or is debugging and py.test just not meant to be together?
It's real simple: put an assert 0 where you want to start debugging in your code and run your tests with:
py.test --pdb
done :)
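For example (a minimal sketch; the test and helper names are made up), the deliberate assert 0 is where py.test --pdb drops you into the debugger:
# test_example.py -- hypothetical test; names are illustrative
def compute_total(items):
    return sum(items)

def test_compute_total():
    items = [1, 2, 3]
    assert 0  # deliberate failure: running with --pdb stops here
    assert compute_total(items) == 6
Run py.test --pdb test_example.py and you land in pdb with items already in scope, ready to inspect.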
Alternatively, if you are using pytest-2.0.1 or above, there is also the pytest.set_trace() helper, which you can put anywhere in your test code. Here are the docs. It takes care of internally disabling capturing before dropping you into the pdb command line.
I found that I can run py.test with capture disabled, then use pdb.set_trace() as usual.
> py.test --capture=no
============================= test session starts ==============================
platform linux2 -- Python 2.5.2 -- pytest-1.3.3
test path 1: project/lib/test/test_facet.py
project/lib/test/test_facet.py ...
> /home/jaraco/projects/project/lib/functions.py(158)do_something()
-> code_about_to_run('')
(Pdb)
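A minimal sketch of what that looks like in a test file (the names here are purely illustrative, not the project above):
# test_facet.py -- illustrative sketch
import pdb

def facet_count(items):
    pdb.set_trace()  # with py.test --capture=no this drops into pdb here
    return len(set(items))

def test_facet_count():
    assert facet_count(["a", "b", "a"]) == 2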
The easiest way is to use the py.test mechanism for creating a breakpoint:
http://pytest.org/latest/usage.html#setting-a-breakpoint-aka-set-trace
import pytest

def test_function():
    ...
    pytest.set_trace()  # invoke the PDB debugger and tracing
Or if you want pytest's debugger as a one-liner, change your import pdb; pdb.set_trace() into import pytest; pytest.set_trace()
Similar to Peter Lyon's answer, but with the exact code you need for pytest, you can add the following to the bottom of your pytest module (my_test_module.py):
import pytest

if __name__ == "__main__":
    pytest.main(["my_test_module.py", "-s"])
Then you can invoke the debugger from the command line:
pdb3 my_test_module.py
Boom. You're in the debugger and able to enter debugger commands. This method leaves your test code un-littered with set_trace() calls and will run inside pytest 'normally'.
Simply use pytest --trace test_your_test.py.
This will invoke the Python debugger at the start of the test.
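If your file has several tests and you only want to step through one of them, you can combine --trace with -k (the test name here is just an example):
pytest --trace -k test_login test_your_test.py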
I'm not familiar with py.test, but for unittest, you do the following. Maybe py.test is similar:
In your test module (mytestmodule.py):
import unittest

if __name__ == "__main__":
    unittest.main(module="mytestmodule")
Then run the test with
python -m pdb mytestmodule.py
You will get an interactive pdb shell.
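Putting it together, a minimal sketch of mytestmodule.py (the test class and assertion are just placeholders) would be:
# mytestmodule.py -- minimal sketch; the test contents are illustrative
import unittest

class TestSomething(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

if __name__ == "__main__":
    unittest.main(module="mytestmodule")
python -m pdb mytestmodule.py pauses before the first line; set a breakpoint with b and continue with c to run the tests under the debugger.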
Looking at the docs, py.test has a --pdb command line option:
https://docs.pytest.org/en/7.2.x/reference/reference.html#command-line-flags
Add and remove breakpoints without editing source files
Although you can add breakpoints by adding breakpoint() or set_trace() statements to your code, there are two issues with this approach:
Firstly, once you have started running your code, there is no way to remove your breakpoint. I often find that once I start running my code and reach an initial breakpoint, I want to place another one and remove the initial breakpoint. After breakpoint() drops me into the debugger I can add additional breakpoints, but I can't remove the initial one. Although this can be mitigated somewhat by putting the initial breakpoint statement higher up, if you have parametrised tests then even that is limited. I may find myself repeating cont very often.
Secondly, it requires changes to the source code. You need to remember to remove all breakpoint() commands before committing any code to version control, you have to remove them before switching branches, etc. I sometimes find I want to use the debugger to compare test runs between two branches, and having to edit the source code to add a breakpoint every time makes that a considerably slower and more error-prone exercise. I may even want to add a breakpoint in a library I'm calling, in which case the file I'm editing may not even be in my git repository but somewhere deep in my conda environment, increasing the risk of forgetting to remove it. Editing files to add breakpoints is, in my humble opinion, ugly.
To add and remove breakpoints interactively without editing any source files, you can invoke pytest as follows (in the bash shell):
python -mipdb $(type -p pytest) -s test_fileset.py
The -s flag is crucial here, because it stops pytest from capturing stdin and stdout; without it, pytest's capturing interferes with the debugger and everything goes wrong. The exact calling syntax will be different for different shells.
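Once you're inside ipdb you can then manage breakpoints with the usual pdb commands (the file name and line number below are just an example):
b test_fileset.py:42   # set a breakpoint at line 42 of test_fileset.py
b                      # list current breakpoints (each gets a number)
cl 1                   # clear (remove) breakpoint number 1
c                      # continue running until the next breakpoint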
Related
I want to use ipdb instead of pdb with py.test --pdb option. Is this possible? If so, how?
Clearly, I can use import ipdb; ipdb.set_trace() in the code, but that requires running the test, watching it fail, opening a file, finding the point of failure in said file, writing the above line, and re-running the tests. Lots of hassle when I could have something that bypasses all of that.
Use this option to set a custom debugger:
--pdbcls=IPython.terminal.debugger:Pdb
It can also be included in pytest.ini using addopts:
[pytest]
addopts = "--pdbcls=IPython.terminal.debugger:Pdb"
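For example, to drop into the IPython debugger at the point of failure (assuming IPython is installed), you can combine it with --pdb on the command line; the test file name here is just an example:
pytest --pdb --pdbcls=IPython.terminal.debugger:Pdb test_your_test.py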
Have you tried pytest-ipdb?
It looks like exactly what you are looking for.
There is a high-level logic error deep within my Python script, and pdb doesn't help to debug it. Is there any other way to see what is being executed after I run my script?
NOTE: Using pdb is too slow and inconvenient - I wish I could grep over all cases when my function is executed, instead of inspecting manually each and every call, set/unset breakpoints. The state is lost when I exit pdb and its user interface is more confusing than helpful - requires docs at hand.
UPDATE: made it clear that pdb is not an option, so the most popular Python debugging tips can not be applied
I would recommend using pdb. You can use
import pdb
at the top of your script, and then add the line
pdb.set_trace()
somewhere in the code where you want to trace the problem. When the script gets to that line, you will have an interactive console where you can check variable values, run your own checks, and see what is going on. You can use n to execute the next line, or c to continue to the next occurrence of set_trace(). Full documentation is here: http://docs.python.org/2/library/pdb.html.
Let me know if you have any specific questions!
No, there's no magic formula.
My best suggestion is to get a good IDE with a debugger, like JetBrains' PyCharm, and step through your code to see where you went wrong.
Most of the time these situations happen because you make assumptions about behavior that aren't true. Get a debugger, step through, and check your assumptions.
I found a way to do this using the excellent trace module that comes with Python.
An example of how to troubleshoot a module installation problem:
python -m trace -t setup.py install > execution.log
This will dump every source line executed during setup.py install to execution.log. I find this more useful than pdb, because the usability of pdb's command-line interface is very poor.
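Because everything ends up in a plain text file, you can grep over it instead of inspecting each call by hand (the script and function names below are just examples):
python -m trace -t myscript.py > execution.log   # trace an ordinary script the same way
grep -n "my_function" execution.log              # find every executed line that mentions my_function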
When running nosetests I would like to drop into an interactive console. However, if I put the following anywhere in my code:
import code
code.interact(local=locals())
Nose just prints (InteractiveConsole) and does not provide a console to type commands into. Pytest treats code.interact as a failure. Is there a way I can drop into the console when running tests while also watching files for changes?
One way to get an interactive session under pytest is to set a breakpoint with
import pdb
pdb.set_trace()
Normally, pytest will suppress this interactive session and will just hang when it hits the breakpoint. You can get around that by running pytest with the -s flag, which disables command line output capturing.
In the newest version of pytest, you can just use pytest.set_trace() without the -s flag to get the same behavior. See the docs for info.
I'd like to have my debugger run post_mortem() any time an exception is encountered, without having to modify the source that I'm working on. I see lots of examples that involve wrapping code in a try/except block, but I'd like to have it always run, regardless of what I'm working on.
I worked on a Python wrapper script, but it got to be ugly and pretty much unusable.
I use pudb, which is API-equivalent to pdb, so a pdb-specific answer is fine. I run code from within my editor (vim) and would like to have the pm come up any time an exception is encountered.
It took a few months of not doing anything about it, but I happened to stumble upon a solution. I'm sure this is nothing new for the more experienced.
I have the following in my environment:
export PYTHONUSERBASE=~/.python
export PYTHONPATH=$PYTHONPATH:$PYTHONUSERBASE
And I have the following file:
~/.python/lib/python2.7/site-packages/usercustomize.py
With the following contents:
import traceback
import sys
try:
    import pudb as debugger
except ImportError:
    import pdb as debugger

def drop_debugger(type, value, tb):
    traceback.print_exception(type, value, tb)
    debugger.pm()

sys.excepthook = drop_debugger

__builtins__['debugger'] = debugger
__builtins__['st'] = debugger.set_trace
Now, whether interactively or otherwise, the debugger always jumps in upon an exception. It might be nice to smarten this up some.
It's important to make sure that you have no no-global-site-packages.txt in your site-packages, as that file disables the usercustomize module under the default site.py (my virtualenv had a no-global-site-packages.txt).
Just in case it would help others, I left in the bit about modifying __builtins__. I find it quite handy to always be able to rely on some certain tools being available.
Flavor to taste.
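For example, with the hook above installed, running a purely hypothetical script that raises drops you straight into post-mortem:
# crash.py -- hypothetical example to exercise the excepthook above
def divide(a, b):
    return a / b

divide(1, 0)  # ZeroDivisionError; sys.excepthook prints the traceback and calls debugger.pm()
python crash.py then lands you at the debugger prompt in the frame that raised.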
A possible solution is to invoke pdb (I don't know about pudb, but I'll just assume it works the same) as a script:
python -m pdb script.py
Quoting the documentation:
When invoked as a script, pdb will automatically enter post-mortem debugging if the program being debugged exits abnormally. After post-mortem debugging (or after normal exit of the program), pdb will restart the program.
A solution for pdb since Python 3.2 is to start the program under the debugger via -m pdb and tell pdb to continue via -c c:
python3 -m pdb -c c program.py
Quoting the pdb documentation:
When invoked as a script, pdb will automatically enter post-mortem debugging if the program being debugged exits abnormally. After post-mortem debugging (or after normal exit of the program), pdb will restart the program.
As of pudb 2019.2: according to the pudb documentation, the official way involves changing your code a bit (and even then, I don't end up in post-mortem mode if I just run python3 program.py!):
To start the debugger without actually pausing use:
from pudb import set_trace; set_trace(paused=False)
at the top of your code. This will start the debugger without breaking, and run it until a predefined breakpoint is hit. You can also press b on a set_trace call inside the debugger, and it will prevent it from stopping there.
Although it is possible to start the program properly under the debugger via python3 -m pudb.run program.py, pudb's command-line args do not support anything like pdb's -c c. pudb's --pre-run=COMMAND is for external commands, not pudb commands.
What I currently do is run python3 -m pudb.run program.py without mentioning pudb or set_trace in program.py at all and press c on the keyboard. This way pudb enters its post-mortem mode upon any unhandled exception. However, this only works well when I know that the exception will be reproduced. For hunting down sporadically occurring exceptions I go back to the pdb solution.
I'm wondering if anybody has a hint on how to debug a unittest, or any other piece of code in django, for that matter, using a debugger like winpdb?
I'm trying to do a
winpdb manage.py test photo
which runs my unittest for my photo app, but winpdb crashes. Are there alternatives? What is the best way to do this?
I'm running linux, ubuntu 10.10.
You can use pdb to debug your program.
import pdb

def some_function():
    pdb.set_trace()
    some_other_computation()
When the program hits the set_trace method, execution will pause, and you will be put into an interactive shell. You can then examine variables, and step through your code.
Look at pudb, it is a full-screen, console-based visual debugger for Python. Very nice for debugging with good console UI.
import pudb

def some_function():
    pudb.set_trace()
    some_other_computation()
You'll need to pass the -s option (e.g. python manage.py test -s) to turn off output capturing, which would otherwise prevent the debugger from starting.
Add following lines to your code:
import rpdb2
rpdb2.start_embedded_debugger_interactive_password()
You can find more information here: http://winpdb.org/docs/embedded-debugging/
The problem is that Django creates another process in which it runs the application under test, so you cannot just use winpdb on your main Django process.
You should put a call to the rpdb2 debugger (winpdb's internal debugger) just before the place you want to test and attach with winpdb to that running debugger.
See a tutorial here: https://code.djangoproject.com/wiki/DebuggingDjangoWithWinpdb