Debugging a PyQt4 app?

I have a fairly simple app built with PyQt4. I wanted to debug one of the functions connected to one of the buttons in my app. However, when I do the following
python -m pdb app.pyw
> break app.pyw:55 # This is where the signal handling function starts.
things don't quite work as I'd hoped. Instead of breaking in the function where I've set the breakpoint and letting me step through it, the debugger enters an infinite loop printing QCoreApplication::exec: The event loop is already running, and I am unable to input anything. Is there a better way to do this?

You need to call QtCore.pyqtRemoveInputHook. I wrap it in my own version of set_trace:
def debug_trace():
    '''Set a tracepoint in the Python debugger that works with Qt.'''
    from PyQt4.QtCore import pyqtRemoveInputHook
    # Or for Qt5:
    # from PyQt5.QtCore import pyqtRemoveInputHook
    from pdb import set_trace
    pyqtRemoveInputHook()
    set_trace()
And when you are done debugging, you can call QtCore.pyqtRestoreInputHook(), ideally while you are still in pdb. Then, if the console spam starts after you hit enter, keep hitting 'c' (for continue) until the app resumes properly. (For some reason I had to hit 'c' several times; it kept going back into pdb, but after a few tries it resumed normally.)
For further info Google "pyqtRemoveInputHook pdb". (Really obvious isn't it? ;P)
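For example, the restore step can be done straight from the (Pdb) prompt just before continuing (a minimal sketch, assuming PyQt4; for Qt5 import from PyQt5.QtCore instead):
(Pdb) from PyQt4.QtCore import pyqtRestoreInputHook
(Pdb) pyqtRestoreInputHook()
(Pdb) c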

I had to use a "next" command at the trace point to get outside of that function first. For that I made a modification of the code from mgrandi:
def pyqt_set_trace():
    '''Set a tracepoint in the Python debugger that works with Qt.'''
    from PyQt4.QtCore import pyqtRemoveInputHook
    import pdb
    import sys
    pyqtRemoveInputHook()
    # set up the debugger
    debugger = pdb.Pdb()
    debugger.reset()
    # custom next to get outside of function scope
    debugger.do_next(None)  # run the 'next' command
    users_frame = sys._getframe().f_back  # frame where the user invoked `pyqt_set_trace()`
    debugger.interaction(users_frame, None)
This worked for me. I found the solution here: Python (pdb) - Queueing up commands to execute.
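Usage is the same as debug_trace() above. A sketch of a hypothetical slot just to show where the call goes (the class and method names here are illustrative, not from the answer):
from PyQt4 import QtGui

class MyWidget(QtGui.QWidget):
    def on_button_clicked(self):    # hypothetical slot connected to a button's clicked() signal
        pyqt_set_trace()            # pdb stops in this frame, with Qt's input hook removed
        title = self.windowTitle()  # you land here and can step with 'n'/'s' and inspect with 'p'
        print(title)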

In my tests, jamk's solution works, while the previous one, although simpler, does not.
In some situations, for reasons that are unclear to me, I've been able to debug Qt without doing any of this.

Related

vscode with debugpy: Recursive mode like pdb missing?

I am trying to configure the vscode debugger for recursive debugging with debugpy. In this regard:
I open a jupyter console and attach the vscode debugger to the kernel.
I am then able to call some function and let vscode stop at some breakpoint.
The ipython cell seems to be blocked now and won't let me input further commands.
Leaving the breakpoint brings the ipython cell back to life.
This behaviour is confusing, since I would rather expect to be able to continue working with the console, that is, evaluating statements, creating plots, ... while stopped at the breakpoint. In addition, I'd expect to debug further code recursively, i.e. to let vscode stop at some other breakpoint and return later on to the first breakpoint.
With Python's pdb this is possible by executing the "debug 'code'" command, which creates a nested pdb instance.
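For reference, a pdb session using that command looks roughly like this (sketch; the function name is just a placeholder):
(Pdb) debug some_function()
ENTERING RECURSIVE DEBUGGER
> <string>(1)<module>()
((Pdb)) c
LEAVING RECURSIVE DEBUGGER
(Pdb)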
Is it possible to replicate the pdb behaviour with vscode by configuration, i.e.:
Go to the next cell when the breakpoint is hit and allow user input in ipython.
Provide some interactive magic commands to control the debugger from ipython (e.g. debug 'code')
Thanks, Daniel
An example follows:
Define some dummy module 'my_module.py':
def fun():
    import pdb; pdb.set_trace()
    print('fun')

def rec_fun():
    import pdb; pdb.set_trace()
    print('fun')

a = 1
import pdb; pdb.set_trace()
b = 2
Then run a python console.
Then enter in your terminal:
import my_module # This will stop at the breakpoint and allow further user input in the console, e.g. `print(a)`
# press c to completely load the module
my_module.fun() # This will stop the debugger at the breakpoint inside function fun
import my_module # this will load the module for the debugger instance
debug my_module.rec_fun() # this will launch a nested debugger instance
# Press c+enter: The nested instance will stop at breakpoint inside rec_fun()
# Press c+enter: The debugger will move to the previous breakpoint
# Press c+enter: The debugger will leave and go back to normal REPL mode.
This workflow does not seem to be possible with vscode/debugpy. The key point, in my opinion, is keeping the shell alive while the pdb debugger is active; the interactive commands are a secondary problem. Is it possible to configure debugpy accordingly? What do you think?
Thanks,
Daniel

Python Debugging Using Pdb

I'm using an interactive graphical Python debugger with ipdb under the hood (Canopy's graphical debugger). The script I am working on has multiple imported modules and several calls to their respective functions. Whenever I attempt a debugging run, execution gets stuck somewhere within a call to an imported module's function (specifically subprocess). My two main questions are:
1) Does running in debug mode slow things down considerably? Is the code not actually stuck, but just running at a painfully slow rate?
2) Is there a way to completely pass over bits of code and run them as if I were not even debugging? I want to prevent the debugger from diving into subprocess and just execute it as if it were a normal run.
I might toss the graphical debugger and do everything from a terminal, but I would like to avoid that if I can because the graphical interface is really convenient and saves a lot of typing.
import pdb
a = "aaa"
pdb.set_trace()
b = "bbb"
c = "ccc"
final = a + b + c
print final
When you run the code, it starts debugging and control stops right after a = "aaa". Your output:
$ python abc.py
(Pdb) p a
'aaa'
(Pdb)
Thanks, Shashi
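To the second question (not covered by the answer above): at the (Pdb) prompt, n runs the current line without stepping into any call on it, while s steps into it. A small command sketch:
(Pdb) n    # 'next': run the current line, stepping over any function call on it
(Pdb) s    # 'step': step into the call instead
(Pdb) r    # 'return': keep going until the current function returns
(Pdb) c    # 'continue': run freely until the next breakpoint
So stepping with n at the subprocess call should execute it without the debugger diving into it.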

Is there an interpreter for Python similar to Pry for Ruby? [duplicate]

Is there a way to programmatically force a Python script to drop into a REPL at an arbitrary point in its execution, even if the script was launched from the command line?
I'm writing a quick and dirty plotting program, which I want to read data from stdin or a file, plot it, and then drop into the REPL to allow for the plot to be customized.
I frequently use this:
def interact():
    import code
    code.InteractiveConsole(locals=globals()).interact()
You could try using the interactive option for python:
python -i program.py
This will execute the code in program.py, then go to the REPL. Anything you define or import in the top level of program.py will be available.
Here's how you should do it (IPython > v0.11):
import IPython
IPython.embed()
For IPython <= v0.11:
from IPython.Shell import IPShellEmbed
ipshell = IPShellEmbed()
ipshell() # this call anywhere in your program will start IPython
You should use IPython, the Cadillac of Python REPLs. See http://ipython.org/ipython-doc/stable/interactive/reference.html#embedding-ipython
From the documentation:
It can also be useful in scientific computing situations where it is common to need to do some automatic, computationally intensive part and then stop to look at data, plots, etc. Opening an IPython instance will give you full access to your data and functions, and you can resume program execution once you are done with the interactive part (perhaps to stop again later, as many times as needed).
You can launch the debugger:
import pdb;pdb.set_trace()
Not sure what you want the REPL for, but the debugger is very similar.
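A related note (not in the original answer): in Python 3.2+ pdb also has an interact command that opens a plain interpreter seeded with the current frame's names, which gets quite close to a REPL:
(Pdb) interact
*interactive*
>>> # a normal >>> prompt, with the frame's locals and globals available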
To get the benefits of IPython together with debugger functionality, you should use ipdb. You can use it in the same way as pdb, with the addition of:
import ipdb
ipdb.set_trace()
I just did this in one of my own scripts (it runs inside an automation framework that is a huge PITA to instrument):
x = 0  # exit loop counter
while x == 0:
    user_input = raw_input("Please enter a command, or press q to quit: ")
    if user_input[0] == "q":
        x = 1
    else:
        try:
            print eval(user_input)
        except:
            print "I can't do that, Dave."
            continue
Just place this wherever you want a breakpoint, and you can check the state using the same syntax as the python interpreter (although it doesn't seem to let you do module imports).
It's not very elegant, but it doesn't require any other setup.
Great answers above, but if you would like this functionality in your IDE, here is how to do it in Visual Studio Code (v1.5.*) with the Python extension set up:
Highlight the lines you would like to run, then right click and select Run Selection/Line in Interactive Window from the drop-down.
Press shift + enter on your keyboard.
Right click on the Python file you want to execute in the file explorer and select Run Current File in Interactive Window
This will launch an interactive session, with linting, code completion and syntax highlighting.
Enter the code you would like to evaluate, and hit shift + enter on your keyboard to execute.
Enjoy Python!


How to detect that Python code is being executed through the debugger?

Is there a simple way to detect, within Python code, that this code is being executed through the Python debugger?
I have a small Python application that uses Java code (thanks to JPype). When I'm debugging the Python part, I'd like the embedded JVM to be passed debug options too.
Python debuggers (as well as profilers and coverage tools) use the sys.settrace function (in the sys module) to register a callback that gets called when interesting events happen.
If you're using Python 2.6, you can call sys.gettrace() to get the current trace callback function. If it's not None then you can assume you should be passing debug parameters to the JVM.
It's not clear how you could do this pre 2.6.
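A minimal sketch of that check (the variable name and the JVM flag here are illustrative, not from the question):
import sys

jvm_debug_args = []
if sys.gettrace() is not None:        # a debugger (pdb, PyDev, ...) has installed a trace callback
    jvm_debug_args.append('-Xdebug')  # hypothetical debug option to forward to the JVM via JPype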
Another alternative, if you're using Pydev, that also works in a multithreaded program:
try:
    import pydevd
    DEBUGGING = True
except ImportError:
    DEBUGGING = False
A solution that works with Python 2.4 (it should work with any version above 2.1) and Pydev:
import inspect

def isdebugging():
    for frame in inspect.stack():
        if frame[1].endswith("pydevd.py"):
            return True
    return False
The same should work with pdb by simply replacing pydevd.py with pdb.py. As do3cc suggested, it tries to find the debugger within the stack of the caller.
Useful links:
The Python Debugger
The interpreter stack
Another way to do it hinges on how your Python interpreter is started. It requires that you start Python with -O for production and without -O for debugging. So it does require an external discipline that might be hard to maintain... but then again it might fit your processes perfectly.
From the python docs (see "Built-in Constants" here or here):
__debug__
This constant is true if Python was not started with an -O option.
Usage would be something like:
if __debug__:
    print 'Python started without optimization'
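So, for example (illustrative file name):
python myapp.py       # __debug__ is True  -> enable debug behaviour
python -O myapp.py    # __debug__ is False -> production behaviour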
If you're using Pydev, you can detect it this way:
import sys

if 'pydevd' in sys.modules:
    print "Debugger"
else:
    print "commandline"
From taking a quick look at the pdb docs and source code, it doesn't look like there is a built in way to do this. I suggest that you set an environment variable that indicates debugging is in progress and have your application respond to that.
$ USING_PDB=1 pdb yourprog.py
Then in yourprog.py:
import os

if os.environ.get('USING_PDB'):
    # debugging actions
    pass
You can try to peek into your stacktrace.
https://docs.python.org/library/inspect.html#the-interpreter-stack
When you try this in a debugger session:
import inspect
inspect.getouterframes(inspect.currentframe())
you will get a list of frame records and can peek for any frames that refer to the pdb file.
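A sketch of what that peek could look like (my own wrapper around the same idea, not from the answer):
import inspect

def called_under_pdb():
    # Frame record index 1 is the filename of the code running in that frame;
    # when the script was launched via pdb, pdb.py shows up among the outer frames.
    return any(record[1].endswith("pdb.py")
               for record in inspect.getouterframes(inspect.currentframe()))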
I found a cleaner way to do it. Just add the following to your manage.py:
#!/usr/bin/env python
import os
import sys

if __debug__:
    sys.path.append('/path/to/views.py')

if __name__ == "__main__":
    ....
Then it would automatically add it when you are debugging.
Since the original question doesn't specifically call out Python 2, this is to confirm that babbageclunk's suggested usage of sys also works in Python 3:
from sys import gettrace as sys_gettrace
DEBUG = sys_gettrace() is not None
print("debugger? %s" % DEBUG)
In my perllib, I use this check:
import sys

if 'pdb' in sys.modules:
    # We are being debugged
    pass
It assumes the user doesn't otherwise import pdb.
