Showing the stack trace from a running Python application

I have this Python application that gets stuck from time to time and I can't find out where.
Is there any way to signal the Python interpreter to show you the exact code that's running?
Some kind of on-the-fly stack trace?
Related questions:
Print current call stack from a method in Python code
Check what a running process is doing: print stack trace of an uninstrumented Python program

I have a module I use for situations like this, where a process will be running for a long time but gets stuck sometimes for unknown and irreproducible reasons. It's a bit hacky, and only works on Unix (it requires signals):
import code, traceback, signal

def debug(sig, frame):
    """Interrupt running process, and provide a python prompt for
    interactive debugging."""
    d = {'_frame': frame}       # Allow access to frame object.
    d.update(frame.f_globals)   # Unless shadowed by global
    d.update(frame.f_locals)

    i = code.InteractiveConsole(d)
    message = "Signal received : entering python shell.\nTraceback:\n"
    message += ''.join(traceback.format_stack(frame))
    i.interact(message)

def listen():
    signal.signal(signal.SIGUSR1, debug)  # Register handler
To use, just call the listen() function at some point when your program starts up (you could even stick it in site.py to have all Python programs use it), and let it run. At any point, send the process a SIGUSR1 signal, using kill, or from Python:
os.kill(pid, signal.SIGUSR1)
This will cause the program to break to a Python console at the point it is currently at, showing you the stack trace and letting you manipulate the variables. Use Ctrl-D (EOF) to continue running (though note that you will probably interrupt any I/O, etc., at the point you signal, so it isn't fully non-intrusive).
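For reference, the equivalent from a shell, assuming you know the process id (e.g. from ps or pgrep):

$ kill -USR1 <pid>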
I've another script that does the same thing, except it communicates with the running process through a pipe (to allow for debugging backgrounded processes, etc.). It's a bit large to post here, but I've added it as a Python cookbook recipe.

The suggestion to install a signal handler is a good one, and I use it a lot. For example, bzr by default installs a SIGQUIT handler that invokes pdb.set_trace() to immediately drop you into a pdb prompt. (See the bzrlib.breakin module's source for the exact details.) With pdb you can not only get the current stack trace (with the (w)here command) but also inspect variables, etc.
However, sometimes I need to debug a process that I didn't have the foresight to install the signal handler in. On linux, you can attach gdb to the process and get a python stack trace with some gdb macros. Put http://svn.python.org/projects/python/trunk/Misc/gdbinit in ~/.gdbinit, then:
Attach gdb: gdb -p PID
Get the python stack trace: pystack
It's not totally reliable unfortunately, but it works most of the time. See also https://wiki.python.org/moin/DebuggingWithGdb
Finally, attaching strace can often give you a good idea what a process is doing.
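For the strace route, attaching to a running process is a one-liner; -p attaches to the pid and -f follows threads and child processes:

$ strace -f -p <pid>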

I am almost always dealing with multiple threads, and the main thread is generally not doing much, so what is most interesting is to dump all the stacks (which is more like Java's thread dump). Here is an implementation based on this blog:
import threading, sys, traceback

def dumpstacks(signal, frame):
    id2name = dict([(th.ident, th.name) for th in threading.enumerate()])
    code = []
    for threadId, stack in sys._current_frames().items():
        code.append("\n# Thread: %s(%d)" % (id2name.get(threadId, ""), threadId))
        for filename, lineno, name, line in traceback.extract_stack(stack):
            code.append('File: "%s", line %d, in %s' % (filename, lineno, name))
            if line:
                code.append("  %s" % (line.strip()))
    print("\n".join(code))

import signal
signal.signal(signal.SIGQUIT, dumpstacks)
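With this registered, you can trigger a dump from a shell (SIGQUIT can also be sent from the controlling terminal with Ctrl-\):

$ kill -QUIT <pid>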

Getting a stack trace of an unprepared Python program, running in a stock Python without debugging symbols, can be done with pyrasite. Worked like a charm for me on Ubuntu Trusty:
$ sudo pip install pyrasite
$ echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope
$ sudo pyrasite 16262 dump_stacks.py # dumps stacks to stdout/stderr of the python program
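The dump_stacks.py payload referenced above comes from pyrasite's bundled payloads; if you want to roll your own, a minimal sketch of an equivalent payload (it is executed inside the target process) could look like this:

import sys, traceback

# walk every thread's current frame and print its stack
for thread_id, frame in sys._current_frames().items():
    print("Thread: %s" % thread_id)
    print("".join(traceback.format_stack(frame)))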
(Hat tip to @Albert, whose answer contained a pointer to this, among other tools.)

>>> import traceback
>>> def x():
...     print traceback.extract_stack()
...
>>> x()
[('<stdin>', 1, '<module>', None), ('<stdin>', 2, 'x', None)]
You can also nicely format the stack trace, see the docs.
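For example, traceback.format_stack() returns the same frames as a list of preformatted strings, which is handy for logging rather than printing:

import traceback

stack_as_string = ''.join(traceback.format_stack())  # one big formatted string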
Edit: To simulate Java's behavior, as suggested by @Douglas Leeder, add this:
import signal
import traceback
signal.signal(signal.SIGUSR1, lambda sig, stack: traceback.print_stack(stack))
to the startup code in your application. Then you can print the stack by sending SIGUSR1 to the running Python process.

The traceback module has some nice functions, among them: print_stack:
import traceback
traceback.print_stack()
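print_stack() also takes optional arguments; a small sketch limiting the depth and redirecting the output:

import sys, traceback

traceback.print_stack(limit=5, file=sys.stderr)  # at most 5 frames, written to stderr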

You can try the faulthandler module. Install it using pip install faulthandler and add:
import faulthandler, signal
faulthandler.register(signal.SIGUSR1)
at the beginning of your program. Then send SIGUSR1 to your process (e.g. kill -USR1 42) to display the Python traceback of all threads to the standard output. Read the documentation for more options (e.g. logging to a file) and other ways to display the traceback.
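For example, to log the traceback to a file instead of standard output (a sketch; the file object must be kept open, since faulthandler writes to its descriptor when the signal arrives):

import faulthandler, signal

log = open('/tmp/tracebacks.log', 'w')  # path is just an example
faulthandler.register(signal.SIGUSR1, file=log, all_threads=True)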
The module is now part of Python 3.3. For Python 2, see http://faulthandler.readthedocs.org/

What really helped me here is spiv's tip (which I would vote up and comment on if I had the reputation points) for getting a stack trace out of an unprepared Python process. Except it didn't work until I modified the gdbinit script. So:
download https://svn.python.org/projects/python/trunk/Misc/gdbinit and put it in ~/.gdbinit
edit it, changing PyEval_EvalFrame to PyEval_EvalFrameEx [edit: no longer needed; the linked file already has this change as of 2010-01-14]
Attach gdb: gdb -p PID
Get the python stack trace: pystack

python -dv yourscript.py
That will make the interpreter run in debug mode and give you a trace of what the interpreter is doing.
If you want to interactively debug the code you should run it like this:
python -m pdb yourscript.py
That tells the Python interpreter to run your script with the module "pdb", the Python debugger. If you run it like that, the interpreter executes in interactive mode, much like GDB.
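An illustrative session (the path and line number are made up; the commands are standard pdb):

$ python -m pdb yourscript.py
> /path/to/yourscript.py(1)<module>()
(Pdb) b 42    # set a breakpoint at line 42
(Pdb) c       # continue until it is hit
(Pdb) w       # print the current stack trace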

It can be done with the excellent py-spy. It's a sampling profiler for Python programs, so its job is to attach to a Python process and sample its call stacks. Hence, py-spy dump --pid $SOME_PID is all you need to do to dump the call stacks of all threads in the $SOME_PID process. Typically it needs escalated privileges (to read the target process's memory).
Here's an example of how it looks for a threaded Python application.
$ sudo py-spy dump --pid 31080
Process 31080: python3.7 -m chronologer -e production serve -u www-data -m
Python v3.7.1 (/usr/local/bin/python3.7)

Thread 0x7FEF5E410400 (active): "MainThread"
    _wait (cherrypy/process/wspbus.py:370)
    wait (cherrypy/process/wspbus.py:384)
    block (cherrypy/process/wspbus.py:321)
    start (cherrypy/daemon.py:72)
    serve (chronologer/cli.py:27)
    main (chronologer/cli.py:84)
    <module> (chronologer/__main__.py:5)
    _run_code (runpy.py:85)
    _run_module_as_main (runpy.py:193)
Thread 0x7FEF55636700 (active): "_TimeoutMonitor"
    run (cherrypy/process/plugins.py:518)
    _bootstrap_inner (threading.py:917)
    _bootstrap (threading.py:885)
Thread 0x7FEF54B35700 (active): "HTTPServer Thread-2"
    accept (socket.py:212)
    tick (cherrypy/wsgiserver/__init__.py:2075)
    start (cherrypy/wsgiserver/__init__.py:2021)
    _start_http_thread (cherrypy/process/servers.py:217)
    run (threading.py:865)
    _bootstrap_inner (threading.py:917)
    _bootstrap (threading.py:885)
...
Thread 0x7FEF2BFFF700 (idle): "CP Server Thread-10"
    wait (threading.py:296)
    get (queue.py:170)
    run (cherrypy/wsgiserver/__init__.py:1586)
    _bootstrap_inner (threading.py:917)
    _bootstrap (threading.py:885)
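Recent py-spy releases can also include local variables for each frame; check py-spy dump --help for whether your version supports the flag:

$ sudo py-spy dump --pid 31080 --locals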

I would add this as a comment to haridsv's response, but I lack the reputation to do so:
Some of us are still stuck on a version of Python older than 2.6 (required for Thread.ident), so I got the code working in Python 2.5 (though without the thread name being displayed) as follows:
import traceback
import sys

def dumpstacks(signal, frame):
    code = []
    for threadId, stack in sys._current_frames().items():
        code.append("\n# Thread: %d" % (threadId))
        for filename, lineno, name, line in traceback.extract_stack(stack):
            code.append('File: "%s", line %d, in %s' % (filename, lineno, name))
            if line:
                code.append("  %s" % (line.strip()))
    print "\n".join(code)

import signal
signal.signal(signal.SIGQUIT, dumpstacks)

Take a look at the faulthandler module, new in Python 3.3. A faulthandler backport for use in Python 2 is available on PyPI.

On Solaris, you can use pstack(1). No changes to the Python code are necessary, e.g.:
# pstack 16000 | grep : | head
16000: /usr/bin/python2.6 /usr/lib/pkg.depotd --cfg svc:/application/pkg/serv
[ /usr/lib/python2.6/vendor-packages/cherrypy/process/wspbus.py:282 (_wait) ]
[ /usr/lib/python2.6/vendor-packages/cherrypy/process/wspbus.py:295 (wait) ]
[ /usr/lib/python2.6/vendor-packages/cherrypy/process/wspbus.py:242 (block) ]
[ /usr/lib/python2.6/vendor-packages/cherrypy/__init__.py:249 (quickstart) ]
[ /usr/lib/pkg.depotd:890 (<module>) ]
[ /usr/lib/python2.6/threading.py:256 (wait) ]
[ /usr/lib/python2.6/Queue.py:177 (get) ]
[ /usr/lib/python2.6/vendor-packages/pkg/server/depot.py:2142 (run) ]
[ /usr/lib/python2.6/threading.py:477 (run) ]
etc.

If you're on a Linux system, use the awesomeness of gdb with Python debug extensions (can be in python-dbg or python-debuginfo package). It also helps with multithreaded applications, GUI applications and C modules.
Run your program with:
$ gdb -ex r --args python <programname>.py [arguments]
This instructs gdb to prepare python <programname>.py <arguments> and run it.
Now when your program hangs, switch into the gdb console, press Ctrl+C and execute:
(gdb) thread apply all py-list
See example session and more info here and here.

I was looking for a while for a solution to debug my threads, and I found it here thanks to haridsv. I use a slightly simplified version employing traceback.print_stack():
import sys, traceback, signal
import threading
import os

def dumpstacks(signal, frame):
    id2name = dict((th.ident, th.name) for th in threading.enumerate())
    for threadId, stack in sys._current_frames().items():
        print(id2name[threadId])
        traceback.print_stack(f=stack)

signal.signal(signal.SIGQUIT, dumpstacks)

os.killpg(os.getpgid(0), signal.SIGQUIT)
For my needs I also filter threads by name.
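A minimal sketch of that filtering (the name prefix here is a made-up convention; adapt it to your own thread naming):

import sys, threading, traceback

def dumpstacks_filtered(signal, frame, prefix="CP Server"):
    id2name = dict((th.ident, th.name) for th in threading.enumerate())
    for threadId, stack in sys._current_frames().items():
        name = id2name.get(threadId, "")
        if name.startswith(prefix):  # only dump threads whose name matches
            print(name)
            traceback.print_stack(f=stack)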

I hacked together a tool which attaches to a running Python process and injects some code to get a Python shell.
See here: https://github.com/albertz/pydbattach

You can use the hypno package, like so:
hypno <pid> "import traceback; traceback.print_stack()"
This will print a stack trace to the program's stdout.
Alternatively, if you don't want to print anything to stdout, or you don't have access to it (a daemon, for example), you can use the madbg package, which is a Python debugger that allows you to attach to a running Python program and debug it in your current terminal. It is similar to pyrasite and pyringe, but newer, doesn't require gdb, and uses IPython for the debugger (which means colors and autocomplete).
To see the stack trace of a running program, you could run:
madbg attach <pid>
And in the debugger shell, enter:
bt
Disclaimer: I wrote both packages.

It's worth looking at Pydb, "an expanded version of the Python debugger loosely based on the gdb command set". It includes signal managers which can take care of starting the debugger when a specified signal is sent.
A 2006 Summer of Code project looked at adding remote-debugging features to pydb in a module called mpdb.

Since Austin 3.3, you can use the -w/--where option to emit the current stack trace. See https://stackoverflow.com/a/70905181/1838793
If you want to peek at a running Python application to see the "live" call stack in a top-like fashion, you can use austin-tui (https://github.com/p403n1x87/austin-tui). You can install it from PyPI with e.g.
pipx install austin-tui
Note that it requires the austin binary to work (https://github.com/p403n1x87/austin), but then you can attach to a running Python process with
austin-tui -p <pid>

pyringe is a debugger that can interact with running python processes, print stack traces, variables, etc. without any a priori setup.
While I've often used the signal handler solution in the past, it can still often be difficult to reproduce the issue in certain environments.

You can use PuDB, a Python debugger with a curses interface to do this. Just add
from pudb import set_interrupt_handler; set_interrupt_handler()
to your code and use Ctrl-C when you want to break. You can continue with c and break again multiple times if you miss it and want to try again.

I am in the GDB camp with the python extensions. Follow https://wiki.python.org/moin/DebuggingWithGdb, which means
dnf install gdb python-debuginfo or sudo apt-get install gdb python2.7-dbg
gdb python <pid of running process>
py-bt
Also consider info threads and thread apply all py-bt.

How to debug any function in console:
Create a function that calls pdb.set_trace() and then the function you want to debug:
>>> import pdb
>>> import my_function
>>> def f():
...     pdb.set_trace()
...     my_function()
...
Then call the created function:
>>> f()
> <stdin>(3)f()
(Pdb) s
--Call--
> <stdin>(1)my_function()
(Pdb)
Happy debugging :)

I don't know of anything similar to Java's response to SIGQUIT, so you might have to build it into your application. Maybe you could make a server in another thread that can get a stack trace in response to a message of some kind?

There is no way to hook into a running Python process and get reasonable results. What I do if a process locks up is attach strace and try to figure out what exactly is happening.
Unfortunately, strace is often the observer that "fixes" race conditions, so that the output is useless there too.

Use the inspect module.

import inspect
help(inspect.stack)

Help on function stack in module inspect:

stack(context=1)
    Return a list of records for the stack above the caller's frame.
I find it very helpful indeed.
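For example, a minimal sketch that prints the caller chain using inspect.stack():

import inspect

def where_am_i():
    # each record is (frame, filename, lineno, function, code_context, index)
    for record in inspect.stack():
        print("%s:%d in %s" % (record[1], record[2], record[3]))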

In Python 3, pdb will automatically install a signal handler the first time you use c(ont(inue)) in the debugger. Pressing Control-C afterwards will drop you right back in there. In Python 2, here's a one-liner which should work even in relatively old versions (tested in 2.7 but I checked Python source back to 2.4 and it looked okay):
import pdb, signal
signal.signal(signal.SIGINT, lambda sig, frame: pdb.Pdb().set_trace(frame))
pdb is worth learning if you spend any amount of time debugging Python. The interface is a bit obtuse but should be familiar to anyone who has used similar tools, such as gdb.

In case you need to do this with uWSGI, it has a Python Tracebacker built in, and it's just a matter of enabling it in the configuration (a number is attached to the name for each worker):
py-tracebacker=/var/run/uwsgi/pytrace
Once you have done this, you can print a backtrace simply by connecting to the socket:
uwsgi --connect-and-read /var/run/uwsgi/pytrace1

At the point where the code is run, you can insert this small snippet to see a nicely formatted printed stack trace. It assumes that you have a folder called logs at your project's root directory.
# DEBUG: START DEBUG -->
import traceback
with open('logs/stack-trace.log', 'w') as file:
    traceback.print_stack(file=file)
# DEBUG: END DEBUG --!

Related

Emacs - Running current file in Python

I am using ELPY in Emacs on Windows. How do I eval the current file inside Emacs using a shortcut?
When I use eval-buffer, I get the messages below:
Flymake unable to run without a buffer file name
Removed if __name__ == '__main__' construct, use a prefix argument to evaluate.
There is a built-in feature in Emacs, when working with the Python (inferior) shell, to prevent people from running the whole script unintentionally, which it does by redefining (or removing) __name__ before it runs.
Your script does send __name__ = "__main__", but it is overwritten due to this 'safety feature'.
To execute as you wish, running it as main, use the key binding:
`C-u C-c C-c`
If you would like to remap this to be something quicker or more familiar, try something like this:
(global-set-key (kbd "<f7>") (kbd "C-u C-c C-c"))
This was discussed in this thread here, where there is also some related extra information.

Calling Python 2 script from Python 3

I have two scripts, the main is in Python 3, and the second one is written in Python 2 (it also uses a Python 2 library).
There is one method in the Python 2 script I want to call from the Python 3 script, but I don't know how to cross this bridge.
Calling different Python versions from each other can be done very elegantly using execnet. The following function does the trick:
import execnet

def call_python_version(Version, Module, Function, ArgumentList):
    gw = execnet.makegateway("popen//python=python%s" % Version)
    channel = gw.remote_exec("""
        from %s import %s as the_function
        channel.send(the_function(*channel.receive()))
    """ % (Module, Function))
    channel.send(ArgumentList)
    return channel.receive()
Example: A my_module.py written in Python 2.7:
def my_function(X, Y):
    return "Hello %s %s!" % (X, Y)
Then the following function calls
result = call_python_version("2.7", "my_module", "my_function", ["Mr", "Bear"])
print(result)
result = call_python_version("2.7", "my_module", "my_function", ["Mrs", "Wolf"])
print(result)
result in
Hello Mr Bear!
Hello Mrs Wolf!
What happened is that a 'gateway' was instantiated, waiting for an argument list with channel.receive(). Once it came in, it was translated and passed to my_function. my_function returned the string it generated, and channel.send(...) sent the string back. On the other side of the gateway, channel.receive() catches that result and returns it to the caller. The caller finally prints the string as produced by my_function in the Python 3 module.
You could run python2 using the subprocess module, doing the following:
From python 3:

#!/usr/bin/env python3
import subprocess

python2_command = "python2 py2file.py arg1 arg2"  # launch your python2 script
process = subprocess.Popen(python2_command.split(), stdout=subprocess.PIPE)
output, error = process.communicate()  # receive output from the python2 script

where output stores whatever the Python 2 script printed to stdout.
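On Python 3.5+ the same thing is a bit simpler with subprocess.run (a sketch, assuming a python2 executable is on PATH):

import subprocess

result = subprocess.run(["python2", "py2file.py", "arg1", "arg2"],
                        stdout=subprocess.PIPE)
output = result.stdout  # whatever the script printed to stdout, as bytes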
Maybe too late, but there is one more simple option for calling python2.7 scripts:

script = ["python2.7", "script.py", "arg1"]
process = subprocess.Popen(" ".join(script),
                           shell=True,
                           env={"PYTHONPATH": "."})
I am running my python code with python 3, but I need a tool (ocropus) that is written with python 2.7. I spent a long time trying all these options with subprocess, and kept having errors, and the script would not complete. From the command line, it runs just fine. So I finally tried something simple that worked, but that I had not found in my searches online. I put the ocropus command inside a bash script:
#!/bin/bash
/usr/local/bin/ocropus-gpageseg $1
I call the bash script with subprocess.
command = [ocropus_gpageseg_path, current_path]
process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, error = process.communicate()
print('output', output, 'error', error)
This really gives the ocropus script its own little world, which it seems to need. I am posting this in the hope that it will save someone else some time.
It works for me if I call the Python 2 executable directly from a Python 3 environment:

python2_command = r'C:\Python27\python.exe python2_script.py arg1'
process = subprocess.Popen(python2_command.split(), stdout=subprocess.PIPE)
output, error = process.communicate()

python3_command = 'python python3_script.py arg1'
process = subprocess.Popen(python3_command.split(), stdout=subprocess.PIPE)
output, error = process.communicate()
I ended up creating a new function in the python3 script, which wraps the python2.7 code. It correctly formats error messages created by the python2.7 code, extends mikelsr's answer, and uses run() as recommended by the subprocess docs.
in bar.py (python2.7 code):
def foo27(input):
    return input * 2
in your python3 file:
import ast
import subprocess

def foo3(parameter):
    try:
        return ast.literal_eval(subprocess.run(
            [
                "C:/path/to/python2.7/python.exe", "-c",  # run python2.7 in command mode
                "from bar import foo27;" +
                "print(foo27({}))".format(parameter)  # print the output
            ],
            capture_output=True,
            check=True
        ).stdout.decode("utf-8"))  # evaluate the printed output
    except subprocess.CalledProcessError as e:
        print(e.stdout)
        raise Exception("foo27 errored with message below:\n\n{}"
                        .format(e.stderr.decode("utf-8")))

print(foo3(21))
# 42
This works when passing in simple python objects, like dicts, as the parameter but does not work for objects created by classes, eg. numpy arrays. These have to be serialized and re-instantiated on the other side of the barrier.
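One hedged sketch of crossing that barrier with richer (but still JSON-serializable) data: have the Python 2 side read JSON from stdin and write JSON to stdout. Here bar2.py is a hypothetical Python 2 helper that does json.load(sys.stdin) and json.dump(result, sys.stdout):

import json, subprocess

def call_py2_json(args):
    # bar2.py is hypothetical: it must json.load(sys.stdin) and
    # json.dump(...) its result to stdout
    proc = subprocess.run(["python2", "bar2.py"],
                          input=json.dumps(args).encode("utf-8"),
                          stdout=subprocess.PIPE,
                          check=True)
    return json.loads(proc.stdout.decode("utf-8"))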
Here is a problem & solution I had when mixing Python 2.x & 3.x scripts.
(Note: this was happening when running my Python 2.x software in the LiClipse IDE; when I ran it from a bash script on the command line it didn't have the problem.)
I am running a Python 2.6 process and needed to call/execute a Python 3.6 script.
The environment variable PYTHONPATH was set to point to the 2.6 Python software, so it was choking on the following:
File "/usr/lib64/python2.6/encodings/__init__.py", line 123
raise CodecRegistryError,\
This caused the 3.6 python script to fail.
So instead of calling the 3.6 program directly I created a bash script which nuked the PYTHONPATH environment variable.
#!/bin/bash
export PYTHONPATH=
## Now call the 3.6 python script
./36psrc/rpiapi/RPiAPI.py $1
Instead of calling them from Python 3, you could run them from a batch file that activates a conda environment, as below:
call C:\ProgramData\AnacondaNew\Scripts\activate.bat
C:\Python27\python.exe "script27.py"
C:\ProgramData\AnacondaNew\python.exe "script3.py"
call conda deactivate
pause
I recommend converting the Python 2 files to Python 3:
https://pythonconverter.com/

How can I add a command to the Python interactive shell?

I'm trying to save myself just a few keystrokes for a command I type fairly regularly in Python.
In my python startup script, I define a function called load which is similar to import, but adds some functionality. It takes a single string:
def load(s):
    # Do some stuff
    return something
In order to call this function I have to type
>>> load('something')
I would rather be able to simply type:
>>> load something
I am running Python with readline support, so I know there exists some programmability there, but I don't know if this sort of thing is possible using it.
I attempted to get around this by using InteractiveConsole and creating an instance of it in my startup file, like so:
import code, re, traceback

class LoadingInteractiveConsole(code.InteractiveConsole):
    def raw_input(self, prompt=""):
        s = raw_input(prompt)
        match = re.match(r'^load\s+(.+)', s)
        if match:
            module = match.group(1)
            try:
                load(module)
                print "Loaded " + module
            except ImportError:
                traceback.print_exc()
            return ''
        else:
            return s

console = LoadingInteractiveConsole()
console.interact("")
This works with the caveat that I have to hit Ctrl-D twice to exit the python interpreter: once to get out of my custom console, once to get out of the real one.
Is there a way to do this without writing a custom C program and embedding the interpreter into it?
Edit
Out of channel, I had the suggestion of appending this to the end of my startup file:
import sys
sys.exit()
It works well enough, but I'm still interested in alternative solutions.
You could try IPython, which gives a Python shell that allows many things, including automatic parentheses, which gives you the function call as you requested.
I think you want the cmd module.
See a tutorial here:
http://wiki.python.org/moin/CmdModule
Hate to answer my own question, but there hasn't been an answer that works for all the versions of Python I use. Aside from the solution I posted in my question edit (which is what I'm now using), here's another:
Edit .bashrc to contain the following lines:
alias python3='python3 ~/py/shellreplace.py'
alias python='python ~/py/shellreplace.py'
alias python27='python27 ~/py/shellreplace.py'
Then simply move all of the LoadingInteractiveConsole code into the file ~/py/shellreplace.py. Once the script finishes executing, Python will cease executing, and the improved interactive session will be seamless.

python: find out if running in shell or not (e.g. sun grid engine queue)

Is there a way to find out from within a Python program if it was started in a terminal or, e.g., in a batch engine like Sun Grid Engine?
The idea is to decide on printing some progress bars and other ASCII-interactive stuff, or not.
Thanks!
The standard way is isatty().
import sys

if sys.stdout.isatty():
    print("Interactive")
else:
    print("Non-interactive")
You can use os.getppid() to find out the process id of the parent process, and then use that process id to determine which program that process is running. More usefully, you could use sys.stdout.isatty(). That doesn't answer your title question, but appears to better solve the actual problem you explain (if you're running under a shell but your output is piped to some other process or redirected to a file, you probably don't want to emit "interactive stuff" there either).
Slightly shorter:
import sys
sys.stdout.isatty()
I have found the following to work on both Linux and Windows, in both the normal Python interpreter and IPython (though I can't say about IronPython):
isInteractive = hasattr(sys, 'ps1') or hasattr(sys, 'ipcompleter')
However, note that when using IPython, if the file is specified as a command-line argument, it will run before the interpreter becomes interactive. See what I mean below:
C:\>cat C:\demo.py
import sys, os
# ps1=python shell; ipcompleter=ipython shell
isInteractive = hasattr(sys, 'ps1') or hasattr(sys, 'ipcompleter')
print isInteractive and "This is interactive" or "Automated"
C:\>python c:\demo.py
Automated
C:\>python
>>> execfile('C:/demo.py')
This is interactive
C:\>ipython C:\demo.py
Automated # NOTE! Then ipython continues to start up...
IPython 0.9.1 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object'. ?object also works, ?? prints more.
In [2]: run C:/demo.py
This is interactive # NOTE!
HTH

How to detect that Python code is being executed through the debugger?

Is there a simple way to detect, within Python code, that this code is being executed through the Python debugger?
I have a small Python application that uses Java code (thanks to JPype). When I'm debugging the Python part, I'd like the embedded JVM to be passed debug options too.
Python debuggers (as well as profilers and coverage tools) use the sys.settrace function (in the sys module) to register a callback that gets called when interesting events happen.
If you're using Python 2.6, you can call sys.gettrace() to get the current trace callback function. If it's not None then you can assume you should be passing debug parameters to the JVM.
It's not clear how you could do this pre 2.6.
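A sketch of what the resulting check might look like (the JDWP agent string is a standard JVM debug option, shown only as an example of something you might pass to the embedded JVM):

import sys

jvm_args = []
if sys.gettrace() is not None:
    # a tracer is installed, so assume we are being debugged
    jvm_args.append('-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005')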
Another alternative, if you're using PyDev, that also works in a multithreaded environment:
try:
    import pydevd
    DEBUGGING = True
except ImportError:
    DEBUGGING = False
A solution working with Python 2.4 (it should work with any version above 2.1) and PyDev:
import inspect

def isdebugging():
    for frame in inspect.stack():
        if frame[1].endswith("pydevd.py"):
            return True
    return False
The same should work with pdb by simply replacing pydevd.py with pdb.py. As do3cc suggested, it tries to find the debugger within the stack of the caller.
Useful links:
The Python Debugger
The interpreter stack
Another way to do it hinges on how your Python interpreter is started. It requires that you start Python using -O for production and with no -O for debugging. So it does require an external discipline that might be hard to maintain, but then again it might fit your processes perfectly.
From the python docs (see "Built-in Constants" here or here):
__debug__
This constant is true if Python was not started with an -O option.
Usage would be something like:
if __debug__:
    print 'Python started without optimization'
If you're using PyDev, you can detect it this way:
import sys

if 'pydevd' in sys.modules:
    print "Debugger"
else:
    print "commandline"
From taking a quick look at the pdb docs and source code, it doesn't look like there is a built in way to do this. I suggest that you set an environment variable that indicates debugging is in progress and have your application respond to that.
$ USING_PDB=1 pdb yourprog.py
Then in yourprog.py:
import os

if os.environ.get('USING_PDB'):
    # debugging actions
    pass
You can try to peek into your stack trace.
https://docs.python.org/library/inspect.html#the-interpreter-stack
When you try this in a debugger session:

import inspect
inspect.getouterframes(inspect.currentframe())

you will get a list of frame records and can peek for any frames that refer to the pdb file.
I found a cleaner way to do it. Just add the following lines to your manage.py:

#!/usr/bin/env python
import os
import sys

if __debug__:
    sys.path.append('/path/to/views.py')

if __name__ == "__main__":
    ....

Then it will automatically add the path when you are debugging.
Since the original question doesn't specifically call out Python 2, this is to confirm @babbageclunk's suggested usage of sys also works in Python 3:
from sys import gettrace as sys_gettrace
DEBUG = sys_gettrace() is not None
print("debugger? %s" % DEBUG)
In my perllib, I use this check:
import sys

if 'pdb' in sys.modules:
    # We are being debugged
    pass

It assumes the user doesn't otherwise import pdb.
