In Python, there is a reload function to reload an imported module, but is there a way to reload the currently running script without restarting it? This would be very helpful when debugging a script, since you could change the code live while the script is running. In Visual Basic I remember a similar feature called "apply code changes", but I need it as a function call like refresh() which applies the code changes instantly.
This would work smoothly when an independent function in the script is modified and the change needs to be applied immediately without restarting the script.
In short, will
reload(self)
work?
reload(self) will not work, because reload() works on modules, not on live instances of classes. What you want is some logic external to your main application that checks whether it has to be reloaded. You also have to think about what is needed to re-create your application state upon reload.
Some hints in this direction:
Guido van Rossum once wrote xreload.py, which does a bit more than reload(). You would need a loop that checks for changes every x seconds and applies them; a sketch follows after these hints.
Also have a look at livecoding, which basically does this. EDIT: I mistook this project for something else (which I can't find now), sorry.
Perhaps this SO question will help you.
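A minimal sketch of that polling loop, assuming the code under development lives in a module named mymodule (a hypothetical name); it re-imports the module whenever its source file changes on disk:
import importlib
import os
import time

import mymodule  # hypothetical module being edited while the program runs

def watch_and_reload(module, interval=1.0):
    # Poll the module's source file and reload it when its mtime changes.
    path = module.__file__
    last_mtime = os.path.getmtime(path)
    while True:
        time.sleep(interval)
        mtime = os.path.getmtime(path)
        if mtime != last_mtime:
            last_mtime = mtime
            importlib.reload(module)
            print("reloaded", module.__name__)

watch_and_reload(mymodule)
Note that objects created before the reload keep pointing at the old code; only the module's attributes are rebound.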
Perhaps you mean something like this:
import pdb
import importlib
from os.path import basename

def function():
    print("hello world")

if __name__ == "__main__":
    # import this module, but not as __main__, so everything other than
    # this if statement is executed
    mainmodname = basename(__file__)[:-3]
    module = importlib.import_module(mainmodname)
    while True:
        # reload the module to check for changes
        importlib.reload(module)
        # update the globals of __main__ with any new or changed
        # functions or classes in the reloaded module
        globals().update(vars(module))
        function()
        pdb.set_trace()
Run the code, then change the contents of function and enter c at the prompt to run the next iteration of the loop.
test.py
class Test(object):
    def getTest(self):
        return 'test1'
testReload.py
from test import Test
t = Test()
print t.getTest()
# change the return value in test.py, then:
import importlib
module = importlib.import_module(Test.__module__)
reload(module)
from test import Test
t = Test()
print t.getTest()
Intro
reload is for imported modules. Documentation for reload advises against reloading __main__.
Reloading sys, __main__, builtins and other key modules is not recommended.
To achieve similar behavior for your script you will need to re-execute it. For normal scripts this will also reset the global state. I propose a solution below.
NOTE
The solution I propose is very powerful and should only be used for code you trust. Automatically executing code from unknown sources can lead to a world of pain, and Python is not a good environment for sandboxing unsafe code.
Solution
To programmatically execute a python script from another script you only need the built-in functions open, compile and exec.
Here is an example function that will re-execute the script it is in:
def refresh():
    with open(__file__) as fo:
        source_code = fo.read()
    byte_code = compile(source_code, __file__, "exec")
    exec(byte_code)
The above will, in normal circumstances, also reset any global variables you might have. If you wish to keep these variables, you should check whether or not they have already been set. This can be done with a try-except block catching NameError exceptions, but that can be tedious, so I propose using a flag variable instead.
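For illustration, the try-except variant might look like this (counter is just an example name):
try:
    counter              # already defined, so this is a re-execution
except NameError:
    counter = 0          # first execution: set the initial value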
Here is an example using the flag variable:
in_main = __name__ == "__main__"
first_run = "flag" not in globals()

if in_main and first_run:
    flag = True
None of these answers did the job properly for me, so I put together something very messy and very non-Pythonic to do the job myself. Even after running it for several weeks, I am still finding small issues and fixing them. One known issue is if your PWD/CWD changes.
Warning: this is very ugly code. Perhaps someone will make it pretty, but it does work.
Not only does it create a refresh() function that properly reloads your script in a manner such that any exceptions will display properly, but it also creates refresh_<scriptname> functions for previously loaded scripts, just in case you need to reload those.
Next I would probably add a require portion, so that scripts can reload other scripts -- but I'm not trying to make node.js here.
First, the "one-liner" that you need to insert in any script you want to refresh (it assumes os has already been imported).
with open(os.path.dirname(__file__) + os.sep + 'refresh.py', 'r') as f: \
    exec(compile(f.read().replace('__BASE__',
        os.path.basename(__file__).replace('.py', '')).replace('__FILE__',
        __file__), __file__, 'exec'))
And second, the actual refresh function which should be saved to refresh.py in the same directory. (See, room for improvement already).
def refresh(filepath=__file__, _globals=None, _locals=None):
    print("Reading {}...".format(filepath))
    if _globals is None:
        _globals = globals()
    _globals.update({
        "__file__": filepath,
        "__name__": "__main__",
    })
    with open(filepath, 'rb') as file:
        exec(compile(file.read(), filepath, 'exec'), _globals, _locals)

def refresh___BASE__():
    refresh("__FILE__")
Tested with Python 2.7 and 3.
Take a look at reload. You just need to install the plugin and use reload ./myscript.py, voilà.
If you are running in an interactive session you could use ipython autoreload
autoreload reloads modules automatically before entering the execution of code typed at the IPython prompt.
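Enabling it in an IPython session looks like this:
In [1]: %load_ext autoreload
In [2]: %autoreload 2
Mode 2 reloads all modules (with some exceptions) every time before executing the code you type.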
Of course this also works on module level, so you would do something like:
>>> import myscript
>>> myscript.main()
*do some changes in myscript.py*
>>> myscript.main()  # is now changed
I have a .pythonrc in my path, which gets loaded when I run python:
$ python
Loading pythonrc
>>>
The problem is that my .pythonrc is not loaded when I execute files:
$ python -i script.py
>>>
It would be very handy to have tab completion (and a few other things) when I load things interactively.
From the Python documentation for -i:
When a script is passed as first argument or the -c option is used, enter interactive mode after executing the script or the command, even when sys.stdin does not appear to be a terminal. The PYTHONSTARTUP file is not read.
I believe this is done so that scripts run predictably for all users, and do not depend on anything in a user's particular PYTHONSTARTUP file.
As Greg has noted, there is a very good reason why -i behaves the way it does. However, I do find it pretty useful to be able to have my PYTHONSTARTUP loaded when I want an interactive session. So, here's the code I use when I want to be able to have PYTHONSTARTUP active in a script run with -i.
if __name__ == '__main__':
    # do normal stuff
    # and at the end of the file:
    import sys
    if sys.flags.interactive == 1:
        import os
        myPythonPath = os.environ['PYTHONSTARTUP'].split(os.sep)
        sys.path.append(os.sep.join(myPythonPath[:-1]))
        # the filename minus the trailing extension, if the extension exists
        pythonrcName = ''.join(myPythonPath[-1].split('.')[:-1])
        pythonrc = __import__(pythonrcName)
        for attr in dir(pythonrc):
            __builtins__.__dict__[attr] = getattr(pythonrc, attr)
        sys.path.remove(os.sep.join(myPythonPath[:-1]))
        del sys, os, pythonrc
Note that this is fairly hacky and I never do this without ensuring that my pythonrc isn't accidentally clobbering variables and builtins.
Apparently the user module provides this, but has been removed in Python 3.0. It is a bit of a security hole, depending what's in your pythonrc...
In addition to Chinmay Kanchi and Greg Hewgill's answers, I'd like to add that IPython and BPython work fine in this case. Perhaps it's time for you to switch? :)
I'm trying to save myself just a few keystrokes for a command I type fairly regularly in Python.
In my python startup script, I define a function called load which is similar to import, but adds some functionality. It takes a single string:
def load(s):
    # Do some stuff
    return something
In order to call this function I have to type
>>> load('something')
I would rather be able to simply type:
>>> load something
I am running Python with readline support, so I know there exists some programmability there, but I don't know if this sort of thing is possible using it.
I attempted to get around this by using InteractiveConsole and creating an instance of it in my startup file, like so:
import code, re, traceback

class LoadingInteractiveConsole(code.InteractiveConsole):
    def raw_input(self, prompt=""):
        s = raw_input(prompt)
        match = re.match(r'^load\s+(.+)', s)
        if match:
            module = match.group(1)
            try:
                load(module)
                print "Loaded " + module
            except ImportError:
                traceback.print_exc()
            return ''
        else:
            return s

console = LoadingInteractiveConsole()
console.interact("")
This works with the caveat that I have to hit Ctrl-D twice to exit the python interpreter: once to get out of my custom console, once to get out of the real one.
Is there a way to do this without writing a custom C program and embedding the interpreter into it?
Edit
Out of channel, I had the suggestion of appending this to the end of my startup file:
import sys
sys.exit()
It works well enough, but I'm still interested in alternative solutions.
You could try IPython, which gives you a Python shell with many extra features, including automatic parentheses, which gives you the function call you requested.
I think you want the cmd module.
See a tutorial here:
http://wiki.python.org/moin/CmdModule
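A minimal sketch of the cmd approach, where load something is dispatched to a do_load method (the print is a stand-in for the asker's real load() function):
import cmd

class LoadingShell(cmd.Cmd):
    prompt = '>>> '

    def do_load(self, arg):
        # 'load something' arrives here with arg == 'something';
        # replace the print with: load(arg)
        print('loading', arg)

    def do_EOF(self, line):
        return True  # a single Ctrl-D exits the loop

LoadingShell().cmdloop()
Because cmd.Cmd owns the whole read-eval loop, exiting it once drops you straight back to the shell, avoiding the double Ctrl-D problem.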
Hate to answer my own question, but there hasn't been an answer that works for all the versions of Python I use. Aside from the solution I posted in my question edit (which is what I'm now using), here's another:
Edit .bashrc to contain the following lines:
alias python3='python3 ~/py/shellreplace.py'
alias python='python ~/py/shellreplace.py'
alias python27='python27 ~/py/shellreplace.py'
Then simply move all of the LoadingInteractiveConsole code into the file ~/py/shellreplace.py. Once the script finishes executing, Python will cease executing, and the improved interactive session will take over seamlessly.
I would like to use Python 2.6's version of subprocess, because it allows the Popen.terminate() function, but I'm stuck with Python 2.5. Is there some reasonably clean way to use the newer version of the module in my 2.5 code? Some sort of from __future__ import subprocess_module?
I know this question has already been answered, but for what it's worth, I've used the subprocess.py that ships with Python 2.6 in Python 2.3 and it's worked fine. If you read the comments at the top of the file it says:
# This module should remain compatible with Python 2.2, see PEP 291.
There isn't really a great way to do it. subprocess is implemented in Python (as opposed to C), so you could conceivably copy the module somewhere and use it (hoping, of course, that it doesn't use any 2.6 goodness).
On the other hand, you could simply implement what subprocess claims to do and write a function that sends SIGTERM on *nix and calls TerminateProcess on Windows. The following implementation has been tested on Linux and in a Windows XP VM; you'll need the Python Windows extensions (pywin32):
import sys

def terminate(process):
    """
    Kills a process; useful on 2.5 where subprocess.Popen objects
    don't have a terminate method.
    """
    def terminate_win(process):
        import win32process
        return win32process.TerminateProcess(process._handle, -1)

    def terminate_nix(process):
        import os
        import signal
        return os.kill(process.pid, signal.SIGTERM)

    terminate_default = terminate_nix
    handlers = {
        "win32": terminate_win,
        "linux2": terminate_nix,
    }
    return handlers.get(sys.platform, terminate_default)(process)
That way you only have to maintain the terminate code rather than the entire module.
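Hypothetical usage of the terminate() helper above:
import subprocess

proc = subprocess.Popen(["sleep", "1000"])  # some long-running child (*nix example)
terminate(proc)                             # SIGTERM on *nix, TerminateProcess on Windows
proc.wait()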
While this doesn't directly answer your question, it may be worth knowing.
Imports from __future__ actually only change compiler options, so while they can turn with into a statement or make string literals produce unicodes instead of strs, they can't change the capabilities and features of modules in the Python standard library.
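A quick illustration of the distinction, on Python 2:
# A compiler flag can change how the division operator is compiled...
from __future__ import division
print 1 / 2   # 0.5 instead of 0

# ...but there is no flag that swaps in a newer stdlib module;
# "from __future__ import subprocess_module" simply does not exist.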
I followed Kamil Kisiel's suggestion of using Python 2.6's subprocess.py in Python 2.5 and it worked perfectly. To make it easier, I created a distutils package that you can easy_install and/or include in buildout.
To use subprocess from Python 2.6 in a Python 2.5 project:
easy_install taras.python26
In your code:
from taras.python26 import subprocess
In buildout:
[buildout]
parts = subprocess26

[subprocess26]
recipe = zc.recipe.egg
eggs = taras.python26
Here are some ways to end processes on Windows, taken directly from
http://code.activestate.com/recipes/347462/
# Create a process that won't end on its own
import subprocess
process = subprocess.Popen(['python.exe', '-c', 'while 1: pass'])
# Kill the process using pywin32
import win32api
win32api.TerminateProcess(int(process._handle), -1)
# Kill the process using ctypes
import ctypes
ctypes.windll.kernel32.TerminateProcess(int(process._handle), -1)
# Kill the process using pywin32 and pid
import win32api
PROCESS_TERMINATE = 1
handle = win32api.OpenProcess(PROCESS_TERMINATE, False, process.pid)
win32api.TerminateProcess(handle, -1)
win32api.CloseHandle(handle)
# Kill the process using ctypes and pid
import ctypes
PROCESS_TERMINATE = 1
handle = ctypes.windll.kernel32.OpenProcess(PROCESS_TERMINATE, False, process.pid)
ctypes.windll.kernel32.TerminateProcess(handle, -1)
ctypes.windll.kernel32.CloseHandle(handle)
Well, Python is open source, so you are free to take that Popen.terminate functionality from 2.6 and move it into your own code, or use it as a reference to implement your own.
For reasons that should be obvious, there's no way to have a hybrid of Python that can import portions of newer versions.
I have this Python application that gets stuck from time to time and I can't find out where.
Is there any way to signal Python interpreter to show you the exact code that's running?
Some kind of on-the-fly stacktrace?
Related questions:
Print current call stack from a method in Python code
Check what a running process is doing: print stack trace of an uninstrumented Python program
I have a module I use for situations like this, where a process will be running for a long time but gets stuck sometimes for unknown and irreproducible reasons. It's a bit hacky, and only works on Unix (it requires signals):
import code, traceback, signal

def debug(sig, frame):
    """Interrupt running process, and provide a python prompt for
    interactive debugging."""
    d = {'_frame': frame}       # Allow access to frame object.
    d.update(frame.f_globals)   # Unless shadowed by global
    d.update(frame.f_locals)

    i = code.InteractiveConsole(d)
    message = "Signal received : entering python shell.\nTraceback:\n"
    message += ''.join(traceback.format_stack(frame))
    i.interact(message)

def listen():
    signal.signal(signal.SIGUSR1, debug)  # Register handler
To use it, just call the listen() function at some point when your program starts up (you could even stick it in site.py to have all Python programs use it), and let it run. At any point, send the process a SIGUSR1 signal, using kill, or in Python:
os.kill(pid, signal.SIGUSR1)
This will cause the program to break into a Python console at the point it is currently at, showing you the stack trace and letting you manipulate the variables. Use Ctrl-D (EOF) to continue running (though note that you will probably interrupt any I/O etc. at the point you signal, so it isn't fully non-intrusive).
I have another script that does the same thing, except it communicates with the running process through a pipe (to allow for debugging backgrounded processes etc.). It's a bit large to post here, but I've added it as a Python cookbook recipe.
The suggestion to install a signal handler is a good one, and I use it a lot. For example, bzr by default installs a SIGQUIT handler that invokes pdb.set_trace() to immediately drop you into a pdb prompt. (See the bzrlib.breakin module's source for the exact details.) With pdb you can not only get the current stack trace (with the (w)here command) but also inspect variables, etc.
However, sometimes I need to debug a process that I didn't have the foresight to install the signal handler in. On linux, you can attach gdb to the process and get a python stack trace with some gdb macros. Put http://svn.python.org/projects/python/trunk/Misc/gdbinit in ~/.gdbinit, then:
Attach gdb: gdb -p PID
Get the python stack trace: pystack
It's not totally reliable unfortunately, but it works most of the time. See also https://wiki.python.org/moin/DebuggingWithGdb
Finally, attaching strace can often give you a good idea what a process is doing.
I am almost always dealing with multiple threads, and the main thread is generally not doing much, so what is most interesting is to dump all the stacks (which is more like Java's thread dump). Here is an implementation based on this blog:
import threading, sys, traceback

def dumpstacks(signal, frame):
    id2name = dict([(th.ident, th.name) for th in threading.enumerate()])
    code = []
    for threadId, stack in sys._current_frames().items():
        code.append("\n# Thread: %s(%d)" % (id2name.get(threadId, ""), threadId))
        for filename, lineno, name, line in traceback.extract_stack(stack):
            code.append('File: "%s", line %d, in %s' % (filename, lineno, name))
            if line:
                code.append("  %s" % (line.strip()))
    print("\n".join(code))

import signal
signal.signal(signal.SIGQUIT, dumpstacks)
Getting a stack trace of an unprepared Python program, running in a stock Python without debugging symbols, can be done with pyrasite. Worked like a charm for me on Ubuntu Trusty:
$ sudo pip install pyrasite
$ echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope
$ sudo pyrasite 16262 dump_stacks.py # dumps stacks to stdout/stderr of the python program
(Hat tip to #Albert, whose answer contained a pointer to this, among other tools.)
>>> import traceback
>>> def x():
...     print traceback.extract_stack()
...
>>> x()
[('<stdin>', 1, '<module>', None), ('<stdin>', 2, 'x', None)]
You can also nicely format the stack trace, see the docs.
Edit: To simulate Java's behavior, as suggested by #Douglas Leeder, add this:
import signal
import traceback
signal.signal(signal.SIGUSR1, lambda sig, stack: traceback.print_stack(stack))
to the startup code in your application. Then you can print the stack by sending SIGUSR1 to the running Python process.
The traceback module has some nice functions, among them: print_stack:
import traceback
traceback.print_stack()
You can try the faulthandler module. Install it using pip install faulthandler and add:
import faulthandler, signal
faulthandler.register(signal.SIGUSR1)
at the beginning of your program. Then send SIGUSR1 to your process (ex: kill -USR1 42) to display the Python traceback of all threads to the standard output. Read the documentation for more options (ex: log into a file) and other ways to display the traceback.
The module is now part of Python 3.3. For Python 2, see http://faulthandler.readthedocs.org/
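For instance, the log-to-file option might be used like this (keep a reference to the file object alive, since faulthandler writes to its file descriptor; the path is just an example):
import faulthandler, signal

trace_file = open("/tmp/tracebacks.log", "w")
faulthandler.register(signal.SIGUSR1, file=trace_file, all_threads=True)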
What really helped me here is spiv's tip (which I would vote up and comment on if I had the reputation points) for getting a stack trace out of an unprepared Python process. Except it didn't work until I modified the gdbinit script. So:
download https://svn.python.org/projects/python/trunk/Misc/gdbinit and put it in ~/.gdbinit
edit it, changing PyEval_EvalFrame to PyEval_EvalFrameEx [edit: no longer needed; the linked file already has this change as of 2010-01-14]
Attach gdb: gdb -p PID
Get the python stack trace: pystack
python -dv yourscript.py
That will make the interpreter run in debug mode and give you a trace of what the interpreter is doing.
If you want to interactively debug the code you should run it like this:
python -m pdb yourscript.py
That tells the Python interpreter to run your script under the pdb module, the Python debugger. Run that way, the interpreter executes in interactive mode, much like GDB.
It can be done with the excellent py-spy. It's a sampling profiler for Python programs, so its job is to attach to a Python process and sample its call stacks. Hence, py-spy dump --pid $SOME_PID is all you need to do to dump the call stacks of all threads in the $SOME_PID process. Typically it needs escalated privileges (to read the target process's memory).
Here's an example of how it looks for a threaded Python application.
$ sudo py-spy dump --pid 31080
Process 31080: python3.7 -m chronologer -e production serve -u www-data -m
Python v3.7.1 (/usr/local/bin/python3.7)
Thread 0x7FEF5E410400 (active): "MainThread"
_wait (cherrypy/process/wspbus.py:370)
wait (cherrypy/process/wspbus.py:384)
block (cherrypy/process/wspbus.py:321)
start (cherrypy/daemon.py:72)
serve (chronologer/cli.py:27)
main (chronologer/cli.py:84)
<module> (chronologer/__main__.py:5)
_run_code (runpy.py:85)
_run_module_as_main (runpy.py:193)
Thread 0x7FEF55636700 (active): "_TimeoutMonitor"
run (cherrypy/process/plugins.py:518)
_bootstrap_inner (threading.py:917)
_bootstrap (threading.py:885)
Thread 0x7FEF54B35700 (active): "HTTPServer Thread-2"
accept (socket.py:212)
tick (cherrypy/wsgiserver/__init__.py:2075)
start (cherrypy/wsgiserver/__init__.py:2021)
_start_http_thread (cherrypy/process/servers.py:217)
run (threading.py:865)
_bootstrap_inner (threading.py:917)
_bootstrap (threading.py:885)
...
Thread 0x7FEF2BFFF700 (idle): "CP Server Thread-10"
wait (threading.py:296)
get (queue.py:170)
run (cherrypy/wsgiserver/__init__.py:1586)
_bootstrap_inner (threading.py:917)
_bootstrap (threading.py:885)
I would add this as a comment to haridsv's response, but I lack the reputation to do so:
Some of us are still stuck on a version of Python older than 2.6 (required for Thread.ident), so I got the code working in Python 2.5 (though without the thread name being displayed) as follows:
import traceback
import sys

def dumpstacks(signal, frame):
    code = []
    for threadId, stack in sys._current_frames().items():
        code.append("\n# Thread: %d" % (threadId))
        for filename, lineno, name, line in traceback.extract_stack(stack):
            code.append('File: "%s", line %d, in %s' % (filename, lineno, name))
            if line:
                code.append("  %s" % (line.strip()))
    print "\n".join(code)

import signal
signal.signal(signal.SIGQUIT, dumpstacks)
Take a look at the faulthandler module, new in Python 3.3. A faulthandler backport for use in Python 2 is available on PyPI.
On Solaris, you can use pstack(1). No changes to the Python code are necessary. e.g.
# pstack 16000 | grep : | head
16000: /usr/bin/python2.6 /usr/lib/pkg.depotd --cfg svc:/application/pkg/serv
[ /usr/lib/python2.6/vendor-packages/cherrypy/process/wspbus.py:282 (_wait) ]
[ /usr/lib/python2.6/vendor-packages/cherrypy/process/wspbus.py:295 (wait) ]
[ /usr/lib/python2.6/vendor-packages/cherrypy/process/wspbus.py:242 (block) ]
[ /usr/lib/python2.6/vendor-packages/cherrypy/_init_.py:249 (quickstart) ]
[ /usr/lib/pkg.depotd:890 (<module>) ]
[ /usr/lib/python2.6/threading.py:256 (wait) ]
[ /usr/lib/python2.6/Queue.py:177 (get) ]
[ /usr/lib/python2.6/vendor-packages/pkg/server/depot.py:2142 (run) ]
[ /usr/lib/python2.6/threading.py:477 (run)
etc.
If you're on a Linux system, use the awesomeness of gdb with the Python debug extensions (in the python-dbg or python-debuginfo package). It also helps with multithreaded applications, GUI applications and C modules.
Run your program with:
$ gdb -ex r --args python <programname>.py [arguments]
This instructs gdb to prepare python <programname>.py <arguments> and run it.
Now when your program hangs, switch into the gdb console, press Ctrl+C and execute:
(gdb) thread apply all py-list
See example session and more info here and here.
I was looking for a while for a solution to debug my threads, and I found it here thanks to haridsv. I use a slightly simplified version employing traceback.print_stack():
import sys, traceback, signal
import threading
import os

def dumpstacks(signal, frame):
    id2name = dict((th.ident, th.name) for th in threading.enumerate())
    for threadId, stack in sys._current_frames().items():
        print(id2name[threadId])
        traceback.print_stack(f=stack)

signal.signal(signal.SIGQUIT, dumpstacks)

os.killpg(os.getpgid(0), signal.SIGQUIT)
For my needs I also filter threads by name.
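That filtering might look something like this (the "worker" prefix is just an illustration):
import sys, threading, traceback

def dump_matching_stacks(name_prefix="worker"):
    # map thread ids back to Thread objects so names can be checked
    id2thread = {th.ident: th for th in threading.enumerate()}
    for thread_id, stack in sys._current_frames().items():
        th = id2thread.get(thread_id)
        if th is not None and th.name.startswith(name_prefix):
            print(th.name)
            traceback.print_stack(f=stack)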
I hacked together a tool which attaches to a running Python process and injects some code to get a Python shell.
See here: https://github.com/albertz/pydbattach
You can use the hypno package, like so:
hypno <pid> "import traceback; traceback.print_stack()"
This would print a stack trace to the program's stdout.
Alternatively, if you don't want to print anything to stdout, or you don't have access to it (in a daemon, for example), you could use the madbg package, which is a Python debugger that allows you to attach to a running Python program and debug it in your current terminal. It is similar to pyrasite and pyringe, but newer, doesn't require gdb, and uses IPython for the debugger (which means colors and autocompletion).
To see the stack trace of a running program, you could run:
madbg attach <pid>
And in the debugger shell, enter:
bt
Disclaimer: I wrote both packages.
It's worth looking at Pydb, "an expanded version of the Python debugger loosely based on the gdb command set". It includes signal managers which can take care of starting the debugger when a specified signal is sent.
A 2006 Summer of Code project looked at adding remote-debugging features to pydb in a module called mpdb.
Since Austin 3.3, you can use the -w/--where option to emit the current stack trace. See https://stackoverflow.com/a/70905181/1838793
If you want to peek at a running Python application to see the "live" call stack in a top-like fashion, you can use austin-tui (https://github.com/p403n1x87/austin-tui). You can install it from PyPI with e.g.
pipx install austin-tui
Note that it requires the austin binary to work (https://github.com/p403n1x87/austin), but then you can attach to a running Python process with
austin-tui -p <pid>
pyringe is a debugger that can interact with running Python processes, print stack traces, variables, etc., without any a priori setup.
While I've often used the signal-handler solution in the past, it can still be difficult to reproduce the issue in certain environments.
You can use PuDB, a Python debugger with a curses interface to do this. Just add
from pudb import set_interrupt_handler; set_interrupt_handler()
to your code and use Ctrl-C when you want to break. You can continue with c and break again multiple times if you miss it and want to try again.
I am in the GDB camp with the python extensions. Follow https://wiki.python.org/moin/DebuggingWithGdb, which means
dnf install gdb python-debuginfo or sudo apt-get install gdb python2.7-dbg
gdb python <pid of running process>
py-bt
Also consider info threads and thread apply all py-bt.
How to debug any function in the console:
Create a wrapper function that calls pdb.set_trace() and then the function you want to debug:
>>> import pdb
>>> from my_module import my_function  # wherever my_function lives
>>> def f():
...     pdb.set_trace()
...     my_function()
...
Then call the created function:
>>> f()
> <stdin>(3)f()
(Pdb) s
--Call--
> <stdin>(1)my_function()
(Pdb)
Happy debugging :)
I don't know of anything similar to Java's response to SIGQUIT, so you might have to build it into your application. Maybe you could make a server in another thread that can get a stack trace in response to a message of some kind? A sketch follows.
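A rough sketch of that idea: a daemon thread listening on an arbitrary local TCP port (7777 here) that replies to any connection with the stacks of all threads:
import socket
import sys
import threading
import traceback

def stack_server(port=7777):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        # sys._current_frames() yields one frame per thread, as in the
        # signal-handler recipes above
        for thread_id, frame in sys._current_frames().items():
            conn.sendall(("# Thread %d\n" % thread_id).encode())
            conn.sendall("".join(traceback.format_stack(frame)).encode())
        conn.close()

threading.Thread(target=stack_server, daemon=True).start()
You could then query it with, say, nc 127.0.0.1 7777.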
There is no way to hook into a running Python process and get reasonable results. What I do if a process locks up is attach strace and try to figure out what exactly is happening.
Unfortunately, strace is often the observer that "fixes" race conditions, so the output is useless there too.
Use the inspect module.
>>> import inspect
>>> help(inspect.stack)
Help on function stack in module inspect:

stack(context=1)
    Return a list of records for the stack above the caller's frame.
I find it very helpful indeed.
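For example, a small illustration of what inspect.stack() gives you (the attribute access on the frame records needs Python 3.5+):
import inspect

def whoami():
    # stack()[0] is this frame; stack()[1] is the caller's
    caller = inspect.stack()[1]
    print("called from %s:%d in %s"
          % (caller.filename, caller.lineno, caller.function))

def outer():
    whoami()

outer()  # prints something like: called from /tmp/demo.py:10 in outer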
In Python 3, pdb will automatically install a signal handler the first time you use c(ont(inue)) in the debugger. Pressing Control-C afterwards will drop you right back in there. In Python 2, here's a one-liner which should work even in relatively old versions (tested in 2.7 but I checked Python source back to 2.4 and it looked okay):
import pdb, signal
signal.signal(signal.SIGINT, lambda sig, frame: pdb.Pdb().set_trace(frame))
pdb is worth learning if you spend any amount of time debugging Python. The interface is a bit obtuse but should be familiar to anyone who has used similar tools, such as gdb.
In case you need to do this with uWSGI, it has a Python Tracebacker built in, and it's just a matter of enabling it in the configuration (a number is attached to the name for each worker):
py-tracebacker=/var/run/uwsgi/pytrace
Once you have done this, you can print a backtrace simply by connecting to the socket:
uwsgi --connect-and-read /var/run/uwsgi/pytrace1
At the point in the code you want to inspect, you can insert this small snippet to get a nicely formatted stack trace written to a file. It assumes that you have a folder called logs at your project's root directory.
# DEBUG: START DEBUG -->
import traceback
with open('logs/stack-trace.log', 'w') as file:
traceback.print_stack(file=file)
# DEBUG: END DEBUG --!