Step through code called with exec in pdb - python

I am creating a DSL by doing some pre-processing on a string and then using exec to run the pre-processed code in the Python interpreter. I would like to be able to step through the pre-processed code using pdb, and to see the actual line of Python code (after pre-processing) that I'm stepping through.
For example:
In [1]: s = '''print "hello"
...: print 'world'
...: '''
In [2]: s
Out[2]: 'print "hello"\nprint \'world\'\n'
In [3]: import pdb
In [4]: pdb.run(s)
> <string>(1)<module>()
(Pdb) list
[EOF]
I would like the list command in pdb to output the code and show what line I'm on, the same way that it does when I stop at a breakpoint in a regular Python file. Any hints on how to do this, or an alternative approach/mindset, would be greatly appreciated!
[edit]
I pass a bunch of complicated objects to exec via its optional globals argument, so writing the string out to a file and then running that file in pdb won't work. For example:
s = '''some_complicated_stateful_object.method(foo)'''
exec(s, {'some_complicated_stateful_object': an_object,
         'foo': some_other_object})
Thanks for the good suggestion though!
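One possible approach (not from the original thread; the fake filename below is just a placeholder): compile the pre-processed string under a made-up filename and register its source with linecache, so pdb's list command has something to display, while still passing in your own globals:
import linecache
import pdb

source = 'some_complicated_stateful_object.method(foo)\n'  # pre-processed DSL output
fake_filename = '<dsl>'

# linecache cache entries are (size, mtime, lines, fullname); mtime=None marks
# the entry as not backed by a real file, so checkcache() leaves it alone and
# pdb's `list` can look the lines up under the fake filename.
linecache.cache[fake_filename] = (
    len(source), None, source.splitlines(True), fake_filename)

code = compile(source, fake_filename, 'exec')
pdb.run(code, {'some_complicated_stateful_object': an_object,
               'foo': some_other_object})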

Related

How to put a breakpoint on the current line without typing the line number in the Python pdb debugger?

I'm used to GDB, where b does that.
But in pdb, b just lists breakpoints.
I can do b 123, but I'm too lazy to type 123.
Maybe there is a magic argument like b .?
I know about PyCharm and __import__('pdb').set_trace(); I'm just checking whether there is a usable CLI alternative for those quick debugging sessions.
If you are willing to add a new pdb command, it is trivial:
def do_breakcurrent(self, arg):
    cur_lineno = str(self.curframe.f_lineno)
    return self.do_break(cur_lineno)

import pdb
pdb.Pdb.do_breakcurrent = pdb.Pdb.do_bc = do_breakcurrent
Use breakcurrent or bc:
(Pdb) bc
Breakpoint 1 at /Users/georgexsh/workspace/so/52110534.py:11
If you want to put this code into .pdbrc so that it is available automatically, a small tweak is needed:
import pdb
pdb.Pdb.do_bc = lambda self, arg: self.do_break(str(self.curframe.f_lineno))
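If you also want an optional offset, e.g. bc 2 to break two lines below the current one, the command could be written as follows (this variation is my own addition, not part of the answer above):
import pdb

def do_bc(self, arg):
    # "bc" breaks on the current line; "bc N" breaks N lines below it
    offset = int(arg) if arg.strip() else 0
    return self.do_break(str(self.curframe.f_lineno + offset))

pdb.Pdb.do_bc = do_bc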

How to define a new function in pdb

Why can't I define new functions when I run pdb?
For example take myscript.py:
#!/gpfs0/export/opt/anaconda-2.3.0/bin/python
print "Hello World"
print "I see you"
If I run python -m pdb myscript.py and try to interactively define a new function:
def foo():
I get the error:
*** SyntaxError: unexpected EOF while parsing (<stdin>, line 1)
Why is this?
I don't think pdb supports multi-line input. You can work around this by spawning an interactive session from within pdb. Once you are done in the interactive session, exit it with Ctrl+D.
>>> import pdb
>>> pdb.set_trace()
(Pdb) !import code; code.interact(local=vars())
(InteractiveConsole)
In : def foo():
...:     print('hello in pdb')
...:
In : # use ctrl+d here to return to pdb shell...
(Pdb) foo()
hello in pdb
You can define your function in a one-line statement, using ; instead of indentation, like this:
(Pdb) def foo(): print 'Hello world'; print 'I see you'
(Pdb) foo()
Hello world
I see you
I was able to import Python modules from the pdb command line.
If you can import Python modules, then you should be able to define your functions in a file and just import that file, as in the sketch below.
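For example, assuming a hypothetical helpers.py somewhere on sys.path:
# helpers.py (hypothetical file containing your debug functions)
def foo():
    print('hello from helpers')
Then, from the pdb prompt:
(Pdb) import helpers
(Pdb) helpers.foo()
hello from helpers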
If your application happens to have IPython as a dependency, you could drop yourself into a feature-rich IPython REPL right from ipdb:
import IPython; IPython.embed()
From inside, if you run IPython's magic command whos, you should see all the locally defined variables in the current pdb frame.

How to write an ipython alias which executes in python instead of shell?

We can define an alias in ipython with the %alias magic function, like this:
>>> d
NameError: name 'd' is not defined
>>> %alias d date
>>> d
Fri May 15 00:12:20 AEST 2015
This escapes to the shell command date when you type d into ipython.
But I want to define an alias to execute some python code, in the current interpreter scope, rather than a shell command. Is that possible? How can we make this kind of alias?
I work in the interactive interpreter a lot, and this could save me a lot of commands I find myself repeating often, and also prevent some common typos.
The normal way to do this would be to simply write a Python function, with a def. But if you want to alias a statement, rather than a function call, then it's actually a bit tricky.
You can achieve this by writing a custom magic function. Here is an example, which effectively aliases the import statement to get, within the REPL.
from IPython.core.magic import register_line_magic

@register_line_magic
def get(line):
    code = f"import {line}"
    print("-->", code)
    exec(code, globals())

del get  # in interactive mode, remove from scope so the function doesn't shadow the magic
Edit: below is the previous code, for older versions of IPython.
from IPython.core.magic_arguments import argument, magic_arguments

@magic_arguments()
@argument('module')
def magic_import(self, arg):
    code = 'import {}'.format(arg)
    print('--> {}'.format(code))
    self.shell.run_code(code)

ip = get_ipython()
ip.define_magic('get', magic_import)
Now it is possible to execute get statements which are aliased to import statements.
Demo:
In [1]: get json
--> import json
In [2]: json.loads
Out[2]: <function json.loads>
In [3]: get potato
--> import potato
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<string> in <module>()
ImportError: No module named potato
In [4]:
Of course, this is extensible to arbitrary Python code, and optional arguments are supported as well.
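For instance, to get the d alias from the question as pure Python instead of a shell escape, a sketch along the same lines (my own example, not from the answer above) could be:
from IPython.core.magic import register_line_magic

@register_line_magic
def d(line):
    # pure-Python "alias": print the current time from inside the interpreter
    import datetime
    print(datetime.datetime.now())

del d  # as before, avoid shadowing the magic with the plain function
After that, typing %d (or just d with automagic on) runs Python code rather than the shell's date command.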
I don't know since which version IPython has provided macros, but now you can simply do this:
ipy = get_ipython()
ipy.define_macro('d', 'date')
You can put this code into any file located in ~/.ipython/profile_default/startup/, and then this macro will be automatically available when you start IPython.
However, a macro doesn't accept arguments, so please keep this in mind before you choose to define one.

Accessing variables from IPython interactive namespace in a script

Is there an easy way to access variables in the IPython interactive namespace? While working on a project that has a slow load step, I would like to run a script to load the data into the interactive workspace, then call a second script that uses the data, as is possible with MATLAB.
In this simple case, what I want to do is
In [20]: a=5
In [21]: run tst
where tst.py is just
print a
The idea is that I want to run the loading script once, then just work on tst.py.
Thanks!
Try using the -i option on IPython's magic run command; it makes the script run using the current interactive namespace, e.g. with
load.py:
a = 5
tst.py:
print a
From IPython I get:
In [1]: from load import *
In [2]: run -i tst
5
There is no easy or smart way to do this. One way would be to have a main function in your test script and then pass in the globals from your interactive environment so the script can update its own globals. For example:
tst.py
def main(borrowed_globals):
    globals().update(borrowed_globals)
    print a
And then in IPython:
In [1]: a = 5
In [2]: import tst
In [3]: tst.main(globals())
5

Debugging code in the Python interpreter

I like testing functions in the Python interpreter. Is it possible to debug a function in the Python interpreter when I want to see more than a return value and a side effect?
If so, could you show basic debugger operations (launching the function with arguments, setting breakpoint, next step, step into, watching variable)? If not, how would you debug a function another way?
The point is, I want to debug only a particular function which will be supplied with arguments. I don't want to debug whole module code.
Thank you for any advice.
If you want to debug a specific function, you can run the following
>>> import pdb
>>> import yourmodule
>>> pdb.run('yourmodule.foo()')
from the interactive prompt. Alternatively, pdb.set_trace() can be added inside your function to break there (see the sketch below).
More info on pdb can be seen here - http://docs.python.org/library/pdb.html
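A minimal sketch of that second suggestion, with the breakpoint placed inside your own function (the function itself is just an example):
import pdb

def foo(x):
    pdb.set_trace()  # the debugger stops here whenever foo() is called
    return x * 2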
See the pdb module. Insert this into your code:
import pdb
pdb.set_trace()
... makes a breakpoint.
The code-to-debug does not need to be modified to include pdb.set_trace(). That call can be made directly in the interpreter just before the code-to-debug:
>>> import pdb
>>> pdb.set_trace(); <code-to-debug>
For example, given test_script.py with the following code:
def some_func(text):
    print 'Given text is {}'.format(repr(text))
    for index, char in enumerate(text):
        print ' ' * index, char
an interpreter session to debug some_func using the debugger commands step-into (s), next (n) and continue (c) would look like:
>>> import pdb
>>> import test_script
>>> pdb.set_trace(); test_script.some_func('hello')
--Call--
> c:\src\test_script.py(1)some_func()
-> def some_func(text):
(Pdb) s
> c:\src\test_script.py(2)some_func()
-> print 'Given text is {}'.format(repr(text))
(Pdb) n
Given text is 'hello'
> c:\src\test_script.py(3)some_func()
-> for index,char in enumerate(text):
(Pdb) c
h
e
l
l
o
>>>
See the docs for the pdb module for more information on how to use the debugger: http://docs.python.org/library/pdb.html
Additionally, while using the debugger, the help command provides a nice list of commands and help <command> gives help specific to the given command.
