Python (pdb) - Queueing up commands to execute

I am implementing a "breakpoint" system for use in my Python development that will allow me to call a function that, in essence, calls pdb.set_trace().
Some of the functionality that I would like to implement requires me to control pdb from code while I am within a set_trace context.
Example:
import pdb

disableList = []

def breakpoint(name=None):
    def d():
        disableList.append(name)
        #****
        # issue 'run' command to pdb so the user
        # does not have to type 'c'
        #****
    if name in disableList:
        return
    print("Use d() to disable breakpoint, 'c' to continue")
    pdb.set_trace()
In the above example, how do I implement the comments marked by #****?
In other parts of this system, I would like to issue an 'up' command, or two sequential 'up' commands without leaving the pdb session (so the user ends up at a pdb prompt but up two levels on the call stack).

You could invoke lower-level methods to get more control over the debugger:
def debug():
    import pdb
    import sys
    # set up the debugger
    debugger = pdb.Pdb()
    debugger.reset()
    # your custom stuff here
    debugger.do_where(None)  # run the "where" command
    # invoke the interactive debugging prompt
    users_frame = sys._getframe().f_back  # frame where the user invoked `debug()`
    debugger.interaction(users_frame, None)

if __name__ == '__main__':
    print(1)
    debug()
    print(2)
You can find documentation for the pdb module here: http://docs.python.org/library/pdb and for the bdb lower-level debugging interface here: http://docs.python.org/library/bdb. You may also want to look at their source code.
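Regarding the #**** comments and the queued 'up' commands in the question: one possible approach (an assumption on my part, based on Pdb inheriting from cmd.Cmd) is to pre-load commands into the debugger's cmdqueue; queued lines are consumed before the interactive prompt appears. A minimal sketch:
import pdb
import sys

def breakpoint_up_two():
    """Drop into pdb two frames above this helper by queueing two 'up' commands."""
    debugger = pdb.Pdb()
    debugger.reset()
    # Lines placed in cmdqueue run before the user is prompted (cmd.Cmd behaviour).
    debugger.cmdqueue.extend(['up', 'up'])
    debugger.interaction(sys._getframe().f_back, None)
Queueing 'c' in the same way would make the debugger continue without the user having to type it, which is what the d() disabler in the question is after.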

How to make a Windows shortcut to run a function in a Python script?

I am trying to find a way to create a Windows shortcut that executes a function from a Python file.
The function being run would look something like this (it runs a command in the shell):
import subprocess

def function(command):
    subprocess.run(command, shell=True, check=True)
I am aware that you can run cmd functions and commands directly with a shortcut, but I would like it to be controlled by Python.
I have little experience working with Windows shortcuts, so the best I can do is describe what I have in mind (the screenshot I had here is omitted).
After a quick Google search, the only help I can find is how to make a shortcut with Python, not how to run a function from one. So hopefully what I am asking is even possible?
Generally speaking, AFAIK you can't do it directly. However, it can be done if the target script is written a certain way and is passed the name of the function to run as a command-line argument. You could even pass arguments to the function by listing them after its name in the shortcut (see the sketch after the Generalizing section below).
The target script has to be set up with an if __name__ == '__main__': section similar to the one shown below, which executes the function named on the command line. The input() call at the end is there just to keep the console window open so what is printed can be seen.
target_script.py:
def func1():
    print('func1() running')

def func2():
    print('func2() running')

if __name__ == '__main__':
    from pathlib import Path
    import sys

    print('In module', Path(__file__).name)
    funcname = sys.argv[1]
    vars()[funcname]()  # Call named function.
    input('\npress Enter key to continue...')
To make use of it you would need to create a shortcut with a Target: set to something like:
python D:\path_to_directory\target_script.py func1
Output:
In module target_script.py
func1() running
press Enter key to continue...
Generalizing
It would also be possible to write a script that could be applied to other scripts that weren't written like target_script.
run_func_in_module.py:
import importlib.util
from pathlib import Path
import sys
mod_filepath = Path(sys.argv[1])
funcname = sys.argv[2]
# Import the module.
spec = importlib.util.spec_from_file_location(mod_filepath.stem, mod_filepath)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
# Call the specified function in the module.
mod_func = getattr(module, funcname)
mod_func()
To make use of this version you would need to create a shortcut with a Target: set to something like:
python D:\path_to_directory\run_func_in_module.py D:\another_directory\target_script.py func2
Note that the target_script.py would no longer need the if __name__ == '__main__': section at the end (although having one would do no harm).
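As a sketch of the argument-passing idea mentioned above (a hypothetical variant, not a tested recipe for every shortcut setup), run_func_in_module.py can forward any remaining shortcut arguments to the named function; note they all arrive as strings:
import importlib.util
from pathlib import Path
import sys

mod_filepath = Path(sys.argv[1])
funcname = sys.argv[2]

# Import the module from the given file path.
spec = importlib.util.spec_from_file_location(mod_filepath.stem, mod_filepath)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)

# Call the named function, forwarding any extra command-line arguments.
getattr(module, funcname)(*sys.argv[3:])
The shortcut Target: would then be something like:
python D:\path_to_directory\run_func_in_module.py D:\another_directory\target_script.py func2 arg1 arg2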

vscode with debugpy: Recursive mode like pdb missing?

I am trying to configure vscode debugger for recursive debugging with
debugpy. In this regard:
I open a jupyter console and attach the vscode debugger to the kernel.
I am then able to call some function and let vscode stop at some breakpoint.
The ipython cell seems to be blocked now and won't let me input further commands.
Leaving the breakpoint brings the ipython cell back to life.
This behaviour is confusing, since I would rather expect to be able to keep working with the console while stopped at the breakpoint, that is, evaluating statements, creating plots, and so on. In addition, I'd expect to debug further code recursively, i.e. to let vscode stop at some other breakpoint and return later on to the first breakpoint.
With python pdb this is possible by executing the "debug 'code'"
command, which creates a nested pdb instance.
Is it possible to replicate the pdb behaviour in vscode by configuration, i.e.:
Go to the next cell when the breakpoint is hit and allow user input in ipython.
Provide some interactive magic commands to control the debugger from ipython (e.g. debug 'code').
An example follows.
Define some dummy module 'my_module.py':
def fun():
    import pdb; pdb.set_trace()
    print('fun')

def rec_fun():
    import pdb; pdb.set_trace()
    print('rec_fun')

a = 1
import pdb; pdb.set_trace()
b = 2
Then run a python console.
Then enter in your terminal:
import my_module # This will stop at the module-level breakpoint and allow further user input in the console, e.g. `print(a)`
# press c to completely load the module
my_module.fun() # This will stop the debugger at the breakpoint inside function fun
import my_module # this will load the module for the debugger instance
debug my_module.rec_fun() # this will launch a nested debugger instance
# Press c+enter: The nested instance will stop at breakpoint inside rec_fun()
# Press c+enter: The debugger will move to the previous breakpoint
# Press c+enter: The debugger will leave and go back to normal REPL mode.
This workflow does not seem to be possible with vscode/debugpy. The key point, in my opinion, is keeping the shell alive, as the pdb debugger does; the interactive commands are a subsequent problem. Is it possible to configure debugpy accordingly? What do you think?
Thanks,
Daniel

Python shell cmd and executable formats

I have used both Python and C for a while. C is good in that I can use the Windows cmd (or anything like it) to compile files and easily read command-line arguments. However, the only thing I know that runs Python is IDLE, which is like an interpreter, doesn't take command-line arguments, and is hard to work with. Is there anything like C's cmd workflow and a compiler for Python 3.x?
Thanks
However, the only thing I know that runs Python is IDLE, which is like an interpreter
You can still call python helloworld.py from a command line
and doesn't take command-line arguments
It's possible to read command-line arguments when running python helloworld.py Alex:
import sys
name = sys.argv[1]  # Gives "Alex"; argv[0] would be "helloworld.py"
a compiler for python 3.x
py2exe supports Python 3
And finally, if you're looking to call shell commands from your Python code, there is a module called subprocess.
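For example, a minimal sketch (the echoed command is just an illustration):
import subprocess

# shell=True lets cmd.exe (or your shell) resolve shell built-ins.
result = subprocess.run("echo hello", shell=True, capture_output=True, text=True)
print(result.stdout)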
If I understand your question, you can do this in Python by importing cmd and os. For example:
import os
import cmd
import readline

class Console(cmd.Cmd):
    def __init__(self):
        cmd.Cmd.__init__(self)
        self.prompt = "=>> "
        self.intro = "Welcome to console!"  ## defaults to None

    ## Command definitions ##
    def do_hist(self, args):
        """Print a list of commands that have been entered"""
        print(self._hist)

    def do_exit(self, args):
        """Exits from the console"""
        return -1

    ## Command definitions to support Cmd object functionality ##
    def do_EOF(self, args):
        """Exit on system end of file character"""
        return self.do_exit(args)

    def do_shell(self, args):
        """Pass command to a system shell when line begins with '!'"""
        os.system(args)

    def do_help(self, args):
        """Get help on commands
        'help' or '?' with no arguments prints a list of commands for which help is available
        'help <command>' or '? <command>' gives help on <command>
        """
        ## The only reason to define this method is for the help text in the doc string
        cmd.Cmd.do_help(self, args)

    ## Override methods in Cmd object ##
    def preloop(self):
        """Initialization before prompting user for commands.
        Despite the claims in the Cmd documentation, Cmd.preloop() is not a stub.
        """
        cmd.Cmd.preloop(self)  ## sets up command completion
        self._hist = []        ## No history yet
        self._locals = {}      ## Initialize execution namespace for user
        self._globals = {}

    def postloop(self):
        """Take care of any unfinished business.
        Despite the claims in the Cmd documentation, Cmd.postloop() is not a stub.
        """
        cmd.Cmd.postloop(self)  ## Clean up command completion
        print("Exiting...")

    def precmd(self, line):
        """This method is called after the line has been input but before
        it has been interpreted. If you want to modify the input line
        before execution (for example, variable substitution), do it here.
        """
        self._hist += [line.strip()]
        return line

    def postcmd(self, stop, line):
        """If you want to stop the console, return something that evaluates to true.
        If you want to do some post-command processing, do it here.
        """
        return stop

    def emptyline(self):
        """Do nothing on empty input line"""
        pass

    def default(self, line):
        """Called on an input line when the command prefix is not recognized.
        In that case we execute the line as Python code.
        """
        try:
            exec(line, self._locals, self._globals)
        except Exception as e:
            print(e.__class__, ":", e)

if __name__ == '__main__':
    console = Console()
    console.cmdloop()
This example is for using command lines within Python. You can also write your Python code and run the .py file from cmd with this command:
python <file_name>.py
Search for more examples, and also see the official doc: cmd — Support for line-oriented command interpreters
You can use the Python interpreter to run your Python programs directly. Say you have a test.py file you want to run; then you can use python test.py to run it.
To be precise, you are not actually compiling the file; you are executing it line by line (well, call it interpreting).
For command-line arguments you can use sys.argv, as already mentioned in the answers above.
Depending on how Python was installed, you can probably run scripts as-is by typing the script file name, for example:
C:\> test.py
If you have a relatively recent Python installation, this will be associated with the Python launcher (py.exe) and be equivalent to running:
C:\> py test.py
If you only have one version of Python installed, this will run with that version, but the Python launcher supports multiple ways to customize which of several installed versions it uses.
Additionally, as stated above, you can run the script with the python command as well. The main difference is that running it with the python command lets you specify exactly which installation is used; using the script name alone (or the py.exe version) lets the system select which installation gets run.
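For example (the version number is illustrative; -0 asks the launcher to list the interpreters it can see):
C:\> py -0                # list installed Python versions
C:\> py -3.11 test.py     # run with a specific installed version
C:\> python test.py Alex  # arguments after the script name end up in sys.argv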

Using IPython as an effective debugger

How can I embed an IPython shell in my code and have it automatically display the line number and function in which it was invoked?
I currently have the following setup to embed IPython shells in my code:
from IPython.frontend.terminal.embed import InteractiveShellEmbed
from IPython.config.loader import Config
# Configure the prompt so that I know I am in a nested (embedded) shell
cfg = Config()
prompt_config = cfg.PromptManager
prompt_config.in_template = 'N.In <\\#>: '
prompt_config.in2_template = ' .\\D.: '
prompt_config.out_template = 'N.Out<\\#>: '
# Messages displayed when I drop into and exit the shell.
banner_msg = ("\n**Nested Interpreter:\n"
              "Hit Ctrl-D to exit interpreter and continue program.\n"
              "Note that if you use %kill_embedded, you can fully deactivate\n"
              "this embedded instance so it will never turn on again")
exit_msg = '**Leaving Nested interpreter'
# Put ipshell() anywhere in your code where you want it to open.
ipshell = InteractiveShellEmbed(config=cfg, banner1=banner_msg, exit_msg=exit_msg)
This allows me to start a full IPython shell anywhere in my code by just using ipshell(). For example, the following code:
a = 2
b = a
ipshell()
starts an IPython shell in the scope of the caller that allows me to inspect the values of a and b.
What I would like to do is to automatically run the following code whenever I call ipshell():
from inspect import currentframe, getframeinfo

frameinfo = getframeinfo(currentframe())
print('Stopped at: ' + frameinfo.filename + ' ' + str(frameinfo.lineno))
This would always show the context where the IPython shell starts so that I know what file/function, etc. I am debugging.
Perhaps I could do this with a decorator, but all my attempts so far have failed, since I need ipshell() to run within the original context (so that I have access to a and b from the IPython shell).
How can I accomplish this?
You can call ipshell() from within another user-defined function, e.g. ipsh():
from inspect import currentframe

def ipsh():
    frame = currentframe().f_back
    msg = 'Stopped at {0.f_code.co_filename} at line {0.f_lineno}'.format(frame)
    ipshell(msg, stack_depth=2)  # Go back one level!
Then use ipsh() whenever you want to drop into the IPython shell.
Explanation:
stack_depth=2 asks ipshell to go up one level when retrieving the namespace for the new IPython shell (the default is 1).
currentframe().f_back retrieves the caller's frame, so that you can print the line number and file of the location where ipsh() is called.
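A quick usage sketch (assuming ipshell is configured as in the question; the reported file and line in the banner depend on where the call sits):
def compute():
    a = 2
    b = a
    ipsh()  # banner shows this file and the line of this call

compute()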

Breakpoint-induced interactive debugging of Python with IPython

Say I have an IPython session, from which I call some script:
> run my_script.py
Is there a way to induce a breakpoint in my_script.py from which I can inspect my workspace from IPython?
I remember reading that in previous versions of IPython one could do:
from IPython.Debugger import Tracer

def my_function():
    x = 5
    Tracer()
    print(5)
but the submodule Debugger does not seem to be available anymore.
Assuming that I have an IPython session open already: how can I stop my program at a location of my choice and inspect my workspace with IPython?
In general, I would prefer solutions that do not require me to pre-specify line numbers, since I would like to possibly have more than one such call to Tracer() above and not have to keep track of the line numbers where they are.
Tracer() still exists in IPython, in a different module. You can do the following:
from IPython.core.debugger import Tracer

def my_function():
    x = 5
    Tracer()()
    print(5)
Note the additional call parentheses around Tracer
edit: For IPython 6 onwards, Tracer is deprecated, so you should use set_trace() instead:
from IPython.core.debugger import set_trace

def my_function():
    x = 5
    set_trace()
    print(5)
You can run it and set a breakpoint at a given line with:
run -d -b12 myscript
Where -b12 sets a breakpoint at line 12. When you enter this line, you'll immediately drop into pdb, and you'll need to enter c to execute up to that breakpoint.
This is the version using the set_trace() method instead of the deprecated Tracer() one:
from IPython.core.debugger import Pdb

def my_function():
    x = 5
    Pdb().set_trace()
    print(5)
Inside the IPython shell, you can do
from IPython.core.debugger import Pdb
pdb = Pdb()
pdb.runcall(my_function)
for example, or do the normal pdb.set_trace() inside your function.
With Python 3 (v3.7+), there's the new breakpoint() function. You can modify its behaviour so it calls IPython's debugger for you.
Basically you can set an environment variable that points to a debugger function. (If you don't set the variable, breakpoint() defaults to calling pdb.)
To set breakpoint() to call ipython's debugger, set the environment variable (in your shell) like so:
# for bash/zsh users
export PYTHONBREAKPOINT='IPython.core.debugger.set_trace'
# powershell users
$env:PYTHONBREAKPOINT='IPython.core.debugger.set_trace'
(Note, obviously if you want to permanently set the environment variable, you'll need to modify your shell profile or system preferences.)
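As an alternative to the environment variable, you can swap the hook from inside your program, since sys.breakpointhook is the function that breakpoint() actually calls. A minimal sketch, assuming IPython is installed:
import sys
from IPython.core.debugger import set_trace

# From here on, breakpoint() drops into IPython's debugger instead of plain pdb.
sys.breakpointhook = set_trace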
You can write:
def my_function():
    x = 5
    breakpoint()
    print(5)
And it'll break into IPython's debugger for you. I think it's handier than having to do from IPython.core.debugger import set_trace and then call set_trace().
I have always had the same question, and the best workaround I have found is pretty hacky: add a line that will break my code, like so:
...
a = 1 + 2
STOP  # undefined name, deliberately raises NameError
...
Then when I run that code it will break there, and I can do %debug to go in and inspect. You can also turn on %pdb to always drop in wherever your code breaks, but this can be bothersome if you don't want to inspect everywhere and every time your code breaks. I would love a more elegant solution.
I see a lot of options here, but maybe not the following simple option.
Fire up ipython in the directory where my_script.py is.
Turn the debugger on if you want the code to go into debug mode when it fails. Type %pdb.
In [1]: %pdb
Automatic pdb calling has been turned ON
Next type
In [2]: %run -d ./my_script.py
*** Blank or comment
*** Blank or comment
NOTE: Enter 'c' at the ipdb> prompt to continue execution.
> c:\users\c81196\lgd\mortgages-1\nmb\lgd\run_lgd.py(2)<module>()
1 # system imports
----> 2 from os.path import join
Now you can set a breakpoint where ever you want it.
Type b 100 to have a breakpoint at line 100, or b whatever.py:102 to have a breakpoint at line 102 in whatever.py.
For instance:
ipdb> b 100
Then continue execution with c (or continue):
ipdb> c
Once the code fails, or reaches the breakpoint you can start using the full power of the python debugger pdb.
Note that pdb also allows the setting of a breakpoint at a function.
b(reak) [([filename:]lineno | function) [, condition]]
So you do not necessarily need to use line numbers.
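For instance, reusing the my_function example from earlier in this thread (the reported path and line are illustrative):
ipdb> b my_function
Breakpoint 1 at /path/to/my_script.py:2
ipdb> c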
