I would like to script some behaviour in GDB using Python: given a regular expression describing a set of functions, instantiate a subclass of gdb.Breakpoint (e.g. MyBreakpoint) for each function matched.
There is no equivalent of rbreak in GDB's Python module. I had thought of doing this:
gdb.execute('rbreak {:s}'.format(regexp))
breakpoints = gdb.breakpoints()
# Extract breakpoint strings, delete existing breakpoints, and
# recreate them using my subclass.
for bp in breakpoints:
    loc = bp.location
    bp.delete()
    MyBreakpoint(loc)
...however, this suffers from the problem that there might already be some user-defined breakpoints, and this would destroy them.
My next idea was to iterate over all possible functions to break on, and do the matching using Python's re module. However, there doesn't seem to be any way to list functions available for breaking from within Python.
My question is: could either of these approaches be salvaged so that they will work reliably and not clobber state set by a user in an interactive session; or is there some other way to achieve this? Or will I have to compromise on "not clobbering user state?"
Since rbreak creates new breakpoint objects, even if the breakpoints are for the same locations as pre-existing breakpoints, you can run gdb.breakpoints() before and after the execution of rbreak to see which breakpoints were added.
obreakpoints = gdb.breakpoints()
gdb.execute('rbreak {:s}'.format(regexp))
breakpoints = set(gdb.breakpoints()).difference(set(obreakpoints))
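The snapshot-and-diff pattern can only truly be exercised inside a live GDB session, but the logic is easy to check on its own. Below is a minimal sketch using a hypothetical FakeBreakpoint stand-in for gdb.Breakpoint (in a real session you would use gdb.breakpoints(), gdb.execute('rbreak ...') and your gdb.Breakpoint subclass instead):

```python
# FakeBreakpoint is a hypothetical stand-in for gdb.Breakpoint, used only to
# exercise the snapshot/diff/replace pattern outside GDB.
class FakeBreakpoint:
    def __init__(self, location):
        self.location = location
        self.deleted = False

    def delete(self):
        self.deleted = True

class MyBreakpoint(FakeBreakpoint):
    pass

all_bps = [FakeBreakpoint('user_bp')]       # a pre-existing user breakpoint

obreakpoints = list(all_bps)                # snapshot before 'rbreak' runs
# ... 'rbreak' adds its matches ...
all_bps += [FakeBreakpoint('func_a'), FakeBreakpoint('func_b')]

# Only the breakpoints added by 'rbreak'; the user's breakpoint is untouched.
new = set(all_bps).difference(obreakpoints)
replaced = []
for bp in new:
    loc = bp.location
    bp.delete()
    replaced.append(MyBreakpoint(loc))
```

The user's original breakpoint never appears in `new`, so it is neither deleted nor recreated.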
I need to run a .tcl file via the command line, invoked from a Python script. However, a single line in that .tcl file needs to change based on input from the user. For example:
info = input("Prompt for the user: ")
Now I need the string contained in info to replace one of the lines in .tcl file.
Rewriting the script is one of the trickier options to pick. It makes things harder to audit and it is tremendously easy to make a mess of. It's not recommended at all unless you take special steps, such as factoring out the bit you set into its own file:
File that you edit, e.g., settings.tcl (simple enough that it is pretty trivial to write and you can rewrite the whole lot each time without making a mess of it)
set value "123"
Use of that file:
set value 0
if {[file readable settings.tcl]} {
    source settings.tcl
}
puts "value is $value"
More sophisticated versions of that are possible with safe interpreters and language profiling… but they're only really needed when the settings and the code are in different trust domains.
That said, there are other approaches that are usually easier. If you are invoking the Tcl script by running a subprocess, the easiest ways to pass an arbitrary parameter are to use one of:
A command line argument. These can be read on the Tcl side from the $argv global, which holds a list of all arguments after the script name. (The lindex and lassign commands tend to be useful here, e.g., set value [lindex $argv 0].)
An environment variable. These can be read on the Tcl side from the env global array, e.g., set value $env(MyVarName)
On standard input. A line can be read from that on the Tcl side using set line [gets stdin].
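On the Python side, each of those three channels maps onto an argument of the subprocess API. The sketch below only builds the invocation (the script name greet.tcl and the variable name MyVarName are placeholders; nothing assumes tclsh is actually installed):

```python
import os

def run_tcl_invocation(script_path, value, method="argv"):
    """Build the pieces of a tclsh invocation that passes `value` to the script.

    Returns (argv_list, env_dict, stdin_payload); pass these to
    subprocess.run(cmd, env=env, input=stdin, text=True) to actually run it.
    """
    cmd = ["tclsh", script_path]
    env = os.environ.copy()
    stdin = None
    if method == "argv":
        cmd.append(value)        # Tcl side: set value [lindex $argv 0]
    elif method == "env":
        env["MyVarName"] = value  # Tcl side: set value $env(MyVarName)
    elif method == "stdin":
        stdin = value + "\n"      # Tcl side: set line [gets stdin]
    else:
        raise ValueError(method)
    return cmd, env, stdin
```

Command-line arguments are usually the least fragile choice; the environment is handy when the value must pass through several layers of wrapper scripts unchanged.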
In more complex cases, you'd pass values in their own files, or by writing them into something like an SQLite database, or… well, there's lots of options.
If on the other hand the Tcl interpreter is in the same process, pass the values by setting the variables in it before asking for the script to run. (Tcl has almost no true globals — environment variables are a special exception, and only because the OS forces it upon us — so everything is specific to the interpreter context.)
Specifically, if you've got a Tcl instance object from tkinter (Tk is a subclass of that) then you can do:
import tkinter
interp = tkinter.Tcl()
interp.call("set", "value", 123)
interp.eval("source program.tcl")
# Or interp.call("source", "program.tcl")
That has the advantage of doing all the quoting for you.
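To see the quoting point concretely: values that would need careful escaping in Tcl source round-trip cleanly through call(), because they are passed as single words rather than substituted into a script string. A quick check (no Tk window is created; tkinter.Tcl() is a bare interpreter):

```python
import tkinter

interp = tkinter.Tcl()  # a plain Tcl interpreter, no GUI involved
tricky = 'a string with "quotes", {braces} and $dollars'
# call() hands the value over as one word, so no Tcl quoting is needed:
interp.call('set', 'value', tricky)
# Read it back by evaluating a Tcl script:
roundtrip = interp.eval('set value')
```

Had you instead built the script by string formatting, e.g. interp.eval('set value "%s"' % tricky), the braces and dollar sign would be mangled by Tcl's substitution rules.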
By default, ipython uses ipdb as debugger with %pdb or %debug magics.
However, I much prefer pdb++... Is there a way of changing the debugger called by these magics? (I am aware I can simply use pdb.xpm() on exception with pdb++, but I'd like to make it work with the ipython magic commands so that I don't have to wrap the code each time...)
So at least for limited circumstances and not in a way I'd necessarily recommend, the answer here is yes. I can't promise the below will work outside the confines of what I did, but it might give you enough insight to play around with it yourself. Caution is warranted because it involves changing undocumented attributes of the ipython shell class at runtime. TLDR: I hunted down how ipython calls the debugger when the %pdb magic is on or when you call the %debug magic, and I updated it to use the debugger I wanted. Skip the next two paragraphs if you just want the approach that worked for me and don't care about the hunt.
Long version: when you run ipython it starts an instance of TerminalInteractiveShell, which has a debugger_cls attribute telling you the debugger that ipython will launch. Unfortunately, at the level of TerminalInteractiveShell, debugger_cls is actually a property of the class, and has no setter that lets you modify it. Rather, it either gets set to Pdb (actually a more featureful ipython Pdb than the traditional pdb) or TerminalPdb (even more features).
If you dig deeper, however, you find that debugger_cls gets passed up to InteractiveShell to initialize how tracebacks are handled. There it seems to disappear into the initialization of InteractiveShell's InteractiveTB property, but actually just ends up as the debugger_cls attribute of that (InteractiveTB) class (by setting the inherited attribute from TBTools). Finally, this debugger_cls attribute only gets used to set the pdb attribute (more or less by doing TBToolsInstance.pdb = TBToolsInstance.debugger_cls()) in one of several places. In any case, it turns out that these attributes can be changed! And if you change them correctly they will percolate to the shell itself! Importantly, this relies on the fact that ipython makes use of the Traitlets package to create a Singleton object for the shell, and this allows you to gain access to that object from within the terminal itself. More on that below.
Below I show the code you can run in the ipython shell to achieve your desired result. As an example, I'm replacing the default debugger (TerminalPdb) with a modified version I created that deals more nicely with certain list comprehensions (LcTerminalPdb). The process (which you can run in the ipython shell) is as follows.
# import the TerminalInteractiveShell class
from IPython.terminal.interactiveshell import TerminalInteractiveShell
# grab the specific instance of the already initialized ipython
shl = TerminalInteractiveShell.instance()
# grab the InteractiveTB attribute (which is a class)
tbClass = shl.InteractiveTB
# load the debugger class you want to use; I assume it's accessible on your path
from LcTerminalPdb import LcTerminalPdb
# change tbClass's debugger_cls to the debugger you want (for consistency
# with the next line)
tbClass.debugger_cls = LcTerminalPdb
# more importantly, set the pdb attribute to an instance of the class
tbClass.pdb = tbClass.debugger_cls()
# The above line is necessary if you already have the terminal running
# (and have entered pdb at least once); otherwise, ipython will run it on
# its own
That's it! Note that because you call the instance() method of TerminalInteractiveShell, you are grabbing the object for the currently running shell, which is why the modifications will affect the shell itself and so all following debugs. For a bonus, you can add these lines of code to your ipython_config.py file, so the debugger you want (LcTerminalPdb here) is always loaded with ipython:
c.InteractiveShellApp.exec_lines = [
    '%pdb on',
    'from LcTerminalPdb import LcTerminalPdb',
    'from IPython.terminal.interactiveshell import TerminalInteractiveShell',
    'shl = TerminalInteractiveShell.instance().InteractiveTB',
    'shl.debugger_cls = LcTerminalPdb',
]
Note that above I don't need to write the extra shl.pdb = shl.debugger_cls() line, as ipython will take care of it the first time a debug point is entered. But feel free to, to be sure.
NOTES:
I have only tested this using LcTerminalPdb, and only briefly, but it seems to work appropriately
I suspect as long as other pdb debuggers have the same API as pdb (i.e. if they can be used by the PYTHONBREAKPOINT environment variable) then it should work
It's really unclear to me whether changing such deep attributes will have unexpected effects, so not sure how much I recommend this approach
I have inherited a python script which appears to have multiple distinct entry points. For example:
if __name__ == '__main__1':
    ... Do stuff for option 1
if __name__ == '__main__2':
    ... Do stuff for option 2
etc.
Google has turned up a few other examples of this syntax (e.g. here) but I'm still no wiser on how to use it.
So the question is: How can I call a specific entry point in a python script that has multiple numbered __main__ sections?
Update:
I found another example of it here, where the syntax appears to be related to a specific tool.
https://github.com/brython-dev/brython/issues/163
The standard doc mentions only __main__ as a reserved module name. After looking at your sample I notice that every __main__N section seems separate: it does its own imports and performs some enclosed functionality. My suspicion is that the developer wanted to quickly swap functionalities and didn't bother to use command-line arguments for that, opting instead to rename '__main__2' to '__main__' as needed.
This is by no means proven, though - any chance of contacting the one who wrote this in the first place?
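For what it's worth, if you ever do need to trigger one of those numbered sections without renaming anything, the stdlib runpy module lets you control what __name__ is bound to when a script runs. A sketch (the two-section script here is a stand-in for the inherited file):

```python
import os
import runpy
import tempfile
import textwrap

# A small script with two numbered "entry points", mimicking the inherited code.
src = textwrap.dedent("""\
    result = None
    if __name__ == '__main__1':
        result = 'option 1'
    if __name__ == '__main__2':
        result = 'option 2'
""")

with tempfile.NamedTemporaryFile('w', suffix='.py', delete=False) as f:
    f.write(src)
    path = f.name

# run_name sets __name__ inside the executed script, so a specific numbered
# section can be selected without editing the file. run_path returns the
# script's resulting globals.
globals_after = runpy.run_path(path, run_name='__main__2')
os.unlink(path)
```

Whether that matches the original developer's intent is another question, but it does make the odd __main__N convention usable as-is.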
In my python file, I have made a GUI widget that takes some inputs from the user. I have imported a python module in my python file that takes some input using raw_input(). I have to use this module as it is; I have no right to change it. When I run my python file, it asks me for the inputs (due to the raw_input() of the imported module). I want to use the GUI widget inputs in that place.
How can I pass the user input (that we take from widget) as raw_input() of imported module?
First, if importing it directly into your script isn't actually a requirement (and it's hard to imagine why it would be), you can just run the module (or a simple script wrapped around it) as a separate process, using subprocess or pexpect.
Let's make this concrete. Say you want to use this silly module foo.py:
def bar():
    x = raw_input("Gimme a string")
    y = raw_input("Gimme another")
    return 'Got two strings: {}, {}'.format(x, y)
First write a trivial foo_wrapper.py:
import foo
print(foo.bar())
Now, instead of calling foo.bar() directly in your real script, run foo_wrapper as a child process.
I'm going to assume that you already have the input you want to send it in a string, because that makes the irrelevant parts of the answer simpler (in fact, it makes them possible—if you wanted to use some GUI code for that, there's really no way I could show you how unless you first tell us which GUI library you're using).
So:
import subprocess
import sys

foo_input = 'String 1\nString 2\n'
with subprocess.Popen([sys.executable, 'foo_wrapper.py'],
                      stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                      universal_newlines=True) as p:
    foo_output, _ = p.communicate(foo_input)
Of course in real life you'll want to use an appropriate path for foo_wrapper.py instead of assuming that it's in the current working directory, but this should be enough to illustrate the idea.
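If you want to try the round trip without creating foo.py and foo_wrapper.py on disk, the same idea can be demonstrated self-containedly by inlining an equivalent child script with -c (using input(), which is raw_input's Python 3 name):

```python
import subprocess
import sys

# An inline stand-in for foo_wrapper.py, so the example needs no extra files.
child = (
    "x = input()\n"
    "y = input()\n"
    "print('Got two strings: {}, {}'.format(x, y))\n"
)
proc = subprocess.run([sys.executable, '-c', child],
                      input='String 1\nString 2\n',
                      capture_output=True, text=True)
print(proc.stdout.strip())  # -> Got two strings: String 1, String 2
```

The child blocks on each input() call until communicate()/run() feeds it the next line, which is exactly how the wrapper process behaves.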
Meanwhile, if "I have no right to change it" just means "I don't (and shouldn't) have checkin rights to the foo project's github site or the relevant subtree on our company's P4 server" or whatever, there's a really easy answer: Fork it, and change the fork.
Even if it's got a weak copyleft license like LGPL: fork it, change the fork, publish your fork under the same license as the original, then use your fork.
If you're depending on the foo package being installed on every target system, and can't depend on your replacement foo being installed instead, that's a bit more of a problem. But if the function or method that actually calls raw_input is just a small fraction of the actual code in foo, you can fix that by monkeypatching foo at runtime.
And that leads to the last-ditch possibility: You can always monkeypatch raw_input itself.
Again, I'm going to assume that you already have the input you need to give it to make things simpler.
So, first you write a replacement function:
foo_input = ['String 1\n', 'String 2\n']
def fake_raw_input(prompt):
    return foo_input.pop(0)  # pop from the front so answers come in order
Now, there are two ways you can patch this in. Usually, you want to do this:
import foo
foo.raw_input = fake_raw_input
This means any code in foo that calls raw_input will see the function you crammed into its module globals instead of the normal builtin. Unless it does something really funky (like looking up the builtin directly and copying it to a local variable or something), this is the answer.
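Since we don't have the real foo module here, the module-globals trick can be checked with a stand-in built at runtime (types.ModuleType and the exec'd bar() below are stand-ins for the real module; the patching line is the part that matters):

```python
import types

# A stand-in for the foo module: bar() calls raw_input by name, exactly as
# the Python 2 original would.
foo = types.ModuleType('foo')
exec(
    "def bar():\n"
    "    x = raw_input('Gimme a string')\n"
    "    y = raw_input('Gimme another')\n"
    "    return 'Got two strings: {}, {}'.format(x, y)\n",
    foo.__dict__,
)

answers = ['String 1', 'String 2']
def fake_raw_input(prompt):
    return answers.pop(0)  # hand out the canned answers in order

# Name lookup inside foo.bar() checks module globals before builtins, so this
# shadows raw_input for foo only -- it even works on Python 3, where the
# raw_input builtin doesn't exist at all.
foo.raw_input = fake_raw_input
result = foo.bar()
```

Nothing outside foo sees the patch, which is why this is usually preferable to replacing the builtin globally.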
If you need to handle one of those really funky edge cases, and you don't mind doing something questionable, you can do this:
import __builtin__
__builtin__.raw_input = fake_raw_input
You must do this before the first import foo anywhere in your program. Also, it's not clear whether this is intentionally guaranteed to work, accidentally guaranteed to work (and should be fixed in the future), or not guaranteed to work. But it does work (at least for CPython 2.5-2.7, which is what you're probably using).
I find myself adding debugging "print" statements quite often -- stuff like this:
print("a_variable_name: %s" % a_variable_name)
How do you all do that? Am I being neurotic in trying to find a way to optimize this? I may be working on a function and put in a half-dozen or so of those lines, figure out why it's not working, and then cut them out again.
Have you developed an efficient way of doing that?
I'm coding Python in Emacs.
Sometimes a debugger is great, but sometimes using print statements is quicker, and easier to setup and use repeatedly.
This may only be suitable for debugging with CPython (since not all Pythons implement inspect.currentframe and inspect.getouterframes), but I find this useful for cutting down on typing:
In utils_debug.py:
import inspect

def pv(name):
    record = inspect.getouterframes(inspect.currentframe())[1]
    frame = record[0]
    val = eval(name, frame.f_globals, frame.f_locals)
    print('{0}: {1}'.format(name, val))
Then in your script.py:
from utils_debug import pv
With this setup, you can replace
print("a_variable_name: %s" % a_variable_name)
with
pv('a_variable_name')
Note that the argument to pv should be the string (variable name, or expression), not the value itself.
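Putting it together, a self-contained sketch (assuming CPython, since it relies on frame introspection; the demo variable is arbitrary):

```python
import inspect

def pv(name):
    # Look one frame up the stack: the caller's globals and locals.
    record = inspect.getouterframes(inspect.currentframe())[1]
    frame = record[0]
    val = eval(name, frame.f_globals, frame.f_locals)
    print('{0}: {1}'.format(name, val))

a_variable_name = 'hello'
pv('a_variable_name')       # prints: a_variable_name: hello
pv('len(a_variable_name)')  # expressions work too
```

Because pv evaluates the string in the caller's own frame, it sees local variables inside functions as well as module-level names.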
To remove these lines using Emacs, you could
C-x ( # start keyboard macro
C-s pv('
C-a
C-k # change this to M-; if you just want to comment out the pv call
C-x ) # end keyboard macro
Then you can call the macro once with C-x e
or a thousand times with C-u 1000 C-x e
Of course, you have to be careful that you do indeed want to remove all lines containing pv(' .
Don't do that. Use a decent debugger instead. The easiest way is to use IPython and either wait for an exception (the debugger will start automatically), or provoke one by running an illegal statement (e.g. 1/0) at the part of the code you wish to inspect.
I came up with this:
Python string interpolation implementation
I'm just testing it and it's proving handy for me while debugging.