Is there a way to determine whether the debugged target is a core dump or a 'live' process?
As far as I know, there is no dedicated way to do it in Python. However, you can still use
gdb.execute("<command>", to_string=<boolean>) to execute a CLI command from Python; passing to_string=True tells GDB to collect the output and return it as a string (cf. the docs).
The command maint print target-stack prints the layers GDB uses internally to access the inferior. You should see "core (Local core dump file)" if the core-dump layer is active.
So all-in-all, a bit of code like
out = gdb.execute("maint print target-stack", to_string=True)
print("Local core dump file" in out)
should do the trick.
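That check can be wrapped in a small helper. This is only a sketch: the function names are my own, and the string test is split out so that part is plain Python, runnable outside GDB:

```python
def target_stack_shows_core(output):
    # The core-dump target layer announces itself with this phrase.
    return "Local core dump file" in output

def is_core_dump():
    # Only works inside a GDB Python session, where the gdb module exists.
    import gdb
    out = gdb.execute("maint print target-stack", to_string=True)
    return target_stack_shows_core(out)
```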
I have a linux proc entry in /proc/sys/fs/offs/ts/enable which toggles a flag in a custom kernel module. Setting the value to 1 will enable a mode in the module, setting to 0 will disable that mode.
In bash, to enable the mode, I would simply do
echo 1 > /proc/sys/fs/offs/ts/enable
And to disable it,
echo 0 > /proc/sys/fs/offs/ts/enable
I have a daemon written in Python 2.7 which will look for some external event trigger, and when that event fires, should enable or disable the feature in the kernel module. The daemon is run with root privileges, so I shouldn't run into any kind of permissions issues.
Is there a recommended way of setting this value from python?
For example, say my function right now looks like this.
def set_mode(enable=True):
    with open('/proc/sys/fs/offs/ts/enable', 'w') as p:
        if enable:
            p.write("1")
        else:
            p.write("0")
        p.flush()
There are a couple of things to watch out for in code like this.
Firstly, since you want to write to the file, it must be opened in write mode ('w'), not read mode.
Secondly, .write expects string data, not an integer.
We can get rid of the if test by exploiting the fact that False and True have integer values of 0 and 1, respectively. The code below uses the print function rather than .write, because print converts the integer returned by int(enable) to a string. print also appends a newline (unless you suppress it via the end argument), so the Python code performs the same action as your Bash command lines. Note that since your daemon runs on Python 2.7, you need the __future__ import to get the print function.
from __future__ import print_function  # needed on Python 2.7

def set_mode(enable=True):
    with open('/proc/sys/fs/offs/ts/enable', 'w') as p:
        print(int(enable), file=p)
If you want to do it with .write, change the print line to:
p.write(str(int(enable)) + '\n')
There's a way to do that conversion from boolean to string in one step: use the boolean to index into a string literal:
'01'[enable]
It's short & fast, but some would argue that it's a little cryptic to use booleans as indices.
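For instance, a small sketch of that indexing trick (the mode_char name is my own):

```python
# Indexing a string with a boolean works because True == 1 and False == 0,
# so '01'[enable] selects the matching character.
def mode_char(enable):
    return '01'[enable]

print(mode_char(True))   # → 1
print(mode_char(False))  # → 0
```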
Linux exposes /proc, as the name "filesystem" suggests, as ordinary files. That means you operate on those entries the same way you would on any other file. Your suggested function is basically fine as far as accessing /proc goes, but PM 2Ring's recommendations are definitely valid.
Since this is low-level code that isn't intended to be portable, I would use the os module. Its open, write and close functions are almost direct wrappers of their C counterparts.
More like C means fewer surprises!
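A minimal sketch of that os-module approach; the /proc path is taken from the question and assumed to exist, and the path parameter is my own addition for flexibility:

```python
import os

PROC_PATH = '/proc/sys/fs/offs/ts/enable'

def set_mode(enable=True, path=PROC_PATH):
    # os.open/os.write/os.close are thin wrappers around the C calls,
    # with no stdio buffering in between.
    fd = os.open(path, os.O_WRONLY)
    try:
        os.write(fd, b'1\n' if enable else b'0\n')
    finally:
        os.close(fd)
```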
I'm using COIN-OR's CBC solver to solve some numerical optimization problems. I'm structuring the optimization problem in Python via PuLP.
I've noticed that solvers like GUROBI and CPLEX create log files, but I can't seem to figure out how to get CBC to create a log file (as opposed to printing the optimizer's progress to the screen).
Does anybody know of an option in CBC to set a log file? Re-directing all stdout to a file doesn't work for me, since I'm solving a bunch of problems in parallel and want to keep their log files separate.
Here's an example of how I'm calling the solver. This works great and prints progress to the terminal.
prob.solve(pulp.COIN_CMD(msg=1, options=['DivingVectorlength on','DivingSome on']))
Here's how I think a solution should be structured (though obviously LogFileName isn't a valid CBC option).
prob.solve(pulp.COIN_CMD(msg=1, options=['DivingVectorlength on', 'DivingSome on', 'LogFileName stats.log']))
Any help on this would be greatly appreciated. I've been going through the internet, docs, and the CBC interactive session for hours trying to figure this out.
Reusing @Mike's answer: PuLP (since 2.2) can now write the log to a file; pass the logPath argument with the path of the file to write.
So you can now do:
prob.solve(pulp.COIN_CMD(msg=1, logPath="stats.log", options=['DivingVectorlength on', 'DivingSome on']))
The only caveat is that you can no longer see the log on screen, since the output is redirected to the file. In this case you do not need to pass msg=1, just logPath.
The logPath argument is consistent (in PuLP >= 2.2) among several solvers: PULP_CBC_CMD, COIN_CMD, PULP_COIN_CMD, GUROBI, CPLEX, CPLEX_CMD, GUROBI_CMD.
For a solution requiring only a few lines of code in your script that invokes PuLP and CBC, see the solution by James Vogel (https://github.com/voglster, maybe) at https://groups.google.com/forum/#!topic/pulp-or-discuss/itbmTC7uNCQ, based on os.dup() and os.dup2().
I hope it isn't inappropriate to copy it here to guard against linkrot, but see the original post for line-by-line explanation and some sophisticated things I don't understand from the tempfile package. My own usage is less sophisticated, using an actual permanent filename:
from os import dup, dup2, close

f = open('capture.txt', 'w')
orig_std_out = dup(1)      # save the original stdout file descriptor
dup2(f.fileno(), 1)        # redirect fd 1 into the capture file
# CBC time limit and relative optimality gap tolerance
status = prob.solve(PULP_CBC_CMD(maxSeconds=i_max_sec, fracGap=d_opt_gap, msg=1))
print('Completion code: %d; Solution status: %s; Best obj value found: %s'
      % (status, LpStatus[prob.status], value(prob.objective)))
dup2(orig_std_out, 1)      # restore the original stdout
close(orig_std_out)
f.close()
This leaves you with useful information in capture.txt in the current directory.
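As a hedged generalization of that pattern, the fd-level redirection can be wrapped in a context manager so stdout is always restored, even if the solve raises. The redirect_stdout_to name is my own; adapt the body of the with block to your solver call:

```python
import os
from contextlib import contextmanager

@contextmanager
def redirect_stdout_to(path):
    # Swapping file descriptor 1 means even subprocess output (like CBC's)
    # lands in the file, not just Python-level prints.
    saved_fd = os.dup(1)
    with open(path, 'w') as f:
        os.dup2(f.fileno(), 1)
        try:
            yield
        finally:
            os.dup2(saved_fd, 1)   # restore the original stdout
            os.close(saved_fd)
```

Usage would then look like `with redirect_stdout_to('capture.txt'): status = prob.solve(...)`.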
I was unable to find an answer without changing the PuLP source code, but if that does not bother you, then take the following route:
Navigate to the directory of your PuLP installation and look at the solvers.py file.
The function of interest is solve_CBC in the COIN_CMD class. In that method, the arguments are formed into a single command to pass to the cbc-64 solver program, which is then called using subprocess.Popen. The stdout argument for this call is set to either None or os.devnull, neither of which is very useful for us. You can see the process call on line 1340 (for PuLP 1.5.6).
cbc = subprocess.Popen((self.path + cmds).split(), stdout = pipe,
stderr = pipe)
This source also reveals that the problem (.mps) and solution (.sol) files are written to the /tmp directory (on UNIX machines), and that the file names include the pid of the interpreter calling it. I open a log file using this id and pass it to the stdout argument, like this:
logFilename = os.path.join(self.tmpDir, "%d-cbc.log" % pid)
logFile = open(logFilename, 'a')
cbc = subprocess.Popen((self.path + cmds).split(), stdout = logFile,
stderr = pipe)
Sure enough, after running I see my log files in the /tmp directory. You can set the verbosity with `log N`; see the CBC help for more documentation. Since this creates a different file for each process id, it should solve your problem of running multiple solvers in parallel.
I'm currently writing a Python GDB script. The problem is that it has to be compatible with GDB 7.1. So I first wrote the script for GDB 7.3.1 and used the following function to capture the output of a GDB command (GDB 7.3.1):
myvar = gdb.execute("info target", False, True)
The last parameter of this function says it should return the result as a string (which makes perfect sense; why else would I execute such a command ;) )
In GDB 7.1, though, it seems that the last parameter isn't available, so this line (GDB 7.1):
myvar = gdb.execute("info target", False)
returns None.
Is there any way to retrieve the output of this command? I already tried to redirect the standard output of my Python script into a file and then load that file, but apparently the standard input and output of my Python script are overwritten by the GDB environment, so the output of the gdb.execute command is not written to my file.
The only thing I can think of now is to wrap my script in a bash script that first opens GDB with a Python script that executes various commands and pipes the output into a file, and then opens GDB again with another Python script that loads the file, parses it, and executes other commands based on its contents, and so on. But that is really the ugliest solution I can think of.
So is there a way to receive the output of a gdb.execute in GDB 7.1?
So is there a way to receive the output of a gdb.execute in GDB 7.1?
No.
Your best bet is to arrange for GDB 7.3 to be available. Since GDB doesn't usually use shared libraries (beyond libc and perhaps libpython), you can simply ship the gdb binary with your script. That will be a much easier and more maintainable solution than the alternative you proposed.
You can write to a file, then read the file, for example:
import os

if os.path.exists("tmp.txt"):
    os.remove("tmp.txt")            # start with a clean log file
gdb.execute("set logging file tmp.txt")
gdb.execute("set logging on")
gdb.execute("info proc mappings")   # output goes to the log file
gdb.execute("set logging off")
mainsec = open("tmp.txt").read()
The old version of gdb.execute was far superior though.
FYI: nowadays (tested with GDB 8.1) you can use the to_string parameter:
https://sourceware.org/gdb/onlinedocs/gdb/Basic-Python.html
gdb.execute (command [, from_tty [, to_string]])
By default, any output produced by command is sent to GDB’s standard output (and to the log output if logging is turned on). If the to_string parameter is True, then output will be collected by gdb.execute and returned as a string. The default is False, in which case the return value is None.
I'm trying to save myself just a few keystrokes for a command I type fairly regularly in Python.
In my python startup script, I define a function called load which is similar to import, but adds some functionality. It takes a single string:
def load(s):
    # Do some stuff
    return something
In order to call this function I have to type
>>> load('something')
I would rather be able to simply type:
>>> load something
I am running Python with readline support, so I know there exists some programmability there, but I don't know if this sort of thing is possible using it.
I attempted to get around this by subclassing InteractiveConsole and creating an instance of it in my startup file, like so:
import code, re, traceback

class LoadingInteractiveConsole(code.InteractiveConsole):
    def raw_input(self, prompt=""):
        s = raw_input(prompt)
        match = re.match(r'^load\s+(.+)', s)
        if match:
            module = match.group(1)
            try:
                load(module)
                print "Loaded " + module
            except ImportError:
                traceback.print_exc()
            return ''
        else:
            return s

console = LoadingInteractiveConsole()
console.interact("")
This works with the caveat that I have to hit Ctrl-D twice to exit the python interpreter: once to get out of my custom console, once to get out of the real one.
Is there a way to do this without writing a custom C program and embedding the interpreter into it?
Edit
Out of channel, I had the suggestion of appending this to the end of my startup file:
import sys
sys.exit()
It works well enough, but I'm still interested in alternative solutions.
You could try IPython; it provides a Python shell with many extra features, including automatic parentheses, which gives you exactly the call syntax you asked for.
I think you want the cmd module.
See a tutorial here:
http://wiki.python.org/moin/CmdModule
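As a hedged sketch of what a cmd-based shell might look like; the LoadShell name is my own, and the do_load body is a stand-in for the question's load() function:

```python
import cmd

class LoadShell(cmd.Cmd):
    prompt = '>>> '

    def do_load(self, arg):
        """load <module>: stand-in for the question's load() function."""
        self.last_loaded = arg          # remember what was requested
        print("Loaded " + arg)

    def do_EOF(self, line):
        return True                     # Ctrl-D exits cmdloop
```

Running `LoadShell().cmdloop()` then lets you type `load something` without parentheses or quotes.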
Hate to answer my own question, but there hasn't been an answer that works for all the versions of Python I use. Aside from the solution I posted in my question edit (which is what I'm now using), here's another:
Edit .bashrc to contain the following lines:
alias python3='python3 ~/py/shellreplace.py'
alias python='python ~/py/shellreplace.py'
alias python27='python27 ~/py/shellreplace.py'
Then simply move all of the LoadingInteractiveConsole code into the file ~/py/shellreplace.py. Once that script finishes executing, Python will exit, and the improved interactive session will be seamless.
I am using the following call for executing the 'aspell' command on some strings in Python:
r,w,e = popen2.popen3("echo " +str(m[i]) + " | aspell -l")
I want to test the success of the command by looking at the stdout file object r. If there is no output, the command was successful.
What is the best way to test that in Python?
Thanks in advance.
Best is to use the subprocess module of the standard Python library, see here -- popen2 is old and not recommended.
Anyway, in your code, if r.read(1): is a fast way to test if there's any content in r (if you don't care about what that content might specifically be).
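A minimal sketch of the subprocess version, using the modern subprocess.run (Python 3.7+). The has_output helper name is my own, aspell is assumed to be installed for the commented usage, and the same pattern works with any command:

```python
import subprocess

def has_output(cmd, text):
    """Run cmd, feed it text on stdin, and report whether it printed anything."""
    result = subprocess.run(cmd, input=text, capture_output=True, text=True)
    return bool(result.stdout)

# With aspell, no output means every word was spelled correctly, e.g.:
# misspelled = has_output(["aspell", "list"], "some words to check")
```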
Why don't you use aspell -a?
You could use subprocess as indicated by Alex, but keep the pipe open. Follow the directions for using the pipe API of aspell, and it should be pretty efficient.
The upside is that you won't have to check for an empty line. You can always read from stdout, knowing that you will get a response. This takes care of a lot of problematic race conditions.
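A hedged sketch of that pipe approach: keep one child process open and exchange lines with it. The small wrapper class is my own; aspell is assumed to be installed for the commented usage, and its -a protocol details are summarized from its documentation:

```python
import subprocess

class PipedChecker(object):
    """Keep one child process open and talk to it line by line."""

    def __init__(self, cmd):
        self.proc = subprocess.Popen(
            cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
            universal_newlines=True, bufsize=1)  # line-buffered text pipes

    def ask(self, line):
        # Send one line, then block until the child answers with one line.
        self.proc.stdin.write(line + "\n")
        self.proc.stdin.flush()
        return self.proc.stdout.readline().rstrip("\n")

# For aspell: checker = PipedChecker(["aspell", "-a"])
# checker.proc.stdout.readline()   # discard the greeting banner first
# checker.ask("helo")              # '*' means correct, '&' lines carry suggestions
```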