How to get output from gdb.execute in PythonGDB (GDB 7.1)? - python

I'm currently writing a Python GDB script. The problem is that it has to be compatible with GDB 7.1. So I first wrote the script for GDB 7.3.1 and used the following function to receive the output of a gdb command (GDB 7.3.1):
myvar = gdb.execute("info target", False, True)
The last parameter of this function says that it should return the result as a string (which makes perfect sense; why else would I execute such a command ;) )
In GDB 7.1, though, it seems that the last parameter isn't available, so this line (GDB 7.1):
myvar = gdb.execute("info target", False)
returns None.
Is there any chance to retrieve the output of this command? I already tried to redirect the standard output of my Python script into a file and then load that file, but apparently the standard input and output of my Python script are overridden by the gdb environment, so the output from the gdb.execute command is not written to my file.
The only thing I could think of now is to wrap my script in a bash script that first opens gdb with a Python script that executes various commands, and pipes that into a file; then opens gdb again with another Python script that loads the file, parses it, and executes other commands based on the input from the file, and so on. But this is really the ugliest solution I can think of.
So is there a way to receive the output of a gdb.execute in GDB 7.1?

So is there a way to receive the output of a gdb.execute in GDB 7.1?
No.
Your best bet is to arrange for GDB 7.3 to be available. Since GDB doesn't usually use shared libraries (beyond libc and perhaps libpython), you can just copy the gdb binary along with your script. That will be a much easier and more maintainable solution than the alternative you proposed.

You can write to a file, then read the file, for example:
import os
import gdb

# Remove any stale log file so only this run's output is captured
if os.path.exists("tmp.txt"):
    os.remove("tmp.txt")
gdb.execute("set logging file tmp.txt")
gdb.execute("set logging on")
gdb.execute("info proc mappings")
gdb.execute("set logging off")
mainsec = open("tmp.txt").read()
The old version of gdb.execute was far superior though.

FYI, now (tested with GDB 8.1) you can use the to_string parameter:
https://sourceware.org/gdb/onlinedocs/gdb/Basic-Python.html
gdb.execute (command [, from_tty [, to_string]])
By default, any output produced by command is sent to GDB’s standard output (and to the log output if logging is turned on). If the to_string parameter is True, then output will be collected by gdb.execute and returned as a string. The default is False, in which case the return value is None.
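For instance, the call from the original question works again on these newer versions. A minimal sketch, to be run inside a GDB Python session (the gdb module only exists there):

import gdb

# The third argument is to_string: the command's output is returned instead of printed
myvar = gdb.execute("info target", False, True)
print(myvar.splitlines()[0])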

Related

Executing a profile load shell script from a python program [duplicate]

I am struggling to execute a shell script from a Python program. The actual issue is that the script is a profile-loading script, and it is run manually as:
. /path/to/file
The script can't be run as a normal sh script because the calling programs load some configuration from it, so it must be run as . /path/to/file
Please guide me on how I can integrate this into my Python script. I am using subprocess.Popen to run the script, but as said, it only works when run as . /path/to/file, so I am not getting the right result.
Without knowledge of the precise reason the script needs to be sourced, this is slightly speculative.
The fundamental problem is this: How do I get a source command to take effect outside the shell script?
Let's say your sourced file does something like
export fnord="value"
This cannot (usefully) be run in a subshell (as a normally executed script would be), because the environment variable and its value will be lost when the script terminates. The solution is to source (aka .) this snippet from an already running shell; then the value stays in that shell's environment until that shell terminates.
But Python is not a shell, and there is no general way for Python to execute arbitrary shell script code, short of reimplementing the shell in Python. You can reimplement a small subset of the shell's functionality with something like
import os

with open('/path/to/file') as shell_source:
    lines = shell_source.readlines()

for line in lines:
    line = line.strip()
    if line.startswith('export '):
        # Split "export var=value" into name and value
        var, value = line[7:].split('=', 1)
        if value.startswith('"'):
            value = value.strip('"')
        elif value.startswith("'"):
            value = value.strip("'")
        os.environ[var] = value
with some very strict restrictions (let's not say naïve assumptions) on the allowable shell script syntax in the file. But what if the file contained something other than a series of variable assignments, or the assignments used something other than trivially quoted strings as values? (Even the export might or might not be there. Its significance is to make the variable visible to subprocesses of the current shell; maybe that is not wanted or required? Also, export variable=value is not portable; proper Bourne shell syntax would use variable=value; export variable or one of the many variations.)
If you know what exactly your Python script needs from the shell script, maybe do something like
import os
import subprocess

r = subprocess.run('. /path/to/file; printf "%s\n" "$somevariable"',
                   shell=True, capture_output=True, text=True)
os.environ['somevariable'] = r.stdout.split('\n')[-2]
to source the entire script in a subshell, then print to standard output the part you actually need, and capture that from your Python script (and assign it to an environment variable if that's what you eventually need to accomplish).
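If you need more than one variable, a variation on the same idea (not from the original answer, and assuming GNU env with its -0 option is available) is to source the file in a subshell and then dump that shell's entire environment for Python to parse:

import os
import subprocess

# Source the file, then print the resulting environment NUL-separated
out = subprocess.run('. /path/to/file; env -0', shell=True,
                     capture_output=True, text=True).stdout
for entry in out.split('\0'):
    if entry:
        var, _, value = entry.partition('=')
        os.environ[var] = value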

Running python script from perl, with argument to stdin and saving stdout output

My perl script is at path:
a/perl/perlScript.pl
my python script is at path:
a/python/pythonScript.py
pythonScript.py gets an argument from stdin and returns its result to stdout. From perlScript.pl, I want to run pythonScript.py with the argument hi on stdin and save the result in some variable. This is what I tried:
my $ret = `../python/pythonScript.py < hi`;
but I got the following error:
The system cannot find the path specified.
Can you explain why the path can't be found?
The qx operator (backticks) starts a shell (sh), in which prog < input syntax expects a file named input from which it will read lines and feed them to the program prog. But you want the python script to receive on its STDIN the string hi instead, not lines of a file named hi.
One way is to do exactly that: my $ret = qx(echo "hi" | python_script).
But I'd suggest to consider using modules for this. Here is a simple example with IPC::Run3
use warnings;
use strict;
use feature 'say';
use IPC::Run3;
my @cmd = ('program', 'arg1', 'arg2');
my $in = "hi";
run3 \@cmd, \$in, \my $out;
say "script's stdout: $out";
The program is the path to your script if it is executable, or perhaps python script.py. This will be run by system, so the output is obtained once that completes, which is consistent with the attempt in the question. See the module's documentation for details of its operation.
This module is intended to be simple while it "satisfies 99% of the need for using system, qx, and open3 [...]". For far more power and control, see IPC::Run.
You're getting this error because you're using shell redirection instead of just passing an argument
../python/pythonScript.py < hi
tells your shell to read input from a file called hi in the current directory, rather than using it as an argument. What you mean to do is
my $ret = `../python/pythonScript.py hi`;
Which correctly executes your python script with the hi argument, and returns the result to the variable $ret.
Some of the other answers assume that hi must be passed as a command-line parameter to the Python script, but the asker says it comes from stdin.
Thus:
my $ret = `echo "hi" | ../python/pythonScript.py`;
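The question doesn't show pythonScript.py itself, but for this pipeline to work it just needs to read its input from stdin. A minimal sketch (the names and output format here are illustrative, not from the question):

import sys

# Read whatever the caller piped in on stdin and write a result to stdout
data = sys.stdin.read().strip()
print("got: " + data)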
To launch your external script you can do
system "python ../python/pythonScript.py hi";
and then in your Python script:
import sys

def yourFct(a):
    # ... do something with the argument
    print(a)

if __name__ == "__main__":
    yourFct(sys.argv[1])
You can find more information on the Python part here.

Python not getting raw binary from subprocess.check_call

How can I get subprocess.check_call to give me the raw binary output of a command? It seems to be encoded incorrectly somewhere.
Details:
I have a command that returns text like this:
some output text “quote” ...
(Those quotes are unicode e2809d)
Here's how I'm calling the command:
from tempfile import SpooledTemporaryFile
import subprocess

f_output = SpooledTemporaryFile()
subprocess.check_call(cmd, shell=True, stdout=f_output)
f_output.seek(0)
output = f_output.read()
The problem is I get this:
>>> repr(output)
some output text ?quote? ...
>>> type(output)
<str>
(And if I call ord on one of the '?' characters I get 63.)
I'm on Python 2.7 on Linux.
Note: Running the same code on OS X works correctly for me. The problem is when I run it on a Linux server.
Wow, this was the weirdest issue ever but I've fixed it!
It turns out that the program my code was calling (a Java program) was returning output in a different encoding depending on where it was called from!
On my dev OS X machine it returned the characters fine; on the Linux server from the command line it returned them fine; but called from a Django app, nope, they turned into "?"s.
To fix this I ended up adding this argument to the command:
-Dfile.encoding=utf-8
I got that idea here, and it seems to work. There's also a way to modify the Java program internally to do that.
Sorry I blamed Python! You guys had the right idea.
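For illustration, applied to the snippet from the question, that might look like the following (the jar name and exact Java invocation are hypothetical; only the -Dfile.encoding flag comes from the fix described above):

import subprocess
from tempfile import SpooledTemporaryFile

# Hypothetical Java invocation; the important part is -Dfile.encoding=utf-8,
# which forces the JVM to emit UTF-8 regardless of the calling environment
cmd = 'java -Dfile.encoding=utf-8 -jar some-program.jar'
f_output = SpooledTemporaryFile()
subprocess.check_call(cmd, shell=True, stdout=f_output)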
The redirection (stdout=file) happens at the file descriptor level. Python has nothing to do with what is written to the file if you see ? instead of “ in the file itself (not in a REPL).
If it works on OS X and "doesn't work" on a Linux server, then the likely reason is a difference in the environment: check the LC_ALL, LC_CTYPE, and LANG environment variables. Python, /bin/sh (due to shell=True), and the cmd itself may use your locale encoding, which is ASCII if the environment is not set (C, POSIX locale).
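If the locale does turn out to be the culprit, one workaround (a sketch that reuses cmd and f_output from the question and assumes the called program honors LC_ALL/LANG and that a UTF-8 locale such as en_US.UTF-8 is installed on the server) is to pass an explicit UTF-8 environment to the child process:

import os
import subprocess

# Copy the current environment but force a UTF-8 locale for the child process
env = dict(os.environ, LC_ALL='en_US.UTF-8', LANG='en_US.UTF-8')
subprocess.check_call(cmd, shell=True, stdout=f_output, env=env)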
To get "raw binary" from a subprocess:
#!/usr/bin/env python
import subprocess
raw_binary = subprocess.check_output(['cmd', 'arg 1', 'arg 2'])
print(repr(raw_binary))
Note:
no shell=True—don't use it unless it is necessary
many programs may change their behavior if they detect that the output is not a tty, example.

Command copied from the command line not running when called with subprocess.Popen in Python

Scratching my head... this curl command works fine when I copy it from here and paste it into my Windows 7 command line, but I can't get it to execute in my Python 2.7.9 script. It says the system cannot find the specified file. Popen using 'ping' or something like that works just fine, so I'm sure this is a goober typo that I'm just not seeing. I would appreciate a separate set of eyes and any comments as to what is wrong.
proc = subprocess.Popen("curl --ntlm -u : --upload-file c:\\temp\\test.xlsx http://site.domain.com/sites/site/SiteDirectory/folder/test.xlsx")
Have a look at these two paragraphs of the subprocess.Popen documentation if you haven't already:
args should be a sequence of program arguments or else a single string. By default, the program to execute is the first item in args if args is a sequence. If args is a string, the interpretation is platform-dependent and described below. See the shell and executable arguments for additional differences from the default behavior. Unless otherwise stated, it is recommended to pass args as a sequence.
On Unix, if args is a string, the string is interpreted as the name or path of the program to execute. However, this can only be done if not passing arguments to the program. [emphasis mine]
Instead, you should pass in a list in which each argument to the program (including the executable name itself) is given as a separate item. This is generally safer in a cross-platform context anyway.
Update: I see now that you're using Windows, in which case the Unix advice doesn't apply. On Windows, though, things are even more hairy. The best advice remains to use a list :)
Update 2: Another possible issue (and in fact the OP's issue as reported in the comments on this answer) is that because the full path to the curl executable was not given, it may not be found if the Python interpreter is running in an environment with a different PATH environment variable.
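A sketch of what that might look like for the curl command in the question (the path to curl.exe is an assumption; point it at wherever curl is actually installed):

import subprocess

# Each argument is its own list item, and the executable is given with a full path
proc = subprocess.Popen([
    r"C:\path\to\curl.exe", "--ntlm", "-u", ":",
    "--upload-file", r"c:\temp\test.xlsx",
    "http://site.domain.com/sites/site/SiteDirectory/folder/test.xlsx",
])
proc.wait()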

OS X & Python 3: Behavior while executing bash command in new terminal?

I've seen similar questions (e.g. Running a command in a new Mac OS X Terminal window) but I need to confirm this command and its expected behavior on a Mac (which I don't have). If anyone can run the following in Python 3 on a Mac:
import subprocess, os

def runcom(bashCommand):
    sp = subprocess.Popen(['osascript'], stdin=subprocess.PIPE, stderr=subprocess.PIPE)
    sp.communicate('''tell application "Terminal"\nactivate\ndo script with command "{0} $EXIT"\nend tell'''.format(bashCommand))
runcom('''echo \\"This is a test\\n\\nThis should come two lines later; press any key\\";read throwaway''')
runcom('''echo \\"This is a test\\"\n\necho \\"This should come one line later; press any key\\";read throwaway''')
runcom('''echo \\"This is testing whether I can have you enter your sudo pw on separate terminal\\";sudo ls;\necho \\"You should see your current directory; press any key\\";read throwaway''')
Firstly, and most basically, is the "spawn new terminal and execute" command correct? (For reference, this version of the runcom function came from this answer below, and is much cleaner than my original.)
As for the actual tests: the first one tests that internal double escaped \\n characters really work. The second tests that we can put (unescaped) newlines into the "script" and still have it work just like semicolon. Finally, the last one tests whether you can call a sudo process in a separate terminal (my ultimate goal).
In all cases, the new terminal should disappear as soon as you "press any key". Please also confirm this.
If one of these doesn't work, a correction/diagnosis would be most appreciated. Also appreciated: is there a more pythonic way of spawning a terminal on Mac then executing a (sudo, extended) bash commands on it?
Thanks!
[...] its expected behavior [...]
This is hard to answer, since those commands do what I expect them to do, which might not be what you expect them to do.
As for the actual tests: the first one tests that internal double escaped \n characters really work.
The \\n with the doubled backslash does indeed work correctly in that it causes echo to emit a newline character. However, no double quotes are emitted by echo.
The second tests that we can put (unescaped) newlines into the "script" and still have it work just like semicolon.
That works also.
Finally, the last one tests whether you can call a sudo process in a separate terminal (my ultimate goal).
There is no reason why this should not work also, and indeed it does.
In all cases, the new terminal should disappear as soon as you "press any key". Please also confirm this.
That will not work because of several reasons:
read in bash will by default read a whole line, not just one character
after the script you supply is executed, there is no reason for the shell within the terminal to exit
even if the shell would exit, the user can configure Terminal.app not to close a window after the shell exits (this is even the default setting)
Other problems:
the script you supply to osascript will appear in the terminal window before it is executed. In the examples above, the user will see each "This is a test [...]" twice.
I cannot figure out what $EXIT is supposed to do
The ls command will show the user "the current directory" only in the sense that the current working directory in a new terminal window will always be the user's home directory
throwaway will not be available after the script bashCommand exits
Finally, this script will not work at all under Python 3, because it crashes with a TypeError: communicate() takes a byte string as argument, not a string.
Also appreciated: is there a more pythonic way of spawning a terminal on Mac [...]
You should look into PyObjC! It's not necessarily more pythonic, but at least you would eliminate some layers of indirection.
I don't have Python 3, but I edited your runcom function a little and it should work:
def runcom(bashCommand):
    sp = subprocess.Popen(['osascript'], stdin=subprocess.PIPE, stderr=subprocess.PIPE)
    # communicate() needs bytes under Python 3, so encode the AppleScript source
    script = 'tell application "Terminal"\nactivate\ndo script with command "{0} $EXIT"\nend tell'.format(bashCommand)
    sp.communicate(script.encode())
