I'm writing a script that needs to talk to a Java daemon via the local dbus of the Linux machines it will run on. This daemon returns an array of tuples, which I want to parse and use later in my code. I want to collect this value from multiple machines at once, but the only way I can see to get return values out of a terminal I'm ssh'ed into is by parsing its stdout. I'd rather not do that; I'd much prefer to get the actual variable. Right now I have this:
import os
message = "import dbus, sys\nbus=dbus.SystemBus()\nremote_object=bus.get_object('daemon.location', '/daemon')\ncontroller=dbus.Interface(remote_object, 'daemon.path')\nsys.exit(controller.getValue())"
x = os.system('echo -e "%s" | ssh %s python' % (message, ip))
In this example, controller.getValue() returns an array of tuples, and I'm trying to figure out a way to get that array back. Something like popen captures stdout and returns it to you, so all you get is a string equivalent of the array. What I want is the actual array, as if the variable returned on exiting the ssh tty were passed straight into my code. Any ideas?
You can't avoid serialization if there is no shared memory. There are only bytes on the wire.
You could use a library that hides it from you, e.g., the execnet module:
#!/usr/bin/env python
import execnet
gw = execnet.makegateway("ssh=user@host")
channel = gw.remote_exec("""
import dbus, sys
bus = dbus.SystemBus()
remote_object = bus.get_object('daemon.location', '/daemon')
controller = dbus.Interface(remote_object, 'daemon.path')
channel.send(controller.getValue())
""")
tuple_ = channel.receive()
print tuple_
print tuple_[0]
But it is easy to parse simple tuple values yourself using ast.literal_eval() from the stdlib:
#fabfile.py
import ast
from fabric.api import run
def getcontroller():
    """Return controller value."""
    cmd = """
import dbus, sys
bus = dbus.SystemBus()
remote_object = bus.get_object('daemon.location', '/daemon')
controller = dbus.Interface(remote_object, 'daemon.path')
print repr(controller.getValue())
""" #NOTE: you must escape all quotation marks
    output = run('python -c "%s"' % cmd)
    tuple_ = ast.literal_eval(output)
    print tuple_[0]
Example: $ fab getcontroller -H user@host
Here I've used fabric to run the command on remote host.
You could use JSON as a serialization format if the other end doesn't produce Python literals:
>>> import json
>>> t = (1, "a")
>>> json.dumps(t)
'[1, "a"]'
>>> json.loads(_)
[1, u'a']
>>>
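For instance, here is a minimal sketch of wiring that into the original ssh call, reusing the hypothetical 'daemon.location'/'daemon.path' names from the question (user@host is a placeholder): the remote snippet prints the value as JSON and the local side decodes it.
import json
import subprocess
# Remote side: fetch the value over dbus and print it as JSON.
# dbus returns its own wrapper types, so each tuple is converted to a plain list first.
remote_code = """import json, dbus
bus = dbus.SystemBus()
obj = bus.get_object('daemon.location', '/daemon')
controller = dbus.Interface(obj, 'daemon.path')
print json.dumps([list(t) for t in controller.getValue()])
"""
# Feed the snippet to the remote interpreter on stdin, like the echo|ssh above,
# which sidesteps the remote shell's quoting rules.
p = subprocess.Popen(['ssh', 'user@host', 'python'],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, _ = p.communicate(remote_code)
value = json.loads(out)  # a list of lists; tuples do not survive JSON
print value[0]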
Why not use popen?
lines = os.popen("your command here").readlines()
If you just want a shell variable then you could do this
$ FOO="myFOO"
$ export FOO
$ cat x.py
#!/usr/bin/python
import os
print os.environ['FOO']
$ ./x.py
myFOO
$
If you want the return code of a program:
import sys
from subprocess import call

try:
    retcode = call("mycmd" + " myarg", shell=True)
    if retcode < 0:
        print >>sys.stderr, "Child was terminated by signal", -retcode
    else:
        print >>sys.stderr, "Child returned", retcode
except OSError, e:
    print >>sys.stderr, "Execution failed:", e
If you could explain your requirement a little better, you might get better help.
I want to assign the output of a command I run using os.system to a variable, and prevent it from being output to the screen. But in the code below, the output is sent to the screen and the value printed for var is 0, which I guess signifies whether the command ran successfully or not. Is there any way to assign the command output to the variable and also stop it from being displayed on the screen?
var = os.system("cat /etc/services")
print var #Prints 0
From this question which I asked a long time ago, what you may want to use is popen:
os.popen('cat /etc/services').read()
From the docs for Python 3.6,
This is implemented using subprocess.Popen; see that class’s
documentation for more powerful ways to manage and communicate with
subprocesses.
Here's the corresponding code for subprocess:
import subprocess
proc = subprocess.Popen(["cat", "/etc/services"], stdout=subprocess.PIPE)
(out, err) = proc.communicate()
print("program output:", out)
You might also want to look at the subprocess module, which was built to replace the whole family of Python popen-type calls.
import subprocess
output = subprocess.check_output("cat /etc/services", shell=True)
The advantage it has is that there is a ton of flexibility with how you invoke commands, where the standard in/out/error streams are connected, etc.
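For example, here is a small sketch of that flexibility using only standard subprocess features: merging stderr into stdout, and chaining two processes the way a shell pipeline would.
import subprocess
# Capture stdout and stderr together in one variable.
merged = subprocess.check_output("cat /etc/services", shell=True,
                                 stderr=subprocess.STDOUT)
# Equivalent of `cat /etc/services | grep tcp` without invoking a shell.
p1 = subprocess.Popen(["cat", "/etc/services"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["grep", "tcp"], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()  # lets p1 receive SIGPIPE if p2 exits early
out = p2.communicate()[0]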
The commands module is a reasonably high-level way to do this (Python 2 only; it was removed in Python 3 in favor of subprocess):
import commands
status, output = commands.getstatusoutput("cat /etc/services")
status is 0, output is the contents of /etc/services.
For Python 3.5+ it is recommended that you use the run function from the subprocess module. This returns a CompletedProcess object, from which you can easily obtain the output as well as the return code. Since you are only interested in the output, you can write a utility wrapper like this.
from subprocess import PIPE, run
def out(command):
    result = run(command, stdout=PIPE, stderr=PIPE, universal_newlines=True, shell=True)
    return result.stdout
my_output = out("echo hello world")
# To pass the command as a list instead, e.g. ["echo", "hello world"], drop shell=True.
I know this has already been answered, but I wanted to share a potentially better looking way to call Popen via the use of from x import x and functions:
from subprocess import PIPE, Popen
def cmdline(command):
    process = Popen(
        args=command,
        stdout=PIPE,
        shell=True
    )
    return process.communicate()[0]
print cmdline("cat /etc/services")
print cmdline('ls')
print cmdline('rpm -qa | grep "php"')
print cmdline('nslookup google.com')
I do it with os.system temp file:
import tempfile, os
def readcmd(cmd):
    ftmp = tempfile.NamedTemporaryFile(suffix='.out', prefix='tmp', delete=False)
    fpath = ftmp.name
    if os.name == "nt":
        fpath = fpath.replace("/", "\\")  # for Windows
    ftmp.close()
    os.system(cmd + " > " + fpath)
    with open(fpath, 'r') as f:
        data = f.read()
    os.remove(fpath)
    return data
The Python 2.6 and 3 docs warn against using PIPE for stdout and stderr here: if the pipe buffer fills while you wait on the child, the two processes deadlock.
The correct way is
import subprocess
# must create a file object to store the output. Here we are getting
# the ssid we are connected to
outfile = open('/tmp/ssid', 'w')
status = subprocess.Popen(["iwgetid"], bufsize=0, stdout=outfile)
status.wait()  # make sure the child has finished writing before reading the file
outfile.close()
# now operate on the file
from os import system, remove
from uuid import uuid4
def bash_(shell_command: str) -> tuple:
    """
    :param shell_command: your shell command
    :return: (exit status, stdout)
    """
    logfile: str = '/tmp/%s' % uuid4().hex
    # `> file 2>&1` instead of bash-only `&>`: os.system runs /bin/sh
    err: int = system('%s > %s 2>&1' % (shell_command, logfile))
    with open(logfile, 'r') as f:
        out: str = f.read()
    remove(logfile)
    return err, out

# Example:
print(bash_('cat /usr/bin/vi | wc -l'))
# (0, '3296\n')
I am trying to assign the output of a command to a variable without the command thinking that it is being piped. The reason for this is that the command in question gives unformatted text as output if it is being piped, but it gives color formatted text if it is being run from the terminal. I need to get this color formatted text.
So far I've tried a few things. I've tried Popen like so:
output = subprocess.Popen(command, stdout=subprocess.PIPE)
output = output.communicate()[0]
output = output.decode()
print(output)
This will let me print the output, but it gives me the unformatted output that I get when the command is piped. That makes sense, as I'm piping it here in the Python code. But I am curious if there is a way to assign the output of this command, directly to a variable, without the command running the piped version of itself.
I have also tried the following version that relies on check_output instead:
output = subprocess.check_output(command)
output = output.decode()
print(output)
And again I get the same unformatted output that the command returns when the command is piped.
Is there a way to get the formatted output, the output the command would normally give from the terminal, when it is not being piped?
Using pexpect:
2.py:
import sys
if sys.stdout.isatty():
    print('hello')
else:
    print('goodbye')
subprocess:
import subprocess
p = subprocess.Popen(
    ['python3.4', '2.py'],
    stdout=subprocess.PIPE
)
print(p.stdout.read())
--output:--
goodbye
pexpect:
import pexpect
child = pexpect.spawn('python3.4 2.py')
child.expect(pexpect.EOF)
print(child.before) #Print all the output before the expectation.
--output:--
hello
Here it is with grep --colour=auto:
import subprocess
p = subprocess.Popen(
    ['grep', '--colour=auto', 'hello', 'data.txt'],
    stdout=subprocess.PIPE
)
print(p.stdout.read())
import pexpect
child = pexpect.spawn('grep --colour=auto hello data.txt')
child.expect(pexpect.EOF)
print(child.before)
--output:--
b'hello world\n'
b'\x1b[01;31mhello\x1b[00m world\r\n'
Yes, you can use the pty module.
>>> import subprocess
>>> p = subprocess.Popen(["ls", "--color=auto"], stdout=subprocess.PIPE)
>>> p.communicate()[0]
# Output does not appear in colour
With pty:
import subprocess
import pty
import os
master, slave = pty.openpty()
p = subprocess.Popen(["ls", "--color=auto"], stdout=slave)
p.communicate()
print(os.read(master, 100)) # Print 100 bytes
# Prints with colour formatting info
Note from the docs:
Because pseudo-terminal handling is highly platform dependent, there
is code to do it only for Linux. (The Linux code is supposed to work
on other platforms, but hasn’t been tested yet.)
A less than beautiful way of reading the whole output to the end in one go:
def num_bytes_readable(fd):
    import array
    import fcntl
    import termios
    buf = array.array('i', [0])
    if fcntl.ioctl(fd, termios.FIONREAD, buf, 1) == -1:
        raise Exception("We really should have had data")
    return buf[0]
print(os.read(master, num_bytes_readable(master)))
Edit: nicer way of getting the content at once thanks to @Antti Haapala:
os.close(slave)
f = os.fdopen(master)
print(f.read())
Edit: people are right to point out that this will deadlock if the process generates a large output, so @Antti Haapala's answer is better.
A working polyglot example (works the same for Python 2 and Python 3), using pty.
import subprocess
import pty
import os
import sys
master, slave = pty.openpty()
# direct stderr also to the pty!
process = subprocess.Popen(
    ['ls', '-al', '--color=auto'],
    stdout=slave,
    stderr=subprocess.STDOUT
)
# close the slave descriptor! otherwise we will
# hang forever waiting for input
os.close(slave)
def reader(fd):
    try:
        while True:
            buffer = os.read(fd, 1024)
            if not buffer:
                return
            yield buffer
    # Unfortunately with a pty, an
    # IOError will be thrown at EOF.
    # On Python 2, OSError will be thrown instead.
    except (IOError, OSError) as e:
        pass
# read chunks (yields bytes)
for i in reader(master):
    # and write them to stdout file descriptor
    os.write(1, b'<chunk>' + i + b'</chunk>')
Many programs automatically turn off colour codes when they detect they are not connected directly to a terminal, and many provide a flag to force colour output. You could add this flag to your process call. For example:
grep "search term" inputfile.txt
# prints colour to the terminal in most OSes
grep "search term" inputfile.txt | less
# output goes to less rather than terminal, so colour is turned off
grep "search term" inputfile.txt --color | less
# forces colour output even when not connected to terminal
Be warned though: the actual colour output is done by the terminal. The terminal interprets special escape codes and changes the text colour and background colour accordingly. Without the terminal to interpret the colour codes you will just see the text in black with these escape codes interspersed throughout.
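To see what those codes actually are, you can print a standard ANSI colour sequence yourself; grep's default match colouring is exactly the bytes shown in the pexpect output above:
# '\x1b[01;31m' switches the terminal to bold red; '\x1b[00m' resets it.
print('\x1b[01;31mhello\x1b[00m world')
# Run in a terminal this shows "hello" in red; captured through a pipe you
# just get the literal escape bytes: '\x1b[01;31mhello\x1b[00m world\n'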
There is an external program A.
I want to write a script that does some action if the called external program A does not produce any output on stdout.
How is this possible in bash or Python?
You can use the subprocess module, which allows you to execute system calls and store their output in variables for later use.
#!/usr/bin/python
import subprocess as sub
ur_call = '<your system call here>'
p = sub.Popen(ur_call, stdout=sub.PIPE, stderr=sub.PIPE, shell=True)
output, errors = p.communicate()
if len(output) == 0 and len(errors) == 0:
    pass # Do something
In a Bash-script, you could redirect the output to a file, and if the length of the file is zero then there was no output.
If the script that sometimes gives output is no.sh then you can do this in Python:
import os
x = os.popen("./no.sh")
y = x.read()
if y:
    print "Got output"
I'd like to do something like:
do lots of stuff to prepare a good environment
become_interactive
#wait for Ctrl-D
automatically clean up
Is it possible with Python? If not, do you see another way of doing the same thing?
Use the -i flag when you start Python and set an atexit handler to run when cleaning up.
File script.py:
import atexit
def cleanup():
    print "Goodbye"
atexit.register(cleanup)
print "Hello"
and then you just start Python with the -i flag:
C:\temp>\python26\python -i script.py
Hello
>>> print "interactive"
interactive
>>> ^Z
Goodbye
The code module will allow you to start a Python REPL.
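A minimal sketch of that approach, matching the prepare/interact/clean-up steps in the question (the env dict is just an illustrative namespace):
import code
# do lots of stuff to prepare a good environment
env = {'greeting': 'hello'}
# drop into a REPL that can see everything in env; returns on Ctrl-D
code.interact(banner="Interactive shell (Ctrl-D to leave)", local=env)
# automatically clean up
print "cleaning up"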
With IPython v1.0, you can simply use
from IPython import embed
embed()
with more options shown in the docs.
To elaborate on IVA's answer: embedding a shell, incorporating code and IPython.
def prompt(vars=None, message="welcome to the shell"):
    #prompt_message = "Welcome! Useful: G is the graph, DB, C"
    prompt_message = message
    try:
        from IPython.Shell import IPShellEmbed
        ipshell = IPShellEmbed(argv=[''], banner=prompt_message, exit_msg="Goodbye")
        return ipshell
    except ImportError:
        if vars is None: vars = globals()
        import code
        import rlcompleter
        import readline
        readline.parse_and_bind("tab: complete")
        # calling this with globals ensures we can see the environment
        print prompt_message
        shell = code.InteractiveConsole(vars)
        return shell.interact

p = prompt()
p()
Not exactly what you want, but python -i will start an interactive prompt after executing the script.
-i : inspect interactively after running script, (also PYTHONINSPECT=x) and force prompts, even if stdin does not appear to be a terminal
$ python -i your-script.py
Python 2.5.4 (r254:67916, Jan 20 2010, 21:44:03)
...
>>>
You may call python itself:
import subprocess
print "Hola"
subprocess.call(["python"],shell=True)
print "Adios"
I have googled "python ssh". There is a wonderful module, pexpect, which can access a remote computer using ssh (with a password).
After connecting to the remote computer I can execute other commands, but I cannot get the result back into Python.
p = pexpect.spawn("ssh user@remote_computer")
print "connecting..."
p.waitnoecho()
p.sendline(my_password)
print "connected"
p.sendline("ps -ef")
p.expect(pexpect.EOF) # this will take very long time
print p.before
How to get the result of ps -ef in my case?
Have you tried an even simpler approach?
>>> from subprocess import Popen, PIPE
>>> stdout, stderr = Popen(['ssh', 'user@remote_computer', 'ps -ef'],
... stdout=PIPE).communicate()
>>> print(stdout)
Granted, this only works because I have ssh-agent running preloaded with a private key that the remote host knows about.
child = pexpect.spawn("ssh user@remote_computer ps -ef")
print "connecting..."
i = child.expect(['user@remote_computer\'s password:'])
child.sendline(user_password)
i = child.expect([' .*']) #or use i = child.expect([pexpect.EOF])
if i == 0:
    print child.after # uncomment when using [' .*'] pattern
    #print child.before # uncomment when using EOF pattern
else:
    print "Unable to capture output"
Hope this helps.
You might also want to investigate paramiko which is another SSH library for Python.
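For example, a minimal paramiko sketch of the same ps -ef call, assuming password authentication and the question's placeholder host and credentials:
import paramiko
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # don't fail on unknown hosts
client.connect('remote_computer', username='user', password=my_password)
# exec_command returns file-like objects for the three standard streams
stdin, stdout, stderr = client.exec_command('ps -ef')
print stdout.read()
client.close()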
Try to send
p.sendline("ps -ef\n")
IIRC, the text you send is interpreted verbatim, so the other computer is probably waiting for you to complete the command.