python ssh stdout multiple

In Python, I am trying to connect through ssh and run multiple commands one by one.
This code works fine, and the output is printed to my screen:
cmd = ['ssh', '-t', '-t', 'user@host']
p = subprocess.Popen(cmd, stdin=subprocess.PIPE)
p.stdin.write('pwd\n')
p.stdin.write('ls -l\n')
p.stdin.write('exit\n')
p.stdin.close()
My problem comes when I try to grab each response in a string. I have tried this, but the read function blocks:
cmd = ['ssh', '-t', '-t', 'user@host']
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p.stdin.write('pwd\n')
st1 = p.stdout.read()
p.stdin.write('ls -l\n')
st2 = p.stdout.read()
p.stdin.close()

I agree with Alp that it's probably easier to have a library handle the connection logic for you. pexpect is one way to go. Below is an example with paramiko: http://docs.paramiko.org/en/1.13/
import paramiko
host = 'myhost'
port, user, password = 22, 'myuser', 'mypass'
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.load_system_host_keys()
client.connect(host, port, user, password, timeout=10)
command = 'ls -l'
stdin, stdout, stderr = client.exec_command(command)
errors = stderr.read()
output = stdout.read()
client.close()
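If you also need the command's exit status, the channel behind stdout exposes it; call this before client.close():
# blocks until the remote command finishes, then returns its exit code
exit_status = stdout.channel.recv_exit_status()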

The read() call is blocking because, when called with no argument, read() will read from the stream in question until it encounters EOF.
If your use case is as simple as your example code, a cheap workaround is to defer reading from p.stdout until after you close the connection:
cmd = ['ssh', '-t', '-t', 'deploy#pdb0']
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p.stdin.write('pwd\n')
p.stdin.write('ls -l\n')
p.stdin.write('exit\n')
p.stdin.close()
outstr = p.stdout.read()
You'll then have to parse outstr to separate the output of the different commands. (Looking for occurrences of the remote shell prompt is probably the most straightforward way to do that.)
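For instance, a sketch of that parsing, assuming the remote prompt ends in '$ ' (an assumption; adjust to whatever your PS1 is):
# split the combined output on the prompt string and drop empty pieces
responses = [part.strip() for part in outstr.split('$ ') if part.strip()]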
If you need to read the complete output of one command before sending another, you have several problems. First, this can block:
p.stdin.write('pwd\n')
st1 = p.stdout.read()
because the command you write to p.stdin might be buffered. You need to flush the command before looking for output:
p.stdin.write('pwd\n')
p.stdin.flush()
st1 = p.stdout.read()
The read() call will still block, though. What you want to do is call read() with a specified buffer size and read the output in chunks until you encounter the remote shell prompt again. But even then you'll still need to use select to check the status of p.stdout to make sure you don't block.
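A minimal sketch of that chunked-read loop, assuming a Unix-like OS and a remote prompt ending in '$ ' (both assumptions; adjust for your setup):
import os
import select

def read_until_prompt(p, prompt='$ ', timeout=5.0):
    buf = ''
    while not buf.endswith(prompt):
        # select() tells us the pipe has data, so os.read() will not block
        ready, _, _ = select.select([p.stdout], [], [], timeout)
        if not ready:
            break  # timed out waiting for more output
        chunk = os.read(p.stdout.fileno(), 4096)
        if not chunk:
            break  # EOF: the remote side closed the stream
        buf += chunk
    return buf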
There's a library called pexpect that implements that logic for you. It'll be much easier to use that (or, even better, pxssh, which specializes pexpect for use over ssh connections), as getting everything right is rather hairy, and different OSes behave somewhat differently in edge cases. (Take a look at pexpect.spawn.read_nonblocking() for an example of how messy it can be.)
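For comparison, a pxssh sketch (host and credentials here are hypothetical; depending on the pexpect version, pxssh is a top-level module or lives under the pexpect package):
from pexpect import pxssh

s = pxssh.pxssh()
s.login('myhost', 'myuser', 'mypass')  # hypothetical host and credentials
s.sendline('pwd')
s.prompt()        # wait for pxssh's synchronized prompt
print(s.before)   # everything printed before the prompt
s.sendline('ls -l')
s.prompt()
print(s.before)
s.logout()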
Even cleaner, though, would be to use paramiko, which provides a higher level abstraction to doing things over ssh connections. In particular, look at the example usage of the paramiko.client.SSHClient class.

Thanks to both of you for your answers. To keep it simple I have updated my code with:
def getAnswer(p, cmnd):
    # send my command
    if len(cmnd) > 0:
        p.stdin.write(cmnd + '\n')
        p.stdin.flush()
    # get reply -- read until the prompt is received
    st = ""
    while True:
        char = p.stdout.read(1)
        if not char:  # EOF: the remote side closed the stream
            return st
        st += char
        if char == '>':
            return st
cmd = ['ssh', '-t', '-t', 'user@host']
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
#discard welcome message
getAnswer(p, '')
st1 = getAnswer(p, 'pwd')
st2 = getAnswer(p, 'ls -l')
...
p.stdin.write('exit\n')
p.stdin.flush()
p.stdin.close()
p.stdout.close()
This is not perfect but works fine. To detect a prompt I simply wait for a '>'; this could be improved by first sending 'echo $PS1' and building a regexp accordingly.
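For reference, a sketch of that improvement: force a unique prompt and wait for it instead of '>'. The marker '__PROMPT__' is an arbitrary choice of mine, and a Bourne-style remote shell is assumed:
PROMPT = '__PROMPT__'  # arbitrary marker, assumed not to occur in real output

def getAnswerUntil(p, cmnd, prompt=PROMPT):
    if cmnd:
        p.stdin.write(cmnd + '\n')
        p.stdin.flush()
    st = ''
    while not st.endswith(prompt):
        char = p.stdout.read(1)
        if not char:  # EOF
            break
        st += char
    return st

p.stdin.write("export PS1='%s'\n" % PROMPT)  # force a known prompt
p.stdin.flush()
getAnswerUntil(p, '')           # discard output up to the first forced prompt
st1 = getAnswerUntil(p, 'pwd')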

Related

subprocess.communicate() causes python program to hang when sending input to minecraft server

I use this bit of code to start the server as a subprocess and put the stdout into a text file.
with open('serverlog.txt', 'w') as outfile:
    proc = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=outfile, shell=False)
and then use this to send a command to the subprocess via the communicate method:
if message.content[:5] == "++say":
    userMessage = message.content[6:]
    proc.communicate(input=f"say {userMessage}".encode())
but once this block of code is reached, the program hangs.
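The hang is expected: communicate() sends the input, closes the child's stdin, and then waits for the process to terminate, which a server never does on its own. A sketch of writing to the still-open pipe instead (assuming the server reads commands line by line):
if message.content[:5] == "++say":
    userMessage = message.content[6:]
    # write to the pipe directly; the trailing newline ends the command line
    proc.stdin.write(f"say {userMessage}\n".encode())
    proc.stdin.flush()  # make sure the server sees the command immediately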

How to run a new process and establish connection with it without any shell script?

My goal is to create a new process running another program and establish a long-lived connection with it (being able to write to its stdin and read results), i.e. not an atomic write-read operation followed by killing the created process. I have to use program code only, not any shell command.
There is my code:
import subprocess
proc = subprocess.Popen(['myprog', '-l'], shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
# proc was kept
# after some waiting I try to send proc some command
if proc.returncode == None:
    proc.stdin.write(b'command')
    response = proc.communicate()[0]
This code returns either an empty string (if a single transaction is performed) or raises BrokenPipeError (if it is run in a loop).
Does proc stay alive after the first proc.communicate()? What approach do I need to use to get control of proc's stdin/stdout?
You are checking for proc.returncode == None.
But if you read the subprocess documentation, returncode stays None until the process has terminated and its status has been collected by poll(), wait() or communicate(); after that it is 0 or a (possibly negative) number. So this check alone does not tell you whether the process is still usable.
Second, if you have long-running processes, you should either adjust and handle the timeout, or disable it.
Third: you should really, really avoid shell=True in Popen; it is a huge security risk.
Here is an example of how I normally deal with Popen:
from shlex import split as sh_split
from subprocess import PIPE, Popen, TimeoutExpired

def launch(command, cwd=None, stdin=None, timeout=15):
    with Popen(
        sh_split(command), universal_newlines=True,
        cwd=cwd, stdout=PIPE, stderr=PIPE, stdin=PIPE
    ) as proc:
        try:
            out, err = proc.communicate(input=stdin, timeout=timeout)
        except TimeoutExpired:
            proc.kill()
            out, err = proc.communicate()
    return proc.returncode, out.splitlines(), err.splitlines()
This is for short-lived processes, but I hope you can see how stdin, stdout and stderr handling is done.
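For a long-lived process like the one in the question, a sketch of keeping the pipes open (assuming 'myprog' answers each input line with one flushed line of output):
from subprocess import PIPE, Popen

# argument list with the default shell=False, as recommended above
proc = Popen(['myprog', '-l'], stdin=PIPE, stdout=PIPE,
             universal_newlines=True, bufsize=1)  # line-buffered text mode

proc.stdin.write('command\n')      # the newline lets the child read a full line
proc.stdin.flush()
response = proc.stdout.readline()  # blocks until the child answers

# ...repeat write/flush/readline as needed, then shut down cleanly:
proc.stdin.close()
proc.wait()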

Get output of system ping without printing to the console

I want to call ping from Python and get the output. I tried the following:
response = os.system("ping -c 3 10.10.0.100")
However, this prints to the console, which I don't want.
PING 10.10.0.100 (10.10.0.100) 56(84) bytes of data.
64 bytes from 10.10.0.100: icmp_seq=1 ttl=63 time=0.713 ms
64 bytes from 10.10.0.100: icmp_seq=2 ttl=63 time=1.15 ms
Is there a way to not print to the console and just get the result?
To get the output of a command, use subprocess.check_output. It raises an error if the command fails, so surround it in a try block.
import subprocess
try:
    response = subprocess.check_output(
        ['ping', '-c', '3', '10.10.0.100'],
        stderr=subprocess.STDOUT,   # get all output
        universal_newlines=True     # return string not bytes
    )
except subprocess.CalledProcessError:
    response = None
To use ping to know whether an address is responding, use its return value, which is 0 for success. subprocess.check_call will raise an error if the return value is not 0. To suppress output, redirect stdout and stderr. With Python 3 you can use subprocess.DEVNULL rather than opening the null file in a with block.
import os
import subprocess

with open(os.devnull, 'w') as DEVNULL:
    try:
        subprocess.check_call(
            ['ping', '-c', '3', '10.10.0.100'],
            stdout=DEVNULL,  # suppress output
            stderr=DEVNULL
        )
        is_up = True
    except subprocess.CalledProcessError:
        is_up = False
In general, use subprocess calls, which, as the docs describe, are intended to replace os.system.
If you only need to check if the ping was successful, look at the status code; ping returns 2 for a failed ping, 0 for a success.
I'd use subprocess.Popen() (and not subprocess.check_call() as that raises an exception when ping reports the host is down, complicating handling). Redirect stdout to a pipe so you can read it from Python:
ipaddress = '198.252.206.140'  # guess who
proc = subprocess.Popen(
    ['ping', '-c', '3', ipaddress],
    stdout=subprocess.PIPE)
stdout, stderr = proc.communicate()
if proc.returncode == 0:
    print('{} is UP'.format(ipaddress))
    print('ping output:')
    print(stdout.decode('ASCII'))
You can switch to subprocess.DEVNULL* if you want to ignore the output; use proc.wait() to wait for ping to exit. Adding -q makes ping do less work, as it produces less output with that switch:
proc = subprocess.Popen(
    ['ping', '-q', '-c', '3', ipaddress],
    stdout=subprocess.DEVNULL)
proc.wait()
if proc.returncode == 0:
    print('{} is UP'.format(ipaddress))
In both cases, proc.returncode can tell you more about why the ping failed, depending on your ping implementation. See man ping for details. On OS X the manpage states:
EXIT STATUS
    The ping utility exits with one of the following values:
    0    At least one response was heard from the specified host.
    2    The transmission was successful but no responses were received.
    any other value
         An error occurred. These values are defined in <sysexits.h>.
and man sysexits lists further error codes.
The latter form (ignoring the output) can be simplified by using subprocess.call(), which combines the proc.wait() with a proc.returncode return:
status = subprocess.call(
    ['ping', '-q', '-c', '3', ipaddress],
    stdout=subprocess.DEVNULL)
if status == 0:
    print('{} is UP'.format(ipaddress))
* subprocess.DEVNULL is new in Python 3.3; use open(os.devnull, 'wb') in its place on older Python versions, making use of the os.devnull value, e.g.:
status = subprocess.call(
    ['ping', '-q', '-c', '3', ipaddress],
    stdout=open(os.devnull, 'wb'))

Python Loop synchronization

I am calling an external process multiple times, in a loop. In pseudocode:
for i in xrange(1, 100):
    call external proc which inserts a row into a table
The problem here is that whenever the external process is called, it runs in a separate thread and could take any amount of time, so Python continues its own execution in the meantime. This causes the insertions to run into a row lock, preventing the insert.
What is the ideal way to wait for the process to complete, under the following constraints:
I cannot modify the way the external process works.
I know I can, but I do not want to use a hack like time.sleep.
I cannot modify any DB settings.
The code for calling the external proc is:
def run_query(query, username, password):
    try:
        process = subprocess.Popen("<path to exe> -u " + username + " -p " + password + " " + query,
                                   shell=True,
                                   stdout=subprocess.PIPE,
                                   stderr=subprocess.PIPE)
        result, error = process.communicate()
        if error != '':
            _pretty_error('stderr', error)
    except OSError, error:
        _pretty_error('OSError', str(error))
    return result
You have several options according to the subprocess documentation:
Calling process.wait() after running process = subprocess.Popen(...)
Using subprocess.call instead of Popen
Using subprocess.check_call instead of Popen (see the sketch after the wait() example below)
Depending on how the result looks, one way would be to use wait():
process = subprocess.Popen("<path to exe> -u " + username + " -p " + password + " " + query,
                           shell=True,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE)
retcode = process.wait()
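The check_call variant from the list above behaves like a blocking call plus an error check: it waits for the process to exit and raises CalledProcessError on a non-zero exit status. A sketch, reusing the question's command string and its _pretty_error helper:
from subprocess import check_call, CalledProcessError
try:
    # blocks until the external proc exits; raises if the exit status is non-zero
    check_call("<path to exe> -u " + username + " -p " + password + " " + query,
               shell=True)
except CalledProcessError, error:
    _pretty_error('returncode', str(error))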
You can try to start the process like:
process = subprocess.call(["<path to exe>", "-u", username, "-p", password, query],
                          shell=False)
This way the main process sleeps until the subprocess ends, but you don't get output.
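If you need the output and still want to block until the process ends, subprocess.check_output (Python 2.7+) combines both; a sketch with the question's command:
from subprocess import check_output
# blocks until the process exits and returns its stdout as a string
result = check_output(["<path to exe>", "-u", username, "-p", password, query])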

paramiko combine stdout and stderr

I am trying to combine the output of stdout and stderr. My belief is that this can be done with the set_combine_stderr() method of a Channel object.
This is what I am doing:
ssh = paramiko.SSHClient()
# I connect and everything is OK, then:
chan = ssh.invoke_shell()
chan.set_combine_stderr(True)
chan.exec_command('python2.6 subir.py')
resultado = chan.makefile('rb', -1)
However, I get the following error when I try to store the result (last line above, chan.makefile()):
Error: Channel closed.
Any help would be greatly appreciated.
While it is true that set_combine_stderr diverts stderr to the stdout stream, it does so in chaotic order, so you do not get the result you probably want, namely, the lines combined in the order written, as if you were running the command in a local terminal window. Instead, use get_pty. That will cause the server to run the lines through a pseudo-terminal, keeping them in chronological sequence.
Here's a test program, outerr.py, that writes alternating lines on stdout and stderr. Assume it's sitting in the home directory of llmps@meerkat2.
#!/usr/bin/env python
import sys

for x in xrange(1, 101):
    (sys.stdout, sys.stderr)[x%2].write('This is line #%s, on std%s.\n' %
                                        (x, ('out', 'err')[x%2]))
Now try the following code to run it remotely:
#!/usr/bin/env python
import paramiko

def connect():
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect('meerkat2', username='llmps', password='..')
    return ssh

def runTest(ssh):
    tran = ssh.get_transport()
    chan = tran.open_session()
    # chan.set_combine_stderr(True)
    chan.get_pty()
    f = chan.makefile()
    chan.exec_command('./outerr.py')
    print f.read(),

if __name__ == '__main__':
    ssh = connect()
    runTest(ssh)
    ssh.close()
If you run the above, you should see 100 lines in order as written. If, instead, you comment out the chan.get_pty() call and uncomment the chan.set_combine_stderr(True) call, you will get clumps of stdout and stderr lines interspersed randomly from run to run.
Ok, I know this is quite an old topic, but I ran into the same problem and I got a (maybe not-so-)pretty solution. Just call the command on the remote server redirecting stderr to stdout, and then always read from stdout. For example:
client = paramiko.SSHClient()
client.load_system_host_keys()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('hostname', username='user', password='pass')
stdin, stdout, stderr = client.exec_command('python your_script.py 2>&1')
print stdout.read()
@AaronMcSmooth: I am referring to the stdout and stderr of the computer I am connecting to (via SSH).
I ended up doing this:
stdin, stdout, stderr = ssh.exec_command(...)
output = stdout.read().strip() + stderr.read().strip()
For the purpose of my application, it doesn't matter to distinguish between stdout and stderr, but I don't think that's the best way to combine the two.
The code of SSHClient.exec_command() is (looking at paramiko's source code):
def exec_command(self, command, bufsize=-1):
    chan = self._transport.open_session()
    chan.exec_command(command)
    stdin = chan.makefile('wb', bufsize)
    stdout = chan.makefile('rb', bufsize)
    stderr = chan.makefile_stderr('rb', bufsize)
    return stdin, stdout, stderr
I am performing the same actions on the channel but receive the Channel is closed error.
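A likely explanation (my reading of paramiko's channel semantics, not confirmed in this thread): a Channel accepts only one request, so once invoke_shell() has started a shell on it, a later exec_command() on the same channel fails and the channel ends up closed. Opening a fresh session per command, exactly as the source above does, avoids the error:
# a sketch: one fresh channel per command, mirroring exec_command's source
tran = ssh.get_transport()
chan = tran.open_session()
chan.set_combine_stderr(True)  # merge stderr into the stdout stream
chan.exec_command('python2.6 subir.py')
resultado = chan.makefile('rb', -1).read()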
