I have a wrapper around Paramiko's SSHClient.exec_command(). I'd like to capture standard out. Here's a shortened version of my function:
def __execute(self, args, sudo=False, capture_stdout=True, plumb_stderr=True,
              ignore_returncode=False):
    argstr = ' '.join(pipes.quote(arg) for arg in args)
    channel = ssh.get_transport().open_session()
    channel.exec_command(argstr)
    channel.shutdown_write()

    # Handle stdout and stderr until the command terminates
    captured = []
    def do_capture():
        while channel.recv_ready():
            o = channel.recv(1024)
            if capture_stdout:
                captured.append(o)
            else:
                sys.stdout.write(o)
                sys.stdout.flush()
        while plumb_stderr and channel.recv_stderr_ready():
            sys.stderr.write(channel.recv_stderr(1024))
            sys.stderr.flush()

    while not channel.exit_status_ready():
        do_capture()

    # We get data after the exit status is available, why?
    for i in xrange(100):
        do_capture()

    rc = channel.recv_exit_status()
    if not ignore_returncode and rc != 0:
        raise Exception('Got return code %d executing %s' % (rc, args))
    if capture_stdout:
        return ''.join(captured)

paramiko.SSHClient.execute = __execute
In do_capture(), whenever channel.recv_ready() tells me that I can receive data from the command's stdout, I call channel.recv(1024) and append the data to my buffer. I stop when the command's exit status is available.
However, it seems like more stdout data comes at some point after the exit status.
# We get data after the exit status is available, why?
for i in xrange(100):
    do_capture()
I can't just call do_capture() once, as it seems like channel.recv_ready() will return False for a few milliseconds, and then True, and more data is received, and then False again.
I'm using Python 2.7.6 with Paramiko 1.15.2.
I encountered the same problem. The problem is that after the command exits, there may still be data in the stdout or stderr buffers, still on its way over the network, or waiting somewhere else. I read through Paramiko's source code, and apparently all data has been read once chan.recv() returns an empty string.
So this is my attempt to solve it, until now it's been working.
import socket
from contextlib import closing

def run_cmd(ssh, cmd, stdin=None, timeout=-1, recv_win_size=1024):
    '''
    Run command on server, optionally sending data to its stdin

    Arguments:
        ssh           -- An instance of paramiko.SSHClient connected
                         to the server the commands are to be executed on
        cmd           -- The command to run on the remote server
        stdin         -- String to write to command's standard input
        timeout       -- Timeout for command completion in seconds.
                         Set to None to make the execution blocking.
        recv_win_size -- Size of chunks the output is read in

    Returns:
        A tuple containing (exit_status, stdout, stderr)
    '''
    with closing(ssh.get_transport().open_session()) as chan:
        chan.settimeout(timeout)
        chan.exec_command(cmd)
        if stdin:
            chan.sendall(stdin)
        chan.shutdown_write()

        stdout, stderr = [], []

        # Until the command exits, read from its stdout and stderr
        while not chan.exit_status_ready():
            if chan.recv_ready():
                stdout.append(chan.recv(recv_win_size))
            if chan.recv_stderr_ready():
                stderr.append(chan.recv_stderr(recv_win_size))

        # Command has finished, read exit status
        exit_status = chan.recv_exit_status()

        # Ensure we gobble up all remaining data
        while True:
            try:
                sout_recvd = chan.recv(recv_win_size)
                if not sout_recvd and not chan.recv_ready():
                    break
                else:
                    stdout.append(sout_recvd)
            except socket.timeout:
                continue

        while True:
            try:
                serr_recvd = chan.recv_stderr(recv_win_size)
                if not serr_recvd and not chan.recv_stderr_ready():
                    break
                else:
                    stderr.append(serr_recvd)
            except socket.timeout:
                continue

        stdout = ''.join(stdout)
        stderr = ''.join(stderr)

        return (exit_status, stdout, stderr)
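A hypothetical usage, assuming ssh is an already-connected paramiko.SSHClient and the command is only illustrative:

# Illustrative only: run a command and check its exit status
status, out, err = run_cmd(ssh, 'ls -la /tmp', timeout=30)
if status != 0:
    print('Command failed: ' + err)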
I encountered the same issue.
This link (Paramiko: how to ensure data is received between commands) gave me some help by explaining that after exit_status_ready() returns True, you may still have to receive additional data. In my tests (with a couple of screens of output), there was additional data to read after exit_status_ready() returned True in every single run.
But the way it reads the remaining data is not correct: it uses recv_ready() to check whether there is something to read, and exits once recv_ready() returns False. That will work most of the time, but the following situation can happen: recv_ready() can return False to indicate that at that moment there is nothing to receive, without that meaning all the data has arrived. In my tests, I would leave the test running, and sometimes it would take half an hour for the issue to appear.
I found the solution by reading the following sentence in the Channel.recv() documentation: "If a string of length zero is returned, the channel stream has closed."
So we can just have a single loop and read all the data until recv() returns a zero-length result. At that point the channel stream is closed, but to make sure the exit status is ready we can add another loop that sleeps until channel.exit_status_ready() returns True.
Note that this will work only with a channel without a pty enabled (which is the default).
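A minimal sketch of that single loop (the names are illustrative, not from the original answer; it assumes a session channel opened without a pty, as noted):

import time

def read_all_stdout(chan, win_size=1024):
    # Read until recv() returns a zero-length string, which means
    # the channel stream has closed
    chunks = []
    while True:
        data = chan.recv(win_size)
        if not data:
            break
        chunks.append(data)
    # The stream is closed; wait until the exit status is available
    while not chan.exit_status_ready():
        time.sleep(0.1)
    return ''.join(chunks), chan.recv_exit_status()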
Related
I wrote this code in Paramiko:
import time
from paramiko import SSHClient, AutoAddPolicy

ssh = SSHClient()
ssh.set_missing_host_key_policy(AutoAddPolicy())
ssh.connect(hostname, username=user, password=passwd, timeout=3)
session = ssh.invoke_shell()
session.send("\n")
session.send("echo step 1\n")
time.sleep(1)
session.send("sleep 30\n")
time.sleep(1)
while not session.recv_ready():
    time.sleep(2)
output = session.recv(65535)
session.send("echo step 2\n")
time.sleep(1)
output += session.recv(65535)
I'm trying to execute several commands on my Linux server. The problem is that my Python code does not wait for a command to finish executing; for example, if I try to execute sleep 30, Python does not wait 30 seconds before sending the next command. How can I resolve this problem? I tried with while recv_ready(), but it still does not wait.
Use exec_command: http://docs.paramiko.org/en/1.16/api/channel.html
stdin, stdout, stderr = ssh.exec_command("my_long_command --arg 1 --arg 2")
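Unlike invoke_shell, reading the stdout returned by exec_command blocks until the remote command has finished, which gives the waiting behavior the question asks for. A minimal sketch (the command is illustrative):

stdin, stdout, stderr = ssh.exec_command("sleep 30; echo done")
output = stdout.read()  # blocks until the remote command exits
exit_status = stdout.channel.recv_exit_status()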
The following code works for me:
from paramiko import SSHClient, AutoAddPolicy
import time

ssh = SSHClient()
ssh.set_missing_host_key_policy(AutoAddPolicy())
# key_filename should point at the private key, not the .pub file
ssh.connect('111.111.111.111', username='myname', key_filename='/path/to/my/id_rsa', port=1123)

sleeptime = 0.001
outdata, errdata = '', ''
ssh_transp = ssh.get_transport()
chan = ssh_transp.open_session()
# chan.settimeout(3 * 60 * 60)
chan.setblocking(0)
chan.exec_command('ls -la')
while True:  # monitoring process
    # Reading from output streams
    while chan.recv_ready():
        outdata += chan.recv(1000)
    while chan.recv_stderr_ready():
        errdata += chan.recv_stderr(1000)
    if chan.exit_status_ready():  # If completed
        break
    time.sleep(sleeptime)
retcode = chan.recv_exit_status()
ssh_transp.close()
print(outdata)
print(errdata)
Please note that the history command cannot be executed over ssh as is.
See example here: https://superuser.com/questions/962001/incorrect-output-of-history-command-of-ssh-how-to-read-the-timestamp-info-corre
In case you do not need to read the stdout and stderr separately, you can use way more straightforward code:
stdin, stdout, stderr = ssh_client.exec_command(command)
stdout.channel.set_combine_stderr(True)
output = stdout.readlines()
The readlines call reads until the command finishes and returns the complete output.
In case you need the output separately, do not be tempted to remove the set_combine_stderr and call readlines on stdout and stderr separately. That might deadlock. See Paramiko ssh die/hang with big output
For a correct code that reads the outputs separately, see Run multiple commands in different SSH servers in parallel using Python Paramiko.
Obligatory warning: Do not use AutoAddPolicy – you lose protection against MITM attacks by doing so. For a correct solution, see Paramiko "Unknown Server".
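For illustration, a minimal sketch of the safer setup (hostname and username are placeholders; it assumes the server's key is already in ~/.ssh/known_hosts):

from paramiko import SSHClient, RejectPolicy

ssh_client = SSHClient()
ssh_client.load_system_host_keys()                      # trust ~/.ssh/known_hosts
ssh_client.set_missing_host_key_policy(RejectPolicy())  # refuse unknown hosts
ssh_client.connect('example.com', username='user')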
I have a program that runs from my local computer and connects via SSH (paramiko package) to a Linux computer.
I use the following functions to send a command and get an exit_code to make sure it's done.
For some reason, sometimes an exit code is returned, whereas sometimes the code enters an endless loop.
Does anyone know why this happens and how to make it stable?
def check_on_command(self, stdin, stdout, stderr):
    if stdout is None:
        raise Exception("Tried to check command before it was ready")
    if not stdout.channel.exit_status_ready():
        return None
    else:
        return stdout.channel.recv_exit_status()

def run_command(self, command):
    (stdin, stdout, stderr) = self.client.exec_command(command)
    logger.info(f"Execute command: {command}")
    while self.check_on_command(stdin, stdout, stderr) is None:
        time.sleep(5)
    logger.info(f'Finish running, exit code: {stdout.channel.recv_exit_status()}')
In case you're using Python >= 3.6, I advise working with an asynchronous library that provides await capabilities, for optimized run times and simpler, more manageable code.
For example, you can use the third-party asyncssh library, which does the job as requested. In general, async code that sleeps in a loop while waiting for a task to finish should be replaced like so:
import asyncio, asyncssh, sys

async def run_client():
    async with asyncssh.connect('localhost') as conn:
        result = await conn.run('ls abc')
        if result.exit_status == 0:
            print(result.stdout, end='')
        else:
            print(result.stderr, end='', file=sys.stderr)
            print('Program exited with status %d' % result.exit_status,
                  file=sys.stderr)

try:
    asyncio.get_event_loop().run_until_complete(run_client())
except (OSError, asyncssh.Error) as exc:
    sys.exit('SSH connection failed: ' + str(exc))
You can find further documentation here: asyncssh
I've been trying to write a Python script to control the starting and stopping of a Minecraft server. I've got it to accept commands through input(), but I also wanted the server's logs to be printed to the console (or processed some way). Since the process never ends, readline hangs every time the server finishes outputting text, and no further input can be performed. Is there a way to let stdin and stdout work simultaneously, or a way to time out readline so I can continue?
The code I've got so far:
import subprocess
from subprocess import PIPE
import os

minecraft_dir = r"D:\Minecraft Server"
executable = r'java -Xms4G -Xmx4G -jar "D:\Minecraft Server\paper-27.jar" java'
process = None

def server_command(cmd):
    if process is not None:
        cmd = cmd + '\n'
        cmd = cmd.encode("utf-8")
        print(cmd)
        process.stdin.write(cmd)
        process.stdin.flush()
    else:
        print("Server is not running.")

def server_stop():
    if process is None:
        print("Server is not running.")
    else:
        process.stdin.write("stop\n".encode("utf-8"))
        process.stdin.flush()

while True:
    command = input()
    command = command.lower()
    if command == "start":
        if process is None:
            os.chdir(minecraft_dir)
            process = subprocess.Popen(executable, stdin=PIPE, stdout=PIPE)
            print("Server started.")
        else:
            print("Server Already Running.")
    elif command == "stop":
        server_stop()
        process = None
    else:
        server_command(command)
I've mentioned processing the server log some way or another because I don't really need it on the console, since I can always read it from the file the server generates. But this particular server needs the stdout=PIPE argument, or it throws:
java.io.IOException: ReadConsoleInputW failed
at org.fusesource.jansi.internal.Kernel32.readConsoleInputHelper(Kernel32.java:816)
at org.fusesource.jansi.internal.WindowsSupport.readConsoleInput(WindowsSupport.java:99)
at org.jline.terminal.impl.jansi.win.JansiWinSysTerminal.processConsoleInput(JansiWinSysTerminal.java:112)
at org.jline.terminal.impl.AbstractWindowsTerminal.pump(AbstractWindowsTerminal.java:458)
at java.lang.Thread.run(Unknown Source)
and I think that breaks the pipe? No further input is directed to the process (process.stdin.write stops working), yet the process is still running.
Any help on either one of the issues would be greatly appreciated.
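One common pattern for this kind of problem (not from the original thread, just a sketch) is to drain the process's stdout on a background thread, so readline can block there without freezing the input() loop:

import threading

def drain_output(proc):
    # Hypothetical helper: echo server log lines until the pipe hits EOF
    for line in iter(proc.stdout.readline, b''):
        print(line.decode("utf-8", errors="replace"), end='')

# After starting the server:
#   reader = threading.Thread(target=drain_output, args=(process,), daemon=True)
#   reader.start()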
So I have a batch file that starts an Appium server.
When I execute my batch file, I want to read the output, and once the server is running I want to continue.
I know the Appium server is running from this output:
Appium REST http interface listener started on 0.0.0.0:4723
Currently this is what I have:
process = subprocess.Popen([r'C:\\appium.bat'])
stdout = process.communicate()[0]
print('STDOUT:{}'.format(stdout))
What I want is to wait up to 60 seconds, or until this line appears.
In case 60 seconds pass and this line (Appium REST http interface listener started on 0.0.0.0:4723) does not appear, I want to raise an exception.
My problem is that when my server starts, the process continues to run, so communicate() never returns, the code never reaches the next lines, and I cannot kill the Appium process.
Any suggestions how to solve it?
The following code should work for your case. First, it spawns the process; then, while waiting for the timeout, it keeps checking the output of the process. When the pattern is matched it breaks; if the timeout passes first, an exception is raised.
import time
import subprocess
import re

proc = subprocess.Popen(['/tmp/test.sh'], stdout=subprocess.PIPE)
timeout = time.time() + 10  # adjust the timeout value here
target = ".*started on .*"

while True:
    if time.time() >= timeout:
        raise Exception("Server wasn't started")
    else:
        # read a line of output (bytes, since stdout is a binary pipe)
        output = proc.stdout.readline()
        if output == b'' and proc.poll() is not None:
            # readline() hit EOF and the process has exited
            # (proc.poll() returns None while the process is still running)
            raise Exception("Server process has stopped")
        else:
            line = output.decode().strip()
            if re.match(target, line):
                # if the pattern is matched, do something and break
                print("Server has started")
                break
    time.sleep(0.5)
This is the bash file I used to test. Save it as /tmp/test.sh
#!/bin/bash
echo "TEST"
sleep 1
echo "server has started on 0.0.0.0"
You can wait by using the time module
import time
time.sleep(60) # wait for 60 seconds
You could do this with signal.alarm (note that SIGALRM is only available on Unix-like systems):

import signal
import subprocess

def handler(signum, stack):
    # raise an exception if the server didn't come up within the time frame
    raise Exception("It didn't happen in time...")

signal.signal(signal.SIGALRM, handler)
signal.alarm(60)

process = subprocess.Popen([r'C:\\appium.bat'])
stdout = process.communicate()[0]  # assuming this blocks
print('STDOUT:{}'.format(stdout))

signal.alarm(0)  # turn off the alarm if it came up within 60 seconds
If .communicate() is not blocking, then:

import time
import subprocess

process = subprocess.Popen([r'C:\\appium.bat'])
stdout = process.communicate()[0]  # non-blocking?
print('STDOUT:{}'.format(stdout))

time.sleep(60)
if 'Appium REST http interface listener started' not in stdout:
    raise Exception("It didn't come up in time...")
I'm trying to make a Python script which runs a bash script on a remote machine via ssh and then parses its output. The bash script outputs a lot of data (about 5 megabytes of text / 50k lines) to stdout, and here is the problem: I get all the data in only ~10% of cases. In the other 90% of cases I get about 97% of what I expect, and the output always seems to be trimmed at the end. This is how my script looks:
import paramiko

def run_ssh_command(ip, port, username, password, command):
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(ip, port, username, password)
    stdin, stdout, stderr = ssh.exec_command(command)
    output = ''
    while not stdout.channel.exit_status_ready():
        solo_line = ''
        # Read stdout data when available
        if stdout.channel.recv_ready():
            # Retrieve a chunk of at most 2048 bytes
            solo_line = stdout.channel.recv(2048)
        output += solo_line
    ssh.close()
    return output

result = run_ssh_command(server_ip, server_port, login, password, 'cat /var/log/somefile')
print "result size: ", len(result)
I'm pretty sure the problem is some internal buffer overflowing, but which one, and how do I fix it?
Thank you very much for any tip!
When stdout.channel.exit_status_ready() starts returning True, there might still be a lot of data on the remote side, waiting to be sent. But you only receive one more chunk of 2048 bytes and quit.
Instead of checking the exit status, you could keep calling recv(2048) until it returns an empty string, which means that no more data is coming:
output = ''
next_chunk = True
while next_chunk:
    next_chunk = stdout.channel.recv(2048)
    output += next_chunk
But really you probably just want:
output = stdout.read()
May I suggest a less crude way to execute commands over SSH, via the Fabric library.
It may look like this (omitting SSH authentication details):
from fabric import Connection

with Connection('user@localhost') as con:
    res = con.run('~/test.sh', hide=True)
    lines = res.stdout.split('\n')
    print('{} lines read.'.format(len(lines)))
Given the test script ~/test.sh (note the bash shebang, since {1..1234} is a bashism):

#!/bin/bash
for i in {1..1234}
do
    echo "Line $i"
done

all of the output is correctly consumed.