I'm trying to use Python asyncio subprocesses to start an interactive SSH session and automatically input the password. The actual use case doesn't matter but it helps illustrate my problem. This is my code:
proc = await asyncio.create_subprocess_exec(
    'ssh', 'user@127.0.0.1',
    stdout=asyncio.subprocess.PIPE,
    stderr=asyncio.subprocess.STDOUT,
    stdin=asyncio.subprocess.PIPE,
)
# This loop could be replaced by async for, I imagine
while True:
    buf = await proc.stdout.read()
    if not buf:
        break
    print(f'stdout: { buf }')
I expected it to work something like asyncio streams, where I can create two tasks/subroutines/futures, one to listen to the StreamReader (in this case given by proc.stdout), the other to write to StreamWriter (proc.stdin).
However, it doesn't work as expected. The first few lines of output from the ssh command are printed directly to the terminal, until it gets to the password prompt (or host key prompt, as the case may be) and waits for manual input. I expected to be able to read the first few lines, check whether it was asking for a password or a host-key confirmation, and write to the StreamWriter accordingly.
The only time the line print(f'stdout: { buf }') runs is after I press Enter, when it prints, obviously, "stdout: b'Host key verification failed.\r\n'".
I also tried the recommended proc.communicate(), which isn't as neat as using StreamReader/Writer, but it has the same problem: Execution freezes while it waits for manual input.
How is this actually supposed to work? If it's not how I imagined, why not, and is there any way to achieve this without resorting to some sort of busy loop in a thread?
PS: I'm explaining using ssh just for clarity. I ended up using plink for what I wanted, but I want to understand how to do this with python to run arbitrary commands.
This isn't a problem specific to asyncio. The ssh process does not interact with the stdin and stdout streams, but rather accesses the TTY device directly, in order to ensure that password entry is properly secured.
You have three options to work around this:
Don't use ssh, but some other SSH client, one that doesn't expect a TTY to control. For asyncio, you could use the asyncssh library. This library directly implements the SSH protocol and so doesn't require a separate process, and it accepts username and password credentials directly.
Provide a pseudo-tty for SSH to talk to, one your Python program controls. The pexpect library provides a high-level API that does this for you and can be used to fully control the ssh command.
Set up an alternative password prompter for ssh to use. The ssh program can let something else handle password entry if there is no TTY, via the SSH_ASKPASS environment variable. Most versions of ssh are quite picky about when they'll accept SSH_ASKPASS, however: you need to set DISPLAY too, pass the -n command-line switch to ssh, and use the setsid command to run ssh in a new session, disconnected from any TTY.
I've previously described how to use SSH_ASKPASS with asyncio in an answer to a question about git and ssh.
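For a rough idea of what that third option looks like with asyncio, here is a minimal, untested sketch; the throwaway askpass helper, the DISPLAY value and the true remote command are illustrative assumptions, not something ssh prescribes:
import asyncio, os, stat, tempfile

async def ssh_with_askpass(host, password):
    # Throwaway helper that prints the password; ssh runs it instead of
    # prompting on a TTY. (Don't leave passwords in temp files for real.)
    fd, askpass = tempfile.mkstemp()
    with os.fdopen(fd, 'w') as f:
        f.write('#!/bin/sh\necho "%s"\n' % password)
    os.chmod(askpass, stat.S_IRWXU)
    env = dict(os.environ, SSH_ASKPASS=askpass, DISPLAY=':0')
    # setsid detaches ssh from the controlling TTY and -n stops it reading
    # stdin; ssh only falls back to SSH_ASKPASS under those conditions.
    proc = await asyncio.create_subprocess_exec(
        'setsid', 'ssh', '-n', host, 'true',
        env=env,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.STDOUT,
    )
    out, _ = await proc.communicate()
    os.unlink(askpass)
    return proc.returncode, out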
The path of least resistance is to use pexpect, as it supports asyncio natively (any method that accepts async_=True can be used as a coroutine):
import pexpect
child = pexpect.spawn('ssh user@127.0.0.1')
await child.expect('password:', timeout=120, async_=True)
child.sendline(password_for_user)
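From there the same child object keeps driving the session; for example, this small (hypothetical) continuation collects whatever the remote side prints until the connection closes:
# Block until ssh exits, then dump everything that was printed.
await child.expect(pexpect.EOF, async_=True, timeout=None)
print(child.before.decode())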
If anyone else landed here for a more generic answer to the question, see the following example:
import asyncio
async def _read_stream(stream, cb):
    while True:
        line = await stream.readline()
        if line:
            cb(line)
        else:
            break

async def _stream_subprocess(cmd, stdout_cb, stderr_cb):
    process = await asyncio.create_subprocess_exec(
        *cmd,
        stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE)
    # gather accepts bare coroutines; asyncio.wait stopped doing so in Python 3.11
    await asyncio.gather(
        _read_stream(process.stdout, stdout_cb),
        _read_stream(process.stderr, stderr_cb),
    )
    return await process.wait()
def execute(cmd, stdout_cb, stderr_cb):
    # asyncio.run() creates and tears down a fresh event loop, so execute()
    # can be called more than once (a closed get_event_loop() loop cannot)
    return asyncio.run(_stream_subprocess(cmd, stdout_cb, stderr_cb))
if __name__ == '__main__':
    print(execute(
        ["bash", "-c", "echo stdout && sleep 1 && echo stderr 1>&2 && sleep 1 && echo done"],
        lambda x: print("STDOUT: %s" % x),
        lambda x: print("STDERR: %s" % x),
    ))
Here's a demonstration of reading live output.
Briefly: start a bash process, pass an ls command through its stdin, then asynchronously read the result from its stdout:
proc = await asyncio.create_subprocess_exec(
    '/bin/bash', '-i',
    stdout=asyncio.subprocess.PIPE,
    stderr=asyncio.subprocess.STDOUT,
    stdin=asyncio.subprocess.PIPE,
)
proc.stdin.write(b'ls \r\n')
await proc.stdin.drain()
try:
    while True:
        # wait up to 3 seconds for each line, then give up
        line = await asyncio.wait_for(proc.stdout.readline(), 3)
        print(line)
except asyncio.TimeoutError:
    pass
Using this technique I was not able to log in to a server over ssh with a password;
I got stuck with the error "bash: no job control in this shell" after running the command 'ssh -tt user@localhost'.
Have you tried using the AsyncSSH library? (It uses Python's asyncio framework.) It seems like this is what you're looking for.
import asyncio, asyncssh, sys

async def run_client():
    async with asyncssh.connect('localhost', username='myuser', password='secretpw') as conn:
        result = await conn.run('ls abc', check=True)
        print(result.stdout, end='')

try:
    asyncio.get_event_loop().run_until_complete(run_client())
except (OSError, asyncssh.Error) as exc:
    sys.exit('SSH connection failed: ' + str(exc))
It also supports SSH keys via the client_keys parameter. Check the documentation; there are many examples for interactive input, I/O redirection, etc.
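For instance, key-based authentication is just a different argument to connect; a minimal sketch (the key path here is an assumption):
async with asyncssh.connect('localhost', username='myuser',
                            client_keys=['/path/to/id_rsa']) as conn:
    result = await conn.run('ls abc')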
I wrote this code in Paramiko:
from paramiko import SSHClient, AutoAddPolicy
import time

ssh = SSHClient()
ssh.set_missing_host_key_policy(AutoAddPolicy())
ssh.connect(hostname, username=user, password=passwd, timeout=3)
session = ssh.invoke_shell()
session.send("\n")
session.send("echo step 1\n")
time.sleep(1)
session.send("sleep 30\n")
time.sleep(1)
while not session.recv_ready():
    time.sleep(2)  # note: time.wait() does not exist
output = session.recv(65535)
session.send("echo step 2\n")
time.sleep(1)
output += session.recv(65535)
I'm trying to execute several commands on my Linux server. The problem is that my Python code does not wait for a command to finish executing; for example, if I try to execute sleep 30, Python does not wait 30 seconds for the command to finish. How can I resolve this problem? I tried a while recv_ready() loop, but it still does not wait.
Use exec_command: http://docs.paramiko.org/en/1.16/api/channel.html
stdin, stdout, stderr = ssh.exec_command("my_long_command --arg 1 --arg 2")
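exec_command itself returns immediately; it is reading the output (or asking for the exit status) that blocks until the remote command finishes. A minimal sketch, with sleep 30 standing in for your long command:
stdin, stdout, stderr = ssh.exec_command("sleep 30 && echo done")
output = stdout.read()                         # blocks until the command exits
exit_code = stdout.channel.recv_exit_status()  # 0 on success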
The following code works for me:
from paramiko import SSHClient, AutoAddPolicy
import time

ssh = SSHClient()
ssh.set_missing_host_key_policy(AutoAddPolicy())
ssh.connect('111.111.111.111', username='myname', key_filename='/path/to/my/id_rsa.pub', port=1123)

sleeptime = 0.001
outdata, errdata = b'', b''  # recv() returns bytes, so accumulate into bytes
ssh_transp = ssh.get_transport()
chan = ssh_transp.open_session()
# chan.settimeout(3 * 60 * 60)
chan.setblocking(0)
chan.exec_command('ls -la')
while True:  # monitoring process
    # Reading from output streams
    while chan.recv_ready():
        outdata += chan.recv(1000)
    while chan.recv_stderr_ready():
        errdata += chan.recv_stderr(1000)
    if chan.exit_status_ready():  # If completed
        break
    time.sleep(sleeptime)
retcode = chan.recv_exit_status()
ssh_transp.close()
print(outdata)
print(errdata)
Please note that the history command cannot be executed over ssh as is.
See example here: https://superuser.com/questions/962001/incorrect-output-of-history-command-of-ssh-how-to-read-the-timestamp-info-corre
If you do not need to read stdout and stderr separately, you can use much more straightforward code:
stdin, stdout, stderr = ssh_client.exec_command(command)
stdout.channel.set_combine_stderr(True)
output = stdout.readlines()
The readlines call reads until the command finishes and returns the complete output.
In case you need the output separately, do not be tempted to remove the set_combine_stderr and call readlines on stdout and stderr separately. That might deadlock. See Paramiko ssh die/hang with big output
For a correct code that reads the outputs separately, see Run multiple commands in different SSH servers in parallel using Python Paramiko.
Obligatory warning: Do not use AutoAddPolicy – You are losing a protection against MITM attacks by doing so. For a correct solution, see Paramiko "Unknown Server".
I have a program that runs from my local computer and connects via SSH (paramiko package) to a Linux computer.
I use the following functions to send a command and get an exit_code to make sure it's done.
For some reason, sometimes an exit code is returned, whereas sometimes the code enters an endless loop.
Does anyone know why this happens and how to make it stable?
def check_on_command(self, stdin, stdout, stderr):
    if stdout is None:
        raise Exception("Tried to check command before it was ready")
    if not stdout.channel.exit_status_ready():
        return None
    else:
        return stdout.channel.recv_exit_status()

def run_command(self, command):
    (stdin, stdout, stderr) = self.client.exec_command(command)
    logger.info(f"Execute command: {command}")
    while self.check_on_command(stdin, stdout, stderr) is None:
        time.sleep(5)
    logger.info(f'Finished running, exit code: {stdout.channel.recv_exit_status()}')
If you're using Python >= 3.6, I advise working with an asynchronous library that provides await capabilities, for better run times and simpler, more manageable code.
For example, you can use the third-party asyncssh library (installable with pip), which does the job as requested. In general, async code that sleeps in a loop to wait for a task to finish should be replaced with code like the following.
import asyncio, asyncssh, sys

async def run_client():
    async with asyncssh.connect('localhost') as conn:
        result = await conn.run('ls abc')
        if result.exit_status == 0:
            print(result.stdout, end='')
        else:
            print(result.stderr, end='', file=sys.stderr)
            print('Program exited with status %d' % result.exit_status,
                  file=sys.stderr)

try:
    asyncio.get_event_loop().run_until_complete(run_client())
except (OSError, asyncssh.Error) as exc:
    sys.exit('SSH connection failed: ' + str(exc))
You can find further documentation here: asyncssh
I am using this script in Python to connect to a Bluetooth device and then get data, but I want to know the result of the shell command so I can do the next steps:
import os
import time
import signal
import subprocess

p = subprocess.Popen("sudo rfcomm connect /dev/rfcomm0 XX:XX:XX:XX:XX:XX 1",
                     shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(stdout, stderr) = p.communicate()  # note: communicate() returns (stdout, stderr), in that order
print 'stderr: [%s]' % stderr
print 'stdout: [%s]' % stdout
time.sleep(5)
while True:
    print "Device is ready"
    time.sleep(5)
This code is a sample. When I run the command
"sudo rfcomm connect /dev/rfcomm0 XX:XX:XX:XX:XX:XX 1"
in a shell, it returns:
Connected /dev/rfcomm0 to XX:XX:XX:XX:XX:XX on channel 1
Press CTRL-C for hangup
But how can I put the above result in a variable? I need to know the result of this command.
I used stdout and stderr with subprocess, but it does not work.
I am using Python 2.7.
Python subprocess and user interaction
The above link talks about getting output into a variable in general, but the problem in my question relates to rfcomm, which does not put its result in a variable. I ran those scripts and they work well, but they do not work when used with the rfcomm command.
If you're using Python 3.5 or higher,
you can use run instead. That way you'll have access directly like so,
result = subprocess.run(["sudo", "rfcomm", "connect", "/dev/rfcomm0", "XX:XX:XX:XX:XX:XX", "1"],
                        stdout=subprocess.PIPE)
Then access what you want like this,
result.stdout
If you use Python 2.7, the documentation I linked redirects you to the Older high-level API section.
From there you'll notice that you could use check_output
result = subprocess.check_output(["sudo", "rfcomm", "connect", "/dev/rfcomm0", "XX:XX:XX:XX:XX:XX", "1"])
Note: if you also want to capture errors, use the stderr=subprocess.STDOUT flag.
result = subprocess.check_output("sudo rfcomm connect /dev/rfcomm0 XX:XX:XX:XX:XX:XX 1", stderr=subprocess.STDOUT, shell=True)
Lastly, there is an important note you should be aware of:
By default, this function will return the data as encoded bytes. The actual encoding of the output data may depend on the command being invoked, so the decoding to text will often need to be handled at the application level.
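So in practice you usually decode the captured bytes yourself; for example (assuming UTF-8 output):
text = result.stdout.decode('utf-8')  # for subprocess.run(...)
text = result.decode('utf-8')         # for check_output, which returns the bytes directly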
EDIT
Since your goal seems to be getting output while the command is running, take a look at this answer. I prefer linking instead of re-inventing the wheel.
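The usual shape of that approach is to read the pipe line by line while the process is still running; a minimal sketch (Python 3 syntax, same rfcomm command as above):
import subprocess

p = subprocess.Popen("sudo rfcomm connect /dev/rfcomm0 XX:XX:XX:XX:XX:XX 1",
                     shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
# iter() keeps calling readline() until it returns b'', i.e. the pipe closed
for line in iter(p.stdout.readline, b''):
    print(line.decode().rstrip())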
You may need to issue the CTRL+C command before the data is returned.
Send a signal and catch the exception to deal with what is returned.
import os
import time
import signal
import subprocess

stream = []
try:
    p = subprocess.Popen("sudo rfcomm connect /dev/rfcomm0 XX:XX:XX:XX:XX:XX 1",
                         shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    time.sleep(5)
    # signal.CTRL_C_EVENT exists only on Windows; on Linux send SIGINT instead
    os.kill(p.pid, signal.SIGINT)
    p.wait()
except KeyboardInterrupt:
    pass
for line in p.stdout:  # may also be p.stderr
    stream.append(line)
for x in stream:
    print(x)
I've successfully implemented Paramiko using exec_command, however, the command I'm running on the remote machine(s) can sometimes take several minutes to complete.
During this time my Python script has to wait for the remote command to complete and receive stdout.
My goal is to let the remote machine "run in the background", and allow the local Python script to continue once it sends the command via exec_command.
I'm not concerned with stdout at this point, I'm just interested in bypassing waiting for stdout to return so the script can continue on while the command runs on the remote machine.
Any suggestions?
Current script:
def function():
    ssh_object = paramiko.SSHClient()
    ssh_object.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh_object.connect(address, port=22, username='un', password='pw')
    command = 'command to run'
    try:
        stdin, stdout, stderr = ssh_object.exec_command(command)
        stdout.readlines()
    except:
        pass  # do something else
Thank you!
Use a separate thread to run the command. Usually threads should be cleaned up with the join method (the exception being daemon threads that you expect to run until your program exits). Exactly how you do that depends on the other stuff your program is running. But an example is:
import threading

def ssh_exec_thread(ssh_object, command):
    stdin, stdout, stderr = ssh_object.exec_command(command)
    stdout.readlines()

def function():
    ssh_object = paramiko.SSHClient()
    ssh_object.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh_object.connect(address, port=22, username='un', password='pw')
    command = 'command to run'
    thread = threading.Thread(target=ssh_exec_thread, args=(ssh_object, command))
    thread.start()
    # ...do something else...
    thread.join()
You can make this fancier by passing a Queue to ssh_exec_thread and putting the result on the queue for processing by your program later, as sketched below.
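A minimal sketch of that variation (the queue plumbing is my addition on top of the code above):
import threading
import queue

def ssh_exec_thread(ssh_object, command, result_queue):
    stdin, stdout, stderr = ssh_object.exec_command(command)
    result_queue.put(stdout.readlines())  # hand the output back to the caller

result_queue = queue.Queue()
thread = threading.Thread(target=ssh_exec_thread, args=(ssh_object, command, result_queue))
thread.start()
# ...do something else while the command runs remotely...
thread.join()
output = result_queue.get()  # lines collected by the worker thread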