Why can't I interact with SSH using subprocess.Popen?

I'm trying to interact with SSH using Python's subprocess library. Here's my current code:
import subprocess
import time
proc = subprocess.Popen(["ssh", "-tt", "user@host"],
                        stdout=subprocess.PIPE, stdin=subprocess.PIPE)
time.sleep(10)
proc.stdin.write(b"ls\n")
while True:
    next_line = proc.stdout.readline()
    if next_line != '':
        print(next_line.decode("utf-8"), end='')
    else:
        time.sleep(.01)
When I run it, I get the usual SSH security banner, but nothing else. I log in via public key and have already added the host to my known_hosts list, so I would imagine authentication shouldn't be an issue. By changing ls to a command to write text to a file, I have confirmed that no commands are "going through." What is the problem, and how can I fix it?

You may need to call proc.stdin.flush() after writing: the pipe Popen gives you is buffered, so the command can sit in the buffer indefinitely. More generally, you should use Paramiko instead, a Python library that speaks the SSH protocol directly, rather than driving the ssh binary through Popen.
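For reference, here is a minimal sketch of the corrected loop (user@host is the same placeholder as in the question). Note the explicit flush, and the comparison against b'' rather than '': readline() on a binary pipe returns bytes, so the original comparison could never match.

import subprocess
import time

proc = subprocess.Popen(["ssh", "-tt", "user@host"],
                        stdout=subprocess.PIPE, stdin=subprocess.PIPE)
time.sleep(10)
proc.stdin.write(b"ls\n")
proc.stdin.flush()  # push the buffered command down the pipe
while True:
    next_line = proc.stdout.readline()
    if next_line != b'':  # readline() returns bytes on a binary pipe
        print(next_line.decode("utf-8"), end='')
    else:
        time.sleep(.01)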

Related

How to execute a command and read/write to its STDIN/TTY (together)?

I've seen examples and questions about how to do each of these things individually, but in this question I'm trying to do them all jointly.
Basically my case is that I have a command that needs me to write to its STDIN, read from its STDOUT, and answer its TTY prompts, all within a single execution of the command. Not that it matters, but if you're curious, the command is scrypt enc - out.enc.
Restrictions: must be pure Python.
Question: how to do it?
I tried these:
import pty
import os
import subprocess

master, slave = pty.openpty()
p = subprocess.Popen(['sudo', 'ls', '-lh'], stdin=slave, stdout=master)
x = os.read(master)
print(x)
stdout, stderr = p.communicate(b'lol\r\n')
import pty
import os
import sys
import subprocess

def read(fd):
    data = os.read(fd, 1024)
    data_str = data.decode()
    if data_str.find('[sudo] password for') == 0:
        data_str = 'password plz: '
    sys.stdout.write(data_str)
    sys.stdout.flush()

def write(fd):
    x = 'lol\r\n'
    for b in x.encode():
        os.write(fd, b)

pty.spawn(['sudo', 'ls', '-lh'], read, write)
The goal is to fully wrap the TTY prompts so that they are not visible to the user, and at the same time to feed a password to the process's TTY input to make sudo happy.
Based on that goal, none of these attempts work for various reasons.
But it is even worse: suppose these attempts did work; how could I feed the process one thing on its STDIN and another on its TTY input? What confuses me is that the Popen example literally maps stdin to the pty, so how can the process tell which is which? How will it know that some input is meant for STDIN and not for TTY-in?
Disclaimer:
Discussing this topic in detail would require a lot of text so I will try to simplify things to keep it short. I will try to include as many "for further reading" links as possible.
To make it short, there is only one input stream, and that is STDIN. In a normal terminal, STDIN is connected to a TTY, so what you "type on the TTY" will be read by the shell. The shell then decides what to do with it. If there is a program running, the shell sends the input to the STDIN of that program.
If you run something with Popen in Python, it will not have a TTY. You can check that easily by doing this:
from subprocess import Popen, PIPE
p = Popen("tty", stdin=PIPE, stdout=PIPE, stderr=PIPE)
o, e = p.communicate()
print(o)
It will produce this output: b'not a tty\n'
But how does scrypt then manage to use a TTY? Because that is what it does.
You have to look at the manpage and the code to find the answer.
If -P is not given, scrypt reads passphrases from its controlling terminal, or failing that, from stdin.
What it actually does is simply open /dev/tty (look at the code). That device refers to the controlling terminal and exists even when the process's standard streams are pipes, so scrypt can open it and try to read the password from it.
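You can see the same effect with a quick sketch (run it from an interactive terminal, since /dev/tty refers to the controlling terminal; the shell one-liner is just for illustration):

from subprocess import Popen, PIPE

# stdin is a pipe, yet the child can still prompt on the controlling terminal
p = Popen(["sh", "-c",
           "printf 'type something: ' >/dev/tty; read x </dev/tty; echo you typed: $x"],
          stdin=PIPE)
p.wait()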
How can you solve your problem now?
Well, that is easy in this case. Check the manpage for the -P parameter.
Here is a working example:
from subprocess import Popen, PIPE
p = Popen("scrypt enc -P - out.enc", stdin=PIPE, stdout=PIPE, stderr=PIPE, universal_newlines=True, shell=True)
p.communicate("pwd\nteststring")
This will encrypt the string "teststring" with the password "pwd".
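To verify the round trip, the same pattern should work for decryption, assuming scrypt dec honors -P the same way (the manpage suggests it does; out.enc is the file produced above):

from subprocess import Popen, PIPE

p = Popen("scrypt dec -P out.enc", stdin=PIPE, stdout=PIPE, stderr=PIPE,
          universal_newlines=True, shell=True)
out, err = p.communicate("pwd")
print(out)  # should print "teststring"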
There are a lot of "hacks" around TTYs, but you should avoid them, as they can have unexpected results. For example, start a shell and run tty, then start a second shell and run cat on the device the first one reported (e.g. cat /dev/pts/7). Now type something in the first shell and watch what happens: some characters will end up in the first shell and some in the second.
Check this post and this article about what a TTY is and where it comes from.

SSH to remote server - and write results to local server

So I want to run a command from my local server against a remote appliance and, instead of printing the results to my local screen, write them to a local file. I can see examples in paramiko, but I am having issues installing it for Python 3, which is what I prefer to use, so I am trying subprocess. The unique thing is that this remote appliance accepts only a limited set of commands; I literally have to run a 'show' command on it, so there is nothing to SCP, hence the reason I did not use SCP.
This will write it to my screen, but that does not do me much good :(
prog = subprocess.Popen(["ssh", "user@mysystem.com", " show my_secret_file"],
                        stderr=subprocess.PIPE)
errdata = prog.communicate()[1]
Is this possible?
Assuming your appliance writes its output to stdout, that output will be returned by prog.communicate(), as long as you asked for stdout in Popen().
You can then save the returned stdout to a file using the standard file IO functions.
In other words, here's how it would work:
import subprocess

# Call subprocess and save stdout and stderr
prog = subprocess.Popen(["ssh", "user@mysystem.com", " show my_secret_file"],
                        stdout=subprocess.PIPE,  # <-- add this bit
                        stderr=subprocess.PIPE)
out, err = prog.communicate()

# Do your error handling here...
# ...

# Now write to file
writefile = open("Put your file name here", "w")
writefile.write(out.decode("utf-8"))
writefile.close()
Note that communicate() returns bytes here, which is why the example decodes before writing. If the output is actually binary data, skip the decode and open the file in binary mode ("wb") instead.
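For binary output, a minimal variant (reusing out from the snippet above; the filename is a placeholder):

# Write the raw bytes without decoding
with open("output.bin", "wb") as writefile:
    writefile.write(out)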

automation of processes by python

I'm trying to write a Python script that starts a process and performs some operations afterward.
The commands that I want to automate by script are circled in red in the picture.
The problem is that after the first command runs, the qemu environment starts, and the remaining commands have to be executed inside that qemu environment. I know how to run the first command, but I don't know how to issue the subsequent commands once I am inside the qemu environment.
Could you help me script this process?
The first thing that came to mind was pexpect; a quick search on Google turned up this blog post, automatically-testing-vms-using-pexpect-and-qemu, which seems to be pretty much along the lines of what you are doing:
import sys
import pexpect

image = "fedora-20.img"
user = "root"
password = "changeme"

# Define the qemu cmd to run
# The important bit is to redirect the serial console to stdio
cmd = "qemu-kvm"
cmd += " -m 1024 -serial stdio -net user -net nic"
cmd += " -snapshot -hda %s" % image
cmd += " -watchdog-action poweroff"

# Spawn the qemu process and log to stdout
child = pexpect.spawn(cmd)
child.logfile = sys.stdout

# Now wait for the login prompt
child.expect('(?i)login:')
# And log in with the credentials from above
child.sendline(user)
child.expect('(?i)password:')
child.sendline(password)
child.expect('# ')

# Now shut down the machine and end the process
if child.isalive():
    child.sendline('init 0')
    child.close()

if child.isalive():
    print('Child did not exit gracefully.')
else:
    print('Child exited gracefully.')
You could also do it with subprocess.Popen, checking stdout for the (qemu) prompt and writing commands to stdin. Something roughly like this:
from subprocess import Popen, PIPE

# pass the initial command as a list of individual args
p = Popen(["./tracecap/temu", "-monitor", .....],
          stdout=PIPE, stdin=PIPE, universal_newlines=True)

# store all the next arguments to pass
args = iter([arg1, arg2, arg3])

# iterate over stdout so we can check where we are
for line in iter(p.stdout.readline, ""):
    # if (qemu) is at the prompt, enter a command
    if line.startswith("(qemu)"):
        arg = next(args, "")
        # if we have used all the args, break
        if not arg:
            break
        # else write the arg followed by a newline
        p.stdin.write(arg + "\n")
        p.stdin.flush()  # make sure the command actually reaches the process
    print(line)  # just to see the output
Where args contains all the next commands.
Don't forget that Python has batteries included. Take a look at the subprocess module in the standard library: there are a lot of pitfalls in managing processes, and the module takes care of them for you.
You probably want to start a qemu process and send the subsequent commands by writing to its standard input (stdin). The subprocess module will let you do that. Note that qemu has command-line options to connect the monitor to stdio: -chardev stdio,id=id
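A minimal sketch of that idea (the qemu binary, image, and monitor command here are placeholders; adjust them to your setup):

from subprocess import Popen, PIPE

# Hypothetical invocation with the qemu monitor on stdio
p = Popen(["qemu-system-x86_64", "-monitor", "stdio", "-hda", "disk.img"],
          stdin=PIPE, stdout=PIPE, universal_newlines=True)
p.stdin.write("info status\n")  # any monitor command
p.stdin.flush()
print(p.stdout.readline())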

How to redirect python OS system call to a file?

I have no idea why the code below is not working. The file arch_list does not get created, and nothing is written to it. The commands work fine when run in the terminal on their own.
from yum.plugins import PluginYumExit, TYPE_CORE, TYPE_INTERACTIVE
import os

requires_api_version = '2.3'
plugin_type = (TYPE_CORE, TYPE_INTERACTIVE)

ip_vm = ['192.168.239.133']

def get_arch():
    global ip_vm
    os.system("uname -p > ~/arch_list")
    for i in ip_vm:
        cmd = "ssh thejdeep@" + i + " 'uname -p' >> ~/arch_list"
        print cmd
        os.system(cmd)

def init_hook(conduit):
    conduit.info(2, 'Hello World !')
    get_arch()
I don't think os.system() will give you the command's output in that case. You may try subprocess.call() with the appropriate parameters.
Edit: Actually, I remember seeing similar behaviour with ssh when running it in a standard bash loop. You might try adding -n to your ssh call; I think that is the fix I used years ago in bash.
I just ran your code and it works fine for me, writing to the local arch_list file. I suspect the trouble starts once you add more than one host to your list. What version of Python are you running? I'm on 2.7.6.
os.system() will not redirect stdout and stderr for you.
You can use the subprocess module's Popen to point stdout and stderr at a file descriptor or a pipe.
For example:
>>> import subprocess
>>> child1 = subprocess.Popen(["ls","-l"], stdout=subprocess.PIPE)
>>> print child1.stdout.readlines()
You can replace subprocess.PIPE with any valid file descriptor you have opened for writing, or pick out individual lines and write them to the file yourself. It's your call.
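For example, a sketch that sends the child's stdout straight to a file (the filename is a placeholder):

import subprocess

# Hand an open file to the child as its stdout
with open("ls_output.txt", "w") as outfile:
    subprocess.call(["ls", "-l"], stdout=outfile)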

diverting the stdin when ssh-ing to another machine from python

I am trying to use SSH as a SOCKS proxy to another machine and then ask the user whether they want to proceed.
so I use:
proxy_cmd = "ssh -o 'StrictHostKeyChecking no' -i " + key_filename + " -D 9998 ubuntu@" + ip_address
subprocess.Popen(proxy_cmd, shell=True, stdout=subprocess.PIPE)
if not raw_input('would you like to proceed?(y)') == 'y':
    sys.exit()
and I get:
IOError: [Errno 11] Resource temporarily unavailable
I assume that's because the ssh session is open and is capturing stdin or something. I just don't know how to bypass this (I have no need to send input to ssh; I just want it open for Selenium to use later).
How can I do this?
If you want to keep stdin available to your Python program, then you'll have to redirect stdin for the ssh process even if you have no intention of using it, with something like...
subprocess.Popen(proxy_cmd,
shell=True,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE)
Note that the subprocess module will retain a reference to the Popen object in the subprocess._active list, but you may also want to bind the resulting Popen object to a variable so you can perform operations on it later.
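For instance, a sketch of holding on to the handle so the tunnel can be torn down once Selenium is finished (reusing proxy_cmd from the question; the cleanup calls are illustrative):

import subprocess
import sys

proxy = subprocess.Popen(proxy_cmd,
                         shell=True,
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE)
if not raw_input('would you like to proceed?(y)') == 'y':
    proxy.terminate()
    sys.exit()

# ... run Selenium against the SOCKS proxy on localhost:9998 ...

proxy.terminate()  # close the tunnel when done
proxy.wait()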
