python subprocess communicate() blocks

I am using the subprocess module to call an external program (plink.exe) to log in to a server, but when I call communicate() to read the output, it blocks. The code is below:
import subprocess
process = subprocess.Popen('plink.exe hello@10.120.139.170 -pw 123456'.split(), shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print process.communicate() # blocks here
I know it blocks because plink.exe is still running; but I need to read the output before the subprocess terminates. Is there any way to do that?

The whole purpose of the communicate method is to wait for the process to finish and return all the output. If you don't want to wait, don't call communicate; instead, read from the stdout or stderr attribute directly.
If the process outputs to both stdout and stderr (and you want to read it separately), you will have to be careful to actually read from both without blocking, or you can deadlock. This is fairly hard on Windows, and you may wish to use the pexpect module instead.
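If merging the two streams is acceptable, a simpler route is to read line by line from a single pipe while plink is still running. A minimal sketch, reusing the command from the question:

import subprocess

process = subprocess.Popen(
    'plink.exe hello@10.120.139.170 -pw 123456'.split(),
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,  # merge the streams to avoid the two-pipe deadlock
)
for line in iter(process.stdout.readline, b''):
    print(line)  # handle each line as it arrives, before plink exits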

Maybe "plink.exe" is waiting for input: if you don't supply any, it will block until data arrives. You could try passing the input through communicate(input=...).
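A minimal sketch of that suggestion, assuming the input to send is known up front ('some command' is a placeholder, and stdin must be a pipe for communicate(input=...) to work):

import subprocess

process = subprocess.Popen(
    'plink.exe hello@10.120.139.170 -pw 123456'.split(),
    stdin=subprocess.PIPE,   # required so communicate() can send input
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
out, err = process.communicate(input=b'some command\nexit\n')
print(out)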

I faced a similar situation where I had to execute a single command lmstat -a and then get the output of the terminal.
If you just need to run a single command and then read the output, you can use the following code:
import subprocess

Username = 'your_username'
Password = 'your_password'
IP = 'IP_of_system'
Connection_type = '-ssh'  # can have values -ssh -telnet -rlogin -raw -serial
p = subprocess.Popen(['plink', Connection_type, '-l', Username, '-pw', Password, IP],
                     shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, err = p.communicate('lmstat -a\nexit\n'.encode())
print(out.decode())
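Sending exit as the last input line makes the remote shell terminate, which is what lets communicate() return here instead of blocking as in the original question.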

Related

Need help reading the output of a subprocess

My Python script (Python 3.4.3) calls a bash script via subprocess:
OutPST = subprocess.check_output(cmd, shell=True)
It works, but the problem is that I only get half of the data. The subprocess I call itself calls another subprocess, and my guess is that when the "sub-subprocess" sends EOF, my program thinks that's it and check_output returns.
Does anyone have an idea how to get all the data?
You should use subprocess.run() unless you really need fine-grained control over talking to the process via its stdin (or need to do something else while the process runs instead of blocking until it finishes). It makes capturing output super easy:
from subprocess import run, PIPE
result = run(cmd, stdout=PIPE, stderr=PIPE)
print(result.stdout)
print(result.stderr)
If you want to merge stdout and stderr (like how you'd see it in your terminal if you didn't do any redirection), you can use the special destination STDOUT for stderr:
from subprocess import STDOUT
result = run(cmd, stdout=PIPE, stderr=STDOUT)
print(result.stdout)
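Since the question invokes the command through the shell, here is a sketch of the same call with shell=True (assuming cmd is the same command string as in the question; merging stderr in means anything the inner subprocess writes there is captured too):

from subprocess import run, PIPE, STDOUT

result = run(cmd, shell=True, stdout=PIPE, stderr=STDOUT)
print(result.stdout.decode())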

sending many commands to cmd

I'm trying to send cmd.exe many commands, choosing each one according to the answers it sends me.
I'm getting a runtime error:
ValueError: I/O operation on closed file
When I'm running something like this:
import subprocess
process = subprocess.Popen("cmd.exe", stdout=subprocess.PIPE,stdin=subprocess.PIPE)
answer = process.communicate(input="some command\n" + '\n')[0]
"""
choosing another command according to answer
"""
print process.communicate(input=another_command + '\n')[0]
process.kill()
Any idea on how to solve the problem?
Thank you for your help!
Do not send your commands to cmd.exe. Call your commands directly, like:
subprocess.Popen("dir", shell=True, stdout=subprocess.PIPE,stdin=subprocess.PIPE)
Perhaps you will not need the pipe for stdin if you use it this way.
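A sketch of that approach, one subprocess per command instead of a single long-lived cmd.exe session (the command list here is hypothetical):

import subprocess

for command in ["dir", "echo hello"]:
    # shell=True makes built-ins like "dir" work on Windows
    p = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE)
    out, _ = p.communicate()  # one command, one communicate() call
    print(out)
    # ... choose the next command based on out here ...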
The error is normal. communicate closes the standard input of the subprocess to indicate that no more input is pending so that the subprocess can flush its output. So you cannot chain multiple communicate calls on one single subprocess.
But if your commands are simple enough (not many kilobytes of input data), and if you do not need to collect and process the output of one command before sending the next one, you should be able to write all the commands in sequence, reading as much output as possible between two of them. After the last command, you can then close the subprocess's standard input and wait for it to terminate, still collecting the output:
process = subprocess.Popen("cmd.exe", stdout=subprocess.PIPE, stdin=subprocess.PIPE)
process.stdin.write("some command\n\n")
partial_answer = process.stdout.read() # all or part of the answer can still be buffered subprocess side
...
process.stdin.write("some other command\n\n")
...
# after last command, time to close subprocess
process.stdin.close()
retcode = None
while True:
end_of_answer += process.stdout.read()
if retcode is not None: break

How to print stdout before writing stdin using subprocess module in Python

I am writing a script in which the external system command may sometimes require user input, and I am not able to handle that properly. I have tried using os.popen4 and the subprocess module but could not achieve the desired behavior.
The example below shows the problem using the "cp" command ("cp" is only used for illustration; I am calling a different exe which may similarly prompt for a user response in some scenarios). In this example there are two files on disk, and when the user tries to copy file1 to file2, a confirmation message comes up.
proc = subprocess.Popen("cp -i a.txt b.txt", shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,)
stdout_val, stderr_val = proc.communicate()
print stdout_val
b.txt?
proc.communicate("y")
Now in this example, if I read only stdout/stderr and print it, and later try to write "y" or "n" based on the user's input, I get an error that the channel is closed.
Can someone please help me achieve this behavior in Python, so that I can print stdout first, then take the user's input and write to stdin afterwards?
I found another solution (threading) in Non-blocking read on a subprocess.PIPE in python, but I am not sure whether it would help. It does appear to print the question from the cp command; I have modified the code below, but I am not sure how to do the writing in the threaded version.
import sys
from subprocess import PIPE, Popen
from threading import Thread

try:
    from Queue import Queue, Empty
except ImportError:
    from queue import Queue, Empty

ON_POSIX = 'posix' in sys.builtin_module_names

def enqueue_output(out, queue):
    for line in iter(out.readline, b''):
        queue.put(line)
    out.close()

p = Popen(['cp', '-i', 'a.txt', 'b.txt'], stdin=PIPE, stdout=PIPE, bufsize=1, close_fds=ON_POSIX)
q = Queue()
t = Thread(target=enqueue_output, args=(p.stdout, q))
t.start()

try:
    line = q.get_nowait()
except Empty:
    print('no output yet')
else:
    pass
Popen.communicate will run the subprocess to completion, so you can't call it more than once. You could use the stdin and stdout attributes directly, although that's risky as you could deadlock if the process uses block buffering or the buffers fill up:
stdout_val = proc.stdout.readline()
print stdout_val
proc.stdin.write('y\n')
As there is a risk of deadlock and because this may not work if the process uses block buffering, you would do well to consider using the pexpect package instead.
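For reference, a rough pexpect sketch for the cp example (POSIX only; the 'overwrite' prompt fragment is an assumption and depends on the cp implementation):

import pexpect

child = pexpect.spawn('cp -i a.txt b.txt')
# GNU cp asks something like "cp: overwrite 'b.txt'?" before replacing the file
index = child.expect(['overwrite', pexpect.EOF])
if index == 0:
    child.sendline('y')  # here you could prompt the real user instead
    child.expect(pexpect.EOF)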
I don't have a technical answer to this question, more of a workaround. It has something to do with the way the process waits for input: once you have communicated with the process, an input of None is enough to close it.
For your cp example, you can check the return code immediately with proc.poll(). If the return value is None, you might assume it is waiting for input and can ask your user the question. You can then pass the response to the process via proc.communicate(response); it will deliver the value and let the process proceed.
Maybe someone else can chime in with a more technical reason why an initial communicate with a None value closes the process.

get PID from paramiko

I can't find a simple answer for this: I'm using paramiko to log in and execute a number of processes remotely, and I need the PID of each process in order to check on them at later times. There doesn't seem to be a function in paramiko to get the PID of an executed command, so I tried the following:
stdin, stdout, stderr = ssh.exec_command('./someScript.sh & echo $!')
I thought that parsing the stdout would then return the PID, but it doesn't. I'm assuming I should run the script in the background in order to have a PID (while it is running). Is there a simpler, more obvious way of getting the PID?
Here's a way to obtain the remote process ID:
def execute(channel, command):
    command = 'echo $$; exec ' + command
    stdin, stdout, stderr = channel.exec_command(command)
    pid = int(stdout.readline())
    return pid, stdin, stdout, stderr
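Hypothetical usage, assuming ssh is a connected paramiko.SSHClient (its exec_command matches the channel argument above):

pid, stdin, stdout, stderr = execute(ssh, './someScript.sh')
print(pid)  # the PID belongs to someScript.sh itself, because exec replaced the shell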
I usually use the standard UNIX command pidof <command name>, when I check on the process later. AFAIK there is no simpler way.
OK, given your comment, you can solve it by wrapping your ./someScript.sh in a Python process that uses the subprocess module.
wrapper.py:
#!/usr/bin/env python
import subprocess
import sys

proc = subprocess.Popen(sys.argv[1])
print proc.pid
sys.stdout.flush()  # make sure the PID reaches the ssh channel before we block
proc.wait()  # probably
Then run
stdin,stdout,stderr = ssh.exec_command('./wrapper.py ./someScript.sh')
and read the output
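Reading the PID back on the client side might look like this (assuming the PID is the first line wrapper.py prints):

pid = int(stdout.readline())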

Getting shell output with Python?

I have a shell script that gets whois info for domains and outputs "taken" or "available" to the shell depending on the domain.
I'd like to execute the script, and be able to read this value inside my Python script.
I've been playing around with subprocess.call but can't figure out how to get the output.
e.g.,
subprocess.call('myscript www.google.com', shell=True)
will output taken to the shell.
subprocess.call() does not give you the output, only the return code. For the output you should use subprocess.check_output() instead. These are friendly wrappers around the popen family of functions, which you could also use directly.
For more details, see: http://docs.python.org/library/subprocess.html
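A minimal sketch, reusing the command from the question:

import subprocess

# check_output returns the command's stdout and raises CalledProcessError
# on a non-zero exit status
output = subprocess.check_output('myscript www.google.com', shell=True)
print(output)  # e.g. b'taken\n'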
Manually using stdin and stdout with Popen was such a common pattern that it has been abstracted into a very useful method in the subprocess module: communicate
Example:
p = subprocess.Popen(['myscript', 'www.google.com'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
(stdoutdata, stderrdata) = p.communicate(input="myinputstring")
# stderrdata is None here because stderr was not redirected to a pipe
# all done!
import subprocess as sp

p = sp.Popen(["/usr/bin/svn", "update"], stdin=sp.PIPE, stdout=sp.PIPE, close_fds=True)
(stdout, stdin) = (p.stdout, p.stdin)
data = stdout.readline()
while data:
    # Do stuff with data, line by line.
    data = stdout.readline()
stdout.close()
stdin.close()
That is the idiom I use; in this case I happened to be updating an svn repository.
Try subprocess.check_output.
