Interactively writing/reading from an external program - Python

Backstory: I'm pretty new to Python, but fairly experienced in Perl. I'm trying to diversify my scripting portfolio in the sysadmin-activities area.
I'm attempting to dynamically communicate with an external process from my Python script.
What I want to do is:
Call an executable (let's call it "cli")
Write a line to cli
Read the response back internally
Iterate through the response, one by one, and write another line to the CLI
Print the response from the cli back to the console
What I'm hoping this will yield is:
(spawn process) /usr/local/bin/cli
-> show listofobjects
<- (read back list of objects internally)
-> (one by one, write a line to the cli for each of the list of objects)
-> get objectname modifiedtime
<- (print response from above command)
Here is the code that I have so far:
import shlex, subprocess, re
clicmd = "/usr/local/bin/cli -s 10.1.213.226 -n Administrator -p password"
cliargs = shlex.split(clicmd)
cliproc = subprocess.Popen(cliargs,
                           stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE,
                           )
tmpclicmd = "LIST objects OUTPUT csv NAME"
cliret = cliproc.communicate(input=tmpclicmd)[0]
regex = re.compile('^\".*')
for line in cliret.split('\n'):
    if regex.match(line):
        procline = line.replace('"', '')
        if 'NAME' not in procline:
            clideets = subprocess.Popen(cliargs,
                                        stdin=subprocess.PIPE,
                                        stdout=subprocess.PIPE,
                                        stderr=subprocess.PIPE,
                                        )
            clideetscmd = 'modify objects ' + procline
            clideets.communicate(input=clideetscmd)
            clideetscmd = 'list objectdetails'
            clideetsresp = clideets.communicate(input=clideetscmd)[0]
            print clideetsresp;
I'm probably going about this in the completely wrong way. Do I have to spawn a new Popen for every step of the way? How could I do this better? (another module, etc). At the end of the day, I can't have the cli close on me, which Popen seems to do after each step of the way.
Thanks in advance!

It is not necessary to start a new process (using Popen) for every interaction. You do, though, when you use communicate to send data to the process, because, as the documentation states:
Interact with process: Send data to stdin. Read data from stdout and
stderr, until end-of-file is reached. Wait for process to terminate.
Instead, simply write to cliproc.stdin and read from cliproc.stdout:
cliproc.stdin.write("LIST objects OUTPUT csv NAME\n")
cliproc.stdin.flush()
cliret = cliproc.stdout.readline()
The process stays alive this way. Note the trailing newline and the flush: most line-oriented programs will not act on a command until they see a complete line, and the data must actually be pushed through the pipe.
I don't know why you use the shlex module here:
clicmd = "/usr/local/bin/cli -s 10.1.213.226 -n Administrator -p password"
cliargs = shlex.split(clicmd)
The built-in str.split method will do fine:
clicmd = "/usr/local/bin/cli -s 10.1.213.226 -n Administrator -p password"
cliargs = clicmd.split()
Or you can just write the resulting list yourself:
cliargs = ["/usr/local/bin/cli", "-s", "10.1.213.226",
           "-n", "Administrator", "-p", "password"]
You don't need a semicolon here:
print clideetsresp;
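To make the whole exchange concrete, here is a minimal sketch of the write/read loop with a single long-lived process (Python 3 syntax). A tiny Python echo loop stands in for the cli binary, whose real commands and output format are unknown here, so the snippet is self-contained; with the real program you would pass cliargs instead and parse its actual responses:

```python
import subprocess
import sys

# Stand-in child: echoes each input line back, prefixed with "seen: ".
child_code = (
    "import sys\n"
    "while True:\n"
    "    line = sys.stdin.readline()\n"
    "    if not line:\n"
    "        break\n"
    "    sys.stdout.write('seen: ' + line)\n"
    "    sys.stdout.flush()\n"
)

proc = subprocess.Popen(
    [sys.executable, "-u", "-c", child_code],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    universal_newlines=True,
)

# Write a newline-terminated command, flush, read one response line;
# the child stays alive between the two exchanges.
proc.stdin.write("LIST objects OUTPUT csv NAME\n")
proc.stdin.flush()
first = proc.stdout.readline()

proc.stdin.write("list objectdetails\n")
proc.stdin.flush()
second = proc.stdout.readline()

proc.stdin.close()
proc.wait()
print(first + second)
```

With a real cli you would replace the readline calls with whatever logic detects the end of each response, since a single line is rarely the whole answer.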

Related

Use subprocess to open an exe file and interact with it

I am using Python to script running an exe program.
If we open the exe program in the shell, we could enter different command such as "a", "b", "c" in the program. These commands can not be passed as flags into the exe program. I want to use Python to script running this exe program for many times, with custom exe-program specific input.
But if I run the "program.exe" with
p = subprocess.call(['program.exe'],
                    stdin=subprocess.PIPE,
                    stdout=subprocess.PIPE,
                    )
Python won't terminate. Can I achieve this purpose with subprocess in Python?
Beware: subprocess.call will not return before the child process has terminated, so you have no opportunity to write anything to the standard input of the child afterwards.
If you can prepare the whole batch of commands in advance, and if the output has no risk of filling the system buffer, you can still hand everything over in one go. Note, however, that stdin must be a real file object with a file descriptor, so io.StringIO will not work; use communicate to feed the prepared input instead:
cmds = "a\nb\nc\n"
p = subprocess.Popen(['program.exe'],
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE,
                     universal_newlines=True,
                     )
out, _ = p.communicate(cmds)
But the more robust way is to directly use the Popen constructor, and then feed the input:
p = subprocess.Popen(['program.exe'],
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE,
                     )
p.stdin.write("a\n")
p.stdin.write("b\n")
...
p.stdin.close()
p.wait()
If you know that one subcommand will generate very large output, you can read it before sending the next one. Beware of blocking while waiting for output that the child has not yet sent...
First, you have to use p = subprocess.Popen(…) in order to get the subprocess object. subprocess.call(…) would give you just the return status, and that's not enough.
If p is your connection object, you can send your commands to p.stdin, e.g. p.stdin.write("a\n"), and then read from p.stdout until the next indication that the command's output is finished. How you detect this depends on the program in question.
Then you can send the next command and read its output.
At the end, you can call p.stdin.close() to signal EOF to the other process, after which it should terminate.
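One common way to detect that a command's output is finished, assuming the program prints a known prompt string after each command, is a read-until-prompt loop. The prompt "> " and the stand-in child below are hypothetical; substitute whatever your program actually prints:

```python
import subprocess
import sys

PROMPT = "> "

# Stand-in child: prints a prompt, then answers each line with
# "ok: <line>" followed by another prompt.
child_code = (
    "import sys\n"
    "sys.stdout.write('> ')\n"
    "sys.stdout.flush()\n"
    "while True:\n"
    "    line = sys.stdin.readline()\n"
    "    if not line:\n"
    "        break\n"
    "    sys.stdout.write('ok: ' + line + '> ')\n"
    "    sys.stdout.flush()\n"
)

p = subprocess.Popen([sys.executable, "-c", child_code],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     universal_newlines=True)

def read_until_prompt(proc, prompt=PROMPT):
    # Accumulate characters until the prompt appears, then strip it off.
    buf = ""
    while not buf.endswith(prompt):
        ch = proc.stdout.read(1)
        if not ch:  # EOF: the child exited
            break
        buf += ch
    return buf[:-len(prompt)] if buf.endswith(prompt) else buf

banner = read_until_prompt(p)  # consume the initial prompt
p.stdin.write("a\n")
p.stdin.flush()
out_a = read_until_prompt(p)
p.stdin.close()
p.wait()
print(repr(out_a))
```

Reading one character at a time is slow but simple; for real use you would read in larger chunks and scan for the prompt.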

How to obtain output from external progam and put it into a variable in Python

I am still fairly new to the Python world and know this should be an easy question to answer. I have a section of a Python script that calls a script in Perl. The Perl script is a SOAP service that fetches data from a web page. Everything works great and outputs what I want, but after a bit of trial and error I am confused about how to capture the data in a Python variable rather than just printing it to the screen like it does now.
Any pointers appreciated!
Thank you,
Pablo
# SOAP SERVICE
# Fetch the perl script that will request the users email.
# This service will return a name, email, and certificate.
var = "soap.pl"
pipe = subprocess.Popen(["perl", "./soap.pl", var], stdin = subprocess.PIPE)
pipe.stdin.write(var)
print "\n"
pipe.stdin.close()
I am not sure what your code aims to do (with var in particular), but here are the basics.
There is the subprocess.check_output() function for this:
import subprocess
out = subprocess.check_output(['ls', '-l'])
print out
If your Python is older than 2.7, use Popen with the communicate() method:
import subprocess
proc = subprocess.Popen(['ls', '-l'], stdout=subprocess.PIPE)
out, err = proc.communicate()
print out
You can instead iterate proc.stdout but it appears that you want all output in one variable.
In both cases you provide the program's arguments in the list.
Or add stdin if needed
proc = subprocess.Popen(['perl', 'script.pl', 'arg'],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE)
The purpose of stdin = subprocess.PIPE is to be able to feed the STDIN of the process that is started, as it runs. Then you would do proc.stdin.write(string) and this writes to the invoked program's STDIN. That program generally waits on its STDIN and after you send a newline it gets everything written to it (since the last newline) and runs relevant processing.
If you simply need to pass parameters/arguments to the script at its invocation then that generally doesn't need nor involve its STDIN.
Since Python 3.5 the recommended method is subprocess.run(), with a very similar full signature, and similar operation, to that of the Popen constructor.
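A subprocess.run version of the capture-output pattern might look like the following (Python 3.5+). The child here is the Python interpreter itself so the snippet is self-contained; for the question above it would be ['perl', './soap.pl', var]:

```python
import subprocess
import sys

# Run a child process and capture its stdout as text.
result = subprocess.run(
    [sys.executable, "-c", "print('hello from child')"],
    stdout=subprocess.PIPE,
    universal_newlines=True,  # decode bytes to str
)
out = result.stdout
print(out)
```

On Python 3.7+ you could write capture_output=True, text=True instead of the stdout and universal_newlines arguments.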

Simple Python Script not Executing Properly

The code is as follows:
fh = tempfile.NamedTemporaryFile(delete=False, suffix='.py')
stream = io.open(fh.name, 'w', newline='\r\n')
stream.write(unicode(script))
stream.flush()
stream.close()
proc = subprocess.Popen(
    [path, fh.name],
    shell=True,
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
proc.stdin.close()
proc.stderr.close()
out = proc.stdout.readline()
print out
script is a string which contains the subprocess code, in this case a simple hello world. Since it has unix file endings, I had to use io.open in order to write it properly for windows. path is the path to the python.exe on my machine. The file is generated and looks fine in notepad:
def main():
    print 'hello world'
However, when I run the program, the subprocess executes and does nothing.
It's not a problem with the executable path; I've tested that with other programs, so it must be either the temp file itself or the text within it. delete is set to False so I can check the contents of the file for debugging. Is there anything glaringly wrong with this code? I'm a bit new to using Popen.
The main issue in your program is that when you specify shell=True, you need to provide the entire command as a string, not a list.
Given that, there is really no need for you to use shell=True here. Also, unless absolutely necessary, you should not use shell=True; it's a security hazard, as the documentation also points out:
Executing shell commands that incorporate unsanitized input from an untrusted source makes a program vulnerable to shell injection, a serious security flaw which can result in arbitrary command execution. For this reason, the use of shell=True is strongly discouraged in cases where the command string is constructed from external input:
Also, if you do not want to use stdin/stderr (you are closing those off as soon as you start the process), there is no need to use PIPE for them.
Example -
fh = tempfile.NamedTemporaryFile(delete=False, suffix='.py')
stream = io.open(fh.name, 'w', newline='\r\n')
stream.write(unicode(script))
stream.flush()
stream.close()
proc = subprocess.Popen(
    [path, fh.name],
    stdout=subprocess.PIPE,
)
out = proc.stdout.readline()
print out
Also, the script -
def main():
    print 'hello world'
would not work, since you need to call main() for it to run.
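Putting these fixes together, a corrected end-to-end sketch could look like this (Python 3 syntax): sys.executable replaces the hard-coded interpreter path, shell=True is dropped, and the generated script now actually calls main(). The script body itself is a placeholder:

```python
import io
import os
import subprocess
import sys
import tempfile

# Placeholder script body; note the main() call that was missing.
script = u"def main():\n    print('hello world')\n\nmain()\n"

# Write the script to a temp file with Windows-style line endings.
fh = tempfile.NamedTemporaryFile(delete=False, suffix='.py')
fh.close()
with io.open(fh.name, 'w', newline='\r\n') as stream:
    stream.write(script)

# Run it with the current interpreter and capture its output.
proc = subprocess.Popen([sys.executable, fh.name], stdout=subprocess.PIPE)
out = proc.stdout.read()  # bytes, e.g. b'hello world\n'
proc.wait()
os.remove(fh.name)
print(out)
```

Python happily parses scripts with \r\n line endings, so the newline='\r\n' step is only needed if some other Windows tool insists on them.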

python subprocess.Popen stdin.write

I'm new to Python and would like to open a Windows cmd prompt, start a process, leave the process running, and then issue commands to that same running process.
The commands will change, so I can't just include them in the cmdline variable below. Also, the process takes 10-15 seconds to start, so I don't want to waste time waiting for it to start and run commands each time. I just want to start the process once and run quick commands as needed in the same process.
I was hoping to use subprocess.Popen to make this work, though I am open to better methods. Note that my process to run is not cmd; I'm just using it as an example.
import subprocess
cmdline = ['cmd', '/k']
cmd = subprocess.Popen(cmdline, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
cmd.stdin.write("echo hi") #would like this to be written to the cmd prompt
print cmd.stdout.readline() #would like to see 'hi' readback
cmd.stdin.write("echo hi again") #would like this to be written to the cmd prompt
print cmd.stdout.readline() #would like to see 'hi again' readback
The results aren't what I expect. It seems as though the stdin.write commands aren't actually getting in, and the readline freezes up with nothing to read.
I have tried popen.communicate() instead of write/readline, but it kills the process. I have tried setting bufsize in the Popen line, but that didn't make much difference.
Your comments suggest that you are confusing command-line arguments with input via stdin. Namely, the fact that the system-console.exe program accepts a script=filename parameter does not imply that you can send it the same string as a command via stdin; e.g., the python executable accepts -c "print(1)" on the command line, but it is a SyntaxError if you type -c "print(1)" as a command in the interactive Python shell.
Therefore, the first step is to use the correct syntax. Suppose the system-console.exe accepts a filename by itself:
#!/usr/bin/env python3
import time
from subprocess import Popen, PIPE

with Popen(r'C:\full\path\to\system-console.exe -cli -',
           stdin=PIPE, bufsize=1, universal_newlines=True) as shell:
    for _ in range(10):
        print('capture.tcl', file=shell.stdin, flush=True)
        time.sleep(5)
Note: if you've redirected more than one stream e.g., stdin, stdout then you should read/write both streams concurrently (e.g., using multiple threads) otherwise it is very easy to deadlock your program.
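The concurrent-reading advice can be sketched with a background thread that drains stdout while the main thread writes to stdin. The upper-casing child below is a stand-in for the real program:

```python
import subprocess
import sys
import threading

# Stand-in child: upper-cases every line it reads from stdin.
child_code = (
    "import sys\n"
    "for line in sys.stdin:\n"
    "    sys.stdout.write(line.upper())\n"
)

p = subprocess.Popen([sys.executable, "-c", child_code],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     universal_newlines=True)

# Drain stdout on a background thread so writes to stdin cannot
# deadlock against a full stdout pipe buffer.
captured = []
reader = threading.Thread(target=lambda: captured.extend(p.stdout))
reader.start()

for cmd in ("alpha\n", "beta\n"):
    p.stdin.write(cmd)
p.stdin.close()  # EOF lets the child finish, which then ends the reader
reader.join()
p.wait()
print(captured)
```

Here the reader simply collects lines into a list; a real program might parse them or put them on a queue for the main thread.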
Related:
Q: Why not just use a pipe (popen())? -- mandatory reading for Unix environment but it might also be applicable for some programs on Windows
subprocess readline hangs waiting for EOF -- code example on how to pass multiple inputs, read multiple outputs using subprocess, pexpect modules.
The second and the following steps might have to deal with buffering issues on the side of the child process (out of your hands on Windows), whether system-console allows its stdin/stdout to be redirected or works with a console directly, and character-encoding issues (how the various commands in the pipeline encode text).
Here is some code that I tested and is working on Windows 10, Quartus Prime 15.1 and Python 3.5
import subprocess

class altera_system_console:
    def __init__(self):
        sc_path = r'C:\altera_lite\15.1\quartus\sopc_builder\bin\system-console.exe --cli --disable_readline'
        self.console = subprocess.Popen(sc_path, stdin=subprocess.PIPE, stdout=subprocess.PIPE)

    def read_output(self):
        rtn = ""
        loop = True
        i = 0
        match = '% '
        while loop:
            out = self.console.stdout.read1(1)
            if bytes(match[i], 'utf-8') == out:
                i = i + 1
                if i == len(match):
                    loop = False
            else:
                rtn = rtn + out.decode('utf-8')
        return rtn

    def cmd(self, cmd_string):
        self.console.stdin.write(bytes(cmd_string + '\n', 'utf-8'))
        self.console.stdin.flush()

c = altera_system_console()
print(c.read_output())
c.cmd('set jtag_master [lindex [get_service_paths master] 0]')
print(c.read_output())
c.cmd('open_service master $jtag_master')
print(c.read_output())
c.cmd('master_write_8 $jtag_master 0x00 0xFF')
print(c.read_output())
You need to use iter if you want to see the output in real time:
import subprocess
cmdline = ['cmd', '/k']
cmd = subprocess.Popen(cmdline, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
cmd.stdin.write("echo hi\n")  # would like this to be written to the cmd prompt
for line in iter(cmd.stdout.readline, ""):
    print line
cmd.stdin.write("echo hi again\n")  # would like this to be written to the cmd prompt
Not sure exactly what you are trying to do, but if you want to send certain input when you receive certain output, then I would recommend using pexpect.

subprocess in Python hangs

I am trying to retrieve some information from a Perl script using Python and subprocess:
command = ["perl","script.perl","arg1.txt","<","arg2.txt"]
print " ".join(command)
p = subprocess.Popen(command,stdout=subprocess.PIPE,shell=True)
text = p.stdout.read()
The join statement simply prints the command as I would enter it in the terminal, for double-checking the command. That one always works... But within Python, it hangs at subprocess.Popen() (at p = ...).
I also tried several other methods such as call() but to no avail.
It only outputs one line of text, so I don't know how that could be the problem.
There's no need to involve the shell if you only want a simple input redirection. Open the file in Python, and pass the file handle to Popen via the stdin argument.
with open("arg2.txt") as infile:
    command = ["perl", "script.perl", "arg1.txt"]
    p = subprocess.Popen(command, stdout=subprocess.PIPE, stdin=infile)
    text = p.stdout.read()
or
command = "perl script.perl arg1.txt < arg2.txt"
p = subprocess.Popen(command,stdout=subprocess.PIPE,shell=True)
text = p.stdout.read()
With a list and shell=True, it's not clear to me why the call to perl blocks. When I try something like
subprocess.call("cat < .bashrc".split(), shell=True)
it blocks as if it is still trying to read from the inherited standard input. If I provide it with input using
subprocess.call("cat < .bashrc".split(), shell=True, stdin=open("/dev/null"))
the call returns immediately. In either case, it appears that cat is ignoring its further arguments.
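On a POSIX system the pitfall is easy to demonstrate: with shell=True, only the first list element becomes the shell command string; the remaining elements become the shell's positional parameters, so the "<" never acts as a redirection:

```python
import subprocess

# List + shell=True on POSIX runs: sh -c 'echo hello' ignored < also_ignored...
# no, not even that: "ignored" becomes $0 and the rest become $1, $2, so
# the "<" is just a positional parameter, never a redirection.
p = subprocess.Popen(["echo hello", "ignored", "<", "also_ignored"],
                     shell=True, stdout=subprocess.PIPE,
                     universal_newlines=True)
out = p.stdout.read()
p.wait()
print(out)  # prints "hello": the extra list items never reach echo
```

This is why, with a list argument, shell=True should be avoided: the list is interpreted in a way few people expect.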
