subprocess.Popen for loop output - python

Code:
import subprocess

with open('Client.txt', 'r') as Client_Name:
    for Client in Client_Name:
        out = subprocess.Popen(['script.sh', '-byclient', Client], stdout=subprocess.PIPE)
        outputstring = out.communicate()
        print(outputstring)
This code doesn't give the output; I want to pass the client names in the Client.txt file to script.sh. Any suggestions, please?

Your code to pass the name to the script seems correct, but communicate actually returns a tuple, (stdout_data, stderr_data).
You probably also want to pass universal_newlines=True to Popen to decode the output. Try this:
out = subprocess.Popen(['script.sh', '-byclient', Client],
                       stdout=subprocess.PIPE,
                       universal_newlines=True)
stdout_data, stderr_data = out.communicate()  # communicate() waits until the script has finished
print(stdout_data)
Or, if you want to print the output in real time:
out = subprocess.Popen(['script.sh', '-byclient', Client],
                       stdout=subprocess.PIPE,
                       universal_newlines=True)
for line in out.stdout:
    print(line)
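A minimal sketch putting the pieces together (my own illustration, not the answer's code; note that iterating over the file yields each client name with a trailing newline, so strip it before passing it to script.sh):
import subprocess

with open('Client.txt', 'r') as client_file:
    for client in client_file:
        client = client.strip()  # drop the trailing newline from each line
        if not client:
            continue  # skip blank lines
        out = subprocess.Popen(['script.sh', '-byclient', client],
                               stdout=subprocess.PIPE,
                               universal_newlines=True)
        stdout_data, _ = out.communicate()
        print(stdout_data)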

Related

Get the output of multiple commands from subprocess.Popen

I am trying to run a command, get its output, then later run another command in the same environment (say, if I set an environment variable in the first command, I want it to be available to the second command). I tried this:
import subprocess
process = subprocess.Popen("/bin/bash", shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE);
process.stdin.write("export MyVar=\"Test\"\n")
process.stdin.write("echo $MyVar\n")
process.stdin.flush()
stdout, stderr = process.communicate()
print "stdout: " + str(stdout)
# Do it again
process.stdin.write("echo $MyVar\n")
process.stdin.flush()
stdout, stderr = process.communicate()
print "stdout: " + str(stdout)
but communicate() reads until the end, so this is not a valid technique. I get this:
stdout: Test
Traceback (most recent call last):
File "./MultipleCommands.py", line 15, in <module>
process.stdin.write("echo $MyVar\n")
ValueError: I/O operation on closed file
I have seen this: https://stackoverflow.com/a/15654218/284529 , but it doesn't give a working example of how to do what it proposes. Can anyone demonstrate how to do this?
I have also seen other techniques that involve constantly checking for output in a loop, but this doesn't fit the "get the output of a command" mentality - it is just treating it like a stream.
To get the output of multiple commands, just combine them into a single script:
#!/usr/bin/env python
import subprocess
import sys
output = subprocess.check_output("""
export MyVar="Test"
echo $MyVar
echo ${MyVar/est/ick}
""", shell=True, executable='/bin/bash', universal_newlines=True)
sys.stdout.write(output)
Output
Test
Tick
When using communicate, it detects that the subprocess has ended; but when there is an intermediate process (bash), you have to signal the end of each sub-command's output manually.
As for the rest, the simplest approach is to just emit a marker line. However, I'm sorry to disappoint you here, but polling (i.e. constantly checking in a loop) is actually the only sane option. If you don't like the loop, you could "hide" it away in a function.
import subprocess
import time

def readlines_upto(stream, until="### DONE ###"):
    while True:
        line = stream.readline()
        if line is None:
            time.sleep(0.1)
            continue
        if line.rstrip() == until:
            break
        yield line

process = subprocess.Popen("/bin/bash", shell=True,
                           stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
process.stdin.write("export MyVar=\"Test\"\n")
process.stdin.write("echo $MyVar\n")
process.stdin.write("echo '### DONE ###'\n")
process.stdin.flush()

# Note, I don't read stderr here, so if the subprocess outputs too much there,
# it'll fill the pipe and get stuck. If you don't need stderr data, don't
# redirect it to a pipe at all. If you need it, make readlines_upto read two pipes.
stdout = "".join(line for line in readlines_upto(process.stdout))
print "stdout: " + stdout

# Do it again
process.stdin.write("echo $MyVar\n")
process.stdin.flush()
stdout, stderr = process.communicate()
print "stdout: " + str(stdout)
The communicate and wait methods of Popen objects close the PIPE after the process returns. If you want to stay in communication with the process, try something like this:
import subprocess
process = subprocess.Popen("/bin/bash", shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE);
process.stdin.write("export MyVar=\"Test\"\n")
process.stdin.write("echo $MyVar\n")
process.stdin.flush()
process.stdout.readline()
process.stdin.write("echo $MyVar\n")
process.stdin.flush()
stdout, stderr = process.communicate()
print "stdout: " + str(stdout)
I think you misunderstand communicate...
Take a look at this link:
http://docs.python.org/library/subprocess.html#subprocess.Popen.communicate
communicate sends a string to the other process and then waits for it to finish (like you said, it waits for EOF while listening to stdout and stderr).
What you should do instead is:
proc.stdin.write('message')
# ...figure out how long or why you need to wait...
proc.stdin.write('message2')
(and if you need to get the stdout or stderr you'd use proc.stdout or proc.stderr)
As per the manual:
Popen.communicate(input=None)
Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to
terminate. [...]
You need to read from the pipe instead:
import os
stdout = os.read(process.stdout.fileno(), 1024)
print "stdout: " + stdout
If there's no data waiting, it will hang there forever or until data is ready to be read. You should use the select system call to prevent that:
import select
import os

try:
    i, o, e = select.select([process.stdout], [], [], 5)  # 5 second timeout
    stdout = os.read(i[0].fileno(), 1024)
except IndexError:
    # nothing was written to the pipe in 5 seconds
    stdout = ""
print "stdout: " + stdout
If you want to fetch multiple writes, to avoid race conditions, you'll have to put it in a loop:
stdout = ""
while True:
try:
i,o,e = select.select([process.stdout], [], [], 5) # 5 second timeout
stdout += os.read(i[0].fileno(), 1024)
except IndexError:
# nothing was written to the pipe in 5 seconds, we're done here
break
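To keep the bash process alive across commands without blocking forever, you can combine the marker-line idea from above with the select loop. A rough sketch of my own (not from the original answers; the read_until_marker helper and the marker string are made up, it is written in Python 2 style to match the snippets above, and it assumes Unix, where select works on pipes):
import os
import select
import subprocess

MARKER = "### DONE ###"  # hypothetical sentinel printed after each command

def read_until_marker(stream, timeout=5):
    # Read chunks from the pipe until the marker shows up or the timeout expires.
    buf = ""
    while MARKER not in buf:
        ready, _, _ = select.select([stream], [], [], timeout)
        if not ready:
            break  # nothing arrived within the timeout
        chunk = os.read(stream.fileno(), 1024)
        if not chunk:
            break  # pipe closed, bash exited
        buf += chunk
    return buf.split(MARKER)[0]

process = subprocess.Popen("/bin/bash", shell=True,
                           stdin=subprocess.PIPE, stdout=subprocess.PIPE)
process.stdin.write('export MyVar="Test"\necho $MyVar\necho "%s"\n' % MARKER)
process.stdin.flush()
print "stdout: " + read_until_marker(process.stdout)

# Do it again, without closing the pipes
process.stdin.write('echo $MyVar\necho "%s"\n' % MARKER)
process.stdin.flush()
print "stdout: " + read_until_marker(process.stdout)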

printing stdout in realtime from a subprocess that requires stdin

This is a follow-up to this question, but if I want to pass an argument to the subprocess on stdin, how can I get the output in real time? This is what I currently have; I also tried replacing Popen with call from the subprocess module, and that just leads to the script hanging.
from subprocess import Popen, PIPE, STDOUT
cmd = 'rsync --rsh=ssh -rv --files-from=- thisdir/ servername:folder/'
p = Popen(cmd.split(), stdout=PIPE, stdin=PIPE, stderr=STDOUT)
subfolders = '\n'.join(['subfolder1','subfolder2'])
output = p.communicate(input=subfolders)[0]
print output
In the earlier question, where I did not have to pass anything to stdin, I was advised to use p.stdout.readline; but that approach leaves no room to pipe anything to stdin.
Addendum: This works for the transfer, but I see the output only at the end and I would like to see the details of the transfer while it's happening.
In order to grab stdout from the subprocess in real time you need to decide exactly what behavior you want; specifically, you need to decide whether you want to deal with the output line-by-line or character-by-character, and whether you want to block while waiting for output or be able to do something else while waiting.
It looks like it will probably suffice for your case to read the output in line-buffered fashion, blocking until each complete line comes in, which means the convenience functions provided by subprocess are good enough:
p = subprocess.Popen(some_cmd, stdout=subprocess.PIPE)

# Grab stdout line by line as it becomes available. This will loop until
# p terminates.
while p.poll() is None:
    l = p.stdout.readline()  # This blocks until it receives a newline.
    print l

# When the subprocess terminates there might be unconsumed output
# that still needs to be processed.
print p.stdout.read()
If you need to write to the stdin of the process, just use another pipe:
p = subprocess.Popen(some_cmd, stdout=subprocess.PIPE, stdin=subprocess.PIPE)

# Send input to p.
p.stdin.write("some input\n")
p.stdin.flush()

# Now start grabbing output.
while p.poll() is None:
    l = p.stdout.readline()
    print l
print p.stdout.read()
Pace the other answer, there's no need to indirect through a file in order to pass input to the subprocess.
something like this I think
from subprocess import Popen, PIPE, STDOUT

p = Popen('c:/python26/python printingTest.py', stdout=PIPE,
          stderr=PIPE)
for line in iter(p.stdout.readline, ''):
    print line
p.stdout.close()
using an iterator will return live results basically ..
in order to send input to stdin you would need something like
other_input = "some extra input stuff"
with open("to_input.txt","w") as f:
f.write(other_input)
p = Popen('c:/python26/python printingTest.py < some_input_redirection_thing',
stdin = open("to_input.txt"),
stdout = PIPE,
stderr = PIPE)
this would be similar to the linux shell command of
%prompt%> some_file.o < to_input.txt
see alp's answer for a better way of passing data to stdin
If you pass all your input before starting reading the output and if by "real-time" you mean whenever the subprocess flushes its stdout buffer:
from subprocess import Popen, PIPE, STDOUT

cmd = 'rsync --rsh=ssh -rv --files-from=- thisdir/ servername:folder/'
p = Popen(cmd.split(), stdout=PIPE, stdin=PIPE, stderr=STDOUT, bufsize=1)
subfolders = '\n'.join(['subfolder1', 'subfolder2'])
p.stdin.write(subfolders)
p.stdin.close()  # eof
for line in iter(p.stdout.readline, ''):
    print line,  # do something with the output here
p.stdout.close()
rc = p.wait()
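For what it's worth, a Python 3 variant of the same pattern would look roughly like this (my own sketch, reusing the question's rsync command; universal_newlines=True gives you text instead of bytes):
from subprocess import Popen, PIPE, STDOUT

cmd = 'rsync --rsh=ssh -rv --files-from=- thisdir/ servername:folder/'
p = Popen(cmd.split(), stdout=PIPE, stdin=PIPE, stderr=STDOUT,
          bufsize=1, universal_newlines=True)
p.stdin.write('\n'.join(['subfolder1', 'subfolder2']))
p.stdin.close()  # signal EOF so rsync knows the file list is complete
for line in p.stdout:
    print(line, end='')  # handle each line as it arrives
p.stdout.close()
rc = p.wait()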

Python Popen sending to process on stdin, receiving on stdout

I pass an executable on the command-line to my python script. I do some calculations and then I'd like to send the result of these calculations on STDIN to the executable. When it has finished I would like to get the executable's result back from STDOUT.
ciphertext = str(hex(C1))
exe = Popen([sys.argv[1]], stdout=PIPE, stdin=PIPE)
result = exe.communicate(input=ciphertext)[0]
print(result)
When I print result I get nothing, not None, an empty line. I'm sure that the executable works with the data as I've repeated the same thing using the '>' on the command-line with the same previously calculated result.
A working example
#!/usr/bin/env python
import subprocess

text = 'hello'
proc = subprocess.Popen(
    'md5sum', stdout=subprocess.PIPE,
    stdin=subprocess.PIPE)
proc.stdin.write(text)
proc.stdin.close()
result = proc.stdout.read()
print result
proc.wait()
To get the same thing as “executable < params.file > output.file”, do this:
#!/usr/bin/env python
import subprocess

infile, outfile = 'params.file', 'output.file'
with open(outfile, 'w') as ouf:
    with open(infile, 'r') as inf:
        proc = subprocess.Popen(
            'md5sum', stdout=ouf, stdin=inf)
        proc.wait()
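In Python 3 the original communicate() approach also works, but the pipes carry bytes by default, so the input has to be encoded. A minimal sketch of my own (the hex value stands in for the question's computed C1):
from subprocess import Popen, PIPE
import sys

ciphertext = str(hex(0x1234))  # hypothetical stand-in for the computed value C1
exe = Popen([sys.argv[1]], stdout=PIPE, stdin=PIPE)
result = exe.communicate(input=ciphertext.encode())[0]  # encode str -> bytes
print(result.decode())  # decode bytes -> str before printing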

Python subprocess readlines()?

So I'm trying to move away from os.popen to subprocess.Popen, as recommended by the user guide. The only trouble I'm having is that I can't seem to find a way of making readlines() work.
So I used to be able to do
list = os.popen('ls -l').readlines()
But I can't do
list = subprocess.Popen(['ls','-l']).readlines()
ls = subprocess.Popen(['ls','-l'], stdout=subprocess.PIPE)
out = ls.stdout.readlines()
or, if you want to read line-by-line (maybe the other process is more intensive than ls):
for ln in ls.stdout:
    # whatever
With subprocess.Popen, use communicate to read and write data:
out, err = subprocess.Popen(['ls','-l'], stdout=subprocess.PIPE).communicate()
Then you can always split the string from the processes' stdout with splitlines().
out = out.splitlines()
Making a system call that returns the stdout output as a string:
lines = subprocess.check_output(['ls', '-l']).splitlines()
list = subprocess.Popen(['ls', '-l'], stdout=subprocess.PIPE).communicate()[0].splitlines()
straight from the help(subprocess)
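Note that on Python 3 these calls return bytes rather than str; either decode the result or ask for text output. A small sketch of my own, not from the answers above:
import subprocess

# universal_newlines=True (or text=True on Python 3.7+) makes check_output return str
lines = subprocess.check_output(['ls', '-l'], universal_newlines=True).splitlines()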
A more detailed way of using subprocess.
import subprocess

# Set the command
command = "ls -l"

# Setup the module object
proc = subprocess.Popen(command,
                        shell=True,
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)

# Communicate the command
stdout_value, stderr_value = proc.communicate()

# Once you have a valid response, split the return output
if stdout_value:
    stdout_value = stdout_value.split()

catching stdout in realtime from subprocess

I want to subprocess.Popen() rsync.exe in Windows, and print the stdout in Python.
My code works, but it doesn't catch the progress until a file transfer is done! I want to print the progress for each file in real time.
Using Python 3.1 now since I heard it should be better at handling IO.
import subprocess, time, os, sys

cmd = "rsync.exe -vaz -P source/ dest/"
p, line = True, 'start'

p = subprocess.Popen(cmd,
                     shell=True,
                     bufsize=64,
                     stdin=subprocess.PIPE,
                     stderr=subprocess.PIPE,
                     stdout=subprocess.PIPE)

for line in p.stdout:
    print(">>> " + str(line.rstrip()))
    p.stdout.flush()
Some rules of thumb for subprocess.
Never use shell=True. It needlessly invokes an extra shell process to call your program.
When calling processes, arguments are passed around as lists. sys.argv in python is a list, and so is argv in C. So you pass a list to Popen to call subprocesses, not a string.
Don't redirect stderr to a PIPE when you're not reading it.
Don't redirect stdin when you're not writing to it.
Example:
import subprocess, time, os, sys

cmd = ["rsync.exe", "-vaz", "-P", "source/", "dest/"]

p = subprocess.Popen(cmd,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)

for line in iter(p.stdout.readline, b''):
    print(">>> " + line.rstrip().decode())  # decode the bytes for printing on Python 3
That said, it is probable that rsync buffers its output when it detects that it is connected to a pipe instead of a terminal. This is the default behavior: when connected to a pipe, programs must explicitly flush stdout for real-time results, otherwise the standard C library will buffer it.
To test for that, try running this instead:
cmd = [sys.executable, 'test_out.py']
and create a test_out.py file with the contents:
import sys
import time
print ("Hello")
sys.stdout.flush()
time.sleep(10)
print ("World")
Executing that subprocess should give you "Hello" and wait 10 seconds before giving "World". If that happens with the python code above and not with rsync, that means rsync itself is buffering output, so you are out of luck.
A solution would be to connect direct to a pty, using something like pexpect.
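For illustration, here is a rough sketch of the pty idea using only the standard library (my own example, not from the answer; Unix only, and cmd is assumed to be an rsync argument list):
import os
import pty
import subprocess

cmd = ["rsync", "-vaz", "-P", "source/", "dest/"]  # hypothetical example command

# Give the child a pseudo-terminal as stdout so it believes it is talking to a terminal
master_fd, slave_fd = pty.openpty()
p = subprocess.Popen(cmd, stdout=slave_fd, stderr=subprocess.STDOUT)
os.close(slave_fd)  # the child holds its own copy of the slave end

try:
    while True:
        data = os.read(master_fd, 1024)
        if not data:
            break
        print(data.decode(), end='', flush=True)
except OSError:
    pass  # reading the master after the child exits raises EIO on Linux
p.wait()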
I know this is an old topic, but there is a solution now. Call rsync with the option --outbuf=L. Example:
cmd = ['rsync', '-arzv', '--backup', '--outbuf=L', 'source/', 'dest']
p = subprocess.Popen(cmd,
                     stdout=subprocess.PIPE)
for line in iter(p.stdout.readline, b''):
    print '>>> {}'.format(line.rstrip())
Depending on the use case, you might also want to disable the buffering in the subprocess itself.
If the subprocess will be a Python process, you could do this before the call:
os.environ["PYTHONUNBUFFERED"] = "1"
Or alternatively pass this in the env argument to Popen.
Otherwise, if you are on Linux/Unix, you can use the stdbuf tool. E.g. like:
cmd = ["stdbuf", "-oL"] + cmd
See also here about stdbuf or other options.
On Linux, I had the same problem of getting rid of the buffering. I finally used "stdbuf -o0" (or, unbuffer from expect) to get rid of the PIPE buffering.
proc = Popen(['stdbuf', '-o0'] + cmd, stdout=PIPE, stderr=PIPE)
stdout = proc.stdout
I could then use select.select on stdout.
See also https://unix.stackexchange.com/questions/25372/
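A minimal sketch of that combination (my own illustration, assuming Unix, where stdbuf exists and select works on pipes; cmd is a hypothetical rsync argument list):
import os
import select
from subprocess import Popen, PIPE

cmd = ["rsync", "-vaz", "-P", "source/", "dest/"]  # hypothetical example command
proc = Popen(["stdbuf", "-o0"] + cmd, stdout=PIPE, stderr=PIPE)

while proc.poll() is None:
    # wait up to one second for data on either pipe, then read whatever is available
    ready, _, _ = select.select([proc.stdout, proc.stderr], [], [], 1)
    for stream in ready:
        data = os.read(stream.fileno(), 1024)
        if data:
            print(data.decode(), end='')

# drain anything left in the pipes after the process exits
for stream in (proc.stdout, proc.stderr):
    leftover = stream.read()
    if leftover:
        print(leftover.decode(), end='')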
for line in p.stdout:
    ...
always blocks until the next line-feed.
For "real-time" behaviour you have to do something like this:
while True:
    inchar = p.stdout.read(1)
    if inchar:  # neither empty string nor None
        print(str(inchar), end='')  # or end=None to flush immediately
    else:
        print('')  # flush for implicit line-buffering
        break
The while-loop is left when the child process closes its stdout or exits.
read()/read(-1) would block until the child process closed its stdout or exited.
Your problem is:
for line in p.stdout:
    print(">>> " + str(line.rstrip()))
    p.stdout.flush()
the iterator itself has extra buffering.
Try doing it like this:
while True:
    line = p.stdout.readline()
    if not line:
        break
    print line
You cannot get stdout to print unbuffered to a pipe (unless you can rewrite the program that prints to stdout), so here is my solution:
Redirect stdout to stderr, which is not buffered. '<cmd> 1>&2' should do it. Open the process as follows: myproc = subprocess.Popen('<cmd> 1>&2', shell=True, stderr=subprocess.PIPE) (shell=True is needed so the redirection is interpreted).
You cannot distinguish from stdout or stderr, but you get all output immediately.
Hope this helps anyone tackling this problem.
To avoid caching of output you might wanna try pexpect,
import pexpect

child = pexpect.spawn(launchcmd, args, timeout=None)
while True:
    try:
        child.expect('\n')
        print(child.before)
    except pexpect.EOF:
        break
PS : I know this question is pretty old, still providing the solution which worked for me.
PPS: got this answer from another question
p = subprocess.Popen(command,
                     bufsize=0,
                     universal_newlines=True)
I am writing a GUI for rsync in Python, and have the same problems. This problem troubled me for several days until I found this in the pydoc.
If universal_newlines is True, the file objects stdout and stderr are opened as text files in universal newlines mode. Lines may be terminated by any of '\n', the Unix end-of-line convention, '\r', the old Macintosh convention or '\r\n', the Windows convention. All of these external representations are seen as '\n' by the Python program.
It seems that rsync outputs '\r' while a transfer is going on.
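A slightly fuller sketch of that idea (my own, not the answer's code; cmd is a hypothetical rsync argument list, and universal_newlines=True makes the '\r'-terminated progress updates show up as separate lines):
import subprocess

cmd = ["rsync", "-vaz", "-P", "source/", "dest/"]  # hypothetical example command
p = subprocess.Popen(cmd,
                     bufsize=1,  # line-buffered in text mode
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT,
                     universal_newlines=True)
for line in p.stdout:
    # progress lines that rsync terminates with '\r' arrive here as their own lines
    print(line.rstrip())
p.wait()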
If you run something like this in a thread and save ffmpeg_time in an attribute you can access from elsewhere, it works very nicely.
I get output like this (screenshots omitted: one shows the plain output, the other the output when using threading in tkinter):
import re
import subprocess

input = 'path/input_file.mp4'
output = 'path/output_file.mp4'
command = "ffmpeg -y -v quiet -stats -i \"" + str(input) + "\" -metadata title=\"#alaa_sanatisharif\" -preset ultrafast -vcodec copy -r 50 -vsync 1 -async 1 \"" + output + "\""
process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True, shell=True)
for line in process.stdout:
    reg = re.search(r'\d\d:\d\d:\d\d', line)
    ffmpeg_time = reg.group(0) if reg else ''
    print(ffmpeg_time)
Change the stdout from the rsync process to be unbuffered.
p = subprocess.Popen(cmd,
                     shell=True,
                     bufsize=0,  # 0=unbuffered, 1=line-buffered, else buffer-size
                     stdin=subprocess.PIPE,
                     stderr=subprocess.PIPE,
                     stdout=subprocess.PIPE)
I've noticed that there is no mention of using a temporary file as an intermediate. The following gets around the buffering issues by outputting to a temporary file, and allows you to parse the data coming from rsync without connecting to a pty. I tested the following on a Linux box; the output of rsync tends to differ across platforms, so the regular expressions to parse the output may vary:
import subprocess, time, tempfile, re

pipe_output, file_name = tempfile.mkstemp()
cmd = ["rsync", "-vaz", "-P", "/src/", "/dest"]
p = subprocess.Popen(cmd, stdout=pipe_output,
                     stderr=subprocess.STDOUT)

while p.poll() is None:
    # p.poll() returns None while the program is still running
    # sleep for 1 second
    time.sleep(1)
    last_line = open(file_name).readlines()
    # it's possible that it hasn't output yet, so continue
    if len(last_line) == 0:
        continue
    last_line = last_line[-1]
    # Matching to "[bytes downloaded] number% [speed] number:number:number"
    match_it = re.match(".* ([0-9]*)%.* ([0-9]*:[0-9]*:[0-9]*).*", last_line)
    if not match_it:
        continue
    # in this case, the percentage is stored in match_it.group(1),
    # time in match_it.group(2). We could do something with it here...
In Python 3, here's a solution, which takes a command off the command line and delivers real-time nicely decoded strings as they are received.
Receiver (receiver.py):
import subprocess
import sys
cmd = sys.argv[1:]
p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
for line in p.stdout:
    print("received: {}".format(line.rstrip().decode("utf-8")))
Example simple program that could generate real-time output (dummy_out.py):
import time
import sys

for i in range(5):
    print("hello {}".format(i))
    sys.stdout.flush()
    time.sleep(1)
Output:
$python receiver.py python dummy_out.py
received: hello 0
received: hello 1
received: hello 2
received: hello 3
received: hello 4
