In the snippet of my Python script below, I think that temp2 doesn't wait for temp to finish running. The output can be large, but it is just text. This truncates the result ('out') from temp; it stops mid-line. 'out' from temp works fine until temp2 is added. I tried adding time.wait() as well as subprocess.Popen.wait(temp). These both allow temp to run to completion so that 'out' is not truncated, but they disrupt the chaining so that there is no 'out2'. Any ideas?
temp = subprocess.Popen(call, stdout=subprocess.PIPE)
#time.wait(1)
#subprocess.Popen.wait(temp)
temp2 = subprocess.Popen(call2, stdin=temp.stdout, stdout=subprocess.PIPE)
out, err = temp.communicate()
out2, err2 = temp2.communicate()
According to the Python docs, communicate() can accept data to be sent to the child's stdin. If you change stdin of temp2 to subprocess.PIPE and pass out to communicate(), the data is properly piped.
#!/usr/bin/env python
import subprocess
import time
call = ["echo", "hello\nworld"]
call2 = ["grep", "w"]
temp = subprocess.Popen(call, stdout=subprocess.PIPE)
temp2 = subprocess.Popen(call2, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, err = temp.communicate()
out2, err2 = temp2.communicate(out)
print("Out: {0!r}, Err: {1!r}".format(out, err))
# Out: b'hello\nworld\n', Err: None
print("Out2: {0!r}, Err2: {1!r}".format(out2, err2))
# Out2: b'world\n', Err2: None
Following "Replacing shell pipeline" section from the docs:
temp = subprocess.Popen(call, stdout=subprocess.PIPE)
temp2 = subprocess.Popen(call2, stdin=temp.stdout, stdout=subprocess.PIPE)
temp.stdout.close()  # allow temp to receive SIGPIPE if temp2 exits before reading everything
out2 = temp2.communicate()[0]
In the program I maintain it is done as in:
# count the files in the archive
length = 0
command = ur'"%s" l -slt "%s"' % (u'path/to/7z.exe', srcFile)
ins, err = Popen(command, stdout=PIPE, stdin=PIPE,
                 startupinfo=startupinfo).communicate()
ins = StringIO.StringIO(ins)
for line in ins: length += 1
ins.close()
Is it really the only way? I can't seem to find any other command, but it seems a bit odd that I can't just ask for the number of files.
What about error checking? Would it be enough to modify this to:
proc = Popen(command, stdout=PIPE, stdin=PIPE,
             startupinfo=startupinfo)
out = proc.stdout
# ... count
returncode = proc.wait()
if returncode:
    raise Exception(u'Failed reading number of files from ' + srcFile)
or should I actually parse the output of Popen?
EDIT: interested in 7z, rar, zip archives (that are supported by 7z.exe) - but 7z and zip would be enough for starters
To count the number of archive members in a zip archive in Python:
#!/usr/bin/env python
import sys
from contextlib import closing
from zipfile import ZipFile
with closing(ZipFile(sys.argv[1])) as archive:
    count = len(archive.infolist())
print(count)
It may use the zlib, bz2, and lzma modules, if available, to decompress the archive.
To count the number of regular files in a tar archive:
#!/usr/bin/env python
import sys
import tarfile
with tarfile.open(sys.argv[1]) as archive:
    count = sum(1 for member in archive if member.isreg())
print(count)
It may support gzip, bz2 and lzma compression, depending on the version of Python.
You could find a 3rd-party module that provides similar functionality for 7z archives.
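For example, here's a sketch using the third-party py7zr package (py7zr is just one possible choice; the thread doesn't name a specific module). Note that getnames() lists every archive member, so directories are counted too:
#!/usr/bin/env python
import sys

import py7zr  # third-party: pip install py7zr (an assumption, not named in the thread)

with py7zr.SevenZipFile(sys.argv[1]) as archive:
    # getnames() returns all members, including directories
    print(len(archive.getnames()))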
To get the number of files in an archive using 7z utility:
import os
import re
import subprocess

def count_files_7z(archive):
    s = subprocess.check_output(["7z", "l", archive], env=dict(os.environ, LC_ALL="C"))
    return int(re.search(br'(\d+)\s+files,\s+\d+\s+folders$', s).group(1))
Here's a version that may use less memory if there are many files in the archive:
import os
import re
from subprocess import Popen, PIPE, CalledProcessError

def count_files_7z(archive):
    command = ["7z", "l", archive]
    p = Popen(command, stdout=PIPE, bufsize=1, env=dict(os.environ, LC_ALL="C"))
    with p.stdout:
        for line in p.stdout:
            if line.startswith(b'Error:'):  # found error
                error = line + b"".join(p.stdout)
                raise CalledProcessError(p.wait(), command, error)
    returncode = p.wait()
    assert returncode == 0
    return int(re.search(br'(\d+)\s+files,\s+\d+\s+folders', line).group(1))
Example:
import sys

try:
    print(count_files_7z(sys.argv[1]))
except CalledProcessError as e:
    getattr(sys.stderr, 'buffer', sys.stderr).write(e.output)
    sys.exit(e.returncode)
To count the number of lines in the output of a generic subprocess:
from functools import partial
from subprocess import Popen, PIPE, CalledProcessError

p = Popen(command, stdout=PIPE, bufsize=-1)
with p.stdout:
    read_chunk = partial(p.stdout.read, 1 << 15)
    count = sum(chunk.count(b'\n') for chunk in iter(read_chunk, b''))
if p.wait() != 0:
    raise CalledProcessError(p.returncode, command)
print(count)
It supports unlimited output.
Could you explain why bufsize=-1 (as opposed to bufsize=1 in your previous answer: stackoverflow.com/a/30984882/281545)?
bufsize=-1 means use the default I/O buffer size instead of bufsize=0 (unbuffered), which is the default on Python 2. It is a performance boost on Python 2; it is already the default on recent Python 3 versions. On some earlier Python 3 versions you might get a short read (lose data) if bufsize is not changed to bufsize=-1.
This answer reads in chunks and therefore the stream is fully buffered for efficiency. The solution you've linked is line-oriented. bufsize=1 means "line buffered". There is minimal difference from bufsize=-1 otherwise.
And also, what does read_chunk = partial(p.stdout.read, 1 << 15) buy us?
It is equivalent to read_chunk = lambda: p.stdout.read(1<<15) but provides more introspection in general. It is used to implement wc -l in Python efficiently.
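As a small aside, here is a sketch of the introspection point (using io.BytesIO as a stand-in for p.stdout): unlike a lambda, a partial object exposes the wrapped callable and its preset arguments.
from functools import partial
import io

stream = io.BytesIO(b"hello\nworld\n")      # stand-in for p.stdout
read_chunk = partial(stream.read, 1 << 15)
print(read_chunk.func)                      # the bound read method
print(read_chunk.args)                      # (32768,) -- the preset chunk size
print(read_chunk())                         # b'hello\nworld\n'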
Since I already have 7z.exe bundled with the app and I surely want to avoid a third-party lib, while I do need to parse rar and 7z archives, I think I will go with:
regErrMatch = re.compile(u'Error:', re.U).match  # needs more testing
r"""7z list command output is of the form:
   Date      Time    Attr         Size   Compressed  Name
------------------- ----- ------------ ------------  ------------------------
2015-06-29 21:14:04 ....A        <size>              <filename>
where ....A is the attribute value for normal files, ....D for directories
"""
reFileMatch = re.compile(ur'(\d|:|-|\s)*\.\.\.\.A', re.U).match
def countFilesInArchive(srcArch, listFilePath=None):
    """Count all regular files in srcArch (or only the subset in
    listFilePath)."""
    # https://stackoverflow.com/q/31124670/281545
    command = ur'"%s" l -scsUTF-8 -sccUTF-8 "%s"' % ('compiled/7z.exe', srcArch)
    if listFilePath: command += u' #"%s"' % listFilePath
    proc = Popen(command, stdout=PIPE, startupinfo=startupinfo, bufsize=-1)
    length, errorLine = 0, []
    with proc.stdout as out:
        for line in iter(out.readline, b''):
            line = unicode(line, 'utf8')
            if errorLine or regErrMatch(line):
                errorLine.append(line)
            elif reFileMatch(line):
                length += 1
    returncode = proc.wait()
    if returncode or errorLine: raise StateError(u'%s: Listing failed\n' % srcArch +
        u'7z.exe return value: ' + str(returncode) +
        u'\n' + u'\n'.join([x.strip() for x in errorLine if x.strip()]))
    return length
Error checking as in Python Popen - wait vs communicate vs CalledProcessError by @JFSebastien.
My final(ish) version, based on the accepted answer - unicode may not be needed, but I kept it for now as I use it everywhere. Also kept the regex (which I may expand; I have seen things like re.compile(u'^(Error:.+|.+ Data Error?|Sub items Errors:.+)', re.U)). Will have to look into check_output and CalledProcessError.
def countFilesInArchive(srcArch, listFilePath=None):
    """Count all regular files in srcArch (or only the subset in
    listFilePath)."""
    command = [exe7z, u'l', u'-scsUTF-8', u'-sccUTF-8', srcArch]
    if listFilePath: command += [u'#%s' % listFilePath]
    proc = Popen(command, stdout=PIPE, stdin=PIPE,  # stdin needed if listFilePath
                 startupinfo=startupinfo, bufsize=1)
    errorLine = line = u''
    with proc.stdout as out:
        for line in iter(out.readline, b''):  # consider io.TextIOWrapper
            line = unicode(line, 'utf8')
            if regErrMatch(line):
                errorLine = line + u''.join(out)
                break
    returncode = proc.wait()
    msg = u'%s: Listing failed\n' % srcArch.s
    if returncode or errorLine:
        msg += u'7z.exe return value: ' + str(returncode) + u'\n' + errorLine
    elif not line:  # should not happen
        msg += u'Empty output'
    else: msg = u''
    if msg: raise StateError(msg)  # consider using CalledProcessError
    # number of files is reported in the last line - example:
    #                                3534900       325332  75 files, 29 folders
    return int(re.search(ur'(\d+)\s+files,\s+\d+\s+folders', line).group(1))
Will edit this with my findings.
I need a simple way to pass the stdout of a subprocess as a list to another function using multiprocessing:
The first function that invokes subprocess:
def beginRecvTest():
    command = ["receivetest", "-f=/dev/pcan33"]
    incoming = Popen(command, stdout=PIPE)
    processing = iter(incoming.stdout.readline, "")
    lines = list(processing)
    return lines
The function that should receive lines:
def readByLine(lines):
    i = 0
    while (i < len(lines)):
        system("clear")
        if(lines[i][0].isdigit()):
            line = lines[i].split()
            dictAdd(line)
        else:
            next
        print ; print "-" * 80
        for _i in mydict.keys():
            printMsg(mydict, _i)
        print "Keys: ", ; print mydict.keys()
        print ; print "-" * 80
        sleep(0.3)
        i += 1
and the main from my program:
if __name__ == "__main__":
    dataStream = beginRecvTest()
    p = Process(target=dataStream)
    reader = Process(target=readByLine, args=(dataStream,))
    p.start()
    reader.start()
I've read up on using queues, but I don't think that's exactly what I need.
The subprocess returns an endless stream of data, so some people have suggested using tempfile, but I am totally confused about how to do this.
At the moment the script only returns the first line read, and all my attempts at looping the beginRecvTest() function have ended in errors.
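For reference, here is a minimal sketch of the queue-based approach mentioned above (not the poster's code; it reuses the receivetest command from the question): one process feeds lines from the subprocess into a multiprocessing.Queue, and a second process consumes them as they arrive.
from multiprocessing import Process, Queue
from subprocess import Popen, PIPE

def produce(queue):
    incoming = Popen(["receivetest", "-f=/dev/pcan33"], stdout=PIPE)
    for line in iter(incoming.stdout.readline, b""):
        queue.put(line)   # hand each line to the consumer as it is read
    queue.put(None)       # sentinel: no more data

def consume(queue):
    for line in iter(queue.get, None):
        print(line)       # replace with readByLine-style processing

if __name__ == "__main__":
    q = Queue()
    Process(target=produce, args=(q,)).start()
    Process(target=consume, args=(q,)).start()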
I have the following code:
import subprocess
import re
from itertools import *
command = ['ffprobe', '-i', '/media/some_file.mp4']
p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
text = p.stderr.read()
retcode = p.wait()
text = text.decode('utf-8')
p = re.compile("Duration(.*)")
num = 0 #for debugging
for line in iter(text.splitlines()):
    print(str(num) + line)  # for debugging
    m = p.match(str(line))
    if m != None:
        print(m.group(1))
When I look at the output there is a line that says "Duration" on it; however, it is not captured and print(m.group(1)) is never reached. If I change the text variable to a hardcoded string of "Duration blahblah" I get " blahblah", which is what I expect. It seems like the regex doesn't recognize the text coming back from stderr. How can I get the text into a format that the regex will recognize and match on?
I have come up with the following solution, in case it helps anyone else attempting to capture the duration from ffmpeg using Python:
import subprocess
import re
command = ['ffprobe', '-i', '/media/some_file.mp4']
p = subprocess.Popen(command, stderr=subprocess.PIPE)
text = p.stderr.read()
retcode = p.wait()
text = text.decode('utf-8')
p = re.compile(".*Duration:\s([0-9:\.]*),", re.MULTILINE|re.DOTALL)
m = p.match(text)
print(m.group(1))
p = re.compile(r".*?Duration(.*)")
Try this; match starts from the beginning of the string, while there might be something before Duration.
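Alternatively (a side note, not part of the original answer), re.search scans the whole string, so the leading .*? is not needed at all:
import re

line = "  Duration: 00:01:23.45, start: 0.000000, bitrate: 1205 kb/s"  # sample ffprobe-style line
m = re.search(r"Duration(.*)", line)   # search scans the line, no leading .*? required
if m is not None:
    print(m.group(1))                  # ': 00:01:23.45, start: 0.000000, bitrate: 1205 kb/s'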
This prints the directory size, but how can I save the output to a Python variable instead of printing it?
svn list -vR http://myIP/repos/test | awk '{sum+=$3; i++} END {print sum/1024000}'
but I need to store this printed value in a Python variable;
proc = subprocess.Popen(svnproc, stdout=subprocess.PIPE, shell=True)
output = proc.stdout.read()
print str(output)
A nasty workaround is to push it out to a file and cat the file:
svn list -vR http://myIP/repos/test | awk '{sum+=$3; i++} END {print sum/1024000 > "/tmp/output.txt"}'
From the fine docstring of "subprocess" I can read:
Replacing shell pipeline
output=`dmesg | grep hda`
p1 = Popen(["dmesg"], stdout=PIPE)
p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)
output = p2.communicate()[0]
so in your case I'd try the following code:
switches = ...
directory = ...
p1 = Popen(["svn", "list", switches, directory], stdout=PIPE)
p2 = Popen(["awk", "{sum+=$3; i++} END {print sum/1024/1024}", stdin=p1.stdout, stdout=PIPE)
output = p2.communicate()[0].strip()
PS: I have changed sum/1024000 to sum/1024/1024, assuming that you want to count in megabytes.
"
svnproc = "svn list -vR " + repoURL + " | awk '{sum+=$3; i++} END {print sum/1073741824}'"
proc = subprocess.Popen(svnproc, shell=True,
                        stdout=subprocess.PIPE)
svnbackupsize = float(proc.stdout.read())
The only problematic part of this script is that Popen does not wait until the process is done, whereas subprocess.call does wait until the process is completed.
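If all you need is to wait for the command and capture its output, subprocess.check_output does both in one call; here is a minimal sketch reusing repoURL from the snippet above (it raises CalledProcessError on a non-zero exit status):
import subprocess

svnproc = "svn list -vR " + repoURL + " | awk '{sum+=$3; i++} END {print sum/1073741824}'"
# check_output waits for the pipeline to finish and returns its stdout
svnbackupsize = float(subprocess.check_output(svnproc, shell=True))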
I want to execute a shell script with 3 arguments from a Python script (as described here: Python: executing shell script with arguments(variable), but argument is not read in shell script).
Here is my code:
subprocess.call('/root/bin/xen-limit %s %s %s' % (str(dom),str(result),str('--nosave'),), shell=True)
The variables dom and result contain strings.
And here is the output:
/bin/sh: --nosave: not found
UPDATE:
This is how the variable "result" is created:
c1 = ['/bin/cat', '/etc/xen/%s.cfg' % (str(dom))]
p1 = subprocess.Popen(c1, stdout=subprocess.PIPE)
c2 = ['grep', 'limited']
p2 = subprocess.Popen(c2, stdin=p1.stdout, stdout=subprocess.PIPE)
c3 = ['cut', '-d=', '-f2']
p3 = subprocess.Popen(c3, stdin=p2.stdout, stdout=subprocess.PIPE)
c4 = ['tr', '-d', '\"']
p4 = subprocess.Popen(c4, stdin=p3.stdout, stdout=subprocess.PIPE)
result = p4.stdout.read()
After that, the variable result contains a number with an mbit suffix (for example 16mbit),
And dom is a string like "myserver"
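One observation about the pipeline above (an aside, not raised elsewhere in the thread): p4.stdout.read() returns the output including its trailing newline, so interpolating result into the shell command puts --nosave on a new line, which is consistent with the /bin/sh: --nosave: not found error. A minimal sketch of stripping it, continuing from the pipeline above:
result = p4.stdout.read().strip()  # drop the trailing newline from the tr output
subprocess.call('/root/bin/xen-limit %s %s --nosave' % (dom, result), shell=True)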
from subprocess import Popen, STDOUT, PIPE
print('Executing: /root/bin/xen-limit ' + str(dom) + ' ' + str(result) + ' --nosave')
handle = Popen('/root/bin/xen-limit ' + str(dom) + ' ' + str(result) + ' --nosave', shell=True, stdout=PIPE, stderr=STDOUT, stdin=PIPE)
print(handle.stdout.read())
If this doesn't work, I honestly don't know what would.
This is the most basic yet most error-revealing way of launching a third-party application or script while still giving you the debug output you need.
Why don't you save --nosave to a variable and pass the variable to subprocess?
It's simpler (and safer) to pass a list consisting of the command name and its arguments.
subprocess.call(['/root/bin/xen-limit',
                 str(dom),
                 str(result),
                 str('--nosave')
                 ])
str('--nosave') is a no-op, as '--nosave' is already a string. The same may be true for dom and result as well.