I'm making a program in Python to run my C and C++ code.
When I tried to read the stdout of gcc, it returned nothing, just an empty string:
import subprocess

output = ''
cam = "g++ main.cpp -o output.exe -std=C++11"
proc = subprocess.Popen(cam, cwd='C:/test/', stdout=subprocess.PIPE)
proc.wait()
for line in proc.stdout:
    output += line.rstrip()
And this is the actual output (the error is intentional, just to check the output).
I just want to know how I can read the output of GCC and copy it into a variable.
This is because g++ outputs errors on STDERR, not STDOUT. You should use stderr=subprocess.PIPE if you wish to read those errors, which will then be available in proc.stderr.
proc = subprocess.Popen(cam, cwd='C:/test/', stderr=subprocess.PIPE)
proc.wait()
for line in proc.stderr:
    output += line.rstrip()
Also notice that you should not use .wait() in this situation. You want to use .communicate() instead, as stated by the documentation:
Note: This will deadlock when using stdout=PIPE or stderr=PIPE and the child process generates enough output to a pipe such that it blocks waiting for the OS pipe buffer to accept more data. Use Popen.communicate() when using pipes to avoid that.
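A minimal sketch of that approach, reusing the command and paths from the question (g++'s diagnostics arrive on stderr as bytes, so they are decoded before use):

import subprocess

cam = "g++ main.cpp -o output.exe -std=C++11"
proc = subprocess.Popen(cam, cwd='C:/test/', stderr=subprocess.PIPE)
stdout, stderr = proc.communicate()  # drains the pipe fully, then reaps the process
output = stderr.decode()             # the compiler's error messages
print(output)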
I want to mimic the below using python subprocess:
cat /tmp/myscript.sh | sh
The /tmp/myscript.sh contains:
ls -l
sleep 5
pwd
Behaviour: stdout shows the result of "ls", and the result of "pwd" is shown after 5 seconds.
What I have done is:
import subprocess
f = open("/tmp/myscript.sh", "rb")
p = subprocess.Popen("sh", shell=True, stdin=f,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
f.close()
p.stdout.read()
This waits until ALL the processing is done and shows the results all at once. The desired effect is to read from the stdout pipe in real time.
Note: This expectation may seem like nonsense, but it is a sample from a bigger, more complex situation which I cannot describe here.
Another Note: I can't use p.communicate. This whole thing is inside a select.select statement so I need stdout to be in a pipe.
The problem is that when you don't give an argument to read(), it reads until EOF, which means it has to wait until the subprocess exits and the pipe is closed.
If you call it with a small argument, it will return immediately after it has read that many characters:
import subprocess

f = open("/tmp/myscript.sh", "rb")
p = subprocess.Popen("sh", shell=True, stdin=f,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE, encoding='utf-8')
f.close()
while True:
    c = p.stdout.read(1)
    if not c:
        break
    print(c, end='')
print()
Note that many programs buffer their output when stdout is connected to a pipe, so this might not solve the problem in every case. The shell doesn't buffer its own output, but ls probably does. Since ls produces all of its output at once, though, it won't be a problem here.
To solve the more general problem you may need to use a pty instead of a pipe. The pexpect library is useful for this.
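A rough sketch of that idea with pexpect (the script path comes from the question above; because pexpect allocates a pty, most programs switch back to line buffering):

import pexpect  # $ pip install pexpect

child = pexpect.spawn('sh /tmp/myscript.sh', encoding='utf-8')
for line in child:        # yields lines as the child produces them
    print(line, end='')
child.close()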
For some reason, when I run "time ./<binary>" from the terminal, I get this format:
real 0m0.090s
user 0m0.086s
sys 0m0.004s
But when I execute the same command in Python 2.7.6:
result = subprocess.Popen("time ./<binary>", shell = True, stdout = subprocess.PIPE)
...I get this format when I print(result.stderr):
0.09user 0.00system 0:00.09elapsed
Is there any way I can force the first (real, user, sys) format?
From the man time documentation:
After the utility finishes, time writes the total time elapsed, the time consumed by system overhead, and the time used to execute utility to the standard error stream.
Bold emphasis mine. You are capturing the stdout stream, not the stderr stream, so whatever output you see must be the result of something else mangling your Python stderr stream.
Capture stderr:
proc = subprocess.Popen("time ./<binary>", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = proc.communicate()
The stderr variable then holds the time command output.
If this still produces the one-line format, the /bin/sh shell that subprocess runs is using the external /usr/bin/time command (which puts everything on one line) rather than the bash built-in time (which produces the real/user/sys format). You can force the use of the bash built-in by telling Python to run the command with bash:
proc = subprocess.Popen("time ./<binary>", shell=True, executable='/bin/bash',
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = proc.communicate()
First: Martijn Pieters' answer is correct about needing to capture time's stderr instead of stdout.
Also, at least on older versions of Python like 3.1, a subprocess.Popen object contains several things that could be considered "output". Attempting to print one just results in:
<subprocess.Popen object at 0x2068fd0>
If later versions are print-able, they must do some processing of their contents, probably including mangling the output.
Reading (and Printing) from Popen Object
The Popen object has a stderr field, which is a readable, file-like object. You can read from it like any other file-like object, although it's not recommended. Quoting the big, pink security warning:
Warning
Use communicate() rather than .stdin.write, .stdout.read or .stderr.read to avoid deadlocks due to any of the other OS pipe buffers filling up and blocking the child process.
To print your Popen's contents, you must:
1. communicate() with the sub-process, assigning the 2-tuple it returns (stdout, stderr) to local variable(s).
2. Convert the variable's contents into strings; by default, each is a bytes object, as if the "file" had been opened in binary mode.
Below is a tiny program that prints the stderr output of a shell command without mangling (not counting the conversion from ASCII to Unicode and back).
#!/usr/bin/env python3
import subprocess

def main():
    result = subprocess.Popen(
        'time sleep 0.2',
        shell=True,
        stderr=subprocess.PIPE,
    )
    stderr = result.communicate()[1]
    stderr_text = stderr.decode('us-ascii').rstrip('\n')
    #print(stderr_text)  # Prints all lines at once.
    # Or, if you want to process output line-by-line...
    lines = stderr_text.split('\n')
    for line in lines:
        print(line)
    return

if "__main__" == __name__:
    main()
This output is on an old Fedora Linux system, running bash with LC_ALL set to "C":
real 0m0.201s
user 0m0.000s
sys 0m0.001s
Note that you'll want to add some error handling around my script's stderr_text = stderr.decode(...) line; for all I know, time emits non-ASCII characters depending on localization, environment variables, etc.
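One defensive option (my suggestion, not part of the original answer) is to let decode() substitute anything it cannot map instead of raising:

# Replace undecodable bytes rather than raising UnicodeDecodeError.
stderr_text = stderr.decode('us-ascii', errors='replace').rstrip('\n')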
Alternative: universal_newlines
You can save some of the decoding boilerplate by using the universal_newlines option to Popen. It does the conversion from bytes to strings automatically:
If universal_newlines is True, these file objects will be opened as text streams in universal newlines mode using the encoding returned by locale.getpreferredencoding(False). [...]
def main_universal_newlines():
    result = subprocess.Popen(
        'time sleep 0.2',
        shell=True,
        stderr=subprocess.PIPE,
        universal_newlines=True,
    )
    stderr_text = result.communicate()[1].rstrip('\n')
    lines = stderr_text.split('\n')
    for line in lines:
        print(line)
    return
Note that I still have to strip the last '\n' manually to exactly match the shell's output.
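As an aside, on Python 3.7 and later the same capture can be written more compactly with subprocess.run; a sketch:

import subprocess

result = subprocess.run('time sleep 0.2', shell=True,
                        capture_output=True, text=True)
for line in result.stderr.rstrip('\n').split('\n'):
    print(line)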
I have a C program (I'm not the author) that reads from stderr. I call it using subprocess.Popen as below. Is there any way to write to the stderr of the subprocess?
proc = subprocess.Popen(['./std.bin'],stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
Yes, maybe, but you should be aware of how irregular it is to write to the standard output or standard error of a subprocess. The vast majority of processes only write to these streams, and almost none actually try to read from them (because in almost all cases there's nothing to read).
What you could try is to open a socket and supply that as the stderr argument.
What you most probably want to do is the opposite, to read from the stderr from the subprocess (the subprocesses writes, you read). That can be done by just setting it to subprocess.PIPE and then access the stderr attribute of the subprocess:
proc = subprocess.Popen(['./std.bin'], stderr=subprocess.PIPE)
for l in proc.stderr:
    print(l)
Note that you can specify more than one of stdin, stdout and stderr as subprocess.PIPE. This does not mean they will be connected to the same pipe (subprocess.PIPE is not an actual file, just a placeholder indicating that a pipe should be created). If you do this, however, you should take care to avoid deadlocks, for example by using the communicate method (you can inspect the source of the subprocess module to see what communicate does if you want to do it yourself).
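For instance, a sketch that pipes all three streams and still avoids the deadlock by letting communicate() do the pumping (./std.bin is the binary from the question; the input bytes are a placeholder):

from subprocess import Popen, PIPE

proc = Popen(['./std.bin'], stdin=PIPE, stdout=PIPE, stderr=PIPE)
out, err = proc.communicate(input=b'some input')  # feeds stdin, drains both pipes
print(out, err)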
If the child process reads from stderr (note: normally stderr is opened for output):
#!/usr/bin/env python
"""Read from *stderr*, write to *stdout* reversed bytes."""
import os
os.write(1, os.read(2, 512)[::-1])
then you could provide a pseudo-tty (so that all streams point to the same place), to work with the child as if it were a normal subprocess:
#!/usr/bin/env python
import sys
import pexpect # $ pip install pexpect
child = pexpect.spawnu(sys.executable, ['child.py'])
child.sendline('abc') # write to the child
child.expect(pexpect.EOF)
print(repr(child.before))
child.close()
Output
u'abc\r\n\r\ncba'
You could also use subprocess + pty.openpty() instead of pexpect.
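A sketch of that subprocess + pty.openpty() variant (same child.py as above; error handling kept minimal):

import os
import pty
import sys
from subprocess import Popen

master, slave = pty.openpty()
p = Popen([sys.executable, 'child.py'],
          stdin=slave, stdout=slave, stderr=slave, close_fds=True)
os.close(slave)              # keep only the master end in the parent
os.write(master, b'abc\n')   # the child sees this on all of its streams
while True:
    try:
        data = os.read(master, 512)
    except OSError:          # Linux raises EIO once the child hangs up
        break
    if not data:
        break
    sys.stdout.write(data.decode())
p.wait()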
Or you could write a code specific to the weird stderr behavior:
#!/usr/bin/env python
import os
import sys
from subprocess import Popen, PIPE
r, w = os.pipe()
p = Popen([sys.executable, 'child.py'], stderr=r, stdout=PIPE,
universal_newlines=True)
os.close(r)
os.write(w, b'abc') # write to subprocess' stderr
os.close(w)
print(repr(p.communicate()[0]))
Output
'cba'
for line in proc.stderr:
    sys.stdout.write(line)
This writes out the stderr of the subprocess. Hope it answers your question.
Off the bat, here is what I am importing:
import os, shutil
from subprocess import call, PIPE, STDOUT
I have a line of code that calls bjam to compile a library:
call(['./bjam',
'-j8',
'--prefix="' + tools_dir + '"'],
stdout=PIPE)
I want it to print out text as the compilation occurs. Instead, it prints everything out at the end.
It does not print anything when I run it like this. I have tried running the command outside of Python and determined that all of the output is to stdout (when I did ./bjam -j8 > /dev/null I got no output, and when I ran ./bjam -j8 2> /dev/null I got output).
What am I doing wrong here? I want to print the output from call live.
As a sidenote, I also noticed something when I was outputting the results of a git clone operation:
call(['git',
'clone', 'https://github.com/moses-smt/mosesdecoder.git'],
stdout=PIPE)
prints the stdout text live as the call process is run.
call(['git',
'clone', 'https://github.com/moses-smt/mosesdecoder.git'],
stdout=PIPE, stderr=STDOUT)
does not print out any text. What is going on here?
stdout=PIPE redirects the subprocess's stdout to a pipe. Don't do it unless you want to read from the subprocess's stdout in your code, using the proc.communicate() method or the proc.stdout attribute directly.
If you remove it, then the subprocess should print to stdout just like it does in the shell:
from subprocess import check_call
check_call(['./bjam', '-j8', '--prefix', tools_dir])
I've used check_call() to raise an exception if the child process fails.
See Python: read streaming input from subprocess.communicate() if you want to read the subprocess' output line by line (making each line available as a variable in Python) as soon as it is available.
Try:
import subprocess

def run(command):
    proc = subprocess.Popen(command, stdout=subprocess.PIPE)
    for lineno, line in enumerate(proc.stdout):
        try:
            print(line.decode('utf-8').replace('\n', ''))
        except UnicodeDecodeError:
            print('error(%d): cannot decode %s' % (lineno, line))
The try...except logic is for Python 3 (maybe 3.2/3.3, I'm not sure), since there line is a bytes object rather than a string. For earlier versions of Python, you should be able to do:
def run(command):
    proc = subprocess.Popen(command, stdout=subprocess.PIPE)
    for line in proc.stdout:
        print(line.replace('\n', ''))
Now, you can do:
run(['./bjam', '-j8', '--prefix="' + tools_dir + '"'])
call will not print anything it captures. As the documentation says: "Do not use stdout=PIPE or stderr=PIPE with this function. As the pipes are not being read in the current process, the child process may block if it generates enough output to a pipe to fill up the OS pipe buffer."
Consider using check_output and print its return value.
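A sketch of that (tools_dir is the variable from the question; note the output still arrives only after bjam finishes, not live):

from subprocess import check_output, STDOUT

out = check_output(['./bjam', '-j8', '--prefix', tools_dir], stderr=STDOUT)
print(out.decode())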
In the first git call you are not capturing stderr, and therefore it flows onto your terminal as normal.
I want to run a program from python and find its memory usage. To do so I am using:
import subprocess
import resource
from resource import getrusage

l = ['./a.out', '<', 'in.txt', '>', 'out.txt']
p = subprocess.Popen(l, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.wait()
Res = getrusage(resource.RUSAGE_CHILDREN)
print Res.ru_maxrss
I also tried check_call(l, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE) and removed p.wait(), but the program gets stuck at p.wait() when using Popen and at check_call() when using check_call(). I am not able to figure out why this is happening. Is my argument list wrong?
The command ./a.out < in.txt > out.txt works fine in a terminal. I am using Ubuntu.
There are two issues (at least):
1. <, > redirection is handled by a shell; subprocess doesn't spawn a shell by default (and you shouldn't either).
2. If stdout=PIPE, stderr=PIPE, then you must read from the pipes, otherwise the process may block forever.
To make a subprocess read from a file and write to a file:
from subprocess import check_call, STDOUT
with open('in.txt') as file, open('out.txt', 'w') as outfile:
check_call(["./a.out"], stdin=file, stdout=outfile, stderr=STDOUT)
stderr=STDOUT merges stdout, stderr.
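Tying this back to the memory measurement the question was after, a sketch (note that ru_maxrss is reported in kilobytes on Linux):

import resource
from subprocess import check_call, STDOUT

with open('in.txt') as infile, open('out.txt', 'w') as outfile:
    check_call(['./a.out'], stdin=infile, stdout=outfile, stderr=STDOUT)

# Once the child has exited, RUSAGE_CHILDREN includes its peak resident set size.
res = resource.getrusage(resource.RUSAGE_CHILDREN)
print(res.ru_maxrss)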
You are using shell redirection characters in your call, but when you use subprocess and set shell=False, you have to handle those redirections manually.
You seem to be passing those redirection characters directly as arguments to the a.out program.
Try running this in your shell:
./a.out '<' in.txt '>' out.txt
See if a.out terminates then as well.