Python subprocess.Popen - write to stderr

I have a C program (I'm not the author) that reads from stderr. I call it using subprocess.Popen as below. Is there any way to write to the stderr of the subprocess?
proc = subprocess.Popen(['./std.bin'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

Yes, maybe, but you should be aware that writing to the standard output or standard error of a subprocess is unusual. The vast majority of processes only write to these streams, and almost none try to read from them (because in almost all cases there's nothing to read).
What you could try is to open a socket and supply that as the stderr argument.
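For example, a minimal sketch of that idea using socket.socketpair() (POSIX, untested, and assuming the ./std.bin from the question really does read from fd 2):
import socket
import subprocess

# One connected endpoint becomes the child's stderr; the parent writes
# into the other endpoint, and the child reads that data from fd 2.
parent_end, child_end = socket.socketpair()
proc = subprocess.Popen(['./std.bin'], stderr=child_end)
child_end.close()  # the child process now holds its own copy of this end
parent_end.sendall(b'data the child can read from fd 2\n')
parent_end.close()
proc.wait()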
What you most probably want to do is the opposite: read from the stderr of the subprocess (the subprocess writes, you read). That can be done by setting it to subprocess.PIPE and then accessing the stderr attribute of the Popen object:
proc = subprocess.Popen(['./std.bin'], stderr=subprocess.PIPE)
for l in proc.stderr:
    print(l)
Note that you can specify more than one of stdin, stdout and stderr as subprocess.PIPE. This does not mean they will be connected to the same pipe (subprocess.PIPE is not an actual file, just a placeholder indicating that a pipe should be created). If you do this, however, you should take care to avoid deadlocks, for example by using the communicate method (you can inspect the source of the subprocess module to see what communicate does if you want to do it yourself).
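For completeness, a minimal sketch of that communicate-based approach (assuming the same ./std.bin):
import subprocess

# Each stream gets its own pipe; communicate() feeds stdin and drains
# stdout and stderr concurrently, which is what avoids the deadlock.
proc = subprocess.Popen(['./std.bin'],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
out, err = proc.communicate(b'some input')
print(out)
print(err)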

If the child process reads from stderr (note: normally stderr is opened for output):
#!/usr/bin/env python
"""Read from *stderr*, write to *stdout* reversed bytes."""
import os
os.write(1, os.read(2, 512)[::-1])
then you could provide a pseudo-tty (so that all streams point to the same place) and work with the child as if it were a normal subprocess:
#!/usr/bin/env python
import sys
import pexpect # $ pip install pexpect
child = pexpect.spawnu(sys.executable, ['child.py'])
child.sendline('abc') # write to the child
child.expect(pexpect.EOF)
print(repr(child.before))
child.close()
Output
u'abc\r\n\r\ncba'
You could also use subprocess + pty.openpty() instead of pexpect, as in the sketch below.
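A rough sketch of that variant (POSIX only, untested, reusing the child.py from above):
#!/usr/bin/env python
import os
import pty
import sys
from subprocess import Popen

master, slave = pty.openpty()  # master stays with us, slave goes to the child
p = Popen([sys.executable, 'child.py'],
          stdin=slave, stdout=slave, stderr=slave)
os.close(slave)                # the parent talks only through the master end
os.write(master, b'abc\n')     # the child reads this from fd 2
print(os.read(master, 512))    # tty echo plus the child's reversed reply
p.wait()
os.close(master)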
Or you could write code specific to this unusual stderr behavior:
#!/usr/bin/env python
import os
import sys
from subprocess import Popen, PIPE
r, w = os.pipe()
p = Popen([sys.executable, 'child.py'], stderr=r, stdout=PIPE,
          universal_newlines=True)
os.close(r)
os.write(w, b'abc') # write to subprocess' stderr
os.close(w)
print(repr(p.communicate()[0]))
Output
'cba'

for line in proc.stderr:
    sys.stdout.write(line)
This writes out the stderr of the subprocess. Hope it answers your question.

Related

What exactly means that the stdout (or stdin) is set to subprocess.PIPE in Popen?

I have already read the documentation about subprocesses in Python, but still cannot quite understand this.
When using Popen, if we set the parameter stdout (or stdin) to subprocess.PIPE, what does that actually mean?
The documentation says
stdin, stdout and stderr specify the executed program’s standard input, standard output and standard error file handles, respectively... PIPE indicates that a new pipe to the child should be created.
What does this mean?
For example, if I have two subprocesses, both with stdout set to PIPE, are the outputs mixed? (I don't think so.)
More importantly, if I have a subprocess with stdout set to PIPE and later another subprocess with stdin set to PIPE, is that the same pipe, i.e. does the output of one go to the other?
Can someone explain this part of the documentation, which seems cryptic to me?
Additional notes:
For example
import os
import signal
import subprocess
import time
# The os.setsid() is passed in the argument preexec_fn so
# it's run after the fork() and before exec() to run the shell.
pro = subprocess.Popen("sar -u 1 > mylog.log", stdout=subprocess.PIPE,
                       shell=True, preexec_fn=os.setsid)
# Here another subprocess
subprocess.Popen(some_command, stdin=subprocess.PIPE)
time.sleep(10)
os.killpg(os.getpgid(pro.pid), signal.SIGTERM)
Does the output of sar go as input to some_command?
Please see the documentation.
So as you can see, PIPE is a special value; it "indicates that a new pipe to the child should be created." That means stdout=subprocess.PIPE and stderr=subprocess.PIPE result in two different pipes.
And for your example, the answer is no. These are two different pipes.
Actually, you can print out subprocess.PIPE:
print(subprocess.PIPE)
# -1
print(type(subprocess.PIPE))
# <class 'int'>
# So it is just an integer used to represent a special case.
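If you do want the output of one process to go into another, you have to wire the pipes together yourself. A sketch (sar and grep are just illustrative commands here):
import subprocess

# Pass p1's stdout object as p2's stdin; PIPE alone never connects two
# processes to each other.
p1 = subprocess.Popen(['sar', '-u', '1', '3'], stdout=subprocess.PIPE)
p2 = subprocess.Popen(['grep', 'Average'], stdin=p1.stdout,
                      stdout=subprocess.PIPE)
p1.stdout.close()  # so p1 gets SIGPIPE if p2 exits first
out, _ = p2.communicate()
print(out)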

Need help to read out the output of a subprocess

My Python script (Python 3.4.3) calls a bash script via subprocess.
OutPST = subprocess.check_output(cmd,shell=True)
It works, but the problem is that I only get half of the data. The subprocess I call calls another subprocess itself, and my guess is that when that "sub-subprocess" sends EOF, my program thinks that's it and ends the check_output.
Has someone an idea how to get all the data?
You should use subprocess.run() unless you really need fine-grained control over talking to the process via its stdin (or need to do something else while the process runs instead of blocking until it finishes). It makes capturing output super easy:
from subprocess import run, PIPE
result = run(cmd, stdout=PIPE, stderr=PIPE)
print(result.stdout)
print(result.stderr)
If you want to merge stdout and stderr (like how you'd see it in your terminal if you didn't do any redirection), you can use the special destination STDOUT for stderr:
from subprocess import STDOUT
result = run(cmd, stdout=PIPE, stderr=STDOUT)
print(result.stdout)
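Note that result.stdout is bytes by default; if you want str, pass universal_newlines=True (called text=True on Python 3.7+):
result = run(cmd, stdout=PIPE, stderr=STDOUT, universal_newlines=True)
print(result.stdout)  # decoded str instead of bytes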

Proper way to close all files after subprocess Popen and communicate

We are having some problems with the dreaded "too many open files" on our Ubuntu Linux machine running a Python Twisted application. In many places in our program we are using subprocess Popen, something like this:
process = Popen('ifconfig ' + iface, shell=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT, close_fds=True)
output = process.stdout.read()
while in other places we use subprocess communicate:
process = subprocess.Popen(['/usr/bin/env', 'python', self._get_script_path(script_name)],
                           stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE,
                           close_fds=True)
out, err = process.communicate(data)
What exactly do I need to do in both cases in order to close any open file descriptors? The Python documentation is not clear on this. From what I gather (which could be wrong), both communicate() and wait() will indeed clean up any open fds on their own. But what about Popen? Do I need to close stdin, stdout and stderr explicitly after calling Popen if I don't call communicate or wait?
According to this source for the subprocess module (link), if you call communicate you should not need to close the stdout and stderr pipes.
Otherwise I would try:
process.stdout.close()
process.stderr.close()
after you are done using the process object.
For instance, when you call .read() directly:
output = process.stdout.read()
process.stdout.close()
Look in the above module source for how communicate() is defined and you'll see that it closes each pipe after it reads from it, so that is what you should also do.
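On Python 3.2+ you can also let a with block do this for you: Popen is a context manager whose exit closes all of its pipes and waits for the process. A sketch, reusing the ifconfig example from the question:
from subprocess import Popen, PIPE, STDOUT

# On leaving the block, stdin/stdout/stderr are closed and the process
# is waited on, so no file descriptors leak.
with Popen('ifconfig ' + iface, shell=True, stdout=PIPE, stderr=STDOUT) as process:
    output = process.stdout.read()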
If you're using Twisted, don't use subprocess. If you were using spawnProcess instead, you wouldn't need to deal with annoying resource-management problems like this.

Keep a subprocess alive and keep giving it commands? Python

If I spawn a new subprocess in Python with a given command (let's say I start the Python interpreter with the python command), how can I send new data to the process (via STDIN)?
Use the standard subprocess module. You use subprocess.Popen() to start the process, and it will run in the background (i.e. at the same time as your Python program). When you call Popen(), you probably want to set the stdin, stdout and stderr parameters to subprocess.PIPE. Then you can use the stdin, stdout and stderr fields on the returned object to write and read data.
Untested example code:
from subprocess import Popen, PIPE
# Run "cat", which is a simple Linux program that prints it's input.
process = Popen(['/bin/cat'], stdin=PIPE, stdout=PIPE)
process.stdin.write(b'Hello\n')
process.stdin.flush()
print(repr(process.stdout.readline())) # Should print 'Hello\n'
process.stdin.write(b'World\n')
process.stdin.flush()
print(repr(process.stdout.readline())) # Should print 'World\n'
# "cat" will exit when you close stdin. (Not all programs do this!)
process.stdin.close()
print('Waiting for cat to exit')
process.wait()
print('cat finished with return code %d' % process.returncode)
Don't.
If you want to send commands to a subprocess, create a pty and then fork the subprocess with one end of the pty attached to its STDIN.
Here is a snippet from some of my code:
import pty
from subprocess import Popen

RNULL = open('/dev/null', 'r')
WNULL = open('/dev/null', 'w')
master, slave = pty.openpty()
print(parsedCmd)
self.subp = Popen(parsedCmd, shell=False, stdin=RNULL,
                  stdout=WNULL, stderr=slave)
In this code, the pty is attached to stderr because it receives error messages rather than sending commands, but the principle is the same.
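Adapted to the stdin case the question actually asks about, it might look like this (untested; parsedCmd and WNULL are the names from the snippet above):
import os

master, slave = pty.openpty()
self.subp = Popen(parsedCmd, shell=False, stdin=slave,
                  stdout=WNULL, stderr=WNULL)
os.close(slave)                      # the parent keeps only the master end
os.write(master, b'some command\n')  # this arrives on the child's stdin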

Python - Execute Process -> Block till it exits & Suppress Output

I'm using the following to execute a process and hide its output from Python. It's in a loop though, and I need a way to block until the subprocess has terminated before moving to the next iteration.
subprocess.Popen(["scanx", "--udp", host], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
Use subprocess.call(). From the docs:
subprocess.call(*popenargs, **kwargs)
Run command with arguments. Wait for command to complete, then return the returncode attribute.
The arguments are the same as for the Popen constructor.
Edit:
subprocess.call() uses wait(), and wait() is vulnerable to deadlocks (as Tommy Herbert pointed out). From the docs:
Warning: This will deadlock if the child process generates enough output to a stdout or stderr pipe such that it blocks waiting for the OS pipe buffer to accept more data. Use communicate() to avoid that.
So if your command generates a lot of output, use communicate() instead:
p = subprocess.Popen(
    ["scanx", "--udp", host],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE)
out, err = p.communicate()
If you don't need the output at all, you can redirect stdout and stderr to devnull. I don't know if it makes a difference, but pass a bufsize as well. With devnull, subprocess.call no longer suffers from the deadlock:
import os
import subprocess
null = open(os.devnull, 'w')
subprocess.call(['ls', '-lR'], bufsize=4096, stdout=null, stderr=null)
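On Python 3.3+, subprocess.DEVNULL does the same thing without opening the file yourself:
import subprocess

# DEVNULL is a special value that redirects the stream to os.devnull.
subprocess.call(['ls', '-lR'],
                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)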
