I'm using the following to execute a process and hide its output from Python. It's in a loop though, and I need a way to block until the sub process has terminated before moving to the next iteration.
subprocess.Popen(["scanx", "--udp", host], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
Use subprocess.call(). From the docs:
subprocess.call(*popenargs, **kwargs)
Run command with arguments. Wait for command to complete, then return the returncode attribute.
The arguments are the same as for the Popen constructor.
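For example, applied to the command from the question (a sketch; host comes from the question, and the edit below explains why piping all three streams here can still deadlock):

import subprocess

# Blocks until scanx exits, then returns its exit code.
rc = subprocess.call(["scanx", "--udp", host],
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)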
Edit:
subprocess.call() uses wait(), and wait() is vulnerable to deadlocks (as Tommy Herbert pointed out). From the docs:
Warning: This will deadlock if the child process generates enough output to a stdout or stderr pipe such that it blocks waiting for the OS pipe buffer to accept more data. Use communicate() to avoid that.
So if your command generates a lot of output, use communicate() instead:
p = subprocess.Popen(
    ["scanx", "--udp", host],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE)
out, err = p.communicate()
If you don't need the output at all, you can pass devnull to stdout and stderr. I don't know if it makes a difference, but pass a bufsize as well. With devnull, subprocess.call no longer suffers from the deadlock:
import os
import subprocess
null = open(os.devnull, 'w')
subprocess.call(['ls', '-lR'], bufsize=4096, stdout=null, stderr=null)
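As a hedged aside (not part of the original answer), on Python 3.3+ the same effect is available without opening os.devnull yourself:

import subprocess

# subprocess.DEVNULL (Python 3.3+) discards the child's output directly.
subprocess.call(['ls', '-lR'], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)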
Related
I am calling a shell script from within Python that spawns multiple child processes. I want to terminate that process and all of its children if it has not finished after two minutes.
Is there any way I can do that with subprocess.run or do I have to go back to using Popen? Since run is blocking, I am not able to save the pid somewhere to kill the children in an extra command. A short code example:
try:
    subprocess.run(["my_shell_script"], stderr=subprocess.STDOUT, timeout=120)
except subprocess.TimeoutExpired:
    print("Timeout during execution")
This problem was reported as a bug to the Python developers. It seems to happen specifically when stderr or stdout is redirected.
Here is a more correct version of @Tanu's code.
import subprocess as sp

proc = sp.Popen(['ls', '-l'], stdout=sp.PIPE, stderr=sp.PIPE)
try:
    outs, errs = proc.communicate(timeout=120)
except sp.TimeoutExpired:
    proc.terminate()
    outs, errs = proc.communicate()  # collect remaining output and reap the child
Popen doesn't accept timeout as a parameter. It must be passed to communicate.
On POSIX systems, terminate is gentler than kill: it sends SIGTERM, which the child can catch in order to clean up after itself, whereas kill sends the uncatchable SIGKILL.
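If the shell script spawns children of its own, terminating only the direct child can leave grandchildren running. One common POSIX approach, sketched here as an assumption rather than taken from the answer above, is to start the script in its own session and signal the whole process group:

import os
import signal
import subprocess

# start_new_session=True makes the script a process-group leader,
# so one signal to the group reaches it and all of its children.
proc = subprocess.Popen(["my_shell_script"], stderr=subprocess.STDOUT,
                        start_new_session=True)
try:
    proc.wait(timeout=120)
except subprocess.TimeoutExpired:
    os.killpg(os.getpgid(proc.pid), signal.SIGTERM)  # POSIX only
    proc.wait()  # reap the terminated child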
Quoting from the docs:
subprocess.run - This does not capture stdout or stderr by default. To do so, pass PIPE for the stdout and/or stderr arguments.
You don't have to use Popen() directly if you don't want to; the other functions in the module, such as call() and run(), are convenience wrappers around Popen().
There are three 'file' streams: stdin for input, and stdout and stderr for output. The application decides what to write where; usually errors and diagnostics go to stderr and the rest to stdout. To capture either output stream, pass subprocess.PIPE for the corresponding argument so that the stream is redirected into your program.
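A minimal sketch of the difference, using ls against a file that presumably doesn't exist so the diagnostic lands on stderr while stdout stays empty:

import subprocess

proc = subprocess.Popen(['ls', 'no_such_file'],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE,
                        universal_newlines=True)
out, err = proc.communicate()
print('stdout:', repr(out))  # empty: nothing was listed
print('stderr:', repr(err))  # the 'No such file or directory' diagnostic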
To kill the child process after timeout:
import os
import signal
import subprocess
proc = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
try:
    outs, errs = proc.communicate(timeout=120)
except subprocess.TimeoutExpired:
    os.kill(proc.pid, signal.SIGTERM)
I have a Python script to capture network traffic with tcpdump in a subprocess:
p = subprocess.Popen(['tcpdump', '-I', '-i', 'en1',
                      '-w', 'cap.pcap'], stdout=subprocess.PIPE)
time.sleep(10)
p.kill()
When this script completes its work, I'm trying to open output .pcap file in Wireshark and getting this error:
"The capture file appears to have been cut short in the middle of a packet."
What can I do to close the tcpdump subprocess "properly"?
Instead of p.kill(), you can use p.send_signal(signal.SIGTERM) to send a terminate signal rather than a kill (p.terminate() does the same; signal here is the standard signal module, which subprocess also re-exposes as subprocess.signal).
The Popen docs describe the send_signal() command. The documentation on signals is a bit weak, but dir(signal) will list all the signals you can send to the process; SIGTERM, unlike SIGKILL, should allow it some time to clean up.
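Applied to the capture script from the question (a sketch; the final wait() gives tcpdump time to flush and close cap.pcap before the script moves on):

import signal
import subprocess
import time

p = subprocess.Popen(['tcpdump', '-I', '-i', 'en1', '-w', 'cap.pcap'])
time.sleep(10)
p.send_signal(signal.SIGTERM)  # equivalent to p.terminate()
p.wait()                       # let tcpdump flush its file and exit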
Found working solution:
I've changed p.kill() to p.terminate().
After this change the subprocess finishes "properly" (tcpdump prints its capture statistics to the console) and the output .pcap file is not damaged.
I had the same problem about closing subprocesses. This thread really helped, so thanks, especially to https://stackoverflow.com/users/3583715/rkh. My solution:
Before:
output = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True)
message = output.stdout.read()
output.stdout.close()
After reading the Popen docs:
output = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True)
message = output.stdout.read()
output.TerminateProcess()
For some reason, calling output.kill() and/or output.terminate(), or sending output.send_signal(subprocess.signal.SIGTERM), didn't work, but output.TerminateProcess() did.
I have a C program (I'm not the author) that reads from stderr. I call it using subprocess.Popen as below. Is there any way to write to the stderr of the subprocess?
proc = subprocess.Popen(['./std.bin'],stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
Yes, maybe, but you should be aware of how irregular it is to write to the standard output or standard error of a subprocess. The vast majority of processes only write to these streams, and almost none actually try to read from them (because in almost all cases there's nothing to read).
What you could try is to open a socket and supply that as the stderr argument.
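A sketch of that idea with socket.socketpair() (POSIX semantics assumed; the child inherits one end of the pair as its stderr):

import socket
import subprocess

parent_sock, child_sock = socket.socketpair()
# The child gets child_sock dup'd onto its stderr fd and can read from it.
proc = subprocess.Popen(['./std.bin'], stderr=child_sock.fileno())
child_sock.close()           # the child keeps its own copy
parent_sock.sendall(b'abc')  # arrives on the child's stderr fd
parent_sock.close()
proc.wait()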
What you most probably want to do is the opposite: read from the stderr of the subprocess (the subprocess writes, you read). That can be done by setting it to subprocess.PIPE and then accessing the stderr attribute of the subprocess:
proc = subprocess.Popen(['./std.bin'], stderr=subprocess.PIPE)
for line in proc.stderr:
    print(line)
Note that you can specify more than one of stdin, stdout and stderr as subprocess.PIPE. This does not mean they will be connected to the same pipe (subprocess.PIPE is not an actual file, just a placeholder indicating that a pipe should be created). If you do this, however, you should take care to avoid deadlocks, which can for example be done by using the communicate method (you can inspect the source of the subprocess module to see what communicate does if you want to do it yourself).
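A sketch of that do-it-yourself route: drain each pipe on its own thread so neither can fill up and stall the child (the names here are illustrative, not from the subprocess source):

import subprocess
import threading

proc = subprocess.Popen(['./std.bin'], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)

captured = {}

def drain(name, pipe):
    captured[name] = pipe.read()  # runs until the child closes its end

threads = [threading.Thread(target=drain, args=(name, pipe))
           for name, pipe in (('out', proc.stdout), ('err', proc.stderr))]
for t in threads:
    t.start()
proc.stdin.close()  # signal EOF to the child
for t in threads:
    t.join()
proc.wait()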
If the child process reads from stderr (note: normally stderr is opened for output):
#!/usr/bin/env python
"""Read from *stderr*, write to *stdout* reversed bytes."""
import os
os.write(1, os.read(2, 512)[::-1])
then you could provide a pseudo-tty (so that all streams point to the same place), to work with the child as if it were a normal subprocess:
#!/usr/bin/env python
import sys
import pexpect # $ pip install pexpect
child = pexpect.spawnu(sys.executable, ['child.py'])
child.sendline('abc') # write to the child
child.expect(pexpect.EOF)
print(repr(child.before))
child.close()
Output
u'abc\r\n\r\ncba'
You could also use subprocess + pty.openpty() instead of pexpect.
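A sketch of that subprocess + pty.openpty() variant (POSIX only, and an assumption rather than tested code; terminal echo means the input comes back mixed with the output):

import os
import pty
import subprocess
import sys

master, slave = pty.openpty()
# All three child streams point at the slave end of the pseudo-tty.
child = subprocess.Popen([sys.executable, 'child.py'],
                         stdin=slave, stdout=slave, stderr=slave)
os.close(slave)             # the child holds its own copies now
os.write(master, b'abc\n')  # reaches the child's stderr fd as well
print(repr(os.read(master, 512)))
child.wait()
os.close(master)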
Or you could write code specific to the weird stderr behavior:
#!/usr/bin/env python
import os
import sys
from subprocess import Popen, PIPE
r, w = os.pipe()
p = Popen([sys.executable, 'child.py'], stderr=r, stdout=PIPE,
          universal_newlines=True)
os.close(r)
os.write(w, b'abc') # write to subprocess' stderr
os.close(w)
print(repr(p.communicate()[0]))
Output
'cba'
for line in proc.stderr:
    sys.stdout.write(line)
This reads the stderr of the subprocess (opened with stderr=subprocess.PIPE) and writes it to your stdout. Hope it answers your question.
Why doesn't the following work?
import subprocess
process = subprocess.Popen('cmd.exe', shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=None)
The output I get is the following:
The process tried to write to a nonexistent pipe.
The process tried to write to a nonexistent pipe.
The process tried to write to a nonexistent pipe.
The process tried to write to a nonexistent pipe.
The process tried to write to a nonexistent pipe.
After which the process terminates without an error. I'm running Windows 7.
I'm running Windows 10 and have the same problem. The error goes away for me if I set stdout and stderr to the same value. Try setting them both to subprocess.PIPE instead of just stdout.
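A sketch of that suggestion (Windows assumed, as in the question; cmd.exe reads its commands from the stdin pipe here):

import subprocess

process = subprocess.Popen('cmd.exe', shell=False,
                           stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE)
out, err = process.communicate(b'echo Hello\r\nexit\r\n')
print(out.decode(errors='replace'))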
Try:
import subprocess
subprocess.run("echo Hello, world!", shell=True)
This uses the default stdout and stderr, instead of creating a PIPE.
I'm using the following command to run a shell command (creating a subprocess):
cmd = "ls"
process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True)
Then I want to get its return code when it's finished. Should I use wait() or poll()? It seems to me that wait() is equivalent to poll() wrapped in a busy wait, something like:
while process.poll() is None:
    time.sleep(0.5)
I read that wait() can deadlock if the stdout/stderr buffer fills up. Could process.poll() used as above also deadlock? If so, should I use process.communicate() to solve the problem? Nowadays I only use process.communicate() when I'm interested in the subprocess's stdout/stderr.
Thanks in advance.
Yes. subprocess.poll, when used in a loop like that, will cause a deadlock if the stdout is piped into your process and you aren't reading from it. If you don't pipe stdout or you're consistently reading from it, neither poll nor wait will deadlock. subprocess.communicate will solve the deadlock in the cases it would occur. However, if you just want to run a command, check its return code, and get its output, use subprocess.check_output, which wraps all of that.
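A sketch of the check_output route: it waits for the command, returns its stdout, and raises on a non-zero exit code:

import subprocess

try:
    out = subprocess.check_output(['ls', '-l'], stderr=subprocess.STDOUT,
                                  universal_newlines=True)
    print(out)
except subprocess.CalledProcessError as e:
    print('command failed with code', e.returncode)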