I have a Python script to capture network traffic with tcpdump in a subprocess:
import subprocess
import time

p = subprocess.Popen(['tcpdump', '-I', '-i', 'en1', '-w', 'cap.pcap'],
                     stdout=subprocess.PIPE)
time.sleep(10)
p.kill()
When this script completes, I try to open the output .pcap file in Wireshark and get this error:
"The capture file appears to have been cut short in the middle of a packet."
How can I close tcpdump's subprocess "properly"?
Instead of p.kill(), you can use p.send_signal(signal.SIGTERM) (from the standard signal module) to send a terminate signal rather than a kill (p.terminate() does the same).
The Popen docs describe the send_signal() method. The documentation on signals is a bit weak, but dir(signal) will list all the signals you may send to the process; SIGTERM allows it some time to clean up.
Found a working solution:
I changed p.kill() to p.terminate().
After this change the subprocess finishes "properly" (the tcpdump statistics are printed to the console) and the output .pcap file is not damaged.
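Putting it together, a minimal sketch of the corrected script (same capture settings as in the question; the wait() call is an addition so the parent reaps the child after it flushes the capture file):
import subprocess
import time

p = subprocess.Popen(['tcpdump', '-I', '-i', 'en1', '-w', 'cap.pcap'],
                     stdout=subprocess.PIPE)
time.sleep(10)
p.terminate()  # SIGTERM: tcpdump flushes its buffers and finalizes cap.pcap
p.wait()       # reap the child and collect its exit status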
I had the same problem with closing subprocesses. This thread really helped, so thanks, especially to https://stackoverflow.com/users/3583715/rkh. My solution:
Before:
output = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True)
message = output.stdout.read()
output.stdout.close()
After reading the Popen docs:
output = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True)
message = output.stdout.read()
output.TerminateProcess()
For some reason, calling output.kill() and/or output.terminate(), or sending output.send_signal(signal.SIGTERM), didn't work, but output.TerminateProcess() did.
Related
I want to read the ongoing output of a process started in the background with subprocess.Popen() and subprocess.communicate().
Starting the process:
import subprocess
process_params = ['/usr/bin/tcpdump', '-n', 'dst port 80']
proc = subprocess.Popen(
    process_params,
    stdout=subprocess.PIPE, stderr=subprocess.PIPE
)
(stdout, stderr) = proc.communicate()
The tcpdump process runs in the background, but proc.communicate() waits until end of file; only when the process is killed does it produce any output.
I would like to receive data from the process's stdout at the moment the process produces it.
I think I need a thread that watches for output from the process and then, for example, appends it to a log file.
I don't know how to get started, so any ideas and suggestions would be much appreciated.
tcpdump is probably buffering its output. It will only write output when it has a buffer full of data (typically 8KB) to write. This is common behavior for programs that are not writing to a TTY.
tcpdump has a command-line option, -l, to line-buffer its output. This causes it to write its output every time it has a full line of text. Try this:
process_params = ['/usr/bin/tcpdump', '-l', '-n', 'dst port 80']  # -l: line-buffer output
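With line buffering in place, you can pick up each line as it arrives and, as the question suggests, append it to a log file from a thread. A minimal sketch; the log file name tcpdump.log is just an example:
import subprocess
import threading

proc = subprocess.Popen(
    ['/usr/bin/tcpdump', '-l', '-n', 'dst port 80'],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE
)

def log_output(pipe, path):
    # append each line to the log file as soon as tcpdump emits it
    with open(path, 'a') as log:
        for line in iter(pipe.readline, b''):
            log.write(line.decode())
            log.flush()

t = threading.Thread(target=log_output, args=(proc.stdout, 'tcpdump.log'))
t.daemon = True  # don't let the reader thread keep the interpreter alive
t.start()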
Alternatively, see if you have programs named stdbuf or unbuffer installed on your system. They can be used to adjust the buffering behavior of another process. Or see these questions:
How to make output of any shell command unbuffered?
How to unbuffer stdout of legacy running binary without stdbuf and similar tools
I want to execute a Python subprocess in a new console. Once started, I want the user to be able to answer questions asked by this new process on stdin.
I tried the following code:
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=cwd, creationflags=subprocess.CREATE_NEW_CONSOLE)
(o, e) = p.communicate()
As soon as the subprocess asks for input on stdin the following error message is displayed:
EOFError: EOF when reading a line
Is this the right way to achieve this?
As I'm not really interested in redirecting stdout/stderr, I tried this instead:
subprocess.Popen(cmd, cwd=cwd, creationflags=subprocess.CREATE_NEW_CONSOLE)
It works fine now. I guess that redirecting the standard streams is not compatible with creating a new console.
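In other words, let the child inherit the new console's streams and just wait for it. A minimal sketch, with a hypothetical interactive script standing in for the question's cmd and cwd (CREATE_NEW_CONSOLE is Windows-only):
import subprocess

cmd = ['python', 'ask_questions.py']  # hypothetical interactive child script
cwd = '.'                             # example working directory

# No stdin/stdout/stderr redirection: the child owns the new console,
# so the user can answer its prompts there directly.
p = subprocess.Popen(cmd, cwd=cwd,
                     creationflags=subprocess.CREATE_NEW_CONSOLE)
p.wait()  # block until the user has finished with the child process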
When I run a program in the console, I get some text output.
When I run the same program via Popen(...) with the same parameters, stdout and stderr are empty.
I tried everything I could think of, like shell=False and shell=True, setting stdout=subprocess.PIPE, an os.chdir() into the program's directory, p.wait() and p.communicate(), and passing the command as a list and as a string, but nothing works.
example:
p = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
out, err = p.communicate()
--> out and err are empty strings, but if I run this command in a console I get real output. The command is given with its full path, so it doesn't matter where it is started from.
My question is: are there mechanisms for programs to detect that they weren't run in a real console? If so, how can I cheat?
Or am I missing something?
(Python 2.7.8 x32 on Win7 x64)
from subprocess import Popen, STDOUT, PIPE

p = Popen(command, shell=True, stdout=PIPE, stderr=STDOUT, stdin=PIPE)
while p.poll() is None:
    print(p.stdout.read())
p.stdout.close()
p.stdin.close()
Try this and see if it makes any difference. Also make sure command is a string and not a list/tuple; shell=True, for whatever reason, works better with (or only with) strings.
Also note that shell=True is frowned upon because it's insecure, etc.
Also, if you skip .communicate(), you need to drain stdout yourself; otherwise the pipe buffer will fill up and you might hang both your process and the child.
If this doesn't work, please provide more information, such as the command used and the expected output (at least the first few lines).
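On the question of detection: one mechanism programs genuinely use is checking whether stdout is attached to a terminal. A minimal sketch of a hypothetical child program illustrating the difference:
import sys

# A child can behave differently depending on whether its stdout is a
# real console (TTY) or a pipe such as the one Popen creates.
if sys.stdout.isatty():
    print("stdout is a console")
else:
    sys.stderr.write("stdout is redirected (e.g. a Popen pipe)\n")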
We are having some problems with the dreaded "too many open files" on our Ubuntu Linux machine running a Python Twisted application. In many places in our program, we are using subprocess.Popen, something like this:
process = Popen('ifconfig ' + iface, shell=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT, close_fds=True)
output = process.stdout.read()
while in other places we use subprocess communicate:
process = subprocess.Popen(['/usr/bin/env', 'python', self._get_script_path(script_name)],
                           stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE,
                           close_fds=True)
out, err = process.communicate(data)
What exactly do I need to do in both cases in order to close any open file descriptors? The Python documentation is not clear on this. From what I gather (which could be wrong), both communicate() and wait() will indeed clean up any open fds on their own. But what about Popen? Do I need to close stdin, stdout, and stderr explicitly after calling Popen if I don't call communicate or wait?
According to this source for the subprocess module (link), if you call communicate() you should not need to close the stdout and stderr pipes.
Otherwise I would try:
process.stdout.close()
process.stderr.close()
after you are done using the process object.
For instance, when you call .read() directly:
output = process.stdout.read()
process.stdout.close()
Look in the above module source for how communicate() is defined and you'll see that it closes each pipe after it reads from it, so that is what you should also do.
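So for the first snippet in the question, a minimal sketch of the explicit cleanup (mirroring the question's ifconfig example; iface is a placeholder, and the wait() is an addition so the child gets reaped too):
from subprocess import Popen, PIPE, STDOUT

iface = 'eth0'  # example interface name, as in the question's iface variable

process = Popen('ifconfig ' + iface, shell=True,
                stdin=PIPE, stdout=PIPE, stderr=STDOUT, close_fds=True)
output = process.stdout.read()
# Close every pipe this Popen opened, then reap the child so the OS
# releases its process entry and descriptors.
process.stdin.close()
process.stdout.close()
process.wait()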
If you're using Twisted, don't use subprocess. If you were using spawnProcess instead, you wouldn't need to deal with annoying resource-management problems like this.
I'm using the following to execute a process and hide its output from Python. It's in a loop though, and I need a way to block until the subprocess has terminated before moving to the next iteration.
subprocess.Popen(["scanx", "--udp", host], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
Use subprocess.call(). From the docs:
subprocess.call(*popenargs, **kwargs)
Run command with arguments. Wait for command to complete, then return the returncode attribute. The arguments are the same as for the Popen constructor.
Edit:
subprocess.call() uses wait(), and wait() is vulnerable to deadlocks (as Tommy Herbert pointed out). From the docs:
Warning: This will deadlock if the child process generates enough output to a stdout or stderr pipe such that it blocks waiting for the OS pipe buffer to accept more data. Use communicate() to avoid that.
So if your command generates a lot of output, use communicate() instead:
p = subprocess.Popen(
    ["scanx", "--udp", host],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE)
out, err = p.communicate()
If you don't need the output at all, you can pass devnull to stdout and stderr. I don't know if it makes a difference, but pass a bufsize as well. With devnull, subprocess.call no longer suffers from the deadlock:
import os
import subprocess
null = open(os.devnull, 'w')
subprocess.call(['ls', '-lR'], bufsize=4096, stdout=null, stderr=null)
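On Python 3.3+ you can skip the manual open(): the subprocess module provides a DEVNULL constant. A minimal equivalent sketch:
import subprocess

# subprocess.DEVNULL (Python 3.3+) discards output without an open() call
subprocess.call(['ls', '-lR'], stdout=subprocess.DEVNULL,
                stderr=subprocess.DEVNULL)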