I am trying to execute a command as follows, but it gets STUCK in the try block (shown below) until the timeout kicks in. The Python script runs fine by itself. Can anyone suggest why that is, and how to debug this?
cmd = "python complete.py"
proc = subprocess.Popen(cmd.split(' '), stdout=subprocess.PIPE)
print "Executing %s" % cmd
try:
    print "In try"  # <-- stuck here
    proc.wait(timeout=time_out)
except TimeoutExpired as e:
    print e
    proc.kill()
with proc.stdout as stdout:
    for line in stdout:
        print line,
proc.stdout isn't available to be read after the process exits. Instead, you need to read it while the process is running. communicate() will do that for you, but since you're not using it, you get to do it yourself.
Right now, your process is almost certainly hanging trying to write to its stdout -- which it can't do, because the other end of the pipe isn't being read from.
See also Using module 'subprocess' with timeout.
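On Python 3, a minimal sketch of the communicate()-based fix; the child command here is a stand-in for python complete.py:

```python
import subprocess
import sys

# Read stdout *while* waiting -- communicate() does both at once, so the
# pipe buffer can never fill up and stall the child.
proc = subprocess.Popen([sys.executable, "-c", "print('done')"],
                        stdout=subprocess.PIPE, universal_newlines=True)
try:
    out, _ = proc.communicate(timeout=5)
except subprocess.TimeoutExpired:
    proc.kill()
    out, _ = proc.communicate()  # drain whatever was written before the kill
for line in out.splitlines():
    print(line)
```
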
I have a thread which handles commands sent to a device. It opens a subprocess, sends the command to qmicli application (https://sigquit.wordpress.com/2012/08/20/an-introduction-to-libqmi/), gets a reply and the reply is dealt with.
This generally works fine for days or weeks of running. However, I noticed that sometimes the thread just stops doing anything when I make a subprocess.Popen call (the next lines of code do not run). The simplified code looks like this:
try:
    self.qmi_process = subprocess.Popen(cmd,
                                        stdout=subprocess.PIPE,
                                        stderr=subprocess.STDOUT)
    # Log value of self.qmi_process happens here
    if self.qmi_process:
        out = self.qmi_process.communicate()
    else:
        return "ERROR: no qmi_process"
    self.qmi_process = None
    ret = ''.join(str(e) for e in out if e)
except:
    return "ERROR: Caught unhandled exception"
I started logging the value returned by subprocess.Popen to see whether the communicate() call was blocking, or whether it was failing earlier, when the subprocess was created. It turns out that for some reason subprocess.Popen fails and the value of self.qmi_process is never logged, yet my exception handler is not called either. Any idea how that could happen?
subprocess.Popen does not return.
I have multiple threads calling Popen; I've read this can cause a deadlock in Python 2.7?
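If the multi-threaded Popen race is the suspect, one common workaround is to serialize process creation with a lock. A hedged sketch (run_cmd and the lock name are illustrative, not from the original code):

```python
import subprocess
import sys
import threading

_popen_lock = threading.Lock()  # serialize process creation across threads

def run_cmd(args):
    # Creating the process under a lock works around the fork-from-
    # multiple-threads races reported on older Python 2.x interpreters;
    # communicate() itself is safe to call concurrently afterwards.
    with _popen_lock:
        proc = subprocess.Popen(args, stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT,
                                universal_newlines=True)
    return proc.communicate()[0]

print(run_cmd([sys.executable, "-c", "print('ok')"]))
```
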
I'm trying to get a program named xselect to run using the Popen construct in Python. If I run xselect from the terminal manually, typing in the commands by hand, it runs all the way through. However, when run from the Python script, it freezes at a certain command and will not continue. When I check the log file, all of the output is captured, but none of the error messages are.
I'm thinking that Popen may not know what to do with the errors in xselect's output, and that this is causing xselect to freeze. To counter this, I tried to add a timeout so that xselect is killed after 5 seconds, but this hasn't worked either.
Can anyone help me get this running?
with subprocess.Popen(args, stdout=subprocess.PIPE,
                      stderr=subprocess.STDOUT, universal_newlines=True) as proc:
    proc.wait(timeout=5)
    out = proc.stdout.read()
    if TimeoutExpired:
        proc.kill()
See the warning for proc.wait() here: https://docs.python.org/2/library/subprocess.html#subprocess.Popen.wait. Basically, you should either be using proc.communicate(), or should be reading from proc.stdout instead of waiting.
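A minimal sketch of the read-instead-of-wait approach; the command here is a stand-in for the real xselect invocation:

```python
import subprocess
import sys

# Read stdout as the process runs instead of wait()-ing first: the loop
# drains the pipe, so the child never blocks on a full buffer.
args = [sys.executable, "-c", "print('line 1'); print('line 2')"]  # stand-in for xselect
lines = []
with subprocess.Popen(args, stdout=subprocess.PIPE,
                      stderr=subprocess.STDOUT,
                      universal_newlines=True) as proc:
    for line in proc.stdout:
        lines.append(line.rstrip())
print(lines)
```
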
Earlier today this code worked as I wanted it to. Now it doesn't. I want it to run airodump-ng and store the output to a CSV file (via its command-line option), then kill the process after a few seconds.
def find_networks():
    airodump = subprocess.Popen(["airodump-ng", "-w", "airo", "--output-format", "csv", "mon0"],
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    time.sleep(10)
    try:
        os.kill(airodump.pid, signal.SIGTERM)
        if not airodump.poll():
            print "shouldn't have had to do this.."
            airodump.kill()
        if not airodump.poll():
            print "shouldn't have had to do this....."
            airodump.terminate()
        if not airodump.poll():
            print "shouldn't have had to do this........"
    except OSError:
        print "?"
    except UnboundLocalError:
        print "??"
    return_code = airodump.wait()
    print return_code
(the output here is:
shouldn't have had to do this..
shouldn't have had to do this.....
-9)
Earlier it would do exactly what I said (same code). The negative 9 is worrisome; earlier I was getting 1, but the process still DOES die, which is all that's important, just not due to the os.kill statement, which is weird. That's not the big problem, though. All I need is that .csv file. With this implementation, the CSV file is completely empty: it is created but nothing is ever put in it. If I run the subprocess without stdout set to PIPE, the CSV file IS created and populated, but I can't do it that way; I have to keep stdout off the screen.
Is stdout=subprocess.PIPE causing the data to be "written" to a nonexistent csv file in PIPE-land???
It's difficult to reproduce what's happening on your machine with your example code. One thing that might be causing some problems though: You should only use stdout=subprocess.PIPE if you are actually intending to read from the pipe. If you don't, the process will block once it generates enough output to fill up the pipe buffer.
If all you want to do is to hide stdout and stderr, you can do this:
airodump = subprocess.Popen(..., stdout=open("/dev/null", "w"), stderr=open("/dev/null", "w"))
Or better yet:
import os
airodump = subprocess.Popen(..., stdout=open(os.devnull, "w"), stderr=open(os.devnull, "w"))
Or if you are using Python 3, you can use subprocess.DEVNULL.
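For illustration, a runnable Python 3 sketch of the DEVNULL variant (the child command is a stand-in for airodump-ng):

```python
import subprocess
import sys

# subprocess.DEVNULL (Python 3.3+) discards the child's output without
# creating a pipe, so nothing can back up and stall the child.
proc = subprocess.Popen([sys.executable, "-c", "print('discarded')"],
                        stdout=subprocess.DEVNULL,
                        stderr=subprocess.DEVNULL)
rc = proc.wait()
print(rc)
```
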
Since you send the output to PIPE, you should explicitly read it out if you want it; you can then write it to a local file. Something like:
airodump = subprocess.Popen(["airodump-ng","-w","airo","--output-format","csv","mon0"],stdout=subprocess.PIPE, stderr=subprocess.PIPE)
... # wait the command to complete
open("the_file_you_want_to_store_output", "w").write(airodump.stdout.read())
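A runnable sketch of the same idea on Python 3; the child command and output file name are placeholders:

```python
import subprocess
import sys

# Capture the child's stdout via communicate() (which also waits for the
# command to complete), then persist it to a file.
proc = subprocess.Popen([sys.executable, "-c", "print('captured')"],
                        stdout=subprocess.PIPE, universal_newlines=True)
out, _ = proc.communicate()
with open("captured_output.txt", "w") as f:  # placeholder file name
    f.write(out)
```
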
I am working on some scripts (at the company I work for) that are loaded/unloaded into hypervisors to fire a piece of code when an event occurs. The only way to actually unload a script is to hit Ctrl-C. I am writing a function in Python that automates the process.
As soon as it sees the string "done" in the output of the program, it should kill the vprobe.
I am using subprocess.Popen to execute the command:
lineList = buff.readlines()
cmd = "vprobe /vprobe/myhello.emt"
p = subprocess.Popen(args=cmd, shell=True, stdout=buff,
                     universal_newlines=True, preexec_fn=os.setsid)
while not re.search("done", lineList[-1]):
    print "waiting"
os.kill(p.pid, signal.CTRL_C_EVENT)
As you can see, I am writing the output to the buff file descriptor, opened in read+write mode. I check the last line; if it has 'done', I kill the process. Unfortunately, CTRL_C_EVENT is only valid on Windows.
What can I do for Linux?
I think you can just send the Linux equivalent, signal.SIGINT (the interrupt signal).
(Edit: I used to have something here discouraging the use of this strategy for controlling subprocesses, but on more careful reading it sounds like you've already decided you need control-C in this specific case... So, SIGINT should do it.)
In Linux, a Ctrl-C keyboard interrupt can be sent programmatically to a process using the Popen.send_signal(signal.SIGINT) method. For example:
import subprocess
import signal
..
process = subprocess.Popen(..)
..
process.send_signal(signal.SIGINT)
..
Don't use Popen.communicate() for blocking commands.
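A runnable sketch of sending a programmatic Ctrl-C on POSIX; the child program here is a stand-in for the vprobe script:

```python
import signal
import subprocess
import sys
import time

# The child installs no SIGINT handler, so send_signal(SIGINT) behaves
# like pressing Ctrl-C in its terminal.
child = "import time\nprint('running', flush=True)\ntime.sleep(60)"
proc = subprocess.Popen([sys.executable, "-c", child])
time.sleep(1)                       # crude: give the child time to start
proc.send_signal(signal.SIGINT)     # programmatic Ctrl-C (POSIX only)
rc = proc.wait()
print(rc)
```
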
Maybe I misunderstand something, but the way you do it, it is difficult to get the desired result.
Whatever buff is, you query it first, then use it in the context of Popen(), and then you hope that lineList magically fills itself up.
What you probably want is something like
logfile = open("mylogfile", "a")
p = subprocess.Popen(['vprobe', '/vprobe/myhello.emt'], stdout=subprocess.PIPE,
                     universal_newlines=True, preexec_fn=os.setsid)
for line in p.stdout:
    logfile.write(line)
    if re.search("done", line):
        break
    print "waiting"
os.kill(p.pid, signal.SIGINT)  # CTRL_C_EVENT is Windows-only; use SIGINT on Linux
This gives you a pipe end fed by your vprobe script which you can read out linewise and act appropriately upon the found output.
Is there a way, using Python 2.6 with either subprocess.Popen() or os.system(), to run two tasks? For example, the script first runs "airodump-ng" as a hidden subprocess (meaning it will not print to the terminal), and then continues with the rest of the script, which contains Scapy's "sniff" function. I have researched this, but only found answers for Windows and Python 3. By the way, I am running on Debian.
Use subprocess.Popen in combination with subprocess.PIPE:
p = Popen(['airodump-ng', …], stdin=PIPE, stdout=PIPE, stderr=PIPE)
If you want to wait until the process has finished use:
stdout, stderr = p.communicate()
If you omit the code above airodump-ng will run in the background and produce no visible output, while you can continue with your python code.
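A runnable sketch of this pattern (the child command stands in for airodump-ng, and the variable names are illustrative):

```python
import subprocess
import sys

# Start the long-running child with its output captured (nothing prints
# to the terminal), keep running our own code, then collect the output
# once we finally need it.
child = "import time\ntime.sleep(0.5)\nprint('background done')"
p = subprocess.Popen([sys.executable, "-c", child],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, universal_newlines=True)
foreground = "rest of the script runs here"  # e.g. Scapy's sniff()
stdout, stderr = p.communicate()             # block only at the end
print(stdout)
```
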
Another method would be to redirect the output of airodump-ng to os.devnull; this will completely get rid of any output produced:
devnull = os.open(os.devnull, os.O_WRONLY)
p = Popen(['airodump-ng', …], stdout=devnull, stderr=devnull)
In the spot where you put the airodump-ng command, replace that part with timeout Xs airodump-ng monX (where X is the number of seconds to run, and monX is your monitor interface).