Child process is failing using Popen - Python

I'm trying to get a program named xselect to run using the Popen construct in Python. If I run xselect from the terminal manually, typing in the commands by hand, it runs all the way through. However, when run from the Python script, it freezes at a certain command and will not continue. When I check the log file, all of the output is captured, but none of the error messages are.
I'm thinking that Popen may not know what to do with the errors in xselect's output, and that this is causing xselect to freeze. To counter this, I tried to add a timeout so that xselect is killed after 5 seconds, but this hasn't worked either.
Can anyone help me get this running?
with subprocess.Popen(args, stdout=subprocess.PIPE,
                      stderr=subprocess.STDOUT,
                      universal_newlines=True) as proc:
    proc.wait(timeout=5)
    out = proc.stdout.read()
    if TimeoutExpired:
        proc.kill()

See the warning for proc.wait() here: https://docs.python.org/2/library/subprocess.html#subprocess.Popen.wait. Basically, you should either be using proc.communicate(), or be reading from proc.stdout yourself, instead of waiting.
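A minimal sketch of the communicate() approach (assuming Python 3.3+ for the timeout argument, and that args holds the xselect command line as in the question):

import subprocess

with subprocess.Popen(args, stdout=subprocess.PIPE,
                      stderr=subprocess.STDOUT,
                      universal_newlines=True) as proc:
    try:
        # communicate() keeps reading stdout while it waits,
        # so the pipe cannot fill up and hang the child
        out, _ = proc.communicate(timeout=5)
    except subprocess.TimeoutExpired:
        proc.kill()
        out, _ = proc.communicate()  # collect whatever output was produced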

Related

subprocess.Popen is failing to retry

What specific syntax must be changed below in order to get the call to subprocess.Popen to retry if no response is received in n seconds?
def runShellCommand(cmd):
    process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
    process.wait()
The problem we are having is that the command is succeeding but the command is not receiving a response. This means that the runShellCommand(cmd) function is just hanging forever.
If the process.wait() lasted only n seconds and then retried running the same cmd, then repeated the call/wait cycle 3 or 4 times, then the function could either receive a response from one of the subsequent tries and return successful, or could fail gracefully within a specified maximum period of time.
Your process is probably deadlocking due to the STDOUT buffer filling up.
Setting stdout=subprocess.PIPE redirects the child's STDOUT into a pipe that your program sees as the file object process.stdout, instead of to the terminal. However, process.wait() never reads from process.stdout. Therefore, once the pipe's buffer fills up (often after as little as 64 KB of output), the process deadlocks: the child is waiting for its STDOUT to be read, but it never will be, because process.wait() is waiting for the child to finish, which it can't do while it's blocked trying to print to STDOUT. That's the deadlock.
To solve this and read the output, use something like:
def runShellCommand(cmd):
    return subprocess.run(cmd, shell=True, stdout=subprocess.PIPE, text=True).stdout
Note that text=True requires Python 3.7 or later. Before that, use universal_newlines=True for the same effect, or leave that argument out to get the results as bytes instead.
Security note:
Please consider removing shell=True. It's horribly unsafe (subject to the variable expansion whims of your shell, which could be anything from a simple POSIX sh or bash to something more unusual like tcsh or zsh, or even a totally unexpected custom shell compiled by the user or their sysadmin).
E.g. instead of
subprocess.run('echo "Hello, World!"', shell=True)
You can use this more safely:
subprocess.run(['echo', 'Hello, World!'])
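To get the retry-on-timeout behaviour the question actually asks for, one possible sketch (the 3-attempt limit and 10-second timeout are illustrative assumptions, not part of the original answer; text=True requires Python 3.7+):

import subprocess

def runShellCommand(cmd, timeout=10, retries=3):
    # Run cmd, retrying if no result arrives within `timeout` seconds.
    for attempt in range(retries):
        try:
            # subprocess.run kills the child itself when the timeout expires
            result = subprocess.run(cmd, shell=True,
                                    stdout=subprocess.PIPE, text=True,
                                    timeout=timeout)
            return result.stdout
        except subprocess.TimeoutExpired:
            continue  # no response in time; try again
    return None  # fail gracefully after all retries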

Have a .exe run in the background and type things into it through Python

I have a program, myshell.exe, that I need to interact with through Python (send commands to it and read results back).
The catch is that I can only run myshell.exe once (I cannot enclose Popen and communicate in a loop).
I have tried Popen and Popen.communicate(), but that seems to run myshell.exe, send my commands, and then exit the process.
# setting up the command
p = Popen("myshell.exe", stdout=PIPE, stdin=PIPE, stderr=PIPE, shell=True)
# sending something (and getting output)
print p.communicate("run");
At this point, from the print output I can see that myshell.exe has exited (I have a goodbye message that is printed).
Any ideas if there is any way around it ?
Thanks.
As you can read in the Popen.communicate docs, it will wait until myshell.exe exits before returning.
Use p.stdout and p.stdin to communicate with the process instead:
p.stdin.write("run")
print p.stdout.read(1024)
p.stdin and p.stdout are regular file objects. You can read from and write to them in a loop; just leave the p = Popen(...) part outside:
p = Popen("myshell.exe", stdout=PIPE, stdin=PIPE, stderr=PIPE, shell=True)
for i in range(3):
p.stdin.write("run")
print p.stdout.read(16)
p.terminate()
This assumes that myshell.exe behaves as you expect (e.g. does not exit after the first command is sent).
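A slightly more defensive version of that loop; note that the trailing newline, the flush() call, and readline() are assumptions about how myshell.exe consumes commands, not something the answer verified:

from subprocess import Popen, PIPE

p = Popen("myshell.exe", stdout=PIPE, stdin=PIPE, stderr=PIPE)
for i in range(3):
    p.stdin.write("run\n")     # many shells expect newline-terminated commands
    p.stdin.flush()            # push the data through the pipe immediately
    print p.stdout.readline()  # read one whole line back instead of a fixed size
p.terminate()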

Difference between Popen.poll() and Popen.wait()

I'm using the following command to run a shell command (creating a subprocess):
cmd = "ls"
process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True)
Then, I want to get its return code when it's finished. Should I use wait() or poll()? It seems to me that wait() is equal to a poll() enclosed in a busy wait, something like:
while process.poll() is None:
    time.sleep(0.5)
I read that wait() can deadlock if the stdout/stderr buffer is filled. Could process.poll(), used as above, also deadlock? If so, should I use process.communicate() to solve the problem? Nowadays, I only use process.communicate() when I'm interested in the subprocess's stdout/stderr.
Thanks in advance.
Yes. Popen.poll, when used in a loop like that, will cause a deadlock if stdout is piped into your process and you aren't reading from it. If you don't pipe stdout, or you read from it consistently, neither poll nor wait will deadlock. Popen.communicate will solve the deadlock in the cases where it would occur. However, if you just want to run a command, check its return code, and get its output, use subprocess.check_output, which wraps all of that.
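For that last case, a minimal sketch using the question's "ls" command:

import subprocess

try:
    # check_output waits for the command, returns its stdout,
    # and raises if the exit code is non-zero
    output = subprocess.check_output("ls", shell=True,
                                     stderr=subprocess.STDOUT,
                                     universal_newlines=True)
    returncode = 0
except subprocess.CalledProcessError as e:
    output, returncode = e.output, e.returncode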

Python: Hide sub-process print out on the terminal and continue the script while sub is running

Is there a way, using Python 2.6 with either subprocess.Popen() or os.system(), to run two tasks? For example, the script runs "airodump-ng" first; that process is a child and is hidden (meaning it will not print to the terminal), after which the script continues with the rest, which contains the "sniff" function of Scapy. I have researched this but only found Windows and Python 3 versions. By the way, I am running on Debian.
Use subprocess.Popen in combination with subprocess.PIPE:
p = Popen(['airodump-ng', …], stdin=PIPE, stdout=PIPE, stderr=PIPE)
If you want to wait until the process has finished, use:
stdout, stderr = p.communicate()
If you omit the code above airodump-ng will run in the background and produce no visible output, while you can continue with your python code.
Another method is to redirect the output of airodump-ng to os.devnull; this will completely get rid of any output produced:
devnull = os.open(os.devnull, os.O_WRONLY)
p = Popen(['airodump-ng', …], stdout=devnull, stderr=devnull)
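On Python 3.3+ there is also subprocess.DEVNULL, which saves the manual os.open() step (a sketch; the extra airodump-ng arguments stay elided as in the answer above):

from subprocess import Popen, DEVNULL

# discard all output; the process keeps running in the background
p = Popen(['airodump-ng'], stdout=DEVNULL, stderr=DEVNULL)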
A different approach: in the spot where you put the airodump-ng command, replace that part with timeout Xs airodump-ng monX (using the coreutils timeout command; X is the number of seconds to run and monX is the monitor interface).

Better multithreaded use of Python subprocess.Popen & communicate()?

I'm running multiple commands which may take some time, in parallel, on a Linux machine running Python 2.6.
So, I used the subprocess.Popen class and the process.communicate() method to parallelize execution of multiple command groups and capture the output at once after execution.
def run_commands(commands, print_lock):
    # this part runs in parallel.
    outputs = []
    for command in commands:
        proc = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT, close_fds=True)
        output, unused_err = proc.communicate()  # buffers the output
        retcode = proc.poll()  # ensures subprocess termination
        outputs.append(output)
    with print_lock:  # print them at once (synchronized)
        for output in outputs:
            for line in output.splitlines():
                print(line)
Elsewhere it's called like this:
processes = []
print_lock = Lock()
for ...:
    commands = ...  # a group of commands is generated, which takes some time.
    processes.append(Thread(target=run_commands, args=(commands, print_lock)))
    processes[-1].start()
for p in processes:
    p.join()
print('done.')
The expected result is that the output of each group of commands is displayed all at once, while the groups themselves execute in parallel.
But starting with the second output group (of course, which thread comes second varies due to scheduling indeterminism), it begins to print without newlines, adding as many spaces as the number of characters printed on each previous line, and input echo is turned off; the terminal state is "garbled" or "crashed". (If I issue the reset shell command, it restores to normal.)
At first, I tried to find the cause in the handling of '\r', but that was not the reason. As you can see in my code, I handled it properly using splitlines(), and I confirmed that with the repr() function applied to the output.
I think the reason is concurrent use of the pipes in Popen and communicate() for stdout/stderr. I tried the check_output shortcut method in Python 2.7, but with no success. Of course, the problem described above does not occur if I serialize all command executions and prints.
Is there any better way to handle Popen and communicate() in parallel?
A final result, inspired by the comment from J.F. Sebastian:
http://bitbucket.org/daybreaker/kaist-cs443/src/247f9ecf3cee/tools/manage.py
It seems to be a Python bug.
I am not sure it is clear what run_commands actually needs to be doing, but it seems to simply poll a subprocess, ignore the return code, and continue the loop. When you get to the part where you print the output, how can you know that the subprocesses have completed?
In your example code I noticed your use of:
for line in output.splitlines():
to partially address the issue of '\r'; use of
for line in output.splitlines(True):
would have been helpful.
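For reference, the difference between the two (a quick illustrative snippet, not from the original answer):

s = "one\r\ntwo\n"
print(s.splitlines())      # ['one', 'two']          line endings stripped
print(s.splitlines(True))  # ['one\r\n', 'two\n']    line endings kept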
