I have the following piece of code. myprg has to keep running after my script exits, but before exiting I would like to wait for check_string to be emitted by myprg, then redirect the rest of its stderr messages to mylogfile, and then exit. Is this possible at all? I tried the following and the Python script does not exit. Where am I going wrong?
import subprocess
proc = subprocess.Popen(myprg, stderr=subprocess.PIPE)
for line in iter(proc.stderr.readline, ''):
    if check_string in line:
        break
fh = open(mylogfile, 'w')
proc.stderr = fh.fileno()
EDIT: I have temporarily worked around this problem by making myprg emit check_string to stdout instead. This is less than optimal, but it works for the moment.
Shouldn't you call proc.communicate() for the script to exit?
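One way to get this effect, sketched under the assumption of a POSIX system with cat available (not necessarily the mechanism the asker intended): once check_string has been seen, hand the read end of the pipe to a long-lived cat process whose stdout is the log file, then exit. cat keeps draining myprg's stderr into mylogfile after the Python script is gone.

import subprocess

proc = subprocess.Popen(myprg, stderr=subprocess.PIPE)
for line in iter(proc.stderr.readline, ''):
    if check_string in line:
        break

# Hand the remaining stderr traffic to `cat`, which copies it into the
# log file; the Python script can now exit while myprg keeps running.
fh = open(mylogfile, 'w')
subprocess.Popen(['cat'], stdin=proc.stderr, stdout=fh)
proc.stderr.close()  # the parent no longer needs its copy of the pipe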
I have to call a shell command:
subprocess.Popen(command, stderr=subprocess.STDOUT, stdout=subprocess.PIPE)
I did that, and the command prints lots of things like verbose is on, and then when it has done its job it prints (writes) blah blah : Ready
I have to call this command, wait for the 'Ready' text, leave it running in the background, and then let the rest of the code run.
I tried this and things like it, and they didn't work:
...
done = False
with subprocess.Popen(command, stderr=subprocess.STDOUT, stdout=subprocess.PIPE) as proc:
    while not done:
        x = proc.stdout.read()
        x = x.find('Ready')
        if x > -1:
            done = True
            print("YaaaY, this subprocess is done it's job and now running on background")
#rest of the code
I ran similar (edited) code in the Python terminal, and I think I can't even access (read) the output of the subprocess, because I was expecting it to show every line that this subprocess prints, but it's just waiting.
Your problem is proc.stdout.read(). This reads the entire output of your subprocess, which is not known until it has terminated (usually). Try something like:
output = b''
while not done:
    output += proc.stdout.read(1)
    x = output.find(b'Ready')
    if x > -1:
        done = True
This reads the process's stdout one character at a time, so it doesn't have to wait for it to finish.
Note that using Popen in a context manager (with block) will cause your program to wait for the child process to terminate before it exits the with block, so it will not leave it running past that point. It's unclear if that is desired behaviour or not.
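If the goal is to leave the child running, here is a minimal sketch (reusing command and the 'Ready' marker from the question) that avoids both the blocking read() and the implicit wait of the with block:

import subprocess

proc = subprocess.Popen(command, stderr=subprocess.STDOUT,
                        stdout=subprocess.PIPE)
# readline() returns as soon as a full line arrives, unlike read(),
# which blocks until the child closes its stdout.
for line in iter(proc.stdout.readline, b''):
    if b'Ready' in line:
        break
# No with block and no proc.wait(): the child keeps running in the
# background while the rest of the code executes.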
I am automating a GUI application that also has a CLI. I can successfully run the following (theApplication is an internally developed app):
import subprocess
process = subprocess.Popen(r'theApplication.exe -foo=7')
It then prints its output, which I wish to process, so I use subprocess.PIPE:
import subprocess
process = subprocess.Popen(r'theApplication.exe -foo=7', stdout=subprocess.PIPE)
Now the program does not start, and Windows reports that theApplication has stopped working. The same happens if stdout is redirected to DEVNULL or a file. I tried setting bufsize to 0 and 1 and the same happens.
I tried all permutations of stdout and stderr being redirected or not. It only works if neither of them is redirected; if either one is redirected, it instantly breaks the program. The value of process.returncode is None.
How can piping the output break the program?
Have you already tried it with a list parameter? Like:
import subprocess
process = subprocess.Popen(["theApplication.exe", "-foo=7"], stdout=subprocess.PIPE)
So the answer was to also redirect stdin=subprocess.PIPE, although theApplication does not read from it.
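A sketch of that fix, combined with the list form suggested above (-foo=7 comes from the question; the explanation in the comment, that the program chokes when one of its standard handles is invalid, is an assumption consistent with the symptom, not something the source confirms):

import subprocess

# Give the child a valid stdin handle even though it never reads from
# it; reportedly some Windows programs crash on startup when a
# standard handle is missing or invalid.
process = subprocess.Popen(
    ["theApplication.exe", "-foo=7"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
)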
I would like to run several commands in the same shell. After some research I found that I could keep a shell open using the process returned by Popen. I can then write to its stdin and read from its stdout. I tried implementing it as such:
from subprocess import Popen, PIPE

process = Popen(['/bin/sh'], stdin=PIPE, stdout=PIPE)
process.stdin.write('ls -al\n')
out = ' '
while not out == '':
    out = process.stdout.readline().rstrip('\n')
    print out
Not only is my solution ugly, it doesn't work. out is never empty because the loop hangs on readline(). How can I successfully end the while loop when there is nothing left to read?
Use iter to read data in real time:
for line in iter(process.stdout.readline, ""):
    print line
If you just want to write to stdin and get the output, you can use communicate to make the process end:
process = Popen(['/bin/sh'], stdin=PIPE, stdout=PIPE)
out, err = process.communicate('ls -al\n')
Or simply get the output using check_output:
from subprocess import check_output
out = check_output(["ls", "-al"])
The command you're running in a subprocess is sh, so the output you're reading is sh's output. Since you didn't tell the shell that it should quit, it is still alive, and thus its stdout is still open.
You can perhaps write exit to its stdin to make it quit, but be aware that in any case, you get to read things you don't need from its stdout, e.g. the prompt.
Bottom line, this approach is flawed to start with...
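A sketch of the "write exit" idea mentioned above (the commands are placeholders): sending every command plus exit through a single communicate() call makes the shell quit, so its stdout reaches EOF and reading can finish.

from subprocess import Popen, PIPE

process = Popen(['/bin/sh'], stdin=PIPE, stdout=PIPE)
# Ending with `exit` makes the shell terminate, so communicate()
# can collect all output and return.
out, err = process.communicate('ls -al\npwd\nexit\n')
print out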
I've been reading up on a lot of documentations but am still not sure what I'm doing wrong.
So I have a separate shell script that fires up a server separate from the one I'm working on. Once the server is connected, I want to run ls, and that's it. However, for some reason stdin=subprocess.PIPE is preventing the Popen command from terminating so that the next line can execute. For example, because the code is stuck, I'll press Ctrl+C, but then I get an error saying that wait() got a keyboard interrupt. Here's an example:
import subprocess
from time import sleep

p1 = subprocess.Popen("run_server",
                      stdout=subprocess.PIPE,
                      stdin=subprocess.PIPE)
#sleep(1)
p1.wait()
p1.communicate(input="ls")[0]
If I replace p1.wait() with sleep(1), the communicate call does run and displays ls, but the script that runs the server detects EOF on the tty and terminates itself. I must have some kind of wait between Popen and communicate, because otherwise the server script terminates for the same reason.
p.wait() does not return until the child process is dead. While the parent script is stuck on the p.wait() call, your child process expects input at the same time: deadlock. Then you press Ctrl+C in the shell; it sends the SIGINT signal to all processes in the foreground process group, which kills both your parent Python script and the run_server subprocess.
You should drop the .wait() call:
#!/usr/bin/env python
from subprocess import Popen, PIPE
p = Popen(["run_server"], stdout=PIPE, stdin=PIPE)
output = p.communicate(b"ls")[0]
Or in Python 3.4+:
#!/usr/bin/env python3
from subprocess import check_output
output = check_output(["run_server"], input=b"ls")
If you want to run several commands then pass them all at once:
input = "\n".join(["ls", "cmd2", "etc"]) # with universal_newlines=True
As you know from reading the subprocess docs, p.communicate() waits for the child process to exit, and therefore it should be called at most once. As with .wait(), the child process is dead after .communicate() has returned.
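Putting those pieces together, a sketch of sending several commands at once through check_output on Python 3.4+ (cmd2 and etc are placeholders carried over from the snippet above):

#!/usr/bin/env python3
from subprocess import check_output

commands = "\n".join(["ls", "cmd2", "etc"])
# universal_newlines=True lets us pass and receive str instead of bytes
output = check_output(["run_server"], input=commands,
                      universal_newlines=True)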
The fact that your traceback says you were stuck in wait() when you pressed Ctrl+C means that is the line that was executing. wait() won't return until your p1 process returns; however, it seems your p1 process won't return until you send it a command, 'ls' in your case. Try sending the command first and then calling wait():
import subprocess
from time import sleep

p1 = subprocess.Popen("run_server",
                      stdout=subprocess.PIPE,
                      stdin=subprocess.PIPE)
#sleep(1)
p1.communicate(input="ls")[0]
p1.wait()
Otherwise, make sure your "run_server" script terminates so your script can advance past p1.wait().
I am writing a program to communicate with two programs:
import select
import shlex
from subprocess import Popen, PIPE

output = Popen(shlex.split(query_cmd), stdout=PIPE, stdin=None)
cmd_out = [output.stdout]
while cmd_out:
    readable, writeready, exceptready = select.select(cmd_out, [], [], timeout)
    for f in readable:
        line = f.readline()
        snap_result = Popen(shlex.split("snap %s" % (line,)),
                            stdout=PIPE, close_fds=True).communicate()[0]
        print snap_result
Supposedly query_cmd will continuously generate lines of results. snap should then use each line as an argument, return results, and terminate. This works on Python 2.4; however, on Python 2.6.6, it seems that snap hangs on reading the result.
If I change query_cmd to "tail -f file", it seems to work too.
I am running this inside a csh script where both stdout/stderr are redirected to a log file.
EDIT: Actually, it is weird: in csh I redirected both stdout and stderr to the log file. If I only redirect stdout, it runs fine; if I redirect stderr, it hangs. I think the stderr is somehow messed up between the parent Python process and the child process.
It seems not to be a problem with the script; rather, the subprocess is expecting stdin input. Redirecting its stdin to the null device solves this.
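A sketch of that fix (query_cmd is carried over from the snippet above; os.devnull works on both Python 2 and 3):

import os
import shlex
from subprocess import Popen, PIPE

# Point the child's stdin at the null device so it sees EOF right away
# instead of waiting forever for input it will never receive.
devnull = open(os.devnull, 'r')
output = Popen(shlex.split(query_cmd), stdout=PIPE, stdin=devnull)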