After searching around, I defined a function to execute a command as if in a terminal:
import shlex
import subprocess
import sys

def execute_cmd(cmd):
    p = subprocess.Popen(shlex.split(cmd), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    for line in iter(p.stdout.readline, b''):  # b'' here for python3
        sys.stdout.write(line.decode(sys.stdout.encoding))
    error = p.stderr.read().decode()
    if error:
        raise Exception(error)
It works fine (the output appears in real time) when I run
execute_cmd('ping -c 5 www.google.com')
However, when I use execute_cmd to run a Python script, nothing is printed until the process is done.
execute_cmd('python test.py')
script: test.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import time
print('hello')
time.sleep(2)
print('hello, again')
How can I fix it? Thanks!
Sorry for not explaining why I 'catch the stdout and then write it to stdout again'. What I really want to do is capture the script's output with a logger, which then writes it both to the screen (StreamHandler) and to a log file (FileHandler). I have built and tested the logger part; now I'm working on the 'execute' part. And simply omitting the stdout= parameter does not seem to work.
pipeline:
setup logger;
redirect STDOUT, STDERR to logger;
execute scripts;
Because of step 2, if I omit the stdout= parameter, the scripts' output will still go to STDOUT and will not be logged to the file.
Maybe I can set stdout= to the logger?
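(For what it's worth, stdout= needs a real file object or file descriptor, so a logger can't be passed there directly. A rough sketch of the alternative, reading the pipe line by line and forwarding each line to the logger, is below; the logger setup and the name forward_to_logger are illustrative, not part of the original question, and the child-side buffering issue discussed in the answer below still applies.)
import logging
import shlex
import subprocess

# Illustrative logger with both a screen handler and a file handler.
logger = logging.getLogger('runner')
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())
logger.addHandler(logging.FileHandler('run.log'))

def forward_to_logger(cmd):
    # Merge stderr into stdout so a single loop forwards everything to the logger.
    p = subprocess.Popen(shlex.split(cmd), stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT)
    for line in iter(p.stdout.readline, b''):
        logger.info(line.decode().rstrip('\n'))
    p.wait()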
This is a common problem of the underlying output system, notably on Linux or other Unix-like systems. The io library is smart enough to flush output on each \n when it detects that output is directed to a terminal. But this automatic flush does not occur when output is redirected to a file or a pipe, probably for performance reasons. It is not really a problem when only the data matters, but it leads to weird behaviour when timing matters too.
Unfortunately I know of no simple way to fix it from the caller program (*). The only possibility is to have the callee force a flush on each line/block, or to use unbuffered output:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import time
import sys
print('hello')
sys.stdout.flush()
time.sleep(2)
print('hello, again')
(*) The bullet-proof way would be to use a pseudo-terminal. The caller controls the master side and passes the client side to the callee. The library will detect a terminal and will automatically flush on each line. But it is no longer portable outside the Unix world, and it is not really a simple way.
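For completeness, a rough sketch of that pseudo-terminal approach (Unix only; this uses the standard pty module and is an illustration, not the answer's code):
import os
import pty
import subprocess
import sys

def execute_cmd_pty(cmd_list):
    # The child gets the slave end of a pseudo-terminal as stdout/stderr,
    # so its io layer detects a terminal and line-buffers automatically.
    master, slave = pty.openpty()
    p = subprocess.Popen(cmd_list, stdout=slave, stderr=slave, close_fds=True)
    os.close(slave)  # only the child keeps the slave end open
    try:
        while True:
            data = os.read(master, 1024)
            if not data:
                break
            sys.stdout.write(data.decode(errors='replace'))
            sys.stdout.flush()
    except OSError:
        pass  # on Linux, reading the master raises EIO once the child exits
    os.close(master)
    p.wait()

# e.g. execute_cmd_pty(['python', 'test.py'])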
Related
I have been trying to write a function which would execute a command passed to it through a parameter, using Popen along with a context manager. Unfortunately, I am unable to get it to work. Can someone please help?
import os
import sys
import subprocess
import inspect
def run_process(cmd_args):
    with subprocess.Popen(cmd_args, stdout=subprocess.PIPE) as proc:
        log.write(proc.stdout.read())
run_process("print('Hello')")
The output expected is "Hello". Can someone please point out where I am going wrong?
What you have done is right if you are running a shell command through subprocess.
Inside the context manager ("with ..."), what you are doing is reading the process's output, which comes back as bytes, and then trying to print those bytes as text after decoding them.
Try returning the value from the context manager and then decoding it in the calling function:
import os
import sys
import subprocess
import inspect
def run_process(cmd_args):  # Added shell=True to the parameters below.
    with subprocess.Popen(cmd_args, stdout=subprocess.PIPE, shell=True) as proc:
        # Optionally you can pass the encoding='utf-8' argument to Popen instead
        # and just print(proc.stdout.read()) without decoding.
        return proc.stdout.read()  # returns the output as bytes

print(run_process("echo Hello").decode('utf-8'))
I was having a similar issue while piping a process's output to another program; I did the decoding in the other program and, surprisingly, it worked. Hope it works for you as well.
def run_process(cmd_args):
    with subprocess.Popen(cmd_args, stdout=subprocess.PIPE) as p:
        output = p.stdout.read()
    return output
This worked for the same problem.
Popen runs the command it receives as you would run something in your terminal (for example, CMD on Windows or bash on Linux). So it does not execute Python code, but shell code (bash on Linux, for example). The Python binary has a flag, -c, that does what you need: it executes a Python command right away. So you have two options:
either use echo Hello (works on both Windows and Linux, since echo exists both in batch and in bash),
or use python -c "print('Hello')" instead of just the print command (see the sketch below).
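A minimal sketch of that second option, using sys.executable so the same interpreter is reused (an illustration, not the answer's original code):
import subprocess
import sys

# Run a Python one-liner in a child process, without a shell, and capture its stdout.
out = subprocess.check_output([sys.executable, "-c", "print('Hello')"])
print(out.decode())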
Without making too many changes to your existing script, I have edited your script with the below comments indicating what I did to get it to work. I hope this helps.
import os
import sys
import subprocess
import inspect
def run_process(cmd_args):  # Added shell=True to the parameters below.
    with subprocess.Popen(cmd_args, stdout=subprocess.PIPE, shell=True) as proc:
        output = proc.stdout.read()    # Reads the output from the process as bytes.
        print(output.decode('utf-8'))  # Decodes the bytes as UTF-8 text for readability.
        # Optionally you can pass the encoding='utf-8' argument to Popen
        # instead and just print(proc.stdout.read()).

run_process("echo Hello")  # To display the message, use 'echo' in your command string like this.
Note: Read the Security Considerations section before using shell=True.
https://docs.python.org/3/library/subprocess.html#security-considerations
Usually I run my Python code on Linux like this:
nohup python test.py > nohup.txt 2>&1 &
In test.py, I often use print to write messages to stdout. But I have to wait a very long time before the messages show up in nohup.txt. How can I make them appear quickly?
You could call flush on stdout, if it is possible and practical for you to adjust your code to flush the buffer after the print call. In test.py:
from sys import stdout
from time import sleep
def log():
    while True:
        print('Test log message')
        # flush your buffer
        stdout.flush()
        sleep(1)
While running this logging test, you can check the nohup.txt file and see the messages being printed out in real time.
Just make sure you have coreutils installed:
stdbuf -oL nohup python test.py > nohup.txt 2>&1 &
This sets line buffering on stdout for this command, so you should see immediate output.
(You might need nohup before stdbuf ... I'm not sure exactly.)
Alternatively, just put this at the top of test.py:
import os
import sys
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
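Note that opening a text stream with buffering set to 0 only works on Python 2; on Python 3, a rough equivalent (assuming Python 3.7+, where reconfigure() is available) would be:
import sys
# Flush stdout after every line instead of waiting for the block buffer to fill.
sys.stdout.reconfigure(line_buffering=True)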
I have a problem forwarding the stdout of a subprocess to stdout of the current process.
This is my MWE caller code (runner.py):
import sys
import subprocess
import time
p = subprocess.Popen([sys.executable, "test.py"], stdout=sys.stdout)
time.sleep(10)
p.terminate()
and here is the content of the callee test.py:
import time
while True:
    time.sleep(1)
    print "Heartbeat"
The following will work and print all the heartbeats to the console:
python runner.py
However, the following does not work, the output text file remains empty (using Python 2.7):
python runner.py > test.txt
What do I have to do?
When the standard output is a TTY (a terminal), sys.stdout is line-buffered by default: each line you print is immediately written to the TTY.
But when the standard output is a file, then sys.stdout is block-buffered: data is written to the file only when a certain amount of data gets printed. By using p.terminate(), you are killing the process before the buffer is flushed.
Use sys.stdout.flush() after your print and you'll be fine:
import sys
import time
while True:
    time.sleep(1)
    print "Heartbeat"
    sys.stdout.flush()
If you were using Python 3, you could also use the flush argument of the print function, like this:
import time
while True:
    time.sleep(1)
    print("Heartbeat", flush=True)
Alternatively, you can also set up a handler for SIGTERM to ensure that the buffer is flushed when p.terminate() gets called:
import signal
import sys
# The handler is called with (signum, frame), so wrap the flush in a lambda.
signal.signal(signal.SIGTERM, lambda signum, frame: sys.stdout.flush())
It is possible to force flushes by doing sys.stdout.flush() after each print, but this would quickly become cumbersome. Since you know you're running Python, it is possible to force Python to unbuffered mode - either with -u switch or PYTHONUNBUFFERED environment variable:
p = subprocess.Popen([sys.executable, '-u', 'test.py'], stdout=sys.stdout)
or
import os
# force all future python processes to be unbuffered
os.environ['PYTHONUNBUFFERED'] = '1'
p = subprocess.Popen([sys.executable, 'test.py'])
You don't need to pass stdout=sys.stdout unless sys.stdout uses a different file descriptor than the one the python executable started with. The C stdout file descriptor is inherited by default: you don't need to do anything for the child process to inherit it.
As #Andrea Corbellini said, if the output is redirected to a file then python uses a block-buffering mode and "Heartbeat"*10 (usually) is too small to overflow the buffer.
I would expect python to flush its internal stdout buffers on exit, but it doesn't do so on the SIGTERM signal (generated by the .terminate() call).
To allow the child process to exit gracefully, use SIGINT (Ctrl+C) instead of p.terminate():
p.send_signal(signal.SIGINT)
In that case test.py will flush the buffers and you'll see the output in test.txt file. Either discard stderr or catch KeyboardInterrupt exception in the child.
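A sketch of the child-side variant (illustrative, not the answer's code): catch KeyboardInterrupt so the SIGINT does not dump a traceback to stderr, and let the buffers flush on the normal exit path.
import time

try:
    while True:
        time.sleep(1)
        print("Heartbeat")
except KeyboardInterrupt:
    pass  # exit cleanly; stdout is flushed on interpreter shutdown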
If you want to see the output while the child process is still running, then run python -u to disable buffering, or set the PYTHONUNBUFFERED envvar to change the behavior of all affected python processes, as @Antti Haapala suggested.
Note: your parent process may also buffer the output. If you don't flush the buffer in time then the output printed before test.py is even started may appear after its output in the test.txt file. The buffers in the parent and the child processes are independent. Either flush buffers manually or make sure that an appropriate buffering mode is used in each process. See Disable output buffering
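As a small illustration of the parent-side point (a sketch, not code from the answer): flushing the runner's own buffer before launching the child keeps the parent's earlier output ahead of the child's in test.txt.
import subprocess
import sys
import time

print("runner: starting test.py")
sys.stdout.flush()  # flush the parent's buffer before the child starts writing
p = subprocess.Popen([sys.executable, "-u", "test.py"])  # -u: unbuffered child, as above
time.sleep(10)
p.terminate()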
I'm using Python 2.7.6 and IDLE on Windows 7.
I have 2 Python scripts:
script.py:
import subprocess, os, sys
print("hello 1")
mypath = os.path.abspath(__file__)
mydir = os.path.dirname(mypath)
start = os.path.join(mydir, "script2.py")
subprocess.call([sys.executable, start, "param"])
print("bye 1")
and script2.py that is being called by the previous script:
import sys
print "hello 2"
print (sys.argv[1])
print "bye 2"
If I run script.py with cmd.exe shell I get the expected result:
C:\tests>python ./script.py
hello 1
hello 2
param
bye 2
bye 1
But if I open script.py with the IDLE editor and run it with F5 I get this result:
>>> ================================ RESTART ================================
>>>
hello 1
bye 1
>>>
Why is the sub script not writing to the IDLE Python shell?
You're running the subprocess without providing any stdout or stderr.
When run in a terminal, the subprocess will inherit your stdout and stderr, so anything it prints will show up intermingled with your output.
When run in IDLE, the subprocess will also inherit your stdout and stderr, but those don't go anywhere. IDLE intercepts the Python-level wrappers sys.stdout and sys.stderr,* so anything you print to them from within Python will end up in the GUI window, but anything that goes to real stdout or stderr—like the output of any subprocess you run that inherits your streams—just goes nowhere.**
The simplest fix is to capture the stdout and stderr from the subprocess and print them yourself. For example:
out = subprocess.check_output([sys.executable, start, "param"],
                              stderr=subprocess.STDOUT)
print out
* IDLE is more complicated than it looks. It's actually running separate processes for the GUI window and for running your code, communicating over a socket. The sys.stdout (and likewise for the others) that IDLE provides for your script isn't a file object, it's a custom file-like object that redirects each write to the GUI process via remote procedure call over the socket.
** Actually, if you launched IDLE from a terminal rather than by double-clicking its icon, the subprocess's output may end up there. I'm not sure how it works on Windows. But at any rate, that isn't helpful to you.
I verified that abamert's change works in 2.7, on Win7, with Idle started normally from the icon. The slight glitch is that 'print out' inserts an extra blank line. This is easily fixed by making print a function with a future import and using the end parameter.
from __future__ import print_function
...
print(out, end='')
With Python 3, there is an additional issue that 'out' is bytes instead of str, so that it prints as
b'hello 2\r\nparam\r\nbye 2\r\n'
Since your output is all ASCII, this can be fixed by changing the print call to
print(out.decode(), end='')
The resulting program works identically in 2.7 and 3.x.
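Putting those pieces together, a version of script.py along these lines should show the child's output in IDLE under both 2.7 and 3.x (assuming script2.py is also updated to use print() as a function); this is a sketch combining the fixes above, not verbatim code from the answers:
from __future__ import print_function  # harmless on Python 3
import os
import subprocess
import sys

print("hello 1")
mypath = os.path.abspath(__file__)
mydir = os.path.dirname(mypath)
start = os.path.join(mydir, "script2.py")
# Capture the child's stdout and stderr and print them ourselves, since IDLE
# does not display output written to the real file descriptors.
out = subprocess.check_output([sys.executable, start, "param"],
                              stderr=subprocess.STDOUT)
print(out.decode(), end='')
print("bye 1")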
I'm writing an alternative terminal window (using PySide), and I'm running the shell (bash) using:
subprocess.Popen(['/bin/bash','-i'],....
while setting the various stdio to subprocess.PIPE
I'm also making the output streams (out, err) non-blocking using
fcntl(s.fileno(),F_SETFL,os.O_NONBLOCK)
Then I'm using a timer to poll the output io for available data and pull it.
It works fairly well, but I'm getting some strange behavior some of the time. If at a prompt I issue a command (e.g. pwd), I get two distinct possible outputs:
/etc:$ pwd
/etc
/etc:$
And the other is
/etc:$ pwd/etc
/etc:$
It's as if the newline from the command and the rest of the output get swapped. This happens for basically any command; with ls, for example, the first file appears right after the ls, and an empty line appears after the last file.
What bugs me is that it is not consistent.
EDIT: Added full code sample
#!/usr/bin/python
from PySide import QtCore
from PySide import QtGui
import fcntl
import os
import subprocess
import sys

class MyTerminal(QtGui.QDialog):
    def __init__(self, parent=None):
        super(MyTerminal, self).__init__(parent)
        startPath = os.path.expanduser('~')
        self.process = subprocess.Popen(['/bin/bash', '-i'], cwd=startPath, stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.PIPE)
        fcntl.fcntl(self.process.stdout.fileno(), fcntl.F_SETFL, os.O_NONBLOCK)
        fcntl.fcntl(self.process.stderr.fileno(), fcntl.F_SETFL, os.O_NONBLOCK)
        self.timer = QtCore.QTimer(self)
        self.connect(self.timer, QtCore.SIGNAL("timeout()"), self.onTimer)
        self.started = False

    def keyPressEvent(self, event):
        text = event.text()
        if len(text) > 0:
            if not self.started:
                self.timer.start(10)
                self.started = True
            self.sendKeys(text)
            event.accept()

    def sendKeys(self, text):
        self.process.stdin.write(text)

    def output(self, text):
        sys.stdout.write(text)
        sys.stdout.flush()

    def readOutput(self, io):
        try:
            text = io.read()
            if len(text) > 0:
                self.output(text)
        except IOError:
            pass

    def onTimer(self):
        self.readOutput(self.process.stdout)
        self.readOutput(self.process.stderr)

def main():
    app = QtGui.QApplication(sys.argv)
    t = MyTerminal()
    t.show()
    app.exec_()

if __name__ == '__main__':
    main()
After trying to create a small code example to paste (added above), I noticed that the problem arises because of a synchronization issue between stdout and stderr.
A little bit of searching led me to the following question:
Merging a Python script's subprocess' stdout and stderr while keeping them distinguishable
I tried the first answer there and used the polling method, but this didn't solve things, as the output was getting mixed up in the same manner as before.
What solved the problem was the answer by mossman, which basically redirects stderr to stdout; in my case that is good enough.
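In terms of the code above, that fix amounts to merging the two streams at the Popen call, so there is only one pipe left to poll (a sketch of just the changed lines, not the full program):
# In MyTerminal.__init__: send bash's stderr into the same pipe as its stdout.
self.process = subprocess.Popen(['/bin/bash', '-i'], cwd=startPath,
                                stdout=subprocess.PIPE, stdin=subprocess.PIPE,
                                stderr=subprocess.STDOUT)
fcntl.fcntl(self.process.stdout.fileno(), fcntl.F_SETFL, os.O_NONBLOCK)

# In MyTerminal.onTimer: only one stream is left to poll.
def onTimer(self):
    self.readOutput(self.process.stdout)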