I have a Python (2.7) app which is started in my dockerfile:
CMD ["python","main.py"]
main.py prints some strings when it is started and goes into a loop afterwards:
print "App started"
while True:
time.sleep(1)
As long as I start the container with the -it flag, everything works as expected:
$ docker run --name=myapp -it myappimage
> App started
And I can see the same output via logs later:
$ docker logs myapp
> App started
If I try to run the same container with the -d flag, the container seems to start normally, but I can't see any output:
$ docker run --name=myapp -d myappimage
> b82db1120fee5f92c80000f30f6bdc84e068bafa32738ab7adb47e641b19b4d1
$ docker logs myapp
$ (empty)
But the container still seems to be running:
$ docker ps
CONTAINER   STATUS         ...
myapp       Up 4 minutes   ...
Attach does not display anything either:
$ docker attach --sig-proxy=false myapp
(working, no output)
Any ideas what's going wrong? Does print behave differently when run in the background?
Docker version:
Client version: 1.5.0
Client API version: 1.17
Go version (client): go1.4.2
Git commit (client): a8a31ef
OS/Arch (client): linux/arm
Server version: 1.5.0
Server API version: 1.17
Go version (server): go1.4.2
Git commit (server): a8a31ef
Finally I found a solution to see Python output when running daemonized in Docker, thanks to @ahmetalpbalkan over at GitHub. Answering it here myself for future reference:
Using unbuffered output with
CMD ["python","-u","main.py"]
instead of
CMD ["python","main.py"]
solves the problem; you can now see the output (both stderr and stdout) via
docker logs myapp
Why -u works (ref):
- print is indeed buffered, and docker logs will eventually give you that output, but only after enough of it has piled up
- executing the same script with python -u gives instant output, as said above
- import logging + logging.warning("text") gives the expected result even without -u
What -u means, per the reference:
> python --help | grep -- -u
-u : force the stdout and stderr streams to be unbuffered;
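A minimal way to observe the difference yourself (the file name demo.py is made up for illustration): run it once plainly and once with -u, piping through cat so stdout is not a TTY:

import sys
import time

print("buffered: held in the stdio buffer while stdout is piped")
sys.stderr.write("unbuffered: stderr always shows up immediately\n")
time.sleep(30)  # compare `python demo.py | cat` vs `python -u demo.py | cat`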
In my case, running Python with -u didn't change anything. What did the trick, however, was to set PYTHONUNBUFFERED=1 as environment variable:
docker run --name=myapp -e PYTHONUNBUFFERED=1 -d myappimage
[Edit]: Updated PYTHONUNBUFFERED=0 to PYTHONUNBUFFERED=1 after Lars's comment. This doesn't change the behavior, but makes the intent clearer.
If you want to see your print output in your Flask output when running docker-compose up, add the following to your docker-compose file.
web:
  environment:
    - PYTHONUNBUFFERED=1
https://docs.docker.com/compose/environment-variables/
See this article, which explains the reason for the behavior in detail:
There are typically three modes for buffering:
If a file descriptor is unbuffered then no buffering occurs whatsoever, and function calls that read or write data occur immediately (and will block).
If a file descriptor is fully-buffered then a fixed-size buffer is used, and read or write calls simply read or write from the buffer. The buffer isn’t flushed until it fills up.
If a file descriptor is line-buffered then the buffering waits until it sees a newline character. So data will buffer and buffer until a \n is seen, and then all of the data that buffered is flushed at that point in time. In reality there’s typically a maximum size on the buffer (just as in the fully-buffered case), so the rule is actually more like “buffer until a newline character is seen or 4096 bytes of data are encountered, whichever occurs first”.
And GNU libc (glibc) uses the following rules for buffering:
Stream              Type     Behavior
stdin               input    line-buffered
stdout (TTY)        output   line-buffered
stdout (not a TTY)  output   fully-buffered
stderr              output   unbuffered
So, if you use -t, which according to the Docker documentation allocates a pseudo-tty, stdout becomes line-buffered, and docker run --name=myapp -it myappimage shows the one-line output.
And if you use just -d, no tty is allocated, so stdout is fully-buffered, and the single line App started cannot fill and flush the buffer.
So the fix is to use -dt to make stdout line-buffered, or to add -u to python to flush the buffer.
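To check which case a container is in, a small sketch: sys.stdout.isatty() reports whether a pseudo-tty was allocated, which is exactly what -t controls:

import sys

# True under `docker run -it` (pseudo-tty, line-buffered stdout),
# False under `docker run -d` (pipe, fully-buffered stdout)
print("stdout is a TTY: %s" % sys.stdout.isatty())
sys.stdout.flush()  # an explicit flush gets the line out in every buffering mode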
Since I haven't seen this answer yet:
You can also flush stdout after you print to it:
import time

if __name__ == '__main__':
    while True:
        print('cleaner is up', flush=True)
        time.sleep(5)
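Note that the flush keyword of print() only exists since Python 3.3; under the Python 2.7 from the question, the equivalent is an explicit flush after each print:

import sys
import time

while True:
    print 'cleaner is up'  # the Python 2 print statement has no flush option
    sys.stdout.flush()     # so flush the stream by hand
    time.sleep(5)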
Try adding these two environment variables to your setup: PYTHONUNBUFFERED=1 and PYTHONIOENCODING=UTF-8.
You can see logs from a detached container if you change print to logging.
main.py:
import time
import logging

print "App started"
logging.warning("Log app started")

while True:
    time.sleep(1)
Dockerfile:
FROM python:2.7-stretch
ADD . /app
WORKDIR /app
CMD ["python","main.py"]
If anybody is running a Python application with conda, you should add --no-capture-output to the command, since conda run captures and buffers stdout by default.
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "my-app", "python", "main.py"]
As a quick fix, try this:
from __future__ import print_function
import sys

# some code
print("App started", file=sys.stderr)
This works for me when I encounter the same problem. But, to be honest, I don't know why this happens.
I had to use PYTHONUNBUFFERED=1 in my docker-compose.yml file to see the output from Django's runserver.
If you aren't using docker-compose and just plain Docker instead, you can add this to your Dockerfile that hosts a Flask app:
ARG FLASK_ENV="production"

ENV FLASK_ENV="${FLASK_ENV}" \
    PYTHONUNBUFFERED="true"

CMD ["flask", "run"]
When using python manage.py runserver for a Django application, adding the environment variable PYTHONUNBUFFERED=1 solved my problem. print('helloworld', flush=True) also works for me.
However, python -u doesn't work for me.
Usually, we redirect the output to a specific file (by mounting a volume from the host and writing to that file).
Adding a tty using -t is also fine; you then pick the output up with docker logs.
Even with large log outputs, I had no issue with buffering when storing everything in a file rather than in Docker's log.
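A sketch of that file-redirect approach (the /host/logs path and the log file name are made up for illustration; this overrides the image's CMD):

docker run --name=myapp -d -v /host/logs:/logs myappimage \
    sh -c 'python -u main.py > /logs/myapp.log 2>&1'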
I am writing a python script that calls a bash script.
from subprocess import call
rc = call("./try_me.sh")
How can I stop the running bash script without exiting the running Python script?
I need something like Ctrl + C.
There are different ways to approach this problem.
It appears that you're writing some sort of CLI tool, since you referenced Ctrl+C. If that's the case, you can use & and send a SIGINT signal to stop it when you need to.
import os
os.system('nohup ./try_me.sh &')
If you want stricter control try using async subprocess management:
https://docs.python.org/3/library/asyncio-subprocess.html
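For example, a sketch using subprocess instead of os.system, so you keep a handle on the process and can later deliver the Ctrl+C equivalent (SIGINT) to its whole process group:

import os
import signal
import subprocess

# start the script in its own session/process group so it can be signalled alone
proc = subprocess.Popen(["./try_me.sh"], start_new_session=True)

# ... later, emulate Ctrl+C for that group without stopping the Python script
os.killpg(os.getpgid(proc.pid), signal.SIGINT)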
After some research, I found I should have used Popen to run the bash file, as @AKX suggested.
import os
from subprocess import Popen

r1 = Popen('printf "*Systempassword* \n" | sudo -S ./try_me.sh &', shell=True, preexec_fn=os.setsid)
When you need to stop the running bash file:
import os
import signal

os.killpg(os.getpgid(r1.pid), signal.SIGTERM)  # send the signal to every process in the group
I wrote a really simple script to test stdout redirection on a Windows machine. The program is as follows:
# hw.py
import time

def main():
    print('Hello World')
    time.sleep(1000)

if __name__ == '__main__':
    main()
I ran this script using the following command.
python3 hw.py > hw.log
By observing hw.log in real time, using either tail -f in git bash or an open emacs buffer, I noticed that 'Hello World' is only printed to hw.log when the process ends or is cancelled prematurely.
This means that I cannot have a live view of a program output while writing it to a file.
Worse still, if my program consists of child processes that never exit, any output from the program will not be written to the file.
How do I resolve this?
To force the Python stdout and stderr streams to be unbuffered you can pass the -u argument.
python3 -u hw.py > hw.log
Setting the environment variable PYTHONUNBUFFERED=1 will have the same effect.
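Alternatively, a small variant that flushes from inside the script itself, so the redirection behaves the same no matter how the script is launched:

# hw.py
import time

def main():
    print('Hello World', flush=True)  # push past the full buffering of a redirected stdout
    time.sleep(1000)

if __name__ == '__main__':
    main()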
I need to run my Python program from rc.local, backgrounded, sudo'd to a user account and nohup'd. But:
nohup sudo -u pi ~pi/doit.py >~pi/doit.out &
doesn't work because the shell applies the redirection to the whole command running under root, and hence the file gets created by root, not user pi as I want.
So I tried doing the redirection in the Python program. By way of example, I did:
#!/usr/bin/python
import sys
import time

sys.stdout = open('doit.out', 'w')

while True:
    print(time.ctime())
    sys.stdout.flush()
    time.sleep(1)
but this truncates the file on each print, even if I set the file open mode to append, and whether or not I include the flush() or run it with python -u.
So:
1. Why does Python keep truncating the redirected stdout, and how can I stop it?
or:
2. How can I get the redirection on the command line to operate on the command executed by nohup and sudo, rather than the whole command line?
or:
answers to both the above for my edification and enlightenment.
Regards - Philip
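One sketch for question 2 (untested on your exact setup): have a shell that already runs as pi perform the redirection, so the output file is created by pi rather than root:

nohup sudo -u pi sh -c 'exec ~pi/doit.py > ~pi/doit.out 2>&1' &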
I've written this watchdog script to monitor VLC player and kill it when playback has stopped, because VLC continues to inhibit the power management daemon after playback. The script works: I can run it from the command line or through IDLE, and it kills VLC when playback stops.

I've added many variations of the command to start the script to my startup applications as described here, but when I reboot, if it is running at all, it stops as soon as VLC starts. Restarting it from a terminal causes it to stay running and do what it is supposed to do.

I don't know if this is a problem with the script or something peculiar about Ubuntu Startup Applications (although I'm leaning towards Ubuntu). Maybe something to do with permissions? (Although I did chmod +x.) Should I be executing some other commands to make sure DBus is up before I launch the script?

Part of me thinks that something isn't fully loaded when the script starts, so I tried sleeping before launching using the *nix sleep command, the X-GNOME-Autostart-Delay, and time.sleep(n) in the Python code. The pythonic way seems to have the best chance of success. The *nix ways seem to only make startup take longer, and at the end of it I find that the process isn't even running.

I'm using the python-setproctitle module to name the process so I can quickly see if it is running with ps -e from a terminal. I'm out of ideas and about ready to just manually run the script whenever I reboot (although in principle I think that the machine should do it for me because I told it to). Some variations of Startup Application command lines that I've tried are:
/path/to/script/vlc_watchdog.py
"/path/to/script/vlc_watchdog.py"
/path/to/script/vlc_watchdog.py &
"/path/to/script/vlc_watchdog.py &"
python /path/to/script/vlc_watchdog.py
python /path/to/script/vlc_watchdog.py &
"python /path/to/script/vlc_watchdog.py"
"python /path/to/script/vlc_watchdog.py &"
bash -c "/path/to/script/vlc_watchdog.py"
sleep 30 ; /path/to/script/vlc_watchdog.py
sleep 30 && /path/to/script/vlc_watchdog.py
etc...
Full script:
#!/usr/bin/env python
import time
time.sleep(30)
import dbus
import os
import subprocess
from subprocess import Popen, PIPE
import daemon
import setproctitle
setproctitle.setproctitle('VLC-Watchdog')
sleeptime = 5
def vlc_killer():
    bus = dbus.SessionBus()
    vlc_media_player_obj = bus.get_object("org.mpris.MediaPlayer2.vlc", "/org/mpris/MediaPlayer2")
    props_iface = dbus.Interface(vlc_media_player_obj, 'org.freedesktop.DBus.Properties')
    pb_stat = props_iface.Get('org.mpris.MediaPlayer2.Player', 'PlaybackStatus')

    if pb_stat == 'Stopped':
        os.system("kill -9 $(pidof vlc)")
    else:
        time.sleep(sleeptime)

def vlc_is_running():
    ps = subprocess.Popen(['ps', '-e'], stdout=PIPE)
    out, err = ps.communicate()
    for line in out.splitlines():
        if 'vlc' in line:
            return True
    return False

def run():
    while True:
        if vlc_is_running():
            vlc_killer()
        else:
            time.sleep(sleeptime)

with daemon.DaemonContext():
    run()
In the shell script that starts your Python code (the one in the Ubuntu startup/initialization process), use something like:
#!/bin/sh
set -x
exec > /tmp/errors.out 2>&1
/path/to/script/vlc_watchdog.py
Then after things go awry again (that is, after another reboot), inspect /tmp/errors.out to see the error messages related to whatever happened. There should be a Python traceback in there, or at least a shell error.