Windows only writes to file after process is stopped - python

I wrote a really simple script to test the redirection of stdout on a Windows machine. The program is as follows:
# hw.py
import time

def main():
    print('Hello World')
    time.sleep(1000)

if __name__ == '__main__':
    main()
I ran this script using the following command.
python3 hw.py > hw.log
Observing hw.log in real time, using either tail -f in Git Bash or an Emacs buffer, I noticed that 'Hello World' is only written to hw.log when the process ends or is cancelled prematurely.
This means that I cannot have a live view of a program's output while writing it to a file.
Worse still, if my program consists of indefinitely running child processes, any output from the program will never be written to the file.
How do I resolve this?

To force the Python stdout and stderr streams to be unbuffered, you can pass the -u argument:
python3 -u hw.py > hw.log
Setting the environment variable PYTHONUNBUFFERED=1 has the same effect.
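If you cannot change how the interpreter is launched, a similar effect is available from inside the script itself. A minimal sketch (Python 3.7+), assuming stdout is a regular text stream:

```python
import sys

# Switch stdout to line buffering, so each completed line is flushed
# immediately -- an in-script alternative to `python3 -u` (Python 3.7+).
if hasattr(sys.stdout, 'reconfigure'):  # real console/file streams support this
    sys.stdout.reconfigure(line_buffering=True)

print('Hello World')  # flushed as soon as the newline is written
```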

Related

Jenkins Execute shell behaving differently

I am creating a Jenkins project which executes a shell step on build. Inside the Execute shell step I run a Python script like:
python3 pythonScriptFile.py "${arg1}" "${arg2}" "${arg3}"
The Python file internally calls a shell script:
python -> shell1 -> shell2 -> return to the Python file to continue execution.
When I execute the Python file with arguments in a terminal, the execution is synchronous, one step after the other. But when I run the same in Jenkins, the shell script's output appears first and then the Python file's.
print("SCRIPT Started")
process = os.system("""sh script.sh -t {arg1} -e {arg2}""")
process.wait()
if process.returncode != 0:
    sys.exit()
    print("Error executing build script")
print("SCRIPT COMPLETED")
Output:
Script executed (which is a echo inside shell)
SCRIPT Started
SCRIPT COMPLETED
Expected Output:
SCRIPT Started
Script executed (which is a echo inside shell)
SCRIPT COMPLETED
[ Why does that happen ? ]
The buffering of a standard output stream depends on the environment and program settings.
In Jenkins the Python program's output stream is fully buffered, while an interactive program connected to a terminal is line buffered.
[ How to fix it ? ]
Disable output buffering
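One way to apply that fix in the question's script is to flush the parent's buffered lines before spawning the child, so they cannot be overtaken by the child's output under Jenkins' fully buffered pipes. A sketch, with a plain echo standing in for the question's script.sh:

```python
import os
import sys

print("SCRIPT Started", flush=True)  # flush before the child writes anything

# Note: os.system returns the child's exit status (an int), not a process object.
status = os.system("echo Script executed")

if status != 0:
    print("Error executing build script", flush=True)
    sys.exit(1)

print("SCRIPT COMPLETED", flush=True)
```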

How to run a Python ".exe" without it being spawned, as it is by >python "myscript.py"

After creating the ".exe" from "myscript.py" with the command line:
pyinstaller --onefile -w myscript.py
I see that Windows 10 spawns the process; it does not wait for the execution as it usually does when I execute the original script from the command line:
python myscript.py
So, how do I run the ".exe" the way it is run by python, without spawning it?
Also, no print() prints anything, which I guess is because the main output stream has been changed by the spawning.
myscript.py
myscript.py
import sys

def main():
    print('Hello World!')
    sys.stdout.flush()

main()
You specifically asked for this behavior through -w (--windowed):
Windows and Mac OS X: do not provide a console window for standard i/o.
Don't use -w if you want a CLI program (and you are using the console). Use -w if you want a GUI program (and you are showing your own dialog windows and such).
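A related pitfall: in a --windowed build there is no console at all, and sys.stdout can be None, so bare print() calls may raise. A defensive sketch of the question's myscript.py:

```python
import sys

def main():
    # Under pythonw / PyInstaller --windowed, sys.stdout may be None;
    # guard console output so a GUI build does not crash on it.
    if sys.stdout is not None:
        print('Hello World!')
        sys.stdout.flush()

main()
```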

Docker Container prints the output only while exiting

I wrote a python program and its Dockerfile:
import time

print("Begin")
time.sleep(100)
print("End")
The image for it was created, and it was run using docker run <image-id>. The behaviour that surprises me is: after giving the run command in the console, it waits for the sleep(100) seconds and then prints "Begin" and "End" together.
Why do we not get the intermediate results while it is running?
Also, how can I write a streaming app (with Kafka or similar) in this manner, if it won't send the data immediately after producing it?
When you run your python script from the console, it displays Begin on stdout right away because it is a tty (interactive) and flushes at the end of each line. But if you redirect stdout and stdin like so python /tmp/a.py < /dev/null | cat, the python script will not notice it is run from a tty and will only flush when it completes.
If you run the same script from a docker container, it does not have a tty by default, you have to explicitly ask for one with --tty , -t Allocate a pseudo-TTY:
docker run -t yourimage
Alternatively, if you do not want the container to run with a tty, you can force the flush to happen regardless by setting the PYTHONUNBUFFERED environment variable, by adding the -u option to the python interpreter, or by modifying your script like so:
import sys
import time
print("Begin")
sys.stdout.flush()
time.sleep(100)
print("End")
or with the flush argument (Python 3 only):
import time
print("Begin", flush=True)
time.sleep(100)
print("End")
When printing to stdout, there is no guarantee that the data is written out immediately: it sits in a user-space buffer first.
What is guaranteed is that the buffer is flushed when the stream is closed (this is why you get the output when the docker container exits).
To ensure the output is flushed right away, add the following code after any important print:
import sys
sys.stdout.flush()

Redirection of stdout in Python and/or sh

I need to run my Python program from rc.local, backgrounded, sudo'd to a user account and nohup'd. But:
nohup sudo -u pi ~pi/doit.py >~pi/doit.out &
doesn't work because the shell applies the redirection to the whole command running under root, and hence the file gets created by root, not user pi as I want.
So I tried doing the redirection in the Python program. By way of example, I did:
#!/usr/bin/python
import sys
import time

sys.stdout = open('doit.out', 'w')
while True:
    print(time.ctime())
    sys.stdout.flush()
    time.sleep(1)
but this truncates the file on each print, even if I set the file open mode to append, and whether or not I include the flush() or run it with python -u.
So:
1. Why does Python keep truncating the redirected stdout, and how can I stop it?
or:
2. How can I get the redirection on the command line to operate on the command executed by nohup and sudo, rather than the whole command line?
or:
answers to both the above for my edification and enlightenment.
Regards - Philip
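On question 1: mode 'w' truncates the file once, at open(). A sketch of the in-Python redirection using append mode with line buffering instead ('doit.out' as in the question; the stdout swap-back is only there to keep the snippet self-contained):

```python
import sys
import time

# 'a' appends instead of truncating; buffering=1 gives a line-buffered
# text stream, so every print is flushed as soon as the line completes.
log = open('doit.out', 'a', buffering=1)
old_stdout, sys.stdout = sys.stdout, log

print(time.ctime())  # appended to doit.out immediately

sys.stdout = old_stdout
log.close()
```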

Why won't this Python script run as a startup application in Ubuntu 12.04?

I've written this watchdog script to monitor VLC player and kill it when playback has stopped, because VLC continues to inhibit the power management daemon after playback. The script works: I can run it from the command line or through IDLE and it kills VLC when playback stops.

I've added many variations of the command to start the script to my Startup Applications as described here, but when I reboot, if it is running at all, it stops as soon as VLC starts. Restarting it from a terminal causes it to stay running and do what it is supposed to do.

I don't know if this is a problem with the script or something peculiar about Ubuntu Startup Applications (although I'm leaning towards Ubuntu). Maybe something to do with permissions? (Although I did chmod +x.) Should I be executing some other commands to make sure DBus is up before I launch the script? Part of me thinks that something isn't fully loaded when the script starts, so I tried sleeping before launching using the *nix sleep command, the X-GNOME-Autostart-Delay, and time.sleep(n) in the Python code. The pythonic way seems to have the best chance of success. The *nix ways seem only to make startup take longer, and at the end of it I find that the process isn't even running.

I'm using the python-setproctitle module to name the process so I can quickly see if it is running with ps -e from a terminal. I'm out of ideas and about ready to just manually run the script whenever I reboot (although in principle I think the machine should do it for me because I told it to). Some variations of Startup Application command lines that I've tried are:
/path/to/script/vlc_watchdog.py
"/path/to/script/vlc_watchdog.py"
/path/to/script/vlc_watchdog.py &
"/path/to/script/vlc_watchdog.py &"
python /path/to/script/vlc_watchdog.py
python /path/to/script/vlc_watchdog.py &
"python /path/to/script/vlc_watchdog.py"
"python /path/to/script/vlc_watchdog.py &"
bash -c "/path/to/script/vlc_watchdog.py"
sleep 30 ; /path/to/script/vlc_watchdog.py
sleep 30 && /path/to/script/vlc_watchdog.py
etc...
Full script:
#!/usr/bin/env python
import time
time.sleep(30)

import dbus
import os
import subprocess
from subprocess import Popen, PIPE
import daemon
import setproctitle

setproctitle.setproctitle('VLC-Watchdog')

sleeptime = 5

def vlc_killer():
    bus = dbus.SessionBus()
    vlc_media_player_obj = bus.get_object("org.mpris.MediaPlayer2.vlc", "/org/mpris/MediaPlayer2")
    props_iface = dbus.Interface(vlc_media_player_obj, 'org.freedesktop.DBus.Properties')
    pb_stat = props_iface.Get('org.mpris.MediaPlayer2.Player', 'PlaybackStatus')
    if pb_stat == 'Stopped':
        os.system("kill -9 $(pidof vlc)")
    else:
        time.sleep(sleeptime)

def vlc_is_running():
    ps = subprocess.Popen(['ps', '-e'], stdout=PIPE)
    out, err = ps.communicate()
    for line in out.splitlines():
        if 'vlc' in line:
            return True
    return False

def run():
    while True:
        if vlc_is_running():
            vlc_killer()
        else:
            time.sleep(sleeptime)

with daemon.DaemonContext():
    run()
In the shell script that starts your Python code (the one in the Ubuntu startup/initialization process), use something like:
#!/bin/sh
set -x
exec > /tmp/errors.out 2>&1
/path/to/script/vlc_watchdog.py
Then after things go awry again (that is, after another reboot), inspect /tmp/errors.out to see the error messages related to whatever happened. There should be a Python traceback in there, or at least a shell error.
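The same capture can be sketched from inside the Python script, for cases where editing the shell wrapper is not an option; the log path here is illustrative:

```python
import contextlib
import os
import sys
import tempfile

# Route early diagnostics to a known log file so a failed startup run
# leaves a trace; line buffering flushes each message immediately.
log_path = os.path.join(tempfile.gettempdir(), 'vlc_watchdog_errors.out')
with open(log_path, 'a', buffering=1) as log, contextlib.redirect_stderr(log):
    print('watchdog starting', file=sys.stderr)
```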
