I'm working on developing some tools in Python to be used in Linux via Bash or another CLI. These tools all print info to the terminal to let the user know what's going on. Some have progress percentages where the same line is printed to repeatedly.
I've got everything working how I'd like with print statements; however, these tools may occasionally be sent to the background in Bash. This is causing problems, since print still writes to the terminal regardless of whether the process is running in the foreground or background.
How do I only print to the console when the script is in the foreground? The script should keep running and simply not print when sent to the background. I imagine there's some whole topic I'm missing here, as I am new to programming as a whole. Even a pointer in the right direction would likely get me there.
Enabling TOSTOP in termios should work for your case:
#! /usr/bin/env python3
import sys
import termios

old_attrs = None
try:
    fd = sys.stdout.fileno()
    # save the original settings so they can be restored on exit
    old_attrs = termios.tcgetattr(fd)
    [iflag, oflag, cflag, lflag, ispeed, ospeed, ccs] = old_attrs
    # turn on TOSTOP: background writes to the terminal now raise SIGTTOU
    termios.tcsetattr(fd, termios.TCSANOW,
                      [iflag, oflag, cflag, lflag | termios.TOSTOP,
                       ispeed, ospeed, ccs])
except termios.error:
    pass

...
# your script here
...

# restore the original state when done
if old_attrs is not None:
    termios.tcsetattr(fd, termios.TCSANOW, old_attrs)
So if your script tries to print to stdout while in the background, it will receive a SIGTTOU signal and stop.
This works whether you start the script with & or send it to the background later using job control (Ctrl+Z).
edit
If you need the script to keep running in the background instead of stopping, handle the SIGCONT signal:

import signal
signal.signal(signal.SIGCONT, handler)

and switch a flag in the handler:

def handler(signum, frame):
    global do_print
    do_print = False
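Putting the pieces together, here's a minimal sketch of the whole scheme: TOSTOP so that background output stops the process, plus a SIGCONT handler so that resuming with bg disables printing. The progress loop is just an illustration, and note one caveat: SIGCONT is also delivered when you resume the job in the foreground with fg, so a real version would want to re-check the foreground status (e.g. with os.tcgetpgrp(), as in the answer below) instead of unconditionally flipping the flag.

#! /usr/bin/env python3
import signal
import sys
import termios
import time

do_print = True

def on_cont(signum, frame):
    # we were resumed after being stopped, presumably with `bg`:
    # stop writing to the terminal from now on
    global do_print
    do_print = False

signal.signal(signal.SIGCONT, on_cont)

try:
    fd = sys.stdout.fileno()
    attrs = termios.tcgetattr(fd)
    attrs[3] |= termios.TOSTOP  # index 3 is the lflag word
    termios.tcsetattr(fd, termios.TCSANOW, attrs)
except termios.error:
    pass

# dummy progress loop standing in for the real tool
for pct in range(0, 101, 10):
    if do_print:
        print("progress: %d%%" % pct)
    time.sleep(1)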
I'm not sure this is the most ideal or elegant solution, but here's what I've gotten to work based upon this answer to a similar question:
#! /usr/bin/env python3
import time, sys, os

def terminal_output(data):  # print to the console ONLY if the process is in the foreground
    if os.getpgrp() == os.tcgetpgrp(sys.stdout.fileno()):
        print(data)

count = 0
while count < 20:
    count += 1
    terminal_output(count)
    time.sleep(2)
I've simply put together a new function that handles printing by first checking whether the process group it's running in is the process group that controls the terminal. If it isn't, it does not print. This could easily be modified to write to a log file instead (see the sketch below), though I don't need that in my case.
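For example, here's a sketch of that log-file variant; the tool.log filename and the logging setup are placeholders, and the try/except also covers the case where stdout isn't a terminal at all:

import logging
import os
import sys

logging.basicConfig(filename="tool.log", level=logging.INFO)  # placeholder log file

def terminal_output(data):
    # print to the console only when we control the terminal; otherwise log
    try:
        foreground = os.getpgrp() == os.tcgetpgrp(sys.stdout.fileno())
    except OSError:
        foreground = False  # stdout is not a terminal (e.g. redirected)
    if foreground:
        print(data)
    else:
        logging.info(data)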
Related
I have a subprocess that I am running with:
proc = subprocess.Popen("python -u my_script.py", shell=True)
my_script.py should print to stdout regularly, and I have another, unrelated process listening to this output, so I can't redirect the output anywhere else.
I want to ensure that the process is really printing regularly and hasn't gotten stuck in some loop, etc. Is there a way to check whether stdout has been written to within some amount of time?
Any other options to reach this goal?
EDIT
I am using Windows.
You can create a named pipe with mkfifo and use tee to send your script's output both to the process listening for it and to the pipe:
mkfifo blarg
my_script.py | tee blarg | your_greedy_data_processing_instance
tail -f blarg
Instead of tail you can use an arbitrarily complicated script to study the output and the state of the process generating it (timers, PID checks).
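For example, here's a sketch of such a watcher in Python, reading the FIFO and flagging silence; the 30-second threshold and the blarg name just follow the example above, and this is Unix-only since it selects on a pipe:

import os
import select
import sys

TIMEOUT = 30  # seconds of silence before we consider the writer stuck

fd = os.open("blarg", os.O_RDONLY)  # the FIFO created with mkfifo above
try:
    while True:
        ready, _, _ = select.select([fd], [], [], TIMEOUT)
        if not ready:
            sys.stderr.write("no output for %d seconds; writer may be stuck\n" % TIMEOUT)
            break
        data = os.read(fd, 4096)
        if not data:  # the writer closed its end of the pipe
            break
finally:
    os.close(fd)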
It appears that the access time and modification time of /dev/stdout are updated regularly. Note, however, that /dev/stdout is always a symbolic link to the stdout file handle of the process that's examining it; i.e., /dev/stdout links to /proc/self/fd/1.
So it seems that you could check the first file descriptor of your process to see if its modification time has changed, e.g.:
$ stat -c %y -L /proc/10830/fd/1
2021-05-13 02:34:00.367857061
-L means act on the target of the symbolic link, not the link itself; -c %y just asks for the modification time. This Python script is running as process 10830 on my system right now, and it's occasionally updating the modification time (about every 8 seconds):
>>> import time
>>> while True: time.sleep(1); print("still alive")
still alive
still alive
still alive
....
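If that behavior holds on your system, a small poll loop could automate the check; this is a sketch, Linux-only (it relies on /proc), and the PID and 15-second interval are just illustrative:

import os
import time

pid = 10830  # PID of the process to watch (from the example above)
path = "/proc/%d/fd/1" % pid

last = os.stat(path).st_mtime  # os.stat() follows the symlink, like stat -L
while True:
    time.sleep(15)
    mtime = os.stat(path).st_mtime
    if mtime == last:
        print("no new output in the last 15 seconds")
    last = mtime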
You should verify that the behavior I'm seeing is reliable before depending on it, though, because I've never read anything about it before.
Alternatively, you could either (a) trust that the script is fine, which it will always be, of course, unless it's catching exceptions and refusing to exit even when it can no longer do anything useful (in which case you should change it to die the way it should), or (b) set up a daemon that sends a signal to the script, at which point the script sends a signal back to say "I'm still alive." There's literally no reason to do that, in my opinion, but how you write your programs is up to you.
So assuming that you want to press forward with this, here's a trivial example of the daemon that would monitor the script you want to make sure isn't stuck in a loop or something:
import time
import signal
import os
import sys

# keep a timestamp of when we last received a response
response_timestamp = time.time()

# add code here to get the process ID of the other script
other_pid = 0

def sig_handler(signum, frame):
    global response_timestamp
    response_timestamp = time.time()

if __name__ == '__main__':
    # make sure that when we receive SIGBREAK, sig_handler() gets called
    signal.signal(signal.SIGBREAK, sig_handler)
    while True:
        # deliver SIGBREAK to "other_pid"; on Windows, os.kill() needs
        # CTRL_BREAK_EVENT here (passing SIGBREAK itself would make os.kill()
        # terminate the target outright)
        os.kill(other_pid, signal.CTRL_BREAK_EVENT)
        time.sleep(15)
        if time.time() - 20 > response_timestamp:
            print("the other process is frozen")
            sys.exit(1)  # os.EX_SOFTWARE is POSIX-only, unavailable on Windows
Then you add this to the other script that you're monitoring:
import signal
import os

# add code here to get the monitoring daemon's process ID
other_pid = 0

def sig_handler(signum, frame):
    # reply to the daemon (again via CTRL_BREAK_EVENT on Windows)
    os.kill(other_pid, signal.CTRL_BREAK_EVENT)

# make sure that when we receive SIGBREAK, sig_handler() gets called
signal.signal(signal.SIGBREAK, sig_handler)
...
...
(rest of your script)
Now be aware that the only thing this will do is make sure that the process isn't completely frozen. Regrettably, Windows doesn't have a great deal of options when it comes to signals: SIGBREAK was the best one I saw, but note that it's the signal a console process receives when you hit Ctrl+Break (so if you manually hit Ctrl+Break in the window running the Python program, it won't kill it, it will just make it call sig_handler()).
I would also be remiss if I did not inform you that it is not safe to do much of anything inside a signal handler function; the usual advice is to just set a flag and do the real work in the main loop. A handler as small as this one is usually fine in practice, but keep it in mind.
This question already has answers here: How to flush the input stream? (4 answers)
I'm making a game that runs on the console in python. When I exit the game, all the keys I pressed are automatically typed. How do I stop this from happening? Also, I still want to have user input when I use the input() function. This is on Windows by the way.
If you want the code, this has the same "problem":
for _ in range(100000):
    print("Hello")
When the program finishes in the command prompt, this comes up:
C:\Users\User>awdsaawdsadwsaasdwaws
Basically, whatever keys were pressed while the code was running. This happens when other things run in the command prompt too, but I want to know how to disable it in python.
Edit: I kept digging and found that what I was looking for was flushing (or clearing) the keyboard buffer. I marked my question as a duplicate of another that has a few answers, but this one worked best for me:
def flush_input():
    try:
        import msvcrt
        while msvcrt.kbhit():
            msvcrt.getch()
    except ImportError:
        import sys, termios  # for Linux/Unix
        termios.tcflush(sys.stdin, termios.TCIOFLUSH)
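For example, with the Hello loop from the question, one call at the end is enough (assuming flush_input() is defined as above):

for _ in range(100000):
    print("Hello")

flush_input()  # discard any keystrokes typed while the loop was running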
This happens because your computer registers the keystrokes and, on the console, makes them available on the stdin input stream.
If you save your script as test.py, run python test.py, and start entering some keystrokes, like abc, those letters sit on standard input.
Your script doesn't read them, because it doesn't touch that stream; you're not using input() or any other call that would read it. So when your script finishes, the characters are still on standard input, the prompt comes back, and the shell reads those characters, with the given result:
Hello
Hello
Hello
PS C:\Users\username> abc
To avoid this, you can read (i.e. flush) whatever is in the input buffer at the end of your script. However, this is surprisingly hard to do in a way that works across all operating systems and all modes of running your script (directly from cmd, in IDLE, in other IDEs, etc.).
The problem is that there's no way to know whether there's input waiting on the standard input stream until you try to read from it. But if you try to read from it, your script will pause until an 'end of line' or 'end of file' is received. And if the user is just hitting keys, that won't happen, so you'll end up reading until they hit something like Ctrl+Break or Ctrl+C.
Here's a way I think is relatively robust, but I recommend you test it in scenarios and environments you consider likely for use of your script:
import sys
import threading
import queue
import os
import signal

for _ in range(100000):
    print("Hello")

timeout = 0.1  # sec

def no_input():
    # stop the main thread (which is probably blocked reading input) via an
    # interrupt signal; this use of os.kill() is only available on Windows in
    # Python 3.2 or higher
    os.kill(os.getpid(), signal.SIGINT)
    exit()

# if a SIGINT is received, exit the main thread; a lambda is used because a
# handler gets called with (signum, frame), which the bare exit() builtin
# doesn't accept (you could call another function to do work first, then exit)
signal.signal(signal.SIGINT, lambda signum, frame: sys.exit(2))

# input is stored here, until it's dealt with
input_queue = queue.Queue()

# read all available input until the main thread exits
def get_input():
    while True:
        try:
            # read input, not doing anything with it
            _ = input_queue.get(timeout=timeout)
        except queue.Empty:
            no_input()

reading_thread = threading.Thread(target=get_input)
reading_thread.start()

# main loop: put any available input in the queue; waits for input if there is none
for line in sys.stdin:
    input_queue.put(line)

# wait for the reading thread
reading_thread.join()
It basically reads the input from a second thread, allowing the main thread to collect the input (and possibly do something with it) until there's nothing left, at which point the reading thread tells the main thread to exit. Note that this will result in your script exiting with an exit code of 2, which may not be what you want.
Also note that you'll still see the input on screen, but it will no longer be passed to the shell prompt:
Hello
Hello
Hello
abc
PS C:\Users\username>
I don't know of an easy way to avoid the echo, other than on Linux doing something like stty -echo. You could of course just ask the system to clear the screen at the end of your script:

from subprocess import call
from os import name as os_name

call('clear' if os_name == 'posix' else 'cls', shell=True)  # cls is a cmd built-in, so it needs a shell
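On Linux/Unix you could also turn the echo off from Python itself for the program's lifetime; here's a sketch using termios (not applicable on Windows):

import sys
import termios

fd = sys.stdin.fileno()
old = termios.tcgetattr(fd)
new = list(old)
new[3] &= ~termios.ECHO  # index 3 is the lflag word
termios.tcsetattr(fd, termios.TCSANOW, new)
try:
    for _ in range(100000):
        print("Hello")
finally:
    termios.tcsetattr(fd, termios.TCSANOW, old)  # always restore the old settings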
I am trying to write a program that runs when my Raspberry Pi starts up and lets me immediately begin typing on a keyboard, with the input picked up by the program. I don't want to have to start the program manually after the Pi boots. I need to use curses (or a similar unbuffered keyboard input library) because I display what I am typing on a 2x16 I2C LCD, but I also need everything that I type to be recorded to a text file.
Right now, I am auto-starting the program at boot by putting a line in rc.local. This works, and the I2C display correctly shows program output, but the program does not respond to keyboard input. Instead, the keystrokes show up (when I connect the Pi to a screen; the goal is to run headless) at an odd console prompt that exits when I press Enter and says -bash: 'whatever I just typed': command not found.
I have already tried:
Setting a timer at the beginning of the program to wait until the Pi has fully booted before initializing the curses window and keyboard capture
Creating a separate Python program that waits until the Pi has fully booted and then runs the main script by importing it
Neither of these methods works, though; I get the same problem with slight differences.
To be clear, the program works flawlessly if I run it manually from the command line. But there is no keyboard input to the program (or at least not where it is supposed to be inputted) when I autostart the script with rc.local.
My code:
#!/usr/bin/python
import I2C_LCD_driver, datetime, sys
from time import *
from subprocess import call
mylcd = I2C_LCD_driver.lcd()
for x in range(30):  # waits for the raspberry pi to boot up
    mylcd.lcd_display_string("Booting Up: " + str(x), 1)
    sleep(1)

import curses
key = curses.initscr()
curses.cbreak()
curses.noecho()
key.keypad(1)
key.nodelay(1)

escape = 0
while escape == 0:
    # variable initialization
    while 1:
        k = key.getch()
        if k > -1:  # runs when you hit any key; getch() returns -1 until a key is pressed
            if k == 27:  # exits the typing loop when you hit Esc
                break
            elif k == 269:
                pass  # a couple of other special function-key cases are here
            else:
                inpt = chr(k)
                mylcd.lcd_display_string(inpt, 2, step)  # writes the last character to the display
    # some more code that handles writing the text to the LCD, which works flawlessly when run manually
    file.write("%s\r\n" % entry)
    file.close()
    mylcd.lcd_display_string("Saved ", 2)
    mylcd.lcd_display_string("F1 New F2 PwrOff", 1)
    while 1:
        k = key.getch()
        if k > -1:
            if k == 265:  # do it again with F1
                mylcd.lcd_clear()
                break
            elif k == 266:  # shut down with F2
                escape = 1
                break

curses.nocbreak()
key.keypad(0)
curses.echo()
curses.endwin()
call("sudo shutdown -h now", shell=True)
The line that I have in /etc/rc.local is as follows if that is important:
sudo python3 journal.py &
and it is followed by the 'exit 0' line.
Thanks for any help you can provide. I know this is a very specific problem and will be tedious to reproduce, but if anyone knows anything about auto-starting scripts, I would be very appreciative of any tips.
OK, literally all I had to do (which I found after some more research on Stack Exchange, in the thread that contained the answer I was looking for) was run my program from ~/.bashrc instead of /etc/rc.local. This method works perfectly; it's exactly what I wanted.
This should be because of how you called the program:
python3 journal.py &
You may want to check out the JOB CONTROL section of the bash (or your shell's) man page:

Only foreground processes are allowed to read from ... the terminal. Background processes which attempt to read from ... the terminal are sent a SIGTTIN ... signal by the kernel's terminal driver, which, unless caught, suspends the process.
In short, once curses (or anything else, for that matter) tries to read from stdin, your process is likely stopped (after it may already have written to your display). Keep it in the foreground so it can use stdin (and, by extension, the keyboard).
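If you wanted to detect that situation up front instead of being silently suspended, you could reuse the foreground check from the first answer above; a sketch, assuming stdin is attached to a terminal at all:

import os
import sys

# refuse to start curses if we are not the foreground process group
if os.getpgrp() != os.tcgetpgrp(sys.stdin.fileno()):
    sys.exit("not in the foreground; keyboard input would suspend us")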
Side note: I'm not sure about your distro's implementation of rc.local, but aren't init scripts normally run with uid/gid 0 already (without wrapping individual calls in sudo)?
I'm writing a parser in Python that outputs a bunch of database rows to standard out. In order for the DB to process them properly, each row needs to be fully printed to the console. I'm trying to prevent interrupts from making the print command stop halfway through printing a line.
I tried the solution that recommended using a signal handler override, but this still doesn't prevent the row from being partially printed when the program is interrupted. (I think the write system call is cut short to handle the interrupt.)
I thought that the problem was solved by issue 10956 but I upgraded to Python 2.7.5 and the problem still happens.
You can see for yourself by running this example:
# Writer
import signal

interrupted = False

def signal_handler(signal, frame):
    global interrupted
    interrupted = True

signal.signal(signal.SIGINT, signal_handler)

while True:
    if interrupted:
        break
    print '0123456789'
In a terminal:
$ mkfifo --mode=0666 pipe
$ python writer.py > pipe
In another terminal:
$ cat pipe
Then Ctrl+C the first terminal. Some of the time the second terminal will end with an incomplete sequence of characters.
Is there any way of ensuring that full lines are written?
This seems less like an interrupt problem per se than a buffering issue. If I make a small change to your code, I don't get the partial lines:
# Writer
import sys

while True:
    print '0123456789'
    sys.stdout.flush()
It sounds like you don't really want to catch a signal but rather block it temporarily. This is supported by some *nix flavours via sigprocmask/pthread_sigmask, but Python 2 does not expose it (Python 3.3 added signal.pthread_sigmask).
You can write a C wrapper for sigmasks or look for a library. However, if you are looking for a portable solution...
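For reference, here's a minimal sketch of the blocking approach, assuming Python 3.3 or later (so it won't help on the 2.7.5 setup in the question):

import signal
import sys

while True:
    # hold SIGINT while a full line is written out
    signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGINT})
    sys.stdout.write('0123456789\n')
    sys.stdout.flush()
    # unblocking delivers any pending SIGINT between complete lines, so the
    # KeyboardInterrupt is raised only after the current line has been flushed
    signal.pthread_sigmask(signal.SIG_UNBLOCK, {signal.SIGINT})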
I am developing a wrapper around gdb using python. Basically, I just want to be able to detect a few setup annoyances up-front and be able to run a single command to invoke gdb, rather than a huge string I have to remember each time.
That said, there are two cases that I am using. The first, which works fine, is invoking gdb by creating a new process and attaching to it. Here's the code that I have for this one:
def spawnNewProcessInGDB():
    global gObjDir, gGDBProcess
    from subprocess import Popen
    from os.path import join

    binLoc = join(gObjDir, 'dist')
    binLoc = join(binLoc, 'bin')
    binLoc = join(binLoc, 'mycommand')
    profileDir = join(gObjDir, '..')
    profileDir = join(profileDir, 'trash-profile')
    try:
        gGDBProcess = Popen(['gdb', '--args', binLoc, '-profile', profileDir], cwd=gObjDir)
        gGDBProcess.wait()
    except KeyboardInterrupt:
        # Send a termination signal to the GDB process, if it's running
        promptAndTerminate(gGDBProcess)
Now, if the user presses CTRL-C while this is running, it breaks (i.e. it forwards the CTRL-C to GDB). This is the behavior I want.
The second case is a bit more complicated. It might be the case that I already had this program running on my system and it crashed, but was caught. In this case, I want to be able to connect to it using gdb to get a stack trace (or perhaps I was already running it, and I simply now want to connect to the process that's already in memory).
As a convenience feature, I've created a mirror function, which will connect to a running process using gdb:
def connectToProcess(procNum):
    global gObjDir, gGDBProcess
    from subprocess import Popen

    print("Connecting to mycommand process number " + str(procNum) + "...")
    try:
        # str() in case the caller passes the PID as an int
        gGDBProcess = Popen(['gdb', '-p', str(procNum)], cwd=gObjDir)
        gGDBProcess.wait()
    except KeyboardInterrupt:
        promptAndTerminate(gGDBProcess)
Again, this seems to work as expected: it starts gdb, I can set breakpoints, run the program, etc. The only catch is that it doesn't forward Ctrl+C to gdb when I press it while the program is running. Instead, it jumps immediately to promptAndTerminate().
I'm wondering if anyone can see why this is happening - the two calls to subprocess.Popen() seem identical to me, albeit that one is running gdb in a different mode.
I have also tried replacing the call to subprocess.Popen() with the following:
gGDBProcess = Popen(['gdb', '-p', procNum], cwd=gObjDir, stdin=subprocess.PIPE)
but this leads to undesirable results as well, because it doesn't actually communicate anything to the child gdb process (e.g. if I type c to resume the program after gdb breaks on connection, nothing happens). Again, it terminates the running Python process when I press Ctrl+C.
Any help would be appreciated!