I have a python script and I want to be able to stop it at any time.
If I knew where in the script I may want to stop it I would use quit().
Is there a way to implement this independent of the current state?
Background: this script runs in a Docker container; stopping that container takes a rather long time, and the container exits with code 137. This bothers me.
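Exit code 137 means the container was killed with SIGKILL (128 + 9): docker stop first sends SIGTERM, and only escalates to SIGKILL after the grace period (10 seconds by default) if the process is still running. So one way to stop the script cleanly at any point, independent of its current state, is to install a SIGTERM handler. A minimal sketch:

import signal
import sys

def handle_sigterm(signum, frame):
    # Do any cleanup here, then exit with a normal status code.
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)

# ... rest of the script runs as usual ...

With this in place, docker stop returns as soon as SIGTERM is handled instead of waiting out the grace period.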
I am running some Python code using a SLURM script on a remote server accessed through SSH. At some point, license issues on the SLURM platform may occur, generating errors in Python and ending the subprocess. I want to use try-except to let the Python subprocess wait until the issue is fixed; after that, it can keep running from where it stopped.
What are some smart implementations for that?
My most obvious solution is to keep Python inside a loop if the error occurs and let it read a file every X seconds. When I finally fix the error and want the job to continue from where it stopped, I would write something to the file and break the loop. I wonder if there is a smarter way to provide input to the Python subprocess while it is running through the SLURM script.
One idea might be to add a signal handler for signal USR1 to your Python script like this.
In the signal handler function, you can set a global variable or send a message or set a threading.Event that the main process is waiting on.
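A minimal sketch of the flag-based variant (Unix-only, which matches the SLURM setting; signal.pause() suspends the process until a signal arrives):

import os
import signal

resumed = False

def handle_usr1(signum, frame):
    global resumed
    resumed = True          # keep the handler minimal: just flip a flag

signal.signal(signal.SIGUSR1, handle_usr1)

print(f"blocked; resume with: kill -USR1 {os.getpid()}")
while not resumed:
    signal.pause()          # sleep until any signal arrives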
Then you can signal the process with:
kill -USR1 <PID>
or with the Python os.kill() equivalent.
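For example, from another Python process (pid being the blocked process's ID):

import os, signal
os.kill(pid, signal.SIGUSR1)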
Though I do have to agree there is something to be said for the simplicity of your process doing:
touch /tmp/blocked.$$
and your program waiting in a loop with a 1s sleep for that file to be removed. This way you can tell which process id is blocked.
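A minimal sketch of that file-based handshake, using the same /tmp/blocked.$$ naming scheme:

import os
import time

block_file = f"/tmp/blocked.{os.getpid()}"
open(block_file, "w").close()        # mark this PID as blocked

while os.path.exists(block_file):    # resume once someone removes the file
    time.sleep(1)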
I am writing a small Python script for Windows that runs a certain program using subprocess.Popen and then, after a while, kills it. I could use Popen.terminate or Popen.kill to end that program; however, as the docs indicate, both act the same way by calling the Windows API function TerminateProcess. That function terminates the program immediately, which may result in errors. I would like to replicate the effect of hitting the 'X' button, which asks the program to close itself so it can exit cleanly (ultimately via ExitProcess). Is that possible?
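Clicking the 'X' actually posts a WM_CLOSE message to the window, and the program then shuts itself down. A sketch that mimics this by posting WM_CLOSE to the process's top-level windows via ctypes; request_close and the notepad.exe target are illustrative assumptions, not a definitive implementation:

import ctypes
import subprocess
import time
from ctypes import wintypes

user32 = ctypes.windll.user32
WM_CLOSE = 0x0010

def request_close(pid):
    # Post WM_CLOSE to every top-level window owned by pid,
    # which is what clicking the 'X' button does.
    @ctypes.WINFUNCTYPE(wintypes.BOOL, wintypes.HWND, wintypes.LPARAM)
    def enum_proc(hwnd, lparam):
        owner = wintypes.DWORD()
        user32.GetWindowThreadProcessId(hwnd, ctypes.byref(owner))
        if owner.value == pid:
            user32.PostMessageW(hwnd, WM_CLOSE, 0, 0)
        return True                  # keep enumerating
    user32.EnumWindows(enum_proc, 0)

p = subprocess.Popen(['notepad.exe'])    # example target
time.sleep(5)
request_close(p.pid)                     # ask nicely first
try:
    p.wait(timeout=10)                   # give it time to shut down
except subprocess.TimeoutExpired:
    p.kill()                             # fall back to TerminateProcess

Note this only helps for programs that have a window; console programs need a different mechanism.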
I am fairly new to programming with Python, so forgive me if this is trivial.
I know that when programming microcontrollers it is possible to interrupt the main program (e.g. on a button press or due to a timer). The interrupt jumps to code outside the main program, which is then executed; afterwards, execution of the main program continues. The interrupt handler remembers where it interrupted the main program and returns to that exact point in the code. Is it possible to implement that in Python as well?
I looked into the "threading" library, but it doesn't seem to fit, since I don't want several tasks running in parallel. With threads it seems like I would have to check for an event on every second line of my main code to ensure that the program really is interrupted immediately.
If you need some context:
I am implementing a program using the "PsychoPy Coder" (PsychoPy v2021.2.3) on Windows 10.
I expect the program (when finished) to run for at least an hour, depending on the user. I want this program to be interrupted every 60 to 90 seconds for a "baseline task" the user has to solve. This baseline task will last about 6 to 9 seconds, and the actual program should continue afterwards. Also, I want the user to be able to abort the program with a specific button at any time.
I would be very thankful for any hint on an elegant way of programming this :) Have a nice day!
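Python has no hardware-style interrupts, but the standard library can approximate one: _thread.interrupt_main() raises KeyboardInterrupt in the main thread, so a background threading.Timer can "interrupt" the main code wherever it happens to be. A minimal sketch under those assumptions (the task bodies are placeholders; in PsychoPy you would substitute your trial loop):

import _thread
import random
import threading
import time

def schedule_interrupt():
    # Fire once, somewhere in the 60-90 second window.
    timer = threading.Timer(random.uniform(60, 90), _thread.interrupt_main)
    timer.daemon = True
    timer.start()

def baseline_task():
    time.sleep(7)        # placeholder for the 6-9 s baseline task

schedule_interrupt()
while True:
    try:
        time.sleep(0.1)  # placeholder for one step of the main program
    except KeyboardInterrupt:
        # The timer "interrupted" the main thread; run the baseline
        # task, then re-arm the timer and resume the main loop.
        baseline_task()
        schedule_interrupt()

Two caveats: the KeyboardInterrupt is only delivered while the interpreter is executing Python bytecode, so a long C-level call can delay it slightly, and a real Ctrl-C becomes indistinguishable from the timer.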
I have a shell script which I am calling in Python using os.system("./name_of_script")
I would prefer to make this call based on user input (i.e., a user types "start" and the call is made, along with some other work in the Python program; when a user types "stop" the script is terminated). But I find that this call takes over the terminal (I don't know the right word for it, but basically the whole program stalls on this call, since my shell script runs until a keyboard interrupt is received). Only when I do a keyboard interrupt does the shell script stop executing, and the rest of the code afterwards runs. Is this possible in Python?
Simply constructing a Popen object, as in:
p = subprocess.Popen(['./name_of_script'])
...starts the named program without blocking until it completes.
If you later want to see if it's done yet, you can check p.poll() for an update on its status.
This is also faster and safer than os.system(), in that it doesn't involve a shell (unless the script you're invoking runs one itself), so you aren't exposing yourself to shellshock, shell injection vulnerabilities, or other shell-related issues unnecessarily.
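A sketch of the start/stop loop from the question built on top of that (p.terminate() sends the platform's termination request, replacing the manual keyboard interrupt):

import subprocess

p = None
while True:
    cmd = input('> ').strip().lower()
    if cmd == 'start' and p is None:
        p = subprocess.Popen(['./name_of_script'])  # returns immediately
    elif cmd == 'stop' and p is not None:
        p.terminate()   # ask the script to stop (SIGTERM on Unix)
        p.wait()        # reap the child so it doesn't linger as a zombie
        p = None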
I'm designing a long running process, triggered by a Django management command, that needs to run on a fairly frequent basis. This process is supposed to run every 5 min via a cron job, but I want to prevent it from running a second instance of the process in the rare case that the first takes longer than 5 min.
I've thought about using a touch file that gets created when the management process starts and is removed when the process ends. A second management command process would then check that the touch file doesn't exist before running. But that seems problematic if a process dies abruptly without removing the touch file. It seems like there's got to be a better way to do that check.
Does anyone know any good tools or patterns to help solve this type of issue?
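One common pattern that fixes exactly the stale-file problem is a kernel-managed file lock: with fcntl.flock the lock vanishes automatically when the process dies, however abruptly. A minimal sketch (Unix-only; the lock path is an arbitrary choice):

import fcntl
import sys

lock_file = open('/tmp/my_command.lock', 'w')   # path is an assumption
try:
    # Non-blocking exclusive lock; raises if another instance holds it.
    fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError:
    sys.exit('Another instance is already running; exiting.')

# ... do the long-running work here ...
# The kernel releases the lock when this process exits, even if it
# crashes, so there is no stale lock file to clean up.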
Because of exactly that failure mode, I prefer to have a long-running process that gets its work off of a shared queue. By long-running I mean a process whose lifetime is longer than a single unit of work. The process is then controlled by a daemon service such as supervisord, which takes over restarting it when it crashes. This delegates the job to something that knows how to manage process lifecycles and frees you from worrying about the nitty-gritty of POSIX processes in your script.
If you have a queue, you also have the luxury of being able to spin up multiple processes that can each take jobs off of the queue and process them, but that sounds like it's out of scope of your problem.
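A minimal sketch of such a worker, assuming Redis as the shared queue (any broker would do) and a hypothetical process() job handler; supervisord just needs to be pointed at this script:

import json
import redis   # assumed broker; producers enqueue with r.rpush(...)

r = redis.Redis()

# Long-lived worker: its lifetime spans many units of work.
# supervisord restarts this process if it ever crashes.
while True:
    item = r.blpop('jobs', timeout=30)   # block until a job arrives
    if item is None:
        continue                          # idle timeout; block again
    _, payload = item
    process(json.loads(payload))          # hypothetical job handler

The cron job (or the Django management command) then only pushes work onto the queue instead of doing it inline, so overlapping runs just enqueue more jobs rather than duplicating work.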