How to interrupt a runaway ipython evaluation without terminating the parent process?

Sometimes it happens that an ongoing ipython evaluation won't respond to one, or even several, Ctrl-C's from the keyboard [1].
Is there some other way to goose the ipython process to abort the current evaluation, and come back to its "read" state?
Maybe with kill -SOMESECRETSIGNAL <pid>? I've tried a few (SIGINT, SIGTERM, SIGUSR1, ...) to no avail: either they have no effect (e.g. SIGINT), or they kill the ipython process. Or maybe some arcane ipython configuration? Some sentinel file? ... ?
1"Promptly enough", that is. Of course, it is impossible to specify precisely how promptly is "promptly enough"; it depends on the situation, the reliability of the delay's duration, the temperament of the user, the day's pickings at Hacker News, etc.

It depends on where execution is occurring when you decide to interrupt (in a Python function, in a lower-level library, ...). If this commonly occurs within a function you have created, you can try putting a try/except block in the function and catching KeyboardInterrupt exceptions. It may not break out of a low-level library (if that is indeed where execution is stuck), but it should prevent the ipython interpreter from exiting.
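For instance, a long-running function of your own could be wrapped like this (a minimal sketch; slow_computation and run_safely are illustrative names, not from any library):

    def slow_computation():
        # stand-in for the real work; a pure-Python loop like this can
        # be interrupted between bytecode instructions
        total = 0
        for i in range(10 ** 9):
            total += i
        return total

    def run_safely():
        try:
            return slow_computation()
        except KeyboardInterrupt:
            # swallow the interrupt so the surrounding ipython session
            # drops back to its prompt instead of dying
            print("interrupted; returning to the prompt")
            return None

Note that CPython only delivers Ctrl-C between bytecode instructions, so a call blocked inside a C extension still won't respond until it returns; that is exactly the situation where several Ctrl-C's appear to be ignored.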

Related

Is SIGINT intrinsically unreliable in python?

I have an application that relies on SIGINT for a graceful shutdown. I noticed that every once in a while it just keeps running. The cause turned out to be a generator in xml/etree/ElementTree.py.
If SIGINT arrives while that generator is being cleaned up, all exceptions are ignored (recall that the default action for SIGINT is to raise a KeyboardInterrupt). This is not unique to that particular generator, or even to generators in general.
From the Python docs:
"Due to the precarious circumstances under which __del__() methods are invoked, exceptions that occur during their execution are ignored, and a warning is printed to sys.stderr instead."
In over five years of programming in Python, this is the first time I have run into this issue.
If garbage collection can occur at any point, then SIGINT can also theoretically be ignored at any point, and I can't ever rely on it. Is that correct? Have I just been lucky this whole time?
Or is it something about this particular package and this particular generator?
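A minimal sketch of the failure mode described above (run it as a script and press Ctrl-C during the sleep):

    import time

    def gen():
        try:
            yield
        finally:
            # simulate slow cleanup; a Ctrl-C arriving during this sleep
            # raises KeyboardInterrupt inside __del__, where it is ignored
            time.sleep(5)

    g = gen()
    next(g)   # advance to the yield so there is cleanup work to do
    del g     # __del__ -> close() -> GeneratorExit -> the finally block
    print("still alive: the KeyboardInterrupt was swallowed")

CPython prints the "Exception ignored in" warning to stderr and carries on, so the interrupt never reaches your shutdown code.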

Isolating code with a Python thread

I'm writing a program in which I want to evaluate a piece of code asynchronously. I want it to be isolated from the main thread so that it can raise an error, enter an infinite loop, or just about anything else without disrupting the main program. I was hoping to use threading.Thread, but this has a major problem: I can't figure out how to stop it. I have tried Thread._stop(), but that frequently doesn't work. I end up with a thread that I can't control hogging both interpreter time and CPU power. The code in the thread doesn't open any files or do anything else that would cause problems if I hard-killed it.
Python's multiprocessing.Process.terminate() does this really well; unfortunately, initiating a process on Windows takes nearly a second, which is long enough to cause annoying delays in my GUI.
Does anyone know either a: how to kill a Python thread (I don't think I care how dirty the exit is), or b: how to speed up starting a process?
A third possibility would be a third-party library that provides an alternative method for asynchronous execution, but I've never heard of any such thing.
In my case, the best way to do this seems to be to maintain a running worker process, and send the code to it on an as-needed basis. If the process acts up, I kill it and then start a new one immediately to avoid any delay the next time.
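A minimal sketch of that pattern, assuming the work arrives as Python source strings; Worker and _worker are illustrative names, not from any library:

    import multiprocessing as mp

    def _worker(conn):
        # long-lived child: receive source strings, execute them, and
        # report success or failure back over the pipe
        while True:
            src = conn.recv()
            try:
                exec(src, {})
                conn.send(("ok", None))
            except Exception as exc:
                conn.send(("error", repr(exc)))

    class Worker:
        def __init__(self):
            self._spawn()

        def _spawn(self):
            self.conn, child = mp.Pipe()
            self.proc = mp.Process(target=_worker, args=(child,), daemon=True)
            self.proc.start()

        def run(self, src, timeout=5.0):
            self.conn.send(src)
            if self.conn.poll(timeout):
                return self.conn.recv()
            # the worker is hung or looping: hard-kill it and respawn
            # immediately so the next call pays no startup delay
            self.proc.terminate()
            self._spawn()
            return ("killed", None)

On Windows, any top-level use of this needs the usual if __name__ == '__main__' guard, since child processes are started by importing the main module.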

Python: Continuously and cancelably repeat execution with fixed interval

What is the best way to continuously repeat the execution of a given function at a fixed interval while being able to terminate the executor (thread or process) immediately?
Basically I know two approaches:
use multiprocessing and a function with an infinite loop and time.sleep at the end. Processing is terminated with process.terminate() in any state.
use threading and constantly recreate timers at the end of the thread function. Processing is terminated by timer.cancel() while the thread is sleeping.
(Both "in any state" and "while sleeping" are fine, even though the latter may not be immediate.) The problem is that I have to use both multiprocessing and threading: threading appears not to work on ARM (some fuzzy interaction of the python interpreter and vim; outside of vim everything is fine) (I was using the second approach there and have not tried threading with a cycle; no code from that attempt is left), while multiprocessing spawns way too many processes, which I would rather not see unless they are really required. This leaves me coding two different approaches, even though threading with a cycle is just a few more imports providing drop-in replacements for all the multiprocessing pieces, wrapped in if/else (except that there is no thread.terminate()). Is there some better way to do the job?
The code currently in use is here (with a cycle for both jobs), but I do not think it will be of much use in answering the question.
Update: The reason I am using this solution is a set of functions that display file status (and some other things, like the branch) of version control systems in the vim statusline. These statuses must be updated, but updating them immediately cannot be done without using hooks, and I have no idea how to set hooks temporarily and remove them on vim quit without possibly spoiling the user's configuration. Thus the standard solution is a cache that expires after N seconds. But when the cache has expired I need to make an expensive shell call, and the delay is noticeable, the more so the heavier the IO load. What I am implementing now updates the values for viewed buffers every N seconds in a separate process, so the delays bother that process and not me. Threads are likely to work as well, because the GIL does not affect calls to external programs.
I'm not clear on why a single long-lived thread that loops infinitely over the tasks wouldn't work for you, or why you end up with so many processes in the multiprocessing option.
My immediate reaction would have been a single thread with a queue to feed it things to do. But I may be misunderstanding the problem.
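A minimal sketch of that idea, using threading.Event so that even the sleep is cancelable (Repeater is an illustrative name, not a stdlib class):

    import threading

    class Repeater(threading.Thread):
        # call fn every `interval` seconds until stop() is requested
        def __init__(self, interval, fn):
            super().__init__(daemon=True)
            self.interval = interval
            self.fn = fn
            self._stop_event = threading.Event()

        def run(self):
            # wait() doubles as the sleep: it returns True as soon as
            # stop() sets the event, so cancellation works mid-interval
            while not self._stop_event.wait(self.interval):
                self.fn()

        def stop(self):
            self._stop_event.set()

Usage would be along the lines of r = Repeater(2.0, update_status); r.start(); ...; r.stop(). The stop() takes effect immediately while the thread is between calls, though it cannot abort fn once it has started; for the statusline case that seems acceptable, since the expensive part is an external shell call.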
I do not know how to do it simply and/or cleanly in Python, but I was wondering whether you could take advantage of an existing system scheduler, e.g. crontab on *nix systems. There are Python interfaces to crontab, and one of them might satisfy your needs.

Python Multiprocessing respawn crashed processes

I want to create some worker processes, and if they crash due to an exception, I would like them to respawn. Aside from the is_alive method in the multiprocessing module, I can't seem to find a way to do this.
This would require me to iterate over all the running processes (after a sleep) and check if they are alive. This is essentially a busy loop; I was wondering if there is a better solution that will wake my program up in the event that any one of my worker processes crashes. Once it wakes up, I would like to log the exception that crashed the worker and launch another process.
Polling to see if the child processes are alive should work fine, since it's a low-overhead check and you don't need to check that often.
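A minimal sketch of such a polling supervisor, where target stands in for whatever your worker function is:

    import time
    import multiprocessing as mp

    def supervise(target, n=4, poll_interval=1.0):
        # keep n workers running `target`; replace any that have exited
        procs = [mp.Process(target=target) for _ in range(n)]
        for p in procs:
            p.start()
        while True:
            time.sleep(poll_interval)
            for i, p in enumerate(procs):
                if not p.is_alive():
                    print("worker %d died (exitcode %r); respawning" % (i, p.exitcode))
                    procs[i] = mp.Process(target=target)
                    procs[i].start()
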
The first answer to this (similar) question has a Python code example: Multi-server monitor/auto restarter in python
You can wrap your worker processes in try/except blocks where the except pushes a message onto a pipe before raising. Of course, polling isn't really worse than this and it's simpler.
If you're on a unix-like system, your main program can be notified of dead children by installing a signal handler. Look up your operating system's documentation on signal(), especially SIGCHLD. I'm afraid I don't remember whether Windows covers SIGCHLD with its very limited POSIX signal support.
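For the Unix case, a minimal sketch of a SIGCHLD handler that reaps exited children without blocking (the respawn logic is left as a comment):

    import os
    import signal

    def on_sigchld(signum, frame):
        # reap every child that has exited so far; WNOHANG makes
        # waitpid return immediately instead of blocking
        while True:
            try:
                pid, status = os.waitpid(-1, os.WNOHANG)
            except ChildProcessError:   # no children left at all
                break
            if pid == 0:                # children exist but none have exited
                break
            print("child %d exited with status %d" % (pid, status))
            # respawn logic would go here

    signal.signal(signal.SIGCHLD, on_sigchld)

One caveat: reaping children behind the multiprocessing module's back can confuse its own join()/exitcode bookkeeping, so this approach fits best with processes you fork and manage yourself.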

In Windows using Python, how do I kill my process?

Edit: Looks like a duplicate, but I assure you, it's not. I'm looking to kill the current running process cleanly, not to kill a separate process.
The problem is the process I'm killing isn't spawned by subprocess or exec. It's basically trying to kill itself.
Here's the scenario: The program does cleanup on exit, but sometimes this takes too long. I am sure that I can terminate the program, because the first step in the quit saves the database. How do I go about doing this?
cannot use taskkill, as it is not available in some Windows installs e.g. home editions of XP
tskill also doesn't work
win32api.TerminateProcess(handle, 0) works, but I'm concerned it may cause memory leaks because I won't have the opportunity to close the handle (the program stops immediately after calling TerminateProcess). Note: yes, I am force-quitting it, so there are bound to be some unfreed resources, but I want to minimize this as much as possible (I will only do it if quitting takes an unbearable amount of time, for a better user experience), and I don't think Python will run garbage collection if it's force-quit.
I'm currently doing the last one, as it just works. I'm concerned, though, about the unfreed handle. Any thoughts/suggestions would be very much appreciated!
If a process self-terminates, then you don't need to worry about garbage collection. The OS will automatically reclaim all memory resources used by that process, so you don't have to worry about memory leaks. A memory leak is when a process is running and uses more and more memory as time goes by.
So yes, terminating your process this way isn't very "clean", but there won't be any ill side-effects.
If I understand your question, you're trying to get the program to shut itself down. This is usually done with sys.exit().
TerminateProcess and taskkill /f do not free resources and will result in memory leaks.
Here is the MS quote on TerminateProcess:
{ ... Terminating a process does not cause child processes to be terminated.
Terminating a process does not necessarily remove the process object from the system. A process object is deleted when the last handle to the process is closed. ... }
MS heavily uses COM and DCOM, which share handles and resources that the OS does not and cannot track. ExitProcess should be used instead, if you do not intend to reboot often: it allows a process to properly free the resources it used. Linux does not have this problem because it does not use COM or DCOM.
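A minimal sketch of the immediate-exit options from Python, assuming the database save has already run: os._exit() skips atexit handlers, __del__ methods, and garbage collection, and on Windows goes through the C runtime's _exit, which ends the process with ExitProcess rather than TerminateProcess.

    import os

    # after the database save (the one step that must not be skipped):
    os._exit(0)   # no atexit handlers, no __del__ methods, no GC

    # or call ExitProcess directly via ctypes:
    # import ctypes
    # ctypes.windll.kernel32.ExitProcess(0)
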
