I would like to kill a function that executes for too long. Importantly, this function is inside a C extension (wrapped in Cython), and I would like the solution to work in a multithreaded environment. Since it is wrapped in Cython, the thread running it may hold the GIL.
I have no control whatsoever over what happens inside this extension (and I suspect the code will not respond to interrupts).
I'm fairly certain this code will only be run on Unix machines. But the question Python kill hanging function does not apply, because I think signals would not work in a multithreaded environment (AFAIK it is undefined which thread will catch them) --- but I might be wrong on this one :) so correct me.
Is there any way for me to resolve this without spawning new processes?
My solution is to wrap this function in another Python process and, if needed, kill that process.
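A minimal sketch of that idea (run_with_timeout and the other names here are mine, not from any library): run the call in a child process and terminate the child if it overruns. Because the child is a separate interpreter, it does not matter whether the wrapped C code holds the GIL or ignores interrupts; func, its arguments, and its result do have to be picklable, though.

import multiprocessing

def _call(queue, func, args):
    # Runs in a separate interpreter: a C extension hanging (or holding
    # the GIL) in here cannot freeze the parent process.
    queue.put(func(*args))

def run_with_timeout(func, args=(), timeout=10.0):
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=_call, args=(queue, func, args))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():
        proc.terminate()  # SIGTERM on Unix; needs no cooperation from the C code
        proc.join()
        raise RuntimeError("call timed out and was killed")
    return queue.get(timeout=1.0)  # small timeout in case the child crashed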
A piece of advice for anyone who googles this question: since process startup (starting the interpreter, loading modules, and then loading data into memory) can take a couple of seconds, you need to batch your function calls so that this overhead doesn't kill you (so there is no truly reusable solution).
An example solution was posted to another question: How to interrupt native extension code without killing the interpreter?
Related
I have a Flask app that uses a large integer-optimization library, Google OR-Tools. The underlying package is written in C++, but there is a Python API that I use to call it. When I get a very long-running task, it seems to block the other tasks and the server becomes unresponsive. The obvious solution is to move the solver into a separate service, which I will implement when I have the time.
But I'm curious whether there is something I'm missing: as I read it, Python's threading switches threads after every X bytecode instructions, which should allow the other tasks to take over. Does that also happen in the case of Gevent? Or is it actually allowing the other threads to run, but greedily capturing all the resources?
It would be really nice if someone with some insight could tell me what is and is not possible.
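For illustration (my own toy example, unrelated to OR-Tools): two CPU-bound pure-Python threads do interleave, because the interpreter switches threads at regular points; a single long call into a C extension that never releases the GIL offers no such switch points, so everything else stalls until it returns.

import threading

def count(name, n=5):
    # Pure-Python loop: the interpreter periodically switches threads,
    # so both threads make visible, interleaved progress.
    for i in range(n):
        sum(range(200000))  # burn CPU while holding the GIL between switches
        print("%s %d" % (name, i))

a = threading.Thread(target=count, args=("A",))
b = threading.Thread(target=count, args=("B",))
a.start(); b.start()
a.join(); b.join()
# A single long call into a C extension that never releases the GIL
# would reach no switch point, so nothing else could run until it returned.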
I am running an HTTP server (homemade, in C++) that embeds a Python interpreter for server-side scripting. This is a forking server, but I don't use any threading in any parent process. I don't do any weird things with the Python interpreter (other than the forks).
In one of the scripts, however, a call to time.sleep(0.1) made in another thread can take up to one minute, especially the first call.
while not self.should_stop():
    # other code
    print "[PYTHON]: Sleeping"
    time.sleep(0.1)
    print "[PYTHON]: Slept, checking should_stop"
I know that this is where it's hanging, because the logs show only the first print, and the second much, much later.
Additional information:
the CPU is not pegged (~5%)
this is Python 2.7 on Ubuntu
These are threading threads; I do use locks and events where necessary.
I don't import threading in any process that will ever do a fork
Python is initialized before the forks; this works great elsewhere (no problems in the last 6 months)
Python can run only one threading.Thread at a time, so if there are many threads, the interpreter has to constantly switch between them: one thread runs while the others are frozen or, in other words, interrupted.
But an interrupted thread isn't told that it's frozen; it sort of falls unconscious for a while and is then woken up, continuing its work from where it was interrupted. So a sleep of, say, 0.5 seconds for one particular thread may in fact turn out to be longer in real life.
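A small experiment of my own makes this visible: with several CPU-bound threads competing for the GIL, a thread that asks for a short sleep can measure noticeably more than it requested, because it must reacquire the GIL when it wakes.

import threading
import time

def busy(seconds=3):
    # CPU-bound pure-Python work: competes for the GIL.
    end = time.time() + seconds
    while time.time() < end:
        pass

def sleeper():
    start = time.time()
    time.sleep(0.1)
    # The wakeup must reacquire the GIL, so the measured time can exceed 0.1 s.
    print("asked for 0.1 s, got %.3f s" % (time.time() - start))

for _ in range(8):
    threading.Thread(target=busy).start()
threading.Thread(target=sleeper).start()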
Fixed!
As it turns out, the main thread (the one embedding the interpreter, in C++) doesn't automatically release the GIL when it's not executing Python code (as I had imagined). You have to release the GIL manually, with Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS, as specified here.
Releasing the GIL this way lets the other threads run while the embedding thread does IO-intensive work (like, in my case, reading from or writing to the network). The embedding thread cannot run any Python code while the GIL is released, though.
I'm writing a program in which I want to evaluate a piece of code asynchronously. I want it to be isolated from the main thread so that it can raise an error, enter an infinite loop, or just about anything else without disrupting the main program. I was hoping to use threading.Thread, but this has a major problem; I can't figure out how to stop it. I have tried Thread._stop(), but that frequently doesn't work. I end up with a thread that I can't control hogging both interpreter time and CPU power. The code in the thread doesn't open any files or do anything else that would cause problems if I hard-killed it.
Python's multiprocessing.Process.terminate() does this really well; unfortunately, initiating a process on Windows takes nearly a second, which is long enough to cause annoying delays in my GUI.
Does anyone know either a: how to kill a Python thread (I don't think I care how dirty the exit is), or b: how to speed up starting a process?
A third possibility would be a third-party library that provides an alternative method for asynchronous execution, but I've never heard of any such thing.
In my case, the best way to do this seems to be to maintain a running worker process, and send the code to it on an as-needed basis. If the process acts up, I kill it and then start a new one immediately to avoid any delay the next time.
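A rough sketch of that pattern (all names here are mine), assuming the work can be described by picklable (function, args) pairs sent over a pipe:

import multiprocessing

def _worker_loop(conn):
    # Long-lived worker: receives (func, args) pairs, sends results back.
    while True:
        func, args = conn.recv()
        conn.send(func(*args))

class RestartableWorker(object):
    def __init__(self):
        self._spawn()

    def _spawn(self):
        self._conn, child = multiprocessing.Pipe()
        self._proc = multiprocessing.Process(target=_worker_loop, args=(child,))
        self._proc.daemon = True
        self._proc.start()

    def run(self, func, args=(), timeout=5.0):
        self._conn.send((func, args))
        if self._conn.poll(timeout):
            return self._conn.recv()
        # Misbehaving task: kill the worker and respawn immediately,
        # so the startup cost is paid now rather than on the next call.
        self._proc.terminate()
        self._proc.join()
        self._spawn()
        raise RuntimeError("task timed out; worker restarted")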
What is the best way to continuously repeat the execution of a given function at a fixed interval while being able to terminate the executor (thread or process) immediately?
Basically I know two approaches (both sketched below):
use multiprocessing and a function with an infinite loop and time.sleep at the end; processing is terminated with process.terminate() in any state.
use threading and constantly recreate timers at the end of the thread function; processing is terminated by timer.cancel() while the timer is sleeping.
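Minimal sketches of the two approaches, with a placeholder task() standing in for the real function:

import multiprocessing
import threading
import time

def task():
    print("tick")

# Approach 1: multiprocessing + infinite loop.
def loop():
    while True:
        task()
        time.sleep(1.0)

# Approach 2: a threading.Timer that reschedules itself.
def tick():
    global timer
    task()
    timer = threading.Timer(1.0, tick)
    timer.start()

if __name__ == "__main__":
    # Process variant: terminate() kills it in any state.
    proc = multiprocessing.Process(target=loop)
    proc.start()
    time.sleep(3.5)
    proc.terminate()

    # Timer variant: cancel() only takes effect while the timer
    # is waiting, not while task() itself is running.
    timer = threading.Timer(1.0, tick)
    timer.start()
    time.sleep(3.5)
    timer.cancel()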
(Both "in any state" and "while sleeping" are fine, even though the latter may not be immediate.) The problem is that I have to use both multiprocessing and threading: threading appears not to work on ARM (some fuzzy interaction of the Python interpreter and vim; outside of vim everything is fine; I was using the second approach there, have not tried threading with a loop, and no code from that attempt is left), while multiprocessing spawns far too many processes, which I would prefer not to see unless really required. So I end up coding two different approaches, even though threading with a loop would be just a few more imports and a drop-in replacement for all the multiprocessing pieces wrapped in if/else (except that there is no thread.terminate()). Is there some better way to do the job?
The code I currently use is here (currently with a loop for both jobs), but I do not think it will be of much use in answering the question.
Update: the reason I am using this solution is a set of functions that display file status (and some other things, like the current branch) for version control systems in the vim statusline. These statuses must be updated, but immediate updates cannot be done without hooks, and I have no idea how to set hooks temporarily and remove them on vim quit without possibly spoiling the user's configuration. The standard solution is thus a cache that expires after N seconds. But when the cache expires I need to make an expensive shell call, and the delay is noticeable; the heavier the IO load, the more noticeable it is. What I am implementing now updates the values for viewed buffers every N seconds in a separate process, so the delays bother that process and not me. Threads are also likely to work, because the GIL does not affect calls to external programs.
I'm not clear on why a single long-lived thread that loops over the tasks wouldn't work for you, or why you end up with many processes in the multiprocessing option.
My immediate reaction would have been a single thread with a queue to feed it things to do, along the lines of the sketch below. But I may be misunderstanding the problem.
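A minimal Python 3 sketch of that suggestion (names are mine): one long-lived worker thread fed through a queue, stopped by a sentinel rather than by killing the thread. For the fixed-interval case, tasks.get(timeout=interval) could run the periodic job whenever the queue is quiet.

import queue
import threading

_STOP = object()   # sentinel telling the worker to exit

def worker(tasks):
    # Single long-lived thread: blocks on the queue, runs whatever
    # callable arrives, and exits as soon as the sentinel is seen.
    while True:
        task = tasks.get()
        if task is _STOP:
            break
        task()

tasks = queue.Queue()
t = threading.Thread(target=worker, args=(tasks,))
t.start()

tasks.put(lambda: print("hello"))  # feed it things to do
tasks.put(_STOP)                   # stops the worker almost immediately
t.join()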
I do not know how to do this simply and/or cleanly in Python, but I was wondering whether you could take advantage of an existing system scheduler, e.g. crontab on *nix systems.
There is a Python API for it that might satisfy your needs.
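If the API meant here is the third-party python-crontab package (an assumption on my part; the command path below is hypothetical), managing an entry could look roughly like this:

# Sketch assuming the third-party `python-crontab` package
# (pip install python-crontab).
from crontab import CronTab

cron = CronTab(user=True)                      # current user's crontab
job = cron.new(command='/usr/bin/python /home/me/update_status.py')
job.minute.every(1)                            # run every minute
cron.write()                                   # persist to the crontab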
I'm using OpenERP, a Python-based ERP, which uses multiple threads (one thread per client, etc.). I would like to use multiprocessing.Process() to fork() and call a long-running method.
My question is: what will happen to the parent's threads? Will they be copied and continue to run? Will the child process call accept() on the server socket?
Thanks for your answers,
Forking does not copy threads: only the thread that calls fork() survives in the child. So be very careful about forking a multithreaded application, as it can cause unpredictable side effects (e.g. if the fork happens while some thread is executing in a mutex-protected critical section); something can really end up broken in your forked process unless you know the code you're forking perfectly.
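A quick way to see this from Python (a sketch of mine; Unix only, since it uses os.fork()):

import os
import threading
import time

def background():
    while True:
        time.sleep(1)

t = threading.Thread(target=background)
t.daemon = True
t.start()

print("parent threads: %d" % threading.active_count())     # 2

pid = os.fork()
if pid == 0:
    # Child: only the thread that called fork() exists here.
    print("child threads: %d" % threading.active_count())  # 1
    os._exit(0)
os.waitpid(pid, 0)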
Although everything I said above is true, there is a workaround (at least on Linux): pthread_atfork() registers callbacks that run around fork(), so you can recreate all the needed threads in the child. However, it applies to C applications; it is not exposed to Python ones.
For further information you can refer to:
Python issue tracker on this problem - http://bugs.python.org/issue6923
Look around the web for implementations of similar ideas, for example: http://code.google.com/p/python-atfork/