Python time.sleep taking much longer

I am running an HTTP server (homemade, in C++) that embeds a Python interpreter for server-side scripting. This is a forking server, but I don't use any threading in any parent process. I don't do any weird things with the Python interpreter (other than the forks).
In one of the scripts, however, a call to time.sleep(0.1) made in another thread can take up to one minute, especially the first call.
while not self.should_stop():
    # other code
    print "[PYTHON]: Sleeping"
    time.sleep(0.1)
    print "[PYTHON]: Slept, checking should_stop"
I know that this is where it's hanging, because the logs show only the first print, and the second much, much later.
Additional information:
the CPU is not pegged (~5%)
this is Python 2.7 on Ubuntu
These are threading threads; I do use locks and events where necessary.
I don't import threading in any process that will ever do a fork
Python is initialized before the forks; this works great elsewhere (no problems in the last 6 months)

Python can run only one threading.Thread at a time, so if there are many threads, the interpreter has to constantly switch between them: one thread runs while the others are suspended, or in other words, interrupted.
But an interrupted thread isn't told that it has been suspended; it sort of falls unconscious for a while, is later woken up, and continues its work from where it was interrupted. So a 0.1-second sleep for one particular thread may in fact turn out to be longer in real life.

Fixed!
As it turns out, the main thread (the one embedding the interpreter, in C++) doesn't actually release the GIL when it's not executing Python code (as I imagined). You actually have to release the GIL manually, with Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS, as specified here.
This makes the runtime release the GIL so other threads can run during IO-intensive tasks (like, in my case, reading or writing to/from the network). No running Python code while doing that, though.
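The underlying effect can be reproduced from pure Python, which may make the diagnosis clearer. Below is a minimal Python 3 sketch (an assumption on my part, since the question itself is on 2.7, where the analogous knob is sys.setcheckinterval): stretch the switch interval so a CPU-bound thread keeps the GIL, and another thread's time.sleep(0.1) appears to take seconds.

import sys
import threading
import time

sys.setswitchinterval(60)  # let a running thread keep the GIL for up to ~60 s

def sleeper():
    t0 = time.time()
    time.sleep(0.1)  # the sleep itself releases the GIL...
    # ...but the thread must re-acquire the GIL before the next line can run
    print("sleep(0.1) actually took %.1f s" % (time.time() - t0))

threading.Thread(target=sleeper).start()

deadline = time.time() + 3
while time.time() < deadline:  # CPU-bound loop that keeps holding the GIL
    pass
time.sleep(1)  # the main thread finally blocks; the sleeper reports ~3 s, not 0.1 s

The embedded case above is the same situation, except that the GIL holder was C++ code outside the eval loop, so nothing could force it to let go until it released the GIL explicitly.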

Related

How does python3 do context-switching?

I watched an excellent presentation on the GIL and how, when running in the interpreter, only a single thread can run at a time. It also seemed that Python is not very intelligent about switching between threads.
If I am threading some operation that only runs in the interpreter, and it is not particularly CPU heavy, and I use a thread lock where only one thread can run at a time for this relatively short interpreter-bound operation, will that lock actually make anything run slower? As opposed to if the lock were not necessary and all threads could run concurrently.
If all but one thread is blocked on a lock, will the Python interpreter know not to context switch?
Edit:
By 'making things run slower' I mean: if Python keeps context switching to a bunch of blocked threads, that might be a performance decrease even though those threads don't actually run.
Larry Hastings (a core CPython developer) has a great talk that covers this subject, called "Python's Infamous GIL". If you skip to around 11:40 he gives the answer to your question.
From the talk: the way Python threads work with the GIL is with a simple counter. After every 100 bytecodes executed, the currently executing thread is supposed to release the GIL in order to give other threads a chance to execute code. This behavior is essentially broken in Python 2.7 because of the thread release/acquire mechanism. It has been fixed in Python 3.
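For reference, both knobs are exposed from Python; a quick way to see the defaults (these are the standard sys APIs):

import sys

# Python 2: switching is counted in bytecodes; sys.getcheckinterval() -> 100,
# tunable with sys.setcheckinterval(n).
# Python 3.2+: the "new GIL" uses a time slice instead:
print(sys.getswitchinterval())  # 0.005 seconds by default
sys.setswitchinterval(0.001)    # request more frequent switching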
When you use a thread lock, Python will only execute the threads that are not blocked on the lock. So if you have several threads sharing one lock, only one thread executes at the same time; Python will not start executing a blocked thread until it can acquire the lock. Locks are there so you can have shared state between threads without introducing bugs.
If you have several threads and only one can run at a time because of a lock, then in theory your program will take longer to execute. In practice you should benchmark, because the results will surprise you.
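A minimal sketch of that serialization (pure stdlib; the sleep stands in for the short interpreter-bound operation): the four workers run strictly one at a time, and a thread blocked in acquire() is parked by the OS rather than busily switched into.

import threading
import time

lock = threading.Lock()

def worker(name):
    with lock:  # only one thread may run this block at a time
        time.sleep(0.01)  # stand-in for the short interpreter-bound operation
        print(name, "done")

threads = [threading.Thread(target=worker, args=("t%d" % i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()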
Python is not very intelligent about switching between threads
Python threads work a certain way :-)
if I use a thread lock where only one thread can run at a time... will that lock actually make anything run slower
Err, no, because there is nothing else runnable, so nothing else could run slower.
If all but one thread is blocked on a lock, will the Python interpreter know not to context switch?
Yes. The kernel knows which threads are runnable. If no other threads can run, then logically speaking (as far as the thread is concerned) the Python interpreter won't context switch away from the only runnable thread. The thread doesn't know when it has been switched away from (how can it? it isn't running).

Python C API - running all python threads in the main thread (or faking it)

I'm adding python scripting support to an application.
This application has an API which is not thread safe, and I cannot change this aspect.
One requirement I have is being able to run multiple independent scripts, thus I have to run sub-interpreters in separate threads.
Although, due to the GIL in CPython, no more than one thread runs Python code at a time, whatever thread currently holds the GIL still runs concurrently with the main thread's own (non-Python) code, and this will cause problems due to the thread-unsafe API of the application.
To summarize: I'm looking for a way to run all python code (__main__, threads, every sub-interpreter) in the main thread.
How can this be solved?
Should the main thread always hold the GIL, and have a function that, in a cooperative-multitasking fashion, would release it and reacquire it x milliseconds later, thus allowing the interpreter to do some work? This doesn't look right: such a function would consume x milliseconds even when Python has no work to do.

Would setting a mutex manually improve performance?

My Python program is definitely CPU bound, but 40% to 55% of the time is spent in C code in the z3 solver (which doesn't know anything about the GIL), where each single call to the C function (z3_optimize_check) takes almost a minute to complete (so far the parallel_enable parameter still results in this function working in single-threaded mode and blocking the main thread).
I can't use multiprocessing, as z3 objects aren't serialization-friendly (unless someone here can prove otherwise). Since there are several tasks (where each task adds more z3 work to a dict for other tasks), I initially set up multithreading directly. But the GIL definitely hurts performance more than it helps (especially with hyperthreading), despite the huge time spent in the solver.
But if I set up a blocking mutex manually (through threading.Lock.acquire()) in the z3py module just after the switch from C code, which would allow another thread to run only if all other threads are performing solver work, would this remove the GIL performance penalty (since there would be only one thread at a time executing Python code, and it would always be the same one until the lock is released before z3_optimize_check)?
I mean, would using threading.Lock.acquire() trigger calls to PyEval_SaveThread() as if z3 were doing it directly?
so far the parallel_enable parameter still results in this function working in single-threaded mode and blocking the main thread
I think you are misunderstanding that. z3 running in parallel mode means that you call it from a single Python thread, and then it spawns multiple OS-level threads for itself, does the job, cleans up the threads, and returns the result to you. It does not miraculously enable Python to run without the GIL.
From the viewpoint of Python, it still does one thing at a time, and that one thing is making the call to z3. And it holds the GIL for the entire time. So if you see more than one CPU core/thread utilized while the calculation is running, that is the effect of z3's parallel mode internally branching to multiple threads.
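For completeness, this is how z3's parallel mode is typically switched on from z3py (assuming a reasonably recent z3 build; the parameter name belongs to z3, not to Python):

import z3

z3.set_param("parallel.enable", True)  # z3 spawns its own worker threads internally
# The Python thread that makes the solver call still blocks there, holding the GIL.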
There is another mechanism, releasing the GIL, which is what blocking I/O operations do. It does not happen by magic; there is a pair of calls for that:
PyThreadState* PyEval_SaveThread()
Release the global interpreter lock (if it has been created) and reset the thread state to NULL, returning the previous thread state (which is not NULL). If the lock has been created, the current thread must have acquired it.
void PyEval_RestoreThread(PyThreadState *tstate)
Acquire the global interpreter lock (if it has been created) and set the thread state to tstate, which must not be NULL. If the lock has been created, the current thread must not have acquired it, otherwise deadlock ensues.
These are C calls, so they are accessible to extension developers. When a developer knows that the code will run for a long time without needing access to Python internals, PyEval_SaveThread() can be used, and then Python can proceed with other Python threads. When the long-running work is done, the thread can re-introduce itself and apply for the GIL using PyEval_RestoreThread().
But, these things happen only if developers make them happen. And with z3 it might not be the case.
To provide a direct answer to your question: no, Python code cannot release the GIL and keep it released, as the GIL is the lock a Python thread has to hold in order to proceed. So whenever a Python "instruction" returns, the GIL is held again.
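What threading.Lock.acquire() does give you can be seen from the Python side: while a thread is blocked inside acquire(), it is not holding the GIL, so other Python threads keep running; but as soon as it wakes up, it competes for the GIL again. A small illustrative sketch (not z3-specific):

import threading
import time

lock = threading.Lock()
lock.acquire()  # the main thread takes the lock first

def blocked():
    lock.acquire()  # blocks here WITHOUT holding the GIL
    print("worker finally got the lock")

threading.Thread(target=blocked).start()

t0 = time.time()
total = sum(range(10**7))  # pure-Python work, unhindered by the blocked thread
print("main thread did %.2f s of work meanwhile" % (time.time() - t0))
lock.release()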
Apparently I somehow managed not to include the link I wanted: the calls are documented at https://docs.python.org/3/c-api/init.html#thread-state-and-the-global-interpreter-lock (and the linked section discusses what I briefly summarized).
Z3 is open source (https://github.com/Z3Prover/z3), and the source code contains neither PyEval_SaveThread nor the wrapper shortcut Py_BEGIN_ALLOW_THREADS.
But it does have a parallel Python example, https://github.com/Z3Prover/z3/blob/master/examples/python/parallel.py, with
from multiprocessing.pool import ThreadPool
So I would assume that it might be tested and working with multiprocessing.pool.ThreadPool (which is thread-based, despite living in the multiprocessing package).
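A minimal sketch of that pattern (solve_one is a hypothetical stand-in for a long z3 call; the threads only actually overlap if the C side releases the GIL):

from multiprocessing.pool import ThreadPool  # threads, despite the module name
import time

def solve_one(task):
    # hypothetical stand-in for a long z3_optimize_check-style call;
    # time.sleep releases the GIL, so the workers overlap here
    time.sleep(0.5)
    return task * 2

with ThreadPool(4) as pool:
    print(pool.map(solve_one, range(8)))  # ~1 s with 4 workers, not ~4 s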

Isolating code with a Python thread

I'm writing a program in which I want to evaluate a piece of code asynchronously. I want it to be isolated from the main thread so that it can raise an error, enter an infinite loop, or just about anything else without disrupting the main program. I was hoping to use threading.Thread, but this has a major problem; I can't figure out how to stop it. I have tried Thread._stop(), but that frequently doesn't work. I end up with a thread that I can't control hogging both interpreter time and CPU power. The code in the thread doesn't open any files or do anything else that would cause problems if I hard-killed it.
Python's multiprocessing.Process.terminate() does this really well; unfortunately, initiating a process on Windows takes nearly a second, which is long enough to cause annoying delays in my GUI.
Does anyone know either a: how to kill a Python thread (I don't think I care how dirty the exit is), or b: how to speed up starting a process?
A third possibility would be a third-party library that provides an alternative method for asynchronous execution, but I've never heard of any such thing.
In my case, the best way to do this seems to be to maintain a running worker process, and send the code to it on an as-needed basis. If the process acts up, I kill it and then start a new one immediately to avoid any delay the next time.
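One way that pattern might look, as a sketch rather than a drop-in solution (worker and start_worker are names I made up; code snippets arrive as source strings over a pipe):

import multiprocessing as mp

def worker(conn):
    while True:
        src = conn.recv()  # wait for the next piece of code
        try:
            exec(src, {})  # run it isolated from the main process
            conn.send("ok")
        except Exception as exc:
            conn.send("error: %r" % (exc,))

def start_worker():
    parent_end, child_end = mp.Pipe()
    proc = mp.Process(target=worker, args=(child_end,), daemon=True)
    proc.start()  # the ~1 s Windows startup cost is paid here, up front
    return proc, parent_end

if __name__ == "__main__":
    proc, conn = start_worker()
    conn.send("print('hello from the worker')")
    print(conn.recv())
    # If a snippet hangs: proc.terminate(), then start_worker() again
    # immediately, so the next request finds a fresh worker already running.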

Python: Continuously and cancelably repeat execution with fixed interval

What is the best way to continuously repeat the execution of a given function at a fixed interval while being able to terminate the executor (thread or process) immediately?
Basically I know two approaches:
use multiprocessing and a function with an infinite cycle and time.sleep at the end; processing is terminated with process.terminate() in any state.
use threading and constantly recreate timers at the end of the thread function; processing is terminated by timer.cancel() while sleeping.
(Both “in any state” and “while sleeping” are fine, even though the latter may not be immediate.) The problem is that I have to use both multiprocessing and threading: the latter appears not to work on ARM (some fuzzy interaction of the Python interpreter and vim; outside of vim everything is fine) (I was using the second approach there; I have not tried threading + cycle; no code is currently left), and the former spawns way too many processes, which I would like not to see unless really required. This leads to the problem of having to code two different approaches, while threading with a cycle is just a few more imports for drop-in replacements of all the multiprocessing stuff wrapped in if/else (except that there is no thread.terminate()). Is there some better way to do the job?
The currently used code is here (currently with a cycle for both jobs), but I do not think it will be of much use for answering the question.
Update: the reason I am using this approach is functions that display file status (and some other things, like the branch) of version control systems in the vim statusline. These statuses must be updated, but updating them immediately cannot be done without hooks, and I have no idea how to set hooks temporarily and remove them on vim quit without possibly spoiling the user's configuration. Thus the standard solution is a cache expiring after N seconds. But when the cache expires I need to make an expensive shell call, and the delay is noticeable, the more so the heavier the IO load is. What I am implementing now is updating values for viewed buffers every N seconds in a separate process, so the delays affect that process and not me. Threads are likely to work as well, because the GIL does not affect calls to external programs.
I'm not clear on why a single long-lived thread that loops infinitely over the tasks wouldn't work for you, or why you end up with many processes in the multiprocessing option.
My immediate reaction would have been a single thread with a queue to feed it things to do. But I may be misunderstanding the problem.
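Building on that: with threads, a single long-lived worker does not even need recreated timers, because threading.Event.wait(timeout) is an interruptible sleep. A minimal sketch (Repeater is a made-up name):

import threading

class Repeater(threading.Thread):
    """Call func every interval seconds until stop() is called."""
    def __init__(self, interval, func):
        threading.Thread.__init__(self)
        self.daemon = True
        self.interval = interval
        self.func = func
        self._stopped = threading.Event()

    def run(self):
        # wait() returns True the moment stop() sets the event,
        # so cancellation takes effect immediately, even while "sleeping"
        while not self._stopped.wait(self.interval):
            self.func()

    def stop(self):
        self._stopped.set()

# usage (refresh_statusline is your callback):
# r = Repeater(2.0, refresh_statusline)
# r.start()
# ...
# r.stop()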
I do not know how to do it simply and/or cleanly in Python, but I was wondering whether you could take advantage of an existing system scheduler, e.g. crontab on *nix systems.
There is a Python API for it, and it might satisfy your needs.
