In a couple of answers (see here and here) dealing with a GUI and asyncio running in separate threads, the suggestion is to use a queue when the asyncio loop needs to communicate with the GUI, but to use call_soon_threadsafe() when the GUI wants to communicate with the asyncio event loop.
For example, one answer states:
When the event loop needs to notify the GUI to refresh something, it can use a queue as shown here. On the other hand, if the GUI needs to tell the event loop to do something, it can call call_soon_threadsafe or run_coroutine_threadsafe.
What I don't understand is why the GUI can't also use another queue rather than call_soon_threadsafe(), i.e. put data on a queue for the asyncio loop to get and process. Is it just a design decision, or is there some technical reason not to use a queue from the GUI to the asyncio loop?
There's no appropriate queue class to use. asyncio.Queue isn't safe to interact with from outside the event loop, and queue.Queue would block the event loop.
If you want to use a queue anyway, you could use asyncio.run_coroutine_threadsafe to call an asyncio.Queue's put method.
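For instance, a minimal sketch of that approach (the loop, queue and function names here are just placeholders):

import asyncio

# Assumed setup: "loop" is the asyncio event loop running in its own thread,
# and "inbox" is an asyncio.Queue created by code running on that loop.

def send_from_gui(loop, inbox, data):
    # Safe to call from the GUI thread: schedules inbox.put(data) on the loop
    # and returns immediately without blocking the GUI.
    asyncio.run_coroutine_threadsafe(inbox.put(data), loop)

async def consume(inbox):
    # Runs inside the event loop and processes whatever the GUI posted.
    while True:
        data = await inbox.get()
        print("got", data)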
I have a C++ binary with an embedded python interpreter, done via pybind11::scoped_interpreter.
It also has a number of TCP connections using boost::asio which consume a proprietary messaging protocol and update some state based on the message contents.
On startup we import a python module, instantiate a specific class therein and obtain pybind11::py_object handles to various callback methods within the class.
namespace py = pybind11;

class Handler
{
public:
    Handler(const cfg::Config& cfg)
        : py_interpreter_{std::make_unique<py::scoped_interpreter>()}
    {
        auto module = py::module_::import(cfg.module_name);
        auto Class = module.attr(cfg.class_name);
        auto obj = Class(this);
        py_on_foo_ = obj.attr("on_foo");
        py_on_bar_ = obj.attr("on_bar");
    }

    std::unique_ptr<py::scoped_interpreter> py_interpreter_;
    py::object py_on_foo_;
    py::object py_on_bar_;
};
For each specific message which comes in, we call the associated callback method in the python code.
void Handler::onFoo(const msg::Foo& foo)
{
    py_on_foo_(foo); // calls python method
}
All of this works fine... however, it means there is no "main thread" in the python code - instead, all python code execution is driven by events originating in the C++ code, from the boost::asio::io_context which is running on the C++ application's main thread.
What I'm now tasked with is a way to get this C++-driven code to play nicely with some 3rd-party asyncio python libraries.
What I have managed to do is to create a new python threading.Thread, and from there add some data to a thread-safe queue and make a call to boost::asio::post (exposed via pybind11) to execute a callback in the C++ thread context, from which I can drain the queue.
This is working as I expected, but I'm new to asyncio, and am lost as to how to create a new asyncio.event_loop on the new thread I've created, and post the async results to my thread-safe queue / C++ boost::asio::post bridge to the C++ thread context.
I'm not sure if this is even a recommended approach... or if there is some asyncio magic I should be using to wake up my boost::asio::io_context and have the events delivered in that context?
Questions:
How can I integrate an asyncio.event_loop into my new thread and have the results posted to my thread-safe event-queue?
Is it possible to create a decorator or some such similar functionality which will "decorate" an async function so that the results are posted to my thread-safe queue?
Is this approach recommended, or is there another asyncio / "coroutiney" way of doing things I should be looking at?
There are three possibilities to integrate the asio and asyncio event loops:
1. Run both event loops in the same thread, alternating between them
2. Run one event loop in the main thread and the other in a worker thread
3. Merge the two event loops together.
The first option is straightforward, but has the downside that you will be running that thread hot since it never gets the chance to sleep (classically, in a select), which is inconsiderate and can disguise performance issues (since the thread always uses all available CPU). Here option 1a would be to run the asio event loop as a guest in asyncio:
async def runAsio(asio: boost.asio.IoContext):
    while await asyncio.sleep(0, True):
        asio.poll()
And option 1b would be to run the asyncio event loop as a guest in asio:
boost::asio::awaitable<void> runAsyncio(py::object asyncio) {
    for (;; co_await boost::asio::defer()) {
        asyncio.attr("stop")();
        asyncio.attr("run_forever")();
    }
}
The second option is more efficient, but has the downside that completions will be invoked on either thread depending on which event loop they're triggered by. This is the approach taken by the asynchronizer library; it spawns a std::thread to run the asio event loop on the side (option 2a), but you could equally take your approach (option 2b) of spawning a threading.Thread and running the asyncio event loop on the side. If you're doing this, you should create a new event loop in the worker thread and run it using run_forever. To post callbacks to this event loop from the main thread, use call_soon_threadsafe.
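A bare-bones sketch of option 2b (the function and thread names here are illustrative, not part of any library):

import asyncio
import threading

def start_asyncio_thread():
    # Create the loop up front so the caller gets a handle to it immediately.
    loop = asyncio.new_event_loop()

    def run():
        asyncio.set_event_loop(loop)
        loop.run_forever()

    threading.Thread(target=run, name="asyncio-worker", daemon=True).start()
    return loop

# From the main (asio) thread:
loop = start_asyncio_thread()
loop.call_soon_threadsafe(print, "hello from the asyncio thread")
# or, to run a coroutine and collect its result:
# future = asyncio.run_coroutine_threadsafe(some_coroutine(), loop)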
Note that a downside of approach 2b would be that Python code invoked in the main thread won't be able to access the asyncio event loop using get_running_loop and, worse, any code using the deprecated get_event_loop in the main thread will hang. If instead you use option 2a and run the C++ event loop in the worker thread, you can ensure that any Python callbacks that might want access to the asyncio event loop are running in the main thread.
Finally, the third option is to replace one event loop with the other (or even possibly both with a third, e.g. libuv). Replacing the asio scheduler/reactor/proactor is pretty involved and fairly pointless (since it would mean adding overhead to C++ code that should be fast), but replacing the asyncio loop is far more straightforward and is very much a supported use case; see Event Loop Implementations and Policies and maybe take a look at uvloop which replaces the asyncio event loop with libuv. On the downside, I'm not aware of a fully supported asio implementation of the asyncio event loop, but there is a GSoC project that looks pretty complete, although it's (unsurprisingly) written using Boost.Python so might need a little work to integrate with your pybind11 codebase.
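For instance, installing uvloop (assuming the package is available) is just a policy swap before the loop is created:

import asyncio
import uvloop

async def main():
    await asyncio.sleep(0)  # stand-in for the real program

# Use uvloop's libuv-based event loop instead of the default implementation.
asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
asyncio.run(main())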
I have matplotlib running on the main thread with a live plot of data coming in from an external source. To handle the incoming data I have a simple UDP listener listening for packets using asyncio, with the event loop running on a separate thread.
I now want to add more sources and I'd like to run their listeners on the same loop/thread as the first one. To do this I'm just passing the loop object to the classes implementing the listeners and their constructor adds a task to the loop that will initialize and run the listener.
However since these classes are initialized in the main thread I'm calling the loop.create_task(...) function from there instead of the loop's thread. Will this cause any issues?
The answer is no, this is not thread-safe: loop.create_task(...) must not be called from a thread other than the loop's own. Use asyncio.run_coroutine_threadsafe(...) to schedule a coroutine from a different thread instead.
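For example, the constructor's scheduling call would become something like this (the class and coroutine names are only illustrative):

import asyncio

class Listener:
    def __init__(self, loop):
        # Called from the main (matplotlib) thread while "loop" runs on its own
        # thread, so loop.create_task() would not be safe here.
        self._future = asyncio.run_coroutine_threadsafe(self._run(), loop)

    async def _run(self):
        # Stand-in for the real work of setting up and running a UDP listener.
        while True:
            await asyncio.sleep(1)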
I have a PyGtk (GTK+ 3) application that runs in two threads:
Thread A is a main app thread that executes Gtk.main() and so handles Gtk's events/signals.
Thread B is a PulseAudio event thread that handles all PA's stuff asynchronously.
In certain cases it's necessary to make an event handled by a callback from thread B do something in Gtk objects. The problem with Python is that, because of the GIL, only one thread can run at a time, so it's not possible to change any Gtk-related things directly from thread B; it results in a deadlock.
A solution might be calling Gdk.threads_init() to allow the GIL to be lifted for Gtk, but that seems to result in race conditions; apparently Gtk is not thread-safe enough.
What I want to do is 'flatten out' event handling so that thread B leaves something (an event/signal?) for thread A to pick up and handle. In this scenario thread B is not blocked by the operation. As far as I understand, this is not the case with Python's signalling mechanism because it handles signals synchronously.
So my question is: how can I create a sort of custom event that can be picked up by Gtk's main loop and processed by thread A code?
Gtk is NOT threadsafe; you have to write your code so that it is.
I don't know what version of pygtk you're using, but the easiest way to queue an action on the GUI thread is with idle_add:
http://www.pygtk.org/pygtk2reference/gobject-functions.html#function-gobject--idle-add
It queues a function in Gtk's main loop, and the function will get executed on that loop's thread.
EDIT: This is just the easiest way to get a function called on the GUI thread. If you want to do the work of creating a custom gobject signal, I believe (but am not 100% sure) that the signal handler will be called on the GUI thread.
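For GTK+ 3 via PyGObject, a minimal sketch of that pattern (the widget and text here are placeholders) looks like:

from gi.repository import GLib

def update_label(label, text):
    # Runs on the GTK main thread because it was queued with GLib.idle_add.
    label.set_text(text)
    return False  # returning False removes the idle source after one run

def on_pulseaudio_event(label, text):
    # Called from thread B; it only queues the update and never touches Gtk.
    GLib.idle_add(update_label, label, text)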
It's said that you should not call GUI functions from a thread, but I'm wondering whether this applies only to functions that affect the GUI directly, or to every function provided by the GUI library. For example, is it safe to call:
gobject.idle_add(self.gui.get_object('button1').set_sensitive, False)
in a thread? self.gui.get_object is a function from the GUI framework, but self.gui.get_object('button1') actually calls it (in the worker thread) before idle_add ever runs.
Thank you for your answers.
The call you showed there seems safe. As already posted, you can read (get_object) just fine in any thread, but should only modify (set_sensitive) in the main thread. Exactly this is done here: idle_add adds the event to the main loop, which is running in the main thread.
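If you'd rather defer the lookup as well, you can wrap both calls in one idle callback; a sketch using the question's own names:

import gobject

class Worker:
    def __init__(self, gui):
        self.gui = gui  # the question's GUI builder object

    def request_disable(self):
        # Called from the worker thread; defers both the lookup and the change.
        gobject.idle_add(self._disable_button)

    def _disable_button(self):
        # Runs on the main thread when the main loop services the idle callback.
        self.gui.get_object('button1').set_sensitive(False)
        return False  # run once, then remove the idle source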
Threading with a GUI is a bit tricky. If you want to do it right, you should not update the GUI from any thread other than the main thread (a common limitation of GUI libraries). However, you can make read calls from several threads.
I am writing a Tkinter program that requires a loop. I can't run the loop from the same class that Tkinter is in, I'm fairly certain of that much. To run said loop, I believe that I have to use a separate thread, therefore a separate class, to keep Tkinter from freezing. I have gotten Tkinter to run while a loop in the thread prints numbers. However, I need to have it configure a Tkinter window that resides in another class. How would I go about this?
You don't necessarily need another thread, because you don't necessarily need to create a loop (see my answer to your other question about using a nested loop).
However, to answer your specific question, you have to implement a queue. The worker thread will place messages of some sort on the queue, and the main thread polls the queue via the event loop and responds to the message. This is necessary because a worker thread can't directly modify tk widgets.
For an example of using threads and queues with Tkinter, see Tkinter and Threads on effbot.org. Pay close attention to how it uses after to poll the queue every 100 ms.
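A condensed version of that pattern (the widget, queue and message names are just placeholders):

import queue
import threading
import time
import tkinter as tk

q = queue.Queue()

def worker():
    # Worker thread: never touches Tk widgets, only puts messages on the queue.
    for i in range(10):
        time.sleep(0.5)
        q.put(f"update {i}")

def poll_queue():
    # Main thread: drain the queue, update widgets, then re-schedule itself.
    try:
        while True:
            label.config(text=q.get_nowait())
    except queue.Empty:
        pass
    root.after(100, poll_queue)

root = tk.Tk()
label = tk.Label(root, text="waiting...")
label.pack()
threading.Thread(target=worker, daemon=True).start()
poll_queue()
root.mainloop()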