I have a C++ binary with an embedded python interpreter, done via pybind11::scoped_interpreter.
It also has a number of TCP connections using boost::asio which consume a proprietary messaging protocol and update some state based on the message contents.
On startup we import a Python module, instantiate a specific class therein, and obtain pybind11::object handles to various callback methods within the class.
namespace py = pybind11;

class Handler
{
public:
    Handler(const cfg::Config& cfg)
        : py_interpreter_{std::make_unique<py::scoped_interpreter>()}
    {
        auto module = py::module_::import(cfg.module_name);
        auto Class = module.attr(cfg.class_name);
        auto obj = Class(this);
        py_on_foo_ = obj.attr("on_foo");
        py_on_bar_ = obj.attr("on_bar");
    }

    std::unique_ptr<py::scoped_interpreter> py_interpreter_;
    py::object py_on_foo_;
    py::object py_on_bar_;
};
For each specific message which comes in, we call the associated callback method in the python code.
void Handler::onFoo(const msg::Foo& foo)
{
    py_on_foo_(foo); // calls the Python on_foo method
}
All of this works fine... however, it means there is no "main thread" in the python code - instead, all python code execution is driven by events originating in the C++ code, from the boost::asio::io_context which is running on the C++ application's main thread.
What I'm now tasked with is a way to get this C++-driven code to play nicely with some 3rd-party asyncio python libraries.
What I have managed to do is to create a new python threading.Thread, and from there add some data to a thread-safe queue and make a call to boost::asio::post (exposed via pybind11) to execute a callback in the C++ thread context, from which I can drain the queue.
This is working as I expected, but I'm new to asyncio, and am lost as to how to create a new asyncio.event_loop on the new thread I've created, and post the async results to my thread-safe queue / C++ boost::asio::post bridge to the C++ thread context.
I'm not sure if this is even a recommended approach... or if there is some asyncio magic I should be using to wake up my boost::asio::io_context and have the events delivered in that context?
Questions:
How can I integrate an asyncio.event_loop into my new thread and have the results posted to my thread-safe event-queue?
Is it possible to create a decorator or some such similar functionality which will "decorate" an async function so that the results are posted to my thread-safe queue?
Is this approach recommended, or is there another asyncio / "coroutiney" way of doing things I should be looking at?
There are three possibilities for integrating the asio and asyncio event loops:
1. Run both event loops in the same thread, alternating between them.
2. Run one event loop in the main thread and the other in a worker thread.
3. Merge the two event loops together.
The first option is straightforward, but has the downside that you will be running that thread hot since it never gets the chance to sleep (classically, in a select), which is inconsiderate and can disguise performance issues (since the thread always uses all available CPU). Here option 1a would be to run the asio event loop as a guest in asyncio:
async def runAsio(asio: boost.asio.IoContext):
    # asyncio.sleep(0, True) yields control to the asyncio loop once and then
    # returns True, so this alternates the two schedulers without ever blocking.
    while await asyncio.sleep(0, True):
        asio.poll()  # run whatever asio handlers are ready, without blocking
And option 1b would be to run the asyncio event loop as a guest in asio:
boost::asio::awaitable<void> runAsyncio(py::object loop) {
    // Reschedule through asio on every iteration so other asio handlers can run.
    for (;; co_await boost::asio::defer(boost::asio::use_awaitable)) {
        // Calling stop() before run_forever() makes the asyncio loop run a
        // single iteration and then return, i.e. a non-blocking poll.
        loop.attr("stop")();
        loop.attr("run_forever")();
    }
}
The second option is more efficient, but has the downside that completions will be invoked on either thread depending on which event loop they're triggered by. This is the approach taken by the asynchronizer library; it spawns a std::thread to run the asio event loop on the side (option 2a), but you could equally take your approach (option 2b) of spawning a threading.Thread and running the asyncio event loop on the side. If you're doing this you should create a new event loop in the worker thread and run it using run_forever. To post callbacks to this event loop from the main thread use call_soon_threadsafe.
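For reference, a minimal sketch of option 2b (all names here are illustrative, not from your codebase):

import asyncio
import threading

loop = asyncio.new_event_loop()

def run_loop():
    asyncio.set_event_loop(loop)  # make the loop current in the worker thread
    loop.run_forever()            # sleeps until callbacks/tasks are scheduled

threading.Thread(target=run_loop, daemon=True).start()

# From the main (asio) thread: schedule a plain callback on the worker loop.
loop.call_soon_threadsafe(print, "hello from the asyncio thread")

# And to shut the loop down cleanly, again from the main thread:
loop.call_soon_threadsafe(loop.stop)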
Note that a downside of approach 2b is that Python code invoked in the main thread won't be able to access the asyncio event loop using get_running_loop and, worse, any code using the deprecated get_event_loop in the main thread will hang. If you instead use option 2a and run the C++ event loop in the worker thread, you can ensure that any Python callbacks that might want access to the asyncio event loop run in the main thread.
Finally, the third option is to replace one event loop with the other (or even possibly both with a third, e.g. libuv). Replacing the asio scheduler/reactor/proactor is pretty involved and fairly pointless (since it would mean adding overhead to C++ code that should be fast), but replacing the asyncio loop is far more straightforward and is very much a supported use case; see Event Loop Implementations and Policies and maybe take a look at uvloop which replaces the asyncio event loop with libuv. On the downside, I'm not aware of a fully supported asio implementation of the asyncio event loop, but there is a GSoC project that looks pretty complete, although it's (unsurprisingly) written using Boost.Python so might need a little work to integrate with your pybind11 codebase.
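For a sense of what replacing the asyncio loop implementation looks like in practice, installing uvloop (assuming you have it available) is just a policy swap:

import asyncio
import uvloop

# All event loops created after this point are backed by libuv.
asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
loop = asyncio.new_event_loop()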
Related
I have matplotlib running on the main thread with a live plot of data coming in from an external source. To handle the incoming data I have a simple UDP listener listening for packets using asyncio, with the event loop running on a separate thread.
I now want to add more sources and I'd like to run their listeners on the same loop/thread as the first one. To do this I'm just passing the loop object to the classes implementing the listeners and their constructor adds a task to the loop that will initialize and run the listener.
However since these classes are initialized in the main thread I'm calling the loop.create_task(...) function from there instead of the loop's thread. Will this cause any issues?
The answer is that this is not safe: loop.create_task(...) is not thread-safe, so calling it to schedule a coroutine from a different thread can cause issues; use asyncio.run_coroutine_threadsafe(...) instead.
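A self-contained sketch of the fix (the listener coroutine is a stand-in for yours):

import asyncio
import threading

loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

async def start_listener():  # stands in for your listener-setup coroutine
    print("running on", threading.current_thread().name)

# Thread-safe, unlike loop.create_task(); returns a concurrent.futures.Future.
future = asyncio.run_coroutine_threadsafe(start_listener(), loop)
future.result(timeout=5)  # optionally block the caller until it completes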
This is for a networking daemon, where each incoming request runs through an interpreter, with a lightweight request-specific stack. The interpreter allows the request to yield control when waiting on blocking I/O operations. In this way the requests operate very similarly to coroutines in other languages. A single POSIX thread may have several thousands requests in yielded or runnable states, but only a single request actively making progress.
With other embedded languages such as Lua, it's possible to yield control back to the C caller. This is one of the reasons why NGINX utilises Lua for its embedded scripting language.
I'm wondering if there's a way to achieve something similar with Python, when a python thread is waiting for a condition to be asynchronously satisfied.
I don't think it's realistic for Python to expose the details of the asynchronous condition to the C caller, and have the C caller notify the Python interpreter when the condition was satisfied. But even if Python returned control with no information regarding the asynchronous condition, it may allow the C caller to utilise multiple Python thread states as green threads.
The idea would be to attach a thread state to each request, and have the python interpreter inform the C caller when a particular thread and therefore request, was runnable. The most obvious (but likely worst/most naive) way of doing this would be for the C caller to poll the Python interpreter, allowing Python to check if any async conditions had been satisfied, and returning a list of runnable thread states. The C caller would then swap in a runnable thread state, and call the Python interpreter to continue execution.
I'd be grateful for any ideas on this. Even knowing whether it's possible for a Python coroutine to yield to a C caller, and have the C caller resume the coroutine would be useful.
EDIT
No points for suggesting running Python in a separate process and sending requests to it via a pipe or network socket. That's cheating.
EDIT 2
Looks like someone else implemented a mechanism similar to the one I was suggesting, for Emscripten and Python.
https://github.com/emscripten-core/emscripten/issues/9279
One potential solution is using asyncio's run_coroutine_threadsafe() function.
For every application thread, you have a shadow Python interpreter thread. These are separate OS threads that share an interpreter, but with separate PyThreadStates.
In the Python thread, you create a new event loop, write out a reference to the loop object to a shared variable, and call loop.run_forever() after installing an appropriate mechanism to stop the loop gracefully.
In the application thread, you wrap the module calls to the Python script you want to run in a coroutine and use asyncio.run_coroutine_threadsafe() to submit them to the Python interpreter thread (using the loop handle from the shared variable). The application thread then attaches a callback to the Future it receives via add_done_callback.
The application request is then yielded, which means its execution is suspended and the application thread can process a new application request.
The add_done_callback callback calls an application C function which signals the application thread that processing of a particular application request is complete. The application request is then placed back into the application's runnable queue for execution to continue.
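A rough sketch of that flow (names are illustrative; the real version needs error handling and graceful shutdown):

import asyncio
import threading

# The shadow thread: a fresh loop, published to a shared variable.
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

async def handle_request(payload):
    return payload.upper()  # placeholder for the wrapped module call

def on_done(future):
    # Runs on the shadow thread once the coroutine finishes; the real daemon
    # would call into C here to mark the application request runnable again.
    print("request complete:", future.result())

fut = asyncio.run_coroutine_threadsafe(handle_request("ping"), loop)
fut.add_done_callback(on_done)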
I'll update the answer after I have a complete, polished solution and I've fully tested the potentially thread-unsafe aspects. But for now, this does seem like a viable solution.
Why does the get_event_loop method in asyncio (source) check whether the current thread is the main thread (see my comment in the snippet below)?
def get_event_loop(self):
    """Get the event loop.

    This may be None or an instance of EventLoop.
    """
    if (self._local._loop is None and
            not self._local._set_called and
            isinstance(threading.current_thread(), threading._MainThread)):  # <- I mean this thing here
        self.set_event_loop(self.new_event_loop())
    if self._local._loop is None:
        raise RuntimeError('There is no current event loop in thread %r.'
                           % threading.current_thread().name)
    return self._local._loop
For convenience, asyncio supports automatically creating an event loop without having to go through calls to new_event_loop() and set_event_loop(). As the event loop is moderately expensive to create, and consumes some OS resources, it's not created automatically on import, but on-demand, specifically on the first call to get_event_loop(). (This feature is mostly obsoleted by asyncio.run which always creates a new event loop, and then the auto-created one can cause problems.)
This convenience, however, is reserved for the main thread - any other thread must set the event loop explicitly. There are several possible reasons for this:
preventing confusion - you don't want an accidental call to get_event_loop() from an arbitrary thread to appropriate the "main" (auto-created) event loop for that thread;
some asyncio features work best when or require that the event loop is run in the main thread - for example, subprocesses and signal handling.
These problems could also be avoided by automatically creating a new event loop in each thread that invokes get_event_loop(), but that would make it easy to accidentally create multiple event loops whose coroutines would be unable to communicate with each other, which would go against the design of asyncio. So the remaining option is for the code to special-case the main thread, encouraging developers to use that thread for executing asyncio code.
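The special-casing is easy to observe (a sketch; recent Python versions deprecate the auto-creation path entirely):

import asyncio
import threading

def worker():
    try:
        asyncio.get_event_loop()  # no auto-creation off the main thread
    except RuntimeError as e:
        print("worker:", e)
    asyncio.set_event_loop(asyncio.new_event_loop())  # must opt in explicitly
    print("worker: loop set explicitly")

t = threading.Thread(target=worker)
t.start()
t.join()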
So I have this library that I use and within one of my functions I call a function from that library, which happens to take a really long time. Now, at the same time I have another thread running where I check for different conditions, what I want is that if a condition is met, I want to cancel the execution of the library function.
Right now I'm checking the conditions at the start of the function, but if the conditions happen to change while the library function is running, I don't need its results, and want to return from it.
Basically this is what I have now.
def my_function():
    if condition_checker.condition_met():
        return
    library.long_running_function()
Is there a way to run the condition check every second or so and return from my_function when the condition is met?
I've thought about decorators and coroutines. I'm using 2.7, but if this can only be done in 3.x I'd consider switching; it's just that I can't figure out how.
You cannot terminate a thread. Either the library supports cancellation by design, where it internally would have to check for a condition every once in a while to abort if requested, or you have to wait for it to finish.
What you can do is call the library in a subprocess rather than a thread, since processes can be terminated through signals. Python's multiprocessing module provides a threading-like API for spawning forks and handling IPC, including synchronization.
Or spawn a separate subprocess via subprocess.Popen if forking is too heavy on your resources (e.g. memory footprint through copying of the parent process).
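A minimal sketch of the multiprocessing route (long_running_function and the condition check are placeholders for yours):

import multiprocessing
import time

def long_running_function():  # stands in for library.long_running_function
    time.sleep(60)

def my_function(condition_met):
    p = multiprocessing.Process(target=long_running_function)
    p.start()
    while p.is_alive():
        if condition_met():
            p.terminate()  # processes, unlike threads, can be killed
            p.join()
            return
        time.sleep(1)  # re-check the condition every second

if __name__ == "__main__":
    my_function(lambda: True)  # terminates immediately; pass your real check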
I can't think of any other way, unfortunately.
Generally, I think you want to run your long_running_function in a separate thread, and have it occasionally report its information to the main thread.
This post gives a similar example within a wxpython program.
Presuming you are doing this outside of wxpython, you should be able to replace the wx.CallAfter and wx.Publisher with threading.Thread and PubSub.
It would look something like this:
import threading
import time

def myfunction():
    # subscribe to the data long_running_function publishes
    while True:
        # ... receive the published data here ...
        if condition_met:  # placeholder for your check on that data
            # publish a stop command to long_running_function
            break
        time.sleep(1)

def long_running_function():
    for loop in loops:  # placeholder iterations
        # check for a stop command from the main thread; break if seen
        # do an iteration
        # publish some data
        pass

# launch long_running_function without blocking, then monitor it
threading.Thread(target=long_running_function).start()
myfunction()
I haven't used pubsub a ton so I can't quickly whip up the code but it should get you there.
As an alternative, do you know the stop criteria before you launch the long_running_function? If so, you can just pass it as an argument and check whether it is met internally.
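If the function's internals are yours to modify, a threading.Event makes a simple cancellation token (a sketch; the workload loop is illustrative):

import threading
import time

def cancellable_function(stop):
    for _ in range(1000):
        if stop.is_set():  # check for cancellation between iterations
            return
        time.sleep(0.1)  # one unit of real work

stop = threading.Event()
t = threading.Thread(target=cancellable_function, args=(stop,))
t.start()
stop.set()  # condition met: request cancellation
t.join()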
I am implementing a MQTT worker in python with paho-mqtt.
Are the on_message() callbacks invoked on different threads, so that if one task is time-consuming, other messages can still be processed?
If not, how to achieve this behaviour?
The Python client doesn't actually start any threads; that's why you have to call the loop functions to handle network events.
In Java you would use the onMessage callback to put the incoming message on to a local queue that a separate pool of threads will handle.
Python threads are limited by the global interpreter lock, so for CPU-bound work it is often better to spawn processes that act like threads. Details of the multiprocessing module can be found here:
https://docs.python.org/2.7/library/multiprocessing.html
EDIT:
On looking a little closer at the paho Python code, it appears it can actually start a new thread (using the loop_start() function) to handle the network side of things that previously required calling the loop functions. This does not change the fact that all calls to the on_message callback will happen on that one thread. If you need to do large amounts of work in this callback you should definitely look at spinning up a pool of worker threads to do it.
http://www.tutorialspoint.com/python/python_multithreading.htm
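A hedged sketch of that pattern with paho-mqtt (broker address and topic are placeholders; written against the classic 1.x callback API):

import queue
import threading
import paho.mqtt.client as mqtt

work = queue.Queue()

def worker():
    while True:
        msg = work.get()  # blocks until a message is queued
        # ... do the time-consuming processing here ...
        work.task_done()

for _ in range(4):  # small pool of worker threads
    threading.Thread(target=worker, daemon=True).start()

def on_message(client, userdata, msg):
    work.put(msg)  # keep the network-loop callback fast

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.com")
client.subscribe("some/topic")
client.loop_forever()  # network loop; on_message runs on this thread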