I've got an external C++ library that has been wrapped with Cython. I cannot change the C++ library itself. I would like to use it as part of a Python application that uses asyncio as its primary process control.
The Cython library essentially does network work with a proprietary protocol. However, the library blocks when its event handler is started from Python. I've gotten it to a stage where I can pass in a Python function and receive callbacks for events from the C++ library, and I can stop the event handler from hanging the application by running it inside event_loop.run_in_executor.
My question is: how can I best model this to work with asyncio in a way that fits well with its interfaces, rather than hack up ad hoc solutions around the Cython library's methods? I looked into writing this as an asyncio.Protocol and asyncio.Transport that use the Cython library as the underlying communication mechanism. However, it looks like a lot of effort, with some monkey patching to make the library look like a socket. Is there a better way or abstraction to wrap an external library so it works with asyncio?
To answer my own question: as far as I can see, there is no obligation to use the Protocol or Transport abstractions provided by asyncio when structuring applications. The best model I found is a regular class with its methods defined as async. The class can then be shaped into whatever pattern fits your requirement, which is especially relevant when the code you are wrapping doesn't have the same overall use case as a socket. The abstractions asyncio provides are themselves pretty barebones.
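For illustration, here is a minimal sketch of such a class; blocking_lib and its connect()/send() methods are hypothetical stand-ins for the Cython-wrapped object:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

class AsyncLibraryClient:
    """Regular class whose public methods are coroutines; the blocking
    Cython-wrapped object never runs on the event loop's thread."""

    def __init__(self, blocking_lib):
        self._lib = blocking_lib                    # hypothetical wrapped object
        self._executor = ThreadPoolExecutor(max_workers=1)

    async def connect(self, host, port):
        loop = asyncio.get_event_loop()
        # The blocking call happens in the worker thread; we just await it.
        await loop.run_in_executor(self._executor, self._lib.connect, host, port)

    async def send(self, payload):
        loop = asyncio.get_event_loop()
        await loop.run_in_executor(self._executor, self._lib.send, payload)
```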
For complicated cases like Cython-wrapped blocking C++ code, you will need to handle it with multiprocessing to avoid hanging the interpreter. Asyncio does not make it possible to run blocking code unchanged; the code must be written specifically to be asyncio compatible.
What I did was put the entire blocking section, including the construction of the object, into a function executed via event_loop.run_in_executor. In addition, I used a unix socket to communicate with the process for commands and callback data. Because unix sockets are used, you can use asyncio methods in your main application; the same goes for pipes.
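Here is a hedged sketch of that layout; the socket path and event payloads are illustrative, and the child process body stands in for the blocking library code:

```python
import asyncio
import os
import socket
from concurrent.futures import ProcessPoolExecutor

SOCK_PATH = '/tmp/lib_bridge.sock'          # illustrative path

def run_library(sock_path):
    # Runs in a separate process; free to block. The real code would
    # construct the Cython object here and forward callbacks over the socket.
    conn = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    conn.connect(sock_path)
    for i in range(3):                      # stand-in for library events
        conn.sendall(b'event-%d' % i)
    conn.close()

async def handle(reader, writer):
    # Runs in the asyncio main process and receives the callback data.
    while True:
        data = await reader.read(128)
        if not data:
            break
        print('callback data:', data)
    writer.close()

async def main():
    if os.path.exists(SOCK_PATH):
        os.unlink(SOCK_PATH)
    server = await asyncio.start_unix_server(handle, path=SOCK_PATH)
    loop = asyncio.get_event_loop()
    # The blocking function runs in a child process via the executor.
    await loop.run_in_executor(ProcessPoolExecutor(max_workers=1),
                               run_library, SOCK_PATH)
    await asyncio.sleep(0.1)                # give the handler a moment to drain
    server.close()
    await server.wait_closed()

if __name__ == '__main__':
    asyncio.run(main())
```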
Here are some results I got from sending 128 bytes from the multiprocessing Process producer to the asyncio main process. The data was generated at a 10-millisecond interval, and the duration was timed using time.perf_counter(). Results below are in nanoseconds. The machine was an Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz running Linux kernel 4.10.17.
Asyncio with uvloop
count 10001.000000
mean 76435.956504
std 8887.459462
min 63608.000000
25% 71709.000000
50% 74104.000000
75% 79496.000000
max 287204.000000
Standard Asyncio event loop
count 10001.000000
mean 199741.937506
std 27900.377114
min 173321.000000
25% 185545.000000
50% 191839.000000
75% 205279.000000
max 529246.000000
Related
I'm trying to figure out how Gevent works with respect to other asynchronous frameworks in Python, like Twisted.
The key difference between Gevent and Twisted is that Gevent uses greenlets and monkey patches the standard library, giving implicit behavior and a synchronous programming model, whereas Twisted requires specific libraries and callbacks, making the behavior explicit. The event loop in Gevent is libev/libevent, which is written in C; the event loop in Twisted is the reactor, which is written in Python.
Is there anything special about libev/libevent that allows for this implicit behavior? Why not use an event loop written in Python? Conversely, why isn't Twisted using libev/libevent? Is there any particular reason? Maybe it was simply a design choice and could have gone either way...
Theoretically, can Gevent's libev be replaced with another event loop, written in Python, like Twisted's reactor? And can Twisted's reactor be replaced with libev?
Short answer: Twisted is a network framework, while Gevent tries to act as a library without requiring the programmer to change the way they program. That's their focus, and not so much how it is achieved under the hood.
Long answer:
All async libraries (Gevent, asyncio, etc.) work pretty much the same way:
Have a main loop running endlessly on a single thread.
When an event occurs, it's captured by the main loop.
The main loop decides, based on scheduling rules, whether it should continue checking for events or temporarily switch and hand control to any functions subscribed to the event.
greenlet is a different library. It's very simple: it just changes the order in which Python code runs and lets you jump back and forth between functions. Gevent uses it under the hood to implement its async features.
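The classic ping-pong example shows that explicit switching:

```python
from greenlet import greenlet

def ping():
    print('ping')
    gr2.switch()      # jump into pong()
    print('ping again')

def pong():
    print('pong')
    gr1.switch()      # jump back into ping(), right after its switch()

gr1 = greenlet(ping)
gr2 = greenlet(pong)
gr1.switch()          # prints: ping, pong, ping again
```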
asyncio, which comes with Python 3, is like Gevent. The big difference is, again, the interface: it requires the programmer to mark functions with async and lets them explicitly wait for a subscribed function in the main loop with await.
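A minimal example of that explicit style:

```python
import asyncio

async def worker(name):
    await asyncio.sleep(1)        # the explicit suspension point
    print(name, 'done')

async def main():
    # Both workers run concurrently on the single-threaded event loop.
    await asyncio.gather(worker('a'), worker('b'))

asyncio.run(main())
```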
Gevent is like asyncio, but instead of keywords it patches existing code where appropriate. It uses greenlet under the hood to switch between the main loop and subscribed functions and make it all work seamlessly.
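The same two concurrent workers as above, in Gevent's implicit style:

```python
from gevent import monkey
monkey.patch_all()                # patch sockets, time.sleep, etc.

import time
import gevent

def worker(name):
    time.sleep(1)                 # patched: yields to the loop, doesn't block
    print(name, 'done')

gevent.joinall([gevent.spawn(worker, 'a'), gevent.spawn(worker, 'b')])
```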
Twisted, as mentioned, feels more like a framework than a library. It requires the programmer to follow very specific patterns to achieve concurrency. Like everything else, though, it has a main loop under the hood, called the reactor.
Back to your initial question: in theory you can replace the reactor with any loop (including Gevent's), but that would defeat the purpose. Twisted's team probably decided to use their own main loop for optimisation reasons. All these libraries use different scheduling in their main loops to meet their own needs.
I have implemented a server program using Twisted. I am using twisted.protocols.basic.LineReceiver along with twisted.internet.protocol.ServerFactory.
I would like to have each client that connects to the server run a set of functions in parallel (I'm thinking of multi-threading for this).
I have some confusion with using twisted.internet.threads.deferToThread for this problem.
Should I call deferToThread in the ServerFactory for this purpose?
Are Twisted threads thread-safe with respect to race conditions?
Previously, I tried using multiprocessing in my server program but it seemed not to work in combination with the Twisted reactor, while deferToThread did the job.
I'm wondering how Twisted threads are implemented. Don't they utilize multiprocessing?
Previously, I tried using multiprocessing in my server program but it seemed not to work in combination with the Twisted reactor, while deferToThread did the job. I'm wondering how Twisted threads are implemented. Don't they utilize multiprocessing?
You didn't say whether you used the multi-threaded version of multiprocessing (multiprocessing.dummy) or the multi-process version.
You can read about mixing Twisted and multiprocessing on Stack Overflow, though:
Mix Python Twisted with multiprocessing?
Twisted network client with multiprocessing workers?
is twisted incompatible with multiprocessing events and queues?
(And more)
To answer the shorter part of this question - no, Twisted does not use the stdlib multiprocessing package to implement its threading APIs. It uses the stdlib threading module.
Are Twisted threads thread-safe with respect to race conditions?
The answer to this is implied by the answer above: no. "Twisted threads" aren't really a thing. Twisted's threading APIs are just a layer on top of the stdlib threading module (which is itself really just a Python API for POSIX threads, or something similar but different on Windows). Twisted's threading APIs don't magically eliminate the possibility of race conditions. If there is any magic in Twisted, it is the ability to do certain things concurrently without using threads at all, which helps reduce the number of race conditions in your program, though it doesn't entirely eliminate the possibility of creating them.
Should I call deferToThread in the ServerFactory for this purpose?
I'm not quite sure what the point of this question is. Are you wondering whether a method on your ServerFactory subclass is the best place to put your calls to deferToThread? That probably depends on the details of your implementation approach, but it probably doesn't make a huge difference overall. If you like the pattern of having the factory provide services to protocol instances, go for it.
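For illustration, here is a minimal, hedged sketch of deferToThread inside a LineReceiver protocol; blocking_work is a hypothetical stand-in for your blocking functions:

```python
from twisted.internet import reactor, threads
from twisted.internet.protocol import ServerFactory
from twisted.protocols.basic import LineReceiver

def blocking_work(line):
    # Hypothetical blocking function; runs in a thread pool thread.
    return line.upper()

class Echo(LineReceiver):
    def lineReceived(self, line):
        d = threads.deferToThread(blocking_work, line)
        d.addCallback(self.sendLine)   # the callback fires back in the reactor thread

factory = ServerFactory()
factory.protocol = Echo

reactor.listenTCP(8000, factory)
reactor.run()
```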
Many times, asynchronous I/O is synonymous with networked or file-based I/O (e.g. Twisted, Eventlet, asyncore ...).
However, I am currently in the midst of writing a Python toolkit to control motors. This should be asynchronous most of the time, so that several motors can be controlled at once. Right now everything is based on threads, but the underlying problem is so fundamental that I thought there must be an asynchronous framework that helps with this. Do you know of any?
No need for 3rd-party frameworks. Use asyncore, which is in the standard library.
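For what it's worth, asyncore is socket-oriented, so this minimal echo-server sketch (following the stdlib docs pattern) only illustrates its dispatcher/loop callback style rather than motor control:

```python
import asyncore
import socket

class EchoHandler(asyncore.dispatcher_with_send):
    def handle_read(self):
        data = self.recv(8192)
        if data:
            self.send(data)

class EchoServer(asyncore.dispatcher):
    def __init__(self, host, port):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.set_reuse_addr()
        self.bind((host, port))
        self.listen(5)

    def handle_accepted(self, sock, addr):
        EchoHandler(sock)

EchoServer('localhost', 8080)
asyncore.loop()   # single-threaded select() loop drives all dispatchers
```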
I am looking into building a multi-protocol application using Twisted. One of those protocols is BitTorrent. Since libtorrent is a fairly complete implementation, its Python bindings seem to be a good choice.
Now the question is:
When using libtorrent with Twisted, do I need to worry about blocking?
Does the libtorrent networking layer (using boost.asio, an async networking loop) interfere with Twisted's epoll in any way?
Should I perhaps run the libtorrent session in a thread or target a multi-process application design?
I may be able to provide answers to some of those questions.
All of libtorrent's logic, including networking and disk I/O, is done in separate threads. So, overall, the concern about "blocking" is not that great, assuming you mean libtorrent functions not returning immediately.
Some operations are guaranteed to return immediately: functions that don't return any state or information. However, functions that do return something must synchronize with the libtorrent main thread, and if that thread is under heavy load (especially when built in debug mode with invariant checks and no optimization), this synchronization may be noticeable, especially when making many such calls often.
There are ways to use libtorrent that are more asynchronous in nature, and there is an ongoing effort to minimize the need for functions that synchronize. For example, instead of querying the status of each torrent individually, one can subscribe to torrent status updates; the asynchronous notifications are then returned via pop_alerts().
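A hedged sketch of that subscription pattern, as I understand the Python bindings (session setup and torrent adding are elided):

```python
import libtorrent as lt

ses = lt.session()
# ... add torrents to the session here ...

while True:
    ses.post_torrent_updates()        # request one batch of status updates
    ses.wait_for_alert(1000)          # block up to 1s for alerts to arrive
    for alert in ses.pop_alerts():    # drain everything that has queued up
        if isinstance(alert, lt.state_update_alert):
            for status in alert.status:
                print(status.name, status.progress)
```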
As for whether it would interfere with Twisted's epoll: I can't say for sure, but it doesn't seem very likely.
I don't think there's much need to interact with libtorrent via another layer of threads, since all of the work is already done in separate threads.
I have a web service that is required to handle significant concurrent utilization and volume and I need to test it. Since the service is fairly specialized, it does not lend itself well to a typical testing framework. The test would need to simulate multiple clients concurrently posting to a URL, parsing the resulting HTTP response, checking that a database has been appropriately updated and making sure certain emails have been correctly sent/received.
The current opinion at my company is that I should write this framework using Python. I have never used Python with multiple threads before and as I was doing my research I came across the Global Interpreter Lock which seems to be the basis of most of Python's concurrency handling. It seems to me that the GIL would prevent Python from being able to achieve true concurrency even on a multi-processor machine. Is this true? Does this scenario change if I use a compiler to compile Python to native code? Am I just barking up the wrong tree entirely and is Python the wrong tool for this job?
The Global Interpreter Lock prevents threads from simultaneously executing Python code. This doesn't change when Python is compiled to bytecode, because the bytecode is still run by the Python interpreter, which enforces the GIL. threading works by switching threads every sys.getcheckinterval() bytecodes.
This doesn't apply to multiprocessing, because it creates multiple Python processes instead of threads. You can have as many of those as your system will support, running truly concurrently.
So yes, you can do this with Python, either with threading or multiprocessing.
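As a minimal sketch of the multiprocessing route, with a placeholder URL standing in for your service, each simulated client below runs in its own process and so executes truly in parallel:

```python
import multiprocessing
import urllib.request

URL = 'http://localhost:8000/endpoint'    # placeholder for your service

def simulate_client(i):
    # Each worker is a separate Python process, so no GIL contention.
    body = ('payload-%d' % i).encode()
    with urllib.request.urlopen(URL, data=body) as resp:
        return resp.status

if __name__ == '__main__':
    with multiprocessing.Pool(processes=8) as pool:
        print(pool.map(simulate_client, range(100)))
```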
You can use Python's multiprocessing library to achieve this:
http://docs.python.org/library/multiprocessing.html
Assuming typical network conditions, as long as you have sufficient system resources, Python's regular threading module will let you simulate concurrent workload at a higher rate than any real workload.