Twisted: degrade performance gracefully when the reactor is overloaded? - python

Is it somehow possible to "detect" that the reactor is overloaded and start dropping connections, or refuse new connections? How can we avoid the reactor being completely overloaded and not being able to catch up?

If I understand Twisted reactors correctly, they don't parallelize everything. Whatever operations have been queued are scheduled and executed one by one.
One way out for you is a custom addCallback wrapper that checks how many callbacks are already registered and drops work if necessary.
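A minimal sketch of that idea, assuming all work is funnelled through one helper; MAX_PENDING, tracked_call, and OverloadedError are invented names for this illustration, not Twisted APIs:

from twisted.internet import defer

MAX_PENDING = 1000   # illustrative threshold; tune for your service
_pending = 0

class OverloadedError(Exception):
    """Raised when we refuse work instead of queueing it."""

def tracked_call(func, *args, **kwargs):
    """Run func (anything returning a Deferred), shedding load when busy."""
    global _pending
    if _pending >= MAX_PENDING:
        return defer.fail(OverloadedError("server overloaded, try again later"))
    _pending += 1

    def done(result):
        global _pending
        _pending -= 1
        return result

    d = defer.maybeDeferred(func, *args, **kwargs)
    d.addBoth(done)   # decrement on success and on failure alike
    return d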

No easy way, but here are some suggestions: http://www.mail-archive.com/twisted-python@twistedmatrix.com/msg00389.html

I would approach this per protocol. Throttle when the actual service requires it, not when you think it will. Rather than worrying about how many callbacks are waiting for a reactor tick, I'd worry about how long the HTTP requests (for example) are taking to complete. The number of operations waiting for the reactor could be an implementation detail - for example, if one access pattern ended up with callbacks on long DeferredLists, and another had a more linear chain of callbacks, the time to respond might not be different even though the number of callbacks would be.
This could be done by keeping metrics of the time to complete a logical operation (such as servicing an HTTP request). An advantage of this is that it gives you important information before a problem happens.
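For example, a rough sketch of that kind of metric, assuming operations are wrapped in one helper; SLOW_THRESHOLD and the sampling scheme are invented for illustration:

import time
from twisted.internet import defer

SLOW_THRESHOLD = 0.5   # seconds; pick a value that matters for your service
recent_times = []      # naive rolling sample of completion times

def timed(func, *args, **kwargs):
    """Time a logical operation (e.g. servicing one HTTP request)."""
    start = time.time()
    d = defer.maybeDeferred(func, *args, **kwargs)

    def record(result):
        recent_times.append(time.time() - start)
        del recent_times[:-100]   # keep only the last 100 samples
        return result

    return d.addBoth(record)

def overloaded():
    """True if the median recent operation exceeded the threshold."""
    if not recent_times:
        return False
    return sorted(recent_times)[len(recent_times) // 2] > SLOW_THRESHOLD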

Sharing DB client among multiple processes in Python?

My Python application uses concurrent.futures.ProcessPoolExecutor with 5 workers, and each process makes multiple database queries.
Between giving each process its own DB client and making all processes share a single client, which is considered safer and more conventional?
Short answer: Give each process (that needs it) its own db client.
Long answer: What problem are you trying to solve?
Sharing a DB client between processes basically doesn't happen; you'd have to have the one process which does have the DB client proxy the queries from the others, using more-or-less your own protocol. That can have benefits, if that protocol is specific to your application, but it will add complexity: you'll now have two different kinds of workers in your program, rather than just one kind, plus the protocol between them. You'd want to make sure that the benefits outweigh the additional complexity.
Sharing a DB client between threads is usually possible; you'd have to check the documentation to see which objects and operations are "thread-safe". However, since your application is otherwise CPU-heavy, threading is not suitable, due to Python limitations (the GIL).
At the same time, there's little cost to having a DB client in each process; you will need some sort of client in any case, and it might as well be the direct one.
There isn't going to be much more IO, since that's mostly based on the total number of queries and amount of data, regardless of whether that comes from one process or gets spread among several. The only additional IO will be in the login, and that's not much.
If you're running out of connections at the database, you can either tune/upgrade your database for more connections, or use a separate off-the-shelf "connection pooler" to share them; that's likely to be much better than trying to implement a connection pooler from scratch.
More generally, and this applies well beyond this particular question, it's often better to combine several off-the-shelf pieces in a straightforward way, than it is to try to put together a custom complex piece that does the whole thing all at once.
So, what problem are you trying to solve?
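As a sketch of the per-process approach: ProcessPoolExecutor's initializer runs once in each worker process, so each worker can create and keep its own connection. Here sqlite3 and the file name stand in for whatever driver and database you actually use:

import sqlite3
from concurrent.futures import ProcessPoolExecutor

_conn = None   # per-process global, set by the initializer

def init_worker():
    global _conn
    _conn = sqlite3.connect("app.db")   # one connection per worker process

def run_query(sql, params=()):
    return _conn.execute(sql, params).fetchall()

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=5, initializer=init_worker) as pool:
        print(pool.submit(run_query, "SELECT 1").result())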
It is better to use a multithreaded or asynchronous approach instead of multiprocessing because it will consume fewer resources. That way you could use a single DB connection, but I would recommend creating a separate session for each worker or coroutine to avoid exceptions and locking problems.

Flask: spawning a single async sub-task within a request

I have seen a few variants of my question but not quite exactly what I am looking for, hence opening a new question.
I have a Flask/Gunicorn app that for each request inserts some data in a store and, consequently, kicks off an indexing job. The indexing is 2-4 times longer than the main data write and I would like to do that asynchronously to reduce the response latency.
The overall request lifespan is 100-150ms for a large request body.
I have thought about a few ways to do this that are as resource-efficient as possible:
Use Celery. This seems the most obvious way to do it, but I don't want to introduce a large library and most of all, a dependency on Redis or other system packages.
Use subprocess.Popen. This may be a good route but my bottleneck is I/O, so threads could be more efficient.
Using threads? I am not sure how and if that can be done. All I know is how to launch multiple tasks concurrently with ThreadPoolExecutor, but I only need to spawn one additional task and return immediately without waiting for the results.
asyncio? This too I am not sure how to apply to my situation; asyncio always seems to require a blocking call somewhere.
Launching data write and indexing concurrently: not doable. I have to wait for a response from the data write to launch indexing.
Any suggestions are welcome!
Thanks.
Celery will be your best bet - this is exactly what it's for.
Needing to introduce dependencies is not a bad thing in itself; just make sure you don't carry unneeded ones.
Depending on your architecture, though, more advanced and locked-in solutions might be available. You could, if you're using AWS, launch an AWS Lambda function by firing off an AWS SNS notification, and have that handle what it needs to do. The sky is the limit.
I actually should have perused the Python manual section on concurrency better: the threading module does just what I needed: https://docs.python.org/3.5/library/threading.html
And I confirmed with some dummy sleep code that the sub-thread gets completed even after the Flask request is completed.
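A minimal sketch of what that might look like, assuming Flask 1.1+ (for the dict return); write_to_store and index_record are stand-ins for the real data write and indexing job:

import threading
from flask import Flask, request

app = Flask(__name__)

def write_to_store(payload):   # stand-in for the blocking data write
    return 123                 # pretend this is the new record id

def index_record(record_id):   # stand-in for the slow indexing job
    pass

@app.route("/items", methods=["POST"])
def create_item():
    record_id = write_to_store(request.get_json())
    # Fire off indexing and return immediately; a non-daemon thread
    # keeps running after the response has been sent.
    threading.Thread(target=index_record, args=(record_id,)).start()
    return {"id": record_id}, 202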

Concurrently searching a graph in Python 3

I'd like to create a small p2p application that concurrently processes incoming data from other known / trusted nodes (it mostly stores it in an SQLite database). In order to recognize these nodes, upon connecting, each node introduces itself and my application then needs to check whether it knows this node directly or maybe indirectly through another node. Hence, I need to do a graph search which obviously needs processing time and which I'd like to outsource to a separate process (or even multiple worker processes? See my 2nd question below). Also, in some cases it is necessary to adjust the graph, add new edges or vertices.
Let's say I have 4 worker processes accepting and handling incoming connections via asynchronous I/O. What's the best way for them to access (read / modify) the graph? A single queue obviously doesn't do the trick for read access because I need to pass the search results back somehow.
Hence, one way to do it would be another queue which would be filled by the graph searching process and which I could add to the event loop. The event loop could then pass the results to a handler. However, this event/callback-based approach would make it necessary to also always pass the corresponding sockets to the callbacks and thus to the Queue – which is nasty because sockets are not picklable. (Let alone the fact that callbacks lead to spaghetti code.)
Another idea that's just crossed my mind might be to create a pipe to the graph process for each incoming connection and then, on the graph's side, do asynchronous I/O as well. However, in order to avoid callbacks, if I understand correctly, I would need an async I/O library making use of yield from (i.e. tulip / PEP 3156). Are there other options?
Regarding async I/O on the graph's side: This is certainly the best way to handle many incoming requests at once but doing graph lookups is a CPU intensive task, thus could profit from using multiple worker threads or processes. The problem is: Multiple threads allow shared data but Python's GIL somewhat negates the performance benefit. Multiple processes on the other hand don't have this problem but how can I share and synchronize data between them? (For me it seems quite impossible to split up a graph.) Is there any way to solve this problem in a nice way? Also, does it make sense in terms of performance to mix asynchronous I/O with multithreading / multiprocessing?
Answering your last question: it does! But, IMHO, the real question is: does it make sense to mix events and threads? You can check this article about hybrid concurrency models: http://bibliotecadigital.sbc.org.br/download.php?paper=3027
My tip: start with just one process and an event loop, like in the tulip model. I'll try to explain how you can use tulip to combine events and async I/O (plus threads or other processes) without any callbacks.
You could have something like accept = yield from check_incoming(), where check_incoming is a tulip coroutine; inside it you could use loop.run_in_executor() to run your graph search in a thread/process pool (more on this below). run_in_executor() returns a Future, which you can wait on with yield from tasks.wait([future_returned_by_run_in_executor], loop=self). The next step would be result = future_returned_by_run_in_executor.result(), and finally to return True or False.
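In modern terms (tulip grew into asyncio, and await replaced yield from), that flow might look roughly like this; search_graph and the node names are stand-ins for the real lookup:

import asyncio
from concurrent.futures import ProcessPoolExecutor

def search_graph(node_id):   # CPU-bound work; runs inside the pool
    return node_id in {"alice", "bob"}   # stand-in for the real search

async def check_incoming(loop, pool, node_id):
    # run_in_executor returns a future we can await without blocking
    # the event loop; with a process pool, arguments and results
    # must be picklable.
    return await loop.run_in_executor(pool, search_graph, node_id)

async def main():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        print(await check_incoming(loop, pool, "alice"))

if __name__ == "__main__":
    asyncio.run(main())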
The process pool requires that only picklable objects be submitted and returned. That requirement is not a problem, but it does mean the graph operation must be self-contained in a function and must obtain the graph instance somehow. The thread pool has the GIL problem, since you mentioned CPU-bound tasks, which can lead to GIL-acquisition conflicts, though this was improved by the new Python 3.x GIL. Both solutions have limitations.
So, instead of a pool, you can have another single process with its own event loop just to manage all the graph work, and connect the two processes with a Unix domain socket, for instance.
This second process, just like the first one, must also accept incoming connections (but now they come from a known source) and can use a thread pool just like I said earlier, but that pool won't "conflict" with the first event-loop process (the one that handles external clients), only with the second event loop. Threads sharing the same graph instance require some locking/unlocking.
Hope it helped!

How will this asynchronous code execute in Python vs. C?

Let's say I have a simple script with the two functions below. As callback_func is invoked I would assume it will only run on a singular basis. That is, there won't be two events passing through the code block at the same time. Is that correct? Also, if callback_func runs on a singular basis, the messaging service itself would have to perform some buffering so no messages are lost, and that buffering depends on the service originating the event. Is that also correct?
def callback_func(event):
    pass   # Can be called anytime

def main_func():
    pass   # Sets up a connection to a messaging service
Then what if I add a send_func? If I receive one message but I have three going out, how will send_func deal with a situation when it gets called while sending a message? Is such a situation handled by the Python interpreter?
def send_func(event):
    pass   # Can be called anytime

def callback_func(event):
    pass   # Can be called anytime

def main_func():
    pass   # Sets up a connection to a messaging service
Then lastly, if I change the language to C, how do the answers to my questions above change?
Confusing Two Concepts (Asynchronous != Concurrent)
Asynchronous does not imply Concurrent, and Concurrent does not imply Asynchronous. These terms get semantically confused by beginners ( and some experts ), but they are different concepts!
You can have one without the other, or both sometimes.
Asynchronous means you don't wait for something; it doesn't imply that it happens while other things do, just that it may happen later.
Concurrent means more than one completely individual thing is happening at the exact same time; these things can be synchronous while being isolated and concurrent.
Implementation Specific
A CPython program like this runs in a single thread, so there is no concern about re-entry. Other Python runtimes allow for concurrency and would need locking mechanisms if those features were used.
C is likewise single threaded unless you specifically start new threads, in which case you would need a locking mechanism.
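If threads did enter the picture, the locking would be the usual kind; a hedged sketch, where Connection stands in for the real messaging client:

import threading

class Connection:   # stand-in for the real messaging client
    def send(self, event):
        print("sent", event)

connection = Connection()
send_lock = threading.Lock()

def send_func(event):
    # Only needed once send_func can run on more than one thread:
    # the lock keeps two sends from interleaving on the connection.
    with send_lock:
        connection.send(event)

send_func({"type": "hello"})   # example call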
I'd like to add that message buffering can happen in many places other than the "service". At a low level, I believe the operating system will buffer incoming and outgoing bytes on a socket.

Twisted: Making code non-blocking

I'm a bit puzzled about how to write asynchronous code in Python/Twisted. Suppose (for argument's sake) I am exposing a function to the world that will take a number and return True/False depending on whether it is prime, so it looks vaguely like this:
def IsPrime(numberin):
    for n in range(2, numberin):
        if numberin % n == 0:
            return False
    return True
(just to illustrate).
Now let's say there is a webserver which needs to call IsPrime based on a submitted value. This will take a long time for large numberin.
If in the meantime another user asks for the primality of a small number, is there a way to run the two function calls asynchronously using the reactor/deferreds architecture so that the result of the short calc gets returned before the result of the long calc?
I understand how to do this if the IsPrime functionality came from some other webserver to which my webserver would do a deferred getPage, but what if it's just a local function?
i.e., can Twisted somehow time-share between the two calls to IsPrime, or would that require an explicit invocation of a new thread?
Or, would the IsPrime loop need to be chunked into a series of smaller loops so that control can be passed back to the reactor rapidly?
Or something else?
I think your current understanding is basically correct. Twisted is just a Python library and the Python code you write to use it executes normally as you would expect Python code to: if you have only a single thread (and a single process), then only one thing happens at a time. Almost no APIs provided by Twisted create new threads or processes, so in the normal course of things your code runs sequentially; isPrime cannot execute a second time until after it has finished executing the first time.
Still considering just a single thread (and a single process), all of the "concurrency" or "parallelism" of Twisted comes from the fact that instead of doing blocking network I/O (and certain other blocking operations), Twisted provides tools for performing the operation in a non-blocking way. This lets your program continue on to perform other work when it might otherwise have been stuck doing nothing waiting for a blocking I/O operation (such as reading from or writing to a socket) to complete.
It is possible to make things "asynchronous" by splitting them into small chunks and letting event handlers run in between these chunks. This is sometimes a useful approach, if the transformation doesn't make the code too much more difficult to understand and maintain. Twisted provides a helper for scheduling these chunks of work, cooperate. It is beneficial to use this helper since it can make scheduling decisions based on all of the different sources of work and ensure that there is time left over to service event sources without significant additional latency (in other words, the more jobs you add to it, the less time each job will get, so that the reactor can keep doing its job).
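For instance, the IsPrime loop from the question might be chunked for cooperate along these lines (the chunk size and the test number are arbitrary):

from twisted.internet import task, reactor

def is_prime_chunked(numberin, result, chunk=1000):
    """Generator form of IsPrime: yield between chunks of divisor
    tests so the reactor can service other events in the gaps."""
    for n in range(2, numberin):
        if numberin % n == 0:
            result.append(False)
            return
        if n % chunk == 0:
            yield   # hand control back to the reactor
    result.append(True)

result = []
d = task.cooperate(is_prime_chunked(15485863, result)).whenDone()

def report(_):
    print(result[0])
    reactor.stop()

d.addCallback(report)
reactor.run()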
Twisted does also provide several APIs for dealing with threads and processes. These can be useful if it is not obvious how to break a job into chunks. You can use deferToThread to run a (thread-safe!) function in a thread pool. Conveniently, this API returns a Deferred which will eventually fire with the return value of the function (or with a Failure if the function raises an exception). These Deferreds look like any other, and as far as the code using them is concerned, it could just as well come back from a call like getPage - a function that uses no extra threads, just non-blocking I/O and event handlers.
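With deferToThread, the unmodified blocking function can be used as-is; a brief sketch:

from twisted.internet import reactor
from twisted.internet.threads import deferToThread

def is_prime(numberin):   # the same blocking loop, unchanged
    for n in range(2, numberin):
        if numberin % n == 0:
            return False
    return True

# Runs in the reactor's thread pool; the Deferred fires with the
# return value (or a Failure if the function raises).
d = deferToThread(is_prime, 15485863)

def report(result):
    print(result)
    reactor.stop()

d.addCallback(report)
reactor.run()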
Since Python isn't ideally suited for running multiple CPU-bound threads in a single process, Twisted also provides a non-blocking API for launching and communicating with child processes. You can offload calculations to such processes to take advantage of additional CPUs or cores without worrying about the GIL slowing you down, something that neither the chunking strategy nor the threading approach offers. The lowest level API for dealing with such processes is reactor.spawnProcess. There is also Ampoule, a package which will manage a process pool for you and provides an analog to deferToThread for processes, deferToAMPProcess.
