I'm launching a new process (edit: the same thing applies to a new thread) for computations from an async event loop. The new process has its own asyncio event loop running, and it runs fine without any kind of blocking behavior.
I created two queues (multiprocessing.Queue or multiprocessing.Manager.Queue), one for outgoing messages and another for incoming messages; I get the same behavior with both. The queue for outgoing messages is working fine, as I put/get a message on the queue with:
await asyncio.get_running_loop().run_in_executor(None, self.incoming_queue.put, msg)
msg = await asyncio.get_running_loop().run_in_executor(None, self.incoming_queue.get, True, 1)
However, when I attempt to run the same get() command in my original asyncio application using the asyncio run_in_executor command, it just hangs. The event loop itself seems fine and responsive.
Disabling the working queue doesn't change things, and neither does the choice of executor (default, thread, or process).
Ideas?
I've decided to make an answer here based on my investigation. In short: what works in a new event loop in a new process does NOT work in the Django Channels event loop for one reason or another.
My current solution is to manually create a new thread to run my synchronous listener in. I'm looking into options for why the Channels event loop wouldn't work in my use case.
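For reference, a rough sketch of that thread-based workaround (handle_message and the exact queue wiring are placeholders, not my actual code): a plain thread blocks on the multiprocessing queue and forwards each message back into the running loop.

import asyncio
import queue
import threading

def listener(incoming_queue, loop, handle_message):
    # Blocking listener running in its own thread.
    while True:
        try:
            msg = incoming_queue.get(True, 1)  # block with a 1s timeout
        except queue.Empty:
            continue
        # Hand the message back to the (Channels) event loop thread-safely.
        asyncio.run_coroutine_threadsafe(handle_message(msg), loop)

# From inside the async application:
#     loop = asyncio.get_running_loop()
#     threading.Thread(target=listener,
#                      args=(incoming_queue, loop, handle_message),
#                      daemon=True).start()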
I made a script that does web scraping and API requests, and I wanted to add discord.py to send the results to my Discord server, but execution stops after this:
client.run('token')
Is there any way to fix this?
You need to use threads.
Python threading allows you to have different parts of your program run concurrently and can simplify your design.
What Is a Thread?
A thread is a separate flow of execution. This means that your program will have two things happening at once.
Because of the GIL, though, CPython threads don't actually run Python code in parallel: getting multiple tasks running truly simultaneously requires a non-standard implementation of Python, writing some of your code in a different language, or using multiprocessing, which comes with some extra overhead.
Starting a thread
The Python standard library provides the threading module:
import threading

def thread_function(name):
    print(f"thread running with argument {name}")  # placeholder target

x = threading.Thread(target=thread_function, args=(1,))
x.start()
Wrapping up
You need a separate thread for each loop: run the blocking discord client in one and do the web scraping and API requests in the other, roughly as sketched below.
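A rough sketch of that setup (the scraper body, the queue name, and the token are placeholders; for simplicity the blocking client.run() keeps the main thread here and the scraping loop gets the worker thread, which sidesteps event-loop and signal-handler issues some discord.py versions have when run() is called outside the main thread):

import threading
import queue

import discord

results = queue.Queue()  # hands scraped results over to the bot

client = discord.Client(intents=discord.Intents.default())

@client.event
async def on_ready():
    print('logged in as', client.user)

def run_scraper():
    # placeholder for the web scraping / API request loop
    results.put({'title': 'example result'})

threading.Thread(target=run_scraper, daemon=True).start()
client.run('token')  # blocking; keeps the main thread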
The run method is completely blocking, so there are two ways I can see to solve this issue:
create and run the client in a separate thread, and use a queue of some sort to communicate between the client and the rest
use the start method, which returns a coroutine that you can wrap into a task and multiplex with your scraping and API requests, assuming those are also coroutines (see the sketch below)
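A minimal sketch of that second option, assuming the scraping can be written as a coroutine (scrape_and_post is a placeholder, and the intents argument assumes a reasonably recent discord.py):

import asyncio
import discord

client = discord.Client(intents=discord.Intents.default())

async def scrape_and_post():
    # placeholder for async web scraping / API requests
    await asyncio.sleep(1)

async def main():
    # client.start() is a coroutine, so both jobs can share one event loop
    await asyncio.gather(client.start('token'), scrape_and_post())

asyncio.run(main())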
client.run seems to be a blocking operation.
I.e. code placed after client.run is not meant to execute until the client shuts down.
You can try using loop.create_task() as described here, to schedule another coroutine that runs in the background and feeds messages into your client, for example:
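A sketch of that pattern as it looked with discord.py 1.x (CHANNEL_ID and the posting logic are placeholders; newer discord.py releases move task creation into setup_hook()):

import asyncio
import discord

client = discord.Client()
CHANNEL_ID = 123456789  # placeholder channel id

async def feed_results():
    await client.wait_until_ready()
    channel = client.get_channel(CHANNEL_ID)
    while not client.is_closed():
        # scrape / call your APIs here, then post the outcome
        await channel.send('new results')
        await asyncio.sleep(60)

client.loop.create_task(feed_results())  # schedule the background coroutine
client.run('token')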
I'm having issues with asyncio queues. Execution gets stuck on await queue.get() if the queue is empty, even after I publish something into the queue.
I have a loop which reads the event queue and starts right after the app loads, so the queue is empty on the first await. In a different coroutine I publish a message to this queue, yet execution still waits on the await statement. Only a single consumer is reading the queue. I publish the message using put_nowait():
async def _event_loop(self):
    while True:
        try:
            # if self.events.empty():
            #     await asyncio.sleep(0.1)
            #     continue
            ev = await self.events.get()
            print(ev)
If I uncomment the commented out part, the whole thing starts working.
I noticed a similar issue here:
https://github.com/mosquito/aio-pika/issues/56
But I had no luck figuring out how to fix this.
Does anyone have any idea what's wrong?
You are filling the queue from a thread different than the one that runs the event loop. By design, asyncio queues are not thread-safe and can only be safely accessed from asyncio coroutines and callbacks.
You can fix the issue by changing your call from queue.put_nowait(elem) to something like loop.call_soon_threadsafe(queue.put_nowait, elem), where loop is the event loop object, which you must also pass to the thread, probably the same way you pass the queue. For example:
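A minimal, self-contained sketch of that fix (the producer function here is just an illustration):

import asyncio
import threading

def producer(loop, q):
    # Runs in a worker thread; hands items to the loop's thread safely.
    for i in range(3):
        loop.call_soon_threadsafe(q.put_nowait, i)

async def main():
    loop = asyncio.get_running_loop()
    q = asyncio.Queue()
    threading.Thread(target=producer, args=(loop, q), daemon=True).start()
    for _ in range(3):
        print(await q.get())  # no longer hangs on an empty queue

asyncio.run(main())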
Why, then, would uncommenting that part of the code fix the issue?
Uncommenting effectively removes the need for the coroutine to wake up while waiting on an empty queue. The wakeup didn't work because put_nowait assumes it is run from the event loop thread, and therefore doesn't need to emit an additional wakeup signal. See e.g. this answer for details.
I run a Python program which is driven by network events and cannot go 10-15 seconds without processing heartbeats. (More specifically, I use discord.py with a pretty large volume of events.)
In one possible scenario, a command stores a large amount of data in a database; this could take more than those 10 to 15 seconds and is blocking.
These are thousands of small database calls, so I could let the asynchronous event loop "run its course" in between those calls if needed. How can I make Python "await nothing" in this case?
A similar hack in JavaScript would be to await an already resolved Promise, which throws the process back into the event loop, letting it resolve more pressing events first.
await asyncio.sleep(0) is a way to return control to the event loop.
Instead of constantly calling it, though, you may go another way: run your blocking code in another thread using run_in_executor and await its completion. This way the event loop continues its course normally while the blocking work is processed in a background thread.
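A hedged sketch of both variants (save_row is a hypothetical blocking call standing in for the small database writes):

import asyncio

async def store_everything_cooperatively(rows):
    for i, row in enumerate(rows):
        save_row(row)               # hypothetical small blocking call
        if i % 100 == 0:
            await asyncio.sleep(0)  # yield to the event loop periodically

async def store_everything_in_executor(rows):
    loop = asyncio.get_running_loop()
    # push the whole blocking batch onto the default thread pool
    await loop.run_in_executor(None, lambda: [save_row(r) for r in rows])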
I am implementing an MQTT worker in Python with paho-mqtt.
Are the on_message() callbacks run in different threads, so that if one of the tasks is time consuming, other messages can still be processed?
If not, how to achieve this behaviour?
The Python client doesn't actually start any threads; that's why you have to call the loop functions to handle network events.
In Java you would use the onMessage callback to put the incoming message onto a local queue that a separate pool of threads will handle.
Python threads won't run Python code truly in parallel because of the GIL, but the multiprocessing module supports spawning processes that act like threads. Details of multiprocessing can be found here:
https://docs.python.org/2.7/library/multiprocessing.html
EDIT:
On looking at the paho Python code a little closer, it appears it can actually start a new thread (using the loop_start() function) to handle the network side of things that previously required calling the loop functions. This does not change the fact that all calls to the on_message callback will happen on this one thread. If you need to do large amounts of work in this callback, you should definitely look at spinning up a pool of worker threads to do that work, for example:
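A rough sketch of that pattern using the paho-mqtt 1.x callback API (the broker address, topic, and pool size are placeholders):

import queue
import threading

import paho.mqtt.client as mqtt

work_queue = queue.Queue()

def on_connect(client, userdata, flags, rc):
    client.subscribe('some/topic')

def on_message(client, userdata, msg):
    # keep the network-loop callback fast: just enqueue the message
    work_queue.put((msg.topic, msg.payload))

def worker():
    while True:
        topic, payload = work_queue.get()
        # the time-consuming processing happens here, off the network thread
        work_queue.task_done()

for _ in range(4):  # small pool of worker threads
    threading.Thread(target=worker, daemon=True).start()

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect('broker.example.com')
client.loop_forever()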
See also: http://www.tutorialspoint.com/python/python_multithreading.htm
Because of my zero knowledge about Python GUIs, I need some help building a mechanism for making requests from HTML, CSS, or Ajax (on a node.js, Apache, or nginx server) to a Python program, so that it executes certain functions.
For example,
I have a Python program running a while True: loop, and at a given moment I want to send it an interrupt signal along with some data so that it executes a function: a kind of event system.
First, I bind an event to the program:
#program.bind(EVENT_NAME, EVENT_HANDLER)
program.bind(miaowcat, miaowfunct)
The program runs, and any time an interrupt is performed it executes the function miaowfunct, passing the event data to *args:
def miaowfunct(*args):
This is a prototype, so args could be numeric signals or other elements.
I don't know how to do this.
This kind of problem is what messaging systems are designed to solve.
You write some code that needs to be executed at a trigger (this is called a consumer).
Your code that needs the function executed (called a producer) creates a message and sends it to a broker.
The broker takes your message and puts it on a queue.
The consumer is listening on this queue for messages, when it sees one, it will "wake up", run itself, and then go back to sleep.
For Python, the following are typically used:
Messaging Broker = RabbitMQ
Task Queue = Celery
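A minimal Celery sketch of that setup, reusing the miaowfunct name from the question (the broker URL assumes a local RabbitMQ with default credentials):

# tasks.py
from celery import Celery

app = Celery('tasks', broker='amqp://guest:guest@localhost//')

@app.task
def miaowfunct(*args):
    # the code that should run when the event fires
    print('event received:', args)

# The web side (the producer) then only has to call:
#     from tasks import miaowfunct
#     miaowfunct.delay('miaowcat', 42)
# and a worker started with `celery -A tasks worker` picks it up.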