Python multithreading communication

I use a thread pool and I submit some tasks to be processed.
Sometimes a server that I ping or my internet connection could be down. This can be determined by any of the running threads.
In that case I would like the thread that detected the error to notify the others to pause
their execution until the error is fixed.
Do you know how this is possible?
To add to the above, the ideal solution would be:
when a thread detects the problem, it sends a message to all other threads to wait.
It should also notify an external server and, after receiving confirmation from the
server that everything is OK, send a signal to the other threads to continue the work.

I found it using threading.Event.
I just have an event:
event = Event()
and the worker threads call
event.wait()
In the beginning I call
event.set()
When a thread detects an error it calls
event.clear()
and the other threads wait until something calls
event.set()
again. This seems to work.
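For reference, here is a minimal sketch of this pattern with a concurrent.futures pool. It is only an illustration: server_is_up() and the time.sleep() call are placeholders for the real ping check and the real unit of work.

import threading
import time
from concurrent.futures import ThreadPoolExecutor

# Minimal sketch of the Event-based pause/resume pattern described above.
# server_is_up() and time.sleep() are placeholders for the real ping check
# and the real piece of work.
ok_to_run = threading.Event()
ok_to_run.set()                      # start in the "everything is fine" state

def server_is_up():
    return True                      # placeholder for the real connectivity check

def worker(task_id):
    for step in range(3):
        ok_to_run.wait()             # every worker pauses here while the event is cleared
        if not server_is_up():
            ok_to_run.clear()        # tell all other workers to pause
            # ... notify the external server and wait for its OK here ...
            ok_to_run.set()          # resume the workers
        time.sleep(0.1)              # placeholder for the real piece of work

with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(worker, range(8)))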

Related

Why is my queue hanging with asyncio event loop

I'm launching a new process (edit: the same thing applies to a new thread) for computations from an async event loop. The new process has its own asyncio event loop running and runs fine without any kind of blocking behavior.
I created two queues (multiprocessing.Queue or multiprocessing.Manager.Queue), one for outgoing messages and another for incoming messages. I get the same behavior with both queues. The queue for outgoing messages is working fine, as I put/get a message on the queue with:
await asyncio.get_running_loop().run_in_executor(None, self.incoming_queue.put, msg)
msg = await asyncio.get_running_loop().run_in_executor(None, self.incoming_queue.get, True, 1)
However, when I attempt to run the same get() command in my original asyncio application using the asyncio run_in_executor command, it just hangs. The event loop itself seems fine and responsive.
Disabling the working queue doesn't change things, and neither does the executor (default, thread, or process).
Ideas?
I've decided to make an answer here based on my investigation. In short: what works in a new event loop in a new process does NOT work in the Django Channels event loop for one reason or another.
My current solution is to manually create a new thread to run my synchronous listener in. I'm looking into options for why the Channels event loop wouldn't work in my use case.
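For anyone hitting the same thing, here is a minimal sketch of that workaround, assuming the goal is to push each message received on a blocking queue back into the already-running asyncio loop. The function, queue and handler names are made up for illustration.

import threading

def start_listener(loop, incoming_queue, handle_message):
    # Run the blocking incoming_queue.get() in a plain thread and hand each
    # message back to the asyncio loop with call_soon_threadsafe.
    # incoming_queue would be e.g. a multiprocessing.Queue; handle_message is
    # a hypothetical plain callable that runs on the loop's thread.
    def listen():
        while True:
            msg = incoming_queue.get()             # blocks this thread only
            if msg is None:                        # sentinel to shut the listener down
                break
            loop.call_soon_threadsafe(handle_message, msg)

    t = threading.Thread(target=listen, daemon=True)
    t.start()
    return t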

RabbitMQ pika async consumer heartbeat issue after consumer cancellation

Using RabbitMQ and pika (Python), I am running a job queuing system that feeds nodes (asynchronous consumers) with tasks. Each message that defines a task is only acknowledged once that task is completed.
Sometimes I need to perform updates on these nodes and I have created an exit mode, in which the node waits for its tasks to finish, then exits gracefully. I can then perform my maintenance work.
So that a node does not get more messages from RabbitMQ while in this exit mode, I let it call the basic_cancel method before waiting for the jobs to finish.
The effect of this method is described in the pika documentation:
This method cancels a consumer. This does not affect already
delivered messages, but it does mean the server will not send any more
messages for that consumer. The client may receive an arbitrary number
of messages in between sending the cancel method and receiving the
cancel-ok reply. It may also be sent from the server to the client in
the event of the consumer being unexpectedly cancelled (i.e. cancelled
for any reason other than the server receiving the corresponding
basic.cancel from the client). This allows clients to be notified of
the loss of consumers due to events such as queue deletion.
So if you read "already delivered messages" as messages already received but not necessarily acknowledged, the tasks that the exit mode waits for should not be requeued, even if the consumer node that runs them cancels itself out of the queuing system.
My code for the stop function of my async consumer class (taken from the pika example) is similar to this one:
def stop(self):
    """Cleanly shutdown the connection to RabbitMQ by stopping the consumer
    with RabbitMQ. When RabbitMQ confirms the cancellation, on_cancelok
    will be invoked by pika, which will then close the channel and
    connection. The IOLoop is started again because this method is invoked
    when CTRL-C is pressed raising a KeyboardInterrupt exception. This
    exception stops the IOLoop which needs to be running for pika to
    communicate with RabbitMQ. All of the commands issued prior to starting
    the IOLoop will be buffered but not processed.
    """
    LOGGER.info('Stopping')
    self._closing = True
    self.stop_consuming()
    LOGGER.info('Waiting for all running jobs to complete')
    for index, thread in enumerate(self.threads):
        if thread.is_alive():
            thread.join()
            # also tried with a while loop that waits 10s as long as the
            # thread is still alive
        LOGGER.info('Thread {} has finished'.format(index))
    # also tried moving the call to stop_consuming up to this point
    if self._connection != None:
        self._connection.ioloop.start()
    LOGGER.info('Closing connection')
    self.close_connection()
My issue is that after the consumer cancellation, the async consumer appears to not be sending heartbeats anymore, even if I perform the cancellation after the loop where I wait for my tasks (threads) to finish.
I have read about a process_data_events function for BlockingConnection, but I could not find such a function. Is the ioloop of the SelectConnection class the equivalent for async consumers?
As my node in exit mode does not send heartbeats anymore, the tasks it is currently performing will be requeued by RabbitMQ once the maximum heartbeat is reached. I would like to keep this heartbeat untouched, as it is anyway not an issue when I am not in exit mode (my heartbeat here is about 100s, and my tasks might take as much as 2 hours to complete).
Looking at the RabbitMQ logs, the heartbeat is indeed the reason:
=ERROR REPORT==== 12-Apr-2017::19:24:23 ===
closing AMQP connection (.....) :
missed heartbeats from client, timeout: 100s
The only workaround I can think of is acknowledging the messages corresponding to the tasks still running when in exit mode, and hoping that these tasks will not fail...
Is there any method on the channel or connection that I can use to send some heartbeats manually while waiting?
Could the issue be that the time.sleep() or thread.join() methods (from the Python threading package) are completely blocking and do not allow other threads to perform what they need? I use them in other applications and they don't seem to behave like that.
As this issue only appears in exit mode, I guess there is something in the stop function that causes the consumer to stop sending heartbeats, but as I have also tried (without any success) to call the stop_consuming method only after the wait-on-running-tasks loop, I don't see what could be the root of this issue.
Thanks a lot for your help!
It turns out the stop_consuming function was calling basic_cancel asynchronously with a callback that closed the channel, which stopped my application's RabbitMQ interaction and caused RabbitMQ to requeue the unacknowledged messages. I actually realized this because the threads that later tried to acknowledge the remaining tasks were raising an error: the channel was now set to None and thus no longer had an ack method.
Hope it helps someone!
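For completeness, one hedged way to wait for the running jobs without starving the heartbeat is to poll the worker threads from pika's own ioloop instead of blocking in thread.join(), and only cancel and close once everything is done. This is only a sketch, not the fix described above; the attribute and method names follow the pika asynchronous-consumer example, and call_later is the pika 1.x ioloop timer (older releases use add_timeout), so verify it against your pika version.

def wait_for_jobs_then_close(self):
    # Poll the worker threads from the pika ioloop instead of blocking in
    # thread.join(), so the ioloop keeps running and heartbeats keep flowing.
    # Sketch only: names follow the pika asynchronous-consumer example.
    if any(thread.is_alive() for thread in self.threads):
        # Not done yet; check again in a second while the ioloop stays responsive.
        self._connection.ioloop.call_later(1.0, self.wait_for_jobs_then_close)
    else:
        LOGGER.info('All running jobs have completed, closing')
        self.stop_consuming()        # triggers basic_cancel -> on_cancelok
        self.close_connection()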

python: How to unblock a script that uses recvfrom

I'm using a client that connects a socket via UDP in Python. I have two threads. After a KeyboardInterrupt, the first thread is still waiting for data via recvfrom.
(...)
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
(...)
udp.shutdown(socket.SHUT_RDWR) # Does not work
udp.close() # Does not work
I have a global variable that all threads share and I update this value after a KeyboardInterrupt. But the thread doesn't close.
How can I exit from this thread?
Thanks in advance.
Some clarification might be needed on your part, but this is what I understand from your question. If your KeyboardInterrupt kills the first thread and you need paired threads to close when this occurs, then you should consider using a daemon thread. This will guarantee that these threads end once the main program (all non-daemon threads) finishes. However, as mentioned in the docs:
Daemon threads are abruptly stopped at shutdown. Their resources (such as open files, database transactions, etc.) may not be released properly. If you want your threads to stop gracefully, make them non-daemonic and use a suitable signalling mechanism such as an Event.
Events are nothing other than signals meant to be utilized between threads. Just like a status flag, you can check whether an event has been 'set' and take proper precautions to handle resources before finishing execution within the thread.
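For example, here is a minimal sketch of the Event-based approach for the recvfrom case, combined with a socket timeout so the loop can periodically re-check the flag. The address, port and buffer size are placeholders.

import socket
import threading

stop_event = threading.Event()

def udp_listener():
    # Placeholder address and port; adjust for the real application.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.bind(('0.0.0.0', 9999))
    udp.settimeout(1.0)              # recvfrom wakes up at least once per second
    try:
        while not stop_event.is_set():
            try:
                data, addr = udp.recvfrom(4096)
            except socket.timeout:
                continue             # no data yet; re-check the stop flag
            print('received %r from %s' % (data, addr))
    finally:
        udp.close()

t = threading.Thread(target=udp_listener)
t.start()
try:
    t.join()
except KeyboardInterrupt:
    stop_event.set()                 # the listener exits at its next timeout
    t.join()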

How to wake up a thread from a thread pool in Python?

I am new to Python and am developing an application in Python 2.7. I am using a thread pool provided by the concurrent.futures library. Once a thread from the ThreadPool is started, it needs to wait for some message from RabbitMQ.
How can I implement this logic in Python to make this thread from the pool wait for event messages? Basically I need to wake up a waiting thread once I receive a message from RabbitMQ (i.e. a wait/notify implementation on the ThreadPool).
First you define a Queue:
from Queue import Queue
q = Queue()
then, in your thread, you attempt to get an item from that queue:
msg = q.get()
this will block the entire thread until there is something to be found in the queue.
Now, at the same time, assuming your incoming events are notified by means of triggering callbacks, you register a callback that simply puts the received RabbitMQ message in the queue:
def on_message(msg):
    q.put(msg)

rabbitmq_channel.register_callback(on_message)
or if you like shorter code:
rabbitmq_channel.register_callback(lambda msg: q.put(msg))
(the above is pseudocode because I've not used RabbitMQ nor whatever Python bindings for RabbitMQ, but you should be able to easily figure out how to adapt the snippet to your real application code; the key part to pay attention to is q.put(msg)—just make sure that part gets invoked as soon as a new message is notified.)
as soon as this happens, the thread is awakened and is free to process the message. In order to reuse the same thread for multiple messages, just use a while loop:
while True:
    msg = q.get()
    process_message(msg)
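Since the question mentions concurrent.futures specifically, here is a hedged sketch of how the pieces above could be wired into the pool. It assumes Python 2.7 with the futures backport, as in the question; in Python 3 the import would be queue.Queue and concurrent.futures ships with the standard library.

from Queue import Queue                          # queue.Queue in Python 3
from concurrent.futures import ThreadPoolExecutor

q = Queue()

def process_message(msg):
    print('processing %s' % msg)                 # placeholder for the real work

def consumer_worker():
    # Runs in a pool thread and blocks on q.get() until the RabbitMQ
    # callback puts a message into the queue.
    while True:
        msg = q.get()
        if msg is None:                          # sentinel used to stop the worker
            break
        process_message(msg)

pool = ThreadPoolExecutor(max_workers=1)
future = pool.submit(consumer_worker)

# The RabbitMQ callback elsewhere just does q.put(msg);
# q.put(None) stops the worker, after which pool.shutdown() can be called.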
P.S. I would suggest looking into gevent and how to combine it with RabbitMQ in your Python application, so as to get rid of threads and use a more lightweight and scalable green-threading mechanism instead, without ever having to manage a thread pool (because you can just have tens of thousands of greenlets spawned and killed on the fly):
# this is always called in a green thread; forget about pools and queues.
def on_message(msg):
    # you're in a green thread now; just process away!
    benefit_from("all the gevent goodness!")
    spawn_and_join_10_sub_greenlets()

rabbitmq_channel.register_callback(lambda msg: gevent.spawn(on_message, msg))

redis-py subscribe blocked when reading messages

Recently I used Python and Redis to build a small message-driven project.
I use one thread to subscribe to a Redis channel (called the message thread here), a timer thread, and a worker thread.
When the message thread has collected enough messages, it posts a task to the worker.
I use redis-py to communicate with Redis.
Message Thread:
    subscribe to redis;
    while True:
        get message;
        if len(messages) > threshold: post task to Worker

Worker Thread:
    while True:
        wait task event;
        do task;  // this may be heavy
Here comes the problem:
after this works for a while, the redis-py pubsub blocks! (Of course Redis is still publishing messages, but it does not return anymore, it just blocks!) I used gdb to attach to it, and I see a stack frame like this:
[Switching to thread 4 (Thread 1084229984 (LWP 9812))]#0 0x000000302b80b0cf in __read_nocancel () from /lib64/tls/libpthread.so.0
(gdb) bt
#0 0x000000302b80b0cf in __read_nocancel () from /lib64/tls/libpthread.so.0
#1 0x00000000004e129a in posix_read (self=Variable "self" is not available.) at ./Modules/posixmodule.c:6592
#2 0x00000000004a04c5 in PyEval_EvalFrameEx (f=0x157a8c0, throwflag=Variable "throwflag" is not available.) at Python/ceval.c:4323
I even used the Redis 'client kill' command to kill the connection between Python and Redis, but Python still blocks there and never returns or raises an exception. The only way out is to kill the Python process with kill -9.
Then I commented out the worker's 'do task' procedure (remember this task is heavy: it does heavy network I/O and CPU calculation), and it works well with no problem observed.
So it seems to come to a conclusion: once I let the worker do its task, the message thread will block at the socket read.
How can this happen?!
The most probable explanation is that you use the same Redis connection in your task-processing code. You should not.
Once a connection has subscribed, you cannot use it for anything except receiving messages, or running additional SUBSCRIBE, PSUBSCRIBE, UNSUBSCRIBE and PUNSUBSCRIBE commands.
You probably need a second Redis connection in your task processing code.
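For illustration, a minimal sketch of the two-connection setup with redis-py's pubsub interface; the host, channel and list names are placeholders.

import redis

# One connection dedicated exclusively to the subscription...
sub_conn = redis.Redis(host='localhost', port=6379)
pubsub = sub_conn.pubsub()
pubsub.subscribe('tasks')                        # placeholder channel name

# ...and a completely separate connection for everything else.
work_conn = redis.Redis(host='localhost', port=6379)

for message in pubsub.listen():                  # message thread: receive only
    if message['type'] != 'message':
        continue
    # Hand the payload over for processing; the worker must use work_conn,
    # never sub_conn, for its own GET/SET/LPUSH/... commands.
    work_conn.lpush('work-queue', message['data'])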
