Python Pika: how to use GeventConnection - python

We have several tasks that we consume from a message queue. The runtime of each task depends on fetching some data from a database. Therefore we would like to use gevent so that the program is not blocked when some database requests take a long time. We are trying to couple it with the Pika client, which has several asynchronous adapters, one of them for gevent: pika.adapters.gevent_connection.GeventConnection.
I set up some toy code, which consumes integer tasks from one queue and publishes them on another queue, sleeping for 4 seconds for each odd number:
# from gevent import monkey
# # Monkeypatch core python libraries to support asynchronous operations.
# monkey.patch_time()
import pika
from pika.adapters.gevent_connection import GeventConnection
from datetime import datetime
import time
def handle_delivery(unused_channel, method, header, body):
    """Called when we receive a message from RabbitMQ"""
    print(f"Received: {body} at {datetime.now()}")
    channel.basic_ack(method.delivery_tag)
    num = int(body)
    print(num)
    if num % 2 != 0:
        time.sleep(4)
    channel.basic_publish(
        exchange='my_test_exchange2',
        routing_key='my_test_queue2',
        body=body
    )
    print("Finished processing")

def on_connected(connection):
    """Called when we are fully connected to RabbitMQ"""
    # Open a channel
    connection.channel(on_open_callback=on_channel_open)

def on_channel_open(new_channel):
    """Called when our channel has opened"""
    global channel
    channel = new_channel
    channel.basic_qos(prefetch_count=1)
    channel.queue_declare(queue="my_queue_gevent5")
    channel.exchange_declare("my_test_exchange2")
    channel.queue_declare(queue="my_test_queue2")
    channel.queue_bind(exchange="my_test_exchange2", queue="my_test_queue2")
    channel.basic_consume("my_queue_gevent5", handle_delivery)

def start_loop(i):
    conn = GeventConnection(pika.ConnectionParameters('localhost'), on_open_callback=on_connected)
    conn.ioloop.start()

start_loop(1)
If I run it without the monkey.patch_time() call, it works OK and publishes results on my_test_queue2, but it works sequentially. The expected behaviour after adding the monkey.patch_time() call would be that it still works, but concurrently. However, the code gets stuck (nothing happens anymore) once it reaches the time.sleep(4) call. It processes and publishes the first integer, which is 0, and then gets stuck at 1, when the if clause is triggered. What am I doing wrong?

With the help of ChatGPT I managed to make it work. There was a gevent.spawn() call missing:
import gevent  # needed for gevent.spawn() (not shown in the original snippet)

def handle_delivery(unused_channel, method, header, body):
    print("Handling delivery")
    gevent.spawn(process_message, method, body)

def process_message(method, body):
    print(f"Received: {body} at {datetime.now()}")
    channel.basic_ack(method.delivery_tag)
    num = int(body)
    print(num)
    if num % 2 != 0:
        time.sleep(4)
    channel.basic_publish(
        exchange='my_test_exchange2',
        routing_key='my_test_queue2',
        body=body
    )
    print("Finished processing")

Related

RabbitMQ Pika long-running task and heartbeat threading

I have attempted to follow guidance given here: Handling long running tasks in pika / RabbitMQ and here: https://github.com/pika/pika/issues/753#issuecomment-318124510 on how to run long tasks in a separate thread to avoid interrupting the connection heartbeat. I'm a beginner to threading and still struggling to understand this solution.
For my final use case, I need to make function calls that are several minutes long, represented in the example code below by long_function(). I've found that if the sleep call in long_function() exceeds the length of the heartbeat timeout, I lose the connection (presumably because this function is blocking thread #2 from receiving/acknowledging the heartbeat messages from thread #1), and I get this message in the logs: ERROR: Unexpected connection close detected: StreamLostError: ("Stream connection lost: RxEndOfFile(-1, 'End of input stream (EOF)')",). A sleep call of the same length in the target function of thread #2 does not lead to a StreamLostError.
What's the proper solution for overcoming the StreamLostError here? Do I launch all subsequent function calls in their own threads to avoid blocking thread #2? Do I increase the heartbeat to be longer than long_function()? If this is the solution, what was the point of running my long task in a separate thread? Why not just make the heartbeat timeout in the main thread long enough to accommodate the whole message being processed? Thanks!
import functools
import logging
import pika
import threading
import time
import os
import ssl
from common_utils.rabbitmq_utils import send_message_to_queue, initialize_rabbitmq_channel
import json
import traceback

logging.basicConfig(format='%(asctime)s %(levelname)s: %(message)s',
                    level=logging.INFO,
                    datefmt='%Y-%m-%d %H:%M:%S')

def send_message_to_queue(channel, queue_name, body):
    channel.basic_publish(exchange='',
                          routing_key=queue_name,
                          body=json.dumps(body),
                          properties=pika.BasicProperties(delivery_mode=2)
                          )
    logging.info("RabbitMQ publish to queue {} confirmed".format(queue_name))

def initialize_rabbitmq_channel(timeout=5*60):
    credentials = pika.PlainCredentials(os.environ.get("RABBITMQ_USER"), os.environ.get("RABBITMQ_PASSWORD"))
    context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
    params = pika.ConnectionParameters(port=5671, host=os.environ.get("RABBITMQ_HOST"), credentials=credentials,
                                       ssl_options=pika.SSLOptions(context), virtual_host="/", heartbeat=timeout)
    connection = pika.BlockingConnection(params)
    return connection.channel(), connection

def long_function():
    logging.info("Long function starting...")
    time.sleep(5)
    logging.info("Long function finished.")

def ack_message(channel, delivery_tag):
    """
    Note that `channel` must be the same pika channel instance via which
    the message being ACKed was retrieved (AMQP protocol constraint).
    """
    if channel.is_open:
        channel.basic_ack(delivery_tag)
        logging.info("Message {} acknowledged".format(delivery_tag))
    else:
        logging.error("Channel is closed and message acknowledgement will fail")

def do_work(connection, channel, delivery_tag, body):
    thread_id = threading.get_ident()
    fmt1 = 'Thread id: {} Delivery tag: {} Message body: {}'
    logging.info(fmt1.format(thread_id, delivery_tag, body))
    # Simulating work, including a call to another function that exceeds the heartbeat timeout
    time.sleep(5)
    long_function()
    send_message_to_queue(channel, "test_inactive", json.loads(body))
    cb = functools.partial(ack_message, channel, delivery_tag)
    connection.add_callback_threadsafe(cb)

def on_message(connection, channel, method, property, body):
    t = threading.Thread(target=do_work, args=(connection, channel, method.delivery_tag, body))
    t.start()
    t.join()

if __name__ == "__main__":
    channel, connection = initialize_rabbitmq_channel(timeout=3)
    channel.basic_qos(prefetch_count=1)
    channel.basic_consume(queue="test_queue",
                          auto_ack=False,
                          on_message_callback=lambda channel, method, property, body: on_message(connection, channel, method, property, body)
                          )
    channel.start_consuming()
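For reference, the pattern from the linked pika issue keeps the main thread inside start_consuming() by not joining the worker thread. A minimal sketch of that variant of on_message (everything else as above):
def on_message(connection, channel, method, property, body):
    # Hand the message off and return immediately: the consuming (main) thread
    # goes back to the ioloop and keeps answering heartbeats while do_work runs.
    t = threading.Thread(target=do_work, args=(connection, channel, method.delivery_tag, body))
    t.daemon = True  # don't let a hung worker block interpreter exit
    t.start()        # note: no t.join() here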

Pika: Consume the next message even if the last message was not acknowledged

For server automation, we're trying to develop a tool that can handle and execute a lot of tasks on different servers. We send the task and the server hostname into a queue. The queue is then consumed by a requester, which hands the information to the Ansible API. So that we can execute more than one task at once, we're using threading.
Now we're stuck with acknowledging the messages...
What we have done so far:
The requester.py consumes the queue and then starts a thread in which the Ansible task runs. The result is then sent into another queue. So each new message creates a new thread, and when the task is finished, the thread dies.
But now comes the difficult part. We have to make the messages persistent in case our server dies, so each message should be acknowledged after the result from Ansible has been sent back.
Our problem is now: when we try to acknowledge the message in the thread itself, work is no longer done "simultaneously", because pika's consumer waits for the acknowledgement. So how can we make the consumer keep consuming messages without waiting for the acknowledgement? Or how can we work around or improve our little program?
requester.py
#!/bin/python
from worker import *
import ansible.inventory
import ansible.runner
import threading

class Requester(Worker):
    def __init__(self):
        Worker.__init__(self)
        self.connection(self.selfhost, self.from_db)
        self.receive(self.from_db)

    def send(self, result, ch, method):
        self.channel.basic_publish(exchange='',
                                   routing_key=self.to_db,
                                   body=result,
                                   properties=pika.BasicProperties(
                                       delivery_mode=2,
                                   ))
        print "[x] Sent \n" + result
        ch.basic_ack(delivery_tag=method.delivery_tag)

    def callAnsible(self, cmd, ch, method):
        # call ansible api pre 2.0
        result = json.dumps(result, sort_keys=True, indent=4, separators=(',', ': '))
        self.send(result, ch, method)

    def callback(self, ch, method, properties, body):
        print(" [x] Received by requester %r" % body)
        t = threading.Thread(target=self.callAnsible, args=(body, ch, method,))
        t.start()
worker.py
import pika
import ConfigParser
import json
import os

class Worker(object):
    def __init__(self):
        # read some config files
        pass

    def callback(self, ch, method, properties, body):
        raise Exception("Call method in subclass")

    def receive(self, queue):
        self.channel.basic_qos(prefetch_count=1)
        self.channel.basic_consume(self.callback, queue=queue)
        self.channel.start_consuming()

    def connection(self, server, queue):
        self.connection = pika.BlockingConnection(pika.ConnectionParameters(
            host=server,
            credentials=self.credentials))
        self.channel = self.connection.channel()
        self.channel.queue_declare(queue=queue, durable=True)
We're working with Python 2.7 and pika 0.10.0.
And yes, we noticed in the pika FAQ: http://pika.readthedocs.io/en/0.10.0/faq.html
that pika is not thread safe.
Disable auto-acknowledge and set the prefetch count to something bigger than 1, depending on how many messages you would like your consumer to take at once.
Here is how to set the prefetch count: channel.basic_qos(prefetch_count=1).
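Applied to the Worker.receive method above, that could look like the following sketch (pika 0.10-era API as used in the question; the prefetch value of 10 is an arbitrary example):
def receive(self, queue):
    # Allow up to 10 unacknowledged messages in flight, so one slow task
    # does not stall delivery of the others to this consumer.
    self.channel.basic_qos(prefetch_count=10)
    self.channel.basic_consume(self.callback, queue=queue, no_ack=False)
    self.channel.start_consuming()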

Pika channel.stop_consuming doesn't stop start_consuming loop

I have this piece of code; basically it runs channel.start_consuming().
I want it to stop after a while.
I think that channel.stop_consuming() is the right method:
def stop_consuming(self, consumer_tag=None):
    """ Cancels all consumers, signalling the `start_consuming` loop to
    exit.
But it doesn't work: start_consuming() never ends (execution doesn't exit from this call, "end" is never printed).
import unittest
import pika
import threading
import time

_url = "amqp://user:password@xxx.rabbitserver.com/aaa"

class Consumer_test(unittest.TestCase):

    def test_startConsuming(self):
        def callback(channel, method, properties, body):
            print("callback")
            print(body)

        def connectionTimeoutCallback():
            print("connecionClosedCallback")

        def _closeChannel(channel_):
            print("_closeChannel")
            time.sleep(1)
            print("close")
            if channel_.is_open:
                channel_.stop_consuming()
                print("stop_cosuming")
            else:
                print("channel is closed")
                # channel_.close()

        params = pika.URLParameters(_url)
        params.socket_timeout = 5
        connection = pika.BlockingConnection(params)
        # connection.add_timeout(2, connectionTimeoutCallback)
        channel = connection.channel()
        channel.basic_consume(callback,
                              queue='test',
                              no_ack=True)
        t = threading.Thread(target=_closeChannel, args=[channel])
        t.start()
        print("start_consuming")
        channel.start_consuming()  # start consuming (loop never ends)
        connection.close()
        print("end")
connection.add_timeout solves my problem, and maybe calling basic_cancel would too, but I want to use the right method.
Thanks
Note:
I can't respond or add a comment to this (pika, stop_consuming does not work) due to my low reputation points.
Note 2:
I think that I'm not sharing a channel or connection across threads (pika doesn't support this), because I use "channel_" passed as a parameter and not the "channel" instance of the class (am I wrong?).
I was having the same problem: pika is not thread safe, i.e. connections and channels can't be safely shared across threads.
So I used a separate connection to send a shutdown message, then stopped consuming on the original channel from the callback function.
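A rough sketch of that approach (the sentinel body 'shutdown' is a hypothetical convention; the publisher uses its own connection, so nothing is shared across threads):
def callback(channel, method, properties, body):
    if body == b'shutdown':
        channel.stop_consuming()  # called from the consuming thread itself, so this is safe
        return
    print(body)

# From any other thread or process, over a SEPARATE connection:
conn = pika.BlockingConnection(pika.URLParameters(_url))
conn.channel().basic_publish(exchange='', routing_key='test', body='shutdown')
conn.close()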

Synchronous and blocking consumption in RabbitMQ using pika

I want to consume a queue (RabbitMQ) synchronously with blocking.
Note: below is full code ready to be run.
The system is set up using RabbitMQ as its queuing system, but asynchronous consumption is not needed in one of our modules.
I've tried using basic_get on top of a BlockingConnection, which doesn't block (it returns (None, None, None) immediately):
# declare queue
get_connection().channel().queue_declare(TEST_QUEUE)

def blocking_get_1():
    channel = get_connection().channel()
    # get from an empty queue (prints immediately)
    print channel.basic_get(TEST_QUEUE)
I've also tried to use the consume generator, which fails with "Connection Closed" after a long period of not consuming:
def blocking_get_2():
    channel = get_connection().channel()
    # put messages in TEST_QUEUE
    for i in range(4):
        channel.basic_publish(
            '',
            TEST_QUEUE,
            'body %d' % i
        )
    consume_generator = channel.consume(TEST_QUEUE)
    print next(consume_generator)
    time.sleep(14400)
    print next(consume_generator)
Is there a way to use RabbitMQ via the pika client the way I would use a Queue.Queue in Python, or anything similar?
My option at the moment is to busy-wait (using basic_get), but I'd rather use the existing system and avoid busy-waiting, if possible.
Full code:
#!/usr/bin/env python
import pika
import time

TEST_QUEUE = 'test'

def get_connection():
    # define connection
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(
            host=YOUR_IP,
            port=YOUR_PORT,
            credentials=pika.PlainCredentials(
                username=YOUR_USER,
                password=YOUR_PASSWORD,
            )
        )
    )
    return connection

# declare queue
get_connection().channel().queue_declare(TEST_QUEUE)

def blocking_get_1():
    channel = get_connection().channel()
    # get from an empty queue (prints immediately)
    print channel.basic_get(TEST_QUEUE)

def blocking_get_2():
    channel = get_connection().channel()
    # put messages in TEST_QUEUE
    for i in range(4):
        channel.basic_publish(
            '',
            TEST_QUEUE,
            'body %d' % i
        )
    consume_generator = channel.consume(TEST_QUEUE)
    print next(consume_generator)
    time.sleep(14400)
    print next(consume_generator)

print "blocking_get_1"
blocking_get_1()
print "blocking_get_2"
blocking_get_2()

get_connection().channel().queue_delete(TEST_QUEUE)
A common problem with pika is that it currently does not handle incoming events in the background. This basically means that in many scenarios you will need to call connection.process_data_events() periodically to ensure that it does not miss heartbeats.
This also means that if you sleep for an extended period of time, pika will not handle incoming data and will eventually die, as it is not responding to heartbeats. One option here is to disable heartbeats.
I usually solve this by having a thread in the background check for new events, as seen in this example.
If you want to block completely I would do something like this (based on my own library AMQPStorm).
while True:
    result = channel.basic.get(queue='simple_queue', no_ack=False)
    if result:
        print("Message:", result.body)
        result.ack()
    else:
        print("Channel Empty.")
        sleep(1)
This is based on the example found here.
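A rough pika translation of the same loop (assuming the pika 1.x BlockingConnection API), where process_data_events() keeps heartbeats serviced while the queue is empty:
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='simple_queue')
while True:
    method, properties, body = channel.basic_get(queue='simple_queue', auto_ack=False)
    if method:
        print("Message:", body)
        channel.basic_ack(method.delivery_tag)
    else:
        print("Channel Empty.")
        # Idle wait that still answers heartbeats on the connection.
        connection.process_data_events(time_limit=1)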

Pika: how to consume messages synchronously

I would like to run a process periodically (say, once per 10 minutes, or once per hour) that gets all the messages from a queue, processes them and then exits. Is there any way to do this with pika, or should I use a different Python lib?
I think an ideal solution here would be to use the basic_get method. It fetches a single message, but if the queue is already empty it returns None. The advantage of this is that you can clear the queue with a simple loop and then simply break the loop once None is returned; plus it is safe to run basic_get with multiple consumers.
This example is based on my own library; amqpstorm, but you could easily implement the same with pika as well.
from amqpstorm import Connection

connection = Connection('127.0.0.1', 'guest', 'guest')
channel = connection.channel()
channel.queue.declare('simple_queue')
while True:
    result = channel.basic.get(queue='simple_queue', no_ack=False)
    if not result:
        print("Channel Empty.")
        # We are done, let's break the loop and stop the application.
        break
    print("Message:", result['body'])
    channel.basic.ack(result['method']['delivery_tag'])
channel.close()
connection.close()
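The same drain-and-exit loop written against pika's BlockingConnection would look roughly like this (pika 1.x API assumed):
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('127.0.0.1'))
channel = connection.channel()
channel.queue_declare(queue='simple_queue')
while True:
    method, properties, body = channel.basic_get(queue='simple_queue', auto_ack=False)
    if method is None:
        print("Channel Empty.")
        break  # queue drained, we are done
    print("Message:", body)
    channel.basic_ack(method.delivery_tag)
channel.close()
connection.close()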
Would this work for you:
Measure the current queue length as N = queue.method.message_count.
Make the callback count the processed messages and, as soon as N have been processed, call channel.stop_consuming().
So, client code would be something like this:
class CountCallback(object):
    def __init__(self, count):
        self.count = count

    def __call__(self, ch, method, properties, body):
        # process the message here
        self.count -= 1
        if not self.count:
            ch.stop_consuming()

channel = conn.channel()
queue = channel.queue_declare('tasks')
callback = CountCallback(queue.method.message_count)
channel.basic_consume(callback, queue='tasks')
channel.start_consuming()
@eandersson wrote: "This example is based on my own library; amqpstorm, but you could easily implement the same with pika as well."
Updated for amqpstorm 2.6.1:
from amqpstorm import Connection

connection = Connection('127.0.0.1', 'guest', 'guest')
channel = connection.channel()
channel.queue.declare('simple_queue')
while True:
    result = channel.basic.get(queue='simple_queue', no_ack=False)
    if not result:
        print("Channel Empty.")
        # We are done, let's break the loop and stop the application.
        break
    print("Message:", result.body)
    channel.basic.ack(result.method['delivery_tag'])
channel.close()
connection.close()
