With the RabbitMQ Python client running subscriber.py:
import pika, time

credentials = pika.PlainCredentials('user', 'pass')
parameters = pika.ConnectionParameters(host='localhost', port=6672, credentials=credentials)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
channel.basic_qos(prefetch_count=1)
channel.queue_declare(queue='my_queue')

def callback(ch, method, properties, body):
    ch.basic_ack(delivery_tag=method.delivery_tag)
    time.sleep(600)
    print('process completed')

channel.basic_consume(queue='my_queue', on_message_callback=callback)
channel.start_consuming()
the connection breaks after the callback function completes.
It appears this always happens at the 60th second. It seems the channel.basic_consume() method doesn't want to wait for the main thread to complete the callback function. Is there a way to make sure the connection doesn't drop after the 60th second?
Your time.sleep call is blocking Pika's I/O loop, which prevents heartbeats from being processed. Don't block the I/O loop!
Instead, you should do your long-running work in a separate thread and acknowledge the message correctly from that thread. Fortunately, I have an example right here: link
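In case that link goes stale, here is a minimal sketch of the pattern (assuming pika 1.x; the queue name my_queue and the 600-second sleep are placeholders taken from the question): the callback hands the work to a thread, and the acknowledgement is marshalled back onto the connection's I/O loop with add_callback_threadsafe.
import functools
import threading
import time

import pika

def do_work(connection, channel, delivery_tag, body):
    time.sleep(600)  # stand-in for the long-running job
    # basic_ack must run on the connection's own thread, so schedule it there
    connection.add_callback_threadsafe(functools.partial(channel.basic_ack, delivery_tag))

def on_message(channel, method, properties, body, connection):
    # Start the work in a separate thread so the I/O loop keeps servicing heartbeats
    t = threading.Thread(target=do_work, args=(connection, channel, method.delivery_tag, body))
    t.daemon = True
    t.start()

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.basic_qos(prefetch_count=1)
channel.queue_declare(queue='my_queue')
channel.basic_consume(queue='my_queue',
                      on_message_callback=functools.partial(on_message, connection=connection))
channel.start_consuming()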
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
I think the "heartbeat" parameter solves this problem. Just set the time in seconds:
import pika, time

credentials = pika.PlainCredentials('user', 'pass')
parameters = pika.ConnectionParameters(host='localhost', port=6672, credentials=credentials, heartbeat=36000)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
channel.basic_qos(prefetch_count=1)
channel.queue_declare(queue='my_queue')

def callback(ch, method, properties, body):
    ch.basic_ack(delivery_tag=method.delivery_tag)
    time.sleep(600)
    print('process completed')

channel.basic_consume(queue='my_queue', on_message_callback=callback)
channel.start_consuming()
I have attempted to follow the guidance given here: Handling long running tasks in pika / RabbitMQ and here: https://github.com/pika/pika/issues/753#issuecomment-318124510 on how to run long tasks in a separate thread so as not to interrupt the connection heartbeat. I'm a beginner at threading and am still struggling to understand this solution.
For my final use case, I need to make function calls that are several minutes long, represented in the example code below by long_function(). I've found that if the sleep call in long_function() exceeds the length of the heartbeat timeout, I lose the connection (presumably because this function is blocking thread #2 from receiving/acknowledging the heartbeat messages from thread #1) and I get this message in the logs: ERROR: Unexpected connection close detected: StreamLostError: ("Stream connection lost: RxEndOfFile(-1, 'End of input stream (EOF)')",). A sleep call of the same length in the target function of thread #2 does not lead to a StreamLostError.
What's the proper solution for overcoming the StreamLostError here? Do I launch all subsequent function calls in their own threads to avoid blocking thread #2? Do I increase the heartbeat to be longer than long_function()? If that is the solution, what was the point of running my long task in a separate thread? Why not just make the heartbeat timeout in the main thread long enough to accommodate the whole message being processed? Thanks!
import functools
import logging
import pika
import threading
import time
import os
import ssl
from common_utils.rabbitmq_utils import send_message_to_queue, initialize_rabbitmq_channel
import json
import traceback

logging.basicConfig(format='%(asctime)s %(levelname)s: %(message)s',
                    level=logging.INFO,
                    datefmt='%Y-%m-%d %H:%M:%S')

def send_message_to_queue(channel, queue_name, body):
    channel.basic_publish(exchange='',
                          routing_key=queue_name,
                          body=json.dumps(body),
                          properties=pika.BasicProperties(delivery_mode=2)
                          )
    logging.info("RabbitMQ publish to queue {} confirmed".format(queue_name))

def initialize_rabbitmq_channel(timeout=5*60):
    credentials = pika.PlainCredentials(os.environ.get("RABBITMQ_USER"), os.environ.get("RABBITMQ_PASSWORD"))
    context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
    params = pika.ConnectionParameters(port=5671, host=os.environ.get("RABBITMQ_HOST"), credentials=credentials,
                                       ssl_options=pika.SSLOptions(context), virtual_host="/", heartbeat=timeout)
    connection = pika.BlockingConnection(params)
    return connection.channel(), connection

def long_function():
    logging.info("Long function starting...")
    time.sleep(5)
    logging.info("Long function finished.")

def ack_message(channel, delivery_tag):
    """
    Note that `channel` must be the same pika channel instance via which
    the message being ACKed was retrieved (AMQP protocol constraint).
    """
    if channel.is_open:
        channel.basic_ack(delivery_tag)
        logging.info("Message {} acknowledged".format(delivery_tag))
    else:
        logging.error("Channel is closed and message acknowledgement will fail")
        pass

def do_work(connection, channel, delivery_tag, body):
    thread_id = threading.get_ident()
    fmt1 = 'Thread id: {} Delivery tag: {} Message body: {}'
    logging.info(fmt1.format(thread_id, delivery_tag, body))
    # Simulating work including a call to another function that exceeds heartbeat timeout
    time.sleep(5)
    long_function()
    send_message_to_queue(channel, "test_inactive", json.loads(body))

    cb = functools.partial(ack_message, channel, delivery_tag)
    connection.add_callback_threadsafe(cb)

def on_message(connection, channel, method, property, body):
    t = threading.Thread(target=do_work, args=(connection, channel, method.delivery_tag, body))
    t.start()
    t.join()

if __name__ == "__main__":
    channel, connection = initialize_rabbitmq_channel(timeout=3)
    channel.basic_qos(prefetch_count=1)
    channel.basic_consume(queue="test_queue",
                          auto_ack=False,
                          on_message_callback=lambda channel, method, property, body: on_message(connection, channel, method, property, body)
                          )
    channel.start_consuming()
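One detail worth noting in the snippet above: on_message calls t.join(), which makes the callback wait for the worker thread to finish and therefore blocks the I/O loop just as badly as doing the work inline. A minimal variation (a sketch only, reusing the do_work and initialize_rabbitmq_channel defined above) keeps references to the threads and joins them only after start_consuming() returns:
threads = []

def on_message(connection, channel, method, property, body):
    t = threading.Thread(target=do_work, args=(connection, channel, method.delivery_tag, body))
    t.start()
    threads.append(t)  # do not join here; join after start_consuming() returns

if __name__ == "__main__":
    channel, connection = initialize_rabbitmq_channel(timeout=3)
    channel.basic_qos(prefetch_count=1)
    channel.basic_consume(queue="test_queue",
                          auto_ack=False,
                          on_message_callback=lambda ch, m, p, b: on_message(connection, ch, m, p, b))
    try:
        channel.start_consuming()
    finally:
        for t in threads:
            t.join()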
I have a RabbitMQ consumer. I would like to have that consumer do some message processing, simulated by time.sleep(10), then publish a message to a different queue. I know the consumer callback has a channel that in theory could be used to do the publish, but this seems like a bad implementation because if the basic_publish() somehow manages to force-close the channel, then the consumer dies. What is the best way to handle this?
import time
import pika

connection = pika.BlockingConnection(
    pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='logs', exchange_type='fanout')
result = channel.queue_declare(queue='original_queue', exclusive=True)
channel.queue_bind(exchange='logs', queue='original_queue')

print(' [*] Waiting for logs. To exit press CTRL+C')

def callback(ch, method, properties, body):
    time.sleep(10)
    ch.basic_publish(exchange='logs', routing_key='different_queue', body='hello_world')

channel.basic_consume(
    queue='original_queue', on_message_callback=callback, auto_ack=True)

channel.start_consuming()
You can implement your consumer in a way that it automatically reconnects to the RabbitMQ server if the connection gets closed. Hope this helps (I didn't put much thought into the design part, feel free to suggest improvements!)
import time
import pika

reconnect_on_failure = True

def consumer(connection, channel):
    channel.exchange_declare(exchange='logs', exchange_type='fanout')
    result = channel.queue_declare(queue='original_queue', exclusive=True)
    channel.queue_bind(exchange='logs', queue='original_queue')

    print(' [*] Waiting for logs. To exit press CTRL+C')

    def callback(ch, method, properties, body):
        time.sleep(10)
        ch.basic_publish(exchange='logs', routing_key='different_queue', body='hello_world')

    channel.basic_consume(
        queue='original_queue', on_message_callback=callback, auto_ack=True)

    channel.start_consuming()

def get_connection_and_channel():
    connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
    channel = connection.channel()
    return connection, channel

def start(reconnect_on_failure):
    connection, channel = get_connection_and_channel()
    consumer(connection, channel)
    # this code runs when the consumer's start_consuming loop exits
    if reconnect_on_failure:
        # cleanly close the channel and connection
        if not channel.is_closed:
            channel.close()
        if not connection.is_closed:
            connection.close()
        start(reconnect_on_failure)

start(reconnect_on_failure)
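As a variation on the design (a sketch only): a while loop with explicit exception handling avoids unbounded recursion and only reconnects on connection-level errors instead of after any exit:
import pika.exceptions

def start(reconnect_on_failure):
    while True:
        connection, channel = get_connection_and_channel()
        try:
            consumer(connection, channel)
            break  # start_consuming() returned normally, so stop
        except pika.exceptions.AMQPConnectionError:
            if not reconnect_on_failure:
                raise  # surface the error instead of reconnecting
        finally:
            if not connection.is_closed:
                connection.close()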
I am unable to return the data from Pika since start_consuming is not stopping. It prints the results but it does not return the output.
def on_request(ch, method, props, body):
    directory = body
    print(directory.decode('utf-8'))
    response = parse(directory.decode('utf-8'))

    ch.basic_publish(exchange='',
                     routing_key=props.reply_to,
                     properties=pika.BasicProperties(correlation_id=props.correlation_id),
                     body=str(response))
    ch.basic_ack(delivery_tag=method.delivery_tag)

def start():
    print("hi")
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host='localhost'))
    channel = connection.channel()
    channel.queue_declare(queue='rpc_queue')
    channel.basic_qos(prefetch_count=2)
    channel.basic_consume(queue='rpc_queue', on_message_callback=on_request)
    print(" [x] Awaiting RPC requests")
    channel.start_consuming()
By design start_consuming blocks forever. You will have to cancel the consumer in your on_request method.
You can also use this method to consume messages, which allows an inactivity_timeout to be set, after which you could cancel your consumer.
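Assuming the method referred to is BlockingChannel.consume (the generator-based API), a minimal sketch of stopping after a period of inactivity, written as a replacement for the basic_consume/start_consuming lines inside the question's start() function (the 30-second timeout is arbitrary):
for method, properties, body in channel.consume(queue='rpc_queue', inactivity_timeout=30):
    if method is None:
        break  # inactivity_timeout elapsed with no message: (None, None, None) is yielded
    on_request(channel, method, properties, body)

requeued = channel.cancel()  # cancel the consumer and requeue any unprocessed messages
connection.close()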
Finally, SelectConnection allows much more flexibility in interacting with Pika's I/O loop and is recommended when your requirements are more complex than what BlockingConnection supports.
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
Just use channel.stop_consuming()
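For example, a sketch based on the question's on_request: call stop_consuming() once the reply has been published and acknowledged, so that start_consuming() returns and start() can hand back the result.
def on_request(ch, method, props, body):
    response = parse(body.decode('utf-8'))
    ch.basic_publish(exchange='',
                     routing_key=props.reply_to,
                     properties=pika.BasicProperties(correlation_id=props.correlation_id),
                     body=str(response))
    ch.basic_ack(delivery_tag=method.delivery_tag)
    ch.stop_consuming()  # start_consuming() in start() will now return
After start_consuming() returns, start() can return the response (for example via a module-level variable or a closure).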
For server automation, we're trying to develop a tool which can handle and execute a lot of tasks on different servers. We send the task and the server hostname into a queue. The queue is then consumed by a requester, which passes the information to the Ansible API. So that we can execute more than one task at once, we're using threading.
Now we're stuck on acknowledging the message...
What we have done so far:
requester.py consumes the queue and then starts a thread in which the Ansible task runs. The result is then sent into another queue. So each new message creates a new thread. Once the task is finished, the thread dies.
But now comes the difficult part. We have to make the messages persistent in case our server dies, so each message should be acknowledged only after the result from Ansible has been sent back.
Our problem now is that when we try to acknowledge the message in the thread itself, no more work is done simultaneously, because pika's consumer waits for the acknowledgement. So how can we make the consumer keep consuming messages and not wait for the acknowledgement? Or how can we work around or improve our little program?
requester.py
#!/bin/python
from worker import *
import ansible.inventory
import ansible.runner
import threading

class Requester(Worker):
    def __init__(self):
        Worker.__init__(self)
        self.connection(self.selfhost, self.from_db)
        self.receive(self.from_db)

    def send(self, result, ch, method):
        self.channel.basic_publish(exchange='',
                                   routing_key=self.to_db,
                                   body=result,
                                   properties=pika.BasicProperties(
                                       delivery_mode=2,
                                   ))
        print "[x] Sent \n" + result
        ch.basic_ack(delivery_tag=method.delivery_tag)

    def callAnsible(self, cmd, ch, method):
        # call ansible api pre 2.0
        result = json.dumps(result, sort_keys=True, indent=4, separators=(',', ': '))
        self.send(result, ch, method)

    def callback(self, ch, method, properties, body):
        print(" [x] Received by requester %r" % body)
        t = threading.Thread(target=self.callAnsible, args=(body, ch, method,))
        t.start()
worker.py
import pika
import ConfigParser
import json
import os

class Worker(object):
    def __init__(self):
        # read some config files
        pass

    def callback(self, ch, method, properties, body):
        raise Exception("Call method in subclass")

    def receive(self, queue):
        self.channel.basic_qos(prefetch_count=1)
        self.channel.basic_consume(self.callback, queue=queue)
        self.channel.start_consuming()

    def connection(self, server, queue):
        self.connection = pika.BlockingConnection(pika.ConnectionParameters(
            host=server,
            credentials=self.credentials))
        self.channel = self.connection.channel()
        self.channel.queue_declare(queue=queue, durable=True)
We're working with Python 2.7 and pika 0.10.0.
And yes, we noticed in the pika FAQ (http://pika.readthedocs.io/en/0.10.0/faq.html) that pika is not thread safe.
Disable auto-acknowledge and set the prefetch count to something bigger than 1, depending on how many messages you would like your consumer to take.
Here is how to set the prefetch count:
channel.basic_qos(prefetch_count=1), found here.
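As a sketch of what that looks like in the Worker.receive method above (the count of 5 is arbitrary, and the pika 0.10.x basic_consume signature is assumed):
def receive(self, queue):
    # Allow up to 5 unacknowledged messages to be outstanding at once,
    # so several worker threads can run in parallel before any of them acks.
    self.channel.basic_qos(prefetch_count=5)
    self.channel.basic_consume(self.callback, queue=queue, no_ack=False)
    self.channel.start_consuming()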
I have this piece of code; basically it runs channel.start_consuming().
I want it to stop after a while.
I think that channel.stop_consuming() is the right method:
def stop_consuming(self, consumer_tag=None):
    """ Cancels all consumers, signalling the `start_consuming` loop to
    exit.
But it doesn't work: start_consuming() never ends (execution doesn't exit from this call, "end" is never printed).
import unittest
import pika
import threading
import time

_url = "amqp://user:password@xxx.rabbitserver.com/aaa"

class Consumer_test(unittest.TestCase):

    def test_startConsuming(self):
        def callback(channel, method, properties, body):
            print("callback")
            print(body)

        def connectionTimeoutCallback():
            print("connectionClosedCallback")

        def _closeChannel(channel_):
            print("_closeChannel")
            time.sleep(1)
            print("close")
            if channel_.is_open:
                channel_.stop_consuming()
                print("stop_consuming")
            else:
                print("channel is closed")
            # channel_.close()

        params = pika.URLParameters(_url)
        params.socket_timeout = 5
        connection = pika.BlockingConnection(params)
        # connection.add_timeout(2, connectionTimeoutCallback)
        channel = connection.channel()
        channel.basic_consume(callback,
                              queue='test',
                              no_ack=True)

        t = threading.Thread(target=_closeChannel, args=[channel])
        t.start()

        print("start_consuming")
        channel.start_consuming()  # start consuming (loop never ends)

        connection.close()
        print("end")
connection.add_timeout solves my problem, and maybe calling basic_cancel would too, but I want to use the right method.
Thanks
Note:
I can't respond or add comment to this (pika, stop_consuming does not work) due to my low reputation points.
Note 2:
I think that I'm not sharing the channel or connection across threads (Pika doesn't support this), because I use the channel_ passed as a parameter and not the channel instance of the class (am I wrong?).
I was having the same problem, as pika is not thread safe, i.e. connections and channels can't be safely shared across threads.
So I used a separate connection to send a shutdown message; then stopped consuming the original channel from the callback function.
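A sketch of that approach, assuming pika 1.x (queue and host names are placeholders): a second connection publishes a sentinel "shutdown" message to the same queue, and the consumer callback stops consuming when it sees it.
import pika

def callback(ch, method, properties, body):
    if body == b'shutdown':
        ch.stop_consuming()  # makes start_consuming() return on the consumer's own thread
        return
    print(body)

# Consumer side
consumer_conn = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
consumer_channel = consumer_conn.channel()
consumer_channel.queue_declare(queue='test')
consumer_channel.basic_consume(queue='test', on_message_callback=callback, auto_ack=True)

# Shutdown side: done here sequentially for brevity, but normally from another
# thread or process, always over its *own* connection (never the consumer's).
publisher_conn = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
publisher_conn.channel().basic_publish(exchange='', routing_key='test', body='shutdown')
publisher_conn.close()

consumer_channel.start_consuming()  # returns once the shutdown message is handled
consumer_conn.close()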