How to ensure that messages get delivered? - python

How do you ensure that messages get delivered with Pika? By default, it will not raise an error if a message was not delivered successfully.
In the example below, several messages can be sent before Pika notices that the connection is down.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='hello')

for index in range(10):
    channel.basic_publish(exchange='', routing_key='hello',
                          body='Hello World #%s!' % index)
print('Total Messages Sent: %s' % (index + 1))
connection.close()

When using Pika, delivery confirmations need to be enabled with channel.confirm_delivery() before you start publishing messages. This is important so that Pika confirms that each message has been sent successfully before the next one is published. It will, however, increase the time it takes to send messages to RabbitMQ, as each delivery needs to be confirmed before the program can proceed with the next message.
channel.confirm_delivery()

try:
    for index in range(10):
        channel.basic_publish(exchange='', routing_key='hello',
                              body='Hello World #%s!' % index)
    print('Total Messages Sent: %s' % (index + 1))
except pika.exceptions.ConnectionClosed:
    print('Error. Connection closed, and the message was never delivered.')
basic_publish returns a Boolean indicating whether the message was sent or not. It is still important to catch potential exceptions in case the connection is closed during transfer and to handle them appropriately, as the exception would otherwise interrupt the flow of the program.
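Note that the Boolean return value applies to older Pika releases (0.x). In Pika 1.x, with confirm_delivery() enabled, a failed publish is reported by raising an exception instead; a minimal sketch, assuming pika >= 1.0:
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='hello')
channel.confirm_delivery()

try:
    # mandatory=True asks the broker to return the message if it cannot be routed
    channel.basic_publish(exchange='', routing_key='hello',
                          body='Hello World!', mandatory=True)
    print('Message confirmed by the broker')
except pika.exceptions.UnroutableError:
    print('Message was returned as unroutable')
except pika.exceptions.NackError:
    print('Message was rejected (nacked) by the broker')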

After trying this myself and failing to receive anything other than an ack, I decided to implement a direct reply to the sender. I followed the example given here.
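For reference, a minimal sketch of what a direct reply-to client can look like with Pika (the rpc_queue request queue name is hypothetical, and it assumes a responder service that publishes its answer to the reply_to address; pika >= 1.0):
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

def on_reply(ch, method, properties, body):
    print('Got reply: %r' % body)
    ch.stop_consuming()

# The pseudo-queue must be consumed with auto_ack=True *before* publishing.
channel.basic_consume(queue='amq.rabbitmq.reply-to',
                      on_message_callback=on_reply,
                      auto_ack=True)

channel.basic_publish(exchange='',
                      routing_key='rpc_queue',  # hypothetical request queue
                      properties=pika.BasicProperties(reply_to='amq.rabbitmq.reply-to'),
                      body='ping')
channel.start_consuming()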

Related

ActiveMQ Stomp python client NACK consumes message

I'm using ActiveMQ classic v5.16.3 and experimenting with NACK. My expectation is that if the client sends a NACK then the message will remain on the queue and be available for another client. My code is below. I set a prefetch of 1, and ack mode of client-individual.
If I omit the conn.nack() call then I see my print statements, and the message remains on the queue - hence I believe that ActiveMQ is looking for an ACK or NACK.
When I include the conn.nack() call then I again see my print statements, and the message is removed from the queue.
Is this expected behaviour? I think a client should be able to reject malformed messages by NACK-ing them, and that eventually ActiveMQ should put them on a dead letter queue.
import time
import sys

import stomp


class MyListener(stomp.ConnectionListener):
    def on_error(self, frame):
        print('received an error "%s"' % frame.body)

    def on_message(self, frame):
        # experiment with and without the following line
        conn.nack(id=frame.headers['message-id'], subscription=frame.headers['subscription'])
        print('received a message "%s"' % frame.body)
        print('headers "%s"' % frame.headers)


print('Connecting ...')
conn = stomp.Connection()
conn.set_listener('', MyListener())
conn.connect('admin', 'admin', wait=True)
print('Connected')
conn.subscribe(destination='/queue/audit', id=1, ack='client-individual',
               headers={'activemq.prefetchSize': 1})
As suggested by Tim Bish, I needed to configure ActiveMQ to retry. I made the following changes to activemq.xml
Added scheduler support to the broker:
<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="localhost" dataDirectory="${activemq.data}"
        schedulerSupport="true" >
Specified the redelivery plugin:
<plugins>
  <redeliveryPlugin fallbackToDeadLetter="true"
                    sendToDlqIfMaxRetriesExceeded="true">
    <redeliveryPolicyMap>
      <redeliveryPolicyMap>
        <defaultEntry>
          <redeliveryPolicy maximumRedeliveries="4"
                            initialRedeliveryDelay="5000"
                            redeliveryDelay="10000"/>
        </defaultEntry>
      </redeliveryPolicyMap>
    </redeliveryPolicyMap>
  </redeliveryPlugin>
</plugins>
And then, for my chosen destination, I specified that poison messages should be sent to a specific queue (the default is to publish them to a Topic).
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry queue="audit" prioritizedMessages="true" >
        <deadLetterStrategy>
          <individualDeadLetterStrategy queuePrefix="DLQ."
                                        useQueueForQueueMessages="true"/>
        </deadLetterStrategy>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>

Why does the pika connection get dropped when publishing at irregular intervals

The scenario is the following: I receive messages on one queue, do a bit of processing and then send a message on another queue.
import os
import logging

# DeliveryMode is available in recent pika releases (>= 1.3)
from pika import (BasicProperties, BlockingConnection, ConnectionParameters,
                  DeliveryMode, PlainCredentials)

logger = logging.getLogger(__name__)

credentials = PlainCredentials("test", "test")
publisher_credentials = PlainCredentials("test", "test")
connection = BlockingConnection(ConnectionParameters("host1", 1234, "/", credentials))
publisher_connection = BlockingConnection(ConnectionParameters("host2", 1234, "/", publisher_credentials))
channel, publisher_channel = connection.channel(), publisher_connection.channel()
publisher_channel.queue_declare(queue="testqueue", passive=True)
publisher_channel.confirm_delivery()
callback_fct = generate_callback(publisher_channel)
channel.basic_consume(queue=os.getenv("RABBIT_MQ_QNAME"), on_message_callback=callback_fct, auto_ack=True)
try:
    channel.start_consuming()
except KeyboardInterrupt:
    channel.stop_consuming()
    connection.close()
except Exception as e:
    logger.exception("An unexpected error has occurred!")
And the generate_callback function would do something like this:
def generate_callback(publisher):
    def on_message(channel, method_frame, header_frame, body):
        logger.debug(f"Received {body}")
        # assume some processing is done here, it should be really fast (under one second)
        publisher.basic_publish(exchange='', routing_key="test", body="random_string",
                                properties=BasicProperties(content_type='text/plain',
                                                           delivery_mode=DeliveryMode.Persistent))
    return on_message
Publishing works, but if I do not receive a message in my consumer queue for a couple of minutes, it seems that the publisher connection is lost:
ERROR - Stream connection lost: ConnectionResetError(104, 'Connection reset by peer')
I do not understand what I have to do to prevent the connection from being lost. In my current implementation I automatically recreate the connection, but I would like to avoid this, at least in the cases where nothing is received for a couple of minutes. What am I missing?
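One likely explanation is that the idle publisher connection never gets a chance to send heartbeats while channel.start_consuming() blocks on the consumer connection, so the broker eventually resets it. A possible workaround, sketched under that assumption and reusing the objects from the snippet above, is to drive both connections' I/O loops yourself instead of calling start_consuming():
# Sketch: replace channel.start_consuming() with a loop that services both
# connections, so heartbeats keep flowing on the otherwise idle publisher side.
try:
    while True:
        # Dispatch any pending consumer callbacks for up to 1 second.
        connection.process_data_events(time_limit=1)
        # Let the publisher connection send/receive heartbeats.
        publisher_connection.process_data_events(time_limit=0)
except KeyboardInterrupt:
    connection.close()
    publisher_connection.close()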

pika, rabbitmq - get all messages from the queue without consuming them

Using the pika client, I want to display all the messages currently in the queue, without consuming them. Just to know how busy the queue is and to display the jobs.
So far, I can only read one message as it arrives:
channel.queue_declare(queue='queue1', durable=True)

def on_message(channel, method, properties, message):
    channel.basic_ack(delivery_tag=method.delivery_tag)
    print("Message: %s" % message)

channel.basic_consume(on_message, queue='queue1')
channel.start_consuming()
How can I read the whole queue?
To read messages "without consuming them", don't acknowledge delivery of the message. In your case above, get rid of
channel.basic_ack(delivery_tag=method.delivery_tag)
or set auto_ack to False:
def callback(ch, method, properties, body):
    print(body)

channel.basic_consume(queue='your_queue', on_message_callback=callback, auto_ack=False)
The messages will be read and marked as unacked in RabbitMQ, but they will still be available in the queue.
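If the goal is mainly to know how busy the queue is, a passive queue_declare also returns the current message count without touching the messages; a small sketch, assuming the queue1 queue from the question already exists:
# Sketch: ask the broker how many messages are ready in the queue,
# without consuming or acknowledging anything.
result = channel.queue_declare(queue='queue1', passive=True)
print("Messages currently in queue: %d" % result.method.message_count)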

Pub/sub based on events - Python

I am trying to build a system where I can send messages to different users based on their subscription to certain events. Basically, I have an API which gives me live stream events. Some of the users will be subscribed to those events. My task is to send a message to those users whenever such an event occurs. I am trying to design the system in Python.
Currently I have the following questions.
How to continuously poll for events from a live stream API in Python.
How to find out which users are subscribed to that particular event. (Redis or Mysql)
How to send notification to all the users of a particular event. (Pub/sub)
I am thinking of using Amazon SNS. But not quite sure about the overall architecture.
RabbitMQ is lightweight and easy to deploy on premise and in the
cloud. It supports multiple messaging protocols. RabbitMQ can be
deployed in distributed and federated configurations to meet
high-scale, high-availability requirements.
Just a small example:
The producer sends messages to the "hello" queue, and the consumer receives messages from that queue. This will create a queue (hello) with a message on the RabbitMQ cluster.
#!/usr/bin/env python
import pika
RABBITMQ_USERNAME = 'ansible'
RABBITMQ_PASSWORD = 'ansible'
connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='example.eu-central-1.elb.amazonaws.com',
    heartbeat_interval=25,
    credentials=pika.PlainCredentials(RABBITMQ_USERNAME, RABBITMQ_PASSWORD)))
channel = connection.channel()
channel.queue_declare(queue='hello')
channel.basic_publish(exchange='',
                      routing_key='hello',
                      body='Hello World!')
print(" [x] Sent 'Hello World!'")
connection.close()
Receive a message from a named queue:
#!/usr/bin/env python
import pika
RABBITMQ_USERNAME = 'ansible'
RABBITMQ_PASSWORD = 'ansible'
connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='example.elb.amazonaws.com',
    heartbeat_interval=25,
    credentials=pika.PlainCredentials(RABBITMQ_USERNAME, RABBITMQ_PASSWORD)))
channel = connection.channel()
channel.queue_declare(queue='hello')

def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)

channel.basic_consume(callback,
                      queue='hello',
                      no_ack=True)
print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()
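Since the question is about notifying every subscribed user when an event occurs, the pub/sub part would typically use a fanout exchange rather than a single named queue. A minimal sketch (the events exchange name is hypothetical):
#!/usr/bin/env python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
# Hypothetical exchange name; a fanout exchange copies each message to every bound queue.
channel.exchange_declare(exchange='events', exchange_type='fanout')

# Subscriber side: each subscriber binds its own exclusive queue to the exchange.
result = channel.queue_declare(queue='', exclusive=True)
queue_name = result.method.queue
channel.queue_bind(exchange='events', queue=queue_name)

# Publisher side: broadcast an event; every subscriber queue gets a copy.
channel.basic_publish(exchange='events', routing_key='', body='user_signed_up')

def callback(ch, method, properties, body):
    print(' [x] Event received: %r' % body)

channel.basic_consume(queue=queue_name, on_message_callback=callback, auto_ack=True)
channel.start_consuming()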

rabbitmq messages missing in receiver part

I have implemented RabbitMQ in my servers. So basically what it does is that the main server passes messages to the worker server.
The problem that I am facing is that not all the messages that I pass are received by the server, i.e. if I send 10 messages only 4 of them are received.
Any idea where I am going wrong?
Receiving code
import pika
connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='hello')
def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)

channel.basic_consume(callback,
                      queue='hello',
                      no_ack=True)
print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()
Publishing code
import pika
import sys
connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='task_queue', durable=True)
message = ' '.join(sys.argv[1:]) or "Hello World!"
channel.basic_publish(exchange='',
                      routing_key='task_queue',
                      body=message,
                      properties=pika.BasicProperties(
                          delivery_mode=2,  # make message persistent
                      ))
print(" [x] Sent %r" % message)
connection.close()
Assuming that you are publishing to the same queue that you are consuming from (the examples you posted show otherwise: the publisher sends to task_queue while the consumer reads from hello), I would recommend that you enable the confirm delivery flag. This will ensure that your message gets delivered, and if not, it will either throw an exception or basic_publish will return False.
channel = connection.channel()
channel.confirm_delivery()
published = channel.basic_publish(...)
if not published:
    raise Exception("Unable to publish message!")
It might also be worth installing the management plugin for RabbitMQ and inspecting the queue before you start consuming messages. This way you can verify that the messages were published, and later consumed.
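For example, with the management plugin enabled, the queue can be inspected over its HTTP API; a sketch where the host, port 15672, guest/guest credentials and the task_queue name are placeholder assumptions:
# Sketch: query the RabbitMQ management HTTP API for queue statistics.
# '%2F' is the URL-encoded default vhost '/'.
import requests

resp = requests.get('http://localhost:15672/api/queues/%2F/task_queue',
                    auth=('guest', 'guest'))
resp.raise_for_status()
queue_info = resp.json()
print('Total messages:', queue_info.get('messages'))
print('Ready to be delivered:', queue_info.get('messages_ready'))
print('Delivered but unacknowledged:', queue_info.get('messages_unacknowledged'))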
