RabbitMQ basic publish - python

I have a RabbitMQ listener written in Python, based on the examples from RabbitMQ's docs:
#!/usr/bin/env python
import time
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='localhost'))
channel = connection.channel()

channel.queue_declare(queue='hound')

def callback(ch, method, properties, body):
    print(" [x] Received %r" % (body,))
    time.sleep(5)
    print(" [x] Done")
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(callback,
                      queue='hound',
                      )

print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()
And a C++ client which tries to send a message:
#include <SimpleAmqpClient/SimpleAmqpClient.h>

using namespace AmqpClient;

int main(int argc, char *argv[])
{
    Channel::ptr_t channel;
    channel = Channel::Create("SERVER_HOST", SERVER_PORT,
                              "LOGIN", "PASS", "/");
    BasicMessage::ptr_t msg = BasicMessage::Create("HELLO!!!");
    channel->DeclareQueue("hound");
    channel->BasicPublish("", "hound", msg, true);
}
But when I send a message, I get this error:
terminate called after throwing an instance of 'AmqpClient::PreconditionFailedException'
what(): channel error: 406: AMQP_QUEUE_DECLARE_METHOD caused: PRECONDITION_FAILED - parameters for queue 'hound' in vhost '/' not equivalent
Aborted
But when I delete the line channel->DeclareQueue("hound");, the message is sent successfully.
A sender written in Python works fine:
#!/usr/bin/env python
import sys
import pika

credentials = pika.PlainCredentials(
    username=username, password=password
)
connection = pika.BlockingConnection(
    pika.ConnectionParameters(
        host=host,
        virtual_host=virtual_host,
        credentials=credentials,
        port=RABBIT_PORT
    )
)
channel = connection.channel()

channel.queue_declare(queue='hound')
message = 'hello!'
channel.basic_publish(exchange='',
                      routing_key='hound',
                      body=message)
print(" [x] Sent %r" % (message,))
What's wrong? Why does the C++ client show me this error?

This error is caused by the fact that you are attempting to re-declare a queue with different parameters.
As the documentation states, a queue declaration is intended to be an idempotent assertion - if the queue does not exist, it is created. If it does exist, but has different parameters, you get this error.
Declaration and Property Equivalence
Before a queue can be used it has to be declared. Declaring a queue
will cause it to be created if it does not already exist. The
declaration will have no effect if the queue does already exist and
its attributes are the same as those in the declaration. When the
existing queue attributes are not the same as those in the declaration
a channel-level exception with code 406 (PRECONDITION_FAILED) will be
raised.
Something is going on in your DeclareQueue("hound"); method that is different from channel.queue_declare(queue='hound'). Since we don't have the code for that, it is impossible to explain further, but I think this is sufficient information for you to solve the problem.
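For illustration (this is a sketch, not code from the original post), the same 406 can be reproduced from pika alone by re-declaring the queue with a different attribute, which is exactly the equivalence check quoted above. If SimpleAmqpClient's DeclareQueue passes different defaults for flags such as durable, exclusive or auto-delete (worth checking against its header), the broker rejects the declaration in the same way:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

# First declaration: all optional flags left at pika's defaults.
channel.queue_declare(queue='hound')

# Re-declaring the same name with a different attribute (here durable=True)
# makes the broker close the channel with 406 PRECONDITION_FAILED.
# (pika 1.x raises ChannelClosedByBroker; older releases raise ChannelClosed.)
try:
    channel.queue_declare(queue='hound', durable=True)
except pika.exceptions.ChannelClosedByBroker as err:
    print("Declaration rejected:", err)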

Related

ActiveMQ Stomp python client NACK consumes message

I'm using ActiveMQ classic v5.16.3 and experimenting with NACK. My expectation is that if the client sends a NACK then the message will remain on the queue and be available for another client. My code is below. I set a prefetch of 1, and ack mode of client-individual.
If I omit the conn.nack() call then I see my print statements, and the message remains on the queue - hence I believe that ActiveMQ is looking for an ACK or NACK.
When I include the conn.nack() call then I again see my print statements, and the message is removed from the queue.
Is this expected behaviour? I think a client should be able to reject malformed messages by NACK-ing them, and that ActiveMQ should eventually move them to a dead letter queue.
import time
import sys
import stomp

class MyListener(stomp.ConnectionListener):
    def on_error(self, frame):
        print('received an error "%s"' % frame.body)

    def on_message(self, frame):
        # experiment with and without the following line
        conn.nack(id=frame.headers['message-id'], subscription=frame.headers["subscription"])
        print('received a message "%s"' % frame.body)
        print('headers "%s"' % frame.headers)

print('Connecting ...')
conn = stomp.Connection()
conn.set_listener('', MyListener())
conn.connect('admin', 'admin', wait=True)
print('Connected')
conn.subscribe(destination='/queue/audit', id=1, ack='client-individual', headers={'activemq.prefetchSize': 1})
As suggested by Tim Bish, I needed to configure ActiveMQ to retry. I made the following changes to activemq.xml
Added scheduler support to the broker:
<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="localhost" dataDirectory="${activemq.data}"
        schedulerSupport="true" >
Specified the redelivery plugin:
<plugins>
    <redeliveryPlugin fallbackToDeadLetter="true"
                      sendToDlqIfMaxRetriesExceeded="true">
        <redeliveryPolicyMap>
            <redeliveryPolicyMap>
                <defaultEntry>
                    <redeliveryPolicy maximumRedeliveries="4"
                                      initialRedeliveryDelay="5000"
                                      redeliveryDelay="10000"/>
                </defaultEntry>
            </redeliveryPolicyMap>
        </redeliveryPolicyMap>
    </redeliveryPlugin>
</plugins>
And then for my chosen destination, specify that poison messages be sent to a specific queue - default is to publish to a Topic.
<destinationPolicy>
    <policyMap>
        <policyEntries>
            <policyEntry queue="audit" prioritizedMessages="true" >
                <deadLetterStrategy>
                    <individualDeadLetterStrategy queuePrefix="DLQ."
                                                  useQueueForQueueMessages="true"/>
                </deadLetterStrategy>
            </policyEntry>
        </policyEntries>
    </policyMap>
</destinationPolicy>
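To verify that the dead-lettering works end to end, a second stomp.py consumer can subscribe to the per-destination DLQ (with the individualDeadLetterStrategy above and queuePrefix="DLQ.", poison messages from /queue/audit should land on /queue/DLQ.audit). This is only a sketch, reusing the broker credentials from the question:

import time
import stomp

class DlqListener(stomp.ConnectionListener):
    def on_message(self, frame):
        print('dead-lettered message "%s"' % frame.body)
        print('headers "%s"' % frame.headers)

conn = stomp.Connection()
conn.set_listener('', DlqListener())
conn.connect('admin', 'admin', wait=True)

# auto ack is enough here; we only want to observe what reaches the DLQ
conn.subscribe(destination='/queue/DLQ.audit', id=2, ack='auto')

time.sleep(60)  # keep the connection open while the broker exhausts the redeliveries
conn.disconnect()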

RabbitMQ Pika long-running task and heartbeat threading

I have attempted to follow guidance given here: Handling long running tasks in pika / RabbitMQ and here: https://github.com/pika/pika/issues/753#issuecomment-318124510 on how to run long tasks in a separate thread to avoid interrupting the connection heartbeat. I'm a beginner to threading and still struggling to understand this solution.
For my final use case, I need to make function calls that are several minutes long, represented in the example code below by the long_function(). I've found that if the sleep call in long_function() exceeds the length of the heartbeat timeout, I lose connection (presumably because this function is blocking thread #2 from receiving/acknowledging the heartbeat messages from thread #1) and I get this message in the logs: ERROR: Unexpected connection close detected: StreamLostError: ("Stream connection lost: RxEndOfFile(-1, 'End of input stream (EOF)')",). A sleep call of the same length in the target function of thread #2 does not lead to a StreamLostError.
What's the proper solution for overcoming the StreamLostError here? Do I launch all subsequent function calls in their own threads to avoid blocking thread #2? Do I increase the heartbeat to be longer than long_function()? If this is the solution, what was the point of running my long task in a separate thread? Why not just make the heartbeat timeout in the main thread long enough to accommodate the whole message being processed? Thanks!
import functools
import logging
import pika
import threading
import time
import os
import ssl
from common_utils.rabbitmq_utils import send_message_to_queue, initialize_rabbitmq_channel
import json
import traceback

logging.basicConfig(format='%(asctime)s %(levelname)s: %(message)s',
                    level=logging.INFO,
                    datefmt='%Y-%m-%d %H:%M:%S')

def send_message_to_queue(channel, queue_name, body):
    channel.basic_publish(exchange='',
                          routing_key=queue_name,
                          body=json.dumps(body),
                          properties=pika.BasicProperties(delivery_mode=2)
                          )
    logging.info("RabbitMQ publish to queue {} confirmed".format(queue_name))

def initialize_rabbitmq_channel(timeout=5*60):
    credentials = pika.PlainCredentials(os.environ.get("RABBITMQ_USER"), os.environ.get("RABBITMQ_PASSWORD"))
    context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
    params = pika.ConnectionParameters(port=5671, host=os.environ.get("RABBITMQ_HOST"), credentials=credentials,
                                       ssl_options=pika.SSLOptions(context), virtual_host="/", heartbeat=timeout)
    connection = pika.BlockingConnection(params)
    return connection.channel(), connection

def long_function():
    logging.info("Long function starting...")
    time.sleep(5)
    logging.info("Long function finished.")

def ack_message(channel, delivery_tag):
    """
    Note that `channel` must be the same pika channel instance via which
    the message being ACKed was retrieved (AMQP protocol constraint).
    """
    if channel.is_open:
        channel.basic_ack(delivery_tag)
        logging.info("Message {} acknowledged".format(delivery_tag))
    else:
        logging.error("Channel is closed and message acknowledgement will fail")
        pass

def do_work(connection, channel, delivery_tag, body):
    thread_id = threading.get_ident()
    fmt1 = 'Thread id: {} Delivery tag: {} Message body: {}'
    logging.info(fmt1.format(thread_id, delivery_tag, body))
    # Simulating work including a call to another function that exceeds heartbeat timeout
    time.sleep(5)
    long_function()
    send_message_to_queue(channel, "test_inactive", json.loads(body))

    cb = functools.partial(ack_message, channel, delivery_tag)
    connection.add_callback_threadsafe(cb)

def on_message(connection, channel, method, property, body):
    t = threading.Thread(target=do_work, args=(connection, channel, method.delivery_tag, body))
    t.start()
    t.join()

if __name__ == "__main__":
    channel, connection = initialize_rabbitmq_channel(timeout=3)
    channel.basic_qos(prefetch_count=1)
    channel.basic_consume(queue="test_queue",
                          auto_ack=False,
                          on_message_callback=lambda channel, method, property, body: on_message(connection, channel, method, property, body)
                          )
    channel.start_consuming()
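One detail that may be relevant here (an observation, not part of the original post): calling t.join() inside on_message blocks the connection's I/O loop until do_work finishes, which reintroduces the heartbeat problem the thread was meant to avoid. The pattern in the linked pika example keeps the worker threads in a list and joins them only after start_consuming() returns. A minimal sketch of that variant, reusing the names from the code above:

threads = []

def on_message(connection, channel, method, property, body):
    t = threading.Thread(target=do_work, args=(connection, channel, method.delivery_tag, body))
    t.start()
    threads.append(t)  # do not join here; that would block heartbeat processing

# ... later, after channel.start_consuming() returns (e.g. on CTRL+C):
for t in threads:
    t.join()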

how to store rabbitMQ messages

I am using a RabbitMQ server with Python for sending and receiving messages.
This is the code I am using for sending a message to the queue:
import numpy as np
import pandas as pd
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='Q1')

message = 'Hello World'
channel.basic_publish(exchange='',
                      routing_key='Q1',
                      body=message)
# Printing the sending confirmation
print(" [x] Sent %r" % message)
connection.close()
Output:
[x] Sent 'Hello World'
This is the code I am using for receiving messages from the queue:
import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='Q1')

def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)

channel.basic_consume(callback, queue='Q1', no_ack=True)
channel.start_consuming()
Output:
[x] Received 'Hello World'
The problem is that I want to save this message, i.e. "Hello World", to a variable and then use it in my program, but I am not able to save it.
How can I save it to a variable?
And what would the solution be for multiple messages in the queue?
The problem is I want to save this message i.e. "Hello World" to a variable and then use it in my program
You've got a variable already which you can use in your program: body. If you'd like to decouple the infrastructure code (i.e. RabbitMQ/pika) from the business logic, you can simply declare another function and pass the body to it.
def processing_function(message_received):
    print(" [x] Received %r" % message_received)

def callback(ch, method, properties, body):
    processing_function(body)
The idea is that pika calls callback once a message is received and then body is passed to the processing_function which performs calculations.
If you're struggling to understand the callback function I'd recommend you to read this first.
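If the aim is to keep the received messages around for later use (including several messages from the queue), one option, sketched below, is to decode the body (recent pika on Python 3 delivers it as bytes) and append it to a list that the rest of the program can read:

received_messages = []  # every message consumed so far

def processing_function(message_received):
    # decode bytes to str if needed, then store the message
    text = message_received.decode('utf-8') if isinstance(message_received, bytes) else message_received
    received_messages.append(text)
    print(" [x] Stored %r (total so far: %d)" % (text, len(received_messages)))

def callback(ch, method, properties, body):
    processing_function(body)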

rabbitmq messages missing in receiver part

I have implemented RabbitMQ in my servers. So basically what it does is that the main server passes messages to the worker server.
The problem I am facing is that not all the messages I pass are received by the worker server; i.e., if I send 10 messages, only 4 of them are received.
Any idea where I am going wrong?
Receiving code
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='hello')

def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)

channel.basic_consume(callback,
                      queue='hello',
                      no_ack=True)

print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()
Publishing code
import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='task_queue', durable=True)

message = ' '.join(sys.argv[1:]) or "Hello World!"
channel.basic_publish(exchange='',
                      routing_key='task_queue',
                      body=message,
                      properties=pika.BasicProperties(
                          delivery_mode=2,  # make message persistent
                      ))
print(" [x] Sent %r" % message)
connection.close()
Assuming that you are actually publishing to the same queue you consume from (the examples you posted show otherwise: the publisher uses 'task_queue' while the consumer uses 'hello'), I would recommend that you enable the confirm delivery flag. This will ensure that your message gets delivered; if not, it will either throw an exception or publish will return False.
channel = connection.channel()
channel.confirm_delivery()

published = channel.basic_publish(...)
if not published:
    raise Exception("Unable to publish message!")
It might also be worth installing the management plugin for RabbitMQ and inspecting the queue before you start consuming messages. This way you can verify that the messages got published, and later consumed.
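As a quick programmatic check (a sketch, using the queue name from the publishing code), a passive queue declaration returns the current message count without creating or modifying anything, which makes it easy to confirm where the messages actually went:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

# passive=True only inspects the queue; it raises an error if the queue does not exist
result = channel.queue_declare(queue='task_queue', passive=True)
print("Messages waiting in 'task_queue':", result.method.message_count)
connection.close()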

How to ensure that messages get delivered?

How do you ensure that messages get delivered with Pika? By default it will not provide you with an error if the message was not delivered successfully.
In this example several messages can be sent before pika acknowledges that the connection was down.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='hello')

for index in range(10):
    channel.basic_publish(exchange='', routing_key='hello',
                          body='Hello World #%s!' % index)
print('Total Messages Sent: %s' % (index + 1))
connection.close()
When using Pika the channel.confirm_delivery() flag needs to be set before you start publishing messages. This is important so that Pika will confirm that each message has been sent successfully before sending the next message. This will however increase the time it takes to send messages to RabbitMQ, as delivery needs to be confirmed before the program can proceed with the next message.
channel.confirm_delivery()

try:
    for index in range(10):
        channel.basic_publish(exchange='', routing_key='hello',
                              body='Hello World #%s!' % index)
    print('Total Messages Sent: %s' % (index + 1))
except pika.exceptions.ConnectionClosed as exc:
    print('Error. Connection closed, and the message was never delivered.')
basic_publish will return a Boolean depending on whether the message was sent or not. But it is important to catch potential exceptions in case the connection is closed during transfer, and to handle them appropriately, as in those cases the exception will interrupt the flow of the program.
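Note that the Boolean return value applies to older pika releases; in pika 1.x basic_publish returns None, and with confirm_delivery() enabled a failed delivery surfaces as an exception instead. A sketch of that style, assuming pika 1.x and the same 'hello' queue:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='hello')
channel.confirm_delivery()

try:
    # mandatory=True asks the broker to return the message if it cannot be routed
    channel.basic_publish(exchange='', routing_key='hello',
                          body='Hello World!', mandatory=True)
    print('Message delivery confirmed')
except pika.exceptions.UnroutableError:
    print('Message could not be routed to any queue')
except pika.exceptions.NackError:
    print('Message was rejected (nacked) by the broker')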
After trying this myself and failing to receive anything other than an ack, I decided to implement a direct reply to the sender. I followed the example given here.
