How can I use Pika to send and receive RabbitMQ messages?

I'm having some issues getting Pika to work with routing keys or exchanges in a way that's consistent with the AMQP or RabbitMQ documentation. I understand that the RabbitMQ documentation uses an older version of Pika, so I have disregarded their example code.
What I'm trying to do is define a queue, "order", and have two consumers: one that handles the exchange or routing key "production" and one that handles "test". From the RabbitMQ documentation, that should be easy enough to do using either a direct exchange with routing keys or a topic exchange.
Pika, however, doesn't appear to know what to do with the exchanges and routing keys. Using the RabbitMQ management tool to inspect the queues, it's pretty obvious that Pika either didn't queue the message correctly or that RabbitMQ just threw it away.
On the consumer side, it isn't really clear how I should bind a consumer to an exchange or handle routing keys, and the documentation isn't really helping.
If I drop all ideas of exchanges and routing keys, messages queue up nicely and are easily handled by my consumer.
Any pointers or example code people have would be nice.

As it turns out, my understanding of AMQP was incomplete.
The idea is as follows:
Client:
The client, after getting the connection, should not care about anything else but the name of the exchange and the routing key. That is, we don't know which queue the message will end up in.
channel.basic_publish(exchange='order',
                      routing_key="order.test.customer",
                      body=pickle.dumps(data),
                      properties=pika.BasicProperties(
                          content_type="text/plain",
                          delivery_mode=2))
Consumer:
When the channel is open, we declare the exchange and the queue:
channel.exchange_declare(exchange='order',
                         type="topic",
                         durable=True,
                         auto_delete=False)
channel.queue_declare(queue="test",
                      durable=True,
                      exclusive=False,
                      auto_delete=False,
                      callback=on_queue_declared)
When the queue is ready (the "on_queue_declared" callback is a good place), we can bind the queue to the exchange, using our desired routing key:
channel.queue_bind(queue='test',
                   exchange='order',
                   routing_key='order.test.customer')
# handle_delivery is the callback that will actually pick up and
# handle messages from the "test" queue
channel.basic_consume(handle_delivery, queue='test')
Messages sent to the "order" exchange with the routing key "order.test.customer" will now be routed to the "test" queue, where the consumer can pick them up.

While Simon's answer seems right in general, note that with newer versions of Pika (1.0+) you need to swap the parameters for consuming:
channel.basic_consume(queue='test', on_message_callback=handle_delivery)
Basic setup is something like:
credentials = pika.PlainCredentials("some_user", "some_password")
parameters = pika.ConnectionParameters(
    "some_host.domain.tld", 5672, "some_vhost", credentials
)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
To start consuming:
channel.start_consuming()
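Putting the two answers together, a complete minimal consumer for a recent Pika (1.x) might look like the sketch below. The host, credentials, and exchange/queue names are the placeholder values from above, not anything canonical.
import pika

def handle_delivery(channel, method, properties, body):
    # Process the message, then ack it so RabbitMQ can drop it.
    print("received:", body)
    channel.basic_ack(delivery_tag=method.delivery_tag)

credentials = pika.PlainCredentials("some_user", "some_password")
parameters = pika.ConnectionParameters(
    "some_host.domain.tld", 5672, "some_vhost", credentials
)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()

# Pika 1.x renamed exchange_declare's keyword from type= to exchange_type=.
channel.exchange_declare(exchange="order", exchange_type="topic",
                         durable=True, auto_delete=False)
channel.queue_declare(queue="test", durable=True,
                      exclusive=False, auto_delete=False)
channel.queue_bind(queue="test", exchange="order",
                   routing_key="order.test.customer")

channel.basic_consume(queue="test", on_message_callback=handle_delivery)
channel.start_consuming()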

Related

Why does a publisher declare a queue in Pika RabbitMQ?

I have gone through the fundamentals of RabbitMQ. One thing I figured out is that a publisher does not directly publish on a queue. The exchange decides which queue the message should be published to, based on the routing key and the type of exchange (the code below uses the default exchange). I have also found some example publisher code:
import pika, os, logging
logging.basicConfig()
# Parse CLOUDAMQP_URL (fallback to localhost)
url = os.environ.get('CLOUDAMQP_URL', 'amqp://guest:guest@localhost/%2f')
params = pika.URLParameters(url)
params.socket_timeout = 5
connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.queue_declare(queue='pdfprocess')
# send a message
channel.basic_publish(exchange='', routing_key='pdfprocess', body='User information')
print("[x] Message sent to consumer")
connection.close()
In line #9 the queue is being declared. I am a bit confused, because the publisher does not have to be aware of the queue. For example, if it is using a fanout exchange and there are 100 queues with different names, how does the consumer know about and declare 100 queues?
The consumer can declare the queue and bind it to the exchange when it connects to RabbitMQ. A fanout exchange then copies and routes a received message to all queues bound to it, regardless of routing keys or pattern matching (as with direct and topic exchanges).
So no, the publisher does not have to be aware of all queues bound to the exchange. However, the publisher can make sure the queue exists so that the code will run smoothly, but that is of more importance for other exchange types.
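To make that concrete, here is a minimal consumer-side sketch for a fanout exchange (the exchange name "pdf_events" is made up for illustration). Each consumer declares its own server-named queue and binds it, so the publisher never needs to know how many queues exist.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
# The consumer declares the fanout exchange and its own exclusive,
# server-named queue, then binds it; fanout ignores routing keys.
channel.exchange_declare(exchange="pdf_events", exchange_type="fanout")
result = channel.queue_declare(queue="", exclusive=True)
queue_name = result.method.queue
channel.queue_bind(exchange="pdf_events", queue=queue_name)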
Any client (Publisher or Consumer) can create queues in RabbitMQ. Sometimes you might want the Publisher to create a queue, but for me that is usually the role of the Consumer. The Publisher doesn't need to know where, or even whether, anything it sends will be consumed.
For example, the Publisher can get an acknowledgement from the RabbitMQ server that a message has been received. The RabbitMQ server can get an acknowledgement from the Consumer when a message is consumed from a Queue.
A Publisher cannot get an acknowledgement of when a message is consumed from a Queue; it has no visibility of whether the message was routed to zero, one or multiple queues, or whether it was consumed from those queues.
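For the publisher-side acknowledgement mentioned above, Pika's blocking API offers publisher confirms; a small sketch, reusing the queue name from the question:
# Enable publisher confirms: basic_publish then blocks until the broker
# acknowledges the message, and raises pika.exceptions.NackError if the
# broker refuses it.
channel.confirm_delivery()
channel.basic_publish(exchange="", routing_key="pdfprocess",
                      body="User information")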

Discarding rabbitmq messages no one is listening to

I am a complete newbie to RabbitMQ messaging, so apologies if this question is silly or my setup completely pear-shaped.
In my setup I use RabbitMQ to send messages from certain probes. Each probe has a unique name. I then have a centralised server where I process the data, if there is a need.
I use a direct exchange and routing keys that correspond to probe names.
I declare my consumer (server) as follows (this is more or less from the RabbitMQ tutorials):
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.exchange_declare(exchange="foo", type="direct")
result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue
If at some point I become interested in what a probe is reporting, I issue
channel.queue_bind(exchange="foo", queue=queue_name, routing_key="XXX")
where XXX is the name of the probe.
My publishers at the probes are declared as follows:
connection = pika.BlockingConnection(pika.ConnectionParameters(host="foo.bar.com"))
channel = connection.channel()
channel.exchange_declare(exchange="foo", type="direct")
and when I send a message, I use
channel.basic_publish(exchange="foo", routing_key="XXX", body=data)
where XXX is the name of the probe.
This all works fine. But how do I make it so that messages with routing keys that no one is listening to get discarded immediately? Right now, if my consumer stops listening to a routing key or is not running at all, messages sent by the probes start piling up. When I start my consumer, or have it listen to a routing key it has not been listening to in a while, I might find a backlog of tens of thousands of messages. That is not what I need, and such a backlog is bound to cause resource exhaustion somewhere.
Is there a way to modify this so that messages get discarded instead of queued if no one is listening for them when they arrive at the exchange? I would assume there is a way, but Google and the Pika documents did not help.
Thanks in advance.
But how do I make it so that messages to routing keys that no one is listening to get discarded immediately?
By default, RabbitMQ implements this. You just need to make sure that there is no queue bound to that routing key.
Now if my consumer stops listening to a routing key or is not running at all, messages sent by probes start piling up
If there is no queue for that routing key, all messages will be discarded.
Is there a way to modify this so that messages get discarded instead of queued if there is no one listening to them when they arrive at the exchange?
RabbitMQ's default behavior itself supports this (for direct exchanges).
Go through the tutorial at https://www.rabbitmq.com/tutorials/tutorial-four-python.html
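In the setup from the question, that means removing the binding as soon as you stop caring about a probe; a one-line sketch using the names from the question:
# Once no queue is bound with this routing key, the direct exchange
# silently drops messages published with it.
channel.queue_unbind(queue=queue_name, exchange="foo", routing_key="XXX")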

Do I need rabbitmq bindings for direct exchange?

I have a RabbitMQ server running, with one direct exchange that all my messages go through. The messages are routed to individual non-permanent queues (they may last a couple of hours). I just started reading about binding queues to exchanges, and I am a bit confused as to whether I actually need to bind my queues to the exchange or not. I'm using Pika's basic_publish and consume functions, so maybe this is implied? Not really sure, I just want to understand a bit more.
Thanks
If you are using the default exchange for direct routing (exchange = ''), then you don't have to declare any bindings. By default, all queues are bound to the default exchange. As long as the routing key exactly matches a queue name (and the queue exists), the queues can stay bound to the default exchange. See https://www.rabbitmq.com/tutorials/tutorial-one-dotnet.html.
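As a minimal sketch of that default-exchange pattern (the queue name is made up): the routing key must equal the queue name, and no queue_bind call appears anywhere.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
# Every queue is implicitly bound to the default exchange ('') under
# its own name, so no explicit queue_bind is required.
channel.queue_declare(queue="task_queue")
channel.basic_publish(exchange="", routing_key="task_queue", body="hello")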
Always. In fact, even though queues are strictly a consumer-side entity, they should be declared & bound to the direct exchange by the producer(s) at the time they create the exchange.
You have to bind a queue to an exchange with some binding key, or else messages will be discarded.
This is how any AMQP broker works: the publisher publishes a message to an exchange with some key, and the AMQP broker (RabbitMQ) routes the message from the exchange to the queue(s) bound to that exchange with the given key.
However, it's not mandatory to declare and bind a queue in the publisher.
You can do that in the subscriber, but make sure you run your subscriber before starting your publisher.
If you think your messages are getting routed to a queue without bindings, then you are missing something.
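A short sketch of that subscriber-side declare-and-bind for a named direct exchange (all names here are hypothetical):
# Run this in the subscriber before the publisher starts, so the
# binding exists and messages are not discarded by the exchange.
channel.exchange_declare(exchange="jobs", exchange_type="direct")
channel.queue_declare(queue="jobs_high")
channel.queue_bind(queue="jobs_high", exchange="jobs", routing_key="high")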

Comet for User based Notification over a Message Queue

We are trying to build an application that should use Comet (AJAX push) to send notifications to individual users. Most notifications will have a fairly low timeout.
As we are running RabbitMQ, it would be easiest to send messages through AMQP. I am wondering what the best way to address individual users is, so that both the Comet server and the queue server have an easy job.
I have looked at a number of solutions, including using Carrot with Orbited, Tornado, and more.
If the Comet server registers one consumer (with the queue) for every user, then these consumers either have to be kept alive with a timeout or discarded after every use. Neither solution seems very promising. I imagine something like this would be possible in Tornado/Carrot:
class MainHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        user_id = 123
        self.consumer = Consumer(connection=conn, queue="feed",
                                 exchange="feed", routing_key=user_id)
        self.consumer.register_callback(self.message_received)
        self.consumer.wait()

    def message_received(self, message_data, message):
        self.write(simplejson.dumps(message_data))
        message.ack()
        self.consumer.close()
        self.finish()
Alternatively, the Comet server could have just one consumer for the queue, and would then have to implement its own lightweight message store that caches incoming notifications until a user connects and consumes them. This seems like something that memcached might be good for, but I have no experience with it.
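As a toy illustration of that single-consumer idea (purely hypothetical, in-process rather than memcached): one AMQP callback buffers notifications per user, and the Comet handler drains its user's buffer on connect.
from collections import defaultdict, deque

# Hypothetical per-user buffer fed by a single AMQP consumer.
pending = defaultdict(deque)

def on_notification(message_data, message):
    pending[message_data["user_id"]].append(message_data)
    message.ack()

def drain(user_id):
    # Called by the Comet handler when the user connects.
    return list(pending.pop(user_id, deque()))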
What would be the best approach here?
I had almost the same use case and eventually ended up with Socket.IO for the client side, TornadIO for handling connections, and RabbitMQ for message passing (via Pika). Works quite well, worth trying out.

Posting messages in two RabbitMQ queues instead of one (using py-amqp)

I've got this strange problem using py-amqp and the Flopsy module. I have written a publisher that sends messages to a RabbitMQ server, and I wanted to be able to send them to a specified queue. With the Flopsy module that is not possible, so I tweaked it, adding a parameter and a line to declare the queue in the __init__ method of the Publisher object:
def __init__(self, routing_key=DEFAULT_ROUTING_KEY,
             exchange=DEFAULT_EXCHANGE, connection=None,
             delivery_mode=DEFAULT_DELIVERY_MODE, queue=DEFAULT_QUEUE):
    self.connection = connection or Connection()
    self.channel = self.connection.connection.channel()
    self.channel.queue_declare(queue)  # ADDED TO SET UP QUEUE
    self.exchange = exchange
    self.routing_key = routing_key
    self.delivery_mode = delivery_mode
The channel object is part of the py-amqplib library.
The problem I've got is that, even though it's sending the messages to the specified queue, it's ALSO sending them to the default queue. As in this system we expect to send quite a lot of messages, we don't want to stress the system with useless duplicates. I've tried to debug the code and step into the py-amqplib library, but I'm not able to figure out any error or missing step. Also, I'm not able to find any documentation for py-amqplib outside the code.
Any ideas on why is this happening and how to correct it?
OK, I think I've got it, unless anybody else has a better idea. I've checked this tutorial on AMQP. I was assuming that the publisher should know the queue, but that's not the case: you need to send the message to an exchange, and the consumer will declare that its queue is related to the exchange. That allows different options for sending and receiving, as you can see in the tutorial.
So I've included the exchange information in both the publisher and the consumer, not making use of the call to queue_declare, and it appears to be working just fine.
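For the record, a rough sketch of that split with py-amqplib (the exchange, queue, and key names are invented for illustration): the publisher declares only the exchange, while the consumer declares the queue and binds it.
from amqplib import client_0_8 as amqp

conn = amqp.Connection(host="localhost", userid="guest", password="guest")
ch = conn.channel()

# Publisher side: declare the exchange and publish to it; no queues here.
ch.exchange_declare("orders", "direct", durable=True, auto_delete=False)
ch.basic_publish(amqp.Message("payload"), exchange="orders", routing_key="new")

# Consumer side (normally a separate process): own the queue and the binding.
ch.queue_declare("order_queue", durable=True, auto_delete=False)
ch.queue_bind("order_queue", "orders", routing_key="new")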
