I have a RabbitMQ server running with one direct exchange that all my messages go through. The messages are routed to individual non-permanent queues (they may last a couple of hours). I just started reading about queue bindings to exchanges and am a bit confused as to whether I actually need to bind my queues to the exchange or not. I'm using pika's basic_publish and consume functions, so maybe this is implied? Not really sure; I just want to understand a bit more.
Thanks
If you are using the default exchange for direct routing (exchange = ''), then you don't have to declare any bindings. By default, all queues are bound to the default exchange under their own names. As long as the routing key exactly matches a queue name (and the queue exists), the message will be routed to that queue. See https://www.rabbitmq.com/tutorials/tutorial-one-dotnet.html.
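A minimal sketch of publishing via the default exchange (the queue name and message body are illustrative assumptions). The declarations are wrapped in a function that takes an open channel, so it works with a pika BlockingChannel:

```python
# With the default exchange (exchange=''), no queue_bind call is needed:
# every queue is implicitly bound to it under its own name, so publishing
# with a routing key equal to the queue name delivers straight to that queue.

def publish_via_default_exchange(channel, queue_name, body):
    channel.queue_declare(queue=queue_name)        # ensure the queue exists
    channel.basic_publish(exchange="",             # '' selects the default exchange
                          routing_key=queue_name,  # routing key == queue name
                          body=body)
```

With a pika BlockingConnection you would call it as publish_via_default_exchange(connection.channel(), "my_queue", b"hello").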
Always. In fact, even though queues are strictly a consumer-side entity, they should be declared & bound to the direct exchange by the producer(s) at the time they create the exchange.
You have to bind a queue to an exchange with some binding key, or messages will be discarded.
This is how any AMQP broker works: the publisher publishes a message to an exchange with some key, and the AMQP broker (RabbitMQ) routes the message from the exchange to the queue(s) that are bound to that exchange with the given key.
However, it's not mandatory to declare and bind a queue in the publisher.
You can do that in the subscriber, but make sure you run your subscriber before starting your publisher.
If you think your messages are getting routed to a queue without bindings, then you are missing something.
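A minimal sketch of the declare-and-bind step (exchange, queue, and key names are illustrative assumptions), written as a function that takes an open channel so it works with a pika BlockingChannel:

```python
# Declare a direct exchange and a queue, then bind them with a binding key;
# only messages published to "orders" with routing_key="new_order" will
# reach this queue.

def declare_and_bind(channel):
    channel.exchange_declare(exchange="orders", exchange_type="direct")
    channel.queue_declare(queue="order_processing")
    channel.queue_bind(queue="order_processing", exchange="orders",
                       routing_key="new_order")
```

With a pika BlockingConnection you would call declare_and_bind(connection.channel()) before consuming.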
This is really a programming design question more than a specific language or library question. I'm tinkering with the idea of a standalone chat server for websockets that will accept several remote browser-based javascript clients. I'm going for something super simple at first, then might build it up. The server just keeps accepting client connections and listens for messages. When a message is received, it will be sent back to all the clients.
What I need to better understand is which approach is best for sending the messages out to all clients, specifically, sending immediately to all clients, or queuing the messages to each client's queue to be sent when a client connection handler's turn comes up. Below are the two examples in a python-like pseudo-code:
Broadcast Method
def client_handler(client):
    while True:
        if client.pending_msg:
            rmsg = client.recv()
            for c in clients:
                c.send(rmsg)
        client.sleep(1)
Queue Method
def client_handler(client):
    while True:
        if client.pending_msg:
            rmsg = client.recv()
            for c in clients:
                c.queue_msg(rmsg)
        if client.has_queued:
            client.send_queue()
        client.sleep(1)
What is the best approach? Or, perhaps they are good for different use-cases, in which case, what are the pros, cons and circumstances for which they should be used. Thanks!
First of all, it seems odd to me that a single client handler would know about all the other existing clients. This should be the first thing you should abstract away and create a central message processing handler instead which the individual clients talk to.
That handler can then either send the message directly to the clients (like in your broadcast example), or add them to queues of the clients (like your queue example). Which would be the preferred version depends a bit on your network protocol.
Since you said that you will be using websockets, you have a persistent network connection to the clients anyway, so you can just send them out immediately. There is no real gain to queue (and buffer) the messages. Ideally, a client would just have a send() method anyway, and the client would then internally decide whether that means appending it to a queue or sending it immediately over the network.
Furthermore, since websockets are kind of asynchronous in their nature, you don’t need busy wait loops anyway. You can just listen for messages from the client directly, process those, and broadcast them using your central handler. And since you then don’t have a wait loop anymore, there also would be no place where you work off your queue anymore, making the immediate broadcast the more natural decision.
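The central-handler idea above can be sketched in plain Python (class and method names are illustrative assumptions; FakeClient stands in for a real websocket connection):

```python
# A central hub that all clients register with. Incoming messages are
# broadcast to every connected client immediately, so no per-client queue
# or busy-wait loop is needed.

class ChatHub:
    def __init__(self):
        self.clients = []

    def register(self, client):
        self.clients.append(client)

    def unregister(self, client):
        self.clients.remove(client)

    def broadcast(self, message):
        # Send to all clients right away -- with a persistent websocket
        # connection there is no real gain in buffering per client.
        for client in self.clients:
            client.send(message)

class FakeClient:
    """Stand-in for a websocket connection, used here for illustration."""
    def __init__(self):
        self.received = []

    def send(self, message):
        self.received.append(message)

hub = ChatHub()
a, b = FakeClient(), FakeClient()
hub.register(a)
hub.register(b)
hub.broadcast("hello")
```

In a real server, each websocket handler would just call hub.broadcast() when it receives a message, and the client object's send() would write to the socket.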
I use pika for Python to communicate with RabbitMQ. I have 6 threads which consume and acknowledge messages from the same queue. I use a different connection (and channel) for each thread. So I have a few closely related questions:
If the connection to RabbitMQ closes in one of the threads and I reconnect, will the delivery tag value reset and start from 0 after the reconnect?
After reconnecting, will I receive the same unacknowledged messages in the same order for each thread, or will the broker start distributing them between all threads again, or will it continue from the reconnect point?
This is important in my app, because there is a delay between receiving a message and acknowledging it, and I want to avoid duplicates in the next processing steps.
Delivery tags are channel-specific and server-assigned. See the details in the AMQP spec, section 1.1 "AMQP-defined Domains", or RabbitMQ's documentation for delivery-tag. RabbitMQ's initial value for delivery-tag is 1;
zero is reserved for client use, meaning "all messages so far received".
With multiple consumers on a single queue there is no guarantee that consumers will get messages in the same order they were queued. See RabbitMQ's Broker Semantics, "Message ordering guarantees" paragraph, for details:
Section 4.7 of the AMQP 0-9-1 core specification explains the conditions under which ordering is guaranteed: messages published in one channel, passing through one exchange and one queue and one outgoing channel will be received in the same order that they were sent. RabbitMQ offers stronger guarantees since release 2.7.0.
Messages can be returned to the queue using AMQP methods that feature a requeue parameter (basic.recover, basic.reject and basic.nack), or due to a channel closing while holding unacknowledged messages. Any of these scenarios caused messages to be requeued at the back of the queue for RabbitMQ releases earlier than 2.7.0. From RabbitMQ release 2.7.0, messages are always held in the queue in publication order, even in the presence of requeueing or channel closure.
With release 2.7.0 and later it is still possible for individual consumers to observe messages out of order if the queue has multiple subscribers. This is due to the actions of other subscribers who may requeue messages. From the perspective of the queue the messages are always held in the publication order.
Also, see this answer for "RabbitMQ - Message order of delivery" question.
By default, when I send a message with an absent routing_key, the broker rejects it. How can I force RabbitMQ to send it to some 'default' queue? For example, I have 3 consumers with keys 'con1', 'con2' and 'con4'. If I send a message with the key 'con3', I need the broker to requeue the message to some 'starter' queue that can start a 'con3' consumer and requeue the message again.
I found https://github.com/tonyg/script-exchange and I'm sure it would help me, but I can't install it because the repository was last updated 4 years ago and the modern umbrella dev kit does not support its old makefile.
It's necessary to use a combination of the alternate exchange protocol extension and the consistent-hash exchange plugin. So, you should declare 2 exchanges: a direct one and an x-consistent-hash one (as the alternate for the first). Then all existing consumers should create their own queues bound to the direct exchange. In this case all messages with routing keys 'con1', 'con2' and 'con4' will be routed directly to the consumers, whereas messages with other routing keys will be rerouted to the alternate exchange, which can route them to some 'managers' that start the necessary processors (consumers).
The 'script-exchange' RabbitMQ plugin is unsupported now.
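A sketch of the two-exchange setup described above (exchange and queue names are illustrative assumptions; the x-consistent-hash type requires RabbitMQ's consistent-hash exchange plugin to be enabled). It is written as a function taking an open pika channel:

```python
def declare_with_alternate(channel):
    # Fallback exchange that receives messages the direct exchange
    # cannot route.
    channel.exchange_declare(exchange="unrouted",
                             exchange_type="x-consistent-hash")
    # Main direct exchange; the "alternate-exchange" argument tells RabbitMQ
    # where to send messages whose routing key matches no binding.
    channel.exchange_declare(exchange="jobs",
                             exchange_type="direct",
                             arguments={"alternate-exchange": "unrouted"})
    # A 'starter'/manager queue fed by the fallback exchange. For the
    # consistent-hash exchange the binding key is a numeric weight,
    # not a pattern.
    channel.queue_declare(queue="starter")
    channel.queue_bind(queue="starter", exchange="unrouted", routing_key="1")
```

Consumers with known keys ('con1', 'con2', 'con4') would bind their own queues to the "jobs" exchange as usual; anything else falls through to "unrouted" and lands in "starter".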
I'd like to do some routing magic with AMQP. My setup is Python with Pika on the consumer/producer side and RabbitMQ for the AMQP server.
What I'd like to achieve:
send a message to a single exchange
(insert magic here)
consume messages like so:
one set of subscribers should be able to retrieve based on a routing key
one set of subscribers should just get all messages.
The tricky part is that if any server in the second set has received a message, no other server from the second set will receive it. All the servers from the first set should still be able to consume this message.
Is this possible with a single basic_publish call or do I need to send the message to a routing exchange (for the first set of consumers) and to a "global" exchange for the second set of consumers?
CLARIFICATION:
What I'd like to achieve is a single call to publish a message and have it received by 2 distinct sets of consumers.
Case 1: Just receive messages based on routing key (that is, a message with routing key foo will be received by all the consumers currently interested in that topic).
Case 2: This basically resembles the RabbitMQ Tutorial for Worker Queues. There are a number of workers that will receive messages dispatched in a round-robin way. Only one worker will receive a message.
Still, the message received by the consumers interested in a certain routing key should be exactly the same as the message received by the workers, produced by a single API call.
(Hope my question makes sense I'm not too familiar with AMQP terms)
To start with, you need to use a topic exchange and publish your messages with a different routing key for each queue. The magic happens when the consumer binds the queue with a binding key (or pattern to be matched). Some consumers just use the routing keys as their binding key. But the second set will use a wildcard pattern for their binding key.
For Case 1, you need to create a queue per consumer, and bind each queue with an appropriate routing key.
For Case 2, just create a single queue with a routing key of # and have each of your worker consumers consume from that. The broker will dispatch in a round-robin manner to the workers.
Here's a screenshot of what it would look like in RabbitMQ. In this example there are two consumers from your "case 1" (Foo and Bar) and one queue for all the workers to satisfy "case 2".
This model should be supported by all AMQP-compliant brokers and wouldn't require any vendor-specific enhancements.
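The topology above can be sketched with pika (exchange and queue names are illustrative assumptions), written as functions taking an open channel:

```python
def declare_topology(channel):
    channel.exchange_declare(exchange="events", exchange_type="topic")
    # Case 1: one queue per interested consumer, bound to the specific key.
    channel.queue_declare(queue="foo_consumer")
    channel.queue_bind(queue="foo_consumer", exchange="events",
                       routing_key="foo")
    # Case 2: one shared queue bound with the '#' wildcard so it receives
    # every message; the broker round-robins deliveries among its consumers.
    channel.queue_declare(queue="workers")
    channel.queue_bind(queue="workers", exchange="events", routing_key="#")

def publish(channel, key, body):
    # A single basic_publish; the topic exchange copies the message to every
    # queue whose binding key matches, so both sets of consumers see it.
    channel.basic_publish(exchange="events", routing_key=key, body=body)
```

A publish with routing key "foo" then reaches both the "foo_consumer" queue and the shared "workers" queue, where exactly one worker picks it up.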
I'm having some issues getting Pika to work with routing keys or exchanges in a way that's consistent with the AMQP or RabbitMQ documentation. I understand that the RabbitMQ documentation uses an older version of Pika, so I have disregarded their example code.
What I'm trying to do is define a queue, "order", and have two consumers, one that handles the exchange or routing_key "production" and one that handles "test". From looking at the RabbitMQ documentation, that should be easy enough to do by using either a direct exchange and routing keys or by using a topic exchange.
Pika however doesn't appear to know what to do with the exchanges and routing keys. Using the RabbitMQ management tool to inspect the queues, it's pretty obvious that Pika either didn't queue the message correctly or that RabbitMQ just threw it away.
On the consumer side it isn't really clear how I should bind a consumer to an exchange or handle routing keys and the documentation isn't really helping.
If I drop all ideas or exchanges and routing keys, messages queue up nicely and are easily handled by my consumer.
Any pointers or example code people have would be nice.
As it turns out, my understanding of AMQP was incomplete.
The idea is as following:
Client:
The client, after getting the connection, should not care about anything else but the name of the exchange and the routing key. That is, we don't know which queue this message will end up in.
channel.basic_publish(exchange='order',
                      routing_key="order.test.customer",
                      body=pickle.dumps(data),
                      properties=pika.BasicProperties(
                          content_type="text/plain",
                          delivery_mode=2))
Consumer
When the channel is open, we declare the exchange and queue
channel.exchange_declare(exchange='order',
                         type="topic",
                         durable=True,
                         auto_delete=False)

channel.queue_declare(queue="test",
                      durable=True,
                      exclusive=False,
                      auto_delete=False,
                      callback=on_queue_declared)
When the queue is ready, in the "on_queue_declared" callback is a good place, we can bind the queue to the exchange, using our desired routing key.
channel.queue_bind(queue='test',
                   exchange='order',
                   routing_key='order.test.customer')

# handle_delivery is the callback that will actually pick up and handle
# messages from the "test" queue
channel.basic_consume(handle_delivery, queue='test')
Messages sent to the "order" exchange with the routing key "order.test.customer" will now be routed to the "test" queue, where the consumer can pick them up.
While Simon's answer seems right in general, with recent versions of pika (1.x) you might need to swap the parameters for consuming:
channel.basic_consume(queue='test', on_message_callback=handle_delivery)
(Likewise, exchange_declare now takes exchange_type= instead of type=.)
Basic setup is something like:
credentials = pika.PlainCredentials("some_user", "some_password")
parameters = pika.ConnectionParameters(
"some_host.domain.tld", 5672, "some_vhost", credentials
)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
To start consuming:
channel.start_consuming()
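Putting the pieces together, here is a consumer sketch for pika 1.x (the exchange, queue, and routing key are the illustrative values from the answer above; with on_message_callback the callback signature is (channel, method, properties, body)):

```python
def handle_delivery(channel, method, properties, body):
    print(body)                                          # process the message
    channel.basic_ack(delivery_tag=method.delivery_tag)  # then acknowledge it

def run_consumer(channel):
    channel.exchange_declare(exchange="order", exchange_type="topic",
                             durable=True)
    channel.queue_declare(queue="test", durable=True)
    channel.queue_bind(queue="test", exchange="order",
                       routing_key="order.test.customer")
    channel.basic_consume(queue="test", on_message_callback=handle_delivery)
    channel.start_consuming()
```

With the connection setup above you would call run_consumer(connection.channel()).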