ActiveMQ Stomp python client NACK consumes message - python

I'm using ActiveMQ classic v5.16.3 and experimenting with NACK. My expectation is that if the client sends a NACK then the message will remain on the queue and be available for another client. My code is below. I set a prefetch of 1, and ack mode of client-individual.
If I omit the conn.nack() call then I see my print statements, and the message remains on the queue - hence I believe that ActiveMQ is looking for an ACK or NACK.
When I include the conn.nack() call then I again see my print statements, and the message is removed from the queue.
Is this expected behaviour? I think a client should be able to reject malformed messages by NACK-ing and that eventually ActiveMQ should put them to a dead letter queue.
import time
import sys
import stomp

class MyListener(stomp.ConnectionListener):
    def on_error(self, frame):
        print('received an error "%s"' % frame.body)

    def on_message(self, frame):
        # experiment with and without the following line
        conn.nack(id=frame.headers['message-id'], subscription=frame.headers['subscription'])
        print('received a message "%s"' % frame.body)
        print('headers "%s"' % frame.headers)

print('Connecting ...')
conn = stomp.Connection()
conn.set_listener('', MyListener())
conn.connect('admin', 'admin', wait=True)
print('Connected')
conn.subscribe(destination='/queue/audit', id=1, ack='client-individual', headers={'activemq.prefetchSize': 1})

As suggested by Tim Bish, I needed to configure ActiveMQ to retry. I made the following changes to activemq.xml
Added scheduler support to the broker:
<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="localhost" dataDirectory="${activemq.data}"
        schedulerSupport="true">
Specified the redelivery plugin:
<plugins>
  <redeliveryPlugin fallbackToDeadLetter="true"
                    sendToDlqIfMaxRetriesExceeded="true">
    <redeliveryPolicyMap>
      <redeliveryPolicyMap>
        <defaultEntry>
          <redeliveryPolicy maximumRedeliveries="4"
                            initialRedeliveryDelay="5000"
                            redeliveryDelay="10000"/>
        </defaultEntry>
      </redeliveryPolicyMap>
    </redeliveryPolicyMap>
  </redeliveryPlugin>
</plugins>
And then, for my chosen destination, specify that poison messages be sent to a specific queue (the default is to publish them to a topic).
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry queue="audit" prioritizedMessages="true">
        <deadLetterStrategy>
          <individualDeadLetterStrategy queuePrefix="DLQ."
                                        useQueueForQueueMessages="true"/>
        </deadLetterStrategy>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>
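With the broker now retrying NACKed messages and eventually dead-lettering them, the client-side "reject malformed messages" idea from the question can be sketched as below. Only the pure validation function is shown, because conn.ack/conn.nack need a live broker; the assumption that message bodies are JSON is mine, not from the original post.

```python
import json

def is_well_formed(body):
    # Assumption (not from the original post): bodies are expected to be
    # JSON; anything that fails to parse is treated as a poison message.
    try:
        json.loads(body)
        return True
    except ValueError:
        return False

# In on_message you would ACK well-formed bodies and NACK the rest,
# letting the redeliveryPlugin above move repeat offenders to DLQ.audit.
print(is_well_formed('{"event": "login"}'))  # -> True
print(is_well_formed('not json'))            # -> False
```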

Related

Stomp.py not processing one message at a time

I am using Stomp.py to connect to a standard ActiveMQ server. I am simulating cases where the receiver crashes, and I want to be able to restart it and have it continue running from the message after the one that caused the crash.
I have created two sample scripts:
putMessagesToQueue.py - puts 56 messages into the destination
readMessagesFromQueue.py - reads messages from the destination; raises an exception when it reads the 6th message. Each message takes 1 second to process.
Steps I take to run the test:
I run putMessagesToQueue.py
I run readMessagesFromQueue.py - it processes 5 messages successfully and an exception is raised on message 6
I terminate readMessagesFromQueue.py (ctrl-c)
I run readMessagesFromQueue.py again
For the behaviour I want, in step 4 it should start processing from message 7.
However, I don't see this.
If the receiver subscribes with ack='auto', then in step 4 it processes no messages: all the messages are gone from the queue and I have lost 50 messages!
If I use ack='client' or ack='client-individual', then in step 4 it starts again from the beginning and crashes again on message 6.
This seems to suggest that the receiver is not processing messages one at a time; instead it is taking every single message at once and running through each one. I don't want this behaviour, because I would like to scale up to running 5 receivers and I want the load distributed. At the moment the first receiver I start takes all the messages and starts churning through them, and receivers 2-5 just wait for new messages. I want the receivers to take messages one at a time instead!
Can anyone give me any hints on what I am implementing wrong?
Source
putMessagesToQueue.py
import stomp

stompurl = "127.0.0.1"
stompport = "61613"
stompuser = "admin"
stomppass = "admin"
destination = "/queue/testQueueWithCrash"

conn = stomp.Connection(host_and_ports=[(stompurl, stompport)])
conn.connect(stompuser, stomppass, wait=True)

for x in range(0, 5):
    conn.send(body="OK-BEFORE-CRASH", destination=destination)
conn.send(body="CRASH", destination=destination)
for x in range(0, 50):
    conn.send(body="OK-AFTER-CRASH", destination=destination)
readMessagesFromQueue.py
import stomp
import time

stompurl = "127.0.0.1"
stompport = "61613"
stompuser = "admin"
stomppass = "admin"
destination = "/queue/testQueueWithCrash"

conn = stomp.Connection(host_and_ports=[(stompurl, stompport)])
conn.connect(stompuser, stomppass, wait=True)

class StompConnectionListenerClass(stomp.ConnectionListener):
    processMessage = None

    def __init__(self, processMessage):
        self.processMessage = processMessage

    def on_error(self, headers, message):
        print('XX received an error "%s"' % message)

    def on_message(self, headers, message):
        self.processMessage(headers, message)

def messageProcessingFunction(headers, message):
    print('Main received a message "%s"' % message)
    if message == "CRASH":
        print("Message told processor to crash")
        raise Exception("Reached message which crashes receiver")
    time.sleep(1)  # simulate processing message taking time

stompConnectionListener = StompConnectionListenerClass(processMessage=messageProcessingFunction)
conn.set_listener('', stompConnectionListener)

print("Subscribing")
conn.subscribe(destination=destination, id=1, ack='auto')
#conn.subscribe(destination=destination, id=1, ack='client')
#conn.subscribe(destination=destination, id=1, ack='client-individual')

print("Terminate loop starting (Press ctrl+c when you want to exit)")
try:
    while True:
        time.sleep(10)
except KeyboardInterrupt:
    print('interrupted - so exiting!')
    conn.close()
print("Receiver terminated")
Update 001
I managed to obtain the desired behaviour described above by changing the receive function to use ack='client-individual' and manually sending ack frames. (See the new version below.)
But I am still unable to get the receivers to process one message at a time. This can be demonstrated with the following steps:
I run putMessagesToQueue.py
I run readMessagesFromQueue2.py - it will start processing
In a new terminal run readMessagesFromQueue2.py
At first the second readMessagesFromQueue2.py does nothing; it only starts receiving messages once the first one crashes. I want both instances of the receiver to read messages from the start.
readMessagesFromQueue2.py
import stomp
import time

stompurl = "127.0.0.1"
stompport = "61613"
stompuser = "admin"
stomppass = "admin"
destination = "/queue/testQueueWithCrash"

conn = stomp.Connection(host_and_ports=[(stompurl, stompport)])
conn.connect(stompuser, stomppass, wait=True)

class StompConnectionListenerClass(stomp.ConnectionListener):
    processMessage = None
    conn = None

    def __init__(self, processMessage, conn):
        self.processMessage = processMessage
        self.conn = conn

    def on_error(self, headers, message):
        print('XX received an error "%s"' % message)

    def on_message(self, headers, message):
        try:
            self.processMessage(headers, message)
        finally:
            self.conn.ack(id=headers["message-id"], subscription=headers["subscription"])

def messageProcessingFunction(headers, message):
    print('Main received a message "%s"' % message)
    if message == "CRASH":
        print("Message told processor to crash")
        raise Exception("Reached message which crashes receiver")
    time.sleep(1)  # simulate processing message taking time

stompConnectionListener = StompConnectionListenerClass(processMessage=messageProcessingFunction, conn=conn)
conn.set_listener('', stompConnectionListener)

print("Subscribing")
conn.subscribe(destination=destination, id=1, ack='client-individual')

print("Terminate loop starting (Press ctrl+c when you want to exit)")
try:
    while True:
        time.sleep(10)
except KeyboardInterrupt:
    print('interrupted - so exiting!')
    conn.close()
print("Receiver terminated")
After lots of reading of different docs, I found the problem.
ActiveMQ has a prefetch size option - https://svn.apache.org/repos/infra/websites/production/activemq/content/5.7.0/what-is-the-prefetch-limit-for.html
If you have few messages that take a long time to process, you can set it to 1. This is not appropriate in other situations.
I can do this in stomp.py with the following line:
conn.subscribe(destination=destination, id=1, ack='client-individual', headers={'activemq.prefetchSize': 1})
So using manual or auto ack was neither here nor there. The key is limiting the prefetch to 1.

Acking activemq using python and stomp

I have an activemq set up with a ton of api calls to Zendesk. I need to retrieve those calls and then remove them from the queue entirely. conn.ack doesn't seem to be working!
I'm using python3 and the most recent version of stomp. I used this to make the initial connection script: https://github.com/jasonrbriggs/stomp.py/wiki/Simple-Example
https://jasonrbriggs.github.io/stomp.py/api.html
In this doc, it looks like you have to set the "id" tag of the .subscribe method. You call conn.ack with that id, but you also pass the message id as an argument. I found that the headers of the message are retrieved with the listener function. I printed them out and they look like this:
ID:[my workstation id].local-49557-1560302581785-5:58:-1:1:61592
I tried regex'ing out the whole string after ID:, and then I tried regexing out just the number at the very end of the string (which looks like it might be a unique number), but when I call conn.ack(matchObj.group(1), 4), the queue count doesn't change and I get no feedback as to why.
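The extraction itself can be reproduced offline (a sketch, with a placeholder hostname standing in for the redacted workstation id), though, as the answer below points out, the extracted fragment is not what conn.ack actually needs:

```python
import re

# Sample message-id of the shape printed by the listener; the hostname
# here is a placeholder.
message_id = "ID:myhost.local-49557-1560302581785-5:58:-1:1:61592"

# The regex from the question: capture everything after the last "1:1:".
match = re.match(r"ID:.*1:1:(.*)", message_id)
print(match.group(1))  # -> 61592
```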
The connection works absolutely fine so far-- I just can't send those acks.
import stomp
import time
import re

class SampleListener(object):
    def on_message(self, headers, msg):
        regex = r"ID:.*1:1:(.*)"
        print(msg)
        print(headers['message-id'])
        matchObj = re.match(regex, headers['message-id'], re.M | re.I)
        print(matchObj.group(1))
        conn.ack(matchObj.group(1), 4)

conn = stomp.Connection10()
conn.set_listener('SampleListener', SampleListener())
conn.start()
conn.connect()
conn.subscribe('zendeskqueue', id=4, ack='client')
time.sleep(1)  # secs
conn.disconnect()
The code above raises no error; it just exits without output.
Acknowledging messages with stomp.py and ActiveMQ works using the ack id, headers['ack'].
You can also get more verbose output by implementing on_error on your listener and enabling debug logging.
With both, your code will look like this:
import stomp
import time
import logging

class SampleListener(object):
    def on_message(self, headers, msg):
        conn.ack(headers['ack'])

    def on_error(self, headers, body):
        print("An error occurred: headers=" + str(headers) + " ; body=" + str(body))

logging.basicConfig(level=logging.INFO)
logging.getLogger('stomp').setLevel(logging.DEBUG)

conn = stomp.Connection12()
conn.set_listener('SampleListener', SampleListener())
conn.start()
conn.connect()
conn.subscribe('zendeskqueue', id=4, ack='client')
time.sleep(1)  # secs
conn.disconnect()

Retain Messages until a Subscription is Made using Python + Stomp

I am currently writing two scripts to subscribe to a message server using the stomp client library, write.py to write data and read.py to get data.
If I start read.py first and then run write.py, read.py receives the messages correctly.
However, if I run write.py first and then run read.py, read.py does not retrieve any messages previously sent to the server.
Below are relevant parts of the scripts.
How can I achieve that messages put into the queue by write.py are being retained until read.py subscribes and retrieves them?
write.py
def writeMQ(msg):
    queue = '/topic/test'
    conn = stomp.Connection(host_and_ports=[(MQ_SERVER, MQ_PORT)])
    try:
        conn.start()
        conn.connect(MQ_USER, MQ_PASSWD, wait=True)
        conn.send(body=msg, destination=queue, persistent=True)
    except:
        traceback.print_exc()
    finally:
        conn.disconnect()
    return
read.py
class MyListener(stomp.ConnectionListener):
    def on_error(self, headers, message):
        print('received an error {0}'.format(message))

    def on_message(self, headers, message):
        print('received a message {0}'.format(message))

def readMQ():
    queue = '/topic/test'
    conn = stomp.Connection(host_and_ports=[(MQ_SERVER, MQ_PORT)])
    try:
        conn.set_listener("", MyListener())
        conn.start()
        conn.connect(MQ_USER, MQ_PASSWD, wait=True)
        conn.subscribe(destination=queue, ack="auto", id=1)
        stop = raw_input()
    except:
        traceback.print_exc()
    finally:
        conn.disconnect()
    return
The problem is that the messages are being sent to a topic.
The Apollo Documentation describes the difference between topics and queues as follows:
Queues hold on to unconsumed messages even when there are no subscriptions attached, while a topic will drop messages when there are no connected subscriptions.
Thus, when read.py is started first and listening, the topic recognizes the subscription and forwards the message. But when write.py is started first, the message is dropped because there is no subscribed client.
So you can use a queue instead of a topic. If the server is able to create a queue silently, simply set
queue = '/queue/test'
I don't know which version of stomp.py is being used, but I cannot find the parameter
send(..., persistent=True)
In any case, persisting is not the right way to go, since it still does not allow messages to be retained for a later connection; it only saves them in case of a server failure. You can use the
retain:set
header for topic messages instead.

Paho Python client is encountering a socket read error and subsequent disconnect from broker

Client details: Paho MQTT Python client obtained from http://git.eclipse.org/gitroot/paho/org.eclipse.paho.mqtt.python.git, SHA id 300fcbdffd62d08f627f94f3074463cfa532ca87
Broker details: RabbitMQ 3.3.4 with the MQTT plugin
In the first scenario everything is being run locally with SSL enabled. I am using the client in a fashion where I have a separate process which publishes messages (> 10000 messages) and does not wait for an acknowledgment between publishes. The error is a result of self._ssl.read(1) returning a zero-length value in the following code snippet from _packet_read in client.py:
...
if self._in_packet['command'] == 0:
    try:
        if self._ssl:
            command = self._ssl.read(1)
        else:
            command = self._sock.recv(1)
    except socket.error as err:
        ...
    else:
        if len(command) == 0:
            return 1
        command = struct.unpack("!B", command)
        self._in_packet['command'] = command[0]
...
and occurs after receiving and parsing 25 acknowledgments from RabbitMQ. After this error I no longer receive anything back from the broker.
If I run with SSL disabled I do not encounter any errors and can successfully receive acknowledgements for all messages sent.
If I run the broker remotely (some location on the internet) I get the same results over SSL. However, when not using SSL I get the read error/disconnect at different intervals but the client is able to recover/reconnect and I receive broker acknowledgements for all messages sent.
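The fixed-header parse in the snippet above can be illustrated offline: a zero-length read is how a closed socket shows up (the code returns 1 in that case), while a non-empty read yields the MQTT packet type in the high nibble of the first byte. The byte value below is an illustrative CONNACK, not data from the failing session:

```python
import struct

data = b"\x20"  # first byte of a CONNACK packet: type 2 in the high nibble

# A zero-length read (len(data) == 0) is what the client sees when the
# peer has closed the connection.
assert len(data) != 0

# Same unpack as in the snippet: one unsigned byte, network byte order.
(command,) = struct.unpack("!B", data)
print(command >> 4)  # -> 2, i.e. CONNACK
```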
Here is the client configuration that I'm using:
...
client = Client('foo')
client.max_inflight_messages_set(65536)
client.on_connect = self.on_connect_callback
client.on_disconnect = self.on_disconnect_callback
client.on_publish = self.on_publish_callback
client.on_message = self.on_message_callback
client.username_pw_set('foo', 'bar')
client.tls_set("ca-cert.pem",
               "client-cert.pem",
               "client-key.pem")
client.will_set(topic="last_will/foo",
                payload=Message().body, qos=1)
client.connect('127.0.0.1', 8883, 30)
client.loop_start()
...
Any idea on what could be causing this and/or suggestions for troubleshooting?
UPDATE 20140828: I was stepping through the loop_read and noticed that I get an empty socket return value after successfully receiving the first full packet (the connection acknowledgment). The call to select that precedes the socket.recv call indicates that there is data ready to be read on the socket. Could this be a socket buffer issue? I'm not sure what the behavior of a Python socket receive buffer (btw I'm running this on OSX) is if it overflows.

How to ensure that messages get delivered?

How do you ensure that messages get delivered with Pika? By default it will not give you an error if the message was not delivered successfully.
In this example several messages can be sent before pika acknowledges that the connection was down.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='hello')

for index in xrange(10):
    channel.basic_publish(exchange='', routing_key='hello',
                          body='Hello World #%s!' % index)
    print('Total Messages Sent: %s' % index)
connection.close()
When using Pika the channel.confirm_delivery() flag needs to be set before you start publishing messages. This is important so that Pika will confirm that each message has been sent successfully before sending the next message. This will however increase the time it takes to send messages to RabbitMQ, as delivery needs to be confirmed before the program can proceed with the next message.
channel.confirm_delivery()

try:
    for index in xrange(10):
        channel.basic_publish(exchange='', routing_key='hello',
                              body='Hello World #%s!' % index)
        print('Total Messages Sent: %s' % index)
except pika.exceptions.ConnectionClosed as exc:
    print('Error. Connection closed, and the message was never delivered.')
basic_publish returns a Boolean indicating whether the message was sent or not. It is still important to catch potential exceptions in case the connection is closed during transfer, and to handle them appropriately, as such an exception would otherwise interrupt the flow of the program.
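On the older pika API described above, where basic_publish returns a bool once confirm_delivery() is enabled, a bounded retry on a negative confirm can be sketched like this. The publish callable is stubbed out so the sketch runs without a broker; in real code it would wrap channel.basic_publish(...):

```python
def publish_with_retry(publish, attempts=3):
    # Retry a publish whose delivery confirmation came back negative.
    # 'publish' stands in for a zero-argument wrapper around
    # channel.basic_publish(...); hard failures there still raise.
    for _ in range(attempts):
        if publish():
            return True
    return False

# Stub broker: the first two confirms fail, the third succeeds.
results = iter([False, False, True])
print(publish_with_retry(lambda: next(results)))  # -> True
```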
After trying myself and failing to receive anything other than an ack,
I decided to implement a direct reply to the sender.
I followed the example given here
