kafka producer parameters require one message sent to take effect - python

I'm using confluent-kafka-python (https://github.com/confluentinc/confluent-kafka-python) to send some messages to Kafka from Python. I send messages infrequently, so I want the latency to be as low as possible.
If I do this, I can get messages to appear to my consumer with about a 2ms delay:
conf = { "bootstrap.servers" : "kafka-test-10-01",
"queue.buffering.max.ms" : 0,
'batch.num.messages': 1,
'queue.buffering.max.messages': 100,
"default.topic.config" : {"acks" : 0 }}
p = confluent_kafka.Producer(**conf)
p.produce(...)
BUT: the latency only drops to near zero after I've sent a first message with this new producer. Subsequent messages have latency near the 2ms mark.
The first message, though, has around a 1-second latency. Why?

Magnus Edenhill, the author of librdkafka, documented some useful parameters to set to decrease latency in any librdkafka client:
https://github.com/edenhill/librdkafka/wiki/How-to-decrease-message-latency
You don't show your consumer parameters, but from your description it sounds like the consumer polls, rightly gets nothing (no messages) before the first message is published, and then waits the default 500 ms fetch.error.backoff.ms interval before polling again and receiving that first message. After that, messages are probably arriving fast enough that the error backoff is not triggered. Try setting fetch.error.backoff.ms lower and see if that helps.
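For example, a consumer configuration along these lines should reduce that initial wait. This is only a sketch: the broker address, group id and topic name are placeholders, and the exact values are things to tune rather than recommendations.

import confluent_kafka

consumer_conf = {
    "bootstrap.servers": "kafka-test-10-01",   # placeholder broker
    "group.id": "low-latency-test",            # placeholder group id
    "fetch.error.backoff.ms": 10,              # retry sooner after an empty/error fetch
    "fetch.wait.max.ms": 10,                   # don't let the broker hold fetch requests long
}

c = confluent_kafka.Consumer(consumer_conf)
c.subscribe(["my-topic"])                      # placeholder topic
while True:
    msg = c.poll(1.0)
    if msg is None or msg.error():
        continue
    print(msg.value())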

Related

How to consume a RabbitMQ stream starting from the last message in the stream?

I'd like to implement something with behaviour similar to MQTT's "Retained Message", i.e. I want to attach a consumer and immediately start reading from the most recent message sent. It looks like RabbitMQ Streams should give me what I'm looking for.
I'm a little stuck because it's possible to set the offset to last (see here), which begins reading from the last block of messages. But what I am looking for is the last message.
That is: I can't see how to determine which is the last message currently in the block when I subscribe.
Is there a way to set the offset to the last message in the stream?
At the moment, it is not possible to determine the last message in a specific chunk.
This is because the clients don't expose all the chunk information.
When you select last you get the last chunk. The chunk itself contains the number of messages, but this information is not currently exposed.
You are using Pika, so the AMQP way; to have more control over the stream, you could use the native stream clients, which give you more control.
See here for more details.
You can also track the message offset to restart consuming from a specific offset. See, for example, the Java client.
We could add that info.
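If you do stay with Pika over AMQP, a sketch like the following shows the closest you can currently get: subscribing with the x-stream-offset consumer argument set to "last", which starts at the last chunk rather than the last message. Connection details and the stream name are placeholders, and stream queues require manual acks plus a prefetch limit when consumed over AMQP.

import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.basic_qos(prefetch_count=100)   # required for stream consumption over AMQP

def on_message(channel, method, properties, body):
    print(body)
    channel.basic_ack(method.delivery_tag)

# "last" positions the consumer at the final chunk of the stream, not the
# final message, so the callback may see a few older messages first.
ch.basic_consume(
    queue="my-stream",
    on_message_callback=on_message,
    arguments={"x-stream-offset": "last"},
)
ch.start_consuming()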

Latency under low traffic in Google PubSub

I have noticed that under high load Pub/Sub gives great throughput with pretty low latency. But if I want to send a single message, the latency can often be several seconds. I have used the publish_time on the incoming message to see how long the message spent in the queue, and it is usually pretty low. I can't tell whether, under very low traffic conditions, a published message doesn't actually get sent by the client libraries right away, or whether the libraries don't deliver it to the application immediately. I am using asynchronous pull in Python.
There can be several factors that impact latency of low-throughput Pub/Sub streams. First of all, the publish-side client library does wait a period of time to try to batch messages by default. You can get a little bit of improvement by setting the max_messages property of the pubsub_v1.types.BatchSettings to 1, which will ensure that every message is sent as soon as it is ready.
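As a rough sketch (project and topic names are placeholders), the batching behaviour can be disabled like this:

from google.cloud import pubsub_v1

# max_messages=1 makes the client send each message immediately instead of
# waiting to fill a batch.
batch_settings = pubsub_v1.types.BatchSettings(max_messages=1)
publisher = pubsub_v1.PublisherClient(batch_settings=batch_settings)
topic_path = publisher.topic_path("my-project", "my-topic")

future = publisher.publish(topic_path, b"payload")
future.result()  # block until the publish completes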
In general, there is also the issue of cold caches in the Pub/Sub service. If your publish rate is infrequent, say, O(1) publish call every 10-15 minutes, then the service may have to load state on each publish that can delay the delivery. If low latency for these messages is very important, our current recommendation is to send a heartbeat message every few seconds to keep all of the state active. You can add an attribute to the messages to indicate it is a heartbeat message and have your subscriber ignore them.
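A minimal sketch of that heartbeat pattern might look like the following; the attribute name, interval and topic are all assumptions for illustration, not recommended values.

import time
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "my-topic")  # placeholder names

def send_heartbeats(interval_seconds=5):
    # Publish a tagged no-op message periodically to keep service-side state warm.
    while True:
        publisher.publish(topic_path, b"", heartbeat="true")
        time.sleep(interval_seconds)

def callback(message):
    # The subscriber drops heartbeats before doing any real work.
    if message.attributes.get("heartbeat") == "true":
        message.ack()
        return
    # ... handle real messages ...
    message.ack()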

Allowing message dropping in websockets

Is there a simple method or library to allow a websocket to drop certain messages if bandwidth doesn't allow? Or any one of the following?
- to measure the queue size of outgoing messages that haven't yet reached a particular client
- to measure the approximate bitrate at which a client has been receiving recent messages
- to measure the time at which a particular write_message finished being transmitted to the client
I'm using Tornado on the server side (tornado.websocket.WebSocketHandler) and vanilla JS on the client side. In my use case it's really only important that the server realize a client is slow and, when it does, throttle its messages (or use lossier compression).
You can implement this on top of what you have by having the client confirm every message it gets and then use that information on the server to adapt the sending of messages to each client.
This is the only way you will know which outgoing messages haven't yet reached the client, be able to approximate the bitrate, or figure out how long a message took to reach the client. Keep in mind that the confirmation back to the server also takes time, and that if you use timestamps generated on the client, they will likely not match your server's, since clients have their clocks set incorrectly more often than not.
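A minimal Tornado-side sketch of that idea is shown below. The MAX_PENDING threshold and the id-based framing are assumptions for illustration, not part of Tornado's API; on the client, the vanilla JS would simply send back the id of every message it receives.

import tornado.websocket

MAX_PENDING = 10  # assumed backlog limit before we start dropping

class ThrottledHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        self.pending = 0    # messages sent but not yet confirmed by the client
        self.next_id = 0

    def send_update(self, payload):
        if self.pending >= MAX_PENDING:
            return          # client is falling behind: drop (or downsample) this update
        self.next_id += 1
        self.pending += 1
        self.write_message({"id": self.next_id, "data": payload})

    def on_message(self, message):
        # The client echoes back the id of each message it has received.
        self.pending = max(0, self.pending - 1)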

Does stomp.ConnectionListener in Python hold any data while dequeuing messages from ActiveMQ?

I want to pull messages from ActiveMQ into Python and process them in batches: if I have 1000 messages in the queue, I want to dequeue 100 messages, process them, take the next 100 and process those, and so on until all messages are dequeued.
Here is my Python code for the batch listener:
class BatchEventListner(stomp.ConnectionListener):
    def on_message(self, headers, message):
        print('received a message "%s"' % message)

batchLsnr = BatchEventListner()
self.conn = stomp.Connection(host_and_ports=hosts)
self.conn.set_listener('', batchLsnr)
self.batchLsnr = batchLsnr
self.conn.start()
self.conn.connect('username', 'password', wait=True)
self.conn.subscribe(destination='/queue/' + self.queue_name, id=1, ack='auto')
I wrote a simulator to push messages to ActiveMQ. Once I push 1000 messages and the consumer starts, the Python code starts pulling data from ActiveMQ, but it pulls more than 100 messages at once: processing happens only for 100, yet more than 100 messages are dequeued.
That is, by the time the last batch of 100 messages is being processed, we no longer see any messages in ActiveMQ, but those messages have already arrived in the Python process.
1. Does stomp hold any messages while dequeuing from ActiveMQ?
2. Does stomp hold any data while the batch is processing?
You may be seeing the result of prefetch. Try setting the activemq.prefetchSize header in your SUBSCRIBE frame to 1.
Also, try setting your acknowledgement mode to client or client-individual. Using auto will basically trigger the broker to dispatch messages to the client as fast as it can.
Keep in mind that prefetching messages is a performance optimization so lowering it will potentially result in a performance drop. Of course, performance must be weighed against other factors of functionality. I recommend you test and tune until you meet all of your goals or find an acceptable compromise between them.
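A sketch of those two changes with stomp.py, matching the older API used in the question, could look like this (the exact arguments to ack() vary between stomp.py and STOMP protocol versions, so treat this as an illustration):

class BatchEventListner(stomp.ConnectionListener):
    def __init__(self, conn):
        self.conn = conn

    def on_message(self, headers, message):
        # ... process the message ...
        # Acknowledge only after processing so the broker doesn't run ahead.
        self.conn.ack(headers['message-id'], 1)   # 1 = the subscription id used below

conn.set_listener('', BatchEventListner(conn))
conn.subscribe(
    destination='/queue/' + queue_name,
    id=1,
    ack='client-individual',                 # acknowledge each message explicitly
    headers={'activemq.prefetchSize': 1},    # broker dispatches one message at a time
)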

Google PubSub message duplication

I am using the Python client (that comes as part of google-cloud 0.30.0) to process messages.
Sometimes (about 10% of the time) my messages are duplicated: I will get the same message again and again, up to 50 instances within a few hours.
My subscription is set up with a 600-second ack deadline, but a message may be resent a minute after its predecessor.
While running, I occasionally get 503 errors (which I log with my policy_class).
Has anybody experienced this behavior? Any ideas?
My code looks like this:
c = pubsub_v1.SubscriberClient(policy_class)
subscription = c.subscribe(c.subscription_path(my_proj, my_topic))
res = subscription.open(callback=callback_func)
res.result()

def callback_func(msg):
    try:
        log.info('got %s', msg.data)
        ...
    finally:
        msg.ack()
The client library you are using uses a new Pub/Sub API for subscribing called StreamingPull. One effect of this is that the subscription deadline you have set is no longer used, and instead one calculated by the client library is. The client library also automatically extends the deadlines of messages for you.
When you get these duplicate messages - have you already ack'd the message when it is redelivered, or is this while you are still processing it? If you have already ack'd, are there some messages you have avoided acking? Some messages may be duplicated if they were ack'd but messages in the same batch needed to be sent again.
Also keep in mind that some duplicates are expected currently if you take over a half hour to process a message.
This seems to be an issue with the google-cloud-pubsub Python client. I upgraded to version 0.29.4 and ack() works as expected.
In general, duplicates can happen given that Google Cloud Pub/Sub offers at-least-once delivery. Typically, this rate should be very low. A rate of 10% would be very high. In this particular instance, it was likely an issue in the client libraries that resulted in excessive duplicates, which was fixed in April 2018.
For the general case of excessive duplicates there are a few things to check to determine if the problem is on the user side or not. There are two places where duplication can happen: on the publish side (where there are two distinct messages that are each delivered once) or on the subscribe side (where there is a single message delivered multiple times). The way to distinguish the cases is to look at the messageID provided with the message. If the same ID is repeated, then the duplication is on the subscribe side. If the IDs are unique, then duplication is happening on the publish side. In the latter case, one should look at the publisher to see if it is getting errors that are resulting in publish retries.
If the issue is on the subscriber side, then one should check to ensure that messages are being acknowledged before the ack deadline. Messages that are not acknowledged within this time will be redelivered. If this is the issue, then the solution is to either acknowledge messages faster (perhaps by scaling up with more subscribers for the subscription) or by increasing the acknowledgement deadline. For the Python client library, one sets the acknowledgement deadline by setting the max_lease_duration in the FlowControl object passed into the subscribe method.
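For illustration, a subscriber using FlowControl might look like this (project and subscription names are placeholders):

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "my-subscription")

# max_lease_duration bounds how long the client keeps extending a message's
# ack deadline while the callback is still working on it.
flow_control = pubsub_v1.types.FlowControl(
    max_messages=100,
    max_lease_duration=600,  # seconds
)

def callback(message):
    # ... process ...
    message.ack()

future = subscriber.subscribe(
    subscription_path, callback=callback, flow_control=flow_control
)
future.result()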
