Test of sending & receiving a message for Azure Service Bus Queue - Python

I would like to write an integration test checking connection of the Python script with Azure Service Bus queue. The test should:
send a message to a queue,
confirm that the message landed in the queue.
The test looks like this:
import pytest
from azure.servicebus import ServiceBusClient, ServiceBusMessage, ServiceBusSender

CONNECTION_STRING = <some connection string>
QUEUE = <queue name>


def send_message_to_service_bus(sender: ServiceBusSender, msg: str) -> None:
    message = ServiceBusMessage(msg)
    sender.send_message(message)


class TestConnectionWithQueue:
    def test_message_is_sent_to_queue_and_received(self):
        msg = "test message sent to queue"
        expected_message = ServiceBusMessage(msg)
        servicebus_client = ServiceBusClient.from_connection_string(conn_str=CONNECTION_STRING, logging_enable=True)

        with servicebus_client:
            sender = servicebus_client.get_queue_sender(queue_name=QUEUE)
            with sender:
                send_message_to_service_bus(sender, expected_message)

            receiver = servicebus_client.get_queue_receiver(queue_name=QUEUE)
            with receiver:
                messages_in_queue = receiver.receive_messages(max_message_count=10, max_wait_time=20)
                assert any(expected_message == str(actual_message) for actual_message in messages_in_queue)
The test occasionally works; more often than not it doesn't. There are no other messages sent to the queue at the same time. As I debugged the code, when the test fails, the variable messages_in_queue is just an empty list.
Why doesn't the code work at all times and what should be done to fix it?

Are you sure you don't have another process that receives your messages? Maybe you are sharing your queue connection strings with other colleagues, build machines...
To troubleshoot, keep an eye on the queue monitoring in the Azure Portal. Debug your test and check whether the incoming message count increments by 1; then continue debugging and check whether it decrements by 1.
Also, are you sure that this unit test is useful? It looks like you are testing your infrastructure instead of testing your code.
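If you would rather do that check from code than in the portal, here is a minimal sketch, assuming the azure-servicebus v7 SDK (which ships an administration client); the connection string and queue name are the same placeholders as in the test:

from azure.servicebus.management import ServiceBusAdministrationClient

CONNECTION_STRING = <some connection string>
QUEUE = <queue name>

def active_message_count(conn_str, queue_name):
    # Ask the namespace for the queue's runtime properties and return the number
    # of active (not dead-lettered or scheduled) messages.
    admin_client = ServiceBusAdministrationClient.from_connection_string(conn_str)
    props = admin_client.get_queue_runtime_properties(queue_name)
    return props.active_message_count

# e.g. record the count before sending and assert that it went up by one afterwards.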

Related

Trouble subscribing to ActiveMQ Artemis with Stomp. Queue already exists

What am I doing wrong here? I'm trying to use STOMP to test some things with Artemis 2.13.0, but when I use either the command line utility or a Python script, I can't subscribe to a queue, even after I use the utility to publish a message to an address.
Also, if I give it a new queue name, it creates it, but then doesn't pull messages I publish to it. This is confusing. My actual Java app behaves nothing like this -- it's using JMS.
I'm connecting like this with the utility:
stomp -H 192.168.56.105 -P 61616 -U user -W password
> subscribe test3.topic::test.A.queue
Which gives me this error:
Subscribing to 'test3.topic::test.A.queue' with acknowledge set to 'auto', id set to '1'
>
AMQ229019: Queue test.A.queue already exists on address test3.topic
Which makes me think Stomp is trying to create the queue when it subscribes, but I don't see how to manage this in the documentation. http://jasonrbriggs.github.io/stomp.py/api.html
I also have a Python script giving me the same issue.
import os
import time
import stomp


def connect_and_subscribe(conn):
    conn.connect('user', 'password', wait=True)
    conn.subscribe(destination='test3.topic::test.A.queue', id=1, ack='auto')


class MyListener(stomp.ConnectionListener):
    def __init__(self, conn):
        self.conn = conn

    def on_error(self, headers, message):
        print('received an error "%s"' % message)

    def on_message(self, headers, message):
        print('received a message "%s"' % message)
        """for x in range(10):
            print(x)
            time.sleep(1)
        print('processed message')"""

    def on_disconnected(self):
        print('disconnected')
        connect_and_subscribe(self.conn)


conn = stomp.Connection([('192.168.56.105', 61616)], heartbeats=(4000, 4000))
conn.set_listener('', MyListener(conn))
connect_and_subscribe(conn)
time.sleep(1000)
conn.disconnect()
I recommend you try the latest release of ActiveMQ Artemis. Since 2.13.0 was released a year ago, a handful of STOMP-related issues have been fixed, specifically ARTEMIS-2817, which looks like your use-case.
It's not clear to me why you're using the fully-qualified queue name (FQQN), so I'm inclined to think this is not the right approach, but regardless, the issue you're hitting should be fixed in later versions. If you want multiple consumers to share the messages on a single subscription, then using FQQN would be a good option there.
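If FQQN isn't needed, subscribing to the address alone, using the same stomp.py call as in the question, would be a sketch like this (untested against your broker):

def connect_and_subscribe(conn):
    conn.connect('user', 'password', wait=True)
    # Subscribe to the address only and let the broker manage the subscription queue.
    conn.subscribe(destination='test3.topic', id=1, ack='auto')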
Also, if you want to use the topic/ or queue/ prefix to control routing semantics from the broker, then you should set the anycastPrefix and multicastPrefix appropriately, as described in the documentation.
This may be coincidence but ARTEMIS-2817 was originally reported by "BENJAMIN Lee WARRICK" which is surprisingly similar to "BenW" (i.e. your name).

How to replace celery task with azure service bus in a django application?

I have been asked to use Azure Service Bus instead of Celery in a Django application.
I read the documentation provided but didn't get a clear picture of using Service Bus in place of a Celery task. Any advice would be of great help.
Before getting into it, I would like to highlight the differences between Azure Service Bus and Celery.
Azure Service Bus:
Microsoft Azure Service Bus is a fully managed enterprise integration message broker.
You could refer to this to know more about Service Bus.
Celery:
Distributed task queue. Celery is an asynchronous task queue/job queue based on distributed message passing.
I could think of 2 possibilities in your case:
1. You would like to use Service Bus with Celery in place of other message brokers.
2. Replace Celery with Service Bus.
1: You would like to use Service Bus with Celery in place of other message brokers.
You could refer to this to understand why Celery needs a message broker.
I am not sure which message broker you are using currently, but you could use the Kombu library to meet your requirement.
Reference for Azure Service Bus: https://docs.celeryproject.org/projects/kombu/en/stable/reference/kombu.transport.azureservicebus.html
Reference for others: https://docs.celeryproject.org/projects/kombu/en/stable/reference/index.html
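As a rough sketch of option 1 (all values below are placeholders): once the dependencies are installed, pointing Celery at Service Bus is mostly a matter of the broker URL, whose format is described in the Kombu reference above:

from celery import Celery

# Placeholders: SAS policy name, SAS key and Service Bus namespace.
app = Celery(
    'proj',
    broker='azureservicebus://<policy name>:<policy key>@<namespace>',
)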
2: Replace Celery with Service Bus completely
To meet your requirement, consider:
Message senders are producers.
Message receivers are consumers.
These are two different applications that you will have to work on.
You could refer to the link below to get more sample code to build on.
https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/servicebus/azure-servicebus/samples
Explanation:
Every time you would like to execute the actions, you could send messages to a topic from the producer client.
The consumer client - the application that is listening - will receive the message and process it. You could attach your custom processing to it; that way your custom processing gets executed whenever a message is received at the consumer client end.
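For the producer side (the sample below only covers the consumer), a minimal sketch, assuming the newer azure-servicebus v7 API rather than the older one used in the receiver sample, with a placeholder connection string and topic name:

from azure.servicebus import ServiceBusClient, ServiceBusMessage

conn_str = "<>"
topic_name = "<your topic>"

def send_action(payload):
    # Publish one message to the topic; the listening consumer client will pick it up.
    with ServiceBusClient.from_connection_string(conn_str) as client:
        with client.get_topic_sender(topic_name=topic_name) as sender:
            sender.send_messages(ServiceBusMessage(payload))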
Below is a sample of the receiving client:
from azure.servicebus.aio import SubscriptionClient
import asyncio
import nest_asyncio

nest_asyncio.apply()

Receiving = True

# Topic 1 receiver:
conn_str = "<>"
name = "Allmessages1"
SubsClient = SubscriptionClient.from_connection_string(conn_str, name)
receiver = SubsClient.get_receiver()


async def receive_message_from1():
    await receiver.open()
    print("Opening the Receiver for Topic1")
    async with receiver:
        while Receiving:
            msgs = await receiver.fetch_next()
            for m in msgs:
                print("Received the message from topic 1.....")
                ##### - Your code to execute when a message is received - ########
                print(str(m))
                ##### - Your code to execute when a message is received - ########
                await m.complete()


loop = asyncio.get_event_loop()
topic1receiver = loop.create_task(receive_message_from1())
The section between the lines below is where you put the instructions that will be executed every time a message is received:
##### - Your code to execute when a message is received - ########

Always Open Publish Channel RabbitMQ

I am trying to integrate snmptrapd and RabbitMQ to deliver trap notifications to an external system.
My system is composed of 3 components:
A Linux virtual machine with snmptrapd and RabbitMQ (Publisher);
A Linux virtual machine with RabbitMQ (Consumer);
A Linux bare-metal machine with Docker so I can have a lot of containers sending traps (using nping)
The snmptrapd part is simple:
authCommunity execute mycom
traphandle default /root/some_script
In my first attempts some_script was written in Python, but the performance was not perfect (with 20 containers sending 1 trap per second for 10 seconds, I only received 160 messages at the consumer).
#!/usr/bin/env python
import pika
import sys

message = ""
for line in sys.stdin:
    message += line

credentials = pika.PlainCredentials('test', 'test')
parameters = pika.ConnectionParameters('my_ip', 5672, '/', credentials)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
channel.queue_declare(queue='snmp')
channel.basic_publish(exchange='',
                      routing_key='snmp',
                      body=message)
connection.close()
I switched to Perl and now I can get 200 traps/messages.
My Perl script uses Net::AMQP::RabbitMQ
#!/usr/bin/perl
use Net::AMQP::RabbitMQ;

foreach my $line ( <STDIN> ) {
    chomp( $line );
    $message = "$message\n$line";
}

my $mq = Net::AMQP::RabbitMQ->new();
$mq->connect("my_ip", {
    user     => "test",
    password => "test",
    vhost    => "/"
});
$mq->channel_open(1);
$mq->publish(1, "snmp", $message);
$mq->disconnect();
But I want better. I tried 200 containers sending 1 trap per second and it failed miserably, receiving only around 10% of the messages in the consumer.
I think this has to do with the overhead of always having to open, publish and close the channel in RabbitMQ per trap received, because at the network level I receive all the messages (checked through a tcpdump).
Is there a way to keep an always-open publish channel so I don't have to reopen/create a connection to the queue?
Asking if you can talk to a RabbitMQ server without connecting to it first is like asking if you can talk to someone on the telephone without connecting to their phone first (by dialing and answering).
You really should reuse your connection if you're going to send multiple messages, but you do need a connection first!
Anyway, the problem isn't with the publisher. It's the consumer that's buggy if it's losing messages.
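On the "reuse your connection" point, here is a rough sketch of a long-lived publisher (this assumes you can run one persistent process instead of a new script per trap, which is not how traphandle works out of the box; get_traps() below is a hypothetical source of trap text, e.g. fed by the traphandle script through a FIFO):

#!/usr/bin/env python
import pika
import sys

def get_traps():
    # Hypothetical trap source: here just lines from stdin; in practice this
    # could be a FIFO or socket that the traphandle script writes into.
    for line in sys.stdin:
        yield line

credentials = pika.PlainCredentials('test', 'test')
parameters = pika.ConnectionParameters('my_ip', 5672, '/', credentials)
connection = pika.BlockingConnection(parameters)  # opened once
channel = connection.channel()
channel.queue_declare(queue='snmp')

try:
    for trap in get_traps():
        # One publish per trap, but the connection and channel stay open between traps.
        channel.basic_publish(exchange='', routing_key='snmp', body=trap)
finally:
    connection.close()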

Setting up Rabbit MQ Heartbeat with Kombu

Edit:
The main issue is that the 3rd party RabbitMQ machine seems to kill idle connections every now and then. That's when I start getting "Broken Pipe" exceptions. The only way to get comms back to normal is for me to kill the processes and restart them. I assume there's a better way?
--
I'm a little lost here. I am connecting to a 3rd party RabbitMQ server to push messages to. Every now and then all the sockets on their machine get dropped and I end up getting a "Broken Pipe" exception.
I've been told to implement a heartbeat check in my code but I'm not sure how exactly. I've found some info here: http://kombu.readthedocs.org/en/latest/changelog.html#version-2-3-0 but no real example code.
Do I only need to add "?heartbeat=x" to the connection string? Does Kombu do the rest? I see I need to call "Connection.heartbeat_check()" at "x/2". Should I create a periodic task to call this? How does the connection get re-established?
I'm using:
celery==3.0.12
kombu==2.5.4
My code looks like this right now. A simple Celery task gets called to send the message through to the 3rd party RabbitMQ server (removed logging and comments to keep it short, basic enough):
class SendMessageTask(Task):
    name = "campaign.backends.send"
    routing_key = "campaign.backends.send"
    ignore_result = True
    default_retry_delay = 60  # 1 minute.
    max_retries = 5

    def run(self, send_to, message, **kwargs):
        payload = "Testing message"
        try:
            conn = BrokerConnection(
                hostname=HOSTNAME,
                port=PORT,
                userid=USER_ID,
                password=PASSWORD,
                virtual_host=VHOST
            )
            with producers[conn].acquire(block=True) as producer:
                publish = conn.ensure(producer, producer.publish, errback=sending_errback, max_retries=3)
                publish(
                    body=payload,
                    routing_key=OUT_ROUTING_KEY,
                    delivery_mode=2,
                    exchange=EXCHANGE,
                    serializer=None,
                    content_type='text/xml',
                    content_encoding='utf-8'
                )
        except Exception, ex:
            print ex
Thanks for any and all help.
While you certainly can add heartbeat support to a producer, it makes more sense for consumer processes.
Enabling heartbeats means that you have to send heartbeats regularly; e.g. if the heartbeat is set to 1 second, then you have to send a heartbeat at least that often or the remote will close the connection.
This means that you have to use a separate thread or use async io to reliably send heartbeats in time, and since a connection cannot be shared between threads this leaves us with async io.
The good news is that you probably won't get much benefit adding heartbeats to a produce-only connection.
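That said, if you do end up enabling them on a long-lived connection, a minimal sketch with Kombu might look like the following (the URL and credentials are placeholders, and as noted above the periodic heartbeat_check() call would need to live in its own loop, thread or event loop in a real service):

import time
from kombu import Connection

# heartbeat=10 asks for a 10-second heartbeat interval (placeholder URL and credentials).
conn = Connection('amqp://user:password@rabbitmq-host:5672//', heartbeat=10)
conn.connect()

while True:
    # Must be called roughly every heartbeat/2 seconds, or the broker will drop the connection.
    conn.heartbeat_check()
    time.sleep(5)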

celery get tasks count

I am using Python Celery + RabbitMQ. I can't find a way to get the task count in a given queue.
Something like this:
celery.queue('myqueue').count()
Is it possible to get the task count from a certain queue?
One solution is to run an external command from my Python script:
"rabbitmqctl list_queues -p my_vhost"
and parse the results. Is that a good way to do this?
I suppose that using the rabbitmqctl command is not a good solution, especially on my Ubuntu server, where rabbitmqctl can be executed only with root privileges.
By playing with pika objects I found a working solution:
import pika
from django.conf import settings


def tasks_count(queue_name):
    ''' Connects to the message queue using django settings and returns the count of messages in the queue named queue_name. '''
    credentials = pika.PlainCredentials(settings.BROKER_USER, settings.BROKER_PASSWORD)
    parameters = pika.ConnectionParameters(credentials=credentials,
                                           host=settings.BROKER_HOST,
                                           port=settings.BROKER_PORT,
                                           virtual_host=settings.BROKER_VHOST)
    connection = pika.BlockingConnection(parameters=parameters)
    channel = connection.channel()
    queue = channel.queue_declare(queue=queue_name, durable=True)
    message_count = queue.method.message_count
    return message_count
I did not find documentation about inspecting an AMQP queue with pika, so I do not know about the solution's correctness.
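One variation worth considering (a suggestion on my part, not something verified against your broker): pika's queue_declare accepts passive=True, which only checks that the queue exists and returns its counters instead of (re)declaring it, so arguments like durable=True don't have to match the existing declaration:

import pika
from django.conf import settings


def tasks_count_passive(queue_name):
    ''' Like tasks_count above, but inspects the queue with a passive declare instead of redeclaring it. '''
    credentials = pika.PlainCredentials(settings.BROKER_USER, settings.BROKER_PASSWORD)
    parameters = pika.ConnectionParameters(credentials=credentials,
                                           host=settings.BROKER_HOST,
                                           port=settings.BROKER_PORT,
                                           virtual_host=settings.BROKER_VHOST)
    connection = pika.BlockingConnection(parameters=parameters)
    channel = connection.channel()
    queue = channel.queue_declare(queue=queue_name, passive=True)  # raises if the queue does not exist
    connection.close()
    return queue.method.message_count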
