I have a virtual assistant which receives messages and sends them to an event broker (e.g. RabbitMQ).
An event broker allows me to connect my running assistant to other services that process the data coming in from conversations.
Example:
If I have a RabbitMQ server running, as well as another application that consumes the events, then this consumer needs to call Pika's start_consuming() method with a callback. Here's a simple example:
import json

import pika

def _callback(ch, method, properties, body):
    # Do something useful with your incoming message body here, e.g.
    # saving it to a database
    print('Received event {}'.format(json.loads(body)))

if __name__ == '__main__':
    # RabbitMQ credentials with username and password
    credentials = pika.PlainCredentials('username', 'password')
    # Pika connection to the RabbitMQ host - typically 'rabbit' in a
    # docker environment, or 'localhost' in a local environment
    connection = pika.BlockingConnection(
        pika.ConnectionParameters('rabbit', credentials=credentials))
    # start consuming from the 'events' queue (pika 1.x signature)
    channel = connection.channel()
    channel.basic_consume(queue='events',
                          on_message_callback=_callback,
                          auto_ack=True)
    channel.start_consuming()
What is the correct way to use FastAPI with Pika to consume these live messages and save them to a database?
Do I need a WebSocket route?
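One common pattern, sketched below under a few assumptions: you don't necessarily need a WebSocket route just to consume from the broker, since WebSockets are for pushing data to connected clients, not for talking to RabbitMQ. Instead you can start the blocking Pika consumer in a background thread when FastAPI starts up. The save_to_db helper is hypothetical, and the host, credentials and queue name are taken from the example above:
import json
import threading

import pika
from fastapi import FastAPI

app = FastAPI()

def save_to_db(event):
    # Hypothetical persistence helper - replace with your real database code.
    print('Saving event:', event)

def _callback(ch, method, properties, body):
    save_to_db(json.loads(body))

def consume_events():
    credentials = pika.PlainCredentials('username', 'password')
    connection = pika.BlockingConnection(
        pika.ConnectionParameters('rabbit', credentials=credentials))
    channel = connection.channel()
    channel.basic_consume(queue='events',
                          on_message_callback=_callback,
                          auto_ack=True)
    channel.start_consuming()

@app.on_event("startup")
def start_consumer():
    # Run the blocking Pika consumer in a daemon thread so it does not
    # block FastAPI's event loop.
    threading.Thread(target=consume_events, daemon=True).start()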
I am able to receive messages from Azure Service Bus queues using their official SDK, and it works fine. Here is the snippet I am using to receive a message from the queue:
from azure.servicebus import QueueClient

def receive_one(q_string, q_name):
    # Receive a single message from the queue.
    client = QueueClient.from_connection_string(
        q_string,
        q_name)
    msg = None
    with client.get_receiver() as queue_receiver:
        messages = queue_receiver.fetch_next(max_batch_size=1, timeout=3)
        if len(messages) > 0:
            msg = messages[0]
            print(f"Received {msg.message}")
    return msg
How do I close the connection with the queue? Is there a function for that in the Azure SDK?
You can call the close method on the Service Bus client. That will close down the client and the underlying connection.
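The exact names differ between SDK versions. As a rough sketch with the newer azure-servicebus (v7+) API, where both the client and the receiver support close() and the context manager protocol (q_string and q_name as in the question):
from azure.servicebus import ServiceBusClient

# Entering the with blocks opens the connection; leaving them calls
# close() on the receiver and then on the client.
with ServiceBusClient.from_connection_string(q_string) as client:
    with client.get_queue_receiver(queue_name=q_name) as queue_receiver:
        messages = queue_receiver.receive_messages(
            max_message_count=1, max_wait_time=3)
        for msg in messages:
            print(f"Received {msg}")
            queue_receiver.complete_message(msg)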
I am using aiokafka==0.5.2 as the Python library for Kafka.
I have just the code from the example:
async def consume():
    consumer = AIOKafkaConsumer(
        'my_topic', 'my_other_topic',
        loop=loop, bootstrap_servers='localhost:9092',
        group_id="my-group")
    await consumer.start()
    try:
        # Consume messages
        async for msg in consumer:
            print("consumed:", msg.topic, msg.partition, msg.offset,
                  msg.key, msg.value, msg.timestamp)
    finally:
        # Leave the consumer group and commit final offsets
        await consumer.stop()
When I run it, it works fine. But when I stop the Kafka server, my app just hangs waiting for messages. And, I guess, when the production cluster excludes a Kafka node from balancing, my app knows nothing about it. How can I have a listener on the Kafka connection inside my app?
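aiokafka does not expose a dedicated connection-state listener (at least not in 0.5.2). One common workaround, sketched below with illustrative parameter values, is to replace the bare async for with getmany() and a timeout, so the loop regains control periodically even when no data arrives and you can react to a silent broker:
import asyncio

from aiokafka import AIOKafkaConsumer

async def consume():
    consumer = AIOKafkaConsumer(
        'my_topic', 'my_other_topic',
        bootstrap_servers='localhost:9092',
        group_id='my-group')
    await consumer.start()
    try:
        while True:
            # getmany() returns after timeout_ms even when no messages
            # arrived, unlike "async for", which blocks indefinitely.
            batches = await consumer.getmany(timeout_ms=5000)
            if not batches:
                # No data within the timeout - a reasonable place to
                # check broker health or log a warning.
                print('No messages for 5s; broker may be unreachable')
                continue
            for tp, messages in batches.items():
                for msg in messages:
                    print(msg.value)
    finally:
        await consumer.stop()

if __name__ == '__main__':
    asyncio.run(consume())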
I am trying to reliably send a message from a publisher to multiple consumers using a RabbitMQ topic exchange.
I have configured durable queues (one per consumer) and I am sending persistent messages (delivery_mode=2). I am also setting the channel in confirm_delivery mode, and have added the mandatory=True flag to publish.
Right now the service is pretty reliable, but messages get lost to one of the consumers if it stays down during a broker restart followed by a message publication.
It seems that the broker can recover queues and messages on restart, but it doesn't seem to keep the bindings between the exchange and the queues. So messages only reach one of the consumers and get lost for the one that is down.
Note: Messages do reach the queue and the consumer if the broker doesn't suffer a restart during the time a consumer is down. They accumulate properly on the queue and they are delivered to the consumer when it is up again.
Edit - adding consumer code:
import pika

class Consumer(object):

    def __init__(self, queue_name):
        self.queue_name = queue_name

    def consume(self):
        credentials = pika.PlainCredentials(
            username='myuser', password='mypassword')
        connection = pika.BlockingConnection(
            pika.ConnectionParameters(host='myhost', credentials=credentials))
        channel = connection.channel()
        channel.exchange_declare(exchange='myexchange', exchange_type='topic')
        channel.queue_declare(queue=self.queue_name, durable=True)
        channel.queue_bind(
            exchange='myexchange', queue=self.queue_name, routing_key='my.route')
        channel.basic_consume(
            consumer_callback=self.message_received, queue=self.queue_name)
        channel.start_consuming()

    def message_received(self, channel, basic_deliver, properties, body):
        print(f'Message received: {body}')
        channel.basic_ack(delivery_tag=basic_deliver.delivery_tag)
You can assume each consumer server does something similar to:
c = Consumer('myuniquequeue') # each consumer has a permanent queue name
c.consume()
Edit - adding publisher code:
def publish(message):
    credentials = pika.PlainCredentials(
        username='myuser', password='mypassword')
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host='myhost', credentials=credentials))
    channel = connection.channel()
    channel.exchange_declare(exchange='myexchange', exchange_type='topic')
    channel.confirm_delivery()
    success = channel.basic_publish(
        exchange='myexchange',
        routing_key='my.route',
        body=message,
        properties=pika.BasicProperties(
            delivery_mode=2,  # make message persistent
        ),
        mandatory=True
    )
    if success:
        print("Message sent")
    else:
        print("Could not send message")
        # Save for sending later
It is worth saying that I am handling the error case on my own, and that is not the part I would like to improve. When my messages get lost to some of the consumers, the flow goes through the success branch.
Use basic.ack(delivery_tag=basic_deliver.delivery_tag) in your consumer callback. This acknowledgement tells the broker that the consumer has received and processed the message. If a message is instead negatively acknowledged (basic.nack or basic.reject) with requeue=True, it will be requeued.
Edit #1
In order to keep receiving messages through a broker crash, the broker needs to be distributed. In RabbitMQ this concept is called Mirrored Queues. Mirrored Queues let your queues be replicated across the nodes in your cluster. If the node hosting a queue goes down, another node holding a mirror of the queue takes over as your broker.
For a complete understanding, refer to the RabbitMQ documentation on Mirrored Queues.
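Classic mirroring is enabled through a policy rather than in code, e.g. rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'. Separately, it may be worth checking the exchange declaration in the consumer code above: pika's exchange_declare defaults to durable=False, and a non-durable exchange, together with its bindings, does not survive a broker restart, which would match the symptom of lost bindings. A minimal sketch, reusing the names from the question:
# Declare the exchange durable as well; exchange_declare defaults to
# durable=False, and non-durable exchanges (with their bindings) are
# dropped when the broker restarts.
channel.exchange_declare(exchange='myexchange',
                         exchange_type='topic',
                         durable=True)
channel.queue_declare(queue='myuniquequeue', durable=True)
channel.queue_bind(exchange='myexchange',
                   queue='myuniquequeue',
                   routing_key='my.route')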
I am writing a Django application which should act as an MQTT publisher and as a subscriber.
Where should I start the Paho client and run the loop_forever() function?
Should it be in wsgi.py?
Update:
If you need Django running in multiple threads, then to publish messages from your Django app you can use the helper functions from Paho's publish module - https://eclipse.org/paho/clients/python/docs/#id17
You don't need to create an instance of the MQTT client and start a loop in this case. To subscribe to a topic, consider running the MQTT client as a standalone script and importing the needed modules of your Django app there (and don't forget to set up the Django environment in the script).
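For example, a minimal sketch of the publish helper; the topic, broker host and credentials are placeholders:
import paho.mqtt.publish as publish

# Connects, publishes a single message and disconnects - no client
# instance or network loop to manage.
publish.single('my/topic',
               payload='hello from Django',
               hostname='broker.example.com',
               port=1883,
               auth={'username': 'myuser', 'password': 'mypassword'})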
The answer below is good only if you run Django in a single thread, which is not usual in production.
Create mqtt.py in your application folder and put all related code there. For example:
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    client.subscribe("$SYS/#")

def on_message(client, userdata, msg):
    # Do something
    pass

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("iot.eclipse.org", 1883, 60)
Don't call loop_forever() here!
Then in your application __init__.py call loop_start():
from . import mqtt
mqtt.client.loop_start()
Using loop_start() instead of loop_forever() gives you a non-blocking background thread.
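With that in place, other parts of your Django app can publish through the shared client; a small sketch, with the module path and topic as placeholders:
# e.g. in views.py
from django.http import HttpResponse

from myapp import mqtt

def my_view(request):
    # publish() is safe to call here because loop_start() keeps the
    # network loop running in a background thread.
    mqtt.client.publish('some/topic', 'payload from Django')
    return HttpResponse('published')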
If you are using ASGI in your Django application, you can use MQTTAsgi.
It's a complete protocol server for Django and MQTT.
Edit: Full disclosure, I'm the author of MQTTAsgi.
Edit 2: To run your application with the MQTT protocol server, first create an MQTT consumer:
from mqttasgi.consumers import MqttConsumer

class MyMqttConsumer(MqttConsumer):

    async def connect(self):
        await self.subscribe('my/testing/topic', 2)

    async def receive(self, mqtt_message):
        print('Received a message at topic:', mqtt_message['topic'])
        print('With payload', mqtt_message['payload'])
        print('And QOS:', mqtt_message['qos'])

    async def disconnect(self):
        await self.unsubscribe('my/testing/topic')
Then you should add this protocol to the protocol router:
from channels.routing import ProtocolTypeRouter, URLRouter
from channels.security.websocket import AllowedHostsOriginValidator
from django.conf.urls import url

application = ProtocolTypeRouter({
    'websocket': AllowedHostsOriginValidator(URLRouter([
        url('.*', WebsocketConsumer)
    ])),
    'mqtt': MyMqttConsumer,
    # ...
})
Then you can run the MQTT protocol server with*:
mqttasgi -H localhost -p 1883 my_application.asgi:application
*Assuming the broker is at localhost, port 1883.
Thanks for the comments!
I am trying to build a system where I can send messages to different users based on their subscriptions to certain events. Basically I have an API which gives me live stream events. Some of the users will be subscribed to those events. My task is to send a message to those users whenever such an event occurs. I am trying to design the system in Python.
Currently I have the following questions:
How to continuously poll for events from a live stream API in Python.
How to find out which users are subscribed to a particular event. (Redis or MySQL)
How to send notifications to all the users of a particular event. (Pub/Sub)
I am thinking of using Amazon SNS, but I am not quite sure about the overall architecture.
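As one concrete illustration of points 2 and 3, a minimal sketch using Redis for both the subscription lookup and the fan-out; the key layout (a set of user ids per event, a pub/sub channel per user) is hypothetical:
import redis

r = redis.Redis()

def subscribe_user(user_id, event_id):
    # Record the subscription as set membership.
    r.sadd(f'subscribers:{event_id}', user_id)

def notify_subscribers(event_id, message):
    # Look up the subscribed users and publish to each user's channel.
    for user_id in r.smembers(f'subscribers:{event_id}'):
        r.publish(f'user:{user_id.decode()}', message)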
RabbitMQ is lightweight and easy to deploy on premises and in the cloud. It supports multiple messaging protocols. RabbitMQ can be deployed in distributed and federated configurations to meet high-scale, high-availability requirements.
Just a small example:
The producer sends messages to the "hello" queue, and the consumer receives messages from that queue. This will create a queue (hello) with a message on the RabbitMQ cluster.
#!/usr/bin/env python
import pika

RABBITMQ_USERNAME = 'ansible'
RABBITMQ_PASSWORD = 'ansible'

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='example.eu-central-1.elb.amazonaws.com',
    heartbeat=25,
    credentials=pika.PlainCredentials(RABBITMQ_USERNAME, RABBITMQ_PASSWORD)))
channel = connection.channel()

# Declare the queue and publish one message to it via the default exchange
channel.queue_declare(queue='hello')
channel.basic_publish(exchange='',
                      routing_key='hello',
                      body='Hello World!')
print(" [x] Sent 'Hello World!'")
connection.close()
Receive a message from a named queue:
#!/usr/bin/env python
import pika

RABBITMQ_USERNAME = 'ansible'
RABBITMQ_PASSWORD = 'ansible'

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='example.elb.amazonaws.com',
    heartbeat=25,
    credentials=pika.PlainCredentials(RABBITMQ_USERNAME, RABBITMQ_PASSWORD)))
channel = connection.channel()

channel.queue_declare(queue='hello')

def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)

channel.basic_consume(queue='hello',
                      on_message_callback=callback,
                      auto_ack=True)
print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()