I am able to receive the message from azure queues using their official SDK. It works fine. Here is the snippet that I am using to receive a message from the queue.
from azure.servicebus import QueueClient

client = QueueClient.from_connection_string(q_string, q_name)

msg = None
with client.get_receiver() as queue_receiver:
    messages = queue_receiver.fetch_next(max_batch_size=1, timeout=3)
    if len(messages) > 0:
        msg = messages[0]
        print(f"Received {msg.message}")
return msg
How do I close the connection to the queue? Is there a function for this in the Azure SDK?
You can call the close method on the Service Bus client. That will shut down the client and the underlying connection.
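For reference, a minimal sketch of explicit cleanup, assuming the newer azure-servicebus (v7) API rather than the QueueClient shown above; the function name here is illustrative:

```python
def receive_one_and_close(conn_str, queue_name):
    """Receive a single message, then explicitly close the receive link
    and the underlying connection (azure-servicebus v7 API)."""
    # Third-party dependency: pip install azure-servicebus
    from azure.servicebus import ServiceBusClient

    client = ServiceBusClient.from_connection_string(conn_str)
    receiver = client.get_queue_receiver(queue_name=queue_name)
    try:
        for msg in receiver.receive_messages(max_message_count=1, max_wait_time=3):
            print("Received", str(msg))
    finally:
        receiver.close()  # closes the receive link
        client.close()    # closes the underlying connection
```

Using `with ServiceBusClient.from_connection_string(conn_str) as client:` achieves the same cleanup implicitly.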
Related
I've been facing an issue where a gRPC AIO Python client sends a stream of configuration changes to a gRPC server. Although it is a bidirectional RPC, the client does not expect any messages back from the server. Whenever a configuration changes, the client sends a gRPC message containing the new configuration. It keeps the channel open and never closes it (it does not call done_writing()). When it has nothing to send, it polls a queue for new messages in a tight loop. If the server goes down during that time, the client cannot detect it; but as soon as data arrives in the queue and the client pushes it to the server, the failure is detected (an exception is thrown).
How can I detect that the server went down while there is no data to send and the client is waiting for data? Is there a gRPC API I can call on the channel while waiting, in order to detect channel failure? (Any call that throws an exception when the server is down would work for me; I could not find a useful one.) I tried gRPC keepalive, but it didn't work for my scenario.
async with grpc.aio.insecure_channel('127.0.0.1:51234') as channel:
    stream = hello_pb2.HelloStub(channel).Configure()
    await stream.wait_for_connection()
    while True:
        if queue.empty():
            continue
        if not queue.empty():
            item = queue.get()
            await asyncio.sleep(0.001)
            queue.task_done()
            await stream.write(item)
        await asyncio.sleep(0.01)
    await stream.done_writing()
I tried to enable gRPC keepalive when forming the insecure_channel, but it didn't have the desired effect.
Subsequently I tried calling channel_ready() inside the tight loop while the queue was empty, expecting it to throw an exception and break out of the loop, but that didn't work either.
async with grpc.aio.insecure_channel(
        '127.0.0.1:51234',
        options=[
            ('grpc.keepalive_time_ms', 60000),
            ('grpc.keepalive_timeout_ms', 600000),
            ('grpc.keepalive_permit_without_calls', 1),
            ('grpc.http2.max_pings_without_data', 0),
            ('grpc.http2.min_time_between_pings_ms', 10000),
            ('grpc.http2.min_ping_interval_without_data_ms', 60000),
            ('grpc.max_connection_age_ms', 2147483647),
        ]) as channel:
I was able to solve it using channel.get_state(). Below is the code snippet:

if queue.empty():
    if channel.get_state() != grpc.ChannelConnectivity.READY:
        break
    await asyncio.sleep(5)  # yield to the event loop instead of blocking it
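The same idea as a self-contained sketch: state polling and queue draining factored into one coroutine. The names pump, write, and the READY constant are illustrative; in real code get_state would be channel.get_state and the comparison would be against grpc.ChannelConnectivity.READY:

```python
import asyncio
import queue

READY = "READY"  # stands in for grpc.ChannelConnectivity.READY

async def pump(q, write, get_state, poll_interval=0.01):
    """Drain items from q into write(); while idle, poll the channel state
    instead of busy-spinning. Returns False if the channel leaves READY,
    True once a None sentinel is drained."""
    while True:
        if get_state() != READY:
            return False  # channel lost while waiting for data
        try:
            item = q.get_nowait()
        except queue.Empty:
            await asyncio.sleep(poll_interval)  # yield to the event loop
            continue
        if item is None:  # sentinel: producer is done
            return True
        await write(item)
        q.task_done()
```

With a real channel, `write` would be `stream.write` and the loop would be followed by `await stream.done_writing()`.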
Help me figure out how to implement this correctly.
The bottom line: the clients are charging stations that connect, open a socket, send messages to the server, and receive responses.
The server listens on a port, sees the connected station, receives messages, and sends responses back.
Question: when a client connects and sends its headers, I can send it a message. But I need to periodically send messages to a client that keeps the socket open, and I don't understand how to implement this. Can someone help?
sample sending code:
charge_point_id = path.strip('/')
cp = client_main(charge_point_id, websocket)
logging.info(charge_point_id)
print(charge_point_id)
print(path)
await websocket.send(json.dumps([2,"222", "GetLocalListVersion", {}]))
await cp.start()
example of receiving a message from a client:
class client_main(cp):
    errors = False
    if not errors:
        @on('BootNotification')
        def on_boot_notification(self, charge_point_vendor, charge_point_model,
                                 charge_point_serial_number, firmware_version,
                                 meter_type, **kwargs):
            return call_result.BootNotificationPayload(
                status="Accepted",
                current_time=date_str.replace('+00:00', 'Z'),
                interval=60
            )
In this case the charging station, per the OCPP protocol, opens the connection and keeps it open, so it should be possible to write to it somehow.
How do I send a message to the client? My example:
@on('Heartbeat')
def on_getlocallistversion(self):
    await self.route_message(json.dumps([2, "222", "GetLocalListVersion", {}]))

def on_heartbeat(self):
    return call_result.HeartbeatPayload(
        current_time=datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%S') + "Z"
    )
I get an error on this line:

await self.route_message(json.dumps([2, "222", "GetLocalListVersion", {}]))
This is impossible otherwise: you can only send a message to the client while it is connected.
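Since the charge point does keep its connection open, a server-initiated call can run alongside the message loop. A sketch assuming the Python ocpp library's v16 API (the payload class name may differ between ocpp versions, and on_connect is a hypothetical websockets handler):

```python
import asyncio

async def on_connect(websocket, path):
    # Third-party dependencies: pip install ocpp websockets
    from ocpp.v16 import ChargePoint as BaseChargePoint, call

    charge_point = BaseChargePoint(path.strip('/'), websocket)

    async def send_periodic():
        # Server-initiated OCPP call, repeated while the socket stays open.
        while True:
            await asyncio.sleep(60)
            await charge_point.call(call.GetLocalListVersionPayload())

    # Run the incoming-message loop and the periodic sender concurrently.
    await asyncio.gather(charge_point.start(), send_periodic())
```

The key point is asyncio.gather: start() keeps reading incoming frames while send_periodic() writes outgoing calls on the same open websocket.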
I have a virtual assistant which receives messages and sends them to an event broker (e.g. RabbitMQ).
The event broker lets me connect my running assistant to other services that process the data coming in from conversations.
Example:
If I have a RabbitMQ server running, as well as another application that consumes the events, then this consumer needs to implement Pika's start_consuming() method with a callback action. Here's a simple example:
import json
import pika

def _callback(ch, method, properties, body):
    # Do something useful with your incoming message body here, e.g.
    # saving it to a database
    print('Received event {}'.format(json.loads(body)))

if __name__ == '__main__':
    # RabbitMQ credentials with username and password
    credentials = pika.PlainCredentials('username', 'password')

    # Pika connection to the RabbitMQ host - typically 'rabbit' in a
    # docker environment, or 'localhost' in a local environment
    connection = pika.BlockingConnection(
        pika.ConnectionParameters('rabbit', credentials=credentials))

    # start consumption of channel (pika 1.x keyword signature)
    channel = connection.channel()
    channel.basic_consume(queue='events',
                          on_message_callback=_callback,
                          auto_ack=True)
    channel.start_consuming()
What is the correct way to use FastAPI with Pika to consume these live messages and save them to a database?
Do I need a websocket route?
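One common pattern, sketched here under assumptions (the function below is hypothetical, not FastAPI API; it would be called from a FastAPI startup event handler), is to run the blocking Pika consumer in a background thread so it doesn't block the event loop. No websocket route is needed just to consume:

```python
import threading

def start_consumer_thread(host='rabbit', queue_name='events', on_body=print):
    """Run a blocking Pika consumer in a daemon thread (pika 1.x API).

    In a FastAPI app this could be invoked from an @app.on_event("startup")
    handler; on_body is where you would save the message to a database.
    """
    def _consume():
        # Third-party dependency: pip install pika
        import pika
        connection = pika.BlockingConnection(pika.ConnectionParameters(host))
        channel = connection.channel()
        channel.basic_consume(
            queue=queue_name,
            on_message_callback=lambda ch, method, props, body: on_body(body),
            auto_ack=True)
        channel.start_consuming()

    thread = threading.Thread(target=_consume, daemon=True)
    thread.start()
    return thread
```

A daemon thread is used so the consumer does not prevent the FastAPI process from exiting.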
I have a Python client which opens a websocket connection to a server and subscribes to a particular topic using the STOMP protocol. The subscription goes just fine; on the server side everything looks good. However, when the server publishes a few messages, the client does not receive any.
Here are the codes used:
Client
# coding: utf-8
import websocket
import stomp
import stomper
token = "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJhZG1pbiIsInByaW5jaXBhbF9uYW1lIjoiYWRtaW4iLCJpc3MiOiJBdGhlbmEiLCJ1c2VydHlwZSI6IkxPQ0FMIiwiYW9zX3ZlcnNpb24iOiJldXBocmF0ZXMtNS4xMS1zdGFibGUiLCJyZWdpb24iOiJlbi1VUyIsImV4cCI6MTczNDI4MDI3NywidXVpZCI6ImI4MzhjOGRkLWI4NmQtNGNkZS05ZTE4LTUxM2E1OTk4ODhhYyIsImlhdCI6MTU3NjYwMDI3NywiYXV0aG9yaXRpZXMiOiJST0xFX0NMVVNURVJfQURNSU4sUk9MRV9NVUxUSUNMVVNURVJfQURNSU4sUk9MRV9VU0VSX0FETUlOLFJPTEVfQ0xVU1RFUl9WSUVXRVIiLCJqdGkiOiI1NTU1ZjEwZC04NGQ5LTRkZGYtOThhNC1mZmI1OTM1ZTQwZWEifQ.LOMX6ppkcSBBS_UwW9Qo2ieWZAGrKqADQL6ZQuTi2oieYa_LzykNiGMWMYXY-uw40bixDcE-aVWyrIEZQbVsvA"
headers = {"Authorization": "Bearer " + token}
uri = "ws://127.0.0.1:8765/notifications/websocket"
def on_msg(ws, msg):
    print(msg)

def on_error(ws, err):
    print(err)

def on_closed(ws):
    print("#Closed#")

def on_open(ws):
    sub = stomper.subscribe("/user/queue/alert", "MyuniqueId", ack="auto")
    ws.send(sub)

websocket.enableTrace(True)
ws = websocket.WebSocketApp(uri, header=headers, on_message=on_msg,
                            on_error=on_error, on_close=on_closed)
ws.on_open = on_open
ws.run_forever()
Code server uses to publish the message:
for (WatchesSubscription s : subscriptions) {
template.convertAndSendToUser(s.getSession().getUser(), destination, dto);
}
When I checked the values of the above variables, I saw that destination was queue/alerts, as expected. I have a Java client to test with as well, and it works just fine. I have even tried subscribing to /topic/alerts and sending to it via template.convertAndSend("/topic/alerts", dto); here too I received nothing. I am drawing a complete blank on this and would appreciate any sort of help!
After many days of hair pulling I finally figured out the reason and the fix!
The Java client I used was WebSocketStompClient stompClient = new WebSocketStompClient(transport);. Its stompClient.connect(URL, webSocketHttpHeaders, sessionHandler); method implicitly sends a STOMP CONNECT\n\n\x00\n frame.
The Spring Boot server, configured for STOMP, understands this as a connection request and responds with a CONNECT_ACK.
When this ACK is sent, the server also updates its local UserRegistry with the new user, so the internal message broker knows there is a user subscribed to a given topic.
In my Python code I had merely opened a websocket connection and then directly sent a SUBSCRIBE message. The broker never got a CONNECT, so the user was never stored! As a result, the messages published later were simply discarded by the broker.
The fix was to send a CONNECT frame after opening the connection and before subscribing. Here is the code:
def on_open(ws):
    # The magic happens here!
    ws.send("CONNECT\naccept-version:1.0,1.1,2.0\n\n\x00\n")
    sub = stomper.subscribe("/user/queue/alert", "MyuniqueId", ack="auto")
    ws.send(sub)
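The frames involved are just text: a command line, header lines, a blank line, a body, and a NUL terminator. A tiny illustration with a hypothetical helper (not part of stomper) showing what CONNECT and SUBSCRIBE look like on the wire:

```python
def stomp_frame(command, headers=None, body=""):
    """Build a STOMP frame: command line, header lines, blank line,
    body, NUL terminator. Illustrative helper, not stomper's API."""
    lines = [command]
    for key, value in (headers or {}).items():
        lines.append(f"{key}:{value}")
    return "\n".join(lines) + "\n\n" + body + "\x00"

connect = stomp_frame("CONNECT", {"accept-version": "1.0,1.1,2.0"})
subscribe = stomp_frame("SUBSCRIBE", {"id": "MyuniqueId",
                                      "destination": "/user/queue/alert",
                                      "ack": "auto"})
```

This makes the failure mode above easy to see: without the CONNECT frame, the broker has no session to attach the SUBSCRIBE to.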
Client details: Paho MQTT Python client obtained from http://git.eclipse.org/gitroot/paho/org.eclipse.paho.mqtt.python.git, SHA id 300fcbdffd62d08f627f94f3074463cfa532ca87
Broker details: RabbitMQ 3.3.4 with the MQTT plugin
In the first scenario everything is being run locally with SSL enabled. I am using the client in a fashion where a separate process publishes messages (> 10000) and does not wait for an acknowledgment between publishes. The error is a result of self._ssl.read(1) returning a zero-length value in the following snippet from _packet_read in client.py:
...
if self._in_packet['command'] == 0:
    try:
        if self._ssl:
            command = self._ssl.read(1)
        else:
            command = self._sock.recv(1)
    except socket.error as err:
        ...
    else:
        if len(command) == 0:
            return 1
        command = struct.unpack("!B", command)
        self._in_packet['command'] = command[0]
...
and occurs after receiving and parsing 25 acknowledgments from RabbitMQ. After this error I no longer receive anything back from the broker.
If I run with SSL disabled I do not encounter any errors and can successfully receive acknowledgements for all messages sent.
If I run the broker remotely (some location on the internet) I get the same results over SSL. However, when not using SSL I get the read error/disconnect at different intervals but the client is able to recover/reconnect and I receive broker acknowledgements for all messages sent.
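For context, the zero-length read the snippet guards against is how Python reports that the peer (or the TLS layer) signalled end-of-stream; the core of that check can be isolated as follows (parse_command_byte is an illustrative name, not paho-mqtt API):

```python
import struct

def parse_command_byte(data: bytes):
    """Mirror the snippet's logic: an empty read means the remote end
    closed the connection (return None); otherwise unpack the MQTT
    fixed-header command byte."""
    if len(data) == 0:
        return None  # zero-length read: peer closed / TLS EOF
    (command,) = struct.unpack("!B", data[:1])
    return command
```

So the question is really why the SSL socket starts returning EOF mid-stream, not what the byte parsing does.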
Here is the client configuration that I'm using:
...
client = Client('foo')
client.max_inflight_messages_set(65536)
client.on_connect = self.on_connect_callback
client.on_disconnect = self.on_disconnect_callback
client.on_publish = self.on_publish_callback
client.on_message = self.on_message_callback
client.username_pw_set('foo', 'bar')
client.tls_set("ca-cert.pem",
"client-cert.pem",
"client-key.pem")
client.will_set(topic="last_will/foo",
payload=Message().body, qos=1)
client.connect('127.0.0.1', 8883, 30)
client.loop_start()
...
Any idea on what could be causing this and/or suggestions for troubleshooting?
UPDATE 20140828: I was stepping through loop_read and noticed that I get an empty socket return value after successfully receiving the first full packet (the connection acknowledgment). The call to select that precedes the socket.recv call indicates that there is data ready to be read on the socket. Could this be a socket buffer issue? I'm not sure how a Python socket receive buffer behaves if it overflows (I'm running this on OS X, btw).