I am currently writing two scripts that talk to a message server using the stomp client library: write.py to send data and read.py to receive data.
If I start read.py first and then run write.py, write.py receives the messages correctly.
However, if I run write.py first and then run read.py, read.py does not retrieve any messages previously sent to the server.
Below are relevant parts of the scripts.
How can I ensure that messages put into the queue by write.py are retained until read.py subscribes and retrieves them?
write.py
import stomp
import traceback

def writeMQ(msg):
    queue = '/topic/test'
    conn = stomp.Connection(host_and_ports=[(MQ_SERVER, MQ_PORT)])
    try:
        conn.start()
        conn.connect(MQ_USER, MQ_PASSWD, wait=True)
        conn.send(body=msg, destination=queue, persistent=True)
    except:
        traceback.print_exc()
    finally:
        conn.disconnect()
    return
read.py
import stomp
import traceback

class MyListener(stomp.ConnectionListener):
    def on_error(self, headers, message):
        print('received an error {0}'.format(message))

    def on_message(self, headers, message):
        print('received a message {0}'.format(message))

def readMQ():
    queue = '/topic/test'
    conn = stomp.Connection(host_and_ports=[(MQ_SERVER, MQ_PORT)])
    try:
        conn.set_listener("", MyListener())
        conn.start()
        conn.connect(MQ_USER, MQ_PASSWD, wait=True)
        conn.subscribe(destination=queue, ack="auto", id=1)
        stop = raw_input()
    except:
        traceback.print_exc()
    finally:
        conn.disconnect()
    return
The problem is that the messages are being sent to a topic.
The Apollo Documentation describes the difference between topics and queues as follows:
Queues hold on to unconsumed messages even when there are no subscriptions attached, while a topic will drop messages when there are no connected subscriptions.
Thus, when read.py is started first and listening, the topic recognizes the subscription and forwards the message. But when write.py is started first, the message is dropped because there is no subscribed client.
So you can use a queue instead of a topic. If the server is able to create queues on demand, simply set
queue = '/queue/test'.
I don't know which version of stomp is being used, but I cannot find the parameter send(..., persistent=True). In any case, persistence is not the right way to go here: it does not retain messages for a later subscriber, it only saves them in case of a server failure.
You can use the retain:set header for topic messages instead.
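For example, with stomp.py the header can be passed via the headers argument of send(). A minimal sketch, assuming the broker is Apollo with retained-message support (the retain header is Apollo-specific, not core STOMP):

# Sketch: publish to the topic with the "retain" header so the broker
# keeps the last message for late-joining subscribers (Apollo-specific).
conn.send(body=msg, destination='/topic/test', headers={'retain': 'set'})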
I'm using ActiveMQ classic v5.16.3 and experimenting with NACK. My expectation is that if the client sends a NACK then the message will remain on the queue and be available for another client. My code is below. I set a prefetch of 1, and ack mode of client-individual.
If I omit the conn.nack() call then I see my print statements, and the message remains on the queue - hence I believe that ActiveMQ is looking for an ACK or NACK.
When I include the conn.nack() call then I again see my print statements, and the message is removed from the queue.
Is this expected behaviour? I think a client should be able to reject malformed messages by NACK-ing and that eventually ActiveMQ should put them to a dead letter queue.
import time
import sys
import stomp

class MyListener(stomp.ConnectionListener):
    def on_error(self, frame):
        print('received an error "%s"' % frame.body)

    def on_message(self, frame):
        # experiment with and without the following line
        conn.nack(id=frame.headers['message-id'], subscription=frame.headers["subscription"])
        print('received a message "%s"' % frame.body)
        print('headers "%s"' % frame.headers)

print('Connecting ...')
conn = stomp.Connection()
conn.set_listener('', MyListener())
conn.connect('admin', 'admin', wait=True)
print('Connected')
conn.subscribe(destination='/queue/audit', id=1, ack='client-individual',
               headers={'activemq.prefetchSize': 1})
As suggested by Tim Bish, I needed to configure ActiveMQ to retry. I made the following changes to activemq.xml
Added scheduler support to the broker:
<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="localhost" dataDirectory="${activemq.data}"
        schedulerSupport="true">
Specified the redelivery plugin:
<plugins>
  <redeliveryPlugin fallbackToDeadLetter="true"
                    sendToDlqIfMaxRetriesExceeded="true">
    <redeliveryPolicyMap>
      <redeliveryPolicyMap>
        <defaultEntry>
          <redeliveryPolicy maximumRedeliveries="4"
                            initialRedeliveryDelay="5000"
                            redeliveryDelay="10000"/>
        </defaultEntry>
      </redeliveryPolicyMap>
    </redeliveryPolicyMap>
  </redeliveryPlugin>
</plugins>
And then, for my chosen destination, specify that poison messages be sent to a specific queue (the default is to publish them to a topic):
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry queue="audit" prioritizedMessages="true">
        <deadLetterStrategy>
          <individualDeadLetterStrategy queuePrefix="DLQ."
                                        useQueueForQueueMessages="true"/>
        </deadLetterStrategy>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>
I am using nameko.messaging.consume for consuming messages from a queue. Here's some sample code:
from kombu import Queue
from nameko.messaging import consume

class Service:
    name = "sample_service"

    QUEUE = Queue("queue_name", no_declare=True)

    @consume(QUEUE)
    def process_message(self, payload):
        # Some long running code ...
        return result
By default, the ACK is sent to the RabbitMQ broker after process_message returns (here, the return result statement). I want to send the ACK as soon as the consumer receives the message. How can I do that?
In the pika library, the consumer can acknowledge as soon as the message is received; that is the behaviour I want to replicate with nameko's consumer.
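For reference, a minimal pika sketch of what I mean, assuming a local broker, a queue named "queue_name", and a hypothetical process_message helper:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

def on_message(ch, method, properties, body):
    # Ack immediately on receipt, before the long-running work starts.
    ch.basic_ack(delivery_tag=method.delivery_tag)
    process_message(body)  # hypothetical long-running handler

channel.basic_consume(queue="queue_name", on_message_callback=on_message)
channel.start_consuming()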
Thanks :)
I have a Python3 program that runs a "while True"-loop until stopped, which occasionally saves data to a MySQL database. I am creating an administrative website, separate from the Python program, where I will be able to observe this data.
I now want to be notified, on the website, when changes have been made to the database. My thought was to set up a websocket connection, so that the Python program can send a message through the socket to all connected clients, i.e. all open browsers, if there have been any changes to the database table.
I have done something similar before, but in that case I had to wait for a websocket connection before the "while True"-loop would start. In the new scenario I want to be able to have multiple website clients at once, and let them connect at any time, as well as disconnect without interrupting the Python programs loop.
This is a simplified version of my previous code, which I now want to update to be able to run both with & without websocket clients.
import asyncio
import websockets

async def run(ws):
    while True:
        db_has_updated = do_stuff()
        if db_has_updated:
            await ws.send(data)

socket_server = websockets.serve(run, "127.0.0.1", 5055)
asyncio.get_event_loop().run_until_complete(socket_server)
console_log("Waiting for socket connection...")
asyncio.get_event_loop().run_forever()
I just can't seem to come up with the right search terms to find a solution, so I'm asking here instead.
I figured it out, finally! Here is my solution with a websocket server running in a separate thread from the other logic. I'm probably changing some things to make it neater, but this does everything I need. Feel free to ask any questions.
Be aware that this blocks when messaging all the connected clients. That is the way I needed it to work, but you could always thread/subprocess the logic/data-gen part of the program if you want it to run completely asynchronously.
#!/usr/bin/env python3

import asyncio
import websockets
import threading
import time
import random

def gen_data():
    print("Generating data...")
    time.sleep(3)
    data = random.randint(1, 10)
    return data

async def send(client, data):
    await client.send(data)

async def handler(client, path):
    # Register.
    print("Websocket Client Connected.", client)
    clients.append(client)
    while True:
        try:
            print("ping", client)
            pong_waiter = await client.ping()
            await pong_waiter
            print("pong", client)
            time.sleep(3)
        except Exception as e:
            clients.remove(client)
            print("Websocket Client Disconnected", client)
            break

clients = []
start_server = websockets.serve(handler, "localhost", 5555)

asyncio.get_event_loop().run_until_complete(start_server)
threading.Thread(target=asyncio.get_event_loop().run_forever).start()

print("Socket Server Running. Starting main loop.")

while True:
    data = str(gen_data())
    message_clients = clients.copy()
    for client in message_clients:
        print("Sending", data, "to", client)
        try:
            asyncio.run(send(client, data))
        except:
            # Clients might have disconnected during the messaging process,
            # just ignore that, they will have been removed already.
            pass
I was going through the PubSub pull docs here
from google.cloud import pubsub_v1

# TODO project_id = "Your Google Cloud Project ID"
# TODO subscription_name = "Your Pub/Sub subscription name"
# TODO timeout = 5.0  # "How long the subscriber should listen for
# messages in seconds"

subscriber = pubsub_v1.SubscriberClient()
# The `subscription_path` method creates a fully qualified identifier
# in the form `projects/{project_id}/subscriptions/{subscription_name}`
subscription_path = subscriber.subscription_path(
    project_id, subscription_name
)

def callback(message):
    print("Received message: {}".format(message))
    message.ack()

streaming_pull_future = subscriber.subscribe(
    subscription_path, callback=callback
)
print("Listening for messages on {}..\n".format(subscription_path))

# result() in a future will block indefinitely if `timeout` is not set,
# unless an exception is encountered first.
try:
    streaming_pull_future.result(timeout=timeout)
except:  # noqa
    streaming_pull_future.cancel()
In the above example, the message is ack-ed as soon as it is received. But I want to acknowledge only when my local celery workers finish processing the message, so that PubSub can redeliver the message if the worker fails. So I take the ack_id of the message and pass it on to the worker.
params["ack_id"] = message._ack_id
start_aggregation.delay(params)
I just can't figure out how I can use the ack_id in the worker to acknowledge the message. I know that you can use a pubsub end-point to ack a message like given here. But I can't figure out how to use service account credentials to do the same; they use OAuth in that doc. Any pointers are appreciated. Thanks.
Acking messages received from the client library with a direct call to the acknowledge API would cause issues in the client. The client has flow control limits, which determine the maximum number of messages that can be outstanding (delivered, but not acked). The removal of messages from the count occurs when one calls message.ack() or message.nack(). If you were to call the acknowledge API directly, then this count would not change, resulting in messages no longer flowing once the limit is reached.
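For illustration, the flow control limit mentioned above is set when subscribing. A sketch, reusing subscriber, subscription_path, and callback from your snippet (max_messages=10 is an arbitrary value):

# Cap the number of outstanding (delivered but unacked) messages.
# The count only decreases when message.ack() or message.nack() is called.
flow_control = pubsub_v1.types.FlowControl(max_messages=10)
streaming_pull_future = subscriber.subscribe(
    subscription_path, callback=callback, flow_control=flow_control
)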
If you are trying to use celery to get more parallelism in your processing, you can probably do it directly without this intermediate step. One option is to start up instances of the subscriber client with the same subscription in different processes. The messages will be distributed among the subscribers. Alternatively, you could replace the scheduler with one that is process-based instead of thread-based, though that would be some more work.
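A rough sketch of the first option, several subscriber processes sharing one subscription; the project and subscription names are placeholders and handle_message is a hypothetical processing function:

import multiprocessing
from google.cloud import pubsub_v1

PROJECT_ID = "your-project-id"            # placeholder
SUBSCRIPTION_NAME = "your-subscription"   # placeholder

def worker():
    subscriber = pubsub_v1.SubscriberClient()
    subscription_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION_NAME)

    def callback(message):
        handle_message(message.data)  # hypothetical processing function
        message.ack()                 # ack only after processing succeeds

    streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback)
    streaming_pull_future.result()  # block this process on the streaming pull

if __name__ == "__main__":
    # Messages on the shared subscription are load-balanced across these processes.
    for _ in range(4):
        multiprocessing.Process(target=worker).start()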
Sorry for the long post but I've been poking at this for over a week so I've tried a lot of different stuff. I know Python well enough but I don't have any experience with asyncio or non-blocking functions in Python.
I'm writing an API library/module/package/whatever for a web service that requires a websocket connection. There are many incoming messages to act on, and some control-related messages (web-app level, not websocket control frames) that I need to send on occasion. I can easily receive messages over the connection and act on them. I can send messages, but only in response to received messages, because the receive loop is always blocking while waiting for messages. I don't want to wait for an incoming message to process an outgoing one, so the script shouldn't have to hang on input until a new message is received. In my struggles to get two-way communication working as desired, I discovered I need to use something like Twisted, Tornado, or asyncio, but so far every implementation I've tried has failed. Note that the sending has to happen over the same connection. Opening a short-lived connection outside of the receive loop will not work. Here's what I've done so far:
The first iteration of the websocket code was using the websocket-client package. It was very close to the example from the docs:
import websocket
try:
    import thread
except ImportError:
    import _thread as thread
import time

def on_message(ws, message):
    # Send message frames to respective functions
    # for sorting, objectification, and processing
    pass

def on_error(ws, error):
    print(error)

def on_close(ws):
    print("### closed ###")

def on_open(ws):
    def run(*args):
        # Send initial frames required for server to send the desired frames
        pass
    thread.start_new_thread(run, ())

if __name__ == "__main__":
    websocket.enableTrace(True)
    ws = websocket.WebSocketApp(buildWebsocketURL(),
                                on_message=on_message,
                                on_error=on_error,
                                on_close=on_close)
    ws.on_open = on_open
    ws.run_forever()
This blocks any further execution outside of the loop. I tried reading up on the _thread module, but I couldn't find any indication that I could communicate with the websocket thread from outside. I tried setting up a pub/sub listener function that would forward data to ws.send() from another sender function, but it didn't work: no errors or anything, just no indication of any sent messages.
Next I tried the Websockets module. This one seems to be built from the ground up to utilize asyncio. Again, I got a client built that would send initial messages and act on received messages, but progress stopped there:
async def wsconnection():
    async with websockets.connect(getWebsocketURL()) as websocket:
        while True:
            message = await websocket.recv()
            if message == '{"type":"broadcaster.ready"}':
                subscriptions = getSubscriptions()  # Get subscriptions from ident data
                logging.info('Sending bookmarks to server as subscription keys')
                subscriptionupdate = '{{"type": "subscribe","subscription_keys": ["{0}"],"subscription_scope": "update"}}'.format(
                    '","'.join(subscriptions))
                subscriptioncontent = '{{"subscription_keys": ["{0}"],"subscription_scope": "content","type": "subscribe"}}'.format(
                    '","'.join(subscriptions))
                logging.debug(subscriptioncontent)
                await websocket.send(subscriptionupdate)
                await websocket.send(subscriptioncontent)
                await websocket.send(
                    '{"type":"message_lobby.read","lobby_id":"1","message_id:"16256829"}')
            sortframe(message)

asyncio.get_event_loop().run_until_complete(wsconnection())
I tried the aforementioned pub/sub listener applied here, to no avail. Upon reading the docs for this module more thoroughly, I tried getting the websocket protocol object (which contains the send() and recv() methods) outside of the loop, then creating two coroutines: one listening for incoming messages, and one listening for and sending outgoing messages. So far I've been completely unable to get the websocket protocol object without running the async with websockets.connect(getWebsocketURL()) as websocket: line within the scope of the wsconnection() function. I tried using websocket = websockets.client.connect(), which according to the docs I thought would give me the protocol object I need, but it doesn't. All of the examples I can find don't seem to reveal any apparent way to structure the sender and receiver the way I require without extensive knowledge of asyncio.
I also poked around with Autobahn, with similar code structures as above, using both asyncio and Twisted, but I ran into all the same problems as above.
So far the closest I've gotten was with the Websockets package above. The docs have an example snippet for a send/recv connection, but I can't really read what's going on there as it's all very specific to asyncio. I'm really having trouble wrapping my head around asyncio in general, and I think a big problem is that it seems to have evolved very rapidly recently, so there is a ton of very version-specific information floating around that conflicts. Not good for learning, unfortunately. This is what I tried using that example; it connects, receives initial messages, then the connection is lost/closed:
async def producer(message):
    print('Sending message')

async def consumer_handler(websocket, path):
    while True:
        message = await websocket.recv()
        await print(message)
        await pub.sendMessage('sender', message)

async def producer_handler(websocket, path):
    while True:
        message = await producer()
        await websocket.send(message)

async def wsconnect():
    async with websockets.connect(getWebsocketURL()) as websocket:
        path = "443"

        async def handler(websocket, path):
            consumer_task = asyncio.ensure_future(
                consumer_handler(websocket, path))
            producer_task = asyncio.ensure_future(
                producer_handler(websocket, path))
            done, pending = await asyncio.wait(
                [consumer_task, producer_task],
                return_when=asyncio.FIRST_COMPLETED,
            )
            for task in pending:
                task.cancel()

pub.subscribe(producer, 'sender')
asyncio.get_event_loop().run_until_complete(wsconnect())
So how do I structure this code to get sending and receiving over the same websocket connection? I also have various API calls to make in the same script while the websocket connection is open, which further complicates things.
I'm using Python 3.6.6 and this script is intended to be imported as a module into other scripts so the websocket functionality will need to be wrapped up in a function or class for external calls.
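For reference, here is roughly the docs' consumer/producer pattern adapted to a client connection, as far as I understand it. A sketch only: getWebsocketURL() and sortframe() are my helpers from above, and the asyncio.Queue is an assumed hand-off point where the rest of the code would drop outgoing messages.

import asyncio
import websockets

outgoing = asyncio.Queue()  # assumed: the rest of the app puts outbound messages here

async def consumer_handler(websocket):
    while True:
        message = await websocket.recv()
        sortframe(message)  # dispatch incoming frames as before

async def producer_handler(websocket):
    while True:
        message = await outgoing.get()
        await websocket.send(message)

async def wsconnect():
    async with websockets.connect(getWebsocketURL()) as websocket:
        consumer_task = asyncio.ensure_future(consumer_handler(websocket))
        producer_task = asyncio.ensure_future(producer_handler(websocket))
        # Run both directions concurrently; stop when either one exits.
        done, pending = await asyncio.wait(
            [consumer_task, producer_task],
            return_when=asyncio.FIRST_COMPLETED,
        )
        for task in pending:
            task.cancel()

asyncio.get_event_loop().run_until_complete(wsconnect())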
I am in the exact same situation as you. I know that this is a very inelegant solution, because it still isn't full duplex, but I can't seem to find any example on the internet or Stack Overflow involving asyncio and the websockets module, which is what I used.
I don't think I completely understand your websockets example (is that server-side or client-side code?), but I'm going to explain my situation and "solution", and maybe that will be usable for you too.
So I have a server main function with a websocket listening for messages in a loop with recv(). When I send "start", it will start a function that sends data every second to the JavaScript client in the browser. But while the function is sending data, I sometimes want to pause or stop the stream of data by sending a stop message from my client. The problem is that when I use recv() while the data sending has already begun, the server stops sending data and only waits for a message. I tried threads, multiprocessing, and some other stuff, but eventually I came to the (hopefully temporary) solution of having the client send a "pong" message to the server immediately after it receives a piece of data, so that the server continues sending data on the next loop iteration, or stops sending if the message is "stop" instead. But yeah, this is not real duplex, just fast half-duplex...
code on my python "server"
async def start_server(self, websocket, webserver_path):
    self.websocket = websocket
    self.webserver_path = webserver_path
    while True:
        command = await self.websocket.recv()
        print("received command")
        if command == "start":
            await self.analyze()
        await asyncio.sleep(1)
in my analyze function:
for i, row in enumerate(data):
    await self.websocket.send(json.dumps(row))
    msg = await self.websocket.recv()
    if msg == "stop":
        self.stopFlag = True
        return
    await asyncio.sleep(1)
main
start_server = websockets.serve(t.start_server, "127.0.0.1", 5678)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
code on the javascript client
var ws = new WebSocket("ws://127.0.0.1:5678/");
ws.onmessage = function (event) {
    var datapoint = JSON.parse(event.data);
    console.log(counter);
    counter++;
    data.push(datapoint);
    if (data.length > 40) {
        var element = data.shift();
        render(data);
    }
    ws.send("pong"); // sending a dummy message to let the server continue
};
I know it is not THE solution, and I hope somebody else provides a better one, but since I have the same or a very similar problem and there are no other answers, I decided to post this, and I hope it helps.