Python WebSocket Client connecting but not sending messages

While using websocket-client to send test messages to a Django server, I cannot get a script to work that can both send and receive messages.
The following Python script is what I have attempted:
import websocket
import threading
import json
from time import sleep

# handle message event
def on_message(ws, message):
    print("message received: %s" % message)

# handle close event
def on_close(ws):
    print("channel closed")

# execute as main script
if __name__ == "__main__":
    websocket.enableTrace(True)
    # new app object connecting to headstation
    ws = websocket.WebSocketApp("ws://192.168.0.106:8000/?testI123", on_message=on_message, on_close=on_close)
    # run in a new thread - kill if script ends
    ws_listener = threading.Thread(target=ws.run_forever())
    ws_listener.daemon = True
    # start second thread
    ws_listener.start()
    # attempt connection 5 times
    timeout = 5
    while not ws.sock.connected and timeout:
        sleep(1)
        timeout -= 1
    # error on timeout
    if (timeout == 0):
        print("Connection to server timed out")
    print("test 1")
    # periodically send test message to server
    message_num = 0
    while ws.sock.connected:
        # send node id and message
        message = 'hello %d' % message_num
        ws.send(message)
        sleep(1)
        message_num += 1
This connects successfully, as indicated by the server, and receives messages sent from the server, but it does not send anything.
Periodically, something like this is displayed on the terminal:
send: b'\x8a\x84\xe2\xe9\xa8\xe2\x8f\xdc\xe2\x84'
If I simply use
ws = websocket.WebSocket()
ws.connect(url)
ws.send("hello")
then this works perfectly, which suggests something is wrong with my little Python script above.

Found the problem, stupid mistake of course:
ws_listener = threading.Thread(target=ws.run_forever())
should be:
ws_listener = threading.Thread(target=ws.run_forever)
without parentheses.
The first passes the result of calling ws.run_forever() as the Thread target (so the call runs, and blocks, before the thread is even created); the second passes the function object ws.run_forever itself as the target, which is the intended behaviour.
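A minimal generic illustration of the difference (a sketch, not specific to websocket-client):

import threading

def work():
    print("running in a worker thread")

# wrong: work() is called immediately in the main thread, and Thread
# receives its return value (None) as the target
t_wrong = threading.Thread(target=work())

# right: the function object itself is handed over, and the new
# thread calls it when started
t_right = threading.Thread(target=work)
t_right.start()
t_right.join()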

Related

Stomp.py not processing one message at a time

I am using Stomp.py to connect to a standard ActiveMQ server. I am simulating cases where the receiver crashes, and I want to be able to restart it and have it continue from the message after the one that caused the crash.
I have created two sample scripts:
putMessagesToQueue.py - this will put 56 messages onto the destination
readMessagesFromQueue.py - this will read messages from the destination. When it reads the 6th message it will raise an exception. Each message takes 1 second to process
Steps I take to run the test:
1. I run putMessagesToQueue.py
2. I run readMessagesFromQueue.py - it processes 5 messages successfully and an exception is raised on message 6
3. I terminate readMessagesFromQueue.py (ctrl-c)
4. I run readMessagesFromQueue.py again
In step 4 I want it to start processing from message 7. However, I don't see this.
If the receiver subscribes with ack='auto', then in step 4 it processes no messages - all the messages are gone from the queue and I have lost 50 messages!
If I use ack='client' or ack='client-individual', then in step 4 it starts again from the beginning and crashes again on message 6.
This seems to suggest that the receiver is not processing messages one at a time; instead it is taking every message at once and running through each one. I don't want this behaviour because I would like to scale up to running 5 receivers and I want the load distributed. At the moment the first receiver I start takes all the messages and starts churning through them, and receivers 2-4 just wait for new messages. I want the receivers to take messages one at a time instead!
Can anyone give any hints on what I am doing wrong?
Source
putMessagesToQueue.py
import stomp

stompurl = "127.0.0.1"
stompport = "61613"
stompuser = "admin"
stomppass = "admin"
destination = "/queue/testQueueWithCrash"

conn = stomp.Connection(host_and_ports=[(stompurl, stompport)])
conn.connect(stompuser, stomppass, wait=True)

for x in range(0, 5):
    conn.send(body="OK-BEFORE-CRASH", destination=destination)
conn.send(body="CRASH", destination=destination)
for x in range(0, 50):
    conn.send(body="OK-AFTER-CRASH", destination=destination)
readMessagesFromQueue.py
import stomp
import time

stompurl = "127.0.0.1"
stompport = "61613"
stompuser = "admin"
stomppass = "admin"
destination = "/queue/testQueueWithCrash"

conn = stomp.Connection(host_and_ports=[(stompurl, stompport)])
conn.connect(stompuser, stomppass, wait=True)

class StompConnectionListenerClass(stomp.ConnectionListener):
    processMessage = None
    def __init__(self, processMessage):
        self.processMessage = processMessage
    def on_error(self, headers, message):
        print('XX received an error "%s"' % message)
    def on_message(self, headers, message):
        self.processMessage(headers, message)

def messageProcessingFunction(headers, message):
    print('Main received a message "%s"' % message)
    if (message == "CRASH"):
        print("Message told processor to crash")
        raise Exception("Reached message which crashes receiver")
    time.sleep(1)  # simulate processing message taking time

stompConnectionListener = StompConnectionListenerClass(processMessage=messageProcessingFunction)
conn.set_listener('', stompConnectionListener)
print("Subscribing")
conn.subscribe(destination=destination, id=1, ack='auto')
#conn.subscribe(destination=destination, id=1, ack='client')
#conn.subscribe(destination=destination, id=1, ack='client-individual')
print("Terminate loop starting (Press ctrl+c when you want to exit)")
try:
    while True:
        time.sleep(10)
except KeyboardInterrupt:
    print('interrupted - so exiting!')
conn.close()
print("Receiver terminated")
Update 001
I managed to obtain the desired behaviour described above by changing the receive function to use ack='client-individual' and to manually send ack frames (see the new version below).
But I am still unable to get the receivers to process one message at a time. This can be demonstrated with the following steps:
1. I run putMessagesToQueue.py
2. I run readMessagesFromQueue2.py - it starts processing
3. In a new terminal I run readMessagesFromQueue2.py
At first the second readMessagesFromQueue2 does nothing until the first one crashes; only then does it start receiving messages. I want both instances of the receiver to be reading messages from the start.
readMessagesFromQueue2.py
import stomp
import time

stompurl = "127.0.0.1"
stompport = "61613"
stompuser = "admin"
stomppass = "admin"
destination = "/queue/testQueueWithCrash"

conn = stomp.Connection(host_and_ports=[(stompurl, stompport)])
conn.connect(stompuser, stomppass, wait=True)

class StompConnectionListenerClass(stomp.ConnectionListener):
    processMessage = None
    conn = None
    def __init__(self, processMessage, conn):
        self.processMessage = processMessage
        self.conn = conn
    def on_error(self, headers, message):
        print('XX received an error "%s"' % message)
    def on_message(self, headers, message):
        try:
            self.processMessage(headers, message)
        finally:
            self.conn.ack(id=headers["message-id"], subscription=headers["subscription"])

def messageProcessingFunction(headers, message):
    print('Main received a message "%s"' % message)
    if (message == "CRASH"):
        print("Message told processor to crash")
        raise Exception("Reached message which crashes receiver")
    time.sleep(1)  # simulate processing message taking time

stompConnectionListener = StompConnectionListenerClass(processMessage=messageProcessingFunction, conn=conn)
conn.set_listener('', stompConnectionListener)
print("Subscribing")
conn.subscribe(destination=destination, id=1, ack='client-individual')
print("Terminate loop starting (Press ctrl+c when you want to exit)")
try:
    while True:
        time.sleep(10)
except KeyboardInterrupt:
    print('interrupted - so exiting!')
conn.close()
print("Receiver terminated")
After lots of reading of different docs I found the problem.
ActiveMQ has a prefetch size option - https://svn.apache.org/repos/infra/websites/production/activemq/content/5.7.0/what-is-the-prefetch-limit-for.html
If you have a few messages that take a long time to process you can set it to 1. This is not appropriate in other situations.
I can do this in stomp.py with the following line:
conn.subscribe(destination=destination, id=1, ack='client-individual', headers={'activemq.prefetchSize': 1})
So using manual or auto ack was neither here nor there. The key is limiting prefetch to 1.

Python: Close TCP client connection to server after no server message for x seconds

I'm trying to create a Python TCP client. It is fully functional, but I want the client to close its connection if no "connected" message has been received from the server for x seconds. I currently have a separate thread for receiving messages from the server. Basically, whenever I receive a message that says "connected" I want to reset a timer.
I've tried a few different approaches. I tried a separate thread with a timer that keeps track of when to disconnect using global variables, which was messy and didn't work regardless. I tried writing my own threading subclass, but I still could not solve the problem and it introduced far more complexity than I need.
A simpler approach was to keep a timer in the while loop of my receive function. This works for disconnecting from the server, but it only times out when a message arrives (because recv blocks inside the try block), not the second the timer is up.
import socket, threading, time, sys

def main():
    host = sys.argv[1]
    port = int(sys.argv[2])
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    address = (host, port)
    sock.connect(address)
    rec_thread = threading.Thread(target=rec, args=(sock,))
    rec_thread.daemon = True
    rec_thread.start()
    while True:
        message = input()
        sock.sendall(str.encode(message))

def rec(sock):
    start_time = time.time()
    while True:
        elapsed_time = time.time() - start_time
        if (elapsed_time > 10):
            sock.close()
        try:
            msg = sock.recv(1024).decode("utf8")
            if msg == 'connected':
                start_time = time.time()
            else:
                print(msg)
        except KeyboardInterrupt:
            sock.close()
            break

if __name__ == "__main__":
    main()
If I use this approach then I will only actually close the connection after I receive a message from the server.
For example: I want it to disconnect immediately after 10 s, but if I don't get any message from the server until 14 s, then it won't disconnect until 14 s.
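One way to make the idle check fire on time (a sketch of my own, not from the original post) is to put a timeout on the socket itself with sock.settimeout(), so recv() hands control back every second instead of blocking until the next message arrives:

import socket
import time

def rec(sock, idle_limit=10):
    sock.settimeout(1)   # recv() now gives up after 1 s instead of blocking forever
    last_seen = time.time()
    while True:
        try:
            msg = sock.recv(1024).decode("utf8")
            if not msg:                      # server closed the connection
                break
            if msg == 'connected':
                last_seen = time.time()      # reset the idle timer
            else:
                print(msg)
        except socket.timeout:
            pass                             # nothing arrived this second; fall through to the check
        if time.time() - last_seen > idle_limit:
            sock.close()
            break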

Unable to receive all MQTT messages in Python

I am trying to send messages from one Python script to another using MQTT. One script is a publisher; the second is a subscriber. I send messages every 0.1 seconds.
Publisher:
import paho.mqtt.client as mqtt
from time import sleep

# testtopic and publisherqos are set elsewhere in the script ('test' and 2, see below)
client = mqtt.Client('DataReaderPub')
client.connect('127.0.0.1', 1883, 60)
print("MQTT parameters set.")
# Read from all files
count = 0
for i in range(1, 51):
    payload = "Hello world" + str(count)
    client.publish(testtopic, payload, int(publisherqos))
    client.loop()
    count = count + 1
    print(count, ' msg sent: ', payload)
    sleep(0.1)
Subscriber:
import paho.mqtt.client as mqtt

# on_message, testtopic and subscriberqos are defined elsewhere in the script
subclient = mqtt.Client("DynamicDetectorSub")
subclient.on_message = on_message
subclient.connect('127.0.0.1')
subclient.subscribe(testtopic, int(subscriberqos))
subclient.loop_forever()
mosquitto broker version - 3.1
mosquitto.conf has max inflight messages set to 0, persistence true.
publisher QOS = 2
subscriber QOS = 2
topic = 'test' in both scripts
When I run the subscriber and publisher in the same script, the messages are sent and received as expected. But when they are in separate scripts, I do not receive all the messages, and sometimes no messages at all. I run the subscriber first and then the publisher. I have also tried the subscriber with loop_start() and loop_stop(), waiting for a few minutes.
I am unable to debug this problem. Any pointers would be great!
EDIT:
I included client.loop() after publish. -> Same output as before
When I added print statements in 'on_connect' and 'on_disconnect', I noticed that the client's MQTT connection gets established and then disconnects almost immediately. This happens every second. I even got this message once -
[WinError 10053] An established connection was aborted by the software in your host machine
Keep Alive = 60
Is there any other parameter I should look at?
You need to call the network loop function in the publisher as well, so the client actually gets some time to do the IO (and the double handshake needed for QoS 2).
Add client.loop() after the call to client.publish() in the publisher:
import paho.mqtt.client as mqtt
import time

client = mqtt.Client('DataReaderPub')
client.connect('127.0.0.1', 1883, 60)
print("MQTT parameters set.")
# Read from all files
count = 0
for i in range(1, 51):
    payload = "Hello world" + str(count)
    client.publish("test", payload, 2)
    client.loop()
    count = count + 1
    print(count, ' msg sent: ', payload)
    time.sleep(0.1)
Subscriber code:
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    print(msg.topic + " " + str(msg.payload))

subclient = mqtt.Client("DynamicDetectorSub")
subclient.on_message = on_message
subclient.connect('127.0.0.1')
subclient.subscribe("test", 2)
subclient.loop_forever()
When I ran your code, the subscriber was often missing the last packet. I was not otherwise able to reproduce the problems you described.
If I rewrite the publisher like this instead...
from time import sleep
import paho.mqtt.client as mqtt

client = mqtt.Client('DataReaderPub')
client.connect('127.0.0.1', 1883, 60)
print("MQTT parameters set.")
client.loop_start()
# Read from all files
count = 0
for i in range(1, 51):
    payload = "Hello world" + str(count)
    client.publish('test', payload, 2)
    count = count + 1
    print(count, ' msg sent: ', payload)
    sleep(0.1)
client.loop_stop()
client.disconnect()
...then I no longer see the dropped packet. I'm using the loop_start/loop_stop methods here, which run the MQTT network loop asynchronously. I'm not sure exactly what was causing your dropped packets, but I suspect that the final message was still in the publisher's send queue when the code exited.
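If you prefer to flush that last message explicitly (a sketch, not part of the original answer), publish() returns an MQTTMessageInfo object whose wait_for_publish() blocks until the message has actually gone out, so you can wait on the final one before stopping the loop:

from time import sleep
import paho.mqtt.client as mqtt

client = mqtt.Client('DataReaderPub')
client.connect('127.0.0.1', 1883, 60)
client.loop_start()

count = 0
info = None
for i in range(1, 51):
    payload = "Hello world" + str(count)
    info = client.publish('test', payload, 2)   # returns an MQTTMessageInfo
    count = count + 1
    sleep(0.1)

if info is not None:
    info.wait_for_publish()   # block until the final QoS 2 publish completes
client.loop_stop()
client.disconnect()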
It turned out to be a silly bug.
As hardillb suggested I looked at the broker logs. It showed that the subscriber client was already connected.
I am using PyCharm after a really, really long time, and I had accidentally run the publisher and subscriber so many times that multiple instances were running in parallel in the output console. No wonder they got disconnected, since the client IDs were the same. Sorry for the trouble. BTW, client.loop() after publish is not needed. Thanks hardillb.
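If you want to guard against this kind of client-ID collision in future (a sketch of my own, not part of the original resolution), you can derive the ID from a UUID so that every run gets a distinct identity; the broker will otherwise kick off the older session that shares the same ID:

import uuid
import paho.mqtt.client as mqtt

# a unique client ID per process prevents two runs from evicting each other
subclient = mqtt.Client("DynamicDetectorSub-" + uuid.uuid4().hex[:8])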

python websocket client how to load all responses [duplicate]

I want to run a Python program that sends a message every second over WebSockets to a Tornado server. I have been using the example from websocket-client, shown below.
This example does not work, because ws.run_forever() blocks, so execution never reaches the while loop.
Can somebody give me an example of how to implement this correctly as a threaded class whose send method I can call while still receiving messages?
import websocket
import thread
import time

def on_message(ws, message):
    print message

def on_error(ws, error):
    print error

def on_close(ws):
    print "### closed ###"

def on_open(ws):
    pass

if __name__ == "__main__":
    websocket.enableTrace(True)
    ws = websocket.WebSocketApp("ws://echo.websocket.org/", on_message = on_message, on_error = on_error, on_close = on_close)
    ws.on_open = on_open
    ws.run_forever()
    while True:
        #do other actions here... collect data etc.
        for i in range(100):
            time.sleep(1)
            ws.send("Hello %d" % i)
        time.sleep(1)
There's an example on their GitHub page that does exactly that. It seems like you started from that example and moved the code that sends a message every second out of on_open, pasting it after the run_forever call (which, by the way, runs until the socket is disconnected).
Maybe you are having trouble with the basic concepts here. There is always going to be a thread dedicated to listening on the socket (in this case the main thread, which enters a loop inside run_forever waiting for messages). If you want something else going on at the same time, you'll need another thread.
Below is a different version of their example code where, instead of using the main thread as the "socket listener", another thread is created and run_forever runs there. I see it as a bit more complicated, since you have to write code to make sure the socket has connected when you could simply use the on_open callback, but maybe it will help you understand.
import websocket
import threading
from time import sleep

def on_message(ws, message):
    print message

def on_close(ws):
    print "### closed ###"

if __name__ == "__main__":
    websocket.enableTrace(True)
    ws = websocket.WebSocketApp("ws://echo.websocket.org/", on_message = on_message, on_close = on_close)
    wst = threading.Thread(target=ws.run_forever)
    wst.daemon = True
    wst.start()

    conn_timeout = 5
    while not ws.sock.connected and conn_timeout:
        sleep(1)
        conn_timeout -= 1

    msg_counter = 0
    while ws.sock.connected:
        ws.send('Hello world %d' % msg_counter)
        sleep(1)
        msg_counter += 1
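For comparison, the on_open-based variant that the GitHub example relies on looks roughly like this (a sketch in Python 3, using threading instead of the old thread module):

import threading
import time
import websocket

def on_message(ws, message):
    print(message)

def on_close(ws):
    print("### closed ###")

def on_open(ws):
    # start a worker thread that owns the sending loop,
    # while the main thread stays inside run_forever()
    def run():
        for i in range(100):
            time.sleep(1)
            ws.send("Hello %d" % i)
        time.sleep(1)
        ws.close()
    threading.Thread(target=run, daemon=True).start()

if __name__ == "__main__":
    websocket.enableTrace(True)
    ws = websocket.WebSocketApp("ws://echo.websocket.org/",
                                on_message=on_message,
                                on_close=on_close,
                                on_open=on_open)
    ws.run_forever()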
In 2023, they have an updated example for dispatching multiple WebSocketApps using an asynchronous dispatcher like rel.
import websocket, rel

addr = "wss://api.gemini.com/v1/marketdata/%s"
for symbol in ["BTCUSD", "ETHUSD", "ETHBTC"]:
    ws = websocket.WebSocketApp(addr % (symbol,), on_message=lambda w, m: print(m))
    ws.run_forever(dispatcher=rel, reconnect=3)
rel.signal(2, rel.abort)  # Keyboard Interrupt
rel.dispatch()
Hope it helps!

How do I clear the buffer upon start/exit in ZMQ socket? (to prevent server from connecting with dead clients)

I am using a REQ/REP socket pair for ZMQ communication in Python. Multiple clients attempt to connect to one server. Timeouts have been added to the client script to prevent an indefinite wait.
The problem is that when the server is not running and a client attempts to establish a connection, its message gets added to the queue buffer, which ideally should not even exist at that moment. When the server starts running and a new client connects, the previous client's data is taken in first by the server. This should not happen.
When the server starts, it assumes a client is connected to it, because that client had tried to connect previously and could not exit cleanly (since the server was down).
In the code below, when the client tries the first time, it gets ERR 03: Server down, which is correct, followed by Error disconnecting. When the server is up, I get ERR 02: Server busy for the first client that connects. This should not occur; the client should be able to connect seamlessly with the server now that it is up and running.
Server Code:
import zmq

def server_fn():
    context = zmq.Context()
    socket = context.socket(zmq.REP)
    socket.bind("tcp://192.168.1.14:5555")
    one = 1
    while one == 1:
        message = socket.recv()
        #start process if valid new connection
        if message == 'hello':
            socket.send(message) #ACK
            #keep session alive until application ends it.
            while one == 1:
                message = socket.recv()
                print("Received request: ", message)
                #exit connection
                if message == 'bye':
                    socket.send(message)
                    break
                #don't allow any client to connect if already busy
                if message == 'hello':
                    socket.send('ERR 00')
                    continue
                #do all data communication here
        else:
            socket.send('ERR 01: Connection Error')
    return

server_fn()
Client Code:
import zmq

class client:
    def clientInit(self):
        hello = 'hello'
        #zmq connection
        self.context = zmq.Context()
        print("Connecting to hello world server...")
        self.socket = self.context.socket(zmq.REQ)
        self.socket.connect("tcp://192.168.1.14:5555")
        #RCVTIMEO to prevent forever block
        self.socket.setsockopt(zmq.RCVTIMEO, 5000)
        #SNDTIMEO is needed since the server script may not be up yet
        self.socket.setsockopt(zmq.SNDTIMEO, 5000)
        try:
            self.socket.send(hello)
        except:
            print "Sending hello failed."
        try:
            echo = self.socket.recv()
            if hello == echo:
                #connection established.
                commStatus = 'SUCCESS'
            elif echo == 'ERR 00':
                #connection busy
                commStatus = "ERR 00. Server busy."
            else:
                #connection failed
                commStatus = "ERR 02"
        except:
            commStatus = "ERR 03. Server down."
        return commStatus

    def clientQuit(self):
        try:
            self.socket.send('bye')
            self.socket.recv()
        except:
            print "Error disconnecting."

cObj = client()
commStatus = cObj.clientInit()
print commStatus
cObj.clientQuit()
PS - I have a feeling the solution may lie in the correct usage of socket.bind and socket.connect.
Answering my own question:
The problem is that the first client sends a message which the server accepts as soon as it starts running, regardless of the status of that client.
To prevent this, two things have to be done. The most important is to call socket.close() to close the client connection. Secondly, the LINGER option can be set to a low value or zero, so that any pending messages are discarded that many milliseconds after the socket is closed.
class client:
    def clientInit(self):
        ...
        self.socket.setsockopt(zmq.LINGER, 100)
        ...

    def clientQuit(self):
        try:
            self.socket.send('bye')
            self.socket.recv()
        except:
            print "Error disconnecting."
        self.socket.close()
