Python Streamlit Application Sending Message Via ZMQ

I'm quite new to Streamlit, but I am trying to create a dashboard that can send messages via ZMQ to my server application, which acts as a subscriber.
import streamlit as st
import pandas as pd
import numpy as np
import altair as alt
import time
import zmq

class StreamLitManager(object):
    def __init__(self, log_file_path, zmq_port="8888"):
        self.log_file_path = log_file_path
        self.port = zmq_port
        self.context = None
        self.socket = None

    def InitConnections(self):
        self.context = zmq.Context()
        self.socket = self.context.socket(zmq.PUB)
        self.socket.bind("tcp://*:" + str(self.port))
        # Send Message
        # Close it right after using, this doesn't work as it doesn't close/terminate
        # I tried self.context.close() too, same situation
        self.context.term()

if __name__ == "__main__":
    submitted1 = st.form_submit_button('Submit 1')
    if submitted1:
        sm = StreamLitManager(user_input)
        sm.InitConnections()
So the idea is that whenever the user presses that Submit 1 button, I want to send a message. The problem I'm experiencing is ZMQError: Address in use. Based on my superficial understanding of Streamlit, the entire script gets re-run whenever some parameter changes. This doesn't seem to play well with me creating a new ZMQ publisher connection on each click of Submit. Am I doing something wrong, or is there a better design pattern I should be using?
Thanks

PUB/SUB is not a good pattern here, where the socket is constantly being started and stopped, as you will likely lose messages. See here for details: (py)zmq/PUB : Is it possible to call connect() then send() immediately and do not lose the message?
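One way around the Address in use error is to create the context and socket once and reuse them across Streamlit reruns, instead of binding on every click. Below is a minimal sketch, assuming Streamlit's st.cache_resource decorator is available (Streamlit 1.18+) and switching to PUSH/PULL to avoid the PUB/SUB slow-joiner message loss; the port, form name, and input field are placeholders, not from the original question:

import streamlit as st
import zmq

@st.cache_resource  # runs once per server process; reruns reuse the same socket
def get_socket(port="5556"):
    context = zmq.Context.instance()
    socket = context.socket(zmq.PUSH)  # PUSH/PULL won't silently drop like a freshly bound PUB
    socket.bind("tcp://*:" + port)
    return socket

socket = get_socket()
with st.form("send_form"):
    user_input = st.text_input("Message")
    if st.form_submit_button("Submit 1"):
        socket.send_string(user_input)  # the socket stays bound, so no "Address in use"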

Related

Using GLib.IOChannel to send data from one python process to another

I am trying to use GLib.IOChannels to send data from a client to a server running a GLib.MainLoop.
The file used for the socket should be located at /tmp/so/sock, and the server should simply run a function whenever it receives data.
This is the code I've written:
import sys
import gi
from gi.repository import GLib

ADRESS = '/tmp/so/sock'

def server():
    loop = GLib.MainLoop()
    with open(ADRESS, 'r') as sock_file:
        sock = GLib.IOChannel.unix_new(sock_file.fileno())
        GLib.io_add_watch(sock, GLib.IO_IN,
                          lambda *args: print('received:', args))
        loop.run()

def client(argv):
    sock_file = open(ADRESS, 'w')
    sock = GLib.IOChannel.unix_new(sock_file.fileno())
    try:
        print(sock.write_chars(' '.join(argv).encode('utf-8'), -1))
    except GLib.Error:
        raise
    finally:
        sock.shutdown(True)
        # sock_file.close()  # calling close breaks the script?

if __name__ == '__main__':
    if len(sys.argv) > 1:
        client(sys.argv[1:])
    else:
        server()
When called without arguments, it acts as the server; if called with arguments, it sends them to a running server.
When starting the server, I immediately get the following output:
received: (<GLib.IOChannel object at 0x7fbd72558b80 (GIOChannel at 0x55b8397905c0)>, <flags G_IO_IN of type GLib.IOCondition>)
I don't know why that is. Whenever I send something, I get an output like (<enum G_IO_STATUS_NORMAL of type GLib.IOStatus>, bytes_written=4) on the client side, while nothing happens server-side.
What am I missing? I suspect I misunderstood the documentation, as I did not find a concrete example.
I got the inspiration to use the IOChannel instead of normal sockets from this post: How to listen socket, when app is running in gtk.main()?
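A plausible cause, worth noting here: open() creates an ordinary file at /tmp/so/sock rather than a socket, so reads hit EOF immediately (which reports as G_IO_IN, hence the output on startup), and the client's writes simply land in the file where the server never sees them. A minimal sketch of the same program over an actual socket, assuming a UNIX datagram socket is acceptable (an assumption, since the question doesn't say which socket type was intended):

import os
import socket
import sys
from gi.repository import GLib

ADRESS = '/tmp/so/sock'

def server():
    os.makedirs(os.path.dirname(ADRESS), exist_ok=True)
    if os.path.exists(ADRESS):
        os.unlink(ADRESS)
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    sock.bind(ADRESS)  # an actual socket, not a plain file
    channel = GLib.IOChannel.unix_new(sock.fileno())

    def on_data(channel, condition):
        print('received:', os.read(channel.unix_get_fd(), 4096))
        return True  # returning True keeps the watch alive

    GLib.io_add_watch(channel, GLib.IO_IN, on_data)
    GLib.MainLoop().run()

def client(argv):
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    sock.sendto(' '.join(argv).encode('utf-8'), ADRESS)

if __name__ == '__main__':
    client(sys.argv[1:]) if len(sys.argv) > 1 else server()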

Why is ZeroMQ multipart sending/receiving the wrong messages?

In Python I'm creating an application that also uses ZeroMQ. I'm using the PUSH/PULL pattern to send the loading status of one script to another. The receiving code in the PULL script runs inside a Thread. The PULL script looks like this:
import time
from threading import Thread
import threading
import os
import zmq
import sys

context = zmq.Context()
zmqsocket = context.socket(zmq.PULL)
zmqsocket.bind("tcp://*:5555")

class TaskstatusUpdater(Thread):
    def __init__(self):
        Thread.__init__(self)

    def run(self):
        while True:
            # Wait for next request from client
            task_id = int(zmqsocket.recv_multipart()[0])
            taskcolorstat = int(zmqsocket.recv_multipart()[1])
            taskstatus = zmqsocket.recv_multipart()[2]
            time.sleep(0.1)
            print(task_id, taskstatus, taskcolorstat)

thread = TaskstatusUpdater()
thread.start()
The PUSH part sends constantly updates about the status of the other script. It looks something like this:
import time
import sys
import zmq

# zmq - client startup and connecting
try:
    context = zmq.Context()
    print("Connecting to server…")
    zmqsocket = context.socket(zmq.PUSH)
    zmqsocket.connect("tcp://localhost:5555")
    print("successful")
except:
    print('error could not connect to service')
# zmq - client startup and connecting

for i in range(10):
    zmqsocket.send_multipart([b_task_id, b"0", b"first message"])
    time.sleep(3)  # doing stuff
    zmqsocket.send_multipart([b_task_id, b"1", b"second message"])
b_task_id is generated earlier in the program and is a simple binary value created from an integer. There are multiple of those PUSH scripts running at the same time, and through b_task_id I can tell which script is responding to the PULL.
It is now often the case that those multipart messages get mixed up with each other. Can somebody explain to me why that is and how I can fix this problem?
For example, sometimes the output is:
2 b'second message' 0
The output that I was expecting is:
2 b'second message' 1
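The root cause is that each recv_multipart() call consumes an entire multipart message, so the three calls per loop iteration swallow frames from three different messages and mix them up. A minimal sketch of the fix, unpacking all frames from a single call:

def run(self):
    while True:
        # One recv_multipart() returns ALL the frames of ONE message;
        # calling it three times consumed three separate messages.
        task_id, taskcolorstat, taskstatus = zmqsocket.recv_multipart()
        print(int(task_id), taskstatus, int(taskcolorstat))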

Is it possible to use ZeroMQ sockets in a Django Channels Consumer?

I've got a hobby project of building an autonomous boat. I have now built a GUI using a Vue.js frontend and a Django backend. In this GUI I can see the boat on a map and send commands to it. Those commands are sent over ZeroMQ sockets, which works great.
I'm using Django Channels to send the commands from the frontend over a websocket to the backend, and from there I send them on over the ZeroMQ socket. My consumer (which works great) looks as follows:
import zmq
from channels.generic.websocket import WebsocketConsumer
from .tools import get_vehicle_ip_address, get_vehicle_steer_socket

context = zmq.Context()

class SteerConsumer(WebsocketConsumer):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.forward_steer_socket = get_vehicle_steer_socket(context, get_vehicle_ip_address())

    def connect(self):
        self.accept()

    def receive(self, text_data):
        print("Passing on the commands from the frontend:", text_data, "to the boat")
        self.forward_steer_socket.send_string(text_data)
Next to this, I also receive location information from the boat over a ZeroMQ socket, which I save to the database. I'm running this in a separate script, and the frontend simply polls the backend every 2 seconds for updates. Here's the script receiving the boat info:
import os
import django
import zmq

os.environ['DJANGO_SETTINGS_MODULE'] = 'server.settings'
django.setup()

# Socket to receive the boat location
context = zmq.Context()
location_socket = context.socket(zmq.SUB)
location_socket.setsockopt(zmq.CONFLATE, True)
location_socket.bind('tcp://*:6001')
location_socket.setsockopt_string(zmq.SUBSCRIBE, '')

while True:
    boat_location = location_socket.recv_json()
    print(boat_location)
    # HERE I STORE THE BOAT LOCATION in the DB
I would now like to add this location_socket to the Consumer so that the Consumer can also receive the boat location on the ZeroMQ socket and send it to the frontend over the websocket.
I can of course simply add the location_socket to the Consumer's __init__() method as follows:
def __init__(self, *args, **kwargs):
    super().__init__(*args, **kwargs)
    self.forward_steer_socket = get_vehicle_steer_socket(context, get_vehicle_ip_address())
    self.location_socket = context.socket(zmq.SUB)
    self.location_socket.setsockopt(zmq.CONFLATE, True)
    self.location_socket.bind('tcp://*:6001')
    self.location_socket.setsockopt_string(zmq.SUBSCRIBE, '')
But I obviously cannot include the while True loop in the Consumer, so from here I'm not sure what to do. I actually don't know whether this is even possible, since Django Channels seems to have been made specifically for websockets. I guess I could start using the multithreading or multiprocessing libraries, but that is uncharted territory for me.
Does anybody know whether and how it is possible to make a ZeroMQ listener in a Django Channel?
It is possible to send a message to your consumer directly from your separate script via the channel layer: https://channels.readthedocs.io/en/latest/topics/channel_layers.html#using-outside-of-consumers
When a new client connects to your consumer, inside SteerConsumer you have self.channel_name, which is unique for that client. To send a message to that consumer you just have to execute (in your example, from the separate script):
from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer

channel_layer = get_channel_layer()
# "channel_name" should be replaced with the proper name, of course;
# channel_layer.send() is a coroutine, so wrap it for synchronous code
async_to_sync(channel_layer.send)("channel_name", {
    "type": "chat.message",
    "text": "Hello there!",
})
and add inside your SteerConsumer a method to handle this message:

def chat_message(self, event):
    # Handles the "chat.message" event when it's sent to us.
    self.send(text_data=event["text"])
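To push every boat location to all connected websocket clients, the separate location script can hand each message to the channel layer as it arrives. A sketch along those lines, assuming the consumer joins a hypothetical group named "boat" in connect() (e.g. async_to_sync(self.channel_layer.group_add)("boat", self.channel_name)); the group name is an assumption, not from the question:

import json
import os

import django
import zmq

os.environ['DJANGO_SETTINGS_MODULE'] = 'server.settings'
django.setup()

from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer

context = zmq.Context()
location_socket = context.socket(zmq.SUB)
location_socket.setsockopt(zmq.CONFLATE, True)
location_socket.bind('tcp://*:6001')
location_socket.setsockopt_string(zmq.SUBSCRIBE, '')

channel_layer = get_channel_layer()
while True:
    boat_location = location_socket.recv_json()
    # fan each location out to every consumer in the "boat" group
    async_to_sync(channel_layer.group_send)("boat", {
        "type": "chat.message",
        "text": json.dumps(boat_location),
    })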

pyzmq create a process with its own socket

I have some code that's monitoring some other changing files. What I would like to do is start that code, which uses ZeroMQ, with a different socket. The way I'm doing it now seems to cause assertions to fail somewhere in libzmq, since I may be reusing the same socket. How do I ensure that when I create a new process from the Monitor class, the context will not be reused? That's what I think is going on; if you can tell there is some other stupidity on my part, please advise.
here is some code:
import zmq
from zmq.eventloop import ioloop
from zmq.eventloop.zmqstream import ZMQStream

class Monitor(object):
    def __init__(self):
        self.context = zmq.Context()
        self.socket = self.context.socket(zmq.DEALER)
        self.socket.connect("tcp://127.0.0.1:5055")
        self.stream = ZMQStream(self.socket)
        self.stream.on_recv(self.somefunc)

    def initialize(self, id):
        self._id = id

    def somefunc(self, something):
        """work here and send back results if any"""
        import json
        jdecoded = json.loads(something)
        if self._id == jdecoded['_id']:
            """good, I'm the right monitor for you"""
            work = jdecoded['message']
            results = algorithm(work)
            self.socket.send(json.dumps(results))
        else:
            """let some other process deal with it, not mine"""
            pass

class Prefect(object):
    def __init__(self, id):
        self.context = zmq.Context()
        self.socket = self.context.socket(zmq.DEALER)
        self.socket.bind("tcp://127.0.0.1:5055")
        self.stream = ZMQStream(self.socket)
        self.stream.on_recv(self.check_if)
        self._id = id
        self.monitors = []

    def check_if(self, message):
        """find out from the message's id whether we have
        started a process for it previously"""
        import json
        jdecoded = json.loads(message)
        this_id = jdecoded['_id']
        if this_id in self.monitors:
            pass
        else:
            """start a new process for it; it should have its own socket"""
            new = Monitor()
            from multiprocessing import Process
            newp = Process(target=new.initialize, args=(this_id,))
            newp.start()
            self.monitors.append(this_id)  # ensure it's remembered
What is going on is that I want all the monitor processes and a single Prefect process listening on the same port, so when the Prefect sees a request it hasn't seen, it starts a process for it; all the processes that already exist should probably listen too, but ignore messages not meant for them.
As it stands, if I do this I get a crash, possibly related to concurrent access of the same ZMQ socket by something (I tried threading.Thread; it still crashes). I read somewhere that concurrent access of a ZMQ socket by different threads is not possible. How would I ensure that new processes get their own ZMQ sockets?
EDIT:
The main deal in my app is that a request comes in via a ZMQ socket, and the process(es) listening react to the message as follows:
1. If it's directed at that process, judged by the _id field (one of the monitors matches the message's _id), do some reading on a file and reply. If none match, then:
2. If the message's _id field is not recognized, all monitors ignore it, but the Prefect creates a process to handle that _id and all future messages to that id.
3. I want all the messages to be seen by the monitor processes as well as the Prefect process; that seems easiest.
4. All the messages are very small, on average ~4096 bytes.
5. The monitor does some non-blocking reads, and on each ioloop iteration it sends what it has found out.
More edits: the Prefect process binds now, and it will receive messages and echo them so they can be seen by monitors. This is what I have in mind as the architecture, but it's not final.
All the messages arrive from remote users over a browser that lets the server know what a client wants, and the server sends the message to the backend via ZMQ (I did not show this, but it is not hard), so in production they might not bind/connect to localhost.
I chose DEALER since it allows async / unlimited messages in either direction (see point 5), DEALER can bind to DEALER, and the initial request/reply can arrive from either side. The other pairing that can do this is possibly DEALER/ROUTER.
You are correct that you cannot keep using the same socket in a subprocess (multiprocessing usually uses fork to create subprocesses). In general, what this means is that you don't want to create the socket that will be used in the subprocess until after the subprocess starts.
Since, in your case, the socket is an attribute on the Monitor object, you probably don't want to create the Monitor in the main process at all. That would look something like this:
def start_monitor(this_id):
    monitor = Monitor()
    monitor.initialize(this_id)
    # run the eventloop, or this will return immediately and destroy the monitor

... inside Prefect.check_if():

    proc = Process(target=start_monitor, args=(this_id,))
    proc.start()
    self.monitors.append(this_id)
rather than your example, where the only thing the subprocess does is assign an ID and then exit, ultimately having no effect.
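Concretely, the pattern described above is to move all ZMQ setup into the function the child process runs, so nothing ZMQ-related crosses the fork. A minimal sketch along those lines, where algorithm() is the worker function from the question and the event loop is replaced by a plain receive loop for brevity:

from multiprocessing import Process
import json
import zmq

def start_monitor(this_id):
    # Created AFTER the fork: the child owns its own context and socket,
    # and never touches a zmq.Context inherited from the parent.
    context = zmq.Context()
    socket = context.socket(zmq.DEALER)
    socket.connect("tcp://127.0.0.1:5055")
    while True:
        jdecoded = json.loads(socket.recv())
        if jdecoded['_id'] == this_id:
            results = algorithm(jdecoded['message'])
            socket.send_string(json.dumps(results))

# inside Prefect.check_if():
#     proc = Process(target=start_monitor, args=(this_id,))
#     proc.start()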

Simple continuously running XMPP client in python

I'm using python-xmpp to send Jabber messages. Everything works fine, except that every time I want to send messages (every 15 minutes) I need to reconnect to the Jabber server, and in the meantime the sending client is offline and cannot receive messages.
So I want to write a really simple, indefinitely running xmpp client, that is online the whole time and can send (and receive) messages when required.
My trivial (non-working) approach:
import time
import xmpp

class Jabber(object):
    def __init__(self):
        server = 'example.com'
        username = 'bot'
        passwd = 'password'
        self.client = xmpp.Client(server)
        self.client.connect(server=(server, 5222))
        self.client.auth(username, passwd, 'bot')
        self.client.sendInitPresence()
        self.sleep()

    def sleep(self):
        self.awake = False
        delay = 1
        while not self.awake:
            time.sleep(delay)

    def wake(self):
        self.awake = True

    def auth(self, jid):
        self.client.getRoster().Authorize(jid)
        self.sleep()

    def send(self, jid, msg):
        message = xmpp.Message(jid, msg)
        message.setAttr('type', 'chat')
        self.client.send(message)
        self.sleep()

if __name__ == '__main__':
    j = Jabber()
    time.sleep(3)
    j.wake()
    j.send('receiver@example.org', 'hello world')
    time.sleep(30)
The problem here seems to be that I cannot wake it up. My best guess is that I need some kind of concurrency. Is that true, and if so, how would I best go about it?
EDIT: After looking into all the options concerning concurrency, I decided to go with twisted and wokkel. If I could, I would delete this post.
There is a good example on the homepage of xmpppy itself (which is another name for python-xmpp) that does almost what you want: xtalk.py
It is basically a console Jabber client, but it shouldn't be hard to rewrite it into the bot you want.
It's always online and can send and receive messages. I don't see a need for the multiprocessing (or other concurrency) modules here, unless you need to receive and send messages at the exact same time.
A loop over the Process(timeout) method is a good way to wait for and process any new incoming stanzas while keeping the connection up.
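A minimal sketch of that loop in xmpppy, reusing the server and credentials from the question; RegisterHandler dispatches incoming messages, and Process(1) blocks for at most a second per iteration:

import xmpp

client = xmpp.Client('example.com')
client.connect(server=('example.com', 5222))
client.auth('bot', 'password', 'bot')
client.sendInitPresence()

def on_message(conn, msg):
    # called from within Process() for every incoming message stanza
    print('received:', msg.getBody())

client.RegisterHandler('message', on_message)

while True:
    client.Process(1)  # handle incoming stanzas, waiting up to 1 second
    # outgoing messages can be sent here at any point, e.g.:
    # client.send(xmpp.Message('receiver@example.org', 'hello world'))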
