Because I know nothing about Python GUIs, I need some help building a mechanism for
making requests from HTML, CSS, or Ajax (via a node.js, Apache, or nginx server) to a Python program, so that the requests execute certain functions.
For example,
I have a Python program running a while True: loop, but at a given moment I want to send it an interrupt signal along with some data, so that it executes a function: a kind of event system.
First, I bind an event to the program:
#program.bind(EVENT_NAME, EVENT_HANDLER)
program.bind(miaowcat, miaowfunc)
The program runs and, any time an interrupt is performed, executes the function miaowfunc, passing the event's data to *args:
def miaowfunc(*args):
It's a prototype, so args could carry numeric signals or other elements.
I don't know how to do this.
This kind of problem is what messaging systems are designed to solve.
You write some code that needs to be executed at a trigger (this is called a consumer).
Your code that needs to execute the function (called a producer) creates a message and sends it to a broker.
The broker takes your message and puts it on a queue.
The consumer listens on this queue for messages; when it sees one, it will "wake up", run, and then go back to sleep.
For Python, the following are typically used:
Messaging Broker = RabbitMQ
Task Queue = Celery
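As a sketch of how this maps onto the question above (assuming RabbitMQ running locally; the broker URL and the miaowfunc task body are illustrative), the consumer side could look like this:

from celery import Celery

app = Celery('tasks', broker='amqp://guest@localhost//')

@app.task
def miaowfunc(*args):
    # Placeholder body: replace with whatever the event should trigger
    print('miaow!', args)

The producer then fires the event with miaowfunc.delay(101, 'some data'), and a worker started with celery -A tasks worker picks it up and runs it.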
I'm currently implementing a containerized python app to process messages from a queue.
The main process would poll the queue every n seconds and then process all the messages it receives. However, I would also like this app to expose an API with healthchecks and other endpoints that could send jobs to the main process.
I was wondering what are the standard libraries to do this in python, if they exist. I have seen some examples using Background tasks on FastAPI but this would not meet my requirements as the service should poll the queue on startup without any request to its endpoints.
I have also seen the Celery library mentioned, but it seems like a large complexity leap from what I need.
Is there a simple way to run a FastAPI application 'side-by-side' with a long running process in a way that both can communicate?
The multiprocessing module has its own version of queues. In your calling program, first create a queue, like this:
import multiprocessing as mp
import queue  # mp.Queue raises the standard queue.Empty exception

self.outq_log = mp.Queue()
Then pass this Queue object to the process by putting it in the arguments when you call mp.Process() to start your long-running task.
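For example (a sketch; run_task is an assumed name for your task function):

self.proc = mp.Process(target=run_task, args=(self.outq_log,))
self.proc.start()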
A function in your calling program to check for messages in the queue would then look like this:
def service_queues(self):
    # Look for data from the process and print it in the window
    try:
        if self.outq_log is not None:
            x = self.outq_log.get(block=False)
            self.logtxt.write(x)
    except queue.Empty:
        pass
Finally, in your long-running process you can send items to the caller using:
outq_log.put(stuff)
If you want to send messages the other way, to the task from the caller, you can create a separate queue and do the "put"ting in the caller and the "get"ting in the task.
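Putting it together, a minimal end-to-end sketch (long_task stands in for your long-running process):

import multiprocessing as mp
import queue

def long_task(outq):
    outq.put('hello from the worker')  # report back to the caller

if __name__ == '__main__':
    outq_log = mp.Queue()
    proc = mp.Process(target=long_task, args=(outq_log,))
    proc.start()
    try:
        print(outq_log.get(timeout=1))  # wait briefly for a message
    except queue.Empty:
        pass
    proc.join()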
You can run FastAPI programmatically like in this example, after starting your other tasks in a thread or using asyncio. This way you should be able to communicate from the server endpoints to whatever objects you started before.
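A minimal sketch of that idea, with uvicorn serving the API in the main thread and the poller in a background thread (the jobs queue, the endpoints, and the poll interval are illustrative assumptions, not a standard library pattern):

import queue
import threading
import time

import uvicorn
from fastapi import FastAPI

jobs = queue.Queue()  # shared channel between the API and the poller
api = FastAPI()

@api.get('/health')
def health():
    return {'status': 'ok'}

@api.post('/jobs/{name}')
def submit_job(name: str):
    jobs.put(name)  # hand work to the polling loop
    return {'queued': name}

def poller():
    while True:
        # drain any jobs submitted through the API, then poll the real queue
        while not jobs.empty():
            print('processing', jobs.get())
        time.sleep(5)  # poll every n seconds

threading.Thread(target=poller, daemon=True).start()
uvicorn.run(api, host='127.0.0.1', port=8000)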
I have a flask-socketio server running on multiple pods, using redis as a message queue. I want to ensure that emits from external processes reach their destination 100% of the time, or to know when they have failed.
When process A emits an event to a socket that's connected to process B, the event goes through the message queue to process B, and on to the client. Is there any way I can intercept the outgoing emit on process B? Ideally I'd then use a worker to check after a few seconds whether the message reached the client (via a confirm event emitted from the client), or else emit it again.
This code runs on process A:
@app.route('/ex')
def ex_route():
    socketio.emit('external', {'text': f'sender: {socket.gethostname()}, welcome!'}, room='some_room')
    return jsonify(f'sending message to room "some_room" from {socket.gethostname()}')
This is the output from process A
INFO:socketio.server:emitting event "external" to some_room [/]
INFO:geventwebsocket.handler:127.0.0.1 - - [2019-01-11 13:33:44] "GET /ex HTTP/1.1" 200 177 0.003196
This is the output from process B
INFO:engineio.server:9aab2215a0da4816a45e3fdc1e449fce: Sending packet MESSAGE data 2["external",{"text":"sender: *******, welcome!"}]
There is currently no mechanism to do what you ask, unfortunately.
I think you basically have two approaches to go about this:
Always run your emits from the main server(s). If you need to emit from an auxiliary process, use an IPC mechanism to notify the server so that it can run the emit on its behalf. And now you can use callbacks.
Ignore the callbacks, and instead have the client acknowledge receipt of the event by emitting back to the server.
Adding callback support for auxiliary processes should not be terribly difficult, by the way. I never needed that functionality myself and you are the first to ask about it. Maybe I should look into that at some point.
Edit: after some thought, I came up with a 3rd option:
You can connect your external process to the server as a client, instead of using the "emit-only" option. If this process is a client, it can emit an event to the server, which in turn the server can relay to the external client. When the client replies to the server, the server can once again relay the response to the external process, which is now another client with full send and receive capabilities.
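A sketch of this third option, assuming the python-socketio client package (the URL and event names are illustrative):

import socketio

sio = socketio.Client()

@sio.on('reply')
def on_reply(data):
    # The server relayed the real client's acknowledgement back to us
    print('client acknowledged:', data)

sio.connect('http://localhost:5000')
sio.emit('external', {'text': 'welcome!'})  # the server relays this onward
sio.wait()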
Using IPC is not very robust. In particular, if the server is receiving a lot of requests, there is a risk that you receive a message but never relay it, and that message may be vital.
Use celery, zmq, or redis itself for the interconnect. The most natural option is using socketio itself, as Miguel mentioned, since it is already waiting for requests, has the environment, and can emit at any time.
I've used a greenlet hack over threads: a greenlet is lighter than a thread and runs in the same environment, which lets it send the message while your main thread awaits the socket in non-blocking mode. Basically you write a thread, then apply eventlet or gevent to the whole code via monkeypatching, and the thread becomes a greenlet, an in-between function call. You put a sleep on it so it doesn't hog all resources, and you have your sender. Greenlets share the environment easily; they are not bound by I/O, just CPU (which is the same for threads in Python, but greenlets are even more lightweight because there is no OS-level context switch at all).
But as soon as CPU load increased, I switched over to client/server. Introducing IPC would have required massive rewrites from the ground up.
I am attempting to send a constant stream of messages from the WebSocket client to the server (Python/Flask/SocketIO).
I have a submit button on a simple page that starts off a long running job.
$('form#emit').submit(function(event) {
    socket.emit('submit', {...});
    return false;
});
In the Python code I kick off the long-running job like so:
@socketio.on('submit', namespace='/namespace')
def long_running_function(message):
    long_running_job_code(message)
What I would expect is for Python to kick off long_running_job_code and go back to servicing the loop driven by setInterval.
On the client, the 'loop':
setInterval(function() { pinger() }, 1000);
function pinger()
{
    socket.emit('ping', 'test');
}
On the server:
@socketio.on('ping', namespace='/namespace')
def ping(message):
    emit('my response', {'data': '.'})
Before the submit button is hit, the 'ping' function is placing .... on the screen, but it stops performing that function while long_running_job_code is executing.
I believe the issue is blocking on the server side, but I am not sure. The long running job has emits that are still getting to the client, but the ping emit stops while the long running job is going.
Anyone have an idea on how to get around this?
Thanks!
You do not mention this, but my guess is that you are using eventlet or gevent as the web server of your application, because that is what makes the most sense when working with WebSocket and Flask-SocketIO in particular.
Eventlet and gevent are coroutine servers. They can handle multi-tasking, but this is done cooperatively. That means that for a context switch from one task to another to occur, the first task must release the CPU. The CPU is automatically released transparently when certain I/O calls are made, like when reading or writing from a socket. You can also explicitly release the CPU by calling the sleep function. If a task goes off to do some long calculation without doing any I/O or explicitly releasing the CPU, then the whole thing is going to block.
You basically have two ways to keep the machinery going while you run your long function. One way is to regularly issue sleep calls. When a sleep call occurs, the scheduler will give the CPU to other task(s) before returning from the sleep. If your function has a loop, for example, you can add this on each iteration:
eventlet.sleep(0)
The other way to not block is to put the long function in a subprocess, which will probably require more changes than just adding sleeps here and there.
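For instance, with Flask-SocketIO the handler from the question could yield on each iteration (a sketch; work_chunk is an assumed placeholder for one slice of the job):

@socketio.on('submit', namespace='/namespace')
def long_running_function(message):
    for chunk in range(100):
        work_chunk(chunk)   # one slice of the long job
        socketio.sleep(0)   # release the CPU so 'ping' events get served

socketio.sleep() works with both eventlet and gevent, so it avoids hard-coding eventlet.sleep(0).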
Hope this helps!
I am implementing a MQTT worker in python with paho-mqtt.
Are all the on_message() callbacks run in different threads, so that if one of the tasks is time-consuming, other messages can still be processed?
If not, how to achieve this behaviour?
The Python client doesn't actually start any threads; that's why you have to call the loop function to handle network events.
In Java you would use the onMessage callback to put the incoming message on to a local queue that a separate pool of threads will handle.
Because of the GIL, Python threads don't run CPU-bound code in parallel, but the multiprocessing module supports spawning processes that act like threads. Details of multiprocessing can be found here:
https://docs.python.org/2.7/library/multiprocessing.html
EDIT:
On looking a little closer at the paho Python code, it appears it can actually start a new thread (using the loop_start() function) to handle the network side of things that previously required the loop functions. This does not change the fact that all calls to the on_message callback will happen on this thread. If you need to do large amounts of work in this callback, you should definitely look at spinning up a pool of new threads to do it.
http://www.tutorialspoint.com/python/python_multithreading.htm
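For example, a sketch of that approach with paho-mqtt 1.x (the broker host, topic, and handle function are assumptions):

from concurrent.futures import ThreadPoolExecutor

import paho.mqtt.client as mqtt

executor = ThreadPoolExecutor(max_workers=4)

def handle(msg):
    print('slow work on', msg.topic)  # time-consuming processing goes here

def on_message(client, userdata, msg):
    executor.submit(handle, msg)  # return immediately so the network loop keeps running

client = mqtt.Client()
client.on_message = on_message
client.connect('localhost')
client.subscribe('some/topic')
client.loop_forever()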
I am developing a small Python program for the Raspberry Pi that listens for some events on a Zigbee network.
The way I've written this is rather simplistic: I have a while True: loop checking for a unique ID (UID) from the Zigbee. If a UID is received, it's looked up in a dictionary containing some callback methods. So, for instance, in the dictionary the key 101 is tied to a method called PrintHello().
So if that key/UID is received, the method PrintHello will be executed - pretty simple, like so:
if UID in self.expectedCallBacks:
    self.expectedCallBacks[UID]()
I know this approach is probably too simplistic. My main concern is, what if the system is busy handling a method and the system receives another message?
On an embedded MCU I can handle this easily with a circular buffer + interrupts, but I'm a bit lost when it comes to doing this on an RPi. Do I need to implement a new thread for the Zigbee module that basically fills a buffer that the callback handler can then retrieve/read from?
I would appreciate any suggestions on how to implement this more robustly.
Threads can definitely help to some degree here. Here's a simple example using a ThreadPool:
from multiprocessing.pool import ThreadPool

pool = ThreadPool(2)  # Create a 2-thread pool

while True:
    uid = zigbee.get_uid()
    if uid in self.expectedCallbacks:
        pool.apply_async(self.expectedCallbacks[uid])
That will kick off the callback in a thread in the thread pool, and should help prevent events from getting backed up before you can send them to a callback handler. The ThreadPool will internally handle queuing up any tasks that can't be run when all the threads in the pool are already doing work.
However, remember that early Raspberry Pi models have only one CPU core, so you can't execute more than one CPU-bound operation concurrently (and that's even ignoring the limitations of threading in Python caused by the GIL, which is normally worked around by using multiple processes instead of threads). That means no matter how many threads/processes you have, only one can have the CPU at a time. For that reason, you probably don't want more than one thread actually running the callbacks; as you add more, you'll just slow things down, due to the OS needing to constantly switch between threads.