Python Websocket Client - Multiple Subprocesses (45+)

I have an interesting task ahead of me. I need to connect to 45 different websocket streams and route them all into the correct places. Sometimes those streams stop working and need to be relaunched. Right now I'm running a process via bash for each Python file to ensure nothing crashes, but I'd rather have a single "server", if you will, managing the whole process.
The goal is to eventually have a dashboard of sorts showing the status of each named subprocess, plus some stats. I'd like to be able, via the dashboard, to relaunch or stop any of the processes without killing the server, and to relaunch those processes again at any time.
Is this possible? Any direction?
Most Python websocket examples are server/client. I'm just looking for an indefinite client with a relaunch on error, without ever killing off the server.

I think Supervisor is what you are looking for.
It has no web dashboard out of the box, but there are plugins that implement one.
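For example, a minimal supervisord program entry might look like the sketch below, one entry per stream script, with autorestart relaunching a crashed client (the program name and paths are illustrative assumptions):

```ini
; One [program:x] section per websocket stream script.
[program:stream_btcusd]
command=python /opt/streams/stream_btcusd.py
autostart=true
autorestart=true        ; relaunch automatically if the client dies
startsecs=5             ; must stay up 5s to count as RUNNING
stderr_logfile=/var/log/streams/stream_btcusd.err.log
```

You can then stop or relaunch an individual stream without touching the others via supervisorctl, e.g. supervisorctl restart stream_btcusd, which also maps well onto a dashboard later.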

Related

Detect shutdown event with python

I have a Minecraft Bedrock Edition server running on our shared PC, and I would like to interface with it via Python. One problem, however, is that my brothers sometimes restart the PC, or Windows Update does. I need to know how to detect that shutdown event and send the shutdown command to the server before the restart. I am using the subprocess library.
So, what you will need is the Win32 API and the function described here. You can use it to add what's called a control handler, which will run whenever the program is being shut down or terminated for any reason, including system shutdown. You can find a list of the different codes that can be passed to the handler, and their meanings, here. Ideally, your handler should just shut down the server, wait for it to finish shutting down, and then return.
I don't have any personal experience with the library, but it should be fairly straightforward.
EDIT: As noted by @ErykSun, you will need to create a hidden window in order to receive these events. To be quite honest, I'm not sure how to create that hidden window. Some documentation suggests that running your application as a service may also work. I will look into this more if I get time.
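For what it's worth, a minimal sketch of such a control handler using pywin32 might look like this; the bedrock_server.exe path and the "stop" console command are assumptions based on the stock Bedrock server:

```python
# Sketch: register a console control handler that stops the Minecraft server
# cleanly when Windows shuts down. Assumes pywin32 is installed and that the
# Bedrock server accepts "stop" on stdin (as the stock server does).
import subprocess

import win32api
import win32con

server_proc = subprocess.Popen(
    ["bedrock_server.exe"], stdin=subprocess.PIPE, text=True
)

def on_console_event(ctrl_type):
    shutdown_events = (
        win32con.CTRL_CLOSE_EVENT,
        win32con.CTRL_LOGOFF_EVENT,
        win32con.CTRL_SHUTDOWN_EVENT,
    )
    if ctrl_type in shutdown_events:
        server_proc.stdin.write("stop\n")  # ask the server to save and exit
        server_proc.stdin.flush()
        server_proc.wait()                 # block until it has shut down
        return True                        # tell Windows the event was handled
    return False                           # let other handlers see the event

win32api.SetConsoleCtrlHandler(on_console_event, True)
server_proc.wait()
```

Note that Windows only gives the handler a short grace period on shutdown, so the save needs to finish quickly.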

Reacting to Webrequest while answering it - Python 3.7

I'm currently trying to figure out how to host a simple webserver that handles POST requests with Python 3.7. My problem is that I want to answer requests right after they are received, while the submitted POST data is used to play back a specific audio file on my Raspberry Pi. In two days of googling I couldn't figure out how to keep the webserver running constantly while processing the incoming requests in the background.
I tried to use the subprocess module to run the playback script in the background, but I never found a way to have it run independently of the webserver. I always end up with my webserver getting a request which is then handled, but while this happens the webserver is inaccessible.
I would appreciate it if someone pointed out a direction for me to look at.
I always end up with my webserver getting a request which is then handled, but while this happens the webserver is inaccessible.
To solve this problem, you can create a separate thread or a process to handle the request while the main thread/process goes back to processing new requests.
The workflow will be something like this:
The main process receives a request
The main process creates a new process to handle the request.
The main process goes back to listening for new requests while the new process processes the received request.
Assuming you are unfamiliar with multithreading and multiprocessing, I'd suggest you read a little about these topics. Most likely, multithreading will solve your problem, so you can start from there. Here's a good article about it: An Intro to Threading in Python
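As a concrete sketch: Python 3.7's http.server.ThreadingHTTPServer already handles each request in its own thread, and subprocess.Popen returns immediately, so the playback runs in the background while the server stays responsive. The aplay command and the POST body format (a bare filename) are assumptions for illustration:

```python
# Sketch: a threaded HTTP server that answers each POST immediately while the
# audio plays in a background process.
import subprocess
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class PlaybackHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        filename = self.rfile.read(length).decode().strip()
        # Popen returns right away; playback continues independently.
        subprocess.Popen(["aplay", filename])
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"playing\n")

ThreadingHTTPServer(("", 8000), PlaybackHandler).serve_forever()
```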

Python Paramiko Child Process

I've written a script that uses the Paramiko library to log on to a server and execute a command. This command actually invokes the server to execute another Python script (resulting in a child process, I believe). The server returns a signal indicating that the command was executed successfully, but it doesn't seem to wait for the new child process to complete, only for the original parent process. Is there any way of referencing any/all child processes that were generated as a result of this command, and waiting until they have all completed before returning control to the initiating client?
Many thanks.
Without the code this will be difficult. I think you should create a REST service. You would POST to http://0.0.0.0/runCode, and this would kick off a process in a different thread and then end that call. The thread is still running; when it's done, it does a POST to http://0.0.0.0/afterProcessIsDone with the response from the thread that was kicked off. In that route you can do whatever you want with that response. If you need help with REST, check out Flask. It's pretty easy and straight to the point for small projects.
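A rough sketch of that service with Flask; the endpoint names follow the description above, and long_running_job is a hypothetical stand-in for whatever the remote command kicks off:

```python
# Sketch: kick off work in a background thread, answer immediately, and let
# the thread report back to a second endpoint when it finishes.
import threading
import time

import requests
from flask import Flask, request

app = Flask(__name__)

def long_running_job():
    time.sleep(5)  # stand-in for the real work
    # Report completion back to the service (the callback URL is an assumption).
    requests.post("http://127.0.0.1:5000/afterProcessIsDone", data="done")

@app.route("/runCode", methods=["POST"])
def run_code():
    threading.Thread(target=long_running_job, daemon=True).start()
    return "started", 202

@app.route("/afterProcessIsDone", methods=["POST"])
def after_process_is_done():
    # Do whatever you want with the job's result here.
    print("job finished:", request.get_data(as_text=True))
    return "ok", 200

if __name__ == "__main__":
    app.run(host="0.0.0.0")
```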

Python: Why does my SMTP script freeze my computer?

So I wrote a little multithreaded SMTP program. The problem is that every time I run it, it freezes the computer shortly after. The script appears to still work, as my network card is still lighting up and the emails are received, but in some cases it locks up completely and stops sending the emails.
Here's a link to my two script files. The first is the one used to launch the program:
readFile.py
newEmail.py
First, you're using Popen, which creates subprocesses, i.e. processes, not threads. I'll assume this is what you meant.
My guess would be that the program gets stuck in a loop where it spawns processes continuously, which the OS will probably dislike. (That kind of thing is known as a fork bomb, and it's a good way to freeze Linux unless a process limit has been set with ulimit.) I couldn't find the bug, though. If I were you, I'd log a message each time a subprocess is spawned or killed, and if everything looks normal, watch the system closely (ps or top on Unix systems) to see whether the processes are really being killed.
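The logging could be as simple as the sketch below; send_email.py is a hypothetical placeholder for whatever worker script you spawn. If the "spawned" lines vastly outnumber the "exited" lines, you've found your runaway loop:

```python
# Sketch: log every spawn and exit so a runaway spawn loop shows up in the log.
import logging
import subprocess

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

def spawn_worker(args):
    # send_email.py is a stand-in for the real worker script.
    proc = subprocess.Popen(["python", "send_email.py", *args])
    logging.info("spawned pid %d", proc.pid)
    return proc

def reap_worker(proc):
    proc.wait()
    logging.info("pid %d exited with code %d", proc.pid, proc.returncode)
```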

Where to put message queue consumer in Django?

I'm using Carrot for a message queue in a Django project. I followed the tutorial and it works fine, but the example runs in the console and I'm wondering how to apply this in Django. I'm calling the publisher class from one of my models in models.py, so that's OK. But I have no idea where to put the consumer class.
Since it just sits there with .wait(), I don't know at what point or where I need to instantiate it so that it's always running and listening for messages!
Thanks!
The consumer is simply a long running script in the example you cite from the tutorial. It pops a message from the queue, does something, then calls wait and essentially goes to sleep until another message comes in.
This script could just be run at the console under your account, or configured as a Unix daemon or a Windows service. In production, you'd want to make sure that if it dies it can be restarted, and so on (a daemon or service would be more appropriate here).
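Following the consumer from the carrot tutorial, such a standalone script might look roughly like this (the queue, exchange, and routing key names are taken from that tutorial and would be whatever your publisher uses):

```python
# Sketch of a long-running consumer script, per the carrot tutorial.
from carrot.connection import BrokerConnection
from carrot.messaging import Consumer

conn = BrokerConnection(hostname="localhost", port=5672,
                        userid="guest", password="guest",
                        virtual_host="/")
consumer = Consumer(connection=conn, queue="feed",
                    exchange="feed", routing_key="importer")

def import_feed_callback(message_data, message):
    # Do the actual work here, then acknowledge the message.
    print("got message:", message_data)
    message.ack()

consumer.register_callback(import_feed_callback)
consumer.wait()  # blocks indefinitely, waking on each incoming message
```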
Or you could take out the wait call and run it under the Windows scheduler or as a cron job, so it processes the queue every n minutes or so and exits. It really depends on your application's requirements, how fast your queue fills up, etc.
Does that make sense or have I totally missed what you were asking?
If what you are doing is processing tasks, please check out celery: http://github.com/ask/celery/
