How to read data from a socket connection signal? - python

I have a server (Multi_Server.py) that certain scripts connect to. These are Foundry Nuke scripts, so I cannot simply import them into Multi_Server. There is a recurring function inside Nuke_Render_2.py that calculates a percentage and emits the result as an integer via the signal:
from PySide2.QtCore import QObject, Signal  # PySide2 ships with recent Nuke versions

class NukeRenderCheck(QObject):
    progress = Signal(int)
So my objective is to somehow get that value from Nuke_Render_2.py into Multi_Server.py.
I guess I need some sort of always-open, frequently updated version of socket.recv that doesn't block, or something along those lines.
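One way to do that (a sketch, with the host, port, and newline framing invented here): have the Nuke script connect the progress signal to a slot that writes each integer over a plain TCP socket, and have Multi_Server poll its sockets with the standard-library selectors module so recv never blocks.

import socket
import selectors

# --- Nuke_Render_2.py side (sketch): forward the Qt signal over TCP ---
class ProgressForwarder:
    """Connects to Multi_Server and sends each progress value as one line."""
    def __init__(self, host="127.0.0.1", port=5050):   # port is an assumption
        self.sock = socket.create_connection((host, port))

    def on_progress(self, value):   # wire up with: check.progress.connect(fw.on_progress)
        self.sock.sendall(f"{value}\n".encode())

# --- Multi_Server.py side (sketch): non-blocking reads via selectors ---
sel = selectors.DefaultSelector()
server = socket.socket()
server.bind(("0.0.0.0", 5050))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ)

while True:
    for key, _ in sel.select(timeout=0.1):   # returns quickly even if nothing is ready
        if key.fileobj is server:
            conn, _ = server.accept()        # a Nuke script connected
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:
            data = key.fileobj.recv(1024)
            if data:
                print("progress:", data.decode().strip())
            else:                            # client closed the connection
                sel.unregister(key.fileobj)
                key.fileobj.close()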

Related

Choosing the right paradigm to implement a specific programming task

I have the following architecture:
The Main Control Unit (MCU) must: run a TCP/IP server for communication with the robot; run a Redis database; be able to run several data-processing programs (which use data sent by the robot or obtained from Redis). An External Control Unit is connected to the Redis DB.
I have to write the program for the MCU. Ideally it must be able to perform the following tasks asynchronously:
Get a request from the robot and pass it to the Redis DB, so the External Control Unit can react to the signal's appearance and start acquiring data from its sensor (then publish this data to the Redis DB).
Get a request from the robot to start receiving data from the robot (then publish this data to the Redis DB).
React to the appearance of data from the External Control Unit in the Redis DB. This must make the Main Control Unit start a data-processing program on the obtained sensor data.
Get a request from the robot to send the resultant data back to the robot.
This is a simplified version of the system, since there will be more External Control Units with different sensors, but most of the MCU's tasks are described.
For now I can ensure data transmission between the MCU and the robot, and I'm pretty familiar with Redis publish/subscribe techniques.
But I'm struggling to choose the proper technology/paradigm to program the MCU: asynchronous, multithreaded, or multiprocess programming? Where should I dig?
Addition to the question:
Which paradigm (asynchronous, multithreading, or multiprocessing) is better for implementing the following MCU behavior?
The MCU receives a request from the robot to start a computer-vision routine (it takes around 30-40 seconds to finish). The routine can be started only if the necessary data are found in the Redis DB, so the MCU may have to wait until the External Control Unit finishes publishing this data to the DB. While the CV routine runs for those 30-40 seconds, another request from the robot can arrive, and it must be processed while the routine is still running.
Today I studied Python's asyncio module, and to my mind it's not suitable for what I want. It is ideal for serving multiple client requests, or for one client fetching data from several servers: a coroutine's waiting points are essential, so the program can do something else while a coroutine waits. But my CV routine does not wait for anything; it just runs.
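For what it's worth, asyncio can still cover this case if the CPU-bound CV routine is pushed into a process pool, so the event loop stays free to accept new robot requests while the routine runs. A minimal sketch, with cv_routine standing in for the real 30-40 second computation:

import asyncio
from concurrent.futures import ProcessPoolExecutor

def cv_routine(sensor_data):
    # stand-in for the real CPU-bound computation; it never awaits anything
    return len(sensor_data)

async def handle_robot_request(loop, pool, sensor_data):
    # run_in_executor hands the work to a worker process and returns control
    # to the event loop immediately, so other requests keep being served
    return await loop.run_in_executor(pool, cv_routine, sensor_data)

async def main():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        result = await handle_robot_request(loop, pool, b"sensor bytes")
        print(result)

if __name__ == "__main__":
    asyncio.run(main())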

How to use a value from an rpyc client, called from a Progress script, in that script

I'm trying to use an rpyc server via a Progress script, and have the script do different tasks based on values I get back from the client.
I'm using an rpyc server to automate some tasks on demand from users, and I'm trying to implement the client in a Progress script this way:
1. The Progress script launches.
2. The Progress script calls the rpyc client via cmd to run a function that checks whether the server is live, and returns some sort of value to indicate whether it is (it doesn't really matter what kind of indication is used; different characters like 0 = live, 1 = not live would be preferable).
3. Based on the value returned in the last step, the script either notifies the user that the server is down and quits, or proceeds to the rest of the code.
The part I'm struggling with is stage 2: how to call the client in a way that stores the value it returns, and how to actually return that value properly to the script.
I thought about using the -param option, but couldn't figure out how to apply it in my scenario, where the value has to reach a script that is already mid-run, rather than being passed when launching another Progress script.
The code of the client that I use for checking if the server is up is:
import rpyc

host = "localhost"  # the rpyc server's address

def client_check():
    c = rpyc.connect(host, 18812)

if __name__ == "__main__":
    try:
        client_check()
        print("0")  # server is live
    except Exception:
        print("1")  # server is down or unreachable
As for the Progress script, as mentioned, I didn't really manage to figure out the right way to call the client and store a value the way I'm trying to.
I guess I could make the server create a file that would serve as an indicator of its status, and check for that file at the start of the script, but I don't know if that's the right way to do it, and I'd prefer to avoid it if possible.
I am guessing you are shelling out from the Progress script to run your rpyc script as an external process?
In that case, something along these lines will read the first line of output from that rpyc script:
define variable result as character no-undo format "x(30)".
input through value( "myrpycScript arg1 arg2" ). /* this runs your rpyc script */
import unformatted result. /* this reads the result */
input close.
display result.

Can I have Python code continue executing after I call Flask's app.run?

I have just started with Python, although I have been programming in other languages over the past 30 years. I wanted to keep my first application simple, so I started out with a little home automation project hosted on a Raspberry Pi.
I got my code to work fine (controlling a valve, reading a flow sensor and showing some data on a display), but when I wanted to add some web interactivity it came to a sudden halt.
Most articles I have found on the subject suggest using the Flask framework to compose dynamic web pages. I have tried, and understood, the basics of Flask, but I just can't get around the issue that Flask blocks once I call app.run. The rest of my Python code waits for app.run to return, which never happens, i.e. no more water-flow measurement, valve motor steering, or display updating.
So, my basic question would be: What tool should I use in order to serve a simple dynamic web page (with very low load, like 1 request / week), in parallel to my applications main tasks (GPIO/Pulse counting)? All this in the resource constrained environment of a Raspberry Pi (3).
If you still suggest Flask (because it seems very close to target), how should I arrange my code to keep handling the real-world events, such as mentioned above?
(This last part might be tough answering without seeing the actual code, but maybe it's possible answering it in a "generic" way? Or pointing to existing examples that I might have missed while searching.)
You're on the right track with multithreading. If your monitoring code runs in a loop, you could define a function like
def monitoring_loop():
    while True:
        ...  # do the monitoring
Then, before you call app.run(), start a thread that runs that function:
import threading
from wherever import monitoring_loop
monitoring_thread = threading.Thread(target=monitoring_loop)
monitoring_thread.start()
# app.run() and whatever else you want to do
Don't join the thread - you want it to keep running in parallel to your Flask app. If you joined it, it would block the main execution thread until it finished, which would be never, since it's running a while True loop.
To communicate between the monitoring thread and the rest of the program, you could use a queue to pass messages between them in a thread-safe way.
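For example, a sketch of that queue idea (the reading contents and route name are made up): the monitoring loop puts readings on a queue.Queue, and a Flask route drains it:

import queue
import threading
import time
from flask import Flask

readings = queue.Queue()

def monitoring_loop():
    while True:
        readings.put({"flow": 1.23})   # placeholder for a real GPIO reading
        time.sleep(1)

app = Flask(__name__)

@app.route("/flow")
def flow():
    latest = None
    while not readings.empty():   # drain the queue, keep only the newest reading
        latest = readings.get()
    return str(latest)

threading.Thread(target=monitoring_loop, daemon=True).start()
app.run(host="0.0.0.0")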
The way I would probably handle this is to split your program into two distinct separately running programs.
One program handles the GPIO monitoring and communication, and the other program is your small Flask server. Since they run as separate processes, they won't block each other.
You can have the two processes communicate through a small database. The GPIO interface can periodically record flow measurements or other relevant data to a table in the database. It can also monitor another table that might serve as a queue for requests.
Your Flask instance can query that same database to get the current statistics to return to the user, and can submit entries to the requests queue based on user input. (If the GPIO process updates that requests queue with the current status, the Flask process can report that back out.)
And as far as what kind of database to use on a little Raspberry Pi, consider sqlite3, a very small, lightweight, file-based database that is well supported in Python's standard library. (It doesn't require running a full "database server" process.)
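As a rough illustration of that split (the table and column names are invented), the GPIO process could append measurements like this, and the Flask process could read the latest one back:

import sqlite3
import time

# GPIO process: append a measurement
con = sqlite3.connect("home.db")
con.execute("CREATE TABLE IF NOT EXISTS flow (ts REAL, value REAL)")
con.execute("INSERT INTO flow VALUES (?, ?)", (time.time(), 1.23))
con.commit()
con.close()

# Flask process: fetch the most recent measurement
con = sqlite3.connect("home.db")
row = con.execute("SELECT value FROM flow ORDER BY ts DESC LIMIT 1").fetchone()
con.close()
print(row)   # -> (1.23,)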
Good luck with your project, it sounds like fun!
Hi, I was trying the connection with dronekit_sitl and got the same issue: after 30 seconds the connection was closed. To get rid of that, there are two solutions:
Use the before_request decorator: here you define a method that will handle the connection before each request.
Use the before_first_request decorator: in this case the connection is made once, when the first request arrives, and then you can handle the object in the other routes using a global variable.
For more information https://pythonise.com/series/learning-flask/python-before-after-request
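A minimal sketch of the second option (open_link is a placeholder for whatever your app actually connects to; note that before_first_request was deprecated and then removed in Flask 2.3, where before_request plus a run-once guard does the same job):

from flask import Flask

app = Flask(__name__)
connection = None   # global handle shared across routes

def open_link():
    return object()   # placeholder for the real connection call

@app.before_first_request
def make_connection():
    global connection
    connection = open_link()   # runs once, just before the first request

@app.route("/status")
def status():
    return "connected" if connection is not None else "not connected"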

Should I be using threads, multiprocessing, or asyncio for my project?

I am trying to build a temperature-control module that can be controlled over a network or with manual controls. The individual parts of my program all work, but I'm having trouble figuring out how to make them work together. Also, my temperature-control module is in Python and the client is in C#.
As far as physical components go, I have a keypad that sets a temperature and turns the heater on and off, an LCD screen that displays temperature data, and of course a temperature sensor.
For my network stuff I need to:
constantly send temperature data to the client.
send a list of log files to the client.
await prompts from the client to either set the desired temperature or send a log file to the client.
So far all the hardware works fine, and each individual part of the networking works, but not together. I have not yet tried to use the physical and network components at the same time.
I have been attempting to use threads for this, but was wondering if I should be using something else?
EDIT:
Here is the basic logic behind what I want to do:
Hardware:
keypad takes number inputs until '*', then sets a temp variable.
temp variable is compared to sensor data, and the heater is turned on or off accordingly.
'#' turns off the heater and sets the temp variable to 0.
sensor data is written to log files while the temp variable is not 0.
Network:
upon client connect, the client is sent a list of log files.
temperature sensor data is continuously sent to the client.
a prompt handler listens for prompts.
if the client requests a log file, the temperature stream is halted and the file sent, after which the stream resumes.
the client can send a command to the prompt handler to set the temp variable and trigger the heater.
the client can send a command to the prompt handler to stop the heater and set the temp variable to 0.
commands from either the keypad or client should work at all times.
Multiprocessing is generally for when you want to take advantage of the computational power of multiple processing cores. It also limits your options for handling shared state between components of your program, since memory is copied on process creation but not shared or updated automatically afterwards. Threads execute from the same region of memory and don't have this restriction, but they can't take advantage of multiple cores for computational performance. Your application doesn't sound like it requires heavy computation; it would simply benefit from concurrency so it can handle user input, networking, and a small amount of processing at the same time. I would say you need threads, not processes. I am not experienced enough with asyncio to give a good comparison of it to threads.
Edit: This looks like a fairly involved project, so don't expect it to go perfectly the first time you hit "run", but definitely very doable and interesting.
Here's how I would structure this project...
I see effectively four separate threads here (maybe with small ancillary daemon threads for stupid little tasks).
I would have one thread acting as your temperature controller (PID control / whatever) that has sole control of the heater output (other threads get to make requests to change the setpoint / control mode (duty cycle / PID)).
I would have one main thread (with a few daemon threads) to handle the data logging: the main thread listens for logging commands (pause, resume, get, etc.), while daemon threads poll the thermometer, rotate log files, etc.
I am not as familiar with networking, and this will be specific to your client application, but I would probably start with http.server just for prototyping, or maybe something like websockets and a little bit of asyncio. The main thing is that it should interact with the data-logger and temperature-controller threads through getters and setters rather than by modifying values directly.
Finally, for the keypad input, I would likely just make a quick tkinter application to grab keypresses, because that's what I know. Again, form a request with the tkinter app, but don't modify values directly; use getters and setters when "talking" between threads. It just keeps things better organized and compartmentalized.
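To make the getters-and-setters point concrete, here is a bare-bones sketch (all names invented) of a controller thread whose setpoint other threads may change only through a locked setter:

import threading
import time

def read_sensor():
    return 20.0   # stand-in for the real temperature sensor read

def set_heater(on):
    pass          # stand-in for the real GPIO heater output

class TempController(threading.Thread):
    """Owns the heater output; other threads only call set_setpoint()."""
    def __init__(self):
        super().__init__(daemon=True)
        self._lock = threading.Lock()
        self._setpoint = 0.0

    def set_setpoint(self, value):   # called by the keypad / network threads
        with self._lock:
            self._setpoint = value

    def get_setpoint(self):
        with self._lock:
            return self._setpoint

    def run(self):
        while True:
            sp = self.get_setpoint()
            set_heater(sp != 0 and read_sensor() < sp)   # simple on/off control
            time.sleep(0.5)

controller = TempController()
controller.start()
controller.set_setpoint(22.5)   # e.g. from the keypad or the C# client handler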

How to check whether or not RabbitMQ "is doing well"?

I want to build a script that checks an instance of RabbitMQ on my server once a minute. Is it possible to check whether RabbitMQ is "doing well" automatically, via a script (Ruby, Python, whatever) or the command line? By "doing well" I mean it's not about to crash for any reason and it's not frozen.
Also, if I'm able to connect to it from a client script, say from Ruby, does that mean "it's doing well", or not necessarily?
That doesn't mean it's doing well. The problem is that "doing well" cannot be measured directly. You need to check things like total queued messages, messages per second, or memory consumption. A simple ping won't tell you much. Heck, RabbitMQ, as an Erlang system, is built to crash and respawn.
Once you define what you mean by doing well, you can create a script to hit Rabbit's API. It's simple HTTP.
The API becomes available via the RabbitMQ Management Plugin; see https://www.rabbitmq.com/management.html
Once it's installed, you have to define what it means to be doing well within the context of your application. It could be that your app takes a long time to process messages. It could be that you'll have bursts of messages but have to average their processing time. It could be that you purposely underpowered the server, so you only want to worry about extreme memory pressure. See http://looselycoupledlabs.com/2014/08/monitoring-rabbitmq/ for an example metric set.
There is no single stats value that will tell you a server is about to fail. You'll want to combine RabbitMQ's stats with the host OS's.
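For instance, assuming the management plugin is enabled on its default port 15672 with the default guest credentials, a once-a-minute check could pull a few of those stats over HTTP:

import requests

BASE = "http://localhost:15672/api"   # default management endpoint; adjust to taste
AUTH = ("guest", "guest")             # default credentials; change in production

# overall message counts
overview = requests.get(f"{BASE}/overview", auth=AUTH, timeout=5).json()
print("messages queued:", overview["queue_totals"].get("messages", 0))

# per-node memory usage versus the configured limit
for node in requests.get(f"{BASE}/nodes", auth=AUTH, timeout=5).json():
    print(node["name"], "memory:", node["mem_used"], "/", node["mem_limit"])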
