Python detect changing (global) variable

To keep a long explanation short for someone who isn't as deep into my project as I am: I'm trying to find a way to detect global variables that change, in Python 2.7.
I'm trying to send updates to another device, which registers them.
To reduce traffic and CPU load, instead of opting for a periodic update message, I was thinking of only sending an update message when a variable changes. I might just be tired right now, but I don't know how to detect that a variable has changed.
Is there a library or something I can take advantage of?
Thank you!

You could use the Pub/Sub feature of Redis if this kind of behaviour is typical in your codebase: https://redis.io/topics/pubsub.
For example, let's call the channel variableUpdates.
The devices that depend on your variable's value subscribe to the channel variableUpdates.
Every time your variable changes, you publish this event on the channel variableUpdates.
When this happens, your listeners are notified of the event, read the new variable value, and use it in their own context.
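A minimal sketch of that setup with the redis-py client (pip install redis); the channel name variableUpdates comes from the answer above, everything else (host, port, value handling) is illustrative:
import redis

r = redis.Redis(host="localhost", port=6379)

# publisher side: call this wherever the variable gets assigned a new value
def publish_update(new_value):
    r.publish("variableUpdates", str(new_value))

# subscriber side: runs on the device that registers the updates
def listen_for_updates():
    pubsub = r.pubsub()
    pubsub.subscribe("variableUpdates")
    for message in pubsub.listen():
        if message["type"] == "message":
            print("variable changed to: %s" % message["data"])
Note that this turns "detecting" the change into a discipline: every assignment to the variable has to go through publish_update() (or a setter/property that calls it).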

Related

How can I verify if there is an incoming message to my node with MPI?

I'm doing a project using Python with MPI. Every node of my project needs to know if there is any incoming message for it before continuing with other tasks.
I'm working on a system where multiple nodes execute some operations. Some nodes may need the outputs of other nodes and therefore need to know if that output is available.
For illustration purposes, let's consider two nodes, A and B. A needs the output of B to execute its task, but if the output is not available, A needs to do some other tasks and then check again, in a loop, whether B has sent its output. What I want to implement is this check in A for the availability of B's output.
I did some research and found something about a method called probe, but I didn't understand it, nor did I find any useful documentation about what it does or how to use it. So I don't know if it solves my problem.
The idea of what I want is very simple: I just need to check whether there is data to be received when I use the "recv" method of mpi4py. If there is, the code does some tasks; if there isn't, it does some other tasks.
(elaborating on Gilles Gouaillardet's comment)
If you know you will eventually receive a message, but want to be able to run some computations while it is being prepared and sent, you want to use non-blocking receives, not probe.
Basically, use MPI_Irecv to set up a receive request as soon as possible. If you want to know whether the message is ready yet, use MPI_Test to check the request.
This is much better than using probes, because you ensure that a receive buffer is ready as early as possible and the sender is not blocked, waiting for the receiver to see that there is a message and post the receive.
For the specific implementation you will have to consult the manual of the Python MPI wrapper you use. You might also find helpful information in the MPI standard itself.
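In mpi4py those map to irecv() and Request.test(). A minimal sketch (ranks, tag, and payload are illustrative):
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 1:                        # node B: produces the output
    comm.send({"result": 42}, dest=0, tag=7)
elif rank == 0:                      # node A: posts the receive early
    req = comm.irecv(source=1, tag=7)    # non-blocking receive request
    done, msg = req.test()               # check without blocking
    while not done:
        # do other useful work here, then check again
        done, msg = req.test()
    print("received: %s" % msg)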

Python constantly refresh variable

I don't know if it's a dumb question, but I'm really struggling to solve this problem.
I'm coding with the obd library.
My problem is the continuous updating of my variables.
For instance, one variable holds the current speed of the car.
This variable has to be updated every second or two. To do this update I have to run two lines of code:
cmd = obd.commands.RPM
rpm = connection.query(cmd)
but I have to check the rpm variable in some while loops and if statements, in real time.
Is there any way to get this done (another class, a thread, or something)? It would really help me take a leap forward in my programming project.
Thanks :)
Use the Async interface instead of the standard OBD connection:
Since the standard query() function is blocking, it can be a hazard for UI event loops. To deal with this, python-OBD has an Async connection object that can be used in place of the standard OBD object. Async is a subclass of OBD, and therefore inherits all of the standard methods. However, Async adds a few in order to control a threaded update loop. This loop will keep the values of your commands up to date with the vehicle. This way, when the user queries the car, the latest response is returned immediately.
The update loop is controlled by calling start() and stop(). To subscribe a command for updating, call watch() with your requested OBDCommand. Because the update loop is threaded, commands can only be watched while the loop is stopped.
import obd
connection = obd.Async() # same constructor as 'obd.OBD()'
connection.watch(obd.commands.RPM) # keep track of the RPM
connection.start() # start the async update loop
print(connection.query(obd.commands.RPM)) # non-blocking, returns immediately
http://python-obd.readthedocs.io/en/latest/Async%20Connections/
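If you would rather be handed new values than poll with query(), the same docs page describes passing a callback to watch(); a short sketch:
import obd

def new_rpm(response):
    # invoked by the async update loop whenever a fresh RPM value arrives
    print(response.value)

connection = obd.Async()
connection.watch(obd.commands.RPM, callback=new_rpm)
connection.start()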

How to prebuffer an incoming network stream with gstreamer?

I'm using gstreamer to stream audio over the network. My goal is seemingly simple: Prebuffer the incoming stream up to a certain time/byte threshold and then start playing it.
I might be overlooking a really simple feature of gstreamer, but so far, I haven't been able to find a way to do that.
My (simplified) pipeline looks like this: udpsrc -> alsasink. So far, all my attempts at achieving my goal have involved a queue element in between:
Add a queue element in between.
Use the min-threshold-time property. This actually works, but the problem is that it makes all the incoming data spend the specified minimum amount of time in the queue, rather than just the beginning, which is not what I want.
To work around the previous problem, I tried to have the queue notify my code when data enters the audio sink for the first time, thinking that this would be the time to unset the min-time property that I set earlier, and thus achieve the "prebuffering" behavior.
Here is a rough equivalent of the code I tried:
# callback: fires once data reaches the sink; drop the queue's threshold
def remove_thresh(pad, info, queue):
    pad.remove_data_probe(probe_id)
    queue.set_property("min-threshold-time", 0)

# setup: hold data in the queue for 'delay' before letting it through
queue.set_property("min-threshold-time", delay)
queue.set_property("max-size-time", delay * 2)
probe_id = audiosink.get_pad("sink").add_data_probe(remove_thresh, queue)
This doesn't work for two reasons:
My callback gets called way earlier than the delay variable I provided.
After it gets called, all of the data that was stored in the queue is lost. The playback starts as if the queue weren't there at all.
I think I have a fundamental misunderstanding of how this thing works. Does anyone know what I'm doing wrong, or alternatively, can provide a (possibly) better way to do this?
I'm using python here, but any solution in any language is welcome.
Thanks.
Buffering has already been implemented in GStreamer. Some elements, like the queue, are capable of building this buffer and posting bus messages regarding the buffer level (the state of the queue).
An application wanting to have more network resilience, then, should listen to these messages and pause playback if the buffer level is not high enough (usually, whenever it is below 100%).
So, all you have to do is set the pipeline to the PAUSED state while the queue is buffering. In your case, you only want to buffer once, so use any logic for this (maybe set flag variables to pause the pipeline only the first time).
Set the "max-size-bytes" property of the queue to the value you want.
Either listen to the "overrun" signal to be notified when the buffer becomes full, or use gst_message_parse_buffering() to find the buffering level.
Once your buffer is full, set the pipeline to PLAYING state and then ignore all further buffering messages.
Finally, for a complete streaming example, you can refer to this tutorial: https://gstreamer.freedesktop.org/documentation/tutorials/basic/streaming.html
The code is in C, but the walkthrough should help you with what you want.
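As a rough Python sketch of that pause-while-buffering logic, using the GStreamer 1.0 bindings (the question predates them and used 0.10): the pipeline string is illustrative, and note it uses queue2 with use-buffering=true, the queue variant that posts BUFFERING messages on the bus:
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)
# illustrative pipeline; adapt caps and sink to your stream
pipeline = Gst.parse_launch(
    'udpsrc port=5000 caps="audio/x-raw,format=S16LE,rate=44100,channels=2" '
    '! queue2 use-buffering=true ! audioconvert ! alsasink')

buffered_once = [False]   # only pause/resume during the first buffering cycle

def on_message(bus, message):
    if message.type == Gst.MessageType.BUFFERING and not buffered_once[0]:
        percent = message.parse_buffering()
        if percent < 100:
            pipeline.set_state(Gst.State.PAUSED)    # keep filling the buffer
        else:
            buffered_once[0] = True
            pipeline.set_state(Gst.State.PLAYING)   # threshold reached, play

bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", on_message)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()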
I was having the exact same problems as you with a different pipeline (appsrc), and after spending days trying to find an elegant solution (and ending up with code remarkably similar to what you posted)... all I did was switch the is-live flag to False and the buffering worked automagically (no need for min-threshold-time or anything else).
Hope this helps.

Scanning MySQL table for updates Python

I am creating a GUI that depends on information from a MySQL table. What I want is to display a message every time the table is updated with new data. I am not sure how to do this, or even if it is possible. I have code that retrieves the newest MySQL update, but I don't know how to show a message every time new data comes into the table. Thanks!
Quite a simple and straightforward solution would be to poll the latest auto-increment id from your table and compare it with what you saw at the previous poll. If it is greater, you have new data. This is called 'active polling'; it's simple to implement and will suffice if you don't do it too often. So you have to store the last id value somewhere in your GUI. Note that this stored value will reset when you restart your GUI application, so be sure to think about what to do at startup. Probably you only need to track insertions that occur while the GUI is running: at startup, poll and store the current id value, then poll periodically and react to changes.
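A minimal sketch of that loop, assuming the MySQLdb driver and placeholder table/column names:
import time
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user",
                       passwd="secret", db="mydb")   # placeholder credentials

def latest_id():
    cur = conn.cursor()
    cur.execute("SELECT MAX(id) FROM my_table")      # 'my_table'/'id' are placeholders
    (value,) = cur.fetchone()
    cur.close()
    return value

last_seen = latest_id()        # baseline at GUI startup
while True:
    time.sleep(2)              # poll period: trade freshness against load
    current = latest_id()
    if current is not None and (last_seen is None or current > last_seen):
        print("table has new rows")   # show your GUI message here
        last_seen = current
In a real GUI you would run this from a timer or a background thread rather than a blocking while loop.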
@spacediver gives some good advice about the active polling approach. I wanted to post some other options as well.
You could use some type of message passing to communicate notifications between clients. ZeroMQ, Twisted, etc. offer these features. One way to do it is to have the updating client issue the message along with its successful database insert. Clients can all listen on a channel for notifications instead of always polling the db.
If you can't control adding an update message to the client doing the insertions, you could also look at this link for using a database trigger to call a script, which would simply issue an update message to your messaging framework. It explains installing a UDF extension that lets you run a sys_exec command in a trigger and call a simple script.
This way clients simply respond to a notification instead of all checking regularly.
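A small pyzmq sketch of that notify-instead-of-poll pattern (the port, host, and message text are illustrative, and the two halves would live in different processes):
import zmq

ctx = zmq.Context.instance()

# inserting client: publish a notification right after a successful INSERT
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5556")                    # illustrative port
pub.send_string("table_updated")

# GUI client: block on notifications instead of polling the database
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://db-writer-host:5556")    # illustrative host
sub.setsockopt(zmq.SUBSCRIBE, b"")          # receive all messages
while True:
    print("got notification: %s" % sub.recv_string())
Keep in mind that ZeroMQ PUB/SUB only delivers to subscribers that are already connected; late joiners miss earlier notifications.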

Python-C api concurrency issue

We are developing a small C server application. The server application does some data processing and responds back to the client. To keep the data processing part configurable and flexible, we decided to go for scripting, and based on the availability of various ready-made modules we decided to go for Python. We are using the Python-C API to send/receive the data between C and Python.
The algorithm works something like this:
Server receives some data from the client; this data is stored in a dictionary created in C. The dictionary is created using the API function PyDict_New() from C. The input is stored as key-value pairs in the dictionary using the API function PyDict_SetItemString().
Next, we execute the Python script with PyRun_SimpleString(), passing the script as a parameter. This script makes use of the dictionary created in C. Please note, we make the dictionary created in C accessible to the script using the methods PyImport_AddModule() and PyModule_AddObject().
We store the result of the data processing in the script as a key-value pair in the same dictionary created above. The C code can then simply access the result variable (key-value pair) after the script has executed.
The problem
The problem we are facing is with concurrent requests coming in from different clients. When multiple requests come in from different clients, we tend to get object reference count errors. Please note that for each request which comes in, we create an independent dictionary for that user alone. To overcome this problem we wrapped the call to PyRun_SimpleString() in PyEval_AcquireLock() and PyEval_ReleaseLock(), but doing this has made the script execution a blocking call. So if a script takes a long time to execute, all the other users are left waiting for a response.
Could you please suggest the best possible approach or give pointers to where we are going wrong. Please ping me for more information.
Any help/guidance will be appreciated.
Perhaps you are missing one of the calls mentioned in this answer.
I suggest you investigate the multiprocessing module.
You should probably read the "Thread State and the Global Interpreter Lock" section of the C API docs: http://docs.python.org/c-api/init.html#thread-state-and-the-global-interpreter-lock. Your problem is explained in its first paragraph.
When you acquire the GIL, do so around your direct manipulation of Python objects. The call to PyRun_SimpleString will handle the GIL internally and will give it up on long-running operations or just every X instructions. It WILL NOT be truly multi-threaded, however.
Edit:
You need to acquire the lock and you need to ensure that Python knows it's in a different thread state:
// assumes perThreadState was created earlier for this thread,
// e.g. with PyThreadState_New()

// acquire the lock and switch to this thread's state
PyEval_AcquireLock();
PyThreadState_Swap(perThreadState);

// execute some python code
PyRun_SimpleString("print 123");

// clear the thread state and release the lock
PyThreadState_Swap(NULL);
PyEval_ReleaseLock();
