Python: constantly refresh a variable

I don't know if it's a dumb question, but I'm really struggling with this problem.
I'm coding with the obd library, and my problem is keeping my variables continuously up to date.
For instance, one variable holds the car's current speed. It has to be refreshed every second or two, and each refresh takes two lines of code:
cmd = obd.commands.RPM
rpm = connection.query(cmd)
but I then have to check the rpm variable in several while loops and if statements, in real time.
Is there any way to get this done (another class, a thread, or something similar)? It would really help me take a leap forward in my programming project.
Thanks :)

Use the Async interface instead of the standard OBD one:
Since the standard query() function is blocking, it can be a hazard for UI event loops. To deal with this, python-OBD has an Async connection object that can be used in place of the standard OBD object. Async is a subclass of OBD, and therefore inherits all of the standard methods. However, Async adds a few more in order to control a threaded update loop, which keeps the values of your commands up to date with the vehicle. This way, when the user queries the car, the latest response is returned immediately.
The update loop is controlled by calling start() and stop(). To subscribe a command for updating, call watch() with your requested OBDCommand. Because the update loop is threaded, commands can only be watched while the loop is stopped.
import obd

connection = obd.Async()                   # same constructor as 'obd.OBD()'
connection.watch(obd.commands.RPM)         # keep track of the RPM
connection.start()                         # start the async update loop
print(connection.query(obd.commands.RPM))  # non-blocking, returns immediately
http://python-obd.readthedocs.io/en/latest/Async%20Connections/
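The same docs page also shows a callback form of watch(), which avoids polling altogether; a minimal sketch (the handler name is just an example):

import obd

def new_rpm(response):
    # invoked by the update loop each time a fresh RPM value arrives
    print(response.value)

connection = obd.Async()
connection.watch(obd.commands.RPM, callback=new_rpm)
connection.start()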


nidaqmx: prevent task from closing after being altered in function

I am trying to write an API that takes advantage of the Python wrapper for NI-DAQmx, and I need a module-wide, global list of tasks that can be edited from anywhere in the module.
Here is what I have tried so far:
1) I created an importable dictionary of tasks which is updated whenever a call is made to NI-DAQmx. The function endpoint processes data from an HTTPS request; I promise it's not just a pointless wrapper around the nidaqmx library itself.
e.g., on startup, the following is created:
# ./daq/__init__.py
import nidaqmx
# ... other stuff ...
TASKS = {}
Then the user can create a task by calling this endpoint:
# ./daq/task/task.py
from daq import TASKS
# ...

def api_create_task_endpoint(task_id):
    try:
        task = nidaqmx.Task(new_task_name=task_id)
        TASKS[task_id] = task
    except Exception:
        # handle it
        pass
Everything up to here works as it should. I can get the task list, and the task stays open. I also tried explicitly calling task.control(nidaqmx.constants.TaskMode.TASK_RESERVE), but the following code gives me the same issue no matter what.
When I try to add channels to the task, it closes at the end of the function call no matter how I set the state.
# ./daq/task/channels.py
from daq import TASKS

def api_add_channel_task_endpoint(task_id, channel_type, function):
    # channel_type corresponds to nidaqmx channel collections (e.g. ai_channels)
    # function corresponds to callable functions (e.g. add_ai_voltage_chan)
    # do some preliminary checks (e.g. task exists, channel type valid)
    channels = get_chans_from_json_post()
    with TASKS[task_id] as task:
        getattr(getattr(task, channel_type), function)(channels)
        # e.g. task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
This is apparently closing the task. When I call api_create_task_endpoint(task_id) again, I receive a DaqResourceWarning that the task has been closed and no longer exists.
I similarly tried setting the TaskMode using task.control() here, to no avail.
I would like to be able to edit the task by storing it in the module-wide TASKS dict, but I cannot keep the task open long enough to do so.
2) I also tried implementing this using the NI MAX save feature. The issue is that tasks cannot be saved unless they already contain channels, which I don't necessarily want to add immediately after creating the task.
I attempted to work around this by giving api_create_task_endpoint() some default behavior that adds a placeholder channel, to be removed when the user adds their first real channel.
The problem is that I can't find any documented way to remove channels from a task after adding them without a GUI (this is running on CentOS, so a GUI is a non-starter).
Thank you so much for any help!
I haven't used the Python bindings for NI-DAQmx, but
with TASKS[task_id] as task:
looks like it would stop and clear the task immediately after updating it, because program flow leaves the with block and Task.__exit__() executes.
Because you expect these tasks to live for as long as the Python module is in use, my recommendation is to skip the context manager and only use task.control() when you need to change a task's state.
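In other words, look the task up without entering its context manager. A minimal sketch of the fixed endpoint, reusing the helper names from the question:

# ./daq/task/channels.py
from daq import TASKS

def api_add_channel_task_endpoint(task_id, channel_type, function):
    channels = get_chans_from_json_post()  # helper from the question
    task = TASKS[task_id]  # plain dict lookup; no `with`, so __exit__() never runs
    getattr(getattr(task, channel_type), function)(channels)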

How to handle blocking calls / while-1 loops inside tkinter's mainloop? ZMQ / Tkinter

I am using ZeroMQ, a messaging library (with asynchronous I/O). If you don't know it, you can think of it as similar to Python's socket library; the sockets used for messaging usually run inside an infinite while loop with a small sleep to keep everything cool.
I have that code written in one file and a GUI based on its workings in another, and I want to integrate the two.
But the issue I come across is that I cannot put a while True, or a blocking socket.recv(), inside tkinter's .mainloop().
I want to receive on a socket, which is blocking - BUT I can manage that part of the issue: zmq sockets can either be polled (checked periodically for pending messages) or, equivalently, read with zmq.DONTWAIT, which does the same thing.
The remaining issue is that I need a while True so that the socket is polled constantly, say every millisecond, to see if we have messages.
How do I put a while True inside the tkinter .mainloop() that allows me to check the state of that socket constantly?
I would visualize something like this:
while True:
    update_gui()    # contains the mainloop and all GUI code
    check_socket()  # listener socket for incoming traffic
    if work:
        pass        # do a piece of work, while the GUI hangs for a bit
I have checked the internet and came across a solution on SO which says you can use the .after() method of widgets, but I am not sure how that works. If someone could help me out I would be super grateful!
Code for reference (zmq.DONTWAIT raises an exception if there are no pending messages, which makes us move forward in the loop):
while 1:
    if socket_listen and int(share_state):
        try:
            msg = socket_listen.recv_string(zmq.DONTWAIT)
        except:
            pass
    time.sleep(0.01)
I would like to be able to put this inside the .mainloop() so that, along with the GUI, it also gets checked on every iteration.
Additional info - polling here equates to:
check if we have messages on socket1
if not then proceed normally
else do work.
How do I put a while True inside the tkinter .mainloop() that allows me to check the state of that socket constantly?
Do not design such a part using an explicit while True loop; better to use the tkinter-native tooling: ask .after() to re-submit the call after a certain amount of time (letting other things happen concurrently, yet with a reasonable amount of certainty that your requested call will be activated no sooner than the specified number of milliseconds has passed).
I love the Tkinter architecture of co-existing event processing.
So if one keeps the finite-state automaton (a game, or a GUI front-end) cleanly crafted on Tkinter grounds, one can enjoy having ZeroMQ-message data coordinated "behind" the scenes by Tkinter-native tools, so that no imperative code is needed at all. Just let the messages get translated into tkinter-monitored variables if you indeed want a smart-working GUI integration.
aScheduledTaskID = aGuiRelatedOBJECT.after( msecs_to_wait,
                                            aFunc_to_call,
                                            *args
                                            )
# -> <_ID_#_>
# ... guarantees a given wait-time + just one, soloist call
#     after a delay of AT LEAST msecs_to_wait milliseconds.
#     There is no upper limit to how long it will actually take, but
#     your callback-FUN will be called NO SOONER than you requested,
#     and it will be called only once.
#     aFunc_to_call() may "renew" itself with another .after()
#
# .after_cancel( aScheduledTaskID )   # <- <id> CANCELLED from SCHEDULER
#
# .after_idle() ~ SCHEDULE A TASK TO BE CALLED ONCE .mainloop() TURNS IDLE
#
# aScheduledTaskOnIdleID = aGuiOBJECT.after_idle( aFunc_to_call,
#                                                 *args
#                                                 )
# -> <_ID_#_>
That's pretty cool, isn't it - reusing the ready-made tkinter-native scheduler tools in a smart way?
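A minimal sketch of this approach (the socket type, address, and the StringVar label are illustrative assumptions, not taken from the question's code):

import tkinter as tk
import zmq

context = zmq.Context.instance()
socket_listen = context.socket(zmq.PULL)       # assumed socket type
socket_listen.connect("tcp://127.0.0.1:5555")  # assumed address

root = tk.Tk()
last_msg = tk.StringVar(root, value="<nothing yet>")
tk.Label(root, textvariable=last_msg).pack()

def check_socket():
    try:
        msg = socket_listen.recv_string(zmq.DONTWAIT)  # non-blocking read
        last_msg.set(msg)        # translate the message into a tkinter-monitored variable
    except zmq.Again:
        pass                     # nothing pending; try again on the next tick
    root.after(10, check_socket) # "renew" ourselves ~10 ms later

root.after(10, check_socket)     # kick off the polling task
root.mainloop()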
Epilogue:
( Blocking calls? Better never use blocking calls at all. Has anyone ever said blocking calls here? :o) )
a while True, or a blocking socket.recv() inside tkinter's .mainloop().
Well, one can put such a loop into a component aligned with the tkinter-native scheduler, yet this idea is actually an antipattern and can wreak havoc - not only in tkinter; in general it is risky to expect any event-loop handler to tolerate a "competitive" event loop and behave in peaceful co-existence with its adjacent intentions. Problems will appear, be it from straight blocking, or from one loop being just too dominant in scheduling resources, or some other sort of war over time and resources.

Python detect changing (global) variable

To make a somewhat long explanation simple for someone who's not as deep into my project as I am: I'm trying to find a way to detect when a global variable changes in Python 2.7.
I'm trying to send updates to another device, which registers them.
To reduce traffic and CPU load, instead of opting for a periodic update message, I was thinking of only sending an update message when a variable changes. I might be tired right now, but I don't know how to detect that a variable has changed.
Is there a library or something I can take advantage of?
Thank you!
You could use the Pub/Sub feature of Redis if this kind of behaviour is typical in your codebase: https://redis.io/topics/pubsub.
For example, let's call the channel variableUpdates.
The devices that depend on your variable's value subscribe to the channel variableUpdates.
Every time your variable changes, you publish this event on the channel variableUpdates.
When this happens, your listeners get notified of the event, read the new variable value, and use it in their own context.
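A minimal sketch with the redis-py client (the host/port, the speed variable, and the handler are illustrative assumptions; the channel name comes from above). Since a plain assignment can't be observed, the publishing side funnels every change through a small setter:

import redis

r = redis.Redis(host="localhost", port=6379)
speed = 0

# Publisher side: call this instead of assigning to the variable directly.
def set_speed(new_value):
    global speed
    if new_value != speed:  # only publish on an actual change
        speed = new_value
        r.publish("variableUpdates", str(new_value))

# Subscriber side, on the device that registers the updates:
p = r.pubsub()
p.subscribe("variableUpdates")
for message in p.listen():  # blocks until the next event arrives
    if message["type"] == "message":
        handle_update(message["data"])  # hypothetical handler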

Restructuring program to use asyncio

Currently I have a game that does networking synchronously using the socket module.
It is structured like this:
Server:
while True:
    add_new_clients()
    process_game_state()
    for client in clients:
        send_data(client)
        get_data_from(client)
get_data_from(client)
Client:
connect_to_server()
while True:
    get_data_from_server()
    process_game_state()
    draw_to_screen()
    send_input_to_server()
I want to replace the network code with something that uses a higher-level module than socket, e.g. asyncio or gevent. However, I don't know how to do this.
All the examples I have seen are structured like this:
class Server:
    def handle_client(self, connection):
        while True:
            input = get_input(connection)
            output = process(input)
            send(connection, output)
with handle_client being called in parallel, using threads or something similar, for each client that joins.
This works fine if the clients can be handled separately. However, I still want to keep a game-loop type structure, where processing happens in one place - I don't want to have to check collisions etc. once per client. How would I do this?
I assume that you understand how to create a server using a protocol and how the asynchronous paradigm works.
All you need is to break your while event loop down into handlers.
Let's look at the server case and the client case:
Server case
A client (server-side)
You need to create what we call a protocol; it will be used to create the server and serves as a pattern where each instance = one client:
import asyncio

class ClientProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        # Here we have a new player; the transport represents its socket.
        self.transport = transport

    def data_received(self, data):
        packet = decode_packet(data)     # some function for reading packets
        if packet.opcode == CMSG_MOVE:   # opcode is an operation code
            self.player.move(packet[0])  # packet[0] is the first "real" datum
            self.transport.write(b"OK YOUR MOVE IS ACCEPTED")  # send back confirmation or whatever
OK, now you have an idea of how you can do things with your clients.
Game state
After that, you need to process your game state every X ms:
def processGameState():
    # some code...
    eventLoop.call_later(0.1, processGameState)  # every 100 ms, processGameState is called
At some point you will call processGameState() in your initialization, and it will tell the event loop to call processGameState() again 100 ms later (it may not be the ideal way to do it, but it's one idea among others).
As for sending new data to clients, you just need to store a list of ClientProtocol instances and write to their transports with a simple for-each, as sketched below.
The get_data_from() is obviously removed, as we receive all our data asynchronously in the data_received() method of the ClientProtocol.
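Putting the pieces together, a minimal sketch of the wiring (the port, the clients list, and encode_state() are illustrative assumptions, not part of the original code):

import asyncio

clients = []  # connected ClientProtocol instances; append to it in connection_made()

def process_game_state():
    # one tick of the simulation for every player at once (collisions, etc.)
    state = encode_state()                    # hypothetical serializer, returns bytes
    for client in clients:
        client.transport.write(state)         # push the new state to each client
    loop.call_later(0.1, process_game_state)  # re-schedule the next tick

loop = asyncio.get_event_loop()
server = loop.run_until_complete(
    loop.create_server(ClientProtocol, "0.0.0.0", 8888))  # assumed port
loop.call_later(0.1, process_game_state)  # kick off the game tick
loop.run_forever()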
This is a sketch of how you can refactor all your synchronous code into asynchronous code. You may want to add authentication and some other things. If it's your first time with the asynchronous paradigm, I suggest you try it with Twisted rather than asyncio: Twisted is likely to be better documented and explained than asyncio (but asyncio is quite similar to Twisted, so you can switch back at any time).
Client case
It's much the same here.
But you may need to pay attention to how you draw and how you manage your input. You may ultimately need one thread to call the input handlers and another to draw to the screen at a constant framerate.
Conclusion
Thinking asynchronously is pretty difficult at the start.
But it's worth the effort.
Note that even my approach may not be the best or the best adapted for games; it's just how I would do it. Please take your time to test your code and profile it.
Make sure you don't mix synchronous and asynchronous code in the same function without proper handling using deferToThread (or other helpers); it would destroy your game's performance.

Understanding defer.DeferredQueue() in a proxy example

I am trying to understand a simple Python proxy example using Twisted, located here. The proxy instantiates a server class, which in turn instantiates a client class. A defer.DeferredQueue() is used to pass data from the client class to the server class.
I am now trying to understand how defer.DeferredQueue() works in this example. For example, what is the significance of this statement:
self.srv_queue.get().addCallback(self.clientDataReceived)
and its analogue
self.cli_queue.get().addCallback(self.serverDataReceived)
What happens when self.cli_queue.put(False) or self.cli_queue = None is executed?
I'm just getting to grips with Twisted now, so things seem pretty daunting. A small explanation of how these pieces are connected would make it far easier to understand.
According to the documentation, DeferredQueue has a normal put() method to add an object to the queue, and a deferred get() method.
The get() method returns a Deferred object, to which you add a callback method (e.g. serverDataReceived). Whenever an object becomes available in the queue, the Deferred fires and invokes the callback, with the object passed as its argument. If the queue is empty, or the serverDataReceived method hasn't finished executing, your program simply continues with the next statements; once a new object becomes available in the queue, the callback is called regardless of where your program's execution currently is.
In other words, it is an asynchronous flow, in contrast to a synchronous model in which you might have a blocking queue, i.e. your program would wait until the next object is available in the queue before continuing.
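A minimal, self-contained sketch of this get()/addCallback() pattern (names and timings are illustrative, not taken from the proxy example):

from twisted.internet import defer, reactor

queue = defer.DeferredQueue()

def on_data(chunk):
    print("got: %r" % (chunk,))
    queue.get().addCallback(on_data)  # re-arm: ask for the next item

queue.get().addCallback(on_data)           # fires as soon as something is put
reactor.callLater(1, queue.put, "hello")   # an item arrives one second later
reactor.callLater(2, reactor.stop)
reactor.run()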
In your example program, self.cli_queue.put(False) adds a False object to the queue. It is a sort of flag to tell the ProxyClient that there won't be any more data added to the queue, so it should disconnect the remote connection. You can refer to this portion of the code:
def serverDataReceived(self, chunk):
    if chunk is False:
        self.cli_queue = None
        log.msg("Client: disconnecting from peer")
        self.factory.continueTrying = False
        self.transport.loseConnection()
Setting self.cli_queue = None just discards the queue after the connection is closed.
