Understanding defer.DeferredQueue() in a proxy example - python

I am trying to understand a simple Python proxy example using Twisted located here. The proxy instantiates a server class, which in turn instantiates a client class. defer.DeferredQueue() is used to pass data from the client class to the server class.
I am now trying to understand how defer.DeferredQueue() works in this example. For example what is the significance of this statement:
self.srv_queue.get().addCallback(self.clientDataReceived)
and its analogous
self.cli_queue.get().addCallback(self.serverDataReceived)
statement.
What happens when self.cli_queue.put(False) or self.cli_queue = None is executed?
I'm just trying to get to grips with Twisted now, so things seem pretty daunting. A small explanation of how things are connected would make it far easier to get started.

According to the documentation, DeferredQueue has an ordinary put method to add an object to the queue, and a deferred get method.
The get method returns a Deferred object. You add a callback method (e.g. serverDataReceived) to that Deferred. Whenever an object becomes available in the queue, the Deferred fires and invokes the callback, with the object passed as the argument. If the queue is empty, or if serverDataReceived hasn't finished executing, your program simply continues executing the following statements. When a new object becomes available in the queue, the callback is invoked regardless of where your program happens to be in its execution.
In other words, this is an asynchronous flow, in contrast to a synchronous model in which you might use a blocking queue, i.e. your program would wait until the next object is available in the queue before continuing.
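As a standalone illustration (not taken from the proxy example itself), the pairing of get() and put() looks roughly like this:
from twisted.internet import defer

queue = defer.DeferredQueue()

def on_data(chunk):
    print("received:", chunk)

# get() returns a Deferred; its callback fires as soon as an item is available.
queue.get().addCallback(on_data)

# put() hands the item to the pending Deferred (or stores it for a later get()).
queue.put(b"hello")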
In your example program, self.cli_queue.put(False) adds a False object to the queue. It acts as a flag to tell the ProxyClient that there won't be any more data added to the queue, so it should disconnect the remote connection. You can refer to this portion of code:
def serverDataReceived(self, chunk):
    if chunk is False:
        self.cli_queue = None
        log.msg("Client: disconnecting from peer")
        self.factory.continueTrying = False
        self.transport.loseConnection()
Setting cli_queue = None just discards the queue once the connection has been closed.

Related

How to send Python proxy objects across TCP

I'm having problems sending proxy objects across a TCP connection with Python 3.7.3. They work fine locally, but the authentication keys don't get set correctly when the two processes communicate over TCP.
The story goes like this: I have an instance of a class on one process and I want to refer to it from another process on a different machine. So I create my BaseManager on the first process with address=('', 50000), pull out a copy of its _authkey, and tell the other process to create a BaseManager with address=('whatever', 50000) along with authkey=... and call connect(). So far, so good.
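For orientation, a rough sketch of that two-process setup (the host name, port, and key below are placeholders, not the real values) might look like this:
from multiprocessing.managers import BaseManager

# Process A: serve objects on port 50000.
class ServerManager(BaseManager):
    pass

ServerManager.register('managers', callable=lambda: {})
server_mgr = ServerManager(address=('', 50000), authkey=b'secret')
server_mgr.start()

# Process B (possibly on another machine): connect with the same address and key.
class ClientManager(BaseManager):
    pass

ClientManager.register('managers')
client_mgr = ClientManager(address=('whatever', 50000), authkey=b'secret')
client_mgr.connect()
remote_dict = client_mgr.managers()   # proxy to the shared dictionary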
The first process has a few things registered:
BaseManager.register('ManagerClass', ManagerClass)
BaseManager.register('managers', callable = lambda: managers)
managers is just a dictionary. ManagerClass has a method that saves a self-proxy in the dictionary, created like this:
def autoself(self):
    server = getattr(multiprocessing.current_process(), '_manager_server', None)
    classname = self.__class__.__name__
    if server:
        for key, value in server.id_to_obj.items():
            if value[0] == self:
                token = multiprocessing.managers.Token(typeid=classname, address=server.address, id=key)
                proxy = multiprocessing.managers.AutoProxy(token, 'pickle', authkey=server.authkey)
                return proxy
    else:
        return self
Incidentally, if I try to store the ManagerClass object directly in the dictionary and then transfer it, I get
TypeError: can't pickle _thread.lock objects
No great surprise - it's a complicated object and probably has thread locks in there somewhere.
So, I store the self-proxy created with autoself into the dictionary and transfer it.
It almost works, but the authkey doesn't get set right on the receiving end, so it doesn't work. Looks like the authkey gets set to the local process's authkey, because that's the default in AutoProxy if no authkey or manager is specified.
Well, how would it be specified? The dictionary is represented by a proxy object, which calls the remote method items(), which returns a pickled RebuildProxy containing an AutoProxy. Should RebuildProxy figure out what manager it's being called from and pick out the authkey from there? What if the returned proxy object refers to an object on a different process than the one holding the dictionary? Don't know.
I've "fixed" this by hacking BaseProxy's __reduce__ method to always transfer its authkey, regardless of get_spawning_popen(), and by disabling the check in AuthenticationString that prevents it from being pickled. Look in Python's multiprocessing/ directory to make sense of what I'm talking about.
Now it works.
Not really sure what to do about it. What if we've got a complicated setup with multiple processes passing proxy objects around? No reason to assume that a process receiving a proxy object has an authkey for the process that manages the object; it might only have an authkey for the process sending the proxy. Does that mean we need to pass authkeys around with proxy objects? Or am I missing a better way to do this?
I've found a simple way to fix my problem: use the same authentication key for every process on every machine.
Just set the key first thing when I import multiprocessing:
import multiprocessing
multiprocessing.current_process().authkey = b"secret"
No need to change the standard library code.

nidaqmx: prevent task from closing after being altered in function

I am trying to write an API that takes advantage of the python wrapper for NI-DAQmx, and need to have a global list of tasks that can be edited across the module.
Here is what I have tried so far:
1) Created an importable dictionary of tasks which is updated whenever a call is made to ni-daqmx. The function endpoint processes data from an HTTPS request; I promise it's not just a pointless wrapper around the ni-daqmx library itself.
e.g., on startup, the following is created:
#./daq/__init__.py
import nidaqmx
# ... other stuff ...#
TASKS = {}
then, the user can create a task by calling this endpoint
#./daq/task/task.py
from daq import TASKS
# ...

def api_create_task_endpoint(task_id):
    try:
        task = nidaqmx.Task(new_task_name=task_id)
        TASKS[task_id] = task
    except Exception:
        pass  # handle it
Everything up to here works as it should. I can get the task list, and the task stays open. I also tried explicitly calling task.control(nidaqmx.constants.TaskMode.TASK_RESERVE), but the following code gives me the same issue no matter what.
When I try to add channels to the task, it closes at the end of the function call no matter how I set the state.
#./daq/task/channels.py
from daq import TASKS

def api_add_channel_task_endpoint(task_id, channel_type, function):
    # channel_type corresponds to ni-daqmx channel modules (e.g. ai_channels).
    # function corresponds to callable functions (e.g. add_ai_voltage_chan)
    # do some preliminary checks (e.g. task exists, channel type valid)
    channels = get_chans_from_json_post()
    with TASKS[task_id] as task:
        getattr(getattr(task, channel_type), function)(channels)
        # e.g. task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
This is apparently closing the task: when I call api_create_task_endpoint(task_id) again, I receive a DaqResourceWarning saying that the task has been closed and no longer exists.
I similarly tried setting the TaskMode using task.control here, to no avail.
I would like to be able to make edits to the task by storing it in the module-wide TASKS dict, but cannot keep the Task open long enough to do so.
2) I also tried implementing this using the NI-MAX save feature. The issue with this is that tasks cannot be saved unless they already contain channels, which I don't necessarily want to do immediately after creating the task.
I attempted to work around this by adding to the api_create_task_endpoint() some default behavior which just adds a random channel that is removed on the first channel added by the user.
The problem is, I can't find any documentation for a way to remove channels from a task after adding them without a GUI (this is running on CentOS, so a GUI is a non-starter).
Thank you so much for any help!
I haven't used the Python bindings for NI-DAQmx, but
with TASKS[task_id] as task:
looks like it would stop and clear the task immediately after updating it because the program flow leaves the with block and Task.__exit__() executes.
Because you expect these tasks to live while the Python module is in use, my recommendation is to only use task.control() when you need to change a task's state.
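A sketch of how the endpoint might look without the context manager (same names as in the question; only the lookup changes):
#./daq/task/channels.py
from daq import TASKS

def api_add_channel_task_endpoint(task_id, channel_type, function):
    channels = get_chans_from_json_post()
    task = TASKS[task_id]   # plain lookup: no with-block, so Task.__exit__() never runs
    getattr(getattr(task, channel_type), function)(channels)
    # the task stays open in TASKS until you explicitly call task.close()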

Azure Function App - Python - Single connection without re-initializing

I am using Python Function App that is triggered by service bus queue to store the data in SQL Server. I need to handle connection with SQL Server.
I found this link. Specifically, people often initialize a connection outside of the main function and then use it inside the main function. According to the document, the connection can then be re-used. The issue is that Microsoft's tutorial covers only C# and JavaScript.
I have tried with the following sample source code, it runs well, but I do not know if the Function App would create a new connection or not.
import logging

import azure.functions as func

connection = getConnection()

def main(msg: func.ServiceBusMessage):
    # get content of message
    mess = msg.get_body().decode("utf-8")
    logging.info(mess)
    message = eval(str(mess))  # Sensitive
    # handle scenarios
    data = handle_message_from_device(message)
    insert(connection, data)
I want to ask:
With the above source code, does the Function App re-use the connection or create a new one? If it re-uses the connection, does the Function App keep it open for as long as it runs?
How can a Python Function App reuse this connection? Currently, I think that whenever a new message is pushed to the Function App, the main file (__init__.py, by default) is called again, so wouldn't a new connection be created instead?
Thanks in advance :-)
Yes, this should work just fine. connection = getConnection() is called only once, on a (cold) start of the function. From then on the instance stays "warm" for about 5 minutes. If another request comes in - to the same instance of the function - the connection will be reused. After the timeout your function gets recycled; on the next invocation it starts again and another connection object is created.
You can simply test this by adding some logging to your getConnection() method. You'll see that it is only executed on the first start; for subsequent requests over the next few seconds or minutes it will not be called again - unless, of course, your function gets scaled out to additional instances.
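One way to see this for yourself is a sketch like the following (getConnection() and insert() are the helpers from the question, not library functions): the module-level log line appears once per worker instance, while the one inside main appears on every invocation.
import logging

import azure.functions as func

logging.info("Cold start: opening SQL connection")   # logged once per worker instance
connection = getConnection()

def main(msg: func.ServiceBusMessage):
    logging.info("Handling message with existing connection")  # logged on every invocation
    data = msg.get_body().decode("utf-8")
    insert(connection, data)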

Calling Dbus method without expecting a reply

I have a dbus client written in python to call the exposed dbus methods. The code is as follows
bus = dbus.SessionBus()
service = bus.get_object(PANEL_BUS_NAME, PANEL_BUS_OBJECT)
__panelInterface = dbus.Interface(service, PANEL_BUS_INTERFACE)
__panelInterface.SetBTConnected()
The problem is that the first time the method is called, it takes a while for the exposed method to be executed. My understanding is that dbus expects a reply from the method's process but times out. What I fail to understand, however, is that the method executes immediately if it is called again; in other words, the block occurs only the first time. Can somebody recommend a remedy for this behavior and help me understand it?
You might find it useful to debug this with a D-Bus analysis tool like Bustle or dbus-monitor. They will show you when the messages and replies are sent, whether any errors are returned, and where the time is spent.
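If the delay really is the first call blocking on a reply, dbus-python also lets you make the call asynchronously by passing reply_handler and error_handler keyword arguments, so the caller returns immediately while the service processes the request. A sketch against the same interface object as above (note that asynchronous calls need a running main loop, e.g. via dbus.mainloop.glib):
def on_reply():
    pass   # SetBTConnected returns nothing we care about here

def on_error(err):
    print("D-Bus call failed:", err)

# Passing handlers makes the proxy call return immediately instead of
# blocking until the remote method replies or times out.
__panelInterface.SetBTConnected(reply_handler=on_reply, error_handler=on_error)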

Python constantly refresh variable

I don't know if it's a dumb question, but I'm really struggling with solving this problem.
I'm coding with the obd library.
Now my problem is keeping my variables continuously up to date.
For instance, one variable outputs the actual speed of the car.
This variable has to be updated every second or 2 seconds. To do this update I have to run 2 lines of code
cmd = obd.commands.RPM
rpm = connection.query(cmd)
but I have to check the rpm variable inside some while loops and if statements, in real time.
Is there any way to get this done (another class, a thread, or something)? It would really help me take a leap forward in my programming project.
Thanks :)
Use the Async interface instead of the standard OBD connection:
Since the standard query() function is blocking, it can be a hazard for UI event loops. To deal with this, python-OBD has an Async connection object that can be used in place of the standard OBD object. Async is a subclass of OBD, and therefore inherits all of the standard methods. However, Async adds a few in order to control a threaded update loop. This loop will keep the values of your commands up to date with the vehicle. This way, when the user queries the car, the latest response is returned immediately.
The update loop is controlled by calling start() and stop(). To subscribe a command for updating, call watch() with your requested OBDCommand. Because the update loop is threaded, commands can only be watched while the loop is stopped.
import obd

connection = obd.Async()                   # same constructor as 'obd.OBD()'
connection.watch(obd.commands.RPM)         # keep track of the RPM
connection.start()                         # start the async update loop
print(connection.query(obd.commands.RPM))  # non-blocking, returns immediately
http://python-obd.readthedocs.io/en/latest/Async%20Connections/
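With the Async connection in place, the kind of check described in the question might then look like this (a sketch; the 3000 rpm threshold and one-second sleep are arbitrary values, not from the question):
import time

while True:
    response = connection.query(obd.commands.RPM)   # returns the latest cached value
    if not response.is_null() and response.value.magnitude > 3000:
        print("high RPM:", response.value)
    time.sleep(1)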
