How does the DelegatorBot work exactly in TelePot? - python

I'm trying to study the python library Telepot by looking at the counter.py example available here: https://github.com/nickoala/telepot/blob/master/examples/chat/counter.py.
I'm finding it a little bit difficult to understand how the DelegatorBot class actually works.
This is what I think I've understood so far:
1.
I see that initially this class (derived from "ChatHandler" class) is being defined:
class MessageCounter(telepot.helper.ChatHandler):
    def __init__(self, *args, **kwargs):
        super(MessageCounter, self).__init__(*args, **kwargs)
        self._count = 0

    def on_chat_message(self, msg):
        self._count += 1
        self.sender.sendMessage(self._count)
2.
Then a bot is created by instantiating the DelegatorBot class:
bot = telepot.DelegatorBot(TOKEN, [
    pave_event_space()(
        per_chat_id(), create_open, MessageCounter, timeout=10
    ),
])
3.
I understand that a new instance of DelegatorBot is created and put in the variable bot. The first parameter is the token needed by Telegram to authenticate this bot; the second parameter is a list that contains something I don't understand.
I mean this part:
pave_event_space()(
    per_chat_id(), create_open, MessageCounter, timeout=10
)
And then my question is..
Is pave_event_space() a method called that returns a reference to another method? And then this returned method is invoked with the parameters (per_chat_id(), create_open, MessageCounter, timeout=10) ?

Short answer
Yes, pave_event_space() returns a function. Let's call that fn. fn is then invoked with fn(per_chat_id(), create_open, ...), which returns a 2-tuple (seeder function, delegate-producing function).
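In code shape (using the names from the question), the short answer amounts to this sketch:

fn = pave_event_space()  # returns a pair-like function
seeder, delegate_producer = fn(
    per_chat_id(), create_open, MessageCounter, timeout=10)
# DelegatorBot receives a list of such (seeder, delegate-producer) 2-tuples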
If you want to study the code further, this short answer probably is not very helpful ...
Longer answer
To understand what pave_event_space() does and what that series of arguments means, we have to go back to basics and understand what DelegatorBot accepts as arguments.
DelegatorBot's constructor is explained here. Simply put, it accepts a list of 2-tuples (seeder function, delegate-producing function). To reduce verbosity, I am going to call the first element seeder and the second element delegate-producer.
A seeder has this signature: seeder(msg) -> number. For every message received, seeder(msg) gets called to produce a number. If that number is new, the companion delegate-producer (the one that shares the same tuple with the seeder) will get called to produce a thread, which is used to handle the new message. If that number is already occupied by a running thread, nothing is done. In essence, the seeder "categorizes" the message, and a new thread is spawned whenever a message belongs to a new "category".
A delegate-producer has this signature producer(cls, *args, **kwargs) -> Thread. It calls cls(*args, **kwargs) to instantiate a handler object (MessageCounter in your case) and wrap it in a thread, so the handler's methods are executed independently.
(Note: In reality, a seeder does not necessarily return a number and a delegate-producer does not necessarily return a Thread. I have simplified above for clarity. See the reference for a full explanation.)
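To make the simplified picture concrete, here is a hedged sketch of what the two roles boil down to (the _sketch names are mine; the real per_chat_id and create_open in telepot are more general and handle events too):

import threading

def per_chat_id_sketch():
    # Seeder: categorize each message by the chat it came from.
    return lambda msg: msg['chat']['id']

def create_open_sketch(cls, *args, **kwargs):
    # Delegate-producer: instantiate the handler class and wrap it
    # in a thread so it handles the new message independently.
    def produce(msg):
        handler = cls(*args, **kwargs)
        return threading.Thread(target=handler.on_chat_message, args=(msg,))
    return produce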
In earlier days of telepot, a DelegatorBot was usually made by supplying a seeder and a delegate-producer transparently:
bot = DelegatorBot(TOKEN, [
    (per_chat_id(), create_open(MessageCounter, ...))])
Later, I added to handlers (e.g. ChatHandler) the capability to generate their own events (say, a timeout event). Each class of handlers gets its own event space, so different classes' events won't mix. Within each event space, the event objects themselves also have a source id to identify which handler emitted them. This architecture puts some extra requirements on seeders and delegate-producers.
Seeders have to be able to "categorize" events (in addition to external messages) and return the same number that leads to the event emitter (because we don't want to spawn a thread for this event; it's supposed to be handled by the event emitter itself). Delegate-producers also have to pass the appropriate event space to the Handler class (because each Handler class gets a unique event space, generated externally).
For everything to work properly, the same event space has to be supplied to a seeder and its companion delegate-producer, and every pair of (seeder, delegate-producer) has to get a globally unique event space. pave_event_space() ensures these two conditions: it basically patches some extra operations and parameters onto per_chat_id() and create_open(), making sure they are consistent.
Deeper still
How exactly is the "patching" done? And why do I make you do pave_event_space()(...) instead of the more straightforward pave_event_space(...)?
First, recall that our ultimate goal is to have a 2-tuple (per_chat_id(), create_open(MessageCounter, ...)). To "patch" it usually means (1) appending some extra operations to per_chat_id(), and (2) inserting some extra parameters to the call create_open(... more arguments here ...). That means I cannot let the user call create_open(...) directly because, once it is called, I cannot insert extra parameters. I need a more abstract construct in which the user specifies create_open but the call create_open(...) is actually made by me.
Imagine a function named pair, whose signature is pair(per_chat_id(), create_open, ...) -> (per_chat_id(), create_open(...)). In other words, it passes the first argument through as the first tuple element, and creates the second tuple element by making an actual call to create_open(...) with the remaining arguments.
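A minimal sketch of such a pair function (the name pair is hypothetical, as above):

def pair(seeder, producer, *args, **kwargs):
    # Pass the seeder through untouched; make the actual call
    # to the delegate-producer with the remaining arguments.
    return (seeder, producer(*args, **kwargs))

# pair(per_chat_id(), create_open, MessageCounter, timeout=10)
# -> (per_chat_id(), create_open(MessageCounter, timeout=10))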
Now we reach a point where I am unable to explain the source code in words (I have been thinking for 30 minutes). The pseudo-code of pave_event_space looks like this:
def pave_event_space(fn=pair):
    def p(s, d, *args, **kwargs):
        return fn(append_event_space_seeder(s),
                  d, *args, event_space=event_space, **kwargs)
    return p
It takes the function pair, and returns a pair-like function (signature identical to pair), but with a more complex seeder and more parameters tagged on. That's what I meant by "patching".
pave_event_space is the most often-seen "patcher". Other patchers include include_callback_query_chat_id and intercept_callback_query_origin. They all do basically the same kind of thing: take a pair-like function and return another pair-like function, with a more complex seeder and more parameters tagged on. Because the input and output are alike, they can be chained to apply multiple patches. If you look into the callback examples, you will see something like this:
bot = DelegatorBot(TOKEN, [
    include_callback_query_chat_id(
        pave_event_space())(
            per_chat_id(), create_open, Lover, timeout=10),
])
It patches event space stuff, then patches callback query stuff, to enable the seeder (per_chat_id()) and handler (Lover) to work cohesively.
That's all I can say for now. I hope this throws some light on the code. Good luck.

Related

Receiving data in python callback function from dll

I am writing a program in Python that communicates with a spectrometer from Avantes. There are some proprietary dlls available whose code I don't have access to, but they have some decent documentation. I am having some trouble finding a good way to store the data received via callbacks.
The proprietary shared library
Basically, the dll contains a function that I have to call to start measuring and that receives a callback function that will be called whenever the spectrometer has finished a measurement. The function is the following:
int AVS_MeasureCallback(AvsHandle a_hDevice,void (*__Done)(AvsHandle*, int*),short a_Nmsr)
The first argument is a handle object that identifies the spectrometer, the second is the actual callback function and the third is the amount of measurements to be made.
The callback function will then receive another type of handle identifying the spectrometer, and information about the amount of data available after a measurement.
Python library
I am using a library that has Python wrappers for many instruments, including my spectrometer.
def measure_callback(self, num_measurements, callback=None):
    self.sdk.AVS_MeasureCallback(self._handle, callback, num_measurements)
And they also have defined the following decorator:
MeasureCallback = FUNCTYPE(None, POINTER(c_int32), POINTER(c_int32))
The idea is that when the callback function is finally called, this will trigger the get_data() function that will retrieve data from the equipment.
The recommended example is
@MeasureCallback
def callback_fcn(handle, info):
    print('The DLL handle is:', handle.contents.value)
    if info.contents.value == 0:  # equals 0 if everything is okay (see manual)
        print(' callback data:', ava.get_data())

ava.measure_callback(-1, callback_fcn)
My problem
I have to store the received data in a 2D numpy array that I have created somewhere else in my main code, but I can't figure out the best way to update this array with the new data available inside the callback function.
I wondered if I could pass this numpy array as an argument to the callback function, but even then I can't find a good way to do this, since the callback function is expected to have only those two arguments.
Edit 1
I found a possible solution here but I am not sure it is the best way to do it. I'd rather not create a new class just to hold a single numpy array inside.
Edit 2
I actually changed my mind about my approach, because inside my callback I'd like to do many operations with the received data and save the results in many different variables. So, I went back to the class approach mentioned here, where I would basically have a class with all the variables that will somehow be used in the callback function and that would also inherit or have an object of the class ava.
However, as shown in this other question, the self parameter is a problem in this case.
If you don't want to create a new class, you can use a function closure:
# Initialize it however you want
numpy_array = ...

def callback_fcn(handle, info):
    # Do what you want with the value of the variable
    store_data(numpy_array, ...)

# After the callback is called, you can access the changes made to the object
print(get_data(numpy_array))
How this works is that when callback_fcn is defined, it keeps a reference to the enclosing variable numpy_array, so when it's called, it can manipulate that array as if it had been passed as an argument to the function. You get the effect of passing it in, without the callback caller having to worry about it.
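A self-contained toy showing the same capture with a two-argument callback like the question's (all names and array shapes here are hypothetical):

import numpy as np

spectra = np.zeros((10, 2048))  # created somewhere else in the main code
row = {'i': 0}                  # mutable holder so the closure can advance it

def callback_fcn(handle, info):
    # spectra and row are captured from the enclosing scope, even though
    # the caller only ever passes handle and info.
    spectra[row['i'], :] = 1.0  # stand-in for the real measurement data
    row['i'] += 1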
I finally managed to solve my problem with a solution involving a new class and also a closure function to deal with the self parameter, as described here. Besides that, another problem would appear due to garbage collection of the newly created method.
My final solution is:
class spectrometer():
    def measurement_callback(self, handle, info):
        if info.contents.value >= 0:
            timestamp, spectrum = self.ava.get_data()
            self.spectral_data[self.spectrum_index, :] = np.ctypeslib.as_array(spectrum[0:pixel_amount])
            self.timestamps[self.spectrum_index] = timestamp
            self.spectrum_index += 1

    def __init__(self, ava):
        self.ava = ava
        self.measurement_callback = MeasureCallback(self.measurement_callback)

    def register_callback(self, scans, pattern_amount, pixel_amount):
        self.spectrum_index = 0
        self.timestamps = np.empty((pattern_amount), dtype=np.uint32)
        self.spectral_data = np.empty((pattern_amount, pixel_amount), dtype=np.float64)
        self.ava.measure_callback(scans, self.measurement_callback)
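A hedged usage sketch of that class (the argument values are hypothetical; ava is the wrapper object from the question):

spec = spectrometer(ava)
spec.register_callback(scans=-1, pattern_amount=100, pixel_amount=2048)
# ... after the measurements complete, the results are in
# spec.spectral_data and spec.timestamps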

Does python allow me to pass dynamic variables to a decorator at runtime?

I am attempting to integrate a very old system and a newer system at work. The best I can do is to utilize an RSS firehose-type feed the old system provides. The goal is to use this RSS feed to make the other system perform certain actions when certain people do things.
My idea is to wrap a decorator around certain functions to check if the user (a user ID provided in the RSS feed) has permissions in the new system.
My current solution has a lot of functions that look like this, which are called based on an action field in the feed:
actions_dict = {
    ...
    'action1': function1
}

actions_dict[RSSFEED['action_taken']](RSSFEED['user_id'])

def function1(user_id):
    if has_permissions(user_id):
        # Do this function
I want to create a has_permissions decorator that takes the user_id so that I can remove this redundant has_permissions check in each of my functions.
@has_permissions(user_id)
def function1():
    # Do this function
Unfortunately, I am not sure how to write such a decorator. All the tutorials I see have the @has_permissions() line with a hardcoded value, but in my case it needs to be passed at runtime and will be different each time the function is called.
How can I achieve this functionality?
In your question, you've used the name has_permissions both for the check of the user_id and for the wanted decorator, so I'm going with an example where the names are clearer: let's make a decorator that calls the underlying (decorated) function only when the color (a string) is 'green'.
Python decorators are function factories
The decorator itself (if_green in my example below) is a function. It takes a function to be decorated as argument (named function in my example) and returns a function (run_function_if_green in the example). Usually, the returned function calls the passed function at some point, thereby "decorating" it with other actions it might run before or after it, or both.
Of course, it might only conditionally run it, as you seem to need:
def if_green(function):
    def run_function_if_green(color, *args, **kwargs):
        if color == 'green':
            return function(*args, **kwargs)
    return run_function_if_green

@if_green
def print_if_green():
    print('what a nice color!')

print_if_green('red')    # nothing happens
print_if_green('green')  # => what a nice color!
What happens when you decorate a function with the decorator (as I did with print_if_green, here), is that the decorator (the function factory, if_green in my example) gets called with the original function (print_if_green as you see it in the code above). As is its nature, it returns a different function. Python then replaces the original function with the one returned by the decorator.
So in the subsequent calls, it's the returned function (run_function_if_green with the original print_if_green as function) that gets called as print_if_green and which conditionally calls further to that original print_if_green.
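If it helps, the decorator syntax is just sugar for a reassignment; the snippet above is equivalent to:

def print_if_green():
    print('what a nice color!')

# same effect as decorating with @if_green:
print_if_green = if_green(print_if_green)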
Function factories can produce functions that take arguments
The call to the decorator (if_green) only happens once for each decorated function, not every time the decorated functions are called. But as the function returned by the decorator that one time permanently replaces the original function, it gets called instead of the original function every time that original function is invoked. And it can take arguments, if we allow it.
I've given it an argument color, which it uses itself to decide whether to call the decorated function. Further, I've given it the idiomatic vararg arguments, which it uses to call the wrapped function (if it calls it), so that I'm allowed to decorate functions taking an arbitrary number of positional and keyword arguments:
@if_green
def exclaim_if_green(exclamation):
    print(exclamation, 'that IS a nice color!')

exclaim_if_green('red', 'Yay')    # again, nothing
exclaim_if_green('green', 'Wow')  # => Wow that IS a nice color!
The result of decorating a function with if_green is that a new first argument gets prepended to its signature, which will be invisible to the original function (as run_function_if_green doesn't forward it). As you are free in how you implement the function returned by the decorator, it could also call the original function with less, more or different arguments, do any required transformation on them before passing them to the original function or do other crazy stuff.
Concepts, concepts, concepts
Understanding decorators requires knowledge and understanding of various other concepts of the Python language. (Most of which aren't specific to Python, but one might still not be aware of them.)
For brevity's sake (this answer is long enough as it is), I've skipped or glossed over most of them. For a more comprehensive speedrun through (I think) all relevant ones, consult e.g. Understanding Python Decorators in 12 Easy Steps!.
The inputs to decorators (arguments, wrapped function) are rather static in Python. There is no way to dynamically pass an argument the way you're asking. If the user id can be extracted from somewhere at runtime inside the decorator function, however, you can achieve what you want.
In Django for example, things like #login_required expect that the function they're wrapping has request as the first argument, and Request objects have a user attribute that they can utilize. Another, uglier option is to have some sort of global object you can get the current user from (see thread local storage).
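A hedged sketch of that pattern, pulling the dynamic value out of the call itself rather than the decorator line (has_permissions is the question's own check function):

import functools

def check_permissions(f):
    @functools.wraps(f)
    def wrapper(user_id, *args, **kwargs):
        # The user id arrives at call time, not at decoration time.
        if has_permissions(user_id):
            return f(user_id, *args, **kwargs)
    return wrapper

@check_permissions
def function1(user_id):
    ...  # do the actual work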
The short answer is no: you cannot pass dynamic parameters to decorators.
But... you can certainly invoke them programmatically:
First let's create a decorator that can perform a permission check before executing a function:
import functools

def check_permissions(user_id):
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kw):
            if has_permissions(user_id):
                return f(*args, **kw)
            else:
                # what do you want to do if there aren't permissions?
                ...
        return wrapper
    return decorator
Now, when extracting an action from your dictionary, wrap it using the decorator to create a new callable that does an automatic permission check:
checked_action = check_permissions(RSSFEED['user_id'])(
    actions_dict[RSSFEED['action_taken']])
Now, when you call checked_action it will first check the permissions corresponding to the user_id before executing the underlying action.
You can easily work around it. For example:
from functools import wraps

def some_function():
    print("some_function executed")

def some_decorator(decorator_arg1, decorator_arg2):
    def decorate(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            print(decorator_arg1)
            ret = func(*args, **kwargs)
            print(decorator_arg2)
            return ret
        return wrapper
    return decorate

arg1 = "pre"
arg2 = "post"
decorated = some_decorator(arg1, arg2)(some_function)
In [4]: decorated()
pre
some_function executed
post

Callback to method in Python

I'm just starting to learn Python and I have the following problem.
Using a package with method "bind", the following code works:
def callback(data):
    print data

channel.bind(callback)
but when I try to wrap this inside a class:
class myclass:
    def callback(data):
        print data
    def register_callback:
        channel.bind(self.callback)
the callback method is never called. I tried both "self.callback" and just "callback". Any ideas?
It is not clear to me how your code works, as (1) you did not post the implementation of channel.bind, and (2) your second example is incorrect in the definition of register_callback (it is using a self argument that is not part of the list of parameters of the method, and it lacks parentheses).
Nevertheless, remember that methods usually require a "self" parameter, which is implicitly passed every time you run self.function(), as this is converted internally to a function call with self as its first parameter: function(self, ...). Since your callback has just one argument data, this is probably the problem.
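A minimal illustration of the mismatch (Python 2, matching the question's code):

class myclass:
    def callback(data):      # inside a class, `data` will receive `self`
        print data

obj = myclass()
obj.callback('payload')      # TypeError: callback() takes exactly 1 argument (2 given)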
You cannot declare a method bind that is able to accept either a function or a class method (the same problem happens with every OOP language I know: C++, Pascal...).
There are many ways to do this, but, again, without a self-contained example that can be compiled, it is difficult to give suggestions.
You need to pass the self object as well:
def register_callback(self):
    channel.bind(self.callback)
What you're doing is entirely possible, but I'm not sure exactly what your issue is, because your sample code as posted is not even syntactically valid. (The second method has no argument list whatsoever.)
Regardless, you might find the following sample code helpful:
def send_data(callback):
    callback('my_data')

def callback(data):
    print 'Free function callback called with data:', data

# The following prints "Free function callback called with data: my_data"
send_data(callback)

class ClassWithCallback(object):
    def callback(self, data):
        print 'Object method callback called with data:', data
    def apply_callback(self):
        send_data(self.callback)

# The following prints "Object method callback called with data: my_data"
ClassWithCallback().apply_callback()
# Indeed, the following does the same
send_data(ClassWithCallback().callback)
In Python it is possible to use free functions (callback in the example above) or bound methods (self.callback in the example above) in more or less the same situations, at least for simple tasks like the one you've outlined.

Python Twisted's DeferredLock

Can someone provide an example and explain when and how to use Twisted's DeferredLock.
I have a DeferredQueue and I think I have a race condition I want to prevent, but I'm unsure how to combine the two.
Use a DeferredLock when you have a critical section that is asynchronous and needs to be protected from overlapping (one might say "concurrent") execution.
Here is an example of such an asynchronous critical section:
class NetworkCounter(object):
    def __init__(self):
        self._count = 0

    def next(self):
        self._count += 1
        recording = self._record(self._count)
        def recorded(ignored):
            return self._count
        recording.addCallback(recorded)
        return recording

    def _record(self, value):
        return http.GET(
            b"http://example.com/record-count?value=%d" % (value,))
See how two concurrent uses of the next method will produce "corrupt" results:
from __future__ import print_function

counter = NetworkCounter()
d1 = counter.next()
d2 = counter.next()
d1.addCallback(print, "d1")
d2.addCallback(print, "d2")
Gives the result:
2 d1
2 d2
This is because the second call to NetworkCounter.next begins before the first call to that method has finished using the _count attribute to produce its result. The two operations share the single attribute and produce incorrect output as a consequence.
Using a DeferredLock instance will solve this problem by preventing the second operation from beginning until the first operation has completed. You can use it like this:
from twisted.internet.defer import DeferredLock  # DeferredLock lives in twisted.internet.defer

class NetworkCounter(object):
    def __init__(self):
        self._count = 0
        self._lock = DeferredLock()

    def next(self):
        return self._lock.run(self._next)

    def _next(self):
        self._count += 1
        recording = self._record(self._count)
        def recorded(ignored):
            return self._count
        recording.addCallback(recorded)
        return recording

    def _record(self, value):
        return http.GET(
            b"http://example.com/record-count?value=%d" % (value,))
First, notice that the NetworkCounter instance creates its own DeferredLock instance. Each instance of DeferredLock is distinct and operates independently from any other instance. Any code that participates in the use of a critical section needs to use the same DeferredLock instance in order for that critical section to be protected. If two NetworkCounter instances somehow shared state then they would also need to share a DeferredLock instance - not create their own private instance.
Next, see how DeferredLock.run is used to call the new _next method (into which all of the application logic has been moved). Neither NetworkCounter nor the application code using NetworkCounter calls the method that contains the critical section. DeferredLock is given responsibility for doing this. This is how DeferredLock can prevent the critical section from being run by multiple operations at the "same" time. Internally, DeferredLock will keep track of whether an operation has started and not yet finished. It can only keep track of operation completion if the operation's completion is represented as a Deferred though. If you are familiar with Deferreds, you probably already guessed that the (hypothetical) HTTP client API in this example, http.GET, is returning a Deferred that fires when the HTTP request has completed. If you are not familiar with them yet, you should go read about them now.
Once the Deferred that represents the result of the operation fires - in other words, once the operation is done, DeferredLock will consider the critical section "out of use" and allow another operation to begin executing it. It will do this by checking to see if any code has tried to enter the critical section while the critical section was in use and if so it will run the function for that operation.
Third, notice that in order to serialize access to the critical section, DeferredLock.run must return a Deferred. If the critical section is in use and DeferredLock.run is called it cannot start another operation. Therefore, instead, it creates and returns a new Deferred. When the critical section goes out of use, the next operation can start and when that operation completes, the Deferred returned by the DeferredLock.run call will get its result. This all ends up looking rather transparent to any users who are already expecting a Deferred - it just means the operation appears to take a little longer to complete (though the truth is that it likely takes the same amount of time to complete but has it wait a while before it starts - the effect on the wall clock is the same though).
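For comparison with the earlier snippet, driving the locked version the same way now produces serialized results:

from __future__ import print_function

counter = NetworkCounter()
d1 = counter.next()
d2 = counter.next()
d1.addCallback(print, "d1")
d2.addCallback(print, "d2")

# With the lock in place, the second operation cannot start until the
# first finishes, so the expected output becomes:
# 1 d1
# 2 d2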
Of course, you can achieve a concurrent-use safe NetworkCounter more easily than all this by simply not sharing state in the first place:
class NetworkCounter(object):
    def __init__(self):
        self._count = 0

    def next(self):
        self._count += 1
        result = self._count
        recording = self._record(self._count)
        def recorded(ignored):
            return result
        recording.addCallback(recorded)
        return recording

    def _record(self, value):
        return http.GET(
            b"http://example.com/record-count?value=%d" % (value,))
This version moves the state used by NetworkCounter.next to produce a meaningful result for the caller out of the instance dictionary (ie, it is no longer an attribute of the NetworkCounter instance) and into the call stack (ie, it is now a closed over variable associated with the actual frame that implements the method call). Since each call creates a new frame and a new closure, concurrent calls are now independent and no locking of any sort is required.
Finally, notice that even though this modified version of NetworkCounter.next still uses self._count which is shared amongst all calls to next on a single NetworkCounter instance this can't cause any problems for the implementation when it is used concurrently. In a cooperative multitasking system such as the one primarily used with Twisted, there are never context switches in the middle of functions or operations. There cannot be a context switch from one operation to another in between the self._count += 1 and result = self._count lines. They will always execute atomically and you don't need locks around them to avoid re-entrancy or concurrency induced corruption.
These last two points - avoiding concurrency bugs by avoiding shared state and the atomicity of code inside a function - combined means that DeferredLock isn't often particularly useful. As a single data point, in the roughly 75 KLOC in my current work project (heavily Twisted based), there are no uses of DeferredLock.

python decorator losing argument definitions

I am using a block like this:
import functools
import xmlrpclib

def served(fn):
    def wrapper(*args, **kwargs):
        p = xmlrpclib.ServerProxy(SERVER, allow_none=True)
        return (p.__getattr__(fn.__name__)(*args, **kwargs))  # do the function call
    return functools.update_wrapper(wrapper, fn)

@served
def remote_function(a, b):
    pass
to wrap a series of XML-RPC calls into a python module. The "served" decorator gets called on stub functions to expose operations on a remote server.
I'm creating stubs like this with the intention of being able to inspect them later for information about the function, specifically its arguments.
As listed, the code above does not transfer argument information from the original function to the wrapper. If I inspect with inspect.getargspec(remote_function) then I get essentially an empty list, instead of the args=['a', 'b'] I was expecting.
I'm guessing I need to give additional direction to the functools.update_wrapper() call via the optional assigned parameter, but I'm not sure exactly what to add to that tuple to get the effect I want.
The name and the docstring are correctly transferred to the new function object, but can someone advise me on how to transfer argument definitions?
Thanks.
Previous questions here and here suggest that the decorator module can do this.
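For illustration, a hedged sketch of the same stub using the third-party decorator package (pip install decorator; SERVER as in the question):

import xmlrpclib
from decorator import decorator

@decorator
def served(fn, *args, **kwargs):
    # decorator.decorator preserves fn's signature on the wrapper, so
    # inspect.getargspec(remote_function) should report args=['a', 'b'].
    p = xmlrpclib.ServerProxy(SERVER, allow_none=True)
    return getattr(p, fn.__name__)(*args, **kwargs)

@served
def remote_function(a, b):
    pass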
