Upon looking up event handler modules, I came across pydispatcher, which seemed beginner-friendly. My use case for the library is that I want to send a signal if my queue size is over a threshold. The handler function can then start processing and removing items from the queue (and subsequently do a bulk insert into the database).
I would like the handler function to run in the background. I am aware that I could simply override the queue's append method, checking the queue size and calling the handler function asynchronously, but I would like to implement the listener-dispatcher model to keep the logic clean and separated.
Does pydispatcher do this out of the box? If not, is there another module that can help me do this? Would I need to manage the access to the queue, since there might be multiple threads processing and appending to the queue at the same time?
Note that in my use case there is only a single dispatcher and event handler.
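For reference, here is roughly what I have in mind. This is only a sketch; the signal name, threshold, and handler body are made up:
from queue import Queue
from pydispatch import dispatcher

QUEUE_FULL = "queue-full"   # illustrative signal name
THRESHOLD = 100             # illustrative threshold

work_queue = Queue()

def bulk_insert_handler(sender):
    # drain the queue and bulk-insert the items into the database
    items = []
    while not work_queue.empty():
        items.append(work_queue.get())
    # db.bulk_insert(items)  # placeholder for the real insert

dispatcher.connect(bulk_insert_handler, signal=QUEUE_FULL, sender=dispatcher.Any)

def enqueue(item):
    work_queue.put(item)
    if work_queue.qsize() > THRESHOLD:
        # dispatcher.send() calls receivers synchronously in the calling thread,
        # so running the handler in the background would still need a Thread or executor
        dispatcher.send(signal=QUEUE_FULL, sender=enqueue)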
I've recently released the Akuanduba module, which may help you with this task. There's a single example in the repository that should help you understand how it works, and it seems similar to what you want.
Anyway, I'll try to explain here a way of implementing your code with Akuanduba:
First, you could make a dataframe that would hold your queue:
# Mandatory imports
from Akuanduba.core.messenger.macros import *
from Akuanduba.core.constants import *
from Akuanduba.core import NotSet, AkuandubaDataframe
# Your imports go here:
from queue import Queue
class MyQueue (AkuandubaDataframe):
def __init__(self, name):
# Mandatory stuff
AkuandubaDataframe.__init__(self, name)
self.__queue = Queue ()
def getQueue (self):
return self.__queue
def putQueue (self, val):
self.__queue.put(val)
def getQueueSize (self):
return self.__queue.qsize()
#
# "toRawObj" method is a mandatory method that delivers a dict with the desired data
# for file saving
#
def toRawObj(self):
d = {
"Queue" : self.getQueue(),
}
return d
Then you could make a TriggerCondition that would check the queue size:
from Akuanduba.core import StatusCode, NotSet, StatusTrigger
from Akuanduba.core.messenger.macros import *
from Akuanduba.core import TriggerCondition
import time
class CheckQueueSize (TriggerCondition):
def __init__(self, name, maxSize):
TriggerCondition.__init__(self, name)
self._name = name
self._maxSize = maxSize
def initialize(self):
return StatusCode.SUCCESS
def execute (self):
size = self.getContext().getHandler("MyQueue").getQueueSize()
if (size > self._maxSize):
return StatusTrigger.TRIGGERED
else:
return StatusTrigger.NOT_TRIGGERED
def finalize(self):
return StatusCode.SUCCESS
Make a tool that would be your handler function:
# Mandatory imports
from Akuanduba.core import AkuandubaTool, StatusCode, NotSet, retrieve_kw
# Your imports go here:
class SampleTool(AkuandubaTool):
def __init__(self, name, **kw):
# Mandatory stuff
AkuandubaTool.__init__(self, name)
def initialize(self):
# Lock the initialization. After that, this tool can not be initialized once again
self.init_lock()
return StatusCode.SUCCESS
def execute(self,context):
#
# DO SOMETHING HERE
#
# Always return SUCCESS
return StatusCode.SUCCESS
def finalize(self):
self.fina_lock()
return StatusCode.SUCCESS
And finally, make a main script in order to make it all work together:
# Akuanduba imports
from Akuanduba.core import Akuanduba, LoggingLevel, AkuandubaTrigger
from Akuanduba import ServiceManager, ToolManager, DataframeManager
# This sample's imports (assuming each class above lives in a module of the same name)
from MyQueue import MyQueue
from CheckQueueSize import CheckQueueSize
from SampleTool import SampleTool
# Creating your handler
your_handler = SampleTool ("Your Handler's name")
# Creating dataframes
queue = MyQueue ("MyQueue")
# Creating trigger
trigger = AkuandubaTrigger("Sample Trigger Name", triggerType = 'or')
# Append conditions and tools to trigger just adding them
# Tools appended to the trigger will only run when trigger is StatusTrigger.TRIGGERED,
# and will run in the order they've been appended
MAX_QUEUE_SIZE = 100  # your queue-size threshold
trigger += CheckQueueSize( "CheckQueueSize condition", MAX_QUEUE_SIZE )
trigger += your_handler
# Creating Akuanduba
manager = Akuanduba("Akuanduba", level=LoggingLevel.INFO)
# Appending tools
#
# ToolManager += TOOL_1
# ToolManager += TOOL_2
#
ToolManager += trigger
# Appending dataframes
DataframeManager += queue
# Initializing
manager.initialize()
manager.execute()
manager.finalize()
That way, you'd have clean and separated code.
Is it possible to detach a specific event after attaching it to a COM object?
For example, how to deregister the ClassOfHandlers in the following snippet:
from win32com.client import WithEvents
# ...
class ClassOfHandlers():
def OnStart(self):
print("Start observed")
class AnotherClassOfHandlers():
def OnStart(self):
print("Start observed from another")
WithEvents(client, ClassOfHandlers)
# ...
WithEvents(client, AnotherClassOfHandlers)
# ...
# Deregister `ClassOfHandlers`
As a variation on the OP's answer, which avoids a static member variable, it is worth remembering that WithEvents() returns an instance of the handler class.
from win32com.client import WithEvents
def MyOnStart():
print("Start observed")
def MySecondOnStart():
print("Start observed from another")
class ClassOfHandlers():
def __init__(self):
self._fn = MyOnStart
def setStartFunction(self,fn):
self._fn = fn
def OnStart(self):
self._fn()
handler = WithEvents(client, ClassOfHandlers)
#then later
handler.setStartFunction(MySecondOnStart)
Hence you can re-use the handler class for a different client.
Alternatively, you could try opening an issue here; maybe the developers can advise on whether they expose the IConnectionPoint::Unadvise() function, which (I think) would be needed behind the scenes to switch event handlers.
Edit
Based on DS_London's answer, we can benefit from the WithEvents return value, so the combined solution would look like:
from win32com.client import WithEvents
def MyOnStart():
print("Start observed")
def MySecondOnStart():
print("Start observed from another")
class ClassOfHandlers():
def __init__(self):
self._onStarts = []
# self._onStops = []
# ... add a list of functions for each event type
# the following 3 methods are implemented for each event type
def attachStart(self, fn):
self._onStarts.append(fn)
def detachStart(self, fn):
self._onStarts.remove(fn)
def OnStart(self):
for fn in self._onStarts:
fn()
# Always at the beginning
handler = WithEvents(client, ClassOfHandlers)
handler.attachStart(MyOnStart)
# ...
handler.attachStart(MySecondOnStart)
# ...
handler.detachStart(MyOnStart)
Limitation
If support for multiple clients is needed, and thus threading is used, this edit won't work and you would need to fall back to the original answer's approach.
The cause: one needs to pass the ClassOfHandlers to the thread runnable*; however, the thread runnable keeps calling PumpWaitingMessages() until interrupted, so it never gets a chance to return the client handler, which prevents us from attaching/detaching further functions while waiting for messages.
* PumpWaitingMessages() must run on the same thread that connected the ClassOfHandlers to the client, so we can't create the client handler outside the thread and then pass it into the thread runnable.
Following is a snippet that shows this scenario:
def threadRunnable(client, eventsClass, controller):
pythoncom.CoInitializeEx(pythoncom.COINIT_MULTITHREADED)
# Connect the custom events
# The connection needs to be done inside the same thread for PumpWaitingMessages
handler = WithEvents(client, eventsClass)
if controller is None:
print("no control was provided")
controller = { "sleep_time": 1, "running_flag": True}
# With this while we won't be able to return the handler
while controller["running_flag"]:
pythoncom.PumpWaitingMessages()
time.sleep(controller["sleep_time"])
pythoncom.CoUninitialize()
def connectEvents(client, eventsClass, controller=None, runnable=threadRunnable):
flusher = Thread(target=runnable, args=(client,eventsClass,controller))
flusher.daemon = True
flusher.start()
def main():
controller = { "sleep_time": 1, "running_flag": True}
connectEvents(client, ClassOfHandlers, controller)
Original
I'm now able to achieve the desired behavior, by attaching a single permanent observer class and managing the events myself.
For example:
from win32com.client import WithEvents
# ...
class ClassOfHandlers():
OnStarts = []
def OnStart(self):
for handler in ClassOfHandlers.OnStarts:
handler()
def MyOnStart():
print("Start observed")
def MySecondOnStart():
print("Start observed from another")
# Always at the beginning
WithEvents(client, ClassOfHandlers)
ClassOfHandlers.OnStarts.append(MyOnStart)
# ...
ClassOfHandlers.OnStarts.append(MySecondOnStart)
# ...
ClassOfHandlers.OnStarts.remove(MyOnStart)
Hint:
The class variable OnStarts should be changed to an instance variable if the class represents an instantiable COM object; that way each ClassOfHandlers instance has its own handler list, one per instantiated COM object.
One also needs to ensure that WithEvents is called only once for each COM object instance.
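As an illustration of that hint, a small hypothetical sketch (the registry and helper names are mine, not part of win32com):
_handlers_by_obj = {}

def handler_for(com_obj):
    # one handler instance per COM object, so WithEvents is only called once per object
    key = id(com_obj)
    if key not in _handlers_by_obj:
        _handlers_by_obj[key] = WithEvents(com_obj, ClassOfHandlers)
    return _handlers_by_obj[key]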
In the pymodbus library, in server.sync, socketserver.BaseRequestHandler is used and is defined as follows:
class ModbusBaseRequestHandler(socketserver.BaseRequestHandler):
""" Implements the modbus server protocol
This uses the socketserver.BaseRequestHandler to implement
the client handler.
"""
running = False
framer = None
def setup(self):
""" Callback for when a client connects
"""
_logger.debug("Client Connected [%s:%s]" % self.client_address)
self.running = True
self.framer = self.server.framer(self.server.decoder, client=None)
self.server.threads.append(self)
def finish(self):
""" Callback for when a client disconnects
"""
_logger.debug("Client Disconnected [%s:%s]" % self.client_address)
self.server.threads.remove(self)
def execute(self, request):
""" The callback to call with the resulting message
:param request: The decoded request message
"""
try:
context = self.server.context[request.unit_id]
response = request.execute(context)
except NoSuchSlaveException as ex:
_logger.debug("requested slave does not exist: %s" % request.unit_id )
if self.server.ignore_missing_slaves:
return # the client will simply timeout waiting for a response
response = request.doException(merror.GatewayNoResponse)
except Exception as ex:
_logger.debug("Datastore unable to fulfill request: %s; %s", ex, traceback.format_exc() )
response = request.doException(merror.SlaveFailure)
response.transaction_id = request.transaction_id
response.unit_id = request.unit_id
self.send(response)
# ----------------------------------------------------------------------- #
# Base class implementations
# ----------------------------------------------------------------------- #
def handle(self):
""" Callback when we receive any data
"""
raise NotImplementedException("Method not implemented by derived class")
def send(self, message):
""" Send a request (string) to the network
:param message: The unencoded modbus response
"""
raise NotImplementedException("Method not implemented by derived class")
setup() is called when a client connects to the server, and finish() is called when a client disconnects. I want to manipulate these methods (setup() and finish()) from another class, in another file that uses the library (pymodbus), and add some code to the setup and finish functions. I do not intend to modify the library itself, since that may cause strange behavior in specific situations.
--- Edited ---
To clarify, I want the setup function in the ModbusBaseRequestHandler class to work as before and remain untouched, but to add something else to it; this modification should be done in my code, not in the library.
The simplest, and usually best, thing to do is to not manipulate the methods of ModbusBaseRequestHandler, but instead inherit from it and override those methods in your subclass, then just use the subclass wherever you would have used the base class:
class SoupedUpModbusBaseRequestHandler(ModbusBaseRequestHandler):
def setup(self):
# do different stuff
# call super().setup() if you want
# or call socketserver.BaseRequestHandler.setup() to skip over it
# or call neither
Notice that a class statement is just a normal statement, and can go anywhere any other statement can, even in the middle of a method. So, even if you need to dynamically create the subclass because you won't know what you want setup to do until runtime, that's not a problem.
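For instance, a sketch of building such a subclass at runtime (the factory name and the extra_setup hook are hypothetical):
def make_handler_class(extra_setup):
    # Build a request-handler subclass on the fly; extra_setup is whatever callable
    # you only know at runtime (e.g. chosen from configuration).
    class CustomizedHandler(ModbusBaseRequestHandler):
        def setup(self):
            super().setup()      # keep the library's behaviour
            extra_setup(self)    # then run your own code
    return CustomizedHandler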
If you actually need to monkeypatch the class, that isn't very hard—although it is easy to screw things up if you aren't careful.
def setup(self):
# do different stuff
ModbusBaseRequestHandler.setup = setup
If you want to be able to call the normal implementation, you have to stash it somewhere:
_setup = ModbusBaseRequestHandler.setup
def setup(self):
# do different stuff
# call _setup whenever you want
ModbusBaseRequestHandler.setup = setup
If you want to make sure you copy over the name, docstring, etc., you can use functools.wraps:
import functools

@functools.wraps(ModbusBaseRequestHandler.setup)
def setup(self):
# do different stuff
ModbusBaseRequestHandler.setup = setup
Again, you can do this anywhere in your code, even in the middle of a method.
If you need to monkeypatch one instance of ModbusBaseRequestHandler while leaving any other instances untouched, you can even do that. You just have to manually bind the method:
def setup(self):
# do different stuff
myModbusBaseRequestHandler.setup = setup.__get__(myModbusBaseRequestHandler)
If you want to call the original method, or wrap it, or do this in the middle of some other method, etc., it's otherwise basically the same as the last version.
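For example, a sketch combining the two (stashing the original bound method, and using the hypothetical myModbusBaseRequestHandler instance from above):
_original_setup = myModbusBaseRequestHandler.setup   # already bound to the instance

def setup(self):
    # do different stuff, then call the original
    _original_setup()

myModbusBaseRequestHandler.setup = setup.__get__(myModbusBaseRequestHandler)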
It can be done with an interceptor (a wrapping decorator):
from functools import wraps
def interceptor(func):
print('this is executed at function definition time (def my_func)')
@wraps(func)
def wrapper(*args, **kwargs):
print('this is executed before function call')
result = func(*args, **kwargs)
print('this is executed after function call')
return result
return wrapper
@interceptor
def my_func(n):
print('this is my_func')
print('n =', n)
my_func(4)
More explanation can be found here.
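Applied to the question, the same idea could look roughly like this. It is a sketch: it assumes ModbusBaseRequestHandler is importable from pymodbus.server.sync and patches the class in place:
from functools import wraps
from pymodbus.server.sync import ModbusBaseRequestHandler

def intercept(func, before=None, after=None):
    @wraps(func)
    def wrapper(self, *args, **kwargs):
        if before:
            before(self)                 # your extra code before setup()/finish()
        result = func(self, *args, **kwargs)
        if after:
            after(self)                  # your extra code after
        return result
    return wrapper

ModbusBaseRequestHandler.setup = intercept(
    ModbusBaseRequestHandler.setup, before=lambda handler: print("client connecting"))
ModbusBaseRequestHandler.finish = intercept(
    ModbusBaseRequestHandler.finish, after=lambda handler: print("client gone"))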
I struggled to think of a good title so I'll just explain it here. I'm using Python in Maya, which has some event callback options, so you can do something like on save: run function. I have a user interface class, which I'd like it to update when certain events are triggered, which I can do, but I'm looking for a cleaner way of doing it.
Here is a basic example similar to what I have:
class test(object):
def __init__(self, x=0):
self.x = x
def run_this(self):
print self.x
def display(self):
print 'load user interface'
#Here's the main stuff that used to be just 'test().display()'
try:
callbacks = [callback1, callback2, ...]
except NameError:
pass
else:
for i in callbacks:
try:
OpenMaya.MEventMessage.removeCallback(i)
except RuntimeError:
pass
ui = test(5)
callback1 = OpenMaya.MEventMessage.addEventCallback('SomeEvent', ui.run_this)
callback2 = OpenMaya.MEventMessage.addEventCallback('SomeOtherEvent', ui.run_this)
callback3 = ......
ui.display()
The callback persists until Maya is restarted, but you can remove it using removeCallback if you pass it the value returned from addEventCallback. The way I currently have it is to just check whether the variable is set before setting it, which is a lot messier than the previous one-liner test().display().
Would there be a way that I can neatly do it in the function? Something where it'd delete the old one if I ran the test class again or something similar?
There are two ways you might want to try this.
You can have a persistent object which represents your callback manager, and allow it to hook and unhook itself.
import maya.api.OpenMaya as om
import maya.cmds as cmds
om.MEventMessage.getEventNames()
class CallbackHandler(object):
def __init__(self, cb, fn):
self.callback = cb
self.function = fn
self.id = None
def install(self):
if self.id:
print "callback is currently installed"
return False
self.id = om.MEventMessage.addEventCallback(self.callback, self.function)
return True
def uninstall(self):
if self.id:
om.MEventMessage.removeCallback(self.id)
self.id = None
return True
else:
print "callback not currently installed"
return False
def __del__(self):
self.uninstall()
def test_fn(arg):
print "callback fired 2", arg
cb = CallbackHandler('NameChanged', test_fn)
cb.install()
# callback is active
cb.uninstall()
# callback not active
cb.install()
# callback on again
del(cb) # or cb = None
# callback gone again
In this version you'd store the CallbackHandlers you create for as long as you want the callback to persist and then manually uninstall them or let them fall out of scope when you don't need them any more.
Another option would be to create your own object to represent the callbacks and then add or remove any functions you want it to trigger in your own code. This keeps the management entirely on your side instead of relying on the API, which could be good or bad depending on your needs. You'd have an Event() class which was callable (using __call__()), and it would have a list of functions to fire when its __call__() is invoked by Maya. There's an example of the kind of event handler object you'd want here.
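A minimal sketch of such an event object (the names are illustrative):
import maya.api.OpenMaya as om

class Event(object):
    """A callable that fans out to whatever functions are attached to it."""
    def __init__(self):
        self._listeners = []
    def add(self, fn):
        self._listeners.append(fn)
    def remove(self, fn):
        self._listeners.remove(fn)
    def __call__(self, *args, **kwargs):
        for fn in list(self._listeners):
            fn(*args, **kwargs)

# register the event object with Maya once, then manage listeners yourself
ui_refresh = Event()
callback_id = om.MEventMessage.addEventCallback('SomeEvent', ui_refresh)

def refresh_ui(*args):
    pass  # update your interface here

ui_refresh.add(refresh_ui)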
I am updating some code from using libglade to GtkBuilder, which is supposed to be the way of the future.
With gtk.glade, you could call glade_xml.signal_autoconnect(...) repeatedly to connect signals onto objects of different classes corresponding to different windows in the program. However, Builder.connect_signals seems to work only once, and (therefore) gives warnings about any handlers that aren't defined in the first class that's passed in.
I realize I can connect them manually but this seems a bit laborious. (Or for that matter I could use some getattr hackery to let it connect them through a proxy to all the objects...)
Is it a bug that there's no function to hook up handlers across multiple objects? Or am I missing something?
Someone else has a similar problem (http://www.gtkforums.com/about1514.html), which I assume means this can't be done.
Here's what I currently have. Feel free to use it, or to suggest something better:
class HandlerFinder(object):
"""Searches for handler implementations across multiple objects.
"""
# See <http://stackoverflow.com/questions/4637792> for why this is
# necessary.
def __init__(self, backing_objects):
self.backing_objects = backing_objects
def __getattr__(self, name):
for o in self.backing_objects:
if hasattr(o, name):
return getattr(o, name)
else:
raise AttributeError("%r not found on any of %r"
% (name, self.backing_objects))
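For completeness, it is used by handing the finder to connect_signals (the window objects below are hypothetical):
builder.connect_signals(HandlerFinder([main_window, prefs_dialog]))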
I have been looking for a solution to this for some time and found that it can be done by passing a dict of all the handlers to connect_signals.
The inspect module can extract methods using
inspect.getmembers(instance, predicate=inspect.ismethod)
These can then be concatenated into a dictionary using d.update(d3), watching out for duplicate functions such as on_delete.
Example code:
import inspect
...
handlers = {}
for c in [win2, win3, win4, self]: # self is the main window
methods = inspect.getmembers(c, predicate=inspect.ismethod)
handlers.update(methods)
builder.connect_signals(handlers)
This will not pick up alias method names declared using @alias. For an example of how to do that, see the code for Builder.py, at def dict_from_callback_obj.
I'm only a novice, but this is what I do; maybe it can inspire ;-)
I instantiate the major components from a 'control' and pass the builder object, so that each instantiated object can make use of any of the builder's objects (mainwindow in the example) or add to the builder (aboutdialog example). I also pass a dictionary (dic) to which each component adds its "signals".
Then connect_signals(dic) is executed.
Of course, I need to do some manual signal connecting when I need to pass user arguments to the callback method, but those are few.
#modules.control.py
class Control:
def __init__(self):
# Load the builder obj
guibuilder = gtk.Builder()
guibuilder.add_from_file("gui/mainwindow.ui")
# Create a dictionary to store signals from the loaded components
dic = {}
# Instantiate the components...
aboutdialog = modules.aboutdialog.AboutDialog(guibuilder, dic)
mainwin = modules.mainwindow.MainWindow(guibuilder, dic, self)
...
guibuilder.connect_signals(dic)
del dic
#modules/aboutdialog.py
class AboutDialog:
def __init__(self, builder, dic):
dic["on_OpenAboutWindow_activate"] = self.on_OpenAboutWindow_activate
self.builder = builder
def on_OpenAboutWindow_activate(self, menu_item):
self.builder.add_from_file("gui/aboutdialog.ui")
self.aboutdialog = self.builder.get_object("aboutdialog")
self.aboutdialog.run()
self.aboutdialog.destroy()
#modules/mainwindow.py
class MainWindow:
def __init__(self, builder, dic, controller):
self.control = controller
# get gui xml and/or signals
dic["on_file_new_activate"] = self.control.newFile
dic["on_file_open_activate"] = self.control.openFile
dic["on_file_save_activate"] = self.control.saveFile
dic["on_file_close_activate"] = self.control.closeFile
...
# get needed gui objects
self.mainWindow = builder.get_object("mainWindow")
...
Edit: alternative to auto attach signals to callbacks:
Untested code
def start_element(name, attrs):
if name == "signal":
if attrs["handler"]:
handler = attrs["handler"]
#Insert code to verify if handler is part of the collection
#we want.
self.handlerList.append(handler)
def extractSignals(uiFile):
import xml.parsers.expat
p = xml.parsers.expat.ParserCreate()
p.StartElementHandler = self.start_element
p.ParseFile(uiFile)
self.handlerList = []
extractSignals(uiFile)
for handler in handlerList:
dic[handler] = eval(''.join(["self.", handler, "_cb"]))
builder.connect_signals({
    "on_window_destroy" : gtk.main_quit,
    "on_buttonQuit_clicked" : gtk.main_quit
})
Maybe it just doesn't exist, as I cannot find it. But using Python's logging package, is there a way to query a Logger to find out how many times a particular function was called? For example, how many errors/warnings were reported?
The logging module doesn't appear to support this. In the long run you'd probably be better off creating a new module, and adding this feature via sub-classing the items in the existing logging module to add the features you need, but you could also achieve this behavior pretty easily with a decorator:
class CallCounted:
"""Decorator to determine number of calls for a method"""
def __init__(self,method):
self.method=method
self.counter=0
def __call__(self,*args,**kwargs):
self.counter+=1
return self.method(*args,**kwargs)
import logging
logging.error = CallCounted(logging.error)
logging.error('one')
logging.error('two')
print(logging.error.counter)
Output:
ERROR:root:one
ERROR:root:two
2
You can also add a new Handler to the logger which counts all calls:
class MsgCounterHandler(logging.Handler):
level2count = None
def __init__(self, *args, **kwargs):
super(MsgCounterHandler, self).__init__(*args, **kwargs)
self.level2count = {}
def emit(self, record):
l = record.levelname
if (l not in self.level2count):
self.level2count[l] = 0
self.level2count[l] += 1
You can then use the dict afterwards to output the number of calls.
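For example (a small usage sketch), attach it alongside your other handlers and read the counts when you are done:
import logging

counter = MsgCounterHandler()
logging.getLogger().addHandler(counter)

logging.warning("something odd")
logging.error("something bad")

print(counter.level2count)   # e.g. {'WARNING': 1, 'ERROR': 1}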
There's a warnings module that, to an extent, does some of that.
You might want to add this counting feature to a customized Handler. The problem is that there are a million handlers and you might want to add it to more than one kind.
You might want to add it to a Filter, since that's independent of the Handlers in use.
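A sketch of that Filter-based approach (attach it to whichever logger or handler should be counted):
import logging

class CountingFilter(logging.Filter):
    def __init__(self, name=''):
        super().__init__(name)
        self.counts = {}
    def filter(self, record):
        self.counts[record.levelname] = self.counts.get(record.levelname, 0) + 1
        return True   # never drop the record, just count it

counting = CountingFilter()
logging.getLogger().addFilter(counting)
logging.error("boom")
# counting.counts -> {'ERROR': 1}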
Based on Rolf's answer and how to write a dictionary to a file, here is another solution, which stores the counts in a JSON file. If the JSON file exists and continue_counts=True, it restores the counts on initialisation.
import json
import logging
import logging.handlers
import os
class MsgCounterHandler(logging.Handler):
"""
A handler class which counts the logging records by level and periodically writes the counts to a json file.
"""
level2count_dict = None
def __init__(self, filename, continue_counts=True, *args, **kwargs):
"""
Initialize the handler.
PARAMETER
---------
continue_counts: bool, optional
defines if the counts should be loaded and restored if the json file exists already.
"""
super(MsgCounterHandler, self).__init__(*args, **kwargs)
filename = os.fspath(filename)
self.baseFilename = os.path.abspath(filename)
self.continue_counts = continue_counts
# if another instance of this class is created, get the actual counts
if self.level2count_dict is None:
self.level2count_dict = self.load_counts_from_file()
def emit(self, record):
"""
Counts a record.
If necessary, add the level to the dict.
Then flush the counts to the JSON file.
"""
level = record.levelname
if level not in self.level2count_dict:
self.level2count_dict[level] = 0
self.level2count_dict[level] += 1
self.flush()
def flush(self):
"""
Flushes the dictionary.
"""
self.acquire()
try:
json.dump(self.level2count_dict, open(self.baseFilename, 'w'))
finally:
self.release()
def load_counts_from_file(self):
"""
Load the dictionary from a json file or create an empty dictionary
"""
if os.path.exists(self.baseFilename) and self.continue_counts:
try:
level2count_dict = json.load(open(self.baseFilename))
except Exception as a:
logging.warning(f'Failed to load counts with: {a}')
level2count_dict = {}
else:
level2count_dict = {}
return level2count_dict
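Hypothetical usage (the file name is up to you):
import logging

counter = MsgCounterHandler("log_counts.json")
logging.getLogger().addHandler(counter)
logging.error("boom")
# the counts are flushed to log_counts.json after every record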