I'm trying to override the DaemonRunner class in the python-daemon library (found here: https://pypi.python.org/pypi/python-daemon/).
The DaemonRunner responds to command line arguments for start, stop, and restart, but I want to add a fourth option for status.
The class I want to override looks something like this:
class DaemonRunner(object):
    def _start(self):
        ...  # etc.

    action_funcs = {'start': _start}
I've tried to override it like this:
class StatusDaemonRunner(DaemonRunner):
    def _status(self):
        ...

    DaemonRunner.action_funcs['status'] = _status
This works to some extent, but the problem is that every instance of DaemonRunner now has the new behaviour. Is it possible to override it without modifying every instance of DaemonRunner?
I would override action_funcs so that it becomes an instance attribute of StatusDaemonRunner rather than a class attribute shared with DaemonRunner.
In terms of code I would do:
class StatusDaemonRunner(runner.DaemonRunner):
    def __init__(self, app):
        self.action_funcs = runner.DaemonRunner.action_funcs.copy()
        self.action_funcs['status'] = StatusDaemonRunner._status
        super(StatusDaemonRunner, self).__init__(app)

    def _status(self):
        pass  # do your stuff
Indeed, if we look at the getter in the implementation of DaemonRunner (here) we can see that it accesses the attribute through self:
def _get_action_func(self):
    """ Return the function for the specified action.

        Raises ``DaemonRunnerInvalidActionError`` if the action is
        unknown.

        """
    try:
        func = self.action_funcs[self.action]
    except KeyError:
        raise DaemonRunnerInvalidActionError(
            u"Unknown action: %(action)r" % vars(self))
    return func
Hence the previous code should do the trick.
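For completeness, a minimal usage sketch (the MyApp object and its paths are hypothetical, not part of the question; python-daemon's runner expects the wrapped app to provide these attributes and a run() method):

from daemon import runner

class MyApp(object):
    stdin_path = '/dev/null'
    stdout_path = '/dev/null'
    stderr_path = '/dev/null'
    pidfile_path = '/tmp/myapp.pid'  # hypothetical path
    pidfile_timeout = 5

    def run(self):
        pass  # the daemon's main loop goes here

# e.g. invoked as: python myapp.py status
StatusDaemonRunner(MyApp()).do_action()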
Related
I have a large Python 3.6 system where multiple processes and threads interact with each other and the user. Simplified, there is a Scheduler instance (subclasses threading.Thread) and a Worker instance (subclasses multiprocessing.Process). Both objects run for the entire duration of the program.
The user interacts with the Scheduler by adding Task instances and the Scheduler passes the task to the Worker at the correct moment in time. The worker uses the information contained in the task to do its thing.
Below is some stripped-down and simplified code from the project:
import threading
import multiprocessing

class Task:
    def __init__(self, name: str):
        self.name = name
        self.state = 'idle'

class Worker(multiprocessing.Process):
    def __init__(self):
        super().__init__()
        self.current_task = None
        self.start()

    def change_task(self, new_task: Task):
        self.current_task = new_task
        self.current_task.state = 'accepted-idle'

    def run(self):
        while True:
            # Do stuff until the current task is updated
            self.current_task.state = 'accepted-running'
            # Task is running
            self.current_task.state = 'finished'

class Scheduler(threading.Thread):
    def __init__(self, worker: Worker):
        super().__init__()
        self.worker = worker
        self.start()

    def run(self):
        while True:
            # Do stuff until the user schedules a new task
            task = Task('example')  # <-- In reality the Task instance is not created here; the thread gets it from elsewhere
            task.state = 'scheduled'
            self.worker.change_task(task)
            # Do stuff until task.state == 'finished'
The system used to be structured so that the task contained multiple multiprocessing.Event objects, one for each of its possible states. Back then the worker was not passed the whole Task instance but each of the task's attributes individually. Since they were all multiprocessing-safe this worked, with a caveat: the events changed in worker.run had to be created in worker.run and passed back to the task object for it to work. Not only is this a less than ideal solution, it no longer works with some changes I am making to the project.
Back to the current state of the project, as described by the Python code above. As is, this will never work because nothing makes it multiprocessing-safe at the moment. So I implemented a Proxy/BaseManager structure so that when a new Task is needed, the system gets it from the multiprocessing manager. I use this structure in a slightly different way elsewhere in the project as well. The issue is that worker.run never sees that self.current_task has been updated; it remains None. I expected the proxy to fix this, but clearly I am mistaken.
import types
import typing
from multiprocessing.managers import BaseManager, NamespaceProxy

def Proxy(target: typing.Type) -> typing.Type:
    """
    Normally a Manager only exposes object methods. A NamespaceProxy can be used when registering the object with
    the manager to expose all the attributes. This also works for attributes created at runtime.
    https://stackoverflow.com/a/68123850/8353475

    1. Instead of exposing all the attributes manually, we effectively override __getattr__ to do it dynamically.
    2. Instead of defining a class that subclasses NamespaceProxy for each specific object class that needs to be
    proxied, this method is used to do it dynamically. The target parameter should be the class of the object you want
    to generate the proxy for. The generated proxy class will be returned.
    Example usage: FooProxy = Proxy(Foo)

    :param target: The class of the object to build the proxy class for
    :return: The generated proxy class
    """

    # __getattr__ is called when an attribute 'bar' is accessed on 'foo' and is not found, e.g. 'foo.bar'. 'bar' can
    # be a class method as well as a variable. The call gets rerouted from the base object to this proxy, where it is
    # processed.
    def __getattr__(self, key):
        result = self._callmethod('__getattribute__', (key,))
        # If the attribute call was for a method we need some further processing
        if isinstance(result, types.MethodType):
            # A wrapper around the method that passes the arguments, actually calls the method and returns the result.
            # Note that at this point wrapper() does not get called, just defined.
            def wrapper(*args, **kwargs):
                # Call the method and pass the return value along
                return self._callmethod(key, args, kwargs)
            # Return the wrapper method (not the result, but the method itself)
            return wrapper
        else:
            # If the attribute call was for a variable it can be returned as is
            return result

    dic = {'types': types, '__getattr__': __getattr__}
    proxy_name = target.__name__ + "Proxy"
    ProxyType = type(proxy_name, (NamespaceProxy,), dic)
    # This is a tuple of all the attributes that are/will be exposed. We copy all of them from the base class
    ProxyType._exposed_ = tuple(dir(target))
    return ProxyType
class TaskManager(BaseManager):
    pass

TaskProxy = Proxy(Task)
TaskManager.register('get_task', callable=Task, proxytype=TaskProxy)
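For reference, a minimal usage sketch of the manager (the task name 'example' and the startup location are assumptions, not part of the original code):

if __name__ == '__main__':
    manager = TaskManager()
    manager.start()
    # get_task forwards its arguments to Task.__init__, so this creates
    # Task('example') in the manager process and returns a TaskProxy to it.
    task = manager.get_task('example')
    task.state = 'scheduled'  # attribute access is routed through the proxy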
I struggled to think of a good title so I'll just explain it here. I'm using Python in Maya, which has some event callback options, so you can do something like "on save, run this function". I have a user interface class which I'd like to update when certain events are triggered. I can do that already, but I'm looking for a cleaner way of doing it.
Here is a basic example similar to what I have:
class test(object):
    def __init__(self, x=0):
        self.x = x

    def run_this(self):
        print self.x

    def display(self):
        print 'load user interface'
#Here's the main stuff that used to be just 'test().display()'
try:
    callbacks = [callback1, callback2, ...]
except NameError:
    pass
else:
    for i in callbacks:
        try:
            OpenMaya.MEventMessage.removeCallback(i)
        except RuntimeError:
            pass

ui = test(5)
callback1 = OpenMaya.MEventMessage.addEventCallback('SomeEvent', ui.run_this)
callback2 = OpenMaya.MEventMessage.addEventCallback('SomeOtherEvent', ui.run_this)
callback3 = ......
ui.display()
The callback persists until Maya is restarted, but you can remove it with removeCallback if you pass it the value returned from addEventCallback. What I have currently just checks whether the variables are set before setting them again, which is a lot messier than the previous one line of test().display().
Would there be a way that I can neatly do it in the function? Something where it'd delete the old one if I ran the test class again or something similar?
There are two ways you might want to try this.
You can have a persistent object which represents your callback manager, and allow it to hook and unhook itself.
import maya.api.OpenMaya as om
import maya.cmds as cmds

om.MEventMessage.getEventNames()

class CallbackHandler(object):
    def __init__(self, cb, fn):
        self.callback = cb
        self.function = fn
        self.id = None

    def install(self):
        if self.id:
            print "callback is currently installed"
            return False
        self.id = om.MEventMessage.addEventCallback(self.callback, self.function)
        return True

    def uninstall(self):
        if self.id:
            om.MEventMessage.removeCallback(self.id)
            self.id = None
            return True
        else:
            print "callback not currently installed"
            return False

    def __del__(self):
        self.uninstall()

def test_fn(arg):
    print "callback fired 2", arg

cb = CallbackHandler('NameChanged', test_fn)
cb.install()
# callback is active
cb.uninstall()
# callback not active
cb.install()
# callback on again
del(cb)  # or cb = None
# callback gone again
In this version you'd store the CallbackHandlers you create for as long as you want the callback to persist and then manually uninstall them or let them fall out of scope when you don't need them any more.
Another option would be to create your own object to represent the callbacks and then add or remove any functions you want it to trigger in your own code. This keeps the management entirely on your side instead of relying on the API, which could be good or bad depending on your needs. You'd have an Event() class which is callable (by implementing __call__()) and which keeps a list of functions to fire when its __call__() is invoked by Maya; a sketch follows below. There's an example of the kind of event handler object you'd want here.
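A minimal sketch of that idea (the Event class and its method names are hypothetical, not part of the Maya API):

class Event(object):
    # A multicast-delegate style event: Maya invokes the Event instance
    # once, and the Event fans the call out to every registered function.
    def __init__(self):
        self._handlers = []

    def add(self, fn):
        if fn not in self._handlers:
            self._handlers.append(fn)

    def remove(self, fn):
        if fn in self._handlers:
            self._handlers.remove(fn)

    def __call__(self, *args):
        for fn in self._handlers:
            fn(*args)

# Register the Event itself with Maya once; from then on you only
# add/remove your own functions and never touch the Maya callback id.
on_name_changed = Event()
callback_id = om.MEventMessage.addEventCallback('NameChanged', on_name_changed)
on_name_changed.add(test_fn)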
I want to streamline/reduce my code, so I try to put initializations of classes with repeated parameters in their own, extended classes. This is a REST API based on Pyramid & Cornice.
How would I initialize a pyramid.httpexceptions.HTTPUnauthorized when I'm always adding the same headers on initialization? This also applies to other HTTP responses where I initialize them repeatedly without changing their parameters.
Currently I've come up with this to extend the class:
class _401(HTTPUnauthorized):
    def basic_jwt_header(self):
        self.headers.add('WWW-Authenticate', 'JWT')
        self.headers.add('WWW-Authenticate', 'Basic realm="Please log in"')
        return self

    def jwt_header(self):
        self.headers.add('WWW-Authenticate', 'JWT')
        return self
which I use in a view like this:
@forbidden_view_config()
def authenticate(request):
    response = _401()
    return _401.basic_jwt_header(response)
But it does not feel and look right. Is there a better, cleaner way?
Create an __init__ method on the class:
class _401(HTTPUnauthorized):
    def __init__(self):
        # call base class __init__ first, which will set up the
        # headers instance variable
        super(_401, self).__init__()
        # in Python 3, just use this:
        # super().__init__()

        # now add the headers that you always enter
        self.headers.add('WWW-Authenticate', 'JWT')
        self.headers.add('WWW-Authenticate', 'Basic realm="Please log in"')

resp = _401()
print resp.headers
Since you are using two different methods after instantiating your _401 instance, you might be better off with class-level factory methods, which do both the instance creation and the header setup:
class _401(HTTPUnauthorized):
    @classmethod
    def basic_jwt_header(cls):
        ret = cls()
        ret.headers.add('WWW-Authenticate', 'JWT')
        ret.headers.add('WWW-Authenticate', 'Basic realm="Please log in"')
        return ret

    @classmethod
    def jwt_header(cls):
        ret = cls()
        ret.headers.add('WWW-Authenticate', 'JWT')
        return ret

resp = _401.basic_jwt_header()
print resp.headers
Now there's no need to define __init__ or call super(). We use cls instead of the explicit _401 class to support any future subclassing of _401.
I have 3 classes defined this way:
class Device:
    # some methods

class SSH:
    def connect(self, type):
        # code

    def execute(self, cmd):
        # code

class Netconf:
    def connect(self, type):
        # code

    def execute(self, cmd):
        # code
Note SSH and Netconf classes have same method names but they do things differently.
I have an instance of the Device class and would like to access the methods like this:
d = Device()
d.connect('cli')      # <-- This should call the SSH connect method, and subsequently
                      #     d.execute(cmd) should call the execute method from the SSH class too.
d.connect('netconf')  # <-- This should call the Netconf connect method, and subsequently
                      #     d.execute(cmd) should call the execute method from the Netconf class too.
The question is - how do I make it happen? I want to be able to use methods of SSH/Netconf class on Device class instance 'd' based on the input.
You can do this by storing the kind of device connected in a private Device attribute and then forwarding most method calls to it via a custom __getattr__() method. This is a little tricky because the target device is defined in the connect() method (as opposed to in the Device.__init__() initializer).
I also changed the variable you had named type to kind to avoid shadowing the built-in of the same name.
class Device(object):
    def connect(self, kind):
        if kind == 'cli':
            target = self._target = SSH()
        elif kind == 'netconf':
            target = self._target = Netconf()
        else:
            raise ValueError('Unknown device {!r}'.format(kind))
        return target.connect(kind)

    def __getattr__(self, name):
        return getattr(self._target, name)

class SSH(object):
    def connect(self, kind):
        print('SSH.connect called with kind {!r}'.format(kind))

    def execute(self, cmd):
        print('SSH.execute called with cmd {!r}'.format(cmd))

class Netconf(object):
    def connect(self, kind):
        print('Netconf.connect called with kind {!r}'.format(kind))

    def execute(self, cmd):
        print('Netconf.execute called with cmd {!r}'.format(cmd))

d = Device()
d.connect('cli')
d.execute('cmd1')
d.connect('netconf')
d.execute('cmd2')
Output:
SSH.connect called with kind 'cli'
SSH.execute called with cmd 'cmd1'
Netconf.connect called with kind 'netconf'
Netconf.execute called with cmd 'cmd2'
You should implement the Strategy Pattern: the connect() method should instantiate the appropriate class (detach()ing from the previous one if required) and store it, and the other methods should delegate to the stored object. A sketch follows below.
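A minimal sketch of that suggestion, assuming a hypothetical detach() cleanup method on SSH and Netconf:

class Device(object):
    # Strategy Pattern: _strategy holds the current connection strategy
    # (an SSH or Netconf instance) and the other methods delegate to it.
    _strategies = {'cli': SSH, 'netconf': Netconf}

    def __init__(self):
        self._strategy = None

    def connect(self, kind):
        if self._strategy is not None:
            self._strategy.detach()  # hypothetical cleanup hook
        self._strategy = self._strategies[kind]()
        return self._strategy.connect(kind)

    def execute(self, cmd):
        return self._strategy.execute(cmd)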
I've a class Client which has many methods:
class Client:
    def compute(self, arg):
        # code

    # more methods
All the methods of this class run synchronously. I want to run them asynchronously. There are many ways to accomplish this, but I'm thinking along these lines:
AsyncClient = make_async(Client)  # make all methods of Client async, etc!
client = AsyncClient()            # create an instance of AsyncClient
client.async_compute(arg)         # compute asynchronously
client.compute(arg)               # synchronous method should still exist!
Alright, that looks ambitious, but I feel it can be done.
So far I've written this:
def make_async(cls):
    class async_cls(cls):  # derive from the given class
        def __getattr__(self, attr):
            for i in dir(cls):
                if ("async_" + i) == attr:
                    # THE PROBLEM IS HERE
                    # how to get the method with name <i>?
                    return cls.__getattr__(i)  # DOES NOT WORK
    return async_cls
As the comment in the code above says, the problem is how to get a method given its name as a string. How do I do that? Once I have the method, I will wrap it in an async_caller method, etc.; the rest of the work I hope I can do myself.
__getattr__ only works on class instances, not on the class itself. Use getattr(cls, method_name) instead; it will solve the problem:
getattr(cls, method_name)
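Putting it together, a minimal sketch of the corrected make_async (the thread-based async_caller is an assumption standing in for whatever wrapper you end up writing):

import threading

def make_async(cls):
    class async_cls(cls):  # derive from the given class
        def __getattr__(self, attr):
            prefix = "async_"
            if attr.startswith(prefix) and hasattr(cls, attr[len(prefix):]):
                method = getattr(cls, attr[len(prefix):])  # the fix
                def async_caller(*args, **kwargs):
                    # Sketch only: run the original method on a thread; a
                    # real implementation might return a future instead.
                    t = threading.Thread(target=method, args=(self,) + args, kwargs=kwargs)
                    t.start()
                    return t
                return async_caller
            raise AttributeError(attr)
    return async_cls

AsyncClient = make_async(Client)
client = AsyncClient()
client.async_compute('arg')  # runs Client.compute on a background thread
client.compute('arg')        # the synchronous method still exists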