A Python library provides a function create_object that creates an object of type OriginalClass.
I would like to create my own class that takes the output of create_object and adds extra logic on top of what create_object already does, while the new custom object keeps all the properties of the base object.
So far I've attempted the following:
class MyClass(OriginalClass):
    def __init__(self, *args, **kwargs):
        super(MyClass, self).__init__(*args, **kwargs)
This does not accomplish what I have in mind, since the function create_object is never called, so the extra logic it handles is never executed.
Also, I do not want to attach the output of create_object to an attribute of MyClass, as in self.myobject = create_object(), since I want that behaviour to be available directly on any instance of MyClass.
What would be the best way to achieve that functionality in Python? Does that correspond to an existing design pattern?
I am new to Python OOP, so the description may be too vague; please feel free to ask for a more in-depth description of any of the vague parts.
Try this:
class MyClass(OriginalClass):
    def __init__(self, custom_arg, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.init(custom_arg)

    def init(self, custom_arg):
        # add subclass initialization logic here
        self._custom_arg = custom_arg

    def my_class_method(self):
        pass
obj = create_object()
obj.__class__ = MyClass
obj.init(custom_arg)
obj.original_class_method()
obj.my_class_method()
You can change the __class__ attribute of an object if you know what you're doing.
If I were you, I would consider using the Adapter design pattern. It may take longer to code, but it's easier to maintain and understand.
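For illustration, a minimal sketch of what such an adapter might look like here (my own example, not from the question; extra_logic and the attribute names are placeholders, only create_object comes from the question):

class OriginalClassAdapter:
    """Wraps the fully initialized object returned by create_object."""

    def __init__(self, *args, **kwargs):
        # let the library factory do all of its setup work
        self._wrapped = create_object(*args, **kwargs)

    def __getattr__(self, name):
        # delegate any attribute we don't define to the wrapped object
        return getattr(self._wrapped, name)

    def extra_logic(self):
        # the additional behaviour the question asks for goes here
        ...

An instance then behaves like the original object (calls such as obj.original_class_method() are forwarded through __getattr__) while also offering the extra logic.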
Looking at the original code, I would have implemented the create_object functions as class methods.
class SqueezeNet(nn.Module):
    ...

    @classmethod
    def squeezenet1_0(cls, pretrained: bool = False, progress: bool = True, **kwargs: Any) -> "SqueezeNet":
        return cls._squeezenet('1_0', pretrained, progress, **kwargs)

    @classmethod
    def squeezenet1_1(cls, pretrained: bool = False, progress: bool = True, **kwargs: Any) -> "SqueezeNet":
        return cls._squeezenet('1_1', pretrained, progress, **kwargs)

    @classmethod
    def _squeezenet(cls, version: str, pretrained: bool, progress: bool, **kwargs: Any) -> "SqueezeNet":
        model = cls(version, **kwargs)
        if pretrained:
            arch = 'squeezenet' + version
            state_dict = load_state_dict_from_url(model_urls[arch],
                                                  progress=progress)
            model.load_state_dict(state_dict)
        return model
So what does the class method do? It just instantiates the object as normal, but then calls a particular method on it before returning it. As such, there's nothing to do in your subclass: calling MySubclass._squeezenet would instantiate your subclass, not SqueezeNet. If you need to customize anything else, you can override _squeezenet in your own class, using super()._squeezenet to do the parent creation first before modifying the result.
class MySubclass(SqueezeNet):
    @classmethod
    def _squeezenet(cls, *args, **kwargs):
        model = super()._squeezenet(*args, **kwargs)
        # Do MySubclass-specific things to model here
        return model
But in the actual code, _squeezenet isn't a class method; it's a regular module-level function. There's not much you can do except patch it at runtime, which is hopefully something you can do before anything tries to call it. For example:
import torchvision.models.squeezenet
from torchvision.models.squeezenet import load_state_dict_from_url, model_urls

def _new_squeezenet(version, pretrained, progress, **kwargs):
    model = MySubclass(version, **kwargs)
    # Maybe more changes specific to your code here. Specifically,
    # you might want to provide your own URL rather than one from
    # model_urls, etc.
    if pretrained:
        arch = 'squeezenet' + version
        state_dict = load_state_dict_from_url(model_urls[arch],
                                              progress=progress)
        model.load_state_dict(state_dict)
    return model

torchvision.models.squeezenet._squeezenet = _new_squeezenet
The lesson here is that not everything is designed to be easily subclassed.
I have two classes, Manager and DataManager, simplified in the example below:
import numpy as np

class Manager:
    def __init__(self, value, delay_init=True):
        self.value = value
        self.is_init = False
        self.data = None
        if not delay_init:
            self._initialize()

    @staticmethod
    def delayed_init(fn):
        def wrapped_delayed_init(obj, *args, **kwargs):
            if not obj.is_init:
                obj.data = np.random.randn(obj.value, obj.value)
                obj.is_init = True
            return fn(obj, *args, **kwargs)
        return wrapped_delayed_init

    @delayed_init.__get__(object)
    def _initialize(self):
        pass

class DataManager(Manager):
    def __init__(self, value):
        super().__init__(value)

    @Manager.delayed_init
    def calculate_mean(self):
        return np.mean(self.data)

data_manager = DataManager(100)
assert data_manager.data is None
mean = data_manager.calculate_mean()
What my code needs to do is pass the method calculate_mean as an argument to some other function as part of a test suite. In order to do this I need to create an instance of DataManager. What I must avoid is the time incurred by full instance creation (since it involves downloading data), so I delegate this task to a function in the parent class called delayed_init. A subset of the methods belonging to DataManager require delayed_init to have been run, so I choose to decorate them with delayed_init to ensure it is run whenever 1) another method requires it and 2) it has not already been run.
Now my problem: currently it appears I need to write the decorator explicitly as @Manager.delayed_init, but I would rather write it as @<parent>.delayed_init. In my opinion it is cleaner not to spell out a given type when the type is always the parent. However, I cannot find a way to properly reference the parent class before an instance/object is created. Is it possible to access the parent class without creating any instances?
Thank you for the assistance.
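One way to sidestep the naming problem entirely, sketched here as a suggestion rather than something from the thread: define delayed_init as a module-level function instead of a staticmethod, so no class qualification is needed in Manager or any subclass.

import numpy as np

def delayed_init(fn):
    """Run the deferred initialization before fn, if it hasn't run yet."""
    def wrapped_delayed_init(obj, *args, **kwargs):
        if not obj.is_init:
            obj.data = np.random.randn(obj.value, obj.value)
            obj.is_init = True
        return fn(obj, *args, **kwargs)
    return wrapped_delayed_init

class DataManager(Manager):  # Manager as defined in the question
    @delayed_init  # no Manager. prefix required
    def calculate_mean(self):
        return np.mean(self.data)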
So I have a class (let's call it ParamClass) which requires a parameter for initialization, and that parameter is something that should be available to the user to configure via some option-setting interface.
ParamClass knows nothing about the configuration interface or how to read configuration values. So I made another class, called Configurator, which does all of that. When a class inherits from Configurator and tells it which configuration keys to read, Configurator's __init__() method will read those keys and assign their values to the appropriate attributes on self.
The problem I run into, however, is that when I try to pass arguments to super(), including the parameters to be read by Configurator, those parameters have no value yet at the point where they appear in the super() argument list. Example shown below: MyClass.__init__() can't even get started, because self.server_param doesn't exist yet.
class ParamClass:
    """Just some class that needs a parameter for init."""
    def __init__(self, param1, **kwargs) -> None:
        super().__init__(**kwargs)
        self.value = param1

class Configurator:
    """Reads parameters from a configuration source and sets appropriate class
    variables.
    """
    def __init__(self, **kwargs) -> None:
        super().__init__(**kwargs)
        self.server_param = 2

class MyClass(Configurator, ParamClass):
    def __init__(self, **kwargs) -> None:
        super().__init__(param1=self.server_param, **kwargs)
        # <-- Gives AttributeError: 'MyClass' object has no attribute 'server_param'

MyClass()
The only way I can get this to work is to break the MRO in Configurator.__init__() and force the order of initialization. This is bad for obvious reasons: I plan to use Configurator throughout my code and can't break the MRO with it.
class ParamClass:
    """Just some class that needs a parameter for init."""
    def __init__(self, param1, **kwargs) -> None:
        super().__init__(**kwargs)
        self.value = param1

class Configurator:
    """Reads parameters from a configuration source and sets appropriate class
    variables.
    """
    def __init__(self, **kwargs) -> None:
        # super().__init__(**kwargs)
        self.server_param = 2

class MyClass(Configurator, ParamClass):
    def __init__(self, **kwargs) -> None:
        Configurator.__init__(self, **kwargs)
        # <-- After this call, self.server_param is defined.
        ParamClass.__init__(self, param1=self.server_param, **kwargs)

MyClass()
How do I accomplish configuration of parameters while using super()? How do I do this in a generalized way that doesn't require Configurator to know the little details of ParamClass?
Note: In my particular case, I don't "own" the ParamClass() code. It is library code that I'm using.
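A sketch of one MRO-friendly possibility (my own suggestion, not from the thread): have Configurator fill in the configured values as keyword arguments before forwarding them down the MRO, driven by a declared mapping. The config_params mapping and the _read_config stub are invented conventions for this sketch.

class ParamClass:
    def __init__(self, param1, **kwargs) -> None:
        super().__init__(**kwargs)
        self.value = param1

class Configurator:
    # subclasses declare which init keywords come from configuration;
    # this mapping is a made-up convention for the sketch
    config_params = {}

    def __init__(self, **kwargs) -> None:
        for kwarg_name, config_key in self.config_params.items():
            kwargs.setdefault(kwarg_name, self._read_config(config_key))
        super().__init__(**kwargs)  # cooperative call continues down the MRO

    def _read_config(self, key):
        return {"server": 2}[key]  # stand-in for the real config source

class MyClass(Configurator, ParamClass):
    config_params = {"param1": "server"}

obj = MyClass()
print(obj.value)  # 2

Because Configurator only rewrites kwargs and then calls super().__init__, it never needs to know which class down the MRO consumes param1, and ParamClass stays untouched.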
So I am trying to override __new__ and let it act as a factory that creates derived instances. After reading a bit on SO, I am under the impression that I should be calling __new__ on the derived class as well.
BaseThing
class BaseThing:
    def __init__(self, name, **kwargs):
        self.name = name
    # methods to be derived
ThingFactory
class Thing(BaseThing):
    def __new__(cls, name, **kwargs):
        if name == 'A':
            return A.__new__(name, **kwargs)
        if name == 'B':
            return B.__new__(name, **kwargs)

    def __init__(self, *args, **kwargs):
        super().__init__(name, **kwargs)
    # methods to be implemented by concrete class (same as those in base)
A
class A(BaseThing):
    def __init__(self, name, **kwargs):
        super().__init__(name, **kwargs)
B
class B(BaseThing):
    def __init__(self, name, **kwargs):
        super().__init__(name, **kwargs)
What I was expecting was that it would just work:
>>> a = Thing('A')
gives me TypeError: object.__new__(X): X is not a type object (str)
I am a bit confused by this; when I just return a concrete instance of a derived class, it just works, i.e.
def __new__(cls, name, **kwargs):
    if name == 'A':
        return A(name)
    if name == 'B':
        return B(name)
I don't think this is the correct way to return in __new__; it may duplicate the calls to __init__.
When I check the signature of __new__ on object, it seems to be this one:
@staticmethod  # known case of __new__
def __new__(cls, *more):  # known special case of object.__new__
    """ Create and return a new object. See help(type) for accurate signature. """
    pass
I didn't expect this to be the one; I'd have expected it to take args and kwargs as well. I must have done something wrong here.
It seems to me that I need to inherit from object directly in my base class, but could anyone explain the correct way of doing this?
You're calling __new__ wrong. If you want your __new__ to create an instance of a subclass, you don't call the subclass's __new__; you call the superclass's __new__ as usual, but pass it the subclass as the first argument:
instance = super().__new__(A)
I can't guarantee that this will be enough to fix your problems, since the code you've posted wouldn't reproduce the error you claim; it has other problems that would have caused a different error first (infinite recursion). Particularly, if A and B don't really descend from Thing, that needs different handling.
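To make that concrete, here is a minimal runnable sketch along the lines of the answer (the dispatch dict and making A and B subclasses of Thing are my assumptions, not code from the question):

class BaseThing:
    def __init__(self, name, **kwargs):
        self.name = name

class Thing(BaseThing):
    def __new__(cls, name, **kwargs):
        # pick the concrete subclass, then let object.__new__ allocate it
        subclass = {'A': A, 'B': B}.get(name, cls)
        return super().__new__(subclass)

class A(Thing):
    pass

class B(Thing):
    pass

a = Thing('A')
print(type(a).__name__, a.name)  # A A

Thing('A') allocates an A via object.__new__; because the result is an instance of Thing, Python then runs __init__ on it exactly once, avoiding the double-initialization concern from the question.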
I'm trying to find the best way to create a class decorator that does the following:
Injects a few functions into the decorated class
Forces a call to one of these functions AFTER the decorated class' __init__ is called
Currently, I'm just saving off a reference to the 'original' __init__ method and replacing it with my __init__ that calls the original and my additional function. It looks similar to this:
orig_init = cls.__init__

def new_init(self, *args, **kwargs):
    """
    'Extend' wrapped class' __init__ so we can attach to all signals
    automatically
    """
    orig_init(self, *args, **kwargs)
    self._debugSignals()

cls.__init__ = new_init
Is there a better way to 'augment' the original __init__ or inject my call somewhere else? All I really need is for my self._debugSignals() to be called sometime after the object is created. I also want it to happen automatically, which is why I thought after __init__ was a good place.
Extra misc. decorator notes
It might be worth mentioning some background on this decorator. You can find the full code here. The point of the decorator is to automatically attach to any PyQt signals and print when they are emitted. The decorator works fine when I decorate my own subclasses of QtCore.QObject; however, I've recently been trying to automatically decorate all QObject children.
I'd like to have a 'debug' mode in the application where I can automatically print ALL signals just to make sure things are doing what I expect. I'm sure this will result in TONS of debug, but I'd still like to see what's happening.
The problem is that my current version of the decorator causes a segfault when replacing QtCore.QObject.__init__. I've tried to debug this, but the code is all SIP-generated, which I don't have much experience with.
So, I was wondering if there was a safer, more pythonic way to inject a function call AFTER the __init__ and hopefully avoid the segfault.
Based on this post and this answer, an alternative way to do this is through a custom metaclass. This would work as follows (tested in Python 2.7):
# define a new metaclass which overrides the "__call__" function
class NewInitCaller(type):
    def __call__(cls, *args, **kwargs):
        """Called when you call MyNewClass() """
        obj = type.__call__(cls, *args, **kwargs)
        obj.new_init()
        return obj

# then create a new class with the __metaclass__ set as our custom metaclass
class MyNewClass(object):
    __metaclass__ = NewInitCaller

    def __init__(self):
        print "Init class"

    def new_init(self):
        print "New init!!"

# when you create an instance
a = MyNewClass()
>>> Init class
>>> New init!!
The basic idea is that:
when you call MyNewClass(), Python looks up its metaclass and finds NewInitCaller;
the metaclass's __call__ function is invoked;
that function creates the MyNewClass instance using type;
the instance runs its own __init__ (printing "Init class");
the metaclass then calls the new_init function of the instance.
Here is the solution for Python 3.x, based on this post's accepted answer. Also see PEP 3115 for reference; I think the rationale is an interesting read.
Changes from the example above are shown with comments; the only real change is the way the metaclass is defined, all others are trivial 2to3 modifications.
# define a new metaclass which overrides the "__call__" function
class NewInitCaller(type):
    def __call__(cls, *args, **kwargs):
        """Called when you call MyNewClass() """
        obj = type.__call__(cls, *args, **kwargs)
        obj.new_init()
        return obj

# then create a new class with the metaclass passed as an argument
class MyNewClass(object, metaclass=NewInitCaller):  # added argument
    # __metaclass__ = NewInitCaller  <- this line is removed; would have no effect

    def __init__(self):
        print("Init class")  # function, not statement

    def new_init(self):
        print("New init!!")  # function, not statement

# when you create an instance
a = MyNewClass()
>>> Init class
>>> New init!!
Here's a generalized form of jake77's example which implements __post_init__ on a non-dataclass. This enables a subclass's configure() to be automatically invoked in the correct sequence after the base and subclass __init__s have completed.
# define a new metaclass which overrides the "__call__" function
class PostInitCaller(type):
    def __call__(cls, *args, **kwargs):
        """Called when you call BaseClass() """
        print(f"{__class__.__name__}.__call__({args}, {kwargs})")
        obj = type.__call__(cls, *args, **kwargs)
        obj.__post_init__(*args, **kwargs)
        return obj

# then create a new class with the metaclass passed as an argument
class BaseClass(object, metaclass=PostInitCaller):
    def __init__(self, *args, **kwargs):
        print(f"{__class__.__name__}.__init__({args}, {kwargs})")
        super().__init__()

    def __post_init__(self, *args, **kwargs):
        print(f"{__class__.__name__}.__post_init__({args}, {kwargs})")
        self.configure(*args, **kwargs)

    def configure(self, *args, **kwargs):
        print(f"{__class__.__name__}.configure({args}, {kwargs})")

class SubClass(BaseClass):
    def __init__(self, *args, **kwargs):
        print(f"{__class__.__name__}.__init__({args}, {kwargs})")
        super().__init__(*args, **kwargs)

    def configure(self, *args, **kwargs):
        print(f"{__class__.__name__}.configure({args}, {kwargs})")
        super().configure(*args, **kwargs)

# when you create an instance
a = SubClass('a', b='b')
running gives:
PostInitCaller.__call__(('a',), {'b': 'b'})
SubClass.__init__(('a',), {'b': 'b'})
BaseClass.__init__(('a',), {'b': 'b'})
BaseClass.__post_init__(('a',), {'b': 'b'})
SubClass.configure(('a',), {'b': 'b'})
BaseClass.configure(('a',), {'b': 'b'})
I know that the metaclass approach is the Pro way, but I have a more readable and simple proposal using @staticmethod:
class Invites(TimestampModel, db.Model):
    id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    invitee_email = db.Column(db.String(128), nullable=False)

    def __init__(self, invitee_email):
        self.invitee_email = invitee_email

    @staticmethod
    def create_invitation(invitee_email):
        """
        Creates an invitation, saves it, and fetches it back,
        because the id is generated in the DB.
        """
        invitation = Invites(invitee_email)
        db.session.add(invitation)
        db.session.commit()
        return Invites.query.filter(
            Invites.invitee_email == invitee_email
        ).one_or_none()
So I could use it this way:
invitation = Invites.create_invitation("jim@mail.com")
print(invitation.id, invitation.invitee_email)
>>>> 1 jim@mail.com
I've been trying to pickle an object which contains references to static class methods.
Pickle fails (for example on module.MyClass.foo) stating it cannot be pickled, as module.foo does not exist.
I have come up with the following solution, using a wrapper object to locate the function upon invocation, saving the container class and function name:
class PicklableStaticMethod(object):
    """Picklable version of a static method.

    Typical usage:
        class MyClass:
            @staticmethod
            def doit():
                print "done"

        # This cannot be pickled:
        non_picklable = MyClass.doit
        # This can be pickled:
        picklable = PicklableStaticMethod(MyClass.doit, MyClass)
    """
    def __init__(self, func, parent_class):
        self.func_name = func.func_name
        self.parent_class = parent_class

    def __call__(self, *args, **kwargs):
        func = getattr(self.parent_class, self.func_name)
        return func(*args, **kwargs)
I am wondering though, is there a better - more standard way - to pickle such an object?
I do not want to make changes to the global pickle process (using copy_reg for example), but the following pattern would be great:
class MyClass(object):
    @picklable_staticmethod
    def foo():
        print "done."
My attempts at this were unsuccessful, specifically because I could not extract the owner class from the foo function. I was even willing to settle for explicit specification (such as @picklable_staticmethod(MyClass)), but I don't know of any way to refer to the MyClass class right where it's being defined.
Any ideas would be great!
Yonatan
This seems to work.
class PickleableStaticMethod(object):
    def __init__(self, fn, cls=None):
        self.cls = cls
        self.fn = fn

    def __call__(self, *args, **kwargs):
        return self.fn(*args, **kwargs)

    def __get__(self, obj, cls):
        return PickleableStaticMethod(self.fn, cls)

    def __getstate__(self):
        return (self.cls, self.fn.__name__)

    def __setstate__(self, state):
        self.cls, name = state
        self.fn = getattr(self.cls, name).fn
The trick is to snag the class when the static method is gotten from it.
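A quick round-trip to illustrate (my own sketch, not part of the answer; it assumes PickleableStaticMethod and MyClass are defined at module level, which pickle requires):

import pickle

class MyClass(object):
    @PickleableStaticMethod
    def doit():
        print("done")

p = pickle.dumps(MyClass.doit)  # __get__ has stored cls=MyClass
restored = pickle.loads(p)      # __setstate__ looks doit up on MyClass again
restored()                      # prints: done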
Alternatives: You could use metaclassing to give all your static methods a .__parentclass__ attribute. Then you could subclass Pickler and give each subclass instance its own .dispatch table which you can then modify without affecting the global dispatch table (Pickler.dispatch). Pickling, unpickling, and calling the method might then be a little faster.
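For the metaclass alternative, a rough sketch of what tagging static methods could look like (entirely my own illustration; __parentclass__ is just the attribute name suggested above):

class StaticTagger(type):
    def __new__(mcs, name, bases, namespace):
        cls = type.__new__(mcs, name, bases, namespace)
        for attr, value in namespace.items():
            if isinstance(value, staticmethod):
                # tag the underlying function with its owning class
                value.__func__.__parentclass__ = cls
        return cls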
EDIT: modified after Jason's comment.
I think Python is correct in not letting you pickle a staticmethod object, just as it is impossible to pickle instance or class methods! Such an object would make very little sense outside of its context:
Check this: Descriptor Tutorial
import pickle

def dosomething(a, b):
    print a, b

class MyClass(object):
    dosomething = staticmethod(dosomething)

o = MyClass()
pickled = pickle.dumps(dosomething)
This works, and that's what should be done: define a module-level function, pickle it, and use that same function as a staticmethod in whatever class needs it.
If you've got a use case for your need, please write it down and I'll be glad to discuss it.