Correctly override __new__ in Python 3

So I am trying to override __new__ and let it exist as a factory to create derived instances. After reading a bit on SO, I am under the impression that I should be calling __new__ on the derived instance as well.
BaseThing
class BaseThing:
    def __init__(self, name, **kwargs):
        self.name = name
    # methods to be derived
ThingFactory
class Thing(BaseThing):
    def __new__(cls, name, **kwargs):
        if name == 'A':
            return A.__new__(name, **kwargs)
        if name == 'B':
            return B.__new__(name, **kwargs)
    def __init__(self, *args, **kwargs):
        super().__init__(name, **kwargs)
    # methods to be implemented by concrete class (same as those in base)
A
class A(BaseThing):
    def __init__(self, name, **kwargs):
        super().__init__(name, **kwargs)
B
class B(BaseThing):
    def __init__(self, name, **kwargs):
        super().__init__(name, **kwargs)
What I was expecting was that it'd just work:
>>> a = Thing('A')
gives me TypeError: object.__new__(X): X is not a type object (str)
I am a bit confused by this; when I just return a concrete instance of the derived classes, it just works, i.e.:
def __new__(cls, name, **kwargs):
    if name == 'A':
        return A(name)
    if name == 'B':
        return B(name)
I don't think this is the correct way to return from __new__, though; it may duplicate the calls to __init__.
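Here is a small sketch of the case where it does: if A is a subclass of Thing, the instance returned from __new__ passes the isinstance check against Thing, so the normal machinery invokes __init__ a second time:

class BaseThing:
    def __init__(self, name, **kwargs):
        print("__init__ called with", name)
        self.name = name

class Thing(BaseThing):
    def __new__(cls, name, **kwargs):
        if cls is Thing and name == 'A':
            return A(name)  # A.__init__ already runs inside this call
        return super().__new__(cls)

class A(Thing):
    pass

a = Thing('A')  # prints "__init__ called with A" twice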
When I check the signature of __new__ in object, it seems to be this one:
@staticmethod  # known case of __new__
def __new__(cls, *more):  # known special case of object.__new__
    """ Create and return a new object. See help(type) for accurate signature. """
    pass
I didn't expect this to be the one; I'd expect it to take args and kwargs as well. I must have done something wrong here.
It seems to me that I need to inherit from object directly in my base class, but could anyone explain the correct way of doing this?

You're calling __new__ wrong. If you want your __new__ to create an instance of a subclass, you don't call the subclass's __new__; you call the superclass's __new__ as usual, but pass it the subclass as the first argument:
instance = super().__new__(A)
I can't guarantee that this will be enough to fix your problems, since the code you've posted wouldn't reproduce the error you claim; it has other problems that would have caused a different error first (infinite recursion). Particularly, if A and B don't really descend from Thing, that needs different handling.
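A minimal sketch of that fix applied to the question's factory (note that A and B now derive from Thing, so the automatic __init__ call still fires exactly once on the returned instance):

class BaseThing:
    def __init__(self, name, **kwargs):
        self.name = name

class Thing(BaseThing):
    def __new__(cls, name, **kwargs):
        # pick the concrete subclass, then let object.__new__ allocate it;
        # the guard on cls prevents infinite recursion in the subclasses
        if cls is Thing:
            if name == 'A':
                return super().__new__(A)
            if name == 'B':
                return super().__new__(B)
        return super().__new__(cls)

class A(Thing):
    pass

class B(Thing):
    pass

a = Thing('A')
assert isinstance(a, A) and a.name == 'A'  # __init__ ran exactly once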

Related

A good practice for implementing multiple inheritance in Python?

The Scenario:
class A:
    def __init__(self, key, secret):
        self.key = key
        self.secret = secret
    def same_name_method(self):
        do_some_stuff
    def method_a(self):
        pass

class B:
    def __init__(self, key, secret):
        self.key = key
        self.secret = secret
    def same_name_method(self):
        do_another_stuff
    def method_b(self):
        pass

class C(A, B):
    def __init__(self, *args, **kwargs):
        # I want to init both class A's and B's key and secret
        # I want to rename class A's and B's same-name method
        any_ideas()
    ...
What I Want:
I want an instance of class C to initialize both A and B, because they use different API keys.
And I want to rename A's and B's same_name_method so I won't be confused about which one is being called.
What I Have Done:
For problem one, I have done this:
class C(A, B):
    def __init__(self, *args, **kwargs):
        A.__init__(self, a_api_key, a_api_secret)
        B.__init__(self, b_api_key, b_api_secret)
Comment: I know about super(), but for this situation I do not know how to use it.
For problem two, I added a __new__ to class C:
def __new__(cls, *args, **kwargs):
    cls.platforms = []
    cls.rename_method = []
    for platform in cls.__bases__:
        # fetch platform module name
        module_name = platform.__module__.split('.')[0]
        cls.platforms.append(module_name)
        # rename attr
        for k, v in platform.__dict__.items():
            if not k.startswith('__'):
                setattr(cls, module_name + '_' + k, v)
                cls.rename_method.append(k)
    for i in cls.rename_method:
        delattr(cls, i)  # this line will raise AttributeError!!
    return super().__new__(cls)
Comment: because I renamed the methods and added them as class attributes, I need to delete the old method attributes, but I do not know how to delattr them. For now I just leave them alone and do not delete the old methods.
Question:
Any Suggestions?
So, you want some pretty advanced things, some complicated things, and you don't understand well how classes behave in Python.
So, for your first thing: initializing both classes, and every other method that should run in all classes: the correct solution is to make use of cooperative calls to super() methods.
A call to super() in Python returns a special proxy object that reflects the methods available in the next class along the method resolution order (MRO).
So, if A.__init__ and B.__init__ both have to be called, each method should include a super().__init__ call - and one will call the other's __init__ in the appropriate order, regardless of how they are used as bases in subclasses. Since object also has an __init__, the last super().__init__ in the chain simply calls object.__init__, which is a no-op. If you have more methods in your classes that should run in all base classes, you'd rather build a proper base class so that the top-most super() call doesn't try to propagate to a non-existent method.
Otherwise, it is just:
class A:
    def __init__(self, akey, asecret, **kwargs):
        self.key = akey
        self.secret = asecret
        super().__init__(**kwargs)

class B:
    def __init__(self, bkey, bsecret, **kwargs):
        self.key = bkey
        self.secret = bsecret
        super().__init__(**kwargs)

class C(A, B):
    pass  # does not even need an explicit `__init__`
I think you can get the idea. Of course, the parameter names have to differ - ideally, when writing C you don't have to worry about parameter order - but when calling C you have to worry about supplying all mandatory parameters for C and its bases (see the sketch below). If you can't rename the parameters in A or B to be distinct, you could try to rely on parameter order for the call, with each __init__ consuming two positional parameters - but that will require some extra care with inheritance order.
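For concreteness, a call under this scheme could look like the following. One caveat: since both __init__ methods assign to the same self.key and self.secret attributes, whichever class runs last in the MRO wins, so in practice you would want distinct attribute names as well:

c = C(akey="a-key", asecret="a-secret", bkey="b-key", bsecret="b-secret")
# A.__init__ runs first (the MRO is C -> A -> B -> object) and forwards
# the remaining keyword arguments to B.__init__ via super()
print(c.key, c.secret)  # prints "b-key b-secret": B assigned last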
So - up to this point, it is basic Python multiple-inheritance "howto", and should be pretty straightforward. Now comes your strange stuff.
As for the auto-renaming of methods: first things first -
are you quite sure you need inheritance? Maybe having your granular classes for each external service, plus a registry-and-dispatch class that calls the methods on the others by composition, would be more sane. (I come back to this later.)
Are you aware that __new__ is called for each instantiation of the class, and that all the class-attribute mangling you are performing there happens again for each new instance of your classes?
So, if the needed method renaming + shadowing needs to take place at class-creation time, you can do that using the special method __init_subclass__, which exists as of Python 3.6. It is a special class method that is called once for each derived class of the class it is defined on. So, just create a base class from which A and B themselves will inherit, and move a properly modified version of the thing you are putting in __new__ there. If you are not using Python 3.6, this should be done in the __new__ or __init__ of a metaclass, not in the __new__ of the class itself.
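A minimal sketch of that idea, reusing the module-name-prefix convention from the question's __new__ (RenamingBase is a hypothetical name):

class RenamingBase:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # runs once per subclass, at class-creation time - not per instance
        for base in cls.__bases__:
            module_name = base.__module__.split(".")[0]
            for name, value in list(base.__dict__.items()):
                if not name.startswith("_"):
                    # e.g. A.same_name_method becomes C.<module>_same_name_method
                    setattr(cls, module_name + "_" + name, value)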
Another approach would be to have a custom __getattribute__ method - this could be crafted to provide namespaces for the base classes. It would work only on instances, not on the classes themselves (but could be made to, again, using a metaclass). __getattribute__ can even hide the same-name methods.
class Base:
    @classmethod
    def _get_base_modules(cls):
        result = {}
        for base in cls.__bases__:
            module_name = base.__module__.split(".")[0]
            result[module_name] = base
        return result

    def _proxy(self, module_name):
        class base:
            def __dir__(base_self):
                return dir(self._base_modules[module_name])
            def __getattr__(base_self, attr):
                original_value = self._base_modules[module_name].__dict__[attr]
                if hasattr(original_value, "__get__"):
                    original_value = original_value.__get__(self, self.__class__)
                return original_value
        base.__name__ = module_name
        return base()

    def __init_subclass__(cls):
        cls._base_modules = cls._get_base_modules()
        cls._shadowed = {
            name
            for module_class in cls._base_modules.values()
            for name in module_class.__dict__
            if not name.startswith("_")
        }

    def __getattribute__(self, attr):
        if attr.startswith("_"):
            return super().__getattribute__(attr)
        cls = self.__class__
        if attr in cls._shadowed:
            raise AttributeError(attr)
        if attr in cls._base_modules:
            return self._proxy(attr)
        return super().__getattribute__(attr)

    def __dir__(self):
        return super().__dir__() + list(self._base_modules)

class A(Base):
    ...

class B(Base):
    ...

class C(A, B):
    ...
As you can see - this is some fun, but it starts getting really complicated - and all the hoops that are needed to retrieve the actual attributes from the superclasses after adding an artificial namespace seem to indicate your problem is not calling for inheritance after all, as I suggested above.
Since you have your small, functional, atomic classes for each "service", you could use a plain, simple, non-meta-at-all class that works as a registry for the various services - and you can even enhance it to call the equivalent method on several of the services it is handling with a single call:
class Services:
    def __init__(self):
        self.registry = {}

    def register(self, cls, key, secret):
        name = cls.__module__.split(".")[0]
        service = cls(key, secret)
        self.registry[name] = service

    def __getattr__(self, attr):
        if attr in self.registry:
            return self.registry[attr]
        raise AttributeError(attr)
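Hypothetical usage, assuming service modules binance and kraken that each expose a Client class taking a key and a secret:

import binance, kraken  # hypothetical service modules

services = Services()
services.register(binance.Client, "key-1", "secret-1")
services.register(kraken.Client, "key-2", "secret-2")

# each service keeps its own credentials, and there is no name clash:
services.binance.same_name_method()
services.kraken.same_name_method()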

Class name as idempotent typecast function

I'm writing a custom type Foo and I'd like to achieve the following: when writing
foo = Foo(bar)
then if bar is already an instance of Foo, Foo(bar) returns that object unmodified, so foo and bar refer to the same object. Otherwise a new object of type Foo should be created, using information from bar in whatever way Foo.__init__ deems appropriate.
How can I do this?
I would assume that Foo.__new__ is the key to this. It should be fairly easy to have Foo.__new__ return the existing object if the isinstance check succeeds, and to call super().__new__ otherwise. But the documentation for __new__ says this:
If __new__() returns an instance of cls, then the new instance’s __init__() method will be invoked like __init__(self[, ...]), where self is the new instance and the remaining arguments are the same as were passed to __new__().
In this case I would be returning an instance of the requested class, albeit not a new one. Can I somehow prevent the call to __init__? Or do I have to add a check inside __init__ to detect whether it is being invoked for a newly created instance or for an already existing one? The latter sounds like code duplication which should be avoidable.
IMHO, you should just use __new__ and __init__ directly. The test in __init__ to see whether you are initializing a new object or already have an existing one is so simple that there is no real code duplication, and the added complexity is IMHO acceptable:
class Foo:
    def __new__(cls, obj):
        if isinstance(obj, cls):
            print("New: same object")
            return obj
        else:
            print("New: create object")
            return super(Foo, cls).__new__(cls)

    def __init__(self, obj):
        if self is obj:
            print("init: same object")
        else:
            print("init: brand new object from", obj)
            # do the actual initialization
It gives as expected:
>>> foo = Foo("x")
New: create object
init: brand new object from x
>>> bar = Foo(foo)
New: same object
init: same object
>>> bar is foo
True
One way to achieve this is by moving the required code into a metaclass like this:
import functools

class IdempotentCast(type):
    """Metaclass to ensure that Foo(x) is x if isinstance(x, Foo)"""
    def __new__(cls, name, bases, namespace, **kwds):
        res = None
        defineNew = all(i.__new__ is object.__new__ for i in bases)
        if defineNew:
            def n(cls, *args, **kwds):
                if len(args) == 1 and isinstance(args[0], cls) and not kwds:
                    return args[0]
                else:
                    return super(res, cls).__new__(cls)
            namespace["__new__"] = n
        realInit = namespace.get("__init__")
        if realInit is not None or defineNew:
            @functools.wraps(realInit)
            def i(self, *args, **kwds):
                if len(args) != 1 or args[0] is not self:
                    if realInit is None:
                        return super(res, self).__init__(*args, **kwds)
                    else:
                        return realInit(self, *args, **kwds)
            namespace["__init__"] = i
        res = type.__new__(cls, name, bases, namespace)
        return res

class Foo(metaclass=IdempotentCast):
    ...
That metaclass adds a __new__ method unless there already is a base class that added one. So for class hierarchies where one such class extends another such class, the __new__ method gets added only once. It also wraps the constructor to perform the check whether the first argument is identical to self (thanks to the answer by Serge Ballesta for pointing out this simple check). Otherwise it calls the original constructor, or the base constructor if no constructor was defined.
Quite a bit of code, but you only need that once, and can use it to introduce these semantics for as many types as you want. If you only need this for a single class, other answers may be more appropriate.
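For example, under the semantics above:

class Foo(metaclass=IdempotentCast):
    def __init__(self, value):
        self.value = value

f = Foo(42)
assert Foo(f) is f           # idempotent: the same object comes back
assert Foo(f).value == 42    # __init__ was not re-run on the existing object
assert Foo(43) is not f      # any other argument creates a new Foo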

Python metaclasses and special methods

I am experimenting with metaclasses to generate a class with a custom special method - particularly, __call__. The generation of the class depends on the parameters the constructor was called with. I've run into a strange effect; a simplified example is below:
def trick(self, *args, **kwargs):
    print "Works!"

class Test1Factory(type):
    def __new__(mcls, name, bases, namespace):
        namespace['__call__'] = trick
        return type.__new__(mcls, name, bases, namespace)

class Test1(object):
    __metaclass__ = Test1Factory
    def __init__(self, value):
        self._value = value

t1 = Test1(1)
t1()  # "Works!"
It works, but it is not really useful, because there is no access to constructor arguments within __new__. type.__call__ should do the trick:
import types

class Test2Factory(type):
    def __call__(self, *args, **kwargs):
        obj = type.__call__(self, *args, **kwargs)
        setattr(obj, '__call__', types.MethodType(trick, obj, Test2))
        return obj

class Test2(object):
    __metaclass__ = Test2Factory
    def __init__(self, value):
        self._value = value

t2 = Test2(2)
t2.__call__()  # "Works!"
t2()           # TypeError: 'Test2' object is not callable
As far as I understand, instance() is similar to instance.__call__(), but that is not the case here. Using the __new__ static method of the class does the same. I have a workaround that does not use metaclasses at all, but I just want to understand the phenomenon. The Python version is 2.7.5.
The wrong assumption is in "instance() is similar to instance.__call__()": __call__ is not looked up on the instance, but on the instance's type. That is, the __call__ used is not that of instance, but that of instance.__class__ or type(instance).
Any __call__ attribute defined on the instance alone may be accessed like any other attribute, but it will not be used when instance is called as in instance(). That's part of Python's semantics.
Try defining a __call__ both on an instance and on its type, and see what you get.
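For instance (Python 3 syntax; the behaviour is the same for new-style classes in Python 2):

class C(object):
    def __call__(self):
        return "type __call__"

c = C()
c.__call__ = lambda: "instance __call__"

print(c())           # "type __call__": special methods are looked up on the type
print(c.__call__())  # "instance __call__": plain attribute access finds the instance one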
If I understand the question correctly, it has the same background as another one I had, which got an answer (summarized, with demonstrations by experiments, in the question's post): "How does Python tell "this is called as a function"?"

Injecting function call after __init__ with decorator

I'm trying to find the best way to create a class decorator that does the following:
Injects a few functions into the decorated class
Forces a call to one of these functions AFTER the decorated class' __init__ is called
Currently, I'm just saving off a reference to the 'original' __init__ method and replacing it with my __init__ that calls the original and my additional function. It looks similar to this:
orig_init = cls.__init__

def new_init(self, *args, **kwargs):
    """
    'Extend' wrapped class' __init__ so we can attach to all signals
    automatically
    """
    orig_init(self, *args, **kwargs)
    self._debugSignals()

cls.__init__ = new_init
Is there a better way to 'augment' the original __init__ or inject my call somewhere else? All I really need is for my self._debugSignals() to be called sometime after the object is created. I also want it to happen automatically, which is why I thought after __init__ was a good place.
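For reference, the snippet above packaged as a complete class decorator might look like this (it assumes the decorated class provides _debugSignals):

import functools

def debug_signals(cls):
    """Class decorator: wrap __init__ so _debugSignals runs right after it."""
    orig_init = cls.__init__

    @functools.wraps(orig_init)
    def new_init(self, *args, **kwargs):
        orig_init(self, *args, **kwargs)
        self._debugSignals()

    cls.__init__ = new_init
    return cls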
Extra misc. decorator notes
It might be worth mentioning some background on this decorator. You can find the full code here. The point of the decorator is to automatically attach to any PyQt signals and print when they are emitted. The decorator works fine when I decorate my own subclasses of QtCore.QObject; however, I've recently been trying to automatically decorate all QObject children.
I'd like to have a 'debug' mode in the application where I can automatically print ALL signals just to make sure things are doing what I expect. I'm sure this will result in TONS of debug, but I'd still like to see what's happening.
The problem is my current version of the decorator is causing a segfault when replacing QtCore.QObject.__init__. I've tried to debug this, but the code is all SIP generated, which I don't have much experience with.
So, I was wondering if there was a safer, more pythonic way to inject a function call AFTER the __init__ and hopefully avoid the segfault.
Based on this post and this answer, an alternative way to do this is through a custom metaclass. This would work as follows (tested in Python 2.7):
# define a new metaclass which overrides the "__call__" function
class NewInitCaller(type):
    def __call__(cls, *args, **kwargs):
        """Called when you call MyNewClass()"""
        obj = type.__call__(cls, *args, **kwargs)
        obj.new_init()
        return obj

# then create a new class with the __metaclass__ set as our custom metaclass
class MyNewClass(object):
    __metaclass__ = NewInitCaller

    def __init__(self):
        print "Init class"

    def new_init(self):
        print "New init!!"

# when you create an instance
a = MyNewClass()
>>> Init class
>>> New init!!
The basic idea is that:
when you call MyNewClass(), it searches for the metaclass and finds that you have defined NewInitCaller;
the metaclass's __call__ function is called;
this function creates the MyNewClass instance using type;
the instance runs its own __init__ (printing "Init class");
the metaclass then calls the new_init function of the instance.
Here is the solution for Python 3.x, based on this post's accepted answer. Also see PEP 3115 for reference; I think the rationale is an interesting read.
Changes in the example above are shown with comments; the only real change is the way the metaclass is defined, all others are trivial 2to3 modifications.
# define a new metaclass which overrides the "__call__" function
class NewInitCaller(type):
    def __call__(cls, *args, **kwargs):
        """Called when you call MyNewClass()"""
        obj = type.__call__(cls, *args, **kwargs)
        obj.new_init()
        return obj

# then create a new class with the metaclass passed as an argument
class MyNewClass(object, metaclass=NewInitCaller):  # added argument
    # the __metaclass__ attribute is removed; it would have no effect here

    def __init__(self):
        print("Init class")  # print() is a function, not a statement

    def new_init(self):
        print("New init!!")  # print() is a function, not a statement

# when you create an instance
a = MyNewClass()
>>> Init class
>>> New init!!
Here's a generalized form of jake77's example which implements __post_init__ on a non-dataclass. This enables a subclass's configure() to be automatically invoked in the correct sequence after the base and subclass __init__s have completed.
# define a new metaclass which overrides the "__call__" function
class PostInitCaller(type):
    def __call__(cls, *args, **kwargs):
        """Called when you call BaseClass()"""
        print(f"{__class__.__name__}.__call__({args}, {kwargs})")
        obj = type.__call__(cls, *args, **kwargs)
        obj.__post_init__(*args, **kwargs)
        return obj

# then create a new class with the metaclass passed as an argument
class BaseClass(object, metaclass=PostInitCaller):
    def __init__(self, *args, **kwargs):
        print(f"{__class__.__name__}.__init__({args}, {kwargs})")
        super().__init__()

    def __post_init__(self, *args, **kwargs):
        print(f"{__class__.__name__}.__post_init__({args}, {kwargs})")
        self.configure(*args, **kwargs)

    def configure(self, *args, **kwargs):
        print(f"{__class__.__name__}.configure({args}, {kwargs})")

class SubClass(BaseClass):
    def __init__(self, *args, **kwargs):
        print(f"{__class__.__name__}.__init__({args}, {kwargs})")
        super().__init__(*args, **kwargs)

    def configure(self, *args, **kwargs):
        print(f"{__class__.__name__}.configure({args}, {kwargs})")
        super().configure(*args, **kwargs)

# when you create an instance
a = SubClass('a', b='b')
running gives:
PostInitCaller.__call__(('a',), {'b': 'b'})
SubClass.__init__(('a',), {'b': 'b'})
BaseClass.__init__(('a',), {'b': 'b'})
BaseClass.__post_init__(('a',), {'b': 'b'})
SubClass.configure(('a',), {'b': 'b'})
BaseClass.configure(('a',), {'b': 'b'})
I know that the metaclass approach is the Pro way, but I have a more readable and easy proposal using @staticmethod:
class Invites(TimestampModel, db.Model):
    id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    invitee_email = db.Column(db.String(128), nullable=False)

    def __init__(self, invitee_email):
        self.invitee_email = invitee_email

    @staticmethod
    def create_invitation(invitee_email):
        """
        Creates an invitation, saves it, and fetches it back,
        because the id is generated in the DB
        """
        invitation = Invites(invitee_email)
        db.session.save(invitation)
        db.session.commit()
        return Invites.query.filter(
            Invites.invitee_email == invitee_email
        ).one_or_none()
So I could use it this way:
invitation = Invites.create_invitation("jim@mail.com")
print(invitation.id, invitation.invitee_email)
>>> 1 jim@mail.com

Pass keyword argument only to __new__() and never further it to __init__()?

Part 1
I have a setup with a set of classes that I want to mock. My idea was that in the cases where I want to do this, I pass a mock keyword argument into the constructor, and in __new__ I intercept it and instead pass back a mocked version of that object.
It looks like this (edited the keyword lookup after @mgilson's suggestion):
class RealObject(object):
    def __new__(cls, *args, **kwargs):
        if kwargs.pop('mock', None):
            return MockRealObject()
        return super(RealObject, cls).__new__(cls, *args, **kwargs)

    def __init__(self, whatever=None):
        '''
        Constructor
        '''
        # stuff happens
I then call the constructor like this:
ro = RealObject(mock = bool)
The issue I have here is that I get the following error when bool is False:
TypeError: __init__() got an unexpected keyword argument 'mock'
This works if I add mock as a keyword argument to __init__, but what I am asking is whether this is possible to avoid. I even pop mock from the kwargs dict.
This is also a question about the design. Is there a better way to do this? (Of course!) I wanted to try doing it this way, without using a factory or a superclass or anything. But still, should I use another hook maybe? __call__?
Part 2 based on jsbueno's answer
So I wanted to extract the metaclass and the __new__ function into a separate module. I did this:
class Mockable(object):
    def __new__(cls, *args, **kwargs):
        if kwargs.pop('mock', None):
            mock_cls = eval('{0}{1}'.format('Mock', cls.__name__))
            return super(mock_cls, mock_cls).__new__(mock_cls)
        return super(cls, cls).__new__(cls, *args, **kwargs)

class MockableMetaclass(type):
    def __call__(self, *args, **kwargs):
        obj = self.__new__(self, *args, **kwargs)
        if "mock" in kwargs:
            del kwargs["mock"]
        obj.__init__(*args, **kwargs)
        return obj
And I have defined in a separate module the classes RealObject and MockRealObject.
I have two problems now:
If MockableMetaclass and Mockable are not in the same module as the RealObject class, the eval will raise a NameError if I provide mock = True.
If mock = False, the code enters an endless recursion that ends in an impressive RuntimeError: maximum recursion depth exceeded while calling a Python object. I'm guessing this is due to RealObject's superclass no longer being object but instead Mockable.
How can I fix these problems? Is my approach incorrect? Should I instead have Mockable as a decorator? I tried that, but it didn't seem to work since __new__ of an instance seems to be read-only.
This is a job for the metaclass! :-)
The code responsible for calling both __new__ and __init__ when instantiating a Python new-style object lies in the __call__ method of the class's metaclass (or something semantically equivalent to that).
In other words - when you do RealObject(), what is really called is the RealObject.__class__.__call__ method. Since, without declaring an explicit metaclass, the metaclass is type, it is type.__call__ which is called.
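A tiny demonstration of that dispatch:

class Meta(type):
    def __call__(cls, *args, **kwargs):
        print("Meta.__call__ runs before __new__ and __init__")
        return super().__call__(*args, **kwargs)

class Demo(metaclass=Meta):
    pass

d = Demo()  # instantiation really is type(Demo).__call__(Demo)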
Most recipes around metaclasses deal with overriding the __new__ method - automating actions when the class is created. But by overriding __call__ we can take actions when the class is instantiated, instead.
In this case, all that is needed is to remove the "mock" keyword parameter, if any, before calling __init__:
class MetaMock(type):
    def __call__(cls, *args, **kw):
        obj = cls.__new__(cls, *args, **kw)
        if "mock" in kw:
            del kw["mock"]
        obj.__init__(*args, **kw)
        return obj

class RealObject(metaclass=MetaMock):
    ...
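Putting it together with the __new__ from the question (MockRealObject is a hypothetical stand-in):

class MockRealObject:
    pass

class RealObject(metaclass=MetaMock):
    def __new__(cls, *args, **kwargs):
        if kwargs.pop('mock', None):
            return MockRealObject()
        return super().__new__(cls)

    def __init__(self, whatever=None):
        self.whatever = whatever

r = RealObject(mock=False)  # no TypeError: MetaMock strips 'mock' before __init__
assert isinstance(r, RealObject)
m = RealObject(mock=True)
assert isinstance(m, MockRealObject)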
A subclass is pretty much essential, since the instantiation machinery always passes the arguments of the constructor call on to the __init__ method. If you add a subclass via a class decorator as a mixin, then you can intercept the mock argument in the subclass's __init__:
def mock_with(mock_cls):
    class MockMixin(object):
        def __new__(cls, *args, **kwargs):
            if kwargs.pop('mock'):
                return mock_cls()
            return super(MockMixin, cls).__new__(cls, *args, **kwargs)

        def __init__(self, *args, **kwargs):
            kwargs.pop('mock')
            super(MockMixin, self).__init__(*args, **kwargs)

    def decorator(real_cls):
        return type(real_cls.__name__, (MockMixin, real_cls), {})
    return decorator

class MockRealObject(object):
    pass

@mock_with(MockRealObject)
class RealObject(object):
    def __init__(self, whatever=None):
        pass

r = RealObject(mock=False)
assert isinstance(r, RealObject)

m = RealObject(mock=True)
assert isinstance(m, MockRealObject)
The alternative would be for the subclass's __new__ method to return an instance of the plain, undecorated class; in that case __init__ would not be invoked automatically, since the returned object isn't an instance of the subclass - but then the isinstance check against the decorated RealObject would fail.
