How to enforce method signature for child classes? - python

Languages like C# and Java have method overloading, which means that if a child class does not implement a method with the exact same signature, it does not override the parent method.
How do we enforce the method signature in child classes in Python? The following code sample shows that a child class overrides the parent method with a different method signature:
>>> class A(object):
...     def m(self, p=None):
...         raise NotImplementedError('Not implemented')
...
>>> class B(A):
...     def m(self, p2=None):
...         print p2
...
>>> B().m('123')
123
While this may not be super important, and may even be by design in Python (e.g. *args, **kwargs), I am asking for the sake of clarity whether this is possible.
Please Note:
I have tried @abstractmethod and abc.ABC already.

Below is a complete running example showing how to use a metaclass to make sure that subclass methods have the same signatures as their base classes. Note the use of the inspect module. The way I'm using it here makes sure that the signatures are exactly the same, which might not be what you want.
import inspect

class BadSignatureException(Exception):
    pass

class SignatureCheckerMeta(type):
    def __new__(cls, name, baseClasses, d):
        # For each method in d, check to see if any base class already
        # defined a method with that name. If so, make sure the
        # signatures are the same.
        for methodName in d:
            f = d[methodName]
            for baseClass in baseClasses:
                try:
                    fBase = getattr(baseClass, methodName).__func__
                    if not inspect.getargspec(f) == inspect.getargspec(fBase):
                        raise BadSignatureException(str(methodName))
                except AttributeError:
                    # This method was not defined in this base class,
                    # so just go to the next base class.
                    continue
        return type(name, baseClasses, d)
def main():
    class A(object):
        def foo(self, x):
            pass

    try:
        class B(A):
            __metaclass__ = SignatureCheckerMeta
            def foo(self):
                """This override shouldn't work because the signature is wrong"""
                pass
    except BadSignatureException:
        print("Class B can't be constructed because of a bad method signature")
        print("This is as it should be :)")

    try:
        class C(A):
            __metaclass__ = SignatureCheckerMeta
            def foo(self, x):
                """This is ok because the signature matches A.foo"""
                pass
    except BadSignatureException:
        print("Class C couldn't be constructed. Something went wrong")

if __name__ == "__main__":
    main()
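A design note on the example above (my observation, not part of the original answer): because __new__ returns type(name, baseClasses, d) instead of calling super().__new__, the resulting class ends up with plain type as its metaclass, so the signature check does not automatically propagate to further subclasses; each class that wants the check must set __metaclass__ itself.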

Update of the accepted answer to work with Python 3.5.
import inspect
from types import FunctionType

class BadSignatureException(Exception):
    pass

class SignatureCheckerMeta(type):
    def __new__(cls, name, baseClasses, d):
        # For each method in d, check to see if any base class already
        # defined a method with that name. If so, make sure the
        # signatures are the same.
        for methodName in d:
            f = d[methodName]
            if not isinstance(f, FunctionType):
                continue
            for baseClass in baseClasses:
                try:
                    fBase = getattr(baseClass, methodName)
                    if not inspect.getargspec(f) == inspect.getargspec(fBase):
                        raise BadSignatureException(str(methodName))
                except AttributeError:
                    # This method was not defined in this base class,
                    # so just go to the next base class.
                    continue
        return type(name, baseClasses, d)
def main():
    class A(object):
        def foo(self, x):
            pass

    try:
        class B(A, metaclass=SignatureCheckerMeta):
            def foo(self):
                """This override shouldn't work because the signature is wrong"""
                pass
    except BadSignatureException:
        print("Class B can't be constructed because of a bad method signature")
        print("This is as it should be :)")

    try:
        class C(A, metaclass=SignatureCheckerMeta):
            def foo(self, x):
                """This is ok because the signature matches A.foo"""
                pass
    except BadSignatureException:
        print("Class C couldn't be constructed. Something went wrong")

if __name__ == "__main__":
    main()
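On Python 3.6 and later, the same idea can be expressed without a metaclass. Below is a minimal sketch (my adaptation, not part of the original answer) that reuses the BadSignatureException class from above and uses inspect.signature, which replaces the deprecated inspect.getargspec, together with the __init_subclass__ hook; SignatureChecked is a made-up name:
import inspect

class SignatureChecked:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Compare each non-dunder callable defined on the new subclass
        # against the same-named attribute on each ancestor.
        for name, member in vars(cls).items():
            if not callable(member) or name.startswith("__"):
                continue
            for base in cls.__mro__[1:]:
                base_member = getattr(base, name, None)
                if base_member is None:
                    continue
                if inspect.signature(member) != inspect.signature(base_member):
                    raise BadSignatureException(
                        "%s.%s does not match %s.%s"
                        % (cls.__name__, name, base.__name__, name))

class A(SignatureChecked):
    def foo(self, x):
        pass

class B(A):        # raises BadSignatureException at class creation time
    def foo(self):
        pass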

By design, the language doesn't support checking the signatures. For an interesting read, check out:
http://grokbase.com/t/python/python-ideas/109qtkrzsd/abc-what-about-the-method-arguments
From that thread, it sounds like you may be able to write a decorator to check the signature, using the abc.same_signature(method1, method2) helper proposed there, but I've never tried that.

The reason it is being overridden is that, as far as Python is concerned, the two methods actually have the same signature. What is written there is akin to doing something like this in Java:
public class A
{
    public void m(String p)
    {
        throw new UnsupportedOperationException("Not implemented");
    }
}

public class B extends A
{
    public void m(String p2)
    {
        System.out.println(p2);
    }
}
Note that even though the parameter names are different, the types are the same, and thus the two methods have the same signature. In statically typed languages like Java, we get to say explicitly what the types are going to be ahead of time.
In Python, the type of a parameter is determined dynamically at run time when you call the method. This makes it impossible for the Python interpreter to tell which method you actually wished to call when you say B().m('123'). Because neither of the method signatures specifies what type of parameter it expects, they both simply say: I'm looking for a call with one parameter. So it makes sense that the deepest method (and the one most relevant to the actual object you have) is called, which is class B's method, because the object is an instance of class B.
If you want to process only certain types in a child class method, and pass all others along to the parent class, it can be done like this:
class A(object):
    def m(self, p=None):
        raise NotImplementedError('Not implemented')

class B(A):
    def m(self, p2=None):
        if isinstance(p2, int):
            print p2
        else:
            super(B, self).m(p2)
Then using B gives you the desired output. That is, class B processes ints and passes any other type along to its parent class.
>>> b = B()
>>> b.m(2)
2
>>> b.m("hello")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 6, in m
File "<stdin>", line 3, in m
NotImplementedError: Not implemented

I use metaclasses for other purposes in my code, so I rolled a version that uses a class decorator instead. The version below works with Python 3 and also supports decorated methods (yes, this creates a potential loophole, but if you use decorators that change the actual signature, shame on you). To make it work with Python 2, change inspect.isfunction to inspect.ismethod.
import inspect
from functools import wraps

class BadSignatureException(Exception):
    pass

def enforce_signatures(cls):
    for method_name, method in inspect.getmembers(cls, predicate=inspect.isfunction):
        if method_name == "__init__":
            continue
        for base_class in inspect.getmro(cls):
            if base_class is cls:
                continue
            try:
                base_method = getattr(base_class, method_name)
            except AttributeError:
                continue
            if not inspect.signature(method) == inspect.signature(base_method):
                raise BadSignatureException("%s.%s does not match base class %s.%s"
                                            % (cls.__name__, method_name,
                                               base_class.__name__, method_name))
    return cls

if __name__ == "__main__":
    class A:
        def foo(self, x):
            pass

    def test_decorator(f):
        @wraps(f)
        def decorated_function(*args, **kwargs):
            pass
        return decorated_function

    @enforce_signatures
    class B(A):
        @test_decorator
        def foo(self):
            """This override shouldn't work because the signature is wrong"""
            pass
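A usage note (my reading of the example above, not part of the original answer): because functools.wraps records the wrapped function under __wrapped__ and inspect.signature follows it by default, the check still sees foo's real (self)-only signature through test_decorator, so running this module should raise BadSignatureException while class B is being created.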

mypy, and I expect other static type checkers, will complain if methods on your subclass have a different signature from the methods they override. It seems to me the best way to enforce type signatures on child classes is to run mypy (or an equivalent checker) over your code.
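For example, a minimal sketch of what mypy flags (the exact wording of the diagnostic may vary between mypy versions):
class Base:
    def m(self, p: int) -> None: ...

class Child(Base):
    def m(self) -> None: ...  # mypy: Signature of "m" incompatible with supertype "Base"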

Related

A good practice to implement with python multiple inheritance class?

The Scenario:
class A:
    def __init__(self, key, secret):
        self.key = key
        self.secret = secret

    def same_name_method(self):
        do_some_stuff()

    def method_a(self):
        pass

class B:
    def __init__(self, key, secret):
        self.key = key
        self.secret = secret

    def same_name_method(self):
        do_another_stuff()

    def method_b(self):
        pass

class C(A, B):
    def __init__(self, *args, **kwargs):
        # I want to init both class A's and B's key and secret
        ## I want to rename class A's and B's same-name method
        any_ideas()
    ...
What I Want:
I want an instance of class C to initialize both class A and class B, because they use different API keys.
And I want to rename class A's and class B's same_name_method, so I won't be confused about which same_name_method is being called.
What I Have Done:
For problem one, I have done this:
class C(A, B):
    def __init__(self, *args, **kwargs):
        A.__init__(self, a_api_key, a_api_secret)
        B.__init__(self, b_api_key, b_api_secret)
Comment: I know about super(), but in this situation I do not know how to use it.
For problem two, I added a __new__ to class C:
def __new__(cls, *args, **kwargs):
    cls.platforms = []
    cls.rename_method = []
    for platform in cls.__bases__:
        # fetch platform module name
        module_name = platform.__module__.split('.')[0]
        cls.platforms.append(module_name)
        # rename attr
        for k, v in platform.__dict__.items():
            if not k.startswith('__'):
                setattr(cls, module_name + '_' + k, v)
                cls.rename_method.append(k)
    for i in cls.rename_method:
        delattr(cls, i)  ## this line will raise AttributeError!!
    return super().__new__(cls)
Comment: because I renamed the methods and added the new names as class attributes, I need to delete the old method attributes, but I do not know how to delattr them. For now I just leave them alone and did not delete the old methods.
Question:
Any Suggestions?
So, you want some pretty advanced things, some complicated things, and you don't understand well how classes behave in Python.
So, for your first thing: initializing both classes, and every other method that should run in all classes: the correct solution is to make use of cooperative calls to super() methods.
A call to super() in Python returns a very special proxy object that reflects all methods available in the next class in the Method Resolution Order (MRO).
So, if A.__init__ and B.__init__ both have to be called, each method should include a super().__init__ call, and one will call the other's __init__ in the appropriate order, regardless of how they are used as bases in subclasses. Since object also has an __init__, the last super().__init__ in the chain will simply call it, which is a no-op. If you have more methods in your classes that should run in all base classes, you'd rather build a proper common base class so that the top-most super() call doesn't try to propagate to a non-existing method.
Otherwise, it is just:
class A:
    def __init__(self, akey, asecret, **kwargs):
        # store under distinct names so B's __init__ doesn't clobber them
        self.akey = akey
        self.asecret = asecret
        super().__init__(**kwargs)

class B:
    def __init__(self, bkey, bsecret, **kwargs):
        self.bkey = bkey
        self.bsecret = bsecret
        super().__init__(**kwargs)

class C(A, B):
    # does not even need an explicit `__init__`
    pass
I think you get the idea. Of course, the parameter names have to differ; ideally, when writing C you don't have to worry about parameter order, but when calling C you do have to worry about supplying all mandatory parameters for C and its bases. If you can't rename the parameters in A or B to be distinct, you could try to rely on parameter order for the call instead, with each __init__ consuming two positional parameters, but that will require some extra care with inheritance order.
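A quick usage sketch of the classes above (my illustration; the key values are made up). Because each __init__ consumes its own keyword arguments and forwards the rest, C can be constructed in one call:
c = C(akey="key-for-a", asecret="secret-for-a",
      bkey="key-for-b", bsecret="secret-for-b")
print(c.akey, c.bkey)  # both bases were initialized cooperatively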
So - up to this point, it is basic Python multiple-inheritance "howto", and should be pretty straightforward. Now comes your strange stuff.
As for the auto-renaming of methods: first things first -
are you quite sure you need inheritance? Maybe having your granular classes for each external service, plus a registry-and-dispatch class that calls the methods on the others by composition, would be saner. (I may come back to this later.)
Are you aware that __new__ is called for each instantiation of the class, and that all the class-attribute mangling you are performing there happens again for each new instance of your classes?
So, if the needed method renaming + shadowing has to take place at class creation time, you can do that using the special method __init_subclass__, which exists since Python 3.6. It is a special class method that is called once for each derived class of the class it is defined on. So, just create a base class from which A and B themselves will inherit, and move a properly modified version of the thing you are putting in __new__ there; a sketch follows below. If you are not using Python 3.6, this should be done in the __new__ or __init__ of a metaclass, not in the __new__ of the class itself.
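A minimal sketch of that idea (my adaptation of the question's __new__ logic; RenamingBase is a made-up name, and it assumes A and B are defined in different modules, mirroring the question's setup). The renaming runs once per subclass instead of once per instance:
class RenamingBase:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        for base in cls.__bases__:
            if base is RenamingBase:
                continue
            module_name = base.__module__.split('.')[0]
            for name, value in base.__dict__.items():
                if not name.startswith('_'):
                    # expose e.g. modulename_same_name_method on the subclass
                    setattr(cls, module_name + '_' + name, value)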
Another approach would be to have a custom __getattribute__ method. This could be crafted to provide namespaces for the base classes. It would work only on instances, not on the classes themselves (but could be made to, again, with a metaclass). __getattribute__ can even hide the same-name methods.
class Base:
    @classmethod
    def _get_base_modules(cls):
        result = {}
        for base in cls.__bases__:
            module_name = base.__module__.split(".")[0]
            result[module_name] = base
        return result

    def _proxy(self, module_name):
        class base:
            def __dir__(base_self):
                return dir(self._base_modules[module_name])
            def __getattr__(base_self, attr):
                original_value = self._base_modules[module_name].__dict__[attr]
                if hasattr(original_value, "__get__"):
                    original_value = original_value.__get__(self, self.__class__)
                return original_value
        base.__name__ = module_name
        return base()

    def __init_subclass__(cls):
        cls._base_modules = cls._get_base_modules()
        cls._shadowed = {name for module_class in cls._base_modules.values()
                         for name in module_class.__dict__ if not name.startswith("_")}

    def __getattribute__(self, attr):
        if attr.startswith("_"):
            return super().__getattribute__(attr)
        cls = self.__class__
        if attr in cls._shadowed:
            raise AttributeError(attr)
        if attr in cls._base_modules:
            return self._proxy(attr)
        return super().__getattribute__(attr)

    def __dir__(self):
        return list(super().__dir__()) + list(self._base_modules)

class A(Base):
    ...

class B(Base):
    ...

class C(A, B):
    ...
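One caveat about this sketch (my observation, not from the original answer): the namespaces are keyed on each base class's top-level module name, so A and B are only distinguishable when they are defined in different modules; if both live in the same file, they collapse onto the same key (e.g. __main__).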
As you can see, this is some fun, but it starts getting really complicated, and all the hoops needed to retrieve the actual attributes from the superclasses after adding an artificial namespace seem to indicate that your problem is not calling for inheritance after all, as I suggested above.
Since you have your small, functional, atomic classes for each "service", you could use a plain, simple, non-meta-at-all class that works as a registry for the various services, and you can even enhance it to call the equivalent method in several of the services it is handling with a single call:
class Services:
    def __init__(self):
        self.registry = {}

    def register(self, cls, key, secret):
        name = cls.__module__.split(".")[0]
        service = cls(key, secret)
        self.registry[name] = service

    def __getattr__(self, attr):
        if attr in self.registry:
            return self.registry[attr]
        raise AttributeError(attr)
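Usage would look something like this (a sketch under the assumption that A and B live in separate modules, e.g. service_a and service_b, since the registry keys on the module name):
services = Services()
services.register(A, "a-api-key", "a-api-secret")  # registered as "service_a"
services.register(B, "b-api-key", "b-api-secret")  # registered as "service_b"

services.service_a.same_name_method()
services.service_b.same_name_method()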

Class instance as static attribute

Python 3 doesn't allow you to reference a class inside its body (except in methods):
class A:
    static_attribute = A()

    def __init__(self):
        ...
This raises a NameError in the second line because 'A' is not defined.
Alternatives
I have quickly found one workaround:
class A:
    @property
    @classmethod
    def static_property(cls):
        return A()

    def __init__(self):
        ...
Although this isn't exactly the same since it returns a different instance every time (you could prevent this by saving the instance to a static variable the first time).
Are there simpler and/or more elegant alternatives?
EDIT:
I have moved the question about the reasons for this restriction to a separate question
The expression A() can't be run until the class A has been defined. In your first block of code, the definition of A is not complete at the point you are trying to execute A().
Here is a simpler alternative:
class A:
    def __init__(self):
        ...

A.static_attribute = A()
When you define a class, Python immediately executes the code within the definition. Note that's different from defining a function, where Python compiles the code but doesn't execute it.
That's why this will create an error:
class MyClass(object):
    a = 1 / 0
But this won't:
def my_func():
    a = 1 / 0
In the body of A's class definition, A is not yet defined, so you can't reference it until after it's been defined.
There are several ways you can accomplish what you're asking, but it's not clear to me why this would be useful in the first place, so if you can provide more details about your use case, it'll be easier to recommend which path to go down.
The simplest would be what khelwood posted:
class A(object):
    pass

A.static_attribute = A()
Because this is modifying class creation, using a metaclass could be appropriate:
class MetaA(type):
    def __new__(mcs, name, bases, attrs):
        cls = super(MetaA, mcs).__new__(mcs, name, bases, attrs)
        cls.static_attribute = cls()
        return cls

class A(metaclass=MetaA):
    pass
Or you could use a descriptor to have the instance lazily created, or if you want to customize access to it further:
class MyDescriptor(object):
    def __get__(self, instance, owner):
        owner.static_attribute = owner()
        return owner.static_attribute

class A(object):
    static_attribute = MyDescriptor()
Using the property decorator is a viable approach, but it would need to be done something like this:
class A:
    _static_attribute = None

    @property
    def static_attribute(self):
        if A._static_attribute is None:
            A._static_attribute = A()
        return A._static_attribute

    def __init__(self):
        pass

a = A()
print(a.static_attribute)  # -> <__main__.A object at 0x004859D0>
b = A()
print(b.static_attribute)  # -> <__main__.A object at 0x004859D0>
You can use a class decorator:
def set_static_attribute(cls):
    cls.static_attribute = cls()
    return cls

@set_static_attribute
class A:
    pass
Now:
>>> A.static_attribute
<__main__.A at 0x10713a0f0>
Applying the decorator on top of the class makes it more explicit than setting static_attribute after a potentially long class definition. The applied decorator "belongs" to the class definition, so if you move the class around in your source code, you are more likely to move it along than an attribute assignment outside the class.

How to execute BaseClass method before it gets overridden by DerivedClass method in Python

I am almost sure there is a proper term for what I want to do, but since I'm not familiar with it, I will try to describe the whole idea explicitly. I have a collection of classes that all inherit from one base class. The classes consist almost entirely of methods that are relevant within each class only. However, several methods share a similar name, general functionality, and some logic, while their implementations are still mostly different. So what I want to know is whether it's possible to create a method in the base class that executes some logic common to all the methods but then continues execution in the class-specific method. Hopefully that makes sense, but I will try to give a basic example of what I want.
So consider a base class that looks something like that:
class App(object):
    def __init__(self, testName):
        self.localLog = logging.getLogger(testName)

    def access(self):
        LOGIC_SHARED
And an example of a derived class:
class App1(App):
    def __init__(self, testName):
        . . .
        super(App1, self).__init__(testName)

    def access(self):
        LOGIC_SPECIFIC
So what I'd like to achieve is that the LOGIC_SHARED part in the base class's access method is executed when calling the access method of any App subclass, before executing the LOGIC_SPECIFIC part, which is (as the name says) specific to the access method of each derived class.
If that makes any difference, the LOGIC_SHARED mostly consists of logging and maintenance tasks.
Hope that is clear enough and the idea makes sense.
NOTE 1:
There are class specific parameters which are being used in the LOGIC_SHARED section.
NOTE 2:
It is important to implement that behavior using only Python built-in functions and modules.
NOTE 3:
The LOGIC_SHARED part looks something like this:
try:
    self.localLog.info("Checking the actual link for %s", self.application)
    self.link = self.checkLink(self.application)
    self.localLog.info("Actual link found!: %s", self.link)
except:
    self.localLog.info("No links found. Going to use the default link: %s", self.link)
So, there are plenty of specific class instance attributes that I use and I'm not sure how to use these attributes from the base class.
Sure, just put the specific logic in its own "private" method, which can be overridden by the derived classes, and leave access in the base class.
class Base(object):
    def access(self):
        # Shared logic 1
        self._specific_logic()
        # Shared logic 2

    def _specific_logic(self):
        # Nothing special to do in the base class
        pass
        # Or you could even raise an exception:
        # raise Exception('Called access on Base class instance')
class DerivedA(Base):
    # overrides Base implementation
    def _specific_logic(self):
        pass  # DerivedA specific logic

class DerivedB(Base):
    # overrides Base implementation
    def _specific_logic(self):
        pass  # DerivedB specific logic
def test():
    x = Base()
    x.access()  # Shared logic 1
                # Shared logic 2

    a = DerivedA()
    a.access()  # Shared logic 1
                # Derived A specific logic
                # Shared logic 2

    b = DerivedB()
    b.access()  # Shared logic 1
                # Derived B specific logic
                # Shared logic 2
The easiest way to do what you want is simply to call the parent class's access method inside the child's access method.
class App(object):
    def __init__(self, testName):
        self.localLog = logging.getLogger(testName)

    def access(self):
        LOGIC_SHARED

class App1(App):
    def __init__(self, testName):
        super(App1, self).__init__(testName)

    def access(self):
        App.access(self)
        # or, equivalently, use super:
        # super(App1, self).access()
However, your shared functionality is mostly logging and maintenance. Unless there is a pressing reason to put this inside the parent class, you may want to consider refactoring the shared functionality into a decorator function. This is particularly useful if you want to reuse similar logging and maintenance functionality for a range of methods inside your class.
You can read more about function decorators here: http://www.artima.com/weblogs/viewpost.jsp?thread=240808, or here on Stack Overflow: How to make a chain of function decorators?.
def logged(method):
    def decorated_method(self, *args, **kwargs):
        LOGIC_SHARED
        return method(self, *args, **kwargs)
    return decorated_method
Remember that in Python, functions are first-class objects. That means you can take a function and pass it as a parameter to another function. A decorator function makes use of this: the decorator takes another function as a parameter (here called method) and creates a new function (here called decorated_method) that takes the place of the original.
Your App1 class then would look like this:
class App1(App):
    @logged
    def access(self):
        LOGIC_SPECIFIC
This really is shorthand for this:
class App1(App):
    def access(self):
        LOGIC_SPECIFIC
    access = logged(access)
I would find this more elegant than adding methods to the superclass to capture shared functionality.
If I understand this comment well (How to execute BaseClass method before it gets overridden by DerivedClass method in Python), you want the additional arguments passed to the parent class's method to be used in the derived class.
Based on Jonathon Reinhart's answer, here is how you could do it:
class Base(object):
    def access(self,
               param1, param2,  # first, common parameters
               *args,           # second, positional parameters
               **kwargs):       # third, keyword arguments
        # Shared logic 1
        self._specific_logic(param1, param2, *args, **kwargs)
        # Shared logic 2

    def _specific_logic(self, param1, param2, *args, **kwargs):
        # Nothing special to do in the base class
        pass
        # Or you could even raise an exception:
        # raise Exception('Called access on Base class instance')

class DerivedA(Base):
    # overrides Base implementation
    def _specific_logic(self, param1, param2, param3):
        pass  # DerivedA specific logic

class DerivedB(Base):
    # overrides Base implementation
    def _specific_logic(self, param1, param2, param4):
        pass  # DerivedB specific logic
def test():
    x = Base()

    a = DerivedA()
    a.access("param1", "param2", "param3")         # Shared logic 1
                                                   # Derived A specific logic
                                                   # Shared logic 2

    b = DerivedB()
    b.access("param1", "param2", param4="param4")  # Shared logic 1
                                                   # Derived B specific logic
                                                   # Shared logic 2
I personally prefer Jonathon Reinhart's answer, but seeing as you seem to want more options, here are two more. I would probably never use the metaclass one, as cool as it is, but I might consider the second one, with decorators.
With Metaclasses
This method uses a metaclass for the base class that forces the base class's access method to be called first, without having a separate private function, and without having to explicitly call super or anything like that. The end result: no extra work/code goes into inheriting classes.
Plus, it works like maaaagiiiiic </spongebob>
Below is the code that will do this. At http://dbgr.cc/W you can step through the code live and see how it works:
#!/usr/bin/env python

class ForceBaseClassFirst(type):
    def __new__(cls, name, bases, attrs):
        print("Creating class '%s'" % name)

        def wrap_function(fn_name, base_fn, other_fn):
            def new_fn(*args, **kwargs):
                print("calling base '%s' function" % fn_name)
                base_fn(*args, **kwargs)
                print("calling other '%s' function" % fn_name)
                other_fn(*args, **kwargs)
            new_fn.__name__ = "wrapped_%s" % fn_name
            return new_fn

        if name != "BaseClass":
            print("setting attrs['access'] to wrapped function")
            attrs["access"] = wrap_function(
                "access",
                getattr(bases[0], "access", lambda: None),
                attrs.setdefault("access", lambda: None)
            )
        return type.__new__(cls, name, bases, attrs)

class BaseClass(object):
    __metaclass__ = ForceBaseClassFirst

    def access(self):
        print("in BaseClass access function")

class OtherClass(BaseClass):
    def access(self):
        print("in OtherClass access function")

print("OtherClass attributes:")
for k, v in OtherClass.__dict__.iteritems():
    print("%15s: %r" % (k, v))

o = OtherClass()
print("Calling access on OtherClass instance")
print("-------------------------------------")
o.access()
This uses a metaclass to replace OtherClass's access function with a function that wraps a call to BaseClass's access function followed by a call to OtherClass's original access function. See the best explanation of metaclasses here: https://stackoverflow.com/a/6581949.
Stepping through the code should really help you understand the order of things.
With Decorators
This functionality could also easily be put into a decorator, as shown below. Again, a steppable/debuggable/runnable version of the code below can be found here http://dbgr.cc/0
#!/usr/bin/env python

def superfy(some_func):
    def wrapped(self, *args, **kwargs):
        # NOTE: might need to be changed when dealing with
        # multiple inheritance
        base_fn = getattr(self.__class__.__bases__[0], some_func.__name__,
                          lambda *args, **kwargs: None)
        # bind the parent class' function and call it
        base_fn.__get__(self, self.__class__)(*args, **kwargs)
        # call the child class' function
        some_func(self, *args, **kwargs)
    wrapped.__name__ = "superfy(%s)" % some_func.__name__
    return wrapped

class BaseClass(object):
    def access(self):
        print("in BaseClass access function")

class OtherClass(BaseClass):
    @superfy
    def access(self):
        print("in OtherClass access function")

print("OtherClass attributes")
print("----------------------")
for k, v in OtherClass.__dict__.iteritems():
    print("%15s: %r" % (k, v))
print("")

o = OtherClass()
print("Calling access on OtherClass instance")
print("-------------------------------------")
o.access()
The decorator above retrieves the BaseClass' function of the same name, and calls that first before calling the OtherClass' function.
Maybe this simple approach can help.
class App:
    def __init__(self, testName):
        self.localLog = logging.getLogger(testName)
        self.application = None
        self.link = None

    def access(self):
        print('There is something BaseClass must do')
        print('The application is ', self.application)
        print('The link is ', self.link)

class App1(App):
    def __init__(self, testName):
        # ...
        super(App1, self).__init__(testName)

    def access(self):
        self.application = 'Application created by App1'
        self.link = 'Link created by App1'
        super(App1, self).access()
        print('There is something App1 must do')

class App2(App):
    def __init__(self, testName):
        # ...
        super(App2, self).__init__(testName)

    def access(self):
        self.application = 'Application created by App2'
        self.link = 'Link created by App2'
        super(App2, self).access()
        print('There is something App2 must do')
and the test result:
>>>
>>> app = App('Baseclass')
>>> app.access()
There is something BaseClass must do
The application is None
The link is None
>>> app1 = App1('App1 test')
>>> app1.access()
There is something BaseClass must do
The application is Application created by App1
The link is Link created by App1
There is something App1 must do
>>> app2 = App2('App2 text')
>>> app2.access()
There is something BaseClass must do
The application is Application created by App2
The link is Link created by App2
There is something App2 must do
>>>
Adding a combine function, we can combine two functions and execute them one after the other, as below.
def combine(*fun):
    def new(*s):
        for i in fun:
            i(*s)
    return new

class base():
    def x(self, i):
        print 'i', i

class derived(base):
    def x(self, i):
        print 'i*i', i * i
    x = combine(base.x, x)

new_obj = derived()
new_obj.x(3)
Output below:
i 3
i*i 9
It need not be a single-level hierarchy; it can have any number of levels of nesting.

Python sum of classes

I want to define a class AorB, such that all A's are AorB's, all B's are AorB's, and these are all the AorB's. Of course, A and B should be subclasses of AorB. The problem is in AorB.__init__, where I can't convince self that it should be something else. I can define an AorB factory, but I'd rather have an AorB constructor if possible.
class AorB:
    def __init__(self, par):
        if par: self = A(par)  #!
        else:   self = B()     #!

    @staticmethod
    def from_par(par):
        if par: return A(par)
        else:   return B()

class A(AorB):
    def __init__(self, par):
        self.par = par

class B(AorB):
    def __init__(self):
        pass

print(
    AorB.from_par(5),
    AorB.from_par(0),
    AorB(5),
    AorB(0),
    sep="\n")
I know assignment to self doesn't work here, but I just wanted to show intention. As I said, factory (from_par) works fine, I just want to call it as AorB, not as AorB.from_par.
PS. I know __init__ is probably too late, since the type of self is already determined. Feel free to use metaclasses in your answer. It's time I learned something useful about them. :-)
You can't, not with __init__. By the time __init__ is called, the instance has already been created.
You want a factory function instead:
class AorB: pass

class A(AorB):
    def __init__(self, par):
        self.par = par

class B(AorB):
    def __init__(self):
        pass

def AorB(par):
    return A(par) if par else B()
From an API point of view, there is no difference; AorB is a callable that produces either an A() or a B() instance.
The other possible route involves defining a __new__ function instead; this is the class constructor:
class AorB:
    def __new__(cls, par=None):
        if cls is not AorB: return super().__new__(cls)
        return super().__new__(A) if par else super().__new__(B)

class A(AorB):
    def __init__(self, par):
        self.par = par

class B(AorB):
    def __init__(self, par=None):
        pass
Which is just a more involved factory function, really. Note that the __new__ method returns the result of super().__new__ for one of the subclasses based on par, so both A and B will be passed a par parameter, whether they want one or not.
The if cls is not AorB line is needed to allow instantiating A() or B() directly; you can omit that line if that is not a requirement and only the AorB factory class is used.
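A quick interactive check of the __new__ approach (my illustration of the code above):
>>> a = AorB(5)
>>> type(a).__name__, a.par
('A', 5)
>>> type(AorB(0)).__name__
'B'
>>> type(A(3)).__name__   # direct instantiation still works
'A'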

Is it best-practice to place __init__ in the beginning or end of a class?

Consider the following code:
class AClass():
    def defaultMethod(self):
        return 1

    def __init__(self, methodToUse=defaultMethod):
        print(methodToUse(self))

if __name__ == "__main__":
    AClass()
In this case one cannot move defaultMethod below the __init__ method; if I do, it causes "NameError: name 'defaultMethod' is not defined".
This means that I need to define this method before __init__, or else Python does not know about it. This, in turn, means that I no longer have __init__ as the first method, which leaves me wondering whether it is usual to place the __init__ method at the end of a class or at the beginning.
What do you mean, "I need to define this method before __init__ or else Python does not know about it"?
>>> class A(object):
...     def __init__(self):
...         self.foo()
...     def foo(self):
...         print '42'
...
>>> A()
42
I usually place __init__() before other instance methods, but after class methods/properties/attributes.
I think you're doing things a little peculiarly. You should still put __init__ high up if not the first method. Readability is key and __init__ exposes what you expect the main instance fields to be.
Here are three alternatives. My preference is for the first as it documents the default method and will require the least modification to your code. The last works, but could be confusing for anyone having to maintain your code.
class A(object):
    def __init__(self, method="foo"):
        if callable(method):
            method(self)
        else:
            getattr(self, method)()

    def foo(self):
        print "something"

class B(object):
    def __init__(self, method=None):
        if method is None:
            self.defaultMethod()
        else:
            method(self)

    def defaultMethod(self):
        print "foo"

def _defaultMethod(self):
    print self.x

class C(object):
    def __init__(self, method=_defaultMethod):
        self.x = "bleh"
        method(self)

    def anotherMethod(self):
        print "doing something else"

    def defaultMethodProxy(self):
        _defaultMethod(self)
__init__ is most commonly placed at the beginning of a class, since it is the first thing run when the class is instantiated. Since your situation requires it to exist further down in the class, it would be nice to other devs to leave a note in the comments for the class.
I prefer __init__ at the beginning, and I would actually not write the class that way, but rather something like this:
class AClass():
    def __init__(self, methodToUse='defaultMethod'):
        print getattr(self, methodToUse)()

    def defaultMethod(self):
        return 1

if __name__ == "__main__":
    AClass()
The problem is that default argument values are evaluated when the def statement is executed (here, while the class body is running), and at that point there is no defaultMethod defined yet; but by the time __init__ actually runs, the method is there.
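A small illustration of that evaluation order (my sketch, not from the original answer): a name used as a default value only has to exist by the time its def statement runs, which is why defining the method first works:
class Demo:
    def first(self):
        return 1

    # `first` is already bound in the class body at this point,
    # so it can be used as a default argument value:
    def __init__(self, method=first):
        print(method(self))

Demo()  # prints 1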
