Class hierarchies and constructors are intertwined: parameters accepted by a child class's constructor often need to be passed on to its parent. So, in Python, we end up with something like this:
class Parent(object):
    def __init__(self, a, b, c, ka=None, kb=None, kc=None):
        # do something with a, b, c, ka, kb, kc
        pass

class Child(Parent):
    def __init__(self, a, b, c, d, e, f, ka=None, kb=None, kc=None, kd=None, ke=None, kf=None):
        super(Child, self).__init__(a, b, c, ka=ka, kb=kb, kc=kc)
        # do something with d, e, f, kd, ke, kf
Imagine this with a dozen child classes and lots of parameters. Adding new parameters becomes very tedious.
Of course one can dispense with named parameters completely and use *args and **kwargs, but that makes the method declarations ambiguous.
Is there a pattern for elegantly dealing with this in Python (2.6)?
By "elegantly" I mean I would like to reduce the number of times the parameters appear. a, b, c, ka, kb, kc all appear 3 times: in the Child constructor, in the super() call to Parent, and in the Parent constructor.
Ideally, I'd like to specify the parameters for Parent's init once, and in Child's init only specify the additional parameters.
I'd like to do something like this:
class Parent(object):
    def __init__(self, a, b, c, ka=None, kb=None, kc=None):
        print 'Parent: ', a, b, c, ka, kb, kc

class Child(Parent):
    def __init__(self, d, e, f, kd='d', ke='e', kf='f', *args, **kwargs):
        super(Child, self).__init__(*args, **kwargs)
        print 'Child: ', d, e, f, kd, ke, kf

x = Child(1, 2, 3, 4, 5, 6, ka='a', kb='b', kc='c', kd='d', ke='e', kf='f')
This unfortunately doesn't work, since 4, 5, 6 end up assigned to kd, ke, kf.
Is there some elegant python pattern for accomplishing the above?
"dozen child classes and lots of parameters" sounds like a problem irrespective of parameter naming.
I suspect that a little refactoring can peel out some Strategy objects that would simplify this hierarchy and make the super-complex constructors go away.
Well, the only solution I can see is a mixture of explicitly listed parameters and *args plus **kwargs, like so. (Note that keyword-only parameters after *args are Python 3 syntax, so this exact form won't run on the 2.6 mentioned in the question.)
class Parent(object):
    def __init__(self, a, b, c, ka=None, kb=None, kc=None):
        pass

class Child(Parent):
    def __init__(self, d, e, f, *args, kd=None, ke=None, kf=None, **kwargs):
        Parent.__init__(self, *args, **kwargs)
        pass
This way, you could see which parameters are required by each of the classes, but without having to re-type them.
One thing to note is that you lose your desired ordering (a, b, c, d, e, f) as it becomes (d, e, f, a, b, c). I'm not sure if there's a way to have the *args before the other non-named parameters.
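On Python 2.6 itself, where keyword-only parameters are not available, a minimal sketch of the same idea is to pop the child's keyword arguments out of **kwargs before delegating (the names and defaults here just mirror the question):

class Parent(object):
    def __init__(self, a, b, c, ka=None, kb=None, kc=None):
        print 'Parent: ', a, b, c, ka, kb, kc

class Child(Parent):
    def __init__(self, d, e, f, *args, **kwargs):
        # Python 2 has no keyword-only syntax, so extract Child's own
        # keyword arguments before handing the rest to Parent.
        kd = kwargs.pop('kd', 'd')
        ke = kwargs.pop('ke', 'e')
        kf = kwargs.pop('kf', 'f')
        super(Child, self).__init__(*args, **kwargs)
        print 'Child: ', d, e, f, kd, ke, kf

x = Child(4, 5, 6, 1, 2, 3, ka='a', kb='b', kc='c', kd='D')

The same ordering caveat applies: the child's positional parameters (d, e, f) must come before the ones destined for Parent.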
I try to group the parameters into their own objects. For example, instead of passing sourceDirectory, targetDirectory, temporaryDirectory, serverName, and serverPort, I'd have DirectoryContext and ServerContext objects. If the context objects start to accumulate behavior or logic, that might lead to the Strategy objects mentioned here.
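A minimal sketch of that grouping (all names here are hypothetical):

class DirectoryContext(object):
    # groups the directory-related parameters into one object
    def __init__(self, source, target, temporary):
        self.source = source
        self.target = target
        self.temporary = temporary

class ServerContext(object):
    # groups the server-related parameters into one object
    def __init__(self, name, port):
        self.name = name
        self.port = port

def synchronize(dirs, server):
    # two cohesive objects instead of five loose parameters
    print '%s -> %s:%d' % (dirs.source, server.name, server.port)

synchronize(DirectoryContext('/src', '/dst', '/tmp'), ServerContext('example.com', 8080))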
Related
I have a function-object f, which takes 4 numeric inputs and outputs two numbers. Maybe
def f(a, b, c, d):
    return a+b, c+d
or maybe
def f(a, b, c, d):
    return a*c, d*c
To be clear, I don't actually know what f is, I just have it as an object.
I would like to create a new function-object, h, such that h(a,b,c,d)=x*c+y where (x,y)=f(a,b,c,d). The trouble is, I have no direct access to c, only to f.
def make_h(f):
    ???
    return h
assert( make_h(f)(a,b,c,d) == f(a,b,c,d)[0]*c+f(a,b,c,d)[1])
Is it possible to do this in python? I have tried searching and reading some documentation, but have not found an answer (yet?).
EDIT: There is a simple answer (given below) when the signature of f is fixed. Suppose I had to do this to different functions, some with inputs (a, b, c, d), some with inputs (l, m, c), and maybe some with inputs (c, r). Would it still be possible to do what I want?
This example is strongly related to the concept of a decorator. My solution is the following:
def make_h(f):
    def h(a, b, c, d):
        x, y = f(a, b, c, d)
        return x * c + y
    return h
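For instance, with the first f above, a quick check (x*c + y with x = a+b and y = c+d):

def f(a, b, c, d):
    return a+b, c+d

h = make_h(f)
assert h(1, 2, 3, 4) == (1 + 2) * 3 + (3 + 4)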
UPDATE. In case f takes any number of arguments, we can use *args and **kwargs. While it is bad practice, if we know that c is always passed as a keyword argument, we could use the following code:
def make_h(f):
    def h(*args, **kwargs):
        x, y = f(*args, **kwargs)
        return x * kwargs["c"] + y
    return h
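If c might also arrive positionally, a sketch using inspect (assuming Python 3.3+ and that every wrapped function names its parameter c) can bind the actual arguments to the signature first:

import inspect

def make_h(f):
    sig = inspect.signature(f)
    def h(*args, **kwargs):
        # Map the call's arguments onto f's parameter names, so ``c``
        # is found whether it was passed positionally or by keyword.
        bound = sig.bind(*args, **kwargs)
        x, y = f(*args, **kwargs)
        return x * bound.arguments['c'] + y
    return h

def f(l, m, c):
    return l + m, c

assert make_h(f)(1, 2, 3) == (1 + 2) * 3 + 3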
I have this code, showing a classic diamond pattern:
class A:
    def __init__(self, x):
        print("A:" + x)

class B(A):
    def __init__(self, x):
        print("B:" + x)
        super().__init__("b")

class C(A):
    def __init__(self, x):
        print("C:" + x)
        super().__init__("c")

class D(B, C):
    def __init__(self):
        super().__init__("d")

d = D()
The output is:
B:d
C:b
A:c
B:d makes sense, since D derives from B.
The A:c I almost get, though I could equally see A:b.
However, the C:b bit doesn't make sense: C does not derive from B.
Could someone explain?
Questions such as this unfortunately do not mention the parameters.
Python uses the C3 linearization algorithm to establish the method resolution order, which is the same order that super delegates in.
Basically, the algorithm computes a linearization L(X) for each class X: X itself, followed by a merge of the linearizations of its bases and the list of the bases themselves. The merge repeatedly takes the next class that does not appear in the tail of any other list, until it reaches the root, object. Below, I use O for object for brevity:
L(O) = [O]
L(A) = [A] + merge(L(O), [O]) = [A, O]
L(B) = [B] + merge(L(A), [A]) = [B] + merge([A, O], [A]) = [B, A] + merge([O])
= [B, A, O]
L(C) = [C] + merge(L(A), [A]) = [C] + merge([A, O], [A]) = [C, A] + merge([O])
= [C, A, O]
L(D) = [D] + merge(L(B), L(C), [B, C]) = [D] + merge([B, A, O], [C, A, O], [B, C])
= [D, B] + merge([A, O], [C, A, O], [C]) = [D, B, C] + merge([A, O], [A, O])
= [D, B, C, A, O]
Classes in Python are dynamically composed - that includes inheritance.
The C:b output does not imply that B magically inherits from C. If you instantiate either B or C on its own, neither knows about the other.
>>> B('root')
B:root
A:b
However, D does know about both B and C:
class D(B, C):
    ...
There are a lot of technicalities available on this. However, there are basically two parts to how this works:
Direct base classes are resolved in the order they appear.
B comes before C.
Recursive base classes are resolved so they are not duplicated.
A base class of both B and C must follow both.
For the class D, that means the base classes resolve as B->C->A! C has sneaked in between B and A - but only for class D, not for class B.
Note that there is actually another class involved: all classes derive from object by default.
>>> D.__mro__
(__main__.D, __main__.B, __main__.C, __main__.A, object)
You have already written A knowing that there is no base class above it to take its parameters. However, neither B nor C can assume this; they both expect to derive from an A object. Subclassing does imply that both B and C are valid A-objects as well, though!
It is valid for both B and C to precede A, since the two are subclasses of A. B->C->A->object does not break B's expectation that its superclass is of type A.
With all other combinations, one ends up with C preceding nothing (invalid) or object preceding something (invalid). That rules out the depth-first resolution B->A->object->C and the duplicating B->A->object->C->A->object.
This method resolution order is practical to enable mixins: classes that rely on other classes to define how methods are resolved.
There is a nice example of how a logger for dictionary access can accept both dict and OrderedDict.
import collections
import logging

# basic logger working on ``dict``
class LoggingDict(dict):
    def __setitem__(self, key, value):
        logging.info('Setting %r to %r' % (key, value))
        super().__setitem__(key, value)

# mixin onto a different ``dict`` subclass
class LoggingOD(LoggingDict, collections.OrderedDict):
    pass
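For reference, the mixin's MRO (assuming the two classes above) shows why the same logging code works for either dict type: the logger sits in front of whichever dict implementation follows it.

>>> LoggingOD.__mro__
(__main__.LoggingOD, __main__.LoggingDict, collections.OrderedDict, dict, object)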
You can always check the method resolution order that any class should have:
>>> D.mro()
[__main__.D, __main__.B, __main__.C, __main__.A, object]
As you can see, if everybody is doing the right thing (i.e. calling super), the MRO will be the 1st parent, the 2nd parent, the 1st parent's parent, and so on...
You can roughly think of it as depth-first, then left to right. The algorithm changed in Python 2.3 (to C3 linearization), but the outcome is usually the same.
In this case, B and C have the same parent A, and A doesn't call super.
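A small check that makes the C:b line concrete (a doctest-style sketch, reusing the classes above): for a D instance, the class after B in the MRO is C, so B's super().__init__("b") call lands on C.__init__:

>>> d = D()
B:d
C:b
A:c
>>> super(B, d).__init__.__qualname__
'C.__init__'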
I just want to be able to unpack the instance variables of class foo, for example:
x = foo("name", "999", "24", "0.222")
a, b, c, d = *x
a, b, c, d = [*x]
I am not sure as to which is the correct method for doing so when implementing my own __iter__ method, however, the latter is the one that has worked with mixed "success". I say mixed because doing so with the presented code appears to alter the original instance object x, such that it is no longer valid.
class foo:
    def __init__(self, a, b, c, d):
        self.a = a
        self.b = b
        self.c = c
        self.d = d

    def __iter__(self):
        return iter([a, b, c, d])
I have read the myriad posts on this site regarding __iter__, __next__, generators, etc., as well as a Python book and docs.python.org, and I seem unable to figure out what I am not understanding. I've gathered that __iter__ needs to return an iterator (which can just be self, but I am not sure how that works for what I want). I've also tried various ways of implementing __next__ and of iterating over vars(foo).items(), either by casting to a list or as a dictionary, with no success.
I don't believe this is a duplicate post, since the only similar questions I've seen involve a single list attribute or a range of numbers rather than four non-container attributes.
If you want the instance's variables, you should access them through self:
def __iter__(self):
    return iter([self.a, self.b, self.c, self.d])
With this change,
a, b, c, d = list(x)
will get you the variables.
You could go the riskier route of using vars(x) or x.__dict__, sorting the items by variable name (risky, and limited, because attributes are not guaranteed to be stored in any particular order), and extracting the second element of each (name, value) tuple. But I would say the iterator is definitely better.
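For completeness, a one-line sketch of that vars-based approach (it assumes the attribute names happen to sort into the desired order, as a, b, c, d do here):

a, b, c, d = (value for name, value in sorted(vars(x).items()))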
You can store the arguments in an attribute (self.e below) or return them on function call:
class foo:
    def __init__(self, *args):
        self.a, self.b, self.c, self.d = self.e = args

    def __call__(self):
        return self.e
x = foo("name", "999", "24", "0.222")
a, b, c, d = x.e
# or
a, b, c, d = x()
I am trying to write a function that takes an event (you know, to get the position of the click) and four other arguments, and I want to bind it as a callback, but I don't know how. I tried giving it a default value, with no success; I don't know what to pass in the place of 'event'.
My code is:
def example(event, a, b, c, d):
Your bound function will already be called as callback(event) from within tkinter's event system, so your def header takes one positional argument by default. It is usually written def callback(event): and bound with some_widget.bind(sequence, callback): you just pass the function object to bind and let event get passed along internally.
That having been said, there are two ways to use other variables from outside within event callbacks and still use the event object too.
Use lambda as a wrapper to pass along arbitrary args:
a, b, c, d = some_bbox

def on_click(event, a, b, c, d):
    print(event.x, event.y, a, b, c, d)
    # do the rest of your processing

some_widget.bind("<Button-1>", lambda event, a=a, b=b, c=c, d=d: on_click(event, a, b, c, d))
Use either the global or nonlocal keyword to specify variables to take from the outer scope:
a, b, c, d = some_bbox

def on_click(event):
    # use global if a, b, c, and d exist at the module level
    global a, b, c, d
    # use nonlocal if a, b, c, and d exist within the scope of another function
    # nonlocal a, b, c, d
    print(event.x, event.y, a, b, c, d)
    # do the rest of your processing

some_widget.bind("<Button-1>", on_click)
python 3.8 tkinter 8.6
I basically want to expand the current scope as you would a dictionary when calling a function.
I remember seeing something about this somewhere but I cannot remember where or how to do it.
Here is a simple example
def bar(a, b, c, d, e, f):
    pass

def foo(a, b, c, d, e, f):
    # Instead of doing this
    bar(a, b, c, d, e, f)
    # or
    bar(a=a, b=b, c=c, d=d, e=e, f=f)
    # I'd like to do this
    bar(**local_scope)
Am I imagining things or can this really be done?
You can use locals() (or globals() depending on what you need), which returns a dictionary mapping variable names to values.
bar(**locals())
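One caveat (a minimal sketch): locals() contains every local name, not just the parameters, so any helper variable defined before the call becomes an unexpected keyword argument:

def bar(a, b, c, d, e, f):
    print(a, b, c, d, e, f)

def foo(a, b, c, d, e, f):
    bar(**locals())  # fine: the six parameters are the only locals

def broken(a, b, c, d, e, f):
    g = 'extra'
    bar(**locals())  # TypeError: bar() got an unexpected keyword argument 'g'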
If foo were written like this, it could forward its keyword arguments directly:

def foo(**kwargs):
    bar(**kwargs)

Other than that, the two explicit calls you posted are better; expanding all locals is a bad idea.