I just want to be able to unpack the instance variables of class foo, for example:
x = foo("name", "999", "24", "0.222")
a, b, c, d = *x
a, b, c, d = [*x]
I am not sure which is the correct method for doing so when implementing my own __iter__ method; however, the latter is the one that has worked, with mixed "success". I say mixed because, with the code presented, unpacking appears to alter the original instance object x, such that it is no longer valid.
class foo:
    def __init__(self, a, b, c, d):
        self.a = a
        self.b = b
        self.c = c
        self.d = d

    def __iter__(self):
        return iter([a, b, c, d])
I have read the myriad posts on this site regarding __iter__, __next__, generators, etc., as well as a Python book and docs.python.org, and I seem unable to figure out what I am not understanding. I've gathered that __iter__ needs to return an iterator (which can just be self, though I am not sure how that works for what I want). I've also tried various ways of implementing __next__ and of iterating over vars(foo).items(), either by casting to a list or as a dictionary, with no success.
I don't believe this is a duplicate post, on account that the only similar questions I've seen present a single list attribute or use a range of numbers instead of four non-container variables.
If you want the instance's variables, you should access them via the self. prefix:
def __iter__(self):
    return iter([self.a, self.b, self.c, self.d])
with this change,
a, b, c, d = list(x)
will get you the variables (plain a, b, c, d = x works too, since x is now iterable).
You could go the riskier route of using vars(x) or x.__dict__, sorting the items by attribute name (which is also what makes this approach limited: the attributes are not stored in any guaranteed order), and extracting the second element of each tuple. But I would say the iterator is definitely better.
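For completeness, that riskier route might look like this (a sketch; it only works because the attribute names a, b, c, d happen to sort into the desired order):
a, b, c, d = [value for name, value in sorted(vars(x).items())]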
You can store the arguments in an attribute (self.e below) or return them on function call:
class foo:
    def __init__(self, *args):
        self.a, self.b, self.c, self.d = self.e = args

    def __call__(self):
        return self.e
x = foo("name", "999", "24", "0.222")
a, b, c, d = x.e
# or
a, b, c, d = x()
I have a class & method, each with several arguments: my_class(a,b,c).my_method(d,e,f), and I'd like to be able to change only a single argument while holding the others constant.
Constantly copy-pasting the other, constant arguments seems bad, so I'd like to create a new object wrapper_fct where I reference my_class but only provide the one argument I want to change, b, without always having to specify the remaining arguments. What would wrapper_fct() look like?
For example, wrapper_fct(my_class, b1) would return my_class(a,b1,c).my_method(d,e,f), wrapper_fct(my_class, b2) would return my_class(a,b2,c).my_method(d,e,f).
Here's an example in practice:
Loop through just the variable b and evaluate several classes/methods for each new value of b, appending the results to lists.
I can currently do this in a for loop:
mylist1 = []  # init lists (append results here)
mylist2 = []
mylist3 = []
for b in [1,2,3,4,5]:
    mylist1.append( my_class1(a,b,c).my_method(d,e,f) )
    mylist2.append( my_class2(a,b,c).my_method(d,e,f) )
    mylist3.append( my_class3(a,b,c).my_method(d,e,f) )
    ...
But it seems better to create a function loop_through_B() and use wrapper_fct(my_class, b) as specified above. I'm not sure if it's the ideal solution, but maybe something like:
def loop_through_B(input_class, b_values=[1,2,3,4,5]):
    mylist = []
    for b in b_values:
        mylist.append( wrapper_fct(input_class, b) )
    return mylist
loop_through_B(my_class1) # would I also have to specify the method here as well?
loop_through_B(my_class2)
loop_through_B(my_class3)
Extra Question: how would I add the ability to vary method arguments, or even multiple class & method arguments?
After @chepner pointed me in the right direction, I think the best solution is to use a lambda function:
wrapper_fct = lambda b: my_class1(a,b,c).my_method(d,e,f)
In this case, I can vary b as much as I want while holding the class arguments a,c, and method arguments d,e,f constant. Note that with lambda functions, I can also vary the method arguments and/or the class arguments. For example:
wrapper_fct_multiple = lambda b, e: my_class1(a,b,c).my_method(d,e,f)
It is also possible to do this with functools.partial, but it's not obvious to me how I would specify both class & method arguments with functools.
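For reference, one way to make functools.partial work is to route everything through a plain helper function and fix every argument except b (call_method and the constant values here are illustrative, not from the thread):

from functools import partial

def call_method(cls, a, b, c, d, e, f):
    # instantiate the class and immediately invoke the method
    return cls(a, b, c).my_method(d, e, f)

# assumes a, c, d, e, f are already defined constants in scope
wrapper_fct = partial(call_method, my_class1, a, c=c, d=d, e=e, f=f)
# wrapper_fct(b) is now equivalent to my_class1(a, b, c).my_method(d, e, f)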
Anyway, here is the solution implementation using lambda:
# define the "wrapper function" outside the loop
wrapper_fct = lambda b: my_class1(a,b,c).my_method(d,e,f)
# define the function I want to use to loop through B:
def loop_through_B(class_wrapper, b_values):
    mylist = []
    for b in b_values:
        mylist.append( class_wrapper(b) )
    return mylist
# run:
loop_through_B(wrapper_fct, b_values=[1,2,3,4,5])
# Can make additional wrapper_fct2, wrapper_fct3, for my_class2, my_class3 ...
You can pass the method a dictionary of arguments, and change what the method sees by selectively updating it when calling the method.
Here's what I mean:
class MyClass:
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c

    def my_method(self, kwargs):
        return sum(kwargs.values())

    def __repr__(self):
        classname = type(self).__name__
        args = ', '.join(f'{v!r}' for v in (self.a, self.b, self.c))
        return f'{classname}({args})'
instance = MyClass('a','b','c')
print(instance) # -> MyClass('a', 'b', 'c')
kwargs = dict(d=1, e=2, f=3)
print(instance.my_method(kwargs)) # -> 6
print(instance.my_method(dict(kwargs, e=38))) # -> 42
I am writing a small library and I want to provide users with two approaches to the same functionality: an instance method and a static method. Here is a simplified example:
class ClassTimesAdd(object):
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def TimesAdd(self, c):
        return self.a * self.b + c

    @staticmethod
    def TimesAdd(a, b, c):
        return a * b + c
print(ClassTimesAdd.TimesAdd(1, 3, 7))
ins = ClassTimesAdd(2, 5)
print(ins.TimesAdd(7))
You can see that the earlier function is overwritten and only the last definition is valid. I'm wondering whether there is some simple method I can use to make both approaches work.
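One possible approach (a sketch of my own, not from the thread) is a small descriptor that dispatches on whether the attribute is looked up on the class or on an instance; the name hybridmethod is invented here:

import functools

class hybridmethod(object):
    # Dispatch to one function for class-level access, another for instances.
    def __init__(self, fclass, finstance=None):
        self.fclass = fclass
        self.finstance = finstance

    def instancemethod(self, f):
        # register the instance-level implementation
        self.finstance = f
        return self

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self.fclass                     # looked up on the class
        return functools.partial(self.finstance, obj)  # bound to the instance

class ClassTimesAdd(object):
    def __init__(self, a, b):
        self.a = a
        self.b = b

    @hybridmethod
    def TimesAdd(a, b, c):
        return a * b + c

    @TimesAdd.instancemethod
    def TimesAdd(self, c):
        return self.a * self.b + c

print(ClassTimesAdd.TimesAdd(1, 3, 7))  # 10
print(ClassTimesAdd(2, 5).TimesAdd(7))  # 17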
I have this code, showing a classic diamond pattern:
class A:
    def __init__( self, x ):
        print( "A:" + x )

class B( A ):
    def __init__( self, x ):
        print( "B:" + x )
        super().__init__( "b" )

class C( A ):
    def __init__( self, x ):
        print( "C:" + x )
        super().__init__( "c" )

class D( B, C ):
    def __init__( self ):
        super().__init__( "d" )

d = D()
The output is:
B:d
C:b
A:c
B:d makes sense, since D derives from B.
The A:c I almost get, though I could equally see A:b.
However, the C:b bit doesn't make sense: C does not derive from B.
Could someone explain?
Questions such as this one unfortunately do not address the constructor parameters.
Python uses the C3 linearization algorithm to establish the method resolution order, which is the same order that super delegates in.
Basically, the algorithm keeps lists for every class containing that class and every class it inherits from, for all classes that the class in question inherits from. It then constructs an ordering of classes by taking classes that aren't inherited by any unexamined classes one by one, until it reaches the root, object. Below, I use O for object for brevity:
L(O) = [O]
L(A) = [A] + merge(L(O), [O]) = [A, O]
L(B) = [B] + merge(L(A), [A]) = [B] + merge([A, O], [A]) = [B, A] + merge([O])
= [B, A, O]
L(C) = [C] + merge(L(A), [A]) = [C] + merge([A, O], [A]) = [C, A] + merge([O])
= [C, A, O]
L(D) = [D] + merge(L(B), L(C), [B, C]) = [D] + merge([B, A, O], [C, A, O], [B, C])
= [D, B] + merge([A, O], [C, A, O], [C]) = [D, B, C] + merge([A, O], [A, O])
= [D, B, C, A, O]
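As an illustration, the merge step can be sketched in a few lines of Python (a naive version, not CPython's actual implementation):

def c3_merge(*seqs):
    # Repeatedly take the first head that does not appear in the tail
    # of any remaining sequence; fail if no such head exists.
    seqs = [list(s) for s in seqs]
    result = []
    while any(seqs):
        for seq in seqs:
            if not seq:
                continue
            head = seq[0]
            if not any(head in s[1:] for s in seqs):
                break
        else:
            raise TypeError('inconsistent hierarchy')
        result.append(head)
        for s in seqs:
            if s and s[0] == head:
                del s[0]
    return result

print(c3_merge(['B', 'A', 'O'], ['C', 'A', 'O'], ['B', 'C']))
# ['B', 'C', 'A', 'O'], so L(D) = [D] + that = [D, B, C, A, O]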
Classes in Python are dynamically composed - that includes inheritance.
The C:b output does not imply that B magically inherits from C. If you instantiate either B or C on its own, neither knows about the other.
>>> B('root')
B:root
A:b
However, D does know about both B and C:
class D(B,C):
    ...
There are a lot of technicalities to this. However, it basically comes down to two rules:
Direct base classes are resolved in the order they appear.
B comes before C.
Recursive base classes are resolved so that they are not duplicated.
A base class of both B and C must follow both.
For the class D, that means the base classes resolve as B->C->A! C has sneaked in between B and A - but only for class D, not for class B.
Note that there is actually another class involved: all classes derive from object by default.
>>> D.__mro__
(__main__.D, __main__.B, __main__.C, __main__.A, object)
You have already written A knowing that there is no base class to take its parameters. However, neither B nor C can assume this: each expects to derive from an A object. Subclassing does imply that both B and C are valid A-objects as well, though!
It is therefore valid for both B and C to precede A, since the two are subclasses of A. B->C->A->object does not break B's expectation that its superclass is of type A.
With all other combinations, one ends up with a subclass preceding nothing (invalid) or object preceding something (invalid). That rules out the depth-first resolution B->A->object->C and the duplicating B->A->object->C->A->object.
This method resolution order is practical to enable mixins: classes that rely on other classes to define how methods are resolved.
There is a nice example of how a logger for dictionary access can accept both dict and OrderedDict.
import collections
import logging

# basic Logger working on ``dict``
class LoggingDict(dict):
    def __setitem__(self, key, value):
        logging.info('Setting %r to %r' % (key, value))
        super().__setitem__(key, value)

# mixin of different ``dict`` subclass
class LoggingOD(LoggingDict, collections.OrderedDict):
    pass
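A quick usage sketch: because LoggingOD's MRO is LoggingOD -> LoggingDict -> OrderedDict -> dict -> object, the super().__setitem__ call in LoggingDict lands on OrderedDict rather than going straight to dict:

logging.basicConfig(level=logging.INFO)
ld = LoggingOD()
ld['answer'] = 42   # logged by LoggingDict, stored by OrderedDict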
You can always check the method resolution order that any class should have:
>>> D.mro()
[__main__.D, __main__.B, __main__.C, __main__.A, object]
As you can see, if everybody is doing the right thing (i.e. calling super), the MRO will be 1st parent, 2nd parent, 1st parent's parent and so on...
You can just think of depth-first, then left-to-right, to find the order (the algorithm changed in Python 2.3 to C3 linearization, but the outcome is usually the same).
In this case B and C have the same parent A, and A doesn't call super.
I have a class which contains data as attributes and which has a method to return a tuple containing these attributes:
class myclass(object):
    def __init__(self,a,b,c):
        self.a = a
        self.b = b
        self.c = c

    def tuple(self):
        return (self.a, self.b, self.c)
I use this class essentially as a tuple whose items (attributes) can be modified/read through their attribute names. Now I would like to create objects of this class that would be constants with pre-defined attribute values. I could then assign such a constant object to a variable/mutable object, thereby initializing the variable object's attributes to match the constant object, while at the same time retaining the ability to modify the attributes' values. For example, I would like to do this:
constant_object = myclass(1,2,3)
variable_object = constant_object
variable_object.a = 999
Now of course this doesn't work in python, so I am wondering what is the best way to get this kind of functionality?
Now I would like to create objects of this class, which would be constants and have pre-defined attribute values, which I could then assign to a variable/mutable object, thereby initializing this variable object's attributes to match the constant object,
Well, you can't have that. Assignment in Python doesn't initialize anything. It doesn't copy or create anything. All it does is give a new name to the existing value.
If you want to initialize an object, the way to do that in Python is to call the constructor.
So, with your existing code:
new_object = myclass(old_object.a, old_object.b, old_object.c)
If you look at most built-in and stdlib classes, it's a lot more convenient. For example:
a = set([1, 2, 3])
b = set(a)
How do they do that? Simple. Just define an __init__ method that can be called with an existing instance. (In the case of set, this comes for free, because a set can be initialized with any iterable, and sets are iterable.)
If you don't want to give up your existing design, you're going to need a pretty clumsy __init__, but it's at least doable. Maybe this:
_sentinel = object()

def __init__(self, myclass_or_a, b=_sentinel, c=_sentinel):
    if isinstance(myclass_or_a, myclass):
        self.a, self.b, self.c = myclass_or_a.a, myclass_or_a.b, myclass_or_a.c
    else:
        self.a, self.b, self.c = myclass_or_a, b, c
… plus some error handling to check that b is _sentinel in the first case and that it isn't in the other case.
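Alternatively (a sketch of mine, not part of the original answer), a classmethod copy-constructor avoids overloading __init__ at all:

class myclass(object):
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c

    @classmethod
    def from_instance(cls, other):
        # build a new, independent object from an existing one
        return cls(other.a, other.b, other.c)

variable_object = myclass.from_instance(constant_object)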
So, however you do it:
constant_object = myclass(1,2,3)
variable_object = myclass(constant_object)
variable_object.a = 999
import copy
class myclass(object):
    def __init__(self,a,b,c):
        self.a = a
        self.b = b
        self.c = c

    def tuple(self):
        return (self.a, self.b, self.c)
constant_object = myclass(1,2,3)
variable_object = copy.deepcopy(constant_object)
variable_object.a = 999
print constant_object.a
print variable_object.a
Output:
1
999
Deepcopying is not entirely necessary in this case, because of the way you've set up your tuple method:
class myclass(object):
    def __init__(self,a,b,c):
        self.a = a
        self.b = b
        self.c = c

    def tuple(self):
        return (self.a, self.b, self.c)
constant_object = myclass(1,2,3)
variable_object = myclass(*constant_object.tuple())
variable_object.a = 999
>>> constant_object.a
1
>>> variable_object.a
999
Usually (as others have suggested), you'd want to deepcopy. This creates a brand-new object with no ties to the object being copied. However, given that you are using only ints, deepcopy is overkill; you're better off doing a shallow copy. As a matter of fact, it might even be faster to call the class constructor on the parameters of the object you already have, seeing as those parameters are ints. This is why I suggested the above code.
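For reference, the shallow-copy version mentioned above would just be (a sketch using the copy module):

import copy

variable_object = copy.copy(constant_object)  # shallow copy: fine for int attributes
variable_object.a = 999                       # constant_object.a is still 1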
Class hierarchies and constructors are related. Parameters from a child class need to be passed to their parent.
So, in Python, we end up with something like this:
class Parent(object):
    def __init__(self, a, b, c, ka=None, kb=None, kc=None):
        # do something with a, b, c, ka, kb, kc
        pass

class Child(Parent):
    def __init__(self, a, b, c, d, e, f, ka=None, kb=None, kc=None, kd=None, ke=None, kf=None):
        super(Child, self).__init__(a, b, c, ka=ka, kb=kb, kc=kc)
        # do something with d, e, f, kd, ke, kf
Imagine this with a dozen child classes and lots of parameters. Adding new parameters becomes very tedious.
Of course one can dispense with named parameters completely and use *args and **kwargs, but that makes the method declarations ambiguous.
Is there a pattern for elegantly dealing with this in Python (2.6)?
By "elegantly" I mean I would like to reduce the number of times the parameters appear. a, b, c, ka, kb, kc all appear 3 times: in the Child constructor, in the super() call to Parent, and in the Parent constructor.
Ideally, I'd like to specify the parameters for Parent's init once, and in Child's init only specify the additional parameters.
I'd like to do something like this:
class Parent(object):
    def __init__(self, a, b, c, ka=None, kb=None, kc=None):
        print 'Parent: ', a, b, c, ka, kb, kc

class Child(Parent):
    def __init__(self, d, e, f, kd='d', ke='e', kf='f', *args, **kwargs):
        super(Child, self).__init__(*args, **kwargs)
        print 'Child: ', d, e, f, kd, ke, kf
x = Child(1, 2, 3, 4, 5, 6, ka='a', kb='b', kc='c', kd='d', ke='e', kf='f')
This unfortunately doesn't work, since 4, 5, 6 end up assigned to kd, ke, kf.
Is there some elegant python pattern for accomplishing the above?
"dozen child classes and lots of parameters" sounds like a problem irrespective of parameter naming.
I suspect that a little refactoring can peel out some Strategy objects that would simplify this hierarchy and make the super-complex constructors go away.
Well, the only solution I can see is using a mixture of explicitly listed parameters as well as *args and **kwargs, as such:
class Parent(object):
    def __init__(self, a, b, c, ka=None, kb=None, kc=None):
        pass

class Child(Parent):
    def __init__(self, d, e, f, *args, kd=None, ke=None, kf=None, **kwargs):
        Parent.__init__(self, *args, **kwargs)
        pass
This way, you can see which parameters are required by each of the classes without having to re-type them. (Note that keyword-only parameters after *args, as used above, are Python 3 syntax.)
One thing to note is that you lose the desired ordering (a, b, c, d, e, f): it becomes (d, e, f, a, b, c), and I'm not sure if there's a way to have the *args before the other non-named parameters.
I try to group the parameters into their own objects. E.g., instead of passing sourceDirectory, targetDirectory, temporaryDirectory, serverName, serverPort, I'd have DirectoryContext and ServerContext objects. If the context objects start having more behavior or logic, that might lead to the strategy objects mentioned here.
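A minimal sketch of that idea (all class and attribute names here are invented for illustration):

class DirectoryContext(object):
    def __init__(self, source, target, temporary):
        self.source = source
        self.target = target
        self.temporary = temporary

class ServerContext(object):
    def __init__(self, name, port):
        self.name = name
        self.port = port

class Parent(object):
    def __init__(self, dirs, server):
        self.dirs = dirs
        self.server = server

class Child(Parent):
    # only the new parameter appears here; the contexts pass through untouched
    def __init__(self, dirs, server, retries):
        super(Child, self).__init__(dirs, server)
        self.retries = retries

child = Child(DirectoryContext('/src', '/dst', '/tmp'),
              ServerContext('example.com', 8080),
              retries=3)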