Using __getattribute__ or __getattr__ to call methods in Python

I am trying to create a subclass which acts as a list of custom classes. However, I want the list to inherit the methods and attributes of the parent class and return a sum of the quantities of each item. I am attempting to do this using the __getattribute__ method, but I cannot figure out how to pass arguments to callable attributes. The highly simplified code below should explain more clearly.
class Product:
    def __init__(self, price, quantity):
        self.price = price
        self.quantity = quantity
    def get_total_price(self, tax_rate):
        return self.price*self.quantity*(1+tax_rate)

class Package(Product, list):
    def __init__(self, *args):
        list.__init__(self, args)
    def __getattribute__(self, *args):
        name = args[0]
        # the only argument passed is the name...
        if name in dir(self[0]):
            tot = 0
            for product in self:
                tot += getattr(product, name)  # (need some way to pass the argument)
            return tot
        else:
            list.__getattribute__(self, *args)
p1 = Product(2,4)
p2 = Product(1,6)
print p1.get_total_price(0.1) # returns 8.8
print p2.get_total_price(0.1) # returns 6.6
pkg = Package(p1,p2)
print pkg.get_total_price(0.1) #desired output is 15.4.
In reality I have many methods of the parent class which must be callable. I realize that I could manually override each one in the list-like subclass, but I would like to avoid that, since more methods may be added to the parent class in the future and I would like a dynamic system. Any advice or suggestions are appreciated. Thanks!

This code is awful and really not Pythonic at all. There's no way for you to pass extra arguments through __getattribute__, so you shouldn't try to do any implicit magic like this. It would be better written like this:
class Product(object):
    def __init__(self, price, quantity):
        self.price = price
        self.quantity = quantity

    def get_total_price(self, tax_rate):
        return self.price * self.quantity * (1 + tax_rate)

class Package(object):
    def __init__(self, *products):
        self.products = products

    def get_total_price(self, tax_rate):
        return sum(P.get_total_price(tax_rate) for P in self.products)
If you need to, you can make the wrapper more generic:
class Package(object):
    def __init__(self, *products):
        self.products = products

    def sum_with(self, method, *args):
        return sum(getattr(P, method)(*args) for P in self.products)

    def get_total_price(self, tax_rate):
        return self.sum_with('get_total_price', tax_rate)

    def another_method(self, foo, bar):
        return self.sum_with('another_method', foo, bar)

    # or just use sum_with directly
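With the numbers from the question, a quick usage sketch of this generic version (the result is ~15.4, modulo the usual float rounding):

p1 = Product(2, 4)
p2 = Product(1, 6)
pkg = Package(p1, p2)
print(pkg.get_total_price(0.1))              # ~15.4
print(pkg.sum_with('get_total_price', 0.1))  # same result via the generic helper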
Explicit is better than implicit. Also composition is usually better than inheritance.

You have a few points of confusion here:
1) __getattribute__ intercepts all attribute access, which isn't what you want. You only want your code to step in if a real attribute doesn't exist, so you want __getattr__.
2) Your __getattribute__ is calling the method on the list elements, but it shouldn't be doing the real work; it should only return a callable. Remember, in Python, x.m(a) is really two steps: first get x.m, then call that thing with an argument of a. Your function should only be doing the first step, not both.
3) I'm surprised that all the methods you need to override should be summed. Are there really so many methods, all of which should be summed, that this is worthwhile?
This code works to do what you want, but you might want to consider more explicit approaches, as others suggest:
class Product:
    def __init__(self, price, quantity):
        self.price = price
        self.quantity = quantity
    def get_total_price(self, tax_rate):
        return self.price*self.quantity*(1+tax_rate)

class Package(list):
    def __init__(self, *args):
        list.__init__(self, args)
    def __getattr__(self, name):
        if hasattr(self[0], name):
            def fn(*args):
                tot = 0
                for product in self:
                    tot += getattr(product, name)(*args)
                return tot
            return fn
        else:
            raise AttributeError
Things to note in this code: I've made Package not derive from Product, because all of its Product-ness comes from delegating to the elements of the list. And don't use "name in dir(obj)" to decide whether a thing has an attribute; use hasattr.
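A quick check of this version against the question's example (again modulo float rounding):

p1 = Product(2, 4)
p2 = Product(1, 6)
pkg = Package(p1, p2)
print(pkg.get_total_price(0.1))  # ~15.4, i.e. 8.8 + 6.6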

To answer your immediate question, you call a function or method retrieved using getattr() the same way you call any function: by putting the arguments, if any, in parentheses following the reference to the function. The fact that the reference to the function comes from getattr() rather than an attribute access doesn't make any difference.
func = getattr(product, name)
result = func(arg)
These can be combined and the temporary variable func eliminated:
getattr(product, name)(arg)
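As a concrete sketch using the Product class from the question:

p = Product(2, 4)
method = getattr(p, 'get_total_price')
print(method(0.1))                         # 8.8
print(getattr(p, 'get_total_price')(0.1))  # same thing in one expression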

In addition to what Cat Plus Plus said, if you really want to invoke magic anyway (please don't! There are unbelievably many disturbing surprises awaiting you with such an approach in practice), you could test for the presence of the attribute in the Product class, and create a sum_with wrapper dynamically:
def __getattribute__(self, attr):
    # the parentheses around the lambda matter: without them the conditional
    # expression would be parsed as part of the lambda body
    return (
        (lambda *args: self.sum_with(attr, *args))
        if hasattr(Product, attr)
        else super(Package, self).__getattribute__(attr)
    )
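For completeness, a sketch of how this would be exercised, assuming the __getattribute__ above is added to the sum_with version of Package shown earlier:

p1 = Product(2, 4)
p2 = Product(1, 6)
pkg = Package(p1, p2)
print(pkg.get_total_price(0.1))  # ~15.4, dispatched dynamically through sum_with
print(pkg.products)              # non-Product attributes fall through to normal lookup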


Is there a __repr__() like method for a python class?

I'm solving a funny problem that requires defining a class that can be called like this:
Chain(2)(3)(4)
And it should print out the product of all the arguments.
I ended up with a solution like this:
class Chain():
    calc = 1
    def __new__(cls, a=None):
        if a:
            cls.calc = cls.calc*a
            return cls
        else:
            return cls.calc
This works fine and calc ends up equal to 24, but I get the wrong representation: <class '__main__.Chain'>.
Is there any way to get a representation showing the multiplication result instead of the class name, like what __repr__ gives us for instances?
Note: the number of chained calls is not fixed and may differ each time.
First of all, to answer the direct question from your title:
Like everything in Python, classes are objects too. And just as classes define how their instances are created (what attributes and methods they will have), metaclasses define how classes are created. So let's create a metaclass:
class Meta(type):
    def __repr__(self):
        return str(self.calc)

class Chain(metaclass=Meta):
    calc = 1
    def __new__(cls, a=None):
        if a:
            cls.calc = cls.calc*a
            return cls
        else:
            return cls.calc

print(Chain(2)(3)(4))
This will print, as expected, 24.
A few notes:
Currently Meta simply accesses a calc attribute blindly. A check that it actually exists could be done but the code above was just to make the point.
The way your class is implemented, you can just do Chain(2)(3)(4)() and you will get the same result (that's based on the else part of your __new__).
That's a weird way to implement such behavior: you are returning the class itself (or an int...) from the __new__ method, which should return a new instance of the class. This is problematic design. A classic way to do what you want is to make the objects callable:
class Chain():
    def __init__(self, a=1):
        self.calc = a

    def __call__(self, a=None):
        if a:
            return self.__class__(self.calc * a)
        else:
            return self.calc

    def __repr__(self):
        return str(self.calc)

print(Chain(2)(3)(4))
This also removes the need for the metaclass trick in the first place: each call in the chain now returns a new object rather than the class itself, so you just implement the class's __repr__.

Override class method while keeping the original class method call

I'd like to modify a class method to do some things in addition to the original method call. I have a toy example posted below.
Example:
class str2(str):
    def __init__(self, val):
        self.val = val
    def upper(self):
        print('converting to upper')
        return self.upper()

x = str2('a')
print(x.upper())
This does what I should have expected and gives a maximum recursion depth error. Is it possible to modify the upper method so that it prints some text before calling the actual str.upper method, while ideally keeping the name the same?
I've been wondering if this is the situation to use a decorator, but I am not familiar enough with them to have a clear idea on how to do this.
The solution would be:
class str2(str):
    def __init__(self, val):
        self.val = val
    def upper(self):
        print('converting to upper')
        return str.upper(self.val)

x = str2('a')
print(x.upper())
Notes on your code:
in your upper method you print and then call the same method again, so it keeps recursing and printing
eventually Python raises an error because the maximum recursion depth is exceeded
Notes on my code:
it uses the method descriptor str.upper (<method 'upper' of 'str' objects>) and passes self.val to it explicitly
that way the call goes to the real str implementation instead of back to the str2 override
I am also kicking myself for not having thought of this:
class str2(str):
    def __init__(self, val):
        self.val = val
    def upper(self):
        print('converting to upper')
        return self.val.upper()

x = str2('a')
print(x.upper())
In the method str2.upper you are calling str2.upper which in turn calls str2.upper which... You see where this is going.
What you probably intended to do was to call str.upper from str2.upper. This is done by using super. Calling super() returns an object which delegates method calls to the parent classes.
class str2(str):
    def upper(self):
        print('converting to upper')
        return super().upper()
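A quick check (Python 3 syntax; no __init__ is needed, because str handles construction):

x = str2('a')
print(x.upper())  # prints 'converting to upper', then A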
Research "Mapping" and "decorators" - I think there's an easier/more pythonic way to do what you're trying to do.
As @Schalton stated, there is a way to do it without having to inherit from str, by using decorators. Consider this snippet:
def add_text(func):
    def wrapper(*args, **kwargs):
        print('converting to upper')
        return func(*args, **kwargs)
    return wrapper

class str2:
    def __init__(self, val):
        self.val = val

    @add_text
    def upper(self):
        return self.val.upper()

instance = str2('a')
print(instance.upper())
The great advantage of this is that the wrapper is reusable: if you have another class that you want to modify with the exact same behavior, you can just add the decorator and don't have to redo all the work. Removing the additional functionality also becomes easier.
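For instance, a second (purely hypothetical) class can reuse the same add_text decorator unchanged; only the method it wraps differs:

class NameTag:
    def __init__(self, owner):
        self.owner = owner

    @add_text  # same decorator, no extra work
    def upper(self):
        return self.owner.upper()

print(NameTag('alice').upper())  # prints 'converting to upper', then ALICE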

Classes returned from class factory have different IDs

I have a class factory method that is used to instantiate an object. When multiple objects are created through this method, I want to be able to compare the classes of the objects. When using isinstance, the comparison is False, as can be seen in the simple example below. Also, running id(a.__class__) and id(b.__class__) gives different ids.
Is there a simple way of achieving this? I know that this does not exactly conform to duck-typing, however this is the easiest solution for the program I am writing.
def factory():
    class MyClass(object):
        def compare(self, other):
            print('Comparison Result: {}'.format(isinstance(other, self.__class__)))
    return MyClass()

a = factory()
b = factory()
print(a.compare(b))
The reason is that MyClass is created dynamically every time you run factory. If you print(id(MyClass)) inside factory you get different results:
>>> a = factory()
140465711359728
>>> b = factory()
140465712488632
This is because they are actually different classes, dynamically created and locally scoped at the time of the call.
One way to fix this is to return (or yield) multiple instances:
>>> def factory(n):
        class MyClass(object):
            def compare(self, other):
                print('Comparison Result: {}'.format(isinstance(other, self.__class__)))
        for i in range(n):
            yield MyClass()

>>> a, b = factory(2)
>>> a.compare(b)
Comparison Result: True
This is one possible implementation.
EDIT: If the instances have to be created at different times, the above solution doesn't apply. One way around that is to define a superclass outside the factory, then subclass it inside the factory function:
>>> class MyClass(object):
        pass

>>> def factory():
        class SubClass(MyClass):
            def compare(self, other):
                print('Comparison Result: {}'.format(isinstance(other, self.__class__)))
        return SubClass()
However, this alone does not work, because SubClass is still a different class on every call. So you need to change your comparison method to check against the shared superclass:
isinstance(other, self.__class__.__mro__[1])
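Putting the two pieces together, a sketch of the adjusted factory (reusing the MyClass base defined above):

def factory():
    class SubClass(MyClass):
        def compare(self, other):
            print('Comparison Result: {}'.format(
                isinstance(other, self.__class__.__mro__[1])))
    return SubClass()

a = factory()
b = factory()
a.compare(b)  # Comparison Result: True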
If your class definition is inside the factory function, then each instance you create will be an instance of a separate class. That's because the class definition is a statement, executed on every call just like any other statement. The names and contents of the different classes will be the same, but their identities will be distinct.
I don't think there's any simple way to get around that without changing the structure of your code in some way. You've said that your actual factory function is a method of a class, which suggests that you might be able to move the class definition somewhere else so that it can be shared by multiple calls to the factory method. Depending on what information you expect the inner class to use from the outer class, you might define it at class level (so there'd be only one class definition used everywhere), or you could define it in another method, like __init__ (which would create a new inner class for every instance of the outer class).
Here's what that last approach might look like:
class Outer(object):
    def __init__(self):
        class Inner(object):
            def compare(self, other):
                print('Comparison Result: {}'.format(isinstance(other, self.__class__)))
        self.Inner = Inner

    def factory(self):
        return self.Inner()

f = Outer()
a = f.factory()
b = f.factory()
print(a.compare(b))  # True

g = Outer()  # create another instance of the outer class
c = g.factory()
print(a.compare(c))  # False
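For comparison, a sketch of the other option mentioned above (redefining Outer here just for the sketch): Inner is defined once at class level, so instances from every Outer object share the same inner class.

class Outer(object):
    class Inner(object):
        def compare(self, other):
            print('Comparison Result: {}'.format(isinstance(other, self.__class__)))

    def factory(self):
        return self.Inner()

a = Outer().factory()
b = Outer().factory()
a.compare(b)  # Comparison Result: True, even across different Outer instances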
It's not entirely clear what you're asking. It seems to me you want a simpler version of the code you already posted. If that's incorrect, this answer is not relevant.
You can create classes dynamically by explicitly constructing a new instance of the type type.
def compare(self, other):
    ...

def factory():
    return type("MyClass", (object,), {'compare': compare})()
type takes three arguments: the name, the base classes, and a dict of the class attributes. So this will behave the same way as your previous code.
Working off the answer from @rassar, and adding some more detail to represent the actual implementation (e.g. the factory method existing in a parent class), I have come up with the working example below.
From @rassar's answer, I realised that the class is dynamically created each time, so defining it within the parent object (or even above that) means it will be the same class definition each time it is called.
class Parent(object):
    class MyClass(object):
        def __init__(self, parent):
            self.parent = parent

        def compare(self, other):
            print('Comparison Result: {}'.format(isinstance(other, self.__class__)))

    def factory(self):
        return self.MyClass(self)

a = Parent()
b = a.factory()
c = a.factory()
b.compare(c)
print(id(b.__class__))
print(id(c.__class__))

Python ignoring property setter in favor of property getter

I have created a class that has a property with a setter. There are 2 different ways to use this class, so the values of some of the object components may be set in a different order depending on the scenario (i.e. I don't want to set them during __init__). I have included a non-property with its own set function here to illustrate what is and isn't working.
class Dumbclass(object):
    def __init__(self):
        self.name = None
        self.__priority = None

    @property
    def priority(self):
        return self.__priority

    @priority.setter
    def priority(self, p):
        self.__priority = p

    def set_name(self, name):
        self.name = "dumb " + name

    def all_the_things(self, name, priority=100):
        self.set_name(name)
        self.priority(priority)
        print self.name
        print self.priority
When I run the following, it returns TypeError: 'NoneType' object is not callable. I investigated with pdb, and found that it was calling the getter instead of the setter at self.priority(priority).
if __name__ == '__main__':
    d1 = Dumbclass()
    d1.all_the_things("class 1")
    d2 = Dumbclass()
    d2.all_the_things("class 2", 4)
What's going on here?
Short Answer: Please change your line self.priority(priority) to self.priority = priority
Explanation: the setter is called only when you assign something to the attribute. In your code you are not doing an assignment; self.priority(priority) reads the property (the getter returns None) and then tries to call that value, which is why you get the TypeError.
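A minimal sketch of the difference (the Demo class and its attribute names are just for illustration):

class Demo(object):
    @property
    def priority(self):
        print('getter runs')
        return self._priority

    @priority.setter
    def priority(self, p):
        print('setter runs')
        self._priority = p

d = Demo()
d.priority = 4     # assignment -> setter runs
print(d.priority)  # plain access -> getter runs, prints 4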
Here are some nice references if you want to understand more:
Python descriptors
How does the @property decorator work?
Real Life Example
You are facing this issue because you are treating priority as a method in your all_the_things method. It is a property, so you assign to it like a plain attribute:
def all_the_things(self, name, priority=100):
    self.set_name(name)
    self.priority = priority

Why can't I have a varargs constructor and another constructor with fixed arguments?

Here is what I have so far:
class Die(object):
    def __init__(self, sides):
        self.sides = sides

    def roll(self):
        return random.randint(1, self.sides)

    def __add__(self, other):
        return Dice(self, other)

    def __unicode__(self):
        return "1d%d" % (self.sides)

    def __str__(self):
        return unicode(self).encode('utf-8')

class Dice(object):
    def __init__(self, num_dice, sides):
        self.die_list = [Die(sides)]*num_dice

    def __init__(self, *dice):
        self.die_list = dice

    def roll(self):
        return reduce(lambda x, y: x.roll() + y.roll(), self.die_list)
But when I try to do Dice(3,6) and subsequently call the roll method, it says it can't because 'int' object has no attribute 'roll'. That means it's going into the varargs constructor first. What can I do here to make this work, or is there another alternative?
As you observed in your question, the varargs constructor is being invoked. This is because the second definition of Dice.__init__ is overriding, not overloading, the first.
Python doesn't support method overloading, so you have at least two choices at hand.
Define only the varargs constructor. Inspect the length of the argument list and the types of the first few elements to determine what logic to run. Effectively, you would merge the two constructors into one.
Convert one of the constructors into a static factory method. For example, you could delete the first constructor, keep the varargs one, and then define a new factory method.
I prefer the second method, which allows you to cleanly separate your logic. You can also choose a more descriptive name for your factory method; from_n_sided_dice is more informative than just Dice:
@staticmethod
def from_n_sided_dice(num_dice, sides):
    # unpack the list so each Die is passed as a separate argument
    return Dice(*([Die(sides)] * num_dice))
Side note: Is this really what you want? [Die(sides)] * num_dice returns a list with multiple references to the same Die object. Rather, you might want [Die(sides) for _ in range(num_dice)].
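A quick illustration of that aliasing issue, using the Die class from the question:

dice = [Die(6)] * 3
print(dice[0] is dice[1])  # True: three references to a single Die
dice = [Die(6) for _ in range(3)]
print(dice[0] is dice[1])  # False: three independent Die objects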
EDIT: You can emulate method overloading (via dynamic dispatch, not static dispatch as you may be used to, but static types do not exist in Python) with function decorators. You may have to engineer your own solution to support *args and **kwargs, and having separate methods with more precise names is still often a better solution.
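As one possible sketch of that idea, functools.singledispatchmethod (Python 3.8+) can dispatch the constructor on the type of its first argument. This reuses the Die class from the question and assumes random is imported for Die.roll:

import random  # needed by Die.roll from the question
from functools import singledispatchmethod

class Dice(object):
    @singledispatchmethod
    def __init__(self, first, *rest):
        raise TypeError("unsupported argument type: %r" % type(first))

    @__init__.register(int)
    def _(self, num_dice, sides):
        self.die_list = [Die(sides) for _ in range(num_dice)]

    @__init__.register(Die)
    def _(self, *dice):
        self.die_list = list(dice)

    def roll(self):
        return sum(d.roll() for d in self.die_list)

print(Dice(3, 6).roll())            # three six-sided dice
print(Dice(Die(6), Die(8)).roll())  # explicit Die objects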
What you want to have is a single __init__ method, that is defined along these lines:
class Dice(object):
    def __init__(self, *args):
        if not isinstance(args[0], Die):
            # Dice(3, 6): three six-sided dice
            self.die_list = [Die(args[1]) for _ in range(args[0])]
        else:
            # Dice(Die(6), Die(6), ...): explicit Die objects
            self.die_list = args

    def roll(self):
        return sum(x.roll() for x in self.die_list)
