Override class method while keeping the original class method call - python

I'd like to modify a class method to do some things in addition to the original method call. I have a toy example posted below.
Example:
class str2(str):
    def __init__(self, val):
        self.val = val

    def upper(self):
        print('converting to upper')
        return self.upper()

x = str2('a')
print(x.upper())
This does what I should have expected and gives a maximum recursion depth error. Is it possible to modify the upper method so that it prints some text before calling the actual str.upper method, while ideally keeping the name the same?
I've been wondering if this is the situation to use a decorator, but I am not familiar enough with them to have a clear idea on how to do this.

The solution would be:
class str2(str):
    def __init__(self, val):
        self.val = val

    def upper(self):
        print('converting to upper')
        return str.upper(self.val)

x = str2('a')
print(x.upper())
About your code:
In your upper method you print the message and then call the same method again, so it recurses endlessly and keeps printing.
Python eventually gives up and raises the maximum recursion depth error.
About my code:
It uses the method descriptor str.upper (<method 'upper' of 'str' objects>) directly, so the call is not routed back through self.
That way it calls the real str implementation instead of your override.
I'm also kicking myself for not just writing:
class str2(str):
    def __init__(self, val):
        self.val = val

    def upper(self):
        print('converting to upper')
        return self.val.upper()

x = str2('a')
print(x.upper())

In the method str2.upper you are calling str2.upper which in turn calls str2.upper which... You see where this is going.
What you probably intended to do was to call str.upper from str2.upper. This is done by using super: calling super() returns a proxy object which delegates method calls to the parent classes.
class str2(str):
    def upper(self):
        print('converting to upper')
        return super().upper()
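A quick usage sketch, reusing the call from the question (expected output shown in comments):

x = str2('a')
print(x.upper())
# converting to upper
# A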

Research "Mapping" and "decorators" - I think there's an easier/more pythonic way to do what you're trying to do.

As @Schalton stated, there is a way to do it without having to inherit from str by using decorators. Consider this snippet:
def add_text(func):
    def wrapper(*args, **kwargs):
        print('converting to upper')
        return func(*args, **kwargs)
    return wrapper

class str2:
    def __init__(self, val):
        self.val = val

    @add_text
    def upper(self):
        return self.val.upper()

instance = str2('a')
print(instance.upper())
The great advantage of this is that the wrapper is reusable: if you have another class that you want to modify with the exact same behavior, you can just add the @decorator and don't have to redo all the work. Removing the additional functionality also gets easier.
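For example, a minimal sketch of that reuse, where name2 is a hypothetical second class and add_text is the decorator defined above:

class name2:
    def __init__(self, val):
        self.val = val

    @add_text  # the same wrapper, reused without changes
    def upper(self):
        return self.val.upper()

print(name2('bob').upper())  # prints the message, then BOB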

Related

Is there a __repr__() like method for a python class?

I'm solving a funny problem that requires to define a class that can be called like this:
Chain(2)(3)(4)
And it should print out the multiplication of all arguments.
I ended up with a solution like this:
class Chain():
    calc = 1

    def __new__(cls, a=None):
        if a:
            cls.calc = cls.calc*a
            return cls
        else:
            return cls.calc
This works fine and Chain.calc is equal to 24, but I get the wrong representation: <class '__main__.Chain'>.
Is there any way to get a representation of the multiplication instead of the class name, like what we have with __repr__ for objects?
Note: the number of chained calls is not limited and the arguments may differ on each call.
First of all to answer your direct question from the title:
Like everything else in Python, classes are objects too. And just like classes define how instances are created (what attributes and methods they will have), metaclasses define how classes are created. So let's create a metaclass:
class Meta(type):
    def __repr__(self):
        return str(self.calc)

class Chain(metaclass=Meta):
    calc = 1

    def __new__(cls, a=None):
        if a:
            cls.calc = cls.calc*a
            return cls
        else:
            return cls.calc

print(Chain(2)(3)(4))
This will print, as expected, 24.
A few notes:
Currently Meta simply accesses a calc attribute blindly. A check that it actually exists could be done (a sketch follows at the end of this answer); the code above was just to make the point.
The way your class is implemented, you can just do Chain(2)(3)(4)() and you will get the same result (that's based on the else part of your __new__).
That's a weird way to implement such behavior - you are returning the class itself (or an int...) from the __new__ method which should return a new object of this class. This is problematic design. A classic way to do what you want is by making the objects callable:
class Chain():
    def __init__(self, a=1):
        self.calc = a

    def __call__(self, a=None):
        if a:
            return self.__class__(self.calc * a)
        else:
            return self.calc

    def __repr__(self):
        return str(self.calc)

print(Chain(2)(3)(4))
This removes the need for the metaclass in the first place: each call in the chain returns a new instance rather than the class itself, so implementing the class's ordinary __repr__ is enough.
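As for the earlier note about Meta accessing calc blindly, a more defensive version might look like this; the fallback to the default class repr is my own choice, not part of the original answer:

class Meta(type):
    def __repr__(cls):
        # Fall back to the default class repr when there is no calc attribute.
        if not hasattr(cls, 'calc'):
            return super().__repr__()
        return str(cls.calc)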

Return a custom value when a class method is accessed as an attribute, but still allow for it to perform a computation when called?

Specifically, I would want MyClass.my_method to be used for lookup of a value in the class dictionary, but MyClass.my_method() to be a method that accepts arguments and performs a computation to update an attribute in MyClass and then returns MyClass with all its attributes (including the updated one).
I am thinking that this might be doable with Python's descriptors (maybe overriding __get__ or __call__), but I can't figure out how this would look. I understand that the behavior might be confusing, but I am interested if it is possible (and if there are any other major caveats).
I have seen that you can do something similar for classes and functions by overriding __repr__, but I can't find a similar way for a method within a class. My returned value will also not always be a string, which seems to prohibit the __repr__-based approaches mentioned in these two questions:
Possible to change a function's repr in python?
How to create a custom string representation for a class object?
Thank you Joel for the minimal implementation. I found that the remaining problem is the lack of initialization of the parent; since I did not find a generic way of initializing it, I check for attributes to detect the list/dict case and add the initialization values to the parent accordingly.
This addition to the code should make it work for lists/dicts:
def classFactory(parent, init_val, target):
    class modifierClass(parent):
        def __init__(self, init_val):
            super().__init__()
            dict_attr = getattr(parent, "update", None)
            list_attr = getattr(parent, "extend", None)
            if callable(dict_attr):    # parent is dict
                self.update(init_val)
            elif callable(list_attr):  # parent is list
                self.extend(init_val)
            self.target = target

        def __call__(self, *args):
            self.target.__init__(*args)

    return modifierClass(init_val)

class myClass:
    def __init__(self, init_val=''):
        self.method = classFactory(init_val.__class__, init_val, self)
Unfortunately, we need to handle each case separately, but this works as intended.
A slightly less verbose way to write the above is the following:
def classFactory(parent, init_val, target):
    class modifierClass(parent):
        def __init__(self, init_val):
            if isinstance(init_val, list):
                self.extend(init_val)
            elif isinstance(init_val, dict):
                self.update(init_val)
            self.target = target

        def __call__(self, *args):
            self.target.__init__(*args)

    return modifierClass(init_val)

class myClass:
    def __init__(self, init_val=''):
        self.method = classFactory(init_val.__class__, init_val, self)
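Either version can be exercised like this (a usage sketch of my own, not from the answer):

obj = myClass([1, 2, 3])
print(obj.method)    # [1, 2, 3]  -> attribute access shows the stored value
obj.method(42)       # calling it re-runs myClass.__init__ with the new value
print(obj.method)    # 42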
As jasonharper commented,
MyClass.my_method() works by looking up MyClass.my_method, and then attempting to call that object. So the result of MyClass.my_method cannot be a plain string, int, or other common data type [...]
The trouble comes specifically from reusing the same name for these two behaviors, which is very confusing, just as you said. So, don't do it.
But just for the interest of it, you could proxy the value of the property with an object that returns the original MyClass instance when called, use an actual setter to perform any computation you want, and also forward arbitrary attribute access to the proxied value.
class MyClass:
    _my_method = whatever

    @property
    def my_method(self):
        my_class = self

        class Proxy:
            def __init__(self, value):
                self.__proxied = value

            def __call__(self, value):
                my_class.my_method = value
                return my_class

            def __getattr__(self, name):
                return getattr(self.__proxied, name)

            def __str__(self):
                return str(self.__proxied)

            def __repr__(self):
                return repr(self.__proxied)

        return Proxy(self._my_method)

    @my_method.setter
    def my_method(self, value):
        # your computations
        self._my_method = value

a = MyClass()
b = a.my_method('do not do this at home')
a is b
# True
a.my_method.split(' ')
# ['do', 'not', 'do', 'this', 'at', 'home']
And today, duck typing will abuse you, forcing you to delegate all kinds of magic methods to the proxied value in the proxy class, until the poor codebase where you want to inject this is satisfied with how those values quack.
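For instance, here is a sketch of the kind of extra forwarding that tends to pile up; the particular dunder methods chosen are illustrative, not from the original answer. Magic methods are looked up on the type, so __getattr__ alone does not cover them:

class Proxy:
    def __init__(self, value):
        self.__proxied = value

    def __getattr__(self, name):
        return getattr(self.__proxied, name)

    # Dunder methods bypass __getattr__, so each one has to be forwarded by hand.
    def __eq__(self, other):
        return self.__proxied == other

    def __len__(self):
        return len(self.__proxied)

    def __add__(self, other):
        return self.__proxied + other

    def __iter__(self):
        return iter(self.__proxied)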
This is a minimal implementation of Guillherme's answer that updates the method instead of a separate modifiable parameter:
def classFactory(parent, init_val, target):
    class modifierClass(parent):
        def __init__(self, init_val):
            self.target = target

        def __call__(self, *args):
            self.target.__init__(*args)

    return modifierClass(init_val)

class myClass:
    def __init__(self, init_val=''):
        self.method = classFactory(init_val.__class__, init_val, self)
This and the original answer both work well for single values, but lists and dictionaries come back empty instead of holding the expected values, and I am not sure why, so help is appreciated here.
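A plausible cause, sketched for the list case: the minimal modifierClass above never hands init_val to the list (or dict) constructor, so the built-in part of the instance stays empty; that is exactly what the update/extend additions in the earlier answer address.

class modifierClass(list):
    def __init__(self, init_val):
        # init_val is never passed on to list.__init__, so the list stays empty.
        self.target = None

print(modifierClass([1, 2, 3]))  # prints [] because the values were dropped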

Classes returned from class factory have different IDs

I have a class factory method that is used to instantiate an object. When multiple objects are created through this method, I want to be able to compare the classes of the objects. When using isinstance, the comparison is False, as can be seen in the simple example below. Also, running id(a.__class__) and id(b.__class__) gives different ids.
Is there a simple way of achieving this? I know that this does not exactly conform to duck-typing, however this is the easiest solution for the program I am writing.
def factory():
    class MyClass(object):
        def compare(self, other):
            print('Comparison Result: {}'.format(isinstance(other, self.__class__)))
    return MyClass()

a = factory()
b = factory()
print(a.compare(b))
The reason is that MyClass is created dynamically every time you run factory. If you print(id(MyClass)) inside factory you get different results:
>>> a = factory()
140465711359728
>>> b = factory()
140465712488632
This is because they are actually different classes, dynamically created and locally scoped at the time of the call.
One way to fix this is to return (or yield) multiple instances:
>>> def factory(n):
...     class MyClass(object):
...         def compare(self, other):
...             print('Comparison Result: {}'.format(isinstance(other, self.__class__)))
...     for i in range(n):
...         yield MyClass()
>>> a, b = factory(2)
>>> a.compare(b)
Comparison Result: True
This is one possible implementation.
EDIT: If the instances are created dynamically, then the above solution is invalid. One way to do it is to create a superclass outside, then inside the factory function subclass from that superclass:
>>> class MyClass(object):
...     pass
>>> def factory():
...     class SubClass(MyClass):
...         def compare(self, other):
...             print('Comparison Result: {}'.format(isinstance(other, self.__class__)))
...     return SubClass()
However, this does not work because they are still different classes. So you need to change your comparison method to check against the first superclass:
isinstance(other, self.__class__.__mro__[1])
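Put together, a sketch of what the adjusted compare could look like; this just combines the two snippets above and is not verbatim from the answer:

class MyClass(object):
    pass

def factory():
    class SubClass(MyClass):
        def compare(self, other):
            # Check against the shared superclass rather than the per-call SubClass.
            print('Comparison Result: {}'.format(isinstance(other, self.__class__.__mro__[1])))
    return SubClass()

a = factory()
b = factory()
a.compare(b)  # Comparison Result: True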
If your class definition is inside the factory function, then each instance of the class you create will be an instance of a separate class. That's because the class definition is a statement, executed on every call just like an assignment would be. The name and contents of the different classes will be the same, but their identities will be distinct.
I don't think there's any simple way to get around that without changing the structure of your code in some way. You've said that your actual factory function is a method of a class, which suggests that you might be able to move the class definition somewhere else so that it can be shared by multiple calls to the factory method. Depending on what information you expect the inner class to use from the outer class, you might define it at class level (so there'd be only one class definition used everywhere), or you could define it in another method, like __init__ (which would create a new inner class for every instance of the outer class).
Here's what that last approach might look like:
class Outer(object):
    def __init__(self):
        class Inner(object):
            def compare(self, other):
                print('Comparison Result: {}'.format(isinstance(other, self.__class__)))
        self.Inner = Inner

    def factory(self):
        return self.Inner()

f = Outer()
a = f.factory()
b = f.factory()
print(a.compare(b))  # True

g = Outer()  # create another instance of the outer class
c = g.factory()
print(a.compare(c))  # False
It's not entirely clear what you're asking. It seems to me you want a simpler version of the code you already posted. If that's incorrect, this answer is not relevant.
You can create classes dynamically by explicitly constructing a new instance of the type type.
def compare(self, other):
    ...

def factory():
    return type("MyClass", (object,), {'compare': compare})()
type takes three arguments: the class name, the parent classes, and a dict of the class's attributes. So this will behave the same way as your previous code.
Working off the answer from @rassar, and adding some more detail to represent the actual implementation (e.g. the factory method existing in a parent class), I have come up with a working example below.
From @rassar's answer, I realised that the class is dynamically created each time, so defining it within the parent object (or even above that) means that it will be the same class definition each time it is called.
class Parent(object):
    class MyClass(object):
        def __init__(self, parent):
            self.parent = parent

        def compare(self, other):
            print('Comparison Result: {}'.format(isinstance(other, self.__class__)))

    def factory(self):
        return self.MyClass(self)

a = Parent()
b = a.factory()
c = a.factory()
b.compare(c)
print(id(b.__class__))
print(id(c.__class__))
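For reference, the expected outcome (not shown in the original post):

# Both instances come from the single Parent.MyClass definition.
assert b.__class__ is c.__class__
b.compare(c)  # Comparison Result: True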

Python: How to update the calls of a third class to the overriden method of the original class?

class ThirdPartyA(object):
    def __init__(self):
        ...
    def ...():
        ...

-------------------

from xxx import ThirdPartyA

class ThirdPartyB(object):
    def a(self):
        ...
        # call to ThirdPartyA
        ....
    def b(self):
        ...
        # call to ThirdPartyA
        ...
    def c(self):
        ...
        # call to ThirdPartyA
        ...

-----------------------------------

from xxx import ThirdPartyA

class MyCodeA(ThirdPartyA):
    def __init__(self):
        # overriding code
When overriding the __init__ method of class A, how could I instruct class B to call MyCodeA instead of ThirdPartyA in all its methods?
The real code is here:
Class Geoposition: ThirdPartyA
Class GeopositionField: ThirdPartyB
My override of the Geoposition class so it returns at most 5 decimal digits:
class AccuracyGeoposition(Geoposition):
    def __init__(self, latitude, longitude):
        if isinstance(latitude, float) or isinstance(latitude, int):
            latitude = '{0:.5f}'.format(latitude)
        if isinstance(longitude, float) or isinstance(longitude, int):
            longitude = '{0:.5f}'.format(longitude)
        self.latitude = Decimal(latitude)
        self.longitude = Decimal(longitude)
From your updated code, I think what you're trying to do is change GeopositionField.to_python() so that it returns AccuracyGeoposition values instead of Geoposition values.
There's no way to do that directly; the code in GeopositionField explicitly says it wants to construct a Geoposition, so that's what happens.
The cleanest solution is to subclass GeopositionField as well, so you can wrap that method:
class AccuracyGeopositionField(GeopositionField):
    def to_python(self, value):
        geo = super(AccuracyGeopositionField, self).to_python(value)
        return AccuracyGeoposition(geo.latitude, geo.longitude)
If creating a Geoposition and then re-wrapping the values in an AccuracyGeoposition is insufficient (because accuracy has already been lost), you might be able to pre-process things before calling the super method as well/instead. For example, if the way it deals with list is not acceptable (I realize that's not true here, but it serves as a simple example), but everything else you can just let it do its thing and wrap the result, you could do this:
class AccuracyGeopositionField(GeopositionField):
    def to_python(self, value):
        if isinstance(value, list):
            return AccuracyGeoposition(value[0], value[1])
        geo = super(AccuracyGeopositionField, self).to_python(value)
        return AccuracyGeoposition(geo.latitude, geo.longitude)
If worst comes to worst, you may have to reimplement the entire method (maybe by copying, pasting, and modifying its code), but hopefully that will rarely come up.
There are hacky alternatives to this. For example, you could monkeypatch the module to globally replace the Geoposition class with your AccuracyGeoposition class. But, while that may save some work up front, you're almost certain to be unhappy with it when you're debugging things later. Systems that are designed for aspect-oriented programming (which is basically controlled monkeypatching) are great, but trying to cram it into systems that were designed to resist it will give you headaches.
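For completeness, the monkeypatching route would look roughly like this; thirdparty_geo is a stand-in name for whatever module actually defines Geoposition, not something the answer specifies:

import thirdparty_geo  # hypothetical module that defines Geoposition

# Globally swap the class so any code doing thirdparty_geo.Geoposition(...)
# now gets the subclass. This affects every user of the module, which is
# exactly why it tends to hurt during debugging.
thirdparty_geo.Geoposition = AccuracyGeoposition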
Assuming your real code works like your example—that is, every method of B creates a new A instance just to call a method on it and discard it—well, that's a very weird design, but if it makes sense for your use case, you can make it work.
The key here is that classes are first-class objects. Instead of hardcoding A, store the class you want as a member of the B instance, like this:
class B(object):
    def __init__(self, aclass=A):
        self.aclass = aclass

    def a(self):
        self.aclass().a()
Now, you just create a B instance with your subclass:
b = B(OverriddenA)
Your edited version does a different strange thing: instead of constructing a new A instance each time to call methods on it, you're calling class methods on A itself. Again, this is probably not what you want—but, if it is, you can do it:
class B(object):
    def __init__(self, aclass=A):
        self.aclass = aclass

    def a(self):
        self.aclass.a()
However, more likely you don't really want either of these. You want to take an A instance at construction time, store it, and use it repeatedly. Like this:
class B(object):
    def __init__(self, ainstance):
        self.ainstance = ainstance

    def a(self):
        self.ainstance.a()

b1 = B(A())
b2 = B(OverriddenA())
If this all seems abstract and hard to understand… well, that's because we're using meaningless names like A, B, and OverriddenA. If you tell us the actual types you're thinking about, or just plug those types in mechanically, it should make a lot more sense.
For example:
class Vehicle(object):
    def move(self):
        print('I am a vehicle, and I am moving')

class Operator(object):
    def __init__(self, vehicle):
        self.vehicle = vehicle

    def move(self):
        print('I am moving my vehicle')
        self.vehicle.move()

class Car(object):
    def move(self):
        print('I am a car, and I am driving')

driver = Operator(Car())
driver.move()
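For reference, running this example prints:
I am moving my vehicle
I am a car, and I am driving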

Using __getattribute__ or __getattr__ to call methods in Python

I am trying to create a subclass which acts as a list of custom classes. However, I want the list to inherit the methods and attributes of the parent class and return a sum of the quantities of each item. I am attempting to do this using the __getattribute__ method, but I cannot figure out how to pass arguments to callable attributes. The highly simplified code below should explain more clearly.
class Product:
    def __init__(self, price, quantity):
        self.price = price
        self.quantity = quantity

    def get_total_price(self, tax_rate):
        return self.price*self.quantity*(1+tax_rate)

class Package(Product, list):
    def __init__(self, *args):
        list.__init__(self, args)

    def __getattribute__(self, *args):
        name = args[0]
        # the only argument passed is the name...
        if name in dir(self[0]):
            tot = 0
            for product in self:
                tot += getattr(product, name)  # (need some way to pass the argument)
            return sum
        else:
            list.__getattribute__(self, *args)

p1 = Product(2, 4)
p2 = Product(1, 6)
print p1.get_total_price(0.1)  # returns 8.8
print p2.get_total_price(0.1)  # returns 6.6
pkg = Package(p1, p2)
print pkg.get_total_price(0.1)  # desired output is 15.4
In reality I have many methods of the parent class which must be callable. I realize that I could manually override each one for the list-like subclass, but I would like to avoid that since more methods may be added to the parent class in the future and I would like a dynamic system. Any advice or suggestions are appreciated. Thanks!
This code is awful and really not Pythonic at all. There's no way for you to pass an extra argument through __getattribute__, so you shouldn't try to do any implicit magic like this. It would be better written like this:
class Product(object):
    def __init__(self, price, quantity):
        self.price = price
        self.quantity = quantity

    def get_total_price(self, tax_rate):
        return self.price * self.quantity * (1 + tax_rate)

class Package(object):
    def __init__(self, *products):
        self.products = products

    def get_total_price(self, tax_rate):
        return sum(P.get_total_price(tax_rate) for P in self.products)
If you need to, you can make the wrapper more generic, like this:
class Package(object):
    def __init__(self, *products):
        self.products = products

    def sum_with(self, method, *args):
        return sum(getattr(P, method)(*args) for P in self.products)

    def get_total_price(self, tax_rate):
        return self.sum_with('get_total_price', tax_rate)

    def another_method(self, foo, bar):
        return self.sum_with('another_method', foo, bar)

    # or just use sum_with directly
Explicit is better than implicit. Also composition is usually better than inheritance.
You have a few points of confusion here:
1) __getattribute__ intercepts all attribute access, which isn't what you want. You only want your code to step in if a real attribute doesn't exist, so you want __getattr__.
2) Your __getattribute__ is calling the method on the list elements, but it shouldn't be doing real work, it should only return a callable thing. Remember, in Python, x.m(a) is really two steps: first, get x.m, then call that thing with an argument of a. Your function should only be doing the first step, not both steps.
3) I'm surprised that all the methods you need to override should be summed. Are there really that many methods, that really all should be summed, to make this worthwhile?
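To make point 2 concrete with the Product class from the question, x.m(a) really is two separate steps (a small illustration, not part of the original answer):

p1 = Product(2, 4)
method = p1.get_total_price   # step 1: attribute lookup returns a bound method
result = method(0.1)          # step 2: the call itself; result is 8.8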
This code works to do what you want, but you might want to consider more explicit approaches, as others suggest:
class Product:
    def __init__(self, price, quantity):
        self.price = price
        self.quantity = quantity

    def get_total_price(self, tax_rate):
        return self.price*self.quantity*(1+tax_rate)

class Package(list):
    def __init__(self, *args):
        list.__init__(self, args)

    def __getattr__(self, name):
        if hasattr(self[0], name):
            def fn(*args):
                tot = 0
                for product in self:
                    tot += getattr(product, name)(*args)
                return tot
            return fn
        else:
            raise AttributeError
Things to note in this code: I've made Package not derive from Product, because it gets all of its Product-ness by delegating to the elements of the list. Don't use in dir() to decide whether a thing has an attribute; use hasattr.
To answer your immediate question, you call a function or method retrieved using getattr() the same way you call any function: by putting the arguments, if any, in parentheses following the reference to the function. The fact that the reference to the function comes from getattr() rather than an attribute access doesn't make any difference.
func = getattr(product, name)
result = func(arg)
These can be combined and the temporary variable func eliminated:
getattr(product, name)(arg)
In addition to what Cat Plus Plus said, if you really want to invoke magic anyway (please don't! There are unbelievably many disturbing surprises awaiting you with such an approach in practice), you could test for the presence of the attribute in the Product class, and create a sum_with wrapper dynamically:
def __getattribute__(self, attr):
    # Wrap attributes that exist on Product in a summing lambda;
    # everything else goes through normal attribute lookup.
    return (
        (lambda *args: self.sum_with(attr, *args))
        if hasattr(Product, attr)
        else super(Package, self).__getattribute__(attr)
    )
