Dynamic Operator Overloading on dict classes in Python

I have a class that dynamically overloads basic arithmetic operators like so...
import operator

class IshyNum:
    def __init__(self, n):
        self.num = n
        self.buildArith()

    def arithmetic(self, other, o):
        return o(self.num, other)

    def buildArith(self):
        map(lambda o: setattr(self, "__%s__" % o, lambda f: self.arithmetic(f, getattr(operator, o))), ["add", "sub", "mul", "div"])

if __name__ == "__main__":
    number = IshyNum(5)
    print number + 5
    print number / 2
    print number * 3
    print number - 3
But if I change the class to inherit from the dictionary (class IshyNum(dict):) it doesn't work. I need to explicitly def __add__(self, other) or whatever in order for this to work. Why?

The answer lies in the two kinds of classes Python 2 has.
The first code snippet you provided uses a legacy "old-style" class (you can tell because it doesn't subclass anything; there's nothing before the colon). Its semantics are peculiar. In particular, you can add a special method to an instance:
class Foo:
    def __init__(self, num):
        self.num = num
        def _fn(other):
            return self.num + other.num
        self.__add__ = _fn
and get a valid response:
>>> f = Foo(2)
>>> g = Foo(1)
>>> f + g
3
But, subclassing dict means you are generating a new-style class. And the semantics of operator overloading are different:
class Foo(object):
    def __init__(self, num):
        self.num = num
        def _fn(other):
            return self.num + other.num
        self.__add__ = _fn
>>> f = Foo(2)
>>> g = Foo(1)
>>> f + g
Traceback ...
TypeError: unsupported operand type(s) for +: 'Foo' and 'Foo'
To make this work with new-style classes (which includes subclasses of dict or just about any other type you will find), you have to make sure the special method is defined on the class. You can do this through a metaclass:
class _MetaFoo(type):
    def __init__(cls, name, bases, args):
        def _fn(self, other):
            return self.num + other.num
        cls.__add__ = _fn

class Foo(object):
    __metaclass__ = _MetaFoo
    def __init__(self, num):
        self.num = num
>>> f = Foo(2)
>>> g = Foo(1)
>>> f+g
3
Also, the semantic difference means that in the very first case I could define my local add method with one argument (the self it uses is captured from the surrounding scope in which it is defined), but with new-style classes, Python expects to pass in both values explicitly, so the inner function has two arguments.
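To see the lookup difference concretely, here is a minimal sketch (Python 2, hypothetical names) showing that the + operator consults the type rather than the instance on new-style classes:
class Foo(object):
    pass

f = Foo()
f.__add__ = lambda other: 42  # set on the instance only
try:
    f + 1  # + looks up type(f).__add__ and ignores the instance attribute
except TypeError as e:
    print e  # unsupported operand type(s) for +: 'Foo' and 'int'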
As a previous commenter mentioned, it's best to avoid old-style classes if possible and stick with new-style classes (old-style classes were removed in Python 3). It's unfortunate that old-style classes happened to work for you in this case, where new-style classes require more code.
Edit:
You can also do this more in the way you originally tried by setting the method on the class rather than the instance:
class Foo(object):
    def __init__(self, num):
        self.num = num

setattr(Foo, '__add__', (lambda self, other: self.num + other.num))
>>> f = Foo(2)
>>> g = Foo(1)
>>> f+g
3
I'm afraid I sometimes think in Metaclasses, where simpler solutions would be better :)

In general, never set __ methods on the instance -- they're only supported on the class. (In this instance, the problem is that they happen to work on old-style classes. Don't use old-style classes).
You probably want to use a metaclass, not the weird thing you're doing here.
Here's a metaclass tutorial: http://www.voidspace.org.uk/python/articles/metaclasses.shtml

I do not understand what you are trying to accomplish, but I am almost certain you are going about it in the wrong way. Some of my observations:
I don't see why you're trying to dynamically generate those arithmetic methods. You don't do anything instance-specific with them, so I don't see why you would not just define them on the class.
The only reason they work at all is because IshyNum is an old-style class; this isn't a good thing, since old-style classes are long-deprecated and not as nice as new-style classes. (I'll explain later why you should be especially interested in this.)
If you wanted to automate the process of doing the same thing for multiple methods (probably not worth it in this case), you could just do this right after the class definition block.
Don't use map to do that. map is for making a list; using it for side effects is silly. Just use a normal for loop.
If you want to use composition and redirect lots of methods to the same attribute automatically, define __getattr__ and forward lookups to that attribute (see the sketch at the end of this answer).
Don't inherit from dict. There is not much to gain from inheriting built-in types; it turns out to be more confusing than it's worth, and you don't get to re-use much.
If your code above is anything close to the stuff in your post, you really don't want to inherit dict. If it's not, try posting your real use case.
Here is what you really wanted to know:
When you inherit dict, you are making a new-style class. IshyNum is an old-style class because it doesn't inherit object (or one of its subclasses).
New-style classes have been Python's flagship kind of class for a decade and are what you want to use. In this case, they actually cause your technique no longer to work. This is fine, though, since there is no reason in the code you posted to set magic methods on a per-instance level and little reason ever to want to.
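For illustration, here is a minimal sketch of the __getattr__-based composition mentioned above (hypothetical Wrapper name, Python 2 like the rest of this question):
class Wrapper(object):
    def __init__(self, inner):
        self.inner = inner
    def __getattr__(self, name):
        # Called only when normal attribute lookup fails; forward the
        # lookup to the wrapped object instead of inheriting from it.
        return getattr(self.inner, name)

w = Wrapper({'a': 1})
print w.keys()  # ['a'], forwarded to the underlying dict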

For new-style classes, Python does not check the instance for an __add__ method when performing an addition; it checks the class instead. The problem is that you are binding the __add__ method (and all the others) to the instance as a bound method and not to the class as an unbound method. (This is true of the other special methods as well: you can attach them only to the class, not to an instance.)
So you'll probably want to use a metaclass to achieve this functionality, although I think this is a very awkward thing to do, as it is much more readable to spell out these methods explicitly. Anyway, here is an example with metaclasses:
import operator

class OperatorMeta(type):
    def __new__(mcs, name, bases, attrs):
        for opname in ["add", "sub", "mul", "div"]:
            op = getattr(operator, opname)
            attrs["__%s__" % opname] = mcs._arithmetic_func_factory(op)
        return type.__new__(mcs, name, bases, attrs)

    @staticmethod
    def _arithmetic_func_factory(op):
        def func(self, other):
            return op(self.num, other)
        return func

class IshyNum(dict):
    __metaclass__ = OperatorMeta
    def __init__(self, n):
        dict.__init__(self)
        self.num = n

if __name__ == "__main__":
    number = IshyNum(5)
    print number + 5
    print number / 2
    print number * 3
    print number - 3
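For Python 3, a minimal sketch of the same class would pass the metaclass as a keyword argument instead of setting __metaclass__ (note that operator.div is gone in Python 3, so the metaclass's name list would need "truediv" instead of "div"):
class IshyNum(dict, metaclass=OperatorMeta):
    def __init__(self, n):
        super().__init__()
        self.num = n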

Related

Using a metaclass on a class derived from another class implemented in C

I am working on a ctypes drop-in-replacement / extension and ran into an issue I do not fully understand.
I am trying to build a class factory for call-back function decorators similar to CFUNCTYPE and WINFUNCTYPE. Both factories produce classes derived from ctypes._CFuncPtr. Like every ctypes function interface, they have properties like argtypes and restype. I want to extend the classes allowing an additional property named some_param and I thought, why not, let's try this with "getter" and "setter" methods - how hard can it be ...
Because I am trying to use "getter" and "setter" methods (@property) on a property of a class (NOT a property of objects), I ended up writing a metaclass. Because my class is derived from ctypes._CFuncPtr, I think my metaclass must be derived from ctypes._CFuncPtr.__class__ (I could be wrong here).
The example below works, sort of:
import ctypes

class a_class:
    def b_function(self, some_param_parg):
        class c_class_meta(ctypes._CFuncPtr.__class__):
            def __init__(cls, *args):
                super().__init__(*args)  # no idea if this is good ...
                cls._some_param_ = some_param_parg
            @property
            def some_param(cls):
                return cls._some_param_
            @some_param.setter
            def some_param(cls, value):
                if not isinstance(value, list):
                    raise TypeError('some_param must be a list')
                cls._some_param_ = value
        class c_class(ctypes._CFuncPtr, metaclass=c_class_meta):
            _argtypes_ = ()
            _restype_ = None
            _flags_ = ctypes._FUNCFLAG_STDCALL  # change for CFUNCTYPE or WINFUNCTYPE etc ...
        return c_class

d_class = a_class().b_function([1, 2, 3])
print(d_class.some_param)
d_class.some_param = [2, 6]
print(d_class.some_param)
d_class.some_param = {}  # Raises an error - as expected
So far, so good. But going any further with the above does NOT work. The following pseudo-code (if used on an actual function from a DLL or shared object) will fail; in fact, it will cause the CPython interpreter to segfault ...
some_routine = ctypes.windll.LoadLibrary('some.dll').some_routine
func_type = d_class(ctypes.c_int16, ctypes.c_int16)  # similar to CFUNCTYPE/WINFUNCTYPE
func_type.some_param = [4, 5, 6]  # my "special" property
some_routine.argtypes = (ctypes.c_int16, func_type)

@func_type
def demo(x):
    return x - 1

some_routine(4, demo)  # segfaults HERE!
I am not entirely sure what goes wrong. ctypes._CFuncPtr is implemented in C, which could be a relevant limitation ... I could also have made a mistake in the implementation of the metaclass. Can someone enlighten me?
(For additional context, I am working on this function.)
Maybe the ctypes metaclass simply won't work nicely when subclassed: since it is itself written in C, it may take shortcuts that bypass the routes inheritance imposes, and end up failing.
Ideally this "bad behavior" would be properly documented, filed as bugs against CPython's ctypes, and fixed; to my knowledge there are not many people who can fix ctypes bugs.
On the other hand, having a metaclass just because you want a property-like attribute at class level is overkill.
Python's property itself is just a pre-made, very useful builtin class that implements the descriptor protocol. Any class you create yourself that implements proper __get__ and __set__ methods can replace property (and often, when logic is shared across property-attributes, this leads to shorter, non-duplicated code).
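For instance, a minimal data-descriptor sketch (hypothetical Checked name) that stands in for property on instances:
class Checked:
    def __set_name__(self, owner, name):
        self.name = '_' + name
    def __get__(self, instance, owner):
        if instance is None:
            return self
        return getattr(instance, self.name)
    def __set__(self, instance, value):
        # Shared validation logic lives here instead of in one setter
        # per property.
        if not isinstance(value, list):
            raise TypeError('must be a list')
        setattr(instance, self.name, value)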
On second thought, unfortunately, descriptor setters will only work for instances, not for classes (which makes sense, since doing cls.attr will already get you the special code-guarded value, and there is no way a __set__ method could be called on it).
So, if you could work with "manually" setting the values in the cls.__dict__ and putting your logic in the __get__ attribute, you could do:
PREFIX = "_cls_prop_"

class ClsProperty:
    def __set_name__(self, owner, name):
        self.name = name
    def __get__(self, instance, owner):
        value = owner.__dict__.get(PREFIX + self.name)
        # Logic to transform/check value goes here:
        if not isinstance(value, list):
            raise TypeError('some_param must be a list')
        return value

def b_function(some_param_arg):
    class c_class(ctypes._CFuncPtr):
        _argtypes_ = ()
        _restype_ = None
        _flags_ = 0  # ctypes._FUNCFLAG_STDCALL - change for CFUNCTYPE or WINFUNCTYPE etc ...
        _some_param_ = ClsProperty()
    setattr(c_class, PREFIX + "_some_param_", some_param_arg)
    return c_class

d_class = b_function([1, 2, 3])
print(d_class._some_param_)
d_class._some_param_ = [1, 2]
print(d_class._some_param_)
If that does not work, I don't think other approaches to extending the ctypes metaclass will work anyway; but if you want to try, instead of a "meta-property" you might customize the metaclass's __setattr__ to do your parameter checking, instead of using property.
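A minimal sketch of that alternative (hypothetical ValidatingMeta name; the check mirrors the setter from the question):
import ctypes

class ValidatingMeta(type(ctypes._CFuncPtr)):
    def __setattr__(cls, name, value):
        # Assigning a class attribute on classes built with this metaclass
        # routes through here, so cls.some_param = ... gets validated.
        if name == 'some_param' and not isinstance(value, list):
            raise TypeError('some_param must be a list')
        super().__setattr__(name, value)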

How to create a static class property that returns an instance of the class itself?

I wrote a class that can handle integers with arbitrary precision (just for learning purposes). The class takes a string representation of an integer and converts it into an instance of BigInt for further calculations.
Oftentimes you need the numbers zero and one, so I thought it would be helpful if the class could return these. I tried the following:
class BigInt():
    zero = BigInt("0")

    def __init__(self, value):
        ####yada-yada####
This doesn't work. Error: "name 'BigInt' is not defined"
Then I tried the following:
class BigInt():
    __zero = None

    @staticmethod
    def zero():
        if BigInt.__zero is None:
            BigInt.__zero = BigInt('0')
        return BigInt.__zero

    def __init__(self, value):
        ####yada-yada####
This actually works very well. What I don't like is that zero is a method (and thus has to be called with BigInt.zero()) which is counterintuitive since it should just refer to a fixed value.
So I tried changing zero to become a property, but then writing BigInt.zero returns an instance of the class property instead of BigInt because of the decorator used. That instance cannot be used for calculations because of the wrong type.
Is there a way around this issue?
A static property...? We call a static property an "attribute". This is not Java, Python is a dynamically typed language and such a construct would be really overcomplicating matters.
Just do this, setting a class attribute:
class BigInt:
    def __init__(self, value):
        ...

BigInt.zero = BigInt("0")
If you want it to be entirely defined within the class, do it using a decorator (but be aware it's just a more fancy way of writing the same thing).
def add_zero(cls):
    cls.zero = cls("0")
    return cls

@add_zero
class BigInt:
    ...
The question is contradictory: static and property don't go together in this way. Static attributes in Python are simply ones that are only assigned once, and the language itself includes a very large number of these. (Most strings are interned, all integers below a certain value are pre-constructed, etc.; e.g., the string module.) The easiest approach is to statically assign the attributes after construction, as wim illustrates:
class Foo:
    ...

Foo.first = Foo()
...
Or, as he further suggested, using a class decorator to perform the assignments, which is functionally the same as the above. A decorator is, effectively, a function that is given the "decorated" function as an argument, and must return a function to effectively replace the original one. This may be the original function, say, modified with some annotations, or may be an entirely different function. The original (decorated) function may or may not be called as appropriate for the decorator.
def preload(**values):
    def inner(cls):
        for k, v in values.items():
            setattr(cls, k, cls(v))
        return cls
    return inner
This can then be used dynamically:
@preload(zero=0, one=1)
class Foo:
    ...
If the purpose is to save some time on common integer values, a defaultdict mapping integers to constructed BigInts could be useful as a form of caching and streamlined construction / singleton storage. (E.g. BigInt.numbers[27])
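As a minimal caching sketch (hypothetical BigIntCache name; note that a defaultdict factory takes no arguments, so a dict subclass with __missing__ is a closer fit for keying on the integer value):
class BigIntCache(dict):
    def __missing__(self, key):
        # Build and cache BigInt("27") on first access to BigInt.numbers[27].
        value = self[key] = BigInt(str(key))
        return value

BigInt.numbers = BigIntCache()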
However, the problem of utilizing @property at the class level intrigued me, so I did some digging. It is entirely possible to make use of "descriptor protocol objects" (which the @property decorator returns) at the class level if you punt the attribute up the object model hierarchy, to the metaclass.
class Foo(type):
    @property
    def bar(cls):
        print("I'm a", cls)
        return 27

class Bar(metaclass=Foo):
    ...
>>> Bar.bar
I'm a <class '__main__.Bar'>
27
Notably, this attribute is not accessible from instances:
>>> Bar().bar
AttributeError: 'Bar' object has no attribute 'bar'
Hope this helps!

Can't Pickle memoized class instance

Here is the code I am using
import pickle
import funcy

@funcy.memoize
class mystery(object):
    def __init__(self, num):
        self.num = num

feat = mystery(1)
with open('num.pickle', 'wb') as f:
    pickle.dump(feat, f)
Which is giving me the following error:
PicklingError: Can't pickle <class '__main__.mystery'>: it's not the
same object as __main__.mystery
I am hoping to 1) understand why this is happening, and 2) find a solution that allows me to pickle the object (without removing the memoization). Ideally the solution would not change the call to pickle.
Running python 3.6 with funcy==1.10
The problem is that you've applied a decorator designed for functions to a class. The result is not a class, but a function that wraps up a call to the class. This causes a number of problems (e.g., as pointed out by Aran-Fey in the comments, you can't isinstance(feat, mystery), because mystery is a function rather than a class).
But the particular problem you care about is that you can't pickle instances of inaccessible classes.
In fact, that's basically what the error message is telling you:
PicklingError: Can't pickle <class '__main__.mystery'>: it's not the
same object as __main__.mystery
Your feat thinks its type is __main__.mystery, but that isn't a type at all, it's the function returned by the decorator that wraps that type.
The easy way to fix this would be to find a class decorator that does what you want. It might be called something like flyweight instead of memoize, but I'm sure plenty of examples exist.
But you can build a flyweight class by just memoizing the constructor, instead of memoizing the class:
class mystery:
    @funcy.memoize
    def __new__(cls, num):
        return super().__new__(cls)

    def __init__(self, num):
        self.num = num
… although you probably want to move the initialization into the constructor in that case. Otherwise, calling mystery(1) and then mystery(1) will return the same object as before, but also reinitialize it with self.num = 1, which is at best wasteful, and at worst incorrect. So:
class mystery:
    @funcy.memoize
    def __new__(cls, num):
        self = super().__new__(cls)
        self.num = num
        return self
And now:
>>> feat = mystery(1)
>>> feat
<__main__.mystery at 0x10eeb1278>
>>> mystery(2)
<__main__.mystery at 0x10eeb2c18>
>>> mystery(1)
<__main__.mystery at 0x10eeb1278>
And, because the type of feat is now a class that's accessible under the module-global name mystery, pickle will have no problem with it at all:
>>> pickle.dumps(feat)
b'\x80\x03c__main__\nmystery\nq\x00)\x81q\x01}q\x02X\x03\x00\x00\x00numq\x03K\x01sb.'
You do still want to think about how this class should play with pickling. In particular, do you want unpickling to go through the cache? By default, it doesn't:
>>> pickle.loads(pickle.dumps(feat)) is feat
False
What's happening is that it's using the default __reduce_ex__ for pickling, which defaults to doing the equivalent of (only slightly oversimplified):
result = object.__new__(__main__.mystery)
result.__dict__.update({'num': 1})
If you want it to go through the cache, the simplest solution is this:
class mystery:
#funcy.memoize
def __new__(cls, num):
self = super().__new__(cls)
self.num = num
return self
def __reduce__(self):
return (type(self), (self.num,))
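With __reduce__ routing reconstruction back through mystery(num), unpickling re-enters the memoized constructor, so the round trip now hits the cache:
>>> feat = mystery(1)
>>> pickle.loads(pickle.dumps(feat)) is feat
True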
If you plan to do this a lot, you might think of writing your own class decorator:
def memoclass(cls):
    @funcy.memoize
    def __new__(cls, *args, **kwargs):
        return super(cls, cls).__new__(cls)
    cls.__new__ = __new__
    return cls
But this:
… is kind of ugly,
… only works with classes that don't need to pass constructor arguments to a base class,
… only works with classes that don't have an __init__ (or, at least, that have an idempotent and fast __init__ that's harmless to call repeatedly),
… doesn't provide an easy way to hook pickling, and
… doesn't document or test any of those restrictions.
So, I think you're better off being explicit and just memoizing the __new__ method, or writing (or finding) something a lot fancier that does the introspection needed to make memoizing a class this way fully general. (Or, alternatively, maybe write one that only works with some restricted set of classes; e.g., a @memodataclass that's just like @dataclass but with a memoized constructor would be a lot easier than a fully general @memoclass.)
Another approach is
class _mystery(object):
    def __init__(self, num):
        self.num = num

@funcy.memoize
def mystery(num):
    return _mystery(num)

In Python can one implement mixin behavior without using inheritance?

Is there a reasonable way in Python to implement mixin behavior similar to that found in Ruby -- that is, without using inheritance?
class Mixin(object):
    def b(self): print "b()"
    def c(self): print "c()"

class Foo(object):
    # Somehow mix in the behavior of the Mixin class,
    # so that all of the methods below will run and
    # the issubclass() test will be False.
    def a(self): print "a()"

f = Foo()
f.a()
f.b()
f.c()
print issubclass(Foo, Mixin)
I had a vague idea to do this with a class decorator, but my attempts led to confusion. Most of my searches on the topic have led in the direction of using inheritance (or in more complex scenarios, multiple inheritance) to achieve mixin behavior.
def mixer(*args):
    """Decorator for mixing mixins"""
    def inner(cls):
        for a, k in ((a, k) for a in args for k, v in vars(a).items() if callable(v)):
            setattr(cls, k, getattr(a, k).im_func)
        return cls
    return inner
class Mixin(object):
    def b(self): print "b()"
    def c(self): print "c()"

class Mixin2(object):
    def d(self): print "d()"
    def e(self): print "e()"

@mixer(Mixin, Mixin2)
class Foo(object):
    # Somehow mix in the behavior of the Mixin class,
    # so that all of the methods below will run and
    # the issubclass() test will be False.
    def a(self): print "a()"

f = Foo()
f.a()
f.b()
f.c()
f.d()
f.e()
print issubclass(Foo, Mixin)
output:
a()
b()
c()
d()
e()
False
You can add the methods as functions:
Foo.b = Mixin.b.im_func
Foo.c = Mixin.c.im_func
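Note that im_func exists only in Python 2; in Python 3, looking a function up on a class already yields the plain function, so the equivalent sketch is simply:
Foo.b = Mixin.b
Foo.c = Mixin.c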
I am not that familiar with Python, but from what I know about Python metaprogramming, you could actually do it pretty much the same way it is done in Ruby.
In Ruby, a module basically consists of two things: a pointer to a method dictionary and a pointer to a constant dictionary. A class consists of three things: a pointer to a method dictionary, a pointer to a constant dictionary and a pointer to the superclass.
When you mix in a module M into a class C, the following happens:
an anonymous class α is created (this is called an include class)
α's method dictionary and constant dictionary pointers are set equal to M's
α's superclass pointer is set equal to C's
C's superclass pointer is set to α
In other words: a fake class which shares its behavior with the mixin is injected into the inheritance hierarchy. So, Ruby actually does use inheritance for mixin composition.
I left out a couple of subtleties above: first off, the module doesn't actually get inserted as C's superclass, it gets inserted as C's superclasses' (which is C's singleton class) superclass. And secondly, if the mixin itself has mixed in other mixins, then those also get wrapped into fake classes which get inserted directly above α, and this process is applied recursively, in case the mixed in mixins in turn have mixins.
Basically, the whole mixin hierarchy gets flattened into a straight line and spliced into the inheritance chain.
AFAIK, Python actually allows you to change a class's superclass(es) after the fact (something which Ruby does not allow you to do), and it also gives you access to a class's dict (again, something that is impossible in Ruby), so you should be able to implement this yourself.
EDIT: Fixed what could (and probably should) be construed as a bug. Now it builds a new dict and then updates that from the class's dict. This prevents mixins from overwriting methods that are defined directly on the class. The code is still untested but should work. I'm busy ATM, so I'll test it later. It worked fine except for a syntax error. In retrospect, I decided that I don't like it (even after my further improvements) and much prefer my other solution, even if it is more complicated. The test code for that one applies here as well, but I won't duplicate it.
You could use a metaclass factory:
import inspect

def add_mixins(*mixins):
    Dummy = type('Dummy', mixins, {})
    d = {}
    for mixin in reversed(inspect.getmro(Dummy)):
        d.update(mixin.__dict__)

    class WithMixins(type):
        def __new__(meta, classname, bases, classdict):
            d.update(classdict)
            return super(WithMixins, meta).__new__(meta, classname, bases, d)

    return WithMixins
then use it like:
class Foo(object):
    __metaclass__ = add_mixins(Mixin1, Mixin2)
    # rest of the stuff
This one is based on the way it's done in Ruby, as explained by Jörg W Mittag. All of the wall of code after if __name__ == '__main__' is test/demo code. There are actually only 13 lines of real code to it.
import inspect

def add_mixins(*mixins):
    Dummy = type('Dummy', mixins, {})
    d = {}
    # Now get all the class attributes. Use reversed so that conflicts
    # are resolved with the proper priority. This rules out the possibility
    # of the mixins calling methods from their base classes that get overridden
    # using super but is necessary for the subclass check to fail. If that wasn't a
    # requirement, we would just use Dummy above (or use MI directly and
    # forget all the metaclass stuff).
    for base in reversed(inspect.getmro(Dummy)):
        d.update(base.__dict__)
    # Create the mixin class. This should be equivalent to creating the
    # anonymous class in Ruby.
    Mixin = type('Mixin', (object,), d)

    class WithMixins(type):
        def __new__(meta, classname, bases, classdict):
            # The check below prevents an inheritance cycle from forming which
            # leads to a TypeError when trying to inherit from the resulting
            # class.
            if not any(issubclass(base, Mixin) for base in bases):
                # This should be the equivalent of setting the superclass
                # pointers in Ruby.
                bases = (Mixin,) + bases
            return super(WithMixins, meta).__new__(meta, classname, bases,
                                                   classdict)

    return WithMixins

if __name__ == '__main__':

    class Mixin1(object):
        def b(self): print "b()"
        def c(self): print "c()"

    class Mixin2(object):
        def d(self): print "d()"
        def e(self): print "e()"

    class Mixin3Base(object):
        def f(self): print "f()"

    class Mixin3(Mixin3Base): pass

    class Foo(object):
        __metaclass__ = add_mixins(Mixin1, Mixin2, Mixin3)
        def a(self): print "a()"

    class Bar(Foo):
        def f(self): print "Bar.f()"

    def test_class(cls):
        print "Testing {0}".format(cls.__name__)
        f = cls()
        f.a()
        f.b()
        f.c()
        f.d()
        f.e()
        f.f()
        print (issubclass(cls, Mixin1) or
               issubclass(cls, Mixin2) or
               issubclass(cls, Mixin3))

    test_class(Foo)
    test_class(Bar)
You could decorate the class's __getattr__ to check in the mixin. The problem is that all methods of the mixin would always require an object of the mixin's type as their first parameter, so you would have to decorate __init__ as well to create a mixin object. I believe you could achieve this using a class decorator.
from functools import partial

class Mixin(object):
    @staticmethod
    def b(self): print "b()"
    @staticmethod
    def c(self): print "c()"

class Foo(object):
    def __init__(self, mixin_cls):
        self.delegate_cls = mixin_cls

    def __getattr__(self, attr):
        if hasattr(self.delegate_cls, attr):
            return partial(getattr(self.delegate_cls, attr), self)

    def a(self): print "a()"

f = Foo(Mixin)
f.a()
f.b()
f.c()
print issubclass(Foo, Mixin)
This basically uses the Mixin class as a container for ad-hoc functions (not methods) that behave like methods by taking an object instance (self) as their first argument. __getattr__ redirects missing lookups to these method-like functions.
This passes your simple tests, as shown below, but I cannot guarantee it will do everything you want. Run more thorough tests to make sure.
$ python mixin.py
a()
b()
c()
False
Composition? It seems like that would be the simplest way to handle this: either wrap your object in a decorator or just import the methods as an object into your class definition itself. This is what I usually do: I put the methods that I want to share between classes in a file and then import the file. If I want to override some behavior, I import a modified file with the same method names under the same object name. It's a little sloppy, but it works.
For example, if I want the init_covers behavior from this file (bedg.py)
import cove as cov

def init_covers(n):
    n.covers.append(cov.Cover((set([n.id]))))
    id_list = []
    for a in n.neighbors:
        id_list.append(a.id)
    n.covers.append(cov.Cover((set(id_list))))

def update_degree(n):
    for a in n.covers:
        a.degree = 0
        for b in n.covers:
            if a != b:
                a.degree += len(a.node_list.intersection(b.node_list))
In my bar class file I would do: import bedg as foo
and then, if I want to change my foo behaviors in another class that inherits from bar, I write
import bild as foo
Like I say, it is sloppy.

Why is getattr() not working like I think it should? I think this code should print 'sss'

Here is my code:
class foo:
    def __init__(self):
        self.a = "a"
    def __getattr__(self, x, defalut):
        if x in self:
            return x
        else:
            return defalut

a = foo()
print getattr(a, 'b', 'sss')
I know __getattr__ must take two arguments, but I want to return a default attribute when the attribute does not exist. How can I do that? Thanks.
Also, I found that if I define __setattr__, this next code can't run either:
class foo:
    def __init__(self):
        self.a = {}
    def __setattr__(self, name, value):
        self.a[name] = value

a = foo()  # error, why?
Hi Alex, I changed your example:
class foo(object):
    def __init__(self):
        self.a = {'a': 'boh'}
    def __getattr__(self, x):
        if x in self.a:
            return self.a[x]
        raise AttributeError

a = foo()
print getattr(a, 'a', 'sss')
It prints {'a': 'boh'}, not 'boh'. I think it is returning self.a rather than self.a['a'], which is obviously not what I want to see.
Why is that, and is there any way to avoid it?
Your problem number one: you're defining an old-style class (we know you're on Python 2.something, even though you don't tell us, because you're using print as a keyword;-). In Python 2:
class foo:
means you're defining an old-style, aka legacy, class, whose behavior can be rather quirky at times. Never do that -- there's no good reason! The old-style classes exist only for compatibility with old legacy code that relies on their quirks (and were finally abolished in Python 3). Use new style classes instead:
class foo(object):
and then the check if x in self: will not cause a recursive __getattr__ call. It will however cause a failure anyway, because your class does not define a __contains__ method and therefore you cannot check if x is contained in an instance of that class.
If what you're trying to do is check whether x is defined in the instance dict of self, don't bother: __getattr__ doesn't even get called in that case; it's only called when the attribute is not otherwise found in self.
To support three-arguments calls to the getattr built-in, just raise AttributeError in your __getattr__ method if necessary (just as would happen if you had no __getattr__ method at all), and the built-in will do its job (it's the built-in's job to intercept such cases and return the default if provided). That's the reason one never ever calls special methods such as __getattr__ directly but rather uses built-ins and operators which internally call them -- the built-ins and operators provide substantial added value.
So to give an example which makes somewhat sense:
class foo(object):
    def __init__(self):
        self.blah = {'a': 'boh'}
    def __getattr__(self, x):
        if x in self.blah:
            return self.blah[x]
        raise AttributeError

a = foo()
print getattr(a, 'b', 'sss')
This prints sss, as desired.
If you add a __setattr__ method, that one intercepts every attempt to set attributes on self -- including self.blah = whatever. So -- when you need to bypass the very __setattr__ you're defining -- you must use a different approach. For example:
class foo(object):
    def __init__(self):
        self.__dict__['blah'] = {}
    def __setattr__(self, name, value):
        self.blah[name] = value
    def __getattr__(self, x):
        if x in self.blah:
            return self.blah[x]
        raise AttributeError

a = foo()
print getattr(a, 'b', 'sss')
This also prints sss. Instead of
self.__dict__['blah'] = {}
you could also use
object.__setattr__(self, 'blah', {})
Such "upcalls to the superclass's implementation" (which you could also obtain via the super built-in) are one of the rare exceptions to the rules "don't call special methods directly, call the built-in or use the operator instead" -- here, you want to specifically bypass the normal behavior, so the explicit special-method call is a possibility.
You are confusing the getattr built-in function, which retrieves some attribute binding of an object dynamically (by name), at runtime, and the __getattr__ method, which is invoked when you access some missing attribute of an object.
You can't ask
if x in self:
from within __getattr__, because the in operator will cause __getattr__ to be invoked, leading to infinite recursion.
If you simply want to have undefined attributes all be defined as some value, then
def __getattr__(self, ignored):
    return "Bob Dobbs"
