My question:
It seems that __getattr__ is not called for indexing operations, i.e. I can't use __getattr__ on a class A to provide A[...]. Is there a reason for this? Or is there a way to get around it so that __getattr__ can provide that functionality without having to explicitly define __getitem__, __setitem__, etc. on A?
Minimal Example:
Let's say I define two nearly identical classes, Explicit and Implicit. Each creates a little list self._arr on initialization, and each defines a __getattr__ that just passes all attribute requests to self._arr. The only difference is that Explicit also defines __getitem__ (by just passing it on to self._arr).
# Passes all attribute requests on to a list it contains
class Explicit():
    def __init__(self):
        self._arr = [1, 2, 3, 4]

    def __getattr__(self, attr):
        print('called __getattr_')
        return getattr(self._arr, attr)

    def __getitem__(self, item):
        return self._arr[item]
# Same as above but __getitem__ not defined
class Implicit():
    def __init__(self):
        self._arr = [1, 2, 3, 4]

    def __getattr__(self, attr):
        print('called __getattr_')
        return getattr(self._arr, attr)
This works as expected:
>>> e=Explicit()
>>> print(e.copy())
called __getattr_
[1, 2, 3, 4]
>>> print(hasattr(e,'__getitem__'))
True
>>> print(e[0])
1
But this doesn't:
>>> i=Implicit()
>>> print(i.copy())
called __getattr_
[1, 2, 3, 4]
>>> print(hasattr(i,'__getitem__'))
called __getattr_
True
>>> print(i.__getitem__(0))
called __getattr_
1
>>> print(i[0])
TypeError: 'Implicit' object does not support indexing
Python bypasses __getattr__, __getattribute__, and the instance dict when looking up "special" methods for implementing language mechanics. (For the most part, special methods are ones with two underscores on each side of the name.) If you were expecting i[0] to invoke i.__getitem__(0), which would in turn invoke i.__getattr__('__getitem__')(0), that's why that didn't happen.
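A common workaround, sketched here under the assumption that delegating everything to self._arr is the goal, is to attach thin forwarding wrappers for the special methods to the class itself, since Python looks special methods up on the type (the helper name _delegate is made up for illustration):

```python
class Implicit:
    def __init__(self):
        self._arr = [1, 2, 3, 4]

    def __getattr__(self, attr):
        return getattr(self._arr, attr)

# Special methods are looked up on the type, so attach delegating
# wrappers to the class rather than relying on __getattr__.
def _delegate(name):
    def method(self, *args):
        return getattr(self._arr, name)(*args)
    return method

for _name in ('__getitem__', '__setitem__', '__len__', '__contains__'):
    setattr(Implicit, _name, _delegate(_name))

i = Implicit()
print(i[0])    # 1
i[0] = 10
print(len(i))  # 4
```

This keeps the list of forwarded special methods explicit in one place, at the cost of having to enumerate them.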
Related
What I want is this behavior:
class a:
    list = []

x = a()
y = a()
x.list.append(1)
y.list.append(2)
x.list.append(3)
y.list.append(4)
print(x.list) # prints [1, 3]
print(y.list) # prints [2, 4]
Of course, what really happens when I print is:
print(x.list) # prints [1, 2, 3, 4]
print(y.list) # prints [1, 2, 3, 4]
Clearly they are sharing the data in class a. How do I get separate instances to achieve the behavior I desire?
You want this:
class a:
    def __init__(self):
        self.list = []
Declaring the variables inside the class body makes them class attributes, not instance attributes. Declaring them inside the __init__ method ensures that a fresh list is created for every new instance of the object, which is the behavior you're looking for.
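With that change, the original example behaves exactly as desired:

```python
class a:
    def __init__(self):
        self.list = []  # created fresh per instance

x = a()
y = a()
x.list.append(1)
y.list.append(2)
x.list.append(3)
y.list.append(4)
print(x.list)  # prints [1, 3]
print(y.list)  # prints [2, 4]
```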
The accepted answer works but a little more explanation does not hurt.
Class attributes do not become instance attributes when an instance is created. They become instance attributes when a value is assigned to them.
In the original code no value is assigned to the list attribute after instantiation, so it remains a class attribute. Defining list inside __init__ works because __init__ is called after instantiation. Alternatively, this code would also produce the desired output:
>>> class a:
...     list = []
...
>>> y = a()
>>> x = a()
>>> x.list = []
>>> y.list = []
>>> x.list.append(1)
>>> y.list.append(2)
>>> x.list.append(3)
>>> y.list.append(4)
>>> print(x.list)
[1, 3]
>>> print(y.list)
[2, 4]
However, the confusing scenario in the question will never happen with immutable objects such as numbers and strings, because their value cannot be changed without assignment. For example, code similar to the original but with a string attribute works without any problem:
>>> class a:
...     string = ''
...
>>> x = a()
>>> y = a()
>>> x.string += 'x'
>>> y.string += 'y'
>>> x.string
'x'
>>> y.string
'y'
So to summarize: class attributes become instance attributes if and only if a value is assigned to them after instantiation, whether in the __init__ method or not. This is a good thing, because this way you can have static attributes if you never assign a value to an attribute after instantiation.
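One quick way to verify this rule (a sketch, not part of the original answer) is to watch the instance __dict__ before and after assignment:

```python
class A:
    items = []  # class attribute

a = A()
print('items' in a.__dict__)  # False: lookup still falls through to the class
a.items = [1]                 # assignment creates an instance attribute
print('items' in a.__dict__)  # True: it now shadows the class attribute
print(A.items)                # [] -- the class attribute is untouched
```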
Although the accepted answer is spot on, I would like to add a bit of description.
Let's do a small exercise
first of all define a class as follows:
class A:
    temp = 'Skyharbor'

    def __init__(self, x):
        self.x = x

    def change(self, y):
        self.temp = y
So what do we have here?
We have a very simple class which has an attribute temp, which is a string
An __init__ method which sets self.x
A change method which sets self.temp
Pretty straightforward so far, yeah? Now let's start playing around with this class. Let's initialize it first:
a = A('Tesseract')
Now do the following:
>>> print(a.temp)
Skyharbor
>>> print(A.temp)
Skyharbor
Well, a.temp worked as expected, but how the hell did A.temp work? Well, it worked because temp is a class attribute. Everything in Python is an object. Here A is also an object, of class type. Thus the attribute temp is an attribute held by the A class, and if you change the value of temp through A (and not through an instance of A), the changed value is going to be reflected in all instances of the A class.
Let's go ahead and do that:
>>> A.temp = 'Monuments'
>>> print(A.temp)
Monuments
>>> print(a.temp)
Monuments
Interesting isn't it? And note that id(a.temp) and id(A.temp) are still the same.
Any Python object is automatically given a __dict__ attribute, which contains its attributes. Let's investigate what this dictionary contains for our example objects:
>>> print(A.__dict__)
{
'change': <function change at 0x7f5e26fee6e0>,
'__module__': '__main__',
'__init__': <function __init__ at 0x7f5e26fee668>,
'temp': 'Monuments',
'__doc__': None
}
>>> print(a.__dict__)
{'x': 'Tesseract'}
Note that the temp attribute is listed among the A class's attributes, while x is listed for the instance.
So how come we get a defined value for a.temp if it is not even listed for the instance a? Well, that's the magic of the __getattribute__() method. In Python the dotted syntax automatically invokes this method, so when we write a.temp, Python executes a.__getattribute__('temp'). That method performs the attribute lookup, i.e. finds the value of the attribute by looking in different places.
The standard implementation of __getattribute__() searches first the internal dictionary (__dict__) of the object, then the type of the object. In this case a.__getattribute__('temp') first tries a.__dict__['temp'] and then falls back to a.__class__.__dict__['temp'].
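That lookup order can be demonstrated directly (a simplified sketch; data descriptors on the class actually take priority over the instance __dict__):

```python
class A:
    temp = 'Skyharbor'

a = A()
a.__dict__['temp'] = 'Intervals'   # the instance dict is consulted first
print(a.temp)                      # Intervals

del a.__dict__['temp']             # now lookup falls back to the class dict
print(a.temp)                      # Skyharbor
print(type(a).__dict__['temp'])    # Skyharbor
```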
Okay now let's use our change method:
>>> a.change('Intervals')
>>> print(a.temp)
Intervals
>>> print(A.temp)
Monuments
Well, now that change() has assigned to self.temp (creating an instance attribute), print(a.temp) gives us a different value from print(A.temp).
Now if we compare id(a.temp) and id(A.temp), they will be different.
You declared "list" as a "class level property" and not an "instance level property". In order to have properties scoped at the instance level, you need to initialize them via the "self" parameter in the __init__ method (or elsewhere, depending on the situation).
You don't strictly have to initialize the instance properties in the __init__ method, but it makes for easier understanding.
So nearly every response here seems to miss a particular point. Class variables never become instance variables as demonstrated by the code below. By utilizing a metaclass to intercept variable assignment at the class level, we can see that when a.myattr is reassigned, the field assignment magic method on the class is not called. This is because the assignment creates a new instance variable. This behavior has absolutely nothing to do with the class variable as demonstrated by the second class which has no class variables and yet still allows field assignment.
class mymeta(type):
    def __init__(cls, name, bases, d):
        pass

    def __setattr__(cls, attr, value):
        print("setting " + attr)
        super(mymeta, cls).__setattr__(attr, value)

class myclass(object, metaclass=mymeta):
    myattr = []

a = myclass()
a.myattr = []  # NOTHING IS PRINTED
myclass.myattr = [5]  # the change is printed here
b = myclass()
print(b.myattr)  # pass-through lookup on the base class

class expando(object):
    pass

a = expando()
a.random = 5  # no class variable required
print(a.random)  # but it still works
IN SHORT: class variables have NOTHING to do with instance variables.
More clearly: they just happen to be in the scope for lookups on instances. Class variables are in fact instance variables on the class object itself. You can also have metaclass variables if you want, because metaclasses themselves are objects too. Everything is an object, whether it is used to create other objects or not, so do not get bound up in the semantics of other languages' usage of the word class. In Python, a class is really just an object that is used to determine how to create other objects and what their behaviors will be. Metaclasses are classes that create classes, just to further illustrate this point.
Yes, you must declare the list in the "constructor" if you want it to become an instance property and not a class property.
To prevent your variable from being shared by other instances, you need to create a new instance variable each time you create an instance. When you declare a variable inside a class body, it is a class variable shared by all instances. If you want it to be per-instance, use the __init__ method to initialize the variable on the instance.
From Python Objects and Class by Programiz.com:
The __init__() function: this special function gets called whenever a new object of that class is instantiated. This type of function is also called a constructor in Object Oriented Programming (OOP). We normally use it to initialize all the variables.
For example:
class example:
    list = []  # This is a class variable shared by all instances

    def __init__(self):
        self.list = []  # This is an instance variable specific to each instance
I want to apply a function f to a collection xs but keep its type. If I use map, I get a 'map object':
def apply1(xs, f):
    return map(f, xs)
If I know that xs is something like a list or tuple I can force it to have the same type:
def apply2(xs, f):
    return type(xs)(map(f, xs))
However, that quickly breaks down for namedtuple (which I am currently in the habit of using) -- because to my knowledge a namedtuple needs to be constructed with unpack syntax or by calling its _make function. Also, namedtuple is immutable, so I cannot iterate over all entries and just change them.
Further problems arise from use of a dict.
Is there a generic way to express such an apply function that works for everything that is iterable?
Looks like a perfect task for functools.singledispatch decorator:
from functools import singledispatch

@singledispatch
def apply(xs, f):
    return map(f, xs)

@apply.register(list)
def apply_to_list(xs, f):
    return type(xs)(map(f, xs))

@apply.register(tuple)
def apply_to_tuple(xs, f):
    try:
        # handle the `namedtuple` case
        constructor = xs._make
    except AttributeError:
        constructor = type(xs)
    return constructor(map(f, xs))
After that, the apply function can simply be used like
>>> apply([1, 2], lambda x: x + 1)
[2, 3]
>>> from collections import namedtuple
>>> Point = namedtuple('Point', ['x', 'y'])
>>> p = Point(10, 5)
>>> apply(p, lambda x: x ** 2)
Point(x=100, y=25)
I'm not aware of what the desired behavior for dict objects is, but the greatness of this approach is that it is easy to extend.
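For example, if the desired convention for a dict is to map over its values while keeping the keys, registering it is one small addition (this convention is an assumption on my part, since the question doesn't specify one):

```python
from functools import singledispatch

@singledispatch
def apply(xs, f):
    return map(f, xs)

@apply.register(dict)
def apply_to_dict(xs, f):
    # assumed convention: apply f to the values, keep the keys
    return type(xs)((k, f(v)) for k, v in xs.items())

print(apply({'a': 1, 'b': 2}, lambda x: x + 1))  # {'a': 2, 'b': 3}
```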
I have a hunch you're coming from Haskell -- is that right? (I'm guessing because you use f and xs as variable names.) The answer to your question in Haskell would be "yes, it's called fmap, but it only works with types that have a defined Functor instance."
Python, on the other hand, has no general concept of "Functor." So strictly speaking, the answer is no. To get something like this, you'd have to fall back on other abstractions that Python does provide.
ABCs to the rescue
One pretty general approach would be to use abstract base classes. These provide a structured way to specify and check for particular interfaces. A Pythonic version of the Functor typeclass would be an abstract base class that defines a special fmap method, allowing individual classes to specify how they are to be mapped. But no such thing exists. (I think it would be a really cool addition to Python though!)
Now, you can define your own abstract base classes, so you could create a Functor ABC that expects a fmap interface, but you'd still have to write all your own functorized subclasses of list, dict, and so on, so that's not really ideal.
A better approach would be to use the existing interfaces to cobble together a generic definition of mapping that seems reasonable. You'd have to think pretty carefully about what aspects of the existing interfaces you'd need to combine. Just checking to see whether a type defines __iter__ isn't enough, because as you've already seen, a definition of iteration for a type doesn't necessarily translate into a definition of construction. For example, iterating over a dictionary only gives you the keys, but to map a dictionary in this precise way would require iteration over items.
Concrete examples
Here's an abstract base class whose map method includes special cases for namedtuple and three abstract base classes -- Sequence, Mapping, and Set. It will behave as expected for any type that defines any of the above interfaces in the expected way. It then falls back to the generic behavior for iterables. In the latter case, the output won't have the same type as the input, but at least it will work.
from abc import ABC
from collections.abc import Sequence, Mapping, Set

class Mappable(ABC):
    def map(self, f):
        if hasattr(self, '_make'):
            return type(self)._make(f(x) for x in self)
        elif isinstance(self, Sequence) or isinstance(self, Set):
            return type(self)(f(x) for x in self)
        elif isinstance(self, Mapping):
            return type(self)((k, f(v)) for k, v in self.items())
        else:
            return map(f, self)
I've defined this as an ABC because that way you can create new classes that inherit from it. But you can also just call it on an existing instance of any class and it will behave as expected. You could also just use the map method above as a stand-alone function.
>>> from collections import namedtuple
>>>
>>> def double(x):
...     return x * 2
...
>>> Point = namedtuple('Point', ['x', 'y'])
>>> p = Point(5, 10)
>>> Mappable.map(p, double)
Point(x=10, y=20)
>>> d = {'a': 5, 'b': 10}
>>> Mappable.map(d, double)
{'a': 10, 'b': 20}
The cool thing about defining an ABC is that you can use it as a "mix-in." Here's a MappablePoint derived from a Point namedtuple:
>>> class MappablePoint(Point, Mappable):
...     pass
...
>>> p = MappablePoint(5, 10)
>>> p.map(double)
MappablePoint(x=10, y=20)
You could also modify this approach slightly in light of Azat Ibrakov's answer, using the functools.singledispatch decorator. (It was new to me -- he should get all credit for this part of the answer, but I thought I'd write it up for the sake of completeness.)
This would look something like the below. Notice that we still have to special-case namedtuples because they break the tuple constructor interface. That hadn't bothered me before, but now it feels like a really annoying design flaw. Also, I set things up so that the final fmap function uses the expected argument order. (I wanted to use mmap instead of fmap because "Mappable" is a more Pythonic name than "Functor" IMO. But mmap is already a built-in library! Darn.)
import functools
from collections.abc import Sequence, Mapping, Set

@functools.singledispatch
def _fmap(obj, f):
    raise TypeError('obj is not mappable')

@_fmap.register(Sequence)
def _fmap_sequence(obj, f):
    if isinstance(obj, str):
        return ''.join(map(f, obj))
    if hasattr(obj, '_make'):
        return type(obj)._make(map(f, obj))
    else:
        return type(obj)(map(f, obj))

@_fmap.register(Set)
def _fmap_set(obj, f):
    return type(obj)(map(f, obj))

@_fmap.register(Mapping)
def _fmap_mapping(obj, f):
    return type(obj)((k, f(v)) for k, v in obj.items())

def fmap(f, obj):
    return _fmap(obj, f)
A few tests:
>>> fmap(double, [1, 2, 3])
[2, 4, 6]
>>> fmap(double, {1, 2, 3})
{2, 4, 6}
>>> fmap(double, {'a': 1, 'b': 2, 'c': 3})
{'a': 2, 'b': 4, 'c': 6}
>>> fmap(double, 'double')
'ddoouubbllee'
>>> Point = namedtuple('Point', ['x', 'y', 'z'])
>>> fmap(double, Point(x=1, y=2, z=3))
Point(x=2, y=4, z=6)
A final note on breaking interfaces
Neither of these approaches can guarantee that this will work for all things recognized as Sequences, and so on, because the ABC mechanism doesn't check function signatures. This is a problem not only for constructors, but also for all other methods. And it's unavoidable without type annotations.
In practice, however, it probably doesn't matter much. If you find yourself using a tool that breaks interface conventions in weird ways, consider using a different tool. (I'd actually say that goes for namedtuples too, as much as I like them!) This is the "consenting adults" philosophy behind many Python design decisions, and it has worked pretty well for the last couple of decades.
I have created a custom class, and I want to use the ** operator on an instance to pass it to a function. I have already defined __getitem__ and __iter__, but when I try f(**my_object), I get
`TypeError: argument must be a mapping, not 'MyClass'`
What are the minimum required methods so that the custom class qualifies as a mapping?
** is not an operator, it is part of the call syntax:
If the syntax **expression appears in the function call, expression must evaluate to a mapping, the contents of which are treated as additional keyword arguments.
So if your class implements the Mapping methods, then you should be good to go. You'll need more than just __getitem__ and __iter__ here.
A Mapping is a Collection, so must define at least __getitem__, __iter__, and __len__; in addition most of __contains__, keys, items, values, get, __eq__, and __ne__ would be expected. If your custom class directly inherits from collections.abc.Mapping, you only need to implement the first three.
Demo:
>>> from collections.abc import Mapping
>>> class DemoMapping(Mapping):
...     def __init__(self, a=None, b=None, c=None):
...         self.a, self.b, self.c = a, b, c
...     def __len__(self): return 3
...     def __getitem__(self, name): return vars(self)[name]
...     def __iter__(self): return iter('abc')
...
>>> def foo(a, b, c):
...     print(a, b, c)
...
>>> foo(**DemoMapping(42, 'spam', 'eggs'))
42 spam eggs
If you run this under a debugger, you'll see that Python calls the .keys() method, which returns a dictionary view, which then delegates to the custom class __iter__ method when the view is iterated over. The values are then retrieved with a series of __getitem__ calls. So for your specific case, what was missing was the .keys() method.
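In fact, in CPython you don't even need the full Mapping interface for ** unpacking; a sketch with just keys() and __getitem__ already works (the class and function names here are made up for illustration):

```python
class Minimal:
    """Not a Mapping subclass; just the two hooks ** unpacking uses."""
    def __init__(self, **kwargs):
        self._data = dict(kwargs)

    def keys(self):
        return self._data.keys()

    def __getitem__(self, key):
        return self._data[key]

def foo(a, b):
    return (a, b)

print(foo(**Minimal(a=1, b=2)))  # (1, 2)
```

Inheriting from collections.abc.Mapping is still the more robust route, since it also gives you items(), get(), equality, and the in operator for free.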
In addition, note that Python may enforce that the keys are strings!
>>> class Numeric(Mapping):
...     def __getitem__(self, name): return {1: 42, 7: 'spam', 11: 'eggs'}[name]
...     def __len__(self): return 3
...     def __iter__(self): return iter((1, 7, 11))
...
>>> dict(Numeric())
{1: 42, 7: 'spam', 11: 'eggs'}
>>> def foo(**kwargs): print(kwargs)
...
>>> foo(**Numeric())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: foo() keywords must be strings
Emulating container types
The first set of methods is used [...] to emulate a mapping...
It is also recommended that mappings provide the methods keys(), values(), items(), get(), clear(), setdefault(), pop(), popitem(), copy(), and update() behaving similar to those for Python’s standard dictionary objects.
It is recommended that [...] mappings [...] implement the __contains__() method to allow efficient use of the in operator...
It is further recommended that [...] mappings [...] implement the __iter__() method to allow efficient iteration through the container
Is there a way to make a method in a Python class modify its data in place, specifically for lists?
For example, I want to write a function that behaves like list.append() by modifying the original list instead of returning a new one.
I already have:
class newlist(list):
    def add(self, otherlist):
        self = self + otherlist
A method written like that does not seem to modify the variable it is called on.
Obviously, I could add return self at the end, but then it would have to be called with mylist = mylist.add(stuff) to actually modify mylist. How do I write the function so it will modify mylist when called with just mylist.add(stuff)?
Since newlist is a subclass of list it already has a method that does exactly what you want: extend. You don't have to write any code in newlist at all.
But if you really want to reinvent the wheel you can call extend within your new add method and get the same result:
class newlist(list):
    def add(self, otherlist):
        self.extend(otherlist)
mylist = newlist()
mylist.append(0)
mylist.extend([1,2,3])
print(mylist)
mylist = newlist()
mylist.append(0)
mylist.add([1,2,3])
print(mylist)
[0, 1, 2, 3]
[0, 1, 2, 3]
Plain assignment to self will rebind it; self is bound to the new object, and you've lost the reference to the original object.
The easiest approach here is to use list's existing overload for augmented assignment, +=:
class newlist(list):
    def add(self, otherlist):
        self += otherlist
That mutates self in place, rather than making a new object and reassigning it (it works because list is a mutable type; it wouldn't work for an immutable type without an overload for +=). You could also implement it as self.extend(otherlist), or for extra cleverness, don't even bother to write a Python level implementation at all and just alias the existing list method:
class newlist(list):
    add = list.__iadd__  # Or add = list.extend
Since the definition of add is basically identical to the existing += or list.extend behavior, just under a new name, aliasing concisely gets the performance of the built-in method; the only downside is that introspection (print(newlist.add)) will not indicate that the function's name is add (because it's __iadd__ or extend; aliasing doesn't change the function metadata).
Try using the in-place addition += for your function:
class newlist(list):
    def add(self, other):
        self += other
a = newlist([1,2])
b = newlist([3,4])
a.add(b)
a
# returns:
[1, 2, 3, 4]
I'm reviewing some old Python code and came across this 'pattern' frequently:
class Foo(object):
    def __init__(self, other=None):
        if other:
            self.__dict__ = dict(other.__dict__)
Is this how a copy constructor is typically implemented in Python?
Note that the attributes aren't copied, they are shared.
>>> a = Foo()
>>> a.x=[1,2,3]
>>> b = Foo(a)
>>> b.x[2] = 4
>>> a.x
[1, 2, 4]
This is a way to copy all attributes from one object to another one. However note that:
The object passed to the __init__ method may have any type (not the same type as the object being created).
Only object attributes are copied (class attributes and methods are not).
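If independent copies are wanted instead, one possible variant (swapping in copy.deepcopy, which the original pattern does not use) deep-copies the attribute dictionary:

```python
import copy

class Foo(object):
    def __init__(self, other=None):
        if other is not None:
            # deep-copy so mutable attributes are not shared with the source
            self.__dict__ = copy.deepcopy(other.__dict__)

a = Foo()
a.x = [1, 2, 3]
b = Foo(a)
b.x[2] = 4
print(a.x)  # [1, 2, 3] -- the original is unchanged
print(b.x)  # [1, 2, 4]
```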