Overload the [] operator (__getitem__) in Python and chain methods on the original object

Is it possible to overload the [] operator (__getitem__) in Python and chain methods so that they operate on the original object?
Imagine I have a class Math that accepts a list of integer numbers, like this:

class Math(object):
    def __init__(self, *args, **kwargs):
        assert all(isinstance(item, int) for item in args)
        self.list = list(args)

    def add_one(self):
        for index in range(len(self.list)):
            self.list[index] += 1
And I want to do something like this:
instance = Math(1, 2, 3, 4, 5)
instance[2:4].add_one()

After executing this code, instance.list should be [1, 2, 4, 5, 5]. Is this possible?
I know I could do something like add_one(2, 4), but that is not the style of API I would like to have, if possible.
Thanks

As Winston mentions, you need to implement an auxiliary object:
class Math(object):
    def __init__(self, *args, **kwargs):
        self.list = list(args)

    def __getitem__(self, i):
        return MathSlice(self, i)

class MathSlice(object):
    def __init__(self, math, slice):
        self.math = math
        self.slice = slice

    def add_one(self):
        for i in range(*self.slice.indices(len(self.math.list))):
            self.math.list[i] += 1

instance = Math(1, 2, 3, 4, 5)
instance[2:4].add_one()
print(instance.list)  # [1, 2, 4, 5, 5]
How you share the math object with the MathSlice object depends on what you want the semantics to be if the math object changes.

Numpy does something like this.
The __getitem__ method will receive a slice object. See http://docs.python.org/reference/datamodel.html for details. You'll need to return a new object, but implement that object such that it modifies the original list.
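To see what __getitem__ actually receives, here is a small illustrative probe (the Probe class is made up for demonstration):

class Probe(object):
    def __getitem__(self, key):
        return key

p = Probe()
print(p[2:4])                  # slice(2, 4, None)
print(slice(2, 4).indices(5))  # (2, 4, 1): start, stop, step clamped to a length of 5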

Related

Return a custom value when a class method is accessed as an attribute, but still allow for it to perform a computation when called?

Specifically, I would want MyClass.my_method to be used for lookup of a value in the class dictionary, but MyClass.my_method() to be a method that accepts arguments and performs a computation to update an attribute in MyClass and then returns MyClass with all its attributes (including the updated one).
I am thinking that this might be doable with Python's descriptors (maybe overriding __get__ or __call__), but I can't figure out how this would look. I understand that the behavior might be confusing, but I am interested if it is possible (and if there are any other major caveats).
I have seen that you can do something similar for classes and functions by overriding __repr__, but I can't find a similar way for a method within a class. My returned value will also not always be a string, which seems to prohibit the __repr__-based approaches mentioned in these two questions:
Possible to change a function's repr in python?
How to create a custom string representation for a class object?
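For what it's worth, a minimal sketch of the __get__ idea (ComputedAttr and _Proxy are hypothetical names, and the value lives on the descriptor, so this is illustrative only, not production code): attribute access returns the stored value wrapped in a callable, and calling it updates the value and returns the owning instance.

class ComputedAttr:
    def __init__(self, value):
        self.value = value

    def __get__(self, obj, objtype=None):
        descriptor = self
        # note: the value is stored on the descriptor, so it is shared by
        # all instances of the owning class; good enough for a sketch

        class _Proxy(type(descriptor.value)):
            def __call__(self, *args):
                descriptor.value = args[0]  # the "computation" (placeholder)
                return obj

        return _Proxy(descriptor.value)

class MyClass:
    my_method = ComputedAttr('hello')

a = MyClass()
print(a.my_method)          # hello (attribute-style access)
b = a.my_method('world')    # call-style access updates the value
print(b is a, a.my_method)  # True world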
Thank you Joel for the minimal implementation. I found that the remaining problem is the lack of initialization of the parent class. Since I did not find a generic way to initialize it, I check for attributes that identify a list or a dict and add the initialization values to the parent accordingly.
This addition to the code should make it work for lists/dicts:
def classFactory(parent, init_val, target):
    class modifierClass(parent):
        def __init__(self, init_val):
            super().__init__()
            dict_attr = getattr(parent, "update", None)
            list_attr = getattr(parent, "extend", None)
            if callable(dict_attr):  # parent is a dict
                self.update(init_val)
            elif callable(list_attr):  # parent is a list
                self.extend(init_val)
            self.target = target

        def __call__(self, *args):
            self.target.__init__(*args)

    return modifierClass(init_val)

class myClass:
    def __init__(self, init_val=''):
        self.method = classFactory(init_val.__class__, init_val, self)
Unfortunately, we need to add cases one by one, but this works as intended.
A slightly less verbose way to write the above is the following:
def classFactory(parent, init_val, target):
    class modifierClass(parent):
        def __init__(self, init_val):
            if isinstance(init_val, list):
                self.extend(init_val)
            elif isinstance(init_val, dict):
                self.update(init_val)
            self.target = target

        def __call__(self, *args):
            self.target.__init__(*args)

    return modifierClass(init_val)

class myClass:
    def __init__(self, init_val=''):
        self.method = classFactory(init_val.__class__, init_val, self)
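A short usage sketch of the factory above (output shown in the comments; the list case goes through the extend branch):

obj = myClass([1, 2, 3])
print(obj.method)   # [1, 2, 3] - behaves like the underlying list
obj.method([4, 5])  # calling it re-runs myClass.__init__ with the new values
print(obj.method)   # [4, 5]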
As jasonharper commented,
MyClass.my_method() works by looking up MyClass.my_method, and then attempting to call that object. So the result of MyClass.my_method cannot be a plain string, int, or other common data type [...]
The trouble comes specifically from reusing the same name for these two behaviors, which is very confusing, just as you said. So, don't do it.
But just for the interest of it, you could proxy the value of the property with an object that returns the original MyClass instance when called, use an actual setter to perform any computation you want, and also forward arbitrary attribute lookups to the proxied value.
class MyClass:
    _my_method = 'whatever'  # placeholder initial value

    @property
    def my_method(self):
        my_class = self

        class Proxy:
            def __init__(self, value):
                self.__proxied = value

            def __call__(self, value):
                my_class.my_method = value
                return my_class

            def __getattr__(self, name):
                return getattr(self.__proxied, name)

            def __str__(self):
                return str(self.__proxied)

            def __repr__(self):
                return repr(self.__proxied)

        return Proxy(self._my_method)

    @my_method.setter
    def my_method(self, value):
        # your computations
        self._my_method = value

a = MyClass()
b = a.my_method('do not do this at home')
a is b
# True
a.my_method.split(' ')
# ['do', 'not', 'do', 'this', 'at', 'home']
And down the road, duck typing will bite you, forcing you to delegate all kinds of magic methods to the proxied value in the proxy class, until the poor codebase where you want to inject this is satisfied with how those values quack.
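The reason is that Python looks magic methods up on the type, not the instance, so __getattr__ on the proxy never sees them; each one the surrounding code relies on has to be forwarded by hand. A hypothetical fragment:

class Proxy:
    def __init__(self, value):
        self.__proxied = value

    def __getattr__(self, name):
        # covers plain attributes, but NOT magic methods
        return getattr(self.__proxied, name)

    # every dunder the surrounding code uses must be forwarded explicitly
    def __len__(self):
        return len(self.__proxied)

    def __add__(self, other):
        return self.__proxied + other

    def __eq__(self, other):
        return self.__proxied == other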
This is a minimal implementation of Guilherme's answer that updates the method instead of a separate modifiable parameter:
def classFactory(parent, init_val, target):
    class modifierClass(parent):
        def __init__(self, init_val):
            self.target = target

        def __call__(self, *args):
            self.target.__init__(*args)

    return modifierClass(init_val)

class myClass:
    def __init__(self, init_val=''):
        self.method = classFactory(init_val.__class__, init_val, self)
This and the original answer both work well for single values, but lists and dictionaries are returned as empty instead of with the expected values, and I am not sure why, so help is appreciated here.

Why can't I inherit from map in Python?

I want to write a self-defined class that inherits from the map class:

class MapT(map):
    def __init__(self, iii):
        self.obj = iii

But I can't initialize it:

# Example init object
ex = map(None, ["", "1", "2"])

exp1 = MapT(ex)
# TypeError: map() must have at least two arguments.
exp1 = MapT(None, ex)
# TypeError: __init__() takes 2 positional arguments but 3 were given

How do I create a class that inherits from map in Python? Or why can't I inherit from map in Python?
Addition: what I want to achieve is to add custom methods to iterable objects:
def iter_z(self_obj):
    class IterC(type(self_obj)):
        def __init__(self, self_obj):
            super(IterC, self).__init__(self_obj)
            self.obj = self_obj

        def map(self, func):
            # I want to remove "list" here, but I can't; otherwise it causes a TypeError
            return iter_z(list(map(func, self.obj)))

        def filter(self, func):
            # I want to remove "list" here, but I can't; otherwise it causes a TypeError
            return iter_z(list(filter(func, self.obj)))

        def list(self):
            return iter_z(list(self.obj))

        def join(self, Jstr):
            return Jstr.join(self)

    return IterC(self_obj)
So that I can do this:
a = iter_z([1, 3, 5, 7, 9, 100])
a.map(lambda x: x + 65).filter(lambda x: x <= 90).map(lambda x: chr(x)).join("")
# BDFHJ

Instead of this:

"".join(map(lambda x: chr(x), filter(lambda x: x <= 90, map(lambda x: x + 65, a))))
You shouldn't be inheriting from the object you're wrapping. That's because your API is different from that type's, and there's no good way to ensure you can build a new instance of the class properly. Your map situation is an example of this: your __init__ expects a different number of arguments than map.__new__ does, and there's no good way to reconcile them.
Instead of inheriting from the class, just wrap around it. This might limit the API of the type that can be used, but you're mostly focused on the iterator protocol, so __iter__ and __next__ are probably all you need:
class IterZ:
    def __init__(self, iterable):
        self.iterator = iter(iterable)

    def __iter__(self):
        return self

    def __next__(self):
        return next(self.iterator)

    def map(self, func):
        return IterZ(map(func, self.iterator))

    def filter(self, func):
        return IterZ(filter(func, self.iterator))

    def join(self, Jstr):
        return Jstr.join(self.iterator)
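A quick check against the chained example from the question (expected output in the comment):

a = IterZ([1, 3, 5, 7, 9, 100])
print(a.map(lambda x: x + 65).filter(lambda x: x <= 90).map(chr).join(""))
# BDFHJ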

Subclass Python list to Validate New Items

I want a python list which represents itself externally as an average of its internal list items, but otherwise behaves as a list. It should raise a TypeError if an item is added that can't be cast to a float.
The part I'm stuck on is raising TypeError. It should be raised for invalid items added via any list method, like .append, .extend, +=, setting by slice, etc.
Is there a way to intercept new items added to the list and validate them?
I tried re-validating the whole list in __getattribute__, but when it's called I only have access to the old version of the list, and it doesn't even get called for initialization, for operators like +=, or for item assignment like mylist[0] = 5.
Any ideas?
Inherit from MutableSequence and implement the methods it requires, as well as any others that fall outside the scope of sequences alone - like the operators here. This lets you change the operator behavior for list-like capabilities while getting iteration and membership tests for free.
By the way, if you want to check for slices you need to do isinstance(key, slice) in your __getitem__ (and/or __setitem__) methods. Note that a single index like myList[0] is not a slice request, while myList[:0] is an actual slice request, as in the sketch below.
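A minimal sketch of that slice check (the CheckedList name and _check helper are made up for illustration):

class CheckedList(list):
    @staticmethod
    def _check(v):
        try:
            return float(v)
        except (TypeError, ValueError):
            raise TypeError("Cannot add %r" % (v,))

    def __setitem__(self, key, value):
        if isinstance(key, slice):  # myList[1:3] = [...] lands here
            value = [self._check(v) for v in value]
        else:                       # myList[0] = 5 lands here
            value = self._check(value)
        list.__setitem__(self, key, value)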
The array.array class will take care of the float part:
import array
import math

class AverageList(array.array):
    def __new__(cls, *args, **kw):
        return array.array.__new__(cls, 'd')

    def __init__(self, values=()):
        self.extend(values)

    def __repr__(self):
        if not len(self):
            return 'Empty'
        return repr(math.fsum(self) / len(self))
And some tests:
>>> s = AverageList([1,2])
>>> s
1.5
>>> s.append(9)
>>> s
4.0
>>> s.extend('lol')
Traceback (most recent call last):
File "<pyshell#117>", line 1, in <module>
s.extend('lol')
TypeError: a float is required
Actually the best answer may be: don't.
Checking all objects as they get added to the list will be computationally expensive. What do you gain by doing those checks? It seems to me that you gain very little, and I'd recommend against implementing it.
Python doesn't check types, and so trying to have a little bit of type checking for one object really doesn't make a lot of sense.
There are seven methods of the list class that add elements to the list and would have to be checked (one of them, __setslice__, exists only in Python 2). Here's one compact implementation:
def check_float(x):
    try:
        float(x)
    except (TypeError, ValueError):
        raise TypeError("Cannot add %s to AverageList" % str(x))

def modify_method(f, which_arg=0, takes_list=False):
    def new_f(*args):
        # args[0] is self, so the argument to validate is shifted by one
        if takes_list:
            for item in args[which_arg + 1]:
                check_float(item)
        else:
            check_float(args[which_arg + 1])
        return f(*args)
    return new_f

class AverageList(list):
    append = modify_method(list.append)
    extend = modify_method(list.extend, takes_list=True)
    insert = modify_method(list.insert, 1)
    __add__ = modify_method(list.__add__, takes_list=True)
    __iadd__ = modify_method(list.__iadd__, takes_list=True)
    __setitem__ = modify_method(list.__setitem__, 1)
    # __setslice__ existed only in Python 2; in Python 3, slice assignment
    # also goes through __setitem__ with a slice key, which would need an
    # isinstance(key, slice) check like the one shown above.
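A quick sanity check of the wrapped methods (note that values passed to the constructor itself still bypass validation, since __init__ is not wrapped):

al = AverageList([1, 2, 3])
al.append(4)       # fine
al += [5.5]        # fine, goes through __iadd__
al.append('nope')  # raises TypeError: Cannot add nope to AverageList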
The general approach would be to create your own class inheriting from list and overriding the specific methods like append, extend etc. This will probably also include the magic methods of the Python list (see this article for details: http://www.rafekettler.com/magicmethods.html#sequence).
For validation, you will need to override __setitem__(self, key, value).
Here's how to create a subclass using the MutableSequence abstract base class in the collections.abc module as its base class (not fully tested -- an exercise for the reader ;-):

import collections.abc

class AveragedSequence(collections.abc.MutableSequence):
    def _validate(self, x):
        try: return float(x)
        except (TypeError, ValueError): raise TypeError("Can't add {} to AveragedSequence".format(x))
    def average(self): return sum(self._list) / len(self._list)
    def __init__(self, arg): self._list = [self._validate(v) for v in arg]
    def __repr__(self): return 'AveragedSequence({!r})'.format(self._list)
    def __setitem__(self, i, value): self._list[i] = self._validate(value)
    def __delitem__(self, i): del self._list[i]
    def insert(self, i, value): return self._list.insert(i, self._validate(value))
    def __getitem__(self, i): return self._list[i]
    def __len__(self): return len(self._list)
    def __iter__(self): return iter(self._list)
    def __contains__(self, item): return item in self._list

if __name__ == '__main__':
    avgseq = AveragedSequence(range(10))
    print(avgseq)
    print(avgseq.average())
    avgseq[2] = 3
    print(avgseq)
    print(avgseq.average())
    # ..etc

Implicitly binding callable objects to instances

I have this code:
class LFSeq:  # lazy infinite sequence with new elements from func
    def __init__(self, func):
        self.evaluated = []
        self.func = func

    class __iter__:
        def __init__(self, seq):
            self.index = 0
            self.seq = seq

        def next(self):
            if self.index >= len(self.seq.evaluated):
                self.seq.evaluated += [self.seq.func()]
            self.index += 1
            return self.seq.evaluated[self.index - 1]
And I explicitly want LFSeq.__iter__ to become bound to an instance of LFSeq like any other user-defined function would be.
It doesn't work this way, though, because only user-defined functions are bound, not classes.
When I introduce a function decorator like
def bound(f):
    def dummy(*args, **kwargs):
        return f(*args, **kwargs)
    return dummy
then I can decorate __iter__ with it and it works:

...
@bound
class __iter__:
    ...
This feels somehow hacky and inconsistent, however. Is there any other way? Should it be this way?
I guess yes, because otherwise LFSeq.__iter__ and LFSeq(None).__iter__ wouldn't be the same object anymore (i.e. the class object). Maybe the whole business of bound methods should have been syntactic sugar instead of living in the runtime. But then, on the other hand, syntactic sugar shouldn't really depend on content. I guess there has to be a tradeoff at some place.
The easiest solution for what you are trying to do is to define your __iter__() method as a generator function:
class LFSeq(object):
    def __init__(self, func):
        self.evaluated = []
        self.func = func

    def __iter__(self):
        index = 0
        while True:
            if index == len(self.evaluated):
                self.evaluated.append(self.func())
            yield self.evaluated[index]
            index += 1
Your approach would have to deal with lots of subtleties of the Python object model, and there's no reason to go that route.
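A small usage sketch (the counter-based func is just an example):

import itertools

counter = itertools.count()
seq = LFSeq(lambda: next(counter))
print(list(itertools.islice(seq, 5)))  # [0, 1, 2, 3, 4]
print(seq.evaluated)                   # the same values, now cached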
In my opinion, the best solution is @Sven's, no doubt about it. That said, what you are trying to do really seems extremely hackish - I mean, defining __iter__ as a class. It will not work because declaring a class inside another one is not like defining a method; instead, it is like defining an attribute. The code

class LFSeq:
    class __iter__:

is roughly equivalent to an assignment that creates a class attribute:

class LFSeq:
    __iter__ = type('__iter__', (), ...)

Then, every time you define an attribute inside a class, it is bound to the class itself, not to specific instances.
I think you should follow @Sven's solution, but if you really want to define a class for some other reason, it seems you are lucky, because your iterator class does not depend on anything from the LFSeq instance itself. Just define the iterator class outside:
class Iterator(object):
    def __init__(self, seq):
        self.index = 0
        self.seq = seq

    def next(self):
        if self.index >= len(self.seq.evaluated):
            self.seq.evaluated += [self.seq.func()]
        self.index += 1
        return self.seq.evaluated[self.index - 1]
and instantiate it inside the LFSeq.__iter__() method:

class LFSeq(object):  # lazy infinite sequence with new elements from func
    def __init__(self, func):
        self.evaluated = []
        self.func = func

    def __iter__(self):
        return Iterator(self)
If you eventually need to bind the iterator class to the instance, you can define the iterator class inside LFSeq.__init__(), store it on a self attribute, and instantiate it in LFSeq.__iter__():

class LFSeq(object):  # lazy infinite sequence with new elements from func
    def __init__(self, func):
        lfseq_self = self  # for use inside the iterator class

        class Iterator(object):  # iterator class defined inside __init__
            def __init__(self):
                self.index = 0
                self.seq = lfseq_self  # using the outer self

            def next(self):
                if self.index >= len(self.seq.evaluated):
                    self.seq.evaluated += [self.seq.func()]
                self.index += 1
                return self.seq.evaluated[self.index - 1]

        self.iterator_class = Iterator  # storing the iterator class
        self.evaluated = []
        self.func = func

    def __iter__(self):
        return self.iterator_class()  # creating an iterator
As I have said, however, @Sven's solution seems finer. I just answered to try to explain why your code did not behave as you expected and to provide some information about how to do what you want, which may be useful sometimes nonetheless.

Create a python class that is treated as a list, but with more features?

I have a class called dataList. It is basically a list with some metadata---myDataList.data contains the (numpy) list itself, myDataList.tag contains a description, etc. I would like to be able to make myDataList[42] return the corresponding element of myDataList.data, and I would like for numpy, etc. to recognize it as a list (i.e., numpy.asarray(myDataList) returns a numpy array containing the data in myDataList). In Java, this would be as easy as declaring dataList as implementing the List interface and then just defining the necessary methods. How would you do this in Python?
Thanks.
You can subclass list and provide additional methods:
class CustomList(list):
    def __init__(self, *args, **kwargs):
        list.__init__(self, args[0])

    def foobar(self):
        return 'foobar'
CustomList inherits the methods of Python's ordinary lists and you can easily let it implement further methods and/or attributes.
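Usage is then just like a plain list (illustrative session):

cl = CustomList([1, 2, 3])
cl.append(4)
print(cl)           # [1, 2, 3, 4]
print(cl.foobar())  # foobar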
class mylist(list):
    def __init__(self, *args, **kwargs):
        # The advantage of using super() is that if you later change the
        # parent class of mylist to some other list class, you won't have
        # to change the rest of the code (unlike with jena's snippet above).
        super(mylist, self).__init__(*args, **kwargs)
        # whatever metadata you want to add, add it here
        self.tag = 'some tag'
        self.id = 3

    # you can also add custom methods
    def foobar(self):
        return 'foobar'
Now you can create instances of mylist and use them as normal lists, with your additional metadata.
>>> a = mylist([1, 2, 3, 4])
>>> a
[1, 2, 3, 4]
>>> a[2] = 3     # normal list features still work
>>> a.append(5)  # normal list features still work
>>> a
[1, 2, 3, 4, 5]
>>> a.tag        # your custom metadata
'some tag'
>>> a.id         # your custom metadata
3
>>> a.foobar()   # your custom method
'foobar'
>>> a.meta1 = 'some more'  # you can even add more metadata on the fly
>>> a.meta1                # (which you cannot do on a plain list instance)
'some more'
Define __len__, __getitem__, __iter__ and optionally other magic methods that make up a container type.
For example, a simplified range implementation:
class MyRange(object):
    def __init__(self, start, end):
        self._start = start
        self._end = end

    def __len__(self):
        return self._end - self._start

    def __getitem__(self, key):
        if key < 0 or key >= len(self):
            raise IndexError()
        return self._start + key

    def __iter__(self):
        return iter([self[i] for i in range(len(self))])
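A quick check of the container protocol; numpy.asarray consumes objects that provide __len__ and __getitem__ as sequences, which is what the question asks for:

r = MyRange(2, 6)
print(len(r))   # 4
print(r[1])     # 3
print(list(r))  # [2, 3, 4, 5]

import numpy as np
print(np.asarray(r))  # [2 3 4 5]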
