Implicitly binding callable objects to instances in Python

I have this code:
class LFSeq: # lazy infinite sequence with new elements from func
    def __init__(self, func):
        self.evaluated = []
        self.func = func

    class __iter__:
        def __init__(self, seq):
            self.index = 0
            self.seq = seq

        def next(self):
            if self.index >= len(self.seq.evaluated):
                self.seq.evaluated += [self.seq.func()]
            self.index += 1
            return self.seq.evaluated[self.index - 1]
And I explicitly want LFSeq.__iter__ to become bound to an instance of LFSeq, just like any user-defined function would have been.
It doesn't work this way, though, because only user-defined functions are bound, not classes.
When I introduce a function decorator like
def bound(f):
    def dummy(*args, **kwargs):
        return f(*args, **kwargs)
    return dummy
then I can decorate __iter__ with it and it works:
...
@bound
class __iter__:
    ...
This feels somehow hacky and inconsistent however. Is there any other way? Should it be that way?
I guess yes, because otherwise LFSeq.__iter__ and LFSeq(None).__iter__ wouldn't be the same object anymore (i.e. the class object). Maybe the whole business of bound functions should have been syntactic sugar instead of living in the runtime. But then, on the other side, syntactic sugar shouldn't really depend on content. I guess there has to be a tradeoff at some place.

The easiest solution for what you are trying to do is to define your __iter__() method as a generator function:
class LFSeq(object):
    def __init__(self, func):
        self.evaluated = []
        self.func = func

    def __iter__(self):
        index = 0
        while True:
            if index == len(self.evaluated):
                self.evaluated.append(self.func())
            yield self.evaluated[index]
            index += 1
Your approach would have to deal with lots of subtleties of the Python object model, and there's no reason to go that route.
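For illustration, here is a quick usage sketch of the generator version (itertools.count is just a stand-in for any element-producing function):
from itertools import count, islice

seq = LFSeq(count().__next__)  # func lazily yields 0, 1, 2, ...
print(list(islice(seq, 5)))    # [0, 1, 2, 3, 4]
print(list(islice(seq, 3)))    # [0, 1, 2] -- served from the cache
print(seq.evaluated)           # [0, 1, 2, 3, 4]
Each iterator restarts at index 0, so every consumer sees the same cached prefix before new elements are computed.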

In my opinion, the best solution is Sven's, no doubt about it. That said, what you are trying to do really seems extremely hackish - I mean, defining __iter__ as a class. It will not work because declaring a class inside another one is not like defining a method; it is like defining an attribute. The code
class LFSeq:
    class __iter__:
is roughly equivalent to an assignment that creates a class attribute:
class LFSeq:
    __iter__ = type('__iter__', (), ...)
Then, every time you define an attribute inside a class body like this, it is attached to the class itself, not to specific instances.
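The deeper reason functions behave differently is the descriptor protocol: plain functions implement __get__, so attribute access through an instance produces a bound method, while classes implement no such hook. A small sketch of the difference:
class Demo:
    def method(self):
        return self

    class Inner:  # a nested class is just another class attribute
        pass

d = Demo()
print(d.method)  # <bound method Demo.method of ...> -- binding happened
print(d.Inner)   # <class '__main__.Demo.Inner'> -- no binding
print(Demo.method.__get__(d, Demo))  # what attribute access does for functions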
I think you should follow Sven's solution, but if you really want to define a class for some other reason, it seems you are lucky, because your iterator class does not depend on anything from the LFSeq instance itself. Just define the iterator class outside:
class Iterator(object):
    def __init__(self, seq):
        self.index = 0
        self.seq = seq

    def next(self):
        if self.index >= len(self.seq.evaluated):
            self.seq.evaluated += [self.seq.func()]
        self.index += 1
        return self.seq.evaluated[self.index - 1]
and instantiate it inside the LFSeq.__iter__() method:
class LFSeq(object): # lazy infinite sequence with new elements from func
    def __init__(self, func):
        self.evaluated = []
        self.func = func

    def __iter__(self):
        return Iterator(self)
If you eventually need to bind the iterator class to the instance, you can define the iterator class inside LFSeq.__init__(), store it on a self attribute, and instantiate it in LFSeq.__iter__():
class LFSeq(object): # lazy infinite sequence with new elements from func
    def __init__(self, func):
        lfseq_self = self # for using inside the iterator class
        class Iterator(object): # iterator class defined inside __init__
            def __init__(self):
                self.index = 0
                self.seq = lfseq_self # using the outside self
            def next(self):
                if self.index >= len(self.seq.evaluated):
                    self.seq.evaluated += [self.seq.func()]
                self.index += 1
                return self.seq.evaluated[self.index - 1]
        self.iterator_class = Iterator # storing the iterator class
        self.evaluated = []
        self.func = func

    def __iter__(self):
        return self.iterator_class() # creating an iterator
As I have said, however, Sven's solution seems cleaner. I just answered to try to explain why your code did not behave as you expected, and to provide some info on how to do what you want to do - which may be useful sometimes nonetheless.


Return a custom value when a class method is accessed as an attribute, but still allow for it to perform a computation when called?

Specifically, I would want MyClass.my_method to be used for lookup of a value in the class dictionary, but MyClass.my_method() to be a method that accepts arguments and performs a computation to update an attribute in MyClass and then returns MyClass with all its attributes (including the updated one).
I am thinking that this might be doable with Python's descriptors (maybe overriding __get__ or __call__), but I can't figure out how this would look. I understand that the behavior might be confusing, but I am interested in whether it is possible (and whether there are any other major caveats).
I have seen that you can do something similar for classes and functions by overriding __repr__, but I can't find a similar way for a method within a class. My returned value will also not always be a string, which seems to prohibit the __repr__-based approaches mentioned in these two questions:
Possible to change a function's repr in python?
How to create a custom string representation for a class object?
Thank you Joel for the minimal implementation. I found that the remaining problem is the lack of initialization of the parent. Since I did not find a generic way of initializing it, I need to check for attributes in the case of list/dict and add the initialization values to the parent accordingly.
This addition to the code should make it work for lists/dicts:
def classFactory(parent, init_val, target):
    class modifierClass(parent):
        def __init__(self, init_val):
            super().__init__()
            dict_attr = getattr(parent, "update", None)
            list_attr = getattr(parent, "extend", None)
            if callable(dict_attr): # parent is dict
                self.update(init_val)
            elif callable(list_attr): # parent is list
                self.extend(init_val)
            self.target = target
        def __call__(self, *args):
            self.target.__init__(*args)
    return modifierClass(init_val)

class myClass:
    def __init__(self, init_val=''):
        self.method = classFactory(init_val.__class__, init_val, self)
Unfortunately, we need to add each case separately, but this works as intended.
A slightly less verbose way to write the above is the following:
def classFactory(parent, init_val, target):
    class modifierClass(parent):
        def __init__(self, init_val):
            if isinstance(init_val, list):
                self.extend(init_val)
            elif isinstance(init_val, dict):
                self.update(init_val)
            self.target = target
        def __call__(self, *args):
            self.target.__init__(*args)
    return modifierClass(init_val)

class myClass:
    def __init__(self, init_val=''):
        self.method = classFactory(init_val.__class__, init_val, self)
As jasonharper commented,
MyClass.my_method() works by looking up MyClass.my_method, and then attempting to call that object. So the result of MyClass.my_method cannot be a plain string, int, or other common data type [...]
The trouble comes specifically from reusing the same name for these two behaviors, which is very confusing, just as you said. So, don't do it.
But for the sole interest of it, you could try to proxy the value of the property with an object that returns the original MyClass instance when called, use an actual setter to perform any computation you want, and also forward arbitrary attributes to the proxied value.
class MyClass:
    _my_method = whatever

    @property
    def my_method(self):
        my_class = self
        class Proxy:
            def __init__(self, value):
                self.__proxied = value
            def __call__(self, value):
                my_class.my_method = value
                return my_class
            def __getattr__(self, name):
                return getattr(self.__proxied, name)
            def __str__(self):
                return str(self.__proxied)
            def __repr__(self):
                return repr(self.__proxied)
        return Proxy(self._my_method)

    @my_method.setter
    def my_method(self, value):
        # your computations
        self._my_method = value
a = MyClass()
b = a.my_method('do not do this at home')
a is b
# True
a.my_method.split(' ')
# ['do', 'not', 'do', 'this', 'at', 'home']
And then duck typing will bite you, forcing you to delegate all kinds of magic methods to the proxied value in the proxy class, until the poor codebase where you want to inject this is satisfied with how those values quack.
This is a minimal implementation of Guilherme's answer that updates the method instead of a separate modifiable parameter:
def classFactory(parent, init_val, target):
    class modifierClass(parent):
        def __init__(self, init_val):
            self.target = target
        def __call__(self, *args):
            self.target.__init__(*args)
    return modifierClass(init_val)

class myClass:
    def __init__(self, init_val=''):
        self.method = classFactory(init_val.__class__, init_val, self)
This and the original answer both work well for single values, but lists and dictionaries come back empty instead of with the expected values, and I am not sure why, so help is appreciated here.

Why can't I inherit from map in Python?

I want to write a self-defined class that inherits from the map class.
class MapT(map):
    def __init__(self, iii):
        self.obj = iii
But I can't initialize it.
# Example init object
ex = map(None, ["", "1", "2"])
exp1 = MapT(ex)
# TypeError: map() must have at least two arguments.
exp1 = MapT(None, ex)
# TypeError: __init__() takes 2 positional arguments but 3 were given
How do I create a class that inherits from map in Python?
Or why can't I inherit from map in Python?
Added:
What I want to achieve is to add custom methods to an iterable object:
def iter_z(self_obj):
    class IterC(type(self_obj)):
        def __init__(self, self_obj):
            super(IterC, self).__init__(self_obj)
            self.obj = self_obj
        def map(self, func):
            return iter_z(list(map(func, self.obj))) # I want to remove "list" here, but I can't; otherwise it causes a TypeError
        def filter(self, func):
            return iter_z(list(filter(func, self.obj))) # I want to remove "list" here, but I can't; otherwise it causes a TypeError
        def list(self):
            return iter_z(list(self.obj))
        def join(self, Jstr):
            return Jstr.join(self)
    return IterC(self_obj)
So that I can do this:
a = iter_z([1, 3, 5, 7, 9, 100])
a.map(lambda x: x + 65).filter(lambda x: x <= 90).map(lambda x: chr(x)).join("")
# BDFHJ
Instead of this:
"".join(map(lambda x:chr(x),filter(lambda x:x<=90,map(lambda x:x+65,a))))
You shouldn't be inheriting from the object you're wrapping. That's because your API is different from that type's, and there's no good way to ensure you can build a new instance of the class properly. Your map situation is an example of this: your __init__ expects a different number of arguments than map.__new__ does, and there's no good way to reconcile them.
Instead of inheriting from the class, just wrap around it. This might limit the API of the type that can be used, but you're mostly focused on the iterator protocol, so probably __iter__ and __next__ are all you need:
class IterZ:
    def __init__(self, iterable):
        self.iterator = iter(iterable)

    def __iter__(self):
        return self

    def __next__(self):
        return next(self.iterator)

    def map(self, func):
        return IterZ(map(func, self.iterator))

    def filter(self, func):
        return IterZ(filter(func, self.iterator))

    def join(self, Jstr):
        return Jstr.join(self.iterator)
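With that wrapper, the chained style from the question works as intended; a quick check:
a = IterZ([1, 3, 5, 7, 9, 100])
print(a.map(lambda x: x + 65).filter(lambda x: x <= 90).map(chr).join(""))
# BDFHJ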

Python: passing functions as arguments to initialize the methods of an object. Pythonic or not?

I'm wondering if there is an accepted way to pass functions as parameters to objects (i.e. to define methods of that object in the init block).
More specifically, how would one do this if the function depends on the object's parameters?
It seems pythonic enough to pass functions to objects; functions are objects like anything else:
def foo(a, b):
    return a * b

class FooBar(object):
    def __init__(self, func):
        self.func = func

foobar = FooBar(foo)
foobar.func(5, 6)
# 30
So that works; the problem shows up as soon as you introduce dependence on the object's other properties.
def foo1(self, b):
    return self.a * b

class FooBar1(object):
    def __init__(self, func, a):
        self.a = a
        self.func = func

# Now, if you try the following:
foobar1 = FooBar1(foo1, 4)
foobar1.func(3)
# You'll get the following error:
# TypeError: foo1() missing 1 required positional argument: 'b'
This may simply violate some holy principles of OOP in python, in which case I'll just have to do something else, but it also seems like it might prove useful.
I've thought of a few possible ways around this, and I'm wondering which (if any) is considered most acceptable.
Solution 1
foobar1.func(foobar1, 3)
# 12
# seems ugly
Solution 2
class FooBar2(object):
    def __init__(self, func, a):
        self.a = a
        self.func = lambda x: func(self, x)

# Actually the same as the above, but now the dirty inner workings are hidden away.
# This would not translate to functions with multiple arguments unless you do some ugly unpacking.
foobar2 = FooBar2(foo1, 7)
foobar2.func(3)
# 21
Any ideas would be appreciated!
Passing functions to an object is fine. There's nothing wrong with that design.
If you want to turn that function into a bound method, though, you have to be a little careful. If you do something like self.func = lambda x: func(self, x), you create a reference cycle: self has a reference to self.func, and the lambda stored in self.func has a reference to self. Python's garbage collector does detect reference cycles and cleans them up eventually, but that can sometimes take a long time. I've had reference cycles in my code in the past, and those programs often used upwards of 500 MB of memory because Python would not garbage-collect unneeded objects often enough.
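A quick way to see the cycle in action (a sketch that uses weakref only to probe object lifetime):
import gc
import weakref

class Leaky:
    def __init__(self, func):
        self.func = lambda x: func(self, x)  # self -> lambda -> self cycle

obj = Leaky(lambda s, x: x)
probe = weakref.ref(obj)
del obj
print(probe() is None)  # False: refcounting alone cannot free the cycle
gc.collect()
print(probe() is None)  # True: only the cycle collector reclaimed it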
The correct solution is to use the weakref module to create a weak reference to self, for example like this:
import weakref

class WeakMethod:
    def __init__(self, func, instance):
        self.func = func
        self.instance_ref = weakref.ref(instance)
        self.__wrapped__ = func # this makes things like `inspect.signature` work

    def __call__(self, *args, **kwargs):
        instance = self.instance_ref()
        return self.func(instance, *args, **kwargs)

    def __repr__(self):
        cls_name = type(self).__name__
        return '{}({!r}, {!r})'.format(cls_name, self.func, self.instance_ref())

class FooBar(object):
    def __init__(self, func, a):
        self.a = a
        self.func = WeakMethod(func, self)

f = FooBar(foo1, 7)
print(f.func(3)) # 21
All of the following solutions create a reference cycle and are therefore bad:
self.func = MethodType(func, self)
self.func = func.__get__(self, type(self))
self.func = functools.partial(func, self)
Inspired by this answer, a possible solution could be:
from types import MethodType

class FooBar1(object):
    def __init__(self, func, a):
        self.a = a
        self.func = MethodType(func, self)

def foo1(self, b):
    return self.a * b

def foo2(self, b):
    return 2 * self.a * b

foobar1 = FooBar1(foo1, 4)
foobar2 = FooBar1(foo2, 4)
print(foobar1.func(3))
# 12
print(foobar2.func(3))
# 24
The documentation on types.MethodType doesn't tell much, however:
    types.MethodType
    The type of methods of user-defined class instances.
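For what it's worth, MethodType(func, obj) produces exactly the same kind of object that ordinary attribute access on an instance creates; a small sketch:
from types import MethodType

class C:
    def method(self):
        return self.x

c = C()
c.x = 42
bound = MethodType(C.method, c)      # manually bind C.method to c
print(bound())                       # 42
print(type(c.method) is MethodType)  # True -- ordinary bound methods have this type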

Overload [] python operator and chaining methods using a memory reference

Is it possible to overload the [] operator (__getitem__) in Python and chain methods using the initial memory reference?
Imagine I have a class Math that accepts a list of integer numbers, like this:
class Math(object):
    def __init__(self, *args, **kwargs):
        assert all(isinstance(item, int) for item in args)
        self.list = list(args)

    def add_one(self):
        for index in range(len(self.list)):
            self.list[index] += 1
And I want to do something like this:
instance = Math(1, 2, 3, 4, 5)
instance[2:4].add_one()
After executing this code, instance.list should be [1, 2, 4, 5, 5]. Is this possible?
I know I could do something like add_one(2,4), but this is not the style of API I would like to have if possible.
Thanks
As Winston mentions, you need to implement an auxiliary object:
class Math(object):
    def __init__(self, *args, **kwargs):
        self.list = list(args)

    def __getitem__(self, i):
        return MathSlice(self, i)

class MathSlice(object):
    def __init__(self, math, slice):
        self.math = math
        self.slice = slice

    def add_one(self):
        for i in range(*self.slice.indices(len(self.math.list))):
            self.math.list[i] += 1

instance = Math(1, 2, 3, 4, 5)
instance[2:4].add_one()
print(instance.list)
# [1, 2, 4, 5, 5]
How you share the math object with the MathSlice object depends on what you want the semantics to be if the math object changes.
Numpy does something like this.
The __getitem__ method will receive a slice object. See http://docs.python.org/reference/datamodel.html for details. You'll need to return a new object, but implement that object such that it modifies the original list.
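As a side note, slice.indices is what makes the loop in add_one() above safe: it clamps the slice to the sequence length and fills in defaults:
s = slice(2, 4)
print(s.indices(5))                # (2, 4, 1) -- start, stop, step for a length-5 list
print(list(range(*s.indices(5))))  # [2, 3] -- exactly the positions add_one touches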

How does this class implement the "__iter__" method without implementing "next"?

I have the following code in django.template:
class Template(object):
    def __init__(self, template_string, origin=None, name='<Unknown Template>'):
        try:
            template_string = smart_unicode(template_string)
        except UnicodeDecodeError:
            raise TemplateEncodingError("Templates can only be constructed from unicode or UTF-8 strings.")
        if settings.TEMPLATE_DEBUG and origin is None:
            origin = StringOrigin(template_string)
        self.nodelist = compile_string(template_string, origin)
        self.name = name

    def __iter__(self):
        for node in self.nodelist:
            for subnode in node:
                yield subnode

    def render(self, context):
        "Display stage -- can be called many times"
        return self.nodelist.render(context)
The part I am confused about is below. How does this __iter__ method work? I can't find any corresponding next method.
def __iter__(self):
    for node in self.nodelist:
        for subnode in node:
            yield subnode
This is the only way that I know how to implement __iter__:
class a(object):
    def __init__(self, x=10):
        self.x = x

    def __iter__(self):
        return self

    def next(self):
        if self.x > 0:
            self.x -= 1
            return self.x
        else:
            raise StopIteration

ainst = a()
for item in ainst:
    print item
In your answers, try to use code examples rather than text, because my English is not very good.
From the docs:
    If a container object's __iter__() method is implemented as a generator, it will automatically return an iterator object (technically, a generator object) supplying the __iter__() and __next__() methods.
Here is your provided example using a generator:
class A:
    def __init__(self, x=10):
        self.x = x

    def __iter__(self):
        for i in reversed(range(self.x)):
            yield i

a = A()
for item in a:
    print(item)
That __iter__ method returns a Python generator (see the documentation), as it uses the yield keyword.
The generator will provide the next() method automatically; quoting the documentation:
    What makes generators so compact is that the __iter__() and next() methods are created automatically.
EDIT:
Generators are really useful. If you are not familiar with them, I suggest you read up on them and play around with some test code.
Here is some more info on iterators and generators from StackOverflow.
