Turning an attribute into a property on an object-by-object basis? - python

I have a class of objects, most of which have one attribute that can, in 95% of cases, be implemented as a simple attribute. However, there are a few important edge cases where its value must be computed from data on another object.
What I'd like to be able to do is set myobj.gnarlyattribute = property(lambda self: self.container.x*self.k).
However, this doesn't seem to work:
>>> myfoo=foo()
>>> myfoo.spam
10
>>> import random
>>> myfoo.spam=property(lambda self: random.randint(0,20))
>>> myfoo.spam
<property object at 0x02A57420>
>>>
I suppose I could have gnarlyattribute always be a property which usually just has lambda self: self._gnarlyattribute as the getter, but that seems a little smelly. Any ideas?

As has already been pointed out, properties can only work at the class level, and they can't be set on instances. (Well, they can, but they don't do what you want.)
Therefore, I suggest using class inheritance to solve your problem:
class NoProps(object):
    def __init__(self, spam=None):
        if spam is None:
            spam = 0  # Pick a sensible default
        self.spam = spam

class Props(NoProps):
    @property
    def spam(self):
        """Docstring for the spam property"""
        return self._spam
    @spam.setter
    def spam(self, value):
        # Do whatever calculations are needed here
        import random
        self._spam = value + random.randint(0, 20)
    @spam.deleter
    def spam(self):
        del self._spam
Then when you discover that a particular object needs to have its spam attribute as a calculated property, make that object an instance of Props instead of NoProps:
a = NoProps(3)
b = NoProps(4)
c = Props(5)
print a.spam, b.spam, c.spam
# Prints 3, 4, (something between 5 and 25)
If you can tell ahead of time when you'll need calculated values in a given instance, that should do what you're looking for.
Alternately, if you can't tell that you'll need calculated values until after you've created the instance, that one's pretty straightforward as well: just add a factory method to your class, which will copy the properties from the "old" object to the "new" one. Example:
class NoProps(object):
    def __init__(self, spam=None):
        if spam is None:
            spam = 0  # Pick a sensible default
        self.spam = spam

    @classmethod
    def from_other_obj(cls, other_obj):
        """Factory method to copy other_obj's values"""
        # The call to cls() is where the "magic" happens
        obj = cls()
        obj.spam = other_obj.spam
        # Copy any other properties here
        return obj

class Props(NoProps):
    @property
    def spam(self):
        """Docstring for the spam property"""
        return self._spam
    @spam.setter
    def spam(self, value):
        # Do whatever calculations are needed here
        import random
        self._spam = value + random.randint(0, 20)
    @spam.deleter
    def spam(self):
        del self._spam
Since we call cls() inside the factory method, it will make an instance of whichever class it was invoked on. Thus the following is possible:
a = NoProps(3)
b = NoProps.from_other_obj(a)
c = NoProps.from_other_obj(b)
print(a.spam, b.spam, c.spam)
# Prints 3, 3, 3
# I just discovered that c.spam should be calculated
# So convert it into a Props object
c = Props.from_other_obj(c)
print(a.spam, b.spam, c.spam)
# Prints 3, 3, (something between 3 and 23)
One or the other of these two solutions should be what you're looking for.

The magic to make properties work only exists at the class level. There is no way to make properties work per-object.
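If you really do need per-object behavior, a common workaround (a rough sketch of the idea the question itself calls "a little smelly"; the helper name set_spam_getter is made up for illustration) is to keep a single class-level property whose getter checks for a per-instance override:

    import random

    class Foo(object):
        def __init__(self):
            self._spam = 10

        @property
        def spam(self):
            # If this particular instance was given its own getter, use it;
            # otherwise fall back to the plain stored value.
            getter = getattr(self, '_spam_getter', None)
            if getter is not None:
                return getter(self)
            return self._spam

        def set_spam_getter(self, func):
            # Per-instance hook: func takes the instance and returns the value.
            self._spam_getter = func

    myfoo = Foo()
    myfoo.set_spam_getter(lambda self: random.randint(0, 20))
    print(myfoo.spam)  # computed per access; other Foo instances keep the plain attribute

The property itself still lives on the class; only the fallback logic makes individual instances behave differently.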


Dynamically add method to class from property function?

I think a code sample will better speak for itself:
class SomeClass:
    example = create_get_method()
Yes, that's all – ideally.
In that case, create_get_method would add a get_example() to SomeClass in a way that it can be accessed via an instance of SomeClass:
obj = SomeClass()
obj.get_example()  # <- returns the value of self.example
(Of course, the idea is to implement a complex version of get_contact, that's why I want to do that in a non-repetitive way, and this is a simplified version that represents well the issue.)
I don't know if that's possible, because it requires access to the property name (example) and the class (SomeClass), since these cannot be guessed in advance (that function will be used on many different classes).
I know it's something possible, because that's kind of what SQLAlchemy does with their relationship() function on a class:
class Model(BaseModel):
    id = ...
    contact_id = db.Integer(db.ForeignKey..)
    contact = relationship('contact')  # <-- This !
How can this be done?
Objects bound to class-level variables can have a __set_name__ method that will be called immediately after the class object has been created. It will be called with two arguments, the class object, and the name of the variable the object is saved as in the class.
You could use this to create your extra getter method, though I'm not sure why exactly you want to (you could make the object a descriptor instead, which would probably be better than adding a separate getter function to the parent class).
class create_get_method:
    def __set_name__(self, owner, name):
        def getter(self):
            return getattr(self, name)
        getter_name = f"get_{name}"
        getter.__name__ = getter_name
        setattr(owner, getter_name, getter)
    # you might also want a __get__ method here to give a default value (like None)
Here's how that would work:
>>> class Test:
... example = create_get_method()
...
>>> t = Test()
>>> print(t.get_example())
<__main__.create_get_method at 0x000001E0B4D41400>
>>> t.example = "foo"
>>> print(t.get_example())
foo
You could change the value returned by default (in the first print call), so that the create_get_method object isn't as exposed. Just add a __get__ method to the create_get_method class.
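For instance, a minimal sketch of that suggestion (the choice of None as the default, and the instance-is-None check, are my own assumptions rather than part of the answer above):

    class create_get_method:
        def __set_name__(self, owner, name):
            def getter(self):
                return getattr(self, name)
            getter_name = f"get_{name}"
            getter.__name__ = getter_name
            setattr(owner, getter_name, getter)

        def __get__(self, instance, owner):
            if instance is None:
                return self
            # Before the attribute has been assigned on the instance,
            # fall back to a default instead of exposing this helper object.
            return None

With this version, print(t.get_example()) prints None until t.example is assigned.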
You can do this with a custom non-data descriptor, like a property, except that you don't need a __set__ method:
class ComplicatedDescriptor:
    def __init__(self, name):
        self.name = name

    def __get__(self, owner, type):
        # Here, `owner` is the instance of `SomeClass` that contains this descriptor
        # Use `owner` to do some complicated stuff, like DB lookup or whatever
        name = f'_{self.name}'
        # These lines for demo only
        value = owner.__dict__.get(name, 0)
        value += 1
        setattr(owner, name, value)
        return value
Now you can have any number of classes that use this descriptor:
class SomeClass:
    example = ComplicatedDescriptor('example')
Now you can do something like:
>>> inst0 = SomeClass()
>>> inst1 = SomeClass()
>>> inst0.example
1
>>> inst1.example
1
>>> inst1.example
2
>>> inst0.example
2
The line name = f'_{self.name}' is necessary because the descriptor here is a non-data descriptor: it has no __set__ method, so if you create inst0.__dict__['example'], the lookup will no longer happen: inst0.example will return inst0.__dict__['example'] instead of calling SomeClass.example.__get__(inst0, type(inst0)). One workaround is to store the value under the attribute name _example. The other is to make your descriptor into a data descriptor:
class ComplicatedDescriptor_v2:
    def __init__(self, name):
        self.name = name

    def __get__(self, owner, type):
        # Here, `owner` is the instance of `SomeClass` that contains this descriptor
        # Use `owner` to do some complicated stuff, like DB lookup or whatever
        # These lines for demo only
        value = owner.__dict__.get(self.name, 0)
        value += 1
        owner.__dict__[self.name] = value
        return value

    def __set__(self, *args):
        raise AttributeError(f'{self.name} is a read-only attribute')
The usage is generally identical:
class SomeClass:
    example = ComplicatedDescriptor_v2('example')
Except that now you can't accidentally override your attribute:
>>> inst = SomeClass()
>>> inst.example
1
>>> inst.example
2
>>> inst.example = 0
AttributeError: example is a read-only attribute
Descriptors are a fairly idiomatic way to get and set values in Python. They are preferred to explicit getters and setters in almost all cases; the simplest cases are handled by the built-in property. That being said, if you wanted to explicitly have a getter method, I would recommend doing something very similar, but just returning a method instead of calling __get__ directly.
For example:
def __get__(self, owner, type):
    def enclosed():
        # Use `owner` to do some complicated stuff, like DB lookup or whatever
        name = f'_{self.name}'
        # These lines for demo only
        value = owner.__dict__.get(name, 0)
        value += 1
        setattr(owner, name, value)
        return value
    return enclosed
There is really no point in doing something like this unless you really just want to be able to call inst.example().

Custom list class in Python 3 with __get__ and __set__ attributes

I would like to write a custom list class in Python 3 like in this question How would I create a custom list class in python?, but unlike that question I would like to implement __get__ and __set__ methods. My class is similar to a list, but there are some magic operations hidden behind these methods, and I would like to work with this variable as if it were a list, as in the main section of my program (see below). I would like to know how to move the __get__ and __set__ methods (fget and fset respectively) from the Foo class to the MyList class so that I have only one class.
My current solution (also, I added output for each operation for clarity):
class MyList:
    def __init__(self, data=[]):
        print('MyList.__init__')
        self._mylist = data

    def __getitem__(self, key):
        print('MyList.__getitem__')
        return self._mylist[key]

    def __setitem__(self, key, item):
        print('MyList.__setitem__')
        self._mylist[key] = item

    def __str__(self):
        print('MyList.__str__')
        return str(self._mylist)

class Foo:
    def __init__(self, mylist=[]):
        self._mylist = MyList(mylist)

    def fget(self):
        print('Foo.fget')
        return self._mylist

    def fset(self, data):
        print('Foo.fset')
        self._mylist = MyList(data)

    mylist = property(fget, fset, None, 'MyList property')

if __name__ == '__main__':
    foo = Foo([1, 2, 3])
    # >>> MyList.__init__
    print(foo.mylist)
    # >>> Foo.fget
    # >>> MyList.__str__
    # >>> [1, 2, 3]
    foo.mylist = [1, 2, 3, 4]
    # >>> Foo.fset
    # >>> MyList.__init__
    print(foo.mylist)
    # >>> Foo.fget
    # >>> MyList.__str__
    # >>> [1, 2, 3, 4]
    foo.mylist[0] = 0
    # >>> Foo.fget
    # >>> MyList.__setitem__
    print(foo.mylist[0])
    # >>> Foo.fget
    # >>> MyList.__getitem__
    # >>> 0
Thank you in advance for any help.
How to move __get__ and __set__ methods (fget and fset respectively) from Foo class to MyList class to have only one class?
UPD:
Thanks a lot to @Blckknght! I tried to understand his answer and it works very well for me! It's exactly what I needed. As a result, I get the following code:
class MyList:
    def __init__(self, value=None):
        self.name = None
        if value is None:
            self.value = []
        else:
            self.value = value

    def __set_name__(self, owner, name):
        self.name = "_" + name

    def __get__(self, instance, owner):
        return getattr(instance, self.name)

    def __set__(self, instance, value):
        setattr(instance, self.name, MyList(value))

    def __getitem__(self, key):
        return self.value[key]

    def __setitem__(self, key, value):
        self.value[key] = value

    def append(self, value):
        self.value.append(value)

    def __str__(self):
        return str(self.value)

class Foo:
    my_list = MyList()

    def __init__(self):
        self.my_list = [1, 2, 3]
        print(type(self.my_list))  # <class '__main__.MyList'>
        self.my_list = [4, 5, 6, 7, 8]
        print(type(self.my_list))  # <class '__main__.MyList'>
        self.my_list[0] = 10
        print(type(self.my_list))  # <class '__main__.MyList'>
        self.my_list.append(7)
        print(type(self.my_list))  # <class '__main__.MyList'>
        print(self.my_list)  # [10, 5, 6, 7, 8, 7]

foo = Foo()
I don't know whether that's the Pythonic way or not, but it works as I expected.
In a comment, you explained what you actually want:
x = MyList([1])
x = [2]
# and have x be a MyList after that.
That is not possible. In Python, plain assignment to a bare name (e.g., x = ..., in contrast to x.blah = ... or x[0] = ...) is an operation on the name only, not the value, so there is no way for any object to hook into the name-binding process. An assignment like x = [2] works the same way no matter what the value of x is (and indeed works the same way regardless of whether x already has a value or whether this is the first value being assigned to x).
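A minimal demonstration of that point, using the MyList class from the question:

    x = MyList([1])
    print(type(x))   # <class '__main__.MyList'>
    x = [2]          # plain assignment rebinds the name x; MyList is never consulted
    print(type(x))   # <class 'list'> -- the MyList instance is simply discarded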
While you can make your MyList class follow the descriptor protocol (which is what the __get__ and __set__ methods are for), you probably don't want to. That's because, to be useful, a descriptor must be placed as an attribute of a class, not as an attribute of an instance. The property in your Foo class creates a separate MyList instance for each Foo instance; that wouldn't work if the list was defined on the Foo class directly.
That's not to say that custom descriptors can't be useful. The property you're using in your Foo class is a descriptor. If you wanted to, you could write your own MyListAttr descriptor that does the same thing.
class MyListAttr(object):
    def __init__(self):
        self.name = None

    def __set_name__(self, owner, name):  # this is used in Python 3.6+
        self.name = "_" + name

    def find_name(self, cls):  # this is used on earlier versions that don't support __set_name__
        for name in dir(cls):
            if getattr(cls, name) is self:
                self.name = "_" + name
                return
        raise TypeError()

    def __get__(self, obj, owner):
        if obj is None:
            return self
        if self.name is None:
            self.find_name(owner)
        return getattr(obj, self.name)

    def __set__(self, obj, value):
        if self.name is None:
            self.find_name(type(obj))
        setattr(obj, self.name, MyList(value))

class Foo(object):
    mylist = MyListAttr()  # create the descriptor as a class variable

    def __init__(self, data=None):
        if data is None:
            data = []
        self.mylist = data  # this invokes the __set__ method of the descriptor!
The MyListAttr class is more complicated than it otherwise might be because I try to have the descriptor object find its own name. That's not easy to figure out in older versions of Python. Starting with Python 3.6, it's much easier (because the __set_name__ method will be called on the descriptor when it is assigned as a class variable). A lot of the code in the class could be removed if you only needed to support Python 3.6 and later (you wouldn't need find_name or any of the code that calls it in __get__ and __set__).
It might not seem worth writing a long descriptor class like MyListAttr to do what you were able to do with less code using a property. That's probably correct if you only have one place you want to use the descriptor. But if you may have many classes (or many attributes within a single class) where you want the same special behavior, you will benefit from packing the behavior into a descriptor rather than writing a lot of very similar property getter and setter methods.
You might not have noticed, but I also made a change to the Foo class that is not directly related to the descriptor use. The change is to the default value for data. Using a mutable object like a list as a default argument is usually a very bad idea, as that same object will be shared by all calls to the function without an argument (so all Foo instances not initialized with data would share the same list). It's better to use a sentinel value (like None) and replace the sentinel with what you really want (a new empty list in this case). You probably should fix this issue in your MyList.__init__ method too.
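To illustrate that pitfall on its own (a standalone sketch; the class names are mine):

    class Bad:
        def __init__(self, data=[]):    # one list object is created when the def statement runs...
            self.data = data

    a, b = Bad(), Bad()
    a.data.append(1)
    print(b.data)                       # [1] -- ...and shared by every instance built without an argument

    class Good:
        def __init__(self, data=None):
            self.data = [] if data is None else data   # a fresh list per instance

    c, d = Good(), Good()
    c.data.append(1)
    print(d.data)                       # []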

Dynamically creating @attribute.setter methods for all properties in class (Python)

I have code that someone else wrote like this:
class MyClass(object):
    def __init__(self, data):
        self.data = data

    @property
    def attribute1(self):
        return self.data.another_name1

    @property
    def attribute2(self):
        return self.data.another_name2
and I want to automatically create the corresponding property setters at run time so I don't have to modify the other person's code. The property setters should look like this:
@attribute1.setter
def attribute1(self, val):
    self.data.another_name1 = val

@attribute2.setter
def attribute2(self, val):
    self.data.another_name2 = val
How do I dynamically add these setter methods to the class?
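One direct way to do this at run time, if you are able to supply the property-to-attribute mapping yourself, is to rebuild each property with a generated setter via property.setter. This is only a sketch; the add_setters helper and the mapping dict are my own illustration, not an existing API:

    def add_setters(cls, mapping):
        # mapping: property name -> attribute name on self.data
        for prop_name, target in mapping.items():
            prop = getattr(cls, prop_name)              # the existing read-only property
            def make_setter(target):
                def setter(self, val):
                    setattr(self.data, target, val)
                return setter
            # property.setter returns a new property object with the setter attached
            setattr(cls, prop_name, prop.setter(make_setter(target)))

    add_setters(MyClass, {'attribute1': 'another_name1', 'attribute2': 'another_name2'})

The answers below avoid the explicit mapping, either by using a custom descriptor or by inspecting the getter's source.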
You can write a custom Descriptor like this:
from operator import attrgetter

class CustomProperty(object):
    def __init__(self, attr):
        self.attr = attr

    def __get__(self, ins, type):
        print 'inside __get__'
        if ins is None:
            return self
        else:
            return attrgetter(self.attr)(ins)

    def __set__(self, ins, value):
        print 'inside __set__'
        head, tail = self.attr.rsplit('.', 1)
        obj = attrgetter(head)(ins)
        setattr(obj, tail, value)

class MyClass(object):
    def __init__(self, data):
        self.data = data

    attribute1 = CustomProperty('data.another_name1')
    attribute2 = CustomProperty('data.another_name2')
Demo:
>>> class Foo():
... pass
...
>>> bar = MyClass(Foo())
>>>
>>> bar.attribute1 = 10
inside __set__
>>> bar.attribute2 = 20
inside __set__
>>> bar.attribute1
inside __get__
10
>>> bar.attribute2
inside __get__
20
>>> bar.data.another_name1
10
>>> bar.data.another_name2
20
This is the author of the question. I found out a very jerry-rigged solution, but I don't know another way to do it. (I am using python 3.4 by the way.)
I'll start with the problems I ran into.
First, I thought about overwriting the property entirely, something like this:
Given this class
class A(object):
    def __init__(self):
        self._value = 42

    @property
    def value(self):
        return self._value
and you can overwrite the property entirely by doing something like this:
a = A()
A.value = 31  # This just redirects A.value from the @property to the int 31
a.value       # Returns 31
The problem is that this is done at the class level and not at the instance level, so if I make a new instance of A then this happens:
a2 = A()
a2.value  # Returns 31, because the class itself was modified in the previous code block.
I want that to return a2._value because a2 is a totally new instance of A() and therefore shouldn't be influenced by what I did to a.
The solution to this was to overwrite A.value with a new property rather than whatever I wanted to assign the instance _value to. I learned that you can create a new property that instantiates itself from the old property using the special getter, setter, and deleter methods (see here). So I can overwrite A's value property and make a setter for it by doing this:
def make_setter(name):
    def value_setter(self, val):
        setattr(self, name, val)
    return value_setter

my_setter = make_setter('_value')
A.value = A.value.setter(my_setter)  # This takes the property defined in the above class and overwrites the setter with my_setter
setattr(A, 'value', getattr(A, 'value').setter(my_setter))  # This does the same thing as the line above I think, so you only need one of them
This is all well and good as long as the original class's property definition is extremely simple (in this case it was just return self._value). However, as soon as you get to something more complicated, like return self.data._value as I have, things get nasty -- like @BrenBarn said in his comment on my post. I used the inspect.getsourcelines(A.value.fget) function to get the source code line that contains the return value and parsed that. If I failed to parse the string, I raised an exception. The result looks something like this:
def make_setter(name, attrname=None):
    def setter(self, val):
        try:
            split_name = name.split('.')
            child_attr = getattr(self, split_name[0])
            for i in range(len(split_name) - 2):
                child_attr = getattr(child_attr, split_name[i + 1])
            setattr(child_attr, split_name[-1], val)
        except:
            raise Exception("Failed to set property attribute {0}".format(name))
    return setter  # return the generated setter so it can be attached with property.setter
It seems to work but there are probably bugs.
Now the question is, what to do if the thing failed? That's up to you and sort of off track from this question. Personally, I did a bit of nasty stuff that involves creating a new class that inherits from A (let's call this class B). Then if the setter worked for A, it will work for the instance of B because A is a base class. However, if it didn't work (because the return value defined in A was something nasty), I ran a setattr(B, name, val) on the class B. This would normally change all other instances that were created from B (like in the 2nd code block in this post), but I dynamically create B using type('B', (A,), {}) and only use it once ever, so changing the class itself has no effect on anything else.
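A bare-bones sketch of that per-instance-subclass trick, using the A class from above (my own illustration of the description, not the author's actual code):

    a = A()                      # instance whose property we want to override
    B = type('B', (A,), {})      # throwaway subclass used for this one instance only
    a.__class__ = B              # re-home the instance onto the throwaway subclass
    B.value = 31                 # shadow the inherited property on B
    print(a.value)               # 31
    print(A().value)             # still 42 -- A itself was never modified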
There is a lot of black-magic type stuff going on here I think, but it's pretty cool and quite versatile in the day or so I've been using it. None of this is copy-pastable code, but if you understand it then you can write your modifications.
I really hope/wish there is a better way, but I do not know of one. Maybe metaclasses or descriptors created from classes can do some nice magic for you, but I do not know enough about them yet to be sure.
Comments appreciated!

How to keep track of class instances?

Toward the end of a program I'm looking to load a specific variable from all the instances of a class into a dictionary.
For example:
class Foo():
    def __init__(self):
        self.x = {}

foo1 = Foo()
foo2 = Foo()
...
Let's say the number of instances will vary and I want the x dict from each instance of Foo() loaded into a new dict. How would I do that?
The examples I've seen in SO assume one already has the list of instances.
One way to keep track of instances is with a class variable:
class A(object):
    instances = []

    def __init__(self, foo):
        self.foo = foo
        A.instances.append(self)
At the end of the program, you can create your dict like this:
foo_vars = {id(instance): instance.foo for instance in A.instances}
There is only one list:
>>> a = A(1)
>>> b = A(2)
>>> A.instances
[<__main__.A object at 0x1004d44d0>, <__main__.A object at 0x1004d4510>]
>>> id(A.instances)
4299683456
>>> id(a.instances)
4299683456
>>> id(b.instances)
4299683456
@JoelCornett's answer covers the basics perfectly. This is a slightly more complicated version, which might help with a few subtle issues.
If you want to be able to access all the "live" instances of a given class, subclass the following (or include equivalent code in your own base class):
from weakref import WeakSet

class base(object):
    def __new__(cls, *args, **kwargs):
        # Note: extra constructor args are not forwarded to object.__new__,
        # which rejects them in Python 3.
        instance = object.__new__(cls)
        if "instances" not in cls.__dict__:
            cls.instances = WeakSet()
        cls.instances.add(instance)
        return instance
This addresses two possible issues with the simpler implementation that #JoelCornett presented:
Each subclass of base will keep track of its own instances separately. You won't get subclass instances in a parent class's instance list, and one subclass will never stumble over instances of a sibling subclass. This might be undesirable, depending on your use case, but it's probably easier to merge the sets back together than it is to split them apart.
The instances set uses weak references to the class's instances, so if you del or reassign all the other references to an instance elsewhere in your code, the bookkeeping code will not prevent it from being garbage collected. Again, this might not be desirable for some use cases, but it is easy enough to use regular sets (or lists) instead of a weakset if you really want every instance to last forever.
Some handy-dandy test output (with the instances sets always being passed to list only because they don't print out nicely):
>>> b = base()
>>> list(base.instances)
[<__main__.base object at 0x00000000026067F0>]
>>> class foo(base):
... pass
...
>>> f = foo()
>>> list(foo.instances)
[<__main__.foo object at 0x0000000002606898>]
>>> list(base.instances)
[<__main__.base object at 0x00000000026067F0>]
>>> del f
>>> list(foo.instances)
[]
You would probably want to use weak references to your instances. Otherwise the class could likely end up keeping track of instances that were meant to have been deleted. A weakref.WeakSet will automatically remove any dead instances from its set.
One way to keep track of instances is with a class variable:
import weakref
class A(object):
instances = weakref.WeakSet()
def __init__(self, foo):
self.foo = foo
A.instances.add(self)
#classmethod
def get_instances(cls):
return list(A.instances) #Returns list of all current instances
At the end of the program, you can create your dict like this:
foo_vars = {id(instance): instance.foo for instance in A.instances}
There is only one list:
>>> a = A(1)
>>> b = A(2)
>>> A.get_instances()
[<inst.A object at 0x100587290>, <inst.A object at 0x100587250>]
>>> id(A.instances)
4299861712
>>> id(a.instances)
4299861712
>>> id(b.instances)
4299861712
>>> a = A(3) #original a will be dereferenced and replaced with new instance
>>> A.get_instances()
[<inst.A object at 0x100587290>, <inst.A object at 0x1005872d0>]
You can also solve this problem using a metaclass:
When a class is created (__init__ method of metaclass), add a new instance registry
When a new instance of this class is created (__call__ method of metaclass), add it to the instance registry.
The advantage of this approach is that each class has a registry - even if no instance exists. In contrast, when overriding __new__ (as in Blckknght's answer), the registry is added when the first instance is created.
import weakref

class MetaInstanceRegistry(type):
    """Metaclass providing an instance registry"""

    def __init__(cls, name, bases, attrs):
        # Create class
        super(MetaInstanceRegistry, cls).__init__(name, bases, attrs)
        # Initialize fresh instance storage
        cls._instances = weakref.WeakSet()

    def __call__(cls, *args, **kwargs):
        # Create instance (calls __init__ and __new__ methods)
        inst = super(MetaInstanceRegistry, cls).__call__(*args, **kwargs)
        # Store weak reference to instance. WeakSet will automatically remove
        # references to objects that have been garbage collected
        cls._instances.add(inst)
        return inst

    def _get_instances(cls, recursive=False):
        """Get all instances of this class in the registry. If recursive=True
        search subclasses recursively"""
        instances = list(cls._instances)
        if recursive:
            for Child in cls.__subclasses__():
                instances += Child._get_instances(recursive=recursive)
        # Remove duplicates from multiple inheritance.
        return list(set(instances))
Usage: Create a registry and subclass it.
class Registry(object):
    __metaclass__ = MetaInstanceRegistry

class Base(Registry):
    def __init__(self, x):
        self.x = x

class A(Base):
    pass

class B(Base):
    pass

class C(B):
    pass

a = A(x=1)
a2 = A(2)
b = B(x=3)
c = C(4)

for cls in [Base, A, B, C]:
    print cls.__name__
    print cls._get_instances()
    print cls._get_instances(recursive=True)
    print

del c
print C._get_instances()
If using abstract base classes from the abc module, just subclass abc.ABCMeta to avoid metaclass conflicts:
from abc import ABCMeta, abstractmethod

class ABCMetaInstanceRegistry(MetaInstanceRegistry, ABCMeta):
    pass

class ABCRegistry(object):
    __metaclass__ = ABCMetaInstanceRegistry

class ABCBase(ABCRegistry):
    __metaclass__ = ABCMeta

    @abstractmethod
    def f(self):
        pass

class E(ABCBase):
    def __init__(self, x):
        self.x = x

    def f(self):
        return self.x

e = E(x=5)
print E._get_instances()
Another option for quick low-level hacks and debugging is to filter the list of objects returned by gc.get_objects() and generate the dictionary on the fly that way. In CPython that function will return you a (generally huge) list of everything the garbage collector knows about, so it will definitely contain all of the instances of any particular user-defined class.
Note that this is digging a bit into the internals of the interpreter, so it may or may not work (or work well) with the likes of Jython, PyPy, IronPython, etc. I haven't checked. It's also likely to be really slow regardless. Use with caution/YMMV/etc.
However, I imagine that some people running into this question might eventually want to do this sort of thing as a one-off to figure out what's going on with the runtime state of some slice of code that's behaving strangely. This method has the benefit of not affecting the instances or their construction at all, which might be useful if the code in question is coming out of a third-party library or something.
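A rough sketch of that one-off approach (class and variable names are mine, for illustration):

    import gc

    class Foo:
        def __init__(self):
            self.x = {}

    foo1, foo2 = Foo(), Foo()

    # Walk everything the garbage collector tracks and keep only the Foo instances.
    live_foos = [obj for obj in gc.get_objects() if isinstance(obj, Foo)]
    foo_vars = {id(obj): obj.x for obj in live_foos}
    print(foo_vars)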
Here's a similar approach to Blckknght's, which works with subclasses as well. Thought this might be of interest, if someone ends up here. One difference, if B is a subclass of A, and b is an instance of B, b will appear in both A.instances and B.instances. As stated by Blckknght, this depends on the use case.
from weakref import WeakSet

class RegisterInstancesMixin:
    instances = WeakSet()

    def __new__(cls, *args, **kargs):
        # Note: extra constructor args are not forwarded to object.__new__,
        # which rejects them in Python 3.
        o = object.__new__(cls)
        cls._register_instance(o)
        return o

    @classmethod
    def print_instances(cls):
        for instance in cls.instances:
            print(instance)

    @classmethod
    def _register_instance(cls, instance):
        cls.instances.add(instance)
        for b in cls.__bases__:
            if issubclass(b, RegisterInstancesMixin):
                b._register_instance(instance)

    def __init_subclass__(cls):
        cls.instances = WeakSet()
class Animal(RegisterInstancesMixin):
    pass

class Mammal(Animal):
    pass

class Human(Mammal):
    pass

class Dog(Mammal):
    pass

alice = Human()
bob = Human()
cannelle = Dog()

Animal.print_instances()
Mammal.print_instances()
Human.print_instances()
Animal.print_instances() will print three objects, whereas Human.print_instances() will print two.
Using the answer from @Joel Cornett I've come up with the following, which seems to work, i.e. I'm able to total up object variables.
import os
os.system("clear")

class Foo():
    instances = []

    def __init__(self):
        Foo.instances.append(self)
        self.x = 5

class Bar():
    def __init__(self):
        pass

    def testy(self):
        self.foo1 = Foo()
        self.foo2 = Foo()
        self.foo3 = Foo()

foo = Foo()
print Foo.instances
bar = Bar()
bar.testy()
print Foo.instances

x_tot = 0
for inst in Foo.instances:
    x_tot += inst.x
    print x_tot
output:
[<__main__.Foo instance at 0x108e334d0>]
[<__main__.Foo instance at 0x108e334d0>, <__main__.Foo instance at 0x108e33560>, <__main__.Foo instance at 0x108e335a8>, <__main__.Foo instance at 0x108e335f0>]
5
10
15
20
(For Python)
I have found a way to record class instances while defining a class with the "dataclass" decorator. Define a class attribute 'instances' (or any other name) as a list of the instances you want to record, and append to that list the dict form of each created object via its __dict__ attribute. The class attribute 'instances' will then record the instances in the dict form you want.
For example,
from dataclasses import dataclass

@dataclass
class player:
    instances = []

    def __init__(self, name, rank):
        self.name = name
        self.rank = rank
        self.instances.append(self.__dict__)
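For instance (a quick check of the idea above, with made-up player names):

    p1 = player("Alice", 1)
    p2 = player("Bob", 2)
    print(player.instances)   # [{'name': 'Alice', 'rank': 1}, {'name': 'Bob', 'rank': 2}]

Note that each stored dict is the live __dict__ of its instance, so later changes to an instance show up in the list as well.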

Is it safe to replace a self object by another object of the same type in a method?

I would like to replace an object instance by another instance inside a method like this:
class A:
    def method1(self):
        self = func(self)
The object is retrieved from a database.
It is unlikely that replacing the 'self' variable will accomplish anything that couldn't just be accomplished by storing the result of func(self) in a different variable. 'self' is effectively a local variable, only defined for the duration of the method call, used to pass in the instance of the class being operated upon. Replacing self will not actually replace references to the original instance of the class held by other objects, nor will it create a lasting reference to the new instance which was assigned to it.
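A tiny illustration of why (the class and attribute names are mine):

    class A:
        def method1(self):
            self = A()           # rebinds the local name 'self' only
            self.marker = True   # set on the new, soon-to-be-discarded object

    a = A()
    a.method1()
    print(hasattr(a, 'marker'))  # False -- the original instance is untouched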
As far as I understand, if you are trying to replace the current object with another object of the same type (assuming func won't change the object type) from a member function, then this will achieve that:
class A:
    def method1(self):
        newObj = func(self)
        self.__dict__.update(newObj.__dict__)
It is not a direct answer to the question, but in the posts below there's a solution for what amirouche tried to do:
Python object conversion
Can I dynamically convert an instance of one class to another?
And here's working code sample (Python 3.2.5).
class Men:
    def __init__(self, name):
        self.name = name

    def who_are_you(self):
        print("I'm a men! My name is " + self.name)

    def cast_to(self, sex, name):
        self.__class__ = sex
        self.name = name

    def method_unique_to_men(self):
        print('I made The Matrix')

class Women:
    def __init__(self, name):
        self.name = name

    def who_are_you(self):
        print("I'm a women! My name is " + self.name)

    def cast_to(self, sex, name):
        self.__class__ = sex
        self.name = name

    def method_unique_to_women(self):
        print('I made Cloud Atlas')
men = Men('Larry')
men.who_are_you()
#>>> I'm a men! My name is Larry
men.method_unique_to_men()
#>>> I made The Matrix
men.cast_to(Women, 'Lana')
men.who_are_you()
#>>> I'm a women! My name is Lana
men.method_unique_to_women()
#>>> I made Cloud Atlas
Note the self.__class__ and not self.__class__.__name__. I.e. this technique not only replaces class name, but actually converts an instance of a class (at least both of them have same id()). Also, 1) I don't know whether it is "safe to replace a self object by another object of the same type in [an object own] method"; 2) it works with different types of objects, not only with ones that are of the same type; 3) it works not exactly like amirouche wanted: you can't init class like Class(args), only Class() (I'm not a pro and can't answer why it's like this).
Yes, all that will happen is that you won't be able to reference the current instance of your class A (unless you set another variable to self before you change it.) I wouldn't recommend it though, it makes for less readable code.
Note that you're only changing a variable, just like any other. Doing self = 123 is the same as doing abc = 123. self is only a reference to the current instance within the method. You can't change your instance by setting self.
What func(self) should do is to change the variables of your instance:
def func(obj):
    obj.var_a = 123
    obj.var_b = 'abc'
Then do this:
class A:
    def method1(self):
        func(self)  # No need to assign self here
In many cases, a good way to achieve what you want is to call __init__ again. For example:
class MyList(list):
    def trim(self, n):
        self.__init__(self[:-n])

x = MyList([1, 2, 3, 4])
x.trim(2)
assert type(x) == MyList
assert x == [1, 2]
Note that this comes with a few assumptions, such as that everything you want to change about the object is set in __init__. Also beware that this could cause problems with inheriting classes that redefine __init__ in an incompatible manner.
Yes, there is nothing wrong with this. Haters gonna hate. (Looking at you, PyCharm, with your "in most cases imaginable, there's no point in such reassignment and it indicates an error".)
A situation where you could do this is:
def some_method(self, ...):
    ...
    if some_condition:
        self = self.some_other_method()
    ...
    return ...
Sure, you could start the method body by reassigning self to some other variable, but if you wouldn't normally do that with other parameters, why do it with self?
One can use the self assignment in a method to change the class of the instance to a derived class.
Of course one could assign the result to a new variable, but then the use of the new object ripples through the rest of the code in the method. Reassigning it to self leaves the rest of the method untouched.
class aclass:
    def methodA(self):
        ...
        if condition:
            self = replace_by_derived(self)
            # self is now referencing an instance of a derived class
            # with probably the same values for its data attributes
        # all code here remains untouched
        ...
        self.methodB()  # calls the methodB of derivedclass if condition is True
        ...

    def methodB(self):
        # methodB of class aclass
        ...

class derivedclass(aclass):
    def methodB(self):
        # methodB of class derivedclass
        ...
But apart from such a special use case, I don't see any advantages to replace self.
You can make the instance a singleton element of the class
and mark the methods with @classmethod.
from enum import IntEnum
from collections import namedtuple

class kind(IntEnum):
    circle = 1
    square = 2

def attr(y): return [getattr(y, x) for x in 'k l b u r'.split()]

class Shape(namedtuple('Shape', 'k,l,b,u,r')):
    self = None

    @classmethod
    def __repr__(cls):
        return "<Shape({},{},{},{},{}) object at {}>".format(
            *(attr(cls.self) + [id(cls.self)]))

    @classmethod
    def transform(cls, func):
        cls.self = cls.self._replace(**func(cls.self))

Shape.self = Shape(k=1, l=2, b=3, u=4, r=5)
s = Shape.self

def nextkind(self):
    return {'k': self.k + 1}

print(repr(s))  # <Shape(1,2,3,4,5) object at 139766656561792>
s.transform(nextkind)
print(repr(s))  # <Shape(2,2,3,4,5) object at 139766656561888>
