How to implement a list of references in Python?

I'm trying to model a collection of objects in Python 2. The collection should make a certain attribute (an integer, float, or any immutable object) of each object available via a list interface.
(1)
>>> print (collection.attrs)
[1, 5, 3]
>>> collection.attrs = [4, 2, 3]
>>> print (object0.attr == 4)
True
In particular, I expect the collection's list interface to allow reassigning a single object's attribute, e.g.
(2)
>>> collection.attrs[2] = 8
>>> print (object2.attr == 8)
True
I am sure this is a frequently occurring situation; unfortunately, I was not able to find a satisfying answer on how to implement it on Stack Overflow / Google etc.
Behind the scenes, I expect object.attr to be implemented as a mutable object. Somehow I also expect the collection to hold a "list of references" to each object.attr and not the respectively referenced (immutable) values themselves.
I would welcome suggestions on how to solve this in an elegant and flexible way.
A possible implementation that allows for (1) but not for (2) is
class Component(object):
    """One of many components."""
    def __init__(self, attr):
        self.attr = attr

class System(object):
    """One System object contains and manages many Component instances.

    System is the main interface to adjusting the components.
    """
    def __init__(self, attr_list):
        self._components = []
        for attr in attr_list:
            new = Component(attr)
            self._components.append(new)

    @property
    def attrs(self):
        # !!! this breaks (2):
        return [component.attr for component in self._components]

    @attrs.setter
    def attrs(self, new_attrs):
        for component, new_attr in zip(self._components, new_attrs):
            component.attr = new_attr
The !!! line breaks (2) because we create a new list whose entries are references to the values of all Component.attr and not references to the attributes themselves.
Thanks for your input.
TheXMA

Just add another proxy in between:
class _ListProxy:
    def __init__(self, system):
        self._system = system

    def __getitem__(self, index):
        return self._system._components[index].attr

    def __setitem__(self, index, value):
        self._system._components[index].attr = value

class System:
    ...

    @property
    def attrs(self):
        return _ListProxy(self)
You can make the proxy fancier by implementing all the other list methods, but this is enough for your use-case.
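Not part of the answer above, but one way to pick up most of the read-only list behaviour for free is to derive the proxy from collections.abc.Sequence (plain collections in Python 2), which supplies __contains__, __iter__, __reversed__, index() and count() once __getitem__ and __len__ exist. A sketch, assuming integer indices only:

import collections.abc

class _ListProxy(collections.abc.Sequence):
    def __init__(self, system):
        self._system = system

    def __getitem__(self, index):
        return self._system._components[index].attr

    def __setitem__(self, index, value):
        self._system._components[index].attr = value

    def __len__(self):
        return len(self._system._components)

# Sequence now provides __contains__, __iter__, __reversed__,
# index() and count(); slice indices are not handled here.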

@filmor thanks a lot for your answer, this solves the problem perfectly! I made it a bit more general:
class _ListProxy(object):
    """A list of object attributes. Accessing a _ListProxy entry
    evaluates the corresponding object attribute each time it is
    accessed, i.e. this list "proxies" the object attributes.
    """
    def __init__(self, list_of_objects, attr_name):
        """Provide a list of object instances and the name of a commonly
        shared attribute that should be proxied by this _ListProxy
        instance.
        """
        self._list_of_objects = list_of_objects
        self._attr_name = attr_name

    def __getitem__(self, index):
        return getattr(self._list_of_objects[index], self._attr_name)

    def __setitem__(self, index, value):
        setattr(self._list_of_objects[index], self._attr_name, value)

    def __repr__(self):
        return repr(list(self))

    def __len__(self):
        return len(self._list_of_objects)
Are there any important list methods missing?
And what if I want some of the components (objects) to be garbage collected?
Do I need to use something like a WeakList to prevent memory leakage?
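Not from the thread, but regarding the garbage-collection question: one possible direction is to hold weak references in the proxy, using the standard weakref module (a minimal, untested sketch; pruning of dead references is left out):

import weakref

class _WeakListProxy(object):
    """Variant of _ListProxy holding weak references, so the proxied
    objects can be garbage collected once nothing else refers to them.
    """
    def __init__(self, list_of_objects, attr_name):
        self._refs = [weakref.ref(obj) for obj in list_of_objects]
        self._attr_name = attr_name

    def __getitem__(self, index):
        obj = self._refs[index]()  # dereference the weak reference
        if obj is None:
            raise ReferenceError("object has been garbage collected")
        return getattr(obj, self._attr_name)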

Related

Modify an attribute of an already defined class in Python (and run its definition again)

I am trying to modify an already defined class by changing an attribute's value. Importantly, I want this change to propagate internally.
For example, consider this class:
class Base:
    x = 1
    y = 2 * x
    # Other attributes and methods might follow

assert Base.x == 1
assert Base.y == 2
I would like to change x to 2, making it equivalent to this.
class Base:
    x = 2
    y = 2 * x

assert Base.x == 2
assert Base.y == 4
But I would like to do it in the following way:
Base = injector(Base, x=2)
Is there a way to achieve this WITHOUT recompiling the original class source code?
The effect you want to achieve belongs to the realm of "reactive programming" - a programming paradigm from which the now-ubiquitous JavaScript library got its name.
While Python has a lot of mechanisms to allow that, one needs to write one's code to actually make use of those mechanisms.
By default, plain Python code like the one in your example uses the imperative paradigm, which is eager: whenever an expression is encountered, it is executed, and its result is used (in this case, the result is stored in the class attribute).
Python's flexibility also means that once you write a codebase that allows some reactive code to take place, users of that codebase don't have to be aware of it, and things work more or less "magically".
But, as stated above, that is not free. Consider the case of being able to redefine y when x changes in
class Base:
    x = 1
    y = 2 * x
There are a couple of paths that can be followed. The most important point is that, by the time the "*" operator is executed (which happens while Python is parsing the class body), at least one side of the operation must no longer be a plain number, but a special object implementing a custom __mul__ (or, in this case, __rmul__) method. Then, instead of storing a resulting number in y, the expression itself is stored somewhere, and when y is retrieved as a class attribute, other mechanisms force the expression to resolve.
If you want this at instance level, rather than at class level, it would be easier to implement. But keep in mind that you'd have to define each operator on your special "source" class for primitive values.
Also, both this and the easier instance-descriptor approach using property are "lazily evaluated": that is, the value for y is calculated when it is to be used (it can be cached if it will be used more than once). If you want to evaluate it whenever x is assigned (and not when y is consumed), that will require other mechanisms; although caching the lazy approach can mitigate the need for eager evaluation to the point where it should not be needed.
1 - Before digging there
Python's easiest way to do code like this is simply to write the expressions to be calculated as functions, and use the property built-in as a descriptor to retrieve their values. The drawback is small: you just have to wrap your expressions in a function (and then wrap that function in something that adds the descriptor properties to it, such as property). The gain is huge: you are free to use any Python code inside your expression, including function calls, object instantiation, I/O, and the like. (Note that the other approach requires wiring up each desired operator just to get started.)
The plain "101" approach to have what you want working for instances of Base is:
class Base:
    x = 1

    @property
    def y(self):
        return self.x * 2

b = Base()
b.y
-> 2
Base.x = 3
b.y
-> 6
The work of property can be rewritten so that retrieving y from the class, instead of an instance, achieves the effect as well (this is still easier than the other approach).
If this will work for you somehow, I'd recommend doing it. If you need to cache y's value until x actually changes, that can be done with normal coding
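For instance, a hand-rolled cache that is invalidated when x is assigned might look like this (a sketch of the "normal coding" mentioned above, done at instance level for simplicity):

class Base:
    def __init__(self, x=1):
        self._x = x
        self._y_cache = None

    @property
    def x(self):
        return self._x

    @x.setter
    def x(self, value):
        self._x = value
        self._y_cache = None  # invalidate the cached result on assignment

    @property
    def y(self):
        if self._y_cache is None:
            self._y_cache = 2 * self._x  # recomputed only after invalidation
        return self._y_cache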
2 - Exactly what you asked for, with a metaclass
As stated above, Python would need to know about the special status of your y attribute when calculating its expression 2 * x. At assignment time, it would already be too late.
Fortunately, Python 3 allows class bodies to run in a custom namespace by implementing the __prepare__ method in a metaclass. That namespace can record everything that takes place and replace primitive attributes of interest with specially crafted objects implementing __mul__ and other special methods.
Going this way could even allow values to be eagerly calculated, so they can work as plain Python objects, but register information so that a special injector function could recreate the class redoing all the attributes that depend on expressions. It could also implement lazy evaluation, somewhat as described above.
from collections import UserDict
import operator

_registry = {}  # (class, attribute name) -> Reactive; assumed missing from the original snippet

class Reactive:
    def __init__(self, value):
        self._initial_value = value
        self.values = {}

    def __set_name__(self, owner, name):
        self.name = name
        self.values[owner] = self._initial_value

    def __get__(self, instance, owner):
        return self.values[owner]

    def __set__(self, instance, value):
        raise AttributeError("value can't be set directly - call 'injector' to change this value")

    def value(self, cls=None):
        return self.values.get(cls, self._initial_value)

    op1 = value  # alias: ReactiveExpr.result calls self.op1(cls)

    @property
    def result(self):
        return self.value

    # dynamically populate magic methods for operator overloading:
    for name in "mul add sub truediv pow contains".split():
        op = getattr(operator, name)
        locals()[f"__{name}__"] = (lambda operator: (lambda self, other: ReactiveExpr(self, other, operator)))(op)
        locals()[f"__r{name}__"] = (lambda operator: (lambda self, other: ReactiveExpr(other, self, operator)))(op)

class ReactiveExpr(Reactive):
    def __init__(self, value, op2, operator):
        self.op2 = op2
        self.operator = operator
        super().__init__(value)

    def result(self, cls):
        op1, op2 = self.op1(cls), self.op2
        if isinstance(op1, Reactive):
            op1 = op1.result(cls)
        if isinstance(op2, Reactive):
            op2 = op2.result(cls)
        return self.operator(op1, op2)

    def __get__(self, instance, owner):
        return self.result(owner)

class AuxDict(UserDict):
    def __init__(self, *args, _parent, **kwargs):
        self.parent = _parent
        super().__init__(*args, **kwargs)

    def __setitem__(self, item, value):
        if isinstance(value, self.parent.reacttypes) and not item.startswith("_"):
            value = Reactive(value)
        super().__setitem__(item, value)

class MetaReact(type):
    reacttypes = (int, float, str, bytes, list, tuple, dict)

    def __prepare__(*args, **kwargs):
        return AuxDict(_parent=__class__)

    def __new__(mcls, name, bases, ns, **kwargs):
        pre_registry = {}
        cls = super().__new__(mcls, name, bases, ns.data, **kwargs)
        # for name, obj in ns.items():
        #     if isinstance(obj, ReactiveExpr):
        #         pre_registry[name] = obj
        #         setattr(cls, name, obj.result())
        for name, reactive in pre_registry.items():
            _registry[cls, name] = reactive
        return cls

def injector(cls, inplace=False, **kwargs):
    original = cls
    if not inplace:
        cls = type(cls.__name__, cls.__bases__, dict(cls.__dict__))
    for name, attr in cls.__dict__.items():
        if isinstance(attr, Reactive):
            if isinstance(attr, ReactiveExpr) and name in kwargs:
                raise AttributeError("Expression attributes can't be modified by injector")
            attr.values[cls] = kwargs.get(name, attr.values[original])
    return cls

class Base(metaclass=MetaReact):
    x = 1
    y = 2 * x
And, after pasting the snippet above in a REPL, here is the result of using injector:
In [97]: Base2 = injector(Base, x=5)
In [98]: Base2.y
Out[98]: 10
The difficulty is that the Base class is declared with dependent, dynamically evaluated attributes. While we can inspect a class's static attributes, I think there is no way of recovering the dynamic expression other than parsing the class's source code, finding and replacing the "injected" attribute name with its value, and exec/eval-ing the definition again. But that is the approach you wanted to avoid (moreover, you expected injector to work uniformly for all classes).
If you want to rely on dynamically evaluated attributes, define the dependent attribute as a lambda function:
class Base:
    x = 1
    y = lambda: 2 * Base.x

Base.x = 2
print(Base.y())  # 4

Python classes, mappings, pprint, KeysView vs. dict_keys; to keys() or not to keys()?

I have a problem with my base class. I started writing it after finding an answer on this site about more informative __repr__() methods. I added to it after finding a different answer on this site about using pprint() with my own classes. I tinkered with it a little more after finding a third answer on this site about making my classes unpackable with the ** operator.
I modified it again after seeing in yet another answer on this site that there was a distinction between merely giving it __getitem__(), __iter__(), and __len__() methods on the one hand, and actually making it a fully-qualified mapping by subclassing collections.abc.Mapping on the other. Further, I saw that doing so would remove the need for writing my own keys() method, as the Mapping would take care of that.
So I got rid of keys(), and a class method broke.
The problem
I have a method that iterates through my class' keys and values to produce one big string formatted as I'd like it. That class looks like this.
class MyObj():
    def __init__(self, foo, bar):
        self.foo = foo
        self.bar = bar

    def the_problem_method(self):
        """Method I'm getting divergent output for."""
        longest = len(max((key for key in self.keys()), key=len))
        key_width = longest + TAB_WIDTH - longest % TAB_WIDTH
        return '\n'.join((f'{key:<{key_width}}{value}' for key, value in self))
Yes, that doesn't have the base class in it, but the MWE later on will account for that. The nut of it is that (key for key in self.keys()) part. When I have a keys() method written, I get the output I want.
def keys(self):
    """Get object attribute names."""
    return self.__dict__.keys()
When I remove that and go with the keys() method supplied by collections.abc.Mapping, I get no space between key and value.
The question
I can get the output I want by restoring the keys() method (and maybe adding values() and items() while I'm at it), but is that the best approach? Would it be better to go with the Mapping one and modify my class method to suit it? If so, how? Should I leave Mapping well enough alone until I know I need it?
This is my base class, to be copied all over creation and subclassed out the wazoo. I want to Get. It. Right.
There are already several considerations I can think of and many more of which I am wholly ignorant.
I use Python 3.9 and greater. I'll abandon 3.9 when conda does.
I want to keep my more-informative __repr__() methods.
I want pprint() to work, via the _dispatch table method with _format_dict_items().
I want to allow for duck typing my classes reliably.
I have not yet used type hinting, but I want to allow for using best practices there if I start.
Everything else I know nothing about.
The MWE
This has my problem class at the top and output stuff at the bottom. There are two series of classes building upon the previous ones.
The first series are ever-more-inclusive base classes, and it is here that the difference between the instance with the keys() method and the one without is shown. The first class, BaseMap, subclasses Mapping and has the __getitem__(), __iter__(), and __len__() methods. The next class up the chain, BaseMapKeys, subclasses that and adds the keys() method.
The second group, MapObj and MapKeysObj, are subclasses of the problem class that also subclass those different base classes respectively.
OK, maybe the WE isn't so M, but lots of things got me to this point and I don't want to neglect any.
import collections.abc
from pprint import pprint, PrettyPrinter

TAB_WIDTH = 3

class MyObj():
    def __init__(self, foo, bar):
        self.foo = foo
        self.bar = bar

    def the_problem_method(self):
        """Method I'm getting divergent output for."""
        longest = len(max((key for key in self.keys()), key=len))
        key_width = longest + TAB_WIDTH - longest % TAB_WIDTH
        return '\n'.join((f'{key:<{key_width}}{value}' for key, value in self))

class Base(object):
    """Base class with more informative __repr__."""
    def __repr__(self):
        """Object representation."""
        params = (f'{key}={repr(value)}'
                  for key, value in self.__dict__.items())
        return f'{repr(self.__class__)}({", ".join(params)})'

class BaseMap(Base, collections.abc.Mapping):
    """Enable class to be pprint-able, unpacked with **."""
    def __getitem__(self, attr):
        """Get object attribute values."""
        return getattr(self.__dict__, attr)

    def __iter__(self):
        """Make object iterable."""
        for attr in self.__dict__.keys():
            yield attr, getattr(self, attr)

    def __len__(self):
        """Get length of object."""
        return len(self.__dict__)

class BaseMapKeys(BaseMap):
    """Overwrite KeysView output with what I thought it would be."""
    def keys(self):
        """Get object attribute names."""
        return self.__dict__.keys()

class MapObj(BaseMap, MyObj):
    """Problem class with collections.abc.Mapping."""
    def __init__(self, foo, bar):
        super().__init__(foo, bar)

class MapKeysObj(BaseMapKeys, MyObj):
    """Problem class with collections.abc.Mapping and keys method."""
    def __init__(self, foo, bar):
        super().__init__(foo, bar)

if isinstance(getattr(PrettyPrinter, '_dispatch'), dict):
    # assume the dispatch table method still works
    def pprint_basemap(printer, object, stream, indent, allowance, context,
                       level):
        """Implement pprint for subclasses of BaseMap class."""
        write = stream.write
        write(f'{object.__class__}(\n {indent * " "}')
        printer._format_dict_items(object, stream, indent, allowance + 1,
                                   context, level)
        write(f'\n{indent * " "})')

    map_classes = [MapObj, MapKeysObj]
    for map_class in map_classes:
        PrettyPrinter._dispatch[map_class.__repr__] = pprint_basemap

def print_stuff(map_obj):
    print('pprint object:')
    pprint(map_obj)
    print()
    print('print keys():')
    print(map_obj.keys())
    print()
    print('print list(keys()):')
    print(list(map_obj.keys()))
    print()
    print('print the problem method:')
    print(map_obj.the_problem_method())
    print('\n\n')

params = ['This is a really long line to force new line in pprint output', 2]
baz = MapObj(*params)
print_stuff(baz)

scoggs = MapKeysObj(*params)
print_stuff(scoggs)
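For reference (this is not from the question): the Mapping contract expects __iter__ to yield keys, not (key, value) pairs, and __getitem__ to perform item lookup. A BaseMap written to that contract gets keys(), values() and items() from the mixin with the expected output. A minimal sketch, under that assumption:

import collections.abc

class ConformingBaseMap(collections.abc.Mapping):
    """Sketch of a Mapping-conformant BaseMap (hypothetical name)."""
    def __getitem__(self, key):
        return self.__dict__[key]   # item lookup, not getattr on the dict

    def __iter__(self):
        return iter(self.__dict__)  # yield keys only, per the contract

    def __len__(self):
        return len(self.__dict__)

# the_problem_method would then iterate self.items() instead of self:
# '\n'.join(f'{key:<{key_width}}{value}' for key, value in self.items())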

How to assign to a base class object when inheriting from builtin objects?

I'm trying to extend the Python built-in list object to include metadata. I would like to be able to assign by reference to the base class object in some cases for efficiency.
For example:
class meta(list):
    def __init__(self, data=None, metadata=None):
        if data is not None:
            super().__init__(data)  # performs a copy of data
        else:
            super().__init__()
        self.metadata = metadata

    def foo(self):
        new_data = [i for i in range(10)]
        return meta(new_data, "my meta data")
This works as expected. The call to foo returns a new meta object with the values 0-9 in it. However, the list created and assigned to new_data in the list comprehension is copied inside the initializer of meta due to the call to the base class initializer. This additional copy is unnecessary as it could simply assign the reference of new_data to the inherited list object as no other references to it could exist after exiting foo.
What I'm trying to describe is something like this:
class meta(list):
    def __init__(self, data=None, metadata=None):
        if data is not None:
            super().__init__(data)  # performs a copy of data
        else:
            super().__init__()
        self.metadata = metadata

    def foo(self):
        result = meta(None, "my meta data")  # an efficient initialization
        new_data = [i for i in range(10)]
        super(list, result) = new_data  # just assign the reference instead of copying
        return result
But I know this is not correct syntactically. However, it does describe what I'm trying to accomplish. The intent is the new_data list object would simply be referred to by the new meta object via a reference to it being assigned to the underlying list object.
I know I could use a list member object instead of inheriting from list but that causes other inefficiencies because now all of the list attributes have to be defined in the meta class and would get wrapped in another layer of function calls.
So…my questions are:
Is there a way to do this at all?
Can I access the underlying object as an independent object from the subclass?
Can it be implemented cleanly without creating more overhead than I'm trying to remove?
Is there some obscure __assign__ method available that isn't an undocumented 'feature' of the language?
The instance of meta is, necessarily, a new instance, and you assign to names, not objects. You can't simply replace the new instance with an instance of list. That is, the new instance of meta doesn't contain a reference to another list instance; it is the list instance, just with a __class__ attribute that refers to meta, not list.
If you don't want to make a copy of the argument, don't make a list argument in the first place.
class meta(list):
    def __init__(self, data=None, metadata=None):
        if data is not None:
            super().__init__(data)  # performs a copy of data
        else:
            super().__init__()
        self.metadata = metadata

    def foo(self):
        new_data = range(10)
        return meta(new_data, "my meta data")
list.__init__ isn't expecting a list; it's just expecting an iterable, references to the elements of which can be added to the just-constructed list.
You would probably want to override __new__ anyway, since list.__new__ already contains the logic that decides if there is an iterable available to pass to __init__.
class meta(list):
    def __new__(cls, *args, metadata=None, **kwargs):
        new_list = super().__new__(cls, *args, **kwargs)
        new_list.metadata = metadata
        return new_list

    def __init__(self, *args, metadata=None, **kwargs):
        # swallow the metadata keyword so list.__init__ only sees the iterable
        super().__init__(*args, **kwargs)

    def foo(self):
        return meta(range(10), metadata="my meta data")
(And finally, foo should probably be a static method or a class method, since it makes no use of the meta instance that invokes it.)
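A sketch of that last point (not from the answer), applied to the __init__-based variant:

class meta(list):
    def __init__(self, data=None, metadata=None):
        super().__init__(data if data is not None else ())
        self.metadata = metadata

    @classmethod
    def foo(cls):
        # no instance state is used, so a classmethod suffices
        return cls(range(10), metadata="my meta data")

m = meta.foo()  # works without an existing instance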
Although @chepner didn't answer my question directly, I got some encouragement from them and came up with the following solution.
Consider having two meta objects and we want to add the underlying list objects together element by element.
The most straightforward answer is similar to my original post. I'll leave the error checking to the reader to implement.
class meta(list):
    def __init__(self, data, metadata):
        super().__init__(data)
        self.metadata = metadata

    def __add__(self, other):
        return meta([self[i] + other[i] for i in range(len(self))], self.metadata)

a = meta([1, 2, 3, 4, 5], "meta data")
b = meta([6, 7, 8, 9, 0], "more data")
a + b
[7, 9, 11, 13, 5]
Note: The meta data is just there to complete the example and I know I left out checks for making sure the b list isn't shorter than the a list. However, the operation works as expected. It's also easy to see the list created by the list comprehension is later copied completely in meta.__init__ at the call to list.__init__ via the super().__init__(...) call. This second copy is what I wanted to avoid.
Thanks to @chepner's identification of list.__init__ taking an iterable, I came up with the following solution.
class meta_add_iterable:
    def __init__(self, *data):
        self.data = data  # copies only the references to the tuple of arguments passed in
        self.current = -1

    def __iter__(self):
        return self

    def __next__(self):
        self.current += 1
        if self.current < len(self.data[0]):
            return sum([i[self.current] for i in self.data])
        raise StopIteration

class meta(list):
    def __init__(self, data, metadata):
        super().__init__(data)
        self.metadata = metadata

    def __add__(self, other):
        return meta(meta_add_iterable(self, other), self.metadata)

a = meta([1, 2, 3, 4, 5], "meta data")
b = meta([6, 7, 8, 9, 0], "more data")
a + b
[7, 9, 11, 13, 5]
In the above implementation, there is no interim list created via list comprehension in the meta.__add__ method. Instead, we simply pass the two meta objects (self and other) to the iterator object. The only copying done here is of the references to the original meta objects, so the iterator can refer to them. Then the call to meta.__init__ passes the iterator in instead of an already created list. The list.__init__ method simply builds the list from this iterator, which means each referred-to meta object is accessed only once to retrieve its data, and the result is written only once, in list.__init__. The secondary copy is completely elided in this implementation because the add operation is actually deferred until the iterator's __next__ method is called in list.__init__.
The best part is we don't need to check if we are initializing from a list object or an iterator!
I know there are plenty of things that can go wrong as it stands. I left out all of the error checking and such so just the process was visible to the reader.
I'm not sure if implementing it using the __new__ method would still be better, as suggested by @chepner. Personally, I can't see the benefit. Maybe @chepner can expand on why that may still be the better solution. Either way, this seems to have answered my question and I'm hopeful it may help others.
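As an aside (not from the original post): since list.__init__ accepts any iterable, a generator expression gives the same deferred, copy-free addition without a helper class, assuming equal lengths:

class meta(list):
    def __init__(self, data, metadata):
        super().__init__(data)
        self.metadata = metadata

    def __add__(self, other):
        # zip() and the generator expression are consumed lazily by
        # list.__init__, so no intermediate list is ever built
        return meta((x + y for x, y in zip(self, other)), self.metadata)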

Choosing variables to create when initializing classes

I have a class which would be a container for a number of variables of different types. The collection is finite and not very large, so I didn't use a dictionary. Is there a way to automate, or shorten, the creation of variables based on whether or not they are requested (specified as True/False) in the constructor?
Here is what I have for example:
class test:
    def __init__(self, a=False, b=False, c=False):
        if a: self.a = {}
        if b: self.b = 34
        if c: self.c = "generic string"
For any of a,b,c that are true in the constructor they will be created in the object.
I have a collection of standard variables (a,b,c,d..) that some objects will have and some objects won't. The number of combinations is too large to create separate classes, but the number of variables isn't enough to have a dictionary for them in each class.
Is there any way in python to do something like this:
class test:
    def __init__(self, *args):
        default_values = {a: {}, b: 34, c: "generic string"}
        for item in args:
            if item: self.arg = default_values[arg]
Maybe there is a whole other way to do this?
EDIT:
To clarify: this is a class which represents different types of bounding boxes on a 2D surface. Depending on the function of the box, it can have any of: frame coordinates, internal cross coordinates, id, population statistics (attached to that box), and some other cached values for easy calculation.
I don't want to have each object as a dictionary because there are methods attached to it which allow it to export and modify its internal data and interact with other objects of the same type (similar to how strings interact with + - .join, etc.). I also don't want to have a dictionary inside each object because the call to that variable is inelegant:
print foo.info["a"]
versus
print foo.a
Thanks to ballsdotball I've come up with a solution:
class test:
    def __init__(self, a=False, b=False, c=False):
        default_values = {"a": {}, "b": 34, "c": "generic string"}
        for k, v in default_values.iteritems():
            if eval(k): setattr(self, k, v)
Maybe something like:
def __init__(self, *args, **kwargs):
    default_values = {"a": {}, "b": 34, "c": "generic string"}
    for k, v in kwargs.iteritems():
        try:
            if not v is False:
                setattr(self, k, default_values[k])
        except Exception, e:
            print "Argument has no default value.", e
But to be honest I would just put the default values in with the init arguments instead of having to test for them like that.
*Edited a couple times for syntax.
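For illustration, my reading of that last suggestion (not code from the thread), with None as a sentinel so the mutable default is not shared between instances:

class test:
    def __init__(self, a=None, b=34, c="generic string"):
        self.a = a if a is not None else {}  # fresh dict per instance
        self.b = b
        self.c = c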
You can subclass dict (if you aren't using positional arguments):
class Test(dict):
    def your_method(self):
        return self['foo'] * 4
You can also override __getattr__ and __setattr__ if the self['foo'] syntax bothers you:
class Test(dict):
    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError:
            raise AttributeError(key)

    def __setattr__(self, key, value):
        self[key] = value

    def your_method(self):
        return self.foo * 4
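Usage might look like this (my example, not from the answer):

t = Test(foo=3)
t['foo']         # 3, via the normal dict interface
t.foo            # 3, via the overridden __getattr__
t.bar = 5        # stored as an item by __setattr__
t.your_method()  # 12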

Reuse existing objects for immutable objects?

In Python, how is it possible to reuse existing equal immutable objects (like is done for str)? Can this be done just by defining a __hash__ method, or does it require more complicated measures?
If you want to create via the class constructor and have it return a previously created object then you will need to provide a __new__ method (because by the time you get to __init__ the object has already been created).
Here is a simple example - if the value used to initialise has been seen before then a previously created object is returned rather than a new one created:
class Cached(object):
    """Simple example of immutable object reuse."""
    def __init__(self, i):
        self.i = i

    def __new__(cls, i, _cache={}):
        try:
            return _cache[i]
        except KeyError:
            # you must call __new__ on the base class
            x = super(Cached, cls).__new__(cls)
            x.__init__(i)
            _cache[i] = x
            return x
Note that for this example you can use anything to initialise as long as it's hashable. And just to show that objects really are being reused:
>>> a = Cached(100)
>>> b = Cached(200)
>>> c = Cached(100)
>>> a is b
False
>>> a is c
True
There are two 'software engineering' solutions to this that don't require any low-level knowledge of Python. They apply in the following scenarios:
First Scenario: Objects of your class are 'equal' if they are constructed with the same constructor parameters, and equality won't change over time after construction. Solution: use a factory that hashes the constructor parameters:
class MyClass:
    def __init__(self, someint, someotherint):
        self.a = someint
        self.b = someotherint

cachedict = {}

def construct_myobject(someint, someotherint):
    if (someint, someotherint) not in cachedict:
        cachedict[(someint, someotherint)] = MyClass(someint, someotherint)
    return cachedict[(someint, someotherint)]
This approach essentially limits the instances of your class to one unique object per distinct input pair. There are obvious drawbacks as well: not all types are easily hashable and so on.
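To illustrate the factory (usage not shown in the answer):

x = construct_myobject(1, 2)
y = construct_myobject(1, 2)
z = construct_myobject(1, 3)
x is y  # True: equal constructor arguments yield the same cached instance
x is z  # False: distinct arguments yield a distinct instance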
Second Scenario: Objects of your class are mutable and their 'equality' may change over time. Solution: define a class-level registry of equal instances:
class MyClass:
    registry = {}

    def __init__(self, someint, someotherint, third):
        MyClass.registry[id(self)] = (someint, someotherint)
        self.someint = someint
        self.someotherint = someotherint
        self.third = third

    def __eq__(self, other):
        return MyClass.registry[id(self)] == MyClass.registry[id(other)]

    def update(self, someint, someotherint):
        MyClass.registry[id(self)] = (someint, someotherint)
In this example, objects with the same someint, someotherint pair are equal, while the third parameter does not factor in. The trick is to keep the parameters in registry in sync. As an alternative to update, you could override __setattr__ for your class instead; this would ensure that any assignment foo.someint = y is kept in sync with your class-level dictionary.
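A short usage sketch of the registry approach (my example, not from the answer):

a = MyClass(1, 2, "x")
b = MyClass(1, 2, "y")
a == b         # True: third is not part of the registered equality key
b.update(3, 4)
a == b         # False: b's registry entry has changed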
I believe you would have to keep a dict {args: object} of instances already created, then override the class' __new__ method to check in that dictionary, and return the relevant object if it already existed. Note that I haven't implemented or tested this idea. Of course, strings are handled at the C level.
