Overview
I have a python class inheritance structure in which most methods are defined in the base class and most attributes on which those methods rely are defined in child classes.
The base class looks roughly like this:
from abc import ABCMeta, abstractproperty

class Base(object):
    __metaclass__ = ABCMeta

    @abstractproperty
    def property1(self):
        pass

    @abstractproperty
    def property2(self):
        pass

    def method1(self):
        print(self.property1)

    def method2(self, val):
        return self.property2(val)
while the child class looks like this:
class Child(Base):
property1 = 'text'
property2 = function
where function is a function that looks like this:
def function(val):
return val + 1
Obviously the code above is missing details, but the structure mirrors that of my real code.
The Problem
When I attempt to use method1 in the base class everything works as expected:
>>> child = Child()
>>> child.method1()
'text'
However, attempting the same for method2 spits an error:
>>> child = Child()
>>> child.method2(1) # expected 2
TypeError: method2() takes exactly 1 argument (2 given)
The extra argument being passed implicitly is the Child instance itself.
I'm wondering if there's a way to avoid passing this implicit second argument when calling method2.
Attempts
One workaround I've found is to define an abstract method in the base class then build that function in the child classes like so:
class Base(object):
    __metaclass__ = ABCMeta

    @abstractproperty
    def property1(self):
        pass

    @abstractmethod
    def method2(self, val):
        pass

    def method1(self):
        print(self.property1)

class Child(Base):
    property1 = 'text'

    def method2(self, val):
        return function(val)
However, I would prefer that this method live in the base class. Any thoughts? Thanks in advance!
Methods implicitly receive self as the first argument, even if it seems that it is not passed. For example:
class C:
def f(self, x):
print(x)
C.f takes two arguments, but you'd normally call it with just one:
c = C()
c.f(1)
The way it is done is that when you access c.f a "bound" method is created which implicitly takes c as the first argument.
The same happens if you assign an external function to a class and use it as a method, as you did.
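A minimal sketch reproducing the question's error with this mechanism (illustrative; it reuses the question's names, and the exact TypeError wording depends on the Python version):

def function(val):
    return val + 1

class Child(object):
    # Assigning the plain one-argument function to a class attribute makes it
    # a bound method on instances, so the instance is passed as an extra argument.
    property2 = function

Child().property2(1)
# TypeError: function() takes 1 positional argument but 2 were given  (Python 3)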
Solution 1
The usual way to implement a method in a child class is to define it there explicitly, rather than in an external function; so instead of what you did, I would do:
class Child(Base):
property1 = 'text'
# instead of: property2 = function
def property2(self, val):
return val + 1
Solution 2
If you really want to have property2 = function in the class (can't see why) and function out of the class, then you have to take care of self:
def function(self, val):  # note the extra self parameter
    return val + 1

class Child(Base):
    property1 = 'text'
    property2 = function
Solution 3
If you want the previous solution, but without self in function:
class Child(Base):
property1 = 'text'
def property2(self, val):
return function(val)
def function(val):
return val + 1
Solution
Make your method static:
class Child(Base):
    property2 = staticmethod(function)
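Put together with the question's setup, a small sketch (assuming Base and function are defined as in the question):

class Child(Base):
    property1 = 'text'
    property2 = staticmethod(function)

child = Child()
print(child.method2(1))  # 2 -- no implicit self is passed to function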
Explanation
As zvone already explained, bound methods implicitly receive self as the first parameter.
To create a bound method you don't necessarily need to define it in the class body.
This:
def foo(self):
print("foo")
class Foo:
bar = foo
f = Foo()
print(f.bar)
will output:
<bound method foo of <__main__.Foo object at 0x014EC790>>
A function assigned to a class attribute therefore behaves just like a method defined in the class body, meaning that if you call it as f.bar() it is treated as a bound method and self is implicitly passed as the first parameter.
What is (and what is not) implicitly passed to a method as the first argument is controlled with these decorators (a quick sketch follows the list):
@classmethod: the class itself is passed as the first argument
@staticmethod: no arguments are implicitly passed to the method
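A quick sketch of the difference (illustrative; the names here are made up):

class Demo:
    def plain(self):
        return self                     # the instance is passed implicitly

    @classmethod
    def with_cls(cls):
        return cls                      # the class is passed implicitly

    @staticmethod
    def without_args():
        return "nothing passed implicitly"

d = Demo()
print(d.plain() is d)        # True
print(d.with_cls() is Demo)  # True
print(d.without_args())      # nothing passed implicitly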
So you want the behavior of a staticmethod, but since you are simply assigning an already defined function to a class attribute, you cannot use the decorator syntax.
But since decorators are just normal functions which take a function as parameter and return a wrapped function, this:
class Child(Base):
property2 = staticmethod(function)
is equivalent (*) to this:
class Child(Base):
    @staticmethod
    def property2(val):
        return function(val)
Further improvements
I would suggest a small additional modification to your Base class:
Rename property2 and mark it not as abstractproperty but as abstractstaticmethod(**).
This will help colleagues (and eventually yourself) to understand better what kind of implementation is expected in the child class.
class Base(object):
    __metaclass__ = ABCMeta

    @abstractstaticmethod
    def staticmethod1(val):
        pass
(*) well, more or less. The former actually assigns function to property2, the latter creates a new static method which delegates to function.
(**) abstractstaticmethod is deprecated since Python 3.3, but since you are also using abstractproperty I wanted to be consistent.
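As a side note on (**): in Python 3.3+ the usual spelling is to stack @staticmethod on top of @abstractmethod instead. A minimal sketch, reusing the staticmethod1 name from above:

from abc import ABCMeta, abstractmethod

class Base(metaclass=ABCMeta):
    @staticmethod
    @abstractmethod
    def staticmethod1(val):
        pass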
I have a class factory method that is used to instantiate an object. When multiple objects are created through this method, I want to be able to compare the classes of the objects. When using isinstance, the comparison is False, as can be seen in the simple example below. Running id(a.__class__) and id(b.__class__) also gives different ids.
Is there a simple way of achieving this? I know that this does not exactly conform to duck-typing, however this is the easiest solution for the program I am writing.
def factory():
class MyClass(object):
def compare(self, other):
print('Comparison Result: {}'.format(isinstance(other, self.__class__)))
return MyClass()
a = factory()
b = factory()
print(a.compare(b))
The reason is that MyClass is created dynamically every time you run factory. If you print(id(MyClass)) inside factory you get different results:
>>> a = factory()
140465711359728
>>> b = factory()
140465712488632
This is because they are actually different classes, dynamically created and locally scoped at the time of the call.
One way to fix this is to return (or yield) multiple instances:
>>> def factory(n):
class MyClass(object):
def compare(self, other):
print('Comparison Result: {}'.format(isinstance(other, self.__class__)))
for i in range(n):
yield MyClass()
>>> a, b = factory(2)
>>> a.compare(b)
Comparison Result: True
This is one possible implementation.
EDIT: If the instances are created dynamically, then the above solution is invalid. One way to do it is to create a superclass outside, then inside the factory function subclass from that superclass:
>>> class MyClass(object):
pass
>>> def factory():
class SubClass(MyClass):
def compare(self, other):
print('Comparison Result: {}'.format(isinstance(other, self.__class__)))
return SubClass()
However, this does not work because they are still different classes. So you need to change your comparison method to check against the first superclass:
isinstance(other, self.__class__.__mro__[1])
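Putting the pieces together, a sketch of that approach (illustrative; names mirror the answer):

class MyClass(object):
    pass

def factory():
    class SubClass(MyClass):
        def compare(self, other):
            # check against the shared superclass, not the per-call SubClass
            return isinstance(other, self.__class__.__mro__[1])
    return SubClass()

a = factory()
b = factory()
print(a.compare(b))  # True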
If your class definition is inside the factory function, then each instance of the class you create will be an instance of a separate class. That's because the class definition is a statement that is executed on every call, just like an assignment. The name and contents of the different classes will be the same, but their identities will be distinct.
I don't think there's any simple way to get around that without changing the structure of your code in some way. You've said that your actual factory function is a method of a class, which suggests that you might be able to move the class definition somewhere else so that it can be shared by multiple calls to the factory method. Depending on what information you expect the inner class to use from the outer class, you might define it at class level (so there'd be only one class definition used everywhere), or you could define it in another method, like __init__ (which would create a new inner class for every instance of the outer class).
Here's what that last approach might look like:
class Outer(object):
def __init__(self):
class Inner(object):
def compare(self, other):
print('Comparison Result: {}'.format(isinstance(other, self.__class__)))
self.Inner = Inner
def factory(self):
return self.Inner()
f = Outer()
a = f.factory()
b = f.factory()
print(a.compare(b)) # True
g = Outer() # create another instance of the outer class
c = g.factory()
print(a.compare(c)) # False
It's not entirely clear what you're asking. It seems to me you want a simpler version of the code you already posted. If that's incorrect, this answer is not relevant.
You can create classes dynamically by explicitly constructing a new instance of the type type.
def compare(self, other):
    ...

def factory():
    return type("MyClass", (object,), {'compare': compare})()
type takes three arguments: the name, the bases, and a dict of attributes. So this will behave the same way as your previous code.
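As a quick check (a sketch, reusing the factory above): each call still builds a brand-new class object, so instances from separate calls remain of different types.

a = factory()
b = factory()
print(type(a) is type(b))                   # False
print(type(a).__name__, type(b).__name__)   # MyClass MyClass (same name, different classes)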
Working off the answer from @rassar, and adding some more detail to represent the actual implementation (e.g. the factory method existing in a parent class), I have come up with the working example below.
From @rassar's answer, I realised that the class is dynamically created each time, so defining it within the parent object (or even above that) means that it will be the same class definition each time it is called.
class Parent(object):
class MyClass(object):
def __init__(self, parent):
self.parent = parent
def compare(self, other):
print('Comparison Result: {}'.format(isinstance(other, self.__class__)))
def factory(self):
return self.MyClass(self)
a = Parent()
b = a.factory()
c = a.factory()
b.compare(c)
print(id(b.__class__))
print(id(c.__class__))
I'd like to do something like this:
class X:
    @classmethod
    def id(cls):
        return cls.__name__

    def id(self):
        return self.__class__.__name__
And now call id() for either the class or an instance of it:
>>> X.id()
'X'
>>> X().id()
'X'
Obviously, this exact code doesn't work, but is there a similar way to make it work?
Or any other workarounds to get such behavior without too much "hacky" stuff?
Class and instance methods live in the same namespace and you cannot reuse names like that; the last definition of id will win in that case.
The class method will continue to work on instances, however, so there is no need to create a separate instance method; just use:
class X:
    @classmethod
    def id(cls):
        return cls.__name__
because the method continues to be bound to the class:
>>> class X:
...     @classmethod
...     def id(cls):
...         return cls.__name__
...
>>> X.id()
'X'
>>> X().id()
'X'
This is explicitly documented:
It can be called either on the class (such as C.f()) or on an instance (such as C().f()). The instance is ignored except for its class.
If you do need to distinguish between binding to the class and to an instance
If you need a method to work differently depending on where it is accessed: bound to the class when accessed on the class, bound to the instance when accessed on the instance, then you'll need to create a custom descriptor object.
The descriptor API is how Python causes functions to be bound as methods, and bind classmethod objects to the class; see the descriptor howto.
You can provide your own descriptor for methods by creating an object that has a __get__ method. Here is a simple one that switches what the method is bound to based on context, if the first argument to __get__ is None, then the descriptor is being bound to a class, otherwise it is being bound to an instance:
class class_or_instancemethod(classmethod):
def __get__(self, instance, type_):
descr_get = super().__get__ if instance is None else self.__func__.__get__
return descr_get(instance, type_)
This re-uses classmethod and only re-defines how it handles binding, delegating the original implementation for instance is None, and to the standard function __get__ implementation otherwise.
Note that in the method itself, you may then have to test what it is bound to; isinstance(firstargument, type) is a good test for this:
>>> class X:
...     @class_or_instancemethod
...     def foo(self_or_cls):
...         if isinstance(self_or_cls, type):
...             return f"bound to the class, {self_or_cls}"
...         else:
...             return f"bound to the instance, {self_or_cls}"
...
>>> X.foo()
"bound to the class, <class '__main__.X'>"
>>> X().foo()
'bound to the instance, <__main__.X object at 0x10ac7d580>'
An alternative implementation could use two functions, one for when bound to a class, the other when bound to an instance:
class hybridmethod:
def __init__(self, fclass, finstance=None, doc=None):
self.fclass = fclass
self.finstance = finstance
self.__doc__ = doc or fclass.__doc__
# support use on abstract base classes
self.__isabstractmethod__ = bool(
getattr(fclass, '__isabstractmethod__', False)
)
def classmethod(self, fclass):
return type(self)(fclass, self.finstance, None)
def instancemethod(self, finstance):
return type(self)(self.fclass, finstance, self.__doc__)
def __get__(self, instance, cls):
if instance is None or self.finstance is None:
# either bound to the class, or no instance method available
return self.fclass.__get__(cls, None)
return self.finstance.__get__(instance, cls)
This then is a classmethod with an optional instance method. Use it like you'd use a property object; decorate the instance method with @<name>.instancemethod:
>>> class X:
...     @hybridmethod
...     def bar(cls):
...         return f"bound to the class, {cls}"
...     @bar.instancemethod
...     def bar(self):
...         return f"bound to the instance, {self}"
...
>>> X.bar()
"bound to the class, <class '__main__.X'>"
>>> X().bar()
'bound to the instance, <__main__.X object at 0x10a010f70>'
Personally, my advice is to be cautious about using this; the exact same method altering behaviour based on the context can be confusing to use. However, there are use-cases for this, such as SQLAlchemy's differentiation between SQL objects and SQL values, where column objects in a model switch behaviour like this; see their Hybrid Attributes documentation. The implementation for this follows the exact same pattern as my hybridmethod class above.
I have no idea what your actual use case is, but you can do something like this using a descriptor:
class Desc(object):
def __get__(self, ins, typ):
if ins is None:
print 'Called by a class.'
return lambda : typ.__name__
else:
print 'Called by an instance.'
return lambda : ins.__class__.__name__
class X(object):
id = Desc()
x = X()
print x.id()
print X.id()
Output
Called by an instance.
X
Called by a class.
X
It can be done, quite succinctly, by binding the instance-bound version of your method explicitly to the instance (rather than to the class). Python will invoke the instance attribute found in Class().__dict__ when Class().foo() is called (because it searches the instance's __dict__ before the class'), and the class-bound method found in Class.__dict__ when Class.foo() is called.
This has a number of potential use cases, though whether they are anti-patterns is open for debate:
class Test:
    def __init__(self):
        self.check = self.__check

    @staticmethod
    def check():
        print('Called as class')

    def __check(self):
        print('Called as instance, probably')
>>> Test.check()
Called as class
>>> Test().check()
Called as instance, probably
Or... let's say we want to be able to abuse stuff like map():
class Str(str):
    def __init__(self, *args):
        self.split = self.__split

    @staticmethod
    def split(sep=None, maxsplit=-1):
        return lambda string: string.split(sep, maxsplit)

    def __split(self, sep=None, maxsplit=-1):
        return super().split(sep, maxsplit)
>>> s = Str('w-o-w')
>>> s.split('-')
['w', 'o', 'w']
>>> Str.split('-')(s)
['w', 'o', 'w']
>>> list(map(Str.split('-'), [s]*3))
[['w', 'o', 'w'], ['w', 'o', 'w'], ['w', 'o', 'w']]
"types" provides something quite interesting since Python 3.4: DynamicClassAttribute
It is not doing 100% of what you had in mind, but it seems to be closely related, and you might need to tweak a bit my metaclass but, rougly, you can have this;
from types import DynamicClassAttribute
class XMeta(type):
    def __getattr__(self, value):
        if value == 'id':
            return XMeta.id  # You may want to change that line a bit.

    @property
    def id(self):
        return "Class {}".format(self.__name__)
That would define your class attribute. For the instance attribute:
class X(metaclass=XMeta):
    @DynamicClassAttribute
    def id(self):
        return "Instance {}".format(self.__class__.__name__)
It might be a bit overkill especially if you want to stay away from metaclasses. It's a trick I'd like to explore on my side, so I just wanted to share this hidden jewel, in case you can polish it and make it shine!
>>> X().id
'Instance X'
>>> X.id
'Class X'
Voila...
In your example, you could simply delete the second method entirely, since both the class method and the instance method do the same thing.
If you wanted them to do different things:
class X:
    def id(self=None):
        if self is None:
            # It's being called as a class-level ("static") method
            return X.__name__
        else:
            # It's being called as an instance method
            return self.__class__.__name__
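A quick usage sketch (Python 3; the return values above are just an illustration):

print(X.id())    # self defaults to None  -> 'X'
print(X().id())  # self is the instance   -> 'X'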
(Python 3 only) Elaborating on the idea of a pure-Python implementation of @classmethod, we can declare an @class_or_instance_method as a decorator, which is actually a class implementing the attribute descriptor protocol:
import inspect
class class_or_instance_method(object):
def __init__(self, f):
self.f = f
def __get__(self, instance, owner):
if instance is not None:
class_or_instance = instance
else:
class_or_instance = owner
def newfunc(*args, **kwargs):
return self.f(class_or_instance, *args, **kwargs)
return newfunc
class A:
    @class_or_instance_method
    def foo(self_or_cls, a, b, c=None):
        if inspect.isclass(self_or_cls):
            print("Called as a class method")
        else:
            print("Called as an instance method")
I have two methods: one for an individual instance, and one for every instance of the class:
class MasterMatches(models.Model):
    @classmethod
    def update_url_if_any_matches_has_one(cls):
        # apply to all instances, call instance method
        ...

    def update_url_if_any_matches_has_one(self):
        # do something
        ...
Should I name these the same? Or, what is a good naming convention here?
The question of using the same names can be clarified by understanding how decorators work.
@dec
def foo(x):
    print(x)
translates to
def foo(x):
print(x)
foo = dec(foo)
In your example the decorator syntax can be expanded to
class MasterMatches(models.Model):
def update_url_if_any_matches_has_one(cls):
# apply to all instances, call instance method.
update_url_if_any_matches_has_one = classmethod(update_url_if_any_matches_has_one)
def update_url_if_any_matches_has_one(self):
# do something
The former implementation of update_url_if_any_matches_has_one will be overwritten by the latter.
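A tiny sketch of that overwriting in action (the names here are made up):

class Demo:
    @classmethod
    def f(cls):
        return "classmethod version"

    def f(self):           # rebinds the name: the classmethod above is discarded
        return "instance version"

print(Demo().f())   # instance version
print(Demo.f)       # a plain function -- the classmethod is gone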
Usually, use the normal instance-method (self) style. Use @classmethod only if the method does not work with instance fields.
A function decorated with @classmethod receives the class as its first argument, while a normal method receives the instance:
class A:
    @classmethod
    def a(cls):
        print(cls)

    def b(self):
        print(self)
a = A()
a.a()
a.b()
# Output:
# <class '__main__.A'>
# <__main__.A object at 0x03FC5DF0>
This can be useful if you have static class fields: to access them you don't need to explicitly specify the class name. But you don't get access to instance fields. Example:
class A:
    field = 1

    @classmethod
    def a(cls):
        print(cls.field)

    def b(self):
        self.field = 2
        print(self.field, A.field)
a = A()
a.a()
a.b()
# Outputs:
# 1
# 2 1
Toward the end of a program I'm looking to load a specific variable from all the instances of a class into a dictionary.
For example:
class Foo():
def __init__(self):
self.x = {}
foo1 = Foo()
foo2 = Foo()
...
Let's say the number of instances will vary and I want the x dict from each instance of Foo() loaded into a new dict. How would I do that?
The examples I've seen in SO assume one already has the list of instances.
One way to keep track of instances is with a class variable:
class A(object):
instances = []
def __init__(self, foo):
self.foo = foo
A.instances.append(self)
At the end of the program, you can create your dict like this:
foo_vars = {id(instance): instance.foo for instance in A.instances}
There is only one list:
>>> a = A(1)
>>> b = A(2)
>>> A.instances
[<__main__.A object at 0x1004d44d0>, <__main__.A object at 0x1004d4510>]
>>> id(A.instances)
4299683456
>>> id(a.instances)
4299683456
>>> id(b.instances)
4299683456
@JoelCornett's answer covers the basics perfectly. This is a slightly more complicated version, which might help with a few subtle issues.
If you want to be able to access all the "live" instances of a given class, subclass the following (or include equivalent code in your own base class):
from weakref import WeakSet
class base(object):
def __new__(cls, *args, **kwargs):
instance = object.__new__(cls, *args, **kwargs)
if "instances" not in cls.__dict__:
cls.instances = WeakSet()
cls.instances.add(instance)
return instance
This addresses two possible issues with the simpler implementation that @JoelCornett presented:
Each subclass of base will keep track of its own instances separately. You won't get subclass instances in a parent class's instance list, and one subclass will never stumble over instances of a sibling subclass. This might be undesirable, depending on your use case, but it's probably easier to merge the sets back together than it is to split them apart.
The instances set uses weak references to the class's instances, so if you del or reassign all the other references to an instance elsewhere in your code, the bookkeeping code will not prevent it from being garbage collected. Again, this might not be desirable for some use cases, but it is easy enough to use regular sets (or lists) instead of a weakset if you really want every instance to last forever.
Some handy-dandy test output (with the instances sets always being passed to list only because they don't print out nicely):
>>> b = base()
>>> list(base.instances)
[<__main__.base object at 0x00000000026067F0>]
>>> class foo(base):
... pass
...
>>> f = foo()
>>> list(foo.instances)
[<__main__.foo object at 0x0000000002606898>]
>>> list(base.instances)
[<__main__.base object at 0x00000000026067F0>]
>>> del f
>>> list(foo.instances)
[]
You would probably want to use weak references to your instances. Otherwise the class could likely end up keeping track of instances that were meant to have been deleted. A weakref.WeakSet will automatically remove any dead instances from its set.
One way to keep track of instances is with a class variable:
import weakref
class A(object):
instances = weakref.WeakSet()
def __init__(self, foo):
self.foo = foo
A.instances.add(self)
    @classmethod
def get_instances(cls):
return list(A.instances) #Returns list of all current instances
At the end of the program, you can create your dict like this:
foo_vars = {id(instance): instance.foo for instance in A.instances}
There is only one list:
>>> a = A(1)
>>> b = A(2)
>>> A.get_instances()
[<inst.A object at 0x100587290>, <inst.A object at 0x100587250>]
>>> id(A.instances)
4299861712
>>> id(a.instances)
4299861712
>>> id(b.instances)
4299861712
>>> a = A(3) #original a will be dereferenced and replaced with new instance
>>> A.get_instances()
[<inst.A object at 0x100587290>, <inst.A object at 0x1005872d0>]
You can also solve this problem using a metaclass:
When a class is created (__init__ method of metaclass), add a new instance registry
When a new instance of this class is created (__call__ method of metaclass), add it to the instance registry.
The advantage of this approach is that each class has a registry - even if no instance exists. In contrast, when overriding __new__ (as in Blckknght's answer), the registry is added when the first instance is created.
import weakref

class MetaInstanceRegistry(type):
    """Metaclass providing an instance registry"""
def __init__(cls, name, bases, attrs):
# Create class
super(MetaInstanceRegistry, cls).__init__(name, bases, attrs)
# Initialize fresh instance storage
cls._instances = weakref.WeakSet()
def __call__(cls, *args, **kwargs):
# Create instance (calls __init__ and __new__ methods)
inst = super(MetaInstanceRegistry, cls).__call__(*args, **kwargs)
# Store weak reference to instance. WeakSet will automatically remove
# references to objects that have been garbage collected
cls._instances.add(inst)
return inst
def _get_instances(cls, recursive=False):
"""Get all instances of this class in the registry. If recursive=True
search subclasses recursively"""
instances = list(cls._instances)
if recursive:
for Child in cls.__subclasses__():
instances += Child._get_instances(recursive=recursive)
# Remove duplicates from multiple inheritance.
return list(set(instances))
Usage: Create a registry and subclass it.
class Registry(object):
__metaclass__ = MetaInstanceRegistry
class Base(Registry):
def __init__(self, x):
self.x = x
class A(Base):
pass
class B(Base):
pass
class C(B):
pass
a = A(x=1)
a2 = A(2)
b = B(x=3)
c = C(4)
for cls in [Base, A, B, C]:
print cls.__name__
print cls._get_instances()
print cls._get_instances(recursive=True)
print
del c
print C._get_instances()
If using abstract base classes from the abc module, just subclass abc.ABCMeta to avoid metaclass conflicts:
from abc import ABCMeta, abstractmethod
class ABCMetaInstanceRegistry(MetaInstanceRegistry, ABCMeta):
pass
class ABCRegistry(object):
__metaclass__ = ABCMetaInstanceRegistry
class ABCBase(ABCRegistry):
__metaclass__ = ABCMeta
    @abstractmethod
def f(self):
pass
class E(ABCBase):
def __init__(self, x):
self.x = x
def f(self):
return self.x
e = E(x=5)
print E._get_instances()
Another option for quick low-level hacks and debugging is to filter the list of objects returned by gc.get_objects() and generate the dictionary on the fly that way. In CPython that function will return you a (generally huge) list of everything the garbage collector knows about, so it will definitely contain all of the instances of any particular user-defined class.
Note that this is digging a bit into the internals of the interpreter, so it may or may not work (or work well) with the likes of Jython, PyPy, IronPython, etc. I haven't checked. It's also likely to be really slow regardless. Use with caution/YMMV/etc.
However, I imagine that some people running into this question might eventually want to do this sort of thing as a one-off to figure out what's going on with the runtime state of some slice of code that's behaving strangely. This method has the benefit of not affecting the instances or their construction at all, which might be useful if the code in question is coming out of a third-party library or something.
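As a rough illustration of what that one-off might look like (CPython only; the Foo class here is just a stand-in for whatever class you are inspecting):

import gc

class Foo(object):
    def __init__(self):
        self.x = {}

foo1, foo2 = Foo(), Foo()

# Walk everything the garbage collector currently tracks and keep the Foo instances.
foo_vars = {id(obj): obj.x for obj in gc.get_objects() if isinstance(obj, Foo)}
print(foo_vars)  # {<id of foo1>: {}, <id of foo2>: {}}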
Here's a similar approach to Blckknght's, which works with subclasses as well. Thought this might be of interest, if someone ends up here. One difference: if B is a subclass of A, and b is an instance of B, b will appear in both A.instances and B.instances. As stated by Blckknght, whether this is desirable depends on the use case.
from weakref import WeakSet
class RegisterInstancesMixin:
instances = WeakSet()
def __new__(cls, *args, **kargs):
o = object.__new__(cls, *args, **kargs)
cls._register_instance(o)
return o
    @classmethod
def print_instances(cls):
for instance in cls.instances:
print(instance)
    @classmethod
def _register_instance(cls, instance):
cls.instances.add(instance)
for b in cls.__bases__:
if issubclass(b, RegisterInstancesMixin):
b._register_instance(instance)
def __init_subclass__(cls):
cls.instances = WeakSet()
class Animal(RegisterInstancesMixin):
pass
class Mammal(Animal):
pass
class Human(Mammal):
pass
class Dog(Mammal):
pass
alice = Human()
bob = Human()
cannelle = Dog()
Animal.print_instances()
Mammal.print_instances()
Human.print_instances()
Animal.print_instances() will print three objects, whereas Human.print_instances() will print two.
Using the answer from @Joel Cornett I've come up with the following, which seems to work, i.e. I'm able to total up object variables.
import os
os.system("clear")
class Foo():
instances = []
def __init__(self):
Foo.instances.append(self)
self.x = 5
class Bar():
def __init__(self):
pass
def testy(self):
self.foo1 = Foo()
self.foo2 = Foo()
self.foo3 = Foo()
foo = Foo()
print Foo.instances
bar = Bar()
bar.testy()
print Foo.instances
x_tot = 0
for inst in Foo.instances:
    x_tot += inst.x
    print x_tot
output:
[<__main__.Foo instance at 0x108e334d0>]
[<__main__.Foo instance at 0x108e334d0>, <__main__.Foo instance at 0x108e33560>, <__main__.Foo instance at 0x108e335a8>, <__main__.Foo instance at 0x108e335f0>]
5
10
15
20
(For Python 3.7+, which provides dataclasses.)
I have found a way to record class instances via the dataclass decorator while defining a class. Define a class attribute instances (or any other name) as a list of the instances you want to record, and append the dict form of each created object (via the dunder attribute __dict__) to it. The class attribute instances will then record the instances in dict form, which is what you want.
For example,
from dataclasses import dataclass

@dataclass
class player:
    instances = []

    def __init__(self, name, rank):
        self.name = name
        self.rank = rank
        self.instances.append(self.__dict__)
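A quick usage sketch of the class above (output shown as a comment):

p1 = player('alice', 1)
p2 = player('bob', 2)
print(player.instances)
# [{'name': 'alice', 'rank': 1}, {'name': 'bob', 'rank': 2}]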