Caching Python class instances

I have a memory-heavy class, say a type representing a high-resolution resource (i.e. media, models, data, etc.), that can be instantiated multiple times with identical parameters, such as the same filename being loaded more than once.
I'd like to implement some sort of unbounded caching on object creation so that identical instances share memory when they have the same constructor parameter values. I don't care about mutability of one instance affecting the other shared ones. What is the easiest Pythonic way to achieve this?
Note that none of singletons, object pools, factory methods, or field properties meet my use case.

You could use a factory function with functools.cache:
import functools

@functools.cache
def make_myclass(*args, **kwargs):
    return MyClass(*args, **kwargs)
EDIT: Apparently you can decorate your class directly to get the same effect:
@functools.cache
class Foo:
    def __init__(self, a):
        print("Creating new instance")
        self.a = a
>>> Foo(1)
Creating new instance
<__main__.Foo object at 0x0000021D7D61FFA0>
>>> Foo(1)
<__main__.Foo object at 0x0000021D7D61FFA0>
>>> Foo(2)
Creating new instance
<__main__.Foo object at 0x0000021D7D61F250>
Note the same memory address both times Foo(1) is called.
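One caveat worth knowing (and presumably what motivates Edit 2 below): functools.cache keys on the exact call pattern, so the same value passed positionally and by keyword produces two separate cache entries and therefore two instances. A quick sketch:

import functools

@functools.cache
class Foo:
    def __init__(self, a):
        self.a = a

print(Foo(1) is Foo(1))    # True: identical call patterns share one instance
print(Foo(1) is Foo(a=1))  # False: positional vs. keyword miss the cache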
Edit 2: After some playing around, you can get your default-respecting instance cache behavior if you override __new__ and do all of your caching and instantiation there:
class Foo:
    _cached = {}

    def __new__(cls, a, b=3):
        attrs = a, b
        if attrs in cls._cached:
            return cls._cached[attrs]
        print(f"Creating new instance Foo({a}, {b})")
        new_foo = super().__new__(cls)
        new_foo.a = a
        new_foo.b = b
        cls._cached[attrs] = new_foo
        return new_foo
a = Foo(1)
b = Foo(1, 3)
c = Foo(b=3, a=1)
d = Foo(4)
print(a is b)
print(b is c)
print(c is d)
output:
Creating new instance Foo(1, 3)
Creating new instance Foo(4, 3)
True
True
False
The __init__ will still be called after __new__, so you will want to do your expensive initialization (or all of it) in __new__ after the cache check.
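If the cache itself should not keep memory-heavy instances alive forever, a variation (a sketch, not part of the original answer) is to store them in a weakref.WeakValueDictionary, so an entry vanishes once all outside references to an instance are gone:

import weakref

class Foo:
    _cached = weakref.WeakValueDictionary()

    def __new__(cls, a, b=3):
        attrs = a, b
        try:
            return cls._cached[attrs]
        except KeyError:
            # Not cached (or already collected): build and remember it.
            new_foo = super().__new__(cls)
            new_foo.a = a
            new_foo.b = b
            cls._cached[attrs] = new_foo
            return new_foo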

Related

Some problems with an inherited classmethod in Python

I have this code:
from typing import Callable, Any

class Test(classmethod):
    def __init__(self, f: Callable[..., Any]):
        super().__init__(f)

    def __get__(self, *args, **kwargs):
        # Why is the output (None, <class '__main__.A'>)? Where does None
        # come from, and why is the parameter 123 not shown?
        # Where was this called from?
        print(args)
        return super().__get__(*args, **kwargs)

class A:
    @Test
    def b(cls, v_b):
        print(cls, v_b)

A.b(123)
Why is the output (None, <class '__main__.A'>)? Where did None come from, and why is it not the parameter 123, the value I called it with?
The __get__ method is called when the method b is retrieved from the A class. It has nothing to do with the actual calling of b.
To illustrate this, separate the access to b from the actual call of b:
print("Getting a reference to method A.b")
method = A.b
print("I have a reference to the method now. Let's call it.")
method()
This results in this output:
Getting a reference to method A.b
(None, <class '__main__.A'>)
I have a reference to the method now. Let's call it.
<class '__main__.A'> 123
So you see, it is normal that the output in __get__ does not show anything about the argument you call b with, because you haven't made the call yet.
The output None, <class '__main__.A'> is in line with the Python documentation on __get__:
object.__get__(self, instance, owner=None)
Called to get the attribute of the owner class (class attribute access) or of an instance of that class (instance attribute access). The optional owner argument is the owner class, while instance is the instance that the attribute was accessed through, or None when the attribute is accessed through the owner.
In your case you are using it for accessing an attribute (b) of a class (A) -- not of an instance of A -- so that explains the instance argument is None and the owner argument is your class A.
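To make this concrete, attribute access on the class can be spelled as an explicit descriptor call (a sketch using the Test descriptor defined above):

# What A.b does under the hood: invoke __get__ on the descriptor
# stored in the class dict, with instance=None and owner=A.
bound = A.__dict__['b'].__get__(None, A)
bound(123)  # only now is 123 passed to the underlying function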
The second output, made with print(cls,v_b), will print <class '__main__.A'> for cls, because that is what happens when you call class methods (as opposed to instance methods). Again, from the documentation:
When a class attribute reference (for class C, say) would yield a class method object, it is transformed into an instance method object whose __self__ attribute is C.
Your case is described here: A is the class, so the first parameter (which you called cls) will receive A as its value.
You can apply multiple decorators to the same function. For example, the first (outer) decorator could be classmethod, and the second one (doing your stuff) could define a wrapper that accepts your arguments as usual:
In [4]: def test_deco(func):
   ...:     def wrapper(cls, *args, **kwds):
   ...:         print("cls is", cls)
   ...:         print("That's where 123 should appear>>>", args, kwds)
   ...:         return func(cls, *args, **kwds)
   ...:     return wrapper
   ...:
   ...: class A:
   ...:     @classmethod
   ...:     @test_deco
   ...:     def b(cls, v_b):
   ...:         print("That's where 123 will appear as well>>>", v_b)
   ...:
   ...: A.b(123)
cls is <class '__main__.A'>
That's where 123 should appear>>> (123,) {}
That's where 123 will appear as well>>> 123
It's too much trouble to use two at a time; I want to use only one decorator.
It is possible to define a decorator that applies a couple of other decorators:
def my_super_decorator_doing_everything_at_once(func):
    return classmethod(my_small_decorator_doing_almost_everything(func))
That works because the decorator notation
@g
@f
def x(): ...
is a readable way to say
def x(): ...
x = g(f(x))
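Putting the two pieces together, a minimal self-contained sketch of such a combined decorator (the names are illustrative, not from the original answer):

import functools

def logged_classmethod(func):
    # Wrap func with logging, then turn the result into a classmethod.
    @functools.wraps(func)
    def wrapper(cls, *args, **kwargs):
        print("called with", args, kwargs)
        return func(cls, *args, **kwargs)
    return classmethod(wrapper)

class A:
    @logged_classmethod
    def b(cls, v_b):
        print(cls, v_b)

A.b(123)  # prints "called with (123,) {}" then "<class '__main__.A'> 123"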

Why does setting a bound method on a Python object create a circular reference?

I'm working in Python 2.7 and I found an issue that puzzles me.
Here is the simplest example:
>>> class A(object):
...     def __del__(self):
...         print("DEL")
...     def a(self):
...         pass
>>> a = A()
>>> del a
DEL
That is OK, as expected... now I try to replace the a() method of object a, and what happens is that after changing it I can't delete a any more:
>>> a = A()
>>> a.a = a.a
>>> del a
Just to do some checks, I printed the a.a reference before and after the assignment:
>>> a = A()
>>> print a.a
<bound method A.a of <__main__.A object at 0xe86110>>
>>> a.a = a.a
>>> print a.a
<bound method A.a of <__main__.A object at 0xe86110>>
Finally I used objgraph module to try to understand why the object is not released:
>>> b = A()
>>> import objgraph
>>> objgraph.show_backrefs([b], filename='pre-backref-graph.png')
>>> b.a = b.a
>>> objgraph.show_backrefs([b], filename='post-backref-graph.png')
As you can see in the post-backref-graph.png image, there is a __self__ reference in b that makes no sense to me, because the self reference of an instance method should be ignored (as it was before the assignment).
Can somebody explain this behaviour, and how can I work around it?
When you write a.a, it effectively runs:
A.a.__get__(a, A)
because you are not accessing a pre-bound method but the class's method, which is bound at runtime.
When you do
a.a = a.a
you effectively "cache" the act of binding the method. As the bound method has a reference to the object (obviously, as it has to pass self to the function) this creates a circular reference.
So I'm modelling your problem like:
class A(object):
    def __del__(self):
        print("DEL")
    def a(self):
        pass

def log_all_calls(function):
    def inner(*args, **kwargs):
        print("Calling {}".format(function))
        try:
            return function(*args, **kwargs)
        finally:
            print("Called {}".format(function))
    return inner

a = A()
a.a = log_all_calls(a.a)
a.a()
You can use weak references to bind on demand inside log_all_calls like:
import weakref

class A(object):
    def __del__(self):
        print("DEL")
    def a(self):
        pass

def log_all_calls_weakmethod(method):
    cls = method.im_class
    func = method.im_func
    instance_ref = weakref.ref(method.im_self)
    del method
    def inner(*args, **kwargs):
        instance = instance_ref()
        if instance is None:
            raise ValueError("Cannot call weak decorator with dead instance")
        function = func.__get__(instance, cls)
        print("Calling {}".format(function))
        try:
            return function(*args, **kwargs)
        finally:
            print("Called {}".format(function))
    return inner

a = A()
a.a = log_all_calls_weakmethod(a.a)
a.a()
This is really ugly, so I would rather extract it out to make a weakmethod decorator:
import weakref

def weakmethod(method):
    cls = method.im_class
    func = method.im_func
    instance_ref = weakref.ref(method.im_self)
    del method
    def inner(*args, **kwargs):
        instance = instance_ref()
        if instance is None:
            raise ValueError("Cannot call weak method with dead instance")
        return func.__get__(instance, cls)(*args, **kwargs)
    return inner

class A(object):
    def __del__(self):
        print("DEL")
    def a(self):
        pass

def log_all_calls(function):
    def inner(*args, **kwargs):
        print("Calling {}".format(function))
        try:
            return function(*args, **kwargs)
        finally:
            print("Called {}".format(function))
    return inner

a = A()
a.a = log_all_calls(weakmethod(a.a))
a.a()
Done!
FWIW, not only does Python 3.4 not have these issues, it also has WeakMethod pre-built for you.
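For reference, a minimal sketch of the Python 3.4+ approach with weakref.WeakMethod (assuming the same A class as above):

import weakref

class A:
    def __del__(self):
        print("DEL")
    def a(self):
        pass

a = A()
wm = weakref.WeakMethod(a.a)
wm()()        # wm() re-creates the bound method; the second () calls it
del a         # prints DEL: the weak reference does not keep a alive
print(wm())   # None: the underlying instance is gone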
Veedrac's answer about the bound method keeping a reference to the instance is only part of the story. CPython's garbage collector knows how to detect and handle cyclic references, except when some object that's part of the cycle has a __del__ method, as documented at https://docs.python.org/2/library/gc.html#gc.garbage:
Objects that have __del__() methods and are part of a reference cycle cause the entire reference cycle to be uncollectable, including objects not necessarily in the cycle but reachable only from it. Python doesn’t collect such cycles automatically because, in general, it isn’t possible for Python to guess a safe order in which to run the __del__() methods. (...) It’s generally better to avoid the issue by not creating cycles containing objects with __del__() methods, and garbage can be examined in that case to verify that no such cycles are being created.
IOW: remove your __del__ method and you should be fine.
EDIT: wrt your comment:
I use it on the object as a function: a.a = functor(a.a). When the test is done I would like to replace the functor with the original method.
Then the solution is plain and simple:
a = A()
a.a = functor(a.a)
test(a)
del a.a
Until you explicitly bind it, a has no 'a' instance attribute, so it's looked up on the class and a new method instance is returned (cf. https://wiki.python.org/moin/FromFunctionToMethod for more on this). This method instance is then called and (usually) discarded.
As to why Python does this: technically, all objects would contain circular references if they kept bound methods around. However, garbage collection would take much longer if the garbage collector had to do explicit checks on an object's methods to make sure freeing the object wouldn't cause a problem. As such, Python stores methods on the class, separately from an object's __dict__. So when you write a.a = a.a, you are shadowing the method with itself in the a field on the object, and that explicit reference to the bound method prevents the object from being freed properly.
The solution to your problem is to not bother keeping a "cache" of the original method, and to just delete the shadowing attribute when you're done with it. This will unshadow the method and make it available again.
>>> class A(object):
...     def __del__(self):
...         print("del")
...     def method(self):
...         print("method")
>>> a = A()
>>> vars(a)
{}
>>> "method" in dir(a)
True
>>> a.method = a.method
>>> vars(a)
{'method': <bound method A.method of <__main__.A object at 0x0000000001F07940>>}
>>> "method" in dir(a)
True
>>> a.method()
method
>>> del a.method
>>> vars(a)
{}
>>> "method" in dir(a)
True
>>> a.method()
method
>>> del a
del
Here vars shows what's in the __dict__ attribute of an object. Note how __dict__ doesn't contain a reference to itself, even though a.__dict__ is valid. dir produces a list of all the attributes reachable from the given object: all the attributes and methods of the object itself, plus the methods and attributes of its classes and their bases. This shows that the bound method of a is stored in a place separate from where a's attributes are stored.

Method namespaces/collections in Python classes

Say I have a class Foo and I do foo = Foo(). I want some kind of "method namespace" foo.bar that is not shared across Foo instances, to which I dynamically can add methods that operate on foo:
def method(self, someValue):
    self.value = someValue + 10

foo.bar.m = method
foo.bar.m(20)
And then I want to find that foo.value is 30.
Any way to accomplish this?
Is this what you're looking for?
import types

class Namespace(object):
    pass

class Foo(object):
    def __init__(self, value):
        self.value = value
        self.bar = Namespace()

def function(self, new_value):
    self.value = new_value

a = Foo(1)
b = Foo(2)
b.bar.function = types.MethodType(function, b, Foo)
b.bar.function(6)
print a.value # prints 1
print b.value # prints 6
The trick is using the types module to convert the function into a method that can be bound to an instance of the object.
When I run the line b.bar.function = types.MethodType(function, b, Foo), I am essentially telling Python to create a new method that binds function to the b instance of Foo. I can then take this method and store it in any arbitrary location.
Since the method is permanently bound to the b instance of Foo, self will always refer to b regardless of which object the method is actually assigned to.
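Note that the three-argument form of types.MethodType is Python 2 only; in Python 3 it takes just the function and the instance. A minimal Python 3 sketch of the same idea:

import types

class Namespace:
    pass

class Foo:
    def __init__(self, value):
        self.value = value
        self.bar = Namespace()

def function(self, new_value):
    self.value = new_value

b = Foo(2)
b.bar.function = types.MethodType(function, b)  # binds function to b
b.bar.function(6)
print(b.value)  # 6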

How to keep track of class instances?

Toward the end of a program I'm looking to load a specific variable from all the instances of a class into a dictionary.
For example:
class Foo():
    def __init__(self):
        self.x = {}

foo1 = Foo()
foo2 = Foo()
...
Let's say the number of instances will vary and I want the x dict from each instance of Foo() loaded into a new dict. How would I do that?
The examples I've seen in SO assume one already has the list of instances.
One way to keep track of instances is with a class variable:
class A(object):
    instances = []

    def __init__(self, foo):
        self.foo = foo
        A.instances.append(self)
At the end of the program, you can create your dict like this:
foo_vars = {id(instance): instance.foo for instance in A.instances}
There is only one list:
>>> a = A(1)
>>> b = A(2)
>>> A.instances
[<__main__.A object at 0x1004d44d0>, <__main__.A object at 0x1004d4510>]
>>> id(A.instances)
4299683456
>>> id(a.instances)
4299683456
>>> id(b.instances)
4299683456
@JoelCornett's answer covers the basics perfectly. This is a slightly more complicated version, which might help with a few subtle issues.
If you want to be able to access all the "live" instances of a given class, subclass the following (or include equivalent code in your own base class):
from weakref import WeakSet

class base(object):
    def __new__(cls, *args, **kwargs):
        # object.__new__ accepts no extra arguments in Python 3
        instance = object.__new__(cls)
        if "instances" not in cls.__dict__:
            cls.instances = WeakSet()
        cls.instances.add(instance)
        return instance
This addresses two possible issues with the simpler implementation that @JoelCornett presented:
Each subclass of base will keep track of its own instances separately. You won't get subclass instances in a parent class's instance list, and one subclass will never stumble over instances of a sibling subclass. This might be undesirable, depending on your use case, but it's probably easier to merge the sets back together than it is to split them apart.
The instances set uses weak references to the class's instances, so if you del or reassign all the other references to an instance elsewhere in your code, the bookkeeping code will not prevent it from being garbage collected. Again, this might not be desirable for some use cases, but it is easy enough to use regular sets (or lists) instead of a weakset if you really want every instance to last forever.
Some handy-dandy test output (with the instances sets always being passed to list only because they don't print out nicely):
>>> b = base()
>>> list(base.instances)
[<__main__.base object at 0x00000000026067F0>]
>>> class foo(base):
... pass
...
>>> f = foo()
>>> list(foo.instances)
[<__main__.foo object at 0x0000000002606898>]
>>> list(base.instances)
[<__main__.base object at 0x00000000026067F0>]
>>> del f
>>> list(foo.instances)
[]
You would probably want to use weak references to your instances. Otherwise the class could likely end up keeping track of instances that were meant to have been deleted. A weakref.WeakSet will automatically remove any dead instances from its set.
One way to keep track of instances is with a class variable:
import weakref

class A(object):
    instances = weakref.WeakSet()

    def __init__(self, foo):
        self.foo = foo
        A.instances.add(self)

    @classmethod
    def get_instances(cls):
        return list(A.instances)  # returns a list of all current instances
At the end of the program, you can create your dict like this:
foo_vars = {id(instance): instance.foo for instance in A.instances}
There is only one list:
>>> a = A(1)
>>> b = A(2)
>>> A.get_instances()
[<inst.A object at 0x100587290>, <inst.A object at 0x100587250>]
>>> id(A.instances)
4299861712
>>> id(a.instances)
4299861712
>>> id(b.instances)
4299861712
>>> a = A(3)  # original a will be dereferenced and replaced with the new instance
>>> A.get_instances()
[<inst.A object at 0x100587290>, <inst.A object at 0x1005872d0>]
You can also solve this problem using a metaclass:
When a class is created (__init__ method of metaclass), add a new instance registry
When a new instance of this class is created (__call__ method of metaclass), add it to the instance registry.
The advantage of this approach is that each class has a registry - even if no instance exists. In contrast, when overriding __new__ (as in Blckknght's answer), the registry is added when the first instance is created.
import weakref

class MetaInstanceRegistry(type):
    """Metaclass providing an instance registry"""

    def __init__(cls, name, bases, attrs):
        # Create the class
        super(MetaInstanceRegistry, cls).__init__(name, bases, attrs)
        # Initialize fresh instance storage
        cls._instances = weakref.WeakSet()

    def __call__(cls, *args, **kwargs):
        # Create the instance (calls the __init__ and __new__ methods)
        inst = super(MetaInstanceRegistry, cls).__call__(*args, **kwargs)
        # Store a weak reference to the instance. WeakSet will automatically
        # remove references to objects that have been garbage collected.
        cls._instances.add(inst)
        return inst

    def _get_instances(cls, recursive=False):
        """Get all instances of this class in the registry. If recursive=True,
        search subclasses recursively."""
        instances = list(cls._instances)
        if recursive:
            for Child in cls.__subclasses__():
                instances += Child._get_instances(recursive=recursive)
        # Remove duplicates from multiple inheritance.
        return list(set(instances))
Usage: Create a registry and subclass it.
class Registry(object):
    __metaclass__ = MetaInstanceRegistry

class Base(Registry):
    def __init__(self, x):
        self.x = x

class A(Base):
    pass

class B(Base):
    pass

class C(B):
    pass

a = A(x=1)
a2 = A(2)
b = B(x=3)
c = C(4)

for cls in [Base, A, B, C]:
    print cls.__name__
    print cls._get_instances()
    print cls._get_instances(recursive=True)
    print

del c
print C._get_instances()
If using abstract base classes from the abc module, just subclass abc.ABCMeta to avoid metaclass conflicts:
from abc import ABCMeta, abstractmethod

class ABCMetaInstanceRegistry(MetaInstanceRegistry, ABCMeta):
    pass

class ABCRegistry(object):
    __metaclass__ = ABCMetaInstanceRegistry

class ABCBase(ABCRegistry):
    __metaclass__ = ABCMeta

    @abstractmethod
    def f(self):
        pass

class E(ABCBase):
    def __init__(self, x):
        self.x = x

    def f(self):
        return self.x

e = E(x=5)
print E._get_instances()
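In Python 3, the __metaclass__ attribute is ignored; the metaclass is passed as a keyword in the class header instead. A sketch of the same registry under that syntax (assuming the MetaInstanceRegistry class above):

class Registry(metaclass=MetaInstanceRegistry):
    pass

class Base(Registry):
    def __init__(self, x):
        self.x = x

b = Base(1)
print(Base._get_instances())  # [<__main__.Base object at 0x...>]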
Another option for quick low-level hacks and debugging is to filter the list of objects returned by gc.get_objects() and generate the dictionary on the fly that way. In CPython that function will return you a (generally huge) list of everything the garbage collector knows about, so it will definitely contain all of the instances of any particular user-defined class.
Note that this is digging a bit into the internals of the interpreter, so it may or may not work (or work well) with the likes of Jython, PyPy, IronPython, etc. I haven't checked. It's also likely to be really slow regardless. Use with caution/YMMV/etc.
However, I imagine that some people running into this question might eventually want to do this sort of thing as a one-off to figure out what's going on with the runtime state of some slice of code that's behaving strangely. This method has the benefit of not affecting the instances or their construction at all, which might be useful if the code in question is coming out of a third-party library or something.
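As a sketch of what that one-off might look like (CPython-specific and for debugging only; Foo and x refer to the class and attribute from the question):

import gc

def instances_of(cls):
    # Scan everything the garbage collector tracks for instances of cls.
    # Slow and CPython-specific: use only for one-off inspection.
    return [obj for obj in gc.get_objects() if isinstance(obj, cls)]

foo_vars = {id(obj): obj.x for obj in instances_of(Foo)}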
Here's an approach similar to Blckknght's, which works with subclasses as well. Thought this might be of interest if someone ends up here. One difference: if B is a subclass of A and b is an instance of B, then b will appear in both A.instances and B.instances. As stated by Blckknght, whether this is desirable depends on the use case.
from weakref import WeakSet

class RegisterInstancesMixin:
    instances = WeakSet()

    def __new__(cls, *args, **kargs):
        o = object.__new__(cls)  # object.__new__ accepts no extra arguments
        cls._register_instance(o)
        return o

    @classmethod
    def print_instances(cls):
        for instance in cls.instances:
            print(instance)

    @classmethod
    def _register_instance(cls, instance):
        cls.instances.add(instance)
        for b in cls.__bases__:
            if issubclass(b, RegisterInstancesMixin):
                b._register_instance(instance)

    def __init_subclass__(cls):
        cls.instances = WeakSet()
class Animal(RegisterInstancesMixin):
    pass

class Mammal(Animal):
    pass

class Human(Mammal):
    pass

class Dog(Mammal):
    pass

alice = Human()
bob = Human()
cannelle = Dog()

Animal.print_instances()
Mammal.print_instances()
Human.print_instances()
Animal.print_instances() will print three objects, whereas Human.print_instances() will print two.
Using the answer from @Joel Cornett, I've come up with the following, which seems to work, i.e. I'm able to total up object variables.
import os
os.system("clear")

class Foo():
    instances = []

    def __init__(self):
        Foo.instances.append(self)
        self.x = 5

class Bar():
    def __init__(self):
        pass

    def testy(self):
        self.foo1 = Foo()
        self.foo2 = Foo()
        self.foo3 = Foo()

foo = Foo()
print Foo.instances
bar = Bar()
bar.testy()
print Foo.instances

x_tot = 0
for inst in Foo.instances:
    x_tot += inst.x
    print x_tot
output:
[<__main__.Foo instance at 0x108e334d0>]
[<__main__.Foo instance at 0x108e334d0>, <__main__.Foo instance at 0x108e33560>, <__main__.Foo instance at 0x108e335a8>, <__main__.Foo instance at 0x108e335f0>]
5
10
15
20
I have found a way to record class instances via the dataclass decorator while defining a class. Define a class attribute instances (or any other name) as a list of the instances you want to record, and append the dict form of each created object to that list via its __dict__ attribute. The class attribute instances will then record the instances in the dict form you want.
For example,
from dataclasses import dataclass

@dataclass
class player:
    instances = []

    def __init__(self, name, rank):
        self.name = name
        self.rank = rank
        self.instances.append(self.__dict__)
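Usage then looks like this (a small illustration of the recipe above):

p1 = player("alice", 1)
p2 = player("bob", 2)
print(player.instances)
# [{'name': 'alice', 'rank': 1}, {'name': 'bob', 'rank': 2}]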

Automatically setting class member variables in Python [duplicate]

This question already has answers here:
Automatically initialize instance variables?
(17 answers)
Say, I have the following class in Python
class Foo(object):
    a = None
    b = None
    c = None

    def __init__(self, a=None, b=None, c=None):
        self.a = a
        self.b = b
        self.c = c
Is there any way to simplify this process? Whenever I add a new member to class Foo, I'm forced to modify the constructor.
Please note that
class Foo(object):
    a = None
sets a key-value pair in Foo's dict:
Foo.__dict__['a'] = None
while
def __init__(self, a=None, b=None, c=None):
    self.a = a
sets a key-value pair in the Foo instance object's dict:
foo = Foo()
foo.__dict__['a'] = a
So setting the class members at the top of your definition is not directly related to setting the instance attributes in the lower half of your definition (inside __init__).
Also, it is good to be aware that __init__ is Python's initializer. __new__ is the class constructor.
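To see the difference between the two (a small illustration, not from the original answer):

class Foo(object):
    a = None            # class attribute, lives in Foo.__dict__
    def __init__(self, a=None):
        self.a = a      # instance attribute, lives in foo.__dict__

foo = Foo(1)
print(Foo.__dict__['a'])   # None: the class attribute is untouched
print(foo.__dict__['a'])   # 1: the instance attribute shadows it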
If you are looking for a way to automatically add some instance attributes based on __init__'s arguments, you could use this:
import inspect
import functools

def autoargs(*include, **kwargs):
    def _autoargs(func):
        # inspect.getargspec was removed in Python 3.11;
        # use inspect.getfullargspec there
        attrs, varargs, varkw, defaults = inspect.getargspec(func)

        def sieve(attr):
            if kwargs and attr in kwargs['exclude']:
                return False
            if not include or attr in include:
                return True
            else:
                return False

        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            # handle default values
            if defaults:
                for attr, val in zip(reversed(attrs), reversed(defaults)):
                    if sieve(attr):
                        setattr(self, attr, val)
            # handle positional arguments
            positional_attrs = attrs[1:]
            for attr, val in zip(positional_attrs, args):
                if sieve(attr):
                    setattr(self, attr, val)
            # handle varargs
            if varargs:
                remaining_args = args[len(positional_attrs):]
                if sieve(varargs):
                    setattr(self, varargs, remaining_args)
            # handle varkw
            if kwargs:
                for attr, val in kwargs.items():  # iteritems() in Python 2
                    if sieve(attr):
                        setattr(self, attr, val)
            return func(self, *args, **kwargs)
        return wrapper
    return _autoargs
So when you say
class Foo(object):
    @autoargs()
    def __init__(self, x, path, debug=False, *args, **kw):
        pass

foo = Foo('bar', '/tmp', True, 100, 101, verbose=True)
you automatically get these instance attributes:
print(foo.x)
# bar
print(foo.path)
# /tmp
print(foo.debug)
# True
print(foo.args)
# (100, 101)
print(foo.verbose)
# True
PS. Although I wrote this (for fun), I don't recommend using autoargs for serious work. Being explicit is simple, clear and infallible. I can't say the same for autoargs.
Python 3.7 provides dataclasses which are helpful in situations like this:
from dataclasses import dataclass

@dataclass
class Foo:
    a: str = None
    b: str = None
    c: str = None
This saves you from having to write out the __init__ method when you just want to store a few attributes.
Gives you a good __repr__ method:
>>> a = Foo()
>>> a
Foo(a=None, b=None, c=None)
If you need to do calculations on a param, you can implement __post_init__.
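A brief sketch of __post_init__ (the derived field here is illustrative, not from the original answer):

from dataclasses import dataclass, field

@dataclass
class Point:
    x: float = 0.0
    y: float = 0.0
    norm: float = field(init=False, default=0.0)

    def __post_init__(self):
        # runs right after the generated __init__
        self.norm = (self.x ** 2 + self.y ** 2) ** 0.5

print(Point(3.0, 4.0))  # Point(x=3.0, y=4.0, norm=5.0)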
See also namedtuple:
from collections import namedtuple
Foo = namedtuple('Foo', ['a', 'b', 'c'])
All fields are required with namedtuple though.
>>> a = Foo(1, 2, 3)
>>> a
Foo(a=1, b=2, c=3)
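Since Python 3.7, namedtuple also accepts a defaults argument if you do want optional fields (defaults apply to the rightmost fields):

from collections import namedtuple

Foo = namedtuple('Foo', ['a', 'b', 'c'], defaults=(None, None, None))
print(Foo())      # Foo(a=None, b=None, c=None)
print(Foo(1, 2))  # Foo(a=1, b=2, c=None)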
There are elegant ways to do this.
Is there any way to simplify this process? Whenever I add a new member to class Foo, I'm forced to modify the constructor.
There is also a crude way. It will work, but is NOT recommended. See and decide.
>>> class Foo(object):
...     def __init__(self, **attrs):
...         self.__dict__.update(**attrs)
...     def __getattr__(self, attr):
...         return self.__dict__.get(attr, None)
>>> f = Foo(a = 1, b = 2, c = 3)
>>> f.a, f.b
(1, 2)
>>> f = Foo(bar = 'baz')
>>> f.bar
'baz'
>>> f.a
>>>
The keyword argument constructor lets you get away without explicitly defining any arguments. Warning: this goes against the "explicit is better than implicit" principle.
You need to override __getattr__ ONLY if you want to return a default value for an attribute that is not present instead of getting an AttributeError.
http://code.activestate.com/recipes/286185-automatically-initializing-instance-variables-from/
This recipe and its comments provide some approaches.
See also this previous question: Python: Automatically initialize instance variables?
