I have a set of classes which I would like to be able to initialise dynamically from a dict.
class A(object):
    var = 1

class B(object):
    val = 2

class C(object):
    var = 1
    val = 3

BASE = {'a': A, 'b': B, 'c': C}
I might use these in some dynamic function such as
def create_and_obtain(kind, **kwargs):
    base = BASE[kind]()
    for kwarg in kwargs.keys():
        print(getattr(base, kwarg))
>>> create_and_obtain('c', var=None, val=None)
1
3
If I have a lot of classes that are added, amended and removed from time to time, I must ensure that my BASE dict is kept up to date. Is there a way of dynamically constructing BASE from the classes declared in the script? Can the classes themselves have meta-attributes added that could be used to extend the definition of BASE beyond this simplistic example?
For example, could I add some meta tags like:

class A(object):
    ...
    _meta_dict_key_ = 'apples'

class B(object):
    ...
    _meta_dict_key_ = 'bananas'

class C(object):
    ...
    _meta_dict_key_ = 'coconuts'

so that BASE is dynamically constructed as:

BASE = {'apples': A, 'bananas': B, 'coconuts': C}
Python has class decorators and so a clean way might be this:
BASE = {}

def register(klass):  # Uses an attribute of the class
    global BASE
    BASE[klass.name] = klass
    return klass

def label(name):  # Provide an explicit name
    def deco(klass):
        global BASE
        BASE[name] = klass
        return klass
    return deco

@register
class A(object):
    var = 1
    name = 'bananas'

@label('apples')
class B(object):
    val = 2

@register
class C(object):
    var = 1
    val = 3
    name = 'cakes'
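With these decorators in place, simply importing the module fills BASE as a side effect of the class statements. With the names used above you would end up with:

BASE = {'bananas': A, 'apples': B, 'cakes': C}

and create_and_obtain('cakes', var=None, val=None) keeps working without maintaining BASE by hand.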
You can iterate through the output of the locals() function at the end of the file and check for _meta_dict_key_. If an object has that attribute, you can use it to construct the BASE dict.
In Python, all code declared at the top level of a file is executed only once, on the first import, so it is safe to put such code there.
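A minimal sketch of that locals() approach, assuming the _meta_dict_key_ attribute from the question and that this runs at the very bottom of the module, after all the class definitions:

# collect every class in this module's namespace that carries _meta_dict_key_
# (note: hasattr also sees keys inherited from base classes)
BASE = {
    obj._meta_dict_key_: obj
    for obj in list(locals().values())
    if isinstance(obj, type) and hasattr(obj, '_meta_dict_key_')
}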
I want to get all class variable names of all sub-classes in my parent class (in Python). While I managed to do that, I'm not certain whether there is a cleaner way to achieve it.
Pseudo-code to explain the problem:
class Parent:
    all_names = []
    var_1 = 1
    def __init__(self):
        print(self.all_names)

class ChildA(Parent):
    var_2 = 2

class ChildB(ChildA):
    var_3 = 3

p = Parent()
ca = ChildA()
cb = ChildB()

>>> ["var_1"]
>>> ["var_1", "var_2"]
>>> ["var_1", "var_2", "var_3"]
What I am currently doing is using the __new__ method to set those values recursively:
class Parent(object):
    test_1 = 1
    signals = []

    def __new__(cls, *args, **kwargs):
        # create obj ref
        _obj = super().__new__(cls, *args, **kwargs)
        signals = []

        # walk every base and add values
        def __walk(base):
            for key, value in vars(base).items():
                if <condition>:
                    signals.append(value.__name__)
            for base in base.__bases__:
                __walk(base)

        __walk(cls)
        signals.reverse()  # to reorder the list
        _obj.signals = signals
        return _obj
For more context: I am trying to develop a signal system, but to check whether an instance has a Signal I somehow need to register them on the root parent class. Yes, I could make all subclasses inherit from a helper class, but I don't want to do that and would rather have it all bundled in as few classes as possible.
Also, if there are any bugs/risks with my implementation please let me know; I only recently discovered __new__.
(Python 3.x)
You could walk up the method resolution order and check for any keys in each class's __dict__ that do not appear in object.__dict__ and are not callable or private keys.
class Parent:
    var_p = 1

    def __init__(self):
        self.blah = 0
        class_vars = []
        for cls in self.__class__.mro()[:-1]:
            class_vars.extend([
                k for k, v in cls.__dict__.items()
                if k not in object.__dict__ and not k.startswith('_')
                and not callable(v)
            ])
        print(class_vars)

    def test(self):
        return True

class ChildA(Parent):
    var_a = 2

class ChildB(ChildA):
    var_b = 3

p = Parent()
ca = ChildA()
cb = ChildB()
# prints:
['var_p']
['var_a', 'var_p']
['var_b', 'var_a', 'var_p']
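If you are on Python 3.6 or newer, another option, offered here only as a sketch using the question's names rather than as part of the answer above, is to let the parent collect the names once per subclass via __init_subclass__ instead of walking the MRO on every instantiation:

class Parent:
    var_p = 1
    all_names = ['var_p']   # the parent's own names, listed by hand in this sketch

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # extend the inherited list with this subclass's own public, non-callable attributes
        own = [k for k, v in vars(cls).items()
               if not k.startswith('_') and not callable(v)]
        cls.all_names = cls.all_names + own

    def __init__(self):
        print(self.all_names)

class ChildA(Parent):
    var_a = 2

class ChildB(ChildA):
    var_b = 3

ChildB()   # prints ['var_p', 'var_a', 'var_b']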
I have two Python classes:
class A:
    """
    This is a class retaining some constants
    """
    C = 1

class B:
    VAR = None
    def __init__(self):
        B.VAR = A
    def f(self, v=VAR):
        print(v.C)

clb = B()
clb.f()

AttributeError: 'NoneType' object has no attribute 'C'
So what I am trying to do is populate the B.VAR class variable in B.__init__() with a reference to class A, and after that, in B.f(), access A.C through the default argument v (which holds VAR).
I intend to use v as a default value for the code inside B.f() and, if needed, change it when calling the function.
Is my scenario possible?
Thank you,
Yes, this is possible:
class A:
    """
    This is a class retaining some constants
    """
    C = 1

class B:
    VAR = None

    def __init__(self):
        self.VAR = A

    def f(self, v=None):
        if v is None:
            v = self.VAR
        print(v.C)

clb = B()
clb.f()
Your issue is that the default argument v=VAR is an old reference to B.VAR, which is None, not the updated value on the object, clb.VAR.
The default value for v is bound when the method is defined, that is, when class B itself is defined, before any clb instance is created; at that point VAR is the class attribute None.
My suggestion is to resolve v at runtime using the object's VAR through self, which is changed to A in __init__.
class A:
    C = 1

class B:
    VAR = None

    def __init__(self):
        B.VAR = A

    @classmethod
    def f(cls):
        print(cls.VAR.C)

clb = B()
clb.f()
This is another way to do it. However, I'm wondering what you're actually trying to do, because this seems really strange.
Let B inherit from A. Suppose that some of B's behavior depends on the class attribute cls_x and we want to set up this dependency during construction of B objects. Since it is not a simple operation, we want to wrap it in a class method, which the constructor will call. Example:
class B(A):
    cls_x = 'B'

    @classmethod
    def cm(cls):
        return cls.cls_x

    def __init__(self):
        self.attr = B.cm()
Problem: cm as well as __init__ will always be doing the same things, and their behavior must stay the same in each derived class. Thus, we would like to put them both in the base class and not define them in any of the derived classes. The only difference will be the caller of cm: either A or B (or any of B1, B2, each inheriting from A), whatever is being constructed. So what we'd like to have is something like this:
class A:
    cls_x = 'A'

    @classmethod
    def cm(cls):
        return cls.cls_x

    def __init__(self):
        self.attr = ClassOfWhateverIsInstantiated.cm()  # how to do this?

class B(A):
    cls_x = 'B'
I feel like it's either something very simple I'm missing about Python's inheritance mechanics or the whole issue should be handled entirely differently.
This is different than this question as I do not want to override the class method, but move its implementation to the base class entirely.
Look at it this way: Your question is essentially "How do I get the class of an instance?". The answer to that question is to use the type function:
ClassOfWhateverIsInstantiated = type(self)
But you don't even need to do that, because classmethods can be called directly through an instance:
def __init__(self):
    self.attr = self.cm()  # just use `self`
This works because classmethods automatically look up the class of the instance for you. From the docs:
[A classmethod] can be called either on the class (such as C.f()) or on an instance
(such as C().f()). The instance is ignored except for its class.
For ClassOfWhateverIsInstantiated you can just use self:
class A:
    cls_x = 'A'

    @classmethod
    def cm(cls):
        return cls.cls_x

    def __init__(self):
        self.attr = self.cm()  # 'self' refers to B, if called from B

class B(A):
    cls_x = 'B'

a = A()
print(a.cls_x)  # = 'A'
print(A.cls_x)  # = 'A'

b = B()
print(b.cls_x)  # = 'B'
print(B.cls_x)  # = 'B'
To understand this, just remember that class B is inheriting the methods of class A. So when __init__() is called during B's instantiation, it's called in the context of class B, to which self refers.
I'm using a library to share data between C++, VHDL, and SystemVerilog. It uses code generators to build data structures that contain the appropriate fields; think of a C-style struct. I want to generate Python code that contains the data structure plus read/write functions to set and save the contents of the data structure to a file and load them back.
To do this I am trying to write a program that prints all the variables in the base class, with updates from the subclass, but without the subclass's own variables.
The idea is that class A is the actual VHDL/SystemVerilog/C++ record/structure and class B contains the logic to do processing and generate the values in class A.
For example:
class A(object):
    def __init__(self):
        self.asd = "Test string"
        self.foo = 123

    def write(self):
        print self.__dict__

class B(A):
    def __init__(self):
        A.__init__(self)
        self.bar = 456
        self.foo += 1

    def write(self):
        super(B, self).write()
Calling write() on a B instance should yield the following (note the incremented value of foo):
"asd: Test String, foo: 124"
but instead it yields
"asd: Test String, bar: 456, foo: 124".
Is there a way to only get the base class variables? I could compare the base dictionary with the subclass dictionary and only print the values that appear in both but this does not feel like a clean way.
There is no distinction between base class and subclass variables. By definition, inheritance is an is-a relationship; everything defined in the base class is as if it was defined in the subclass.
Similarly, anything you define on the instance at any other point in your code will also appear in the dict; Python does not restrict you from adding new instance variables elsewhere in the class or even from outside.
The only way to do what you want is to record the keys when you enter A.__init__.
You said: "I could compare the base dictionary with the subclass dictionary and only print the values that appear in both but this does not feel like a clean way". What you're trying to do isn't a natural thing to do in Python, so no matter what you do, it's not going to be clean. But in fact, what you suggest is impossible, since you can't get the base dictionary when you make the .write call in a B instance. The closest you can do is to take a copy of it (or, as Daniel Roseman suggests, its keys) immediately after the __init__ call in B so you can refer to that copy later when you need it.
Here's some code that does that:
class A(object):
    def __init__(self):
        self.asd = "Test string"
        self.foo = 123

    def write(self, d=None):
        print self.__dict__

class B(A):
    def __init__(self):
        A.__init__(self)
        self.parentkeys = self.__dict__.keys()
        self.bar = 456
        self.foo += 1

    def write(self):
        bdict = self.__dict__
        print dict((k, bdict[k]) for k in self.parentkeys)

a = A()
b = B()
a.write()
b.write()
output
{'foo': 123, 'asd': 'Test string'}
{'foo': 124, 'asd': 'Test string'}
Here's a minor variation:
class A(object):
    def __init__(self):
        self.asd = "Test string"
        self.foo = 123

    def write(self, d=None):
        if d is None:
            d = self.__dict__
        print d

class B(A):
    def __init__(self):
        super(B, self).__init__()
        self.parentkeys = self.__dict__.keys()
        self.bar = 456
        self.foo += 1

    def write(self):
        bdict = self.__dict__
        d = dict((k, bdict[k]) for k in self.parentkeys)
        super(B, self).write(d)
However, I get the feeling that there may be a more Pythonic way to do what you really want to do...
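One possibility along those lines, offered only as a sketch and not as part of the answer above: since A.__init__ is what defines the record's fields, B.write() could ask a throwaway A instance which keys belong to the base class. Note that this re-runs A.__init__, so it only makes sense if that constructor is cheap and free of side effects.

class B(A):
    def __init__(self):
        A.__init__(self)
        self.bar = 456
        self.foo += 1

    def write(self):
        basekeys = A().__dict__.keys()   # fields created by A.__init__
        print dict((k, self.__dict__[k]) for k in basekeys)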
I have a class that keeps track of its instances in a class variable, something like this:
class Foo:
    by_id = {}

    def __init__(self, id):
        self.id = id
        self.by_id[id] = self
What I'd like to be able to do is iterate over the existing instances of the class. I can do this with:
for foo in Foo.by_id.values():
    foo.do_something()

but it would look neater like this:

for foo in Foo:
    foo.do_something()
is this possible? I tried defining a classmethod __iter__, but that didn't work.
If you want to iterate over the class, you have to define a metaclass which supports iteration.
x.py:
class it(type):
    def __iter__(self):
        # Wanna iterate over a class? Then ask that class for an iterator.
        return self.classiter()

class Foo:
    __metaclass__ = it  # We need that metaclass...
    by_id = {}          # Store the stuff here...

    def __init__(self, id):    # new instance of class
        self.id = id           # do we need that?
        self.by_id[id] = self  # register instance

    @classmethod
    def classiter(cls):
        # iterate over the class by yielding all instances which have been instantiated
        return iter(cls.by_id.values())

if __name__ == '__main__':
    a = Foo(123)
    print list(Foo)
    del a
    print list(Foo)
As you can see at the end, deleting an instance has no effect on the iteration, because the instance stays in the by_id dict. You can cope with that using weakrefs: do

import weakref

and then

by_id = weakref.WeakValueDictionary()

This way the values are only kept as long as there is a "strong" reference to them, such as a in this case. After del a, only weak references point to the object, so it can be gc'ed.
Due to the caveats concerning WeakValueDictionary, I suggest using the following instead:
[...]
        self.by_id[id] = weakref.ref(self)
[...]
    @classmethod
    def classiter(cls):
        # return all class instances which are still alive, according to the weakrefs pointing to them
        return (i for i in (i() for i in cls.by_id.values()) if i is not None)
Looks a bit complicated, but makes sure that you get the objects and not a weakref object.
Magic methods are always looked up on the class, not the instance, so adding __iter__ to the class won't make the class itself iterable. However, the class is an instance of its metaclass, so the metaclass is the correct place to define the __iter__ method.
class FooMeta(type):
    def __iter__(self):
        return self.by_id.itervalues()

class Foo:
    __metaclass__ = FooMeta
    ...
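Note that __metaclass__ only works in Python 2; in Python 3 the metaclass is passed in the class header instead. A minimal Python 3 sketch of the same idea, reusing the class names above:

class FooMeta(type):
    def __iter__(cls):
        return iter(cls.by_id.values())

class Foo(metaclass=FooMeta):
    by_id = {}

    def __init__(self, id):
        self.id = id
        self.by_id[id] = self

a = Foo(1)
b = Foo(2)
for foo in Foo:      # iterates over the registered instances
    print(foo.id)    # 1, then 2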
Try this:
You can create a list with global scope; define a list in the main module as follows:
fooList = []
Then add:
class Foo:
    def __init__(self):
        fooList.append(self)
to the __init__ of the Foo class.
Then every time you create an instance of the Foo class it will be added to the fooList list.
Now all you have to do is iterate through the list of objects like this:
for f in fooList:
    f.doSomething()
You can create a list comprehension and then access member attributes as follows:
class PeopleManager:
    def __init__(self):
        self.People = []

    def Add(self, person):
        self.People.append(person)

class Person:
    def __init__(self, name, age):
        self.Name = name
        self.Age = age

m = PeopleManager()
[[t.Name, t.Age] for t in m.People]
Call to fill the object list:

m = PeopleManager()
m.Add(Person("Andy", 38))
m.Add(Person("Brian", 76))
You can create a class-level list and then append to it in the __init__ method as follows:
class Planet:
    planets_list = []

    def __init__(self, name):
        self.name = name
        self.planets_list.append(self)
Usage:
p1 = Planet("earth")
p2 = Planet("uranus")

for i in Planet.planets_list:
    print(i.name)