I want to have compact, class-based Python DSLs in the following form:
class MyClass(Static):
    z = 3
    def __init__(cls, x=0):
        cls._x = x
    def set_x(cls, x):
        cls._x = x
    def print_x_plus_z(cls):
        print cls._x + cls.z
    @property
    def x(cls):
        return cls._x

class MyOtherClass(MyClass):
    z = 6
    def __init__(cls):
        MyClass.__init__(cls, x=3)
I don't want to write MyClass() and MyOtherClass() afterwards; I just want to get this working with only class definitions:
MyClass.print_x_plus_z()
c = MyOtherClass
c.z = 5
c.print_x_plus_z()
assert MyOtherClass.z == 5, "instances don't share the same values!"
I used metaclasses and managed to get __init__, print_x_plus_z, and subclassing working properly, but properties don't work.
Could anyone suggest a better alternative?
I'm using Python 2.4+
To give a class (as opposed to its instances) a property, you need to have that property object as an attribute of the class's metaclass (so you'll probably need to make a custom metaclass to avoid inflicting that property upon other classes with the same metaclass). Similarly for special methods such as __init__ -- if they're on the class they'd affect the instances (which you don't want to make) -- to have them affect the class, you need to have them on the (custom) metaclass. What are you trying to accomplish by programming everything "one metalevel up", i.e., never-instantiated class with custom metaclass rather than normal instances of a normal class? It just seems a slight amount of extra work for no returns;-).
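For concreteness, here is a minimal sketch (Python 2 syntax to match the question; StaticMeta and the attribute names are purely illustrative) of putting __init__ and a property on a custom metaclass so they act on the class itself rather than on instances:

class StaticMeta(type):
    def __init__(cls, name, bases, namespace):
        # runs once, when the class statement is executed -- effectively
        # a "constructor" for the class object itself
        super(StaticMeta, cls).__init__(name, bases, namespace)
        cls._x = 0

    @property
    def x(cls):
        # the property lives on the metaclass, so MyClass.x works
        # without ever creating an instance
        return cls._x

class MyClass(object):
    __metaclass__ = StaticMeta
    z = 3

    @classmethod
    def set_x(cls, x):
        cls._x = x

print MyClass.x              # -> 0
MyClass.set_x(5)
print MyClass.x + MyClass.z  # -> 8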
I'm surprised that the following code runs without error.
# ABC
from abc import ABCMeta, abstractmethod

class Foo(object):
    __metaclass__ = ABCMeta
    a = 1

    def __init__(self, b, c):
        self.b = b
        self.c = c

    def get_scaled_a(self):
        return self.a / Bar1.a  # why can I access Bar1.a?

    @abstractmethod
    def class_type(self):
        pass

# Derived class 1
class Bar1(Foo):
    a = 100
    def class_type(self):
        return 'bar1'

# Derived class 2
class Bar2(Foo):
    a = 10
    def class_type(self):
        return 'bar2'
my_bar2_inst = Bar2(0, 0)
print(my_bar2_inst.get_scaled_a())
# 0.1
# Why can I access Bar.a?
Because Python assumes that developers are mature human beings. Instead of the interpreter checking whether you have access to a certain attribute, in Python you usually have access to all attributes. It is up to you to be mature enough and don't break anything.
There is, however, a convention that attributes starting with an underscore, like _foo and __bar, are considered private (dunder names like __qux__ are reserved for special methods). It means it is usually a bad idea to access those yourself. But there is no mechanism in place to prevent you from accessing them: the variable name more or less asks that you be so kind as to not access it. If you absolutely need to, it is your responsibility.
Now the a of Bar1 is a member of the Bar1 class, not of a Bar1 instance. So in some other languages, it would be considered to be "static". That's another reason why you can access it.
When you run get_scaled_a, the Bar1 class has already been defined, and so has its a attribute. Remember that classes are objects too, not just class instances. So when Bar1 is defined, you can access certain attributes without creating an instance of it.
The fact that it is a subclass doesn't play a role at all.
You can access the child class attribute for the same reason you can do this:
>>> class myobj: pass
>>> def f(obj):
...     print(obj.a)
>>> obj1, obj2 = myobj(), myobj()
>>> obj1.a, obj2.a = 1, 2
>>> f(obj1)
1
>>> f(obj2)
2
Classes are just objects. Metaclasses are little more than class-object-factories.
Here's another illustration. Say I have a Reuben maker:
>>> Reuben = type('Reuben', (), {'mayo':False}) # <-- shortcut for making a class
Now say I have some kind of worthless sandwich making factory that puts mayo on everything (gross):
>>> def MakeMeASandwich(sandwich_type):
...     sandwich_type.mayo = True
...     return sandwich_type()
Would you expect that to work? If you said yes, you're right:
>>> s = MakeMeASandwich(Reuben)
# reuben with mayo!
Why did you expect that to work? Probably because there is no reason the function shouldn't be able to access mayo. It's there. It's not hidden. So of course it can get to it.
Again: a metaclass is little more than a class making factory. It is very much the same as any other factory (though they do have some nifty extra bells and whistles that you probably don't need).
Due to the weirdness of the particular module I am working with, I am wondering if there is a general way for one class to inherit the properties of another class that is locally created in a function. For example, something like the following:
def DefForm():
    class F(object):
        foo = 'bar'
    return F

class MyClass(DefForm):
    pass

m = MyClass()
m.foo
# 'bar'
Inherit your class from the return value of the function, not the function itself:
class MyClass(DefForm()):
...
Classes in Python are plain objects and when you're inheriting from one, you're also just inheriting from an object.
For instance, this also works:
class Foo(Bar if x == 3 else Baz):
...
In fact, I can't think of a situation where this wouldn't work. Even this is perfectly valid:
try:
    ...
except (FooExc if x == 3 else BarExc):
    ...
So not only are classes objects in Python, they are also treated as objects in all situations.
On a more general note, generating classes is called metaprogramming; it's a common practice in many languages (both dynamic as well as compiled and statically typed) and there's generally nothing "weird" about code that does this, as long as sanity and readability are maintained. It's as normal as creating and returning functions from other functions (which is extremely prevalent in Functional Programming), or in fact returning any object.
Try this:
def DefForm():
    class F(object):
        foo = 'bar'
    return F

FClass = DefForm()

class MyClass(FClass):
    pass

m = MyClass()
m.foo
I'm not really sure how best to explain what I want, so I'll just show some code:
class Stuffclass():
    def add(self, x, y):
        return x + y

    def subtract(self, x, y):
        return x - y

    # imagine that there are 20-30 other methods in here (lol)

class MyClass:
    def __init__(self):
        self.st = Stuffclass()

    def doSomething(self):
        return self.st.add(1, 2)
m = MyClass()
m.doSomething() # returns 3
# Now, what I want to be able to do is:
print m.add(2, 3) # directly access the "add" method of MyClass.st
print m.subtract(10, 5) # directly access the "subtract" method of MyClass.st
m.SomeMethod() # execute function MyClass.st.SomeMethod
I know I could do something like this:
class MyClass:
    def __init__(self):
        self.st = Stuffclass()
        self.add = self.st.add
        self.subtract = self.st.subtract
...but this requires manually assigning all possible attributes.
I'm writing all the classes so I can guarantee no name collisions.
Making MyClass a subclass of Stuffclass won't work, because I actually am using this in a plugin-based application, where MyClass loads other code dynamically using import. This means MyClass can't subclass from the plugin, because the plugin could be anything that follows my API.
Advice please?
I believe that writing a __getattr__ method for your class will let you do what you want.
Called when an attribute lookup has not found the attribute in the usual places (i.e. it is not an instance attribute nor is it found in the class tree for self). name is the attribute name. This method should return the (computed) attribute value or raise an AttributeError exception
So something as simple as:
def __getattr__(self, name):
    if hasattr(self.st, name):
        return getattr(self.st, name)
    else:
        raise AttributeError
should do roughly what you're after.
But, having answered (I think) the question you asked, I'm going to move on to the question I think you should have asked.
I actually am using this in a plugin-based application, where MyClass loads other code dynamically using import. This means MyClass can't subclass from the plugin, because the plugin could be anything that follows my API
I can see why MyClass can't be a subclass of StuffClass; but couldn't StuffClass be a subclass of MyClass? If you defined the inheritance that way, you'd have a guarantee that StuffClass implements all the basic stuff in MyClass, and also that your instances of StuffClass have all the extra methods defined in StuffClass.
From your mention that the plugins need to "follow my API", I'm assuming that might be a case where you need to ensure that the plugins implement a set of methods in order to conform to the API; but since the implementation of the methods is going to depend on the specifics of the plugin, you can't provide those functions in MyClass. In that case, it sounds as though defining an Abstract Base Class that your plugins are required to inherit from might be useful for you.
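If that is the situation, a rough sketch along those lines might look like the following (PluginBase and add are made-up names for illustration, not your actual API):

from abc import ABCMeta, abstractmethod

class PluginBase(object):
    # Python 2 spelling; in Python 3 this would be
    # `class PluginBase(metaclass=ABCMeta):`
    __metaclass__ = ABCMeta

    @abstractmethod
    def add(self, x, y):
        """Every plugin must provide add()."""

class GoodPlugin(PluginBase):
    def add(self, x, y):
        return x + y

GoodPlugin()        # fine -- implements the required method
# A plugin that forgets to implement add() can't even be instantiated:
# class BadPlugin(PluginBase): pass
# BadPlugin()       # TypeError: Can't instantiate abstract class BadPlugin ...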
Use __getattr__ to delegate the calls to Stuffclass's instance:
class MyClass:
    def __init__(self):
        self.st = Stuffclass()

    def __getattr__(self, attr):
        return getattr(self.st, attr)
Demo:
>>> from so import *
>>> m = MyClass()
>>> m.add(1,2)
3
>>> m.subtract(100,2)
98
This article has a snippet showing usage of __bases__ to dynamically change the inheritance hierarchy of some Python code, by adding a class to an existing class's collection of classes from which it inherits. OK, that's hard to read; code is probably clearer:
class Friendly:
    def hello(self):
        print 'Hello'

class Person: pass

p = Person()
Person.__bases__ = (Friendly,)
p.hello()  # prints "Hello"
That is, Person doesn't inherit from Friendly at the source level, but rather this inheritance relation is added dynamically at runtime by modification of the __bases__ attribute of the Person class. However, if you change Friendly and Person to be new-style classes (by inheriting from object), you get the following error:
TypeError: __bases__ assignment: 'Friendly' deallocator differs from 'object'
A bit of Googling on this seems to indicate some incompatibilities between new-style and old style classes in regards to changing the inheritance hierarchy at runtime. Specifically: "New-style class objects don't support assignment to their bases attribute".
My question, is it possible to make the above Friendly/Person example work using new-style classes in Python 2.7+, possibly by use of the __mro__ attribute?
Disclaimer: I fully realise that this is obscure code, and that in real production code tricks like this tend to border on the unreadable. This is purely a thought experiment, and for funzies, to learn something about how Python deals with issues related to multiple inheritance.
Ok, again, this is not something you should normally do, this is for informational purposes only.
Where Python looks for a method on an instance object is determined by the __mro__ attribute of the class which defines that object (the Method Resolution Order attribute). Thus, if we could modify the __mro__ of Person, we'd get the desired behaviour. Something like:
setattr(Person, '__mro__', (Person, Friendly, object))
The problem is that __mro__ is a read-only attribute, and thus setattr won't work. Maybe if you're a Python guru there's a way around that, but clearly I fall short of guru status, as I cannot think of one.
A possible workaround is to simply redefine the class:
def modify_Person_to_be_friendly():
    # so that we're modifying the global identifier 'Person'
    global Person
    # now just redefine the class using type(), specifying that the new
    # class should inherit from Friendly and have all attributes from
    # our old Person class
    Person = type('Person', (Friendly,), dict(Person.__dict__))

def main():
    modify_Person_to_be_friendly()
    p = Person()
    p.hello()  # works!
What this doesn't do is modify any previously created Person instances to have the hello() method. For example (just modifying main()):
def main():
    oldperson = Person()
    modify_Person_to_be_friendly()
    p = Person()
    p.hello()
    # works! But:
    oldperson.hello()
    # does not
If the details of the type call aren't clear, then read e-satis' excellent answer on 'What is a metaclass in Python?'.
I've been struggling with this too, and was intrigued by your solution, but Python 3 takes it away from us:
AttributeError: attribute '__dict__' of 'type' objects is not writable
I actually have a legitimate need for a decorator that replaces the (single) superclass of the decorated class. It would require too lengthy a description to include here (I tried, but couldn't get it to a reasonable length and limited complexity -- it came up in the context of many Python applications using a Python-based enterprise server, where different applications needed slightly different variations of some of the code).
The discussion on this page and others like it provided hints that the problem of assigning to __bases__ only occurs for classes with no superclass defined (i.e., whose only superclass is object). I was able to solve this problem (for both Python 2.7 and 3.2) by defining the classes whose superclass I needed to replace as being subclasses of a trivial class:
## T is used so that the other classes are not direct subclasses of object,
## since classes whose base is object don't allow assignment to their __bases__ attribute.
class T: pass

class A(T):
    def __init__(self):
        print('Creating instance of {}'.format(self.__class__.__name__))

## ordinary inheritance
class B(A): pass

## dynamically specified inheritance
class C(T): pass

A()  # -> Creating instance of A
B()  # -> Creating instance of B
C.__bases__ = (A,)
C()  # -> Creating instance of C

## attempt at dynamically specified inheritance starting with a direct subclass
## of object doesn't work
class D: pass
D.__bases__ = (A,)
D()
## Result is:
## TypeError: __bases__ assignment: 'A' deallocator differs from 'object'
I cannot vouch for the consequences, but this code does what you want on py2.7.2.
class Friendly(object):
    def hello(self):
        print 'Hello'

class Person(object): pass

# we can't change the original classes, so we replace them
class newFriendly: pass
newFriendly.__dict__ = dict(Friendly.__dict__)
Friendly = newFriendly

class newPerson: pass
newPerson.__dict__ = dict(Person.__dict__)
Person = newPerson

p = Person()
Person.__bases__ = (Friendly,)
p.hello()  # prints "Hello"
We know that this is possible. Cool. But we'll never use it!
Right off the bat, all the caveats of messing with the class hierarchy dynamically are in effect.
But if it has to be done then, apparently, there is a hack that gets around the "deallocator differs from 'object'" issue when modifying the __bases__ attribute for new-style classes.
You can define a class Object:
class Object(object): pass
This derives a class from the built-in metaclass type.
That's it; now your new-style classes can modify __bases__ without any problem.
In my tests this actually worked very well as all existing (before changing the inheritance) instances of it and its derived classes felt the effect of the change including their mro getting updated.
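Here is a small sketch of that trick as I understand it (the Friendly/Person names are reused from the question; behaviour checked against CPython 2.7-era interpreters and may differ elsewhere):

class Object(object): pass

class Friendly(Object):
    def hello(self):
        print 'Hello'

class Person(Object): pass

p = Person()
Person.__bases__ = (Friendly,)   # no "deallocator differs" TypeError this time
p.hello()                        # the pre-existing instance sees the new method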
I needed a solution for this which:
Works with both Python 2 (>= 2.7) and Python 3 (>= 3.2).
Lets the class bases be changed after dynamically importing a dependency.
Lets the class bases be changed from unit test code.
Works with types that have a custom metaclass.
Still allows unittest.mock.patch to function as expected.
Here's what I came up with:
def ensure_class_bases_begin_with(namespace, class_name, base_class):
    """ Ensure the named class's bases start with the base class.

        :param namespace: The namespace containing the class name.
        :param class_name: The name of the class to alter.
        :param base_class: The type to be the first base class for the
            newly created type.
        :return: ``None``.

        Call this function after ensuring `base_class` is
        available, before using the class named by `class_name`.

        """
    existing_class = namespace[class_name]
    assert isinstance(existing_class, type)

    bases = list(existing_class.__bases__)
    if base_class is bases[0]:
        # Already bound to a type with the right bases.
        return
    bases.insert(0, base_class)

    new_class_namespace = existing_class.__dict__.copy()
    # Type creation will assign the correct ‘__dict__’ attribute.
    del new_class_namespace['__dict__']

    metaclass = existing_class.__metaclass__
    new_class = metaclass(class_name, tuple(bases), new_class_namespace)

    namespace[class_name] = new_class
Used like this within the application:
# foo.py

# Type `Bar` is not available at first, so can't inherit from it yet.
class Foo(object):
    __metaclass__ = type

    def __init__(self):
        self.frob = "spam"

    def __unicode__(self): return "Foo"

# … later …

import bar

ensure_class_bases_begin_with(
    namespace=globals(),
    class_name=str('Foo'),   # `str` type differs on Python 2 vs. 3.
    base_class=bar.Bar)
Use like this from within unit test code:
# test_foo.py

""" Unit test for `foo` module. """

import unittest
import mock

import foo
import bar

ensure_class_bases_begin_with(
    namespace=foo.__dict__,
    class_name=str('Foo'),   # `str` type differs on Python 2 vs. 3.
    base_class=bar.Bar)

class Foo_TestCase(unittest.TestCase):
    """ Test cases for `Foo` class. """

    def setUp(self):
        patcher_unicode = mock.patch.object(
            foo.Foo, '__unicode__')
        patcher_unicode.start()
        self.addCleanup(patcher_unicode.stop)

        self.test_instance = foo.Foo()

        patcher_frob = mock.patch.object(
            self.test_instance, 'frob')
        patcher_frob.start()
        self.addCleanup(patcher_frob.stop)

    def test_instantiate(self):
        """ Should create an instance of `Foo`. """
        instance = foo.Foo()
The above answers are good if you need to change an existing class at runtime. However, if you are just looking to create a new class that inherits from some other class, there is a much cleaner solution. I got this idea from https://stackoverflow.com/a/21060094/3533440, but I think the example below better illustrates a legitimate use case.
def make_default(Map, default_default=None):
    """Returns a class which behaves identically to the given
    Map class, except it gives a default value for unknown keys."""
    class DefaultMap(Map):
        def __init__(self, default=default_default, **kwargs):
            self._default = default
            super().__init__(**kwargs)

        def __missing__(self, key):
            return self._default

    return DefaultMap

DefaultDict = make_default(dict, default_default='wug')

d = DefaultDict(a=1, b=2)
assert d['a'] == 1
assert d['b'] == 2
assert d['c'] == 'wug'
Correct me if I'm wrong, but this strategy seems very readable to me, and I would use it in production code. This is very similar to functors in OCaml.
This method isn't technically inheriting during runtime, since __mro__ can't be changed. But what I'm doing here is using __getattr__ to be able to access any attributes or methods from a certain class. (Read the comments in the order of the numbers placed before them; it makes more sense.)
class Sub:
    def __init__(self, f, cls):
        self.f = f
        self.cls = cls

    # 6) this method will pass the self parameter
    # (which is the original class object we passed)
    # and then it will fill in the rest of the arguments
    # using *args and **kwargs
    def __call__(self, *args, **kwargs):
        # 7) the multiple try / except statements
        # are for making sure if an attribute was
        # accessed instead of a function, the __call__
        # method will just return the attribute
        try:
            return self.f(self.cls, *args, **kwargs)
        except TypeError:
            try:
                return self.f(*args, **kwargs)
            except TypeError:
                return self.f

# 1) our base class
class S:
    def __init__(self, func):
        self.cls = func

    def __getattr__(self, item):
        # 5) we are wrapping the attribute we get in the Sub class
        # so we can implement the __call__ method there
        # to be able to pass the parameters in the correct order
        return Sub(getattr(self.cls, item), self.cls)

# 2) class we want to inherit from
class L:
    def run(self, s):
        print("run" + s)

# 3) we create an instance of our base class
# and then pass an instance (or just the class object)
# as a parameter to this instance
s = S(L)  # 4) in this case, I'm using the class object

s.run("1")
So this sort of substitution and redirection will simulate the inheritance of the class we wanted to inherit from. And it even works with attributes or methods that don't take any parameters.
What I'm talking about here are nested classes. Essentially, I have two classes that I'm modeling. A DownloadManager class and a DownloadThread class. The obvious OOP concept here is composition. However, composition doesn't necessarily mean nesting, right?
I have code that looks something like this:
class DownloadThread:
    def foo(self):
        pass

class DownloadManager():
    def __init__(self):
        self.dwld_threads = []

    def create_new_thread(self):
        self.dwld_threads.append(DownloadThread())
But now I'm wondering if there's a situation where nesting would be better. Something like:
class DownloadManager():
    class DownloadThread:
        def foo(self):
            pass

    def __init__(self):
        self.dwld_threads = []

    def create_new_thread(self):
        self.dwld_threads.append(DownloadManager.DownloadThread())
You might want to do this when the "inner" class is a one-off, which will never be used outside the definition of the outer class. For example to use a metaclass, it's sometimes handy to do
class Foo(object):
    class __metaclass__(type):
        ...
instead of defining a metaclass separately, if you're only using it once.
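As a small, hedged illustration (Python 2 syntax, since the nested __metaclass__ spelling only exists there; the attribute name is invented), a one-off metaclass that tags the class at creation time could look like:

class Foo(object):
    class __metaclass__(type):
        def __new__(mcs, name, bases, namespace):
            # stamp every class created with this metaclass
            namespace['created_by_metaclass'] = True
            return type.__new__(mcs, name, bases, namespace)

print Foo.created_by_metaclass   # -> True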
The only other time I've used nested classes like that, I used the outer class only as a namespace to group a bunch of closely related classes together:
class Group(object):
    class cls1(object):
        ...

    class cls2(object):
        ...
Then from another module, you can import Group and refer to these as Group.cls1, Group.cls2 etc. However one might argue that you can accomplish exactly the same (perhaps in a less confusing way) by using a module.
I don't know Python, but your question seems very general. Ignore me if it's specific to Python.
Class nesting is all about scope. If you think that one class will only make sense in the context of another one, then the former is probably a good candidate to become a nested class.
It is a common pattern to make helper classes private, nested classes.
There is another use for nested classes: when one wants to construct inherited classes whose enhanced functionality is encapsulated in a specific nested class.
See this example:
class foo:
    class bar:
        ...  # functionalities of a specific sub-feature of foo

    def __init__(self):
        self.a = self.bar()
        ...

    ...  # other features of foo

class foo2(foo):
    class bar(foo.bar):
        ...  # enhanced functionalities for this specific feature

    def __init__(self):
        foo.__init__(self)
Note that in the constructor of foo, the line self.a = self.bar() will construct a foo.bar when the object being constructed is actually a foo object, and a foo2.bar object when the object being constructed is actually a foo2 object.
If the class bar was defined outside of class foo instead, as well as its inherited version (which would be called bar2, for example), then defining the new class foo2 would be much more painful, because the constructor of foo2 would need to have its first line replaced by self.a = bar2(), which implies re-writing the whole constructor.
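Here is a concrete, runnable version of that pattern (the describe() method and its return strings are invented just to show which bar gets constructed):

class foo(object):
    class bar(object):
        def describe(self):
            return "basic bar"

    def __init__(self):
        # self.bar resolves to foo2.bar when self is a foo2 instance
        self.a = self.bar()

class foo2(foo):
    class bar(foo.bar):
        def describe(self):
            return "enhanced bar"

print(foo().a.describe())    # basic bar
print(foo2().a.describe())   # enhanced bar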
You could be using a class as a class generator. Like (in some off-the-cuff code):
class gen(object):
    class base_1(object): pass
    ...
    class base_n(object): pass

    def __init__(self, ...):
        ...

    def mk_cls(self, ..., type):
        '''makes a class based on the type passed in, the current state of
        the class, and the other inputs to the method'''
I feel like when you need this functionality it will be very clear to you. If you don't need to be doing something similar, then it probably isn't a good use case.
There is really no benefit to doing this, except if you are dealing with metaclasses.
the class: suite really isn't what you think it is. It is a weird scope, and it does strange things. It really doesn't even make a class! It is just a way of collecting some variables - the name of the class, the bases, a little dictionary of attributes, and a metaclass.
The name, the dictionary and the bases are all passed to the function that is the metaclass, and the result is assigned to the class name in the scope where the class: suite was.
What you can gain by messing with metaclasses, and indeed by nesting classes within your stock standard classes, is harder-to-read code, harder-to-understand code, and odd errors that are terribly difficult to understand without being intimately familiar with why the 'class' scope is entirely different from any other Python scope.
A good use case for this feature is Error/Exception handling, e.g.:
class DownloadManager(object):
    class DownloadException(Exception):
        pass

    def download(self):
        ...
Now whoever is reading the code knows all the possible exceptions related to this class.
Either way, defined inside or outside of a class, would work. Here is an employee pay schedule program where the helper class EmpInit is embedded inside the class Employee:
class Employee:
    def level(self, j):
        return j * 5E3

    def __init__(self, name, deg, yrs):
        self.name = name
        self.deg = deg
        self.yrs = yrs
        self.empInit = Employee.EmpInit(self.deg, self.level)
        self.base = Employee.EmpInit(self.deg, self.level).pay

    def pay(self):
        if self.deg in self.base:
            return self.base[self.deg]() + self.level(self.yrs)
        print(f"Degree {self.deg} is not in the database {self.base.keys()}")
        return 0

    class EmpInit:
        def __init__(self, deg, level):
            self.level = level
            self.j = deg
            self.pay = {1: self.t1, 2: self.t2, 3: self.t3}

        def t1(self): return self.level(1*self.j)
        def t2(self): return self.level(2*self.j)
        def t3(self): return self.level(3*self.j)

if __name__ == '__main__':
    for loop in range(10):
        lst = [item for item in input(f"Enter name, degree and years : ").split(' ')]
        e1 = Employee(lst[0], int(lst[1]), int(lst[2]))
        print(f'Employee {e1.name} with degree {e1.deg} and years {e1.yrs} is making {e1.pay()} dollars')
        print("EmpInit deg {0}\nlevel {1}\npay[deg]: {2}".format(e1.empInit.j, e1.empInit.level, e1.base[e1.empInit.j]))
To define it outside, just un-indent EmpInit and change Employee.EmpInit() to simply EmpInit() as a regular "has-a" composition. However, since Employee is the controller of EmpInit and users don't instantiate or interface with it directly, it makes sense to define it inside as it is not a standalone class. Also note that the instance method level() is designed to be called in both classes here. Hence it can also be conveniently defined as a static method in Employee so that we don't need to pass it into EmpInit, instead just invoke it with Employee.level().
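A hedged sketch of that variation, trimmed to the relevant parts (only level() and EmpInit are shown; the rest of Employee stays as above):

class Employee:
    @staticmethod
    def level(j):
        return j * 5E3

    class EmpInit:
        def __init__(self, deg):
            # no level argument needed any more; call Employee.level() directly
            self.j = deg
            self.pay = {1: self.t1, 2: self.t2, 3: self.t3}

        def t1(self): return Employee.level(1 * self.j)
        def t2(self): return Employee.level(2 * self.j)
        def t3(self): return Employee.level(3 * self.j)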