Dynamically altering object types on instantiation in Python

I have a metaclass that, at the moment, simply returns a class object.
With this metaclass I created two different classes. The difference between the classes is a dictionary that gets passed to __init__ for one of them but not for the other.
If this dict contains only a single entry, I want Python to return an instance of the other class instead of the one that was actually called.
class _MetaClass(type):
    def __new__(cls, name, bases, dct):
        return super(_MetaClass, cls).__new__(cls, name, bases, dct)

class Class1(object):
    __metaclass__ = _MetaClass

    def __init__(self, someargs):
        pass

class Class2(object):
    __metaclass__ = _MetaClass

    def __init__(self, someargs, kwargs):
        if len(kwargs) == 1:
            return Class1(someargs)
        else:
            pass
TestInstance = Class2("foo", {"bar": "foo"})  # should now be a Class1 instance, because the dict has only one item
If the dict, like in this case, has len(dct) == 1, then it should create an instance of Class1 with "foo" passed to its __init__, instead of returning an instance of Class2 as it normally would.
I tried to implement the __new__ and __init__ methods of the metaclass, but I could not figure out how to check the arguments that are actually passed on instantiation.

You could create a handler method to deal with the issue:
def handler(someargs, yourDict):
    if len(yourDict) == 1:
        return Class1(someargs)
    else:
        return Class2(someargs, yourDict)
and then call the handler instead of the constructor:
TestInstance = handler("foo", {"bar": "foo"})
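Since the question mentions trying __new__ and __init__ on the metaclass: the hook that actually sees the constructor arguments is the metaclass's __call__, which runs on every instantiation, before any instance exists. A sketch of that approach in the question's Python 2 style (the argument handling is an assumption; it treats the dict as the second positional argument):

class _MetaClass(type):
    def __call__(cls, *args, **kwargs):
        # Runs for every `SomeClass(...)` whose type is _MetaClass,
        # before an instance is created.
        if cls is Class2 and len(args) >= 2 and len(args[1]) == 1:
            # One-entry dict: build a Class1 instead of a Class2
            return Class1(args[0])
        return super(_MetaClass, cls).__call__(*args, **kwargs)

class Class1(object):
    __metaclass__ = _MetaClass

    def __init__(self, someargs):
        self.someargs = someargs

class Class2(object):
    __metaclass__ = _MetaClass

    def __init__(self, someargs, kwargs):
        self.someargs = someargs
        self.kwargs = kwargs

TestInstance = Class2("foo", {"bar": "foo"})
print type(TestInstance)  # <class '__main__.Class1'>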

Related

Python metaclasses handling double class creation with same `__classcell__` attribute

I wrote a metaclass to handle the creation of two classes at class definition (with one accessible as an attribute of the other).
MRE with logic removed:
class MyMeta(type):
    def __new__(mcs, clsname, bases, namespace, **kwds):
        result = super().__new__(mcs, clsname, bases, namespace)
        # Omitted logic to prefix namespace
        print(namespace)
        # namespace.pop('__classcell__', None)
        disambiguated_cls = super().__new__(mcs, clsname + 'Prefixed', bases, namespace)
        result.disambiguated_cls = disambiguated_cls
        return result

class A(metaclass=MyMeta):
    @classmethod
    def f(cls, x):
        return x + 1

class B(A):
    @classmethod
    def f(cls, x):
        # return A.f(x)
        return super().f(x)
Any time super() is called (e.g. in class B's definition) a __classcell__ entry is added to the namespace. If that __classcell__ is reused for a second class (e.g. in the creation of disambiguated_cls), the following exception is raised:
TypeError: __class__ set to <class '__main__.B'> defining 'B' as <class '__main__.B'>
Looks like there are two solutions (corresponding to the two commented lines in the example):

1. Don't call super() (just call the function on the superclass directly: A.f(x))
2. Pop the __classcell__ out of the namespace

The first means manual resolution of superclasses, and I'm not sure what downstream effects the second can have.
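Concretely, the second option would look like this (a sketch based on the MRE above; the first super().__new__ consumes the cell, and popping it keeps the second call from trying to set __class__ again):

class MyMeta(type):
    def __new__(mcs, clsname, bases, namespace, **kwds):
        result = super().__new__(mcs, clsname, bases, namespace)
        # The cell was already propagated once; drop it before creating the clone
        namespace.pop('__classcell__', None)
        disambiguated_cls = super().__new__(mcs, clsname + 'Prefixed', bases, namespace)
        result.disambiguated_cls = disambiguated_cls
        return result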
For reference the docs on __classcell__ in metaclasses state:
In CPython 3.6 and later, the __class__ cell is passed to the metaclass as a __classcell__ entry in the class namespace. If present, this must be propagated up to the type.__new__ call in order for the class to be initialised correctly. Failing to do so will result in a RuntimeError in Python 3.8.
Is there another option? What is the best way to do this?
It feels like the simpler way is to partially create a new "blank" class with a __classcell__ of its own, and populate it with the namespace of the original class before calling type.__new__ for the second class.

If that works, it might be worth using. Otherwise, if no multiple inheritance or complicated hierarchy is involved, hard-coding the call to the superclass might work. However, it seems that hard-coding the call is impossible here, since calling the hard-coded method inside the "disambiguated" class would call the outer ("result") class in the parent.

One point to take care of is that all functions (including functions decorated with @classmethod) have their closures bound to the value provided in the original __classcell__. So, in order for the same functions to participate as methods in a second class, they have to be rebound to a fresh __classcell__ value, one created along with the second class. Rebinding a function's closure is not that straightforward, since functions are immutable: the way to do it is to create a new function object from the __code__ and other attributes of the original function, passing in a new closure. Once that is done, the rest is straightforward:
from types import FunctionType

class MyMeta(type):
    def __new__(mcls, clsname, bases, namespace, **kwds):
        result = super().__new__(mcls, clsname, bases, namespace)
        # Omitted logic to prefix namespace
        print(namespace)

        # the upside of having "nested_process" here instead of
        # inside the other metaclass is transparent
        # read access to the attributes of the original class:
        def nested_process(stubname, stubbases, stub_namespace):
            new_namespace = namespace.copy()
            classcell = stub_namespace["__classcell__"]
            new_namespace["__classcell__"] = classcell
            for m_name, method in new_namespace.items():
                is_classmeth = False
                if isinstance(method, classmethod):
                    is_classmeth = True
                    method = method.__func__
                if not isinstance(method, FunctionType):
                    continue
                if not (free := method.__code__.co_freevars) or free[0] != "__class__":
                    continue
                method = FunctionType(
                    method.__code__, method.__globals__, method.__name__,
                    method.__defaults__,
                    closure=(classcell,)
                )
                if is_classmeth:
                    method = classmethod(method)
                new_namespace[m_name] = method
            # maybe modify namespace __qualname__.
            # maybe rewrite "bases" to point to prefixed versions?
            return clsname + "Prefixed", bases, new_namespace

        class disambiguated_cls(metaclass=NestedMeta, clone_hook=nested_process):
            def _stub(self):
                _ = __class__

        result.disambiguated_cls = disambiguated_cls
        return result

class NestedMeta(MyMeta):
    # the new synthetic class needs this modified __new__,
    # but as it also inherits from a class created by
    # "MyMeta", this metaclass must inherit from it,
    # otherwise "metaclass conflict" is raised.
    def __new__(mcls, clsname, bases, namespace, clone_hook):
        clsname, bases, namespace = clone_hook(clsname, bases, namespace)
        # surprise: we have to hardcode "type" here:
        return type.__new__(mcls, clsname, bases, namespace)
        # if "super()" is desired, then an extra kwd parameter
        # can be passed to "MyMeta.__new__" so it skips its nesting+prefixing logic

    # ensure an eventual __init__ from MyMeta is not called:
    __init__ = lambda *args, **kw: None

class A(metaclass=MyMeta):
    @classmethod
    def f(cls, x):
        return x + 1

class B(A):
    @classmethod
    def f(cls, x):
        return super().f(x)
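A quick sanity check of the rebinding, assuming the definitions above:

print(B.f(1))                    # 2 -- super() resolves via B's own __classcell__
print(B.disambiguated_cls.f(1))  # 2 -- super() resolves via the fresh cell of BPrefixed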

When is the __prepare__ method of a metaclass executed, and what uses its return value?

PEP 3115 has the following example of using the __prepare__ method of a metaclass (print statements are mine):
# The custom dictionary
class member_table(dict):
    def __init__(self):
        self.member_names = []

    def __setitem__(self, key, value):
        print(f'in __setitem__{key, value}')
        # if the key is not already defined, add to the
        # list of keys.
        if key not in self:
            self.member_names.append(key)
        # Call superclass
        dict.__setitem__(self, key, value)

# The metaclass
class OrderedClass(type):
    # The prepare function
    @classmethod
    def __prepare__(metacls, name, bases):  # No keywords in this case
        print('in __prepare__')
        return member_table()

    # The metaclass invocation
    def __new__(cls, name, bases, classdict):
        print('in __new__')
        # Note that we replace the classdict with a regular
        # dict before passing it to the superclass, so that we
        # don't continue to record member names after the class
        # has been created.
        result = type.__new__(cls, name, bases, dict(classdict))
        result.member_names = classdict.member_names
        return result

print('before MyClass')

class MyClass(metaclass=OrderedClass):
    print('in MyClass 1')

    # method1 goes in array element 0
    def method1(self):
        pass

    print('in MyClass 2')

    # method2 goes in array element 1
    def method2(self):
        pass

    print('in MyClass 3')
Running this prints the following:
before MyClass
in __prepare__
in __setitem__('__module__', '__main__')
in __setitem__('__qualname__', 'MyClass')
in MyClass 1
in __setitem__('method1', <function MyClass.method1 at 0x7fa70414da60>)
in MyClass 2
in __setitem__('method2', <function MyClass.method2 at 0x7fa70414daf0>)
in MyClass 3
in __new__
So it seems like when the MyClass statement is executed, execution first goes to the metaclass's __prepare__ method, which returns member_table() (who/what uses this return value?). Then something sets the class's __module__ and __qualname__, then the class body is executed, which sets the class's methods (method1 and method2), and then the __new__ method is called with the return value of __prepare__ as the classdict argument (who/what passes this value along?).
I tried to step through the execution in Thonny's debugger, but that threw an error. I also tried stepping through the execution on pythontutor.com, but that wasn't granular enough. I pdb'ed it, but it was hard to follow what was going on. Finally, I added some print statements, which are present in the code above.
The result of __prepare__() is the namespace argument that gets passed to __new__. It is the namespace in which the body of the class is evaluated (see [1]).
So within the newly created class you can see the values of MyClass.__module__, MyClass.__qualname__, etc., because they are assigned in the namespace object of MyClass.
Most uses of metaclasses have no need for __prepare__(), and an ordinary namespace (a plain dict) is used.
[1] https://docs.python.org/3.9/reference/datamodel.html?highlight=__prepare__#preparing-the-class-namespace
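To see who passes the value along: the class statement machinery itself does, and it can be approximated in plain code. A simplified sketch, reusing member_table and OrderedClass from above (body_source is a stand-in for the class body; the real machinery also seeds __module__ and __qualname__, which is why those appear first in the output):

body_source = """
def method1(self):
    pass

def method2(self):
    pass
"""

name, bases = 'MyClass', ()
namespace = OrderedClass.__prepare__(name, bases)  # the member_table()
exec(body_source, globals(), namespace)            # the class body runs in that namespace,
                                                   # so each def goes through __setitem__
MyClass = OrderedClass(name, bases, namespace)     # which calls OrderedClass.__new__
print(MyClass.member_names)                        # ['method1', 'method2']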

Overriding a parent's classmethod with an instance method

I may not be thinking about this in a Pythonic way.
I have a class, SqlDB, which uses fetchall to get all of the rows from a cursor:
class SqlDB(object):
    @classmethod
    def executeQuery(cls, cursor, query, params):
        # code to set up and execute query here
        rows = cls.fetchRows(cursor)
        # other code here

    @classmethod
    def fetchRows(cls, cursor):
        print "in class SqlDB"
        return cursor.fetchall()
So I want to add a subclass that uses fetchmany, and gets initialized with a batch size:
class SqlDBBatch(SqlDB):
    def __init__(self, batchsize=1000):
        self.batchsize = batchsize

    def fetchRows(self, cursor):
        print "in SqlDBBatch"
        while True:
            results = cursor.fetchmany(self.batchsize)
            # more code
Of course, since the original executeQuery function is calling fetchRows on the class passed into it, I'm getting TypeError: unbound method fetchRows() must be called with SqlDBBatch instance as first argument (got CursorDebugWrapper instance instead) when I try to call executeQuery on an instance of SqlDBBatch. Is there any way to achieve what I'm going for, where I can override a parent's classmethod with an instance method, and have the parent class able to call the subclass implementation?
I don't think the problem is with parents or inheritance, but simply with calling an instance method from a class method.
class Bar(object):
    def caller(self, x='cx'):
        print('bar caller', self)
        self.myfn(x)

    @classmethod
    def classcaller(cls, x='ccx'):
        print('bar class caller', cls)
        cls.myfn(x)

    def myfn(self, x=None):
        print('in bar instance', self, x)

    def __repr__(self):
        return 'BarInstance'
Bar().myfn()
# ('in bar instance', BarInstance, None)
Bar.myfn(Bar())
# ('in bar instance', BarInstance, None)
Bar().caller()
# ('bar caller', BarInstance)
# ('in bar instance', BarInstance, 'cx')
Bar.classcaller(Bar())
# ('bar class caller', <class '__main__.Bar'>)
# ('in bar instance', BarInstance, None)
Bar().classcaller(Bar())
# same
# the following produce:
# TypeError: unbound method myfn() must be called with Bar instance ...
# Bar.myfn()
# Bar.caller()
# Bar.classcaller()
# Bar().classcaller()
With this single class, I can readily call myfn from the instance caller method, but I have to pass a separate Bar() instance if using classcaller. Calling the classmethod with Bar() is no different from calling it with Bar.
Even if a classmethod is called on an instance, it's the klass that is passed through, not the obj. See http://docs.python.org/2/howto/descriptor.html#static-methods-and-class-methods:
class ClassMethod(object):
    "Emulate PyClassMethod_Type() in Objects/funcobject.c"
    ...
    def __get__(self, obj, klass=None):
        if klass is None:
            klass = type(obj)
        def newfunc(*args):
            return self.f(klass, *args)
        return newfunc
Why are you using @classmethod? Do you call these methods with just the class name, or with instances? Even if the parent class versions don't use instance attributes, it might be simpler to use instance methods at all levels. Python got along for many years without this decorator (added in 2.4).
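For illustration, a sketch of that last suggestion applied to the classes from the question (Python 3 print syntax; the query-execution details stay elided as in the original):

class SqlDB(object):
    def executeQuery(self, cursor, query, params):
        # code to set up and execute query here
        rows = self.fetchRows(cursor)  # ordinary method dispatch picks the override
        # other code here

    def fetchRows(self, cursor):
        print("in class SqlDB")
        return cursor.fetchall()

class SqlDBBatch(SqlDB):
    def __init__(self, batchsize=1000):
        self.batchsize = batchsize

    def fetchRows(self, cursor):
        print("in SqlDBBatch")
        return cursor.fetchmany(self.batchsize)

# SqlDBBatch(500).executeQuery(cursor, query, params) now reaches
# SqlDBBatch.fetchRows without any unbound-method error.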

Using __delitem__ with a class object rather than an instance in Python

I'd like to be able to use __delitem__ with a class-level variable.
My use case can be found here (the answer that uses _reg_funcs) but it basically involves a decorator class keeping a list of all the functions it has decorated. Is there a way I can get the class object to support __delitem__? I know I could keep an instance around specially for this purpose but I'd rather not have to do that.
class Foo(object):
    _instances = {}

    def __init__(self, my_str):
        n = len(self._instances) + 1
        self._instances[my_str] = n
        print "Now up to {} instances".format(n)

    @classmethod
    def __delitem__(cls, my_str):
        del cls._instances[my_str]
abcd = Foo('abcd')
defg = Foo('defg')
print "Deleting via instance..."
del abcd['abcd']
print "Done!\n"
print "Deleting via class object..."
del Foo['defg']
print "You'll never get here because of a TypeError: 'type' object does not support item deletion"
When you write del obj[key], Python calls the __delitem__ method of the type of obj, not of obj itself. So del obj[key] results in type(obj).__delitem__(obj, key).
In your case, that means del Foo['defg'] becomes type(Foo).__delitem__(Foo, 'defg'). type(Foo) is type, and type.__delitem__ is not defined. You can't modify type itself, so you'll need to change the type of Foo to something that does define it.
You do that by defining a new metaclass, which is simply a subclass of type, then instructing Python to use your new metaclass to create the Foo class (not instances of Foo, but Foo itself).
class ClassMapping(type):
    def __new__(cls, name, bases, dct):
        t = type.__new__(cls, name, bases, dct)
        t._instances = {}
        return t

    def __delitem__(cls, my_str):
        del cls._instances[my_str]

class Foo(object):
    __metaclass__ = ClassMapping

    def __init__(self, my_str):
        n = len(Foo._instances) + 1
        Foo._instances[my_str] = n
        print "Now up to {} instances".format(n)
Changing the metaclass of Foo from type to ClassMapping provides Foo with:

- a class variable _instances that refers to a dictionary, and
- a __delitem__ method that removes arguments from _instances.
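For reference, the same idea in Python 3 syntax (a sketch; Python 3 ignores the __metaclass__ attribute, so the metaclass goes in the class header instead):

class ClassMapping(type):
    def __new__(cls, name, bases, dct):
        t = type.__new__(cls, name, bases, dct)
        t._instances = {}
        return t

    def __delitem__(cls, my_str):
        del cls._instances[my_str]

class Foo(metaclass=ClassMapping):
    def __init__(self, my_str):
        Foo._instances[my_str] = len(Foo._instances) + 1
        print("Now up to {} instances".format(len(Foo._instances)))

Foo('defg')
del Foo['defg']  # works: type(Foo) is ClassMapping, which defines __delitem__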

Does Python support something like literal objects?

In Scala I could define an abstract class and implement it with an object:
abstract class Base {
  def doSomething(x: Int): Int
}

object MySingletonAndLiteralObject extends Base {
  override def doSomething(x: Int) = x * x
}
My concrete example in Python:
class Book(Resource):
    path = "/book/{id}"

    def get(request):
        return aBook
Inheritance wouldn't make sense here, since no two classes could have the same path. And only one instance is needed, so the class doesn't act as a blueprint for objects. In other words: no class is needed for a Resource (Book in my example), but a base class is needed to provide common functionality.
I'd like to have:
object Book(Resource):
    path = "/book/{id}"

    def get(request):
        return aBook
What would be the Python 3 way to do it?
Use a decorator to convert the inherited class to an object at creation time.
I believe that the concept of such an object is not a typical way of coding in Python, but if you must, then the decorator class_to_object below, which performs immediate initialisation, will do the trick. Note that any parameters for object initialisation must be passed through the decorator:
def class_to_object(*args):
    def c2obj(cls):
        return cls(*args)
    return c2obj
using this decorator we get:

>>> @class_to_object(42)
... class K(object):
...     def __init__(self, value):
...         self.value = value
...
>>> K
<__main__.K object at 0x38f510>
>>> K.value
42
The end result is that you have an object K similar to your Scala object, and there is no class in the namespace to initialise other objects from.
Note: To be pedantic, the class of the object K can be retrieved as K.__class__, and hence other objects may be initialised from it if somebody really wants to. In Python there is almost always a way around things if you really want.
Use an abc (Abstract Base Class):
import abc

class Resource(metaclass=abc.ABCMeta):
    @abc.abstractproperty
    def path(self):
        ...
        return p
Then anything inheriting from Resource is required to implement path. Notice that path is actually implemented in the ABC; you can access this implementation with super.
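For instance (a sketch; the Book subclass and the concrete property bodies are illustrative, not from the question):

import abc

class Resource(metaclass=abc.ABCMeta):
    @abc.abstractproperty
    def path(self):
        return "/base"

class Book(Resource):
    @property
    def path(self):
        # reach the implementation in the ABC via super
        return super().path + "/book/{id}"

print(Book().path)  # /base/book/{id}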
If you can instantiate Resource directly, you can just do that and attach path and the get method directly:
from types import MethodType

book = Resource()

def get(self):
    return aBook

book.get = MethodType(get, book)
book.path = path  # assumes path and aBook are defined elsewhere
This assumes though that path and get are not used in the __init__ method of Resource and that path is not used by any class methods which it shouldn't be given your concerns.
If your primary concern is making sure that nothing inherits from the Book non-class, then you could just use this metaclass
class Terminal(type):
    classes = []

    def __new__(meta, classname, bases, classdict):
        if [cls for cls in meta.classes if cls in bases]:
            raise TypeError("Can't Touch This")
        cls = super(Terminal, meta).__new__(meta, classname, bases, classdict)
        meta.classes.append(cls)
        return cls

class Book(object):
    __metaclass__ = Terminal

class PaperBackBook(Book):
    pass
You might want to replace the exception thrown with something more appropriate. This would really only make sense if you find yourself instantiating a lot of one-offs.
And if that's not good enough for you and you're using CPython, you could always try some of this hackery:
class Resource(object):
    def __init__(self, value, location=1):
        self.value = value
        self.location = location

with Object('book', Resource, 1, location=2):
    path = '/books/{id}'

    def get(self):
        aBook = 'abook'
        return aBook

print book.path
print book.get()
made possible by my very first context manager.
import copy
import sys

class Object(object):
    def __init__(self, name, cls, *args, **kwargs):
        self.cls = cls
        self.name = name
        self.args = args
        self.kwargs = kwargs

    def __enter__(self):
        self.f_locals = copy.copy(sys._getframe(1).f_locals)

    def __exit__(self, exc_type, exc_val, exc_tb):
        class cls(self.cls):
            pass

        f_locals = sys._getframe(1).f_locals
        new_items = [item for item in f_locals if item not in self.f_locals]
        for item in new_items:
            setattr(cls, item, f_locals[item])
            del f_locals[item]  # Keyser Soze the new names from the enclosing namespace
        obj = cls(*self.args, **self.kwargs)
        f_locals[self.name] = obj  # and insert the new object
Of course I encourage you to use one of my two solutions above, or Katrielalex's suggestion of ABCs.
