Load order of objects in Python pickle

I have a Python pickle file, and when I load it I would like one specific object to be loaded before another (because I have some dependencies...).
Is that possible?
Here is the scenario:
class A:
    def __init__(self):
        self.known_names = ["Dan", "David"]

    def __getattr__(self, name):
        if name not in self.known_names:
            raise UnknownName
        else:
            return self[name]

class B:
    def __init__(self):
        self.a_instance = A()

    def __setattr__(self, name, value):
        if self.a_instance.attr == "something":
            do_something...

    def __setstate__(self):
        self.foo = "blah"
The problem occurs when loading the pickle file: the class B instance is loaded before class A. In that scenario, class B's __setstate__ method tries to set self.foo. That results in a __setattr__ call, which checks self.a_instance's attr attribute. However, class A has not been unpickled yet, so self.known_names does not exist. Calling A's __getattr__ therefore ends in infinite recursion (since known_names does not exist, looking it up calls __getattr__ again).

This is not a dependency problem per se, since __init__ is normally not called when unpickling an object.
As described in the Python pickle docs and this ticket, your problem is most likely that A cannot be safely unpickled due to its use of __getattr__.
Implementing __getstate__ and __setstate__ in A might be sufficient:
def __getstate__(self):
    """Extract state to pickle."""
    return self.__dict__

def __setstate__(self, d):
    """Restore from pickled state."""
    self.__dict__.update(d)
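A complementary fix is to make __getattr__ itself defensive while the instance is still empty. A minimal sketch (the guard is my addition, not part of the original code; note that it raises AttributeError, which pickle's own attribute probing expects):

import pickle

class A:
    def __init__(self):
        self.known_names = ["Dan", "David"]

    def __getattr__(self, name):
        # __getattr__ runs only when normal lookup fails. During
        # unpickling __dict__ may still be empty, so never recurse
        # by looking up known_names through __getattr__ again:
        if name == "known_names":
            raise AttributeError(name)
        if name not in self.known_names:
            raise AttributeError(name)
        # Known name: return whatever the class means by it (placeholder).
        return "value for {}".format(name)

a = pickle.loads(pickle.dumps(A()))  # no infinite recursion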

Related

Is there a recommended way of ensuring immutability

I am observing the following behavior. Is it because Python passes objects by reference?
class Person(object):
    pass

person = Person()
person.name = 'UI'

def test(person):
    person.name = 'Test'

test(person)
print(person.name)
>>> Test
I found that copy.deepcopy() can deep-copy the object to prevent modifying the passed object. Are there any other recommendations?
import copy

class Person(object):
    pass

person = Person()
person.name = 'UI'

def test(person):
    person_copy = copy.deepcopy(person)
    person_copy.name = 'Test'

test(person)
print(person.name)
>>> UI
I am observing the following behavior. Is it because Python passes objects by reference?
Not really. It's a subtle question; see python - How do I pass a variable by reference? - Stack Overflow.
Personally, I don't fully agree with the accepted answer there, and I recommend you google "call by sharing". Then you can make your own decision on this subtle question.
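A short sketch of the distinction: rebinding the parameter name inside a function has no effect on the caller, while mutating the object it refers to does.

class Person(object):
    pass

def rebind(person):
    person = Person()      # rebinds the local name only

def mutate(person):
    person.name = 'Test'   # mutates the shared object

p = Person()
p.name = 'UI'
rebind(p)
print(p.name)  # UI   -- rebinding is invisible to the caller
mutate(p)
print(p.name)  # Test -- mutation is visible to the caller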
I found that copy.deepcopy() can deep-copy the object to prevent modifying the passed object. Are there any other recommendations?
As far as I know, there is no better way unless you use a third-party package.
You can use the __setattr__ magic method to implement a base class that allows you to "freeze" an object after you're done with it.
This is not bullet-proof: you can still reach into __dict__ to mutate the object, you can unfreeze it by unsetting _frozen, and if an attribute's value is itself mutable this doesn't help much (x.things.append('x') would still work for a list of things).
class Freezable:
    def freeze(self):
        self._frozen = True

    def __setattr__(self, key, value):
        if getattr(self, "_frozen", False):
            raise RuntimeError("%r is frozen" % self)
        super().__setattr__(key, value)

class Person(Freezable):
    def __init__(self, name):
        self.name = name

p = Person("x")
print(p.name)
p.name = "y"
print(p.name)
p.freeze()
p.name = "q"
outputs
x
y
Traceback (most recent call last):
  File "freezable.py", line 21, in <module>
    p.name = 'q'
RuntimeError: <__main__.Person object at 0x10f82f3c8> is frozen
There is no really 100% watertight way, but you can make it hard to inadvertently mutate an object you want to keep frozen; for most people the recommended approach is probably a frozen dataclass or a frozen attrs class.
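For instance, a minimal frozen dataclass (a sketch; the field name is illustrative):

from dataclasses import dataclass

@dataclass(frozen=True)
class FrozenPerson:
    name: str

p = FrozenPerson(name='UI')
p.name = 'Test'  # raises dataclasses.FrozenInstanceError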
In his talk on dataclasses (2018), @Raymond Hettinger mentions three approaches: one is a metaclass; another, as in the fractions module, is to give attributes a read-only property; the dataclasses module itself extends __setattr__ and __delattr__ and overrides __hash__:
-> use a metaclass.
Good resources include @David Beazley's books and talks on Python.
-> give attributes a read-only property

class SimpleFrozenObject:
    def __init__(self, x=0):
        self._x = x

    @property
    def x(self):
        return self._x

f = SimpleFrozenObject()
f.x = 2  # raises AttributeError: can't set attribute
-> extend __setattr__ and __delattr__, and override __hash__ (schematic, following the dataclasses source):

class FrozenObject:
    ...
    def __setattr__(self, name, value):
        if type(self) is cls or name in (tuple of attributes to freeze,):
            raise FrozenInstanceError(f'cannot assign to field {name}')
        super(cls, self).__setattr__(name, value)

    def __delattr__(self, name):
        if type(self) is cls or name in (tuple of attributes to freeze,):
            raise FrozenInstanceError(f'cannot delete field {name}')
        super(cls, self).__delattr__(name)

    def __hash__(self):
        return hash((tuple of attributes to freeze,))
    ...
The library attrs also offers options to create immutable objects.
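For example, a sketch with the attrs API (attribute names are illustrative):

import attr

@attr.s(frozen=True)
class FrozenPerson(object):
    name = attr.ib()

p = FrozenPerson(name='UI')
p.name = 'Test'  # raises attr.exceptions.FrozenInstanceError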

A good practice to implement with python multiple inheritance class?

The Scenario:
class A:
    def __init__(self, key, secret):
        self.key = key
        self.secret = secret

    def same_name_method(self):
        do_some_stuff

    def method_a(self):
        pass

class B:
    def __init__(self, key, secret):
        self.key = key
        self.secret = secret

    def same_name_method(self):
        do_another_stuff

    def method_b(self):
        pass

class C(A, B):
    def __init__(self, *args, **kwargs):
        # I want to init both class A's and B's key and secret
        ## I want to rename class A's and B's same-name method
        any_ideas()
    ...
What I Want:
I want the instance of class C to initialize both class A and class B, because they use different API keys.
And I want to rename class A's and B's same_name_method, so I will not be confused about which same_name_method is which.
What I Have Done:
For problem one, I have done this:
class C(A, B):
    def __init__(self, *args, **kwargs):
        A.__init__(self, a_api_key, a_api_secret)
        B.__init__(self, b_api_key, b_api_secret)
Comment: I know about super(), but in this situation I do not know how to use it.
For problem two, I added a __new__ to class C:
def __new__(cls, *args, **kwargs):
    cls.platforms = []
    cls.rename_method = []
    for platform in cls.__bases__:
        # fetch platform module name
        module_name = platform.__module__.split('.')[0]
        cls.platforms.append(module_name)
        # rename attr
        for k, v in platform.__dict__.items():
            if not k.startswith('__'):
                setattr(cls, module_name + '_' + k, v)
                cls.rename_method.append(k)
    for i in cls.rename_method:
        delattr(cls, i)  ## this line will raise AttributeError!!
    return super().__new__(cls)
Comment: because I renamed the methods and added them as cls attributes, I need to delete the old method attributes, but I do not know how to delattr them. For now I just leave them alone and do not delete the old methods.
Question:
Any Suggestions?
So, you want some pretty advanced, complicated things, and you don't understand well how classes behave in Python.
So, for your first thing - initializing both classes, and every other method that should run in all classes - the correct solution is to make use of cooperative calls to super() methods.
A call to super() in Python returns a very special proxy object that reflects all methods available in the next class in the method resolution order (MRO).
So, if A.__init__ and B.__init__ both have to be called, each method should include a super().__init__() call - and one will call the other's __init__ in the appropriate order, regardless of how they are combined as bases in subclasses. As object also has an __init__, the last super().__init__() simply reaches it, which is a no-op. If you have more methods in your classes that should run in all base classes, you'd rather build a proper base class, so that the top-most super() call doesn't try to propagate to a non-existent method.
Otherwise, it is just:
class A:
    def __init__(self, akey, asecret, **kwargs):
        self.key = akey
        self.secret = asecret
        super().__init__(**kwargs)

class B:
    def __init__(self, bkey, bsecret, **kwargs):
        self.key = bkey
        self.secret = bsecret
        super().__init__(**kwargs)

class C(A, B):
    pass  # does not even need an explicit __init__
I think you can get the idea. Of course, the parameter names have to differ - ideally, when writing C you don't have to worry about parameter order, but when calling C you do have to worry about supplying all mandatory parameters for C and its bases. If you can't rename the parameters in A or B to be distinct, you could rely on parameter order for the call, with each __init__ consuming two positional parameters - but that requires some extra care with inheritance order.
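A sketch of that positional variant (the attribute names are my assumption; each __init__ consumes its first two positional arguments and forwards the rest up the MRO):

class A:
    def __init__(self, key, secret, *args, **kwargs):
        self.a_key, self.a_secret = key, secret
        super().__init__(*args, **kwargs)  # forward leftover arguments

class B:
    def __init__(self, key, secret, *args, **kwargs):
        self.b_key, self.b_secret = key, secret
        super().__init__(*args, **kwargs)

class C(A, B):
    pass

# A consumes the first pair, B the second (MRO order: A, then B):
c = C("a_key", "a_secret", "b_key", "b_secret")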
So - up to this point, this is basic Python multiple-inheritance how-to, and should be pretty straightforward. Now comes your strange stuff.
As for the auto-renaming of methods: first things first -
are you quite sure you need inheritance? Maybe keeping your granular classes for each external service, plus a registry-and-dispatch class that calls methods on them by composition, would be saner. (I may come back to this later.)
Are you aware that __new__ is called for each instantiation of the class, and that all the class-attribute mangling you are performing there happens at each new instance of your classes?
So, if the needed method renaming and shadowing has to take place at class-creation time, you can do it with the special method __init_subclass__, which exists since Python 3.6. It is a special class method that is called once for each derived class of the class it is defined on. So, just create a base class from which A and B themselves will inherit, and move a properly modified version of the code you are putting in __new__ there (a sketch follows). If you are not using Python 3.6, this should be done in the __new__ or __init__ of a metaclass, not in the __new__ of the class itself.
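A minimal sketch of moving that logic into __init_subclass__ (Python 3.6+). The prefixing follows the question's module_name + '_' + method idea; in this self-contained demo all classes share one module, whereas real code would rely on each platform living in its own module:

class RenamingBase:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Runs once per subclass, at class-creation time - not per instance.
        for base in cls.__bases__:
            if base is RenamingBase:
                continue
            module_name = base.__module__.split('.')[0]
            for name, value in list(base.__dict__.items()):
                if not name.startswith('_'):
                    setattr(cls, module_name + '_' + name, value)

class A(RenamingBase):
    def same_name_method(self):
        return "A"

class B(RenamingBase):
    def same_name_method(self):
        return "B"

class C(A, B):
    pass  # gets prefixed copies of the base-class methods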
Another approach would be a custom __getattribute__ method - it could be crafted to provide namespaces for the base classes. It would work only on instances, not on the classes themselves (but could be made to, again, using a metaclass). __getattribute__ can even hide the same-name methods:
class Base:
    @classmethod
    def _get_base_modules(cls):
        result = {}
        for base in cls.__bases__:
            module_name = base.__module__.split(".")[0]
            result[module_name] = base
        return result

    def _proxy(self, module_name):
        class base:
            def __dir__(base_self):
                return dir(self._base_modules[module_name])

            def __getattr__(base_self, attr):
                original_value = self._base_modules[module_name].__dict__[attr]
                if hasattr(original_value, "__get__"):
                    original_value = original_value.__get__(self, self.__class__)
                return original_value

        base.__name__ = module_name
        return base()

    def __init_subclass__(cls):
        cls._base_modules = cls._get_base_modules()
        cls._shadowed = {
            name for module_class in cls._base_modules.values()
            for name in module_class.__dict__ if not name.startswith("_")
        }

    def __getattribute__(self, attr):
        if attr.startswith("_"):
            return super().__getattribute__(attr)
        cls = self.__class__
        if attr in cls._shadowed:
            raise AttributeError(attr)
        if attr in cls._base_modules:
            return self._proxy(attr)
        return super().__getattribute__(attr)

    def __dir__(self):
        return super().__dir__() + list(self._base_modules)

class A(Base):
    ...

class B(Base):
    ...

class C(A, B):
    ...
As you can see, this is some fun, but it starts getting really complicated - and all the hoop-jumping needed to retrieve the actual attributes from the superclasses after adding an artificial namespace seems to indicate that your problem is not calling for inheritance after all, as I suggested above.
Since you have your small, functional, atomic classes for each "service", you could use a plain, simple, non-meta-at-all class that works as a registry for the various services - and you can even enhance it to call the equivalent method on several of the services it handles with a single call:
class Services:
    def __init__(self):
        self.registry = {}

    def register(self, cls, key, secret):
        name = cls.__module__.split(".")[0]
        service = cls(key, secret)
        self.registry[name] = service

    def __getattr__(self, attr):
        if attr in self.registry:
            return self.registry[attr]
        raise AttributeError(attr)
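A hypothetical usage sketch, with a stand-in service class (in real use each service class would live in its own module, and that module name becomes the registry key):

class FakeService:  # stands in for e.g. someexchange.Client
    def __init__(self, key, secret):
        self.key, self.secret = key, secret

    def same_name_method(self):
        return "called with " + self.key

services = Services()
services.register(FakeService, "key1", "secret1")
# Run as a script, FakeService.__module__ is "__main__", so:
print(getattr(services, "__main__").same_name_method())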

How to make a class attribute exclusive to the super class

I have a master class for a planet:
class Planet:
    def __init__(self, name):
        self.name = name
        (...)

    def destroy(self):
        (...)
I also have a few classes that inherit from Planet, and I want to make one of them unable to be destroyed (i.e. not inherit the destroy function).
Example:
class Undestroyable(Planet):
    def __init__(self, name):
        super().__init__(name)
        (...)
    # Now it shouldn't have the destroy(self) function
So when this is run,
Undestroyable('This Planet').destroy()
it should produce an error like:
AttributeError: Undestroyable has no attribute 'destroy'
The mixin approach in other answers is nice, and probably better for most cases. But nevertheless, it spoils part of the fun - maybe obliging you to keep separate planet hierarchies, like having to live with two abstract classes, one the ancestor of "destroyable" planets and the other of "non-destroyable" ones.
First approach: descriptor decorator
But Python has a powerful mechanism, called the "descriptor protocol", which is used to retrieve any attribute from a class or instance - it is even used to ordinarily retrieve methods from instances - so, it is possible to customize the method retrieval in a way it checks if it "should belong" to that class, and raise attribute error otherwise.
The descriptor protocol mandates that whenever you try to get any attribute from an instance object in Python, Python will check if the attribute exists in that object's class, and if so, if the attribute itself has a method named __get__. If it has, __get__ is called (with the instance and class where it is defined as parameters) - and whatever it returns is the attribute. Python uses this to implement methods: functions in Python 3 have a __get__ method that when called, will return another callable object that, in turn, when called will insert the self parameter in a call to the original function.
So, it is possible to create a class whose __get__ method decides whether to return the function as a bound method or not, depending on the owner class being marked as indestructible - for example, by checking a specific flag such as undestroyable. This can be done by using a decorator to wrap the method with this descriptor functionality:
class Muteable:
    def __init__(self, flag_attr):
        self.flag_attr = flag_attr

    def __call__(self, func):
        """Called when the decorator is applied"""
        self.func = func
        return self

    def __get__(self, instance, owner):
        if instance and getattr(instance, self.flag_attr, False):
            raise AttributeError('Objects of type {0} have no {1} method'.format(
                instance.__class__.__name__, self.func.__name__))
        return self.func.__get__(instance, owner)

class Planet:
    def __init__(self, name=""):
        pass

    @Muteable("undestroyable")
    def destroy(self):
        print("Destroyed")

class BorgWorld(Planet):
    undestroyable = True
And on the interactive prompt:
In [110]: Planet().destroy()
Destroyed
In [111]: BorgWorld().destroy()
...
AttributeError: Objects of type BorgWorld have no destroy method
In [112]: BorgWorld().destroy
AttributeError: Objects of type BorgWorld have no destroy method
Perceive that, unlike simply overriding the method, this approach raises the error when the attribute is retrieved - and it even makes hasattr work:
In [113]: hasattr(BorgWorld(), "destroy")
Out[113]: False
It won't work, though, if one tries to retrieve the method from the class instead of from an instance - in that case the instance parameter to __get__ is None, so the descriptor can't consult the instance flag:
In [114]: BorgWorld.destroy
Out[114]: <function __main__.Planet.destroy>
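If class-level access matters, a variant of the decorator could also consult the owner argument - for a lookup like BorgWorld.destroy, owner receives the class the lookup went through, so checking the flag there covers class access too (this extension is my assumption, not part of the original answer):

class Muteable:
    def __init__(self, flag_attr):
        self.flag_attr = flag_attr

    def __call__(self, func):
        self.func = func
        return self

    def __get__(self, instance, owner):
        # Check the instance when there is one, otherwise the class
        # through which the attribute is being retrieved:
        target = instance if instance is not None else owner
        if getattr(target, self.flag_attr, False):
            raise AttributeError('{0} has no {1} method'.format(
                owner.__name__, self.func.__name__))
        return self.func.__get__(instance, owner)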
Second approach: __delattr__ on the metaclass:
While writing the above, it occurred to me that Python does have the __delattr__ special method. If the Planet class itself implemented __delattr__ and we tried to delete the destroy method on specific derived classes, it would not work: __delattr__ guards deletion of attributes on instances - and if you tried to del the destroy method on an instance, it would fail anyway, since the method lives on the class.
However, in Python the class itself is an instance - of its "metaclass", which is usually type. A proper __delattr__ on the metaclass of Planet makes the "disinheritance" of the destroy method possible, by issuing del UndestructiblePlanet.destroy after class creation.
Again, we use the descriptor protocol to get a proper "deleted method" on the subclass:
class Deleted:
    def __init__(self, cls, name):
        self.cls = cls.__name__
        self.name = name

    def __get__(self, instance, owner):
        raise AttributeError("Objects of type '{0}' have no '{1}' method".format(self.cls, self.name))

class Deletable(type):
    def __delattr__(cls, attr):
        print("deleting from", cls)
        setattr(cls, attr, Deleted(cls, attr))

class Planet(metaclass=Deletable):
    def __init__(self, name=""):
        pass

    def destroy(self):
        print("Destroyed")

class BorgWorld(Planet):
    pass

del BorgWorld.destroy
And with this method, even trying to retrieve or check for the method's existence on the class itself will work:
In [129]: BorgWorld.destroy
...
AttributeError: Objects of type 'BorgWorld' have no 'destroy' method
In [130]: hasattr(BorgWorld, "destroy")
Out[130]: False
Third approach: a metaclass with a custom __prepare__ method
Since metaclasses allow one to customize the object that holds the class namespace, it is possible to have an object that responds to a del statement within the class body by inserting a Deleted descriptor.
For the programmer using this metaclass it is almost the same thing, except that the del statement is now allowed inside the class body itself:
class Deleted:
    def __init__(self, name):
        self.name = name

    def __get__(self, instance, owner):
        raise AttributeError("No '{0}' method on class '{1}'".format(self.name, owner.__name__))

class Deletable(type):
    @classmethod
    def __prepare__(mcls, name, bases):
        class D(dict):
            def __delitem__(self, attr):
                self[attr] = Deleted(attr)
        return D()

class Planet(metaclass=Deletable):
    def destroy(self):
        print("destroyed")

class BorgPlanet(Planet):
    del destroy
(The Deleted descriptor is the correct way to mark a method as 'deleted'; with this approach, though, it can't know the class name at class-creation time.)
As a class decorator:
And given the "deleted" descriptor, one could simply inform the methods to be removed as a class decorator - there is no need for a metaclass in this case:
class Deleted:
    def __init__(self, cls, name):
        self.cls = cls.__name__
        self.name = name

    def __get__(self, instance, owner):
        raise AttributeError("Objects of type '{0}' have no '{1}' method".format(self.cls, self.name))

def mute(*methods):
    def decorator(cls):
        for method in methods:
            setattr(cls, method, Deleted(cls, method))
        return cls
    return decorator

class Planet:
    def destroy(self):
        print("destroyed")

@mute('destroy')
class BorgPlanet(Planet):
    pass
Modifying the __getattribute__ mechanism:
For the sake of completeness: what really makes Python reach methods and attributes on the superclass is what happens inside the __getattribute__ call. Object's version of __getattribute__ is where the algorithm with the priorities for "data descriptor, instance, class, chain of base classes, ..." attribute retrieval is encoded.
So, changing that is a single, easy point from which to get a "legitimate" attribute error, without needing the "non-existent" descriptor used in the previous approaches.
The problem is that object's __getattribute__ does not make use of type's version to search for the attribute in the class - if it did, implementing __getattribute__ on the metaclass alone would suffice. One has to override it on the instance to avoid instance lookup of a method, and on the metaclass to avoid class-level lookup. A metaclass can, of course, inject the needed code:
def blocker_getattribute(target, attr, attr_base):
    try:
        muted = attr_base.__getattribute__(target, '__muted__')
    except AttributeError:
        muted = []
    if attr in muted:
        raise AttributeError("object {} has no attribute '{}'".format(target, attr))
    return attr_base.__getattribute__(target, attr)

def instance_getattribute(self, attr):
    return blocker_getattribute(self, attr, object)

class M(type):
    def __init__(cls, name, bases, namespace):
        cls.__getattribute__ = instance_getattribute

    def __getattribute__(cls, attr):
        return blocker_getattribute(cls, attr, type)

class Planet(metaclass=M):
    def destroy(self):
        print("destroyed")

class BorgPlanet(Planet):
    __muted__ = ['destroy']  # or use a decorator to set this! :-)
If Undestroyable is a unique (or at least unusual) case, it's probably easiest to just redefine destroy():
class Undestroyable(Planet):
    # ...
    def destroy(self):
        cls_name = self.__class__.__name__
        raise AttributeError("%s has no attribute 'destroy'" % cls_name)
From the point of view of the user of the class, this will behave as though Undestroyable.destroy() doesn't exist … unless they go poking around with hasattr(Undestroyable, 'destroy'), which is always a possibility.
If it happens more often that you want subclasses to inherit some properties and not others, the mixin approach in chepner's answer is likely to be more maintainable. You can improve it further by making Destructible an abstract base class:
from abc import abstractmethod, ABCMeta

class Destructible(metaclass=ABCMeta):
    @abstractmethod
    def destroy(self):
        pass

class BasePlanet:
    # ...
    pass

class Planet(BasePlanet, Destructible):
    def destroy(self):
        # ...
        pass

class IndestructiblePlanet(BasePlanet):
    # ...
    pass
This has the advantage that if you try to instantiate the abstract class Destructible, you'll get an error pointing you at the problem:
>>> Destructible()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: Can't instantiate abstract class Destructible with abstract methods destroy
… similarly if you inherit from Destructible but forget to define destroy():
class InscrutablePlanet(BasePlanet, Destructible):
    pass

>>> InscrutablePlanet()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: Can't instantiate abstract class InscrutablePlanet with abstract methods destroy
Rather than remove an attribute that is inherited, only inherit destroy in the subclasses where it is applicable, via a mix-in class. This preserves the correct "is-a" semantics of inheritance.
class Destructible(object):
    def destroy(self):
        pass

class BasePlanet(object):
    ...

class Planet(BasePlanet, Destructible):
    ...

class IndestructiblePlanet(BasePlanet):  # Does *not* inherit from Destructible
    ...
You can provide suitable definitions for destroy in any of Destructible, Planet, or any class that inherits from Planet.
Metaclasses and descriptor protocols are fun, but perhaps overkill. Sometimes, for raw functionality, you can't beat good ole' __slots__: listing destroy in __slots__ creates an empty slot descriptor on the subclass that shadows the inherited method, so accessing it raises AttributeError until something is assigned to it.
class Planet(object):
    def __init__(self, name):
        self.name = name

    def destroy(self):
        print("Boom! %s is toast!\n" % self.name)

class Undestroyable(Planet):
    __slots__ = ['destroy']

    def __init__(self, name):
        super().__init__(name)

print()
x = Planet('Pluto')           # Small, easy to destroy
y = Undestroyable('Jupiter')  # Too big to fail

x.destroy()
y.destroy()
Boom! Pluto is toast!

Traceback (most recent call last):
  File "planets.py", line 95, in <module>
    y.destroy()
AttributeError: destroy
You cannot inherit only part of a class. It's all or nothing.
What you can do is put the destroy function one level down in the hierarchy: keep a Planet class without the destroy function, and make a DestroyablePlanet class that adds the destroy function, which all the destroyable planets then use.
Or you can set a flag in the constructor of the Planet class that determines whether destroy will succeed, and check it inside the destroy function, as sketched below.
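A minimal sketch of the flag variant (names are illustrative):

class Planet:
    def __init__(self, name, destroyable=True):
        self.name = name
        self._destroyable = destroyable  # checked by destroy()

    def destroy(self):
        if not self._destroyable:
            raise RuntimeError("%s cannot be destroyed" % self.name)
        print("Boom! %s is gone." % self.name)

class Undestroyable(Planet):
    def __init__(self, name):
        super().__init__(name, destroyable=False)

Planet("Pluto").destroy()           # Boom! Pluto is gone.
Undestroyable("Jupiter").destroy()  # raises RuntimeError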

Python metaclasses: Why isn't __setattr__ called for attributes set during class definition?

I have the following Python code:
class FooMeta(type):
    def __setattr__(self, name, value):
        print name, value
        return super(FooMeta, self).__setattr__(name, value)

class Foo(object):
    __metaclass__ = FooMeta
    FOO = 123

    def a(self):
        pass
I would have expected __setattr__ of the metaclass to be called for both FOO and a. However, it is not called at all. When I assign something to Foo.whatever after the class has been defined, the method is called.
What's the reason for this behaviour, and is there a way to intercept the assignments that happen during the creation of the class? Using attrs in __new__ won't work, since I'd like to check whether a method is being redefined.
A class block is roughly syntactic sugar for building a dictionary, and then invoking a metaclass to build the class object.
This:
class Foo(object):
    __metaclass__ = FooMeta
    FOO = 123

    def a(self):
        pass
Comes out pretty much as if you'd written:
d = {}
d['__metaclass__'] = FooMeta
d['FOO'] = 123

def a(self):
    pass
d['a'] = a

Foo = d.get('__metaclass__', type)('Foo', (object,), d)
Only without the namespace pollution (and in reality there's also a search through all the bases to determine the metaclass, or whether there's a metaclass conflict, but I'm ignoring that here).
The metaclass' __setattr__ can control what happens when you try to set an attribute on one of its instances (the class object), but inside the class block you're not doing that, you're inserting into a dictionary object, so the dict class controls what's going on, not your metaclass. So you're out of luck.
Unless you're using Python 3.x! In Python 3.x you can define a __prepare__ classmethod (or staticmethod) on a metaclass, which controls what object is used to accumulate attributes set within a class block before they're passed to the metaclass constructor. The default __prepare__ simply returns a normal dictionary, but you could build a custom dict-like class that doesn't allow keys to be redefined, and use that to accumulate your attributes:
from collections.abc import MutableMapping

class SingleAssignDict(MutableMapping):
    def __init__(self, *args, **kwargs):
        self._d = dict(*args, **kwargs)

    def __getitem__(self, key):
        return self._d[key]

    def __setitem__(self, key, value):
        if key in self._d:
            raise ValueError(
                'Key {!r} already exists in SingleAssignDict'.format(key)
            )
        else:
            self._d[key] = value

    def __delitem__(self, key):
        del self._d[key]

    def __iter__(self):
        return iter(self._d)

    def __len__(self):
        return len(self._d)

    def __contains__(self, key):
        return key in self._d

    def __repr__(self):
        return '{}({!r})'.format(type(self).__name__, self._d)

class RedefBlocker(type):
    @classmethod
    def __prepare__(metacls, name, bases, **kwargs):
        return SingleAssignDict()

    def __new__(metacls, name, bases, sad):
        return super().__new__(metacls, name, bases, dict(sad))

class Okay(metaclass=RedefBlocker):
    a = 1
    b = 2

class Boom(metaclass=RedefBlocker):
    a = 1
    b = 2
    a = 3
Running this gives me:
Traceback (most recent call last):
  File "/tmp/redef.py", line 50, in <module>
    class Boom(metaclass=RedefBlocker):
  File "/tmp/redef.py", line 53, in Boom
    a = 3
  File "/tmp/redef.py", line 15, in __setitem__
    'Key {!r} already exists in SingleAssignDict'.format(key)
ValueError: Key 'a' already exists in SingleAssignDict
Some notes:
1. __prepare__ has to be a classmethod or staticmethod, because it's being called before the metaclass' instance (your class) exists.
2. type still needs its third parameter to be a real dict, so you have to have a __new__ method that converts the SingleAssignDict to a normal one.
3. I could have subclassed dict, which would probably have avoided (2), but I really dislike doing that because of how the non-basic methods like update don't respect your overrides of the basic methods like __setitem__. So I prefer to subclass collections.abc.MutableMapping and wrap a dictionary.
4. The actual Okay.__dict__ object is a normal dictionary, because it was set by type, and type is finicky about the kind of dictionary it wants. This means that overwriting class attributes after class creation does not raise an exception. You can overwrite the __dict__ attribute after the superclass call in __new__ if you want to maintain the no-overwriting forced by the class object's dictionary.
Sadly this technique is unavailable in Python 2.x (I checked). The __prepare__ method isn't invoked, which makes sense: in Python 2.x the metaclass is determined by the __metaclass__ magic attribute rather than by a special keyword in the class block, which means the dict object used to accumulate attributes for the class block already exists by the time the metaclass is known.
Compare Python 2:
class Foo(object):
    __metaclass__ = FooMeta
    FOO = 123

    def a(self):
        pass
Being roughly equivalent to:
d = {}
d['__metaclass__'] = FooMeta
d['FOO'] = 123

def a(self):
    pass
d['a'] = a

Foo = d.get('__metaclass__', type)('Foo', (object,), d)
Where the metaclass to invoke is determined from the dictionary, versus Python 3:
class Foo(metaclass=FooMeta):
    FOO = 123

    def a(self):
        pass
Being roughly equivalent to:
d = FooMeta.__prepare__('Foo', ())
d['FOO'] = 123

def a(self):
    pass
d['a'] = a

Foo = FooMeta('Foo', (), d)
Where the dictionary to use is determined from the metaclass.
There are no assignments happening during the creation of the class. Or rather: they are happening, but not in the context you think. All class attributes are collected from the class body's scope and passed to the metaclass's __new__ as the last argument:
class FooMeta(type):
    def __new__(self, name, bases, attrs):
        print attrs
        return type.__new__(self, name, bases, attrs)

class Foo(object):
    __metaclass__ = FooMeta
    FOO = 123
Reason: when the code in the class body executes, there's no class yet - which means there's no opportunity for the metaclass to intercept anything.
Class attributes are passed to the metaclass as a single dictionary, and my hypothesis is that this is used to update the __dict__ attribute of the class all at once, e.g. something like cls.__dict__.update(dct), rather than doing setattr() on each item. More to the point: it's all handled in C-land and simply wasn't written to call a custom __setattr__().
It's easy enough to do whatever you want to the attributes of the class in your metaclass's __init__() method, since you're passed the class namespace as a dict, so just do that.
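A sketch of that (Python 3 syntax; this detects overrides of inherited methods, while catching a redefinition within one class body still needs the __prepare__ approach shown above):

class CheckerMeta(type):
    def __init__(cls, name, bases, attrs):
        super().__init__(name, bases, attrs)
        # The entire namespace arrives here as one dict - inspect at will.
        for key, value in attrs.items():
            if callable(value) and any(hasattr(base, key) for base in bases):
                print("{} overrides inherited method {}".format(name, key))

class Base(metaclass=CheckerMeta):
    def ping(self):
        pass

class Child(Base):  # prints: Child overrides inherited method ping
    def ping(self):
        pass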
During class creation, your namespace is evaluated to a dict and passed as an argument to the metaclass, together with the class name and the base classes. Because of that, assigning a class attribute inside the class definition doesn't work the way you expect: it doesn't create an empty class and assign attributes one by one. You also can't have duplicate keys in a dict, so during class creation attributes are already deduplicated. Only by setting an attribute after the class definition can you trigger your custom __setattr__.
Because the namespace is a dict, there's no way for you to check for duplicated methods, as suggested by your other question. The only practical way to do that is to parse the source code.

Pickling a staticmethod in Python

I've been trying to pickle an object which holds references to static class methods.
Pickle fails (for example, on module.MyClass.foo), stating that it cannot be pickled, as module.foo does not exist.
I have come up with the following solution, using a wrapper object to locate the function upon invocation, saving the container class and function name:
class PicklableStaticMethod(object):
    """Picklable version of a static method.

    Typical usage:

    class MyClass:
        @staticmethod
        def doit():
            print "done"

    # This cannot be pickled:
    non_picklable = MyClass.doit
    # This can be pickled:
    picklable = PicklableStaticMethod(MyClass.doit, MyClass)
    """
    def __init__(self, func, parent_class):
        self.func_name = func.func_name
        self.parent_class = parent_class

    def __call__(self, *args, **kwargs):
        func = getattr(self.parent_class, self.func_name)
        return func(*args, **kwargs)
I am wondering though, is there a better - more standard way - to pickle such an object?
I do not want to make changes to the global pickle process (using copy_reg for example), but the following pattern would be great:
class MyClass(object):
    @picklable_staticmethod
    def foo():
        print "done."
My attempts at this were unsuccessful, specifically because I could not extract the owner class from the foo function. I was even willing to settle for explicit specification (such as @picklable_staticmethod(MyClass)), but I don't know of any way to refer to the MyClass class right where it's being defined.
Any ideas would be great!
Yonatan
This seems to work.
class PickleableStaticMethod(object):
    def __init__(self, fn, cls=None):
        self.cls = cls
        self.fn = fn

    def __call__(self, *args, **kwargs):
        return self.fn(*args, **kwargs)

    def __get__(self, obj, cls):
        return PickleableStaticMethod(self.fn, cls)

    def __getstate__(self):
        return (self.cls, self.fn.__name__)

    def __setstate__(self, state):
        self.cls, name = state
        self.fn = getattr(self.cls, name).fn
The trick is to snag the class when the static method is gotten from it.
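A usage sketch (Python 3 syntax) showing the mechanics: accessing the attribute through the class triggers __get__, which records the class before pickling. This assumes both classes are defined at module level so pickle can find them by name:

import pickle

class MyClass(object):
    @PickleableStaticMethod
    def doit():
        return "done"

m = MyClass.doit            # __get__ runs here, capturing MyClass
s = pickle.dumps(m)         # state is just (MyClass, "doit")
restored = pickle.loads(s)
print(restored())           # -> done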
Alternatives: You could use metaclassing to give all your static methods a .__parentclass__ attribute. Then you could subclass Pickler and give each subclass instance its own .dispatch table which you can then modify without affecting the global dispatch table (Pickler.dispatch). Pickling, unpickling, and calling the method might then be a little faster.
EDIT: modified after Jason's comment.
I think Python is correct in not letting you pickle a staticmethod object - just as it is impossible to pickle instance or class methods! Such an object would make very little sense outside of its context:
Check this: Descriptor Tutorial
import pickle

def dosomething(a, b):
    print a, b

class MyClass(object):
    dosomething = staticmethod(dosomething)

o = MyClass()
pickled = pickle.dumps(dosomething)
This works, and that's what should be done: define a function, pickle it, and use that function as a staticmethod in a certain class.
If you've got a use case for your need, please write it down and I'll be glad to discuss it.
