unitTest a python 3 metaclass - python

I have a metaclass that sets a class property my_new_property when it creates the class. The file is named my_meta and the code is this:
def remote_function():
    # Get some data from a request to another site
    return 'remote_response'

class MyMeta(type):
    def __new__(cls, *args, **kwargs):
        print("It is in")
        obj = super().__new__(cls, *args, **kwargs)
        new_value = remote_function()
        setattr(obj, 'my_new_property', new_value)
        return obj
The functionality to set the property works fine. However, when writing the test file tests.py with only one line of code:
from my_meta import MyMeta
The meta code is executed. As a consequence, it executes the real method remote_function.
The question is... as the meta code is executed only by using the import from the test file, how could I mock the method remote_function?

Importing the file as you show us won't trigger execution of the metaclass code.
However, importing any file (including the one where the metaclass is) where there is a class that makes use of this metaclass will run the code in the metaclass's __new__ method - as parsing a class body defined with the class statement does just that: it calls the metaclass to create a new class instance.
So, the recommendation is: do not have your metaclass's __new__ or __init__ methods trigger side effects, like accessing remote resources, if that can't be done in a seamless and innocuous way. Not only testing, but importing modules of your app in a Python shell will also trigger the behavior.
You could have a method in the metaclass to initialize the remote value, and when you are about to actually use it, you'd explicitly call such a "remote_init" method - like in:
class MyMeta(type):
    def __new__(cls, *args, **kwargs):
        print("It is in")
        obj = super().__new__(cls, *args, **kwargs)
        # note: no remote call here anymore - it is deferred to remote_init
        return obj

    def remote_init(cls):
        if hasattr(cls, "my_new_property"):
            return
        cls.my_new_property = remote_function()
The remote_init method, being placed in the metaclass, will behave just like a class method for the instantiated classes, but won't be visible (for dir or attribute retrieval) from the class instances.
This is the safest thing to do.
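A usage sketch (the MyClass name is just for illustration): nothing remote happens when the class is created at import time; the remote call is made only when remote_init is invoked explicitly:

class MyClass(metaclass=MyMeta):
    pass

# Later, at the point where the remote value is actually needed:
MyClass.remote_init()
print(MyClass.my_new_property)   # 'remote_response'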
If you want to avoid the explicit step, which is understandable, you could use a setting in a configuration file, and a test inside remote_function on whether to trigger the actual networking code or just return a local, dummy value. You then make the configuration differ for testing/staging/production.
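A minimal sketch of that idea, assuming a hypothetical settings module with a USE_REMOTE flag (neither exists in the question's code):

import settings

def remote_function():
    if not settings.USE_REMOTE:
        # testing/staging configuration: no network access
        return 'dummy_response'
    # production configuration: do the real request to the other site
    return 'remote_response'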
And, finally, you could just separate remote_function into another module, import that first, patch it out with unittest.mock.patch, and then import the module containing the metaclass - when it runs and calls the function, it will be just the patched version. This will work for your tests, but won't fix the problem of triggering side effects on other occasions (like in other tests that load this module).
Of course, for that to work you have to import the module containing the metaclass and any classes defined with it inside your test function, in a region where mock.patch is active, not import it at the top of the file. There is no problem in importing things inside test methods to have control over the importing process itself.
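A sketch of what the test could look like, with the metaclass exactly as shown in the question, and assuming the classes that use MyMeta live in a hypothetical module my_classes that has not been imported before the patch is applied:

import importlib
from unittest import mock

def test_my_new_property():
    with mock.patch('my_meta.remote_function', return_value='fake_response'):
        # my_classes is assumed to define, say, class MyClass(metaclass=MyMeta)
        my_classes = importlib.import_module('my_classes')
        assert my_classes.MyClass.my_new_property == 'fake_response'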

Related

Does the final decorator help in preventing method overriding?

I am trying to create a final method in my class: I want it to be impossible for any subclass to override it, just like a final class created using the final decorator cannot be inherited.
from final_class import final

class Dummy:
    def show(self):
        print("show is running from dummy")

    @final
    def display(self):
        print("display from dummy")

class Demo(Dummy):
    def show(self):
        print("show from demo")

    def display(self):
        print("display from demo")

d = Demo()
d.display()
I think we should get an error when accessing the display method from Demo, but when I run the program it gives "display from demo".
So what am I missing? I have checked "final annotation and decorators in python 3.8", but it talks about type checking in the typing package, while I was trying it with the final_class package.
As seen in the comments, these are two different things: the 3rd-party final_class.final is a class decorator that will prevent, at runtime, a class from being further inherited, while typing.final, which ships with Python, is intended to decorate both classes and methods, but has no enforcing behavior during program execution - it will, instead, make any compliant static analysis tool raise an error in the type-checking stage.
It is, due to Python's flexibility and dynamism, possible to create a final decorator for methods that is enforced at runtime: i.e. whenever a subclass is created overriding a method marked as final somewhere in the inheritance chain, a RuntimeError, or another custom error, can be raised.
The idea is that whenever a new class is created, both the methods on the metaclass and the __init_subclass__ method of the bases are called - so, if one wants to create a custom metaclass or custom base class to be used along with such a @final decorator, it should be something more or less straightforward.
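For example, a minimal sketch of the base-class variant (the final, FinalEnforcer and _is_final names are illustrative, not from any library):

def final(method):
    method._is_final = True
    return method

class FinalEnforcer:
    def __init_subclass__(cls, **kw):
        super().__init_subclass__(**kw)
        # names marked as final anywhere above this subclass in the MRO
        finals = {
            name
            for base in cls.__mro__[1:]
            for name, attr in vars(base).items()
            if getattr(attr, "_is_final", False)
        }
        # redefining any of them in the new subclass is an error
        for name in vars(cls):
            if name in finals:
                raise RuntimeError(
                    f"{name!r} is final and cannot be overridden in {cls.__name__}"
                )

class Dummy(FinalEnforcer):
    @final
    def display(self):
        print("display from dummy")

# Defining this subclass raises RuntimeError at class-creation time:
# class Demo(Dummy):
#     def display(self): ...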
What would be less straightforward is a decorator that works regardless of a specific base class or custom metaclass - and this also can be done: by injecting into the class being constructed an __init_subclass__ method which will check for violations of the final constraint.
The complicated part is to co-exist with any pre-existing __init_subclass__ methods, which also need to be called, either on the same class or in any superclass, as well as to emulate the workings of super(), since we are creating a method outside the class body. The decorator code can inspect the context from which it's called and inject an __init_subclass__ there, taking some care:
import sys

def final(method):
    f = sys._getframe().f_back
    _init_subclass_meth = "super"

    def __init_subclass__(cls, *args, **kw):
        # all the outer-scope names used here are read-only, so nonlocal is not needed.
        # In a normal __init_subclass__, one can know about the class in which
        # a method is declared, and call super(), via the `__class__`
        # magic variable. But that won't work for a method defined
        # outside the class and injected into it.
        # The line below retrieves the equivalent of __class__:
        current_class = next(
            supercls for supercls in cls.__mro__
            if getattr(supercls.__dict__.get("__init_subclass__", None), "__func__", None) is __init_subclass__
        )
        for meth_name in cls.__dict__:
            if meth_name in final_methods:
                raise RuntimeError(
                    f"Final method {meth_name} is redeclared in subclass "
                    f"{cls.__name__} from {current_class.__name__}"
                )
        if _init_subclass_meth == "wrap":
            return _original_init_subclass(cls, *args, **kw)
        # chain to the next __init_subclass__ in the MRO, binding to cls so the
        # lookup continues past current_class:
        return super(current_class, cls).__init_subclass__(*args, **kw)

    __init_subclass__._final_mark = True
    if "__init_subclass__" in f.f_locals and not getattr(f.f_locals["__init_subclass__"], "_final_mark", False):
        _init_subclass_meth = "wrap"
        _original_init_subclass = f.f_locals["__init_subclass__"]
    # locals assignment: this works here because the caller context is a class
    # body, inside which `f_locals` usually refers to a plain dict (unless a
    # custom metaclass changed it). In an ordinary frame, representing a plain
    # function or method in execution, this assignment would normally have no effect:
    f.f_locals["__init_subclass__"] = __init_subclass__
    final_methods = f.f_locals.setdefault("_final_methods", set())
    final_methods.add(method.__name__)
    return method
class A:
    @final
    def b(self):
        print("final b")
And this will raise an error:
class B(A):
    def b(self):
        # RuntimeError expected
        ...

Python Multiprocessing Can't pickle local object

I have read a little about multiprocessing and pickling problems. I have also read that there are some solutions, but I don't really know how they can help in my situation.
I am building a Test Runner where I use multiprocessing to call modified test class methods - modified by a metaclass, so I can have setUp and tearDown methods run before and after each test.
Here is my Parent Metaclass:
from typing import Tuple

class MetaTestCase(type):
    def __new__(cls, name: str, bases: Tuple, attrs: dict):
        def replaced_func(fn):
            def new_test(*args, **kwargs):
                args[0].before()
                result = fn(*args, **kwargs)
                args[0].after()
                return result
            return new_test

        # If a method is found and its name starts with 'test', replace it
        for i in attrs:
            if callable(attrs[i]) and attrs[i].__name__.startswith('test'):
                attrs[i] = replaced_func(attrs[i])
        return super(MetaTestCase, cls).__new__(cls, name, bases, attrs)
I am using this base class, which carries the metaclass:
class TestCase(metaclass=MetaTestCase):
    def before(self) -> None:
        """Overridable, execute before test part."""
        pass

    def after(self) -> None:
        """Overridable, execute after test part."""
        pass
And then I use this in my TestSuite Class:
class TestApi(TestCase):
    def before(self):
        print('before')

    def after(self):
        print('after')

    def test_api_one(self):
        print('test')
Sadly when I try to execute that test with multiprocessing.Process it fails on
AttributeError: Can't pickle local object 'MetaTestCase.__new__.<locals>.replaced_func.<locals>.new_test'
Here is how I create and execute Process:
module = importlib.import_module('tests.api.test_api') # Finding and importing module
object = getattr(module, 'TestApi') # Getting Class from module
process = Process(target=getattr(object, 'test_api_one')) # Calling class method
process.start()
process.join()
I tried to use pathos.helpers.mp.Process; it passes the pickling phase, I guess, but has some problems with a tuple that I don't understand:
Process Process-1:
Traceback (most recent call last):
result = fn(*args, **kwargs)
IndexError: tuple index out of range
Is there any simple solution for that, so I can pickle that object and run the test successfully along with my modified test class?
As for your original question of why you are getting the pickling error, this answer summarizes the problem and offers solutions (similar to those already provided here).
Now as to why you are receiving the IndexError, this is because you are not passing an instance of the class to the function (the self argument). A quick fix would be to do this (also, please don't use object as a variable name):
module = importlib.import_module('tests.api.test_api') # Finding and importing module
obj = getattr(module, 'TestApi')
test_api = obj() # Instantiate!
# Pass the instance explicitly! Alternatively, you can also do target=test_api.test_api_one
process = Process(target=getattr(obj, 'test_api_one'), args=(test_api, ))
process.start()
process.join()
Of course, you can also opt to make the methods of the class classmethods, and pass the target function as obj.method_name.
Also, as a quick side note, the usage of a metaclass for the use case shown in the example seems like overkill. Are you sure you can't do what you want with class decorators instead (which might also be compatible with the standard library's multiprocessing)?
https://docs.python.org/3/library/pickle.html#what-can-be-pickled-and-unpickled
"The following types can be pickled... functions (built-in and user-defined) accessible from the top level of a module (using def, not lambda);"
It sounds like you cannot pickle locally defined functions. This makes sense based on other pickle behavior I've seen. Essentially, pickle just stores instructions for the Python interpreter on how to find the function definition. That usually means a module name and a function name (for example), so the multiprocessing Process can import the correct function.
There's no way for another process to import your replaced_func function because it's only locally defined.
You could try defining it outside of the metaclass, which would make it importable by other processes.
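A hedged sketch of that idea (the run_test helper is an assumption, not from the question): keep the function actually handed to Process at module top level so pickle can locate it by module and name, and pass the instance and method name as arguments:

# runner.py (illustrative module name)
from multiprocessing import Process

def run_test(instance, method_name):
    # top-level function: picklable, importable by the child process
    instance.before()
    result = getattr(instance, method_name)()
    instance.after()
    return result

# usage:
# process = Process(target=run_test, args=(TestApi(), 'test_api_one'))
# process.start()
# process.join()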

Is it safe to use autocall __init__ in this way?

I wanted to force children to call the parent constructor and found this answer, which seems to do the job fine. However, I'm a bit unsure whether what I'm doing is safe. I have this:
# Copied from answer linked above
class meta(type):
    def __init__(cls, name, bases, dct):
        def auto__call__init__(self, *a, **kw):
            for base in cls.__bases__:
                base.__init__(self, *a, **kw)
            cls.__init__child_(self, *a, **kw)
        cls.__init__child_ = cls.__init__
        cls.__init__ = auto__call__init__
# My code
import unittest

class TestBase(unittest.TestCase, metaclass=meta):
    def __init__(self):
        self.testvar = "Hello, World!"

class A(TestBase):
    def foo(self):
        # Inherited from TestBase
        print(self.testvar)
        # Inherited from unittest.TestCase
        self.assertEqual("Hello, World!", self.testvar)

A().foo()
This prints "Hello, World!" as expected, and I'm able to use assertEqual from unittest.TestCase, but I have a feeling that I might be on very thin ice here. Is this safe to do? Can it collide with unittest in any way?
In the original post, TestBase did not inherit from unittest.TestCase. That's the difference.
Here nothing's happening anyway: you only need to delegate back to the parent if you actually override the method; if you don't, then the original method is not shadowed and will be called normally.
I'd strongly recommend not doing this though, especially for unittest, as __init__ is not actually of any use there: unittest only calls __init__ to initialise test cases based on the method names it finds; the initialisation hook you want is setUp (and tearDown, though addCleanup is usually safer and more reliable). You can define helper methods in test classes, but you should not instantiate test cases yourself.
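For completeness, a minimal sketch of the setUp-based approach, without any metaclass:

import unittest

class TestBase(unittest.TestCase):
    def setUp(self):
        # runs before every test method; no __init__ override needed
        self.testvar = "Hello, World!"

class TestA(TestBase):
    def test_foo(self):
        self.assertEqual("Hello, World!", self.testvar)

if __name__ == "__main__":
    unittest.main()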
Plus, unittest is not great: it's quite verbose, the API surface is large, and the class-based design hampers modularity (somewhat unexpectedly maybe). I'd strongly recommend taking a gander at pytest.
unittest apart (which meddles with some of the rules for auto-running classes itself), this code makes a mess.
It will run the __init__ methods of the classes several times over, in a hard-to-determine order.
Here is what happens with your code when I add an intermediate class with an actual collaborative __init__ calling super() (and take unittest out for sanity):
class meta(type):
    def __init__(cls, name, bases, dct):
        def auto__call__init__(self, *a, **kw):
            for base in cls.__bases__:
                base.__init__(self, *a, **kw)
            cls.__init__child_(self, *a, **kw)
        cls.__init__child_ = cls.__init__
        cls.__init__ = auto__call__init__

# My code
class TestBase(metaclass=meta):
    def __init__(self):
        print("base")
        self.testvar = "hello"

class Middle(TestBase):
    def __init__(self):
        print("middle")
        super().__init__()

class A(Middle):
    def foo(self):
        # Inherited from TestBase
        print(self.testvar)
        assert self.testvar == "hello"

A().foo()
Will print:
base
middle
base
base
middle
base
hello
Doing this correctly would require the metaclass to:
Check whether the class being created has an __init__ of its own, and look at each __init__ in the __mro__ (not __bases__ - those are only the immediate ancestors). For each __init__ method, if it is not already wrapped, wrap it (the same as applying a decorator and replacing the original method). The wrapper sets a special flag on the instance once it has run, skips running if it has already been called for that instance, and (the same wrapper) calls the next __init__ in the MRO at its exit.
This could ensure all __init__ methods in the chain are run, even if one or more of them do not feature a super() call - but there are downsides: the superclasses' __init__ would only be called at the end of the child's run, and the __init__ methods would become special in that they could not be manually called as ordinary methods (although that is rarely needed).
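A rough sketch of that approach, assuming every class in the hierarchy (below object) is created with this metaclass - one possible realization, not a recommendation:

import functools

class AutoInitMeta(type):
    def __init__(cls, name, bases, dct):
        super().__init__(name, bases, dct)
        init = dct.get("__init__")
        if init is not None and not getattr(init, "_auto_wrapped", False):
            cls.__init__ = AutoInitMeta._wrap(init, cls)

    @staticmethod
    def _wrap(init, owner):
        @functools.wraps(init)
        def wrapper(self, *args, **kwargs):
            ran = self.__dict__.setdefault("_ran_inits", set())
            if owner in ran:
                return  # this class's __init__ already ran for this instance
            ran.add(owner)
            init(self, *args, **kwargs)
            # at exit, chain to the next __init__ up the MRO,
            # so it runs even without an explicit super() call
            mro = type(self).__mro__
            for nxt in mro[mro.index(owner) + 1:]:
                nxt_init = vars(nxt).get("__init__")
                if nxt_init is not None and nxt is not object:
                    nxt_init(self, *args, **kwargs)
                    break
        wrapper._auto_wrapped = True
        return wrapper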

Singleton with __new__ returns "Was __classcell__ propagated to type.__new__?" using Python 3.8

Trying to port a singleton that uses a metaclass from Python 2 to Python 3, __new__ returns:
[ ERROR ] Error in file Importing test library 'C:\Users\TestTabs.py' failed: __class__ not set defining 'BrowserDriver' as <class 'BrowserDriver.BrowserDriver'>. Was __classcell__ propagated to type.__new__?
CODE:
class Singleton(type):
    _instance = None

    def __new__(cls, *args, **kwargs):
        print('Newtest')
        if cls._instance is None:
            Singleton._instance = type.__new__(cls, *args, **kwargs)
        return Singleton._instance
This one is called:
class BrowserDriver(metaclass=Singleton):
First: you should not be using a metaclass for having a singleton.
Second: your "singleton" code is broken, even if it appeared to work before:
By luck it crossed the way of a new mechanism used in class creation, which requires type.__new__ to receive the "class cell" when creating a new class, and this was detected.
So, the mysterious __class__ cell will exist if any method in your class uses a call to super(). Python will create a rather magic __class__ variable that receives a reference to the class being created when the class body execution ends. At that point, the metaclass's __new__ is called. When the call to metaclass.__new__ returns, the Python runtime expects that the __class__ magic variable for that class is now "filled in" with a reference to the class itself.
This is for a working class creation - now we come to the bug in your code:
I don't know where you got this "singleton metaclass code" from, but it is broken: even if it "worked", it would create ONE SINGLE CLASS for all classes using this metaclass - and not, as was probably desired, allow one single instance of each class using this metaclass. (As the new class body does not have its __class__ attribute set, you get the error you described under Python 3.8.)
In other words: any class past the first one using this metaclass is simply ignored, and not used by the program at all.
The (overkill) idea of using a metaclass to create singleton-enforcing classes is, yes, to allow a single instance of a class, but the cache for the single instance should be set on the class itself, not on the metaclass - or in an attribute of the metaclass that holds one instance for each class created, like a dictionary would. A simple class attribute of the metaclass, as featured in this code, just makes classes past the first be ignored.
So, to fix that using metaclasses, the cache logic should be in the metaclass's __call__ method, not in its __new__ method.
This is the expressly not recommended, but working, metaclass to enforce singletons:
class SingletonEnforcingmeta(type):
    def __call__(cls, *args, **kw):
        # check the "__dict__" entry instead of using "hasattr" - this
        # allows inheritance and one instance per subclass
        if "_instance" not in cls.__dict__:
            cls._instance = super().__call__(*args, **kw)
        return cls._instance
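A usage sketch (the class name and the url attribute are illustrative): later instantiation attempts return the cached instance, and __init__ is not run again because super().__call__ is skipped:

class BrowserDriver(metaclass=SingletonEnforcingmeta):
    def __init__(self, url="about:blank"):
        self.url = url

a = BrowserDriver("https://example.com")
b = BrowserDriver()   # returns the already-created instance
assert a is b and b.url == "https://example.com"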
But, as I wrote above, it is overkill to have a metaclass if you just want a singleton - the instantiation mechanism in __new__ itself is enough for creating a single-instance cache.
But before doing that, one should think: is a "singleton-enforcing class" really necessary? This is Python - the flexible structure and "consenting adults" mindset of the language mean you can simply create an instance of your class in the same namespace where you created the class itself, and just use that single instance from that point on.
Actually, if your single instance has the same name as the class, one can't even create a new instance by accident, as the class itself will be reachable only indirectly. That is:
The nice thing to do: if you need a singleton, create a singleton, not a "singleton-enforcing class":
class BrowserDriver(...):
    # normal code for the class here
    ...

BrowserDriver = BrowserDriver()
That is all there is to it. All you have now is a single instance of the BrowserDriver class that can be used from any place in your code.
Now, if you really need a singleton-enforcing class, one that upon trying to create any instance beyond the first will silently not raise an error and just return the first instance ever created, then the code you need in the __new__ method of the class is like the code you were trying to use as the metaclass's __new__. It records the single instance in the class itself.
If really needed: a singleton-enforcing class using __new__:
class SingletonBase:
    def __new__(cls, *args, **kw):
        if "_instance" not in cls.__dict__:
            cls._instance = super().__new__(cls, *args, **kw)
        return cls._instance
And then just inherit your "I must be a singleton" classes from this base.
Note, however, that __init__ will be called on the single instance at each instantiation attempt - so these singletons should use __new__ (and call super() as appropriate) instead of having an __init__ method, or have an idempotent __init__ (i.e. one that can be called more than once, but where the extra calls have no effect).
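A small sketch of the idempotent __init__ variant, using the SingletonBase above (the Config name is just for illustration):

class Config(SingletonBase):
    def __init__(self):
        # guard makes repeated __init__ calls harmless
        if not hasattr(self, "values"):
            self.values = {}

a = Config()
a.values["key"] = 1
b = Config()   # same instance; __init__ runs again but changes nothing
assert a is b and a.values == {"key": 1}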

Calling on parent class when child class is in another module

I am trying to figure out how to have a child class reside in another module. Currently it is more convenient for me to store the parent and child classes in different modules due to their size. I need the super method, since I want to inherit not just all the functions, but the variables in self as well. My current solution is as follows:
Parent Module (parent.py):
class A:
    def __init__(self, *args, **kwargs):
        super(A, self).__init__(*args, **kwargs)
Child Module(child.py):
from parent import A

class B(A):
    def __init__(self, *args, **kwargs):
        super(B, self).__init__(*args, **kwargs)

B()
When I run the child module I get the following error.
TypeError: super(type, obj): obj must be an instance or subtype of type
I understand that this is due to the module reloading and thus causing data to be lost; however, I am not sure if there is a workaround.
First, on your code:
It's not necessary to always call the parent constructor; in particular, calling object's constructor as you do in parent.A is not needed.
In Python 3, you can use the much simpler super().__init__ form of the call for single inheritance.
The import should usually be relative: from .parent import A
Now, to your actual problem:
When you reload parent in this case, you essentially generate a new class object for A that is not identical to the one that your compiled B knows of. You can check this by comparing id(B.__base__) to id(A) after the reload. This is not a problem for the super() form, as that doesn't use the name A explicitly (which points to the new class) but instead uses the actual base class. So it will construct fine, but with the "old" A implementation.
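For reference, a sketch of child.py using the zero-argument super() form suggested above:

from parent import A   # or, inside a package: from .parent import A

class B(A):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

B()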
P.S.:
It is essential that your question includes information on what you are actually trying to do, in this case reloading a module, which is not a "standard" operation in Python (that's why it's so cumbersome to do).
