Why does this test fail? - python

After adding a new unit test I started to get failures in an unrelated test that runs after the new test. I could not understand why.
I have simplified the case to the code below, and I still do not understand what is going on. I am surprised that commenting out seemingly unrelated lines of code affects the result: removing the call to isinstance in Block.__init__ changes the result of isinstance(blk, AddonDefault) in test_addons.
import abc

class Addon:
    pass

class AddonDefault(Addon, metaclass=abc.ABCMeta):
    pass

class Block:
    def __init__(self):
        isinstance(self, CBlock)

class CBlock(Block, metaclass=abc.ABCMeta):
    def __init_subclass__(cls, *args, **kwargs):
        if issubclass(cls, Addon):
            raise TypeError("Do not mix Addons and CBlocks!")
        super().__init_subclass__(*args, **kwargs)

class FBlock(CBlock):
    pass

def test_addons():
    try:
        class CBlockWithAddon(CBlock, AddonDefault):
            pass
    except TypeError:
        pass

    blk = FBlock()
    assert not isinstance(blk, AddonDefault), "TEST FAIL"
    print("OK")

test_addons()
When I run python3 test.py I get the TEST FAIL assertion error. But FBlock is derived from CBlock, which is derived from Block. How can it be an instance of AddonDefault?
UPDATE: I'd like to emphasize that the only purpose of the posted code is to demonstrate the behaviour I cannot understand. It was created by reducing a much larger program as much as I was able to. During this process any logic it had before was lost, so please take it as it is and focus on the question why it gives an apparently incorrect answer.

Not a full answer, but some hints.
It seems that CBlockWithAddon is still seen as a subclass of AddonDefault. E.g. add two print statements to your test_addons():
def test_addons():
    print(AddonDefault.__subclasses__())
    try:
        class CBlockWithAddon(CBlock, AddonDefault):
            pass
    except TypeError:
        pass
    print(AddonDefault.__subclasses__())

    blk = FBlock()
    assert not isinstance(blk, AddonDefault), "TEST FAIL"
    print("OK")
results in
[]
[<class '__main__.test_addons.<locals>.CBlockWithAddon'>]
...
AssertionError: TEST FAIL
_py_abc tests for this:
# Check if it's a subclass of a subclass (recursive)
for scls in cls.__subclasses__():
    if issubclass(subclass, scls):
        cls._abc_cache.add(subclass)
        return True
This will return True when cls=AddonDefault, subclass=FBlock and scls=CBlockWithAddon.
So it seems two things are going wrong:
The improperly created CBlockWithAddon is still seen as a subclass of AddonDefault.
But CBlockWithAddon is somehow created such that it seems to be a superclass of FBlock.
Perhaps the broken CBlockWithAddon is effectively identical to CBlock, and is therefore a superclass of FBlock.
This is enough for me now. Maybe it helps your investigation.
(I had to use import _py_abc as abc for this analysis. It doesn't seem to matter.)
Edit1: My hunch about CBlockWithAddon resembling its superclass CBlock seems correct:
CBWA = AddonDefault.__subclasses__()[0]
print(CBWA)
print(CBWA.__dict__.keys())
print(CBlock.__dict__.keys())
print(CBWA._abc_cache is CBlock._abc_cache)
gives
<class '__main__.test_addons.<locals>.CBlockWithAddon'>
dict_keys(['__module__', '__doc__'])
dict_keys(['__module__', '__init_subclass__', '__doc__', '__abstractmethods__', '_abc_registry', '_abc_cache', '_abc_negative_cache', '_abc_negative_cache_version'])
True
So CBlockWithAddon is not properly created: its own ABC caches and registry are never set. Accessing those attributes therefore falls through to the (first) superclass, in this case CBlock. The seemingly pointless isinstance(self, CBlock) in Block.__init__ populates that cache when blk is created, because FBlock is indeed a subclass of CBlock. The cache is then incorrectly reused when isinstance(blk, AddonDefault) is called.
I think this answers the question as is. Now the next question would be: why does CBlockWithAddon become a subclass of CBlock when it was never properly defined?
Edit2: Simpler Proof of Concept.
from abc import ABCMeta

class Animal(metaclass=ABCMeta):
    pass

class Plant(metaclass=ABCMeta):
    def __init_subclass__(cls):
        assert not issubclass(cls, Animal), "Plants cannot be Animals"

class Dog(Animal):
    pass

try:
    class Triffid(Animal, Plant):
        pass
except Exception:
    pass

print("Dog is Animal?", issubclass(Dog, Animal))
print("Dog is Plant?", issubclass(Dog, Plant))
will result in
Dog is Animal? True
Dog is Plant? True
Note that changing the order of the print statements will result in
Dog is Plant? False
Dog is Animal? False
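One way to avoid the half-created class entirely (a sketch of an alternative approach, not the original poster's code) is to perform the compatibility check in a metaclass __new__, before the class object is created. A rejected combination then never appears in __subclasses__(), so the ABC caches stay consistent regardless of call order:

```python
from abc import ABCMeta

class Animal(metaclass=ABCMeta):
    pass

# The check runs *before* the class object exists, so a rejected class
# is never registered in Animal.__subclasses__().
class PlantMeta(ABCMeta):
    def __new__(mcls, name, bases, namespace, **kwargs):
        if any(issubclass(base, Animal) for base in bases):
            raise TypeError("Plants cannot be Animals")
        return super().__new__(mcls, name, bases, namespace, **kwargs)

class Plant(metaclass=PlantMeta):
    pass

class Dog(Animal):
    pass

try:
    class Triffid(Animal, Plant):
        pass
except TypeError:
    pass

# The result is now independent of the order of the checks:
print("Dog is Plant?", issubclass(Dog, Plant))    # False
print("Dog is Animal?", issubclass(Dog, Animal))  # True
```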

Why are you making the subclasses abstract instead of the base classes?
Is there some kind of logic behind this?
If you move the abstraction one layer up, it works as intended; otherwise you mix plain type and abc.ABCMeta metaclasses:
import abc

class Addon(metaclass=abc.ABCMeta):
    pass

class AddonDefault(Addon):
    pass

class Block(metaclass=abc.ABCMeta):
    def __init__(self):
        isinstance(self, CBlock)

class CBlock(Block):
    def __init_subclass__(cls, *args, **kwargs):
        if issubclass(cls, Addon):
            raise TypeError("Do not mix Addons and CBlocks!")
        super().__init_subclass__(*args, **kwargs)

class FBlock(CBlock):
    pass

def test_addons():
    try:
        class CBlockWithAddon(CBlock, AddonDefault):
            pass
    except TypeError:
        pass

    blk = FBlock()
    assert not isinstance(blk, AddonDefault), "TEST FAIL"
    print("OK")

test_addons()


How to verify when an unknown object created by the code under test was called as expected (pytest) (unittest)

I have some code that creates instances from a list of classes that is passed to it. This cannot change, as the list of classes passed to it is designed to be dynamic and chosen at runtime through configuration files. Initialising those classes must be done by the code under test, as it depends on factors only the code under test knows how to control (i.e. it will set specific initialisation args). I've tested the code quite extensively by running it and manually trawling through reams of output. Obviously I'm at the point where I need to add some proper unit tests, as I've proven my concept to myself. The following example demonstrates what I am trying to test:
I would like to test the run method of the Foo class defined below:
# foo.py
class Foo:
    def __init__(self, stuff):
        self._stuff = stuff

    def run():
        for thing in self._stuff:
            stuff = stuff()
            stuff.run()
Where one (or more) files would contain the class definitions for stuff to run, for example:
# classes.py
class Abc:
    def run(self):
        print("Abc.run()", self)

class Ced:
    def run(self):
        print("Ced.run()", self)

class Def:
    def run(self):
        print("Def.run()", self)
And finally, an example of how it would tie together:
>>> from foo import Foo
>>> from classes import Abc, Ced, Def
>>> f = Foo([Abc, Ced, Def])
>>> f.run()
Abc.run() <__main__.Abc object at 0x7f7469f9f9a0>
Ced.run() <__main__.Ced object at 0x7f7469f9f9a1>
Def.run() <__main__.Def object at 0x7f7469f9f9a2>
Where the list of stuff to run defines the object classes (NOT instances), as the instances only have a short lifespan; they're created by Foo.run() and die when (or rather, sometime soon after) the function completes. However, I'm finding it very tricky to come up with a clear method to test this code.
I want to prove that the run method of each of the classes in the list of stuff to run was called. However, from the test, I do not have visibility on the Abc instance which the run method creates, therefore, how can it be verified? I can't patch the import as the code under test does not explicitly import the class (after all, it doesn't care what class it is). For example:
# test.py
from foo import Foo

class FakeStuff:
    def run(self):
        self.run_called = True

def test_foo_runs_all_stuff():
    under_test = Foo([FakeStuff])
    under_test.run()
    # How to verify that FakeStuff.run() was called?
    assert <SOMETHING>.run_called, "FakeStuff.run() was not called"
It seems that you correctly realise that you can pass anything into Foo(), so you should be able to log something in FakeStuff.run():
class Foo:
    def __init__(self, stuff):
        self._stuff = stuff

    def run(self):
        for thing in self._stuff:
            stuff = thing()
            stuff.run()

class FakeStuff:
    run_called = 0

    def run(self):
        FakeStuff.run_called += 1

def test_foo_runs_all_stuff():
    under_test = Foo([FakeStuff, FakeStuff])
    under_test.run()
    # How to verify that FakeStuff.run() was called?
    assert FakeStuff.run_called == 2, "FakeStuff.run() was not called"
Note that I have modified your original Foo to what I think you meant. Please correct me if I'm wrong.
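An alternative (my suggestion, not part of the original answer) is to let the standard library's unittest.mock supply the fake class. A MagicMock records how it was called, so the test can assert both the instantiation and the run() call without any counter bookkeeping:

```python
from unittest import mock

class Foo:
    def __init__(self, stuff):
        self._stuff = stuff

    def run(self):
        for thing in self._stuff:
            thing().run()

def test_foo_runs_all_stuff():
    fake_class = mock.MagicMock()  # stands in for a class: calling it yields an "instance"
    under_test = Foo([fake_class, fake_class])
    under_test.run()
    assert fake_class.call_count == 2                   # instantiated twice
    assert fake_class.return_value.run.call_count == 2  # run() called twice

test_foo_runs_all_stuff()
```

Note that a MagicMock returns the same return_value object for every call, so both "instances" are the same mock; for this test that distinction does not matter.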

Use an object even though `__init__()` raises an exception?

I'm in a situation where some less important parts of a class's __init__ method could raise an exception. In that case I want to display an error message but carry on using the instance.
A very basic example:
class something(object):
    def __init__(self):
        do_something_important()
        raise IrrelevantException()

    def do_something_useful(self):
        pass

try:
    that_thing = something()
except IrrelevantException:
    print("Something less important failed.")

that_thing.do_something_useful()
However, the last line does not work, because that_thing is not defined. The strange thing is, I could swear I've done things like this before and it worked fine. I even thought about ways to keep people from using such an unfinished instance, because I found out it gets created even in case of errors. Now I want to use exactly that behaviour, and it does not work. Hmmm...?!?
PS: something was written by myself, so I'm in control of everything.
You can accomplish this by calling object.__new__() to create the object, and then calling __init__() on it to initialize it.
This will execute all of the code possible.
class IrrelevantException(Exception):
    """This is not important, keep executing."""
    pass

class something(object):
    def __init__(self):
        print("Doing important stuff.")
        raise IrrelevantException()

    def do_something_useful(self):
        print("Now this is useful.")

that_thing = object.__new__(something)  # Create the object, does not call __init__
try:
    that_thing.__init__()  # Now run __init__
except IrrelevantException:
    print("Something less important failed.")

that_thing.do_something_useful()  # And everything that __init__ could do is done.
EDIT: as @abarnert pointed out, this code does presume that __init__() is defined but __new__() is not.
Now if it can be assumed that __new__() will not error, it can replace object.__new__() in the code.
However, if there is an error in object.__new__(), there is no way to both create the instance, and have the actions in __new__() applied to it.
This is because __new__() creates and returns the instance, whereas __init__() merely initializes it. (When you call something(), Python first calls __new__() to create the instance, then calls __init__() on it, and quietly returns the instance.)
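The two-phase creation order is easy to observe directly; a minimal sketch (the class and list names are illustrative):

```python
calls = []

class Traced(object):
    def __new__(cls):
        calls.append("__new__")   # creates and returns the instance
        return super(Traced, cls).__new__(cls)

    def __init__(self):
        calls.append("__init__")  # initializes the already-created instance

t = Traced()
print(calls)  # ['__new__', '__init__']
```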
So the most robust version of this code would be:
class IrrelevantException(Exception):
    """This is not important, keep executing."""
    pass

class something(object):
    def __init__(self):
        print("Doing important stuff.")
        raise IrrelevantException()

    def do_something_useful(self):
        print("Now this is useful.")

try:
    that_thing = something.__new__(something)  # Create the object, does not call __init__
except IrrelevantException:
    # Well, just create the object without calling cls.__new__()
    that_thing = object.__new__(something)

try:
    that_thing.__init__()  # Now run __init__
except IrrelevantException:
    print("Something less important failed.")

that_thing.do_something_useful()
So, while both of these answer the question, this latter version should also help in the (admittedly rare) case where __new__() raises an error, without stopping do_something_useful() from working.
From a comment:
PS: something was written by myself, so I'm in control of everything.
Well, then the answer is obvious: just remove that raise IrrelevantException()
Of course your real code probably doesn't have raise IrrelevantException, but instead a call to some dangerous_function() that might raise. But that's fine; you can handle the exception the same way you do anywhere else; the fact that you're inside an __init__ method makes no difference:
class something(object):
    def __init__(self):
        do_something_important()
        try:
            do_something_dangerous()
        except IrrelevantException as e:
            print(f'do_something_dangerous raised {e!r}')
        do_other_stuff_if_you_have_any()
That's all there is to it. There's no reason your __init__ should be raising an exception, and therefore the question of how to handle that exception never arises in the first place.
If you can't modify something, but can subclass it, then you don't need anything fancy:
class IrrelevantException(Exception):
    pass

def do_something_important():
    pass

class something(object):
    def __init__(self):
        do_something_important()
        raise IrrelevantException()

    def do_something_useful(self):
        pass

class betterthing(something):
    def __init__(self):
        try:
            super().__init__()  # use 2.x style if you're on 2.x of course
        except IrrelevantException:
            pass  # or log it, or whatever
        # You can even do extra stuff after the exception

that_thing = betterthing()
that_thing.do_something_useful()
Now do_something_important got called, and a something instance got returned that I was able to save and call do_something_useful on, and so on. Exactly what you were looking for.
You could of course hide something behind betterthing with some clever renaming tricks:
_something = something

class something(_something):
    # same code as above
… or just monkeypatch something.__init__ with a wrapper function instead of wrapping the class:
_init = something.__init__

def __init__(self):
    try:
        _init(self)
    except IrrelevantException:
        pass

something.__init__ = __init__
But, unless there's a good reason you can't be explicit about the fact that you're adding a wrapper, it's probably better to be explicit.
You can't have both an exception raised and a value returned (without getting hacky). If this is all code you control, then may I suggest this pattern:
class something(object):
    Exception = None

    def __init__(self):
        ...
        if BadStuff:
            self.Exception = IrrelevantException()
        ...

that_thing = something()
if that_thing.Exception:
    print(that_thing.Exception)
# carry on
Note, if you are just looking for a message, then don't bother creating an Exception object, but rather just set an error code/message on self, and check for it later.
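That message-only variant might look like this (a minimal sketch; all names, including risky_setup, are illustrative):

```python
class Something(object):
    def __init__(self):
        self.error = None
        try:
            risky_setup()          # hypothetical step that may fail
        except ValueError as exc:
            self.error = str(exc)  # record the problem, keep the instance usable

    def do_something_useful(self):
        return "useful result"

def risky_setup():
    raise ValueError("setup failed, but not fatally")

thing = Something()
if thing.error:
    print("warning:", thing.error)
print(thing.do_something_useful())
```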
I assume that you don't have control over the something class, so in that case you can call the method directly, assuming it doesn't need any instance state. You're passing self=None though, so it won't have access to any of the instance's attributes.
class IrrelevantException(Exception):
    x = "i don't matter"

class something(object):
    def __init__(self):
        raise IrrelevantException()

    def do_something_useful(self):
        print('hi')

# this will fail
try:
    that_thing = something()
except IrrelevantException:
    print("Something less important failed.")

# this will run
something.do_something_useful(None)
Alternatively you can use inheritance:
class mega_something(something):
    def __init__(self):
        print("its alive!")

that_other_thing = mega_something()
that_other_thing.do_something_useful()
The mega_something class won't run its parent's constructor unless it calls it explicitly.

Object that raises exception when used in any way

I need to create an object that would raise a custom exception, UnusableObjectError, when it is used in any way (creating it should not raise an exception though).
a = UnusableClass()  # No error
b = UnusableClass()  # No error

a == 4       # Raises UnusableObjectError
'x' in a     # Raises UnusableObjectError
for i in a:  # Raises UnusableObjectError
    print(i)
# ..and so on
I came up with the code below which seems to behave as expected.
class UnusableObjectError(Exception):
    pass

CLASSES_WITH_MAGIC_METHODS = (str(), object, float(), dict())

# Combines all magic methods I can think of.
MAGIC_METHODS_TO_CHANGE = set()
for i in CLASSES_WITH_MAGIC_METHODS:
    MAGIC_METHODS_TO_CHANGE |= set(dir(i))
MAGIC_METHODS_TO_CHANGE.add('__call__')
# __init__ and __new__ must not raise an UnusableObjectError
# otherwise it would raise error even on creation of objects.
MAGIC_METHODS_TO_CHANGE -= {'__class__', '__init__', '__new__'}

def error_func(*args, **kwargs):
    """(nearly) all magic methods will be set to this function."""
    raise UnusableObjectError

class UnusableClass(object):
    pass

for i in MAGIC_METHODS_TO_CHANGE:
    setattr(UnusableClass, i, error_func)
(some improvements made, as suggested by Duncan in comments)
Questions:
Is there an already existing class that behaves as described?
If not, is there any flaw in my UnusableClass() (e.g., situations when using the instances of the class wouldn't raise an error) and if so, how can I fix those flaws?
Turns out metaclasses and dunder (double underscore) methods don't go well together (which is unfortunate, since that would have been a more streamlined way to implement this).
I couldn't find any importable listing of magic method names, so I created one and put it on PyPi (https://pypi.python.org/pypi/magicmethods/0.1.1). With it, the implementation of UnusableClass can be written as a simple class decorator:
import magicmethods

class UnusableObjectError(Exception):
    pass

def unusable(cls):
    def _unusable(*args, **kwargs):
        raise UnusableObjectError()
    for name in set(magicmethods.all) - set(magicmethods.lifecycle):
        setattr(cls, name, _unusable)
    return cls

@unusable
class UnusableClass(object):
    pass
magicmethods.lifecycle contains __new__, __init__, and __del__. You might want to adjust this.
This implementation also handles:
a = UnusableClass()
with a:
    print 'oops'
You can use __getattribute__ to block all access to attributes, except that special __ methods like __contains__ or __eq__ are not caught by __getattribute__ (implicit special-method lookup bypasses it), and use a whitelist to allow access to some methods:
class UnuseableClass(object):
    whitelist = ('alpha', 'echo',)

    def __init__(self):
        self.alpha = 42

    def echo(self, text):
        print text

    def not_callable(self):
        return 113

    def __getattribute__(self, name):
        if name in type(self).whitelist:
            return super(UnuseableClass, self).__getattribute__(name)
        else:
            raise Exception('Attribute is not useable: %s' % name)

unuseable_object = UnuseableClass()
print(unuseable_object.alpha)
unuseable_object.echo('calling echo')

try:
    unuseable_object.not_callable()
except Exception as exc:
    print(exc.message)
If you really need to catch even special method calls, you can use How to catch any method called on an object in python?.

Is it possible to get access to class-level attributes from a nose plugin?

Say I have the following test class:
# file tests.py
class MyTests(object):
    nose_use_this = True

    def test_something(self):
        assert 1
I can easily write a plugin that is run before that test:
class HelloWorld(Plugin):
    # snip
    def startTest(self, test):
        import ipdb; ipdb.set_trace()
The test is what I want it to be, but the type of test is nose.case.Test:
ipdb> str(test)
'tests.MyTests.test_something'
ipdb> type(test)
<class 'nose.case.Test'>
And I can't see anything that will allow me to get at the nose_use_this attribute that I defined in my TestCase-ish class.
EDIT:
I think probably the best way to do this is to get access to the context with a startContext/stopContext method pair, and to set attributes on the instance there:
class MyPlugin(Plugin):
    def __init__(self, *args, **kwargs):
        super(MyPlugin, self).__init__(*args, **kwargs)
        self.using_this = None

    def startContext(self, context):
        if hasattr(context, 'nose_use_this'):
            self.using_this = context.nose_use_this

    def stopContext(self, context):
        self.using_this = None

    def startTest(self, test):
        if self.using_this is not None:
            do_something()
To be really robust you'll probably need to maintain a stack of values, since startContext is called for (at least) modules and classes; but if the only place where the attribute can be set is on the class, then this simple approach should work. It seems to be working for me (nose 1.3.0 and plugin API version 0.10).
original:
Oh, it's the _context() function on the test instance:
ipdb> test._context()
<class 'tests.MyTests'>
And it does indeed have the expected class-level attributes. I'm still open to a better way of doing this, the _ makes me think that nose doesn't want me treating this as part of the API.

"issubclass() arg 2 must be a class or tuple" in assertRaises

I have a module with an exception class InvalidObj and a class Hello that raises it:
class InvalidObj(Exception):
    def __init__(self, value):
        self.value = value

    def __str__(self):
        return repr(self.value)

class Hello(object):
    def __init__(self):
        self.a = 10
        self.b = 20

    def aequalb(self):
        if self.a != self.b:
            raise InvalidObj("This is an error")
I am trying to write a unit test where a function throws the InvalidObj exception:
class test_obj(unittest.TestCase):
    def test_obj(self):
        at = Hello()
        self.assertRaises(InvalidObj("This is an error"), at.aequalb)
On running the above test_obj class, it gives me an error "issubclass() arg 2 must be a class or tuple". But if I change the line to,
self.assertRaises(InvalidObj, at.aequalb)
This runs fine. Isn't the error supposed to return the message passed to it when it is raised?
No, it is not supposed to work the way you expected. The first argument must be an exception class (or a tuple of classes), the second a callable; the rest are as described in the documentation.
Even though exception accepts arguments, unittest does not give you deep comparisons between exceptions (otherwise, it would be pretty complex to say that two separate instances of the same class are equivalent).
To solve your issue, just test attribute separately:
with self.assertRaises(InvalidObj) as cm:
    at.aequalb()
self.assertEqual("This is an error", cm.exception.value)
Note: above I have used the assertRaises() method as a context manager. It behaves like one when only the exception argument is given. For more details please visit the mentioned documentation.
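For simple message checks there is also assertRaisesRegex (Python 3.2+; spelled assertRaisesRegexp in 2.7), which matches a pattern against str(exception). A sketch reusing the classes from the question:

```python
import unittest

class InvalidObj(Exception):
    def __init__(self, value):
        self.value = value

    def __str__(self):
        return repr(self.value)

class Hello(object):
    def __init__(self):
        self.a = 10
        self.b = 20

    def aequalb(self):
        if self.a != self.b:
            raise InvalidObj("This is an error")

class TestObj(unittest.TestCase):
    def test_aequalb_raises(self):
        at = Hello()
        # The pattern is matched with re.search() against str(exception).
        with self.assertRaisesRegex(InvalidObj, "This is an error"):
            at.aequalb()
```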
