Object that raises exception when used in any way - python

I need to create an object that would raise a custom exception, UnusableObjectError, when it is used in any way (creating it should not raise an exception, though).
a = UnusableClass()  # No error
b = UnusableClass()  # No error
a == 4               # Raises UnusableObjectError
'x' in a             # Raises UnusableObjectError
for i in a:          # Raises UnusableObjectError
    print(i)
# ...and so on
I came up with the code below which seems to behave as expected.
class UnusableObjectError(Exception):
    pass

CLASSES_WITH_MAGIC_METHODS = (str(), object, float(), dict())

# Combines all magic methods I can think of.
MAGIC_METHODS_TO_CHANGE = set()
for i in CLASSES_WITH_MAGIC_METHODS:
    MAGIC_METHODS_TO_CHANGE |= set(dir(i))
MAGIC_METHODS_TO_CHANGE.add('__call__')

# __init__ and __new__ must not raise an UnusableObjectError,
# otherwise it would raise an error even on creation of objects.
MAGIC_METHODS_TO_CHANGE -= {'__class__', '__init__', '__new__'}

def error_func(*args, **kwargs):
    """(nearly) all magic methods will be set to this function."""
    raise UnusableObjectError

class UnusableClass(object):
    pass

for i in MAGIC_METHODS_TO_CHANGE:
    setattr(UnusableClass, i, error_func)
(some improvements made, as suggested by Duncan in comments)
Questions:
Is there an already existing class that behaves as described?
If not, is there any flaw in my UnusableClass() (e.g., situations when using the instances of the class wouldn't raise an error) and if so, how can I fix those flaws?

Turns out metaclasses and dunder (double underscore) methods don't go well together (which is unfortunate, since that would have been a more streamlined way to implement this).
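For illustration, here is a minimal sketch of the problem (class names are made up): implicit special-method lookups bypass attribute hooks on the metaclass, so dunder calls on instances never reach it:

class RaisingMeta(type):
    def __getattr__(cls, name):
        raise RuntimeError('unusable: %s' % name)

class Probe(metaclass=RaisingMeta):
    pass

p = Probe()
print(p == 4)  # False -- the implicit __eq__ lookup goes straight through
               # type(p)'s slots and never triggers RaisingMeta.__getattr__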
I couldn't find any importable listing of magic method names, so I created one and put it on PyPI (https://pypi.python.org/pypi/magicmethods/0.1.1). With it, the implementation of UnusableClass can be written as a simple class decorator:
import magicmethods

class UnusableObjectError(Exception):
    pass

def unusable(cls):
    def _unusable(*args, **kwargs):
        raise UnusableObjectError()
    for name in set(magicmethods.all) - set(magicmethods.lifecycle):
        setattr(cls, name, _unusable)
    return cls

@unusable
class UnusableClass(object):
    pass
magicmethods.lifecycle contains __new__, __init__, and __del__. You might want to adjust this.
This implementation also handles:
a = UnusableClass()
with a:
    print('oops')

You can use __getattribute__ to block access to attributes, with a whitelist to allow access to some methods. Note that implicit special-method lookups such as __contains__ or __eq__ are not caught by __getattribute__:
class UnuseableClass(object):
    whitelist = ('alpha', 'echo',)

    def __init__(self):
        self.alpha = 42

    def echo(self, text):
        print(text)

    def not_callable(self):
        return 113

    def __getattribute__(self, name):
        if name in type(self).whitelist:
            return super(UnuseableClass, self).__getattribute__(name)
        else:
            raise Exception('Attribute is not useable: %s' % name)

unuseable_object = UnuseableClass()
print(unuseable_object.alpha)
unuseable_object.echo('calling echo')
try:
    unuseable_object.not_callable()
except Exception as exc:
    print(exc)
If you really need to catch even special method calls, see How to catch any method called on an object in python?.
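As a quick demonstration of that caveat (a minimal sketch, names made up), implicit special-method invocation skips __getattribute__ entirely:

class Blocker(object):
    def __getattribute__(self, name):
        raise Exception('blocked: %s' % name)

b = Blocker()
try:
    b.anything
except Exception as exc:
    print(exc)   # blocked: anything
print(b == 4)    # False -- no exception: == looks up type(b).__eq__ directly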


Why does this test fail?

After adding a new unit test I started to get failures in an unrelated test run after the new test. I could not understand why.
I have simplified the case to the code below. I still do not understand what is going on. I am surprised that commenting out seemingly unrelated lines of code affects the result: removing the call to isinstance in Block.__init__ changes the result of isinstance(blk, AddonDefault) in test_addons.
import abc

class Addon:
    pass

class AddonDefault(Addon, metaclass=abc.ABCMeta):
    pass

class Block:
    def __init__(self):
        isinstance(self, CBlock)

class CBlock(Block, metaclass=abc.ABCMeta):
    def __init_subclass__(cls, *args, **kwargs):
        if issubclass(cls, Addon):
            raise TypeError("Do not mix Addons and CBlocks!")
        super().__init_subclass__(*args, **kwargs)

class FBlock(CBlock):
    pass

def test_addons():
    try:
        class CBlockWithAddon(CBlock, AddonDefault):
            pass
    except TypeError:
        pass

    blk = FBlock()
    assert not isinstance(blk, AddonDefault), "TEST FAIL"
    print("OK")

test_addons()
When I run python3 test.py, I get the TEST FAIL assertion error. But FBlock is derived from CBlock, which is derived from Block. How can it be an instance of AddonDefault?
UPDATE: I'd like to emphasize that the only purpose of the posted code is to demonstrate the behaviour I cannot understand. It was created by reducing a much larger program as far as I was able to. During this process any logic it had before was lost, so please take it as it is and focus on the question why it gives an apparently incorrect answer.
Not a full answer, but some hints.
It seems that CBlockWithAddon is still seen as a subclass of AddonDefault. E.g. add two print statements to your test_addons():
def test_addons():
    print(AddonDefault.__subclasses__())
    try:
        class CBlockWithAddon(CBlock, AddonDefault):
            pass
    except TypeError:
        pass
    print(AddonDefault.__subclasses__())

    blk = FBlock()
    assert not isinstance(blk, AddonDefault), "TEST FAIL"
    print("OK")
results in
[]
[<class '__main__.test_addons.<locals>.CBlockWithAddon'>]
...
AssertionError: TEST FAIL
_py_abc tests for this:
# Check if it's a subclass of a subclass (recursive)
for scls in cls.__subclasses__():
    if issubclass(subclass, scls):
        cls._abc_cache.add(subclass)
        return True
This will return True when cls=AddonDefault, subclass=FBlock and scls=CBlockWithAddon.
So it seems two things are going wrong:
The improperly created CBlockWithAddon is still seen as a subclass of AddonDefault.
But CBlockWithAddon is somehow created such that it seems to be a superclass of FBlock.
Perhaps the broken CBlockWithAddon is effectively identical to CBlock, and is therefore a superclass of FBlock.
This is enough for me now. Maybe it helps your investigation.
(I had to use import _py_abc as abc for this analysis. It doesn't seem to matter.)
Edit1: My hunch about CBlockWithAddon resembling its superclass CBlock seems correct:
CBWA = AddonDefault.__subclasses__()[0]
print(CBWA)
print(CBWA.__dict__.keys())
print(CBlock.__dict__.keys())
print(CBWA._abc_cache is CBlock._abc_cache)
gives
<class '__main__.test_addons.<locals>.CBlockWithAddon'>
dict_keys(['__module__', '__doc__'])
dict_keys(['__module__', '__init_subclass__', '__doc__', '__abstractmethods__', '_abc_registry', '_abc_cache', '_abc_negative_cache', '_abc_negative_cache_version'])
True
So CBlockWithAddon is not properly created; e.g., its cache registry is not properly set, so accessing those attributes falls through to the (first) superclass, in this case CBlock. The not-so-dummy isinstance(self, CBlock) populates the cache when blk is created, because FBlock is indeed a subclass of CBlock. This cache is then incorrectly reused when isinstance(blk, AddonDefault) is called.
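A quick way to see this for yourself (assuming import _py_abc as abc as in the analysis above, since the C implementation hides these attributes):

blk = FBlock()                      # Block.__init__ runs isinstance(self, CBlock)
print(FBlock in CBlock._abc_cache)  # True: FBlock is now cached as a CBlock subclass
# CBlockWithAddon shares CBlock's _abc_cache, so the recursive loop in
# _py_abc's __subclasscheck__ finds FBlock via CBlockWithAddon and returns True.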
I think this answers the question as is. Now the next question would be: why does CBlockWithAddon become a subclass of CBlock when it was never properly defined?
Edit2: Simpler Proof of Concept.
from abc import ABCMeta

class Animal(metaclass=ABCMeta):
    pass

class Plant(metaclass=ABCMeta):
    def __init_subclass__(cls):
        assert not issubclass(cls, Animal), "Plants cannot be Animals"

class Dog(Animal):
    pass

try:
    class Triffid(Animal, Plant):
        pass
except Exception:
    pass

print("Dog is Animal?", issubclass(Dog, Animal))
print("Dog is Plant?", issubclass(Dog, Plant))
will result in
Dog is Animal? True
Dog is Plant? True
Note that changing the order of the print statements will result in
Dog is Plant? False
Dog is Animal? False
Why are you making subclasses abstract instead of the base classes? Is there some kind of logic behind this?
If you move the abstraction one layer up it works as intended; otherwise you mix plain type and abc.ABCMeta metaclasses:
import abc

class Addon(metaclass=abc.ABCMeta):
    pass

class AddonDefault(Addon):
    pass

class Block(metaclass=abc.ABCMeta):
    def __init__(self):
        isinstance(self, CBlock)

class CBlock(Block):
    def __init_subclass__(cls, *args, **kwargs):
        if issubclass(cls, Addon):
            raise TypeError("Do not mix Addons and CBlocks!")
        super().__init_subclass__(*args, **kwargs)

class FBlock(CBlock):
    pass

def test_addons():
    try:
        class CBlockWithAddon(CBlock, AddonDefault):
            pass
    except TypeError:
        pass

    blk = FBlock()
    assert not isinstance(blk, AddonDefault), "TEST FAIL"
    print("OK")

test_addons()

How do I assign a stacklevel to a Warning depending on the caller?

I have a Python class that issues a warning inside __init__(). It also provides a factory class method for opening and reading a file:
from warnings import warn

class MyWarning(Warning):
    """Warning issued when an invalid name is found."""
    pass

class MyClass:
    def __init__(self, names):
        # Simplified; actual code is longer
        if is_invalid(names):
            names = fix_names(names)
            warn(f'{names!r} contains invalid element(s)',
                 MyWarning, stacklevel=2)
        self._names = names

    @classmethod
    def from_file(cls, filename):
        with open(filename) as file:
            names = extract_names(file)
        return cls(names)
stacklevel=2 makes the warning refer to the call to MyClass() rather than the warn() statement itself. This works when user code directly instantiates MyClass. However, when MyClass.from_file() issues the warning, MyWarning refers to return cls(names), not the user code calling from_file().
How do I ensure that the factory method also issues a warning that points to the caller? Some options I've considered:

1. Add a "hidden" _stacklevel parameter to __init__(), and instantiate MyClass with _stacklevel=2 inside from_file(). This is super ugly, and exposes internal behavior to the API.
2. Add a "hidden" _stacklevel class attribute, access it inside __init__(), and temporarily modify this attribute in from_file(). Also super ugly.
3. Add a _set_names() method that checks/fixes the names and issues a warning when needed, and call it inside the constructor. For from_file(), first instantiate MyClass with empty args, then call _set_names() directly so that MyWarning points to the caller. Still hacky, and effectively calls _set_names() twice when from_file() is used.
4. Catch and re-throw the warning, similar to exception chaining. Sounds good, but I have no idea how to do this.

I read the warnings module docs, but they offer little help on safely catching and re-issuing warnings. Converting the warning to an exception using warnings.simplefilter() would interrupt MyClass() and force me to call it again.
You can catch warnings similar to the way you catch exceptions using warnings.catch_warnings():
import warnings

class MyWarning(Warning):
    """Warning issued when an invalid name is found."""
    pass

class MyClass:
    def __init__(self, names):
        # Simplified; actual code is longer
        if is_invalid(names):
            names = fix_names(names)
            warnings.warn(f'{names!r} contains invalid element(s)',
                          MyWarning, stacklevel=2)
        self._names = names

    @classmethod
    def from_file(cls, filename):
        with open(filename) as file:
            names = extract_names(file)

        with warnings.catch_warnings(record=True) as cx_manager:
            inst = cls(names)

        # Re-report warnings with the stack level we want.
        for warning in cx_manager:
            warnings.warn(warning.message, warning.category, stacklevel=2)
        return inst
Just keep in mind the following note from the documentation of warnings.catch_warnings():
Note: The catch_warnings manager works by replacing and then later restoring the module's showwarning() function and internal list of filter specifications. This means the context manager is modifying global state and therefore is not thread-safe.
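For illustration, a hypothetical call site (the file name and helper functions are made up):

inst = MyClass.from_file('names.txt')
# The MyWarning report now points at this line instead of at `return cls(names)`.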
David is right, warnings.catch_warnings(record=True) is probably what you want. Though I would write it as a function decorator instead:
def reissue_warnings(func):
    def inner(*args, **kwargs):
        with warnings.catch_warnings(record=True) as warning_list:
            result = func(*args, **kwargs)
        for warning in warning_list:
            warnings.warn(warning.message, warning.category, stacklevel=2)
        return result
    return inner
And then in your example:
class MyClass:
    def __init__(self, names):
        # ...

    @classmethod
    @reissue_warnings
    def from_file(cls, filename):
        with open(filename) as file:
            names = extract_names(file)
        return cls(names)

inst = MyClass(['some', 'names'])    # 58: MyWarning: ['some', 'names'] contains invalid element(s)
inst = MyClass.from_file('example')  # 59: MyWarning: ['example'] contains invalid element(s)
This way also allows you to cleanly collect and reissue warnings across multiple functions as well:
class Test:
    def a(self):
        warnings.warn("This is a warning issued from a()")

    @reissue_warnings
    def b(self):
        self.a()

    @reissue_warnings
    def c(self):
        warnings.warn("This is a warning issued from c()")
        self.b()

    @reissue_warnings
    def d(self):
        self.c()

test = Test()
test.d()  # Line 59
# 59: UserWarning: This is a warning issued from c()
# 59: UserWarning: This is a warning issued from a()

Use an object even though `__init__()` raises an exception?

I'm in a situation where some less important parts of a class's __init__ method could raise an exception. In that case I want to display an error message but carry on using the instance.
A very basic example:
class something(object):
    def __init__(self):
        do_something_important()
        raise IrrelevantException()

    def do_something_useful(self):
        pass

try:
    that_thing = something()
except IrrelevantException:
    print("Something less important failed.")
that_thing.do_something_useful()
However, the last line does not work, because that_thing is not defined. The strange thing is, I could swear I've done things like this before and it worked fine. I even thought about ways to keep people from using such an unfinished instance, because I found out it gets created even in case of errors. Now I wanted to use that and it does not work. Hmmm...?!?
PS: something was written by myself, so I'm in control of everything.
You can accomplish this by calling object.__new__() to create the object, and then calling __init__() on it yourself to initialize it.
This will execute all of the code possible.
class IrrelevantException(Exception):
    """This is not important, keep executing."""
    pass

class something(object):
    def __init__(self):
        print("Doing important stuff.")
        raise IrrelevantException()

    def do_something_useful(self):
        print("Now this is useful.")

that_thing = object.__new__(something)  # Create the object, does not call __init__
try:
    that_thing.__init__()  # Now run __init__
except IrrelevantException:
    print("Something less important failed.")
that_thing.do_something_useful()  # And everything that __init__ could do is done.
EDIT: as @abarnert pointed out, this code presumes that __init__() is defined, but __new__() is not.
Now if it can be assumed that __new__() will not error, it can replace object.__new__() in the code.
However, if there is an error in the class's __new__(), there is no way to both create the instance and have the actions in __new__() applied to it.
This is because __new__() returns the instance, whereas __init__() manipulates the instance. (When you call something(), type.__call__ invokes __new__() to create the instance, then calls __init__() on it, and returns the instance.)
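Roughly, calling a class does something like this (a simplified, pure-Python sketch of what type.__call__ does, not the real implementation):

def call_class(cls, *args, **kwargs):
    instance = cls.__new__(cls, *args, **kwargs)  # creates and returns the instance
    if isinstance(instance, cls):
        instance.__init__(*args, **kwargs)        # only mutates the instance in place
    return instance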
So the most robust version of this code would be:
class IrrelevantException(Exception):
    """This is not important, keep executing."""
    pass

class something(object):
    def __init__(self):
        print("Doing important stuff.")
        raise IrrelevantException()

    def do_something_useful(self):
        print("Now this is useful.")

try:
    that_thing = something.__new__(something)  # Create the object, does not call __init__
except IrrelevantException:
    # Well, just create the object without calling cls.__new__()
    that_thing = object.__new__(something)

try:
    that_thing.__init__()  # Now run __init__
except IrrelevantException:
    print("Something less important failed.")

that_thing.do_something_useful()
So, while both of these answer the question, this latter one should also help in the (admittedly rare) case where __new__() has an error, but this does not stop do_something_useful() from working.
From a comment:
PS: something was written by myself, so I'm in control of everything.
Well, then the answer is obvious: just remove that raise IrrelevantException()
Of course your real code probably doesn't have raise IrrelevantException, but instead a call to some dangerous_function() that might raise. But that's fine; you can handle the exception the same way you do anywhere else; the fact that you're inside an __init__ method makes no difference:
class something(object):
    def __init__(self):
        do_something_important()
        try:
            do_something_dangerous()
        except IrrelevantException as e:
            print(f'do_something_dangerous raised {e!r}')
        do_other_stuff_if_you_have_any()
That's all there is to it. There's no reason your __init__ should be raising an exception, and therefore the question of how to handle that exception never arises in the first place.
If you can't modify something, but can subclass it, then you don't need anything fancy:
class IrrelevantException(Exception):
    pass

def do_something_important():
    pass

class something(object):
    def __init__(self):
        do_something_important()
        raise IrrelevantException()

    def do_something_useful(self):
        pass

class betterthing(something):
    def __init__(self):
        try:
            super().__init__()  # use 2.x style if you're on 2.x of course
        except IrrelevantException:
            pass  # or log it, or whatever
        # You can even do extra stuff after the exception

that_thing = betterthing()
that_thing.do_something_useful()
Now do_something_important got called, and a something instance got returned that I was able to save and call do_something_useful on, and so on. Exactly what you were looking for.
You could of course hide something behind betterthing with some clever renaming tricks:
_something = something

class something(_something):
    # same code as above
… or just monkeypatch something.__init__ with a wrapper function instead of wrapping the class:
_init = something.__init__

def __init__(self):
    try:
        _init(self)
    except IrrelevantException:
        pass

something.__init__ = __init__
But, unless there's a good reason that you can't be explicit about the fact that you're adding a wrapper, it's probably better to be explicit.
You can't have both an exception raised and a value returned (without getting hacky). If this is all code you control, then may I suggest this pattern:
class something(object):
    Exception = None

    def __init__(self):
        ...
        if BadStuff:
            self.Exception = IrrelevantException()
        ...

that_thing = something()
if that_thing.Exception:
    print(that_thing.Exception)
# carry on
Note, if you are just looking for a message, then don't bother creating an Exception object, but rather just set an error code/message on self, and check for it later.
I assume that you don't have control over the something class, so in that case you can call the method directly, assuming it doesn't need any instance state. You're passing self=None though, so it won't have access to any of the instance's attributes.
class IrrelevantException(Exception):
    x = "i don't matter"

class something(object):
    def __init__(self):
        raise IrrelevantException()

    def do_something_useful(self):
        print('hi')

# this will fail
try:
    that_thing = something()
except IrrelevantException:
    print("Something less important failed.")

# this will run
something.do_something_useful(None)
Alternatively you can use inheritance:
class mega_something(something):
    def __init__(self):
        print("its alive!")

that_other_thing = mega_something()
that_other_thing.do_something_useful()
The mega_something class won't run its parent's constructor unless it calls it explicitly.

Is there a way to write a class decorator for rollback purposes?

I wrote a class with a "revert" context manager (built with the contextmanager decorator). The intention is to change class members inside the with block and, if any exception occurs, to "revert" all changes:
from contextlib import contextmanager
from copy import deepcopy

class A():
    def __init__(self):
        self.kuku = 'old_value'

    @contextmanager
    def revertible_transaction(self):
        old_self = deepcopy(self)
        try:
            yield
        except Exception as e:
            self = old_self
            raise e

    def change_stuff(self):
        with self.revertible_transaction():
            self.kuku = 'new_value'
            raise Exception
I want self.kuku to still be 'old_value' after I run change_stuff(), but it's 'new_value' instead.
Any ideas why this is not working and how to do this properly?
You can't usefully assign to self; you're just changing which object a local name refers to without modifying the original object. You need to explicitly save the state of the object to restore if necessary.
@contextmanager
def revertible_transaction(self):
    old_kuku = self.kuku
    try:
        yield
    except Exception:
        self.kuku = old_kuku
        raise
self in revertible_transaction is just a local variable. It is a different reference from the self in change_stuff, so rebinding it won't change anything else. Even if you passed along the reference (say, by yield-ing it and assigning with with ... as ...), you'd still not replace whatever reference the caller of the change_stuff() method has for the instance.
So you can't replace self like this. You'll need to replace the instance attributes; you can reach these through the self.__dict__ dictionary, usually:
@contextmanager
def revertible_transaction(self):
    old_state = deepcopy(self.__dict__)
    try:
        yield
    except Exception as e:
        self.__dict__ = old_state
        raise e
This won't work for classes that use slots, but the code could trivially be extended to then copy all names listed in all __slots__ class attributes in the MRO.
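For illustration, a minimal sketch of that extension (it assumes the object keeps all of its state in __slots__, and that each __slots__ entry is a tuple of names):

@contextmanager
def revertible_transaction(self):
    slot_names = {name for klass in type(self).__mro__
                  for name in getattr(klass, '__slots__', ())}
    saved = {name: deepcopy(getattr(self, name))
             for name in slot_names if hasattr(self, name)}
    try:
        yield
    except Exception:
        for name, value in saved.items():
            setattr(self, name, value)
        raise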

How to make sure a class function won't be called until another class function has been called first?

I have a class object that creates some data fields:
class DataFields(object):
    _fields_ = ['field_1', 'field_2', 'data_length']

    def __init__(self, data=None):
        if data != None:
            self.create_fields(data)

    def create_fields(self, data):
        i = 0
        for field in self._fields_:
            setattr(self, field, data[i])
            i += 1

    def get_datalength(self):
        return self.data_length
What is the best way to make sure that the get_datalength() function cannot be called unless the data_length field has been created (that is, unless the create_fields() function has been called once).
I've thought about either using a variable that gets initialized in the create_fields and is checked in get_datalength() or try-except inside the get_datalength() function. What is the most Pythonic (or the best) way?
I think the most pythonic way would be to throw an exception:
def get_datalength(self):
    try:
        return self.data_length
    except AttributeError:
        raise AttributeError("No length; call create_fields() first")
Simple reason: there is no way to prevent the user from calling this function on the object. Either the user would get a bare AttributeError and would not understand what is going on, or you provide your own error class, or at least a helpful error message.
BTW:
It is not Pythonic to create getter methods (there are no such things as 'private members').
If you need to perform a small operation on the value before returning it, have a look at the @property decorator:
@property
def datalength(self):
    return do_some_stuff(self.data_length)
By using getattr() with a default value, you can return None (or any other value) if there is no data_length attribute on the instance yet:
def get_datalength(self):
    return getattr(self, 'data_length', None)
Using an exception is probably the best way for what you are doing however there are alternatives that may be useful if you will be using this object from an interactive console:
import types

def fn2(self):
    print("this is fn2")

class test:
    def fn1(self):
        print("this is fn1")
        # Bind the module-level fn2 to this instance so self gets passed.
        self.fn2 = types.MethodType(fn2, self)

    def fn2(self):  # omit this if you want fn2 to only exist after fn1 is called
        print("Please call fn1 first")
I wouldn't recommend this for every-day use but it can be useful in some cases. If you omit defining fn2 within the class, then the method fn2 will only be present after fn1 is called. For easier code maintenance you can do the same thing like this:
class test:
    def fn1(self):
        print("this is fn1")
        self.fn2 = self._fn2

    def _fn2(self):
        print("this is fn2")

    def fn2(self):  # omit this if you want fn2 to only exist after fn1 is called
        print("Please call fn1 first")
If this is to be used inside a module that will be imported then you should either raise an exception or return a valid value like the other answers have suggested.
This can be solved by keeping a dictionary, as a class variable, with method names as keys:

called['method1']
called['method2']
called['method3']
...

Each method sets its own key when it runs, and dependent methods check for it first:

class SomeClass(object):
    called = {}  # class variable: records which methods have been called

    def method1(self):
        self.called['method1'] = 1

    def method2(self):
        if 'method1' in self.called:
            pass  # safe to continue
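As a variation on this idea, here is a hypothetical sketch (records_call and requires are made-up names, not from the question) that tracks calls per instance rather than in a shared class dictionary, and raises a clear error when a prerequisite is missing:

import functools

def records_call(func):
    """Record on the instance that func has been called."""
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        result = func(self, *args, **kwargs)
        if not hasattr(self, '_called'):
            self._called = set()
        self._called.add(func.__name__)
        return result
    return wrapper

def requires(*prereqs):
    """Raise RuntimeError unless all prerequisite methods ran first."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            missing = [p for p in prereqs
                       if p not in getattr(self, '_called', set())]
            if missing:
                raise RuntimeError('%s() requires calling %s first'
                                   % (func.__name__, ', '.join(missing)))
            return func(self, *args, **kwargs)
        return wrapper
    return decorator

class DataFields(object):
    _fields_ = ['field_1', 'field_2', 'data_length']

    @records_call
    def create_fields(self, data):
        for field, value in zip(self._fields_, data):
            setattr(self, field, value)

    @requires('create_fields')
    def get_datalength(self):
        return self.data_length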
