Python unittest assertRaises error

I had this weird trouble running my unittest in Python:
I used assertRaises, and running the unittest raised the correct exception, but the test still failed. OK, I can't really explain it; please see the traceback for yourself:
Error
Traceback (most recent call last):
File "/Users/chianti/PycharmProjects/Programming_Project/Part1and4/Part1and4Test.py", line 32, in test_non_alpha_name
self.assertRaises(RestNameContainNonAlphaError, RestaurantName(self.non_alpha_name))
File "/Users/chianti/PycharmProjects/Programming_Project/Part1and4/InputCheck.py", line 29, in __init__
raise RestNameContainNonAlphaError('There are non alphabetic characters that I can not recognize!')
RestNameContainNonAlphaError: There are non alphabetic characters that I can not recognize!
Error
Traceback (most recent call last):
File "/Users/chianti/PycharmProjects/Programming_Project/Part1and4/Part1and4Test.py", line 24, in test_non_string_name
self.assertRaises(InputNotStringError, RestaurantName, self.non_string_name)
File "/Users/chianti/anaconda/lib/python2.7/unittest/case.py", line 473, in assertRaises
callableObj(*args, **kwargs)
File "/Users/chianti/PycharmProjects/Programming_Project/Part1and4/InputCheck.py", line 33, in __init__
raise InputNotStringError('Not String! The input is supposed to be a string type!')
InputNotStringError: Not String! The input is supposed to be a string type!
Why ?????????? Any ideas are appreciated !!! THANK YOU
Here is my unittest:
class RestaurantNameTests(unittest.TestCase):
    def setUp(self):
        self.non_string_name = 123
        self.valid_name = 'Italian rest '
        self.non_alpha_name = 'valid ** n'

    def tearDown(self):
        self.non_string_name = None
        self.valid_name = None
        self.non_alpha_name = None

    def test_non_string_name(self):
        with self.assertRaises(InputNotStringError):
            RestaurantName(self.non_string_name)

    def test_valid_name(self):
        self.assertEqual(RestaurantName(self.valid_name).__str__(), 'Italian rest')

    def test_non_alpha_name(self):
        self.assertRaises(RestNameContainNonAlphaError, RestaurantName(self.non_alpha_name))
If you need to see the definition of RestaurantName, here it is:
class RestaurantName():
    def __init__(self, input_contents):
        self.name = input_contents
        if IsValidString(self.name):
            self.no_space_name = self.name.replace(' ', '')
            if str.isalpha(self.no_space_name):
                pass
            else:
                raise RestNameContainNonAlphaError('There are non alphabetic characters that I can not recognize!')
        else:
            raise InputNotStringError('Not String! The input is supposed to be a string type!')

    def __repr__(self):
        return 'RestaurantName(%s)' % self.name.strip()

    def __str__(self):
        return self.name.strip()
Thanks again

The traceback doesn't match your description of the problem (nor your code, FWIW). The error comes from test_non_alpha_name(), which from your error message looks like:
self.assertRaises(
    RestNameContainNonAlphaError,
    RestaurantName(self.non_alpha_name)
)
This is not the correct way to use assertRaises(). You must pass ExpectedExceptionClass, callable, *args, **kw to assertRaises(), and args and kw will be passed to your callable. In other words, you want:
self.assertRaises(
    RestNameContainNonAlphaError,
    RestaurantName,
    self.non_alpha_name
)
The reason is simple: the way you currently call it, the exception is raised before assertRaises() is even called.
As a side note:
your tearDown method is useless
there's already a builtin exception for wrong types, named TypeError
there's also a builtin exception for wrong values, named ValueError (a sketch using both is below)
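For illustration only, a minimal sketch of that last point; the class body here is a trimmed stand-in for the RestaurantName in the question, not the original code, and the messages are made up:
class RestaurantName(object):
    def __init__(self, input_contents):
        if not isinstance(input_contents, str):
            # builtin exception for a wrong type
            raise TypeError('The input is supposed to be a string type!')
        self.name = input_contents
        if not self.name.replace(' ', '').isalpha():
            # builtin exception for a wrong value
            raise ValueError('The name contains non-alphabetic characters!')

    def __str__(self):
        return self.name.strip()
The validation behaviour stays the same, but callers can catch the standard exception types instead of importing custom ones.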

Related

pydantic : duplicate validator function

I used the code below, and it reports a duplicate validator. Why can't I use both? How do I create an alias in the @validator if I cannot use Field?
from pydantic import BaseModel, validator, Field
import datetime

class MultiSourceInput(BaseModel):
    abc: str = Field(..., alias='abc_1', description="xxxxxxxxxxxx.")
    xyz: int = Field(..., description="xxxxxxxx ", ge=0, le=150)

    @validator("abc")
    def abc(value):
        value = float(value)
        if value <= 141 and value >= 0:
            return value
        else:
            return 0
Here's the traceback:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 37, in MultiSourceInput
File "pydantic/class_validators.py", line 85, in pydantic.class_validators.validator.dec
File "pydantic/class_validators.py", line 144, in pydantic.class_validators._prepare_validator
pydantic.errors.ConfigError: duplicate validator function "__main__.MultiSourceInput.abc"; if this is intended, set `allow_reuse=True`
In my case, this occurred because the validation method was receiving self instead of cls, meaning:
#validator("my_field")
def parse_my_field(self, v):
...
Instead of:
#validator("my_field")
def parse_my_field(cls, v):
...
My issue was caused by errors (seemingly unrelated) earlier in the code. Some knock-on effects caused the duplicate validator function error. When I fixed the preceding errors, this went away.
Check whether a validator has already been created with the abc method name. You might need to rename the method; a sketch of that is below.
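For illustration, a minimal sketch of that renaming suggestion; the method name validate_abc and the cls, value signature are assumptions, not taken from the question:
from pydantic import BaseModel, Field, validator

class MultiSourceInput(BaseModel):
    abc: str = Field(..., alias='abc_1', description="xxxxxxxxxxxx.")
    xyz: int = Field(..., description="xxxxxxxx", ge=0, le=150)

    # Renamed so the validator no longer shares a name with the "abc" field
    @validator("abc")
    def validate_abc(cls, value):
        value = float(value)
        if 0 <= value <= 141:
            return value
        return 0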

Where do the three arguments come from in this __exit__ function extending a python base type?

I tried to play around with the built-in string type, wondering if I could use strings with the with statement. Obviously the following will fail:
with "hello" as hello:
print(f"{hello} world!")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: __enter__
Then, deriving a class from str with the two attributes needed for with:
class String(str):
    def __enter__(self):
        return self
    def __exit__(self):
        ...

with String("hello") as hello:
    print(f"{hello} world!")
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
TypeError: __exit__() takes 1 positional argument but 4 were given
OK, I wondered what those arguments were, so I added *args, **kwargs to __exit__ and tried again:
class String(str):
    def __enter__(self):
        return self
    def __exit__(self, *args, **kwargs):
        print("args: ", args)
        print("kwargs: ", kwargs)

with String("hello") as hello:
    print(f"{hello} world!")
hello world!
args: (None, None, None)
kwargs: {}
This works with other types too, I guess anything that can normally be passed to str(), but what are those three arguments? How do I go about finding more information on what they are? And finally, where can I go to see the implementation of built-in types?
These are the __enter__() and __exit__() methods of the context manager protocol. Please refer to this link for a detailed explanation.
The __exit__() method exits the runtime context and returns a Boolean flag indicating whether any exception that occurred should be suppressed.
Those three arguments are:
exception_type
exception_value
traceback
The values of these arguments describe the exception that was raised. If they are all None, no exception was raised.
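For illustration, a minimal sketch (reusing the String subclass idea from the question) that names the three arguments explicitly and shows what the Boolean return value does:
class String(str):
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Called when the with-block is left; the three arguments describe any
        # exception raised inside it (all None if the block exited normally).
        print("exc_type:", exc_type, "exc_value:", exc_value)
        # Returning True tells Python to suppress the exception.
        return exc_type is ValueError

with String("hello") as hello:
    print(f"{hello} world!")
    raise ValueError("swallowed by __exit__")
print("still running")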

How to get the literal representation of function argument?

I want to get a literal representation of a function argument in an exception message. Because x could be of any object type, there may be no str or repr defined for that type. Is there a way to achieve that? Thanks
e.g.
def f(x, y):
    raise Exception("<x> is not valid")
As Johnrsharpe said in the comments, simple formatting should be fine for your use case.
For example, if you have a class and a function:
class Myobj:
    pass

def f(x, y):
    raise Exception("{!r} is not valid".format(x))

f(Myobj, 1)
You should get the following output and a nice traceback:
Traceback (most recent call last):
File "42920465.py", line 9, in <module>
f(Myobj, 1)
File "42920465.py", line 6, in f
raise Exception("{!r} is not valid".format(x))
Exception: <class '__main__.Myobj'> is not valid
You can see that the object passed in is mentioned in the description.

How can I create an Exception in Python minus the last stack frame?

Not sure how possible this is, but here goes:
I'm trying to write an object with some slightly more subtle behavior - which may or may not be a good idea, I haven't determined that yet.
I have this method:
def __getattr__(self, attr):
    try:
        return self.props[attr].value
    except KeyError:
        pass  # to hide the KeyError exception
    msg = "'{}' object has no attribute '{}'"
    raise AttributeError(msg.format(self.__dict__['type'], attr))
Now, when I create an instance of this like so:
t = Thing()
t.foo
I get a stacktrace containing my function:
Traceback (most recent call last):
File "attrfun.py", line 23, in <module>
t.foo
File "attrfun.py", line 15, in __getattr__
raise AttributeError(msg.format(self._type, attr))
AttributeError: 'Thing' object has no attribute 'foo'
I don't want that - I want the stack trace to read:
Traceback (most recent call last):
File "attrfun.py", line 23, in <module>
t.foo
AttributeError: 'Thing' object has no attribute 'foo'
Is this possible with a minimal amount of effort, or is there kind of a lot required? I found this answer which indicates that something looks to be possible, though perhaps involved. If there's an easier way, I'd love to hear it! Otherwise I'll just put that idea on the shelf for now.
You cannot tamper with traceback objects (and that's a good thing). You can only control how you process one that you've already got.
The only exceptions are: you can
substitute an exception with another or re-raise it with raise e (i.e make the traceback point to the re-raise statement's location)
raise an exception with an explicit traceback object
remove outer frame(s) from a traceback object by accessing its tb_next property (this reflects a traceback object's onion-like structure)
For your purpose, the way to go appears to be the 1st option: re-raise an exception from a handler one level above your function.
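A minimal sketch of that first option; the Thing class here is only a stand-in for the one in the question, and it assumes Python 3, where raise ... from None also suppresses the chained-exception context:
class Thing:
    props = {}  # stand-in for the question's props mapping

    def __getattr__(self, attr):
        try:
            return Thing.props[attr]
        except KeyError:
            pass
        raise AttributeError("'Thing' object has no attribute '{}'".format(attr))

t = Thing()
try:
    t.foo
except AttributeError as e:
    # Re-raise from the handler one level above __getattr__: the reported
    # traceback now points at this raise statement instead of the internals.
    raise AttributeError(str(e)) from None
The printed traceback then shows only this handler's frame, which is close to (but not exactly) what the question asks for.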
And, I'll say this again, this is harmful for yourself or whoever will be using your module as it deletes valuable diagnostic information. If you're dead set on making your module proprietary with whatever rationale, it's more productive for that goal to make it a C extension.
The traceback object is created during stack unwinding, not directly when you raise the exception, so you cannot alter it right in your function. What you could do instead (though it's probably a bad idea) is to alter the top-level exception hook so that it hides your function from the traceback.
Suppose you have this code:
import sys

class MagicGetattr:
    def __getattr__(self, item):
        raise AttributeError(f"{item} not found")

orig_excepthook = sys.excepthook

def excepthook(type, value, traceback):
    iter_tb = traceback
    while iter_tb.tb_next is not None:
        if iter_tb.tb_next.tb_frame.f_code is MagicGetattr.__getattr__.__code__:
            iter_tb.tb_next = None
            break
        iter_tb = iter_tb.tb_next
    orig_excepthook(type, value, traceback)

sys.excepthook = excepthook

# The next line will raise an error
MagicGetattr().foobar
You will get the following output:
Traceback (most recent call last):
File "test.py", line 49, in <module>
MagicGetattr().foobar
AttributeError: foobar not found
Note that this ignores the __cause__ and __context__ members of the exception, which you would probably want to visit too if you were to implement this in real life.
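A minimal sketch of what visiting the chain might look like, building on the MagicGetattr example above (it assumes the same class and orig_excepthook are in scope, and that tb_next is writable, i.e. Python 3.7+); this extension is an assumption, not part of the original answer:
def trim(tb):
    # Cut the traceback chain just before any frame that belongs to __getattr__.
    while tb is not None and tb.tb_next is not None:
        if tb.tb_next.tb_frame.f_code is MagicGetattr.__getattr__.__code__:
            tb.tb_next = None
            break
        tb = tb.tb_next

def excepthook(type, value, traceback):
    # Walk __cause__/__context__ so chained exceptions get the same treatment.
    exc, seen = value, set()
    while exc is not None and id(exc) not in seen:
        seen.add(id(exc))
        trim(exc.__traceback__)
        exc = exc.__cause__ or exc.__context__
    orig_excepthook(type, value, traceback)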
You can get the current frame, and any other level, using the inspect module. For instance, here is what I use when I'd like to know where I am in my code:
from inspect import currentframe

def get_c_frame(level=0):
    """
    Return the frame `level` frames up from this call
    (0 = this function's own frame).
    """
    frame = currentframe()
    for _ in range(level):
        frame = frame.f_back
    return frame

...

def locate_error(level=0):
    """
    Return a string containing the filename, function name and line
    number where this function was called.
    Output is : ('file name' - 'function name' - 'line number')
    """
    fi = get_c_frame(level=level + 2)
    return '({} - {} - {})'.format(__file__,
                                   fi.f_code.co_name,
                                   fi.f_lineno)

Handle exception in __init__

Is it fine to raise an exception in __init__ in Python? I have this piece of code:
class VersionManager(object):
    def __init__(self, path):
        self._path = path
        if not os.path.exists(path): os.mkdir(path)
        myfunction(path)
The os.mkdir call can potentially raise an exception. In that case the object will not be initialized properly. Is there a better way to handle situations where code in __init__ might throw an exception?
EDIT
Added a call to a function after os.mkdir
Added a check to see if directory exists
It is perfectly fine to raise an exception in __init__. You would then wrap the object initiation/creation call with try/except and react to the exception.
One potential odd result though is that __del__ is run anyway:
class Demo(object):
    def __init__(self, value):
        self.value = value
        if value == 2:
            raise ValueError
    def __del__(self):
        print '__del__', self.value

d = Demo(1)  # successfully create an object here
d = 22       # new int object labeled 'd'; old 'd' goes out of scope
             # '__del__ 1' is printed once a new name is put on old 'd'
             # since the object is deleted with no references
Now try with the value 2 that we are testing for:
Demo(2)
Traceback (most recent call last):
File "Untitled 3.py", line 11, in <module>
Demo(2)
File "Untitled 3.py", line 5, in __init__
raise ValueError
ValueError
__del__ 2 # But note that `__del__` is still run.
The creation of the object with value 2 raises a ValueError exception and shows that __del__ is still run to clean up the object.
Keep in mind that if you raise an exception during __init__, your object will not get a name. (It will, however, be created and destroyed; since __del__ is paired with __new__, it still gets called.)
i.e., just like this does not create x:
>>> x=1/0
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ZeroDivisionError: integer division or modulo by zero
>>> x
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'x' is not defined
Potentially sneakier:
>>> x='Old X'
>>> x=1/0
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ZeroDivisionError: division by zero
>>> x
'Old X'
Same thing if you catch an exception from __init__:
try:
    o = Demo(2)
except ValueError:
    print o  # NameError -- 'o' never gets bound to the object...
             # Worse still -- 'o' keeps its OLD value!
So don't try to refer to the incomplete object o -- it's gone out of scope by the time you get to except. And the name o is either nothing (i.e., NameError if you try to use it) or its old value.
So wrapping up (thanks to Steve Jessop for the User Defined Exception idea), you can wrap the creation of the object and catch the exception. Just figure out how to react appropriately to the OS error you are looking at.
So:
class ForbiddenTwoException(Exception):
pass
class Demo(object):
def __init__(self, value):
self.value=value
print 'trying to create with val:', value
if value==2:
raise ForbiddenTwoException
def __del__(self):
print '__del__', self.value
try:
o=Demo(2)
except ForbiddenTwoException:
print 'Doh! Cant create Demo with a "2"! Forbidden!!!'
# with your example - react to being unusable to create a directory...
Prints:
trying to create with val: 2
Doh! Cant create Demo with a "2"! Forbidden!!!
__del__ 2
You can wrap the call, as jramirez suggested:
try:
    ver = VersionManager(path)
except:
    raise
Or you can use a context manager:
class VersionManager(object):
    def __init__(self, path):
        # not-so-harmful code
        self.path = path
    def __enter__(self):
        try:
            os.mkdir(self.path)
            myfunction(self.path)
        except Exception as e:
            print e
            print "The directory making has failed, the function hasn't been executed."
        return self
    def __exit__(self, exc_type, exc_value, traceback):
        print(exc_type, exc_value, traceback)
And to run it:
with VersionManager(my_path) as myVersionManager:
    # do things you want with myVersionManager
This way, you'll catch errors inside the with statement as well.
You can use try/except when initializing the object.
try:
    ver = VersionManager(my_path)
except Exception as e:
    # raise e or handle error
    print e
My favourite is to simply output errors to console and march on:
import sys, os, traceback

class Myclass:
    def __init__(self, path):
        self._path = path
        # Risky code
        try:
            os.mkdir(path)
        except:
            traceback.print_exc(file=sys.stdout)
This way an exception will print out more like a warning rather than a real exception.
