Each time I write a function that has one or more parameters, the first thing I do in the function body is check that the received value of each parameter is valid.
For example:
def turn_on_or_off_power_supplies(engine, power_supply, switch_on_or_off=RemoteRebootConst.OFF):
    '''
    #summary:
        The function turns off or turns on the switch.
    #param engine:
        SSH connection to the switch
    #param switch_on_or_off:
        This sets whether to turn off, or turn on the switch.
        Use the following constants in infra_constants.py:
        RemoteRebootConst.OFF = off
        RemoteRebootConst.ON = on
    '''
    if switch_on_or_off != RemoteRebootConst.ON and switch_on_or_off != RemoteRebootConst.OFF:
        raise Exception("-ERROR TEST FAILED \n"
                        "Function Name: turn_on_or_off_power_supplies \n"
                        "Parameter 'switch_on_or_off' \n"
                        "Expected value: %s or %s \n"
                        "Actual value: %s"
                        % (RemoteRebootConst.ON, RemoteRebootConst.OFF, switch_on_or_off))
Output omitted...
Why do I do this? Because this way, if an exception is raised, the developer who is debugging can immediately see in which function the problem happened, what the expected value(s) are, and what the actual value is. This kind of error message makes debugging such problems much easier.
As said earlier, if there are more parameters, I repeat the above code for EACH parameter.
This makes my function body much bigger in terms of lines of code.
Is there a more Pythonic way to print all of the above details (function name, parameter name, expected values, actual value) when an exception is raised because of an invalid parameter value?
I could write a function that receives four parameters (function name, parameter name, expected value(s), actual value) and call it each time I want to raise this kind of exception; that would reduce each such check to a single line of code instead of several.
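For example, something along these lines (just a rough sketch; the helper name and exact signature are made up):

def check_param(function_name, param_name, expected_values, actual_value):
    '''Raise a descriptive exception if actual_value is not one of expected_values.'''
    if actual_value not in expected_values:
        raise Exception("-ERROR TEST FAILED \n"
                        "Function Name: %s \n"
                        "Parameter '%s' \n"
                        "Expected value: %s \n"
                        "Actual value: %s"
                        % (function_name, param_name, expected_values, actual_value))

# then one line per parameter inside each function, e.g.:
check_param("turn_on_or_off_power_supplies", "switch_on_or_off",
            (RemoteRebootConst.ON, RemoteRebootConst.OFF), switch_on_or_off)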
I will appreciate any help here.
Thanks in advance.
Note that checking the types of your arguments works against duck typing, which is part of the Python philosophy. If you want another developer to be able to debug your code more easily, you should use an assertion:
def turn_on_or_off_power_supplies(engine, power_supply, switch_on_or_off=RemoteRebootConst.OFF):
    assert switch_on_or_off in {RemoteRebootConst.ON, RemoteRebootConst.OFF}, 'wrong argument switch_on_or_off'
For more about assertions, look here.
Note that assertions will be removed if you run your code with the -O flag.
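A quick way to see the effect of -O:

$ python -c "assert False, 'checked'"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
AssertionError: checked
$ python -O -c "assert False, 'checked'"
$ # no output: the assert statement was stripped by -O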
Related
I am searching for a way to change the printable output of an exception to a silly message, in order to learn more about Python internals (and to mess with a friend ;). So far, no success.
Consider the following code
try:
    x  # is not defined
except NameError as exc:
    print(exc)
The code outputs name 'x' is not defined.
I would like to change that output to the name 'x' you suggested is not yet defined, my lord. Improve your coding skills.
So far, I understood that you can't change __builtins__ because they're "baked in" as C code, unless:
You use forbiddenfruit.curse method which adds / changes properties of any object
You manually override the dictionaries of an object
I've tried both solutions, but without success:
forbiddenfruit solution:
from forbiddenfruit import curse

curse(BaseException, 'repr', lambda self: print("Test message for repr"))
curse(BaseException, 'str', lambda self: print("Test message for str"))

try:
    x
except NameError as exc:
    print(exc.str())    # Works, shows test message
    print(exc.repr())   # Works, shows test message
    print(repr(exc))    # Does not work, shows real message
    print(str(exc))     # Does not work, shows real message
    print(exc)          # Does not work, shows real message
Dictionary overriding solution:
import gc

underlying_dict = gc.get_referents(BaseException.__dict__)[0]
underlying_dict["__repr__"] = lambda self: print("test message for repr")
underlying_dict["__str__"] = lambda self: print("test message for str")
underlying_dict["args"] = 'I am an argument list'

try:
    x
except NameError as exc:
    print(exc.__str__())   # Works, shows test message
    print(exc.__repr__())  # Works, shows test message
    print(repr(exc))       # Does not work, shows real message
    print(str(exc))        # Does not work, shows real message
    print(exc)             # Does not work, shows real message
AFAIK, using print(exc) should rely on either __repr__ or __str__, but it seems like the print function uses something else, which I cannot find even when reading all properties of BaseException via print(dir(BaseException)).
Could anyone give me some insight into what print uses in this case, please?
[EDIT]
To add a bit more context:
The problem I'm trying to solve began as a joke to mess with a programmer friend, but it has now become a challenge for me to understand more of Python's internals.
There's no real business problem I'm trying to solve, I just want to get deeper understanding of things in Python.
I'm quite puzzled that print(exc) won't make use of BaseException.__repr__ or __str__ actually.
[/EDIT]
Intro
I'd go with a more critical approach on why you'd even want to do what you want to do.
Python provides you with an ability to handle specific exceptions. That means if you had a business problem, you'd use a particular exception class and provide a custom message for that specific case. Now, remember this paragraph and let's move on, I'll refer to this later.
TL;DR
Now, let's go top-down:
Catching all kinds of errors with except Exception is generally not a good idea if what you want to catch is, say, a variable name error. You'd use except NameError instead. There's really not much you'd add to it, which is why it has a default message that perfectly describes the issue. So it's assumed you'd use it as given. These are called concrete exceptions.
Now, in your specific case, notice the alias as exc. By using the alias you can access the arguments passed to the exception object, including the default message.
try:
    x  # is not defined
except NameError as exc:
    print(exc.args)
Run that code (I put it in app.py) and you'll see:
$ python app.py
("name 'x' is not defined",)
These args are passed to the exception as a sequence (a list, or in this case an immutable list, i.e. a tuple).
This leads to the idea of the possibility of easily passing arguments to exceptions' constructors (__init__). In your case "name 'x' is not defined" was passed as an argument.
You can use this to your advantage to solve your problem without much effort by just providing a custom message, like:
try:
    x  # is not defined
except NameError as exc:
    your_custom_message = "the name 'x' you suggested is not yet defined, my lord. Improve your coding skills"
    # Now, you can handle it based on your requirement:
    # print(your_custom_message)
    # print(NameError(your_custom_message))
    # raise NameError(your_custom_message)
    # raise NameError(your_custom_message) from exc
The output is now what you wanted to achieve.
$ python app.py
the name 'x' you suggested is not yet defined, my lord. Improve your coding skills
Remember the first paragraph when I said I'd refer to it later? I mentioned providing a custom message for a specific case. If you build your own library and want to handle name errors for specific variables relevant to your product, you assume your users will use your code, which might raise that NameError exception. They will most likely catch it with except Exception as exc or except NameError as exc. And when they do print(exc), they will now see your message.
Summary
I hope that makes sense to you, just provide a custom message and pass it as an argument to NameError or simply just print it. IMO, it's better to learn it right together with why you'd use what you use.
Errors like this are hard-coded into the interpreter (in the case of CPython, anyway, which is most likely what you are using). You will not be able to change the message printed from within Python itself.
The C source code that is executed when the CPython interpreter tries to look up a name can be found here: https://github.com/python/cpython/blob/master/Python/ceval.c#L2602. If you want to change the error message printed when a name lookup fails, you need to change this line in the same file:
#define NAME_ERROR_MSG \
"name '%.200s' is not defined"
Compiling the modified source code would yield a Python interpreter that prints your custom error message when encountering a name that is not defined.
I'll just explain the behaviour you described:
exc.__repr__()
This just calls your lambda function, which shows the test message. By the way, you should return the string rather than print it in your lambda functions.
print(repr(exc))
Now, this is going a different route in CPython and you can see this in a GDB session, it's something like this:
Python/bltinmodule.c:builtin_repr will call Objects/object.c:PyObject_Repr - this function gets the PyObject *v as the only parameter that it will use to get and call a function that implements the built-in function repr(), BaseException_repr in this case. This function will format the error message based on a value from args structure field:
(gdb) p ((PyBaseExceptionObject *) self)->args
$188 = ("name 'x' is not defined",)
The args value is set in Python/ceval.c:format_exc_check_arg based on a NAME_ERROR_MSG macro set in the same file.
Update: Sun 8 Nov 20:19:26 UTC 2020
test.py:
import sys
import dis


def main():
    try:
        x
    except NameError as exc:
        tb = sys.exc_info()[2]
        frame, i = tb.tb_frame, tb.tb_lasti
        code = frame.f_code
        arg = code.co_code[i + 1]
        name = code.co_names[arg]
        print(name)


if __name__ == '__main__':
    main()
Test:
# python test.py
x
Note:
I would also recommend watching this video from PyCon 2016.
What does "Error return without exception set" commonly mean? I am trying to call a Python method that should return a list.
for facet in dispPart.FacetedBodies:
    # Tag
    facet_tag2 = thisFctBody.Tag
    # calling this method returns an "error return without exception set"
    face_list = facet.GetFaces()
This is what I've found on it so far
You should tell us what third-party Python module you are using - this error message implies there is something wrong in that code, not yours. (OK, from your link, it is the "NXOpen Python API".)
Specifically, the third-party module, interfacing with the Python C API, returned an incorrect result: it reported that an exception should have been raised, but did not tell Python which exception that is.
It is possible that there is something incorrect in your input data that, if fixed, could allow for normal return of the desired call - nonetheless it is buggy as it is.
One thing you might try is to call some other method on your facet object to check if, for example, it is not empty.
Does Python have a feature that allows one to evaluate a function or expression and, if the evaluation fails (an exception is raised), return a default value?
Pseudo-code:
evaluator(function/expression, default_value)
The evaluator will try to execute the function or expression and return the result if the execution is successful; otherwise the default_value is returned.
I know I could create a user-defined function using try and except to achieve this, but I want to know if the batteries are already included before going off and creating a custom solution.
In order to reuse code, you can create a decorating function (that accepts a default value) and decorate your functions with it:
def handle_exceptions(default):
    def wrap(f):
        def inner(*a):
            try:
                return f(*a)
            except Exception:
                return default
        return inner
    return wrap
Now let's see an example:
@handle_exceptions("Invalid Argument")
def test(num):
    return 15 / num

@handle_exceptions("Input should be Strings only!")
def test2(s1, s2):
    return s2 in s1

print(test(0))            # "Invalid Argument"
print(test(15))           # 1.0
print(test2("abc", "b"))  # True
print(test2("abc", 1))    # Input should be Strings only!
No, the standard way to do this is with try... except.
There is no mechanism to hide or suppress any generic exception within a function. I suspect many Python users would consider indiscriminate use of such a function to be un-Pythonic for a couple reasons:
It hides information about what particular exception occurred. (You might not want to handle all exceptions, since some could come from other libraries and indicate conditions that your program can't recover from, like running out of disk space.)
It hides the fact that an exception occurred at all; the default value returned in case of an exception might coincide with a valid non-default value. (Sometimes reasonable, sometimes not really so.)
One of the principles of the Pythonic philosophy, I believe, is that "explicit is better than implicit," so Python generally avoids automatic type casting and error recovery, which are features of more "implicit-friendly" languages like Perl.
Although the try... except form can be a bit verbose, in my opinion it has a lot of advantages in terms of clearly showing where an exception may occur and what the control flow is around that exception.
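For comparison, the explicit try... except version of the "default on failure" idea looks like this (the function and values here are made up; note that the except clause names only the failure you actually expect):

def parse_port(text, default=8080):
    try:
        return int(text)
    except ValueError:  # only the expected failure, not every exception
        return default

print(parse_port("9000"))   # 9000
print(parse_port("oops"))   # 8080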
This one's a structure design problem, I guess. Back for some advice.
To start: I'm writing a module. Hence the effort of making it as usable to potential developers as possible.
Inside an object (let's call it Swoosh) I have a method which, when called, may result in either success (a new object is returned -- for insight: it's an httplib.HTTPResponse) or failure (surprising, isn't it?).
I'm having trouble deciding how to handle failures. There are two main cases here:
user supplied data that was incorrect
data was okay, but user interaction will be needed - I need to pass back to the user a string that he or she will need to use in some way.
In (1) I decided to raise ValueError() with an appropriate description.
In (2), as I need to actually pass a str back to the user, I'm not sure whether it would be best to just return a string and leave it to the user to check what the function returned (httplib.HTTPResponse or str), or to raise a custom exception. Is passing data through raising exceptions a good idea? I don't think I've seen this done anywhere, but on the other hand - I haven't seen much.
What would you, as a developer, expect from an object/function like this?
Or perhaps you find the whole design ridiculous - let me know, I'll happily learn.
As much as I like the approach of handling both cases with specifically-typed exceptions, I'm going to offer a different approach in case it helps: callbacks.
Callbacks tend to work better if you're already using an asynchronous framework like Twisted, but that's not their only place. So you might have a method that takes a function for each outcome, like this:
def do_request(on_success, on_interaction_needed, on_failure):
    """
    Submits the swoosh request, and awaits a response.

    If no user interaction is needed, calls on_success with a
    httplib.HTTPResponse object.

    If user interaction is needed, on_interaction_needed is
    called with a single string parameter.

    If the request failed, a ValueError is passed to on_failure.
    """
    response = submit_request()
    if response.is_fine():
        on_success(response)
    elif response.is_partial():
        on_interaction_needed(response.message)
    else:
        on_failure(ValueError(response.message))
Being Python, there are a million ways to do this. You might not like passing an exception to a function, so you maybe just take a callback for the user input scenario. Also, you might pass the callbacks in to the Swoosh initialiser instead.
But there are drawbacks to this too, such as:
Carelessness may result in spaghetti code
You're allowing your caller to inject logic into your function (e.g. exceptions raised in the callback will propagate out of Swoosh)
My example here is simple, your actual function might not be
As usual, careful consideration and good documentation should avoid these problems. In theory.
I think raising an exception may actually be a pretty good idea in this case. Squashing multiple signals into a single return value of a function isn't ideal in Python, due to duck typing. It's not very Pythonic; every time you need to do something like:
result = some_function(...)
if isinstance(result, TypeA):
    do_something(result)
elif isinstance(result, TypeB):
    do_something_else(result)
you should be thinking about whether it's really the best design (as you're doing).
In this case, if you implement a custom exception, then the code that calls your function can just treat the returned value as an HTTPResponse. Any path where the function is unable to return something its caller can treat that way is handled by throwing an exception.
Likewise, the code that catches the exception and prompts the user with the message doesn't have to worry about the exact type of the thing it's getting. It just knows that it's been explicitly instructed (by the exception) to show something to the user.
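As a small sketch of that idea (the class and names below are invented for illustration, not taken from the actual Swoosh code):

class UserInteractionNeeded(Exception):
    """Raised when the caller must show a string to the user."""

def make_request(partial):
    if partial:
        # the string travels inside the exception
        raise UserInteractionNeeded("please confirm code 1234")
    return "normal HTTPResponse-like result"

try:
    result = make_request(partial=True)
except UserInteractionNeeded as exc:
    print("show to user:", exc)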
If the user interaction case means the calling code has to show a prompt, get some input and then pass control back to your function, it might be ugly trying to handle that with an exception. E.g.:
try:
    Swoosh.method()
except UserInteraction as ex:
    # do some user interaction stuff
    # pass it back to Swoosh.method()?
    # did Swoosh need to save some state from the last call?
    ...
except ValueError:
    pass  # whatever
If this user interaction is a normal part of the control flow, it might be cleaner to pass a user-interaction function into your method in the first place - then it can return a result to the Swoosh code. For example:
# in Swoosh
def method(self, userinteractor):
    if more_info_needed:
        more_info = userinteractor.prompt("more info")
    ...

ui = MyUserInteractor(self)  # or other state
Swoosh.method(ui)
You can return a tuple of (httplib.HTTPResponse, str) with the str being optionally None.
Definitely raise an exception for 1).
If you don't like returning a tuple, you can also create a "response object", i.e. an instance of a new class (let's say SomethingResponse) that encapsulates the HTTPResponse with optional messages to the end user (in the simplest case, just a str).
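A minimal sketch of what such a response object could look like (the field names are just an example):

class SomethingResponse:
    def __init__(self, http_response=None, user_message=None):
        self.http_response = http_response  # httplib.HTTPResponse on success
        self.user_message = user_message    # str the user must act on, or None

    @property
    def needs_interaction(self):
        return self.user_message is not None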
I was wondering about the best practices for indicating invalid argument combinations in Python. I've come across a few situations where you have a function like so:
def import_to_orm(name, save=False, recurse=False):
    """
    :param name: Name of some external entity to import.
    :param save: Save the ORM object before returning.
    :param recurse: Attempt to import associated objects as well. Because you
        need the original object to have a key to relate to, save must be
        `True` for recurse to be `True`.
    :raise BadValueError: If `recurse and not save`.
    :return: The ORM object.
    """
    pass
The only annoyance with this is that every package has its own, usually slightly differing BadValueError. I know that in Java there exists java.lang.IllegalArgumentException -- is it well understood that everybody will be creating their own BadValueErrors in Python or is there another, preferred method?
I would just raise ValueError, unless you need a more specific exception...
def import_to_orm(name, save=False, recurse=False):
    if recurse and not save:
        raise ValueError("save must be True if recurse is True")
There's really no point in doing class BadValueError(ValueError):pass - your custom class is identical in use to ValueError, so why not use that?
I would inherit from ValueError
class IllegalArgumentError(ValueError):
    pass
It is sometimes better to create your own exceptions, but inherit from a built-in one, which is as close to what you want as possible.
If you need to catch that specific error, it is helpful to have a name.
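For example (a sketch; the body of import_to_orm below is just a stand-in so the snippet runs on its own):

class IllegalArgumentError(ValueError):
    pass

def import_to_orm(name, save=False, recurse=False):
    if recurse and not save:
        raise IllegalArgumentError("recurse=True requires save=True")
    return name  # stand-in for the real ORM object

try:
    import_to_orm("widget", recurse=True)
except IllegalArgumentError as exc:  # callers can single out exactly this case
    print("fix the call:", exc)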
I think the best way to handle this is the way python itself handles it. Python raises a TypeError. For example:
$ python -c 'print(sum())'
Traceback (most recent call last):
File "<string>", line 1, in <module>
TypeError: sum expected at least 1 arguments, got 0
Our junior dev just found this page in a google search for "python exception wrong arguments" and I'm surprised that the obvious (to me) answer wasn't ever suggested in the decade since this question was asked.
It depends on what the problem with the arguments is.
If the argument has the wrong type, raise a TypeError. For example, when you get a string instead of one of those Booleans.
if not isinstance(save, bool):
    raise TypeError(f"Argument save must be of type bool, not {type(save)}")
Note, however, that in Python we rarely make any checks like this. If the argument really is invalid, some deeper function will probably do the complaining for us. And if we only check the boolean value, perhaps some code user will later just feed it a string knowing that non-empty strings are always True. It might save him a cast.
If the arguments have invalid values, raise ValueError. This seems more appropriate in your case:
if recurse and not save:
    raise ValueError("If recurse is True, save should be True too")
Or in this specific case, have a True value of recurse imply a True value of save. Since I would consider this a recovery from an error, you might also want to complain in the log.
if recurse and not save:
    logging.warning("Bad arguments in import_to_orm() - if recurse is True, so should save be")
    save = True
I've mostly just seen the builtin ValueError used in this situation.
You would most likely use ValueError (raise ValueError() in full) in this case, but it depends on the type of bad value. For example, if you made a function that only allows strings and the user passed in an integer instead, you would use TypeError. If the user gave a wrong input (meaning it has the right type but does not satisfy certain conditions), ValueError would be your best choice. ValueError can also be used to protect the program from other exceptions; for example, you could use a ValueError to stop the shell from raising a ZeroDivisionError, as in this function:
def function(number):
    if not type(number) == int and not type(number) == float:
        raise TypeError("number must be an integer or float")
    if number == 5:
        raise ValueError("number must not be 5")
    else:
        return 10 / (5 - number)
P.S. For a list of Python built-in exceptions, go here:
https://docs.python.org/3/library/exceptions.html (this is the official Python documentation)
Agree with Markus' suggestion to roll your own exception, but the text of the exception should clarify that the problem is in the argument list, not the individual argument values. I'd propose:
class BadCallError(ValueError):
    pass
Used when keyword arguments are missing that were required for the specific call, or argument values are individually valid but inconsistent with each other. ValueError would still be right when a specific argument is the right type but out of range.
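A sketch of how that would read at the point where the call is validated (the function body is a stand-in):

def import_to_orm(name, save=False, recurse=False):
    if recurse and not save:
        # the combination of arguments is wrong, not any single value
        raise BadCallError("recurse=True is only valid together with save=True")
    ...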
Shouldn't this be a standard exception in Python?
In general, I'd like Python style to be a bit sharper in distinguishing bad inputs to a function (caller's fault) from bad results within the function (my fault). So there might also be a BadArgumentError to distinguish value errors in arguments from value errors in locals.
I'm not sure I agree with inheritance from ValueError -- my interpretation of the documentation is that ValueError is only supposed to be raised by builtins... inheriting from it or raising it yourself seems incorrect.
Raised when a built-in operation or function receives an argument that has the right type but an inappropriate value, and the situation is not described by a more precise exception such as IndexError.
-- ValueError documentation