Security issue in catching exceptions in outside methods? (Python)

Is there a security issue in catching exceptions through a separate function? For example, if I catch an exception inside the function itself:
def my_function(some_variable):
    try:
        int(some_variable)
    except ValueError:
        sys.exit()
    # other code running
As opposed to passing it through some sort of method:

def my_function(some_variable):
    check_error(some_variable)  # this does the try/except
    # other code running

def check_error(some_variable):
    # does the check
Is there a flaw or vulnerability that can be exploited in one or the other (by memory, etc.), or are the two methods shown above essentially the same thing? I believe that Python handles all memory management automatically, but I could be wrong. What I want to know is whether there is a reason why it is good practice to catch exceptions in the method itself or in an outside method, as shown in the two code samples above.
I understand if there is a lot of checking it is good for an exception method for readability and modularity, but is it good for security?
Also this might be a better question to put in chat, but I don't have that capability yet. Thanks in advance.
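For reference, the two approaches are equivalent from a security standpoint; a common variation on the second pattern is to have the helper raise (rather than exit) so the caller decides what to do. A minimal sketch, where the helper name and error message are made up:

```python
import sys

def check_error(some_variable):
    """Helper that does the check; raising lets the caller decide what to do."""
    try:
        int(some_variable)
    except ValueError:
        raise ValueError(f"{some_variable!r} is not an integer")

def my_function(some_variable):
    try:
        check_error(some_variable)
    except ValueError:
        sys.exit(1)
    # other code running
```

Either way Python's memory management is unaffected; the difference is purely about code organization.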

Related

Python Exception Handling: Is there a method to know what type of exceptions my code can possibly throw?

I have some code, let's say:

try:
    somecode()
except Exception as e:
    somelog()
Is there a way to find out all the possible exceptions somecode() can throw, so that I can handle them in an appropriate order?
While you may not always be able to know every error that may happen, you can do quite a bit by thinking of common cases. This link is a good starter's guide with examples:
https://www.pythonforbeginners.com/error-handling/exception-handling-in-python1
For raising exceptions you predict in your own functions, this is a good starter guide:
https://www.programiz.com/python-programming/user-defined-exception
Finally, when you're working with built-in functions or packages, they usually document what exceptions they raise. For example, look at the built-in functions page for Python:
https://docs.python.org/3/library/functions.html
and Ctrl-F "ValueError". A lot of docs will tell you what exceptions they raise, but beyond that it's up to you to anticipate and guess based on your implementation and usage.
Hope that helps!
There are not many exception types you need to consider for a single case. If you are trying to access a file or a database, the options are very few. The best practice is to keep up with the documentation; it will not take much time to find the name of the exception.
https://docs.python.org/3/tutorial/errors.html
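For instance, the docs for the built-in open() document FileNotFoundError among the errors it can raise. A small sketch of handling only the documented exception (the filename here is made up):

```python
# FileNotFoundError is documented for open(); handle just that case.
try:
    with open("config-that-does-not-exist.txt") as f:
        data = f.read()
except FileNotFoundError:
    data = ""  # fall back to a default when the file is missing

print(repr(data))
```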

Big exception handler or lots of try...except clauses

I have a question which is about code design in python.
I'm working on a certain project and I can see that there are a certain number of different types of errors that I have to handle often, which results in lots of places where a try...except clause repeats itself.
Now the question is: would it be preferable to create one exception handler (a decorator) and decorate with it all those functions that have those repeating errors?
The trade-off here is that if I create this exception-handler decorator, it will become quite a big class/function, which then forces the person reading the code to understand another piece of (maybe) complicated logic in order to understand how the errors are handled; whereas if I don't use the decorator, it's pretty clear to the reader how each error is handled.
Another option is to create multiple decorators for each of the types of the errors.
Or maybe just leave all those try...except clauses even though they are being repeated.
Any opinions on the matter and maybe other solutions? Thanks!
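For concreteness, the decorator option being debated might look something like this; the decorator name, the caught exception types, and the example function are all hypothetical:

```python
import functools

def handle_common_errors(func):
    """Hypothetical catch-and-log decorator for the repeated error types."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except (ValueError, KeyError) as exc:  # the errors that keep repeating
            print(f"{func.__name__} failed: {exc}")
            return None
    return wrapper

@handle_common_errors
def parse_age(text):
    return int(text)
```

Note that the handling logic now lives far from the functions it protects, which is exactly the readability trade-off described above.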
A lot of this is subjective, but I personally think it's better for exception handling code to be close to where the error is occurring, for the sake of readability and debugging ease. So:
The trade off here is that if I create this exception handler decorator it will become quite a big of a class/function
I would recommend against the Mr. Fixit class. When an error occurs and the debugger drops you into the Mr. Fixit, you then have to walk back quite a bit before you can figure out why the error happened, and what needs to be fixed to make it go away. Likewise, an unfamiliar developer reading your code loses the ability to understand just one small snippet pertaining to a particular error, and now has to work through a large class. As an added issue, a lot of what's in the Mr. Fixit is irrelevant to the one error they're looking at, and the place where the error handling occurs is in an entirely different place. With decorators especially, I feel like you are sacrificing readability (especially for someone less familiar with decorators than you) while gaining not much.
If written with some care, try/catch blocks are not very performance intensive and do not clutter up code too much. I would suggest erring on the side of more try/catches, with every try/catch close to what it's handling, so that you can tell at a glance how errors are handled for any given piece of code (without having to go to a different file).
If you are repeating code a lot, you can either refactor by making the code inside the catch a method that can be repeatedly called, or by making the code inside the try its own method that does its error handling inside its body. When in doubt, keep it simple.
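Both refactorings mentioned above might look something like this sketch (the function names and the default value are made up for illustration):

```python
def log_and_default(exc, default=None):
    """Shared handler: the repeated except-body extracted into one place."""
    print(f"recovering from {exc!r}")
    return default

def read_int(text):
    """Or: the try-body becomes its own method that handles its errors inside."""
    try:
        return int(text)
    except ValueError as exc:
        return log_and_default(exc, default=0)
```

Either way, the try/except stays right next to the code it protects, so a reader can see at a glance how errors are handled.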
Also, I hate being a close/off-topic Nazi so I won't flag, but I do think this question is more suited to Programmers#SE (being an abstract philosophy/conceptual question) and you might get better responses on that site.

Python: statically detect unhandled exceptions

I'm trying to write a highly-reliable piece of code in Python. The most common issue I run into is that after running for a while, some edge case will occur and raise an exception that I haven't handled. This most often happens when using external libraries - without reading the source code, I don't know of an easy way to get a list of all exceptions that might be raised when using a specific library function, so it's hard to know what to handle. I understand that it's bad practice to use catch-all handlers, so I haven't been doing that.
Is there a good solution for this? It seems like it should be possible for a static analysis tool to check that all exceptions are handled, but I haven't found one. Does this exist? If not, why? (is it impossible? a bad idea? etc) I especially would like it to analyze imported code for the reason explained above.
It's bad practice to use catch-all handlers to *ignore* exceptions.
Our web service has an except which wraps the main loop:

except:
    log_exception()
    attempt_recovery()
This is good, as it notifies us (necessary) of the unexpected error and then tries to recover (not necessary). We can then go look at those logs and figure out what went wrong so we can prevent it from hitting our general exception again.
This is what you want to avoid:
except:
    pass
Because it ignores the error... then you don't know an error happened and your data may be corrupted/invalid/gone/stolen by bears. Your server may be up/down/on fire. We have no idea because we ignored the exception.
Python doesn't require registering of what exceptions might be thrown, so there are no checks for all exceptions a module might throw, but most will give you some idea of what you should be ready to handle in the docs. Depending on your service, when it gets an unhandled exception, you might want to:
Log it and crash
Log it and attempt to continue
Log it and restart
Notice a trend? The action changes, but you never want to ignore it.
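A minimal sketch of the "log it and attempt to continue" option, wrapping a main loop the way the answer describes (the request handler and its inputs here are made up):

```python
import logging
import traceback

logging.basicConfig(level=logging.ERROR)

def handle_request(req):
    return 100 / req  # hypothetical work; may raise

def serve(requests):
    """Log-and-continue: never ignore the error, but keep serving."""
    results = []
    for req in requests:
        try:
            results.append(handle_request(req))
        except Exception:
            logging.error("request %r failed:\n%s", req, traceback.format_exc())
            results.append(None)  # continue with the next request
    return results
```

The key point is that the exception is recorded before the loop moves on, so the logs can later tell you what went wrong.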
Great question.
You can try approaching the problem statically (by, for instance, introducing a custom flake8 rule?), but, I think, it's a problem of testing and test coverage scope. I would approach the problem by adding more "negative path"/"error handling" checks/tests to the places where third-party packages are used, introducing mock side effects whenever needed and monitoring coverage reports at the same time.
I would also look into Mutation Testing idea (check out Cosmic Ray Python package). I am not sure if mutants can be configured to also throw exceptions, but see if it could be helpful.
Instead of trying to handle all exceptions, which is very hard as you described, why don't you catch all, but exclude some, for example the KeyboardInterrupt you mentioned in the comments?
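A sketch of that catch-all-but-exclude-some approach (the wrapper name is hypothetical):

```python
def run_safely(task):
    """Catch everything except the exceptions you want to propagate."""
    try:
        return task()
    except (KeyboardInterrupt, SystemExit):
        raise  # let these bubble up and stop the program
    except Exception as exc:
        print(f"unhandled: {exc!r}")
        return None
```

As an aside, `except Exception:` already excludes KeyboardInterrupt and SystemExit, since those derive from BaseException rather than Exception; the explicit re-raise above just makes the intent visible.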

Python exception base class vs. specific exception

I just wonder, when using try..except, what's the difference between using the base class "Exception" and using a specific exception like "ImportError" or "IOError" or any other specific exception? Are there pros and cons between one and the other?
Never catch the base exception. Always capture only the specific exceptions that you know how to handle. Everything else should be left alone; otherwise you are potentially hiding important errors.
Of course it has advantages to use the correct exception for the corresponding issue, and Python already defines exceptions for all the common coding problems. But you can also make your own exception classes by inheriting from the Exception class. This way you can create more meaningful errors for specific parts of your code. You can even make the exceptions print errors like this:
SomeError: 10 should have been 5.
This makes debugging the code easier.
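A minimal sketch of such a custom exception producing the message above (the class name and values are taken from the example):

```python
class SomeError(Exception):
    """Custom exception carrying the values that went wrong."""
    def __init__(self, got, expected):
        super().__init__(f"{got} should have been {expected}")

try:
    raise SomeError(10, 5)
except SomeError as exc:
    print(f"SomeError: {exc}")  # SomeError: 10 should have been 5
```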

What is the pythonic way to bubble up error conditions

I have been working on a Python project that has grown somewhat large and has several layers of functions. Due to some boneheaded early decisions, I'm finding that I have to go and fix a lot of crashes because the lower-level functions are returning a type I did not expect in the higher-level functions (usually None).
Before I go through and clean this up, I got to wondering what is the most pythonic way of indicating error conditions and handling them in higher functions?
What I have been doing for the most part is if a function can not complete and return its expected result, I'll return None. This gets a little gross, as you end up having to always check for None in all the functions that call it.
def lowLevel():
    ## some error occurred
    return None
    ## processing was good, return normal result string
    return resultString

def highLevel():
    resultFromLow = lowLevel()
    if not resultFromLow:
        return None
    ## some processing error occurred
    return None
    ## processing was good, return normal result string
    return resultString
I guess another solution might be to throw exceptions. With that you still get a lot of code in the calling functions to handle the exception.
Nothing seems super elegant. What do other people use? In obj-c a common pattern is to return an error parameter by reference, and then the caller checks that.
It really depends on what you want to do about the fact this processing can't be completed. If these are really exceptions to the ordinary control flow, and you need to clean up, bail out, email people etc. then exceptions are what you want to throw. In this case the handling code is just necessary work, and is basically unavoidable, although you can rethrow up the stack and handle the errors in one place to keep it tidier.
If these errors can be tolerated, then one elegant solution is to use the Null Object pattern. Rather than returning None, you return an object that mimics the interface of the real object that would normally be returned, but just doesn't do anything. This allows all code downstream of the failure to continue to operate, oblivious to the fact there was a failure. The main downside of this pattern is that it can make it hard to spot real errors, since your code will not crash, but may not produce anything useful at the end of its run.
A common example of the Null Object pattern in Python is returning an empty list or dict when your lower-level function has come up empty. Any subsequent function using this returned value and iterating through its elements will just fall through silently, without the need for error checking. Of course, if you require the list to have at least one element for some reason, then this won't work, and you're back to handling an exceptional situation again.
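A sketch of that empty-list variant of the Null Object pattern (the function name and data are made up):

```python
def find_users(query):
    """Return an empty list instead of None when nothing matches."""
    db = {"admins": ["alice", "bob"]}  # hypothetical data source
    return db.get(query, [])  # [] is the "null object" here

# Callers iterate without checking for None; a miss silently yields nothing:
names = [u.upper() for u in find_users("no-such-group")]
```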
On the bright side, you have discovered exactly the problem with using a return value to indicate an error condition.
Exceptions are the Pythonic way to deal with problems. The question you have to answer (and I suspect you already did) is: is there a useful default that can be returned by low-level functions? If so, take it and run with it; otherwise, raise an exception (ValueError, TypeError, or even a custom error).
Then, further up the call stack, where you know how to deal with the problem, catch the exception and deal with it. You don't have to catch exceptions immediately: if high_level calls mid_level, which calls low_level, it's okay to not have any try/except in mid_level and let high_level deal with it. It may be that all you can do is have a try/except at the top of your program to catch and log all uncaught and undealt-with errors, and that can be okay.
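That layering can be sketched as follows; the three function names mirror the answer's description, and the fallback value is made up:

```python
def low_level(value):
    """The place where the problem is detected: raise, don't return None."""
    if value < 0:
        raise ValueError("value must be non-negative")
    return value ** 0.5

def mid_level(value):
    # No try/except here: the exception bubbles up untouched.
    return low_level(value) * 2

def high_level(value):
    """The one place that knows how to recover."""
    try:
        return mid_level(value)
    except ValueError:
        return 0.0  # hypothetical safe default
```

This removes all the `if result is None: return None` plumbing from the middle layers.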
This is not necessarily Pythonic as such, but experience has taught me to let exceptions "lie where they lie".
That is to say; Don't unnecessarily hide them or re-raise a different exception.
It's sometimes good practice to let the callee fail rather than trying to capture and hide all kinds of error conditions.
Obviously this topic is and can be a little subjective; but if you don't hide or raise a different exception, then it's much easier to debug your code, and much easier for the caller of your functions or API to understand what went wrong.
Note: This answer is not complete -- see comments. Some or all of the answers presented in this Q&A should probably be combined in a nice way, presenting the various problems and solutions in a clear and concise manner.
