Writing wrapper functions and handling exceptions correctly in Python

I am writing test scripts using Selenium WebDriver and I have created a simple framework for step-by-step test execution. My goal was to improve the readability of the code as well as the logic of the test scenarios. To do that I wrote some wrappers that group related logic together.
The main ideas are as follows:
1. The application (webpage) that is being tested is defined as a class.
2. Big chunks of code that define a single logical test step are defined as methods of that class. For example, a class WebPage1 with a method such as logIn.
3. Each method (as described in 2) is then wrapped during the execution phase to handle any exceptions that might be raised.
Additionally, I have a waitForElement wrapper function that waits for a certain condition on the webpage. If that condition is not met within a pre-defined period of time, it raises a TimeoutException which is handled internally, within the same function.
So to give some idea of what I am talking about, please consider the below example:
class Application(object):

    def __init__(self):
        # Some code

    def waitForElement(self, elementName, expectedCondition, searchBy):
        try:
            # conditions
        except TimeoutException:
            # handling

    def logIn(self):
        # code
        self.waitForElement(...)
        # code

app = Application()  # initialize an object of class Application

# The below chunk of code attempts to complete the logIn() test step and if ANY
# exceptions are raised during its execution they will be caught by the BaseException
try:
    app.logIn()
except BaseException as exception:
    # handling
So, you can see that the whole step (logIn() in this case) is wrapped inside a try, and there is an additional wrapper inside waitForElement.
Now I want to fail the whole execution if a TimeoutException is raised during waitForElement, and I want to catch it in the outer wrapper (the BaseException handler). And I still want to handle it locally (inside waitForElement).
My first idea was to raise an additional exception after the initial one is handled. So I did something like this:
def waitForElement(self, elementName, expectedCondition, searchBy):
    try:
        # conditions
    except TimeoutException:
        # handling
        raise SystemExit
So my 2 questions are:
1. Does that approach make sense? It doesn't look very clean to me, so maybe I am missing something? Is there another approach, perhaps?
2. Ideally, I still want to get the same TimeoutException rather than SystemExit in the outer handler. If my solution is OK, should I just re-raise the TimeoutException?
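For reference, a bare raise inside an except block re-raises the exception that is currently being handled, so the local handling and the propagation to the outer wrapper can be combined. A minimal sketch of that option, based on the example above (the print call is only a placeholder for whatever local handling you do):

from selenium.common.exceptions import TimeoutException

class Application(object):
    def waitForElement(self, elementName, expectedCondition, searchBy):
        try:
            ...  # conditions
        except TimeoutException:
            # local handling (placeholder), e.g. logging or taking a screenshot
            print("Timed out waiting for %s" % elementName)
            raise  # re-raises the same TimeoutException so the outer handler sees it too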
P.S. I hope all that is clear and makes sense. Please let me know in the comments if I should refine my description.

Related

Suppress an Exception but Continue after the Exception, not after the Suppression

Assume the following minimal example:
def external_code():
    for i in range(10):
        if i == 7:
            raise ValueError("I don't like sevens.")
        print(i)

external_code()
When suppressing the exception either through handling it
try:
    external_code()
except ValueError:
    pass
or by suppression through contextlib.suppress()
from contextlib import suppress

with suppress(ValueError):
    external_code()
the exception is not propagated, but the code after the raise inside external_code() is never executed; execution instead continues in the except block or after the suppression.
Is it possible to suppress an exception and then continue with the external code, as if the exception had never been raised? In the code examples above, this would cause all 10 numbers to be printed, instead of only 0 to 6.
I need this because an external library (TensorFlow) raises an exception it should not raise. In terms of the minimal example above, this means that I cannot edit the code in the function; I can only put code around its call. I could comment out the exception in TensorFlow, but that is tedious to maintain across TF updates and would also prevent the exception from occurring in other cases where it is actually appropriate.
As a workaround you could redefine the method at any time:
class SomeTensorFlowClass(object):
    def that_method(self, n=10):
        for i in range(n):
            if i == 7:
                raise ValueError()
            print(i)

stf = SomeTensorFlowClass()

def that_method(n=10):
    self = stf  # for non-static methods
    for i in range(n):
        print(i)

stf.that_method = that_method
This way, you still "update" TensorFlow, but you can undo it directly after the situation.
If you switch often, you can add a new flag like seven_is_ok=False to skip the Exception conditionally.
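A minimal sketch of that flag variant, building on the replacement above (stf is the instance from the snippet, and the flag name is only illustrative):

def that_method(n=10, seven_is_ok=False):
    self = stf  # for non-static methods, as before
    for i in range(n):
        if i == 7 and not seven_is_ok:
            raise ValueError()
        print(i)

stf.that_method = that_method

stf.that_method(seven_is_ok=True)  # prints 0..9
stf.that_method()                  # raises again, as the library intended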
If you want to avoid copying library source code, you can retrieve it from the live object with inspect.getsource(stf.that_method) followed by the desired replacements and exec("global that_method\n" + adapted_source_code). Be aware that indentation changes and forgotten imports may only mark the beginning of potentially many troubles with the inspect-based workaround.
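A rough sketch of that inspect-based route, assuming the class lives in an importable module (inspect.getsource cannot read source typed into a REPL); this variant exec's into an explicit namespace instead of the global trick mentioned above:

import inspect
import textwrap
import types

source = inspect.getsource(type(stf).that_method)      # original method source
source = textwrap.dedent(source)                       # strip class-level indentation
patched_source = source.replace("raise ValueError()",  # neutralize the offending line
                                "pass  # suppressed")

namespace = {}
exec(patched_source, namespace)                        # defines a plain that_method function
stf.that_method = types.MethodType(namespace["that_method"], stf)  # rebind to the instance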
Clearly, none of that is as beautiful as patching the library, but I assume you have reasons for not doing that now.

Why does Python's ThreadPoolExecutor suppress all exceptions and how can I fix this?

I am relatively new to Python, and I am having difficulty understanding the behavior of this pool. I have attempted to view similar questions on Stack Overflow, but I am still having difficulty comprehending this problem.
For example, if we use an Executor:
with ThreadPoolExecutor(8) as pool:
    pool.map(doSomething, test_list)
All errors that occur within the pool are suppressed and not shown.
Is there any parameter available so the pool stops execution and shows the exceptions?
Thanks in advance!
I have found a way to work around this problem indirectly. It's actually pretty simple, so I'd like to post it here in case someone else runs into this problem. I don't know if it's very clever, but for me this solution was fine.
First of all, thanks to @artiom: as he already said, it is best to catch the error directly in the parallelized function.
Now the "bypass":
Put a try/except block inside the function, around the whole body, and catch all exceptions with except Exception as e.
The exception is simply logged in the except block, for example with sentry or another logging framework. With that, you can at least look at the thrown exceptions again and fix your mistakes in your code.
Simple example:
from concurrent.futures import ThreadPoolExecutor

def for_parallel(chunk):
    for one in chunk:
        try:
            a = 1
            foo(one['bla'])
            doSomething(one['blu'])
        except Exception as e:
            sentry.capture_exception(e)

def main():
    with ThreadPoolExecutor(8) as pool:
        pool.map(for_parallel, my_list)
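For completeness: Executor.map only re-raises a worker's exception when its results are iterated, so another option (a minimal sketch, assuming doSomething and test_list from the question) is to submit the jobs explicitly and call result() on each future, which re-raises any worker exception in the main thread:

from concurrent.futures import ThreadPoolExecutor, as_completed

with ThreadPoolExecutor(8) as pool:
    futures = [pool.submit(doSomething, item) for item in test_list]
    for future in as_completed(futures):
        future.result()  # raises here if the corresponding worker raised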

Python: Make exceptions 'exiting'

In Python, is there any (proper) way to change the default exception handling behaviour so that any uncaught exception will terminate/exit the program?
I don't want to wrap the entire program in a generic try-except block:
try:
    # write code here
except Exception:
    sys.exit(1)
For those asking for more specificity and/or claiming this is already the case, it's my understanding that not all Python exceptions are system-exiting: docs
Edit: It looks like I have forked processes complicating matters so won't be posting any specific details about my own mess.
If you're looking for an answer to the original question, Dmitry's comment is interesting and useful; it references the 2nd answer to this question.
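For reference, one generic way to get the behaviour asked for (a minimal sketch, not taken from the linked answer) is to install a top-level hook; note that sys.excepthook only covers uncaught exceptions in the main thread of a process:

import os
import sys
import traceback

def die_on_uncaught(exc_type, exc_value, exc_traceback):
    traceback.print_exception(exc_type, exc_value, exc_traceback)
    os._exit(1)  # hard exit; skips cleanup handlers, so use deliberately

sys.excepthook = die_on_uncaught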
You can catch specific exceptions instead of Exception, because Exception is the base class for most built-in exceptions. For more details, refer to the exceptions tutorial.
You can write your script like this:
import sys

try:
    # write code here
except OverflowError:
    raise SystemExit
except ArithmeticError:
    sys.exit()
except IOError:
    quit()
Try these different approaches to find out what exactly you are missing.
Edit 1 - Maintain Program Execution
In order to keep your program running, try this: suppose an error_handler function raises a SystemExit exception; then, in your main method, you need to add the code below so your program keeps executing.
try:
    error_handler()
except SystemExit:
    print("sys.exit was called but I'm proceeding anyway (so there!-).")

Preventing a Python function from further execution

I want to know the best way of checking a condition in a Python function and preventing further execution if the condition is not satisfied. Right now I am following the scheme below, but it actually prints the whole stack trace. I want it to print only an error message and not execute the rest of the code. Is there a cleaner solution for doing this?
def Mydef(n1, n2):
    if n1 > n2:
        raise ValueError("Arg1 should be less than Arg2")
    # Some Code

Mydef(2, 1)
That is what exceptions are made for. Your scheme of raising an exception is good in general; you just need to add some code to catch and process it:
try:
    Mydef(2, 1)
except ValueError as e:
    # Do some stuff when the exception is raised; str(e) will contain your message
    pass
In this case, execution of Mydef stops when it encounters the raise ValueError line and control goes to the code block under except.
You can read more about exceptions processing in the documentation.
If you don't want to deal with exception processing, you can gracefully stop the function from executing further code with a return statement.
def Mydef(n1, n2):
    if n1 > n2:
        return

def Mydef(n1, n2):
    if n1 > n2:
        print("Arg1 should be less than Arg2")
        return None
    # Some Code

Mydef(2, 1)
Functions stop executing when they reach a return statement or when they run until the end of the definition. You should read about flow control in general (not specifically to Python).

How should I use try...except while defining a function?

I find I've been confused about when I don't need to use try...except. For the last few days I have used it in almost every function I defined, which I think may be a bad practice. For example:
class mongodb(object):
    def getRecords(self, tname, conditions=''):
        try:
            col = eval("self.db.%s" % tname)
            recs = col.find(conditions)
            return recs
        except Exception as e:
            # here make some error log with str(e)
            pass
What I thought is: exceptions may be raised everywhere and I have to use try to catch them.
My question is: is it good practice to use it everywhere when defining functions? If not, are there any principles for it? Help would be appreciated!
Regards
That may not be the best thing to do. The whole point of exceptions is that you can catch them on a very different level than where they are raised. It's best to handle them in the place where you have enough information to do something useful with them (and that is very application- and context-dependent).
For example, the code below can throw IOError("[Errno 2] No such file or directory"):
def read_data(filename):
    return open(filename).read()
In that function you don't have enough information to do something with it, but in the place where you actually use this function you may, in case of such an exception, decide to try a different filename, display an error to the user, or something else:
try:
    data = read_data('data-file.txt')
except IOError:
    data = read_data('another-data-file.txt')
    # or
    show_error_message("Data file was not found.")
    # or something else
This (catching all possible exceptions very broadly) is indeed considered bad practice. You'll mask the real reason for the exception.
Catch only explicitly named types of exceptions (ones you expect to happen and can/will handle gracefully). Let the rest (the unexpected ones) bubble up as they should.
You can log these (uncaught) exceptions (globally) by overriding sys.excepthook:
import sys
import traceback
# ...

def my_uncaught_exception_hook(exc_type, exc_value, exc_traceback):
    msg_exc = "".join(
        traceback.format_exception(exc_type, exc_value, exc_traceback))
    # ... log here ...

sys.excepthook = my_uncaught_exception_hook  # our uncaught exception hook
You must find a balance between several goals:
1. An application should recover from as many errors as possible by itself.
2. An application should report all unrecoverable errors with enough detail to fix the cause of the problem.
3. Errors can happen everywhere, but you don't want to pollute your code with all the error handling code.
4. Applications shouldn't crash.
To solve #3, you can use an exception hook. All unhandled exceptions will cause the current transaction to abort. Catch them at the highest level, roll back the transaction (so the database doesn't become inconsistent) and either re-raise them or swallow them (so the app doesn't crash). You should use decorators for this; a sketch follows below. This solves #4 and #1.
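A rough sketch of that decorator idea (rollback_transaction and log_error are hypothetical stand-ins for your own infrastructure):

import functools

def rollback_transaction():
    print("rolling back transaction")  # placeholder

def log_error(exc):
    print("error:", exc)               # placeholder

def guarded(swallow=False):
    # Catch everything at the top level, roll back, log, then re-raise or swallow.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception as exc:
                rollback_transaction()  # keep the database consistent
                log_error(exc)          # report the problem (goal #2)
                if not swallow:
                    raise               # let it bubble up
        return wrapper
    return decorator

@guarded(swallow=True)
def handle_request():
    raise RuntimeError("something went wrong")

handle_request()  # logged and swallowed; the app doesn't crash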
The solution for #2 is experience. You will learn with time what information you need to solve problems. The hard part is to still have the information when an error happens. One solution is to add debug logging calls in the low level methods.
Another solution is a dictionary per thread in which you can store some bits and which you dump when an error happens.
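A minimal sketch of that per-thread dictionary using threading.local() (the names here are purely illustrative):

import threading

context = threading.local()  # each thread sees its own attributes

def do_low_level_work(record_id):
    context.data = {"record_id": record_id}  # breadcrumbs for later
    raise ValueError("something broke")

def do_high_level_work(record_id):
    try:
        do_low_level_work(record_id)
    except Exception:
        # Dump whatever breadcrumbs this thread collected before failing.
        print("context at failure:", getattr(context, "data", {}))
        raise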
Another option is to wrap a large section of code in a try/except (for instance, in a web application, one specific GUI page) and then use sys.exc_info() to print out the error and also the stack where it occurred:
import sys
import traceback

try:
    x = 1 / 0  # some buggy code
except:
    print(sys.exc_info()[0])  # prints the exception class
    print(sys.exc_info()[1])  # prints the error message
    print(repr(traceback.format_tb(sys.exc_info()[2])))  # prints the stack
