I am trying to implement a job which reads from an Azure Queue and writes into a database. Occasionally the Azure server raises errors such as timeout, server busy, etc. How do I handle such errors in the code? I tried running the code in a try/except block, but I am not able to identify the Azure errors.
I tried to import WindowsAzureError from azure, but it doesn't work (there is no such module to import).
What is a good way to handle errors in this case?
If you're using 0.30+, all errors that occur after the request has been made to the service will extend AzureException. AzureException can be found in the azure.common package, which Azure Storage takes a dependency on. Errors thrown when invalid arguments are passed to a method (e.g. None for the queue name) might not extend from this and will be standard Python exceptions like ValueError.
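For example, a minimal sketch of the read-and-store loop (assuming the legacy azure-storage QueueService client; queue_service, save_to_db and the queue name 'myqueue' are placeholders, not part of the question) could look like this:
from azure.common import AzureException, AzureHttpError

try:
    for msg in queue_service.get_messages('myqueue'):   # queue_service is a placeholder client
        save_to_db(msg)                                  # save_to_db is a placeholder DB writer
except AzureHttpError as e:
    # Server-side failures (timeout, server busy, ...) carry an HTTP status code.
    print('Azure returned HTTP %s: %s' % (e.status_code, e))
except AzureException as e:
    # Anything else that went wrong after the request was sent.
    print('Azure request failed: %s' % e)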
Thanks @Terran,
exception azure.common.AzureException
    Bases: exceptions.Exception
exception azure.common.AzureHttpError(message, status_code)
    Bases: azure.common.AzureException
exception azure.common.AzureConflictHttpError(message, status_code)
    Bases: azure.common.AzureHttpError
exception azure.common.AzureMissingResourceHttpError(message, status_code)
    Bases: azure.common.AzureHttpError
This helped me: http://azure-sdk-for-python.readthedocs.org/en/latest/ref/azure.common.html
Related
I'm using a Python script to get stock prices from an API. Everything works great, but sometimes I receive HTML errors instead of prices. These errors prevent the script from continuing and the terminal stops working.
How do I test the response from the API before passing the information to the next script? I don't want the terminal to stop when it receives server errors.
There is only one line to get the price:
get_price = api_client.quote('TWTR')
The question, "How do I test the response from the API before passing the information" is well suited for a try/except block. For example the following will attempt to call quote() and if an exception is raised it will print a message to the console:
try:
    get_price = api_client.quote('TWTR')
except Exception:  # narrow this to the API client's specific exception type
    # handle the exception case
    print('Getting quote did not work as expected')
The example is useful for illustrating the point, but you should improve the exception handling. I recommend you take a look at some resources to help you understand how to appropriately handle exceptions in your code. A few that I have found useful are:
The Python Docs: Learn what an exception is and how they can be used
Python Exception Handling: A blog post that details how to combine some of the concepts in the Python documentation practically.
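For instance, a minimal loop that keeps polling even when a request fails could look like the sketch below (api_client comes from the question; process_price and the 60-second interval are assumptions):
import time

while True:
    try:
        get_price = api_client.quote('TWTR')
    except Exception as e:              # ideally the API client's specific error class
        print('Quote request failed: %s' % e)
    else:
        process_price(get_price)        # placeholder for the next step in the script
    time.sleep(60)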
I have an MLflow project that raises an exception. I execute it using mlflow.run, but all I get back is mlflow.exceptions.ExecutionException("Run (ID '<run_id>') failed").
Is there any way I could get the exception that is being raised from where I am executing mlflow.run?
Or is it possible to raise an mlflow.exceptions.ExecutionException with a custom message set from within the project?
Unfortunately not at the moment. mlflow run starts a new process, and there is no protocol for passing exceptions back right now. In general, the other project does not even have to be written in the same language.
One workaround I can think of is to pass the exception via MLflow by setting a run tag, e.g.:
import mlflow

try:
    ...  # project code that may raise
except Exception as ex:
    mlflow.set_tag("exception", str(ex))  # record the error on the run before it fails
    raise
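On the calling side, one hedged way to read that tag back (the project path is a placeholder; this assumes mlflow.run is launched with synchronous=False so the run id is available even when the run fails):
import mlflow
from mlflow.tracking import MlflowClient

submitted = mlflow.run('path/to/project', synchronous=False)  # placeholder project URI
if not submitted.wait():                                       # wait() returns False if the run failed
    run = MlflowClient().get_run(submitted.run_id)
    print('Child run failed:', run.data.tags.get('exception'))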
I'm designing a workflow engine for a very specific task and I'm thinking about exception handling.
I've got a main process that calls a few functions. Most of those functions call other, more specific functions, and so on. There are a few libraries involved, so there are a lot of specific errors that can occur: IOError, OSError, AuthenticationException, ...
I have to stop the workflow when an error occurs and log it, so I can continue from that point once the error is resolved.
Example of what I mean:
import ftplib

def workflow_runner():
    download_file()
    # ... more calls with their own exceptions ...

def download_file():
    ftps = open_ftp_connection()
    ftps.get(filename)  # placeholder for the actual retrieval call
    # ... more calls with their own exceptions ...

def open_ftp_connection():
    ftps = ftplib.FTP_TLS()
    try:
        ftps.connect(domain, port)
        ftps.login(username, password)
    except ftplib.all_errors as e:
        print(e)
        raise
    return ftps
Your basic, run of the mill, modular functions.
My question is this:
What's considered the best way of doing top to bottom error handling in Python 3?
To raise every exception to the top and thus put "try except" over each function call up the stack?
To handle every exception when it happens, log and raise and have no "try except" at the "top"?
Some better alternative?
Would it be better to just finish and raise the error on the spot or catch it in the "download_file" and/or "workflow_runner" functions?
I ask because if I end up catching everything at the top I feel like I might end up with:
except AError
except BError
...
except A4Error
It depends… You catch an exception at the point where you can do something about it. That differs between different functions and different exception types. A piece of code calls a subsystem (generically speaking any function), and it knows that subsystem may raise exception A, B or C. It now needs to decide what exceptions it expects and/or what it can do about each one of them. In the end it may decide to catch A and B exceptions, but it wouldn't make sense for it to catch C exceptions because it can't do anything about them. This now means this piece of code may raise C exceptions, and its callers need to be aware of that and make the same kinds of decisions.
So different exceptions are caught at different layers, as appropriate.
In more concrete terms, say you have a system which consists of an HTTP object that downloads some stuff from remote servers, a job manager that wrangles a bunch of these HTTP objects and stores their results in a database, and a top-level coordinator that starts and stops the job managers. The HTTP objects may obviously raise all sorts of HTTP exceptions when network requests fail, and the job managers may raise exceptions when something's wrong with the database. You will probably let the job managers worry about HTTP errors like 404, but not about something fundamental like a ComputerDoesntHaveANetworkInterface error; equally, a DatabaseIsUnreachable exception is nothing a job manager can do anything about, and it should probably lead to the termination of the application.
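A hedged sketch of that idea (fetch, run_job, DatabaseUnreachableError and friends are all illustrative names, not from the post): the job manager catches the network errors it can recover from, while database failures propagate up to the coordinator.
import logging

class DatabaseUnreachableError(Exception):
    """Nothing a job manager can do about this; let it propagate."""

def fetch(url):
    # Placeholder for the HTTP object; may raise OSError subclasses on network failure.
    ...

def run_job(url, db):
    try:
        payload = fetch(url)
    except OSError as e:                 # network-level failures the manager can handle
        logging.warning('skipping %s: %s', url, e)
        return
    db.store(payload)                    # may raise DatabaseUnreachableError; deliberately not caught here

def coordinator(urls, db):
    try:
        for url in urls:
            run_job(url, db)
    except DatabaseUnreachableError:
        logging.critical('database unreachable, shutting down')
        raise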
I have been banging my head against this issue for a bit and have not come up with a solution. I am attempting to trap the exception UploadEntityTooLargeEntity. This exception is raised by GAE when two things happen:
Set the max_bytes_total param in the create_upload_url:
self.template_values['AVATAR_SAVE_URL'] = blobstore.create_upload_url(
    '/saveavatar', max_bytes_total=524288)
Attempt to post an item that exceeds the max_bytes_total.
I expected that, since my class is derived from RequestHandler, my error() method would be called. Instead I am getting a 413 screen telling me the upload is too large.
My request handler is derived from webapp2.RequestHandler. Is it expected that GAE will work with the error() method derived from webapp2.RequestHandler? I'm not seeing this in GAE's code, but I can't imagine there would be such an omission.
The 413 is generated by the App Engine infrastructure; the request never reaches your app, so it's impossible to handle this condition yourself.
I'm running a django application on apache with mod_wsgi. The apache error log is clean except for one mysterious error.
Exception exceptions.TypeError: "'NoneType' object is not callable" in <bound method SharedSocket.__del__ of <httplib.SharedSocket instance at 0x82fb138c>> ignored
I understand that this means that some code is trying to call SharedSocket.__del__, but it is None. I have no idea what the reason for that is, or how to go about fixing it. Also, this error is marked as ignored. It also doesn't seem to be causing any actual problems other than filling the log file with error messages. Should I even bother trying to fix it?
It is likely that this is coming about because, while handling an exception, a further exception is occurring within the destructor of an object being destroyed, and the latter exception is unable to be raised because of the state of the pending one. Within the Python C API, details of such errors can be written directly to the error log by PyErr_WriteUnraisable().
So it isn't that the __del__ method is None, but that some variable used by code executed within __del__ is None. You would need to look at the code for SharedSocket.__del__ to work out exactly what is going on.
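The "ignored" wording can be reproduced with a tiny sketch (SharedSocketLike and helper are made up for illustration): an exception raised inside __del__ cannot propagate, so Python reports it and carries on.
class SharedSocketLike:
    def __del__(self):
        # At interpreter shutdown, module globals may already have been set to None,
        # which is how a call inside __del__ ends up raising
        # "'NoneType' object is not callable".
        helper()          # helper is None here, so this raises TypeError

helper = None
obj = SharedSocketLike()
del obj                   # the TypeError is reported as "ignored" instead of propagating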
Note: this is more of a pointer than an answer, but I couldn't get this to work in a comment.
I did some googling on the error message and there seems to be a group of related problems that crop up in Apache + mod_wsgi + MySQL environments. The culprit may be that you are running out of simultaneous connections to MySQL because of process pooling, with each process maintaining an open connection to the DB server. There are also indications that some non-thread-safe code may be used in a multi-thread environment. Good luck.