I'm using a Python script to get stock prices from an API.
Everything works great, but sometimes I receive HTML errors instead of prices.
These errors prevent the script from continuing, and the terminal session hangs.
How do I test the response from the API before passing the information to the next script?
I don't want the script to stop when it receives server errors.
There is only one line that gets the price:
get_price = api_client.quote('TWTR')
The question, "How do I test the response from the API before passing the information?", is well suited to a try/except block. For example, the following attempts to call quote(); if an exception is raised, it prints a message to the console:
try:
    get_price = api_client.quote('TWTR')
except Exception:  # replace Exception with the specific exception type your API client raises
    # handle the exception case
    print('Getting quote did not work as expected')
The example is useful for illustrating the point, but you should improve the exception handling. I recommend taking a look at some resources that explain how to handle exceptions appropriately in your code. A few that I have found useful are:
The Python Docs: Learn what an exception is and how they can be used
Python Exception Handling: A blog post that details how to combine some of the concepts in the Python documentation practically.
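Putting those pieces together, here is a minimal sketch of what this could look like in your script. It assumes api_client is your existing client object; safe_quote and process_price are names made up for this sketch, and you should narrow the except clause to whatever exception your client actually raises on server errors:
import logging

def safe_quote(symbol):
    """Return the quote for symbol, or None if the API call fails."""
    try:
        return api_client.quote(symbol)
    except Exception as exc:  # narrow this to the exception your client really raises
        logging.warning("Could not fetch quote for %s: %s", symbol, exc)
        return None

price = safe_quote('TWTR')
if price is not None:
    process_price(price)  # hypothetical next step; only runs when the call succeeded
This way a server error just produces None and gets skipped, instead of killing the whole script.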
Summary
One of our threads in production hit an error and is now producing InvalidRequestError: This session is in 'prepared' state; no further SQL can be emitted within this transaction errors on every request it serves that runs a query, for the rest of its life! It's been doing this for days now! How is this possible, and how can we prevent it going forward?
Background
We are using a Flask app on uWSGI (4 processes, 2 threads), with Flask-SQLAlchemy providing us DB connections to SQL Server.
The problem seemed to start when one of our threads in production was tearing down its request, inside this Flask-SQLAlchemy method:
@teardown
def shutdown_session(response_or_exc):
    if app.config['SQLALCHEMY_COMMIT_ON_TEARDOWN']:
        if response_or_exc is None:
            self.session.commit()
    self.session.remove()
    return response_or_exc
...and somehow managed to call self.session.commit() when the transaction was invalid. This resulted in sqlalchemy.exc.InvalidRequestError: Can't reconnect until invalid transaction is rolled back being written to stdout, in defiance of our logging configuration, which makes sense since it happened while the app context was tearing down, and teardown is never supposed to raise exceptions. I'm not sure how the transaction got to be invalid without response_or_exc getting set, but that's actually the lesser problem AFAIK.
The bigger problem is, that's when the "'prepared' state" errors started, and haven't stopped since. Every time this thread serves a request that hits the DB, it 500s. Every other thread seems to be fine: as far as I can tell, even the thread that's in the same process is doing OK.
Wild guess
The SQLAlchemy mailing list has an entry about the "'prepared' state" error saying it happens if a session started committing and hasn't finished yet, and something else tries to use it. My guess is that the session in this thread never got to the self.session.remove() step, and now it never will.
I still feel like that doesn't explain how this session is persisting across requests though. We haven't modified Flask-SQLAlchemy's use of request-scoped sessions, so the session should get returned to SQLAlchemy's pool and rolled back at the end of the request, even the ones that are erroring (though admittedly, probably not the first one, since that raised during the app context tearing down). Why are the rollbacks not happening? I could understand it if we were seeing the "invalid transaction" errors on stdout (in uwsgi's log) every time, but we're not: I only saw it once, the first time. But I see the "'prepared' state" error (in our app's log) every time the 500s occur.
Configuration details
We've turned off expire_on_commit in the session_options, and we've turned on SQLALCHEMY_COMMIT_ON_TEARDOWN. We're only reading from the database, not writing yet. We're also using Dogpile-Cache for all of our queries (using the memcached lock since we have multiple processes, and actually, 2 load-balanced servers). The cache expires every minute for our major query.
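For concreteness, a rough sketch of how that configuration fits together (a Flask-SQLAlchemy 2.x-era setup; the database URI is illustrative and the Dogpile-Cache wiring is omitted):
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'mssql+pyodbc://...'  # illustrative SQL Server URI
app.config['SQLALCHEMY_COMMIT_ON_TEARDOWN'] = True  # later removed, see the edits below

# expire_on_commit turned off via session_options, as described above
db = SQLAlchemy(app, session_options={'expire_on_commit': False})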
Updated 2014-04-28: Resolution steps
Restarting the server seems to have fixed the problem, which isn't entirely surprising. That said, I expect to see it again until we figure out how to stop it. benselme (below) suggested writing our own teardown callback with exception handling around the commit, but I feel like the bigger problem is that the thread was messed up for the rest of its life. The fact that this didn't go away after a request or two really makes me nervous!
Edit 2016-06-05:
A PR that solves this problem was merged on May 26, 2016.
Flask PR 1822
Edit 2015-04-13:
Mystery solved!
TL;DR: Be absolutely sure your teardown functions succeed, by using the teardown-wrapping recipe in the 2014-12-11 edit!
Started a new job also using Flask, and this issue popped up again, before I'd put in place the teardown-wrapping recipe. So I revisited this issue and finally figured out what happened.
As I thought, Flask pushes a new request context onto the request context stack every time a new request comes down the line. This is used to support request-local globals, like the session.
Flask also has a notion of "application" context which is separate from request context. It's meant to support things like testing and CLI access, where HTTP isn't happening. I knew this, and I also knew that that's where Flask-SQLA puts its DB sessions.
During normal operation, both a request and an app context are pushed at the beginning of a request, and popped at the end.
However, it turns out that when pushing a request context, the request context checks whether there's an existing app context, and if one's present, it doesn't push a new one!
So if the app context isn't popped at the end of a request due to a teardown function raising, not only will it stick around forever, it won't even have a new app context pushed on top of it.
That also explains some magic I hadn't understood in our integration tests. You can INSERT some test data, then run some requests and those requests will be able to access that data despite you not committing. That's only possible since the request has a new request context, but is reusing the test application context, so it's reusing the existing DB connection. So this really is a feature, not a bug.
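Here is a tiny, self-contained sketch of that reuse behavior (hypothetical route; assumes Flask 0.10+, where g lives on the application context). A value set on g inside an outer app context is visible to a request made while that context is still active, because the request does not push a fresh app context:
from flask import Flask, g

app = Flask(__name__)

@app.route('/marker')
def marker():
    # Visible only because the already-active app context is reused.
    return getattr(g, 'marker', 'no marker')

with app.app_context():
    g.marker = 'set before the request'
    with app.test_client() as client:
        print(client.get('/marker').get_data(as_text=True))
        # prints: set before the request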
That said, it does mean you have to be absolutely sure your teardown functions succeed, using something like the teardown-function wrapper below. That's a good idea even without that feature to avoid leaking memory and DB connections, but is especially important in light of these findings. I'll be submitting a PR to Flask's docs for this reason. (Here it is)
Edit 2014-12-11:
One thing we ended up putting in place was the following code (in our application factory), which wraps every teardown function to make sure it logs the exception and doesn't raise further. This ensures the app context always gets popped successfully. Obviously this has to go after you're sure all teardown functions have been registered.
# Flask specifies that teardown functions should not raise.
# However, they might not have their own error handling,
# so we wrap them here to log any errors and prevent errors from
# propagating.
from functools import wraps

def wrap_teardown_func(teardown_func):
    @wraps(teardown_func)
    def log_teardown_error(*args, **kwargs):
        try:
            teardown_func(*args, **kwargs)
        except Exception as exc:
            app.logger.exception(exc)
    return log_teardown_error

if app.teardown_request_funcs:
    for bp, func_list in app.teardown_request_funcs.items():
        for i, func in enumerate(func_list):
            app.teardown_request_funcs[bp][i] = wrap_teardown_func(func)
if app.teardown_appcontext_funcs:
    for i, func in enumerate(app.teardown_appcontext_funcs):
        app.teardown_appcontext_funcs[i] = wrap_teardown_func(func)
Edit 2014-09-19:
OK, it turns out --reload-on-exception isn't a good idea if (1) you're using multiple threads and (2) terminating a thread mid-request could cause trouble. I thought uWSGI would wait for all requests for that worker to finish, like uWSGI's "graceful reload" feature does, but it seems that's not the case. We started having problems where a thread would acquire a dogpile lock in Memcached, then get terminated when uWSGI reloads the worker due to an exception in a different thread, meaning the lock is never released.
Removing SQLALCHEMY_COMMIT_ON_TEARDOWN solved part of our problem, though we're still getting occasional errors during app teardown during session.remove(). It seems these are caused by SQLAlchemy issue 3043, which was fixed in version 0.9.5, so hopefully upgrading to 0.9.5 will allow us to rely on the app context teardown always working.
Original:
How this happened in the first place is still an open question, but I did find a way to prevent it: uWSGI's --reload-on-exception option.
Our Flask app's error handling ought to be catching just about anything, so it can serve a custom error response, which means only the most unexpected exceptions should make it all the way to uWSGI. So it makes sense to reload the whole app whenever that happens.
We'll also turn off SQLALCHEMY_COMMIT_ON_TEARDOWN, though we'll probably commit explicitly rather than writing our own callback for app teardown, since we're writing to the database so rarely.
A surprising thing is that there's no exception handling around that self.session.commit(). And a commit can fail, for example if the connection to the DB is lost. So the commit fails, the session is not removed, and the next time that particular thread handles a request it still tries to use that now-invalid session.
Unfortunately, Flask-SQLAlchemy doesn't offer any clean way to provide your own teardown function. One option is to set SQLALCHEMY_COMMIT_ON_TEARDOWN to False and then write your own teardown function.
It should look like this:
@app.teardown_appcontext
def shutdown_session(response_or_exc):
    try:
        if response_or_exc is None:
            sqla.session.commit()
    finally:
        sqla.session.remove()
    return response_or_exc
Now, you will still have your failing commits, and you'll have to investigate that separately... But at least your thread should recover.
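If you also want visibility into why those commits are failing, here is a minor, hedged extension of the snippet above (not part of the original answer) that logs the failure before moving on:
@app.teardown_appcontext
def shutdown_session(response_or_exc):
    try:
        if response_or_exc is None:
            sqla.session.commit()
    except Exception:
        # Log the failed commit so it can be investigated separately,
        # without letting the teardown itself raise.
        app.logger.exception('Commit failed during app-context teardown')
    finally:
        sqla.session.remove()
    return response_or_exc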
I am writing a web service based on web.py, using storm as an ORM layer for querying records from a MySQL database. The web service is deployed via mod_wsgi using Apache2 on a Linux box. I create a connection to the MySQL database server when the script is started using storm's create_database() method. This is also the point where I create a Store object, which is used later on to perform queries when a request comes in.
After some hours of inactivity, store.find() throws a DisconnectionError: (2006, 'MySQL server has gone away'). I am not surprised that the database connection is dropped as Apache/mod_wsgi reuses the Python processes without reinitializing them for a long time. My question is how to correctly deal with this?
I have tried setting up a mechanism to keep alive the connection to the MySQL server by sending it a recurring "SELECT 1" (every 300 seconds). Unfortunately that fixed the problem on our testing machine, but not on our demo deployments (ouch) while both share the same MySQL configuration (wait_timeout is set to 8 hours).
I have searched for solutions for reconnecting the storm store to the database, but didn't find anything sophisticated. The only recommendation seems to be that one has to catch the exception, treat it like an inconsistency, call rollback() on the store and then retry. However, this would imply that I either have to wrap the whole Store class or implement the same retry-mechanism over and over. Is there a better solution or am I getting something completely wrong here?
Update: I have added a web.py processor that gracefully handles the disconnection error by recreating the storm Store when the exception is caught and then retrying the operation, as recommended by Andrey. However, this is an incomplete and suboptimal solution because (a) the store is referenced by a handful of objects for re-use, which requires an additional mechanism to re-wire the store reference on each of these objects, and (b) it doesn't cover transaction handling (rollbacks) when performing writes on the database. Still, at least it's an acceptable fix for all read operations on the store for now.
Perhaps you can use web.py's application processor to wrap your controller methods and catch DisconnectionError from them. Something like this:
def my_processor(handler):
    tries = 3
    while True:
        try:
            return handler()
        except DisconnectionError:
            tries -= 1
            if tries == 0:
                raise
Or you may check the cookbook entry on how an application processor is used to hook SQLAlchemy into web.py (http://webpy.org/cookbook/sqlalchemy) and make something similar for storm:
def load_storm(handler):
    web.ctx.store = Store(database)
    try:
        return handler()
    except web.HTTPError:
        web.ctx.store.commit()
        raise
    except:
        web.ctx.store.rollback()
        raise
    finally:
        web.ctx.store.commit()

app.add_processor(load_storm)
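Combining the two ideas, you could recreate the store and retry inside a single processor. The following is only a sketch: store and database are assumed to be the Store and create_database() objects from your startup code, the retry count is arbitrary, and it does not address the store references held by your other objects:
import web
from storm.exceptions import DisconnectionError
from storm.locals import Store

def retry_with_fresh_store(handler):
    """Run the handler; on disconnection, rebuild the store and retry."""
    tries = 3
    web.ctx.store = store  # the long-lived Store created at startup
    while True:
        try:
            return handler()
        except DisconnectionError:
            tries -= 1
            if tries == 0:
                raise
            # Discard the dead connection and retry with a fresh store.
            web.ctx.store = Store(database)

app.add_processor(retry_with_fresh_store)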
I have been banging my head against this issue for a bit and have not come up with a solution. I am attempting to trap the exception UploadEntityTooLargeEntity. This exception is raised by GAE when two things happen:
Set the max_bytes_total param in the create_upload_url:
self.template_values['AVATAR_SAVE_URL'] = blobstore.create_upload_url(
    '/saveavatar', max_bytes_total=524288)
Attempt to post an item that exceeds the max_bytes_total.
I expect that, since my class is derived from RequestHandler, my error() method would be called. Instead I am getting a 413 screen telling me the upload is too large.
My request handler is derived from webapp2.RequestHandler. Is it expected that GAE will work with the error method derived from webapp2.RequestHandler? I'm not seeing this in GAE's code but I can't imagine there would be such an omission.
The 413 is generated by the App Engine infrastructure; the request never reaches your app, so it's impossible to handle this condition yourself.
I'm running a django application on apache with mod_wsgi. The apache error log is clean except for one mysterious error.
Exception exceptions.TypeError: "'NoneType' object is not callable" in <bound method SharedSocket.__del__ of <httplib.SharedSocket instance at 0x82fb138c>> ignored
I understand that this means that some code is trying to call SharedSocket.__del__, but it is None. I have no idea what the reason for that is, or how to go about fixing it. Also, this error is marked as ignored. It also doesn't seem to be causing any actual problems other than filling the log file with error messages. Should I even bother trying to fix it?
It is likely that this is coming about because, while one exception is being handled, a further exception occurs inside the destructor of an object being destroyed, and that second exception cannot be raised because of the state of the pending one. Within the Python C API, details of such exceptions can be written directly to the error log by PyErr_WriteUnraisable().
So, it isn't that the __del__ method is None; rather, some variable used by code executed within the __del__ method is None. You would need to look at the code for SharedSocket.__del__ to work out exactly what is going on.
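To see where messages like that come from, here is a tiny standalone illustration (unrelated to httplib; the class and helper names are made up). An exception raised inside __del__ cannot propagate, so Python prints it to stderr and carries on:
class Widget(object):
    def __del__(self):
        # NameError here cannot propagate out of the destructor;
        # Python reports it as "ignored" instead.
        undefined_helper()

w = Widget()
del w
# Produces output along the lines of:
#   Exception ... in <bound method Widget.__del__ of ...> ignored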
Note: this is more of a pointer than an answer, but I couldn't get this to work in a comment.
I did some googling on the error message and there seems to be a group of related problems that crop up in Apache + mod_wsgi + MySQL environments. The culprit may be that you are running out of simultaneous connections to MySQL because of process pooling, with each process maintaining an open connection to the DB server. There are also indications that some non-thread-safe code may be used in a multi-thread environment. Good luck.