I am trying to set up my bottle server so that when one person in a game logs out, everyone else can see it immediately. As I am using long polling, there is an open request for each user.
The part I am having trouble with is catching the exception that is thrown when a user leaves the page and the long-polling request can no longer write to them. The error message is below.
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/gevent/pywsgi.py", line 438, in handle_one_response
self.run_application()
File "/usr/lib/python2.7/dist-packages/gevent/pywsgi.py", line 425, in run_application
self.process_result()
File "/usr/lib/python2.7/dist-packages/gevent/pywsgi.py", line 416, in process_result
self.write(data)
File "/usr/lib/python2.7/dist-packages/gevent/pywsgi.py", line 373, in write
self.socket.sendall(msg)
File "/usr/lib/python2.7/dist-packages/gevent/socket.py", line 509, in sendall
data_sent += self.send(_get_memory(data, data_sent), flags)
File "/usr/lib/python2.7/dist-packages/gevent/socket.py", line 483, in send
return sock.send(data, flags)
error: [Errno 32] Broken pipe
<WSGIServer fileno=3 address=0.0.0.0:8080>: Failed to handle request:
request = GET /refreshlobby/1 HTTP/1.1 from ('127.0.0.1', 53331)
application = <bottle.Bottle object at 0x7f9c05672750>
127.0.0.1 - - [2013-07-07 10:59:30] "GET /refreshlobby/1 HTTP/1.1" 200 160 6.038377
The function to handle that page is this.
@route('/refreshlobby/<id>')
def refreshlobby(id):
    while True:
        yield lobby.refresh()
        gevent.sleep(1)
I tried catching the exception within the function, and in a decorator wrapped around @route, neither of which worked. I also tried adding an @error(500) handler, but that didn't trigger either. It seems this is happening somewhere in the internals of bottle.
Edit: I now know that I need to catch socket.error, but I don't know whereabouts in my code to do it.
The WSGI runner
Look closely at the traceback: this is not happening in your function, but in the WSGI runner.
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/gevent/pywsgi.py", line 438, in handle_one_response
self.run_application()
The way the WSGI runner works, in your case, is (roughly sketched below):
1. Receives a request
2. Gets a partial response from your code
3. Sends it to the client (this is where the exception is raised)
4. Repeats steps 2-3
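An illustrative sketch of that loop (not gevent's actual code; the names are simplified for the example):
def handle_one_response(application, environ, start_response, sock):
    result = application(environ, start_response)  # your generator
    for chunk in result:         # step 2: pull a piece of the response
        sock.sendall(chunk)      # step 3: raises "Broken pipe" once the
                                 #         client has disconnected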
You can't catch this exception
This error is not raised in your code.
It happens when you try to send a response to a client that closed the connection.
You'll therefore not be able to catch this error from within your code.
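Even a hypothetical variant of your handler like the one below would never reach its except clause, because sendall() fails inside gevent's pywsgi, not inside your generator:
import socket

@route('/refreshlobby/<id>')
def refreshlobby(id):
    try:
        while True:
            yield lobby.refresh()
            gevent.sleep(1)
    except socket.error:
        # never reached: the Broken pipe is raised in pywsgi's write(),
        # outside of this generator
        pass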
Alternate solutions
Unfortunately, it's not possible to tell from within the generator (your code) when it stops being consumed.
It's also not a good idea to rely on your generator being garbage collected.
You have a couple other solutions.
"Last seen"
Another way to know when a user disconnects would be to record a "last seen" timestamp after your yield statement.
You'll be able to identify clients that disconnected if their last seen is far in the past.
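A rough sketch of that idea, reusing the names from your question (the last_seen dict and the disconnected_players() helper are made up for the example):
import time

last_seen = {}  # hypothetical shared state: player id -> time of last delivered chunk

@route('/refreshlobby/<id>')
def refreshlobby(id):
    while True:
        yield lobby.refresh()
        # only reached while the runner keeps consuming the generator,
        # i.e. while the client is still connected
        last_seen[id] = time.time()
        gevent.sleep(1)

def disconnected_players(timeout=5):
    # hypothetical helper: players whose long poll has gone quiet
    now = time.time()
    return [player for player, seen in last_seen.items() if now - seen > timeout]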
Other runner
A different, non-WSGI runner may be more appropriate for a realtime application. You could give tornado a try.
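With tornado, for example, a handler is notified of a dropped connection via on_connection_close(). A rough sketch against the tornado API of that era (mark_player_offline is a placeholder for your own logic, not a real function):
import tornado.web

class LobbyHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self, player_id):
        # keep the request open; push lobby updates later with write()/flush()
        self.player_id = player_id

    def on_connection_close(self):
        # tornado calls this as soon as the client drops the long poll
        mark_player_offline(self.player_id)  # placeholder

application = tornado.web.Application([(r'/refreshlobby/(\d+)', LobbyHandler)])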
Related
I'm in the process of writing a minimal websocket server with Python 3. I am using flask, socketio, and eventlet per the instructions on the latest docs. The problem is that when the webpage with the socket connection is reloaded, the server throws the following exception:
Traceback (most recent call last):
File "C:\Users\Noah\AppData\Local\Programs\Python\Python35-32\lib\site-packages\eventlet\greenpool.py", line 88, in _spawn_n_impl
func(*args, **kwargs)
File "C:\Users\Noah\AppData\Local\Programs\Python\Python35-32\lib\site-packages\eventlet\wsgi.py", line 734, in process_request
proto.__init__(sock, address, self)
File "C:\Users\Noah\AppData\Local\Programs\Python\Python35-32\lib\socketserver.py", line 686, in __init__
self.finish()
File "C:\Users\Noah\AppData\Local\Programs\Python\Python35-32\lib\site-packages\eventlet\wsgi.py", line 651, in finish
greenio.shutdown_safe(self.connection)
File "C:\Users\Noah\AppData\Local\Programs\Python\Python35-32\lib\site-packages\eventlet\greenio\base.py", line 479, in shutdown_safe
return sock.shutdown(socket.SHUT_RDWR)
OSError: [WinError 10038] An operation was attempted on something that is not a socket
I took a look at the source, and it seems like shutdown_safe is supposed to just catch any exceptions while shutting down a connection. In short, it seems like the author of this part of the library didn't foresee Windows throwing an OSError on shutdown.
Although this is a benign issue, I was wondering if there are any existing fixes/tweaks, and if not, whether I should submit this to the python-socketio GitHub issues list.
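In the meantime, a local workaround I'm considering is monkey-patching shutdown_safe so the Windows OSError is swallowed as well (an untested sketch; the exact attribute to patch is my guess based on the traceback above):
import eventlet.greenio

_orig_shutdown_safe = eventlet.greenio.shutdown_safe

def _shutdown_safe(sock):
    try:
        return _orig_shutdown_safe(sock)
    except OSError:
        # e.g. [WinError 10038] "not a socket" when the client reloads the page
        pass

eventlet.greenio.shutdown_safe = _shutdown_safe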
I'm using flask for development, not production, and I have a view for an ajax request, something like this:
@application.route('/xyz/<var>/', methods=['GET'])
def getAjax(var):
    ...
    return render_template(...)
I'm also using threaded=True for development.
Whenever I call that ajax request and then just close the tab that requested it, I get an error:
Traceback (most recent call last):
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/SocketServer.py", line 593, in process_request_thread
self.finish_request(request, client_address)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/SocketServer.py", line 334, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/SocketServer.py", line 651, in __init__
2015-07-07 09:46:09,430 127.0.0.1 - - [07/Jul/2015 09:46:09] "GET /xyz/List/ HTTP/1.1" 200 -
self.finish()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/SocketServer.py", line 710, in finish
self.wfile.close()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 279, in close
self.flush()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 303, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe
Can I have a try/except block to catch this exception?
I tried putting all of the getAjax function's contents inside a:
try:
    ...
except socket.error, e:
    logging.warn("socket error %s", e)
But it's not working. Where should I do this, and how?
EDIT: adding the ajax call
$.ajax({
type: 'GET',
url: '/xyz/' + var + '/',
data: {
...
},
timeout: 2000,
success: function(data) {
...
},
error: function(XMLHttpRequest, textStatus, errorThrown) {
...
}
});
The problem you have is that by using Flask you're not in control of your full server stack, and so you can't catch errors that arise outside of your application code. The socket.error is actually propagating from within Flask's built-in development server (which might just be a wrapper around SocketServer, I don't know the details), and because it's just a development server not meant for production use, it doesn't handle the case where a client dies suddenly.
Generally, this shouldn't matter. If you were to actually deploy your code using Gunicorn or something, then the WSGI container would handle the broken pipe without anything from you. You can test this by running your code in such a container locally -- try installing Gunicorn on your computer and seeing what happens when your app code runs in its wrapper.
If you still have a problem, then things might get complicated, because the error could be coming from many different places. Gevent itself only recently got around to handling this particular error, so if you're using a lesser-known WSGI container it might not handle it either, or something it builds on might not.
Generally speaking, if you're using a web framework in a standard way, you shouldn't have to handle low level server errors on your own. That's why you're using the web framework to begin with. If you're trying to roll your own video streaming or something, that's another case entirely, but that doesn't seem to be the case here.
I'm using Django 1.6 and Django-ImageKit 3.2.1.
I'm trying to generate images asynchronously with ImageKit. Async image generation works locally but not on the production server.
I'm using Celery and I've tried both:
IMAGEKIT_DEFAULT_CACHEFILE_BACKEND = 'imagekit.cachefiles.backends.Async'
IMAGEKIT_DEFAULT_CACHEFILE_BACKEND = 'imagekit.cachefiles.backends.Celery'
Using the Simple backend (synchronous) instead of Async or Celery works fine on the production server. So I don't understand why the asynchronous backend gives me the following ImportError (pulled from the Celery log):
[2014-04-05 21:51:26,325: CRITICAL/MainProcess] Can't decode message body: DecodeError(ImportError('No module named s3utils',),) [type:u'application/x-python-serialize' encoding:u'binary' headers:{}]
body: '\x80\x02}q\x01(U\x07expiresq\x02NU\x03utcq\x03\x88U\x04argsq\x04cimagekit.cachefiles.backends\nCelery\nq\x05)\x81q\x06}bcimagekit.cachefiles\nImageCacheFile\nq\x07)\x81q\x08}q\t(U\x11cachefile_backendq\nh\x06U\x12ca$
Traceback (most recent call last):
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/messaging.py", line 585, in _receive_callback
decoded = None if on_m else message.decode()
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/message.py", line 142, in decode
self.content_encoding, accept=self.accept)
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/serialization.py", line 184, in loads
return decode(data)
File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
self.gen.throw(type, value, traceback)
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/serialization.py", line 59, in _reraise_errors
reraise(wrapper, wrapper(exc), sys.exc_info()[2])
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/serialization.py", line 55, in _reraise_errors
yield
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/serialization.py", line 184, in loads
return decode(data)
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/serialization.py", line 64, in pickle_loads
return load(BytesIO(s))
DecodeError: No module named s3utils
s3utils is what defines my AWS S3 bucket paths. I'll post it if need be, but the strange thing, I think, is that the synchronous backend has no problem importing s3utils while the asynchronous one does, and it fails ONLY on the production server, not locally.
I'd be SO grateful for any help debugging this. I've been wrestling with this for days. I'm still learning Django and Python, so I'm hoping this is a stupid mistake on my part. My Google-fu has failed me.
As I hinted at in my comment above, this kind of thing is usually caused by forgetting to restart the worker.
It's a common gotcha with Celery. The workers are a separate process from your web server so they have their own versions of your code loaded. And just like with your web server, if you make a change to your code, you need to reload so it sees the change. The web server talks to your worker not by directly running code, but by passing serialized messages via the broker, which will say something like "call the function do_something()". Then the worker will read that message and—and here's the tricky part—call its version of do_something(). So even if you restart your webserver (so that it has a new version of your code), if you forget to reload the worker (which is what actually calls the function), the old version of the function will be called. In other words, you need to restart the worker any time you make a change to your tasks.
You might want to check out the autoreload option for development. It could save you some headaches.
I'm writing the code that handles a service. I receive POST requests to a particular endpoint and start processing. The whole process is rather simple; I loop through the items in the request and add each one to the database. The problem arises when I have to process a lot of items and the loop takes about three minutes to finish; then, when I try to respond:
status = '200 OK'
headers = [('Content-type', 'application/json'),('Access-Control-Allow-Origin','*')]
start_response(status, headers)
return json.dumps(response)
I get this error:
Exception happened during processing of request from ('XXX.XXX.XXX.XXX', 49172)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/SocketServer.py", line 284, in _handle_request_noblock
self.process_request(request, client_address)
File "/usr/local/lib/python2.7/SocketServer.py", line 310, in process_request
self.finish_request(request, client_address)
File "/usr/local/lib/python2.7/SocketServer.py", line 323, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/usr/local/lib/python2.7/SocketServer.py", line 640, in __init__
self.finish()
File "/usr/local/lib/python2.7/SocketServer.py", line 693, in finish
self.wfile.flush()
File "/usr/local/lib/python2.7/socket.py", line 303, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe
I don't know if this helps, but the POST request is a forwarded POST made from a browser to a different domain (that's why the Access-Control-Allow-Origin header), and all the accesses to the database are made using a single object that interacts with the database using SQLAlchemy (similar to a Java EE DAO pattern).
How do I avoid this error?
You may be violating the idea behind REST.
If the processing can take some time, the service may want to answer with a 202 Accepted response (a rough sketch of this approach follows the quoted definition below). For a full overview of HTTP response codes, follow this link: http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
202 Accepted
The request has been accepted for processing, but the processing has not been completed. The request might or might not eventually be acted upon, as it might be disallowed when processing actually takes place. There is no facility for re-sending a status code from an asynchronous operation such as this.
The 202 response is intentionally non-committal. Its purpose is to allow a server to accept a request for some other process (perhaps a batch-oriented process that is only run once per day) without requiring that the user agent's connection to the server persist until the process is completed. The entity returned with this response SHOULD include an indication of the request's current status and either a pointer to a status monitor or some estimate of when the user can expect the request to be fulfilled.
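A rough sketch of that pattern on top of the raw WSGI handler above (parse_items and process_items are placeholders, not real functions; a real deployment would hand the work to a proper task queue rather than a bare thread):
import json
import threading

def application(environ, start_response):
    items = parse_items(environ)  # placeholder: read the POSTed items

    # do the slow database inserts in the background instead of keeping
    # the client's connection open for minutes
    threading.Thread(target=process_items, args=(items,)).start()

    status = '202 Accepted'
    headers = [('Content-type', 'application/json'),
               ('Access-Control-Allow-Origin', '*')]
    start_response(status, headers)
    return [json.dumps({'status': 'accepted'})]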
I could be wrong, but it just looks to me like the socket is timing out. You shouldn't leave the client hanging for a response for more than three minutes.
Instead, you should validate the data and send a message stating that it was received. If necessary, you can use something like AJAX to tell the client that the data was entered after it was received.
I've been googling for days trying to find a straight answer for why this is happening, but can't find anything useful. I have a web2py application that simply reads a database and makes some requests to a REST api. It is a healthcheck monitor so it refreshes itself every minute. There are about 20 or so users at any given time. Here is the error I'm seeing very consistently in the log file:
ERROR:Rocket.Errors.Port8080:Traceback (most recent call last):
File "/opt/apps/web2py/gluon/rocket.py", line 562, in listen
sock = self.wrap_socket(sock)
File "/opt/apps/web2py/gluon/rocket.py", line 506, in wrap_socket
ssl_version = ssl.PROTOCOL_SSLv23)
File "/usr/local/lib/python2.7/ssl.py", line 342, in wrap_socket
ciphers=ciphers)
File "/usr/local/lib/python2.7/ssl.py", line 121, in __init__
self.do_handshake()
File "/usr/local/lib/python2.7/ssl.py", line 281, in do_handshake
self._sslobj.do_handshake()
error: [Errno 104] Connection reset by peer
Based on some googling, the most promising piece of information is that someone is trying to connect through a firewall, which kills the connection; however, I don't understand why it's taking the actual application down. The process is still running, but no one can connect and I have to restart web2py.
I will be very appreciative of any input here. I'm beyond frustration.
Thanks!
The most common source of Connection reset by peer errors is that the remote client decides it doesn't want to talk to you anymore and cancels the interaction (with a shutdown or an RST packet). This happens, for example, if the user navigates to a different page while the site is loading.
In your case, the remote host gave up on the connection even before you got to read or write anything on it. With the current web2py, this should only output the warning you're seeing, and not terminate anything.
If you have the current web2py, the error of not being able to connect is unrelated to these error messages. If you have an old version of web2py, you should update.