I'm trying to implement a long-polling client in Tornado that interacts with an asynchronous Tornado server.
What happens is one of two things:
- either the client times out, or
- the client receives all the messages at once after the whole background process has finished, just like a blocking client would.
This is the client I use:
from tornado import ioloop
from tornado import httpclient

print "\nNon-Blocking AsyncHTTPClient"

def async_call(response):
    if response.error:
        response.rethrow()
    print "AsyncHTTPClient Response"
    ioloop.IOLoop.instance().stop()

http_client = httpclient.AsyncHTTPClient()
http_client.fetch("http://localhost:9999/text/", async_call)
ioloop.IOLoop.instance().start()
Is this the right way to write a long-polling/comet client?
I would also appreciate it if those who answer could provide a sample async server in Tornado, because maybe I'm writing the comet Tornado server wrong... I'm a bit new to the whole long-polling process in general.
Tornado itself has an excellent example of a chat application built on top of the long-polling mechanism:
https://github.com/facebook/tornado/tree/master/demos/chat
It helped me a lot to understand everything, and it has both a server and a client.
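If it helps, here is a rough, stripped-down sketch of the same idea the chat demo uses, written against the old callback-style API (pre-Tornado 6, Python 2): a long-poll handler parks the request with @tornado.web.asynchronous until some other request posts a message. The /send URL and handler names are just placeholders; /text/ matches the URL your client fetches.

import tornado.ioloop
import tornado.web

class PollHandler(tornado.web.RequestHandler):
    waiters = []

    @tornado.web.asynchronous
    def get(self):
        # park this request; it is finished later, when a message arrives
        PollHandler.waiters.append(self.on_new_message)

    def on_new_message(self, message):
        if self.request.connection.stream.closed():
            return
        self.finish(message)

class SendHandler(tornado.web.RequestHandler):
    def post(self):
        # wake up every parked long-poll request with the new message
        waiters, PollHandler.waiters = PollHandler.waiters, []
        for callback in waiters:
            callback(self.get_argument("message"))
        self.write("ok")

application = tornado.web.Application([
    (r"/text/", PollHandler),   # matches the URL the client in the question fetches
    (r"/send", SendHandler),    # e.g. curl -d "message=hi" http://localhost:9999/send
])

if __name__ == "__main__":
    application.listen(9999)
    tornado.ioloop.IOLoop.instance().start()

Your client should work against this unchanged: the long-poll request simply stays open until something hits /send.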
I have just started using gevent-socketio and it's great!
But I have been using the default SocketIOServer and socketio_manage from the chat tutorial, and I was wondering how to integrate socketio with CherryPy.
Essentially, how do I turn this:
class MyNamespace(BaseNamespace): ...

def application(environ, start_response):
    if environ['PATH_INFO'].startswith('/socket.io'):
        return socketio_manage(environ, {'/app': MyNamespace})
    else:
        return serve_file(environ, start_response)

def serve_file(...): ...

sio_server = SocketIOServer(
    ('', 8080), application,
    policy_server=False)
sio_server.serve_forever()
into a normal CherryPy server?
Gevent-socketio is based on Gevent and Gevent's web server. There are two implementations: pywsgi, which is pure Python, and wsgi, which uses libevent's HTTP implementation.
See the paragraph starting with "The difference between pywsgi.WSGIServer and wsgi.WSGIServer" over here:
http://www.gevent.org/servers.html
Only those servers are "green", in the sense that they yield control to the Gevent loop, so as far as I know you can only use those servers. The reason is that the server is involved from the very beginning of the request: it knows how to handle the "Upgrade" header and the WebSocket protocol negotiation, and it passes values into the "environ" dict that the next layer (SocketIO) expects and knows how to handle.
You will also need to use the gevent-websocket package, because it is green (and gevent-socketio is built on top of it). You can't just swap in a different websocket stack.
Hope this helps.
CherryPy doesn't implement the socket.io protocol, nor does it support WebSocket out of the box. However, there is an extension to CherryPy, called ws4py, that implements just the bare WebSocket protocol on top of its stack. You could probably start there.
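For what it's worth, here is roughly what the ws4py route looks like, based on its CherryPy server support; treat the class names and the /ws mount point as placeholders, and check the ws4py docs for the exact API of the version you install. Note this gives you plain WebSocket only, not the socket.io protocol.

import cherrypy
from ws4py.server.cherrypyserver import WebSocketPlugin, WebSocketTool
from ws4py.websocket import WebSocket

class EchoWebSocket(WebSocket):
    def received_message(self, message):
        # echo every frame back to the client
        self.send(message.data, message.is_binary)

class Root(object):
    @cherrypy.expose
    def ws(self):
        # the websocket tool has already upgraded the connection by the time
        # this handler runs, so there is nothing left to do here
        pass

if __name__ == '__main__':
    WebSocketPlugin(cherrypy.engine).subscribe()
    cherrypy.tools.websocket = WebSocketTool()
    cherrypy.quickstart(Root(), '/', config={
        '/ws': {
            'tools.websocket.on': True,
            'tools.websocket.handler_cls': EchoWebSocket,
        }
    })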
I am a desktop programmer, but I want to learn something about web services, and I decided on Python. I am trying to understand how web applications work. I know how to create a basic Tornado website (request/response) and a working Jabber client, but I don't know how to combine them. Can I use any Python component in a web service? Does it have to have a specific structure (sync or async)? I'm stuck on the event loops:
If Tornado starts its web server with:
app = Application()
app.listen(options.port)
tornado.ioloop.IOLoop.instance().start()
... then how (and where) can I start the XMPP loop?
client.connect()
client.run()
I think the Tornado listen loop should handle the XMPP listening as well, but I don't know how.
Regards.
Edit: I forgot to mention that I am using pyxmpp2.
I believe what you are trying to accomplish is not feasible in a single Python thread, because both loops are trying to listen at the same time. Might I suggest looking at this tutorial on threading.
Another question is whether you are trying to make a web-based XMPP client or just have an XMPP and an HTML server running in the same script. If you are trying the former, I would advise you to look into inter-thread communication, either with zeromq or a queue.
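As a very rough sketch of the queue idea (make_xmpp_client and the web-side hand-off are placeholders, not pyxmpp2 API): run the XMPP client in its own thread, have its message handlers put incoming stanzas on a Queue, and drain that queue from the Tornado IO loop with a PeriodicCallback.

import Queue
import threading
import tornado.ioloop

inbox = Queue.Queue()

def run_xmpp():
    # placeholder: build and run your pyxmpp2 client here; its message
    # handlers should call inbox.put(...) for anything the web side needs
    client = make_xmpp_client(on_message=inbox.put)  # hypothetical helper
    client.connect()
    client.run()

def drain_inbox():
    # runs on the Tornado IO loop thread, so it is safe to touch handlers here
    while True:
        try:
            message = inbox.get_nowait()
        except Queue.Empty:
            return
        print "got from xmpp:", message  # hand off to waiting web clients here

threading.Thread(target=run_xmpp).start()
tornado.ioloop.PeriodicCallback(drain_inbox, 100).start()  # poll every 100 ms
tornado.ioloop.IOLoop.instance().start()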
Maybe WebSocketHandler and Thread will help you.
Demo
import threading
import tornado.websocket

class BotThread(threading.Thread):
    def __init__(self, my_jid, settings, on_message):
        super(BotThread, self).__init__()
        # EchoBot is pyxmpp2's Client
        self.bot = EchoBot(my_jid, settings, on_message=on_message)

    def run(self):
        self.bot.run()

class ChatSocketHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        # init xmpp client
        my_jid = ...      # your JID
        settings = ...    # your XMPPSettings
        bot = BotThread(my_jid, settings, on_message=self.on_message)
        bot.start()
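To complete the demo, the handler still has to be mounted in an application; something like this (the port and URL are arbitrary):

import tornado.ioloop
import tornado.web

application = tornado.web.Application([(r"/chat", ChatSocketHandler)])

if __name__ == "__main__":
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()

One caveat: bot.run() executes on the BotThread, so if on_message ends up writing back to the websocket, it should schedule that write with tornado.ioloop.IOLoop.instance().add_callback(...) rather than calling write_message directly, since Tornado handlers are not thread-safe.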
I have just begun to look at Tornado and asynchronous web servers. In many examples for Tornado, longer requests are handled by something like:
1. make a call to the Tornado web server
2. Tornado makes an async web call to an API
3. let Tornado keep taking requests while the callback waits to be called
4. handle the response in the callback and serve it to the user
So, for hypothetical purposes, say users are making requests to the Tornado server at /retrieve. /retrieve will make a request to an internal API, say myapi.com/retrieve_posts_for_user_id/ or whatever. The API request could take a second to run while Tornado keeps taking other requests; when it finally returns, Tornado serves up the response. First of all, is this flow the 'normal' way to use Tornado? Many of the code examples online would suggest so.
Secondly (and this is where my mind starts to boggle), assuming that the above flow is the standard flow, should myapi.com be asynchronous? If it's not async and its requests can take seconds apiece, wouldn't it create the same bottleneck a blocking server would? Perhaps an example of a normal use case for Tornado, or any async server, would help shed some light on this issue? Thank you.
Yes, as I understand your question, that is a normal use-case for Tornado.
If all requests to your Tornado server make requests to myapi.com, and myapi.com is blocking, then yes, myapi.com would still be the bottleneck. However, if only some requests have to be handled by myapi.com, then Tornado is still a win, as it can keep handling those other requests while waiting for responses from myapi.com. Regardless, if myapi.com can't handle the load, putting a Tornado server in front of it won't magically fix that. The difference is that your Tornado server will still be able to respond to other requests even when myapi.com is busy.
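To make the flow concrete, here is a minimal sketch of the /retrieve handler using the old callback-style AsyncHTTPClient API; the myapi.com URL is taken from the question and is just illustrative.

import tornado.httpclient
import tornado.ioloop
import tornado.web

class RetrieveHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        client = tornado.httpclient.AsyncHTTPClient()
        # the IO loop keeps serving other requests while this fetch is pending
        client.fetch("http://myapi.com/retrieve_posts_for_user_id/",
                     callback=self.on_api_response)

    def on_api_response(self, response):
        if response.error:
            self.send_error(502)
            return
        self.write(response.body)
        self.finish()

application = tornado.web.Application([(r"/retrieve", RetrieveHandler)])

if __name__ == "__main__":
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()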
I have been doing a lot of studying of BaseHTTPServer and found that it's not that good for multiple requests. I went through this article:
http://metachris.org/2011/01/scaling-python-servers-with-worker-processes-and-socket-duplication/#python
and I wanted to know what the best way is to build an HTTP server for multiple requests.
My requirements for the HTTP server are simple:
- support multiple requests (where each request may run a LONG Python script)
So far I have the following options:
- BaseHTTPServer (even with threads, it's not good)
- mod_python (Apache integration)
- CherryPy?
- Any other?
I have had very good luck with the CherryPy web server, one of the oldest and most solid of the pure-Python web servers. Just write your application as a WSGI callable and it should be easy to run under CherryPy's multi-threaded server.
http://www.cherrypy.org/
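A minimal sketch of that, assuming CherryPy 3.x where the standalone WSGI server lives in cherrypy.wsgiserver (newer releases moved it out into the cheroot package):

from cherrypy import wsgiserver

def app(environ, start_response):
    # each request is handled on one of the server's worker threads,
    # so a long-running script here does not block the other threads
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['Hello from a WSGI callable\n']

server = wsgiserver.CherryPyWSGIServer(('0.0.0.0', 8080), app, numthreads=10)

if __name__ == '__main__':
    try:
        server.start()
    except KeyboardInterrupt:
        server.stop()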
Indeed, the HTTP servers provided with the standard Python library are meant only for light-duty use; for moderate scaling (hundreds of concurrent connections), mod_wsgi in Apache is a great choice.
If your needs are greater than that (tens of thousands of concurrent connections), you'll want to look at an asynchronous framework such as Twisted or Tornado. The general structure of an asynchronous application is quite different, so if you think you're likely to need to go down that route, you should definitely start your project in one of those frameworks from the start.
Tornado is a really good and easy-to-use asynchronous event loop / web server developed by FriendFeed/Facebook. I've personally had very good experiences with it. You can use the HTTP classes as in the example below, or use only the IO loop to multiplex plain TCP connections.
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

application = tornado.web.Application([
    (r"/", MainHandler),
])

if __name__ == "__main__":
    application.listen(8888)
    tornado.ioloop.IOLoop.current().start()
Finally decided to go with Tornado as a WebSocket server, but I have a question about how it's implemented.
After following a basic tutorial on creating a working server, I ended up with this:
#!/usr/bin/env python

from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from tornado.web import Application
from tornado.websocket import WebSocketHandler

class Handler(WebSocketHandler):
    def open(self):
        print "New connection opened."

    def on_message(self, message):
        print message

    def on_close(self):
        print "Connection closed."

print "Server started."
HTTPServer(Application([("/", Handler)])).listen(1024)
IOLoop.instance().start()
It works great and all, but I was wondering whether the other modules (tornado.httpserver, tornado.ioloop, and tornado.web) are actually needed to run the server.
It's not a huge issue having them, but I just wanted to make sure there isn't a better way to do whatever they do (I haven't covered those modules at all yet).
tornado.httpserver:
A non-blocking, single-threaded HTTP server.
Typical applications have little direct interaction with the HTTPServer class.
HTTPServer is a very basic connection handler. Beyond parsing the HTTP request body and headers, the only HTTP semantics implemented in HTTPServer is HTTP/1.1 keep-alive connections.
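In practice that means your script does not have to construct an HTTPServer itself: Application.listen() builds one for you, so the startup code from the question can be shortened to something like this sketch.

from tornado.ioloop import IOLoop
from tornado.web import Application
from tornado.websocket import WebSocketHandler

class Handler(WebSocketHandler):
    def on_message(self, message):
        print message

print "Server started."
Application([("/", Handler)]).listen(1024)   # creates the HTTPServer internally
IOLoop.instance().start()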
tornado.ioloop:
An I/O event loop for non-blocking sockets.
So the IOLoop can also be used directly, for example to schedule timeouts on a response (IOLoop.add_timeout) or to hand work back to the loop from other threads.
In general, methods on RequestHandler and elsewhere in tornado are not thread-safe. In particular, methods such as write(), finish(), and flush() must only be called from the main thread. If you use multiple threads it is important to use IOLoop.add_callback to transfer control back to the main thread before finishing the request.
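A small illustration of that last point, using the old callback-style API: a worker thread does the slow part and hops back to the IO loop thread via add_callback before touching the response (the sleep is just a placeholder for real work).

import threading
import time
import tornado.ioloop
import tornado.web

class WorkHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        # do the slow part off the IO loop thread
        threading.Thread(target=self.do_work).start()

    def do_work(self):
        time.sleep(2)            # placeholder for real work
        result = "done"
        # write()/finish() are not thread-safe, so schedule them on the IO loop
        tornado.ioloop.IOLoop.instance().add_callback(
            lambda: self.on_done(result))

    def on_done(self, result):
        self.write(result)
        self.finish()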
tornado.web:
Provides the RequestHandler and Application classes.
It adds tools and optimizations on top of the Tornado non-blocking web server so you can take full advantage of it.
So, these are the provisions of this module:
- Entry points: hooks for subclass initialization
- Input
- Output
- Cookies
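A tiny RequestHandler that touches each of those provisions (the names and values are arbitrary):

import tornado.web

class ExampleHandler(tornado.web.RequestHandler):
    def initialize(self, greeting="Hello"):          # entry point: subclass init hook
        self.greeting = greeting

    def get(self):
        name = self.get_argument("name", "world")    # input
        visits = int(self.get_cookie("visits", "0")) + 1
        self.set_cookie("visits", str(visits))       # cookies
        self.write("%s, %s! Visit #%d" % (self.greeting, name, visits))  # output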
I hope this covers the modules you asked about.
Yes, they're needed, because you're using something from each module/package you import. If you imported something at the top of your source but never used it again in the code that follows, then of course you wouldn't need it, but in this case you use all of your imports.