I've got an Ubuntu server where I am running multiple web apps.
All of them are hosted by Apache using named VirtualHosts.
One of them is a Flask app, which is running via mod_wsgi.
This app is serving continuous, unlimited HTTP streams.
Does this eventually block my app/server/apache worker, if enough clients are connecting to the streaming endpoint?
And if yes, are there alternatives?
Other non-blocking WSGI servers that play nicely with VirtualHosts, a different HTTP streaming paradigm, or some magic Apache mod_wsgi settings?
The core of it looks like:
import time
from flask import Flask, Response

app = Flask(__name__)

@app.route('/stream')
def get_stream():
    def endless():
        # Never returns: holds a worker for the life of the connection.
        while True:
            yield get_stuff_from_redis()
            time.sleep(1)
    return Response(endless(), mimetype='application/json')
If the clients never disconnect, yes, you will eventually run out of processes/threads to handle more requests.
You are more than likely better off using an async framework such as Tornado or Twisted for this specific type of application. Doing async programming can be tricky if you aren't used to that concept.
Some people use coroutine systems such as gevent/eventlet, but those have their own problems you have to watch out for.
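To make that concrete, here is a minimal sketch of what the streaming endpoint could look like as a Tornado handler (assuming Tornado 4.x; get_stuff_from_redis is the question's own placeholder):

import tornado.gen
import tornado.ioloop
import tornado.web

class StreamHandler(tornado.web.RequestHandler):
    @tornado.gen.coroutine
    def get(self):
        self.set_header('Content-Type', 'application/json')
        while True:
            self.write(get_stuff_from_redis())  # placeholder from the question
            yield self.flush()            # hand control back to the IOLoop
            yield tornado.gen.sleep(1)    # non-blocking sleep, no worker held

if __name__ == '__main__':
    tornado.web.Application([(r'/stream', StreamHandler)]).listen(8888)
    tornado.ioloop.IOLoop.current().start()

Because both the flush and the sleep yield to the IOLoop, a single process can keep many of these streams open at once instead of pinning one worker per client.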
Related
We are using a Python Tornado server for a new project.
The server should work like a Node.js server, accepting thousands of connections and keeping them open for a long time until a response is ready.
The response is the result of multiple HTTP requests to external resources, so of course we also need to support a lot of concurrently open HTTP client connections (a few hundred at least).
We tried to configure the AsyncHTTPClient like this:
import tornado.httpclient
import tornado.ioloop

if __name__ == "__main__":
    app = make_app()  # application factory defined elsewhere in our code
    app.listen(8888)
    # configure() only affects AsyncHTTPClient instances created after it runs
    tornado.httpclient.AsyncHTTPClient.configure(
        "tornado.simple_httpclient.SimpleAsyncHTTPClient", max_clients=1000,
        defaults=dict(connect_timeout=10.0, request_timeout=100.0))
    tornado.ioloop.IOLoop.current().start()
It seems that our server is working fine, but we have a problem with the HTTP client: it doesn't seem to scale to more than a dozen connections, and the application simply hangs until it gets a lot of timeout errors (error 599).
Any idea whether the Tornado async HTTP client is buggy, or are we using it in the wrong manner?
Any ideas for a replacement technology (in Python)?
I'm trying to integrate a RESTful responder in a Crossbar application, for which the best fit seems to be a WSGI service. This service ideally should be part of the rest of the pub/sub infrastructure, being able to receive WAMP events on the one hand and answer HTTP requests on the other.
The difficulty is to run an event loop which allows asynchronous WebSocket events and additionally offers a WSGI-compliant component. It seems to me that Pulsar should be able to do that, but I have not been able to figure out how to set it up; none of the available samples demonstrates exactly this use case.
from autobahn.twisted.wamp import ApplicationRunner, ApplicationSession
from flask import Flask
from twisted.internet.defer import inlineCallbacks

value = None

class Foo(ApplicationSession):
    @inlineCallbacks
    def onJoin(self, details):
        yield self.subscribe(self.bar, 'bar')

    def bar(self, data):
        global value  # without this, value would only be rebound locally
        value = data

app = Flask(__name__)

@app.route('/')
def baz():
    return value

if __name__ == '__main__':
    runner = ApplicationRunner('ws://127.0.0.1:8080', 'test')
    runner.run(Foo, start_reactor=False)
    # now what?
The above demonstrates the two parts, an Autobahn WAMP client and a Flask WSGI component. How do I run both of these in parallel, allowing one thread to receive events both via HTTP and via WebSocket? I don't particularly care about the version of Python nor the underlying library (Twisted, asyncio, Pulsar, Flask), I'd just like to get this running somehow.
WSGI is an inherently synchronous API. I don't know about Pulsar, but I would be surprised if it could somehow magically work around this fact.
The way Crossbar.io integrates with classic (synchronous) Web stacks is via a REST bridge. As of today (2015/02), the WAMP "Publisher" role is covered: you can publish a WAMP event by doing a simple HTTP POST (http://crossbar.io/docs/HTTP-Pusher-Service/). This REST bridge in Crossbar.io will be extended to cover all four WAMP roles in the near future.
If you take a step back, and primarily care about creating a REST API in your app that integrates directly with WAMP and asynchronous code, I'd have a look at Twisted Klein. Klein is essentially modeled after Flask, but at the source level. We have a blog post that covers exactly this: Mixing Web and WAMP code with Twisted Klein.
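For a taste of the Klein side, here is a minimal sketch (only the web half; wiring a WAMP session into the same reactor is what the blog post walks through):

from klein import Klein

app = Klein()

@app.route('/')
def home(request):
    # Handlers run inside the Twisted reactor, so they can return
    # Deferreds and coexist with an Autobahn ApplicationSession.
    return 'hello from Klein'

app.run('localhost', 8080)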
I use Tornado as the web server. I write some daemons in Python, which run on the server hardware. Sometimes the web server needs to send some data to a daemon and receive some computed results. There are two working modes:
1. Asynchronous mode: the server sends some data to the daemons and doesn't need the results soon. Can I use a message queue to do this cleanly?
2. Synchronous mode: the server sends data to the daemons and waits until it gets the results. Should I use sockets?
So what's the best way of communication between Tornado and a Python-based daemon?
ZeroMQ can be used for this purpose. It has various sockets for different purposes, and it's fast enough that it will never be your bottleneck. For asynchronous mode you can use DEALER/ROUTER sockets, and for strict synchronous mode you can use REQ/REP sockets.
You can use the Python binding for this: http://www.zeromq.org/bindings:python.
For the async mode you can try something like the router-to-dealer async routing pattern from zguide chapter 3.
In your case, the "client" in the zguide diagram will be your web server, and your daemon will be the "worker".
For synchronous mode you can try a simple request-reply broker or some variant to suit your needs.
The zguide diagram shows a strictly synchronous cycle of send/recv at the REQ/REP sockets. Read through the zguide link to understand how it works; they also have a Python code snippet on the page.
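A minimal pyzmq sketch of the synchronous REQ/REP mode (the payload is made up; your daemon's real computation replaces the echo):

# daemon.py -- run as its own process
import zmq

context = zmq.Context()
rep = context.socket(zmq.REP)
rep.bind('tcp://*:5555')
while True:
    data = rep.recv_json()           # wait for work from the web server
    rep.send_json({'result': data})  # echo; do the real computation here

# web-server side -- called from the Tornado process
import zmq

context = zmq.Context()
req = context.socket(zmq.REQ)
req.connect('tcp://localhost:5555')
req.send_json({'task': 42})
reply = req.recv_json()              # blocks until the daemon answers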
Depending on the scale, the simple thing is to just use HTTP and the AsyncHTTPClient in Tornado. For the request<->response case in our application we're handling 300 connections/second with such an approach.
For the first case (fire and forget), you could also use AsyncHTTPClient and just have the server close out the connection and continue working...
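A minimal sketch of that fire-and-forget call from the Tornado side (the daemon URL and payload are assumptions):

from tornado.httpclient import AsyncHTTPClient

def notify_daemon(payload):
    # Send the work and don't wait for the result; the fetch runs on
    # the IOLoop and the returned Future is simply ignored.
    AsyncHTTPClient().fetch('http://localhost:9000/task',
                            method='POST', body=payload)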
I have been doing a lot of studying of BaseHTTPServer and found that it's not that good for handling multiple requests. I went through this article:
http://metachris.org/2011/01/scaling-python-servers-with-worker-processes-and-socket-duplication/#python
and I wanted to know the best way to build an HTTP server for multiple requests.
My requirements for the HTTP Server are simple -
- support multiple requests (where each request may run a long Python script)
So far I have the following options:
- BaseHTTPServer (with threading, which is not good)
- Mod_Python (Apache integration)
- CherryPy?
- Any other?
I have had very good luck with the CherryPy web server, one of the oldest and most solid of the pure-Python web servers. Just write your application as a WSGI callable and it should be easy to run under CherryPy's multi-threaded server.
http://www.cherrypy.org/
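A minimal sketch of that setup, assuming CherryPy 3.x and its bundled wsgiserver module:

from cherrypy import wsgiserver

def app(environ, start_response):
    # A plain WSGI callable; each request runs on one of the pool's
    # threads, so a long-running script ties up only one worker.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello, world\n']

server = wsgiserver.CherryPyWSGIServer(('0.0.0.0', 8080), app, numthreads=10)
try:
    server.start()
except KeyboardInterrupt:
    server.stop()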
Indeed, the HTTP servers provided with the standard Python library are meant only for light-duty use; for moderate scaling (hundreds of concurrent connections), mod_wsgi in Apache is a great choice.
If your needs are greater than that (tens of thousands of concurrent connections), you'll want to look at an asynchronous framework such as Twisted or Tornado. The general structure of an asynchronous application is quite different, so if you think you're likely to need to go down that route, you should definitely start your project in one of those frameworks from the beginning.
Tornado is a really good and easy-to-use asynchronous event loop / web server developed by FriendFeed/Facebook. I've personally had very good experiences with it. You can use the HTTP classes as in the example below, or only the IO loop to multiplex plain TCP connections.
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

application = tornado.web.Application([
    (r"/", MainHandler),
])

if __name__ == "__main__":
    application.listen(8888)
    tornado.ioloop.IOLoop.current().start()
How should I implement reverse AJAX when building a chat application in Django? I've looked at Django-Orbited, and from my understanding, this puts a comet server in front of the HTTP server. This seems fine if I'm just running the Django development server, but how does this work when I start running the application from mod_wsgi? How does having the Orbited server handle every request scale? Is this the correct approach?
I've looked at another approach (long polling) that seems like it would work, although I'm not sure what all would be involved. Would the client request a page that would live in its own thread, so as not to block the rest of the application? Would it even block? Wouldn't the script requested by the client have to continuously poll for information?
Which of the approaches is more proper? Which is more portable, scalable, sane, etc? Are there other good approaches to this (aside from the client polling for messages) that I have overlooked?
How about using the awesome nginx push module?
Have you taken a look at Tornado?
Using WSGI for comet/long-polling apps is not a good choice because it doesn't support non-blocking requests.
The Nginx Push Stream Module provides a simple HTTP interface for both the server and the client.
The Nginx HTTP Push Module is similar, but seems to no longer be maintained.