How should I implement reverse AJAX when building a chat application in Django? I've looked at Django-Orbited, and from my understanding, this puts a comet server in front of the HTTP server. This seems fine if I'm just running the Django development server, but how does this work when I start running the application from mod_wsgi? How does having the orbited server handling every request scale? Is this the correct approach?
I've looked at another approach (long polling) that seems like it would work, although I'm not sure what all would be involved. Would the client request a page that would live in its own thread, so as not to block the rest of the application? Would it even block? Wouldn't the script requested by the client have to continuously poll for information?
Which of the approaches is more proper? Which is more portable, scalable, sane, etc? Are there other good approaches to this (aside from the client polling for messages) that I have overlooked?
How about using the awesome nginx push module?
Have you taken a look at Tornado?
Using WSGI for comet/long-polling apps is not a good choice because it doesn't support non-blocking requests.
The Nginx Push Stream Module provides a simple HTTP interface for both the server and the client.
The Nginx HTTP Push Module is similar, but seems to no longer be maintained.
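Because the interface is plain HTTP, any client can talk to it. Below is a rough sketch of publishing and long-polling from Python; the host, port, and the /pub and /sub paths are assumptions taken from the module's example configuration, so adjust them to whatever your nginx config actually uses.

```python
# Illustrative only: assumes nginx exposes the Push Stream Module at the
# conventional /pub (publisher) and /sub (subscriber) locations.
import urllib2

BASE = "http://localhost:80"

def publish(channel, message):
    # Publishing a message is just a POST to the publisher location.
    return urllib2.urlopen(BASE + "/pub?id=" + channel, data=message).read()

def subscribe(channel):
    # In long-polling mode the subscriber request blocks until a message arrives.
    return urllib2.urlopen(BASE + "/sub/" + channel).read()
```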
I am currently working on a desktop application in which a pair of instances of this application will have to communicate with each other over HTTP, meaning that each will act as both client and server simultaneously. What Python webserver would suffice for such a desktop application?
You haven't really given us much information, but depending on your use-case SimpleHTTPServer might suffice. It's in the standard library, which is convenient.
There's also Bottle, Flask, web.py...
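If SimpleHTTPServer's static file serving isn't enough, a small handler built on the standard library is still only a few lines. A minimal sketch (Python 2 module names; the port and response body are placeholders for illustration):

```python
import BaseHTTPServer

class PeerHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(self):
        # Real logic would inspect self.path and act on the peer's request.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write("hello from the other instance\n")

if __name__ == "__main__":
    # Each copy of the desktop app runs one of these and uses urllib2 as the client.
    BaseHTTPServer.HTTPServer(("127.0.0.1", 8000), PeerHandler).serve_forever()
```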
I have a sample HTTP server that you can find here:
http://snipplr.com/view/57745/python-web-server/
I think you are looking for bi-directional HTTP communication.
There is plenty of material on it; HTML5 WebSocket is my favourite.
Check this resource or BOSH.
Hope this helps.
I want to create a Python application that is always listening on a parameterized port. Whenever a request comes in on that port, the application will parse the request and perform tasks based on it.
Is this type of application called a service? (I have zero knowledge of services.) Where can I find beginner's tips and guides on this type of development?
This is called a server; there are examples at the bottom of the Python socket documentation page.
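For a concrete starting point, the classic echo server from those docs looks roughly like this (the port number and buffer size are arbitrary):

```python
import socket

HOST, PORT = "", 50007  # listen on all interfaces, arbitrary port

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind((HOST, PORT))
sock.listen(1)
conn, addr = sock.accept()
print("Connected by", addr)
while True:
    data = conn.recv(1024)
    if not data:
        break
    # Parse the request here and decide what to do; this example just echoes it back.
    conn.sendall(data)
conn.close()
```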
HTH.
This is socket programming. Writing sockets is cumbersome, so you can use any web server written in Python instead. My recommendation is to use werkzeug; it is very simple. Meanwhile, have a look at Flask, which is built on top of werkzeug.
If you are trying to build your own protocol engine, Twisted is one framework that will help you achieve that.
You can use threads or the Twisted framework (arguably the easier option) to create a server.
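As a rough illustration of the Twisted route, here is a minimal protocol listening on a configurable port; the port number and the echo behaviour are placeholders, and the request parsing would be whatever your application needs:

```python
from twisted.internet import protocol, reactor

class RequestHandler(protocol.Protocol):
    def dataReceived(self, data):
        # Parse the incoming request and act on it; this sketch just echoes it back.
        self.transport.write(data)

class RequestHandlerFactory(protocol.Factory):
    def buildProtocol(self, addr):
        return RequestHandler()

if __name__ == "__main__":
    reactor.listenTCP(8000, RequestHandlerFactory())  # the "parameterized" port
    reactor.run()
```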
EDIT: Question updated. Thanks Slott.
I have a TCP Server in Python.
It is a server with asynchronous behaviour.
The message format is Binary Data.
Currently I have a python client that interacts with the code.
What I want to be able to do eventually is implement a web-based front end to this client.
I just wanted to know what the correct design for such an application would be.
Start with any WSGI-based web server; werkzeug is one choice.
Asynchronous TCP/IP is a seriously complicated problem. HTTP is synchronous, so using a synchronous web server to present asynchronous data is always a problem. Always.
The best you can do is to buffer things and have two processes in your web application.
A TCP/IP process that collects data from the remote server and buffers it in a file (or files) somewhere.
A WSGI web process which handles GET/POST processing.
GET requests will fetch some or all of the buffer and display it.
POST requests will send a message to the TCP/IP server.
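A rough sketch of the WSGI side of that design might look like the following, assuming the TCP/IP process appends incoming messages to a buffer file and listens on a known address; the file name, address, and message framing here are made up for illustration:

```python
import os
import socket

BUFFER_PATH = "/tmp/incoming_messages.log"   # written by the TCP/IP process
TCP_ADDR = ("127.0.0.1", 9000)               # where the TCP/IP server listens

def application(environ, start_response):
    if environ["REQUEST_METHOD"] == "GET":
        # Return whatever the TCP/IP process has buffered so far.
        body = b""
        if os.path.exists(BUFFER_PATH):
            with open(BUFFER_PATH, "rb") as f:
                body = f.read()
        start_response("200 OK", [("Content-Type", "application/octet-stream")])
        return [body]
    else:
        # POST: forward the client's message to the TCP/IP server.
        length = int(environ.get("CONTENT_LENGTH") or 0)
        message = environ["wsgi.input"].read(length)
        sock = socket.create_connection(TCP_ADDR)
        try:
            sock.sendall(message)
        finally:
            sock.close()
        start_response("202 Accepted", [("Content-Type", "text/plain")])
        return [b"queued\n"]
```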
For Web-based, talk HTTP. Use JSON or XML as data formats.
Be standards-compliant and make use of the vast number of libraries out there. Don't reinvent the wheel. This way you have less headaches in the long run.
If you need to maintain a connection to a backend server across multiple HTTP requests, Twisted's HTTP server is an ideal choice, since it's built to manage multiple connections easily.
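As a very small illustration of the shape of that, here is a twisted.web resource served by the reactor that would also own the persistent backend connection; the resource, port, and response text are placeholders, not a complete design:

```python
from twisted.internet import reactor
from twisted.web import resource, server

class Updates(resource.Resource):
    isLeaf = True

    def render_GET(self, request):
        # In a real app this would read state maintained by the persistent
        # backend TCP connection, which lives on the same reactor.
        return "latest data from the backend would go here\n"

if __name__ == "__main__":
    reactor.listenTCP(8080, server.Site(Updates()))
    reactor.run()
```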
The title may be a bit vague, but here's my goal: I have a frontend webserver which takes incoming HTTP requests, does some preprocessing on them, and then passes the requests off to my real webserver to get the HTTP response, which is then passed back to the client.
Currently, my frontend is built off of BaseHTTPServer.HTTPServer and the backend is CherryPy.
So the question is: Is there a way to take these HTTP requests / client connections and insert them into a CherryPy server to get the HTTP response? One obvious solution is to run an instance of the CherryPy backend on a local port or using UNIX domain sockets, and then the frontend webserver establishes a connection with the backend and relays any requests/responses. Obviously, this isn't ideal due to the overhead.
What I'd really like is for the CherryPy backend to not bind to any port, but to just sit there waiting for the frontend to pass it the client's socket (as well as the modified HTTP request info), at which point it does its normal CherryPy magic and returns the response directly to the client.
I've been perusing the CherryPy source to find some way to accomplish this, and currently am attempting to modify wsgiserver.CherryPyWSGIServer, but it's getting pretty hairy and is probably not the best approach.
Is your main app a wsgi application? If so, you could write some middleware that wraps around it and does all the request wrangling before passing on to the main application.
If this is possible, it would avoid you having to run two webservers and all the problems you are encountering.
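A minimal sketch of what that middleware could look like, with the preprocessing step and the header name invented purely for illustration:

```python
import cherrypy

class PreprocessingMiddleware(object):
    """WSGI middleware that does the frontend's request wrangling."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        # Rewrite headers/paths (whatever the frontend did) before handing
        # the request to the real application.
        environ["HTTP_X_PREPROCESSED"] = "1"
        return self.app(environ, start_response)

class Root(object):
    @cherrypy.expose
    def index(self):
        return "hello"

if __name__ == "__main__":
    app = cherrypy.Application(Root(), "/")               # the real backend app
    cherrypy.tree.graft(PreprocessingMiddleware(app), "/")  # serve it wrapped
    cherrypy.engine.start()
    cherrypy.engine.block()
```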
Answered the Upgrade question at Handling HTTP/1.1 Upgrade requests in CherryPy. Not sure if that addresses this one or not.
We're implementing a Chat server using Tornado.
The premise is simple: a user opens an HTTP AJAX connection to the Tornado server, and the Tornado server answers only when a new message appears in the chat room. Whenever the connection closes, regardless of whether a new message came in or an error/timeout occurred, the client reopens the connection.
Looking at Tornado, the question arises of which library we can use to let these calls wait on some central object that signals them: A_NEW_MESSAGE_HAS_ARRIVED_ITS_TIME_TO_SEND_BACK_SOME_DATA.
To describe this in Win32 terms, each async call would be represented as a thread that would be hanging on a WaitForSingleObject(...) on some central Mutex/Event/etc.
We will be operating in a standard Python environment (Tornado). Is there something built-in we can use? Do we need an external library/server? Is there something Tornado recommends?
Thanks
I'm looking into the best options for developing a chat application and was looking into Tornado as well. The Rough Cuts book Building the Realtime User Experience has a chapter on building a chat application with Tornado that might be useful to you. Best of luck :)
Tornado ships with a "chat" example that uses long polling. It contains everything you need (or actually, probably more than you need, since it includes third-party login).
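The heart of that example is a shared object that holds the waiting handlers' callbacks and fires them when a message arrives, which is exactly the "central object" asked about above. A stripped-down sketch in the older, pre-coroutine Tornado style; the class names and URLs here are made up for illustration:

```python
import tornado.ioloop
import tornado.web

class MessageBuffer(object):
    def __init__(self):
        self.waiters = []

    def wait_for_message(self, callback):
        # Each pending long-poll request parks a callback here.
        self.waiters.append(callback)

    def new_message(self, message):
        # Wake every waiting request, then clear the list.
        waiters, self.waiters = self.waiters, []
        for callback in waiters:
            callback(message)

message_buffer = MessageBuffer()

class UpdatesHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        # The request stays open until new_message() fires our callback.
        message_buffer.wait_for_message(self.on_message)

    def on_message(self, message):
        if not self.request.connection.stream.closed():
            self.finish({"message": message})

class NewMessageHandler(tornado.web.RequestHandler):
    def post(self):
        message_buffer.new_message(self.get_argument("message"))
        self.write("ok")

application = tornado.web.Application([
    (r"/updates", UpdatesHandler),
    (r"/new", NewMessageHandler),
])

if __name__ == "__main__":
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()
```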