HTTP server with socket communication - python

I am trying to create a simple HTTP server that will take HTTP requests from a client and interact with an application that listens on a port (on the same host).
My initial take was to create a BaseHTTPServer with custom definitions of GET and POST.
So when a request arrives at the server, the custom POST method first creates a socket for communication with the underlying application. It then extracts the data that needs to be passed to that application and writes it through the socket. Finally, it waits for the application to respond and passes that response back as the response to the initial POST request.
Even though that flow works to a degree, I feel that there is a much more appropriate solution to the problem.
In a nutshell, we have:
Server:
appA running on port 8000
http server on port 80
Client:
connect to http server and interact with appA through http requests.
e.g. post for sending data, get to receive response
Any thoughts on how to solve the problem more appropriately?
Maybe a library to handle sockets at a higher level (currently I am connecting to the socket and using select to read/write data)? I think asyncio may have such a capability.
In general, how would somebody optimally approach this problem, without going to the trouble of reimplementing things that already exist, like an HTTP server?
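For illustration, here is a minimal sketch of the flow described above, using the standard-library http.server and socket modules. The port numbers and the framing of appA's protocol (send the body, then read until the peer stops sending) are assumptions, not part of the original question:

    import socket
    from http.server import BaseHTTPRequestHandler, HTTPServer

    APP_HOST, APP_PORT = "127.0.0.1", 8000  # assumed address of appA


    class ProxyHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # Read the request body sent by the client.
            length = int(self.headers.get("Content-Length", 0))
            payload = self.rfile.read(length)

            # Forward it to appA over a plain TCP socket and collect the reply.
            with socket.create_connection((APP_HOST, APP_PORT), timeout=5) as conn:
                conn.sendall(payload)
                conn.shutdown(socket.SHUT_WR)  # assumed way to signal end of request to appA
                chunks = []
                while True:
                    data = conn.recv(4096)
                    if not data:
                        break
                    chunks.append(data)
            reply = b"".join(chunks)

            # Relay appA's reply as the body of the HTTP response.
            self.send_response(200)
            self.send_header("Content-Length", str(len(reply)))
            self.end_headers()
            self.wfile.write(reply)


    if __name__ == "__main__":
        HTTPServer(("", 80), ProxyHandler).serve_forever()  # port 80 usually needs elevated privileges

Alternatively, asyncio's streams API (asyncio.start_server / asyncio.open_connection) covers the higher-level socket handling mentioned above without hand-rolled select loops.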

Related

How server alone can push data to client browser using python and websockets

I'm trying to push some messages to the client (browser) without any request.
For that I am using WebSockets and Python. I know WebSockets provide full-duplex communication, but I just need server-to-client push. I could only find bi-directional WebSocket examples on the internet, all following a request/response model. When I tried to send in an infinite loop from the server after the handshake process, the browser hung up.
The code I used is in this post.
Is there any solution to do that, or is it better to go for SSE?
Once the client initiates the WebSocket connection, the server is free to push anything to the client side at any time.
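A minimal sketch of that push pattern, assuming the third-party websockets library on top of asyncio (the port and the one-second interval are arbitrary choices, not from the question):

    import asyncio
    import websockets


    async def push(websocket):
        # Note: older versions of the websockets library pass a second
        # `path` argument to this handler.
        counter = 0
        while True:
            # After the handshake the server may send at any time;
            # no request from the browser is required.
            await websocket.send(f"server push #{counter}")
            counter += 1
            await asyncio.sleep(1)  # pace the pushes instead of a tight loop


    async def main():
        async with websockets.serve(push, "localhost", 8765):
            await asyncio.Future()  # run forever


    if __name__ == "__main__":
        asyncio.run(main())

A tight send loop that never awaits anything does not yield control back to the event loop, which is one plausible reason the browser appeared to hang in the original attempt.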

Python websockets

Is it possible to have one protocol connect to another protocol on the same server? My goal is to accept a request for one protocol and then pass that request to a different protocol and have that second protocol return values to whatever client is connected to it.
I'm thinking you would transfer the onMessage request to the other protocol somehow.
I don't have any code to show as I don't know where to start, but any code examples would be appreciated.
What you're asking for sounds like a proxy server. A proxy can simply be a middleman that speaks the same protocol out both ends (as in a typical http proxy) or it can be some sort of translator that has one protocol coming in and another protocol going out.
So, suppose you want a browser to be able to use a webSocket connection to speak to some other server that doesn't speak the webSocket protocol. You could implement a proxy server yourself that allows the browser to connect to it and then, via your proxy, it could send/receive messages with the other server that speaks a different protocol.
To implement a proxy server like this, you would do the following:
Create a server process that listens for incoming webSocket connections. This would allow the browser to connect to your proxy.
Once connected, the browser would send a message (of your own design) over the webSocket.
Your proxy would receive that message and translate it to the protocol of the other server.
Your proxy would then connect to that other server and send the message to the other server.
Your proxy could then receive a response from that other server and then, if needed, send a translated response message back to the browser over the webSocket.
It would be the proxy's responsibility to translate each message data from what it received over the webSocket to whatever format/protocol the other server speaks.
It would be an implementation choice whether you maintained a dedicated connection between the proxy and the other server for each webSocket connection, directed all requests over one shared connection, or created a new connection on demand only for the duration of a given request. Which makes the most sense depends entirely upon the characteristics of the other server, the number of requests, and the work that is being done.
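A rough sketch of such a proxy, assuming the other server speaks a newline-delimited text protocol on localhost:9000 (the address, the framing, and the per-connection design are all assumptions) and using asyncio plus the third-party websockets library:

    import asyncio
    import websockets

    BACKEND_HOST, BACKEND_PORT = "127.0.0.1", 9000  # assumed address of the other server


    async def proxy(websocket):
        # One dedicated backend connection per webSocket connection
        # (one of the implementation choices discussed above).
        reader, writer = await asyncio.open_connection(BACKEND_HOST, BACKEND_PORT)
        try:
            async for message in websocket:
                # "Translate" the webSocket message into the backend's protocol;
                # here we simply assume text frames and newline-terminated lines.
                writer.write(message.encode() + b"\n")
                await writer.drain()

                # Read the backend's reply and relay it back to the browser.
                reply = await reader.readline()
                await websocket.send(reply.decode().rstrip("\n"))
        finally:
            writer.close()
            await writer.wait_closed()


    async def main():
        async with websockets.serve(proxy, "localhost", 8765):
            await asyncio.Future()  # run forever


    if __name__ == "__main__":
        asyncio.run(main())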

Python socket server do client authentication by using token

There're basically two issues I'd like to resolve:
Client side sends a query string when initializing the connection to the server.
Server side validates the user token during the handshake (not after the connection is established, by validating a streamed message that contains the token) and sets the user session accordingly.
I read an article (https://auth0.com/blog/2014/01/15/auth-with-socket-io/) that talks about this process implemented in Node.js, and I just wonder if the same thing can be achieved using Python. (Currently I'm doing some research on Twisted but haven't found anything similar.)
PS: I guess it's helpful to describe the use case as well. A user may log in to your server over normal HTTP, and the server will issue him/her a valid accessToken. Then this user may need to establish a socket connection with the server (or some other server), and that server needs to figure out who the user is and validate the token before establishing the socket connection.
Query strings are part of HTTP URLs.
If you're building a TCP socket server instead of an HTTP server, you don't get URLs—or headers, or anything else out-of-band.* All you get is a stream of data. You need to come up with a protocol for your data that you can fit the token into.
This means the server can't "figure out who the user is and validate before establishing the socket connection". It has to establish the socket connection, read the first message, parse it, validate the token, and then drop or continue the connection. (You can, of course, put up a front-end server that accepts connections, validates them, and then migrates or proxies them to the real back-end server. But someone has to accept, read, and parse.)
Note that this is exactly what HTTP does—it can't see the query string until it accepts the connection and reads the first line of data.
Meanwhile, the example you're looking at appears to be using WebSockets. A WebSockets client can't talk to a socket server (well, unless you build a WebSockets server on top of your socket server, or a proxy in front of it) in the first place.
* This isn't quite true. You can cram 40 bytes of options into TCP header extensions. But then you have to go below the level people are usually talking about when they say "socket server", and there's a good chance it won't make it through the internet. Also, TCP does have a concept of "out-of-band" data, but that isn't relevant here; you still have to accept the connection and read from it to get the OOB data.
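A rough sketch of the accept / read the first message / validate / continue-or-drop flow described above, using asyncio streams (the newline-delimited JSON framing, the accessToken field name, and the verify_token helper are all invented for illustration):

    import asyncio
    import json

    VALID_TOKENS = {"token-issued-earlier-over-http"}  # stand-in for a real token store


    def verify_token(token):
        # Hypothetical check; in practice you would verify the accessToken
        # that was issued during the normal HTTP login.
        return token in VALID_TOKENS


    async def handle_client(reader, writer):
        # The TCP connection already exists at this point; authentication
        # can only happen on the first message we read from it.
        first_line = await reader.readline()
        try:
            token = json.loads(first_line).get("accessToken")
        except (json.JSONDecodeError, AttributeError):
            token = None

        if not verify_token(token):
            writer.write(b'{"error": "unauthorized"}\n')
            await writer.drain()
            writer.close()  # drop the connection
            await writer.wait_closed()
            return

        writer.write(b'{"status": "authenticated"}\n')
        await writer.drain()
        # ... continue the authenticated session here ...
        writer.close()
        await writer.wait_closed()


    async def main():
        server = await asyncio.start_server(handle_client, "0.0.0.0", 9000)
        async with server:
            await server.serve_forever()


    if __name__ == "__main__":
        asyncio.run(main())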

How can I examine the network communications of the Python HTTP Client?

I'm trying to isolate a bug that exists either in Python's httplib2 HTTP client or in an API. (First guess is the API.) While using httplib2 to POST data to a RESTful API, I'm getting a 401 response status (no authorization) when saving data to the API.
I'd like to examine the HTTP request and response on the client side, the very strings put onto and received from the network. The httplib2 code seems too involved to easily capture the values from within it, and doing so might possibly miss the bug.
It seems quicker to look at the network communications with the client. Is there some tool I can use to monitor the client's communications with the local network socket?
I use http://www.charlesproxy.com for all my network debugging.
http://www.wireshark.org/ enables you to monitor local sockets too.
I was able to monitor the local loopback even on Windows, using the trick of adding a route.
See the "Other Alternatives" section of http://wiki.wireshark.org/CaptureSetup/Loopback.
Or you can just write a raw socket proxy that listens on one port on the client side, sends the data to the server on another port and vice versa, and prints out all the data as it passes through. It should not take more than a dozen lines of code; a sketch follows below.
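A hedged sketch of that logging proxy with asyncio (the listen port 8080 and upstream localhost:80 are assumptions; point your client at the listen port and the upstream values at the real API):

    import asyncio

    LISTEN_PORT = 8080                               # point the client here (assumption)
    UPSTREAM_HOST, UPSTREAM_PORT = "localhost", 80   # the real API endpoint (assumption)


    async def pipe(reader, writer, label):
        # Copy bytes in one direction and print everything that flows through.
        while True:
            data = await reader.read(4096)
            if not data:
                break
            print(f"--- {label} ({len(data)} bytes) ---")
            print(data.decode("latin-1", errors="replace"))
            writer.write(data)
            await writer.drain()
        writer.close()


    async def handle(client_reader, client_writer):
        upstream_reader, upstream_writer = await asyncio.open_connection(
            UPSTREAM_HOST, UPSTREAM_PORT
        )
        await asyncio.gather(
            pipe(client_reader, upstream_writer, "request"),
            pipe(upstream_reader, client_writer, "response"),
        )


    async def main():
        server = await asyncio.start_server(handle, "localhost", LISTEN_PORT)
        async with server:
            await server.serve_forever()


    if __name__ == "__main__":
        asyncio.run(main())

For httplib2 specifically, setting httplib2.debuglevel = 1 before making the request also prints the underlying http.client traffic, which may be enough on its own.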

How do I cleanly bridge client connections between a frontend webserver and a backend running CherryPy?

The title may be a bit vague, but here's my goal: I have a frontend webserver which takes incoming HTTP requests, does some preprocessing on them, and then passes the requests off to my real webserver to get the HTTP response, which is then passed back to the client.
Currently, my frontend is built off of BaseHTTPServer.HTTPServer and the backend is CherryPy.
So the question is: Is there a way to take these HTTP requests / client connections and insert them into a CherryPy server to get the HTTP response? One obvious solution is to run an instance of the CherryPy backend on a local port or using UNIX domain sockets, and then the frontend webserver establishes a connection with the backend and relays any requests/responses. Obviously, this isn't ideal due to the overhead.
What I'd really like is for the CherryPy backend to not bind to any port, but just sit there waiting for the frontend to pass the client's socket (as well as the modified HTTP Request info), at which point it does its normal CherryPy magic and returns the request directly to the client.
I've been perusing the CherryPy source to find some way to accomplish this, and currently am attempting to modify wsgiserver.CherryPyWSGIServer, but it's getting pretty hairy and is probably not the best approach.
Is your main app a wsgi application? If so, you could write some middleware that wraps around it and does all the request wrangling before passing on to the main application.
If this is possible, it would save you from having to run two webservers and avoid the problems you are encountering.
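As a rough illustration of that middleware idea (the CherryPy app, the header tagging, and the blocked /private prefix are invented stand-ins; cherrypy.tree.graft is used to mount the wrapped WSGI callable):

    import cherrypy


    class Root:
        @cherrypy.expose
        def index(self):
            return "Hello from the backend"


    class PreprocessMiddleware:
        """WSGI middleware that wrangles the request before the main app sees it."""

        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            # Example preprocessing (assumptions): tag the request, block a prefix.
            environ["HTTP_X_PREPROCESSED"] = "1"
            if environ.get("PATH_INFO", "").startswith("/private"):
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"Forbidden"]
            return self.app(environ, start_response)


    if __name__ == "__main__":
        wsgi_app = cherrypy.Application(Root())  # the "real" CherryPy application
        cherrypy.tree.graft(PreprocessMiddleware(wsgi_app), "/")
        cherrypy.config.update({"server.socket_port": 8080})
        cherrypy.engine.start()
        cherrypy.engine.block()

This keeps everything in a single server process, so there is no second listening port and no extra hop between the frontend and the backend.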
Answered the Upgrade question at Handling HTTP/1.1 Upgrade requests in CherryPy. Not sure if that addresses this one or not.
