Hello! I have this code:
from twisted.web import proxy, http
from twisted.internet import reactor
class akaProxy(proxy.Proxy):
    """
    Local proxy = bridge between browser and web application
    """
    def dataReceived(self, data):
        print "Received data..."
        headers = data.split("\n")
        request = headers[0].split(" ")
        method = request[0].lower()
        action = request[1]
        print action
        print "ended content manipulation"
        return proxy.Proxy.dataReceived(self, data)

class ProxyFactory(http.HTTPFactory):
    protocol = akaProxy

def intercept(port):
    print "Intercept"
    try:
        factory = ProxyFactory()
        reactor.listenTCP(port, factory)
        reactor.run()
    except Exception as excp:
        print str(excp)

intercept(1337)
I use the above code to intercept everything between the browser and the web site. When using it, I configure my browser's proxy settings to IP 127.0.0.1 and port 1337. I put this script on a remote server so that the remote server acts as the proxy. But when I change the browser's proxy IP setting to my server's, it does not work. What am I doing wrong? What else do I need to configure?
Presumably your dataReceived is raising an exception during its attempts to parse the data passed to it. Try enabling logging so you can see more of what's going on:
from twisted.python.log import startLogging
from sys import stdout
startLogging(stdout)
The reason your parser is likely to raise exceptions is that dataReceived is not called only with a complete request. It is called with whatever bytes are read from the TCP connection. This may be a complete request, a partial request, or even two requests (if pipelining is in use).
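If you want to keep parsing at this layer, one option is to buffer what arrives and only parse once the header block is complete. A rough sketch of that idea (the class name and buffering scheme are mine, and it ignores pipelined requests):
from twisted.web import proxy

class BufferingProxy(proxy.Proxy):
    """Hypothetical variant: accumulate bytes, parse only a complete head."""
    def connectionMade(self):
        self.buffered = ""
        proxy.Proxy.connectionMade(self)

    def dataReceived(self, data):
        self.buffered += data
        # The request head is complete once the blank line arrives.
        if "\r\n\r\n" in self.buffered:
            request_line = self.buffered.split("\r\n")[0]
            parts = request_line.split(" ")
            if len(parts) >= 2:
                print "method = %s, action = %s" % (parts[0].lower(), parts[1])
        return proxy.Proxy.dataReceived(self, data)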
dataReceived in the Proxy context is handling the "translation of rawData into lines", so it is too early to attempt your manipulation code there. You can override allContentReceived instead, which gives you access to the complete headers and content. Here is an example that I believe does what you are after:
#!/usr/bin/env python
from twisted.web import proxy, http
class SnifferProxy(proxy.Proxy):
    """
    Local proxy = bridge between browser and web application
    """
    def allContentReceived(self):
        print "Received data..."
        print "method = %s" % self._command
        print "action = %s" % self._path
        print "ended content manipulation\n\n"
        return proxy.Proxy.allContentReceived(self)

class ProxyFactory(http.HTTPFactory):
    protocol = SnifferProxy

if __name__ == "__main__":
    from twisted.internet import reactor
    reactor.listenTCP(8080, ProxyFactory())
    reactor.run()
I've made a Python server and I'd like to create, send, and receive cookies. I have a problem with receiving them: when I visit the page in Chrome I can see the cookie was created. I've read that it should appear in os.environ, but it never does. Here's my code:
import os
import time
import Cookie
import BaseHTTPServer
from multiprocessing import Process
from SocketServer import ThreadingMixIn
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler
class MyHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(s):
        # creating cookie
        c = Cookie.SimpleCookie()
        c['api'] = 'token'
        c['api']['expires'] = 3 * 60 * 60
        s.send_response(200)
        # sending cookie
        s.wfile.write(c)
        s.wfile.write('\r\n')
        s.send_header("Access-Control-Allow-Origin", "*")
        s.send_header("Access-Control-Expose-Headers", "Access-Control-Allow-Origin")
        s.send_header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept")
        s.end_headers()
        # reading cookies
        if 'HTTP_COOKIE' in os.environ:
            cookie_string = os.environ.get('HTTP_COOKIE')
            c = Cookie.SimpleCookie()
            c.load(cookie_string)
            try:
                data = c['api'].value
                print "cookie data: " + data
            except KeyError:
                print "The cookie was not set or has expired"
        else:
            print 'The cookie was not set'

class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
    '''Handle each request in a separate thread.'''

HOST_NAME = ''
PORT_NUMBER = 8666

if __name__ == '__main__':
    httpd = ThreadedHTTPServer((HOST_NAME, PORT_NUMBER), MyHandler)
    print time.asctime(), "Server Starts - %s:%s" % (HOST_NAME, PORT_NUMBER)
    try:
        httpd.serve_forever()
    except KeyboardInterrupt:
        pass
    httpd.server_close()
    print time.asctime(), "Server Stops - %s:%s" % (HOST_NAME, PORT_NUMBER)
After I visit my site the cookie is created, but HTTP_COOKIE never appears in os.environ.
For future readers:
Here's how you parse cookies in Python 3:
from http.server import BaseHTTPRequestHandler
from http.cookies import SimpleCookie
class MyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        cookies = SimpleCookie(self.headers.get('Cookie'))
        # then use it somewhat like a dict, e.g.:
        username = cookies['username'].value
To answer OP's question:
The problem is that you are looking for the cookie in the wrong place.
With the following lines, you check whether your operating system's environment variables contain one named HTTP_COOKIE:
if 'HTTP_COOKIE' in os.environ:
    cookie_string = os.environ.get('HTTP_COOKIE')
But there is no reason that running a Python server would create an operating-system-wide environment variable.
Instead, you must look inside the BaseHTTPRequestHandler that you are deriving from.
The correct way to access the cookies is the following:
cookie_string = s.headers.get('Cookie')
which will parse the headers sent by the client and give you the corresponding cookie string.
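Applied to the question's Python 2 handler, the fix might look like the sketch below: read the cookie from the request headers, and send the new cookie as a proper Set-Cookie response header instead of writing it to wfile (only the handler is shown; the server setup stays as in the question):
import Cookie
from BaseHTTPServer import BaseHTTPRequestHandler

class MyHandler(BaseHTTPRequestHandler):
    def do_GET(s):
        # reading cookies: they arrive in the request headers, not os.environ
        cookie_string = s.headers.get('Cookie')
        if cookie_string:
            c = Cookie.SimpleCookie()
            c.load(cookie_string)
            if 'api' in c:
                print "cookie data: " + c['api'].value
            else:
                print "The cookie was not set or has expired"
        else:
            print 'The cookie was not set'
        # creating and sending a cookie as a proper response header
        out = Cookie.SimpleCookie()
        out['api'] = 'token'
        out['api']['expires'] = 3 * 60 * 60
        s.send_response(200)
        s.send_header('Set-Cookie', out['api'].OutputString())
        s.end_headers()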
So here's the deal: I'm writing a simple lightweight IRC app, hosted locally, that basically does the same job as XChat but works in your browser, just like Sabnzbd. I display search results in the browser as an HTML table, and an AJAX GET request fired by an on-click event launches the download. I use another AJAX GET request in a one-second loop to poll for the download information (status, progress, speed, ETA, etc.). I hit a bump with the simultaneous AJAX requests, since my CGI handler seems to only be able to handle one request at a time: the main thread processes the download while the requests for download status keep arriving.
Since I had a Django app somewhere, I tried implementing this IRC app there, and everything works fine: simultaneous requests are handled properly.
So is there something I have to know about the HTTP handler? Is it not possible for the basic CGI handler to deal with simultaneous requests?
I use the following for my CGI IRC app:
from http.server import BaseHTTPRequestHandler, HTTPServer, CGIHTTPRequestHandler
If it's about my code rather than theory, I can gladly post the various Python scripts if that helps.
A little bit deeper into the documentation:
These four classes process requests synchronously; each request must be completed before the next request can be started.
TL;DR: Use a real web server.
So, after further research, here's my code, which works:
from http.server import BaseHTTPRequestHandler, HTTPServer, CGIHTTPRequestHandler
from socketserver import ThreadingMixIn
import threading
import cgitb; cgitb.enable()  # enables CGI error reporting
import webbrowser

class HTTPRequestHandler(CGIHTTPRequestHandler):
    """Handle requests in a separate thread."""
    def do_GET(self):
        # Requesting any path containing "shutdown" stops the server.
        if "shutdown" in self.path:
            self.send_head()
            print("shutdown")
            server.stop()
        else:
            self.send_head()

class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
    allow_reuse_address = True
    daemon_threads = True
    def shutdown(self):
        self.socket.close()
        HTTPServer.shutdown(self)

class SimpleHttpServer():
    def __init__(self, ip, port):
        self.server = ThreadedHTTPServer((ip, port), HTTPRequestHandler)
        self.status = 1

    def start(self):
        self.server_thread = threading.Thread(target=self.server.serve_forever)
        self.server_thread.daemon = True
        self.server_thread.start()

    def waitForThread(self):
        self.server_thread.join()

    def stop(self):
        self.server.shutdown()
        self.waitForThread()

if __name__ == '__main__':
    HTTPRequestHandler.cgi_directories = ["/", "/ircapp"]
    server = SimpleHttpServer('localhost', 8020)
    print('HTTP Server Running...........')
    webbrowser.open_new_tab('http://localhost:8020/ircapp/search.py')
    server.start()
    server.waitForThread()
I have written this HTTP web server in Python which simply sends the reply "Website Coming Soon!" to the browser/client, but I want the server to send back the URL given by the client. For example, if I request
http://localhost:13555/ChessBoard_x16_y16.bmp
then the server should reply with that same URL instead of the "Website Coming Soon!" message. Please tell me how I can do this.
Server Code:
import sys
import http.server
from http.server import HTTPServer
from http.server import SimpleHTTPRequestHandler
#import usb.core
class MyHandler(SimpleHTTPRequestHandler):  # handles client requests (by me)
    #def init(self, req, client_addr, server):
    #    SimpleHTTPRequestHandler.__init__(self, req, client_addr, server)
    def do_GET(self):
        response = "Website Coming Soon!"
        self.send_response(200)
        self.send_header("Content-type", "application/json;charset=utf-8")
        self.send_header("Content-length", len(response))
        self.end_headers()
        self.wfile.write(response.encode("utf-8"))
        self.wfile.flush()
        print(response)

HandlerClass = MyHandler
Protocol = "HTTP/1.1"
port = 13555
server_address = ('localhost', port)
HandlerClass.protocol_version = Protocol

try:
    httpd = HTTPServer(server_address, MyHandler)
    print("Server Started")
    httpd.serve_forever()
except:
    print('Shutting down server due to some problems!')
    httpd.socket.close()
You can do what you're asking, sort of, but it's a little complicated.
When a client (e.g., a web browser) connects to your web server, it sends a request that looks like this:
GET /ChessBoard_x16_y16.bmp HTTP/1.1
Host: localhost:13555
This assumes your client is using HTTP/1.1, which is likely true of anything you'll find these days. If you expect HTTP/1.0 or earlier clients, life is much more difficult because there is no Host: header.
Using the value of the Host header and the path passed as an argument to the GET request, you can construct a URL that in many cases will match the URL the client was using.
But it won't necessarily match in all cases:
There may be a proxy between the client and your server, in which case both the path and the hostname/port seen by your code may differ from those used by the client.
There may be packet manipulation rules in place that modify the destination IP address and/or port, so that the connection seen by your code does not match the parameters used by the client.
In your do_GET method, you can access the request headers via the self.headers attribute and the request path via self.path. For example:
def do_GET(self):
    # self.path already starts with '/', so no extra slash is needed
    response = 'http://%s%s' % (self.headers['host'], self.path)
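Putting it together, a runnable sketch against the question's Python 3 server might look like this (it simply echoes the reconstructed URL back to the client):
from http.server import HTTPServer, SimpleHTTPRequestHandler

class MyHandler(SimpleHTTPRequestHandler):
    def do_GET(self):
        # Rebuild the URL from the Host header and the request path.
        response = 'http://%s%s' % (self.headers['host'], self.path)
        self.send_response(200)
        self.send_header('Content-type', 'text/plain;charset=utf-8')
        self.send_header('Content-length', str(len(response)))
        self.end_headers()
        self.wfile.write(response.encode('utf-8'))

if __name__ == '__main__':
    HTTPServer(('localhost', 13555), MyHandler).serve_forever()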
Reference: http://docs.python.org/2/library/basehttpserver.html
I have the following code snippet which uses Python BaseHTTPServer to run a basic HTTP server.
from BaseHTTPServer import HTTPServer
from BaseHTTPServer import BaseHTTPRequestHandler

# http request handler
class HttpHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        print "I have just received an HTTP request through POST"

try:
    server = HTTPServer((<ip>, <port>), HttpHandler)
    # wait forever for incoming http requests!
    server.serve_forever()
except KeyboardInterrupt:
    server.socket.close()
What I am looking for is a way to get a callback whenever the HTTP server is started or stopped via the server.serve_forever() / server.socket.close() methods.
Say we have the following functions:
def http_server_start_callback():
    print "http server has just been started"

def http_server_stop_callback():
    print "http server has just been stopped"
I want http_server_start_callback to be called right after I start the server (i.e. server.serve_forever()), and http_server_stop_callback to be called right after I stop the server (i.e. server.socket.close()).
It would be excellent to configure the http server with the following callbacks:
before starting the server
after starting the server
before stopping the server
after stopping the server
Is there a way to set up these callbacks in Python's BaseHTTPServer.HTTPServer?
It would be excellent to configure the http server with the following callbacks:
before starting the server
after starting the server
before stopping the server
after stopping the server
Bear in mind that the OS will start accepting and queuing TCP connections the moment the socket starts listening, which is done in the constructor of BaseHTTPServer, so if you want to perform lengthy tasks before starting the server, it's probably better to do them before the OS starts accepting connections.
There's a server_activate() method which makes the call to socket.listen(), so it's probably best to override that.
Similarly, the OS will continue to accept connections until the call to socket.close(), so if you want to be able to define a 'pre-stop' handler with the capacity to prevent the shutdown, it's probably better to use the server_close() method, rather than calling socket.close() directly.
I've put together a simple example, using class methods on the request handler to handle the four new events, although you can move them somewhere else...
from BaseHTTPServer import HTTPServer
from BaseHTTPServer import BaseHTTPRequestHandler

# Subclass HTTPServer with some additional callbacks
class CallbackHTTPServer(HTTPServer):
    def server_activate(self):
        self.RequestHandlerClass.pre_start()
        HTTPServer.server_activate(self)
        self.RequestHandlerClass.post_start()

    def server_close(self):
        self.RequestHandlerClass.pre_stop()
        HTTPServer.server_close(self)
        self.RequestHandlerClass.post_stop()

# HTTP request handler
class HttpHandler(BaseHTTPRequestHandler):
    @classmethod
    def pre_start(cls):
        print 'Before calling socket.listen()'

    @classmethod
    def post_start(cls):
        print 'After calling socket.listen()'

    @classmethod
    def pre_stop(cls):
        print 'Before calling socket.close()'

    @classmethod
    def post_stop(cls):
        print 'After calling socket.close()'

    def do_POST(self):
        print "I have just received an HTTP POST request"

def main():
    # Create server
    try:
        print "Creating server"
        server = CallbackHTTPServer(('', 8000), HttpHandler)
    except KeyboardInterrupt:
        print "Server creation aborted"
        return

    # Start serving
    try:
        print "Calling serve_forever()"
        server.serve_forever()
    except KeyboardInterrupt:
        print "Calling server.server_close()"
        server.server_close()

if __name__ == '__main__':
    main()
Note that I've also moved the call to the constructor into its own try...except block, since the server variable won't exist if you hit CTRL-C during its construction.
You have to subclass HTTPServer and use your class instead of HTTPServer:
from __future__ import print_function
from BaseHTTPServer import HTTPServer
from BaseHTTPServer import BaseHTTPRequestHandler

class MyHTTPServer(HTTPServer):
    def __init__(self, *args, **kwargs):
        self.on_before_serve = kwargs.pop('on_before_serve', None)
        HTTPServer.__init__(self, *args, **kwargs)

    def serve_forever(self, poll_interval=0.5):
        if self.on_before_serve:
            self.on_before_serve(self)
        HTTPServer.serve_forever(self, poll_interval)

# http request handler
class HttpHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        print("I have just received an HTTP request through POST")

try:
    server = MyHTTPServer(('0.0.0.0', 8080), HttpHandler,
                          on_before_serve=lambda server: print('Server will start to serve in a moment...'))
    # wait forever for incoming http requests!
    server.serve_forever()
except KeyboardInterrupt:
    server.socket.close()
I am running an HTTP server using the Twisted framework. Is there any way I can "manually" ask it to process some payload? For example, if I've constructed some Ethernet frame, can I ask Twisted's reactor to handle it just as if it had arrived on my network card?
You can do something like this:
from twisted.web import server
from twisted.web.resource import Resource
from twisted.internet import reactor
from twisted.internet.protocol import Protocol, ClientFactory

class SomeWebThing(Resource):
    def render_GET(self, request):
        return "hello\n"

class SomeClient(Protocol):
    def dataReceived(self, data):
        p = self.factory.site.buildProtocol(self.transport.addr)
        p.transport = self.transport
        p.dataReceived(data)

class SomeClientFactory(ClientFactory):
    protocol = SomeClient
    def __init__(self, site):
        self.site = site

if __name__ == '__main__':
    root = Resource()
    root.putChild('thing', SomeWebThing())
    site = server.Site(root)
    reactor.listenTCP(8000, site)
    factory = SomeClientFactory(site)
    reactor.connectTCP('localhost', 9000, factory)
    reactor.run()
and save it as simpleinjecter.py. If you then run the following (from the command line):
echo -e "GET /thing HTTP/1.1\r\n\r\n" | nc -l 9000 # runs a server, ready to send req to first client connection
python simpleinjecter.py
it should work as expected: the request from the nc server on port 9000 gets funneled as the payload into the Twisted web server, and the response comes back to nc.
The key lines are in SomeClient.dataReceived(). You'll need a transport object with the right methods; in the example above, I just steal the object from the client connection. If you aren't going to do that, I imagine you'll have to make one up, as the stack will want to do things like call getPeer() on it.
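If you'd rather not borrow a real connection's transport, Twisted ships a stand-in that should fit: twisted.test.proto_helpers.StringTransport implements write(), getPeer(), getHost() and friends, and records whatever the server writes to it. A sketch along those lines (reusing SomeWebThing from above, Python 2 era Twisted):
from twisted.test.proto_helpers import StringTransport
from twisted.web import server
from twisted.web.resource import Resource

root = Resource()
root.putChild('thing', SomeWebThing())  # the resource defined earlier
site = server.Site(root)

channel = site.buildProtocol(None)   # an HTTP protocol instance, no socket
transport = StringTransport()
channel.makeConnection(transport)
channel.dataReceived("GET /thing HTTP/1.1\r\nHost: localhost\r\n\r\n")
print transport.value()              # the raw HTTP response bytes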
What is the use-case? Perhaps you want to create your own DatagramProtocol:
At the base, the place where you actually implement the protocol parsing and handling is the DatagramProtocol class. This class will usually be descended from twisted.internet.protocol.DatagramProtocol. Most protocol handlers inherit either from this class or from one of its convenience children. The DatagramProtocol class receives datagrams, and can send them out over the network. Received datagrams include the address they were sent from, and when sending datagrams the address to send to must be specified.
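For instance, a minimal UDP echo sketch of that idea (Python 2 era Twisted; the port number is arbitrary):
from twisted.internet import reactor
from twisted.internet.protocol import DatagramProtocol

class Echo(DatagramProtocol):
    def datagramReceived(self, data, addr):
        # Received datagrams include the address they were sent from;
        # sending requires an explicit destination address.
        self.transport.write(data, addr)

reactor.listenUDP(9999, Echo())
reactor.run()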
If you want to see wire-level transmissions rather than inject them, install and run Wireshark, the fantastic, free packet sniffer.