Python HTTP server - can't read cookie

I've made a Python server and I'd like to create, send, and receive cookies. I have a problem with receiving them: when I visit the page in Chrome I can see that the cookie was created. I've read that it should appear in os.environ, but it never does. Here's my code:
import os
import time
import Cookie
import BaseHTTPServer
from multiprocessing import Process
from SocketServer import ThreadingMixIn
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

HOST_NAME = ''    # assumed values; the snippet uses these names below without defining them
PORT_NUMBER = 8666

class MyHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(s):
        # creating cookie
        c = Cookie.SimpleCookie()
        c['api'] = 'token'
        c['api']['expires'] = 3 * 60 * 60
        s.send_response(200)
        # sending cookie
        s.wfile.write(c)
        s.wfile.write('\r\n')
        s.send_header("Access-Control-Allow-Origin", "*")
        s.send_header("Access-Control-Expose-Headers", "Access-Control-Allow-Origin")
        s.send_header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept")
        s.end_headers()
        # reading cookies
        if 'HTTP_COOKIE' in os.environ:
            cookie_string = os.environ.get('HTTP_COOKIE')
            c = Cookie.SimpleCookie()
            c.load(cookie_string)
            try:
                data = c['api'].value
                print "cookie data: " + data
            except KeyError:
                print "The cookie was not set or has expired"
        else:
            print 'The cookie was not set'

class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
    ''

if __name__ == '__main__':
    httpd = ThreadedHTTPServer((HOST_NAME, PORT_NUMBER), MyHandler)
    print time.asctime(), "Server Starts - %s:%s" % (HOST_NAME, PORT_NUMBER)
    try:
        httpd.serve_forever()
    except KeyboardInterrupt:
        pass
    httpd.server_close()
    print time.asctime(), "Server Stops - %s:%s" % (HOST_NAME, PORT_NUMBER)
After I visit my site the cookie is created, but HTTP_COOKIE never appears in os.environ.

For future readers:
Here's how you parse cookies in Python 3:
from http.server import BaseHTTPRequestHandler
from http.cookies import SimpleCookie

class MyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        cookies = SimpleCookie(self.headers.get('Cookie'))
        # then use it somewhat like a dict, e.g.:
        username = cookies['username'].value
To answer OP's question:
The problem is that you are looking for the cookie in the wrong place.
With the following lines, you check whether your operating system's environment variables contain one named HTTP_COOKIE:

    if 'HTTP_COOKIE' in os.environ:
        cookie_string = os.environ.get('HTTP_COOKIE')

But there is no reason why running a Python server would create an operating-system-wide environment variable; HTTP_COOKIE is something a CGI environment provides, not a standalone server.
Instead, you must look inside the BaseHTTPRequestHandler that you are deriving from.
The correct way to access the cookies is the following:

    cookie_string = s.headers.get('Cookie')

which reads the headers sent by the client and gives you the corresponding cookie string.
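Putting it together for the original Python 2 handler, here is a minimal sketch: it reads the cookie from the request headers and sends it back through send_header rather than writing it into the body. The 'api' name and lifetime mirror the question; the 'ok' body is just an illustration.

import Cookie
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

class MyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # read cookies from the request headers, not from os.environ
        c = Cookie.SimpleCookie(self.headers.get('Cookie', ''))
        if 'api' in c:
            print "cookie data: " + c['api'].value
        else:
            print "The cookie was not set"
        # (re)send the cookie as a proper Set-Cookie response header
        out = Cookie.SimpleCookie()
        out['api'] = 'token'
        out['api']['expires'] = 3 * 60 * 60
        self.send_response(200)
        self.send_header('Set-Cookie', out['api'].OutputString())
        self.end_headers()
        self.wfile.write('ok')  # illustrative body

if __name__ == '__main__':
    HTTPServer(('', 8666), MyHandler).serve_forever()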

Related

Web server in Python in plain text

I am looking for a way to expose a text file with a Python web server.
I have some Python code that runs a web server:
import http.server
import socketserver
port = 9500
address = ("", port)
handler = http.server.SimpleHTTPRequestHandler
httpd = socketserver.TCPServer(address, handler)
print(f"Serveur démarré sur le PORT {port}")
httpd.serve_forever()
It's working fine, but I would like to:
Run a web server exposing text/plain content (and not HTML content).
Manually set the work path and the name of the index file (default: index.html).
Keep the Python server code simple and light.
I found some suggestions on the web:

    handler.extensions_map['Content-type'] = 'text/plain'

or

    handler.send_header('Content-Type', 'text/plain')

But neither of these suggestions works.
Could you help me build a simple Python script to do this?
Thanks a lot.
Script for Python 2 using only built-in modules; just put the absolute path of the file you want served in place of <INSERT_FILE>:
#!/usr/bin/python
from SimpleHTTPServer import SimpleHTTPRequestHandler
import BaseHTTPServer
from io import StringIO

class MyHandler(SimpleHTTPRequestHandler):
    def send_head(self):
        # Place here the absolute path of the file
        with open("<INSERT_FILE>", "r") as f:
            body = unicode(f.read())
        self.send_response(200)
        self.send_header("Content-type", "text/plain; charset=UTF-8")
        # Content-Length counts bytes, so measure the UTF-8-encoded body
        self.send_header("Content-Length", str(len(body.encode("utf-8"))))
        self.end_headers()
        # io.StringIO is a text stream: its initial value must be unicode
        return StringIO(body)

if __name__ == "__main__":
    HandlerClass = MyHandler
    ServerClass = BaseHTTPServer.HTTPServer
    Protocol = "HTTP/1.1"
    server_address = ('', 5555)
    HandlerClass.protocol_version = Protocol
    httpd = ServerClass(server_address, HandlerClass)
    print("serving on port 5555")
    httpd.serve_forever()
For Python 3 (the SimpleHTTPServer module has been merged into http.server), again place the absolute path at <INSERT_FILE>:
from http.server import HTTPServer, BaseHTTPRequestHandler

class SimpleHTTPRequestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # declare the body as plain text, as the question asks
        self.send_header('Content-Type', 'text/plain; charset=utf-8')
        self.end_headers()
        # place absolute path here
        with open('<INSERT_FILE>', 'rb') as f_served:
            self.wfile.write(f_served.read())

if __name__ == "__main__":
    httpd = HTTPServer(('localhost', 5555), SimpleHTTPRequestHandler)
    httpd.serve_forever()
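The question also asked to choose the served work path yourself. A minimal sketch of that with the stock SimpleHTTPRequestHandler, assuming Python 3.7+ for its directory parameter ("/srv/files" is a placeholder path, not from the question):

import functools
import http.server
import socketserver

class PlainTextHandler(http.server.SimpleHTTPRequestHandler):
    def guess_type(self, path):
        # skip MIME detection and always declare plain text
        return "text/plain"

# serve files from a chosen work path instead of the current directory
handler = functools.partial(PlainTextHandler, directory="/srv/files")

with socketserver.TCPServer(("", 9500), handler) as httpd:
    httpd.serve_forever()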
I recommend using aiohttp with its low-level server, which is described in the aiohttp documentation.
You can either return plain text, or change the content type of your web.Response to text/html to send data that will be interpreted as HTML.
You can replace the "OK" in text="OK" with whatever plain text you wish, or replace it with the content of your *.html file and change the content_type.
import asyncio
from aiohttp import web

async def handler(request):
    return web.Response(text="OK")

async def main():
    server = web.Server(handler)
    runner = web.ServerRunner(server)
    await runner.setup()
    site = web.TCPSite(runner, 'localhost', 8080)
    await site.start()
    print("======= Serving on http://127.0.0.1:8080/ ======")
    # pause here for a very long time while serving HTTP requests,
    # waiting for a keyboard interruption
    await asyncio.sleep(100 * 3600)

loop = asyncio.get_event_loop()
try:
    loop.run_until_complete(main())
except KeyboardInterrupt:
    pass
loop.close()
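To tie this back to the question, a sketch of a handler that exposes a text file as text/plain (<INSERT_FILE> is the same placeholder used above; reading the whole file on each request assumes it is small):

from aiohttp import web

async def file_handler(request):
    # <INSERT_FILE>: absolute path of the text file to expose
    with open("<INSERT_FILE>", "r", encoding="utf-8") as f:
        return web.Response(text=f.read(), content_type="text/plain")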

Issues in creating web app with SSE + long polling

I am new to web development. I am creating a web app for my home automation project, in which I need bi-directional communication: any theft/security alert from home will be sent from the server to the client, and if the client wants to control the main gate, he'll send a POST request to the server. I am still confused about what to use, SSE or WebSockets. My question is: is it possible to develop an app that uses both SSE and traditional (long-polling) HTTP requests from the client (GET/POST)? I have tested each of them individually and they work fine, but I am unable to make them work together. I am using Python's BaseHTTPServer. Or, at last, do I have to move to WebSocket? Any suggestion will be highly appreciated. My code is here:
import time
import BaseHTTPServer
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler
from SocketServer import ThreadingMixIn
import os
import threading
import socket
from chk_connection import is_connected

HOST_NAME = socket.gethostbyname(socket.gethostname())
PORT_NUMBER = 8040  # Maybe set this to 9000.
ajax_count = 0
ajax_count_str = ""
switch = 0
IP_Update_time = 2
keep_alive = 0
connected = False

my_dir = os.getcwd()

class MyHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_HEAD(s):
        s.send_response(200)
        s.send_header("Content-type", "text/html")
        s.end_headers()

    def do_POST(s):
        """Respond to a POST request."""
        global keep_alive
        s.send_response(200)
        s.send_header('Access-Control-Allow-Origin', '*')
        s.send_header("Content-type", "text/html")
        s.end_headers()
        HTTP_request = s.requestline
        if HTTP_request.find('keep_alive') > -1:
            keep_alive += 1
            keep_alive_str = str(keep_alive)
            s.wfile.write(keep_alive_str)  # answering the ajax keep-alive calls

    def do_GET(s):
        """Respond to a GET request."""
        global my_dir
        s.send_response(200)
        s.send_header('content-type', 'text/html')
        s.end_headers()
        print s.headers
        HTTP_request = s.requestline
        index_1 = HTTP_request.index("GET /")
        index_2 = HTTP_request.index(" HTTP/1.1")
        file_name = HTTP_request[index_1 + 5:index_2]
        if HTTP_request.find('GET / HTTP/1.1') > -1:
            print 'send main'
            file1 = open('Index.html', 'r')
            file_read = file1.read()
            s.wfile.write(file_read)
        elif file_name.find("/") == -1:
            for root, dirs, files in os.walk(my_dir):
                for file in files:
                    if HTTP_request.find(file) > -1:
                        file_path = os.path.join(root, file)
                        file1 = open(file_path, 'r')
                        file_read = file1.read()
                        s.wfile.write(file_read)
                        print 'send', file
        else:
            slash_indexes = [n for n in xrange(len(file_name)) if file_name.find('/', n) == n]
            slash = slash_indexes[-1]
            file_path = file_name[0:slash]
            root_dir = my_dir + '/' + file_path + '/'
            for root, dirs, files in os.walk(root_dir):
                for file in files:
                    if HTTP_request.find(file) > -1:
                        image_path = os.path.join(root, file)
                        image = open(image_path, 'r')
                        image_read = image.read()
                        s.wfile.write(image_read)
                        print 'send', file

class MyHandler_SSE(BaseHTTPRequestHandler):
    def do_GET(self):
        print 'this is SSE'
        self.send_response(200)
        self.send_header('content-type', 'text/event-stream')
        self.end_headers()
        while True:
            print 'SSE sent'
            self.wfile.write('event: message\nid: 1\ndata: {0}\ndata:\n\n'.format(time.time()))
            time.sleep(2)

class chk_connection(threading.Thread):
    """This thread checks whether an internet connection is available."""
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        global connected
        while 1:
            # is_connected() (imported from chk_connection) resolves a
            # well-known host such as www.google.com and opens a TCP
            # connection to port 80; it returns True when that succeeds.
            connected = is_connected()
            time.sleep(1)

class server_main(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        server_class = BaseHTTPServer.HTTPServer
        HOST_NAME = socket.gethostbyname(socket.gethostname())
        last_HOST_NAME = HOST_NAME
        httpd = server_class((HOST_NAME, PORT_NUMBER), MyHandler)
        print time.asctime(), "Server Starts - %s:%s" % (HOST_NAME, PORT_NUMBER)
        while 1:
            while is_connected():
                httpd._handle_request_noblock()
            time.sleep(1)
            HOST_NAME = socket.gethostbyname(socket.gethostname())
            if HOST_NAME != last_HOST_NAME:
                print 'Serving at new host:', HOST_NAME
                httpd = server_class((HOST_NAME, PORT_NUMBER), MyHandler)

def start():
    tx_socket_thread3 = chk_connection()  # checks whether an internet connection is available
    tx_socket_thread3.start()
    tx_socket_thread5 = server_main()
    tx_socket_thread5.start()
    print 's1:', tx_socket_thread5.is_alive()

if __name__ == '__main__':
    start()
I might need to restructure the code, but I don't know how. What I want is: if any event happens at the server side, it pushes data to the client, and meanwhile it also responds to GET and POST requests from the client. Help please...
It is definitely possible to develop a web application which uses a mixture of normal HTTP traffic, server-sent events, and WebSockets. However, the web server classes in the Python standard library are not designed for this purpose, though one can probably make them work with enough hammering. You should install a proper web server and use its facilities.
Examples include:
uWSGI and server-sent events with WSGI applications
Tornado and WebSockets
Furthermore:
Installing Python packages
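For instance, here is a minimal Tornado sketch that serves ordinary GET/POST traffic and a WebSocket endpoint from the same process (the handler names and URL paths are illustrative, not from the question):

import tornado.ioloop
import tornado.web
import tornado.websocket

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("main page")  # ordinary HTTP response

    def post(self):
        self.write("gate command received")

class AlertSocket(tornado.websocket.WebSocketHandler):
    def open(self):
        self.write_message("connected")  # push security alerts over this socket

    def on_message(self, message):
        self.write_message("echo: " + message)

application = tornado.web.Application([
    (r"/", MainHandler),
    (r"/alerts", AlertSocket),
])

if __name__ == "__main__":
    application.listen(8040)
    tornado.ioloop.IOLoop.current().start()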

Python Proxy with Twisted

Hello! I have this code:
from twisted.web import proxy, http
from twisted.internet import reactor

class akaProxy(proxy.Proxy):
    """
    Local proxy = bridge between browser and web application
    """
    def dataReceived(self, data):
        print "Received data..."
        headers = data.split("\n")
        request = headers[0].split(" ")
        method = request[0].lower()
        action = request[1]
        print action
        print "ended content manipulation"
        return proxy.Proxy.dataReceived(self, data)

class ProxyFactory(http.HTTPFactory):
    protocol = akaProxy

def intercept(port):
    print "Intercept"
    try:
        factory = ProxyFactory()
        reactor.listenTCP(port, factory)
        reactor.run()
    except Exception as excp:
        print str(excp)

intercept(1337)
I use the above code to intercept everything between the browser and the web site. When using it, I configure my browser settings to IP: 127.0.0.1 and Port: 1337. I put this script on a remote server so that the remote server acts as the proxy. But when I change the browser's proxy IP settings to my server's, it does not work. What am I doing wrong? What else do I need to configure?
Presumably your dataReceived is raising an exception during its attempts to parse the data passed to it. Try enabling logging so you can see more of what's going on:
from twisted.python.log import startLogging
from sys import stdout
startLogging(stdout)
The reason your parser is likely to raise exceptions is that dataReceived is not called only with a complete request. It is called with whatever bytes are read from the TCP connection. This may be a complete request, a partial request, or even two requests (if pipelining is in use).
dataReceived in the Proxy context handles the "translation of rawData into lines", so it may be too early for your manipulation code. You can override allContentReceived instead, where you will have access to the complete headers and content. Here is an example that I believe does what you are after:
#!/usr/bin/env python
from twisted.web import proxy, http

class SnifferProxy(proxy.Proxy):
    """
    Local proxy = bridge between browser and web application
    """
    def allContentReceived(self):
        print "Received data..."
        print "method = %s" % self._command
        print "action = %s" % self._path
        print "ended content manipulation\n\n"
        return proxy.Proxy.allContentReceived(self)

class ProxyFactory(http.HTTPFactory):
    protocol = SnifferProxy

if __name__ == "__main__":
    from twisted.internet import reactor
    reactor.listenTCP(8080, ProxyFactory())
    reactor.run()

Python BaseHTTPServer not serving requests properly

I'm trying to write a simple local proxy for JavaScript: since I need to load some stuff from JavaScript within a web page, I wrote this simple daemon in Python:
import string, cgi, time
from os import curdir, sep
import urllib
import urllib2
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

class MyHandler(BaseHTTPRequestHandler):
    def fetchurl(self, url, post, useragent, cookies):
        headers = {"User-Agent": useragent, "Cookie": cookies}
        url = urllib.quote_plus(url, ":/?.&-=")
        if post:
            req = urllib2.Request(url, post, headers)
        else:
            req = urllib2.Request(url, None, headers)
        try:
            response = urllib2.urlopen(req)
        except urllib2.HTTPError, e:
            # HTTPError is a subclass of URLError, so it must be caught first
            print "HTTPERROR: " + str(e)
            return False
        except urllib2.URLError, e:
            print "URLERROR: " + str(e)
            return False
        else:
            return response.read()

    def do_GET(self):
        if self.path != "/":
            [callback, url, post, useragent, cookies] = self.path[1:].split("%7C")
            print "callback = " + callback
            print "url = " + url
            print "post = " + post
            print "useragent = " + useragent
            print "cookies = " + cookies
            if useragent == "":
                useragent = "pyjproxy v. 1.0"
            load = self.fetchurl(url, post, useragent, cookies)
            if load:
                # escape the payload only after the fetch is known to have succeeded
                pack = load.replace("\\", "\\\\").replace("\"", "\\\"").replace("\n", "\\n").replace("\r", "\\r").replace("\t", "\\t").replace(" </script>", "</scr\"+\"ipt>")
                response = callback + "(\"" + pack + "\");"
                self.send_response(200)
                self.send_header('Content-type', 'text/javascript')
                self.end_headers()
                self.wfile.write(response)
                self.wfile.close()
            else:
                self.send_error(404, 'File Not Found: %s' % self.path)
            return
        else:
            embedscript = "function pyjload(datadict){ if(!datadict[\"url\"] || !datadict[\"callback\"]){return false;} if(!datadict[\"post\"]) datadict[\"post\"]=\"\"; if(!datadict[\"useragent\"]) datadict[\"useragent\"]=\"\"; if(!datadict[\"cookies\"]) datadict[\"cookies\"]=\"\"; var oHead = document.getElementsByTagName('head').item(0); var oScript= document.createElement(\"script\"); oScript.type = \"text/javascript\"; oScript.src=\"http://localhost:1180/\"+datadict[\"callback\"]+\"%7C\"+datadict[\"url\"]+\"%7C\"+datadict[\"post\"]+\"%7C\"+datadict[\"useragent\"]+\"%7C\"+datadict[\"cookies\"]; oHead.appendChild( oScript);}"
            self.send_response(200)
            self.send_header("Content-type", "text/html")
            self.end_headers()
            self.wfile.write(embedscript)
            self.wfile.close()
            return

def main():
    try:
        server = HTTPServer(('127.0.0.1', 1180), MyHandler)
        print 'started httpserver...'
        server.serve_forever()
    except KeyboardInterrupt:
        print '^C received, shutting down server'
        server.socket.close()

if __name__ == '__main__':
    main()
And I use it within a web page like this one:
<!DOCTYPE HTML>
<html><head>
<script>
function miocallback(htmlsource)
{
alert(htmlsource);
}
</script>
<script type="text/javascript" src="http://localhost:1180"></script>
</head><body>
<a onclick="pyjload({'url':'http://www.google.it','callback':'miocallback'});"> Take the Red Pill</a>
</body></html>
Now, on Firefox and Chrome it looks like it always works. On Opera and Internet Explorer, however, I noticed that sometimes it doesn't work, or it hangs for a long time... What's up, I wonder? Did I do something wrong?
Thanks for any help!
Matteo
You have to understand that (modern) browsers try to optimize their browsing speed using different techniques, which is why you get different results on different browsers.
In your case, the technique that caused you trouble is concurrent HTTP/1.1 session setup: in order to utilize your bandwidth better, your browser is able to open several HTTP/1.1 sessions at the same time. This allows it to retrieve multiple resources (e.g. images) simultaneously.
However, BaseHTTPServer is not threaded: as soon as your browser tries to open another connection, it fails, because BaseHTTPServer is already blocked by the first session that is still open. The request never reaches the server and runs into a timeout. This also means that only one user can access your service at a given time. Inconvenient? Aye, but help is here:
Threads! ...and Python makes this one rather easy: derive a new class from HTTPServer using the ThreadingMixIn from SocketServer.
Example:
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler
from SocketServer import ThreadingMixIn

class Handler(BaseHTTPRequestHandler):
    def do_HEAD(self):
        pass

    def do_GET(self):
        pass

class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
    """This class handles requests in separate threads.
    No further content needed; don't touch this."""

if __name__ == '__main__':
    server = ThreadedHTTPServer(('localhost', 80), Handler)
    print 'Starting server on port 80...'
    server.serve_forever()
From now on, BaseHTTPServer is threaded and ready to serve multiple connections (and therefore requests) at the same time, which will solve your problem.
Instead of the ThreadingMixIn, you can also use the ForkingMixIn in order to spawn another process instead of another thread.
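A sketch of the forking variant (note that ForkingMixIn is only available on Unix-like systems):

from BaseHTTPServer import HTTPServer
from SocketServer import ForkingMixIn

class ForkedHTTPServer(ForkingMixIn, HTTPServer):
    """Handle each request in a separate child process instead of a thread."""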
all the best,
creo
Note that Python's BaseHTTPServer is a very basic HTTP server, far from perfect, but that's not your first issue.
What happens if you put the two scripts at the end of the document, just before the </body> tag? Does that help?

Determine site domain in BaseHTTPServer

I'm trying to implement a simple server in Python based on HTTPServer.
How can I extract information about the site domain served in the current request?
I mean, it can serve several domains, such as site1.com and site2.com for example. How can I get that in this code:
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

class MyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        print "get"
        self.send_response(200)
        self.send_header("Content-type", "text/html")
        self.end_headers()
        # how can I get the host name of the served site here?
        # site1.com or site2.com?
        domain = ???
        self.wfile.write('<html>Welcome on www.%s.com</html>' % (domain))

if __name__ == "__main__":
    try:
        server = HTTPServer(("", 8070), MyHandler)
        print "started httpserver..."
        server.serve_forever()
    except KeyboardInterrupt:
        print "^C received, shutting down server"
        server.socket.close()
I guess you should be able to read the Host header.
The headers can be accessed through BaseHTTPRequestHandler.headers.
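A minimal sketch in the question's own handler (Host is a standard HTTP/1.1 request header; splitting off the port is an assumption about what you want to display):

from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

class MyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-type", "text/html")
        self.end_headers()
        # the Host header carries the domain the client requested, e.g. "site1.com:8070"
        host = self.headers.get('Host', '')
        domain = host.split(':')[0]  # drop the port, if present
        self.wfile.write('<html>Welcome on www.%s.com</html>' % domain)

if __name__ == "__main__":
    HTTPServer(("", 8070), MyHandler).serve_forever()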
