Handle http protocol using socketserver
I'm structuring a web server (...), aiming to interpret the Python language (just as Apache does with PHP). I establish the client-server connection at the transport layer through the TCP protocol, using Python's socketserver module. Now I need to capture the request header data and send an HTTP response to the client (browser). How could I do that? Below are the code and the result of processing:
File : httptcpipv4.py
# Module Name : socketserver -> https://docs.python.org/3/library/socketserver.html#module-socketserver
from socketserver import BaseRequestHandler, ThreadingTCPServer
# Transport Layer Protocol : TCP
# This is the superclass of all request handler objects. It defines the interface, given below.
class HTTPTCPIPv4(BaseRequestHandler):
    def handle(self):
        # self.request is the TCP socket connected to the client
        self.data = self.request.recv(1024).decode()
        print(self.data)

if __name__ == "__main__":
    HOST, PORT = "", 8080
    # Create the asynchronous server, binding to all interfaces on port 8080
    with ThreadingTCPServer((HOST, PORT), HTTPTCPIPv4) as server:
        print("Server : Action v0.0.1, running address http://127.0.0.1:8080,")
        print("cancel program with Ctrl-C")
        server.serve_forever()
Output :
GET / HTTP/1.1
Host: 127.0.0.1:8080
Connection: keep-alive
sec-ch-ua: " Not;A Brand";v="99", "Google Chrome";v="91", "Chromium";v="91"
sec-ch-ua-mobile: ?0
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.106 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Purpose: prefetch
Sec-Fetch-Site: none
Sec-Fetch-Mode: navigate
Sec-Fetch-User: ?1
Sec-Fetch-Dest: document
Accept-Encoding: gzip, deflate, br
Accept-Language: pt-BR,pt;q=0.9,en-US;q=0.8,en;q=0.7
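To answer the question: after reading the request, build a response string consisting of a status line, headers, a blank line, and the body, and write it back on the same socket. A minimal sketch extending the handler from the question (the greeting body and Content-Type are illustrative choices, not part of the original code):

```python
from socketserver import BaseRequestHandler, ThreadingTCPServer

class HTTPTCPIPv4(BaseRequestHandler):
    def handle(self):
        # Read the raw request and pull out the request line,
        # e.g. "GET / HTTP/1.1" -> method, path, version.
        data = self.request.recv(1024).decode()
        request_line = data.splitlines()[0]
        method, path, version = request_line.split(" ", 2)

        body = "<html><body><h1>You requested {}</h1></body></html>".format(path)
        response = (
            "HTTP/1.1 200 OK\r\n"
            "Content-Type: text/html; charset=utf-8\r\n"
            "Content-Length: {}\r\n"
            "Connection: close\r\n"
            "\r\n"
            "{}"
        ).format(len(body.encode()), body)
        # An HTTP response is just bytes written back on the same TCP socket.
        self.request.sendall(response.encode())
```

The server is then started exactly as in the question, with ThreadingTCPServer((HOST, PORT), HTTPTCPIPv4) and serve_forever().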

Related

HTTPS - How to actually decrypt client request

I am using the socket library to handle http requests, waiting on port 80 for connections (which does not really matter right now). This works fine, as all requests follow the format below:
b"""GET / HTTP/1.1
Host: localhost:8000
Connection: keep-alive
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.135 Safari/537.36 OPR/70.0.3728.189
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Sec-Fetch-Site: none
Sec-Fetch-Mode: navigate
Sec-Fetch-User: ?1
Sec-Fetch-Dest: document
Accept-Encoding: gzip, deflate, br
Accept-Language: el-GR,el;q=0.9"""
If you open port 443, or just use https in any browser, the data arrives encrypted when a request is made. But how can you actually decrypt the data and interact with the client? I've seen many posts about this, but none of them explains how the data can actually be decrypted. The received data always looks something like this, starting the same way with the 0x16 and 0x03 bytes:
b'\x16\x03\x01\x02\x00\x01\x00\x01\xfc\x03\x03\xfb\'\xa3\xa5\xa4\x1cf\xd1w~(L\xb5%0,\xfb\xa57\xf4\x92\x03}\x84xCIA\xd9}]2 \x15ID\xafU\xb6\xe3\x9d\xbdr\x93 L\x98\rD\xca\xa7\x11\x89\x00`Q\xf5\th\xde\x85S\xf8Q\x98\x00"jj\x13\x03\x13\x01\x13\x02\xcc\xa9\xcc\xa8\xc0+\xc0/\xc0,\xc00\xc0\x13\xc0\x14\x00\x9c\x00\x9d\x00/\x005\x00\n\x01\x00\x01\x91ZZ\x00\x00\x00\x00\x00\x0e\x00\x0c\x00\x00\tlocalhost\x00\x17\x00\x00\xff\x01\x00\x01\x00\x00\n\x00\n\x00\x08\x9a\x9a\x00\x1d\x00\x17\x00\x18\x00\x0b\x00\x02\x01\x00\x00#\x00\x00\x00\x10\x00\x0e\x00\x0c\x02h2\x08http/1.1\x00\x05\x00\x05\x01\x00\x00\x00\x00\x00\r\x00\x14\x00\x12\x04\x03\x08\x04\x04\x01\x05\x03\x08\x05\x05\x01\x08\x06\x06\x01\x02\x01\x00\x12\x00\x00\x003\x00+\x00)\x9a\x9a\x00\x01\x00\x00\x1d\x00 \xa5\x81S\xec\xf4I_\x08\xd2\n\xa6\xb5\xf6E\x9dE\xe6ha\xe7\xfdy\xdab=\xf4\xd3\x1b`V\x94F\x00-\x00\x02\x01\x01\x00+\x00\x0b\nZZ\x03\x04\x03\x03\x03\x02\x03\x01\x00\x1b\x00\x03\x02\x00\x02\xea\xea\x00\x01\x00\x00\x15\x00\xcf\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
My question is: how can I bring the HTTPS data into a form like the one above? I've read about some specific handshake procedures, but I could not find anything that answers by telling exactly what to do. Of course, I am only asking for development purposes.
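You don't decrypt TLS by hand; you let a TLS implementation perform the handshake and hand you plaintext. With the standard library, that means wrapping the listening socket with the ssl module and a certificate. A hedged sketch (the cert.pem/key.pem file names are assumptions; for development you can generate a self-signed pair, e.g. with openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -nodes):

```python
import socket
import ssl

def serve_https(certfile="cert.pem", keyfile="key.pem", port=443):
    # A server-side TLS context loaded with the certificate and private key.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile=certfile, keyfile=keyfile)
    with socket.create_server(("localhost", port)) as server:
        # Wrapping the listening socket makes every accepted
        # connection a TLS connection.
        with context.wrap_socket(server, server_side=True) as tls:
            conn, addr = tls.accept()   # TLS handshake happens here
            data = conn.recv(1024)      # already-decrypted plaintext bytes
            print(data)                 # e.g. b"GET / HTTP/1.1\r\n..."
            conn.sendall(b"HTTP/1.1 200 OK\r\n\r\nok")
            conn.close()
```

After the handshake, recv() and sendall() on the wrapped connection work on plaintext, so the request looks exactly like the HTTP bytes shown above.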

Obtain arguments of url listening to get requests

I have a server in python that listens to GET requests:
import socket

host = '127.0.0.1'  # listen on localhost
port = 8001
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind((host, port))
sock.listen(5)  # allow a backlog of up to 5 queued connections
while True:
    csock, caddr = sock.accept()
    print "Connection from: " + repr(caddr)
    req = csock.recv(1024)
    print req
And I get the following request:
Connection from: ('127.0.0.1', 42311)
GET /?categories[]=100&categories[]=200 HTTP/1.1
Host: localhost:8001
Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.90 Safari/537.36
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8
The requests have the form http://localhost:8000/?categories[]=100&categories[]=200 and I want to get the categories that were passed.
Should I write a regular expression to parse req, or can I get the 'categories' parameters as an attribute of req?
It depends on how you intend to use these requests. If you want to respond with HTML pages (or the like) depending on the categories, you should take a look at frameworks like Flask. If you just want to parse the headers, take a look at this thread on how to parse HTTP headers.
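No regular expression is needed for the query string itself: the standard library's urllib.parse can split it. A minimal sketch using the request line from the question:

```python
from urllib.parse import urlparse, parse_qs

# First line of the raw request, as received over the socket.
request_line = "GET /?categories[]=100&categories[]=200 HTTP/1.1"
path = request_line.split()[1]            # "/?categories[]=100&categories[]=200"

# parse_qs collects repeated keys into a list automatically.
params = parse_qs(urlparse(path).query)
print(params["categories[]"])             # ['100', '200']
```

Note that parse_qs keeps the literal key "categories[]"; the bracket suffix is a client-side convention, not something HTTP strips for you.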

scapy can't send large length packet

I use scapy:
from scapy.all import *

send_L2sock = conf.L2socket(iface="eth0")
for send_packet in rdpcap("test.pcap"):
    send_L2sock.send(send_packet)
This code works well; I confirmed the packets were sent using Wireshark.
But scapy can't send a large packet like the one below:
GET /exp?q=jfLB9FPyAHp7oMHe.tCKVPX3wCHhZKVmBKySJoHYgtE-csHql43fyskKdzf6F7Uef-GSJ1OdDrtmUR1GomLA9yccoByMeJX9WJDnvloeMyhLKom2reGiWn9XH82ZojR9qGQwisjIDXctEmCE.SQS8-aAAYUE3kHxfq4AE3myf_Q78VsHxziE3_A7WIP3az39NxFxFePBvnPAlhte.H5WG.miDthyWva9z9NBbmCqndkckeA6evdYY7qck4Vjski53WgmuRfy.jg4nApd27WMFrkeSdA4XUvlfiqntyr5wd8AaRiv6woeUx.PwdIeZh_LYonhy7RTlJs549LJLc7M1-LNR3dQUcg_TFX_Mk6jQsaoo19lX6pZYLwpEWB3MASRcF3okwRtou-ILr8ZjF72R-pmgUcaqtQb5eYqDb3egHFSwViZ7XdGUs4CWdta-ludhSveDyXsbQjLKACC9cf2C-T99.3ccMbrLBkhXDYbIZoFdPscVdLr_oAmZMN3e7694BKoj.qEozYY5F4_ccxWNMcNcuZKBnNPgsf1MtMZWqOJnuy1s6lmFLNjjLfD_A8UmmI0&r=jfPj7L6MnFAT7vOaw3JYhRf.rirghOXccEeIuteMs85_MyLRv8cOqw7gFE8Y6Jb4fkx5eGYu4bG68IF-vzypY2uOsgjdLEahuNWYVjNKjKOIZDVs4FVTpimmM87xRLRi1a865pAp8t-ldEr_asu8uHQ1uQne5XopLjeSfJF97c39kv.mhEO7.yYdwkID4I8SRys4VcynL7xoWbSj-vCdINM21.kR_nUkhvXouJc9OFaiY3g279al418MiJGw.yMsLX6xAPwVU1exUw00&r=jfRZzkhG5VOeh3LPN46zMNYgFXkHxwhQMnhwsPO3Prf6eh6SBXxW9NgCee5Zqa_fyHWFiUDkSUmy.ZbinGnKNcAej_syDrT75pcueqUDXNBv.PWpJvo54egnrogm4LueZBdy2PaFhct2SDQa1OM9ZPaR.tcap4MndLySTVKBixJ93xSEqt2LlUup8GJtA6p5sEiyAzcf9VHD8Hl5tTYrgt1apSKFu_aB-MWxUMVWZsyPiB2HLgw8_83yb1oHibV_Xh.W3fQ7xlPRYg00&r=jfXxMnW3iZvmNlsj3_c3Bo4udASWLu_ADTFFSfJDJPtAUdMSnQCYMzC1EgkEuuWUuAs3YznlX-sU9-844Eme3ZYmj_syDrT75perH6Zbv1VsXtPvMEUXfVKGjO9Dj9a84n5B7gzL_A7IFsfjw4SlxKKgqb8iletk.zT2OvEN3r8e9Jh_flIGdMMzfzRs5.UlTikAyoNZpdcLFb79H4EymcBcEV4-14zGv3.sphU9YMl6qkjbPClgEKXn4tl29r8M9ibJ8EqE9ANotQ00&r=jfaqsn3IQmXRl4K3pCMOMN-EBhNNsPARWkN1i2xgY7nfFDnFqIzEUStIOrhPfnTDXrlqBZyf615PZ_5_ohc1kmEtd7Ra2eGnxGCk_EgxhCmuO-Vsm-A3xKIY_TcePv1AIjdZdKHR1Ych4Eic8vkC_HhP1TllHuEPd4L.tio8mPniMAAGZtGVL4qFxLASiWnS-Fe3n7-_fPmDUn-_QYgE7J5m&r=jfe4YnKJRYFENp4qDSa_92n98_dN-lTyYFj14RP-xSiWLPBXFbc3B77ODz-tx-8fNU32mbcETQjjTIMM.2G.CWoAv7ciNl9.J-ghO5P2d5A9WL5SvcHXKiO4miZCy2IXV-f9DO7XmdTA8WgjROw-u1.EgxcWiSQ2yYq-BN_oaoIdYbIW2OP1KwqWdAe5gX7VJ8aFgfuTnG8Z5cvdKzoad28GvsmMKz1CscwrFdGkr3q_HPG15c7XxwHeNwnbtLXYNWk0& HTTP/1.1
Accept: image/png, image/svg+xml, image/*;q=0.8, */*;q=0.5
Referer: http://vegadisk.com/contents/view.php?from_cate=IMG&idx=8475504
Accept-Language: ko-KR
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko
Accept-Encoding: gzip, deflate
Host: keyword.daumdn.com
Connection: Keep-Alive
Cookie: pref=id%3DEaG8S7X0QwiOHm715GfXtA%7Cts%3D1404084124009%7Cip%3D118.131.203.130%7Ccs%3D5729a0b58afd539b0b59f6db83533376; ae=mOqeh6nOaMrRP-hXTygYC-VNriitJP_bT-KAdhzbcnB4X2H4s8azRAJZ
The packet length is 2475 bytes, and it consists of only one packet.
Why can't scapy send a large packet?

Python Socket Server sends file request 3 times instead of once?

I'm making a simple Python web server using sockets to start understanding how they work, but I think I'm lost on this one. My Python server is supposed to access a basic html file in the same directory and display it, once for every time it's requested. But for some reason this code serves the request 3 to 5 times...
from socket import *

server = socket(AF_INET, SOCK_STREAM)
port = 12030
server.bind((gethostname(), port))
server.listen(1)
while True:
    print 'Ready to serve'
    conection, addr = server.accept()
    try:
        print 'Working'
        message = conection.recv(1024)
        filename = message.split()[1]  # cuts off the '/' in the requested page
        f = open(filename[1:])
        print message
        outputdata = f.read()
        print outputdata
        conection.send('HTTP/1.1 200 OK\r\n')
        for i in range(0, len(outputdata)):
            conection.send(outputdata[i])
        conection.close()
    except IOError:
        print 'IO ERROR'
        print message
        print outputdata
        conection.close()
    except KeyboardInterrupt:
        server.close()
        conection.close()
        break
This is the output from me opening the page in the browser.
python project.py
Ready to serve
Working
<html><body><h1>Wurld</body></html>
Ready to serve
Working
IO ERROR
<html><body><h1>Wurld</body></html>
Ready to serve
Working
IO ERROR
<html><body><h1>Wurld</body></html>
Ready to serve
I've tried adding a server.listen(1) and a conection.send("Content-Type:text/html\r\n"), but neither of these does anything.
I'm not sure what the problem could be, other than something limiting how many requests can be made per minute?
Updated to print message every time:
python project.py
Ready to serve
Working
GET /HelloWorld.html HTTP/1.1
Host: seppala:12030
Connection: keep-alive
Cache-Control: max-age=0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.120 Safari/537.36
DNT: 1
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
<html><body><h1>Wurld</body></html>
Ready to serve
Working
IO ERROR
GET /favicon.ico HTTP/1.1
Host: seppala:12030
Connection: keep-alive
Accept: */*
DNT: 1
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.120 Safari/537.36
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
<html><body><h1>Wurld</body></html>
Ready to serve
Working
IO ERROR
GET /favicon.ico HTTP/1.1
Host: seppala:12030
Connection: keep-alive
Accept: */*
DNT: 1
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.120 Safari/537.36
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
<html><body><h1>Wurld</body></html>
Ready to serve
It seems your browser is requesting favicon.ico . Try adding a favicon.ico to your document root, or perhaps try a different browser. This problem isn't because of your script.
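To keep the extra favicon.ico requests from crashing into the IOError branch, another option is to answer missing files with a 404. A hedged sketch of a small response builder (Python 3 syntax; the function name is illustrative). Note it also sends the blank line between headers and body that the original code omits:

```python
import os

def build_response(path):
    # Strip the leading '/' and serve the file if it exists;
    # otherwise answer 404 so requests like /favicon.ico fail cleanly.
    filename = path.lstrip("/")
    if os.path.isfile(filename):
        with open(filename, "rb") as f:
            body = f.read()
        # Status line, end of headers (blank line), then the body.
        return b"HTTP/1.1 200 OK\r\n\r\n" + body
    return b"HTTP/1.1 404 Not Found\r\n\r\n"
```

In the loop above, this replaces the open()/send() sequence: parse the path from the request line, then conection.send(build_response(path)).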

Apparently fine http request results malformed when sent over socket

I'm working with socket operations and have coded a basic interception proxy in python. It works fine, but some hosts return 400 bad request responses.
These requests do not look malformed though. Here's one:
GET http://www.baltour.it/ HTTP/1.1
Host: www.baltour.it
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:28.0) Gecko/20100101 Firefox/28.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
Same request, raw:
GET http://www.baltour.it/ HTTP/1.1\r\nHost: www.baltour.it\r\nUser-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:28.0) Gecko/20100101 Firefox/28.0\r\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\nAccept-Language: en-US,en;q=0.5\r\nAccept-Encoding: gzip, deflate\r\nConnection: keep-alive\r\n\r\n
The code I use to send the request is the most basic socket operation (though I don't think the problem lies there, it works fine with most hosts)
socket_client.send(request_raw)
while socket_client.recv is used to get the response (but no problems here, the response is well-formed, though its status is 400).
Any ideas?
When not talking to a proxy, you are not supposed to put the http://hostname part in the HTTP request line; see section 5.1.2 of the HTTP/1.1 spec, RFC 2616:
The most common form of Request-URI is that used to identify a resource on an origin server or gateway. In this case the absolute path of the URI MUST be transmitted (see section 3.2.1, abs_path) as the Request-URI, and the network location of the URI (authority) MUST be transmitted in a Host header field.
(emphasis mine); abs_path is the absolute path part of the request URI, not the full absolute URI itself.
E.g. the server expects you to send:
GET / HTTP/1.1
Host: www.baltour.it
A receiving server should be tolerant of the incorrect behaviour, however, so the server seems to violate the RFC here as well. Further on in the same section it reads:
To allow for transition to absoluteURIs in all requests in future versions of HTTP, all HTTP/1.1 servers MUST accept the absoluteURI form in requests, even though HTTP/1.1 clients will only generate them in requests to proxies.
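In proxy terms: before forwarding to the origin server, the absolute URI in the request line can be rewritten to origin form. A hedged sketch of such a rewrite (the function name is illustrative):

```python
from urllib.parse import urlsplit

def to_origin_form(request_raw: bytes) -> bytes:
    # Turn "GET http://host/path HTTP/1.1" into "GET /path HTTP/1.1",
    # leaving the Host header and the rest of the request untouched.
    request_line, sep, rest = request_raw.partition(b"\r\n")
    method, uri, version = request_line.split(b" ", 2)
    parts = urlsplit(uri.decode("ascii"))
    path = parts.path or "/"
    if parts.query:
        path += "?" + parts.query
    return method + b" " + path.encode("ascii") + b" " + version + sep + rest
```

Applying this to the request from the question yields exactly the origin-form request the server expects, with the host carried only in the Host header.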
