I'm trying to do some network analysis and am having difficulty distinguishing between HTTP/1.1 and HTTP/2 traffic.
For starters, HTTP/1.1 is pretty straightforward: it runs over TCP port 80, and since it's unencrypted I can read the version straight out of the HTTP header. For HTTP/3 I can check whether the traffic is UDP port 443 and whether the payload is QUIC.
The problem arises when I try to distinguish between HTTP/1.1 and HTTP/2 on TCP port 443 (HTTPS): since the traffic is encrypted, I cannot check which version is in use...
EDIT: For TLS 1.1 we can assume it's HTTP/1.1, since HTTP/2 requires TLS 1.2 or higher. For TLS 1.2 we can check the ALPN extension in the Server Hello response, but for TLS 1.3 I don't see any option, since the server's ALPN selection moves into the encrypted EncryptedExtensions message and no longer appears in the TLS 1.3 Server Hello...
Any ideas on how to figure out the HTTPS version using Scapy, especially for TLS1.3?
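One partial workaround: the ALPN values the client offers in its Client Hello are still cleartext, even under TLS 1.3, so you can at least see whether h2 was offered, though not which protocol the server actually selected. A rough sketch with Scapy's TLS layer:

    # Sketch: print the ALPN protocols offered in cleartext Client Hellos.
    # This shows what the client offered (e.g. h2, http/1.1), not what the
    # server selected, which stays encrypted in TLS 1.3.
    from scapy.all import load_layer, sniff
    load_layer("tls")  # enables TLS dissection on TCP port 443
    from scapy.layers.tls.handshake import TLSClientHello
    from scapy.layers.tls.extensions import TLS_Ext_ALPN

    def show_alpn(pkt):
        if pkt.haslayer(TLSClientHello):
            for ext in pkt[TLSClientHello].ext or []:
                if isinstance(ext, TLS_Ext_ALPN):
                    offered = [p.protocol for p in ext.protocols]
                    print(pkt.summary(), "ALPN offered:", offered)

    sniff(filter="tcp port 443", prn=show_alpn, store=False)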
How would I use Scapy 2.4.3+ to spoof an HTTP response? The target machine has been ARP-spoofed, and the spoofer's machine does not have port forwarding enabled, so the HTTP request has to be answered by the spoofed packet; that is what I am trying to do. I tried using the packet below, however the HTML it carries does not get rendered on the target machine once it's sent using scapy.send(packet). So how would I adapt the packet below to send an HTTP packet that would render on the target machine?
packet = scapy.IP(src=server_ip, dst=target_ip) / scapy.TCP() / HTTP() / HTTPResponse(Server=server_ip) / "<html><p>Hi</p></html>"
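My current guess is that the bare scapy.TCP() is the problem and that the TCP layer needs real values taken from the victim's request. Something like this sketch, where req would be the sniffed HTTP request I'm replying to (untested):

    # Sketch: forge a response whose TCP fields line up with the victim's
    # request `req`: ports swapped, seq/ack acknowledging the request's
    # payload, and PSH/ACK flags set.
    from scapy.all import IP, TCP, send
    from scapy.layers.http import HTTP, HTTPResponse

    payload = b"<html><p>Hi</p></html>"
    resp = (
        IP(src=req[IP].dst, dst=req[IP].src)
        / TCP(
            sport=req[TCP].dport,
            dport=req[TCP].sport,
            seq=req[TCP].ack,
            ack=req[TCP].seq + len(req[TCP].payload),
            flags="PA",
        )
        / HTTP()
        / HTTPResponse(
            Status_Code=b"200",
            Reason_Phrase=b"OK",
            Content_Length=str(len(payload)).encode(),
        )
        / payload
    )
    send(resp)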
I am using Charles Proxy right now to monitor traffic between my devices and a website. The traffic is SSL and I am able to read it in Charles. The issue is that Charles makes the content hard to read when I am filtering through hundreds of variables in a JSON object. I created a program that filters the JSON after exporting the Charles log. My next step is to get rid of Charles completely and create my own proxy in Python that can view HTTP and HTTPS data. I was wondering whether Scapy or any other existing library would work for this? I am interested in Scapy because I can save the proxy log as a pcap file.
Reading through mitmproxy would be overwhelming, since it's a huge code base. If you would like to implement the proxy server from scratch, here is what I learned while developing Proxyman:
Learn how to set up a tiny proxy server: basically, open a listening socket on your port (9090, for example), accept any incoming request, and read the first line of the HTTP message. This can be done with a lightweight http-parser or any Python parser. The first line of a raw CONNECT request looks like:
CONNECT google.com:443 HTTP/1.1
Parse out the host (google.com) and port, and resolve the IP: open a socket connection to the destination and relay data back and forth between the client <-> the destination server.
This first step is essential to implementing the HTTP proxy. Use http-parser to parse the rest of the HTTP message; that way you can get the headers and body from the request/response and present them in the UI. A minimal sketch of such a tunnel is shown below.
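Here is a minimal, untested sketch of that tunnel using only the standard library (port 9090 as above; no HTTPS decryption yet):

    # Sketch: a tiny CONNECT-tunnel proxy. Accept a client, read the
    # CONNECT line, dial the destination, then blindly shuttle bytes.
    import socket
    import threading

    def tunnel(src, dst):
        # Forward bytes one way until either side closes.
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)

    def handle(client):
        head = client.recv(65535)
        # First line looks like: CONNECT google.com:443 HTTP/1.1
        method, target, _ = head.split(b"\r\n")[0].split(b" ")
        if method == b"CONNECT":
            host, port = target.split(b":")
            upstream = socket.create_connection((host.decode(), int(port)))
            client.sendall(b"HTTP/1.1 200 Connection Established\r\n\r\n")
            threading.Thread(target=tunnel, args=(client, upstream)).start()
            threading.Thread(target=tunnel, args=(upstream, client)).start()

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", 9090))
    server.listen(5)
    while True:
        conn, _ = server.accept()
        threading.Thread(target=handle, args=(conn,)).start()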
Learn how HTTPS and SSL work: use OpenSSL to generate a self-signed certificate, and learn how to generate the certificate chain as well.
Learn how to import those certificates into the macOS keychain using the security CLI or Apple's Security framework.
When that's done, it's time to start the HTTPS interception: repeat the second step, but perform the SSL handshake with the appropriate certificate on both sides (client -> your proxy server, and your proxy server -> destination).
Parse the HTTP message as usual and get the rest of the message.
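A rough sketch of that double handshake with Python's ssl module (the certificate file names are placeholders for whatever you generated in the OpenSSL step):

    # Sketch: one TLS session toward the client using our own cert, and a
    # second, ordinary TLS session toward the real destination.
    import socket
    import ssl

    server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    server_ctx.load_cert_chain("proxy-cert.pem", "proxy-key.pem")  # placeholder paths
    client_ctx = ssl.create_default_context()

    def intercept(client_sock, host, port):
        # TLS toward the victim client, presenting our certificate...
        tls_client = server_ctx.wrap_socket(client_sock, server_side=True)
        # ...and TLS toward the real destination server.
        upstream = client_ctx.wrap_socket(
            socket.create_connection((host, port)), server_hostname=host
        )
        # Both sockets now yield plaintext HTTP that can be parsed as usual.
        return tls_client, upstream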
Overall, there are a lot of open-source projects out there, but I suggest starting from a simple version before moving on.
Hope that helps.
I have a Scrapy crawler and I want to rotate IPs so my application won't get blocked. I am setting the IP in Scrapy using request.meta['proxy'] = 'http://51.161.82.60:80', but this is a VM's IP. My question is: can a VM's or machine's bare IP be used with Scrapy, or do I need a proxy server?
Currently I am doing this (a stripped-down sketch is below). It does not throw any error, but when I get the response from http://checkip.dyndns.org it shows my own IP, not the IP I set in meta. That is why I want to know whether I need a proxy server.
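Here is a minimal sketch of the spider (the spider name is a placeholder):

    # Sketch: attach the proxy per request via meta; Scrapy's built-in
    # HttpProxyMiddleware picks it up.
    import scrapy

    class CheckIPSpider(scrapy.Spider):
        name = "checkip"

        def start_requests(self):
            yield scrapy.Request(
                "http://checkip.dyndns.org",
                meta={"proxy": "http://51.161.82.60:80"},
                callback=self.parse,
            )

        def parse(self, response):
            # If the proxy works, this should print the VM's IP, not mine.
            self.logger.info(response.text)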
The reason you are getting your own IP is that your VM is 'transparent'. You would need something on the VM to intercept your request, strip tracking headers such as X-Forwarded-For, and know whom to respond to when the reply comes back from the website you are crawling.
The simplest solution, though, is to install a proxy service on your VM, for example Squid, and set forwarded_for off to make it an anonymous proxy server. There may be other request options to tweak to make it truly anonymous. Remember to restrict access to whitelisted IP addresses with acl specialIP src x.x.x.x followed by http_access allow specialIP in /etc/squid/squid.conf, as sketched below. Squid's default port is 3128.
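A minimal sketch of the relevant /etc/squid/squid.conf lines (x.x.x.x is the client IP you want to allow):

    # Allow only your own machine, deny everyone else
    acl specialIP src x.x.x.x
    http_access allow specialIP
    http_access deny all
    # Do not add X-Forwarded-For, so the origin server cannot see the client IP
    forwarded_for off
    # Default listening port
    http_port 3128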
You definitely need a proxy server. The meta entry is only data attached to the request on your side; at the TCP connection layer, the server still sees the public IP that is actually connecting.
I'm using Python requests to play with a REST API. The response format is JSON, and let's assume the server always sends correct data. Given that HTTP uses TCP for transmission, do I still have to check the existence of a required key if no exception is thrown by requests?
For TCP transmissions, you don't need to verify the response if you assume that the server always sends correct data:
TCP provides reliable, ordered, and error-checked delivery of a stream of octets between applications running on hosts communicating by an IP network.
Source: Wikipedia
Of course, it's always a good idea to add some error handling and verification to your code just in case the server doesn't send what you'd expect.
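A small example of that defensive verification (the URL and key name are placeholders):

    # Sketch: TCP only guarantees the bytes arrive intact; HTTP status,
    # JSON validity, and required keys each need their own check.
    import requests

    resp = requests.get("https://api.example.com/items/1")
    resp.raise_for_status()   # raises on HTTP-level errors (4xx/5xx)
    data = resp.json()        # raises ValueError if the body is not valid JSON

    name = data.get("name")   # None instead of KeyError if the key is absent
    if name is None:
        raise ValueError("response is missing required key 'name'")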
I am new to Twisted and started using treq because of its similarity to Requests (very easy to use basic authentication, etc.). I have an HTTPConnectionPool with maxPersistentPerHost=1 and persistent=True, and I send 4 requests in a row to a host: treq.put(1), treq.get(1), treq.put(2) and treq.get(1). However, the Apache server running on the host receives the requests out of order (checked /var/log/apache2/access.log). Using netstat I saw 4 connections to the host; I was expecting only 1 connection, with all requests going over the same connection in order. Any idea what I am doing wrong or missing? A stripped-down sketch of what I'm doing follows.
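Here is a stripped-down sketch of my setup (the URL and payloads are placeholders):

    # Sketch: four requests fired back to back on a pool capped at one
    # persistent connection per host.
    import treq
    from twisted.internet import defer, task
    from twisted.web.client import HTTPConnectionPool

    @defer.inlineCallbacks
    def main(reactor):
        pool = HTTPConnectionPool(reactor, persistent=True)
        pool.maxPersistentPerHost = 1
        url = "http://example.com/resource"  # placeholder
        # Fired without waiting on each Deferred, i.e. "in a row":
        ds = [
            treq.put(url, data=b"v1", pool=pool),
            treq.get(url, pool=pool),
            treq.put(url, data=b"v2", pool=pool),
            treq.get(url, pool=pool),
        ]
        responses = yield defer.gatherResults(ds)
        for resp in responses:
            yield resp.content()  # drain bodies so connections can be reused

    task.react(main)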
Thanks!
Reshad.