Creating a Charles proxy alternative using Python - python

I am using Charles Proxy right now to monitor traffic between my devices and a website. The traffic is SSL and I am able to read it in Charles. The issue is that Charles makes the content hard to read when I am filtering through hundreds of variables in a JSON object. I created a program that will filter the JSON after exporting the Charles log. My next step is to get rid of Charles completely and create my own proxy in Python that can view HTTP and HTTPS data. I was wondering whether scapy or any other existing library would work for this? I am interested in scapy because I can save the proxy log as a pcap file.

Reading through mitmproxy would be overwhelming since it's a huge codebase, but if you would like to implement the proxy server from scratch, here is what I learned while developing Proxyman.
Learn how to set up a tiny proxy server: basically, open a listening socket on your port (9090 for example), accept any incoming request, and read the first line of the HTTP message. This can be done with a lightweight http-parser or any Python parser. The raw HTTP message looks like:
CONNECT google.com:443 HTTP/1.1
Parse it to get the host (google.com) and resolve the IP: open a socket connection to the destination and start relaying data back and forth between the client <-> the destination server.
This first step is enough to implement a plain HTTP proxy. Use http-parser to parse the rest of the HTTP message; that way you can get the headers and body from the request/response and present them in the UI.
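For illustration, here is a minimal sketch of that tunneling step in Python. The port number, buffer size, and threaded design are assumptions for the example (and it only handles CONNECT requests), not how Proxyman itself is necessarily implemented:

import socket
import threading

def pipe(src, dst):
    # Copy bytes in one direction until either side closes the connection.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

def handle(client):
    request = client.recv(4096)
    first_line = request.split(b"\r\n", 1)[0].decode()  # e.g. "CONNECT google.com:443 HTTP/1.1"
    method, target, _ = first_line.split(" ", 2)
    if method != "CONNECT":
        client.close()
        return
    host, port = target.rsplit(":", 1)
    upstream = socket.create_connection((host, int(port)))
    client.sendall(b"HTTP/1.1 200 Connection Established\r\n\r\n")
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    pipe(upstream, client)

server = socket.create_server(("127.0.0.1", 9090))
while True:
    conn, _ = server.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()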
Learn how HTTPS and SSL work: use OpenSSL to generate a self-signed certificate, and learn how to generate the chain certificates too.
Learn how to import those certificates into the macOS keychain by using the security CLI or the Security framework from Apple.
When that's done, it's time to start the HTTPS interception: repeat the second step, but perform the SSL handshake with the appropriate certificate on both sides (client -> your proxy server, and your proxy server -> destination).
Parse the HTTP message as usual and get the rest of the message.
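As a rough sketch of the interception step using Python's ssl module (the certificate and key file names are assumptions, and generating a per-host certificate signed by your root is not shown):

import socket, ssl

def intercept(client_sock, host, port):
    # Client -> proxy side: present our own certificate for the requested host.
    server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    server_ctx.load_cert_chain("proxy-cert.pem", "proxy-key.pem")  # assumed file names
    tls_client = server_ctx.wrap_socket(client_sock, server_side=True)

    # Proxy -> destination side: an ordinary TLS connection to the real server.
    upstream_ctx = ssl.create_default_context()
    tls_upstream = upstream_ctx.wrap_socket(
        socket.create_connection((host, port)), server_hostname=host)

    # Both sockets now yield plaintext HTTP, so requests and responses can be
    # parsed, logged, or modified before being relayed to the other side.
    return tls_client, tls_upstream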
Overall, there are a lot of open-source projects out there, but I suggest starting with a simple version before moving on.
Hope that could help you.

Related

Implementing WebSockets with Sony's Audio Control API in Python

Sony's website provides an example of using WebSockets to work with their API in Node.js:
https://developer.sony.com/develop/audio-control-api/get-started/websocket-example#tutorial-step-3
It worked fine for me, but when I tried to implement it in Python, it does not seem to work.
I use websocket-client:
import ssl
import websocket
ws = websocket.WebSocket()
ws.connect("ws://192.168.0.34:54480/sony/avContent", sslopt={"cert_reqs": ssl.CERT_NONE})
gives
websocket._exceptions.WebSocketBadStatusException: Handshake status 403 Forbidden
but in their example code, there is not any kind of authorization or authentication.
I recently had the same problem. Here is what I found out:
Normal HTTP responses can contain Access-Control-Allow-Origin headers to explicitly allow other websites to request data. Otherwise, web browsers block such "cross-origin" requests, because the user could be logged in there for example.
This "same-origin-policy" apparently does not apply to WebSockets and the handshakes can't have these headers. Therefore any website could connect to your Sony device. You probably wouldn't want some website to set your speaker/receiver volume to 100% or maybe upload a defective firmware, right?
That's why the audio control API checks the Origin header of the handshake. It always contains the website the request is coming from.
The Python WebSocket client you use assumes http://192.168.0.34:54480/sony/avContent as the origin by default in your case. However, it seems that the API ignores the content of the Origin header and just checks whether it's there.
The WebSocket#connect method has a parameter named suppress_origin which can be used to exclude the Origin header.
TL;DR
The Sony audio control API doesn't accept WebSocket handshakes that contain an Origin header.
You can fix it like this:
ws.connect("ws://192.168.0.34:54480/sony/avContent",
sslopt={"cert_reqs": ssl.CERT_NONE},
suppress_origin=True)

Python requests being fingerprinted?

I'm hacking together an Amazon API and when only using Python requests without proxying, it prompts for a captcha. When routing this Python requests traffic through Fiddler, it seems to pass without a problem. Is it possible that Amazon is fingerprinting Python requests and Fiddler changes the fingerprint since it's a proxy?
I viewed the headers sent from Fiddler and Python requests and they are the same.
There are no extra proxying/Fiddler rules/filters set on Fiddler that would create a change.
To be clear, all mentioned proxying is only done locally, so it will not change the public IP address.
Thank you!
The reason is that websites fingerprint your requests using the TLS Client Hello packet. There are techniques like JA3 that generate a fingerprint for each connection, and sites use them to intentionally block HTTP clients like requests or urllib. If you use a MITM proxy, the proxy server creates a new TLS connection with the server, so the server only sees the proxy server's fingerprint and will not block it.
If the server only blocks certain popular HTTP libraries, you can simply change the TLS settings; then you will have a different fingerprint than the default one.
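As a hedged illustration of that first option (this adapter pattern and the cipher string are my own example, not part of the original answer), you can give requests a custom SSLContext so its Client Hello, and therefore its JA3 hash, differs from the library default:

import ssl
import requests
from requests.adapters import HTTPAdapter

class CustomTLSAdapter(HTTPAdapter):
    # Build the connection pool with a custom SSLContext so the TLS Client Hello
    # (and therefore the JA3 fingerprint) differs from the requests/urllib3 default.
    def init_poolmanager(self, *args, **kwargs):
        ctx = ssl.create_default_context()
        ctx.set_ciphers("ECDHE+AESGCM")  # example cipher list, adjust as needed
        kwargs["ssl_context"] = ctx
        return super().init_poolmanager(*args, **kwargs)

session = requests.Session()
session.mount("https://", CustomTLSAdapter())
r = session.get("https://tls.browserleaks.com/json")
print(r.json())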
If the server only allows popular real-world browsers and accepts only them as valid requests, you will need libraries that can simulate browser fingerprints, one of which is curl-impersonate and its Python binding curl_cffi:
pip install curl_cffi
from curl_cffi import requests
# Notice the impersonate parameter
r = requests.get("https://tls.browserleaks.com/json", impersonate="chrome101")
print(r.json())
# output: {'ja3_hash': '53ff64ddf993ca882b70e1c82af5da49'
# the fingerprint should be the same as target browser

Is requests response likely to be corrupted?

I'm using Python requests to play with a REST API. The response format is JSON, and let's assume the server always sends correct data. Given the fact that HTTP uses TCP for transmission, do I still have to check the existence of a required key if no exception is thrown by requests?
For TCP transmissions, you don't need to verify the response if you assume that the server always sends correct data:
TCP provides reliable, ordered, and error-checked delivery of a stream of octets between applications running on hosts communicating by an IP network.
Source: Wikipedia
Of course, it's always a good idea to add some error handling and verification to your code just in case the server doesn't send what you'd expect.
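For example, a few defensive checks around the call are cheap; the URL and the "name" key below are placeholders rather than anything from the original question:

import requests

resp = requests.get("https://api.example.com/item/1", timeout=10)
resp.raise_for_status()        # raise on HTTP error status codes
data = resp.json()             # raises ValueError if the body is not valid JSON

if "name" not in data:         # the required key is still worth checking
    raise ValueError("response is missing the expected 'name' field")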

Redirect IP camera stream from a Raspberry to my website

So here's my setup:
IP camera -> Raspberry Pi (Raspbian) -> WiFi -> my server
I am currently using Motion to retrieve the camera's stream on my RPi. I am able to view it on the local network (192.168.x.x:8080) through my browser (it's an MJPEG stream).
I would now like to publish this online so I can view it from http://camera.example.com/ for example.
The difference here is that I would like to do so independently of the WiFi network used (so I cannot simply open a port on my router to accept a connection from the server).
I think this would be possible using WebSockets, but I have never used them before. Or is there some tool that already exists AND is easy to use? There are many streaming tools out there, but they all seem to be Windows GUI programs rather than command line tools.
The choice of language is Python, but if for some reason another language would be more suited, that is fine too. Also, I do not need to use Motion specifically, so if there is a better alternative that would work too. Thanks!
As a set of minimum steps, you will need:
A domain name that points to your public IP address
A way of keeping the DNS records for the domain up to date as your IP periodically changes (a free dynamic DNS hostname from noip.com will help with the first point, and they have a client you can install which will keep their DNS updated with your current IP)
A port forwarding rule on your router to forward port 8080 (and the stream port for the camera stream, probably 8081 but you can change that in the Motion config) to the internal (192.168.x.x) IP of your Pi
A DHCP reservation in your router to reserve the IP of the Pi (otherwise if the internal IP changes you will need to change the port forwarding rule)
You will now be able to access it on the internet via the domain name, e.g. http://camera.example.com:8080
BUT...
You have just allowed insecure HTTP (unencrypted) access to a device on your home network, which could then be exploited (someone could view your cameras, or perhaps gain further access to the Pi and other devices on your network...)
You can enable authentication for the web control GUI in the Motion config, but it's still being served over HTTP and so is easy to hack or intercept.
So I would also want to ensure it is all accessible only via HTTPS (secure, encrypted).
Items you will need:
an SSL certificate for your domain (available for free from letsencrypt.org)
a web server on the Pi (since Motion doesn't use any installed web server but instead has its own built-in one) - I'd recommend Nginx or Apache
certbot (to generate/install the certificate on the Pi)
configure the web server to be a reverse proxy and serve the HTTP Motion website as HTTPS using your SSL certificate
secure the website (both Apache and Nginx support HTTP basic authentication which, if the reverse proxy is configured correctly, will be served over HTTPS and therefore encrypted; that is better than unencrypted, base64-encoded (and easily decoded) credential info transmitted in the clear for anyone to see/intercept)
Other authentication options are available with some extra work, but as a bare minimum, basic auth and full HTTPS are better than nothing.

Python-Scapy HTTP Traffic Manipulation

I need to intercept an HTTP response packet from the server and replace it with my own response, or at least modify that response, before it arrives at my browser.
I'm already able to sniff this response and print it, the problem is with manipulating/replacing it.
Is there a way to do so with the scapy library?
Or do I have to connect my browser through a proxy to manipulate the response?
If you want to work from your ordinary browser, then you need a proxy between the browser and the server in order to manipulate the traffic. E.g. see https://portswigger.net/burp/ which is a proxy specifically created for penetration testing with easy replacing of responses/requests (and it is scriptable, too).
If you want to script your whole session in scapy, then you can create requests and responses to your liking, but the response does not go to the browser. Also, you can record an ordinary web session (with tcpdump/wireshark/scapy) into a pcap file, then use scapy to read the pcap, modify it, and send similar requests to the server.
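As a rough sketch of that pcap approach (the capture file name and the path substitution are made-up examples):

from scapy.all import rdpcap, send, IP, TCP, Raw

packets = rdpcap("session.pcap")          # assumed capture file name
for pkt in packets:
    if pkt.haslayer(TCP) and pkt.haslayer(Raw) and b"GET /old-path" in pkt[Raw].load:
        # Rewrite the HTTP payload, then drop cached lengths/checksums so scapy
        # recalculates them when the packet is sent.
        pkt[Raw].load = pkt[Raw].load.replace(b"/old-path", b"/new-path")
        del pkt[IP].len
        del pkt[IP].chksum
        del pkt[TCP].chksum
        send(pkt[IP])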
