Python-Scapy HTTP Traffic Manipulation

I need to intercept an HTTP response packet from the server and replace it with my own response, or at least modify it, before it reaches my browser.
I'm already able to sniff this response and print it; the problem is with manipulating/replacing it.
Is there a way to do so with the scapy library?
Or do I have to connect my browser through a proxy to manipulate the response?

If you want to work from your ordinary browser, then you need a proxy between the browser and the server in order to manipulate the traffic. E.g. see https://portswigger.net/burp/, a proxy created specifically for penetration testing, with easy replacing of responses/requests (and it is scriptable, too).
If you want to script your whole session in scapy, then you can craft requests and responses to your liking, but the response will not reach the browser. Alternatively, you can record an ordinary web session (with tcpdump/wireshark/scapy) into a pcap, then use scapy to read the pcap, modify it, and send similar requests to the server, as in the sketch below.
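A minimal sketch of that pcap-replay approach, assuming plain HTTP on port 80; the capture filename and the header rewrite are hypothetical:
from scapy.all import rdpcap, send, IP, TCP, Raw

packets = rdpcap("session.pcap")  # hypothetical capture of a web session
for pkt in packets:
    # only touch client->server packets that carry an HTTP request payload
    if pkt.haslayer(Raw) and pkt.haslayer(TCP) and pkt[TCP].dport == 80:
        payload = pkt[Raw].load
        if payload.startswith(b"GET"):
            # rewrite a header before resending (illustrative modification)
            pkt[Raw].load = payload.replace(b"User-Agent: Mozilla", b"User-Agent: Scapy")
            # drop stale lengths/checksums so scapy recomputes them
            del pkt[IP].len, pkt[IP].chksum, pkt[TCP].chksum
            send(pkt[IP])  # send from the IP layer up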

Related

An HTTP proxy like Fiddler AutoResponder

Hey, I'm trying to create something like Fiddler's AutoResponder.
I want to replace the content of a specific URL with another. For example:
replace the content of https://randomdomain.com with https://cliqqi.ml
I've researched everywhere but couldn't find anything. I tried creating a proxy server, a Node.js script, a Python script. Nothing worked.
PS: I'm doing this because I want to intercept an Electron app fetching its main game file from a site, and redirect that request to my own site.
If you're looking for a way to do this programmatically in Node.js, I've written a library you can use to do exactly that, called Mockttp.
You can use Mockttp to create an HTTPS-intercepting & rewriting proxy, which will allow you to send mock responses directly, redirect traffic from one address to another, rewrite anything including the headers & body of existing traffic, or just log everything that's sent & received. There's a full guide here: https://httptoolkit.tech/blog/javascript-mitm-proxy-mockttp/
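If you'd rather stay in Python, the same kind of redirect can be sketched as a mitmproxy addon (a different tool than Mockttp; the domains are the hypothetical ones from the question):
from mitmproxy import http

class AutoResponder:
    def request(self, flow: http.HTTPFlow) -> None:
        # send requests for the original host to the replacement host instead
        if flow.request.pretty_host == "randomdomain.com":
            flow.request.host = "cliqqi.ml"

addons = [AutoResponder()]
# run with: mitmdump -s autoresponder.py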

Implementing WebSockets with Sony's Audio Control API in Python

Sony's website provides an example of using WebSockets to work with their API in Node.js:
https://developer.sony.com/develop/audio-control-api/get-started/websocket-example#tutorial-step-3
It worked fine for me. But when I tried to implement it in Python, it did not seem to work.
I use websocket-client:
import ssl
import websocket
ws = websocket.WebSocket()
ws.connect("ws://192.168.0.34:54480/sony/avContent", sslopt={"cert_reqs": ssl.CERT_NONE})
This gives:
websocket._exceptions.WebSocketBadStatusException: Handshake status 403 Forbidden
But in their example code, there is no authorization or authentication of any kind.
I recently had the same problem. Here is what I found out:
Normal HTTP responses can contain Access-Control-Allow-Origin headers to explicitly allow other websites to request data. Otherwise, web browsers block such "cross-origin" requests, because the user could be logged in there, for example.
This "same-origin policy" apparently does not apply to WebSockets, and the handshakes can't carry these headers. Therefore any website could connect to your Sony device. You probably wouldn't want some website to set your speaker/receiver volume to 100% or maybe upload a defective firmware, right?
That's why the audio control API checks the Origin header of the handshake. It always contains the website the request is coming from.
The Python WebSocket client you use assumes http://192.168.0.34:54480/sony/avContent as the origin by default in your case. However, it seems that the API ignores the content of the Origin header and just checks whether it's there.
The WebSocket#connect method has a parameter named suppress_origin which can be used to exclude the Origin header.
TL;DR
The Sony audio control API doesn't accept WebSocket handshakes that contain an Origin header.
You can fix it like this:
ws.connect("ws://192.168.0.34:54480/sony/avContent",
           sslopt={"cert_reqs": ssl.CERT_NONE},
           suppress_origin=True)

Python requests being fingerprinted?

I'm hacking together an Amazon API, and when using python requests without a proxy, it prompts for a captcha. When routing the same requests traffic through Fiddler, it seems to pass without a problem. Is it possible that Amazon is fingerprinting python requests, and that Fiddler changes the fingerprint since it's a proxy?
I compared the headers sent by Fiddler and by python requests, and they are the same.
There are no extra proxying/Fiddler rules/filters set on Fiddler that could cause a change.
To be clear, all the proxying mentioned is done locally, so it does not change the public IP address.
Thank you!
The reason is that websites are fingerprinting your requests via the TLS Client Hello packet. Libraries implementing JA3 generate a fingerprint for each connection, and sites intentionally block HTTP clients like requests or urllib. If you use a MITM proxy, the proxy server creates a new TLS connection with the server, so the server only sees the proxy server's fingerprint and will not block it.
If the server only blocks certain popular HTTP libraries, you can simply change the TLS configuration, and you will have a fingerprint different from the default one; one way is sketched below.
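A sketch of that idea for plain requests, using a custom cipher list (the cipher string is illustrative; this changes the fingerprint but is not guaranteed to pass any particular site's checks):
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.ssl_ import create_urllib3_context

class CustomCipherAdapter(HTTPAdapter):
    def init_poolmanager(self, *args, **kwargs):
        # reordering/limiting ciphers changes the JA3 hash of the Client Hello
        ctx = create_urllib3_context(ciphers="ECDHE+AESGCM:!ECDSA")
        kwargs["ssl_context"] = ctx
        return super().init_poolmanager(*args, **kwargs)

session = requests.Session()
session.mount("https://", CustomCipherAdapter())
print(session.get("https://tls.browserleaks.com/json").json())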
If the server only allows popular real-world browsers, and only accepts them as valid requests, you will need libraries that can simulate browser fingerprints, one of which is curl-impersonate and its Python binding curl_cffi.
pip install curl_cffi
from curl_cffi import requests
# Notice the impersonate parameter
r = requests.get("https://tls.browserleaks.com/json", impersonate="chrome101")
print(r.json())
# output: {'ja3_hash': '53ff64ddf993ca882b70e1c82af5da49'
# the fingerprint should be the same as the target browser's

HTTPS request using python requests library

I am trying to send an HTTPS request using the python requests library.
My code is:
full_url = ''.join(['https://', get_current_site(request).domain, '/am/reply'])
data = {'agent_type': 'trigger', 'input': platform, 'user': request.user.id}
print "hi"  ## this is printed
a = requests.get(full_url, params=data, verify=False)  ## execution gets stuck here, and no error appears
print "hello"  ## this line is never printed
The problem is that there is no execution after the requests call; the whole program is stuck at this point.
I tried to verify my code in the python shell and it ran perfectly.
Is there any way I can debug the request/response in real time, or can someone suggest a solution?
The whole code was working fine with http, but after switching to https it stopped working. I even tried to supply the certificate file, but with no success.
It is normal. Some websites accept only HTTP, some only HTTPS, and some both. HTTP uses port 80 and HTTPS uses port 443. HTTPS is secure HTTP, so the connection needs extra information (a TLS handshake, certificates, etc.). Check the requests documentation on SSL certificate verification:
http://docs.python-requests.org/en/master/user/advanced/#ssl-cert-verification
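As a practical debugging aid (a sketch, not a guaranteed fix; the URL and timeout values are placeholders), you can pass a timeout so a stalled handshake raises instead of hanging forever, and enable debug logging to watch the connection in real time:
import logging
import requests

logging.basicConfig(level=logging.DEBUG)  # urllib3 logs each connection step

try:
    # (connect timeout, read timeout) in seconds
    r = requests.get("https://example.com/am/reply", timeout=(5, 15))
    print(r.status_code)
except requests.exceptions.Timeout:
    print("request timed out - likely a firewall or TLS handshake problem")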

Simple way to detect browser or script

Complexities aside, what is the simplest quick-and-dirty way to detect from a request whether it was issued by a CLI program, such as curl, or by a browser? Here is what I'm trying to figure out:
def view(request):
    if request.is_from_browser:
        return HTML_TEMPLATE
    else:
        return JSON
Request.is_ajax() checks whether the HTTP_X_REQUESTED_WITH header equals XMLHttpRequest. This has become an "industry standard" among web frameworks/libraries to separate Ajax calls from normal requests. But it depends on cooperation from the client side to actually set the header. There's no 100% foolproof way of detecting browser, client, Ajax etc. without this cooperation.
By the way, why do you need to know what's calling?
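For reference, the check described above amounts to a one-liner; a sketch of the idea as a plain helper:
def is_ajax(request):
    # browsers don't set this header on their own; JS libraries like jQuery do
    return request.META.get("HTTP_X_REQUESTED_WITH") == "XMLHttpRequest"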
Something in the HTTP request headers: I'd first try using the Accept header. With the Accept header, the client can specify what sort of content it wants; this puts the responsibility on the client. See the sketch below.
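A minimal sketch of that idea as a Django-style view (the response bodies are illustrative):
from django.http import HttpResponse, JsonResponse

def view(request):
    accept = request.META.get("HTTP_ACCEPT", "")
    if "text/html" in accept:
        # browsers put text/html near the front of their Accept header
        return HttpResponse("<html><body>hello</body></html>")
    # curl sends "Accept: */*" by default, so it falls through to JSON
    return JsonResponse({"message": "hello"})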
