I have created a local server using Python, and I am trying to pass traffic through it to see the HTTP requests and responses.
I've tried inputting the following into my browser and only get a 404:
http://127.0.0.1:8080/http://www.google.com
For clarification's sake, I am trying to get the following:
Starting simple_httpd on port: 8080
127.0.0.1 - - [05/Jan/2015 15:12:33] code 404, message File not found
127.0.0.1 - - [05/Jan/2015 15:12:33] "GET /http://www.google.com HTTP/1.1" 404 -
If you'd like the browser to send HTTP requests through a proxy server you've developed running on localhost, you need to configure its proxy settings to point at (in your case) "127.0.0.1:8080."
In Chrome, for example, you can do this from Settings | Advanced Settings. (Depending on the OS, this will probably just bring you to the appropriate system-wide proxy settings for your computer.)
Another option is to edit your computer's hosts file to point the desired domain(s) at 127.0.0.1. If you go this way, you'll need to run your server on port 80.
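For example, adding a line like the following to your hosts file (/etc/hosts on macOS/Linux) points that domain at your machine (domain illustrative):

127.0.0.1 www.google.com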
Note that writing a completely functional proxy server with good compatibility can be rather involved, but getting the basics up for experimentation isn't so bad. Also note that you'll generally be out of luck for SSL traffic.
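To illustrate, here is a minimal sketch of a plain-HTTP logging proxy built only on the standard library; no HTTPS, no header forwarding, just enough to watch requests go by once the browser's proxy settings point at 127.0.0.1:8080:

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # With the browser configured to use this proxy, self.path is the
        # absolute URL, e.g. "http://www.google.com/"
        self.log_message("proxying %s", self.path)
        upstream = urlopen(self.path)
        body = upstream.read()
        self.send_response(upstream.status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("127.0.0.1", 8080), ProxyHandler).serve_forever()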
BTW, if you are just interested in seeing the request/response traffic, you might be better served by your browser's developer tools or the excellent Firebug, which generally allow pretty good inspection of HTTP traffic in the "Network" tab.
I went through the gRPC tutorial and got a successful response from the server.
However, the server did not log anything to the command line. I was expecting to see something like "request received - response sent with this status code", similar to the Django dev server or a local http.server:
Serving HTTP on 127.0.0.1 port 8000 (http://127.0.0.1:8000/) ...
127.0.0.1 - - [27/Jun/2022 13:06:08] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [27/Jun/2022 13:06:08] code 404, message File not found
What I tried with my gRPC server:
GRPC_VERBOSITY=DEBUG python greeter_server.py
That just gave strange output. I tried with INFO and received nothing when requests came in.
I saw on grpc.server there was the interceptor parameter:
interceptors: An optional list of ServerInterceptor objects that observe
and optionally manipulate the incoming RPCs before handing them over to
handlers. The interceptors are given control in the order they are
specified. This is an EXPERIMENTAL API.
How can I log requests and responses to the cli?
In gRPC, since the complete implementation of the server and client is in your hands, it is up to the developer to create contextual logging on both sides.
You can use the Python logging module to print logs to the CLI. This is better than a plain print() in many ways.
Ideally, one can make the server print logs such as:
The interface on which the server is started (ip/hostname:port).
When an RPC call is received for a particular method.
Any intermediate info or debug logs in the method, etc.
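As a sketch, that could look like the following, combining the logging module with the interceptors parameter quoted in the question (the commented-out server wiring and the method path in the comment are illustrative):

import logging
import grpc

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
logger = logging.getLogger(__name__)

class LoggingInterceptor(grpc.ServerInterceptor):
    # Logs the full method path of every incoming RPC,
    # e.g. "/helloworld.Greeter/SayHello", before handing it over.
    def intercept_service(self, continuation, handler_call_details):
        logger.info("RPC received: %s", handler_call_details.method)
        return continuation(handler_call_details)

# Illustrative wiring when creating the server:
# server = grpc.server(futures.ThreadPoolExecutor(max_workers=10),
#                      interceptors=(LoggingInterceptor(),))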
In case you are looking for the default logging in gRPC, which is mostly useful during development and not much help in production, you can explore gRPC traces, with which you can control the logging of different sub-components of gRPC.
I have a super special proxy I need to use to access certain hosts (it turns all other traffic away), and a bunch of complex libraries and applications that can only take a single HTTP proxy configuration parameter for all their HTTP requests, which are of course a mix of restricted/proxied traffic and traffic that this proxy refuses to handle.
I've found an example script showing how to manipulate the upstream proxy host/address in upstream mode, but I couldn't find any indication in the public API that "breaking out" of upstream mode in a script is possible, i.e. having mitmproxy handle traffic directly instead of sending it upstream when certain conditions are met (mostly the request's target host).
What am I missing? Should I be trying to do this in "regular" mode?
I invoke PAC in the title because it has the DIRECT keyword, which allows the library/application to continue processing the request without going through a proxy.
Thanks!
I've found evidence that this is in fact not possible and unlikely to be implemented: https://github.com/mitmproxy/mitmproxy/issues/2042#issuecomment-280857954. Although that issue and comment are very old, there are some recent, related, and unanswered questions such as "How can I switch mitmproxy mode based on attributes of the proxied request".
So instead, I'm pivoting to tinyproxy, which does seem to provide this exact functionality: https://github.com/tinyproxy/tinyproxy/blob/1.10.0/etc/tinyproxy.conf.in#L143
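For reference, the relevant configuration is a sketch like this (hostnames and port illustrative); with no catch-all upstream directive, tinyproxy handles everything unmatched directly, which is the PAC DIRECT behaviour:

# Route only the hosts the special proxy accepts through it
upstream http specialproxy.example.com:3128 ".restricted.example.com"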
A shame, because the replay/monitoring/interactive-editing features of mitmproxy would've been amazing to have.
I am using the following script to get issues from Jira.
from jira import JIRA
options = {'server': 'https://it.company.com/'}
jira = JIRA(options, basic_auth=('user', 'password'), max_retries=1)
issues = jira.search_issues('project="Web"', startAt=0, maxResults=50)
I want to replace https://it.company.com/ with https://ip:port.
I used ping to get the IP.
I used nmap to check ports, but no matter what https://ip:port input I use, I can't get a connection. I also tried these ports.
How can I find out which IP and port JIRA() is accessing?
The https protocol uses port 443 by default; refer to Wikipedia for details.
However, accessing a server via https://server_name/ is different from accessing it via https://server_ip_address/. This is because during TLS negotiation, server_name is passed to the server via TLS SNI (Server Name Indication); this way, multiple virtual websites may be hosted at the same server_ip_address. See Wikipedia for details.
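If you just want to see which address and port the client would connect to, a quick standard-library sketch (URL taken from the question):

import socket
from urllib.parse import urlparse

parsed = urlparse("https://it.company.com/")
port = parsed.port or 443  # https defaults to 443 when no port is given

# Resolve the hostname the same way an HTTP client would
for family, _, _, _, sockaddr in socket.getaddrinfo(
        parsed.hostname, port, proto=socket.IPPROTO_TCP):
    print(sockaddr)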
If the script works and you just want to know what the connection looks like, I recommend letting it run and executing netstat -ano in the background.
If the script doesn't work and you just want to know where it tries to connect, I recommend installing wireshark.
Edit: In any case, you (most likely) won't be able to replace it with ip:port, because servers treat HTTP requests to an IP address differently from requests to a name.
Ask the Jira admin to tell you. It is configured in conf/server.xml like any Tomcat app, or there may be a reverse proxy such as nginx configured in front of Jira.
I am using Charles Proxy right now to monitor traffic between my devices and a website. The traffic is SSL and I am able to read it in Charles. The issue is that Charles makes the content hard to read when I am filtering through hundreds of variables in a JSON object. I created a program that will filter the JSON after exporting the Charles log. My next step is to get rid of Charles completely and create my own proxy in Python that can view HTTP and HTTPS data. I was wondering if scapy or any other existing libraries would work for this? I am interested in scapy because I can save the proxy log as a pcap file.
Reading through mitmproxy would be overwhelming, since it's a huge source base. If you would like to implement the proxy server from scratch, here is what I learned while developing Proxyman:
Learn how to set up a tiny proxy server: basically, open a listening socket on your port (9090, for example), accept any incoming request, and get the first line of the HTTP message. This can be done with a lightweight http-parser or any Python parser. The raw HTTP message looks like:
CONNECT google.com:443 HTTP/1.1
Parse it to get the host (google.com) and resolve its IP: open a socket connection to the destination IP and start relaying data back and forth between the client <-> the destination server.
That tunneling step is the essential part of an HTTP proxy. Use http-parser to parse the rest of the HTTP message; that way you can get the headers and body from the request/response and present them in the UI.
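A minimal sketch of those first two steps, a raw-socket CONNECT tunnel (port and details illustrative; a real proxy needs far more parsing and error handling):

import socket
import threading

def pipe(src, dst):
    # Relay bytes one way until either side closes
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def handle(client):
    # First line of the message, e.g. "CONNECT google.com:443 HTTP/1.1"
    first_line = client.recv(65536).split(b"\r\n")[0].decode()
    method, target, _ = first_line.split()
    if method != "CONNECT":
        client.close()
        return
    host, _, port = target.partition(":")
    upstream = socket.create_connection((host, int(port or 443)))
    client.sendall(b"HTTP/1.1 200 Connection Established\r\n\r\n")
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    pipe(upstream, client)

listener = socket.socket()
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("127.0.0.1", 9090))
listener.listen()
while True:
    conn, _ = listener.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()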
Learn how HTTPS and SSL work: use OpenSSL to generate a self-signed certificate, and learn how to generate the chain certificates too.
Learn how to import those certificates into the macOS keychain by using the security CLI or the Security framework from Apple.
When that's done, it's time to start the HTTPS interception: repeat the 2nd step and do an SSL handshake with the appropriate certificate on both sides (client -> your proxy server, and your proxy server -> destination).
Parse the HTTP message as usual and get the rest of the message.
Overall, there are a lot of open-source proxies out there, but I suggest starting from the simple version before moving on.
Hope that helps.
Is it possible to set up a listener on, say, port 9090, add a header like Host: test.host to each request incoming on 9090, and send it on to, say, 8080?
Thanks
EDIT: I went with a reverse proxy for now, applying the hostname:port to any request that comes in.
Twisted has an implementation of a reverse proxy that you could modify to suit your needs; see the examples in the Twisted documentation. If you look at the source code of twisted.web.proxy, you can see that the Host: header is set in ReverseProxyRequest.process, so you could subclass it and set your own header.
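As a starting point, a minimal sketch using ReverseProxyResource, which listens on 9090, forwards to localhost:8080, and sets the Host header to the target host:port on each request (for an arbitrary value like test.host you would subclass as described above):

from twisted.internet import reactor
from twisted.web import proxy, server

# Everything arriving on 9090 is forwarded to localhost:8080
site = server.Site(proxy.ReverseProxyResource("localhost", 8080, b""))
reactor.listenTCP(9090, site)
reactor.run()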
Unless you need to tailor the proxied request based on parameters that only your web application can know (for example, you need to authenticate the proxied request with your webapp's custom authentication system), you should use your web server's proxy capabilities.
Example with Apache:
Listen 0.0.0.0:9090
ProxyRequests off
<VirtualHost myhost:9090>
ProxyPass / http://localhost:8080/
ProxyPassReverse / http://localhost:8080/
ProxyPassReverseCookieDomain localhost myhost
</VirtualHost>
If you have to proxy things in a Flask or Werkzeug application, you can use httplib (http.client in Python 3), creating requests based on the incoming request data and returning the response, either raw or modified (e.g. for link rewriting). It's doable; I have one such proxy in use where there was no good alternative. If you do that, I recommend against using regular expressions to rewrite HTML links. I used PyQuery instead; it's far easier to get it right.
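For completeness, a minimal sketch of that approach in Flask with http.client (the localhost:8080 target and the test.host header value are illustrative; query strings and error handling are omitted):

import http.client
from flask import Flask, Response, request

app = Flask(__name__)

@app.route("/", defaults={"path": ""}, methods=["GET", "POST"])
@app.route("/<path:path>", methods=["GET", "POST"])
def proxied(path):
    conn = http.client.HTTPConnection("localhost", 8080)
    headers = {k: v for k, v in request.headers if k.lower() != "host"}
    headers["Host"] = "test.host"  # illustrative, from the question above
    conn.request(request.method, "/" + path, request.get_data(), headers)
    upstream = conn.getresponse()
    # Drop hop-by-hop headers before handing the response back
    skip = {"transfer-encoding", "connection", "content-encoding"}
    out = [(k, v) for k, v in upstream.getheaders() if k.lower() not in skip]
    return Response(upstream.read(), status=upstream.status, headers=out)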