I'm trying to route certain requests through Burp; these would be HTTPS requests.
I have Burp properly installed, and I've placed the cacert.der certificate in the working directory that contains the code. I've looked at https://curl.haxx.se/libcurl/c/curl_easy_setopt.html, but can't get it working.
This is the current code I'm using to attempt this:
import pycurl
import certifi
from io import BytesIO

@classmethod
def CURLpageThroughBurp(cls, encodedURL):
    buffer = BytesIO()
    c = pycurl.Curl()
    c.setopt(c.URL, str(encodedURL))
    c.setopt(c.CAINFO, certifi.where())      # certifi's default CA bundle
    c.setopt(c.PROXY, '127.0.0.1')           # Burp's proxy listener
    c.setopt(c.PROXYPORT, 8080)
    c.setopt(c.PROXY_SSLCERT, "cacert.der")  # Burp's exported CA certificate
    c.setopt(c.PROXY_SSLCERTTYPE, "DER")
    c.setopt(c.FOLLOWLOCATION, True)
    ##c.setopt(c.PROXYTYPE, c.PROXYTYPE_SOCKS5_HOSTNAME)
    c.setopt(c.WRITEDATA, buffer)
    c.perform()
    c.close()
    webpage = buffer.getvalue()
    return webpage
Here's the error I receive:
File "randomcode.py", line 111
    c.perform()
pycurl.error: (60, 'SSL certificate problem: self signed certificate in certificate chain')
HTTP requests:
When I have "Intercept on" enabled in Burp, the HTTP requests are intercepted, and they can successfully be forwarded; I receive responses from the destination server.
HTTPS requests:
When I have "Intercept on" enabled in Burp, the HTTPS requests aren't intercepted, so I am unable to forward them because they never show up in the Intercept tab, and I get the pycurl.error shown above.
NOTE: I can successfully send both HTTP and HTTPS requests with curl when not using Burp proxy
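For what it's worth, error 60 generally means libcurl does not trust the CA that signed the chain, and Burp signs the per-host certificates it forges with its own CA. A minimal sketch of one common way to resolve this, assuming cacert.der really is Burp's exported CA certificate and has first been converted to PEM (CAINFO expects PEM by default), e.g. with openssl x509 -inform der -in cacert.der -out cacert.pem:
import pycurl
from io import BytesIO

buffer = BytesIO()
c = pycurl.Curl()
c.setopt(c.URL, 'https://example.com/')  # hypothetical target URL
c.setopt(c.PROXY, '127.0.0.1')
c.setopt(c.PROXYPORT, 8080)
# Point CAINFO at a PEM copy of Burp's CA instead of certifi's bundle,
# so the certificates Burp generates verify cleanly.
c.setopt(c.CAINFO, 'cacert.pem')
c.setopt(c.WRITEDATA, buffer)
c.perform()
c.close()
print(buffer.getvalue())
(Note that PROXY_SSLCERT/PROXY_SSLCERTTYPE configure a client certificate presented to the proxy, which is a different thing.)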
Related
Here is my test code:
import requests

url = 'https://api.ipify.org/'
proxyapi = 'http://ip.11jsq.com/index.php/api/entry?method=proxyServer.generate_api_url&packid=1&fa=0&fetch_key=&qty=1&time=1&pro=&city=&port=1&format=txt&ss=1&css=&dt=1'
# the API returns a fresh proxy address as plain text
proxy = {'http': 'http://{}'.format(requests.get(proxyapi).text)}
print('Downloading with fresh proxy.', proxy)
resp = requests.get(url, proxies=proxy)
print('Fresh proxy response status.', resp.status_code)
print(resp.text)
#terminal output
Downloading with fresh proxy. {'http': 'http://49.84.152.176:30311'}
Fresh proxy response status. 200
222.68.154.34  # my public IP address
It finishes with no error message, but it seems that the requests lib never applies the proxy settings. The proxyapi is valid: I've checked the proxy in my web browser, and when visiting https://api.ipify.org/ it returns the desired proxy server's IP address.
I am using Python 3.6.4 and requests 2.18.4.
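One thing worth checking: requests selects a proxy by the request URL's scheme, and the dict above only has an 'http' key while url is https, so the request can go out directly. A minimal sketch, assuming the address returned by the API also accepts HTTPS CONNECT tunnels:
import requests

proxy_addr = 'http://49.84.152.176:30311'  # hypothetical address from the proxy API
proxies = {
    'http': proxy_addr,
    'https': proxy_addr,  # without this key, https:// URLs bypass the proxy
}
resp = requests.get('https://api.ipify.org/', proxies=proxies)
print(resp.text)  # should now print the proxy's IP rather than yours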
Below is the simple script I'm using to redirect regular HTTP requests on port 8080; it redirects them right away (or at least causes a redirect), depending on the source IP address.
It works for HTTP, but I would like the same behavior for HTTPS requests coming in on port 443. Assume that, if the redirection were not present, clients connecting to this simple server would be able to handshake with the target they are being redirected to via a self-signed certificate.
import SimpleHTTPServer
import SocketServer

LISTEN_PORT = 8080
source = "127.0.0.1"
target = "http://target/"

class simpleHandler(SimpleHTTPServer.SimpleHTTPRequestHandler):
    def do_POST(self):
        # self.client_address is a (host, port) tuple
        clientAddressString = str(self.client_address)
        if source in clientAddressString:
            # redirect incoming request
            self.send_response(301)
            new_path = '%s%s' % (target, self.path)
            self.send_header('Location', new_path)
            self.end_headers()

handler = SocketServer.TCPServer(("", LISTEN_PORT), simpleHandler)
handler.serve_forever()
I can use a self-signed certificate and have access to the files "server.crt" and "server.key" that are normally used for this connection (without the redirecting Python server in the middle). I am not sure what happens when I put a redirection in between like this, although I assume it has to be part of the handshake chain.
How can I achieve this behavior?
Is there anything I should modify apart from the new target and the response code?
I will split my answer into networking and Python parts.
On the networking side, you cannot redirect at the SSL layer; hence you need a full HTTPS server, and you redirect the GET/POST request once the SSL handshake is complete. The response code and the actual do_POST or do_GET implementation would be exactly the same for both HTTP and HTTPS.
As a side note, don't you get any issues with redirecting POSTs? When you do a 301 on a POST, the browser will not resend the POST data to your new target, so something is likely to break at the application level (a 307 redirect would preserve the method and body).
On the Python side, you can upgrade an HTTP server to an HTTPS one by wrapping the socket:
import BaseHTTPServer, SimpleHTTPServer
import ssl

handler = BaseHTTPServer.HTTPServer(("", LISTEN_PORT), simpleHandler)
# certfile must point to a single PEM file containing both key and certificate
handler.socket = ssl.wrap_socket(handler.socket, certfile='path/to/combined.pem', server_side=True)
handler.serve_forever()
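As a hedged aside, ssl.wrap_socket does not read PKCS12 containers; assuming the server.crt and server.key mentioned in the question, they can be concatenated into the single PEM file it expects:
# Combine the private key and certificate into one PEM file for wrap_socket
with open('combined.pem', 'w') as out:
    for part in ('server.key', 'server.crt'):
        out.write(open(part).read())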
Hope this helps.
Using Python httplib or http.client, what code do I need in my HTTP client to:
use an HTTP HEAD request,
contact a web server by specifying only its IP address,
contact a web server without specifying any webpage (or homepage) in the request, and
keep the HTTP connection alive using keepalive messages?
I used the following code example, but it has two problems:
It does not keep the HTTP connection alive,
It gives me the error "500 Domain Not Found" if I use the IP address instead of the domain name.
import http.client

connection = http.client.HTTPConnection("www.python.org")
connection.request("HEAD", "")  # empty request target; "/" would ask for the root
response = connection.getresponse()
print(response.status, response.reason)
requests allows you to:
send requests with HEAD method:
import requests
resp = requests.head("http://www.python.org")
use sessions for automatic keep-alive:
s = requests.Session()
resp = s.head("http://www.python.org")
resp2 = s.get("http://www.python.org/")
Regarding using the IP address instead of the domain: that has nothing to do with the request method. Most sites use some kind of virtual hosting, so they don't respond to the bare IP address, only to specific domain names taken from the Host header. If you ask by IP address, you may get a 500 error or an error message.
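If you do need to contact the server by IP, a hedged workaround is to set the virtual host yourself via the Host header (both the IP and the host name below are placeholders):
import requests

# Connect to the server's IP, but tell it which virtual host we want
resp = requests.head('http://93.184.216.34/', headers={'Host': 'example.com'})
print(resp.status_code, resp.reason)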
I am trying to get the HTML for a website, but the site uses HTTPS, not HTTP.
import httplib
c = httplib.HTTPSConnection("google.com")
c.request("GET", "/")
response = c.getresponse()
print response.status, response.reason
data = response.read()
print data
According to "HTTPS connection Python", the socket module needs to be compiled with SSL or have SSL enabled.
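A quick way to check whether that is already the case (just a diagnostic, assuming Python 2.7 as in the snippet above; if the import fails, the build lacks SSL):
import ssl
print ssl.OPENSSL_VERSION  # the OpenSSL version this Python was linked against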
How do I do this? I am running Ubuntu 14.04.
Thanks
I am struggling with connections to HTTPS sites and files. We have NTLM network proxy authentication. HTTP connections work like a charm, but HTTPS gets stuck with this error:
pycurl.error: (27, "SSL: couldn't create a context: error:140A90A1:lib(20):func(169):reason(161)")
I tried setting VERIFYPEER to 0, but it doesn't work; the same goes for conn.setopt(pycurl.SSL_CIPHER_LIST, 'rsa_rc4_128_sha'). I want to download https://nbp.pl/kursy/xml/LastA.xml. Any clue?
The code:
import pycurl

# url, proxy, user, name and extension are defined elsewhere
conn = pycurl.Curl()
conn.setopt(pycurl.URL, url)
conn.setopt(pycurl.PROXY, proxy)
conn.setopt(pycurl.PROXYPORT, 8080)
conn.setopt(pycurl.HTTPAUTH, pycurl.HTTPAUTH_NTLM)
conn.setopt(pycurl.PROXYUSERPWD, user)
# stream the response body straight into the output file
conn.setopt(pycurl.WRITEFUNCTION, open(r'xml\\' + name + '.' + extension, 'w+').write)
conn.perform()
conn.close()
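Separate from the SSL context error itself, one detail worth noting as an assumption about this setup: pycurl.HTTPAUTH negotiates with the origin server, while credentials for the proxy are configured through pycurl.PROXYAUTH, e.g.:
# Hedged sketch: negotiate NTLM with the proxy rather than the origin server
conn.setopt(pycurl.PROXYAUTH, pycurl.HTTPAUTH_NTLM)
conn.setopt(pycurl.PROXYUSERPWD, user)  # e.g. 'DOMAIN\\user:password'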
I had success with a bypass using CNTLM, which performs the NTLM handshake locally and exposes a plain proxy on 127.0.0.1:3128.
The code:
import urllib2

# CNTLM listens locally and forwards through the NTLM proxy
proxy = urllib2.ProxyHandler({'https': '127.0.0.1:3128'})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)
u = urllib2.urlopen(url)
data = u.read()
fil = open(r'xml\\' + name + '.' + extension, 'w+')
fil.write(data)
fil.close()