I am struggling to connect to HTTPS sites and files. We have NTLM proxy authentication on our network. HTTP connections work like a charm, but HTTPS gets stuck with this error:
pycurl.error: (27, "SSL: couldn't create a context: error:140A90A1:lib(20):func(169):reason(161)")
I tried setting SSL_VERIFYPEER to 0, but it doesn't help, and the same goes for conn.setopt(pycurl.SSL_CIPHER_LIST, 'rsa_rc4_128_sha'). I want to download https://nbp.pl/kursy/xml/LastA.xml. Any clue?
The code:
import pycurl

conn = pycurl.Curl()
conn.setopt(pycurl.URL, url)
# Corporate proxy with NTLM authentication
conn.setopt(pycurl.PROXY, proxy)
conn.setopt(pycurl.PROXYPORT, 8080)
conn.setopt(pycurl.HTTPAUTH, pycurl.HTTPAUTH_NTLM)
conn.setopt(pycurl.PROXYUSERPWD, user)
# Write the response straight to a file
conn.setopt(pycurl.WRITEFUNCTION, open('xml\\' + name + '.' + extension, 'w+').write)
conn.perform()
conn.close()
I had success with a bypass using CNTLM as a local proxy.
The code:
import urllib2

# CNTLM handles the NTLM authentication and listens locally on port 3128
proxy = urllib2.ProxyHandler({'https': '127.0.0.1:3128'})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)

u = urllib2.urlopen(url)
data = u.read()
fil = open('xml\\' + name + '.' + extension, 'w+')
fil.write(data)
fil.close()
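For reference, a minimal sketch of the same bypass done with pycurl instead of urllib2, assuming CNTLM is listening locally on 127.0.0.1:3128 as in my setup (the output filename is just an example):

import pycurl

# Point pycurl at the local CNTLM proxy; CNTLM adds the NTLM authentication
# towards the corporate proxy, so no credentials are needed here.
conn = pycurl.Curl()
conn.setopt(pycurl.URL, 'https://nbp.pl/kursy/xml/LastA.xml')
conn.setopt(pycurl.PROXY, '127.0.0.1')
conn.setopt(pycurl.PROXYPORT, 3128)
with open('xml\\LastA.xml', 'wb') as f:
    conn.setopt(pycurl.WRITEFUNCTION, f.write)
    conn.perform()
conn.close()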
Related
I'm trying to route certain requests through Burp; these would be HTTPS requests.
I have Burp properly installed and have placed the cacert.der certificate in the working directory that contains the code. I've looked at https://curl.haxx.se/libcurl/c/curl_easy_setopt.html, but can't get it working.
This is the current code I'm using to attempt this:
import pycurl
import certifi
from io import BytesIO

@classmethod
def CURLpageThroughBurp(cls, encodedURL):
    buffer = BytesIO()
    c = pycurl.Curl()
    c.setopt(c.URL, str(encodedURL))
    c.setopt(c.CAINFO, certifi.where())
    # Route the request through the local Burp proxy
    c.setopt(c.PROXY, '127.0.0.1')
    c.setopt(c.PROXYPORT, 8080)
    c.setopt(c.PROXY_SSLCERT, "cacert.der")
    c.setopt(c.PROXY_SSLCERTTYPE, "DER")
    c.setopt(c.FOLLOWLOCATION, True)
    ##c.setopt(c.PROXYTYPE, c.PROXYTYPE_SOCKS5_HOSTNAME)
    c.setopt(c.WRITEDATA, buffer)
    c.perform()
    c.close()
    webpage = buffer.getvalue()
    return webpage
Here's the error I receive:
File "randomcode,py", line 111 c.perform()
pycurl.error: (60, 'SSL certificate problem: self signed certificate in certificate chain')
HTTP requests:
When I have "Intercept on" enabled in Burp, the HTTP requests are intercepted, and they can be successfully forwarded; I receive responses from the destination server.
HTTPS requests:
When I have "Intercept on" enabled in Burp, the HTTPS requests aren't intercepted, so I am unable to forward them because they never show up on the Intercept tab, and I get the pycurl.error shown above.
NOTE: I can successfully send both HTTP and HTTPS requests with curl when not using the Burp proxy.
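I have not verified this against Burp, but as far as I understand the libcurl options, PROXY_SSLCERT/PROXY_SSLCERTTYPE configure a client certificate, while error 60 is about trusting the chain that Burp re-signs. A hedged sketch of what I would try instead, assuming Burp's exported CA has been converted to PEM first (the burp-ca.pem filename and the function name are my own placeholders):

# Convert Burp's exported DER certificate to PEM, e.g.:
#   openssl x509 -inform der -in cacert.der -out burp-ca.pem
import pycurl
from io import BytesIO

def curl_through_burp(url):
    buffer = BytesIO()
    c = pycurl.Curl()
    c.setopt(c.URL, url)
    # Route the request through the local Burp proxy
    c.setopt(c.PROXY, '127.0.0.1')
    c.setopt(c.PROXYPORT, 8080)
    # Trust Burp's CA for the re-signed server certificate (assumption:
    # burp-ca.pem sits in the working directory).
    c.setopt(c.CAINFO, 'burp-ca.pem')
    c.setopt(c.FOLLOWLOCATION, True)
    c.setopt(c.WRITEDATA, buffer)
    c.perform()
    c.close()
    return buffer.getvalue()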
Here is my test code:
import requests

url = 'https://api.ipify.org/'
proxyapi = 'http://ip.11jsq.com/index.php/api/entry?method=proxyServer.generate_api_url&packid=1&fa=0&fetch_key=&qty=1&time=1&pro=&city=&port=1&format=txt&ss=1&css=&dt=1'
# Fetch a fresh proxy address from the proxy API
proxy = {'http': 'http://{}'.format(requests.get(proxyapi).text)}
print('Downloading with fresh proxy.', proxy)
resp = requests.get(url, proxies=proxy)
print('Fresh proxy response status.', resp.status_code)
print(resp.text)
#terminal output
Downloading with fresh proxy. {'http': 'http://49.84.152.176:30311'}
Fresh proxy response status. 200
222.68.154.34#my public ip address
There is no error message, and it seems that the requests library never applies these proxy settings. The proxy returned by the API is valid: I've checked it in my web browser, and visiting https://api.ipify.org/ through it returns the proxy server's IP address.
I am using Python 3.6.4 and requests 2.18.4.
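One detail worth noting about the requests API: the proxies mapping is keyed by URL scheme, so a dict that only has an 'http' entry is never consulted for an https:// URL. A minimal sketch of how I would expect the HTTPS request to go through the proxy (the proxy address below is just the one from the terminal output, used as a placeholder):

import requests

url = 'https://api.ipify.org/'
proxy_addr = 'http://49.84.152.176:30311'  # placeholder proxy address
# Provide the proxy for both schemes; requests picks the entry that matches
# the scheme of the requested URL.
proxies = {'http': proxy_addr, 'https': proxy_addr}
resp = requests.get(url, proxies=proxies)
print(resp.text)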
I am trying to get the HTML for a website, but the site uses HTTPS, not HTTP.
import httplib
c = httplib.HTTPSConnection("google.com")
c.request("GET", "/")
response = c.getresponse()
print response.status, response.reason
data = response.read()
print data
According to HTTPS connection Python, the socket module needs to be compiled with SSL support or have SSL enabled.
How do I do this? I am running Ubuntu 14.04.
Thanks
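So far as I know, the stock Python build on Ubuntu 14.04 already includes SSL support, so a quick way to check before recompiling anything is to import the ssl module; a minimal sketch:

# If this prints an OpenSSL version string, the interpreter was built with SSL
# and httplib.HTTPSConnection should work.
import ssl
print(ssl.OPENSSL_VERSION)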
Via Python's urllib2, I am trying to get data over HTTPS while behind a corporate NTLM proxy.
I run:
import urllib2

proxy_url = 'http://user:pw@ntlmproxy:port/'
proxy_handler = urllib2.ProxyHandler({'http': proxy_url})
opener = urllib2.build_opener(proxy_handler, urllib2.HTTPHandler)
urllib2.install_opener(opener)

f = urllib2.urlopen('https://httpbin.org/ip')
myfile = f.read()
print myfile
but I get the error:
urllib2.URLError: <urlopen error [Errno 8] _ssl.c:507:
EOF occurred in violation of protocol>
How can I fix this error?
Note 0: With the same code I can retrieve the unsecured HTTP equivalent http://httpbin.org/ip.
Note 1: From a normal browser I can access https://httpbin.org/ip (and other HTTPS sites) via the same corporate proxy.
Note 2: I have read about many similar issues on the net; some suggested it might be related to certificate verification, but urllib2 does not verify certificates anyway.
Note 3: Some people suggested monkeypatching in similar situations, but I guess there is no way to monkeypatch _ssl.c.
The problem is that Python's standard HTTP libraries do not fully speak Microsoft's proprietary NTLM authentication protocol.
I solved this by setting up a local NTLM-capable proxy - ntlmaps did the trick for me (*). It handles the authentication against the corporate proxy, and I point my Python code at this local proxy without any authentication credentials.
Additionally, I had to add a proxy handler for HTTPS to the Python code listed above. So I replaced the two lines
proxy_url = 'http://user:pw@ntlmproxy:port/'
proxy_handler = urllib2.ProxyHandler({'http': proxy_url})
with these lines:
proxy_url = 'http://localproxy:localport/'
proxy_url_https = 'https://localproxy:localport/'
proxy_handler = urllib2.ProxyHandler({'http': proxy_url, 'https': proxy_url_https})
Then the request works perfectly.
(*) ntlmaps is a Python program. For reasons specific to my environment, I needed the proxy itself to be a Python program.
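Putting the snippets above together, a minimal sketch of the full request path once the local proxy is running (localproxy and localport are the same placeholders as above, i.e. wherever ntlmaps listens):

import urllib2

# Both schemes go through the local NTLM-capable proxy; it adds the NTLM
# authentication towards the corporate proxy for us.
proxy_url = 'http://localproxy:localport/'
proxy_url_https = 'https://localproxy:localport/'
proxy_handler = urllib2.ProxyHandler({'http': proxy_url, 'https': proxy_url_https})
opener = urllib2.build_opener(proxy_handler, urllib2.HTTPHandler)
urllib2.install_opener(opener)

f = urllib2.urlopen('https://httpbin.org/ip')
print(f.read())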
I am trying to access a website from behind a corporate firewall using the code below:
import urllib2

# Basic auth credentials for the target URL
password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, url, username, password)
auth_handler = urllib2.HTTPBasicAuthHandler(password_mgr)

opener = urllib2.build_opener(auth_handler)
urllib2.install_opener(opener)
conn = urllib2.urlopen('http://python.org')
I am getting this error:
URLError: <urlopen error [Errno 11004] getaddrinfo failed>
I have tried different handlers (including ProxyHandler in a slightly different way), but it doesn't seem to work.
Any clues as to what could be the reason for the error, and any different ways to supply the credentials and make it work?
If you are using a proxy and that proxy requires a username and password (which many corporate proxies do), you need to set up the proxy handler with urllib2:
import urllib2

proxy_url = 'http://' + proxy_user + ':' + proxy_password + '@' + proxy_ip
proxy_support = urllib2.ProxyHandler({"http": proxy_url})
opener = urllib2.build_opener(proxy_support, urllib2.HTTPHandler)
urllib2.install_opener(opener)
HTTPBasicAuthHandler is used to provide credentials for the site you are going to access, not for going through the proxy. The above snippet might help you.
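If the site itself also requires credentials, the two handlers can be chained in the same opener. A hedged sketch of that combination; all names and credentials below are placeholders of my own, not from the question:

import urllib2

# Placeholder values for illustration only.
proxy_user, proxy_password, proxy_ip = 'user', 'pw', 'proxyhost:8080'
site_user, site_password = 'siteuser', 'sitepw'
url = 'http://python.org'

# Proxy credentials belong to ProxyHandler ...
proxy_url = 'http://' + proxy_user + ':' + proxy_password + '@' + proxy_ip
proxy_support = urllib2.ProxyHandler({'http': proxy_url, 'https': proxy_url})

# ... while HTTPBasicAuthHandler authenticates against the target site.
password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, url, site_user, site_password)
auth_handler = urllib2.HTTPBasicAuthHandler(password_mgr)

opener = urllib2.build_opener(proxy_support, auth_handler, urllib2.HTTPHandler)
urllib2.install_opener(opener)
conn = urllib2.urlopen(url)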
On Windows, I observed that Python uses the settings from IE's Internet Options -> LAN Settings.
So even if we use urllib2 to install an opener and specify the proxy_url, it continues to use the IE settings.
It finally worked when I exported a system variable:
http_proxy=http://userid:pswd@proxyurl.com:port
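A minimal sketch of the same idea done from inside the script rather than the shell, using the same userid/pswd/port placeholders as above:

import os
import urllib2

# Setting the variables before the first request should make urllib2 pick up
# the proxy from the environment rather than the IE LAN settings.
os.environ['http_proxy'] = 'http://userid:pswd@proxyurl.com:port'
os.environ['https_proxy'] = 'http://userid:pswd@proxyurl.com:port'

print(urllib2.urlopen('http://python.org').read()[:200])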