During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "main.py", line 34, in <module>
    page = http.robotCheck()
  File "C:\Users\user001\Desktop\automation\Loader.py", line 21, in robotCheck
    request = requests.get('http://*redacted*.onion/login')
  File "C:\Users\user001\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\api.py", line 72, in get
    return request('get', url, params=params, **kwargs)
  File "C:\Users\user001\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\api.py", line 58, in request
    return session.request(method=method, url=url, **kwargs)
  File "C:\Users\user001\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\sessions.py", line 512, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\Users\user001\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\sessions.py", line 622, in send
    r = adapter.send(request, **kwargs)
  File "C:\Users\user001\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\adapters.py", line 513, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='*redacted*.onion', port=80): Max retries exceeded with url: /login (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0348E230>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))
Here's the code of my Loader class. Loader is responsible for uploading and making requests to a resource over Tor. I configured the proxy, but it still throws this error.
I disabled my VPN, the Windows firewall, and everything else I could.
import socket
import requests

class Loader:
    url = "http://*redacted*.onion/login"
    user = "username"
    password = 'password'
    manageUrl = ''

    def __init__(self, config):
        self.config = config
        self.restartSession()

    def restartSession(self):
        self.userSession = requests.session()
        self.userSession.proxies['http'] = 'http://127.0.0.1:9050'
        self.userSession.proxies['https'] = 'https://127.0.0.1:9051'

    def robotCheck(self):
        request = requests.get('http://*redacted*.onion/login')
        print(request)
        #self.session.post(self.robotCheckUrl, data=checkResult)

    def authorization(self):
        self.session.get(self.url)
        authPage = self.session.post(self.url, data = self.getAuthData())

    def getAuthData(self):
        return {'login' : self.user, 'password' : self.password}
Code which calls the Loader class:
http = Loader(Config())
page = http.robotCheck()
Tor is a SOCKS proxy, so the proxy configuration needs to be slightly different.
Change the following lines:
self.userSession.proxies['http'] = 'http://127.0.0.1:9050'
self.userSession.proxies['https'] = 'https://127.0.0.1:9051'
To:
self.userSession.proxies['http'] = 'socks5h://127.0.0.1:9050'
self.userSession.proxies['https'] = 'socks5h://127.0.0.1:9050'
Port 9051 is the Tor Controller port. For both HTTP and HTTPS SOCKS connections over Tor, use port 9050 (the default SOCKS port).
The socks5h scheme is needed so that DNS names are resolved over Tor instead of by the client; this keeps DNS lookups private and is required to resolve .onion addresses.
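Note also that robotCheck calls requests.get directly, which bypasses self.userSession and its proxies entirely. A minimal sketch of the two corrected methods, assuming the rest of the Loader class stays unchanged:

def restartSession(self):
    self.userSession = requests.session()
    # Route both schemes through Tor's SOCKS port, resolving DNS remotely.
    self.userSession.proxies['http'] = 'socks5h://127.0.0.1:9050'
    self.userSession.proxies['https'] = 'socks5h://127.0.0.1:9050'

def robotCheck(self):
    # Go through the session so the request actually uses the proxy.
    request = self.userSession.get('http://*redacted*.onion/login')
    print(request)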
EDIT: I was able to make a SOCKS request for a .onion address using the following example:
import socket
import requests
s = requests.session()
s.proxies['http'] = 'socks5h://127.0.0.1:9050'
s.proxies['https'] = 'socks5h://127.0.0.1:9050'
print(s.proxies)
r = s.get('http://***site***.onion/')
Make sure you have the most up-to-date requests library with pip3 install -U requests[socks].
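To confirm the session is actually routing through Tor before hitting the .onion URL, a quick check (a sketch; I'm assuming the Tor Project's /api/ip checker endpoint here):

import requests

s = requests.session()
s.proxies['http'] = 'socks5h://127.0.0.1:9050'
s.proxies['https'] = 'socks5h://127.0.0.1:9050'

# Should print something like {"IsTor": true, "IP": "..."} when Tor is used.
r = s.get('https://check.torproject.org/api/ip')
print(r.json())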
Wrap the call like this to catch the exception and print it instead of a full traceback.
def robotCheck(self):
    try:
        request = requests.get('http://hydraruzxpnew4af.onion/login')
        print(request)
    except requests.exceptions.RequestException as e:
        print('exception caught', e)
    #self.session.post(self.robotCheckUrl, data=checkResult)
You may get these errors for the following reasons:
- The server may refuse your connection (you're sending too many requests from the same IP address in a short period of time); see the retry sketch below.
- Your proxy settings may be wrong; check them.
For your reference : https://github.com/requests/requests/issues/1198
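If the server is refusing connections because you're sending too many requests, one mitigation (a sketch of my own, not from the linked issue) is to set a timeout and mount an adapter with automatic retries and backoff; Retry and HTTPAdapter are standard urllib3/requests APIs:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
# Retry up to 3 times, backing off exponentially, on transient failures.
retry = Retry(total=3, backoff_factor=1, status_forcelist=[429, 500, 502, 503])
session.mount('http://', HTTPAdapter(max_retries=retry))
session.mount('https://', HTTPAdapter(max_retries=retry))

response = session.get('http://example.onion/login', timeout=30)  # placeholder URL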
Related
I cannot use a requests proxy to connect to HTTPS with my local IP.
192.168.1.55 is my local IP.
If I comment out the https line in the proxy dict, it works, but then I know HTTPS traffic is not actually going through the proxy.
import requests

ipAddress = '192.168.1.55:80'
proxies = {
    "http": "%s" % ipAddress,
    #"https": "%s" % ipAddress,
}
url = 'https://www.google.com'
res = requests.get(url, proxies=proxies)
print res
Result: <Response [200]>
import requests

ipAddress = '192.168.1.55:80'
proxies = {
    "http": "%s" % ipAddress,
    "https": "%s" % ipAddress,
}
url = 'https://www.google.com'
res = requests.get(url, proxies=proxies)
print res
Result:
requests.exceptions.ProxyError: HTTPSConnectionPool(host='www.google.com', port=443): Max retries exceeded with url: / (Caused by ProxyError('Cannot connect to proxy.', error('Tunnel connection failed: 400 Bad Request',)))
I also tried an external VPN server which supports the HTTPS protocol; with that server, the request works even with the https proxy line enabled.
I have multiple IPs and would like to use a specified IP with the request.
I am running it on Windows 10 with Python 2.7, and I suspect it is due to an SSL problem, yet to be confirmed by the experts here (I think I didn't deal with the SSL properly).
I tried a lot of ways to deal with the SSL, with no luck so far.
You can try this to bind requests to a selected adapter/IP, but first install requests_toolbelt:
pip install requests_toolbelt
then
import requests
from requests_toolbelt.adapters.source import SourceAddressAdapter
# default binding
response = requests.get('https://ip.tyk.nu/').text
print(response)
# bind to 192.168.1.55
session = requests.Session()
session.mount('http://', SourceAddressAdapter('192.168.1.55'))
session.mount('https://', SourceAddressAdapter('192.168.1.55'))
response = session.get('https://ip.tyk.nu/').text
print(response)
I'm trying to use an HTTPS proxy in Python like this:
proxiesDict = {
    'http': 'http://' + proxy_line,
    'https': 'https://' + proxy_line
}
response = requests.get('https://api.ipify.org/?format=json', proxies=proxiesDict, allow_redirects=False)
proxy_line is a proxy read from a file in the format ip:port. I checked this HTTPS proxy in a browser and it works, but in Python this code hangs for a few seconds and then I get an exception:
HTTPSConnectionPool(host='api.ipify.org', port=443): Max retries exceeded with url: /?format=json (Caused by ProxyError('Cannot connect to proxy.', NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x0425E450>: Failed to establish a new connection: [WinError 10060]
I also tried SOCKS5 proxies, and those work with PySocks installed. But for HTTPS proxies I get this exception. Can someone help me?
When specifying a proxy dict for requests, the key is the protocol and the value is the domain/IP. You don't need to specify http:// or https:// again in the value.
So, your proxiesDict will be:
proxiesDict = {
    'http': proxy_line,
    'https': proxy_line
}
You can also configure proxies by setting the environment variables:
$ export HTTP_PROXY="http://proxyIP:PORT"
$ export HTTPS_PROXY="http://proxyIP:PORT"
Then you only need to execute your Python script without passing proxies explicitly in the request.
Also, you can configure your proxy in the form http://user:password@host.
For more information see this documentation: http://docs.python-requests.org/en/master/user/advanced/
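Putting the two pieces of advice together, a minimal sketch (the ip:port value stands in for a line read from your proxy file):

import requests

proxy_line = '203.0.113.7:3128'  # placeholder ip:port from your file
proxiesDict = {
    'http': proxy_line,
    'https': proxy_line
}
response = requests.get('https://api.ipify.org/?format=json',
                        proxies=proxiesDict, allow_redirects=False)
print(response.text)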
Try using pycurl; this function may help:
import pycurl
from StringIO import StringIO

def pycurl_downloader(url, proxy_url, proxy_usr):
    """
    Download files with pycurl
    the proxy configuration:
    proxy_url = 'http://10.0.0.0:3128'
    proxy_usr = 'user:password'
    """
    c = pycurl.Curl()
    c.setopt(pycurl.FOLLOWLOCATION, 1)
    c.setopt(pycurl.MAXREDIRS, 5)
    c.setopt(pycurl.CONNECTTIMEOUT, 30)
    c.setopt(pycurl.AUTOREFERER, 1)
    if proxy_url: c.setopt(pycurl.PROXY, proxy_url)
    if proxy_usr: c.setopt(pycurl.PROXYUSERPWD, proxy_usr)

    content = StringIO()
    c.setopt(pycurl.URL, url)
    c.setopt(c.WRITEFUNCTION, content.write)
    try:
        c.perform()
        c.close()
    except pycurl.error, error:
        errno, errstr = error
        print 'An error occurred: ', errstr
    return content.getvalue()
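Hypothetical usage of the function above (the URL, proxy address, and credentials are placeholders):

data = pycurl_downloader('https://api.ipify.org/?format=json',
                         'http://10.0.0.0:3128', 'user:password')
print data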
Notes:
Versions: Python 2.7.11, requests '2.10.0', 'OpenSSL 1.0.2d 9 Jul 2015'.
Please read the comment below by Martijn Pieters before reproducing.
Initially I tried to get the document from https://www.neco.navy.mil/necoattach/N6945016R0626_2016-06-20__INFO_NAS_Pensacola_Base_Access.docx using the code below.
Code 1:
>>> import requests
>>> requests.get("https://www.neco.navy.mil/necoattach/N6945016R0626_2016-06-20__INFO_NAS_Pensacola_Base_Access.docx",verify=False)
Error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\mob140003207\AppData\Local\Enthought\Canopy\User\lib\site-packages\requests\api.py", line 67, in get
    return request('get', url, params=params, **kwargs)
  File "C:\Users\mob140003207\AppData\Local\Enthought\Canopy\User\lib\site-packages\requests\api.py", line 53, in request
    return session.request(method=method, url=url, **kwargs)
  File "C:\Users\mob140003207\AppData\Local\Enthought\Canopy\User\lib\site-packages\requests\sessions.py", line 468, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\Users\mob140003207\AppData\Local\Enthought\Canopy\User\lib\site-packages\requests\sessions.py", line 576, in send
    r = adapter.send(request, **kwargs)
  File "C:\Users\mob140003207\AppData\Local\Enthought\Canopy\User\lib\site-packages\requests\adapters.py", line 447, in send
    raise SSLError(e, request=request)
requests.exceptions.SSLError: ("bad handshake: SysCallError(10054, 'WSAECONNRESET')",)
After googling and searching, I found that using SSL verification and a session with adapters can solve the problem, but I still got errors. Please find the code and errors below.
Code 2:
import requests
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.poolmanager import PoolManager
import ssl
import traceback

class MyAdapter(HTTPAdapter):
    def init_poolmanager(self, connections, maxsize, block=False):
        self.poolmanager = PoolManager(num_pools=connections,
                                       maxsize=maxsize,
                                       block=block,
                                       ssl_version=ssl.PROTOCOL_TLSv1)

s = requests.Session()
s.mount('https://', MyAdapter())
print "Mounted "
r = s.get("https://www.neco.navy.mil/necoattach/N6945016R0626_2016-06-20__INFO_NAS_Pensacola_Base_Access.docx", stream=True, timeout=120)
Error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\mob140003207\AppData\Local\Enthought\Canopy\User\lib\site-packages\requests\sessions.py", line 480, in get
    return self.request('GET', url, **kwargs)
  File "C:\Users\mob140003207\AppData\Local\Enthought\Canopy\User\lib\site-packages\requests\sessions.py", line 468, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\Users\mob140003207\AppData\Local\Enthought\Canopy\User\lib\site-packages\requests\sessions.py", line 576, in send
    r = adapter.send(request, **kwargs)
  File "C:\Users\mob140003207\AppData\Local\Enthought\Canopy\User\lib\site-packages\requests\adapters.py", line 447, in send
    raise SSLError(e, request=request)
requests.exceptions.SSLError: ("bad handshake: SysCallError(10054, 'WSAECONNRESET')",)
First of all, I confirm that the host, www.neco.navy.mil, is not accessible from everywhere. From some networks (geographies) it works; from others the connection just hangs:
$ curl www.neco.navy.mil
curl: (7) couldn't connect to host
$ curl https://www.neco.navy.mil
curl: (7) couldn't connect to host
Second, when a connection can be established, there is a certificate problem:
$ curl -v https://www.neco.navy.mil
* Rebuilt URL to: https://www.neco.navy.mil/
* Hostname was NOT found in DNS cache
* Trying 205.85.2.133...
* Connected to www.neco.navy.mil (205.85.2.133) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS alert, Server hello (2):
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
To make sure, feed it to the Qualys SSL tester. The result shows that the CA (DoD Root CA 2) is not trusted; moreover, it's not in the chain. Note that the OpenSSL validation process needs the whole chain:
Firstly a certificate chain is built up starting from the supplied certificate and ending in the root CA. It is an error if the whole chain cannot be built up.
But there's only www.neco.navy.mil -> DODCA-28. It may be related to the TLD and an extra security measure, but a C grade alone isn't much anyway ;-)
On the Python side it won't be much different. If you don't have access to the CA, you can only disable certificate validation entirely (after you have the connectivity problem solved, of course). If you have it, you can use cafile.
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import urllib2
import ssl

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

r = urllib2.urlopen('https://www.neco.navy.mil/'
    'necoattach/N6945016R0626_2016-06-20__INFO_NAS_Pensacola_Base_Access.docx',
    timeout = 5, context = ctx)
print(len(r.read()))

r = urllib2.urlopen('https://www.neco.navy.mil/'
    'necoattach/N6945016R0626_2016-06-20__INFO_NAS_Pensacola_Base_Access.docx',
    timeout = 5, cafile = '/path/to/DODCA-28_and_DoD_Root_CA_2.pem')
print(len(r.read()))
To reproduce with a specific version of Python, use a simple Dockerfile like the following:
FROM python:2.7.11
WORKDIR /opt
ADD . ./
CMD dpkg -s openssl | grep Version && ./app.py
Then run:
docker build -t ssl-test .
docker run --rm ssl-test
This snippet works for me (Python 2.7.11 64-bit + requests==2.10.0) on Windows 7:
import requests
import ssl
import traceback
import shutil
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.poolmanager import PoolManager

class MyAdapter(HTTPAdapter):
    def init_poolmanager(self, connections, maxsize, block=False):
        self.poolmanager = PoolManager(num_pools=connections,
                                       maxsize=maxsize,
                                       block=block,
                                       ssl_version=ssl.PROTOCOL_TLSv1)

if __name__ == "__main__":
    s = requests.Session()
    s.mount('https://', MyAdapter())
    print "Mounted "

    filename = "N6945016R0626_2016-06-20__INFO_NAS_Pensacola_Base_Access.docx"
    r = s.get(
        "https://www.neco.navy.mil/necoattach/{0}".format(filename),
        verify=False, stream=True, timeout=120)

    if r.status_code == 200:
        with open(filename, 'wb') as f:
            r.raw.decode_content = True
            shutil.copyfileobj(r.raw, f)
I use Python 2.7.6 and this simple example still works on my Ubuntu 14.04:
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import requests
from requests.packages.urllib3.exceptions import InsecureRequestWarning

requests.packages.urllib3.disable_warnings(InsecureRequestWarning)

with open('out.docx', 'wb') as h:
    r = requests.get("https://www.neco.navy.mil/necoattach/N6945016R0626_2016-06-20__INFO_NAS_Pensacola_Base_Access.docx", verify=False, stream=True)
    for block in r.iter_content(1024):
        h.write(block)
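If you do have the site's CA certificates, a safer variant of the same download (a sketch; the bundle filename is hypothetical) passes them to requests via verify instead of disabling validation:

import requests

# 'DODCA-28_and_DoD_Root_CA_2.pem' is a hypothetical local bundle holding
# the DoD Root CA 2 root and the DODCA-28 intermediate discussed above.
r = requests.get("https://www.neco.navy.mil/necoattach/N6945016R0626_2016-06-20__INFO_NAS_Pensacola_Base_Access.docx",
                 verify='DODCA-28_and_DoD_Root_CA_2.pem', stream=True)
print(r.status_code)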
I use Python 2.6 and request the Facebook API over HTTPS. I guess my service could be the target of man-in-the-middle attacks.
Reading the urllib module documentation again this morning, I discovered this warning:
Warning: When opening HTTPS URLs, it is not attempted to validate the server certificate. Use at your own risk!
Do you have hints / URLs / examples for performing full certificate validation?
Thanks for your help.
You could create a urllib2 opener which can do the validation for you using a custom handler. The following code is an example that works with Python 2.7.3. It assumes you have downloaded http://curl.haxx.se/ca/cacert.pem to the same folder where the script is saved.
#!/usr/bin/env python

import urllib2
import httplib
import ssl
import socket
import os

CERT_FILE = os.path.join(os.path.dirname(__file__), 'cacert.pem')

class ValidHTTPSConnection(httplib.HTTPConnection):
    "This class allows communication via SSL."

    default_port = httplib.HTTPS_PORT

    def __init__(self, *args, **kwargs):
        httplib.HTTPConnection.__init__(self, *args, **kwargs)

    def connect(self):
        "Connect to a host on a given (SSL) port."
        sock = socket.create_connection((self.host, self.port),
                                        self.timeout, self.source_address)
        if self._tunnel_host:
            self.sock = sock
            self._tunnel()
        # Wrap the socket, requiring a certificate that validates against
        # the CA bundle in CERT_FILE.
        self.sock = ssl.wrap_socket(sock,
                                    ca_certs=CERT_FILE,
                                    cert_reqs=ssl.CERT_REQUIRED)

class ValidHTTPSHandler(urllib2.HTTPSHandler):

    def https_open(self, req):
        return self.do_open(ValidHTTPSConnection, req)

opener = urllib2.build_opener(ValidHTTPSHandler)

def test_access(url):
    print "Accessing", url
    page = opener.open(url)
    print page.info()
    data = page.read()
    print "First 100 bytes:", data[0:100]
    print "Done accessing", url
    print ""

# This should work
test_access("https://www.google.com")

# Accessing a page with a self-signed certificate should not work
# At the time of writing, the following page uses a self-signed certificate
test_access("https://tidia.ita.br/")
Running this script, you should see output like this:
Accessing https://www.google.com
Date: Mon, 14 Jan 2013 14:19:03 GMT
Expires: -1
...
First 100 bytes: <!doctype html><html itemscope="itemscope" itemtype="http://schema.org/WebPage"><head><meta itemprop
Done accessing https://www.google.com

Accessing https://tidia.ita.br/
Traceback (most recent call last):
  File "https_validation.py", line 54, in <module>
    test_access("https://tidia.ita.br/")
  File "https_validation.py", line 42, in test_access
    page = opener.open(url)
  ...
  File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 1177, in do_open
    raise URLError(err)
urllib2.URLError: <urlopen error [Errno 1] _ssl.c:504: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed>
If you have a trusted Certificate Authority (CA) file, you can use Python 2.6 and later's ssl library to validate the certificate. Here's some code:
import os.path
import ssl
import sys
import urlparse
import urllib

def get_ca_path():
    '''Download the Mozilla CA file cached by the cURL project.

    If you have a trusted CA file from your OS, return the path
    to that instead.
    '''
    cafile_local = 'cacert.pem'
    cafile_remote = 'http://curl.haxx.se/ca/cacert.pem'
    if not os.path.isfile(cafile_local):
        print >> sys.stderr, "Downloading %s from %s" % (
            cafile_local, cafile_remote)
        urllib.urlretrieve(cafile_remote, cafile_local)
    return cafile_local

def check_ssl(hostname, port=443):
    '''Check that an SSL certificate is valid.'''
    print >> sys.stderr, "Validating SSL cert at %s:%d" % (
        hostname, port)
    cafile_local = get_ca_path()
    try:
        server_cert = ssl.get_server_certificate((hostname, port),
                                                 ca_certs=cafile_local)
    except ssl.SSLError:
        print >> sys.stderr, "SSL cert at %s:%d is invalid!" % (
            hostname, port)
        raise

class CheckedSSLUrlOpener(urllib.FancyURLopener):
    '''A URL opener that checks that SSL certificates are valid.

    On SSL error, it will raise ssl.SSLError.
    '''
    def open(self, fullurl, data=None):
        urlbits = urlparse.urlparse(fullurl)
        if urlbits.scheme == 'https':
            if ':' in urlbits.netloc:
                hostname, port = urlbits.netloc.split(':')
                port = int(port)  # split() returns strings; "%d" needs an int
            else:
                hostname = urlbits.netloc
                if urlbits.port is None:
                    port = 443
                else:
                    port = urlbits.port
            check_ssl(hostname, port)
        return urllib.FancyURLopener.open(self, fullurl, data)

# Plain usage - can probably do once per day
check_ssl('www.facebook.com')

# URL Opener
opener = CheckedSSLUrlOpener()
opener.open('https://www.facebook.com/find-friends/browser/')

# Make it the default
urllib._urlopener = opener
urllib.urlopen('https://www.facebook.com/find-friends/browser/')
Some dangers with this code:
- You have to trust the CA file from the cURL project (http://curl.haxx.se/ca/cacert.pem), which is a cached version of Mozilla's CA file. It is also served over HTTP, so there is a potential MITM attack. It's better to replace get_ca_path with one that returns your local CA file, which will vary from host to host.
- There is no attempt to see if the CA file has been updated. Eventually, root certs will expire or be deactivated, and new ones will be added. A good idea would be to use a cron job to delete the cached CA file, so that a new one is downloaded daily.
- It's probably overkill to check certificates every time. You could manually check once per run, or keep a list of 'known good' hosts over the course of the run (see the sketch after this list). Or, be paranoid!
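For the third point, here is a small sketch of that 'known good' cache (my addition, reusing check_ssl from the example above):

_checked_hosts = set()

def check_ssl_once(hostname, port=443):
    '''Validate each (hostname, port) pair only once per run.'''
    key = (hostname, port)
    if key not in _checked_hosts:
        check_ssl(hostname, port)  # raises ssl.SSLError on failure
        _checked_hosts.add(key)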
I'm trying to open a URL with urllib2 patched by gevent on Windows XP:
from gevent import monkey
monkey.patch_all()
import urllib2
opener = urllib2.build_opener()
request = urllib2.Request("http://www.google.com")
response = opener.open(request)
And I get this exception during the opener.open call:
File "C:\Python26\lib\site-packages\gevent\socket.py", line 768, in getaddrinfo
sockaddr = (inet_ntop(AF_INET6, res), port, 0, 0)
File "C:\Python26\lib\site-packages\gevent\socket.py", line 133, in inet_ntop
raise NotImplementedError('inet_ntop() is not available on this platform')
NotImplementedError: inet_ntop() is not available on this platform
<SERPScrapper at 0xbc0f60> failed with NotImplementedError
Looking at the gevent socket.py source code, it seems to be related to IPv6 on Windows...
Any ideas or suggestions to solve this problem?
Edit: I don't get the problem with other URLs (e.g. http://www.bing.com). It seems that Google is using IPv6. Is there a way to force an IPv4 response?
Try making your request to http://ipv4.google.com/ instead.
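If you cannot change the URL, another possible workaround (a hedged sketch; I have not tested it against gevent's old Windows builds) is to force getaddrinfo to return only IPv4 results, so the IPv6 inet_ntop branch is never reached. Apply it after monkey.patch_all() so it wraps gevent's patched resolver:

import socket

_orig_getaddrinfo = socket.getaddrinfo

def _ipv4_getaddrinfo(host, port, family=0, *args, **kwargs):
    # Ignore the requested family and always ask for IPv4 addresses.
    return _orig_getaddrinfo(host, port, socket.AF_INET, *args, **kwargs)

socket.getaddrinfo = _ipv4_getaddrinfo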