Python cannot bind requests to network interface

I cannot use a requests proxy to connect to an https URL with my local IP.
192.168.1.55 is my local IP.
If I comment out the https line in the proxies dict, the request works, but I know it is not actually going through the proxy:
import requests

ipAddress = '192.168.1.55:80'
proxies = {
    "http": "%s" % ipAddress,
    #"https": "%s" % ipAddress,
}
url = 'https://www.google.com'
res = requests.get(url, proxies=proxies)
print res
Result: <Response [200]>
With the https line uncommented:
import requests

ipAddress = '192.168.1.55:80'
proxies = {
    "http": "%s" % ipAddress,
    "https": "%s" % ipAddress,
}
url = 'https://www.google.com'
res = requests.get(url, proxies=proxies)
print res
Result:
requests.exceptions.ProxyError: HTTPSConnectionPool(host='www.google.com', port=443): Max retries exceeded with url: / (Caused by ProxyError('Cannot connect to proxy.', error('Tunnel connection failed: 400 Bad Request',)))
I also tried an external VPN server that supports the HTTPS protocol; with that server the request works even when the https proxy line is uncommented.
I have multiple IPs and would like to make requests from a specific one.
I am running this on Windows 10 with Python 2.7. I suspect an SSL problem (I don't think I am handling SSL properly), but I have yet to confirm that with the experts here.
I have tried a lot of ways to deal with the SSL, with no luck so far.

Note that your first example is not proxied at all: the URL is https:// and only an "http" proxy is configured, so requests connects directly (hence the 200). Also, 192.168.1.55 is your machine's own address, not a proxy server, which is why the CONNECT tunnel to it fails with 400 Bad Request. If the goal is to send requests from a selected local adapter/IP, bind the source address instead of using a proxy. First install requests_toolbelt:
pip install requests_toolbelt
then:
import requests
from requests_toolbelt.adapters.source import SourceAddressAdapter
# default binding
response = requests.get('https://ip.tyk.nu/').text
print(response)
# bind to 192.168.1.55
session = requests.Session()
session.mount('http://', SourceAddressAdapter('192.168.1.55'))
session.mount('https://', SourceAddressAdapter('192.168.1.55'))
response = session.get('https://ip.tyk.nu/').text
print(response)

Related

SSL Error CERTIFICATE_VERIFY_FAILED with requests BUT NOT with urllib.request

If I try to use requests.get() to connect to an HTTPS server (a Jenkins), I get the SSL error CERTIFICATE_VERIFY_FAILED: certificate verify failed: unable to get local issuer certificate (_ssl.c:997).
HTTPS connections work fine if I use curl or any browser.
The HTTPS server is an internal server but uses an SSL cert from DigiCert. It is a wildcard certificate, and the same certificate is used for a lot of other servers (like IIS servers) in my company, which work fine together with requests.
If I use the urllib package, the HTTPS connection is also fine.
I don't understand why requests doesn't work, and I'm asking what I can do to make requests work.
And no! verify=False is not the solution ;-)
For the SSLContext in the second function I have to call load_default_certs().
My system: Windows 10, Python 3.10, requests 2.28.1, urllib3 1.26.10, certifi 2022.6.15. Packages were installed today.
url = 'https://redmercury.acme.org/'

def use_requests(url):
    import requests
    try:
        r = requests.get(url)
        print(r)
    except Exception as e:
        print(e)

def use_magic_code_from_stackoverflow(url):
    import urllib.request
    import ssl
    ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    # ssl_context.verify_mode = ssl.CERT_REQUIRED
    # ssl_context.check_hostname = True
    ssl_context.load_default_certs()  # without this I get SSL error(s)
    # use the context built above
    https_handler = urllib.request.HTTPSHandler(context=ssl_context)
    opener = urllib.request.build_opener(https_handler)
    ret = opener.open(url, timeout=2)
    print(ret.status)

def use_urllib_requests(url):
    import urllib.request
    with urllib.request.urlopen(url) as response:
        print(response.status)

use_requests(url)                       # SSL error
use_magic_code_from_stackoverflow(url)  # server answers with 200
use_urllib_requests(url)                # server answers with 200
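The difference is plausibly the trust store: requests verifies against certifi's bundled CA list, while load_default_certs() additionally pulls in the Windows certificate store, which can supply an intermediate certificate that the server does not send. Two common workarounds, sketched under those assumptions (truststore is a third-party package for Python 3.10+, and the bundle path below is a placeholder):
# Option 1: have the stdlib ssl module (and therefore requests) use the OS trust store
# pip install truststore  (Python 3.10+)
import truststore
truststore.inject_into_ssl()

import requests
print(requests.get('https://redmercury.acme.org/'))

# Option 2: point requests at a CA bundle that contains the full chain
# r = requests.get(url, verify='C:/certs/corp-ca-bundle.pem')  # placeholder path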

Forcing prepared requests from HTTPS back to HTTP Python Zeep client (wsdl https, binding needs to be http)

Currently working on using zeep as a client binding to an application that we do not control (so we cannot change its behavior).
Unfortunately for me, the WSDL is hosted on an https:// page, while the binding itself ONLY supports HTTP, so I cannot simply change the binding address to HTTPS to make this work.
My assumption is that when the zeep client object is first created, the underlying python requests prepared request is forced to HTTPS from then on.
Question: Is there a way to tell zeep or python requests that the next request won't be HTTPS?
Example:
from requests import Session
from zeep import Client
from zeep.transports import Transport
import requests
from requests.packages.urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
session = Session()
session.verify = False
transport = Transport(session=session)
client = Client('https://example.local:8443/www/core-service/services/LoginService?wsdl', transport=transport)
with client.settings(raw_response=True):
    print(client.service.login('0', 'user', 'password'))
This returns the following error, because the next call goes to an http address:
requests.exceptions.SSLError: HTTPSConnectionPool(host='localhost', port=9090): Max retries exceeded with url: /core-service/services/LoginService (Caused by SSLError(SSLError(1, '[SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:877)'),))
You can set the force_https setting to False to disable the HTTPS forcing:
https://python-zeep.readthedocs.io/en/master/settings.html#settings
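A minimal sketch of that, building on the example above (Settings is importable from the zeep package in zeep 3+):
from requests import Session
from zeep import Client, Settings
from zeep.transports import Transport

session = Session()
session.verify = False
settings = Settings(force_https=False)  # keep the http:// binding address from the WSDL

client = Client(
    'https://example.local:8443/www/core-service/services/LoginService?wsdl',
    transport=Transport(session=session),
    settings=settings,
)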

python 'requests' package with proxy not working

Here is my test code:
import requests

url = 'https://api.ipify.org/'
proxyapi = 'http://ip.11jsq.com/index.php/api/entry?method=proxyServer.generate_api_url&packid=1&fa=0&fetch_key=&qty=1&time=1&pro=&city=&port=1&format=txt&ss=1&css=&dt=1'
proxy = {'http': 'http://{}'.format(requests.get(proxyapi).text)}
print('Downloading with fresh proxy.', proxy)
resp = requests.get(url, proxies=proxy)
print('Fresh proxy response status.', resp.status_code)
print(resp.text)
#terminal output
Downloading with fresh proxy. {'http': 'http://49.84.152.176:30311'}
Fresh proxy response status. 200
222.68.154.34  # my public IP address
There is no error message, and it seems that the requests lib never applies the proxy settings. The proxy API is valid: I checked the returned proxy in my web browser, and visiting https://api.ipify.org/ through it returns the desired proxy server's IP address.
I am using python 3.6.4 and requests 2.18.4.
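A likely explanation: the target URL is https://, but the proxies dict only has an 'http' entry, so requests makes a direct connection and returns your real IP. A sketch of the fix, assuming the proxy can tunnel HTTPS via CONNECT:
import requests

url = 'https://api.ipify.org/'
proxy_addr = 'http://49.84.152.176:30311'  # example value from the output above

# requests picks the proxy whose key matches the scheme of the target URL
proxies = {'http': proxy_addr, 'https': proxy_addr}

resp = requests.get(url, proxies=proxies)
print(resp.text)  # should now print the proxy's IP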

Python Requests doesn't work for https proxy

I'm trying to use an https proxy in Python like this:
proxiesDict = {
    'http': 'http://' + proxy_line,
    'https': 'https://' + proxy_line
}
response = requests.get('https://api.ipify.org/?format=json', proxies=proxiesDict, allow_redirects=False)
proxy_line is a proxy read from a file, in the format ip:port. I checked this https proxy in a browser and it works. But in Python this code hangs for a few seconds and then I get this exception:
HTTPSConnectionPool(host='api.ipify.org', port=443): Max retries exceeded with url: /?format=json (Caused by ProxyError('Cannot connect to proxy.', NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x0425E450>: Failed to establish a new connection: [WinError 10060]
I tried socks5 proxies as well, and they work with PySocks installed. But for https I get this exception. Can someone help me?
When specifying proxies for requests, the key is the scheme of the target URL and the value is the proxy address. You don't need to repeat a scheme in the value; if it is missing, requests assumes http:// for the proxy. Most "https" proxies are in fact plain-HTTP proxies that tunnel HTTPS via CONNECT, so prefixing the value with https:// (as in your proxiesDict) makes the client try to speak TLS to a proxy that doesn't, which is a common cause of this error.
So your proxiesDict will be:
proxiesDict = {
    'http': proxy_line,
    'https': proxy_line
}
You can also configure proxies by setting the environment variables:
$ export HTTP_PROXY="http://proxyIP:PORT"
$ export HTTPS_PROXY="http://proxyIP:PORT"
Then you only need to run your Python script without passing any proxies argument to the request.
You can also configure your proxy with credentials, in the form http://user:password@host.
For more information see this documentation: http://docs.python-requests.org/en/master/user/advanced/
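For example, a minimal sketch of proxy authentication with requests (the host and credentials are placeholders):
import requests

# hypothetical proxy with basic auth; note the '@' between credentials and host
proxies = {
    'http': 'http://user:password@10.10.1.10:3128',
    'https': 'http://user:password@10.10.1.10:3128',
}
r = requests.get('https://api.ipify.org/?format=json', proxies=proxies)
print(r.text)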
Try using pycurl; this function may help:
import pycurl
from io import BytesIO

def pycurl_downloader(url, proxy_url, proxy_usr):
    """
    Download files with pycurl.

    Proxy configuration:
    proxy_url = 'http://10.0.0.0:3128'
    proxy_usr = 'user:password'
    """
    c = pycurl.Curl()
    c.setopt(pycurl.FOLLOWLOCATION, 1)
    c.setopt(pycurl.MAXREDIRS, 5)
    c.setopt(pycurl.CONNECTTIMEOUT, 30)
    c.setopt(pycurl.AUTOREFERER, 1)
    if proxy_url: c.setopt(pycurl.PROXY, proxy_url)
    if proxy_usr: c.setopt(pycurl.PROXYUSERPWD, proxy_usr)

    content = BytesIO()
    c.setopt(pycurl.URL, url)
    c.setopt(pycurl.WRITEFUNCTION, content.write)
    try:
        c.perform()
        c.close()
    except pycurl.error as e:
        errno, errstr = e.args
        print('An error occurred:', errstr)
    return content.getvalue()
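Usage might look like this (the proxy address and credentials are placeholders):
data = pycurl_downloader('https://api.ipify.org/',
                         'http://10.0.0.0:3128',
                         'user:password')
print(data.decode('utf-8'))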

Using an HTTP PROXY - Python [duplicate]

This question already has answers here: Proxy with urllib2
I'm familiar with the fact that I should set the HTTP_PROXY environment variable to the proxy address.
Generally urllib works fine; the problem is dealing with urllib2.
>>> urllib2.urlopen("http://www.google.com").read()
returns
urllib2.URLError: <urlopen error [Errno 10061] No connection could be made because the target machine actively refused it>
or
urllib2.URLError: <urlopen error [Errno 11004] getaddrinfo failed>
Extra info:
urllib.urlopen(....) works fine! It is just urllib2 that is playing tricks...
I tried @Fenikso's answer but I'm getting this error now:
URLError: <urlopen error [Errno 10060] A connection attempt failed because the
connected party did not properly respond after a period of time, or established
connection failed because connected host has failed to respond>
Any ideas?
You can do it even without the HTTP_PROXY environment variable. Try this sample:
import urllib2
proxy_support = urllib2.ProxyHandler({"http":"http://61.233.25.166:80"})
opener = urllib2.build_opener(proxy_support)
urllib2.install_opener(opener)
html = urllib2.urlopen("http://www.google.com").read()
print html
In your case it really seems that the proxy server is refusing the connection.
Something more to try:
import urllib2
#proxy = "61.233.25.166:80"
proxy = "YOUR_PROXY_GOES_HERE"
proxies = {"http":"http://%s" % proxy}
url = "http://www.google.com/search?q=test"
headers={'User-agent' : 'Mozilla/5.0'}
proxy_support = urllib2.ProxyHandler(proxies)
opener = urllib2.build_opener(proxy_support, urllib2.HTTPHandler(debuglevel=1))
urllib2.install_opener(opener)
req = urllib2.Request(url, None, headers)
html = urllib2.urlopen(req).read()
print html
Edit 2014:
This seems to be a popular question / answer. However, today I would use the third-party requests module instead.
For a single request, just do:
import requests
r = requests.get("http://www.google.com",
                 proxies={"http": "http://61.233.25.166:80"})
print(r.text)
For multiple requests, use a Session object so you do not have to add the proxies parameter to every request:
import requests
s = requests.Session()
s.proxies = {"http": "http://61.233.25.166:80"}
r = s.get("http://www.google.com")
print(r.text)
I recommend you just use the requests module.
It is much easier than the built-in HTTP clients:
http://docs.python-requests.org/en/latest/index.html
Sample usage:
r = requests.get('http://www.thepage.com', proxies={"http":"http://myproxy:3129"})
thedata = r.content
Just wanted to mention that you may also have to set the https_proxy OS environment variable in case https URLs need to be accessed.
In my case it was not obvious, and I tried for hours to discover this.
My use case: Win 7, jython-standalone-2.5.3.jar, setuptools installation via ez_setup.py
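On Windows that can be done in the shell before running the script; the proxy address is a placeholder:
set HTTP_PROXY=http://proxyIP:PORT
set HTTPS_PROXY=http://proxyIP:PORT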
Python 3:
import urllib.request

url = 'http://www.google.com'  # example target
htmlsource = urllib.request.FancyURLopener({"http": "http://127.0.0.1:8080"}).open(url).read().decode("utf-8")
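FancyURLopener is deprecated in Python 3; a sketch of the same thing with urllib.request's ProxyHandler (same placeholder proxy and target):
import urllib.request

url = 'http://www.google.com'  # example target
proxy_handler = urllib.request.ProxyHandler({'http': 'http://127.0.0.1:8080'})
opener = urllib.request.build_opener(proxy_handler)
htmlsource = opener.open(url).read().decode('utf-8')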
I encountered this with a Jython client.
The server was only talking TLS while the client was using an SSL context:
javax.net.ssl.SSLContext.getInstance("SSL")
Once the client was switched to TLS (SSLContext.getInstance("TLS")), things started working.
