Python 'requests' package with proxy not working

Here is my test code:
import requests
url = 'https://api.ipify.org/'
proxyapi = 'http://ip.11jsq.com/index.php/api/entry?method=proxyServer.generate_api_url&packid=1&fa=0&fetch_key=&qty=1&time=1&pro=&city=&port=1&format=txt&ss=1&css=&dt=1'
proxy = {'http' : 'http://{}'.format(requests.get(proxyapi).text)}
print ('Downloading with fresh proxy.', proxy)
resp = requests.get(url, proxies=proxy)
print ('Fresh proxy response status.', resp.status_code)
print (resp.text)
#terminal output
Downloading with fresh proxy. {'http': 'http://49.84.152.176:30311'}
Fresh proxy response status. 200
222.68.154.34#my public ip address
There is no error message, and it seems that the requests library never applies the proxy settings. The proxy from the proxyapi is valid: I've checked it in my web browser, and when visiting https://api.ipify.org/ through it, the desired proxy server's IP address is returned.
I am using Python 3.6.4 and requests 2.18.4.
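A note on how requests selects proxies: the proxies dict is keyed by URL scheme, and the code above only defines an 'http' entry while the target URL is https://, so requests falls back to a direct connection for that request. A minimal sketch, assuming the proxy also accepts HTTPS traffic (CONNECT tunneling), would add an 'https' key as well:
import requests

url = 'https://api.ipify.org/'
proxy_addr = '49.84.152.176:30311'  # example address taken from the output above

# requests picks a proxy by the scheme of the requested URL, so an https:// URL
# needs an 'https' entry; whether this proxy supports CONNECT is an assumption.
proxies = {
    'http': 'http://{}'.format(proxy_addr),
    'https': 'http://{}'.format(proxy_addr),
}

resp = requests.get(url, proxies=proxies)
print(resp.status_code, resp.text)  # should show the proxy's IP if the tunnel works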

Related

How to connect to a website with an IPv6 HTTPS proxy

import requests
import socket
from unittest.mock import patch
orig_getaddrinfo = socket.getaddrinfo
def getaddrinfoIPv6(host, port, family=0, type=0, proto=0, flags=0):
    return orig_getaddrinfo(host=host, port=port, family=socket.AF_INET6, type=type, proto=proto, flags=flags)
with patch('socket.getaddrinfo', side_effect=getaddrinfoIPv6):
    r = requests.get('http://icanhazip.com')
    print(r.text)
Instead of using an IPv4 proxy to connect to a website, I would like to connect using an IPv6 HTTPS proxy. I have scoured Google for answers and have not found any (that I understand)... The closest I have found is the code above (it does not use the IPv6 proxy; instead it uses my own IPv6 address). I am open to using something besides requests to do this; however, requests is preferred. I will be attempting to thread later on.
import requests
from requests.packages.urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
proxy = {"http":"http://username:password#[2604:0180:2:3b5:9ebc:64e9:166c:d9f9]", "https":"https://username:password#[2604:0180:2:3b5:9ebc:64e9:166c:d9f9]"}
url = "https://icanhazip.com"
r = requests.get(url, proxies=proxy, verify=False)
print(r.content)
If the code above does not work, try the following:
import requests
proxy = {"http": "http://userame:password#168.235.109.30:18117", "https":"https://userame:password#168.235.109.30:18117"}
url = "https://icanhazip.com"
r = requests.get(url, proxies=proxy)
print(r.content)
This is my current provider for my IPv6 HTTPS proxy; however, they run IPv6 over IPv4 to their clients, which is why this code works and the code above does not (if using the same provider). If you are using a provider that supports IPv6 natively, then the code at the top should work for you.
You can use https://proxyturk.net/
Example curl command:
curl -m 90 -x http://proxyUsername:proxyPassword@93.104.200.99:20000 http://api6.ipify.org
You will see a result like:
2a13:c206:2021:1522:9c5a:3ed5:156b:c1d0

Python cannot bind requests to network interface

I cannot use a requests proxy to connect to an HTTPS site with my local IP.
192.168.1.55 is my local IP.
If I comment out the https line in the proxies dict, it works, but I know it is not actually using the proxies.
import requests
ipAddress = '192.168.1.55:80'
proxies = {
"http": "%s" % ipAddress,
#"https": "%s" % ipAddress,
}
url = 'https://www.google.com'
res = requests.get(url, proxies=proxies)
print res
Result: <Response [200]>
import requests
ipAddress = '192.168.1.55:80'
proxies = {
"http": "%s" % ipAddress,
"https": "%s" % ipAddress,
}
url = 'https://www.google.com'
res = requests.get(url, proxies=proxies)
print res
Result:
requests.exceptions.ProxyError: HTTPSConnectionPool(host='www.google.com', port=443): Max retries exceeded with url: / (Caused by ProxyError('Cannot connect to proxy.', error('Tunnel connection failed: 400 Bad Request',)))
I also tried an external VPN server which supports the HTTPS protocol; with that, it works even when the https proxy line is uncommented.
I have multiple IPs and would like to use a specific IP for the request.
I am running this on Windows 10 with Python 2.7, and I suspect it is an SSL problem, though I have yet to confirm that with the experts here (I think I didn't handle the SSL properly).
I have tried a lot of ways to deal with the SSL, with no luck so far.
You can try this to bind requests to a selected adapter/IP, but first install requests_toolbelt:
pip install requests_toolbelt
then
import requests
from requests_toolbelt.adapters.source import SourceAddressAdapter
# default binding
response = requests.get('https://ip.tyk.nu/').text
print(response)
# bind to 192.168.1.55
session = requests.Session()
session.mount('http://', SourceAddressAdapter('192.168.1.55'))
session.mount('https://', SourceAddressAdapter('192.168.1.55'))
response = session.get('https://ip.tyk.nu/').text
print(response)

Python HTTP HEAD request keepalive

Using Python httplib or httpclient, what code do I need to use in my HTTP client to:
use an HTTP HEAD request,
contact a web server by specifying only its IP address,
contact a web server without specifying any webpage (or homepage) in the request, and
extend its HTTP connection using keepalive messages?
I used the following code example but it has two problems:
It does not extend the http connection using Keepalive,
It gives me an error message "500 Domain Not Found" if I use the IP address instead of the domain name.
import http.client
Connection = http.client.HTTPConnection("www.python.org")
Connection.request("HEAD", "")
response = Connection.getresponse()
print(response.status, response.reason)
requests allows you to:
send requests with the HEAD method:
import requests
resp = requests.head("http://www.python.org")
use sessions for automatic keep-alive:
s = requests.Session()
resp = s.head("http://www.python.org")
resp2 = s.get("http://www.python.org/")
Regarding using the IP address instead of the domain: that has nothing to do with your request. Most sites use some kind of virtual hosting, so they don't respond to the bare IP address, only to specific domain names. If you ask for the IP address, you may get a 500 error or an error message.
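If you really do want to contact the server by IP address, one common workaround (a sketch with hypothetical values, not tied to any particular site) is to connect to the IP but send the intended domain in the Host header so the virtual host can be resolved:
import requests

# Hypothetical values: replace with the server's real IP and the site's hostname.
ip = '203.0.113.10'
resp = requests.head('http://{}/'.format(ip), headers={'Host': 'www.example.com'})
print(resp.status_code, resp.headers.get('Server'))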

Python - get HTML from HTTPS url - socket module does not have ssl enabled?

I am trying to get the HTML for a website, but the site uses HTTPS, not HTTP.
import httplib
c = httplib.HTTPSConnection("google.com")
c.request("GET", "/")
response = c.getresponse()
print response.status, response.reason
data = response.read()
print data
According to HTTPS connection Python, the socket module needs to be compiled with SSL or have SSL enabled.
How do I do this? I am running Ubuntu 14.04.
Thanks
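As a quick check before rebuilding anything, you can test whether your interpreter was built with SSL support simply by importing the ssl module; this is a minimal sketch (on a stock Ubuntu 14.04 Python 2.7 the import normally succeeds, so the httplib code above should work as-is):
# If this import fails, the interpreter was built without SSL support.
import ssl
print ssl.OPENSSL_VERSION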

Pycurl SSL connection through NTLM

I am struggling with connections to HTTPS sites and files. We have NTLM network proxy authentication. HTTP connections are working like a charm, but HTTPS gets stuck with this error:
pycurl.error: (27, "SSL: couldn't create a context: error:140A90A1:lib(20):func(169):reason(161)")
I tried setting verifypeer to 0, but it doesn't work; the same goes for conn.setopt(pycurl.SSL_CIPHER_LIST, 'rsa_rc4_128_sha'). I want to download https://nbp.pl/kursy/xml/LastA.xml. Any clue?
The code:
import pycurl

conn = pycurl.Curl()
conn.setopt(pycurl.URL, url)
conn.setopt(pycurl.PROXY, proxy)
conn.setopt(pycurl.PROXYPORT,8080)
conn.setopt(pycurl.HTTPAUTH, pycurl.HTTPAUTH_NTLM)
conn.setopt(pycurl.PROXYUSERPWD, user)
conn.setopt(pycurl.WRITEFUNCTION, open(r'xml\\'+name+'.'+extension,'w+').write)
conn.perform()
conn.close()
I got it working with a bypass using CNTLM.
The code:
import urllib2

proxy = urllib2.ProxyHandler({'https': '127.0.0.1:3128'})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)
u = urllib2.urlopen(url)
data = u.read()
fil = open(r'xml\\' + name + '.' + extension, 'w+')
fil.write(data)
fil.close()
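The same CNTLM bypass also works with the requests library instead of urllib2; a minimal sketch, assuming CNTLM is listening locally on 127.0.0.1:3128 as above:
import requests

# CNTLM handles the NTLM authentication against the upstream proxy.
proxies = {'http': 'http://127.0.0.1:3128', 'https': 'http://127.0.0.1:3128'}
resp = requests.get('https://nbp.pl/kursy/xml/LastA.xml', proxies=proxies)
with open('LastA.xml', 'wb') as f:
    f.write(resp.content)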
