I have this code using httplib, where config.testConnection is an XML string:
import httplib

conn = httplib.HTTPSConnection(url1)
conn.request('POST', url2, config.testConnection, config.headers)
response = conn.getresponse()
data = response.read().decode('utf-8')
But I get this SSL error: socket.sslerror: (1, 'error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure')
With Python 2.4 I cannot use ssl._create_unverified_context(), and I really need a verified HTTPS connection.
I have found pages like this one: https://bugzilla.redhat.com/show_bug.cgi?id=1064942, which says that it might be a bug between the Python on the server and the Java of the web service.
But I cannot modify any packages like this. Is there a workaround I can put directly in the code, please?
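One workaround direction, if an ssl module is available: SSL23_GET_SERVER_HELLO handshake failures often go away when a single protocol version is forced instead of the SSLv23 auto-negotiation. The sketch below assumes Python 2.6+ (the ssl module does not exist in 2.4, so on a stock 2.4 interpreter this will not run) and reuses url1, url2 and config from the question; the CA bundle path is a placeholder:

import httplib
import socket
import ssl

class TLSv1Connection(httplib.HTTPSConnection):
    """HTTPSConnection that forces TLSv1 instead of SSLv23 auto-negotiation."""
    def connect(self):
        sock = socket.create_connection((self.host, self.port), self.timeout)
        # Verifies the server certificate against a CA bundle; note that
        # ssl.wrap_socket does NOT check the hostname.
        self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file,
                                    ssl_version=ssl.PROTOCOL_TLSv1,
                                    cert_reqs=ssl.CERT_REQUIRED,
                                    ca_certs='/path/to/ca_bundle.pem')  # placeholder path

conn = TLSv1Connection(url1)
conn.request('POST', url2, config.testConnection, config.headers)
response = conn.getresponse()
data = response.read().decode('utf-8')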
If I try to use requests.get() to connect to an HTTPS server (a Jenkins), I get an SSL error: CERTIFICATE_VERIFY_FAILED, certificate verify failed: unable to get local issuer certificate (_ssl.c:997)
HTTPS connections work fine if I use curl or any browser.
The HTTPS server is an internal server but uses an SSL certificate from DigiCert. It is a wildcard certificate, and the same certificate is used on a lot of other servers (like IIS servers) in my company, which work fine together with requests.
If I use the urllib package, the HTTPS connection is also fine.
I don't understand why requests doesn't work, and I'm asking what I can do to make requests work.
And no, verify=False is not the solution ;-)
For the SSLContext in the second function I have to call the load_default_certs() method.
My system: Windows 10, Python 3.10, requests 2.28.1, urllib3 1.26.10, certifi 2022.6.15. The packages were installed today.
url = 'https://redmercury.acme.org/'
def use_requests(url):
    import requests
    try:
        r = requests.get(url)
        print(r)
    except Exception as e:
        print(e)

def use_magic_code_from_stackoverflow(url):
    import urllib.request
    import ssl
    ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    # ssl_context.verify_mode = ssl.CERT_REQUIRED
    # ssl_context.check_hostname = True
    ssl_context.load_default_certs()  # WITHOUT this I get SSL error(s)
    # pass the context configured above to the HTTPS handler
    https_handler = urllib.request.HTTPSHandler(context=ssl_context)
    opener = urllib.request.build_opener(https_handler)
    ret = opener.open(url, timeout=2)
    print(ret.status)

def use_urllib_requests(url):
    import urllib.request
    with urllib.request.urlopen(url) as response:
        print(response.status)

use_requests(url)                       # SSL error
use_magic_code_from_stackoverflow(url)  # server answers with 200
use_urllib_requests(url)                # server answers with 200
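The likely difference (my reading, not stated in the original post): requests verifies against certifi's bundled CA file only, while ssl_context.load_default_certs() on Windows also pulls certificates from the Windows certificate store, which is where a corporate setup typically has the DigiCert intermediate that the server fails to send. A sketch of one possible fix, assuming Python 3.10+ and the third-party truststore package:

# Assumes 'pip install truststore' (third-party package, Python 3.10+).
import truststore
truststore.inject_into_ssl()  # route ssl (and therefore requests) through the OS
                              # trust store; must run before any connection is made

import requests
r = requests.get('https://redmercury.acme.org/')  # placeholder URL from the question
print(r)

Alternatively, passing verify= a PEM bundle that contains the missing intermediate certificate should also work.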
I am trying to get the HTML for a website, but the site uses HTTPS, not HTTP.
import httplib
c = httplib.HTTPSConnection("google.com")
c.request("GET", "/")
response = c.getresponse()
print response.status, response.reason
data = response.read()
print data
According to HTTPS connection Python, the socket module needs to be compiled with SSL support or have SSL enabled.
How do I do this? I am running Ubuntu 14.04.
Thanks
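As a first check (a diagnostic sketch, not from the original post): if the following imports succeed, your interpreter was built with SSL support and httplib.HTTPSConnection is available. ssl.OPENSSL_VERSION needs Python 2.7, which is what Ubuntu 14.04 ships:

import ssl                                 # ImportError here means no SSL support
print ssl.OPENSSL_VERSION                  # e.g. 'OpenSSL 1.0.1f ...'

import httplib
print hasattr(httplib, 'HTTPSConnection')  # True when SSL support is compiled in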
Via Python's urllib2 I try to get data over HTTPS while I am behind a corporate NTLM proxy.
I run
import urllib2

proxy_url = 'http://user:pw@ntlmproxy:port/'
proxy_handler = urllib2.ProxyHandler({'http': proxy_url})
opener = urllib2.build_opener(proxy_handler, urllib2.HTTPHandler)
urllib2.install_opener(opener)

f = urllib2.urlopen('https://httpbin.org/ip')
myfile = f.read()
print myfile
but I get this error:
urllib2.URLError: <urlopen error [Errno 8] _ssl.c:507:
EOF occurred in violation of protocol>
How can I fix this error?
Note 0: With the same code I can retrieve the unsecured HTTP equivalent http://httpbin.org/ip.
Note 1: From a normal browser I can access https://httpbin.org/ip (and other HTTPS sites) via the same corporate proxy.
Note 2: I was reading about many similar issues on the net and some suggested that it might be related to certificate verification, but urllib2 does not verify certificates anyway.
Note 3: Some people suggested monkeypatching in similar situations, but I guess there is no way to monkeypatch _ssl.c.
The problem is that Python's standard HTTP libraries do not fully speak Microsoft's proprietary NTLM authentication protocol.
I solved this problem by setting up a local NTLM-capable proxy - ntlmaps did the trick for me (*) - which handles the authentication against the corporate proxy, and pointing my Python code to this local proxy without authentication credentials.
Additionally, I had to add a proxy handler for HTTPS to the Python code listed above. So I replaced the two lines
proxy_url = 'http://user:pw@ntlmproxy:port/'
proxy_handler = urllib2.ProxyHandler({'http': proxy_url})
with the two lines
proxy_url = 'http://localproxy:localport/'
proxy_url_https = 'https://localproxy:localport/'
proxy_handler = urllib2.ProxyHandler({'http': proxy_url, 'https': proxy_url_https})
Then the request works perfectly.
(*) ntlmaps is a Python program. For reasons specific to my personal environment, it was necessary for me that the proxy be a Python program.
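For reference, the whole flow through the local proxy then looks roughly like this (localproxy and localport are the placeholders from above):

import urllib2

proxy_url = 'http://localproxy:localport/'
proxy_url_https = 'https://localproxy:localport/'
proxy_handler = urllib2.ProxyHandler({'http': proxy_url,
                                      'https': proxy_url_https})
opener = urllib2.build_opener(proxy_handler, urllib2.HTTPHandler)
urllib2.install_opener(opener)

print urllib2.urlopen('https://httpbin.org/ip').read()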
tl;dr: I used httplib to create a connection to a site. I failed; I'd love some guidance!
I've run into some trouble. I've read about Python's socket and httplib modules, although I have some problems with the syntax, it seems.
Here it is:
connection = httplib.HTTPConnection('www.site.org', 80, timeout=10, source_address=('1.2.3.4', 0))  # source_address must be a (host, port) tuple
The syntax is this:
httplib.HTTPConnection(host[, port[, strict[, timeout[, source_address]]]])
How does "source_address" behave? Can I make requests with any IP from it?
Wouldn't I need a User-Agent for it?
Also, how do I check if the connect is successful?
if connection:
    print "Connection Successful."
(As far as I know, HTTP doesn't need an "are you alive" ping every second; as long as both client and server are okay, a request will be processed when it is made. So I can't just ping constantly.)
Creating the object does not actually connect to the website:
HTTPConnection.connect():
Connect to the server specified when the object was created.
source_address is a (host, port) tuple that the socket is bound to locally before
connecting, so outgoing requests appear to come from that address. You can only bind
to an IP address your machine actually owns, so you cannot make requests from an
arbitrary IP with it. It is also unrelated to the User-Agent, which is an HTTP header
you would pass via request(). Either way, it is an optional parameter.
You can check whether a connection was made by calling connect() and catching the
socket.error it raises on failure.
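For example, a minimal check could look like this (www.site.org taken from the question):

from httplib import HTTPConnection
import socket

conn = HTTPConnection("www.site.org", 80, timeout=10)
try:
    conn.connect()   # raises socket.error if the TCP connection fails
    print "Connection Successful."
except socket.error as e:
    print "Connection failed:", e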
Assuming what you want to do is get the contents of the website root, you can use this:
from httplib import HTTPConnection

conn = HTTPConnection("www.site.org", 80, timeout=10)
conn.connect()
conn.request("GET", "/")   # the path only; the host was already given to HTTPConnection
resp = conn.getresponse()
data = resp.read()
print(data)
(slammed together from the HTTPConnection documentation)
Honestly though, you should not be using httplib directly; use urllib2 or another HTTP library that is less... low-level.
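For comparison, the same fetch with urllib2 is just a couple of lines (www.site.org again being the placeholder host from the question):

import urllib2

data = urllib2.urlopen("http://www.site.org/", timeout=10).read()
print data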
I am trying to fetch some URLs using the urllib2 library.
import urllib2

a = urllib2.urlopen("http://www.google.com")
ret = a.read()
The code above works fine and gives the expected result. But when I make the URL HTTPS, it gives a "network unreachable" error:
a = urllib2.urlopen("https://www.google.com")
urllib2.URLError: <urlopen error [Errno 101] Network is unreachable>
Is there any problem with SSL? My Python version is 2.6.5. I am also behind an academic proxy server; I have the settings in my bash file. Anyway, since HTTP works, the proxy shouldn't be the problem here.
Normally the issue in cases like this is that the proxy you are behind has an out-of-date or untrusted SSL certificate. urllib is fussier than most browsers when it comes to SSL, which is why you might be getting this error.
The HTTP URL didn't give an error because the http_proxy variable was already set. Setting https_proxy as well makes the above error disappear.
export http_proxy="http://{proxy-address}"
Set the same thing for https_proxy:
export https_proxy="http://{proxy-address}"
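If you would rather not depend on environment variables, the same proxy can be configured from inside the script (a sketch reusing the {proxy-address} placeholder from above):

import urllib2

# Route both plain and TLS traffic through the proxy.
proxy_handler = urllib2.ProxyHandler({'http': 'http://{proxy-address}',
                                      'https': 'http://{proxy-address}'})
opener = urllib2.build_opener(proxy_handler)
urllib2.install_opener(opener)

print urllib2.urlopen("https://www.google.com").read()[:200]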