Background
I am trying to bring up my Grails project, which suddenly stopped building (probably because I was playing around with my local Maven repository). Now when I run the grails command, I get the following errors:
org.eclipse.aether.resolution.ArtifactDescriptorException: Failed to read artifact descriptor for xalan:serializer:jar:2.7.1
at org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.loadPom(DefaultArtifactDescriptorReader.java:335)
at org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.readArtifactDescriptor(DefaultArtifactDescriptorReader.java:217)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.resolveCachedArtifactDescriptor(DefaultDependencyCollector.java:537)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.getArtifactDescriptorResult(DefaultDependencyCollector.java:521)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.processDependency(DefaultDependencyCollector.java:421)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.processDependency(DefaultDependencyCollector.java:375)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.process(DefaultDependencyCollector.java:363)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.collectDependencies(DefaultDependencyCollector.java:266)
at org.eclipse.aether.internal.impl.DefaultRepositorySystem.collectDependencies(DefaultRepositorySystem.java:337)
at grails.util.BuildSettings.doResolve(BuildSettings.groovy:514)
at grails.util.BuildSettings.doResolve(BuildSettings.groovy)
at grails.util.BuildSettings$_getDefaultBuildDependencies_closure19.doCall(BuildSettings.groovy:775)
at grails.util.BuildSettings$_getDefaultBuildDependencies_closure19.doCall(BuildSettings.groovy)
at grails.util.BuildSettings.getDefaultBuildDependencies(BuildSettings.groovy:769)
at grails.util.BuildSettings.getBuildDependencies(BuildSettings.groovy:674)
Caused by: org.eclipse.aether.resolution.ArtifactResolutionException: Could not transfer artifact xalan:serializer:pom:2.7.1 from/to repo1_maven_org_maven2 (https://repo1.maven.org/maven2): sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:462)
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts(DefaultArtifactResolver.java:264)
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifact(DefaultArtifactResolver.java:241)
at org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.loadPom(DefaultArtifactDescriptorReader.java:320)
... 14 more
I followed this answer https://stackoverflow.com/a/36427118/351903 and added the remote server's certificate to my cacerts, but now I get this error:
sun.security.validator.ValidatorException: KeyUsage does not allow digital signatures
I tried manually downloading the artifacts and observed that the error clears for that artifact and then appears for the next one. So I thought that if I could list all the file names under a URL like https://repo1.maven.org/maven2/org/grails/grails-bootstrap/2.4.0/ and then curl each file and recreate it locally, I could resolve my dependencies one by one, like this:
curl https://repo1.maven.org/maven2/org/grails/grails-bootstrap/2.4.0/grails-bootstrap-2.4.0-javadoc.jar -o 'grails-bootstrap-2.4.0-javadoc.jar'
However, I am unable to list the files for an artifact:
SandeepanNath:Desktop sandeepan.nath$ ssh https://repo1.maven.org ls -l /maven2/org/grails/grails-bootstrap/2.4.0/
ssh: Could not resolve hostname https://repo1.maven.org: nodename nor servname provided, or not known
SandeepanNath:Desktop sandeepan.nath$
Trying with the IP -
SandeepanNath:Desktop sandeepan.nath$ ssh 151.101.36.209 ls -l /maven2/org/grails/grails-bootstrap/2.4.0/
ssh: connect to host 151.101.36.209 port 22: Operation timed out
Then I understood that my only option is to scrape the URL for links and curl each one. But I am unable to scrape either, due to SSL errors. I tried following this Python example: https://www.geeksforgeeks.org/implementing-web-scraping-python-beautiful-soup/?ref=lbp
# This will not run on an online IDE
import requests
from bs4 import BeautifulSoup
import ssl

URL = "http://www.values.com/inspirational-quotes"
r = requests.get(URL)
soup = BeautifulSoup(r.content, 'html5lib')
print(soup.prettify())
But I get this error -
Traceback (most recent call last):
File "parse_remote.py", line 7, in <module>
r = requests.get(URL)
File "/Users/sandeepan.nath/Library/Python/2.7/lib/python/site-packages/requests/api.py", line 76, in get
return request('get', url, params=params, **kwargs)
File "/Users/sandeepan.nath/Library/Python/2.7/lib/python/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/Users/sandeepan.nath/Library/Python/2.7/lib/python/site-packages/requests/sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "/Users/sandeepan.nath/Library/Python/2.7/lib/python/site-packages/requests/sessions.py", line 665, in send
history = [resp for resp in gen] if allow_redirects else []
File "/Users/sandeepan.nath/Library/Python/2.7/lib/python/site-packages/requests/sessions.py", line 245, in resolve_redirects
**adapter_kwargs
File "/Users/sandeepan.nath/Library/Python/2.7/lib/python/site-packages/requests/sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "/Users/sandeepan.nath/Library/Python/2.7/lib/python/site-packages/requests/adapters.py", line 514, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='www.values.com', port=443): Max retries exceeded with url: /inspirational-quotes (Caused by SSLError(SSLEOFError(8, u'EOF occurred in violation of protocol (_ssl.c:590)'),))
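Once the TLS problem is sorted out (for example by pointing requests at the corporate CA bundle via verify=...), the repository index page can be parsed with the standard library alone, without BeautifulSoup. This is only a sketch: the helper below is my own, not part of any Maven tooling, and the commented-out URL and CA path are assumptions based on the question.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href targets from <a> tags, skipping the parent-directory link."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href and not href.startswith(".."):
                self.links.append(href)

def list_directory_links(html):
    # Parse a directory index page and return the linked file names.
    parser = LinkCollector()
    parser.feed(html)
    return parser.links

# Hypothetical usage against the artifact directory from the question
# (requires working TLS; the CA path is a placeholder):
# import requests
# base = "https://repo1.maven.org/maven2/org/grails/grails-bootstrap/2.4.0/"
# r = requests.get(base, verify="/path/to/corporate-ca.pem")
# for name in list_directory_links(r.text):
#     data = requests.get(base + name, verify="/path/to/corporate-ca.pem").content
#     open(name, "wb").write(data)
```
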
Related
I have an HDFS cluster that I want to read from and write to using a Python script.
import requests
import json
import os
import kerberos
import sys

node = os.getenv("namenode").split(",")
print(node)

local_file_path = sys.argv[1]
remote_file_path = sys.argv[2]
read_or_write = sys.argv[3]
print(local_file_path, remote_file_path)

def check_node_status(node):
    for name in node:
        print(name)
        request = requests.get("%s/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus" % name,
                               verify=False).json()
        status = request["beans"][0]["State"]
        if status == "active":
            nnhost = request["beans"][0]["HostAndPort"]
            splitaddr = nnhost.split(":")
            nnaddress = splitaddr[0]
            print(nnaddress)
            break
    return status, name, nnaddress

def kerberos_auth(nnaddress):
    __, krb_context = kerberos.authGSSClientInit("HTTP@%s" % nnaddress)
    kerberos.authGSSClientStep(krb_context, "")
    negotiate_details = kerberos.authGSSClientResponse(krb_context)
    headers = {"Authorization": "Negotiate " + negotiate_details,
               "Content-Type": "application/binary"}
    return headers

def kerberos_hdfs_upload(status, name, headers):
    print("running upload function")
    if status == "active":
        print("if function")
        data = open('%s' % local_file_path, 'rb').read()
        write_req = requests.put("%s/webhdfs/v1%s?op=CREATE&overwrite=true" % (name, remote_file_path),
                                 headers=headers,
                                 verify=False,
                                 allow_redirects=True,
                                 data=data)
        print(write_req.text)

def kerberos_hdfs_read(status, name, headers):
    if status == "active":
        read = requests.get("%s/webhdfs/v1%s?op=OPEN" % (name, remote_file_path),
                            headers=headers,
                            verify=False,
                            allow_redirects=True)
        if read.status_code == 200:
            data = open('%s' % local_file_path, 'wb')
            data.write(read.content)
            data.close()
        else:
            print(read.content)

status, name, nnaddress = check_node_status(node)
headers = kerberos_auth(nnaddress)
if read_or_write == "write":
    kerberos_hdfs_upload(status, name, headers)
elif read_or_write == "read":
    print("fun")
    kerberos_hdfs_read(status, name, headers)
The code works on my own machine which is not behind any proxy. But when running it in the office machine, which is behind a proxy, it is giving the following proxy error:
$ python3 python_hdfs.py ./1.png /user/testuser/2018-02-07_1.png write
['https://<servername>:50470', 'https:// <servername>:50470']
./1.png /user/testuser/2018-02-07_1.png
https://<servername>:50470
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 555, in urlopen
self._prepare_proxy(conn)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 753, in _prepare_proxy
conn.connect()
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 230, in connect
self._tunnel()
File "/usr/lib/python3.5/http/client.py", line 832, in _tunnel
message.strip()))
OSError: Tunnel connection failed: 504 Unknown Host
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 376, in send
timeout=timeout
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 610, in urlopen
_stacktrace=sys.exc_info()[2])
File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 273, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
requests.packages.urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='<servername>', port=50470): Max retries exceeded with url: /jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 504 Unknown Host',)))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "python_hdfs.py", line 68, in <module>
status, name, nnaddress= check_node_status(node)
File "python_hdfs.py", line 23, in check_node_status
verify=False).json()
File "/usr/lib/python3/dist-packages/requests/api.py", line 67, in get
return request('get', url, params=params, **kwargs)
File "/usr/lib/python3/dist-packages/requests/api.py", line 53, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 468, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 576, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 437, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='<server_name>', port=50470): Max retries exceeded with url: /jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 504 Unknown Host',)))
I tried giving proxy info in the code, like so:
proxies = {
    "http": "<proxy_username>:<proxy_password>@<proxy_IP>:<proxy_port>",
    "https": "<proxy_username>:<proxy_password>@<proxy_IP>:<proxy_port>",
}

node = os.getenv("namenode").split(",")
print(node)

local_file_path = sys.argv[1]
remote_file_path = sys.argv[2]
read_or_write = sys.argv[3]
print(local_file_path, remote_file_path)

def check_node_status(node):
    for name in node:
        print(name)
        request = requests.get("%s/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus" % name,
                               proxies=proxies,
                               verify=False).json()
        status = request["beans"][0]["State"]
        if status == "active":
            nnhost = request["beans"][0]["HostAndPort"]
            splitaddr = nnhost.split(":")
            nnaddress = splitaddr[0]
            print(nnaddress)
            break
    return status, name, nnaddress

### Rest of the code is the same
Now it is giving the following error:
$ python3 python_hdfs.py ./1.png /user/testuser/2018-02-07_1.png write
['https://<servername>:50470', 'https:// <servername>:50470']
./1.png /user/testuser/2018-02-07_1.png
https://<servername>:50470
Traceback (most recent call last):
File "python_hdfs.py", line 73, in <module>
status, name, nnaddress= check_node_status(node)
File "python_hdfs.py", line 28, in check_node_status
verify=False).json()
File "/usr/lib/python3/dist-packages/requests/api.py", line 67, in get
return request('get', url, params=params, **kwargs)
File "/usr/lib/python3/dist-packages/requests/api.py", line 53, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 468, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 576, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 343, in send
conn = self.get_connection(request.url, proxies)
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 254, in get_connection
proxy_manager = self.proxy_manager_for(proxy)
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 160, in proxy_manager_for
**proxy_kwargs)
File "/usr/lib/python3/dist-packages/urllib3/poolmanager.py", line 281, in proxy_from_url
return ProxyManager(proxy_url=url, **kw)
File "/usr/lib/python3/dist-packages/urllib3/poolmanager.py", line 232, in __init__
raise ProxySchemeUnknown(proxy.scheme)
requests.packages.urllib3.exceptions.ProxySchemeUnknown: Not supported proxy scheme <proxy_username>
So, my question is: do I need to set up the proxy in Kerberos for this to work? If so, how? I am not too familiar with Kerberos. I run kinit before running the Python code in order to enter the Kerberos realm; that runs fine and connects to the appropriate HDFS servers without the proxy. So I don't know why this error occurs when reading from or writing to the same HDFS servers. Any help is appreciated.
I also have the proxy set up in /etc/apt/apt.conf like so:
Acquire::http::proxy "http://<proxy_username>:<proxy_password>@<proxy_IP>:<proxy_port>/";
Acquire::https::proxy "https://<proxy_username>:<proxy_password>@<proxy_IP>:<proxy_port>/";
I have also tried the following:
$ export http_proxy="http://<user>:<pass>@<proxy>:<port>"
$ export HTTP_PROXY="http://<user>:<pass>@<proxy>:<port>"
$ export https_proxy="http://<user>:<pass>@<proxy>:<port>"
$ export HTTPS_PROXY="http://<user>:<pass>@<proxy>:<port>"
import os
proxy = 'http://<user>:<pass>@<proxy>:<port>'
os.environ['http_proxy'] = proxy
os.environ['HTTP_PROXY'] = proxy
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy
#rest of the code is same
But the error persists.
UPDATE: I have also tried the following.
Somebody suggested that we already have a proxy set up in /etc/apt/apt.conf for connecting to the web, but that perhaps we don't need the proxy to reach HDFS, so I should try commenting out the proxies in /etc/apt/apt.conf and running the Python script again. I did that:
$ env | grep proxy
http_proxy=http://hfli:Test6969@192.168.44.217:8080
https_proxy=https://hfli:Test6969@192.168.44.217:8080
$ unset http_proxy
$ unset https_proxy
$ env | grep proxy
$
And I ran the Python script again: (i) without defining proxies in the script, and (ii) with the proxies defined in the script. I got the same original proxy error in both cases.
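One more thing worth ruling out (a hedged suggestion, not a confirmed fix): requests honors the http_proxy/https_proxy environment variables, and a no_proxy entry excludes specific hosts from proxying. Keeping the proxy for the internet while bypassing it for the internal namenodes could look like this sketch; the hostname is a placeholder.

```python
import os

# Exclude the internal namenode hosts (placeholders) from proxying while
# leaving http_proxy/https_proxy in place for everything else; requests
# reads these environment variables.
os.environ["no_proxy"] = "<servername>,localhost,127.0.0.1"

# Or bypass the proxy for a single call instead of globally:
# import requests
# requests.get(url, proxies={"http": None, "https": None}, verify=False)
```
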
I found the following Java program that supposedly gives access to run Java programs on the HDFS:
import com.sun.security.auth.callback.TextCallbackHandler;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import javax.security.auth.Subject;
import javax.security.auth.login.LoginContext;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class HDFS_RW_Secure
{
    public static void main(String[] args) throws Exception
    {
        System.setProperty("java.security.auth.login.config", "/tmp/sc3_temp/hadoop_kdc.txt");
        System.setProperty("java.security.krb5.conf", "/tmp/sc3_temp/hadoop_krb.txt");
        Configuration hadoopConf = new Configuration();

        // this example uses password login; you can change it to keytab login
        LoginContext lc;
        Subject subject;
        lc = new LoginContext("JaasSample", new TextCallbackHandler());
        lc.login();
        System.out.println("login");

        subject = lc.getSubject();
        UserGroupInformation.setConfiguration(hadoopConf);
        UserGroupInformation ugi = UserGroupInformation.getUGIFromSubject(subject);
        UserGroupInformation.setLoginUser(ugi);

        Path pt = new Path("hdfs://edhcluster" + args[0]);
        FileSystem fs = FileSystem.get(hadoopConf);

        // write
        FSDataOutputStream fin = fs.create(pt);
        fin.writeUTF("Hello!");
        fin.close();

        // read it back
        BufferedReader br = new BufferedReader(new InputStreamReader(fs.open(pt)));
        String line;
        line = br.readLine();
        while (line != null)
        {
            System.out.println(line);
            line = br.readLine();
        }
        fs.close();
        System.out.println("This is the end.");
    }
}
We need to take its jar file, HDFS.jar, and run the following shell script to enable Java programs to be run on the HDFS.
nano run.sh
# contents of the run.sh file:
/tmp/sc3_temp/jre1.8.0_161/bin/java -Djavax.net.ssl.trustStore=/tmp/sc3_temp/cacerts -Djavax.net.ssl.trustStorePassword=changeit -jar /tmp/sc3_temp/HDFS.jar $1
So, I can run this shell script with /user/testuser as the argument to give it access to run Java programs in the HDFS:
./run.sh /user/testuser/test2
which gives the following output:
Debug is true storeKey false useTicketCache false useKeyTab false doNotPrompt false ticketCache is null isInitiator true KeyTab is null refreshKrb5Config is false principal is null tryFirstPass is false useFirstPass is false storePass is false clearPass is false
Kerberos username [testuser]: testuser
Kerberos password for testuser:
[Krb5LoginModule] user entered username: testuser
principal is testuser@KRB.REALM
Commit Succeeded
login
2018-02-08 14:09:30,020 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Hello!
This is the end.
So that's working, I suppose. But how do I write an equivalent shell script to run Python code?
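For what it's worth, a Python equivalent of run.sh may only need to export the trust store before invoking the interpreter. This is a sketch under assumptions: the script path is hypothetical, and requests expects a PEM bundle (via REQUESTS_CA_BUNDLE), not a JKS keystore like the Java cacerts file, so the certificates would need converting first.

```shell
#!/bin/sh
# Hypothetical wrapper mirroring run.sh for a Python entry point.
# REQUESTS_CA_BUNDLE must point at a PEM file (export the JKS cacerts
# with keytool/openssl first); the script path is an assumption.
export REQUESTS_CA_BUNDLE=/tmp/sc3_temp/cacerts.pem
python3 /tmp/sc3_temp/python_hdfs.py "$1" "$2" "$3"
```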
I found the solution. It turns out I was looking in the wrong place: the user account had been set up incorrectly. I tried something simpler, like downloading a webpage onto the server, and noticed that the page downloaded but the account had no permission to save it. Exploring further, I found that when the user account was created, it was not assigned the proper ownership. Once I assigned the proper owner to the user account, the proxy error was gone. (Sigh, so much time wasted.)
I have written about it in more detail here.
I would like to authenticate to a server from my client using a certificate generated by that server. I have the server CA certificate (katello-server-ca.crt), and below is the curl command that works. How do I send a similar request using the Python requests module?
$ curl -X GET -u sat_username:sat_password \
-H "Accept:application/json" --cacert katello-server-ca.crt \
https://satellite6.example.com/katello/api/organizations
I have tried the following way, and it raises an exception; can someone help resolve this issue?
python requestsCert.py
Traceback (most recent call last):
File "requestsCert.py", line 2, in <module>
res=requests.get('https://satellite6.example.com/katello/api/organizations', cert='/certificateTests/katello-server-ca.crt', verify=True)
File "/usr/lib/python2.7/site-packages/requests/api.py", line 68, in get
return request('get', url, **kwargs)
File "/usr/lib/python2.7/site-packages/requests/api.py", line 50, in request
response = session.request(method=method, url=url, **kwargs)
File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 464, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 576, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python2.7/site-packages/requests/adapters.py", line 431, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: [SSL] PEM lib (_ssl.c:2554)
res=requests.get('https://...', cert='/certificateTests/katello-server-ca.crt', verify=True)
The cert argument in requests.get is used to specify the client certificate and key for mutual authentication. It is not used to specify the trusted CA the way curl's --cacert argument does. Instead, you should use the verify argument:
res=requests.get('https://...', verify='/certificateTests/katello-server-ca.crt')
For more information see SSL Cert Verification and Client Side Certificates in the documentation for requests.
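Putting both pieces together, the curl command from the question maps onto requests roughly like this sketch (the credentials, header, and CA file name are the placeholders shown above):

```python
import requests

# Mirror of: curl -u sat_username:sat_password -H "Accept:application/json"
#            --cacert katello-server-ca.crt https://satellite6.example.com/...
session = requests.Session()
session.auth = ("sat_username", "sat_password")          # -u user:pass
session.headers.update({"Accept": "application/json"})   # -H "Accept: ..."
session.verify = "katello-server-ca.crt"                 # --cacert (trusted CA)

# res = session.get("https://satellite6.example.com/katello/api/organizations")
# print(res.json())
```
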
I am trying to access the IBM Hyperledger blockchain from Python, but I get an SSL protocol error while connecting. I have already searched the internet, and Stack Overflow in particular, without finding a solution. Here is what I did:
I set up an IBM Hyperledger Blockchain service and got the URL to work with.
I tried the curl call to access the API:
curl -X GET --header "Accept: application/json" "https://SOMETHING_vp0.us.blockchain.ibm.com:443/network/peers"
This works fine for me (I wonder why I don't need a password, but it works).
I tried to access the same API from Python, but got an error.
Python 2.7.12, requests 2.10.0, on a Mac:
import requests
url = "https://SOMETHING_vp0.us.blockchain.ibm.com:443/network/peers"
response = requests.get(url)
print response.status_code
Accessing https://www.google.com works without a problem, but the blockchain URL returns:
Traceback (most recent call last):
File "/Users/ansi/development/hyperledger/mcp.py", line 9, in <module>
response = requests.get(url)
File "/Users/ansi/development/virtualenv/general/lib/python2.7/site-packages/requests/api.py", line 71, in get
return request('get', url, params=params, **kwargs)
File "/Users/ansi/development/virtualenv/general/lib/python2.7/site-packages/requests/api.py", line 57, in request
return session.request(method=method, url=url, **kwargs)
File "/Users/ansi/development/virtualenv/general/lib/python2.7/site-packages/requests/sessions.py", line 475, in request
resp = self.send(prep, **send_kwargs)
File "/Users/ansi/development/virtualenv/general/lib/python2.7/site-packages/requests/sessions.py", line 585, in send
r = adapter.send(request, **kwargs)
File "/Users/ansi/development/virtualenv/general/lib/python2.7/site-packages/requests/adapters.py", line 477, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: EOF occurred in violation of protocol (_ssl.c:590)
What can I do to solve the problem?
Thanks a lot
I think I've discovered a problem with the Requests library's handling of redirects when using HTTPS. As far as I can tell, this is only a problem when the server redirects the Requests client to another HTTPS resource.
I can assure you that the proxy I'm using supports HTTPS and the CONNECT method because I can use it with a browser just fine. I'm using version 2.1.0 of the Requests library which is using 1.7.1 of the urllib3 library.
I watched the transactions in wireshark and I can see the first transaction for https://www.paypal.com/ but I don't see anything for https://www.paypal.com/home. I keep getting timeouts when debugging any deeper in the stack with my debugger so I don't know where to go from here. I'm definitely not seeing the request for /home as a result of the redirect. So it must be erroring out in the code before it gets sent to the proxy.
I want to know if this truly is a bug or if I am doing something wrong. It is really easy to reproduce so long as you have access to a proxy that you can send traffic through. See the code below:
import requests
proxiesDict = {
'http': "http://127.0.0.1:8080",
'https': "http://127.0.0.1:8080"
}
# This fails with "requests.exceptions.ProxyError: Cannot connect to proxy. Socket error: [Errno 111] Connection refused." when it tries to follow the redirect to /home
r = requests.get("https://www.paypal.com/", proxies=proxiesDict)
# This succeeds.
r = requests.get("https://www.paypal.com/home", proxies=proxiesDict)
This also happens when using urllib3 directly. It is probably mainly a bug in urllib3, which Requests uses under the hood, but I'm using the higher level requests library. See below:
import urllib3

proxy = urllib3.proxy_from_url('http://127.0.0.1:8080/')

# This fails with the same error as above.
res = proxy.urlopen('GET', 'https://www.paypal.com/')

# This succeeds.
res = proxy.urlopen('GET', 'https://www.paypal.com/home')
Here is the traceback when using Requests:
Traceback (most recent call last):
File "tests/downloader_tests.py", line 22, in test_proxy_https_request
r = requests.get("https://www.paypal.com/", proxies=proxiesDict)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 55, in get
return request('get', url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 382, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 505, in send
history = [resp for resp in gen] if allow_redirects else []
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 167, in resolve_redirects
allow_redirects=False,
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 485, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 375, in send
raise ProxyError(e)
requests.exceptions.ProxyError: Cannot connect to proxy. Socket error: [Errno 111] Connection refused.
Update:
The problem only seems to happen with a 302 (Found) redirect not with the normal 301 redirects (Moved Permanently). Also, I noticed that with the Chrome browser, Paypal doesn't return a redirect. I do see the redirect when using Requests - even though I'm borrowing Chrome's User Agent for this experiment. I'm looking for more URLs that return a 302 in order to get more data points.
I need this to work for all URLs or at least understand why I'm seeing this behavior.
This is a bug in urllib3. We're tracking it as urllib3 issue #295.
I am attempting to use the Requests library for calls to a REST web service. I wrote a quick prototype under Windows and everything worked fine, but when I run the same prototype under Linux I get a "requests.exceptions.Timeout: Request timed out" error. Does anyone know why this might be happening? If I use the library to access a non-HTTPS URL, it works fine under both Windows and Linux.
import requests
url = 'https://path.to.rest/svc/?options'
r = requests.get(url, auth=('uid','passwd'), verify=False)
print(r.content)
I did notice that if I leave off the verify=False parameter from the get call, I get a different exception, namely "requests.exceptions.SSLError: Can't connect to HTTPS URL because the SSL module is not available". This appears to be a possible underlying cause, though I don't know why the error code would change, and I can't find any reference to an ssl module; I verified that certifi is installed. Interestingly, if I leave off the verify parameter on Windows, I get yet another exception: "requests.exceptions.SSLError: [Errno 1] _ssl.c:503: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed"
EDIT:
Tracebacks for all cases/scenarios mentioned
Full code as shown above:
Traceback (most recent call last):
File "testRequests.py", line 15, in <module>
r = requests.get(url, auth=('uid','passwd'), verify=False)
File "build/bdist.linux-x86_64/egg/requests/api.py", line 52, in get
File "build/bdist.linux-x86_64/egg/requests/api.py", line 40, in request
File "build/bdist.linux-x86_64/egg/requests/sessions.py", line 208, in request
File "build/bdist.linux-x86_64/egg/requests/models.py", line 586, in send
requests.exceptions.Timeout: Request timed out
Code as shown above minus the "verify=False" paramter:
Traceback (most recent call last):
File "testRequests.py", line 15, in <module>
r = requests.get(url, auth=('uid','passwd'))
File "build/bdist.linux-x86_64/egg/requests/api.py", line 52, in get
File "build/bdist.linux-x86_64/egg/requests/api.py", line 40, in request
File "build/bdist.linux-x86_64/egg/requests/sessions.py", line 208, in request
File "build/bdist.linux-x86_64/egg/requests/models.py", line 584, in send
requests.exceptions.SSLError: Can't connect to HTTPS URL because the SSL module is not available
Code as show above minus the "verify=False" parameter and run under windows:
Traceback (most recent call last):
File "testRequests.py", line 59, in <module>
r = requests.get(url, auth=('uid','passwd'))
File "c:\Python27\lib\site-packages\requests\api.py", line 52, in get
return request('get', url, **kwargs)
File "c:\Python27\lib\site-packages\requests\api.py", line 40, in request
return s.request(method=method, url=url, **kwargs)
File "c:\Python27\lib\site-packages\requests\sessions.py", line 208, in request
r.send(prefetch=prefetch)
File "c:\Python27\lib\site-packages\requests\models.py", line 584, in send
raise SSLError(e)
requests.exceptions.SSLError: [Errno 1] _ssl.c:503: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
I'm not an expert on the matter, but it looks like the certificate from the server can't be verified correctly. I don't know exactly how Python and ssl handle certificate verification, but the first option is to try ignoring the exception, or perhaps change https to http to see whether the web service allows non-secure calls.
If the issue revolves around an import error for ssl: the module is part of CPython, and you may need to ensure that the Python interpreter is compiled with SSL support (from OpenSSL). Look into removing the Python package (be careful) and compiling it with OpenSSL support. Personally, I would strongly advise looking into a virtualenv before removing anything; compiling Python is not too difficult, and it gives you a finer grain of control for what you aim to do.
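As a quick diagnostic before recompiling anything, importing ssl directly shows whether the interpreter was built with SSL support at all:

```python
# If this import fails, the interpreter was compiled without OpenSSL support,
# and HTTPS requests cannot work regardless of certificate configuration.
try:
    import ssl
    print("SSL available:", ssl.OPENSSL_VERSION)
except ImportError:
    print("Python was built without SSL support")
```
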