I have this host: http://retsau.torontomls.net:7890/ and I want to access http://retsau.torontomls.net:7890/rets-treb3pv/server/login. How can I accomplish this using Python Requests? All my attempts so far have failed.
I also followed the solution here - Python Requests - Use navigate site by servers IP - and came up with this:
response = requests.get('http://206.152.41.279/rets-treb3pv/server/login', headers={'Host': 'retsau.torontomls.net'})
but that resulted in this error:
requests.exceptions.ConnectionError: HTTPConnectionPool(host='206.152.41.279', port=80): Max retries exceeded with url: /rets-treb3pv/server/login (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x10a4f6d84>: Failed to establish a new connection: [Errno 60] Operation timed out',))
The funny thing is that everything works perfectly fine in Postman: I am able to access all sorts of URLs on that server, from logging in to searching.
You left out the port number (7890) from the URL to your get call:
response = requests.get('http://206.152.41.279:7890/rets-treb3pv/server/login', headers={'Host': 'retsau.torontomls.net'})
# note the added ":7890" port
Also, unless you actually have a specific reason for accessing the site by IP address, it would make more sense to put the FQDN in the URL rather than the Host header:
response = requests.get('http://retsau.torontomls.net:7890/rets-treb3pv/server/login')
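For completeness: RETS servers normally also require credentials on the login transaction, commonly via HTTP digest auth. The snippet below is only a sketch with placeholder credentials and a commonly used RETS-Version header - check what TREB actually requires:
import requests
from requests.auth import HTTPDigestAuth

# placeholder credentials and RETS version; your provider specifies the real values
resp = requests.get(
    'http://retsau.torontomls.net:7890/rets-treb3pv/server/login',
    auth=HTTPDigestAuth('your-username', 'your-password'),
    headers={'RETS-Version': 'RETS/1.7.2'},
)
print(resp.status_code)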
Related
I have created an .exe file with the PyInstaller module. The problem arises when I perform a request to an endpoint via the .exe using HTTPS proxies, which throws this error:
requests.exceptions.ProxyError: HTTPSConnectionPool(host='www.lustmexico.com', port=443): Max retries exceeded with url: /products.json (Caused by ProxyError('Cannot connect to proxy.', timeout('_ssl.c:1059: The handshake operation timed out')))
However, when I execute the request via my main.py file (i.e. the main entry point of the program, running the Python files directly, not the converted .exe), no error occurs.
Here's how my proxies are configured:
ip = "IP OF MY PROXY"
proxies = {
"https": 'https://' + ip,
"http": 'http://' + ip
}
return proxy
And the way I perform the request is:
r = requests.get(self.link + "/products.json", proxies=proxies, headers=headers, timeout=timeout)
At first I guessed it was the timeout, but it is set quite high now, and I have tested enough to be sure that is not the main cause of the error.
After a lot of research I suspect there is a problem with my HTTPS proxies or with the SSL setup on my machine, but I am not really sure. I don't understand the problem; please help.
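One way to isolate the problem is to run the same request outside the .exe with explicit connect/read timeouts, and with an http:// scheme in the proxy URL for both keys - depending on your urllib3 version, an https:// proxy scheme makes requests try to speak TLS to the proxy itself, which many proxies do not support and which can by itself produce this handshake timeout. A minimal sketch with a placeholder proxy address:
import requests

proxies = {
    "https": "http://203.0.113.5:8080",  # placeholder proxy; note the http:// scheme
    "http": "http://203.0.113.5:8080",
}
# separate connect/read timeouts make it clearer which phase is timing out
r = requests.get("https://www.lustmexico.com/products.json",
                 proxies=proxies, timeout=(10, 30))
print(r.status_code)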
I've been trying to download a file directly from a URL. The thing is, when I do it at home it works perfectly fine, but when I run it on my company's server it doesn't work.
My code is:
import requests

url = "https://servicos.ibama.gov.br/ctf/publico/areasembargadas/downloadListaAreasEmbargadas.php"
response = requests.get(url, verify=False)

# response.content is bytes, so the file must be opened in binary mode ("wb")
output = open("teste.zip", "wb")
output.write(response.content)
output.close()
and the error message that I get is:
HTTPSConnectionPool(host='servicos.ibama.gov.br', port=443): Max retries exceeded with url: /ctf/publico/areasembargadas/downloadListaAreasEmbargadas.php (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x000001F2F792C0F0>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))
I know that we use a proxy here. I've been looking for solutions to this problem and I can see that this is relevant, but I couldn't find anything to do with this information.
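For what it's worth, the getaddrinfo failure means the company machine cannot resolve the host directly, which usually means the traffic has to go through the corporate proxy. A minimal sketch, assuming a placeholder proxy address (ask your network team for the real one):
import requests

proxies = {
    "http": "http://proxy.mycompany.local:8080",   # placeholder corporate proxy
    "https": "http://proxy.mycompany.local:8080",
}
url = "https://servicos.ibama.gov.br/ctf/publico/areasembargadas/downloadListaAreasEmbargadas.php"
response = requests.get(url, proxies=proxies, verify=False)
with open("teste.zip", "wb") as f:  # binary mode, since response.content is bytes
    f.write(response.content)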
When I call my company's API with requests, it fails with:
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='mycompanyurl.in', port=443): Max retries exceeded with url: /api/v1/issues.json (Caused by NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x51047d0>: Failed to establish a new connection: [Errno 110] Connection timed out',))
However, mycompanyurl.in is fine & I can open it in a browser as well.
I'm using Python 2.7.5.
I faced this issue earlier, and in my case our server's IP address was not allowed to access the APIs by the API provider. So maybe you should contact your API provider and ask them to whitelist your server's IP.
I don't know your infrastructure exactly, but I resolved the same issue.
For example, assume you have one AWS EC2 instance (internal IP 150.150.150.150, external IP 10.10.10.10) and a second one hosting an API (internal IP x.x.x.x, external IP y.y.y.y), and you want to call the API on the second instance.
If the security group of the second EC2 instance allows, for example, port 5000 over HTTP only for the internal IP (150.150.150.150) of the first instance, you will hit this issue. When you allow the external IP (10.10.10.10) instead, the call will succeed.
To bring this closer to your case: I suggest checking the security group/policy on the instance where 'mycompanyurl.in' is hosted. Maybe something there is wrong.
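As a concrete illustration of the fix described above, the ingress rule could be added with boto3; the security group ID, port, and CIDR below are placeholders for this example:
import boto3

ec2 = boto3.client("ec2")
# allow the caller's external IP (10.10.10.10 in the example) to reach port 5000 on the API instance
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpProtocol="tcp",
    FromPort=5000,
    ToPort=5000,
    CidrIp="10.10.10.10/32",
)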
I have the following code:
res = requests.get(url)
I call it from multiple threads, which produces the following error:
ConnectionError: HTTPConnectionPool(host='bjtest.com', port=80): Max retries exceeded with url: /rest/data?method=check&test=123 (Caused by : [Errno 104] Connection reset by peer)
I have used the following methods, but the error still occurs:
s = requests.session()
s.keep_alive = False
OR
res = requests.get(url, headers={'Connection': 'close'})
So, how should I fix it?
BTW, the URL is fine, but it can only be accessed internally, so the URL itself is not the problem. Thanks!
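For reference, a retry-with-backoff sketch on a reused Session - something that often helps with transient connection resets under concurrency, assuming the endpoint is otherwise reachable from the machine running the script:
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
# retry failed connections/reads up to 5 times with exponential backoff
retry = Retry(total=5, backoff_factor=0.5, status_forcelist=[502, 503, 504])
session.mount("http://", HTTPAdapter(max_retries=retry))
res = session.get("http://bjtest.com/rest/data?method=check&test=123", timeout=10)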
Do you run your script on a Mac? I met a similar problem. You can execute ulimit -n to check how many file descriptors you can have open at a time.
You can use the snippet below to enlarge the limit (the new soft limit, e.g. 4096, is up to you):
import resource
resource.setrlimit(resource.RLIMIT_NOFILE, (4096, resource.RLIM_INFINITY))
Hope this helps.
I also have a blog post related to this problem.
I had a similar case; hopefully this saves you some time:
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8001): Max retries exceeded with url: /enroll/ (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x10f96ecc0>: Failed to establish a new connection: [Errno 61] Connection refused'))
The problem was actually silly... nothing was listening on localhost port 8001! Restarting the server solved it.
The error message (which is admittedly a little confusing) actually means that requests failed to connect to your requested URL at all.
In this case that's because your url is http://bjtest.com/rest/data?method=check&test=123, which isn't a real website.
It has nothing to do with the format you made the request in. Fix your url and it should (presumably) work for you.
I'm trying to run code on a site that only accepts HTTPS connections, and am having trouble incorporating it with proxies.
I run code such as this to instantiate the proxy:
os.environ['https_proxy'] = 'http://' + proxy
And when I try to make requests using that previously configured proxy (I'm going through the site's API), I always get this error:
HTTPSConnectionPool(host=[ . . . ], port=443): Max retries exceeded with url: [. . .] (Caused by ProxyError('Cannot connect to proxy.', NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fab996ef790>: Failed to establish a new connection: [Errno -2] Name or service not known',)))
My question is of course how to fix the error, but more generally: when an HTTPS connection is forced, what are the ways to work with it so that you are not completely blocked from using the site through proxies?
The proxy server's host name cannot be resolved to an IP address.
Either there is a problem with the proxy's host name, or there is a problem with the DNS server. If you are sure that the host is correct, try using its IP address, e.g.
proxy = '192.168.1.1:1234'
os.environ['https_proxy'] = 'http://' + proxy
If that works then the proxy is OK, but the name resolution is failing for some reason. Try using curl and see if that works, e.g.
https_proxy='https://localhost:1234' curl -v https://httpbin.org
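As a Python-side alternative to the environment variable, you can also pass the proxy straight to requests (same placeholder address as above):
import requests

proxies = {"https": "http://192.168.1.1:1234"}  # placeholder proxy address
r = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
print(r.json())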