I am trying to use or change my IP address (a rotating IP address) while web scraping public data accessible to everyone, but I can't find a working solution. I'm on Windows 10 and the Anaconda IDE.
For example, I execute the code below:
import requests
domain = "https://www.undernews.fr"
# Define your proxies.
# The socks5h scheme lets the SOCKS server resolve the hostname,
# so make sure that you use 'socks5h'.
proxies = {
    'http': 'socks5h://127.0.0.1:9050',
    'https': 'socks5h://127.0.0.1:9050'
}
a = requests.get(domain.strip(), proxies=proxies).text
print(a)
And my kernel returns:
File "C:\Users\FirstName\anaconda3\lib\site-packages\requests\adapters.py", line 519, in send
raise ConnectionError(e, request=request)
ConnectionError: SOCKSHTTPSConnectionPool(host='www.undernews.fr', port=443): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.contrib.socks.SOCKSHTTPSConnection object at 0x000001CF0D4EA3A0>: Failed to establish a new connection: [WinError 10061] Aucune connexion n’a pu être établie car l’ordinateur cible l’a expressément refusée'))
It looks like those proxies have been blocked.
The translated part of your error:
"No connection could be established because the target computer expressly refused it"
Maybe try looking into proxyscrape to get a list of "free" proxies, or at least try with a different proxy IP.
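If you go that route, here is a minimal rotating-proxy sketch (the addresses in the list are placeholders, not working proxies; free proxies fail often, so treat this as a starting point rather than a drop-in solution):
import random
import requests

# Placeholder proxy list -- substitute addresses scraped with proxyscrape or another source.
proxy_list = [
    "socks5h://127.0.0.1:9050",   # local Tor SOCKS port, only useful if Tor is running
    "http://11.22.33.44:8080",    # made-up HTTP proxy, replace with a real one
]

def fetch(url):
    proxy = random.choice(proxy_list)
    proxies = {"http": proxy, "https": proxy}
    try:
        return requests.get(url, proxies=proxies, timeout=10).text
    except requests.exceptions.RequestException:
        return None  # dead or blocked proxy -- pick another one and retry

print(fetch("https://www.undernews.fr") is not None)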
Related
I tried to use a proxy with the requests library:
import requests
proxies = {'https': 'http://xxx.xxx.xxx.xx:yyyy',
'http': 'http://xx.xxx.xxx.xxx:yyyy'}
r = requests.get('https://www.instagram.com', proxies=proxies)
print(r.status_code)
and faced this problem:
requests.exceptions.ProxyError: HTTPSConnectionPool(host='www.wikipedia.org', port=443): Max retries exceeded with url: / (Caused by ProxyError('Cannot connect to proxy.', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000013CB6D8D610>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond')))
I researched many different sites and solutions to this problem but nothing helped.
Then I started asking myself questions: "How does a proxy work?", "How do I choose a proxy?".
For my project I need several proxies (maybe even several dozen different ones), so buying them was not an option; I used public proxies. (Correct me if it is possible to buy a single proxy or VPN account that gives many different addresses rather than one permanent proxy address.)
Also, while searching for an answer, I ran into what seems to me a strange reaction of the program to changing the computer's Internet source: connecting through my router, public Wi-Fi and mobile Internet produced different errors. How is this possible?
You should try this.
Your code:
proxies = {'https': 'http://xxx.xxx.xxx.xx:yyyy',
'http': 'http://xx.xxx.xxx.xxx:yyyy'}
New (remove the http:// and https:// prefixes in the proxies dict):
proxies = {'https': 'xxx.xxx.xxx.xx:yyyy',
'http': 'xx.xxx.xxx.xxx:yyyy'}
I also had a similar error; it generally occurs because of HTTPS certificate validation. You can try adding the parameter verify=False:
r = requests.get('https://www.instagram.com', proxies=proxies, verify=False)
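Note that verify=False (lowercase) skips TLS certificate validation, so urllib3 will print an InsecureRequestWarning on every request. A small sketch that silences the warning, with the same placeholder proxy addresses as above:
import urllib3
import requests

# Silence the InsecureRequestWarning that verify=False triggers.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

proxies = {'https': 'xxx.xxx.xxx.xx:yyyy',   # placeholder addresses, as above
           'http': 'xx.xxx.xxx.xxx:yyyy'}
r = requests.get('https://www.instagram.com', proxies=proxies, verify=False)
print(r.status_code)
Only do this if you accept that the connection is no longer verified against the server's certificate.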
I have created an .exe file with the help of the pyinstaller module. The problem occurs when I perform a request to an endpoint via the .exe using HTTPS proxies, which throws this error:
requests.exceptions.ProxyError: HTTPSConnectionPool(host='www.lustmexico.com', port=443): Max retries exceeded with url: /products.json (Caused by ProxyError('Cannot connect to proxy.', timeout('_ssl.c:1059: The handshake operation timed out')))
But when I execute the request via my main.py file (i.e. the main entry point of the program, running the Python files directly, not the converted .exe), no error happens.
Here's how my proxies are configured:
ip = "IP OF MY PROXY"
proxies = {
    "https": 'https://' + ip,
    "http": 'http://' + ip
}
return proxies
And the way I perform the request is:
r = requests.get(self.link + "/products.json", proxies=proxies, headers=headers, timeout=timeout)
At first I guessed it was the timeout, but it is quite high now and I have tested it; that is, for sure, not the main cause of the error.
After a long search I found suggestions that the problem is with my HTTPS proxies or the SSL installed on my machine, but I'm not really sure about that. I still don't understand the problem; please help.
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='mycompanyurl.in',
port=443): Max retries exceeded with url: /api/v1/issues.json (Caused by
NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object
at 0x51047d0>: Failed to establish a new connection: [Errno 110] Connection timed out',))
However, mycompanyurl.in is fine and I can open it in a browser as well.
I'm using Python 2.7.5.
I faced this issue earlier, and in my case the IP address of our server was not allowed to access the APIs by the API provider. So maybe you should contact your API provider and ask them to whitelist your server's IP.
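If you are not sure which address to ask them to whitelist, a quick way to see the public IP your requests actually originate from is to call the public httpbin.org service:
import requests

# Prints the public IP that remote servers see for your outgoing requests.
print(requests.get("https://httpbin.org/ip", timeout=10).json()["origin"])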
I don't know exactly about your infrastructure, but I resolved the same issue.
For example, assume you have one AWS EC2 instance (internal IP 10.10.10.10, external IP 150.150.150.150) and a second one hosting an API (internal IP x.x.x.x, external IP y.y.y.y), and you want to call the API on the second EC2.
If the security group of the second EC2 allows, for example, port 5000 over HTTP only for the internal IP of the first EC2 (10.10.10.10), you will hit this issue. When you add the external IP (150.150.150.150) instead, the call will succeed.
To bring this closer to your case, I would suggest checking the security group/policy on the instance where 'mycompanyurl.in' is hosted. Maybe something is wrong there.
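As a rough way to tell a security-group or firewall drop apart from other failures, you could probe the endpoint with a short timeout; this sketch just reuses the URL from the question above:
import requests

try:
    requests.get("https://mycompanyurl.in/api/v1/issues.json", timeout=5)
    print("Reachable")
except requests.exceptions.ConnectTimeout:
    # Packets silently dropped -- typical of a security group or firewall rule.
    print("Connect timeout: likely blocked by a security group or firewall")
except requests.exceptions.ConnectionError as exc:
    # 'Connection refused' and similar errors mean the host answered but rejected the connection.
    print("Connection error: {}".format(exc))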
I'm trying to send a GET request from my Django app to a Spring app hosted on my local machine.
I've tested sending requests to other websites outside localhost and it works perfectly. The problem appears when sending to http://localhost:port.
This is the function that sends requests in my Django app. It works for url1 but doesn't for url2.
import requests
from django.http import HttpResponse

def send_mail(request):
    url1 = "https://httpbin.org/get"
    url2 = "http://localhost:5000"
    response = requests.get(url2)
    return HttpResponse(response)
Here's the error that shows up whenever I try to send a request to localhost.
HTTPConnectionPool(host='localhost', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
I have this host: http://retsau.torontomls.net:7890/ and I want to access http://retsau.torontomls.net:7890/rets-treb3pv/server/login. How can I accomplish this using Python Requests? All my attempts so far have failed.
I also followed the solution here - Python Requests - Use navigate site by servers IP - and came up with this:
response = requests.get('http://206.152.41.279/rets-treb3pv/server/login', headers={'Host': 'retsau.torontomls.net'})
but that resulted in this error:
requests.exceptions.ConnectionError: HTTPConnectionPool(host='206.152.41.279', port=80): Max retries exceeded with url: /rets-treb3pv/server/login (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x10a4f6d84>: Failed to establish a new connection: [Errno 60] Operation timed out',))
The funny thing is that everything seems to work perfectly fine in Postman; I am able to access all sorts of URLs on that server, from logging in to searching for something.
You left out the port number (7890) from the URL to your get call:
response = requests.get('http://206.152.41.279:7890/rets-treb3pv/server/login',  # note the added :7890
                        headers={'Host': 'retsau.torontomls.net'})
Also, unless you actually have a specific reason for accessing the site by IP address, it would make more sense to put the FQDN in the URL rather than the Host header:
response = requests.get('http://retsau.torontomls.net:7890/rets-treb3pv/server/login')