I have followed this tutorial, but I still get no output. Below is my code in view.py:
import urllib.request

from bs4 import BeautifulSoup
from django.shortcuts import render

def index(request):
    #html = "a"
    #url = requests.get("https://www.python.org/")
    #page = urllib.request.urlopen(url)
    #soup = BeautifulSoup(page.read())
    #soup = url.content
    #urllib3.disable_warnings()
    #requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
    #url = url.content
    #default_headers = make_headers(basic_auth='myusername:mypassword')
    #http = ProxyManager("https://myproxy.com:8080/", headers=default_headers)
    r = urllib.request.urlopen('http://www.aflcio.org/Legislation-and-Politics/Legislative-Alerts').read()
    soup = BeautifulSoup(r, 'html.parser')
    url = type(soup)
    context = {"result": url}
    return render(request, 'index.html', context)
Output:
urlopen error [WinError 10060] A connection attempt failed because the
connected party did not properly respond after a period of time, or
established connection failed because connected host has failed to respond
If you are sitting behind a firewall or similar, you might have to specify a proxy for the request to get through.
See the example below using the requests library.
import requests

proxies = {
    'http': 'http://10.10.1.10:3128',
    'https': 'http://10.10.1.10:1080',
}
r = requests.get('http://example.org', proxies=proxies)
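Since the question itself uses urllib.request rather than requests, here is a rough equivalent with a ProxyHandler (the proxy addresses are the same placeholders as above):

import urllib.request

proxy_support = urllib.request.ProxyHandler({
    'http': 'http://10.10.1.10:3128',
    'https': 'http://10.10.1.10:1080',
})
opener = urllib.request.build_opener(proxy_support)
urllib.request.install_opener(opener)
# subsequent urlopen calls go through the proxy
r = urllib.request.urlopen('http://www.aflcio.org/Legislation-and-Politics/Legislative-Alerts').read()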
Trying to bypass Cloudflare's wall, I wanted to get access to cf_clearance...
I tried cfscrape; link to the package: [link].
import cfscrape

cookie_value, user_agent = cfscrape.get_cookie_string("https://somesite.com")
# 'request' is assumed to be a raw HTTP request string built up elsewhere
request = "GET / HTTP/1.1\r\n"
request += "Cookie: %s\r\nUser-Agent: %s\r\n" % (cookie_value, user_agent)
print(request)
This should return cf_clearance and __cfduid, but in our case it's returning:
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://somesite.com
I also tried cf_clearance, to make the Cloudflare challenge pass successfully. This is the code that I tried:
from playwright.sync_api import sync_playwright
from cf_clearance import sync_cf_retry, sync_stealth
import requests

# without cf_clearance, the cf challenge fails
proxies = {
    "all": "socks5://localhost:7890"
}
res = requests.get('https://somesite.com', proxies=proxies)
if '<title>Please Wait... | Cloudflare</title>' in res.text:
    print("cf challenge fail")
# get cf_clearance
with sync_playwright() as p:
    browser = p.chromium.launch(headless=False, proxy={"server": "socks5://localhost:7890"})
    page = browser.new_page()
    sync_stealth(page, pure=True)
    page.goto('https://somesite.com')
    res = sync_cf_retry(page)
    if res:
        cookies = page.context.cookies()
        for cookie in cookies:
            if cookie.get('name') == 'cf_clearance':
                cf_clearance_value = cookie.get('value')
                print(cf_clearance_value)
        ua = page.evaluate('() => {return navigator.userAgent}')
        print(ua)
    else:
        print("cf challenge fail")
    browser.close()
# use cf_clearance; must be the same IP and UA
headers = {"user-agent": ua}
cookies = {"cf_clearance": cf_clearance_value}
res = requests.get('https://somesite.com', proxies=proxies, headers=headers, cookies=cookies)
if '<title>Please Wait... | Cloudflare</title>' not in res.text:
    print("cf challenge success")
The above code was from here.
I tried it with and without proxies. With proxies, the output is:
Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it
Without proxies, the output is:
cf challenge fail
...
NotImplementedError: Encountered recaptcha. Check whether your proxy is an elite proxy.
I am connected to the web via VPN and I would like to connect to a news site to grab, well, news. A library exists for this: FinNews. This is the code:
import FinNews as fn
cnbc_feed = fn.CNBC(topics=['finance', 'earnings'])
print(cnbc_feed.get_news())
print(cnbc_feed.possible_topics())
Now, because of the VPN, the connection won't work, and it throws:
<urlopen error [WinError 10061] No connection could be made because
the target machine actively refused it ( client - server )
So, separately, I set out to understand how to make a connection work, and it does work (it prints "Connected"):
import urllib.request
from urllib.error import HTTPError, URLError

proxy = "http://user:pw#proxy:port"
proxies = {"http": "http://%s" % proxy}
url = "http://www.google.com/search?q=test"
headers = {'User-agent': 'Mozilla/5.0'}
try:
    proxy_support = urllib.request.ProxyHandler(proxies)
    opener = urllib.request.build_opener(proxy_support, urllib.request.HTTPHandler(debuglevel=1))
    urllib.request.install_opener(opener)
    req = urllib.request.Request(url, None, headers)
    html = urllib.request.urlopen(req).read()
    #print(html)
    print("Connected")
except (HTTPError, URLError) as err:
    print("No internet connection.")
Now I have figured out how to access news and how to make a connection via the VPN, but I can't bring the two together: I want to grab the news via the library, through the VPN. I am fairly new to Python, so I guess I don't fully get the logic yet.
EDIT: I tried to combine this with feedparser, based on furas' hint:
import urllib.request
from urllib.error import HTTPError, URLError
import feedparser

proxy = "http://user:pw#proxy:port"
proxies = {"http": "http://%s" % proxy}
#url = "http://www.google.com/search?q=test"
#url = "http://www.reddit.com/r/python/.rss"
url = "https://timesofindia.indiatimes.com/rssfeedstopstories.cms"
headers = {'User-agent': 'Mozilla/5.0'}
try:
    proxy_support = urllib.request.ProxyHandler(proxies)
    opener = urllib.request.build_opener(proxy_support, urllib.request.HTTPHandler(debuglevel=1))
    urllib.request.install_opener(opener)
    req = urllib.request.Request(url, None, headers)
    html = urllib.request.urlopen(req).read()
    #print(html)
    #print("Connected")
    feed = feedparser.parse(html)
    #print(feed['feed']['link'])
    print("Number of RSS posts :", len(feed.entries))
    entry = feed.entries[1]
    print("Post Title :", entry.title)
except (HTTPError, URLError) as err:
    print("No internet connection.")
But I get the same error... this is a big nut to crack.
May I ask for your advice? Thank you :)
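One avenue worth trying (a sketch only, assuming your feedparser version supports the handlers argument and that the proxy URL really has the usual user:pw@host:port form): let feedparser do the download itself and hand it the same ProxyHandler that worked in the standalone test.

import urllib.request
import feedparser

# placeholder credentials — substitute your real proxy here
proxy_support = urllib.request.ProxyHandler({
    "http": "http://user:pw@proxy:port",
    "https": "http://user:pw@proxy:port",
})
# feedparser fetches the URL itself, routing the request through the handler
feed = feedparser.parse(
    "https://timesofindia.indiatimes.com/rssfeedstopstories.cms",
    handlers=[proxy_support],
)
print("Number of RSS posts :", len(feed.entries))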
I'm building a small script to test certain proxies against an API.
It seems that the actual request isn't triggered under the provided proxy. For example, the following request is considered valid and I get a response from the API:
import requests

# request_data is assumed to be defined elsewhere in the script
r = requests.post("https://someapi.com", data=request_data,
                  proxies={"http": "http://999.999.999.999:1212"}, timeout=5)
print(r.text)
How come I get the response even if the proxy provided was invalid?
requests selects which proxy to use by the URL scheme. Your request goes to an https:// URL, but you only defined a proxy for 'http', so requests bypassed the proxy and connected directly. You can define proxies for all the relevant schemes like this:
import requests

pxy = "http://999.999.999.999:1212"
proxyDict = {
    'http': pxy,
    'https': pxy,
    'ftp': pxy,
    'SOCKS4': pxy
}
r = requests.post("https://someapi.com", data=request_data,
                  proxies=proxyDict, timeout=5)
print(r.text)
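To check whether the proxy is actually being used rather than silently bypassed, one sanity test is to compare the origin IP that https://httpbin.org/ip reports with and without the proxy (the proxy below is the same invalid placeholder, so the proxied call should now fail loudly):

import requests

pxy = "http://999.999.999.999:1212"
proxyDict = {'http': pxy, 'https': pxy}

direct = requests.get('https://httpbin.org/ip', timeout=5).json()['origin']
print('direct origin:', direct)
try:
    via_proxy = requests.get('https://httpbin.org/ip', proxies=proxyDict, timeout=5).json()['origin']
    print('origin via proxy:', via_proxy)
except requests.exceptions.RequestException:
    # with the proxy defined for https as well, an unreachable proxy
    # now raises instead of letting the request go out directly
    print('proxy was used but could not connect')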
My code is supposed to call https://httpbin.org/ip to get my origin IP, using a random proxy chosen from a list scraped from a website that provides free proxies.
However, when I run my code below, it sometimes returns a correct response (a 200 with the expected body) and sometimes returns:
MaxRetryError: HTTPSConnectionPool(host='httpbin.org', port=443): Max retries exceeded with url: /ip (Caused by ProxyError('Cannot connect to proxy.', NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x000001EF83500DC8>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond')))
Traceback (most recent call last):
File "<ipython-input-196-baf92a94e8ec>", line 19, in <module>
response = s.get(url,proxies=proxyDict)
This is the code I am using:
import random

import requests
from bs4 import BeautifulSoup

res = requests.get('https://free-proxy-list.net/', headers={'User-Agent': 'Mozilla/5.0'})
soup = BeautifulSoup(res.text, "lxml")
proxies = []
for items in soup.select("#proxylisttable tbody tr"):
    proxy_list = ':'.join([item.text for item in items.select("td")[:2]])
    proxies.append(proxy_list)

url = 'https://httpbin.org/ip'
choosenProxy = random.choice(proxies)
proxyDict = {
    'http': 'http://' + str(choosenProxy),
    'https': 'https://' + str(choosenProxy)
}
s = requests.Session()
response = s.get(url, proxies=proxyDict)
print(response.text)
What does the error mean? Is there a way I could fix this?
Try the following solution. It will keep trying with different proxies until it finds a working one. Once it finds a working proxy, the script should give you the required response and break the loop.
import random
import requests
from bs4 import BeautifulSoup

url = 'https://httpbin.org/ip'
proxies = []

res = requests.get('https://free-proxy-list.net/', headers={'User-Agent': 'Mozilla/5.0'})
soup = BeautifulSoup(res.text, "lxml")
for items in soup.select("#proxylisttable tbody tr"):
    proxy_list = ':'.join([item.text for item in items.select("td")[:2]])
    proxies.append(proxy_list)

while True:
    choosenProxy = random.choice(proxies)
    proxyDict = {
        'http': f'http://{choosenProxy}',
        'https': f'https://{choosenProxy}'
    }
    print("trying with:", proxyDict)
    try:
        response = requests.get(url, proxies=proxyDict, timeout=5)
        print(response.text)
        break
    except Exception:
        continue
I was wondering whether my requests are being stopped by the website and whether I need to set a proxy. I first tried to close the HTTP connection, but I failed. I also tried to test my code, but now there seems to be no output. Maybe if I use a proxy everything will be OK?
Here is the code.
import requests
from urllib.parse import urlencode
import json
from bs4 import BeautifulSoup
import re
from html.parser import HTMLParser
from multiprocessing import Pool
from requests.exceptions import RequestException
import time

def get_page_index(offset, keyword):
    #headers = {'User-Agent':'Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_8; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50'}
    data = {
        'offset': offset,
        'format': 'json',
        'keyword': keyword,
        'autoload': 'true',
        'count': 20,
        'cur_tab': 1
    }
    url = 'http://www.toutiao.com/search_content/?' + urlencode(data)
    try:
        response = requests.get(url, headers={'Connection': 'close'})
        response.encoding = 'utf-8'
        if response.status_code == 200:
            return response.text
        return None
    except RequestException as e:
        print(e)

def parse_page_index(html):
    data = json.loads(html)
    if data and 'data' in data.keys():
        for item in data.get('data'):
            url = item.get('article_url')
            if url and len(url) < 100:
                yield url

def get_page_detail(url):
    try:
        response = requests.get(url, headers={'Connection': 'close'})
        response.encoding = 'utf-8'
        if response.status_code == 200:
            return response.text
        return None
    except RequestException as e:
        print(e)

def parse_page_detail(html):
    soup = BeautifulSoup(html, 'lxml')
    title = soup.select('title')[0].get_text()
    pattern = re.compile(r'articleInfo: (.*?)},', re.S)
    pattern_abstract = re.compile(r'abstract: (.*?)\.', re.S)
    res = re.search(pattern, html)
    res_abstract = re.search(pattern_abstract, html)
    if res and res_abstract:
        data = res.group(1).replace(r".replace(/<br \/>|\n|\r/ig, '')", "") + '}'
        abstract = res_abstract.group(1).replace(r"'", "")
        content = re.search(r'content: (.*?),', data).group(1)
        source = re.search(r'source: (.*?),', data).group(1)
        time_pattern = re.compile(r'time: (.*?)}', re.S)
        date = re.search(time_pattern, data).group(1)
        date_today = time.strftime('%Y-%m-%d')
        img = re.findall(r'src="(.*?)"', content)
        if date[1:11] == date_today and len(content) > 50 and img:
            return {
                'title': title,
                'content': content,
                'source': source,
                'date': date,
                'abstract': abstract,
                'img': img[0]
            }

def main(offset):
    flag = 1
    html = get_page_index(offset, '光伏')
    for url in parse_page_index(html):
        html = get_page_detail(url)
        if html:
            data = parse_page_detail(html)
            if data:
                html_parser = HTMLParser()
                cwl = html_parser.unescape(data.get('content'))
                data['content'] = cwl
                print(data)
                print(data.get('img'))
                flag += 1
                if flag == 5:
                    break

if __name__ == '__main__':
    pool = Pool()
    pool.map(main, [i*20 for i in range(10)])
And the error is here:
HTTPConnectionPool(host='tech.jinghua.cn', port=80): Max retries exceeded with url: /zixun/20160720/f191549.shtml (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x00000000048523C8>: Failed to establish a new connection: [Errno 11004] getaddrinfo failed',))
By the way, when I first tested my code, everything was OK!
Thanks in advance!
It seems to me you're hitting the connection limit of the HTTPConnectionPool, since you start 10 workers at the same time.
Try one of the following:
- Increase the request timeout (in seconds): requests.get('url', timeout=5)
- Close the response with Response.close(). Instead of returning response.text, assign the response to a variable, close it, and then return the variable, as sketched below.
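A minimal sketch of the second suggestion applied to the question's get_page_detail (the 5-second timeout is an arbitrary choice):

import requests
from requests.exceptions import RequestException

def get_page_detail(url):
    try:
        response = requests.get(url, headers={'Connection': 'close'}, timeout=5)
        response.encoding = 'utf-8'
        # read the body into a variable, then close the response so the
        # underlying connection is released instead of lingering in the pool
        text = response.text if response.status_code == 200 else None
        response.close()
        return text
    except RequestException as e:
        print(e)
        return None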
When I faced this issue, I had the following problems:
- The requests Python module was unable to get information from any URL, although I was able to surf the site with a browser and could get wget or curl to download the page.
- pip install was also not working and used to fail with the following error:
Failed to establish a new connection: [Errno 11004] getaddrinfo failed
A certain site had blocked me, so I tried forcebindip to use another network interface for my Python modules, and then removed it. That probably messed up my network configuration, and the requests module and even the direct socket module were stuck and unable to fetch any URL.
So I followed the network configuration reset described in the link below, and now I am good:
network configuration reset
In case it helps someone else, I faced this same error message:
Client-Request-ID=long-string Retry policy did not allow for a retry: , HTTP status code=Unknown, Exception=HTTPSConnectionPool(host='table.table.core.windows.net', port=443): Max retries exceeded with url: /service(PartitionKey='requests',RowKey='9999') (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001D920ADA970>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed')).
...when trying to retrieve a record from Azure Table Storage using
table_service.get_entity(table_name, partition_key, row_key).
My issue:
- I had the table_name incorrectly defined.
- My structural URL was incorrect (after ".com" there was no slash, and another part of the URL was fused onto it).
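For reference, a minimal retrieval sketch assuming the azure-cosmosdb-table package (the account name, key, and table/entity names are placeholders):

from azure.cosmosdb.table.tableservice import TableService

# placeholders — use your real storage account credentials
table_service = TableService(account_name='myaccount', account_key='mykey')
# the service URL is built from account_name, and table_name must name an
# existing table; a typo in either produces the connection errors above
entity = table_service.get_entity('mytable', 'requests', '9999')
print(entity)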
Sometimes it's due to a VPN connection. I had the same problem: I wasn't even able to install the requests package via pip. I turned off my VPN and voilà, I managed to install it and also to make requests. The [Errno 11004] error was gone.