Scraping a webpage using BeautifulSoup - Python

I am attempting to scrape this site: https://www.senate.gov/general/contact_information/senators_cfm.cfm
My Code:
import requests
from bs4 import BeautifulSoup
URL = 'https://www.senate.gov/general/contact_information/senators_cfm.cfm'
page = requests.get(URL)
soup = BeautifulSoup(page.content, 'html.parser')
print(soup)
The issue is that it never actually seems to reach the site: the HTML I get in my soup variable is nothing like the HTML of the real webpage.

This worked for me:
headers = {
    'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36',
}
r = requests.get(URL, headers=headers)
Found the info here - https://towardsdatascience.com/5-strategies-to-write-unblock-able-web-scrapers-in-python-5e40c147bdaf

DUPLICATE: HTTP 503 Error while using python requests module
Try this:
import requests
from bs4 import BeautifulSoup
URL = 'https://www.senate.gov/general/contact_information/senators_cfm.cfm'
# same User-Agent header as in the answer above
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36'}
page = requests.get(URL, headers=headers)
soup = BeautifulSoup(page.content, 'html.parser')
print(soup)
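Since the linked duplicate is about a 503, it can also help to fail fast when the block happens. raise_for_status() is part of the requests API and turns error responses into exceptions; a small check, for illustration:
import requests
URL = 'https://www.senate.gov/general/contact_information/senators_cfm.cfm'
page = requests.get(URL)  # without headers this request is likely to be blocked
# raise_for_status() raises requests.exceptions.HTTPError on 4xx/5xx responses,
# so a 503 surfaces immediately instead of silently producing bad soup
page.raise_for_status()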

requests.get(url) not returning for this specific url

I'm trying to use requests.get(url).text to get the HTML from this website. However, when requests.get(url) is called with this specific URL, it never returns no matter how long I wait. It works with other URLs, but this one specifically is giving me trouble. Code is below:
from bs4 import BeautifulSoup
import requests
source = requests.get('https://www.carmax.com/cars/all', allow_redirects=True).text
soup = BeautifulSoup(source, 'lxml')
print(soup.prettify().encode('utf-8'))
Thanks for any help!
Try:
import requests
from bs4 import BeautifulSoup
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.76 Safari/537.36',
    'Upgrade-Insecure-Requests': '1',
    'DNT': '1',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en-US,en;q=0.5',
    'Accept-Encoding': 'gzip, deflate',
}
html = requests.get('https://www.carmax.com/cars/all', headers=headers)
soup = BeautifulSoup(html.content, 'html.parser')
print(soup.prettify())
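Separately, since the original complaint was that requests.get never returns, it is worth bounding the wait with the timeout parameter (part of the requests API), so a stalled request raises instead of hanging forever. A minimal sketch, reusing the headers above:
import requests
# timeout=(connect_timeout, read_timeout) in seconds; if either limit is
# exceeded, requests raises requests.exceptions.Timeout instead of hanging
try:
    html = requests.get('https://www.carmax.com/cars/all', headers=headers, timeout=(5, 30))
except requests.exceptions.Timeout:
    print('Request timed out')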

Elements on page don't exist when scraping wsj.com

I am using Python to scrape a webpage. This is my code:
import requests
from bs4 import BeautifulSoup
# Set local variables
URL = 'https://www.wsj.com/market-data/bonds'
page = requests.get(URL)
soup = BeautifulSoup(page.content, 'html.parser')
# Get Master data table and Last update from URL
table = soup.find("table", attrs={"class": "WSJTables--table--1QzSOCfq "})
print(table)
The result of that code is nothing: the table isn't found and I'm not sure why.
Any suggestions?
You need to add the User-Agent header, otherwise the page thinks that you're a bot and blocks you. Also note that you had an extra space in your class name:
import requests
from bs4 import BeautifulSoup
URL = 'https://www.wsj.com/market-data/bonds'
HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.121 Safari/537.36"
}
page = requests.get(URL, headers=HEADERS)
soup = BeautifulSoup(page.content, 'html.parser')
table = soup.find("table", attrs={"class": "WSJTables--table--1QzSOCfq"})
print(table)
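One caveat: class names like WSJTables--table--1QzSOCfq look build-generated, so they can change when the site is redeployed. If that happens, a substring match on the stable prefix is more resilient; BeautifulSoup supports CSS attribute selectors via select_one. A sketch, reusing the soup from above:
# matches any <table> whose class attribute contains the stable
# "WSJTables--table" prefix, even if the hash suffix changes
table = soup.select_one('table[class*="WSJTables--table"]')
print(table)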

beautifulsoup not returning all html

import requests
from bs4 import BeautifulSoup
r = requests.get('https://www.amazon.com/s?k=iphone+5s&ref=nb_sb_noss')
c = r.content
soup = BeautifulSoup(c, 'html.parser')
all = soup.find_all("span", {"class": "a-size-medium a-color-base a-text-normal"})
print(all)
So this is my simple Python script trying to scrape a page on Amazon, but not all of the HTML is returned in the soup variable, therefore I get nothing when trying to find a specific series of tags and extract them.
Try the code below; it should do the trick for you. You missed adding headers to your request:
import requests
from bs4 import BeautifulSoup
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3'}
url = 'https://www.amazon.com/s?k=iphone+5s&ref=nb_sb_noss'
response = requests.get(url, headers=headers)
print(response.text)
soup = BeautifulSoup(response.content, features="lxml")
my_all = soup.find_all("span", {"class": "a-size-medium a-color-base a-text-normal"})
print(my_all)
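To turn the matched tags into plain product titles, get_text() (a standard BeautifulSoup method) can be applied to each span, continuing from my_all above:
# strip=True trims surrounding whitespace from each title
titles = [span.get_text(strip=True) for span in my_all]
print(titles)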

BeautifulSoup Find periodically returns None

I am trying to get a value from a class. From time to time, find returns the value I need, but other times it no longer works.
Code:
import requests
from bs4 import BeautifulSoup
url = 'https://beru.ru/catalog/molotyi-kofe/76321/list'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                         '(KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36'}
page = requests.get(url, headers=headers)
soup = BeautifulSoup(page.content, 'html.parser')
item_count = (soup.find('div', class_='_2StYqKhlBr')).text.split()[4]
print(item_count)
The reason you get the values sometimes and sometimes not is that the website is protected by a CAPTCHA.
When the request is blocked by the CAPTCHA, the response comes from a URL like the following:
https://beru.ru/showcaptcha?retpath=https://beru.ru/catalog/molotyi-kofe/76321/list?ncrnd=4561_aa1b86c2ca77ae2b0831c4d95b9d85a4&t=0/1575204790/b39289ef083d539e2a4630548592a778&s=7e77bfda14c97f6fad34a8a654d9cd16
You can verify this by parsing the response content:
import requests
from bs4 import BeautifulSoup
r = requests.get('https://beru.ru/catalog/molotyi-kofe/76321/list')
soup = BeautifulSoup(r.text, 'html.parser')
for item in soup.findAll('div', attrs={'class': '_2StYqKhlBr _1wAXjGKtqe'}):
    print(item)
for item in soup.findAll('div', attrs={'class': 'captcha__image'}):
    for captcha in item.findAll('img'):
        print(captcha.get('src'))
And you will get the CAPTCHA image link:
https://beru.ru/captchaimg?aHR0cHM6Ly9leHQuY2FwdGNoYS55YW5kZXgubmV0L2ltYWdlP2tleT0wMEFMQldoTnlaVGh3T21WRmN4NWFJRUdYeWp2TVZrUCZzZXJ2aWNlPW1hcmtldGJsdWU,_0/1575206667/b49556a86deeece9765a88f635c7bef2_df12d7a36f0e2d36bd9c9d94d8d9e3d7
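Given that, a defensive version of the original lookup checks for None before dereferencing, so a served CAPTCHA page fails with a clear message instead of a bare AttributeError. A minimal sketch using the same URL and headers as the question:
import requests
from bs4 import BeautifulSoup
url = 'https://beru.ru/catalog/molotyi-kofe/76321/list'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                         '(KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36'}
page = requests.get(url, headers=headers)
soup = BeautifulSoup(page.content, 'html.parser')
div = soup.find('div', class_='_2StYqKhlBr')
if div is None:
    # find() returned None: most likely the CAPTCHA page was served
    print('Blocked by CAPTCHA, retry later')
else:
    item_count = div.text.split()[4]
    print(item_count)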

Generating URL for Yahoo and Bing Scraping for multiple pages with Python and BeautifulSoup

I want to scrape news from different sources. I found a way to generate a URL for scraping multiple pages from Google, but I think there is a way to generate a much shorter link.
Can you please tell me how to generate the URLs for scraping multiple pages of Bing and Yahoo news, and also, is there a way to make the Google URL shorter?
This is the code for Google:
from bs4 import BeautifulSoup
import requests
headers = {'User-Agent':'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36'}
term = 'usa'
page=0
for page in range(1, 5):
    page = page * 10
    url = 'https://www.google.com/search?q={}&tbm=nws&sxsrf=ACYBGNTx2Ew_5d5HsCvjwDoo5SC4U6JBVg:1574261023484&ei=H1HVXf-fHfiU1fAP65K6uAU&start={}&sa=N&ved=0ahUKEwi_q9qog_nlAhV4ShUIHWuJDlcQ8tMDCF8&biw=1280&bih=561&dpr=1.5'.format(term, page)
    print(url)
    response = requests.get(url, headers=headers)
    soup = BeautifulSoup(response.text, 'html.parser')
These are the URLs for Yahoo and Bing, but only for one page:
yahoo: url = 'https://news.search.yahoo.com/search?q={}'.format(term)
bing: url = 'https://www.bing.com/news/search?q={}'.format(term)
I am not sure if this shortened news URL is what you are looking for.
from bs4 import BeautifulSoup
import requests
headers = {'User-Agent':'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36'}
term = 'usa'
page=0
for page in range(1, 5):
    page = page * 10
    url = 'https://www.google.com/search?q={}&tbm=nws&start={}'.format(term, page)
    print(url)
    # verify=False disables TLS certificate verification; drop it unless needed
    response = requests.get(url, headers=headers, verify=False)
    soup = BeautifulSoup(response.text, 'html.parser')
#Yahoo:
from bs4 import BeautifulSoup
import requests
headers = {'User-Agent':'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36'}
term = 'usa'
page = 1
while True:
    url = 'https://news.search.yahoo.com/search?q={}&pz=10&b={}'.format(term, page)
    print(url)
    page = page + 10
    response = requests.get(url, headers=headers, verify=False)
    if response.status_code != 200:
        break
    soup = BeautifulSoup(response.text, 'html.parser')
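The question also asked about Bing. The same pagination pattern should carry over, but note the offset parameter here is an assumption: Bing's web search paginates with first=, and I am assuming its news search accepts the same parameter, so verify the URL in a browser before relying on it:
from bs4 import BeautifulSoup
import requests
headers = {'User-Agent':'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36'}
term = 'usa'
for offset in range(0, 40, 10):
    # 'first' as a result offset is borrowed from Bing web search pagination;
    # treat it as an assumption for the news endpoint
    url = 'https://www.bing.com/news/search?q={}&first={}'.format(term, offset)
    print(url)
    response = requests.get(url, headers=headers)
    soup = BeautifulSoup(response.text, 'html.parser')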
