Scrape the snippet text from the Google search page - Python

When we search for a question on Google, it often produces an answer in a snippet like the following:
My objective is to scrape this text ("August 4, 1961", circled in red in the screenshot) in my Python code.
Before trying to scrape the text, I stored the web response in a text file using the following code:
import requests
from bs4 import BeautifulSoup

page = requests.get("https://www.google.com/search?q=when+barak+obama+born")
soup = BeautifulSoup(page.content, 'html.parser')

out_file = open("web_response.txt", "w", encoding='utf-8')
out_file.write(soup.prettify())
out_file.close()
In the Inspect Element panel, I noticed that the snippet is inside a div with class Z0LcW XcVN5d (circled in green in the screenshot). However, the response in my txt file contains neither that class name nor the snippet in that form.
I've also tried this solution where the author scraped items with id rhs_block. But my response contains no such id.
I've searched for occurrences of "August 4, 1961" in my response txt file and tried to work out whether any of them could be the snippet, but none of the occurrences seemed to be the one I was looking for.
My plan was to get the div id or class name of the snippet and find its content like this:
# pseudocode: look up the snippet container by class or id
containers = soup.find_all(class_='something')  # or soup.find_all(id='something')
for tag in containers:
    print(f"tag text : {tag.text}")
Is there any way to do this?
NOTE: I'm also okay with using libraries other than BeautifulSoup and requests as long as they can produce the result.

There's no need to use Selenium; you can achieve this with requests and BS4, since everything you need is in the static HTML and there's no dynamic JavaScript.
Code and example in online IDE:
from bs4 import BeautifulSoup
import requests, lxml
headers = {
    'User-agent':
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582"
}
html = requests.get('https://www.google.com/search?q=Barack Obama born date', headers=headers).text
soup = BeautifulSoup(html, 'lxml')
born = soup.select_one('.XcVN5d').text
age = soup.select_one('.kZ91ed').text
print(born)
print(age)
Output:
August 4, 1961
age 59 years
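As a small variation, the search query can go through requests' params argument instead of being embedded in the URL with spaces; a minimal sketch (same selector as above, and Google's class names do change over time):
from bs4 import BeautifulSoup
import requests

headers = {
    'User-agent':
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582"
}
# requests URL-encodes the query when it is passed via params
params = {'q': 'Barack Obama born date'}

html = requests.get('https://www.google.com/search', params=params, headers=headers).text
soup = BeautifulSoup(html, 'lxml')

# same selector as in the answer above; may need updating if Google changes its markup
born = soup.select_one('.XcVN5d').text
print(born)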

Selenium will produce the result you need.
It's convenient because you can add any waits and see what is actually going on on your screen.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome(executable_path='/snap/bin/chromium.chromedriver')
driver.get('https://google.com/')
assert "Google" in driver.title
wait = WebDriverWait(driver, 20)
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, ".gLFyf.gsfi")))
input_field = driver.find_element_by_css_selector(".gLFyf.gsfi")
input_field.send_keys("how many people in the world")
input_field.send_keys(Keys.RETURN)
wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, ".Z0LcW.XcVN5d")))
result = driver.find_element_by_css_selector(".Z0LcW.XcVN5d").text
print(result)
driver.close()
driver.quit()
The result will probably surprise you :)
You'll need to install Selenium and ChromeDriver. On Windows, put the ChromeDriver executable on your PATH; on Linux, point to it explicitly, as in my example above.
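If you'd rather not touch the PATH on Windows, you can also point Selenium at the executable directly; a minimal sketch (the path below is just an example, adjust it to wherever you unpacked chromedriver.exe):
from selenium import webdriver

# hypothetical location of the downloaded chromedriver.exe
driver = webdriver.Chrome(executable_path=r"C:\tools\chromedriver.exe")
driver.get('https://google.com/')
driver.quit()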

Related

How to scrape multiple products on Google Shopping with Python?

Following this article, I created my first web scraper with Python. My intention is to scrape Google Shopping, looking for product prices. The script works, but I want to search for more than one product when I run it.
So, I'm looping over a list of products like this:
from time import sleep
from random import randint
import requests
from bs4 import BeautifulSoup
# from dataProducts import products
headers = {
    "User-Agent":
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582"
}

stores = ["Submarino", "Casas Bahia", "Extra.com.br", "Americanas.com",
          "Pontofrio.com", "Shoptime", "Magazine Luiza", "Amazon.com.br - Retail", "Girafa"]

products = [
    {
        "name": "Console Playstation 5",
        "lowestPrice": 4000.0,
        "highestPrice": 4400.0
    },
    {
        "name": "Controle Xbox Robot",
        "lowestPrice": 320.0,
        "highestPrice": 375.0
    }
]

for product in products:
    params = {"q": product["name"], 'tbm': 'shop'}
    response = requests.get("https://www.google.com/search",
                            params=params,
                            headers=headers)
    soup = BeautifulSoup(response.text, 'lxml')

    # Normal results
    for shopping_result in soup.select('.sh-dgr__content'):
        product = shopping_result.select_one('.Lq5OHe.eaGTj h4').text
        price = shopping_result.select_one('span.kHxwFf span.a8Pemb').text
        store = shopping_result.select_one('.IuHnof').text
        link = f"https://www.google.com{shopping_result.select_one('.Lq5OHe.eaGTj')['href']}"

        if store in stores:
            print(product)
            print(price)
            print(store)
            print(link)
            print()

    print()
    print('####################################################################################################################################################')
When I run the script, it doesn't bring back all the data, and sometimes it doesn't bring any data from the first search at all; it just shows the prints from the second iteration. I tried putting a 10-second sleep after the soup line and after the last line of the loop, and nothing changed.
I don't understand why my script can't get all the results for the given products. Can anyone give me a little help?
To start off, I would recommend Selenium; requests will most of the time not bring back the data. Second, if you are trying to get stock alerts for PS5s or Xboxes, I would scrape a retailer's website rather than Google. You will need to install Chrome and ChromeDriver. Link: https://chromedriver.chromium.org/downloads Below is how to use Selenium!
import selenium
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from fake_useragent import UserAgent

ua = UserAgent()

options = Options()
options.add_argument("user-agent=" + ua.random)
options.add_argument("--headless")
options.add_argument("--disable-gpu")
options.add_experimental_option("excludeSwitches", ["enable-logging"])

browser = webdriver.Chrome("chromedriver location", options=options)
browser.get("https://google.com")
html = browser.page_source
To get it set up you need to run:
pip install selenium
pip install fake_useragent
Then, using html, you can use BS4 to scrape the website.
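For example, a minimal sketch of feeding the Selenium page source into BS4 and reusing the selectors from the question (this assumes the browser was navigated to the Google Shopping results page; the class names are copied from the question and change often):
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'lxml')
# selectors copied from the question; Google Shopping markup changes frequently
for shopping_result in soup.select('.sh-dgr__content'):
    name = shopping_result.select_one('.Lq5OHe.eaGTj h4').text
    price = shopping_result.select_one('span.kHxwFf span.a8Pemb').text
    print(name, price)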

Unable to fetch the rest of the names leading to the next pages from a webpage using requests

I've created a script to get different names from this website, filtering State/Province to Alabama and Country to United States in the search box. The script can parse the names from the first page; however, I can't figure out how to get the results from the next pages as well using requests.
There are two options there to get all the names. Option one: using the show all 410 link, and option two: making use of the next button.
I've tried the following (capable of grabbing names from the first page):
import re
import requests
from bs4 import BeautifulSoup
URL = "https://cci-online.org/CCI/Verify/CCI/Credential_Verification.aspx"
params = {
    'errorpath': '/CCI/Verify/CCI/Credential_Verification.aspx'
}

with requests.Session() as s:
    s.headers['User-Agent'] = 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.102 Safari/537.36'
    r = s.get(URL)
    params['WebsiteKey'] = re.search(r"gWebsiteKey[^\']+\'(.*?)\'", r.text).group(1)
    params['hkey'] = re.search(r"gHKey[^\']+\'(.*?)\'", r.text).group(1)
    soup = BeautifulSoup(r.text, "lxml")
    payload = {i['name']: i.get('value', '') for i in soup.select('input[name]')}
    payload['ctl01$TemplateBody$WebPartManager1$gwpciPeopleSearch$ciPeopleSearch$ResultsGrid$Sheet0$Input4$DropDown1'] = 'AL'
    payload['ctl01$TemplateBody$WebPartManager1$gwpciPeopleSearch$ciPeopleSearch$ResultsGrid$Sheet0$Input5$DropDown1'] = 'United States'
    r = s.post(URL, params=params, data=payload)
    soup = BeautifulSoup(r.text, "lxml")
    for item in soup.select("table.rgMasterTable > tbody > tr a[title]"):
        print(item.text)
In case someone comes up with a solution based on Selenium: I've already found success with that approach (below), but I'm not willing to go that route:
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
link = "https://cci-online.org/CCI/Verify/CCI/Credential_Verification.aspx"
with webdriver.Chrome() as driver:
    driver.get(link)
    wait = WebDriverWait(driver, 15)
    Select(wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "select[id$='Input4_DropDown1']")))).select_by_value("AL")
    Select(wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "select[id$='Input5_DropDown1']")))).select_by_value("United States")
    wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "input[id$='SubmitButton']"))).click()
    wait.until(EC.visibility_of_element_located((By.XPATH, "//a[contains(.,'show all')]"))).click()
    wait.until(EC.invisibility_of_element_located((By.XPATH, "//span[@id='ctl01_LoadingLabel' and .='Loading']")))
    soup = BeautifulSoup(driver.page_source, "lxml")
    for item in soup.select("table.rgMasterTable > tbody > tr a[title]"):
        print(item.text)
How can I get the rest of the names from that webpage leading to the next pages using requests module?
First, click that link in Chrome with the Network panel open, then look at the Form Data for the request.
Pay extra attention to __EVENTTARGET and __EVENTARGUMENT.
Next, inspect one of those next links; they will look like this:
<a onclick="return false;" title="Go to page 2" class="rgCurrentPage" href="javascript:__doPostBack('ctl01$TemplateBody$WebPartManager1$gwpciPeopleSearch$ciPeopleSearch$ResultsGrid$Grid1$ctl00$ctl02$ctl00$ctl07','')"><span>2</span></a>
The __doPostBack arguments go in __EVENTTARGET and __EVENTARGUMENT, and everything else should match what you see in the Network panel (headers as well as form data).
It will be helpful to proxy requests through Charles or Fiddler so you can compare the requests side by side.
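In code, that means refreshing the hidden ASP.NET fields from the previous response and re-posting with the pager's __doPostBack target filled in; a rough sketch continuing inside the question's requests.Session() block, after the first POST (the control ID is copied from the example link above and may differ on the live page):
# refresh __VIEWSTATE and the other hidden inputs from the last response
soup = BeautifulSoup(r.text, "lxml")
payload = {i['name']: i.get('value', '') for i in soup.select('input[name]')}
# fire the pager's __doPostBack by hand
payload['__EVENTTARGET'] = 'ctl01$TemplateBody$WebPartManager1$gwpciPeopleSearch$ciPeopleSearch$ResultsGrid$Grid1$ctl00$ctl02$ctl00$ctl07'
payload['__EVENTARGUMENT'] = ''
r = s.post(URL, params=params, data=payload)
soup = BeautifulSoup(r.text, "lxml")
for item in soup.select("table.rgMasterTable > tbody > tr a[title]"):
    print(item.text)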

How to scrape text from this webpage?

I'm trying to scrape this HTML title
<h2 id="p89" data-pid="89"><span id="page77" class="pageNum" data-no="77" data-before-text="77"></span>Tuesday, July 30</h2>
from this website: https://wol.jw.org/en/wol/h/r1/lp-e
My code:
from bs4 import BeautifulSoup
import requests
url = requests.get('https://wol.jw.org/en/wol/h/r1/lp-e').text
soup = BeautifulSoup(url, 'lxml')
textodiario = soup.find('header')
dia = textodiario.h2.text
print(dia)
It should return today's date, but it returns a past day instead: Wednesday, July 24
At the moment I don't have a PC to test on, so please double-check for possible errors.
You need the chromedriver for your platform too; put it in the same folder as the script.
My idea would be to use Selenium to get the HTML and then parse it:
import time
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
url = "https://wol.jw.org/en/wol/h/r1/lp-e"
options = Options()
options.add_argument('--headless')
options.add_argument('--disable-gpu')
driver = webdriver.Chrome(chrome_options=options)
driver.get(url)
time.sleep(3)
page = driver.page_source
driver.quit()
soup = BeautifulSoup(page, 'html.parser')
textodiario = soup.find('header')
dia = textodiario.h2.text
print(dia)
The data is being loaded asynchronously and the contents of the div are being changed. What you need is a Selenium web driver to act alongside bs4.
I actually tried your code, and there's definitely something wrong with how the website/the code is grabbing data: when I pipe the entire response text to a grep for July, it gives:
Wednesday, July 24
<h2 id="p71" data-pid="71"><span id="page75" class="pageNum" data-no="75" data-before-text="75"></span>Wednesday, July 24</h2>
<h2 id="p74" data-pid="74">Thursday, July 25</h2>
<h2 id="p77" data-pid="77">Friday, July 26</h2>
If I had to take a guess, the fact that they're keeping multiple dates under h2 probably doesn't help, but I have almost zero experience in web scraping. And if you notice, July 30th isn't even in there, meaning that somewhere along the line your data is getting weird (as LazyCoder points out).
Hope that Selenium fixes your issue.
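If you go the Selenium route, one way to avoid grabbing a stale heading is to look for the h2 whose text matches today's date; a minimal sketch, assuming the headings keep the "Tuesday, July 30" format shown above:
from datetime import date
from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Chrome()
driver.get('https://wol.jw.org/en/wol/h/r1/lp-e')
soup = BeautifulSoup(driver.page_source, 'html.parser')
driver.quit()

# e.g. "Tuesday, July 30" - assumes the headings use this exact format
today = date.today().strftime('%A, %B ') + str(date.today().day)
for h2 in soup.find_all('h2'):
    if h2.get_text(strip=True) == today:
        print(h2.get_text(strip=True))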
Go to the Network tab and you will find the link.
https://wol.jw.org/wol/dt/r1/lp-e/2019/7/30
Here is the code.
import requests
from bs4 import BeautifulSoup

headers = {'User-Agent':
           'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36'}
session = requests.Session()
response = session.get('https://wol.jw.org/wol/dt/r1/lp-e/2019/7/30', headers=headers)
result = response.json()
data = result['items'][0]['content']
soup = BeautifulSoup(data, 'html.parser')
print(soup.select_one('h2').text)
Output:
Tuesday, July 30
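If you want today's entry rather than the hardcoded 2019/7/30, the same URL pattern can presumably be built from the current date (assuming the path really is year/month/day without leading zeros); continuing from the code above:
from datetime import date

today = date.today()
# assumed URL pattern, based on the 2019/7/30 link found in the Network tab
url = f'https://wol.jw.org/wol/dt/r1/lp-e/{today.year}/{today.month}/{today.day}'
response = session.get(url, headers=headers)
print(BeautifulSoup(response.json()['items'][0]['content'], 'html.parser').select_one('h2').text)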

Scraping AJAX e-commerce site using python

I have a problem on scraping an e-commerce site using BeautifulSoup. I did some Googling but I still can't solve the problem.
Please refer to the screenshots: (1) Chrome F12 (Inspect Element) and (2) the Python result.
Here is the site that I tried to scrape: "https://shopee.com.my/search?keyword=h370m"
Problem:
When I open Inspect Element in Google Chrome (F12), I can see the tags for the product's name, price, etc. But when I run my Python program, I don't get the same code and tags in the result. After some googling, I found out that this website uses an AJAX query to get the data.
Can anyone help me with the best method to get these products' data by scraping an AJAX site? I would like to display the data in table form.
My code:
import requests
from bs4 import BeautifulSoup
source = requests.get('https://shopee.com.my/search?keyword=h370m')
soup = BeautifulSoup(source.text, 'html.parser')
print(soup)
Welcome to StackOverflow! You can inspect where the AJAX request is being sent and replicate it.
In this case the request goes to this API URL. You can then use requests to perform a similar request. Notice, however, that this API endpoint requires a correct User-Agent header. You can use a package like fake-useragent or just hardcode a string for the agent.
import requests
# fake useragent
from fake_useragent import UserAgent
user_agent = UserAgent().chrome
# or hardcode
user_agent = 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1468.0 Safari/537.36'
url = 'https://shopee.com.my/api/v2/search_items/?by=relevancy&keyword=h370m&limit=50&newest=0&order=desc&page_type=search'
resp = requests.get(url, headers={
'User-Agent': user_agent
})
data = resp.json()
products = data.get('items')
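From there you can walk the JSON however you like; a tiny sketch (the field names 'name' and 'price' are assumptions about the response, so check them in the network panel):
for item in products or []:
    # field names are assumptions - inspect the JSON in the network panel to confirm them
    print(item.get('name'), item.get('price'))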
Welcome to StackOverflow! :)
As an alternative, you can check Selenium
See example usage from documentation:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Firefox()
driver.get("http://www.python.org")
assert "Python" in driver.title
elem = driver.find_element_by_name("q")
elem.clear()
elem.send_keys("pycon")
elem.send_keys(Keys.RETURN)
assert "No results found." not in driver.page_source
driver.close()
When you use requests (or libraries like Scrapy), JavaScript is usually not loaded. As @dmitrybelyakov mentioned, you can replay these API calls or imitate normal user interaction using Selenium.

BeautifulSoup can't find class that exists on webpage?

So I am trying to scrape the following webpage: https://www.scoreboard.com/uk/football/england/premier-league/,
specifically the scheduled and finished results. Thus I am looking for the elements with class "stage-finished" or "stage-scheduled". However, when I scrape the webpage and print out what page_soup contains, it doesn't contain these elements.
I found another SO question with an answer saying that this is because the content is loaded via AJAX and I need to look at the XHR requests under the Network tab in Chrome dev tools to find the file that's loading the necessary data; however, it doesn't seem to be there.
import bs4
import requests
from bs4 import BeautifulSoup as soup
import csv
import datetime
myurl = "https://www.scoreboard.com/uk/football/england/premier-league/"
headers = {'User-Agent':'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36'}
page = requests.get(myurl, headers=headers)
page_soup = soup(page.content, "html.parser")
scheduled = page_soup.select(".stage-scheduled")
finished = page_soup.select(".stage-finished")
live = page_soup.select(".stage-live")
print(page_soup)
print(scheduled[0])
The above code of course throws an error, as there is no content in the scheduled list.
My question is, how do I go about getting the data I'm looking for?
I copied the contents of the XHR files to a notepad and searched for stage-finished and other tags and found nothing. Am I missing something easy here?
The page is JavaScript rendered. You need Selenium. Here is some code to start on:
from selenium import webdriver
url = 'https://www.scoreboard.com/uk/football/england/premier-league/'
driver = webdriver.Chrome()
driver.get(url)
stages = driver.find_elements_by_class_name('stage-scheduled')
driver.close()
Or you could pass driver.page_source into the BeautifulSoup constructor, like this:
soup = BeautifulSoup(driver.page_source, 'html.parser')
Note:
You need to install a webdriver first. I installed chromedriver.
Good luck!
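Putting the two together, a minimal sketch that hands the rendered page to BeautifulSoup and reuses the class names from the question (they may have changed since):
from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Chrome()
driver.get('https://www.scoreboard.com/uk/football/england/premier-league/')
soup = BeautifulSoup(driver.page_source, 'html.parser')
driver.close()

# class names taken from the question; they may have changed since
for row in soup.select('.stage-scheduled, .stage-finished, .stage-live'):
    print(row.get_text(strip=True))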
