How to print Google Search results properly with bs4? - python

I have working code that first prints the search titles and then the URLs, but it prints a lot of URLs in between the website titles. How can I print them in a format like the one below, avoiding printing the same URLs 10 times for each result:
1) Title url
2) Title url
and so on...
My code:
import requests
from bs4 import BeautifulSoup

search = input("Search:")
page = requests.get(f"https://www.google.com/search?q=" + search)
soup = BeautifulSoup(page.content, "html5lib")
links = soup.findAll("a")
heading_object = soup.find_all('h3')
for info in heading_object:
    x = info.getText()
    print(x)
    for link in links:
        link_href = link.get('href')
        if "url?q=" in link_href:
            y = (link.get('href').split("?q=")[1].split("&sa=U")[0])
            print(y)

If you get the titles and links separately, then you can use zip() to group them in pairs:
for info, link in zip(heading_object, links):
    info = info.getText()
    link = link.get('href')
    if "?q=" in link:
        link = link.split("?q=")[1].split("&sa=U")[0]
    print(info, link)
But this may have a problem when some title or link doesn't exist on the page, because then it will create wrong pairs: it will pair a title with the link for the next element. You should rather search for elements which keep both the title and the link, and inside every such element search for the single title and single link to create the pair. If there is no title or link, then you can put some default value, and it will not create wrong pairs, as sketched below.
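A minimal sketch of that container-based approach (assuming 'div.g' wraps a single organic result; Google's markup changes often, so verify the selector in the page source before relying on it):
import requests
from bs4 import BeautifulSoup

search = input("Search:")
page = requests.get("https://www.google.com/search", params={"q": search})
soup = BeautifulSoup(page.content, "html5lib")

# 'div.g' is assumed to contain both the title and the link of one result
for container in soup.select("div.g"):
    heading = container.find("h3")
    anchor = container.find("a", href=True)
    title = heading.getText() if heading else "<no title>"  # default value
    link = anchor["href"] if anchor else "<no link>"        # default value
    if "url?q=" in link:
        link = link.split("?q=")[1].split("&sa=U")[0]
    print(title, link)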

You're looking for this:
for result in soup.select('.yuRUbf'):
    title = result.select_one('.DKV0Md').text
    url = result.a['href']
    print(f'{title}, {url}\n')  # prints TITLE, URL followed by a new line
If you're using an f-string, then the appropriate way is to use it like so:
page = requests.get(f"https://www.google.com/search?q=" + search) # not proper f-string
page = requests.get(f"https://www.google.com/search?q={search}") # proper f-string
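As a side note, passing the query through the params argument of requests also handles URL-encoding of spaces and special characters for you:
page = requests.get("https://www.google.com/search", params={"q": search})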
Code:
import requests, lxml
from bs4 import BeautifulSoup
headers = {
    'User-agent':
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582"
}
params = {
    "q": "python memes",
    "hl": "en"
}

soup = BeautifulSoup(requests.get('https://www.google.com/search', headers=headers, params=params).text, 'lxml')

for result in soup.select('.yuRUbf'):
    title = result.select_one('.DKV0Md').text
    url = result.a['href']
    print(f'{title}, {url}\n')
--------
'''
35 Funny And Best Python Programming Memes - CodeItBro, https://www.codeitbro.com/funny-python-programming-memes/
ML Memes (#python.memes_) • Instagram photos and videos, https://www.instagram.com/python.memes_/?hl=en
28 Python Memes ideas - Pinterest, https://in.pinterest.com/codeitbro/python-memes/
'''
Alternatively, you can do the same thing by using Google Organic Results API from SerpApi. It's a paid API with a free plan.
One of the differences is that you only need to iterate over the JSON rather than figuring out how to scrape the page.
Code to integrate:
from serpapi import GoogleSearch
import os
params = {
    "api_key": os.getenv("API_KEY"),
    "engine": "google",
    "q": "python memes",
    "hl": "en"
}

search = GoogleSearch(params)
results = search.get_dict()

for result in results['organic_results']:
    title = result['title']
    url = result['link']
    print(f'{title}, {url}\n')
-------
'''
35 Funny And Best Python Programming Memes - CodeItBro, https://www.codeitbro.com/funny-python-programming-memes/
ML Memes (#python.memes_) • Instagram photos and videos, https://www.instagram.com/python.memes_/?hl=en
28 Python Memes ideas - Pinterest, https://in.pinterest.com/codeitbro/python-memes/
'''
Disclaimer, I work for SerpApi.

Related

Google scholar search results web scraping: problem only scrapes excerpts ending with triple dot (...)

I'm using the following code to scrape papers from Google Scholar. I noticed that only the shortened descriptions of the papers are scraped, not the entire description. If you look at the Google Scholar search results page, only a short excerpt of the text is shown, ending with a triple dot (...).
The scraper only captures this excerpt, leaving the rest of the information out. This happens for authors (especially when there are many), journal names, and abstracts.
Do you maybe know a solution to this? If you execute the code yourself you will see what I mean.
from bs4 import BeautifulSoup
import requests, lxml, os, json
headers = {
    'User-agent':
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582"
}
params = {
    "q": "samsung",
    "hl": "en",
}

html = requests.get('https://scholar.google.com/scholar', headers=headers, params=params).text
soup = BeautifulSoup(html, 'lxml')

# Scrape just PDF links
for pdf_link in soup.select('.gs_or_ggsm a'):
    pdf_file_link = pdf_link['href']
    print(pdf_file_link)

# JSON data will be collected here
data = []

# Container where all needed data is located
for result in soup.select('.gs_ri'):
    title = result.select_one('.gs_rt').text
    title_link = result.select_one('.gs_rt a')['href']
    publication_info = result.select_one('.gs_a').text
    snippet = result.select_one('.gs_rs').text
    cited_by = result.select_one('#gs_res_ccl_mid .gs_nph+ a')['href']
    related_articles = result.select_one('a:nth-child(4)')['href']
    try:
        all_article_versions = result.select_one('a~ a+ .gs_nph')['href']
    except:
        all_article_versions = None
    data.append({
        'title': title,
        'title_link': title_link,
        'publication_info': publication_info,
        'snippet': snippet,
        'cited_by': f'https://scholar.google.com{cited_by}',
        'related_articles': f'https://scholar.google.com{related_articles}',
        'all_article_versions': f'https://scholar.google.com{all_article_versions}',
    })

print(json.dumps(data, indent=2, ensure_ascii=False))
I think I saw your code in the Scrape Google Scholar with Python blog post.
This is because only part of the page's content is displayed in search results; mostly it is the part related to your search query, or a snippet written in advance. Therefore, it makes no sense for Google to display the full text in search results. If you are still interested in the full text, you can follow each of the links and scrape the information you need. But keep in mind that each site uses its own selectors, so the script will have to be adjusted for every target site.
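A rough sketch of that follow-the-links idea, assuming the data list and headers dict from the code above. The 'p' selector here is just a crude, hypothetical fallback; every journal site needs its own selectors:
import requests
from bs4 import BeautifulSoup

for entry in data:
    try:
        page = requests.get(entry['title_link'], headers=headers, timeout=10)
    except requests.RequestException:
        entry['full_text'] = None
        continue
    page_soup = BeautifulSoup(page.text, 'lxml')
    # crude fallback: join all paragraph text; real sites need real selectors
    entry['full_text'] = ' '.join(p.get_text() for p in page_soup.select('p'))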

Scrape urls from search results in Python and BeautifulSoup

I was trying to scrape some URLs from the search results, and I tried including both a cookies setting and a user-agent such as Mozilla/5.0, and so on. I still cannot get any URLs from the search results. Any idea how I can get this working?
from bs4 import BeautifulSoup
import requests
monitored_tickers = ['GME', 'TSLA', 'BTC']
def search_for_stock_news_urls(ticker):
    search_url = "https://www.google.com/search?q=yahoo+finance+{}&tbm=nws".format(ticker)
    r = requests.get(search_url)
    soup = BeautifulSoup(r.text, 'html.parser')
    atags = soup.find_all('a')
    hrefs = [link['href'] for link in atags]
    return hrefs

raw_urls = {ticker: search_for_stock_news_urls(ticker) for ticker in monitored_tickers}
raw_urls
You could be running into the issue that requests and bs4 may not be the best tools for what you're trying to accomplish. As balderman said in another comment, using a Google search API will be easier.
This code:
from googlesearch import search
tickers = ['GME', 'TSLA', 'BTC']
links_list = []
for ticker in tickers:
    ticker_links = search(ticker, stop=25)
    links_list.append(ticker_links)
will make a list of the top 25 links on Google for each ticker and append that list to another list. Yahoo Finance is sure to be among those links, and a simple keyword-based filter will get the Yahoo Finance URL for that specific ticker, as sketched below. You could also adjust the search criteria in the search() function to whatever you wish, say ticker + ' yahoo finance' for example.
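A hedged sketch of such a keyword filter, assuming links_list from the snippet above (note that search() returns a generator, so each inner result can only be iterated once):
yahoo_urls = [
    link
    for ticker_links in links_list
    for link in ticker_links
    if 'finance.yahoo.com' in link  # simple keyword filter; adjust as needed
]
print(yahoo_urls)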
Google News can be scraped easily with requests and beautifulsoup. It is enough to send a user-agent header to extract data from there.
Check out SelectorGadget Chrome extension to visually grab CSS selectors by clicking on the element you want to extract.
If you only want to extract URLs from Google News, then it's as simple as:
for result in soup.select('.dbsr'):
    link = result.a['href']
    # 10 links here..
Code and example in the online IDE that scrape more data:
from bs4 import BeautifulSoup
import requests, lxml
headers = {
    "User-Agent":
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582"
}
params = {
    "q": "yahoo finance BTC",
    "hl": "en",
    "gl": "us",
    "tbm": "nws",
}

html = requests.get('https://www.google.com/search', headers=headers, params=params)
soup = BeautifulSoup(html.text, 'lxml')

for result in soup.select('.dbsr'):
    link = result.a['href']
    print(link)
-----
'''
https://finance.yahoo.com/news/riot-blockchain-reports-record-second-203000136.html
https://finance.yahoo.com/news/el-salvador-not-require-bitcoin-175818038.html
https://finance.yahoo.com/video/bitcoin-hovers-around-50k-paypal-155437774.html
... other links
'''
Alternatively, you can achieve the same result by using Google News Results API from SerpApi. It's a paid API with a free plan.
The difference is that you don't have to figure out how to extract elements, maintain the parser over time, or bypass blocks from Google.
Code to integrate:
import os
from serpapi import GoogleSearch
params = {
    "engine": "google",
    "q": "coca cola",
    "tbm": "nws",
    "api_key": os.getenv("API_KEY"),
}

search = GoogleSearch(params)
results = search.get_dict()

for news_result in results["news_results"]:
    print(f"Title: {news_result['title']}\nLink: {news_result['link']}\n")
-----
'''
Title: Coca-Cola Co. stock falls Monday, underperforms market
Link: https://www.marketwatch.com/story/coca-cola-co-stock-falls-monday-underperforms-market-01629752653-994caec748bb
... more results
'''
P.S. I wrote a blog post about how to scrape Google News (including pagination) in a bit more detail, with visual representations.
Disclaimer, I work for SerpApi.

How can I scrape the first link of a google search with beautiful soup

I'm trying to make a script that scrapes the first link of a Google search, so that it gives me back only the first link and I can run a search in the terminal and look at the link later with the search term. I'm struggling to get only the first result. This is the closest I've got so far.
import requests
from bs4 import BeautifulSoup
research_later = "hiya"
goog_search = "https://www.google.co.uk/search?sclient=psy-ab&client=ubuntu&hs=k5b&channel=fs&biw=1366&bih=648&noj=1&q=" + research_later
r = requests.get(goog_search)
soup = BeautifulSoup(r.text)
for link in soup.find_all('a'):
    print(research_later + " :" + link.get('href'))
It seems Google uses the cite tag to store the link, so we can just use soup.find('cite').text, like this:
import requests
from bs4 import BeautifulSoup
research_later = "hiya"
goog_search = "https://www.google.co.uk/search?sclient=psy-ab&client=ubuntu&hs=k5b&channel=fs&biw=1366&bih=648&noj=1&q=" + research_later
r = requests.get(goog_search)
soup = BeautifulSoup(r.text, "html.parser")
print(soup.find('cite').text)
Output is:
www.urbandictionary.com/define.php?term=hiya
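Note that the cite text is display text rather than a full href, so it may lack a scheme. A small, hypothetical helper to make it requestable could look like this:
def normalize(url_text):
    # prepend a scheme if the cite text lacks one (an assumption, not a rule)
    if url_text.startswith(('http://', 'https://')):
        return url_text
    return 'http://' + url_text

print(normalize(soup.find('cite').text))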
You can use either the select_one() bs4 method (for CSS selectors) or find() to get only one element from the page. To grab CSS selectors you can use the SelectorGadget extension.
Code and example in the online IDE:
from bs4 import BeautifulSoup
import requests, json
headers = {
    'User-agent':
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582"
}

html = requests.get('https://www.google.com/search?q=ice cream', headers=headers).text
soup = BeautifulSoup(html, 'lxml')

# locating the first .yuRUbf container,
# then calling for the <a> tag and its 'href' attribute
link = soup.select_one('.yuRUbf a')['href']
print(link)
# https://en.wikipedia.org/wiki/Ice_cream
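For completeness, a rough find()-based equivalent of the same lookup (the .tF2Cxc container class is Google's and changes frequently, so treat it as an assumption):
container = soup.find('div', class_='tF2Cxc')  # first organic result container
if container:
    print(container.find('a')['href'])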
Alternatively, you can do the same thing by using Google Search Engine Results API from SerpApi. It's a paid API with a free plan.
The main difference is that everything (selecting, bypass blocks, proxy rotation, and more) is already done for the end-user with a JSON output.
Code to integrate:
import os
from serpapi import GoogleSearch

params = {
    "engine": "google",
    "q": "ice cream",
    "api_key": os.getenv("API_KEY"),
}

search = GoogleSearch(params)
results = search.get_dict()

# [0] - first index from the search results
link = results['organic_results'][0]['link']
print(link)
# https://en.wikipedia.org/wiki/Ice_cream
Disclaimer, I work for SerpApi.

Not Getting proper links from google search results using mechanize and Beautifulsoup

I am using the following snippet to get links from the google search results for the "keyword" I give.
import mechanize
from bs4 import BeautifulSoup
import re
def googlesearch():
    br = mechanize.Browser()
    br.set_handle_robots(False)
    br.set_handle_equiv(False)
    br.addheaders = [('User-agent', 'Mozilla/5.0')]
    br.open('http://www.google.com/')

    # do the query
    br.select_form(name='f')
    br.form['q'] = 'scrapy'  # query
    data = br.submit()
    soup = BeautifulSoup(data.read())
    for a in soup.find_all('a', href=True):
        print("Found the URL:", a['href'])

googlesearch()
Since I am parsing the search results HTML page to get links, it's getting all the <a> tags. But what I need is to get only the links for the results. Another thing: when you look at the output of the href attribute, it gives something like this:
Found the URL:
/search?q=scrapy&hl=en-IN&gbv=1&prmd=ivns&source=lnt&tbs=li:1&sa=X&ei=DT8HU9SlG8bskgWvqIHQAQ&ved=0CBgQpwUoAQ
But the actual link present in the href attribute is http://scrapy.org/
Can anyone point me to the solution for the two questions mentioned above? Thanks in advance.
Get only the links for the results
The links you're interested in are inside the h3 tags (with r class):
<li class="g">
    <h3 class="r">
        <a href="/url?q=http://scrapy.org/&sa=U&ei=XdIUU8DOHo-ElAXuvIHQDQ&ved=0CBwQFjAA&usg=AFQjCNHVtUrLoWJ8XWAROG-a4G8npQWXfQ">
            <b>Scrapy</b> | An open source web scraping framework for Python
        </a>
    </h3>
    ..
You can find the links using a CSS selector:
soup.select('.r a')
Get the actual link
URLs are in the following format:
/url?q=http://scrapy.org/&sa=U&ei=s9YUU9TZH8zTkQWps4BY&ved=0CBwQFjAA&usg=AFQjCNE-2uiVSl60B9cirnlWz2TMv8KMyQ
       ^^^^^^^^^^^^^^^^^^
Actual url is in the q parameter.
To get the entire query string, use urllib.parse.urlparse:
>>> from urllib.parse import urlparse, parse_qs
>>> url = '/url?q=http://scrapy.org/&sa=U&ei=s9YUU9TZH8zTkQWps4BY&ved=0CBwQFjAA&usg=AFQjCNE-2uiVSl60B9cirnlWz2TMv8KMyQ'
>>> urlparse(url).query
'q=http://scrapy.org/&sa=U&ei=s9YUU9TZH8zTkQWps4BY&ved=0CBwQFjAA&usg=AFQjCNE-2uiVSl60B9cirnlWz2TMv8KMyQ'
Then, use urllib.parse.parse_qs to parse the query string and extract the q parameter value:
>>> parse_qs(urlparse(url).query)['q']
['http://scrapy.org/']
>>> parse_qs(urlparse(url).query)['q'][0]
'http://scrapy.org/'
Final result
from urllib.parse import urlparse, parse_qs

for a in soup.select('.r a'):
    print(parse_qs(urlparse(a['href']).query)['q'][0])
output:
http://scrapy.org/
http://doc.scrapy.org/en/latest/intro/tutorial.html
http://doc.scrapy.org/
http://scrapy.org/download/
http://doc.scrapy.org/en/latest/intro/overview.html
http://scrapy.org/doc/
http://scrapy.org/companies/
https://github.com/scrapy/scrapy
http://en.wikipedia.org/wiki/Scrapy
http://www.youtube.com/watch?v=1EFnX1UkXVU
https://pypi.python.org/pypi/Scrapy
http://pypix.com/python/build-website-crawler-based-upon-scrapy/
http://scrapinghub.com/scrapy-cloud
Or you could use https://code.google.com/p/pygoogle/, which basically does the same thing, and you can get links to the results as well.
A snippet of output from sample query for 'stackoverflow':
*Found 3940000 results*
[Stack Overflow]
Stack Overflow is a question and answer site for professional and enthusiast
programmers. It's 100% free, no registration required. Take the 2-minute tour
http://stackoverflow.com/
In your code example you were extracting all <a> tags from the HTML, not only those related to organic results:
for a in soup.find_all('a', href=True):
    print("Found the URL:", a['href'])
You're looking for this to grab links from organic results only:
# container with needed data: title, link, etc.
for result in soup.select('.tF2Cxc'):
    link = result.select_one('.yuRUbf a')['href']
Code and example in the online IDE:
from bs4 import BeautifulSoup
import requests, lxml
headers = {
    'User-agent':
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582'
}
params = {
    'q': 'minecraft',
    'gl': 'us',
    'hl': 'en',
}

html = requests.get('https://www.google.com/search', headers=headers, params=params)
soup = BeautifulSoup(html.text, 'lxml')

for result in soup.select('.tF2Cxc'):
    link = result.select_one('.yuRUbf a')['href']
    print(link)
---------
'''
https://www.minecraft.net/en-us/
https://classic.minecraft.net/
https://play.google.com/store/apps/details?id=com.mojang.minecraftpe&hl=en_US&gl=US
https://en.wikipedia.org/wiki/Minecraft
'''
Alternatively, you can achieve the same thing by using Google Organic Results API from SerpApi. It's a paid API with a free plan.
The difference in your case is that you don't have to build everything from scratch, bypass blocks, or maintain the parser over time.
Code to integrate to achieve your goal:
import os
from serpapi import GoogleSearch
params = {
    "engine": "google",
    "q": "minecraft",
    "hl": "en",
    "gl": "us",
    "api_key": os.getenv("API_KEY"),
}

search = GoogleSearch(params)
results = search.get_dict()

for result in results["organic_results"]:
    print(result['link'])
---------
'''
https://www.minecraft.net/en-us/
https://classic.minecraft.net/
https://play.google.com/store/apps/details?id=com.mojang.minecraftpe&hl=en_US&gl=US
https://en.wikipedia.org/wiki/Minecraft
'''
Disclaimer, I work for SerpApi.

BeautifulSoup script parsing Google search results stopped working

I would like to parse Google search results with Python. Everything worked perfectly, but now I keep getting an empty list. Here is the code that used to work fine:
query = urllib.urlencode({'q': self.Tagsinput.GetValue()+footprint,'ie': 'utf-8', 'num':searchresults, 'start': '100'})
result = url + query
myopener = MyOpener()
page = myopener.open(result)
xss = page.read()
soup = BeautifulSoup.BeautifulSoup(xss)
contents = [x['href'] for x in soup.findAll('a', attrs={'class':'l'})]
This script worked perfectly in December, now it stopped working.
As far as I understand the problem is in this line:
contents = [x['href'] for x in soup.findAll('a', attrs={'class':'l'})]
when I print contents the program returns an empty list: []
Please, anybody, help.
The API works a whole lot better, too. Simple JSON which you can easily parse and manipulate.
import urllib, json
BASE_URL = 'http://ajax.googleapis.com/ajax/services/search/web?v=1.0&'
url = BASE_URL + urllib.urlencode({'q' : SearchTerm.encode('utf-8')})
raw_res = urllib.urlopen(url).read()
results = json.loads(raw_res)
hit1 = results['responseData']['results'][0]
prettyresult = ' - '.join((urllib.unquote(hit1['url']), hit1['titleNoFormatting']))
At the time of writing this answer, you don't have to parse the <script> tags (for the most part) to get the output from Google Search. This can be achieved by using the beautifulsoup, requests, and lxml libraries.
Code to get the title, link, and example in the online IDE:
import requests, lxml
from bs4 import BeautifulSoup
headers = {
    "User-Agent":
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3538.102 Safari/537.36 Edge/18.19582"
}

html = requests.get('https://www.google.com/search?q=minecraft', headers=headers).text
soup = BeautifulSoup(html, 'lxml')

for container in soup.findAll('div', class_='tF2Cxc'):
    title = container.select_one('.DKV0Md').text
    link = container.find('a')['href']
    print(f'{title}\n{link}')
# part of the output:
'''
Minecraft Official Site | Minecraft
https://www.minecraft.net/en-us/
Minecraft Classic
https://classic.minecraft.net/
'''
Alternatively, you can do it as well by using Google Search Engine Results API from SerpApi. It's a paid API with a free trial of 5,000 searches. Check out the Playground.
Code to integrate:
from serpapi import GoogleSearch
import os
params = {
    "api_key": os.getenv("API_KEY"),  # environment for API_KEY
    "engine": "google",
    "q": "minecraft",
}

search = GoogleSearch(params)
results = search.get_dict()

for result in results['organic_results']:
    title = result['title']
    link = result['link']
    print(f'{title}\n{link}')
# part of the output:
'''
Minecraft Official Site | Minecraft
https://www.minecraft.net/en-us/
Minecraft Classic
https://classic.minecraft.net/
'''
Disclaimer, I work for SerpApi.
