How to extract the description in a Google search using Python?

I want to extract the description from a Google search. Currently I have this code:
from urlparse import urlparse, parse_qs
import urllib
from lxml.html import fromstring
from requests import get

url = 'https://www.google.com/search?q=Gotham'
raw = get(url).text
pg = fromstring(raw)
v = []
for result in pg.cssselect(".r a"):
    url = result.get("href")
    if url.startswith("/url?"):
        url = parse_qs(urlparse(url).query)['q']
        print url[0]
That extracts the URLs related to the search; how can I extract the description that appears under each URL?

You can scrape Google search descriptions using the BeautifulSoup web scraping library.
To collect information from all pages you can use "pagination" with a while True loop. The loop is effectively endless; the exit condition in our case is the presence of a button that switches to the next page, namely the CSS selector ".d6cvqb a[id=pnnext]":
if soup.select_one('.d6cvqb a[id=pnnext]'):
    params["start"] += 10
else:
    break
You can use CSS selectors to find all the information you need (description, title, etc.). Selectors are easy to identify on the page using the SelectorGadget Chrome extension (it does not always work perfectly if the website is rendered via JavaScript).
Make sure you're passing a user-agent in the request headers to act like a "real" user visit, because the default requests user-agent is python-requests and websites understand that it's most likely a script sending the request. Check what your user-agent is.
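As a quick sanity check, you can print the user-agent that requests sends by default (a minimal sketch; the exact version number depends on your installed requests):
import requests

# The default user-agent identifies the client as a script, e.g. "python-requests/2.31.0"
print(requests.utils.default_user_agent())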
Check code in online IDE.
from bs4 import BeautifulSoup
import requests, json, lxml

# https://docs.python-requests.org/en/master/user/quickstart/#passing-parameters-in-urls
params = {
    "q": "gotham",   # query
    "hl": "en",      # language
    "gl": "us",      # country of the search, US -> USA
    "start": 0,      # page number offset, starts from 0
    # "num": 100     # maximum number of results to return per page
}

# https://docs.python-requests.org/en/master/user/quickstart/#custom-headers
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36"
}

page_num = 0
website_data = []

while True:
    page_num += 1
    print(f"page: {page_num}")

    html = requests.get("https://www.google.com/search", params=params, headers=headers, timeout=30)
    soup = BeautifulSoup(html.text, 'lxml')

    for result in soup.select(".tF2Cxc"):
        website_name = result.select_one(".yuRUbf a")["href"]

        try:
            description = result.select_one(".lEBKkf").text
        except AttributeError:
            description = None

        website_data.append({
            "website_name": website_name,
            "description": description
        })

    if soup.select_one('.d6cvqb a[id=pnnext]'):
        params["start"] += 10
    else:
        break

print(json.dumps(website_data, indent=2, ensure_ascii=False))
Example output:
[
  {
    "website_name": "https://www.imdb.com/title/tt3749900/",
    "description": "The show follows Jim as he cracks strange cases whilst trying to help a young Bruce Wayne solve the mystery of his parents' murder. It seemed each week for a ..."
  },
  {
    "website_name": "https://www.netflix.com/watch/80023082",
    "description": "When the key witness in a homicide ends up dead while being held for questioning, Gordon suspects an inside job and seeks details from an old friend."
  },
  {
    "website_name": "https://www.gothamknightsgame.com/",
    "description": "Gotham Knights is an open-world, action RPG set in the most dynamic and interactive Gotham City yet. In either solo-play or with one other hero, ..."
  },
  # ...
]
Alternatively, you can use the Google Search Engine Results API from SerpApi. It's a paid API with a free plan.
The difference is that it bypasses blocks (including CAPTCHA) from Google, so there is no need to create a parser and maintain it.
Code example:
from serpapi import GoogleSearch
from urllib.parse import urlsplit, parse_qsl
import json, os

params = {
    "api_key": os.getenv("API_KEY"),  # serpapi key
    "engine": "google",               # serpapi parser engine
    "q": "gotham",                    # search query
    "num": "100"                      # number of results per page (100 per page in this case)
    # other search parameters: https://serpapi.com/search-api#api-parameters
}

search = GoogleSearch(params)  # where data extraction happens

organic_results_data = []
page_num = 0

while True:
    results = search.get_dict()  # JSON -> Python dictionary

    page_num += 1

    for result in results["organic_results"]:
        organic_results_data.append({
            "title": result.get("title"),
            "snippet": result.get("snippet")
        })

    if "next_link" in results.get("serpapi_pagination", {}):
        search.params_dict.update(dict(parse_qsl(urlsplit(results.get("serpapi_pagination").get("next_link")).query)))
    else:
        break

print(json.dumps(organic_results_data, indent=2, ensure_ascii=False))
Output:
[
  {
    "title": "Gotham (TV Series 2014–2019) - IMDb",
    "snippet": "The show follows Jim as he cracks strange cases whilst trying to help a young Bruce Wayne solve the mystery of his parents' murder. It seemed each week for a ..."
  },
  {
    "title": "Gotham (TV series) - Wikipedia",
    "snippet": "Gotham is an American superhero crime drama television series developed by Bruno Heller, produced by Warner Bros. Television and based on characters from ..."
  },
  # ...
]
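For reference, here is what the urlsplit + parse_qsl line in the pagination loop above does with a hypothetical next_link (the URL below is made up for illustration):
from urllib.parse import urlsplit, parse_qsl

next_link = "https://serpapi.com/search.json?engine=google&q=gotham&start=100"  # hypothetical
print(dict(parse_qsl(urlsplit(next_link).query)))
# {'engine': 'google', 'q': 'gotham', 'start': '100'}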

Related

python web scraping for emails

I wrote this code to scrape email addresses from Google search results or websites, depending on the URL given. However, the output is always blank.
The only thing in the Excel sheet is the column name. I'm still new to Python, so I'm not sure why that's happening.
What am I missing here?
import requests
from bs4 import BeautifulSoup
import pandas as pd

url = "https://www.google.com/search?q=solicitor+bereavement+wales+%27email%27&rlz=1C1CHBD_en-GBIT1013IT1013&sxsrf=AJOqlzWelf5qGpc4uqy_C2cd583OKlSEcQ%3A1675616694195&ei=tuHfY83MC-aIrwSQ3qxY&ved=0ahUKEwjN_9jO7v78AhVmxIsKHRAvCwsQ4dUDCBA&uact=5&oq=solicitor+bereavement+wales+%27email%27&gs_lcp=Cgxnd3Mtd2l6LXNlcnAQAzIFCAAQogQyBwgAEB4QogQyBwgAEB4QogQyBwgAEB4QogQyBwgAEB4QogQ6CggAEEcQ1gQQsANKBAhBGABKBAhGGABQrAxY7xRg1xZoAXABeACAAdIBiAGmBpIBBTEuNC4xmAEAoAEByAEIwAEB&sclient=gws-wiz-serp"
response = requests.get(url)
html_content = response.text
soup = BeautifulSoup(html_content, 'html.parser')

email_addresses = []
for link in soup.find_all('a'):
    if 'mailto:' in link.get('href'):
        email_addresses.append(link.get('href').replace('mailto:', ''))

df = pd.DataFrame(email_addresses, columns=['Email Addresses'])
df.to_excel('email_addresses_.xlsx', index=False)
First you need to extract all the snippets on the page:
for result in soup.select('.tF2Cxc'):
    snippet = result.select_one('.lEBKkf').text
Then a regular expression will pull the email out of the snippet (if one is present):
match_email = re.findall(r'[\w\.-]+@[\w\.-]+\.\w+', snippet)
email = ''.join(match_email)
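As a standalone illustration of that regex (the snippet text below is made up):
import re

snippet = "For probate enquiries contact jane.doe@example.com or call the office."  # hypothetical text
match_email = re.findall(r'[\w\.-]+@[\w\.-]+\.\w+', snippet)
print(match_email)  # ['jane.doe@example.com']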
Also, instead of requesting a full URL, you can make a request with separate parameters (convenient if you need to change the query or other parameters):
params = {
    'q': 'intext:"gmail.com" solicitor bereavement wale',  # your query
    'hl': 'en',  # language
    'gl': 'us'   # country of the search, US -> USA
    # other parameters
}
Check full code in the online IDE.
import requests, re, json, lxml
from bs4 import BeautifulSoup

headers = {
    "User-Agent":
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.0.0 Safari/537.36"
}

params = {
    'q': 'intext:"gmail.com" solicitor bereavement wale',  # your query
    'hl': 'en',  # language
    'gl': 'us'   # country of the search, US -> USA
}

html = requests.get("https://www.google.com/search",
                    headers=headers,
                    params=params).text
soup = BeautifulSoup(html, 'lxml')

data = []

for result in soup.select('.tF2Cxc'):
    title = result.select_one('.DKV0Md').text
    link = result.find('a')['href']
    snippet = result.select_one('.lEBKkf').text

    match_email = re.findall(r'[\w\.-]+@[\w\.-]+\.\w+', snippet)
    email = ''.join(match_email)

    data.append({
        'Title': title,
        'Link': link,
        'Email': email if email else None
    })

print(json.dumps(data, indent=2, ensure_ascii=False))
Example output:
[
  {
    "Title": "Revealed: Billboard's 2022 Top Music Lawyers",
    "Link": "https://www.billboard.com/wp-content/uploads/2022/03/march-28-2022-billboard-bulletin.pdf",
    "Email": "cmellow.billboard@gmail.com"
  },
  {
    "Title": "Folakemi Jegede, LL.B, BL, LLM, ACIS.'s Post - LinkedIn",
    "Link": "https://www.linkedin.com/posts/folakemi-jegede-ll-b-bl-llm-acis-855a8a2a_lawyers-law-advocate-activity-6934498515867815936-9R6G?trk=posts_directory",
    "Email": "OurlawandI@gmail.com"
  },
  # ... other results
]
Alternatively, you can use the Google Search Engine Results API from SerpApi. It's a paid API with a free plan.
The difference is that it bypasses blocks (including CAPTCHA) from Google, so there is no need to create a parser and maintain it.
Code example:
from serpapi import GoogleSearch
import os, json, re

params = {
    "engine": "google",  # search engine
    "q": 'intext:"gmail.com" solicitor bereavement wale',  # search query
    "api_key": "..."     # serpapi key from https://serpapi.com/manage-api-key
}

search = GoogleSearch(params)  # where data extraction happens
results = search.get_dict()    # JSON -> Python dictionary

data = []

for result in results['organic_results']:
    title = result['title']
    link = result['link']
    snippet = result['snippet']

    match_email = re.findall(r'[\w\.-]+@[\w\.-]+\.\w+', snippet)
    email = '\n'.join(match_email)

    data.append({
        'title': title,
        'link': link,
        'email': email if email else None,
    })

print(json.dumps(data, indent=2, ensure_ascii=False))
Output: exactly the same as in the previous solution.
It's not finding the HTML you want because the HTML is loaded dynamically with JavaScript. Thus you need to execute the JavaScript to get all of the HTML.
The selenium module can be used to do this, but it requires a driver to interface with a given browser, so you'll need to install a browser driver in order to use it. The selenium documentation goes over the installation.
Once you have selenium set up, you can use this function to get all the HTML from the website. Pass its return value into the BeautifulSoup object.
from selenium import webdriver
from time import sleep

def get_page_source(url):
    driver = webdriver.Chrome()
    try:
        driver.get(url)
        sleep(3)  # give the JavaScript time to render
        return driver.page_source
    finally:
        driver.quit()
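A usage sketch (the query URL is just an example); pass the returned HTML into BeautifulSoup as usual:
from bs4 import BeautifulSoup

html = get_page_source("https://www.google.com/search?q=solicitor+bereavement+wales+%27email%27")
soup = BeautifulSoup(html, "html.parser")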

Why can I only scrape first 4 pages of results on eBay?

I have a simple script to analyze sold data on eBay (baseball trading cards). It seems to be working fine for the first 4 pages, but on the 5th page it simply does not load the desired HTML content anymore, and I am not able to figure out why this happens:
# Import statements
import requests
import time
from bs4 import BeautifulSoup as soup
from tqdm import tqdm

# FOR DEBUG
Page_1 = "https://www.ebay.com/sch/213/i.html?_from=R40&LH_Sold=1&_sop=16&_pgn=1"

# Request URL, working example
source = requests.get(Page_1)
time.sleep(5)

eBay_full = soup(source.text, "lxml")
Complete_container = eBay_full.find("ul", {"class": "b-list__items_nofooter"})
Single_item = Complete_container.find_all("div", {"class": "s-item__wrapper clearfix"})
items = []

# For all items on page perform desired operation
for i in tqdm(Single_item):
    items.append(i.find("a", {"class": "s-item__link"})["href"].split('?')[0].split('/')[-1])

# Works fine for Links_to_check[0] up to Links_to_check[3]
#Works fine for Links_to_check[0] upto Links_to_check[3]
However, when I try to scrape the fifth page or further pages the following occurs:
Page_5 = "https://www.ebay.com/sch/213/i.html?_from=R40&LH_Sold=1&_sop=16&_pgn=5"
source = requests.get(Page_5)
time.sleep(5)

eBay_full = soup(source.text, "lxml")
Complete_container = eBay_full.find("ul", {"class": "b-list__items_nofooter"})
Single_item = Complete_container.find_all("div", {"class": "s-item__wrapper clearfix"})
items = []

# For all items on page perform desired operation
for i in tqdm(Single_item):
    items.append(i.find("a", {"class": "s-item__link"})["href"].split('?')[0].split('/')[-1])

This raises:

----> 5 Single_item = Complete_container.find_all("div", {"class": "s-item__wrapper clearfix"})
      6 items = []
      7 # For all items on page perform desired operation

AttributeError: 'NoneType' object has no attribute 'find_all'
This seems a logical consequence of the ul class b-list__items_nofooter missing from the eBay_full soup for the later pages. The question, however, is why this information is missing. Scrolling through the soup, all items of interest seem to be absent. On the webpage itself this information is, as expected, present. Can anyone guide me?
As per Sebastien D's remark, the problem has been solved:
In the headers variable put only one of these browsers, along with the current stable version number (e.g. Chrome/53.0.2785.143, the latest can be found here):
headers = {'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36'}
source = requests.get(Page_5, headers=headers, timeout=2)
As Sebastien D suggested, the main problem lies in the fact that eBay understands a bot/script sends the request.
But how does eBay understand it? Because the default requests user-agent is python-requests; eBay recognizes it and seems to block requests made with such a user-agent.
By adding a custom user-agent we can somewhat fake a real user request. However, it's not completely reliable, and headers might need to be rotated and/or used with proxies, ideally residential ones.
List of user-agents at whatismybrowser.
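A minimal sketch of rotating user-agents (the strings below are illustrative; pull current ones from the list above, and note this alone is no guarantee against blocking):
import random
import requests

# Illustrative strings only; grab up-to-date ones from whatismybrowser.com
user_agents = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.60 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.4 Safari/605.1.15",
]

headers = {"user-agent": random.choice(user_agents)}  # pick a different one per request
source = requests.get(Page_5, headers=headers, timeout=30)  # Page_5 from the question's code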
As a side note, you can use the SelectorGadget Chrome extension to easily pick CSS selectors by clicking on the desired element in your browser (it does not always work perfectly if the page relies heavily on JS, but in this case it works).
The example below shows how to extract listings from all pages. Code in online IDE.
from bs4 import BeautifulSoup
import requests, json, lxml

# https://requests.readthedocs.io/en/latest/user/quickstart/#custom-headers
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.60 Safari/537.36",
}

params = {
    '_nkw': 'baseball trading cards',  # search query
    'LH_Sold': '1',  # shows sold items
    '_pgn': 1        # page number
}

data = []

while True:
    page = requests.get('https://www.ebay.com/sch/i.html', params=params, headers=headers, timeout=30)
    soup = BeautifulSoup(page.text, 'lxml')

    print(f"Extracting page: {params['_pgn']}")
    print("-" * 10)

    for products in soup.select(".s-item__info"):
        title = products.select_one(".s-item__title span").text
        price = products.select_one(".s-item__price").text
        link = products.select_one(".s-item__link")["href"]

        data.append({
            "title": title,
            "price": price,
            "link": link
        })

    if soup.select_one(".pagination__next"):
        params['_pgn'] += 1
    else:
        break

print(json.dumps(data, indent=2, ensure_ascii=False))
Example output:
Extracting page: 1
----------
[
  {
    "title": "Shop on eBay",
    "price": "$20.00",
    "link": "https://ebay.com/itm/123456?hash=item28caef0a3a:g:E3kAAOSwlGJiMikD&amdata=enc%3AAQAHAAAAsJoWXGf0hxNZspTmhb8%2FTJCCurAWCHuXJ2Xi3S9cwXL6BX04zSEiVaDMCvsUbApftgXEAHGJU1ZGugZO%2FnW1U7Gb6vgoL%2BmXlqCbLkwoZfF3AUAK8YvJ5B4%2BnhFA7ID4dxpYs4jjExEnN5SR2g1mQe7QtLkmGt%2FZ%2FbH2W62cXPuKbf550ExbnBPO2QJyZTXYCuw5KVkMdFMDuoB4p3FwJKcSPzez5kyQyVjyiIq6PB2q%7Ctkp%3ABlBMULq7kqyXYA"
  },
  {
    "title": "Ken Griffey Jr. Seattle Mariners 1989 Topps Traded RC Rookie Card #41T",
    "price": "$7.20",
    "link": "https://www.ebay.com/itm/385118055958?hash=item59aad32e16:g:EwgAAOSwhgljI0Vm&amdata=enc%3AAQAHAAAAoFRRlvb50yb%2FN4cmlg5OtVDKIH0DsaMJBL3Tp67SI1dCSP1WPdZW3f16bTf4HTSUhX0g3OMmZSitEY3F3SVGg0%2FhSBF3ykE9X88Lo2EHuS2b23tA1kGiG92F9xyr73RLorcidserdH8tvUXhxmT4pJDnCfMAdfqtRzSIxcB6h4aDC1J1XvJ5IyRfYtWBGUQ60ykrA7mNlhH53cwZe5MiRSw%3D%7Ctkp%3ABk9SR7rKxt7sYA"
  },
  {
    "title": "Ken Griffey Jr. 1989 Score Traded Rookie Card Gem 10 Auto Beckett 13604418",
    "price": "$349.00",
    "link": "https://www.ebay.com/itm/353982131344?hash=item526afaac90:g:9hQAAOSwvCpiQ5FY&amdata=enc%3AAQAHAAAAoOKm1SWvHtdNVIEqtE4m5%2B453xtvR75ZimUBLL16P0WwfJy%2BJJQ2Phd9crgAacTWlsqp9HB%2Ft0McttOjmCfyL0RDQB%2FYOWQK3hxj%2FoDRmybJRipjqb0JG2%2BCa1RhI04PN3R5wpH9vvYqefwY6JuAsPqDU0SmSk6h1h%2FQr7cfJqOmdCo0cqbwPcJ8OcvAyP07txigrDyO55XqFD7CHcSmUPA%3D%7Ctkp%3ABk9SR7rKxt7sYA"
  },
  {
    "title": "Mike Jorgensen NY Mets MLB OF-1B 1972 Topps Baseball Card #16 Single Original",
    "price": "$1.19",
    "link": "https://www.ebay.com/itm/374255790865?hash=item5723622b11:g:KiwAAOSwz4ljI0G4&amdata=enc%3AAQAHAAAAoPVkKyeDZ7wbRNBwQppCcjVmLlOlY3ylPVwQyG7dfOy1UtPYhK7tRXtvn5v3M5n%2F35MS1LXLvWAioKRrMGPEPCmDoMkhdynuH3csaincrM%2F6JNwwIUFa3F%2FcylfPqnrxjJXF7cZ3ga9aCihTM6sfVJc1kzNkaBw2C2ewMyQ3ARgYpuDcUa6CMo4zBKF%2FGTj5KlZieLYywQm4dnzLCrFbtEM%3D%7Ctkp%3ABk9SR7rKxt7sYA"
  },
  # ...
]

How to parse and get clean image source from Bing/Google news feed?

I have created a program that will scrape Bing Newsfeed and analyze the content and email me the headline, a summary, and a link to the news. So far I have been able to get all of that correctly using BeautifulSoup.
I want to improve my program by also including an image of the news that gets displayed on the Bing Newsfeed page. I am having trouble getting the image source link because the source seems different.
from bs4 import BeautifulSoup
import requests

source = requests.get('https://www.bing.com/news?q=Technology&cf=intr&FORM=NWRFSH').text
soup = BeautifulSoup(source, "html.parser")

for image in soup.find_all("div", class_="image right"):
    print(image.img)
If I run the code above, it prints some weird things that don't make much sense to me. Here is an example:
<img class="rms_img" height="132" id="emb249968768" src="/th?id=ON.B139539B9DC398104440D89FAFB6F0C2&pid=News&w=234&h=132&c=14&rs=2&qlt=90" width="234"/>
All the other img tags are also like this. As you can see, the src here isn't ideal for getting an image link that I can use when sending the email.
Can anyone take a look at the website (from my code) and inspect it a bit to see what I might be doing wrong or how I can get all the image links in a clean and usable way when sending the email? Thanks so much.
The src attribute of the img tag is perfectly OK and just what you will find on most websites. It's a relative URL (it has neither the "scheme" nor the "domain name" part) with an absolute path (a path starting with a forward slash), so it's the client's (in this case your code's) responsibility to rebuild the full absolute URL, using the same scheme and domain name as the initial request plus the path from the img tag. In your example, the end result should be something like "https://www.bing.com/th?id=ON.B139539B9DC398104440D89FAFB6F0C2&pid=News&w=234&h=132&c=14&rs=2&qlt=90" (which indeed points to the image).
NB: do not try to parse the URL into components by yourself; just use the stdlib's urllib.parse module.
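For instance, urljoin from urllib.parse handles the rebuild; a minimal sketch using the values from the example above:
from urllib.parse import urljoin

page_url = "https://www.bing.com/news?q=Technology&cf=intr&FORM=NWRFSH"
img_src = "/th?id=ON.B139539B9DC398104440D89FAFB6F0C2&pid=News&w=234&h=132&c=14&rs=2&qlt=90"

# urljoin keeps the scheme and domain of the page and swaps in the image path
print(urljoin(page_url, img_src))
# https://www.bing.com/th?id=ON.B139539B9DC398104440D89FAFB6F0C2&pid=News&w=234&h=132&c=14&rs=2&qlt=90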
It seems the answer from bruno desthuilliers no longer works.
To make the parser more reliable, one of the ways is to parse data from the inline JSON. That is the case with images here, and inline JSON changes less often than other parts of the website such as CSS selectors.
You can't usefully parse the image directly from the src attribute; well, you can, but it will be a 1x1 placeholder image.
An alternative is to parse data from the inline JSON with a regex, matching the image ID parsed beforehand (emb23ACF3D86, for example) and using it in your match pattern to make sure you're extracting not some random images but the images from the news results.
Make sure you're using a user-agent, because Bing could detect that a script sends the request; the default requests user-agent is python-requests, and that is what Bing sees when you make a request. Check what your user-agent is.
Code and example in the online IDE:
from bs4 import BeautifulSoup
import requests, json, re

params = {
    'q': 'Technology'
    # other params: https://serpapi.com/bing-news-api
}

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36'
}

html = requests.get('https://www.bing.com/news/search', headers=headers, params=params).text
soup = BeautifulSoup(html, 'html.parser')

news_data = []
all_script_tags = soup.select('script')
img_ids = [tag['id'] for tag in soup.select('.pubimg.rms_img, .rms_img')]  # e.g. emb23ACF3D86

for news, image_id in zip(soup.select('.card-with-cluster'), img_ids):
    # https://regex101.com/r/5XWmaF/1
    thumbnails = re.findall(r"processEmbImg\('{_id}','(.*?)'\);".format(_id=image_id), str(all_script_tags))

    # The returned result is a base64 image that needs to be decoded.
    # It decodes twice: for some reason the first iteration
    # doesn't remove all Unicode escapes.
    decoded_thumbnail = "".join([
        bytes(bytes(thumbnail, "ascii").decode("unicode-escape"), "ascii").decode("unicode-escape") for thumbnail in thumbnails
    ])

    news_data.append({
        'title': news.select_one('.title').text,
        'link': news.select_one('.title')['href'],
        'image': decoded_thumbnail
    })

print(json.dumps(news_data, indent=2, ensure_ascii=False))
Output (the base64 image value is truncated here for brevity; the full string can be pasted into your browser URL bar to view the image):
[
  {
    "title": "Flanders Technology: straffe aankondigingen en onthullingen",
    "link": "https://doorbraak.be/flanders-technology-straffe-aankondigingen-en-onthullingen/",
    "image": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQEAYABgAAD/2wBDAAQCAwMDAgQDAwMEBAQEBQkGBQUF..."
  },
  # ... other results
]
If you don't want to deal with regex, bypassing blocks, or anything else involved in maintaining a parser, then the Bing News Engine Results API or Google News Results API may be an option.
Here's an example of how to parse data from Bing/Google News and combine it into a single JSON string:
# Keep in mind that I was not using DRY methods here.
from serpapi import GoogleSearch
import json

news_data = {
    'bing_news': [],
    'google_news': []
}

for engine in ['bing_news', 'google_news']:
    if engine == 'bing_news':
        params = {
            "api_key": "<your-serpapi-api-key>",
            "device": "desktop",
            "engine": "bing_news",
            "q": "Coffee"
        }

        search = GoogleSearch(params)
        results = search.get_dict()

        for result in results['organic_results']:
            news_data['bing_news'].append({
                'title': result.get('title'),
                'link': result.get('link'),
                'image': result.get('thumbnail')
            })

    if engine == 'google_news':
        params = {
            "api_key": "<your-serpapi-api-key>",
            "device": "desktop",
            "engine": "google",
            "q": "Coffee",
            "gl": "us",
            "hl": "en",
            "tbm": "nws"
        }

        search = GoogleSearch(params)
        results = search.get_dict()

        for result in results['news_results']:
            news_data['google_news'].append({
                'title': result.get('title'),
                'link': result.get('link'),
                'image': result.get('thumbnail')
            })

print(json.dumps(news_data, indent=2, ensure_ascii=False))
Outputs:
{
  "bing_news": [
    {
      "title": "Is Decaf or Caffeinated Coffee Better for Heart Disease Symptoms?",
      "link": "https://news.yahoo.com/decaf-caffeinated-coffee-better-heart-194648652.html",
      "image": "https://serpapi.com/searches/63469624f05eb8bd3ec0eaa0/images/c9deaf41400f27622ff9680d72158ee9c74e042768bc6201d72f8b7031003236.gif"
    },
    # ... other Bing news
  ],
  "google_news": [
    {
      "title": "9 Best Coffee Items on Sale for Amazon Prime Day 2022",
      "link": "https://www.thekitchn.com/prime-day-coffee-deals-october-2022-23459339",
      "image": "https://serpapi.com/searches/6346981060739305e5fed620/images/3283bbc090b4be4dafbc522fab6467927bd3225fd94f0f09c764eaa814e78117.jpeg"
    },
    # ... other Google news
  ]
}

How to scrape Google Maps using Python

I am trying to scrape the number of reviews of a place from google maps using python. For example the restaurant Pike's Landing (see google maps URL below) has 162 reviews. I want to pull this number in python.
URL: https://www.google.com/maps?cid=15423079754231040967
I am not very well versed with HTML, but from some basic examples on the internet I wrote the following code; however, what I get is a blank variable after running it. If you could let me know what I am doing wrong here, that would be much appreciated.
from urllib.request import urlopen
from bs4 import BeautifulSoup

quote_page = 'https://www.google.com/maps?cid=15423079754231040967'
page = urlopen(quote_page)
soup = BeautifulSoup(page, 'html.parser')
price_box = soup.find_all('button', attrs={'class': 'widget-pane-link'})
print(price_box.text)
It's hard to do this in pure Python and without an API. Here's what I ended up with (note that I added &hl=en at the end of the URL, to get results in English and not in my language):
import re
import requests
from ast import literal_eval

urls = [
    'https://www.google.com/maps?cid=15423079754231040967&hl=en',
    'https://www.google.com/maps?cid=16168151796978303235&hl=en'
]

for url in urls:
    for g in re.findall(r'\[\\"http.*?\d+ reviews?.*?]', requests.get(url).text):
        data = literal_eval(g.replace('null', 'None').replace('\\"', '"'))
        print(bytes(data[0], 'utf-8').decode('unicode_escape'))
        print(data[1])
Prints:
http://www.google.com/search?q=Pike's+Landing,+4438+Airport+Way,+Fairbanks,+AK+99709,+USA&ludocid=15423079754231040967#lrd=0x51325b1733fa71bf:0xd609c9524d75cbc7,1
469 reviews
http://www.google.com/search?q=Sequoia+TreeScape,+Newmarket,+ON+L3Y+8R5,+Canada&ludocid=16168151796978303235#lrd=0x882ad2157062b6c3:0xe060d065957c4103,1
42 reviews
You need to view the source code of the page and parse the window.APP_INITIALIZATION_STATE variable block using a regular expression; there you'll find all the needed data.
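A rough sketch of that approach (the variable name comes from the page source, but the terminator pattern and whatever layout sits inside the block are assumptions and change over time):
import re
import requests

html = requests.get(
    "https://www.google.com/maps?cid=15423079754231040967&hl=en",
    headers={"User-Agent": "Mozilla/5.0"},  # some user-agent, to look less like a script
).text

# Hypothetical terminator -- inspect the live page source to confirm
# what actually follows the APP_INITIALIZATION_STATE block.
match = re.search(r"window\.APP_INITIALIZATION_STATE\s*=\s*(\[.+?\]);window\.", html, re.DOTALL)
if match:
    raw_state = match.group(1)  # a deeply nested JSON-like array to dig the review count out of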
Alternatively, you can use Google Maps Reviews API from SerpApi.
Example JSON output:
"place_results": {
"title": "Pike's Landing",
"data_id": "0x51325b1733fa71bf:0xd609c9524d75cbc7",
"reviews_link": "https://serpapi.com/search.json?engine=google_maps_reviews&hl=en&place_id=0x51325b1733fa71bf%3A0xd609c9524d75cbc7",
"gps_coordinates": {
"latitude": 64.8299557,
"longitude": -147.8488774
},
"place_id_search": "https://serpapi.com/search.json?data=%214m5%213m4%211s0x51325b1733fa71bf%3A0xd609c9524d75cbc7%218m2%213d64.8299557%214d-147.8488774&engine=google_maps&google_domain=google.com&hl=en&type=place",
"thumbnail": "https://lh5.googleusercontent.com/p/AF1QipNtwheOCQ97QFrUNIwKYUoAPiV81rpiW5cIiQco=w152-h86-k-no",
"rating": 3.9,
"reviews": 839,
"price": "$$",
"type": [
"American restaurant"
],
"description": "Burgers, seafood, steak & river views. Pub fare alongside steak & seafood, served in a dining room with river views & a waterfront patio.",
"service_options": {
"dine_in": true,
"curbside_pickup": true,
"delivery": false
}
}
Code to integrate:
import os
from serpapi import GoogleSearch

params = {
    "engine": "google_maps",
    "type": "search",
    "q": "pike's landing",
    "ll": "@40.7455096,-74.0083012,14z",
    "google_domain": "google.com",
    "api_key": os.getenv("API_KEY"),
}

search = GoogleSearch(params)
results = search.get_dict()

reviews = results["place_results"]["reviews"]
print(reviews)
Output:
839
Disclaimer, I work for SerpApi.
Scraping Google Maps without a browser or proxies will lead to blocking after a few successful requests. Therefore, the main problem of scraping Google is dealing with cookies and ReCaptcha.
This is a good post where you can see an example of using selenium in Python for the same purpose. The general idea is that you start a browser and simulate what a user does on the website, as in the sketch below.
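A minimal selenium sketch along those lines (Chrome and chromedriver are assumed installed; the selector is hypothetical, since Google Maps class names and attributes change often):
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://www.google.com/maps?cid=15423079754231040967&hl=en")
    driver.implicitly_wait(10)  # wait for the JavaScript-rendered widgets
    # Hypothetical selector -- inspect the live page to find the element
    # that currently holds the "N reviews" text.
    reviews = driver.find_element(By.CSS_SELECTOR, "span[aria-label*='reviews']").text
    print(reviews)
finally:
    driver.quit()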
Another way is to use some reliable 3rd-party service that will do all the job for you and return the results. For example, you can try Outscraper's Reviews service with a free tier.
from outscraper import ApiClient

api_client = ApiClient(api_key='SECRET_API_KEY')

# Get reviews of the specific place by id
result = api_client.google_maps_reviews('ChIJrc9T9fpYwokRdvjYRHT8nI4', reviewsLimit=20, language='en')

# Get reviews for places found by search query
result = api_client.google_maps_reviews('Memphis Seoul brooklyn usa', reviewsLimit=20, limit=500, language='en')

# Get only new reviews during last 24 hours
from datetime import datetime, timedelta
yesterday_timestamp = int((datetime.now() - timedelta(1)).timestamp())

result = api_client.google_maps_reviews(
    'ChIJrc9T9fpYwokRdvjYRHT8nI4', sort='newest', cutoff=yesterday_timestamp, reviewsLimit=100, language='en')
Disclaimer, I work for Outscraper.

BeautifulSoup.select Method

This script is supposed to take a command-line string, run it through the Google search engine, and, if results are found, open the first 5 in different tabs. I am having some issues trying to get it to work. I think the problem is happening towards the bottom where it says links = soup.select(".r a"); I have been altering the values here, and then the next line shows an actual length, but running it like this shows the length to still be 0. I am trying to scrape the .r class and a tag because that seems to be where the search results start in the Google result source code.
import requests
import bs4
import sys
import webbrowser

print("Googling...")
response = requests.get("https://www.google.com/#q=" + " ".join(sys.argv[1:]))
response.raise_for_status()

'''Function to return the top search result links'''
soup = bs4.BeautifulSoup(response.text, "html.parser")

'''Open a browser tab for each result'''
links = soup.select(".r a")
print(len(links))

numOpen = min(5, len(links))
for i in range(numOpen):
    webbrowser.open("https://google.com/#q=" + links[i].get("href"))
Your logic is right, except the URL for the Google search is not right. It's gotta be:
response = requests.get("https://www.google.com/search?q=" + " ".join(sys.argv[1:]))
...
for i in range(numOpen):
    webbrowser.open("https://www.google.com" + links[i].get("href"))
Here is the full code:
import requests
import bs4
import sys
import webbrowser

print("Googling...")
response = requests.get("https://www.google.com/search?q=" + " ".join(sys.argv[1:]))
response.raise_for_status()

'''Function to return the top search result links'''
soup = bs4.BeautifulSoup(response.text, "html.parser")

'''Open a browser tab for each result'''
links = soup.select(".r a")
print(len(links))

numOpen = min(5, len(links))
for i in range(numOpen):
    webbrowser.open("https://www.google.com" + links[i].get("href"))
You are right! The problem results from select(".r a").
I suggest you try find_all('a', {"data-uch": 1}), which will find all a tags with the attribute data-uch=1.
Explanation:
"If you look up a little from the element, though, there is an element like this: <h3 class="r">. Looking through the rest of the HTML source, it looks like the r class is used only for search result links."
The sentence above is from the book. However, in reality, if you print the soup variable from soup = bs4.BeautifulSoup(response.text, "html.parser"), you will not find any <h3 class="r"> in the HTML source code. That is why print(len(links)) always shows 0.
Instead of using min(5, len(links)) you can use slicing:
links = soup.select('.r a')[:5]

# or
for i in soup.select('.r a')[:5]:
    # other code..

Also, you can use the find_all() limit argument, as shown below.
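A rough equivalent with limit (note that find_all('a', ...) here matches all a tags, not only those under the .r class, so it's an approximation):
links = soup.find_all('a', limit=5)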
Make sure you're using a user-agent, because the default requests user-agent is python-requests. Google blocks such requests because it knows it's a bot and not a "real" user visit, and you'll receive different HTML with some sort of error. A user-agent fakes a user visit by adding this information to the HTTP request headers.
I wrote a dedicated blog about how to reduce the chance of being blocked while web scraping search engines that cover multiple solutions.
Code and example in the online IDE:
from bs4 import BeautifulSoup
import requests, lxml

headers = {
    'User-agent':
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582"
}

params = {
    "q": "samurai cop what does katana mean",
    "gl": "us",
    "hl": "en",
    "num": "100"
}

html = requests.get("https://www.google.com/search", headers=headers, params=params)
soup = BeautifulSoup(html.text, 'lxml')

for result in soup.select('.tF2Cxc')[:5]:
    title = result.select_one('.DKV0Md').text
    link = result.select_one('.yuRUbf a')['href']
    print(title, link, sep='\n')
--------
'''
Samurai Cop - He speaks fluent Japanese - YouTube
https://www.youtube.com/watch?v=paTW3wOyIYw
Samurai Cop - What does "katana" mean? - Quotes.net
https://www.quotes.net/mquote/1060647
"It means "Japanese sword"... 2 minute review of a ... - Reddit
https://www.reddit.com/r/NewTubers/comments/47hw1g/what_does_katana_mean_it_means_japanese_sword_2/
Samurai Cop (1991) - Mathew Karedas as Joe Marshall - IMDb
https://www.imdb.com/title/tt0130236/characters/nm0360481
What does Katana mean? - Samurai Cop quotes - Subzin.com
http://www.subzin.com/quotes/Samurai+Cop/What+does+Katana+mean%3F+-+It+means+Japanese+sword
'''
Alternatively, you can achieve the same thing by using the Google Organic Results API from SerpApi. It's a paid API with a free plan.
The difference in your case is that you don't have to figure out how to pick the correct selector or how to bypass blocks from search engines, since that's already done for the end user. All that really needs to be done is to iterate over the structured JSON and get the data you want.
Code to integrate:
import os
from serpapi import GoogleSearch

params = {
    "engine": "google",
    "q": "samurai cop what does katana mean",
    "hl": "en",
    "gl": "us",
    "api_key": os.getenv("API_KEY"),
}

search = GoogleSearch(params)
results = search.get_dict()

for result in results["organic_results"][:5]:
    print(result['title'])
    print(result['link'])
---------
'''
Samurai Cop - He speaks fluent Japanese - YouTube
https://www.youtube.com/watch?v=paTW3wOyIYw
Samurai Cop - What does "katana" mean? - Quotes.net
https://www.quotes.net/mquote/1060647
"It means "Japanese sword"... 2 minute review of a ... - Reddit
https://www.reddit.com/r/NewTubers/comments/47hw1g/what_does_katana_mean_it_means_japanese_sword_2/
Samurai Cop (1991) - Mathew Karedas as Joe Marshall - IMDb
https://www.imdb.com/title/tt0130236/characters/nm0360481
What does Katana mean? - Samurai Cop quotes - Subzin.com
http://www.subzin.com/quotes/Samurai+Cop/What+does+Katana+mean%3F+-+It+means+Japanese+sword
'''
Disclaimer, I work for SerpApi.
