How to scrape Google Maps using Python

I am trying to scrape the number of reviews of a place from Google Maps using Python. For example, the restaurant Pike's Landing (see the Google Maps URL below) has 162 reviews. I want to pull this number in Python.
URL: https://www.google.com/maps?cid=15423079754231040967
I am not very well versed in HTML, but from some basic examples on the internet I wrote the following code. What I get is a blank variable after running it. If you could let me know what I am doing wrong, that would be much appreciated.
from urllib.request import urlopen
from bs4 import BeautifulSoup
quote_page ='https://www.google.com/maps?cid=15423079754231040967'
page = urlopen(quote_page)
soup = BeautifulSoup(page, 'html.parser')
price_box = soup.find_all('button',attrs={'class':'widget-pane-link'})
print(price_box.text)

It's hard to do this in pure Python and without an API. Here's what I ended up with (note that I added &hl=en at the end of the URL, to get English results and not results in my language):
import re
import requests
from ast import literal_eval

urls = [
    'https://www.google.com/maps?cid=15423079754231040967&hl=en',
    'https://www.google.com/maps?cid=16168151796978303235&hl=en']

for url in urls:
    for g in re.findall(r'\[\\"http.*?\d+ reviews?.*?]', requests.get(url).text):
        data = literal_eval(g.replace('null', 'None').replace('\\"', '"'))
        print(bytes(data[0], 'utf-8').decode('unicode_escape'))
        print(data[1])
Prints:
http://www.google.com/search?q=Pike's+Landing,+4438+Airport+Way,+Fairbanks,+AK+99709,+USA&ludocid=15423079754231040967#lrd=0x51325b1733fa71bf:0xd609c9524d75cbc7,1
469 reviews
http://www.google.com/search?q=Sequoia+TreeScape,+Newmarket,+ON+L3Y+8R5,+Canada&ludocid=16168151796978303235#lrd=0x882ad2157062b6c3:0xe060d065957c4103,1
42 reviews

You need to view the source code of the page and parse the window.APP_INITIALIZATION_STATE variable block using a regular expression; you'll find all the needed data in there.
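A minimal sketch of that approach, assuming the review count still appears inside the blob in the form "469 reviews" (the variable's structure is undocumented and changes without notice, so both regexes below are assumptions that may need adjusting):
import re
import requests

html = requests.get(
    'https://www.google.com/maps?cid=15423079754231040967&hl=en',
    headers={'User-Agent': 'Mozilla/5.0'},
).text

# Cut out the APP_INITIALIZATION_STATE block from the page source
state = re.search(r'window\.APP_INITIALIZATION_STATE\s*=\s*(\[.+?\]);window\.APP_FLAGS', html, re.DOTALL)
if state:
    # Look for a review count such as "469 reviews" inside the blob
    reviews = re.search(r'(\d[\d,]*)\s+reviews?', state.group(1))
    print(reviews.group(1) if reviews else 'review count not found')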
Alternatively, you can use Google Maps Reviews API from SerpApi.
Example JSON output:
"place_results": {
"title": "Pike's Landing",
"data_id": "0x51325b1733fa71bf:0xd609c9524d75cbc7",
"reviews_link": "https://serpapi.com/search.json?engine=google_maps_reviews&hl=en&place_id=0x51325b1733fa71bf%3A0xd609c9524d75cbc7",
"gps_coordinates": {
"latitude": 64.8299557,
"longitude": -147.8488774
},
"place_id_search": "https://serpapi.com/search.json?data=%214m5%213m4%211s0x51325b1733fa71bf%3A0xd609c9524d75cbc7%218m2%213d64.8299557%214d-147.8488774&engine=google_maps&google_domain=google.com&hl=en&type=place",
"thumbnail": "https://lh5.googleusercontent.com/p/AF1QipNtwheOCQ97QFrUNIwKYUoAPiV81rpiW5cIiQco=w152-h86-k-no",
"rating": 3.9,
"reviews": 839,
"price": "$$",
"type": [
"American restaurant"
],
"description": "Burgers, seafood, steak & river views. Pub fare alongside steak & seafood, served in a dining room with river views & a waterfront patio.",
"service_options": {
"dine_in": true,
"curbside_pickup": true,
"delivery": false
}
}
Code to integrate:
import os
from serpapi import GoogleSearch

params = {
    "engine": "google_maps",
    "type": "search",
    "q": "pike's landing",
    "ll": "@40.7455096,-74.0083012,14z",
    "google_domain": "google.com",
    "api_key": os.getenv("API_KEY"),
}

search = GoogleSearch(params)
results = search.get_dict()

reviews = results["place_results"]["reviews"]
print(reviews)
Output:
839
Disclaimer, I work for SerpApi.

Scraping Google Maps without a browser or proxies will lead to blocking after a few successful requests, so the main problem of scraping Google is dealing with cookies and reCAPTCHA.
This is a good post where you can see an example of using Selenium in Python for the same purpose. The general idea is that you start a browser and simulate what a user does on the website.
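For instance, a minimal Selenium sketch of that idea (the XPath below is an assumption: Google Maps markup changes frequently, so verify the selector in DevTools first, and prefer an explicit WebDriverWait over the crude sleep):
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get('https://www.google.com/maps?cid=15423079754231040967&hl=en')
time.sleep(5)  # crude wait for the place panel to render

# Hypothetical selector: buttons whose aria-label mentions "reviews"
for button in driver.find_elements(By.XPATH, '//button[contains(@aria-label, "reviews")]'):
    print(button.get_attribute('aria-label'))

driver.quit()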
Another way would be to use a reliable third-party service that does all the job for you and returns the results. For example, you can try Outscraper's Reviews service, which has a free tier.
from outscraper import ApiClient

api_client = ApiClient(api_key='SECRET_API_KEY')

# Get reviews of the specific place by id
result = api_client.google_maps_reviews('ChIJrc9T9fpYwokRdvjYRHT8nI4', reviewsLimit=20, language='en')

# Get reviews for places found by search query
result = api_client.google_maps_reviews('Memphis Seoul brooklyn usa', reviewsLimit=20, limit=500, language='en')

# Get only new reviews during last 24 hours
from datetime import datetime, timedelta
yesterday_timestamp = int((datetime.now() - timedelta(1)).timestamp())

result = api_client.google_maps_reviews(
    'ChIJrc9T9fpYwokRdvjYRHT8nI4', sort='newest', cutoff=yesterday_timestamp, reviewsLimit=100, language='en')
Disclaimer, I work for Outscraper.

Related

Getting href links from a website using Python's Beautiful Soup module

I am trying to get the href links from this page, specifically the links to the pages of those respective clubs. My current code is as follows (I have not included the imports; I just did import requests and from bs4 import BeautifulSoup):
rsoLink = "https://illinois.campuslabs.com/engage/organizations?query=badminton"
page = requests.get(rsoLink)
beautifulPage = BeautifulSoup(page.content, 'html.parser')
for link in beautifulPage.findAll("a"):
    print(link.get('href'))
My output is empty, suggesting that the program did not find the links. When I looked at the HTML structure of the page, the "a" tags seem to be nested deep within the page's structure (they are inside an element which is within another element, which itself is inside yet another element). My question is how I would access the links then; do I have to go through all these elements?
The data you see on the page is loaded with JavaScript from a different URL, so BeautifulSoup doesn't see it. To load the data you can use the following example:
import json
import requests

url = "https://illinois.campuslabs.com/engage/api/discovery/search/organizations"
params = {"top": "10", "filter": "", "query": "badminton", "skip": "0"}

data = requests.get(url, params=params).json()

# uncomment to print all data:
# print(json.dumps(data, indent=4))

for v in data["value"]:
    print(
        "{:<50} {}".format(
            v["Name"],
            "https://illinois.campuslabs.com/engage/organization/" + v["WebsiteKey"],
        )
    )
Prints:
Badminton For Fun https://illinois.campuslabs.com/engage/organization/badminton4fun
Illini Badminton Intercollegiate Sports Club https://illinois.campuslabs.com/engage/organization/illinibadmintonintercollegiatesportsclub
If you take a look at the actual HTML returned by requests, you can see that none of the actual page content is loaded, suggesting that it's loaded client-side via Javascript, likely using an HTTP request to fetch the necessary data.
Here, the easiest solution would be to inspect the HTTP requests made by the site and look for an API endpoint that returns the organizations data. By checking the Network tab of Chrome DevTools, you can find this endpoint:
https://illinois.campuslabs.com/engage/api/discovery/search/organizations?top=10&filter=&query=badminton&skip=0
Here, you can see the JSON response for all of the organizations that are being loaded into the page by client-side JS. If you take a look at the JSON, you'll notice that a link isn't one of the keys returned, but it's easily constructed using the WebsiteKey key.
Putting all of this together:
import requests
import json

SEARCH_URL = "https://illinois.campuslabs.com/engage/api/discovery/search/organizations"
ORGANIZATION_URL = "https://illinois.campuslabs.com/engage/organization/"

search = "badminton"
resp = requests.get(
    SEARCH_URL,
    params={"top": 10, "filter": "", "query": search, "skip": 0}
)
organizations = json.loads(resp.text)["value"]

links = [ORGANIZATION_URL + organization["WebsiteKey"] for organization in organizations]
print(links)
Similar strategies can be used to find and use other API endpoints on the site, such as the organization categories.

How to parse and get clean image source from Bing/Google news feed?

I have created a program that will scrape Bing Newsfeed and analyze the content and email me the headline, a summary, and a link to the news. So far I have been able to get all of that correctly using BeautifulSoup.
I want to improve my program by also including an image of the news that gets displayed on the Bing Newsfeed page. I am having trouble getting the image source link because the source seems different.
from bs4 import BeautifulSoup
import requests
source = requests.get('https://www.bing.com/news?q=Technology&cf=intr&FORM=NWRFSH').text
soup = BeautifulSoup(source, "html.parser")
for image in soup.find_all("div", class_="image right"):
    print(image.img)
If I run the code above, it prints some weird things that don't make much sense to me. Here is an example:
<img class="rms_img" height="132" id="emb249968768" src="/th?id=ON.B139539B9DC398104440D89FAFB6F0C2&pid=News&w=234&h=132&c=14&
rs=2&qlt=90" width="234"/>
All the other img tags are like this too. As you can see, the src here isn't ideal for getting a link to the image that I can use when sending the email.
Can anyone take a look at the website (from my code) and inspect it a bit to see what I might be doing wrong or how I can get all the image links in a clean and usable way when sending the email? Thanks so much.
The src attribute of the img tag is perfectly OK and is just what you will find on most websites. It's a relative URL (it has neither the "scheme" nor the "domain name" part) with an absolute path (a path starting with a forward slash), so it's the client's (in this case your code's) responsibility to rebuild the full absolute URL, using the same scheme and domain name as the initial request plus the path from the img tag. In your example, the end result should be something like "https://www.bing.com/th?id=ON.B139539B9DC398104440D89FAFB6F0C2&pid=News&w=234&h=132&c=14&rs=2&qlt=90" (which indeed points to the image).
NB: do not try to parse the URL into its components by yourself; just use the stdlib's urllib.parse module.
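For example, urljoin rebuilds the absolute URL from the page URL and the relative src:
from urllib.parse import urljoin

page_url = 'https://www.bing.com/news?q=Technology&cf=intr&FORM=NWRFSH'
img_src = '/th?id=ON.B139539B9DC398104440D89FAFB6F0C2&pid=News&w=234&h=132&c=14&rs=2&qlt=90'

# urljoin keeps the scheme and domain of the page and swaps in the image path
print(urljoin(page_url, img_src))
# https://www.bing.com/th?id=ON.B139539B9DC398104440D89FAFB6F0C2&pid=News&w=234&h=132&c=14&rs=2&qlt=90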
It seems the answer from bruno desthuilliers no longer works.
To make the parser more reliable, one way is to parse the data from the inline JSON, as is the case with images here. Inline JSON changes less often than other parts of the website, such as CSS selectors and similar things.
You can't parse usable image data directly from the src attribute; well, you can, but it will be a 1x1 image placeholder.
The alternative is to parse the data from the inline JSON with a regex, where you match the image ID (emb23ACF3D86, as an example) parsed beforehand and use it in your pattern to make sure you're extracting images from news results and not some random images.
Make sure you're using a user-agent, because Bing can detect that it's a script sending the request: the default requests user-agent is python-requests, and that is what Bing sees when you make a request. Check what's your user-agent.
Code and example in the online IDE:
from bs4 import BeautifulSoup
import requests, json, re

params = {
    'q': 'Technology'
    # other params: https://serpapi.com/bing-news-api
}

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36'
}

html = requests.get('https://www.bing.com/news/search', headers=headers, params=params).text
soup = BeautifulSoup(html, 'html.parser')

news_data = []

all_script_tags = soup.select('script')
img_ids = [id['id'] for id in soup.select('.pubimg.rms_img, .rms_img')]  # e.g. emb23ACF3D86

for news, image_id in zip(soup.select('.card-with-cluster'), img_ids):
    # https://regex101.com/r/5XWmaF/1
    thumbnails = re.findall(r"processEmbImg\('{_id}','(.*?)'\);".format(_id=image_id), str(all_script_tags))

    # the matched result is a base64 image which needs to be decoded;
    # it's decoded twice because for some reason the first pass
    # doesn't remove all escaped Unicode chars
    decoded_thumbnail = "".join([
        bytes(bytes(thumbnail, "ascii").decode("unicode-escape"), "ascii").decode("unicode-escape") for thumbnail in thumbnails
    ])

    news_data.append({
        'title': news.select_one('.title').text,
        'link': news.select_one('.title')['href'],
        'image': decoded_thumbnail
    })

print(json.dumps(news_data, indent=2, ensure_ascii=False))
Outputs (try to copy the image link and paste it in your browser URL bar):
[
  {
    "title": "Flanders Technology: straffe aankondigingen en onthullingen",
    "link": "https://doorbraak.be/flanders-technology-straffe-aankondigingen-en-onthullingen/",
    "image": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQEAYABgAAD/2wBDAAQCAwMDAgQDAwMEBAQEBQkGBQUFBQsICAYJDQsNDQ0LDAwOEBQRDg8TDwwMEhgSExUWFxcXDhEZGxkWGhQWFxb... (base64 data truncated)"
  }, ... other results
]
If you don't want to deal with regexes, bypassing blocks, and the rest of parser maintenance, then the Bing News Engine Results API or Google News Results API may be an option.
Here's an example of how to parse data from Bing/Google News and combine it into a single JSON string:
# Keep in mind that I was not using DRY methods here.
from serpapi import GoogleSearch
import json

news_data = {
    'bing_news': [],
    'google_news': []
}

for engine in ['bing_news', 'google_news']:
    if engine == 'bing_news':
        params = {
            "api_key": "<your-serpapi-api-key>",
            "device": "desktop",
            "engine": "bing_news",
            "q": "Coffee"
        }

        search = GoogleSearch(params)
        results = search.get_dict()

        for result in results['organic_results']:
            news_data['bing_news'].append({
                'title': result.get('title'),
                'link': result.get('link'),
                'image': result.get('thumbnail')
            })

    if engine == 'google_news':
        params = {
            "api_key": "<your-serpapi-api-key>",
            "device": "desktop",
            "engine": "google",
            "q": "Coffee",
            "gl": "us",
            "hl": "en",
            "tbm": "nws"
        }

        search = GoogleSearch(params)
        results = search.get_dict()

        for result in results['news_results']:
            news_data['google_news'].append({
                'title': result.get('title'),
                'link': result.get('link'),
                'image': result.get('thumbnail')
            })

print(json.dumps(news_data, indent=2, ensure_ascii=False))
Outputs:
{
  "bing_news": [
    {
      "title": "Is Decaf or Caffeinated Coffee Better for Heart Disease Symptoms?",
      "link": "https://news.yahoo.com/decaf-caffeinated-coffee-better-heart-194648652.html",
      "image": "https://serpapi.com/searches/63469624f05eb8bd3ec0eaa0/images/c9deaf41400f27622ff9680d72158ee9c74e042768bc6201d72f8b7031003236.gif"
    }, ... other bing news
  ],
  "google_news": [
    {
      "title": "9 Best Coffee Items on Sale for Amazon Prime Day 2022",
      "link": "https://www.thekitchn.com/prime-day-coffee-deals-october-2022-23459339",
      "image": "https://serpapi.com/searches/6346981060739305e5fed620/images/3283bbc090b4be4dafbc522fab6467927bd3225fd94f0f09c764eaa814e78117.jpeg"
    }, ... other google news
  ]
}

How to extract the description in a Google search using Python?

I want to extract the description from a Google search. Right now I have this code:
from urlparse import urlparse, parse_qs
import urllib
from lxml.html import fromstring
from requests import get
url='https://www.google.com/search?q=Gotham'
raw = get(url).text
pg = fromstring(raw)
v=[]
for result in pg.cssselect(".r a"):
    url = result.get("href")
    if url.startswith("/url?"):
        url = parse_qs(urlparse(url).query)['q']
        print url[0]
That extracts the URLs related to the search; how can I extract the description that appears under each URL?
You can scrape the Google Search description of each website using the BeautifulSoup web scraping library.
To collect information from all pages you can use "pagination" with a while True loop. The while loop is an endless loop; the exit from it, in our case, is the absence of a button that switches to the next page, namely the CSS selector ".d6cvqb a[id=pnnext]":
if soup.select_one('.d6cvqb a[id=pnnext]'):
    params["start"] += 10
else:
    break
You can use a CSS selectors search to find all the information you need (description, title, etc.); the selectors are easy to identify on the page using the SelectorGadget Chrome extension (it doesn't always work perfectly if the website is rendered via JavaScript).
Make sure you're passing a user-agent in the request headers to act as a "real" user visit. The default requests user-agent is python-requests, and websites understand that this most likely means a script is sending the request. Check what's your user-agent.
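A quick way to see what user-agent your script actually sends is to hit an echo service such as httpbin.org, which simply returns the request headers:
import requests

print(requests.get('https://httpbin.org/user-agent').json())
# e.g. {'user-agent': 'python-requests/2.28.1'}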
Check code in online IDE.
from bs4 import BeautifulSoup
import requests, json, lxml

# https://docs.python-requests.org/en/master/user/quickstart/#passing-parameters-in-urls
params = {
    "q": "gotham",  # query
    "hl": "en",     # language
    "gl": "us",     # country of the search, US -> USA
    "start": 0,     # first page offset, 0 by default
    # "num": 100    # defines the maximum number of results to return
}

# https://docs.python-requests.org/en/master/user/quickstart/#custom-headers
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36"
}

page_num = 0
website_data = []

while True:
    page_num += 1
    print(f"page: {page_num}")

    html = requests.get("https://www.google.com/search", params=params, headers=headers, timeout=30)
    soup = BeautifulSoup(html.text, 'lxml')

    for result in soup.select(".tF2Cxc"):
        website_name = result.select_one(".yuRUbf a")["href"]

        try:
            description = result.select_one(".lEBKkf").text
        except:
            description = None

        website_data.append({
            "website_name": website_name,
            "description": description
        })

    if soup.select_one('.d6cvqb a[id=pnnext]'):
        params["start"] += 10
    else:
        break

print(json.dumps(website_data, indent=2, ensure_ascii=False))
Example output:
[
  {
    "website_name": "https://www.imdb.com/title/tt3749900/",
    "description": "The show follows Jim as he cracks strange cases whilst trying to help a young Bruce Wayne solve the mystery of his parents' murder. It seemed each week for a ..."
  },
  {
    "website_name": "https://www.netflix.com/watch/80023082",
    "description": "When the key witness in a homicide ends up dead while being held for questioning, Gordon suspects an inside job and seeks details from an old friend."
  },
  {
    "website_name": "https://www.gothamknightsgame.com/",
    "description": "Gotham Knights is an open-world, action RPG set in the most dynamic and interactive Gotham City yet. In either solo-play or with one other hero, ..."
  },
  # ...
]
Or you can use the Google Search Engine Results API from SerpApi. It's a paid API with a free plan.
The difference is that it will bypass blocks (including CAPTCHA) from Google, and there's no need to create and maintain the parser.
Code example:
from serpapi import GoogleSearch
from urllib.parse import urlsplit, parse_qsl
import json, os

params = {
    "api_key": os.getenv("API_KEY"),  # serpapi key
    "engine": "google",               # serpapi parser engine
    "q": "gotham",                    # search query
    "num": "100"                      # number of results per page (100 per page in this case)
    # other search parameters: https://serpapi.com/search-api#api-parameters
}

search = GoogleSearch(params)  # where data extraction happens

organic_results_data = []
page_num = 0

while True:
    results = search.get_dict()  # JSON -> Python dictionary

    page_num += 1

    for result in results["organic_results"]:
        organic_results_data.append({
            "title": result.get("title"),
            "snippet": result.get("snippet")
        })

    if "next_link" in results.get("serpapi_pagination", []):
        search.params_dict.update(dict(parse_qsl(urlsplit(results.get("serpapi_pagination").get("next_link")).query)))
    else:
        break

print(json.dumps(organic_results_data, indent=2, ensure_ascii=False))
Output:
[
  {
    "title": "Gotham (TV Series 2014–2019) - IMDb",
    "snippet": "The show follows Jim as he cracks strange cases whilst trying to help a young Bruce Wayne solve the mystery of his parents' murder. It seemed each week for a ..."
  },
  {
    "title": "Gotham (TV series) - Wikipedia",
    "snippet": "Gotham is an American superhero crime drama television series developed by Bruno Heller, produced by Warner Bros. Television and based on characters from ..."
  },
  # ...
]

Beautifulsoup doesn't reach a child element

I wrote the following code trying to scrape a google scholar page
import requests as req
from bs4 import BeautifulSoup as soup
url = r'https://scholar.google.com/scholar?hl=en&q=Sustainability and the measurement of wealth: further reflections'
session = req.Session()
content = session.get(url)
html2bs = soup(content.content, 'lxml')
gs_cit = html2bs.select('#gs_cit')
gs_citd = html2bs.find('div', {'id':"gs_citd"})
gs_cit1 = html2bs.find('div', {'id':"gs_cit1"})
but gs_citd gives me only this line, <div aria-live="assertive" id="gs_citd"></div>, and doesn't reach any level beneath it. Also, gs_cit1 returns None.
As appears in this image, I want to reach the highlighted class to be able to grab the BibTeX citation.
Can you help, please!
OK, so I figured it out. I used the Selenium module for Python, which creates a virtual browser, if you will, that allows you to perform actions like clicking links and getting the output of the resulting HTML. There was another issue I ran into while solving this: the page had to be loaded first, otherwise the pop-up div just contained "Loading...", so I used the Python time module to time.sleep(2) for 2 seconds, which allowed the content to load in. Then I just parsed the resulting HTML output using BeautifulSoup to find the anchor tag with the class "gs_citi", pulled the href from the anchor, and put it into a request with the requests module. Finally, I wrote the decoded response to a local file - scholar.bib.
I installed chromedriver and selenium on my Mac using these instructions here:
https://gist.github.com/guylaor/3eb9e7ff2ac91b7559625262b8a6dd5f
Then I signed my Python file to stop firewall issues, using these instructions:
Add Python to OS X Firewall Options?
The following is the code I used to produce the output file "scholar.bib":
import os
import time
from selenium import webdriver
from bs4 import BeautifulSoup as soup
import requests as req
# Setup Selenium Chrome Web Driver
chromedriver = "/usr/local/bin/chromedriver"
os.environ["webdriver.chrome.driver"] = chromedriver
driver = webdriver.Chrome(chromedriver)
# Navigate in Chrome to specified page.
driver.get("https://scholar.google.com/scholar?hl=en&q=Sustainability and the measurement of wealth: further reflections")
# Find "Cite" link by looking for anchors that contain "Cite" - second link selected "[1]"
link = driver.find_elements_by_xpath('//a[contains(text(), "' + "Cite" + '")]')[1]
# Click the link
link.click()
print("Waiting for page to load...")
time.sleep(2) # Sleep for 2 seconds
# Get Page source after waiting for 2 seconds of current page in Chrome
source = driver.page_source
# We are done with the driver so quit.
driver.quit()
# Use BeautifulSoup to parse the html source and use "html.parser" as the Parser
soupify = soup(source, 'html.parser')
# Find anchors with the class "gs_citi"
gs_citt = soupify.find('a',{"class":"gs_citi"})
# Get the href attribute of the first anchor found
href = gs_citt['href']
print("Fetching: ", href)
# Instantiate a new requests session
session = req.Session()
# Get the response object of href
content = session.get(href)
# Get the content and then decode() it.
bibtex_html = content.content.decode()
# Write the decoded data to a file named scholar.bib
with open("scholar.bib","w") as file:
file.writelines(bibtex_html)
Hope this helps anyone looking for a solution to this out.
Scholar.bib file:
@article{arrow2013sustainability,
  title={Sustainability and the measurement of wealth: further reflections},
  author={Arrow, Kenneth J and Dasgupta, Partha and Goulder, Lawrence H and Mumford, Kevin J and Oleson, Kirsten},
  journal={Environment and Development Economics},
  volume={18},
  number={4},
  pages={504--516},
  year={2013},
  publisher={Cambridge University Press}
}
You can parse the BibTeX data using beautifulsoup and requests by parsing the data-cid attribute, which is a unique publication ID. Then you need to temporarily store those IDs in a list, iterate over them, and make a request for every ID to parse the BibTeX publication citation.
The example below will work for ~10-20 requests, after which Google will throw a CAPTCHA or you'll hit the rate limit. The ideal solution is to have a CAPTCHA solving service as well as proxies.
Code and full example in the online IDE:
from bs4 import BeautifulSoup
import requests, lxml

params = {
    "q": "samsung",
    "hl": "en"
}

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3538.102 Safari/537.36 Edge/18.19582",
    "server": "scholar",
    "referer": f"https://scholar.google.com/scholar?hl={params['hl']}&q={params['q']}",
}

def cite_ids() -> list:
    response = requests.get("https://scholar.google.com/scholar", params=params, headers=headers, timeout=10)
    soup = BeautifulSoup(response.text, "lxml")

    # returns a list of publication IDs -> U8bh6Ca9uwQJ
    return [result["data-cid"] for result in soup.select(".gs_or")]

def scrape_cite_results() -> list:
    bibtex_data = []

    for cite_id in cite_ids():
        response = requests.get(f"https://scholar.google.com/scholar?output=cite&q=info:{cite_id}:scholar.google.com", headers=headers, timeout=10)
        soup = BeautifulSoup(response.text, "lxml")

        # selects the first matched element, which will always be BibTeX
        # (unless Google switches the BibTeX position)
        bibtex_data.append(soup.select_one(".gs_citi")["href"])

    # returns a list of BibTeX URLs, for example: https://scholar.googleusercontent.com/scholar.bib?q=info:ifd-RAVUVasJ:scholar.google.com/&output=citation&scisdr=CgVDYtsfELLGwov-iJo:AAGBfm0AAAAAYgD4kJr6XdMvDPuv7R8SGODak6AxcJxi&scisig=AAGBfm0AAAAAYgD4kHUUPiUnYgcIY1Vo56muYZpFkG5m&scisf=4&ct=citation&cd=-1&hl=en
    return bibtex_data
Alternatively, you can achieve the same thing using the Google Scholar API from SerpApi, without the need to figure out which proxy provider provides good proxies or which CAPTCHA solving service to use, and without figuring out how to scrape the data from the JavaScript without browser automation.
Example to integrate:
import os
from serpapi import GoogleSearch

def organic_results() -> list:
    params = {
        "api_key": os.getenv("API_KEY"),
        "engine": "google_scholar",
        "q": "samsung",  # search query
        "hl": "en",      # language
    }

    search = GoogleSearch(params)
    results = search.get_dict()

    return [result["result_id"] for result in results["organic_results"]]

def cite_results() -> list:
    citation_results = []

    for citation in organic_results():
        params = {
            "api_key": os.getenv("API_KEY"),
            "engine": "google_scholar_cite",
            "q": citation
        }

        search = GoogleSearch(params)
        results = search.get_dict()

        for result in results["links"]:
            if "BibTeX" in result["name"]:
                citation_results.append(result["link"])

    return citation_results
If you would like to parse the data from all available pages, there's a dedicated blog post at SerpApi, Scrape historic Google Scholar results using Python, which is all about scraping historic 2017-2021 organic and cite results to CSV and SQLite using pagination.
Disclaimer, I work for SerpApi.

Scrape authors h-index, i10-index and total citations from Google Scholar

I am working on a project to scrape data from Google Scholar. I want to scrape an author's h-index, total citations and i10-index (all). For example, from Louisa Gilbert I wish to scrape:
h-index = 36
i10-index = 74
citations = 4383
I have written this:
from bs4 import BeautifulSoup
import urllib.request
url="https://scholar.google.ca/citations?user=OdQKi7wAAAAJ&hl=en"
page = urllib.request.urlopen(url)
soup = BeautifulSoup(page, 'html.parser')
but I am unsure how to continue. (I understand there are some libraries available, but none allow you to scrape h-indexes and i10-indexes.)
You are almost there. You need to find the HTML elements that contain the data that you want to extract. In this particular case, the indexes are included in <td class="gsc_rsb_std"> tags. You need to pick up these tags from the soup and then use their string attribute to recover the text from within the tags:
indexes = soup.find_all("td", "gsc_rsb_std")
h_index = indexes[2].string
i10_index = indexes[4].string
citations = indexes[0].string
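Putting that together with the code from the question, a minimal runnable sketch (the index positions assume the current layout of the citations table, which lists each metric as an "all"/"since" pair):
from bs4 import BeautifulSoup
import urllib.request

url = "https://scholar.google.ca/citations?user=OdQKi7wAAAAJ&hl=en"
page = urllib.request.urlopen(url)
soup = BeautifulSoup(page, 'html.parser')

# The six <td class="gsc_rsb_std"> cells appear in table order:
# citations (all, since), h-index (all, since), i10-index (all, since)
indexes = soup.find_all("td", "gsc_rsb_std")
print("citations:", indexes[0].string)
print("h-index:  ", indexes[2].string)
print("i10-index:", indexes[4].string)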
To scrape all of the information from a Google Scholar author page, you could use a third-party solution like SerpApi. It's a paid API with a free trial.
Example python code (available in other libraries also):
from serpapi import GoogleSearch

params = {
    "api_key": "SECRET_API_KEY",
    "engine": "google_scholar_author",
    "hl": "en",
    "author_id": "-muoO7gAAAAJ"
}

search = GoogleSearch(params)
results = search.get_dict()
Example JSON output:
"cited_by": {
"table": [
{
"citations": {
"all": 7326,
"since_2016": 2613
}
},
{
"h_index": {
"all": 47,
"since_2016": 27
}
},
{
"i10_index": {
"all": 103,
"since_2016": 79
}
}
]
}
You can check out the documentation for more details.
Disclaimer: I work at SerpApi.
