Selenium scraping dynamic infinite scroll without AJAX - python

Intent: Scrape company data from the Inc.5000 list (e.g., rank, company name, growth, industry, state, city, description (via hovering over company name)).
Problem: From what I can see, data from the list is dynamically generated in the browser (no AJAX). Additionally, I can't just scroll to the bottom and then scrape the whole page because only a certain number of companies are available at any one time. In other words, companies 1-10 render, but once I scroll to companies 500-510, companies 1-10 are "de-rendered".
Current effort: The following code is where I'm at now.
from selenium import webdriver

driver = webdriver.Chrome()
driver.implicitly_wait(30)
driver.get('https://www.inc.com/inc5000/list/2020')

all_companies = []
scroll_max = 600645  # found via Selenium IDE
curr_scroll = 0
next_scroll = curr_scroll + 2000

for elem in driver.find_elements_by_class_name('franchise-list__companies'):
    while curr_scroll <= scroll_max:
        scroll_fn = ''.join(("window.scrollTo(", str(curr_scroll), ", ", str(next_scroll), ")"))
        driver.execute_script(scroll_fn)
        all_companies.append(elem.text.split('\n'))
        print('Current length: ', len(all_companies))
        curr_scroll += 2000
        next_scroll += 2000
Most SO posts related to infinite scroll deal with pages that either keep the data rendered as scrolling occurs or expose AJAX calls that can be tapped. This problem is an exception to both (but if I missed an applicable SO post, feel free to point me in that direction).
Problems with the current approach:
Redundant data is scraped (e.g. a single company may be scraped twice)
I still have to split out the data afterwards (final destination is a Pandas dataframe)
It doesn't include the company description (seen by hovering over the company name)
It's slow (I realize this is a caveat of Selenium itself, but I think the code can be optimized)

The data is loaded from an external URL. To print all companies, you can use this example:
import json
import requests

url = 'https://www.inc.com/rest/i5list/2020'
data = requests.get(url).json()

# uncomment this to print all data:
# print(json.dumps(data, indent=4))

for i, company in enumerate(data['companies'], 1):
    print('{:>05d} {}'.format(i, company['company']))
    # the hover text is stored in company['ifc_business_model']
Prints:
00001 OneTrust
00002 Create Music Group
00003 Lovell Government Services
00004 Avalon Healthcare Solutions
00005 ZULIE VENTURE INC
00006 Hunt A Killer
00007 Case Energy Partners
00008 Nationwide Mortgage Bankers
00009 Paxon Energy
00010 Inspire11
00011 Nugget
00012 TRYFACTA
00013 CannaSafe
00014 BRUMATE
00015 Resource Innovations
...and so on.
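Since the asker's final destination is a Pandas DataFrame, the same JSON can be loaded directly. A minimal sketch, assuming each record carries the 'company' and 'ifc_business_model' keys shown above (any other column names depend on what the endpoint actually returns):

import pandas as pd
import requests

url = 'https://www.inc.com/rest/i5list/2020'
data = requests.get(url).json()

# Each entry in data['companies'] is a dict, so the list maps straight onto a DataFrame.
df = pd.DataFrame(data['companies'])

# 'company' and 'ifc_business_model' appear above; inspect the rest before relying on them.
print(df.columns.tolist())
print(df[['company', 'ifc_business_model']].head())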

Related

Scraping all the reviews of a firm (Glassdoor)

My goal is to scrape all of the reviews of this firm. I tried adapting #Driftr95's code:
import requests
from bs4 import BeautifulSoup

def extract(pg):
    headers = {'user-agent': 'Mozilla/5.0'}
    url = f'https://www.glassdoor.com/Reviews/3M-Reviews-E446_P{pg}.htm?filter.iso3Language=eng'
    # f'https://www.glassdoor.com/Reviews/Google-Engineering-Reviews-EI_IE9079.0,6_DEPT1007_IP{pg}.htm?sort.sortType=RD&sort.ascending=false&filter.iso3Language=eng'
    r = requests.get(url, headers=headers)
    soup = BeautifulSoup(r.content, 'html.parser')  # helper that returns the whole page as a soup object
    return soup

# subRatSel, refDict, getDECstars and empRevs come from the referenced answer
for j in range(1, 21, 10):
    for i in range(j + 1, j + 11, 1):  # 3M: 4251 reviews
        soup = extract(i)  # extract() builds the page URL from the page number
        print(f' page {i}')
        for r in soup.select('li[id^="empReview_"]'):
            rDet = {'reviewId': r.get('id')}
            for sr in r.select(subRatSel):
                k = sr.select_one('div:first-of-type').get_text(' ').strip()
                sval = getDECstars(sr.select_one('div:nth-of-type(2)'), soup)
                rDet[f'[rating] {k}'] = sval
            for k, sel in refDict.items():
                sval = r.select_one(sel)
                if sval: sval = sval.get_text(' ').strip()
                rDet[k] = sval
            empRevs.append(rDet)
In the case where not all the subratings are available, all four subratings turn out to be N.A.
There were some things that I didn't account for because I hadn't encountered them before, but the updated version of getDECstars shouldn't have that issue. (If you use the longer version with the argument isv=True, it's easier to debug and figure out what's missing from the code...)
I scraped 200 reviews in this case, and it turned out that only 170 of them were unique reviews.
Duplicates are fairly easy to avoid by maintaining a list of reviewIds that have already been added and checking against it before adding a new review to empRevs:
scrapedIds = []
# for ...
    # for ...
        # soup = extract(...)
        # for r in ...
            if r.get('id') in scrapedIds: continue  # skip duplicate
            ## rDet = .....  ## AND REST OF INNER FOR-LOOP ##
            empRevs.append(rDet)
            scrapedIds.append(rDet['reviewId'])  # add to the list of ids to check against
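For larger scrapes, a set gives constant-time membership checks instead of scanning a list on every review. A tiny self-contained sketch of the same dedup idea (the sample reviewId values are made up for illustration):

# Demo of the dedup idea using a set instead of a list.
empRevs = []
scraped_ids = set()

incoming = [{'reviewId': 'empReview_1'}, {'reviewId': 'empReview_2'}, {'reviewId': 'empReview_1'}]
for rDet in incoming:
    if rDet['reviewId'] in scraped_ids:
        continue  # skip duplicate
    empRevs.append(rDet)
    scraped_ids.add(rDet['reviewId'])  # remember this id so later duplicates are skipped

print(len(empRevs))  # 2 - the duplicate was skipped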
HTTPS requests tend to time out after 100 rounds...
You could try adding breaks and switching out user-agents every 50 [or 5 or 10 or...] requests, but I'm quick to resort to Selenium at times like this. This is my suggested solution - just call it like this and pass a URL to start with:
## PASTE [OR DOWNLOAD&IMPORT] from https://pastebin.com/RsFHWNnt ##
startUrl = 'https://www.glassdoor.com/Reviews/3M-Reviews-E446.htm?sort.sortType=RD&sort.ascending=false&filter.iso3Language=eng'
scrape_gdRevs(startUrl, 'empRevs_3M.csv', maxScrapes=1000, constBreak=False)
[last 3 lines of] printed output:
total reviews: 4252
total reviews scraped this run: 4252
total reviews scraped over all time: 4252
It clicks through the pages until it reaches the last page (or maxes out maxScrapes). You do have to log in at the beginning though, so fill out login_to_gd with your username and password, or log in manually by replacing the login_to_gd(driverG) line with the input(...) line that waits for you to log in [then press ENTER in the terminal] before continuing.
I think cookies can also be used instead (with requests), but I'm not good at handling that. If you figure it out, then you can use some version of linkToSoup or your extract(pg); then, you'll have to comment out or remove the lines ending in ## for selenium and uncomment [or follow instructions from] the lines that end with ## without selenium. [But please note that I've only fully tested the selenium version.]
The CSVs [like "empRevs_3M.csv" and "scrapeLogs_empRevs_3M.csv" in this example] are updated after every page-scrape, so even if the program crashes [or you decide to interrupt it], it will have saved everything up to the previous scrape. Since it also tries to load from the CSVs at the beginning, you can just continue later (just set startUrl to the URL of the page you want to continue from - but even if it's at page 1, remember that duplicates will be ignored, so it's okay - it'll just waste some time).
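A minimal sketch of the breaks-and-user-agent-rotation idea mentioned above (the user-agent strings and the every-50-requests interval are arbitrary choices for illustration, not values from the answer):

import itertools
import random
import time

import requests

# Hypothetical pool of user-agent strings to rotate through; add as many as you like.
UA_POOL = itertools.cycle([
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)',
    'Mozilla/5.0 (X11; Linux x86_64)',
])

session = requests.Session()
session.headers['user-agent'] = next(UA_POOL)

for n, page in enumerate(range(1, 201), 1):
    url = f'https://www.glassdoor.com/Reviews/3M-Reviews-E446_P{page}.htm?filter.iso3Language=eng'
    resp = session.get(url)
    # ... parse resp.content with BeautifulSoup as in extract() above ...
    if n % 50 == 0:
        session.headers['user-agent'] = next(UA_POOL)  # switch user-agent
        time.sleep(random.uniform(10, 30))             # take a break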

Having two divs under the same class, take the content of the first and second separately (web scraping with BeautifulSoup)

I have the following HTML inside the content_list variable:
<h3 class="sds-heading--7 title">Problems with battery capacity long-term</h3>
<div class="review-byline review-section">
<div>July 21, 2014</div>
<div>By Cathie from San Diego</div>
<div class="review-type"><strong>Owns this car</strong></div>
</div>
<div class="review-section">
<p class="review-body">We have owned our Leaf since May 2011. We have loved the car but are now getting quite concerned. My husband drives the car, on average, 20-40 miles/day to and from work and running errands, mostly 100% on city roads. We live in San Diego, so no issue with winter weather and we live 7 miles from the ocean so seldom have daytime temperatures above 85. Originally, we would get 65-70 miles per 80-90% charge. Last fall we noticed that there was considerably less remaining charge left after a day of driving. He began to track daily miles, remaining "bars", as well as started charging it 100%. For 9 months we have only been getting 40-45 miles on a full charge with only 1-2 "bars" remaining at the end of the day. Sometimes it will be blinking and "talking" to us to get to a charging place ASAP. We just had it into the dealership. Though on a full charge, the car gauge shows 12 bars, the dealership states that the batteries have lost 2 bars via the computer diagnostics (which we are told is a different reading from the car gauge itself) and, that they say, is average and excepted for the car at this age. Everything else (software, diagnostics, etc.) shows 100%, so the dealership thinks that the car is functioning as it should. They are unable to explain why we can only go 40-45 miles on a charge, but keep saying that the car tests out fine. If the distance one is able to drive on a full charge decreases any further, it will begin to render the car useless. As someone else recommended, in retrospect, the best way to go is to lease the Leaf so that battery life is not an issue.</p>
</div>
First I used this code to get to the collection of reviews
import requests
from bs4 import BeautifulSoup
from fake_useragent import UserAgent

ua = UserAgent()
header = {'User-Agent': str(ua.safari)}
url = 'https://www.cars.com/research/nissan-leaf-2011/consumer-reviews/?page=1'
response = requests.get(url, headers=header)
print(response)
html_soup = BeautifulSoup(response.text, 'lxml')
content_list = html_soup.find_all('div', attrs={'class': 'consumer-review-container'})
Now I would like to take the date of the review and the name of the reviewer, which in this case would be:
<div class="review-byline review-section">
<div>July 21, 2014</div>
<div>By Cathie from San Diego</div>
The problem is that I can't separate those two divs.
My code:
data = []
for e in content_list:
    data.append({
        'review_date': e.find_all("div", {"class": "review-byline"})[0].text,
        'overall_rating': e.select_one('span.sds-rating__count').text,
        'review_title': e.h3.text,
        'review_content': e.select_one('p').text,
    })
The result of my code
{'overall_rating': '4.7',
'review_content': 'This is the perfect electric car for driving around town, doing errands or even for a short daily commuter. It is very comfy and very quick. The only issue was the first gen battery. The 2011-2014 battery degraded quickly and if the owner did not have Nissan replace it, all those cars are now junk and can only go 20 miles or so on a charge. We had Nissan replace our battery with the 2nd gen battery and it is good as new!',
'review_date': '\nFebruary 24, 2020\nBy EVs are the future from Tucson, AZ\nOwns this car\n',
'review_title': 'Great Electric Car!'}
For the first one you could take the <div> directly:
'review_date':e.find("div", {"class":"review-byline"}).div.text,
for the second one use e.g. a CSS selector:
'reviewer_name':e.select_one("div.review-byline div:nth-of-type(2)").text,
Example
url = 'https://www.cars.com/research/nissan-leaf-2011/consumer-reviews/?page=1'
response = requests.get(url, headers=header)
html_soup = BeautifulSoup(response.text, 'lxml')
content_list = html_soup.find_all('div', attrs={'class': 'consumer-review-container'})

data = []
for e in content_list:
    data.append({
        'review_date': e.find("div", {"class": "review-byline"}).div.text,
        'reviewer_name': e.select_one("div.review-byline div:nth-of-type(2)").text,
        'overall_rating': e.select_one('span.sds-rating__count').text,
        'review_title': e.h3.text,
        'review_content': e.select_one('p').text,
    })

data
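If some review cards are missing one of these nodes, select_one/find return None and .text raises an AttributeError. A defensive variant of the example above, using a hypothetical txt() helper that is not part of the original answer:

import requests
from bs4 import BeautifulSoup

def txt(node):
    # Hypothetical helper: return stripped text, or None when the node is missing.
    return node.get_text(strip=True) if node else None

url = 'https://www.cars.com/research/nissan-leaf-2011/consumer-reviews/?page=1'
html_soup = BeautifulSoup(requests.get(url, headers={'User-Agent': 'Mozilla/5.0'}).text, 'lxml')
content_list = html_soup.find_all('div', attrs={'class': 'consumer-review-container'})

data = []
for e in content_list:
    byline = e.find("div", {"class": "review-byline"})
    data.append({
        'review_date': txt(byline.div if byline else None),
        'reviewer_name': txt(e.select_one("div.review-byline div:nth-of-type(2)")),
        'overall_rating': txt(e.select_one('span.sds-rating__count')),
        'review_title': txt(e.h3),
        'review_content': txt(e.select_one('p')),
    })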

Unable to handle empty <td> value in web scraping

I am scraping a wiki page, but there are some empty <td> elements in some rows, so I used:
for tr in table1.tbody:
    list = []
    for td in tr:
        try:
            if(td.text is None): list.append('NA')
            else: list.append(td.text.strip())
        except:
            list.append(td.strip())
to store those row elements in a list, but when I print the list, the rows with empty <td> values, which should now have 'NA' appended, are still empty, i.e. 'NA' has not been appended to the list.
How can I fix this?
Note: the question needs improvement - while you update it, here are just two options to fix it.
Option #1
Use pandas to get the tables in a quick and proper way:
import pandas as pd
pd.concat(pd.read_html('https://en.wikipedia.org/wiki/List_of_Falcon_9_and_Falcon_Heavy_launches#Past_launches')[2:11])
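Since the goal is to store 'NA' for the empty cells, the pandas route can handle that in one extra step. A sketch, assuming the same URL and table slice as above:

import pandas as pd

tables = pd.read_html('https://en.wikipedia.org/wiki/List_of_Falcon_9_and_Falcon_Heavy_launches#Past_launches')
df = pd.concat(tables[2:11])
df = df.fillna('NA')  # empty <td> cells come back as NaN; replace them with 'NA'
print(df.head())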
Option #2
Put the list outside, before your loops, so you avoid overwriting it, and check your indentation:
data = []
for tr in table1.tbody:
    for td in tr:
        try:
            if(td.text is None): data.append('NA')
            else: data.append(td.text.strip())
        except:
            data.append(td.strip())
A few things here:
Don't use list as a variable name. It's a built-in type in Python.
td.text is not None; there is actually a string as content (i.e. ' ').
You are not iterating through the tr and td tags (or at least not in the code you are providing here). You need to create your lists of tr tags and td elements to use in your for loops.
Try this:
import requests
from bs4 import BeautifulSoup

url = 'https://en.wikipedia.org/wiki/List_of_Falcon_9_and_Falcon_Heavy_launches#Past_launches'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')

table1 = soup.find_all('table')[2]

stored_list = []
for tr in table1.tbody.find_all('tr'):
    for td in tr.find_all('td'):
        if td.text.strip() == '':
            stored_list.append('NA')
        else:
            stored_list.append(td.text.strip())
Output:
print(stored_list)
['4 June 2010,18:45', 'F9 v1.0[7]B0003[8]', 'CCAFS,SLC-40', 'Dragon Spacecraft Qualification Unit', 'NA', 'LEO', 'SpaceX', 'Success', 'Failure[9][10](parachute)', 'First flight of Falcon 9 v1.0.[11] Used a boilerplate version of Dragon capsule which was not designed to separate from the second stage.(more details below) Attempted to recover the first stage by parachuting it into the ocean, but it burned up on reentry, before the parachutes even go to deploy.[12]', '8 December 2010,15:43[13]', 'F9 v1.0[7]B0004[8]', 'CCAFS,SLC-40', 'Dragon demo flight C1(Dragon C101)', 'NA', 'LEO (ISS)', 'NASA (COTS)\nNRO', 'Success[9]', 'Failure[9][14](parachute)', "Maiden flight of SpaceX's Dragon capsule, consisting of over 3 hours of testing thruster maneuvering and then reentry.[15] Attempted to recover the first stage by parachuting it into the ocean, but it disintegrated upon reentry, again before the parachutes were deployed.[12] (more details below) It also included two CubeSats,[16] and a wheel of Brouère cheese. Before the launch, SpaceX discovered that there was a crack in the nozzle of the 2nd stage's Merlin vacuum engine. So Elon just had them cut off the end of the nozzle with a pair of shears and launched the rocket a few days later. After SpaceX had trimmed the nozzle, NASA was notified of the change and they agreed to it.[17]", '22 May 2012,07:44[18]', 'F9 v1.0[7]B0005[8]', 'CCAFS,SLC-40', 'Dragon demo flight C2+[19](Dragon C102)', '525\xa0kg (1,157\xa0lb)[20] (excl. Dragon mass)', 'LEO (ISS)', 'NASA (COTS)', 'Success[21]', 'No attempt', 'The Dragon spacecraft demonstrated a series of tests before it was allowed to approach the International Space Station. Two days later, it became the first commercial spacecraft to board the ISS.[18] (more details below)', '8 October 2012,00:35[22]', 'F9 v1.0[7]B0006[8]', 'CCAFS,SLC-40', 'SpaceX CRS-1[23](Dragon C103)', '4,700\xa0kg (10,400\xa0lb) (excl. Dragon mass)', 'LEO (ISS)', 'NASA (CRS)', 'Success', 'No attempt', 'Orbcomm-OG2[24]', '172\xa0kg (379\xa0lb)[25]', 'LEO', 'Orbcomm', 'Partial failure[26]', "CRS-1 was successful, but the secondary payload was inserted into an abnormally low orbit and subsequently lost. This was due to one of the nine Merlin engines shutting down during the launch, and NASA declining a second reignition, as per ISS visiting vehicle safety rules, the primary payload owner is contractually allowed to decline a second reignition. NASA stated that this was because SpaceX could not guarantee a high enough likelihood of the second stage completing the second burn successfully which was required to avoid any risk of secondary payload's collision with the ISS.[27][28][29]", '1 March 2013,15:10', 'F9 v1.0[7]B0007[8]', 'CCAFS,SLC-40', 'SpaceX CRS-2[23](Dragon C104)', '4,877\xa0kg (10,752\xa0lb) (excl. Dragon mass)', 'LEO (ISS)', 'NASA (CRS)', 'Success', 'No attempt', 'Last launch of the original Falcon 9 v1.0 launch vehicle, first use of the unpressurized trunk section of Dragon.[30]', '29 September 2013,16:00[31]', 'F9 v1.1[7]B1003[8]', 'VAFB,SLC-4E', 'CASSIOPE[23][32]', '500\xa0kg (1,100\xa0lb)', 'Polar orbit LEO', 'MDA', 'Success[31]', 'Uncontrolled(ocean)[d]', 'First commercial mission with a private customer, first launch from Vandenberg, and demonstration flight of Falcon 9 v1.1 with an improved 13-tonne to LEO capacity.[30] After separation from the second stage carrying Canadian commercial and scientific satellites, the first stage booster performed a controlled reentry,[33] and an ocean touchdown test for the first time. 
This provided good test data, even though the booster started rolling as it neared the ocean, leading to the shutdown of the central engine as the roll depleted it of fuel, resulting in a hard impact with the ocean.[31] This was the first known attempt of a rocket engine being lit to perform a supersonic retro propulsion, and allowed SpaceX to enter a public-private partnership with NASA and its Mars entry, descent, and landing technologies research projects.[34] (more details below)', '3 December 2013,22:41[35]', 'F9 v1.1B1004', 'CCAFS,SLC-40', 'SES-8[23][36][37]', '3,170\xa0kg (6,990\xa0lb)', 'GTO', 'SES', 'Success[38]', 'No attempt[39]', 'First Geostationary transfer orbit (GTO) launch for Falcon 9,[36] and first successful reignition of the second stage.[40] SES-8 was inserted into a Super-Synchronous Transfer Orbit of 79,341\xa0km (49,300\xa0mi) in apogee with an inclination of 20.55° to the equator.']
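If you want one list per table row rather than one flat list (closer to the question's original row_list idea), collect a per-row list inside the tr loop. A minimal variation of the code above:

import requests
from bs4 import BeautifulSoup

url = 'https://en.wikipedia.org/wiki/List_of_Falcon_9_and_Falcon_Heavy_launches#Past_launches'
soup = BeautifulSoup(requests.get(url).text, 'html.parser')
table1 = soup.find_all('table')[2]

rows = []
for tr in table1.tbody.find_all('tr'):
    row = []
    for td in tr.find_all('td'):
        text = td.text.strip()
        row.append(text if text else 'NA')  # empty cell -> 'NA'
    if row:                                 # header rows have no <td>, so skip them
        rows.append(row)

print(rows[0])  # the first data row as its own list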

Facebook Marketplace scraping using class ID in Python

Class ID to scrape
I wanted to scrape data from Facebook Marketplace using Python with the script below; however, no data shows up when I run the script. The class ID is in the picture above.
elements = driver.find_elements_by_class_name('l9j0dhe7 f9o22wc5 ad2k81qe')
for ele in elements:
    print(ele.text)
    print(ele.get_attribute('title'))
Okay, there are a few things you should take a look at - find_element_by_class_name takes only one class name; you had better make use of find_element_by_css_selector.
Solution for getting the information based on what you provided:
Get all results:
elements = driver.find_elements_by_class_name('kbiprv82')
Loop results and print:
for ele in elements:
    title = ele.find_element_by_css_selector('span.a8c37x1j.ni8dbmo4.stjgntxs.l9j0dhe7').text
    price = ele.find_element_by_css_selector('div.hlyrhctz > span').text
    print(title, price)
Output
IMac 21,5 Ende 2012 380 €
Imac 27 Mitte 2011 (Wie Neu Zustand ) 550 €
iMac 14,2 27" 550 €
Hope that helps, let us know.
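Note that the find_element_by_* helpers were removed in Selenium 4, where the equivalent calls go through the By locator class. A sketch of the same lookups (the class names are copied from the answer above and may well have changed on the live site; the starting URL and login handling are assumptions):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get('https://www.facebook.com/marketplace/')  # assumed starting URL; log in / navigate as needed

# Same selectors as the answer above, written with the Selenium 4 API.
elements = driver.find_elements(By.CLASS_NAME, 'kbiprv82')
for ele in elements:
    title = ele.find_element(By.CSS_SELECTOR, 'span.a8c37x1j.ni8dbmo4.stjgntxs.l9j0dhe7').text
    price = ele.find_element(By.CSS_SELECTOR, 'div.hlyrhctz > span').text
    print(title, price)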

Access aria-label and reviews of Yelp with BeautifulSoup [closed]

I am trying to access the reviews and star rating of each reviewer and append the values to a list. However, it doesn't return the output. Can anyone tell me what's wrong with my code?
l = {}
u = []
# allrev: the list of review elements collected earlier from the soup
for i in range(0, len(allrev)):
    try:
        l["stars"] = allrev[i].find("div", {"class": "lemon--div__373c0__1mboc i-stars__373c0__1T6rz i-stars--regular-4__373c0__2YrSK border-color--default__373c0__3-ifU overflow--hidden__373c0__2y4YK"}).get('aria-label')
    except:
        l["stars"] = None
    try:
        l["review"] = allrev[i].find("span", {"class": "lemon--span__373c0__3997G raw__373c0__3rKqk"}).text
    except:
        l["review"] = None
    u.append(l)
    l = {}
print({"data": u})
To get all the reviews you can try the following:
import requests
from bs4 import BeautifulSoup

URL = "https://www.yelp.com/biz/sushi-yasaka-new-york"
soup = BeautifulSoup(requests.get(URL).content, "html.parser")

for star, review in zip(
    soup.select(
        ".margin-b1__373c0__1khoT .border-color--default__373c0__3-ifU .border-color--default__373c0__3-ifU .border-color--default__373c0__3-ifU .overflow--hidden__373c0__2y4YK"
    ),
    soup.select(".comment__373c0__3EKjH .raw__373c0__3rcx7"),
):
    print(star.get("aria-label"))
    print(review.text)
    print("-" * 50)
Output:
5 star rating
I've been craving sushi for weeks now and Sushi Yasaka hit the spot for me. Their lunch prices are unbeatable. Their lunch specials seem to extend through weekends which is also amazing.I got the Miyabi lunch as take out and ate in along the benches near the MTA. It came with 4 nigiri, 7 sashimi and you get to pick the other roll (6 pieces). It also came with a side (choose salad or soup, add $1 for both). It was an incredible deal for only $20. I was so full and happy! The fish tasted very fresh with wonderful flavor. I ordered right as they opened and there were at least 10 people waiting outside when I picked up my food so I imagine there is high turnover, keeping the seafood fresh. This will be a regular splurge lunch spot for sure.
--------------------------------------------------
5 star rating
If you're looking for great sushi on Manhattan's upper west side, head over to Sushi Yakasa ! Best sushi lunch specials, especially for sashimi. I ordered the Miyabi - it included a fresh oyster ! The oyster was delicious, served raw on the half shell. The sashimi was delicious too. The portion size was very good for the area, which tends to be a pricey neighborhood. The restaurant is located on a busy street (west 72nd) & it was packed when I dropped by around lunchtimeStill, they handled my order with ease & had it ready quickly. Streamlined service & highly professional. It's a popular sushi place for a reason. Every piece of sashimi was perfect. The salmon avocado roll was delicious too. Very high quality for the price. Highly recommend! Update - I've ordered from Sushi Yasaka a few times since the pandemic & it's just as good as it was before. Fresh, and they always get my order correct. I like their takeout system - you can order over the phone (no app required) & they text you when it's ready. Home delivery is also available & very reliable. One of my favorite restaurants- I'm so glad they're still in business !
--------------------------------------------------
...
...
Edit to only get the first 100 reviews:
import csv
import requests
from bs4 import BeautifulSoup

url = "https://www.yelp.com/biz/sushi-yasaka-new-york?start={}"

offset = 0
review_count = 0

with open("output.csv", "a", encoding="utf-8") as f:
    csv_writer = csv.writer(f, delimiter="\t")
    csv_writer.writerow(["rating", "review"])

    while True:
        resp = requests.get(url.format(offset))
        soup = BeautifulSoup(resp.content, "html.parser")

        for rating, review in zip(
            soup.select(
                ".margin-b1__373c0__1khoT .border-color--default__373c0__3-ifU .border-color--default__373c0__3-ifU .border-color--default__373c0__3-ifU .overflow--hidden__373c0__2y4YK"
            ),
            soup.select(".comment__373c0__3EKjH .raw__373c0__3rcx7"),
        ):
            print(f"review # {review_count}. link: {resp.url}")
            csv_writer.writerow([rating.get("aria-label"), review.text])

            review_count += 1
            if review_count > 100:
                raise Exception("Exceeded 100 reviews.")

        offset += 20
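One caveat with the while True loop above: if the business has fewer reviews than the cap, it keeps requesting empty pages. A variant, sketched with the same selectors, that stops cleanly when a page returns no reviews:

import csv
import requests
from bs4 import BeautifulSoup

url = "https://www.yelp.com/biz/sushi-yasaka-new-york?start={}"
offset = 0

with open("output.csv", "a", encoding="utf-8") as f:
    csv_writer = csv.writer(f, delimiter="\t")
    csv_writer.writerow(["rating", "review"])

    while True:
        resp = requests.get(url.format(offset))
        soup = BeautifulSoup(resp.content, "html.parser")

        ratings = soup.select(
            ".margin-b1__373c0__1khoT .border-color--default__373c0__3-ifU "
            ".border-color--default__373c0__3-ifU .border-color--default__373c0__3-ifU "
            ".overflow--hidden__373c0__2y4YK"
        )
        reviews = soup.select(".comment__373c0__3EKjH .raw__373c0__3rcx7")

        if not reviews:  # no reviews on this page: we've gone past the last page
            break

        for rating, review in zip(ratings, reviews):
            csv_writer.writerow([rating.get("aria-label"), review.text])

        offset += 20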
