BeautifulSoup find_all gives me the value of the last element only - python

Hello, hope you're all doing well. I wrote the following program to extract data from a page's source, but it only gives me the last value and I don't know why:
users_list = soup.find_all("li", class_="ContentGrid-gridItem-2Ad e2e-ContentGriditem")
for user in users_list:
    name = user.find("span", class_="e2e-UserSummary-displayName")
    profile_link = user.find('a', class_='UserSummary-ownerLink-1cJ')
The variable profile_link gives me this output:
<a class="UserSummary-ownerLink-1cJ"
href="https://www.behance.net/baianat?tracking_source=search_users"
target="_blank"></a>
I want it to return only the link.
Another thing: I also built other variables to return different values, but they keep returning the same value because the elements share the same class and are all in h4 tags. Here's the code; every variable gets the value of the first object, which is appreciations, 189.9k:
appreciations = users.find("h4", class_="UserSummaryStats-statAmount-13R").text
followers = users.find("h4", class_="UserSummaryStats-statAmount-13R").text
project_views = users.find("h4", class_="UserSummaryStats-statAmount-13R").text

To extract only the href attribute of a tag, you can use tag['<attr>']. For example:
profile_link = user.find('a', class_='UserSummary-ownerLink-1cJ')['href']
For the second case, just use the .find_all(<tag_name>, <params>) method, which returns all the tags matching the given parameters as a list:
user_data = users.find_all("h4", class_="UserSummaryStats-statAmount-13R")
for entry in user_data:
    # do something with each entry
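Putting both fixes together, here is a self-contained sketch. The markup below is invented to mirror the class names from the question; the real Behance page will differ.

```python
from bs4 import BeautifulSoup

# Invented markup mirroring the question's class names (illustrative only).
html = """
<li>
  <a class="UserSummary-ownerLink-1cJ" href="https://www.behance.net/baianat"></a>
  <h4 class="UserSummaryStats-statAmount-13R">189.9k</h4>
  <h4 class="UserSummaryStats-statAmount-13R">12.3k</h4>
  <h4 class="UserSummaryStats-statAmount-13R">1.2m</h4>
</li>
"""
user = BeautifulSoup(html, "html.parser").find("li")

# Indexing the tag like a dict returns just the attribute value.
profile_link = user.find("a", class_="UserSummary-ownerLink-1cJ")["href"]

# find_all returns every matching tag in document order, so the three
# stats can be unpacked positionally instead of calling find() three times.
appreciations, followers, project_views = [
    h4.text for h4 in user.find_all("h4", class_="UserSummaryStats-statAmount-13R")
]
print(profile_link)   # https://www.behance.net/baianat
print(appreciations, followers, project_views)  # 189.9k 12.3k 1.2m
```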

Related

Using find_all in bs4

When I parse for more than one class I get an error (when I change find to find_all):
Error: ResultSet object has no attribute 'find'. You're probably treating a list of elements like a single element
import requests
from bs4 import BeautifulSoup

heroes_page_list = []
url = 'https://dota2.fandom.com/wiki/Dota_2_Wiki'
q = requests.get(url)
result = q.content
soup = BeautifulSoup(result, 'lxml')
heroes = soup.find_all('div', class_='heroentry').find('a')
for hero in heroes:
    hero_url = heroes.get('href')
    heroes_page_list.append("https://dota2.fandom.com" + hero_url)
# print(heroes_page_list)
with open('heroes_page_list.txt', "w") as file:
    for line in heroes_page_list:
        file.write(f'{line}\n')
You are searching for a tag inside a list of div tags; you need to do it like this:
heroes = soup.find_all('div', class_='heroentry')
a_tags = [hero.find('a') for hero in heroes]
for a_tag in a_tags:
    hero_url = a_tag.get('href')
    heroes_page_list.append("https://dota2.fandom.com" + hero_url)
heroes_page_list then looks like this:
['https://dota2.fandom.com/wiki/Abaddon',
'https://dota2.fandom.com/wiki/Alchemist',
'https://dota2.fandom.com/wiki/Axe',
'https://dota2.fandom.com/wiki/Beastmaster',
'https://dota2.fandom.com/wiki/Brewmaster',
'https://dota2.fandom.com/wiki/Bristleback',
'https://dota2.fandom.com/wiki/Centaur_Warrunner',
....
The error states exactly what you need to do.
The find() method is only usable on a single element, while find_all() returns a list of elements. You are trying to apply find() to that list.
If you want to apply find('a'), you should do something similar to this:
heroes = soup.find_all('div', class_='heroentry')
for hero in heroes:
    hero_a_tag = hero.find('a')
    hero_url = hero_a_tag.get('href')
    heroes_page_list.append("https://dota2.fandom.com" + hero_url)
You basically have to apply the find() method to every element present in the list generated by the find_all() method.
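The ResultSet error and its fix can be reproduced with a toy snippet. The markup here is invented to stand in for the wiki page; it assumes, as on the real page, that each 'heroentry' div wraps a single <a> tag.

```python
from bs4 import BeautifulSoup

# Toy markup standing in for the wiki page (invented for illustration).
html = """
<div class="heroentry"><a href="/wiki/Abaddon"></a></div>
<div class="heroentry"><a href="/wiki/Axe"></a></div>
"""
soup = BeautifulSoup(html, "html.parser")

heroes = soup.find_all("div", class_="heroentry")  # a ResultSet (list-like)
# heroes.find("a")  # would raise: ResultSet object has no attribute 'find'

# Call find() on each element of the ResultSet instead.
urls = ["https://dota2.fandom.com" + hero.find("a").get("href") for hero in heroes]
print(urls)  # ['https://dota2.fandom.com/wiki/Abaddon', 'https://dota2.fandom.com/wiki/Axe']
```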

How to fetch/scrape all elements from a html "class" which is inside "span"?

I am trying to scrape data from a website where I am collecting data from all elements under a "class" which is inside a "span" using this piece of code, but I end up fetching only one element instead of all of them.
expand_hits = soup.findAll("a", {"class": "sold-property-listing"})
apartments = []
for hit_property in expand_hits:
    #element = soup.findAll("div", {"class": "sold-property-listing__location"})
    place_name = expand_hits[1].find("div", {"class": "sold-property-listing__location"}).findAll("span", {"class": "item-link"})[1].getText()
    print(place_name)
    apartments.append(final_str)
Expected result for print(place_name)
Stockholm
Malmö
Copenhagen
...
..
.
The result I am getting for print(place_name):
Malmö
Malmö
Malmö
...
..
.
When I try to fetch the contents from expand_hits[1] I get only one element. If I don't specify the index, the scraper throws an error about the usage of find() and find_all(). As far as I understand, I have to iterate over the elements' contents.
Any help is much appreciated.
Thanks in Advance!
Use the loop variable rather than indexing into the same collection with the same index (expand_hits[1]), and append place_name, not final_str:
expand_hits = soup.findAll("a", {"class": "sold-property-listing"})
apartments = []
for hit_property in expand_hits:
    place_name = hit_property.find("div", {"class": "sold-property-listing__location"}).find("span", {"class": "item-link"}).getText()
    print(place_name)
    apartments.append(place_name)
You then only need find() and no indexing.
Add a User-Agent header to ensure results. Also, note that I had to pick a parent node, because at least one result is not captured by using the class item-link, e.g. Övägen 6C. I use replace to strip the hidden text that comes along with selecting the parent node.
from bs4 import BeautifulSoup
import requests
import re

url = "https://www.hemnet.se/salda/bostader?location_ids%5B%5D=474035"
page = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})
soup = BeautifulSoup(page.content, 'html.parser')

for result in soup.select('.sold-results__normal-hit'):
    print(re.sub(r'\s{2,}', ' ', result.select_one('.sold-property-listing__location h2 + div').text).replace(result.select_one('.hide-element').text.strip(), ''))
If you only want the district within Malmö, e.g. Limhamns Sjöstad, you need to check how many child span tags each listing has:
for result in soup.select('.sold-results__normal-hit'):
    nodes = result.select('.sold-property-listing__location h2 + div span')
    if len(nodes) == 2:
        place = nodes[1].text.strip()
    else:
        place = 'not specified'
    print(place)
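The adjacent-sibling selector plus the span count check can be demonstrated offline. The markup below is invented (it is not Hemnet's real HTML), but it reuses the question's class names and the two-spans-vs-one pattern described above.

```python
from bs4 import BeautifulSoup

# Invented markup: some listings carry a second <span> with the district,
# some do not (class names mirror the question's, not Hemnet's real page).
html = """
<div class="sold-property-listing__location">
  <h2>Apt A</h2><div><span>Malmö</span><span>Limhamns Sjöstad</span></div>
</div>
<div class="sold-property-listing__location">
  <h2>Apt B</h2><div><span>Malmö</span></div>
</div>
"""
soup = BeautifulSoup(html, "html.parser")

places = []
for result in soup.select(".sold-property-listing__location"):
    # 'h2 + div span': the spans inside the div immediately after the h2
    nodes = result.select("h2 + div span")
    places.append(nodes[1].text.strip() if len(nodes) == 2 else "not specified")
print(places)  # ['Limhamns Sjöstad', 'not specified']
```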

Webscraping Issue w/ BeautifulSoup

I am new to Python web scraping, and I am scraping productreview.com for reviews. The following code pulls all the data I need for a single review:
#Scrape TrustPilot for User Reviews (Rating, Comments)
import pandas as pd
import numpy as np
from bs4 import BeautifulSoup as bs
import json
import requests
import datetime as dt

final_list = []
url = 'https://www.productreview.com.au/listings/world-nomads'
r = requests.get(url)
soup = bs(r.text, 'lxml')

for div in soup.find('div', class_='loadingOverlay_24D'):
    try:
        name = soup.find('h4', class_='my-0_27D align-items-baseline_kxl flex-row_3gP d-inline-flex_1j8 text-muted_2v5')
        name = name.find('span').text
        location = soup.find('h4').find('small').text
        policy = soup.find('div', class_='px-4_1Cw pt-4_9Zz pb-2_1Ex card-body_2iI').find('span').text
        title = soup.find('h3').find('span').text
        content = soup.find('p', class_='mb-0_2CX').text
        rating = soup.find('div', class_='mb-4_2RH align-items-center_3Oi flex-wrap_ATH d-flex_oSG')
        rating = rating.find('div')['title']
        final_list.append([name, location, policy, rating, title, content])
    except AttributeError:
        pass

reviews = pd.DataFrame(final_list, columns=['Name', 'Location', 'Policy', 'Rating', 'Title', 'Content'])
print(reviews)
But when I edit
for div in soup.find('div', class_ = 'loadingOverlay_24D'):
to
for div in soup.findAll('div', class_ = 'loadingOverlay_24D'):
I don't get all the reviews; I just get the same entry looped over and over.
Any help would be much appreciated.
Thanks!
Issue 1: Repeated data inside the loop
Your loop has the following form:
for div in soup.find('div', ...):
    name = soup.find('h4', ...)
    policy = soup.find('div', ...)
    ...
Notice that you are calling find on the soup object inside the loop. This means that each time you try to find the value for name, the search starts from the beginning of the whole document and returns the first match, in every iteration.
This is why you are getting the same data over and over.
To fix this, you need to call find on the current review div, i.e. the loop variable:
for div in soup.find('div', ...):
    name = div.find('h4', ...)
    policy = div.find('div', ...)
    ...
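The difference can be seen with two fake reviews. This sketch uses invented markup and class names; it only demonstrates the scoping issue, not the real productreview.com structure.

```python
from bs4 import BeautifulSoup

# Two fake reviews (invented markup). Searching via `soup` always returns
# the first name in the document; searching via the loop variable returns
# each review's own name.
html = """
<div class="review"><h4>Aidan</h4></div>
<div class="review"><h4>Bruno M.</h4></div>
"""
soup = BeautifulSoup(html, "html.parser")

wrong = [soup.find("h4").text for _ in soup.find_all("div", class_="review")]
right = [div.find("h4").text for div in soup.find_all("div", class_="review")]
print(wrong)  # ['Aidan', 'Aidan']
print(right)  # ['Aidan', 'Bruno M.']
```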
Issue 2: Missing data and error handling
In your code, any errors inside the loop are silently ignored. However, many errors are actually happening while parsing and extracting the values. For example:
location = div.find('h4').find('small').text
Not all reviews have location information. Hence, the code will extract h4, then try to find small, but won't find any, returning None. Then you are calling .text on that None object, causing an exception. Hence, this review will not be added to the result data frame.
To fix this, you need to add more error checking. For example:
locationDiv = div.find('h4').find('small')
if locationDiv:
    location = locationDiv.text
else:
    location = ''
Issue 3: Identifying and extracting data
The page you're trying to parse has broken HTML, and uses CSS classes that seem random or at least inconsistent. You need to find the correct and unique identifiers for the data that you are extracting such that they strictly match all the entries.
For example, you are extracting the review-container div using CSS class loadingOverlay_24D. This is incorrect. This CSS class seems to be for a "loading" placeholder div or something similar. Actual reviews are enclosed in div blocks that look like this:
<div itemscope="" itemType="http://schema.org/Review" itemProp="review">
....
</div>
Notice that the uniquely identifying property is the itemProp attribute. You can extract those div blocks using:
soup.find_all('div', {'itemprop': 'review'})
Similarly, you have to find the correct identifying properties of the other data you want to extract to ensure you get all your data fully and correctly.
One more thing, when a tag has more than one CSS class, usually only one of them is the identifying property you want to use. For example, for names, you have this:
name = soup.find('h4', class_ = 'my-0_27D align-items-baseline_kxl flex-row_3gP d-inline-flex_1j8 text-muted_2v5')
but in reality, you don't need all these classes. The first class, in this case, is sufficient to identify the name h4 blocks:
name = soup.find('h4', class_ = 'my-0_27D')
Example:
Here's an example that extracts the author names from the review page:
for div in soup.find_all('div', {'itemprop': 'review'}):
    name = div.find('h4', class_='my-0_27D')
    if name:
        name = name.find('span').text
    else:
        name = '-'
    print(name)
Output:
Aidan
Bruno M.
Ba. I.
Luca Evangelista
Upset
Julian L.
Alison Peck
...
The page serves broken HTML, and html.parser is better at dealing with it.
Change soup = bs(r.text, 'lxml') to soup = bs(r.text, 'html.parser')

extracting attribute value out of the "aria-label" tag

I am trying to extract the rating of the products from Sephora's website. However, when I try to extract the rating, it does not work the way I want it to. I guess my problem is that I don't know how to target that tag, because it has a different structure compared to the other data I need to extract. Please help!
Here is the website:https://www.sephora.com/product/revitalizing-supreme-global-anti-aging-creme-P384342?icid2=products%20grid:p384342
The picture here is the value that I wanted to extract.
I also attached my code here:
final_products = []  # empty list to append the dictionaries to before passing into a DataFrame
for i in range(0, len(link_list)):
    #for i in urls.index:
    current_url = link_list[i]
    resource = requests.get(current_url)
    current_data = resource.text
    soup = BeautifulSoup(current_data, 'html.parser')
    # gathering the data from the pages
    try:
        product = {}
        product['Name'] = soup.find('span', {'class': 'css-0'}).text
        product['Price'] = soup.find('div', {'class': 'css-slwsq8'}).text
        product['Num_of_reviews'] = soup.find_all('span', {'class': 'css-2rg6q7'})[0].text
        product['Num_of_Likes'] = soup.find('span', {'data-at': 'product_love_count'}).text
        product['Rating'] = soup.find('a', {'aria-label'})
        # append to the list to later make into a dataframe
        final_products.append(product)
    except:
        pass
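One way to target such a tag is to match on the presence of the aria-label attribute and then read the attribute like a dict key. This is only a hedged sketch: the markup, class name, and label text below are invented, and Sephora's real page will differ.

```python
from bs4 import BeautifulSoup

# Invented markup; the class name and "4.5 stars" label are assumptions.
html = '<span class="css-1adek3i" aria-label="4.5 stars">stars</span>'
soup = BeautifulSoup(html, "html.parser")

# Passing True matches any tag that *has* the attribute, whatever its value.
tag = soup.find("span", {"aria-label": True})
rating = tag["aria-label"].split()[0]  # take the number out of "4.5 stars"
print(rating)  # 4.5
```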

Short & Easy - soup.find_all Not Returning Multiple Tag Elements

I need to scrape all 'a' tags with the "result-title" class, and all 'span' tags with either the class 'results-price' or 'results-hood'. Then I want to write the output to a .csv file across multiple columns. The current code does not print anything to the csv file. This may be bad syntax, but I really can't see what I am missing. Thanks.
f = csv.writer(open(r"C:\Users\Sean\Desktop\Portfolio\Python - Web Scraper\RE Competitor Analysis.csv", "wb"))

def scrape_links(start_url):
    for i in range(0, 2500, 120):
        source = urllib.request.urlopen(start_url.format(i)).read()
        soup = BeautifulSoup(source, 'lxml')
        for a in soup.find_all("a", "span", {"class": ["result-title hdrlnk", "result-price", "result-hood"]}):
            f.writerow([a['href']], span['results-title hdrlnk'].getText(), span['results-price'].getText(), span['results-hood'].getText())
        if i < 2500:
            sleep(randint(30, 120))
        print(i)

scrape_links('my_url')
If you want to find multiple tags with one call to find_all, you should pass them in a list. For example:
soup.find_all(["a", "span"])
Without access to the page you are scraping, it's hard to give you a complete solution, but I recommend extracting one variable at a time and printing it to help you debug. For example:
a = soup.find('a', class_='result-title')
a_link = a['href']
a_text = a.text
spans = soup.find_all('span', class_=['results-price', 'result-hood'])
row = [a_link, a_text] + [s.text for s in spans]
print(row)  # verify we are getting the results we expect
f.writerow(row)
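The same approach works offline against a toy row. The markup below is invented; it borrows the question's class names (a real Craigslist listing uses "result-price" and "result-hood") so the class_ list form of find_all can be shown end to end.

```python
from bs4 import BeautifulSoup

# Invented Craigslist-style row; classes mirror the question's markup.
html = """
<li>
  <a class="result-title hdrlnk" href="https://example.org/post/1">2br near park</a>
  <span class="result-price">$1,500</span>
  <span class="result-hood">(Brooklyn)</span>
</li>
"""
soup = BeautifulSoup(html, "html.parser")

# class_ matches if the tag has that class among its classes,
# and a list of classes matches tags carrying any one of them.
a = soup.find("a", class_="result-title")
spans = soup.find_all("span", class_=["result-price", "result-hood"])
row = [a["href"], a.text] + [s.text for s in spans]
print(row)  # ['https://example.org/post/1', '2br near park', '$1,500', '(Brooklyn)']
```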
