Beautiful Soup - finding all classes which contain a known string - Python

I was trying to extract the string '£150,000' from this HTML by identifying the string 'Purchase Price' within the class, since the same class is used more than once:
<div class="row mb-sm-1 property-header-row">
<div class="prop-capital-fields property-header-col col-6"><h3>£150,000</h3>
<p class="label-paragraph">
Purchase Price
</p>
</div>
<div class="prop-capital-fields property-header-col col-6"><h3>£180,000</h3>
<p class="label-paragraph">
Market Value
</p>
</div>
<div class="prop-capital-fields property-header-col col-6"><h3>£1,185</h3>
<p class="label-paragraph">
Potential Cashflow PCM
</p>
</div>
So I wrote the following code:
property_ = soup.find(class_="properties-content-body col-xs-12 col-sm-12 col-md-7")
for a in property_.find_all('div', attrs={'class': 'prop-capital-fields property-header-col col-6'}, text="Purchase Price"):
    purchase_price_list.append(a)
print(purchase_price_list)
but all I get is a blank list.
I've tried many other things, but I'm pretty sure I just don't know the correct way to do it.
Any help is appreciated.

I've found the answer:
for a in property_.find_all('div', attrs={'class': 'prop-capital-fields property-header-col col-6'}):
    b = a.find('p').text.replace("\n", "").strip()
    c = a.find('h3').text.strip()
    if b == 'Purchase Price':
        purchase_price_list.append(c)
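The original find_all() call most likely returned an empty list because the text= argument is matched against the tag's own string, and these divs contain nested tags rather than a single string, so nothing ever matches. An alternative (a minimal sketch, reusing soup and purchase_price_list from the question) is to find the label <p> first and walk back up to the matching <h3>:
label = soup.find("p", class_="label-paragraph",
                  text=lambda s: s and s.strip() == "Purchase Price")
if label is not None:
    # the <h3> with the price sits in the same parent div as the label <p>
    purchase_price_list.append(label.find_parent("div").find("h3").text.strip())
print(purchase_price_list)  # ['£150,000']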


Scrape values inside span class webpage with beautifulsoup python

Hello everyone. I have a webpage I'm trying to scrape, and the page has tons of span classes, most of which are useless information. I posted a section of the span data that I need, but I can't simply find_all the span tags because there are hundreds of others I don't need.
<div class="col-md-4">
<p>
<span class="text-muted">File Number</span><br>
A-21-897274
</p>
</div>
<div class="col-md-4">
<p>
<span class="text-muted">Location</span><br>
Ohio
</p>
</div>
<div class="col-md-4">
<p>
<span class="text-muted">Date</span><br>
07/01/2022
</p>
</div>
</div>
I need the span titles:
File Number, Location, Date
and then the values that match:
"A-21-897274", "Ohio", "07/01/2022"
I need this printed out so I can make a pandas DataFrame, but I can't seem to get the specific spans printed with their values.
What I've tried:
import bs4
from bs4 import BeautifulSoup

soup = BeautifulSoup(..., 'lxml')
for title_tag in soup.find_all('span', class_='text-muted'):
    # get the last sibling
    *_, value_tag = title_tag.next_siblings
    title = title_tag.text.strip()
    if isinstance(value_tag, bs4.element.Tag):
        value = value_tag.text.strip()
    else:  # it's a navigable string element
        value = value_tag.strip()
    print(title, value)
output:
File Number "A-21-897274"
Location "Ohio"
Operations_Manager "Joanna"
Date "07/01/2022"
Type "Transfer"
Status "Open"
ETC "ETC"
ETC "ETC"
This prints out everything I need, BUT it also prints out hundreds of other values I don't want/need.
You can pass a function to soup.find_all() to select only the wanted elements and then use .find_next_sibling() to select the value. For example:
from bs4 import BeautifulSoup
html_doc = """
<div class="col-md-4">
<p>
<span class="text-muted">File Number</span><br>
A-21-897274
</p>
</div>
<div class="col-md-4">
<p>
<span class="text-muted">Location</span><br>
Ohio
</p>
</div>
<div class="col-md-4">
<p>
<span class="text-muted">Date</span><br>
07/01/2022
</p>
</div>
</div>
"""
soup = BeautifulSoup(html_doc, "html.parser")

def correct_tag(tag):
    return tag.name == "span" and tag.get_text(strip=True) in {
        "File Number",
        "Location",
        "Date",
    }

for t in soup.find_all(correct_tag):
    print(f"{t.text}: {t.find_next_sibling(text=True).strip()}")
Prints:
File Number: A-21-897274
Location: Ohio
Date: 07/01/2022
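Since the end goal in the question is a pandas DataFrame, one possible follow-up (a sketch, reusing correct_tag and soup from above) is to collect the pairs into a dict and build a one-row frame:
import pandas as pd

data = {}
for t in soup.find_all(correct_tag):
    # column name from the <span>, value from the text node right after it
    data[t.get_text(strip=True)] = t.find_next_sibling(text=True).strip()

df = pd.DataFrame([data])
print(df)
# roughly:
#    File Number Location        Date
# 0  A-21-897274     Ohio  07/01/2022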

Extract html block based on tag, class and string content

I am really new to bs4 and I would like to get specific content from an HTML page.
When I try the following code, I get many results that have the same tag and class, so I need to filter further. There is string content in the block I am interested in. Is there a way to additionally filter by content? Any contribution is appreciated.
import requests
from bs4 import BeautifulSoup

html_doc = requests.get('https://www.blockchain.com/bch/address/qqe2tae7hfga2zj5jj8mtjsgznjpy5rvyglew4cy8m')
soup = BeautifulSoup(html_doc.content, 'html.parser')
print(soup.find_all('span', class_='sc-1ryi78w-0 gCzMgE sc-16b9dsl-1 kUAhZx u3ufsr-0 fGQJzg'))
Edit:
I should add that the content looks like the following. There is a string for which I want to extract the value, but the value is in the next tag. Here I want to extract 3.79019103, which is under the string 'Final Balance'.
Total Sent
</span>
</div>
</div>
<div class="sc-8sty72-0 kcFwUU">
<span class="sc-1ryi78w-0 gCzMgE sc-16b9dsl-1 kUAhZx u3ufsr-0 fGQJzg" opacity="1">
13794.11698089 BCH
</span>
</div>
</div>
<div class="sc-1enh6xt-0 jqiNji">
<div class="sc-8sty72-0 kcFwUU">
<div>
<span class="sc-1ryi78w-0 gCzMgE sc-16b9dsl-1 kUAhZx sc-1n72lkw-0 lhmHll" opacity="1">
Final Balance
</span>
</div>
</div>
<div class="sc-8sty72-0 kcFwUU">
<span class="sc-1ryi78w-0 gCzMgE sc-16b9dsl-1 kUAhZx u3ufsr-0 fGQJzg" opacity="1">
3.79019103 BCH
</span>
</div>
</div>
</div>
</div>
</div>
For finding the Final Balance tag:
final_balance_tag = next(x for x in soup.find_all('span') if 'Final Balance' in x.text)
With this tag you may just jump to the next span tag.
final_balance_tag.findNext('span')
Which gives
<span class="sc-1ryi78w-0 gCzMgE sc-16b9dsl-1 kUAhZx u3ufsr-0 fGQJzg" opacity="1">
3.79019103 BCH
</span>
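To pull just the number out of that tag, a small follow-up sketch:
value_text = final_balance_tag.findNext('span').get_text(strip=True)  # '3.79019103 BCH'
final_balance = float(value_text.split()[0])                          # 3.79019103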
Search for the string Final Balance using the text=<your string> parameter.
Get the next text node using find_next(text=True), which returns the first match.
Use a list comprehension to keep only the characters for which isdigit() is true.
import requests
from bs4 import BeautifulSoup
URL = 'https://www.blockchain.com/bch/address/qqe2tae7hfga2zj5jj8mtjsgznjpy5rvyglew4cy8m'
soup = BeautifulSoup(requests.get(URL).content, 'html.parser')
price = soup.find(text=lambda t: "Final" in t).find_next(text=True)
print("".join([t for t in price if t.isdigit()]))
Output (currently):
000000000

Python BeautifulSoup - problem reading <span>

I am trying to extract "brand-logo", "product-name", "price" and "best-price " from the following HTML:
<div class="container">
<div class="catalog-wrapper">
<div class="slideout-filters"></div>
<section class="catalog-top-banner"></section>
<section class="search-results">
<section class="catalog">
<div class="row">
<div class="col-xs-12 col-md-4 col-lg-3">
<div class="col-xs-12 col-md-8 col-lg-9">
<div class="catalog-container">
<a class="catalog-product catalog-item ">
<div class="product-image "></div>
<div class="product-description">
<div>
<div class="brand-logo">
<span>PACO RABANNE</span>
</div>
<span class="product-name">
PACO RABANNE PERFUME MUJER 30 ML
</span>
<span class="price">Normal: S/ 219</span>
<span class="best-price ">Internet: S/ 209</span>
"brand-logo" and "product-name, done, but I can not read "price" & "best-price "
I tried it this way:
box_3 = soup.find('div','col-xs-12 col-md-8 col-lg-9')
for div in box_3.find_all('div','product-description'):
    d = {}
    d["Marca"] = div.find_all("div",{"class","brand-logo"})[0].getText()
    d["Producto"] = div.find_all("span",{"class","product-name"})[0].getText()
    d["Precio"] = div.find_all('span',class_='price')
    d["Oferta"] = div.find_all('span',class_='best-price ')
    l.append(d)
l
out:
{'Marca': 'PACO RABANNE',
'Oferta': [],
'Precio': [<span class="price">Normal: S/ 219</span>],
'Producto': 'PACO RABANNE PERFUME MUJER 30 ML'}
can anyone help me?
You can find the "product-description" div and then iterate over the desired div classes:
from bs4 import BeautifulSoup as soup
import re

_to_find = ['brand-logo', 'product-name', 'price', 'best-price']
# `content` is the HTML snippet shown in the question
s = soup(content, 'html.parser').find('div', {'class': 'product-description'})
final_results = [(lambda x: s.find('span', {'class': i}).text if not x else x.text)(s.find('div', {'class': i}))
                 for i in _to_find]
filtered = [re.sub('^[\n\s]+|[\n\s]+$', '', i) for i in final_results]
Output:
['PACO RABANNE', 'PACO RABANNE PERFUME MUJER 30 ML', 'Normal: S/ 219', 'Internet: S/ 209']
Unfortunately, without the actual website I'm unable to check the solution :(.
Maybe you should extract the data from the "not working" part the same way as the working part (this is a lucky guess - without the website, or at least the exact HTML that bs4 will parse, I'm really unable to test it).
d["Precio"] = div.find_all('span',{"class","price"})[0].getText()
d["Oferta"] = div.find_all('span',{"class","best-price"})[0].getText()
It might be a good idea to make a new method/function that gets the chosen attribute and handles potential errors.
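A minimal sketch of such a helper (the function name and default value are illustrative, not from the original post):
def get_text_or_default(parent, tag_name, class_name, default=""):
    # Return the stripped text of the first matching tag, or `default` if it is missing.
    found = parent.find(tag_name, class_=class_name)
    return found.get_text(strip=True) if found is not None else default

# hypothetical usage inside the original loop:
# d["Precio"] = get_text_or_default(div, "span", "price")
# d["Oferta"] = get_text_or_default(div, "span", "best-price")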

Accessing untagged text using beautifulsoup

I am using python and beautifulsoup4 to extract some address information.
More specifically, I require assistance when retrieving non-US based zip codes.
Consider the following html data of a US based company: (already a soup object)
<div class="compContent curvedBottom" id="companyDescription">
<div class="vcard clearfix">
<p id="adr">
<span class="street-address">999 State St Ste 100</span><br/>
<span class="locality">Salt Lake City,</span>
<span class="region">UT</span>
<span class="zip">84114-0002,</span>
<br/><span class="country-name">United States</span>
</p>
<p>
<span class="tel">
<strong class="type">Phone: </strong>+1-000-000-000
</span><br/>
</p>
<p class="companyURL"><a class="url ext" href="http://www.website.com" target="_blank">http://www.website.com</a></p>
</div>
</ul>
</div>
I can extract the zipcode (84114-0002) by using the following piece of python code:
class CompanyDescription:
    def __init__(self, page):
        self.data = page.find('div', attrs={'id': 'companyDescription'})

    def address(self):
        # TODO: Also retrieve the Zipcode for UK and German based addresses - tricky!
        address = {'street-address': '', 'locality': '', 'region': '', 'zip': '', 'country-name': ''}
        for key in address:
            try:
                adr = self.data.find('p', attrs={'id': 'adr'})
                if adr.find('span', attrs={'class': key}) is None:
                    address[key] = ''
                else:
                    address[key] = adr.find('span', attrs={'class': key}).text.split(',')[0]
                # Attempting to grab another zip code value
                if address['zip'] == '':
                    pass
            except:
                # We should return a dictionary with "" as key adr
                return address
        return address
You can see that I need some counsel with the line if address['zip'] == '':
These two soup object examples are giving me trouble. In the one below I would like to retrieve EC4N 4SA:
<div class="compContent curvedBottom" id="companyDescription">
<div class="vcard clearfix">
<p id="adr">
<span class="street-address">Albert Buildings</span><br/>
<span class="extended-address">00 Queen Victoria Street</span>
<span class="locality">London</span>
EC4N 4SA
<span class="region">London</span>
<br/><span class="country-name">England</span>
</p>
<p>
</p>
<p class="companyURL"><a class="url ext" href="http://www.website.com.com" target="_blank">http://www.website.com.com</a></p>
</div>
<p><strong>Line of Business</strong> <br/>Management services, nsk</p>
</div>
as well as below, where I am interested in getting 71364
<div class="compContent curvedBottom" id="companyDescription">
<div class="vcard clearfix">
<p id="adr">
<span class="street-address">Alfred-Kärcher-Str. 100</span><br/>
71364
<span class="locality">Winnenden</span>
<span class="region">Baden-Württemberg</span>
<br/><span class="country-name">Germany</span>
</p>
<p>
<span class="tel">
<strong class="type">Phone: </strong>+00-1234567
</span><br/>
<span class="tel"><strong class="type">Fax: </strong>+00-1234567</span>
</p>
</div>
</div>
Now, I am running this program over approximately 68,000 accounts, of which 28,000 are non-US based. I have only pulled out two examples for which I know the current method is not bulletproof. There may be other address formats where this script does not work as expected, but I believe figuring out the UK and German based accounts will help tremendously.
Thanks in advance
Because there is only text, without a tag, directly inside the <p>, you can use
find_all(text=True, recursive=False)
to get only the text (without tags) but not the text from nested tags (<span>). This gives a list with your text plus some \n characters and spaces, so you can use join() to create one string and strip() to remove the surrounding \n and spaces.
data = '''<p id="adr">
<span class="street-address">Albert Buildings</span><br/>
<span class="extended-address">00 Queen Victoria Street</span>
<span class="locality">London</span>
EC4N 4SA
<span class="region">London</span>
<br/><span class="country-name">England</span>
</p>'''
from bs4 import BeautifulSoup as BS
soup = BS(data, 'html.parser').find('p')
print(''.join(soup.find_all(text=True, recursive=False)).strip())
result: EC4N 4SA
The same with the second HTML:
data = '''<p id="adr">
<span class="street-address">Alfred-Kärcher-Str. 100</span><br/>
71364
<span class="locality">Winnenden</span>
<span class="region">Baden-Württemberg</span>
<br/><span class="country-name">Germany</span>
</p>'''
from bs4 import BeautifulSoup as BS
soup = BS(data, 'html.parser').find('p')
print(''.join(soup.find_all(text=True, recursive=False)).strip())
result: 71364
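To fold this into the address() method from the question, a rough (untested) sketch might wrap the trick in a helper and call it as a fallback:
def loose_text(tag):
    # text sitting directly under `tag`, skipping anything nested in child tags such as <span>
    return ''.join(tag.find_all(text=True, recursive=False)).strip()

# hypothetical use after the span lookups in address():
# if address['zip'] == '':
#     address['zip'] = loose_text(adr)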

unknown error in beautifulsoup web scraping using python

<div id="browse_in_widget">
<span id="browse_in_breadcrumb" style="width: 583px;">
<div class="seo_itemscope" itemtype="http://data-vocabulary.org/Breadcrumb" itemscope="">
<a itemprop="url" href="/search/"> Arabian Area</a>
<span class="seo_itemprop-title" itemprop="title">Arabian Area</span>
>
</div>
<div class="seo_itemscope" itemtype="http://data-vocabulary.org/Breadcrumb" itemscope="">
<a itemprop="url" href="/property-for-rent/home/"> Phase 2 </a>
<span class="seo_itemprop-title" itemprop="title">Phase 2 </span>
>
</div>
<div class="seo_itemscope" itemtype="http://data-vocabulary.org/Breadcrumb" itemscope="">
<a itemprop="url" href="/property-for-rent/residential/"> Residential Units for Rent </a>
<span class="seo_itemprop-title" itemprop="title">Residential Units for Rent</span>
>
</div>
<div class="seo_itemscope" itemtype="http://data-vocabulary.org/Breadcrumb" itemscope="">
<a itemprop="url" href="/property-for-rent/residential/apartmentflat/"> Apartment/Flat for Rent </a>
<span class="seo_itemprop-title" itemprop="title">Apartment/Flat for Rent</span>
>
</div>
<strong class="seo_itemprop-title" itemprop="title">Details</strong>
</span>
</div>
I want to get
['Arabian Area', 'Phase 2', 'Residential Units for Rent','Apartment/Flat for Rent']
I am trying to use the following code with Beautiful Soup 4 in Python:
try:
    Type = [str(Area.text) for Area in soup.find_all("span", {"class" : "seo_itemscope"})]
    Area = ' , '.join(Area)
    print Area
except StandardError as e:
    Area = "Error was {0}".format(e)
    print Area
All I want is to get the desired output in a list, but there seems to be some problem. I am not getting any printed output. What can be the problem?
Thank you!
The first problem is that you are looking for span elements with the seo_itemscope class, which don't exist. Use seo_itemprop-title if you are looking for the titles:
Type = [item.get_text() for item in soup.find_all("span", {"class": "seo_itemprop-title"})]
The other problem is here:
Area=' , '.join(Area)
You meant to join items of the Type list instead:
Area = ' , '.join(Type)
Also, it is not a good idea to catch StandardError - it is too broad an exception and is actually close to having a bare except clause. You should catch more specific exceptions; see:
Should I always specify an exception type in `except` statements?
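Putting both corrections together, a minimal sketch (assuming soup already holds the parsed page):
Type = [item.get_text(strip=True)
        for item in soup.find_all("span", {"class": "seo_itemprop-title"})]
print(Type)  # ['Arabian Area', 'Phase 2', 'Residential Units for Rent', 'Apartment/Flat for Rent']
Area = ' , '.join(Type)
print(Area)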
