Can't get the XML element value using lxml XPath - Python

I am trying to scrape a spotify playlist webpage to pull out artist and song name data. Here is my python code:
#! /usr/bin/python
from lxml import html
import requests
playlistPage = requests.get('https://open.spotify.com/playlist/0csaTlUWTfiyXscv4qKDGE')
print("\n\nprinting variable playListPage: " + str(playlistPage))
tree = html.fromstring(playlistPage.content)
print("printing variable tree: " + str(tree))
artistList = tree.xpath("//span/a[@class='tracklist-row__artist-name-link']/text()")
print("printing variable artistList: " + str(artistList) + "\n\n")
Right now the final print statement prints an empty list.
Here is some example HTML from the page I'm trying to scrape; ideally my code would pull out the string "M83". I'm not sure how much HTML is relevant, so I'm pasting what I believe is necessary:
<div class="react-contextmenu-wrapper">
<div draggable="true">
<li class="tracklist-row" role="button" tabindex="0" data-testid="tracklist-row">
<div class="tracklist-col position-outer">
<div class="tracklist-play-pause tracklist-top-align">
<svg class="icon-play" viewBox="0 0 85 100">
<path fill="currentColor" d="M81 44.6c5 3 5 7.8 0 10.8L9 98.7c-5 3-9 .7-9-5V6.3c0-5.7 4-8 9-5l72 43.3z">
<title>
PLAY</title>
</path>
</svg>
</div>
<div class="position tracklist-top-align">
<span class="spoticon-track-16">
</span>
</div>
</div>
<div class="tracklist-col name">
<div class="track-name-wrapper tracklist-top-align">
<div class="tracklist-name ellipsis-one-line" dir="auto">
Intro</div>
<div class="second-line">
<span class="TrackListRow__artists ellipsis-one-line" dir="auto">
<span class="react-contextmenu-wrapper">
<span draggable="true">
<a tabindex="-1" class="tracklist-row__artist-name-link" href="/artist/63MQldklfxkjYDoUE4Tppz">
M83</a>
</span>
</span>
</span>
<span class="second-line-separator" aria-label="in album">
•</span>
<span class="TrackListRow__album ellipsis-one-line" dir="auto">
<span class="react-contextmenu-wrapper">
<span draggable="true">
<a tabindex="-1" class="tracklist-row__album-name-link" href="/album/6R0ynY7RF20ofs9GJR5TXR">
Hurry Up, We're Dreaming</a>
</span>
</span>
</span>
</div>
</div>
</div>
<div class="tracklist-col more">
<div class="tracklist-top-align">
<div class="react-contextmenu-wrapper">
<button class="_2221af4e93029bedeab751d04fab4b8b-scss c74a35c3aba27d72ee478f390f5d8c16-scss" type="button">
<div class="spoticon-ellipsis-16">
</div>
</button>
</div>
</div>
</div>
<div class="tracklist-col tracklist-col-duration">
<div class="tracklist-duration tracklist-top-align">
<span>
5:22</span>
</div>
</div>
</li>
</div>
</div>

A solution using Beautiful Soup:
import requests
from bs4 import BeautifulSoup as bs
page = requests.get('https://open.spotify.com/playlist/0csaTlUWTfiyXscv4qKDGE')
soup = bs(page.content, 'lxml')
tracklist_container = soup.find("div", {"class": "tracklist-container"})
track_artists_container = tracklist_container.findAll("span", {"class": "artists-albums"})
artists = []
for ta in track_artists_container:
    artists.append(ta.find("span").text)
print(artists[0])
prints
M83
This solution gets all the artists on the page so you could print out the list artists and get:
['M83',
'Charles Bradley',
'Bon Iver',
...
'Death Cab for Cutie',
'Destroyer']
And you can extend this to track names and albums quite easily by changing the classname in the findAll(...) function call.
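For example, reusing the soup object from the answer above, the same pass can collect track and album names. The class names here are taken from the HTML snippet in the question and are only an assumption about the live markup (note the answer above matched on "artists-albums", not the classes shown in the snippet), so treat this as a sketch:
track_names = [div.get_text(strip=True)
               for div in soup.find_all("div", {"class": "tracklist-name"})]
album_names = [a.get_text(strip=True)
               for a in soup.find_all("a", {"class": "tracklist-row__album-name-link"})]
print(track_names)
print(album_names)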

Nice answer provided by @eNc. An lxml solution:
from lxml import html
import requests
playlistPage = requests.get('https://open.spotify.com/playlist/0csaTlUWTfiyXscv4qKDGE')
tree = html.fromstring(playlistPage.content)
artistList = tree.xpath("//span[@class='artists-albums']/a[1]/span/text()")
print(artistList)
Output:
['M83', 'Charles Bradley', 'Bon Iver', 'The Middle East', 'The Antlers', 'Handsome Furs', 'Frank Turner', 'Frank Turner', 'Amy Winehouse', 'Black Lips', 'M83', 'Florence + The Machine', 'Childish Gambino', 'DJ Khaled', 'Kendrick Lamar', 'Future Islands', 'Future Islands', 'JAY-Z', 'Blood Orange', 'Cut Copy', 'Rihanna', 'Tedeschi Trucks Band', 'Bill Callahan', 'St. Vincent', 'Adele', 'Beirut', 'Childish Gambino', 'David Guetta', 'Death Cab for Cutie', 'Destroyer']
Since you can't get all the results in one shot, maybe you should switch to Selenium.
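A rough Selenium sketch (not tested here; it assumes a local Chrome/chromedriver setup and that the rendered page still uses the artists-albums class) would let the JavaScript finish before parsing:
from lxml import html
from selenium import webdriver
import time

driver = webdriver.Chrome()  # assumes Chrome and a matching driver are installed
driver.get('https://open.spotify.com/playlist/0csaTlUWTfiyXscv4qKDGE')
time.sleep(5)  # crude wait for client-side rendering; WebDriverWait would be cleaner
tree = html.fromstring(driver.page_source)
artistList = tree.xpath("//span[@class='artists-albums']/a[1]/span/text()")
driver.quit()
print(artistList)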

Related

Adding multiple loop outputs to single dictionary

I'm learning how to use Python and trying to use Beautiful Soup to do some web scraping. I want to pull the product name and product number from the saved page I'm referencing in my Python code; I have provided a snippet of the section this script is looking at. They're located under a div with the class "name" and a span with the id "product_id".
Essentially, my Python script does pull in all the product names, but once it gets to the product_id loop, it overwrites the initial values from my first loop. Looking to see if anyone can point me in the right direction.
OUTPUT
After first loop
{'name': 'ADA Hi-Lo Power Plinth Table'}
{'name': 'Adjustable Headrest Couch - Chrome-Plated Steel Legs'}
{'name': 'Adjustable Headrest Couch - Chrome-Plated Steel Legs (X-Large)'}
After second loop
{'name': 'Weekender Folding Cot', 'product_ID': '55984'}
{'name': 'Weekender Folding Cot', 'product_ID': '31350'}
{'name': 'Weekender Folding Cot', 'product_ID': '31351'}
<div class="revealOnScroll product-item" data-addcart-callback="addcart_callback" data-ajaxcart="1" data-animation="fadeInUp" data-catalogid="1496" data-categoryid="5127" data-timeout="500">
<div class="img">
<a href="ADA-Hi-Lo-Power-Plinth-Table_p_1496.html">
<img alt="ADA Hi-Lo Power Plinth Table" class="img-responsive" src="assets/images/thumbnails/55984_thumbnail.jpg"/>
</a>
<button class="quickview" data-toggle="modal">
Quick View
</button>
</div>
<div class="name">
<a href="ADA-Hi-Lo-Power-Plinth-Table_p_1496.html">
ADA Hi-Lo Power Plinth Table
</a>
</div>
<div class="product-id">
Item Number:
<strong>
<span id="product_id">
55984
</span>
</strong>
</div>
<div class="status">
</div>
<div class="reviews">
</div>
<div class="price">
<span class="regular-price">
$2,849.00
</span>
</div>
<div class="action">
<a class="add-to-cart btn btn-default" href="add_cart.asp?quick=1&item_id=1496&cat_id=5127">
<span class="buyitlink-text">
Select Options
</span>
<span class="ajaxcart-loader icon-spin2 animate-spin">
</span>
<span class="ajaxcart-added icon-ok">
</span>
</a>
</div>
</div>
<div class="revealOnScroll product-item" data-addcart-callback="addcart_callback" data-ajaxcart="1" data-animation="fadeInUp" data-catalogid="2878" data-categoryid="5127" data-timeout="500">
<div class="img">
<a href="Adjustable-Headrest-Couch--Chrome-Plated-Steel-Legs_p_2878.html">
<img alt="Adjustable Headrest Couch - Chrome-Plated Steel Legs" class="img-responsive" src="assets/images/thumbnails/31350_thumbnail.jpg"/>
</a>
<button class="quickview" data-toggle="modal">
Quick View
</button>
</div>
<div class="name">
<a href="Adjustable-Headrest-Couch--Chrome-Plated-Steel-Legs_p_2878.html">
Adjustable Headrest Couch - Chrome-Plated Steel Legs
</a>
</div>
<div class="product-id">
Item Number:
<strong>
<span id="product_id">
31350
</span>
</strong>
</div>
<div class="status">
</div>
<div class="reviews">
</div>
<div class="price">
<span class="regular-price">
$729.00
</span>
</div>
<div class="action">
<a class="add-to-cart btn btn-default" href="add_cart.asp?quick=1&item_id=2878&cat_id=5127">
<span class="buyitlink-text">
Select Options
</span>
<span class="ajaxcart-loader icon-spin2 animate-spin">
</span>
<span class="ajaxcart-added icon-ok">
</span>
</a>
</div>
</div>
<div class="revealOnScroll product-item" data-addcart-callback="addcart_callback" data-ajaxcart="1" data-animation="fadeInUp" data-catalogid="2879" data-categoryid="5127" data-timeout="500">
<div class="img">
<a href="Adjustable-Headrest-Couch--Chrome-Plated-Steel-Legs-X-Large_p_2879.html">
<img alt="Adjustable Headrest Couch - Chrome-Plated Steel Legs (X-Large)" class="img-responsive" src="assets/images/thumbnails/31350_thumbnail.jpg"/>
</a>
<button class="quickview" data-toggle="modal">
Quick View
</button>
</div>
<div class="name">
<a href="Adjustable-Headrest-Couch--Chrome-Plated-Steel-Legs-X-Large_p_2879.html">
Adjustable Headrest Couch - Chrome-Plated Steel Legs (X-Large)
</a>
</div>
<div class="product-id">
Item Number:
<strong>
<span id="product_id">
31351
</span>
</strong>
</div>
<div class="status">
</div>
<div class="reviews">
</div>
<div class="price">
<span class="regular-price">
$769.00
</span>
</div>
<div class="action">
<a class="add-to-cart btn btn-default" href="add_cart.asp?quick=1&item_id=2879&cat_id=5127">
<span class="buyitlink-text">
Select Options
</span>
<span class="ajaxcart-loader icon-spin2 animate-spin">
</span>
<span class="ajaxcart-added icon-ok">
</span>
</a>
</div>
</div>
BEGINNING OF PYTHON SCRIPT
import requests
from bs4 import BeautifulSoup

with open('recoveryCouches','r') as html_file:
    content = html_file.read()

soup = BeautifulSoup(content,'lxml')
allProductDivs = soup.find('div', class_='product-items product-items-4')

#get names of products on page
nameDiv = soup.find_all('div',class_='name')
prodID = soup.find_all('span', id='product_id')

records=[]
d=dict()
for name in nameDiv:
    d['name'] = name.find('a').text
    records.append(d)
    print(d)

for productId in prodID:
    d['product_ID'] = productId.text
    records.append(d)
    print(d)
Try this:
nameDiv = soup.find_all('div',class_='name')
prodID = soup.find_all('span', id='product_id')
records=[]
for i in range(len(nameDiv)):
    records.append({
        "name": nameDiv[i].find('a').text.strip(),
        "product_ID": prodID[i].text.strip()
    })
To write the data to a csv file:
import csv

with open("file.csv", 'w') as csv_file:
    writer = csv.DictWriter(csv_file, fieldnames=records[0].keys())
    writer.writeheader()
    for record in records:
        writer.writerow(record)
If I understand the question correctly, you're trying to get all the names and product IDs and store them. The problem you're running into is that, in the dictionary, your values are getting overwritten.
One solution to that problem would be to initialize your python dictionary values as lists, like so:
d = {
    'name': [],
    'product_ID': []
}
Then in each of the loops, you can append the new value to that array. What you currently have will overwrite the previous value.
for name in nameDiv:
    d['name'].append(name.find('a').text)

for productId in prodID:
    d['product_ID'].append(productId.text)
This will result in a list of all names and product_IDs stored in that dictionary.
If you want to put these lists together in a format like this:
[(name0, productId0), (name1, productId1), ...]
Then you can make use of zip, which will combine your lists as long as they are of equal length. For example:
zipped_results = list(zip(d['name'], d['product_ID']))
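If you still want the list-of-dicts shape the question was building, the zipped pairs convert directly:
records = [{'name': name, 'product_ID': product_id}
           for name, product_id in zipped_results]
print(records[0])  # e.g. {'name': 'ADA Hi-Lo Power Plinth Table', 'product_ID': '55984'}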

Scrape everything between two un-nested tags

Is it possible to scrape everything between two un-nested tags?
For instance:
<h3>Title 1<h3>
<div class="div">
<span class="span">span1</span>
<label class="label">label1</label>
</div>
<div class="div">
<span class="span">span2</span>
</div>
<h3>Title 2<h3>
<div class="div">
<span class="span">span3</span>
<label class="label">label2</label>
</div>
<div id="div">
<span id="span">span4</span>
</div>
So I would like to scrape just what is located under Title 1 until Title 2. Is this possible using bs4 ?
Right now I have something like this (the problem is it scrapes everything, since the classes are all the same):
for i in soup.findAll("div",{"class":"div"}):
    print(i.span.text)
Now I get:
span1
span2
span3
span4
I'd like to get:
span1
span2
I don't know if this is the best solution to this problem, but you can split your text and scrape only the part that you need.
text = """
<h3>Title 1</h3>
<div class="div">
<span class="span">span1</span>
<label class="label">label1</label>
</div>
<div class="div">
<span class="span">span2</span>
</div>
<h3>Title 2</h3>
<div class="div">
<span class="span">span3</span>
<label class="label">label2</label>
</div>
<div id="div">
<span id="span">span4</span>
</div>
"""
soup = BeautifulSoup(text, 'lxml')
sub_text = text.split(soup.find('h3', text="Title 2").string)[0]
This will give:
'"\n<h3>Title 1</h3>\n<div class="div">\n <span class="span">span1</span>\n <label class="label">label1</label>\n</div>\n<div class="div">\n <span class="span">span2</span>\n</div>\n<h3>'
After converting that string into a bs4 object, you can scrape all you need:
scrape_me = BeautifulSoup(sub_text, 'lxml')
for i in scrape_me.findAll("div", class_="div"):
    print(i.span.text)
# -> span1 span2
One approach is:
- find the second class="span", then navigate backwards with find_all_previous() to collect the <div> tags
- the tags come back in reverse order, so use the reversed() function
- find the <span> tag inside each <div>
from bs4 import BeautifulSoup
html = """
<h3>Title 1<h3>
<div class="div">
<span class="span">span1</span>
<label class="label">label1</label>
</div>
<div class="div">
<span class="span">span2</span>
</div>
<h3>Title 2<h3>
<div class="div">
<span class="span">span3</span>
<label class="label">label2</label>
</div>
<div id="div">
<span id="span">span4</span>
</div>
"""
soup = BeautifulSoup(html, "lxml")
for tag in reversed(
    soup.select_one("div:nth-of-type(2) span.span").find_all_previous("div")
):
    print(tag.find("span").text)
Output:
span1
span2

Find and retrieve content from html text using BeautifulSoup

I have the following HTML code (or at least I think it's HTML) that I am working on with BeautifulSoup in Python.
I have parsed the html using Beautiful soup correctly. What I would like to do next is to retrieve the content associated with the 'div' containing a certain data-label (for example, in the bottom part of the code, data-label="Relation"). In particular I would like to obtain a dictionary that has as key the text of the data-label, i.e. in my example "Relation", and as value the content of the same 'div', i.e. in my example the href "http://documenti.camera.it/apps/commonServices/getDocumento.ashx?sezione=bollettini=comunicato=17=2016=06=14=03=data.20160614.com03.bollettino.sede00020.tit00010.int00010=data.20160614.com03.bollettino.sede00020.tit00010.int00010#data.20160614.com03.bollettino.sede00020.tit00010.int00010"
I have tried several approaches but data-label, as far as I know, does not appear to be a valid attribute, so I am not sure how to handle this.
(Note that this is just an example, but I will have to do the same for thousands, if not millions, of these webpages, with this similar structure).
Any help is appreciated. Thank you!
<div id="directs">
<label class="c1"><a data-comment="A human-readable name for the subject." data-label="label" href="http://www.w3.org/2000/01/rdf-schema#label">
rdfs:<span>label</span>
</a></label>
<div class="c2 value ">
<div class="toMultiLine ">
<div class="fixed">
<span class="dType">xsd:string</span>
intervento di Fabrizio CICCHITTO
</div>
</div>
</div>
<label class="c1"><a data-comment="A name given to the resource." data-label="Title" href="http://purl.org/dc/elements/1.1/title">
dc:<span>title</span>
</a></label>
<div class="c2 value ">
<div class="toMultiLine ">
<div class="fixed">
intervento di Fabrizio CICCHITTO
</div>
</div>
</div>
<label class="c1"><a data-comment="" data-label="" href="http://lod.xdams.org/ontologies/ods/modified">
ods:<span>modified</span>
</a></label>
<div class="c2 value ">
<div class="toMultiLine ">
<div class="fixed">
<span class="dType">xsd:dateTime</span>
2016-07-05T12:26:02Z
</div>
</div>
</div>
<label class="c1"><a data-comment="The subject is an instance of a class." data-label="type" href="http://www.w3.org/1999/02/22-rdf-syntax-ns#type">
rdf:<span>type</span>
</a></label>
<div class="c2 value">
<div class="toOneLine">
<a class=" isLocal" href="http://dati.camera.it/ocd/intervento" title="<http://dati.camera.it/ocd/intervento>">
ocd:intervento
</a>
</div>
</div>
<label class="c1"><a data-comment="propriet generica utilizzata per puntare alla risorsa deputato in vari punti dell'ontologia" data-label="rierimento a deputato" href="http://dati.camera.it/ocd/rif_deputato">
ocd:<span>rif_deputato</span>
</a></label>
<div class="c2 value">
<div class="toOneLine">
<a class=" isLocal" href="http://dati.camera.it/ocd/deputato.rdf/d15080_17" title="<http://dati.camera.it/ocd/deputato.rdf/d15080_17>">
http://dati.camera.it/ocd/deputato.rdf/d15080_17
</a>
</div>
</div>
<label class="c1"><a data-comment="A related resource." data-label="Relation" href="http://purl.org/dc/elements/1.1/relation">
dc:<span>relation</span>
</a></label>
<div class="c2 value">
<div class="toOneLine">
<a class=" " href="http://documenti.camera.it/apps/commonServices/getDocumento.ashx?sezione=bollettini=comunicato=17=2016=06=14=03=data.20160614.com03.bollettino.sede00020.tit00010.int00010=data.20160614.com03.bollettino.sede00020.tit00010.int00010#data.20160614.com03.bollettino.sede00020.tit00010.int00010"
target="_blank" title="<http://documenti.camera.it/apps/commonServices/getDocumento.ashx?sezione=bollettini=comunicato=17=2016=06=14=03=data.20160614.com03.bollettino.sede00020.tit00010.int00010=data.20160614.com03.bollettino.sede00020.tit00010.int00010#data.20160614.com03.bollettino.sede00020.tit00010.int00010>">
http://documenti.camera.it/apps/commonServices/getDocumento.ashx?sezione=bollettini=comunicato=17=2016=06=14=03=data.20160614.com03.bollettino.sede00020.tit00010.int00010=data.20160614.com03.bollettino.sede00020.tit00010.int00010#data.20160614.com03.bollettino.sede00020.tit00010.int00010
</a>
</div>
</div>
</div>
You can find the data-labels in one pass and the div content in another. Then, the results can be zipped together to create the dictionary:
from bs4 import BeautifulSoup as soup
import re
d = soup(content, 'html.parser').find('div', {'id':'directs'})
_labels = [i.a['data-label'] for i in d.find_all('label')]
_content = [i.text for i in d.find_all('div', {'class':re.compile(r'c2 value\s*')})]
result = dict(zip(_labels, _content))
Output:
{'label': '\n\n\nxsd:string \n intervento di Fabrizio CICCHITTO\n \n\n',
'Title': '\n\n\n intervento di Fabrizio CICCHITTO\n \n\n',
'': '\n\n\nxsd:dateTime \n 2016-07-05T12:26:02Z\n \n\n',
'type': '\n\n\n ocd:intervento\n \n\n',
'rierimento a deputato': '\n\n\n http://dati.camera.it/ocd/deputato.rdf/d15080_17\n \n\n',
'Relation': '\n\n\n http://documenti.camera.it/apps/commonServices/getDocumento.ashx?sezione=bollettini=comunicato=17=2016=06=14=03=data.20160614.com03.bollettino.sede00020.tit00010.int00010=data.20160614.com03.bollettino.sede00020.tit00010.int00010#data.20160614.com03.bollettino.sede00020.tit00010.int00010\n \n\n'}
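A possible refinement (untested beyond the snippet above, and assuming content is the same HTML string used in the code above) is to pair each label with the "c2 value" div that follows it, strip the whitespace, and keep the href when one is present, which covers the "Relation" case from the question:
from bs4 import BeautifulSoup

d = BeautifulSoup(content, 'html.parser').find('div', {'id': 'directs'})
result = {}
for label in d.find_all('label', {'class': 'c1'}):
    key = label.a.get('data-label', '') if label.a else ''
    value_div = label.find_next_sibling('div')  # the matching "c2 value" div
    link = value_div.find('a') if value_div else None
    if link is not None and link.has_attr('href'):
        result[key] = link['href']  # e.g. the getDocumento.ashx URL for "Relation"
    else:
        result[key] = value_div.get_text(' ', strip=True) if value_div else ''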

Python BeautifulSoup - problem reading <span>

I try to extract "brand-logo", "product-name", "price" and "best-price " from the following HTML:
<div class="container">
<div class="catalog-wrapper">
<div class="slideout-filters"></div>
<section class="catalog-top-banner"></section>
<section class="search-results">
<section class="catalog">
<div class="row">
<div class="col-xs-12 col-md-4 col-lg-3">
<div class="col-xs-12 col-md-8 col-lg-9">
<div class="catalog-container">
<a class="catalog-product catalog-item ">
<div class="product-image "></div>
<div class="product-description">
<div>
<div class="brand-logo">
<span>PACO RABANNE</span>
</div>
<span class="product-name">
PACO RABANNE PERFUME MUJER 30 ML
</span>
<span class="price">Normal: S/ 219</span>
<span class="best-price ">Internet: S/ 209</span>
"brand-logo" and "product-name, done, but I can not read "price" & "best-price "
I tried it this way:
l = []
box_3 = soup.find('div','col-xs-12 col-md-8 col-lg-9')
for div in box_3.find_all('div','product-description'):
    d={}
    d["Marca"] = div.find_all("div",{"class","brand-logo"})[0].getText()
    d["Producto"] = div.find_all("span",{"class","product-name"})[0].getText()
    d["Precio"] = div.find_all('span',class_='price')
    d["Oferta"] = div.find_all('span',class_='best-price ')
    l.append(d)
l
out:
{'Marca': 'PACO RABANNE',
'Oferta': [],
'Precio': [<span class="price">Normal: S/ 219</span>],
'Producto': 'PACO RABANNE PERFUME MUJER 30 ML'}
can anyone help me?
You can find the "product-description" div and then iterate over the desired div classes:
from bs4 import BeautifulSoup as soup
import re

_to_find = ['brand-logo', 'product-name', 'price', 'best-price']
s = soup(content, 'html.parser').find('div', {'class': 'product-description'})
final_results = [(lambda x: s.find('span', {'class': i}).text if not x else x.text)(s.find('div', {'class': i}))
                 for i in _to_find]
filtered = [re.sub(r'^[\n\s]+|[\n\s]+$', '', i) for i in final_results]
Output:
['PACO RABANNE', 'PACO RABANNE PERFUME MUJER 30 ML', 'Normal: S/ 219', 'Internet: S/ 209']
Unfortunately, without the actual website I'm unable to check the solution.
Maybe you should extract the data from the "not working" part the same way as the working one (this is a lucky guess - without the website, or at least the HTML that bs4 will parse, I'm really unable to test it).
d["Precio"] = div.find_all('span',{"class","price"})[0].getText()
d["Oferta"] = div.find_all('span',{"class","best-price"})[0].getText()
It might be a good idea to make a new method/function that will get the chosen attribute and handle potential errors.
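A minimal sketch of such a helper (the name and defaults are made up here; it just wraps find() and falls back to a default when the tag is missing):
def get_text_by_class(parent, tag, class_name, default=''):
    # Return the stripped text of the first matching tag, or the default if nothing matches.
    found = parent.find(tag, class_=class_name)
    return found.get_text(strip=True) if found else default

# used inside the loop from the question:
d["Precio"] = get_text_by_class(div, 'span', 'price')
d["Oferta"] = get_text_by_class(div, 'span', 'best-price')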

Accessing untagged text using beautifulsoup

I am using python and beautifulsoup4 to extract some address information.
More specifically, I require assistance when retrieving non-US based zip codes.
Consider the following html data of a US based company: (already a soup object)
<div class="compContent curvedBottom" id="companyDescription">
<div class="vcard clearfix">
<p id="adr">
<span class="street-address">999 State St Ste 100</span><br/>
<span class="locality">Salt Lake City,</span>
<span class="region">UT</span>
<span class="zip">84114-0002,</span>
<br/><span class="country-name">United States</span>
</p>
<p>
<span class="tel">
<strong class="type">Phone: </strong>+1-000-000-000
</span><br/>
</p>
<p class="companyURL"><a class="url ext" href="http://www.website.com" target="_blank">http://www.website.com</a></p>
</div>
</ul>
</div>
I can extract the zipcode (84114-0002) by using the following piece of python code:
class CompanyDescription:
    def __init__(self, page):
        self.data = page.find('div', attrs={'id': 'companyDescription'})

    def address(self):
        #TODO: Also retrieve the Zipcode for UK and German based addresses - tricky!
        address = {'street-address': '', 'locality': '', 'region': '', 'zip': '', 'country-name': ''}
        for key in address:
            try:
                adr = self.data.find('p', attrs={'id': 'adr'})
                if adr.find('span', attrs={'class': key}) is None:
                    address[key] = ''
                else:
                    address[key] = adr.find('span', attrs={'class': key}).text.split(',')[0]
                # Attempting to grab another zip code value
                if address['zip'] == '':
                    pass
            except:
                # We should return a dictionary with "" as key adr
                return address
        return address
You can see that I need some counsel with the line if address['zip'] == '':
These two soup object examples are giving me trouble. In the first one below I would like to retrieve EC4N 4SA
<div class="compContent curvedBottom" id="companyDescription">
<div class="vcard clearfix">
<p id="adr">
<span class="street-address">Albert Buildings</span><br/>
<span class="extended-address">00 Queen Victoria Street</span>
<span class="locality">London</span>
EC4N 4SA
<span class="region">London</span>
<br/><span class="country-name">England</span>
</p>
<p>
</p>
<p class="companyURL"><a class="url ext" href="http://www.website.com.com" target="_blank">http://www.website.com.com</a></p>
</div>
<p><strong>Line of Business</strong> <br/>Management services, nsk</p>
</div>
as well as below, where I am interested in getting 71364
<div class="compContent curvedBottom" id="companyDescription">
<div class="vcard clearfix">
<p id="adr">
<span class="street-address">Alfred-Kärcher-Str. 100</span><br/>
71364
<span class="locality">Winnenden</span>
<span class="region">Baden-Württemberg</span>
<br/><span class="country-name">Germany</span>
</p>
<p>
<span class="tel">
<strong class="type">Phone: </strong>+00-1234567
</span><br/>
<span class="tel"><strong class="type">Fax: </strong>+00-1234567</span>
</p>
</div>
</div>
Now, I am running this program over approximately 68,000 accounts, of which 28,000 are non-US based. I have only pulled out two examples, and I know the current method is not bulletproof. There may be other address formats where this script does not work as expected, but I believe figuring out the UK and German based accounts will help tremendously.
Thanks in advance
Because it is only text without a tag inside <p>, you can use
find_all(text=True, recursive=False)
to get only the text (without tags) but not the text from nested tags (<span>). This gives a list with your text plus some \n characters and spaces, so you can use join() to create one string and strip() to remove the surrounding \n characters and spaces.
data = '''<p id="adr">
<span class="street-address">Albert Buildings</span><br/>
<span class="extended-address">00 Queen Victoria Street</span>
<span class="locality">London</span>
EC4N 4SA
<span class="region">London</span>
<br/><span class="country-name">England</span>
</p>'''
from bs4 import BeautifulSoup as BS
soup = BS(data, 'html.parser').find('p')
print(''.join(soup.find_all(text=True, recursive=False)).strip())
result: EC4N 4SA
The same with the second HTML:
data = '''<p id="adr">
<span class="street-address">Alfred-Kärcher-Str. 100</span><br/>
71364
<span class="locality">Winnenden</span>
<span class="region">Baden-Württemberg</span>
<br/><span class="country-name">Germany</span>
</p>'''
from bs4 import BeautifulSoup as BS
soup = BS(data, 'html.parser').find('p')
print(''.join(soup.find_all(text=True, recursive=False)).strip())
result: 71364
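Wrapped as a standalone helper (just a sketch based on the snippets above, with a made-up name), this could be called from address() whenever address['zip'] stays empty:
from bs4 import BeautifulSoup

def untagged_zip(adr_html):
    # Return whatever text sits directly inside <p id="adr"> without a wrapping <span>.
    adr = BeautifulSoup(adr_html, 'html.parser').find('p', attrs={'id': 'adr'})
    if adr is None:
        return ''
    return ''.join(adr.find_all(text=True, recursive=False)).strip()
For the US-style page this returns an empty string, because every address part sits inside a span, so it only kicks in for the UK and German style pages.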
