I'm trying to parse html like the following:
<tbody>
<tr class data-row="0">
<td align="right"></td>
</tr>
<tr class data-row="1">
<td align="right"></td>
</tr>
<tr class="thead over_theader" data-row="2">
<td align="right"></td>
</tr>
<tr class="thead" data-row="3">
<td align="right"></td>
</tr>
<tr class data-row="4">
<td align="right"></td>
</tr>
<tr class data-row="5">
<td align="right"></td>
</tr>
</tbody>
I want to obtain all tr tags (and their children) where class is not specified. For the example above, that means I want the tr tags where data-row is not 2 or 3.
How do I do this using Beautiful Soup 4?
I tried
tableBody = soup.findAll('tbody')
rows = tableBody[0].findAll(attrs={"class":""})
but this returned a bs4.element.ResultSet of length 8 (i.e. it also included the td children of those rows) when I wanted a ResultSet of length 4 (one for each tr tag with class = "").
Your method actually works for me when I specify the tr tag name:
>>> from bs4 import BeautifulSoup
>>> data = """
... <tbody>
... <tr class data-row="0">
... <td align="right"></td>
... </tr>
... <tr class data-row="1">
... <td align="right"></td>
... </tr>
... <tr class="thead over_theader" data-row="2">
... <td align="right"></td>
... </tr>
... <tr class="thead" data-row="3">
... <td align="right"></td>
... </tr>
... <tr class data-row="4">
... <td align="right"></td>
... </tr>
... <tr class data-row="5">
... <td align="right"></td>
... </tr>
... </tbody>
... """
>>> soup = BeautifulSoup(data, "html.parser")
>>> len(soup.find_all("tr", class_=""))
4
Alternatively, you can use a tr[class=""] CSS selector:
>>> len(soup.select('tr[class=""]'))
4
find_all will, by default, search recursively. So the td tags are valid matches.
Docs:
If you call mytag.find_all(), Beautiful Soup will examine all the descendants of mytag: its children, its children’s children, and so on. If you only want Beautiful Soup to consider direct children, you can pass in recursive=False
So you might write, for example:
tableBody = soup.findAll('tbody')
rows = tableBody[0].find_all(attrs={"class":""}, recursive=False)
print(len(rows))
for r in rows:
    print('---')
    print(r)
Output:
4
---
<tr class="" data-row="0">
<td align="right"></td>
</tr>
---
<tr class="" data-row="1">
<td align="right"></td>
</tr>
---
<tr class="" data-row="4">
<td align="right"></td>
</tr>
---
<tr class="" data-row="5">
<td align="right"></td>
</tr>
Related
I'm still a Python noob trying to learn BeautifulSoup. I looked at solutions on Stack Overflow but was unsuccessful. Please help me to understand this better.
I have extracted the HTML, which is shown below:
<table cellspacing="0" id="ContentPlaceHolder1_dlDetails"
style="width:100%;border-collapse:collapse;">
<tbody><tr>
<td>
<table border="0" cellpadding="5" cellspacing="0" width="70%">
<tbody><tr>
<td> </td>
<td> </td>
</tr>
<tr>
<td bgcolor="#4F95FF" class="listhead" width="49%">Location:</td>
<td bgcolor="#4F95FF" class="listhead" width="51%">On Site </td>
</tr>
<tr>
<td class="listmaintext">ATM ID: </td>
<td class="listmaintext">DAGR00401111111</td>
</tr>
<tr>
<td class="listmaintext">ATM Centre:</td>
<td class="listmaintext"></td>
</tr>
<tr>
<td class="listmaintext">Site Location: </td>
<td class="listmaintext">ADA Building - Agra</td>
</tr>
I tried to parse with find_all('tbody') but was unsuccessful:
#table = bs.find("table", {"id": "ContentPlaceHolder1_dlDetails"})
html = browser.page_source
soup = bs(html, "lxml")
table = soup.find_all('table', {'id':'ContentPlaceHolder1_dlDetails'})
table_body = table.find('tbody')
rows = table.select('tr')
for row in rows:
    cols = row.find_all('td')
    cols = [ele.text.strip() for ele in cols]
    data.append([ele for ele in cols if ele])
I'm trying to save the values in the "listmaintext" class.
Error message
AttributeError: ResultSet object has no attribute 'find'. You're probably treating a list of items like a single item. Did you call find_all() when you meant to call find()?
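The immediate fix for that error: find_all returns a ResultSet (a list of tags), so either index into it or use find, which returns a single Tag. A minimal, self-contained sketch of the repaired loop (the HTML here is a trimmed stand-in for the real browser.page_source, and html.parser is used so no extra parser is needed):

```python
from bs4 import BeautifulSoup

# Trimmed stand-in for the page source fetched via browser.page_source.
html = '''<table id="ContentPlaceHolder1_dlDetails">
<tbody><tr>
<td class="listmaintext">ATM ID: </td>
<td class="listmaintext">DAGR00401111111</td>
</tr></tbody>
</table>'''

soup = BeautifulSoup(html, 'html.parser')

# find() returns a single Tag (or None) -- unlike find_all(), which
# returns a ResultSet and has no .find() of its own.
table = soup.find('table', {'id': 'ContentPlaceHolder1_dlDetails'})

data = []
for row in table.select('tr'):
    cols = [td.text.strip() for td in row.find_all('td', class_='listmaintext')]
    data.append([c for c in cols if c])

print(data)
```

Note that find() returns None when nothing matches, so a real script may want to check for that before calling select on the result.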
Another way to do this using next_sibling
from bs4 import BeautifulSoup as bs
html ='''
<html>
<table cellspacing="0" id="ContentPlaceHolder1_dlDetails"
style="width:100%;border-collapse:collapse;">
<tbody><tr>
<td>
<table border="0" cellpadding="5" cellspacing="0" width="70%">
<tbody><tr>
<td> </td>
<td> </td>
</tr>
<tr>
<td bgcolor="#4F95FF" class="listhead" width="49%">Location:</td>
<td bgcolor="#4F95FF" class="listhead" width="51%">On Site </td>
</tr>
<tr>
<td class="listmaintext">ATM ID: </td>
<td class="listmaintext">DAGR00401111111</td>
</tr>
<tr>
<td class="listmaintext">ATM Centre:</td>
<td class="listmaintext"></td>
</tr>
<tr>
<td class="listmaintext">Site Location: </td>
<td class="listmaintext">ADA Building - Agra</td>
</tr>
</html>'''
soup = bs(html, 'lxml')
data = [' '.join((item.text, item.next_sibling.next_sibling.text)) for item in soup.select('#ContentPlaceHolder1_dlDetails tr .listmaintext:first-child') if item.text !='']
print(data)
from bs4 import BeautifulSoup
data = '''<table cellspacing="0" id="ContentPlaceHolder1_dlDetails"
style="width:100%;border-collapse:collapse;">
<tbody><tr>
<td>
<table border="0" cellpadding="5" cellspacing="0" width="70%">
<tbody><tr>
<td> </td>
<td> </td>
</tr>
<tr>
<td bgcolor="#4F95FF" class="listhead" width="49%">Location:</td>
<td bgcolor="#4F95FF" class="listhead" width="51%">On Site </td>
</tr>
<tr>
<td class="listmaintext">ATM ID: </td>
<td class="listmaintext">DAGR00401111111</td>
</tr>
<tr>
<td class="listmaintext">ATM Centre:</td>
<td class="listmaintext"></td>
</tr>
<tr>
<td class="listmaintext">Site Location: </td>
<td class="listmaintext">ADA Building - Agra</td>
</tr>'''
soup = BeautifulSoup(data, 'lxml')
s = soup.select('.listmaintext')
for td1, td2 in zip(s[::2], s[1::2]):
print('{} [{}]'.format(td1.text.strip(), td2.text.strip()))
Prints:
ATM ID: [DAGR00401111111]
ATM Centre: []
Site Location: [ADA Building - Agra]
I want to scrape some data prices out of a bunch of html tables. The tables contain all sorts of prices, and of course the table data tags don't contain anything useful.
<div id="item-price-data">
<table>
<tbody>
<tr>
<td class="some-class">Normal Price:</td>
<td class="another-class">$100.00</td>
</tr>
<tr>
<td class="some-class">Member Price:</td>
<td class="another-class">$90.00</td>
</tr>
<tr>
<td class="some-class">Sale Price:</td>
<td class="another-class">$80.00</td>
</tr>
<tr>
<td class="some-class">You save:</td>
<td class="another-class">$20.00</td>
</tr>
</tbody>
</table>
</div>
The only prices that I care about are those that are paired with an element that has "Normal Price" as its text.
What I'd like to be able to do is scan the table's descendants, find the <td> tag that has that text, then pull the text from its sibling.
The problem I'm having is that in BeautifulSoup the descendants attribute returns a list of NavigableString, not Tag.
So if I do this:
from bs4 import BeautifulSoup
from urllib import request
html = request.urlopen(url)
soup = BeautifulSoup(html, 'lxml')
div = soup.find('div', {'id': 'item-price-data'})
table_data = div.find_all('td')
for element in table_data:
if element.get_text() == 'Normal Price:':
price = element.next_sibling
print(price)
I get nothing. Is there an easy way to get the string value back?
You can use the find_next() method also you may need a bit of regex:
Demo:
>>> import re
>>> from bs4 import BeautifulSoup
>>> html = """<div id="item-price-data">
... <table>
... <tbody>
... <tr>
... <td class="some-class">Normal Price:</td>
... <td class="another-class">$100.00</td>
... </tr>
... <tr>
... <td class="some-class">Member Price:</td>
... <td class="another-class">$90.00</td>
... </tr>
... <tr>
... <td class="some-class">Sale Price:</td>
... <td class="another-class">$80.00</td>
... </tr>
... <tr>
... <td class="some-class">You save:</td>
... <td class="another-class">$20.00</td>
... </tr>
... </tbody>
... </table>
... </div>"""
>>> soup = BeautifulSoup(html, 'lxml')
>>> div = soup.find('div', {'id': 'item-price-data'})
>>> for element in div.find_all('td', text=re.compile('Normal Price')):
...     price = element.find_next('td')
...     print(price)
...
<td class="another-class">$100.00</td>
If you don't want to bring regex into this then the following will work for you.
>>> table_data = div.find_all('td')
>>> for element in table_data:
...     if 'Normal Price' in element.get_text():
...         price = element.find_next('td')
...         print(price)
...
<td class="another-class">$100.00</td>
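Since the price cell sits in the same row, find_next_sibling('td') is another option; it stays among the element's siblings instead of walking the whole document. It also shows why the original next_sibling attempt printed nothing useful: next_sibling returns the very next node, which here is the whitespace text between the two td tags, not the price tag. A small sketch on a trimmed copy of the table:

```python
from bs4 import BeautifulSoup

html = '''<div id="item-price-data"><table><tbody>
<tr><td class="some-class">Normal Price:</td>
    <td class="another-class">$100.00</td></tr>
<tr><td class="some-class">Member Price:</td>
    <td class="another-class">$90.00</td></tr>
</tbody></table></div>'''

soup = BeautifulSoup(html, 'html.parser')
div = soup.find('div', {'id': 'item-price-data'})

prices = []
for td in div.find_all('td'):
    if 'Normal Price' in td.get_text():
        # find_next_sibling skips over the whitespace text node and
        # returns the next <td> in the same row.
        prices.append(td.find_next_sibling('td').get_text())

print(prices)  # ['$100.00']
```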
I have a table formed like this from a website:
<table>
<tr class="head">
<td class="One">
Column 1
</td>
<td class="Two">
Column 2
</td>
<td class="Four">
Column 3
</td>
<td class="Five">
Column 4
</td>
</tr>
<tr class="DataSet1">
<td class="One">
<table>
<tr>
<td class="DataType1">
Data 1
</td>
</tr>
<tr>
<td class="DataType_2">
<ul>
<li> Data 2a</li>
<li> Data 2b</li>
<li> Data 2c</li>
<li> Data 2d</li>
</ul>
</td>
</tr>
</table>
</td>
<td class="Two">
<table>
<tr>
<td class="DataType_3">
Data 3
</td>
</tr>
<tr>
<td class="DataType_4">
Data 4
</td>
</tr>
</table>
</td>
<td class="Three">
<table>
<tr>
<td class="DataType_5">
Data 5
</td>
</tr>
</table>
</td>
<td class="Four">
<table>
<tr>
<td class="DataType_6">
Data 6
</td>
</tr>
</table>
</td>
</tr>
<tr class="Empty">
<td class="One">
</td>
<td class="Two">
</td>
<td class="Four">
</td>
<td class="Five">
</td>
</tr>
<tr class="DataSet2">
<td class="One">
<table>
<tr>
<td class="DataType_1">
Data 7
</td>
</tr>
<tr>
<td class="DataType_2">
Data 8
</td>
</tr>
</table>
</td>
<td class="Two">
<table>
<tr>
<td class="DataType_3">
Data 9
</td>
</tr>
<tr>
<td class="DataType_4">
Data 10
</td>
</tr>
</table>
</td>
<td class="Three">
<table>
<tr>
<td class="DataType_5">
Data 11
</td>
</tr>
</table>
</td>
<td class="Four">
<table>
<tr>
<td class="DataType_6">
Data 12
</td>
</tr>
</table>
</td>
</tr>
<!-- and so on -->
</table>
The tags sometimes are also empty, for example:
<td class="DataType_6"> </td>
I tried to scrape the content with Scrapy and the following script:
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from project.items import ProjectItem
class MySpider(BaseSpider):
    name = "SpiderName"
    allowed_domains = ["url"]
    start_urls = ["url"]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        rows = hxs.select('//tr')
        items = []
        item = ProjectItem()
        item["Data_1"] = rows.select('//td[@class="DataType_1"]/text()').extract()
        item["Data_2"] = rows.select('//td[@class="DataType_2"]/text()').extract()
        item["Data_3"] = rows.select('//td[@class="DataType_3"]/text()').extract()
        item["Data_4"] = rows.select('//td[@class="DataType_4"]/text()').extract()
        item["Data_5"] = rows.select('//td[@class="DataType_5"]/text()').extract()
        item["Data_6"] = rows.select('//td[@class="DataType_6"]/text()').extract()
        items.append(item)
        return items
If I crawl using this command:
scrapy crawl SpiderName -o output.csv -t csv
Instead of one item per dataset, I only get the full list of all "Data_1" values, repeated as many times as there are datasets.
I had a similar problem. First of all, rows = hxs.select('//tr') selects every <tr> in the document, including the nested ones. You need to dig a bit deeper and use relative paths. This link gives an excellent explanation of how to structure your code.
When I finally got my head around it, I realised that, in order to parse each item separately, row.select should not have the // in it.
Hope this helps.
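To make the relative-path point concrete, here is a stdlib sketch; xml.etree stands in for Scrapy's XPath selectors, which support the same idea with full XPath. Select each dataset row once, then query with a leading './/' so the search is scoped to that row, instead of the absolute '//td[...]' queries from the original spider, which re-match the whole document on every row:

```python
import xml.etree.ElementTree as ET

# A trimmed, well-formed version of the nested-table markup.
doc = ET.fromstring('''<table>
<tr class="DataSet1">
  <td class="One"><table><tr><td class="DataType_1">Data 1</td></tr></table></td>
  <td class="Two"><table><tr><td class="DataType_3">Data 3</td></tr></table></td>
</tr>
<tr class="DataSet2">
  <td class="One"><table><tr><td class="DataType_1">Data 7</td></tr></table></td>
  <td class="Two"><table><tr><td class="DataType_3">Data 9</td></tr></table></td>
</tr>
</table>''')

items = []
for row in doc.findall('./tr'):  # one outer row per dataset
    # './/' restricts the search to this row's subtree; an absolute
    # query would match every dataset's cells each time through the loop.
    items.append({
        'Data_1': [td.text for td in row.findall('.//td[@class="DataType_1"]')],
        'Data_3': [td.text for td in row.findall('.//td[@class="DataType_3"]')],
    })

print(items)
```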
I have an HTML text like this
<tr>
<td><strong>Turnover</strong></td>
<td width="20%" class="currency">£348,191</td>
<td width="20%" class="currency">£856,723</td>
<td width="20%" class="currency">£482,177</td>
</tr>
<tr>
<td> Cost of sales</td>
<td width="20%" class="currency">£275,708</td>
<td width="20%" class="currency">£671,345</td>
<td width="20%" class="currency">£357,587</td>
</tr>
<tr>
There's lots of HTML before and after it. I'd like to parse the numbers. There can be a varying number of td columns, so I'd like to parse all of them. In this case, there are three columns, so the result I'm looking for is:
[348191, 856723, 482177]
Ideally, I'd like to parse the Turnover and Cost of Sales data separately into different variables
You can use BeautifulSoup:
>>> from bs4 import BeautifulSoup as BS
>>> html = """ <tr>
... <td><strong>Turnover</strong></td>
... <td width="20%" class="currency">£348,191</td>
... <td width="20%" class="currency">£856,723</td>
... <td width="20%" class="currency">£482,177</td>
... </tr>
... <tr>
... <td> Cost of sales</td>
... <td width="20%" class="currency">£275,708</td>
... <td width="20%" class="currency">£671,345</td>
... <td width="20%" class="currency">£357,587</td>
... </tr>"""
>>> soup = BS(html, "html.parser")
>>> for i in soup.find_all('tr'):
...     if i.find('td').text == "Turnover":
...         for x in i.find_all('td', {'class': 'currency'}):
...             print(x.text)
...
£348,191
£856,723
£482,177
Explanation
First we convert the HTML to a bs4 type which we can easily navigate through. find_all, no prizes for guessing what it does, finds all the <tr>s.
We loop through each tr and if the first <td> is Turnover, we then go through the rest of the <td>s.
We loop through each td with class="currency" and print its content.
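To get the integers the question actually asks for, strip the pound sign and the thousands separators before converting. A sketch in Python 3 syntax, on just the Turnover row:

```python
from bs4 import BeautifulSoup

html = '''<tr>
<td><strong>Turnover</strong></td>
<td width="20%" class="currency">£348,191</td>
<td width="20%" class="currency">£856,723</td>
<td width="20%" class="currency">£482,177</td>
</tr>'''

soup = BeautifulSoup(html, 'html.parser')

turnover = []
for row in soup.find_all('tr'):
    if row.find('td').text.strip() == 'Turnover':
        for cell in row.find_all('td', {'class': 'currency'}):
            # '£348,191' -> 348191
            turnover.append(int(cell.text.strip().lstrip('£').replace(',', '')))

print(turnover)  # [348191, 856723, 482177]
```

The same loop with a different first-cell label would collect the Cost of sales figures into a separate variable.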
I'm trying to parse through a table of rows using beautiful soup and save values of each row in a dict.
One hiccup is that the table uses some rows as section headers. So for any row with the class 'header' I want to define a variable called "section". Here's what I have, but it's not working: it raises TypeError: string indices must be integers on i['class'].
Here's what I have:
for i in credits.contents:
    if i['class'] == 'header':
        section = i.contents
        DATA_SET[section] = {}
    else:
        DATA_SET[section]['data_point_1'] = i.find('td', {'class' : 'data_point_1'}).find('p').contents
        DATA_SET[section]['data_point_2'] = i.find('td', {'class' : 'data_point_2'}).find('p').contents
        DATA_SET[section]['data_point_3'] = i.find('td', {'class' : 'data_point_3'}).find('p').contents
Example of data:
<table class="credits">
<tr class="header">
<th colspan="3"><h1>HEADER NAME</h1></th>
</tr>
<tr>
<td class="data_point_1"><p>DATA</p></td>
<td class="data_point_2"><p>DATA</p></td>
<td class="data_point_3"><p>DATA</p></td>
</tr>
<tr>
<td class="data_point_1"><p>DATA</p></td>
<td class="data_point_2"><p>DATA</p></td>
<td class="data_point_3"><p>DATA</p></td>
</tr>
<tr>
<td class="data_point_1"><p>DATA</p></td>
<td class="data_point_2"><p>DATA</p></td>
<td class="data_point_3"><p>DATA</p></td>
</tr>
<tr class="header">
<th colspan="3"><h1>HEADER NAME</h1></th>
</tr>
<tr>
<td class="data_point_1"><p>DATA</p></td>
<td class="data_point_2"><p>DATA</p></td>
<td class="data_point_3"><p>DATA</p></td>
</tr>
<tr>
<td class="data_point_1"><p>DATA</p></td>
<td class="data_point_2"><p>DATA</p></td>
<td class="data_point_3"><p>DATA</p></td>
</tr>
<tr>
<td class="data_point_1"><p>DATA</p></td>
<td class="data_point_2"><p>DATA</p></td>
<td class="data_point_3"><p>DATA</p></td>
</tr>
</table>
Here is one solution, with a slight adaptation of your example data so that the result is clearer:
from BeautifulSoup import BeautifulSoup
from pprint import pprint
html = '''<body><table class="credits">
<tr class="header">
<th colspan="3"><h1>HEADER 1</h1></th>
</tr>
<tr>
<td class="data_point_1"><p>DATA11</p></td>
<td class="data_point_2"><p>DATA12</p></td>
<td class="data_point_3"><p>DATA12</p></td>
</tr>
<tr>
<td class="data_point_1"><p>DATA21</p></td>
<td class="data_point_2"><p>DATA22</p></td>
<td class="data_point_3"><p>DATA23</p></td>
</tr>
<tr>
<td class="data_point_1"><p>DATA31</p></td>
<td class="data_point_2"><p>DATA32</p></td>
<td class="data_point_3"><p>DATA33</p></td>
</tr>
<tr class="header">
<th colspan="3"><h1>HEADER 2</h1></th>
</tr>
<tr>
<td class="data_point_1"><p>DATA11</p></td>
<td class="data_point_2"><p>DATA12</p></td>
<td class="data_point_3"><p>DATA13</p></td>
</tr>
<tr>
<td class="data_point_1"><p>DATA21</p></td>
<td class="data_point_2"><p>DATA22</p></td>
<td class="data_point_3"><p>DATA23</p></td>
</tr>
<tr>
<td class="data_point_1"><p>DATA31</p></td>
<td class="data_point_2"><p>DATA32</p></td>
<td class="data_point_3"><p>DATA33</p></td>
</tr>
</table></body>'''
soup = BeautifulSoup(html)
rows = soup.findAll('tr')
section = ''
dataset = {}
for row in rows:
    if row.attrs:
        section = row.text
        dataset[section] = {}
    else:
        cells = row.findAll('td')
        for cell in cells:
            if cell['class'] in dataset[section]:
                dataset[section][cell['class']].append(cell.text)
            else:
                dataset[section][cell['class']] = [cell.text]
pprint(dataset)
Produces:
{u'HEADER 1': {u'data_point_1': [u'DATA11', u'DATA21', u'DATA31'],
u'data_point_2': [u'DATA12', u'DATA22', u'DATA32'],
u'data_point_3': [u'DATA12', u'DATA23', u'DATA33']},
u'HEADER 2': {u'data_point_1': [u'DATA11', u'DATA21', u'DATA31'],
u'data_point_2': [u'DATA12', u'DATA22', u'DATA32'],
u'data_point_3': [u'DATA13', u'DATA23', u'DATA33']}}
EDIT: ADAPTATION OF YOUR SOLUTION
Your code is neat and has only a couple of issues. You use contents in places where you should use text or findAll -- I repaired that below:
soup = BeautifulSoup(html)
credits = soup.find('table')
section = ''
DATA_SET = {}
for i in credits.findAll('tr'):
    if i.get('class', '') == 'header':
        section = i.text
        DATA_SET[section] = {}
    else:
        DATA_SET[section]['data_point_1'] = i.find('td', {'class' : 'data_point_1'}).find('p').contents
        DATA_SET[section]['data_point_2'] = i.find('td', {'class' : 'data_point_2'}).find('p').contents
        DATA_SET[section]['data_point_3'] = i.find('td', {'class' : 'data_point_3'}).find('p').contents
print DATA_SET
Please note that because each data row assigns to the same data_point keys, successive rows replace earlier ones within a section. I suspect this is not an issue in your real dataset, but that is why your code would return this abbreviated result:
{u'HEADER 2': {'data_point_2': [u'DATA32'],
'data_point_3': [u'DATA33'],
'data_point_1': [u'DATA31']},
u'HEADER 1': {'data_point_2': [u'DATA32'],
'data_point_3': [u'DATA33'],
'data_point_1': [u'DATA31']}}
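If you do want every row kept rather than overwritten, accumulate into lists instead of assigning. A sketch of that idea using the current bs4 (BeautifulSoup 4) API, where a tag's class attribute comes back as a list rather than a string:

```python
from bs4 import BeautifulSoup

# A small sample in the same shape as the question's table.
html = '''<table class="credits">
<tr class="header"><th colspan="3"><h1>HEADER 1</h1></th></tr>
<tr><td class="data_point_1"><p>DATA11</p></td></tr>
<tr><td class="data_point_1"><p>DATA21</p></td></tr>
</table>'''

soup = BeautifulSoup(html, 'html.parser')

dataset = {}
section = ''
for row in soup.find('table').find_all('tr'):
    if 'header' in row.get('class', []):   # bs4 returns class as a list
        section = row.text.strip()
        dataset[section] = {}
    else:
        for cell in row.find_all('td'):
            key = cell['class'][0]
            # setdefault appends instead of overwriting, so every row survives
            dataset[section].setdefault(key, []).append(cell.text.strip())

print(dataset)
```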