Why is Beautiful Soup truncating this page?

I am trying to pull a list of resource/database names and IDs from a listing of resources that my school library has subscriptions to. There are pages listing the different resources, and I can use urllib2 to get the pages, but when I pass a page to BeautifulSoup, it truncates its tree just before the end of the entry for the first resource in the list. The problem seems to be in the image link used to add the resource to a search set. This is where things get cut off; here's the HTML:
<a href="http://www2.lib.myschool.edu:7017/V/ACDYFUAMVRFJRN4PV8CIL7RUPC9QXMQT8SFV2DVDSBA5GBJCTT-45899?func=find-db-add-res&resource=XYZ00618&z122_key=000000000&function-in=www_v_find_db_0" onclick='javascript:addToz122("XYZ00618","000000000","myImageXYZ00618","http://discover.lib.myschool.edu:8331/V/ACDYFUAMVRFJRN4PV8CIL7RUPC9QXMQT8SFV2DVDSBA5GBJCTT-45900");return false;'>
<img name="myImageXYZ00618" id="myImageXYZ00618" src="http://www2.lib.myschool.edu:7017/INS01/icon_eng/v-add_favorite.png" title="Add to My Sets" alt="Add to My Sets" border="0">
</a>
And here is my python code:
import urllib2
from BeautifulSoup import BeautifulSoup
page = urllib2.urlopen("http://discover.lib.myschool.edu:8331/V?func=find-db-1-title&mode=titles&scan_start=latp&scan_utf=D&azlist=Y&restricted=all")
print BeautifulSoup(page).prettify()
In BeautifulSoup's version, the opening <a href...> shows up, but the <img> doesn't, and the <a> is immediately closed, as are the rest of the open tags, all the way to </html>.
The only distinguishing trait I see for these "add to sets" images is that they are the only ones to have name and id attributes. I can't see why that would cause BeautifulSoup to stop parsing immediately, though.
Note: I am almost entirely new to Python, but seem to be understanding it all right.
Thank you for your help!

You can try beautiful soup with html5lib rather than the built-in parser.
BeautifulSoup(markup, "html5lib")
html5lib is more lenient and often parses pages that the built-in parser truncates. See the docs at http://www.crummy.com/software/BeautifulSoup/bs4/doc/#searching-the-tree
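For example, a minimal sketch assuming Beautiful Soup 4 and the html5lib package are installed (pip install beautifulsoup4 html5lib):

from bs4 import BeautifulSoup

# html5lib builds the tree the way a browser would and recovers from markup
# that makes stricter parsers stop early.
markup = "<html><body><p>some <b>messy markup<p>second paragraph</body></html>"
soup = BeautifulSoup(markup, "html5lib")
print(soup.prettify())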

I was using Firefox's "view selection source", which apparently cleans up the HTML for me. When I viewed the original source, this is what I saw:
<img name="myImageXYZ00618" id="myImageXYZ00618" src='http://www2.lib.myschool.edu:7017/INS01/icon_eng/v-add_favorite.png' alt='Add to My Sets' title='Add to My Sets' border="0"title="Add to clipboard PAIS International (CSA)" alt="Add to clipboard PAIS International (CSA)">
By putting a space after the border="0" attribute, I can get BS to parse the page.
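One way to apply that fix programmatically before parsing (a rough sketch, shown here with Beautiful Soup 4; the regex only targets a border="0" attribute glued to whatever follows it):

import re
from bs4 import BeautifulSoup

raw = '<a href="#"><img src="x.png" border="0"title="Add to My Sets"></a><p>rest of page</p>'
# Insert the missing space after border="0" when it is stuck to the next attribute.
cleaned = re.sub(r'(border="0")(?=\S)', r'\1 ', raw)
soup = BeautifulSoup(cleaned, "html.parser")
print(soup.prettify())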

I strongly recommend using html5lib + lxml instead of beautiful soup. It uses a real HTML parser (very similar to the one in Firefox) and lxml provides a very flexible way to query the resulting tree (css-selectors or xpath).
There are tons of bugs and strange behaviors in BeautifulSoup, which makes it not the best solution for a lot of HTML markup you can't trust.
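A rough sketch of that combination (assuming the html5lib and lxml packages are installed; note that html5lib puts elements in the XHTML namespace, so XPath queries need a prefix):

import html5lib

# html5lib parses like a browser; treebuilder="lxml" gives back an lxml tree.
doc = html5lib.parse("<p class=msg>broken <b>markup", treebuilder="lxml")
ns = {"h": "http://www.w3.org/1999/xhtml"}
for el in doc.xpath("//h:p[@class='msg']", namespaces=ns):
    print(el.text)

CSS selectors are available on the same tree through lxml.cssselect if you prefer them to XPath.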

If I remember correctly, BeautifulSoup uses "name" in its tree as the name of the tag. In this case "a" would be the "name" of the anchor tag.
That doesn't seem like it should break it though. What version of Python and BS are you using?

Related

BeautifulSoup returning 'None' when element definitely exists

Firstly, I apologize if I'm missing something super simple; I've looked at many questions and cannot figure this out for the life of me.
Basically, the website I'm trying to gather text is here:
https://www.otcmarkets.com/stock/MNGG/overview
I want to pull the information from the side that says 'Dark or Defunct'. My current code is as follows:
url = 'https://www.otcmarkets.com/stock/MNGG/overview'
page = requests.get(url)
soup = BeautifulSoup(page.content, "html.parser")
ticker = soup.find('href', 'Dark or Defunct')
But as the title says, it just returns None. Where am I going wrong? I'm quite inexperienced, so I'd love an explanation if possible.
It's returning None because there is no mention of it in the HTML page source. Everything on that website is dynamically loaded from JavaScript sources. BeautifulSoup is designed to pull data out of HTML and XML files, and in the HTML file the server returns there is no mention of "Dark or Defunct" (so BeautifulSoup correctly finds nothing). You'll need to use a scraping method that has support for JavaScript. See Web-scraping JavaScript page with Python.
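For example, a rough sketch with Selenium (hypothetical; it assumes a Chrome driver is available and may still need an explicit wait for the dynamic content to finish loading):

from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://www.otcmarkets.com/stock/MNGG/overview")
# The browser runs the JavaScript, so page_source holds the rendered HTML.
soup = BeautifulSoup(driver.page_source, "html.parser")
print(soup.find(string="Dark or Defunct"))
driver.quit()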

Crawling text of a specific heading for any web page URL document in python

I have searched and become a little acquainted with some of the web crawling libraries in Python, like scrapy and beautifulsoup. Using these libraries I want to crawl all of the text under a specific heading in a document. If any of you can help me, your help would be highly appreciated. I have seen a tutorial on how to get links under a specific class name (using the view source page option) with Beautiful Soup, but how can I get plain text, not links, under a specific heading class? Sorry for my bad English.
import requests
from bs4 import BeautifulSoup
r=requests.get('https://patents.google.com/patent/US6886010B2/en')
print(r.content)
soup=BeautifulSoup(r.content)
for link in soup.find_all("div", class_="claims"):
    print(link)
Here I have extracted the claims text, but it also shows other divs nested inside the claims (a div within a div); I just want to extract the text of the claims only.
By links, I assume you mean the entire contents of the div elements. If you'd like to just print the text contained within them, use the .text attribute or .get_text() method. The entire text of the claims is wrapped inside a unique section element. So you might want to try this:
print(soup.find('section', attrs={'id': 'claims'}).text)
The get_text method gives you a bit more flexibility such as joining bits of text together with a separator and stripping the text of extra newlines.
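For instance, a small sketch building on the code above:

import requests
from bs4 import BeautifulSoup

r = requests.get('https://patents.google.com/patent/US6886010B2/en')
soup = BeautifulSoup(r.content, 'html.parser')
claims = soup.find('section', attrs={'id': 'claims'})
# separator joins the text nodes with a space and strip=True trims each piece,
# which avoids the runs of blank lines that the .text attribute keeps.
print(claims.get_text(separator=' ', strip=True))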
Also, take a look at the BeautifulSoup Documentation and spend some time reading it.

data mining from website using xpath in python

I run this program but it is giving me only "[]" instead of the web page data. Please help.
import urllib
import re
import lxml.html
start_link= "http://aepcindia.com/ApparelMarketplaces/detail"
html_string = urllib.urlopen(start_link)
dom = lxml.html.fromstring(html_string.read())
side_bar_link = dom.xpath("//*[@id='show_cont']/div/table/tr[2]/td[2]/text()")
print side_bar_link
file = open("next_page.txt","w")
for link in side_bar_link:
    file.write(link)
    print link
file.close()
The HTML source you are downloading contains an empty content area: <div id="show_cont"></div>. This div is populated later by a javascript function showData(). When you look at the page in a browser, the javascript is executed first, which is not the case when you just download the HTML source using urllib.
To get the data you want, you can try to mimic the POST request in the showData() function or, preferably, scrape the website using a scriptable headless browser.
Update: While a headless browser would be a much more generally applicable approach, it might be overkill in this case. You will actually be better off reverse engineering the showData() function. The ajax call in it is obvious enough: it delivers a plain HTML table, and you can also limit searches :)
http://aepcindia.com/ApparelMarketplaces/ajax_detail/search_type:/search_value:
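A rough, untested sketch of that approach (it assumes the endpoint still answers a plain GET and returns the HTML table that showData() would have injected into the page):

import requests
import lxml.html

ajax_url = "http://aepcindia.com/ApparelMarketplaces/ajax_detail/search_type:/search_value:"
resp = requests.get(ajax_url)
# Parse the returned fragment directly; the table is no longer wrapped in
# the empty <div id="show_cont">, so the XPath gets simpler.
dom = lxml.html.fromstring(resp.content)
print(dom.xpath("//table//tr[2]/td[2]/text()"))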

python html parser which doesn't modify actual markup?

I want to parse HTML code in Python and have already tried Beautiful Soup and pyquery. The problem is that those parsers modify the original code, e.g. insert some tags, etc. Is there any parser out there that does not change the code?
I tried HTMLParser but no success! :(
It doesn't modify the code and just tells me where tags are placed, but it fails at parsing web pages like mail.live.com.
Any idea how to parse a web page just like a browser?
You can use BeautifulSoup to extract just text and not modify the tags. It's in their documentation.
Same question here:
How to extract text from beautiful soup
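A minimal sketch of that idea:

from bs4 import BeautifulSoup

markup = "<p>Hello <b>world</b>!</p>"
soup = BeautifulSoup(markup, "html.parser")
# get_text() walks the parse tree and returns only the character data;
# the original markup string itself is never rewritten.
print(soup.get_text())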
No, to this moment there is no such HTML parser, and every parser has its own limitations.
Have you tried the webkit engine with Python bindings?
See this: https://github.com/niwibe/phantompy
You can traverse the real DOM of the parsed web page and do what you need to do.

Parsing HTML in python - lxml or BeautifulSoup? Which of these is better for what kinds of purposes?

From what I can make out, the two main HTML parsing libraries in Python are lxml and BeautifulSoup. I've chosen BeautifulSoup for a project I'm working on, but I chose it for no particular reason other than finding the syntax a bit easier to learn and understand. But I see a lot of people seem to favour lxml and I've heard that lxml is faster.
So I'm wondering what are the advantages of one over the other? When would I want to use lxml and when would I be better off using BeautifulSoup? Are there any other libraries worth considering?
Pyquery provides the jQuery selector interface to Python (using lxml under the hood).
http://pypi.python.org/pypi/pyquery
It's really awesome, I don't use anything else anymore.
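A small sketch of what that looks like (assuming pyquery is installed):

from pyquery import PyQuery as pq

d = pq("<div><p class='intro'>Hello</p><a href='http://example.com'>link</a></div>")
# jQuery-style selectors, backed by lxml under the hood.
print(d("p.intro").text())
print(d("a").attr("href"))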
For starters, BeautifulSoup is no longer actively maintained, and the author even recommends alternatives such as lxml.
Quoting from the linked page:
Version 3.1.0 of Beautiful Soup does significantly worse on real-world HTML than version 3.0.8 does. The most common problems are handling <table> tags incorrectly, "malformed start tag" errors, and "bad end tag" errors. This page explains what happened, how the problem will be addressed, and what you can do right now.

This page was originally written in March 2009. Since then, the 3.2 series has been released, replacing the 3.1 series, and development of the 4.x series has gotten underway. This page will remain up for historical purposes.
tl;dr
Use 3.2.0 instead.
In summary, lxml is positioned as a lightning-fast production-quality html and xml parser that, by the way, also includes a soupparser module to fall back on BeautifulSoup's functionality. BeautifulSoup is a one-person project, designed to save you time to quickly extract data out of poorly-formed html or xml.
lxml documentation says that both parsers have advantages and disadvantages. For this reason, lxml provides a soupparser so you can switch back and forth. Quoting,
BeautifulSoup uses a different parsing approach. It is not a real HTML parser but uses regular expressions to dive through tag soup. It is therefore more forgiving in some cases and less good in others. It is not uncommon that lxml/libxml2 parses and fixes broken HTML better, but BeautifulSoup has superiour support for encoding detection. It very much depends on the input which parser works better.
In the end they are saying,
The downside of using this parser is that it is much slower than the HTML parser of lxml. So if performance matters, you might want to consider using soupparser only as a fallback for certain cases.
If I understand them correctly, it means that the soup parser is more robust --- it can deal with a "soup" of malformed tags by using regular expressions --- whereas lxml is more straightforward and just parses things and builds a tree as you would expect. I assume it also applies to BeautifulSoup itself, not just to the soupparser for lxml.
They also show how to benefit from BeautifulSoup's encoding detection, while still parsing quickly with lxml:
>>> from BeautifulSoup import UnicodeDammit
>>> def decode_html(html_string):
... converted = UnicodeDammit(html_string, isHTML=True)
... if not converted.unicode:
... raise UnicodeDecodeError(
... "Failed to detect encoding, tried [%s]",
... ', '.join(converted.triedEncodings))
... # print converted.originalEncoding
... return converted.unicode
>>> root = lxml.html.fromstring(decode_html(tag_soup))
(Same source: http://lxml.de/elementsoup.html).
In words of BeautifulSoup's creator,
That's it! Have fun! I wrote Beautiful Soup to save everybody time. Once you get used to it, you should be able to wrangle data out of poorly-designed websites in just a few minutes. Send me email if you have any comments, run into problems, or want me to know about your project that uses Beautiful Soup.
--Leonard
Quoted from the Beautiful Soup documentation.
I hope this is now clear. The soup is a brilliant one-person project designed to save you time to extract data out of poorly-designed websites. The goal is to save you time right now, to get the job done, not necessarily to save you time in the long term, and definitely not to optimize the performance of your software.
Also, from the lxml website,
lxml has been downloaded from the Python Package Index more than two million times and is also available directly in many package distributions, e.g. for Linux or MacOS-X.
And, from Why lxml?,
The C libraries libxml2 and libxslt have huge benefits: ... Standards-compliant... Full-featured... fast. fast! FAST! ... lxml is a new Python binding for libxml2 and libxslt...
Don't use BeautifulSoup on its own. Use lxml.soupparser; then you're sitting on top of the power of lxml and can still use the good bits of BeautifulSoup, namely its handling of really broken and crappy HTML.
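A minimal sketch of that setup (assuming both lxml and BeautifulSoup are installed; the module lives at lxml.html.soupparser):

from lxml.html import soupparser

# Deliberately broken markup: BeautifulSoup does the lenient parsing,
# lxml supplies the element tree and XPath on top of it.
broken = "<html><body><p>unclosed paragraph<td>stray cell</body>"
root = soupparser.fromstring(broken)
print(root.xpath("//p/text()"))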
I've used lxml with great success for parsing HTML. It seems to do a good job of handling "soupy" HTML, too. I'd highly recommend it.
Here's a quick test I had lying around to try handling of some ugly HTML:
import unittest
from StringIO import StringIO
from lxml import etree

class TestLxmlStuff(unittest.TestCase):
    bad_html = """
        <html>
            <head><title>Test!</title></head>
            <body>
                <h1>Here's a heading
                <p>Here's some text
                <p>And some more text
                <b>Bold!</b></i>
                <table>
                    <tr>row
                    <tr><td>test1
                    <td>test2
                    </tr>
                    <tr>
                    <td colspan=2>spanning two
                </table>
            </body>
        </html>"""

    def test_soup(self):
        """Test lxml's parsing of really bad HTML"""
        parser = etree.HTMLParser()
        tree = etree.parse(StringIO(self.bad_html), parser)
        self.assertEqual(len(tree.xpath('//tr')), 3)
        self.assertEqual(len(tree.xpath('//td')), 3)
        self.assertEqual(len(tree.xpath('//i')), 0)
        #print(etree.tostring(tree.getroot(), pretty_print=False, method="html"))

if __name__ == '__main__':
    unittest.main()
For sure I would use EHP. It is faster than lxml, much more elegant, and simpler to use.
Check it out: https://github.com/iogf/ehp
from ehp import *
data = '''<html> <body> <em> Hello world. </em> </body> </html>'''
html = Html()
dom = html.feed(data)
for ind in dom.find('em'):
    print ind.text()
Output:
Hello world.
A somewhat outdated speed comparison can be found here, which clearly recommends lxml, as the speed differences seem drastic.
