Identifying branches that differ in tag structure - Python

I'm hoping to check whether two HTML documents differ by tags only, without considering the text, and to pick out the branch(es) that differ.
For example :
html_1 = """
<p>i love it</p>
"""
html_2 = """
<p>i love it really</p>
"""
They share the same tag structure, so they're seen to be the same. However:
html_1 = """
<div>
<p>i love it</p>
</div>
<p>i love it</p>
"""
html_2 = """
<div>
<p>i <em>love</em> it</p>
</div>
<p>i love it</p>
"""
I'd expect it to return the <div> branch, because the tag structures are different. Could lxml, BeautifulSoup or some other lib achieve this? I'm trying to find a way to actually pick out the different branches.
Thanks

A more reliable approach would be to construct a Tree of tag names out of the document as discussed here:
HTML Parse tree using Python 2.7
Here is an example working solution based on treelib.Tree:
from bs4 import BeautifulSoup
from treelib import Tree

def traverse(parent, tree, parent_id=None):
    # let treelib generate node identifiers so repeated tag names don't collide
    node = tree.create_node(parent.name, parent=parent_id)
    for child in parent.find_all(recursive=False):
        traverse(child, tree, node.identifier)

def compare(html1, html2):
    tree1 = Tree()
    traverse(BeautifulSoup(html1, "html.parser"), tree1)
    tree2 = Tree()
    traverse(BeautifulSoup(html2, "html.parser"), tree2)
    # to_json() serialises tag names only, so equal tag structures give equal JSON
    return tree1.to_json() == tree2.to_json()

print(compare("<p>i love it</p>", "<p>i love it really</p>"))
print(compare("<p>i love it</p>", "<p>i <em>love</em> it</p>"))
Prints:
True
False
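
The comparison above only answers same/different. If you also need to pick out which top-level branch differs, as the question asks, one possible sketch (not part of the answer above) is to build a nested tag-name tree per branch and keep the pairs that don't match:
from bs4 import BeautifulSoup

def tag_tree(el):
    # nested structure of tag names only, ignoring all text
    return [(child.name, tag_tree(child)) for child in el.find_all(recursive=False)]

def differing_branches(html1, html2):
    soup1 = BeautifulSoup(html1, "html.parser")
    soup2 = BeautifulSoup(html2, "html.parser")
    # pair up top-level branches positionally; note zip() silently drops extras
    pairs = zip(soup1.find_all(recursive=False), soup2.find_all(recursive=False))
    return [(a, b) for a, b in pairs
            if (a.name, tag_tree(a)) != (b.name, tag_tree(b))]

html_1 = "<div><p>i love it</p></div><p>i love it</p>"
html_2 = "<div><p>i <em>love</em> it</p></div><p>i love it</p>"
for a, b in differing_branches(html_1, html_2):
    print(a.name)  # div
For the question's second example this reports only the div branch, since the trailing p has the same tag structure in both documents.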

Sample code to check whether the tag structures of two HTML documents are the same or not.
Demo:
from lxml import html as PARSER  # assuming lxml.html is the parser referred to as PARSER

def getTagSequence(content):
    """
    Get all Tag Sequence
    """
    root = PARSER.fromstring(content)
    tag_sequence = []
    for elm in root.iter():
        tag_sequence.append(elm.tag)
    return tag_sequence

html_1_tags = getTagSequence(html_1)
html_2_tags = getTagSequence(html_2)

if html_1_tags == html_2_tags:
    print("Tagging structure is the same.")
else:
    print("Tagging structure is different.")
    print("HTML 1 Tagging:", html_1_tags)
    print("HTML 2 Tagging:", html_2_tags)
Note:
The code above only checks the flat tag sequence; it does not check parent/child relationships, e.g. the following two documents produce the same sequence:
html_1 = """ <p> This <span>is <em>p</em></span> tag</p>"""
html_2 = """ <p> This <span>is </span><em>p</em> tag</p>"""

Related

Find Tags that Match Specific Classes but one class keeps changing

I want to extract information from a div tag which has some specific classes.
The classes are in the format abc def jss238 xyz.
Now, the jss class number keeps changing, so after some time the classes will become abc def jss384 xyz.
What is the best way to extract the information so that the code doesn't break if the tags change as well?
The current code that I using is
val = soup.findAll('div', class_="abc def jss328 xyz")
I feel regex can be a good way, but can I also drop the jss class and search using only the other three?
So yes, you can use regex to find the pattern abc def <3 letters and 3 digits> xyz.
Personally, I would see if you can get the data from the source. When classes change like that, it's usually because the page is rendered through JavaScript, but it needs to put the data in there and get it from somewhere. If you share the URL and what data you are after, I could see if that's the case. But here's the regex version:
from bs4 import BeautifulSoup
import re

html = '''<div class="abc def jss238 xyz">jss238 text</div>
<div class="abc def jss384 xyz">jss384 text</div>
<div class="hij klm jss238 xyz">doesn't match the pattern</div>'''

soup = BeautifulSoup(html, 'html.parser')

regex = re.compile(r'abc def \w{3}\d{3} xyz')
specialDivs = soup.find_all('div', {'class': regex})
for each in specialDivs:
    print(f'html: {each}\tText: {each.text}')
Output:
html: <div class="abc def jss238 xyz">jss238 text</div> Text: jss238 text
html: <div class="abc def jss384 xyz">jss384 text</div> Text: jss384 text
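If you'd rather skip the regex and rely only on the three stable classes, as the question suggests, one option is a CSS selector over just those classes, since class is a multi-valued attribute. This sketch reuses the soup object from the snippet above:
# match divs carrying all three stable classes, ignoring the changing jss* one
stable_divs = soup.select('div.abc.def.xyz')
for each in stable_divs:
    print(f'html: {each}\tText: {each.text}')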

How to change tags with lxml in Python?

I want to change all tag names <p> to <para> using lxml in Python.
Here's an example of what the XML file looks like:
<concept id="id15CDB0Q0Q4G"><title id="id15CDB0R0VHA">General</title>
<conbody><p>This section</p></conbody>
<concept id="id156F7H00GIE"><title id="id15CDB0R0V1W">
System</title>
<conbody><p> </p>
<p>The
</p>
<p>status.</p>
<p>sensors.</p>
I've been trying to code it like this, but it doesn't find the tags with .findall:
from lxml import etree
doc = etree.parse("73-20.xml")
print("\n")
print(etree.tostring(doc, pretty_print=True, xml_declaration=True, encoding="utf-8"))
print("\n")
raiz = doc.getroot()
print(raiz.tag)
children = raiz.getchildren()
print(children)
print("\n")
libros = doc.findall("p")
print(libros)
print("\n")
for i in range(len(libros)):
    if libros[i].find("p").tag == "p":
        libros[i].find("p").tag = "para"
Any thoughts?
lxml's findall() function provides the ability to search by path:
libros = raiz.findall(".//p")
for el in libros:
    el.tag = "para"
Here .//p means that lxml will search nested p elements as well.
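A minimal end-to-end sketch (standalone, using an inline snippet rather than the 73-20.xml file from the question):
from lxml import etree

doc = etree.fromstring("<conbody><p>This section</p><p>status.</p></conbody>")
for el in doc.findall(".//p"):
    el.tag = "para"
# every <p> is now serialised as <para>
print(etree.tostring(doc, pretty_print=True).decode())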

Split HTML nested list into python list

I have an HTML list formed this way (it's what CKEditor creates for nested lists):
<ul>
<li>niv1alone</li>
<li>niv1
<ul>
<li>niv2
<ul>
<li>niv3
<ul>
<li>niv4</li>
</ul></li></ul></li></ul></li>
<li>autre niv1 alone</li>
</ul>
How do I form a "recursive list" like that:
[
('niv1alone',[]),('niv1',[('niv2',[('niv3',[('niv4',[])])])]),('autre niv1 alone',[])
]
I have tried several things with beautifulsoup but I can't get the desired result.
Here's a recursive function that does something similar to what you ask. The trick to writing recursive functions is to make the problem smaller and then recurse. Here I'm walking down the element tree and passing the children, which is a strictly smaller set than the one before.
import bs4
html = '''
<ul>
<li>niv1alone</li>
<li>niv1
<ul>
<li>niv2
<ul>
<li>niv3
<ul>
<li>niv4</li>
</ul></li></ul></li></ul></li>
<li>autre niv1 alone</li>
</ul>
'''
def make_tree(body: bs4.Tag):
    branch = []
    for ch in body.children:
        if isinstance(ch, bs4.NavigableString):
            # skip whitespace
            if str(ch).strip():
                branch.append(str(ch).strip())
        else:
            branch.append(make_tree(ch))
    return branch

if __name__ == '__main__':
    soup = bs4.BeautifulSoup(html, 'html.parser')
    tree = make_tree(soup.select_one('ul'))
    print(tree)
output:
[['niv1alone'], ['niv1', [['niv2', [['niv3', [['niv4']]]]]]], ['autre niv1 alone']]
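If the exact (text, children) tuple shape from the question is needed, a small variant of the function could pair each li label with its nested list. This is a sketch, assuming each li's own text appears as its first non-empty string child, as in the CKEditor markup above:
def make_tuple_tree(ul: bs4.Tag):
    items = []
    for li in ul.find_all('li', recursive=False):
        # the item's label is its first non-empty text node
        label = next((str(s).strip() for s in li.children
                      if isinstance(s, bs4.NavigableString) and str(s).strip()), '')
        child_ul = li.find('ul', recursive=False)
        items.append((label, make_tuple_tree(child_ul) if child_ul else []))
    return items

print(make_tuple_tree(soup.select_one('ul')))
# [('niv1alone', []), ('niv1', [('niv2', [('niv3', [('niv4', [])])])]), ('autre niv1 alone', [])]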

Extract all attributes of an element from XML in Python

I have multiple XML files containing tweets in a format similar to the one below:
<tweet idtweet='xxxxxxx'>
<topic>#irony</topic>
<date>20171109T03:39</date>
<hashtag>#irony</hashtag>
<irony>1</irony>
<emoji>Laughing with tears</emoji>
<nbreponse>0</nbreponse>
<nbretweet>0</nbretweet>
<textbrut> Some text here <img class="Emoji Emoji--forText" src="source.png" draggable="false" alt="😁" title="Laughing with tears" aria-label="Emoji: Laughing with tears"></img> #irony </textbrut>
<text>Some text here #irony </text>
</tweet>
There is a problem with the way the files were created (the closing tag for img is missing) so I made the choice of closing it as in the above example. I know that in HTML you can close it as
<img **something here** />
but I don't know if this holds for XML, as I didn't see it anywhere.
I'm writing Python code that extracts the topic and the plain text, but I'm also interested in all the attributes contained by img and I can't seem to get them. Here is what I've tried so far:
top = []
txt = []
emj = []
for article in root:
    topic = article.find('.topic')
    textbrut = article.find('.textbrut')
    emoji = article.find('.img')
    everything = textbrut.attrib
    if topic is not None and textbrut is not None:
        top.append(topic.text)
        txt.append(textbrut.text)
        x = list(everything.items())
        emj.append(x)
Any help would be greatly appreciated.
Apparently, Element has some useful methods (such as Element.iter()) that iterate recursively over the whole sub-tree below it (its children, their children, ...). So here is the solution that worked for me:
new = []  # collect the attribute lists here
for emoji in root.iter('img'):
    print(emoji.attrib)
    everything = emoji.attrib
    x = list(everything.items())
    new.append(x)
For more details read here.
Below is a standalone example using ElementTree:
import xml.etree.ElementTree as ET
xml = '''<t><tweet idtweet='xxxxxxx'>
<topic>#irony</topic>
<date>20171109T03:39</date>
<hashtag>#irony</hashtag>
<irony>1</irony>
<emoji>Laughing with tears</emoji>
<nbreponse>0</nbreponse>
<nbretweet>0</nbretweet>
<textbrut> Some text here <img class="Emoji Emoji--forText" src="source.png" draggable="false" alt="😁" title="Laughing with tears" aria-label="Emoji: Laughing with tears"></img> #irony </textbrut>
<text>Some text here #irony </text>
</tweet></t>'''
root = ET.fromstring(xml)
data = []
for tweet in root.findall('.//tweet'):
    data.append({'topic': tweet.find('./topic').text, 'text': tweet.find('./text').text,
                 'img_attributes': tweet.find('.//img').attrib})
print(data)
output
[{'topic': '#irony', 'text': 'Some text here #irony ', 'img_attributes': {'class': 'Emoji Emoji--forText', 'src': 'source.png', 'draggable': 'false', 'alt': '😁', 'title': 'Laughing with tears', 'aria-label': 'Emoji: Laughing with tears'}}]
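Since the question mentions multiple XML files, the same extraction could be wrapped in a loop over a directory. A sketch, where tweets/*.xml is a hypothetical location and iter('tweet') finds the elements whether or not a wrapper root is present:
import glob
import xml.etree.ElementTree as ET

data = []
for path in glob.glob('tweets/*.xml'):  # hypothetical location of the files
    root = ET.parse(path).getroot()
    for tweet in root.iter('tweet'):
        img = tweet.find('.//img')
        data.append({'topic': tweet.findtext('./topic'),
                     'text': tweet.findtext('./text'),
                     'img_attributes': img.attrib if img is not None else {}})
print(data)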

Parse HTML and preserve original content

I have lots of HTML files. I want to replace some elements, keeping all the other content unchanged. For example, I would like to execute this jQuery expression (or some equivalent of it):
$('.header .title').text('my new content')
on the following HTML document:
<div class=header><span class=title>Foo</span></div>
<p>1<p>2
<table><tr><td>1</td></tr></table>
and have the following result:
<div class=header><span class=title>my new content</span></div>
<p>1<p>2
<table><tr><td>1</td></tr></table>
The problem is, all parsers I’ve tried (Nokogiri, BeautifulSoup, html5lib) serialize it to something like this:
<html>
<head></head>
<body>
<div class=header><span class=title>my new content</span></div>
<p>1</p><p>2</p>
<table><tbody><tr><td>1</td></tr></tbody></table>
</body>
</html>
E.g. they add:
html, head and body elements
closing p tags
tbody
Is there a parser that satisfies my needs? It should work in either Node.js, Ruby or Python.
I highly recommend the pyquery package for Python. It is a jQuery-like interface layered on top of the extremely reliable lxml package, a Python binding to libxml2.
I believe this does exactly what you want, with a quite familiar interface.
from pyquery import PyQuery as pq
html = '''
<div class=header><span class=title>Foo</span></div>
<p>1<p>2
<table><tr><td>1</td></tr></table>
'''
doc = pq(html)
doc('.header .title').text('my new content')
print(doc)
Output:
<div><div class="header"><span class="title">my new content</span></div>
<p>1</p><p>2
</p><table><tr><td>1</td></tr></table></div>
The closing p tag can't be helped. lxml only keeps the values from the original document, not the vagaries of the original. Paragraphs can be made two ways, and it chooses the more standard way when doing serialization. I don't believe you'll find a (bug-free) parser that does better.
Note: I'm on Python 3.
This will only handle a subset of CSS selectors, but it may be enough for your purposes.
from html.parser import HTMLParser

class AttrQuery():
    def __init__(self):
        self.repl_text = ""
        self.selectors = []

    def add_css_sel(self, seltext):
        sels = seltext.split(" ")
        for selector in sels:
            if selector[:1] == "#":
                self.add_selector({"id": selector[1:]})
            elif selector[:1] == ".":
                self.add_selector({"class": selector[1:]})
            elif "." in selector:
                html_tag, html_class = selector.split(".")
                self.add_selector({"html_tag": html_tag, "class": html_class})
            else:
                self.add_selector({"html_tag": selector})

    def add_selector(self, selector_dict):
        self.selectors.append(selector_dict)

    def match_test(self, tagwithattrs_list):
        for selector in self.selectors:
            for condition in selector:
                condition_value = selector[condition]
                if not self._condition_test(tagwithattrs_list, condition, condition_value):
                    return False
        return True

    def _condition_test(self, tagwithattrs_list, condition, condition_value):
        for tagwithattrs in tagwithattrs_list:
            try:
                if condition_value == tagwithattrs[condition]:
                    return True
            except KeyError:
                pass
        return False

class HTMLAttrParser(HTMLParser):
    def __init__(self, html, **kwargs):
        super().__init__(**kwargs)
        self.tagwithattrs_list = []
        self.queries = []
        self.matchrepl_list = []
        self.html = html

    def handle_starttag(self, tag, attrs):
        tagwithattrs = dict(attrs)
        tagwithattrs["html_tag"] = tag
        self.tagwithattrs_list.append(tagwithattrs)
        if debug:
            print("push\t", end="")
            for attrname in tagwithattrs:
                print("{}:{}, ".format(attrname, tagwithattrs[attrname]), end="")
            print("")

    def handle_endtag(self, tag):
        try:
            while True:
                tagwithattrs = self.tagwithattrs_list.pop()
                if debug:
                    print("pop \t", end="")
                    for attrname in tagwithattrs:
                        print("{}:{}, ".format(attrname, tagwithattrs[attrname]), end="")
                    print("")
                if tag == tagwithattrs["html_tag"]: break
        except IndexError:
            raise IndexError("Found a close-tag for a non-existent element.")

    def handle_data(self, data):
        if self.tagwithattrs_list:
            for query in self.queries:
                if query.match_test(self.tagwithattrs_list):
                    line, position = self.getpos()
                    length = len(data)
                    match_replace = (line-1, position, length, query.repl_text)
                    self.matchrepl_list.append(match_replace)

    def addquery(self, query):
        self.queries.append(query)

    def transform(self):
        split_html = self.html.split("\n")
        self.matchrepl_list.reverse()
        if debug: print("\nreversed list of matches (line, position, len, repl_text):\n{}\n".format(self.matchrepl_list))
        for line, position, length, repl_text in self.matchrepl_list:
            oldline = split_html[line]
            newline = oldline[:position] + repl_text + oldline[position+length:]
            split_html = split_html[:line] + [newline] + split_html[line+1:]
        return "\n".join(split_html)
See the example usage below.
html_test = """<div class=header><span class=title>Foo</span></div>
<p>1<p>2
<table><tr><td class=hi><div id=there>1</div></td></tr></table>"""
debug = False
parser = HTMLAttrParser(html_test)
query = AttrQuery()
query.repl_text = "Bar"
query.add_selector({"html_tag": "div", "class": "header"})
query.add_selector({"class": "title"})
parser.addquery(query)
query = AttrQuery()
query.repl_text = "InTable"
query.add_css_sel("table tr td.hi #there")
parser.addquery(query)
parser.feed(html_test)
transformed_html = parser.transform()
print("transformed html:\n{}".format(transformed_html))
Output:
transformed html:
<div class=header><span class=title>Bar</span></div>
<p>1<p>2
<table><tr><td class=hi><div id=there>InTable</div></td></tr></table>
OK, I have done this in a few languages, and the best parser I have seen that preserves whitespace and even HTML comments is Jericho, which is unfortunately Java.
That is, Jericho knows how to parse and preserve fragments.
Yes, I know it's Java, but you could easily make a RESTful service with a tiny bit of Java that takes the payload and converts it. In the Java REST service you could use JRuby, Jython, Rhino JavaScript, etc. to coordinate with Jericho.
You can use Nokogiri HTML Fragment for this:
fragment = Nokogiri::HTML.fragment('<div class=header><span class=title>Foo</span></div>
<p>1<p>2
<table><tr><td>1</td></tr></table>')
fragment.css('.title').children.first.replace(Nokogiri::XML::Text.new('HEY', fragment))
fragment.to_s #=> "<div class=\"header\"><span class=\"title\">HEY</span></div>\n<p>1</p><p>2\n</p><table><tr><td>1</td></tr></table>"
The problem with the p tag persists, because it is invalid HTML, but this should return your document without the added html, head, body, or tbody tags.
With Python, using lxml.html is fairly straightforward:
(It meets points 1 and 3, but I don't think much can be done about 2; it also handles the unquoted class attributes.)
import lxml.html
fragment = """<div class=header><span class=title>Foo</span></div>
<p>1<p>2
<table><tr><td>1</td></tr></table>
"""
page = lxml.html.fromstring(fragment)
for span in page.cssselect('.header .title'):
    span.text = 'my new content'
print(lxml.html.tostring(page, pretty_print=True).decode())
Result:
<div>
<div class="header"><span class="title">my new content</span></div>
<p>1</p>
<p>2
</p>
<table><tr><td>1</td></tr></table>
</div>
This is a slightly separate solution but if this is only for a few simple instances then perhaps CSS is the answer.
Generated Content
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN">
<html>
  <head>
    <style type="text/css">
      #header.title1:first-child:before {
        content: "This is your title!";
        display: block;
        width: 100%;
      }
      #header.title2:first-child:before {
        content: "This is your other title!";
        display: block;
        width: 100%;
      }
    </style>
  </head>
  <body>
    <div id="header" class="title1">
      <span class="non-title">Blah Blah Blah Blah</span>
    </div>
  </body>
</html>
In this instance you could just have jQuery swap the classes and you'd get the change for free with css. I haven't tested this particular usage but it should work.
We use this for things like outage messages.
If you're running a Node.js app, this module will do exactly what you want, a jQuery-style DOM manipulator: https://github.com/cheeriojs/cheerio
An example from their wiki:
var cheerio = require('cheerio'),
$ = cheerio.load('<h2 class="title">Hello world</h2>');
$('h2.title').text('Hello there!');
$('h2').addClass('welcome');
$.html();
//=> <h2 class="title welcome">Hello there!</h2>
