Processing large XML file [duplicate] - python

I have the following XML which I want to parse using Python's ElementTree:
<rdf:RDF xml:base="http://dbpedia.org/ontology/"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:owl="http://www.w3.org/2002/07/owl#"
xmlns:xsd="http://www.w3.org/2001/XMLSchema#"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
xmlns="http://dbpedia.org/ontology/">
<owl:Class rdf:about="http://dbpedia.org/ontology/BasketballLeague">
<rdfs:label xml:lang="en">basketball league</rdfs:label>
<rdfs:comment xml:lang="en">
a group of sports teams that compete against each other
in Basketball
</rdfs:comment>
</owl:Class>
</rdf:RDF>
I want to find all owl:Class tags and then extract the value of all rdfs:label instances inside them. I am using the following code:
tree = ET.parse("filename")
root = tree.getroot()
root.findall('owl:Class')
Because of the namespace, I am getting the following error.
SyntaxError: prefix 'owl' not found in prefix map
I tried reading the document at http://effbot.org/zone/element-namespaces.htm but I am still not able to get this working since the above XML has multiple nested namespaces.
Kindly let me know how to change the code to find all the owl:Class tags.

You need to give the .find(), findall() and iterfind() methods an explicit namespace dictionary:
namespaces = {'owl': 'http://www.w3.org/2002/07/owl#'} # add more as needed
root.findall('owl:Class', namespaces)
Prefixes are only looked up in the namespaces parameter you pass in. This means you can use any namespace prefix you like; the API splits off the owl: part, looks up the corresponding namespace URL in the namespaces dictionary, and then changes the search to look for the XPath expression {http://www.w3.org/2002/07/owl#}Class instead. You can of course use the same syntax yourself:
root.findall('{http://www.w3.org/2002/07/owl#}Class')
Also see the Parsing XML with Namespaces section of the ElementTree documentation.
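Putting this together for the original goal, here is a minimal sketch (assuming the snippet above is saved as filename.xml) that finds each owl:Class and prints the text of every rdfs:label inside it:
import xml.etree.ElementTree as ET

# Sketch: every prefix used in the search paths must appear in the map.
namespaces = {
    'owl': 'http://www.w3.org/2002/07/owl#',
    'rdfs': 'http://www.w3.org/2000/01/rdf-schema#',
}
tree = ET.parse('filename.xml')  # assumed file name
root = tree.getroot()
for cls in root.findall('owl:Class', namespaces):
    for label in cls.findall('rdfs:label', namespaces):
        print(label.text)  # -> basketball league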
If you can switch to the lxml library, things are better; that library supports the same ElementTree API but collects namespaces for you in the .nsmap attribute on elements, and generally has superior namespace support.

Here's how to do this with lxml without having to hard-code the namespaces or scan the text for them (as Martijn Pieters mentions):
from lxml import etree
tree = etree.parse("filename")
root = tree.getroot()
root.findall('owl:Class', root.nsmap)
UPDATE:
5 years later I'm still running into variations of this issue. lxml helps as I showed above, but not in every case. The commenters may have a valid point regarding this technique when it comes to merging documents, but I think most people are simply having difficulty searching documents.
Here's another case and how I handled it:
<?xml version="1.0" ?><Tag1 xmlns="http://www.mynamespace.com/prefix">
<Tag2>content</Tag2></Tag1>
xmlns without a prefix means that unprefixed tags get this default namespace, so when you search for Tag2 you need to include the namespace to find it. However, lxml creates an nsmap entry with None as the key, and I couldn't find a way to search for it. So I created a new namespace dictionary like this:
namespaces = {}
# The response uses a default namespace, and the tags don't mention it;
# build a new ns map using an identifier of our choice.
for k, v in root.nsmap.items():
    if not k:
        namespaces['myprefix'] = v
e = root.find('myprefix:Tag2', namespaces)
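A more compact way to build the same remapped dictionary (a sketch; 'myprefix' is an arbitrary identifier, as above):
# Map the None key for the default namespace onto a usable prefix in one pass.
namespaces = {k or 'myprefix': v for k, v in root.nsmap.items()}
e = root.find('myprefix:Tag2', namespaces)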

Note: This answer is for Python's ElementTree standard library, without using hardcoded namespaces.
To extract namespace prefixes and URIs from XML data you can use the ElementTree.iterparse function, parsing only namespace start events (start-ns):
>>> from io import StringIO
>>> from xml.etree import ElementTree
>>> my_schema = u'''<rdf:RDF xml:base="http://dbpedia.org/ontology/"
... xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
... xmlns:owl="http://www.w3.org/2002/07/owl#"
... xmlns:xsd="http://www.w3.org/2001/XMLSchema#"
... xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
... xmlns="http://dbpedia.org/ontology/">
...
... <owl:Class rdf:about="http://dbpedia.org/ontology/BasketballLeague">
... <rdfs:label xml:lang="en">basketball league</rdfs:label>
... <rdfs:comment xml:lang="en">
... a group of sports teams that compete against each other
... in Basketball
... </rdfs:comment>
... </owl:Class>
...
... </rdf:RDF>'''
>>> my_namespaces = dict([
... node for _, node in ElementTree.iterparse(
... StringIO(my_schema), events=['start-ns']
... )
... ])
>>> from pprint import pprint
>>> pprint(my_namespaces)
{'': 'http://dbpedia.org/ontology/',
'owl': 'http://www.w3.org/2002/07/owl#',
'rdf': 'http://www.w3.org/1999/02/22-rdf-syntax-ns#',
'rdfs': 'http://www.w3.org/2000/01/rdf-schema#',
'xsd': 'http://www.w3.org/2001/XMLSchema#'}
Then the dictionary can be passed as argument to the search functions:
root.findall('owl:Class', my_namespaces)
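The same technique works directly against a file on disk; a minimal sketch, assuming the document above is saved as filename.xml:
from xml.etree import ElementTree

# Build the prefix map straight from the file, then parse again for the tree.
my_namespaces = dict(
    node for _, node in ElementTree.iterparse('filename.xml', events=['start-ns'])
)
root = ElementTree.parse('filename.xml').getroot()
print(root.findall('owl:Class', my_namespaces))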

I've been using similar code to this and have found it's always worth reading the documentation... as usual!
findall() will only find elements which are direct children of the current tag. So, not really ALL.
It might be worth your while trying to get your code working with the following, especially if you're dealing with big and complex xml files, so that sub-sub-elements (etc.) are also included.
If you know where elements are in your xml, then I suppose it'll be fine! Just thought this was worth remembering.
root.iter()
ref: https://docs.python.org/3/library/xml.etree.elementtree.html#finding-interesting-elements
"Element.findall() finds only elements with a tag which are direct children of the current element. Element.find() finds the first child with a particular tag, and Element.text accesses the element’s text content. Element.get() accesses the element’s attributes:"

To get the namespace in its namespace format, e.g. {myNameSpace}, you can do the following:
import re

root = tree.getroot()
ns = re.match(r'{.*}', root.tag).group(0)
This way, you can use it later on in your code to find nodes, e.g. using string interpolation (Python 3).
link = root.find(f"{ns}link")

This is basically Davide Brunato's answer; however, I found that his answer had serious problems with the default namespace being the empty string, at least on my Python 3.6 installation. The function I distilled from his code, and that worked for me, is the following:
from io import StringIO
from xml.etree import ElementTree
def get_namespaces(xml_string):
    namespaces = dict(
        node for _, node in ElementTree.iterparse(
            StringIO(xml_string), events=['start-ns']
        )
    )
    if "" in namespaces:  # guard: the document may have no default namespace
        namespaces["ns0"] = namespaces[""]
    return namespaces
where ns0 is just a placeholder for the empty prefix; you can replace it with any string you like.
If I then do:
my_namespaces = get_namespaces(my_schema)
root.findall('ns0:SomeTagWithDefaultNamespace', my_namespaces)
It also produces the correct answer for tags using the default namespace.

My solution is based on @Martijn Pieters' comment:
register_namespace only influences serialisation, not search.
So the trick here is to use different dictionaries for serialization and for searching.
namespaces = {
    '': 'http://www.example.com/default-schema',
    'spec': 'http://www.example.com/specialized-schema',
}
Now, register all namespaces for parsing and writing:
for name, value in namespaces.items():
    ET.register_namespace(name, value)
For searching (find(), findall(), iterfind()) we need a non-empty prefix. Pass these functions a modified dictionary (here I modify the original dictionary, but this must be done only after the namespaces are registered):
namespaces['default'] = namespaces['']
Now, the functions from the find() family can be used with the default prefix:
print(root.find('default:myelem', namespaces))
but
tree.write(destination)
does not use any prefixes for elements in the default namespace.
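Pulling the pieces together, a minimal end-to-end sketch of the two-dictionary idea (input.xml, output.xml and myelem are made-up names, not from the answer):
import xml.etree.ElementTree as ET

namespaces = {
    '': 'http://www.example.com/default-schema',
    'spec': 'http://www.example.com/specialized-schema',
}
for name, value in namespaces.items():
    ET.register_namespace(name, value)  # affects serialization only

tree = ET.parse('input.xml')  # assumed input file
root = tree.getroot()

namespaces['default'] = namespaces['']  # searching needs a non-empty prefix
print(root.find('default:myelem', namespaces))

tree.write('output.xml')  # default-namespace elements get no prefix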

Related

Why doesn't Element.attrib include namespace definitions?

I'd like to create an XML namespace mapping (e.g., to use in findall calls as in the Python documentation of ElementTree). Given the definitions seem to exist as attributes of the xbrl root element, I'd have thought I could just examine the attrib attribute of the root element within my ElementTree. However, the following code
from io import StringIO
import xml.etree.ElementTree as ET
TEST = '''<?xml version="1.0" encoding="utf-8"?>
<xbrl
xml:lang="en-US"
xmlns="http://www.xbrl.org/2003/instance"
xmlns:country="http://xbrl.sec.gov/country/2021"
xmlns:dei="http://xbrl.sec.gov/dei/2021q4"
xmlns:iso4217="http://www.xbrl.org/2003/iso4217"
xmlns:link="http://www.xbrl.org/2003/linkbase"
xmlns:nvda="http://www.nvidia.com/20220130"
xmlns:srt="http://fasb.org/srt/2021-01-31"
xmlns:stpr="http://xbrl.sec.gov/stpr/2021"
xmlns:us-gaap="http://fasb.org/us-gaap/2021-01-31"
xmlns:xbrldi="http://xbrl.org/2006/xbrldi"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
</xbrl>'''
xbrl = ET.parse(StringIO(TEST))
print(xbrl.getroot().attrib)
produces the following output:
{'{http://www.w3.org/XML/1998/namespace}lang': 'en-US'}
Why aren't any of the namespace attributes showing up in root.attrib? I'd at least expect xmlns to be in the dictionary given it has no prefix.
What have I tried?
The following code seems to work to generate the namespace mapping:
print({prefix: uri for key, (prefix, uri) in ET.iterparse(StringIO(TEST), events=['start-ns'])})
output:
{'': 'http://www.xbrl.org/2003/instance',
'country': 'http://xbrl.sec.gov/country/2021',
'dei': 'http://xbrl.sec.gov/dei/2021q4',
'iso4217': 'http://www.xbrl.org/2003/iso4217',
'link': 'http://www.xbrl.org/2003/linkbase',
'nvda': 'http://www.nvidia.com/20220130',
'srt': 'http://fasb.org/srt/2021-01-31',
'stpr': 'http://xbrl.sec.gov/stpr/2021',
'us-gaap': 'http://fasb.org/us-gaap/2021-01-31',
'xbrldi': 'http://xbrl.org/2006/xbrldi',
'xlink': 'http://www.w3.org/1999/xlink',
'xsi': 'http://www.w3.org/2001/XMLSchema-instance'}
But yikes, is it gross to have to parse the file twice.
As for the answer to your specific question (why the attrib list doesn't contain the namespace prefix declarations): sorry for the unsatisfying answer, but it's because they're not attributes.
http://www.w3.org/XML/1998/namespace is a special schema that doesn't act like the other schemas in your userspace. In that representation, xmlns:prefix="uri" is an attribute. In all other subordinate (by parsing sequence) schemas, xmlns:prefix="uri" is a special thing, a namespace prefix declaration, which is different from an attribute on a node or element. I don't have a reference for this, but it holds true perfectly in at least a half dozen (correct) implementations of XML parsers that I've used, including those from IBM, Microsoft and Oracle.
As for the ugliness of reparsing the file, I feel your pain but it's necessary. As tdelaney so well pointed out, you may not assume that all of your namespace decls or prefixes must be on your root element.
Be prepared for the possibility of the same prefix being redefined with a different namespace on every node in your document. This may happen, and the library must work correctly with it, even if it is never the case in your document (or worse, if it just hasn't been the case so far).
Consider whether you are shoehorning text processing into parsing or querying XML when there may be a better solution, like XPath or XQuery. There have been some good recent changes to, and Python wrappers for, Saxon, even though their pricing model has changed.
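If the double parse really bothers you, iterparse can collect the prefix declarations and hand back the tree in a single pass. A sketch (not part of the answer above), reusing the TEST string from the question:
from io import StringIO
import xml.etree.ElementTree as ET

ns_map = {}
root = None
# One pass: 'start-ns' events carry (prefix, uri) tuples; the first 'start'
# event delivers the root element of the tree being built.
for event, node in ET.iterparse(StringIO(TEST), events=['start-ns', 'start']):
    if event == 'start-ns':
        prefix, uri = node
        ns_map[prefix] = uri
    elif root is None:
        root = node
print(ns_map)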

Setting and accessing namespaces in python lxml

I am writing a script that processes a rdf:skos file with python3 and lxml:
I learnt that I need to pass the namespaces that the XML mentions to the findall procedure. (Ok, strange, since the XML file lists these in its header, so this seems like an unnecessary step, but anyway.)
When calling
for concept in root.findall('.//skos:Concept', namespaces=root.nsmap):
that works, because root.nsmap is constructed by lxml.
But then later in my code I also need to perform a test on xml:lang
for pl in concept.findall(".//skos:prefLabel[@xml:lang='en']", namespaces=root.nsmap):
and here python tells me
SyntaxError: prefix 'xml' not found in prefix map
Ok, true, in my skos file there is no extra declaration for the xml namespace. So I try to add it to the root.nsmap dict:
root.nsmap['xml'] = "http://www.w3.org/XML/1998/namespace"
but that too doesn't work; root.nsmap still prints as
nsmap = {'rdf': 'http://www.w3.org/1999/02/22-rdf-syntax-ns#', 'uneskos': 'http://purl.org/umu/uneskos#', 'iso-thes': 'http://purl.org/iso25964/skos-thes#', 'dcterms': 'http://purl.org/dc/terms/', 'skos': 'http://www.w3.org/2004/02/skos/core#', 'rdfs': 'http://www.w3.org/2000/01/rdf-schema#'}
Seems I am not allowed to modify root.nsmap?
Anyone got an idea how this is done? I have processed tons of XML in the past with Perl's XML::Twig, which is very, very comfortable, and I assume the Python community has (at least) similarly comfortable ways to do that... but how?
Any hint appreciated.
Modifying root.nsmap has no effect. But you can create another dictionary and modify that one. Example:
from lxml import etree
doc = """
<root xmlns:skos="http://www.w3.org/2004/02/skos/core#">
<skos:prefLabel xml:lang='en'>FOO</skos:prefLabel>
<skos:prefLabel xml:lang='de'>BAR</skos:prefLabel>
</root>"""
root = etree.fromstring(doc)
nsmap = root.nsmap
nsmap["xml"] = "http://www.w3.org/XML/1998/namespace"
en = root.find(".//skos:prefLabel[@xml:lang='en']", namespaces=nsmap)
print(en.text)
Output:
FOO
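If the extra nsmap entry feels fragile, you can also sidestep the xml prefix entirely and filter in Python, since attributes are exposed under their fully qualified names (a sketch built on the example above):
# xml:lang is stored under its Clark-notation key on the element.
XML_LANG = '{http://www.w3.org/XML/1998/namespace}lang'
for label in root.findall('.//skos:prefLabel', namespaces=root.nsmap):
    if label.get(XML_LANG) == 'en':
        print(label.text)  # FOO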

Parsing XML with Python - accessing elements

I'm using lxml to parse some xml, but for some reason I can't find a specific element.
I'm trying to access the <Constant> elements.
Here's an xml snippet:
</rdf:Description>
</rdf:RDF>
</MiriamAnnotation>
<ListOfSubstrates>
<Substrate metabolite="Metabolite_5" stoichiometry="1"/>
</ListOfSubstrates>
<ListOfModifiers>
<Modifier metabolite="Metabolite_9" stoichiometry="1"/>
</ListOfModifiers>
<ListOfConstants>
<Constant key="Parameter_4344" name="Kcat" value="433.724"/>
<Constant key="Parameter_4343" name="km" value="479.617"/>
The code I'm using is like this:
>>> from lxml import etree as ET
>>> parsed = ET.parse('ct.cps')
>>> root = parsed.getroot()
>>> for a in root.findall(".//Constant"):
... print a.attrib['key']
...
>>> for a in root.findall('Constant'):
... print a.get('key')
...
>>> for a in root.findall('Constant'):
... print a.attrib['key']
...
As you can see, none of these things seem to work.
What am I doing wrong?
EDIT: I'm wondering if it has something to do with the fact that <Constant> elements are empty?
EDIT2: Source xml here: https://www.dropbox.com/s/i6hga7nvmcd6rxx/ct.cps?dl=0
Here is how you can get the values you are looking for:
from lxml import etree
parsed = etree.parse('ct.cps')
for a in parsed.findall("//{http://www.copasi.org/static/schema}Constant"):
    print(a.attrib["key"])
Output:
Parameter_4344
Parameter_4343
Parameter_4342
Parameter_4341
Parameter_4340
Parameter_4339
Parameter_4338
Parameter_4337
Parameter_4336
Parameter_4335
Parameter_4334
Parameter_4333
Parameter_4332
Parameter_4331
Parameter_4330
Parameter_4329
Parameter_4328
Parameter_4327
Parameter_4326
Parameter_4325
Parameter_4324
Parameter_4323
Parameter_4322
Parameter_4321
Parameter_4320
Parameter_4319
The important thing here is that the COPASI root element in your XML file (the real one at the Dropbox URL) declares a default namespace (http://www.copasi.org/static/schema). This means that the element and all its descendants, including Constant, belong to that namespace.
So instead of Constant elements, you need to look for {http://www.copasi.org/static/schema}Constant elements.
See http://lxml.de/tutorial.html#namespaces.
Here is how you could do it using XPath instead of findall:
from lxml import etree
NSMAP = {"c": "http://www.copasi.org/static/schema"}
parsed = etree.parse('ct.cps')
for a in parsed.xpath("//c:Constant", namespaces=NSMAP):
    print(a.attrib["key"])
See http://lxml.de/xpathxslt.html#namespaces-and-prefixes.
First, please disregard my comment. It turns out that lxml.etree is much better than the standard xml.etree.ElementTree in that it takes care of the namespace. The problem you have is that you want to search for '//Constant', which means the nodes can be at any level. However, the root element does not allow you to do that:
>>> root.findall('//Constant')
SyntaxError: cannot use absolute path on element
However, you can do that at a higher level:
>>> parsed.findall('//Constant')
[<Element Constant at 0x10a7ce128>, <Element Constant at 0x10a7ce170>]
Update
I am posting the full text here. Since I don't have your full XML file, I made something up to fill in the blanks.
from lxml import etree as ET
from io import BytesIO
xml_text = """<?xml version='1.0' encoding='utf-8' ?>
<rdf:root xmlns:rdf='http://foo.bar.com/rdf'>
<rdf:RDF>
<rdf:Description>
DescriptionX
</rdf:Description>
</rdf:RDF>
<rdf:foo>
<MiriamAnnotation>
bar
</MiriamAnnotation>
<ListOfSubstrates>
<Substrate metabolite="Metabolite_5" stoichiometry="1"/>
</ListOfSubstrates>
<ListOfModifiers>
<Modifier metabolite="Metabolite_9" stoichiometry="1"/>
</ListOfModifiers>
<ListOfConstants>
<Constant key="Parameter_4344" name="Kcat" value="433.724"/>
<Constant key="Parameter_4343" name="km" value="479.617"/>
</ListOfConstants>
</rdf:foo>
</rdf:root>
"""
# lxml rejects str input that carries an encoding declaration, so feed it bytes
buffer = BytesIO(xml_text.encode('utf-8'))
tree = ET.parse(buffer)
for constant_node in tree.findall('//Constant'):
    print(constant_node.attrib['key'])
Don't use findall. It has a limited feature set and is designed to be compatible with ElementTree.
Instead, use xpath, which supports namespaces. From the above, it appears that you probably want to say something like
# possibilities, you need to get these right...
ns_dict = {'atom': "http://www.w3.org/2005/Atom",
           'rdf': "http://www.w3.org/2000/01/rdf-schema#"}
root = parsed.getroot()
for a in root.xpath('.//rdf:Constant', namespaces=ns_dict):
    print(a.attrib['key'])
Note that you must include a namespace prefix in your xpath expression whenever an element has a non-blank namespace, and they must map to one of the namespace URLs that match the same URLs in your document.
Update
Since you posted your original document, I see that there is no namespace assigned to the elements you are looking for. This will work, I just tried it with your source document:
for a in tree.xpath("//Constant"):
print a.attrib['key']
You don't need a namespace because there is no default namespace specified in the document itself.

Parse an html page with XML in python

I'm trying to put python parsing this XML code from an HTML page:
<weather>
<loc mobiurl="http://foreca.mobi/?lon=-8.6110&lat=41.1496&source=navi/" url="http://foreca.com/?lon=-8.6110&lat=41.1496&source=navi/">
<obs station="Porto / Pedras Rubras" dist="11 km NW" dt="2013-03-06 17:00:00" t="14" tf="14" s="d320" wn="S" ws="8" p="997" rh="94" v="5000"/>
<fc dt="2013-03-07" tx="16" tn="11" s="d220"/>
<fc dt="2013-03-08" tx="15" tn="10" s="d220"/>
<fc dt="2013-03-09" tx="15" tn="10" s="d220"/>
</loc>
</weather>
I want to get the information in the dt, s, tx and tn fields, but I don't know how to do it with XML functions. I tried to read the HTML file and then create an array to store the content of those fields, but I can't get it working.
Is there any easy way to get the data with python?
Some HTML scraping is easily done with pyparsing, using that library's makeHTMLTags method (makeHTMLTags returns a pair of expressions, for opening and closing tags, but in your example, only the opening tag is needed):
from pyparsing import makeHTMLTags
fcTag = makeHTMLTags("fc")[0]
tagAttrs = 'dt s tx tn'.split()
for match in fcTag.searchString(htmltext):
    print(' '.join("%s:%s" % (attr, match[attr]) for attr in tagAttrs))
Prints:
dt:2013-03-07 s:d220 tx:16 tn:11
dt:2013-03-08 s:d220 tx:15 tn:10
dt:2013-03-09 s:d220 tx:15 tn:10
This makes it easy to incorporate this fragment parser with pyparsing's other features, such as run-time parse actions, semantic checking, etc.
EDIT
If you want all the dt's, s's, etc. in their own respective lists (in Python, we call them "lists", not "vectors"), do this:
dtArray = []
sArray = []
txArray = []
tnArray = []
for match in fcTag.searchString(htmltext):
    dtArray.append(match.dt)
    sArray.append(match.s)
    txArray.append(match.tx)
    tnArray.append(match.tn)
    print(' '.join("%s:%s" % (attr, match[attr]) for attr in tagAttrs))
I've seen code like this before, and it is a poor data structure pattern. You access the value of the i'th entry of the original table by getting dtArray[i], sArray[i], etc.
Please consider instead one of the several structured types offered by Python. You have several to choose from:
A. Use dicts.
fcArray = []
for match in fcTag.searchString(htmltext):
    fcArray.append(dict((attr, match[attr]) for attr in tagAttrs))
Now to get at the i'th entry, just get fc = fcArray[i], and access the fc['dt'], fc['s'] etc. values from that dict.
B. Use namedtuples.
from collections import namedtuple
FCData = namedtuple("FCData", tagAttrs)
fcArray = []
for match in fcTag.searchString(htmltext):
    fcArray.append(FCData(*(match[attr] for attr in tagAttrs)))
You again use fc = fcArray[i] to get the i'th entry, but now you access the values using fc.dt, fc.s, etc. I find this form to be cleaner-looking than the dict form, but there are some restrictions. All the tag names have to be legal Python identifiers, so if you have a tag "rise/run", then you can't use a namedtuple. Also, namedtuples are immutable - you can't take an existing FCData fc and assign into its dt field with fc.dt = "new datetime value". dicts on the other hand would allow this.
C. Use objects. The simplest is a "bag"-type object that creates empty object instances, which you then add attributes to through simple assignment or setattr calls:
class FCData(object):
    pass

fcArray = []
for match in fcTag.searchString(htmltext):
    fc = FCData()
    for attr in tagAttrs:
        setattr(fc, attr, match[attr])
    fcArray.append(fc)
You get the i'th entry with fc = fcArray[i], and like the namedtuple, you get the attributes using fc.dt and so on. But you can also modify the attributes if need be, and the assignment fc.dt = "new datetime value" would work.
D. Just use the objects created by pyparsing's searchString method.
fcArray = fcTag.searchString(htmltext)
pyparsing returns ParseResults, and it combines the behavior of both dicts and namedtuples. Just like before you access the i'th entry with fc = fcArray[i]. You can read the dt attribute with fc.dt or fc['dt']. You can read fc.dt, but you can't assign to it, just like the namedtuple. You can assign to fc['dt'], just like the dict.
If you can extract just the weather tags easily, you can use the xml.etree.ElementTree API which comes with Python.
import xml.etree.ElementTree as ET
tree = ET.fromstring(weatherdata)
for fcelem in tree.findall('.//fc'):
    print(fcelem.attrib['tx'], fcelem.attrib['tn'])
If you want to extract it from the HTML document, then it depends on how well-formed the HTML is. If it is an XHTML document, the ElementTree API can handle it fine.
Otherwise, you'll need to switch to an HTML parser instead. You could install the lxml library; that library supports the same ElementTree API but has a dedicated HTML parser included.
You could also use BeautifulSoup for an alternate HTML API. In fact, lxml and BeautifulSoup can work in concert giving you a choice of APIs for your tasks; use whichever is easier for you.
Both lxml and BeautifulSoup are external libraries.
