I have what is, probably, a very stupid question, but I'm stumped by it and would appreciate any help.
I'm trying to gather xbrl data from SEC filings using Python and BeautifulSoup. One problem I'm having is that certain line items are referred to differently in the instance document and the calculation linkbase.
As a concrete example, take this recent 10-K from PHI Group Inc.:
https://www.sec.gov/Archives/edgar/data/704172/000149315221015100/0001493152-21-015100-index.htm
A line item with the xbrl tag 'WriteoffOfFinancingCosts' shows up as
<PHIL:WriteoffOfFinancingCosts ...> in the instance document (along with a value and contexts)
but shows up as 'loc_PHILWriteoffOfFinancingCosts' in the calculation linkbase.
But this relationship, 'PHIL:' = 'loc_PHIL', isn't standard across XBRL filings. How does one know what prefix will be added to a tag in the calculation linkbase so that (with the prefix removed) it can be reliably tied back to a tag in the instance document?
I can think of various workarounds, but it just seems silly; isn't there somewhere I can look in the calculation linkbase or elsewhere that will just TELL me exactly what prefix is added?
As some possibly relevant nuance: lots of tags in lots of filings, of course, have a prefix like 'us-gaap', indicating the us-gaap namespace. But that doesn't seem to guarantee that a tag in the calculation linkbase will look like 'us-gaapAccountsPayableCurrent' rather than 'loc_us-gaapAccountsPayableCurrent' or 'us-gaap:AccountsPayableCurrent' or some other variation of the basic pattern, all of which, of course, look different to BeautifulSoup.
Can anyone point me in the right direction?
PHIL:WriteoffOfFinancingCosts is the name of the XBRL concept, while loc_PHILWriteoffOfFinancingCosts is the (calculation linkbase) label of the locator pointing to the concept PHIL:WriteoffOfFinancingCosts. This mechanism is the way linkbases connect concepts together: each locator is a "proxy" to a concept.
loc_PHILWriteoffOfFinancingCosts is thus an internal detail of the calculation linkbase. The names of linkbase labels are in principle "free to choose"; there are conventions that have established themselves (such as prefixing with loc_), but I would not rely on them. Rather, you can "follow the trail" by looking at the definition of the linkbase label:
<link:loc xlink:type="locator"
xlink:href="phil-20200630.xsd#PHIL_WriteoffOfFinancingCosts"
xlink:label="loc_PHILWriteoffOfFinancingCosts" />
Where you see, thanks to the xlink:href attribute, that this locator points to the concept with the ID PHIL_WriteoffOfFinancingCosts in file phil-20200630.xsd.
<element id="PHIL_WriteoffOfFinancingCosts"
name="WriteoffOfFinancingCosts" .../>
And you can see that the local name of this concept is WriteoffOfFinancingCosts. It is in the namespace commonly associated with the prefix PHIL:, but the prefix never appears in the concept definition, because all concepts in that file are in the same target namespace. How do we know this? Because at the top of the xsd file it says targetNamespace="http://phiglobal.com/20200630", and the prefix PHIL: is attached to this same namespace in the instance file phil-20200630.xml with xmlns:PHIL="http://phiglobal.com/20200630".
It is common practice to choose concept IDs with the prefix followed by underscore followed by the local name. Some users rely on it, but following the levels of indirection, in spite of being more complex, is "safer": linkbase label loc_PHILWriteoffOfFinancingCosts -> concept ID PHIL_WriteoffOfFinancingCosts -> concept local name WriteoffOfFinancingCosts -> concept's fully qualified name PHIL:WriteoffOfFinancingCosts.
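To make the chain concrete, here is a minimal sketch using lxml. It assumes the calculation linkbase and the schema its locators point into have been saved locally as calc.xml and phil-20200630.xsd; those file names, and the assumption that every locator points into that single schema, are mine:

from lxml import etree

XLINK = "http://www.w3.org/1999/xlink"
XS = "http://www.w3.org/2001/XMLSchema"
LINK = "http://www.xbrl.org/2003/linkbase"

calc = etree.parse("calc.xml")             # hypothetical local copy of the calculation linkbase
schema = etree.parse("phil-20200630.xsd")  # the schema the locators point into

for loc in calc.iter("{%s}loc" % LINK):
    label = loc.get("{%s}label" % XLINK)                       # e.g. loc_PHILWriteoffOfFinancingCosts
    concept_id = loc.get("{%s}href" % XLINK).split("#", 1)[1]  # e.g. PHIL_WriteoffOfFinancingCosts
    element = schema.find(".//{%s}element[@id='%s']" % (XS, concept_id))
    local_name = element.get("name")                           # e.g. WriteoffOfFinancingCosts
    namespace = schema.getroot().get("targetNamespace")
    print(label, "->", "{%s}%s" % (namespace, local_name))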
You probably notice how complex this is. In fact, this is the reason why it is worth using an XBRL processor, which will do all of this for you.
@Ghislain Fourny: Many thanks. I'm glad to know that I wasn't crazy for finding the situation complex. Knowing now that the linkbase labels are "free to choose", here is the specific BeautifulSoup workaround that I've come up with, in case anyone is interested:
import requests
from bs4 import BeautifulSoup

labeldict = {}
resp = requests.get(calcurl, headers=headers)  # calcurl and headers are defined elsewhere
ctext = resp.text
soup = BeautifulSoup(ctext, 'lxml')
tags = soup.find_all()
for tag in tags:
    if tag.name == 'link:loc':
        if tag.has_attr('xlink:href') and tag.has_attr('xlink:label'):
            href = tag['xlink:href']
            firstsplit = href.split('#')[1]   ## gets the part of the link after the pound symbol
            value = firstsplit.split('_')[1]  ## gets the part after the underscore
            key = tag['xlink:label']
            labeldict[key] = value
This results in a dictionary where the keys are the 'loc_PHIL'-style label names and the values are the plain concept names, e.g. labeldict['loc_PHILWriteoffOfFinancingCosts'] = 'WriteoffOfFinancingCosts'
This assumes that xsd links will always follow a format of '...#..._concept'. I haven't found any that don't follow that format, but that's not a guarantee.
I'd like to create an XML namespace mapping (e.g., to use in findall calls as in the Python documentation of ElementTree). Given that the definitions seem to exist as attributes of the xbrl root element, I'd have thought I could just examine the attrib attribute of the root element within my ElementTree. However, the following code
from io import StringIO
import xml.etree.ElementTree as ET
TEST = '''<?xml version="1.0" encoding="utf-8"?>
<xbrl
xml:lang="en-US"
xmlns="http://www.xbrl.org/2003/instance"
xmlns:country="http://xbrl.sec.gov/country/2021"
xmlns:dei="http://xbrl.sec.gov/dei/2021q4"
xmlns:iso4217="http://www.xbrl.org/2003/iso4217"
xmlns:link="http://www.xbrl.org/2003/linkbase"
xmlns:nvda="http://www.nvidia.com/20220130"
xmlns:srt="http://fasb.org/srt/2021-01-31"
xmlns:stpr="http://xbrl.sec.gov/stpr/2021"
xmlns:us-gaap="http://fasb.org/us-gaap/2021-01-31"
xmlns:xbrldi="http://xbrl.org/2006/xbrldi"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
</xbrl>'''
xbrl = ET.parse(StringIO(TEST))
print(xbrl.getroot().attrib)
produces the following output:
{'{http://www.w3.org/XML/1998/namespace}lang': 'en-US'}
Why aren't any of the namespace attributes showing up in root.attrib? I'd at least expect xmlns to be in the dictionary, given that it has no prefix.
What have I tried?
The following code seems to work to generate the namespace mapping:
print({prefix: uri for key, (prefix, uri) in ET.iterparse(StringIO(TEST), events=['start-ns'])})
output:
{'': 'http://www.xbrl.org/2003/instance',
'country': 'http://xbrl.sec.gov/country/2021',
'dei': 'http://xbrl.sec.gov/dei/2021q4',
'iso4217': 'http://www.xbrl.org/2003/iso4217',
'link': 'http://www.xbrl.org/2003/linkbase',
'nvda': 'http://www.nvidia.com/20220130',
'srt': 'http://fasb.org/srt/2021-01-31',
'stpr': 'http://xbrl.sec.gov/stpr/2021',
'us-gaap': 'http://fasb.org/us-gaap/2021-01-31',
'xbrldi': 'http://xbrl.org/2006/xbrldi',
'xlink': 'http://www.w3.org/1999/xlink',
'xsi': 'http://www.w3.org/2001/XMLSchema-instance'}
But yikes is it gross to have to parse the file twice.
As for the answer to your specific question (why the attrib list doesn't contain the namespace prefix declarations), sorry for the unsatisfying answer: because they're not attributes.
http://www.w3.org/XML/1998/namespace is a special namespace that doesn't act like the other namespaces in your document: attributes in it (like xml:lang) are ordinary attributes, which is why one shows up in attrib. An xmlns:prefix="uri" pair, on the other hand, is a special thing, a namespace prefix declaration, which is different from an attribute on a node or element. I don't have a reference for this, but it holds true in at least a half dozen (correct) implementations of XML parsers that I've used, including those from IBM, Microsoft and Oracle.
As for the ugliness of reparsing the file, I feel your pain, but with a single parse() call it is unavoidable. As tdelaney so well pointed out, you may not assume that all of your namespace declarations or prefixes are on your root element.
Be prepared for the possibility of the same prefix being redefined with a different namespace on every node in your document. This can happen, and a library must handle it correctly, even if it never occurs in your document (or worse, has never occurred so far).
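That said, a single iterparse pass can build the tree and collect every start-ns event along the way, wherever the declaration occurs. A sketch (note that the dict flattens redefined prefixes to "last one wins", which is exactly the hazard described above):

from io import StringIO
import xml.etree.ElementTree as ET

def parse_with_nsmap(source):
    # One pass: collect namespace declarations and build the tree together.
    nsmap = {}
    root = None
    for event, payload in ET.iterparse(source, events=('start-ns', 'start')):
        if event == 'start-ns':
            prefix, uri = payload
            nsmap[prefix] = uri   # a redefined prefix overwrites the earlier URI
        elif root is None:
            root = payload        # the first 'start' event carries the root element
    return root, nsmap

root, nsmap = parse_with_nsmap(StringIO(TEST))
print(nsmap['us-gaap'])           # http://fasb.org/us-gaap/2021-01-31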
Consider whether you are shoehorning text processing into parsing or querying XML when there may be a better solution, like XPath or XQuery. There have been some good recent changes to, and Python wrappers for, Saxon, even though its pricing model has changed.
I have a website I'm building running off Python 2.7 and using ElementTree to build the HTML on the fly. I have no problem creating the elements and appending them to the tree. It's where I have to insert links in the middle of a large paragraph that I am stumped. This is easy when it's done in text, but this is doing it via XML. Here's what I mean:
Sample text:
lawLine = "..., a vessel as defined in Section 21 of the Harbors and Navigation Code which is inhabited and designed for habitation, an inhabited floating home as defined in subdivision (d) of Section 18075.55 of the Health and Safety Code, ..."
To add that text to the HTML as H4-style text, I typically use:
h4 = ET.Element('h4')
htmlTree.append(h4)
h4.text = lawLine
I need to add links at the word "Section" and the numbers associated with it, but I can't simply create a new element "a" in the middle of a paragraph and add it to the HTML tree, so I'm trying to build that piece as text, then do ET.fromstring and append it to the tree:
thisLawType = 'PC'
matches = re.findall(r'Section [0-9.]*', lawLine)
if matches:
    lawLine = """<h4>{0}</h4>""".format(lawLine)
    for thisMatch in matches:
        ## The anchor markup was lost when this question was rendered; it was
        ## something along these lines (the href pattern is a guess):
        thisMatchLinked = """<a href="codes.php?type={0}&section={1}">{2}</a>""".format(
            thisLawType, thisMatch.replace('Section ', ''), thisMatch)
        lawLine = lawLine.replace(thisMatch, thisMatchLinked)
    htmlBody.append(ET.fromstring(lawLine))
I am getting "xml.etree.ElementTree.ParseError: not well-formed" errors when I do ET.fromstring. Is there a better way to do this in ElementTree? I'm sure there are better extensions out there, but my work environment is limited to Python 2.7 and the standard library. Any help would be appreciated. Thanks!
Evan
The xml you are generating is indeed not well formed, because of the presence of & in thisMatchLinked. It's one of the special characters which need to be escaped (see an interesting explanation here).
So try replacing & with &amp; and see if it works.
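A sketch of that fix applied inside the loop, using xml.sax.saxutils.escape (which turns & into &amp;, < into &lt;, and > into &gt;); the URL pattern here is only a placeholder:

from xml.sax.saxutils import escape

section = thisMatch.replace('Section ', '')
href = "codes.php?type={0}&section={1}".format(thisLawType, section)  # hypothetical URL pattern
# escape() makes the raw & safe, so ET.fromstring can parse the result
thisMatchLinked = '<a href="{0}">{1}</a>'.format(escape(href), escape(thisMatch))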
[screenshot of the source .xml file]
From the above .xml file I am extracting the article-id, article-title, abstract, and keywords. For normal text inside a single tag I get correct results, but not for text spread across multiple tags, such as:
<title-group>
<article-title>
Acetylcholinesterase-Inhibiting Activity of Pyrrole Derivatives from a Novel Marine Gliding Bacterium,
<italic>Rapidithrix thailandica</italic>
</article-title>
</title-group>
The abstract has the same kind of nested markup.
I got output as:
OrderedDict([(u'italic', u'Rapidithrix thailandica'), ('#text', u'Acetylcholinesterase-Inhibiting Activity of Pyrrole Derivatives from a Novel Marine Gliding Bacterium,')])
The code has treated the nested tag as separate text, and the output is not in the original sequence either. How can I simply extract the full text, as in "Acetylcholinesterase-Inhibiting Activity of Pyrrole Derivatives from a Novel Marine Gliding Bacterium, Rapidithrix thailandica"? I am using the Python code below to perform this task.
import xmltodict
import os
from os.path import basename
import re
with open('2630847.nxml') as fd:
    doc = xmltodict.parse(fd.read())

pmc_id = doc['article']['front']['article-meta']['article-id'][1]['#text']
article_title = doc['article']['front']['article-meta']['title-group']['article-title']
y = doc['article']['front']['article-meta']['abstract']
y = y.items()[0]
article_abstract = [g.encode('ascii', 'ignore') for g in y][1]
z = doc['article']['front']['article-meta']['kwd-group']['kwd']
zz = [g.encode('ascii', 'ignore') for g in z]
article_keywords = ",".join(zz).replace(",", " ")

fout = open(str(pmc_id) + ".txt", "w")
fout.write(str(pmc_id) + "\n" + str(article_title) + ". " + str(article_abstract) + ". " + str(article_keywords))
fout.close()
Can somebody please suggest corrections?
xmltodict will likely be hard to use for your data. PMC journal articles are definitely not what its authors had in mind. Putting any but the most trivial XML into xmltodict is pounding a round peg into a square hole: you might succeed, but it won't be pretty. I explain this further below under "tldr".
Instead, I suggest you use a library whose data model fits your data better, such as xml.dom.minidom or recent versions of BeautifulSoup. In many such libraries you just load the document with one call and then call some function like innerText() to get all of its text content. You could even load the document into a browser and read the JavaScript innerText property to get what you want. If the tool you choose doesn't provide an innertext() already, it is easy to write:
from xml.dom.minidom import Text, Element

def innertext(node):
    # Concatenate all descendant text nodes, in document order.
    t = ""
    for curNode in node.childNodes:
        if isinstance(curNode, Text):
            t += curNode.nodeValue
        elif isinstance(curNode, Element):
            t += innertext(curNode)  # recurse into child elements
    return t
You could tweak that to put spaces between the text nodes, depending on your data.
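For example, with minidom against the question's file (a usage sketch; indexing [0] assumes a single title element):

from xml.dom.minidom import parse

doc = parse('2630847.nxml')
title = doc.getElementsByTagName('article-title')[0]
print(innertext(title))
# Acetylcholinesterase-Inhibiting Activity of Pyrrole Derivatives from a
# Novel Marine Gliding Bacterium, Rapidithrix thailandica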
Hope that helps.
==tldr==
xmltodict makes an admirable attempt at making XML "as simple as possible"; but IMHO it errs in making it simpler than possible.
xmltodict basically works by turning every element into a dict, with its children as the dict items, keyed by their element names. But in many cases (such as yours), XML data isn't very much like that at all. For example, an element can have many children with the same name, but a dict can't.
So xmltodict has to do something special. It turns adjacent instances of the same element type into an array (without the element type). Here's an example excerpted from https://github.com/martinblech/xmltodict:
<and>
<many>elements</many>
<many>more elements</many>
</and>
becomes:
"and": {
"many": [
"elements",
"more elements"
]
},
First off, this means that xmltodict always loses the ordering information about child elements unless they are of the same type. So a section that contains a mix of paragraphs, lists, blockquotes, and so on, will either fail to load in xmltodict, or have all the scattered instances of each kind of child gathered together, completely losing their order.
The xmltodict approach also introduces frequent special-cases -- for example, you can't just get a list of all the children, or use len() to find out how many there are, etc. etc., because at every step you have to check whether you're really at a child element, or at a list of them.
Looking at xmltodict's own examples, you'll see that they mostly consist of walking down the tree by element names, but every now and then there's an integer subscript -- that's for the cases where these arrays are needed. But unless the data is unusually simple (which yours isn't), you won't know where that is. For example, if one DIV in an HTML document happens to contain only one P, the code to access the P needs one fewer subscript than with another DIV that happens to have more than one P.
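A tiny demonstration of that inconsistency (hypothetical input, but this is xmltodict's documented behavior):

import xmltodict

one = xmltodict.parse("<div><p>only paragraph</p></div>")
two = xmltodict.parse("<div><p>first</p><p>second</p></div>")

print(one['div']['p'])     # 'only paragraph' -- a plain string
print(two['div']['p'][1])  # 'second'         -- now a list, so a subscript is needed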
It seems to me undesirable that the number of subscripts to get to something depends on how many siblings it has, and their types.
Alas, the structure still isn't good enough. Since child elements may have their own child elements, just making them strings in that extra array won't be enough. Sometimes they'll have to be dicts again, with some of their items in turn perhaps being arrays, some of whose items may be dicts, and so on. Writing the correct traversal algorithm to gather up the text is significantly harder than the DOM one shown above.
To be completely fair, there is some XML in which the order doesn't matter logically -- for example, you could export a SQL table into an XML file, using a container element for each record with a child element for each field. The order of fields is not information, so if you load such XML into xmltodict, losing the order doesn't matter. Likewise if you serialized Python data that was already just a dict. But those are very specialized edge cases. xmltodict might be an excellent choice for a case like that -- but the articles you're looking at are very far from that.
I am trying to find a particular tag in an xbrl file. I originally tried using the python-xbrl package, but it is not exactly what I want, so I based my code on the one available from the package.
Here's the part of xbrl that I am interested in
<us-gaap:LiabilitiesCurrent contextRef="eol_PE2035----1510-Q0008_STD_0_20150627_0" unitRef="iso4217_USD" decimals="-6" id="id_5025426_6FEF05CB-B19C-4D84-AAF1-79B431731049_1_24">65285000000</us-gaap:LiabilitiesCurrent>
<us-gaap:Liabilities contextRef="eol_PE2035----1510-Q0008_STD_0_20150627_0" unitRef="iso4217_USD" decimals="-6" id="id_5025426_6FEF05CB-B19C-4D84-AAF1-79B431731049_1_28">147474000000</us-gaap:Liabilities>
Here is the code (the python-xbrl package is based on beautifulsoup4 and several other packages):
liabilities = xbrl.find_all(name=re.compile("(us-gaap:Liabilities)",
                                            re.IGNORECASE | re.MULTILINE))
I get the value for us-gaap:LiabilitiesCurrent, but I want value for us-gaap:Liabilities.
Right now, as soon as it finds a match, it stores it. But in many cases it's the wrong match, due to the tag format in xbrl. I believe I need to change the re.compile() part to make it work correctly.
I'd be very wary about using this approach to parsing XBRL (or indeed, any XML with namespaces in it). "us-gaap:Liabilities" is a QName, consisting of a prefix ("us-gaap") and a local name ("Liabilities"). The prefix is just a shorthand for a full namespace URI such as "http://fasb.org/us-gaap/2015-01-31", which is defined by a namespace declaration, usually at the top of the document. If you look at the top of the document you'll see something like:
xmlns:us-gaap="http://fasb.org/us-gaap/2015-01-31"
This means that within the scope of this document, "us-gaap" is taken to mean that full namespace URI.
XML creators are free to use whatever prefixes they want, so there is no guarantee that the element will actually be called "us-gaap:Liabilities" across all documents that you encounter.
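For that reason, matching on the namespace URI plus the local name is more robust than matching on the prefixed name. A minimal sketch with lxml (the file name is a placeholder, and the us-gaap namespace URI varies by taxonomy year):

from lxml import etree

US_GAAP = "http://fasb.org/us-gaap/2015-01-31"  # check the filing's own xmlns declaration

tree = etree.parse("filing.xml")                # hypothetical local copy of the instance
# Exact tag match: finds us-gaap:Liabilities but not us-gaap:LiabilitiesCurrent
for fact in tree.iter("{%s}Liabilities" % US_GAAP):
    print(fact.get("contextRef"), fact.text)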
beautifulsoup4 has very limited support for namespaces, so I wouldn't recommend it as a starting point for building an XBRL processor. It may be worth taking a look at the Arelle project, which is a full XBRL processor, and will make it easier to do other tasks such as finding the labels and other information associated with facts in the taxonomy.
Try it with a dollar sign at the end, which anchors the pattern so nothing can follow the match:
liabilities = xbrl.find_all(name=re.compile("(us-gaap:Liabilities$)",
                                            re.IGNORECASE | re.MULTILINE))
I am looking for an easily implemented HTML generator for Python. I found HTML.py, but there is no way to add CSS attributes (id, class) to a table.
Dominate is an HTML generation library that lets you easily create tags. In dominate, python reserved words are prefixed with an underscore, so it would look like this:
from dominate.tags import *
t = div(table(_id="the_table"), _class="tbl")
print(t)
<div class="tbl">
<table id="the_table"></table>
</div>
Disclaimer: I am the author of dominate
If you want programmatic generation rather than templating, Karrigell's HTMLTags module is one possibility; it can include e.g. the class attribute (which would be a reserved word in Python) by the trick of capitalizing its initial. Quoting the docs:
Attributes with the same name as Python keywords (class, type) must be capitalized:
print DIV('bar', Class="title") ==> <DIV class="title">bar</DIV>
HTML generation is usually done with one of the countless HTML templating languages available for Python. Personally I like Templess, but Genshi is probably the most popular. There are so many that any list of them is highly likely to be incomplete.
Otherwise you might want to use lxml, where you can generate it in a more programmatic, XML-ish way. Although I have a hard time seeing the benefit.
There's the venerable HTMLGen by Robin Friedrich, which is hard to find but still available here (dated 2001, but HTML hasn't changed much since then ;-). There's also xist. Of course nowadays HTML generation, as Lennart points out, is generally better done using templating systems such as Jinja or Mako.
This is one ultra-simple HTML generator I have written. I use it at build time to generate HTML. If one is generating HTML pages at run time, then there are better options available.
Here is the link
http://pypi.python.org/pypi/sphc
And a quick example
>>> import sphw
>>> tf = sphw.TagFactory()
>>> div = tf.DIV("Some Text here.", Class='content', id='foo')
>>> print(div)
<DIV Class="content", id="foo">Some Text here.</DIV>
Actually you can add any attribute such as id and class to objects in HTML.py (http://www.decalage.info/python/html).
attribs is an optional parameter of Table, TableRow and TableCell classes. It is a dictionary of additional attributes you would like to set. For example, the following code sets id and class for a table:
import HTML
table_data = [
['Last name', 'First name', 'Age'],
['Smith', 'John', 30],
['Carpenter', 'Jack', 47],
['Johnson', 'Paul', 62],
]
htmlcode = HTML.table(table_data,
attribs={'id':'table1', 'class':'myclass'})
print htmlcode
The same parameter can be used with TableRow and TableCell objects to format rows and cells. It does not exist for columns yet, but should be easy to implement if needed.
Ok, here's another html generator, or I prefer to think of it as a compiler.
https://pypi.python.org/pypi/python-html-compiler
This is a set of base classes that can be used to define tags and attributes. Thus a tag class has attributes and children. Children are themselves Tag classes that have attributes and children etc etc. Also you can set parameters that start with your root class and work up the various branches.
This will allow you to define all the tag classes you want thus be able to create customised classes and implement any tags or attributes you want.
Just started on this, so if anybody wants to test :)
For HTML generation, or any text generation, Jinja is a powerful template engine.
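A minimal sketch of what that looks like for the table in this question (the variable names are mine):

from jinja2 import Template

tmpl = Template(
    '<table id="{{ table_id }}" class="{{ cls }}">\n'
    '{% for row in rows %}<tr>{% for cell in row %}<td>{{ cell }}</td>{% endfor %}</tr>\n'
    '{% endfor %}</table>')

print(tmpl.render(table_id='the_table', cls='tbl',
                  rows=[['Smith', 'John', 30], ['Carpenter', 'Jack', 47]]))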
You might be interested in some of the Python HAML implementations. HAML is like HTML shorthand and only takes a few minutes to learn. There's a CSS version called SASS too.
http://haml.hamptoncatlin.com/
"Is there a HAML implementation for use with Python and Django" talks about Python and HAML a bit more.
I'm using HAML as much as possible when I'm programming in Ruby. And, as a footnote, there's also been some work on Perl modules that work with the nice Mojolicious MVC framework:
http://metacpan.org/pod/Text::Haml