XML injections in Python using xml.dom.minidom

I scanned Python source code using AppScan and it says that the code contains potential vulnerabilities (XML Injection). For example:
import xml.dom.minidom
...
dom = xml.dom.minidom.parse(filename)
...
document = xml.dom.minidom.parseString(xmlStr)
...
I installed defusedxml and replaced every parse call that used the standard Python xml package with parse/parseString from defusedxml.minidom and defusedxml.cElementTree:
import defusedxml.minidom
...
dom = defusedxml.minidom.parse(filename)
...
document = defusedxml.minidom.parseString(xmlStr)
...
Those vulnerabilities are gone from the scan report, but AppScan still notifies me about vulnerabilities wherever any functions or classes are imported from the standard xml package, for example the ElementTree classes used to modify or build an XML tree:
from xml.etree.cElementTree import (  # vulnerability here
    SubElement, Element, ElementTree)
import defusedxml.cElementTree as et
...
template = et.parse(template_filename) # safe parsing
root = template.getroot()
email_list_el = root.find('emails').find('list')
for email_address in to_list:
    SubElement(email_list_el, 'string').text = email_address
root.find('subject')[0].text = subject
root.find('body')[0].text = body
...
Can this be considered a vulnerability if xml.dom.minidom is used only for writing XML?

ElementTree is not secure against maliciously constructed data; see the list of XML vulnerabilities in the xml module documentation. Consider using defusedxml instead.
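For context, a minimal sketch of the split the question describes (untrusted input is parsed through defusedxml, while the stdlib ElementTree classes are used only to build and serialize new trees); the element names and address below are placeholders:
import defusedxml.ElementTree as safe_et  # hardened parsing
from xml.etree.ElementTree import Element, SubElement, tostring  # building only

# Anything that parses input goes through defusedxml.
incoming = safe_et.fromstring('<emails><list/></emails>')

# Constructing and serializing output does not parse any input at all,
# so it uses the plain stdlib classes.
out = Element('emails')
lst = SubElement(out, 'list')
SubElement(lst, 'string').text = 'user@example.com'
print(tostring(out))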


Parse XML with Python resolving an external ENTITY reference

My S1000D XML specifies a DOCTYPE with a reference to a public URL, which in turn references a number of other files that contain all the valid character entities. I've used xml.etree.ElementTree and lxml to try to parse it, and both give a parse error indicating:
undefined entity &minus;: line 82, column 652
Even though &minus; is a valid entity according to the ENTITY reference specified.
The top of the XML is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE dmodule [
<!ENTITY % ISOEntities PUBLIC 'ISO 8879-1986//ENTITIES ISO Character Entities 20030531//EN//XML' 'http://www.s1000d.org/S1000D_4-1/ent/ISOEntities'>
%ISOEntities;]>
If you go out and get http://www.s1000d.org/S1000D_4-1/ent/ISOEntities, it includes 20 other .ent files, one of which, iso-tech.ent, contains the line:
<!ENTITY minus "&#x2212;"> <!-- MINUS SIGN -->
Line 82 of the XML file, near column 652, contains the following:
....Refer to 70&minus;41....
How can I run a Python script to parse this file without getting the undefined entity error?
Sorry, I don't want to specify, for example, parser.entity['minus'] = chr(0x2212). I did that as a quick fix, but there are many character entity references.
I would like the parser to resolve the entity references against the DTD specified in the XML.
I'm surprised, but I've gone around the sun and back and haven't found how to do this (or maybe I have but couldn't follow it).
If I update my XML file and add
<!ENTITY minus "&#x2212;">
it won't fail, so it's not the XML.
It fails on the parse. Here's the code I use for ElementTree:
fl = os.path.join(pth, fn)
try:
    root = ET.parse(fl)
except ParseError as p:
    print("ParseError : ", p)
Here's the code I use for lxml:
fl = os.path.join(pth, fn)
try:
    parser = etree.XMLParser(load_dtd=True, resolve_entities=True)
    root = etree.parse(fl, parser=parser)
except etree.XMLSyntaxError as pe:
    print("lxml XMLSyntaxError: ", pe)
I would like the parser to load the ENTITY reference so that it knows that &minus; and all the other character entities specified in those files are valid.
Thank you so much for your advice and help.
I'm going to answer for lxml. No reason to consider ElementTree if you can use lxml.
I think the piece you're missing is no_network=False in the XMLParser; it's True by default.
Example...
XML Input (test.xml)
<!DOCTYPE doc [
<!ENTITY % ISOEntities PUBLIC 'ISO 8879-1986//ENTITIES ISO Character Entities 20030531//EN//XML' 'http://www.s1000d.org/S1000D_4-1/ent/ISOEntities'>
%ISOEntities;]>
<doc>
<test>Here's a test of minus: &minus;</test>
</doc>
Python
from lxml import etree
parser = etree.XMLParser(load_dtd=True,
                         no_network=False)
tree = etree.parse("test.xml", parser=parser)
etree.dump(tree.getroot())
Output
<doc>
<test>Here's a test of minus: −</test>
</doc>
If you wanted the entity reference retained, add resolve_entities=False to the XMLParser.
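For instance, a minimal sketch of that variation (same test.xml as above, and no_network=False is still needed so the DTD can be fetched); the dumped output then keeps the literal &minus; reference instead of the expanded character:
from lxml import etree
parser = etree.XMLParser(load_dtd=True,
                         no_network=False,
                         resolve_entities=False)
tree = etree.parse("test.xml", parser=parser)
etree.dump(tree.getroot())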
Also, instead of going out to an external location to resolve the parameter entity, consider setting up an XML Catalog. This will let you resolve public and/or system identifiers to local versions.
Example using same XML input above...
XML Catalog ("catalog.xml" in the directory "catalog test" (space used in directory name for testing))
<!DOCTYPE catalog PUBLIC "-//OASIS//DTD XML Catalogs V1.1//EN" "http://www.oasis-open.org/committees/entity/release/1.1/catalog.dtd">
<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
<!-- The path in @uri is relative to this file (catalog.xml). -->
<uri name="http://www.s1000d.org/S1000D_4-1/ent/ISOEntities" uri="./ents/ISOEntities_stackoverflow.ent"/>
</catalog>
Entity File ("ISOEntities_stackoverflow.ent" in the directory "catalog test/ents". Changed the value to "BAM!" for testing)
<!ENTITY minus "BAM!">
Python (Changed no_network to True for additional evidence that the local version of http://www.s1000d.org/S1000D_4-1/ent/ISOEntities is being used.)
import os
from urllib.request import pathname2url
from lxml import etree

# The XML_CATALOG_FILES environment variable is used by libxml2 (which is used by lxml).
# See http://xmlsoft.org/catalog.html.
try:
    xcf_env = os.environ['XML_CATALOG_FILES']
except KeyError:
    # Path to catalog must be a url.
    catalog_path = f"file:{pathname2url(os.path.join(os.getcwd(), 'catalog test/catalog.xml'))}"
    # Temporarily set the environment variable.
    os.environ['XML_CATALOG_FILES'] = catalog_path

parser = etree.XMLParser(load_dtd=True,
                         no_network=True)
tree = etree.parse("test.xml", parser=parser)
etree.dump(tree.getroot())
Output
<doc>
<test>Here's a test of minus: BAM!</test>
</doc>

Python - use lxml to return value of title.text attrib

I'm trying to figure out how to use lxml to parse the XML from a URL and return the value of the title attribute. Does anyone know what I have wrong, or what would return the title value/text? In the example below I want to return the value 'Weeds - S05E05 - Van Nuys - HD TV'.
XML from URL:
<?xml version="1.0" encoding="UTF-8"?>
<subsonic-response xmlns="http://subsonic.org/restapi" status="ok" version="1.8.0">
<song id="11345" parent="11287" title="Weeds - S05E05 - Van Nuys - HD TV" album="Season 5" artist="Weeds" isDir="false" created="2009-07-06T22:21:16" duration="1638" bitRate="384" size="782304110" suffix="mkv" contentType="video/x-matroska" isVideo="true" path="Weeds/Season 5/Weeds - S05E05 - Van Nuys - HD TV.mkv" transcodedSuffix="flv" transcodedContentType="video/x-flv"/>
</subsonic-response>
My current Python code:
import lxml
from lxml import html
from urllib2 import urlopen
url = 'https://myurl.com'
tree = html.parse(urlopen(url))
songs = tree.findall('{*}song')
for song in songs:
    print song.attrib['title']
With the above code I get no data returned; any ideas?
print out of tree =
<lxml.etree._ElementTree object at 0x0000000003348F48>
print out of songs =
[]
First of all, you are not actually using lxml in your code. You import the lxml HTML parser, but otherwise ignore it and just use the standard library xml.etree.ElementTree module instead.
Secondly, you search for data/song but you do not have any data elements in your document, so no matches will be found. And last, but not least, you have a document there that uses namespaces. You'll have to include those when searching for elements, or use a {*} wildcard search.
The following finds songs for you:
from lxml import etree
tree = etree.parse(URL) # lxml can load URLs for you
songs = tree.findall('{*}song')
for song in songs:
    print song.attrib['title']
To use an explicit namespace, you'd have to replace the {*} wildcard with the full namespace URL; the default namespace is available in the .nsmap namespace dictionary on the root element:
namespace = tree.getroot().nsmap[None]
songs = tree.findall('{%s}song' % namespace)
The whole issue is that the subsonic-response tag has an xmlns attribute, indicating that an XML namespace is in effect. The code below takes that into account and correctly picks up the song tags.
import xml.etree.ElementTree as ET
root = ET.parse('test.xml').getroot()
print root.findall('{http://subsonic.org/restapi}song')
Thanks for the help guys, I used a combination of both of yours to get it working.
import xml.etree.ElementTree as ET
from urllib2 import urlopen
url = 'https://myurl.com'
root = ET.parse(urlopen(url)).getroot()
for song in root:
    print song.attrib['title']

Python Minidom XML Query

I'm trying to query this XML with lxml:
<lista_tareas>
  <tarea id="1" realizzato="False" data_limite="12/10/2012" priorita="1">
    <description>XML TEST</description>
  </tarea>
  <tarea id="2" realizzato="False" data_limite="12/10/2012" priorita="1">
    <description>XML TEST2</description>
  </tarea>
</lista_tareas>
I wrote this code:
from lxml import etree
doc = etree.parse(file_path)
root = etree.Element("lista_tareas")
for x in root:
    z = x.Element("tarea")
    for y in z:
        element_text = y.Element("description").text
        print element_text
It doesn't print anything. Could you suggest how to do this?
You do not want to use minidom; use the ElementTree API instead. The DOM API is verbose and constrained, whereas the ElementTree API plays to Python's strengths.
The MiniDOM module doesn't offer any query API like you are looking for.
You can use the bundled xml.etree.ElementTree module, or you could install lxml, which offers more powerful XPath and other query options.
import xml.etree.ElementTree as ET
root = ET.parse('document.xml').getroot()
for c in root.findall("./Root_Node[@id='1']/sub_node"):
    pass  # Do something with c
Using lxml:
from lxml import etree
doc = etree.parse(source)
for c in doc.xpath("//Root_Node[@id='1']"):
    subnode = c.find("sub_node")
    # ... etc ...
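For reference, a minimal sketch of the same approach applied to the lista_tareas document from the question, assuming it is saved as tareas.xml (a made-up filename) with its closing </lista_tareas> tag:
import xml.etree.ElementTree as ET

root = ET.parse('tareas.xml').getroot()
# Each <tarea> is a direct child of the root; print its <description> text.
for tarea in root.findall('tarea'):
    print(tarea.find('description').text)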

Python ElementTree support for parsing unknown XML entities?

I have a set of super simple XML files to parse... but... they use custom defined entities. I don't need to map these to characters, but I do wish to parse and act on each one. For example:
<Style name="admin-5678">
  <Rule>
    <Filter>[admin_level]='5'</Filter>
    &maxscale_zoom11;
  </Rule>
</Style>
There is a tantalizing hint at http://effbot.org/elementtree/elementtree-xmlparser.htm that XMLParser has limited entity support, but I can't find the methods mentioned; everything gives errors:
#!/usr/bin/python
##
## Where's the entity support as documented at:
## http://effbot.org/elementtree/elementtree-xmlparser.htm
## In Python 2.7.1+ ?
##
from pprint import pprint
from xml.etree import ElementTree
from cStringIO import StringIO
parser = ElementTree.ElementTree()
#parser.entity["maxscale_zoom11"] = unichr(160)
testf = StringIO('<foo>&maxscale_zoom11;</foo>')
tree = parser.parse(testf)
#tree = parser.parse(testf,"XMLParser")
for node in tree.iter('foo'):
    print node.text
Which, depending on how you adjust the comments, gives:
xml.etree.ElementTree.ParseError: undefined entity: line 1, column 5
or
AttributeError: 'ElementTree' object has no attribute 'entity'
or
AttributeError: 'str' object has no attribute 'feed'
For those curious the XML is from the OpenStreetMap's mapnik project.
As @cnelson already pointed out in a comment, the chosen solution here won't work in Python 3.
I finally got it working. Quoted from this Q&A.
Inspired by this post, we can just prepend a DOCTYPE that defines the entities to the incoming raw HTML content, and then ElementTree works out of the box.
This works for both Python 2.6, 2.7, 3.3, 3.4.
import xml.etree.ElementTree as ET
html = '''<html>
<div>Some reasonably well-formed HTML content.</div>
<form action="login">
<input name="foo" value="bar"/>
<input name="username"/><input name="password"/>
<div>It is not unusual to see &nbsp; in an HTML page.</div>
</form></html>'''
magic = '''<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd" [
<!ENTITY nbsp '&#160;'>
]>''' # You can define more entities here, if needed
et = ET.fromstring(magic + html)
I'm not sure if this is a bug in ElementTree or what, but you need to call UseForeignDTD(True) on the expat parser to behave the way it did in the past.
It's a bit hacky, but you can do this by creating your own ElementTree.XMLParser instance, calling the method on its underlying xml.parsers.expat parser, and then passing it to parse():
from xml.etree import ElementTree
from cStringIO import StringIO
testf = StringIO('<foo>&moo_1;</foo>')
parser = ElementTree.XMLParser()
parser.parser.UseForeignDTD(True)
parser.entity['moo_1'] = 'MOOOOO'
etree = ElementTree.ElementTree()
tree = etree.parse(testf, parser=parser)
for node in tree.iter('foo'):
    print node.text
This outputs "MOOOOO"
Or using a mapping interface:
from xml.etree import ElementTree
from cStringIO import StringIO
class AllEntities:
    def __getitem__(self, key):
        # key is your entity name; you can do whatever you want with it here
        return key
testf = StringIO('<foo>&moo_1;</foo>')
parser = ElementTree.XMLParser()
parser.parser.UseForeignDTD(True)
parser.entity = AllEntities()
etree = ElementTree.ElementTree()
tree = etree.parse(testf, parser=parser)
for node in tree.iter('foo'):
    print node.text
This outputs "moo_1"
A more complex fix would be to subclass ElementTree.XMLParser and fix it there.
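For illustration, a rough sketch of that subclass idea (Python 2, relying on the same pure-Python XMLParser internals, .parser and .entity, used above, so as noted it won't carry over to Python 3 unchanged; the class names are made up):
from xml.etree import ElementTree
from cStringIO import StringIO

class AllEntities:
    def __getitem__(self, key):
        # Map every unknown entity name back to itself.
        return key

class ForeignEntityParser(ElementTree.XMLParser):
    # XMLParser that tolerates undefined entities instead of raising.
    def __init__(self, *args, **kwargs):
        ElementTree.XMLParser.__init__(self, *args, **kwargs)
        # Tell expat an external DTD may define entities we never load.
        self.parser.UseForeignDTD(True)
        self.entity = AllEntities()

tree = ElementTree.parse(StringIO('<foo>&moo_1;</foo>'),
                         parser=ForeignEntityParser())
print tree.getroot().text
This prints "moo_1", just like the mapping example above.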

What's the best way to handle &nbsp;-like entities in XML documents with lxml?

Consider the following:
from lxml import etree
from StringIO import StringIO
x = """<?xml version="1.0" encoding="utf-8"?>\n<aa> â</aa>"""
p = etree.XMLParser(remove_blank_text=True, resolve_entities=False)
r = etree.parse(StringIO(x), p)
This would fail with:
lxml.etree.XMLSyntaxError: Entity 'nbsp' not defined, line 2, column 11
This is because resolve_entities=False doesn't ignore them, it just doesn't resolve them.
If I use etree.HTMLParser instead, it creates html and body tags, plus a lot of other special handling it tries to do for HTML.
What's the best way to get the '&nbsp;â' text child under the aa tag with lxml?
You can't ignore entities as they are part of the XML definition. Your document is not well-formed if it doesn't have a DTD or standalone="yes" or if it includes entities without an entity definition in the DTD. Lie and claim your document is HTML.
https://mailman-mail5.webfaction.com/pipermail/lxml/2008-February/003398.html
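The question already notes the downside of etree.HTMLParser (it wraps everything in html and body tags); if that trade-off is acceptable, a minimal sketch of the "claim it is HTML" route could look like this, with the aa element pulled back out of the wrappers:
from lxml import etree
try:
    from StringIO import StringIO  # Python 2
except ImportError:
    from io import StringIO        # Python 3

# The HTML parser already knows standard HTML entities such as &nbsp;.
x = u'<aa>&nbsp;\u00e2</aa>'
tree = etree.parse(StringIO(x), etree.HTMLParser())
aa = tree.find('.//aa')
print(repr(aa.text))  # roughly u'\xa0\xe2'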
You can try lying and putting an XHTML DTD on your document. e.g.
from lxml import etree
try:
    from StringIO import StringIO
except ImportError:
    from io import StringIO

x = """<?xml version="1.0" encoding="utf-8"?>\n<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd" >\n<aa>&nbsp;â</aa>"""
p = etree.XMLParser(remove_blank_text=True, resolve_entities=False)
r = etree.parse(StringIO(x), p)
etree.tostring(r) # '<aa>&nbsp;â</aa>'
@Alex is right: your document is not well-formed XML, and so XML parsers will not parse it. One option is to pre-process the text of the document to replace the bogus entities with their UTF-8-encoded characters:
entities = [
    ('&nbsp;', u'\u00a0'),
    ('&acirc;', u'\u00e2'),
    ...
]
for before, after in entities:
    x = x.replace(before, after.encode('utf8'))
Of course, this can be broken by sufficiently weird "xml" also.
Your best bet is to fix your input XML documents to be well-formed XML.
When I was trying to do something similar, I just used x.replace('&', '&amp;') before parsing the string.
