Parsing a UTF-8 XML file - python

I want to parse a file with minidom:
import codecs
from xml.dom.minidom import parse

with codecs.open(fname, encoding="utf-8") as xml:
    dom = parse(xml)
This raises a UnicodeEncodeError. The XML file is UTF-8 without a BOM and has
<?xml version="1.0" encoding="utf-8"?>
in the first line.
If I first read the file, .encode("utf-8") it and pass it to parseString, it works. Is there a way to parse a UTF-8 XML file directly with minidom.parse?

Leave the decoding to the XML parser; it'll detect what codec to use. Open the file without converting to unicode:
with open(fname) as xml:
    dom = parse(xml)
Note the use of the standard function open() instead of codecs.open().
This applies to any XML parser; it is the job of the parser to determine from the XML prologue what codec to use for parsing the document. If no prologue is present then UTF-8 is the default.
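For instance, you can pass the filename (or a file object opened in binary mode) directly to minidom and let it read the declaration itself; a minimal sketch, with "data.xml" standing in for your file:

from xml.dom.minidom import parse

# Pass the filename straight to minidom; the parser reads the XML
# declaration and decodes the bytes accordingly.
dom = parse("data.xml")

# Equivalent, with an explicit binary-mode file object:
with open("data.xml", "rb") as f:
    dom = parse(f)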

Related

Remove "encoding" attribute from XML in Python

I am using python to do some conditional changes to an XML document. The incoming document has <?xml version="1.0" ?> at the top.
I'm using xml.etree.ElementTree.
How I'm serializing the changed XML:
filter_update_body = ET.tostring(root, encoding="utf8", method="xml")
The output has this at the top:
<?xml version='1.0' encoding='utf8'?>
The client wants the encoding attribute removed, but if I drop the encoding argument, the declaration either doesn't appear at all or comes out as encoding='us-ascii'.
Can this be done so the output matches: <?xml version="1.0" ?>?
(I don't know why it matters honestly but that's what I was told needed to happen)
As pointed out in this answer, there is no way to make ElementTree omit the encoding attribute. However, as @James suggested in a comment, it can be stripped from the resulting output like this:
filter_update_body = ET.tostring(root, encoding="utf8", method="xml")
filter_update_body = filter_update_body.replace(b"encoding='utf8'", b"", 1)
The b prefixes are required because ET.tostring() returns a bytes object whenever encoding != "unicode", so we need to call bytes.replace() with bytes arguments.
With encoding="unicode" (note that this is the literal string "unicode"), it returns a regular str. In that case the b prefixes can be omitted and we use good old str.replace().
It's worth noting that the choice between bytes and str also affects how the XML will eventually be written to a file. A bytes object should be written in binary mode, a str in text mode.
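A sketch of both variants (the root element here is just a placeholder):

import xml.etree.ElementTree as ET

root = ET.Element("root")  # placeholder document

# bytes variant: tostring() returns bytes, so strip the attribute with
# bytes.replace() and write the result in binary mode.
body = ET.tostring(root, encoding="utf8", method="xml")
body = body.replace(b"encoding='utf8'", b"", 1)
with open("out.xml", "wb") as f:
    f.write(body)

# str variant: encoding="unicode" returns a str, written in text mode.
# (By default no XML declaration is emitted in this mode, so there may be
# nothing to strip.)
text = ET.tostring(root, encoding="unicode", method="xml")
with open("out.xml", "w", encoding="utf-8") as f:
    f.write(text)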

ElementTree UnicodeEncodeError about Korean words in Python 2.7 [duplicate]

I have this char in an xml file:
<data>
<products>
<color>fumè</color>
</products>
</data>
I try to generate an instance of ElementTree with the following code:
string_data = open('file.xml')
x = ElementTree.fromstring(unicode(string_data.encode('utf-8')))
and I get the following error:
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe8' in position 185: ordinal not in range(128)
(NOTE: The position is not exact, I sampled the xml from a larger one).
How can I solve it? Thanks
If you stumbled upon this problem while using Requests (HTTP for Humans): response.text decodes the response by default; use response.content instead to get the undecoded bytes, so ElementTree can handle the decoding itself. Just remember to use the correct encoding.
More info: http://docs.python-requests.org/en/latest/user/quickstart/#response-content
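For example, assuming you're fetching the XML with Requests (the URL is a placeholder):

import requests
import xml.etree.ElementTree as ElementTree

response = requests.get("https://example.com/data.xml")
# response.content is the raw, undecoded bytes; ElementTree reads the
# encoding from the XML declaration itself.
root = ElementTree.fromstring(response.content)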
You need to decode utf-8 strings into a unicode object. So
string_data.encode('utf-8')
should be
string_data.decode('utf-8')
assuming string_data is actually a UTF-8 encoded string.
So to summarize: to get a UTF-8 byte string from a unicode object, you encode the unicode object (using the utf-8 codec); to turn a byte string into a unicode object, you decode the string using the appropriate encoding.
For more details on the concepts I suggest reading The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (not Python specific).
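A quick illustration of that round trip (Python 2 syntax to match the question, though it runs unchanged on Python 3):

# -*- coding: utf-8 -*-
text = u'fum\xe8'                  # unicode object
encoded = text.encode('utf-8')     # unicode -> UTF-8 byte string
decoded = encoded.decode('utf-8')  # UTF-8 byte string -> unicode
assert decoded == text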
You do not need to decode XML for ElementTree to work. XML carries its own encoding information (defaulting to UTF-8) and ElementTree does the work for you, outputting unicode:
>>> data = '''\
... <data>
... <products>
... <color>fumè</color>
... </products>
... </data>
... '''
>>> x = ElementTree.fromstring(data)
>>> x[0][0].text
u'fum\xe8'
If your data is contained in a file (or file-like) object, just pass the filename or the file object directly to the ElementTree.parse() function:
x = ElementTree.parse('file.xml')
Have you tried using the parse function instead of opening the file? (Opening it yourself would also require a .read() for .fromstring() to work.)
import xml.etree.ElementTree as ET
tree = ET.parse('file.xml')
root = tree.getroot()
# etc...
Most likely your file is not UTF-8. The è character can come from some other encoding, latin-1 for example.
The open() function does not return a string, it returns a file object.
Use open('file.xml').read() instead.

Remove non Unicode characters from xml database with Python

So I have a 9000-line XML database, saved as a .txt file, which I want to load in Python so I can do some formatting and remove unnecessary tags (I only need some of the tags, but there is a lot of unnecessary information) to make it readable. However, I am getting a UnicodeDecodeError: 'charmap' codec can't decode byte 0x8d in position 608814: character maps to <undefined>, which I assume means the program ran into a non-Unicode character. I am quite positive that these characters are not important to the program (the data I am looking for is all plain text, with no special symbols), so how can I remove all of them from the txt file when I can't even read the file without getting the UnicodeDecodeError?
One crude workaround is to decode the bytes from the file yourself and specify the error handling. EG:
for line in somefile:  # each line is a byte string (open the file in binary mode on Python 3)
    uline = line.decode('ascii', errors='ignore')
That will turn the line into a Unicode object in which any non-ascii bytes have been dropped. This is not a generally recommended approach - ideally you'd want to process XML with a proper parser, or at least know your file's encoding and open it appropriately (the exact details depend on your Python version). But if you're entirely certain you only care about ascii characters this is a simple fallback.
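A slightly fuller sketch of that workaround (the filename is a placeholder; on Python 3 the file has to be opened in binary mode so that the lines come back as bytes):

cleaned = []
with open('database.txt', 'rb') as somefile:  # binary mode: lines are bytes
    for line in somefile:
        # Silently drop any byte that is not valid ASCII.
        cleaned.append(line.decode('ascii', errors='ignore'))
text = ''.join(cleaned)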
The error suggests that you're calling the open() function without specifying an explicit character encoding, so locale.getpreferredencoding(False) is used instead (e.g., cp1252). The error says that this is not an appropriate encoding for the input.
An XML document may contain a declaration at the very beginning that specifies the encoding explicitly. Otherwise the encoding is defined by a BOM, or it is utf-8. If your copy-pasting and saving of the file hasn't messed up the encoding and you don't see a line such as <?xml version="1.0" encoding="iso-8859-1" ?>, then open the file using utf-8:
with open('input-xml-like.txt', encoding='utf-8', errors='ignore') as file:
    ...
If the input is an actual XML then just pass it to an XML parser instead:
import xml.etree.ElementTree as etree
tree = etree.parse('input.xml')

Add xml metatag on Python 2.5.4

I am writing a Python 2.5.4 script that uses xml.etree.ElementTree. I'm using the write function to write the resulting XML to a file. Thing is, I need to have an XML declaration: <?xml version="1.0" encoding="UTF-8"?>, and as noted here:
https://docs.python.org/3/library/xml.etree.elementtree.html#xml.etree.ElementTree.ElementTree.write
All that needs to be done is passing True for the xml_declaration parameter of write(). But that only seems to work on Python 2.6 and up.
Any idea how this can be done on Python 2.5.4 (if it is possible at all)?
It seems the easiest way to force writing an XML declaration is to pass an encoding other than us-ascii or utf-8. This is not really documented, but a quick look at the source of the write() method reveals this:
def write(self, file, encoding="us-ascii"):
    assert self._root is not None
    if not hasattr(file, "write"):
        file = open(file, "wb")
    if not encoding:
        encoding = "us-ascii"
    elif encoding != "utf-8" and encoding != "us-ascii":
        file.write("<?xml version='1.0' encoding='%s'?>\n" % encoding)
    self._write(file, self._root, encoding, {})
The comparison is case-sensitive, so if you use encoding="UTF-8" (not encoding="utf-8"), you'll end up with exactly what you want.
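So on Python 2.5 something like the following should emit the declaration (a sketch, not verified against 2.5.4 specifically; the filename is a placeholder, and note that newer ElementTree versions behave differently and offer an explicit xml_declaration argument instead):

import xml.etree.ElementTree as ET

root = ET.Element("root")  # placeholder document
tree = ET.ElementTree(root)
# "UTF-8" != "utf-8" in the case-sensitive check above, so write()
# prepends <?xml version='1.0' encoding='UTF-8'?> to the output.
tree.write("out.xml", encoding="UTF-8")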

Forcing encoding on bad XML files with ElementTree

A big set of XML files has the wrong encoding declared. It should be utf-8, but the content has latin-1 characters all over the place. What's the best way to parse this content?
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
Edit: this is happening with Adobe InDesign IDML files; it seems the "Content" text is latin-1, but the rest could be utf-8. I'm leaning towards normal parsing as utf-8, then re-encoding the Unicode text chunks in Content to utf-8 bytes and re-parsing those as latin-1. What a mess.
ಠ_ಠ
You can override the encoding specified in the XML when you parse it:
class xml.etree.ElementTree.XMLParser(html=0, target=None, encoding=None)
    Element structure builder for XML source data, based on the expat parser. html are predefined HTML entities. This flag is not supported by the current implementation. target is the target object. If omitted, the builder uses an instance of the standard TreeBuilder class. encoding is optional. If given, the value overrides the encoding specified in the XML file.
docs
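For example, a sketch assuming the files are really latin-1 throughout (the filename is a placeholder):

import xml.etree.ElementTree as ET

# The encoding argument overrides whatever the XML declaration claims.
parser = ET.XMLParser(encoding="latin-1")
tree = ET.parse("broken.xml", parser=parser)
root = tree.getroot()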
Don't try to deal with encoding problems during parsing; pre-process the offending file(s) instead.
