Python 2.7 encoding - python

I am trying to write an XML file from Python (2.7) using the xml.append function.
I have a string "Frédéric" that needs to be written to xml file as one of the values.
I am trying to use unicode function on this string and then encode function to write to the file.
a ="Frédéric"
unicode(a, 'utf8')
But I get the error message 'ascii' codec can't decode byte 0xe9 in position 9.
I have gone through other Stack Overflow posts for this scenario; the suggestion was to add a unicode literal prefix before the string.
a = u'Frédéric'
a.encode('utf8')
Since my 'a' variable is going to be dynamic (it can take any value from a list), I need to use the unicode function.
Any suggestions please?
Thanks

Maybe the following helps. You can use codecs to save the XML string while using utf-8.
import codecs

def save_xml_string(path, xml_string):
    """
    Writes the given string to the file associated with the given path.

    :param path: Path to the file to write to.
    :param xml_string: The string to be written.
    :return: nothing
    """
    output_file = codecs.open(path, "w", "utf-8")
    output_file.write(xml_string)
    output_file.close()
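For the dynamic case in the question, a minimal sketch (assuming the incoming byte strings are UTF-8 encoded; the 0xe9 byte in the error actually suggests a Latin-1 source, in which case decode with 'latin-1' instead; add_value is just an illustrative helper name):
# -*- coding: utf-8 -*-
import xml.etree.ElementTree as ET

def add_value(parent, raw):
    # Decode byte strings to unicode before handing them to the XML API.
    if isinstance(raw, str):
        raw = raw.decode('utf-8')  # assumption: input bytes are UTF-8
    element = ET.SubElement(parent, 'value')
    element.text = raw
    return element

root = ET.Element('root')
for name in ['Frédéric', 'plain ascii']:
    add_value(root, name)
ET.ElementTree(root).write('out.xml', encoding='utf-8')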

Related

Replacing non-UTF-8 from a string

Here is the code:
s = 'Waitematā'
w = open('test.txt','w')
w.write(s)
w.close()
I get the following error.
UnicodeEncodeError: 'charmap' codec can't encode character '\u0101' in position 8: character maps to <undefined>
The string will print with the macron a, ā. However, I am not able to write this to a .txt or .csv file.
Am I able to swap out the macron a, ā, for a plain a with no macron? Thanks for the help in advance.
Note that if you open a file with open('text.txt', 'w') and write a string to it, you are not writing the string itself to the file, but an encoded version of it. Which encoding is used depends on your LANG environment variable and other factors.
To force UTF-8, as you suggested in title, you can try this:
w = open('text.txt', 'wb')  # note the 'b': open in binary mode
w.write(s.encode('utf-8'))  # explicitly encode the str into bytes
w.close()
As documented in open:
In text mode, if encoding is not specified the encoding used is platform dependent: locale.getpreferredencoding(False) is called to get the current locale encoding.
Not all encodings support all Unicode characters. Since the encoding is platform dependent when not specified, it is better and more portable to be explicit and call out the encoding when reading or writing a text file. UTF-8 supports all Unicode code points:
s = 'Waitematā'
with open('text.txt', 'w', encoding='utf8') as w:
    w.write(s)
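If you do want to swap out the macron rather than encode it, one option is a unicodedata sketch: NFKD normalization decomposes ā into a plain a plus a combining macron, which encode/ignore then drops:
import unicodedata

s = 'Waitematā'
# NFKD splits accented characters; 'ignore' drops the combining marks.
stripped = unicodedata.normalize('NFKD', s).encode('ascii', 'ignore').decode('ascii')
print(stripped)  # Waitemata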

Python's UTF-8 encoding yields odd results even though explicit utf-8 encoding is used

I am parsing some JSON (specifically the Amazon reviews file, which Amazon publicly provides). I am doing line-by-line parsing with conversion to a Pandas DataFrame and insert to SQL on the fly. I found something really odd. I open the JSON file as UTF-8. When I open the file itself with Notepad, I don't see any strange symbols or anything. For example, a substring of a review:
The temperature control doesn’t hold to as tight a temperature as some of the others reported.
But when I parse it and check the contents of string:
The temperature control doesn\xe2\x80\x99t hold to as tight a temperature as some of the others reported.
Why is that? How can I read it properly?
My current code is below:
def parseJSON(path):
    g = io.open(path, 'r', encoding='utf8')
    for l in g:
        yield eval(l)

for l in parseJSON(r"reviews.json"):
    for review in l["reviews"]:
        df = {}
        df[l["url"]] = review["review"]
        dfInsert = pd.DataFrame(list(df.items()), columns=["url", "Review"])
A file subset which fails is here:
http://www.filedropper.com/subset
First of all, you should never parse text from an unsafe (online) source with eval. If the data is in JSON, you should use a JSON parser. That's why JSON was invented: to provide safe serialization and deserialization.
In your case, use json.load() from the standard json module:
import io
import json

def parseJSON(path):
    return json.load(io.open(path, 'r', encoding='utf-8-sig'))
Since your JSON file contains a BOM, you should use the codec that knows how to strip it, i.e. the utf-8-sig.
If your file contains one JSON Object per line, you can read it like this:
def parseJSON(path):
    with io.open(path, 'r', encoding='utf-8-sig') as f:
        for line in f:
            yield json.loads(line)
Now, to answer why you are seeing doesn\xe2\x80\x99t instead of doesn’t. If you decode the bytes \xe2\x80\x99 as UTF-8, you get:
>>> '\xe2\x80\x99'.decode('utf8')
u'\u2019'
and what Unicode codepoint is that?
>>> unicodedata.name(u'\u2019')
'RIGHT SINGLE QUOTATION MARK'
Ok, now what happens when you eval() it in Python 2? Well, first, note that Unicode is not really a first-class citizen in the land of Python 2 strings (Python 3 fixed that).
So, eval tries to parse the string (series of bytes in Python 2) as a Python expression:
>>> eval('"’"')
'\xe2\x80\x99'
Note that (in my console that uses UTF-8) even when I type ’, that's represented as a sequence of 3 bytes.
It doesn't even help to say it's supposed to be a unicode:
>>> eval('u"’"')
u'\xe2\x80\x99'
What will help is to tell Python how to interpret the series of bytes that follow in the source/string, i.e. what's the encoding (see PEP-263):
>>> eval('# encoding: utf-8\nu"’"')
u'\u2019'

How to fix: "UnicodeDecodeError: 'ascii' codec can't decode byte"

as3:~/ngokevin-site# nano content/blog/20140114_test-chinese.mkd
as3:~/ngokevin-site# wok
Traceback (most recent call last):
File "/usr/local/bin/wok", line 4, in
Engine()
File "/usr/local/lib/python2.7/site-packages/wok/engine.py", line 104, in init
self.load_pages()
File "/usr/local/lib/python2.7/site-packages/wok/engine.py", line 238, in load_pages
p = Page.from_file(os.path.join(root, f), self.options, self, renderer)
File "/usr/local/lib/python2.7/site-packages/wok/page.py", line 111, in from_file
page.meta['content'] = page.renderer.render(page.original)
File "/usr/local/lib/python2.7/site-packages/wok/renderers.py", line 46, in render
return markdown(plain, Markdown.plugins)
File "/usr/local/lib/python2.7/site-packages/markdown/init.py", line 419, in markdown
return md.convert(text)
File "/usr/local/lib/python2.7/site-packages/markdown/init.py", line 281, in convert
source = unicode(source)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe8 in position 1: ordinal not in range(128). -- Note: Markdown only accepts unicode input!
How to fix it?
In some other Python-based static blog apps, Chinese posts can be published successfully, such as this app: http://github.com/vrypan/bucket3. On my site http://bc3.brite.biz/, Chinese posts publish successfully.
tl;dr / quick fix
Don't decode/encode willy nilly
Don't assume your strings are UTF-8 encoded
Try to convert strings to Unicode strings as soon as possible in your code
Fix your locale: How to solve UnicodeDecodeError in Python 3.6?
Don't be tempted to use quick reload hacks
Unicode Zen in Python 2.x - The Long Version
Without seeing the source it's difficult to know the root cause, so I'll have to speak generally.
UnicodeDecodeError: 'ascii' codec can't decode byte generally happens when you try to convert a Python 2.x str that contains non-ASCII to a Unicode string without specifying the encoding of the original string.
In brief, Unicode strings are an entirely separate type of Python string that does not carry any encoding. They hold only Unicode code points and can therefore hold any code point from across the entire spectrum. Byte strings contain encoded text, be it UTF-8, UTF-16, ISO-8859-1, GBK, Big5, etc. Strings are decoded to Unicode, and Unicode is encoded to strings. Files and text data are always transferred as encoded strings.
The Markdown module authors probably use unicode() (where the exception is thrown) as a quality gate to the rest of the code: it will convert ASCII or re-wrap existing Unicode strings into a new Unicode string. The Markdown authors can't know the encoding of the incoming string, so they rely on you to decode strings to Unicode before passing them to Markdown.
Unicode strings can be declared in your code using the u prefix to strings. E.g.
>>> my_u = u'my ünicôdé strįng'
>>> type(my_u)
<type 'unicode'>
Unicode strings may also come from files, databases, and network modules. When this happens, you don't need to worry about the encoding.
Gotchas
Conversion from str to Unicode can happen even when you don't explicitly call unicode().
The following scenarios cause UnicodeDecodeError exceptions:
# Explicit conversion without encoding
unicode('€')
# New style format string into Unicode string
# Python will try to convert value string to Unicode first
u"The currency is: {}".format('€')
# Old style format string into Unicode string
# Python will try to convert value string to Unicode first
u'The currency is: %s' % '€'
# Append string to Unicode
# Python will try to convert string to Unicode first
u'The currency is: ' + '€'
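In each case, the fix is the same: decode the byte string to Unicode before mixing it with a Unicode string. A sketch, assuming the bytes are UTF-8 and the source file carries a UTF-8 coding header:
# -*- coding: utf-8 -*-
euro = '€'.decode('utf-8')  # decode once, up front
print u"The currency is: {}".format(euro)
print u'The currency is: %s' % euro
print u'The currency is: ' + euro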
Examples
In the following diagram, you can see how the word café has been encoded as either "UTF-8" or "Cp1252", depending on the terminal type. In both examples, caf is just regular ASCII. In UTF-8, é is encoded using two bytes. In "Cp1252", é is 0xE9 (which also happens to be the Unicode code point value; that's no coincidence). The correct decode() is invoked and conversion to a Python Unicode string is successful:
In this diagram, decode() is called with ascii (which is the same as calling unicode() without giving an encoding). As ASCII can't contain bytes greater than 0x7F, this throws a UnicodeDecodeError exception:
The Unicode Sandwich
It's good practice to form a Unicode sandwich in your code, where you decode all incoming data to Unicode strings, work with Unicodes, then encode to strs on the way out. This saves you from worrying about the encoding of strings in the middle of your code.
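A minimal sandwich might look like this (a sketch, assuming UTF-8 files at both edges; the file names are placeholders):
import io

# Bottom slice: decode incoming bytes to Unicode at the boundary.
with io.open('input.txt', 'r', encoding='utf-8') as f:
    text = f.read()  # unicode

# Filling: work purely with Unicode strings.
text = text.upper()

# Top slice: encode back to bytes on the way out.
with io.open('output.txt', 'w', encoding='utf-8') as f:
    f.write(text)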
Input / Decode
Source code
If you need to bake non-ASCII into your source code, just create Unicode strings by prefixing the string with a u. E.g.
u'Zürich'
To allow Python to decode your source code, you will need to add an encoding header to match the actual encoding of your file. For example, if your file was encoded as 'UTF-8', you would use:
# encoding: utf-8
This is only necessary when you have non-ASCII in your source code.
Files
Usually non-ASCII data is received from a file. The io module provides a TextIOWrapper that decodes your file on the fly, using a given encoding. You must use the correct encoding for the file: it can't be easily guessed. For example, for a UTF-8 file:
import io

with io.open("my_utf8_file.txt", "r", encoding="utf-8") as my_file:
    my_unicode_string = my_file.read()
my_unicode_string would then be suitable for passing to Markdown. If you get a UnicodeDecodeError from the read() line, then you've probably used the wrong encoding value.
CSV Files
The Python 2.7 CSV module does not support non-ASCII characters 😩. Help is at hand, however, with https://pypi.python.org/pypi/backports.csv.
Use it like the above, but pass the opened file to it:
from backports import csv
import io

def read_rows():
    # wrapped in a generator function so that the yield is valid;
    # csv.reader consumes the already-decoded text stream
    with io.open("my_utf8_file.txt", "r", encoding="utf-8") as my_file:
        for row in csv.reader(my_file):
            yield row
Databases
Most Python database drivers can return data in Unicode, but usually require a little configuration. Always use Unicode strings for SQL queries.
MySQL
In the connection string add:
charset='utf8',
use_unicode=True
E.g.
>>> db = MySQLdb.connect(host="localhost", user='root', passwd='passwd', db='sandbox', use_unicode=True, charset="utf8")
PostgreSQL
Add:
psycopg2.extensions.register_type(psycopg2.extensions.UNICODE)
psycopg2.extensions.register_type(psycopg2.extensions.UNICODEARRAY)
HTTP
Web pages can be encoded in just about any encoding. The Content-type header should contain a charset field to hint at the encoding. The content can then be decoded manually against this value. Alternatively, Python-Requests returns Unicodes in response.text.
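For instance, with Python-Requests (a sketch; the URL is a placeholder):
import requests

r = requests.get('https://example.com/')
print r.encoding    # charset guessed from the Content-Type header
print type(r.text)  # <type 'unicode'> - already decoded for you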
Manually
If you must decode strings manually, you can simply do my_string.decode(encoding), where encoding is the appropriate encoding. Python 2.x supported codecs are given here: Standard Encodings. Again, if you get UnicodeDecodeError then you've probably got the wrong encoding.
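For instance, assuming the bytes are UTF-8:
my_bytes = '\xc3\xa9'                  # the UTF-8 byte sequence for é
my_unicode = my_bytes.decode('utf-8')  # u'\xe9', i.e. é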
The meat of the sandwich
Work with Unicodes as you would normal strs.
Output
stdout / printing
print writes through the stdout stream. Python tries to configure an encoder on stdout so that Unicode strings are encoded to the console's encoding. For example, if a Linux shell's locale is en_GB.UTF-8, the output will be encoded to UTF-8. On Windows, you will be limited to an 8-bit code page.
An incorrectly configured console, such as a corrupt locale, can lead to unexpected print errors. The PYTHONIOENCODING environment variable can force the encoding for stdout.
Files
Just like input, io.open can be used to transparently convert Unicodes to encoded byte strings.
Database
The same configuration for reading will allow Unicodes to be written directly.
Python 3
Python 3 is no more Unicode-capable than Python 2.x, but it is slightly less confused on the topic. E.g. the regular str is now a Unicode string and the old str is now bytes.
The default encoding is UTF-8, so if you .decode() a byte string without giving an encoding, Python 3 uses UTF-8 encoding. This probably fixes 50% of people's Unicode problems.
Further, open() operates in text mode by default, so returns decoded str (Unicode ones). The encoding is derived from your locale, which tends to be UTF-8 on Un*x systems or an 8-bit code page, such as windows-1251, on Windows boxes.
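So even in Python 3 it's worth being explicit rather than relying on the locale (a sketch; my_file.txt is a placeholder):
# Python 3: name the encoding explicitly instead of trusting locale defaults
with open('my_file.txt', encoding='utf-8') as f:
    text = f.read()  # str, already decoded to Unicode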
Why you shouldn't use sys.setdefaultencoding('utf8')
It's a nasty hack (there's a reason you have to use reload) that will only mask problems and hinder your migration to Python 3.x. Understand the problem, fix the root cause and enjoy Unicode zen.
See Why should we NOT use sys.setdefaultencoding("utf-8") in a py script? for further details
Finally I got it:
as3:/usr/local/lib/python2.7/site-packages# cat sitecustomize.py
# encoding=utf8
import sys
reload(sys)
sys.setdefaultencoding('utf8')
Let me check:
as3:~/ngokevin-site# python
Python 2.7.6 (default, Dec 6 2013, 14:49:02)
[GCC 4.4.5] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> reload(sys)
<module 'sys' (built-in)>
>>> sys.getdefaultencoding()
'utf8'
>>>
The above shows that the default encoding of Python is utf8. Then the error is gone.
This is the classic "unicode issue". I believe that completely explaining what is happening is beyond the scope of a StackOverflow answer.
It is well explained here.
In very brief summary, you have passed something that is being interpreted as a string of bytes to something that needs to decode it into Unicode characters, but the default codec (ascii) is failing.
The presentation I pointed you to provides advice for avoiding this. Make your code a "unicode sandwich". In Python 2, the use of from __future__ import unicode_literals helps.
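A quick illustration of what that __future__ import changes (a sketch, with a UTF-8 coding header on the file):
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

s = 'Frédéric'  # a unicode literal now, no u prefix needed
print(type(s))  # <type 'unicode'> on Python 2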
Update: how the code can be fixed:
OK - in your variable "source" you have some bytes. It is not clear from your question how they got there - maybe you read them from a web form? In any case, they are not encoded in ASCII, but Python tries to convert them to Unicode assuming that they are. You need to explicitly tell it what the encoding is. This means you need to know what the encoding is! That is not always easy, and it depends entirely on where the string came from. You could experiment with some common encodings - for example, UTF-8. You tell unicode() the encoding as a second parameter:
source = unicode(source, 'utf-8')
In some cases, when you check your default encoding (print sys.getdefaultencoding()), it says you are using ASCII. Yet changing to UTF-8 may still not work, depending on the content of your variable.
I found another way:
import sys
reload(sys)
sys.setdefaultencoding('Cp1252')
I was searching to solve the following error message:
unicodedecodeerror: 'ascii' codec can't decode byte 0xe2 in position 5454: ordinal not in range(128)
I finally got it fixed by specifying 'encoding':
f = open('../glove/glove.6B.100d.txt', encoding="utf-8")
Wish it could help you too.
"UnicodeDecodeError: 'ascii' codec can't decode byte"
Cause of this error: input_string must be unicode but str was given
"TypeError: Decoding Unicode is not supported"
Cause of this error: trying to convert unicode input_string into unicode
So first check that your input_string is str and convert to unicode if necessary:
if isinstance(input_string, str):
    input_string = unicode(input_string, 'utf-8')
Secondly, the above just changes the type but does not remove non-ASCII characters. If you want to remove non-ASCII characters:
if isinstance(input_string, str):
    input_string = input_string.decode('ascii', 'ignore').encode('ascii')  # note: this removes the characters and encodes back to a byte string
elif isinstance(input_string, unicode):
    input_string = input_string.encode('ascii', 'ignore')
To resolve this at the operating-system level in an Ubuntu installation, check the following:
$ locale charmap
If you get
locale: Cannot set LC_CTYPE to default locale: No such file or directory
instead of
UTF-8
then set LC_CTYPE and LC_ALL like this:
$ export LC_ALL="en_US.UTF-8"
$ export LC_CTYPE="en_US.UTF-8"
I find it best to always convert to unicode, but this is difficult to achieve because in practice you'd have to check and convert every argument to every function and method you ever write that includes some form of string processing.
So I came up with the following approach to either guarantee unicodes or byte strings, from either input. In short, include and use the following lambdas:
# guarantee unicode string
_u = lambda t: t.decode('UTF-8', 'replace') if isinstance(t, str) else t
_uu = lambda *tt: tuple(_u(t) for t in tt)
# guarantee byte string in UTF8 encoding
_u8 = lambda t: t.encode('UTF-8', 'replace') if isinstance(t, unicode) else t
_uu8 = lambda *tt: tuple(_u8(t) for t in tt)
Examples:
text='Some string with codes > 127, like Zürich'
utext=u'Some string with codes > 127, like Zürich'
print "==> with _u, _uu"
print _u(text), type(_u(text))
print _u(utext), type(_u(utext))
print _uu(text, utext), type(_uu(text, utext))
print "==> with u8, uu8"
print _u8(text), type(_u8(text))
print _u8(utext), type(_u8(utext))
print _uu8(text, utext), type(_uu8(text, utext))
# with % formatting, always use _u() and _uu()
print "Some unknown input %s" % _u(text)
print "Multiple inputs %s, %s" % _uu(text, text)
# but with string.format be sure to always work with unicode strings
print u"Also works with formats: {}".format(_u(text))
print u"Also works with formats: {},{}".format(*_uu(text, text))
# ... or use _u8 and _uu8, because string.format expects byte strings
print "Also works with formats: {}".format(_u8(text))
print "Also works with formats: {},{}".format(*_uu8(text, text))
Here's some more reasoning about this.
Python 2 and Python 3 differ in unicode handling, which makes pickled files quite incompatible to load across versions. So use Python pickle's encoding argument. The link below helped me solve a similar problem when I was trying to open pickled data from Python 3.7 while my file was originally saved in a Python 2.x version.
https://blog.modest-destiny.com/posts/python-2-and-3-compatible-pickle-save-and-load/
I copied the load_pickle function into my script and called load_pickle(pickle_file) while loading my input_data like this:
input_data = load_pickle("my_dataset.pkl")
The load_pickle function is here:
import pickle

def load_pickle(pickle_file):
    try:
        with open(pickle_file, 'rb') as f:
            pickle_data = pickle.load(f)
    except UnicodeDecodeError as e:
        with open(pickle_file, 'rb') as f:
            pickle_data = pickle.load(f, encoding='latin1')
    except Exception as e:
        print('Unable to load data ', pickle_file, ':', e)
        raise
    return pickle_data
Encode converts a unicode object into a string object. I think you are trying to encode a string object. First convert your result into a unicode object, and then encode that unicode object as 'utf-8'.
for example
result = yourFunction()
result.decode().encode('utf-8')
This worked for me:
file = open('docs/my_messy_doc.pdf', 'rb')
I had the same error, with URLs containing non-ascii chars (bytes with values > 128), my solution:
url = url.decode('utf8').encode('utf-8')
Note: utf-8 and utf8 are simply aliases. Using either 'utf8' or 'utf-8' should work the same way.
This worked for me in Python 2.7. I suppose this assignment changed 'something' in the internal str representation, i.e., it forces the right decoding of the backing byte sequence in url and finally puts the string into a utf-8 str with all the magic in the right place.
Unicode in Python is black magic for me.
Hope this is useful.
I had the same problem, but that didn't work for Python 3. I followed this and it solved my problem:
enc = sys.getdefaultencoding()
file = open(menu, "r", encoding = enc)
You have to set the encoding when you are reading/writing the file.
I got the same problem with the string "Pastelería Mallorca" and I solved with:
unicode("Pastelería Mallorca", 'latin-1')
In short, to ensure proper unicode handling in Python 2:
use io.open for reading/writing files
use from __future__ import unicode_literals
configure other data inputs/outputs (e.g., databases, network) to use unicode
if you cannot configure outputs to utf-8, convert your output for them: print(text.encode('ascii', 'replace').decode())
For explanations, see Alastair McCormack's detailed answer.
In a Django (1.9.10)/Python 2.7.5 project I get frequent UnicodeDecodeError exceptions, mainly when I try to feed unicode strings to logging. I made a helper function for arbitrary objects that basically formats them to 8-bit ASCII strings, replacing any characters not in the table with '?'. I think it's not the best solution, but since the default encoding is ASCII (and I don't want to change it), it will do:
from collections import Iterable

def encode_for_logging(c, encoding='ascii'):
    if isinstance(c, basestring):
        return c.encode(encoding, 'replace')
    elif isinstance(c, Iterable):
        c_ = []
        for v in c:
            c_.append(encode_for_logging(v, encoding))
        return c_
    else:
        return encode_for_logging(unicode(c))
This error occurs when there are non-ASCII characters in a string and we perform operations on that string without properly decoding it.
This helped me solve my problem.
I am reading a CSV file with columns ID and Text, and decoding characters in it as below:
train_df = pd.read_csv("Example.csv")
train_data = train_df.values
for i in train_data:
    print("ID :" + i[0])
    text = i[1].decode("utf-8", errors="ignore").strip().lower()
    print("Text: " + text)
Here is my solution: just add the encoding.
with open(file, encoding='utf8') as f:
And because reading a glove file takes a long time, I recommend converting the glove file to a numpy file. The next time you read the embedding weights, it will save you time.
import numpy as np
from tqdm import tqdm

def load_glove(file):
    """Loads GloVe vectors in numpy array.

    Args:
        file (str): a path to a glove file.

    Return:
        dict: a dict of numpy arrays.
    """
    embeddings_index = {}
    with open(file, encoding='utf8') as f:
        for i, line in tqdm(enumerate(f)):
            values = line.split()
            word = ''.join(values[:-300])
            coefs = np.asarray(values[-300:], dtype='float32')
            embeddings_index[word] = coefs
    return embeddings_index

# EMBEDDING_PATH = '../embedding_weights/glove.840B.300d.txt'
EMBEDDING_PATH = 'glove.840B.300d.txt'
embeddings = load_glove(EMBEDDING_PATH)
np.save('glove_embeddings.npy', embeddings)
Gist link: https://gist.github.com/BrambleXu/634a844cdd3cd04bb2e3ba3c83aef227
Specify # encoding: utf-8 at the top of your Python file; it should fix the issue.
I experienced this error with Python2.7. It happened to me while trying to run many python programs, but I managed to reproduce it with this simple script:
#!/usr/bin/env python
import subprocess
import sys

result = subprocess.Popen([u'svn', u'info'])
if not callable(getattr(result, "__enter__", None)) and not callable(getattr(result, "__exit__", None)):
    print("foo")
print("bar")
On success, it should print 'foo' and 'bar', and probably an error message if you're not in an svn folder.
On failure, it should print 'UnicodeDecodeError: 'ascii' codec can't decode byte 0xc4 in position 39: ordinal not in range(128)'.
After trying to regenerate my locales and many other solutions posted in this question, I learned the error was happening because I had a special character (ĺ) encoded in my PATH environment variable. After fixing PATH in '~/.bashrc', and exiting my session and logging in again (apparently sourcing '~/.bashrc' didn't work), the issue was gone.

How to parse unicode strings with minidom?

I'm trying to parse a bunch of XML files with the library xml.dom.minidom, to extract some data and put it in a text file. Most of the XML files go well, but for some of them I get the following error when calling minidom.parseString():
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2019' in position 5189: ordinal not in range(128)
It happens for some other non-ascii characters too. My question is: what are my options here? Am I supposed to somehow strip/replace all those non-English characters before being able to parse the XML files?
Try to decode it:
> print u'abcdé'.encode('utf-8')
> abcdé
> print u'abcdé'.encode('utf-8').decode('utf-8')
> abcdé
In case your string is a str:
xmldoc = minidom.parseString(u'{0}'.format(str).encode('utf-8'))
This worked for me.
Minidom doesn't directly support parsing Unicode strings; it's something that has historically had poor support and standardisation. Many XML tools recognise only byte streams as something an XML parser can consume.
If you have plain files, you should either read them in as byte strings (not Unicode!) and pass that to parseString(), or just use parse() which will read a file directly.
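A sketch of both options (the file name doc.xml is a placeholder):
from xml.dom import minidom

# Option 1: read the raw bytes yourself and pass them to parseString()
with open('doc.xml', 'rb') as f:
    dom = minidom.parseString(f.read())

# Option 2: let minidom read (and decode) the file itself
dom = minidom.parse('doc.xml')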
I know the O.P. asked about parsing strings, but I had the same exception upon writing the DOM model to a file via Document.writexml(...). In case people with that (related) problem land here, I will offer my solution.
My code which was throwing the UnicodeEncodeError looked like:
with tempfile.NamedTemporaryFile(delete=False) as fh:
    dom.writexml(fh, encoding="utf-8")
Note that the "encoding" param only effects the XML header and has no effect on the treatment of the data. To fix it, I changed it to:
with tempfile.NamedTemporaryFile(delete=False) as fh:
    fh = codecs.lookup("utf-8")[3](fh)
    dom.writexml(fh, encoding="utf-8")
This will wrap the file handle with an instance of encodings.utf_8.StreamWriter, which handles the data as UTF-8 rather than ASCII, and the UnicodeEncodeError went away. I got the idea from reading the source of xml.dom.minidom.Node.toprettyxml(...).
I've encountered this error a few times, and my hacky way of dealing with it is just to do this:
def getCleanString(word):
    clean = ""  # don't reuse the name str; it would shadow the built-in
    for character in word:
        try:
            clean = clean + str(character)
        except UnicodeEncodeError:
            pass  # this happens if the character is non-ASCII unicode
    return clean
Of course, this is probably a dumb way of doing it, but it gets the job done for me, and doesn't cost me anything in speed.
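For what it's worth, the same effect is available in a single call (a sketch; get_clean_string is just an illustrative name):
def get_clean_string(word):
    # word is a unicode object; 'ignore' silently drops every non-ASCII character
    return word.encode('ascii', 'ignore')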

Reading UTF-8 XML and writing it to a file with Python

I'm trying to parse a UTF-8 XML file and save some parts of it to another file. The problem is that this is my first Python script ever, and I'm totally confused about the character-encoding problems I'm finding.
My script fails immediately when it tries to write a non-ascii character to a file, but it can print it to the command prompt (at least to some degree).
Here's the XML (the parts that matter, at least; it's a *.resx file which contains UI strings):
<?xml version="1.0" encoding="utf-8"?>
<root>
<resheader name="foo">
<value>bar</value>
</resheader>
<data name="lorem" xml:space="preserve">
<value>ipsum öä</value>
</data>
</root>
And here's my python script
from xml.dom.minidom import parse

names = []
values = []

def getStrings(path):
    dom = parse(path)
    data = dom.getElementsByTagName("data")
    for i in range(len(data)):
        name = data[i].getAttribute("name")
        names.append(name)
        value = data[i].getElementsByTagName("value")
        values.append(value[0].firstChild.nodeValue.encode("utf-8"))

def writeToFile():
    with open("uiStrings-fi.py", "w") as f:
        for i in range(len(names)):
            line = names[i] + '="' + values[i] + '"'  # varName="varValue"
            f.write(line)
            f.write("\n")

getStrings("ResourceFile.fi-FI.resx")
writeToFile()
And here's the traceback:
Traceback (most recent call last):
  File "GenerateLanguageFiles.py", line 24, in <module>
    writeToFile()
  File "GenerateLanguageFiles.py", line 19, in writeToFile
    line = names[i] + '="'+ values[i] + '"' #varName='varValue'
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 2: ordinal not in range(128)
How should I fix my script so it reads and writes UTF-8 characters properly? The files I'm trying to generate would be used in test automation with Robot Framework.
You'll need to remove the call to encode() - that is, replace nodeValue.encode("utf-8") with nodeValue - and then change the call to open() to
with open("uiStrings-fi.py", "w", "utf-8") as f:
This uses a "Unicode-aware" version of open() which you will need to import from the codecs module, so also add
from codecs import open
to the top of the file.
The issue is that when you were calling nodeValue.encode("utf-8"), you were converting a Unicode string (Python's internal representation that can store all Unicode characters) into a regular string (which can only store single-byte characters 0-255). Later on, when you construct the line to write to the output file, names[i] is still a Unicode string but values[i] is a regular string. Python tries to convert the regular string to Unicode, which is the more general type, but because you don't specify an explicit conversion, it uses the ASCII codec, which is the default, and ASCII can't handle characters with byte values greater than 127. Unfortunately, several of those do occur in the string values[i] because the UTF-8 encoding uses those upper-range bytes frequently. So Python complains that it sees a character it can't handle. The solution, as I said above, is to defer the conversion from Unicode to bytes until the last possible moment, and you do that by using the Unicode-aware version of open (which will handle the encoding for you).
Now that I think about it, instead of what I said above, an alternate solution would be to replace names[i] with names[i].encode("utf-8"). That way, you convert names[i] into a regular string as well, and Python has no reason to try to convert values[i] back to Unicode. Although, one could make the argument that it's good practice to keep your strings as Unicode objects until you write them out to the file... if nothing else, I believe unicode becomes the default in Python 3.
The XML parser decodes the UTF-8 encoding of the input when it reads the file and all the text nodes and attributes of the resulting DOM are then unicode objects. When you select the interesting data from the DOM, you re-encode the values as UTF-8, but you don't encode the names. The resulting values array contains encoded byte strings while the names array still contains unicode objects.
In the line where the encoding error is thrown, Python tries to concatenate such a unicode name and a byte string value. To do so, both values have to be of the same type and Python tries to convert the byte string values[i] to unicode, but it doesn't know that it's UTF-8 encoded and fails when it tries to use the ASCII codec.
The easiest way to work around this would be to keep all the strings as Unicode objects and just encode them to UTF-8 when they are written to the file:
values.append(value[0].firstChild.nodeValue) # encode not yet
...
f.write(line.encode('utf-8')) # but now
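Applied to the original script, the corrected writeToFile might look like this (a sketch; it assumes values now holds unicode objects, per the change above):
def writeToFile():
    with open("uiStrings-fi.py", "w") as f:
        for i in range(len(names)):
            # names[i] and values[i] are both unicode; concatenation is safe
            line = names[i] + '="' + values[i] + '"'
            # encode to UTF-8 bytes only at the moment of writing
            f.write(line.encode('utf-8'))
            f.write("\n")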
