I'm trying to write out a csv file with Unicode characters, so I'm using the unicodecsv package. Unfortunately, I'm still getting UnicodeDecodeErrors:
# -*- coding: utf-8 -*-
import codecs
import unicodecsv
raw_contents = 'He observes an “Oversized Gorilla” near Ashford'
encoded_contents = unicode(raw_contents, errors='replace')
with codecs.open('test.csv', 'w', 'UTF-8') as f:
    w = unicodecsv.writer(f, encoding='UTF-8')
    w.writerow(["1", encoded_contents])
This is the traceback:
Traceback (most recent call last):
  File "unicode_test.py", line 11, in <module>
    w.writerow(["1", encoded_contents])
  File "/Library/Python/2.7/site-packages/unicodecsv/__init__.py", line 83, in writerow
    self.writer.writerow(_stringify_list(row, self.encoding, self.encoding_errors))
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/codecs.py", line 691, in write
    return self.writer.write(data)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/codecs.py", line 351, in write
    data, consumed = self.encode(object, self.errors)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xef in position 17: ordinal not in range(128)
I thought converting it to Unicode would be good enough, but that doesn't seem to be the case. I'd really like to understand what is happening so that I'm better prepared to handle these errors in other projects in the future.
From the traceback, it looks like I can reproduce the error like this:
>>> raw_contents = 'He observes an “Oversized Gorilla” near Ashford'
>>> raw_contents.encode('UTF-8')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 15: ordinal not in range(128)
>>>
Up until now, I thought I had a decent working knowledge of working with Unicode text in Python 2.x, but this has humbled me.
You should not use codecs.open() for your file. unicodecsv wraps the csv module, which always writes a byte string to the open file object. In order to write that byte string to a Unicode-aware file object such as returned by codecs.open(), it is implicitly decoded; this is where your UnicodeDecodeError exception stems from.
Use a file in binary mode instead:
with open('test.csv', 'wb') as f:
    w = unicodecsv.writer(f, encoding='UTF-8')
    w.writerow(["1", encoded_contents])
The binary mode is not strictly necessary unless your data contains embedded newlines, but the csv module wants to control how newlines are written to ensure that such values are handled correctly. However, not using codecs.open() is an absolute requirement.
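A small sketch of the embedded-newline case (newlines.csv is a hypothetical file name):

import unicodecsv

# A field containing an embedded newline gets quoted by the underlying csv
# module; binary mode keeps that newline from being rewritten as \r\n
with open('newlines.csv', 'wb') as f:
    w = unicodecsv.writer(f, encoding='UTF-8')
    w.writerow([u'line one\nline two', u'second field'])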
The same thing happens when you call .encode() on a byte string; you already have encoded data there, so Python implicitly decodes to get a Unicode value to encode.
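The cure is to decode the byte string explicitly, after which encoding works; a minimal Python 2 sketch, assuming a UTF-8 terminal:

>>> raw_contents = 'He observes an “Oversized Gorilla” near Ashford'
>>> unicode_contents = raw_contents.decode('utf-8')  # bytes -> unicode, explicitly
>>> unicode_contents.encode('utf-8') == raw_contents  # unicode -> bytes, no error
True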
Related
I'm getting an exception when reading a file that contains a RIGHT DOUBLE QUOTATION MARK Unicode symbol. It is encoded in UTF-8 (0xE2 0x80 0x9D). The minimal example:
import sys
print(sys.getdefaultencoding())
f = open("input.txt", "r")
f.readline()
The script fails while reading the first line, even when the right quotation mark is not on the first line. The exception looks like this:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Program Files\Python36\lib\encodings\cp1252.py", line 23, in decode
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 102: character maps to <undefined>
The input file is UTF-8 encoded; I've tried both with and without a BOM. The default encoding returned by sys.getdefaultencoding() is utf-8.
The script fails on a machine with Python 3.6.5 but works fine on another with Python 3.6.0. Both machines run Windows.
My questions are mostly theoretical, as this exception is thrown from external software that I cannot change, reading a file that I don't wish to change. What could differ between these machines besides the Python patch version? And why does a vanilla open use cp1252 if the system default is utf-8?
As clearly stated in Python's open documentation:
In text mode, if encoding is not specified the encoding used is platform dependent: locale.getpreferredencoding(False) is called to get the current locale encoding.
Windows defaults to a localized encoding (cp1252 on US and Western European versions). Linux typically defaults to utf-8.
Because it is platform-dependent, use the encoding parameter and specify the encoding of the file explicitly.
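For the example above, that means naming the encoding the file is believed to have instead of trusting the locale:

# Open with an explicit encoding rather than the platform default
f = open("input.txt", "r", encoding="utf-8")
f.readline()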
>>> path = 'name.txt'
>>> content = None
>>> with open(path, 'r') as file:
...     content = file.readlines()
...
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "/mnt/lustre/share/miniconda3/lib/python3.6/encodings/ascii.py", line 26, in decode
    return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 163: ordinal not in range(128)
When I run this code to read a file that contains Chinese characters, I get an error. The file was saved as UTF-8. My Python version is 3.6.5, but the same code runs fine in Python 2.7.
open is using the ASCII codec to try to read the file. The easiest way to fix this is to specify the encoding:
with open(path, 'r', encoding='utf-8') as file:
    content = file.readlines()
Your locale should probably specify the preferred encoding as UTF-8, but I think it depends on OS and language settings.
Python 2.7 reads files into byte strings by default.
Python 3.x reads files into Unicode strings by default, so the bytes in the file must be decoded.
The default encoding used varies by operating system, but can be determined by calling locale.getpreferredencoding(False). This is often utf8 on Linux systems, but Windows systems return a localized ANSI encoding, e.g. cp1252 for US/Western European Windows versions.
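You can check which default your own system would use; a quick sketch:

import locale

# The encoding open() falls back to when none is given, e.g. 'UTF-8' or 'cp1252'
print(locale.getpreferredencoding(False))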
In Python 3, specify the encoding you expect for files so as not to rely on a locale-specific default. For example:
with open(path, 'r', encoding='utf8') as f:
...
You can do this in Python 2 as well, but use io.open(), which is compatible with Python 3's open() and will read Unicode strings instead of byte strings. io.open() is available in Python 3 as well for portability.
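A sketch that behaves the same on both versions, assuming path names a UTF-8 text file:

import io

# io.open mirrors Python 3's built-in open(): it decodes for you and returns
# Unicode strings on Python 2 rather than byte strings
with io.open(path, 'r', encoding='utf8') as f:
    text = f.read()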
I'm having trouble importing even a simple text file into Python. For example, here's the contents of example.txt:
hello
my
friend
Very simple. However, when I try to import the file and read it:
f = open('example.txt')
f.read()
I get the following error:
Traceback (most recent call last):
  File "<pyshell#1>", line 1, in <module>
    f.read()
  File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/encodings/ascii.py", line 26, in decode
    return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xff in position 0: ordinal not in range(128)
What's the source of this problem? Clearly there aren't any non-ASCII characters in the file.
I've tried this in IDLE, terminal (Mac OSX) and Rodeo and get similar issues in all.
I'm very new to Python and am concerned I may have screwed up something in my installation. I've downloaded various versions over the years, straight from Python, Anaconda, macports, etc., and I'm wondering if the various sources are not playing nicely...
Python 3.5.1 on OSX 10.11.4.
Maybe your file was saved with a byte order mark (BOM). In fact, byte 0xff at position 0 looks like the start of a UTF-16 BOM (0xFF 0xFE) rather than a UTF-8 one (0xEF 0xBB 0xBF). Try saving your file explicitly as UTF-8 without a BOM. The BOM bytes are not valid ASCII, so the ASCII codec fails on them immediately with a UnicodeDecodeError.
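One way to check is to inspect the file's first bytes yourself; a small diagnostic sketch, reusing the example.txt name from the question:

# Peek at the raw bytes: b'\xef\xbb\xbf' is a UTF-8 BOM, b'\xff\xfe' a UTF-16-LE BOM
with open('example.txt', 'rb') as f:
    print(f.read(4))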
Hope this helps!
I'm trying to download a BVLC-trained model and I'm stuck with this error:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 110: invalid start byte
I think it's because of the following function (complete code)
# Closure-d function for checking SHA1.
def model_checks_out(filename=model_filename, sha1=frontmatter['sha1']):
    with open(filename, 'r') as f:
        return hashlib.sha1(f.read()).hexdigest() == sha1
Any idea how to fix this?
You are opening a file that is not UTF-8 encoded, while the default encoding for your system is set to UTF-8.
Since you are calculating a SHA1 hash, you should read the data as binary instead. The hashlib functions require you pass in bytes:
with open(filename, 'rb') as f:
    return hashlib.sha1(f.read()).hexdigest() == sha1
Note the addition of b in the file mode.
See the open() documentation:
mode is an optional string that specifies the mode in which the file is opened. It defaults to 'r' which means open for reading in text mode. [...] In text mode, if encoding is not specified the encoding used is platform dependent: locale.getpreferredencoding(False) is called to get the current locale encoding. (For reading and writing raw bytes use binary mode and leave encoding unspecified.)
and from the hashlib module documentation:
You can now feed this object with bytes-like objects (normally bytes) using the update() method.
You didn't specify to open the file in binary mode, so f.read() is trying to read the file as a UTF-8-encoded text file, which doesn't seem to be working. But since we take the hash of bytes, not of strings, it doesn't matter what the encoding is, or even whether the file is text at all: just open it, and then read it, as a binary file.
>>> with open("test.h5.bz2","r") as f: print(hashlib.sha1(f.read()).hexdigest())
Traceback (most recent call last):
  File "<ipython-input-3-fdba09d5390b>", line 1, in <module>
    with open("test.h5.bz2","r") as f: print(hashlib.sha1(f.read()).hexdigest())
  File "/home/dsm/sys/pys/Python-3.5.1-bin/lib/python3.5/codecs.py", line 321, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb8 in position 10: invalid start byte
but
>>> with open("test.h5.bz2","rb") as f: print(hashlib.sha1(f.read()).hexdigest())
21bd89480061c80f347e34594e71c6943ca11325
Since there is not a single hint in the documentation or the source code, I have no clue why, but using the b character (I guess it stands for binary) totally works (TF version: 1.1.0):
image_data = tf.gfile.FastGFile(filename, 'rb').read()
For more information, check out: gfile
I am using Python 3.4.1 on Windows 7. I am trying to read a txt file exported from a piece of software, and it seems that Python cannot read this text file. But I found that if I open the text file in Notepad, add a space anywhere, and save it, then Python reads it fine.
I tried the same code and the same file on my Mac; it has the same problem as on Windows.
The original text file does not work; opened and saved in Windows Notepad, it works; opened and saved in Mac TextEdit, it does not work.
I suspect the original encoding of the text file might not be right.
Thanks
Python code
InputFileName=input("Please tell me the input file name:")
#StartLNum=int(input("Please tell me the start line number:"))
#EndLNum=int(input("Please tell me the end line number:"))
StartLNum=18
EndLNum=129
lnum=1
OutputName='out'+InputFileName
fw=open(OutputName,'w')
with open(InputFileName,"r") as fr:
    for line in fr:
        if (lnum >= StartLNum) & (lnum <= EndLNum):
            #print(line)
            fw.write(line)
        lnum += 1
fw.close()
Shell
>>> ================================ RESTART ================================
>>>
Please tell me the input file name:Jul-18-2014.txt
Traceback (most recent call last):
  File "C:\Users\Jeremy\Desktop\read.py", line 13, in <module>
    for line in fr:
UnicodeDecodeError: 'gbk' codec can't decode byte 0xb3 in position 4309: illegal multibyte sequence
>>> ================================ RESTART ================================
>>>
Please tell me the input file name:Jul-18-2014.txt
>>>
Plus, the error below is from the same code run on my Mac (Python 3.4.1, OS X 10.9):
Traceback (most recent call last):
  File "/Users/Jeremy/Desktop/read.py", line 14, in <module>
    for line in fr:
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/encodings/ascii.py", line 26, in decode
    return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xb3 in position 4174: ordinal not in range(128)
When you save the file in Notepad, it is re-encoded in the default file encoding of your Windows installation. Notepad auto-detected the original encoding when it opened the file, however.
Python opens files using that same system encoding by default, which is why you can now read the file. Quoting the open() function documentation:
encoding is the name of the encoding used to decode or encode the file. This should only be used in text mode. The default encoding is platform dependent (whatever locale.getpreferredencoding() returns), but any encoding supported by Python can be used.
You'll have to explicitly specify the correct encoding for the file if you wanted to open it directly in Python:
with open(InputFileName, "r", encoding='utf-8-sig') as fr:
I used 'utf-8-sig' as an example here, as that is a file encoding that Notepad can auto-detect. It could well be that the encoding is UTF-16 or plain UTF-8 or any number of other encodings, however.
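As a rough diagnostic, you can try a few candidate encodings and keep the ones that decode the whole file without errors; the candidate list here is only an assumption, and a clean decode is a hint rather than proof:

# Try plausible encodings; the wrong ones usually raise UnicodeDecodeError
for enc in ('utf-8-sig', 'utf-16', 'cp1252', 'gbk'):
    try:
        with open(InputFileName, 'r', encoding=enc) as fr:
            fr.read()
        print(enc, 'decodes without errors')
    except UnicodeDecodeError:
        print(enc, 'fails')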
If you think that the file is encoded with a specific ANSI codepage, you still have to name the exact codepage. Your system is configured to use code page 936 (GBK), but that is not the correct encoding for this file.
See the codecs module for a list of supported encodings.