I have a file where each line is a json object like so:
{"name": "John", ...}
{...}
I am trying to create a new file with the same objects, but with certain properties removed from all of them.
When I do this, I get a UnicodeEncodeError. Strangely, if I instead loop over range(n) (for some number n) and use infile.next(), it works just as I want it to.
Why so? How do I get this to work by iterating over infile? I tried using dumps() instead of dump(), but that just makes a bunch of empty lines in the outfile.
import json

with open(filename, 'r') as infile:
    with open('_{}'.format(filename), 'w') as outfile:
        for comment in infile:
            decodedComment = json.loads(comment)
            for prop in propsToRemove:
                # use pop to avoid exception handling
                decodedComment.pop(prop, None)
            json.dump(decodedComment, outfile, ensure_ascii=False)
            outfile.write('\n')
Here is the error:
UnicodeEncodeError: 'ascii' codec can't encode character u'\U0001f47d' in position 1: ordinal not in range(128)
Thanks for the help!
The problem you are facing is that the standard file.write() function (called by json.dump()) cannot handle unicode strings containing non-ASCII characters: it tries to encode them with the default ASCII codec, which fails. From the error message, your string contains the character \U0001f47d (which turns out to be EXTRATERRESTRIAL ALIEN, who knew?), and possibly other non-ASCII characters. To handle these characters, either you can let json.dump() escape them into plain ASCII (they'll show up in your output file as \uXXXX escape sequences), or you need to use a file writer that can handle unicode.
To do the first option, simply drop the ensure_ascii=False argument so that json.dump() escapes the characters itself:
json.dump(decodedComment, outfile)
The second option is likely more what you want, and an easy way is to use the codecs module. Import it, and change the line that opens the output file to:
with codecs.open('_{}'.format(filename), 'w', encoding="utf-8") as outfile:
Then, you'll be able to save the special characters in their original form.
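Putting the two changes together, a minimal sketch of the whole loop might look like this (assuming filename and propsToRemove are defined as in the question):

import codecs
import json

with open(filename, 'r') as infile:
    with codecs.open('_{}'.format(filename), 'w', encoding='utf-8') as outfile:
        for comment in infile:
            decodedComment = json.loads(comment)
            for prop in propsToRemove:
                # pop() with a default avoids a KeyError for missing properties
                decodedComment.pop(prop, None)
            # the codecs writer now handles the non-ASCII characters
            json.dump(decodedComment, outfile, ensure_ascii=False)
            outfile.write(u'\n')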
Related
I'm trying to get a Python 3 program to do some manipulations with a text file filled with information. However, when trying to read the file I get the following error:
Traceback (most recent call last):
File "SCRIPT LOCATION", line NUMBER, in <module>
text = file.read()
File "C:\Python31\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 2907500: character maps to <undefined>
The file in question is not using the CP1252 encoding. It's using another encoding. Which one, you have to figure out yourself. Common ones are Latin-1 and UTF-8. Since 0x90 is only an obscure control character in Latin-1, UTF-8 (where 0x90 is a continuation byte) is more likely.
You specify the encoding when you open the file:
file = open(filename, encoding="utf8")
If file = open(filename, encoding="utf-8") doesn't work and you just want to drop the offending characters, try
file = open(filename, errors="ignore") (docs)
Alternatively, if you don't need to decode the file at all, for example when uploading it to a website, use:
open(filename, 'rb')
where r = reading, b = binary
As an extension to #LennartRegebro's answer:
If you can't tell what encoding your file uses, the solution above does not work (it's not utf8), and you find yourself merely guessing, there are online tools you can use to identify the encoding. They aren't perfect, but they usually work just fine. After you figure out the encoding, you should be able to use the solution above.
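If you would rather not paste your file into an online tool, the third-party chardet package (pip install chardet) makes a statistical guess at the encoding. A small sketch, with the filename being just a placeholder; treat the result as a hint, not a certainty:

import chardet

with open('somefile.txt', 'rb') as f:   # read raw bytes, no decoding
    guess = chardet.detect(f.read())

print(guess)   # e.g. {'encoding': 'utf-8', 'confidence': 0.99, ...}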
EDIT: (Copied from comment)
The popular text editor Sublime Text has a command to display the encoding, if it has been set:
Go to View -> Show Console (or Ctrl+`)
Type view.encoding() into the field at the bottom and hope for the best (I was unable to get anything but Undefined, but maybe you will have better luck...)
TLDR: Try: file = open(filename, encoding='cp437')
Why? When one uses:
file = open(filename)
text = file.read()
Python assumes the file uses the same codepage as the current environment (cp1252 in the case of the opening post) and tries to decode it into its own internal Unicode representation. If the file contains byte values not defined in this codepage (like 0x90), we get a UnicodeDecodeError. Sometimes we don't know the file's encoding, sometimes the file's encoding may be unhandled by Python (like, e.g., cp790), and sometimes the file contains mixed encodings.
If such characters are unneeded, one may decide to replace them with the replacement marker (U+FFFD, often rendered as a question mark), using:
file = open(filename, errors='replace')
Another workaround is to use:
file = open(filename, errors='ignore')
The offending characters are then simply dropped, but other errors will be masked too.
A very good solution is to specify the encoding, yet not any encoding (like cp1252), but the one which has ALL characters defined (like cp437):
file = open(filename, encoding='cp437')
Codepage 437 is the original DOS encoding. All 256 byte values are defined, so there are no errors while reading the file, no errors are masked out, and the characters are preserved (not necessarily rendered as originally intended, but still distinguishable).
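A tiny sketch of why cp437 never raises: every byte value 0-255 maps to some character, so decoding always succeeds and encoding back restores the original bytes.

raw = bytes(range(256))              # every possible byte value
text = raw.decode('cp437')           # never raises: all 256 codes are defined
assert text.encode('cp437') == raw   # round-trips losslessly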
Stop wasting your time: just add encoding="cp437" and errors='ignore' to your code, for both reading and writing:
open('filename.csv', encoding="cp437", errors='ignore')
open(file_name, 'w', newline='', encoding="cp437", errors='ignore')
Godspeed
For me, opening the file with the utf16 encoding worked:
file = open('filename.csv', encoding="utf16")
For those working in Anaconda on Windows, I had the same problem. Notepad++ helped me solve it.
Open the file in Notepad++. In the bottom right it will tell you the current file encoding.
In the top menu, next to "View", locate "Encoding". In "Encoding" go to "Character sets" and patiently look for the encoding that you need. In my case the encoding "Windows-1252" was found under "Western European".
Before you apply the suggested solution, you can check which character corresponds to the offending byte reported in the error log (0x90 in this case): https://unicodelookup.com/#0x90/1 (or directly at the Unicode Consortium site http://www.unicode.org/charts/ by searching 0x0090),
and then consider removing it from the file.
def read_files(file_path):
    with open(file_path, encoding='utf8') as f:
        text = f.read()
    return text
or, for writing (the original snippet opened the file in read mode; it needs 'wb' and makes more sense as a separate writer):
def write_files(text, file_path):
    with open(file_path, 'wb') as f:
        f.write(text.encode('utf8', 'ignore'))
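A hypothetical round trip with the two helpers above (the file names are placeholders):

content = read_files('input.txt')
write_files(content, 'output.txt')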
In newer versions of Python (starting with 3.7), you can add the interpreter option -Xutf8, which should fix your problem. If you use PyCharm, just go to Run > Edit Configurations and, in the Configuration tab, set the Interpreter options field to -Xutf8.
Or, equivalently, you can just set the environment variable PYTHONUTF8 to 1.
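If you go this route, you can verify from inside the interpreter that UTF-8 mode is actually active (Python 3.7+):

import sys

# 1 when started with -Xutf8 or PYTHONUTF8=1, 0 otherwise
print(sys.flags.utf8_mode)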
For me, changing the MySQL character encoding to match my code helped to sort out the solution:
photo = open('pic3.png', encoding='latin1')
I need to read specified rows and columns of csv file and write into txt file.But I got an unicode decode error.
import csv

with open('output.csv', 'r', encoding='utf-8') as f:
    reader = csv.reader(f)
    your_list = list(reader)

print(your_list)
The reason for this error is perhaps that your CSV file does not use UTF-8 encoding. Find out the original encoding used for your document.
First of all, try using the default encoding by leaving out the encoding parameter:
with open('output.csv', 'r') as f:
...
If that does not work, try alternative encoding schemes that are commonly used, for example:
with open('output.csv', 'r', encoding="ISO-8859-1") as f:
...
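If you want to automate the guess-and-check, a small helper along these lines can try several encodings in turn (the function name and candidate list are mine, not a standard API). Note that ISO-8859-1 accepts any byte sequence, so keeping it last makes it a catch-all:

def read_with_fallback(path, candidates=("utf-8", "cp1252", "iso-8859-1")):
    for enc in candidates:
        try:
            with open(path, encoding=enc) as f:
                return f.read(), enc
        except UnicodeDecodeError:
            continue
    raise ValueError("none of the candidate encodings could decode " + path)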
If you get a unicode decode error with this code, it is likely that the csv file is not utf-8 encoded... The correct fix is to find out what the correct encoding is and use it.
If you only want quick and dirty workarounds, Python offers the errors=... option of open. From the documentation of open function in the standard library:
'strict' to raise a ValueError exception if there is an encoding error. The default value of None has the same effect.
'ignore' ignores errors. Note that ignoring encoding errors can lead to data loss.
'replace' causes a replacement marker (such as '?') to be inserted where there is malformed data.
'surrogateescape' will represent any incorrect bytes as code points in the Unicode Private Use Area ranging from U+DC80 to U+DCFF. These private code points will then be turned back into the same bytes when the surrogateescape error handler is used when writing data. This is useful for processing files in an unknown encoding.
'xmlcharrefreplace' is only supported when writing to a file. Characters not supported by the encoding are replaced with the appropriate XML character reference &#nnn;.
'backslashreplace' replaces malformed data by Python’s backslashed escape sequences.
'namereplace' (also only supported when writing) replaces unsupported characters with \N{...} escape sequences.
I often use errors='replace', when I only want to know that there were erroneous bytes or errors='backslashreplace' when I want to know what they were.
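A tiny demonstration of those two handlers, using 0x90 (the byte from the question), which is not valid as a start byte in UTF-8:

data = b"abc\x90def"
print(data.decode("utf-8", errors="replace"))           # 'abc\ufffddef', shown as abc...def with a replacement marker
print(data.decode("utf-8", errors="backslashreplace"))  # 'abc\\x90def'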
I have a large csv file that contains unicode characters which are causing errors in a Python script I am trying to run. My process for removing them so far has been quite tedious. I run my script and as soon as it hits a unicode character, I get an error:
'ascii' codec can't encode character u'\xef' in position 197: ordinal not in range(128)
Then I Google u'\xef' and try to figure out what the character actually is (Does anyone know of a website with a list of these definitions?). I'm using that information to build a dictionary and I have a second Python script that converts the unicode characters to regular text:
import csv
import glob
import os

unicode_dict = {"\xb0": "deg", "\xa0": " ", "\xbd": "1/2", "\xbc": "1/4", "\xb2": "^2", "\xbe": "3/4"}

for f in glob.glob(r"C:\Folder1\*.csv"):
    in_csv = f
    out_csv = f.replace(".csv", "_2.csv")
    write_f = open(out_csv, "wb")
    writer = csv.writer(write_f)
    with open(in_csv, 'rb') as csvfile:
        reader = csv.reader(csvfile)
        for row in reader:
            new_row = []
            for s in row:
                for k, v in unicode_dict.iteritems():
                    s = s.replace(k, v)
                new_row.append(s)
            writer.writerow(new_row)
    write_f.close()
    os.remove(in_csv)
    os.rename(out_csv, in_csv)
Then I have to run the code again, get another error, and look up the next unicode character on Google. There must be a better way, right?
Read http://www.joelonsoftware.com/articles/Unicode.html . Carefully.
Then, you'll understand that you need to know which encoding your file is in. If you've been able to find out what \xbd means, maybe the same place mentions which encoding it is.
Then, use io.open(in_csv, 'r', encoding='yourencodinghere') instead of the vanilla open call (binary mode and an encoding argument don't mix).
Then, apparently the csv module in Python 2 doesn't handle Unicode, sigh. Use something from SBillion's answer to work around it.
You should have a look at this for a way to handle Unicode via utf-8 in csv files with the standard python library:
https://docs.python.org/2/library/csv.html#csv-examples
But if you prefer, you can use this external unicode-compliant module: https://pypi.python.org/pypi/unicodecsv/0.9.0
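For completeness, here is a rough Python 3 rewrite of the replacement loop, which sidesteps the whole problem because Python 3's csv module handles Unicode natively once the encoding is given. It is only a sketch: it assumes the files really are UTF-8, and it uses str.translate to apply all replacements in one pass:

import csv
import glob
import os

replacements = {"\xb0": "deg", "\xa0": " ", "\xbd": "1/2",
                "\xbc": "1/4", "\xb2": "^2", "\xbe": "3/4"}
table = str.maketrans(replacements)

for in_csv in glob.glob(r"C:\Folder1\*.csv"):
    out_csv = in_csv.replace(".csv", "_2.csv")
    with open(in_csv, newline='', encoding='utf-8') as src, \
         open(out_csv, 'w', newline='', encoding='utf-8') as dst:
        writer = csv.writer(dst)
        for row in csv.reader(src):
            writer.writerow([cell.translate(table) for cell in row])
    os.remove(in_csv)
    os.rename(out_csv, in_csv)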
So I'm trying to parse out JSON files into a tab delimited file. The parsing seems to work fine and all the data is coming through. Although the oddest thing is happening on the output file. I told it to use a tab delimiter and on the output it does use tabs, but it still seems to keep the single quotes. And for some reason it also seems to be adding the letter B to the beginning. I manually typed in the header, and that works fine, but the data itself is acting weird. Here's an example of the output I'm getting.
id created text screen name name latitude longitude place name place type
b'1234567890' b'Thu Mar 14 19:39:07 +0000 2013' "b""I'm at Bank Of America (Wayne, MI) http://t.co/asdf""" b'userid' b'username' 42.28286837 -83.38487864 b'Bank Of America, Wayne' b'poi'
b'1234567891' b'Thu Mar 14 19:39:16 +0000 2013' b'here is a sample tweet \xf0\x9f\x8f\x80 #notingoodhands' b'userid2' b'username2'
Here is the code that I'm using to write the data out.
out = open(filename, 'w')
out.write('id\tcreated\ttext\tscreen name\tname\tlatitude\tlongitude\tplace name\tplace type')
out.write('\n')

rows = zip(ids, times, texts, screen_names, names, lats, lons, place_names, place_types)

from csv import writer
csv = writer(out, dialect='excel', delimiter='\t')

for row in rows:
    values = [(value.encode('utf-8') if hasattr(value, 'encode') else value) for value in row]
    csv.writerow(values)

out.close()
So here's the thing. If I did this without the utf-8 bit and just output it straight, the formatting would be exactly how I want it. But then when people type in special characters, the program crashes and isn't able to handle them.
Traceback (most recent call last):
File "tweets.py", line 34, in <module>
csv.writerow(values)
File "C:\Python33\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\U0001f3c0' in position 153: character maps to <undefined>
Adding the utf-8 bit converts it to the type of output you see here, but then it adds all these characters to the output. Does anyone have any thoughts on this?
You are writing byte data instead of unicode to your files, because you are encoding the data yourself.
Remove the encode calls altogether and let Python handle this for you; open the file with the UTF8 encoding and the rest takes care of itself:
out = open(filename, 'w', encoding='utf8')
This is documented in the csv module documentation:
Since open() is used to open a CSV file for reading, the file will by default be decoded into unicode using the system default encoding (see locale.getpreferredencoding()). To decode a file using a different encoding, use the encoding argument of open:
import csv

with open('some.csv', newline='', encoding='utf-8') as f:
    reader = csv.reader(f)
    for row in reader:
        print(row)
The same applies to writing in something other than the system default encoding: specify the encoding argument when opening the output file.
You've got multiple things going on here, but first, let's clear up a bit of confusion.
Encoding non-ASCII characters to UTF-8 means you get multiple bytes. For example, the character 🏀 is \xf0\x9f\x8f\x80 in UTF-8. But that's still just one character, it's just a character that takes four bytes. If you write the string to a binary file, then look at that file in a UTF-8-compatible tool (Notepad or TextEdit, or just cat on a UTF-8-friendly terminal/shell), you'll see one 🏀, not four garbage characters.
Second, b'abc' is not a string with b added to the beginning, it's the repr() representation of the bytes object abc. The b is no more a part of the string than the quotes are.
Finally, in Python 3, you can't open a file in text mode and then write byte strings to it. Either open it in text mode, with an encoding, and write normal unicode strings, or open it in binary mode and write encoded byte strings.
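To make that concrete, here is a sketch of the corrected writer for the code in the question (same variable names as the original; newline='' is added because the csv docs recommend it when writing, and there is no manual encode() step):

from csv import writer

out = open(filename, 'w', encoding='utf8', newline='')
out.write('id\tcreated\ttext\tscreen name\tname\tlatitude\tlongitude\tplace name\tplace type\n')

csv = writer(out, dialect='excel', delimiter='\t')
rows = zip(ids, times, texts, screen_names, names, lats, lons, place_names, place_types)
for row in rows:
    csv.writerow(row)   # plain strings; the file object encodes them as UTF-8

out.close()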
Edit: http://pastebin.com/W4iG3tjS - the file
I have a text file encoded in utf8 with some Cyrillic text it. To load it, I use the following code:
import codecs
fopen = codecs.open('thefile', 'r', encoding='utf8')
fread = fopen.read()
Evaluating fread at the prompt dumps the file to the screen as escape sequences; print fread displays it in readable form.
I then try to split it and write it to an empty file with no encoding:
a = fread.split()
for l in a:
    print>>dasFile, l
But I get the following error message: UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-13: ordinal not in range(128)
Is there a way to dump fread.split() into a file? How can I get rid of this error?
Since you've opened and read the file via codecs.open(), it's been decoded to Unicode. So to output it you need to encode it again, presumably back to UTF-8.
for l in a:
    dasFile.write(l.encode('utf-8') + '\n')  # add the newline that print>> used to supply
print is going to use the default encoding, which is normally "ascii". So you see that error with print. But you can open a file and write directly to it.
a = fopen.readlines() # returns a list of lines already, with line endings intact
# do something with a
dasFile.writelines(a) # doesn't add line endings, expects them to be present already.
assuming the lines in a are encoded already.
PS. You should also investigate the io module.
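As a follow-up to that suggestion, a sketch of the whole round trip using io.open (the output filename is a placeholder):

import io

with io.open('thefile', 'r', encoding='utf8') as fopen:
    words = fopen.read().split()

with io.open('output.txt', 'w', encoding='utf8') as dasFile:
    for w in words:
        dasFile.write(w + u'\n')   # text-mode io files encode the unicode strings for you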