International characters in Python

I'm currently working on a Python script that takes a list of log files (from a search engine) and produces a file with all the queries within these, for later analysis.
Another feature of the script is that it removes the most common words, which I've also implemented, but I've run into a problem I can't seem to overcome. The removal of words works as intended as long as the queries do not contain special characters. As the search logs are in Danish, the characters æ, ø and å appear regularly.
Searching on the topic I'm now aware that I need to encode these into UTF-8, which I'm doing when obtaining the query:
tmp = t_query.encode("UTF-8").lower().split()
t_query is the query and I split it up to later compare each word with my list of forbidden words. If I do not use the encoding I'll get the error:
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe6 in position 1: ordinal not in range(128)
Edit: I also tried using the decode instead, but get the following error:
UnicodeEncodeError: 'ascii' codec can't encode character u'\xa7' in position 3: ordinal not in range(128)
I loop through the words like this:
for i in tmp:
    if i in words_to_filter:
        tmp.remove(i)
As said, this works perfectly for words not including special characters. I've tried printing i along with the current forbidden word and get, e.g.:
færdelsloven - færdelsloven
Where the first word is the ith element in tmp and the last word is the one from the forbidden words. Obviously something has gone wrong, but I just can't manage to find a solution. I've tried many suggestions found on Google and on here, but nothing has worked so far.
Edit 2: in case it makes a difference, I've tried loading the log files both with and without the use of codecs:
with codecs.open(file_name, "r", "utf-8") as f_src:
    jlogs = map(json.loads, f_src.readlines())
I'm running Python 2.7.2 from a Windows environment, if it matters. The script should be able to run on other platforms (namely Linux and Mac OS).
I would really appreciate it if one of you is able to help me out.
Best regards
Casper

If you are reading files, you want to decode them.
tmp = t_query.decode("UTF-8").lower().split()

Given a UTF-8 file with one JSON object per line, you could read all the objects:
with open(filename) as file:
    jlogs = [json.loads(line) for line in file]
Except for the treatment of embedded newlines, the above code should produce the same result as yours:
with codecs.open(file_name, "r", "utf-8") as f_src:
    jlogs = map(json.loads, f_src.readlines())
At this point all strings in jlogs are Unicode, so you don't need to do anything extra to handle "special" characters. Just make sure you are not mixing bytes and Unicode text in your code.
to get Unicode text from bytes: some_bytes.decode(character_encoding)
to get bytes from Unicode text: some_text.encode(character_encoding)
Don't encode bytes/decode Unicode.
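For instance, here is a minimal Python 2 sketch using the question's names (the stop-word list is made up for the example) showing why the comparison only works when both sides are Unicode:
# -*- coding: utf-8 -*-
# Decode once, compare unicode to unicode, and build a new list instead of
# removing items from the list while iterating over it.
words_to_filter = set(w.decode("utf-8") for w in ["og", "det", "færdelsloven"])

t_query = "Færdelsloven og fartgrænser"          # pretend this came from the log
tmp = t_query.decode("utf-8").lower().split()
tmp = [w for w in tmp if w not in words_to_filter]
print u" ".join(tmp)                             # fartgrænser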

If the encoding is right and you just want to ignore unexpected characters, you can pass the errors='ignore' or errors='replace' parameter to the codecs.open function.
with codecs.open(file_name, encoding='utf-8', mode='r', errors='ignore') as f:
    jlogs = map(json.loads, f.readlines())
Details in docs:
http://docs.python.org/2/howto/unicode.html#reading-and-writing-unicode-data

I've finally solved it. As Lattyware suggested, Python 3.x seems to handle this much better. After changing versions and saving the Python source file as UTF-8, it works as intended.


python codecs can't encode to cp1252...but notepad++ can?

I have a very simple piece of code that's converting a CSV. Also note that I reference Notepad++ a few times, but my standard IDE is VS Code.
with codecs.open(filePath, "r", encoding="UTF-8") as sourcefile:
    lines = sourcefile.read()
with codecs.open(filePath, 'w', encoding='cp1252') as targetfile:
    targetfile.write(lines)
Now the job I'm doing requires a specific file to be encoded in windows-1252, and from what I understand cp1252 = windows-1252. This conversion works fine when I do it using the UI features in Notepad++, but when I try using Python codecs to encode this file it fails:
UnicodeEncodeError: 'charmap' codec can't encode character '\ufffd' in position 561488: character maps to <undefined>
When I saw this failure I was confused, so I double-checked the output from when I manually convert the file using Notepad++, and the converted file is encoded in windows-1252. So what gives? Why is a UI feature in Notepad++ able to do the job when codecs seems not to be able to? Does Notepad++ just ignore errors?
Looks like your input text has the character "�" (the actual "replacement character" placeholder, not some other undefined character), which cannot be mapped to cp1252 (because cp1252 simply has no such character).
Depending on what you need, you can:
Filter it out (or replace it, or otherwise handle it) in Python before writing out lines to the output file (see the sketch after this list).
Pass errors=... to the second codecs.open, choosing one of the other error-handling modes; the default is 'strict', you can also use 'ignore', 'replace', 'xmlcharrefreplace', 'backslashreplace' or 'namereplace'.
Check the input file and see why it's got the "�" character; is it corrupted?
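A hedged sketch of the first two options, reusing the question's filePath (the '?' replacement character is just an example):
import codecs

with codecs.open(filePath, "r", encoding="UTF-8") as sourcefile:
    lines = sourcefile.read()

# Option 1: handle U+FFFD yourself before writing.
lines = lines.replace(u"\ufffd", u"?")

# Option 2: let the codec deal with anything it can't map.
with codecs.open(filePath, "w", encoding="cp1252", errors="replace") as targetfile:
    targetfile.write(lines)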
Probably Python is simply more explicit in its error handling. If Notepad++ managed to represent every character correctly in CP-1252 then there is a bug in the Python codec where it should not fail where it currently does; but I'm guessing Notepad++ is silently replacing some characters with some other characters, and falsely claiming success.
Maybe try converting the result back to UTF-8 and compare the files byte by byte if the data is not easy to inspect manually.
Unicode U+FFFD is a reserved character which serves as a placeholder for a character which cannot be represented in Unicode; often, it's an indication of an earlier conversion problem, when presumably this data was imperfectly input or converted at an earlier point in time.
(And yes, Windows-1252 is another name for Windows code page 1252.)
Why Notepad++ "succeeds"
Notepad++ does not offer to convert your file to cp1252, but to reinterpret it using this encoding. What led to your confusion is that the Encoding menu in the program actually uses the wrong term for this.
When "Encode with cp1252" is selected, Notepad decodes the file using cp1252 and shows you the result. If you save the character '\ufffd' to a file using utf8:
with open('f.txt', 'w', encoding='utf8') as f:
    f.write('\ufffd')
and use "Encode with cp1252" you'd see three characters:
That means that Notepad++ does not read the character in utf8 and then write it in cp1252, because then you'd see exactly one character. You could achieve similar results to Notepad++ by reading the file using cp1252:
with open('f.txt', 'r', encoding='cp1252') as f:
    print(f.read())  # Prints ï¿½
Notepad++ actually lets you convert to only five encodings, via the "Convert to ..." entries in that same Encoding menu.
What should you do
This character does not exist in the cp1252 encoding, which means you can't convert this file without losing information. Common solutions are to skip such characters or replace them with other similar characters that exist in your target encoding (see encoding error handlers).
You are dealing with the "utf-8-sig" encoding -- please specify this one as the encoding argument instead of "utf-8".
There is information on it in the docs (search the page for "utf-8-sig").
To increase the reliability with which a UTF-8 encoding can be detected, Microsoft invented a variant of UTF-8 (that Python 2.5 calls "utf-8-sig") for its Notepad program: Before any of the Unicode characters is written to the file, a UTF-8 encoded BOM (which looks like this as a byte sequence: 0xef, 0xbb, 0xbf) is written. [...]
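A minimal sketch of that suggestion (the file name here is a placeholder): open the file with "utf-8-sig" so a leading BOM (the bytes EF BB BF) is consumed instead of ending up in the data.
import codecs

with codecs.open("input.txt", "r", encoding="utf-8-sig") as f:
    text = f.read()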

'utf-8' codec can't decode byte 0xa0 in position 4276: invalid start byte

I'm trying to read and print the following file: txt.tsv (https://www.sec.gov/files/dera/data/financial-statement-and-notes-data-sets/2017q3_notes.zip)
According to the SEC the data set is provided in a single encoding, as follows:
Tab Delimited Value (.txt): utf-8, tab-delimited, \n- terminated lines, with the first line containing the field names in lowercase.
My current code:
import csv
with open('txt.tsv') as tsvfile:
    reader = csv.DictReader(tsvfile, dialect='excel-tab')
    for row in reader:
        print(row)
All attempts ended with the following error message:
'utf-8' codec can't decode byte 0xa0 in position 4276: invalid start byte
I am a bit lost. Can anyone help me?
The encoding of the file is 'windows-1252'. Use:
open('txt.tsv', encoding='windows-1252')
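Applied to the question's code, that would look like this (Python 3):
import csv

with open('txt.tsv', encoding='windows-1252') as tsvfile:
    reader = csv.DictReader(tsvfile, dialect='excel-tab')
    for row in reader:
        print(row)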
If someone works on Turkish data, then I suggest this line:
df = pd.read_csv("text.txt",encoding='windows-1254')
ds = pd.read_csv('/Dataset/test.csv', encoding='windows-1252')
Works fine for me, thanks.
I had the same error message for a .csv file, and this worked for me:
df = pd.read_csv('Text.csv',encoding='ANSI')
I also encountered the same issue, and it worked when using latin1 encoding; refer to the sample code below to apply it in your codebase. Give it a try if the above resolutions don't work.
df=pd.read_csv("../CSV_FILE.csv",na_values=missing, encoding='latin1')
If the input has a stray '\xa0', then it's not in UTF-8, full stop.
Yes, you have to either recode it to UTF-8 (see: iconv, recode commands, or a lot of text editors and IDEs can do it), or read it using an 8-bit encoding (as all the other answers suggest).
What you should ask yourself is - what is this character after all (0xa0 or 160)?
Well, in many 8-bit encodings it's a non-breaking space (like in HTML). For at least one DOS encoding it's an accented "a" character. That's why you need to look at the result of decoding it from the 8-bit encoding.
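For example, the same byte decodes quite differently depending on which 8-bit encoding you assume (Python 3):
print(b'\xa0'.decode('windows-1252'))   # '\xa0', a non-breaking space
print(b'\xa0'.decode('latin-1'))        # also a non-breaking space
print(b'\xa0'.decode('cp437'))          # 'á' in the old DOS code page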
BTW, sometimes people say "UTF-8", and they mean "mostly ASCII, I guess". And if it was a non-breaking space, they weren't that far off:
In [1]: '\xa0'.encode()
Out[1]: b'\xc2\xa0'
One extra preceding '\xc2' byte would do the trick.
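And if the file really is windows-1252, recoding it to actual UTF-8 is a one-off (the output file name is just an example, Python 3):
with open('txt.tsv', encoding='windows-1252') as src, \
     open('txt_utf8.tsv', 'w', encoding='utf-8') as dst:
    dst.write(src.read())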

Converting ascii characters in text file to Unicode

We are creating a website using Django 1.5. We have several large text files stored on the server that are to be rendered with the web page, depending on the country. The problem is that these text files contain the copyright symbol (c), and we keep getting a 'Non-ascii character' error, and the text does not load. Does anyone have any suggestions on how to successfully convert one to the other?
Selections of the Code:
# Open file, where filename is our variable
with open(filename) as f:
    # Append (it is in a loop, and we are only passing one document variable)
    document = document + f.read()
f.close
We have tried using:
mark safe (in django)
smart_str
.encode('utf8')
But to no avail; the page continues to spit back an error saying there is a non-ascii character that it cannot convert. Any ideas?
Here is the error we keep getting
UnicodeDecodeError at /<website-hidden>/
'ascii' codec can't decode byte 0x92 in position 950: ordinal not in range(128)
The issue is that the copyright symbol isn't a strict ASCII character, as its 8th (most significant) bit is 1. ASCII only uses 7 bits. You need to tell Python that the file isn't ASCII data, but something like "Extended ASCII", "ISO 8859-1" or "ISO Latin-1" data.
As such, you need to read it as bytes and then convert it to a string using that decoding. You can then re-encode it to anything you want, including UTF-8.
Exact handling for this depends on whether you are using Python 2.x or 3.x.
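A hedged sketch of that approach, assuming the files are Windows-1252/Latin-1 encoded (byte 0x92 from the traceback is a curly apostrophe in Windows-1252); io.open works the same way on Python 2 and 3:
import io

document = u""
with io.open(filename, encoding="windows-1252") as f:
    document = document + f.read()

# document is now Unicode text; encode only where bytes are actually required:
utf8_bytes = document.encode("utf-8")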
Ref
http://www.ascii-code.com/
https://en.wikipedia.org/wiki/Extended_ASCII
A restart of the computer and Eclipse seemed to do the trick. Perhaps it was a problem with the cache? Either way, strange error...

Python: convert Chinese characters into pinyin with CJKLIB

I'm trying to convert a bunch of Chinese characters into pinyin, reading the characters from one file and writing the pinyin into another. I'm working with the CJKLIB functions to do this.
Here's the code,
from cjklib.characterlookup import CharacterLookup

source_file = 'cities_test.txt'
dest_file = 'output.txt'

s = open(source_file, 'r')
d = open(dest_file, 'w')

cjk = CharacterLookup('T')
for line in s:
    p = line.split('\t')
    for p_shard in p:
        for c in p_shard:
            readings = cjk.getReadingForCharacter(c.encode('utf-8'), 'Pinyin')
            d.write(readings[0].encode('utf-8'))
        d.write('\t')
    d.write('\n')

s.close()
d.close()
My problem is that I keep running into Unicode-related errors; the error comes up when I call the getReadingForCharacter function. If I call it as written,
readings = cjk.getReadingForCharacter(c.encode('utf-8'), 'Pinyin')
I get: UnicodeDecodeError: 'ascii' codec can't decode byte 0xef in position 0: ordinal not in range (128).
If I call it like this, without the .encode(),
readings = cjk.getReadingForCharacter(c, 'Pinyin')
I get an error thrown by sqlalchemy (the CJKLIB uses sqlalchemy and sqlite): You must not use 8-bit bytestrings unless you use a text_factory that can interpret 8-bit bytestrings ... etc.
Can someone help me out? Thanks!
Oh, also: is there a way for CJKLIB to return the pinyin without any tones? I think by default it returns pinyin with these weird characters to represent tones; I just want the letters without the tones.
Your bug is that you are not decoding the input stream, and yet you are turning around and re-encoding it as though it were UTF-8. That’s going the wrong way.
You have two choices.
You can codecs.open the input file with an explicit encoding so you always get back regular Unicode strings whenever you read from it because the decoding is automatic. This is always my strong preference. There is no such thing as a text file anymore.
Your other choice is to manually decode your binary string before you pass it to the function. I hate this style, because it almost always indicates that you're doing something wrong, and even when it doesn't, it is clumsy as all get out.
I would do the same thing for the output file. I just hate seeing manually .encode("utf-8") and .decode("utf-8") all over the place. Set the stream encoding and be done with it.
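A hedged rework of the question's loop along those lines (Python 2, same file names and calls as in the question; the guard on empty readings is an addition):
import codecs
from cjklib.characterlookup import CharacterLookup

cjk = CharacterLookup('T')
with codecs.open('cities_test.txt', 'r', encoding='utf-8') as s, \
     codecs.open('output.txt', 'w', encoding='utf-8') as d:
    for line in s:
        for p_shard in line.split('\t'):
            for c in p_shard:
                # c is already a unicode character; no .encode() before the lookup
                readings = cjk.getReadingForCharacter(c, 'Pinyin')
                if readings:
                    d.write(readings[0])
            d.write('\t')
        d.write('\n')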

How to parse unicode strings with minidom?

I'm trying to parse a bunch of XML files with the library xml.dom.minidom, to extract some data and put it in a text file. Most of the XMLs go well, but for some of them I get the following error when calling minidom.parseString():
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2019' in position 5189: ordinal not in range(128)
It happens for some other non-ascii characters too. My question is: what are my options here? Am I supposed to somehow strip/replace all those non-English characters before being able to parse the XML files?
Try to decode it:
>>> print u'abcdé'.encode('utf-8')
abcdé
>>> print u'abcdé'.encode('utf-8').decode('utf-8')
abcdé
In case your string is 'str':
xmldoc = minidom.parseString(u'{0}'.format(str).encode('utf-8'))
This worked for me.
Minidom doesn't directly support parsing Unicode strings; it's something that has historically had poor support and standardisation. Many XML tools recognise only byte streams as something an XML parser can consume.
If you have plain files, you should either read them in as byte strings (not Unicode!) and pass that to parseString(), or just use parse() which will read a file directly.
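A minimal sketch of both options (the file name is a placeholder):
from xml.dom import minidom

# Either read the raw bytes yourself and hand them to parseString() ...
with open('some_file.xml', 'rb') as f:
    doc = minidom.parseString(f.read())

# ... or let minidom read the file directly.
doc = minidom.parse('some_file.xml')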
I know the O.P. asked about parsing strings, but I had the same exception upon writing the DOM model to a file via Document.writexml(...). In case people with that (related) problem land here, I will offer my solution.
My code which was throwing the UnicodeEncodeError looked like:
with tempfile.NamedTemporaryFile(delete=False) as fh:
    dom.writexml(fh, encoding="utf-8")
Note that the "encoding" param only effects the XML header and has no effect on the treatment of the data. To fix it, I changed it to:
with tempfile.NamedTemporaryFile(delete=False) as fh:
    fh = codecs.lookup("utf-8")[3](fh)
    dom.writexml(fh, encoding="utf-8")
This will wrap the file handle with an instance of encodings.utf_8.StreamWriter, which handles the data as UTF-8 rather than ASCII, and the UnicodeEncodeError went away. I got the idea from reading the source of xml.dom.minidom.Node.toprettyxml(...).
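For what it's worth, codecs.getwriter is an arguably clearer way to obtain the same StreamWriter wrapper (dom as in the snippet above):
import codecs
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as fh:
    writer = codecs.getwriter("utf-8")(fh)
    dom.writexml(writer, encoding="utf-8")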
I've encountered this error a few times, and my hacky way of dealing with it is just to do this:
def getCleanString(word):
    clean = ""
    for character in word:
        try:
            clean = clean + str(character)   # fails for non-ASCII characters
        except UnicodeEncodeError:
            pass  # this happens if the character is non-ASCII unicode
    return clean
Of course, this is probably a dumb way of doing it, but it gets the job done for me, and doesn't cost me anything in speed.
