Converting latin-1 encoded UTF-8 string in Python

I'm using the Python 2.x library email to iterate over some .eml files, but I have Python 3.x installed.
I extract the filename from the header of each payload (attachment) using .get_filename(). The encoding is not set in the header, so I believe Python 3.x interprets the returned string as utf-8. However, when the filename contains special characters such as "ø", the string looks like this:
=?ISO-8859-1?Q?Sp=F8rgeskema=2Edoc?=
I have tried in numerous ways to convert this string to utf-8: with and without converting it to bytes first, and decoding and encoding with latin-1, ISO-8859-1 (which should be the same thing) and utf-8. All attempts failed.
I've also tried using:
ast.literal_eval(r"b'=?ISO-8859-1?Q?Sp=F8rgeskema=2Edoc?='")
and decoding that, but it still returns the original string containing the encoded characters.
How does one go about this?

You are handling email, so you can use email handling functions:
Try with https://docs.python.org/3.5/library/email.header.html.
The last example there covers this (and it's a very small module):
>>> from email.header import decode_header
>>> decode_header('=?iso-8859-1?q?p=F6stal?=')
[(b'p\xf6stal', 'iso-8859-1')]
There is also a version for Python 2.7.
So for your case:
from email.header import decode_header

subj = '=?ISO-8859-1?Q?Sp=F8rgeskema=2Edoc?='
subject, encoding = decode_header(subj)[0]
print(subject.decode(encoding))
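Note that decode_header returns a list, because a header may mix several encoded-words with plain text, and the charset can be None for the plain fragments. If you want a single unicode string regardless of the header's shape, a helper along these lines should work under Python 3 (decode_mime_header is just a hypothetical name for illustration):

from email.header import decode_header

def decode_mime_header(raw):
    # Each fragment is (bytes, charset) for an encoded-word; headers
    # without any encoded-words come back as a single (str, None) pair.
    parts = []
    for fragment, charset in decode_header(raw):
        if isinstance(fragment, bytes):
            # Fall back to ASCII when no charset is declared.
            parts.append(fragment.decode(charset or 'ascii'))
        else:
            parts.append(fragment)
    return ''.join(parts)

print(decode_mime_header('=?ISO-8859-1?Q?Sp=F8rgeskema=2Edoc?='))  # Spørgeskema.doc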


When the Python interpreter loads a source file, will it convert the file content to unicode in memory?

Say I have a source file encoded in UTF-8. When the Python interpreter loads that source file, will it convert the file content to unicode in memory and then try to evaluate the source code in unicode?
If I have a string with a non-ASCII char in it, like
astring = '中文'
and the file is encoded in gbk.
Running that file with Python 2, I found that the string is actually still raw GBK bytes.
So I suspect the Python 2 interpreter does not convert source code to unicode, because if it did, the string content would be unicode (I heard it is actually UTF-16).
Is that right? And if so, what about the Python 3 interpreter? Does it convert source code to unicode format?
Actually, I know how to define unicode and raw strings in both Python 2 and 3.
I'm just curious about one detail when the interpreter loads source code.
Will it convert the WHOLE raw source code (encoded bytes) to unicode at the very beginning and then try to interpret the unicode-format source code piece by piece?
Or instead, does it just load the raw source piece by piece, and only decode what it thinks it should? For example, when it hits the statement u'中文', OK, decode to unicode; when it hits the statement b'中文', OK, no need to decode.
Which way does the interpreter go?
If your source file is encoded with GBK, put this line at the top of the file (first or second line):
# coding: gbk
This is required for both Python 2 and 3.
If you omit this encoding declaration, the interpreter will assume ASCII in the case of Python 2, and UTF-8 for Python 3.
The encoding declaration controls how the interpreter reads the bytes of the source file. This is mainly relevant for string literals (like in your example), but theoretically also applies to comments and even identifiers (it's probably not a good idea to use non-ASCII in identifiers, though).
As for the question whether you get byte strings or unicode strings: this depends on the syntax, not on the choice and declaration of encoding.
As pointed out in Ignacio's answer, if you want to have unicode strings in Python 2, you need to use the u'...' notation.
In Python 3, the u prefix is optional.
So, with a correct encoding declaration in the file header, it is sufficient to write astring = '中文' to get a correct unicode string in Python 3.
Update
In a comment, the OP asks about the interpretation of b'中文'.
In Python 3, this isn't allowed (byte strings can only contain ASCII characters), but you can test this yourself in Python 2.x:
# coding: gbk
x = b'中文'
y = u'中文'
print repr(x)
print repr(y)
This will produce:
'\xd6\xd0\xce\xc4'
u'\u4e2d\u6587'
The first line reflects the actual bytes contained in the source file (if you saved it with GBK, of course).
So there seems to be no decoding happening for b'中文'.
However, I don't know how the interpreter internally represents the source code with respect to encoding (that seems to be your question).
This is implementation-dependent anyway, so the answer might even differ between CPython, Jython, IronPython, etc.
So I suspect the Python 2 interpreter does not convert source code to unicode.
It never does. If you want Unicode rather than bytes, then you need to use a unicode literal instead.
astring = u'中文'
Python source is treated as plain ASCII by default, meaning that the actual encoding does not matter except for literal strings, be they unicode strings or byte strings. Identifiers can use non-ASCII characters (in my opinion that would be a very bad practice), but their meaning is normally internal to the Python interpreter, so the way it reads them is not really important.
Byte strings are always left unchanged. That means that normal strings in Python 2 and byte literal strings in Python 3 are never converted.
Unicode strings are always converted:
if the special string coding: charset_name exists in a comment on the first or second line, the original byte string is converted as it would be with decode(charset_name)
if no encoding is specified, Python 2 will assume ASCII and Python 3 will assume utf8
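If you want to watch this decision being made, Python 3 exposes the interpreter's encoding-detection logic through tokenize.detect_encoding. A minimal sketch:

import io
import tokenize

# A source file that starts with a coding declaration, as raw bytes:
source = b"# coding: gbk\nx = 'hello'\n"
encoding, lines_read = tokenize.detect_encoding(io.BytesIO(source).readline)
print(encoding)  # gbk

# Without a declaration, Python 3 falls back to utf-8:
encoding, _ = tokenize.detect_encoding(io.BytesIO(b'x = 1\n').readline)
print(encoding)  # utf-8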

Why is sys.getdefaultencoding() different from sys.stdout.encoding and how does this break Unicode strings?

I spent a few angry hours tracking down a problem with Unicode strings that boiled down to something Python (2.7) hides from me, and I still don't understand it. First, I tried to use u".." strings consistently in my code, but that resulted in the infamous UnicodeEncodeError. I tried using .encode('utf8'), but that didn't help either. Finally, it turned out I shouldn't use either and it all works out automagically.

However (and here I need to give credit to a friend who helped me), I did notice something weird while banging my head against the wall: sys.getdefaultencoding() returns ascii, while sys.stdout.encoding returns UTF-8. Case 1 in the code below works fine without any modifications to sys, while case 2 raises a UnicodeEncodeError. If I change the default system encoding with reload(sys).setdefaultencoding("utf8"), then case 2 works fine.

My question is: why are the two encoding values different in the first place, and how do I manage to use the wrong encoding in this simple piece of code? Please don't send me to the Unicode HOWTO; I've read it, obviously, in the tens of questions about UnicodeEncodeError.
# -*- coding: utf-8 -*-
import sys
class Token:
    def __init__(self, string, final=False):
        self.value = string
        self.final = final
    def __str__(self):
        return self.value
    def __repr__(self):
        return self.value
print(sys.getdefaultencoding())
print(sys.stdout.encoding)
# 1.
myString = "I need 20 000€."
tok = Token(myString)
print(tok)
reload(sys).setdefaultencoding("utf8")
# 2.
myString = u"I need 20 000€."
tok = Token(myString)
print(tok)
My question is why the two encoding variables are different in the first place
They serve different purposes.
sys.stdout.encoding should be the encoding that your terminal uses to interpret text; otherwise you may get mojibake in the output. It may be utf-8 in one environment, cp437 in another, etc.
sys.getdefaultencoding() is used on Python 2 for implicit conversions (when the encoding is not set explicitly), i.e., Python 2 may mix ascii-only bytestrings and Unicode strings together. For example, xml.etree.ElementTree stores text in the ascii range as bytestrings, and json.dumps() returns an ascii-only bytestring instead of Unicode in Python 2, perhaps for performance: bytes were cheaper than Unicode for representing ascii characters. Implicit conversions are forbidden in Python 3.
sys.getdefaultencoding() is always 'ascii' on all systems in Python 2 unless you override it, which you should not do; overriding may hide bugs, and your data may easily be corrupted by implicit conversions that use a possibly wrong encoding for the data.
By the way, there is another common encoding, sys.getfilesystemencoding(), that may differ from the other two. It should be the encoding that is used to encode OS data (filenames, command-line arguments, environment variables).
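You can inspect all three on your own system; the values in the comments are typical examples, not guarantees:

import sys

print(sys.getdefaultencoding())     # 'ascii' on Python 2, 'utf-8' on Python 3
print(sys.stdout.encoding)          # terminal-dependent, e.g. 'UTF-8' or 'cp437'
print(sys.getfilesystemencoding())  # OS-dependent, e.g. 'utf-8' or 'mbcs'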
The source code encoding declared using # -*- coding: utf-8 -*- may be different from all of the already-mentioned encodings.
Naturally, if you read data from a file or the network, it may use character encodings different from all of the above; e.g., if a file created in Notepad is saved using a Windows ANSI encoding such as cp1252, then on another system all the standard encodings can differ from it.
The point being: there can be multiple encodings for reasons unrelated to Python. To avoid the headache, use Unicode to represent text: convert encoded text to Unicode as soon as possible on input, and encode it to bytes (possibly using a different encoding) as late as possible on output. This is the so-called "Unicode sandwich" concept.
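In code, the sandwich can look like this minimal Python 2 sketch (the file names are hypothetical); io.open decodes on read and encodes on write, so everything in between is unicode:

# -*- coding: utf-8 -*-
import io

# Input layer: decode bytes to unicode as early as possible.
with io.open('names.txt', encoding='utf-8') as f:
    names = [line.strip() for line in f]   # unicode objects from here on

# Processing layer: work purely on unicode.
names = [name.title() for name in names]

# Output layer: encode back to bytes as late as possible.
with io.open('names_out.txt', 'w', encoding='utf-8') as f:
    f.write(u'\n'.join(names))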
how do I manage to use the wrong encoding in this simple piece of code?
Your first code example is not fine: you use non-ascii literal characters in a byte string on Python 2, which you should not do. Use bytestring literals only for binary data (or for so-called native strings if necessary). The code may produce mojibake such as I need 20 000Γé¼. (notice the character noise) if you run it using Python 2 in any environment that does not use a utf-8-compatible encoding, such as the Windows console.
The second code example is OK, assuming reload(sys) is not part of it. If you don't want to prefix all string literals with u'', you could use from __future__ import unicode_literals.
Your actual issue is the UnicodeEncodeError, and reload(sys) is not the right solution!
The correct solution is to configure your locale properly on POSIX (LANG, LC_CTYPE), to set the PYTHONIOENCODING envvar if the output is redirected to a pipe/file, or to install win-unicode-console to print Unicode to the Windows console.
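If you cannot control the environment at all, a further workaround (a sketch, not part of the advice above) is to encode explicitly at the output boundary, falling back to utf-8 when sys.stdout.encoding is None (as it is when output is piped):

import sys

text = u"I need 20 000\u20ac."
encoding = sys.stdout.encoding or 'utf-8'  # None when redirected to a pipe/file
sys.stdout.write(text.encode(encoding, 'replace') + '\n')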
I have noticed the same behaviour in some standard code (the Mailman library).
Thanks for your analysis, it helped me save some time. :-)
The problem is exactly the same: my system uses sys.getdefaultencoding() and gets ascii, which is inappropriate for handling a list of 1000 UTF-8 encoded names.
There is a mismatch between the stdin/stdout and even filesystem encodings (utf-8) on one hand and the "defaultencoding" (ascii) on the other. The thread How to print UTF-8 encoded text to the console in Python < 3? seems to indicate that this is well known, and Changing default encoding of Python? contains some indication that a more homogeneous setup (like "utf-8 everywhere") would break other things, such as the hash implementation.
For that reason it is also not straightforward to change the default encoding. (See http://blog.ianbicking.org/illusive-setdefaultencoding.html for various ways to do so.) The setdefaultencoding function is removed from the sys module by site.py.

Replacing = with '\x' and then decoding in Python

I fetched the subject of an email message using Python modules and received the string
'=D8=B3=D9=84=D8=A7=D9=85_=DA=A9=D8=AC=D8=A7=D8=A6=DB=8C?='
I know the string is encoded in 'utf-8'. Python has a decode method on strings to handle such strings, but to use it I needed to replace the = signs with the \x escape prefix. After doing that interchange manually and printing the decoded result, I get the string سلام_کجائی, which is exactly what I want. The question is: how can I do the interchange automatically? It seems harder than a simple use of string functions like replace.
Below is the code I used after the manual operation:
r='\xD8\xB3\xD9\x84\xD8\xA7\xD9\x85_\xDA\xA9\xD8\xAC\xD8\xA7\xD8\xA6\xDB\x8C'
print r.decode('utf-8')
I would appreciate any workable idea.
Just decode it from quoted-printable to get a utf-8 encoded bytestring:
In [35]: s = '=D8=B3=D9=84=D8=A7=D9=85_=DA=A9=D8=AC=D8=A7=D8=A6=DB=8C?='
In [36]: s.decode('quoted-printable')
Out[36]: '\xd8\xb3\xd9\x84\xd8\xa7\xd9\x85_\xda\xa9\xd8\xac\xd8\xa7\xd8\xa6\xdb\x8c?'
Then, if needed, from utf-8 to unicode:
In [37]: s.decode('quoted-printable').decode('utf8')
Out[37]: u'\u0633\u0644\u0627\u0645_\u06a9\u062c\u0627\u0626\u06cc?'
In [39]: print s.decode('quoted-printable')
سلام_کجائی?
This sort of encoding is known as quoted-printable. There is a Python module for performing encoding and decoding.
You're right that it's just a plain quoting of binary strings, so you need to apply UTF-8 decoding afterwards. (Assuming the string is in UTF-8, of course, but that looks correct, although I don't know the language.)
import quopri
print quopri.decodestring('=D8=B3=D9=84=D8=A7=D9=85_=DA=A9=D8=AC=D8=A7=D8=A6=DB=8C?=').decode('utf-8')
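For reference, the same decode under Python 3, where quopri operates on bytes rather than str (a quick sketch):

import quopri

raw = b'=D8=B3=D9=84=D8=A7=D9=85_=DA=A9=D8=AC=D8=A7=D8=A6=DB=8C?='
print(quopri.decodestring(raw).decode('utf-8'))  # سلام_کجائی?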

Python (2.6) cStringIO unicode support?

I'm using the Python pycurl module to download content from various web pages. Since I also wanted to support potential unicode text, I've been avoiding the cStringIO.StringIO function, which according to the Python docs (cStringIO - Faster version of StringIO):
Unlike the StringIO module, this module is not able to accept Unicode strings that cannot be encoded as plain ASCII strings.
...does not support unicode strings. Actually, it states that it does not support unicode strings that cannot be converted to ASCII strings. Can someone please clarify this for me? Which can and which cannot be converted?
I've tested with the following code and it seems to work just fine with unicode:
import pycurl
import cStringIO
downloadedContent = cStringIO.StringIO()
curlHandle = pycurl.Curl()
curlHandle.setopt(pycurl.WRITEFUNCTION, downloadedContent.write)
curlHandle.setopt(pycurl.URL, 'http://www.ltg.ed.ac.uk/~richard/unicode-sample.html')
curlHandle.perform()
content = downloadedContent.getvalue()
fileHandle = open('unicode-test.txt','w')
for char in content:
    fileHandle.write(char)
And the file is correctly written. I can even print the whole content in the console, and all characters show up fine... So what I'm puzzled about is: where does cStringIO fail? Is there any reason why I should not use it?
[Note: I'm using Python 2.6 and need to stick to this version]
Any text that only uses ASCII codepoints (byte values 00-7F hexadecimal) can be converted to ASCII. Basically, any text that uses characters outside that range (anything beyond the unaccented letters, digits, and punctuation common in American English) is not ASCII.
In your example code, you are not converting the input to Unicode text; you are treating it as un-decoded bytes. The test page in question is encoded in UTF-8, and you never decode that to Unicode.
If you were to decode the value to a Unicode string, you wouldn't be able to store that string in a cStringIO object.
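A quick Python 2 experiment (a sketch, not from the original answer) shows exactly where cStringIO draws the line:

import cStringIO

buf = cStringIO.StringIO()
buf.write(u'plain ascii')     # fine: this unicode string encodes as plain ASCII
buf.write(u'Sp\xf8rgeskema')  # raises UnicodeEncodeError: 'ø' is not ASCII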
You may want to read up on the difference between Unicode and text encodings such as ASCII and UTF-8. I can recommend:
Joel Spolsky's minimum Unicode article
The Python Unicode HOWTO.

Processing UTF-8 data in Python

I'm using the Python module requests to get data from some APIs, and they all return json data which is converted to dicts. What I want to do is take some info from these dicts and either convert it all to Python strings, on which I can use stemming and string.translate(), or convert the whole thing to data that is recognisable to these modules. I can't do this with the UTF-8 data and it's doing my head in. Is there any solution to this at all? Can I iterate through the dict and convert it to ASCII?
The strange thing is that I am comparing ASCII strings to the UTF data in other functions (if ASCII_word in UTF_dict: do something) and it works perfectly; the ASCII value matches the UTF-8 data all the time. I can't get my head around this encoding stuff at all.
UTF-8 is an extension of ASCII in that valid 7-bit ASCII text is also valid UTF-8 text, so if all the data is in fact representable in ASCII, it doesn't make any difference whether it's ASCII or UTF-8.
If the data coming in is UTF-8 encoded, the best approach is to decode it to unicode objects. For example, if you read in a string from some source and store it in the variable utf8str, you can do utf8str.decode('utf-8'). Then pass this unicode object around and do all your operations on the unicode object. Instead of string.translate you can use unicode.translate (assuming you're referring to the string method called "translate" there).
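Note that unicode.translate has a different signature from string.translate: it takes a mapping keyed by code points instead of a 256-character table. A small Python 2 sketch:

utf8str = 'caf\xc3\xa9'         # UTF-8 encoded bytes
text = utf8str.decode('utf-8')  # -> u'caf\xe9'

# unicode.translate maps code points to code points, strings, or None:
print text.translate({ord(u'\xe9'): u'e'})  # prints: cafe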
If your modules cannot deal with unicode strings, you need to think about how you want to handle that. You have to decide what to do if your input contains characters that can't be represented in ASCII.
When you are sure the function does not support Unicode, you can always convert to an ASCII approximation:
import unicodedata
ascii_string = unicodedata.normalize('NFKD', unicode_string).encode('ascii', 'ignore')
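For example, with an accented input (NFKD decomposes the accented letter into a base letter plus a combining mark, and encoding to ASCII with 'ignore' then drops the mark):

import unicodedata

print unicodedata.normalize('NFKD', u'caf\xe9').encode('ascii', 'ignore')  # cafe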
