Character encoding in Python

I have a byte stream that looks like this: '\xe6\x97\xa5\xe6\x9c\xac\xe8\xaa\x9e'
str_data = '\xe6\x97\xa5\xe6\x9c\xac\xe8\xaa\x9e'
str_data is written to a text file using the following code:
file = open("test_doc","w")
file.write(str_data)
file.close()
If test_doc is opened in a web browser and character encoding is set to Japanese it just works fine.
I am using ReportLab for generating the PDF, using the following code:
from reportlab.pdfbase import pdfmetrics
from reportlab.pdfgen.canvas import Canvas
from reportlab.pdfbase.cidfonts import CIDFont
pdfmetrics.registerFont(CIDFont('HeiseiMin-W3','90ms-RKSJ-H'))
pdfmetrics.registerFont(CIDFont('HeiseiKakuGo-W5','90ms-RKSJ-H'))
c = Canvas('test1.pdf')
c.setFont('HeiseiMin-W3-90ms-RKSJ-H', 6)
message1 = '\202\261\202\352\202\315\225\275\220\254\226\276\222\251\202\305\202\267\201B'
message3 = '\xe3\x83\x86\xe3\x82\xb9\xe3\x83\x88'
c.drawString(100, 675,message1)
c.save()
Here I use the message1 variable, which gives output in Japanese. I need to use message3 instead of message1 to generate the PDF, but message3 produces garbage, probably because of improper encoding.

Here is an answer:
message1 is encoded in shift_jis; message3 and str_data are encoded in UTF-8. All appear to represent Japanese text. See the following IDLE session:
>>> message1 = '\202\261\202\352\202\315\225\275\220\254\226\276\222\251\202\305\202\267\201B'
>>> print message1.decode('shift_jis')
これは平成明朝です。
>>> message3 = '\xe3\x83\x86\xe3\x82\xb9\xe3\x83\x88'
>>> print message3.decode('UTF-8')
テスト
>>> str_data = '\xe6\x97\xa5\xe6\x9c\xac\xe8\xaa\x9e'
>>> print str_data.decode('UTF-8')
日本語
>>>
Google Translate detects the language as Japanese and translates them to the English "This is the Heisei Mincho.", "Test", and "Japanese" respectively.
What is the question?

I guess you have to learn more about encoding of strings in general. A string in Python has no encoding information attached, so it's up to you to use it in the right way or convert it appropriately. Have a look at unicode strings, the encode/decode methods, and the codecs module. Also check whether c.drawString might allow you to pass a unicode string, which might make your life much easier.
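For example, since the font in the question is registered with the '90ms-RKSJ-H' (Shift JIS) encoding, one option is to convert message3 from UTF-8 to Shift JIS before drawing it. This is only a sketch based on the code above, not a tested fix:
from reportlab.pdfbase import pdfmetrics
from reportlab.pdfbase.cidfonts import CIDFont
from reportlab.pdfgen.canvas import Canvas

pdfmetrics.registerFont(CIDFont('HeiseiMin-W3', '90ms-RKSJ-H'))
c = Canvas('test1.pdf')
c.setFont('HeiseiMin-W3-90ms-RKSJ-H', 6)

message3 = '\xe3\x83\x86\xe3\x82\xb9\xe3\x83\x88'  # UTF-8 bytes for the Japanese word "test"
# Decode the UTF-8 bytes to unicode, then re-encode as Shift JIS to match the font encoding
c.drawString(100, 675, message3.decode('utf-8').encode('shift_jis'))
c.save()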

If you need to detect these encodings on the fly, you can take a look at Mark Pilgrim's excellent open source Universal Encoding Detector.
#!/usr/bin/env python
import chardet
message1 = '\202\261\202\352\202\315\225\275\220\254\226\276\222\251\202\305\202\267\201B'
print chardet.detect(message1)
message3 = '\xe3\x83\x86\xe3\x82\xb9\xe3\x83\x88'
print chardet.detect(message3)
str_data = '\xe6\x97\xa5\xe6\x9c\xac\xe8\xaa\x9e'
print chardet.detect(str_data)
Output:
{'confidence': 0.99, 'encoding': 'SHIFT_JIS'}
{'confidence': 0.87625, 'encoding': 'utf-8'}
{'confidence': 0.87625, 'encoding': 'utf-8'}
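If you then need the text as Unicode, you can feed chardet's guess straight into decode. A minimal sketch (chardet can misidentify short strings, so checking the confidence first is wise):
detected = chardet.detect(str_data)
if detected['encoding']:
    # Decode with whatever encoding chardet guessed; prints the Japanese text if the guess is right
    print str_data.decode(detected['encoding'])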

Related

Invalid continuation byte saving cipher

I have the following functions to create cipher text and then save it:
def create_credential(self):
    des = DES.new(CIPHER_N, DES.MODE_ECB)
    text = str(uuid.uuid4()).replace('-', '')[:16]
    cipher_text = des.encrypt(text)
    return cipher_text

def decrypt_credential(self, text):
    des = DES.new(CIPHER_N, DES.MODE_ECB)
    return des.decrypt(text)

def update_access_credentials(self):
    self.access_key = self.create_credential()
    print repr(self.access_key)  # "\xf9\xad\xfbO\xc1lJ'\xb3\xda\x7f\x84\x10\xbbv&"
    self.access_password = self.create_credential()
    self.save()
And I will call:
>>> from main.models import *
>>> u=User.objects.all()[0]
>>> u.update_access_credentials()
And this is the stacktrace I get:
UnicodeDecodeError: 'utf8' codec can't decode byte 0xf5 in position 738: invalid start byte
Why is this occurring and how would I get around it?
You are storing a bytestring in a Unicode database field, so it'll try to decode it to Unicode.
Either use a database field that can store opaque binary data, decode explicitly to Unicode (latin-1 maps bytes one-to-one to Unicode codepoints), or wrap your data in a representation that can be stored as text.
For Django 1.6 and up, use a BinaryField, for example. For earlier versions, using a binary-to-text conversion (such as Base64) would be preferable over decoding to Latin-1; the result of the latter would not give you meaningful textual data but Django may try to display it as such (in the admin interface for example).
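As a minimal sketch of the BinaryField route (model and field names here are made up for illustration):
from django.db import models

class Credential(models.Model):
    # BinaryField stores the raw DES output; no text decoding is attempted on save
    access_key = models.BinaryField()
    access_password = models.BinaryField()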
It's occurring because you're attempting to save non-text data in a text field. Either use a non-text field instead, or encode the data as text via e.g. Base-64 encoding.
Using base64 encoding and decoding here fixed this:
import base64

def create_credential(self):
    des = DES.new(CIPHER_N, DES.MODE_ECB)
    text = str(uuid.uuid4()).replace('-', '')[:16]
    cipher_text = des.encrypt(text)
    base64_encrypted_message = base64.b64encode(cipher_text)
    return base64_encrypted_message

def decrypt_credential(self, text):
    text = base64.b64decode(text)
    des = DES.new(CIPHER_N, DES.MODE_ECB)
    message = des.decrypt(text)
    return message
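A quick check of the round trip, as hypothetical usage (assuming CIPHER_N is a valid 8-byte DES key and u is a model instance with the methods above):
token = u.create_credential()            # ASCII-safe base64 text, fine for a normal text field
plaintext = u.decrypt_credential(token)  # recovers the original 16-character string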

"Prefix" before "unicode(entry.title.text, "utf-8")"

I'm editing a script that gets the name of a video URL, and the line of code that does this is:
title = unicode(entry.title.text, "utf-8")
It can be found here. Is there a simple way to add a predefined prefix before this?
For example, if there is a YouTube video named "test", the script should show "Testing Videos: test".
Just prepend a unicode string:
title = u'Testing Videos: ' + unicode(entry.title.text, "utf-8")
or use string formatting for more complex options, like adding both a prefix and a postfix:
title = u'Testing Videos: {} (YouTube)'.format(unicode(entry.title.text, "utf-8"))
All that unicode(inputvalue, codec) does is decode a byte string to a unicode value; you are free to concatenate that with other unicode values, including unicode literals.
An alternative spelling would be to use the str.decode() method on the entry.title.text object:
title = u'Testing Videos: ' + entry.title.text.decode("utf-8")
but the outcome would be the same.

Unable to encode/decode pprint output

This question is based on a side-effect of that one.
My .py files all have the # -*- coding: utf-8 -*- encoding declaration on the first line, like my api.py.
As I mention in the related question, I use HttpResponse to return the API documentation. Since I define the encoding with:
HttpResponse(cy_content, content_type='text/plain; charset=utf-8')
everything is OK, and when I call my API service there are no encoding problems, except for the string formed from a dictionary by pprint.
Since I am using Turkish characters in some values in my dict, pprint converts them to escaped equivalents, like:
API_STATUS = {
    1: 'müşteri',
    2: 'some other status message'
}

my_str = 'Here is the documentation part that contains Turkish chars like işüğçö'
my_str += pprint.pformat(API_STATUS, indent=4, width=1)
return HttpResponse(my_str, content_type='text/plain; charset=utf-8')
And my plain text output is like:
Here is the documentation part that contains Turkish chars like işüğçö
{
1: 'm\xc3\xbc\xc5\x9fteri',
2: 'some other status message'
}
I've tried decoding and encoding the pprint output to different encodings, with no success... What is the best practice to overcome this problem?
pprint appears to use repr by default; you can work around this by overriding PrettyPrinter.format:
# coding=utf8
import pprint

class MyPrettyPrinter(pprint.PrettyPrinter):
    def format(self, object, context, maxlevels, level):
        if isinstance(object, unicode):
            return (object.encode('utf8'), True, False)
        return pprint.PrettyPrinter.format(self, object, context, maxlevels, level)

d = {'foo': u'işüğçö'}
pprint.pprint(d)              # {'foo': u'i\u015f\xfc\u011f\xe7\xf6'}
MyPrettyPrinter().pprint(d)   # {'foo': işüğçö}
You should use unicode strings instead of 8-bit ones:
API_STATUS = {
    1: u'müşteri',
    2: u'some other status message'
}

my_str = u'Here is the documentation part that contains Turkish chars like işüğçö'
my_str += pprint.pformat(API_STATUS, indent=4, width=1)
The pprint module is designed to print out all possible kinds of nested structures in a readable way. To do that, it prints the object's representation rather than converting it to a string, so you'll end up with the escape syntax whether you use unicode strings or not. But if you're using Unicode in your document, then you really should be using unicode literals!
Anyway, thg435 has given you a solution for how to change this behaviour of pformat.
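Combining the two suggestions, a rough, untested sketch of the view code might look like this (MyPrettyPrinter is the subclass defined above; HttpResponse then encodes the unicode result using the declared charset):
# -*- coding: utf-8 -*-
API_STATUS = {
    1: u'müşteri',
    2: u'some other status message'
}

my_str = u'Here is the documentation part that contains Turkish chars like işüğçö\n'
# The overridden format() emits UTF-8 bytes for unicode values, so decode back to unicode here
my_str += MyPrettyPrinter(indent=4, width=1).pformat(API_STATUS).decode('utf8')
return HttpResponse(my_str, content_type='text/plain; charset=utf-8')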

Python convert file content to unicode form

For example, I have a file a.js whose content is:
Hello, 你好, bye.
It contains two Chinese characters whose Unicode escape form is \u4f60\u597d.
I want to write a Python program that converts the Chinese characters in a.js to their Unicode escape form and outputs b.js, whose content should be: Hello, \u4f60\u597d, bye.
My code:
fp = open("a.js")
content = fp.read()
fp.close()
fp2 = open("b.js", "w")
result = content.decode("utf-8")
fp2.write(result)
fp2.close()
but it seems that the Chinese characters are still written as characters, not as the ASCII escape sequences I want.
>>> print u'Hello, 你好, bye.'.encode('unicode-escape')
Hello, \u4f60\u597d, bye.
But you should consider using JSON, via the json module.
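For instance, json.dumps escapes non-ASCII characters by default, which already produces the \uXXXX form (note the surrounding quotes JSON adds):
>>> import json
>>> print json.dumps(u'Hello, 你好, bye.')
"Hello, \u4f60\u597d, bye."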
You can try the codecs module:
codecs.open(filename, mode[, encoding[, errors[, buffering]]])
a = codecs.open("a.js", "r", "cp936").read() # a is a unicode object
codecs.open("b.js", "w", "utf16").write(a)
There are two ways you can do this.
First, use the 'encode' method:
str1 = "Hello, 你好, bye. "
print(str1.encode("raw_unicode_escape"))
print(str1.encode("unicode_escape"))
You can also use the 'codecs' module:
import codecs
print(codecs.raw_unicode_escape_encode(str1))
I found that repr(content.decode("utf-8")) will return "u'Hello, \u4f60\u597d, bye'"
so repr(content.decode("utf-8"))[2:-1] will do the job
You can use repr:
a = u"Hello, 你好, bye. "
print repr(a)[2:-1]
or you can use the encode method:
print a.encode("raw_unicode_escape")
print a.encode("unicode_escape")

How can I understand this python error message?

Hi, can you help me decode this message and tell me what to do:
main.py", line 1278, in post
message.body = "%s %s/%s/%s" % (msg, host, ad.key().id(), slugify(ad.title.encode('utf-8')))
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 1: ordinal not in range(128)
Thanks
UPDATE: having removed the encode call, it appears to work:
class Recommend(webapp.RequestHandler):
    def post(self, key):
        ad = db.get(db.Key(key))
        email = self.request.POST['tip_email']
        host = os.environ.get("HTTP_HOST", os.environ["SERVER_NAME"])
        senderemail = (users.get_current_user().email() if users.get_current_user()
                       else 'info@monton.cl' if host.endswith('.cl')
                       else 'info@monton.com.mx' if host.endswith('.mx')
                       else 'info@montao.com.br' if host.endswith('.br')
                       else 'admin@koolbusiness.com')
        message = mail.EmailMessage(sender=senderemail,
                                    subject="%s recommends %s" % (self.request.POST['tip_name'], ad.title))
        message.to = email
        message.body = "%s %s/%s/%s" % (self.request.POST['tip_msg'], host, ad.key().id(), slugify(ad.title))
        message.send()
        matched_images = ad.matched_images
        count = matched_images.count()
        if ad.text:
            p = re.compile(r'(www[^ ]*|http://[^ ]*)')
            text = p.sub(r'\1', ad.text.replace('http://', ''))
        else:
            text = None
        self.response.out.write("Message sent<br>")
        path = os.path.join(os.path.dirname(__file__), 'market', 'market_ad_detail.html')
        self.response.out.write(template.render(path, {
            'user_url': users.create_logout_url(self.request.uri) if users.get_current_user() else users.create_login_url(self.request.uri),
            'user': users.get_current_user(), 'ad.user': ad.user, 'count': count,
            'ad': ad, 'matched_images': matched_images,
        }))
The problem here is that your underlying model (message.body) only wants ASCII text, but you're trying to give it a string containing non-ASCII characters.
Since you need plain ASCII here, you can just make Python substitute a '?' character wherever it finds a character it cannot encode:
"UNICODE STRING".encode('ascii','replace').decode('ascii')
So, from your example above:
message.body = "%s %s/%s/%s" % \
    (msg.encode('ascii', 'replace').decode('ascii'),
     host.encode('ascii', 'replace').decode('ascii'),
     ad.key().id(),
     slugify(ad.title).encode('ascii', 'replace').decode('ascii'))
Or just encode/decode on the variable that has the unicode character.
But this isn't an optimal solution. The best idea is to make message.body a unicode string. Since that doesn't seem feasible (I'm not familiar with GAE), you can use this to at least avoid the errors.
You've got a Unicode character in a place where you're not supposed to. Most often I find this error comes from MS Word-style slanted quotes.
One of these fields has some characters that cannot be encoded. If you switch to Python 3 (it has better Unicode support) or change the encoding of the entire script, the problem should stop. The best way to change the encoding in 2.x is the encoding comment line; see http://evanjones.ca/python-utf8.html for more of an explanation of using Python with UTF-8 support. The best suggestion is to add # -*- coding: utf-8 -*- to the top of your script and handle strings like this:
s = "hello normal string"
u = unicode( s, "utf-8" )
backToBytes = u.encode( "utf-8" )
I had a similar problem when using Django-nonrel and Google App Engine.
The problem was with the folder containing the application. This probably isn't the problem described in this question, but maybe it will help someone avoid wasting time like I did.
First try changing your application folder, maybe to /home/, and run again; if that doesn't work, try something else.
