How to display utf-8 in windows console - python

I'm using Python 2.6 on Windows 7
I borrowed some code from here:
Python, Unicode, and the Windows console
My goal is to be able to display UTF-8 strings in the Windows console.
Apparently in Python 2.6,
sys.setdefaultencoding()
is no longer supported.
However, I wrote reload(sys) before I tried to use it, and it magically didn't error.
This code will NOT error, but it shows funny characters instead of Japanese text.
I believe the problem is because I have not successfully changed the codepage of the windows console.
These are my attempts, but they don't work:
reload(sys)
sys.setdefaultencoding('utf-8')
print os.popen('chcp 65001').read()
sys.stdout.encoding = 'cp65001'
Perhaps you can use win32console to change the codepage?
I tried the code from the page I linked, but it also errored in the win32console calls; maybe that code is obsolete.
Here's my code, that doesn't error but prints funny characters:
#coding=utf8
import os
import sys
import codecs
reload(sys)
sys.setdefaultencoding('utf-8')
sys.stdout = codecs.getwriter('utf8')(sys.stdout)
sys.stderr = codecs.getwriter('utf8')(sys.stderr)
#print os.popen('chcp 65001').read()
print(sys.stdout.encoding)
sys.stdout.encoding = 'cp65001'
print(sys.stdout.encoding)
x = raw_input('press enter to continue')
a = 'こんにちは世界'#.decode('utf8')
print a
x = raw_input()

I know you state you're using Python 2.6, but if you're able to use Python 3.3 you'll find that this is finally supported.
Use the command chcp 65001 before starting Python.
See http://docs.python.org/dev/whatsnew/3.3.html#codecs
In Python 3.6 it's no longer even necessary to use the chcp command, since Python bypasses the byte-level console interface entirely and uses a native Unicode interface instead. See PEP 528: Change Windows console encoding to UTF-8.
As noted in the comments by @mbom007, it's also important to make sure the console is configured with a font that supports the characters you're trying to display.
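For instance, on Python 3.6+ this minimal sketch should display the question's Japanese text directly in the console, with no code-page changes needed:
# Python 3.6+ on Windows: printing Unicode to the console just works (PEP 528).
# On Python 3.3-3.5, run `chcp 65001` in the console before starting Python.
print('こんにちは世界')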

Never ever ever use setdefaultencoding. If you want to write unicode strings to stdio, encode them explicitly. Monkeying around with setdefaultencoding will cause stdlib modules and third-party modules alike to break in horrible subtle ways by allowing implicit conversion between str and unicode when it shouldn't happen.
Yes, the problem is most likely that your code page isn't set properly. However, using os.popen won't change the code page; it'll spawn a new shell, change that shell's code page, and then immediately exit without affecting your console at all. I'm not personally very familiar with Windows, so I couldn't tell you how to change your console's code page from within your Python program.
The way to properly display Unicode data as UTF-8 from Python, as mentioned before, is to explicitly encode your strings before printing them: print s.encode('utf-8')
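A complete minimal sketch of that advice (Python 2.x; whether the console can actually render the result still depends on its code page and font):
# -*- coding: utf-8 -*-
# Keep text as unicode internally; encode explicitly only when printing.
s = u'こんにちは世界'
print s.encode('utf-8')  # explicit encode instead of setdefaultencoding tricks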

Changing the console code page is both unnecessary and won't work (in particular, setting it to 65001 runs into a Python bug). See this question for details, and for how to print Unicode characters to the console regardless of the code page.

Windows doesn't support UTF-8 in a console properly. The only way I know of to display Japanese in the console is by changing (on XP) Control Panel's Regional and Language Options, Advanced Tab, Language for non-Unicode Programs to Japanese. After rebooting, open a console and run "chcp" to find out the Japanese console's code page. Then either print Unicode strings or byte strings explicitly encoded in the correct code page.
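A hedged sketch of that last step (Python 2.x): cp932 is an assumption here, since it is the usual code page of a Japanese console; substitute whatever chcp reports.
# -*- coding: utf-8 -*-
import sys
s = u'こんにちは世界'
# Encode to the console's reported code page, falling back to cp932.
print s.encode(sys.stdout.encoding or 'cp932', 'replace')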

Related

exec() not working with unicode characters

I'm trying to execute a .py program from within my Python code, but non-ASCII characters behave oddly when printed or otherwise handled.
module1.py:
test = "áéíóúabcdefgçë"
print(test)
Main code:
exec(open("module1.py").read(), globals())
I want this to print áéíóúabcdefgçë but it instead prints mojibake (Ã¡Ã©Ã­Ã³ÃºabcdefgÃ§Ã«). This happens with all non-ASCII characters I have tried.
I am using Python 3.7 and Windows 10.
Running module1.py on its own does not produce this problem, but I want to run the program using exec() or something else with roughly the same function.
I found a way to fix the issue. Python's open() is assuming some encoding other than UTF-8. Changing the main code to the following fixes the issue on my computer (Python 3.7 and Windows 10):
exec(open("module1.py", encoding="utf-8").read(),globals())
Thanks @jjramsey for the additional information:
According to the Python documentation for open(), "The default encoding is platform dependent (whatever locale.getpreferredencoding() returns)."
For me, if I run the following check:
import locale
print(locale.getpreferredencoding())
I get cp1252, which is notably not UTF-8 and so open() will cause the issues we have seen in this question, unless we specify the encoding.
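The mechanics are easy to demonstrate (Python 3.x): the two UTF-8 bytes of "á" decoded as cp1252 come back as two separate characters.
# "á" encoded as UTF-8 is two bytes, which cp1252 maps to two characters.
data = "á".encode("utf-8")    # b'\xc3\xa1'
print(data.decode("cp1252"))  # prints 'Ã¡', the mojibake from the question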

hi-ascii characters python string

I am always perplexed by the whole high-ASCII handling in Python 2.x. I am currently facing an issue in which I have a string with high-ASCII characters in it. I have a few questions related to it.
How can a string store high-ASCII characters in it (not a unicode string, but a normal str in Python 2.x), which I thought could handle only ASCII chars? Does Python internally convert the high-ASCII to something else?
I have a CLI which I spawn as a subprocess from my Python code. When I pass this string to the CLI, it works fine, while if I encode the string to UTF-8 first, the CLI fails (this string is a password, so it fails saying the password is invalid).
For the second point, I actually did a bit of research and found the following:
1) In Windows (sucks), the command-line args are encoded in mbcs (sys.getfilesystemencoding()). What I still don't get is: if I read the same string using raw_input, it is encoded in the Windows console encoding (on EN Windows, it was cp437).
A further question I am confused about now regarding Windows encoding: is sys.stdin.encoding on Windows different from the Windows console encoding?
If yes, is there a Pythonic way to figure out what my Windows console encoding is? I need this because input read with raw_input is encoded in the Windows console encoding, and I want to convert it to, say, UTF-8. Note: I have already set my sys.stdin.encoding to utf-8, but it doesn't seem to have any effect on the input I read.
To answer your first question, Python 2.x byte strings contain the source-encoded bytes of the characters, meaning the exact bytes used to store the string on disk in the source file. For example, here is a Python 2.x program where the source is saved in Windows-1252 encoding (Notepad's default on US Windows):
#!python2
#coding:windows-1252
s = 'æüÿ€éêè'
u = u'æüÿ€éêè'
print repr(s)
print repr(u)
Output:
'\xe6\xfc\xff\x80\xe9\xea\xe8'
u'\xe6\xfc\xff\u20ac\xe9\xea\xe8'
The byte string contains the bytes that represent the characters in Windows-1252.
The Unicode string is built by decoding that same sequence of bytes using the declared source encoding (#coding:windows-1252) into Unicode code points. Since Windows-1252 is very close to iso-8859-1, and iso-8859-1 maps 1:1 to the first 256 Unicode code points, the code points are almost the same, except for the Euro character.
But save the source in a different encoding, and you'll get those bytes instead for the byte string:
#!python2
#coding:utf8
s = 'æüÿ€éêè'
u = u'æüÿ€éêè'
print repr(s)
print repr(u)
Output:
'\xc3\xa6\xc3\xbc\xc3\xbf\xe2\x82\xac\xc3\xa9\xc3\xaa\xc3\xa8'
u'\xe6\xfc\xff\u20ac\xe9\xea\xe8'
So, Python 2.X just gives you the source code bytes directly, without decoding them to Unicode codepoints, like a Unicode string would do.
Python 3.X notes that this is confusing, and just forbids non-ASCII characters in byte strings:
#!python3
#coding:utf8
s = b'æüÿ€éêè'
u = 'æüÿ€éêè'
print(repr(s))
print(repr(u))
Output:
File "C:\test.py", line 3
    s = b'æüÿ\u20acéêè'
       ^
SyntaxError: bytes can only contain ASCII literal characters.
To answer your second question, please edit your question to show an example that demonstrates the problem.
Is the windows sys.stdin.encoding different from Windows console encoding?
Yes. There are two locale-specific codepages:
the ANSI code page, aka mbcs, used for strings in the Win32 ...A APIs (the byte-oriented variants such as CreateFileA) and hence for C runtime operations like reading the command line;
the IO code page, used for stdin/stdout/stderr streams.
These do not have to be the same encoding, and typically they aren't. In my locale (UK), the ANSI code page is 1252 and the IO code page defaults to 850. You can change the console code page using the chcp command, so you can make the two encodings match using eg chcp 1252 before running the Python command.
(You also have to be using a TrueType font in the console for chcp to make any difference.)
is there a pythonic way to figure out what my windows console encoding is.
Python reads it at startup using the Win32 API GetConsoleOutputCP and—unless overridden by PYTHONIOENCODING—writes that to the property sys.stdout.encoding. (Similarly GetConsoleCP for stdin though they will generally be the same code page.)
If you need to read this directly, regardless of whether PYTHONIOENCODING is set, you might have to use ctypes to call GetConsoleOutputCP directly.
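For example, a minimal ctypes sketch (Python 2.x; GetConsoleCP and GetConsoleOutputCP are kernel32 APIs):
import ctypes
kernel32 = ctypes.windll.kernel32
# Query the console directly, bypassing whatever PYTHONIOENCODING did.
print 'stdin code page:  cp%d' % kernel32.GetConsoleCP()
print 'stdout code page: cp%d' % kernel32.GetConsoleOutputCP()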
Note: I have already set my sys.stdin.encoding to utf-8, but it doesnt seem to make any effect in the read input.
(How have you done that? It's a read-only property.)
Although you can certainly treat input and output as UTF-8 at your end, the Windows console won't supply or display content in that encoding. Most other tools you call via the command line will also be treating their input as encoded in the IO code page, so would misinterpret any UTF-8 sent to them.
You can affect what code page the console side uses by calling the Win32 SetConsoleCP/SetConsoleOutputCP APIs with ctypes (equivalent of the chcp command and also requires TTF console font). In principle you should be able to set code page 65001 and get something that is nearly UTF-8. Unfortunately long-standing console bugs usually make this approach infeasible.
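For completeness, a sketch of that approach (Python 2.x); as noted above, code page 65001 tends to trip over long-standing console bugs, so treat it as experimental:
import ctypes
kernel32 = ctypes.windll.kernel32
# Programmatic equivalent of `chcp 65001`; requires a TrueType console font.
kernel32.SetConsoleCP(65001)
kernel32.SetConsoleOutputCP(65001)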
Windows (sucks)
Yes.

How to read excel Unicode characters using Python

I am receiving an Excel file whose content I cannot influence. It contains some Unicode characters like "á" or "é".
My code has been unchanged, but I migrated from Eclipse Juno to LiClipse, together with a migration to a different Python version (2.6 from 2.5). In principle the specific Python version I am using has a working win32com package.
When I read the Excel file, my code crashes when extracting values and converting them to strings using str(). The console output is the following:
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe1' in position 89: ordinal not in range(128)
To be more concrete, I perform the following:
Read the Excel:
xlApp = Dispatch("Excel.Application")
excel = xlApp.Workbooks.Open(excel_location)
in an internal loop I extract the value of the cell:
cell_value = self.excel.ActiveSheet.Cells(excel_line + 1, excel_column + 1)
and finally, when I try to convert cell_value to str, it crashes:
print str(cell_value)
If I go into Excel and remove the non-ASCII characters, everything works smoothly. I have tried this encode proposal. Every other solution I have googled proposes saving the file in a specific format, which I can't do.
What puzzles me is that the code was working before with the same input Excel file, but this change to LiClipse and Python 2.6 killed everything.
Any idea how can I progress?
This is a common problem when working with UTF-8 encoded Unicode data in Python 2.x. The handling of this has changed in a few places between 2.4 and 2.7, so it's no surprise that you suddenly get an error.
The source of the error is print: in Python 2.x, print doesn't try to guess what encoding your terminal supports. It just plays it safe and assumes that ASCII is the only supported charset (which means characters between 0 and 127 are fine; everything else gives an error).
Now you convert a COMObject to a string. str is just a bunch of bytes (values 0 to 255) as far as Python 2.x is concerned. It doesn't have an encoding.
Combining the two is a recipe for trouble. When Python prints, it tries to validate the input (the string) and suddenly finds UTF-8 encoded characters (in UTF-8, bytes like \xe1 are lead bytes that tell the decoder the following bytes are part of a multi-byte sequence; check Wikipedia for the gory details).
That's when the ascii encoder says: Sorry, can't help you there.
That means you can work with this value, compare it and such, but you can't print it. A simple fix for the printing problem is:
s = str(cell_value) # Convert COM -> UTF-8 encoded string
print repr(s) # repr() converts anything to ascii
If your terminal supports UTF-8, then you need to tell Python about it:
import sys
import codecs
sys.stdout = codecs.getwriter('utf8')(sys.stdout)
You should also have a look at sys.stdout.encoding which tells what Python currently thinks the output encoding is/should be. When Python 2 is properly configured (like on modern Linux distributions), then the correct codec for output should be used automatically.
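For example (Python 2.x):
import sys
# What Python believes the output encoding is; None when stdout is piped.
print sys.stdout.encoding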
Related:
Python 2 Unicode howto
Pragmatic Unicode, or, How do I stop the pain?
Setting the correct encoding when piping stdout in Python
.Cells(row,col) returns a Range object. You probably want the text from the cell:
cell = xl.ActiveSheet.Cells(1,2).Text
or
cell = xl.ActiveSheet.Range('B1').Text
The resulting value will be a Unicode string. To convert to bytes that you can write to a file, use .encode(encoding), for example:
bytes = cell.encode('utf8')
The example below uses a spreadsheet whose cell B1 contains the text 中国人:
import win32com.client
xl = win32com.client.gencache.EnsureDispatch('Excel.Application')
xl.Workbooks.Open(r'book1.xlsx')
cell = xl.ActiveSheet.Cells(1,2)
cell_value = cell.Text
print repr(cell)
print repr(cell_value)
print cell_value
Output (Note, Chinese will only print if console/IDE supports the characters):
<win32com.gen_py.Microsoft Excel 14.0 Object Library.Range instance at 0x129909424>
u'\u4e2d\u56fd\u4eba'
中国人
What is described here is a hack; you should not use it as a long-term solution. Judging from the comments, it could break the terminal.
Finally I found a solution, helped by the suggestion that @Huan-YuTseng provided; the solutions offered by others might work in other contexts, but not in this one.
So, what happened is that I migrated from the Eclipse Juno version (as PyDev stopped working due to a needed Java upgrade that I can't accomplish on this computer) to the LiClipse direct package (I did not upgrade a downloaded Eclipse version).
In my LiClipse version (1.4.0.201502042042) the console output is not UTF-8 by default, so I needed to change the output encoding either in LiClipse or in my code. Fortunately, there was another question related to a similar problem that helped me. You can see more details here, but essentially what you need to do is include the following code at the beginning of your program:
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
And everything works. In the answers from @AaronDigulla the solution is there, but it is actually the very last one.
However, I need to say that LiClipse flags an error on the sys.setdefaultencoding statement, which does not cause any issue during execution... no idea what's happening. That stopped me from testing this solution earlier. Maybe there is something wrong in LiClipse (it is allowing me to execute code with errors!).
Use 'UTF-8 with BOM', which Python exposes as the utf_8_sig codec, for Unicode characters and also to avoid garbled results in the Excel sheet.

How to get IDLE to accept paste of Unicode characters?

Oftentimes when I'm working interactively in IDLE, I'd like to paste a Unicode string into the IDLE window. It appears to paste properly but generates an error immediately. It has no trouble displaying the same character on output.
>>> c = u'ĉ'
Unsupported characters in input
>>> print u'\u0109'
ĉ
I suspect that the input window, like most Windows programs, uses UTF-16 internally and has no trouble dealing with the full Unicode set; the problem is that IDLE insists on coercing all input to the default mbcs code page, and anything not in that page gets rejected.
Is there any way to configure or cajole IDLE into accepting the full Unicode character set as input?
Python 3.2 handles this much better and has no trouble with anything I throw at it.
I know that I can simply save the code to a file in UTF-8 and import it, but I want to be able to work with Unicode characters in the interactive window.
I finally figured out a way. Since the sources to IDLE are part of the distribution you can make a couple of quick edits to enable the capability. The files will typically be found in C:\Python27\Lib\idlelib.
The first step is to prevent IDLE from trying to encode all those nice Unicode characters into a character set that can't handle them. This is controlled by IOBinding.py. Edit the file, find the section after if sys.platform == 'win32': and comment out this line:
#encoding = locale.getdefaultlocale()[1]
Now add this line after it:
encoding = 'utf-8'
I was hoping that there would be a way to override this with an environment variable or something, but getdefaultlocale calls directly into a Win32 function that gets the globally configured Windows mbcs encoding.
This is half the battle, now we need to get the command line interpreter to recognize that the input bytes are UTF-8 encoded. It didn't appear that there was a way to pass an encoding into the interpreter, so I came up with the mother of all hacks. Maybe someone with a little more patience can come up with a better way, but this works for now. The input is processed in PyShell.py, in the runsource function. Change the following:
if isinstance(source, types.UnicodeType):
    from idlelib import IOBinding
    try:
        source = source.encode(IOBinding.encoding)
    except UnicodeError:
        self.tkconsole.resetoutput()
        self.write("Unsupported characters in input\n")
        return
To:
from idlelib import IOBinding  # line moved
if isinstance(source, types.UnicodeType):
    try:
        source = source.encode(IOBinding.encoding)
    except UnicodeError:
        self.tkconsole.resetoutput()
        self.write("Unsupported characters in input\n")
        return
source = "#coding=%s\n%s" % (IOBinding.encoding, source)  # line added
We're taking advantage of PEP 263 to specify the encoding for each line of input provided to the interpreter.
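A small demonstration of the mechanism the hack relies on (Python 2.x; compile() honors a PEP 263 coding declaration at the start of a byte string):
# The two bytes \xc4\x89 are 'ĉ' in UTF-8; the coding line makes the
# compiler decode them correctly inside the unicode literal.
src = "#coding=utf-8\nprint repr(u'\xc4\x89')"
exec compile(src, '<input>', 'exec')  # prints u'\u0109'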
Update: In Python 2.7.10 it is no longer necessary to make the change in PyShell.py, it already works properly if the encoding is set to utf-8. Unfortunately I haven't found a way to bypass the change in IOBinding.py.

Converting from ascii to utf-8 with Python

I have an XMPP bot written in Python. One of its plugins is able to execute OS commands and send the output to the user. As far as I know, the output should be a unicode string to be sent over the XMPP protocol. So I tried to handle it this way:
output = os.popen(cmd).read()
if not isinstance(output, unicode):
    output = unicode(output, 'utf-8', 'ignore')
bot.send(xmpp.Message(mess.getFrom(), output))
But when Russian characters appear in the output, they aren't converted correctly.
sys.getdefaultencoding()
says that default command prompt encoding is 'ascii', but when I try to do
output.decode('ascii')
in python console I get
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0x92 in position 1:
ordinal not in range(128)
OS: Win XP, Python 2.5.4
PS: Sorry for my English :(
sys.getdefaultencoding() returns python's default encoding - which is ASCII unless you have changed it. ASCII doesn't support Russian characters.
You need to work out what encoding the actual text is, either manually, or using the locale module.
Typically something like:
import locale
encoding = locale.getpreferredencoding(do_setlocale=True)
ASCII has no defined character values above 127 (0x7F). Perhaps you mean the Cyrillic code page? It's 866.
See http://en.wikipedia.org/wiki/Code_page
Edit: since this answer was marked correct, presumably 866 worked, but as other answers have pointed out, 866 is not the only Russian-language code page. If you decode with a code page different from the one that was used when the Russian characters were encoded, you will get the wrong result.
You say """sys.getdefaultencoding() says that default command prompt encoding is 'ascii'"""
sys.getdefaultencoding says NOTHING about the "command prompt" encoding.
On Windows, sys.stdout.encoding should do the job. On my machine, it contains cp850 when Python is run in a Command Prompt window, and cp1252 in IDLE. Yours should contain cp866 and cp1251 respectively.
Update You say that you still need cp866 in IDLE. Note this:
IDLE 2.6.4
>>> import os
>>> os.popen('chcp').read()
'Active code page: 850\n'
>>>
So when your app starts up, check if you are on Windows and if so, parse the result of os.popen('chcp').read(). The text before the : is probably locale-dependent. codepage = result.split()[-1] may be good enough "parsing". On Unix, which doesn't have a Windows/MS-DOS split personality, sys.stdout.encoding should be OK.
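Putting that suggestion together (Python 2.x; the split()[-1] parsing is the rough heuristic described above, not something guaranteed across locales):
import os
import sys

def console_encoding():
    # On Windows, parse the active code page out of `chcp`'s output;
    # elsewhere, trust sys.stdout.encoding.
    if sys.platform == 'win32':
        return 'cp' + os.popen('chcp').read().split()[-1].rstrip('.')
    return sys.stdout.encoding or 'utf-8'

raw = os.popen('dir').read()
text = raw.decode(console_encoding(), 'replace')  # bytes -> unicode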
In Python 'cp855', 'cp866', 'cp1251', 'iso8859_5', 'koi8_r' are differing Russian code pages. You'll need to use the right one to decode the output of popen. In the Windows console, the 'chcp' command lists the code page used by console commands. That won't necessarily be the same code page as Windows applications. On US Windows, 'cp437' is used for the console and 'cp1252' is used for applications like Notepad.
