I sometimes use emojis in programs to highlight certain parts of the code (in open source libraries). I rarely use more than say 5-6 per script and I find they really stand out due to their colors in a text editor.
Typically, they are transient markers and will be removed when whatever issue they are associated with is closed.
My question is: are emojis liable to cause any issues in the general Python toolchain? This includes, but is not limited to: git, github, pypi, editors, linters, interpreter, CI/CD pipelines, command line usage...
I haven't seen any, but then again I rarely see emojis in code. This is a Python 3 only question, so Python 2 unicode aspects are out.
(This question is not about whether this looks professional or not. That's a valid, but entirely separate consideration.)
Some examples:
# ⚙️ this is where you configure foo
foo.max_cntr = 10
foo.tolerate_duplicates = False
# 🧟♂️🧟♂️🧟♂️ to indicate code to be removed
some dead code
# 👇 very important, don't forget to do this!
bar.deactivate_before_call()
In terms of risks, there aren't really any real ones. If you use them in comments they'll be ignored at runtime anyway, so there are no performance issues.
The main issue you could run into is that some Linux distributions (distros) don't ship emoji fonts, so the characters fall back to a generic replacement glyph (typically a white rectangle, sometimes with a cross through it), which can make the comments hard to read.
But for personal use: no, not really, there are no issues.
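As a quick sanity check, here's a minimal sketch (Python 3 source files default to UTF-8, so emoji are just ordinary characters in comments and string literals):
# ⚙️ configuration marker -- ignored by the interpreter like any other comment
MARKER = "👇"  # an emoji in a string literal is just a normal Unicode character
print(len(MARKER))  # 1 (this particular emoji is a single code point)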
TLDR: Probably not, but maybe.
I am creating a CLI app for a Unix terminal using the click module, so I see two ways to display data:
print(data) and click.echo(data)
What is the difference between them, and which should I use?
Please read at least the quickstart of the library before using it. The answer is in the third part of the quickstart.
If you use click, click.echo() is preferred because:
Click attempts to support both Python 2 and Python 3 the same way and to be very robust even when the environment is misconfigured. Click wants to be functional at least on a basic level even if everything is completely broken.
What this means is that the echo() function applies some error correction in case the terminal is misconfigured instead of dying with an UnicodeError.
As an added benefit, starting with Click 2.0, the echo function also has good support for ANSI colors. It will automatically strip ANSI codes if the output stream is a file and if colorama is supported, ANSI colors will also work on Windows. See ANSI Colors for more information.
If you don’t need this, you can also use the print() construct / function.
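For concreteness, here is a minimal sketch of the click side (assuming click is installed; the command name cli is just an example):
import click

@click.command()
def cli():
    click.echo("plain output")  # robust echo; applies error correction for misconfigured terminals
    click.secho("highlighted output", fg="green")  # echo plus ANSI color, stripped when output goes to a file

if __name__ == "__main__":
    cli()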
I have an engine called Ergame that has a module called erfunc. I wrote it on NT/Windows, and now I'm on POSIX/Linux. Since I often find input handling in pygame rather obscure, and I wanted to make an explicit distinction between the IBM-standard US PC and ABNT2 keyboard layouts, I've created several constants whose values are pygame keycodes. Now I have a problem.
For example, the keycode for ACUTE/TILDE on the US standard layout is 96. I've checked many times. Now, on POSIX, when I check, I get 39 (and the same applies to all the others). If I refer to the pygame name, like pygame.K_UP, everything is fine. But if I refer to the keycodes directly, they differ according to the OS, which basically means I'll have to detect the OS in my engine and define the constants accordingly. Pretty boring.
Anyway, I got curious. Why?
Let me preface this by saying I don't have any experience with input on Windows or Mac systems, but here is what's happening on the Linux side.
Key events typically follow a three stage processing before reaching a program. The keyboard generates a scancode. The OS converts the scancode into a keycode. A keyboard map translates the keycode into a symbol.
Scancodes are hardware specific and represent a location on the keyboard
Keycodes are values the OS maps to scancodes, usually defined in /usr/include/linux/input.h
Keysyms are the symbols mapped to the keycodes, defined by your keymap. You can check them with xmodmap -pke
For SDL (which pygame wraps), the scancode/keycode distinction is a little fuzzy, and not super important. What it reports as a "scancode" is actually the keycode; you'll notice pygame's event.scancode matches the "keycode" value printed by xev. What SDL calls "sym", pygame calls "key", and it is really an SDL-specific keycode. The keysym is represented by the event's "unicode" value.
The important part of this is that you are not actually getting the raw scancode, so it can be expected to be OS dependent rather than keyboard dependent. Also, if you were getting the raw scancode, you would expect scancodes to be equivalent by position rather than by character, so all row-1, col-1 keys would produce the same scancode independent of keyboard layout.
While it may be boring to do OS checks and maintain large constant tables, this is typically how it's done. The good news is that SDL does this for you, so you really should be using the pygame.K_* names. If supporting multiple keyboard layouts is an issue, consider adding an input configuration menu instead of hard-coding tables for each layout.
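A minimal sketch (not from the original answer) of why the pygame.K_* names are the portable choice:
import pygame

pygame.init()
screen = pygame.display.set_mode((320, 240))

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN:
            # Compare against the symbolic name; raw integers like 96 or 39
            # (and event.scancode) differ between OSes and layouts.
            if event.key == pygame.K_UP:
                print("up arrow pressed")

pygame.quit()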
I want to leave some links for further reading, but I'm not really sure what to link to. I'll leave the SDL Input Guide here for now.
I'm relatively new to programming and recently I've started playing around with pygame (set of modules to write games in python). I'm looking to create a program/game in which some of the labels, strings, buttons, etc are in Arabic. I'm guessing that pygame has to support Arabic letters and it probably doesn't? Or could I possibly use another GUI library that does support Arabic and use that in unison with pygame? Any direction would be much appreciated!
Well, Python itself uses Unicode for everything, so that's not the problem. A quick search also shows that pygame should be able to render Unicode text just fine. So I assume the problem is more that it can't find a font for the specific language to use for rendering.
Here is a short example for pygame, and especially this link should be useful.
This is the important library: specifying a font that can render your language and using it to render the text should work fine. It's probably a good idea to write a small wrapper.
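Something like this minimal sketch, assuming you ship a font file that covers the language (the path here is hypothetical) and keeping in mind that pygame's basic renderer does not do Arabic shaping or right-to-left layout on its own:
import pygame

def make_text_renderer(font_path, size=32, color=(255, 255, 255)):
    # Load a specific Unicode-capable font instead of relying on system defaults.
    font = pygame.font.Font(font_path, size)
    def render(text):
        return font.render(text, True, color)  # returns a Surface you can blit
    return render

pygame.init()
render = make_text_renderer("fonts/NotoNaskhArabic-Regular.ttf")  # hypothetical bundled font
label_surface = render(u"مرحبا")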
NB: I haven't used pygame myself, so this is based on speculation and a quick search about how pygame renders fonts.
PS: If you want the game to work reliably for all of your users, it's probably a good idea to include an open-source font in your release; otherwise you need some way to check whether the user has a suitable font installed, which is probably a non-trivial problem if you want cross-platform support.
Python does support Unicode-encoded source.
Set the encoding of your source file with a line of the form # coding: [yourCoding] at the very beginning of the file. I think # coding: utf-8 works for Arabic.
Then prepend your string literals with a u, like so:
u'アク'
(Sorry if you don't have a Japanese font installed, it's the only one I had handy!)
This makes Python treat them as Unicode characters. There's further information specific to Arabic on this site.
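Putting it together, a minimal sketch of such a source file (the Arabic literal is just an example):
# coding: utf-8
# The u prefix is required on Python 2; it is accepted and harmless on Python 3.
label = u'مرحبا'
print(label)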
Both previous answers are great.
There is also a great built-in Python function called unicode (Python 2 only). It's very easy to use.
I wanted to write Hebrew text so I wrote a function:
def hebrew(text):
    # Windows-1255 is an encoding that covers Hebrew plus punctuation marks.
    # unicode() is a Python 2 built-in; on Python 3 you would use bytes.decode("windows-1255").
    return unicode(text, "Windows-1255")
Then you can call it using:
hebrew("<Hebrew text>")
And it will return your text decoded from Windows-1255 into a Unicode string.
The Windows console has been Unicode aware for at least a decade and perhaps as far back as Windows NT. However for some reason the major cross-platform scripting languages including Perl and Python only ever output various 8-bit encodings, requiring much trouble to work around. Perl gives a "wide character in print" warning, Python gives a charmap error and quits. Why on earth after all these years do they not just simply call the Win32 -W APIs that output UTF-16 Unicode instead of forcing everything through the ANSI/codepage bottleneck?
Is it just that cross-platform performance is low priority? Is it that the languages use UTF-8 internally and find it too much bother to output UTF-16? Or are the -W APIs inherently broken to such a degree that they can't be used as-is?
UPDATE
It seems that the blame may need to be shared by all parties. I imagined that the scripting languages could just call wprintf on Windows and let the OS/runtime worry about things such as redirection. But it turns out that even wprintf on Windows converts wide characters to ANSI and back before printing to the console!
Please let me know if this has been fixed since the bug report link seems broken but my Visual C test code still fails for wprintf and succeeds for WriteConsoleW.
UPDATE 2
Actually you can print UTF-16 to the console from C using wprintf but only if you first do _setmode(_fileno(stdout), _O_U16TEXT).
From C you can print UTF-8 to a console whose codepage is set to codepage 65001, however Perl, Python, PHP and Ruby all have bugs which prevent this. Perl and PHP corrupt the output by adding additional blank lines following lines which contain at least one wide character. Ruby has slightly different corrupt output. Python crashes.
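For illustration (this is a rough Python sketch, not the original C test code), roughly the same WriteConsoleW call can be made via ctypes on Windows:
import ctypes

kernel32 = ctypes.windll.kernel32
STD_OUTPUT_HANDLE = -11
handle = kernel32.GetStdHandle(STD_OUTPUT_HANDLE)

text = u"Příliš žluťoučký kůň úpěl ďábelské ódy\n"
written = ctypes.c_ulong(0)
# A Python 3 str is passed to the Windows API as a wide (UTF-16) string.
kernel32.WriteConsoleW(handle, text, len(text), ctypes.byref(written), None)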
UPDATE 3
Node.js is the first scripting language that shipped without this problem straight out of the box.
The Python dev team slowly came to realize this was a real problem after it was first reported back at the end of 2007, and there was a huge flurry of activity in 2016 to fully understand and fully fix the bug.
The main problem seems to be that it is not possible to use Unicode on Windows using only the standard C library and no platform-dependent or third-party extensions. The languages you mentioned originate from Unix platforms, whose method of implementing Unicode blends well with C (they use normal char* strings, the C locale functions, and UTF-8). If you want to do Unicode in C, you more or less have to write everything twice: once using nonstandard Microsoft extensions, and once using the standard C API functions for all other operating systems. While this can be done, it usually doesn't have high priority because it's cumbersome and most scripting language developers either hate or ignore Windows anyway.
At a more technical level, I think the basic assumption most standard library designers make is that all I/O streams are inherently byte-based at the OS level. That is true for files on all operating systems, and for all streams on Unix-like systems; the Windows console is the only exception. Thus the architecture of many class libraries and programming language standard libraries would have to be modified to a great extent to incorporate Windows console I/O.
Another, more subjective point is that Microsoft simply did not do enough to promote the use of Unicode. The first Windows OS with decent (for its time) Unicode support was Windows NT 3.1, released in 1993, long before Linux and OS X grew Unicode support. Still, the transition to Unicode in those OSes has been much more seamless and unproblematic. Microsoft once again listened to the sales people instead of the engineers and kept the technically obsolete Windows 9x around until 2001; instead of forcing developers to use a clean Unicode interface, they still ship the broken and now-unnecessary 8-bit API interface and invite programmers to use it (look at a few of the recent Windows API questions on Stack Overflow; most newbies still use the horrible legacy API!).
When Unicode came out, many people realized it was useful. Unicode started as a pure 16-bit encoding, so it was natural to use 16-bit code units. Microsoft then apparently said "OK, we have this 16-bit encoding, so we have to create a 16-bit API", not realizing that nobody would use it. The Unix luminaries, however, thought "how can we integrate this into the current system in an efficient and backward-compatible way so that people will actually use it?" and subsequently invented UTF-8, which is a brilliant piece of engineering. Just as when Unix was created, the Unix people thought a bit more, needed a bit longer, and had less financial success, but eventually got it right.
I cannot comment on Perl (though I think there are more Windows haters in the Perl community than in the Python community), but regarding Python I know that the BDFL (who doesn't like Windows either) has stated that adequate Unicode support on all platforms is a major goal.
A small contribution to the discussion: I am running Czech-localized Windows XP, which almost everywhere uses the CP1250 code page. The funny thing with the console, though, is that it still uses the legacy DOS code page 852.
I was able to make a very simple Perl script that prints UTF-8 encoded data to the console using:
binmode STDOUT, ":utf8:encoding(cp852)";
I tried various options (including utf16le), but only the above setting printed accented Czech characters correctly.
Edit: I played a little more with the problem and found Win32::Unicode. The module exports the function printW, which works properly both with direct output and when redirected:
use utf8;
use Win32::Unicode;
binmode STDOUT, ":utf8";
printW "Příliš žluťoučký kůň úpěl ďábelské ódy";
I have to unask many of your questions.
Did you know that
Windows uses UTF-16 for its APIs, but still defaults to the various "fun" legacy encodings (e.g. Windows-1252, Windows-1251) in userspace, including file names, differently for the many localisations of Windows?
you need to encode output, and picking the appropriate encoding for the system is achieved by the locale pragma, and that there is a POSIX standard called locale on which this is built, and Windows is incompatible with it?
Perl already supported the so-called "wide" APIs once?
Microsoft managed to adapt UTF-8 into their codepage system of character encoding, and you can switch your terminal by issuing the appropriate chcp 65001 command?
Michael Kaplan has series of blog posts about the cmd console and Unicode that may be informative (while not really answering your question):
Conventional wisdom is retarded, aka What the ##%&* is _O_U16TEXT?
Anyone who says the console can't do Unicode isn't as smart as they think they are
A confluence of circumstances leaves a stone unturned...
PS: Thanks #Jeff for finding the archive.org links.
Are you sure your script would output Unicode correctly on some other platform? The "wide character in print" warning makes me very suspicious.
I recommend looking over this overview.
Why on earth after all these years do they not just simply call the Win32 -W APIs that output UTF-16 Unicode instead of forcing everything through the ANSI/codepage bottleneck?
Because Perl and Python aren't Windows programs. They're Unix programs that happen to have been mostly ported to Windows. As such, they don't like to call Win32 functions unless necessary. For byte-based I/O, it's not necessary; this can be done with the Standard C Library. UTF-16-based I/O is a special case.
Or are the -W APIs inherently broken to such a degree that they can't be used as-is?
I wouldn't say that the -W APIs are inherently broken as much as I'd say that Microsoft's approach to Unicode in C(++) is inherently broken.
No matter how much certain Windows developers insist that programs should use wchar_t instead of char, there are just too many barriers to switching:
Platform dependence:
The use of UTF-16 wchar_t on Windows and UTF-32 wchar_t elsewhere. (The new char16_t and char32_t types may help.)
The non-standardness of UTF-16 filename functions like _wfopen, _wstat, etc. limits the ability to use wchar_t in cross-platform code.
Education. Everybody learns C with printf("Hello, world!\n");, not wprintf(L"Hello, world!\n");. The C textbook I used in college never even mentioned wide characters until Appendix A.13.
The existing zillions of lines of code that use char* strings.
For Perl to fully support Windows in this way, every call to print, printf, say, warn, and die has to be modified to determine:
Is this Windows?
Which version of Windows? Perl still mostly works on Windows 95.
Is this going to the console, or somewhere else?
Once you have that determined, you then have to use a completely different set of API functions.
If you really want to see everything involved in doing this properly, have a look at the source of Win32::Unicode::Console.
On Linux, OpenBSD, FreeBSD, and similar OSes you can usually just call binmode on the STDOUT and STDERR file handles:
binmode STDOUT, ':utf8';
binmode STDERR, ':utf8';
This assumes that the terminal is using the UTF-8 encoding.
For Python, the relevant issue in the tracker is http://bugs.python.org/issue1602 (as mentioned in the comments). Note that it has been open for 7 years. I tried to publish a working solution (based on information in the issue) as a Python package: https://github.com/Drekin/win-unicode-console, https://pypi.python.org/pypi/win_unicode_console.
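Based on the package's README, usage is a minimal two-liner early in the program (Windows, CPython):
import win_unicode_console
win_unicode_console.enable()

print("Příliš žluťoučký kůň úpěl ďábelské ódy")  # now survives the Windows console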
Unicode issues in Perl covers how the Win32 console works with Perl and the transcoding that happens behind the scenes from ANSI to Unicode; albeit not just a Perl issue, it affects other languages too.