I am getting into machine learning, and to document my code, I will write LaTeX math versions of my functions right next to the code in a Jupyter/IPython notebook. The mathematical definitions include many Greek symbols, so I thought that I might as well use the Greek symbols in function and variable names, since that's possible in Python. Would this be bad practice?
It seems a good use case under these assumptions:
the audience is mathematically versed,
you make use of a lot of Jupyter Notebook features such as inline plotting and table display (e.g. pandas), so that the use of your code outside a notebook is unlikely,
it is application code rather than a library that others will use.
Hint: Entering Greek letters in the notebook is simple.
Just type the LaTeX math notation and TAB.
For example, type:
\pi
and then the TAB key to get a π.
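For instance, a small density function written with the same symbols as the math it documents might look like this (just a sketch; the function and the Greek parameter names are purely illustrative):
from math import pi, sqrt, exp

def gaussian_pdf(x, μ, σ):
    """p(x) = 1/(σ·√(2π)) · exp(-(x-μ)²/(2σ²))"""
    return exp(-((x - μ) ** 2) / (2 * σ ** 2)) / (σ * sqrt(2 * pi))

print(gaussian_pdf(0.0, μ=0.0, σ=1.0))   # ≈ 0.3989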
This is what the official style guide has to say:
For Python 3.0 and beyond, the following policy is prescribed for the
standard library (see PEP 3131): All identifiers in the Python
standard library MUST use ASCII-only identifiers, and SHOULD use
English words wherever feasible (in many cases, abbreviations and
technical terms are used which aren't English). In addition, string
literals and comments must also be in ASCII. The only exceptions are
(a) test cases testing the non-ASCII features, and (b) names of
authors. Authors whose names are not based on the latin alphabet MUST
provide a latin transliteration of their names.
Open source projects with a global audience are encouraged to adopt a similar policy.
In other words: it would be considered better practice to use ASCII-only identifiers if you are targeting a global audience. If the code is only going to be read by your team, it's a matter of preference.
Really, it is a matter of personal opinion. Keep in mind that Unicode character support for variable names is ONLY in Python 3, so make sure that any external libraries support Python 3. Other than that, there isn't a reason to say no.
Related
I am relatively new to the programming community, and recently I learned of the existence of PEP 8, a sort of codex that aims to improve readability. As stated in the PEP 8 documentation (https://www.python.org/dev/peps/pep-0008/), variable and function names should be "lower_case_with_underscores". I wonder if my habit violates this convention.
Specifically, I often replace words with numbers whenever possible, to abbreviate and shorten the names of variables and functions.
col4keys
things2do
I searched for the answer here and there, but nothing seems to be addressing my specific inquiry.
I highly recommend using a linter such as flake8 to find such errors, i.e. PEP 8 violations. It is really good at finding them and is pretty much the standard tool as of today.
In your specific case I believe it is not really a violation, so feel free to name your variables this way if you must -- at least flake8 will not complain.
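For instance, a small file like this passes flake8 cleanly (names.py is a hypothetical file; install the tool with pip install flake8 and run flake8 names.py):
# names.py (hypothetical): flake8 reports no naming problem here
col4keys = ["id", "name"]      # digits in lower_case names are fine
things2do = ["lint", "test"]


def count_things2do():
    return len(things2do)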
...I find that when coding very "math-heavy" functions in Python, one way of making the code more readable (to me at least) is to follow the literature as tightly as possible, including using Greek letters such as Ω, θ, φ, ψ, ω, ϕ, etc.
And the code works as expected, but PyCharm highlights those characters and says "non-ASCII characters in identifier".
...without referencing any PEP (Python coding standard).
I'm asking for sober arguments as to why I should refrain from using non-ASCII characters in identifiers.
Because they're not included on a standard QWERTY keyboard and would be a pain in the arse to keep typing in your code.
You're better off just referring to them by their ASCII equivalents (the string alpha, etc.) and letting IntelliSense in modern IDEs handle the typing.
If you're looking for a PEP standard, you probably want PEP 3131:
Should identifiers be allowed to contain any Unicode letter?
Drawbacks of allowing non-ASCII identifiers wholesale:
Python will lose the ability to make a reliable round trip to a human-readable display on screen or on paper.
Python will become vulnerable to a new class of security exploits; code and submitted patches will be much harder to inspect.
Humans will no longer be able to validate Python syntax.
Unicode is young; its problems are not yet well understood and solved; tool support is weak.
Languages with non-ASCII identifiers use different character sets and normalization schemes; PEP 3131's choices are non-obvious.
The Unicode bidi algorithm yields an extremely confusing display order for RTL text when digits or operators are nearby.
Of course, this PEP has been accepted, so the warning is PyCharm's decision, not a PEP requirement.
I'm using Python, but I think it's the same for pretty much every programming language. It has nothing to do with functionality, but I have noticed that when I look at other people's code, variable names made up of multiple words have the words joined together, with the first letter of each word (except the first one) capitalized.
thisIsATypicalVariableNameWithMultipleWords = 0
But when using functions, usually nothing is capitalized and the words are connected by _.
this_is_a_typical_function_name_with_multiple_words()
Is this how the variables and functions are typically named? Thanks in advance.
You are looking at a naming convention, and such conventions are rather arbitrary but usually agreed upon in a style guide.
The Python developers have published such a style guide for Python: PEP 8 -- Style Guide for Python Code. It includes a section on naming conventions for Python code. You don't have to adhere to these, but it is a good idea to do so if you want to work with other Python developers where most do adhere to these conventions.
The style guide also provides you with names for the styles; your two specific styles (which are not the only two styles) are called mixedCase and lower_case_with_underscores.
mixedCase is not recommended for new Python code; the style guide only mentions it as a legacy style, retained for backwards compatibility in certain older standard library modules (such as threading).
lower_case_with_underscores is the current recommended convention for function, method and attribute names.
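A short sketch of the PEP 8 conventions side by side (the names here are invented):
MAX_RETRIES = 3                      # constants: UPPER_CASE_WITH_UNDERSCORES


class ConnectionPool:                # classes: CapWords
    def acquire_connection(self):    # functions/methods: lower_case_with_underscores
        idle_timeout = 30            # local variables too
        return idle_timeout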
Somewhat out of necessity, I develop software with my locale set to either "C" or "en_US". It's difficult to use a different locale because I only speak one language with anything even remotely approaching fluency.
As a result, I often overlook the differences in behavior that can be introduced by having different locale settings. Unsurprisingly, overlooking those differences will sometimes lead to bugs which are only discovered by some unfortunate user using a different locale. In particularly bad cases, that user may not even share a language with me, making the bug reporting process a challenging one. And, importantly, a lot of my software is in the form of libraries; while almost none of it sets the locale, it may be combined with another library, or used in an application which does set the locale - generating behavior I never experience myself.
To be a bit more specific, the kinds of bugs I have in mind are not missing text localizations or bugs in the code for using those localizations. Instead, I mean bugs where a locale changes the result of some locale-aware API (for example, toupper(3)) when the code using that API did not anticipate the possibility of such a change (e.g., in the Turkish locale, toupper does not change "i" to "I" - potentially a problem for a network server trying to speak a particular network protocol to another host).
A few examples of such bugs in software I maintain:
AttributeError in a Turkish locale
imap relies on a C locale for date formatting
Fix for locale-dependant date formatting in imap and conch
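The date-formatting bugs in that list boil down to something like the following (a sketch; it assumes the de_DE.UTF-8 locale is installed, and the tuple is just a hand-built struct_time for 1 March 1970):
import locale
import time

march_1970 = (1970, 3, 1, 0, 0, 0, 6, 60, 0)   # struct_time-compatible tuple

locale.setlocale(locale.LC_TIME, "C")
print(time.strftime("%d %b %Y", march_1970))          # 01 Mar 1970

locale.setlocale(locale.LC_TIME, "de_DE.UTF-8")       # assumes this locale exists
print(time.strftime("%d %b %Y", march_1970))          # 01 Mär 1970 -- not what a protocol expecting English expects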
In the past, one approach I've taken to dealing with this is to write regression tests which explicitly change the locale to one where code was known not to work, exercise the code, verify correct behavior, and then restore the original locale. This works well enough, but only after someone has reported a bug, and it only covers one small area of a codebase.
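In code, that pattern looks roughly like this; format_imap_date is a hypothetical, deliberately naive stand-in for the code under test, and the test assumes the tr_TR.UTF-8 locale is installed:
import locale
import time
import unittest


def format_imap_date(epoch_seconds):
    # Hypothetical stand-in for the real code under test. This naive strftime
    # version is exactly what the test below is meant to catch, because %b is
    # locale-dependent ("Jan" becomes "Oca" under the Turkish locale).
    return time.strftime("%d-%b-%Y %H:%M:%S +0000", time.gmtime(epoch_seconds))


class TurkishLocaleRegressionTest(unittest.TestCase):
    def test_date_formatting_is_locale_independent(self):
        original = locale.setlocale(locale.LC_ALL)          # remember current locale
        try:
            locale.setlocale(locale.LC_ALL, "tr_TR.UTF-8")  # a known-problematic locale
            self.assertEqual(format_imap_date(0), "01-Jan-1970 00:00:00 +0000")
        finally:
            locale.setlocale(locale.LC_ALL, original)       # always restore it


if __name__ == "__main__":
    unittest.main()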
Another approach which seems possible is to have a continuous integration system (CIS) set up to run a full suite of tests in an environment with a different locale set. This improves the situation somewhat, by giving as much coverage in that one alternate locale as the test suite normally gives. Another shortcoming is that there are many, many, many locales, and each may possibly cause different problems. In practice, there are probably only a dozen or so different ways a locale can break a program, but having dozens of extra testing configurations is taxing on resources (particularly for a project already stretching its resource limits by testing on different platforms, against different library versions, etc).
Another approach which occurred to me is to use (possibly first creating) a new locale which is radically different from the "C" locale in every way it can be: a different case mapping, a different thousands separator, different date formatting, etc. This locale could be used with one extra CIS configuration and hopefully relied upon to catch any errors in the code that could be triggered by any locale.
Does such a testing locale exist already? Are there flaws with this idea to testing for locale compatibility?
What other approaches to locale testing have people taken?
I'm primarily interested in POSIX locales, since those are the ones I know about. However, I know that Windows also has some similar features, so extra information (perhaps with more background about how those features work) could also be useful.
I would just audit your code for incorrect uses of functions like toupper. Under the C locale model, such functions should be considered as operating only on natural-language text in the locale's language. For any application which deals with potentially multi-lingual text, this means functions such as tolower should not be used at all.
If your target is POSIX, you have a little bit more flexibility due to the uselocale function which makes it possible to temporarily override the locale in a single thread (i.e. without messing up the global state of your program). You could then keep the C locale globally and use tolower etc. for ASCII/machine-oriented text (like config files and such) and only uselocale to the user's selected locale when working with natural-language text from said locale.
Otherwise (and perhaps even then, if your needs are more advanced), I think the best solution is to completely throw out functions like tolower and write your own ASCII versions for config text and the like, and use a powerful Unicode-aware library for natural-language text.
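In Python terms, the "write your own ASCII version" idea is tiny (a sketch; bytes.lower() only ever maps A-Z to a-z, unlike C's locale-dependent tolower(3)):
def ascii_lower(s):
    # For machine-oriented ASCII text only: bytes.lower() touches just A-Z,
    # so no locale setting can change the result.
    return s.encode("ascii").lower().decode("ascii")

print(ascii_lower("Content-LENGTH"))   # content-length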
One sticky issue that I haven't yet touched on is the decimal separator in relation to functions like snprintf and strtod. Having it changed to a , instead of a . in some locales can ruin your ability to parse files with the C library. My preferred solution is simply to never set the LC_NUMERIC locale whatsoever. (And I'm a mathematician so I tend to believe numbers should be universal, not subject to cultural convention.) Depending on your application, the only locale categories really needed may just be LC_CTYPE, LC_COLLATE, and LC_MESSAGES. Also often useful are LC_MONETARY and LC_TIME.
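The same trap is visible from Python whenever LC_NUMERIC does get set (a sketch; it assumes the de_DE.UTF-8 locale is installed):
import locale

locale.setlocale(locale.LC_NUMERIC, "de_DE.UTF-8")   # assumes this locale exists

print(locale.format_string("%.2f", 3.14))   # 3,14  -- comma, not period
print(locale.atof("3,14"))                  # 3.14
# float("3,14") would raise ValueError: Python's own float() always expects "."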
You have two different problems to solve to answer your question: testing your code and dealing with issues in other people's code.
Testing your own code - I've dealt with this by using two or three English-based locales set up in a CI environment: en_GB (collation), en_ZW (almost everything changes but you can still read the errors) and then en_AU (date, collation).
If you want to make sure your code works with multibyte filenames, then you also need to test with ja_JP.
Dealing with other people's code is in many ways the hardest, and my solution for that is to store the date values (it's almost always dates :) as raw date/time values and always keep them in GMT. Then, when you are crossing the boundary of your app, you convert to the appropriate format.
pytz and PyICU are very helpful for doing the above.
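A minimal sketch of that "store in UTC, convert at the boundary" rule using pytz (assuming pytz is installed; the Berlin zone is just an example):
from datetime import datetime
import pytz

stored = datetime(1970, 3, 1, 12, 0, tzinfo=pytz.utc)        # what the app stores

# Only at the boundary (display, protocol output) convert to a local zone/format.
local = stored.astimezone(pytz.timezone("Europe/Berlin"))
print(local.strftime("%Y-%m-%d %H:%M %Z"))                   # 1970-03-01 13:00 CET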
I have to parse some strings based on PCRE in Python, and I've no idea how to do that.
The strings I want to parse look like this:
match mysql m/^.\0\0\0\n(4\.[-.\w]+)\0...\0/s p/MySQL/ i/$1/
In this example, I have to get these different items:
"m/^.\0\0\0\n(4\.[-.\w]+)\0...\0/s" ; "p/MySQL/" ; "i/$1/"
The only thing I've found relating to PCRE manipulation in Python is this module: http://pydoc.org/2.2.3/pcre.html (but it says it's a .so file ...)
Do you know if some Python module exists to parse this kind of string?
Be Especially Careful with non‐ASCII in Python
There are some really subtle issues with how Python deals with, or fails to deal with, non-ASCII in patterns and strings. Worse, these disparities vary substantially according not just to which version of Python you are using, but also to whether you have a "wide build".
In general, when you're doing Unicode stuff, Python 3 with a wide build works best and Python 2 with a narrow build works worst, but all combinations are still a pretty far cry from how Perl regexes work vis‐à‐vis Unicode. If you're looking for ᴘᴄʀᴇ patterns in Python, you may have to look a bit further afield than its old re module.
The vexing “wide-build” issues have finally been fixed once and for all — provided you use a sufficiently advanced release of Python. Here’s an excerpt from the v3.3 release notes:
Functionality
Changes introduced by PEP 393 are the following:
Python now always supports the full range of Unicode codepoints, including non-BMP ones (i.e. from U+0000 to U+10FFFF). The distinction between narrow and wide builds no longer exists and Python now behaves like a wide build, even under Windows.
With the death of narrow builds, the problems specific to narrow builds have also been fixed, for example:
len() now always returns 1 for non-BMP characters, so len('\U0010FFFF') == 1;
surrogate pairs are not recombined in string literals, so '\uDBFF\uDFFF' != '\U0010FFFF';
indexing or slicing non-BMP characters returns the expected value, so '\U0010FFFF'[0] now returns '\U0010FFFF' and not '\uDBFF';
all other functions in the standard library now correctly handle non-BMP codepoints.
The value of sys.maxunicode is now always 1114111 (0x10FFFF in hexadecimal). The PyUnicode_GetMax() function still returns either 0xFFFF or 0x10FFFF for backward compatibility, and it should not be used with the new Unicode API (see issue 13054).
The ./configure flag --with-wide-unicode has been removed.
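You can verify those guarantees directly on any Python 3.3 or later (a quick sanity-check sketch of the points quoted above):
import sys

assert sys.maxunicode == 0x10FFFF            # no more narrow builds
assert len('\U0010FFFF') == 1                # non-BMP characters count as one
assert '\U0010FFFF'[0] == '\U0010FFFF'       # indexing does not expose surrogates
assert '\uDBFF\uDFFF' != '\U0010FFFF'        # literals are not silently recombined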
The Future of Python Regexes
In contrast to what’s currently available in the standard Python distribution’s re library, Matthew Barnett’s regex module for both Python 2 and Python 3 alike is much, much better in pretty much all possible ways and will quite probably replace re eventually. Its particular relevance to your question is that his regex library is far more ᴘᴄʀᴇ (i.e. it’s much more Perl‐compatible) in every way than re now is, which will make porting Perl regexes to Python easier for you. Because it is a ground‐up rewrite (as in from‐scratch, not as in hamburger :), it was written with non-ASCII in mind, which re was not.
The regex library therefore much more closely follows the (current) recommendations of UTS#18: Unicode Regular Expressions in how it approaches things. It meets or exceeds the UTS#18 Level 1 requirements in most if not all regards, something you normally have to use the ICU regex library or Perl itself for — or if you are especially courageous, the new Java 7 update to its regexes, as that also conforms to the Level One requirements from UTS#18.
Beyond meeting those Level One requirements, which are all absolutely essential for basic Unicode support but which are not met by Python's current re library, the awesome regex library also meets the Level Two requirements for RL2.5 Named Characters (\N{...}), RL2.2 Extended Grapheme Clusters (\X), and the new RL2.7 on Full Properties from revision 14 of UTS#18.
Matthew’s regex module also does Unicode casefolding so that case insensitive matches work reliably on Unicode, which re does not.
The following is no longer true, because regex now supports full Unicode casefolding, like Perl and Ruby.
One super‐tiny difference is that for now, Perl’s case‐insensitive patterns use full string‐oriented casefolds while his regex module still uses simple single‐char‐oriented casefolds, but this is something he’s looking into. It’s actually a very hard problem, one which apart from Perl, only Ruby even attempts.
Under full casefolding, this means that (for example) "ß" now correctly matches "SS", "ss", "ſſ", "ſs" (etc.) when case-insensitive matching is selected. (This is admittedly more important in the Greek script than the Latin one.)
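A small sketch of that with the third-party regex module (pip install regex), assuming a reasonably recent release:
import regex

m = regex.fullmatch(r"straße", "STRASSE", flags=regex.IGNORECASE | regex.FULLCASE)
print(m is not None)     # True: "ß" case-folds to "ss" under full casefolding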
See also the slides or doc source code from my third OSCON 2011 talk entitled "Unicode Support Shootout: The Good, the Bad, and the (mostly) Ugly" for general issues in Unicode support across JavaScript, PHP, Go, Ruby, Python, Java, and Perl. If you can't use either Perl regexes or possibly the ICU regex library (which doesn't have named captures, alas!), then Matthew's regex for Python is probably your best shot.
Nᴏᴛᴀ Bᴇɴᴇ s.ᴠ.ᴘ. (= s’il vous plaît, et même s’il ne vous plaît pas :) The following unsolicited noncommercial nonadvertisement was not actually put here by the author of the Python regex library. :)
Cool regex Features
The Python regex library has a cornucopia of superneat features, some of which are found in no other regex system anywhere. These make it very much worth checking out no matter whether you happen to be using it for its ᴘᴄʀᴇ‐ness or its stellar Unicode support.
A few of this module’s outstanding features of interest are:
Variable‐width lookbehind, a feature which is quite rare in regex engines and very frustrating not to have when you really want it. This may well be the most frequently requested feature in regexes (see the sketch after this list).
Backwards searching so you don’t have to reverse your string yourself first.
Scoped ismx‐type options, so that (?i:foo) only casefolds for foo, not overall, or (?-i:foo) to turn it off just on foo. This is how Perl works (or can).
Fuzzy matching based on edit‐distance (which Udi Manber’s agrep and glimpse also have)
Implicit shortest‐to‐longest sorted named lists via \L<list> interpolation
Metacharacters that specifically match only the start or only the end of a word rather than either side (\m, \M)
Support for all Unicode line separators (Java can do this, as can Perl, albeit somewhat begrudgingly, with \R) per RL1.6.
Full set operations — union, intersection, difference, and symmetric difference — on bracketed character classes per RL1.3, which is much easier than getting at it in Perl.
Allows for repeated capture groups like (\w+\s+)+ where you can get all separate matches of the first group not just its last match. (I believe C# might also do this.)
A more straightforward way to get at overlapping matches than sneaky capture groups in lookaheads.
Start and end positions for all groups for later slicing/substring operations, much like Perl's @+ and @- arrays.
The branch‐reset operator via (?|...|...|...|) to reset group numbering in each branch the way it works in Perl.
Can be configured to have your coffee waiting for you in the morning.
Support for the more sophisticated word boundaries from RL2.3.
Assumes Unicode strings by default, and fully supports RL1.2a so that \w, \b, \s, and such work on Unicode.
Supports \X for graphemes.
Supports the \G continuation point assertion.
Works correctly for 64‐bit builds (re only has 32‐bit indices).
Supports multithreading.
Ok, that’s enough hype. :)
Yet Another Fine Alternate Regex Engine
One final alternative that is worth looking at if you are a regex geek is the Python library bindings to Russ Cox’s awesome RE2 library. It also supports Unicode natively, including simple char‐based casefolding, and unlike re it notably provides for both the Unicode General Category and the Unicode Script character properties, which are the two key properties you most often need for the simpler kinds of Unicode processing.
Although RE2 misses out on a few Unicode features like \N{...} named character support found in ICU, Perl, and Python, it has extremely serious computational advantages that make it the regex engine of choice whenever you're concerned with starvation‐based denial‐of‐service attacks through regexes in web queries and such. It manages this by forbidding backreferences, which cause a regex to stop being regular and risk super‐exponential explosions in time and space.
Library bindings for RE2 are available not just for C/C++ and Python, but also for Perl and most especially for Go, where it is slated to very shortly replace the standard regex library there.
You're looking for r'(\w/[^/]+/\w*)'.
Used like so,
import re

# Each item has the form "<letter>/<body>/<flags>", e.g. "p/MySQL/" or "i/$1/".
x = re.compile(r'(\w/[^/]+/\w*)')
s = 'match mysql m/^.\0\0\0\n(4\\.[-.\\w]+)\0...\0/s p/MySQL/ i/$1/'
y = x.findall(s)
# y == ['m/^.\x00\x00\x00\n(4\\.[-.\\w]+)\x00...\x00/s', 'p/MySQL/', 'i/$1/']
I found it while playing with Edi Weitz's Regex Coach, so thanks to the comments on the question, which made me remember its existence.
Since you want to run PCRE regexes, and Python's re module has diverged from its original PCRE origins, you may also want to check out Arkadiusz Wahlig's Python bindings for PCRE. That way you'll have access to native PCRE and won't need to translate between regex flavors.