I am getting into machine learning, and to document my code, I will write LaTeX math versions of my functions, right next to the code in a Jupyter/IPython notebook. The mathematical definitions include many Greek symbols, so I thought that I might as well use the Greek symbols in function and variable names, since that's possible in Python. Would this be bad practice?
It seems a good use case under these assumptions:
the audience is mathematically versed,
you make use of a lot of Jupyter Notebook features such as inline plotting and table display (e.g. pandas), so that the use of your code outside a notebook is unlikely,
it is application code rather than a library that others will use.
Hint: Entering Greek letters in the notebook is simple.
Just type the LaTeX math notation and TAB.
For example, type:
\pi
and then the TAB key to get a π.
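For example, a Gaussian density can carry the same symbols as its LaTeX definition. This is just an illustrative sketch (the function and parameter names are my own inventions, and it requires Python 3):

import math

def gaussian(x, μ, σ):
    # N(x; μ, σ) = exp(-(x - μ)² / (2σ²)) / (σ·√(2π))
    return math.exp(-(x - μ) ** 2 / (2 * σ ** 2)) / (σ * math.sqrt(2 * math.pi))

print(gaussian(0.0, μ=0.0, σ=1.0))  # ≈ 0.3989, i.e. 1/√(2π)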
This is what the official style guide has to say:
For Python 3.0 and beyond, the following policy is prescribed for the
standard library (see PEP 3131): All identifiers in the Python
standard library MUST use ASCII-only identifiers, and SHOULD use
English words wherever feasible (in many cases, abbreviations and
technical terms are used which aren't English). In addition, string
literals and comments must also be in ASCII. The only exceptions are
(a) test cases testing the non-ASCII features, and (b) names of
authors. Authors whose names are not based on the latin alphabet MUST
provide a latin transliteration of their names.
Open source projects with a global audience are encouraged to adopt a similar policy.
In other words: it would be considered better practice to use ASCII-only identifiers if you are targeting a global audience. If the code is only going to be read by your team, it's a matter of preference.
Really, it is a matter of personal opinion. Keep in mind that Unicode character support for variable names is ONLY in Python 3, so make sure that any external libraries support Python 3. Other than that, there isn't a reason to say no.
I have to parse some strings based on PCRE in Python, and I've no idea how to do that.
The strings I want to parse look like:
match mysql m/^.\0\0\0\n(4\.[-.\w]+)\0...\0/s p/MySQL/ i/$1/
In this example, I have to get these different items:
"m/^.\0\0\0\n(4\.[-.\w]+)\0...\0/s" ; "p/MySQL/" ; "i/$1/"
The only thing I've found relating to PCRE manipulation in Python is this module: http://pydoc.org/2.2.3/pcre.html (but it says it's a .so file ...)
Do you know if some Python module exists to parse this kind of string?
Be Especially Careful with non‐ASCII in Python
There are some really subtle issues with how Python deals with, or fails to deal with, non-ASCII in patterns and strings. Worse, these disparities vary substantially according not just to which version of Python you are using, but also to whether you have a “wide build”.
In general, when you’re doing Unicode stuff, Python 3 with a wide build works best and Python 2 with a narrow build works worst, but all combinations are still a pretty far cry from how Perl regexes work vis‐à‐vis Unicode. If you’re looking for ᴘᴄʀᴇ patterns in Python, you may have to look a bit further afield than its old re module.
The vexing “wide-build” issues have finally been fixed once and for all — provided you use a sufficiently advanced release of Python. Here’s an excerpt from the v3.3 release notes:
Functionality
Changes introduced by PEP 393 are the following:
Python now always supports the full range of Unicode codepoints, including non-BMP ones (i.e. from U+0000 to U+10FFFF). The distinction between narrow and wide builds no longer exists and Python now behaves like a wide build, even under Windows.
With the death of narrow builds, the problems specific to narrow builds have also been fixed, for example:
len() now always returns 1 for non-BMP characters, so len('\U0010FFFF') == 1;
surrogate pairs are not recombined in string literals, so '\uDBFF\uDFFF' != '\U0010FFFF';
indexing or slicing non-BMP characters returns the expected value, so '\U0010FFFF'[0] now returns '\U0010FFFF' and not '\uDBFF';
all other functions in the standard library now correctly handle non-BMP codepoints.
The value of sys.maxunicode is now always 1114111 (0x10FFFF in hexadecimal). The PyUnicode_GetMax() function still returns either 0xFFFF or 0x10FFFF for backward compatibility, and it should not be used with the new Unicode API (see issue 13054).
The ./configure flag --with-wide-unicode has been removed.
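A quick way to see the post-PEP 393 behaviour for yourself (any Python 3.3 or later):

import sys

s = '\U0010FFFF'        # the highest codepoint, outside the BMP
print(len(s))           # 1; a narrow 3.2 build would have said 2
print(s[0] == s)        # True: indexing yields the whole character
print(sys.maxunicode)   # 1114111 (0x10FFFF)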
The Future of Python Regexes
In contrast to what’s currently available in the standard Python distribution’s re library, Matthew Barnett’s regex module for both Python 2 and Python 3 alike is much, much better in pretty much all possible ways and will quite probably replace re eventually. Its particular relevance to your question is that his regex library is far more ᴘᴄʀᴇ (i.e. it’s much more Perl‐compatible) in every way than re now is, which will make porting Perl regexes to Python easier for you. Because it is a ground‐up rewrite (as in from‐scratch, not as in hamburger :), it was written with non-ASCII in mind, which re was not.
The regex library therefore much more closely follows the (current) recommendations of UTS#18: Unicode Regular Expressions in how it approaches things. It meets or exceeds the UTS#18 Level 1 requirements in most if not all regards, something you normally have to use the ICU regex library or Perl itself for — or if you are especially courageous, the new Java 7 update to its regexes, as that also conforms to the Level One requirements from UTS#18.
Beyond meeting those Level One requirements, which are all absolutely essential for basic Unicode support but which are not met by Python’s current re library, the awesome regex library also meets the Level Two requirements for RL2.5 Named Characters (\N{...}), RL2.2 Extended Grapheme Clusters (\X), and the new RL2.7 on Full Properties from revision 14 of UTS#18.
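Here is a tiny illustration of two of those Level Two features; it assumes the third-party regex module is installed (pip install regex):

import regex

# RL2.5 Named Characters via \N{...}:
print(regex.findall(r'\N{GREEK SMALL LETTER PI}', 'π = 3.14159'))  # ['π']

# RL2.2 Extended Grapheme Clusters: \X matches one user-perceived character,
# so 'e' plus a combining acute accent counts as a single grapheme.
print(regex.findall(r'\X', 'e\u0301a'))  # ['é', 'a']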
Matthew’s regex module also does Unicode casefolding so that case insensitive matches work reliably on Unicode, which re does not.
The following is no longer true, because regex now supports full Unicode casefolding, like Perl and Ruby.
One super‐tiny difference is that for now, Perl’s case‐insensitive patterns use full string‐oriented casefolds while his regex module still uses simple single‐char‐oriented casefolds, but this is something he’s looking into. It’s actually a very hard problem, one which apart from Perl, only Ruby even attempts.
Under full casefolding, this means that (for example) "ß" now correctly matches "SS", "ss", "ſſ", "ſs" (etc.) when case-insensitive matching is selected. (This is admittedly more important in the Greek script than the Latin one.)
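You can verify the full casefolding yourself; this sketch assumes a recent release of the regex module, with FULLCASE (?f) combined with IGNORECASE (?i):

import regex

# Full string-oriented casefolding: the single 'ß' matches the two-letter 'SS'.
print(regex.fullmatch(r'(?if)ß', 'SS'))            # a match object
print(regex.fullmatch(r'(?if)straße', 'STRASSE'))  # a match object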
See also the slides or doc source code from my third OSCON2011 talk entitled “Unicode Support Shootout: The Good, the Bad, and the (mostly) Ugly” for general issues in Unicode support across JavaScript, PHP, Go, Ruby, Python, Java, and Perl. If you can’t use either Perl regexes or possibly the ICU regex library (which doesn’t have named captures, alas!), then Matthew’s regex for Python is probably your best shot.
Nᴏᴛᴀ Bᴇɴᴇ s.ᴠ.ᴘ. (= s’il vous plaît, et même s’il ne vous plaît pas :) The following unsolicited noncommercial nonadvertisement was not actually put here by the author of the Python regex library. :)
Cool regex Features
The Python regex library has a cornucopia of superneat features, some of which are found in no other regex system anywhere. These make it very much worth checking out no matter whether you happen to be using it for its ᴘᴄʀᴇ‐ness or its stellar Unicode support.
A few of this module’s outstanding features of interest are listed here (a short sketch of a few of them follows the list):
Variable‐width lookbehind, a feature which is quite rare in regex engines and very frustrating not to have when you really want it. This may well be the most frequently requested feature in regexes.
Backwards searching so you don’t have to reverse your string yourself first.
Scoped ismx‐type options, so that (?i:foo) only casefolds for foo, not overall, or (?-i:foo) to turn it off just on foo. This is how Perl works (or can).
Fuzzy matching based on edit‐distance (which Udi Manber’s agrep and glimpse also have)
Implicit shortest‐to‐longest sorted named lists via \L<list> interpolation
Metacharacters that specifically match only the start or only the end of a word rather than either side (\m, \M)
Support for all Unicode line separators (Java can do this, as can Perl, albeit somewhat begrudgingly, with \R per RL1.6).
Full set operations — union, intersection, difference, and symmetric difference — on bracketed character classes per RL1.3, which is much easier than getting at it in Perl.
Allows for repeated capture groups like (\w+\s+)+ where you can get all separate matches of the first group not just its last match. (I believe C# might also do this.)
A more straightforward way to get at overlapping matches than sneaky capture groups in lookaheads.
Start and end positions for all groups for later slicing/substring operations, much like Perl’s @+ and @- arrays.
The branch‐reset operator via (?|...|...|...) to reset group numbering in each branch the way it works in Perl.
Can be configured to have your coffee waiting for you in the morning.
Support for the more sophisticated word boundaries from RL2.3.
Assumes Unicode strings by default, and fully supports RL1.2a so that \w, \b, \s, and such work on Unicode.
Supports \X for graphemes.
Supports the \G continuation point assertion.
Works correctly for 64‐bit builds (re only has 32‐bit indices).
Supports multithreading.
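Here is a brief sketch of three of those features; again this assumes the third-party regex module (pip install regex):

import regex

# Repeated capture groups: every match of group 1 is kept, not just the last.
m = regex.match(r'(\w+\s+)+', 'one two three ')
print(m.captures(1))  # ['one ', 'two ', 'three ']

# \m and \M match only the start and only the end of a word, respectively.
print(regex.findall(r'\mcat', 'cat concat catalog'))  # ['cat', 'cat']

# Set operations in character classes (V1 behaviour): letters minus vowels.
print(regex.findall(r'(?V1)[[a-z]--[aeiou]]+', 'regex'))  # ['r', 'g', 'x']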
Ok, that’s enough hype. :)
Yet Another Fine Alternate Regex Engine
One final alternative that is worth looking at if you are a regex geek is the Python library bindings to Russ Cox’s awesome RE2 library. It also supports Unicode natively, including simple char‐based casefolding, and unlike re it notably provides for both the Unicode General Category and the Unicode Script character properties, which are the two key properties you most often need for the simpler kinds of Unicode processing.
Although RE2 misses out on a few Unicode features like \N{...} named character support found in ICU, Perl, and Python, it has extremely serious computational advantages that make it the regex engine of choice whenever you’re concerned about starvation‐based denial‐of‐service attacks through regexes in web queries and such. It manages this by forbidding backreferences, which cause a regex to stop being regular and risk super‐exponential explosions in time and space.
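To see why this matters, here is a classic pathological pattern that drives Python's backtracking re engine into exponential time; RE2's automaton-based design makes this entire class of blowup impossible:

import re
import time

start = time.time()
re.match(r'(a+)+$', 'a' * 25 + 'b')  # no match, but exponential backtracking
print('took %.1f seconds' % (time.time() - start))  # seconds, doubling with each extra 'a'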
Library bindings for RE2 are available not just for C/C++ and Python, but also for Perl and most especially for Go, where it is slated to very shortly replace the standard regex library there.
You're looking for '(\w/[^/]+/\w*)'.
Used like so,
import re

# Use a raw string so the backslashes reach the regex engine intact.
x = re.compile(r'(\w/[^/]+/\w*)')
s = 'match mysql m/^.\x00\x00\x00\n(4\\.[-.\\w]+)\x00...\x00/s p/MySQL/ i/$1/'
y = x.findall(s)
# y == ['m/^.\x00\x00\x00\n(4\\.[-.\\w]+)\x00...\x00/s', 'p/MySQL/', 'i/$1/']
Found it while playing with Edi Weitz's Regex Coach, so thanks to the comments to the question which made me remember its existence.
Since you want to run PCRE regexes, and Python's re module has diverged from its original PCRE origins, you may also want to check out Arkadiusz Wahlig's Python bindings for PCRE. That way you'll have access to native PCRE and won't need to translate between regex flavors.
Are there any real differences between Ruby regex and Python regex?
I've been unable to find any differences in the two, but may have missed something.
The last time I checked, they differed substantially in their Unicode support. Ruby in 1.9 at least has some very limited Unicode support. I believe one or two Unicode properties might be supported by now. Probably the general categories and maybe the scripts were the two I'm thinking of.
Python has less and more Unicode support at the same time. Python does seem to make it possible to meet the requirements of RL1.2a "Compatibility Properties" from UTS#18 on Unicode Regular Expressions.
That said, there is a really rather nice Python library out there by Matthew Barnett (mrab) that finally adds a couple of Unicode properties to Python regexes. He supports the two most important ones: the general categories, and the script properties. It has some other intriguing features as well. It deserves some good publicity.
I don't think either Ruby or Python supports Unicode all that terribly well, although more and more gets done every day. In particular, however, neither meets even the barebones Level 1 requirement for Unicode Regular Expressions cited above. For example, RL1.2 requires that at least 11 properties be supported: General_Category, Script, Alphabetic, Uppercase, Lowercase, White_Space, Noncharacter_Code_Point, Default_Ignorable_Code_Point, ANY, ASCII, and ASSIGNED.
I think Python only lets you get to some of those, and only in a roundabout way. Of course, there are many, many other properties beyond these 11.
When you’re looking for Unicode support, there's more than just UTS#18 on Regular Expressions of course, although that is the one that matters most to this question, and neither Ruby nor Python is Level 1 compliant. Other very important aspects of Unicode include UAX#15, UAX#14, UTS#10, UAX#11, UAX#29, and of course the crucial UAX#44. Python has libraries for at least a couple of those, I know. I don't know that they're standard.
But when it comes to regular expression support, um, there are richer alternatives than just those two, you know. :)
I like the /pattern/ syntax in Ruby, inspired by Perl, for regular expressions. Python's re.compile("pattern") is not really elegant for me. The syntactic sugar in Ruby, and the fact that regular expressions live in a separate re module in Python, make me lean towards Ruby when it comes to regular expressions.
Apart from this, I don't see much of a difference from a normal regular-expression programming perspective. Both languages have pretty comprehensive and mostly similar RE support. There might be performance differences (Python has traditionally had better performance), and Python also has greater Unicode regular expression support.
If the question is only about regex's: neither. Use Perl.
You should choose between those languages based on the other, non-regex problems you are trying to solve, and on the community support in that language that is near your field of endeavor.
If you are truly only picking a language based on regex support -- choose Perl...
Ruby's Regexp#match method is equivalent to Python's re.search(), not re.match(). re.search() and Regexp#match look for the first match anywhere in a string. re.match() looks for a match only at the beginning of a string.
To perform the equivalent of re.match(), a Ruby regular expression will need to start with \A, anchoring it to the beginning of the string (note that in Ruby, ^ matches at the beginning of any line, not just the beginning of the string).
To perform the equivalent of Regexp#match, a Python regular expression passed to re.match() will need to start with .*, indicating zero or more characters before the pattern proper; alternatively, just use re.search() directly.
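A minimal sketch of the difference, with the Ruby equivalents noted in comments:

import re

s = 'hello world'
print(re.search(r'world', s))   # matches; like Ruby's /world/.match(s)
print(re.match(r'world', s))    # None: re.match() is anchored to the start
print(re.match(r'.*world', s))  # matches; emulates a search via re.match()
# The Ruby equivalent of re.match() here is /\Aworld/.match(s)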
The regular expression libraries for Ruby and Python are developed by two completely independent teams. Even if they are identical now (and I wouldn't be certain they are), there's no guarantee that they won't diverge sometime in the future.
The safest position is to assume they're different now, and assume they will continue to be different in the future.
I was trying to understand why Python is said to be a beautiful language. I was directed to the beauty of PEP 8... and it was strange. In fact it says that you can use any convention you want, just be consistent... and suddenly I found some strange things in the core library:
request()
getresponse()
set_debuglevel()
endheaders()
http://docs.python.org/py3k/library/http.client.html
The functions below are new in Python 3.1. Which part of the PEP 8 convention is used here?
popitem()
move_to_end()
http://docs.python.org/py3k/library/collections.html
So my question is: is PEP 8 used in the core library, or not? Why is it like that?
Is it the same situation as in PHP, where I can never simply remember the name of a function because every possible way of writing a name is used somewhere?
Why is PEP 8 not followed in the core library, even for new functions?
PEP 8 recommends using underscores as the default choice, but leaving them out is generally done for one of two reasons:
consistency with some other API (e.g. the current module, or a standard interface)
because leaving them out doesn't hurt readability (or even improves it)
To address the specific examples you cite:
popitem is a longstanding method on dict objects. Other APIs that adopt it retain that spelling (i.e. no underscore).
move_to_end is completely new. Despite other methods on the object omitting underscores, it follows the recommended PEP 8 convention of using underscores, since movetoend is hard to read (mainly because toe is a word, so most people's brains will have to back up and reparse once they notice the nd)
set_debuglevel (and the newer set_tunnel) should probably have left the underscore out for consistency with the rest of the HTTPConnection API. However, the original author may simply have preferred set_debuglevel to setdebuglevel (note that debuglevel is also an argument to the HTTPConnection constructor, explaining the lack of a second underscore), and then the author of set_tunnel simply followed that example.
set_tunnel is actually another case where dropping the underscore arguably hurts readability. The juxtaposition of the two "t"s in settunnel isn't conducive to easy parsing.
Once these inconsistencies make it into a Python release, it generally isn't worth the hassle to try and correct them (this was done to de-Javaify the threading module interface between Python 2 and Python 3, and the process was annoying enough that nobody else has volunteered to "fix" any other APIs afflicted by similar stylistic problems).
From PEP8:
But most importantly: know when to be inconsistent -- sometimes the style guide just doesn't apply. When in doubt, use your best judgment. Look at other examples and decide what looks best. And don't hesitate to ask!
What you have mentioned here is somewhat consistent with the PEP8 guidelines; actually, the main inconsistencies are in other parts, usually with CamelCase.
The Python standard library is not as tightly controlled as it could be, and the style of modules varies. I'm not sure what your examples are meant to illustrate, but it is true that Python's library does not have one voice, as Java's does, or Win32. The language (and library) are built by an all-volunteer crew, with no corporation paying salaries to people dedicated to the language, and it sometimes shows.
Of course, I believe other factors outweigh this negative, but it is a negative nonetheless.
Is it considered good practice to pick Unicode strings over regular strings when coding in Python? I mainly work on the Windows platform, where most of the string types are Unicode these days (i.e. .NET String, '_UNICODE' turned on by default on a new C++ project, etc.). Therefore, I tend to think that cases where non-Unicode string objects are used are rather rare. Anyway, I'm curious about what Python practitioners do in real-world projects.
From my practice: use unicode.
At the beginning of one project we used plain strings; however, as the project grew we implemented new features and brought in new third-party libraries. In that mess of non-unicode/unicode strings, some functions started failing, and we had to spend time localizing these problems and fixing them. (Some third-party modules didn't support unicode and started failing after we switched to it, but that is the exception rather than the rule.)
I have also had to rewrite some third-party modules (e.g. SendKeys) because they did not support unicode. It would have been better if they had been written with unicode from the beginning :)
So I think today we should use unicode.
P.S. All that mess above is only my humble opinion :)
As you ask this question, I suppose you are using Python 2.x.
Python 3.0 changed string representation quite a lot, and all text is now unicode.
I would go for unicode in any new project - in a way compatible with the switch to Python 3.0 (see details).
Yes, use unicode.
Some hints:
When doing input/output in any sort of binary format, decode directly after reading and encode directly before writing, so that you never need to mix byte strings and unicode. Mixing them tends to lead to UnicodeDecodeErrors (or UnicodeEncodeErrors) sooner or later. (See the sketch after these hints.)
[Forget about this one, my explanations just made it even more confusing. It's only an issue when porting to Python 3, you can care about it then.]
Common Python newbie errors with Unicode (not saying you are a newbie, but this may be read by newbies): Don't confuse encode and decode. Remember, UTF-8 is an ENcoding, so you ENcode Unicode to UTF-8 and DEcode from it.
Do not fall into the temptation of setting the default encoding in Python (by setdefaultencoding in sitecustomize.py or similar) to whatever you use most. That is just going to give you problems if you reinstall or move to another computer or suddenly need to use another encoding. Be explicit.
Remember, not all of Python 2's standard library accepts unicode. If you feed a method unicode and it doesn't work but it should, try feeding it ASCII and see. Example: urllib.urlopen() fails with unhelpful errors if you give it a unicode object instead of a string.
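A minimal sketch of the decode-early/encode-late pattern from the hints above; the file name and the choice of UTF-8 are just illustrative assumptions (codecs.open works on both Python 2 and 3):

# -*- coding: utf-8 -*-
import codecs

u = u'caf\xe9'                 # a unicode object ('café')
b = u.encode('utf-8')          # ENcode: unicode -> UTF-8 bytes
assert b.decode('utf-8') == u  # DEcode: UTF-8 bytes -> unicode

def read_lines(path):
    # Decode immediately on input, so everything inside the program is unicode.
    with codecs.open(path, 'r', encoding='utf-8') as f:
        return [line.rstrip(u'\n') for line in f]

def write_lines(path, lines):
    # Encode only at the output boundary.
    with codecs.open(path, 'w', encoding='utf-8') as f:
        for line in lines:
            f.write(line + u'\n')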
Hm. That's all I can think of now!
It can be tricky to consistently use unicode strings in Python 2.x - be it because somebody inadvertently uses the more natural str(blah) where they meant unicode(blah), because of a forgotten u prefix on a string literal, because of third-party module incompatibilities - whatever. So in Python 2.x, use unicode only if you have to, and are prepared to provide good unit test coverage.
If you have the option of using Python 3.x however, you don't need to care - strings will be unicode with no extra effort.
In addition to Mihail's comment, I would say: use Unicode, since it is the future. In Python 3.0, non-Unicode strings will be gone, and as far as I know, all the u prefixes will cause trouble, since they are gone as well.
If you are dealing with severely constrained memory or disk space, use ASCII strings. In this case, you should additionally write your software in C or something even more compact :)