I have a city name in Unicode, and I want to match it with a regex, but I also want it to validate a plain string like "New York".
I searched a bit and tried what is attached below, but could not figure it out.
I tried the regex "([\u0000-\uFFFF]+)" on this website: http://regex101.com/#python and it works there, but I could not get it working in Python.
Thanks in advance!!
city=u"H\u0101na"
mcity=re.search(r"([\u0000-\uFFFFA-Za-z\s]+)", city, re.U)
mcity.group(0)
u'H'
mcity=re.search(r"([\u0000-\uFFFFA-Za-z\s]+)", city, re.U)
Unlike \x, \u is not a special sequence in Python 2's regex syntax, so your character group matches a literal backslash, letter u, and so on. (Python 3.3+ does understand \u in patterns.)
To refer to non-ASCII in a regex you have to include them as raw characters in a Unicode string, for example as:
mcity=re.search(u"([\u0000-\uFFFFA-Za-z\\s]+)", city, re.U)
(If you don't want to double-backslash the \s, you could also use a ur string, in which \u still works as an escape but the other escapes like \x don't. This is a bit confusing though.)
This character group is redundant: including the range U+0000 to U+FFFF already covers all of A-Za-z\s, and indeed the whole Basic Multilingual Plane including control characters. On a narrow build of Python (including Windows Python 2 builds), where the characters outside the BMP are represented using surrogate pairs in the range U+D800 to U+DFFF, you are actually allowing every single character, so it's not much of a filter. (.+ would be a simpler way of putting it.)
Then again it's pretty difficult to express what might constitute a valid town name in different parts of the world. I'd be tempted to accept anything that, shorn of control characters and leading/trailing whitespace, wasn't an empty string.
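A minimal sketch of that "accept almost anything" check (the helper name is mine, not from the question):

```python
import unicodedata

def is_plausible_city_name(name):
    # Drop control characters (Unicode category Cc), then strip
    # leading/trailing whitespace; accept whatever is left non-empty.
    cleaned = "".join(ch for ch in name if unicodedata.category(ch) != "Cc")
    return bool(cleaned.strip())

print(is_plausible_city_name(u"H\u0101na"))  # True
print(is_plausible_city_name(u" \t\n "))     # False
```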
Related
In Python 2, a Python variable name contains only ASCII letters, numbers and underscores, and it must not start with a number. Thus,
re.search(r'[_a-zA-Z][_a-zA-Z0-9]*', s)
will find a matching Python name in the str s.
In Python 3, the letters are no longer restricted to ASCII. I am in search for a new regex which will match any and all legal Python 3 variable names.
According to the docs, \w in a regex will match any Unicode word character, including digits and the underscore. I am however unsure whether this character set contains exactly those characters which might be used in variable names.
Even if the character set \w contains exactly the characters from which Python 3 variable names may legally be constructed, how do I use it to create my regex? Using just \w+ will also match "words" which start with a number, which is no good. I have the following solution in mind,
re.search(r'(\w&[^0-9])\w*', s)
where & is the "and" operator (just like | is the "or" operator). The parentheses will thus match any word literal which at the same time is not a number. The problem with this is that the & operator does not exist, and so I'm stuck with no solution.
Edit
Though the "double negative" trick (as explained in the answer by Patrick Artner below) can also be found in this question, note that this only partly answers my question. Using [^\W0-9]\w* only works if I am guaranteed that \w exactly matches the legal Unicode characters, plus the numbers 0-9. I would like a source of this knowledge, or some other regex which gets the job done.
You can use a double negative - \W is anything that \w is not - just disallow it to allow any \w:
[^\W0-9]\w*
essentially: any character that is not a non-word character and not 0-9, followed by any number of word characters.
Docs: regular-expression-syntax
You could try using
^(?![0-9])\w+$
which will not partially match invalid variable names.
Alternatively, if you don't need to use a regex, str.isidentifier() will probably do what you want (note that it also returns True for keywords such as class).
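A quick side-by-side of the two approaches (a sketch; as the question's edit points out, \w and str.isidentifier() are not guaranteed to agree on every exotic character, though they do on common cases):

```python
import re

# Double-negative trick from the answer above, anchored to the whole string.
identifier_re = re.compile(r'^[^\W0-9]\w*$')

for name in ['valid_name', '_private', 'øre', '9lives', 'has space']:
    print(name, bool(identifier_re.match(name)), name.isidentifier())
```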
I have an HTML to LaTeX parser tailored to what it's supposed to do (convert snippets of HTML into snippets of LaTeX), but there is a little issue with filling in variables. The issue is that variables should be allowed to contain the LaTeX reserved characters (namely # $ % ^ & _ { } ~ \). These need to be escaped so that they won't kill our LaTeX renderer.
The program that handles the conversion and everything is written in Python, so I tried to find a nice solution. My first idea was to simply do a .replace(), but replace doesn't let you match only when the preceding character is not a \. My second attempt was a regex, but I failed miserably at that.
The regex I came up with is ([^\][#\$%\^&_\{\}~\\]). I hoped that this would match any of the reserved characters, but only if it didn't have a \ in front. Unfortunately, this matches every single character in my input text. I've also tried different variations on this regex, but I can't get it to work. The variations mainly consisted of removing/adding slashes in the second part of the regex.
Can anyone help with this regex?
EDIT Whoops, I seem to have included the slashes as well. Shows how awake I was when I posted this :) They shouldn't be escaped in my case, but it's relatively easy to remove them from the regexes in the answers. Thanks all!
The [^\][#\$%\^&_\{\}~\\] is actually one big negated character class (the \] is an escaped ]), so it matches any character not in that set - that is why it matches nearly everything. You want a negative lookbehind assertion:
((?<!\\)[#\$%\^&_\{\}~\\])
(?<!...) will match whatever follows it as long as ... is not in front of it. You can check this out at the python docs
The regex ([^\][#\$%\^&_\{\}~\\]) is matching anything that isn't found between the first [ and the last ], so it should be matching everything except for what you want it to.
Moving around the parentheses should fix your original regex: ([^\\])[#\$%\^&_\{\}~\\].
I would try using regex lookbehinds, which won't match the character preceding what you want to escape. I'm not a regex expert so perhaps there is a better pattern, but this should work (?<!\\)[#\$%\^&_\{\}~\\].
If you're looking to find special characters that aren't escaped, without eliminating special chars preceded by escaped backslashes (e.g. you do want to match the last backslash in abc\\\def), try this:
(?<!\\)(\\\\)*[#\$%\^&_\{\}~\\]
This will match any of your special characters preceded by an even number (this includes 0) of backslashes. It says the character can be preceded by any number of pairs of backslashes, with a negative lookbehind to say those backslashes can't be preceded by another backslash.
The match will include the backslashes, but if you stick another in front of all of them, it'll achieve the same effect of escaping the special char, anyway.
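Putting that pattern to work with re.sub (a sketch; the function name is mine, and I've left the lone backslash out of the final character class to keep the replacement simple):

```python
import re

def escape_latex_specials(text):
    # Match a special char preceded by an even number of backslashes
    # (captured in group 1 so they are preserved), and insert one
    # escaping backslash before the special char itself.
    return re.sub(r'(?<!\\)((?:\\\\)*)([#$%^&_{}~])', r'\1\\\2', text)

print(escape_latex_specials('50% of $10'))      # 50\% of \$10
print(escape_latex_specials('keep \\% as-is'))  # keep \% as-is
```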
I am looking for a regular expression to validate names (using Python standard module re).
The expression should work for names with standard latin characters (a-z), space, dash, names with western european characters (æøåüöä etc.), but also Chinese, Thai, Arab etc.
All these can be considered "letters" - they are ok, but special characters such as !##$%&*() and quotes etc. should fail.
I haven't really found something that can do this - anybody who knows how to do this?
PS: there are thousands of characters that qualify - it's not realistic to simply list them all.
Well the question really is what do you need this for? Maybe the opposite approach might be better for you, i.e. specify which characters are not allowed: e.g. [^ \t] etc.
You should also take a look in the manual at things like \s, \w and others, combined with setting the LOCALE.
You can create a character class which will match all the languages you want to match:
for example
[\p{Cyrillic}\p{Latin}]
will match all Cyrillic and Latin letters. Note that \p{...} is not supported by Python's built-in re module; you need the third-party regex module for it. Not sure if this is the best solution, but it works.
Here is the full reference
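If you'd rather stay with the standard library (the built-in re module has no \p{...} support), one sketch is to test Unicode categories directly; the function name and the exact set of allowed punctuation here are my assumptions:

```python
import unicodedata

def is_valid_name(name):
    # Accept letters (any Unicode category starting with 'L'),
    # plus space and dash; reject empty/whitespace-only input.
    if not name.strip():
        return False
    return all(ch in ' -' or unicodedata.category(ch).startswith('L')
               for ch in name)

print(is_valid_name(u'Jean-Luc Åström'))  # True
print(is_valid_name(u'R2-D2!'))           # False
```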
Is it possible to define that a specific language's characters would be considered word characters?
I.e. re does not accept ä, ö as word characters if I search for them in the following way:
Ft=codecs.open('c:\\Python27\\Scripts\\finnish2\\textfields.txt','r','utf-8')
word=Ft.readlines()
word=smart_str(word, encoding='utf-8', strings_only=False, errors='replace')
word=re.sub('[^äÄöÖåÅA-Za-z0-9]',"""\[^A-Za-z0-9]*""", word) ; print 'word= ', word #works in skipping ö,ä,å characters
I would like that these character would be included to [A-Za-z].
How to define this?
[A-Za-z0-9] will only match the characters listed here, but the docs also mention some other special constructs like:
\w which stands for alphanumeric characters (namely [a-zA-Z0-9_] plus all Unicode characters which are declared to be alphanumeric)
\W which stands for all non-alphanumeric characters, i.e. everything \w does not match
\d which stands for digits
\b which matches word boundaries (including all rules from the unicode tables)
So, you will want to (a) use these constructs instead (which are shorter and maybe easier to read), and (b) tell re that you want those patterns to be Unicode-aware by setting the UNICODE flag like:
re_word = re.compile(r'\w+', re.U)
For a start, you appear to be slightly confused about the args for re.sub.
The first arg is the pattern. You have '[^äÄöÖåÅA-Za-z0-9]' which matches each character which is NOT in the Finnish alphabet nor a digit.
The second arg is the replacement. You have """[^A-Za-z0-9]*""" ... so each of those non-Finnish-alphanumeric characters is going to be replaced by the literal string [^A-Za-z0-9]*. It's reasonable to assume that this is not what you want.
What do you want to do?
You need to explain your third line; after your first 2 lines, word will be a list of unicode objects, which is A Good Thing. However the encoding= and the errors= indicate that the unknown (to us) smart_str() is converting your lovely unicode back to UTF-8. Processing data in UTF-8 bytes instead of Unicode characters is EVIL, unless you know what you are doing.
What encoding directive do you have at the top of your source file?
Advice: Get your data into unicode. Work on it in unicode. All your string constants should have the u prefix; if you consider that too much wear and tear on your typing fingers, at least put it on the non-ASCII constants e.g. u'[^äÄöÖåÅA-Za-z0-9]'. When you have done all the processing, encode your results for display or storage using an appropriate encoding.
When working with re, consider \w which will match any alphanumeric (and also the underscore) instead of listing out what is alphabetic in one language. Do use the re.UNICODE flag; docs here.
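To illustrate that advice (a Python 3 sketch, where str is already Unicode and \w matches Finnish letters by default; re.UNICODE is what you would add to a Python 2 unicode pattern):

```python
import re

# \w with Unicode matching picks up ä, ö, å alongside A-Za-z0-9_.
words = re.findall(r'\w+', u'Hyvää päivää, maailma!', re.UNICODE)
print(words)  # ['Hyvää', 'päivää', 'maailma']
```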
Something like this might do the trick:
pattern = re.compile("(?u)pattern")
or
pattern = re.compile("pattern", re.UNICODE)
whitespace_pattern = u"\s" # bug: tried to use unicode \u0020, broke regex
time_sig_pattern = \
"""^%(ws)s*time signature:%(ws)s*(?P<top>\d+)%(ws)s*\/%(ws)s*(?P<bottom>\d+)%(ws)s*$""" %{"ws": whitespace_pattern}
time_sig = compile(time_sig_pattern, U|M)
For some reason, adding the Verbose flag, X, to compile breaks the pattern.
Also, I wanted to use unicode for whitespace_pattern recognition (supposedly, we'll get patterns that use non-unicode spaces and we need to explicitly check for that one unicode character as a valid space), but the pattern keeps breaking.
VERBOSE gives you the ability to write comments in your regex to document it.
In order to do so, it ignores unescaped whitespace in the pattern, since you need line breaks and indentation to lay out the comments.
Replace all spaces in your regex by \s to specify they are spaces you want to match in your pattern, and not just some spaces to format your comments.
What's more, you may want to use the r prefix for the string you use as a pattern. It tells Python not to interpret special notations such as \n and let you use backslashes without escaping them.
Always define regexes with the r prefix to indicate they are raw strings.
r"""^%(ws)s*time signature:%(ws)s*(?P<top>\d+)%(ws)s*\/%(ws)s*(?P<bottom>\d+)%(ws)s*$""" %{"ws": whitespace_pattern}
When creating a regex to match Unicode characters you do not want to use a Python unicode string. In your example, the regular expression needs to see the literal characters \u0020, so you should use whitespace_pattern = r"\u0020" instead of u"\u0020".
As other answers have mentioned, you should also use the r prefix for time_sig_pattern, after those two changes your code should work fine.
For VERBOSE to work correctly you need to escape all whitespace in the pattern, so towards the beginning of the pattern replace the space in time signature with "\ " (quotes for clarity), \s, or [ ] as documented here.
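Putting the three fixes together (raw strings, an escaped literal space, the VERBOSE flag) - a sketch based on the code in the question:

```python
import re

whitespace_pattern = r"\s"  # raw string, so re sees the backslash

time_sig_pattern = (
    r"^%(ws)s*time\ signature:%(ws)s*(?P<top>\d+)"
    r"%(ws)s*/%(ws)s*(?P<bottom>\d+)%(ws)s*$"
) % {"ws": whitespace_pattern}

# The escaped space in 'time\ signature' survives re.X (VERBOSE),
# which would otherwise strip unescaped whitespace from the pattern.
time_sig = re.compile(time_sig_pattern, re.U | re.M | re.X)

m = time_sig.match("time signature: 3/4")
print(m.group('top'), m.group('bottom'))  # 3 4
```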