Combine isalpha() and isspace() into 1 statement - python

I am looking for a function that combines the methods isalpha() and isspace() into a single method.
I want to check if a given string only contains letters and/or spaces, for example:
"This is text".isalpha_or_space()
# True
However, with the 2 methods, I get:
"This is text".isalpha() or "This is text".isspace()
# False
as the string is not only alpha nor space.
Of course, I could iterate over every character and check it for space or alpha.
I could also compare the string's characters against ("abcdefghijklmnopqrstuvwxyz" + " ").
However, neither of these approaches seems very pythonic to me - convince me otherwise.

The most Pythonic approach is to use a def for this:
def isalpha_or_space(self):
    if self == "":
        return False
    for char in self:
        if not (char.isalpha() or char.isspace()):
            return False
    return True
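For example, called as a plain module-level function it behaves like the method the question asks for:
>>> isalpha_or_space("This is text")
True
>>> isalpha_or_space("This is text!")
False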
It is not easy to contribute this as a method on str, since Python does not encourage the monkeypatching of built-in types. My recommendation is just to leave this as a module level function.
Nonetheless, it is still possible to mimic the interface of a method, since most namespaces in Python are writable if you know where to find them. The suggestion below is not Pythonic, and relies on implementation detail.
>>> import gc
>>> def monkeypatch(type_, func):
...     gc.get_referents(type_.__dict__)[0][func.__name__] = func
...
>>> monkeypatch(str, isalpha_or_space)
>>> "hello world".isalpha_or_space()
True

Use a regular expression (regex):
>>> import re
>>> result = re.match(r'[a-zA-Z\s]+$', "This is text")
>>> bool(result)
True
Breakdown:
re - Python's regex module
[a-zA-Z\s] - Any letter or whitespace
+ - One or more of the previous item
$ - End of string
The above works with ASCII letters. For the full Unicode range on Python 3, unfortunately the regex is a bit complicated:
>>> result = re.match(r'([^\W\d_]|\s)+$', 'un café')
Breakdown:
(x|y) - x or y
[^\W\d_] - Any word character except a digit or an underscore
From Mark Tolonen's answer on How to match all unicode alphabetic characters and spaces in a regex?
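A quick check at the interpreter (assuming Python 3, where str patterns use Unicode matching by default):
>>> bool(re.match(r'([^\W\d_]|\s)+$', 'un café'))
True
>>> bool(re.match(r'([^\W\d_]|\s)+$', 'un café!'))
False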

You can use the following solution:
s != '' and all(c.isalpha() or c.isspace() for c in s)
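For example, a quick check at the interpreter:
>>> s = "This is text"
>>> s != '' and all(c.isalpha() or c.isspace() for c in s)
True
>>> s = "This is text #2"
>>> s != '' and all(c.isalpha() or c.isspace() for c in s)
False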

How can I implement isalnum() into this Python web scraper to remove special characters? [duplicate]

I'm trying to remove specific characters from a string using Python. This is the code I'm using right now. Unfortunately it appears to do nothing to the string.
for char in line:
    if char in " ?.!/;:":
        line.replace(char,'')
How do I do this properly?
Strings in Python are immutable (can't be changed). Because of this, the effect of line.replace(...) is just to create a new string, rather than changing the old one. You need to rebind (assign) it to line in order to have that variable take the new value, with those characters removed.
Also, the way you are doing it is going to be kind of slow, relatively. It's also likely to be a bit confusing to experienced pythonators, who will see a doubly-nested structure and think for a moment that something more complicated is going on.
Starting in Python 2.6 and newer Python 2.x versions*, you can instead use str.translate (see the Python 3 answer below):
line = line.translate(None, '!##$')
or regular expression replacement with re.sub
import re
line = re.sub('[!##$]', '', line)
The characters enclosed in brackets constitute a character class. Any characters in line which are in that class are replaced with the second parameter to sub: an empty string.
Python 3 answer
In Python 3, strings are Unicode. You'll have to translate a little differently. kevpie mentions this in a comment on one of the answers, and it's noted in the documentation for str.translate.
When calling the translate method of a Unicode string, you cannot pass the second parameter that we used above. You also can't pass None as the first parameter. Instead, you pass a translation table (usually a dictionary) as the only parameter. This table maps the ordinal values of characters (i.e. the result of calling ord on them) to the ordinal values of the characters which should replace them, or—usefully to us—None to indicate that they should be deleted.
So to do the above dance with a Unicode string you would call something like
translation_table = dict.fromkeys(map(ord, '!##$'), None)
unicode_line = unicode_line.translate(translation_table)
Here dict.fromkeys and map are used to succinctly generate a dictionary containing
{ord('!'): None, ord('#'): None, ...}
Even simpler, as another answer puts it, create the translation table in place:
unicode_line = unicode_line.translate({ord(c): None for c in '!##$'})
Or, as brought up by Joseph Lee, create the same translation table with str.maketrans:
unicode_line = unicode_line.translate(str.maketrans('', '', '!##$'))
* for compatibility with earlier Pythons, you can create a "null" translation table to pass in place of None:
import string
line = line.translate(string.maketrans('', ''), '!##$')
Here string.maketrans is used to create a translation table, which is just a string containing the characters with ordinal values 0 to 255.
Am I missing the point here, or is it just the following:
string = "ab1cd1ef"
string = string.replace("1", "")
print(string)
# result: "abcdef"
Put it in a loop:
a = "a!b#c#d$"
b = "!##$"
for char in b:
    a = a.replace(char, "")
print(a)
# result: "abcd"
>>> line = "abc##!?efg12;:?"
>>> ''.join( c for c in line if c not in '?:!/;' )
'abc##efg12'
With re.sub (regular expression substitution)
Substitution using regular expressions is available through the re module's re.sub function:
import re
re.sub(r'\ |\?|\.|\!|\/|\;|\:', '', line)
Example
import re
line = 'Q: Do I write ;/.??? No!!!'
re.sub(r'\ |\?|\.|\!|\/|\;|\:', '', line)
'QDoIwriteNo'
Explanation
In regular expressions (regex), | is a logical OR and \ escapes spaces and special characters that might be actual regex commands. Whereas sub stands for substitution, in this case with the empty string ''.
The asker almost had it. Like most things in Python, the answer is simpler than you think.
>>> line = "H E?.LL!/;O:: "
>>> for char in ' ?.!/;:':
...     line = line.replace(char,'')
...
>>> print line
HELLO
You don't have to do the nested if/for loop thing, but you DO need to check each character individually.
For the inverse requirement of only allowing certain characters in a string, you can use regular expressions with a set complement operator [^ABCabc]. For example, to remove everything except ASCII letters, digits, and the hyphen:
>>> import string
>>> import re
>>>
>>> phrase = ' There were "nine" (9) chick-peas in my pocket!!! '
>>> allow = string.letters + string.digits + '-'  # use string.ascii_letters in Python 3
>>> re.sub('[^%s]' % allow, '', phrase)
'Therewerenine9chick-peasinmypocket'
From the python regular expression documentation:
Characters that are not within a range can be matched by complementing
the set. If the first character of the set is '^', all the characters
that are not in the set will be matched. For example, [^5] will match
any character except '5', and [^^] will match any character except
'^'. ^ has no special meaning if it’s not the first character in the
set.
line = line.translate(None, " ?.!/;:")
>>> s = 'a1b2c3'
>>> ''.join(c for c in s if c not in '123')
'abc'
Strings are immutable in Python. The replace method returns a new string after the replacement. Try:
for char in line:
    if char in " ?.!/;:":
        line = line.replace(char,'')
This is identical to your original code, with the addition of an assignment to line inside the loop.
Note that the string replace() method replaces all of the occurrences of the character in the string, so you can do better by using replace() for each character you want to remove, instead of looping over each character in your string.
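A short sketch of that approach, reusing the sample string from the earlier answer:
line = "H E?.LL!/;O:: "
for char in " ?.!/;:":
    line = line.replace(char, '')
# line is now "HELLO"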
I was surprised that no one had yet recommended using the builtin filter function.
import operator
import string # only for the example you could use a custom string
s = "1212edjaq"
Say we want to filter out everything that isn't a number. Using the filter builtin method "...is equivalent to the generator expression (item for item in iterable if function(item))" [Python 3 Builtins: Filter]
sList = list(s)
intsList = list(string.digits)
obj = filter(lambda x: operator.contains(intsList, x), sList)
In Python 3 this returns
>> <filter object # hex>
To get a printed string,
nums = "".join(list(obj))
print(nums)
>> "1212"
I am not sure how filter ranks in terms of efficiency but it is a good thing to know how to use when doing list comprehensions and such.
UPDATE
Logically, since filter works, you could also use a list comprehension, and from what I have read it is supposed to be more efficient because lambdas are the wall street hedge fund managers of the programming function world. Another plus is that it is a one-liner that doesn't require any imports. For example, using the same string 's' defined above:
num = "".join([i for i in s if i.isdigit()])
That's it. The return will be a string of all the characters that are digits in the original string.
If you have a specific list of acceptable/unacceptable characters you need only adjust the 'if' part of the list comprehension.
target_chars = "".join([i for i in s if i in some_list])
or alternatively,
target_chars = "".join([i for i in s if i not in some_list])
Using filter, you'd just need one line
line = filter(lambda char: char not in " ?.!/;:", line)
This treats the string as an iterable and keeps every character for which the lambda returns True:
>>> help(filter)
Help on built-in function filter in module __builtin__:
filter(...)
filter(function or None, sequence) -> list, tuple, or string
Return those items of sequence for which function(item) is true. If
function is None, return the items that are true. If sequence is a tuple
or string, return the same type, else return a list.
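Note that in Python 3, filter returns an iterator rather than a string, so you would join the result back into a string yourself (a small sketch of the same idea):
line = ''.join(filter(lambda char: char not in " ?.!/;:", line))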
Try this one:
def rm_char(original_str, need2rm):
    '''Remove characters in "need2rm" from "original_str"'''
    return original_str.translate(str.maketrans('', '', need2rm))
This method works well in Python 3
Here are some possible ways to achieve this task:
def attempt1(string):
    return "".join([v for v in string if v not in ("a", "e", "i", "o", "u")])

def attempt2(string):
    for v in ("a", "e", "i", "o", "u"):
        string = string.replace(v, "")
    return string

def attempt3(string):
    import re
    for v in ("a", "e", "i", "o", "u"):
        string = re.sub(v, "", string)
    return string

def attempt4(string):
    return string.replace("a", "").replace("e", "").replace("i", "").replace("o", "").replace("u", "")

for attempt in [attempt1, attempt2, attempt3, attempt4]:
    print(attempt("murcielago"))
PS: Instead of using " ?.!/;:", the examples use the vowels... and yeah, "murcielago" is the Spanish word for bat... funny word as it contains all the vowels :)
PS2: If you're interested on performance you could measure these attempts with a simple code like:
import timeit
K = 1000000
for i in range(1, 5):
    t = timeit.Timer(
        f"attempt{i}('murcielago')",
        setup=f"from __main__ import attempt{i}"
    ).repeat(1, K)
    print(f"attempt{i}", min(t))
In my box you'd get:
attempt1 2.2334518376057244
attempt2 1.8806643818474513
attempt3 7.214925774955572
attempt4 1.7271184513757465
So it seems attempt4 is the fastest one for this particular input.
Here's my Python 2/3 compatible version, since the translate API has changed.
def remove(str_, chars):
    """Removes each char in `chars` from `str_`.

    Args:
        str_: String to remove characters from
        chars: String of to-be removed characters

    Returns:
        A copy of str_ with `chars` removed

    Example:
        remove("What?!?: darn;", " ?.!:;") => 'Whatdarn'
    """
    try:
        # Python 2.x
        return str_.translate(None, chars)
    except TypeError:
        # Python 3.x
        table = {ord(char): None for char in chars}
        return str_.translate(table)
#!/usr/bin/python
import re
strs = "how^ much for{} the maple syrup? $20.99? That's[] ricidulous!!!"
print strs
nstr = re.sub(r'[?|$|.|!|a|b]', r' ', strs)  # these characters are removed; any other character can be added here
print nstr
nestr = re.sub(r'[^a-zA-Z0-9 ]', r'', nstr)  # remove the remaining special characters
print nestr
You can also use a function to substitute different kinds of regular expressions or other patterns with the use of a list. With that, you can mix regular expressions, character classes, and really basic text patterns. It's really useful when you need to substitute a lot of elements like HTML ones.
*NB: works with Python 3.x
import re  # Regular expression library

def string_cleanup(x, notwanted):
    for item in notwanted:
        x = re.sub(item, '', x)
    return x
line = "<title>My example: <strong>A text %very% $clean!!</strong></title>"
print("Uncleaned: ", line)
# Get rid of html elements
html_elements = ["<title>", "</title>", "<strong>", "</strong>"]
line = string_cleanup(line, html_elements)
print("1st clean: ", line)
# Get rid of special characters
special_chars = ["[!##$]", "%"]
line = string_cleanup(line, special_chars)
print("2nd clean: ", line)
The function string_cleanup takes your string x and your list notwanted as arguments. For each item in that list of elements or patterns, any match is substituted with the empty string.
The output:
Uncleaned: <title>My example: <strong>A text %very% $clean!!</strong></title>
1st clean: My example: A text %very% $clean!!
2nd clean: My example: A text very clean
The method I'd use probably isn't as efficient, but it is massively simple. I can remove multiple characters at different positions all at once, using slicing and formatting.
Here's an example:
words = "things"
removed = "%s%s" % (words[:3], words[-1:])
This will result in 'removed' holding the word 'this'.
Formatting can be very helpful for printing variables midway through a print string. It can insert any data type using a % followed by the variable's data type; all data types can use %s, and floats (aka decimals) and integers can use %d.
Slicing can be used for intricate control over strings. When I put words[:3], it selects the characters from the beginning of the string (the colon before the number means 'from the beginning up to') up to, but not including, index 3, i.e. the first three characters. The reason index 3 corresponds to the 4th character is because Python starts counting at 0. Then, when I put words[-1:], it means from the last character to the end of the string (the colon is behind the number). Putting -1 makes Python count from the end of the string rather than the beginning. So, words[-1:] is simply the last character of the string.
So, by cutting off the characters before the character I want to remove and the characters after and sandwiching them together, I can remove the unwanted character. Think of it like a sausage. In the middle it's dirty, so I want to get rid of it. I simply cut off the two ends I want then put them together without the unwanted part in the middle.
If I want to remove multiple consecutive characters, I simply shift the numbers around in the [] (slicing part). Or if I want to remove multiple characters from different positions, I can simply sandwich together multiple slices at once.
Examples:
words = "control"
removed = "%s%s" % (words[:2], words[-2:])
removed equals 'cool'.
words = "impacts"
removed = "%s%s%s" % (words[1], words[3:5], words[-1])
removed equals 'macs'.
In this case, [3:5] means character at position 3 through character at position 5 (excluding the character at the final position).
Remember, Python starts counting at 0, so you will need to as well.
In Python 3, for example, to remove all of the digits from a string (here, a file name being renamed):
os.rename(file_name, file_name.translate({ord(c): None for c in '0123456789'}))
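A quick look at just the translate step, with a made-up file name:
>>> "photo123.jpg".translate({ord(c): None for c in '0123456789'})
'photo.jpg'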
How about this:
def text_cleanup(text):
    new = ""
    for i in text:
        if i not in " ?.!/;:":
            new += i
    return new
Here is one without using regular expressions:
ipstring = "text with symbols!##$^&*( ends here"
opstring = ''
for i in ipstring:
    if i.isalnum() or i == ' ':
        opstring += i
print opstring
Recursive split:
s = string; chars = characters to remove
def strip(s, chars):
    if len(s) <= 1:  # <= 1 so that the empty string does not recurse forever
        return "" if s in chars else s
    return strip(s[0:int(len(s)/2)], chars) + strip(s[int(len(s)/2):len(s)], chars)
example:
print(strip("Hello!","lo")) #He!
You could use the re module's regular expression replacement. Using the ^ (negation) at the start of a character class allows you to pick exactly what you want to keep from your string.
import re
text = "This is absurd!"
text = re.sub("[^a-zA-Z]","",text) # Keeps only Alphabets
print(text)
Output to this would be "Thisisabsurd". Only things specified after the ^ symbol will appear.
# for each file in a directory, remove the digits from its file name
import os
import re

file_list = os.listdir(r"D:\Dev\Python")
for file_name in file_list:
    os.rename(file_name, re.sub(r'\d+', '', file_name))
Even the below approach works
line = "a,b,c,d,e"
alpha = list(line)
while ',' in alpha:
    alpha.remove(',')
finalString = ''.join(alpha)
print(finalString)
output: abcde
The string method replace does not modify the original string. It leaves the original alone and returns a modified copy.
What you want is something like: line = line.replace(char,'')
def replace_all(line):
    for char in line:
        if char in " ?.!/;:":
            line = line.replace(char, '')
    return line
However, creating a new string each and every time that a character is removed is very inefficient. I recommend the following instead:
def replace_all(line, baddies):
    """
    The following is documentation on how to use the function,
    without reference to the implementation details:
    For implementation notes, please see comments beginning with `#`
    in the source file.
    [*crickets chirp*]
    """
    is_bad = lambda ch, baddies=baddies: ch in baddies
    filter_baddies = lambda ch, *, is_bad=is_bad: "" if is_bad(ch) else ch
    mahp = replace_all.map(filter_baddies, line)
    return replace_all.join('', mahp)
# -------------------------------------------------
# WHY `baddies=baddies`?!?
#     `is_bad=is_bad`
# -------------------------------------------------
# Default arguments to a lambda function are evaluated
# at the time the lambda function is
# **defined**.
#
# Global variables of a lambda function
# are evaluated when the lambda function is
# **called**.
#
# The following prints "as yellow as snow":
#
#     fleece_color = "white"
#     little_lamb = lambda end: "as " + fleece_color + end
#
#     # sometime later...
#
#     fleece_color = "yellow"
#     print(little_lamb(" as snow"))
# --------------------------------------------------
replace_all.map = map
replace_all.join = str.join
If you want your string to contain only certain allowed characters, you can compare ASCII codes, as in this piece of code:
for char in s:
    if ord(char) < ord('a') or ord(char) > ord('z'):
        s = s.replace(char, "")
It will remove every character outside of a...z, including upper-case letters.

Case insensitive in python condition [duplicate]

I'm looking for ignore case string comparison in Python.
I tried with:
if line.find('mandy') >= 0:
but no success for ignore case. I need to find a set of words in a given text file. I am reading the file line by line. The word on a line can be mandy, Mandy, MANDY, etc. (I don't want to use toupper/tolower, etc.).
I'm looking for the Python equivalent of the Perl code below.
if ($line=~/^Mandy Pande:/i)
If you don't want to use str.lower(), you can use a regular expression:
import re
if re.search('mandy', 'Mandy Pande', re.IGNORECASE):
    # Is True
There's another post here. Try looking at this.
BTW, you're looking for the .lower() method:
string1 = "hi"
string2 = "HI"
if string1.lower() == string2.lower():
    print "Equals!"
else:
    print "Different!"
One can use the in operator after applying str.casefold to both strings.
str.casefold is the recommended method for use in case-insensitive comparison.
Return a casefolded copy of the string. Casefolded strings may be used for caseless matching.
Casefolding is similar to lowercasing but more aggressive because it is intended to remove all case distinctions in a string. For example, the German lowercase letter 'ß' is equivalent to "ss". Since it is already lowercase, lower() would do nothing to 'ß'; casefold() converts it to "ss".
The casefolding algorithm is described in section 3.13 of the Unicode Standard.
New in version 3.3.
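For instance, a quick illustration of the 'ß' behaviour quoted above:
>>> 'Straße'.lower() == 'strasse'
False
>>> 'Straße'.casefold() == 'strasse'
True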
For case-insensitive substring search:
needle = "TEST"
haystack = "testing"
if needle.casefold() in haystack.casefold():
    print('Found needle in haystack')
For case-insensitive string comparison:
a = "test"
b = "TEST"
if a.casefold() == b.casefold():
    print('a and b are equal, ignoring case')
Try:
if haystackstr.lower().find(needlestr.lower()) != -1:
    # True
a = "MandY"
alow = a.lower()
if "mandy" in alow:
print "true"
work around
you can also use: s.lower() in str.lower()
You can use the in operator in conjunction with the lower method of strings.
if "mandy" in line.lower():
import re
if re.search('(?i)Mandy Pande:', line):
    ...
In [14]: re.match("mandy", "MaNdY", re.IGNORECASE)
Out[14]: <_sre.SRE_Match object at 0x23a08b8>
If it is a pandas Series, you can pass case=False to str.contains:
data['Column_name'].str.contains('abcd', case=False)
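A minimal sketch with a made-up Series (the column values are hypothetical, and the printed alignment is approximate):
>>> import pandas as pd
>>> s = pd.Series(['Mandy Pande', 'no match'])
>>> s.str.contains('mandy', case=False)
0     True
1    False
dtype: bool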
Or, if it is just a comparison between two strings, try the other method below.
You can use the casefold() method. The casefold() method ignores cases when comparing.
firstString = "Hi EVERYONE"
secondString = "Hi everyone"
if firstString.casefold() == secondString.casefold():
    print('The strings are equal.')
else:
    print('The strings are not equal.')
Output:
The strings are equal.

Regular Expression (find matching characters in order)

Let us say that I have the following string variables:
welcome = "StackExchange 2016"
string_to_find = "Sx2016"
Here, I want to find the string string_to_find inside welcome using regular expressions. I want to see if each character in string_to_find comes in the same order as in welcome.
For instance, this expression would evaluate to True since the 'S' comes before the 'x' in both strings, the 'x' before the '2', the '2' before the '0', and so forth.
Is there a simple way to do this using regex?
Your answer is rather trivial. The .* character combination matches 0 or more characters. For your purpose, you would put it between all characters in there. As in S.*x.*2.*0.*1.*6. If this pattern is matched, then the string obeys your condition.
For a general string you would insert the .* pattern between characters, also taking care of escaping special characters like literal dots, stars etc. that may otherwise be interpreted by regex.
This function might fit your need
import re
def check_string(text, pattern):
    return re.match('.*'.join(pattern), text)
'.*'.join(pattern) creates a pattern with all your characters separated by '.*'. For instance:
>> ".*".join("Sx2016")
'S.*x.*2.*0.*1.*6'
Use wildcard matches with ., repeating with *:
expression = 'S.*x.*2.*0.*1.*6'
You can also assemble this expression with join():
expression = '.*'.join('Sx2016')
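Then test for a match (a quick check, reusing welcome from the question):
>>> import re
>>> bool(re.match(expression, welcome))
True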
Or just find it without a regular expression, checking whether the location of each of string_to_find's characters within welcome proceeds in ascending order, handling the case where a character in string_to_find is not present in welcome by catching the ValueError:
>>> welcome = "StackExchange 2016"
>>> string_to_find = "Sx2016"
>>> try:
...     result = [welcome.index(c) for c in string_to_find]
... except ValueError:
...     result = None
...
>>> print(result and result == sorted(result))
True
Actually, for a sequence of chars like Sx2016, the pattern that best serves your purpose is a more specific one:
S[^x]*x[^2]*2[^0]*0[^1]*1[^6]*6
You can obtain this kind of check by defining a function like this:
import re

def contains_sequence(text, seq):
    pattern = seq[0] + ''.join(map(lambda c: '[^' + c + ']*' + c, list(seq[1:])))
    return re.search(pattern, text)
This approach adds a layer of complexity but brings a couple of advantages as well:
It's the fastest one, because the regex engine walks down the string only once, while the dot-star approach goes to the end of the string and backtracks each time a .* is used. Compare on the same string (~1k chars):
Negated class -> 12 steps
Dot star -> 4426 steps
It works on multiline input strings as well.
Example code
>>> sequence = 'Sx2016'
>>> inputs = ['StackExchange2015','StackExchange2016','Stack\nExchange\n2015','Stach\nExchange\n2016']
>>> map(lambda x: x + ': yes' if contains_sequence(x,sequence) else x + ': no', inputs)
['StackExchange2015: no', 'StackExchange2016: yes', 'Stack\nExchange\n2015: no', 'Stach\nExchange\n2016: yes']

string.translate() with unicode data in python

I have 3 API's that return json data to 3 dictionary variables. I am taking some of the values from the dictionary to process them. I read the specific values that I want to the list valuelist. One of the steps is to remove the punctuation from them. I normally use string.translate(None, string.punctuation) for this process but because the dictionary data is unicode I get the error:
wordlist = [s.translate(None, string.punctuation)for s in valuelist]
TypeError: translate() takes exactly one argument (2 given)
Is there a way around this? Either by encoding the unicode or a replacement for string.translate?
The translate method works differently on Unicode objects than on byte-string objects:
>>> help(unicode.translate)
S.translate(table) -> unicode
Return a copy of the string S, where all characters have been mapped
through the given translation table, which must be a mapping of
Unicode ordinals to Unicode ordinals, Unicode strings or None.
Unmapped characters are left untouched. Characters mapped to None
are deleted.
So your example would become:
remove_punctuation_map = dict((ord(char), None) for char in string.punctuation)
word_list = [s.translate(remove_punctuation_map) for s in value_list]
Note however that string.punctuation only contains ASCII punctuation. Full Unicode has many more punctuation characters, but it all depends on your use case.
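If you do need to strip Unicode punctuation as well, one possible sketch is to build the table from the characters actually present in each string using unicodedata (the helper name here is made up; value_list is the list from the snippet above):
import unicodedata

def remove_unicode_punctuation(s):
    # every Unicode punctuation category starts with 'P' (Pc, Pd, Ps, Pe, Pi, Pf, Po)
    table = {ord(ch): None for ch in s if unicodedata.category(ch).startswith('P')}
    return s.translate(table)

word_list = [remove_unicode_punctuation(s) for s in value_list]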
I noticed that string.translate is deprecated. Since you are removing punctuation, not actually translating characters, you can use the re.sub function.
>>> import re
>>> s1="this.is a.string, with; (punctuation)."
>>> s1
'this.is a.string, with; (punctuation).'
>>> re.sub("[\.\t\,\:;\(\)\.]", "", s1, 0, 0)
'thisis astring with punctuation'
>>>
In this version you can map one set of letters to another:
def trans(to_translate):
    tabin = u'привет'
    tabout = u'тевирп'
    tabin = [ord(char) for char in tabin]
    translate_table = dict(zip(tabin, tabout))
    return to_translate.translate(translate_table)
The Python re module allows you to use a function as the replacement argument; it should take a Match object and return a suitable replacement. We may use this to build a custom character translation function:
import re

def mk_replacer(oldchars, newchars):
    """A function to build a replacement function"""
    mapping = dict(zip(oldchars, newchars))
    def replacer(match):
        """A replacement function to pass to re.sub()"""
        return mapping.get(match.group(0), "")
    return replacer
An example. Match all lower-case letters ([a-z]), translate 'h' and 'i' to 'H' and 'I' respectively, delete other matches:
>>> re.sub("[a-z]", mk_replacer("hi", "HI"), "hail")
'HI'
As you can see, it may be used with short (incomplete) replacement sets, and it may be used to delete some characters.
A Unicode example:
>>> re.sub("[\W]", mk_replacer(u'\u0435\u0438\u043f\u0440\u0442\u0432', u"EIPRTV"), u'\u043f\u0440\u0438\u0432\u0435\u0442')
u'PRIVET'
As I stumbled upon the same problem and Simon's answer was the one that helped me to solve my case, I thought of showing an easier example just for clarification:
from collections import defaultdict
And then for the translation, say you'd like to remove '#' and '\r' characters:
remove_chars_map = defaultdict()
remove_chars_map[ord('#')] = None   # translate expects the characters' ordinals as keys
remove_chars_map[ord('\r')] = None
new_string = old_string.translate(remove_chars_map)
And an example:
old_string = "word1#\r word2#\r word3#\r"
new_string = "word1 word2 word3"
'#' and '\r' removed

Python code to use a regular expression to make sure a string is alphanumeric plus . - _

I looked and searched and couldn't find what I needed although I think it should be simple (if you have any Python experience, which I don't).
Given a string, I want to verify, in Python, that it contains ONLY alphanumeric characters: a-zA-Z0-9 and . _ -
examples:
Accepted:
bill-gates
Steve_Jobs
Micro.soft
Rejected:
Bill gates -- no spaces allowed
me#host.com -- # is not alphanumeric
I'm trying to use:
if re.match("^[a-zA-Z0-9_.-]+$", username) == True:
But that doesn't seem to do the job...
re.match does not return a boolean; it returns a MatchObject on a match, or None on a non-match.
>>> re.match("^[a-zA-Z0-9_.-]+$", "hello")
<_sre.SRE_Match object at 0xb7600250>
>>> re.match("^[a-zA-Z0-9_.-]+$", " ")
>>> print re.match("^[a-zA-Z0-9_.-]+$", " ")
None
So, you shouldn't do re.match(...) == True; rather, you should be checking re.match(...) is not None in this case, which can be further shortened to just if re.match(...).
Never use == True or == False in a comparison. Many types already have a bool equivalent which you should use instead:
if re.match("^[a-zA-Z0-9_.-]+$", username):
Could also shorten it slightly to :
if re.match(r'^[\w.-]+$', username):
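A quick sanity check of that shorter pattern against the examples from the question:
>>> import re
>>> [bool(re.match(r'^[\w.-]+$', u)) for u in ('bill-gates', 'Steve_Jobs', 'Micro.soft', 'Bill gates', 'me#host.com')]
[True, True, True, False, False]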
I would consider this for a valid username:
1) Username must be 6-30 characters long
2) Username may only contain:
Uppercase and lowercase letters
Numbers from 0-9 and
Special characters _ - .
3) Username may not:
Begin or finish with characters _ - .
Have more than one sequential character _ - . inside
This would be an example of usage:
if re.match(r'^(?![-._])(?!.*[_.-]{2})[\w.-]{6,30}(?<![-._])$',username) is not None:
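For instance, a quick check of that pattern (a sketch; the second name is made up to show a rejection):
>>> import re
>>> pattern = r'^(?![-._])(?!.*[_.-]{2})[\w.-]{6,30}(?<![-._])$'
>>> re.match(pattern, 'Steve_Jobs') is not None
True
>>> re.match(pattern, '_Steve_Jobs') is not None
False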
If you are going to use many regular expressions you can compile it for speed (or readability)
import re
ALPHANUM=re.compile('^[a-zA-Z0-9_.-]+$')
for u in users:
    if ALPHANUM.match(u) is None:
        print "invalid"
From the docs:
The compiled versions of the most recent patterns passed to re.match(), re.search() or re.compile() are cached, so programs that use only a few regular expressions at a time needn’t worry about compiling regular expressions.
I do my validation this way in my utils class:
def valid_re(self, s, r):
    reg = re.compile(r)
    return reg.match(s)
Then I call the utils instance, and check this way:
if not utils.valid_re(username, r'^[a-zA-Z0-9_.-]+$'):
error = "Invalid username!"
