Python replace function [replace once] - python

I need help with a program I'm making in Python.
Assume I wanted to replace every instance of the word "steak" with "ghost" (just go with it...) but I also wanted to replace every instance of the word "ghost" with "steak" at the same time. The following code does not work:
s="The scary ghost ordered an expensive steak"
print s
s=s.replace("steak","ghost")
s=s.replace("ghost","steak")
print s
it prints: The scary steak ordered an expensive steak
What I'm trying to get is: "The scary steak ordered an expensive ghost"

I'd probably use a regex here:
>>> import re
>>> s = "The scary ghost ordered an expensive steak"
>>> sub_dict = {'ghost':'steak','steak':'ghost'}
>>> regex = '|'.join(sub_dict)
>>> re.sub(regex, lambda m: sub_dict[m.group()], s)
'The scary steak ordered an expensive ghost'
Or, as a function which you can copy/paste:
import re
def word_replace(replace_dict, s):
    regex = '|'.join(replace_dict)
    return re.sub(regex, lambda m: replace_dict[m.group()], s)
Basically, I create a mapping of words that I want to replace with other words (sub_dict). I can create a regular expression from that mapping. In this case, the regular expression is "steak|ghost" (or "ghost|steak" -- order doesn't matter) and the regex engine does the rest of the work of finding non-overlapping sequences and replacing them accordingly.
Some possibly useful modifications
regex = '|'.join(map(re.escape,replace_dict)) -- Allows the regular expressions to have special regular expression syntax in them (like parenthesis). This escapes the special characters to make the regular expressions match the literal text.
regex = '|'.join(r'\b{0}\b'.format(x) for x in replace_dict) -- make sure that we don't match if one of our words is a substring in another word. In other words, change he to she but not the to tshe.
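If you want both safeguards at once, the two modifications combine naturally. A minimal sketch (word_replace_safe is just an illustrative name; sorting longest-first keeps a shorter key from shadowing a longer one in the alternation):

import re

def word_replace_safe(replace_dict, s):
    # Escape each key so regex metacharacters match literally, and wrap it
    # in word boundaries so substrings inside longer words are left alone.
    keys = sorted(replace_dict, key=len, reverse=True)
    regex = '|'.join(r'\b{0}\b'.format(re.escape(k)) for k in keys)
    return re.sub(regex, lambda m: replace_dict[m.group()], s)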

Split the string by one of the targets, do the replace, and put the whole thing back together.
pieces = s.split('steak')
s = 'ghost'.join(piece.replace('ghost', 'steak') for piece in pieces)
This works exactly as .replace() would, including ignoring word boundaries. So it will turn "steak ghosts" into "ghost steaks".

Rename one of the words to a temp value that doesn't occur in the text. Note this wouldn't be the most efficient way for a very large text. For that a re.sub might be more appropriate.
s="The scary ghost ordered an expensive steak"
print s
s=s.replace("steak","temp")
s=s.replace("ghost","steak")
s=s.replace("temp","ghost")
print s

Use the count argument of the string.replace() method. So using your code, you would have (note this only handles a single occurrence of each word):
s="The scary ghost ordered an expensive steak"
print s
s=s.replace("steak","ghost", 1)
s=s.replace("ghost","steak", 1)
print s
http://docs.python.org/2/library/stdtypes.html

How about something like this? Store the original in a split list, then have a translation dict. Keeps your core code short, then just adjust the dict when you need to adjust the translation. Plus, easy to port to a function:
def translate_line(s, translation_dict):
    line = []
    for i in s.split():
        # To account for punctuation, strip all non-alnum characters from
        # the word before looking up the translation.
        i = ''.join(ch for ch in i if ch.isalnum())
        line.append(translation_dict.get(i, i))
    return ' '.join(line)
>>> translate_line("The scary ghost ordered an expensive steak", {'steak': 'ghost', 'ghost': 'steak'})
'The scary steak ordered an expensive ghost'
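Note that stripping the non-alphanumerics also drops punctuation from the output. A variant that preserves punctuation could lean on re instead (a sketch; translate_line_keep_punct is a hypothetical name):

import re

def translate_line_keep_punct(s, translation_dict):
    # Swap each alphanumeric run through the dict; punctuation and
    # whitespace pass through untouched.
    return re.sub(r'\w+', lambda m: translation_dict.get(m.group(), m.group()), s)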

Note: Considering the viewership of this question, I undeleted and rewrote this answer to cover different types of test cases.
I have considered four competing implementations from the answers
>>> def sub_noregex(hay):
        """
        The join-and-replace routine which outperforms the regex implementation.
        This version uses a generator expression.
        """
        return 'steak'.join(e.replace('steak','ghost') for e in hay.split('ghost'))
>>> def sub_regex(hay):
        """
        A straightforward regex implementation as suggested by #mgilson.
        Note: so that the overhead doesn't add to the cumulative sum, the
        regex creation routine is placed outside the function.
        """
        return re.sub(regex, lambda m: sub_dict[m.group()], hay)
>>> def sub_temp(hay, _uuid=str(uuid4())):
        """
        Similar to Mark Tolonen's implementation, but using a uuid for the
        temporary string value to reduce collisions.
        """
        hay = hay.replace("steak", _uuid).replace("ghost", "steak").replace(_uuid, "ghost")
        return hay
>>> def sub_noregex_LC(hay):
        """
        The join-and-replace routine which outperforms the regex implementation.
        This version uses a list comprehension.
        """
        return 'steak'.join([e.replace('steak','ghost') for e in hay.split('ghost')])
A generalized timeit function
>>> def compare(n, hay):
        foo = {"sub_regex": "re",
               "sub_noregex": "",
               "sub_noregex_LC": "",
               "sub_temp": "",
               }
        stmt = "{}(hay)"
        setup = "from __main__ import hay,"
        for k, v in foo.items():
            t = Timer(stmt=stmt.format(k), setup=setup + ','.join([k, v] if v else [k]))
            yield t.timeit(n)
And the generalized test routine
>>> def test(*args, **kwargs):
        n = kwargs['repeat']
        print "{:50}{:^15}{:^15}{:^15}{:^15}".format("Test Case", "sub_temp",
                                                     "sub_noregex ", "sub_regex",
                                                     "sub_noregex_LC ")
        for hay in args:
            hay, hay_str = hay
            print "{:50}{:15.10}{:15.10}{:15.10}{:15.10}".format(hay_str, *compare(n, hay))
And the Test Results are as follows
>>> test((' '.join(['steak', 'ghost']*1000), "Multiple repetitions of search key"),
         ('garbage '*998 + 'steak ghost', "Single repetition of search key at the end"),
         ('steak ' + 'garbage '*998 + 'ghost', "Single repetition at either end"),
         ("The scary ghost ordered an expensive steak", "Single repetition in a smaller string"),
         repeat = 100000)
Test Case                                            sub_temp      sub_noregex      sub_regex    sub_noregex_LC
Multiple repetitions of search key                 0.2022748797   0.3517142003   0.4518992298   0.1812594258
Single repetition of search key at the end         0.2026047957   0.3508259952   0.4399926194   0.1915298898
Single repetition at either end                    0.1877455356   0.3561734007   0.4228843986   0.2164233388
Single repetition in a smaller string              0.2061019057   0.3145984487   0.4252060592   0.1989413449
>>>
Based on the test results:
The non-regex LC version and the temp-variable substitution have the best performance, though the temp-variable timing is not consistent across cases.
The LC version performs better than the generator version (confirmed).
The regex version is more than two times slower (so if this piece of code is a bottleneck, the implementation choice can be reconsidered).
The regex and non-regex versions are equally robust and can scale.

Related

I want to split a string at the first occurrence of any character from a list of characters. How do I do this in Python?

Basically, I have a list of special characters. I need to split a string by a character if it belongs to this list and exists in the string. Something along the lines of:
def find_char(string):
    if string.find("some_char"):
        pass  # do xyz with some_char
    elif string.find("another_char"):
        pass  # do xyz with another_char
    else:
        return False
and so on. The way I think of doing it is:
def find_char_split(string):
    char_list = [",", "*", ";", "/"]
    for my_char in char_list:
        if string.find(my_char) != -1:
            my_strings = string.split(my_char)
            break
    else:
        my_strings = False
    return my_strings
Is there a more pythonic way of doing this? Or the above procedure would be fine? Please help, I'm not very proficient in python.
(EDIT): I want it to split on the first occurrence of whichever character is encountered first. That is to say, if the string contains multiple commas and multiple stars, and a comma comes before any star, then I want it to split on that first comma; if a star comes first, then it should split on the star.
I would favor using the re module for this because the expression for splitting on multiple arbitrary characters is very simple:
r'[,*;/]'
The brackets create a character class that matches anything inside of them. The code is like this:
import re
results = re.split(r'[,*;/]', my_string, maxsplit=1)
The maxsplit argument makes it so that the split only occurs once.
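For instance (a quick REPL check with a made-up input):

>>> import re
>>> re.split(r'[,*;/]', 'foo,bar*baz;qux', maxsplit=1)
['foo', 'bar*baz;qux']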
If you are doing the same split many times, you can compile the regex and search on that same expression a little bit faster (but see Jon Clements' comment below):
c = re.compile(r'[,*;/]')
results = c.split(my_string)
If this speed-up is important (it probably isn't), you can use the compiled version in a function instead of having it recompile every time. Then make a separate function that stores the actual compiled expression:
def split_chars(chars, maxsplit=0, flags=0, string=None):
    # see note about the + symbol below
    c = re.compile('[{}]+'.format(''.join(chars)), flags=flags)
    def f(string, maxsplit=maxsplit):
        return c.split(string, maxsplit=maxsplit)
    return f if string is None else f(string)
Then:
special_split = split_chars(',*;/', maxsplit=1)
result = special_split(my_string)
But also:
result = split_chars(',*;/', string=my_string, maxsplit=1)
The purpose of the + character is to treat multiple delimiters as one if that is desired (thank you Jon Clements). If this is not desired, you can just use re.compile('[{}]'.format(''.join(chars))) above. Note that with maxsplit=1, this will not have any effect.
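To illustrate what the + does when delimiters run together (made-up input; no maxsplit here):

>>> re.split(r'[,;]+', 'a,,;b;c')
['a', 'b', 'c']
>>> re.split(r'[,;]', 'a,,;b;c')
['a', '', '', 'b', 'c']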
Finally: have a look at this talk for a quick introduction to regular expressions in Python, and this one for a much more information packed journey.

Replace string from a huge python dictionary

I have a dictionary like this:
id_dict = {'C1001': 'John','D205': 'Ben','501': 'Rose'}
This dictionary has more than 10000 keys and values. I have to search for the keys in a report of nearly 500 words and replace them with the corresponding values.
I have to process thousands of reports within a few minutes, so speed and memory are really important for me.
This is the code I am using now:
str = "strings in the reports"
for key, value in id_dict.iteritems():
str = str.replace(key, value)
Is there any better solution than this?
Using str.replace in a loop is very inefficient. A few arguments:
when the word is replaced, a new string is allocated and the old one is discarded. If you have a lot of words, it can take ages
str.replace would replace inside of words, probably not what you want: ex: replace "nut" by "eel" changes "donut" to "doeel".
if there are a lot of words in your replacement dictionary, you loop through all of them (using a python loop, rather slow), even if the text doesn't contain any one of them.
I would use re.sub with a replacement function (as a lambda), matching a word-boundary alphanumeric string (letters or digits).
The lambda would lookup in the dictionary and return the word if found, else return the original word, replacing nothing, but since everything is done in the re module, it executes way faster.
import re
id_dict = {'C1001': 'John','D205': 'Ben','501': 'Rose'}
s = "Hello C1001, My name is D205, not X501"
result = re.sub(r"\b(\w+)\b",lambda m : id_dict.get(m.group(1),m.group(1)),s)
print(result)
prints:
Hello John, My name is Ben, not X501
(note that the last word was left unreplaced because it's only a partial match)
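Another option, not shown above, is to build one alternation out of the (escaped) keys themselves, so the engine only stops at candidate IDs rather than looking up every word. A sketch, assuming the keys only ever need to match as whole words:

import re

id_dict = {'C1001': 'John', 'D205': 'Ben', '501': 'Rose'}
# Longest keys first, so no key can be shadowed by a shorter prefix.
pattern = re.compile(r'\b(?:%s)\b' % '|'.join(
    re.escape(k) for k in sorted(id_dict, key=len, reverse=True)))
print(pattern.sub(lambda m: id_dict[m.group()], "Hello C1001, My name is D205, not X501"))
# prints: Hello John, My name is Ben, not X501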

Efficient way of matching and replacing multiple strings in python 3?

I have multiple (>30) compiled regex's
regex_1 = re.compile(...)
regex_2 = re.compile(...)
#... define multiple regex's
regex_n = re.compile(...)
I then have a function which takes a text and replaces some of its words using every one of the regex's above and the re.sub method as follows
def sub_func(text):
    text = re.sub(regex_1, "string_1", text)
    # multiple substitutions using all regexes ...
    text = re.sub(regex_n, "string_n", text)
    return text
Question: Is there a more efficient way to make these replacements?
The regex's cannot be generalized or simplified from their current form.
I feel like reassigning the value of text each time for every regex is quite slow, given that the function only replaces a word or two from the entirety of text for each reassignment. Also, given that I have to do this for multiple documents, that slows things down even more.
Thanks in advance!
Reassigning a value takes constant time in Python. Unlike in languages like C, variables are more of a "name tag". So, changing what the name tag points to takes very little time.
If they are constant strings, I would collect them into a tuple:
regexes = (
    (regex_1, 'string_1'),
    (regex_2, 'string_2'),
    (regex_3, 'string_3'),
    ...
)
And then in your function, just iterate over the list:
def sub_func_2(text):
    for regex, sub in regexes:
        text = re.sub(regex, sub, text)
    return text
But if your regexes are actually named regex_1, regex_2, etc., they probably should be directly defined in a list of some sort.
Also note, if you are doing replacements like 'cat' -> 'dog', the str.replace() method might be easier (text = text.replace('cat', 'dog')), and it will probably be faster.
If your strings are very long, re-making them from scratch once per regex might take a very long time. An implementation of #Oliver Charlesworth's method that was mentioned in the comments could be:
import re

# Instead of this:
regexes = (
    ('1(1)', '$1i'),
    ('2(2)(2)', '$1a$2'),
    ('(3)(3)3', '$1a$2')
)

# Merge the regexes:
regex = re.compile('(1(1))|(2(2)(2))|((3)(3)3)')
substitutions = (
    '{1}i', '{1}a{2}', '{1}a{2}'
)

# Keep track of how many groups are in each alternative
group_nos = (1, 2, 2)
cumulative = [1]
for i in group_nos:
    cumulative.append(cumulative[-1] + i + 1)
del i
# Pair each substitution with the span of group numbers it covers
cumulative = tuple(zip(substitutions, cumulative, cumulative[1:]))

def _sub_func(match):
    for sub, start, end in cumulative:
        if match.group(start) is not None:
            return sub.format(*map(match.group, range(start, end)))

def sub_func(text):
    return re.sub(regex, _sub_func, text)
But this breaks down if you have overlapping text that you need to substitute.
We can pass a function to the re.sub repl argument.
To simplify, say there are 3 regexes for easier understanding: assume regex_1, regex_2 and regex_3 match 111, 222 and 333 respectively. Then regex_replace is a list holding the replacement strings, in the same order as the regexes: regex_1 is replaced with 'one', regex_2 with 'two', and so on.
Not sure how much this will improve the runtime, though; give it a try.
import re
regex_x = re.compile('(111)|(222)|(333)')
regex_replace = ['one', 'two', 'three']
def sub_func(text):
    return re.sub(regex_x, lambda x: regex_replace[x.lastindex - 1], text)
>>> sub_func('testing 111 222 333')
'testing one two three'

Choose best fitting regex [duplicate]

Is it possible to match 2 regular expressions in Python?
For instance, I have a use-case wherein I need to compare 2 expressions like this:
re.match('google\.com\/maps', 'google\.com\/maps2', re.IGNORECASE)
I would expect to be returned a RE object.
But obviously, Python expects a string as the second parameter.
Is there a way to achieve this, or is it a limitation of the way regex matching works?
Background: I have a list of regular expressions [r1, r2, r3, ...] that match a string and I need to find out which expression is the most specific match of the given string. The way I assumed I could make it work was by:
(1) matching r1 with r2.
(2) then match r2 with r1.
If both match, we have a 'tie'. If only (1) worked, r1 is a 'better' match than r2 and vice-versa.
I'd loop (1) and (2) over the entire list.
I admit it's a bit to wrap one's head around (mostly because my description is probably incoherent), but I'd really appreciate it if somebody could give me some insight into how I can achieve this. Thanks!
Outside of the syntax clarification on re.match, I think I am understanding that you are struggling with taking two or more unknown (user input) regex expressions and classifying which is a more 'specific' match against a string.
Recall for a moment that a Python regex really is a type of computer program. Most modern forms, including Python's regex, are based on Perl. Perl's regex's have recursion, backtracking, and other forms that defy trivial inspection. Indeed a rogue regex can be used as a form of denial of service attack.
To see this on your own computer, try:
>>> re.match(r'^(a+)+$','a'*24+'!')
That takes about 1 second on my computer. Now increase the 24 in 'a'*24 to a slightly larger number, say 28. That takes a lot longer. Try 48... You will probably need to CTRL+C now. The time increase as the number of a's increases is, in fact, exponential.
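A rough way to watch the blow-up yourself (a timing sketch; the absolute numbers are machine-dependent):

import re
import time

for n in (20, 22, 24, 26):
    start = time.time()
    re.match(r'^(a+)+$', 'a' * n + '!')     # never matches; forces backtracking
    print n, round(time.time() - start, 2)  # time roughly quadruples per 2 extra a's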
You can read more about this issue in Russ Cox's wonderful paper 'Regular Expression Matching Can Be Simple And Fast'. Russ Cox is the Google engineer who built Google Code Search in 2006. As Cox observes, consider matching the regex 'a?'*33 + 'a'*33 against the string 'a'*99 with awk and Perl (or Python or PCRE or Java or PHP or ...): awk matches in 200 microseconds but Perl would require 10^15 years because of exponential backtracking.
So the conclusion is: it depends! What do you mean by a more specific match? Look at some of Cox's regex simplification techniques in RE2. If your project is big enough to write your own libraries (or use RE2) and you are willing to restrict the regex grammar used (i.e., no backtracking or recursive forms), I think the answer is that you would classify 'a better match' in a variety of ways.
If you are looking for a simple way to state that (regex_3 < regex_1 < regex_2) when matched against some string using Python or Perl's regex language, I think the answer is that it is very, very hard (i.e., this problem is NP-complete).
Edit
Everything I said above is true! However, here is a stab at sorting matching regular expressions based on one form of 'specific': How many edits to get from the regex to the string. The greater number of edits (or the higher the Levenshtein distance) the less 'specific' the regex is.
You be the judge if this works (I don't know what 'specific' means to you for your application):
import re

def ld(a, b):
    "Calculates the Levenshtein distance between a and b."
    n, m = len(a), len(b)
    if n > m:
        # Make sure n <= m, to use O(min(n,m)) space
        a, b = b, a
        n, m = m, n
    current = range(n+1)
    for i in range(1, m+1):
        previous, current = current, [i]+[0]*n
        for j in range(1, n+1):
            add, delete = previous[j]+1, current[j-1]+1
            change = previous[j-1]
            if a[j-1] != b[i-1]:
                change = change + 1
            current[j] = min(add, delete, change)
    return current[n]

s = 'Mary had a little lamb'
d = {}
regs = [r'.*', r'Mary', r'lamb', r'little lamb', r'.*little lamb', r'\b\w+mb',
        r'Mary.*little lamb', r'.*[lL]ittle [Ll]amb', r'\blittle\b', s, r'little']
for reg in regs:
    m = re.search(reg, s)
    if m:
        print "'%s' matches '%s' with sub group '%s'" % (reg, s, m.group(0))
        ld1 = ld(reg, m.group(0))
        ld2 = ld(m.group(0), s)
        score = max(ld1, ld2)
        print "   %i edits regex->match(0), %i edits match(0)->s" % (ld1, ld2)
        print "   score: ", score
        d[reg] = score
        print
    else:
        print "'%s' does not match '%s'" % (reg, s)

print "  ===== %s ===== === %s ===" % ('RegEx'.center(10), 'Score'.center(10))
for key, value in sorted(d.iteritems(), key=lambda (k, v): (v, k)):
    print " %22s %5s" % (key, value)
The program is taking a list of regex's and matching against the string Mary had a little lamb.
Here is the sorted ranking from "most specific" to "least specific":
   =====   RegEx    ===== ===   Score    ===
   Mary had a little lamb     0
        Mary.*little lamb     7
            .*little lamb    11
              little lamb    11
      .*[lL]ittle [Ll]amb    15
               \blittle\b    16
                   little    16
                     Mary    18
                  \b\w+mb    18
                     lamb    18
                       .*    22
This is based on the (perhaps simplistic) assumption that: a) the number of edits (the Levenshtein distance) to get from the regex itself to the matching substring is the result of wildcard expansions or replacements; and b) the edits to get from the matching substring to the initial string. The score just takes the larger of the two.
As two simple examples:
.* (or .*.* or .*?.* etc.) against any string takes a large number of edits to get to the string, in fact equal to the string length. This is the maximum possible number of edits, the highest score, and the least 'specific' regex.
The regex of the string itself against the string is as specific as possible. No edits to change one to the other resulting in a 0 or lowest score.
As stated, this is simplistic. Anchors should increase specificity but they do not in this case. Very short strings don't work because the wildcard may be longer than the string.
Edit 2
I got anchor parsing to work pretty darn well using the undocumented sre_parse module in Python. Type >>> help(sre_parse) if you want to read more...
This is the go-to worker module underlying the re module. It has been in every Python distribution since 2001, including all the Py3k versions. It may go away, but I don't think that is likely...
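To get a feel for what the parser returns, here is a quick probe (output from a Python 2 session; the exact repr differs in later versions, where the tokens print as named constants rather than strings):

>>> import sre_parse
>>> sre_parse.parse(r'^lamb$')
[('at', 'at_beginning'), ('literal', 108), ('literal', 97), ('literal', 109), ('literal', 98), ('at', 'at_end')]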
Here is the revised listing:
import re
import sre_parse

def ld(a, b):
    "Calculates the Levenshtein distance between a and b."
    n, m = len(a), len(b)
    if n > m:
        # Make sure n <= m, to use O(min(n,m)) space
        a, b = b, a
        n, m = m, n
    current = range(n+1)
    for i in range(1, m+1):
        previous, current = current, [i]+[0]*n
        for j in range(1, n+1):
            add, delete = previous[j]+1, current[j-1]+1
            change = previous[j-1]
            if a[j-1] != b[i-1]:
                change = change + 1
            current[j] = min(add, delete, change)
    return current[n]

s = 'Mary had a little lamb'
d = {}
regs = [r'.*', r'Mary', r'lamb', r'little lamb', r'.*little lamb', r'\b\w+mb',
        r'Mary.*little lamb', r'.*[lL]ittle [Ll]amb', r'\blittle\b', s, r'little',
        r'^.*lamb', r'.*.*.*b', r'.*?.*', r'.*\b[lL]ittle\b \b[Ll]amb',
        r'.*\blittle\b \blamb$', '^'+s+'$']
for reg in regs:
    m = re.search(reg, s)
    if m:
        ld1 = ld(reg, m.group(0))
        ld2 = ld(m.group(0), s)
        score = max(ld1, ld2)
        for t, v in sre_parse.parse(reg):
            if t == 'at':                      # anchor...
                if v in ('at_beginning', 'at_end'):
                    score -= 1                 # ^ or $, adjust by 1 edit
                if v == 'at_boundary':         # all other anchors are 2 chars
                    score -= 2
        d[reg] = score
    else:
        print "'%s' does not match '%s'" % (reg, s)

print
print "  ===== %s ===== === %s ===" % ('RegEx'.center(15), 'Score'.center(10))
for key, value in sorted(d.iteritems(), key=lambda (k, v): (v, k)):
    print " %27s %5s" % (key, value)
And the sorted RegExes:
   =====      RegEx      =====  === Score ===
        Mary had a little lamb        0
      ^Mary had a little lamb$        0
          .*\blittle\b \blamb$        6
             Mary.*little lamb        7
     .*\b[lL]ittle\b \b[Ll]amb       10
                    \blittle\b       10
                 .*little lamb       11
                   little lamb       11
           .*[lL]ittle [Ll]amb       15
                       \b\w+mb       15
                        little       16
                       ^.*lamb       17
                          Mary       18
                          lamb       18
                       .*.*.*b       21
                            .*       22
                         .*?.*       22
It depends on what kind of regular expressions you have; as #carrot-top suggests, if you actually aren't dealing with "regular expressions" in the CS sense, and instead have crazy extensions, then you are definitely out of luck.
However, if you do have traditional regular expressions, you might make a bit more progress. First, we could define what "more specific" means. Say R is a regular expression, and L(R) is the language generated by R. Then we might say R1 is more specific than R2 if L(R1) is a (strict) subset of L(R2) (L(R1) < L(R2)). That only gets us so far: in many cases, L(R1) is neither a subset nor a superset of L(R2), and so we might imagine that the two are somehow incomparable. As an example, when trying to match "mary had a little lamb", we might find two matching expressions: .*mary and lamb.*.
One non-ambiguous solution is to define specificity via implementation. For instance, convert your regular expression in a deterministic (implementation-defined) way to a DFA and simply count states. Unfortunately, this might be relatively opaque to a user.
Indeed, you seem to have an intuitive notion of how you want two regular expressions to compare, specificity-wise. Why not simply write down a definition of specificity, based on the syntax of regular expressions, that matches your intuition reasonably well?
Totally arbitrary rules follow:
Characters = 1.
Character ranges of n characters = n (and let's say \b = 5, because I'm not sure how you might choose to write it out long-hand).
Anchors are 5 each.
* divides its argument by 2.
+ divides its argument by 2, then adds 1.
. = -10.
Anyway, just food for thought, as the other answers do a good job of outlining some of the issues you're facing; hope it helps.
I don't think it's possible.
An alternative would be to try to calculate the number of strings of length n that the regular expression also matches. A regular expression that matches 1,000,000,000 strings of length 15 characters is less specific than one that matches only 10 strings of length 15 characters.
Of course, calculating the number of possible matches is not trivial unless the regular expressions are simple.
Option 1:
Since users are supplying the regexes, perhaps ask them to also submit some test strings which they think are illustrative of their regex's specificity. (i.e. that show their regex is more specific than a competitor's regex.) Collect all the user's submitted test strings, and then test all the regexes against the complete set of test strings.
To design a good regex, the author must have put thought into what strings match and don't match their regex, so it should be easy for them to supply good test strings.
Option 2:
You might try a Monte Carlo approach: starting with the string that both regexes match, write a generator which produces mutations of that string (permute characters, add/remove characters, etc.). If both regexes match or fail to match the same way for each mutation, then the regexes "probably tie". If one matches a mutation that the other doesn't, and vice versa, then they "absolutely tie".
But if one matches a strict superset of mutations then it is "probably less specific" than the other.
The verdict after a large number of mutations may not always be correct, but may be reasonable.
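A toy mutation generator for this Monte Carlo idea could look like the following (a sketch only; mutate is a hypothetical helper, and real use would want more edit types):

import random
import string

def mutate(s, n, seed=0):
    # Yield n random single-edit variants of s: substitute, insert or delete.
    rng = random.Random(seed)
    for _ in range(n):
        i = rng.randrange(len(s) or 1)
        op = rng.choice('sid')
        if op == 's':
            yield s[:i] + rng.choice(string.ascii_lowercase) + s[i+1:]
        elif op == 'i':
            yield s[:i] + rng.choice(string.ascii_lowercase) + s[i:]
        else:
            yield s[:i] + s[i+1:]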
Option 3:
Use ipermute or pyParsing's invert to generate strings which match each regex. This will only work on regexes that use a limited subset of regex syntax.
I think you could do it by looking at the results of matching and picking the longest match:
>>> m = re.match(r'google\.com\/maps','google.com/maps/hello')
>>> len(m.group(0))
15
>>> m = re.match(r'google\.com\/maps2','google.com/maps/hello')
>>> print (m)
None
>>> m = re.match(r'google\.com\/maps','google.com/maps2/hello')
>>> len(m.group(0))
15
>>> m = re.match(r'google\.com\/maps2','google.com/maps2/hello')
>>> len(m.group(0))
16
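Wrapped into a small helper (a sketch; most_specific is a hypothetical name, and ties are resolved arbitrarily):

import re

def most_specific(patterns, s):
    # Return the pattern whose match consumes the most of s, or None.
    best, best_len = None, -1
    for p in patterns:
        m = re.match(p, s)
        if m is not None and len(m.group(0)) > best_len:
            best, best_len = p, len(m.group(0))
    return best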
re.match('google\.com\/maps', 'google\.com\/maps2', re.IGNORECASE)
The second item to re.match() above is a string -- that's why it's not working: the regex says to match a period after google, but instead it finds a backslash. What you need to do is double up the backslashes in the regex that's being used as a regex:
import re

def compare_regexes(regex1, regex2):
    """returns regex2 if regex1 is 'smaller' than regex2
    returns regex1 if they are the same
    returns regex1 if regex1 is 'bigger' than regex2
    otherwise returns None"""
    regex1_mod = regex1.replace('\\', '\\\\')
    regex2_mod = regex2.replace('\\', '\\\\')
    if regex1 == regex2:
        return regex1
    if re.match(regex1_mod, regex2):
        return regex2
    if re.match(regex2_mod, regex1):
        return regex1
You can change the returns to whatever suits your needs best. Oh, and make sure you are using raw strings with re (r'like this', for example).
Is it possible to match 2 regular expressions in Python?
That certainly is possible. Use parenthetical match groups joined by | for alternation. If you arrange the parenthetical match groups from most specific regex to least specific, the rank in the returned tuple from m.groups() will show how specific your match is. You can also use named groups to name how specific your match is, such as s10 for a very specific match and s0 for a not so specific match.
>>> s1='google.com/maps2text'
>>> s2='I forgot my goggles at the house'
>>> s3='blah blah blah'
>>> m1=re.match(r'(^google\.com\/maps\dtext$)|(.*go[a-z]+)',s1)
>>> m2=re.match(r'(^google\.com\/maps\dtext$)|(.*go[a-z]+)',s2)
>>> m1.groups()
('google.com/maps2text', None)
>>> m2.groups()
(None, 'I forgot my goggles')
>>> patt=re.compile(r'(?P<s10>^google\.com\/maps\dtext$)|'
...                 r'(?P<s5>.*go[a-z]+)|(?P<s0>[a-z]+)')
>>> m3=patt.match(s3)
>>> m3.groups()
(None, None, 'blah')
>>> m3.groupdict()
{'s10': None, 's0': 'blah', 's5': None}
If you do not know ahead of time which regex is more specific, this is a much harder problem to solve. You want to have a look at this paper covering security of regex matches against file system names.
I realize that this is a non-solution, but as there is no unambiguous way to tell which is the "most specific match", certainly when it depends on what your users "meant", the easiest thing to do would be to ask them to provide their own priority. For example just by putting the regexes in the right order. Then you can simply take the first one that matches. If you expect the users to be comfortable with regular expressions anyway, this is maybe not too much to ask?

Splitting a string into an iterator

Does Python have a built-in (meaning in the standard libraries) way to split a string that produces an iterator rather than a list? I have in mind working on very long strings and not needing to consume most of the string.
Not directly splitting strings as such, but the re module has re.finditer() (and corresponding finditer() method on any compiled regular expression).
#Zero asked for an example:
>>> import re
>>> s = "The quick brown\nfox"
>>> for m in re.finditer('\S+', s):
... print(m.span(), m.group(0))
...
(0, 3) The
(4, 9) quick
(13, 18) brown
(19, 22) fox
Like s.Lott, I don't quite know what you want. Here is code that may help:
s = "This is a string."
for character in s:
print character
for word in s.split(' '):
print word
There are also s.index() and s.find() for finding the next character.
Later: Okay, something like this.
>>> def tokenizer(s, c):
...     i = 0
...     while True:
...         try:
...             j = s.index(c, i)
...         except ValueError:
...             yield s[i:]
...             return
...         yield s[i:j]
...         i = j + 1
...
>>> for w in tokenizer(s, ' '):
...     print w
...
This
is
a
string.
If you don't need to consume the whole string, that's because you are looking for something specific, right? Then just look for that, with re or .find() instead of splitting. That way you can find the part of the string you are interested in, and split that.
There is no built-in iterator-based analog of str.split. Depending on your needs you could make a list iterator:
iterator = iter("abcdcba".split("b"))
iterator
# <list_iterator at 0x49159b0>
next(iterator)
# 'a'
However, a tool from this third-party library likely offers what you want, more_itertools.split_at. See also this post for an example.
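For example (assuming the more-itertools package is installed; split_at works on any iterable, so with a string it yields lists of characters that you then join back up):

from more_itertools import split_at

parts = (''.join(chunk) for chunk in split_at("abcdcba", lambda ch: ch == "b"))
print(list(parts))
# ['a', 'cdc', 'a']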
Here's an isplit function, which behaves much like split - you can turn off the regex syntax with the regex argument. It uses the re.finditer function, and returns the strings "inbetween" the matches.
import re

def isplit(s, splitter=r'\s+', regex=True):
    if not regex:
        splitter = re.escape(splitter)
    start = 0
    for m in re.finditer(splitter, s):
        begin, end = m.span()
        if begin != start:
            yield s[start:begin]
        start = end
    if s[start:]:
        yield s[start:]
_examples = ['', 'a', 'a b', ' a b c ', '\na\tb ']

def test_isplit():
    for example in _examples:
        assert list(isplit(example)) == example.split(), 'Wrong for {!r}: {} != {}'.format(
            example, list(isplit(example)), example.split())
Look at itertools. It contains things like takewhile, islice and groupby that allow you to slice an iterable -- a string is iterable -- into another iterable based on either indexes or a boolean condition of sorts.
You could use something like SPARK (which has been absorbed into the Python distribution itself, though not importable from the standard library), but ultimately it uses regular expressions as well so Duncan's answer would possibly serve you just as well if it was as easy as just "splitting on whitespace".
The other, far more arduous option would be to write your own Python module in C to do it if you really wanted speed, but that's a far larger time investment of course.
