Multiple regex substitutions using a dict with regex expressions as keys - python

I want to make multiple substitutions to a string using multiple regular expressions. I also want to make the substitutions in a single pass to avoid creating multiple instances of the string.
Let's say, for the sake of argument, that I want to make the substitutions below while avoiding multiple calls to re.sub(), whether explicit or in a loop:
import re
text = "local foals drink cola"
text = re.sub("(?<=o)a", "w", text)
text = re.sub("l(?=a)", "co", text)
print(text) # "local fowls drink cocoa"
The closest solution I have found is to compile a regular expression from a dictionary of substitution targets and then to use a lambda function to replace each matched target with its value in the dictionary. However, this approach does not work when the keys use metacharacters, which strips away exactly the regex functionality this example needs.
Let me demonstrate first with an example that works without metacharacters:
import re
text = "local foals drink cola"
subs_dict = {"a":"w", "l":"co"}
subs_regex = re.compile("|".join(subs_dict.keys()))
text = re.sub(subs_regex, lambda match: subs_dict[match.group(0)], text)
print(text) # "coocwco fowcos drink cocow"
Now observe that adding the desired metacharacters to the dictionary keys results in a KeyError:
import re
text = "local foals drink cola"
subs_dict = {"(?<=o)a":"w", "l(?=a)":"co"}
subs_regex = re.compile("|".join(subs_dict.keys()))
text = re.sub(subs_regex, lambda match: subs_dict[match.group(0)], text)
KeyError: 'a'
The reason for this is that the sub() function correctly finds a match for the expression "(?<=o)a", so that expression must now be found in the dictionary to return its substitution, but the value submitted for the dictionary lookup by match.group(0) is the corresponding matched string "a", not the pattern. Searching the dictionary for match.re (i.e. the expression that produced the match) does not work either, because its value is the whole disjoint expression that was compiled from the dictionary keys (i.e. "(?<=o)a|l(?=a)").
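To see this concretely, a minimal check reusing the names from the snippet above:
import re

subs_dict = {"(?<=o)a": "w", "l(?=a)": "co"}
subs_regex = re.compile("|".join(subs_dict.keys()))

m = subs_regex.search("local foals drink cola")
print(m.group(0))    # 'a'  -- the matched text, not the pattern that produced it
print(m.re.pattern)  # '(?<=o)a|l(?=a)' -- the whole combined pattern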
EDIT: In case anyone would benefit from seeing thejonny's solution implemented with a lambda function as close to my originals as possible, it would work like this:
import re
text = "local foals drink cola"
subs_dict = {"(?<=o)a":"w", "l(?=a)":"co"}
subs_regex = re.compile("|".join("("+key+")" for key in subs_dict))
group_index = 1
indexed_subs = {}
for target, sub in subs_dict.items():
    indexed_subs[group_index] = sub
    group_index += re.compile(target).groups + 1
text = re.sub(subs_regex, lambda match: indexed_subs[match.lastindex], text)
print(text) # "local fowls drink cocoa"

If no expression you want to use matches an empty string (which is a valid assumption if you want to replace something), you can wrap each expression in a group before |-ing them together, and then check which group found a match:
(exp1)|(exp2)|(exp3)
Or maybe use named groups, so you don't have to count the subgroups inside the subexpressions.
The replacement function can then look up which group matched and choose the replacement from a list.
I came up with this implementation:
import re

def dictsub(replacements, string):
    """replacements has the form {"regex1": "replacement1", "regex2": "replacement2", ...}"""
    exprall = re.compile("|".join("(" + x + ")" for x in replacements))
    gi = 1
    replacements_by_gi = {}
    for (expr, replacement) in replacements.items():
        replacements_by_gi[gi] = replacement
        gi += re.compile(expr).groups + 1
    def choose(match):
        return replacements_by_gi[match.lastindex]
    return re.sub(exprall, choose, string)

text = "local foals drink cola"
print(dictsub({"(?<=o)a": "w", "l(?=a)": "co"}, text))
That prints local fowls drink cocoa.
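A variant of the same idea using the named groups mentioned above, as a sketch (the group names k0, k1, ... are synthetic, and the same assumption applies that no pattern matches the empty string):
import re

def dictsub_named(replacements, string):
    """Like dictsub, but each alternative gets a synthetic group name (k0, k1, ...)
    so match.lastgroup identifies the pattern that matched -- no group counting."""
    named = {"k{}".format(i): repl for i, repl in enumerate(replacements.values())}
    combined = re.compile("|".join(
        "(?P<k{}>{})".format(i, expr) for i, expr in enumerate(replacements)))
    return re.sub(combined, lambda m: named[m.lastgroup], string)

text = "local foals drink cola"
print(dictsub_named({"(?<=o)a": "w", "l(?=a)": "co"}, text))  # local fowls drink cocoa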

You could do this by keeping your key as the expected match and storing both your replacement and regex in a nested dict. Given that you're looking to match specific characters, this definition should work.
subs_dict = {"a": {'replace': 'w', 'regex': '(?<=o)a'}, 'l': {'replace': 'co', 'regex': 'l(?=a)'}}
subs_regex = re.compile("|".join([subs_dict[k]['regex'] for k in subs_dict.keys()]))
re.sub(subs_regex, lambda match: subs_dict[match.group(0)]['replace'], text)
'local fowls drink cocoa'

Related

Extract all variables from a string of Python code (regex or AST)

I want to find and extract all the variables in a string that contains Python code. I only want to extract the variables (and variables with subscripts) but not function calls.
For example, from the following string:
code = 'foo + bar[1] + baz[1:10:var1[2+1]] + qux[[1,2,int(var2)]] + bob[len("foobar")] + func() + func2 (var3[0])'
I want to extract: foo, bar[1], baz[1:10:var1[2+1]], var1[2+1], qux[[1,2,int(var2)]], var2, bob[len("foobar")], var3[0]. Please note that some variables may be "nested". For example, from baz[1:10:var1[2+1]] I want to extract baz[1:10:var1[2+1]] and var1[2+1].
The first two ideas that come to mind is to use either a regex or an AST. I have tried both but with no success.
When using a regex, in order to make things simpler, I thought it would be a good idea to first extract the "top level" variables, and then recursively the nested ones. Unfortunately, I can't even do that.
This is what I have so far:
import re

regex = r'[_a-zA-Z]\w*\s*(\[.*\])?'
for match in re.finditer(regex, code):
    print(match)
Here is a demo: https://regex101.com/r/INPRdN/2
The other solution is to use an AST, extend ast.NodeVisitor, and implement the visit_Name and visit_Subscript methods. However, this doesn't work either because visit_Name is also called for functions.
I would appreciate if someone could provide me with a solution (regex or AST) to this problem.
Thank you.
I find your question an interesting challenge, so here is some code that does what you want. Doing this with a regex alone is impossible because there are nested expressions; this solution uses a combination of regex and string manipulation to handle the nesting:
# -*- coding: utf-8 -*-
import re

RE_IDENTIFIER = r'\b[a-z]\w*\b(?!\s*[\[\("\'])'
RE_INDEX_ONLY = re.compile(r'(##)(\d+)(##)')
RE_INDEX = re.compile(r'##\d+##')


def extract_expression(string):
    """Extract all identifier and getitem expressions, in the given order."""

    def remove_brackets(text):
        # 1. handle standalone `[...]` expressions: replace them with #{#...#}#
        #    so we don't confuse them with word[...]
        pattern = r'(?<!\w)(\s*)(\[)([^\[]+?)(\])'
        # keep extracting expressions until there are none left
        while re.search(pattern, text):
            text = re.sub(pattern, r'\1#{#\3#}#', text)
        return text

    def get_ordered_subexp(exp):
        """Get the indexes of nested expressions."""
        index = int(exp.replace('#', ''))
        subexp = RE_INDEX.findall(expressions[index])
        if not subexp:
            return exp
        return exp + ''.join(get_ordered_subexp(i) for i in subexp)

    def replace_expression(match):
        """Save the expression in the list and replace it with a special key holding its index."""
        match_exp = match.group(0)
        current_index = len(expressions)
        expressions.append(None)  # placeholder, so the expression is inserted before its inner identifiers
        # if the expression contains identifiers, extract those too
        if re.search(RE_IDENTIFIER, match_exp) and '[' in match_exp:
            match_exp = re.sub(RE_IDENTIFIER, replace_expression, match_exp)
        expressions[current_index] = match_exp
        return '##{}##'.format(current_index)

    def fix_expression(match):
        """Replace the match with the corresponding expression, using the index."""
        return expressions[int(match.group(2))]

    # list that will hold the extracted expressions
    expressions = []
    string = remove_brackets(string)
    # 2. extract all expressions and keep track of their place in the original code
    pattern = r'\w+\s*\[[^\[]+?\]|{}'.format(RE_IDENTIFIER)
    # keep extracting expressions until there are none left
    while re.search(pattern, string):
        # every expression that is extracted is replaced by a special key
        string = re.sub(pattern, replace_expression, string)
        # sometimes the inside of brackets contains getitem expressions too,
        # so handle the brackets again after each extraction pass
        string = remove_brackets(string)
    # 3. build the correct result from the extracted expressions
    result = [None] * len(expressions)
    for index, exp in enumerate(expressions):
        # keep replacing special keys with the corresponding expression
        while RE_INDEX_ONLY.search(exp):
            exp = RE_INDEX_ONLY.sub(fix_expression, exp)
        # finally, don't forget about the brackets
        result[index] = exp.replace('#{#', '[').replace('#}#', ']')
    # 4. order the indexes that were extracted
    ordered_index = ''.join(get_ordered_subexp(exp) for exp in RE_INDEX.findall(string))
    # convert them to integers
    ordered_index = [int(index[1]) for index in RE_INDEX_ONLY.findall(ordered_index)]
    # 5. fix the order of expressions using the ordered indexes
    final_result = []
    for exp_index in ordered_index:
        final_result.append(result[exp_index])
    # for debugging:
    # print('final string:', string)
    # print('expressions:', expressions)
    # print('order_of_expressions:', ordered_index)
    return final_result


code = 'foo + bar[1] + baz[1:10:var1[2+1]] + qux[[1,2,int(var2)]] + bob[len("foobar")] + func() + func2 (var3[0])'
code2 = 'baz[1:10:var1[2+1]]'
code3 = 'baz[[1]:10:var1[2+1]:[var3[3+1*x]]]'
print(extract_expression(code))
print(extract_expression(code2))
print(extract_expression(code3))
Output:
['foo', 'bar[1]', 'baz[1:10:var1[2+1]]', 'var1[2+1]', 'qux[[1,2,int(var2)]]', 'var2', 'bob[len("foobar")]', 'var3[0]']
['baz[1:10:var1[2+1]]', 'var1[2+1]']
['baz[[1]:10:var1[2+1]:[var3[3+1*x]]]', 'var1[2+1]', 'var3[3+1*x]', 'x']
I tested this code on very complicated examples and it worked perfectly. Notice that the order of extraction is the same as you wanted. Hope this is what you needed.
This answer might be too late, but it is possible to do this using the third-party regex package.
import regex

code = '''foo + bar[1] + baz[1:10:var1[2+1]] +
qux[[1,2,int(var2)]] + bob[len("foobar")] + func() + func2
(var3[0])'''
p = r'(\b[a-z]\w*\b(?!\s*[\(\"])(\[(?:[^\[\]]|(?2))*\])?)'
result = regex.findall(p, code, overlapped=True)  # overlapped=True is needed to capture something inside a group, like 'var1[2+1]'
[x[0] for x in result]  # result is a list of 2-tuples, as the pattern captures two groups (see below)
Output:
['foo','bar[1]','baz[1:10:var1[2+1]]','var1[2+1]','qux[[1,2,int(var2)]]','var2','bob[len("foobar")]','var3[0]']
Pattern explanation:
(                            # 1st capturing group starts
\b[a-z]\w*\b                 # variable name, e.g. 'bar'
(?!\s*[\(\"])                # negative lookahead, to ignore something like 'foobar"'
(\[(?:[^\[\]]|(?2))*\])      # 2nd capturing group, captures nested groups in '[ ]',
                             # e.g. '[1:10:var1[2+1]]';
                             # '?2' refers to the 2nd capturing group recursively
?                            # the 2nd capturing group is optional, so as to capture something like 'foo'
)                            # end of 1st group
Regex alone is not a powerful enough tool for this. If your nesting has a finite depth there are hacky workarounds that would let you build a complicated regex for what you are looking for, but I would not recommend it.
This question is asked a lot, and the linked response is famous for demonstrating the difficulty of what you are trying to do.
If you really must parse a string of code, an AST would technically work, but I am not aware of a library to help with this. You would be best off building a recursive function to do the parsing.
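For completeness, a sketch of that AST route (assuming Python 3.8+ for ast.get_source_segment): the fix for visit_Name also seeing function names is to skip any Name that is only the func of an ast.Call, and, to match the expected output, any name that is just the base of a subscript.
import ast

def extract_variables(code):
    """Collect variables and subscript expressions from a snippet of Python code,
    skipping names used purely as the function part of a call (a sketch)."""
    tree = ast.parse(code)
    skip = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            skip.add(id(node.func))       # 'func' in func(...), 'len' in len(...)
        elif isinstance(node, ast.Subscript):
            skip.add(id(node.value))      # the bare 'bar' inside 'bar[1]'
    found = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.Name, ast.Subscript)) and id(node) not in skip:
            found.append((node.lineno, node.col_offset, ast.get_source_segment(code, node)))
    return [text for _, _, text in sorted(found)]

code = 'foo + bar[1] + baz[1:10:var1[2+1]] + qux[[1,2,int(var2)]] + bob[len("foobar")] + func() + func2 (var3[0])'
print(extract_variables(code))
# ['foo', 'bar[1]', 'baz[1:10:var1[2+1]]', 'var1[2+1]', 'qux[[1,2,int(var2)]]', 'var2', 'bob[len("foobar")]', 'var3[0]']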

Replace named captured groups with arbitrary values in Python

I need to replace the value inside a capture group of a regular expression with some arbitrary value; I've had a look at the re.sub, but it seems to be working in a different way.
I have a string like this one :
s = 'monthday=1, month=5, year=2018'
and I have a regex matching it with captured groups like the following :
regex = re.compile('monthday=(?P<d>\d{1,2}), month=(?P<m>\d{1,2}), year=(?P<Y>20\d{2})')
now I want to replace the group named d with aaa, the group named m with bbb and group named Y with ccc, like in the following example :
'monthday=aaa, month=bbb, year=ccc'
basically I want to keep all the non matching string and substitute the matching group with some arbitrary value.
Is there a way to achieve the desired result ?
Note
This is just an example; I could have other input regexes with a different structure, but the same named capturing groups ...
Update
Since it seems like most of the people are focusing on the sample data, I add another sample, let's say that I have this other input data and regex :
input = '2018-12-12'
regex = '((?P<Y>20\d{2})-(?P<m>[0-1]?\d)-(?P<d>\d{2}))'
As you can see, I still have the same number of capturing groups (3) and they are named the same way, but the structure is totally different... What I need, though, is the same as before: replacing each capturing group with some arbitrary text:
'ccc-bbb-aaa'
Replace the capture group named Y with ccc, the capture group named m with bbb and the capture group named d with aaa.
In case regexes are not the best tool for the job, I'm open to other proposals that achieve my goal.
This is a completely backwards use of regex. The point of capture groups is to hold text you want to keep, not text you want to replace.
Since you've written your regex the wrong way, you have to do most of the substitution operation manually:
"""
Replaces the text captured by named groups.
"""
def replace_groups(pattern, string, replacements):
pattern = re.compile(pattern)
# create a dict of {group_index: group_name} for use later
groupnames = {index: name for name, index in pattern.groupindex.items()}
def repl(match):
# we have to split the matched text into chunks we want to keep and
# chunks we want to replace
# captured text will be replaced. uncaptured text will be kept.
text = match.group()
chunks = []
lastindex = 0
for i in range(1, pattern.groups+1):
groupname = groupnames.get(i)
if groupname not in replacements:
continue
# keep the text between this match and the last
chunks.append(text[lastindex:match.start(i)])
# then instead of the captured text, insert the replacement text for this group
chunks.append(replacements[groupname])
lastindex = match.end(i)
chunks.append(text[lastindex:])
# join all the junks to obtain the final string with replacements
return ''.join(chunks)
# for each occurence call our custom replacement function
return re.sub(pattern, repl, string)
>>> replace_groups(regex, s, {'d': 'aaa', 'm': 'bbb', 'Y': 'ccc'})
'monthday=aaa, month=bbb, year=ccc'
You can use string formatting with a regex substitution:
import re
s = 'monthday=1, month=5, year=2018'
s = re.sub('(?<=\=)\d+', '{}', s).format(*['aaa', 'bbb', 'ccc'])
Output:
'monthday=aaa, month=bbb, year=ccc'
Edit: given an arbitrary input string and regex, you can use formatting like so:
input = '2018-12-12'
regex = '((?P<Y>20\d{2})-(?P<m>[0-1]?\d)-(?P<d>\d{2}))'
new_s = re.sub(regex, '{}', input).format(*["aaa", "bbb", "ccc"])
An extended Python 3.x solution, on an extended example (re.sub() with a replacement function):
import re

d = {'d': 'aaa', 'm': 'bbb', 'Y': 'ccc'}  # predefined dict of replacement words
pat = re.compile(r'(monthday=)(?P<d>\d{1,2})|(month=)(?P<m>\d{1,2})|(year=)(?P<Y>20\d{2})')

def repl(m):
    pair = next(t for t in m.groupdict().items() if t[1])
    k = next(filter(None, m.groups()))  # preceding `key` of the currently replaced sequence (i.e. 'monthday=', 'month=' or 'year=')
    return k + d.get(pair[0], '')

s = 'Data: year=2018, monthday=1, month=5, some other text'
result = pat.sub(repl, s)
print(result)
The output:
Data: year=ccc, monthday=aaa, month=bbb, some other text
For Python 2.7 :
change the line k = next(filter(None, m.groups())) to:
k = filter(None, m.groups())[0]
I suggest you use a loop:
import re

regex = re.compile(r'monthday=(?P<d>\d{1,2}), month=(?P<m>\d{1,2}), year=(?P<Y>20\d{2})')
s = 'monthday=1, month=1, year=2017 \n'
s += 'monthday=2, month=2, year=2019'
regex_as_str = 'monthday={d}, month={m}, year={Y}'
matches = [match.groupdict() for match in regex.finditer(s)]
for match in matches:
    s = s.replace(
        regex_as_str.format(**match),
        regex_as_str.format(**{'d': 'aaa', 'm': 'bbb', 'Y': 'ccc'})
    )
You can do this multiple times with your different regex patterns, or you can join ("or") both patterns together.
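A sketch of that joined-pattern idea, using the two regexes from the question with distinct group names per branch (older versions of re reject duplicate group names) and a replacement callback that rebuilds the match around whichever groups participated:
import re

combined = re.compile(
    r'monthday=(?P<d1>\d{1,2}), month=(?P<m1>\d{1,2}), year=(?P<Y1>20\d{2})'
    r'|(?P<Y2>20\d{2})-(?P<m2>[0-1]?\d)-(?P<d2>\d{2})'
)
subs = {'d': 'aaa', 'm': 'bbb', 'Y': 'ccc'}

def repl(match):
    # rebuild the matched text, swapping each participating group for its replacement
    spans = sorted((match.span(name), name)
                   for name, value in match.groupdict().items() if value is not None)
    text, out, last = match.group(0), [], match.start()
    for (start, end), name in spans:
        out.append(text[last - match.start():start - match.start()])
        out.append(subs[name[0]])  # 'd1'/'d2' -> 'd', and so on
        last = end
    out.append(text[last - match.start():])
    return ''.join(out)

s = 'monthday=1, month=5, year=2018 and 2018-12-12'
print(combined.sub(repl, s))  # monthday=aaa, month=bbb, year=ccc and ccc-bbb-aaa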

Efficient way of matching and replacing multiple strings in python 3?

I have multiple (>30) compiled regexes
regex_1 = re.compile(...)
regex_2 = re.compile(...)
#... define multiple regex's
regex_n = re.compile(...)
I then have a function which takes a text and replaces some of its words using every one of the regexes above and the re.sub method, as follows:
def sub_func(text):
    text = re.sub(regex_1, "string_1", text)
    # multiple substitutions using all regexes ...
    text = re.sub(regex_n, "string_n", text)
    return text
Question: Is there a more efficient way to make these replacements?
The regexes cannot be generalized or simplified from their current form.
I feel like reassigning the value of text each time for every regex is quite slow, given that the function only replaces a word or two from the entirety of text for each reassignment. Also, given that I have to do this for multiple documents, that slows things down even more.
Thanks in advance!
Reassigning a value takes constant time in Python. Unlike in languages like C, variables are more of a "name tag". So, changing what the name tag points to takes very little time.
If they are constant strings, I would collect them into a tuple:
regexes = (
    (regex_1, 'string_1'),
    (regex_2, 'string_2'),
    (regex_3, 'string_3'),
    ...
)
And then in your function, just iterate over the list:
def sub_func_2(text):
    for regex, sub in regexes:
        text = re.sub(regex, sub, text)
    return text
But if your regexes are actually named regex_1, regex_2, etc., they probably should be directly defined in a list of some sort.
Also note, if you are doing replacements like 'cat' -> 'dog', the str.replace() method might be easier (text = text.replace('cat', 'dog')), and it will probably be faster.
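A minimal sketch of that plain-string case (the cat/dog pairs are made up):
# Chained str.replace() for plain-string targets; only suitable when none of the
# targets are regular expressions.
plain_replacements = [('cat', 'dog'), ('mouse', 'rat')]

def sub_plain(text):
    for old, new in plain_replacements:
        text = text.replace(old, new)
    return text

print(sub_plain('the cat chased the mouse'))  # the dog chased the rat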
If your strings are very long, re-making them from scratch with that many regexes might take a very long time. An implementation of @Oliver Charlesworth's method that was mentioned in the comments could be:
import re

# Instead of this:
regexes = (
    ('1(1)', '$1i'),
    ('2(2)(2)', '$1a$2'),
    ('(3)(3)3', '$1a$2')
)

# Merge the regexes:
regex = re.compile('(1(1))|(2(2)(2))|((3)(3)3)')
substitutions = (
    '{1}i', '{1}a{2}', '{1}a{2}'
)

# Keep track of how many groups are in each alternative
group_nos = (1, 2, 2)
cumulative = [1]
for i in group_nos:
    cumulative.append(cumulative[-1] + i + 1)
del i
# pair each substitution with the group index range of its alternative
cumulative = tuple(zip(substitutions, cumulative, cumulative[1:]))

def _sub_func(match):
    for sub, start, end in cumulative:
        if match.group(start) is not None:
            return sub.format(*map(match.group, range(start, end)))

def sub_func(text):
    return re.sub(regex, _sub_func, text)
But this breaks down if you have overlapping text that you need to substitute.
We can pass a function as the repl argument of re.sub.
Simplified to 3 regexes for easier understanding: assume regex_1, regex_2 and regex_3 are 111, 222 and 333 respectively. Then regex_replace is the list holding the replacement strings, in the same order as regex_1, regex_2 and regex_3: regex_1 will be replaced with 'one', regex_2 with 'two', and so on.
Not sure how much this will improve the runtime though; give it a try.
import re

regex_x = re.compile('(111)|(222)|(333)')
regex_replace = ['one', 'two', 'three']

def sub_func(text):
    return re.sub(regex_x, lambda x: regex_replace[x.lastindex - 1], text)

>>> sub_func('testing 111 222 333')
'testing one two three'

python replace multiple words retaining case

I'm trying to write a filter in django that highlights words based on a search query. For example, if my string contains this is a sample string that I want to highlight using my filter and my search stubs are sam and ring, my desired output would be:
this is a <mark>sam</mark>ple st<mark>ring</mark> that I want to highlight using my filter
I'm using the answer from here to replace multiple words. I've presented the code below:
import re
words = search_stubs.split()
rep = dict((re.escape(k), '<mark>%s</mark>'%(k)) for k in words)
pattern = re.compile('|'.join(rep.keys()))
text = pattern.sub(lambda m : rep[re.escape(m.group(0))], text_to_replace)
However, this breaks when the case differs. For example, if I have the string Check highlight function and my search stub contains check, no replacement is made.
The desired output in this case would naturally be:
<mark>Check</mark> highlight function
You don't need a dictionary here. (?i), the case-insensitive modifier, enables a case-insensitive match.
>>> s = "this is a sample string that I want to highlight using my filter"
>>> l = ['sam', 'ring']
>>> re.sub('(?i)(' + '|'.join(map(re.escape, l)) + ')', r'<mark>\1</mark>', s)
'this is a <mark>sam</mark>ple st<mark>ring</mark> that I want to highlight using my filter'
Example 2:
>>> s = 'Check highlight function'
>>> l = ['check']
>>> re.sub('(?i)(' + '|'.join(map(re.escape, l)) + ')', r'<mark>\1</mark>', s)
'<mark>Check</mark> highlight function'
The simple way to do this is to not try to build a dict mapping every single word to its marked-up equivalent, and just use a capturing group and a reference to it. Then you can just use the IGNORECASE flag to do a case-insensitive search.
pattern = re.compile('({})'.format('|'.join(map(re.escape, words))),
                     re.IGNORECASE)
text = pattern.sub(r'<mark>\1</mark>', text_to_replace)
For example, if text_to_replace were:
I am Sam. Sam I am. I will not eat green eggs and spam.
… then text will be:
I am <mark>Sam</mark>. <mark>Sam</mark> I am. I will not eat green eggs and spam
If you really did want to do it your way, you could. For example:
text = pattern.sub(lambda m: rep[re.escape(m.group(0).lower())].replace(m.group(0).lower(), m.group(0)),
                   text_to_replace)
But that would be kind of silly. You're building a dict with 'sam' embedded in the value, just so you can replace that 'sam' with the 'Sam' that you actually matched.
See Grouping in the Regular Expression HOWTO for more on groups and references, and the re.sub docs for specifics on using references in substitutions.

Smart pythonic way of removing if elif on regular expressions

I have a series of regular expressions that are checked in order: I need to check the first one, then the second, then the third, and so on right through to the end. I need to do some processing on the matched string, so I'm trying to avoid too much logic, but in Python, unlike Perl, I don't think I can perform an assignment inside the if-elif-elif... blocks, so I end up doing an assignment, then checking for a match, then getting the results of that match. For example:
m = re.search(patternA, string)
if m:
    stripped = m.group(0)
    xyz = stripped[45:67]
else:
    m = re.search(patternB, string)
    if m:
        stripped = m.group(0)
        abc = stripped[5:7]
    else:
        m = re.search(patternC, string)
        if m:
            stripped = m.group(0)
            txt = stripped[4:5]
        else:
            ......
Ideally I'd like to find a better structure that ensures I preserve the ordering of the tested regular expressions, and also that I can incorporate the assignment into the if-then statements. So for example:
if (m = re.search(patternA, string)):
    stripped = m.group(0)
    xyz = stripped[45:67]
elif (m = re.search(patternB, string)):
    stripped = m.group(0)
    abc = stripped[5:7]
...
What is the most pythonic way of dealing with this? Thanks.
The use case is to read old data - very old data. However each string may include information about particular values and these are only present if the regular expression matches a particular pattern. So the variables extracted are highly dependent upon what matches.
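On Python 3.8 and later, assignment expressions (the := operator) make essentially the structure sketched in the question legal; a minimal sketch with made-up patterns:
import re

# patternA, patternB and the slices here are illustrative stand-ins
patternA = r"A=(\w+)"
patternB = r"B=(\w+)"
string = "some old data with B=value in it"

if m := re.search(patternA, string):
    xyz = m.group(1)
elif m := re.search(patternB, string):
    abc = m.group(1)
    print(abc)  # value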
for (pattern, slice) in zip([patternA, patternB, patternC],
                            [slice(45, 67), slice(5, 7), slice(4, 5)]):
    m = re.search(pattern, string)
    if m:
        value = m.group(0)[slice]
        break
else:
    # Handle no match found for any pattern here
    pass
This iterates over pairs of regular expressions and the relevant portion of their match until a match is found. If there is no match found, the else clause of the for loop will execute. The result of the match is found in value after the loop, regardless of which pattern matches.
Having different variables set based on which "branch" succeeds is not a great idea, since you won't necessarily know which variables are set at any given time. A dictionary would be a better idea if you really want separate labels for each match, since you can query which key or keys are set in a dictionary.
value = {}
for (pattern, slice, key) in zip([patternA, patternB, patternC],
                                 [slice(45, 67), slice(5, 7), slice(4, 5)],
                                 ['xyz', 'abc', 'txt']):
    m = re.search(pattern, string)
    if m:
        value[key] = m.group(0)[slice]
        break
The general idea, though, is to note that your chain of if statements is like a hard-coded iteration, so you just need to identify which parts of each if/elif clause vary from the preceding ones, and create a list that you can iterate over instead.
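For example, the same generalization with a per-pattern handler function instead of just a slice, for when the post-processing differs by more than the indices (a sketch; the patterns and handlers are made up):
import re

handlers = [
    (re.compile(r"A:(\w+)"), lambda m: ("xyz", m.group(1).upper())),
    (re.compile(r"B:(\w+)"), lambda m: ("abc", m.group(1)[:2])),
]

def first_match(string):
    for pattern, handle in handlers:
        m = pattern.search(string)
        if m:
            return handle(m)
    return None

print(first_match("B:example"))  # ('abc', 'ex')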
