I created two regexes (re1 and re2). If I compile the first regex (re1), it takes about 30 seconds to find all matches, but if I compile the second regex (re2), it takes about 1 second to find all matches.
Can you help me find the difference, or what is causing this problem?
Thanks!
import re
data = b'000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000'
re1 = b'.*00.*00.*00.*00.*76.*62.*55.*75.*'
re2 = b'.*aa.*00.*00.*6f.*63.*20.*6d.*75.*'
reg = re.compile(b'^%s$' % re1, re.RegexFlag.M)
results = len(reg.findall(data))
print(results)
The problem comes from backtracking in the CPython regex engine (as emphasized by @Thefourthbird). More specifically, it comes from the first and the last .*, which are not needed if the data does not contain newline characters. Indeed, in that case findall will either find exactly one match (all of data, due to the .*) or nothing, so you do not need findall: a search is enough. Moreover, using ^ and $ with a .* prefix and suffix is not useful either. The following code should produce the same result but is 20 times faster on my machine (still not very fast given the input size).
import re
data = b'000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000'
re1 = b'00.*00.*00.*00.*76.*62.*55.*75'  # leading/trailing .* removed
re2 = b'aa.*00.*00.*6f.*63.*20.*6d.*75'
reg = re.compile(re1, re.RegexFlag.M)
results = 1 if reg.search(data) else 0   # a single search replaces findall
print(results)
If the data does contain newline characters, then it is a bit more complex, as only the line containing the pattern will be matched. Indeed, . does not match a newline character (since RegexFlag.DOTALL is not set). One solution is to split the data into lines first and then apply the regex to each line. Another is to use the code above but replace the search line with a finditer, track the location of the matches, and then recover the beginning and end of the line around each match.
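Here is a minimal sketch of both options (the sample data and the line-recovery logic are mine, not from the original answer):
import re

data = b'line one\n00aa00bb00cc00dd76ee62ff55gg75\nline three'
reg = re.compile(b'00.*00.*00.*00.*76.*62.*55.*75')

# Option 1: split into lines, then run a plain search on each line.
matching_lines = [ln for ln in data.split(b'\n') if reg.search(ln)]
print(matching_lines)

# Option 2: search the whole buffer, then recover the enclosing line.
for m in reg.finditer(data):
    start = data.rfind(b'\n', 0, m.start()) + 1  # just after the previous newline
    end = data.find(b'\n', m.end())              # next newline, or -1 at the end
    end = len(data) if end == -1 else end
    print(data[start:end])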
If you want to know more about why backtracking causes such a slow execution you can look at this post: Why can regular expressions have an exponential running time?.
Note that re1 is quite a pathological regex for this data. There are much faster regex engines that could compute it. Google's RE2 is a linear-time regex engine that should be much faster in such pathological cases (but generally not faster on non-pathological data). There is also Intel's Hyperscan regex engine, which is generally very fast compared to other engines (although a bit less user-friendly).
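For illustration, a search with RE2 might look like the snippet below. This is a sketch: the module name re2 and its re-like API come from the google-re2 Python binding (pip install google-re2), which is an assumption on my part, not something shown in the original answer.
import re2  # assumed: the google-re2 binding, which mirrors the re API

data = b'0' * 108
reg = re2.compile(b'00.*00.*00.*00.*76.*62.*55.*75')
print(1 if reg.search(data) else 0)  # RE2's running time is linear in len(data)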
Related
I have a massive string. It looks something like this:
hej34g934gj93gh398gie foo#bar.com e34y9u394y3h4jhhrjg bar#foo.com hge98gej9rg938h9g34gug
Except that it's much longer (1,000,000+ characters).
My goal is to find all the email addresses in this string.
I've tried a number of solutions, including this one:
#matches foo#bar.com and bar#foo.com
re.findall(r'[\w\.-]{1,100}#[\w\.-]{1,100}', line)
Although the above code technically works, it takes an insane amount of time to execute. I'm not sure if it counts as catastrophic backtracking or if it's just really inefficient, but whatever the case, it's not good enough for my use case.
I suspect that there's a better way to do this. For example, if I use this regex to only search for the latter part of the email addresses:
#matches #bar.com and #foo.com
re.findall(r'#[\w-]{1,256}[\.]{1}[a-z.]{1,64}', line)
It executes in just a few milliseconds.
I'm not familiar enough with regex to write the rest, but I assume that there's some way to find the #x.x part first and then check the first part afterwards? If so, then I'm guessing that would be a lot quicker.
You can use the PyPI regex module by Matthew Barnett, which is much more powerful and stable when it comes to parsing long texts. This regex library has some basic checks for pathological cases built in. The library author mentions in his post:
The internal engine no longer interprets a form of bytecode but instead follows a linked set of nodes, and it can work breadth-wise as well as depth-first, which makes it perform much better when faced with one of those 'pathological' regexes.
However, there is yet another trick you can use in your regex: Python re (and regex, too) optimizes matching at word-boundary locations. So, if your pattern is supposed to match at a word boundary, always start the pattern with it. In your case, r'\b[\w.-]{1,100}#[\w.-]{1,100}' or r'\b\w[\w.-]{0,99}#[\w.-]{1,100}' should also work much better than the original pattern without a word boundary.
Python test:
import re, regex, timeit
text='your_long_string'
re_pattern=re.compile(r'\b\w[\w.-]{0,99}#[\w.-]{1,100}')
regex_pattern=regex.compile(r'\b\w[\w.-]{0,99}#[\w.-]{1,100}')
timeit.timeit("p.findall(text)", 'from __main__ import text, re_pattern as p', number=100000)
# => 6034.659449000001
timeit.timeit("p.findall(text)", 'from __main__ import text, regex_pattern as p', number=100000)
# => 218.1561693
Don't use a regex on the whole string. Regexes are slow, and avoiding them is your best bet for better overall performance.
My first approach would look like this (a sketch follows the list):
Split the string on spaces.
Filter the result down to the parts that contain #.
Create a pre-compiled regex.
Use regex on the remaining parts only to remove false positives.
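A minimal sketch of that approach, reusing the '#'-style addresses from the question (the variable names are mine):
import re

text = 'hej34g934gj93gh398gie foo#bar.com e34y9u394y3h4jhhrjg bar#foo.com hge98gej9rg938h9g34gug'

# Pre-compiled regex, applied only to the short candidate tokens.
email_re = re.compile(r'^[\w.-]{1,100}#[\w.-]{1,100}$')

candidates = (part for part in text.split() if '#' in part)  # cheap substring filter
emails = [part for part in candidates if email_re.match(part)]
print(emails)  # ['foo#bar.com', 'bar#foo.com']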
Another idea (sketched after the list):
In a loop:
use .index("#") to find the position of the next candidate
extend e.g. 100 characters to the left and 50 to the right to cover name and domain
adapt the range depending on the last email address you found so you don't overlap
check the range with a regex; if it matches, yield the match
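A rough sketch of that index-based scan, assuming the same '#'-separated address format (the window sizes follow the list above; the helper function is my own):
import re

email_re = re.compile(r'[\w.-]{1,100}#[\w.-]{1,100}')

def find_emails(text):
    last_end = 0  # end of the previous match, so windows don't overlap
    pos = 0
    while True:
        try:
            at = text.index('#', pos)        # position of the next candidate
        except ValueError:
            return                           # no more '#' in the text
        start = max(last_end, at - 100)      # up to 100 chars left of '#'
        window = text[start:at + 50]         # up to 50 chars right of '#'
        m = email_re.search(window)
        if m:
            yield m.group()
            last_end = start + m.end()
        pos = at + 1

text = 'junk foo#bar.com junk bar#foo.com junk'
print(list(find_emails(text)))  # ['foo#bar.com', 'bar#foo.com']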
This code snippet intends to search for a regex match on each line of the supplied file. re.search() hangs on a line in the file that contains the "#" character 3e+5 times.
What could be the solution to this problem?
import re
print "Started..."
exp = r"(.*)\$\$\$Uniqueterm:(.*)"
with open("sample.txt", 'r') as file:
    for line in file:
        if re.search(exp, line):
            print "Found match: " + re.search(exp, line).groups()[1].strip()
print "File finished..."
Sample input file (sample.txt):
abc
pqr
##### (3e+5 times '#' in a single line)
xyz
$$$Uniqueterm: Match it
qaz
Expected Output:
Match it
You're using re.search with a regex that starts with (.*). re.search looks for a match at any starting position, meaning it has to start a search from every possible starting index until it finds a match or runs out of positions to search from. The leading (.*) forces a scan of the entire string starting from the search start position, for every starting position.
It's classic catastrophic backtracking, just with part of the backtracking implicit in the use of re.search instead of built into the regex itself. You could adjust the regex to eliminate the catastrophic backtracking, but why use a regex at all? Basic methods like str.split or str.find can do the job just fine. Jean-François Fabre's answer shows one way to do it.
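For illustration, dropping the leading (.*) removes the per-position rescan; a sketch (the reduced pattern is my suggestion, not from the original answer):
import re

# Without a leading (.*): re.search already tries every start position itself,
# so the pattern only needs to describe the interesting part of the line.
exp = re.compile(r"\$\$\$Uniqueterm:(.*)")

line = "xyz $$$Uniqueterm: Match it"
m = exp.search(line)
if m:
    print(m.group(1).strip())  # -> Match it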
The regex engine can reach high complexities, especially when it must backtrack, as in your case.
So when the expression to search for is long and contains groups that must be computed with a lot of trial and error (i.e. backtracking), the search can take ages (see a famous example of a regex failure on the Stack Overflow network):
On July 20, 2016 we experienced a 34 minute outage starting at 14:44 UTC. It took 10 minutes to identify the cause, 14 minutes to write the code to fix it, and 10 minutes to roll out the fix to a point where Stack Overflow became available again.
The direct cause was a malformed post that caused one of our regular expressions to consume high CPU on our web servers. The post was in the homepage list, and that caused the expensive regular expression to be called on each home page view.
...
This regular expression has been replaced with a substring function.
I'd propose a workaround, since you don't really need a regex here: str.split would do and is fast, since it just searches for the substring (an O(n) approach) and then creates two strings, which is equivalent to what you're trying to do with regular expressions:
a = "foo$$$Uniqueterm:bar"
g1,g2 = a.split("$$$Uniqueterm:")
print(g1,g2)
Result:
foo bar
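If the term might be missing from a line, str.partition is a safer variant (my suggestion, not part of the original answer), since split would raise a ValueError on unpacking:
a = "foo$$$Uniqueterm:bar"
before, sep, after = a.partition("$$$Uniqueterm:")
if sep:                   # sep is '' when the term is absent
    print(before, after)  # -> foo bar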
import re
Regex_Pattern = r"^\d\w{4}\.$"                    # first pattern
Regex_Pattern = r"^[0-9][a-zA-Z0-9_]{4}[.]{1}$"   # second pattern (overwrites the first)
print(str(bool(re.search(Regex_Pattern, raw_input()))).lower())
I get different execution times for the two regexes; the first one executes faster than the second.
Input: 0qwer.
Testing your regexes on the regex101 site reveals that both of them take 6 steps to find a match, so they're effectively the same. The reason the first runs faster than the second probably has to do with the mere fact that the first pattern string is shorter: a bit less time to parse and compile.
Try compiling the regexes separately first, creating a pattern object with sequence = re.compile(r'regexhere', flags), and then calling its search method: sequence.search(test_subject).
Also, \d and [0-9] are equivalent, as are \w and [a-zA-Z0-9_]. The only remaining difference is [.]{1} (where {1} is redundant) versus \., which doesn't make much difference in terms of runtime.
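A quick way to check this yourself (my own timing sketch; the iteration count is arbitrary):
import re
import timeit

p1 = re.compile(r"^\d\w{4}\.$")
p2 = re.compile(r"^[0-9][a-zA-Z0-9_]{4}[.]{1}$")

# With compilation hoisted out of the timed code, the two patterns
# should run in essentially the same time.
print(timeit.timeit(lambda: p1.search("0qwer."), number=100000))
print(timeit.timeit(lambda: p2.search("0qwer."), number=100000))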
Regarding regex (specifically Python re): if we ignore the way the expression is written, is the length of the text the only factor in the time required to process the document? Or are there other factors (like how the text is structured) that play important roles too?
One important consideration can also be whether the text actually matches the regular expression. Take (as a contrived example) the regex (x+x+)+y from this regex tutorial.
When applied to xxxxxxxxxxy it matches, taking the regex engine 7 steps. When applied to xxxxxxxxxx, it fails (of course), but it takes the engine 2558 steps to arrive at this conclusion.
For xxxxxxxxxxxxxxy vs. xxxxxxxxxxxxxx it's already 7 vs 40958 steps, and so on exponentially...
This happens especially easily with nested repetitions or regexes where the same text can be matched by two or more different parts of the regex, forcing the engine to try all permutations before being able to declare failure. This is then called catastrophic backtracking.
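A quick way to see the blow-up yourself (my own timing sketch; the sizes are kept small so the failing case still finishes):
import re
import timeit

pattern = re.compile(r'(x+x+)+y')

for n in range(18, 24):
    s = 'x' * n  # all x's and no 'y', so every match attempt must fail
    t = timeit.timeit(lambda: pattern.search(s), number=1)
    print(n, t)  # the time roughly doubles with each additional 'x'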
Both the length of the text and its contents are important.
As an example the regular expression a+b will fail to match quickly on a string containing one million bs but more slowly on a string containing one million as. This is because more backtracking will be required in the second case.
import timeit
x = "re.search('a+b', s)"
print(timeit.timeit(x, "import re; s='a'*10000", number=10))
print(timeit.timeit(x, "import re; s='b'*10000", number=10))
Results:
6.85791902323
0.00795443275612
Refactoring a regex into a multi-level trie covers 95% of the 800% increase in performance. The other 5% involves factoring the alternatives not only to facilitate the trie but also to enhance it, for a possible 30x performance boost.
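To illustrate what trie factoring means here (a toy example of mine, not from the quoted answer): a flat alternation makes the engine retry each branch from the start, while a factored pattern shares common prefixes the way trie levels do:
import re

# Flat alternation: the engine may re-read the shared 'foo' prefix per branch.
flat = re.compile(r'\b(?:foobar|foobaz|fooqux)\b')

# Trie-factored: common prefixes are shared, like the levels of a trie.
factored = re.compile(r'\b(?:foo(?:ba[rz]|qux))\b')

text = 'xx foobaz yy fooqux zz'
print(flat.findall(text))      # ['foobaz', 'fooqux']
print(factored.findall(text))  # ['foobaz', 'fooqux']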
I have a somewhat complex regular expression which I'm trying to match against a long string (65,535 characters). I'm looking for multiple occurrences of the re in the string, and so am using finditer. It works, but for some reason it hangs after identifying the first few occurrences. Does anyone know why this might be? Here's the code snippet:
pattern = "(([ef]|([gh]d*(ad*[gh]d)*b))d*b([ef]d*b|d*)*c)"
matches = re.finditer(pattern, string)
for match in matches:
print "(%d-%d): %s" % (match.start(), match.end(), match.group())
It prints out the first four occurrences, but then it hangs. When I kill it using Ctrl-C, it tells me it was killed in the iterator:
Traceback (most recent call last):
File "code.py", line 133, in <module>
main(sys.argv[1:])
File "code.py", line 106, in main
for match in matches:
KeyboardInterrupt
If I try it with a simpler re, it works fine.
I'm running this on Python 2.5.4 under Cygwin on Windows XP.
I managed to get it to hang with a much shorter string. With this 50-character string, it never returned after about 5 minutes:
ddddddeddbedddbddddddddddddddddddddddddddddddddddd
With this 39-character string it took about 15 seconds to return (and displayed no matches):
ddddddeddbedddbdddddddddddddddddddddddd
And with this string it returns instantly:
ddddddeddbedddbdddddddddddddd
Definitely exponential behaviour. You've got so many d* parts in your regexp that it'll backtrack like crazy when it gets to the long string of d's but fails to match something earlier. You need to rethink the regexp so that it has fewer possible paths to try.
In particular I think:
([ef]d*b|d*)* and ([ef]|([gh]d*(ad*[gh]d)*b))d*b
Might need rethinking, as they'll force a retry of the alternate match. Plus they also overlap in terms of what they match. They'd both match edb for example, but if one fails and tries to backtrack the other part will probably have the same behaviour.
So, in short, try not to use | if you can, and try to make sure the patterns don't overlap where possible.
Could it be that your expression triggers exponential behavior in the Python RE engine?
This article deals with the problem. If you have the time, you might want to try running your expression in an RE engine developed using those ideas.
Thanks to all the responses, which were very helpful. In the end, surprisingly, it was easy to speed it up. Here's the original regex:
(([ef]|([gh]d*(ad*[gh]d)*b))d*b([ef]d*b|d*)*c)
I noticed that the |d* near the end was not really what I needed, so I modified it as follows:
(([ef]|([gh]d*(ad*[gh]d)*b))d*b([ef]d*bd*)*c)
Now it works almost instantaneously on the 65,536 character string. I guess now I just have to make sure that the regex is really matching the strings I need it to match...
I think you're experiencing what is known as "catastrophic backtracking".
Your regex has many optional/alternative parts, all of which still try to match, so previous sub-expressions give characters back to the following expression on local failure. This leads to back-and-forth behaviour within the regex and exponentially rising execution times.
Python's built-in re module only gained atomic grouping and possessive quantifiers in Python 3.11; the third-party regex module has supported them for longer. You could examine your regex to identify the parts that should match or fail as a whole; unnecessary backtracking can be brought under control with that.
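A small sketch of taming the backtracking with a possessive quantifier (the toy pattern is mine; note the version assumption in the comment):
import re  # possessive quantifiers need Python 3.11+ in the stdlib re

# Backtracking version: fails, but only after exploring a huge number of splits.
slow = re.compile(r'(x+x+)+y')

# Possessive version: (?:...)++ never gives characters back once matched,
# so on a missing 'y' the engine fails fast instead of retrying every split.
fast = re.compile(r'(?:x+x+)++y')

s = 'x' * 22
print(fast.search(s))  # None, returned almost instantly
print(slow.search(s))  # None, but only after millions of backtracking steps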
catastrophic backtracking!
Regular Expressions can be very expensive. Certain (unintended and intended) strings may cause RegExes to exhibit exponential behavior. We've taken several hotfixes for this. RegExes are so handy, but devs really need to understand how they work; we've gotten bitten by them.
example and debugger:
http://www.codinghorror.com/blog/archives/000488.html
You already gave yourself the answer: the regular expression is too complex and ambiguous.
You should try to find a less complex and more distinct expression that is easier to process. Or tell us what you want to accomplish, and we can try to help you find one.
Edit: If you just want to allow ds in every position, as you said in a comment to John Montgomery's answer, you should remove them before testing the pattern:
import re
string = "ddddddeddbedddbddddddddddddddddddddddddddddddddddd"
pattern = "(([ef]|([gh](a[gh])*b))b([ef]b)*c)"
matches = re.finditer(pattern, re.sub("d+", "", string))
for match in matches:
print "(%d-%d): %s" % (match.start(), match.end(), match.group())