I'm doing an NLP project with my university, collecting data on Icelandic words that exist spelled both with an i and with a y (the two letters sound the same in Icelandic), where both variants are real words but do not mean the same thing. Examples include leyti (an approximation in time) and leiti (a grassy hill), or kirkja (church) and kyrkja (choke). I have a dataset of 2 million words.
I have already collected two wordlists: one of words spelled with a y and one of the same words spelled with an i (although they don't match up completely, as the y-list is a bit longer, but that's a separate issue). My problem is that I want to end up with pairs of words like leyti - leiti, kyrkja - kirkja, etc. But since y comes much later in the alphabet than i, it's no good just sorting the lists and pairing them up that way. I also tried zipping the lists while checking the first few letters for a match, but that leaves out all words that have y or i as the first letter.
Do you have a suggestion on how I might implement this?
Try something like this:
s = "trydfydfgfay"
l = list(s)
candidateWords = []
for idx, c in enumerate(l):
if c=='y':
newList = l.copy()
newList[idx] = "i"
candidateWord = "".join(newList)
candidateWords.append(candidateWord)
print(candidateWords)
#['tridfydfgfay', 'trydfidfgfay', 'trydfydfgfai']
#look up these words to see if they are real words
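A possible way to apply this to the whole y-wordlist from the question (the list contents below are stand-ins; all_words would be the 2-million-word dataset):
def i_variants(word):
    """Yield every spelling with exactly one 'y' replaced by 'i'."""
    for idx, c in enumerate(word):
        if c == 'y':
            yield word[:idx] + 'i' + word[idx + 1:]

all_words = {"leiti", "kirkja", "hestur"}   # stand-in for the full dataset
y_words = ["leyti", "kyrkja", "yfir"]       # stand-in for the y-wordlist

pairs = [(w, v) for w in y_words for v in i_variants(w) if v in all_words]
print(pairs)   # [('leyti', 'leiti'), ('kyrkja', 'kirkja')]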
I do not think this is a programming challenge; it looks more like an NLP challenge in itself. Spelling variation is a hurdle one often has to deal with during preprocessing.
I would suggest an edit-distance based approach to identify word pairs while allowing for some variation. For the kind of problem you have described, I would specifically recommend the Jaro-Winkler distance. This method can give a higher similarity score to word pairs that differ only in specific character pairs, say y and i.
All these approaches are implemented in the Jellyfish library.
You may also have a look at the fuzzywuzzy package. Hope this helps.
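For instance, a minimal sketch with Jellyfish (assuming a recent version, where the function is named jaro_winkler_similarity; older releases call it jaro_winkler):
import jellyfish

pairs = [("leyti", "leiti"), ("kyrkja", "kirkja"), ("leyti", "kirkja")]
for a, b in pairs:
    # y/i pairs score close to 1.0, unrelated words noticeably lower
    print(a, b, jellyfish.jaro_winkler_similarity(a, b))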
So this accomplishes my task; it's kind of an easy, not-that-pretty solution, I suppose, but it works:
wordlist = open("data.txt", "r", encoding='utf-8')
y_wordlist = open("y_wordlist.txt", "w+", encoding='utf-8')

all_words = []
y_words = []
for word in wordlist:
    word = word.lower().strip()
    all_words.append(word)

for word in all_words:
    if "y" in word:
        y_words.append(word)

# For each y-word, generate candidate i-spellings: the first y replaced,
# the first two ys replaced, and the last y replaced.
word_dict = {}
for word in y_words:
    newwith1y = word.replace("y", "i", 1)
    newwith2y = word.replace("y", "i", 2)
    newyback = word[::-1].replace("y", "i", 1)[::-1]
    word_dict[word] = [newwith1y, newwith2y, newyback]

for key, candidates in word_dict.items():
    for value in set(candidates):  # the candidates may coincide for single-y words
        if value in all_words:
            y_wordlist.write(key)
            y_wordlist.write(" - ")
            y_wordlist.write(value)
            y_wordlist.write("\n")
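A small side note (my suggestion, not part of the code above): with 2 million words, value in all_words scans the whole list each time; converting it to a set makes each lookup effectively constant-time.
all_words = ["leiti", "kirkja", "leyti", "kyrkja"]   # stand-in for the real list
all_words_set = set(all_words)
print("leiti" in all_words_set)   # hash lookup instead of a full list scan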
I want to auto-correct the words which are in my list.
Say I have a list
kw = ['tiger','lion','elephant','black cat','dog']
I want to check whether these words appear in my sentences. If they are wrongly spelled, I want to correct them. I don't intend to touch any words other than those in the given list.
Now I have a list of str:
s = ["I saw a tyger","There are 2 lyons","I mispelled Kat","bulldogs"]
Expected output:
['tiger','lion',None,'dog']
My Efforts:
import difflib
op = [difflib.get_close_matches(i,kw,cutoff=0.5) for i in s]
print(op)
My Output:
[[], [], [], ['dog']]
The problem with the above code is that I want to compare the entire sentence, and my kw list can have more than one word (up to 4-5 words).
If I lower the cutoff value, it starts returning words which it should not.
Even if I create bigrams and trigrams from the given sentences, it would consume a lot of time.
So is there a way to implement this?
I have explored a few more libraries like autocorrect, hunspell, etc., but with no success.
You could implement something based on Levenshtein distance.
Elasticsearch's implementation is an interesting reference: https://www.elastic.co/guide/en/elasticsearch/guide/master/fuzziness.html
Clearly, bieber is a long way from beaver—they are too far apart to be
considered a simple misspelling. Damerau observed that 80% of human
misspellings have an edit distance of 1. In other words, 80% of
misspellings could be corrected with a single edit to the original
string.
Elasticsearch supports a maximum edit distance, specified with the
fuzziness parameter, of 2.
Of course, the impact that a single edit has on a string depends on
the length of the string. Two edits to the word hat can produce mad,
so allowing two edits on a string of length 3 is overkill. The
fuzziness parameter can be set to AUTO, which results in the following
maximum edit distances:
0 for strings of one or two characters
1 for strings of three, four, or five characters
2 for strings of more than five characters
I like to use pyxDamerauLevenshtein myself.
pip install pyxDamerauLevenshtein
So you could do a simple implementation like:
from pyxdameraulevenshtein import damerau_levenshtein_distance

keywords = ['tiger', 'lion', 'elephant', 'black cat', 'dog']

def correct_sentence(sentence):
    new_sentence = []
    for word in sentence.split():
        budget = 2
        n = len(word)
        if n < 3:
            budget = 0
        elif 3 <= n < 6:
            budget = 1
        if budget:
            for keyword in keywords:
                if damerau_levenshtein_distance(word, keyword) <= budget:
                    new_sentence.append(keyword)
                    break
            else:
                new_sentence.append(word)
        else:
            new_sentence.append(word)
    return " ".join(new_sentence)
Just make sure you use a better tokenizer or this will get messy, but you get the point. Also note that this is unoptimized, and will be really slow with a lot of keywords. You should implement some kind of bucketing to not match all words with all keywords.
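For illustration, here is how the function above could be called on the question's sentences (whether a given word actually gets corrected depends on its length and the resulting distance budget):
sentences = ["I saw a tyger", "There are 2 lyons", "I mispelled Kat", "bulldogs"]
for sentence in sentences:
    print(correct_sentence(sentence))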
Here is one way using difflib.SequenceMatcher. The SequenceMatcher class lets you measure string similarity with its ratio method; you only need to provide a suitable threshold and keep the words whose ratio falls above it:
def find_similar_word(s, kw, thr=0.5):
    from difflib import SequenceMatcher
    out = []
    for i in s:
        f = False
        for j in i.split():
            for k in kw:
                if SequenceMatcher(a=j, b=k).ratio() > thr:
                    out.append(k)
                    f = True
                if f:
                    break
            if f:
                break
        else:
            out.append(None)
    return out
Output
find_similar_word(s, kw)
['tiger', 'lion', None, 'dog']
Although this is slightly different from your expected output (it is a list of lists instead of a list of strings), I think it is a step in the right direction. The reason I chose this method is so that you can have multiple corrections per sentence. That is why I added another example sentence.
import difflib
import itertools
kw = ['tiger','lion','elephant','black cat','dog']
s = ["I saw a tyger","There are 2 lyons","I mispelled Kat","bulldogs", "A tyger is different from a doog"]
op = [[difflib.get_close_matches(j,kw,cutoff=0.5) for j in i.split()] for i in s]
op = [list(itertools.chain(*o)) for o in op]
print(op)
The output generated is:
[['tiger'], ['lion'], [], ['dog'], ['tiger', 'dog']]
The trick is to split all the sentences along the whitespaces.
I have written the following (crude) code to find the association strengths among the words in a given piece of text.
import re
import numpy as np
import pandas as pd
import nltk
from nltk.corpus import stopwords
## The first paragraph of Wikipedia's article on itself - you can try with other pieces of text with preferably more words (to produce more meaningful word pairs)
text = "Wikipedia was launched on January 15, 2001, by Jimmy Wales and Larry Sanger.[10] Sanger coined its name,[11][12] as a portmanteau of wiki[notes 3] and 'encyclopedia'. Initially an English-language encyclopedia, versions in other languages were quickly developed. With 5,748,461 articles,[notes 4] the English Wikipedia is the largest of the more than 290 Wikipedia encyclopedias. Overall, Wikipedia comprises more than 40 million articles in 301 different languages[14] and by February 2014 it had reached 18 billion page views and nearly 500 million unique visitors per month.[15] In 2005, Nature published a peer review comparing 42 science articles from Encyclopadia Britannica and Wikipedia and found that Wikipedia's level of accuracy approached that of Britannica.[16] Time magazine stated that the open-door policy of allowing anyone to edit had made Wikipedia the biggest and possibly the best encyclopedia in the world and it was testament to the vision of Jimmy Wales.[17] Wikipedia has been criticized for exhibiting systemic bias, for presenting a mixture of 'truths, half truths, and some falsehoods',[18] and for being subject to manipulation and spin in controversial topics.[19] In 2017, Facebook announced that it would help readers detect fake news by suitable links to Wikipedia articles. YouTube announced a similar plan in 2018."
text = re.sub(r"[\[].*?[\]]", "", text)      ## Remove brackets and anything inside them
text = re.sub(r"[^a-zA-Z0-9.]+", ' ', text)  ## Remove special characters except spaces and dots
text = str(text).lower()                     ## Convert everything to lowercase
## Can add other preprocessing steps, depending on the input text, if needed.

stop_words = stopwords.words('english')
desirable_tags = ['NN'] # We want only nouns - can also add 'NNP', 'NNS', 'NNPS' if needed, depending on the results

word_list = []
for sent in text.split('.'):
    for word in sent.split():
        '''
        Extract the unique, non-stopword nouns only
        '''
        if word not in word_list and word not in stop_words and nltk.pos_tag([word])[0][1] in desirable_tags:
            word_list.append(word)

'''
Construct the association matrix, where we count 2 words as being associated
if they appear in the same sentence.

Later, I'm going to define associations more properly by introducing a
window size (say, if 2 words are separated by at most 5 words in a sentence,
then we consider them to be associated)
'''
table = np.zeros((len(word_list), len(word_list)), dtype=int)

for sent in text.split('.'):
    for i in range(len(word_list)):
        for j in range(len(word_list)):
            if word_list[i] in sent and word_list[j] in sent:
                table[i, j] += 1

df = pd.DataFrame(table, columns=word_list, index=word_list)

# Count the number of occurrences of each word from word_list in the text
all_words = pd.DataFrame(np.zeros((len(df), 2)), columns=['Word', 'Count'])
all_words.Word = df.index
for sent in text.split('.'):
    for word in sent.split():
        if word in word_list:
            all_words.loc[all_words.Word == word, 'Count'] += 1

# Sort the word pairs in decreasing order of their association strengths
df.values[np.triu_indices_from(df, 0)] = 0 # Make the upper triangle values 0
assoc_df = pd.DataFrame(columns=['Word 1', 'Word 2', 'Association Strength (Word 1 -> Word 2)'])
for row_word in df:
    for col_word in df:
        '''
        If Word1 occurs 10 times in the text, and Word1 & Word2 occur in the same sentence 3 times,
        the association strength of Word1 and Word2 is 3/10 - Please correct me if this is wrong.
        '''
        assoc_df = assoc_df.append({'Word 1': row_word, 'Word 2': col_word,
                                    'Association Strength (Word 1 -> Word 2)': df[row_word][col_word] / all_words[all_words.Word == row_word]['Count'].values[0]},
                                   ignore_index=True)
assoc_df.sort_values(by='Association Strength (Word 1 -> Word 2)', ascending=False)
This produces the word associations like so:
Word 1 Word 2 Association Strength (Word 1 -> Word 2)
330 wiki encyclopedia 3.0
895 encyclopadia found 1.0
1317 anyone edit 1.0
754 peer science 1.0
755 peer encyclopadia 1.0
756 peer britannica 1.0
...
...
...
However, the code contains a lot of for loops, which hurts its running time. Especially the last part (sorting the word pairs in decreasing order of their association strengths) consumes a lot of time, as it computes the association strengths of n^2 word pairs/combinations, where n is the number of words we are interested in (those in word_list in my code above).
So, the following are what I would like some help on:
How do I vectorize the code, or otherwise make it more efficient?
Instead of producing n^2 combinations/pairs of words in the last step, is there any way to prune some of them before producing them? I am going to prune some of the useless/meaningless pairs by inspection after they are produced anyway.
Also, I know this does not fall within the purview of a coding question, but I would love to know if there's any mistake in my logic, especially when calculating the word association strengths.
Since you asked about your specific code, I will not go into alternate libraries. I will be mostly concentrating on points 1) and 2) of your question:
Instead of iterating through the whole word list twice (i and j), you can already cut the processing time roughly in half by iterating j only between i + 1 and the end of the list. This removes duplicate pairs (index 24 and 42 as well as index 42 and 24) and identical pairs (index 42 and 42).
for sent in text.split('.'):
    for i in range(len(word_list)):
        for j in range(i + 1, len(word_list)):
            if word_list[i] in sent and word_list[j] in sent:
                table[i, j] += 1
Be careful with this, though: the in operator will also match partial words (like and inside hand).
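A tiny illustration of that caveat:
sent = "the hand that feeds"
print("and" in sent)          # True  - substring match inside 'hand'
print("and" in sent.split())  # False - token-level match, which is usually what is wanted here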
Of course, you could also remove the j iteration completely by first filtering for all words in your word list and then pairing them afterward:
word_list = set() # Using a set instead of a list makes lookups faster since it is a hashed structure
for sent in text.split('.'):
    for word in sent.split():
        '''
        Extract the unique, non-stopword nouns only
        '''
        if word not in word_list and word not in stop_words and nltk.pos_tag([word])[0][1] in desirable_tags:
            word_list.add(word)

# Map each word to a fixed row/column of the table, since a set has no stable order
word_index = {word: idx for idx, word in enumerate(word_list)}

(...)

for sent in text.split('.'):
    found_words = [word for word in sent.split() if word in word_list] # list comprehensions are usually faster than pure for loops
    # If you want to count duplicate words, then leave the whole line below out.
    found_words = tuple(frozenset(found_words)) # make every word unique using a set and then iterable by index again by converting it into a tuple.
    for i in range(len(found_words)):
        for j in range(i + 1, len(found_words)):
            # index into the global table via word_index, not the per-sentence positions
            table[word_index[found_words[i]], word_index[found_words[j]]] += 1
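One more thought on point 1 (my addition, not covered above): once the co-occurrence matrix and the per-word counts are NumPy arrays, the final double loop over word pairs can be replaced with a single broadcasted division. A toy sketch with made-up numbers:
import numpy as np

word_list = ["wiki", "encyclopedia", "articles"]
table = np.array([[0, 3, 1],          # co-occurrence counts, as in the question
                  [3, 0, 2],
                  [1, 2, 0]])
counts = np.array([3, 4, 5])          # how often each word occurs in the text

# strength(word1 -> word2) = co-occurrences / count(word1), computed for all pairs at once
strengths = table / counts[:, None]
print(strengths)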
In general, though, you should really think about using external libraries for most of this. As some of the comments on your question already pointed out, splitting on . may get you wrong results, and the same goes for splitting on whitespace, for example with words that are separated by a - or followed by a ,.
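As a hedged sketch of that suggestion, NLTK's own tokenizers could take over the sentence and word splitting (the sample sentence is mine):
import nltk
from nltk.tokenize import sent_tokenize, word_tokenize

# nltk.download('punkt')  # one-time download of the tokenizer models

sample = "Dr. Smith edited Wikipedia. The well-known encyclopedia grew quickly."
for sent in sent_tokenize(sample):      # does not break on the '.' in 'Dr.'
    print(word_tokenize(sent.lower()))  # splits off punctuation instead of gluing it to words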
I am using str.contains for text analytics in Pandas. For the sentence "My latest Data job was an Analyst", I want a combination of the words "Data" & "Analyst", but at the same time I want to specify the number of words allowed between the two words in the combination (here there are 2 words between "Data" and "Analyst"). Currently I am using (DataFile.XXX.str.contains('job') & DataFile.XXX.str.contains('Analyst')) to get the counts for "job Analyst".
How can I specify the number of words between the 2 words in the str.contains syntax?
Thanks in advance
You can't. At least, not in a simple or standardized way.
Even the basics, like how you define a "word," are a lot more complex than you probably imagine. Both word parsing and lexical proximity (e.g. "are two words within distance D of one another in sentence s?") are the realm of natural language processing (NLP). NLP and proximity searches are not part of basic Pandas, nor of Python's standard string processing. You could import something like NLTK, the Natural Language Toolkit, to solve this problem in a general way, but that's a whole 'nother story.
Let's look at a simple approach. First you need a way to parse a string into words. The following is rough by NLP standards, but will work for simpler cases:
import re

def parse_words(s):
    """
    Simple parser to grab English words from a string.

    CAUTION: A simplistic solution to a hard problem.
             Many possibly-important edge- and corner-cases
             are not handled. Just one example: hyphenated words.
    """
    return re.findall(r"\w+(?:'[st])?", s, re.I)
E.g.:
>>> parse_words("and don't think this day's last moment won't come ")
['and', "don't", 'think', 'this', "day's", 'last', 'moment', "won't", 'come']
Then you need a way to find all the indices in a list where a target word is found:
def list_indices(target, seq):
    """
    Return all indices in seq at which the target is found.
    """
    indices = []
    cursor = 0
    while True:
        try:
            index = seq.index(target, cursor)
        except ValueError:
            return indices
        else:
            indices.append(index)
            cursor = index + 1
And finally, a decision-making wrapper:
def words_within(target_words, s, max_distance, case_insensitive=True):
    """
    Determine if the two target words are within max_distance positions of one
    another in the string s.
    """
    if len(target_words) != 2:
        raise ValueError('must provide 2 target words')

    # fold case for case insensitivity
    if case_insensitive:
        s = s.casefold()
        target_words = [tw.casefold() for tw in target_words]
        # for Python 2, replace `casefold` with `lower`

    # parse words and establish their logical positions in the string
    words = parse_words(s)
    target_indices = [list_indices(t, words) for t in target_words]

    # words not present
    if not target_indices[0] or not target_indices[1]:
        return False

    # compute all combinations of distance for the two words
    # (there may be more than one occurrence of a word in s)
    actual_distances = [i2 - i1 for i2 in target_indices[1] for i1 in target_indices[0]]

    # answer whether the minimum observed distance is <= our specified threshold
    return min(actual_distances) <= max_distance
So then:
>>> s = "and don't think this day's last moment won't come at last"
>>> words_within(["THIS", 'last'], s, 2)
True
>>> words_within(["think", 'moment'], s, 2)
False
The only thing left to do is map that back to Pandas:
import pandas as pd

df = pd.DataFrame({'desc': [
    'My latest Data job was an Analyst',
    'some day my prince will come',
    'Oh, somewhere over the rainbow bluebirds fly',
    "Won't you share a common disaster?",
    'job! rainbow! analyst.',
]})

df['ja2'] = df.desc.apply(lambda x: words_within(["job", 'analyst'], x, 2))
df['ja3'] = df.desc.apply(lambda x: words_within(["job", 'analyst'], x, 3))
This is basically how you'd solve the problem. Keep in mind, it's a rough and simplistic solution. Some simply-posed questions are not simply-answered. NLP questions are often among them.
I'm looking for a library, or a method using existing libraries (difflib, fuzzywuzzy, python-levenshtein), to find the closest match of a string (query) in a text (corpus).
I've developed a method based on difflib, where I split my corpus into ngrams of size n (the length of the query).
import difflib
from nltk.util import ngrams

def get_best_match(query, corpus):
    ngs = ngrams(list(corpus), len(query))
    ngrams_text = [''.join(x) for x in ngs]
    return difflib.get_close_matches(query, ngrams_text, n=1, cutoff=0)
It works as I want when the difference between the query and the matched string is just character replacements.
query = "ipsum dolor"
corpus = "lorem 1psum d0l0r sit amet"
match = get_best_match(query, corpus)
# match = "1psum d0l0r"
But when the difference is character deletion, it is not.
query = "ipsum dolor"
corpus = "lorem 1psum dlr sit amet"
match = get_best_match(query, corpus)
# match = "psum dlr si"
# expected_match = "1psum dlr"
Is there a way to get a more flexible result size ( as for expected_match ) ?
EDIT 1:
The actual use of this script is to match queries (strings) against a messy OCR output.
As I said in the question, the OCR can confuse characters, and even miss them.
If possible, also consider the case where a space is missing between words.
The best match is one that does not include characters from words other than those in the query.
EDIT 2:
The solution I use now is to extend the ngrams with (n-k)-grams for k = {1,2,3}, to allow for up to 3 deletions. It's much better than the first version, but not efficient in terms of speed, as we have more than 3 times the number of ngrams to check. It is also a non-generalizable solution.
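For reference, a rough sketch of that workaround (the function name is mine, not part of the original code):
import difflib
from nltk.util import ngrams

def get_best_match_flexible(query, corpus, max_deletions=3):
    """Also consider (n-1)-, (n-2)- and (n-3)-grams so that matches shorter
    than the query (i.e. with deleted characters) can surface."""
    n = len(query)
    candidates = []
    for k in range(max_deletions + 1):
        candidates += [''.join(g) for g in ngrams(list(corpus), n - k)]
    return difflib.get_close_matches(query, candidates, n=1, cutoff=0)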
This function finds best matching substring of variable length.
The implementation considers the corpus as one long string, hence avoiding your concerns with spaces and unseparated words.
Code summary:
1. Scan the corpus for match values in steps of size step to find the approximate location of highest match value, pos.
2. Find the substring in the vicinity of pos with the highest match value, by adjusting the left/right positions of the substring.
from difflib import SequenceMatcher

def get_best_match(query, corpus, step=4, flex=3, case_sensitive=False, verbose=False):
    """Return best matching substring of corpus.

    Parameters
    ----------
    query : str
    corpus : str
    step : int
        Step size of first match-value scan through corpus. Can be thought of
        as a sort of "scan resolution". Should not exceed length of query.
    flex : int
        Max. left/right substring position adjustment value. Should not
        exceed length of query / 2.

    Outputs
    -------
    output0 : str
        Best matching substring.
    output1 : float
        Match ratio of best matching substring. 1 is perfect match.
    """

    def _match(a, b):
        """Compact alias for SequenceMatcher."""
        return SequenceMatcher(None, a, b).ratio()

    def scan_corpus(step):
        """Return list of match values from corpus-wide scan."""
        match_values = []

        m = 0
        while m + qlen - step <= len(corpus):
            match_values.append(_match(query, corpus[m : m-1+qlen]))
            if verbose:
                print(query, "-", corpus[m: m + qlen], _match(query, corpus[m: m + qlen]))
            m += step

        return match_values

    def index_max(v):
        """Return index of max value."""
        return max(range(len(v)), key=v.__getitem__)

    def adjust_left_right_positions():
        """Return left/right positions for best string match."""
        # bp_* is synonym for 'Best Position Left/Right' and are adjusted
        # to optimize bmv_*
        p_l, bp_l = [pos] * 2
        p_r, bp_r = [pos + qlen] * 2

        # bmv_* are declared here in case they are untouched in optimization
        bmv_l = match_values[p_l // step]
        bmv_r = match_values[p_l // step]

        for f in range(flex):
            ll = _match(query, corpus[p_l - f: p_r])
            if ll > bmv_l:
                bmv_l = ll
                bp_l = p_l - f

            lr = _match(query, corpus[p_l + f: p_r])
            if lr > bmv_l:
                bmv_l = lr
                bp_l = p_l + f

            rl = _match(query, corpus[p_l: p_r - f])
            if rl > bmv_r:
                bmv_r = rl
                bp_r = p_r - f

            rr = _match(query, corpus[p_l: p_r + f])
            if rr > bmv_r:
                bmv_r = rr
                bp_r = p_r + f

            if verbose:
                print("\n" + str(f))
                print("ll: -- value: %f -- snippet: %s" % (ll, corpus[p_l - f: p_r]))
                print("lr: -- value: %f -- snippet: %s" % (lr, corpus[p_l + f: p_r]))
                print("rl: -- value: %f -- snippet: %s" % (rl, corpus[p_l: p_r - f]))
                print("rr: -- value: %f -- snippet: %s" % (rr, corpus[p_l: p_r + f]))

        return bp_l, bp_r, _match(query, corpus[bp_l : bp_r])

    if not case_sensitive:
        query = query.lower()
        corpus = corpus.lower()

    qlen = len(query)

    if flex >= qlen/2:
        print("Warning: flex exceeds length of query / 2. Setting to default.")
        flex = 3

    match_values = scan_corpus(step)
    pos = index_max(match_values) * step

    pos_left, pos_right, match_value = adjust_left_right_positions()

    return corpus[pos_left: pos_right].strip(), match_value
Example:
query = "ipsum dolor"
corpus = "lorem i psum d0l0r sit amet"
match = get_best_match(query, corpus, step=2, flex=4)
print(match)
('i psum d0l0r', 0.782608695652174)
Some good heuristic advice is to always keep step < len(query) * 3/4, and flex < len(query) / 3. I also added case sensitivity, in case that's important. It works quite well when you start playing with the step and flex values. Smaller step values give better results but take longer to compute. flex governs how flexible the length of the resulting substring is allowed to be.
Important to note: This will only find the first best match, so if there are multiple equally good matches, only the first will be returned. To allow for multiple matches, change index_max() to return a list of indices for the n highest values of the input list, and loop over adjust_left_right_positions() for values in that list.
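As a rough sketch of that change (the helper name is mine):
def index_top_n(v, n=3):
    """Return the indices of the n highest values in v, best first."""
    return sorted(range(len(v)), key=v.__getitem__, reverse=True)[:n]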
The main path to a solution uses finite state automata (FSA) of some kind. If you want a detailed summary of the topic, check this dissertation out (PDF link). Error-based models (including Levenshtein automata and transducers, the former of which Sergei mentioned) are valid approaches to this. However, stochastic models, including various types of machine learning approaches integrated with FSAs, are very popular at the moment.
Since we are looking at edit distances (effectively misspelled words), the Levenshtein approach is good and relatively simple. This paper (as well as the dissertation; also PDF) gives a decent outline of the basic idea, and it explicitly mentions the application to OCR tasks. However, I will review some of the key points below.
The basic idea is that you want to build an FSA that accepts both the valid string and all strings up to some error distance (k). In the general case, this k could be infinite or the size of the text, but this is mostly irrelevant for OCR (if your OCR could even potentially return bl*h where * is the rest of the entire text, I would advise finding a better OCR system). Hence, we can exclude patterns like bl*h from the set of valid answers for the search string blah. A general, simple and intuitive k for your context is probably the length of the string (w) minus 2. This allows b--h to be a valid string for blah. It also allows bla--h, but that's okay. Also, keep in mind that the errors can be any character you specify, including spaces (hence 'multiword' input is solvable).
The next basic task is to set up a simple weighted transducer. Any of the OpenFST Python ports can do this (here's one). The logic is simple: insertions and deletions increment the weight while equality increments the index in the input string. You could also just hand code it as the guy in Sergei's comment link did.
Once you have the weights and associated indexes of the weights, you just sort and return. The computational complexity should be O(n(w+k)), since we will look ahead w+k characters in the worst case for each character (n) in the text.
From here, you can do all sorts of things. You could convert the transducer to a DFA. You could parallelize the system by breaking the text into w+k-grams, which are sent to different processes. You could develop a language model or confusion matrix that defines what common mistakes exist for each letter in the input set (and thereby restrict the space of valid transitions and the complexity of the associated FSA). The literature is vast and still growing so there are probably as many modifications as there are solutions (if not more).
Hopefully that answers some of your questions without giving any code.
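For concreteness, here is a minimal sketch of the "weights plus index" bookkeeping described above, written as plain dynamic programming (Sellers-style approximate matching) rather than an FSA or OpenFST transducer; the function name is made up for illustration:
def best_approx_occurrences(query, text, k):
    """Return (end_position, edit_distance) pairs where query matches a
    substring of text ending at end_position with at most k edits."""
    w = len(query)
    prev = list(range(w + 1))      # column for the empty text prefix: D[i][0] = i
    hits = []
    # One O(w) column per text character, so the whole scan is O(len(text) * w).
    for j, ch in enumerate(text, start=1):
        curr = [0]                 # free start: a match may begin at any text position
        for i in range(1, w + 1):
            cost = 0 if query[i - 1] == ch else 1
            curr.append(min(prev[i] + 1,          # extra character in the text
                            curr[i - 1] + 1,      # character missing from the text
                            prev[i - 1] + cost))  # match or substitution
        if curr[w] <= k:
            hits.append((j, curr[w]))
        prev = curr
    return hits

# e.g. find where "blah" appears with at most one edit
print(best_approx_occurrences("blah", "no bla here but blaah there", 1))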
I would try to build a regular expression template from the query string. The template could then be used to search the corpus for substrings that are likely to match the query. Then use difflib or fuzzywuzzy to check if the substring does match the query.
For example, a possible template would be to match at least one of the first two letters of the query, at least one of the last two letters of the query, and have approximately the right number of letters in between:
import re
import difflib

query = "ipsum dolor"
corpus = ["lorem 1psum d0l0r sit amet",
          "lorem 1psum dlr sit amet",
          "lorem ixxxxxxxr sit amet"]

first_letter, second_letter = query[:2]
minimum_gap, maximum_gap = len(query) - 6, len(query) - 3
penultimate_letter, ultimate_letter = query[-2:]

fmt = '(?:{}.|.{}).{{{},{}}}(?:{}.|.{})'.format
pattern = fmt(first_letter, second_letter,
              minimum_gap, maximum_gap,
              penultimate_letter, ultimate_letter)
#print(pattern) # for debugging the pattern

m = difflib.SequenceMatcher(None, "", query, False)
for c in corpus:
    for match in re.finditer(pattern, c, re.IGNORECASE):
        substring = match.group()
        m.set_seq1(substring)
        ops = m.get_opcodes()

        # EDIT: fixed calculation of the number of edits
        #num_edits = sum(1 for t,_,_,_,_ in ops if t != 'equal')
        num_edits = sum(max(i2-i1, j2-j1) for op,i1,i2,j1,j2 in ops if op != 'equal')

        print(num_edits, substring)
Output:
3 1psum d0l0r
3 1psum dlr
9 ixxxxxxxr
Another idea is to use the characteristics of the ocr when building the regex. For example, if the ocr always gets certain letters correct, then when any of those letters are in the query, use a few of them in the regex. Or if the ocr mixes up '1', '!', 'l', and 'i', but never substitutes something else, then if one of those letters is in the query, use [1!il] in the regex.
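A hedged sketch of that idea, with an assumed confusion set (the names and the set itself are made up for illustration):
import re

confusable = {c: '[1!il]' for c in '1!il'}   # assumed OCR confusion class

def to_fuzzy_pattern(query):
    """Replace easily-confused characters with a character class."""
    return ''.join(confusable.get(c, re.escape(c)) for c in query)

pattern = to_fuzzy_pattern("ipsum dolor")
print(re.search(pattern, "lorem 1psum dolor sit amet").group())  # -> 1psum dolor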