String comparison in Python, but not Levenshtein distance (I think)

I found a crude string comparison in a paper I am reading, done as follows:
The equation they use is quoted below (extracted from the paper, with small word changes to make it more general and readable).
I have tried to explain it a bit more in my own words, since the author's description is not very clear (using an example from the paper).
For example, for the 2 sequences ABCDE and BCEFA, there are two possible graphs:
graph 1) connects B with B, C with C and E with E
graph 2) connects A with A
I cannot connect A with A while connecting the other three (graph 1), since that would mean crossing lines (imagine you draw lines between B-B, C-C and E-E); that is, the line linking A-A would cross the lines linking B-B, C-C and E-E.
So these two sequences result in 2 possible graphs: one has 3 connections (BB, CC and EE) and the other only one (AA). I then calculate the score d as given by the equation below.
Consequently, to define the degree of similarity between two
penta-strings we calculate the distance d between them. Aligning the
two penta-strings, we look for all the identities between their
characters, wherever these may be located. If each identity is
represented by a link between both penta-strings, we define a graph
for this pair. We call any part of this graph a configuration.
Next, we retain all of those configurations in which there is no character
cross pairing (the meaning is explained in my example above, i.e., no crossings of links between identical characters and only those graphs are retained).
Each of these is then evaluated as a function of the
number p of characters related to the graph, the shifting Δi for the
corresponding pairs and the gap δij between connected characters of
each penta-string. The minimum value is chosen as characteristic and
is called distance d: d = Min(50 – 10p + ΣΔi + Σδij). Although very rough,
this measure is generally in good agreement with the qualitative, eye-guided estimation.
For instance, the distance between abcde and abcfg is 20,
whereas that between abcde and abfcg is 23 = (50 – 30 + 1 + 2).
I am confused as to how to go about doing this. Any suggestions to help me would be much appreciated.
I tried Levenshtein distance and also simple sequence alignment as used in protein sequence comparison.
The link to the paper is:
http://peds.oxfordjournals.org/content/16/2/103.long
I could not find any contact information for the first author, Alain Figureau, and my emails to M.A. Soto have not been answered (as of today).
Thank you

Well, it's definitely not Levenshtein:
>>> from nltk import metrics
>>> metrics.distance.edit_distance('abcde','abcfg')
2
>>> metrics.distance.edit_distance('abcde','abfcg')
3
>>> help(metrics.distance.edit_distance)
Help on function edit_distance in module nltk.metrics.distance:
edit_distance(s1, s2)
Calculate the Levenshtein edit-distance between two strings.
The edit distance is the number of characters that need to be
substituted, inserted, or deleted, to transform s1 into s2. For
example, transforming "rain" to "shine" requires three steps,
consisting of two substitutions and one insertion:
"rain" -> "sain" -> "shin" -> "shine". These operations could have
been done in other orders, but at least three steps are needed.
@param s1, s2: The strings to be analysed
@type s1: C{string}
@type s2: C{string}
@rtype: C{int}

Just after the text block you cite, there is a reference to a previous paper by the same authors: Secondary Structure of Proteins and Three-dimensional Pattern Recognition. I think it is worth looking into if there is no explanation of the distance (I'm not at work, so I don't have access to the full document).
Otherwise, you can also try to contact the authors directly: Alain Figureau seems to be an old-school French researcher with no contact details whatsoever (no webpage, no e-mail, no "social networking", ...), so I advise trying to contact M.A. Soto, whose e-mail is given at the end of the paper. I think they will give you the answer you're looking for: the experimental procedure has to be crystal clear in order to be repeatable; it's part of the scientific method in research.
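In the meantime, since penta-strings have only five characters, the formula can be brute-forced over all non-crossing configurations. The sketch below is my own reading of the paper, not the authors' verified procedure: it assumes the shift of a link (i, j) is |i - j| and that the gap is the number of skipped characters between consecutive links, counted in both strings. Under those assumptions it reproduces the paper's 20 for abcde vs abcfg, but gives 22 rather than 23 for abcde vs abfcg, so the exact definition of δij is probably slightly different:

```python
from itertools import combinations

def figureau_distance(s1, s2):
    """Brute-force d = Min(50 - 10*p + sum(shifts) + sum(gaps)).

    Assumptions (my own reading, not verified against the authors):
    - a link (i, j) requires s1[i] == s2[j]; a configuration is any set
      of links strictly increasing in both strings (no crossings);
    - the shift of a link (i, j) is |i - j|;
    - the gap is the number of skipped characters between consecutive
      links, counted in both strings.
    """
    links = [(i, j) for i in range(len(s1))
             for j in range(len(s2)) if s1[i] == s2[j]]
    best = 50  # the empty configuration (p = 0)
    for p in range(1, min(len(s1), len(s2)) + 1):
        for combo in combinations(links, p):
            if any(combo[k][0] >= combo[k + 1][0] or combo[k][1] >= combo[k + 1][1]
                   for k in range(p - 1)):
                continue  # crossing (or reused) characters: not a valid graph
            shifts = sum(abs(i - j) for i, j in combo)
            gaps = sum((combo[k + 1][0] - combo[k][0] - 1) +
                       (combo[k + 1][1] - combo[k][1] - 1)
                       for k in range(p - 1))
            best = min(best, 50 - 10 * p + shifts + gaps)
    return best

print(figureau_distance('abcde', 'abcfg'))  # 20, as in the paper
```

The function name and the gap convention are mine; treat this as a starting point to compare against the paper's worked examples rather than a definitive implementation.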

Related

Checking if a graph contain a given induced subgraph

I'm trying to detect some minimal patterns with certain properties in random digraphs. Namely, I have a list called patterns of adjacency matrices of various sizes. For instance, I have [0] (a sink), but also [0100 0001 1000 0010] (a cycle of size 4), [0100, 0010, 0001, 0000] (a path of length 3), etc.
When I generate a digraph, I compute all sets that may be new patterns. However, in most cases it is something that I don't care about: for instance, if the potential new pattern is a cycle of size 5, it does not teach me anything, because it has a cycle of length 3 as an induced subgraph.
I suppose one way to do it would look like this:
# D is the adjacency matrix of a possible new pattern
new_pattern = True
for pi in patterns:
    k = len(pi)
    induced_subgraphs = all_induced_subgraphs(D, k)
    for s in induced_subgraphs:
        if isomorphic(s, pi):
            new_pattern = False
            break
where all_induced_subgraphs(D,k) gives all possible induced subgraphs of D of size k, and isomorphic(s,pi) determines if s and pi are isomorphic digraphs.
However, checking all induced subgraphs of a digraph seems absolutely horrible to do. Is there a clever thing to do there?
Thanks to @Stef I learned that this problem has a name
and can be solved using networkx with a function described on this page.
Personally I use igraph in my project, so I will use that instead.
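For reference, a minimal sketch of the networkx route might look like this (assuming the function in question is DiGraphMatcher.subgraph_is_isomorphic, which tests whether a node-induced subgraph of the host digraph is isomorphic to the pattern):

```python
import networkx as nx
from networkx.algorithms.isomorphism import DiGraphMatcher

# a host digraph whose induced subgraph on {0, 1, 2} is a directed 3-cycle
D = nx.DiGraph([(0, 1), (1, 2), (2, 0), (2, 3)])
pattern = nx.DiGraph([(0, 1), (1, 2), (2, 0)])  # directed 3-cycle

# DiGraphMatcher(G1, G2) looks for G2 as a node-induced subgraph of G1
print(DiGraphMatcher(D, pattern).subgraph_is_isomorphic())  # True
```

igraph has equivalent subisomorphism routines, so the same check should translate over.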

Gensim: word mover distance with string as input instead of list of string

I'm trying to find out how similar 2 sentences are.
To do so, I'm using gensim's word mover's distance, and since what I'm trying to find is a similarity, I do it as follows:
sim = 1 - wv.wmdistance(sentence_obama, sentence_president)
What I give as input are 2 strings:
sentence_obama = 'Obama speaks to the media in Illinois'
sentence_president = 'The president greets the press in Chicago'
The model I'm using is the one you can find on the web: word2vec-google-news-300.
I load it with this code:
wv = api.load("word2vec-google-news-300")
It gives me reasonable results.
Here is where the problem starts.
From what I can read in the documentation here, it seems wmdistance takes as input a list of strings, not a plain string like I pass!
# stop_words must be defined beforehand, e.g. via NLTK:
# from nltk.corpus import stopwords
# stop_words = stopwords.words('english')
def preprocess(sentence):
    return [w for w in sentence.lower().split() if w not in stop_words]

sentence_obama = preprocess(sentence_obama)
sentence_president = preprocess(sentence_president)
sim = 1 - wv.wmdistance(sentence_obama, sentence_president)
When I follow the documentation, I get really different results:
wmd using a string as input: 0.5562025871542842
wmd using a list of strings as input: -0.0174646259300113
I'm really confused. Why does it work with a string as input, and why does it seem to work better than when I give it what the documentation asks for?
The function needs a list of string tokens to give proper results: if your results passing full strings look good to you, it's pure luck and/or poor evaluation.
So: why do you consider 0.556 to be a better value than -0.017?
Since passing the texts as plain strings means they are interpreted as lists of single characters, the value there is going to be a function of how different the letters in the two texts are - and the fact that all English sentences of about the same length have very similar letter distributions means most texts will rate as very similar under that mistaken usage.
Also, similarity or distance values mainly have meaning in comparison to other pairs of sentences, not between two results from different processes (where one of them is essentially random). You shouldn't consider absolute values that exceed some set threshold, or that are close to 1.0, as definitively good. You should instead consider relative differences between two similarity/distance values, meaning one pair is more similar/distant than another pair.
Finally: converting a distance (which goes from 0.0 for closest to infinity for furthest) to a similarity (which typically goes from 1.0 for most similar to -1.0 or 0.0 for least similar) is not usefully done via the formula you're using, similarity = 1.0 - distance. Because a distance can be larger than 2.0, that approach can produce arbitrarily negative similarities, and you could be fooled into thinking -0.017 (etc.) is bad because it's negative, even if it's quite good across all the possible return values.
Some more typical distance-to-similarity conversions are given in another SO question:
How do I convert between a measure of similarity and a measure of difference (distance)?
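The "list of single characters" effect described above is plain Python behavior and can be seen directly (the sentence is the one from the question):

```python
# A plain string is itself an iterable of single characters, so passing one
# to wmdistance() silently means "a document of one-letter tokens":
sentence = "Obama speaks to the media in Illinois"
print(list(sentence)[:6])  # ['O', 'b', 'a', 'm', 'a', ' ']

# What the API expects instead: a list of word tokens
print(sentence.lower().split()[:3])  # ['obama', 'speaks', 'to']
```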

Why the fuzzywuzzy Ratio() uses a slightly different implementation of Levenshtein Distance while calculating the ratio between two strings?

I am trying to wrap my head around how the fuzzywuzzy library calculates the Levenshtein distance between two strings, as the docs clearly mention that it is using it.
The Levenshtein distance algorithm looks for the minimum number of edits between the two strings, where an edit is the insertion, deletion, or substitution of a character. Each of these operations is counted as a single operation when calculating the score.
Here are a couple of examples:
Example 1
s1 = 'hello'
s2 = 'hell'
Levenshtein Score = 1 (it requires 1 edit, addition of 'o')
Example 2
s1 = 'hello'
s2 = 'hella'
Levenshtein Score = 1 (it requires 1 edit, substitution of 'a' to 'o')
Plugging these scores into the fuzzywuzzy formula (len(s1)+len(s2) - LevenshteinScore)/(len(s1)+len(s2)):
Example 1: (5+4-1)/9 = 89%
Example 2: (5+5-1)/10 = 90%
Now fuzzywuzzy does return the same score for Example 1, but not for Example 2: the score for Example 2 is 80%. Investigating how it calculates the distances under the hood, I found out that it counts the 'substitution' operation as 2 operations rather than 1 (as defined for Levenshtein). I understand that it uses the difflib library, but I just want to know why it is called Levenshtein distance when it actually is not.
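The 2-operations-per-substitution behavior can be reproduced with difflib directly: its ratio() is 2*M/T, where M is the number of matching characters and T the combined length, so a substitution loses a match on both sides.

```python
from difflib import SequenceMatcher

# 'hello' vs 'hell':  M = 4 matching chars, T = 9  -> 2*4/9  ~ 0.889
print(round(SequenceMatcher(None, 'hello', 'hell').ratio(), 3))   # 0.889
# 'hello' vs 'hella': M = 4 matching chars, T = 10 -> 2*4/10 = 0.8
print(round(SequenceMatcher(None, 'hello', 'hella').ratio(), 3))  # 0.8
```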
I am just trying to figure out why there is a distinction here. What does it mean or explain? Basically, what is the reason for counting a substitution as 2 operations rather than 1, as defined in Levenshtein distance, while still calling it Levenshtein distance? Does it have something to do with gaps in sentences? Is this a standard way of converting Levenshtein distance to a normalized similarity score?
I would love it if somebody could give me some insight. Also, is there a better way to convert Levenshtein distance to a similarity score, or in general to measure the similarity between two strings? I am trying to measure the similarity between audio-file transcriptions done by a human transcription service and by an Automatic Speech Recognition system.
Thank you!

Minimum Levenshtein distance across multiple words

I am trying to do some string matching using the Levenshtein algorithm to find the closest words for business names. (In Python, but the language won't make a huge difference.)
An example query would be:
search = 'bna'
lat & lon are close by the result I am looking for.
There is a pub right by that latitude and longitude called BNA Brewing Co. By searching bna, my hope would be that it shows up first (as bna == bna).
I have tried two different ways.
m = min([editdistance.eval(search, place_split)
         for place_split in place.name.split(' ')
         if place_split not in string.punctuation])
This returns, ranked only by Levenshtein distance (ignoring geographical distance):
Coffee & Books In Town Center
Talk 'n' Coffee
Raggedy Ann & Andy's
and, when taking geographical distance into account (secondary to Levenshtein):
Shapers Hair Salon & Spa
Amora Day Spa
Pure Esthetics and Micro-Pigmentation
And
m = editdistance.eval(search, place.name)
This one returns, ranked only by Levenshtein distance (ignoring geographical distance):
KFC
MOO
A&W
and, when taking geographical distance into account (secondary to Levenshtein):
A&W
A&W
KFC
So you can see that neither way returns anything close to BNA Brewing Co.
What kind of logic do I have to use to get it to return something when the search term exactly matches one of the place names in my database?
Recall that Levenshtein distances count the number of substitutions, additions and deletions required to transform one string into another. Because of this, they often are minimized when comparing strings of similar length (because even if a lot of substitutions are required, you don't have to add or remove a bunch of characters). You can see this playing out in your second example where your best outputs all are the same length as your search string (len("bna") == len("A&W")).
If your search string is always going to be a single word, then your idea to calculate the distance for each word in the string is a good one, since each word is more likely to be of a similar length to your search string. However, currently you are doing a case-sensitive comparison, which means that editdistance.eval('bna', 'BNA') == 3, which I'm guessing you don't want.
Try:
m = min([editdistance.eval(search.lower(), place_split.lower())
         for place_split in place.name.split(' ')
         if place_split not in string.punctuation])
which should give you a case-insensitive search.
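To see why lowercasing matters without depending on the editdistance package, here is a minimal pure-Python Levenshtein (a standard dynamic-programming sketch; the helper name is my own, but editdistance.eval should agree with it):

```python
def levenshtein(a, b):
    # classic dynamic-programming edit distance, row by row
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

print(levenshtein('bna', 'BNA'))                  # 3: every character differs by case
print(levenshtein('bna'.lower(), 'BNA'.lower()))  # 0: exact match after lowercasing
```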

Matching 2 short descriptions and returning a confidence level

I have some data that I get from banks using Yodlee and the corresponding transaction messages on the mobile. Both have some description in them - short descriptions.
For example:
string1 = "tatasky_TPSL MUMBA IND"
string2 = "tatasky_TPSL"
These can be matched since one is completely contained inside the other. However, some strings like
string1 = "T.G.I Friday's"
string2 = "TGI Friday's MUMBA MAH"
still need to be matched. Is there any algorithm which gives a confidence level for matching 2 descriptions?
You might want to use normalized edit distance, i.e. normalized Levenshtein distance (see the Levenshtein distance Wikipedia article). After computing the Levenshtein distance between two strings, you can normalize it by dividing by the length of the longest string (or by the average length of the two strings). This normalized score can act as the confidence. You can find some 4-5 Python packages for calculating Levenshtein distance, and you can also try it online with an edit distance calculator.
Alternatively, one simple option is the longest common subsequence algorithm, which can also be used here.
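A minimal sketch combining the question's containment rule with the normalized-Levenshtein confidence described above (the function names are my own, and normalizing by the longer string is one of the two conventions mentioned):

```python
def levenshtein(a, b):
    # classic dynamic-programming edit distance, row by row
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def match_confidence(s1, s2):
    """Confidence in [0, 1]: 1.0 if one (lowercased) string contains the
    other, otherwise 1 - distance / length of the longer string."""
    a, b = s1.lower(), s2.lower()
    if a in b or b in a:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

print(match_confidence("tatasky_TPSL MUMBA IND", "tatasky_TPSL"))  # 1.0 (containment)
print(round(match_confidence("T.G.I Friday's", "TGI Friday's MUMBA MAH"), 2))
```

Note the containment short-circuit matters: without it, the extra "MUMBA IND" suffix would drag the normalized score well below 1.0 even though the strings should match.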
