How to find similar word in set? [duplicate] - python

word = "work"
word_set = {"word","look","wrap","pork"}
How can I find the similar words, such that both "word" and "pork" need only one letter changed to become "work"?
Is there a method to measure the difference between a string and each item in the set?

Use difflib.get_close_matches() from the standard library:
import difflib
word = "work"
word_set = {"word","look","wrap","pork"}
difflib.get_close_matches(word, word_set)
returns:
['word', 'pork']
EDIT: If needed, difflib.SequenceMatcher.get_opcodes() can be used to count the edit operations (an approximation of the edit distance):
matcher = difflib.SequenceMatcher(b=word)
for test_word in word_set:
    matcher.set_seq1(test_word)
    distance = len([op for op in matcher.get_opcodes() if op[0] != 'equal'])
    print(distance, test_word)

You could do something like:
word = "work"
word_set = set(["word", "look", "wrap", "pork"])
for example in word_set:
    if len(example) != len(word):
        continue
    num_chars_out = sum(1 for c1, c2 in zip(example, word) if c1 != c2)
    if num_chars_out == 1:
        print(example)

I would recommend the editdistance Python package, which provides an editdistance.eval function that calculates the number of characters you need to change to get from the first word to the second word. Edit distance is the same as Levenshtein distance, which was suggested by MattDMo.
In your case, if you want to identify words within 1 edit distance of each other, you could do:
import editdistance as ed
thresh = 1
w1 = "work"
word_set = set(["word","look","wrap","pork"])
neighboring_words = [w2 for w2 in word_set if ed.eval(w1, w2) <= thresh]
print(neighboring_words)
with neighboring_words evaluating to ['pork', 'word'].
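If you'd rather stay in the standard library, here is a minimal Wagner-Fischer sketch of Levenshtein distance (a plain textbook implementation, not the editdistance package's optimized algorithm):
def levenshtein(a, b):
    # prev[j] holds the edit distance between the processed prefix of a and b[:j].
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

word = "work"
word_set = {"word", "look", "wrap", "pork"}
print([w for w in word_set if levenshtein(word, w) <= 1])  # ['word', 'pork'], in set order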

Related

Finding elements not in a string present in another string in python [duplicate]

Str1 = "acddeffg"
Str2 = "fgfdeca"
I want to obtain the letter(s) that are in Str1 but not present in Str2, and vice versa.
In this example the result is Np1 = "", Np2 = "d".
Str1 = "abcdefffg"
Str2 = "aabcdef"
Answer: Np1 = "a", Np2 = "ffg"
Re-reading your question, you probably want to do this:
str1 = "abcdefffg"
str2 = "aabcdef"
def find_diff(left, right):
    l = list(left)
    for letter in right:
        if letter in l:
            l.remove(letter)
    return "".join(l)
print(find_diff(str1, str2))  # ffg
print(find_diff(str2, str1))  # a
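Equivalently, collections.Counter supports exactly this multiset subtraction (the leftover letters come back grouped rather than in their original order):
from collections import Counter

def find_diff(left, right):
    # Counter subtraction keeps only positive counts: the letters of
    # `left` that survive removing each letter of `right` once.
    return "".join((Counter(left) - Counter(right)).elements())

print(find_diff("abcdefffg", "aabcdef"))  # ffg
print(find_diff("aabcdef", "abcdefffg"))  # a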
Original answer: you could use sets to get the difference between the letters used (ignoring duplicates). This is not what the OP is looking for (see the second example), but I will leave it here for reference.
letters1 = set("abcdefffg")
letters2 = set("aabcdef")
print(letters1 - letters2) # {'g'}
print(letters2 - letters1) # set()
You can try this:
Np1 = ''.join([letter for letter in Str1 if letter not in Str2])
Np2 = ''.join([letter for letter in Str2 if letter not in Str1])
Note that, like the set-based approach above, this ignores how many times each letter occurs, so it will not produce the multiset answers in the question (e.g. "ffg").

Python Code to only print words from a list that start with an A [duplicate]

I have to print words from an array that only start with A.
w = ["Algorithm", "Logic", "Filter", "Software", "Network", "Parameters", "Analyze", "Algorithm", "Functionality", "Viruses"]
for i in range(len(w)):
    if w == "A":
        print(w[i])
        # print(w[i].upper())
The output should be:
Algorithm
Analyze
Algorithm
I am confused about how you would get a word that starts with an A. This is what I have so far. Any suggestions? I am not allowed to use methods such as startswith.
Just use the startswith method of strings, like this:
w = ["Algorithm", "Logic", "Filter", "Software", "Network", "Parameters", "Analyze", "Algorithm", "Functionality", "Viruses"]
for word in w:
    if word.startswith('A'):
        print(word)
Output:
Algorithm
Analyze
Algorithm
EDIT: Since you can't use startswith, access the first position of the word and compare it:
for word in w:
    if word[0] == 'A':
        print(word)
You could also use a list comprehension if you wanted to store the result in a list:
result = [word for word in w if word[0] == "A"]
Or plain looping, if you just want to print the words:
for word in w:
    if word[0] == "A":
        print(word)
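One small caveat: word[0] raises an IndexError on an empty string, whereas slicing does not, so word[:1] is the safer spelling if empty strings can occur:
w = ["Algorithm", "", "Analyze"]  # note the empty string
for word in w:
    if word[:1] == "A":  # word[0] would crash on ""
        print(word)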

How to check if letters of one string are in another

I have a list L of words whose letters are sorted alphabetically, e.g. hello = ehllo.
How do I check these words for "blanagrams", which are words with all the same letters except one? For example, orchestra turns into orchestre. I've only been able to think to this part. I have an idea in which you test every letter of the current word, see whether it corresponds to the other word, and if it does, it is a blanagram and you record it in a dictionary, but I'm struggling to put it into code.
L = [list of alphabetically ordered strings]
for word in L:
    for letter in word:
        # confused at this part
from collections import Counter
def same_except1(s1, s2):
    ct1, ct2 = Counter(s1), Counter(s2)
    return sum((ct1 - ct2).values()) == 1 and sum((ct2 - ct1).values()) == 1
Examples:
>>> same_except1('hello', 'hella')
True
>>> same_except1('hello', 'heela')
False
>>> same_except1('hello', 'hello')
False
>>> same_except1('hello', 'helloa')
False
Steven Rumbalski's answer got me thinking, and there's another way you can do this with a Counter (+1 for use of collections, and thank you for sparking my interest):
from collections import Counter
def diff_one(w, z):
    counts = Counter(w + z).values()
    odd = [c for c in counts if c % 2 != 0]
    return len(odd) == 2
Basically, every matched letter contributes an even count, so you filter those out and are left with the unmatched ones. If you end up with anything other than exactly two unmatched letters, the words are not blanagrams.
Assuming this is similar to the word-ladders game commonly used in AI courses, and you are trying to build a graph whose adjacent nodes are candidate words, rather than a plain dictionary:
d = {}
# create buckets of words that differ by one letter (substitution)
for word in L:
    for i in range(len(word)):
        bucket = word[:i] + '_' + word[i+1:]
        if bucket in d:
            d[bucket].append(word)
        else:
            d[bucket] = [word]
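As a short usage sketch (assuming d was built from L as above): any two words sharing a bucket differ in exactly one position, so each word's blanagram neighbours can be read straight out of the buckets:
neighbours = {}
for words in d.values():
    for word in words:
        # every other word in the same bucket is a one-substitution neighbour
        neighbours.setdefault(word, set()).update(w for w in words if w != word)
print(neighbours.get("orchestra", set()))  # {'orchestre'}, if both words are in L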

how to find the longest word in python? [duplicate]

I am new to Python and am stuck on this one exercise. I am supposed to enter a sentence and find the longest word. If two or more words share the longest length, it should return the first of them. This is what I have so far:
def find_longest_word(word_list):
    longest_word = ''
    for word in word_list:
        print(word, len(word))

words = input('Please enter a few words')
word_list = words.split()
find_longest_word(word_list)
But I do not know how to compare the word lengths and return the first/longest word.
Use the built-in max function with the len function as its key parameter. It iterates over word_list, applying len to each word, and returns the longest one. With ties, max returns the first maximal item, which matches the requirement to return the first longest word.
def find_longest_word(word_list):
    longest_word = max(word_list, key=len)
    return longest_word
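As a quick sanity check of the tie-breaking behaviour in the interpreter:
>>> max(['spam', 'bacon', 'toast'], key=len)
'bacon'
>>> max(['first', 'worst'], key=len)  # equal lengths: the first one wins
'first'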
You shouldn't print out the length of each word. Instead, compare the length of the current word with the length of longest_word. If word is longer, update longest_word to word. When you have been through all the words, the longest one will be stored in longest_word.
Then you can print or return it.
def find_longest_word(word_list):
    longest_word = ''
    for word in word_list:
        if len(word) > len(longest_word):
            longest_word = word
    print(longest_word)
edit:
levi's answer is much more elegant; this is a solution with a simple for loop, and is somewhat close to the one you tried to make yourself.
Compare each word to the longest one so far, starting with a length of 0. If the word is longer than the longest so far, update the word and longest_size. It should look similar to this:
def find_longest_word(word_list):
    longest_word = ''
    longest_size = 0
    for word in word_list:
        if len(word) > longest_size:
            longest_word = word
            longest_size = len(word)
    return longest_word
words = input('Please enter a few words')
word_list = words.split()
find_longest_word(word_list)

Efficient hunting for words in scrambled letters

I guess you could classify this as a Scrabble style problem, but it started out due to a friend mentioning the UK TV quiz show Countdown. Various rounds in the show involve the contestants being presented a scrambled set of letters and they have to come up with the longest word they can. The one my friend mentioned was "RAEPKWAEN".
In fairly short order I whipped up something in Python to handle this problem, using PyEnchant to handle the dictionary look-ups; however, I'm noticing that it really can't scale all that well.
Here's what I have currently:
#!/usr/bin/python
from itertools import permutations
import enchant
from sys import argv

def find_longest(origin):
    s = enchant.Dict("en_US")
    for i in range(len(origin), 0, -1):
        print("Checking against words of length %d" % i)
        pool = permutations(origin, i)
        for comb in pool:
            word = ''.join(comb)
            if s.check(word):
                return word
    return ""

if __name__ == '__main__':
    result = find_longest(argv[1])
    print(result)
That's fine on a 9-letter example like they use in the show: 9 factorial = 362,880 and 8 factorial = 40,320. On that scale, even if it had to check all possible permutations and word lengths, it's not that many.
However, once you reach 14 characters there are 87,178,291,200 possible permutations, meaning you're relying on luck that a 14-character word is found quickly.
With the example word above it takes my machine about 12.5 seconds to find "reawaken". With 14-character scrambled words we could be talking on the scale of 23 days just to check all possible 14-character permutations.
Is there any more efficient way to handle this?
Implementation of Jeroen Coupé's idea from his answer, with letter counts:
from collections import defaultdict, Counter

def find_longest(origin, known_words):
    return next(iter_longest(origin, known_words))

def iter_longest(origin, known_words, min_length=1):
    origin_map = Counter(origin)
    for i in range(len(origin), min_length - 1, -1):  # longest length first
        for word in known_words[i]:
            if check_same_letters(origin_map, word):
                yield word

def check_same_letters(origin_map, word):
    new_map = Counter(word)
    return all(new_map[let] <= origin_map[let] for let in word)

def load_words_from(file_path):
    known_words = defaultdict(list)
    with open(file_path) as f:
        for line in f:
            word = line.strip()
            known_words[len(word)].append(word)
    return known_words

if __name__ == '__main__':
    known_words = load_words_from('words_list.txt')
    origin = 'raepkwaen'
    big_origin = 'raepkwaenaqwertyuiopasdfghjklzxcvbnmqwertyuiopasdfghjklzxcvbnmqwertyuiopasdfghjklzxcvbnmqwertyuiopasdfghjklzxcvbnm'
    print(find_longest(big_origin, known_words))
    print(list(iter_longest(origin, known_words, 5)))
Output (for my small 58000 words dict):
counterrevolutionaries
['reawaken', 'awaken', 'enwrap', 'weaken', 'weaker', 'apnea', 'arena', 'awake',
'aware', 'newer', 'paean', 'parka', 'pekan', 'prank', 'prawn', 'preen', 'renew',
'waken', 'wreak']
Notes:
It's a simple implementation without optimizations.
words_list.txt can be /usr/share/dict/words on Linux.
UPDATE
In case we need to find a word only once, and we have a dictionary with words sorted by length, e.g. produced by this script:
with open('words_list.txt') as f:
    words = f.readlines()
with open('words_by_len.txt', 'w') as f:
    for word in sorted(words, key=lambda w: len(w), reverse=True):
        f.write(word)
We can find the longest word without loading the full dict into memory:
from collections import Counter
import sys

def check_same_letters(origin_map, word):
    new_map = Counter(word)
    return all(new_map[let] <= origin_map[let] for let in word)

def iter_longest_from_file(origin, file_path, min_length=1):
    origin_map = Counter(origin)
    origin_len = len(origin)
    with open(file_path) as f:
        for line in f:
            word = line.strip()
            if len(word) > origin_len:
                continue
            if len(word) < min_length:
                return
            if check_same_letters(origin_map, word):
                yield word

def find_longest_from_file(origin, file_path):
    return next(iter_longest_from_file(origin, file_path))

if __name__ == '__main__':
    origin = sys.argv[1] if len(sys.argv) > 1 else 'abcdefghijklmnopqrstuvwxyz'
    print(find_longest_from_file(origin, 'words_by_len.txt'))
You want to avoid doing the permutations. You could count how many times each character appears in both strings (the original string and the one from the dictionary), and dismiss all the dictionary words that need a character more often than it appears in the original.
So, to check one word from the dictionary you need to count characters at most max(26, n) times.
Pre-parse the dictionary as (sorted(word), word) pairs (e.g. giilnstu, linguist).
Sort the dictionary file.
Then, when you are searching for a given set of letters:
Binary search the dictionary for the letters you have, sorting the letters first.
You'd need to do this separately for each word length.
EDIT: should say that you're searching for all unique combinations of the sorted letters at the target word length (range(len(letters), 0, -1)), as in the sketch below.
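A minimal sketch of that idea, using a dictionary lookup on the sorted-letter signature rather than a binary search (the four-word list is illustrative):
from collections import defaultdict
from itertools import combinations

def build_index(words):
    # Map each sorted-letter signature to the words that spell it.
    index = defaultdict(list)
    for word in words:
        index["".join(sorted(word))].append(word)
    return index

def longest_from(letters, index):
    letters = sorted(letters)
    # Each combination of an already-sorted sequence is itself sorted,
    # so it can be joined directly into a signature. Longest first.
    for length in range(len(letters), 0, -1):
        for combo in set(combinations(letters, length)):
            hits = index.get("".join(combo))
            if hits:
                return hits
    return []

index = build_index(["reawaken", "awaken", "prank", "linguist"])
print(longest_from("raepkwaen", index))  # ['reawaken']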
This is similar to an anagram problem I've worked on before. I solved that by using prime numbers to represent each letter, so the product of the letters of each word produces a number. To determine whether a given set of input characters is sufficient to make a word, just divide the product of the input characters by the product for the word you want to check. If there is no remainder then the input characters are sufficient. I've implemented it below. The output is:
$ python longest.py rasdaddea aosddna raepkwaen
rasdaddea --> sadder
aosddna --> soda
raepkwaen --> reawaken
You can find more details and a thorough explanation of the anagrams case at:
http://mostlyhighperformance.blogspot.com/2012/01/generating-anagrams-efficient-and-easy.html
This algorithm takes a small amount of time to set up the dictionary, and then individual checks are as easy as a single division for every word in the dictionary. There may be faster methods that rely on closing off parts of the dictionary if it lacks a letter, but these may end up performing worse if you have a large number of input letters, so that it is actually not able to close off any part of the dictionary.
import sys
from functools import reduce

def nextprime(x):
    # Trial-division search for the next prime after x.
    while True:
        x += 1
        for pot_fac in range(2, x):
            if x % pot_fac == 0:
                break
        else:
            return x

def prime_generator():
    '''Returns a generator that produces the next largest prime as
    compared to the one returned from this function the last time
    it was called. The first time it is called it will return 2.'''
    lastprime = 1
    while True:
        lastprime = nextprime(lastprime)
        yield lastprime

# Assign prime numbers to each lower case letter
gen = prime_generator()
primes = dict((chr(x), next(gen)) for x in range(ord('a'), ord('z') + 1))

product = lambda x: reduce(lambda m, n: m * n, x, 1)
make_key = lambda x: product(primes[y] for y in x)

try:
    words = open('words').readlines()
    words = [''.join(c for c in x.lower()
                     if ord('a') <= ord(c) <= ord('z'))
             for x in words]
    for x in words:
        try:
            make_key(x)
        except Exception:
            print(x)
            raise
except IOError:
    words = ['reawaken', 'awaken', 'enwrap', 'weaken', 'weaker']

words = dict((make_key(x), x) for x in words)

inputs = sys.argv[1:] if sys.argv[1:] else ['raepkwaen']
for letters in inputs:  # renamed from `input` to avoid shadowing the built-in
    letters_key = make_key(letters)
    results = [words[x] for x in words if letters_key % x == 0]
    result = next(reversed(sorted(results, key=len)))
    print(letters, '-->', result)
I started this last night shortly after you asked the question, but didn't get around to polishing it up until just now. This was my solution, which is basically a modified trie, a term I didn't know until today!
class Node(object):
    __slots__ = ('words', 'letter', 'child', 'sib')

    def __init__(self, letter, sib=None):
        self.words = []
        self.letter = letter
        self.child = None
        self.sib = sib

    def get_child(self, letter, create=False):
        child = self.child
        if not child or child.letter > letter:
            if create:
                self.child = Node(letter, child)
                return self.child
            return None
        return child.get_sibling(letter, create)

    def get_sibling(self, letter, create=False):
        node = self
        while node:
            if node.letter == letter:
                return node
            sib = node.sib
            if not sib or sib.letter > letter:
                if create:
                    node.sib = Node(letter, sib)
                    node = node.sib
                    return node
                return None
            node = sib
        return None

    def __repr__(self):
        return '<Node({}){}{}: {}>'.format(
            chr(self.letter), 'C' if self.child else '', 'S' if self.sib else '', self.words)

def add_word(root, word):
    word = word.lower().strip()
    letters = [ord(c) for c in sorted(word)]
    node = root
    for letter in letters:
        node = node.get_child(letter, True)
    node.words.append(word)

def find_max_word(root, word):
    word = word.lower().strip()
    letters = [ord(c) for c in sorted(word)]
    words = []

    def grab_words(root, letters):
        last = None
        for idx, letter in enumerate(letters):
            if letter == last:  # prevents duplication
                continue
            node = root.get_child(letter)
            if node:
                words.extend(node.words)
                grab_words(node, letters[idx + 1:])
            last = letter

    grab_words(root, letters)
    return words

root = Node(0)
with open('/path/to/dict/file', 'rt') as f:
    for word in f:
        add_word(root, word)
Testing:
>>> def nonrepeating_words():
... return find_max_word(root, 'abcdefghijklmnopqrstuvwxyz')
...
>>> sorted(nonrepeating_words(), key=len)[-10:]
['ambidextrously', 'troublemakings', 'dermatoglyphic', 'hydromagnetics', 'hydropneumatic', 'pyruvaldoxines', 'hyperabductions', 'uncopyrightable', 'dermatoglyphics', 'endolymphaticus']
>>> len(nonrepeating_words())
67590
I think I prefer dermatoglyphics to uncopyrightable for longest word, myself. Performance-wise, utilizing a ~500k word dictionary (from here),
>>> import timeit
>>> timeit.timeit(nonrepeating_words, number=100)
62.8912091255188
>>>
So, on average, 6/10ths of a second (on my i5-2500) to find all sixty-seven thousand words that contain no repeating letters.
The big differences between this implementation and a regular trie (which make it even further from a DAWG) are that words are stored in the trie in relation to their sorted letters, so the word 'dog' is stored under the same path as 'god': d-g-o. The second bit is the find_max_word algorithm, which makes sure every possible letter combination is visited by continually lopping off its head and re-running the search.
Oh, and just for giggles:
>>> sorted(find_max_word(root, 'RAEPKWAEN'), key=len)[-5:]
['wakener', 'rewaken', 'reawake', 'reawaken', 'awakener']
Another approach, similar to #market's answer, is to precompute a 'bitmask' for each word in the dictionary. Bit 0 is set if the word contains at least one A, bit 1 is set if it contains at least one B, and so on up to bit 25 for Z.
If you want to search for all words in the dictionary that could be made up from a combination of letters, you start by forming the bitmask for the collection of letters. You can then filter out all of the words that use other letters by checking whether wordBitmask & ~lettersBitMask is zero. If this is zero, the word only uses letters available in the collection, and so could be valid. If this is non-zero, it uses a letter not available in the collection and so is not allowed.
The advantage of this approach is that the bitwise operations are fast. The vast majority of words in the dictionary will use at least one of the 17 or more letters that aren't in the collection given, and you can speedily discount them all. However, for the minority of words that make it through the filter, there is one more check that you still have to make. You still need to check that words aren't using letters more often than they appear in the collection. For example, the word 'weakener' must be disallowed because it has three 'e's, whereas there are only two in the collection of letters RAEPKWAEN. The bitwise approach alone will not filter out this word since each letter in the word appears in the collection.
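A minimal sketch of the bitmask filter (the four-word list is illustrative; a real run would use the whole dictionary):
def bitmask(word):
    # Bit 0 is set for 'a', bit 1 for 'b', ..., bit 25 for 'z'.
    mask = 0
    for c in word:
        mask |= 1 << (ord(c) - ord('a'))
    return mask

words = ["reawaken", "weakener", "prawn", "zebra"]
letters_mask = bitmask("raepkwaen")

# Cheap filter: keep only words that use no letter outside the collection.
candidates = [w for w in words if bitmask(w) & ~letters_mask == 0]
print(candidates)  # ['reawaken', 'weakener', 'prawn'] ('zebra' needs z and b)

# 'weakener' survives the bitmask but should fail the follow-up
# multiplicity check: it needs three 'e's and the collection has only two.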
When looking for words longer than 10 letters, you may try to iterate over the long words (there are not that many words with more than 10 letters) and check whether you have the required letters in your set.
The problem is that you have to find all those len(word) >= 10 words first.
So, what I would do: when reading the dictionary, split the words into two categories, shorts and longs. Process the shorts by iterating over every possible permutation, then process the longs by iterating over them and checking whether they are possible.
Of course many optimisations are possible for both paths.
Construct a trie (prefix tree) from your dictionary. You may want to cache it.
Walk this trie and remove whole branches that do not fit your bag of letters.
At this point, your trie is the representation of all words in your dictionary that can be constructed from your bag of letters.
Just take the longer one(s) :-)
Edit: you may also use a DAWG (Directed Acyclic Word Graph), which will have fewer vertices. Although I haven't read it, this Wikipedia article has a link about The World's Fastest Scrabble Program.
DAWG (Directed Acyclic Word Graph)
Mark Wutka was kind enough to provide some Pascal code here:
http://www.wutka.com/dawg.html
http://www.wutka.com/DictConvert.ZIP
In case you have a text file with sorted words, this code does the job:
UsrWrd = input()  # enter the scrambled letters here
with open('words.db', 'r') as f:
    for Line in f:
        for Word in Line.split():
            if len(Word) == len(UsrWrd) and set(Word) == set(UsrWrd):
                print(Word)
                break
        else:
            continue
        break  # stop after the first match
Note that set(Word) == set(UsrWrd) compares the sets of distinct letters and ignores repeat counts, so this finds same-length words built from the same letters rather than strict anagrams.
