Cumulative distribution in dictionary - Python

I'm trying to calculate a cumulative distribution and store it in a dictionary. It should take the letters from a given text, work out the probability of each letter from how often it appears in the text, and from those probabilities build the cumulative distribution.
I don't know if I'm doing it the right way, but here's my code:
with open('text') as infile:
    text = infile.read()
letters = list(text)
letter_freqs = Counter(letters(text))
letter_sum = len(letters)
letter_proba = [letter_freqs[letter]/letter_sum for letter in letters(text)]
And now I want to calculate the cumulative distribution and plot it like a histogram. Can someone help me?

The following should at least run (which your code as posted won't):
import collections, itertools
with open('text') as infile:
    letters = list(infile.read())  # not just letters: whitespace & punct, too
letter_freqs = collections.Counter(letters)
letter_sum = len(letters)
letters_set = sorted(set(letters))
d = {l: letter_freqs[l]/letter_sum for l in letters_set}
cum = itertools.accumulate(d[l] for l in letters_set)
cum_d = dict(zip(letters_set, cum))
Now cum_d is a dictionary mapping each character (not just letters, of course, since you've done nothing to exclude whitespace and punctuation) to the cumulative probability of that character and all those before it in alphabetical order. How you plan to "plot" a dictionary, I have no idea. But hey, at least this does run, and it produces something that might fit at least one interpretation of the vague specs you give for the task!-)
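If the goal is a histogram-style plot, here is a minimal matplotlib sketch (assuming the cum_d dictionary built above; matplotlib is not part of the original question):

import matplotlib.pyplot as plt

keys = sorted(cum_d)  # alphabetical order, matching the cumulative sums
plt.bar(range(len(keys)), [cum_d[k] for k in keys])
plt.xticks(range(len(keys)), keys)
plt.ylabel('cumulative probability')
plt.show()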

Related

What is the most efficient way to calculate the distance between a word and the other words in a list?

I am working on correcting Turkish words by using Levenshtein distance. First of all, I detect wrongly written words and compare them with a list that contains all Turkish words. The list contains about 1,300,000 words. I use Levenshtein distance to compare each word with the words in the list. Here is the relevant part of my code.
import codecs
import textdistance
from nltk.tokenize import word_tokenize

index_to_track_document_order = 1
log_text = ''
main_directory = "C:\\words.txt"
f = codecs.open(main_directory, mode="rb", encoding="utf-8")
f = f.readlines()
similarity = 0
text_to_find = 'aktarıları'
best_fit_word = ''
for line in f:
    word = word_tokenize(line, language='turkish')[0]
    root = word_tokenize(line, language='turkish')[1]
    new_similarity = textdistance.levenshtein.normalized_similarity(text_to_find, word) * 100
    if new_similarity > similarity:
        similarity = new_similarity
        best_fit_word = word
        if similarity > 90:
            print(best_fit_word, str(similarity))
As I mentioned, words.txt contains more than a million records, so my code takes more than 5 minutes to complete. How can I optimize the code so that it completes in a shorter time? Thank you.
Index your words by length. Most similar words are of the same length, or one or two lengths apart. The word cat (length 3) is similar to the word can (length 3), but it won't be very similar to caterpillar (length 11), so there is no reason to compute the Levenshtein distance of two words with a big difference in length. In total, you save a lot of comparisons, because you only compare words of nearly the same length.
#creating a dictionary of words by length
word_dict = {}
for word in f:
    word_length = len(word)
    if word_length in word_dict:
        word_dict[word_length].append(word)
    else:
        word_dict[word_length] = [word]

#now let's compare words with nearly the same length as our text_to_find
target_length = len(text_to_find)
x = 2  # the maximum length difference we'd like to allow
for i in range(target_length - x, target_length + x + 1):  # inclusive of both ends
    if i in word_dict:
        # loop through all the words of that given length
        for word in word_dict[i]:
            new_similarity = textdistance.levenshtein.normalized_similarity(text_to_find, word) * 100
            if new_similarity > similarity:
                similarity = new_similarity
                best_fit_word = word
                if similarity > 90:
                    print(best_fit_word, str(similarity))
Note: word_dict needs to be built only once. You can save it as a pickle if necessary.
Also, I did not test the code, but the general idea should be clear. One could even extend the idea and widen the allowed length difference dynamically if a sufficiently similar word has not been found yet.
Every time you write
similarity = new_similarity
the old value of new_similarity is not lost; you are simply copying it into similarity.
Also, instead of loading the whole file with readlines(), you can iterate over the file object or use yield: a generator does not store all the values in memory, it produces them on the fly.
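For example, a minimal sketch of reading the word list lazily (the helper name iter_words is hypothetical; the path is the one from the question):

import codecs

def iter_words(path):
    # yield one stripped line at a time instead of keeping the whole list in memory
    with codecs.open(path, mode="r", encoding="utf-8") as fin:
        for line in fin:
            yield line.strip()

for word in iter_words("C:\\words.txt"):
    pass  # compare each word here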

Matching two-word variants with each other if they don't match alphabetically

I'm doing a NLP project with my university, collecting data on words in Icelandic that exist both spelled with an i and with a y (they sound the same in Icelandic fyi) where the variants are both actual words but do not mean the same thing. Examples of this would include leyti (an approximation in time) and leiti (a grassy hill), or kirkja (church) and kyrkja (choke). I have a dataset of 2 million words. I have already collected two wordlists, one of which includes words spelled with a y and one includes the same words spelled with a i (although they don't seem to match up completely, as the y-list is a bit longer, but that's a separate issue). My problem is that I want to end up with pairs of words like leyti - leiti, kyrkja - kirkja, etc. But, as y is much later in the alphabet than i, it's no good just sorting the lists and pairing them up that way. I also tried zipping the lists while checking the first few letters to see if I can find a match but that leaves out all words that have y or i as the first letter. Do you have a suggestion on how I might implement this?
Try something like this:
s = "trydfydfgfay"
l = list(s)
candidateWords = []
for idx, c in enumerate(l):
if c=='y':
newList = l.copy()
newList[idx] = "i"
candidateWord = "".join(newList)
candidateWords.append(candidateWord)
print(candidateWords)
#['tridfydfgfay', 'trydfidfgfay', 'trydfydfgfai']
#look up these words to see if they are real words
I do not think this is a programming challenge; it looks more like an NLP challenge in itself. Spelling variations are often a hurdle one faces during preprocessing.
I would suggest that you use an edit-distance-based approach to identify word pairs while allowing for some variation. Specifically for the kind of problem that you have described above, I would recommend the Jaro-Winkler distance. This method can give a higher similarity score to word pairs that differ only in specific character pairs, say y and i.
All these approaches are implemented in the jellyfish library.
You may also have a look at the fuzzywuzzy package. Hope this helps.
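As a rough sketch of the pairing idea (recent jellyfish releases expose jaro_winkler_similarity, older ones call it jaro_winkler, so adjust to your version; the word lists here are just the examples from the question):

import jellyfish

y_words = ['leyti', 'kyrkja']
i_words = ['leiti', 'kirkja', 'hestur']

pairs = []
for yw in y_words:
    # pick the i-spelled word with the highest Jaro-Winkler similarity
    best = max(i_words, key=lambda iw: jellyfish.jaro_winkler_similarity(yw, iw))
    pairs.append((yw, best))
print(pairs)  # [('leyti', 'leiti'), ('kyrkja', 'kirkja')]

In practice you would also want a minimum similarity cutoff, so a y-word with no real i-counterpart is not paired with an unrelated word.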
So this accomplishes my task, kind of an easy not-that-pretty solution I suppose but it works:
wordlist = open("data.txt", "r", encoding='utf-8')
y_wordlist = open("y_wordlist.txt", "w+", encoding='utf-8')
all_words = []
y_words = []
for word in wordlist:
    word = word.lower()
    all_words.append(word)
for word in all_words:
    if "y" in word:
        y_words.append(word)
word_dict = {}
for word in y_words:
    newwith1y = word.replace("y", "i", 1)
    newwith2y = word.replace("y", "i", 2)
    newyback = word[::-1].replace("y", "i", 1)
    newyback = newyback[::-1]
    # keep all candidate i-spellings, not just the last one
    word_dict[word] = [newwith1y, newwith2y, newyback]
for key, values in word_dict.items():
    for value in values:
        if value in all_words:
            y_wordlist.write(key)
            y_wordlist.write(" - ")
            y_wordlist.write(value)
            y_wordlist.write("\n")

Auto-correct the words from the list in Python

I want to auto-correct the words which are in my list.
Say I have a list
kw = ['tiger','lion','elephant','black cat','dog']
I want to check if these words appear in my sentences. If they are wrongly spelled I want to correct them. I don't intend to touch any words other than those in the given list.
Now I have a list of str
s = ["I saw a tyger","There are 2 lyons","I mispelled Kat","bulldogs"]
Expected output:
['tiger','lion',None,'dog']
My Efforts:
import difflib
op = [difflib.get_close_matches(i,kw,cutoff=0.5) for i in s]
print(op)
My Output:
[[], [], [], ['dog']]
The problem with the above code is that I want to compare entire sentences, and my kw list can have entries of more than one word (up to 4-5 words).
If I lower the cutoff value it starts returning words which it should not.
So even if I plan to create bigrams and trigrams from the given sentences, it would consume a lot of time.
So is there a way to implement this?
I have explored a few more libraries like autocorrect, hunspell, etc., but with no success.
You could implement something based on Levenshtein distance.
It's interesting to note elasticsearch's implementation: https://www.elastic.co/guide/en/elasticsearch/guide/master/fuzziness.html
Clearly, bieber is a long way from beaver—they are too far apart to be
considered a simple misspelling. Damerau observed that 80% of human
misspellings have an edit distance of 1. In other words, 80% of
misspellings could be corrected with a single edit to the original
string.
Elasticsearch supports a maximum edit distance, specified with the
fuzziness parameter, of 2.
Of course, the impact that a single edit has on a string depends on
the length of the string. Two edits to the word hat can produce mad,
so allowing two edits on a string of length 3 is overkill. The
fuzziness parameter can be set to AUTO, which results in the following
maximum edit distances:
0 for strings of one or two characters
1 for strings of three, four, or five characters
2 for strings of more than five characters
I like to use pyxDamerauLevenshtein myself.
pip install pyxDamerauLevenshtein
So you could do a simple implementation like:
keywords = ['tiger','lion','elephant','black cat','dog']
from pyxdameraulevenshtein import damerau_levenshtein_distance

def correct_sentence(sentence):
    new_sentence = []
    for word in sentence.split():
        budget = 2
        n = len(word)
        if n < 3:
            budget = 0
        elif 3 <= n < 6:
            budget = 1
        if budget:
            for keyword in keywords:
                if damerau_levenshtein_distance(word, keyword) <= budget:
                    new_sentence.append(keyword)
                    break
            else:
                new_sentence.append(word)
        else:
            new_sentence.append(word)
    return " ".join(new_sentence)
Just make sure you use a better tokenizer or this will get messy, but you get the point. Also note that this is unoptimized and will be really slow with a lot of keywords. You should implement some kind of bucketing so you don't match every word against every keyword; a sketch of that idea follows.
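For example, a minimal length-bucketing sketch (the helper candidates_for is hypothetical; it reuses the keywords list and the damerau_levenshtein_distance function from above):

from collections import defaultdict
from pyxdameraulevenshtein import damerau_levenshtein_distance

keywords = ['tiger', 'lion', 'elephant', 'black cat', 'dog']

# bucket the keywords by length once, up front
buckets = defaultdict(list)
for kw in keywords:
    buckets[len(kw)].append(kw)

def candidates_for(word, budget):
    # only compare against keywords whose length is within the edit budget
    for length in range(len(word) - budget, len(word) + budget + 1):
        for kw in buckets[length]:
            if damerau_levenshtein_distance(word, kw) <= budget:
                yield kw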
Here is one way using difflib.SequenceMatcher. The SequenceMatcher class allows you to measure sentence similarity with its ratio method; you only need to provide a suitable threshold in order to keep words with a ratio that falls above that threshold:
def find_similar_word(s, kw, thr=0.5):
    from difflib import SequenceMatcher
    out = []
    for i in s:
        f = False
        for j in i.split():
            for k in kw:
                if SequenceMatcher(a=j, b=k).ratio() > thr:
                    out.append(k)
                    f = True
                if f:
                    break
            if f:
                break
        else:
            out.append(None)
    return out
Output
find_similar_word(s, kw)
['tiger', 'lion', None, 'dog']
Although this is slightly different from your expected output (it is a list of lists instead of a list of strings), I think it is a step in the right direction. The reason I chose this method is that you can have multiple corrections per sentence. That is why I added another example sentence.
import difflib
import itertools
kw = ['tiger','lion','elephant','black cat','dog']
s = ["I saw a tyger","There are 2 lyons","I mispelled Kat","bulldogs", "A tyger is different from a doog"]
op = [[difflib.get_close_matches(j,kw,cutoff=0.5) for j in i.split()] for i in s]
op = [list(itertools.chain(*o)) for o in op]
print(op)
The output generated is:
[['tiger'], ['lion'], [], ['dog'], ['tiger', 'dog']]
The trick is to split each sentence on whitespace.
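If you want output closer to the originally expected format (a single keyword per sentence, or None when nothing matches), a small post-processing step is enough; this keeps only the first match per sentence:

op = [o[0] if o else None for o in op]
print(op)
# ['tiger', 'lion', None, 'dog', 'tiger']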

Python Text processing (str.contains)

I am using str.contains for text analytics in Pandas. For the sentence "My latest Data job was an Analyst", I want a combination of the words "Data" & "Analyst", but at the same time I want to specify the number of words between the two words used for the combination (here it is 2 words between "Data" and "Analyst"). Currently I am using (DataFile.XXX.str.contains('job') & DataFile.XXX.str.contains('Analyst')) to get the counts for "job Analyst".
How can I specify the number of words between the 2 words in the str.contains syntax?
Thanks in advance
You can't. At least, not in a simple or standardized way.
Even the basics, like how you define a "word," are a lot more complex than you probably imagine. Both word parsing and lexical proximity (e.g. "are two words within distance D of one another in sentence s?") are the realm of natural language processing (NLP). NLP and proximity searches are not part of basic Pandas, nor of Python's standard string processing. You could import something like NLTK, the Natural Language Toolkit, to solve this problem in a general way, but that's a whole 'nother story.
Let's look at a simple approach. First you need a way to parse a string into words. The following is rough by NLP standards, but will work for simpler cases:
import re

def parse_words(s):
    """
    Simple parser to grab English words from a string.
    CAUTION: A simplistic solution to a hard problem.
    Many possibly-important edge- and corner-cases
    are not handled. Just one example: hyphenated words.
    """
    return re.findall(r"\w+(?:'[st])?", s, re.I)
E.g.:
>>> parse_words("and don't think this day's last moment won't come ")
['and', "don't", 'think', 'this', "day's", 'last', 'moment', "won't", 'come']
Then you need a way to find all the indices in a list where a target word is found:
def list_indices(target, seq):
    """
    Return all indices in seq at which the target is found.
    """
    indices = []
    cursor = 0
    while True:
        try:
            index = seq.index(target, cursor)
        except ValueError:
            return indices
        else:
            indices.append(index)
            cursor = index + 1
And finally a decision making wrapper:
def words_within(target_words, s, max_distance, case_insensitive=True):
    """
    Determine if the two target words are within max_distance positions of one
    another in the string s.
    """
    if len(target_words) != 2:
        raise ValueError('must provide 2 target words')
    # fold case for case insensitivity
    if case_insensitive:
        s = s.casefold()
        target_words = [tw.casefold() for tw in target_words]
        # for Python 2, replace `casefold` with `lower`
    # parse words and establish their logical positions in the string
    words = parse_words(s)
    target_indices = [list_indices(t, words) for t in target_words]
    # words not present
    if not target_indices[0] or not target_indices[1]:
        return False
    # compute all combinations of distance for the two words
    # (there may be more than one occurrence of a word in s)
    actual_distances = [i2 - i1 for i2 in target_indices[1] for i1 in target_indices[0]]
    # answer whether the minimum observed distance is <= our specified threshold
    return min(actual_distances) <= max_distance
So then:
>>> s = "and don't think this day's last moment won't come at last"
>>> words_within(["THIS", 'last'], s, 2)
True
>>> words_within(["think", 'moment'], s, 2)
False
The only thing left to do is map that back to Pandas:
import pandas as pd

df = pd.DataFrame({'desc': [
    'My latest Data job was an Analyst',
    'some day my prince will come',
    'Oh, somewhere over the rainbow bluebirds fly',
    "Won't you share a common disaster?",
    'job! rainbow! analyst.'
]})
df['ja2'] = df.desc.apply(lambda x: words_within(["job", 'analyst'], x, 2))
df['ja3'] = df.desc.apply(lambda x: words_within(["job", 'analyst'], x, 3))
This is basically how you'd solve the problem. Keep in mind, it's a rough and simplistic solution. Some simply-posed questions are not simply-answered. NLP questions are often among them.
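For reference, printing the two new columns for the sample frame above should give roughly the following ('job' and 'analyst' are three word positions apart in the first sentence, two apart in the last):

print(df[['ja2', 'ja3']])
#      ja2    ja3
# 0  False   True
# 1  False  False
# 2  False  False
# 3  False  False
# 4   True   True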

In python, how can I distinguish between a human readable word and a random string?

Examples of words:
ball
encyclopedia
tableau
Examples of random strings:
qxbogsac
jgaynj
rnnfdwpm
Of course it may happen that a random string will actually be a word in some language, or look like one. But basically a human being is able to say whether something looks 'random' or not, just by checking whether they are able to pronounce it.
I was trying to calculate entropy to distinguish the two, but it's far from perfect. Do you have any other ideas or algorithms that work?
There is one important requirement though: I can't use heavyweight libraries like nltk or use dictionaries. Basically what I need is some simple and quick heuristic that works in most cases.
I developed a Python 3 package called Nostril for a problem closely related to what the OP asked: deciding whether text strings extracted during source-code mining are class/function/variable/etc. identifiers or random gibberish. It does not use a dictionary, but it does incorporate a rather large table of n-gram frequencies to support its probabilistic assessment of text strings. (I'm not sure if that qualifies as a "dictionary".) The approach does not check pronunciation, and its specialization may make it unsuitable for general word/nonword detection; nevertheless, perhaps it will be useful for either the OP or someone else looking to solve a similar problem.
Example: the following code,
from nostril import nonsense

real_test = ['bunchofwords', 'getint', 'xywinlist', 'ioFlXFndrInfo',
             'DMEcalPreshowerDigis', 'httpredaksikatakamiwordpresscom']
junk_test = ['faiwtlwexu', 'asfgtqwafazfyiur', 'zxcvbnmlkjhgfdsaqwerty']
for s in real_test + junk_test:
    print('{}: {}'.format(s, 'nonsense' if nonsense(s) else 'real'))
will produce the following output:
bunchofwords: real
getint: real
xywinlist: real
ioFlXFndrInfo: real
DMEcalPreshowerDigis: real
httpredaksikatakamiwordpresscom: real
faiwtlwexu: nonsense
asfgtqwafazfyiur: nonsense
zxcvbnmlkjhgfdsaqwerty: nonsense
Caveat: I am not a natural language expert.
Assuming that what is mentioned in the link If You Can Raed Tihs, You Msut Be Raelly Smrat is authentic, a simple approach would be:
Have an English dictionary (I believe the approach is language agnostic)
Create a Python dict of the words, with keys as the first and last character of the words in the dictionary
from collections import defaultdict

words = defaultdict(list)
with open("your_dict.txt") as fin:
    for word in fin:
        word = word.strip()  # drop the trailing newline
        words[word[0] + word[-1]].append(word)
Now for any given word, search the dictionary (remember key is the first and last character of the word)
for matches in words[needle[0] + needle[-1]]:
Compare whether the characters in the dictionary value and your needle match:
for match in words[needle[0] + needle[-1]]:
    if sorted(match) == sorted(needle):
        print("Human Readable Word")
A comparably slower approach would be to use difflib.get_close_matches(word, possibilities[, n][, cutoff])
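For example (the word list here is just illustrative, following the example in the difflib documentation):

import difflib

# returns the dictionary entries that look closest to the needle
print(difflib.get_close_matches('appel', ['ape', 'apple', 'peach', 'puppy']))
# ['apple', 'ape']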
If you really mean that your metric of randomness is pronounceability, you're getting into the realm of phonotactics: the allowed sequences of sounds in a language. As @ChrisPosser points out in his comment to your question, these allowed sequences of sounds are language-specific.
This question only makes sense within a specific language.
Whichever language you choose, you might have some luck with an n-gram model trained over the letters themselves (as opposed to the words, which is the usual approach). Then you can calculate a score for a particular string and set a threshold under which a string is random and over which a string is something like a word.
EDIT: Someone has done this already and actually implemented it: https://stackoverflow.com/a/6298193/583834
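A minimal sketch of such a letter-level model (a simplified character-bigram score, not the implementation linked above; train.txt is a hypothetical file of known-good words, one per line):

import math
from collections import Counter

def train_bigrams(path):
    # count character bigrams over a corpus of ordinary words
    counts = Counter()
    with open(path) as fin:
        for line in fin:
            w = '^' + line.strip().lower() + '$'
            counts.update(w[i:i+2] for i in range(len(w) - 1))
    return counts, sum(counts.values())

def avg_log_prob(word, counts, total):
    w = '^' + word.lower() + '$'
    # add-one smoothing so unseen bigrams don't zero out the score
    logs = [math.log((counts[w[i:i+2]] + 1) / (total + 1)) for i in range(len(w) - 1)]
    return sum(logs) / len(logs)

# strings scoring below a tuned threshold can be treated as random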
Works quite well for me:
VOWELS = "aeiou"
PHONES = ['sh', 'ch', 'ph', 'sz', 'cz', 'sch', 'rz', 'dz']
def isWord(word):
if word:
consecutiveVowels = 0
consecutiveConsonents = 0
for idx, letter in enumerate(word.lower()):
vowel = True if letter in VOWELS else False
if idx:
prev = word[idx-1]
prevVowel = True if prev in VOWELS else False
if not vowel and letter == 'y' and not prevVowel:
vowel = True
if prevVowel != vowel:
consecutiveVowels = 0
consecutiveConsonents = 0
if vowel:
consecutiveVowels += 1
else:
consecutiveConsonents +=1
if consecutiveVowels >= 3 or consecutiveConsonents > 3:
return False
if consecutiveConsonents == 3:
subStr = word[idx-2:idx+1]
if any(phone in subStr for phone in PHONES):
consecutiveConsonents -= 1
continue
return False
return True
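A quick usage check with examples from the question:

print(isWord("ball"))      # True
print(isWord("qxbogsac"))  # False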
Use PyDictionary.
You can install PyDictionary using following command.
easy_install -U PyDictionary
Now in code:
from PyDictionary import PyDictionary
dictionary=PyDictionary()
a = ['ball', 'asdfg']
for item in a:
x = dictionary.meaning(item)
if x==None:
print item + ': Not a valid word'
else:
print item + ': Valid'
As far as I know, you can use PyDictionary for some languages other than English.
I wrote this logic to detect the number of consecutive vowels and consonants in a string. You can choose the threshold based on the language.
import re
import numpy as np

def get_num_vowel_bunches(txt, num_consq=3):
    len_txt = len(txt)
    num_viol = 0
    if len_txt >= num_consq:
        pos_iter = re.finditer('[aeiou]', txt)
        pos_mat = np.zeros((num_consq, len_txt), dtype=int)
        for idx in pos_iter:
            pos_mat[0, idx.span()[0]] = 1
        for i in np.arange(1, num_consq):
            pos_mat[i, 0:-1] = pos_mat[i-1, 1:]
        sum_vec = np.sum(pos_mat, axis=0)
        num_viol = sum(sum_vec == num_consq)
    return num_viol

def get_num_consonent_bunches(txt, num_consq=3):
    len_txt = len(txt)
    num_viol = 0
    if len_txt >= num_consq:
        pos_iter = re.finditer('[bcdfghjklmnpqrstvwxz]', txt)
        pos_mat = np.zeros((num_consq, len_txt), dtype=int)
        for idx in pos_iter:
            pos_mat[0, idx.span()[0]] = 1
        for i in np.arange(1, num_consq):
            pos_mat[i, 0:-1] = pos_mat[i-1, 1:]
        sum_vec = np.sum(pos_mat, axis=0)
        num_viol = sum(sum_vec == num_consq)
    return num_viol
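A quick sanity check on examples from the question; a real word should produce few or no "bunches", while a random consonant-heavy string produces several:

for txt in ['encyclopedia', 'rnnfdwpm']:
    # the random string yields multiple consonant bunches, the real word none
    print(txt, get_num_vowel_bunches(txt), get_num_consonent_bunches(txt))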
