String search by coincidence? - python

I just wanted to know if there's a simple way to search a string for coincidences (shared letters) with another one in Python, or if anyone knows how it could be done.
To make myself clear, I'll give an example.
text_sample = "baguette is a french word"
words_to_match = ("baguete","wrd")
letters_to_match = ('b','a','g','u','t','e','w','r','d') # With just one 'e'
coincidences = sum(text_sample.count(x) for x in letters_to_match)
# coincidences = 14 Current output
# coincidences = 10 Expected output
My current method breaks words_to_match into single characters, as in letters_to_match, but then every occurrence of each letter anywhere in "baguette is a french word" gets counted (coincidences = 14).
But I want to obtain coincidences = 10, where only the letters matched inside the corresponding words of "baguette is a french word" are counted, i.e. by checking the similarity between words_to_match and the words in text_sample.
How do I get my expected output?

It looks like you need the length of the longest common subsequence (LCS). See the algorithm in the Wikipedia article for computing it. You may also be able to find a C extension which computes it quickly; a quick search turns up several, including pylcs. After installation (pip install pylcs):
import pylcs
text_sample = "baguette is a french word"
words_to_match = ("baguete","wrd")
print(pylcs.lcs2(text_sample, ' '.join(words_to_match)))
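If you would rather not install anything, here is a minimal pure-Python sketch of the dynamic-programming LCS length that the Wikipedia article describes (lcs_length is just an illustrative name):
def lcs_length(a, b):
    """Length of the longest common subsequence of strings a and b."""
    prev = [0] * (len(b) + 1)          # dp row for the prefix of a seen so far
    for ch_a in a:
        curr = [0]
        for j, ch_b in enumerate(b, start=1):
            if ch_a == ch_b:
                curr.append(prev[j - 1] + 1)
            else:
                curr.append(max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]

text_sample = "baguette is a french word"
words_to_match = ("baguete", "wrd")
print(lcs_length(text_sample, ' '.join(words_to_match)))  # 11 here; the joining space also matches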

First, flatten words_to_match into a tuple of letters:
words = ''
for item in words_to_match:
    words += item
letters = []  # create a list
for letter in words:
    letters.append(letter)
letters = tuple(letters)
Then walk through the text and see whether the letters appear in sequence:
x = 0
coincidence = 0
for i in text_sample:
    if x < len(letters) and letters[x] == i:
        x += 1
        coincidence += 1
Also, if it doesn't have to be in sequence, just do:
for i in text_sample:
    if i in letters: coincidence += 1
(note that in some versions of Python you'll need the increment on a new line)

Related

Replace string in list then join list to form new string

I have a project where I need to do the following:
User inputs a sentence
intersect sentence with list for matching strings
replace one of the matching strings with a new string
print the original sentence featuring the replacement
fruits = ['Quince', 'Raisins', 'Raspberries', 'Rhubarb', 'Strawberries', 'Tangelo', 'Tangerines']
# Asks the user for a sentence.
random_sentence = str(input('Please enter a random sentence:\n')).title()
stripped_sentence = random_sentence.strip(',.!?')
split_sentence = stripped_sentence.split()
# Solve for single word fruit names
sentence_intersection = set(fruits).intersection(split_sentence)
# Finds and replaces at least one instance of a fruit in the sentence with “Brussels Sprouts”.
intersection_as_list = list(sentence_intersection)
intersection_as_list[-1] = 'Brussels Sprouts'
Example Input: "I would like some raisins and strawberries."
Expected Output: "I would like some raisins and Brussels Sprouts."
But I can't figure out how to join the string back together after making the replacement. Any help is appreciated!
You can do it with a regex:
(?i)Quince|Raisins|Raspberries|Rhubarb|Strawberries|Tangelo|Tangerines
This pattern will match any of your words in a case insensitive way (?i).
In Python, you can obtain that pattern by joining your fruits into a single string. Then you can use the re.sub function to replace your first matching word with "Brussels Sprouts".
import re
fruits = ['Quince', 'Raisins', 'Raspberries', 'Rhubarb', 'Strawberries', 'Tangelo', 'Tangerines']
# Asks the user for a sentence.
#random_sentence = str(input('Please enter a random sentence:\n')).title()
sentence = "I would like some raisins and strawberries."
pattern = '(?i)' + '|'.join(fruits)
replacement = 'Brussels Sprouts'
print(re.sub(pattern, replacement, sentence, 1))
Output:
I would like some Brussels Sprouts and strawberries.
Create a set of lowercase possible word matches, then use a replacement function.
If a word is found, clear the set, so replacement works only once.
import re
fruits = ['Quince', 'Raisins', 'Raspberries', 'Rhubarb', 'Strawberries', 'Tangelo', 'Tangerines']
fruit_set = {x.lower() for x in fruits}
s = "I would like some raisins and strawberries."
def repfunc(m):
    w = m.group(1)
    if w.lower() in fruit_set:
        fruit_set.clear()
        return "Brussel Sprouts"
    else:
        return w
print(re.sub(r"(\w+)", repfunc, s))
prints:
I would like some Brussel Sprouts and strawberries.
That method has the advantage of being O(1) on lookup. If there are a lot of possible words it will beat the linear search that | performs when testing word after word.
It's simpler to replace just the first occurrence, but replacing the last occurrence, or a random occurrence is also doable. First you have to count how many fruits are in the sentence, then decide which replacement is effective in a second pass.
Like this (not very beautiful, using a lot of globals and all):
total = 0
def countfunc(m):
    global total
    w = m.group(1)
    if w.lower() in fruit_set:
        total += 1
    return w  # re.sub replacement functions must return a string

idx = 0
def repfunc(m):
    global idx
    w = m.group(1)
    if w.lower() in fruit_set:
        if total == idx + 1:
            return "Brussel Sprouts"
        else:
            idx += 1
            return w
    else:
        return w

re.sub(r"(\w+)", countfunc, s)
print(re.sub(r"(\w+)", repfunc, s))
The first sub just counts how many fruits would match; the second function replaces only when the counter matches. Here the last occurrence is selected.

How to determine the number of negation words per sentence

I would like to know how to count how many negative words (no, not) and abbreviation (n't) there are in a sentence and in the whole text.
For number of sentences I am applying the following one:
df["sent"]=df['text'].str.count('[\w][\.!\?]')
However, this gives me the count of sentences in a text. I need the number of negation words per sentence and within the whole text.
Can you please give me some tips?
The expected output for text column is shown below
text                                    sent  count_n_s  count_tot
I haven't tried it yet                  1     1          1
I do not like it. What do you think?    2     0.5        1
It's marvellous!!!                      1     0          0
No, I prefer the other one.             2     1          1
count_n_s is given by counting the total number of negation words per sentence, then dividing by the number of sentences.
I tried
split_w = re.split("\w+",df['text'])
neg_words=['no','not','n\'t']
words = [w for i,w in enumerate(split_w) if i and (split_w[i-1] in neg_words)]
This would get a count of total negations in the text (not for individual sentences):
import re
NEG = r"""(?:^(?:no|not)$)|n't"""
NEG_RE = re.compile(NEG, re.VERBOSE)

def get_count(text):
    count = 0
    for word in text:
        if NEG_RE.search(word):
            count += 1
    return count

df['text_list'] = df['text'].apply(lambda x: x.split())
df['count'] = df['text_list'].apply(lambda x: get_count(x))
To get the count of negations for individual lines, use the code below. For words like haven't, you can add them to neg_words, since after the cleanup step strips everything but the leading word characters, a word containing n't is no longer counted as a negation.
import re
str1 = '''I haven't tried it yet
I do not like it. What do you think?
It's marvellous!!!
No, I prefer the other one.'''
neg_words = ['no', 'not', "n't"]
for text in str1.split('\n'):
    split_w = re.split(r"\s", text.lower())
    # to get rid of special characters such as the comma in 'No,' use the search below
    split_w = [re.search(r'^\w+', w).group(0) for w in split_w]
    words = [w for w in split_w if w in neg_words]
    print(len(words))
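If you want this wired back into the DataFrame layout from the question, a minimal pandas sketch along these lines might compute both columns. The sentence-counting regex is the approximate one from the question, and the clamp to at least one sentence is my own assumption so a one-line row without terminal punctuation doesn't divide by zero:
import re
import pandas as pd

df = pd.DataFrame({'text': ["I haven't tried it yet",
                            "I do not like it. What do you think?",
                            "It's marvellous!!!",
                            "No, I prefer the other one."]})

neg_re = re.compile(r"\b(?:no|not)\b|n't", re.IGNORECASE)

df['sent'] = df['text'].str.count(r'[\w][\.!\?]').clip(lower=1)   # at least one sentence per row
df['count_tot'] = df['text'].apply(lambda t: len(neg_re.findall(t)))
df['count_n_s'] = df['count_tot'] / df['sent']
print(df)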

How can I find if a word (string) occurs more than once in an input/list in python

For example if an example input is:
ASK NOT WHAT YOUR COUNTRY CAN DO FOR YOU ASK WHAT YOU CAN DO FOR YOUR COUNTRY
My program must return:
The word ‘COUNTRY’ occurs in the 5th and 17th positions.
I only need help for the part in finding if the string occurs more than once.
This is my attempt so far; I am new to Python, so sorry if my question seems too easily answered.
# wordsList=[]
words=input("Enter a sentence without punctuation:\n")
# wordsList.append(words)
# print(wordsList)
for i in words:
if i in words>1:
print(words)
# words.split(" ")
# print(words[0])
To find the number of occurrences
There are probably several ways of doing it. One simple way would be to split your sentence into a list and find the number of occurrences.
sentence = "ASK NOT WHAT YOUR COUNTRY CAN DO FOR YOU ASK WHAT YOU CAN DO FOR YOUR COUNTRY"
words_in_a_list = sentence.split(" ")
words_in_a_list.count("COUNTRY")
You could also use regular expressions, which would also be very easy to do.
import re
m = re.findall("COUNTRY", sentence)
To find the location of each occurrence
You can use search, which returns the span as well, and write a loop to find them all: once you know the location of one match, start searching the string from one character further on.
import re

def count_num_occurences(word, sentence):
    start = 0
    pattern = re.compile(word)
    start_locations = []
    while True:
        match_object = pattern.search(sentence, start)
        if match_object is not None:
            start_locations.append(match_object.start())
            start = 1 + match_object.start()
        else:
            break
    return start_locations
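A small usage sketch of the function above; note that it returns character offsets, so if you want word positions like the expected "5th and 17th" output, you can work from the split list instead:
sentence = 'ASK NOT WHAT YOUR COUNTRY CAN DO FOR YOU ASK WHAT YOU CAN DO FOR YOUR COUNTRY'
print(count_num_occurences('COUNTRY', sentence))   # character offsets of each occurrence

# 1-based word positions instead of character offsets
positions = [i + 1 for i, w in enumerate(sentence.split()) if w == 'COUNTRY']
print(positions)                                   # [5, 17]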
str = 'ASK NOT WHAT YOUR COUNTRY CAN DO FOR YOU ASK WHAT YOU CAN DO FOR YOUR COUNTRY'
# split your sentence and make it a set to get the unique parts
# then make it a list so you can iterate
parts = list(set(str.split(' ')))
# use count to get the nr of occurrences of each part in str
for part in parts:
    print(f'{part} {str.count(part)}x')
result
COUNTRY 2x
YOU 4x
ASK 2x
YOUR 2x
CAN 2x
NOT 1x
DO 2x
WHAT 2x
FOR 2x
or with positions
import re
str = 'ASK NOT WHAT YOUR COUNTRY CAN DO FOR YOU ASK WHAT YOU CAN DO FOR DO YOUR COUNTRY'
# split your sentence and make it a set to get the unique parts
# then make it a list so you can iterate
parts = list(set(str.split(' ')))
# use count to get the nr of occurrences of each part in str
for part in parts:
    test = re.findall(part, str)
    print(f'{part} {str.count(part)}x')
    for m in re.finditer(part, str):
        print(' found at', m.start())
result
DO 3x
found at 30
found at 58
found at 65
ASK 2x
found at 0
found at 41
COUNTRY 2x
found at 18
found at 73
YOUR 2x
found at 13
found at 68
WHAT 2x
found at 8
found at 45
YOU 4x
found at 13
found at 37
found at 50
found at 68
NOT 1x
found at 4
FOR 2x
found at 33
found at 61
CAN 2x
found at 26
found at 54
If you want only the words that occur more than once:
words = input("Enter a sentence without punctuation:\n").strip().split()
word_counts = {}
for word in words:
    if word in word_counts:
        word_counts[word] += 1
    else:
        word_counts[word] = 1
for word in word_counts.keys():
    if word_counts[word] > 1:
        print(word)
Just storing all the counts in a dictionary and then looping through the dictionary to print the ones that occur more than once.
It is also efficient, as it only goes through the input once and then once more through the dictionary.
If you want the actual positions of the words:
words = input("Enter a sentence without punctuation:\n").strip().split()
word_counts = {}
for i in range(len(words)):
    word = words[i]
    if word in word_counts:
        word_counts[word].append(i)  # keep a list of indices
    else:
        word_counts[word] = [i]
for word in word_counts.keys():
    if len(word_counts[word]) > 1:
        print("{0} found in positions: {1}".format(word, word_counts[word]))

In python, how can I distinguish between a human readable word and a random string?

Examples of words:
ball
encyclopedia
tableau
Examples of random strings:
qxbogsac
jgaynj
rnnfdwpm
Of course it may happen that a random string will actually be a word in some language or look like one. But basically a human being is able to say whether something looks 'random' or not, just by checking whether you are able to pronounce it.
I was trying to calculate entropy to distinguish the two, but it's far from perfect. Do you have any other ideas, algorithms that work?
There is one important requirement though, I can't use heavy-weight libraries like nltk or use dictionaries. Basically what I need is some simple and quick heuristic that works in most cases.
I developed a Python 3 package called Nostril for a problem closely related to what the OP asked: deciding whether text strings extracted during source-code mining are class/function/variable/etc. identifiers or random gibberish. It does not use a dictionary, but it does incorporate a rather large table of n-gram frequencies to support its probabilistic assessment of text strings. (I'm not sure if that qualifies as a "dictionary".) The approach does not check pronunciation, and its specialization may make it unsuitable for general word/nonword detection; nevertheless, perhaps it will be useful for either the OP or someone else looking to solve a similar problem.
Example: the following code,
from nostril import nonsense
real_test = ['bunchofwords', 'getint', 'xywinlist', 'ioFlXFndrInfo',
             'DMEcalPreshowerDigis', 'httpredaksikatakamiwordpresscom']
junk_test = ['faiwtlwexu', 'asfgtqwafazfyiur', 'zxcvbnmlkjhgfdsaqwerty']
for s in real_test + junk_test:
    print('{}: {}'.format(s, 'nonsense' if nonsense(s) else 'real'))
will produce the following output:
bunchofwords: real
getint: real
xywinlist: real
ioFlXFndrInfo: real
DMEcalPreshowerDigis: real
httpredaksikatakamiwordpresscom: real
faiwtlwexu: nonsense
asfgtqwafazfyiur: nonsense
zxcvbnmlkjhgfdsaqwerty: nonsense
Caveat: I am not a natural language expert.
Assuming whatever is mentioned in the link If You Can Raed Tihs, You Msut Be Raelly Smrat is authentic, a simple approach would be:
Have an English dictionary (I believe the approach is language-agnostic)
Create a Python dict of the words, keyed by the first and last character of each word in the dictionary:
from collections import defaultdict

words = defaultdict(list)
with open("your_dict.txt") as fin:
    for word in fin:
        word = word.strip()
        words[word[0] + word[-1]].append(word)
Now, for any given needle word, look up the dictionary bucket (remember the key is the first and last character of the word) and compare whether the sorted characters of each candidate match the sorted characters of your needle:
for match in words[needle[0] + needle[-1]]:
    if sorted(match) == sorted(needle):
        print("Human Readable Word")
A comparably slower approach would be to use difflib.get_close_matches(word, possibilities[, n][, cutoff]).
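For instance, a quick sketch of that difflib call (possibilities stands in for your word list):
import difflib

possibilities = ['ball', 'encyclopedia', 'tableau']
print(difflib.get_close_matches('tabluea', possibilities, n=1, cutoff=0.8))   # likely ['tableau']
print(difflib.get_close_matches('qxbogsac', possibilities, n=1, cutoff=0.8))  # likely []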
If you really mean that your metric of randomness is pronounceability, you're getting into the realm of phonotactics: the allowed sequences of sounds in a language. As @ChrisPosser points out in his comment to your question, these allowed sequences of sounds are language-specific.
This question only makes sense within a specific language.
Whichever language you choose, you might have some luck with an n-gram model trained over the letters themselves (as opposed to the words, which is the usual approach). Then you can calculate a score for a particular string and set a threshold under which a string is random and over which a string is something like a word.
EDIT: Someone has done this already and actually implemented it: https://stackoverflow.com/a/6298193/583834
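A minimal sketch of that idea, assuming you can supply your own small training word list (letter bigrams, average log-probability per bigram, threshold chosen by eye):
import math
from collections import Counter

def train_bigrams(words):
    """Count letter bigrams (with ^ and $ as start/end markers) over a training word list."""
    counts = Counter()
    for w in words:
        w = '^' + w.lower() + '$'
        counts.update(w[i:i + 2] for i in range(len(w) - 1))
    return counts

def avg_log_prob(word, counts):
    """Average log-probability per bigram; higher means more word-like."""
    total = sum(counts.values())
    w = '^' + word.lower() + '$'
    n = len(w) - 1
    logp = 0.0
    for i in range(n):
        # add-one smoothing so unseen bigrams do not zero everything out
        logp += math.log((counts[w[i:i + 2]] + 1) / (total + 1))
    return logp / n

training = ['ball', 'encyclopedia', 'tableau', 'french', 'word', 'country']  # toy word list
bigram_counts = train_bigrams(training)
for s in ['tableau', 'qxbogsac']:
    print(s, round(avg_log_prob(s, bigram_counts), 2))
# pick a threshold on the score that separates your real words from gibberish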
Works quite well for me:
VOWELS = "aeiou"
PHONES = ['sh', 'ch', 'ph', 'sz', 'cz', 'sch', 'rz', 'dz']
def isWord(word):
    if word:
        consecutiveVowels = 0
        consecutiveConsonents = 0
        for idx, letter in enumerate(word.lower()):
            vowel = True if letter in VOWELS else False
            if idx:
                prev = word[idx-1]
                prevVowel = True if prev in VOWELS else False
                if not vowel and letter == 'y' and not prevVowel:
                    vowel = True
                if prevVowel != vowel:
                    consecutiveVowels = 0
                    consecutiveConsonents = 0
            if vowel:
                consecutiveVowels += 1
            else:
                consecutiveConsonents += 1
            if consecutiveVowels >= 3 or consecutiveConsonents > 3:
                return False
            if consecutiveConsonents == 3:
                subStr = word[idx-2:idx+1]
                if any(phone in subStr for phone in PHONES):
                    consecutiveConsonents -= 1
                    continue
                return False
        return True
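For example, with two of the question's samples (a quick check, not a guarantee for every input):
print(isWord('ball'))      # True
print(isWord('qxbogsac'))  # False: starts with three consonants and no recognised phone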
Use PyDictionary.
You can install PyDictionary using following command.
easy_install -U PyDictionary
Now in code:
from PyDictionary import PyDictionary
dictionary = PyDictionary()
a = ['ball', 'asdfg']
for item in a:
    x = dictionary.meaning(item)
    if x is None:
        print(item + ': Not a valid word')
    else:
        print(item + ': Valid')
As far as I know, you can use PyDictionary for some other languages than English.
I wrote this logic to count runs of consecutive vowels and consonants in a string. You can choose the threshold based on the language.
import re
import numpy as np

def get_num_vowel_bunches(txt, num_consq=3):
    len_txt = len(txt)
    num_viol = 0
    if len_txt >= num_consq:
        pos_iter = re.finditer('[aeiou]', txt)
        pos_mat = np.zeros((num_consq, len_txt), dtype=int)
        for idx in pos_iter:
            pos_mat[0, idx.span()[0]] = 1
        for i in np.arange(1, num_consq):
            pos_mat[i, 0:-1] = pos_mat[i-1, 1:]
        sum_vec = np.sum(pos_mat, axis=0)
        num_viol = sum(sum_vec == num_consq)
    return num_viol

def get_num_consonent_bunches(txt, num_consq=3):
    len_txt = len(txt)
    num_viol = 0
    if len_txt >= num_consq:
        pos_iter = re.finditer('[bcdfghjklmnpqrstvwxz]', txt)
        pos_mat = np.zeros((num_consq, len_txt), dtype=int)
        for idx in pos_iter:
            pos_mat[0, idx.span()[0]] = 1
        for i in np.arange(1, num_consq):
            pos_mat[i, 0:-1] = pos_mat[i-1, 1:]
        sum_vec = np.sum(pos_mat, axis=0)
        num_viol = sum(sum_vec == num_consq)
    return num_viol
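A hedged usage sketch on top of these two functions (looks_random is a made-up name, and the any-run threshold is just a starting point):
def looks_random(txt, num_consq=3):
    """Flag a string that contains any run of num_consq vowels or consonants."""
    return (get_num_vowel_bunches(txt, num_consq) +
            get_num_consonent_bunches(txt, num_consq)) > 0

print(looks_random('ball'))      # False
print(looks_random('rnnfdwpm'))  # True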

Python code flow does not work as expected?

I am trying to process various texts with regex and Python's NLTK, which is at http://www.nltk.org/book. I am trying to create a random text generator and I am having a slight problem. Firstly, here is my code flow:
Enter a sentence as input (this is called the trigger string and is assigned to a variable)
Get the longest word in the trigger string
Search the whole Project Gutenberg database for sentences that contain this word (regardless of upper or lower case)
Return the longest sentence that contains the word I spoke about in step 3
Append the sentences from step 1 and step 4 together
Assign the sentence from step 4 as the new 'trigger' sentence and repeat the process (note that I have to get the longest word in the second sentence and continue like that, and so on)
So far, I have been able to do this only once. When I try to keep it going, the program only keeps printing the first sentence my search yields. It should actually look for the longest word in this new sentence and keep applying the code flow described above.
Below is my code along with a sample input/output :
Sample input
"Thane of code"
Sample output
"Thane of code Norway himselfe , with terrible numbers , Assisted by that most disloyall Traytor , The Thane of Cawdor , began a dismall Conflict , Till that Bellona ' s Bridegroome , lapt in proofe , Confronted him with selfe - comparisons , Point against Point , rebellious Arme ' gainst Arme , Curbing his lauish spirit : and to conclude , The Victorie fell on vs"
Now this should actually take the sentence that starts with 'Norway himselfe....' and look for the longest word in it and do the steps above and so on but it doesn't. Any suggestions? Thanks.
import nltk
from nltk.corpus import gutenberg
triggerSentence = raw_input("Please enter the trigger sentence: ") #get input str
split_str = triggerSentence.split() #split the sentence into words
longestLength = 0
longestString = ""
montyPython = 1
while montyPython:
    #code to find the longest word in the trigger sentence input
    for piece in split_str:
        if len(piece) > longestLength:
            longestString = piece
            longestLength = len(piece)
    listOfSents = gutenberg.sents() #all sentences of gutenberg are assigned -list of list format-
    listOfWords = gutenberg.words() #all words in gutenberg books -list format-
    #I tip my hat to Mr. Alex Martelli for this part, which helps me find the longest sentence
    lt = longestString.lower() #this line tells you whether word list has the longest word in a case-insensitive way.
    longestSentence = max((listOfWords for listOfWords in listOfSents if any(lt == word.lower() for word in listOfWords)), key=len)
    #get longest sentence -list format with every word of sentence being an actual element-
    longestSent = [longestSentence]
    for word in longestSent: #convert the list longestSentence to an actual string
        sstr = " ".join(word)
    print triggerSentence + " " + sstr
    triggerSentence = sstr
How about this?
1. You find the longest word in the trigger.
2. You find the longest word in the longest sentence containing the word found in 1.
3. The word from 1. is also the longest word of the sentence from 2.
What happens? Hint: the answer starts with "Infinite". To correct the problem, you may find a set of already-used words (in lower case) useful, as sketched below.
By the way, when do you think montyPython becomes False so the program can finish?
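One minimal way to apply that suggestion, with corpus and longest_sentence_containing as toy stand-ins for the Gutenberg lookup in the question's loop:
corpus = ["Norway himselfe with terrible numbers",
          "the thane of cawdor began a dismall conflict"]

def longest_sentence_containing(word):
    hits = [s for s in corpus if word in s.lower().split()]
    return max(hits, key=len) if hits else None

seen_words = set()
trigger = "Thane of code"
while trigger:
    longest = max(trigger.split(), key=len).lower()
    if longest in seen_words:
        break                                  # about to repeat ourselves: stop
    seen_words.add(longest)
    trigger = longest_sentence_containing(longest)
    if trigger:
        print(trigger)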
Rather than searching the entire corpus each time, it may be faster to construct a single map from word to the longest sentence containing that word. Here's my (untested) attempt to do this.
import collections
from nltk.corpus import gutenberg

def words_in(sentence):
    """Generate all words in the sentence (lower-cased)"""
    for word in sentence.split():
        word = word.strip('.,"\'-:;')
        if word:
            yield word.lower()

def make_sentence_map(sentences):
    """Construct a map from words to the longest sentence containing the word."""
    result = collections.defaultdict(str)
    for tokens in sentences:
        sentence = ' '.join(tokens)  # gutenberg.sents() yields lists of tokens
        for word in words_in(sentence):
            if len(sentence) > len(result[word]):
                result[word] = sentence
    return result

def generate_random_text(sentence, sentence_map):
    while True:
        yield sentence
        longest_word = max(words_in(sentence), key=len)
        sentence = sentence_map[longest_word]

sentence_map = make_sentence_map(gutenberg.sents())
for sentence in generate_random_text('Thane of code.', sentence_map):
    print sentence
Mr. Hankin's answer is more elegant, but the following is more in keeping with the approach you began with:
import sys
import string
import nltk
from nltk.corpus import gutenberg

def longest_element(p):
    """return the first element of p which has the greatest len()"""
    max_len = 0
    elem = None
    for e in p:
        if len(e) > max_len:
            elem = e
            max_len = len(e)
    return elem

def downcase(p):
    """returns a list of words in p shifted to lower case"""
    return map(string.lower, p)

def unique_words():
    """it turns out unique_words was never referenced so this is here
    for pedagogy"""
    # there are 2.6 million words in the gutenburg corpus but only ~42k unique
    # ignoring case, let's pare that down a bit
    words = set()
    for word in gutenberg.words():
        words.add(word.lower())
    print 'gutenberg.words() has', len(words), 'unique caseless words'
    return words

print 'loading gutenburg corpus...'
sentences = []
for sentence in gutenberg.sents():
    sentences.append(downcase(sentence))

trigger = sys.argv[1:]
target = longest_element(trigger).lower()
last_target = None
while target != last_target:
    matched_sentences = []
    for sentence in sentences:
        if target in sentence:
            matched_sentences.append(sentence)
    print '===', target, 'matched', len(matched_sentences), 'sentences'
    longestSentence = longest_element(matched_sentences)
    print ' '.join(longestSentence)
    trigger = longestSentence
    last_target = target
    target = longest_element(trigger).lower()
Given your sample sentence though, it reaches fixation in two cycles:
$ python nltkgut.py Thane of code
loading gutenburg corpus...
=== target thane matched 24 sentences
norway himselfe , with terrible
numbers , assisted by that most
disloyall traytor , the thane of
cawdor , began a dismall conflict ,
till that bellona ' s bridegroome ,
lapt in proofe , confronted him with
selfe - comparisons , point against
point , rebellious arme ' gainst arme
, curbing his lauish spirit : and to
conclude , the victorie fell on vs
=== target bridegroome matched 1 sentences
norway himselfe , with
terrible numbers , assisted by that
most disloyall traytor , the thane of
cawdor , began a dismall conflict ,
till that bellona ' s bridegroome ,
lapt in proofe , confronted him with
selfe - comparisons , point against
point , rebellious arme ' gainst arme
, curbing his lauish spirit : and to
conclude , the victorie fell on vs
Part of the trouble with the response to the last problem is that it did what you asked, but you asked a more specific question than you wanted an answer to. Thus the response got bogged down in some rather complicated list expressions that I'm not sure you understood. I suggest that you make more liberal use of print statements and don't import code if you don't know what it does. While unwrapping the list expressions I found (as noted) that you never used the corpus wordlist. Functions are a help also.
You are assigning "split_str" outside of the loop, so it gets the original value and then keeps it. You need to assign it at the beginning of the while loop, so it changes each time.
import nltk
from nltk.corpus import gutenberg
triggerSentence = raw_input("Please enter the trigger sentence: ") #get input str
longestLength = 0
longestString = ""
montyPython = 1
while montyPython:
    #so this is run every time through the loop
    split_str = triggerSentence.split() #split the sentence into words
    #code to find the longest word in the trigger sentence input
    for piece in split_str:
        if len(piece) > longestLength:
            longestString = piece
            longestLength = len(piece)
    listOfSents = gutenberg.sents() #all sentences of gutenberg are assigned -list of list format-
    listOfWords = gutenberg.words() #all words in gutenberg books -list format-
    #I tip my hat to Mr. Alex Martelli for this part, which helps me find the longest sentence
    lt = longestString.lower() #this line tells you whether word list has the longest word in a case-insensitive way.
    longestSentence = max((listOfWords for listOfWords in listOfSents if any(lt == word.lower() for word in listOfWords)), key=len)
    #get longest sentence -list format with every word of sentence being an actual element-
    longestSent = [longestSentence]
    for word in longestSent: #convert the list longestSentence to an actual string
        sstr = " ".join(word)
    print triggerSentence + " " + sstr
    triggerSentence = sstr
