I would like to know how to count how many negative words (no, not) and abbreviations (n't) there are in each sentence and in the whole text.
For number of sentences I am applying the following one:
df["sent"] = df['text'].str.count(r'[\w][\.!\?]')
However, this gives me the count of sentences in a text. I need the number of negation words per sentence and within the whole text.
Can you please give me some tips?
The expected output for the text column is shown below:
text                                  sent  count_n_s  count_tot
I haven't tried it yet                   1        1          1
I do not like it. What do you think?     2        0.5        1
It's marvellous!!!                       1        0          0
No, I prefer the other one.              2        1          1
count_n_s is given by counting the total number of negation words per sentence, then dividing by the number of sentences.
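The whole computation can be sketched as a plain function (a sketch under assumptions: it reuses the question's sentence-counting regex, treats a text with no terminal punctuation as one sentence, and matches no/not/n't case-insensitively); it can then be applied row-wise with df['text'].apply:

```python
import re

# negation: standalone "no"/"not", or the contraction "n't" anywhere in a word
NEG_RE = re.compile(r"\b(?:no|not)\b|n't", re.IGNORECASE)
# the question's sentence heuristic: a word character followed by . ! or ?
SENT_RE = re.compile(r"\w[.!?]")

def negation_stats(text):
    # a text with no terminal punctuation still counts as one sentence
    n_sent = max(len(SENT_RE.findall(text)), 1)
    n_neg = len(NEG_RE.findall(text))
    return n_sent, n_neg / n_sent, n_neg

# usage with the question's DataFrame (column names assumed):
# df[['sent', 'count_n_s', 'count_tot']] = df['text'].apply(
#     lambda t: pd.Series(negation_stats(t)))
```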
I tried
split_w = re.split("\w+",df['text'])
neg_words=['no','not','n\'t']
words = [w for i,w in enumerate(split_w) if i and (split_w[i-1] in neg_words)]
This would get a count of total negations in the text (not for individual sentences):
import re

NEG = r"(?:^(?:no|not)$)|n't"
NEG_RE = re.compile(NEG)

def get_count(words):
    count = 0
    for word in words:
        if NEG_RE.search(word):
            count += 1
    return count

df['text_list'] = df['text'].apply(lambda x: x.split())
df['count'] = df['text_list'].apply(get_count)
To get a count of negations for individual lines, use the code below. Note that stripping a word such as haven't down to its leading word characters leaves just haven, so contractions are not counted; if you want them counted, add the full forms (e.g. haven't) to neg_words.
import re
str1 = '''I haven't tried it yet
I do not like it. What do you think?
It's marvellous!!!
No, I prefer the other one.'''
neg_words = ['no', 'not', "n't"]
for text in str1.split('\n'):
    split_w = re.split(r"\s+", text.lower())
    # to get rid of special characters such as the comma in 'No,' use the search below
    # (the search returns None for tokens with no leading word character, hence the guard)
    split_w = [m.group(0) for m in (re.search(r'^\w+', w) for w in split_w) if m]
    words = [w for w in split_w if w in neg_words]
    print(len(words))
Related
I have a project where I need to do the following:
User inputs a sentence
intersect sentence with list for matching strings
replace one of the matching strings with a new string
print the original sentence featuring the replacement
fruits = ['Quince', 'Raisins', 'Raspberries', 'Rhubarb', 'Strawberries', 'Tangelo', 'Tangerines']
# Asks the user for a sentence.
random_sentence = str(input('Please enter a random sentence:\n')).title()
stripped_sentence = random_sentence.strip(',.!?')
split_sentence = stripped_sentence.split()
# Solve for single word fruit names
sentence_intersection = set(fruits).intersection(split_sentence)
# Finds and replaces at least one instance of a fruit in the sentence with “Brussels Sprouts”.
intersection_as_list = list(sentence_intersection)
intersection_as_list[-1] = 'Brussels Sprouts'
Example Input: "I would like some raisins and strawberries."
Expected Output: "I would like some raisins and Brussels Sprouts."
But I can't figure out how to join the string back together after making the replacement. Any help is appreciated!
You can do it with a regex:
(?i)Quince|Raisins|Raspberries|Rhubarb|Strawberries|Tangelo|Tangerines
This pattern will match any of your words in a case insensitive way (?i).
In Python, you can obtain that pattern by joining your fruits into a single string. Then you can use the re.sub function to replace your first matching word with "Brussels Sprouts".
import re
fruits = ['Quince', 'Raisins', 'Raspberries', 'Rhubarb', 'Strawberries', 'Tangelo', 'Tangerines']
# Asks the user for a sentence.
#random_sentence = str(input('Please enter a random sentence:\n')).title()
sentence = "I would like some raisins and strawberries."
pattern = '(?i)' + '|'.join(fruits)
replacement = 'Brussels Sprouts'
print(re.sub(pattern, replacement, sentence, count=1))
Output:
I would like some Brussels Sprouts and strawberries.
Create a set of lowercase possible word matches, then use a replacement function.
If a word is found, clear the set, so replacement works only once.
import re
fruits = ['Quince', 'Raisins', 'Raspberries', 'Rhubarb', 'Strawberries', 'Tangelo', 'Tangerines']
fruit_set = {x.lower() for x in fruits}
s = "I would like some raisins and strawberries."
def repfunc(m):
    w = m.group(1)
    if w.lower() in fruit_set:
        fruit_set.clear()
        return "Brussels Sprouts"
    return w

print(re.sub(r"(\w+)", repfunc, s))
prints:
I would like some Brussels Sprouts and strawberries.
That method has the advantage of O(1) lookup. If there are a lot of possible words, it will beat the linear scan that | performs when testing alternatives one after another.
It's simpler to replace just the first occurrence, but replacing the last occurrence, or a random one, is also doable. First you have to count how many fruits are in the sentence, then decide which replacement takes effect in a second pass. Like this (not very pretty, using a lot of globals and all):
total = 0
def countfunc(m):
    global total
    w = m.group(1)
    if w.lower() in fruit_set:
        total += 1
    return w  # re.sub requires the replacement function to return a string

idx = 0
def repfunc(m):
    global idx
    w = m.group(1)
    if w.lower() in fruit_set:
        if total == idx + 1:
            return "Brussels Sprouts"
        idx += 1
    return w

re.sub(r"(\w+)", countfunc, s)
print(re.sub(r"(\w+)", repfunc, s))
The first sub just counts how many fruits would match; then the second function replaces only when the counter matches. Here the last occurrence is selected.
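An alternative sketch that avoids the global counters: collect every match with finditer first, then splice the replacement text around the last one (assuming the same fruits list as above):

```python
import re

fruits = ['Quince', 'Raisins', 'Raspberries', 'Rhubarb',
          'Strawberries', 'Tangelo', 'Tangerines']
pattern = re.compile('|'.join(fruits), re.IGNORECASE)

def replace_last(sentence, replacement="Brussels Sprouts"):
    # gather all matches, then rebuild the string around the last one
    matches = list(pattern.finditer(sentence))
    if not matches:
        return sentence
    last = matches[-1]
    return sentence[:last.start()] + replacement + sentence[last.end():]

print(replace_last("I would like some raisins and strawberries."))
# I would like some raisins and Brussels Sprouts.
```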
I just wanted to know if there's a simple way to search a string for coincidences with another one in Python, or if anyone knows how it could be done.
To make myself clear I'll do an example.
text_sample = "baguette is a french word"
words_to_match = ("baguete","wrd")
letters_to_match = ('b','a','g','u','t','e','w','r','d') # With just one 'e'
coincidences = sum(text_sample.count(x) for x in letters_to_match)
# coincidences = 14 Current output
# coincidences = 10 Expected output
My current method breaks words_to_match into single characters, as in letters_to_match, but then every occurrence of those characters anywhere in "baguette is a french word" is counted (coincidences = 14). What I want is coincidences = 10, where baguette matches baguete (7 characters) and word matches wrd (3 characters), i.e. checking the similarity between words_to_match and the words in text_sample.
How do I get my expected output?
It looks like you need the length of the longest common subsequence (LCS). See the algorithm in the Wikipedia article for computing it. You may also be able to find a C extension which computes it quickly, for example pylcs. After installation (pip install pylcs):
import pylcs
text_sample = "baguette is a french word"
words_to_match = ("baguete","wrd")
print(pylcs.lcs2(text_sample, ' '.join(words_to_match))) #: 14
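If you'd rather not add a dependency, the quadratic dynamic-programming algorithm from the Wikipedia article is short enough to sketch in plain Python:

```python
def lcs_length(a, b):
    # classic O(len(a) * len(b)) LCS dynamic programme,
    # keeping only the previous row of the table
    prev = [0] * (len(b) + 1)
    for x in a:
        cur = [0]
        for j, y in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if x == y else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

print(lcs_length("baguette is a french word", "baguete wrd"))
```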
first, join words_to_match into a single string and break it into letters:
words = ''
for item in words_to_match:
    words += item
letters = tuple(words)  # a string is iterable, so tuple() yields its characters
then see if the letters appear in sequence:
x = 0
coincidence = 0
for i in text_sample:
    if x < len(letters) and letters[x] == i:
        x += 1
        coincidence += 1
and if they don't have to be in sequence, just do:
coincidence = 0
for i in text_sample:
    if i in letters:
        coincidence += 1
I've been trying to figure out and put together a somewhat complicated (for me) piece of syntax with the .join function for hours already, but just can't get it to work.
The task is to remove all duplicate words from a string obtained through scraping process but leave all duplicate numbers and digits intact.
Example Code:
from collections import OrderedDict
examplestring = 'Your Brand22 For Awesome Product 1 Year 1 User Subscription Brand22'
print(' '.join(OrderedDict((w,w) for w in examplestring.split()).keys()))
>>> Your Brand22 For Awesome Product 1 Year User Subscription
Note that the above code works but also removes the duplicated 1 (1 Year 1 User), which I need to keep. I'm trying to leave the numbers intact by testing each word with the isdigit() function as .split() goes through the string word by word, but I cannot figure out the proper syntax for it.
result = ' '.join(OrderedDict((w,w) for w in examplestring.split()).keys() if w not isdigit())
result = ([' '.join(OrderedDict((w,w) for w in examplestring.split()).keys())] if w not isdigit())
result = ' '.join([(OrderedDict((w,w) for w in examplestring.split()).keys()] if w not isdigit()))
I tried many more different variations of the above one-liner code and might be even missing the if statement, but these brackets everywhere confuse me so I'm grateful if anyone can help me out.
Goal: Remove duplicate words but keep repeated digits/numbers inside the string
You can solve the problem by modifying the keys when a key is a number. Here I'm using enumerate to make each numeric key unique:
examplestring = 'Your Brand22 For Awesome Product 1 Year 1 User Subscription Brand22'
res = ' '.join(OrderedDict(((word + str(idx) if word.isnumeric() else word), word) for idx, word in enumerate(examplestring.split())).values())
print(res)
Output:
Your Brand22 For Awesome Product 1 Year 1 User Subscription
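If you'd rather keep it in a single expression, a seen-set inside the comprehension also works (a sketch; it relies on set.add returning None, so the test keeps only the first occurrence of each non-digit word):

```python
examplestring = 'Your Brand22 For Awesome Product 1 Year 1 User Subscription Brand22'

seen = set()
result = ' '.join(
    w for w in examplestring.split()
    # digits always pass; other words pass only on first sight
    if w.isdigit() or not (w in seen or seen.add(w))
)
print(result)
# Your Brand22 For Awesome Product 1 Year 1 User Subscription
```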
Does this work for you?
example_str = 'Your Brand22 For Awesome Product 1 Year 1 User Subscription Brand22'
words_list = example_str.split()
numeric_flags_list = [word.isnumeric() for word in words_list]
unique_words = []
for word, numeric_flag in zip(words_list, numeric_flags_list):
    if numeric_flag:
        unique_words.append(word)
    elif word not in unique_words:
        unique_words.append(word)
print(' '.join(unique_words))
For example, if the input is:
ASK NOT WHAT YOUR COUNTRY CAN DO FOR YOU ASK WHAT YOU CAN DO FOR YOUR COUNTRY
My program must return:
The word ‘COUNTRY’ occurs in the 5th and 17th positions.
I only need help for the part in finding if the string occurs more than once.
This is my attempt so far; I am new to Python, so sorry if my question seems too easily answered.
# wordsList=[]
words=input("Enter a sentence without punctuation:\n")
# wordsList.append(words)
# print(wordsList)
for i in words:
    if i in words>1:
        print(words)
# words.split(" ")
# print(words[0])
To find the number of occurrences
There are probably several ways of doing it. One simple way would be to split your sentence to a list and find the number of occurrences.
sentence = "ASK NOT WHAT YOUR COUNTRY CAN DO FOR YOU ASK WHAT YOU CAN DO FOR YOUR COUNTRY"
words_in_a_list = sentence.split(" ")
words_in_a_list.count("COUNTRY")
You could also use regular expressions, which would also be very easy:
import re
m = re.findall("COUNTRY", sentence)
To find the location of each occurrence
You can use search, which returns the span as well, and write a loop to find them all. Once you know the location of one match, keep searching the string from one character further on.
import re

def find_occurrences(word, sentence):
    start = 0
    pattern = re.compile(word)
    start_locations = []
    while True:
        match_object = pattern.search(sentence, start)
        if match_object is not None:
            start_locations.append(match_object.start())
            start = 1 + match_object.start()
        else:
            break
    return start_locations
sentence = 'ASK NOT WHAT YOUR COUNTRY CAN DO FOR YOU ASK WHAT YOU CAN DO FOR YOUR COUNTRY'
# split your sentence and make it a set to get the unique parts,
# then make it a list so you can iterate
parts = list(set(sentence.split(' ')))
# count the occurrences of each part in the sentence
# (note: str.count matches substrings, so YOU also counts the YOU inside YOUR)
for part in parts:
    print(f'{part} {sentence.count(part)}x')
result
COUNTRY 2x
YOU 4x
ASK 2x
YOUR 2x
CAN 2x
NOT 1x
DO 2x
WHAT 2x
FOR 2x
or with positions
import re
sentence = 'ASK NOT WHAT YOUR COUNTRY CAN DO FOR YOU ASK WHAT YOU CAN DO FOR DO YOUR COUNTRY'
# split your sentence and make it a set to get the unique parts,
# then make it a list so you can iterate
parts = list(set(sentence.split(' ')))
# count the occurrences of each part in the sentence
for part in parts:
    print(f'{part} {sentence.count(part)}x')
    for m in re.finditer(part, sentence):
        print(' found at', m.start())
result
DO 3x
found at 30
found at 58
found at 65
ASK 2x
found at 0
found at 41
COUNTRY 2x
found at 18
found at 73
YOUR 2x
found at 13
found at 68
WHAT 2x
found at 8
found at 45
YOU 4x
found at 13
found at 37
found at 50
found at 68
NOT 1x
found at 4
FOR 2x
found at 33
found at 61
CAN 2x
found at 26
found at 54
If you want only the words that occur more than once:
words = input("Enter a sentence without punctuation:\n").strip().split()
word_counts = {}
for word in words:
    if word in word_counts:
        word_counts[word] += 1
    else:
        word_counts[word] = 1
for word in word_counts:
    if word_counts[word] > 1:
        print(word)
This stores all the counts in a dictionary and then loops through the dictionary to print the words that occur more than once. It is also efficient, as it goes through the input once and then once more through the dictionary.
If you want the actual positions of the words:
words = input("Enter a sentence without punctuation:\n").strip().split()
word_counts = {}
for i in range(len(words)):
    word = words[i]
    if word in word_counts:
        word_counts[word].append(i)  # keep a list of indices
    else:
        word_counts[word] = [i]
for word in word_counts:
    if len(word_counts[word]) > 1:
        print("{0} found in positions: {1}".format(word, word_counts[word]))
How do I sum up the frequencies of words using fd.items() from FreqDist?
>>> fd = FreqDist(text)
>>> most_freq_w = fd.keys()[:10] #gives me the most 10 frequent words in the text
>>> #here I should sum up numbers of each of these 10 freq words appear in the text
e.g. if each word in most_freq_w appears 10 times, the result should be 100
Note that I don't need the count of all words in the text, just the 10 most frequent.
I'm not familiar with nltk, but since FreqDist derives from dict, the following should work:
v = sorted(fd.values())  # in Python 3, values() returns a view, so sort a copy
count = sum(v[-10:])
To find the number of times a word appears in the corpus(your piece of text):
raw = "<your file>"
tokens = nltk.word_tokenize(raw)
fd = FreqDist(tokens)
print(fd['<your word here>'])
It has a pretty print feature
fd.pprint()
will do it.
If FreqDist is a mapping of words to their frequencies:
sum(map(fd.get, most_freq_w))
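In recent nltk versions, FreqDist subclasses collections.Counter, so fd.most_common(10) returns the top-10 (word, count) pairs directly, and summing their counts is one more line. Sketched here with a plain Counter so it runs without nltk (note that fd.keys()[:10] from the question fails on Python 3, where keys() is an unsliceable view):

```python
from collections import Counter

# stand-in for FreqDist(tokens); FreqDist supports the same Counter API
fd = Counter("the cat sat on the mat the end".split())

top10 = fd.most_common(10)            # [(word, count), ...] sorted by count
total = sum(count for word, count in top10)
print(top10, total)
```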