Unique word frequency using NLTK - python

Code to get the unique Word Frequency for the following using NLTK.
Seq Sentence
1 Let's try to be Good.
2 Being good doesn't make sense.
3 Good is always good.
Output:
{'good': 3, 'let': 1, 'try': 1, 'to': 1, 'be': 1, 'being': 1, 'doesn': 1, 't': 1, 'make': 1, 'sense': 1, 'is': 1, 'always': 1, '.': 3, "'": 2, 's': 1}

If you are particular about using NLTK, you can refer to the following code snippet:
import nltk

text1 = '''Seq Sentence
1 Let's try to be Good.
2 Being good doesn't make sense.
3 Good is always good.'''

# Lowercase the tokens so that 'Good' and 'good' are counted together
words = [w.lower() for w in nltk.tokenize.word_tokenize(text1)]
fdist1 = nltk.FreqDist(words)
# Drop purely numeric tokens (the Seq numbers 1, 2, 3)
filtered_word_freq = dict((word, freq) for word, freq in fdist1.items() if not word.isdigit())
print(filtered_word_freq)
Hope it helps.
Some parts were referred from:
How to check if string input is a number?
Dropping specific words out of an NLTK distribution beyond stopwords

Try this
from collections import Counter
import pandas as pd
import nltk

sno = nltk.stem.SnowballStemmer('english')
s = "1 Let's try to be Good. 2 Being good doesn't make sense. 3 Good is always good."
s1 = s.split(' ')  # the '': 6 in the output below implies the original string had doubled spaces
d = pd.DataFrame(s1)
s2 = d[0].apply(lambda x: sno.stem(x))  # stem each token: 'Being' -> 'be', 'try' -> 'tri', 'always' -> 'alway'
counts = Counter(s2)
print(counts)
Output will be:
Counter({'': 6, 'be': 2, 'good.': 2, 'good': 2, '1': 1, 'let': 1, 'tri': 1, 'to': 1, '2': 1, "doesn't": 1, 'make': 1, 'sense.': 1, '3': 1, 'is': 1, 'alway': 1})
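Note that 'good.' and 'good' are counted separately above because split(' ') leaves punctuation attached. A variant that tokenizes with NLTK before stemming, a sketch rather than part of the original answer, merges them:
from collections import Counter
import nltk

sno = nltk.stem.SnowballStemmer('english')
s = "1 Let's try to be Good. 2 Being good doesn't make sense. 3 Good is always good."
# word_tokenize splits punctuation off, so 'Good.' becomes 'Good' and '.'
stems = [sno.stem(w) for w in nltk.tokenize.word_tokenize(s)]
print(Counter(stems))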


Count total number of modal verbs in text

I am trying to create a custom collection of words as shown in the following Categories:
Modal    | Tentative   | Certainty   | Generalizing
Can      | Anyhow      | Undoubtedly | Generally
May      | anytime     | Ofcourse    | Overall
Might    | anything    | Definitely  | On the Whole
Must     | hazy        | No doubt    | In general
Shall    | hope        | Doubtless   | All in all
ought to | hoped       | Never       | Basically
will     | uncertain   | always      | Essentially
need     | undecidable | absolute    | Most
Be to    | occasional  | assure      | Every
Have to  | somebody    | certain     | Some
Would    | someone     | clear       | Often
Should   | something   | clearly     | Rarely
Could    | sort        | inevitable  | None
Used to  | sorta       | forever     | Always
I am reading text from a CSV file row by row:
import nltk
import numpy as np
import pandas as pd
from collections import Counter, defaultdict
from nltk.tokenize import word_tokenize

count = defaultdict(int)
header_list = ["modal", "Tentative", "Certainity", "Generalization"]
categorydf = pd.read_csv('Custom-Dictionary1.csv', names=header_list)

def analyze(file):
    df = pd.read_csv(file)
    modals = str(categorydf['modal'])
    tentative = str(categorydf['Tentative'])
    certainity = str(categorydf['Certainity'])
    generalization = str(categorydf['Generalization'])
    for text in df["Text"]:
        tokenize_text = text.split()
        for w in tokenize_text:
            if w in modals:
                count[w] += 1

analyze("test1.csv")
print(sum(count.values()))
print(count)
I want to find the number of Modal/Tentative/Certainty verbs that are present both in the above table and in each row of test1.csv, but I am not able to do so. The code above generates word frequencies like this:
19
defaultdict(<class 'int'>, {'to': 7, 'an': 1, 'will': 2, 'a': 7, 'all': 2})
See, 'an' and 'a' are not present in the table. I want the number of modal verbs to equal the total modal verbs present in each row of the test1.csv text.
test1.csv:
"When LIWC was first developed, the goal was to devise an efficient will system"
"Within a few years, it became clear that there are two very broad categories of words"
"Content words are generally nouns, regular verbs, and many adjectives and adverbs."
"They convey the content of a communication."
"To go back to the phrase “It was a dark and stormy night” the content words are: “dark,” “stormy,” and “night.”"
I am stuck and not getting anything. How can I proceed?
I've solved your task for the initial CSV format; it could of course be adapted to XML input if needed.
I've written quite a fancy solution using NumPy, which is why it may look a bit complex, but it runs very fast and is suitable for large data, even gigabytes.
It uses a sorted table of words, also sorts the text's words before counting, and does a sorted (binary) search into the table, hence it works in O(n log n) time.
For each text it outputs the original line first, then a Found line listing each word found in the table, in sorted order, as (Count, Category, (TableRow, TableCol)), then a Non-Found line listing the words not found in the table together with their counts (the number of occurrences of the word in the text).
A much simpler (but slower) similar solution is located after the first one.
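As a minimal standalone illustration of that sorted-search membership test (the word lists here are made up for the example):
import numpy as np

table = np.sort(np.array(['will', 'clear', 'generally'], dtype = np.str_))
words = np.array(['the', 'will', 'clear', 'goal'], dtype = np.str_)
pos = np.searchsorted(table, words)               # candidate positions in the sorted table
found = pos < table.size                          # guard against out-of-range positions
found[found] = table[pos[found]] == words[found]  # confirm exact matches
print(dict(zip(words.tolist(), found.tolist())))  # {'the': False, 'will': True, 'clear': True, 'goal': False}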
Try it online!
import io, pandas as pd, numpy as np
# Instead of io.StringIO(...) provide filename.
tab = pd.read_csv(io.StringIO("""
Modal,Tentative,Certainty,Generalizing
Can,Anyhow,Undoubtedly,Generally
May,anytime,Ofcourse,Overall
Might,anything,Definitely,On the Whole
Must,hazy,No doubt,In general
Shall,hope,Doubtless,All in all
ought to,hoped,Never,Basically
will,uncertain,always,Essentially
need,undecidable,absolute,Most
Be to,occasional,assure,Every
Have to,somebody,certain,Some
Would,someone,clear,Often
Should,something,clearly,Rarely
Could,sort,inevitable,None
Used to,sorta,forever,Always
"""))
tabc = np.array(tab.columns.values.tolist(), dtype = np.str_)
taba = tab.values.astype(np.str_)
tabw = np.char.lower(taba.ravel())
tabi = np.zeros([tabw.size, 2], dtype = np.int64)
tabi[:, 0], tabi[:, 1] = [e.ravel() for e in np.split(np.mgrid[:taba.shape[0], :taba.shape[1]], 2, axis = 0)]
t = np.argsort(tabw)
tabw, tabi = tabw[t], tabi[t, :]
texts = pd.read_csv(io.StringIO("""
Text
"When LIWC was first developed, the goal was to devise an efficient will system"
"Within a few years, it became clear that there are two very broad categories of words"
"Content words are generally nouns, regular verbs, and many adjectives and adverbs."
They convey the content of a communication.
"To go back to the phrase “It was a dark and stormy night” the content words are: “dark,” “stormy,” and “night.”"
""")).values[:, 0].astype(np.str_)
for i, (a, text) in enumerate(zip(map(np.array, np.char.split(texts)), texts)):
    vs, cs = np.unique(np.char.lower(a), return_counts = True)
    ps = np.searchsorted(tabw, vs)
    unc = np.zeros_like(a, dtype = np.bool_)
    psm = ps < tabi.shape[0]
    psm[psm] = tabw[ps[psm]] == vs[psm]
    print(
        i, ': Text:', text,
        '\nFound:',
        ', '.join([f'"{vs[i]}": ({cs[i]}, {tabc[tabi[ps[i], 1]]}, ({tabi[ps[i], 0]}, {tabi[ps[i], 1]}))'
            for i in np.flatnonzero(psm).tolist()]),
        '\nNon-Found:',
        ', '.join([f'"{vs[i]}": {cs[i]}'
            for i in np.flatnonzero(~psm).tolist()]),
        '\n',
    )
Outputs:
0 : Text: When LIWC was first developed, the goal was to devise an efficient will system
Found: "will": (1, Modal, (6, 0))
Non-Found: "an": 1, "developed,": 1, "devise": 1, "efficient": 1, "first": 1, "goal": 1, "liwc": 1, "system": 1, "the": 1, "to": 1, "was": 2, "when":
1
1 : Text: Within a few years, it became clear that there are two very broad categories of words
Found: "clear": (1, Certainty, (10, 2))
Non-Found: "a": 1, "are": 1, "became": 1, "broad": 1, "categories": 1, "few": 1, "it": 1, "of": 1, "that": 1, "there": 1, "two": 1, "very": 1, "withi
n": 1, "words": 1, "years,": 1
2 : Text: Content words are generally nouns, regular verbs, and many adjectives and adverbs.
Found: "generally": (1, Generalizing, (0, 3))
Non-Found: "adjectives": 1, "adverbs.": 1, "and": 2, "are": 1, "content": 1, "many": 1, "nouns,": 1, "regular": 1, "verbs,": 1, "words": 1
3 : Text: They convey the content of a communication.
Found:
Non-Found: "a": 1, "communication.": 1, "content": 1, "convey": 1, "of": 1, "the": 1, "they": 1
4 : Text: To go back to the phrase “It was a dark and stormy night” the content words are: “dark,” “stormy,” and “night.”
Found:
Non-Found: "a": 1, "and": 2, "are:": 1, "back": 1, "content": 1, "dark": 1, "go": 1, "night”": 1, "phrase": 1, "stormy": 1, "the": 2, "to": 2, "was":
1, "words": 1, "“dark,”": 1, "“it": 1, "“night.”": 1, "“stormy,”": 1
The second solution is implemented in pure Python just for simplicity; only the standard Python modules io and csv are used.
Try it online!
import io, csv
# Instead of io.StringIO(...) just read from filename.
tab = csv.DictReader(io.StringIO("""Modal,Tentative,Certainty,Generalizing
Can,Anyhow,Undoubtedly,Generally
May,anytime,Ofcourse,Overall
Might,anything,Definitely,On the Whole
Must,hazy,No doubt,In general
Shall,hope,Doubtless,All in all
ought to,hoped,Never,Basically
will,uncertain,always,Essentially
need,undecidable,absolute,Most
Be to,occasional,assure,Every
Have to,somebody,certain,Some
Would,someone,clear,Often
Should,something,clearly,Rarely
Could,sort,inevitable,None
Used to,sorta,forever,Always
"""))
texts = csv.DictReader(io.StringIO("""
"When LIWC was first developed, the goal was to devise an efficient will system"
"Within a few years, it became clear that there are two very broad categories of words"
"Content words are generally nouns, regular verbs, and many adjectives and adverbs."
They convey the content of a communication.
"To go back to the phrase “It was a dark and stormy night” the content words are: “dark,” “stormy,” and “night.”"
"""), fieldnames = ['Text'])
tabi = dict(sorted([(v.lower(), k) for e in tab for k, v in e.items()]))
texts = [e['Text'] for e in texts]
for text in texts:
    cnt, mod = {}, {}
    for word in text.lower().split():
        if word in tabi:
            cnt[word], mod[word] = cnt.get(word, 0) + 1, tabi[word]
    print(', '.join([f"'{word}': ({cnt[word]}, {mod[word]})" for word, _ in sorted(cnt.items(), key = lambda e: e[0])]))
It outputs:
'will': (1, Modal)
'clear': (1, Certainty)
'generally': (1, Generalizing)
I'm reading the CSV content from a StringIO for convenience, so that the code contains everything without needing extra files. In your case you'll surely want to read directly from files; for that you can do the same as in the next snippet:
Try it online!
import io, csv
tab = csv.DictReader(open('table.csv', 'r', encoding = 'utf-8-sig'))
texts = csv.DictReader(open('texts.csv', 'r', encoding = 'utf-8-sig'), fieldnames = ['Text'])
tabi = dict(sorted([(v.lower(), k) for e in tab for k, v in e.items()]))
texts = [e['Text'] for e in texts]
for text in texts:
    cnt, mod = {}, {}
    for word in text.lower().split():
        if word in tabi:
            cnt[word], mod[word] = cnt.get(word, 0) + 1, tabi[word]
    print(', '.join([f"'{word}': ({cnt[word]}, {mod[word]})" for word, _ in sorted(cnt.items(), key = lambda e: e[0])]))
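For completeness: the stray matches in the question's own attempt ('a', 'an', 'to') happen because str(categorydf['modal']) turns the whole column into one display string, so "if w in modals" becomes a substring test. A minimal set-based fix, a sketch assuming the same CSV layout as above, could look like this:
import pandas as pd
from collections import defaultdict

header_list = ["modal", "Tentative", "Certainity", "Generalization"]
categorydf = pd.read_csv('Custom-Dictionary1.csv', names=header_list)
# a set of lowercase words gives exact membership tests instead of substring matches
modals = set(categorydf['modal'].dropna().str.lower())

count = defaultdict(int)
for text in pd.read_csv('test1.csv')["Text"]:
    for w in text.lower().split():
        if w in modals:
            count[w] += 1
print(sum(count.values()))
print(dict(count))
Like the solutions above, this matches single tokens only; multi-word table entries such as 'ought to' would still need separate phrase handling.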

python count the number of words in the list of strings [duplicate]

This question already has answers here:
How to find the count of a word in a string
(9 answers)
Closed 2 years ago.
Consider:
doc = ["i am a fellow student", "we both are the good student", "a student works hard"]
I have this as input, and I just want to print the number of times each word in the whole list occurs.
For example, student occurs 3 times, so the expected output is student=3, a=2, etc.
I was able to print the unique words in the doc, but not the occurrences. Here is the function I used:
def fit(dataset):
    unique_words = set()
    if isinstance(dataset, (list,)):
        for row in dataset:
            for word in row.split(" "):
                if len(word) < 2:   # skips one-letter words such as 'i' and 'a'
                    continue
                unique_words.add(word)
    unique_words = sorted(list(unique_words))
    return unique_words

doc = fit(doc)
print(doc)
['am', 'are', 'both', 'fellow', 'good', 'hard', 'student', 'the', 'we', 'works']
I got this as output, but I just want the number of occurrences of the unique words. How do I do this, please?
You just need to use Counter, and you will solve the problem by using a single line of code:
from collections import Counter
doc = ["i am a fellow student",
"we both are the good student",
"a student works hard"]
count = dict(Counter(word for sentence in doc for word in sentence.split()))
count is your desired dictionary:
{
'i': 1,
'am': 1,
'a': 2,
'fellow': 1,
'student': 3,
'we': 1,
'both': 1,
'are': 1,
'the': 1,
'good': 1,
'works': 1,
'hard': 1
}
So for example count['student'] == 3, count['a'] == 2 etc.
Here it's important to use split() instead of split(' '): in this way you will not end up with having an "empty" word within count. Example:
>>> sentence = "Hello world"
>>> dict(Counter(sentence.split(' ')))
{'Hello': 1, '': 4, 'world': 1}
>>> dict(Counter(sentence.split()))
{'Hello': 1, 'world': 1}
Use
from collections import Counter
Counter(" ".join(doc).split())
results in
Counter({'i': 1,
'am': 1,
'a': 2,
'fellow': 1,
'student': 3,
'we': 1,
'both': 1,
'are': 1,
'the': 1,
'good': 1,
'works': 1,
'hard': 1})
Explanation: first create one string by using join, then split it on whitespace with split() to get a list of single words. Then use Counter to count the appearances of each word.
doc = ["i am a fellow student", "we both are the good student", "a student works hard"]
p = doc[0].split() #first list
p1 = doc[1].split() #second list
p2 = doc[2].split() #third list
f1 = p + p1 + p2
j = len(f1)-1
n = 0
while n < j:
print(f1[n],"is found",f1.count(f1[n]), "times")
n+=1
You can use a set and a string to aggregate all the words across the sentences, and then use a dictionary comprehension to create a dictionary keyed by word with the count as its value:
doc = ["i am a fellow student", "we both are the good student", "a student works hard"]
uniques = set()
all_words = ''
for i in doc:
    for word in i.split(" "):
        uniques.add(word)
        all_words += f" {word}"
print({i: all_words.count(f" {i} ") for i in uniques})
Output
{'the': 1, 'hard': 0, 'student': 3, 'both': 1, 'fellow': 1, 'works': 1, 'a': 2, 'are': 1, 'am': 1, 'good': 1, 'i': 1, 'we': 1}
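Note the 'hard': 0 in that output: all_words never ends with a space, so the probe f" {i} " can never match the final word. Appending one trailing space before counting, a one-line tweak, fixes it:
print({i: (all_words + " ").count(f" {i} ") for i in uniques})  # now 'hard' is counted as 1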
Thanks for posting on Stack Overflow. I have written sample code that does what you need; check it and ask if there is anything you don't understand.
doc = ["i am a fellow student", "we both are the good student", "a student works hard"]
checked = []
occurence = []
for sentence in doc:
    for word in sentence.split(" "):
        if word in checked:
            occurence[checked.index(word)] = occurence[checked.index(word)] + 1
        else:
            checked.append(word)
            occurence.append(1)
for i in range(len(checked)):
    print(checked[i] + " : " + str(occurence[i]))
Try this one:
doc = ["i am a fellow student", "we both are the good student", "a student works hard"]
words=[]
for a in doc:
    b = a.split()
    for c in b:
        # if len(c) > 3:  # optionally keep only longer words; your choice
        words.append(c)
wc = []
for a in words:
    count = 0
    for b in words:
        if a == b:
            count += 1
    wc.append([a, count])
print(wc)
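Note that wc gets one [word, count] pair per occurrence, so repeated words appear multiple times; wrapping the result in dict() collapses the duplicates:
print(dict(wc))  # each word appears once with its count, e.g. 'student': 3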

Transform every word in a string into a dict, with how many times each word occurred as the value, in Python

I'm having trouble transforming every word of a string into a dictionary key and passing how many times the word appears as the value.
For example
string = 'How many times times appeared in this many times'
The dict I want is:
dict = {'times':3, 'many':2, 'how':1 ...}
Using Counter
from collections import Counter
res = dict(Counter(string.split()))
#{'How': 1, 'many': 2, 'times': 3, 'appeared': 1, 'in': 1, 'this': 1}
You can loop through the words and increment the count like so:
d = {}
for word in string.split(" "):
d.setdefault(word, 0)
d[word] += 1

counting words from a dictionary?

My function is supposed to have:
One parameter: a tweet. This tweet can contain numbers, words, hashtags, links, and punctuation.
A second parameter: a dictionary that counts the words in that tweet, disregarding the hashtags, mentions, links, and punctuation included in it.
The function returns all individual words in the dictionary as lowercase letters without any punctuation.
If the tweet had Don't, then the dictionary would count it as dont.
Here is my function:
def count_words(tweet, num_words):
    ''' (str, dict of {str: int}) -> None
    Update the count of words in the dictionary (returns None).
    >>> count_words('We have made too much progress', num_words)
    >>> num_words
    {'we': 1, 'have': 1, 'made': 1, 'too': 1, 'much': 1, 'progress': 1}
    >>> count_words("#utmandrew Don't you wish you could vote? #MakeAmericaGreatAgain", num_words)
    >>> num_words
    {'dont': 1, 'wish': 1, 'you': 2, 'could': 1, 'vote': 1}
    >>> count_words('I am fighting for you! #FollowTheMoney', num_words)
    >>> num_words
    {'i': 1, 'am': 1, 'fighting': 1, 'for': 1, 'you': 1}
    >>> count_words('', num_words)
    >>> num_words
    {'': 0}
    '''
I might misunderstand your question, but if you want to update the dictionary you can do it in this manner:
d = {}
def update_dict(tweet):
    for i in tweet.split():
        if i not in d:
            d[i] = 1
        else:
            d[i] += 1
    return d
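The answer above counts raw tokens; the asker's spec also calls for lowercasing, stripping punctuation, and skipping hashtags, mentions, and links. Here is a sketch along those lines (the startswith prefixes are my assumptions about what marks a hashtag, mention, or link):
import string

def count_words(tweet, num_words):
    ''' (str, dict of {str: int}) -> None
    Update num_words with the counts of the plain words in tweet.'''
    for token in tweet.split():
        # skip hashtags, mentions, and links (assumed markers)
        if token.startswith(('#', '@', 'http')):
            continue
        # lowercase and drop punctuation, so "Don't" becomes "dont"
        word = ''.join(ch for ch in token.lower() if ch not in string.punctuation)
        if word:
            num_words[word] = num_words.get(word, 0) + 1
For the second doctest this yields {'dont': 1, 'you': 2, 'wish': 1, 'could': 1, 'vote': 1}, matching the expected output.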

How to count the frequency of single words and also double words from input text in Python?

Hello, I want to count single word and double word frequencies from input text in Python.
Ex.
"what is your name ? what you want from me ?
You know best way to earn money is Hardwork
what is your aim ?"
output:
single W.C.:
what 3
is 3
your 2
you 2
and so on..
Double W.C. :
what is 2
is your 2
your name 1
what you 1
and so on..
Please post the way to do this.
I use the following code for the single word count:
ws = {}
for line in text:
    for wrd in line:
        if wrd not in ws:
            ws[wrd] = 1
        else:
            ws[wrd] += 1
from collections import Counter
s = "..."
words = s.split()
pairs = zip(words, words[1:])
single_words, double_words = Counter(words), Counter(pairs)
Output:
print("single W.C.")
for word, count in sorted(single_words.items(), key=lambda x: -x[1]):
    print(word, count)
print("double W.C.")
for pair, count in sorted(double_words.items(), key=lambda x: -x[1]):
    print(pair, count)
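An equivalent, slightly more idiomatic way to print those descending-order counts is Counter.most_common(), which sorts by count for you:
print("single W.C.")
for word, count in single_words.most_common():
    print(word, count)
print("double W.C.")
for pair, count in double_words.most_common():
    print(pair, count)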
import nltk
from nltk import bigrams

tokens = nltk.word_tokenize(text)
tokens = [token.lower() for token in tokens if len(token) > 1]
bi_tokens = list(bigrams(tokens))  # materialize the generator so .count() works
print([(item, tokens.count(item)) for item in sorted(set(tokens))])
print([(item, bi_tokens.count(item)) for item in sorted(set(bi_tokens))])
This works, using defaultdict (Python 2.6):
>>> from collections import defaultdict
>>> d = defaultdict(int)
>>> string = "what is your name ? what you want from me ?\n
You know best way to earn money is Hardwork\n what is your aim ?"
>>> l = string.split()
>>> for i in l:
d[i]+=1
>>> d
defaultdict(<type 'int'>, {'me': 1, 'aim': 1, 'what': 3, 'from': 1, 'name': 1,
'You': 1, 'money': 1, 'is': 3, 'earn': 1, 'best': 1, 'Hardwork': 1, 'to': 1,
'way': 1, 'know': 1, 'want': 1, 'you': 1, 'your': 2, '?': 3})
>>> d2 = defaultdict(int)
>>> for i in zip(l[:-1], l[1:]):
d2[i]+=1
>>> d2
defaultdict(<type 'int'>, {('You', 'know'): 1, ('earn', 'money'): 1,
('is', 'Hardwork'): 1, ('you', 'want'): 1, ('know', 'best'): 1,
('what', 'is'): 2, ('your', 'name'): 1, ('from', 'me'): 1,
('name', '?'): 1, ('?', 'You'): 1, ('?', 'what'): 1, ('to', 'earn'): 1,
('aim', '?'): 1, ('way', 'to'): 1, ('Hardwork', 'what'): 1,
('money', 'is'): 1, ('me', '?'): 1, ('what', 'you'): 1, ('best', 'way'): 1,
('want', 'from'): 1, ('is', 'your'): 2, ('your', 'aim'): 1})
>>>
I realize this question is a few years old. I wrote a little routine today to count individual words from a Word doc (docx). I used docx2txt to get the text from the Word document, used my first regex ever to remove every character other than letters, digits, or spaces, and switched everything to uppercase. I am adding this because the question wasn't answered.
Here is my little test routine in case it may help anyone.
mydoc = 'I:/flashdrive/pmw/pmw_py.docx'
words_all = {}
#####
import docx2txt
my_text = docx2txt.process(mydoc)
print(my_text)
my_text_org = my_text

import re
my_text = re.sub(r'[\W_]+', ' ', my_text.upper(), flags=re.UNICODE)
print(my_text)
words = my_text.split()
words_org = words  # just in case I may need the original version later

# added this code for the double words; it must run after words is defined
from collections import Counter
pairs = zip(words, words[1:])
pair_list = Counter(pairs)
print('before pair listing')
for pair, count in sorted(pair_list.items(), key=lambda x: -x[1]):
    # print(''.join('{} {}'.format(*pair)), count)  # worked
    # print(' '.join(pair), '', count)  # worked
    my_pair = "{} {}".format(pair[0], pair[1])
    print(my_pair, ": ", count)
# end of added code

for i in words:
    if i not in words_all:
        words_all[i] = words.count(i)
for k, v in sorted(words_all.items()):
    print(k, v)
print("Number of items in word list: {}".format(len(words_all)))
