Novice coder here, trying to sort out issues I've found with a simple spam detection Python script from YouTube.
Naive Bayes can't be applied because the feature list isn't being generated correctly. I know the problem step is
featuresets = [(email_features(n),g) for (n,g) in mixedemails]
Could someone help me understand why that line is failing to generate anything?
def email_features(sent):
    features = {}
    wordtokens = [wordlemmatizer.lemmatize(word.lower()) for word in word_tokenize(sent)]
    for word in wordtokens:
        if word not in commonwords:
            features[word] = True
    return features
hamtexts = []
spamtexts = []
for infile in glob.glob(os.path.join('ham/', '*.txt')):
    text_file = open(infile, "r")
    hamtexts.append(text_file.read())
    text_file.close()
for infile in glob.glob(os.path.join('spam/', '*.txt')):
    text_file = open(infile, "r")
    spamtexts.append(text_file.read())
    text_file.close()
mixedemails = ([(email,'spam') for email in spamtexts]+ [(email,'ham') for email in hamtexts])
featuresets = [(email_features(n),g) for (n,g) in mixedemails]
I converted your problem into a minimal, runnable example:
commonwords = []

def lemmatize(word):
    return word

def word_tokenize(text):
    return text.split(" ")

def email_features(sent):
    wordtokens = [lemmatize(word.lower()) for word in word_tokenize(sent)]
    features = dict((word, True) for word in wordtokens if word not in commonwords)
    return features

hamtexts = ["hello test", "test123 blabla"]
spamtexts = ["buy this", "buy that"]
mixedemails = [(email, 'spam') for email in spamtexts] + [(email, 'ham') for email in hamtexts]
featuresets = [(email_features(n), g) for (n, g) in mixedemails]
print(len(mixedemails), len(featuresets))
Executing that example prints 4 4 on the console. So most of your code seems to work, and the exact cause of the error cannot be pinned down from what you posted. I would suggest you check the following points to find the bug:
Maybe your spam and ham files are not read properly (e.g. your path might be wrong). To validate that this is not the case, add print(hamtexts, spamtexts) before mixedemails = .... Both variables should contain non-empty lists of strings.
Maybe your implementation of word_tokenize() always returns an empty list. Add print(sent, wordtokens) after wordtokens = [...] in email_features() to make sure that sent contains a string and that it gets correctly converted to a list of tokens.
Maybe commonwords contains every single word from your ham and spam emails. To make sure that this is not the case, add the previous print(sent, wordtokens) before the loop in email_features() and print(features) after the loop. All three variables should (usually) be non-empty.
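For reference, here is a sketch of email_features() with those debug prints added (same wordlemmatizer, word_tokenize and commonwords as in your script; I have not run this against your data):
def email_features(sent):
    features = {}
    wordtokens = [wordlemmatizer.lemmatize(word.lower()) for word in word_tokenize(sent)]
    print(sent, wordtokens)  # sent should be a string, wordtokens a non-empty list of tokens
    for word in wordtokens:
        if word not in commonwords:
            features[word] = True
    print(features)  # should usually be non-empty
    return features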
Recently I have been working on a genome data set which consists of many blocks of genomes. In previous work on natural language processing, I have used sent_tokenize and word_tokenize from nltk to tokenize sentences and words. But when I use these functions on the genome data set, they are not able to tokenize the genomes correctly. The text below shows part of the genome data set.
>NR_004049 1
tattattatacacaatcccggggcgttctatatagttatgtataatgtat
atttatattatttatgcctctaactggaacgtaccttgagcatatatgct
gtgacccgaaagatggtgaactatacttgatcaggttgaagtcaggggaa
accctgatggaagaccgaaacagttctgacgtgcaaatcgattgtcagaa
ttgagtataggggcgaaagaccaatcgaaccatctagtagctggttcctt
ccgaagtttccctcaggatagctggtgcattttaatattatataaaataa
tcttatctggtaaagcgaatgattagaggccttagggtcgaaacgatctt
aacctattctcaaactttaaatgggtaagaaccttaactttcttgatatg
aagttcaaggttatgatataatgtgcccagtgggccacttttggtaagca
gaactggcgctgtgggatgaaccaaacgtaatgttacggtgcccaaataa
caact
>NR_004048 1
aatgttttatataaattgcagtatgtgtcacccaaaatagcaaaccccat
aaccaaccagattattatgatacataatgcttatatgaaactaagacatt
tcgcaacatttattttaggtatataaatacatttattgaaggaattgata
tatgccagtaaaatggtgtatttttaatttctttcaataaaaacataatt
gacattatataaaaatgaattataaaactctaagcggtggatcactcggc
tcatgggtcgatgaagaacgcagcaaactgtgcgtcatcgtgtgaactgc
aggacacatgaacatcgacattttgaacgcatatcgcagtccatgctgtt
atgtactttaattaattttatagtgctgcttggactacatatggttgagg
gttgtaagactatgctaattaagttgcttataaatttttataagcatatg
gtatattattggataaatataataatttttattcataatattaaaaaata
aatgaaaaacattatctcacatttgaatgt
>NR_004047 1
atattcaggttcatcgggcttaacctctaagcagtttcacgtactgttta
actctctattcagagttcttttcaactttccctcacggtacttgtttact
atcggtctcatggttatatttagtgtttagatggagtttaccacccactt
agtgctgcactatcaagcaacactgactctttggaaacatcatctagtaa
tcattaacgttatacgggcctggcaccctctatgggtaaatggcctcatt
taagaaggacttaaatcgctaatttctcatactagaatattgacgctcca
tacactgcatctcacatttgccatatagacaaagtgacttagtgctgaac
tgtcttctttacggtcgccgctactaagaaaatccttggtagttactttt
cctcccctaattaatatgcttaaattcagggggtagtcccatatgagttg
>NR_004052 1
When the nltk tokenizer is applied to this dataset, each line of text (for example tattattatacacaatcccggggcgttctatatagttatgtataatgtat) becomes one token, which is not correct; instead, a block of sequences should be considered as one token. For example, in this case the contents between >NR_004049 1 and >NR_004048 1 should be considered as one token:
>NR_004049 1
tattattatacacaatcccggggcgttctatatagttatgtataatgtat
atttatattatttatgcctctaactggaacgtaccttgagcatatatgct
gtgacccgaaagatggtgaactatacttgatcaggttgaagtcaggggaa
accctgatggaagaccgaaacagttctgacgtgcaaatcgattgtcagaa
ttgagtataggggcgaaagaccaatcgaaccatctagtagctggttcctt
ccgaagtttccctcaggatagctggtgcattttaatattatataaaataa
tcttatctggtaaagcgaatgattagaggccttagggtcgaaacgatctt
aacctattctcaaactttaaatgggtaagaaccttaactttcttgatatg
aagttcaaggttatgatataatgtgcccagtgggccacttttggtaagca
gaactggcgctgtgggatgaaccaaacgtaatgttacggtgcccaaataa
caact
>NR_004048 1
So each block starting with a special line such as >NR_004049 1, up to the next such line, should be considered as one token. The problem here is tokenizing this kind of data set, and I don't have any idea how I can tokenize it correctly.
I would really appreciate answers that help me solve this.
Update:
One way to solve this problem is to append all lines within each block and then use the nltk tokenizer. For example, this means appending all lines between >NR_004049 1 and >NR_004048 1 to make one string out of several lines, so the nltk tokenizer will consider it as one token. Can anyone help me append the lines within each block?
You just need to concatenate the lines between two IDs, apparently. There should be no need for nltk or any tokenizer, just a bit of programming ;)
patterns = {}
with open('data', "r") as f:
    id = None
    current = ""
    for line0 in f:
        line = line0.rstrip()
        if line.startswith('>'):  # new pattern (startswith also avoids an error on blank lines)
            if len(current) > 0:
                # print("adding " + id + " " + current)
                patterns[id] = current
                current = ""
            # to find the next id:
            tokens = line.split(" ")
            id = tokens[0][1:]
        else:  # continuing pattern
            current = current + line

if len(current) > 0:
    patterns[id] = current
    # print("adding " + id + " " + current)

# do whatever with the patterns:
for id, pattern in patterns.items():
    print(f"{id}\t{pattern}")
I'm an absolute beginner with Python, and I am very stuck on this part. I tried creating a function to preprocess my texts/data for topic modeling, and it works perfectly when I run it as standalone code, but it does not return anything when I run it as a function. I would appreciate any help!
The code I'm using is very basic, and probably inefficient, but it's for my basic class, so really basic ways are the way to go for me!
Code:
def clean(data):
    data_prep = []
    for data in data:
        tokenized_words = nltk.word_tokenize(data)
        text_words = [token.lower() for token in tokenized_words if token.isalnum()]
        text_words = [word for word in text_words if word not in stop_words]
        text_joined = " ".join(textwords)
        data_prep.append(text_joined)
    return data_prep
The outputs are really random, like "j", ",", "i". I was using a .txt file as my data, converted from a .csv file.
Edit:
I've adjusted my code based on the mistakes pointed out, and it is now:
def clean(data):
    data_prep = []
    for row in data:
        tokenized_words = nltk.word_tokenize(data)
        text_words = [token.lower() for token in tokenized_words if token.isalnum()]
        text_words = [word for word in text_words if word not in stop_words]
        text_joined = " ".join(text_words)
        data_prep.append(text_joined)
    return data_prep
Results: it now returns tokenized sentences, seemingly in a loop.
What is my mistake this time?
I don't have enough reputation to comment, so I will post this as an answer instead. It seems you are unnecessarily looping through all of your data twice: once in your outer for loop (for row in data) and then again in your list comprehensions ([token.lower() for token in tokenized_words if token.isalnum()]), since you are tokenizing all of the data (nltk.word_tokenize(data)), not just the current row. That is, your code should stop returning the same sentences multiple times if you get rid of your outermost for loop.
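For illustration, a minimal sketch of the fix described above, assuming data is the full text read from your .txt file (the stop-word setup is my addition; adjust it to whatever stop_words you already use):
import nltk
from nltk.corpus import stopwords

stop_words = set(stopwords.words("english"))

def clean(data):
    # 'data' is the whole text, so tokenize it once instead of once per row
    tokenized_words = nltk.word_tokenize(data)
    text_words = [token.lower() for token in tokenized_words if token.isalnum()]
    text_words = [word for word in text_words if word not in stop_words]
    return " ".join(text_words)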
I have two .txt files: one contains 200,000 words, and the second contains 100 keywords (one per line). I want to calculate the cosine similarity between each of the 100 keywords and each of my 200,000 words, and display for every keyword the 50 words with the highest score.
Here's what I did; note that BertClient is what I'm using to extract vectors:
from sklearn.metrics.pairwise import cosine_similarity
from bert_serving.client import BertClient

bc = BertClient()

# Process words
with open("./words.txt", "r", encoding='utf8') as textfile:
    words = textfile.read().split()

with open("./100_keywords.txt", "r", encoding='utf8') as keyword_file:
    for keyword in keyword_file:
        vector_key = bc.encode([keyword])
        for w in words:
            vector_word = bc.encode([w])
            cosine_lib = cosine_similarity(vector_key, vector_word)
            print(cosine_lib)
This keeps running but it doesn't stop. Any idea how I can correct this?
I know nothing of Bert...but there's something fishy with the import and run. I don't think you have it installed correctly or something. I tried to pip install it and just run this:
from sklearn.metrics.pairwise import cosine_similarity
from bert_serving.client import BertClient
bc = BertClient()
print ('done importing')
and it never finished. Take a look at the docs for bert and see if something else needs to be done.
As for your code, it is generally better to do ALL of the reading first, then the processing. So read both lists in first, separately, and check a few values with something like:
# check first five
print(words[:5])
Also, you need to look at a different way to do your comparisons instead of the nested loops. Notice that you are converting each word in words EVERY TIME for each keyword, which is not necessary and probably really slow. I would recommend you either use a dictionary to pair each word with its encoding, or make a list of (word, encoding) tuples if you are more comfortable with that.
Comment me back if that doesn't make sense after you get Bert up and running.
--Edit--
Here is a chunk of code that works similarly to what you want to do. There are a lot of options for how you can hold the results, depending on your needs, but this should get you started with "fake bert":
from operator import itemgetter

# fake bert ... just return something like length
def bert(word):
    return len(word)

# a fake compare function that will compare "bert" conversions
def bert_compare(x, y):
    return abs(x - y)

# Process words
with open("./word_data_file.txt", "r", encoding='utf8') as textfile:
    words = textfile.read().split()

# Process keywords
with open("./keywords.txt", "r", encoding='utf8') as keyword_file:
    keywords = keyword_file.read().split()

# encode the words and put the results in dictionaries
encoded_words = {}
for word in words:
    encoded_words[word] = bert(word)

encoded_keywords = {}
for word in keywords:
    encoded_keywords[word] = bert(word)

# let's use our bert conversions to find which keyword is most similar in
# length to the word
for word in encoded_words.keys():
    result = []  # make a new result set for each pass
    for kword in encoded_keywords.keys():
        similarity = bert_compare(encoded_words.get(word), encoded_keywords.get(kword))
        # stuff the answer into a tuple that can be sorted
        result.append((word, kword, similarity))
    result.sort(key=itemgetter(2))
    print(f'the keyword with the closest size to {result[0][0]} is {result[0][1]}')
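Once the real BertClient is up and running, a vectorized version along these lines would avoid re-encoding anything inside the loops and would give the top 50 words per keyword, which is what the question asks for. This is only a sketch reusing the file names from the question; I have not run it against a live bert-serving server:
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from bert_serving.client import BertClient

bc = BertClient()

with open("./words.txt", "r", encoding="utf8") as f:
    words = f.read().split()
with open("./100_keywords.txt", "r", encoding="utf8") as f:
    keywords = [line.strip() for line in f if line.strip()]

# encode each list once, in a batch, instead of once per comparison
word_vecs = bc.encode(words)        # shape: (n_words, dim)
keyword_vecs = bc.encode(keywords)  # shape: (n_keywords, dim)

# one matrix of all pairwise similarities: rows = keywords, columns = words
sims = cosine_similarity(keyword_vecs, word_vecs)

for keyword, row in zip(keywords, sims):
    top50 = np.argsort(row)[::-1][:50]  # indices of the 50 highest scores
    print(keyword, [(words[i], round(float(row[i]), 3)) for i in top50])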
I have a function that scores words. I have lots of text, ranging from single sentences to documents several pages long. I'm stuck on how to score the words and then return the text close to its original state.
Here's an example sentence:
"My body lies over the ocean, my body lies over the sea."
What I want to produce is the following:
"My body (2) lies over the ocean (3), my body (2) lies over the sea."
Below is a dummy version of my scoring algorithm. I've figured out how to take text, tear it apart and score it.
However, I'm stuck on how to put it back together into the format I need it in.
Here's a dummy version of my function:
def word_score(text):
    words_to_work_with = []
    words_to_return = []
    passed_text = TextBlob(text)
    for word in words_to_work_with:
        word = word.singularize().lower()
        word = str(word)
        e_word_lemma = lemmatizer.lemmatize(word)
        words_to_work_with.append(e_word_lemma)
    for word in words_to_work_with:
        if word == 'body':
            score = 2
        if word == 'ocean':
            score = 3
        else:
            score = None
        words_to_return.append((word, score))
    return words_to_return
I'm a relative newbie so I have two questions:
How can I put the text back together, and
Should that logic be put into the function or outside of it?
I'd really like to be able to feed entire segments (i.e. sentences, documents) into the function and have it return them.
Thank you for helping me!
So basically, you want to attribute a score to each word. The function you give may be improved by using a dictionary instead of several if statements.
Also, you have to return all of the scores, instead of just the score of the first word in words_to_work_with, which is the current behavior of the function, since it will return an integer on the first iteration.
So the new function would be:
def word_score(text):
    words_to_work_with = []
    passed_text = TextBlob(text)
    for word in words_to_work_with:
        word = word.singularize().lower()
        word = str(word)  # Is this line really useful?
        e_word_lemma = lemmatizer.lemmatize(word)
        words_to_work_with.append(e_word_lemma)
    dict_scores = {'body': 2, 'ocean': 3}  # etc.
    # if a word is not recognized, its score is None
    return [dict_scores.get(word, None) for word in words_to_work_with]
For the second part, which is reconstructing the string, I would actually do this in the same function (so this answers your second question) :
def word_score_and_reconstruct(text):
    words_to_work_with = []
    passed_text = TextBlob(text)
    reconstructed_text = ''
    for word in words_to_work_with:
        word = word.singularize().lower()
        word = str(word)  # Is this line really useful?
        e_word_lemma = lemmatizer.lemmatize(word)
        words_to_work_with.append(e_word_lemma)
    dict_scores = {'body': 2, 'ocean': 3}
    dict_strings = {'body': ' (2)', 'ocean': ' (3)'}
    word_scores = []
    for word in words_to_work_with:
        word_scores.append(dict_scores.get(word, None))  # we still construct the scores list here
        # we add 'word' + '(word's score)', only if the word has a score
        # if not, we use the default value '', meaning we don't add anything
        reconstructed_text += word + dict_strings.get(word, '')
    return reconstructed_text, word_scores
I'm not guaranteeing this code will work at first try, I can't test it but it'll give you the main idea
Hope this helps. Based on your question, it has worked for me.
Best regards!
"""
Python 3.7.2
Input:
Saved text in the file named as "original_text.txt"
My body lies over the ocean, my body lies over the sea.
"""
input_file = open('original_text.txt', 'r')    # reading text from file
output_file = open('processed_text.txt', 'w')  # saving output text in file
output_text = []
for line in input_file:
    words = line.split()
    for word in words:
        if word == 'body':
            output_text.append('body (2)')
            output_file.write('body (2) ')
        elif word == 'body,':
            output_text.append('body (2),')
            output_file.write('body (2), ')
        elif word == 'ocean':
            output_text.append('ocean (3)')
            output_file.write('ocean (3) ')
        elif word == 'ocean,':
            output_text.append('ocean (3),')
            output_file.write('ocean (3), ')
        else:
            output_text.append(word)
            output_file.write(word + ' ')
print(output_text)
input_file.close()
output_file.close()
Here's a working implementation. The function first parses the input text into a list, such that each list element is a word or a combination of punctuation characters (e.g. a comma followed by a space). Once the words in the list have been processed, it combines the list back into a string and returns it.
import re
import inflection
from nltk.stem import WordNetLemmatizer  # assuming nltk's lemmatizer, as in the question

lemmatizer = WordNetLemmatizer()

def word_score(text):
    words_to_work_with = re.findall(r"\b\w+|\b\W+", text)
    for i, word in enumerate(words_to_work_with):
        if word.isalpha():
            words_to_work_with[i] = inflection.singularize(word).lower()
            words_to_work_with[i] = lemmatizer.lemmatize(word)
            if word == 'body':
                words_to_work_with[i] = 'body (2)'
            elif word == 'ocean':
                words_to_work_with[i] = 'ocean (3)'
    return ''.join(words_to_work_with)

txt = "My body lies over the ocean, my body lies over the sea."
output = word_score(txt)
print(output)
Output:
My body (2) lie over the ocean (3), my body (2) lie over the sea.
If you have more than 2 words that you want to score, using a dictionary instead of if conditions is indeed a good idea.
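For example, here is a sketch of that dictionary-based variant (the scores table is hypothetical, and I have dropped the singularize/lemmatize step for brevity):
import re

scores = {'body': 2, 'ocean': 3, 'sea': 5}  # hypothetical score table; extend as needed

def word_score(text):
    words_to_work_with = re.findall(r"\b\w+|\b\W+", text)
    for i, word in enumerate(words_to_work_with):
        if word.isalpha() and word.lower() in scores:
            words_to_work_with[i] = f"{word} ({scores[word.lower()]})"
    return ''.join(words_to_work_with)

print(word_score("My body lies over the ocean, my body lies over the sea."))
# My body (2) lies over the ocean (3), my body (2) lies over the sea (5).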
I am trying to generate a sentence in the style of the Bible. But whenever I run it, it stops with a KeyError on the same exact word. This is confusing, as it is only using its own keys, and it is the same word in the error every time, despite using random.choice.
This is the txt file if you want to run it: ftp://ftp.cs.princeton.edu/pub/cs226/textfiles/bible.txt
import random

files = []
content = ""
output = ""
words = {}
files = ["bible.txt"]
sentence_length = 200

for file in files:
    file = open(file)
    content = content + " " + file.read()

content = content.split(" ")

for i in range(100):  # I didn't want to go through every word in the bible, so I'm just going through 100 words
    words[content[i]] = []
    words[content[i]].append(content[i+1])

word = random.choice(list(words.keys()))
output = output + word

for i in range(int(sentence_length)):
    word = random.choice(words[word])
    output = output + word

print(output)
The KeyError happens on this line:
word = random.choice(words[word])
It always happens for the word "midst".
How? "midst" is the 100th word in the text.
And the 100th position is the first time it is seen.
The consequence is that "midst" itself was never put in words as a key.
Hence the KeyError.
Why does the program reach this word so fast? Partly because of a bug here:
for i in range(100):
    words[content[i]] = []
    words[content[i]].append(content[i+1])
The bug is the words[content[i]] = [] statement: every time you see a word, you recreate an empty list for it, throwing away the successors collected so far. The word before "midst" is "the". "the" is a very common word, so many other words in the text have "the" in their successor lists, which means the random walk lands on "the" often. And since words["the"] ends up as just ["midst"], the problem tends to happen a lot, despite the randomness.
You can fix the bug in how words is built:
for i in range(100):
    if content[i] not in words:
        words[content[i]] = []
    words[content[i]].append(content[i+1])
And then, when you select words randomly, I suggest adding an if word in words check to handle the corner case of the last word in the input, as sketched below.
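A minimal sketch of that guarded selection loop, using the same variables as your script (breaking out on a dead end is just one way to handle it):
word = random.choice(list(words.keys()))
output = output + word
for i in range(int(sentence_length)):
    if word not in words:
        break  # dead end: the current word never appeared as a key
    word = random.choice(words[word])
    output = output + " " + word  # note: adding a space so the words don't run together
print(output)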
"midst" is the 101st word in your source text and it is the first time it shows up. When you do this:
words[content[i]].append(content[i+1])
you are storing a value under a key, but you aren't guaranteed that that value also exists as a key. So when you later use that value to look up a key, the key doesn't exist and you get a KeyError.
If you change your range to 101 instead of 100, you will see that your program almost works. That is because the 102nd word is "of", which has already occurred in your source text.
It's up to you how you want to deal with this edge case. You could do something like this:
if i == (100-1):
    words[content[i]].append(content[0])
else:
    words[content[i]].append(content[i+1])
which basically loops back around to the beginning of the source text when you get to the end.
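Putting that together with the build loop from the question (plus the guard against re-creating the lists from the other answer), a sketch of the fixed dictionary construction could look like this, where N = 100 just mirrors the original limit:
N = 100  # mirrors the original 100-word limit
for i in range(N):
    if content[i] not in words:
        words[content[i]] = []
    if i == N - 1:
        # last position: wrap around to the beginning of the source text
        words[content[i]].append(content[0])
    else:
        words[content[i]].append(content[i+1])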