What I'm trying to achieve:
I have been looking for an approach for a long while now, but I'm not able to find an (effective) way to do this:
build a model from example sentences while taking word order and synonyms into account.
map a sentence against this model and get a similarity score (that is, a score indicating how well this sentence fits the model, in other words how well it fits the sentences which were used to train the model)
What I tried:
Python: nltk in combination with gensim (as far as I could tell from coding and reading, it was only capable of using word similarity, not taking word order into account).
R: used tm to build a TermDocumentMatrix, which looked really promising, but I was not able to map anything to this matrix. Furthermore, this TermDocumentMatrix seems to take the order into account but misses the synonyms (I think).
I know the lemmatization didn't go that well hahah :)
Question:
Is there any way to achieve the steps described above using either R or Python? Simple sample code would be great (or references to a good tutorial).
There are many ways to do what you described above, and it will of course take lots of testing to find an optimized solution. But here is some helpful functionality to help solve this using python/nltk.
build a model from example sentences while taking word order and
synonyms into account.
1. Tokenization
In this step you will want to break down individual sentences into a list of words.
Sample code:
import nltk
tokenized_sentence = nltk.word_tokenize('this is my test sentence')
print(tokenized_sentence)
['this', 'is', 'my', 'test', 'sentence']
2. Finding synonyms for each word.
Sample code:
from nltk.corpus import wordnet as wn
synset_list = wn.synsets('motorcar')
print(synset_list)
[Synset('car.n.01')]
Feel free to research synsets if you are unfamiliar, but for now just know the above returns a list, so multiple synsets are possibly returned.
From the synset you can get a list of synonyms.
Sample code:
print( wn.synset('car.n.01').lemma_names() )
['car', 'auto', 'automobile', 'machine', 'motorcar']
Great, now you are able to convert your sentence into a list of words, and you're able to find synonyms for all words in your sentences (while retaining the order of your sentence). Also, you may want to consider removing stopwords and stemming your tokens, so feel free to look up those concepts if you think it would be helpful.
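For example, here is a minimal sketch of the stopword-removal and stemming step with nltk (assuming the punkt and stopwords resources have already been downloaded via nltk.download):
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

stop_words = set(stopwords.words('english'))
stemmer = PorterStemmer()

tokens = nltk.word_tokenize('this is my test sentence')
filtered = [stemmer.stem(t) for t in tokens if t.lower() not in stop_words]
print(filtered)   # e.g. ['test', 'sentenc']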
You will of course need to write the code to do this for all sentences, and store the data in some data structure, but that is probably outside the scope of this question.
map a sentence against this model and get a similarity score (thus a
score indicating how much this sentence fits the model, in other words
fits the sentences which were used to train the model)
This is difficult to answer since the possibilities to do this are endless, but here are a few examples of how you could approach it.
If you're interested in binary classification you could do something as simple as: have I seen this sentence, or a variation of this sentence, before (a variation being the same sentence but with words replaced by their synonyms)? If so, the score is 1, else the score is 0. This would work, but may not be what you want.
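A rough sketch of that binary check, expanding the training sentences into synonym variations with WordNet (the helper names here are made up, and the expansion can blow up for long sentences):
from itertools import product
from nltk.corpus import wordnet as wn

def variations(tokens):
    # For each token, collect the token itself plus any WordNet lemma names.
    options = []
    for token in tokens:
        alternatives = {token.lower()}
        for synset in wn.synsets(token):
            alternatives.update(name.lower() for name in synset.lemma_names())
        options.append(sorted(alternatives))
    # Cartesian product of per-token alternatives.
    return {' '.join(combo) for combo in product(*options)}

# Expand the training sentences into all synonym variations once.
known = set()
for training_sentence in ['this is my awesome sentence']:
    known.update(variations(training_sentence.split()))

def binary_score(sentence):
    return 1 if sentence.lower() in known else 0

print(binary_score('this is my awesome sentence'))  # 1
print(binary_score('something else entirely'))      # 0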
Another example: store each sentence along with its synonyms in a nested Python dictionary and calculate the score depending on how far down the dictionary you can align the new sentence.
Example:
training_sentence1 = 'This is my awesome sentence'
training_sentence2 = 'This is not awesome'
And here is a sample data structure showing how you would store those two sentences:
my_dictionary = {
    'this': {
        'is': {
            'my': {
                'awesome': {
                    'sentence': {}
                }
            },
            'not': {
                'awesome': {}
            }
        }
    }
}
Then you could write a function that traverses that data structure for each new sentence, and depending how deep it gets, give it a higher score.
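For instance, here is a minimal sketch of such a traversal (ignoring synonyms for brevity; depth_score is just an illustrative name):
def depth_score(sentence, tree):
    # Walk the nested dictionary word by word and count how deep we get.
    node = tree
    depth = 0
    for word in sentence.lower().split():
        if word in node:
            node = node[word]
            depth += 1
        else:
            break
    return depth

print(depth_score('This is my awesome sentence', my_dictionary))     # 5
print(depth_score('This is not awesome', my_dictionary))             # 4
print(depth_score('Something completely different', my_dictionary))  # 0
You could normalize the returned depth by the number of words in the new sentence to get a score between 0 and 1.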
Conclusion:
The above two examples are just some possible ways to approach the similarity problem. There are countless articles/whitepapers about computing semantic similarity between text, so my advice would be just explore many options.
I purposely excluded supervised classification models, since you never mentioned having access to labelled training data, but of course that route is possible if you do have a gold standard data source.
Related
I'm trying to match an input text (e.g. the headline of a news article) to sets of keywords, such that the best-matching set can be selected.
Let's assume, I have some sets of keywords:
[['democracy', 'votes', 'democrats'], ['health', 'corona', 'vaccine', 'pandemic'], ['security', 'police', 'demonstration']]
and as input the (hypothetical) headline: New Pfizer vaccine might beat COVID-19 pandemic in the next few months. Obviously, it fits the second set of keywords well.
Exact word matching is one way to do it, but more complex situations might arise, for which it might make sense to use base forms of words (e.g. duck instead of ducks, or run instead of running) to enhance the algorithm. Now we're talking NLP already.
I experimented with Spacy word and document embeddings (example) to determine similarity between a headline and each set of keywords. Is it a good idea to calculate document similarity between a full sentence and a limited number of keywords? Are there other ways?
Related: What NLP tools to use to match phrases having similar meaning or semantics
There is not one correct solution for such a task. You have to try what fits your problem!
Possible ways to solve your problem I can think of:
Matching: either exact, or more elaborate approaches such as lemmatization/stemming, or Levenshtein distance.
Embedding Similarity: I suspect word-level similarity would outperform document-to-keywords similarity, but again, just experiment with it (see the sketch after this list).
Classification: Your problem seems to be a classic classification problem, with each set being one class. If you don't have enough labeled training data, you could try active learning.
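For the embedding-similarity route, a minimal sketch with spaCy (assuming the en_core_web_md model is installed, since the small models ship without proper word vectors):
import spacy

nlp = spacy.load('en_core_web_md')  # the medium model ships with word vectors

keyword_sets = [['democracy', 'votes', 'democrats'],
                ['health', 'corona', 'vaccine', 'pandemic'],
                ['security', 'police', 'demonstration']]

headline = nlp('New Pfizer vaccine might beat COVID-19 pandemic in the next few months.')

# Compare the headline against each keyword set and pick the best match.
scores = [headline.similarity(nlp(' '.join(keywords))) for keywords in keyword_sets]
best = scores.index(max(scores))
print(scores, '-> best set:', keyword_sets[best])
Averaging per-token similarities between the headline words and each keyword, instead of comparing whole documents, is the word-level variant mentioned above and is worth trying as well.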
I have two files: one is a CSV and the other is a text file. Both of them contain Unicode words. My task is to compare words from these two files to correct spelling mistakes (the CSV file contains misspelled words and the text file contains the correct words). The CSV file contains around 1000 words and the text file contains 5000 words.
I have implemented the following code for this task, and since I'm new to Python it is very inefficient. What are the suggestions to make it more efficient?
import pandas as pd
import nltk
df = pd.read_csv('C:/mis_spel.csv',encoding='utf8')
list_check_words = df['words'].tolist()
df2 = pd.read_csv('C:/words.txt',encoding='utf8',delimiter='\t')
list_words = df2['word'].tolist()
for word in list_check_words:
    for dix in list_words:
        ed = nltk.edit_distance(word, dix)
        if ed < 2:
            print(word, dix, ed)
This might be overkill for your use-case, but I'm putting it here anyway. AFAIK, these days the industry standard for spelling auto-correction involves looking at the problem through the lens of word embeddings. In older times, an n-gram based probabilistic approach was used, but not so much any more.
What you'd want to do, is probably something like the following:
Train a model to produce character-level word embeddings
Project the entire dictionary to your vector space and build an index for efficient search
For each misspelled word, pair it with its nearest neighbor.
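As a rough illustration of that pipeline (not the exact setup from the articles referenced below), gensim's FastText builds vectors from character n-grams, so even out-of-vocabulary misspellings can be mapped to a nearest dictionary word. A sketch assuming gensim 4.x:
from gensim.models import FastText

# Train subword-aware embeddings; list_words is the dictionary of correct
# words from the question. Real use would call for a larger corpus, as the
# articles below describe; this only illustrates the pipeline.
model = FastText(sentences=[list_words], vector_size=50, min_n=2, max_n=4,
                 min_count=1, epochs=50)

for word in list_check_words:
    # Nearest-neighbour lookup works even for misspelled/out-of-vocabulary
    # words, because FastText composes vectors from character n-grams.
    candidate, score = model.wv.most_similar(word, topn=1)[0]
    print(word, '->', candidate, round(score, 3))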
I'm adding references to two different articles below, which explain this in much greater detail. One suggestion though: please try exploring ANNOY indexing from gensim; it's very fast for approximate nearest-neighbor search, speaking from personal experience.
article 1: Embedding for spelling correction
article 2: Spelling Correction Using Deep Learning: How Bi-Directional LSTM with Attention Flow works in Spelling Correction
I have a large number of strings in a list:
A small example of the list contents is:
["machine learning","Apple","Finance","AI","Funding"]
I wish to convert these into vectors and use them for clustering purpose.
Is the context of these strings in the sentences considered while finding out their respective vectors?
How should I go about getting the vectors of these strings if I have just this list containing the strings?
This is the code I have written so far:
from gensim.models import Word2Vec
vec = Word2Vec(mylist)
P.S. Also, can I get a good reference/tutorial on Word2Vec?
To find word vectors using word2vec you need a list of sentences, not a list of strings.
What word2vec does is go through every word in a sentence and, for each word, try to predict the words around it in a specified window (usually around 5), adjusting the vector associated with that word so that the error is minimized.
Obviously, this means that the order of words matters when finding word vectors. If you just supply a list of strings without a meaningful order, you will not get a good embedding.
I'm not sure, but I think you will find LDA better suited in this case, because your list of strings doesn't have any inherent order.
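If you do get hold of actual sentences, a minimal sketch of training Word2Vec on tokenized sentences (assuming gensim 4.x, where the size parameter is called vector_size; the toy sentences are made up for illustration):
from gensim.models import Word2Vec

# Toy tokenized sentences, purely for illustration.
sentences = [['machine', 'learning', 'is', 'fun'],
             ['apple', 'is', 'looking', 'at', 'ai', 'funding'],
             ['finance', 'and', 'ai', 'research', 'need', 'funding']]

model = Word2Vec(sentences=sentences, vector_size=50, window=5,
                 min_count=1, epochs=100)

print(model.wv['ai'])               # the learned vector for one word
print(model.wv.most_similar('ai'))  # neighbours (poor with this little data)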
Answers to your 2 questions:
Is the context of these strings in the sentences considered while finding out their respective vectors?
Yes, word2vec creates one vector per word (or string, since it can consider a multiword expression as a unique word, e.g. New York); this vector describes the word by its context. It assumes that similar words will appear in similar contexts. The context is composed of the surrounding words (in a window, with a bag-of-words or skip-gram assumption).
How should I go about with getting the vectors of these strings if i have just this list containing the strings?
You need more words. Word2Vec's output quality depends on the size of the training set. Training Word2Vec on your data alone does not make sense.
The links provided by @Beta are a good introduction/explanation.
Word2Vec is an artificial neural network method. Word2Vec actually creates embeddings, which reflect the relationships among the words. The links below will help you get the complete code to implement Word2Vec.
Some good links are this and this. For the 2nd link, try his GitHub repo for the detailed code; he explains only the major parts in the blog. The main article is this.
You can use the following code to convert words to their corresponding numerical values.
from collections import Counter

# `words` is assumed to be a flat list of tokens from your corpus
word_counts = Counter(words)
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
word2vec + context = doc2vec
Build sentences from text you have and tag them with labels.
Train doc2vec on tagged sentences to get vectors for each label embedded in the same space as words.
Then you can do vector inference and get labels for an arbitrary piece of text.
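A minimal sketch of that idea with gensim's Doc2Vec (assuming gensim 4.x, where the document vectors live in model.dv; the labels and texts are made up for illustration):
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Made-up labelled texts, purely for illustration.
raw = [('funding', 'the startup raised a new round of funding'),
       ('machine_learning', 'machine learning models need lots of training data')]

tagged = [TaggedDocument(words=text.split(), tags=[label]) for label, text in raw]

model = Doc2Vec(tagged, vector_size=50, min_count=1, epochs=100)

# Infer a vector for an arbitrary piece of text and find the closest label.
vector = model.infer_vector('new investment round announced'.split())
print(model.dv.most_similar([vector], topn=1))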
I am trying to get the bigrams in the sentences using Phrases in Gensim as follows.
from gensim.models import Phrases
from gensim.models.phrases import Phraser
documents = ["the mayor of new york was there", "machine learning can be useful sometimes","new york mayor was present"]
sentence_stream = [doc.split(" ") for doc in documents]
#print(sentence_stream)
bigram = Phrases(sentence_stream, min_count=1, threshold=2, delimiter=b' ')
bigram_phraser = Phraser(bigram)
for sent in sentence_stream:
    tokens_ = bigram_phraser[sent]
    print(tokens_)
Even though it catches "new", "york" as "new york", it does not catch "machine", "learning" as "machine learning".
However, in the example shown on the Gensim website they were able to catch the words "machine", "learning" as "machine learning".
Please let me know how to get "machine learning" as a bigram in the above example.
The technique used by gensim Phrases is purely based on statistics of co-occurrences: how often words appear together, versus alone, in a formula also affected by min_count and compared against the threshold value.
It is only because your training set has 'new' and 'york' occur alongside each other twice, while other words (like 'machine' and 'learning') only occur alongside each other once, that 'new_york' becomes a bigram, and other pairings do not. What's more, even if you did find a combination of min_count and threshold that would promote 'machine_learning' to a bigram, it would also pair together every other bigram-that-appears-once – which is probably not what you want.
Really, to get good results from these statistical techniques, you need lots of varied, realistic data. (Toy-sized examples may superficially succeed, or fail, for superficial toy-sized reasons.)
Even then, they will tend to miss combinations a person would consider reasonable, and make combinations a person wouldn't. Why? Because our minds have much more sophisticated ways (including grammar and real-world knowledge) for deciding when clumps of words represent a single concept.
So even with more and better data, be prepared for nonsensical n-grams. Tune or judge the model on whether it is overall improving on your goal, not on any single point or ad-hoc check of matching your own sensibility.
(Regarding the referenced gensim documentation comment, I'm pretty sure that if you try Phrases on just the two sentences listed there, it won't find any of the desired phrases – not 'new_york' or 'machine_learning'. As a figurative example, the ellipses ... imply the training set is larger, and the results indicate that the extra unshown texts are important. It's just because of the 3rd sentence you've added to your code that 'new_york' is detected. If you added similar examples to make 'machine_learning' look more like a statistically-outlying pairing, your code could promote 'machine_learning', too.)
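To illustrate, if you add a couple more sentences in which 'machine' and 'learning' co-occur, the same approach should start promoting that pairing as well (exact behaviour depends on min_count, threshold, and your gensim version; the default delimiter joins with '_'):
from gensim.models import Phrases
from gensim.models.phrases import Phraser

documents = ["the mayor of new york was there",
             "machine learning can be useful sometimes",
             "new york mayor was present",
             "machine learning is a popular field",
             "she studies machine learning and statistics"]
sentence_stream = [doc.split(" ") for doc in documents]

bigram = Phrases(sentence_stream, min_count=1, threshold=2)
bigram_phraser = Phraser(bigram)

for sent in sentence_stream:
    print(bigram_phraser[sent])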
It is probably below your threshold?
Try using more data.
I am wondering if it's possible to calculate the distance/similarity between two related words in Python (like "fraud" and "steal"). These two words are not synonymous per se but they are clearly related. Are there any concepts/algorithms in NLP that can show this relationship numerically? Maybe via NLTK?
I'm not looking for the Levenshtein distance as that relates to the individual characters that make up a word. I'm looking for how the meaning relates.
Would appreciate any help provided.
My suggestion is as follows:
Put each word through the same thesaurus, to get a list of synonyms.
Get the size of the intersection of the synonym sets for the two words.
That is a measure of similarity between the words.
If you would like to do a more thorough analysis:
Also get the antonyms for each of the two words.
Get the size of the intersection of the sets of antonyms for the two words.
If you would like to go further!...
Put each word through the same thesaurus, to get a list of synonyms.
Use the top n (=5, or whatever) words from the query result to initiate a new query.
Repeat this to a depth you feel is adequate.
Make a collection of synonyms from the repeated synonym queries.
Get the size of the intersection of the two collections of synonyms.
That is a measure of similarity between the words.
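A rough sketch of the basic (depth-1) version of this idea, using WordNet as the thesaurus (the helper names are made up):
from nltk.corpus import wordnet as wn

def synonym_set(word):
    # All lemma names across every sense of the word, lowercased.
    return {name.lower() for synset in wn.synsets(word) for name in synset.lemma_names()}

def overlap_score(word1, word2):
    s1, s2 = synonym_set(word1), synonym_set(word2)
    shared = s1 & s2
    # Raw measure: number of shared synonyms. Could be normalized, e.g. by len(s1 | s2).
    return len(shared)

print(overlap_score('fraud', 'steal'))
Going deeper, as described above, means feeding the top results back into synonym_set and unioning the returned sets before comparing.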
NLTK's wordnet is the tool you'd want to use for this. First get the set of all the senses of each word using:
synonymSet = wordnet.synsets(word)
Then loop through each possible sense of each of the 2 words and compare them to each other in a nested loop:
similarity = synonym1.res_similarity(synonym2,semcor_ic)
Either average that value or use the maximum you find; up to you.
This example is using a word similarity comparison that uses "IC" or information content. This will score similarity higher if the word is more specific, or contains more information, so generally it's closer to what we mean when we think about word similarity.
To use this stuff you'll need the imports and variables:
import nltk
from nltk.corpus import wordnet
from nltk.corpus import wordnet_ic
semcor_ic = wordnet_ic.ic('ic-semcor.dat')
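Putting those pieces together, a hedged sketch (Resnik similarity is only defined between senses of the same part of speech, so this restricts to noun senses; max_res_similarity is a made-up name):
import nltk
from nltk.corpus import wordnet
from nltk.corpus import wordnet_ic

semcor_ic = wordnet_ic.ic('ic-semcor.dat')

def max_res_similarity(word1, word2):
    best = 0.0
    # Resnik similarity needs senses with the same part of speech; use nouns here.
    for synonym1 in wordnet.synsets(word1, pos=wordnet.NOUN):
        for synonym2 in wordnet.synsets(word2, pos=wordnet.NOUN):
            best = max(best, synonym1.res_similarity(synonym2, semcor_ic))
    return best

print(max_res_similarity('fraud', 'steal'))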
As @jose_bacoy suggested above, the Gensim library can provide a measure of similarity between words using the word2vec technique. The example below is modified from this blog post. You can run it in Google Colab.
Google Colab comes with the Gensim package installed. We can import the part of it we require:
from gensim.models import KeyedVectors
We will download word vectors pretrained on Google News data, and load them up:
!wget -P /root/input/ -c "https://s3.amazonaws.com/dl4j-distribution/GoogleNews-vectors-negative300.bin.gz"
word_vectors = KeyedVectors.load_word2vec_format('/root/input/GoogleNews-vectors-negative300.bin.gz', binary=True)
This gives us a measure of similarity between any two words. To use your example:
word_vectors.similarity('fraud', 'steal')
>>> 0.19978741
A similarity of about 0.2 may seem surprisingly low if you consider these words to be similar. But consider that fraud is a noun and steal is generally a verb. This will give them very different associations as viewed by word2vec.
They become much more similar if you modify the noun to become a verb:
word_vectors.similarity('defraud', 'steal')
>>> 0.43293646