Sentence Embedding Clustering - python

I am working on a small project in which I need to eliminate irrelevant information (ads, for instance) from the HTML content I extracted from websites. Since I am a beginner in NLP, I came up with a simple approach after doing some research.
The language used on the websites is mainly Chinese, and I stored each sentence (split on commas) in a list. I used a model called HanLP to do semantic parsing on my sentences, producing something like this:
[['萨哈夫', '说', ',', '伊拉克', '将', '同', '联合国', '销毁', '伊拉克', '大', '规模', '杀伤性', '武器', '特别', '委员会', '继续', '保持', '合作', '。'],
['上海', '华安', '工业', '(', '集团', ')', '公司', '董事长', '谭旭光', '和', '秘书', '张晚霞', '来到', '美国', '纽约', '现代', '艺术', '博物馆', '参观', '。']]
I found a pretrained Chinese word embedding database and used it to look up the word embeddings for the tokens in my list. My approach is then to get the sentence embedding by taking the element-wise average of the word embeddings in each sentence. I now have a list containing the sentence embedding vector of each individual sentence I parsed. For example:
sentence: ['各国', '必须', '“', '大', '规模', '”', '支出', '》', '的', '报道', '称']
sentence embedding: [0.08130878633396192, -0.07660450288941237, 0.008989107615145093, 0.07014013996178453, 0.028158639980988068, 0.01821030060422014, 0.017793822186914356, 0.04148909364911643, 0.019383941353722053, 0.03080177273262631, -0.025636445207055658, -0.019274188523096116, 0.0007501963356679136, 0.00476544528183612, -0.024648051539605313, -0.011124626140702854, -0.0009071269834583455, -0.08850407109341839, 0.016131568784740837, -0.025241035714068195, -0.041586867829954084, -0.0068722023954085835, -0.010853541125966744, 0.03994347004812549, 0.04977656596086242, 0.029051605612039566, -0.031031965550606732, 0.05125975541093133, 0.02666312647687102, 0.0376262941096105, -0.00833959155716002, 0.035523645325817844, -0.0026961421932686458, 0.04742895790629766, -0.07069634984840047, -0.054931600324132225, 0.0727336619218642, 0.0434290729039772, -0.09277284060689536, -0.020194332538680596, 0.0011523241092535582, 0.035080605863847515, 0.13034072890877724, 0.06350403482263739, -0.04108352984555743, 0.03208382343026725, -0.08344872626052662, -0.14081071757457472, -0.010535095733675089, -0.04253014939075166, -0.06409504175694151, 0.04499104322696274, -0.1153958263722333, 0.011868207969448784, 0.032386500388383865, -0.0036963022192305125, 0.01861521213802255, 0.05440248447385701, 0.026148285970769146, 0.011136160687204789, 0.04259885661303997, 0.09219381585717201, 0.06065366725141013, -0.015763109010136264, -0.0030524068596688185, 0.0031816939061338253, -0.01272551697382534, 0.02884035756472837, -0.002176688645373691, -0.04119681418788704, -0.08371328799562021, 0.007803680078888481, 0.0917377421124415, 0.027042210250246255, -0.0168504383076321, -0.0005781924013387073, 0.0075592477594248276, 0.07226487367667934, 0.005541681396690282, 0.001809495755217292, 0.011297995647923513, 0.10331092673269185, 0.0034428672357039018, 0.07364177612841806, 0.03861967177892273, -0.051503680434755304, -0.025596174390309236, 0.014137779785828157, -0.08445698734034192, -0.07401955000717532, 0.05168289600194178, -0.019313615386966954, 0.007136409255591306, -0.042960755484686655, 0.01830706542188471, -0.001172357662157579, -0.008949846103364094, -0.02356141348454085, -0.05277112944432619, 0.006653293967247009, -0.00572453092106364, 0.049479073389771984, -0.03399876727913083, 0.029434629207984966, -0.06990156170319427, 0.0924786920659244, 0.015472117049450224, -0.10265431468459693, -0.023421658562834968, 0.004523425542918796, -0.008990391665561632, -0.06445665437389504, 0.03898039324717088, -0.025552247142927212, 0.03958867977119305, -0.03243451675569469, -0.03848901360338046, -0.061713250523263756, -0.00904815017499707, -0.03730008362750099, 0.02715366007760167, -0.08498009599067947, -0.00397337388924577, -0.0003402943098494275, 0.008005982349542055, 0.05871503853069788, -0.013795949010686441, 0.007956360128115524, -0.024331797295334665, 0.03842244771393863, -0.04393653944134712, 0.02677931230176579, 0.07715398648923094, -0.048624055216681554, -0.11324723844882101, -0.08751555024222894, -0.02469049582511864, -0.08767948790707371, -0.021930147846102376, 0.011519658294591036, -0.08155732788145542, -0.10763703049583868, -0.07967398501932621, -0.03249315629628571, 0.02701333300633864, -0.015305672687563028, 0.002375963249836456, 0.012275356545367024, -0.02917095824060115, 0.02626959386874329, -0.0158629031767222, -0.05546591058373451, -0.023678493686020374, -0.048296650278974666, -0.06167154920033433, 0.004435380412773652, 0.07418209609617903, 0.03524015434297987, 0.063185997529548, -0.05814945189790292, 
0.13036084697920491, -0.03370768073099581, 0.03256692289671099, 0.06808869439092549, 0.0563600350340659, 5.7854774323376745e-05, -0.0793171048333699, 0.03862177783792669, 0.007196083004766313, 0.013824320821599527, 0.02798982642707415, -0.00918149473992261, -0.00839392692697319, 0.040496235374699936, -0.007375971498814496, -0.03586547057652338, -0.03411220566538924, -0.025101724758066914, -0.005714270286262035, 0.07351569867354225, -0.024216756182299418, 0.0066968070935796604, -0.032809603959321976, 0.05006068360737779, 0.0504626590250568, 0.04525104385208, -0.027629732069644062, 0.10429493219337681, -0.021474285961382768, 0.018212029964409092, 0.07260083373297345, 0.026920156976716084, 0.043199389770796355, -0.03641596379351209, 0.0661080302670598, 0.09141866947439584, 0.0157452768815512, -0.04552285996297459, -0.03509725736115466, 0.02857604629190808]
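For reference, the averaging step is essentially the following (a minimal sketch; embedding_lookup is just a placeholder for however I load the pretrained vectors):
import numpy as np

def sentence_embedding(tokens, embedding_lookup):
    # embedding_lookup maps a token to its pretrained vector of shape (N,);
    # tokens without a pretrained vector are simply skipped.
    vectors = [embedding_lookup[t] for t in tokens if t in embedding_lookup]
    return np.mean(vectors, axis=0) if vectors else None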
My next step is to cluster these sentence embedding vectors and find the sentences whose content is clearly irrelevant compared to the others.
Does my approach even make sense? If it does, what tools can I use to cluster my sentence embeddings? I have seen approaches such as k-means or computing L2 distances, but I am not sure how to implement them.
Thanks!

The approach makes sense if you are trying to get rid of sentences which do not contribute to the downstream analysis, but an element-wise average may not be the best way to construct the sentence embeddings. A better way is to take the individual word embeddings and combine them using tf-idf weights:
import numpy as np

sentence = [w1, w2, w3]
word_vectors = [v1, v2, v3]  # each v is of shape (N,), where N is the size of the embedding
term_frequency_of_word = [t1, t2, t3]
inverse_doc_freq = [idf1, idf2, idf3]

# The weight of each word is its tf-idf score
word_weights = [tf * idf for tf, idf in zip(term_frequency_of_word, inverse_doc_freq)]

# Weighted sum of the word vectors
sentence_vector = np.zeros(N)
for weight, vector in zip(word_weights, word_vectors):
    sentence_vector += weight * vector
By applying tf-idf scaling, your sentence embedding will move towards the embedding of the most important word(s) in the sentence, which should make it easier for clustering algorithms to filter out unwanted sentences.
Here is a quick tutorial on TF-IDF: http://www.tfidf.com
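If it helps, here is a rough sketch of computing those weights with scikit-learn's TfidfVectorizer and applying them to pretrained vectors. The names tokenized_sentences, embeddings (a dict-like pretrained lookup), and embedding_dim are placeholders for your own data, so treat this as an outline rather than a drop-in solution:
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Tokenized sentences joined back into whitespace-separated strings.
docs = [' '.join(tokens) for tokens in tokenized_sentences]

vectorizer = TfidfVectorizer(tokenizer=str.split, lowercase=False)
tfidf = vectorizer.fit_transform(docs)            # shape: (num_sentences, vocab_size)
vocab = vectorizer.get_feature_names_out()

sentence_vectors = []
for row in tfidf:
    weights = row.toarray().ravel()
    vec = np.zeros(embedding_dim)
    total = 0.0
    for idx in row.indices:
        word = vocab[idx]
        if word in embeddings:                    # pretrained lookup, placeholder
            vec += weights[idx] * embeddings[word]
            total += weights[idx]
    sentence_vectors.append(vec / total if total else vec)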

For clustering you can try k-means, but that algorithm only uses the Euclidean metric. If you want to use another distance (e.g. cosine distance), k-medoids is also suitable. In Python, you can find KMeans in the scikit-learn library. To try KMedoids, you should install the scikit-learn-extra library (https://scikit-learn-extra.readthedocs.io/en/latest/generated/sklearn_extra.cluster.KMedoids.html) or this one: https://github.com/letiantian/kmedoids
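A minimal sketch of both options, assuming sentence_vectors is the list of sentence embeddings from above (the cluster count of 5 is arbitrary):
import numpy as np
from sklearn.cluster import KMeans
from sklearn_extra.cluster import KMedoids

X = np.vstack(sentence_vectors)  # shape: (num_sentences, embedding_dim)

# k-means with Euclidean distance
kmeans_labels = KMeans(n_clusters=5, random_state=0).fit_predict(X)

# k-medoids with cosine distance
kmedoids_labels = KMedoids(n_clusters=5, metric='cosine', random_state=0).fit_predict(X)

# Inspect cluster sizes; small or outlying clusters are candidates for "irrelevant" text.
print(np.bincount(kmeans_labels))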

Related

cluster most similar words with pre trained word2vec model using k means clustering

I have built a word2vec model on the sentences of several scientific research papers and retrieved the top 100 most similar words for the term "online_platform":
# Create a model with a context window of 4 words. Only consider words
# that appear at least 10 times in the corpus. Use 3 CPU cores for
# estimating the model.
model_online_platform = word2vec.Word2Vec(sentences_online_platform, window=4, min_count=10, workers=3)
online_platform_most_similar = model_online_platform.most_similar("online_platform", topn=100)
Here are some of the most similar words:
most_similar_words=[('event', 0.6746241450309753),
('date', 0.612309992313385),
('stems', 0.6009981632232666),
('draws', 0.5935811996459961),
('company', 0.5848336219787598), ...]
From this list of tuples I retrieved only the words, leaving out the similarity scores:
first_tuple_elements = ['event', 'date', 'stems', 'draws', 'company',...]
Then I used a pre-trained word2vec model (glove-wiki-gigaword-300) to get the vector representation of each word from that pre-trained model instead of from my own word2vec model:
vectores = []
for word in first_tuple_elements:
    vectores.append(word_vectors_wiki_gigaword[word])
print(vectores)
This gave me a list of arrays, one for each word.
Right now I am struggling to link the arrays (vector representations) back to the associated words.
What I want to have is a list of tuples that I can then use for k-means clustering:
[
('event', array([...,...,..]),
('date', array([...,...,..]),
etc....
]
I think that with this kind of list of tuples I might be able to do k means clustering, but I also might be wrong...
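For instance, something along these lines might work (a rough sketch assuming word_vectors_wiki_gigaword is a gensim KeyedVectors object; words missing from its vocabulary are skipped, and n_clusters=5 is arbitrary):
import numpy as np
from sklearn.cluster import KMeans

# Pair each word with its pretrained vector.
word_vector_pairs = [(w, word_vectors_wiki_gigaword[w])
                     for w in first_tuple_elements
                     if w in word_vectors_wiki_gigaword]

words = [w for w, _ in word_vector_pairs]
X = np.vstack([v for _, v in word_vector_pairs])

# Cluster the vectors and print the words grouped by cluster.
labels = KMeans(n_clusters=5, random_state=0).fit_predict(X)
for cluster_id in range(5):
    print(cluster_id, [w for w, lab in zip(words, labels) if lab == cluster_id])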

How can I find the probability of a sentence using GPT-2?

I'm trying to write a program that, given a list of sentences, returns the most probable one. I want to use GPT-2, but I am quite new to it (as in, I don't really know how to use it). I'm planning on finding the probability of a word given the previous words and multiplying all the probabilities together to get the overall probability of that sentence occurring; however, I don't know how to find the probability of a word given the previous words. This is my (pseudo) code:
sentences = # my list of sentences
max_prob = 0
best_sentence = sentences[0]

for sentence in sentences:
    prob = 1  # probability of that sentence
    for idx, word in enumerate(sentence.split()[1:]):
        prob *= probability(word, " ".join(sentence.split()[:idx + 1]))  # this is where I need help
    if prob > max_prob:
        max_prob = prob
        best_sentence = sentence

print(best_sentence)
Can I have some help please?
You can also try lm-scorer, a tiny wrapper around transformers that allows you to get sentence probabilities using models that support it (only GPT-2 models are implemented at the time of writing).
https://github.com/simonepri/lm-scorer
I just used it myself and it works perfectly.
Warning: If you use other transformers / pipelines in the same environment, things may get messy.
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import numpy as np

model = GPT2LMHeadModel.from_pretrained('gpt2')
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')

def score(tokens_tensor):
    # Passing labels makes the model return the average cross-entropy loss
    # over the sequence; exponentiating it gives the perplexity.
    loss = model(tokens_tensor, labels=tokens_tensor)[0]
    return np.exp(loss.cpu().detach().numpy())

texts = ['i would like to thank you mr chairman', 'i would liking to thanks you mr chair in', 'thnks chair']
for text in texts:
    tokens_tensor = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt")
    print(text, score(tokens_tensor))
This code snippet could be an example of what you are looking for. You feed the model a list of sentences, and it scores each one, where lower is better.
The output of the code above is:
i would like to thank you mr chairman 122.3066
i would liking to thanks you mr chair in 1183.7637
thnks chair 14135.129
I wrote a set of functions that can do precisely what you're looking for. Recall that GPT-2 parses its input into tokens (not words): the last word in 'Joe flicked the grasshopper' is actually three tokens: ' grass', 'ho', and 'pper'. The cloze_finalword function takes this into account, and computes the probabilities of all tokens (conditioned on the tokens appearing before them). You can adapt part of this function so that it returns what you're looking for. I hope you find the code useful!
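As a minimal sketch of the same idea (summing the log-probability of each token conditioned on the tokens before it) using the transformers API; the names below, such as sentence_log_prob, are illustrative rather than the actual cloze_finalword code:
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()

def sentence_log_prob(text):
    input_ids = tokenizer.encode(text, return_tensors='pt')
    with torch.no_grad():
        logits = model(input_ids)[0]                        # (1, seq_len, vocab_size)
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)   # predictions for tokens 1..end
    targets = input_ids[0, 1:]
    token_log_probs = log_probs[torch.arange(targets.size(0)), targets]
    return token_log_probs.sum().item()                     # log P(sentence), excluding the first token

sentences = ['i would like to thank you mr chairman', 'thnks chair']
print(max(sentences, key=sentence_log_prob))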
I think GPT-2 is a bit overkill for what you're trying to achieve. You can build a basic language model which will give you sentence probability using NLTK. A tutorial for this can be found here.

NLP for multi feature data set using TensorFlow

I am just a beginner in this subject. I have tested some NNs for image recognition, as well as NLP for sequence classification.
This second topic is interesting for me.
Using
sentences = [
    'some test sentence',
    'and the second sentence'
]

tokenizer = Tokenizer(num_words=100, oov_token='<OOV>')
tokenizer.fit_on_texts(sentences)
sentences = tokenizer.texts_to_sequences(sentences)
will result in an array of size [n, 1], where n is the number of words in the sentence. And assuming I have implemented padding correctly, each training example in the set will be of size [n, 1], where n is the max sentence length.
That prepared training set I can pass into keras model.fit.
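For completeness, the padding step I mean is something like this (maxlen=20 here is just an arbitrary choice):
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Pad/truncate every tokenized sentence to the same length, giving an
# array of shape (num_sentences, maxlen) that can go into model.fit.
padded = pad_sequences(sentences, maxlen=20, padding='post', truncating='post')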
What about when I have multiple features in my data set?
Let's say I would like to build an event prioritization algorithm and my data structure would look like:
[event_description, event_category, event_location, label]
Trying to tokenize such an array would result in an [n, m] matrix, where n is the maximum sentence length (in words) and m is the number of features.
How do I prepare such a dataset so a model can be trained on it?
Would this approach be OK:
# Go through the training set to collect each feature into a separate array
for data in dataset:
    training_sentence.append(data['event_description'])
    training_category.append(data['event_category'])
    training_location.append(data['event_location'])
    training_labels.append(data['label'])

# Tokenize each array of raw text
tokenizer.fit_on_texts(training_sentence)
tokenizer.fit_on_texts(training_category)
tokenizer.fit_on_texts(training_location)

sequences = tokenizer.texts_to_sequences(training_sentence)
categories = tokenizer.texts_to_sequences(training_category)
locations = tokenizer.texts_to_sequences(training_location)

# Concatenate the feature arrays into one
training_example = numpy.concatenate([sequences, categories, locations])

# omitting model definition, training the model
model.fit(training_example, training_labels, epochs=num_epochs, validation_data=(testing_padded, testing_labels_final))
I haven't tested it yet. I just want to make sure that I understand everything correctly and that my assumptions are correct.
Is this a correct approach to doing NLP with a NN?
I know of two common ways to manage multiple input sequences, and your approach lands somewhere between them.
One approach is to design a multi-input model with each of your text columns as a different input. They can share the vocabulary and/or embedding layer later, but for now you still need a distinct input sub-model for each of description, category, etc.
Each of these becomes an input to the network, using the Model(inputs=[...], outputs=rest_of_nn) syntax. You will need to design rest_of_nn so it can take multiple inputs. This can be as simple as your current concatenation, or you could use additional layers to do the synthesis.
It could look something like this:
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense, Flatten, concatenate
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing.text import Tokenizer

# Build separate vocabularies. This could be shared.
desc_tokenizer = Tokenizer()
desc_tokenizer.fit_on_texts(training_sentence)
desc_vocab_size = len(desc_tokenizer.word_index) + 1  # +1 for the padding index 0

categ_tokenizer = Tokenizer()
categ_tokenizer.fit_on_texts(training_category)
categ_vocab_size = len(categ_tokenizer.word_index) + 1

# Inputs.
desc = Input(shape=(desc_maxlen,))
categ = Input(shape=(categ_maxlen,))

# Input encodings, opting for different embeddings.
# Descriptions go through an LSTM as a demo of extra processing.
embedded_desc = Embedding(desc_vocab_size, desc_embed_size, input_length=desc_maxlen)(desc)
encoded_desc = LSTM(categ_embed_size, return_sequences=True)(embedded_desc)
encoded_categ = Embedding(categ_vocab_size, categ_embed_size, input_length=categ_maxlen)(categ)

# Rest of the NN, which knows how to put everything together to get an output.
merged = concatenate([encoded_desc, encoded_categ], axis=1)
rest_of_nn = Dense(hidden_size, activation='relu')(merged)
rest_of_nn = Flatten()(rest_of_nn)
rest_of_nn = Dense(output_size, activation='softmax')(rest_of_nn)

# Create the model, assuming some sort of classification problem.
model = Model(inputs=[desc, categ], outputs=rest_of_nn)
model.compile(optimizer='adam', loss=K.categorical_crossentropy)
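A usage sketch for fitting this two-input model, assuming desc_padded and categ_padded are the padded integer sequences and labels are one-hot encoded (these variable names are placeholders):
# Each input of the Model gets its own array, passed in the same order
# as the inputs=[desc, categ] list above.
model.fit([desc_padded, categ_padded], labels,
          epochs=10, validation_split=0.1)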
The second approach is to concatenate all of your data before encoding it, and then treat everything as a more standard single-sequence problem after that. It is common to use a unique token to separate or define the different fields, similar to BOS and EOS for the beginning and end of the sequence.
It would look something like this:
XXBOS XXDESC This event will be fun. XXCATEG leisure XXLOC Seattle, WA XXEOS
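A small sketch of that tagging scheme, with the marker names chosen arbitrarily; the combined strings then go through a single Tokenizer like any other text column:
def to_tagged_text(description, category, location):
    # Wrap each field with marker tokens so a single sequence model can tell them apart.
    return f"XXBOS XXDESC {description} XXCATEG {category} XXLOC {location} XXEOS"

texts = [to_tagged_text(d, c, l)
         for d, c, l in zip(training_sentence, training_category, training_location)]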
You can also do end tags for the fields like DESCXX, omit the BOS and EOS tokens, and generally mix and match however you want. You can even use this to combine some of your input sequences, but then use a multi-input model as above to merge the rest.
Speaking of mixing and matching, you also have the option to treat some of your inputs directly as an embedding. Low-cardinality fields like category and location do not need to be tokenized, and can be embedded directly without any need to split into tokens. That is, they don't need to be a sequence.
If you are looking for a reference, I enjoyed this paper on Large Scale Product Categorization using Structured and Unstructured Attributes. It tests all or most of the ideas I have just outlined, on real data at scale.

Finding closest related words using word2vec

My goal is to find the most relevant words given a set of keywords using word2vec. For example, if I have the set of words [girl, kite, beach], I would like word2vec to output relevant words such as: [flying, swimming, swimsuit...]
I understand that word2vec vectorizes a word based on the context of the surrounding words. So what I did was use the following function:
most_similar_cosmul(positive=['girl', 'kite', 'beach'])
However, it seems to give words that are not very related to the set of keywords:
['charade', 0.30288437008857727]
['kinetic', 0.3002534508705139]
['shells', 0.29911646246910095]
['kites', 0.2987399995326996]
['7-9', 0.2962781488895416]
['showering', 0.2953910827636719]
['caribbean', 0.294752299785614]
['hide-and-go-seek', 0.2939240336418152]
['turbine', 0.2933803200721741]
['teenybopper', 0.29288050532341003]
['rock-paper-scissors', 0.2928623557090759]
['noisemaker', 0.2927709221839905]
['scuba-diving', 0.29180505871772766]
['yachting', 0.2907838821411133]
['cherub', 0.2905363440513611]
['swimmingpool', 0.290039986371994]
['coastline', 0.28998953104019165]
['Dinosaur', 0.2893030643463135]
['flip-flops', 0.28784963488578796]
['guardsman', 0.28728148341178894]
['frisbee', 0.28687697649002075]
['baltic', 0.28405341506004333]
['deprive', 0.28401875495910645]
['surfs', 0.2839275300502777]
['outwear', 0.28376665711402893]
['diverstiy', 0.28341981768608093]
['mid-air', 0.2829524278640747]
['kickboard', 0.28234976530075073]
['tanning', 0.281939834356308]
['admiration', 0.28123530745506287]
['Mediterranean', 0.281186580657959]
['cycles', 0.2807052433490753]
['teepee', 0.28070521354675293]
['progeny', 0.2775532305240631]
['starfish', 0.2775339186191559]
['romp', 0.27724218368530273]
['pebbles', 0.2771730124950409]
['waterpark', 0.27666303515434265]
['tarzan', 0.276429146528244]
['lighthouse', 0.2756190896034241]
['captain', 0.2755546569824219]
['popsicle', 0.2753356397151947]
['Pohoda', 0.2751699686050415]
['angelic', 0.27499720454216003]
['african-american', 0.27493417263031006]
['dam', 0.2747344970703125]
['aura', 0.2740659713745117]
['Caribbean', 0.2739778757095337]
['necking', 0.27346789836883545]
['sleight', 0.2733519673347473]
This is the code I used to train word2vec
import ast
import codecs
import csv
import logging
import multiprocessing
import os

import nltk
from gensim.models import word2vec as w2v


def train(data_filepath, epochs=300, num_features=300, min_word_count=2, context_size=7,
          downsampling=1e-3, seed=1, ckpt_filename=None):
    """
    Train word2vec model
    :param data_filepath: path of the data file in csv format
    :param epochs: number of times to train
    :param num_features: increase to improve generality, more computationally expensive to train
    :param min_word_count: minimum frequency of word. Word with lower frequency will not be included in training data
    :param context_size: context window length
    :param downsampling: reduce frequency for frequent keywords
    :param seed: make results reproducible for random generator. Same seed means, after training model produces same results.
    :returns: path of the checkpoint after training
    """
    if ckpt_filename is None:
        data_base_filename = os.path.basename(data_filepath)
        data_filename = os.path.splitext(data_base_filename)[0]
        ckpt_filename = data_filename + ".wv.ckpt"

    num_workers = multiprocessing.cpu_count()
    logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
    nltk.download("punkt")
    nltk.download("stopwords")

    print("Training %s ..." % data_filepath)
    sentences = _get_sentences(data_filepath)

    word2vec = w2v.Word2Vec(
        sg=1,
        seed=seed,
        workers=num_workers,
        size=num_features,
        min_count=min_word_count,
        window=context_size,
        sample=downsampling
    )

    word2vec.build_vocab(sentences)
    print("Word2vec vocab length: %d" % len(word2vec.wv.vocab))
    word2vec.train(sentences, total_examples=len(sentences), epochs=epochs)
    return _save_ckpt(word2vec, ckpt_filename)


def _save_ckpt(model, ckpt_filename):
    if not os.path.exists("checkpoints"):
        os.makedirs("checkpoints")
    ckpt_filepath = os.path.join("checkpoints", ckpt_filename)
    model.save(ckpt_filepath)
    return ckpt_filepath


def _get_sentences(data_filename):
    print("Found Data:")
    sentences = []
    print("Reading '{0}'...".format(data_filename))
    with codecs.open(data_filename, "r") as data_file:
        reader = csv.DictReader(data_file)
        for row in reader:
            sentences.append(ast.literal_eval(row["highscores"]))
    print("There are {0} sentences".format(len(sentences)))
    return sentences


if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser(description='Train Word2vec model')
    parser.add_argument('data_filepath', help='path to training CSV file.')
    args = parser.parse_args()
    data_filepath = args.data_filepath
    train(data_filepath)
This is a sample of training data used for word2vec:
22751473,"[""lover"", ""sweetheart"", ""couple"", ""dietary"", ""meal""]"
28738542,"[""mallotus"", ""villosus"", ""shishamo"", ""smelt"", ""dried"", ""fish"", ""spirinchus"", ""lanceolatus""]"
25163686,"[""Snow"", ""Removal"", ""snow"", ""clearing"", ""female"", ""females"", ""woman"", ""women"", ""blower"", ""snowy"", ""road"", ""operate""]"
32837025,"[""milk"", ""breakfast"", ""drink"", ""cereal"", ""eating""]"
23828321,"[""jogging"", ""female"", ""females"", ""lady"", ""woman"", ""women"", ""running"", ""person""]"
22874156,"[""lover"", ""sweetheart"", ""heterosexual"", ""couple"", ""man"", ""and"", ""woman"", ""consulting"", ""hear"", ""listening""]
For prediction, I simply used the following function on a set of keywords:
most_similar_cosmul
I was wondering whether it is possible to find relevant keywords with word2vec. If it is not, then what machine learning model would be more suitable for this? Any insights would be very helpful.
When supplying multiple positive-word examples, like ['girl', 'kite', 'beach'], to most_similar()/most_similar_cosmul(), the vectors for those words are averaged together first, and then a list of words most similar to the average is returned. Those might not be as obviously related to any one of the words as the results of a single-word check would be. So:
When you try most_similar() (or most_similar_cosmul()) on a single word, what kind of results do you get? Are they words that seem related to the input word, in the way that you care about?
If not, you have deeper problems in your setup that should be fixed before trying a multi-word similarity.
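For example, a quick check along those lines (assuming your trained model is loaded as model; most_similar is the standard gensim call):
# Check single-word neighbours first; if these look wrong, fix data/parameters
# before worrying about multi-word queries.
for word in ['girl', 'kite', 'beach']:
    print(word, model.wv.most_similar(word, topn=10))

# Only then inspect the averaged multi-word query.
print(model.wv.most_similar(positive=['girl', 'kite', 'beach'], topn=10))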
Word2Vec gets its usual results from (1) lots of training data and (2) natural-language sentences. With enough data, a typical number of training passes over it (and thus the default epochs) is 5. You can sometimes, somewhat, make up for less data by using more epochs or a smaller vector size, but not always.
It's not clear how much data you have. Also, your example rows aren't real natural-language sentences – they appear to have had some other preprocessing/reordering applied. That may be hurting rather than helping.
Word-vectors often improve by throwing away more low-frequency words (increasing min_count above the default 5, rather than reducing it to 2). Low-frequency words don't have enough examples to get good vectors – and the few examples they have, even if repeated with many iterations, tend to be idiosyncratic examples of the words' usage, not the generalizable broad representations that you'd get from many varied examples. And by keeping these doomed-to-be-weak words in the training data, the training of other more-frequent words is interfered with. (When you get a word that you don't think belongs in a most-similar ranking, it may be a rare word that, given its few occurrence contexts, found its way to those coordinates as the least-bad location among plenty of other unhelpful coordinates.)
If you do get good results from single-word checks, but not from the average-of-multiple-words, the results might improve with more and better data, or adjusted training parameters – but to achieve that you'd need to more rigorously define what you consider good results. (Your existing list doesn't look that bad to me: it includes many words related to sun/sand/beach activities.)
On the other hand, your expectations of Word2Vec may be too high: it may not be that the average of ['girl', 'kite', 'beach'] is necessarily close to those desired words, compared to the individual words themselves, or that may only be achievable with lots of dataset/parameter tweaking.

Gensim - LDA create a document- topic matrix

I am working on a project where I need to apply topic modelling to a set of documents, and I need to create a matrix:
DT, a D × T matrix, where D is the number of documents and T is the number of topics. DT(i, j) contains the number of times a word in document D_i has been assigned to topic T_j.
So far I have followed this tut: https://rstudio-pubs-static.s3.amazonaws.com/79360_850b2a69980c4488b1db95987a24867a.html
I am new to gensim and so far I have:
1. created a document list
2. preprocessed and tokenized the documents
3. used corpora.Dictionary() to create the id -> term dictionary (id2word)
4. converted the tokenized documents into a document-term matrix
5. generated an LDA model
So now I get the topics.
How can I now get the matrix that I mentioned before?
I will be using this matrix to calculate the similarity between two documents on topic t as:
sim(a, b) = 1 - |DT(a, t) - DT(b, t)|
There is an implementation in the pyLDAvis source code which returns the lists that may be helpful for building the matrix you are interested in.
Snippet from the _extract_data method in gensim.py:
def _extract_data(topic_model, corpus, dictionary, doc_topic_dists=None):
    ...
    return {'topic_term_dists': topic_term_dists, 'doc_topic_dists': doc_topic_dists,
            'doc_lengths': doc_lengths, 'vocab': vocab, 'term_frequency': term_freqs}
The number of topics for your model will be static. Maybe you're interested in finding the document topic distribution for the T matrix. In that case, the DxT matrix would be doc_lengths x doc_topic_dists.
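As a rough sketch (assuming ldamodel and corpus as in the linked tutorial), a D × T document-topic matrix can also be built directly from gensim like this:
import numpy as np

num_topics = ldamodel.num_topics
doc_topic = np.zeros((len(corpus), num_topics))

for d, bow in enumerate(corpus):
    # minimum_probability=0 keeps every topic, so each row has all T entries.
    for topic_id, prob in ldamodel.get_document_topics(bow, minimum_probability=0):
        doc_topic[d, topic_id] = prob

# doc_topic[i, j] is the proportion of document i assigned to topic j;
# multiplying row i by the length of document i approximates assignment counts.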
Showing your code would be helpful, but if we were to go off of the example in the tutorial you linked then the model is identified by:
ldamodel = gensim.models.ldamodel.LdaModel(corpus, num_topics=2, id2word = dictionary, passes=20)
you could put into your script something like:
model_name = "name_of_my_model"
ldamodel.save(model_name)
Then when you run it, this will create a model in the same directory that the script is run from.
Then you can get topic probability distribution with:
print(ldamodel[doc_bow])
If you want to get similarity to this model then you need to create a model for the query document, too, and then get cosine similarity between the two:
dictionary = corpora.Dictionary.load('dictionary.dict')
corpus = corpora.MmCorpus("corpus.mm")
lda = models.LdaModel.load("name_of_my_model.lda")
index = similarities.MatrixSimilarity(lda[corpus])
index.save("simIndex.index")
docname = "docs/the_doc.txt"
doc = open(docname, 'r').read()
vec_bow = dictionary.doc2bow(doc.lower().split())
vec_lda = lda[vec_bow]
sims = index[vec_lda]
sims = sorted(enumerate(sims), key=lambda item: -item[1])
print(sims)
