NLP Text Classification Model with defined categories / context / intent - python

This is more of a guideline question than a technical query. I am looking to create a classification model that classifies documents against a specific list of strings. However, it turns out from the data that the output is more useful when you pass labelled intents/topics instead of letting the model guess. Based on my research so far, BERTopic might be the right way to achieve this, as it allows guided topic modeling; the only caveat is that each guided topic should contain words of similar meaning (see link below).
The example below might make clearer what I want to achieve. Suppose we have a chat extract from a conversation between a customer and a store associate; below is what the customer is asking for.
Hi, the product I purchased from your store is defective and I would like to get it replaced or refunded. Here is the receipt number and other details...
If my list of intended labels is ['defective or damaged', 'refund request', 'wrong label description', ...], we can see that the above extract qualifies as both 'defective or damaged' and 'refund request'. For the sake of simplicity, I would pick the label the model returns with the highest score, so that there is only one label per request. Now, if my data does not have these labels defined, I may be able to use zero-shot classification to "best guess" the intent from this list. However, as I understand zero-shot classification, and even BERTopic's guided topic modeling, the categories I want may not be derived, since the individual words in those categories do not mean the same thing.
For example, in BERTopic a guided topic could be seeded with ["space", "launch", "orbit", "lunar"] for a "Space Related" topic, but in my case the 3rd label would be ["wrong", "label", "description"], which would not work well: it would try to match every record that mentions a wrong address, wrong department, wrong color, etc. I am essentially looking for the combination of those 3 words in context. Also, those 3 words may not always appear together or in the same order, for example in this sentence -
The item had description which was labelled incorrectly.
The same challenge applies to zero-shot classification, where the labels are expected to be a single word or a combination of words that mean the same thing. Let me know if this clarifies the question or if I can clarify it further.
Approach mentioned above:
https://maartengr.github.io/BERTopic/getting_started/guided/guided.html#semi-supervised-topic-modeling
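For concreteness, this is the kind of zero-shot "best guess" I have in mind. It is only a minimal sketch assuming the Hugging Face transformers pipeline; the model name is just one common choice for NLI-based zero-shot classification.
from transformers import pipeline

# Zero-shot classifier; any NLI-style model could be substituted here
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

text = ("Hi, the product I purchased from your store is defective and I would like "
        "to get it replaced or refunded. Here is the receipt number and other details...")
labels = ["defective or damaged", "refund request", "wrong label description"]

# multi_label=True scores each label independently; labels/scores come back sorted
result = classifier(text, candidate_labels=labels, multi_label=True)
print(result["labels"][0], result["scores"][0])  # keep only the top-scoring label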

Related

Predicting score from text data and non-text data

This is for an assignment. I have ~100 publications on a dummy website. Each publication has been given an artificial score indicating how successful it is. I need to predict what affects that value.
So far, I've scraped the information that might affect the score and saved it into individual lists. The relevant part of my loop looks like:
for publicationurl:
    Predictor 1 = 3.025
    Predictor 2 = Journal A
    Predictor 3 = 0
    Response Variable = 42.5
    Title = Sentence
    Abstract = Paragraph
I can resolve most of that by putting predictors 1-3 and the response into a dataframe then doing regression. The bit that is tripping me up is the Title and Abstract text. I can strip them of punctuation and remove stopwords, but after that I'm not sure how to actually analyse them alongside the other predictors. I was looking into doing some kind of text-similarity comparison between high-high and high-low scoring pairs and basing whether the title and abstract affect the score based off of that, but am hoping there is a much neater method that allows me to actually put that text into a predictive model too.
I currently have 5 predictors besides the text and there are ~40,000 words in total across all titles and abstracts if any of that affects what kind of method works best. Ideally I'd like to end up being able to put everything into a single predictive model, but any method that can lead me to a working solution is good.
This would be an ideal situation for using Multinomial Naive Bayes. It is a relatively simple yet quite powerful method for classifying text. If this is an introductory exercise, I'm 99% sure your prof is expecting something with NB to solve the given problem.
I would recommend a library like sklearn, which should make the task almost trivial. If you're interested in the intuition behind NB, this YouTube video should serve as a good introduction.
Start off by going over some examples/blog posts; Google should provide you with countless examples. Then modify the code to fit your use case.
You could group the articles into two classes, e.g. score <= 5 = bad, score > 5 = good. A next step would be to predict more than two classes, as explained here.
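A minimal sketch of that idea, assuming a pandas DataFrame df with columns "title", "abstract" and a numeric "score" (the column names and the score threshold are illustrative):
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Combine the two text fields and bin the score into good (1) / bad (0)
text = df["title"] + " " + df["abstract"]
label = (df["score"] > 5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(text, label, random_state=0)

# Bag-of-words counts feeding a Multinomial Naive Bayes classifier
model = make_pipeline(CountVectorizer(stop_words="english"), MultinomialNB())
model.fit(X_train, y_train)
print(model.score(X_test, y_test))
The numeric predictors can be handled in a separate regression as described in the question, or everything can be combined in one sklearn pipeline with a ColumnTransformer if a single model is the goal.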

What should be used between Doc2Vec and Word2Vec when analyzing product reviews?

I collected some product reviews of a website from different users, and I'm trying to find similarities between products through the use of the embeddings of the words used by the users.
I grouped the reviews per product, so that different reviews (i.e. different authors for one product) follow one another in my dataframe. Furthermore, I have already tokenized the reviews (and applied the other pre-processing steps). Below is a mock-up of the dataframe I have (the list of tokens per product is actually very long, as is the number of products):
Product      reviews_tokenized
XGame3000    absolutely amazing simulator feel inaccessible ...
Poliamo      production value effect tend cover rather ...
Artemis      absolutely fantastic possibly good oil ...
Ratoiin      ability simulate emergency operator town ...
However, I'm not sure which would be more efficient: Doc2Vec or Word2Vec. I would initially go for Doc2Vec, since it finds similarities by taking the whole paragraph/sentence into account and can capture its topic (which I'd like, since I'm trying to cluster products by topic), but I'm a bit worried that the reviews coming from different authors might bias the embeddings. Note that I'm quite new to NLP and embeddings, so some notions may escape me. Below is my code for Doc2Vec, which gives me a quite good silhouette score (~0.7).
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
import numpy as np

# One TaggedDocument per product, tagged with its row index in the dataframe
product_doc = [TaggedDocument(doc.split(' '), [i]) for i, doc in enumerate(df.tokens)]
model3 = Doc2Vec(min_count=1, seed=SEED, ns_exponent=0.5)
model3.build_vocab(product_doc)
model3.train(product_doc, total_examples=model3.corpus_count, epochs=model3.epochs)
# Infer one vector per product from its concatenated review tokens
product2vec = [model3.infer_vector(df['tokens'][i].split(' ')) for i in range(len(df['tokens']))]
dtv = np.array(product2vec)
What do you think would be the most efficient method to tackle this? If something is not clear enough, or else, please tell me.
Thank you for your help.
EDIT: below are the clusters I'm obtaining: [cluster plot omitted]
There's no way to tell which particular mix of methods will work best for a specific dataset and particular end-goal: you really have to try them against each other, in your own reusable pipeline for scoring them against your desired results.
It looks like you've already stripped the documents down to keywords rather than original natural text, which could hurt with these algorithms - you may want to try it both ways.
Depending on the size and format of your texts, you may also want to look at "Word Mover's Distance" (WMD) comparisons between sentences (or other small logical chunks of your data). Some work has demonstrated interesting results in finding "similar concerns" (even with different wording) in the review domain, e.g.: https://tech.opentable.com/2015/08/11/navigating-themes-in-restaurant-reviews-with-word-movers-distance/
Note, though, that WMD gets quite costly to calculate in bulk with larger texts.
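A rough sketch of a pairwise WMD comparison with gensim, assuming pre-trained GloVe vectors fetched through gensim's downloader (depending on your gensim version you also need the POT or pyemd package installed):
import gensim.downloader as api

# Any word-embedding KeyedVectors will do; this model name is just an example
wv = api.load("glove-wiki-gigaword-100")

review_a = "absolutely amazing simulator feel inaccessible".split()
review_b = "ability simulate emergency operator town".split()

# Lower distance means the two token lists are semantically closer
print(wv.wmdistance(review_a, review_b))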

Can I use LDA topic modeling if I do not know the number of topics

I have more than 100k .txt files of newspaper issues and I need to define the lexical field of protectionism. However, the newspaper issues cover very diverse subjects and I cannot know the total number of topics. Can I still use LDA topic modeling to find the lexical field, or is there another method (maybe supervised learning)?
You probably can, but have a look at the CorEx idea. It works very nicely and gives you the opportunity to guide the topics by supplying sets of anchor words (so you could call it semi-supervised learning).
You could give it ["protectionism", "tariffs", "trade wars", ...] as anchors for one topic, and even try to push articles that are not related to your topic of interest into a second topic by defining anchor words that have nothing to do with it, e.g. ["police protection", "custom features", ...].
The notebooks supplied with the library are really excellent and will have you up and running in no time.
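A minimal sketch of the anchored approach, assuming the corextopic package and a list of raw documents called docs (the anchor words and topic count are illustrative):
import scipy.sparse as ss
from corextopic import corextopic as ct
from sklearn.feature_extraction.text import CountVectorizer

# Binary bag-of-words matrix, as CorEx expects
vectorizer = CountVectorizer(stop_words="english", binary=True)
doc_word = ss.csr_matrix(vectorizer.fit_transform(docs))
words = list(vectorizer.get_feature_names_out())

# One anchored topic for protectionism, one "decoy" topic for unrelated articles
anchors = [["protectionism", "tariffs", "trade"], ["police", "protection"]]

topic_model = ct.Corex(n_hidden=20, seed=42)
topic_model.fit(doc_word, words=words, anchors=anchors, anchor_strength=3)

# Inspect the top words of each learned topic
for i, topic in enumerate(topic_model.get_topics(n_words=10)):
    print(i, [t[0] for t in topic])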

Classifying sentences with overlapping words

I have this CSV file containing comments (tweets and Facebook comments). I want to classify them into 4 categories, viz.
Pre Sales
Post Sales
Purchased
Service query
Now the problems I'm facing are these:
There is a huge number of overlapping words between the categories, so Naive Bayes is failing.
Tweets being only 160 characters long, what is the best way to prevent words from one category from falling into another?
How should I select features that can handle both the 160-character tweets and the somewhat longer Facebook comments?
Please point me to any reference links/tutorials to follow up on this, as I'm a newbie in this field.
Thanks
I wouldn't be so quick to write off Naive Bayes. It does fine in many domains where there are lots of weak clues (as in "overlapping words"), but no absolutes. It all depends on the features you pass it. I'm guessing you are blindly passing it the usual "bag of words" features, perhaps after filtering for stopwords. Well, if that's not working, try a little harder.
A good approach is to read a couple of hundred tweets and see how you know which category you are looking at. That'll tell you what kind of things you need to distill into features. But be sure to look at lots of data, and focus on the general patterns.
An example (but note that I haven't looked at your corpus): Time expressions may be good clues on whether you are pre- or post-sale, but they take some work to detect. Create some features "past expression", "future expression", etc. (in addition to bag-of-words features), and see if that helps. Of course you'll need to figure out how to detect them first, but you don't have to be perfect: You're after anything that can help the classifier make a better guess. "Past tense" would probably be a good feature to try, too.
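If you want to experiment with that kind of feature, here is a rough sketch of how extra hand-crafted clues can sit next to bag-of-words in scikit-learn; the regular expressions are illustrative guesses, not a tested tense detector:
import re
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.preprocessing import FunctionTransformer

def time_features(texts):
    # Two crude binary features: does the text contain past- or future-sounding words?
    past = re.compile(r"\b(bought|purchased|received|was|had|yesterday)\b", re.I)
    future = re.compile(r"\b(will|planning|want to|looking to|soon)\b", re.I)
    return np.array([[bool(past.search(t)), bool(future.search(t))] for t in texts], dtype=float)

model = Pipeline([
    ("features", FeatureUnion([
        ("bow", CountVectorizer(stop_words="english")),
        ("time", FunctionTransformer(time_features, validate=False)),
    ])),
    ("clf", MultinomialNB()),
])
# model.fit(train_texts, train_labels); model.predict(test_texts)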
This is going to be a complex problem.
How do you define the categories? Get as many tweets and FB posts as you can and tag them all with the correct categories to obtain some ground-truth data.
Then you can identify which words/phrases are best at identifying a particular category, using e.g. PCA.
Look into scikit-learn; it has tutorials for text processing and classification.

How to automatically label a cluster of words using semantics?

The context is: I already have clusters of words (phrases, actually) resulting from k-means applied to internet search queries, using common URLs in the search engine results as a distance (co-occurrence of URLs rather than words, if I simplify a lot).
I would like to automatically label the clusters using semantics, in other words I'd like to extract the main concept surrounding a group of phrases considered together.
For example - sorry for the subject of my example - if I have the following bunch of queries : ['my husband attacked me','he was arrested by the police','the trial is still going on','my husband can go to jail for harrassing me ?','free lawyer']
My study deals with domestic violence, but clearly this cluster is focused on the legal aspect of the problem so the label could be "legal" for example.
I am new to NLP, but I should point out that I don't want to extract words using POS tagging (or at least that is not the expected final outcome, although it may be a necessary preliminary step).
I read about WordNet for word sense disambiguation and I think that might be a good track, but I don't want to calculate similarity between two queries (since the clusters are the input), nor obtain the definition of one selected word from the context provided by the whole bunch of words (which word would I select in that case?). I want to use the whole set of phrases to provide a context (maybe using synsets or the categorization in WordNet's XML structure) and then summarize that context in one or a few words.
Any ideas? I can use R or Python; I read a little about nltk but I can't find a way to use it in my context.
Your best bet is probably to label the clusters manually, especially if there are few of them. This is a difficult problem even for humans to solve, because you might need a domain expert. Anyone claiming they could do that automatically and reliably (except in some very limited domains) is probably running a startup and trying to get your business.
Also, going through the clusters yourself will have benefits: 1) you may discover you had the wrong number of clusters (the k parameter) or that there was too much junk in the input to begin with; 2) you will gain qualitative insight into what is being talked about and which topics there are in the data (which you probably can't know before looking at it). So label manually if qualitative insight is what you are after. If you also need a quantitative result, you could then train a classifier on the manually labelled topics to 1) predict topics for the rest of the clusters, or 2) reuse in the future if you repeat the clustering, get new data, etc.
When we talk about semantics in this area we mean statistical semantics. Statistical or distributional semantics is very different from other definitions of semantics that are built on logic and reasoning. It is based on the distributional hypothesis, which treats context as the meaning-bearing aspect of words and phrases. Meaning in this very abstract and general sense is, in different literatures, called a topic. There are several unsupervised methods for modelling topics, such as LDA, or even word2vec, which basically provides a word similarity metric or suggests a list of similar words for a document as another form of context. Usually, once you have these unsupervised clusters, you need a domain expert to tell you the meaning of each cluster.
However, for several reasons you might accept a low-accuracy assignment of a single word as the general topic (or, in your words, the "global semantic") of a list of phrases. If this is the case, I would suggest taking a look at word sense disambiguation tasks that target coarse-grained word senses. For WordNet, this is known as the supersense tagging task.
This paper is worth a look: More or less supervised supersense tagging of Twitter.
As for your question about choosing words from the current phrases, there is also an active question about converting phrases to vectors; my word2vec-style answer to that question might be useful:
How can a sentence or a document be converted to a vector?
I can add more related papers later if they come to mind.
The paper Automatic Labelling of Topic Models explains the authors' approach to this problem. To give an overview: they generate label candidates using information retrieved from Wikipedia and Google, and once the list of candidates is in place they rank the candidates to find the best label.
I think the code is not available online, but I have not looked for it.
The package chowmein claims to do this in python using the algorithm outlined in Automatic Labeling of Multinomial Topic Models.
One possible approach, which the papers below suggest, is to identify the set of keywords in the cluster, get all their synonyms, and then find the hypernyms of each synonym.
The idea is to get a more abstract meaning for the cluster by using the hypernym.
Example: A word cluster containing words dog and wolf should not be labelled with either word but as canids. They achieve it using synonymy and hypernymy.
Cluster Labeling by Word Embeddings and WordNet's Hypernymy
Automated Text Clustering and Labeling using Hypernyms
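A minimal sketch of the hypernym idea with nltk's WordNet interface (requires nltk.download('wordnet'); picking the first noun sense of each word and the lowest common hypernym is a simplification of what the cited papers do):
from nltk.corpus import wordnet as wn

cluster = ["dog", "wolf"]
# First noun sense of each cluster word (a crude disambiguation choice)
synsets = [wn.synsets(w, pos=wn.NOUN)[0] for w in cluster]

# Lowest common hypernym, e.g. Synset('canine.n.02') for dog/wolf
label = synsets[0].lowest_common_hypernyms(synsets[1])[0]
print(label.name(), "-", label.definition())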
