Are TF-IDF and BoW techniques incompatible? - python

I have studied the difference between the TF-IDF and BoW methods, but I still have a big question about them: I thought the two methods could be combined. Let me explain. I have a CSV file (MY_DATA) with thousands of comments from a social network, and I would like to use this dataset to build a BoW representation for a sentiment classification model. The sentiment of each comment is the other variable of MY_DATA and takes three values: positive, negative, and neutral.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

# Vectorize the comments with TF-IDF weighting
tf = TfidfVectorizer()
text_tf = tf.fit_transform(MY_DATA['comments'])
X_train, X_test, y_train, y_test = train_test_split(text_tf, MY_DATA['sentiment'], test_size=0.2)
# Classification model: Multinomial Naive Bayes
clf = MultinomialNB().fit(X_train, y_train)
predicted = clf.predict(X_test)
Now that you have seen my script, I would like to know whether I am using the TF-IDF method correctly. How could I apply the BoW method in my case? Are the two methods inevitably incompatible?
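For what it's worth, your TF-IDF usage looks right, and the two approaches are not incompatible: TF-IDF is essentially a BoW representation whose raw counts are reweighted by inverse document frequency. A plain BoW version of the same script would simply swap TfidfVectorizer for CountVectorizer, along these lines:
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

# BoW: raw term counts instead of TF-IDF weights
bow = CountVectorizer()
text_bow = bow.fit_transform(MY_DATA['comments'])
X_train, X_test, y_train, y_test = train_test_split(text_bow, MY_DATA['sentiment'], test_size=0.2)
clf = MultinomialNB().fit(X_train, y_train)
predicted = clf.predict(X_test)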

Related

How to use a multiclassification model to make predictions on an entire dataframe

I have trained multiclassification models on my training and test sets and have achieved good results with SVC. Now I want to use the model to make predictions on my entire dataframe, but I get the following error: ValueError: X has 36976 features, but SVC is expecting 8989 features as input.
My dataframe has two columns: one with the categories (which I manually labeled for around 1/5 of the dataframe) and a text column with all the texts (including those that have not been labeled).
import pandas as pd

data = {'categories': ['1', 'NaN', '3', 'NaN'], 'documents': ['Paragraph 1.\nParagraph 2.\nParagraph 3.', 'Paragraph 1.\nParagraph 2.', 'Paragraph 1.\nParagraph 2.\nParagraph 3.\nParagraph 4.', 'Paragraph 1.\nParagraph 2.']}
df = pd.DataFrame(data)
First, I drop the rows with NaN values in the 'categories' column. Then I create the document-term matrix, define the 'y', and split into training and test sets.
from nltk.tokenize import word_tokenize
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split

tf = CountVectorizer(tokenizer=word_tokenize)
X = tf.fit_transform(df['documents'])
y = df['categories']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
Second, I run the SVC model getting good results:
from sklearn import metrics
from sklearn.svm import SVC

svm = SVC(C=0.1, class_weight='balanced', kernel='linear', probability=True)
model = svm.fit(X_train, y_train)
print('accuracy:', model.score(X_test, y_test))
y_pred = model.predict(X_test)
print(metrics.classification_report(y_test, y_pred))
Finally, I try to apply the SVC model to predict the categories of the entire 'documents' column of my dataframe. To do so, I create the document-term matrix of the entire 'documents' column and then apply the model:
tf_entire_df = CountVectorizer(tokenizer=word_tokenize)
X_entire_df = tf_entire_df.fit_transform(df['documents'])
y_pred_entire_df = model.predict(X_entire_df)
But then I get the error that my X_entire_df has more features than the SVC model is expecting as input. I imagine that this is because now I am trying to apply the model to the whole 'documents' column, but I do not know how to fix this.
I would appreciate your help!
These issues usually come from the fact that you are feeding the model unknown or unseen data (more/fewer features than the ones used for training).
I would strongly suggest using sklearn.pipeline to create a pipeline that includes the preprocessing (CountVectorizer) and your machine learning model (SVC) in a single object.
From experience, this helps a lot to avoid tedious and complex preprocessing/fitting issues.
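A minimal sketch of that pipeline idea, assuming the df from the question and a hypothetical df_labeled holding only the manually labeled rows:
from nltk.tokenize import word_tokenize
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Vectorizer and classifier live in one object, so the vocabulary
# fitted during training is reused at prediction time
pipe = Pipeline([
    ('vect', CountVectorizer(tokenizer=word_tokenize)),
    ('svc', SVC(C=0.1, class_weight='balanced', kernel='linear', probability=True)),
])

# Fit on the labeled rows only (df_labeled is a hypothetical name)
pipe.fit(df_labeled['documents'], df_labeled['categories'])

# Predict on the entire column: transform() reuses the training
# vocabulary, so the feature count always matches what SVC expects
y_pred_entire_df = pipe.predict(df['documents'])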

Pre train a model (classifier) in scikit learn

I would like to pre-train a model and then continue training it with another model.
I have a DecisionTreeClassifier model, and I would like to train it further with an LGBMClassifier model. Is there a possibility to do this in scikit-learn?
I have already read this post about it: https://datascience.stackexchange.com/questions/28512/train-new-data-to-pre-trained-model. In the post it says:
As per the official documentation, calling fit() more than once will overwrite what was learned by any previous fit()
import lightgbm as lgb
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
# Train Decision Tree Classifier
clf = DecisionTreeClassifier()
clf = clf.fit(X_train, y_train)
# Train LGBM Classifier
lgbm = lgb.LGBMClassifier()
lgbm = lgbm.fit(X_train, y_train)
# Predict the response for the test dataset
y_pred = lgbm.predict(X_test)
Perhaps you are looking for stacked classifiers.
In this approach, the predictions of earlier models are available as features for later models.
Look into StackingClassifier.
Adapted from the documentation:
from lightgbm import LGBMClassifier
from sklearn.ensemble import StackingClassifier
from sklearn.tree import DecisionTreeClassifier

estimators = [
    ('dtc_model', DecisionTreeClassifier()),
]
clf = StackingClassifier(
    estimators=estimators,
    final_estimator=LGBMClassifier()
)
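The stacked clf is then fitted and used like any single estimator (clf.fit(X_train, y_train), then clf.predict(X_test)); during fit, the final LGBMClassifier is trained on cross-validated predictions of the base decision tree, so the tree's output effectively becomes a feature for the LGBM model.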
Unfortunately, this is not possible at present. According to the documentation at https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.LGBMClassifier.html?highlight=init_model, you can only continue training a model if the initial model is itself from lightgbm.
I did try this setup with:
import pickle
from lightgbm import LGBMClassifier
from sklearn.tree import DecisionTreeClassifier

# dtc
dtc_model = DecisionTreeClassifier()
dtc_model = dtc_model.fit(X_train, y_train)
# save the fitted tree
dtc_fn = 'dtc.pickle.db'
pickle.dump(dtc_model, open(dtc_fn, 'wb'))
# lgbm: try to continue training from the pickled tree
lgbm_model = LGBMClassifier()
lgbm_model.fit(X_train_2, y_train_2, init_model=dtc_fn)
And I get:
LightGBMError: Unknown model format or submodel type in model file dtc.pickle.db
As @Ferdy explained in his post, there is no simple way to perform this operation, and that is understandable.
Scikit-learn's DecisionTreeClassifier takes only numerical features and cannot handle NaN values, whereas LGBMClassifier can handle those.
By looking at the decision function of scikit-learn trees, you can see that all they can perform are splits of the form feature <= threshold.
LGBM, on the contrary, can perform the following split types:
feature is NA
feature <= threshold
feature in categories
Splits in a decision tree are selected at each step as the ones that best split the set of items; they try to minimize the node impurity (Gini) or entropy.
The risk of further training a DecisionTreeClassifier is that you cannot be sure the splits performed in the original tree are the best, since the new split capabilities of LGBM might/should lead to better performance.
I would therefore recommend retraining the model with LGBMClassifier only, as its splits may well differ from those of the original scikit-learn tree.
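For completeness, continued training does work when both rounds use LightGBM; a minimal sketch, assuming two training sets as in the attempt above (the file name model.txt is just a placeholder):
from lightgbm import LGBMClassifier

# First round of training; save the underlying booster in LightGBM's own format
first = LGBMClassifier()
first.fit(X_train, y_train)
first.booster_.save_model('model.txt')

# Second round: continue training on new data from the saved LightGBM model
second = LGBMClassifier()
second.fit(X_train_2, y_train_2, init_model='model.txt')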

different score when using train_test_split before vs after SMOTETomek

I'm trying to classify texts into 6 different classes.
Since I have an imbalanced dataset, I'm also using the SMOTETomek method, which should synthetically balance the dataset with additional artificial samples.
I've noticed a huge score difference when applying it via a pipeline vs step by step, where the only difference is (I believe) the place where I use train_test_split.
Here are my features and labels:
features, labels = [], []
for curr_features, label in self.training_data:
    features.append(curr_features)
    labels.append(label)
from sklearn import ensemble, linear_model, naive_bayes, neighbors, svm, tree

algorithms = [
    linear_model.SGDClassifier(loss='hinge', penalty='l2', alpha=1e-3, random_state=42, max_iter=5, tol=None),
    naive_bayes.MultinomialNB(),
    naive_bayes.BernoulliNB(),
    tree.DecisionTreeClassifier(max_depth=1000),
    tree.ExtraTreeClassifier(),
    ensemble.ExtraTreesClassifier(),
    svm.LinearSVC(),
    neighbors.NearestCentroid(),
    ensemble.RandomForestClassifier(),
    linear_model.RidgeClassifier(),
]
Using Pipeline:
from imblearn.combine import SMOTETomek
from imblearn.pipeline import Pipeline  # imblearn's Pipeline is required for sampler steps
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.2, random_state=42)
# Provide a report for all algorithms
score_dict = {}
for algorithm in algorithms:
    model = Pipeline([
        ('vect', CountVectorizer()),
        ('tfidf', TfidfTransformer()),
        ('smote', SMOTETomek()),
        ('classifier', algorithm)
    ])
    model.fit(X_train, y_train)
    # Score
    score = model.score(X_test, y_test)
    score_dict[model] = int(score * 100)
sorted_score_dict = {k: v for k, v in sorted(score_dict.items(), key=lambda item: item[1])}
for classifier, score in sorted_score_dict.items():
    print(f'{classifier.__class__.__name__}: score is {score}%')
Using Step by Step:
vectorizer = CountVectorizer()
transformer = TfidfTransformer()
cv = vectorizer.fit_transform(features)
text_tf = transformer.fit_transform(cv).toarray()
smt = SMOTETomek()
X_smt, y_smt = smt.fit_resample(text_tf, labels)
X_train, X_test, y_train, y_test = train_test_split(X_smt, y_smt, test_size=0.2, random_state=0)
self.test_classifiers(X_train, X_test, y_train, y_test, algorithms)

def test_classifiers(self, X_train, X_test, y_train, y_test, classifiers_list):
    score_dict = {}
    for model in classifiers_list:
        model.fit(X_train, y_train)
        # Score
        score = model.score(X_test, y_test)
        score_dict[model] = int(score * 100)
    print()
    print("SCORE:")
    sorted_score_dict = {k: v for k, v in sorted(score_dict.items(), key=lambda item: item[1])}
    for model, score in sorted_score_dict.items():
        print(f'{model.__class__.__name__}: score is {score}%')
I'm getting around 65% with the pipeline vs around 90% step by step (for the best classifier model).
Not sure what I am missing.
There is nothing wrong with your code by itself, but your step-by-step approach uses bad practice from a machine-learning standpoint:
Do not resample your testing data
In your step-by-step approach, you resample all of the data first and then split them into train and test sets. This will lead to an overestimation of model performance because you have altered the original distribution of classes in your test set and it is not representative of the original problem anymore.
What you should do instead is to leave the testing data in its original distribution in order to get a valid approximation of how your model will perform on the original data, which is representing the situation in production. Therefore, your approach with the pipeline is the way to go.
As a side note: you could think about moving the whole data preparation (vectorization and resampling) out of your fitting and testing loop, as you probably want to compare the model performance against the same data anyway. Then you would only have to run these steps once and your code would execute faster.
The correct approach in such cases is described in detail in my own answer in the Data Science SE thread Why you shouldn't upsample before cross validation (although that answer is about CV, the rationale is identical for the train/test split case as well). In short, any resampling method (SMOTE included) should be applied only to the training data, never to the validation or test data.
Given that, your Pipeline approach here is correct: you apply SMOTE only to your training data after splitting, and, according to the documentation of the imblearn pipeline:
The samplers are only applied during fit.
So, no SMOTE is actually applied to your test data during model.score, which is exactly as it should be.
Your step-by-step approach, on the other hand, is wrong on many levels, and SMOTE is only one of them; all these preprocessing steps should be applied after the train/test split, and fitted on the training portion of your data only, which is not the case here, thus the results are invalid (no wonder they look "better"). For a general discussion (and a practical demonstration) of why such preprocessing should be applied only to the training data, see my two answers in Should Feature Selection be done before Train-Test Split or after? (again, the discussion there is about feature selection, but it applies equally to feature-engineering steps like count vectorization and the TF-IDF transformation).
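To make the corrected order concrete, here is a minimal sketch of a step-by-step version that matches the pipeline's behavior: split first, fit every preprocessing step on the training portion only, and leave the test set unresampled.
from imblearn.combine import SMOTETomek
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.model_selection import train_test_split

# Split the raw texts first, before any fitting
X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.2, random_state=42)

# Fit the vectorizer and TF-IDF transform on the training data only
vectorizer = CountVectorizer()
transformer = TfidfTransformer()
X_train_tf = transformer.fit_transform(vectorizer.fit_transform(X_train))
X_test_tf = transformer.transform(vectorizer.transform(X_test))

# Resample the training portion only; the test set keeps its original distribution
X_train_res, y_train_res = SMOTETomek().fit_resample(X_train_tf, y_train)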

I can't get my test accuracy to increase in a sentiment analysis

I'm not sure if this is the right place, but my test accuracy is always around 0.40 while I can get my training-set accuracy to 1.0. I'm trying to do a sentiment analysis of tweets about Trump; I have annotated each tweet with a positive, negative, or neutral polarity. I want to be able to predict the polarity of new data based on my model. I've tried different models, but SVM seems to give me the highest test accuracy. I'm unsure why my model's accuracy is so low and would appreciate any help or direction.
import pandas as pd
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

trump = pd.read_csv("trump_data.csv", delimiter=";")
# drop all NaN values
trump = trump.dropna()
trump = trump.rename(columns={"polarity,,,": "polarity"})
#print(trump.columns)

def tokenize(text):
    ps = PorterStemmer()
    return [ps.stem(w.lower()) for w in word_tokenize(text)]

X = trump.text
y = trump.polarity
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.2, random_state=42)
svm = Pipeline([('vectorizer', TfidfVectorizer(stop_words=stopwords.words('english'),
                                               tokenizer=tokenize)),
                ('svm', SGDClassifier(loss='hinge', penalty='l2', alpha=1e-3,
                                      random_state=42, max_iter=5, tol=None))])
svm.fit(X_train, y_train)
model = svm.score(X_test, y_test)
print("The svm Test Classification Accuracy is:", model)
print("The svm training set accuracy is : {}".format(naive.score(X_train, y_train)))
y_pred = svm.predict(X)
This is an example of one of the strings in the text column of the dataset
".#repbilljohnson congress must step up and overturn president trump’s discriminatory #eo banning #immigrants & #refugees #oxfam4refugees"
Data set
Why are you using naive.score? I assume it's a copy-paste mistake. Here are a few steps you can follow.
Make sure you have enough data points, and clean them. Cleaning the dataset is an inevitable part of the process in data science.
Make use of parameters like ngram_range, max_df, min_df, and max_features while featurizing the text with either TfidfVectorizer or CountVectorizer. You may also try embeddings using Word2Vec.
Do hyperparameter tuning on alpha, penalty, and other variables using GridSearchCV or RandomizedSearchCV (see the sketch after this list). Make sure you are cross-validating correctly; refer to the documentation for more info.
If the dataset is imbalanced, then try using other metrics like log-loss, precision, recall, f1-score, etc. Refer to this for more info.
Make sure your model is neither overfitted nor underfitted by checking the train error and test error.
Other than SVM, also try traditional models like Logistic Regression, Naive Bayes, Random Forest, etc. If you have a large number of data points, then you may try deep learning models.
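A minimal sketch of what the tuning step could look like on the pipeline above (the parameter-grid values here are only illustrative assumptions; the step names 'vectorizer' and 'svm' come from the question's pipeline):
from sklearn.model_selection import GridSearchCV

# Grid over TF-IDF and SGD hyperparameters, using the
# step names defined in the pipeline above
param_grid = {
    'vectorizer__ngram_range': [(1, 1), (1, 2)],
    'svm__alpha': [1e-4, 1e-3, 1e-2],
    'svm__penalty': ['l2', 'elasticnet'],
}
search = GridSearchCV(svm, param_grid, cv=5, scoring='f1_macro')
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)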
Turns out I needed to clean the polarity column, as it had values such as "positive,", "positive,," and "positive,,,", which were being registered as different classes, so I just removed all "," from the column.
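In pandas, that cleanup could look something like this (a sketch, assuming the column is named polarity as above):
# Strip stray commas so "positive,", "positive,,", etc. collapse into one class
trump['polarity'] = trump['polarity'].str.replace(',', '', regex=False)
print(trump['polarity'].unique())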

TfidfVectorizer: adjust test by training frequencies in a pipeline during cross-validation

I have a text classification task based on documents, where I expect the classes to be related to word frequencies. Because of the specific nature of my application, where I have a corpus that will grow over time and I want to classify new documents as they arrive, I have used FeatureHasher rather than the existing TfidfVectorizer (which both vectorizes and does the adjustment), since the vocabulary size can grow with new documents.
As discussed here, for instance (https://stats.stackexchange.com/questions/154660/tfidfvectorizer-should-it-be-used-on-train-only-or-traintest), it seems correct to me that the term frequencies when doing TF-IDF should be calculated relative to the train set only and then used to rescale the test set, rather than first rescaling the entire corpus and then splitting. This is because using the test dataset for frequency calculations violates the principle that you shouldn't use this information.
Let's assume you start with a matrix X of raw term frequencies (not yet adjusted) and y, a vector of classes. The typical order that many code examples show is:
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.model_selection import train_test_split

vec = TfidfTransformer()
# rescale X by its own frequencies, then split
X = vec.fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# ...now fit a model
but the correct thing should be the following:
vec = TfidfTransformer()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# store rescaling based on X_train frequencies alone
vec.fit(X_train)
# rescale each set (transform) with the same fitted model
X_train = vec.transform(X_train)
X_test = vec.transform(X_test)
#...now fit a model
Okay, now the main question: I want to conduct some kind of cross-validation, perhaps with GridSearchCV, where I can feed it a set of potential model parameters and conduct several splits of the data for each one. The typical way to do this is to build a model pipeline and then feed it into the cross-validation utility. Since pipelines are somewhat of a black box whose details are hard to inspect, I just wanted to verify that, if TfidfTransformer is included as a step in the pipeline, it does the adjustment correctly, as mentioned above, by fitting the adjustment on the training data of each split.
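For reference, this is documented behavior of scikit-learn pipelines inside cross-validation: on each split, fit is called on the training fold only, and the fitted transformers merely transform the held-out fold. A minimal sketch of such a setup (the pipeline step names and grid values are just assumptions for illustration):
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

pipe = Pipeline([
    ('tfidf', TfidfTransformer()),
    ('clf', RandomForestClassifier()),
])
# On every CV split, GridSearchCV fits the whole pipeline on the
# training fold only, so TfidfTransformer learns its IDF weights
# from that fold and only transforms the held-out fold.
param_grid = {'clf__n_estimators': [100, 300]}
search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X, y)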
