Python -- SciKit -- Text Feature Extraction of Classifier - python

I have to classify articles into my custom categories, so I chose MultinomialNB from scikit-learn. I am doing supervised learning: an editor looks at the articles daily and tags them, and once they are tagged I include them in my learning model, and so on. Below is the code, to give an idea of what I am doing and using. (I am not including any import lines because I am just trying to give you an idea of what I am doing.) (Reference)
corpus = (train_set)
vectorizer = HashingVectorizer(stop_words='english', non_negative=True)
x = vectorizer.transform(corpus)
x_array = x.toarray()
data_array = np.array(x_array)
cat_set = list(cat_set)
cat_array = np.array(cat_set)
filename = '/home/ubuntu/Classifier/Intelligence-MultinomialNB.pkl'
if not os.path.exists(filename):
    # classifier is a fresh MultinomialNB() created earlier (setup omitted, as noted above)
    classifier.partial_fit(data_array, cat_array, classes)
    print "Saving Classifier"
    joblib.dump(classifier, filename, compress=9)
else:
    print "Loading Classifier"
    classifier = joblib.load(filename)
    classifier.partial_fit(data_array, cat_array)
    print "Saving Classifier"
    joblib.dump(classifier, filename, compress=9)
Now I have a classifier ready after custom tagging, and it works like a charm on new articles. A new requirement has arisen: to get the most frequent words for each category. In short, I have to extract the features from the learned model. Looking into the documentation, I only found out how to extract text features at learning time.
But once the model is learned and I only have the model file (.pkl), is it possible to load that classifier and extract features from it?
Will it be possible to get the most frequent terms for each class or category?

You can access the features by using the feature_count_ property. This will tell you how many times a particular feature occurred. For example:
# Imports
import numpy as np
from sklearn.naive_bayes import MultinomialNB
# Data
X = np.random.randint(3, size=(3, 10))
X2 = np.random.randint(3, size=(3, 10))
y = np.array([1, 2, 3])
# Initial fit
clf = MultinomialNB()
clf.fit(X, y)
# Check to see that the stored features are equal to the input features
print np.all(clf.feature_count_ == X)
# Modify fit with new data
clf.partial_fit(X2, y)
# Check to see that the stored features represents both sets of input
print np.all(clf.feature_count_ == (X + X2))
In the above example, we can see that the feature_count_ property is nothing more than a running sum of the feature counts for each class. Using this, you can go backwards from your classifier model to your features to determine their frequency. Unfortunately, your problem is more complex: you now need to go back one more step, because your features are not simply words.
This is where the bad news comes in: you used a HashingVectorizer feature extractor. If you refer to the docs:
there is no way to compute the inverse transform (from feature indices to string feature names) which can be a problem when trying to introspect which features are most important to a model.
So even though we know the frequency of the features, we can't translate those features back into words. Had you used a different type of feature extractor (perhaps the one referenced on that same page, CountVectorizer), the situation would be entirely different.
In short: you can extract the features from the model and determine their frequency by class, but you can't convert those features back into words.
To obtain the functionality you desire, you would need to start over with a reversible mapping (a feature extractor that encodes words into features and can decode features back into words).
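For illustration, here is a minimal sketch (on toy data, not the asker's setup) of that reversible route: CountVectorizer keeps a vocabulary, so the columns of feature_count_ can be mapped back to words and the most frequent terms per class read off directly.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

docs = ["the game was great",
        "stocks fell sharply today",
        "the team won the game",
        "markets rallied after the report"]
labels = ["sports", "finance", "sports", "finance"]

vec = CountVectorizer(stop_words='english')
X = vec.fit_transform(docs)
clf = MultinomialNB().fit(X, labels)

# get_feature_names_out() on recent scikit-learn; older releases use get_feature_names()
words = np.array(vec.get_feature_names_out())
for class_index, class_label in enumerate(clf.classes_):
    counts = clf.feature_count_[class_index]     # per-class word frequencies
    top = words[np.argsort(counts)[::-1][:3]]    # three most frequent words for this class
    print(class_label, list(top))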

I would suggest using the code below. You just need to load the pickle object and transform the test data using the same vectorizer. Try a plain TF-IDF vectorizer in case you face problems.
clf = joblib.load('/home/ubuntu/Classifier/Intelligence-MultinomialNB.pkl')
# you need to read the test sample
# type(data_test): list of raw text strings
X_test = vectorizer.transform(data_test)
print "pickle model loaded"
print clf
pred = clf.predict(X_test)
print "prediction done"
for i, p in enumerate(pred):
    print i, p
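A note on "the same vectorizer": HashingVectorizer is stateless (it stores no vocabulary), so at prediction time you can simply construct a new one with the same parameters that were used at training time. A minimal sketch, assuming the path and parameters from the question:
import joblib
from sklearn.feature_extraction.text import HashingVectorizer

clf = joblib.load('/home/ubuntu/Classifier/Intelligence-MultinomialNB.pkl')

# Re-create the vectorizer with the exact parameters used during training.
# (non_negative=True matches the question's old scikit-learn version; newer
# releases spell this alternate_sign=False instead.)
vectorizer = HashingVectorizer(stop_words='english', non_negative=True)

data_test = ["some new article text to classify"]   # illustrative input
X_test = vectorizer.transform(data_test)
print(clf.predict(X_test))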

Related

adding more data to Support Vector Classifier training

I am using the LinearSVC() available in scikit-learn to classify texts into a maximum of 7 labels, so it is a multilabel classification problem. I am training on a small amount of data and testing it. Now, I want to add more data (retrieved from a pool based on a criterion) to the fitted model and evaluate on the same test set. How can this be done?
Question:
Is it necessary to merge the previous data set with the new data set, preprocess everything again, and then retrain to see if performance improves with the old + new data?
My code so far is below:
def preprocess(data, x, y):
    global Xfeatures
    global y_train
    global labels
    porter = PorterStemmer()
    multilabel = MultiLabelBinarizer()
    y_train = multilabel.fit_transform(data[y])
    print("\nLabels are now binarized\n")
    data[multilabel.classes_] = y_train
    labels = multilabel.classes_
    print(labels)
    data[x].apply(lambda x: nt.TextFrame(x).noise_scan())
    print("\nEnglish stop words were extracted\n")
    data[x].apply(lambda x: nt.TextExtractor(x).extract_stopwords())
    corpus = data[x].apply(nfx.remove_stopwords)
    corpus = data[x].apply(lambda x: porter.stem(x))
    tfidf = TfidfVectorizer()
    Xfeatures = tfidf.fit_transform(corpus).toarray()
    print('\nThe text is now vectorized\n')
    return Xfeatures, y_train
Xfeatures, y_train = preprocess(df1, 'corpus', 'zero_level_name')
Xfeatures_train = Xfeatures[:300]
y_train_features = y_train[:300]
X_test = Xfeatures[300:400]
y_test = y_train[300:400]
X_pool = Xfeatures[400:]
y_pool = y_train[400:]
def model(modelo, tipo):
    svc = modelo
    clf = tipo(svc)
    clf.fit(Xfeatures_train, y_train_features)
    clf_predictions = clf.predict(X_test)
    return clf_predictions
preds_pool = model(LinearSVC(class_weight='balanced'), OneVsRestClassifier)
It depends on your previous dataset. If it was already a good representation of the problem at hand, adding more data will not increase your model's performance by much, so you can just test with the new data.
However, it is also possible that your initial dataset was not representative enough, in which case more data will improve your classification accuracy. In that case it is better to include all the data and preprocess it together, because preprocessing generally involves parameters that are computed on the dataset as a whole; e.g., I can see you use TF-IDF, whose statistics (such as the IDF weights or a mean) are sensitive to the dataset at hand.
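If you go the retrain route, here is a minimal, self-contained sketch of that workflow (the toy texts and names are illustrative, not the asker's pipeline): the label binarizer and the TF-IDF vectorizer are refit on the combined old + new training data, while the held-out test set is only transformed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

# Illustrative data only; substitute your original pool, the newly retrieved
# pool, and the fixed held-out test set.
old_texts   = ["markets fell sharply today", "the team won the cup final"]
old_labels  = [["finance"], ["sports"]]
new_texts   = ["central bank raised interest rates"]
new_labels  = [["finance"]]
test_texts  = ["the striker scored twice"]
test_labels = [["sports"]]

# Refit the binarizer and the TF-IDF vectorizer on ALL training data (old + new);
# the test set is only transformed, never fitted on.
mlb = MultiLabelBinarizer()
Y_train = mlb.fit_transform(old_labels + new_labels)
Y_test = mlb.transform(test_labels)

tfidf = TfidfVectorizer()
X_train = tfidf.fit_transform(old_texts + new_texts)
X_test = tfidf.transform(test_texts)

clf = OneVsRestClassifier(LinearSVC(class_weight='balanced'))
clf.fit(X_train, Y_train)
print(clf.score(X_test, Y_test))   # same test set as before, enlarged train set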

Use SHAP values to explain LogisticRegression Classification

I am trying to do some bad case analysis on my product categorization model using SHAP. My data looks something like this:
corpus_train, corpus_test, y_train, y_test = train_test_split(data['Name_Description'],
data['Category_Target'],
test_size = 0.2,
random_state=8)
vectorizer = TfidfVectorizer(stop_words='english', ngram_range=(1, 3), min_df=3, analyzer='word')
X_train = vectorizer.fit_transform(corpus_train)
X_test = vectorizer.transform(corpus_test)
model = LogisticRegression(max_iter=200)
model.fit(X_train, y_train)
X_train_sample = shap.sample(X_train, 100)
X_test_sample = shap.sample(X_test, 20)
masker = shap.maskers.Independent(data=X_test_sample)
explainer = shap.LinearExplainer(model, masker=masker)
shap_values = explainer.shap_values(X_test_sample)
X_test_array = X_test_sample.toarray()
shap.summary_plot(shap_values, X_test_array, feature_names=vectorizer.get_feature_names(), class_names=data['Category'].unique())
Now to save space I didn't include the actual summary plot, but it looks fine. My issue is that I want to be able to analyze a single prediction and get something more along these lines:
In other words, I want to know which specific words contribute the most to the prediction. But when I run the code in cell 36 in the image above I get an
AttributeError: 'numpy.ndarray' object has no attribute 'output_names'
I'm still confused on the indexing of shap_values. How can I solve this?
I was unable to find a solution with SHAP, but I found one using LIME. The following code displays a very similar output, where it's easy to see how the model made its prediction and how much certain words contributed.
# classifier: the fitted LogisticRegression (called `model` in the question)
c = make_pipeline(vectorizer, classifier)
# saving a list-of-strings version of the X_test object
ls_X_test = list(corpus_test)
# saving the class names in a list to increase interpretability
class_names = list(data.Category.unique())
# Create the LIME explainer
# add the class names for interpretability
LIME_explainer = LimeTextExplainer(class_names=class_names)
# explain the chosen prediction (idx is the index of the test example to explain)
# use the probability results of the logistic regression
# can also add num_features parameter to reduce the number of features explained
LIME_exp = LIME_explainer.explain_instance(ls_X_test[idx], c.predict_proba)
LIME_exp.show_in_notebook(text=True, predict_proba=True)
Using KernelSHAP, you first need to compute the Shapley values and then look at a single instance, as shown below:
#convert your training and testing data using the TF-IDF vectorizer
tfidf_vectorizer = TfidfVectorizer(use_idf=True)
tfidf_train = tfidf_vectorizer.fit_transform(IV_train)
tfidf_test = tfidf_vectorizer.transform(IV_test)
model=LogisticRegression()
model.fit(tfidf_train, DV_train)
# apply SHAP
# first shorten the data and convert the test sample to a dense data frame
X_train_sample = tfidf_train[0:20]
X_test_sample = tfidf_test[0:20]   # the slice of test data to explain (was missing above)
sample_text = pd.DataFrame(X_test_sample.toarray())
SHAP_explainer = shap.KernelExplainer(model.predict, X_train_sample)
shap_vals = SHAP_explainer.shap_values(X_test_sample)
# print the original text and label of the instance being explained
# (df_test is the raw test dataframe with Text and Label columns)
print(df_test.iloc[7].Text, df_test.iloc[7].Label)
shap.initjs()
shap.force_plot(SHAP_explainer.expected_value, shap_vals[7,:],sample_text.iloc[7,:], feature_names=tfidf_vectorizer.get_feature_names_out())
Here the original text is "good article interested natural alternatives treat ADHD" and the label is "1".

using saved sklearn model to make prediction

I have a saved logistic regression model which I trained with training data and saved using joblib. I am trying to load this model in a different script, pass it new data and make a prediction based on the new data.
I am getting the following error: "sklearn.exceptions.NotFittedError: CountVectorizer - Vocabulary wasn't fitted." Do I need to fit the data again? I would have thought that the point of being able to save the model was to not have to do this.
The code I am using is below excluding the data cleaning section. Any help to get the prediction to work would be appreciated.
new_df = pd.DataFrame(latest_tweets,columns=['text'])
new_df.to_csv('new_tweet.csv',encoding='utf-8')
csv = 'new_tweet.csv'
latest_df = pd.read_csv(csv)
latest_df.dropna(inplace=True)
latest_df.reset_index(drop=True,inplace=True)
new_x = latest_df.text
loaded_model = joblib.load("finalized_mode.sav")
tfidf_transformer = TfidfTransformer()
cvec = CountVectorizer()
x_val_vec = cvec.transform(new_x)
X_val_tfidf = tfidf_transformer.transform(x_val_vec)
result = loaded_model.predict(X_val_tfidf)
print (result)
Your training part has 3 components, each of which is fitted on the data:
CountVectorizer: Learns the vocabulary of the training data and returns counts
TfidfTransformer: Learns the counts of the vocabulary from previous part, and returns tfidf
LogisticRegression: Learns the coefficients for features for optimum classification performance.
Since each part learns something about the data and uses it to output the transformed data, you need all 3 parts when testing on new data. But you are only saving the lr with joblib, so the other two are lost, and with them the training vocabulary and counts.
Now in your testing part, you are initializing a new CountVectorizer and TfidfTransformer and calling fit() (fit_transform()), which will learn the vocabulary only from this new data. So there will be fewer words than in the training data. But then you load the previously saved LR model, which expects the data to have the same features as the training data. Hence this error:
ValueError: X has 130 features per sample; expecting 223086
What you need to do is this:
During training:
filename = 'finalized_model.sav'
joblib.dump(lr, filename)
filename = 'finalized_countvectorizer.sav'
joblib.dump(cvec, filename)
filename = 'finalized_tfidftransformer.sav'
joblib.dump(tfidf_transformer, filename)
During testing
loaded_model = joblib.load("finalized_model.sav")
loaded_cvec = joblib.load("finalized_countvectorizer.sav")
loaded_tfidf_transformer = joblib.load("finalized_tfidftransformer.sav")
# Observe that I only use transform(), not fit_transform()
x_val_vec = loaded_cvec.transform(new_x)
X_val_tfidf = loaded_tfidf_transformer.transform(x_val_vec)
result = loaded_model.predict(X_val_tfidf)
Now you won't get that error.
Recommendation:
You should use TfidfVectorizer in place of both CountVectorizer and TfidfTransformer, so that you don't have to handle two objects all the time.
And along with that, you should use a Pipeline to combine the two steps, TfidfVectorizer and LogisticRegression, so that you only have to deal with a single object (which is easier to save, load, and handle generically).
So edit the training part like this:
tfidf_vectorizer = TfidfVectorizer()
lr = LogisticRegression()
tfidf_lr_pipe = Pipeline([('tfidf', tfidf_vectorizer), ('lr', lr)])
# Internally your X_train will be automatically converted to tfidf
# and that will be passed to lr
tfidf_lr_pipe.fit(X_train, y_train)
# Similarly here only transform() will be called internally for tfidfvectorizer
# And that data will be passed to lr.predict()
y_preds = tfidf_lr_pipe.predict(x_test)
# Now you can save this pipeline alone (which will save all its internal parts)
filename = 'finalized_model.sav'
joblib.dump(tfidf_lr_pipe, filename)
During testing, do this:
loaded_pipe = joblib.load("finalized_model.sav")
result = loaded_pipe.predict(new_x)
You have not fit the CountVectorizer.
You should do it like this:
cvec = CountVectorizer()
x_val_vec = cvec.fit_transform(new_x)
Similarly, TfidfTransformer must be used like this:
X_val_tfidf = tfidf_transformer.fit_transform(x_val_vec)

Predict radio signal strength (RSS) using Gaussian Process Regression (GPR)

I want to use GPR to predict RSS from a deployed access point (AP). Since GPR gives mean RSS and its variance too, GPR could be very useful in positioning and navigation system. I read the GPR related published journals and got the theoretical insight of it. Now, I want to implement it with real data (RSS). In my system, the input and corresponding outputs (observations) are:
X: 2D cartesian coordinates points
y: an array of RSS (-dBm) at the corresponding coordinates
After searching online, I found that I can use sklearn software (using python). I installed sklearn and successfully tested the sample codes. The sample python scripts are for 1D GPR. Since my input sets are 2D coordinates, I wanted to modify the sample code. I found that other people have also tried to do the same, for example : How to correctly use scikit-learn's Gaussian Process for a 2D-inputs, 1D-output regression?, How to make a 2D Gaussian Process Using GPML (Matlab) for regression?, and Is kringing suitable for high dimensional regression problems?.
The expected (predicted) values should be similar to y, but the values I got are very different. The testbed where I want to predict the RSS is 16*16 sq. meters, and I want to predict RSS at every meter. I assume that the Gaussian Process predictor in the sample code is trained with gradient descent. I want to optimize the hyperparameters (theta, trained using y and X) with the Firefly algorithm instead.
In order to use my own data (2D input), which line of code am I supposed to edit? Similarly, how can I implement the Firefly algorithm (I've installed a firefly algorithm package using pip)?
Please help me with your kind suggestions and comments.
Thank you very much.
I have simplified the code a bit to illustrate potential issues:
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
x_train = np.array([[0,0],[2,0],[4,0],[6,0],[8,0],[10,0],[12,0],[14,0],[16,0],[0,2],
[2,2],[4,2],[6,2],[8,2],[10,2],[12,2],[14,2],[16,2]])
y_train = np.array([-54,-60,-62,-64,-66,-68,-70,-72,-74,-60,-62,-64,-66,
-68,-70,-72,-74,-76])
# This is a test set?
x1min = 0
x1max = 16
x2min = 0
x2max = 16
x1 = np.linspace(x1min, x1max)
x2 = np.linspace(x2min, x2max)
x_test =(np.array([x1, x2])).T
gp = GaussianProcessRegressor()
gp.fit(x_train, y_train)
# predict on training data
y_pred_train = gp.predict(x_train)
print('Avg MSE: ', ((y_train - y_pred_train)**2).mean()) # MSE is 0
# predict on test (?) data
y_pred_test = gp.predict(x_test)
# it is unclear how good this result without y_test (e.g., held out labeled test samples)
The expected (predicted) values should be similar to y.
Here, I have renamed y to y_train for clarity. After fitting the GP and predicting on x_train, we see that the model perfectly predicts the training samples, which is possibly what you meant. I am not sure whether you mistakenly wrote lowercase x (which I call x_test) instead of uppercase X (which I call x_train) in the question. If we predict on x_test, we cannot really know how good the prediction is without the corresponding y_test values. So this basic example is working as I would expect.
It also appears you are trying to create a grid for x_test; however, the current code does not do that, since x1 and x2 are always equal at each position. If you want a grid, take a look at np.meshgrid, as in the sketch below.
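For reference, a minimal sketch of building an actual 1 m-spaced grid over the 16 m x 16 m testbed with np.meshgrid, reusing the gp fitted in the snippet above:
import numpy as np

# Every combination of x1 and x2, not just pairs along the diagonal
x1 = np.arange(0, 17, 1.0)
x2 = np.arange(0, 17, 1.0)
g1, g2 = np.meshgrid(x1, x2)
x_grid = np.column_stack([g1.ravel(), g2.ravel()])    # shape (289, 2)

# return_std gives the per-point predictive standard deviation as well as the mean RSS
y_mean, y_std = gp.predict(x_grid, return_std=True)
print(y_mean.reshape(g1.shape).shape)                 # (17, 17) RSS surface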

How does TF-IDF produce features for machine learning? How is it different from a bag of words?

I was hoping to get a brief explanation of how TF-IDF produces features that can be used for machine learning. What are the differences between bag of words and TF-IDF? I understand how TF-IDF works, but not how features are made with it and how they are used in classification/regression.
I am using scikit-learn; what does the following code actually do, theoretically and in practice? I have commented it with my understanding and some questions; any help would be really appreciated:
traindata = list(np.array(p.read_table('data/train.tsv'))[:,2]) #taking in data for TF-IDF, I get this
testdata = list(np.array(p.read_table('data/test.tsv'))[:,2]) #taking in data for TF-IDF, I get this
y = np.array(p.read_table('data/train.tsv'))[:,-1] #labels for our data
tfv = TfidfVectorizer(min_df=3, max_features=None, strip_accents='unicode',
analyzer='word',token_pattern=r'\w{1,}',ngram_range=(1, 2), use_idf=1,smooth_idf=1,sublinear_tf=1) #making tf-idf object with params to dictate how it should behave
rd = lm.LogisticRegression(penalty='l2', dual=True, tol=0.0001,
C=1, fit_intercept=True, intercept_scaling=1.0,
class_weight=None, random_state=None)
X_all = traindata + testdata #adding data together
lentrain = len(traindata) #what is this?
tfv.fit(X_all) #is this where features are created? Are all words used as features? What happens here ?
X_all = tfv.transform(X_all)#transforms our numpy array of text into a TF-IDF
X = X_all[:lentrain]
X_test = X_all[lentrain:]
rd.fit(X,y) #train LR on newly made feature set with a feature for each word?
I guess idf is what confuses you here, since bag of words is just the tf of each word in a document, so why idf? idf is a way to estimate how important a word is. Document frequency (df) is a good indicator of a word's importance for classification: a word that appears in fewer documents (e.g., "nba" would mostly appear in documents belonging to sports) discriminates better, so idf is positively correlated with a word's importance.
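To make that concrete, here is a small check on toy documents (not from the question) using scikit-learn's default smoothed formula idf(t) = ln((1 + n) / (1 + df(t))) + 1, where n is the number of documents and df(t) is the number of documents containing term t; rarer terms receive higher idf weights.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["nba finals tonight",
        "nba draft news",
        "stock market news",
        "market rally continues"]

tfv = TfidfVectorizer()          # smooth_idf=True by default
tfv.fit(docs)

for term, idx in sorted(tfv.vocabulary_.items()):
    print(term, round(float(tfv.idf_[idx]), 3))
# terms in 2 of 4 docs ('nba', 'news', 'market') get idf ~1.51,
# terms in only 1 doc ('finals', 'draft', 'rally', ...) get idf ~1.92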
Tf-idf is the most common vector representation for documents. It takes into account the frequencies of the words in a text and also in the whole document corpus.
Note that this method is a heuristic rather than something derived from a mathematical proof: it pragmatically works well in a bunch of contexts, such as document similarity using cosine distance or other metrics.
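As a concrete illustration on a toy corpus (not from the question): both bag of words and tf-idf produce a matrix with one row per document and one column per vocabulary term, and those columns are exactly the features fed to a classifier; only the cell values differ.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = ["the cat sat",
        "the cat sat on the mat",
        "dogs and cats"]

bow = CountVectorizer()
tfidf = TfidfVectorizer()

X_bow = bow.fit_transform(docs)        # raw term counts, one row per document
X_tfidf = tfidf.fit_transform(docs)    # same columns, reweighted by idf and L2-normalised

print(bow.get_feature_names_out())     # the learned vocabulary = the feature columns
print(X_bow.toarray())
print(X_tfidf.toarray().round(2))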
