using ColumnTransformer for predicting values - python

I am currently running a logistic regression model using keras.
I have 1 numeric variable and around 6 categorical variables.
I am currently using a ColumnTransformer for training and testing the model, and it works perfectly (code shown below):
numeric_variables = ["var1"]
cat_variables = ["var2","var3","var4","var5","var6","var7"]
pipeline = ColumnTransformer([('num',StandardScaler(), numeric_variables), ('cat',OneHotEncoder(handle_unknown = "ignore"), cat_variables)], remainder = "passthrough")
pipeline.fit(X_Train)
pipeline.fit_transform(X_Train)
This works perfectly when I run it on the train and test datasets.
However, when I deploy the model to get the probability of a customer renewing, I am sending the data as a dataframe with one row.
While the fit_transform for X_Train and X_Test gives out an nx17 array (because of the one-hot encoding of the categorical factors), the transform of the prediction data only gives nx7.
My theory here is that the pipeline is dropping one-hot encoded fields. For instance, if var2 can take 3 values (say "M", "F" and "O"), X_Train gives out 3 columns (isM, isF and isO), while the transform for the prediction row only gives the output for "isM" if the value of var2 is "M".
How do I address this issue?
I get this error when I run the model.predict on the single customer example:
Input 0 of layer "sequential" is incompatible with the layer: expected shape=(None, 19), found shape=(None, 7)

After the discussion in the comments:
It appears that you are using pipeline.fit_transform(X_test). This means you are fitting your pipeline with X_test before transforming it. This is a problem in your case for two reasons:
You are re-fitting the StandardScaler, which means you will scale your features differently than what you did with the train set.
You are re-fitting the OneHotEncoder. Hence, you could miss some categories in cat_variables that were present only in the train set. Consequently, your output shape is smaller.
Simply use .transform(X_test) (with the pipeline already fitted on X_Train) instead, and do the same for any new prediction data.
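A minimal sketch of the intended pattern, reusing the pipeline defined above (hedged: it assumes X_Train, X_Test and a one-row new_customer DataFrame that all share the same columns, plus an already-trained Keras model):
# Fit the pipeline (categories and scaling) on the training data only
X_train_enc = pipeline.fit_transform(X_Train)
# Reuse the already-fitted pipeline for the test set and for any new rows
X_test_enc = pipeline.transform(X_Test)
new_customer_enc = pipeline.transform(new_customer)  # one-row DataFrame, same columns as X_Train
probability = model.predict(new_customer_enc)
Because the OneHotEncoder keeps every category it learned from X_Train (and handle_unknown="ignore" zero-fills unseen ones), the transformed row always has the width the network expects.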

Related

tensorflow keras: dimension error when using different evaluation metrics other than accuracy

I have a tfdf RandomForestModel for multi-class prediction and want to evaluate its quality.
However, when calling model.evaluate() after compiling the model with anything other than "Accuracy", I get the following error:
ValueError: Shapes (None, 3) and (None, 1) are incompatible
Looking into the Traceback I see that an assertion fails:
y_pred.shape.assert_is_compatible_with(y_true.shape).
My labels are a single column in the training and test sets, containing 3 factors, while the output of predict gives 3 columns with membership likelihoods.
So I guess this is where the problem lies, but I am not sure how to transform the dimensions of my labels.
Should I one-hot encode them and have 3 columns instead of one as labels in my data?
How can I resolve this dimension error?
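If one-hot encoding the labels is the chosen route, here is a minimal sketch (an assumption on my part, not from the original post, for integer-encoded labels 0, 1, 2 in y_train and y_test):
import tensorflow as tf
# Turn the single integer label column into 3 one-hot columns so that
# y_true has shape (None, 3), matching the (None, 3) probabilities from predict
y_train_ohe = tf.keras.utils.to_categorical(y_train, num_classes=3)
y_test_ohe = tf.keras.utils.to_categorical(y_test, num_classes=3)
Alternatively, keep the single integer label column and compile with a sparse metric such as tf.keras.metrics.SparseCategoricalAccuracy(), which compares (None, 1) integer labels against (None, 3) probability outputs.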

Apply embedding layer for categorical variable with keras

I have a dataset with many categorical features and many other features. I want to apply an embedding layer to convert the categorical data to numerical data for use with other models. But I got an error during training.
Now, my training process is:
Apply a label encoder to the categorical features
Split the data into training and testing sets with the train_test_split() function
Drop the numerical columns, and only send the categorical features and the target y for model training.
And I got this error:
indices[13,0] = 10 is not in [0, 10)
[[node functional_1/embed_6/embedding_lookup (defined at <ipython-input-34-0b6b3ae455d0>:4) ]] [Op:__inference_train_function_3509]
Errors may have originated from an input operation.
Input Source operations connected to node functional_1/embed_6/embedding_lookup:
functional_1/embed_6/embedding_lookup/2395 (defined at /usr/lib/python3.6/contextlib.py:81)
Function call stack:
train_function
After searching, someone says the problem is that the vocabulary_size parameter of the embedding layer is wrong, and that enlarging vocabulary_size can solve this problem.
But in my case, I need to map the result back to the original labels.
For example, I have a categorical feature ['dog', 'cat', 'fish']. After label encoding, it becomes [0, 1, 2]. An embedding layer for this feature with 3 unique values should output something like
([-0.22748041], [-0.03832678], [-0.16490786]).
Then I can replace the 'dog' values in the original data with -0.22748041, the 'cat' values with -0.03832678, and so on.
So, I can't change the vocabulary_size or the output dimension will be wrong.
I guess the problem in my case is that not all of the categorical values go into the training process.
(E.g. only 'dog' and 'fish' are in the training data; 'cat' appears only in the testing data.) If I set the vocabulary_size to 3, it reports an error like the one above. If I experimentally add 'cat' to the training data, it works fine.
My question is: does the embedding layer have to see all of the unique values during training to do what I want? If there are many categorical features with many unique values, how can I ensure every unique value appears in the training data when splitting?
Thanks in advance!
Solution
You need to use out-of-vocabulary (OOV) buckets when creating the lookup table.
OOV buckets allow looking up categories that are not in the vocabulary but show up during testing.
What does the solution do?
Setting the number of OOV buckets to a large enough value (like 1000) will also give you ids for categories that were not present in the training vocabulary.
words = tf.constant(vocabulary)
word_ids = tf.range(len(vocabulary), dtype=tf.int64)
# important
vocab_init = tf.lookup.KeyValueTensorInitializer(words, word_ids)
num_oov_buckets = 1000
table = tf.lookup.StaticVocabularyTable(vocab_init, num_oov_buckets) # lookup table mapping category -> id
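As a quick sanity check of this behaviour, here is a toy sketch with an assumed vocabulary (my addition, not part of the original answer):
import tensorflow as tf
vocabulary = ["dog", "cat", "fish"]
vocab_init = tf.lookup.KeyValueTensorInitializer(
    tf.constant(vocabulary), tf.range(len(vocabulary), dtype=tf.int64))
table = tf.lookup.StaticVocabularyTable(vocab_init, num_oov_buckets=1000)
# Known categories keep their ids; an unseen category ("lizard") is hashed
# into one of the 1000 OOV buckets, i.e. an id in [3, 1003)
print(table.lookup(tf.constant(["dog", "cat", "lizard"])))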
Then you can encode the training set (I am using the IMDb reviews dataset from TensorFlow Datasets):
def encode_words(X_batch, y_batch):
"""
Encode the training set converting words to IDs
using the lookup table just created
"""
return table.lookup(X_batch), y_batch
train_set = datasets["train"].batch(32).map(preprocess)
train_set = train_set.map(encode_words).prefetch(1)
When creating the model:
vocab_size = 10000 # size of the vocabulary for this variable
embedding_size = 128 # tweakable hyperparameter
model = keras.models.Sequential([
keras.layers.Embedding(vocab_size + num_oov_buckets, embedding_size,
input_shape=[None]),
# usual code follows
])
and fit the data
model.compile(loss="binary_crossentropy",
optimizer="adam",
metrics=["accuracy"])
history = model.fit(train_set, epochs=5)

Why is the shape different for train, test and cv?

I have a dataset of 3321 rows and I have divided it into train, test and cv sets.
After dividing the dataset I applied response coding and one-hot encoding, but after one-hot encoding the shapes of the columns have also changed, due to which I am getting an error later while predicting.
#response coding for the Gene feature
alpha = 1 #Used for laplace smoothing
train_gene_feature_responseCoding = np.array(get_gv_feature(alpha, "Gene", train_df)) #train gene feature
test_gene_feature_responseCoding = np.array(get_gv_feature(alpha, "Gene", test_df)) #test gene feature
cv_gene_feature_responseCoding = np.array(get_gv_feature(alpha, "Gene", cv_df)) #cv gene feature
#one-hot encoding of Gene Feature
gene_vectorizer = CountVectorizer()
train_gene_feature_onehotCoding = gene_vectorizer.fit_transform(train_df['Gene'])
test_gene_feature_onehotCoding = gene_vectorizer.fit_transform(test_df['Gene'])
cv_gene_feature_onehotCoding = gene_vectorizer.fit_transform(cv_df['Gene'])
train_gene_feature_responseCoding.shape - (2124, 9)
test_gene_feature_responseCoding.shape - (665, 9)
cv_gene_feature_responseCoding.shape - (532, 9)
train_gene_feature_onehotCoding.shape - (2124, 228)
test_gene_feature_onehotCoding.shape - (665, 158)
cv_gene_feature_onehotCoding.shape - (532, 144)
You need to use gene_vectorizer.transform() (rather than fit_transform()) on the test and cv dataframes.
gene_vectorizer.transform(test_df['Gene'])
gene_vectorizer.transform(cv_df['Gene'])
In the scikit-learn estimator API:
fit() : learns the model parameters (e.g. the vocabulary) from the training data
transform() : applies the parameters learned by fit() to produce the transformed dataset
fit_transform() : combination of fit() and transform() on the same dataset
So on the test datasets you just need to use transform(), which converts the test data to the shape the model accepts.
Reference: what is the difference between 'transform' and 'fit_transform' in sklearn
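Putting it together, a minimal sketch of the corrected encoding step (assuming train_df, test_df and cv_df are already split):
from sklearn.feature_extraction.text import CountVectorizer
gene_vectorizer = CountVectorizer()
# Learn the vocabulary from the training data only
train_gene_feature_onehotCoding = gene_vectorizer.fit_transform(train_df['Gene'])
# Reuse that vocabulary for test and cv, so the column counts match
test_gene_feature_onehotCoding = gene_vectorizer.transform(test_df['Gene'])
cv_gene_feature_onehotCoding = gene_vectorizer.transform(cv_df['Gene'])
All three matrices now have the same number of columns (228 in this example), so the downstream model sees a consistent shape.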

predicting new values through a model trained on one-hot encoded data

This might look like a trivial problem, but I am getting stuck when predicting results from a model. My problem is this:
I have a dataset of shape 1000 x 19 (excluding the target feature), but after one-hot encoding it becomes 1000 x 141.
Since I trained the model on data of shape 1000 x 141, I need data of shape 1 x 141 (at least) for prediction.
I also know that in Python I can make a prediction using
model.predict(data)
But since I am getting the data from an end user through a web portal, it is of shape 1 x 19, and I am confused about how I should proceed to make predictions based on the user data.
How can I convert data of shape 1 x 19 into 1 x 141, keeping the same column order as the train/test data?
Any help in this direction would be highly appreciated.
I am assuming that to create the one-hot encoding you are using sklearn's OneHotEncoder. If you are, then the problem is easily solved, since you fit the one-hot encoder on your training data:
from sklearn.preprocessing import OneHotEncoder
encoder = OneHotEncoder(categories = "auto", handle_unknown = "ignore")
X_train_encoded = encoder.fit_transform(X_train)
In the code above, the encoder is fitted on your training data, so when you receive the test (or user) data you can transform it into the same encoding using this fitted encoder:
test_data = encoder.transform(test_data)
Now your test data will also have shape 1 x 141. You can check the shape using
(pd.DataFrame(test_data.toarray())).shape
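One practical detail for the web-portal scenario (my addition, not part of the original answer): persist the fitted encoder at training time and reload it when serving, so the incoming 1 x 19 row is always transformed with the same category mapping. A sketch, assuming X_train is a DataFrame and user_input is the row received from the portal:
import joblib
import pandas as pd
# At training time, after encoder.fit_transform(X_train):
joblib.dump(encoder, "onehot_encoder.joblib")
# At serving time, for the single row coming from the web portal:
encoder = joblib.load("onehot_encoder.joblib")
user_row = pd.DataFrame([user_input], columns=X_train.columns)  # same column order as training
user_encoded = encoder.transform(user_row)                      # shape (1, 141)
prediction = model.predict(user_encoded)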

Is it possible to use a different data set as input for prediction in AdaBoostRegressor (sklearn)?

I am just a beginner in machine learning and I am playing with sklearn now.
I copied the AdaBoostRegressor example from the official site and added the following:
X_pred = np.linspace(6, 12, 100)[:, np.newaxis]
y_pred = regr_2.predict(X_pred)
As the training data set X is ranged from 0 to 6, I am trying to get a prediction for a different data set X_pred ranged from 6 to 12.
However, I found that the value of y_pred is always -1.05382839, which is the last value of the training set output y.
I am wondering whether it is possible to use a dataset outside the training samples as the input for prediction.
Is it possible to do that? If so, what is the correct usage?
BTW, the attached picture shows the output.
Red and green are the predicted outputs based on the training set input (0-6) and blue is the output for X_pred (6-12).
In short - no. This is not what regression is about. Regression is about interpolation, not extrapolation. Pretty much none of the standard regressors can make meaningful predictions about data outside the range of the training set; in particular, tree-based ensembles like AdaBoostRegressor simply return the value of the leaf an input falls into, so every point beyond x = 6 receives the same constant prediction.
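A small demonstration of that constant behaviour, based on the setup from the linked AdaBoostRegressor example (a sketch; regr_2 is the boosted regressor trained on X in [0, 6]):
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(1)
X = np.linspace(0, 6, 100)[:, np.newaxis]
y = np.sin(X).ravel() + np.sin(6 * X).ravel() + rng.normal(0, 0.1, X.shape[0])

regr_2 = AdaBoostRegressor(DecisionTreeRegressor(max_depth=4),
                           n_estimators=300, random_state=rng)
regr_2.fit(X, y)

X_pred = np.linspace(6, 12, 100)[:, np.newaxis]
# Every point lies beyond the training range, so each tree routes it to its
# right-most leaf and the ensemble returns one and the same constant value
print(np.unique(regr_2.predict(X_pred)))  # a single value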
