How to invoke a SageMaker XGBoost endpoint post model creation? - python

I have been following along with this really helpful XGBoost tutorial on Medium (code used towards bottom of article): https://medium.com/analytics-vidhya/random-forest-and-xgboost-on-amazon-sagemaker-and-aws-lambda-29abd9467795.
To date, I've been able to get data appropriately formatted for ML purposes, a model created based on training data, and test data fed through the model to give useful results.
Whenever I leave and come back to work on the model or feed in new test data, however, I find I need to re-run all of the model creation steps in order to make further predictions. Instead, I would like to call my already-created model endpoint based on the Image_URI and feed in new data.
Current steps performed:
Model Training
xgb = sagemaker.estimator.Estimator(containers[my_region],
                                    role,
                                    train_instance_count=1,
                                    train_instance_type='ml.m4.xlarge',
                                    output_path='s3://{}/{}/output'.format(bucket_name, prefix),
                                    sagemaker_session=sess)
xgb.set_hyperparameters(eta=0.06,
                        alpha=0.8,
                        lambda_bias=0.8,
                        gamma=50,
                        min_child_weight=6,
                        subsample=0.5,
                        silent=0,
                        early_stopping_rounds=5,
                        objective='reg:linear',
                        num_round=1000)
xgb.fit({'train': s3_input_train})
xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
Evaluation
test_data_array = test_data.drop(['price', 'id', 'sqft_above', 'date'], axis=1).values  # load the data into an array
xgb_predictor.serializer = csv_serializer  # set the serializer type
predictions = xgb_predictor.predict(test_data_array).decode('utf-8')  # predict!
predictions_array = np.fromstring(predictions[1:], sep=',')  # and turn the prediction into an array
print(predictions_array.shape)

from sklearn.metrics import r2_score
print("R2 score : %.2f" % r2_score(test_data['price'], predictions_array))
It seems that this particular line:
predictions = xgb_predictor.predict(test_data_array).decode('utf-8') # predict!
needs to be rewritten so that it no longer references xgb_predictor but instead references the model location.
I have tried the following
trained_model = sagemaker.model.Model(
    model_data='s3://{}/{}/output/xgboost-2020-11-10-00-00/output/model.tar.gz'.format(bucket_name, prefix),
    image_uri='XXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest',
    role=role)  # your role here; could be a different name
trained_model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
and then replaced
xgb_predictor.serializer = csv_serializer # set the serializer type
predictions = xgb_predictor.predict(test_data_array).decode('utf-8') # predict!
with
trained_model.serializer = csv_serializer # set the serializer type
predictions = trained_model.predict(test_data_array).decode('utf-8') # predict!
but I get the following error:
AttributeError: 'Model' object has no attribute 'predict'

That's a good question :) I agree, many of the official tutorials tend to show the full train-to-invoke pipeline and don't emphasize enough that each step can be done separately. In your specific case, when you want to invoke an already-deployed endpoint, you can either: (A) use the invoke API call in one of the numerous SDKs (for example the CLI or boto3), or (B) instantiate a predictor with the high-level Python SDK, either the generic sagemaker.predictor.Predictor class or its XGBoost-specific child sagemaker.xgboost.model.XGBoostPredictor, as illustrated below:
from sagemaker.xgboost.model import XGBoostPredictor
predictor = XGBoostPredictor(endpoint_name='your-endpoint')
predictor.predict('<payload>')
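For option (A), a minimal boto3 sketch might look like the following; the endpoint name and the CSV payload are placeholders, not values from the question:
import boto3

# Sketch for option (A): invoke an existing endpoint directly via the runtime API.
# 'your-endpoint' and the CSV row below are placeholders.
runtime = boto3.client('sagemaker-runtime')
response = runtime.invoke_endpoint(
    EndpointName='your-endpoint',
    ContentType='text/csv',
    Body='0.5,1.2,3.4')  # one CSV row of features
result = response['Body'].read().decode('utf-8')
print(result)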
See this similar question: How to use a pretrained model from s3 to predict some data?
Note:
If you want the model.deploy() call to return a predictor, your model must be instantiated with a predictor_cls. This is optional; you can also first deploy a model and then invoke it as a separate step with the above technique (see the sketch after these notes).
Endpoints create charges even if you don't invoke them; they are billed for uptime. So if you don't need an always-on endpoint, don't hesitate to shut it down to minimize costs.
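To illustrate the first note, here is a hedged sketch of deploying from the model artifact with a predictor_cls so that deploy() returns something you can call predict() on. The model_data path, image URI, role, and test_data_array are placeholders taken from the question; this assumes SageMaker Python SDK v2 naming:
import sagemaker
from sagemaker.predictor import Predictor
from sagemaker.serializers import CSVSerializer

# Sketch: passing predictor_cls makes deploy() return a usable predictor.
trained_model = sagemaker.model.Model(
    model_data='s3://{}/{}/output/xgboost-2020-11-10-00-00/output/model.tar.gz'.format(bucket_name, prefix),
    image_uri='XXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest',
    role=role,
    predictor_cls=Predictor)
xgb_predictor = trained_model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
xgb_predictor.serializer = CSVSerializer()  # send CSV rows, as in the original evaluation code
predictions = xgb_predictor.predict(test_data_array).decode('utf-8')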

Related

What is the need to return a function object while creating a data set using TensorFlow

I am new to Machine Learning and I am trying to create a Machine Learning model using the TensorFlow API, following the tutorial in the TensorFlow documentation linked here.
But I am having trouble understanding this part of the code
def make_input_fn(data_df, label_df, num_epochs=10, shuffle=True, batch_size=32):
    def input_function():  # inner function, this will be returned
        ds = tf.data.Dataset.from_tensor_slices((dict(data_df), label_df))  # create tf.data.Dataset object with data and its label
        if shuffle:
            ds = ds.shuffle(1000)  # randomize order of data
        ds = ds.batch(batch_size).repeat(num_epochs)  # split dataset into batches of 32 and repeat for the number of epochs
        return ds  # return a batch of the dataset
    return input_function  # return a function object for use
Then storing the output of the function in a variable
train_input_fn = make_input_fn(dftrain, y_train)
And at last training the model with the data set
linear_est.train(train_input_fn)
I fail to understand what we are trying to achieve by returning just the name of the inner function from make_input_fn instead of returning the dataset itself and passing that to train the model.
I am a beginner in Python and just started to learn Machine Learning, and I am unable to find a proper answer to my question, so if anyone can kindly explain it in a beginner-friendly way I would be much obliged.
I fail to understand what we are trying to achieve by returning just the name of the inner function from make_input_fn instead of returning the dataset itself and passing that to train the model.
In Python, this pattern is a form of currying, implemented here with a closure: a function that needs several arguments is wrapped so that the returned inner function takes no arguments at all, because it captures data_df, label_df, and the other parameters from the enclosing make_input_fn call and can use them whenever it is invoked later.
In TensorFlow, based on the documentation (https://www.tensorflow.org/api_docs/python/tf/estimator/LinearClassifier#train):
train(
    input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
The train method of the estimator expects a callable for the parameter input_fn. The reason is that every time you call Estimator.train() it creates a new graph by invoking input_fn and model_fn and connecting them together. If you supply a tensor or a dataset directly instead of a function, this leads to errors.
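A minimal sketch of the same idea: instead of a factory that returns a closure, you could pass a zero-argument lambda, which is what train() ultimately needs, i.e. something it can call to build the dataset inside its own graph. The helper make_dataset below is hypothetical; dftrain, y_train, and linear_est are from the tutorial:
import tensorflow as tf

def make_dataset(data_df, label_df, batch_size=32, num_epochs=10):
    # hypothetical helper: builds and returns the tf.data.Dataset immediately
    ds = tf.data.Dataset.from_tensor_slices((dict(data_df), label_df))
    return ds.shuffle(1000).batch(batch_size).repeat(num_epochs)

# train() wants a callable, so wrap the dataset construction in a lambda;
# the lambda plays the same role as the inner input_function above.
linear_est.train(input_fn=lambda: make_dataset(dftrain, y_train))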

How can I get probabilities from a DeepPavlov classifier?

I trained my classifier using DeepPavlov, but when I call the trained model on a sample, the function returns only one class label, while I want the probabilities of every class. I did not find function parameters that would let me get probabilities.
Has anyone encountered this problem? Thanks!
from deeppavlov import configs, train_model
model = train_model(configs.classifiers.intents_snips)
model(['Some sentence'])
I want the output to be an np.array whose length is the number of classes, but the current output is a single label like ['PlayMusic'].
You can change the chainer.out parameter of your config to ["y_pred_probas"] before inferring, but this will most likely also require you to update train.metrics if you want to train your model with the same config.
Alternatively you can call your model like
model.compute(['Some sentence'], targets=["y_pred_probas"])
And to get the class indices you can run
dict(model['classes_vocab'])
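As a hedged sketch of the first suggestion, the stock config could be loaded, its chainer.out switched to probabilities, and the model retrained; this assumes train_model accepts a config dict, and remember that the metrics may need updating as noted above:
import json
from deeppavlov import configs, train_model

# Sketch: load the stock config, switch the pipeline output to probabilities, retrain.
config = json.load(open(configs.classifiers.intents_snips))
config['chainer']['out'] = ['y_pred_probas']
model = train_model(config)
probas = model(['Some sentence'])  # expected: per-class probabilities instead of a single label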

Creating an H2OGeneralizedLinearEstimator instance from existing coefficients

I have a set of coefficients from a trained model but I don't have access to the model itself or training dataset. I'd like to create an instance of H2OGeneralizedLinearEstimator and set the coefficients manually to use the model for prediction.
The first thing I tried was (this is an example to reproduce the error):
import h2o
from h2o.estimators.glm import H2OGeneralizedLinearEstimator
from h2o.frame import H2OFrame
h2o.init()
# creating some test dataset
test = {"x":[0,1,2], "y":[0,0,1]}
df = H2OFrame(python_obj=test)
glm = H2OGeneralizedLinearEstimator(family='binomial', model_id='logreg')
# setting the coefficients
glm.coef = {'Intercept':0, 'x':1}
# predict
glm.predict(test_data=df)
This throws an error:
H2OResponseError: Server error
water.exceptions.H2OKeyNotFoundArgumentException: Error: Object
'logreg' not found in function: predict for argument: model
I also tried to set glm.params keys based on the keys of a similar trained model:
for key in trained.params.keys():
    glm.params.__setitem__(key, trained.params[key])
but this doesn't populate glm.params (glm.params remains {}).
It looks like you want to use the function makeGLMModel
This is further described in the documentation, and I will repost here for your convenience:
Modifying or Creating a Custom GLM Model
In R and python, the makeGLMModel call can be used to create an H2O model from given coefficients. It needs a source GLM model trained on the same dataset to extract the dataset information. To make a custom GLM model from R or python:
R: call h2o.makeGLMModel. This takes a model, a vector of coefficients, and (optional) decision threshold as parameters.
Python: H2OGeneralizedLinearEstimator.makeGLMModel (static method) takes a model, a dictionary containing coefficients, and an (optional) decision threshold as parameters.
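A minimal sketch of the Python call, following the documented parameters quoted above; source_glm stands for a GLM already trained on a dataset with the same columns, and the coefficient names and threshold are placeholders:
from h2o.estimators.glm import H2OGeneralizedLinearEstimator

# Sketch: build a custom GLM from given coefficients using a source model's schema.
custom_glm = H2OGeneralizedLinearEstimator.makeGLMModel(
    model=source_glm,                     # placeholder: GLM trained on the same dataset layout
    coefs={'Intercept': 0.0, 'x': 1.0},   # placeholder coefficients
    threshold=0.5)                        # optional decision threshold
predictions = custom_glm.predict(df)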

TensorFlow ExportOutputs, PredictOuput, and specifying signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY

Context
I have a colab with a very simple demo Estimator for the purpose of learning / understanding the Estimator API, with the goal of making a convention for a plug-and-play model with the useful bells and whistles of the trade intact (e.g. early stopping if the validation set stops improving, exporting the model, etc.).
Each of the three Estimator modes (TRAIN, EVAL, and PREDICT) returns an EstimatorSpec.
According to the docs:
__new__(
    cls,
    mode,
    predictions=None,          # required by PREDICT
    loss=None,                 # required by TRAIN and EVAL
    train_op=None,             # required by TRAIN
    eval_metric_ops=None,
    export_outputs=None,
    training_chief_hooks=None,
    training_hooks=None,
    scaffold=None,
    evaluation_hooks=None,
    prediction_hooks=None
)
Of these named arguments I would like to bring attention to predictions and export_outputs, which are described in the docs as:
predictions: Predictions Tensor or dict of Tensor.
export_outputs: Describes the output signatures to be exported to SavedModel and used during serving. A dict {name: output} where:
name: An arbitrary name for this output.
output: an ExportOutput object such as ClassificationOutput, RegressionOutput, or PredictOutput. Single-headed models only need to specify one entry in this dictionary. Multi-headed models should specify one entry for each head, one of which must be named using signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY. If no entry is provided, a default PredictOutput mapping to predictions will be created.
Thus it should be clear why I bring up export_outputs: one would most likely want to use the trained model in the future by loading it from a SavedModel.
To make this question a bit more accessible / add some clarity:
"single-headed" models are the most common model one encounters where the input_fn features are transformed to a singular (batched) output
"multi-headed" models are models where there is more than one output
e.g. this multi-headed model's input_fn (in accordance with the Estimator api) returns a tuple (features, labels) i.e. this model has two heads).
def input_fn():
    features = ...
    labels1 = ...
    labels2 = ...
    return features, {'head1': labels1, 'head2': labels2}
How one specifies the signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY is the core of this question. Namely, how does one specify it? (e.g. should it be a dict {signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: head})
Right, so in the colab you see that our model's export_outputs is actually defined in a multi-head manner (although it shouldn't be):
From estimator functions > model_fn of the colab:
def model_fn(...):
    # ...
    # send the features through the graph
    MODEL = build_fn(MODEL)
    # prediction
    MODEL['predictions'] = {'labels': MODEL['net_logits']}  # <--- net_logits added in the build_fn
    MODEL['export_outputs'] = {
        k: tf.estimator.export.PredictOutput(v) for k, v in MODEL['predictions'].items()
    }
    # ...
# ...
In this particular instance, if we expand the dictionary comprehension, we have the functional equivalent of:
MODEL['export_outputs'] = {
    'labels': tf.estimator.export.PredictOutput(MODEL['net_logits'])
}
This works in this instance because our dictionary has one key and hence one PredictOutput; since our model_fn in the colab has only a single head, it would be more properly formatted as:
MODEL['export_outputs'] = {
    'predictions': tf.estimator.export.PredictOutput(MODEL['predictions'])
}
as it states in PredictOutput:
__init__(outputs)
where
outputs: A Tensor or a dict of string to Tensor representing the predictions.
Question
Thus my question is as follows:
If PredictOutput can be a dictionary, when / why would one want multiple PredictOutputs as their export_outputs for the EstimatorSpec?
If one has a multi-headed model (say with multiple PredictOutputs), how does one actually specify signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY?
What is the point of predictions in the EstimatorSpec when it is also "required" (for anyone who cares about using SavedModels) in export_outputs?
Thanks for your detailed question; you have clearly dug deep here.
There are also the RegressionOutput and ClassificationOutput classes, which cannot be dictionaries. The use of an export_outputs dict allows generalization over those use cases.
The head you want to be served by default from the saved model should take the default signature key. For example:
export_outputs = {
    signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
        PredictOutput(outputs={'some_output_1': output_1}),
    'head-2': PredictOutput(outputs={'some_output_2': output_2}),
    'head-3': PredictOutput(outputs={'some_output_3': output_3})
}
Reason 1: Many people use the default export_outputs (which in turn is the value of predictions), or don't export to a SavedModel at all. Reason 2: history. predictions came first, and over time more and more features were added. These features required flexibility and extra info, and were therefore independently packed into the EstimatorSpec.
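To make the serving side concrete, here is a hedged sketch (assuming TF 2.x; the export directory and head names are placeholders following the export_outputs example above) of how the named signatures are selected when the SavedModel is loaded back:
import tensorflow as tf

# Sketch: load an exported SavedModel and pick a signature by name.
loaded = tf.saved_model.load(export_dir)                 # export_dir is a placeholder path
default_fn = loaded.signatures['serving_default']        # the DEFAULT_SERVING_SIGNATURE_DEF_KEY head
head2_fn = loaded.signatures['head-2']                   # a named secondary head
print(list(loaded.signatures.keys()))                    # lists all exported signature names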

Get weight parameters from Shogun Shareboost model

I have a trained ShareBoost model. How do I obtain the model's weight parameters/vectors?
I tried to get the individual linear machines and extract the individual weight vectors, but unlike the linear SVM it does not seem to have a get_w() method.
Also, even though the C++ ShareBoost class is a subclass of CMachine, the Parameters object obtained from m_parameters (see docs) does not appear to expose the parameters.
The following code is what I have tried.
num_machines = shareboost.get_num_machines()
# num_machines = 2
lm0 = shareboost.get_machine(0)
p0 = lm0.m_parameters
# The following method does not exist
p0.get_parameter(0)
In case you are using the C++ API, you could get the weight vector the following way:
auto lm = (CLinearMachine*)shareboost->get_machine(0);
lm->get_w();
Since you are using the Python API, this is currently only possible with the new API of Shogun (which is only available in the develop branch at the moment):
lm0 = shareboost.get_machine(0)
weights = lm0.get_real_vector("w")
See more examples of how to use the new API:
http://shogun.ml/examples/nightly/examples/binary/linear_support_vector_machine.html
