I have a trained ShareBoost model. How do I obtain the model's weight parameters/vectors?
I tried to get the individual linear machines and extract the individual weight vectors but unlike the linear svm it does not seem to have a get_w() method.
Also, even though the C++ ShareBoost class is a subclass of CMachine, the Parameters object obtained from m_parameters (see docs) does not appear to have the parameters available.
The following code is what I have tried.
num_machines = shareboost.get_num_machines()
# num_machines = 2
lm0 = shareboost.get_machine(0)
p0 = lm0.m_parameters
# The following method does not exist
p0.get_parameter(0)
In case you are using the C++ API, you could get the weight vector the following way:
auto lm = (CLinearMachine*)shareboost->get_machine(0);
lm->get_w();
Since you are using the Python API, this is currently only possible with the new API of Shogun (which at the moment is only available in the develop branch):
lm0 = shareboost.get_machine(0)
weights = lm0.get_real_vector("w")
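Putting it together, a sketch (again assuming the develop-branch API above) that collects every sub-machine's weight vector into one matrix:
import numpy as np

weights = []
for i in range(shareboost.get_num_machines()):
    lm = shareboost.get_machine(i)             # each sub-machine is a linear machine
    weights.append(lm.get_real_vector("w"))    # its weight vector
W = np.vstack(weights)                         # one row per machine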
See some more examples of how to use the new API:
http://shogun.ml/examples/nightly/examples/binary/linear_support_vector_machine.html
I have been following along with this really helpful XGBoost tutorial on Medium (code used towards bottom of article): https://medium.com/analytics-vidhya/random-forest-and-xgboost-on-amazon-sagemaker-and-aws-lambda-29abd9467795.
To-date, I've been able to get data appropriately formatted for ML purposes, a model created based on training data, and then test data fed through the model to give useful results.
Whenever I leave and come back to work more on the model or feed in new test data, however, I find I need to re-run all the model-creation steps in order to make any further predictions. Instead, I would like to just call my already-created model endpoint based on the Image_URI and feed in new data.
Current steps performed:
Model Training
xgb = sagemaker.estimator.Estimator(containers[my_region],
                                    role,
                                    train_instance_count=1,
                                    train_instance_type='ml.m4.xlarge',
                                    output_path='s3://{}/{}/output'.format(bucket_name, prefix),
                                    sagemaker_session=sess)
xgb.set_hyperparameters(eta=0.06,
                        alpha=0.8,
                        lambda_bias=0.8,
                        gamma=50,
                        min_child_weight=6,
                        subsample=0.5,
                        silent=0,
                        early_stopping_rounds=5,
                        objective='reg:linear',
                        num_round=1000)
xgb.fit({'train': s3_input_train})
xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
Evaluation
test_data_array = test_data.drop([ 'price','id','sqft_above','date'], axis=1).values #load the data into an array
xgb_predictor.serializer = csv_serializer # set the serializer type
predictions = xgb_predictor.predict(test_data_array).decode('utf-8') # predict!
predictions_array = np.fromstring(predictions[1:], sep=',') # and turn the prediction into an array
print(predictions_array.shape)
from sklearn.metrics import r2_score
print("R2 score : %.2f" % r2_score(test_data['price'],predictions_array))
It seems that this particular line:
predictions = xgb_predictor.predict(test_data_array).decode('utf-8') # predict!
needs to be rewritten so that it no longer references xgb_predictor but instead references the model location.
I have tried the following:
trained_model = sagemaker.model.Model(
    model_data='s3://{}/{}/output/xgboost-2020-11-10-00-00/output/model.tar.gz'.format(bucket_name, prefix),
    image_uri='XXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest',
    role=role)  # your role here; could be a different name
trained_model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
and then replaced
xgb_predictor.serializer = csv_serializer # set the serializer type
predictions = xgb_predictor.predict(test_data_array).decode('utf-8') # predict!
with
trained_model.serializer = csv_serializer # set the serializer type
predictions = trained_model.predict(test_data_array).decode('utf-8') # predict!
but I get the following error:
AttributeError: 'Model' object has no attribute 'predict'
That's a good question :) I agree, many of the official tutorials tend to show the full train-to-invoke pipeline and don't emphasize enough that each step can be done separately. In your specific case, when you want to invoke an already-deployed endpoint, you can either (A) use the invoke API call in one of the numerous SDKs (for example the CLI or boto3; a boto3 sketch follows the snippet below), or (B) instantiate a predictor with the high-level Python SDK, either the generic sagemaker.predictor.Predictor class or the XGBoost-specific sagemaker.xgboost.model.XGBoostPredictor, as illustrated below:
from sagemaker.xgboost.model import XGBoostPredictor
predictor = XGBoostPredictor(endpoint_name='your-endpoint')
predictor.predict('<payload>')
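For option (A), a boto3 sketch (the endpoint name and CSV payload are placeholders; the payload must match the feature order the model was trained on):
import boto3

runtime = boto3.client('sagemaker-runtime')
response = runtime.invoke_endpoint(EndpointName='your-endpoint',   # placeholder name
                                   ContentType='text/csv',
                                   Body='3,1.5,1180,5650')         # placeholder CSV row
predictions = response['Body'].read().decode('utf-8')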
Similar question: How to use a pretrained model from s3 to predict some data?
Note:
If you want the model.deploy() call to return a predictor, your model must be instantiated with a predictor_cls. This is optional; you can also first deploy a model and then invoke it as a separate step with the above technique (a sketch follows after these notes).
Endpoints create charges even if you don't invoke them; they are charged per uptime. So if you don't need an always-on endpoint, don't hesitate to shut it down to minimize costs.
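Regarding the first note, a minimal sketch of instantiating the Model with a predictor_cls so that deploy() returns a predictor you can call .predict() on (the S3 path is a placeholder):
from sagemaker.model import Model
from sagemaker.xgboost.model import XGBoostPredictor

trained_model = Model(
    model_data='s3://your-bucket/your-prefix/output/model.tar.gz',  # placeholder path
    image_uri='XXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest',
    role=role,
    predictor_cls=XGBoostPredictor)  # deploy() now returns an XGBoostPredictor

xgb_predictor = trained_model.deploy(initial_instance_count=1,
                                     instance_type='ml.m4.xlarge')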
A GLS (or thus also OLS) regression with constraints on parameters can readily be run using statsmodels GLM.fit_constrained() method, as with the code below (or here).
How can I make the GLMresults object resulting from such a statsmodels GLM.fit_constrained() regression picklable, so that the estimation result can be stored for re-use for prediction in a new session anytime later?
The GLMresults object obtained from fit_constrained() and containing the relevant estimation result has its .save() method that would normally readily pickle the object into a file.
This .save() works for the result from a standard (unconstrained) GLM regression, GLM.fit(). However, it doesn't work with the result from GLM.fit_constrained(). Instead, it throws a pickling error, seemingly because the patsy DesignMatrixBuilder is not picklable, which links to the never-resolved issue here. This is at least the case for my Python 3.6.3 (running on Windows).
An example:
import statsmodels
import statsmodels.api as sm
import pandas as pd
# Define example data & constraints:
import numpy as np
df = pd.DataFrame(np.random.randint(0,100,size=(100, 5)), columns=list('ABCDF'))
y = df['A']
X = df[['B','C','D','F']]
constraints = ['B + C + D', 'C - F'] # Add two linear constraints on parameters: B+C+D = 0 & C-F = 0
statsmodels.genmod.families.links.identity()  # stray no-op; sm.GLM below defaults to Gaussian with identity link anyway
OLS_from_GLM = sm.GLM(y, X)
# Unconstrained regression:
result_u = OLS_from_GLM.fit()
result_u.save('myfile_u.pickle') # This works
# Constrained regression - save() fails
result_c = OLS_from_GLM.fit_constrained(constraints)
result_c.save('myfile_c.pickle') # This fails with pickling error (tested in Python 3.6.3 on Windows): "NotImplementedError: Sorry, pickling not yet supported. See https://github.com/pydata/patsy/issues/26 if you want to help."
Is there a way to readily make the result from fit_constrained() picklable, i.e., storable?
Below I suggest a first workaround as an answer; it is trivial and has worked well for me so far. I do not know, however, whether it is truly advisable, whether its risks are large, or whether a preferable alternative solution exists.
I got this to work by simply removing (commenting out) the line
res._results.constraints = lc
in the function definition of fit_constrained() within statsmodels' active generalized_linear_model.py script (in my case in the virtualenv folder \env\Lib\site-packages\statsmodels\genmod\generalized_linear_model.py).
Disabling this line seems to have created no problem for my work: I can now readily save and reload the pickled file and use it to make correct predictions based on the stored estimation; the imposed parameter constraints remain respected, and predictions made with .predict() are unchanged after reloading.
I wonder, though, whether there is any major risk attached to this procedure. I am not familiar with the inner workings of the statsmodels library, or with its GLM.fit_constrained() method in particular, and I reckon it's inadvisable to change anything in a pre-existing module one does not understand. However, it is the only way I can conveniently impose various constraints on my GLM parameters while still being able to save the regression results and readily re-use them for prediction in a later session.
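If editing the library source feels too invasive, a possible alternative (my own sketch, untested beyond examples like the one above) is to drop the unpicklable constraints attribute from the result object before saving:
result_c = OLS_from_GLM.fit_constrained(constraints)

# remove the patsy-based constraints object that blocks pickling; by the same logic
# as the workaround above, it should not be needed for .predict()
if hasattr(result_c._results, 'constraints'):
    del result_c._results.constraints

result_c.save('myfile_c.pickle')  # should now pickle like the unconstrained result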
I have a set of coefficients from a trained model but I don't have access to the model itself or training dataset. I'd like to create an instance of H2OGeneralizedLinearEstimator and set the coefficients manually to use the model for prediction.
The first thing I tried was (this is an example to reproduce the error):
import h2o
from h2o.estimators.glm import H2OGeneralizedLinearEstimator
from h2o.frame import H2OFrame
h2o.init()
# creating some test dataset
test = {"x":[0,1,2], "y":[0,0,1]}
df = H2OFrame(python_obj=test)
glm = H2OGeneralizedLinearEstimator(family='binomial', model_id='logreg')
# setting the coefficients
glm.coef = {'Intercept':0, 'x':1}
# predict
glm.predict(test_data=df)
This throws an error:
H2OResponseError: Server error
water.exceptions.H2OKeyNotFoundArgumentException: Error: Object
'logreg' not found in function: predict for argument: model
I also tried to set glm.params keys based on the keys of a similar trained model:
for key in trained.params.keys():
    glm.params.__setitem__(key, trained.params[key])
but this doesn't populate glm.params (glm.params = {}).
It looks like you want to use the function makeGLMModel.
This is further described in the documentation, and I will repost here for your convenience:
Modifying or Creating a Custom GLM Model
In R and Python, the makeGLMModel call can be used to create an H2O model from given coefficients. It needs a source GLM model trained on the same dataset to extract the dataset information. To make a custom GLM model from R or Python:
R: call h2o.makeGLMModel. This takes a model, a vector of coefficients, and an (optional) decision threshold as parameters.
Python: H2OGeneralizedLinearEstimator.makeGLMModel (static method) takes a model, a dictionary containing coefficients, and an (optional) decision threshold as parameters.
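A rough sketch of the Python route (argument names follow my reading of the h2o-py API and may need adjusting for your version; the throwaway source model only serves to carry the dataset information):
import h2o
from h2o.estimators.glm import H2OGeneralizedLinearEstimator
from h2o.frame import H2OFrame

h2o.init()
df = H2OFrame(python_obj={"x": [0, 1, 2], "y": [0, 0, 1]})
df["y"] = df["y"].asfactor()   # binomial family needs a categorical response

# source model trained on a frame with the same columns
source = H2OGeneralizedLinearEstimator(family='binomial')
source.train(x=['x'], y='y', training_frame=df)

# build a model from the coefficients you already have
custom = H2OGeneralizedLinearEstimator.makeGLMModel(model=source,
                                                    coefs={'Intercept': 0, 'x': 1})
custom.predict(test_data=df)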
R's Caret clearly supports a wide range of models that are accessible by simply changing method:
fit <- caret::train(y = as.vector(y),
                    x = x,
                    method = "BstLm",
                    trControl = ctrl,
                    preProc = c("center", "scale"))
Are most of the models in the caret list below available in Python (through scikit-learn or otherwise)?
https://topepo.github.io/caret/available-models.html
For example, I wanted to implement the Boosted Linear Model (method = 'BstLm') in Python, but couldn't find any way to do so, certainly not one as simple as changing a single variable as in caret.
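For BstLm specifically I am not aware of a drop-in scikit-learn equivalent; a rough approximation (my own sketch, not a port of caret's implementation) would be boosting over linear base learners:
from sklearn.ensemble import AdaBoostRegressor
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# StandardScaler roughly mirrors preProc = c("center", "scale");
# AdaBoost over linear regressors gives a boosted-linear-model flavour
model = make_pipeline(
    StandardScaler(),
    AdaBoostRegressor(LinearRegression(), n_estimators=100, learning_rate=0.1))
# usage: model.fit(x, y); model.predict(x_new)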
I built an XGBoost model using scikit-learn and I am pretty happy with it. As fine-tuning to avoid overfitting, I'd like to ensure monotonicity of some features, but there I start facing some difficulties...
As far as I understand, there is no documentation in scikit-learn about xgboost (which I confess really surprises me, knowing that this situation has lasted for several months). The only documentation I found is directly on http://xgboost.readthedocs.io
On this website, I found out that monotonicity can be enforced using "monotone_constraints" option.
I tried to use it in Scikit-Learn but I got an error message: "TypeError: __init__() got an unexpected keyword argument 'monotone_constraints'"
Do you know a way to do it?
Here is the code I wrote in python (using spyder):
grid = {'learning_rate': 0.01, 'subsample': 0.5, 'colsample_bytree': 0.5,
        'max_depth': 6, 'min_child_weight': 10, 'gamma': 1,
        'monotone_constraints': monotonic_indexes}
# monotonic_indexes is a constraint string, e.g. "(1,-1)"
m07_xgm06 = xgb.XGBClassifier(n_estimators=2000, **grid)
m07_xgm06.fit(X_train_v01_oe, Label_train, early_stopping_rounds=10, eval_metric="logloss",
              eval_set=[(X_test1_v01_oe, Label_test1)])
In order to do this using the xgboost sklearn API, you need to upgrade to xgboost 0.81. They fixed the ability to set parameters controlled via kwargs as part of this PR:
https://github.com/dmlc/xgboost/pull/3791
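After upgrading, the question's original code should work largely as written; a minimal sketch (mirroring the names from the question; the "(1,-1)" constraint string is just an example for a two-feature dataset):
import xgboost as xgb

# with xgboost >= 0.81 the sklearn wrapper forwards extra kwargs such as monotone_constraints
m = xgb.XGBClassifier(n_estimators=2000, monotone_constraints="(1,-1)")
m.fit(X_train_v01_oe, Label_train)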
The XGBoost Scikit-Learn API currently (0.6a2) doesn't support monotone_constraints. You can use the Python API instead; take a look at the example.
This code in the example can be removed:
params_constr['updater'] = "grow_monotone_colmaker,prune"
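For reference, the native-API route might look roughly like this (the DMatrix construction and parameter values are illustrative, not taken from the linked example):
import xgboost as xgb

dtrain = xgb.DMatrix(X_train_v01_oe, label=Label_train)
params_constr = {
    'objective': 'binary:logistic',
    'eta': 0.01,
    'monotone_constraints': '(1,-1)'  # one entry per feature, in column order
}
bst = xgb.train(params_constr, dtrain, num_boost_round=1000)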
How would you expect monotone constraints to work for a general classification problem where the response might have more than 2 levels? All the examples I've seen relating to this functionality are for regression problems. If your classification response only has 2 levels, try switching to regression on an indicator variable and then choose an appropriate score threshold for classification.
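A sketch of that suggested workaround (feature names and the 0.5 threshold are placeholders):
from xgboost import XGBRegressor

reg = XGBRegressor(monotone_constraints='(1,-1)')
reg.fit(X_train_v01_oe, (Label_train == 1).astype(int))   # regress on a 0/1 indicator
scores = reg.predict(X_test1_v01_oe)
predicted_class = (scores >= 0.5).astype(int)              # choose a threshold that suits your metric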
This feature appears to work as of the latest xgboost / scikit-learn, provided that you use an XGBRegressor rather than an XGBClassifier and set monotone_constraints via kwargs.
The syntax is like this:
params = {
    'monotone_constraints': '(-1,0,1)'
}
normalised_weighted_poisson_model = XGBRegressor(**params)
In this example, there is a negative constraint on column 1 in the training data, no constraint on column 2, and a positive constraint on column 3. It is up to you to keep track of which is which - you cannot refer to columns by name, only by position, and you must specify an entry in the constraint tuple for every column in your training data.