How do I save/load a TensorFlow contrib.learn regressor?

I have a tensorflow contrib.learn.DNNRegressor that I have trained as part of the following code snippet:
regressor = tf.contrib.learn.DNNRegressor(
    feature_columns=fc,
    hidden_units=hu_array,
    optimizer=tf.train.AdamOptimizer(learning_rate=0.001),
    enable_centered_bias=False,
    activation_fn=tf.tanh,
    model_dir="./models/my_model/",
)
regressor.fit(x=training_features, y=training_labels, steps=10000)
The trained network performs quite well, and I'd like to use it as a part of some other code, on another machine. I have tried copying over the models/my_model directory, and constructing a new DNNRegressor pointing just at the model_dir, but it requires that I supply feature_columns and hidden_units definitions. Shouldn't that information be available via the snapshots stored in model_dir? Is there a better way to save/recover a trained model which is performing well, to be used as a predictor, without having to separately save the feature_columns and hidden_units?

I came up with something workable. It's not ideal, but it gets the job done. If anyone has a better idea, I am all ears.
I converted my kwargs for DNNRegressor into a dict and used the ** operator. Then I was able to pickle the kwargs dict and reconstruct the DNNRegressor from it. E.g.:
import pickle

reg_args = {'feature_columns': fc, 'hidden_units': hu_array, ...}  # plus the remaining constructor kwargs
regressor = tf.contrib.learn.DNNRegressor(**reg_args)
pickle.dump(reg_args, open('reg_args.pkl', 'wb'))
Later on, I reconstruct via:
reg_args = pickle.load(open('reg_args.pkl', 'rb'))
# On another machine and so my model dir path changed:
reg_args['model_dir'] = NEW_MODEL_DIR
regressor = tf.contrib.learn.DNNRegressor(**reg_args)
It worked well. I'm sure there must be a better way, but for now, if someone is trying to figure out a workaround for tf.contrib.learn, this is one solution.

When training
You call DNNRegressor(..., model_dir=...) and then call the fit() and evaluate() methods.
When testing
You call DNNRegressor(..., model_dir=...) and then call the predict() method. The regressor will find the trained model in model_dir and load the trained model parameters.
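A minimal sketch of the testing path, assuming fc and hu_array are available on the new machine (for example via the pickled kwargs above) and new_features holds the prediction inputs:

regressor = tf.contrib.learn.DNNRegressor(feature_columns=fc,
                                          hidden_units=hu_array,
                                          model_dir="./models/my_model/")  # same dir used during training
predictions = list(regressor.predict(x=new_features))  # predict() may yield results lazily, hence the list()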
Reference
TensorFlow issue #3340

Related

Accessing 'training' attribute in TensorFlow functional (functional API) Model

As the title states I'm wondering how I could access the privileged 'training' argument when I'm using the functional API.
So if I use subclassing, I can write something like:
class MyLayer(tf.keras.layers.Layer):
    def __init__(self):
        ...
        self.BN = tf.keras.layers.BatchNormalization()

    def call(self, inputs, training=None):
        return self.BN(inputs, training=training)
So I can control how my batch norm behaves during training and prediction. But if I want to use the functional API:
input = tf.keras.Input(someshape)
normalized = tf.keras.layers.BatchNormalization()(input)
model = tf.keras.Model(inputs=input, outputs=normalized)
Now I can't really set the privileged 'training' argument for my batch norm anymore. I love the functional API; it's just really so much fun to use, but having to build around this is quite often a dealbreaker. I feel like I must be missing some important idea on how one would solve this.
I'm aware that I could create a tf.keras.Input which could hold the 'training' argument. But this would change it from a keyword arg to an element of a list, which creates very inconsistent code. Any smarter solution to this?
Edit: I should make it clear that I'm looking for a general idea that can be used for the 'training' arg, not just for tackling BatchNormalization in particular.
When you instantiate the model with model = tf.keras.Model(inputs=input, outputs=normalized), the model has not yet been built. You will need to call the build method, which happens either explicitly (usually when you do everything by hand using the gradient tape) or implicitly when you first call the fit method; at that point, the weights are initialized. Now, if you use the fit method or call your model with output_tensors = mymodel(input_tensors, training=True), the training flag is set to True; conversely, if you use the predict method or call output_tensors = mymodel(input_tensors, training=False), it is set to False (which is obvious if you call the model directly). Keras then propagates this flag down to any layer, such as BatchNormalization, whose call signature accepts a training argument.
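A minimal sketch of this behavior, assuming some input array data of the right shape:

inputs = tf.keras.Input(shape=(16,))
x = tf.keras.layers.BatchNormalization()(inputs)  # no training arg here: it follows the model-level flag
model = tf.keras.Model(inputs=inputs, outputs=x)

y_train = model(data, training=True)   # batch statistics are used and updated
y_infer = model(data, training=False)  # stored moving averages are used

If you instead want to pin the behavior at graph-construction time, BatchNormalization (like other layers) also accepts training when it is called on a tensor, e.g. tf.keras.layers.BatchNormalization()(inputs, training=False).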

How to invoke Sagemaker XGBoost endpoint post model creation?

I have been following along with this really helpful XGBoost tutorial on Medium (code used towards bottom of article): https://medium.com/analytics-vidhya/random-forest-and-xgboost-on-amazon-sagemaker-and-aws-lambda-29abd9467795.
To date, I've been able to get data appropriately formatted for ML purposes, a model created based on training data, and then test data fed through the model to give useful results.
Whenever I leave and come back to work more on the model or feed in new test data, however, I find I need to re-run all the model creation steps in order to make any further predictions. Instead, I would like to just call my already-created model endpoint based on the Image_URI and feed in new data.
Current steps performed:
Model Training
xgb = sagemaker.estimator.Estimator(containers[my_region],
                                    role,
                                    train_instance_count=1,
                                    train_instance_type='ml.m4.xlarge',
                                    output_path='s3://{}/{}/output'.format(bucket_name, prefix),
                                    sagemaker_session=sess)
xgb.set_hyperparameters(eta=0.06,
                        alpha=0.8,
                        lambda_bias=0.8,
                        gamma=50,
                        min_child_weight=6,
                        subsample=0.5,
                        silent=0,
                        early_stopping_rounds=5,
                        objective='reg:linear',
                        num_round=1000)
xgb.fit({'train': s3_input_train})
xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
Evaluation
test_data_array = test_data.drop(['price', 'id', 'sqft_above', 'date'], axis=1).values  # load the data into an array
xgb_predictor.serializer = csv_serializer  # set the serializer type
predictions = xgb_predictor.predict(test_data_array).decode('utf-8')  # predict!
predictions_array = np.fromstring(predictions[1:], sep=',')  # and turn the prediction into an array
print(predictions_array.shape)

from sklearn.metrics import r2_score
print("R2 score : %.2f" % r2_score(test_data['price'], predictions_array))
It seems that this particular line:
predictions = xgb_predictor.predict(test_data_array).decode('utf-8') # predict!
needs to be rewritten so that it does not reference xgb_predictor but instead references the model location.
I have tried the following
trained_model = sagemaker.model.Model(
    model_data='s3://{}/{}/output/xgboost-2020-11-10-00-00/output/model.tar.gz'.format(bucket_name, prefix),
    image_uri='XXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest',
    role=role)  # your role here; could be a different name
trained_model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
and then replaced
xgb_predictor.serializer = csv_serializer # set the serializer type
predictions = xgb_predictor.predict(test_data_array).decode('utf-8') # predict!
with
trained_model.serializer = csv_serializer # set the serializer type
predictions = trained_model.predict(test_data_array).decode('utf-8') # predict!
but I get the following error:
AttributeError: 'Model' object has no attribute 'predict'
That's a good question :) I agree: many of the official tutorials tend to show the full train-to-invoke pipeline and don't emphasize enough that each step can be done separately. In your specific case, when you want to invoke an already-deployed endpoint, you can either (A) use the invoke API call in one of the numerous SDKs (for example the CLI or boto3; see the sketch after the snippet below), or (B) instantiate a predictor with the high-level Python SDK, either the generic sagemaker.predictor.Predictor class or its XGBoost-specific child, sagemaker.xgboost.model.XGBoostPredictor, as illustrated below:
from sagemaker.xgboost.model import XGBoostPredictor
predictor = XGBoostPredictor(endpoint_name='your-endpoint')
predictor.predict('<payload>')
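For option (A), a minimal boto3 sketch might look like the following (the endpoint name and CSV payload are placeholders):

import boto3

runtime = boto3.client('sagemaker-runtime')
response = runtime.invoke_endpoint(
    EndpointName='your-endpoint',  # placeholder endpoint name
    ContentType='text/csv',        # matches the CSV serializer used above
    Body='0.5,1.2,3.4',            # placeholder CSV payload
)
predictions = response['Body'].read().decode('utf-8')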
A similar question: How to use a pretrained model from s3 to predict some data?
Note:
If you want the model.deploy() call to return a predictor, your model must be instantiated with a predictor_cls. This is optional; you can also first deploy a model and then invoke it as a separate step with the technique above (see the sketch below).
Endpoints create charges even if you don't invoke them; they are billed for uptime. So if you don't need an always-on endpoint, don't hesitate to shut it down to minimize costs.
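A hedged sketch of the predictor_cls note, reusing the placeholder paths from the question:

from sagemaker.xgboost.model import XGBoostPredictor

trained_model = sagemaker.model.Model(
    model_data='s3://{}/{}/output/xgboost-2020-11-10-00-00/output/model.tar.gz'.format(bucket_name, prefix),
    image_uri='XXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest',
    role=role,
    predictor_cls=XGBoostPredictor,  # makes deploy() return a predictor
)
predictor = trained_model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
predictor.predict('<payload>')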

What does model.compile() do in keras tensorflow?

According to keras.io:
Once the model is created, you can config the model with losses and metrics with model.compile().
But this explanation does not provide enough information about what exactly compiling model does.
"Configures the model for training." (from the documentation)
Personally, I wouldn't call it compile, because what it does has nothing to do with compilation in the computer-science sense, and it is very confusing/overwhelming to think about machine learning and compilation at the same time.
It's just a method that does configuration:
It just sets the arguments you pass it: optimizer, loss function, metrics, eager execution. You can run it multiple times; it will just overwrite the settings you set previously.
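A small sketch of that overwrite behavior (model here is assumed to be any tf.keras model):

model.compile(optimizer='adam', loss='mse', metrics=['mae'])
model.compile(optimizer='sgd', loss='mae')  # replaces the earlier optimizer, loss, and metrics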
My suggestion to developers of TensorFlow would be to rename it to configure in the short term, and perhaps in the future (not that important), move to having 1 setter (or use the factory/ builder pattern) for each configuration argument.
Here's the code for it (excerpted from tf.keras's Model.compile):
base_layer.keras_api_gauge.get_cell('compile').set(True)
with self.distribute_strategy.scope():
    if 'experimental_steps_per_execution' in kwargs:
        logging.warn('The argument `steps_per_execution` is no longer '
                     'experimental. Pass `steps_per_execution` instead of '
                     '`experimental_steps_per_execution`.')
        if not steps_per_execution:
            steps_per_execution = kwargs.pop('experimental_steps_per_execution')

    self._validate_compile(optimizer, metrics, **kwargs)
    self._run_eagerly = run_eagerly

    self.optimizer = self._get_optimizer(optimizer)
    self.compiled_loss = compile_utils.LossesContainer(
        loss, loss_weights, output_names=self.output_names)
    self.compiled_metrics = compile_utils.MetricsContainer(
        metrics, weighted_metrics, output_names=self.output_names)

    self._configure_steps_per_execution(steps_per_execution or 1)

    # Initializes attrs that are reset each time `compile` is called.
    self._reset_compile_cache()
    self._is_compiled = True

    self.loss = loss or {}  # Backwards compat.
model.compile is related to training your model: it is where you configure how the weights will be optimized so that accuracy increases during training. The optimizer is just one of its input parameters.
model.compile(
    optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics=['acc']
)
These are the main inputs. You can also find more details in the TensorFlow documentation at the link below:
https://www.tensorflow.org/api_docs/python/tf/keras/Model#compile

What is the need to return a function object while creating a data set using tensorflow

I am new to Machine Learning and I am trying to create a Machine Learning model using the TensorFlow API, following the tutorial in the TensorFlow documentation from here
But I am having trouble understanding this part of the code
def make_input_fn(data_df, label_df, num_epochs=10, shuffle=True, batch_size=32):
    def input_function():  # inner function; this will be returned
        ds = tf.data.Dataset.from_tensor_slices((dict(data_df), label_df))  # create tf.data.Dataset object with data and its label
        if shuffle:
            ds = ds.shuffle(1000)  # randomize order of data
        ds = ds.batch(batch_size).repeat(num_epochs)  # split dataset into batches of 32 and repeat process for number of epochs
        return ds  # return a batch of the dataset
    return input_function  # return a function object for use
Then storing the output of the function in a variable
train_input_fn = make_input_fn(dftrain, y_train)
And at last training the model with the data set
linear_est.train(train_input_fn)
I fail to see what we achieve by just returning the inner function from make_input_fn instead of returning our dataset directly and passing it to train the model.
I am a beginner in Python and have just started to learn Machine Learning, and I have been unable to find a proper answer to my question, so if anyone can kindly explain it in a beginner-friendly way, I would be much obliged.
I fail to see what we achieve by just returning the inner function from make_input_fn instead of returning our dataset directly and passing it to train the model.
In Python programming, this pattern is a closure (it is often described, somewhat loosely, as currying): the inner function captures, or "closes over", the arguments of the outer function, so a setup that needs several arguments can be handed around as a zero-argument function and evaluated later.
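A tiny sketch of the idea, independent of TensorFlow:

def make_adder(n):       # the outer function takes the configuration
    def add(x):          # the inner function closes over n
        return x + n
    return add           # return the function itself, not a result

add_five = make_adder(5)
print(add_five(3))       # 8 -- the captured n is used when add_five is finally called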
In TensorFlow, based on the documentation (https://www.tensorflow.org/api_docs/python/tf/estimator/LinearClassifier#train):
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
The train method of the estimator expects a parameter input_fn. The reason is that every time you call Estimator.train(), it creates a new graph by invoking the input_fn and the model_fn and connecting them together; the dataset therefore has to be constructed inside that graph, by calling the function. If you supply a tensor or a dataset directly, it will lead to errors.
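A hedged sketch of what this looks like in use, with dftrain and y_train as in the question:

train_input_fn = make_input_fn(dftrain, y_train)  # the factory returns a zero-argument function
linear_est.train(train_input_fn)                  # the estimator calls it to build the dataset

# An inline lambda closing over the data expresses the same idea:
linear_est.train(lambda: tf.data.Dataset.from_tensor_slices((dict(dftrain), y_train)).batch(32))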

Keras: How to save models or weights?

I am sorry if this question seems pretty straightforward. But reading the Keras save and restore help page:
https://www.tensorflow.org/beta/tutorials/keras/save_and_restore_models
I do not understand how to use ModelCheckpoint for saving during training. The help page mentions it should give 3 files, but I see only one, MODEL.ckpt.
Here is my code:
checkpoint_dir = FolderName + "/tmp/model.ckpt"
cp_callback = k.callbacks.ModelCheckpoint(checkpoint_dir, verbose=1, save_weights_only=True)

parallel_model.compile(optimizer=tf.keras.optimizers.Adam(lr=learning_rate),
                       loss=my_cost_MSE, metrics=['accuracy'])

parallel_model.fit(image, annotation, epochs=epoch,
                   batch_size=batch_size, steps_per_epoch=10,
                   validation_data=(image_val, annotation_val),
                   validation_steps=num_batch_val, callbacks=callbacks_list)
Also, when I want to load the weights after training with:
model = k.models.load_model(file_checkpoint)
I get the error:
"raise ValueError('Unknown ' + printable_module_name + ':' + object_name)
ValueError: Unknown loss function:my_cost_MSE"
my_cost_MSE is the cost function that is used in the training.
First of all, it looks like you are using the tf.keras (from tensorflow) implementation rather than keras (from the keras-team/keras repo). In this case, as stated in the tf.keras guide :
When saving a model's weights, tf.keras defaults to the checkpoint
format. Pass save_format='h5' to use HDF5.
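A minimal sketch of the two weight formats (the file names are illustrative):

model.save_weights('weights_ckpt')                  # TensorFlow checkpoint format (several files)
model.save_weights('weights.h5', save_format='h5')  # a single HDF5 file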
On the other hand, note that adding the ModelCheckpoint callback is usually roughly equivalent to calling model.save(...) at the end of each epoch, and that's why you should expect three files to be saved (according to the checkpoint format).
The reason it's not doing so is that, by using the option save_weights_only=True, you are saving just the weights, which is roughly equivalent to replacing the call to model.save with model.save_weights at the end of each epoch. Hence, the only file being saved is the one with the weights.
From here, you can proceed in two different ways:
Storing just the weights
You need your model (the structure, let's say) to be defined beforehand, and then call model.load_weights instead of keras.models.load_model:
model = MyModel(...) # Your model definition as used in training
model.load_weights(file_checkpoint)
Note that in this case, you won't have problems with custom definitions (my_cost_MSE) since you are just loading model weights.
Storing the whole model
Another way to proceed is to store the whole model and load it accordingly:
cp_callback = k.callbacks.ModelCheckpoint(
    checkpoint_dir, verbose=1,
    save_weights_only=False
)
parallel_model.compile(
    optimizer=tf.keras.optimizers.Adam(lr=learning_rate),
    loss=my_cost_MSE,
    metrics=['accuracy']
)
parallel_model.fit(..., callbacks=[cp_callback])
Then you could load it by:
model = k.models.load_model(file_checkpoint, custom_objects={"my_cost_MSE": my_cost_MSE})
Note that in this latter case, you need to specify custom_objects since its definition is needed to deserialize the model.
Keras has a save command. It saves all the details needed to rebuild the model.
(from the keras docs)
from keras.models import load_model

model.save('my_model.h5')  # creates an HDF5 file 'my_model.h5'
del model  # deletes the existing model

# returns an identical compiled model
model = load_model('my_model.h5')
