How to know whether the trained model was loaded correctly? - python

I use PyTorch Lightning for model training, during which I use ModelCheckpoint to save checkpoints. Finally, I would like to verify whether the model is loaded correctly. Let me know if you require further information.
checkpoint_callback = ModelCheckpoint(
    filename='tb1000_{epoch: 02d}-{step}',
    monitor='val/acc#1',
    save_top_k=5,
    mode='max')
wandb_logger = pl.loggers.wandb.WandbLogger(
    name=run_name,
    project=args.project,
    entity=args.entity,
    offline=args.offline,
    log_model='all')
model = BYOL(**args.__dict__, num_classes=dm.num_classes)
trainer = pl.Trainer.from_argparse_args(args,
    logger=wandb_logger, callbacks=[checkpoint_callback])
trainer.fit(model, dm)
# Loading and testing
model_test = BYOL(**args.__dict__, num_classes=dm.num_classes)
path = "/tb100_epoch= 819-step=39359.ckpt"
model_test.load_from_checkpoint(path)

load_from_checkpoint() does not load weights in place; it returns a new model with the trained weights, so you need to assign its return value:
model_test = model_test.load_from_checkpoint(path)
or
model_test = BYOL.load_from_checkpoint(path)
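One way to confirm the weights were actually restored is to compare the loaded model's state_dict against the checkpoint file itself. A minimal sketch, assuming a standard Lightning checkpoint that stores its weights under the 'state_dict' key (path and BYOL as in the question):

import torch

# Load the checkpoint into a fresh model (classmethod, returns a new instance)
model_test = BYOL.load_from_checkpoint(path)

# Compare the restored weights against the raw checkpoint contents
ckpt_state = torch.load(path, map_location='cpu')['state_dict']
for name, tensor in model_test.state_dict().items():
    if name in ckpt_state:
        assert torch.equal(tensor.cpu(), ckpt_state[name].cpu()), f"Mismatch in {name}"
print("All checkpoint tensors match the loaded model.")

You can also run a validation pass on the loaded model (e.g. trainer.validate(model_test, dm) in recent Lightning versions) and check that the monitored metric matches the value recorded when the checkpoint was saved.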

Related

TensorFlow Extended | Trainer Not Warm Starting With GenericExecutor & Keras Model

I'm presently trying to get a Trainer component of a TFX pipeline to warm-start from a previous run of the same pipeline. The use case is:
Run the pipeline once, produce a model.
As new data comes in, train the existing model with the new data.
I am aware that the ResolverNode component is designed for this purpose; you can see how I use it below:
# detect the previously trained model
latest_model_resolver = ResolverNode(
    instance_name='latest_model_resolver',
    resolver_class=latest_artifacts_resolver.LatestArtifactsResolver,
    latest_model=Channel(type=Model))
context.run(latest_model_resolver)

# set prior model as base_model
train_file = 'tfx_modules/recommender_train.py'
trainer = Trainer(
    module_file=os.path.abspath(train_file),
    custom_executor_spec=executor_spec.ExecutorClassSpec(GenericExecutor),
    transformed_examples=transform.outputs['transformed_examples'],
    transform_graph=transform.outputs['transform_graph'],
    schema=schema_gen.outputs['schema'],
    train_args=trainer_pb2.TrainArgs(num_steps=10000),
    eval_args=trainer_pb2.EvalArgs(num_steps=5000),
    base_model=latest_model_resolver.outputs['latest_model'])
The components above run successfully, and the ResolverNode is able to detect the latest model from prior pipeline runs. No error is thrown; however, when running context.run(trainer), the loss basically starts where it did the first time. The first run finishes with a training loss of ~0.1, yet the second run (with the supposed warm start) restarts at ~18.2.
This leads me to believe all weights were re-initialized, which I don't believe should have occurred. Below are the relevant model construction functions:
def build_keras_model():
    """build keras model"""
    embedding_max_values = load(open(os.path.abspath('tfx-example/user_artifacts/embedding_max_dict.pkl'), 'rb'))
    embedding_dimensions = dict([(key, 20) for key in embedding_max_values.keys()])
    embedding_pairs = [recommender.EmbeddingPair(embedding_name=feature,
                                                 embedding_dimension=embedding_dimensions[feature],
                                                 embedding_max_val=embedding_max_values[feature])
                       for feature in recommender_constants.univalent_features]
    numeric_inputs = []
    for num_feature in recommender_constants.numeric_features:
        numeric_inputs.append(keras.Input(shape=(1,), name=num_feature))
    input_layers = numeric_inputs + [elem for pair in embedding_pairs for elem in pair.input_layers]
    pre_concat_layers = numeric_inputs + [elem for pair in embedding_pairs for elem in pair.embedding_layers]
    concat = keras.layers.Concatenate()(pre_concat_layers) if len(pre_concat_layers) > 1 else pre_concat_layers[0]
    layer_1 = keras.layers.Dense(64, activation='relu', name='layer1')(concat)
    output = keras.layers.Dense(1, kernel_initializer='lecun_uniform', name='out')(layer_1)
    model = keras.models.Model(input_layers, outputs=output)
    model.compile(optimizer='adam', loss='mean_squared_error')
    return model

def run_fn(fn_args: TrainerFnArgs):
    """function for the Trainer component"""
    tf_transform_output = tft.TFTransformOutput(fn_args.transform_output)
    train_dataset = _input_fn(fn_args.train_files, fn_args.data_accessor,
                              tf_transform_output, 40)
    eval_dataset = _input_fn(fn_args.eval_files, fn_args.data_accessor,
                             tf_transform_output, 40)
    model = build_keras_model()
    tensorboard_callback = tf.keras.callbacks.TensorBoard(
        log_dir=fn_args.model_run_dir, update_freq='epoch', histogram_freq=1,
        write_images=True)
    model.fit(train_dataset, steps_per_epoch=fn_args.train_steps, validation_data=eval_dataset,
              validation_steps=fn_args.eval_steps, callbacks=[tensorboard_callback],
              epochs=5)
    signatures = {
        'serving_default':
            _get_serve_tf_examples_fn(model, tf_transform_output).get_concrete_function(tf.TensorSpec(
                shape=[None],
                dtype=tf.string,
                name='examples'))
    }
    model.save(fn_args.serving_model_dir, save_format='tf', signatures=signatures)
To research the problem, I have perused:
Warm Start Example From TFX
https://github.com/tensorflow/tfx/blob/master/tfx/examples/chicago_taxi_pipeline/taxi_pipeline_warmstart.py
However, this guide uses the Estimator component instead of the Keras components. That component has a warm_start_from initialization parameter which I couldn't find for the Keras equivalent.
I suspect one of two things:
Either the warm-start functionality is only available for Estimator components and won't take effect even if base_model is set for Keras components,
or I am somehow telling the model to re-initialize its weights even after successfully loading the prior model; in that case I would love a pointer as to where that's happening.
Any assistance would be great! Much thanks.
With Keras models you have to load the model first using the base model path, then you can continue training from there instead of building a new model.
Your Trainer component looks correct, but in run_fn do the following instead:
def run_fn(fn_args: FnArgs):
    model = tf.keras.models.load_model(fn_args.base_model)
    model.fit(train_dataset, steps_per_epoch=fn_args.train_steps, validation_data=eval_dataset,
              validation_steps=fn_args.eval_steps, callbacks=[tensorboard_callback],
              epochs=5)
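Note that fn_args.base_model is only populated when the ResolverNode actually found a prior model, so on the very first pipeline run it will be None. A minimal sketch of a warm-starting run_fn under that assumption, reusing build_keras_model and the dataset setup from the question:

def run_fn(fn_args):
    # ... build train_dataset / eval_dataset exactly as in the original run_fn ...
    if fn_args.base_model is not None:
        # Warm start: restore architecture, weights, and optimizer state from the prior run
        model = tf.keras.models.load_model(fn_args.base_model)
    else:
        # First run: no prior model available, so build from scratch
        model = build_keras_model()
    model.fit(train_dataset, steps_per_epoch=fn_args.train_steps,
              validation_data=eval_dataset, validation_steps=fn_args.eval_steps,
              epochs=5)
    model.save(fn_args.serving_model_dir, save_format='tf')

If the prior model was saved with custom serving signatures, tf.keras.models.load_model may additionally need custom_objects or compile=False to load cleanly (at the cost of the optimizer state in the latter case).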

Loading a converted pytorch model in huggingface transformers properly

I converted a pre-trained tf model to pytorch using the following function.
def convert_tf_checkpoint_to_pytorch(*, tf_checkpoint_path, albert_config_file, pytorch_dump_path):
    # Initialise PyTorch model
    config = AlbertConfig.from_json_file(albert_config_file)
    print("Building PyTorch model from configuration: {}".format(str(config)))
    model = AlbertForPreTraining(config)

    # Load weights from tf checkpoint
    load_tf_weights_in_albert(model, config, tf_checkpoint_path)

    # Save pytorch-model
    print("Save PyTorch model to {}".format(pytorch_dump_path))
    torch.save(model.state_dict(), pytorch_dump_path)
I am loading the converted model and encoding sentences in the following way:
def vectorize_sentence(text):
    albert_tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
    config = AlbertConfig.from_pretrained(config_path, output_hidden_states=True)
    model = TFAlbertModel.from_pretrained(pytorch_dir, config=config, from_pt=True)
    e = albert_tokenizer.encode(text, max_length=512)
    model_input = tf.constant(e)[None, :]  # Batch size 1
    output = model(model_input)
    v = [0] * 768
    # generate sentence vectors by averaging the word vectors
    for i in range(1, len(model_input[0]) - 1):
        v = v + output[0][0][i].numpy()
    vector = v / len(model_input[0])
    return vector
However while loading the model, a warning comes up:
Some weights or buffers of the PyTorch model TFAlbertModel were not
initialized from the TF 2.0 model and are newly initialized:
['predictions.LayerNorm.bias', 'predictions.dense.weight',
'predictions.LayerNorm.weight', 'sop_classifier.classifier.bias',
'predictions.dense.bias', 'sop_classifier.classifier.weight',
'predictions.decoder.bias', 'predictions.bias',
'predictions.decoder.weight'] You should probably TRAIN this model on
a down-stream task to be able to use it for predictions and inference.
Can anyone tell me if I am doing anything wrong? What does the warning mean? I saw issue #5588 on the Transformers GitHub repo, but I don't know whether my issue is the same as that one.
I think you could try using
model = AlbertModel.from_pretrained
instead of
model = TFAlbertModel.from_pretrained
in the vectorize_sentence definition.
AlbertModel is the class for the PyTorch-format model, and TFAlbertModel is the class for the TensorFlow-format model.
I'm not sure exactly what load_tf_weights_in_albert() does, but I think that once you have done that, your model is in PyTorch format.
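A minimal sketch of that change, assuming pytorch_dir is a directory containing the converted pytorch_model.bin, and that config_path and text are the same variables as in the question:

import torch
from transformers import AlbertConfig, AlbertModel, AlbertTokenizer

albert_tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
config = AlbertConfig.from_pretrained(config_path, output_hidden_states=True)

# Load the converted checkpoint with the PyTorch class; no from_pt flag is needed
model = AlbertModel.from_pretrained(pytorch_dir, config=config)
model.eval()

input_ids = albert_tokenizer.encode(text, max_length=512, return_tensors="pt")
with torch.no_grad():
    output = model(input_ids)
last_hidden_state = output[0]  # shape: (1, seq_len, hidden_size)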

Saving normalization values in Keras model

I have a Keras model for which I would like to save the normalization values in the model object itself for easier portability.
I'm using sklearn's StandardScaler() to normalize my data, so I simply want to save the mean_ and var_ attributes from the scaler to the model, save the model, and when I reload the model have access to these attributes.
Currently, when I reload the model, the attributes I added are not there. What is the correct way of doing this?
Code:
# Normalize data
scaler = StandardScaler()
scaler.fit(X_train)
...

# Create model
model = Sequential(...)

# Compile and train
...

# Save model with normalization mean and var
model.normalization_mean = scaler.mean_
model.normalization_var = scaler.var_
keras.models.save_model(model=model,
                        filepath=...)

# Reload model
model = keras.models.load_model(filepath=...)
hasattr(model, 'normalization_mean')  # False
hasattr(model, 'normalization_var')  # False
This is one possibility: you can subclass the model as shown below and attach external objects as non-trainable variables.
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense

X = np.random.uniform(0, 1, (100, 10))
y = np.random.uniform(0, 1, 100)

class MyModel(tf.keras.Model):

    def __init__(self):
        super(MyModel, self).__init__()
        self.dense1 = Dense(32)
        self.dense2 = Dense(1)

    def call(self, inputs):
        x = self.dense1(inputs)
        return self.dense2(x)

model = MyModel()
model.compile('adam', 'mse')
model.fit(X, y)

model._normalization_mean = tf.Variable([111.], trainable=False)
model._normalization_var = tf.Variable([222.], trainable=False)

model.save('abc.tf', save_format='tf')
model = tf.keras.models.load_model(filepath='abc.tf')
After loading the model you can call:
model._normalization_mean.numpy()
# array([111.], dtype=float32)
Here is the running notebook. To save and load a subclassed model you can refer to this.
I just came across Keras preprocessing layers, whose purpose seems to be exactly what you're describing.
The Keras preprocessing layers API allows developers to build Keras-native input processing pipelines. These input processing pipelines can be used as independent preprocessing code in non-Keras workflows, combined directly with Keras models, and exported as part of a Keras SavedModel.
With Keras preprocessing layers, you can build and export models that are truly end-to-end: models that accept raw images or raw structured data as input; models that handle feature normalization or feature value indexing on their own.
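As a sketch of that approach (assuming TF 2.x with the experimental preprocessing API; in newer releases the layer lives at tf.keras.layers.Normalization), you can adapt a Normalization layer to your training data and make it the first layer of the model, so the mean and variance are stored inside the SavedModel:

import numpy as np
import tensorflow as tf

X_train = np.random.uniform(0, 1, (100, 10)).astype('float32')
y_train = np.random.uniform(0, 1, 100).astype('float32')

# The layer learns the per-feature mean and variance from the data
normalizer = tf.keras.layers.experimental.preprocessing.Normalization()
normalizer.adapt(X_train)

model = tf.keras.Sequential([
    normalizer,  # normalization travels with the model
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dense(1),
])
model.compile('adam', 'mse')
model.fit(X_train, y_train, epochs=1)

model.save('model_with_norm.tf', save_format='tf')
reloaded = tf.keras.models.load_model('model_with_norm.tf')  # mean/variance are restored too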

Use an already trained neural network?

I have an already trained neural network consisting of the files NNbiases_b1.csv, NNbiases_out.csv, NNweights_h1.csv and NNweights_out.csv. The input and output layer sizes are known too.
Now I'm looking for a Python script that uses this neural network, i.e. produces output from input data using the trained network.
But whenever I google for a related script, I only find how-tos and explanations about training a network!
So my question: given an already trained network with the data/files above, how can I use this neural network?
Thanks!
I think you need to reconstruct your model's architecture, and then manually set the weights of each layer with something like this:
all_weights = []
NNweights_h1 = [...]  # load your csv of weights
NNbiases_b1 = [...]  # load your csv of biases
all_weights.append(NNweights_h1)
all_weights.append(NNbiases_b1)
model.layers[i].set_weights(all_weights)
And do that for all your layers.
Update after clarifications
In order to use your model (dummy example):
Reconstruct the architecture:
def model(model_input):
    x = Dense(12, input_dim=8, activation='relu')(model_input)
    x = Dense(1, activation='sigmoid')(x)
    model = Model(model_input, x, name='Your_model')
    return model
Instantiate it:
X_test = [...]  # load your data
input_shape = [...]  # your test data shape
model_input = Input(shape=input_shape)
model = model(model_input)
Manually set the weights using the code at the beginning of the answer.
Use this model to predict your data:
prediction = model.predict(X_test) #get the predictions of your model
I hope this will help you!
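Putting the pieces together, a minimal sketch of loading the CSV files with NumPy and assigning them layer by layer (the layer sizes, CSV layout, and delimiter are assumptions about how the files were exported, so check the shapes against your original network):

import numpy as np
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model

# Load the exported parameters (shapes must match the original network)
NNweights_h1 = np.loadtxt('NNweights_h1.csv', delimiter=',')    # e.g. (n_inputs, n_hidden)
NNbiases_b1 = np.loadtxt('NNbiases_b1.csv', delimiter=',')      # e.g. (n_hidden,)
NNweights_out = np.loadtxt('NNweights_out.csv', delimiter=',')  # e.g. (n_hidden, n_outputs)
NNbiases_out = np.loadtxt('NNbiases_out.csv', delimiter=',')    # e.g. (n_outputs,)

n_inputs, n_hidden = NNweights_h1.shape

# Rebuild the architecture with the known layer sizes
model_input = Input(shape=(n_inputs,))
x = Dense(n_hidden, activation='relu')(model_input)
output = Dense(NNbiases_out.size, activation='sigmoid')(x)
model = Model(model_input, output)

# Set the weights manually: Keras expects [kernel, bias] for each Dense layer
model.layers[1].set_weights([NNweights_h1, NNbiases_b1])
model.layers[2].set_weights([NNweights_out.reshape(n_hidden, -1), NNbiases_out.reshape(-1)])

prediction = model.predict(np.zeros((1, n_inputs)))  # replace with your real input data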

Is there some way to save best model only with tensorflow.estimator.train_and_evaluate()?

I am trying to retrain a TF Object Detection API model from a checkpoint, using an existing .config file for the training pipeline and the tf.estimator.train_and_evaluate() method, like in models/research/object_detection/model_main.py. It saves checkpoints every N steps or every N seconds.
But I want to save only the single best model, like in Keras.
Is there some way to do this with a TF Object Detection API model? Maybe some option/callback for the Estimator's train method, or some way to use the Detection API with Keras?
I have been using https://github.com/bluecamel/best_checkpoint_copier which works well for me.
Example:
best_copier = BestCheckpointCopier(
    name='best',  # directory within model directory to copy checkpoints to
    checkpoints_to_keep=10,  # number of checkpoints to keep
    score_metric='metrics/total_loss',  # metric to use to determine "best"
    compare_fn=lambda x, y: x.score < y.score,  # comparison function used to determine "best" checkpoint (x is the current checkpoint; y is the previously copied checkpoint with the highest/worst score)
    sort_key_fn=lambda x: x.score,
    sort_reverse=False)  # sort order when discarding excess checkpoints
Pass it to your eval_spec:
eval_spec = tf.estimator.EvalSpec(
    ...
    exporters=best_copier,
    ...)
You can try using BestExporter. As far as I know, it's the only option for what you're trying to do.
exporter = tf.estimator.BestExporter(
    compare_fn=_loss_smaller,
    exports_to_keep=5)
eval_spec = tf.estimator.EvalSpec(
    input_fn,
    steps,
    exporters=exporter)
https://www.tensorflow.org/api_docs/python/tf/estimator/BestExporter
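compare_fn=_loss_smaller is not defined in the snippet above; it is essentially the default comparison BestExporter ships with. A minimal sketch of such a function (each argument is a dict of eval metrics, and a 'loss' key is assumed to be present):

def _loss_smaller(best_eval_result, current_eval_result):
    # Keep the checkpoint whose evaluation loss is smaller
    metric = 'loss'
    if not best_eval_result or metric not in best_eval_result:
        raise ValueError('best_eval_result cannot be empty or has no loss.')
    if not current_eval_result or metric not in current_eval_result:
        raise ValueError('current_eval_result cannot be empty or has no loss.')
    return best_eval_result[metric] > current_eval_result[metric]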
If you are training using the tensorflow/models repo, the create_train_and_eval_specs function in models/research/object_detection/model_lib.py can be modified to include the best exporter:
final_exporter = tf.estimator.FinalExporter(
    name=final_exporter_name, serving_input_receiver_fn=predict_input_fn)
best_exporter = tf.estimator.BestExporter(
    name="best_exporter",
    serving_input_receiver_fn=predict_input_fn,
    event_file_pattern='eval_eval/*.tfevents.*',
    exports_to_keep=5)
exporters = [final_exporter, best_exporter]

train_spec = tf.estimator.TrainSpec(
    input_fn=train_input_fn, max_steps=train_steps)

eval_specs = [
    tf.estimator.EvalSpec(
        name=eval_spec_name,
        input_fn=eval_input_fn,
        steps=eval_steps,
        exporters=exporters)
]
