I am using Chainer to train (fine-tune) a ResNet model and then use the checkpoint for evaluation. The checkpoint is an npz file with the following structure:
When I load the model for evaluation with chainer.serializers.load_npz(args.load, model) (where model is the standard ResNet), I get the following error: KeyError: 'rpn/loc/b is not a file in the archive'.
I think the problem is that the model expects parameter names without the 'updater/optimizer/faster/extractor' prefix that the arrays in the npz have.
How can I rename the arrays in the resulting npz to remove the prefix, or what else should I do to fix the problem?
Thank you!
When you load a snapshot generated by the Snapshot extension, you need to do it through the trainer:
chainer.serializers.load_npz(args.load, trainer)
The trainer will automatically load the state of the updater, the optimizer and the model.
You can also load only the model manually, by accessing the corresponding field in the snapshot and passing it to the model's serialize method:
import numpy
import chainer

npz_data = numpy.load(args.load)
snap = chainer.serializers.NpzDeserializer(npz_data)
model.serialize(snap['updater']['model:main'])
This should load only the weights of the model.
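If your Chainer version supports it, load_npz also takes a path argument, so you can point it directly at the model sub-tree of the trainer snapshot without building a deserializer yourself. A minimal sketch, assuming the default Trainer layout puts the model under 'updater/model:main/' (verify this prefix against the keys actually present in your snapshot):
import chainer

# Load only the entries whose names start with the given prefix into `model`.
# The prefix 'updater/model:main/' is an assumption about the snapshot layout.
chainer.serializers.load_npz(args.load, model, path='updater/model:main/')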
I am trying to jit-trace and save my PyTorch model from the segmentation_models_pytorch package, but I am getting an error: "Could not export Python function call 'SwishImplementation'. Remove calls to Python functions before export. Did you forget to add @script or @script_method annotation? If this is a nn.ModuleList, add it to __constants__." It only happens when I use the efficientnet backbone. How can I get the save() function to work? I need to be able to use the model in a C++ application.
import torch
import segmentation_models_pytorch as smp
model = smp.Unet('efficientnet-b7')
model.eval()
input = torch.randn((1,3,224,224))
torch_out = model(input)
model = torch.jit.trace(model,input)
trace_out = model(input)
model.save('model.pt')
The Unet model from the segmentation_models_pytorch package uses an EfficientNet encoder, which uses a MemoryEfficientSwish module. To fix the error, change all instances of MemoryEfficientSwish to Swish before tracing and saving the model.
You can iterate through the Unet model and, if a module is an instance of EfficientNet, call its .set_swish(memory_efficient=False) method.
After that, you can load the state_dict, and then trace and save the model, as sketched below.
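A minimal sketch of that fix, assuming the segmentation_models_pytorch efficientnet encoder inherits from efficientnet_pytorch's EfficientNet class (that import path is an assumption, not stated in the question):
import torch
import segmentation_models_pytorch as smp
from efficientnet_pytorch import EfficientNet  # assumed base class of the smp efficientnet encoder

model = smp.Unet('efficientnet-b7')
model.eval()

# Swap MemoryEfficientSwish for the traceable Swish in every EfficientNet submodule.
for module in model.modules():
    if isinstance(module, EfficientNet):
        module.set_swish(memory_efficient=False)

example = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(model, example)
traced.save('model.pt')  # can now be loaded from C++ with torch::jit::load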
I am having trouble loading a large model after saving it.
I have tried all of the saving methods below:
tf.saved_model.save(model, model_save_path)
model.save(model_save_path+"_new_save")
tf.keras.models.save_model(model, model_save_path+"_v3")
Errors when loading:
method 1
m2=tf.keras.models.load_model(model_save_path+"_v3")
error:
__init__() got an unexpected keyword argument 'reduction'
method 2
m3=tf.keras.models.load_model(model_save_path)
error:
WARNING:tensorflow:SavedModel saved prior to TF 2.5 detected when loading Keras model. Please ensure that you are saving the model with model.save() or tf.keras.models.save_model(), *NOT* tf.saved_model.save(). To confirm, there should be a file named "keras_metadata.pb" in the SavedModel directory.
ValueError: Unable to create a Keras model from SavedModel at xxxx . This SavedModel was exported with `tf.saved_model.save`, and lacks the Keras metadata file. Please save your Keras model by calling `model.save`or `tf.keras.models.save_model`. Note that you can still load this SavedModel with `tf.saved_model.load`.
method 3
m4=tf.saved_model.load(model_save_path)
This works, but the m4 object has no predict method, and I am not able to use
model.signatures["serving_default"](**input_data)
or
model.__call__(input_data, training=False)
to predict on data.
Any help would be appreciated.
Adding compile=False to the load call will solve the issue; it skips restoring the saved training configuration (the optimizer and loss objects), which is likely what raises the 'reduction' error:
m2=tf.keras.models.load_model(model_save_path+"_v3", compile=False)
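If you also need evaluate() or further training, you can recompile the loaded model yourself. A minimal sketch, where the optimizer and loss names are placeholders rather than values taken from the question:
import tensorflow as tf

# Load architecture and weights only; the saved compile state is skipped.
m2 = tf.keras.models.load_model(model_save_path + "_v3", compile=False)
predictions = m2.predict(input_data)

# Recompile manually if evaluate()/fit() is needed afterwards
# (optimizer and loss here are placeholders, not from the original model).
m2.compile(optimizer="adam", loss="sparse_categorical_crossentropy")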
I am unable to load the saved PyTorch model from the outputs folder in my other scripts.
I am using the following lines of code to save the model:
os.makedirs("./outputs/model", exist_ok=True)
torch.save({
    'model_state_dict': copy.deepcopy(model.state_dict()),
    'optimizer_state_dict': optimizer.state_dict()
}, './outputs/model/best-model.pth')
new_run.upload_file("outputs/model/best-model.pth", "outputs/model/best-model.pth")
saved_model = new_run.register_model(model_name='pytorch-model', model_path='outputs/model/best-model.pth')
and using the following code to access it:
global model
best_model_path = 'outputs/model/best-model.pth'
model_checkpoint = torch.load(best_model_path)
model.load_state_dict(model_checkpoint['model_state_dict'], strict = False)
but when I run the above-mentioned code, I get this error: No such file or directory: './outputs/model/best-model.pth'
Also, I want to know whether there is a way to get the saved model from Azure Models. I have tried to get it using the following lines of code:
from azureml.core.model import Model
model = Model(ws, "Pytorch-model")
but it returns a Model-type object, which raises an error on model.eval() (error: 'Model' object has no attribute 'eval').
There is no global outputs folder. If you want to use a model in a new script, you need either to pass the model to the script as an input, or to register the model and download it from the new script.
The Model object from azureml.core.model import Model is not your PyTorch model.
You can use model.register(...) (or run.register_model(...), as in your training script) to register your model, and model.download(...) to download it. Then you can use PyTorch to load it, as sketched below.
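A minimal sketch of the download-and-load flow in a separate script, assuming the model was registered under the name 'pytorch-model' as in the question; MyModel stands in for whatever architecture class you actually use:
from azureml.core import Workspace
from azureml.core.model import Model
import torch

ws = Workspace.from_config()

# Fetch the registered model and download its file next to this script.
azure_model = Model(ws, name='pytorch-model')
local_path = azure_model.download(target_dir='.', exist_ok=True)

# Load the checkpoint with PyTorch; MyModel is a placeholder for your own class.
checkpoint = torch.load(local_path, map_location='cpu')
model = MyModel()
model.load_state_dict(checkpoint['model_state_dict'])
model.eval()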
I have successfully trained a Keras model like:
import tensorflow as tf
from keras_segmentation.models.unet import vgg_unet
# initiate the model
model = vgg_unet(n_classes=50, input_height=512, input_width=608)
# Train
model.train(
    train_images=train_images,
    train_annotations=train_annotations,
    checkpoints_path="/tmp/vgg_unet_1", epochs=5
)
And saved it in hdf5 format with:
tf.keras.models.save_model(model,'my_model.hdf5')
Then I load my model with
model=tf.keras.models.load_model('my_model.hdf5')
Finally I want to make a segmentation prediction on a new image with
out = model.predict_segmentation(
    inp=image_to_test,
    out_fname="/tmp/out.png"
)
I am getting the following error:
AttributeError: 'Functional' object has no attribute 'predict_segmentation'
What am I doing wrong?
Is it when I am saving my model or when I am loading it?
Thanks!
predict_segmentation isn't a function available in normal Keras models. It looks like it was added after the model was created in the keras_segmentation library, which might be why Keras couldn't load it again.
I think you have 2 options for this.
You could use the line from the code I linked to manually add the function back to the model.
from types import MethodType
import keras_segmentation
model.predict_segmentation = MethodType(keras_segmentation.predict.predict, model)
You could create a new vgg_unet with the same arguments when you reload the model, and transfer the weights from your hdf5 file to that model as suggested in the Keras documentation.
model = vgg_unet(n_classes=50, input_height=512, input_width=608)
model.load_weights('my_model.hdf5')
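With option 2, the rebuilt model gets predict_segmentation attached again by keras_segmentation, so the original prediction call should work. A short sketch under that assumption, reusing the names from the question:
from keras_segmentation.models.unet import vgg_unet

# Rebuild the architecture with the same arguments, then restore the trained weights.
model = vgg_unet(n_classes=50, input_height=512, input_width=608)
model.load_weights('my_model.hdf5')

out = model.predict_segmentation(
    inp=image_to_test,        # image path or array, as in the question
    out_fname="/tmp/out.png"
)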
I trained a neural network, without any checkpoints, and at the end I wrote tf.keras.models.save_model(model, dirpath) to save the whole model, which created the following files:
saved_model.pb
assets/
variables/variables.index
variables/variables.data-00000-of-00001
I tried loading the model using new_model = tf.keras.models.load_model(dirpath), but it gave a ValueError because I'm using a custom model (I created a class inheriting from tf.keras.Model). So instead I tried to instantiate a new model and then just load the weights using
model = myModel(someArgs)
model.load_weights(dirpath/variables)
However, I get the following error message:
OSError: Unable to open file (unable to open file: name = 'dirpath/variables', errno = 13, error message = 'Permission denied', flags = 0, o_flags = 0)
So how can I load the weights onto the model? The files are there I just don't know how to put them back inside my model.
Figured it out: I was using the wrong path. I need to do model.load_weights(dirpath/variables/variables). There are two files called variables with different extensions (.data-00000-of-00001 and .index), and that shared prefix is the path you want to pass.
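A short sketch of that fix, reusing myModel, someArgs and dirpath from the question as placeholders:
import os

# Rebuild the model, then point load_weights at the checkpoint *prefix*,
# not at the .index or .data-* files themselves.
model = myModel(someArgs)
weights_prefix = os.path.join(dirpath, 'variables', 'variables')
model.load_weights(weights_prefix)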