My impression is that it only saves the model's architecture, so I should be able to call it before I start training? And then save_weights() saves the weights I need to restore the model? Any more details on this?
At what stage can I call to_json()? I.e., do I have to call compile() first? Can it be called before fit()?
As mentioned in the Keras docs, it only saves the architecture of the model:
Saving/loading only a model's architecture
If you only need to save the architecture of a model, and not its
weights or its training configuration, you can do:
# save as JSON
json_string = model.to_json()
# save as YAML
yaml_string = model.to_yaml()
The generated JSON / YAML files are human-readable and can be manually
edited if needed.
You can then build a fresh model from this data:
# model reconstruction from JSON:
from keras.models import model_from_json
model = model_from_json(json_string)
# model reconstruction from YAML
from keras.models import model_from_yaml
model = model_from_yaml(yaml_string)
I tried to import the VGG16 model which I downloaded from Google Storage:
import keras
import cv2
from keras.models import Sequential, load_model
But I got this error:
ValueError: No model found in config file.
I was able to recreate the issue using your code and the weights file you mentioned. I am not sure about the root cause, but I can offer an alternative way for you to use the pretrained VGG16 model from Keras.
You need to use the model from the keras.applications module.
Here is the link for your reference: https://keras.io/api/applications/
There are three ways to instantiate this model via the weights argument, which takes one of three values: None, 'imagenet', or a filepath to a weights file.
Since you have already downloaded the weights, I suggest you use the filepath option as in the code below, but for first-time usage I suggest using 'imagenet' (option 3). It will download the weights file, which can then be saved and reused later.
You need to add the following lines of code.
Option 1:
from keras.applications.vgg16 import VGG16
model = VGG16(weights = 'vgg16_weights_tf_dim_ordering_tf_kernels.h5')
Option 2:
from keras.applications.vgg16 import VGG16
model = VGG16(weights = None)
model.load_weights('vgg16_weights_tf_dim_ordering_tf_kernels.h5')
Option 3: for using pretrained imagenet weights
from keras.applications.vgg16 import VGG16
model = VGG16(weights = 'imagenet')
The constructor also takes other arguments such as include_top, which can be added as required.
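For example, a common pattern (just a sketch, assuming you want the convolutional base for feature extraction) is to drop the classifier head with include_top=False:
from keras.applications.vgg16 import VGG16

# include_top=False drops the fully connected classifier layers, so you can
# choose your own input resolution and add your own head on top.
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(160, 160, 3))
base_model.summary()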
The problem here is that the file you are trying to load is not a full model but probably just the weights, so the problem is not in loading the model but in how it was saved.
When you are saving the model, try the following:
If you are using callbacks, set save_weights_only=False.
Otherwise, use the function tf.keras.models.save_model(model, filepath).
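A minimal sketch of both options (the tiny model and filenames here are only placeholders):
import tensorflow as tf

# A tiny stand-in model just to make the sketch self-contained.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer='adam', loss='mse')

# Option A: have the checkpoint callback save the full model, not just the weights.
checkpoint = tf.keras.callbacks.ModelCheckpoint('checkpoint.h5', save_weights_only=False)
# model.fit(x_train, y_train, callbacks=[checkpoint])  # plug in your own data

# Option B: save the full model explicitly after training.
tf.keras.models.save_model(model, 'full_model.h5')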
A complete model has two parts: model architecture and weights.
So if we only have the weights, we must first load the architecture (for example from a Python file or a saved Keras config), and then load the weights onto that architecture.
For example:
model = tf.keras.models.load_model("facenet_keras.h5")
model.load_weights("facenet_keras_weights.h5")
I want to send the Keras model after training to another Python function which is saved in another Python file. How can I send the model as an argument? Thank you.
If I understand it correctly, you want to transfer the model created in script A to script B so it can be used there.
In my experience, the easiest way of using a Keras model in a different script is to save the model to disk as a file, as described in the Keras docs:
from keras.models import load_model
model.save('my_model.h5') # creates a HDF5 file 'my_model.h5'
del model # deletes the existing model
# returns a compiled model
# identical to the previous one
model = load_model('my_model.h5')
Passing the model on to a different Python file (e.g. via a command-line argument) can then be done by passing the filename of the saved model to the second script. That script can then load the model from disk and use it.
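For instance, a minimal sketch of the second script when the path is passed on the command line (the script name and argument handling are just an assumption):
import sys
from keras.models import load_model

def main():
    # Expects the model path as the first argument, e.g.:
    #   python script_b.py my_model.h5
    model_path = sys.argv[1]
    model = load_model(model_path)
    model.summary()

if __name__ == '__main__':
    main()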
In case you only have 1 model at a time, you could pick a filename and hardcode it into your functions. For example:
Script A
# assuming you already have a model stored in 'model'
model.save('my_stored_model.h5')
Script B (which accesses the saved model)
from keras.models import load_model
def function_a():
    model = load_model('my_stored_model.h5')
    return model.predict(...)
I am trying to load an LSTM model (created with Keras) after having saved it using:
model_json = model.to_json()
with open("model.json", "w") as json_file:
    json_file.write(model_json)
I load it with:
json_file = open('model.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
and then to print the lr and other hyper-parameters via:
loaded_model.summary()
I received all the information about the structure of the LSTM, but I did not get the hyper-parameter information such as the lr.
The learning rate is a parameter of the model's optimizer and is not included in the model.summary() output. If you want to find the value of the learning rate, you can use the optimizer attribute of the model and K.eval() to evaluate the learning rate tensor and get its actual value:
from keras import backend as K

print(K.eval(model.optimizer.lr))
Update: the optimizer of the model is not saved when you use the to_json method, and therefore the above solution does not work in that case. If you want to save the whole model, including the layer weights as well as the optimizer (along with its state), you can use the save method:
model.save('my_model.h5')
Then you can load it using load_model:
from keras.models import load_model
model = load_model('my_model.h5')
Alternatively, if you have used the save_weights method (to save the weights of the layers) along with the to_json method (to save only the architecture of the model), then you can load the weights back after reconstructing the model with model_from_json:
# load the architecture of model from json file ...
# load the weights
model.load_weights('model_weights.h5')
However, the optimizer in this second approach has not been saved, so you need to recompile the model (note that this means the state of the optimizer is lost, and therefore you may not be able to easily continue training the model without configuring the optimizer first; however, this is fine if you only want to use the model for prediction or retrain it from scratch).
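As a sketch of that second approach (the file names and optimizer settings here are just assumptions; use whatever you trained with):
from keras.models import model_from_json

# Rebuild the architecture from the saved JSON, then restore the weights.
with open('model.json') as f:
    model = model_from_json(f.read())
model.load_weights('model_weights.h5')

# The optimizer state was not saved, so recompile before evaluating or retraining.
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])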
I highly recommend reading the related section in the Keras FAQ as well: How can I save a Keras model?
I am training a Keras model and saving the weights into a JSON file like so:
with open('weights.json', 'w') as f:
    json.dump(model.get_weights(), f)
Now I want to load the weights and rebuild the Keras model so that I can do testing and prediction.
How can I do that?
To save a model, you should use the dedicated model.save(filepath) method and load it again with keras.models.load_model(filepath), as explained here.
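A minimal, self-contained sketch of that approach (the tiny model and file name are only placeholders):
import numpy as np
from keras.models import Sequential, load_model
from keras.layers import Dense

# A tiny stand-in model; replace with your own trained model.
model = Sequential([Dense(1, input_shape=(3,))])
model.compile(optimizer='adam', loss='mse')

# save() stores the architecture, weights, training config and optimizer state.
model.save('my_model.h5')

# Later (or in another script) restore it and use it directly.
restored = load_model('my_model.h5')
print(restored.predict(np.zeros((1, 3))))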
I'm trying to save and load weights from the model I have trained.
The code I'm using to save the model is:
TensorBoard(log_dir='/output')
model.fit_generator(image_a_b_gen(batch_size), steps_per_epoch=1, epochs=1)
model.save_weights('model.hdf5')
model.save_weights('myModel.h5')
Let me know if this is an incorrect way to do it, or if there is a better way.
But when I try to load them using this:
from keras.models import load_model
model = load_model('myModel.h5')
I get this error:
ValueError                                Traceback (most recent call last)
<ipython-input-7-27d58dc8bb48> in <module>()
      1 from keras.models import load_model
----> 2 model = load_model('myModel.h5')

/home/decentmakeover2/anaconda3/lib/python3.5/site-packages/keras/models.py in load_model(filepath, custom_objects, compile)
    235     model_config = f.attrs.get('model_config')
    236     if model_config is None:
--> 237         raise ValueError('No model found in config file.')
    238     model_config = json.loads(model_config.decode('utf-8'))
    239     model = model_from_config(model_config, custom_objects=custom_objects)

ValueError: No model found in config file.
Any suggestions on what I may be doing wrong?
Thank you in advance.
Here is a YouTube video that explains exactly what you're wanting to do: Save and load a Keras model
There are three different saving methods that Keras makes available. These are described in the video link above (with examples), as well as below.
First, the reason you're receiving the error is because you're calling load_model incorrectly.
To save and load the weights of the model, you would first use
model.save_weights('my_model_weights.h5')
to save the weights, as you've displayed. To load the weights, you would first need to build your model, and then call load_weights on the model, as in
model.load_weights('my_model_weights.h5')
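For instance, a minimal sketch (the layers below are only placeholders; the architecture must match the one the weights were saved from):
from keras.models import Sequential
from keras.layers import Dense

# Rebuild the same architecture the weights were saved from, then load the weights.
model = Sequential([Dense(32, activation='relu', input_shape=(10,)),
                    Dense(1)])
model.load_weights('my_model_weights.h5')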
Another saving technique is model.save(filepath). This save function saves:
The architecture of the model, allowing you to re-create the model.
The weights of the model.
The training configuration (loss, optimizer).
The state of the optimizer, allowing you to resume training exactly where you left off.
To load this saved model, you would use the following:
from keras.models import load_model
new_model = load_model(filepath)
Lastly, model.to_json() saves only the architecture of the model. To load the architecture, you would use
from keras.models import model_from_json
model = model_from_json(json_string)
For loading weights, you need to have a model first; the weights can only be loaded onto an existing model with the same architecture:
existingModel.save_weights('weightsfile.h5')
existingModel.load_weights('weightsfile.h5')
If you want to save and load the entire model (this includes the model's configuration, its weights and the optimizer state for further training):
model.save('filename')
model = load_model('filename')
Since this question is quite old, but still comes up in google searches, I thought it would be good to point out the newer (and recommended) way to save Keras models.
Instead of saving them using the older H5 format as has been shown before, it is now advised to use the SavedModel format, which is actually a directory that contains both the model configuration and the weights.
More information can be found here: https://www.tensorflow.org/guide/keras/save_and_serialize
The snippets to save & load can be found below:
model.fit(test_input, test_target)
# Calling save('my_model') creates a SavedModel folder 'my_model'.
model.save('my_model')
# It can be used to reconstruct the model identically.
reconstructed_model = keras.models.load_model('my_model')
Loading a model from scratch requires you to build the model from scratch, so you can try saving your model architecture first using model.to_json():
model_architecture = model.to_json()
Save the model weights using:
model.save_weights('model_weights.h5')
For loading the weights, you first need to reconstruct your model from the saved JSON file:
from tensorflow.keras.models import model_from_json
model = model_from_json(model_architecture)
Then load the weights using:
model.load_weights('model_weights.h5')
You can now compile and test the model; there is no need to retrain. For example:
from tensorflow import keras

model.compile(loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              optimizer=keras.optimizers.Adam(lr=0.001),
              metrics=["accuracy"])
model.evaluate(x_test, y_test, batch_size=32, verbose=2)