I am trying to use a TensorFlow neural network in "interactive" mode:
my goal is to load a trained model, keep it in memory, and then perform inference on it once in a while.
The problem is that the TensorFlow Estimator class (tf.estimator.Estimator) apparently does not allow this.
The method predict (documentation, source) takes as input a batch of features and the path to the model. It then creates a session, loads the model, and performs the inference.
After that, the session is closed, and for a subsequent inference it is necessary to load the model again.
How can I achieve the desired behavior using the Estimator class?
Thank you
You may want to have a look at tfe.make_template; its goal is precisely to make graph-based code available in eager mode.
Following the example given during the 2018 TF Dev Summit, that would give something like:
def apply_my_estimator(x):
    return my_estimator(x)

t = tfe.make_template('f', apply_my_estimator, create_graph_function=True)
print(t(x))
According to the demo code
"Image similarity estimation using a Siamese Network with a contrastive loss"
https://keras.io/examples/vision/siamese_contrastive/
I'm trying to save the model with model.save to h5 or hdf5; however, after I use load_model (I even tried load_weights),
it shows the error message: unknown opcode.
I have done some googling, and everything tells me it's a Python version problem between py3.5 and py3.6.
But I actually use only Python 3.8...
Other sources say that some extra work needs to be done, either in the model building or in load_model.
It would be very kind of anyone to help provide the save and load model part
to make this demo code more complete.
Thanks!!
Actually, here they are using two individual components, which come in as custom objects.
Custom objects:
contrastive loss
embedding layer: where the euclidean_distance is computed.
Saving model:
Saving the model is straightforward:
<model_name>.save("siamese_contrastive.h5")
Loading model:
Here comes the tricky part: the model will not load directly, because it doesn't have an understanding of two things: one is your custom layer, and the second is your loss.
model = tf.keras.models.load_model('siamese_contrastive.h5', custom_objects={ })
In the custom_objects argument shown above, you have to provide the definitions of those two objects.
After that, it will accept your model, and it will run separately at inference time.
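For reference, here is a minimal sketch of what the custom objects could look like, assuming the euclidean_distance function and the loss(margin) wrapper from the demo code (check the demo for the exact definitions and label convention):

import tensorflow as tf

def euclidean_distance(vects):
    # Euclidean distance between the two embedding vectors (as in the demo).
    x, y = vects
    sum_square = tf.math.reduce_sum(tf.math.square(x - y), axis=1, keepdims=True)
    return tf.math.sqrt(tf.math.maximum(sum_square, tf.keras.backend.epsilon()))

def loss(margin=1):
    # Contrastive loss wrapper, parameterized by the margin (as in the demo;
    # check the label convention of your pairs there).
    def contrastive_loss(y_true, y_pred):
        square_pred = tf.math.square(y_pred)
        margin_square = tf.math.square(tf.math.maximum(margin - y_pred, 0))
        return tf.math.reduce_mean(
            (1 - y_true) * square_pred + y_true * margin_square
        )
    return contrastive_loss

model = tf.keras.models.load_model(
    "siamese_contrastive.h5",
    custom_objects={
        "euclidean_distance": euclidean_distance,
        "contrastive_loss": loss(margin=1),
    },
)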
Still figuring out how?
Have a look at my implementation and let me know if you still have any questions: https://github.com/anukash/Keras_siamese_contrastive
Let's say that I want to fine tune one of the Tensorflow Hub image feature vector modules. The problem arises because in order to fine-tune a module, the following needs to be done:
module = hub.Module("https://tfhub.dev/google/imagenet/resnet_v2_50/feature_vector/3", trainable=True, tags={"train"})
Assume, for concreteness, that the module is ResNet-50.
In other words, the module is imported with the trainable flag set to True and with the train tag. Now, if I want to validate the model (perform inference on the validation set in order to measure its performance), I can't switch off batch norm, because of the train tag and the trainable flag.
Please note that this question has already been asked here: Tensorflow hub fine-tune and evaluate, but no answer has been provided.
I also raised a GitHub issue about it.
Looking forward to your help!
With hub.Module for TF1, the situation is as you say: either the training or the inference graph is instantiated, and there is no good way to import both and share variables between them in a single tf.Session. That's informed by the approach used by Estimators and many other training scripts in TF1 (esp. distributed ones): there's a training Session that produces checkpoints, and a separate evaluation Session that restores model weights from them. (The two will likely also differ in the dataset they read and the preprocessing they perform.)
With TF2 and its emphasis on Eager mode, this has changed. TF2-style Hub modules (as found at https://tfhub.dev/s?q=tf2-preview) are really just TF2-style SavedModels, and these don't come with multiple graph versions. Instead, the __call__ function on the restored top-level object takes an optional training=... parameter if the train/inference distinction is required.
With this, TF2 should match your expectations. See the interactive demo tf2_image_retraining.ipynb and the underlying code in tensorflow_hub/keras_layer.py for how it can be done. The TF Hub team is working on making a more complete selection of modules available for the TF2 release.
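As a rough sketch of what that looks like in practice (the module URL below is one of the tf2-preview modules; swap in the one you need):

import tensorflow as tf
import tensorflow_hub as hub

# Load a TF2-style SavedModel as a Keras layer; trainable=True enables fine-tuning.
feature_extractor = hub.KerasLayer(
    "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4",
    trainable=True,
)

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(224, 224, 3)),
    feature_extractor,
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Keras passes training=True during fit() and training=False during
# evaluate()/predict(), so batch norm behaves correctly in each phase.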
Is there a way to take a trained TensorFlow model and convert all the tf.Variables and their respective weights (either from within a running tf.Session or from a checkpoint) into tf.constants holding those values, such that one can run the model on a new input tensor without initializing or restoring the weights in a session? In other words, can I condense a trained model into a fixed and immutable TensorFlow operation?
Yes, there is a freeze_graph.py tool for exactly that purpose.
It is described (a bit) in the Tool Developer's Guide, and you can find a usage example in the "Preparing models for mobile deployment" section.
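Under the hood, freeze_graph.py essentially wraps graph_util.convert_variables_to_constants. A minimal in-Python sketch of the same idea, from within a running session (the output node name and path are placeholders you would adapt):

import tensorflow as tf
from tensorflow.python.framework import graph_util

with tf.Session() as sess:
    # ... build or restore your trained model here ...

    # Replace every tf.Variable reachable from the output nodes with a
    # tf.constant holding its current value.
    frozen_graph_def = graph_util.convert_variables_to_constants(
        sess,
        sess.graph_def,
        output_node_names=["softmax"],  # placeholder: your model's output op
    )

    # Serialize the now self-contained graph to disk.
    with tf.gfile.GFile("/tmp/frozen_graph.pb", "wb") as f:
        f.write(frozen_graph_def.SerializeToString())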
I'm using the Keras plot_model() function to visualize my machine learning models. Apart from the issue that the first node of my output is always simply a very large number, there is another thing that annoys me about this function: it does not provide very elaborate output. For example, I would like to be able to see more information about the loss function used, the batch size, the number of epochs, the optimizer, etc.
Is there any way I can retrieve this information from a model I previously saved to the disk and loaded again with the model_from_json() function?
How about the TensorBoard callback? It will create interactive graphs that you can explore based on your model, if you use TensorFlow as your backend.
You just need to add it as a callback to your fit function and make sure write_graph=True is set (which it is by default). If you want a shortcut, you can directly invoke its methods instead of passing it as a callback:

from keras.callbacks import TensorBoard

tensorboard = TensorBoard(log_dir='./logs', write_graph=True)
tensorboard.set_model(model)  # your model here; writes the graph etc.
tensorboard.on_train_end(None)  # closes the writer
Then just run tensorboard --logdir=./logs to start the server.
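For completeness, the usual callback route looks something like this (the model and the x_train/y_train data are assumed to exist already):

from keras.callbacks import TensorBoard

tensorboard = TensorBoard(log_dir='./logs', write_graph=True)
model.fit(x_train, y_train, epochs=10, callbacks=[tensorboard])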
I'm using Keras with CERN ROOT and its analysis package TMVA. The way it works is that I use Keras to initialize the NN, then save it to a file, and then TMVA loads that file in. The problem is that I am using custom metrics when setting up the neural network, and when doing this, Keras wants you to do something like
models.load_model(model_path, custom_objects={"my_object": my_object})
Unfortunately, the way that TMVA takes arguments requires that I supply only the filename of the model file being used. However, based on the error messages that I am getting, it is clear that it is simply using Keras to load the model. My question is: how do I force Keras to automatically load my custom objects without having to use the above line, since it is incompatible with the package I'm trying to use?