I have checkpoint files: *.data, *.index, and *.meta. I have to convert these checkpoint files into a frozen inference graph. I have seen this reference. You can download my checkpoint files from here.
Here's what I did:
Step 1:
Display the internal layers of the graph using the following code:
import tensorflow as tf

# Import the graph definition from the .meta file and restore the weights
saver = tf.train.import_meta_graph('./VGGnet_fast_rcnn_iter_50000.meta', clear_devices=True)
graph = tf.get_default_graph()
input_graph_def = graph.as_graph_def()
sess = tf.Session()
saver.restore(sess, "./VGGnet_fast_rcnn_iter_50000")

# Print the name of every node in the graph
for n in tf.get_default_graph().as_graph_def().node:
    print(n.name)

# Write the graph for TensorBoard visualization
writer = tf.summary.FileWriter('./log/', sess.graph)
Here's the output:
checkpoints.txt
Using checkpoints.txt together with the *.data, *.index, and *.meta files, I want to create a frozen inference graph.
My main challenge is finding the output node names in the checkpoints.txt file. If you look at this file, there is a list of names with no clear indication of which one is the output.
Once I find the output node names, creating a frozen graph is an easy ride. How do I find which of the names in the checkpoints.txt file is my output?
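One heuristic that can help (a sketch, not a guaranteed method): output nodes are typically consumed by no other node, so you can list the graph's terminal nodes and filter checkpoints.txt down to those candidates. Note this will also surface training-only leaves such as save/restore ops:

import tensorflow as tf

saver = tf.train.import_meta_graph('./VGGnet_fast_rcnn_iter_50000.meta', clear_devices=True)
graph_def = tf.get_default_graph().as_graph_def()

# Collect every node name that appears as an input to some other node
consumed = set()
for node in graph_def.node:
    for inp in node.input:
        # Strip control-dependency '^' prefixes and ':0'-style output suffixes
        consumed.add(inp.lstrip('^').split(':')[0])

# Nodes that nobody consumes are candidate outputs
candidates = [n.name for n in graph_def.node if n.name not in consumed]
print(candidates)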
I have a project that was developed on TensorFlow v1, I think. It works in Python 3.8 like this:
...
saver = tf.train.Saver(var_list=vars)
...
saver.restore(self.sess, tf.train.latest_checkpoint(checkpoint_dir))
...
The checkpoint files reside in the "checkpoint_dir" directory.
I would like to use this with TFjs but I can't figure out how to transform the checkpoint files to something that can be loaded with TFjs.
What should I do?
thanks,
John
Ok, I figured it out. Hope this helps other beginners like me too.
The checkpoint files do not contain the model; they only contain the values (weights, etc.) of the model.
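You can verify this yourself by listing what a checkpoint actually stores; a quick sketch assuming the TF1-style API (the path is a placeholder):

import tensorflow.compat.v1 as tf

# Each entry is just a variable name and its shape; no graph structure at all.
for name, shape in tf.train.list_variables('./newcheckpoint/'):
    print(name, shape)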
The model is actually built in the code. So, here are the steps to convert the TensorFlow v1 checkpoint files to a TensorFlowJS-loadable model:
First, I saved the checkpoint again because a file was missing (the .meta file), which contains some meta information about the values in the checkpoint. To save the checkpoint with the meta file, I used this code right after the saver.restore(...) call:
...
saver.save(self.sess, save_path='./newcheckpoint/')
...
Save the model as a frozen model file like this:
import tensorflow.compat.v1 as tf

meta_path = './newcheckpoint/.meta'  # Your .meta file
output_node_names = ['name_of_the_output_node']  # Output nodes

with tf.Session() as sess:
    # Restore the graph
    saver = tf.train.import_meta_graph(meta_path)

    # Load weights
    saver.restore(sess, tf.train.latest_checkpoint('./newcheckpoint/'))

    # Freeze the graph
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess,
        sess.graph_def,
        output_node_names)

    # Save the frozen graph
    with open('./freeze/output_graph.pb', 'wb') as f:
        f.write(frozen_graph_def.SerializeToString())
This will save the model to ./freeze/output_graph.pb.
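As an optional sanity check (a sketch, assuming TF1-style APIs), you can reload the frozen graph and confirm the output node survived freezing:

import tensorflow.compat.v1 as tf

graph_def = tf.GraphDef()
with open('./freeze/output_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# The frozen graph should end at (or near) your output node
print([n.name for n in graph_def.node][-5:])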
Using tensorflowjs_converter, convert the frozen model to a web model like this:
tensorflowjs_converter --input_format=tf_frozen_model --output_node_names='final_add' --skip_op_check ./freeze/output_graph.pb ./web_model/
I had to use --skip_op_check due to some missing-op errors/warnings when trying to convert.
As a result of step 3, the ./web_model/ folder will contain the JSON and binary files required by the TensorFlowJS library.
Here's how I load the model using tfjs 2.x:
model = await tf.loadGraphModel('web_model/model.json');
I'm trying to convert these three files of a pre-trained model:
semantic_model.data-00000-of-00001
semantic_model.index
semantic_model.meta
into a Saved Model format, so that I can later convert it into TFLite format for Inference.
Searching StackOverflow, I'd come across this code, which properly generates the saved_model.pb; however, as noted in some comments, doing it this way doesn't keep the Meta Graph Definitions, which causes an error when I later try to convert it into TFLite format or freeze it.
import os
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

export_dir = '/tf-end-to-end/export_dir'
trained_checkpoint_prefix = 'PATH TO MODEL DIRECTORY'  # e.g. Models/semantic_model

tf.reset_default_graph()
graph = tf.Graph()

# Import the graph from the .meta file and restore the weights
loader = tf.train.import_meta_graph(trained_checkpoint_prefix + '.meta')
sess = tf.Session()
loader.restore(sess, trained_checkpoint_prefix)

# Export as a SavedModel tagged for both training and serving
builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
builder.add_meta_graph_and_variables(
    sess,
    [tf.saved_model.tag_constants.TRAINING, tf.saved_model.tag_constants.SERVING],
    strip_default_attrs=True)
builder.save()
This is the error I get when trying to use the saved_model:
RuntimeError: MetaGraphDef associated with tags {'serve'} could not be found in SavedModel
Running saved_model_cli show --all on the created SavedModel doesn't display anything under signature definitions.
My question is, how do I maintain the data and convert this to saved_model, for later conversion into TFLite format?
Model Structure and creation details can be seen here, including the checkpoint files mentioned: https://github.com/OMR-Research/tf-end-to-end
Refer to these steps for converting checkpoints to a TFLite model: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/r1/convert/python_api.md#convert-checkpoints-
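Following the linked steps, here is a minimal sketch of the frozen-graph-to-TFLite path using the TF1 converter API; the input_arrays and output_arrays names below are hypothetical placeholders and must match the real node names in your graph:

import tensorflow.compat.v1 as tf

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='frozen_graph.pb',  # path to your frozen graph (placeholder)
    input_arrays=['input'],            # hypothetical input node name
    output_arrays=['output'])          # hypothetical output node name
tflite_model = converter.convert()

with open('model.tflite', 'wb') as f:
    f.write(tflite_model)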
I am trying to convert my frozen_model.pb, which is based on the SSD MobileNet V2 COCO pretrained model from TensorFlow, into a TensorFlow JS compatible (.pb) file. I am stuck on how to get the output_node_names parameter, which is needed when using the tensorflowjs_converter. How do I get to know the output node names?
I have tried to get the operation names by using the below Python script, but am not able to understand which one is the output node.
import tensorflow as tf

def load_graph(model_file):
    """Load a frozen GraphDef from disk and import it into a new Graph."""
    graph = tf.Graph()
    graph_def = tf.GraphDef()
    with open(model_file, "rb") as f:
        graph_def.ParseFromString(f.read())
    with graph.as_default():
        tf.import_graph_def(graph_def)
    return graph

graph = load_graph('frozen_model.pb')
ops = graph.get_operations()
Firstly, you can inspect all the nodes in your graph_def as follows:
for node in graph_def.node:
    print(node.name)
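If you are starting from a frozen .pb (as in the question), graph_def can be obtained by reusing the load_graph helper above, for example:

graph = load_graph('frozen_model.pb')
graph_def = graph.as_graph_def()
for node in graph_def.node:
    print(node.name)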
Alternatively, if you want to visually inspect the graph and determine which node to use as the output, TensorBoard is the way to go. There is a tool called import_pb_to_tensorboard. It essentially uses a handful of lines to write the graph to a log_dir, which you can point TensorBoard to. You can simply copy those lines into your own script to achieve the same thing without building from the TensorFlow repo.
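For reference, here is a minimal sketch of what those lines boil down to, assuming a TF1-style environment (paths are placeholders); afterwards run tensorboard --logdir ./log/ and open the Graphs tab:

import tensorflow.compat.v1 as tf

with tf.Session() as sess:
    # Load the frozen graph into the session's default graph
    with tf.gfile.GFile('frozen_model.pb', 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def)
    # Write the graph so TensorBoard can render it
    writer = tf.summary.FileWriter('./log/', sess.graph)
    writer.close()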
Thirdly, there is the summarize_graph tool:
bazel build tensorflow/tools/graph_transforms:summarize_graph
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=/path/to/your/graph.pb
TensorFlow version: 1.8.0
I am trying to restore my model using one of the intermediate checkpoint files in Tensorflow. By default Tensorflow will take the last saved checkpoint file.
For example, the folder contains files like:
checkpoint
model-56000.index
model-56000.data-00000-of-00001
model-56000.meta
model-57000.index
model-57000.data-00000-of-00001
model-57000.meta
By default, TensorFlow loads the latest 57K checkpoint, but for reasons, I want to load the weights from the 56K checkpoint.
Following is my code for restoring the model:
def load_G(self, checkpoint_dir):
    print(" [*] Reading checkpoints of G...")
    ckpt = tf.train.get_checkpoint_state(checkpoint_dir)
    if ckpt and ckpt.model_checkpoint_path:
        ckpt_name = os.path.basename(ckpt.model_checkpoint_path)
        self.saver_gen.restore(self.sess, os.path.join(checkpoint_dir, ckpt_name))
        return True
    else:
        return False
From TensorFlow's documentation, I read that for tf.train.get_checkpoint_state() I can specify tf.train.get_checkpoint_state(checkpoint_dir, latest_filename=None). But I am not able to figure out what I should write for latest_filename. I tried writing latest_filename = model-56000,
but that did not load the model.
I also tried writing latest_filename = model-56000.meta. That did not work either.
So, what is the correct way to load an intermediate checkpoint file in TensorFlow?
OK, so a hack is to modify the checkpoint protobuf file and change its first line from model_checkpoint_path: "model-57000" to model_checkpoint_path: "model-56000", and now it loads the 56K checkpoint.
Looking for some better ways to do this.
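A cleaner alternative (a sketch, not from the original answers): note that latest_filename only names the checkpoint-state protobuf file itself (the file called checkpoint by default), not a model prefix, which is why latest_filename = model-56000 did not work. You can skip tf.train.get_checkpoint_state entirely and pass the desired checkpoint prefix straight to Saver.restore:

import os

def load_G(self, checkpoint_dir, step=56000):
    """Restore a specific intermediate checkpoint by its prefix (sketch)."""
    print(" [*] Reading checkpoint of G at step %d..." % step)
    # Saver.restore accepts the checkpoint prefix directly, so there is
    # no need to consult the `checkpoint` state file at all.
    prefix = os.path.join(checkpoint_dir, 'model-%d' % step)
    self.saver_gen.restore(self.sess, prefix)
    return True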
The ckpt file name will be model-56000.ckpt.
model-56000.meta points to the meta information of the ckpt.
model-56000 is the file name prefix shared by the ckpt, data, and meta files.
I am loading a MetaGraph from .meta and .ckpt files:
import tensorflow as tf

meta_graph = tf.train.import_meta_graph(meta_file)
with tf.Session() as sess:
    meta_graph.restore(sess, ckpt_file)
and a Graph from a .pb file:
from tensorflow.python.platform import gfile

with gfile.FastGFile(pb_file, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')
current_graph = tf.get_default_graph()
and then I would like to convert either of them into a Keras model or an .h5 file directly.
But I am unable to find a helper function for this: neither MetaGraph nor Graph has an export utility for Keras or .h5, and Keras has no import function for MetaGraph or Graph.
Please help me cross the bridge.
Files can be found here :
https://github.com/davidsandberg/facenet#pre-trained-models (VGGFace 2) or
https://drive.google.com/file/d/1EXPBSXwTaqrSC0OhUdXNmKSh9qJUQ55-/view
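For completeness, there is no built-in converter in either direction; the usual workaround is to rebuild the architecture as a tf.keras model by hand and copy the weights over by variable name from the restored TF1 session. A rough sketch, where meta_file, ckpt_file, keras_model, and the conv1/... tensor names are all hypothetical placeholders rather than names from the FaceNet graph:

import tensorflow.compat.v1 as tf

meta_graph = tf.train.import_meta_graph(meta_file)
with tf.Session() as sess:
    meta_graph.restore(sess, ckpt_file)
    # Pull weight values out of the restored graph by tensor name
    # (these names are hypothetical; list the real ones via tf.global_variables())
    kernel = sess.run('conv1/weights:0')
    bias = sess.run('conv1/biases:0')

# keras_model is a hand-rebuilt tf.keras model with a matching architecture
keras_model.get_layer('conv1').set_weights([kernel, bias])
keras_model.save('model.h5')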