I have an existing model stored in a .pb file, and I want to add a constant tensor (I have the data in a numpy array) to some tensor in the graph via an Add op. How can this be done?
As I understand it, each node in the graph is a tensorflow.core.framework.node_def_pb2.NodeDef, so do I need to create one node for the Add op and one node for the const tensor?
These are related questions:
TensorFlow Manual Construction of GraphDef
How to get weights from .pb model in Tensorflow
EDIT: The answer below just adds a disconnected constant to the graph. If you want to add a new Add operation, it would be something like this:
import tensorflow as tf

constant_value = ...

with tf.Graph().as_default():
    gd = tf.GraphDef()
    with open('my_graph.pb', 'rb') as f:
        gd.MergeFromString(f.read())
    # return_elements expects a list of names; take the first returned tensor
    my_tensor = tf.import_graph_def(gd, name='',
                                    return_elements=['SomeOperation:0'])[0]
    tf.add(my_tensor, constant_value, name='NewOperation')
    tf.train.write_graph(tf.get_default_graph(), '.',
                         'my_modified_graph.pb', as_text=False)
Note, however, that this just adds new operations; it does not modify the value of the original tensor. I'm not sure which of these is what you wanted.
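If you instead want the rest of the graph to consume the shifted value, one sketch (not part of the original answer) is to import the graph twice and use the input_map argument of tf.import_graph_def to splice the new sum in front of the downstream consumers. The tensor name 'SomeOperation:0' and the scope names here are placeholders:

import tensorflow as tf

constant_value = ...

with tf.Graph().as_default():
    gd = tf.GraphDef()
    with open('my_graph.pb', 'rb') as f:
        gd.MergeFromString(f.read())
    # First import: grab the tensor we want to shift
    my_tensor = tf.import_graph_def(gd, name='orig',
                                    return_elements=['SomeOperation:0'])[0]
    shifted = tf.add(my_tensor, constant_value, name='Shifted')
    # Second import: every consumer of 'SomeOperation:0' in this copy
    # now reads the shifted value instead of the original tensor
    tf.import_graph_def(gd, name='patched',
                        input_map={'SomeOperation:0': shifted})
    tf.train.write_graph(tf.get_default_graph(), '.',
                         'my_remapped_graph.pb', as_text=False)

Note the saved file contains both copies of the graph; the 'patched/...' operations are the ones to run.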
The most practical way is to import the graph, add the constant and save it again:
import tensorflow as tf

new_constant = ...

with tf.Graph().as_default():
    gd = tf.GraphDef()
    with open('my_graph.pb', 'rb') as f:
        gd.MergeFromString(f.read())
    tf.import_graph_def(gd, name='')
    tf.constant(new_constant, name='NewConstant')
    tf.train.write_graph(tf.get_default_graph(), '.',
                         'my_graph_with_constant.pb', as_text=False)
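If you want to check that the constant actually made it into the file, you can reload it and scan the node names:

# Reload the saved graph and confirm the new node is present
gd2 = tf.GraphDef()
with open('my_graph_with_constant.pb', 'rb') as f:
    gd2.MergeFromString(f.read())
print(any(n.name == 'NewConstant' for n in gd2.node))  # should print True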
If for some reason you don't want to import the graph, you can manually build the node object like this:
import numpy as np
import tensorflow as tf
from tensorflow.core.framework.tensor_pb2 import TensorProto
from tensorflow.core.framework.tensor_shape_pb2 import TensorShapeProto

# New constant to add
my_value = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.int32)

# Make graph node
tensor_content = my_value.tobytes()
dt = tf.as_dtype(my_value.dtype).as_datatype_enum
tensor_shape = TensorShapeProto(
    dim=[TensorShapeProto.Dim(size=s) for s in my_value.shape])
tensor_proto = TensorProto(tensor_content=tensor_content,
                           tensor_shape=tensor_shape,
                           dtype=dt)
node = tf.NodeDef(name='MyConstant', op='Const',
                  attr={'value': tf.AttrValue(tensor=tensor_proto),
                        'dtype': tf.AttrValue(type=dt)})

# Read existing graph
gd = tf.GraphDef()
with open('my_graph.pb', 'rb') as f:
    gd.MergeFromString(f.read())

# Add new node
gd.node.extend([node])

# Save the modified GraphDef (note: write gd itself, not the default
# graph, since the new node was added to gd directly)
tf.train.write_graph(gd, '.',
                     'my_graph_with_constant.pb', as_text=False)
Note this case is relatively easy because the node is not connected to any other node.
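If you want to wire the constant up by hand as well, you can build an Add node the same way. This is a sketch continuing the snippet above; 'SomeOperation' is a placeholder for a real op name in your graph, and its output dtype must match dt:

# Sketch: manually build an Add node consuming the output of an
# existing op ('SomeOperation' is a placeholder) and the new constant.
# An entry like 'SomeOperation' in input refers to output 0 of that node.
add_node = tf.NodeDef(name='MyAdd', op='Add',
                      input=['SomeOperation', 'MyConstant'],
                      attr={'T': tf.AttrValue(type=dt)})
gd.node.extend([add_node])
# After import, the sum is available as the tensor 'MyAdd:0'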
I am using the script below to convert my frozen_inference_graph into a TensorRT-optimized one:
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

with tf.Session() as sess:
    # First deserialize your frozen graph:
    with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
        frozen_graph = tf.GraphDef()
        frozen_graph.ParseFromString(f.read())
    # Now you can create a TensorRT inference graph from your
    # frozen graph:
    converter = trt.TrtGraphConverter(
        input_graph_def=frozen_graph,
        nodes_blacklist=['outputs/Softmax'])  # output nodes
    trt_graph = converter.convert()
    # Import the TensorRT graph into a new graph and run:
    output_node = tf.import_graph_def(
        trt_graph,
        return_elements=['outputs/Softmax'])
    sess.run(output_node)
My question is how can I save this optimized graph to disk so I can use it to run inference?
Yes, you can just add these two lines:
saved_model_dir_trt = "./tensorrt_model.trt"
converter.save(saved_model_dir_trt)
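Alternatively, since convert() returns a frozen GraphDef when the converter is fed input_graph_def, you can also serialize that GraphDef to disk yourself and later reload it with tf.import_graph_def:

# Write the TensorRT-optimized GraphDef to disk as a frozen graph
with tf.gfile.GFile('trt_graph.pb', 'wb') as f:
    f.write(trt_graph.SerializeToString())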
These discussions (1, 2) talk about adding new layers to a TensorFlow graph and retraining the model.
And the following code shows how to add a new layer to a restored trainable model.
import tensorflow as tf

sess = tf.Session()
# First let's load meta graph and restore weights
saver = tf.train.import_meta_graph('my_test_model-1000.meta')
saver.restore(sess, tf.train.latest_checkpoint('./'))

# Now, let's access and create placeholders variables and
# create feed-dict to feed new data
graph = tf.get_default_graph()
w1 = graph.get_tensor_by_name("w1:0")
w2 = graph.get_tensor_by_name("w2:0")
feed_dict = {w1: 13.0, w2: 17.0}

# Now, access the op that you want to run.
op_to_restore = graph.get_tensor_by_name("op_to_restore:0")

# Add more to the current graph
add_on_op = tf.multiply(op_to_restore, 2)

print(sess.run(add_on_op, feed_dict))
# This will print 120.
But I would like to add layers to a restored frozen graph.
For my application I only have the frozen model. I would like to add layers to it and freeze it again.
Those layers are for post-processing and do not need to be trained, so they are not in the trained model.
The reason is that I am converting the frozen graph to TensorRT, and I would like to include those layers in the Int8 engine.
I hope the following will help you. I had a custom op that needed to be added to an existing graph loaded from a .pb file (a frozen model file).
With this I was able to append new nodes to the existing graph.
Source code below:
import tensorflow as tf
from tensorflow.python.framework import load_library
from tensorflow.python.platform import resource_loader
from tensorflow.core.protobuf import saved_model_pb2
from tensorflow.python.util import compat

# Utility functions for loading and freezing graphs
def load_graph(frozen_graph_filename):
    with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name="")
    return graph

def freeze_graph(sess, output_graph):
    output_node_names = [
        "custom_op_zero", "custom_op_zero_1"]
    output_node_names = ",".join(output_node_names)
    output_graph_def = tf.graph_util.convert_variables_to_constants(
        sess,
        tf.get_default_graph().as_graph_def(),
        output_node_names.split(",")
    )
    with tf.gfile.GFile(output_graph, "wb") as f:
        f.write(output_graph_def.SerializeToString())
    print("{} ops written to {}.".format(len(output_graph_def.node), output_graph))

# Load custom op's shared object file
zero_out_ops = load_library.load_op_library(
    resource_loader.get_path_to_datafile('my-op/tensorflow_zero_out/python/ops/_zero_out_ops.so'))
zero_out = zero_out_ops.zero_out

frozen_graph = load_graph("frozen_model.pb")
all_tensors = [tensor for op in frozen_graph.get_operations() for tensor in op.values()]
#print(all_tensors[29])

# Input to the new node is the output of the last node
zero_out_custom = zero_out(all_tensors[-1], name="custom_op_zero")
zero_out_custom1 = zero_out(all_tensors[-1], name="custom_op_zero_1")
#print(new_op)

# Save the new frozen model file
with tf.Session(graph=frozen_graph) as persisted_sess:
    for op in persisted_sess.graph.get_operations():
        print(op)
    freeze_graph(persisted_sess, "new_model.pb")
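If you don't have a custom op, the same append-and-refreeze pattern works with built-in ops. A minimal sketch, assuming you want to post-process the graph's last tensor with a softmax (adjust the tensor selection and names to your model):

# Same pattern with a built-in op instead of a custom one
frozen_graph = load_graph("frozen_model.pb")
all_tensors = [t for op in frozen_graph.get_operations() for t in op.values()]
with frozen_graph.as_default():
    post = tf.nn.softmax(all_tensors[-1], name="post_softmax")
with tf.Session(graph=frozen_graph) as sess:
    out_def = tf.graph_util.convert_variables_to_constants(
        sess, frozen_graph.as_graph_def(), ["post_softmax"])
with tf.gfile.GFile("new_model_builtin.pb", "wb") as f:
    f.write(out_def.SerializeToString())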
I am trying to generate an eight-bit quantized graph for a custom LSTM model using TransformGraph. The graph imports fine if I only apply quantize_weights. Once quantize_nodes is applied, importing fails with the error given below:
ValueError: Specified colocation to an op that does not exist during import: lstm1/lstm1/BasicLSTMCellZeroState/zeros in lstm1/lstm1/cond/Switch_2
The code snippet I am using for quantizing is listed below:
from tensorflow.tools.graph_transforms import TransformGraph
import tensorflow as tf

input_names = ["inp/X"]
output_names = ["out/Softmax"]
#transforms = ["quantize_weights", "quantize_nodes"]
#transforms = ["quantize_weights"]
transforms = ["add_default_attributes",
              "strip_unused_nodes",
              "remove_nodes(op=Identity, op=CheckNumerics)",
              #"fold_constants(ignore_errors=true)",
              "fold_batch_norms",
              "fold_old_batch_norms",
              "quantize_weights",
              "quantize_nodes",
              "sort_by_execution_order"]
#output_graph_path = "/tmp/fixed.pb"
output_graph_path = "/tmp/output_graph.pb"

with tf.Graph().as_default():
    output_graph_def = tf.GraphDef()
    with tf.Session() as sess:
        with open(output_graph_path, "rb") as f:
            output_graph_def.ParseFromString(f.read())
        _ = tf.import_graph_def(output_graph_def, name="")
        transformed_graph_def = TransformGraph(output_graph_def, input_names,
                                               output_names, transforms)
    tf.train.write_graph(transformed_graph_def, '/tmp', 'quantized.pb', as_text=False)
I also tried using quantize_graph.py, which always resulted in a KeyError as in https://github.com/tensorflow/tensorflow/issues/8025. I believe that code is no longer maintained. Can anyone please point out how to debug this issue?
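One workaround I have seen for this kind of "Specified colocation to an op that does not exist" failure (not a guaranteed fix) is to strip the colocation attributes from the transformed GraphDef before importing or saving it, since the transforms can delete nodes that other nodes' _class attributes still reference:

# Possible workaround: drop colocation ("_class") attributes that may
# reference nodes removed by the transforms
for node in transformed_graph_def.node:
    if '_class' in node.attr:
        del node.attr['_class']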
I have already converted a pre-trained .ckpt file to a .pb file, freezing the model and saving the weights as well. What I am trying to do now is make a simple inference using that .pb file and extract and save the output image. The model is a Fully Convolutional Network for Semantic Segmentation, downloaded from here: https://github.com/MarvinTeichmann/KittiSeg . So far I have managed to load the image, set the default tf graph, import the graph defined by the model, read the input and output tensors, and run the session (the error occurs there).
import tensorflow as tf
import os
import numpy as np
from tensorflow.python.platform import gfile
from PIL import Image

# Read the image & get statistics
img = Image.open('/path-to-image/demoImage.png')
img.show()
width, height = img.size
print(width)
print(height)

#Plot the image
#image.show()

with tf.Graph().as_default() as graph:
    with tf.Session() as sess:
        # Load the graph in graph_def
        print("load graph")
        # We load the protobuf file from the disk and parse it to retrieve the unserialized graph_def
        with gfile.FastGFile("/path-to-FCN-model/FCN8.pb", 'rb') as f:
            #Set default graph as current graph
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(f.read())
            #sess.graph.as_default() #new line
            # Import a graph_def into the current default Graph
            tf.import_graph_def(graph_def, name='')

            # Print the name of operations in the session
            #for op in sess.graph.get_operations():
            #    print("Operation Name :", op.name)        # Operation name
            #    print("Tensor Stats :", str(op.values()))  # Tensor name

            # INFERENCE Here
            l_input = graph.get_tensor_by_name('Placeholder:0')
            l_output = graph.get_tensor_by_name('save/Assign_38:0')

            print("l_input", l_input)
            print("l_output", l_output)
            print()
            print()

            # Acceptable feed values include Python scalars, strings, lists, numpy ndarrays, or TensorHandles.
            result = sess.run(l_output, feed_dict={l_input: img})
            print(result)
            print("Inference done")

            # Info
            # First tensor name: Placeholder:0
            # Last tensor name:  save/Assign_38:0
Can the error come from the format of the image (e.g. should I convert the .png to another format)? Or is it another, more fundamental error?
I managed to fix the error; below is the working script to run inference on a single image with a Fully Convolutional Network (for whoever is interested in an alternative segmentation algorithm to SegNet). This model uses bilinear interpolation for upscaling rather than an unpooling layer. Anyway, because the model is available for download in .chkpt format, you must first freeze the model and save it as a .pb file. Then you must pass the network through the TF optimizer to set the dropout probabilities to 1. Afterwards, set the correct input and output tensor names in this script, and the inference works correctly, extracting the segmented image.
import tensorflow as tf  # Default graph is initialized when the library is imported
import os
from tensorflow.python.platform import gfile
from PIL import Image
import numpy as np
import scipy
from scipy import misc
import matplotlib.pyplot as plt
import cv2

with tf.Graph().as_default() as graph:  # Set default graph as graph
    with tf.Session() as sess:
        # Load the graph in graph_def
        print("load graph")
        # We load the protobuf file from the disk and parse it to retrieve the unserialized graph_def
        with gfile.FastGFile("/path-to-protobuf/FCN8_Freezed.pb", 'rb') as f:
            print("Load Image...")
            # Read the image & get statistics
            image = scipy.misc.imread('/Path-To-Image/uu_000010.png')
            image = image.astype(float)
            Input_image_shape = image.shape
            height, width, channels = Input_image_shape

            print("Plot image...")
            #scipy.misc.imshow(image)

            # Set FCN graph to the default graph
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(f.read())
            sess.graph.as_default()

            # Import a graph_def into the current default Graph (In this case, the weights are (typically) embedded in the graph)
            tf.import_graph_def(
                graph_def,
                input_map=None,
                return_elements=None,
                name="",
                op_dict=None,
                producer_op_list=None
            )

            # Print the name of operations in the session
            for op in graph.get_operations():
                print("Operation Name :", op.name)         # Operation name
                print("Tensor Stats :", str(op.values()))  # Tensor name

            # INFERENCE Here
            l_input = graph.get_tensor_by_name('Inputs/fifo_queue_Dequeue:0')    # Input Tensor
            l_output = graph.get_tensor_by_name('upscore32/conv2d_transpose:0')  # Output Tensor

            print("Shape of input : ", tf.shape(l_input))
            #initialize_all_variables
            tf.global_variables_initializer()

            # Run Kitty model on single image
            Session_out = sess.run(l_output, feed_dict={l_input: image})
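To actually extract and save the segmented image from Session_out, here is a short sketch, assuming the output is a score map of shape [batch, height, width, num_classes] (check your tensor's real shape first):

# Assumed post-processing: squeeze the batch dimension, take the
# argmax class per pixel, and save the label map as an image
out = np.squeeze(Session_out)
seg_map = np.argmax(out, axis=-1).astype(np.uint8)
scipy.misc.imsave('/tmp/segmentation.png', seg_map)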
Have you already looked at demo.py? Line 141 shows how they modify the input of the graph:
# Create placeholder for input
image_pl = tf.placeholder(tf.float32)
image = tf.expand_dims(image_pl, 0)
# build Tensorflow graph using the model from logdir
prediction = core.build_inference_graph(hypes, modules,
                                        image=image)
And line 164 shows how the image is opened:
image = scp.misc.imread(input_image)
This is fed directly to image_pl. The only catch is that core.build_inference_graph is a TensorVision call.
Note: it would also help to provide the exact error message as input.
I am trying to simply save and restore a graph, but the simplest example does not work as expected (this is using version 0.9.0 or 0.10.0 on 64-bit Linux without CUDA, with Python 2.7 or 3.5.2).
First I save the graph like this:
import tensorflow as tf

v1 = tf.placeholder('float32')
v2 = tf.placeholder('float32')
v3 = tf.mul(v1, v2)
c1 = tf.constant(22.0)
v4 = tf.add(v3, c1)

sess = tf.Session()
result = sess.run(v4, feed_dict={v1: 12.0, v2: 3.3})

g1 = tf.train.export_meta_graph("file")
## alternatively I also tried:
## g1 = tf.train.export_meta_graph("file", collection_list=["v4"])
This creates a file "file" that is non-empty and also sets g1 to something that looks like a proper graph definition.
Then I try to restore this graph:
import tensorflow as tf
g=tf.train.import_meta_graph("file")
This works without an error, but does not return anything at all.
Can anyone provide the necessary code to simply save the graph for "v4" and completely restore it, so that running it in a new session will produce the same result?
To reuse a MetaGraphDef, you will need to record the names of interesting tensors in your original graph. For example, in the first program, set an explicit name argument in the definition of v1, v2 and v4:
v1 = tf.placeholder(tf.float32, name="v1")
v2 = tf.placeholder(tf.float32, name="v2")
# ...
v4 = tf.add(v3, c1, name="v4")
Then, you can use the string names of the tensors in the original graph in your call to sess.run(). For example, the following snippet should work:
import tensorflow as tf
_ = tf.train.import_meta_graph("./file")
sess = tf.Session()
result = sess.run("v4:0", feed_dict={"v1:0": 12.0, "v2:0": 3.3})
Alternatively, you can use tf.get_default_graph().get_tensor_by_name() to get tf.Tensor objects for the tensors of interest, which you can then pass to sess.run():
import tensorflow as tf
_ = tf.train.import_meta_graph("./file")
g = tf.get_default_graph()
v1 = g.get_tensor_by_name("v1:0")
v2 = g.get_tensor_by_name("v2:0")
v4 = g.get_tensor_by_name("v4:0")
sess = tf.Session()
result = sess.run(v4, feed_dict={v1: 12.0, v2: 3.3})
UPDATE: Based on the discussion in the comments, here is the complete example for saving and loading, including saving the variable contents. It illustrates saving a variable by doubling the value of the variable vx in a separate operation.
Saving:
import tensorflow as tf
v1 = tf.placeholder(tf.float32, name="v1")
v2 = tf.placeholder(tf.float32, name="v2")
v3 = tf.mul(v1, v2)
vx = tf.Variable(10.0, name="vx")
v4 = tf.add(v3, vx, name="v4")
saver = tf.train.Saver([vx])
sess = tf.Session()
sess.run(tf.initialize_all_variables())
sess.run(vx.assign(tf.add(vx, vx)))
result = sess.run(v4, feed_dict={v1: 12.0, v2: 3.3})
print(result)
saver.save(sess, "./model_ex1")
Restoring:
import tensorflow as tf
saver = tf.train.import_meta_graph("./model_ex1.meta")
sess = tf.Session()
saver.restore(sess, "./model_ex1")
result = sess.run("v4:0", feed_dict={"v1:0": 12.0, "v2:0": 3.3})
print(result)
The bottom line is that, in order to make use of a saved model, you must remember the names of at least some of the nodes (e.g. a training op, an input placeholder, an evaluation tensor, etc.). The MetaGraphDef stores the list of variables that are contained in the model, and helps to restore these from a checkpoint, but you are required to reconstruct the tensors/operations used in training/evaluating the model yourself.
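One way to avoid hard-coding tensor names is to record the interesting tensors in collections before exporting; collections are serialized into the MetaGraphDef, so they survive the round trip. A minimal sketch in the same style as the examples above:

import tensorflow as tf

# Record the tensors of interest in named collections before exporting
v1 = tf.placeholder(tf.float32, name="v1")
v2 = tf.placeholder(tf.float32, name="v2")
v4 = tf.add(tf.mul(v1, v2), tf.constant(22.0), name="v4")
tf.add_to_collection('inputs', v1)
tf.add_to_collection('inputs', v2)
tf.add_to_collection('outputs', v4)
tf.train.export_meta_graph("file_with_collections")

# In a new graph (or a new program), retrieve them by collection name
with tf.Graph().as_default():
    tf.train.import_meta_graph("file_with_collections")
    v1, v2 = tf.get_collection('inputs')
    v4, = tf.get_collection('outputs')
    with tf.Session() as sess:
        print(sess.run(v4, feed_dict={v1: 12.0, v2: 3.3}))  # prints 61.6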
tf.train.import_meta_graph is deprecated now.
Replace tf.train.import_meta_graph in your code with tf.compat.v1.train.import_meta_graph.
That should resolve your error.
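In TensorFlow 2.x you can also keep the rest of the 1.x-style code unchanged by importing the compatibility module once at the top, for example:

# Run 1.x-style graph code under TF 2.x via the compat.v1 shim
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

saver = tf.train.import_meta_graph("./model_ex1.meta")
sess = tf.Session()
saver.restore(sess, "./model_ex1")
print(sess.run("v4:0", feed_dict={"v1:0": 12.0, "v2:0": 3.3}))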