When using TensorFlow, the graph is logged in the summary file, which I "abuse" to keep track of architecture modifications.
But that means I need to use TensorBoard every time I want to visualise and inspect the graph.
Is there a way to write out such a graph .pbtxt in code, or to export it from the summary file without going through TensorBoard?
Thanks for your answer!
The graph is available by calling tf.get_default_graph(). You can get it in GraphDef format with graph.as_graph_def().
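Putting those two calls together, a minimal sketch that writes the graph out with tf.train.write_graph (using the 1.x graph API via tf.compat.v1 so it also runs on TF 2.x; the tiny graph here is just a stand-in for your real model):

```python
import tensorflow.compat.v1 as tf  # plain `import tensorflow as tf` on TF 1.x

tf.disable_eager_execution()  # build a graph instead of executing eagerly

# Stand-in for your real model.
a = tf.constant(1.0, name="a")
b = tf.constant(2.0, name="b")
c = tf.add(a, b, name="c")

graph_def = tf.get_default_graph().as_graph_def()

# as_text=True writes a human-readable .pbtxt;
# as_text=False would write a binary .pb instead.
tf.train.write_graph(graph_def, "/tmp", "graph.pbtxt", as_text=True)
```

The resulting /tmp/graph.pbtxt is the same text-format GraphDef that TensorBoard visualises, so you can diff it between runs to track architecture changes without opening TensorBoard at all.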
Related
I know how to create a frozen graph and use export_tflite_ssd_graph. However, is it possible to add post-processing to a tflite graph?
It has not been possible to freeze a tflite graph since r1.9, so the usual function won't work. Is there any workaround to add post-processing to a tflite graph when you only have the .tflite file?
I am using a tensorflow estimator object to train a model from the official tensorflow layers documentation (https://www.tensorflow.org/tutorials/layers). I can see that the training loss is displayed on the console during training. Is there a way to store these training loss values?
Thanks!
The displaying is done via logging.info. tf.estimator creates a LoggingTensorHook for the training loss to do this, see here.
I suppose you could reroute the logging output to some file, but this would still not give you the raw values.
Two ways I could think of:
Write your own hook to store the values; this would probably look extremely similar to LoggingTensorHook, you would just need to write the numbers to a file instead of printing them.
By default tf.estimator also creates summary data in TensorBoard for the training loss; you can open the "Scalars" tab in TensorBoard, where you should see the loss curve. Tick "Show data download links" in the top left corner; this gives you an option to download each graph's data in either CSV or JSON format. By default, both the logging and summary hooks log values every 100 steps, so the graph should have the same information you saw in the console. If you're unfamiliar with TensorBoard, there are tutorials on the TensorFlow website as well; the basic usage should be quite simple!
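For the first option, a rough sketch of such a hook (the class name, the tensor name "loss:0", and the CSV path are all assumptions for illustration; check the actual name of your loss tensor in your model_fn):

```python
import tensorflow.compat.v1 as tf  # plain `import tensorflow as tf` on TF 1.x


class LossToFileHook(tf.train.SessionRunHook):
    """Sketch of a hook that appends the training loss to a CSV file."""

    def __init__(self, loss_tensor_name, output_path, every_n_steps=100):
        self._loss_name = loss_tensor_name  # e.g. "loss:0" -- depends on your graph
        self._path = output_path
        self._every_n = every_n_steps
        self._step = 0

    def before_run(self, run_context):
        # Ask the session to fetch the loss tensor alongside the training ops.
        loss = run_context.session.graph.get_tensor_by_name(self._loss_name)
        return tf.train.SessionRunArgs(loss)

    def after_run(self, run_context, run_values):
        # run_values.results holds whatever before_run asked for (the loss).
        self._step += 1
        if self._step % self._every_n == 0:
            with open(self._path, "a") as f:
                f.write("%d,%f\n" % (self._step, run_values.results))
```

You would then pass it in with something like estimator.train(input_fn, hooks=[LossToFileHook("loss:0", "loss.csv")]), mirroring how the built-in LoggingTensorHook is wired up.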
You can use the TensorBoard event file in model_dir after training your estimator with estimator.train():
model = tf.estimator.Estimator(..., model_dir='tmp')
# model data will be saved in the 'tmp' directory after training
The event file has a name like events.out.tfevents.15121254....; it saves the log of the training process (there is another event file in the eval folder that saves the evaluation log). You can get the training loss with:
for e in tf.train.summary_iterator(path_to_events_file):
    for v in e.summary.value:
        if v.tag == 'loss':
            print(v.simple_value)
In addition, you can save other values during training by adding tf.summary ops inside your model_fn:
tf.summary.scalar('accuracy', accuracy)
reference: https://www.tensorflow.org/api_docs/python/tf/train/summary_iterator
I have trained new model on top of ssd_mobilenet_v1_coco for a custom data set. This model works fine in tensorflow. But now I want to use this in OpenCV.
net = cv2.dnn.readNetFromTensorflow("model/frozen_inference_graph.pb", "model/protobuf.pbtxt")
detections = net.forward()
For the config file I converted the frozen graph to .pbtxt and passed it in, but then I got the following error:
[libprotobuf ERROR /home/chamath/Projects/opencv/opencv/3rdparty/protobuf/src/google/protobuf/text_format.cc:298] Error parsing text-format tensorflow.GraphDef: 731:5: Unknown enumeration value of "DT_RESOURCE" for field "type".
As suggested here, I tried the config file mentioned in the thread, but when I use it object detection does not work properly: the wrong number of boxes is detected, and they are misplaced.
Is there any method to create a .pbtxt config file that works with OpenCV? Or any suggestions on how to make my model work in OpenCV?
It is likely that you haven't generated the proper graph after training.
You have to convert the graph like this:
python ../opencv/samples/dnn/tf_text_graph_ssd.py \
    --input trained-inference-graphs/inference_graph_v5.pb/frozen_inference_graph.pb \
    --output trained-inference-graphs/inference_graph_v5.pb/graph.pbtxt
Then pass the .pb and the generated graph.pbtxt to cv2.dnn.readNetFromTensorflow; that should work for you :)
I am implementing a GAN to generate fake tweets. I would like to visualize my model using Tensorboard but it's not displaying anything. This is my first time using Tensorboard, and I followed a tutorial on YouTube (https://gist.github.com/dandelionmane/4f02ab8f1451e276fea1f165a20336f1#file-mnist-py). When I run tensorboard --logdir=/path_to_dir it gives me a port, and this port takes me to Tensorboard, but nothing is displayed. Below is my code. Thank you!
code deleted
It's pretty long, so please ctrl-F to find the lines related to Tensorboard.
You need to add the following line after you have defined your graph:
writer.add_graph(sess.graph)
Look at the documentation here.
Look at this question:
How to create a Tensorflow Tensorboard Empty Graph
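Putting the above together, a minimal self-contained sketch (the log directory and the tiny graph are placeholders for your own GAN graph and path):

```python
import tensorflow.compat.v1 as tf  # plain `import tensorflow as tf` on TF 1.x

tf.disable_eager_execution()  # build a graph instead of executing eagerly

# Stand-in for your real model graph.
x = tf.constant([[1.0, 2.0]], name="x")
y = tf.reduce_sum(x, name="out")

with tf.Session() as sess:
    writer = tf.summary.FileWriter("/tmp/logs")
    writer.add_graph(sess.graph)  # the line that was missing
    writer.close()
```

After running this, `tensorboard --logdir=/tmp/logs` should show the graph under the "Graphs" tab.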
I'm training an inception model from scratch using the scripts provided here.
The output of the training are these files:
checkpoint
events.out.tfevents.1499334145.fdbf-Dell
model.ckpt-5000.data-00000-of-00001
model.ckpt-5000.index
model.ckpt-5000.meta
...
model.ckpt-25000.data-00000-of-00001
model.ckpt-25000.index
model.ckpt-25000.meta
Does someone have a script to convert these files into something I can use to classify my images? I have already tried modifying the inception_train.py file to output graph.pb, but nothing happens...
Any help would be appreciated, thank you!
How to use your checkpoint directly is explained here.
To create a .pb file, you'll have to freeze the graph, as explained here.
To create the initial .pb (or .pbtxt) file needed for freeze_graph, you can use tf.train.write_graph()
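As a sketch of that last step (using tf.compat.v1 so it also runs on TF 2.x; the checkpoint path in the usage comment is a placeholder for one of your .meta files), you can rebuild the graph from a .meta file and write it out as the text GraphDef that freeze_graph needs:

```python
import tensorflow.compat.v1 as tf  # plain `import tensorflow as tf` on TF 1.x

tf.disable_eager_execution()  # build graphs instead of executing eagerly


def checkpoint_meta_to_pbtxt(meta_path, out_dir, out_name="graph.pbtxt"):
    """Rebuild the graph stored in a .meta file and write it as a text
    GraphDef, which can then be passed to freeze_graph as --input_graph."""
    with tf.Graph().as_default():
        tf.train.import_meta_graph(meta_path)
        tf.train.write_graph(tf.get_default_graph().as_graph_def(),
                             out_dir, out_name, as_text=True)

# e.g. checkpoint_meta_to_pbtxt("model.ckpt-25000.meta", ".")
```

The resulting graph.pbtxt plus the matching checkpoint files are exactly the inputs freeze_graph expects; the output node name you pass to freeze_graph depends on your model.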