I am using skflow for simple classification. I noticed that I end up with multiple sessions/graphs. For example, the error that I get is
ValueError: Cannot execute operation using Run(): No default session is registered. Use 'with default_session(sess)' or pass an explicit session to Run(session=sess)
When I tried to set a tf.Session().as_default() in my main function, I realized that there is another session and graph created by skflow. After some research this turned out to be true: skflow creates it here.
classifier = skflow.TensorFlowEstimator(
    model_fn=model_fn, n_classes=2,
    steps=100, optimizer='Adam',
    learning_rate=0.01, continue_training=True)
My problem is that I want to print some variables that are used during training, for example the word embeddings matrix. In my model_fn I save the word embeddings matrix, since I have access to it there. But when I try to print it, it seems the session was already closed, and I get the error mentioned above. So I am not sure how I can set one default session, why skflow creates another one, how I can print a variable that is used inside the skflow classifier, and also why in the SummaryWriter graph in TensorBoard I only see the main graph (not the one created by skflow)?
I might be very wrong, so any help would be appreciated!
According to the demo code
"Image similarity estimation using a Siamese Network with a contrastive loss"
https://keras.io/examples/vision/siamese_contrastive/
I'm trying to save the model with model.save to h5 or hdf5; however, after I use load_model (I even tried load_weights), it shows the error message: unknown opcode.
I have done some googling, and everything tells me it's a Python version problem between py3.5 and py3.6.
But actually I use only Python 3.8...
Other info says that some extra work needs to be done, either in the model building or in load_model.
It would be very kind if anyone could help provide the save and load model parts, to make this demo code more complete.
Thanks!!
Actually, here they are using two individual components which have to be passed in as custom objects.
Custom objects:
contrastive loss
embedding layer: where we compute the euclidean_distance.
Saving the model:
Saving the model is straightforward:
<model_name>.save("siamese_contrastive.h5")
Loading the model:
Here comes the good part: the model will not load directly, because it doesn't have an understanding of two things: one is your custom layer and the second is your loss.
model = tf.keras.models.load_model('siamese_contrastive.h5', custom_objects={ })
In the custom_objects argument shown above, you have to provide the definitions of those two objects.
After that, it will accept your model, and it will run separately at inference time.
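A rough sketch of what that can look like, assuming the contrastive loss is built by the loss(margin) wrapper from the linked demo (the key must match the name the function had when the model was compiled):

import tensorflow as tf

def loss(margin=1):
    # Contrastive loss wrapper, as defined in the demo code.
    def contrastive_loss(y_true, y_pred):
        square_pred = tf.math.square(y_pred)
        margin_square = tf.math.square(tf.math.maximum(margin - y_pred, 0))
        return tf.math.reduce_mean(
            (1 - y_true) * square_pred + y_true * margin_square
        )
    return contrastive_loss

# If the Lambda layer's function (euclidean_distance) cannot be resolved
# at load time, add it to custom_objects as well.
model = tf.keras.models.load_model(
    "siamese_contrastive.h5",
    custom_objects={"contrastive_loss": loss(margin=1)},
)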
Still figuring out how?
Have a look at my implementation and let me know if you still have any questions: https://github.com/anukash/Keras_siamese_contrastive
I'm trying to modify a program that uses the Estimator class in TensorFlow (v1.10) and I would like to access the evaluation metric results every time evaluation occurs so that I can copy the checkpoint files only when a new maximum has been achieved.
One idea I had was to create a class inheriting from SessionRunHook and do the work I want in the after_run method. According to the documentation, I can specify what is passed to after_run using before_run. However, I cannot find a way to access the evaluation metric results I want from the information passed in to before_run.
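Roughly the skeleton I had in mind (a sketch; the class name is made up, and the fetches are exactly the part I cannot fill in):

import tensorflow as tf

class BestMetricHook(tf.train.SessionRunHook):
    def before_run(self, run_context):
        # Request extra tensors to be fetched with this run; this is where
        # I would need handles to the evaluation metric tensors.
        return tf.train.SessionRunArgs(fetches=[])

    def after_run(self, run_context, run_values):
        # run_values.results holds whatever before_run requested; if the
        # metrics were available here, I could compare them against the
        # previous maximum and copy the checkpoint files.
        pass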
I looked into the Estimator code, and it appears that it writes the results to a summary file, so another idea I had was to read this back in the after_run method, but the summary API doesn't seem to provide any read operations.
Are there any other ways I can achieve what I want to do? Not using the Estimator class is not an option as that would involve drastic changes to the code I'm working with.
Checkpoints are not the same as exporting. Checkpoints are about fault recovery and involve saving the complete training state (weights, global step number, etc.).
In your case I would recommend exporting. The exported model will be written to a directory called "exporter", and the serving input function specifies what the end user will be expected to provide to the prediction service.
You can use the BestExporter class to export only the models that are performing best:
https://www.tensorflow.org/api_docs/python/tf/estimator/BestExporter
This class exports the serving graph and checkpoints of the best models.
It also performs a model export every time the new model is better than any existing model.
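A minimal sketch of how that can be wired up; the estimator, input functions, and serving_input_receiver_fn below are assumed to already exist in your code:

import tensorflow as tf

# Exports only when the new model beats the best one so far
# (by default it compares the evaluation loss).
exporter = tf.estimator.BestExporter(
    name="best_exporter",
    serving_input_receiver_fn=serving_input_receiver_fn,
    exports_to_keep=5,  # keep the five best exports
)

train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=10000)
eval_spec = tf.estimator.EvalSpec(input_fn=eval_input_fn, exporters=exporter)

tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)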
I'm using the Keras plot_model() function to visualize my machine learning models. Apart from the issue that the first node of my output is always simply a very large number, there is another thing that annoys me about this function: it does not provide very elaborate output. For example, I would like to be able to see more information about the loss function used, the batch size, the number of epochs, the optimizer used, etc.
Is there any way I can retrieve this information from a model I previously saved to disk and loaded again with the model_from_json() function?
How about the TensorBoard callback? It will create interactive graphs that you can explore based on your model, if you use TensorFlow as your backend.
You just need to add it as a callback to your fit function and make sure write_graph=True is set (which it is by default). If you want a shortcut, you can invoke its methods directly instead of passing it as a callback:
from keras.callbacks import TensorBoard

tensorboard = TensorBoard(log_dir='./logs')
tensorboard.set_model(model)  # your model here, will write graph etc.
tensorboard.on_train_end()  # will close the writer
Then just run tensorboard --logdir=./logs to start the server.
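For completeness, the usual callback route looks something like this (the model and data names are placeholders):

from keras.callbacks import TensorBoard

tensorboard = TensorBoard(log_dir='./logs', write_graph=True)
model.fit(x_train, y_train, epochs=10, callbacks=[tensorboard])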
Maybe this is a stupid question, but I recently switched from basic TensorFlow to tflearn, and while I knew little of TensorFlow, I know even less of tflearn, as I have just begun to experiment with it. I was able to create a network, train it, and generate a model that achieved a satisfactory metric. I did all of this without using a TensorFlow session because a) none of the documentation I was looking at suggested it and b) I didn't even think to use one.
However, I would now like to predict a value for a single input (the model performs regression on images, so I'm trying to get a value for a single image), and I'm getting an error that the convolutional layers need to be initialized (specifically "FailedPreconditionError: Attempting to use uninitialized value Conv2D/W").
The only thing I've added, though, is these two lines:
model = Evaluator(network)
model.predict(feed_dict={input_placeholder: image_data})
I'm asking this as a general question because my actual code is a bit troublesome to post here; admittedly, I've been very sloppy in writing it. I will mention, however, that even if I start a session and initialize all variables before that second line, then run the line in the session, I get the same error.
Succinctly put, does tflearn require a session if I've not used TensorFlow stuff directly anywhere in my code? If so, does the model need to be trained in the session? And if not, what about those two lines would cause such an error?
I'm hoping it isn't necessary for more code to be posted, but if this isn't a general issue and is actually specific to my code then I can try to format it to be understandable here and then edit the post.
I know that there are countless questions on Stack Overflow, GitHub, etc. on how to restore a trained model in TensorFlow. I have read most of them (1, 2, 3).
I have almost exactly the same problem as in 3. However, I would like, if possible, to solve it in a different fashion: my training and my test need to be in separate scripts called from the shell, and I do not want to repeat in the test script the exact same lines I used to define the graph, so I cannot use TensorFlow FLAGS or the other answers based on rerunning the graph by hand.
I also do not want to sess.run every variable and manually map them by hand, as was explained there, since my graph is quite big (using import_graph_def with the input_map argument).
So I run some graph and train it in a specific script, like for instance (but without the training part):
# Script 1
import tensorflow as tf
import cPickle as pickle

x = tf.Variable(42)
saver = tf.train.Saver()
sess = tf.Session()

# Saving the graph
graph_def = sess.graph_def
with open('graph.pkl', 'wb') as output:
    pickle.dump(graph_def, output, pickle.HIGHEST_PROTOCOL)

# Training the model
sess.run(tf.initialize_all_variables())

# Saving the variables
saver.save(sess, "pretrained_model.ckpt")
I now have both the graph and the variables saved, so I should be able to run my test model from another script, even if I have extra training nodes in my graph.
# Script 2
import tensorflow as tf
import cPickle as pickle

sess = tf.Session()
with open('graph.pkl', 'rb') as f:  # 'input' would shadow the builtin
    graph_def = pickle.load(f)

tf.import_graph_def(graph_def, name='persisted')
Then, obviously, I want to restore the variables using a saver, but I encounter the same problem as in 3: there are no variables in the imported graph, so I cannot even create a saver. So I cannot write:
saver=tf.train.Saver()
saver.restore(sess,"pretrained_model.ckpt")
Is there a way to bypass those limitations? I thought that importing the graph would recover the uninitialized variables in every node, but it seems not. Do I really need to redefine the graph a second time, like most of the answers suggest?
The list of variables is saved in a Collection, which is not saved in the GraphDef. By default, Saver uses the list in the ops.GraphKeys.VARIABLES collection (accessible through tf.all_variables()), and if you restored from a GraphDef rather than using the Python API to build your model, that collection is empty. You could specify the list of variables manually with tf.train.Saver(var_list=['MyVariable1:0', 'MyVariable2:0', ...]).
Alternatively, instead of GraphDef you could use MetaGraphDef, which does save collections; there's a recently added MetaGraphDef HowTo.
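A minimal sketch of the MetaGraphDef route, split across two scripts as in the question (API names from that TF era):

# Script 1: saver.save also writes pretrained_model.ckpt.meta, a
# MetaGraphDef containing the GraphDef plus the collections.
import tensorflow as tf

x = tf.Variable(42, name='x')
saver = tf.train.Saver()
sess = tf.Session()
sess.run(tf.initialize_all_variables())
saver.save(sess, "pretrained_model.ckpt")

# Script 2: import_meta_graph rebuilds the graph *and* its collections,
# and returns a Saver that can restore the checkpointed values.
import tensorflow as tf

sess = tf.Session()
new_saver = tf.train.import_meta_graph("pretrained_model.ckpt.meta")
new_saver.restore(sess, "pretrained_model.ckpt")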
To my knowledge, and per my tests, you can't simply pass names to a tf.train.Saver object; it must be either a list of variables or a dictionary.
I would also like to read the model from a graph_def and then load the variables using a saver; however, attempting that results only in the error message: "Variable to save is not a variable".