Sessions with tensorflow - python

I'm a TensorFlow beginner, so excuse my question if it is stupid.
I checked a GitHub example that implements a CNN on the MNIST data with TensorFlow.
The link is below:
https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/convolutional_network.py
However, I need to save the model generated by this code, but I don't know how to do it, since the code does not explicitly use sessions. How can I incorporate a session into it?
Would appreciate your response.

The linked code uses tf.estimator.Estimator to train the model, which is why you never see an explicit tf.Session: the estimator manages sessions internally. Its documentation covers how to export the model using export_savedmodel, and a trained model can be reloaded by pointing the model_dir argument of the tf.estimator.Estimator initialiser at the directory containing its checkpoints.
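A minimal sketch of what exporting could look like for the linked example, assuming its model_fn and its 784-dimensional flattened MNIST inputs keyed as 'images' (the paths here are illustrative):

import tensorflow as tf

# Sketch only: model_fn is the model function defined in the linked script,
# and model_dir is where the estimator writes its checkpoints.
estimator = tf.estimator.Estimator(model_fn=model_fn, model_dir='/tmp/mnist_cnn')

# Describe the raw inputs the exported model should accept at serving time.
feature_spec = {'images': tf.placeholder(tf.float32, shape=[None, 784])}
serving_input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(feature_spec)

# Writes a reloadable SavedModel under /tmp/exported.
estimator.export_savedmodel('/tmp/exported', serving_input_fn)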

Related

keras demo code siamese_contrastive.py save and load model?

According to the demo code
"Image similarity estimation using a Siamese Network with a contrastive loss"
https://keras.io/examples/vision/siamese_contrastive/
I'm trying to save the model with model.save to h5 or hdf5; however, after I use load_model (I even tried load_weights)
it shows the error message: unknown opcode.
My searching all points to a Python version problem between 3.5 and 3.6,
but I actually use only Python 3.8.
Other sources say that some extra work needs to be done, either when building the model or when calling load_model.
It would be very kind if anyone could help provide the save-and-load-model part
to make this demo code more complete.
thanks!!
The demo actually uses two pieces that Keras has to treat as custom objects.
Custom objects:
contrastive loss
embedding layer: the layer where euclidean_distance is computed (a Lambda layer in the demo).
Saving model:
Saving the model is straightforward:
<model_name>.save("siamese_contrastive.h5")
Loading model:
Here comes the tricky part: the model will not load directly, because Keras has no understanding of two things, one is your custom layer and the second is your loss.
model = tf.keras.models.load_model('siamese_contrastive.h5', custom_objects={ })
In the custom_objects dictionary shown above, you have to provide the definitions of those two objects.
After that, Keras will accept your model and it will run on its own at inference time.
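A minimal sketch of what that dictionary might look like, assuming the euclidean_distance function and the loss(margin) factory from the demo code are defined in (or imported into) the loading script:

import tensorflow as tf

# Sketch only: euclidean_distance and loss(margin=1) are the definitions
# from the demo code; the keys must match the names the model was saved with.
model = tf.keras.models.load_model(
    'siamese_contrastive.h5',
    custom_objects={
        'euclidean_distance': euclidean_distance,
        'contrastive_loss': loss(margin=1),
    },
)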
Still figuring out how??
Have a look at my implementation let me know if you still have any questions: https://github.com/anukash/Keras_siamese_contrastive

How to open and analyze Google Tables model using python/tf?

I have read several discussions about this and still cannot make it work for my case.
I have a classification model trained using Google Tables.
I exported the model and downloaded the directory with the CLI.
My goal is to get a better understanding of the model trained by Google: study it and understand its decisions, and later try to prune it to improve performance.
I'm using this code, just to start:
import tensorflow as tf
from tensorflow import keras
import struct2tensor
location = "model_dir"
model = tf.saved_model.load(location)
model.summary()
I get this error:
AttributeError: 'AutoTrackable' object has no attribute 'summary'
The variable model is of type:
<tensorflow.python.training.tracking.tracking.AutoTrackable at 0x7fa8eaa7ed30>
And I'm stuck there and don't know how to continue. I'm using Python 3.8 and the latest versions of those libraries. Any idea how I can proceed?
Thanks!
The proper method for loading your model depends on the file format.
You can see in the Tensorflow documentation that "The object returned by tf.saved_model.load is not a Keras object (i.e. doesn't have .fit, .predict, etc. methods)" and "Use tf.keras.models.load_model to restore the Keras model".
I'm not sure whether you want to use the keras module or not, but since you have imported it I assume you do. In that case I would recommend checking this other Stack Overflow thread, where it is explained how to use the tf.keras.models.load_model method depending on whether your model is saved as .pb or .h5.
If the model is saved as .pb, you should call the method with the string pointing to the directory where the model is saved, as you did in your code snippet, but in this case using the Keras method:
model = tf.keras.models.load_model('model_dir')
If instead it is saved as .h5, you should point it at the file itself:
model = tf.keras.models.load_model('my_model_in_h5.h5')
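Note that tf.keras.models.load_model only works if the SavedModel was written by Keras in the first place, which may not be the case for a Google Tables export. If it fails, you can still inspect the serving interface of the generic loaded object; a sketch, assuming the usual 'serving_default' signature name:

import tensorflow as tf

loaded = tf.saved_model.load('model_dir')
print(list(loaded.signatures.keys()))         # names of the exported signatures
infer = loaded.signatures['serving_default']  # assumed signature name
print(infer.structured_input_signature)       # what the model expects as input
print(infer.structured_outputs)               # what it returns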

Save entire model but load weights only

I have defined a deep learning model my_unet() in TensorFlow. During training I set save_weights_only=False since I wanted to save the entire model (not only the weights but the whole configuration). The generated file is path_to_model.hdf5.
However, when loading the model back I used an earlier version of my code (I forgot to update it), in which I first build the model and then load the weights:
model = my_unet()
model.load_weights('path_to_model.hdf5')
Instead of simply using model = tf.keras.models.load_model('path_to_model.hdf5') to load the entire model.
Both ways of loading provided the same predictions when run on some dummy data, and there were no errors.
My question is: why does loading with model.load_weights() from a full-model file not cause any problem? What is the structure of the hdf5 file, and how exactly do these two ways of loading work? Where can I find this information?
In short: model.save() writes an HDF5 file that contains the architecture (as a JSON attribute), a group with the weights, and optionally the optimizer state. model.load_weights() reads only the weights group and ignores the rest, which is why it works as long as my_unet() rebuilds exactly the same architecture. For the general HDF5 data model, see the documentation here for future reference: http://davis.lbl.gov/Manuals/HDF5-1.8.7/UG/03_DataModel.html
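A minimal sketch to verify this yourself with h5py (the group and attribute names below are what recent Keras versions write and may differ across versions):

import h5py

with h5py.File('path_to_model.hdf5', 'r') as f:
    print(list(f.keys()))               # typically ['model_weights', 'optimizer_weights']
    print(f.attrs.get('model_config'))  # JSON architecture, used by load_model()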

How can I "see" the model/network when loading a model from tfhub?

I'm new to this topic, so forgive my lack of knowledge. There is a very good model called Inception ResNet v2 that basically works like this: the input is an image, and the output is a list of predictions with their positions and bounding rectangles. I find this very useful, and I thought of using this already-working model to recognize things that it currently can't (for example, whether a human is wearing a mask or not). Yes, I wanted to add a new recognition class to the model.
import tensorflow as tf
import tensorflow_hub as hub
mod = hub.load("https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1")
mod is an object of type tensorflow.python.training.tracking.tracking.AutoTrackable. Reading the documentation (which was only available in the source code) was a bit hard without context,
and I tried to inspect some of its properties to see if I could figure it out by myself.
Well, I didn't. How can I see the network, the layers, the weights, the fit methods? Is it all abstracted away? Can I convert it to Keras? I want to experiment with it, see if I can modify it, and see if I can export the model to another representation, for example PyTorch.
I wanted to do this because I thought it'd be better to modify an already working model instead of creating one from scratch. Also because I'm not good at training models myself.
I've run into this issue too. The TensorFlow Hub guide says:
This error frequently arises when loading models in TF1 Hub format with the hub.load() API in TF2. Adding the correct signature should fix this problem.
mod = hub.load(handle).signatures['default']
As an example, you can see this notebook.
You can dir() the loaded model object to see what is defined on it:
m = hub.load(handle)
dir(m)
As mentioned in the other answer, you can also look at the signatures with print(m.signatures).
Hub models are SavedModel assets and do not have a keras .fit method on them. If you want to train the model from scratch, you'll need to go to the source code.
Some models have more extensive exported interfaces including access to individual layers, but this model does not.
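As a sketch of what working through the signature looks like for this particular module (the exact input and output conventions are documented on the module's tfhub.dev page, so double-check them there):

import tensorflow as tf
import tensorflow_hub as hub

detector = hub.load(
    "https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1"
).signatures['default']

# The module expects a batch of one float32 image with values in [0, 1].
image = tf.zeros([1, 480, 640, 3], dtype=tf.float32)  # dummy image
result = detector(image)
print(list(result.keys()))  # e.g. detection_boxes, detection_scores, detection_class_entities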

Training a neural network on my own dataset doesn't work due to a strange cache problem

I faced a strange challenge trying to train a neural network using code from GitHub; it is the Hugging Face conversational model.
What happens: even when I use my own dataset for training, the result remains the same as with the original dataset. My hypothesis is that it is somehow a cache problem: the old dataset keeps getting loaded from the cache and replaces mine.
Then, when I launch an actual interactive session with the neural network, it works, but without my data, even if I pass the model checkpoint.
Why I suspect the cache: in this repo the author automatically downloads and caches the neural network model in /home/joo/.cache/torch/pytorch_transformers/ if no parameter is specified in the terminal.
I have created an issue on GitHub, but I am not sure whether this is a problem specific to this repo only, or a common problem with retraining neural networks that I am facing for the first time.
https://github.com/huggingface/transfer-learning-conv-ai/issues/36
Some copy-paste from the issue:
I am still curious why I was not able to pass my dataset:
I added my personality to the original 200 MB JSON,
trained once more with --dataset_path ./my.json,
invoked interact.py with the new checkpoint and path: python ./interact.py --model_checkpoint
./runs/Oct08_18-22-53_joo-tf_openai-gpt/ --dataset_path ./my.json
and it reports Gathered 18878 personalities (but not 18879, which would include my own).
I changed the code in interact.py to pick my personality first, this way:
was: personality = random.choice(personalities)
became: personality = personalities[0]
and this first personality is not mine.
Solved: it is an issue specific to this repo; the dataset path is simply hardcoded.
But why it doesn't load the first time still has no answer.
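For reference, the pattern that bites here: the repo's train.py tokenizes the dataset once and caches the result with torch.save, so a stale cache file can silently shadow a new --dataset_path. A sketch of the workaround, assuming the cache file name the repo derives from the tokenizer class (adjust to whatever actually exists on disk):

import os

# Assumed cache file: the repo appends the tokenizer class name to the
# --dataset_cache prefix, e.g. './dataset_cache_OpenAIGPTTokenizer'.
dataset_cache = './dataset_cache_OpenAIGPTTokenizer'
if os.path.isfile(dataset_cache):
    os.remove(dataset_cache)  # the next run will re-tokenize --dataset_path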
