I'm training an Inception model from scratch using the scripts provided here.
The training produces these files:
checkpoint
events.out.tfevents.1499334145.fdbf-Dell
model.ckpt-5000.data-00000-of-00001
model.ckpt-5000.index
model.ckpt-5000.meta
...
model.ckpt-25000.data-00000-of-00001
model.ckpt-25000.index
model.ckpt-25000.meta
Does someone have a script to convert these files into something I can use to classify my images? I have already tried modifying the inception_train.py file to output graph.pb, but nothing happens...
Any help would be appreciated, thank you!
How to use your checkpoint directly is explained here.
To create a .pb file, you'll have to freeze the graph, as explained here.
To create the initial .pb (or .pbtxt) file needed for freeze_graph, you can use tf.train.write_graph().
Related
So, as simple as the title of the question: is there a way to save the weights after training, like this
model.save_weights("path")
and then load them in another project with just
model = load_weights("path")
model.predict(x)
Is it possible?
Yes, it is possible if you use the right path.
For instance, given this layout:
- project1/cool.py
- project2/another_cool.py
You train with cool.py and the weights are saved inside project1's folder. To load them in another_cool.py, just call model.load_weights with the path ../project1/weights.h5.
If you only want to save/load the weights, you can use
model.save_weights("path/to/my_model_weights.hdf5")
and then to reload them (potentially in another Python project or another interpreter; just update the path accordingly):
other_model.load_weights("path/to/my_model_weights.hdf5")
However, both models must have the same architecture (instances of the same class), and the Python/TensorFlow/Keras versions should match. See the docs for more info.
You can save both weights and architecture with model.save("path/to/my_model.hdf5") for saving to disk, and keras.models.load_model("path/to/my_model.hdf5") for loading from disk (once again, the documentation provides details).
Once loaded in memory, you can retrain the model or call predict on it; predictions should be identical between projects.
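A runnable sketch of the weights-only round trip described above. The tiny model, the build_model helper, and the .weights.h5 filename are illustrative (newer Keras versions require the .weights.h5 suffix for weight files; older ones accept any .h5/.hdf5 name):

```python
import os
import tempfile

import numpy as np
from tensorflow import keras

def build_model():
    # Both sides must construct the exact same architecture.
    return keras.Sequential([
        keras.Input(shape=(3,)),
        keras.layers.Dense(4, activation="relu"),
        keras.layers.Dense(1),
    ])

weights_path = os.path.join(tempfile.mkdtemp(), "my_model.weights.h5")

model = build_model()
model.save_weights(weights_path)        # "project 1": save after training

other_model = build_model()             # "project 2": same class, fresh instance
other_model.load_weights(weights_path)  # restore the trained weights

x = np.random.rand(2, 3).astype("float32")
# The two models now agree on predictions.
```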
I have a PyTorch model class and its statedict with the weights.
I'd like to save the model directly, with its weights, in a .pt file using torch.save(model, PATH), but that simply saves the state dict again.
How do I save the model with the loaded_weights in it?
What I'm currently doing
lin_model = ModelClass(args)
lin_model.load_state_dict(torch.load('state_dict.pt'))
torch.save(lin_model, PATH)
I want the newly saved model to be a fully loaded .pt file. Please help me here, thanks in advance.
According to the PyTorch documentation here, torch.save(model, PATH) saves the entire model, including the class. But here is the problem: it doesn't always work. The saved model is in pickle format, and pickle does not serialize the class definition itself, only a reference to the module path containing the model class. So this saving method can break in various ways when used in other projects.
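A sketch contrasting the two approaches; the tiny ModelClass below is a stand-in for your actual model class, and the weights_only argument is only needed on newer PyTorch versions (it defaults to True there, which blocks full-model unpickling):

```python
import os
import tempfile

import torch
import torch.nn as nn

class ModelClass(nn.Module):  # stand-in for your actual model class
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(3, 1)

    def forward(self, x):
        return self.fc(x)

tmp = tempfile.mkdtemp()
model = ModelClass()

# Pickles the whole object, including a *reference* to ModelClass's import path;
# loading elsewhere requires that exact module/class to be importable.
torch.save(model, os.path.join(tmp, "full_model.pt"))
loaded = torch.load(os.path.join(tmp, "full_model.pt"), weights_only=False)

# More portable: save only the state_dict and rebuild the class yourself.
torch.save(model.state_dict(), os.path.join(tmp, "state_dict.pt"))
rebuilt = ModelClass()
rebuilt.load_state_dict(torch.load(os.path.join(tmp, "state_dict.pt")))
```

The state_dict route is the one the PyTorch docs recommend, precisely because it does not depend on the saving project's directory structure.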
I am trying to load 300W_lp dataset in tensorflow.
I downloaded and extracted the dataset manually at C:/datasets/the300w
Now, when I try to load the dataset into TensorFlow using
the300w = tfds.load('the300w_lp',data_dir='C:\datasets\the300w', download=False)
it gives me this error:
Dataset the300w_lp: could not find data in C:\datasets\the300w. Please make sure to call dataset_builder.download_and_prepare(), or pass download=True to tfds.load() before trying to access the tf.data.Dataset object.
Please help. How do I load the dataset in TensorFlow?
Try the plain old
dataset = tfds.load('the300w_lp')
It works fine for me. Maybe you somehow unzipped the dataset file incorrectly? If you have spare time, try the code above and see if it works.
A simple way to tackle this issue: run the above command in Google Colab, grab a portion of the dataset object, download it, and use it for your own purposes :)
When using TensorFlow, the graph is logged in the summary file, which I "abuse" to keep track of architecture modifications.
But that means I have to use TensorBoard every time I want to visualise and view the graph.
Is there a way to write out such a graph prototxt in code, or to export the prototxt from the summary file TensorBoard reads?
Thanks for your answer!
The graph is available by calling tf.get_default_graph(). You can get it in GraphDef format with graph.as_graph_def().
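A minimal sketch of dumping the GraphDef as a text prototxt, using TF 1.x-style graph-mode APIs (under tf.compat.v1 in TF 2; in TF 1.x just `import tensorflow as tf`). The one-constant graph and temp directory are illustrative:

```python
import os
import tempfile

import tensorflow.compat.v1 as tf

# A trivial graph standing in for your model's graph.
graph = tf.Graph()
with graph.as_default():
    tf.constant(1.0, name="one")

# Serialize the structure and write it out as a text prototxt,
# without going through TensorBoard or the summary file.
out_dir = tempfile.mkdtemp()
tf.train.write_graph(graph.as_graph_def(), out_dir, "graph.pbtxt", as_text=True)
```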
I was wondering if there's a way to tell your training models (in TensorFlow), or the TensorFlow configuration in general, where to store checkpoint files. I was training a neural network and got these errors:
InternalError: Error writing (tmp) checkpoint file: /tmp/tmpn2cWXm/model.ckpt-500-00000-of-00001.tempstate12392765014661958578: Resource exhauste
and
ERROR:tensorflow:Got exception during tf.learn final checkpoint .
I'm also getting operating system alerts (Debian Linux) about low disk space, so I assume the problem is that my disk got full with checkpoint files. I have several partitions with enough space and would like to move the checkpoint files there.
Thank you!
You can specify your save path as the second argument of tf.train.Saver.save(sess, 'your/save/path', ...). Similarly, you can restore your previously saved variables passing your restore path as the second argument of tf.train.Saver.restore(sess, 'your/restore/path').
Please, see TensorFlow Saver documentation and these saving and restoring examples for further details.
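A short sketch of the save-path argument in action, with TF 1.x-style APIs (under tf.compat.v1 in TF 2). The one-variable graph is illustrative; the key point is that the prefix passed to saver.save controls where the checkpoint files land:

```python
import os
import tempfile

import tensorflow.compat.v1 as tf

ckpt_dir = tempfile.mkdtemp()  # point this at any partition with free space

with tf.Graph().as_default():
    v = tf.Variable(42.0, name="v")
    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # The second argument is the checkpoint prefix; TF writes
        # <prefix>.index and <prefix>.data-* files next to it.
        save_path = saver.save(sess, os.path.join(ckpt_dir, "model.ckpt"))
```

To restore later, pass the same prefix to saver.restore(sess, save_path).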
This is a pretty old query that I stumbled upon today while searching for some other issue. Anyway, I thought I will put down my thoughts in case it helps someone in the future.
You can specify the model directory when constructing your regressor. It can be any location on your filesystem where you have write permission and enough space (the events file needs quite a bit). For example:
dnn_regressor = tf.estimator.DNNRegressor(
...,
model_dir="/tmp/mydata"
)