Train Inception from Scratch in TensorFlow - python

I wanted to train the Inception model as shown in the TensorFlow GitHub tutorial,
except I wanted to use a self-made dataset of TFRecord files.
bazel build inception/imagenet_train
bazel-bin/inception/imagenet_train --num_gpus=1 --batch_size=32 --train_dir=/tmp/imagenet_train --data_dir=/tmp/imagenet_data
I changed the data directory to the folder with my own TFRecord files.
Now I'm wondering whether I'm really training from scratch, or if this is the same thing as the "retraining the last layer" tutorial.

Yes, you are training from scratch: see the code.
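For intuition, here is a minimal sketch of the difference between the two setups, expressed in tf.keras.applications terms rather than the bazel scripts the tutorial uses (the class count and layer choices below are placeholders):

import tensorflow as tf

# Training from scratch: no pretrained weights, every layer starts random and is trainable.
scratch_model = tf.keras.applications.InceptionV3(weights=None, classes=10)

# "Retrain the last layer": load ImageNet weights, freeze the base,
# and train only a new classification head on top.
base = tf.keras.applications.InceptionV3(weights='imagenet', include_top=False, pooling='avg')
base.trainable = False
retrain_model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(10, activation='softmax'),
])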

Related

How to convert a pretrained tensorflow pb frozen graph into a modifiable h5 keras model?

I have been searching for a method to do this for so long, and I cannot find an answer. Most threads I found are from people wanting to do the opposite.
Backstory:
I am experimenting with some pre-trained models provided by the tensorflow/models repository. The models are saved as .pb frozen graphs. I want to fine-tune some of these models by changing the final layers to suit my application.
Hence, I want to load the models inside a Jupyter notebook as a normal Keras h5 model.
How can I do that? Or do you have a better way to do so?
Thanks.
It seems like all you would have to do is download the model files and store them in a directory, for example c:\models, and then load the model:
import os
import tensorflow as tf

model = tf.keras.models.load_model(r'c:\models')
model.summary()  # prints out the model layers

# modify the model as you typically do for transfer learning
# compile the changed model
# train the model

# save the trained model as a .h5 file
save_dir = r'path to the directory you want to save the model to'
model_identifier = 'abcd.h5'  # for abcd use whatever identification you want
save_path = os.path.join(save_dir, model_identifier)
model.save(save_path)
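As a rough sketch of the "modify the model" step above, assuming the loaded model is a Keras classifier whose final Dense layer you want to replace (the class count, layer index and dataset name below are placeholders):

# reuse everything except the final layer
base_output = model.layers[-2].output  # output of the penultimate layer
new_head = tf.keras.layers.Dense(5, activation='softmax', name='new_head')(base_output)
new_model = tf.keras.Model(inputs=model.input, outputs=new_head)

# freeze the reused layers so only the new head is trained
for layer in new_model.layers[:-1]:
    layer.trainable = False

new_model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
# new_model.fit(train_ds, epochs=5)  # train_ds is your own labelled dataset
new_model.save('modified_model.h5')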

How can I train tensorflow deeplab model?

I need to train the TensorFlow DeepLab model with my shoes dataset. Then I will use this model to remove the background from shoe images. How can I train it? Could you explain it step by step? Do you have any example for this situation?
tensorflow/deeplab
You will need to read some parts of the DeepLab code:
Download the repo.
Now you need to get your data into TFRecords in the proper format.
Use some of the scripts in https://github.com/tensorflow/models/tree/master/research/deeplab/datasets to download and generate the example datasets.
Prepare an analogous script for your shoes dataset (a minimal sketch of the TFRecord conversion follows this list).
Add information about your data to the DeepLab source file https://github.com/tensorflow/models/blob/master/research/deeplab/datasets/data_generator.py, in the same format as the example datasets.
Check the flags for the architecture in https://github.com/tensorflow/models/blob/master/research/deeplab/common.py
Check the remaining flags and then train, export, compute statistics or visualize using train.py, vis.py, export_model.py and eval.py in the folder https://github.com/tensorflow/models/tree/master/research/deeplab
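A hedged sketch of what that TFRecord conversion could look like for a shoes dataset, assuming one PNG segmentation mask per JPEG image. The exact feature keys must match what DeepLab's build_data.py expects, so treat the keys, file names and sizes below as placeholders and copy the real ones from that script:

import tensorflow as tf

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

def shoe_example(image_path, mask_path, height, width):
    # read the raw encoded image and its segmentation mask
    image_data = tf.io.gfile.GFile(image_path, 'rb').read()
    mask_data = tf.io.gfile.GFile(mask_path, 'rb').read()
    return tf.train.Example(features=tf.train.Features(feature={
        'image/encoded': _bytes_feature(image_data),
        'image/format': _bytes_feature(b'jpeg'),
        'image/height': _int64_feature(height),
        'image/width': _int64_feature(width),
        'image/segmentation/class/encoded': _bytes_feature(mask_data),
        'image/segmentation/class/format': _bytes_feature(b'png'),
    }))

with tf.io.TFRecordWriter('shoes_train-00000-of-00001.tfrecord') as writer:
    writer.write(shoe_example('img_001.jpg', 'mask_001.png', 512, 512).SerializeToString())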

Tensorflow Object Detection: training from scratch using a .h5 (hdf5) file

I need to train a CNN from scratch on the COCO dataset with a specific configuration: https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/embedded_ssd_mobilenet_v1_coco.config
Thus, I installed the TF Object Detection API and downloaded the COCO dataset. However, the dataset is in the .h5 format.
Is it possible to run the training with this kind of file, or do I need to convert it into images somehow? If that is possible, what would the command be?
PS: I was not able to find a pre-trained model with that config, which is why I need to train a CNN from scratch.
My suggestion would be to convert the .hdf5 file to a .tfrecord file; you can find examples of how to do this here.
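A minimal sketch of such a conversion, assuming the .h5 file stores raw images under a dataset named 'images' (that name, the file paths and the feature schema are placeholders; real TF Object Detection API records also need bounding boxes, class labels, filenames and so on, so follow the examples in object_detection/dataset_tools):

import h5py
import tensorflow as tf

def image_example(image_bytes):
    # minimal placeholder schema; real detection records need boxes and labels too
    return tf.train.Example(features=tf.train.Features(feature={
        'image/encoded': tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_bytes])),
    }))

with h5py.File('coco.h5', 'r') as f, tf.io.TFRecordWriter('coco_train.tfrecord') as writer:
    for img in f['images']:  # 'images' is an assumed dataset name inside the .h5 file
        image_bytes = tf.io.encode_jpeg(img).numpy()  # assumes raw HxWx3 uint8 arrays
        writer.write(image_example(image_bytes).SerializeToString())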

Loading a trained model from Python to C++ in Tensorflow 1.2

I'm looking to run a basic fully-connected neural network for the MNIST dataset with the C++ API v1.2 from Tensorflow. I have trained the model and exported it using tf.train.Saver() in Python. This gave me a checkpoint file, a data file, an index file and a meta file.
I know that the data file contains the saved variables while the meta file contains the graph from using Tensorboard on a previous project.
However, I am not sure of the recommended way to load those files and run the trained model in a C++ environment in v1.2, since all the tutorials and questions I've found are for older versions which differ substantially.
I've found that tensorflow::ops::Restore should be the method to do such a thing, but I know that inference in TensorFlow isn't well supported, so I am not certain what parameters I should give it in order to receive the trained model that I can just put into a session->Run() and get an accuracy statement when fed test data.
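For reference, a minimal sketch of the Python export side the question describes (TF 1.x style); the four files mentioned above all come out of a single saver.save() call, and the paths here are placeholders:

import tensorflow as tf  # TensorFlow 1.x

# ... build the MNIST graph and define the training ops here ...
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... run the training loop ...
    # writes model.ckpt.meta, model.ckpt.index,
    # model.ckpt.data-00000-of-00001 and a 'checkpoint' file
    saver.save(sess, '/tmp/mnist/model.ckpt')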

Pro training input data

So I'm new to TensorFlow and deep learning. I learned how to install TF and run an MNIST example in Python on TensorFlow. I can also check the program's training results on TensorBoard.
I have 2 questions after this:
After I have trained this program, how can I use it in the future? I want to know how to give some image data to my program and have it decide which number it is.
And does it need to be trained each time I run the program, or can I just give it some input data (a handwritten image) for future use and get a result?
Based on my newbie questions above, please also tell me: what am I missing?
Thanks
You might check out TensorFlow Serving. First train and evaluate the model. Once you're happy with it, export it in the SavedModel format. The SavedModel includes the model itself and the trained variables; just load it and start classifying images.
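A minimal sketch of that workflow with Keras, assuming model is your already-trained MNIST classifier (the export path and input shape below are placeholders):

import numpy as np
import tensorflow as tf

# export the trained model in SavedModel format (a directory, not a single file)
model.save('/tmp/mnist_savedmodel')

# later, or in another program: load it and classify new images without retraining
restored = tf.keras.models.load_model('/tmp/mnist_savedmodel')
image = np.zeros((1, 28, 28), dtype=np.float32)  # replace with a real handwritten digit, shaped to match your model's input
probabilities = restored.predict(image)
print('predicted digit:', int(np.argmax(probabilities)))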
