So I'm new to TensorFlow and deep learning. I learned how to install TensorFlow and run an MNIST example with it, and I can check the training results on TensorBoard.
I have 2 questions after this:
After I have trained this program, how can I use it in the future? I want to know how to give some image data to my program and have it decide which number it is.
And does it need to be trained each time I run the program, or can I just give it some input data (a handwritten image) and get a result?
Based on my newbie question above, please also tell me: what am I missing? What don't I understand yet?
Thanks
You might check out TensorFlow Serving. First train and evaluate the model. Once you're happy with it, export it in the SavedModel format. The SavedModel includes the model itself and the trained variables; just load it and start classifying images.
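For example, a minimal sketch with the TF 2.x Keras API; the tiny model, the path "mnist_savedmodel", and the random stand-in images are placeholders, not your original MNIST program:

import numpy as np
import tensorflow as tf

# Placeholder model; substitute your trained MNIST network.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# ... model.fit(...) goes here ...

# Export once: the SavedModel directory holds the graph plus the trained variables.
model.save("mnist_savedmodel")

# Later, in a completely separate run, load and classify without retraining.
loaded = tf.keras.models.load_model("mnist_savedmodel")
images = np.random.rand(2, 28, 28).astype("float32")   # stand-in for real digit images
print(loaded.predict(images).argmax(axis=-1))          # predicted digit per image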
I am a newbie in machine learning. I would like to design an ASR (automatic speech recognition) model, and I came across the following link:
https://keras.io/examples/audio/transformer_asr/
I am able to understand the concept of the training model. I also saved the above model using the following code:
model.save_weights('data_sa', save_format='hdf5')
then I load it again using:
model.load_weights('/content/data_sa')
Now I would like to run prediction with the model on the validation dataset, but I am facing an issue.
I am using the following step for prediction:
model.predict(np.array(val_ds)) #this is the validation dataset
It gives a tensor-conversion error.
I tried to search for a solution to this but couldn't find one.
tf.cast() and tf.to_float() are TensorFlow functions, so you get them via import tensorflow as tf. Note that tf.to_float() is TF 1.x API (tf.compat.v1.to_float() in TF 2.x); tf.cast(x, tf.float32) is the usual replacement.
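For instance, a minimal sketch assuming the error comes from a dtype mismatch; val_batch here is a placeholder for one batch taken from your dataset (rather than wrapping the whole tf.data.Dataset in np.array()):

import tensorflow as tf

val_batch = tf.constant([[1, 2], [3, 4]])    # placeholder integer batch
val_batch = tf.cast(val_batch, tf.float32)   # convert to float32 before predict
# predictions = model.predict(val_batch)     # model from the question
print(val_batch.dtype)                       # float32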
Totally new to TensorFlow,
I have created an object detection model (.pb and .pbtxt) using the 'faster_rcnn_inception_v2_coco_2018_01_28' model from the TensorFlow model zoo. It works fine on Windows, but I want to use it on a Google Coral Edge TPU. How can I convert my frozen model into a quantized edgetpu.tflite model?
There are 2 more steps to this pipeline:
1) Convert the .pb -> tflite:
I won't go into detail since there is documentation on this on the official TensorFlow page and it changes very often, but I'll still try to answer your question specifically. There are 2 ways of doing this:
Quantization-Aware Training: this happens during training of the model. I don't think this applies to you, since your question seems to indicate that you were not aware of this process. But please correct me if I'm wrong.
Post-Training Quantization: basically you load your model, where all tensors are of type float, and convert it to a tflite model with int8 tensors. Again, I won't go into too much detail, but there are 2 actual ways of doing so :) a) with code (see the sketch below)
b) with the tflite_convert tool
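For (a), here is a minimal post-training quantization sketch using the TF Lite converter; the graph path, tensor names, input size, and the random calibration data are all placeholders to replace with your model's real ones:

import numpy as np
import tensorflow as tf

converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    "frozen_inference_graph.pb",            # your exported graph
    input_arrays=["image_tensor"],          # placeholder tensor names
    output_arrays=["detection_boxes", "detection_scores",
                   "detection_classes", "num_detections"],
    input_shapes={"image_tensor": [1, 300, 300, 3]},
)

def representative_dataset():
    # Feed ~100 real preprocessed images here so the converter can
    # calibrate the int8 ranges; random data is only a stand-in.
    for _ in range(100):
        yield [np.random.rand(1, 300, 300, 3).astype(np.float32)]

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full integer quantization, which the edgetpu compiler requires.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("your_quantized_model.tflite", "wb") as f:
    f.write(converter.convert())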
2) Compile the model from tflite -> edgetpu.tflite:
Once you have produced a fully quantized tflite model, congrats: your model is now much more efficient on ARM platforms and much smaller. However, it will still run on the CPU unless you compile it for the Edge TPU. You can review this doc for installation and usage, but compiling it is as easy as:
$ edgetpu_compiler -s your_quantized_model.tflite
Hope this helps!
I am working on a project using a Keras deep learning model that I need to port to PyTorch.
The goal of the project is to localize some elements in the images. To train it, I first use patches extracted from my images and then infer on the full image. I read that this was possible with the (None, None, 1) input shape for the Keras input layer, and it is currently working. However, the same training setup does not seem to work in PyTorch. So I was wondering: does the (None, None, 1) input layer do something specific when I start inferring on full images?
Thanks for your answers
As in the discussion in the link, quoting fchollet:

"Of course, it's not always possible to have such free dimensions (for instance it's not possible to have variable-length sequences with TensorFlow, but it is with Theano)."
One can assume that it's because of the architecture of the framework. As you stated, it may be accepted in Keras but not in PyTorch.
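To illustrate the difference, a minimal sketch (layer sizes are made up, not taken from the original project): a fully convolutional PyTorch model declares no input shape at all, and convolution layers accept any spatial size, which is what Keras's Input(shape=(None, None, 1)) expresses. Note that PyTorch is channels-first while Keras is channels-last.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=1),   # per-pixel localization map
)

patch = torch.randn(1, 1, 64, 64)          # a training patch
full_image = torch.randn(1, 1, 512, 768)   # a full image at inference
print(model(patch).shape)        # torch.Size([1, 1, 64, 64])
print(model(full_image).shape)   # torch.Size([1, 1, 512, 768])

As long as the PyTorch model contains no layer that bakes in a fixed size (e.g. a Linear layer fed from a flattened feature map), training on patches and inferring on full images should work the same way as in Keras.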
I have BERT-base model checkpoints which I trained from scratch in TensorFlow. How can I use those checkpoints to predict the masked word in a given sentence?
For example, let's say the sentence is
"[CLS] abc pqr [MASK] xyz [SEP]"
and I want to predict the word at the [MASK] position.
How can I do it?
I searched a lot online, but everyone is using BERT for their task-specific classification tasks, not for predicting a masked word.
Please help me solve this prediction problem.
I created the data using create_pretraining_data.py and trained the model from scratch using run_pretraining.py from the official BERT repo (https://github.com/google-research/bert).
I have searched the issues in the official BERT repo but didn't find any solution.
I also looked at the code in that repo: they use an Estimator that they train directly, rather than loading weights from checkpoints.
I didn't find any way to use TensorFlow checkpoints of a BERT-base model (trained from scratch) to predict a masked token (i.e. [MASK]).
Do you definitely need to start from a TF checkpoint? If you can use one of the pretrained models used in the pytorch-transformers library, I wrote a library for doing exactly this: FitBERT.
If you have to start with a TF checkpoint, there are scripts for converting from a TF checkpoint to something pytorch-transformers can use, link, and after converting you should be able to use FitBERT, or you can just see what we're doing in the code.
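For illustration, a minimal sketch of the masked-word prediction itself once the checkpoint has been converted; "converted_bert" is a placeholder for the conversion output directory, and the exact output structure varies across pytorch-transformers/transformers versions:

import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("converted_bert")
model = BertForMaskedLM.from_pretrained("converted_bert")
model.eval()

tokens = tokenizer.tokenize("[CLS] abc pqr [MASK] xyz [SEP]")
ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
mask_pos = tokens.index("[MASK]")

with torch.no_grad():
    logits = model(ids)[0]   # shape: (1, seq_len, vocab_size)

top5 = torch.topk(logits[0, mask_pos], 5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top5))  # top candidates for [MASK]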
I built a CNN model for image classification using the Keras library. However, training takes many hours. Once I have trained my model, how can I use it without training it again? I mean, after training, I want to use the model many times.
This matters because I will use my model in Android Studio.
Any help is appreciated
Thank YOU...
EDIT
When I wrote this question, I did not know about model.save and load_model; the answers show the appropriate usage of them.
You can easily save your model after the training process by using:
model.save('my_model.h5')
You can later load that model by using:
from keras.models import load_model
model = load_model('my_model.h5')
For more details, have a look at the documentation: https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model
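As a minimal end-to-end sketch of reusing the model without retraining ('my_model.h5' and the 28x28x1 input shape are assumptions, adjust them to your own model):

import numpy as np
from keras.models import load_model

model = load_model('my_model.h5')   # trained once, reused many times
image = np.random.rand(1, 28, 28, 1).astype('float32')  # stand-in for a real image
print(model.predict(image).argmax(axis=-1))             # predicted class index

For Android Studio specifically, you would typically convert the saved Keras model to TensorFlow Lite and bundle the .tflite file with the app, rather than loading the .h5 file directly on the device.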