tensorflow.contrib.predictor.from_saved_model() in Tensorflow 2 - python

I'm currently using TensorFlow 1 and noticed that tensorflow.contrib has been removed in TensorFlow 2. How can I convert code that uses tensorflow.contrib.predictor.from_saved_model() to work on TensorFlow 2?

The Predictor API is not available in TF2 at all (the whole contrib module is gone). You can either try TF-Hub (the deprecation notes say Predictor is replaced by it), convert your model to a Keras model (the route I'd recommend if you have a custom model architecture), convert it to an Estimator, or stick with the latest TF1 release.
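For the common case of just running inference on an existing SavedModel, tf.saved_model.load() is the usual TF2 entry point. A minimal sketch, assuming the export has a serving_default signature; the path and input name below are placeholders, so inspect the loaded signatures to get the real ones:

import tensorflow as tf

# Load the SavedModel directory (placeholder path).
loaded = tf.saved_model.load("/path/to/saved_model")

# The serving signature plays the role of the old Predictor callable;
# print(list(loaded.signatures)) to see which keys your export contains.
infer = loaded.signatures["serving_default"]

# Call it with keyword tensors matching the signature's input names.
# "inputs" is an assumed name; check infer.structured_input_signature.
outputs = infer(inputs=tf.constant([[1.0, 2.0, 3.0]]))
print(outputs)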

Related

How to train a custom object detection model (ssd_mobilenet_v1_coco and ssd_inception_v2_coco) on Google Colab with TensorFlow 1.15.2?

Basically, I have been trying to train a custom object detection model with ssd_mobilenet_v1_coco and ssd_inception_v2_coco on Google Colab (TensorFlow 1.15.2) using the TensorFlow Object Detection API. As soon as I start training, it throws an error for each of the two models respectively.
I also ran python object_detection/builders/model_builder_tf1_test.py and it passed all the tests without any errors or warnings. The errors are:
ValueError: ssd_inception_v2 is not supported. See model_builder.py for features extractors compatible with different versions of Tensorflow.
ValueError: ssd_mobilenet_v1_coco is not supported. See model_builder.py for features extractors compatible with different versions of Tensorflow.
I have successfully switched TensorFlow to 1.15.2 with the command below; this is my first step, before installing any of the dependencies.
%tensorflow_version 1.x
import tensorflow
print(tensorflow.__version__)
When I checked model_builder.py I could see that it still supports ssd_mobilenet_v1 and ssd_inception_v2. I want to deploy my custom-trained ssd_mobilenet_v1 or ssd_inception_v2 model on a Jetson TX2 by converting it to a TF-TRT model. These two documents, https://www.elinux.org/Jetson_Zoo and https://github.com/NVIDIA-AI-IOT/tf_trt_models#od_models, list object detection models that can be converted to TF-TRT models. So my question is: how can I train these models, given that they are supported on Google Colab with TensorFlow 1.15.2, and then deploy them on the Jetson TX2 by converting them to TF-TRT models? Can anyone guide me through it? It would really help me continue my learning and learn something interesting. Thanks.
I think you downloaded both of the models from the TensorFlow 2 model zoo, and TensorFlow 1.15 will obviously not support them.
Download the models from the TF1 detection zoo instead: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1_detection_zoo.md
and try again. Good luck.
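If it helps, here is a minimal sketch for fetching and unpacking one of the TF1 zoo archives in a Colab cell; the archive name is an example taken from that page and may change, so verify it against the linked zoo before running:

import tarfile
import urllib.request

# Example archive from the TF1 detection zoo; confirm the exact name on the zoo page.
MODEL = "ssd_mobilenet_v1_coco_2018_01_28"
URL = "http://download.tensorflow.org/models/object_detection/" + MODEL + ".tar.gz"

urllib.request.urlretrieve(URL, MODEL + ".tar.gz")
with tarfile.open(MODEL + ".tar.gz") as tar:
    tar.extractall()  # produces MODEL/pipeline.config plus the checkpoint files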

Keras Default Backend in Python & R

I am very confused after reading a lot about Keras and TensorFlow, and I still have some basic questions.
My confusion started with the answer to this question, where the author distinguishes standalone keras (import keras) from the Keras bundled with TensorFlow (tensorflow.keras).
1- (Python case):
Does Keras use any backend when I write import keras and there is not a single line of code related to TensorFlow (e.g. tf.keras or tf.keras.layers) in my whole implementation of the model, only import keras? If it does, is there any way to check which backend is being used? (A quick check is sketched after this list.)
2- The same question for the R language.
3- Is TensorFlow only used as the backend when we write import tensorflow as tf and use tf.keras?
4- Do import keras and tf.keras have any discrepancy in performance and accuracy in the Python case?
5- Do the versions of Keras and TensorFlow have an impact on performance and accuracy in both languages (R and Python)?
6- What could be the reasons for a 5% accuracy difference between R and Python? Python gives 94%, while the same implementation in R gives 89% accuracy. The versions of keras & TensorFlow in R are 2.3.0 and 2.2.0, while the versions in Python are tf 2.3.0 and keras 2.4.3. Please see this one.
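Regarding question 1, a quick sanity check, assuming the Python versions mentioned above: standalone Keras reports its active backend through keras.backend.backend(), and tf.keras is always backed by TensorFlow.

# Standalone keras: print which backend is active ("tensorflow" by default).
import keras
print(keras.backend.backend())

# tf.keras is part of TensorFlow itself, so its backend is always TensorFlow.
import tensorflow as tf
print(tf.keras.backend.backend())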

How to use a produced tf.keras (TF version >= 2) model in C++

I am using this implementation of RetinaNet, https://github.com/fizyr/keras-retinanet, which is built with TensorFlow and Keras. I want to use the produced model in C++ for inference, but when I search for how to do that, I can't find anything that works for TensorFlow >= 2.0. There is good documentation for this operation for PyTorch (https://pytorch.org/tutorials/advanced/cpp_export.html); I am looking for the TensorFlow equivalent. Thanks.
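One commonly used route (a sketch, not something specific to keras-retinanet) is to export the trained Keras model as a TensorFlow SavedModel from Python and then load that directory from C++ with TensorFlow's LoadSavedModel API, or serve it with TensorFlow Serving. The Python-side export might look like this; the file names are placeholders, and a real keras-retinanet model will likely need the package's custom_objects when loading:

import tensorflow as tf
from tensorflow import keras

# Load the trained Keras model (placeholder path; a keras-retinanet model
# would also need custom_objects from the keras_retinanet package).
model = keras.models.load_model("retinanet_inference.h5", compile=False)

# Export in SavedModel format; this directory is what the C++ API loads.
model.save("exported_retinanet", save_format="tf")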

K-means clustering using TensorFlow 2

How can I convert a pandas DataFrame with 47 columns and 99999 rows into tensors in TensorFlow 2? And is the K-means algorithm already implemented in TF 2? The command tf.contrib.factorization.KMeans does not work under TF2, since tf.contrib no longer exists in the TensorFlow 2 API.
I got my TF2 example working with the KMeans implementation from tf.compat.v1.estimator.experimental.KMeans.
Note: as of Nov 7, 2019, the code example in the docs has a problem: change tf.estimator.experimental.KMeans to tf.compat.v1.estimator.experimental.KMeans.
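Putting the two parts of the question together (DataFrame to tensor, plus the compat.v1 KMeans estimator), a minimal sketch adapted from the docs example; the DataFrame, cluster count, and iteration count are placeholders, and the features are assumed to be numeric:

import numpy as np
import pandas as pd
import tensorflow as tf

# Placeholder DataFrame standing in for the 99999 x 47 numeric table.
df = pd.DataFrame(np.random.rand(1000, 47))
points = df.to_numpy(dtype=np.float32)

def input_fn():
    # Convert the NumPy array to a tensor and feed all rows once per train() call.
    return tf.compat.v1.train.limit_epochs(
        tf.convert_to_tensor(points, dtype=tf.float32), num_epochs=1)

kmeans = tf.compat.v1.estimator.experimental.KMeans(
    num_clusters=5, use_mini_batch=False)

for _ in range(10):  # a handful of Lloyd iterations
    kmeans.train(input_fn)

print(kmeans.cluster_centers())
cluster_indices = list(kmeans.predict_cluster_index(input_fn))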
Also, for anyone looking for tf.contrib in TF 2, there's a quote on the r1.15 tf.contrib page:
Warning: The tf.contrib module will not be included in TensorFlow 2.0.
Many of its submodules have been integrated into TensorFlow core, or
spun-off into other projects like tensorflow_io, or tensorflow_addons.
For instructions on how to upgrade see the Migration guide.

CoreML load model saved with Keras 2

Apple's new CoreML can work with models trained in popular frameworks; at least they say so at 18" in the WWDC video. But in the docs it seems that, as far as neural nets are concerned, they only support Caffe and Keras 1.2.2 (see the code), while Keras is on its 2.0 version and TensorFlow and Theano are quite popular in their own right.
To get the conversion running with Keras 2, is there a better way than implementing the conversion myself? The Keras conversion code in the coremltools package is ~2000 lines long, and I don't have deep knowledge of the Keras model representation, so I really don't want to go that route.
I've tried converting a model saved with Keras 2 directly, but that doesn't work and fails with
TypeError: ('Keyword argument not understood:', u'gamma_initializer')
from site-packages/keras/engine/topology.py", line 326
They also don't have a GitHub repository for coremltools, so it's hard to discuss this package in more detail. I've uploaded a copy of the package: https://github.com/gsabran/coremltools
Keras 2.0 support is already there (released yesterday) with coremltools v0.4.0.
Also, refer to the newest comments on the Apple Developer Forums.
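With that release, the conversion should reduce to the usual coremltools Keras converter call; a minimal sketch, assuming coremltools >= 0.4.0, with the file and feature names as placeholders:

import coremltools

# Convert a Keras 2 model saved as HDF5 (placeholder path and feature names).
coreml_model = coremltools.converters.keras.convert(
    "my_keras2_model.h5",
    input_names=["image"],
    output_names=["probabilities"],
)
coreml_model.save("MyModel.mlmodel")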
