I am new to Stack Overflow. I am trying to export a model using export_inference_graph.py. I trained my model locally using faster_rcnn_inception_v2, and I am following this tutorial.
When I type the following in the command prompt,
python export_inference_graph.py --input_type image_tensor --pipeline_config_path CAPTCHA_training/faster_rcnn_inception_v2_coco.config --trained_checkpoint_prefix "CAPTCHA_training_dir/model.ckpt-51272" --output_directory CAPTCHA_inference_graph
all with correct paths, I get the following error:
File "export_inference_graph.py", line 206, in <module>
tf.app.run()
File "C:\Users\Jatin\anaconda3\lib\site-packages\tensorflow\python\platform\app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "C:\Users\Jatin\anaconda3\lib\site-packages\absl\app.py", line 303, in run
_run_main(main, args)
File "C:\Users\Jatin\anaconda3\lib\site-packages\absl\app.py", line 251, in _run_main
sys.exit(main(argv))
File "export_inference_graph.py", line 194, in main
exporter.export_inference_graph(
File "C:\Users\Jatin\anaconda3\lib\site-packages\object_detection\exporter.py", line 604, in export_inference_graph
detection_model = model_builder.build(pipeline_config.model,
File "C:\Users\Jatin\anaconda3\lib\site-packages\object_detection\builders\model_builder.py", line 1116, in build
return build_func(getattr(model_config, meta_architecture), is_training,
File "C:\Users\Jatin\anaconda3\lib\site-packages\object_detection\builders\model_builder.py", line 583, in _build_faster_rcnn_model
_check_feature_extractor_exists(frcnn_config.feature_extractor.type)
File "C:\Users\Jatin\anaconda3\lib\site-packages\object_detection\builders\model_builder.py", line 249, in _check_feature_extractor_exists
raise ValueError('{} is not supported. See `model_builder.py` for features '
ValueError: faster_rcnn_inception_v2 is not supported. See `model_builder.py` for features extractors compatible with different versions of Tensorflow
I am using Python 3.8.5 and TensorFlow 2.4.1.
Thanks in advance.
This looks like a TensorFlow version problem, according to the error. Looking into the source, faster_rcnn_inception_v2 only exists under if tf_version.is_tf1():. Try using TF 1.
https://github.com/tensorflow/models/blob/5a89897396aa8ecc7b3ef8919f987e96fc8d74db/research/object_detection/builders/model_builder.py#L70
https://github.com/tensorflow/models/blob/5a89897396aa8ecc7b3ef8919f987e96fc8d74db/research/object_detection/models/faster_rcnn_inception_resnet_v2_feature_extractor.py#L33
faster_rcnn_inception_v2_coco is listed in the TensorFlow 1 Detection Model Zoo:
https://github.com/tensorflow/models/blob/5a89897396aa8ecc7b3ef8919f987e96fc8d74db/research/object_detection/g3doc/tf1_detection_zoo.md#coco-trained-models
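As a rough sketch of how you might do that (this assumes a fresh conda environment; TF 1.15 wheels only exist for Python 3.7 and below, so it won't install into the Python 3.8.5 setup above):
conda create -n tf1 python=3.7
conda activate tf1
pip install tensorflow==1.15
# reinstall the Object Detection API (its TF1 setup) inside this environment, then:
python export_inference_graph.py --input_type image_tensor --pipeline_config_path CAPTCHA_training/faster_rcnn_inception_v2_coco.config --trained_checkpoint_prefix "CAPTCHA_training_dir/model.ckpt-51272" --output_directory CAPTCHA_inference_graph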
Related
I have an .h5 model that was built with tensorflow==1.13.1 and Keras==2.2.4 on a host to which I don't have access. I'm trying to load that model using keras.models.load_model as follows:
model.py:
from keras.models import load_model
import numpy as np
encoder = load_model('encoder.h5')
encoder.summary()
This throws a stack trace that points to a source file (implicit_delta.py) that I cannot open:
Duhaime:web doug$ python model.py
Using TensorFlow backend.
WARNING: Logging before flag parsing goes to stderr.
W0620 09:18:29.064763 140735739011968 deprecation_wrapper.py:119] From /Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
W0620 09:18:29.130089 140735739011968 deprecation_wrapper.py:119] From /Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.
Traceback (most recent call last):
File "model.py", line 8, in <module>
encoder = load_model('../pose-enc-raymond.h5')
File "/Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/engine/saving.py", line 419, in load_model
model = _deserialize_model(f, custom_objects, compile)
File "/Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/engine/saving.py", line 225, in _deserialize_model
model = model_from_config(model_config, custom_objects=custom_objects)
File "/Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/engine/saving.py", line 458, in model_from_config
return deserialize(config, custom_objects=custom_objects)
File "/Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/layers/__init__.py", line 55, in deserialize
printable_module_name='layer')
File "/Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/utils/generic_utils.py", line 145, in deserialize_keras_object
list(custom_objects.items())))
File "/Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/engine/network.py", line 1032, in from_config
process_node(layer, node_data)
File "/Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/engine/network.py", line 991, in process_node
layer(unpack_singleton(input_tensors), **kwargs)
File "/Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/engine/base_layer.py", line 457, in __call__
output = self.call(inputs, **kwargs)
File "/Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/layers/core.py", line 687, in call
return self.function(inputs, **arguments)
File "/home/cshimmin/jupyter/dance/implicit_delta.py", line 95, in <lambda>
IndexError: tuple index out of range
I have tried installing other versions of TensorFlow and Keras, but so far haven't had luck working around this. Is there any trick I can use to figure out how to load this model? Any suggestions or hacks are appreciated!
This thread helped me realize I needed to load the model using the version of Python that was used to create the model:
conda create -n 3.7.3 python=3.7.3
conda activate 3.7.3
Then pip install everything and the model will boot!
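In other words, a minimal sketch (the TensorFlow/Keras versions are the ones the model was built with, per the question; the Python version is the one from the original host):
conda create -n 3.7.3 python=3.7.3
conda activate 3.7.3
pip install tensorflow==1.13.1 keras==2.2.4 h5py
python model.py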
I am trying to quantize my model (specifically the pretrained faster_rcnn_inception_v2 trained on COCO, downloaded from the model zoo), in hopes of speeding up inference.
I use the following code from here:
import tensorflow as tf

saved_model_dir = "path/to/saved_model"  # placeholder: directory containing saved_model.pb
converter = tf.lite.TocoConverter.from_saved_model(saved_model_dir)
converter.post_training_quantize = True
tflite_quantized_model = converter.convert()
open("quantized_model.tflite", "wb").write(tflite_quantized_model)
The model's directory didn't have a saved_model.pb file, so I renamed frozen_inference_graph.pb to saved_model.pb.
Running the code above produces the following runtime error:
Traceback (most recent call last):
File "/home/juggernaut/pycharm-community-2018.2.4/helpers/pydev/pydevd.py", line 1664, in <module>
main()
File "/home/juggernaut/pycharm-community-2018.2.4/helpers/pydev/pydevd.py", line 1658, in main
globals = debugger.run(setup['file'], None, None, is_module)
File "/home/juggernaut/pycharm-community-2018.2.4/helpers/pydev/pydevd.py", line 1068, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/hdd/motorola/motorola_heads/tensorflow_face_detection/quantize.py", line 5, in <module>
converter = tf.lite.TocoConverter.from_saved_model(saved_model_dir)
File "/hdd/motorola/venv_py27_tf1.10/local/lib/python2.7/site-packages/tensorflow/python/util/deprecation.py", line 318, in new_func
return func(*args, **kwargs)
File "/hdd/motorola/venv_py27_tf1.10/local/lib/python2.7/site-packages/tensorflow/lite/python/lite.py", line 587, in from_saved_model
tag_set, signature_key)
File "/hdd/motorola/venv_py27_tf1.10/local/lib/python2.7/site-packages/tensorflow/lite/python/lite.py", line 376, in from_saved_model
output_arrays, tag_set, signature_key)
File "/hdd/motorola/venv_py27_tf1.10/local/lib/python2.7/site-packages/tensorflow/lite/python/convert_saved_model.py", line 254, in freeze_saved_model
meta_graph = get_meta_graph_def(saved_model_dir, tag_set)
File "/hdd/motorola/venv_py27_tf1.10/local/lib/python2.7/site-packages/tensorflow/lite/python/convert_saved_model.py", line 61, in get_meta_graph_def
return loader.load(sess, tag_set, saved_model_dir)
File "/hdd/motorola/venv_py27_tf1.10/local/lib/python2.7/site-packages/tensorflow/python/util/deprecation.py", line 318, in new_func
return func(*args, **kwargs)
File "/hdd/motorola/venv_py27_tf1.10/local/lib/python2.7/site-packages/tensorflow/python/saved_model/loader_impl.py", line 269, in load
return loader.load(sess, tags, import_scope, **saver_kwargs)
File "/hdd/motorola/venv_py27_tf1.10/local/lib/python2.7/site-packages/tensorflow/python/saved_model/loader_impl.py", line 420, in load
**saver_kwargs)
File "/hdd/motorola/venv_py27_tf1.10/local/lib/python2.7/site-packages/tensorflow/python/saved_model/loader_impl.py", line 347, in load_graph
meta_graph_def = self.get_meta_graph_def_from_tags(tags)
File "/hdd/motorola/venv_py27_tf1.10/local/lib/python2.7/site-packages/tensorflow/python/saved_model/loader_impl.py", line 323, in get_meta_graph_def_from_tags
" could not be found in SavedModel. To inspect available tag-sets in"
RuntimeError: MetaGraphDef associated with tags set(['serve']) could not be found in SavedModel. To inspect available tag-sets in the SavedModel, please use the SavedModel CLI: `saved_model_cli`
What does it mean and what should I do?
Please refer to this issue. They seem to have the same issue as you.
This may be fixed in a more recent version of TensorFlow (perhaps the tag has switched from 'serve' to 'serving' in the meantime).
You should use tf.saved_model.simple_save to export a proper SavedModel instead of renaming the frozen graph.
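A rough sketch of what that could look like in TF 1.x (the input/output tensor names below are assumptions based on the usual Object Detection API frozen graph, so check your own graph for the real names):
import tensorflow as tf

# Load the frozen graph and re-export it as a real SavedModel so the
# converter can find a MetaGraphDef tagged 'serve'.
graph_def = tf.GraphDef()
with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')
    with tf.Session(graph=graph) as sess:
        inputs = {'image_tensor': graph.get_tensor_by_name('image_tensor:0')}
        outputs = {
            'detection_boxes': graph.get_tensor_by_name('detection_boxes:0'),
            'detection_scores': graph.get_tensor_by_name('detection_scores:0'),
            'detection_classes': graph.get_tensor_by_name('detection_classes:0'),
            'num_detections': graph.get_tensor_by_name('num_detections:0'),
        }
        tf.saved_model.simple_save(sess, 'exported_saved_model', inputs, outputs)
Then point the converter at 'exported_saved_model' instead of the renamed frozen graph.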
So, I trained an object detection model and now I want to export an inference graph from the .ckpt files.
When I try to export it:
python export_inference_graph.py --input_type image_tensor --pipeline_config_path training/faster_rcnn_inception_v2_pets.config --trained_checkpoint_prefix training3/model.ckpt-47816 --output_directory inference_graph
I get this:
Traceback (most recent call last):
File "export_inference_graph.py", line 147, in <module>
tf.app.run()
File "/home/ubuntu/anaconda3/envs/tensorflow1/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 126, in run
_sys.exit(main(argv))
File "export_inference_graph.py", line 143, in main
FLAGS.output_directory, input_shape)
File "/home/ubuntu/tensorflow1/models/research/object_detection/exporter.py", line 454, in export_inference_graph
is_training=False)
File "/home/ubuntu/anaconda3/envs/tensorflow1/lib/python3.6/site-packages/object_detection-0.1-py3.6.egg/object_detection/builders/model_builder.py", line 101, in build
add_summaries)
File "/home/ubuntu/anaconda3/envs/tensorflow1/lib/python3.6/site-packages/object_detection-0.1-py3.6.egg/object_detection/builders/model_builder.py", line 274, in _build_faster_rcnn_model
image_resizer_fn = image_resizer_builder.build(frcnn_config.image_resizer)
File "/home/ubuntu/anaconda3/envs/tensorflow1/lib/python3.6/site-packages/object_detection-0.1-py3.6.egg/object_detection/builders/image_resizer_builder.py", line 83, in build
if keep_aspect_ratio_config.per_channel_pad_value:
AttributeError: 'KeepAspectRatioResizer' object has no attribute 'per_channel_pad_value'
It seems that everybody else has this working fine and has no problems with this.
Could anyone please tell me what is going on here?
I know this is a few months later, but I just encountered this issue too!
It seems the image_resizer.proto is missing the per_channel_pad_value attribute.
Update the proto file to include the attribute, from here:
https://github.com/tensorflow/models/blob/master/research/object_detection/protos/image_resizer.proto
Recompile it and then try again; it should work this time.
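For reference, the protos are usually recompiled from the models/research directory (assuming protoc is on your PATH):
# run from models/research
protoc object_detection/protos/*.proto --python_out=.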
I am a beginner in machine learning and am currently trying to follow the tutorial by sentdex on tracking custom objects.
I have completed training a model, which gave me model.ckpt files. (https://pythonprogramming.net/training-custom-objects-tensorflow-object-detection-api-tutorial/)
Now for testing the model we need to export the inference graph. (https://pythonprogramming.net/testing-custom-object-detector-tensorflow-object-detection-api-tutorial/)
I run the following command from the models/research/object-detection folder (https://github.com/tensorflow/models/) on the Command Prompt:
export_inference_graph.py --input_type image_tensor --pipeline_config_path training\ssd_mobilenet_v1_pets.config --trained_checkpoint_prefix training\model.ckpt-292 --output_directory mac_n_cheese_inference_graph
and obtain the following errors (the cmd output is attached):
Traceback (most recent call last):
File "C:\Users\user\AppData\Local\Programs\Python\Python36\models\research\object_detection\export_inference_graph.py", line 149, in <module>
tf.app.run()
File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\platform\app.py", line 126, in run
_sys.exit(main(argv))
File "C:\Users\user\AppData\Local\Programs\Python\Python36\models\research\object_detection\export_inference_graph.py", line 145, in main
FLAGS.output_directory, input_shape)
File "C:/Users/user/AppData/Local/Programs/Python/Python36/models/research/object_detection\exporter.py", line 452, in export_inference_graph
is_training=False)
File "C:/Users/user/AppData/Local/Programs/Python/Python36/models/research/object_detection/builders\model_builder.py", line 91, in build
return _build_ssd_model(model_config.ssd, is_training, add_summaries)
File "C:/Users/user/AppData/Local/Programs/Python/Python36/models/research/object_detection/builders\model_builder.py", line 152, in _build_ssd_model
is_training)
File "C:/Users/user/AppData/Local/Programs/Python/Python36/models/research/object_detection/builders\model_builder.py", line 118, in _build_ssd_feature_extractor
use_explicit_padding = feature_extractor_config._use_explicit_padding
AttributeError: 'SsdFeatureExtractor' object has no attribute '_use_explicit_padding'
Please help me correct this issue.
I cloned the TensorFlow object detection model from GitHub:
github link
I want to train this model with my own data (331 Samoyed dog images), following this blog tutorial: click here
My steps:
Created a PASCAL VOC format dataset;
downloaded the pretrained model (ssd_mobilenet_v1_coco_11_06_2017.tar.gz);
changed the config file (ssd_mobilenet_v1_pets.config);
started the training process with this command:
python object_detection/train.py \
--logtostderr \
--pipeline_config_path=./samoyed_test_and_train/training/ssd_mobilenet_v1_pets.config \
--train_dir=./samoyed_test_and_train/data/train.record
But I receive errors. My OS is macOS, and I tried on AWS where the same problem occurs. Can you figure out my mistakes? The errors:
INFO:tensorflow:Summary name Learning Rate is illegal; using Learning_Rate instead.
WARNING:tensorflow:From /Users/zhaoenpei/Desktop/dabai-robot-arm/experiments/models/object_detection/meta_architectures/ssd_meta_arch.py:579: all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Please use tf.global_variables instead.
INFO:tensorflow:Summary name /clone_loss is illegal; using clone_loss instead.
2017-08-01 10:34:42.992224: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-01 10:34:42.992254: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-08-01 10:35:00.359032: I tensorflow/core/common_runtime/simple_placer.cc:675] Ignoring device specification /device:GPU:0 for node 'prefetch_queue_Dequeue' because the input edge from 'prefetch_queue' is a reference connection and already has a device field set to /device:CPU:0
INFO:tensorflow:Restoring parameters from /Users/zhaoenpei/Desktop/dabai-robot-arm/experiments/models/samoyed_test_and_train/training/model.ckpt
INFO:tensorflow:Error reported to Coordinator: <class 'tensorflow.python.framework.errors_impl.FailedPreconditionError'>, ./samoyed_test_and_train/data/train.record/graph.pbtxt.tmpf4587d1958df43cbaa9a0d7a04199f6f
2017-08-01 10:35:29.556458: E tensorflow/core/util/events_writer.cc:62] Could not open events file: ./samoyed_test_and_train/data/train.record/events.out.tfevents.1501554929.MacBook-Pro.local: Failed precondition: ./samoyed_test_and_train/data/train.record/events.out.tfevents.1501554929.MacBook-Pro.local
2017-08-01 10:35:29.556480: E tensorflow/core/util/events_writer.cc:95] Write failed because file could not be opened.
Traceback (most recent call last):
File "object_detection/train.py", line 198, in <module>
tf.app.run()
File "/Users/zhaoenpei/.virtualenvs/python_virtual_1/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "object_detection/train.py", line 194, in main
worker_job_name, is_chief, FLAGS.train_dir)
File "/Users/zhaoenpei/Desktop/dabai-robot-arm/experiments/models/object_detection/trainer.py", line 290, in train
saver=saver)
File "/Users/zhaoenpei/.virtualenvs/python_virtual_1/lib/python2.7/site-packages/tensorflow/contrib/slim/python/slim/learning.py", line 732, in train
master, start_standard_services=False, config=session_config) as sess:
File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/Users/zhaoenpei/.virtualenvs/python_virtual_1/lib/python2.7/site-packages/tensorflow/python/training/supervisor.py", line 964, in managed_session
self.stop(close_summary_writer=close_summary_writer)
File "/Users/zhaoenpei/.virtualenvs/python_virtual_1/lib/python2.7/site-packages/tensorflow/python/training/supervisor.py", line 792, in stop
stop_grace_period_secs=self._stop_grace_secs)
File "/Users/zhaoenpei/.virtualenvs/python_virtual_1/lib/python2.7/site-packages/tensorflow/python/training/coordinator.py", line 389, in join
six.reraise(*self._exc_info_to_raise)
File "/Users/zhaoenpei/.virtualenvs/python_virtual_1/lib/python2.7/site-packages/tensorflow/python/training/supervisor.py", line 953, in managed_session
start_standard_services=start_standard_services)
File "/Users/zhaoenpei/.virtualenvs/python_virtual_1/lib/python2.7/site-packages/tensorflow/python/training/supervisor.py", line 709, in prepare_or_wait_for_session
self._write_graph()
File "/Users/zhaoenpei/.virtualenvs/python_virtual_1/lib/python2.7/site-packages/tensorflow/python/training/supervisor.py", line 612, in _write_graph
self._logdir, "graph.pbtxt")
File "/Users/zhaoenpei/.virtualenvs/python_virtual_1/lib/python2.7/site-packages/tensorflow/python/framework/graph_io.py", line 67, in write_graph
file_io.atomic_write_string_to_file(path, str(graph_def))
File "/Users/zhaoenpei/.virtualenvs/python_virtual_1/lib/python2.7/site-packages/tensorflow/python/lib/io/file_io.py", line 418, in atomic_write_string_to_file
write_string_to_file(temp_pathname, contents)
File "/Users/zhaoenpei/.virtualenvs/python_virtual_1/lib/python2.7/site-packages/tensorflow/python/lib/io/file_io.py", line 305, in write_string_to_file
f.write(file_content)
File "/Users/zhaoenpei/.virtualenvs/python_virtual_1/lib/python2.7/site-packages/tensorflow/python/lib/io/file_io.py", line 101, in write
self._prewrite_check()
File "/Users/zhaoenpei/.virtualenvs/python_virtual_1/lib/python2.7/site-packages/tensorflow/python/lib/io/file_io.py", line 87, in _prewrite_check
compat.as_bytes(self.__name), compat.as_bytes(self.__mode), status)
File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/contextlib.py", line 24, in __exit__
self.gen.next()
File "/Users/zhaoenpei/.virtualenvs/python_virtual_1/lib/python2.7/site-packages/tensorflow/python/framework/errors_impl.py", line 466, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.FailedPreconditionError: ./samoyed_test_and_train/data/train.record/graph.pbtxt.tmpf4587d1958df43cbaa9a0d7a04199f6f
The train_dir flag is meant to point at some (typically empty) directory where your training logs and checkpoints will be written during training. For example, it could be something like train_dir=/tmp/training_directory. It looks like you are trying to point it at your dataset, which the config file should already be pointing at.
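So the command would become something like this (the train_dir value here is just an example of a fresh output directory):
python object_detection/train.py \
    --logtostderr \
    --pipeline_config_path=./samoyed_test_and_train/training/ssd_mobilenet_v1_pets.config \
    --train_dir=./samoyed_test_and_train/train_output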