I'm currently working on a TensorFlow project and I am getting this error:
ValueError: model_config not of type model_pb2.DetectionModel.
It happens when I'm trying to train my model. Has anyone encountered this issue before? This is my first question on here, so be gentle.
Chinatowns-MacBook-Air:object_detection nathangrant$ python3 train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/ssd_mobilenet_v1_pets.config
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.7
return f(*args, **kwds)
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/platform/app.py:48: main (from __main__) is deprecated and will be removed in a future version.
Instructions for updating:
Use object_detection/model_main.py.
Traceback (most recent call last):
File "train.py", line 184, in <module>
tf.app.run()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py", line 136, in new_func
return func(*args, **kwargs)
File "train.py", line 180, in main
graph_hook_fn=graph_rewriter_fn)
File "/Users/nathangrant/Downloads/models-master/research/object_detection/legacy/trainer.py", line 245, in train
detection_model = create_model_fn()
File "/Users/nathangrant/Downloads/models-master/research/object_detection/builders/model_builder.py", line 105, in build
raise ValueError('model_config not of type model_pb2.DetectionModel.')
ValueError: model_config not of type model_pb2.DetectionModel.
Try this first.
Solution A:
Download this complete "models_shareToPublic".rar file.
Unzip/Unrar the file.
Copy all the contents of the folder to your project library directory.
Run your code/program again.
Solution B:
Open a command prompt in your project library directory.
In my case, it is: (my_project_name\venv\Lib\site-packages)
Clone the master branch of the TensorFlow Models repository by running the command below in the prompt opened in the directory above (this might take a few minutes depending on your network speed):
git clone https://github.com/tensorflow/models.git
Move all the contents inside the models folder into the library directory stated in Step 1.
Create "*.pb2" files by following this steps
Change the import line below in model_builder.py
Change from this:
from protos import model_pb2
to this:
from object_detection.protos import model_pb2
Run your code/program again; the error should be gone. (A quick sanity check is sketched below.)
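To confirm that the generated protos and the import change took effect, a quick sanity check along these lines might help (this is only a sketch; it assumes the research directory is on your PYTHONPATH and reuses the config path from the question):

from google.protobuf import text_format
from object_detection.protos import model_pb2, pipeline_pb2

# Parse the pipeline config and check that its model message really is the
# DetectionModel type that model_builder.build() expects.
pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
with open('training/ssd_mobilenet_v1_pets.config') as f:
    text_format.Merge(f.read(), pipeline_config)

# If this prints False, two different copies of model_pb2 are being imported.
print(isinstance(pipeline_config.model, model_pb2.DetectionModel))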
Notes:
If this appears:
ModuleNotFoundError: No module named 'object_detection'
Go to "research" folder and copy the "object_detection" folder to the directory same with "research" folder. So that it can call the library directly, if not, you have to adjust the import line in each .py file (very wasting time)
Hope it works to you too, Good Luck!
On my computer, I did the following and it fixed the problem:
In model_builder.py, don't use
from protos import model_pb2
use
from object_detection.protos import model_pb2
instead.
I made an exe that reads images, including GIFs, using PyInstaller. However, the exe can't read GIFs, and the error below occurs. I have installed the latest version of imageio, and the program works in the interpreter. Could you advise? Thanks in advance.
The line raising the error:
cv_gif = imageio.mimread(pic_path)
Error:
Traceback (most recent call last):
File "Image Viewer.py", line 3085, in <module>
File "Image Viewer.py", line 522, in go
File "Image Viewer.py", line 1264, in ShowANI
File "imageio\core\functions.py", line 247, in mimread
File "imageio\core\imopen.py", line 277, in imopen
ValueError: Could not find a backend to open `C:\Users\simon\Practice\testdir3\alpaca.gif` with iomode `rI`.
Based on the extension, the following plugins might add capable backends:
pillow: pip install imageio[pillow]
GIF-PIL: pip install imageio[pillow]
I have exactly the same problem.
This might have something to do with a new feature of imageio introduced in version 2.11.0: choosing the plugin based on the extension, with lazy plugin imports. Reverting to 2.10.5 solves the problem.
I just manually copied the \Lib\site-packages\imageio folder and replaced the imageio folder of the exe, and then it worked.
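As an alternative to pinning the version (untested inside a frozen exe, and "GIF-PIL" is simply the name of imageio's legacy Pillow GIF plugin), you can request the plugin explicitly so the extension-based lookup added in 2.11.0 is bypassed:

import imageio

# Path from the question; adjust to your own file.
pic_path = r"C:\Users\simon\Practice\testdir3\alpaca.gif"

# Ask the legacy API for the Pillow-based GIF reader by name instead of
# letting imageio resolve a plugin from the .gif extension at runtime.
frames = imageio.mimread(pic_path, format="GIF-PIL")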
I am new to GCP Dataflow.
I am trying to read text files (each a one-line JSON string) from GCP Cloud Storage into JSON format, then split them based on the value of a certain field, and output them to GCP Cloud Storage (as JSON-string text files).
Here is my code:
However, I encounter the following error on GCP Dataflow:
Traceback (most recent call last):
File "main.py", line 169, in <module>
run()
File "main.py", line 163, in run
shard_name_template='')
File "C:\ProgramData\Miniconda3\lib\site-packages\apache_beam\pipeline.py", line 426, in __exit__
self.run().wait_until_finish()
File "C:\ProgramData\Miniconda3\lib\site-packages\apache_beam\runners\dataflow\dataflow_runner.py", line 1346, in wait_until_finish
(self.state, getattr(self._runner, 'last_error_msg', None)), self)
apache_beam.runners.dataflow.dataflow_runner.DataflowRuntimeException: Dataflow pipeline failed. State: FAILED, Error:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/dataflow_worker/batchworker.py", line 773, in run
self._load_main_session(self.local_staging_directory)
File "/usr/local/lib/python3.7/site-packages/dataflow_worker/batchworker.py", line 489, in _load_main_session
pickler.load_session(session_file)
File "/usr/local/lib/python3.7/site-packages/apache_beam/internal/pickler.py", line 287, in load_session
return dill.load_session(file_path)
File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 410, in load_session
module = unpickler.load()
File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 474, in find_class
return StockUnpickler.find_class(self, module, name)
AttributeError: Can't get attribute '_JsonSink' on <module 'dataflow_worker.start' from '/usr/local/lib/python3.7/site-packages/dataflow_worker/start.py'>
I am able to run this script locally, but it fails when I try to use the DataflowRunner.
Please give me some suggestions.
PS. apache-beam version: 2.15.0
[Update 1]
I tried @Yueyang Qiu's suggestion and added
pipeline_options.view_as(SetupOptions).save_main_session = True
The provided link says:
DoFn's in this workflow relies on global context (e.g., a module imported at module level)
This link supports the suggestion above.
However, the same error occurred.
So, I am wondering whether my implementation of _JsonSink (which inherits from filebasedsink.FileBasedSink) is wrong, or whether something else needs to be added.
Any opinion would be appreciated, thank you all!
You have encountered a known issue: currently (as of the 2.17.0 release), Beam does not support super() calls in the main module on Python 3. Please take a look at possible solutions in BEAM-6158. Udi's answer is a good way to address this until BEAM-6158 is resolved; that way you don't have to run your pipeline on Python 2.
Using the guidelines from here, I managed to get your example to run.
Directory structure:
./setup.py
./dataflow_json
./dataflow_json/dataflow_json.py (no change from your example)
./dataflow_json/__init__.py (empty file)
./main.py
setup.py:
import setuptools
setuptools.setup(
    name='dataflow_json',
    version='1.0',
    install_requires=[],
    packages=setuptools.find_packages(),
)
main.py:
from __future__ import absolute_import
from dataflow_json import dataflow_json
if __name__ == '__main__':
    dataflow_json.run()
and you run the pipeline with python main.py.
Basically what's happening is that the '--setup_file=./setup.py' flag tells Beam to create a package and install it on the Dataflow remote worker. The __init__.py file is required for setuptools to identify the dataflow_json/ directory as a package.
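If you prefer to keep everything in code rather than on the command line, a sketch of passing the flag through PipelineOptions could look like this (the project and bucket names are placeholders, not values from the question):

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, SetupOptions

# Placeholders: fill in your own project and staging bucket.
options = PipelineOptions(
    runner='DataflowRunner',
    project='my-gcp-project',
    temp_location='gs://my-bucket/tmp',
)
# Tell Beam to build and stage the dataflow_json package on the workers.
options.view_as(SetupOptions).setup_file = './setup.py'

with beam.Pipeline(options=options) as p:
    ...  # build the pipeline as in dataflow_json.run()

Either way, the effect is the same as passing --setup_file=./setup.py on the command line.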
I finally found the problem:
the class '_JsonSink' I implemented uses some features from Python 3.
However, I was not aware of which version of Python I was using for the DataflowRunner.
(Actually, I have not figured out how to specify the Python version for the Dataflow runner on GCP. Any suggestions?)
Hence, I rewrote my code in a Python 2-compatible way, and everything works fine! An illustrative sketch of the change is below.
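For anyone hitting the same thing: the change essentially amounted to replacing the zero-argument super() form (Python 3 only) with the explicit two-argument form. The sketch below is illustrative only, not my exact sink (the coder and file suffix are assumptions):

from apache_beam.coders import coders
from apache_beam.io import filebasedsink

class _JsonSink(filebasedsink.FileBasedSink):
    def __init__(self, file_path_prefix, file_name_suffix='.json'):
        # Python 3-only form:  super().__init__(...)
        # Python 2/3-compatible form, which also sidesteps BEAM-6158 when
        # the class is defined in the main module:
        super(_JsonSink, self).__init__(
            file_path_prefix,
            coder=coders.StrUtf8Coder(),
            file_name_suffix=file_name_suffix)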
Thanks to all of you!
Can you try setting the option save_main_session = True, as is done here: https://github.com/apache/beam/blob/a2b0ad14f1525d1a645cb26f5b8ec45692d9d54e/sdks/python/apache_beam/examples/cookbook/coders.py#L88.
I am trying to connect to the Google App Engine Datastore from my local machine. I have spent all day digging into this without any luck.
I have tried the approach here (as well as a lot of other suggestions from SO, such as Using gcloud-python in GAE and Unable to run dev_appserver.py with gcloud):
How to access a remote datastore when running dev_appserver.py?
I first installed gcloud based on this description from google:
https://cloud.google.com/appengine/docs/python/tools/using-libraries-python-27
According to the description I should add the following to my appengine_config.py:
from google.appengine.ext import vendor
vendor.add('lib')
If I do that I get an error saying ImportError: No module named gcloud.
If I then move the code to my main.py, it seems to pick up the lib folder and the modules there. That seems a bit strange to me, since I thought appengine_config.py was run first to make sure things were initialised.
But now I am getting the following stack trace:
ERROR 2016-09-23 17:22:30,623 cgi.py:122] Traceback (most recent call last):
File "/Users/thomasd/Documents/github/myapp/main.py", line 10, in <module>
from gcloud import datastore
File "/Users/thomasd/Documents/github/myapp/lib/gcloud/__init__.py", line 17, in <module>
from pkg_resources import get_distribution
File "/Users/thomasd/Documents/github/myapp/lib/pkg_resources/__init__.py", line 2985, in <module>
@_call_aside
File "/Users/thomasd/Documents/github/myapp/lib/pkg_resources/__init__.py", line 2971, in _call_aside
f(*args, **kwargs)
File "/Users/thomasd/Documents/github/myapp/lib/pkg_resources/__init__.py", line 3013, in _initialize_master_working_set
dist.activate(replace=False)
File "/Users/thomasd/Documents/github/myapp/lib/pkg_resources/__init__.py", line 2544, in activate
declare_namespace(pkg)
File "/Users/thomasd/Documents/github/myapp/lib/pkg_resources/__init__.py", line 2118, in declare_namespace
_handle_ns(packageName, path_item)
File "/Users/thomasd/Documents/github/myapp/lib/pkg_resources/__init__.py", line 2057, in _handle_ns
loader.load_module(packageName)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pkgutil.py", line 246, in load_module
mod = imp.load_module(fullname, self.file, self.filename, self.etc)
File "/Library/Python/2.7/site-packages/google/cloud/logging/__init__.py", line 18, in <module>
File "/usr/local/google_appengine/google/appengine/tools/devappserver2/python/sandbox.py", line 999, in load_module
raise ImportError('No module named %s' % fullname)
ImportError: No module named google.cloud.logging.client
What am I doing wrong here?
The google-cloud library does not work on App Engine, and most likely you don't even need it, since you can use the built-in functionality.
From the official docs you can use it like this:
import cloudstorage as gcs
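A minimal sketch of writing a file with that built-in client (the bucket and object name are placeholders):

import cloudstorage as gcs

# Write a small text file to Cloud Storage from App Engine.
gcs_file = gcs.open('/my-bucket/hello.txt', 'w', content_type='text/plain')
gcs_file.write('hello from App Engine')
gcs_file.close()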
I solved it this way:
1.) Create a lib folder in your project path.
2.) Install the gcloud libraries by running the following command in a terminal from your project path:
pip install -t lib gcloud
3.) Create an appengine_config.py module in your project and add the following lines of code:
import sys
import os.path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'lib'))
4.) After this, you can import it like this:
from gcloud import datastore
5.) To save data into the live Google Datastore from your local machine:
client = datastore.Client("project-id")  # use your real project ID
key = client.key('Person')  # partial key; the ID is allocated on put()
entity = datastore.Entity(key=key)
entity['name'] = 'ashish'
entity['age'] = 23
client.put(entity)
This will save an entity of kind Person with properties name and age. Do not forget to specify your correct project ID.
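To verify the write from the same script, you can read the entity back (this reuses the client and entity from step 5; the query form is just one option):

# Fetch the entity back by its key (the ID is filled in by client.put()).
fetched = client.get(entity.key)
print(fetched['name'], fetched['age'])

# Or list every Person entity in the project.
for person in client.query(kind='Person').fetch():
    print(dict(person))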
Old question but this may be worth including:
I'm unsure of the state of your requirements.txt file, but I dug through mine a bit and noticed setuptools was not included.
(Related question: pip freeze doesn't export setuptools.)
Assuming you're following the tutorial, you likely installed those libraries into lib, EXCEPT for setuptools.
I added setuptools=={versionnumber} to requirements.txt and that fixed this related issue for me.
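If you are unsure which version to pin, a quick way to check what is installed locally (so you can mirror it in requirements.txt) is:

# Print the locally installed setuptools version; pin the same one in
# requirements.txt as setuptools==<printed version>.
import setuptools
print(setuptools.__version__)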
Using PyInstaller to package a Python script.
PyInstaller version: 3.2
OS: Ubuntu
Traceback (most recent call last):
File "<string>", line 57, in <module>
File "<string>", line 29, in feature_extract
File "caffe/io.py", line 295, in load_image
File "skimage/io/_io.py", line 100, in imread
File "skimage/io/manage_plugins.py", line 194, in call_plugin
RuntimeError: No suitable plugin registered for imread.
You may load I/O plugins with the `skimage.io.use_plugin` command. A list of all available plugins can be found using `skimage.io.plugins()`.
file_test returned -1
I have been getting the above error. Could someone please tell me how I would fix it?
The problem seems to be related to this GitHub issue; essentially, the skimage.io._plugins submodule makes life hard for PyInstaller.
To make sure everything you need is packaged, you should have a hook file that contains:
from PyInstaller.utils.hooks import collect_data_files, collect_submodules
datas = collect_data_files("skimage.io._plugins")
hiddenimports = collect_submodules('skimage.io._plugins')
(or if you already have a hook file with these, extend the current datas and hiddenimports).
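Put together, the hook file might look like the following (the file name follows PyInstaller's hook-<module>.py convention, and the hooks directory name is just an example):

# hooks/hook-skimage.io._plugins.py
# Build with:  pyinstaller --additional-hooks-dir=hooks your_script.py
from PyInstaller.utils.hooks import collect_data_files, collect_submodules

# Bundle the plugin data files and the lazily imported plugin submodules
# that skimage.io discovers at runtime.
datas = collect_data_files("skimage.io._plugins")
hiddenimports = collect_submodules("skimage.io._plugins")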
This is probably a simple problem. I downloaded the pywiiuse library from here, and I also downloaded the examples. However, when I try to run one of the examples I end up with import issues, and I'm not certain I have everything configured properly. One error I receive when trying to run example.py:
Press 1&2
Traceback (most recent call last):
File "example.py", line 73, in <module>
wiimotes = wiiuse.init(nmotes)
File "/home/thed0ctor/Descargas/wiiuse-0.12/wiiuse/__init__.py", line 309, in init
dll = ctypes.cdll.LoadLibrary('libwiiuse.so')
File "/usr/lib/python2.7/ctypes/__init__.py", line 431, in LoadLibrary
return self._dlltype(name)
File "/usr/lib/python2.7/ctypes/__init__.py", line 353, in __init__
self._handle = _dlopen(self._name, mode)
OSError: libwiiuse.so: cannot open shared object file: No such file or directory
I'm really just starting out with this library and don't see any documentation on how to configure pywiiuse, so any help is much appreciated.
The pywiiuse library is a Python wrapper for the wiiuse C library.
Before you can use the wrapper, you will first need to install the library it wraps. Choose the newest version from this download page and download the appropriate installation package for your system (probably the .tar.gz, since you appear to be on Linux).
Add a symlink to libwiiuse.so in /usr/local/lib (you may also need to run ldconfig so the dynamic loader picks it up).
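As an alternative to symlinking, you could also load the library by absolute path; this would mean editing the LoadLibrary call in wiiuse/__init__.py, and the path below assumes wiiuse was installed under /usr/local/lib:

import ctypes

# Load the wiiuse shared library by absolute path instead of relying on the
# dynamic loader's default search path.
dll = ctypes.cdll.LoadLibrary('/usr/local/lib/libwiiuse.so')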
I also ran into this situation. I know why it happens, but I don't know the deeper reason.