Unable to save a trained network: "KeyError: 'predictions_ib-0'" - python

I was trying to save a trained model using my_model.save('path/file.h5'). However, I got an error message: KeyError: 'predictions_ib-0'
I did some research but could not find a solution. I put my code on GitHub; the link is below. The error shows up at In[39]:
https://github.com/xizhenke/test2/blob/master/demo-Copy1.ipynb
Please give me some suggestions, thank you in advance!

I did more research and it seems to be an ongoing bug in the latest Keras releases (2.2.0+). What I ended up doing was downgrading Keras to 2.1.6, and it works like a charm. I hope this helps people who have a similar issue.
Make sure you use pip or conda to downgrade, or you may run into package dependency issues.
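For reference, a minimal sketch of the workaround (use whichever tool you originally installed Keras with):
pip install keras==2.1.6
# or, in a conda environment:
conda install keras=2.1.6
After reinstalling, restart the Python session and call my_model.save('path/file.h5') again.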

Related

Error when running code using pickle load

I'm writing my code exactly the same as in https://www.youtube.com/watch?v=y1ZrOs9s2QA&t=4124s at minute 1:11:09.
I wrote this code:
import pickle
pickle_in = open("venv/model_trained.p", "rb")  # open the pickled model file
model = pickle.load(pickle_in)  # load the trained model object
It showed an error when I tried to run it (see the screenshot of the error I got after running the code).
Is anyone else having the same issue?
Thank you.
Best Regards,
Bhetrand
Never mind, I solved it. The problem was my Python version, which does not support TensorFlow 2.0.0. Pickle works fine with TensorFlow 2.0.0, but you need Python 3.5-3.7 to run TensorFlow 2.0.0.
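As a quick check (a minimal sketch, assuming TensorFlow is installed in the environment), you can print the Python and TensorFlow versions before loading the pickle:
import sys
import tensorflow as tf
print(sys.version)      # TensorFlow 2.0.0 wheels require Python 3.5-3.7
print(tf.__version__)   # should report 2.0.0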

ssd_inception_v2 is not supported. See `model_builder.py` for features extractors compatible with different versions of Tensorflow

I'm working on an object detection project. I followed the instructions from GitHub, but I used a different model.
I ran this command:
python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/ssd_inception_v2_coco.config
The error is:
ValueError: ssd_inception_v2 is not supported. See `model_builder.py` for features extractors compatible with different versions of Tensorflow
I don't know why. I tried changing the model version but it still gives an error.
Please guide me on how to solve it.
I already figured out how to solve it. I was using a model that supports TensorFlow 1.*, but I built my program with TensorFlow 2.*. So I changed to a model that supports TensorFlow 2.*.
For those asking where to find TF2 versions of the ssd_mobilenet_v1_coco trained models, please visit the link below and download the appropriate model:
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md
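As a rough sketch (assuming the TF2 Object Detection API is installed; the config file name here is only illustrative, following the TF2 detection zoo naming scheme), training is launched with model_main_tf2.py instead of the TF1 train.py:
python model_main_tf2.py --model_dir=training/ --pipeline_config_path=training/ssd_mobilenet_v2_320x320_coco17_tpu-8.config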

KeyError: "Couldn't find enum caffe.EmitConstraint.EmitType"

After installing Caffe from source, I am facing the above issue while running the sample test projects.
What might be the cause of this error?
It seems your Caffe version and the version used to train/deploy the model you are trying to use are not the same. More specifically, the prototxt of the model you are trying to use has an "EmitType" parameter, and this parameter is not defined in the master caffe.proto.
Please make sure you use the same branch of Caffe as required by the model you are trying to run.
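As a minimal sketch (assuming Caffe's Python bindings are importable), you can check whether the protobuf module generated from your build's caffe.proto defines EmitConstraint at all:
from caffe.proto import caffe_pb2
# False on master Caffe; True on forks (e.g. the SSD branch) whose caffe.proto defines EmitConstraint
print(hasattr(caffe_pb2, 'EmitConstraint'))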

Error "ValueError: bad marshal data (unknown type code)" with Python 2.7.13 and Keras 2.0.8

I get the ValueError: bad marshal data (unknown type code) above when trying to load a previously saved Keras model. (I think it's a Python error that has nothing to do with Keras, but I'm not quite sure.)
from keras.models import load_model
from keras import __version__ as keras_version
model = load_model("model.h5")
I searched on Google but didn't find a working solution. I tried deleting the .pyc files with sudo find /usr -name '*.pyc' -delete, but that didn't help either.
Do you have an idea how I can fix this error? Thank you!
I know the post is a bit older, but I just ran into the same problem.
As Daniel Möller said, it was because I had installed different versions of Python, TensorFlow and Keras. Try training the model again in the same environment that you use to load the model afterwards, or at least make sure that Python and the relevant modules are installed in the same versions in both environments.
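As a quick sanity check (a minimal sketch), print the interpreter and library versions in both the training and the loading environment and confirm they match:
import sys
import tensorflow as tf
from keras import __version__ as keras_version
print(sys.version)       # Python version
print(tf.__version__)    # TensorFlow version
print(keras_version)     # Keras version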

Tensorflow with poets

I have a question about why TensorFlow for Poets was not able to classify the image I want. I am using Ubuntu 14.04 with TensorFlow installed using Docker. Here is my story:
After a successful retraining on the flower categories following the link here, I wanted to train on my own categories as well. I have 10 classes of images and they are well organized according to the tutorial. My photos were also stored in the tf_files directory, and following the guide I retrained the Inception model on my categories.
Everything in the retraining went well. However, when I tried to classify the image I want, I was unable to do so and I got this error. I also tried to look for the py file in /usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/errors_impl.py, but my dist-packages were empty! Can someone help me with this? Where can I find the py files? Thank you!
Your error indicates that it is unable to locate the file. I would suggest you execute the following command in the directory where you have the graph and label files:
python label_image.py exact_path_to_your_testimage_file.jpg
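As a minimal sanity check (the image path below is just the placeholder from the command above), confirm that the path actually exists; a missing file is exactly what the "unable to locate the file" error points at:
import os
image_path = "exact_path_to_your_testimage_file.jpg"  # replace with your real test image path
print(os.path.isfile(image_path))  # False here explains why label_image.py cannot find the file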
