I'm learning TensorFlow, running version r0.10 on Ubuntu 16.04. I am working on the CIFAR-10 Tutorial and have trained the CNN in the example.
Where is the image data stored for this tutorial?
The data path is defined on this line, in cifar10.py:
tf.app.flags.DEFINE_string('data_dir', '/tmp/cifar10_data',
"""Path to the CIFAR-10 data directory.""")
However, I am confused as to why I cannot find this directory. I have attempted to search for it manually, and have also looked through all the example directories for it.
It is being saved under a temporary directory for your OS (/tmp/cifar10_data by default), not under your working directory. Take a look at my answer here and see if that helps.
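If the directory still cannot be found, a quick standard-library check can confirm whether anything was downloaded at all. A minimal sketch (the default path below is the one from the flag above):

```python
import os

def list_dataset_files(data_dir="/tmp/cifar10_data"):
    """Walk the tutorial's download directory and return every file in it.
    An empty list means the dataset was never downloaded on this machine."""
    found = []
    for root, _dirs, files in os.walk(data_dir):
        for name in files:
            found.append(os.path.join(root, name))
    return found

print(list_dataset_files())
```

Since `data_dir` is defined with tf.app.flags, you can also point the tutorial somewhere else by passing `--data_dir=/some/other/path` on the command line.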
I am trying to train an SSD-based face detector from scratch in Caffe. I have a dataset of images where the bounding boxes for faces are stored in a csv file. In all the tutorials and code I've come across so far, convert_annoset tool is used to generate an lmdb file for object detection. However, in the latest Caffe version on Github (https://github.com/BVLC/caffe), this tool has been removed.
The two options I see to deal with this issue are:
Rewrite the convert_annoset tool using functions in the current Caffe library
Use other python packages (such as lmdb and OpenCV) to manually create lmdb files from the images and bounding box information
I've been trying to rewrite the code but I am unable to find certain classes and functions that were used in the original code such as AnnotatedDatum_AnnotationType and LabelMap in the current version.
Any suggestions for how to proceed with creating lmdb for the object detection problem?
EDIT:
I've just realized that AnnotatedData layer no longer exists in the master branch of Caffe. Does this mean that detection is not possible in this version? Do I have to use some older fork such as https://github.com/weiliu89/caffe for detection or is there any other option?
Since the original Caffe does not have the layers needed for object detection (e.g. the AnnotatedData layer used by MobileNet-SSD), use the ssd branch of this Caffe fork (https://github.com/weiliu89/caffe) instead.
That fork includes the convert_annoset tool that can generate an lmdb.
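For the manual route (option 2 in the question), the first step is grouping the CSV rows into one annotation record per image, which is the per-image shape you would then serialize into lmdb entries. A minimal stdlib sketch, assuming columns named filename, xmin, ymin, xmax, ymax (adjust to your CSV):

```python
import csv
import io
from collections import defaultdict

def group_boxes_by_image(csv_text):
    """Group face bounding boxes into one record per image; each record
    is what you would later serialize into a single lmdb entry."""
    records = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        records[row["filename"]].append(
            (int(row["xmin"]), int(row["ymin"]),
             int(row["xmax"]), int(row["ymax"])))
    return dict(records)
```

Each record can then be written with the `lmdb` package (`env = lmdb.open(path, map_size=...)`, then `txn.put(key, value)` inside `env.begin(write=True)`), with the value encoded however your data layer expects.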
I am trying to build an image classifier with Keras and TensorFlow. However, currently flow_from_directory does not see my images because they are in .gif format (I checked this with .jpg and it works there). How can I fix this?
This old github page claims that I should be able to put .gif on my white_list_formats in the keras/preprocessing/image.py file. But after opening it there seems to be no white_list_formats in my version of image.py. Did keras change anything here?
I'm on Windows using anaconda3 distribution in case that matters.
Thanks for your help!
The github post you are referring to was posted for Keras version 1.1.0. In the latest version that variable has been removed.
In the latest version of Keras, only the following formats are supported:
PNG, JPG, BMP, PPM, TIF
Please refer to the documentation here.
If you still want to try editing white_list_formats, install Keras 1.1.0 and try it there.
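Alternatively, you can convert the .gif files to a supported format once, up front, and leave Keras untouched. A minimal sketch using Pillow (assumed installed, since Keras image loading itself depends on it):

```python
import os
from PIL import Image  # Pillow, the library Keras uses for image loading

def convert_gifs_to_png(directory):
    """Convert every .gif under `directory` to a .png next to it, so
    flow_from_directory (which accepts png/jpg/bmp/ppm/tif) can read it."""
    converted = []
    for root, _dirs, files in os.walk(directory):
        for name in files:
            if name.lower().endswith(".gif"):
                src = os.path.join(root, name)
                dst = os.path.splitext(src)[0] + ".png"
                Image.open(src).convert("RGB").save(dst, "PNG")
                converted.append(dst)
    return converted
```

Run it once on your dataset root (the same directory you pass to flow_from_directory) and then delete or ignore the original .gif files.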
I am about to learn about neural networks and want to reproduce a tutorial that trains a neural network to identify handwritten digits. The training of the neural network should be done with the MNIST data set. Unfortunately, that is exactly where my issue arises, as I am not able to read in the MNIST data set.
The environment I am using is a Jupyter Notebook and Python 3.
These are the lines of code I have (line 2 causes the issue):
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot = True)
Line 2 causes this error message:
ModuleNotFoundError: No module named 'tensorflow.contrib'
Ok, what the error tells me is clear: in my TensorFlow installation folder, the directory /tensorflow/contrib/... does not exist.
The issue is caused by line 2, as the module input_data.py contains this line of code:
from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets
So, the core of my issue is that I do not know where to get the module read_data_sets from. I was searching on GitHub, but the path
/tensorflow/contrib/learn/python/learn/datasets/mnist/
does not exist there.
In detail: the subfolder 'mnist' is not to be found on GitHub, so I also cannot find the file read_data_sets.py.
So, where do I find the missing module 'read_data_sets'?
It would be great if someone could help me, as this issue stops my attempt to deal with neural networks at the very beginning.
Thanks a lot and kind regards,
Matthias
It seems that you are using a newer version of TensorFlow (>= 1.13.0), in which tensorflow.contrib was removed, so you may follow this link if you want to load the MNIST dataset.
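If you only need the data itself, the raw MNIST IDX files can also be parsed with the standard library, with no dependency on the removed tensorflow.contrib module. A minimal sketch (the magic number and layout follow the published IDX format; you download the files yourself from the MNIST page):

```python
import gzip
import struct

def read_idx_images(path):
    """Parse an MNIST IDX image file (e.g. train-images-idx3-ubyte.gz)
    into a list of flat per-image byte strings of length rows*cols."""
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rb") as f:
        magic, n, rows, cols = struct.unpack(">IIII", f.read(16))
        assert magic == 2051, "not an IDX image file"
        data = f.read(n * rows * cols)
    return [data[i * rows * cols:(i + 1) * rows * cols] for i in range(n)]

def one_hot(labels, depth=10):
    """Replicate the one_hot=True behaviour of the removed read_data_sets."""
    return [[1 if i == label else 0 for i in range(depth)] for label in labels]
```

The label files use the same header layout with magic number 2049 and one byte per label, so they can be read the same way.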
I have a question about why TensorFlow for Poets was not able to classify the image I want. I am using Ubuntu 14.04 with TensorFlow installed using Docker. Here is my story:
After a successful retraining on the flower categories following the link here, I wished to train on my own categories as well. I have 10 classes of images and they are well organized according to the tutorial. My photos were also stored in the tf_files directory, and following the guide I retrained the Inception model on my categories.
Everything in the retraining went well. However, when I tried to classify the image I want, I was unable to do so and got this error. I also tried to look for the py file in /usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/errors_impl.py, but my dist-packages were empty! Can someone help me with this? Where can I find the py files? Thank you!
Your error indicates that it is unable to locate the file. I would suggest executing the following command in the directory where you have the graph and label files:
python label_image.py exact_path_to_your_testimage_file.jpg
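Since the error means a file could not be located, it can help to verify the paths before running the command. A small stdlib sketch (the default graph/label file names below are the tutorial's conventional ones, and are an assumption; adjust if yours differ):

```python
import os

def missing_inputs(image_path, graph_path="retrained_graph.pb",
                   labels_path="retrained_labels.txt"):
    """Return the subset of label_image.py's input files that do not exist.
    A 'unable to locate the file' error is usually one of these paths being
    wrong relative to the current working directory."""
    return [p for p in (image_path, graph_path, labels_path)
            if not os.path.isfile(p)]
```

An empty result means all three files were found and the command should be able to open them.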
When I follow the tutorial "How to Retrain Inception's Final Layer for New Categories", I need to build the retrainer like this:
bazel build tensorflow/examples/image_retraining:retrain
However, my tensorflow on windows does not have such directory. I am wondering why and how can I solve the problem?
Thank you in advance
In my case the TensorFlow version is 1.2 and the corresponding retrain.py is here.
Download and extract the flowers images from here.
Now run the retrain.py file as
python retrain.py --image_dir=path\to\dir\where\flowers\images\were\extracted --output_labels=retrained_labels.txt --output_graph=retrained_graph.pb
Note: the last two arguments in the above command are optional.
Now to test the retrained model:
Go to the master branch and download the label_image.py code as shown below.
Then run python label_image.py --image=path/to/test/image --graph=retrained_graph.pb --labels=retrained_labels.txt
The result will look like the screenshot below.
From the screenshot, it appears that you have installed the TensorFlow PIP package, whereas the instructions in the image retraining tutorial assume that you have cloned the Git repository (and can use bazel to build TensorFlow).
However, fortunately the script (retrain.py) for image retraining is a simple Python script, which you can download and run without building anything. Simply download the copy of retrain.py from the branch of the TensorFlow repository that matches your installed package (e.g. if you've installed TensorFlow 0.12, you can download this version), and you should be able to run it by typing python retrain.py at the Command Prompt.
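To pick the matching copy of retrain.py, note that the TensorFlow repo's release branches follow an r<major>.<minor> naming convention, so the branch name can be derived from tf.__version__. A small helper sketch:

```python
def retrain_branch_for(tf_version):
    """Map an installed TensorFlow version string (what `import tensorflow
    as tf; print(tf.__version__)` reports) to the release-branch name used
    by the TensorFlow GitHub repo, from which to fetch a matching retrain.py."""
    major, minor = tf_version.split(".")[:2]
    return "r{}.{}".format(major, minor)
```

For example, an installed 0.12.1 package maps to the r0.12 branch of the repository.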
I had the same problem on Windows. My Windows could not find the retrain script. I downloaded the retrain.py file from the TensorFlow website here. Then I copied the file into the tensorflow folder and ran the retrain script using the python command.