I am a newbie to deep learning and its frameworks. I have collected some .jpg images of bicycle and bike objects, and now I want to train on these images using a capsule network.
The capsule network is implemented on the MNIST dataset. The code is here.
What changes are required to use my custom dataset? Should the images be converted to NumPy format or TFRecord format? And how do I load them into the network?
To load the custom dataset, I referred to this. It helped. The dataset is converted to a NumPy array and the Keras backend is used.
I load the custom dataset, and for the capsule network model I use this.
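For reference, here is a minimal sketch of that loading step (not the poster's exact code), assuming one folder per class and an MNIST-like 28x28 grayscale input; the root path, class-folder names, and target size are placeholders:

```python
# Minimal sketch: load one folder of .jpg images per class into NumPy arrays
# for a Keras capsule network. Root path, class names, and the 28x28
# grayscale target size are placeholder assumptions.
import os
import numpy as np
from PIL import Image

def load_dataset(root_dir, classes=("bicycle", "bike"), size=(28, 28)):
    images, labels = [], []
    for label, cls in enumerate(classes):
        cls_dir = os.path.join(root_dir, cls)
        for fname in os.listdir(cls_dir):
            if not fname.lower().endswith(".jpg"):
                continue
            img = Image.open(os.path.join(cls_dir, fname)).convert("L")
            img = img.resize(size)  # match the network's expected input shape
            images.append(np.asarray(img, dtype=np.float32) / 255.0)
            labels.append(label)
    x = np.stack(images)[..., np.newaxis]  # (N, H, W, 1), MNIST-like
    y = np.asarray(labels, dtype=np.int64)
    return x, y

x_train, y_train = load_dataset("data/train")
```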
Related
I have a dataset in TFRecord format, annotated on Roboflow. Now I want to train a Faster R-CNN and an SSD from scratch in TensorFlow. How do I load the dataset in the program and feed it to the model? Typical classification models have an input layer with a defined image shape, but here I have to deal with both the images and the annotations.
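One hedged starting point, assuming the TFRecords follow the TF Object Detection API feature-key convention that Roboflow exports (verify the keys against your own file), is to parse both the encoded image and the variable-length box annotations with tf.data:

```python
# Sketch: parse a detection-style TFRecord where each example carries an
# encoded image plus variable-length box annotations. The feature keys follow
# the TF Object Detection API convention, but check them against your export.
import tensorflow as tf

feature_spec = {
    "image/encoded": tf.io.FixedLenFeature([], tf.string),
    "image/object/bbox/xmin": tf.io.VarLenFeature(tf.float32),
    "image/object/bbox/ymin": tf.io.VarLenFeature(tf.float32),
    "image/object/bbox/xmax": tf.io.VarLenFeature(tf.float32),
    "image/object/bbox/ymax": tf.io.VarLenFeature(tf.float32),
    "image/object/class/label": tf.io.VarLenFeature(tf.int64),
}

def parse_detection_example(serialized):
    parsed = tf.io.parse_single_example(serialized, feature_spec)
    image = tf.io.decode_jpeg(parsed["image/encoded"], channels=3)
    boxes = tf.stack([tf.sparse.to_dense(parsed["image/object/bbox/" + k])
                      for k in ("ymin", "xmin", "ymax", "xmax")], axis=-1)
    labels = tf.sparse.to_dense(parsed["image/object/class/label"])
    return image, {"boxes": boxes, "labels": labels}

dataset = tf.data.TFRecordDataset("train.tfrecord").map(parse_detection_example)
```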
I am fairly new to Keras and I am trying transfer learning here:
https://www.tensorflow.org/tutorials/images/transfer_learning
My dataset, however, is not binary, and I have a TFRecord file. I can read the file in TensorFlow. I do not want to feed the images as an input to the network, since the input comes from the pretrained model. How can I pass the images and labels to the ImageDataGenerator class in Keras?
For anyone who may have this issue in the future: if the pre-training process is all correct, you can use the tf.data API to read and prepare the images for training, and the resulting (image, label) pairs can be fed to the .fit method of your model.
Look at this great post to get familiar with how to read a TFRecord file:
https://medium.com/@moritzkrger/speeding-up-keras-with-tfrecord-datasets-5464f9836c36
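As a concrete illustration, here is a minimal tf.data sketch; the feature keys ("image", "label") and the 160x160 target size are assumptions, so match them to however your TFRecords were written and to your pretrained base model:

```python
# Sketch: read a TFRecord file with tf.data and hand (image, label) pairs
# directly to model.fit, bypassing ImageDataGenerator entirely. Feature keys
# and image size are assumptions.
import tensorflow as tf

feature_spec = {
    "image": tf.io.FixedLenFeature([], tf.string),
    "label": tf.io.FixedLenFeature([], tf.int64),
}

def parse_example(serialized):
    parsed = tf.io.parse_single_example(serialized, feature_spec)
    image = tf.io.decode_jpeg(parsed["image"], channels=3)
    image = tf.image.resize(image, [160, 160]) / 255.0
    return image, parsed["label"]

dataset = (tf.data.TFRecordDataset("train.tfrecord")
           .map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)
           .shuffle(1000)
           .batch(32)
           .prefetch(tf.data.AUTOTUNE))

# model.fit(dataset, epochs=10)  # no ImageDataGenerator needed
```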
Directory structure:
Data
-Cats
--<images>.jpg
-Dogs
--<images>.jpg
I'm training an n-ary classification model. I want to create an input_fn that serves these images for training.
Image dimensions are (200, 200, 3). I have a (Keras) generator for them, if it can be used somehow.
I've been looking for a while but haven't found an easy way to do this. I thought this would be a standard use case? For example, Keras provides flow_from_directory to serve Keras models. I need to use a tf.estimator for AWS SageMaker, so I'm stuck with it.
By using the tf.data module you can feed your data directly into your estimator. You basically have three ways to integrate this into your API:
1. Convert your images into TFRecords and use TFRecordDataset.
2. Use tf.data.Dataset.from_generator to wrap the generators you already have.
3. Introduce decoder functions into your input pipeline (a sketch of this option follows below).
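Here is a minimal sketch of option 3, matching the Data/Cats and Data/Dogs layout from the question; the "image" feature key and the label mapping are assumptions:

```python
# Sketch of option 3: an input_fn that lists the .jpg files and decodes them
# inside the tf.data pipeline, so the result can be handed to a tf.estimator.
# The directory layout matches the question; the feature key and the
# Cats-vs-Dogs label mapping are assumptions.
import tensorflow as tf

def input_fn(data_dir="Data", batch_size=32):
    files = tf.data.Dataset.list_files(data_dir + "/*/*.jpg")

    def decode(path):
        image = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
        image = tf.image.resize(image, [200, 200]) / 255.0
        # Derive the label from the parent folder: Data/<class>/<file>.jpg
        class_name = tf.strings.split(path, "/")[-2]
        label = tf.cast(tf.equal(class_name, "Dogs"), tf.int64)  # 0=Cats, 1=Dogs
        return {"image": image}, label

    return (files.map(decode, num_parallel_calls=tf.data.AUTOTUNE)
                 .shuffle(500)
                 .batch(batch_size)
                 .repeat())

# estimator.train(input_fn=lambda: input_fn("Data"), steps=1000)
```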
I have an image classification problem where the number of classes increases over time, and when a new class is created I just train the model with images of the new class. I know this is not possible with a CNN alone, so to solve the problem I did transfer learning: I used a Keras pretrained model to extract image features, but instead of replacing the last (classification) layers with new layers, I used a random forest, which can handle an increasing number of classes. I achieved an accuracy of 86% using InceptionResnetV2 trained on the ImageNet dataset, which is good for now.
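For readers who want to reproduce that classification setup, a hedged sketch (not the poster's exact code) might look like the following; x_train and y_train are assumed to be loaded elsewhere as (N, 299, 299, 3) images and integer labels:

```python
# Sketch: pretrained InceptionResNetV2 as a frozen feature extractor feeding
# a scikit-learn random forest. x_train / y_train are assumed to exist.
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier

extractor = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", pooling="avg")

def extract_features(images):
    x = tf.keras.applications.inception_resnet_v2.preprocess_input(images)
    return extractor.predict(x)  # one pooled feature vector per image

features = extract_features(x_train)
forest = RandomForestClassifier(n_estimators=200)
forest.fit(features, y_train)  # refit as new classes appear
```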
Now I want to do the same but on an object detection problem. How can I achieve this? Can I use the TensorFlow Object Detection API?
Is it possible to replace the last layers of a pretrained CNN used in a detection algorithm like Faster R-CNN or SSD with a random forest?
Yes, you could implement the above-mentioned approach using the TensorFlow Object Detection API. You could also use your trained InceptionResnetV2 model as a feature extractor. The TensorFlow Object Detection API already has an InceptionResnetV2 feature extractor trained on the COCO dataset, available at https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md
Or, if you want to provide or create a custom feature extractor, please follow this link: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/defining_your_own_model.md
If you are new to the TensorFlow Object Detection API, please follow this tutorial:
https://github.com/EdjeElectronics/TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10
Hope this helps.
I am new to machine learning and recently took a course by Andrew Ng on Coursera.
After that I shifted to Python and used Pandas, NumPy, and scikit-learn to implement ML algorithms.
Now, while surfing, I came across TensorFlow and found it pretty amazing, and implemented this example which takes MNIST data as input.
Now I want to read my own custom images and use them for training. I am confused about how I should convert the images to MNIST-style data, or whether there is some other way to train my network.
I took this tutorial to create my network.
Information on the MNIST dataset can be found on Yann LeCun's website.
The TensorFlow module tensorflow.examples.tutorials.mnist.mnist_softmax.py appears to acquire and prepare the dataset for the train/test steps.
The MNIST dataset contains images of handwritten digits and their corresponding labels. If you would like to create a similar (image, label) pair for a new image, you could read the image into an array with scipy.misc.imread and assign the label yourself.
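Note that scipy.misc.imread has been removed from recent SciPy releases, so the sketch below reads the image with Pillow instead; the file path and label are placeholders:

```python
# Sketch: turn a custom image into an MNIST-style example, i.e. a flattened
# 28x28 grayscale vector of 784 floats plus a one-hot label. The path and
# label below are placeholder assumptions.
import numpy as np
from PIL import Image

def to_mnist_example(path, label, num_classes=10):
    img = Image.open(path).convert("L").resize((28, 28))
    pixels = np.asarray(img, dtype=np.float32).reshape(784) / 255.0
    one_hot = np.zeros(num_classes, dtype=np.float32)
    one_hot[label] = 1.0
    return pixels, one_hot

x, y = to_mnist_example("my_digit.jpg", label=3)
```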