Read data directly from folders for training in keras - python

I am doing super resolution with a ResNet in Keras. I have split my data into train and test sets (70/30), with 20% of the test data used for validation. I am trying to read the data with datagen.flow_from_directory, but it reports 0 images for 0 classes. The main issue is that I don't have classes: I only have high-resolution and low-resolution images. The high-resolution images are the outputs and the low-resolution images are the inputs. How can I load the data without separating it into classes?
from keras.preprocessing.image import ImageDataGenerator
import os
train_dir = r'G:\\images\\train'
train_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(train_dir)

To resolve "0 images for 0 classes", note that a common mistake is pointing the generator at a folder that has no subdirectories. ImageDataGenerator assigns images to classes based on the subdirectories of the directory you pass as its first argument, so the target directory must contain at least one subdirectory.
Furthermore, the generator has to label the images in order to feed them to your network. By default it uses the categorical mode, which produces 2D one-hot encoded labels. If you want the labels in another form, set the class_mode argument. For example, for autoencoders whose inputs have no separate label, specify class_mode='input'.
Based on the docs here, class_mode should be one of these:
categorical: 2D one-hot encoded labels (the default)
binary: 1D binary labels
sparse: 1D integer labels
input: images identical to the input images (mainly used for autoencoders)
None: no labels are returned (the generator only yields batches of image data, which is useful with model.predict())
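For the super-resolution question above, where the low-resolution images are the inputs and the high-resolution images are the targets, one common pattern is to create two generators with class_mode=None and the same seed, and pair them. The following is only a minimal sketch under assumed folder names and sizes (train_low and train_high, each containing at least one subdirectory, with matching filenames so the pairs line up):

from keras.preprocessing.image import ImageDataGenerator

lr_datagen = ImageDataGenerator(rescale=1./255)
hr_datagen = ImageDataGenerator(rescale=1./255)

seed = 42  # same seed keeps the two shuffled streams aligned
lr_generator = lr_datagen.flow_from_directory(
    r'G:\images\train_low', target_size=(64, 64), class_mode=None,
    batch_size=16, seed=seed)
hr_generator = hr_datagen.flow_from_directory(
    r'G:\images\train_high', target_size=(256, 256), class_mode=None,
    batch_size=16, seed=seed)

def paired_generator(lr_gen, hr_gen):
    # yields (low_res_batch, high_res_batch) pairs for model.fit
    for lr_batch, hr_batch in zip(lr_gen, hr_gen):
        yield lr_batch, hr_batch

train_generator = paired_generator(lr_generator, hr_generator)
# model.fit(train_generator, steps_per_epoch=len(lr_generator), epochs=10)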

Related

Can I flow_from_dataframe if my labels are also filepaths?

I am trying to train an image reconstruction network, which I would train like this:
vae.fit(X_train, X_train, epochs=10, batch_size=1)
where X_train is a NumPy array of the training images.
However, I want to use a generator because otherwise, I run out of memory. I have tried to use flow_from_dataframe, where I have all the file paths of the images stored (they are across multiple folders).
train_generator = datagen.flow_from_dataframe(
    dataframe=df,
    x_col="filepath",
    y_col="filepath")
The issue is that this function takes x_col (file path) and y_col (label). Since my loss function is based on reconstruction error, my label should be the same image itself. Is there a way to do this with this function, or with another kind of generator?
For autoencoders, you can set class_mode="input", and then you don't have to set y_col at all.
So try this:
train_generator = datagen.flow_from_dataframe(
    dataframe=df,
    x_col="filepath",
    class_mode="input")

Dividing images into patches tensorflow

I am trying to build a CNN and want to divide my input images into non-overlapping patches and then use it for training.
However, I am unsure how to combine the extraction of patches with the code below.
I believe a function like tf.image.extract_patches should do the trick but I am unsure how I can include it in the pipeline. It's important for me to use flow_from_directory as I have organised my dataset accordingly.
train_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
    train_dir, target_size=(64, 64), class_mode='categorical', batch_size=64)
I thought of using extract_patches_2d from scikit-learn, but it has two issues:
It gives random, overlapping patches.
I would need to re-save all the patches and reorganize my dataset again (the same issue as with tf.image.extract_patches, unless it is included in the pipeline).
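One possible approach (only a sketch, not a verified pipeline) is to keep flow_from_directory as-is and wrap it in a small Python generator that applies tf.image.extract_patches to every batch; the 16x16 patch size below is an assumed example value:

import tensorflow as tf

def patch_generator(generator, patch_size=16):
    for images, labels in generator:
        patches = tf.image.extract_patches(
            images=images,
            sizes=[1, patch_size, patch_size, 1],
            strides=[1, patch_size, patch_size, 1],  # stride == size -> non-overlapping
            rates=[1, 1, 1, 1],
            padding='VALID')
        # (batch, rows, cols, patch_size*patch_size*channels)
        # -> (batch*rows*cols, patch_size, patch_size, channels)
        patches = tf.reshape(patches, (-1, patch_size, patch_size, images.shape[-1]))
        # repeat each label once per patch so patches and labels stay aligned
        n_per_image = patches.shape[0] // images.shape[0]
        labels = tf.repeat(labels, n_per_image, axis=0)
        yield patches, labels

# patched_train = patch_generator(train_generator)
# model.fit(patched_train, steps_per_epoch=..., epochs=...)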

Single Prediction Image doesn't need to be rescaled?

I followed a tutorial to make my first Convolutional Neural Network using Keras and I have a small question regarding the rescaling step.
So when we are importing the training set and test set, we create an instance of the tf.keras.preprocessing.image.ImageDataGenerator class and use it as:
train_datagen = ImageDataGenerator(rescale=1/255)
Along with some other augmentation parameters. My understanding is that we use the rescale parameter to normalize the pixel values of the images imported.
But when we load up a single image to run through the CNN, we write something like (code from keras docs):
import numpy as np
import tensorflow as tf

image = tf.keras.preprocessing.image.load_img(image_path)
input_arr = tf.keras.preprocessing.image.img_to_array(image)
input_arr = np.array([input_arr])  # Convert single image to a batch.
predictions = model.predict(input_arr)
My question is, I cannot see the single input image being rescaled anywhere. Is it being done implicitly, or is there no need to actually perform rescaling? If the latter, then why is it so?
Thanks!
If the images were rescaled by 1/255 during training, the single prediction image must be normalized in the same way; otherwise the network will not be able to interpret it.
Likewise, when we use a test_datagen for the prediction generator, we apply the same rescaling by 1/255.
In general, any preprocessing applied during training (rescaling, mean subtraction, division by the standard deviation) also needs to be applied at test time.
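A minimal sketch of applying the same 1/255 rescaling to a single image before prediction (the path and target size are assumed; model is the trained CNN):

import numpy as np
import tensorflow as tf

image_path = 'single_image.jpg'  # hypothetical path
image = tf.keras.preprocessing.image.load_img(image_path, target_size=(64, 64))
input_arr = tf.keras.preprocessing.image.img_to_array(image)
input_arr = input_arr / 255.0                   # same rescaling as the training generator
input_arr = np.expand_dims(input_arr, axis=0)   # single image -> batch of one
predictions = model.predict(input_arr)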

Is Dataset Organization for Image Classification Necessary?

I'm currently working on a program that does binary image classification with machine learning. I have a list of labels and a list of images that I'm using as inputs, which are then fed into the Inception V3 model.
Will inputting the dataset this way work with the Inception V3 architecture? Is it necessary to organize the images into labeled folders before feeding them into the model?
Thanks for your help!
In your example, you already have all the images and labels in memory, so you can simply call model.fit(trainX, trainY) to train your model. There is no need to organize the images into a specific folder structure.
What you are referring to is the flow_from_directory() method of ImageDataGenerator. This object yields images from the directories and automatically infers the labels from the folder structure; in that case, your images should be arranged in one folder per label. Since ImageDataGenerator is a generator, you should use it in combination with model.fit_generator() (or model.fit(), which also accepts generators in recent versions of Keras).
As a third option, you can write your own custom generator that yields both images and labels. This is advised when you have a more complex label structure than one label per image, for instance in multi-label classification, object detection, or semantic segmentation, where the outputs are also images. A custom generator is likewise used with model.fit_generator() (or model.fit()).
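For the third option, a minimal sketch of a custom generator over in-memory lists of images and labels might look like this (names, shapes, and batch size are assumed):

import numpy as np

def batch_generator(images, labels, batch_size=32):
    n = len(images)
    while True:  # Keras expects the generator to loop indefinitely
        idx = np.random.permutation(n)
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            yield (np.array([images[i] for i in batch]),
                   np.array([labels[i] for i in batch]))

# model.fit(batch_generator(trainX, trainY), steps_per_epoch=len(trainX) // 32, epochs=10)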

Is it possible to train on multiple images sizes in keras?

Keras takes NumPy arrays as input for training data; however, it is possible to create models that accept variable input sizes. I'm wondering if there is a way to incorporate images of various dimensions in the training data for a model.
You cannot give variable-size images to train a model in Keras. According to the Keras API, the Input layer looks as follows:
Input(shape=(3, None, None))
where 3 is the number of channels for RGB images. But you have to tell Keras clearly what the width and height of the training images are, so Keras cannot handle variable-size images on its own.
You therefore have to transform the images to a specific size first and then train the model with Keras.
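For illustration, a minimal sketch of resizing variably sized images to one fixed size before stacking them into a single training array (the paths and the 224x224 target size are assumed):

import glob
import numpy as np
import tensorflow as tf

image_paths = glob.glob('images/*.jpg')  # hypothetical location
target_size = (224, 224)

resized = []
for path in image_paths:
    img = tf.keras.preprocessing.image.load_img(path, target_size=target_size)
    resized.append(tf.keras.preprocessing.image.img_to_array(img) / 255.0)

X_train = np.stack(resized)  # shape: (num_images, 224, 224, 3)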
