How to reshape the (image, label) dataset after py_function - python

I am trying to read a custom mapped dataset for training, but after I map the dataset using a py_function, the element shapes come out unknown:
def process_path(file_path):
    label = get_label(file_path)
    img = tf.io.read_file(file_path)
    img = decode_img(img)
    print('image shape:', img.shape)    # prints correctly: image shape: (180, 180, 3)
    print('label shape:', label.shape)  # prints correctly: label shape: ()
    return img, label

train_ds = train_ds.map(lambda x: tf.py_function(process_path, [x], (tf.float32, tf.int32)))
print(train_ds)
# prints an unknown shape: <PrefetchDataset shapes: (<unknown>, <unknown>), types: (tf.float32, tf.int32)>
This makes model.fit() fail, so I want to set the dataset to the correct shape, like:
<BatchDataset shapes: ((None, 180, 180, 3), (None,)), types: (tf.float32, tf.int32)>
using:
train_ds = tf.reshape(train_ds, ((None, 180, 180, 3), (None,)))
But this will give an error:
ValueError: Attempt to convert a value (<MapDataset shapes: (<unknown>, <unknown>), types: (tf.float32, tf.int32)>) with an unsupported type (<class 'tensorflow.python.data.ops.dataset_ops.MapDataset'>) to a Tensor.
How can I correctly assign the (image, label) shape in this step?

You don't need py_function here. Let's say you have a folder called /dogs that is full of jpgs. You can use these two little functions to load and decode.
The first one returns 1 if the file path (e.g., 'dogs\\dog1.jpg') is in the dogs folder and 0 otherwise.
The second one takes a file path, decodes the image into floats between 0 and 1, and then resizes the picture.
Let me know if anything is unclear.
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import tensorflow as tf
from glob2 import glob

os.chdir('c:/users/nicol/pictures')
files = glob('*/*jpg')

def get_label(file_path):
    split = tf.strings.split(file_path, sep=os.sep)[0]
    equal = tf.equal(split, 'dogs')
    cast = tf.cast(equal, tf.int32)
    return cast

def process_path(file_path):
    label = get_label(file_path)
    img = tf.io.read_file(file_path)
    img = tf.image.decode_jpeg(img, channels=3)
    img = tf.image.convert_image_dtype(img, tf.float32)
    img = tf.image.resize(img, size=(180, 180))
    return img, label

train_ds = tf.data.Dataset.from_tensor_slices(files).map(process_path)
next(iter(train_ds))
(<tf.Tensor: shape=(180, 180, 3), dtype=float32, numpy=
 array([[[1.41176477e-01, 9.41176564e-02, 1.33333340e-01],
         [1.41176477e-01, 9.41176564e-02, 1.33333340e-01],
         [1.41176477e-01, 9.41176564e-02, 1.33333340e-01],
         ...,
         [2.63300300e-01, 2.76176542e-01, 4.67582583e-01],
         [2.46176332e-01, 2.59706050e-01, 4.50785339e-01],
         [2.54726082e-01, 2.68909693e-01, 4.59662050e-01]]], dtype=float32)>,
 <tf.Tensor: shape=(), dtype=int32, numpy=1>)
get_label should return an integer, if it's not already the case.
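If you really do need py_function (for logic that cannot run in graph mode), here is a minimal sketch of restoring the static shapes afterwards with Tensor.set_shape, assuming 180x180 RGB images as in the question:

def set_shapes(img, label):
    # py_function erases static shape information; set_shape restores it
    img.set_shape((180, 180, 3))
    label.set_shape(())
    return img, label

train_ds = train_ds.map(
    lambda x: tf.py_function(process_path, [x], (tf.float32, tf.int32))
).map(set_shapes)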

Related

Expected shape=(None, 256, 256, 3), found shape=(None, 256, 256, 4)

I'm decoding a base64 image with the following code:
def string_to_image(base64_string):
    decoded = base64.b64decode(base64_string)
    np_data = np.frombuffer(decoded, np.uint8)
    img = cv2.imdecode(np_data, cv2.IMREAD_UNCHANGED)
    return img
The goal is to receive an image from the request body, decode it, resize it with tensorflow, predict it with a model, and return a response saying what the image is:
image_base64 = request.json['image']
decoded_image = string_to_image(image_base64)
image_resized = tf.image.resize(decoded_image, (256, 256))
model = load_model('src/models/mymodel.h5')
result = model.predict(np.expand_dims(image_resized/255, 0))
However, I'm getting the error ValueError: Input 0 of layer "sequential_2" is incompatible with the layer: expected shape=(None, 256, 256, 3), found shape=(None, 256, 256, 4).
I don't know how to change the channel dimension from 4 to 3.
I tried the following:
image_resized = tf.image.resize(decoded_image, (256, 256, 3))
But I get 'size' must be a 1-D Tensor of 2 elements: new_height, new_width.
I also tried:
image_resized = cv2.resize(decoded_image, (256,256,3))
But I get:
OpenCV(4.6.0) :-1: error: (-5:Bad argument) in function 'resize'
Overload resolution failed:
 - Can't parse 'dsize'. Expected sequence length 2, got 3
 - Can't parse 'dsize'. Expected sequence length 2, got 3
Please help :(
You could reshape the array and then use tf.squeeze. According to the documentation, tf.squeeze removes axes of size 1.
image_resized = tf.reshape(decoded_image, (-1, 256, 256, 3, 1))
image_resized = tf.squeeze(image_resized)
With vijayachandran mariappan's comment and AndreaYolo's answer I figured out a solution: first drop the extra channel, then resize the dimensions:
decoded_image = string_to_image(image_base64)
decoded_image = decoded_image[:,:,:3]
image_resized = tf.image.resize(decoded_image, (256, 256))
My model then was able to predict perfectly!
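An alternative sketch (my own suggestion, not from the answers above) is to force three channels already at decode time, since cv2.IMREAD_COLOR always yields a 3-channel BGR image:

def string_to_image(base64_string):
    decoded = base64.b64decode(base64_string)
    np_data = np.frombuffer(decoded, np.uint8)
    # IMREAD_COLOR drops any alpha channel during decoding
    img = cv2.imdecode(np_data, cv2.IMREAD_COLOR)
    return img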

Cannot convert tf.keras.preprocessing.image_dataset_from_directory to np.array

I am trying to create an image classification model using a CNN. For that I am reading the data using the tf.keras.preprocessing.image_dataset_from_directory function.
This is the code:
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir_train,
    seed=123,
    validation_split=0.2,
    subset='training',
    image_size=(img_height, img_width),
    batch_size=batch_size)
Then I am trying to convert the dataset into an np.array object. My code is:
x_train = np.array(train_ds)
But when I print x_train, I am getting
array(<BatchDataset shapes: ((None, 180, 180, 3), (None,)), types: (tf.float32, tf.int32)>, dtype=object)
The object train_ds is of shape (2000, 180, 180, 3). I am not sure what is wrong with my code.
When using image_dataset_from_directory:
... image_dataset_from_directory(main_directory, labels='inferred') will return a tf.data.Dataset that yields batches of images from the subdirectories ...
One option to get the data you want is to use take, which creates a Dataset with at most count elements from this dataset.
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

img = np.empty((6000, 180, 180, 3), dtype=np.float32)
label = np.empty((6000,), dtype=np.int32)
train_ds = tf.data.Dataset.from_tensor_slices((img, label)).batch(2000)
print(train_ds)  # <BatchDataset shapes: ((None, 180, 180, 3), (None,)), types: (tf.float32, tf.int32)>

for imgs, labels in train_ds.take(1):
    print(imgs.shape)    # (2000, 180, 180, 3)
    print(labels.shape)  # (2000,)
    for img in imgs:
        plt.imshow(img.numpy().astype(np.uint8))  # plot the images in the batch
Depending on how you're structuring your code, you may not even need to convert to a numpy.array, since tf.keras.Model.fit accepts tf.data datasets.
model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=epochs
)
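If you do need plain arrays, here is a minimal sketch that materializes all batches into NumPy (assuming the whole dataset fits in memory):

# iterate the dataset eagerly and stack the batches along the batch axis
x_train = np.concatenate([imgs.numpy() for imgs, _ in train_ds], axis=0)
y_train = np.concatenate([labels.numpy() for _, labels in train_ds], axis=0)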

ds_train has shape (2, 224, 224, 3) instead of (None, 224, 224, 3)

I have created my own custom dataset (with 2 classes) with the following code:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
ds_train = tf.keras.preprocessing.image_dataset_from_directory(
    'C:/Users/mydir/Source_Images/',
    labels = 'inferred',             # from subfolders in alphabetical order
    label_mode = "int",
    class_names = ["CVS", "No_CVS"],
    color_mode = 'rgb',
    batch_size = 2,
    image_size = (224, 224),
    shuffle = True,                  # randomized order of images
    seed = 123,                      # set the seed so the train/valid images are the same when you run again
    validation_split = 0.1,
    subset = "training"
)
ds_train results in:
<BatchDataset shapes: ((None, 224, 224, 3), (None,)), types: (tf.float32, tf.int32)>
Now, I want to visualize my data by looking at 9 images:
for i, (image, label) in enumerate(ds_train.take(9)):
    ax = plt.subplot(3, 3, i + 1)
    plt.imshow(image.numpy().astype("uint8"))
    plt.axis("off")
However, I get the following error:
line 61, in
plt.imshow(image.numpy().astype("uint8"))
TypeError: Invalid shape (2, 224, 224, 3) for image data
I'm looking for a way to resolve this, and be able to plot my images with matplotlib.
EDIT:
More importantly, it seems that the dataset cannot be used for training the model either, as I get this error:
ValueError: Input 0 is incompatible with layer EfficientNet: expected shape=(None, 224, 224, 3), found shape=(2, None, 224, 224, 3)
This happened after running the Keras example code I found here (where I created ds_train with image_dataset_from_directory instead of the tfds.load() function).
So I think something is going wrong in the way I created ds_train. Any resolutions are very welcome.
It seems like you are keeping the batch dimension in when you do:
plt.imshow(image.numpy().astype("uint8"))
With your original code you also won't be able to see 9 images, because ds_train.take(9) yields 9 batches (of your batch_size 2), not 9 images. Indexing into the batch avoids the TypeError: Invalid shape... error:
plt.imshow(image[i].numpy().astype("uint8"))
Furthermore, you can do the following to inspect the batch shapes:
for img_batch, labels_batch in ds_train:
    print(img_batch.shape)
    print(labels_batch.shape)
In your case img_batch.shape should print (2, 224, 224, 3), where this tuple corresponds to one batch of image tensors.
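As a side note, here is a minimal sketch (my addition, assuming ds_train as defined in the question) that plots 9 individual images regardless of batch_size, using Dataset.unbatch():

# unbatch() turns the dataset of batches back into a dataset of single examples
for i, (image, label) in enumerate(ds_train.unbatch().take(9)):
    ax = plt.subplot(3, 3, i + 1)
    plt.imshow(image.numpy().astype("uint8"))
    plt.axis("off")
plt.show()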
For the input_shape problem, you need to add your model code so we can see what's wrong with the input shape.

The workflow of loading images as a TensorFlow dataset and feeding them into Keras model.fit?

I'm working on a remote sensing dataset that requires me to manually load two folders of (1024, 1024, 3) input images and concatenate each pair into a (1024, 1024, 6) input. Each label is in the format (1024, 1024, 1).
Therefore the final feedable tf.data.Dataset should have the dimensions ((None, 1024, 1024, 6), (None, 1024, 1024, 1)), where None is the batch dimension.
Following the guides Loading images into TensorFlow and Train and evaluate with Keras, my dataset came out in the shape <ParallelMapDataset shapes: ((None, None, 6), (None, None, 1)), types: (tf.float32, tf.float32)>.
It therefore fails to pass into the model.fit() function and yields the error 'ParallelMapDataset' object has no attribute 'ndim'.
Relevant code:
base_path = './LEVIR-CD/'
train_path_list = tf.data.Dataset.list_files(base_path + 'train/A/*.png')
test_path_list = tf.data.Dataset.list_files(base_path + 'test/A/*.png')
val_path_list = tf.data.Dataset.list_files(base_path + 'val/A/*.png')

def process_path(a_path):
    b_path = tf.strings.regex_replace(a_path, '/A/', '/B/')
    label_path = tf.strings.regex_replace(a_path, '/A/', '/label/')
    a = tf.image.decode_png(tf.io.read_file(a_path), channels=3) / 255
    b = tf.image.decode_png(tf.io.read_file(b_path), channels=3) / 255
    label = tf.image.decode_png(tf.io.read_file(label_path), channels=1) / 255
    # tf.concat (rather than a bare concatenate) stacks the two RGB images into 6 channels
    return tf.concat([a, b], axis=2), label

train_ds = train_path_list.map(process_path, num_parallel_calls=tf.data.experimental.AUTOTUNE)
test_ds = test_path_list.map(process_path, num_parallel_calls=tf.data.experimental.AUTOTUNE)
val_ds = val_path_list.map(process_path, num_parallel_calls=tf.data.experimental.AUTOTUNE)

isinstance(train_ds, tf.data.Dataset)  # ==> True
model.fit(train_ds)                    # ==> Error
Update:
Adding tf.image.resize(a, [1024, 1024]), which resizes these 1024*1024 images to the same static size, solves the missing-dimension issue, but the error remains.
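One thing that looks missing (a guess on my part, not a confirmed fix): the mapped dataset yields single examples, while the target shape stated above has a leading batch dimension. A minimal sketch of batching before fit, where the batch size of 8 is an arbitrary assumption:

train_ds = (train_path_list
            .map(process_path, num_parallel_calls=tf.data.experimental.AUTOTUNE)
            .batch(8)  # adds the leading (None) batch dimension model.fit expects
            .prefetch(tf.data.experimental.AUTOTUNE))
model.fit(train_ds)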

How to get training data for Keras Sequential CNN into the correct tensor shape?

I have a 4-dimensional tensor of image pixel data (Red(height, width), Green(height, width), Blue(height, width), for 14000 examples) and a CSV file containing the coordinates of the bounding boxes for each image, i.e. (Image name, X1, Y1, X2, Y2); it has 14000 rows as well, one per example.
How do I feed this data to my neural network? Currently, if I try feeding the tensor, it passes the entire array of 14000 examples against one row of (X1, Y1, X2, Y2), when it should pass one image array per row of (X1, Y1, X2, Y2).
Any idea how to fix this?
Here's the code and the associated error:
# (imports inferred; the question did not show them)
import pandas as pd
import numpy as np
from os import listdir
from PIL import Image
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Convolution2D, MaxPooling2D, Dropout, Flatten, Dense

train_csv = pd.read_csv('datasets/training.csv').values
test_csv = pd.read_csv('datasets/test.csv').values
y_train = train_csv[:, [1, 2, 3, 4]]  # done
x_train_names = train_csv[:, 0]       # obtained names of images in array

#### load images into an array ####
X_train = []
path = "datasets/images/images/"
imagelist = listdir(path)
for i in range(len(x_train_names)):
    img_name = x_train_names[i]
    img = Image.open(path + str(img_name))
    arr = np.array(img)
    X_train.append(arr)

#### building a very basic classifier, just to get some result ####
classifier = Sequential()
classifier.add(Convolution2D(64, (3, 3), input_shape=(64, 64, 3), activation='relu'))
classifier.add(Dropout(0.2))
classifier.add(MaxPooling2D((4, 4)))
classifier.add(Convolution2D(32, (2, 2), activation='relu'))
classifier.add(MaxPooling2D((2, 2)))
classifier.add(Flatten())
classifier.add(Dense(16, activation='relu'))
classifier.add(Dropout(0.5))
classifier.add(Dense(4))
classifier.compile('adam', 'binary_crossentropy', ['accuracy'])
classifier.fit(x=X_train, y=y_train, steps_per_epoch=80, batch_size=32, epochs=25)
Error:
ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s), but instead got the following list of 14000 arrays:
[array([[[141, 154, 144],
         [141, 154, 144],
         [141, 154, 144],
         ...,
         [149, 159, 150],
         [150, 160, 151],
         [150, 160, 151]],

        [[140, 153, 143],
[…
EDIT: I converted all my images to grayscale so I don't get a memory error. This means that my X_train should have a single channel dimension instead of the earlier three (RGB). Here's my edited code:
y_train = train_csv[:, [1, 2, 3, 4]]  # done
x_train_names = train_csv[:, 0]       # obtained names of images in array

# load images into an array
path = "datasets/images/images/"
imagelist = listdir(path)
img_name = x_train_names[0]
img = Image.open(path + str(img_name))  # open one image to get its dimensions (line inferred; not shown in the original)
X_train = np.ndarray((14000, img.height, img.width, 1))
for i in range(len(x_train_names)):
    img_name = x_train_names[i]
    # converting image to grayscale because I get a memory error otherwise
    img = Image.open(path + str(img_name)).convert('L')
    X_train[i, :, :, :] = np.asarray(img)

ValueError: could not broadcast input array from shape (480,640) into shape (480,640,1)
(at the X_train[i,:,:,:] = np.asarray(img) line)
The first step is always to find out which input shape your first convolution layer expects. The documentation of tf.nn.conv2d states that the expected shape of the 4D input tensor is [batch, in_height, in_width, in_channels].
To load the data we can use a NumPy ndarray. For that we need to know the number of images to load, as well as their dimensions:
path = "datasets/images/images/"
imagelist = listdir(path)
img_name = x_train_names[0]
img = Image.open(path + str(img_name))
X_train = np.ndarray((len(imagelist),img.height,img.width,3))
for i in range(len(x_train_names)):
img_name = x_train_names[i]
img = Image.open(path + str(img_name))
X_train[i,:,:,:] = np.asarray(img)
The shape property of your X_train tensor should then give you:
print(X_train.shape)
> (len(x_train_names), img.height, img.width, 3)
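For the grayscale case from the question's edit, a small sketch (my addition, not part of the original answer): a grayscale PIL image converts to a (height, width) array, so it needs a trailing channel axis before it can broadcast into the (height, width, 1) slot:

# np.asarray(img) is (height, width) for a grayscale image;
# np.newaxis appends the channel axis so it fits (height, width, 1)
X_train[i, :, :, :] = np.asarray(img)[:, :, np.newaxis]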
EDIT:
To load the images in multiple batches you could do something like this:

#### Build and compile your classifier up here ####
num_batches = 5
len_batch = np.floor(len(x_train_names)/num_batches).astype(int)
X_train = np.ndarray((len_batch, img.height, img.width, 3))

for batch_idx in range(num_batches):
    idx_start = batch_idx * len_batch
    idx_end = (batch_idx + 1) * len_batch  # exclusive end, so each batch holds len_batch images
    x_train_names_batch = x_train_names[idx_start:idx_end]
    for i in range(len(x_train_names_batch)):
        img_name = x_train_names_batch[i]
        img = Image.open(path + str(img_name))
        X_train[i, :, :, :] = np.asarray(img)
    # fit on this batch with the matching slice of labels
    classifier.fit(x=X_train, y=y_train[idx_start:idx_end], steps_per_epoch=num_batches,
                   batch_size=len(x_train_names_batch), epochs=2)
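A different sketch (my own, not from the answer): for 14000 images it may be simpler to stream them with tf.data instead of preloading batches by hand. The JPEG decoding and float32 casting here are assumptions about the files:

import tensorflow as tf

def load_example(img_name, box):
    # read and decode one image, paired with its bounding-box row
    data = tf.io.read_file(tf.strings.join([path, img_name]))
    img = tf.image.convert_image_dtype(tf.image.decode_jpeg(data, channels=3), tf.float32)
    return img, box

ds = (tf.data.Dataset
      .from_tensor_slices((x_train_names.astype(str), y_train.astype(np.float32)))
      .map(load_example)
      .batch(32))
classifier.fit(ds, epochs=25)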
