I started working with tf a few weeks ago and am struggling with the input queue right now.
What I want to do is the following: I have a folder with 477 temporal, greyscale images. Now I want to, for example, take the first 3 images and stack them together (=> 600, 600, 3) so that I get a single example with 3 channels. Next I want to take the fourth image and use it as a label (just 1 channel, => 600, 600, 1). Then I want to pass both to tf.train.batch and create batches.
I think I found a solution, see code below. But I was wondering if there is a more fashionable solution.
My actual question is: what happens at the end of the queue? Since I always pick 4 images from the queue (3 for input, 1 for label) and I have 477 images in my queue, things don't divide evenly. Does tf then just fill up my queue again and continue (so if there is 1 image left in the queue, it takes this image, refills the queue and takes 2 more images to get the desired 3)? Or do I need a number of images divisible by 4 in my folder for a proper solution?
def read_image(filename_queue):
    reader = tf.WholeFileReader()
    _, value = reader.read(filename_queue)
    image = tf.image.decode_png(value, dtype=tf.uint8)
    image = tf.cast(image, tf.float32)
    image = tf.image.resize_images(image, [600, 600])
    return image
def input_pipeline(file_names, batch_size, num_epochs=None):
    filename_queue = tf.train.string_input_producer(file_names, num_epochs=num_epochs, shuffle=False)
    image1 = read_image(filename_queue)
    image2 = read_image(filename_queue)
    image3 = read_image(filename_queue)
    image = tf.concat([image1, image2, image3], axis=2)
    label = read_image(filename_queue)
    # Reshape is necessary, otherwise I get an error
    image = tf.reshape(image, [600, 600, 3])
    label = tf.reshape(label, [600, 600, 1])
    min_after_dequeue = 200
    capacity = min_after_dequeue + 12 * batch_size
    image_batch, label_batch = tf.train.batch([image, label],
                                              batch_size=batch_size,
                                              num_threads=12,
                                              capacity=capacity)
    return image_batch, label_batch
Thanks for any help!
But I was wondering if there is a more fashionable solution
Yes! There is a better and faster solution. First, redesign your dataset: since you want to combine 3 greyscale images into 1 RGB image for training, prepare a dataset of RGB images from the greyscale images up front (it will save a whole lot of time during training).
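A minimal sketch of that offline step, assuming the 477 PNGs sort into consecutive groups of three input frames plus one label frame (the folder names here are just placeholders):
import numpy as np
from PIL import Image
from glob import glob

files = sorted(glob('gray_images/*.png'))
for i in range(0, len(files) - 3, 4):
    # stack three consecutive greyscale frames into one 3-channel image
    channels = [np.array(Image.open(f).convert('L')) for f in files[i:i + 3]]
    rgb = np.stack(channels, axis=-1)  # shape (H, W, 3)
    Image.fromarray(rgb).save('rgb_images/{:04d}.png'.format(i // 4))
    # the fourth frame becomes the label image
    Image.open(files[i + 3]).convert('L').save('labels/{:04d}.png'.format(i // 4))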
Then redesign the way you retrieve the data:
# retrieve the image and its corresponding label at the same time;
# with num_epochs=None the producer runs continuously and takes care of feeding data until training ends
# (slice_input_producer pairs the two filename lists and yields one filename from each per step)
input_slices = tf.train.slice_input_producer([file_names_images_list, corresponding_file_names_label_list],
                                             num_epochs=None, shuffle=False)
image = read_image_from_path(input_slices[0])  # reads from a filename tensor, see the sketch below
label = read_image_from_path(input_slices[1])
Related
I am building an image search using a one-shot model because I have very little data per class.
I am following this tutorial.
I have already prepared the data pipeline and trained the model, but I don't understand the single-image prediction step that one would normally do with model.predict.
I tried the following code but I think I am missing something.
img1 = cv2.imread("./images_evaluation/test.jpg", cv2.IMREAD_GRAYSCALE)
img1 = cv2.resize(img1, (105, 105))
img1 = np.expand_dims(img1, axis=2)  # add the channel dimension

(test_image_names, train_image_names) = generate_oneshot_validation_trials(dataset, 20)
train_images = get_images(train_image_names, IMAGE_SHAPE)

images = np.tile(img1, (len(train_images), 1, 1, 1))  # repeat the query image once per candidate
preds = siamese_model1.predict([images, train_images])
pred_idx = np.argmax(preds, axis=0)[0]
pred_char_name = train_image_names[pred_idx].split('/')[-2]
print(pred_char_name)  # here I get a different prediction on every try -- what's the reason?
I have the following code:
imagepaths = tf.convert_to_tensor(imagepaths, dtype=tf.string)
labels = tf.convert_to_tensor(labels, dtype=tf.int32)
# Build a TF Queue, shuffle data
image, label = tf.data.Dataset.from_tensor_slices((imagepaths, labels))
and am getting the following error:
image, label = tf.data.Dataset.from_tensor_slices((imagepaths, labels))
ValueError: too many values to unpack (expected 2)
Shouldn't Dataset.from_tensor_slices see this as the length of the tensor, not the number of inputs? How can I fix this issue or combine the data tensors into the same variable more effectively?
Just for reference:
There are 1800 imagepaths and 1800 labels corresponding to each other. And to be clear, the imagepaths are paths to the files where the JPEG images are located. My goal after this is to shuffle the dataset and build the neural network model.
That code is right here:
# Read images from disk
image = tf.read_file(image)
image = tf.image.decode_jpeg(image, channels=CHANNELS)
# Resize images to a common size
image = tf.image.resize_images(image, [IMG_HEIGHT, IMG_WIDTH])
# Normalize
image = image * 1.0/127.5 - 1.0
# Create batches
X, Y = tf.train.batch([image, label], batch_size=batch_size,
                      capacity=batch_size * 8,
                      num_threads=4)
try to do this:
def transform(img, lbl):
    # img is the file path tensor, lbl the integer label tensor
    return img, lbl

# pass the two lists as a tuple so the paths stay strings and the labels stay integers
dataset = tf.data.Dataset.from_tensor_slices((imagepaths, labels))
dataset = dataset.map(transform)
And if you want to have a look at your dataset, you can do it like this:
for e in dataset.take(1):
    print(e)
You can add multiple map functions, and after that you can use shuffle and batch on your dataset to prepare it for training ;)
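For example, a sketch of what those steps could look like, reusing the decoding and normalization from the question (IMG_HEIGHT, IMG_WIDTH, CHANNELS and batch_size are assumed to be defined as in the question, and this is written for the TF 2.x tf.data style):
def load_image(path, label):
    # read and decode the JPEG, resize it, and normalize to [-1, 1]
    image = tf.io.read_file(path)
    image = tf.image.decode_jpeg(image, channels=CHANNELS)
    image = tf.image.resize(image, [IMG_HEIGHT, IMG_WIDTH])
    image = image * 1.0 / 127.5 - 1.0
    return image, label

dataset = dataset.map(load_image)
dataset = dataset.shuffle(buffer_size=1800).batch(batch_size)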
I want to create image sequence samples using the tf.data API, but as of now there seems to be no easy way to concatenate multiple images into a single sample. I have tried the dataset.window function, which groups my images correctly, but I don't know how to concatenate them.
import tensorflow as tf
from glob import glob
IMG_WIDTH = 256
IMG_HEIGHT = 256
def load_and_process_image(path):
    img = tf.io.read_file(path)
    img = tf.image.decode_jpeg(img, channels=3)
    img = tf.image.resize(img, [IMG_WIDTH, IMG_HEIGHT])
    img = tf.reshape(img, shape=(IMG_WIDTH, IMG_HEIGHT, 1, 3))
    return img

def create_dataset(files, time_distance=8, frame_step=1):
    dataset = tf.data.Dataset.from_tensor_slices(files)
    dataset = dataset.map(load_and_process_image)
    dataset = dataset.window(time_distance, 1, frame_step, True)
    # TODO: Concatenate elements from dataset.window
    return dataset
files = sorted(glob('some/path/*.jpg'))
images = create_dataset(files)
I know that I could save my image sequences as TFRecords, but that would make my data pipeline much more inflexible and would cost tons of memory.
My input batches should have the form N x W x H x T x C
(N: Number of samples
W: Image Width
H: Image Height
T: Image Sequence length
C: Image Channels).
You can use batching to create batches of size N.
iterations = #
batched_dataset = dataset.batch(N)
for batch in batched_dataset.take(iterations):
    # process your batch
    pass
Here iterations is the number of batches you want to generate.
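To also get the T dimension out of the windowed dataset, one option (just a sketch, starting from the dataset right after the map(load_and_process_image) call and assuming the time_distance and frame_step values from the question) is to turn each window back into a single tensor with flat_map plus a nested batch, then reorder the axes:
def stack_window(window):
    # each window is itself a Dataset of time_distance images of shape (W, H, 1, 3);
    # batching the window turns it into one tensor of shape (T, W, H, 1, 3)
    return window.batch(time_distance)

def to_sample(seq):
    # drop the singleton axis and move time behind the spatial axes: (W, H, T, C)
    seq = tf.squeeze(seq, axis=3)
    return tf.transpose(seq, [1, 2, 0, 3])

dataset = dataset.window(time_distance, 1, frame_step, True)
dataset = dataset.flat_map(stack_window)
dataset = dataset.map(to_sample)
batched_dataset = dataset.batch(N)  # final shape: (N, W, H, T, C)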
I am trying to train PyTorch's torchvision.models.detection.fasterrcnn_resnet50_fpn to detect objects in my own images.
According to the documentation, this model expects a list of images and a list of dictionaries with
'boxes' and 'labels' as keys. So my dataset's __getitem__() looks like this:
def __getitem__(self, idx):
    # load images
    _, img = self.images[idx].getImage()
    img = Image.fromarray(img, mode='RGB')
    objects = self.images[idx].objects

    boxes = []
    labels = []
    for o in objects:
        # append bbox to boxes
        boxes.append([o.x, o.y, o.x + o.width, o.y + o.height])
        # append the 4th char of class_id, the number of lights (1-4)
        labels.append(int(str(o.class_id)[3]))

    # convert everything into a torch.Tensor
    boxes = torch.as_tensor(boxes, dtype=torch.float32)
    labels = torch.as_tensor(labels, dtype=torch.int64)

    target = {}
    target["boxes"] = boxes
    target["labels"] = labels

    # transforms consists only of transforms.Compose([transforms.ToTensor()]) for the time being
    if self.transforms is not None:
        img = self.transforms(img)

    return img, target
To the best of my knowledge, it returns exactly what's asked for. My dataloader looks like this:
data_loader = torch.utils.data.DataLoader(
    dataset, batch_size=4, shuffle=False, num_workers=2)
However, when it gets to this stage:
for images, targets in dataloaders[phase]:
it raises
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 12 and 7 in dimension 1 at C:\w\1\s\windows\pytorch\aten\src\TH/generic/THTensor.cpp:689
Can someone point me in the right direction?
#jodag was right: I had to write a separate collate function in order for the net to receive the data the way it was supposed to. In my case I only needed to bypass the default collate function.
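For reference, a minimal sketch of such a collate function (this mirrors the approach used in the torchvision detection reference scripts; the other DataLoader arguments are kept as in the question):
def collate_fn(batch):
    # keep images and targets as tuples of per-sample items instead of stacking them into one tensor
    return tuple(zip(*batch))

data_loader = torch.utils.data.DataLoader(
    dataset, batch_size=4, shuffle=False, num_workers=2,
    collate_fn=collate_fn)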
I have built a CNN to predict lymph node positivity (has cancer or not). Right now to load the data I have a self defined function that loads a batch of data and feeds it to the model for training.
Instead of loading batches I would love to use the flow_from_directory method. The problem is that my data are saved as arrays [#, rows, width, height, PET or CT], not as images (which would later be converted to arrays). For example, [0,:,:,:,0] is a volume sized 48x48x32 from a CT image.
If I try to use flow_from_directory I get 0 images with 3 classes, which I expected, since '.mat' is not a recognized file type (https://github.com/keras-team/keras-preprocessing/blob/362fe9f8daf556151328eb5d02bd5ae638c653b8/keras_preprocessing/image.py#L1868). Interestingly, it doesn't raise any errors, but I am stuck indefinitely on epoch 1/150. I am going to see if I can write my own flow_from_directory; a rough sketch of that idea follows the generator code below. Not sure if someone has run across this problem and could give me pointers.
Illustrating how the data is combined:
for fname in fnames:
    data = scipy.io.loadmat(os.path.join(dir_in_train, fname))['roi_patch']
    data_PET = scipy.io.loadmat(os.path.join(dir_in_train_PET, fname))['roi_patch']
    train_combo = np.zeros((1, 48, 48, 32, 2))
    train_combo[0, :, :, :, 0] = data / 4.0950
    train_combo[0, :, :, :, 1] = data_PET / 32.1959
    # train_combo[0, :, :, :, :].shape -> (48, 48, 32, 2)
    scipy.io.savemat(fname, {fname: train_combo})
This creates a file, e.g. '1.mat', that has the CT data and PET data in one array.
Then I have code that converts these into .npy files.
Example of the data generator I already have:
# load training data
def load_train_data_batch_generator(self, batch_size=32, rows_in=48, cols_in=48, zs_in=32,
                                    channels_in=2, num_classes=3,
                                    dir_in_train=None, dir_out_train=None):
    # dir_in_train = main_dir + '/test_CT_PET_combo'
    fnames = ['{}.mat'.format(i) for i in range(1, len(os.listdir(dir_in_train)) + 1)]
    y_train = np.zeros((batch_size, num_classes))
    x_train = np.zeros((batch_size, rows_in, cols_in, zs_in, channels_in))
    while True:
        count = 0
        for fname in np.random.choice(fnames, batch_size, replace=False):
            data_label = scipy.io.loadmat(os.path.join(dir_out_train, fname))['output']
            # changing one hot encoding to integer
            integer_label = np.argmax(data_label[0], axis=0)
            y_train[count, :] = data_label
            # Loading train ct w/ c and pet/ct combo that will be saved into new directory
            train_combo = scipy.io.loadmat(os.path.join(dir_in_train, fname))[fname]
            x_train[count, :, :, :, :] = train_combo
            count += 1
        yield (x_train, y_train)
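As a rough sketch of the "write my own flow_from_directory" idea mentioned above: a keras.utils.Sequence can serve the same role for pre-saved .npy volumes (file_paths and labels here are hypothetical lists you would build yourself, matching the (48, 48, 32, 2) volumes and one-hot labels from the question):
import numpy as np
from tensorflow.keras.utils import Sequence

class VolumeSequence(Sequence):
    """Batch loader for pre-saved volume arrays, in the spirit of flow_from_directory."""

    def __init__(self, file_paths, labels, batch_size=32):
        self.file_paths = file_paths      # paths to .npy files, each of shape (48, 48, 32, 2)
        self.labels = labels              # one-hot labels, shape (len(file_paths), num_classes)
        self.batch_size = batch_size

    def __len__(self):
        # number of batches per epoch
        return int(np.ceil(len(self.file_paths) / self.batch_size))

    def __getitem__(self, idx):
        batch_paths = self.file_paths[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_labels = self.labels[idx * self.batch_size:(idx + 1) * self.batch_size]
        x = np.stack([np.load(p) for p in batch_paths])
        return x, np.asarray(batch_labels)
An instance of this class can then be passed directly to model.fit (or fit_generator on older Keras versions).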