Given groups=1, weight of size [48, 3, 3, 3], expected input [5, 128, 129, 4] to have 3 channels, but got 128 channels instead.
This is my code:
model_ft.eval()
for image in test_loader:
    image = image.cuda()
    output = model_ft(image)
    output = output.cpu().detach().numpy()
    for i, (e, n) in enumerate(list(zip(output, name))):
        sub.loc[sub['id_code'] == n.split('/')[-1].split('.')[0], 'diagnosis'] = le.inverse_transform([np.argmax(e)])
sub.to_csv('submission.csv', index=False)
print(X_test.shape)
(3071, 128, 128, 3)
from torch.utils.data import DataLoader
test_loader = DataLoader(X_test, batch_size=5, shuffle=True)
print(train_data)
I don't know how to fix this problem so that I can predict on my competition's test set.
I'm assuming by
print(X_test.shape)
(3071, 128, 128, 3)
you mean that the test data has 3071 samples with 128x128 pixels and 3 color channels each.
Also, I'm assuming that the model you are using doesn't transpose the inputs, so the convolution layers expect the default layout, which is shape (N, C, H, W), but you provide your data as (N, H, W, C).
Solution: Try image.transpose_(1, 3) or image = image.cuda().transpose(1, 3) before handing it to the model.
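For non-square images, note that transpose(1, 3) also swaps the height and width axes; a permute keeps every axis in its place. A minimal sketch of that variant (model_ft and test_loader are the names from the question; the .float() cast is only a precaution since the loader wraps a raw NumPy array):
for image in test_loader:
    # image arrives as (N, H, W, C); the convolutions expect (N, C, H, W)
    image = image.float().cuda().permute(0, 3, 1, 2)
    output = model_ft(image)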
I have a big dataset that I want to use to train a CNN with Keras (too big to load it in memory). I always train using ImageDataGenerator.flow_from_dataframe, as I have my images across different directories, as shown below.
datagen = ImageDataGenerator(
    rescale=1./255.
)
train_gen = datagen.flow_from_dataframe(
    dataframe=train_df,
    x_col="filepath",
    class_mode="input",
    shuffle=True,
    seed=1)
However, this time I don't want to use my full images, but random patches of the images instead, i.e., I want to choose a random image and take a random patch of 32x32 of that image each time. How can I do this?
I thought of using tf.extract_image_patches and sklearn.feature_extraction.image.extract_patches_2d, but I don't know if it is possible to integrate these with flow_from_dataframe.
Any help would be appreciated.
You could try using a preprocessing function in your ImageDataGenerator combined with tf.image.extract_patches:
import tensorflow as tf
import matplotlib.pyplot as plt

BATCH_SIZE = 32

def get_patches():
    def _get_patches(image):
        image = tf.expand_dims(image, 0)
        patches = tf.image.extract_patches(images=image,
                                           sizes=[1, 32, 32, 1],
                                           strides=[1, 32, 32, 1],
                                           rates=[1, 1, 1, 1],
                                           padding='VALID')
        patches = tf.reshape(patches, (1, 256, 256, 3))
        return patches
    return _get_patches

def reshape_data(images, labels):
    ta = tf.TensorArray(tf.float32, size=0, dynamic_size=True)
    for b in tf.range(BATCH_SIZE):
        i = tf.random.uniform((), maxval=int(256 / 32), dtype=tf.int32)
        j = tf.random.uniform((), maxval=int(256 / 32), dtype=tf.int32)
        patched_image = tf.reshape(images[b], (8, 8, 3072))
        ta = ta.write(ta.size(), tf.reshape(patched_image[i, j], shape=(32, 32, 3)))
    return ta.stack(), labels

preprocessing = get_patches()

flowers = tf.keras.utils.get_file(
    'flower_photos',
    'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
    untar=True)

img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20, preprocessing_function=preprocessing)

ds = tf.data.Dataset.from_generator(
    lambda: img_gen.flow_from_directory(flowers, batch_size=BATCH_SIZE, shuffle=True),
    output_types=(tf.float32, tf.float32))

ds = ds.map(reshape_data)

images, _ = next(iter(ds.take(1)))
image = images[0]  # (32, 32, 3)
plt.imshow(image.numpy())
The problem is that the preprocessing_function of the ImageDataGenerator expects the output to have the same shape as the input. I therefore first create the patches and pack them back into a tensor with the original image shape. Later, in reshape_data, I reshape each image from (256, 256, 3) to (8, 8, 3072), pick a random patch, and return it with shape (32, 32, 3).
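To see why that reshape round-trip recovers intact patches, here is a small sanity check (it assumes a 256x256x3 image, as in the code above):
img = tf.random.uniform((1, 256, 256, 3))
p = tf.image.extract_patches(images=img, sizes=[1, 32, 32, 1],
                             strides=[1, 32, 32, 1], rates=[1, 1, 1, 1],
                             padding='VALID')               # shape (1, 8, 8, 3072)
flat = tf.reshape(p, (256, 256, 3))                         # same values, image-shaped for the generator
grid = tf.reshape(flat, (8, 8, 3072))                       # recover the 8x8 patch grid
patch = tf.reshape(grid[0, 0], (32, 32, 3))                 # one 32x32x3 patch
print(tf.reduce_all(patch == img[0, :32, :32, :]).numpy())  # True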
When I used tf.nn.conv2d_transpose() to get a layer result with doubled width and height and halved depth, it worked as long as I specified [batch, width, height, channel] (input and output) explicitly.
By setting the batch dimension to None, training works well for any specified batch_size and validation works well for a set of images (or a single image).
Now I'm trying to build an encoder-decoder network trained on [128 x 128 x 3] images. (Those training images are patches cropped from [w x h x 3] original images.)
The input shape is [128 x 128 x 3] and the output shape is [128 x 128 x 3]. The first layer is a convolution with a 3x3 kernel, stride 1 and padding 1, followed by the encoder-decoder structure.
All of the above works well for the fixed width and height (128 x 128).
However, after training on [128 x 128 x 3] patches, I'd like to run inference on a [w x h x 3] image with the trained network.
I think all of the operations (convolution, maxpool) give correct results, except the transpose convolution.
When I infer on images of fixed shape [128 x 128 x 3]:
InputImagesTensor = tf.placeholder(tf.float32, [None, 128, 128, 3], name='InputImages')
ResultImages = libs.Network(InputImagesTensor)
saver = tf.train.Saver()

w = 128
h = 128

sess = tf.Session()
sess.run(tf.global_variables_initializer())
saver.restore(sess, 'output.ckpt')

for i in range(0, len(Datas.InputImageNameList)):
    temp = np.resize(getResizedImage(Datas.InputImageList[i]), (1, 128, 128, 3))
    resultimg = sess.run(ResultImages, feed_dict={InputImagesTensor: temp})
with the network containing:
def Transpose2d(input, inC, outC):
    b, w, h, c = input.shape
    batch_size = tf.shape(input)[0]
    deconv_shape = tf.stack([batch_size, int(w*2), int(h*2), outC])
    kernel = tf.Variable(tf.random_normal([2, 2, outC, inC], stddev=0.01))
    output_shape = [None, int(w * 2), int(h * 2), outC]
    transConv = tf.nn.conv2d_transpose(input, kernel, output_shape=deconv_shape, strides=[1, 2, 2, 1], padding="SAME")
    return transConv
Now I tried to convert this from fixed width and height to dynamic width and height.
I thought this would work (however, it failed):
Change
InputImagesTensor = tf.placeholder(tf.float32, [None, 128, 128, 3], name='InputImages')
temp = np.resize(getResizedImage(Datas.InputImageList[i]), (1, 128, 128, 3))
to
InputImagesTensor = tf.placeholder(tf.float32, [None, None, None, 3], name='InputImages')
temp = np.resize(getResizedImage(Datas.InputImageList[i]), (1, w, h, 3))
However, this line gives an error:
deconv_shape = tf.stack([batch_size, int(w*2), int(h*2), outC])
TypeError: int returned non-int (type NoneType)
I guess it is because we cannot double the None value to 2*None.
How can I do this? Is it possible?
Self answer...
I could not come up with a suitable 'standard' solution for setting w and h to None in the transpose convolution.
However, I solved the problem by giving the transpose convolution the maximum shape of my training/validation images. For example, if the maximum width and height of my images is [656 x 656] and a test image is [450 x 656], I create a zero-filled np.ndarray of shape [656 x 656] and fill the [450 x 656] region with the test image's RGB values (i.e., zero-padding).
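A minimal sketch of that zero-padding workaround (the 656 x 656 maximum and the variable names are illustrative, not taken from the actual project):
import numpy as np

MAX_H, MAX_W = 656, 656                                # maximum training/validation size
img = np.random.rand(450, 656, 3)                      # stand-in for a smaller test image

padded = np.zeros((1, MAX_H, MAX_W, 3), dtype=np.float32)
padded[0, :img.shape[0], :img.shape[1], :] = img       # copy the real pixels, the rest stays zero

# feed `padded` to the network built for [656 x 656 x 3] inputs,
# then crop the output back to the original [450 x 656] region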
I am trying to code a ResNet CNN architecture based on the paper, using Python 3, TensorFlow 2 and the CIFAR-10 dataset. You can access the Jupyter notebook here.
While training the model with model.fit(), after just one epoch of training I get the following error:
ValueError: Input 0 is incompatible with layer model: expected
shape=(None, 32, 32, 3), found shape=(32, 32, 3)
The training images are batched with batch_size = 128, so the training loop yields the 4-D tensor that TF Conv2D expects: (128, 32, 32, 3).
What's the source of this error?
OK, I found a small issue in your code. The problem is in the test data set: you forgot to transform it properly. So currently you have this:
images, labels = next(iter(test_dataset))
images.shape, labels.shape
(TensorShape([32, 32, 3]), TensorShape([10]))
You need to apply the same transformations to the test set as you did to the train set, keeping in mind that the test set should not be shuffled or augmented:
def testaugmentation(x, y):
    x = tf.image.resize_with_crop_or_pad(x, HEIGHT + 8, WIDTH + 8)
    x = tf.image.random_crop(x, [HEIGHT, WIDTH, NUM_CHANNELS])
    return x, y

def normalize(x, y):
    x = tf.image.per_image_standardization(x)
    return x, y

test_dataset = (test_dataset
                .map(testaugmentation)
                .map(normalize)
                .batch(batch_size=batch_size, drop_remainder=True))
images, labels = next(iter(test_dataset))
images.shape, labels.shape
(TensorShape([128, 32, 32, 3]), TensorShape([128, 10]))
I am confused on how to replicate Keras (TensorFlow) convolutions in PyTorch.
In Keras, I can do something like this (the input size is (256, 237, 1, 21) and the output size is (256, 237, 1, 1024)):
import tensorflow as tf
x = tf.random.normal((256,237,1,21))
y = tf.keras.layers.Conv1D(filters=1024, kernel_size=5,padding="same")(x)
print(y.shape)
(256, 237, 1, 1024)
However, in PyTorch, when I try to do the same thing I get a different output size:
import torch.nn as nn
x = torch.randn(256,237,1,21)
m = nn.Conv1d(in_channels=237, out_channels=1024, kernel_size=(1,5))
y = m(x)
print(y.shape)
torch.Size([256, 1024, 1, 17])
I want PyTorch to give me the same output size that Keras does:
This previous question seems to imply that Keras filters correspond to PyTorch's out_channels, but that's what I already have. I tried adding padding=(0, 503) in PyTorch, but that gives me torch.Size([256, 1024, 1, 1023]), which is still not correct. It also takes much longer than Keras, so I suspect I have assigned a parameter incorrectly.
How can I replicate what Keras did with convolution in PyTorch?
In TensorFlow, tf.keras.layers.Conv1D takes a tensor of shape (batch_shape + (steps, input_dim)), which means that what is commonly known as channels appears on the last axis. For instance, in 2D convolution you would have (batch, height, width, channels). This is different from PyTorch, where the channel dimension comes right after the batch axis: torch.nn.Conv1d takes shapes of (batch, channels, length). So you will need to permute two axes.
For torch.nn.Conv1d:
in_channels is the number of channels in the input tensor
out_channels is the number of filters, i.e. the number of channels the output will have
stride is the step size of the convolution
padding is the zero-padding added to both sides
PyTorch (before version 1.9) has no padding='same' option, so you need to choose the padding yourself. Here stride=1, so the padding must equal kernel_size//2 (i.e. padding=2) in order to maintain the length of the tensor.
In your example, since x has a shape of (256, 237, 1, 21), in TensorFlow's terminology it will be considered as an input with:
a batch shape of (256, 237),
steps=1, so the length of your 1D input is 1,
21 input channels.
Whereas in PyTorch, x of shape (256, 237, 1, 21) would be:
batch shape of (256, 237),
1 input channel
a length of 21.
I have kept the input in both examples below (TensorFlow vs. PyTorch) as x.shape=(256, 237, 21), assuming 256 is the batch size, 237 is the length of the input sequence, and 21 is the number of channels (i.e. the input dimension, the dimension at each timestep).
In TensorFlow:
>>> x = tf.random.normal((256, 237, 21))
>>> m = tf.keras.layers.Conv1D(filters=1024, kernel_size=5, padding="same")
>>> y = m(x)
>>> y.shape
TensorShape([256, 237, 1024])
In PyTorch:
>>> x = torch.randn(256, 237, 21)
>>> m = nn.Conv1d(in_channels=21, out_channels=1024, kernel_size=5, padding=2)
>>> y = m(x.permute(0, 2, 1))
>>> y.permute(0, 2, 1).shape
torch.Size([256, 237, 1024])
So in the latter, you would simply work with x = torch.randn(256, 21, 237)...
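With the channels already on axis 1, the permutes disappear entirely; a quick check:
>>> x = torch.randn(256, 21, 237)    # channels already on axis 1
>>> m = nn.Conv1d(in_channels=21, out_channels=1024, kernel_size=5, padding=2)
>>> m(x).shape
torch.Size([256, 1024, 237])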
PyTorch now has 'same' convolutions out of the box; you can take a look at this link: [Same convolution][1]
class InceptionNet(nn.Module):
    def __init__(self, in_channels, in_1x1, in_3x3reduce, in_3x3, in_5x5reduce, in_5x5, in_1x1pool):
        super(InceptionNet, self).__init__()
        self.incep_1 = ConvBlock(in_channels, in_1x1, kernel_size=1, padding='same')
Note that a 'same' convolution only supports the default stride value of 1; anything else won't work.
[1]: https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html
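As a quick illustration (ConvBlock above comes from the asker's own code; here a plain nn.Conv2d is used instead, and it requires PyTorch >= 1.9, where padding='same' was introduced):
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=5, padding='same')
out = conv(torch.randn(1, 3, 237, 237))
print(out.shape)   # torch.Size([1, 16, 237, 237]) -- spatial size preserved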
I am going to perform pixel-based classification on an image. Here is the code I used for training the NN
net = input_data(shape=[None, 1, 4])
net = tflearn.lstm(net, 128, return_seq=True)
net = tflearn.lstm(net, 128)
net = tflearn.fully_connected(net, 1, activation='softmax')
net = tflearn.regression(net, optimizer='adam',
                         loss='categorical_crossentropy')
model = tflearn.DNN(net, tensorboard_verbose=2, checkpoint_path='model.tfl.ckpt')

X_train = np.expand_dims(X_train, axis=1)
model.fit(X_train, y_train, n_epoch=1, validation_set=0.1, show_metric=True, snapshot_step=100)
The problem is that after training the model, the result of np.array(model.predict(x_test)) is always 1, although I expected it to be either 2 or 3. In another example I had 4 classes of objects and expected the result of that command to be a label between 2 and 5 (note: y_train has integer values between 2 and 5), but again the output of the prediction function is 1. Could this be a problem with the training phase?
The None parameter is used to denote different training examples. In your case, each image has a total of 28*28*4 values, due to the custom four-channel dataset you are using.
To make this LSTM work, you should try the following:
X = np.reshape(X, (-1, 28, 28, 4))
testX = np.reshape(testX, (-1, 28, 28, 4))
net = tflearn.input_data(shape=[None, 28, 28, 4])
Of course (this is very important), make sure that reshape() puts the four channels belonging to a single pixel in the last dimension of the NumPy array, and that the 28, 28 dimensions correspond to the pixels of a single image.
In case your images don't have dimension 28*28, adjust those parameters accordingly.
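As a hedged sketch of that check, assuming (hypothetically) the raw array were stored channel-first as (N, 4, 28, 28):
import numpy as np

X = np.random.rand(10, 4, 28, 28)       # hypothetical channel-first layout

# A plain reshape(-1, 28, 28, 4) would scramble pixels and channels here;
# transpose moves the channel axis to the end without mixing values.
X = np.transpose(X, (0, 2, 3, 1))
print(X.shape)                           # (10, 28, 28, 4)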