Keras: 2 similar datasets, 1 works, the other raises ValueError - python

I'm learning deep learning and trying to write some models, but I've gotten stuck on the dataset. When I use ready-made code and a dataset from GitHub it works normally, but when I try my own dataset with the same code it doesn't work. However, both datasets have the same type and shape:
working dataset:
Shape of train: (5000, 32, 32, 3)
Type of train: <class 'numpy.ndarray'>
Shape of train labels: (5000,)
Shape of valid: (500, 32, 32, 3)
Shape of valid labels: (500,)
my data-set:
Shape of train: (31368, 32, 32, 3)
Type of train: <class 'numpy.ndarray'>
Shape of train labels: (31368,)
Shape of valid: (7841, 32, 32, 3)
Shape of valid labels: (7841, 32, 32, 3)
Shape of train_pixels[0]: (32, 32, 3)
Error I got:
ValueError: Error when checking model input: the list of Numpy arrays
that you are passing to your model is not the size the model expected.
Expected to see 1 arrays but instead got the following list of 7841
arrays: [array([[[186, 182, 255],
[179, 177, 255],
[163, 161, 244],...
There is one similar question I have found here, but I couldn't use it; I got other errors. This solution doesn't work either.
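A hedged note on where to look, based only on the shapes and error printed above: Keras is receiving a list of 7841 separate arrays where it expects one array, and the validation labels are reported with shape (7841, 32, 32, 3) rather than (7841,). Below is a minimal sketch of stacking and checking such data before calling model.fit(); the variable names x_valid and y_valid are hypothetical:
import numpy as np

# If the images are a Python list of (32, 32, 3) arrays, stack them into one
# 4-D array of shape (N, 32, 32, 3); Keras expects a single array, not a list.
x_valid = np.stack(x_valid)

# Labels should be 1-D of shape (N,) (or one-hot encoded, (N, num_classes)),
# not a second stack of images.
print(x_valid.shape)   # e.g. (7841, 32, 32, 3)
print(y_valid.shape)   # should be (7841,)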

Related

Expected 5-dimensional input for 5-dimensional weight but got 4

RuntimeError: Expected 5-dimensional input for 5-dimensional weight [32, 3, 1, 5, 5], but got 4-dimensional input of size [3, 256, 128, 128] instead
I printed the input shape, which is "inputs shape: torch.Size([2, 3, 256, 128, 128])".
The error occurs when I am training the model, in this part of the code:
for i, model in enumerate(models):
    opts.append(optim.AdamW(models[i].parameters(), lr=args.lr[i]))

train_model(models,
            dataloaders,
            criterion=loss_fn,
            optimizers=opts,
            opath=args.checkpoint_dir,
            num_epochs=args.epochs)
Your model expects a 5-dimensional input, matching the [32, 3, 1, 5, 5] weight shape, but you are feeding it a [3, 256, 128, 128] input. So you need to check your model and fix the input and output shapes.
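As a hedged illustration (not code from the question): a weight of shape [32, 3, 1, 5, 5] is what a Conv3d with 3 input channels, 32 output channels, and a (1, 5, 5) kernel carries, and such a layer expects a 5-D input [N, C, D, H, W]. If the 4-D tensor in the error is simply missing an axis, unsqueeze can add it:
import torch
import torch.nn as nn

# A Conv3d whose weight has shape [32, 3, 1, 5, 5], as in the error message.
conv = nn.Conv3d(in_channels=3, out_channels=32, kernel_size=(1, 5, 5))

x = torch.randn(3, 256, 128, 128)   # 4-D tensor like the one in the error
x5 = x.unsqueeze(0)                 # add a batch dim -> [1, 3, 256, 128, 128]
out = conv(x5)                      # 5-D input now matches the 5-D weight
Whether the missing axis is the batch dimension or the depth dimension depends on how your dataloader yields samples, so check the shapes it produces before deciding where to unsqueeze.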

ValueError: TensorFlow2 Input 0 is incompatible with layer model

I am trying to code a ResNet CNN architecture based on the paper, using Python 3, TensorFlow 2, and the CIFAR-10 dataset. You can access the Jupyter notebook here.
While training the model using model.fit(), after just one epoch of training, I get the following error:
ValueError: Input 0 is incompatible with layer model: expected
shape=(None, 32, 32, 3), found shape=(32, 32, 3)
The training images are batched with batch_size = 128, so the training loop produces the 4-D tensor that TF Conv2D expects: (128, 32, 32, 3).
What's the source of this error?
OK, I found a small issue in your code. The problem is in the test dataset: you forgot to transform (and batch) it properly, which is presumably also why the error only appears after the first epoch, when validation runs on the test set. So currently you have this:
images, labels = next(iter(test_dataset))
images.shape, labels.shape
(TensorShape([32, 32, 3]), TensorShape([10]))
You need to apply the same transformations to the test set as you did to the train set, but of course keep in mind: no shuffling and no augmentation.
def testaugmentation(x, y):
    x = tf.image.resize_with_crop_or_pad(x, HEIGHT + 8, WIDTH + 8)
    x = tf.image.random_crop(x, [HEIGHT, WIDTH, NUM_CHANNELS])
    return x, y

def normalize(x, y):
    x = tf.image.per_image_standardization(x)
    return x, y

test_dataset = (test_dataset
                .map(testaugmentation)
                .map(normalize)
                .batch(batch_size=batch_size, drop_remainder=True))
images, labels = next(iter(test_dataset))
images.shape, labels.shape
(TensorShape([128, 32, 32, 3]), TensorShape([128, 10]))
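One side note on the code above (my reading, not part of the original answer): testaugmentation still calls tf.image.random_crop, which is augmentation. If all you need is the missing batch dimension plus normalization, a leaner test pipeline would be something like:
test_dataset = (test_dataset
                .map(normalize)
                .batch(batch_size, drop_remainder=True))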

Input image data to tensorflow placeholder

I'm working with the keras.datasets.fashion_mnist dataset, which contains 28 x 28 grayscale images. I've built a pretty simple convolutional neural network that accepts a placeholder of images defined as:
X = tf.placeholder(tf.float32, [None, 28, 28, INPUT_CHANNELS], name='X_placeholder')
I'm starting out with a <type 'numpy.ndarray'> of shape (100, 28, 28). 100 here represents the batch size that I've chosen to train with.
Obviously, the dimensionality doesn't line up here. The graph I've built should work with RGB images as well, hence the INPUT_CHANNELS dimension. As expected, when I try to train, I get the following error:
ValueError: Cannot feed value of shape (100, 28, 28) for Tensor u'X_placeholder:0', which has shape '(?, 28, 28, 1)'
Being relatively new to TF and numpy, I'm failing to see how to add in that extra dimension. Having pieced together my code from various sources, I can't say that I chose the placeholder input shape [None, 28, 28, INPUT_CHANNELS], but I want to stick with it instead of trying to work around it.
Question
How can I reshape my training data to match the expected placeholder dimensionality?
In numpy:
You can use np.newaxis, np.expand_dims, or reshape() to add a dimension.
import numpy as np

train_data = np.random.normal(size=(100, 28, 28))
print(train_data.shape)

new_a = train_data[..., np.newaxis]          # option 1: np.newaxis
print(new_a.shape)

new_a = np.expand_dims(train_data, axis=-1)  # option 2: np.expand_dims
print(new_a.shape)

new_a = train_data.reshape(100, 28, 28, 1)   # option 3: reshape
print(new_a.shape)
(100, 28, 28)
(100, 28, 28, 1)
(100, 28, 28, 1)
(100, 28, 28, 1)
In tensorflow:
You can use tf.newaxis, tf.expand_dims, or tf.reshape to add a dimension.
import tensorflow as tf

train_data = tf.placeholder(shape=(None, 28, 28), dtype=tf.float64)
print(train_data.shape)

new_a = train_data[..., tf.newaxis]                      # option 1: tf.newaxis
print(new_a.shape)

new_a = tf.reshape(train_data, shape=(-1, 28, 28, 1))    # option 2: tf.reshape
print(new_a.shape)

new_a = tf.expand_dims(train_data, axis=-1)              # option 3: tf.expand_dims
print(new_a.shape)
(?, 28, 28)
(?, 28, 28, 1)
(?, 28, 28, 1)
(?, 28, 28, 1)
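As a practical follow-up (not part of the original answer): since the placeholder is fixed at shape (?, 28, 28, 1), the usual approach is to expand the NumPy batch right before feeding it. A minimal sketch, where batch, sess, and train_op are hypothetical stand-ins for your own training loop:
batch_4d = batch[..., np.newaxis]             # (100, 28, 28) -> (100, 28, 28, 1)
sess.run(train_op, feed_dict={X: batch_4d})   # now matches X's (?, 28, 28, 1) shape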

Trying to understand the behaviour of Keras Dense input shape with ndarrays

I'm trying to fit my simple Keras model for 5-class classification:
model = Sequential()
model.add(Dense(64, input_shape=(6,), activation="relu"))
model.add(Dense(5, activation="softmax"))
Also, I have the data with format:
>print(features)
[array([155, 22, 159, 57, 247, 88], dtype=uint8),
array([184, 165, 127, 49, 190, 0,], dtype=uint8),
...
array([35, 136, 32, 255, 114, 137], dtype=uint8)]
But when I'm trying to fit the model, I'm getting the next error:
Error when checking input: expected input_layer_input to have shape (6,) but got array with shape (1,)
I can't understand what the reason for this error is. Could you please help me figure it out?
Some additional information:
>type(features)
numpy.ndarray
>features.shape
(108885,)
>type(features[0])
numpy.ndarray
>features[0].shape
(6,)
You could change the input data to be a 2-dimensional (NumPy) array, or you could just change the input_shape to (1,), depending on what you want to do. Right now you have an array of arrays, and Keras doesn't accept that.
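A minimal sketch of the first option (building a single 2-D array), assuming features is the object array of (6,)-shaped arrays shown in the question:
import numpy as np

# Stack the 108885 length-6 arrays into one 2-D array of shape (108885, 6),
# which matches the model's input_shape=(6,).
features_2d = np.stack(features)
print(features_2d.shape)   # (108885, 6)
features_2d can then be passed to model.fit() together with your labels.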

ValueError: Dimensions must be equal, but are 64 and 4 for 'MatMul' (op: 'MatMul') with input shapes: [?,64], [4,?]

I got an error,
ValueError: Dimensions must be equal, but are 64 and 4 for 'MatMul' (op: 'MatMul') with input shapes: [?,64], [4,?].
I wrote codes,
from keras import backend as K
print(input_encoded_m)
print(question_encoded)
match = K.dot(input_encoded_m, question_encoded)
print(input_encoded_m) shows Tensor("cond_3/Merge:0", shape=(?, 68, 64), dtype=float32) and print(question_encoded) shows Tensor("cond_5/Merge:0", shape=(?, 4, 64), dtype=float32). I think the dot method is not a good fit for multiplying tensors whose shapes don't line up, so I rewrote it as:
from keras import backend as K
match = K.get_value(input_encoded_m * question_encoded)
But this error occurs:
ValueError: Dimensions must be equal, but are 68 and 4 for 'mul' (op: 'Mul') with input shapes: [?,68,64], [?,4,64]
How can I multiply input_encoded_m and question_encoded? What is wrong?
I'm not sure which of the dimensions is your actual number of inputs, but the first dimension needs to be the same.
For example, you would need to have shapes:
(68, 64, 4) and (68, 4, 64)
or
(64, 68, 4) and (64, 4, 68)
or
(4, 68, 64) and (4, 64, 68), etc.
But your numbers of inputs are 68 and 4, and these need to match.
You should check out the examples given here in the docs.
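For what it's worth, when the two tensors share the batch dimension and a common trailing feature axis (64 here), a batched dot product over that axis is a common way to compute this kind of match score. A sketch using the Keras backend, assuming the shapes printed in the question; this is one possible approach, not necessarily what the original code intended:
from keras import backend as K

# input_encoded_m: (batch, 68, 64), question_encoded: (batch, 4, 64)
# Contract the shared 64-dim axis; the result has shape (batch, 68, 4).
match = K.batch_dot(input_encoded_m, question_encoded, axes=(2, 2))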
