Input and output shape in a convolutional neural network - Python

ValueError: Input 0 of layer sequential_30 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [None, None, None]

First of all, the details you have provided are quite sparse. From what you have shared, I gather that you are passing a 3-dimensional numpy array to a CNN layer. A CNN layer only accepts 4-dimensional numpy arrays, but you have given it a 3-dimensional array as input. To solve this, reshape the array into a 4D array. If you provide your code, I can point out the exact lines you need to change.
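As a minimal sketch (the array name and image size here are placeholder assumptions), the usual fix is to add the missing channel axis so the batch of samples becomes 4-dimensional:

import numpy as np

x = np.random.rand(100, 28, 28)    # assumed: 100 samples of size 28x28
x = np.expand_dims(x, axis=-1)     # -> (100, 28, 28, 1), i.e. (batch, height, width, channels)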

Related

Using a convolution neural network with a non-image input

I want to use a convolutional neural network, but I have a 2D array as input rather than an image. I am trying to evaluate a board game state where shapes are important.
The board is 5x5 and the values can be between -1 and 1, stored as a list of lists, e.g.:
[[-1,1.0,-1,1,-1],[0,1,0,0,0],[0,0,0,0,0],[0,0,0,0,0],[-1,0.6,-1,-1,1]]
the first layer of the model is
tf.keras.layers.Conv2D(32, (3,3), input_shape=(5,5,1))
I convert the board to a numpy array
np.array([[-1,1.0,-1,1,-1],[0,1,0,0,0],[0,0,0,0,0],[0,0,0,0,0],[-1,0.6,-1,-1,1]])
I gather the boards into a list.
Then I convert the list into an array of arrays to fit
model.fit(np.array(x_train_l), y_train, epochs=10)
I get the following error:
ValueError: Input 0 of layer sequential is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [None, 5, 5]
Just reshape your numpy array to have shape (5,5,1). Currently it has shape (5,5).
np.array([[-1,1.0,-1,1,-1],[0,1,0,0,0],[0,0,0,0,0],[0,0,0,0,0],[-1,0.6,-1,-1,1]]).reshape(5,5,1)
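The same reshape works for the full training set once the boards are stacked (a sketch using the x_train_l and y_train names from the question):

x_train = np.array(x_train_l).reshape(-1, 5, 5, 1)   # (num_boards, 5, 5, 1)
model.fit(x_train, y_train, epochs=10)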

Tensorflow Keras Conv2D error with 2D numpy array input

I would like to train a CNN using a 2D numpy array as input, but I am receiving this error: ValueError: Error when checking input: expected conv2d_input to have 4 dimensions, but got array with shape (21, 21).
My input is indeed a 21x21 numpy array of floats. The first layer of the network is defined as Conv2D(32, (3, 3), input_shape=(21, 21, 1)) to match the shape of the input array.
I have found some similar questions, but none pertaining to a 2D input array; they mostly deal with images. According to the documentation, Conv2D expects a 4D input tensor containing (samples, channels, rows, cols), but I cannot find any documentation explaining the meaning of these values. Similar questions about image inputs suggest reshaping the input array with np.ndarray.reshape(), but when I try that I receive an input error.
How can I train a CNN on such an input array? Should input_shape be a different size tuple?
Your current numpy array has dimensions (21, 21). However, TensorFlow expects input tensors to have dimensions in the format (batch_size, height, width, channels), or BHWC, which means you need to convert your numpy input array from 2 dimensions to 4. One way to do so is as follows:
input = np.expand_dims(input, axis=0)
input = np.expand_dims(input, axis=-1)
Now, the numpy input array has dimensions: (1, 21, 21, 1) which can be passed to a TF Conv2D operation.
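The expand_dims on axis=0 above builds a batch of one; for a whole training set the same idea applies (a sketch, assuming samples is a list of 21x21 arrays):

X = np.stack(samples)              # (num_samples, 21, 21)
X = np.expand_dims(X, axis=-1)     # (num_samples, 21, 21, 1)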
Hope this helps! :)

ValueError: expected axis -1 of input shape to have value 1 but received input with shape [1,32,32,3]

I am currently working to train a dataset stored as numpy arrays using
train_dataset=tf.data.Dataset.from_tensor_slices(train_data)
Here, train_data is a numpy array of data without the associated labels. The model I am running was created to work on datasets as DatasetV1Adapters (MNIST and the dataset for pix2pix GANs). I have been looking for documentation on the required correction for quite a while now (around four weeks), and this method hasn't solved my problem.
For the training process, I was running:
for images in train_dataset:
    # images = np.expand_dims(images, axis=0)
    disc_loss += train_discriminator(images)
Which would give me an error of
ValueError: Input 0 of layer conv2d_2 is incompatible with the layer: expected ndim=4, found ndim=3.
The array shape was [32, 32, 3], so the batch dimension of 100 images was lost. I tried running the commented-out line images = np.expand_dims(images, axis=0), which gave me [1, 32, 32, 3] and matched the required dimensionality. I thought my problem would be solved, but instead I now get the following error:
ValueError: Input 0 of layer conv2d_4 is incompatible with the layer: expected axis -1 of input shape to have value 1 but received input with shape [1, 32, 32, 3]
Which I don't fully understand. The error seems to be related to the DatasetV1Adapter, as I get the same type of error with various codes. I have tried uploading my dataset to GitHub, but as a 10 GB folder I am unable to actually upload it. Any help will be appreciated.
EDIT: Followed #Sebastian-Sz's advice (kind of). I set the channels in the model to three to accommodate RGB instead of grayscale. Running this code gave me
TypeError: Value passed to parameter 'input' has DataType uint8 not in list of allowed values: float16, bfloat16, float32, float64
So I added
train_data = np.asarray(train_data, dtype=np.float)
Now I get an error saying:
Input 0 of layer dense_5 is incompatible with the layer: expected axis -1 of input shape to have value 6272 but received input with shape [1, 8192]
Which makes no sense to me
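A minimal sketch of the shape and dtype fixes implied above (the batch size of 100 and the exact pipeline are assumptions; the model itself is not shown in the question):

import numpy as np
import tensorflow as tf

# Cast to float32 up front (Conv2D rejects uint8) and let tf.data do the batching,
# so each element of the dataset keeps a batch dimension: (100, 32, 32, 3).
train_data = train_data.astype(np.float32)
train_dataset = tf.data.Dataset.from_tensor_slices(train_data).batch(100)

# The first Conv2D layer must then also expect 3 channels, e.g. input_shape=(32, 32, 3).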

3 dimensional array as input for Keras

I have a 3-dimensional array and would like to use it as the input to a sequential model in Keras. The shape of the input array is (32, 32, 4). I want an array with the same shape as output. How should I build a feed-forward neural network with one input, one output and one hidden layer that works with such an array as input?
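One possible answer, as a sketch (the hidden-layer size of 512 is an assumption): flatten the (32, 32, 4) input, pass it through one hidden Dense layer, and reshape the output back to the original shape:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(32, 32, 4)),  # 32*32*4 = 4096 values
    tf.keras.layers.Dense(512, activation="relu"),     # hidden layer (size assumed)
    tf.keras.layers.Dense(32 * 32 * 4),                # one output per input value
    tf.keras.layers.Reshape((32, 32, 4)),              # back to the input shape
])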

Why does tf.pad return shape [?,?,?]

I tried to do some custom padding before feeding into a conv1D net, as follows.
x = tf.placeholder("float", [None, 50, 1])
padding = tf.constant([[0, 0], [5, 0], [0, 0]])  # pad 5 steps at the front of the time axis
y = tf.pad(x, padding)
However, after the above manipulation, y is a tensor of shape (?,?,?), so when feeding it to tf.layers.conv1d I get the error "The channel dimension of the inputs should be defined. Found 'None'".
My question is: why does the padded result have a None shape? It should not be hard to infer the shape; my guess is that it is only computed at run time, but that is not convenient, is it? And can I use reshape before passing it to conv1d?
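If the static shape really is lost, a common workaround (a sketch, assuming the padding amounts above, i.e. 50 + 5 = 55 time steps) is to restore it explicitly before the conv layer:

y = tf.pad(x, padding)
y.set_shape([None, 55, 1])       # re-attach the static shape: 55 time steps, 1 channel
# or, equivalently for this case:
y = tf.reshape(y, [-1, 55, 1])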
