In my convolutional network, I recently added a Lambda layer as the input layer to select specific channels of the input images, following the answer to this question:
model.add(Lambda(lambda x: x[:,:,:2], input_shape=(w, h, 3)))
When I tried to add a MaxPooling2D layer, I got the error: ValueError: Negative dimension size caused by subtracting 3 from 2 for 'max_pooling2d_14/MaxPool' (op: 'MaxPool') with input shapes: [?,250,2,64]
I thought I had mixed up the Theano and TensorFlow dimension orderings, so I edited the Lambda layer:
model.add(Lambda(lambda x: x[:2,:,:], input_shape=(w, h, 3)))
This time I had no problem adding more layers, but when I tried to use fit_generator, I got the error: InvalidArgumentError: Incompatible shapes: [64] vs. [2]
The full traceback is very long, so I have uploaded it here.
I'm running on Linux with 4 GPUs. Thanks for your help.
The problem lies in the way I slice the input in the Lambda layer.
The input shape has 4 dimensions, in this order: batch_size, width, height, channels.
To select multiple channels of the input data, since TensorFlow does not support NumPy's advanced indexing, we should slice each channel from the input tensor, expand the dimensions to restore the channel axis, and then concatenate the slices.
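As a minimal sketch of that approach (model, w, and h are as in the question; channels 0 and 2 are picked here just to illustrate non-contiguous selection):

from keras.layers import Lambda
from keras import backend as K

def select_channels(x):
    # x has shape (batch, w, h, 3); take channels 0 and 2
    c0 = K.expand_dims(x[:, :, :, 0], axis=-1)   # (batch, w, h, 1)
    c2 = K.expand_dims(x[:, :, :, 2], axis=-1)   # (batch, w, h, 1)
    return K.concatenate([c0, c2], axis=-1)      # (batch, w, h, 2)

model.add(Lambda(select_channels, input_shape=(w, h, 3)))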
Suppose I have a NumPy array with shape (1303, 3988, 1). What value do I need to pass to Input(), and do I need to reshape the data?
I understand your data to be 1303 instances of vectors of shape (3988, 1).
The answer depends on the layer that comes after the input:
If you feed it into a Conv1D layer, the input layer should be:
Input(shape=(3988, 1))
Otherwise, you should squeeze the data with:
np.squeeze(your_numpy_array)
or just flatten the input after the first layer:
inputs = Input(shape=(3988, 1))
x = Flatten()(inputs)
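For instance, a minimal sketch of the Conv1D option (the filter count, kernel size, and final Dense layer are placeholder choices, not from the question):

from keras.layers import Input, Conv1D, Flatten, Dense
from keras.models import Model

inputs = Input(shape=(3988, 1))               # one sample: 3988 steps, 1 channel
x = Conv1D(16, 5, activation='relu')(inputs)  # placeholder filter count and kernel size
x = Flatten()(x)
outputs = Dense(1, activation='sigmoid')(x)   # placeholder head
model = Model(inputs, outputs)                # then model.fit(data, labels) with data of shape (1303, 3988, 1)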
I'm building a 1D model with TensorFlow for audio, but I have a problem with the input shape at the second MaxPool1D in the model.
The problem is here, after this Pooling:
x = Convolution1D(32, 3, activation='relu', padding='valid')(x)
x = MaxPool1D(4)(x)
I get this error:
ValueError: Negative dimension size caused by subtracting 4 from 1 for 'max_pooling1d_5/MaxPool' (op: 'MaxPool') with input shapes: [?,1,1,32].
I tried to reshape x (which is a tensor), but I don't think I'm going about it the right way.
In the same model, before this point, I have a couple of convolutional layers and a max-pooling layer that work properly.
Anyone have suggestions?
Thanks
The number of steps in the input to the MaxPool1D layer is smaller than the pool size.
In the error, it says ...input shapes: [?,1,1,32], which means the output from the Convolution1D layer has shape [1, 32]. It needs at least 4 steps to be a valid input to the MaxPool1D(4) layer, i.e., a minimum shape of [4, 32].
You can continue walking this back. For example, the Convolution1D layer decreases the step count by kernel_size - 1 = 2, so its input needs at least 4 + 2 = 6 steps, i.e., a shape of at least [6, ?]. Continuing up to the input layer, you'll find that the input size is too small.
You'll need to change the architecture to allow the input size, or, if applicable, change the input size.
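To make the walk-back concrete, here is a small sketch of the shape arithmetic (assuming 'valid' padding and the default pool stride):

def conv1d_steps(steps, kernel_size):
    # 'valid' convolution shrinks the step count by kernel_size - 1
    return steps - (kernel_size - 1)

def maxpool1d_steps(steps, pool_size):
    # default stride equals pool_size
    return steps // pool_size

steps = 6                               # smallest input that survives this block
steps = conv1d_steps(steps, 3)          # 6 -> 4
steps = maxpool1d_steps(steps, 4)       # 4 -> 1; anything smaller fails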
Somehow I found a very strange bug in the Keras library.
My learning method uses a three-layer neural network: an input layer with 130,517 units (the input size), a hidden layer of 100,000 units, and an output layer of 2 units.
In my code, I run batch learning (using a partial_fit function), but it repeatedly throws the same error:
{ValueError} Error when checking input: expected dense_1_input to have shape (130517,) but got array with shape (1,)
I checked the input dimension again and found that it was indeed 130,517, as I expected.
Here is a screenshot of the variables while debugging; as you can see, the shape of np.array(X[0]) is (130517,):
In any case, here is the initialization code for the neural network and the call to partial_fit:
def initClassifier(self):
    self.classifier.add(Dense(100000, input_dim=130517, activation='relu'))
    self.classifier.add(Dense(2, activation='softmax'))
    self.classifier.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

def partial_fit(self, X, y, classes):
    self.classifier.train_on_batch(np.array(X[0]), np.array(y))
Does anyone have a solution?
Could it be a bug in the Keras code?
When training, Keras expects your data to include a dimension for the batch size. In your case, this means the data should have shape (batch_size, 130517). However, you are passing a NumPy array of shape (130517,), which causes the error. You can reshape your data to include a batch dimension as follows:
X_reshaped = X[0].reshape(1, -1)
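In context, the call might look like this (a sketch; self.classifier and the surrounding class come from the question, and the y reshape assumes y holds the 2 class values):

import numpy as np

def partial_fit(self, X, y, classes):
    X_batch = np.array(X[0]).reshape(1, -1)   # (130517,) -> (1, 130517)
    y_batch = np.array(y).reshape(1, -1)      # (2,) -> (1, 2), matching the batch dimension
    self.classifier.train_on_batch(X_batch, y_batch)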
I have a multidimensional time series dataset with shape (n_samples, 512, 9), where 512 is the number of timesteps and 9 is the number of channels.
After the first 1D CNN layer with 64 kernels, my output shape is (n_samples, 512, 64). I would now like the input to the next layer, an LSTM, to have shape (n_samples, 384, 64).
This could be achieved with a pooling layer that returns the top 3 values from each pool of size 4, but is it possible to implement this in Keras?
You can probably solve this with a keras.layers.Lambda layer and the backend function tf.nn.top_k. Note that this is different from tf.nn.in_top_k, which only checks whether targets are among the top k predictions rather than returning the values themselves.
Now you can define a function that returns the top k values (and does so reasonably efficiently) and pass it to the Lambda layer.
I sadly haven't worked enough with Keras to type out the specific code, but maybe this is enough to point you in the right direction.
Also, there exists a similar thread for TensorFlow specifically.
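That said, here is a rough, untested sketch of the idea (assuming the fixed sizes from the question: 512 timesteps, 64 channels, and pools of 4 with the top 3 values kept; note that tf.nn.top_k sorts the kept values within each pool):

import tensorflow as tf
from tensorflow.keras.layers import Lambda

def top3_of_4(x):
    x = tf.reshape(x, (-1, 128, 4, 64))   # group 512 steps into 128 pools of 4
    x = tf.transpose(x, (0, 1, 3, 2))     # move the pool axis last: (batch, 128, 64, 4)
    top, _ = tf.nn.top_k(x, k=3)          # top 3 values per pool: (batch, 128, 64, 3)
    x = tf.transpose(top, (0, 1, 3, 2))   # back to (batch, 128, 3, 64)
    return tf.reshape(x, (-1, 384, 64))   # flatten the pools: (batch, 384, 64)

top_pool = Lambda(top3_of_4)              # use in place of the MaxPool layer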
System:
Keras 1.0.1
Theano 0.8.2
I have a very simple function:
from keras import backend as kback

def ave_embed(xval):
    return kback.mean(xval, axis=1)
I'm using this in a Keras Lambda Layer followed by a Flatten Layer:
model.add(Lambda(ave_embed, output_shape=(d, 1)))
model.add(Flatten())
However, when I compile the model, I get the following error:
Exception: Input 0 is incompatible with layer flatten_1: expected ndim >= 3, found ndim=2
I fixed it by doing the following:
model.add(Lambda(ave_embed, output_shape=(d, 1)))
model.add(Reshape((d, 1)))
model.add(Flatten())
Can anyone explain the cause of this exception? It looks like I'm applying a reshape to an output that should already have that shape.
It looks like I'm applying a reshape to an output that should already have that shape.
You are right!
If you have a 3D input and take the mean across the second dimension (kback.mean(xval, axis=1)), your Lambda layer will output a 2D tensor of shape (batch_size, d).
For your combination of Lambda layer and Flatten layer to work, you should have at least a 4D input.
You just have to remove your Flatten layer to make it work.
Adding a Reshape layer makes the input tensor of your Flatten layer 3D, but with an unnecessary extra dimension that you flatten away right afterwards.
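A minimal sketch of the fix (the Embedding layer and the sizes are placeholders to produce a 3D input, not code from the question):

from keras.models import Sequential
from keras.layers import Dense, Embedding, Lambda
from keras import backend as kback

seq_len, d = 10, 8                                     # placeholder sizes

model = Sequential()
model.add(Embedding(1000, d, input_length=seq_len))    # output: (batch, seq_len, d)
model.add(Lambda(lambda x: kback.mean(x, axis=1),
                 output_shape=(d,)))                   # output: (batch, d), already flat
model.add(Dense(1, activation='sigmoid'))              # no Flatten layer needed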