I am working on a CNN model and want to add a new categorical feature before the Dense layer. I tried to concatenate the feature to the flattened output of the CNN layers, but it looks like the concatenate function in Keras requires tensors as input, not arrays. How should I go about it? Here is the code I have tried so far:
model = Sequential()
model.add(Conv2D(128, (6, 6), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(128, (6, 6)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
I am trying to use the Concatenate function, but it can only join tensors, whereas my feature is a numpy array of shape (1, 3). Any help would be appreciated.
You should create a second input branch alongside your actual model. This second branch takes your numpy array as input and does nothing else; then you concatenate the two. Note that the old Merge layer (with mode='concat') has been removed from recent versions of Keras, so the current way to write this is the functional API with Concatenate. Like this (height, width, channels, and num_classes are placeholders for your own values):
from tensorflow.keras.layers import (Input, Conv2D, Activation, MaxPooling2D,
                                     Dropout, Flatten, Concatenate, Dense)
from tensorflow.keras.models import Model

# CNN branch (your original model)
image_input = Input(shape=(height, width, channels))
x = Conv2D(128, (6, 6), padding='same')(image_input)
x = Activation('relu')(x)
x = Conv2D(128, (6, 6))(x)
x = Activation('relu')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Dropout(0.25)(x)
x = Flatten()(x)

# Side branch: takes your extra feature as-is (3 values per sample)
feature_input = Input(shape=(3,))

merged = Concatenate()([x, feature_input])
output = Dense(num_classes, activation='softmax')(merged)  # your final layer(s)
model = Model(inputs=[image_input, feature_input], outputs=output)
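To train it, in place of the usual X_train you pass both inputs as a list. A minimal sketch, where X_train, feature_array (of shape (num_samples, 3)), y_train, and the compile settings are placeholders for your own setup:
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit([X_train, feature_array], y_train, batch_size=32, epochs=10)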
I wrote very simple CNN code for spectrogram images, but the accuracy is only 0.3~0.4. What do I have to change or add to improve the accuracy?
model.add(Conv2D(32, (3, 3), input_shape=X_train.shape[1:], padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(128, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(64))
model.add(Dense(32))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(14))
model.add(Activation('softmax'))
With the information you provide, there is little chance to pin down the problem. The definition of your model looks correct (though you are missing an activation function after the first Dense layer, if that is by accident). So here are some considerations:
Do you train long enough? Your model is quite big and therefore needs a long time to converge AND a large dataset to train with.
Is your dataset large enough and does it contain enough variance? If your dataset doesn't represent your problem well, you can't train on it.
Take a look at the loss curves of your validation AND training sets. Are you overfitting or underfitting?
Do you correctly normalize and preprocess your dataset? Try transforming the image values to a range of -1 to 1 or 0 to 1 with a float datatype (see the sketch after this list).
Is your dataset balanced? As you are softmaxing over 14 classes, you need a balanced dataset in order to learn every single class.
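For the normalization point, a minimal sketch, assuming X_train and X_test are uint8 image arrays (the variable names are placeholders):
import numpy as np

# scale pixel values from [0, 255] to floats in [0, 1]
X_train = X_train.astype(np.float32) / 255.0
X_test = X_test.astype(np.float32) / 255.0

# or, for a [-1, 1] range:
# X_train = X_train.astype(np.float32) / 127.5 - 1.0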
Hope this helped a little. If you need further help, please provide a detailed description of your problem and of what you are doing in your whole process.
I am getting confused with the filters parameter, which is the first parameter of the Conv2D() layer function in Keras. As I understand it, filters are supposed to do things like edge detection, sharpening, or blurring the image, but when I define the model as
input_shape = (32, 32, 3)
model = Sequential()
model.add( Conv2D(64, kernel_size=(5, 5), activation='relu', input_shape=input_shape, strides=(1,1), padding='same') )
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2,2)))
model.add(Conv2D(64, kernel_size=(5, 5), activation='relu', input_shape=input_shape, strides=(1,1), padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2,2)))
model.add(Conv2D(128, kernel_size=(5, 5), activation='relu', input_shape=input_shape, strides=(1,1), padding='same'))
model.add(Flatten())
model.add(Dense(3072, activation='relu'))
model.add(Dense(2048, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
I am not mentioning edge detection, blurring, or sharpening anywhere in the Conv2D function. The input images are 32 by 32 RGB images.
So my question is: when I define the convolution layer as Conv2D(64, ...), does this 64 mean 64 different types of filters, such as vertical edge, horizontal edge, etc., which are chosen by Keras at random? If so, is the output of the convolution layer (with 64 filters, a 5x5 kernel, and 1x1 stride) on a 32x32 1-channel image then 64 images of 28x28 size each? How are these 64 images combined to form a single image for further layers?
The filters argument sets the number of convolutional filters in that layer. These filters are initialized to small random values, using the method specified by the kernel_initializer argument. During network training, the filters are updated in a way that minimizes the loss. So over the course of training, the filters will learn to detect certain features, like edges and textures.
It is very important to realize that one does not hand-craft filters. These are learned automatically during training -- that's the beauty of deep learning.
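A minimal sketch showing that the 64 filters are just trainable weight tensors you can inspect (tf.keras is assumed here; the shapes match the first layer of the model above):
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, kernel_size=(5, 5), activation='relu',
                           strides=(1, 1), padding='same', input_shape=(32, 32, 3)),
])
weights, biases = model.layers[0].get_weights()
print(weights.shape)  # (5, 5, 3, 64): one 5x5x3 kernel per filter
print(biases.shape)   # (64,)
Before training, these are just random numbers; after training, they become whatever minimizes the loss.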
I would highly recommend going through some deep learning resources, particularly https://cs231n.github.io/convolutional-networks/ and https://www.youtube.com/watch?v=r5nXYc2wYvI&list=PLypiXJdtIca5sxV7aE3-PS9fYX3vUdIOX&index=3&t=3122s.
Just wanted to clarify what the output shape was.
Although jakub's answer was good, I don't think it addressed the "single image for further layers" part of the question.
I did a model.summary() to find out more.
I found that the shape returned from a Conv2D is (None, img_height, img_width, num_filters).
So when you pass the output of the Conv2D to MaxPooling, you are passing that shape, which means you are basically passing every convolved feature map.
The other layers handle this gracefully. MaxPooling2D(2, 2) returns the same kind of shape, but with the image size halved: (None, img_height / 2, img_width / 2, num_filters).
Side note: I wish the filters parameter were named num_filters, because filters seems to imply you're passing in a list of filters with which to convolve the image.
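A minimal sketch confirming those shapes with model.summary() (tf.keras is assumed). Note that the 64 feature maps are not combined into a single image; they are stacked along the last, channel-like axis:
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, (5, 5), padding='same', input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D((2, 2)),
])
model.summary()
# conv2d:        (None, 32, 32, 64) -- 64 feature maps, stacked as channels
# max_pooling2d: (None, 16, 16, 64) -- spatial size halved, channels kept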
I am trying to combine CNN and LSTM for image classification.
I tried the following code and I am getting an error. I have 4 classes on which I want to train and test.
Following is the code:
from keras.models import Sequential
from keras.layers import LSTM,Conv2D,MaxPooling2D,Dense,Dropout,Input,Bidirectional,Softmax,TimeDistributed
input_shape = (200,300,3)
Model = Sequential()
Model.add(TimeDistributed(Conv2D(
filters=16, kernel_size=(12, 16), activation='relu', input_shape=input_shape)))
Model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2),strides=2)))
Model.add(TimeDistributed(Conv2D(
filters=24, kernel_size=(8, 12), activation='relu')))
Model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2),strides=2)))
Model.add(TimeDistributed(Conv2D(
filters=32, kernel_size=(5, 7), activation='relu')))
Model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2),strides=2)))
Model.add(Bidirectional(LSTM((10),return_sequences=True)))
Model.add(Dense(64,activation='relu'))
Model.add(Dropout(0.5))
Model.add(Softmax(4))
Model.compile(loss='sparse_categorical_crossentropy',optimizer='adam')
Model.build(input_shape)
I am getting the following error:
"Input tensor must be of rank 3, 4 or 5 but was {}.".format(n + 2))
ValueError: Input tensor must be of rank 3, 4 or 5 but was 2.
I found a lot of problems in the code:
your data are 4D, so plain Conv2D layers are fine; TimeDistributed is not needed
your output is 2D, so set return_sequences=False in the last LSTM cell
your last layers are very messy: there is no need to put a dropout between a layer's output and its activation
you need categorical_crossentropy, not sparse_categorical_crossentropy, because your target is one-hot encoded
an LSTM expects 3D data, so you need to go from 4D (the output of the convolutions) to 3D. There are two possibilities you can adopt: 1) reshape to (batch_size, H, W * channels); 2) reshape to (batch_size, W, H * channels). Either way, you have 3D data to use inside your LSTM
Here is a full model example (input_shape and nclasses are as defined for your data):
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv2D, MaxPooling2D, Bidirectional, LSTM,
                                     Dense, Lambda, Reshape, Permute)

def ReshapeLayer(x):
    shape = x.shape
    # possibility 1: (H, W * channels)
    reshape = Reshape((shape[1], shape[2] * shape[3]))(x)
    # possibility 2: (W, H * channels)
    # transpose = Permute((2, 1, 3))(x)
    # reshape = Reshape((shape[2], shape[1] * shape[3]))(transpose)
    return reshape
model = Sequential()
model.add(Conv2D(filters=16, kernel_size=(12, 16), activation='relu', input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(2, 2),strides=2))
model.add(Conv2D(filters=24, kernel_size=(8, 12), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2),strides=2))
model.add(Conv2D(filters=32, kernel_size=(5, 7), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2),strides=2))
model.add(Lambda(ReshapeLayer)) # <========== pass from 4D to 3D
model.add(Bidirectional(LSTM(10, activation='relu', return_sequences=False)))
model.add(Dense(nclasses,activation='softmax'))
model.compile(loss='categorical_crossentropy',optimizer='adam')
model.summary()
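A quick sanity check of the shapes with random data (a hedged sketch; the (200, 300, 3) input and the 4 classes come from the question, so input_shape = (200, 300, 3) and nclasses = 4 are assumed to be defined before building the model):
import numpy as np

X = np.random.rand(8, 200, 300, 3).astype('float32')  # 8 dummy RGB images
y = np.eye(4)[np.random.randint(0, 4, size=8)]        # one-hot targets for 4 classes
model.fit(X, y, epochs=1, batch_size=4)               # should run without shape errors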
Here is the running notebook.
I have an autoencoder-like neural network that I coded, and 9000 training examples. I also have a numpy ndarray containing 9000 arrays.
My goal is to tie the activations (not the weights) of the middle layer, which is reshaped to the size of one of the arrays in the group of 9000.
I would like a situation where I randomly initialise the numpy array. Then, using a batch size of 9000, I average the values between a particular training example's activations in the middle layer and its associated array out of the 9000 in the numpy array.
The numpy ndarray is to be used elsewhere in the program after training.
So this is what should happen if we had one training example: the numpy array might be of shape (1, 240, 5), and the representation (activations) in the middle layer of the neural network, after forward propagation, is reshaped to (1, 240, 5).
So there is a neuron for each entry in the numpy ndarray. Given that after the first forward pass the activations are reshaped to an ndarray of the same shape as the numpy array, I would like to average the values already in the numpy array with those produced by the first forward pass.
As training continues on that one training example, the activations will change, and I would like to keep averaging the values at each iteration until training is complete.
This is more complex in the example with 9000 training examples, where each example after the first pass has a separate activation ndarray and must be averaged with its own external numpy array.
Again, the external numpy array is not part of the neural network, although its final values will depend on the training procedure.
I am using Keras.
So the code could be something like this, perhaps, using model.metrics_tensors and a minibatch of 1:
a = np.random.rand(9000, 240, 5)
Mirror = Dense(240 * 5, name='mirror')
model = Sequential()
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2))) #7 x 7 x 64
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(Flatten())
model.add(Mirror)
#decoder (note: a Reshape back to 4D would be needed before these Conv2D layers)
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(UpSampling2D((2,2))) # 14 x 14 x 128
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(UpSampling2D((2,2))) # 28 x 28 x 64
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.compile(optimizer="adadelta", loss='mse',metrics=["accuracy"])
output_layers = ['mirror']
model.metrics_names.append('mirror')
model.metrics_tensors = [Mirror.output]  # a layer's output tensor, not the layer itself
x = [layer.output for layer in model.layers if layer.name in output_layers]
# these are symbolic tensors, not numpy arrays, so they cannot be flattened,
# reshaped to (1, 240, 5), and averaged with a[0:1, :, :] using numpy directly;
# see the sketch below for one way to run this averaging for real
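One workable way to implement the averaging described above: extract the middle-layer activations with a sub-model after each training step, then average them into the external array in numpy. This is a hedged sketch, not a definitive implementation; the layer name 'mirror' (added above), the (240, 5) shape, the variables X (your 9000 inputs) and n_steps, and a model that actually builds (see the Reshape note above) are all assumptions:
import numpy as np
from tensorflow.keras.models import Model

a = np.random.rand(9000, 240, 5)  # the external array, randomly initialised

# sub-model that returns the middle-layer activations for any input batch
extractor = Model(inputs=model.input, outputs=model.get_layer('mirror').output)

for step in range(n_steps):
    model.train_on_batch(X, X)  # autoencoder-style targets
    acts = extractor.predict(X).reshape(-1, 240, 5)
    a = (a + acts) / 2.0  # keep averaging with the current activations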
I have 1000 images of 28*28 resolution. I converted those 1000 images into a numpy array, forming a new array of shape (1000, 28, 28). So, while creating the convolution layer with Keras, the input shape (X value) is specified as (1000, 28, 28) and the output shape (Y value) as (1000, 10), because I have 1000 examples as inputs and 10 categories of output.
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),activation='relu',kernel_initializer='he_normal',input_shape=(1000,28,28)))
.
.
.
model.fit(train_x,train_y,batch_size=32,epochs=10,verbose=1)
So, while using the fit function, it shows ValueError: Error when checking input: expected conv2d_1_input to have 4 dimensions, but got array with shape (1000, 28, 28). Please help me provide the proper input and output dimensions for the CNN.
Code:
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),activation='relu',kernel_initializer='he_normal',input_shape=(4132,28,28)))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(Dropout(0.4))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(10, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,optimizer=keras.optimizers.Adam(),metrics=['accuracy'])
model.summary()
train_x = numpy.array([train_x])
model.fit(train_x,train_y,batch_size=32,epochs=10,verbose=1)
You need to change the inputs to 4 dimensions with channel set to 1 : (1000, 28, 28, 1) and you need to change the input_shape of the convolutional layer to (28, 28, 1):
model.add(Conv2D(32, kernel_size=(3, 3),...,input_shape=(28,28,1)))
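A minimal sketch of the matching array fix (assuming train_x currently has shape (1000, 28, 28)):
import numpy as np

train_x = train_x.reshape(-1, 28, 28, 1)  # or: np.expand_dims(train_x, -1)
print(train_x.shape)                      # (1000, 28, 28, 1)
model.fit(train_x, train_y, batch_size=32, epochs=10, verbose=1)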
Your numpy arrays need a fourth dimension. The common convention is to keep the samples in the first dimension and add the channel as the last one, so change (1000, 28, 28) to (1000, 28, 28, 1).
You can read more about this here.
From your input it looks like you are using TensorFlow as the backend.
In Keras, the input_shape of a Conv2D layer should always have 3 dimensions (the batch dimension is excluded).
For tensorflow as a backend the input_shape to your model will be
input_shape = [img_height,img_width,channels(depth)]
in your case, for the TensorFlow backend, that should be
input_shape = [28,28,1]
and the shape of train_x should be
train_x = [batch_size,img_height,img_width,channels(depth)]
in your case
train_x = [1000,28,28,1]
As you are using a grayscale image, the dimensions of the image will be (image_height, image_width), and hence you have to add an extra dimension to the image, which results in (image_height, image_width, 1). The 1 is the depth of the image: 1 for grayscale and 3 for RGB.