I have this model which is attempting to classify cats and dogs:
model = Sequential([Conv2D(128, kernel_size=(3, 3), activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)),
                    MaxPooling2D(pool_size=(2, 2)),
                    Conv2D(64, kernel_size=(3, 3), activation='relu'),
                    MaxPooling2D(pool_size=(2, 2)),
                    Flatten(),
                    Dense(32, activation='relu'),
                    Dense(2, activation='softmax')])  # pick between 2 different possible outputs

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
and then attempting to run the model like so:
history = model.fit(x=train_data_gen, steps_per_epoch=total_train // batch_size,
                    epochs=epochs, batch_size=batch_size,
                    validation_data=val_data_gen,
                    validation_steps=total_val // batch_size)
however, I get this ValueError:
ValueError: `logits` and `labels` must have the same shape, received ((None, 2) vs (None, 1)).
If I change the last dense layer to have a dimensionality of 1, then this runs, but I want a binary classification with 2 output neurons and a softmax over them, so I can inspect the per-class probabilities when analyzing the test data.
How do I fix my train_data_gen so its labels match that output dimensionality? It is a keras.preprocessing.image.DirectoryIterator object, defined like so:
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
                                                           directory=train_dir,
                                                           target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                           class_mode='binary')
Is there a way I can reshape this object so my model runs correctly? I can't find one for this object, so perhaps I need to convert it into a NumPy array or tensor first. Also, how do I choose the dimensionality/filter arguments in these models? I went with 128, 64, and 32, halving each time, because this is what I saw online, but an explanation of why these values are picked would greatly help me out. Thank you in advance for the help!
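One standard way to make the shapes agree (a sketch, not necessarily the exact suggestion referenced in the reply below) is to request one-hot labels from the generator and use a matching loss:

# class_mode='categorical' yields one-hot labels of shape (None, 2),
# matching the Dense(2, activation='softmax') output
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
                                                           directory=train_dir,
                                                           target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                           class_mode='categorical')
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])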
Thanks Jay Mody for your response. I was away on holiday and taking a break from this project, but yes, what you suggested was correct and helped me actually understand what I was doing. I also wanted to mention a few other errors I had that led to worse/useless model performance.
The steps_per_epoch and validation_steps arguments were not totally incorrect, but they produced weird graphs that I didn't see in other online examples.
I learned how they are implemented through this website; in my case, the training and validation image counts are the corresponding sizes. Now my training/validation graphs look as expected.
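Concretely, what I ended up with (a small sketch; total_train and total_val are my training and validation image counts on disk):

steps_per_epoch = total_train // batch_size      # batches of training images per epoch
validation_steps = total_val // batch_size       # batches of validation images per epoch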
Also, I played around with the filter arguments for my model and found this resource helpful. My model now looks like this, and works well:
model = Sequential([Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)),
                    MaxPooling2D(pool_size=(2, 2)),
                    Conv2D(32, kernel_size=(3, 3), activation='relu'),
                    MaxPooling2D(pool_size=(2, 2)),
                    Flatten(),
                    Dense(128, activation='relu'),
                    Dense(1, activation='sigmoid')])  # single sigmoid output for the binary cat/dog decision

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
Hope this helps others; this whole field still confuses me, but we go on :)
Related
The shape of the train/test data is (samples, 256, 256, 1). The training dataset has around 1400 samples, the validation dataset has 150 samples, and the test dataset has 250 samples. I then build a CNN model for a six-class classification task. However, no matter how hard I tune the parameters or add/remove layers (conv & dense), I get chance-level accuracy the whole time (around 16.5%). Thus, I would like to know whether I made some deadly mistake while building the model, or whether something is wrong with the data itself rather than the CNN model.
Code:
def build_cnn_model(input_shape, activation='relu'):
    model = Sequential()
    # 3 convolution layers with max pooling
    model.add(Conv2D(64, (5, 5), activation=activation, padding='same', input_shape=input_shape))
    model.add(MaxPooling2D((2, 2)))
    model.add(Conv2D(128, (5, 5), activation=activation, padding='same'))
    model.add(MaxPooling2D((2, 2)))
    model.add(Conv2D(256, (5, 5), activation=activation, padding='same'))
    model.add(MaxPooling2D((2, 2)))
    model.add(Flatten())
    # 3 fully connected layers
    model.add(Dense(1024, activation=activation))
    model.add(Dropout(0.5))
    model.add(Dense(512, activation=activation))
    model.add(Dropout(0.5))
    model.add(Dense(6, activation='softmax'))  # 6 classes
    # summarize the model
    print(model.summary())
    return model
def compile_and_fit_model(model, X_train, y_train, X_vali, y_vali, batch_size, n_epochs, LR=0.01):
    # compile the model
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=LR),
        loss='sparse_categorical_crossentropy',
        metrics=['sparse_categorical_accuracy'])
    # fit the model
    history = model.fit(x=X_train,
                        y=y_train,
                        batch_size=batch_size,
                        epochs=n_epochs,
                        verbose=1,
                        validation_data=(X_vali, y_vali))
    return model, history
I transformed the MEG data my professor recorded into magnitude scalograms using CWT; pywt.cwt(data, scales, wavelet) was used. Plotting the resulting coefficients gives a scalogram image (I merged the 62 channels into one graph).
I used the coefficients as train/test data for the CNN model. However, I tuned the parameters and tried adding/removing layers, and the classification accuracy was unchanged. Thus, I want to know where I made mistakes: in building the CNN model, or in the CWT (the way I handled the data)?
Please give me some advice, thank you.
What is the accuracy on the training data? If you have a small dataset and the model does not overfit after training for a while, then something is wrong with the model. You can also test with existing datasets that the model should be able to handle (like Fashion MNIST).
Testing whether you handled the data correctly is harder. Did you write unit tests for the different steps of the preprocessing pipeline?
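For example, a minimal sanity check along those lines might look like this (a sketch; it assumes you change the final Dense(6) in build_cnn_model to Dense(10), since Fashion MNIST has 10 classes):

import tensorflow as tf

# Fashion MNIST: 60k train / 10k test grayscale images, 28x28, 10 classes
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
X_train = X_train[..., None].astype('float32') / 255.0   # -> (60000, 28, 28, 1)
X_test = X_test[..., None].astype('float32') / 255.0

model = build_cnn_model(input_shape=(28, 28, 1))          # with the last layer changed to Dense(10, activation='softmax')
model, history = compile_and_fit_model(model, X_train, y_train, X_test, y_test,
                                       batch_size=64, n_epochs=3, LR=0.001)
# If training accuracy stays near 10% (chance) here too, the model code is suspect;
# if it learns, the problem more likely lies in the CWT preprocessing.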
I'm working with this Kaggle chess pieces dataset, but after I coded my model and ran it, it only achieved about 20% accuracy and stalled there. Is this normal when each class has fewer than 100 training images? I did image augmentation as well. If this is the issue, around how many images do I need for datasets like this?
This is my model structure:
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(6, activation='softmax')
])
model.compile(
    loss='sparse_categorical_crossentropy',
    optimizer='rmsprop',
    metrics=['accuracy']
)
Looks like this person got up to around 40% validation accuracy: https://www.kaggle.com/code/diegofreitasholanda/chess-pieces-image-classification
But in general, yes, that is a small dataset, and it will be hard to train a good, generalizable network on it, especially when the images look very different from one another (I see some real photographs, others are clip art, etc.).
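Beyond collecting more images, one common remedy for a dataset this small is transfer learning. Here is a minimal sketch (my addition, not from the original answer) that swaps the hand-built convolutional stack for an ImageNet-pretrained MobileNetV2 backbone:

import tensorflow as tf

# Pretrained feature extractor; include_top=False drops the ImageNet classifier head
base = tf.keras.applications.MobileNetV2(input_shape=(150, 150, 3),
                                         include_top=False,
                                         weights='imagenet')
base.trainable = False  # freeze the pretrained weights; only the new head is trained

model = tf.keras.models.Sequential([
    tf.keras.layers.Rescaling(1. / 127.5, offset=-1, input_shape=(150, 150, 3)),  # MobileNetV2 expects inputs in [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(6, activation='softmax')  # 6 chess-piece classes
])
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])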
I'm new to Tensorflow. I followed some tutorials with a provided dataset and wanted to try something on my own. I decided I'd try to classify Magic the Gathering sets. Each card has a symbol in different colors on it: Black, Gold and so on.
The colors don't matter, just the different symbols. So I created a dataset of 3 different sets (so 3 different symbols) and got around 15,000 images. Some are rotated a little bit, some have an X and Y offset, just to get some variation between the images.
Then I adapted the tutorial on the tensorflow website for image classification. Instead of two classes I wanted to try three:
batch_size = 250
epochs = 3
IMG_HEIGHT = 55
IMG_WIDTH = 55

train_image_generator = ImageDataGenerator(rescale=1./255)
validation_image_generator = ImageDataGenerator(rescale=1./255)

train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
                                                           directory=train_dir,
                                                           shuffle=True,
                                                           target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                           class_mode='binary')
val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
                                                              directory=validation_dir,
                                                              target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                              class_mode='binary')
model = Sequential([
    Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)),
    MaxPooling2D(),
    Conv2D(32, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Conv2D(64, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Flatten(),
    Dense(512, activation='relu'),
    Dense(1, activation='sigmoid')
])

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

history = model.fit_generator(
    train_data_gen,
    steps_per_epoch=total_train // batch_size,
    epochs=epochs,
    validation_data=val_data_gen,
    validation_steps=total_val // batch_size,
    callbacks=[cp_callback]
)
But my loss is negative and I don't get a good accuracy after training. What did I mess up? Is the model used in the tutorial not good for my usecase? Or is there an error in the code because I used three instead of two classes?
The model from the tutorial was used for binary classification (only two classes, cat or dog). You, on the other hand, want to classify 3 classes, not 2, so you have to adapt the architecture a little bit. Your last layer should be:
Dense(3, activation='softmax')
Three neurons because you have three classes, and softmax activation because you want your outputs to be valid probabilities. To compile the model, use categorical_crossentropy instead of binary_crossentropy, and make sure your labels are one-hot encoded. Also, pass class_mode='categorical' to the .flow_from_directory() call of your ImageDataGenerator.
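Applied to the code in the question, the three changes might look like this (a sketch):

# one-hot labels of shape (batch, 3) instead of integer labels
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
                                                           directory=train_dir,
                                                           shuffle=True,
                                                           target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                           class_mode='categorical')
val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
                                                              directory=validation_dir,
                                                              target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                              class_mode='categorical')

model = Sequential([
    Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)),
    MaxPooling2D(),
    Conv2D(32, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Conv2D(64, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Flatten(),
    Dense(512, activation='relu'),
    Dense(3, activation='softmax')   # three classes, valid probabilities
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',  # matches the one-hot labels
              metrics=['accuracy'])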
So I've been building a convolutional neural network. I'm trying to predict whether a boardgame state (10x10 matrix) will lead to a win (binary 0 or 1) or not.
I have six million examples, which you would think would be enough, but clearly not, as my network is predicting all of one class...
Is there something obvious I'm missing? I tried giving it even 10 examples and it still predicts them all as the same class.
The input matrices are 10x10 of integers.
Input reshaping:
x_train = x_train.reshape(len(x_train), 10, 10, 1)
Actual model building:
model = Sequential()
model.add(Conv2D(3, kernel_size=(1, 1), strides=(1, 1), activation='relu', input_shape=(10, 10, 1)))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(1, 1)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(500, activation='tanh'))
model.add(Dropout(0.5))
model.add(keras.layers.Dense(75, activation='relu'))
model.add(BatchNormalization())
model.add(keras.layers.Dense(10, activation='sigmoid'))
model.add(keras.layers.Dense(1, kernel_initializer='normal', activation='sigmoid'))

optimizerr = keras.optimizers.SGD(lr=0.001, momentum=0.9, decay=0.01, nesterov=True)
model.compile(optimizer=optimizerr, loss='binary_crossentropy', metrics=[metrics.binary_accuracy])
model.fit(x_train, y_train, epochs=100, batch_size=128, verbose=1)
I've tried modifying the learning rate, momentum, decay, the kernel_sizes, layer types, sizes... I checked for dying relu and that didn't seem to be the problem. Removing the dropout/batch normalization layers (or various random layers) didn't do anything either.
The data have roughly 53/47% split across the labels, so it's not that either.
I'm more confused because even when I ask it to predict the train set, it STILL insists on only labeling things one class, even if there are only ~20 samples or fewer.
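To see this, one can inspect the raw sigmoid outputs rather than the thresholded classes (a quick sketch, not part of the original question):

# If min, max and mean are all close together (e.g. all near 0.53),
# the network has collapsed to an almost constant prediction.
probs = model.predict(x_train[:1000])
print(probs.min(), probs.max(), probs.mean())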
How do I train 1 model multiple times and combine them at the output layer?
For example:
model_one = Sequential()  # model 1
model_one.add(Convolution2D(32, 3, 3, activation='relu', input_shape=(1, 28, 28)))
model_one.add(Flatten())
model_one.add(Dense(128, activation='relu'))

model_two = Sequential()  # model 2
model_two.add(Dense(128, activation='relu', input_shape=(784,)))
model_two.add(Dense(128, activation='relu'))

model_???.add(Dense(10, activation='softmax'))  # combine them here

model.compile(loss='categorical_crossentropy',  # continue together
              optimizer='adam',
              metrics=['accuracy'])
model.fit(X_train, Y_train,  # continue together somehow, even though this would never work because X_train and Y_train have the wrong formats
          batch_size=32, nb_epoch=10, verbose=1)
I've heard I can do this through a graph model but I can't find any documentation on it.
EDIT: in reply to the suggestion below:
A1 = Conv2D(20, kernel_size=(5, 5), activation='relu', input_shape=(28, 28, 1))
B1 = MaxPooling2D(pool_size=(2, 2))(A1)   # ---> this line
throws this error:
AttributeError: 'Conv2D' object has no attribute 'get_shape'
Graph notation would do it for you. Essentially you give every layer a unique handle, then link back to the previous layer using that handle in brackets at the end:
layer_handle = Layer(params)(prev_layer_handle)
Note that the first layer must be an Input(shape=(x,y)) with no prior connection.
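This is also why the edit above fails: A1 there is a Conv2D layer object, not a tensor, so MaxPooling2D(...)(A1) can't read its shape. A sketch of the corrected pattern:

inp = Input(shape=(28, 28, 1))
A1 = Conv2D(20, kernel_size=(5, 5), activation='relu')(inp)  # A1 is now the layer's output tensor
B1 = MaxPooling2D(pool_size=(2, 2))(A1)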
Then when you make your model you need to tell it that it expects multiple inputs with a list:
model = Model(inputs=[in_layer1, in_layer2, ..], outputs=[out_layer1, out_layer2, ..])
Finally when you train it you also need to provide a list of input and output data that corresponds with your definition:
model.fit([x_train1, x_train2, ..], [y_train1, y_train2, ..])
Meanwhile everything else is the same so you just need to combine together the above to give you the network layout that you want:
from keras.models import Model
from keras.layers import Input, Convolution2D, Flatten, Dense, Concatenate

# Note: Keras 2.0.2, channels-last dimension ordering

# Model 1
in1 = Input(shape=(28, 28, 1))
model_one_conv_1 = Convolution2D(32, (3, 3), activation='relu')(in1)
model_one_flat_1 = Flatten()(model_one_conv_1)
model_one_dense_1 = Dense(128, activation='relu')(model_one_flat_1)

# Model 2
in2 = Input(shape=(784, ))
model_two_dense_1 = Dense(128, activation='relu')(in2)
model_two_dense_2 = Dense(128, activation='relu')(model_two_dense_1)

# Model Final
model_final_concat = Concatenate(axis=-1)([model_one_dense_1, model_two_dense_2])
model_final_dense_1 = Dense(10, activation='softmax')(model_final_concat)

model = Model(inputs=[in1, in2], outputs=model_final_dense_1)
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.fit([X_train_one, X_train_two], Y_train,
          batch_size=32, epochs=10, verbose=1)
Documentation can be found in the Functional Model API. I'd recommend reading around other questions or checking out Keras' repo as well since the documentation currently doesn't have many examples.