I'm attempting to follow along on what I think is the 5th or 6th simple introductory Keras tutorial that almost, but never quite, works.
Stripping everything out, it appears to come down to a problem with the format of my input. I read in an array of images and extract two types: images of sign-language ones and images of sign-language zeros. I then set up an array of ones and zeros to correspond to what the images actually are, and check the sizes and types.
import numpy as np
from subprocess import check_output
print(check_output(["ls", "../data/keras/"]).decode("utf8"))
## load dataset of images of sign language numbers
x = np.load('../data/keras/npy_dataset/X.npy')
# Get the zeros and ones, construct a list of known values (Y)
X = np.concatenate((x[204:409], x[822:1027]), axis=0) # in the concatenated X, indices 0 to 204 are the zero sign and 205 to 409 are the one sign
Y = np.concatenate((np.zeros(205), np.ones(205)), axis=0).reshape(X.shape[0],1)
# test shape and type
print("X shape: " , X.shape)
print("X class: " , type(X))
print("Y shape: " , Y.shape)
print("Y type: " , type(Y))
This gives me:
X shape: (410, 64, 64)
X class: <class 'numpy.ndarray'>
Y shape: (410, 1)
Y type: <class 'numpy.ndarray'>
which is all good. I then load the relevant bits from Keras, using TensorFlow as the backend, and try to construct a classifier.
# get the relevant keras bits.
from keras.models import Sequential
from keras.layers import Convolution2D
# construct a classifier
classifier = Sequential() # initialize neural network
classifier.add(Convolution2D(32, (3, 3), input_shape=(410, 64, 64), activation="relu", data_format="channels_last"))
classifier.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])
classifier.fit(X, Y, batch_size=32, epochs=10, verbose=1)
This results in:
ValueError: Error when checking input: expected conv2d_1_input to have 4 dimensions, but got array with shape (410, 64, 64)
This SO question, I think, suggests that my input shape needs a 4th dimension added to it. It also says it's the output shape that needs to be altered, but I haven't been able to find anywhere to specify an output shape, so I'm assuming it means I should alter the input shape to input_shape=(1, 64, 64, 1).
If I change my input shape, however, then I immediately get this:
ValueError: Input 0 is incompatible with layer conv2d_1: expected ndim=4, found ndim=5
This GitHub issue suggests that's because I no longer need to specify the number of samples. So I'm left using one input shape and getting one error, or changing it and getting another.
Reading this and this made me think I might need to reshape my data to include information about the channels in X, but if I add in
X = X.reshape(X.shape[0], 64, 64, 1)
print(X.shape)
Then I get
ValueError: Error when checking target: expected conv2d_1 to have 4 dimensions, but got array with shape (410, 1)
If I change the reshape to anything else, e.g.
X = X.reshape(X.shape[0], 64, 64, 2)
Then I get a message saying it's unable to reshape the data, so I'm obviously doing something wrong with that, if that is, indeed, the problem.
I have read the suggested Conv2D docs, which shed exactly zero light on the matter for me. Is anyone else able to?
At first I used the following data sets (similar to your case):
import numpy as np
import keras
X = np.random.randint(256, size=(410, 64, 64))
Y = np.random.randint(10, size=(410, 1))
x_train = X[:, :, :, np.newaxis]
y_train = keras.utils.to_categorical(Y, num_classes=10)
Then I modified your code as follows to make it work:
from keras.models import Sequential
from keras.layers import Convolution2D, Flatten, Dense
classifier = Sequential() # initialize neural network
classifier.add(Convolution2D(32, (3, 3), input_shape=(64, 64, 1), activation="relu", data_format="channels_last"))
classifier.add(Flatten())
classifier.add(Dense(10, activation='softmax'))
classifier.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])
classifier.fit(x_train, y_train, batch_size=32, epochs=10, verbose=1)
Changed the shape of X from 410 x 64 x 64 to 410 x 64 x 64 x 1 (one channel).
input_shape should be the shape of a single sample, that is, 64 x 64 x 1.
Changed the shape of Y using keras.utils.to_categorical() (one-hot encoding with num_classes=10).
Added Flatten() and Dense() before compiling, because categorical_crossentropy expects a 2D (batch, classes) output.
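For reference, the same two fixes applied to the original sign-language arrays would look like this (a minimal sketch, assuming X and Y as built in the question, with num_classes=2 since there are only zero and one signs):
import numpy as np
import keras
from keras.models import Sequential
from keras.layers import Convolution2D, Flatten, Dense
x_train = X[:, :, :, np.newaxis]                        # (410, 64, 64) -> (410, 64, 64, 1)
y_train = keras.utils.to_categorical(Y, num_classes=2)  # (410, 1) -> (410, 2), one-hot
classifier = Sequential()
classifier.add(Convolution2D(32, (3, 3), input_shape=(64, 64, 1), activation="relu", data_format="channels_last"))
classifier.add(Flatten())
classifier.add(Dense(2, activation='softmax'))  # 2 classes instead of 10
classifier.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
classifier.fit(x_train, y_train, batch_size=32, epochs=10, verbose=1)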
I was wondering if you would be able to help me with an error I am getting in the code I am writing.
I have 2 datasets as inputs and 1 more as the target dataset. All datasets are sets of images with dimensions (17, 20, 1).
I set up the code as:
from tensorflow.keras.layers import Input, Conv2D, Concatenate
from tensorflow.keras.models import Sequential, Model
# define two sets of inputs
inputA = Input(shape=(17, 20, 1))
inputB = Input(shape=(17, 20, 1))
# the first branch operates on the first input
x = Sequential()(inputA)
x = Conv2D(filters=64, kernel_size=(3,3), activation='relu')(x)
x = Model(inputs=inputA, outputs=x)
# the second branch operates on the second input
y = Sequential()(inputB)
y = Conv2D(filters=64, kernel_size=(3,3), activation='relu')(y)
y = Model(inputs=inputB, outputs=y)
# combine the output of the two branches
combined = Concatenate()([x.output, y.output])
# apply a FC layer and then a regression prediction on the
# combined outputs
z = Sequential()(combined)
z = Conv2D(filters=64, kernel_size=(3,3), activation='relu')(z)
# our model will accept the inputs of the two branches and
# then output a single value
model = Model(inputs=[x.input, y.input], outputs=z)
model.compile(loss="mean_absolute_percentage_error", optimizer='adam', metrics=['accuracy'])
history = model.fit([input1_train, input2_train], target_train, validation_data=([input1_test, input2_test], target_test), epochs=100, verbose=0)
Then I get this error:
ValueError: Dimensions must be equal, but are 17 and 13 for '{{node mean_absolute_percentage_error/sub}} = Sub[T=DT_FLOAT](IteratorGetNext:1, model_18/conv2d_18/Relu)' with input shapes: [?,17,20,1], [?,13,16,64].
I also tested this code with images of shape (20, 20, 1), and got this error:
ValueError: Dimensions must be equal, but are 20 and 16 for '{{node mean_absolute_percentage_error/sub}} = Sub[T=DT_FLOAT](IteratorGetNext:1, model_15/conv2d_15/Relu)' with input shapes: [?,20,20,1], [?,16,16,64].
but when I set the kernel size to (1, 1), the code runs with no problem.
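For reference, printing the layer output shapes reproduces the numbers in the error (a quick sketch; with the default padding='valid', each 3x3 Conv2D trims 2 from each spatial dimension):
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D
inp = Input(shape=(17, 20, 1))
out = Conv2D(filters=64, kernel_size=(3,3), activation='relu')(inp)
print(out.shape)  # (None, 15, 18, 64) after the branch conv
out = Conv2D(filters=64, kernel_size=(3,3), activation='relu')(out)
print(out.shape)  # (None, 13, 16, 64): the shape in the error, vs the (17, 20, 1) target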
Does anyone know where the problem comes from, and what I should do?
I'd be very thankful if anyone can help me fix it.
I'm trying to reshape a Tensorflow model's input along the batch dimension. I want to combine some of the batch samples into a time-series so I can feed it into an LSTM layer.
Specifically, I have 1024 samples and I'd like to put them into groups of 64 timesteps with the result being 16 batches of 64 timesteps, each timestep having the original 24 features.
#input tensor is (1024, 24)
inputLayer = Input(shape=(24,))
#I want it to be (16, 64, 24)
reshapedLayer = layers.Reshape([64, 24])(inputLayer)
lstmLayer = layers.LSTM(128, activation='relu')(reshapedLayer)
This compiles but throws a runtime error
tensorflow.python.framework.errors_impl.InvalidArgumentError:
Input to reshape is a tensor with 24576 values, but the requested shape has 1572864
I understand what the error is telling me, but I'm not sure the right way to go about fixing it.
Perhaps this could work for you:
import tensorflow as tf
inputs = tf.keras.layers.Input(shape=(24,))
x = tf.reshape(inputs, (16, 64, 24))
x = tf.keras.layers.LSTM(128, activation='relu')(x)
model = tf.keras.Model(inputs=inputs, outputs=x)
# dummy data
inputs = tf.random.uniform(shape=(1024, 24))
outputs = model(inputs)
This replaces the Reshape layer with tf.reshape.
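The difference is that Keras's Reshape layer works per sample (the batch dimension is excluded), so Reshape([64, 24]) tries to turn each single 24-value row into 64 x 24 = 1536 values, which is where the 1572864 (1024 x 1536) in the error comes from. tf.reshape, by contrast, operates on the whole tensor including the batch axis. One caveat with the snippet above is that the batch size 16 is hard-coded; a -1 would let TensorFlow infer it (an untested variation on the line above):
x = tf.reshape(inputs, (-1, 64, 24))  # infer the batch dimension instead of hard-coding 16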
I am facing some problems in training the following GRU model, which has to be stateful and output the hidden state.
import numpy as np
import tensorflow as tf #2.1.0
from tensorflow import keras
BATCH_SIZE = 1
nfeatures = 3
history = 30 # shapes input array
horizon = 5 # shapes output array
nodes = 32
input_layer = tf.keras.layers.Input(batch_shape=(1,30,3),name="INPUT")
output, state_h = tf.keras.layers.GRU(nodes,
                                      return_sequences=True,
                                      stateful=True,
                                      return_state=True,
                                      batch_input_shape=(1, history, 3),
                                      name='GRU1')(input_layer)
output_layer = tf.keras.layers.GRU(nodes, activation='tanh', name='GRU2')(output, state_h)
output_dense = tf.keras.layers.Dense(5, name='DENSE')(output_layer)
model = tf.keras.Model(input_layer, [output_dense, state_h])
model.compile(optimizer=tf.keras.optimizers.Adam(clipvalue=2.0),
              loss='mse',
              metrics=['mean_absolute_error', 'mean_squared_error'])
As I need the model to output the hidden state, I do not use a Sequential model. (I had no problems training a stateful sequential model.)
The features fed to the network are of shape np.shape(x)=(30,3) and the target np.shape(y)=(5,).
If I call model.predict(x), where x is a numpy array with the shape mentioned above, it throws an error, as expected, because the input shape doesn't match the expected input. Therefore, I reshape the input array to have an input shape of (1,30,3) by calling np.expand_dims(x,axis=0). After that, it works fine, i.e. I get an output.
The issue I am facing comes when I try to train the model. Calling
model.fit(x, y, epochs=1, steps_per_epoch=STEPS_PER_EPOCH)
throws the same error about the shape of the data:
ValueError: Error when checking input: expected input to have 3 dimensions, but got array with shape (30, 3)
Reshaping the data as I did for the prediction didn't help:
model.fit(np.expand_dims(x,axis=0), np.expand_dims(y,axis=0),epochs=1,steps_per_epoch=STEPS_PER_EPOCH)
ValueError: The number of samples 1 is not divisible by steps 30. Please change the number of steps to a value that can consume all the samples.
This was a new error; setting steps_per_epoch=1 threw yet another one:
ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), for inputs ['DENSE', 'GRU1'] but instead got the following list of 1 arrays: [array([[0.5124772 , 0.51047856, 0.509669 , 0.50830126, 0.5070507 ]],
dtype=float32)]...
Is the format of my data wrong or is the architecture of my layers missing something? I tried adding a Flatten layer after the input, but it didn't make much sense (in my head) and it didn't work either.
Thanks in advance.
The problem here is that the number of nodes should be equal to the output shape. Changing the value of nodes from 32 to 5, along with other minor changes, will fix the error.
Complete working code is shown below:
import numpy as np
import tensorflow as tf #2.1.0
from tensorflow import keras
BATCH_SIZE = 1
nfeatures = 3
history = 30 # shapes input array
horizon = 5 # shapes output array
nodes = 5
x = np.ones(shape = (30,3))
x = np.expand_dims(x, axis = 0)
y = np.ones(shape = (5,))
y = np.expand_dims(y, axis = 0)
print(x.shape) #(1, 30, 3)
print(y.shape) #(1, 5)
input_layer = tf.keras.layers.Input(batch_shape=(1,30,3),name="INPUT")
output, state_h = tf.keras.layers.GRU(nodes,
                                      return_sequences=True,
                                      stateful=True,
                                      return_state=True,
                                      batch_input_shape=(1, history, 3),
                                      name='GRU1')(input_layer)
output_layer = tf.keras.layers.GRU(nodes, activation='tanh', name='GRU2')(output, state_h)
output_dense = tf.keras.layers.Dense(5, name='DENSE')(output_layer)
model = tf.keras.Model(input_layer, [output_dense, state_h])
model.compile(optimizer=tf.keras.optimizers.Adam(clipvalue=2.0),
              loss='mse',
              metrics=['mean_absolute_error', 'mean_squared_error'])
STEPS_PER_EPOCH = 1
model.fit(x, y,epochs=1,steps_per_epoch=STEPS_PER_EPOCH)
Output of the above code is:
(1, 30, 3)
(1, 5)
1/1 [==============================] - 0s 3ms/step - loss: 1.8172 - DENSE_loss: 1.1737 - GRU1_loss: 0.6435 - DENSE_mean_absolute_error: 1.0498 - DENSE_mean_squared_error: 1.1737 - GRU1_mean_absolute_error: 0.7157 - GRU1_mean_squared_error: 0.6435
<tensorflow.python.keras.callbacks.History at 0x7f698bf8ac50>
Hope this helps. Happy Learning!
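As a side note, if you don't actually want to train the hidden state against y, an alternative (an untested sketch; the output names follow the model above) is to keep nodes = 32 and attach a loss only to the DENSE output, so the GRU1 state output is excluded from training:
model.compile(optimizer=tf.keras.optimizers.Adam(clipvalue=2.0),
              loss={'DENSE': 'mse'})  # no loss on the GRU1 state output
model.fit(x, {'DENSE': y}, epochs=1, steps_per_epoch=1)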
I compared the results obtained via model.evaluate(...) with the ones computed via numpy. As you can see, they differ a lot. The kernel has just been restarted, and I cannot find where the problem is.
import numpy as np
import keras
from keras.layers import Dense
from keras.models import Sequential
import keras.backend as K
X = np.random.rand(10000)
Y = X + np.random.rand(10000) / 5
X_train, X_valid = X[:8000], X[8000:]
Y_train, Y_valid = Y[:8000], Y[8000:]
model = Sequential([
    Dense(1, input_shape=(1,), activation='linear'),
])
model.compile('adam', 'mae')
model.fit(X_train, Y_train, epochs=1, batch_size=2000, validation_data=(X_valid, Y_valid))
print(model.evaluate(X_valid, Y_valid))
>>> 0.15643194556236267
preds = model.predict(X_valid)
np.abs(Y_valid - preds).mean()
>>> 0.34461398701699736
Versions: keras = '2.3.1', tensorflow = '2.1.0'.
It's because the model.predict output shape is not the same as Y_valid's. If you take the transpose of the predictions, it will give you almost the same loss.
>>> Y_valid.shape
(2000,)
>>> preds.shape
(2000, 1)
>>> np.abs(Y_valid - np.transpose(preds)).mean()
This is a tricky one, but actually simple to fix:
Your targets Y_valid have shape (2000,), i.e. just an array of 2000 numbers. The network outputs however, have shape (2000, 1). The expression Y_valid - preds then tries to subtract a shape (2000, 1) from a shape (2000,)... The two are not compatible, and need to be broadcast. Standard broadcasting rules will proceed as follows:
1. Align like:
      (2000,)
   (2000, 1)
2. Add an extra dimension in front:
   (1, 2000)
   (2000, 1)
3. Broadcast to make compatible:
   (2000, 2000)
   (2000, 2000)
...and so you are actually subtracting two arrays of size (2000, 2000) from each other. You are basically computing the difference between each prediction and all targets instead of just the corresponding one. Obviously, the mean of this will be much larger.
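A tiny plain-numpy demonstration of the blow-up:
import numpy as np
a = np.zeros(2000)       # targets, shape (2000,)
b = np.zeros((2000, 1))  # predictions, shape (2000, 1)
print((a - b).shape)     # (2000, 2000): every target minus every prediction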
tl;dr: model.evaluate is correct. The manual computation is incorrect due to funny broadcasting. You can fix it by reshaping the predictions to (2000,) (or the targets to (2000, 1)):
preds = model.predict(X_valid)[:, 0]
np.abs(Y_valid - preds).mean()
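Equivalently, starting from the raw (2000, 1) predictions:
preds = model.predict(X_valid)
np.abs(Y_valid - preds.ravel()).mean()   # flatten the predictions to (2000,)
np.abs(Y_valid[:, None] - preds).mean()  # or lift the targets to (2000, 1)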
I am trying to implement a number of networks using Keras applications. Here I am attaching a piece of code; it works fine for ResNet50 and VGG16, but for MobileNet it generates the error:
ValueError: Error when checking target: expected dense_1 to have 4 dimensions, but got array with shape (24, 2)
I am working with 224x224 images with 3 channels and a batch size of 24, and I am trying to classify them into 2 classes. So the number 24 mentioned in the error is the batch size, but I am not sure about the number 2; probably it is the number of classes.
Btw, does anyone know why I am receiving this error for keras.applications.mobilenet?
from keras.applications.resnet50 import ResNet50
from keras.applications.vgg16 import VGG16
from keras.applications.mobilenet import MobileNet
from keras.layers import Dense
from keras.models import Model
from keras.optimizers import Adam
# basic_model = ResNet50()
# basic_model = VGG16()
basic_model = MobileNet()
classes = list(iter(train_generator.class_indices))
basic_model.layers.pop()
for layer in basic_model.layers[:25]:
    layer.trainable = False
last = basic_model.layers[-1].output
temp = Dense(len(classes), activation="softmax")(last)
fineTuned_model = Model(basic_model.input, temp)
fineTuned_model.classes = classes
fineTuned_model.compile(optimizer=Adam(lr=0.0001), loss='categorical_crossentropy', metrics=['accuracy'])
fineTuned_model.fit_generator(
    train_generator,
    steps_per_epoch=3764 // batch_size,
    epochs=100,
    validation_data=validation_generator,
    validation_steps=900 // batch_size)
fineTuned_model.save('mobile_model.h5')
From the source code, we can see that you're popping a Reshape() layer. Exactly the one that transforms the convolution's output (4D) into a class tensor (2D).
Source code:
if include_top:
    if K.image_data_format() == 'channels_first':
        shape = (int(1024 * alpha), 1, 1)
    else:
        shape = (1, 1, int(1024 * alpha))

    x = GlobalAveragePooling2D()(x)
    x = Reshape(shape, name='reshape_1')(x)
    x = Dropout(dropout, name='dropout')(x)
    x = Conv2D(classes, (1, 1),
               padding='same', name='conv_preds')(x)
    x = Activation('softmax', name='act_softmax')(x)
    x = Reshape((classes,), name='reshape_2')(x)
But all the keras convolutional models are meant to be used in a different way. If you want your own number of classes, you should create these models with include_top=False. This way, the final part of the model (the classes part) will simply not exist and you just add your own layers:
basic_model = MobileNet(include_top=False)
for layer in basic_model.layers:
    layer.trainable = False
furtherOutputs = YourOwnLayers()(basic_model.output)
You should probably try to copy that final part shown in the Keras code, changing classes to your own number of classes. Or maybe try popping 3 layers from the complete model (the Reshape, the Activation, and the Conv2D) and replacing them with your own.
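A minimal sketch of the include_top=False route, assuming 2 classes and the same generators as in the question (the GlobalAveragePooling2D head is just one common choice, not the only one):
from keras.applications.mobilenet import MobileNet
from keras.layers import GlobalAveragePooling2D, Dense
from keras.models import Model
from keras.optimizers import Adam

basic_model = MobileNet(include_top=False, input_shape=(224, 224, 3))
for layer in basic_model.layers:
    layer.trainable = False

features = GlobalAveragePooling2D()(basic_model.output)  # 4D conv output -> 2D features
predictions = Dense(2, activation='softmax')(features)   # 2 classes, matching the (24, 2) targets

fineTuned_model = Model(basic_model.input, predictions)
fineTuned_model.compile(optimizer=Adam(lr=0.0001),
                        loss='categorical_crossentropy',
                        metrics=['accuracy'])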