TensorFlow - ValueError: Shapes (None, 1) and (None, 10) are incompatible - python

I am trying to implement an image classifier using "The Street View House Numbers (SVHN) Dataset" from this link. I am using format 2 which contains 32x32 RGB centered digit images from 0 to 9. When I try to compile and fit the model I get the following error:
Epoch 1/10
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-37-31870b6986af> in <module>()
3
4 model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
----> 5 model.fit(trainX, trainY, validation_data=(validX, validY), batch_size=128, epochs=10)
9 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
975 except Exception as e: # pylint:disable=broad-except
976 if hasattr(e, "ag_error_metadata"):
--> 977 raise e.ag_error_metadata.to_exception(e)
978 else:
979 raise
ValueError: in user code:
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:805 train_function *
return step_function(self, iterator)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:795 step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:1259 run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:2730 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:3417 _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:788 run_step **
outputs = model.train_step(data)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:756 train_step
y, y_pred, sample_weight, regularization_losses=self.losses)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/compile_utils.py:203 __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/losses.py:152 __call__
losses = call_fn(y_true, y_pred)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/losses.py:256 call **
return ag_fn(y_true, y_pred, **self._fn_kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:201 wrapper
return target(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/losses.py:1537 categorical_crossentropy
return K.categorical_crossentropy(y_true, y_pred, from_logits=from_logits)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:201 wrapper
return target(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/backend.py:4833 categorical_crossentropy
target.shape.assert_is_compatible_with(output.shape)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/tensor_shape.py:1134 assert_is_compatible_with
raise ValueError("Shapes %s and %s are incompatible" % (self, other))
ValueError: Shapes (None, 1) and (None, 10) are incompatible
The code is:
model = Sequential([
    Conv2D(filters=64, kernel_size=3, strides=2, activation='relu', input_shape=(32,32,3)),
    MaxPooling2D(pool_size=(2, 2), strides=1, padding='same'),
    Conv2D(filters=32, kernel_size=3, strides=1, activation='relu'),
    MaxPooling2D(pool_size=(2, 2), strides=1, padding='same'),
    Flatten(),
    Dense(10, activation='softmax')
])
model.summary()
Model: "sequential_10"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_23 (Conv2D) (None, 15, 15, 64) 1792
_________________________________________________________________
max_pooling2d_23 (MaxPooling (None, 15, 15, 64) 0
_________________________________________________________________
conv2d_24 (Conv2D) (None, 13, 13, 32) 18464
_________________________________________________________________
max_pooling2d_24 (MaxPooling (None, 13, 13, 32) 0
_________________________________________________________________
flatten_10 (Flatten) (None, 5408) 0
_________________________________________________________________
dense_13 (Dense) (None, 10) 54090
=================================================================
Total params: 74,346
Trainable params: 74,346
Non-trainable params: 0
_________________________________________________________________
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(trainX, trainY, validation_data=(validX, validY), batch_size=128, epochs=10)
I was unable to solve the error. Does anyone have any ideas on how to fix it?

Since you haven't shown the code that builds trainY, it appears that trainY has only one column while your model output has 10 neurons, which is why Shapes (None, 1) and (None, 10) are incompatible. You can one-hot encode trainY like this:
from sklearn.preprocessing import LabelBinarizer
label_as_binary = LabelBinarizer()
train__y_labels = label_as_binary.fit_transform(trainY)
The compile and fit calls then look like this (note train__y_labels):
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(train_X_input, train__y_labels, batch_size=128, epochs=1)
Note: if your validation data raises the same error, the same encoding is needed on all the y arrays.
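As a quick sanity check, here is a minimal sketch (with made-up digit labels standing in for trainY) of what the LabelBinarizer transform produces:

```python
import numpy as np
from sklearn.preprocessing import LabelBinarizer

# Made-up stand-in for trainY: one integer digit label per sample,
# covering all ten classes so the encoder sees every digit
trainY = np.arange(10).reshape(-1, 1)

label_as_binary = LabelBinarizer()
train__y_labels = label_as_binary.fit_transform(trainY)

print(trainY.shape)           # (10, 1) -> what the error calls (None, 1)
print(train__y_labels.shape)  # (10, 10) -> matches the 10-unit softmax
```

One caveat: LabelBinarizer emits one column per class it actually sees, so make sure every digit occurs in trainY (or use Keras' to_categorical with an explicit num_classes).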

Change the compile statement so that
loss='sparse_categorical_crossentropy'
The "sparse" variant indicates that the y values are integer class indices rather than one-hot vectors.
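The two losses compute the same quantity and differ only in the label format they accept; a small numpy sketch of the math (not the Keras implementation) makes the difference concrete:

```python
import numpy as np

# Softmax outputs for 2 samples over 3 classes (made-up numbers)
y_pred = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.8, 0.1]])

def categorical_crossentropy(y_true_onehot, y_pred):
    # expects one-hot rows such as [1, 0, 0]
    return -np.sum(y_true_onehot * np.log(y_pred), axis=-1)

def sparse_categorical_crossentropy(y_true_int, y_pred):
    # expects plain integer class indices such as 0
    return -np.log(y_pred[np.arange(len(y_true_int)), y_true_int])

y_int = np.array([0, 1])        # "sparse" labels, shape (2,)
y_onehot = np.eye(3)[y_int]     # one-hot labels, shape (2, 3)

print(np.allclose(categorical_crossentropy(y_onehot, y_pred),
                  sparse_categorical_crossentropy(y_int, y_pred)))  # True
```

So for integer labels of shape (None, 1), switching the loss is equivalent to one-hot encoding the labels and keeping categorical_crossentropy.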

Related

How to Solve 1 Dimensional CNN Input Shape Error?

I am trying to train a 1D CNN. My input is a numpy array which has 476 rows and 4 columns. I couldn't figure out how to set the input shape. I also tried to reshape the input to (476, 4, 1) but still got an error. train_labels has shape (476,). o is the training data and o[0].shape is (4,). Here are the code, error, and data below.
ValueError: Input 0 of layer "sequential_17" is incompatible with the layer: expected shape=(None, 476, 4), found shape=(1, 4, 1)
Traceback (most recent call last)
<ipython-input-363-440de758fd68> in <module>()
10
11
---> 12 model.fit(o, train_labels, epochs=5, batch_size=1)
13 print(model.evaluate(o, train_labels))
1 frames
/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
65 except Exception as e: # pylint: disable=broad-except
66 filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67 raise e.with_traceback(filtered_tb) from None
68 finally:
69 del filtered_tb
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in autograph_handler(*args, **kwargs)
1145 except Exception as e: # pylint:disable=broad-except
1146 if hasattr(e, "ag_error_metadata"):
-> 1147 raise e.ag_error_metadata.to_exception(e)
1148 else:
1149 raise
ValueError: in user code:
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1021, in train_function *
return step_function(self, iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1010, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1000, in run_step **
outputs = model.train_step(data)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 859, in train_step
y_pred = self(x, training=True)
File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py", line 264, in assert_input_compatibility
raise ValueError(f'Input {input_index} of layer "{layer_name}" is '
model=Sequential()
model.add(Conv1D(filters=32, kernel_size=3, activation='relu', input_shape=(476,4)))
model.add(Conv1D(filters=16, kernel_size=3, activation='relu'))
model.add(Dropout(0.5))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(50, activation='relu'))
model.add(Dense(2, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(o, train_labels, epochs=5, batch_size=1)
print(model.evaluate(o, train_labels))
Here is the model structure below.
Model: "sequential_34"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1d_56 (Conv1D) (None, 474, 32) 416
conv1d_57 (Conv1D) (None, 472, 16) 1552
dropout_19 (Dropout) (None, 472, 16) 0
max_pooling1d_19 (MaxPoolin (None, 236, 16) 0
g1D)
flatten_19 (Flatten) (None, 3776) 0
dense_40 (Dense) (None, 50) 188850
dense_41 (Dense) (None, 2) 102
Total params: 190,920
Trainable params: 190,920
Non-trainable params: 0
My data is a numpy array with 476 rows and 4 columns. Its shape is (476, 4).
[[0.35603836 0.6439616 0.49762452 0.5023755 ]
[0.12395032 0.87604964 0.49762452 0.5023755 ]
[0.5605615 0.43943852 0.49762452 0.5023755 ]
...
[0.6250699 0.37493005 0.48114303 0.51885694]
[0.6650569 0.33494312 0.48114303 0.51885694]
[0.53505033 0.46494964 0.48114303 0.51885694]]
I solved the problem myself and wanted to provide an answer for anyone having the same issue.
Here are the steps I applied:
I used sparse categorical crossentropy.
I changed the input shape to (number of attributes, 1).
I changed the CNN kernel size from 3 to 2.
model=Sequential()
model.add(Conv1D(filters=8, kernel_size=2, activation='relu', input_shape=(4,1)))
model.add(Conv1D(filters=4, kernel_size=2, activation='relu'))
model.add(Dropout(0.3))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(20, activation='relu'))
model.add(Dense(2, activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(o, train_labels, epochs=5, batch_size=1)
print(model.evaluate(o, train_labels))
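The key point in the fix above is that Conv1D treats each sample as (timesteps, channels), and input_shape leaves out the batch dimension. A small sketch with random data shaped like the question's array (assuming o is the (476, 4) array):

```python
import numpy as np

o = np.random.rand(476, 4)    # 476 samples, 4 attributes each
o = o.reshape(476, 4, 1)      # add a channels axis: (samples, steps, channels)

# input_shape excludes the batch dimension, so it is (4, 1), not (476, 4)
print(o.shape)       # (476, 4, 1)
print(o.shape[1:])   # (4, 1) -> what goes into input_shape
```

Passing (476, 4) as input_shape is what produced "expected shape=(None, 476, 4)": Keras read the sample count as a timestep dimension.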

Keras ValueError: Dimensions must be equal, but are 2 and 32 for '{{node Equal}} with input shapes: [?,2], [?,32,32]

I was trying to train a simple Keras network for classification when I faced the following error. I know there is something wrong with my inputs but I couldn't figure out how to fix it. Here is my code
My dataset shapes:
x_train : float32 0.0 1.0 (2444, 64, 64, 1)
y_train : float32 0.0 1.0 (2444, 2)
x_test : float32 0.0 1.0 (9123, 64, 64, 1)
y_test : float32 0.0 1.0 (9123, 2)
the model :
inputs = keras.Input(shape=(64,64,1), dtype='float32')
x = keras.layers.Conv2D(12,(9,9), padding="same",input_shape=(64,64,1), dtype='float32',activation='relu')(inputs)
x = keras.layers.Conv2D(18,(7,7), padding="same", activation='relu')(x)
x = keras.layers.MaxPool2D(pool_size=(2,2))(x)
x = keras.layers.Dropout(0.25)(x)
x = keras.layers.Dense(50, activation='relu')(x)
x = keras.layers.Dropout(0.4)(x)
outputs = keras.layers.Dense(2, activation='softmax')(x)
model = keras.Model(inputs, outputs)
model summary :
Model: "model_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 64, 64, 1)] 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 64, 64, 12) 984
_________________________________________________________________
conv2d_3 (Conv2D) (None, 64, 64, 18) 10602
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 32, 32, 18) 0
_________________________________________________________________
dropout_2 (Dropout) (None, 32, 32, 18) 0
_________________________________________________________________
dense_2 (Dense) (None, 32, 32, 50) 950
_________________________________________________________________
dropout_3 (Dropout) (None, 32, 32, 50) 0
_________________________________________________________________
dense_3 (Dense) (None, 32, 32, 2) 102
=================================================================
Total params: 12,638
Trainable params: 12,638
Non-trainable params: 0
________________________
The compile and fit calls; the error occurs when I fit the model:
model.compile(
loss=keras.losses.SparseCategoricalCrossentropy(),
optimizer=keras.optimizers.Adam(0.01),
metrics=["acc"],
)
model.fit(x_train, y_train, batch_size=32, epochs = 20, validation_split= 0.3,
callbacks=[tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3)])
and finally the error:
ValueError Traceback (most recent call last)
<ipython-input-31-e4cade46a08c> in <module>()
1 model.fit(x_train, y_train, batch_size=32, epochs = 20, validation_split= 0.3,
----> 2 callbacks=[tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3)])
9 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
992 except Exception as e: # pylint:disable=broad-except
993 if hasattr(e, "ag_error_metadata"):
--> 994 raise e.ag_error_metadata.to_exception(e)
995 else:
996 raise
ValueError: in user code:
/usr/local/lib/python3.7/dist-packages/keras/engine/training.py:853 train_function *
return step_function(self, iterator)
/usr/local/lib/python3.7/dist-packages/keras/engine/training.py:842 step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:1286 run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:2849 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:3632 _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/keras/engine/training.py:835 run_step **
outputs = model.train_step(data)
/usr/local/lib/python3.7/dist-packages/keras/engine/training.py:792 train_step
self.compiled_metrics.update_state(y, y_pred, sample_weight)
/usr/local/lib/python3.7/dist-packages/keras/engine/compile_utils.py:457 update_state
metric_obj.update_state(y_t, y_p, sample_weight=mask)
/usr/local/lib/python3.7/dist-packages/keras/utils/metrics_utils.py:73 decorated
update_op = update_state_fn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/keras/metrics.py:177 update_state_fn
return ag_update_state(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/keras/metrics.py:681 update_state **
matches = ag_fn(y_true, y_pred, **self._fn_kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:206 wrapper
return target(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/keras/metrics.py:3537 sparse_categorical_accuracy
return tf.cast(tf.equal(y_true, y_pred), backend.floatx())
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:206 wrapper
return target(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/math_ops.py:1864 equal
return gen_math_ops.equal(x, y, name=name)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/gen_math_ops.py:3219 equal
name=name)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/op_def_library.py:750 _apply_op_helper
attrs=attr_protos, op_def=op_def)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py:601 _create_op_internal
compute_device)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py:3569 _create_op_internal
op_def=op_def)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py:2042 __init__
control_input_ops, op_def)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py:1883 _create_c_op
raise ValueError(str(e))
ValueError: Dimensions must be equal, but are 2 and 32 for '{{node Equal}} = Equal[T=DT_FLOAT, incompatible_shape_error=true](IteratorGetNext:1, Cast_1)' with input shapes: [?,2], [?,32,32].
As you can see in the model summary, the output shape of the model is (None, 32, 32, 2), while based on the target values it should be (None, 2). (Note too that y_train is one-hot encoded with shape (None, 2), so CategoricalCrossentropy fits the labels better than SparseCategoricalCrossentropy.) Try adding a Flatten layer before the Dense layers:
x = keras.layers.Dropout(0.25)(x)
x = keras.layers.Flatten()(x) # Add this
x = keras.layers.Dense(50, activation='relu')(x)
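The reason Flatten fixes it: a Dense layer only contracts the last axis, so applied to a 4-D feature map it leaves the spatial axes in place. A numpy sketch of the shape arithmetic, with plain matrix products standing in for the Dense layers:

```python
import numpy as np

features = np.zeros((1, 32, 32, 18))   # shape after the Dropout layer

# Dense(50) on a 4-D tensor: only the last axis changes
w = np.zeros((18, 50))
print((features @ w).shape)            # (1, 32, 32, 50)

# Flatten first, then Dense(2): the (None, 2) shape the labels expect
flat = features.reshape(1, -1)         # (1, 32*32*18) = (1, 18432)
w2 = np.zeros((18432, 2))
print((flat @ w2).shape)               # (1, 2)
```

That is exactly why the summary shows (None, 32, 32, 2) at the output and the metric fails comparing [?,2] labels against [?,32,32] predictions.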

ValueError: logits and labels must have the same shape ((None, 124, 124, 3) vs (None, 2))

I am developing a image classification model. I have my input shape of image as (128,128,3) but when I am running the model.fit it is giving an error.
My input data is
real_data = [f for f in os.listdir(data_dir+'/test') if f.endswith('.png')]
fake_data = [f for f in os.listdir(data_dir+'/test_f') if f.endswith('.png')]
print(real_data)
X = []
Y = []
for img in real_data:
    X.append(img_to_array(load_img(data_dir+'/test/'+img)) / 255.0)
    Y.append(1)
for img in fake_data:
    X.append(img_to_array(load_img(data_dir+'/test_f/'+img)) / 255.0)
    Y.append(0)
Y_val_org = Y
X = np.array(X)
Y = to_categorical(Y, 2)
print(X)
print(Y)
My model is
model = Sequential()
model.add(Conv2D(16, kernel_size=(3,3), activation='relu',input_shape=(128,128,3)))
model.add(Conv2D(16, kernel_size=(3,3), activation='relu'))
model.add(Dense(units=3, activation='softmax'))
model.compile(loss='binary_crossentropy',
optimizer=optimizers.Adam(lr=1e-5, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False),
metrics=['accuracy'])
#model.build(input_shape=(128,128,3))
model.summary()
And model summary is
Model: "sequential_80"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_892 (Conv2D) (None, 126, 126, 16) 448
_________________________________________________________________
conv2d_893 (Conv2D) (None, 124, 124, 16) 2320
_________________________________________________________________
dense_48 (Dense) (None, 124, 124, 3) 51
=================================================================
Total params: 2,819
Trainable params: 2,819
Non-trainable params: 0
_________________________________________________________________
When I am fitting the model through model.fit()
early_stopping = EarlyStopping(monitor='val_loss', min_delta=0, patience=2, mode='auto')
EPOCHS = 20
BATCH_SIZE = 100
history = model.fit(X_train, Y_train, batch_size = BATCH_SIZE, epochs = EPOCHS, validation_data = (X_val, Y_val))
This is the error I am getting
Epoch 1/20
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-168-b3e2ed37ed88> in <module>()
2 EPOCHS = 20
3 BATCH_SIZE = 100
----> 4 history = model.fit(X_train, Y_train, batch_size = BATCH_SIZE, epochs = EPOCHS, validation_data = (X_val, Y_val))
9 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
975 except Exception as e: # pylint:disable=broad-except
976 if hasattr(e, "ag_error_metadata"):
--> 977 raise e.ag_error_metadata.to_exception(e)
978 else:
979 raise
ValueError: in user code:
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:805 train_function *
return step_function(self, iterator)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:795 step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:1259 run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:2730 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:3417 _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:788 run_step **
outputs = model.train_step(data)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:756 train_step
y, y_pred, sample_weight, regularization_losses=self.losses)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/compile_utils.py:203 __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/losses.py:152 __call__
losses = call_fn(y_true, y_pred)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/losses.py:256 call **
return ag_fn(y_true, y_pred, **self._fn_kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:201 wrapper
return target(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/losses.py:1608 binary_crossentropy
K.binary_crossentropy(y_true, y_pred, from_logits=from_logits), axis=-1)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:201 wrapper
return target(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/backend.py:4979 binary_crossentropy
return nn.sigmoid_cross_entropy_with_logits(labels=target, logits=output)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:201 wrapper
return target(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/nn_impl.py:174 sigmoid_cross_entropy_with_logits
(logits.get_shape(), labels.get_shape()))
ValueError: logits and labels must have the same shape ((None, 124, 124, 3) vs (None, 2))
Change your model to:
model.add(Conv2D(16, kernel_size=(3,3), activation='relu'))
model.add(Flatten()) # added Flatten before Dense
model.add(Dense(units=2, activation='softmax'))
The last layer should have 2 units because you have 2 classes. Also change your loss to:
loss='categorical_crossentropy'
because you applied to_categorical().
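To see why 2 units are needed: to_categorical(Y, 2) turns the 0/1 labels into two-column one-hot rows, and the final layer's output shape must match that. A numpy equivalent (np.eye standing in for to_categorical):

```python
import numpy as np

Y = np.array([1, 1, 0, 0])   # labels built in the loops above: 1=real, 0=fake
Y_cat = np.eye(2)[Y]         # same result as to_categorical(Y, 2)

print(Y_cat.shape)           # (4, 2) -> final layer must be Dense(units=2)
print(Y_cat[0])              # [0. 1.] -> class 1 as a one-hot row
```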

ValueError: Shapes (None, 9) and (None, 10) are incompatible

I have a dataset with 565 features and 10 columns on the prediction side for the labels in the training model. Here is the model summary:
_________________________________________________________________
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1d (Conv1D) (None, 564, 64) 256
_________________________________________________________________
flatten (Flatten) (None, 36096) 0
_________________________________________________________________
dense (Dense) (None, 50) 1804850
_________________________________________________________________
dense_1 (Dense) (None, 50) 2550
_________________________________________________________________
dense_2 (Dense) (None, 50) 2550
_________________________________________________________________
dense_3 (Dense) (None, 50) 2550
_________________________________________________________________
dense_4 (Dense) (None, 10) 510
=================================================================
Total params: 1,813,266
Trainable params: 1,813,266
Non-trainable params: 0
_________________________________________________________________
Here is the code I have used :
import pandas as pd
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv1D, Flatten
from tensorflow.keras import optimizers
from sklearn.metrics import confusion_matrix
import tensorflow as tf
import tensorflow.keras.metrics
data = pd.read_csv('Step1_reducedfile.csv',skiprows = 1,header = None)
data = data.sample(frac=1).reset_index(drop=True)
train_X = data[0:data.shape[0],0:566]
train_y = data[0:data.shape[0],566:data.shape[1]]
train_X = train_X.reshape((train_X.shape[0], train_X.shape[1], 1))
import random
neurons = 50
strategy = tensorflow.distribute.MirroredStrategy()
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv1D(64, kernel_size=3, activation='relu', input_shape=train_X.shape[1:]),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(neurons, activation='relu'),
        tf.keras.layers.Dense(neurons, activation='relu'),
        tf.keras.layers.Dense(neurons, activation='relu'),
        tf.keras.layers.Dense(neurons, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
model.summary()
sgd = optimizers.SGD(lr=0.05, decay=1e-6, momentum=0.24, nesterov=True)
model.compile(loss='categorical_crossentropy',optimizer=sgd,metrics=['accuracy',tensorflow.keras.metrics.Precision()])
model.summary()
results = model.fit(train_X,train_y,validation_split = 0.2,epochs=10,batch_size = 100)
print(results)
I am getting the following error :
ValueError: in user code:
/usr/local/lib64/python3.6/site-packages/tensorflow/python/keras/engine/training.py:806 train_function *
return step_function(self, iterator)
/usr/local/lib64/python3.6/site-packages/tensorflow/python/keras/engine/training.py:796 step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib64/python3.6/site-packages/tensorflow/python/distribute/distribute_lib.py:1211 run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib64/python3.6/site-packages/tensorflow/python/distribute/distribute_lib.py:2585 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib64/python3.6/site-packages/tensorflow/python/distribute/mirrored_strategy.py:585 _call_for_each_replica
self._container_strategy(), fn, args, kwargs)
/usr/local/lib64/python3.6/site-packages/tensorflow/python/distribute/mirrored_run.py:96 call_for_each_replica
return _call_for_each_replica(strategy, fn, args, kwargs)
/usr/local/lib64/python3.6/site-packages/tensorflow/python/distribute/mirrored_run.py:237 _call_for_each_replica
coord.join(threads)
/usr/local/lib64/python3.6/site-packages/tensorflow/python/training/coordinator.py:389 join
six.reraise(*self._exc_info_to_raise)
/usr/local/lib/python3.6/site-packages/six.py:703 reraise
raise value
/usr/local/lib64/python3.6/site-packages/tensorflow/python/training/coordinator.py:297 stop_on_exception
yield
/usr/local/lib64/python3.6/site-packages/tensorflow/python/distribute/mirrored_run.py:323 run
self.main_result = self.main_fn(*self.main_args, **self.main_kwargs)
/usr/local/lib64/python3.6/site-packages/tensorflow/python/keras/engine/training.py:789 run_step **
outputs = model.train_step(data)
/usr/local/lib64/python3.6/site-packages/tensorflow/python/keras/engine/training.py:749 train_step
y, y_pred, sample_weight, regularization_losses=self.losses)
/usr/local/lib64/python3.6/site-packages/tensorflow/python/keras/engine/compile_utils.py:204 __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
/usr/local/lib64/python3.6/site-packages/tensorflow/python/keras/losses.py:149 __call__
losses = ag_call(y_true, y_pred)
/usr/local/lib64/python3.6/site-packages/tensorflow/python/keras/losses.py:253 call **
return ag_fn(y_true, y_pred, **self._fn_kwargs)
/usr/local/lib64/python3.6/site-packages/tensorflow/python/util/dispatch.py:201 wrapper
return target(*args, **kwargs)
/usr/local/lib64/python3.6/site-packages/tensorflow/python/keras/losses.py:1535 categorical_crossentropy
return K.categorical_crossentropy(y_true, y_pred, from_logits=from_logits)
/usr/local/lib64/python3.6/site-packages/tensorflow/python/util/dispatch.py:201 wrapper
return target(*args, **kwargs)
/usr/local/lib64/python3.6/site-packages/tensorflow/python/keras/backend.py:4687 categorical_crossentropy
target.shape.assert_is_compatible_with(output.shape)
/usr/local/lib64/python3.6/site-packages/tensorflow/python/framework/tensor_shape.py:1134 assert_is_compatible_with
raise ValueError("Shapes %s and %s are incompatible" % (self, other))
ValueError: Shapes (None, 9) and (None, 10) are incompatible
That error shows that the label array you are feeding the model does not match its output: your labels have shape (None, 9) while the model output has shape (None, 10). This is most likely because your dataset has 9 classes, as rightly mentioned by Dr.Snoopy, so the final layer should be Dense(9).
For the benefit of the community, here is the complete working code:
import pandas as pd
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv1D, Flatten
from tensorflow.keras import optimizers
from sklearn.metrics import confusion_matrix
import tensorflow as tf
import tensorflow.keras.metrics
data = pd.read_csv('Step1_reducedfile.csv', skiprows=1, header=None)
data = data.sample(frac=1).reset_index(drop=True)
data = data.values  # slice as a numpy array, not a DataFrame
train_X = data[0:data.shape[0], 0:566]
train_y = data[0:data.shape[0], 566:data.shape[1]]
train_X = train_X.reshape((train_X.shape[0], train_X.shape[1], 1))
neurons = 50
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv1D(64, kernel_size=3, activation='relu', input_shape=train_X.shape[1:]),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(neurons, activation='relu'),
        tf.keras.layers.Dense(neurons, activation='relu'),
        tf.keras.layers.Dense(neurons, activation='relu'),
        tf.keras.layers.Dense(neurons, activation='relu'),
        tf.keras.layers.Dense(9, activation='softmax'),  # 9 classes, not 10
    ])
model.summary()
sgd = optimizers.SGD(lr=0.05, decay=1e-6, momentum=0.24, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy', tf.keras.metrics.Precision()])
results = model.fit(train_X, train_y, validation_split=0.2, epochs=10, batch_size=100)
print(results)
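A defensive way to avoid this mismatch is to read the class count from the label array instead of hard-coding it; a sketch with a made-up one-hot label block of 9 classes standing in for train_y:

```python
import numpy as np

# Made-up stand-in for train_y: 100 samples, one-hot over 9 classes
train_y = np.eye(9)[np.random.randint(0, 9, size=100)]

num_classes = train_y.shape[1]  # take the width of the one-hot labels
print(num_classes)              # 9 -> use Dense(num_classes, activation='softmax')
```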

Why is the sequential layer expecting 3 dimensions in Keras?

Upon trying to train my LSTM network, I am given the error: ValueError: Input 0 of layer sequential_2 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, None].
My code is as follows:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
numFeatures = 26  # amino acids
numClasses = 64   # codons
x_test = tf.ragged.constant(XTest)
y_train = tf.ragged.constant(YTrain)
y_test = tf.ragged.constant(YTest)
x_train = tf.ragged.constant(XTrain)
model = keras.Sequential(
[
keras.Input(shape=(26, 10), ragged=True),
layers.LSTM(128,use_bias=False),
layers.Dense(10, activation="softmax")
]
)
model.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer="adam",
metrics=["accuracy"]
)
model.fit(
x_train, y_train, validation_data=(x_test, y_test), batch_size=5, epochs=1
)
model.summary() shows me this:
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lstm_2 (LSTM) (None, 128) 70656
_________________________________________________________________
dense_2 (Dense) (None, 10) 1290
=================================================================
Total params: 71,946
Trainable params: 71,946
Non-trainable params: 0
_________________________________________________________________
I get the following error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-11-ae3b9eb7daa0> in <module>()
15 )
16 model.fit(
---> 17 x_train, y_train, validation_data=(x_test, y_test), batch_size=5, epochs=1
18 )
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
966 except Exception as e: # pylint:disable=broad-except
967 if hasattr(e, "ag_error_metadata"):
--> 968 raise e.ag_error_metadata.to_exception(e)
969 else:
970 raise
ValueError: in user code:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:571 train_function *
outputs = self.distribute_strategy.run(
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:951 run **
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:531 train_step **
y_pred = self(x, training=True)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py:886 __call__
self.name)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/input_spec.py:180 assert_input_compatibility
str(x.shape.as_list()))
ValueError: Input 0 of layer sequential_2 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, None]
