How to hypertune input shape using keras tuner? - python

I am trying to hypertune the input shape of an LSTM model based on different values of timesteps. However, I am facing an issue. While initializing the model, the default value of timesteps (which is 2) is chosen, and accordingly build_model.scaled_train is created with shape (4096, 2, 64). Thus the value of input_shape during initialization is (2, 64). When the training starts and the value of timesteps is arbitrarily chosen as 16, build_model.scaled_train has shape (512, 16, 64). This means that input_shape now takes the value (16, 64). However, this is not reflected in the model: the InputLayer retains the shape (2, 64) it got during initialization. Hence the error - Input 0 of layer "sequential" is incompatible with the layer: expected shape=(None, 16, 64), found shape=(None, 2, 64).
def build_model(hp):
    timesteps = hp.Choice('timesteps', [2, 4, 8, 16], ordered=False)
    ....
    DFS, UFS = get_data_in_shape(DF, UF, timesteps)
    build_model.scaled_train, build_model.train_label = train_test_splitting(DFS, UFS)
    model = keras.Sequential()
    model.add(InputLayer(input_shape=(timesteps, nosamples)))
    ...
    ...
    return model
class MyTuner(BayesianOptimization):
    def run_trial(self, trial, *args, **kwargs):
        kwargs['batch_size'] = trial.hyperparameters.Choice('batch_size', [32, 64, 128, 256])
        return super(MyTuner, self).run_trial(trial, *args, **kwargs)
tuner = MyTuner(
    build_model,
    objective='val_loss',
    max_trials=20,
    overwrite=True,
    directory='/content/drive/MyDrive/Colab Notebooks',
    project_name='bo4')
When I start hyperparameter tuning, this happens.
tuner.search(build_model.scaled_train, build_model.train_label, validation_split = 0.2, epochs = 100, callbacks = [early_stopping])
Error -
Search: Running Trial #1
Value |Best Value So Far |Hyperparameter
16 |? |timesteps
4 |? |layers
1024 |? |unitsLSTM
0.15 |? |rate
64 |? |unitsANN
0.001 |? |learning_rate
Epoch 1/100
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-76-10e3851dd45f> in <module>()
----> 1 tuner.search(build_model.scaled_train, build_model.train_label, validation_split = 0.2, epochs = 100, callbacks = [early_stopping]) #, model_checkpoint
6 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in autograph_handler(*args, **kwargs)
1145 except Exception as e: # pylint:disable=broad-except
1146 if hasattr(e, "ag_error_metadata"):
-> 1147 raise e.ag_error_metadata.to_exception(e)
1148 else:
1149 raise
ValueError: in user code:
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1021, in train_function *
return step_function(self, iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1010, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1000, in run_step **
outputs = model.train_step(data)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 859, in train_step
y_pred = self(x, training=True)
File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py", line 264, in assert_input_compatibility
raise ValueError(f'Input {input_index} of layer "{layer_name}" is '
ValueError: Input 0 of layer "sequential" is incompatible with the layer: expected shape=(None, 16, 64), found shape=(None, 2, 64)
I think I am making a logical mistake somewhere but cannot find it. Please help.

I made some changes which are written below and it worked fine. But I don't know if it is the optimal solution.
def build_model(hp):
    ...
    ...
    scaled_train, train_label = train_test_splitting(DFS, UFS)
    ...
    ...
    return model, scaled_train, train_label

class MyTuner(BayesianOptimization):
    def run_trial(self, trial, *args, **kwargs):
        hp = trial.hyperparameters
        model, scaled_train, train_label = self.hypermodel.build(hp)
        kwargs['batch_size'] = trial.hyperparameters.Choice('batch_size', [32, 64, 128, 256])
        return self.hypermodel.fit(hp, model, scaled_train, train_label, *args, **kwargs)
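For comparison, here is a hedged sketch of an alternative that keeps build_model returning only the model and instead rebuilds the training arrays inside run_trial, so their shape always matches the trial's sampled timesteps. It reuses get_data_in_shape, train_test_splitting, DF and UF from above, and assumes a keras_tuner version (1.1 or later) in which run_trial may call self.hypermodel.build / self.hypermodel.fit and return the history; it is not necessarily optimal either.

class MyTuner(BayesianOptimization):
    def run_trial(self, trial, *args, **kwargs):
        hp = trial.hyperparameters
        # build_model(hp) registers 'timesteps' and returns only the model here
        model = self.hypermodel.build(hp)
        # rebuild the arrays with this trial's timesteps so the shapes always agree
        DFS, UFS = get_data_in_shape(DF, UF, hp.get('timesteps'))
        scaled_train, train_label = train_test_splitting(DFS, UFS)
        kwargs['batch_size'] = hp.Choice('batch_size', [32, 64, 128, 256])
        return self.hypermodel.fit(hp, model, scaled_train, train_label,
                                   *args, **kwargs)

With this variant, tuner.search(validation_split=0.2, epochs=100, callbacks=[early_stopping]) is called without the data arguments, since the arrays are produced per trial inside run_trial.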

Related

Keras ValueError: Dimensions must be equal LSTM

I'm creating a Bidirectional LSTM but I faced the following error:
ValueError: Dimensions must be equal, but are 5 and 250 for '{{node Equal}} = Equal[T=DT_INT64, incompatible_shape_error=true](ArgMax, ArgMax_1)' with input shapes: [?,5], [?,250]
I have no idea what is wrong or how to fix it!
I have a text dataset with 59k rows for training the model, which I would divide into 15 classes and then use for class-based text similarity on newly received text.
Based on other posts I played with the loss, but it still doesn't solve the issue.
The sequential model is as follows:
model_lstm = Sequential()
model_lstm.add(InputLayer(250,))
model_lstm.add(Embedding(input_dim=max_words+1, output_dim=200, weights=[embedding_matrix],
                         mask_zero=True, trainable=True, name='corpus_embed'))
enc_lstm = Bidirectional(LSTM(128, activation='sigmoid', return_sequences=True, name='LSTM_Encod'))
model_lstm.add(enc_lstm)
model_lstm.add(Dropout(0.25))
model_lstm.add(Bidirectional(LSTM(128, activation='sigmoid', dropout=0.25, return_sequences=True, name='LSTM_Decod')))
model_lstm.add(Dropout(0.25))
model_lstm.add(Dense(15, activation='softmax'))
model_lstm.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['Accuracy'])

## Feed the model
history = model_lstm.fit(x=corpus_seq_train,
                         y=target_seq_train,
                         batch_size=128,
                         epochs=50,
                         validation_data=(corpus_seq_test, target_seq_test),
                         callbacks=[tensorboard],
                         sample_weight=sample_wt_mat)
This is the model summary:
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
corpus_embed (Embedding) (None, 250, 200) 4000200
bidirectional (Bidirectiona (None, 250, 256) 336896
l)
dropout (Dropout) (None, 250, 256) 0
bidirectional_1 (Bidirectio (None, 250, 256) 394240
nal)
dropout_1 (Dropout) (None, 250, 256) 0
dense (Dense) (None, 250, 15) 3855
=================================================================
Total params: 4,735,191
Trainable params: 4,735,191
Non-trainable params: 0
_________________________________
and dataset shape:
corpus_seq_train.shape, target_seq_train.shape
((59597, 250), (59597, 5, 8205))
Finally, here is the error:
Epoch 1/50
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
C:\Users\AMIRSH~1\AppData\Local\Temp/ipykernel_10004/3838451254.py in <module>
9 ## Feed the model
10
---> 11 history = model_lstm.fit(x=corpus_seq_train,
12 y=target_seq_train,
13 batch_size=128,
C:\ProgramData\Anaconda3\lib\site-packages\keras\utils\traceback_utils.py in error_handler(*args, **kwargs)
65 except Exception as e: # pylint: disable=broad-except
66 filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67 raise e.with_traceback(filtered_tb) from None
68 finally:
69 del filtered_tb
C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py in tf__train_function(iterator)
13 try:
14 do_return = True
---> 15 retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
16 except:
17 do_return = False
ValueError: in user code:
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 1051, in train_function *
return step_function(self, iterator)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 1040, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 1030, in run_step **
outputs = model.train_step(data)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 894, in train_step
return self.compute_metrics(x, y, y_pred, sample_weight)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 987, in compute_metrics
self.compiled_metrics.update_state(y, y_pred, sample_weight)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\compile_utils.py", line 501, in update_state
metric_obj.update_state(y_t, y_p, sample_weight=mask)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\utils\metrics_utils.py", line 70, in decorated
update_op = update_state_fn(*args, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\metrics\base_metric.py", line 140, in update_state_fn
return ag_update_state(*args, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\metrics\base_metric.py", line 646, in update_state **
matches = ag_fn(y_true, y_pred, **self._fn_kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\metrics\metrics.py", line 3295, in categorical_accuracy
return metrics_utils.sparse_categorical_matches(
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\utils\metrics_utils.py", line 893, in sparse_categorical_matches
matches = tf.cast(tf.equal(y_true, y_pred), backend.floatx())
ValueError: Dimensions must be equal, but are 5 and 250 for '{{node Equal}} = Equal[T=DT_INT64, incompatible_shape_error=true](ArgMax, ArgMax_1)' with input shapes: [?,5], [?,250].
The problem is caused by the loss function and the y-label shape.
You should not pad y_label; it should be fed to the model directly without any other processing.
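As a rough illustration of that shape contract (dummy numbers, not the asker's data): with sparse_categorical_crossentropy the labels are plain integer class ids and have one dimension fewer than the predictions.

import numpy as np
import tensorflow as tf

num_classes = 15
# one integer class id per sample: shape (batch,)
y_true = np.array([3, 7, 0, 14])
# one probability vector per sample: shape (batch, num_classes)
y_pred = tf.random.uniform((4, num_classes))
y_pred = y_pred / tf.reduce_sum(y_pred, axis=-1, keepdims=True)

loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)
print(loss.shape)  # (4,) -- one loss per sample; no padding or one-hot encoding needed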

ValueError: Input 0 of layer "model_1" is incompatible with the layer: expected shape=(None, 224, 224, 3), found shape=(None, 290, 290, 3)

I am trying to implement the game of rock, paper, scissors in a Jupyter notebook using TensorFlow with a neural network; the code I am trying to implement is this one: https://learnopencv.com/playing-rock-paper-scissors-with-ai/
When I use my webcam it works correctly, but when I use a DSLR camera it doesn't work.
The specific line where the code breaks is here:
history = model.fit(x=augment.flow(trainX, trainY, batch_size=batchsize), validation_data=(testX, testY),
                    steps_per_epoch=len(trainX) // batchsize, epochs=epochs)
The complete error is:
Epoch 1/15
7/7 [==============================] - ETA: 0s - loss: 1.0831 - accuracy: 0.6154
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_17300/1526770187.py in <module>
4
5 # Start training
----> 6 history = model.fit(x=augment.flow(trainX, trainY, batch_size=batchsize), validation_data=(testX, testY),
7 steps_per_epoch= len(trainX) // batchsize, epochs=epochs)
8
C:\ProgramData\Anaconda3\lib\site-packages\keras\utils\traceback_utils.py in error_handler(*args, **kwargs)
65 except Exception as e: # pylint: disable=broad-except
66 filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67 raise e.with_traceback(filtered_tb) from None
68 finally:
69 del filtered_tb
C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py in tf__test_function(iterator)
13 try:
14 do_return = True
---> 15 retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
16 except:
17 do_return = False
ValueError: in user code:
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 1557, in test_function *
return step_function(self, iterator)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 1546, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 1535, in run_step **
outputs = model.test_step(data)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 1499, in test_step
y_pred = self(x, training=False)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\utils\traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\input_spec.py", line 264, in assert_input_compatibility
raise ValueError(f'Input {input_index} of layer "{layer_name}" is '
ValueError: Input 0 of layer "model_1" is incompatible with the layer: expected shape=(None, 224, 224, 3), found shape=(None, 290, 290, 3)
The complete code of the program is here: https://learnopencv.com/playing-rock-paper-scissors-with-ai/
From the error, it seems like the shape of the input images is (290, 290, 3). Resizing the images to (224, 224, 3) will solve the issue. Please add the following line before normalizing.
#Resizing images
images = np.resize(images,(400, 224, 224, 3))
#Normalizing images
images = np.array(images, dtype="float") / 255.0
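As a side note that is not part of the original answer: np.resize repeats or truncates the raw buffer rather than rescaling pixels, so if the goal is to rescale each image's content to 224x224, a per-image resize along the lines of the sketch below may be preferable (the function name and the dummy array are my own illustration).

import numpy as np
import tensorflow as tf

def resize_batch(images, size=(224, 224)):
    # rescale every image in a (N, H, W, 3) batch to the given size
    return tf.image.resize(images, size).numpy()

# example with dummy frames shaped like the DSLR images in the question
dummy = np.random.randint(0, 256, size=(4, 290, 290, 3), dtype=np.uint8)
resized = resize_batch(dummy)
print(resized.shape)  # (4, 224, 224, 3)
# resized = np.array(resized, dtype="float") / 255.0  # then normalize as before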

Input 0 of layer "dense_23" is incompatible with the layer: expected axis -1 of input shape to have value 1, but received input with shape (32, 6995)

I am currently doing text classification using a CNN. My dataset is split into Arabic text and a dialect label. Can anyone help me solve this problem?
x = df['text']
y = df['dialect']
train_size = int(len(df) * .8)
t = df['text']
dial = df['dialect']
k = df['text'][:train_size]
c = df['dialect'][:train_size]
k2 = df['text'][train_size:]
c2 = df['dialect'][train_size:]

tokenizer = Tokenizer(num_words=None, lower=False)
tokenizer.fit_on_texts(t)
x_train = tokenizer.texts_to_matrix(k, mode='tfidf')
x_test = tokenizer.texts_to_matrix(k2, mode='tfidf')

encoder = LabelEncoder()
encoder.fit(dial)
dial2 = encoder.fit_transform(dial)
num_classes = int((len(set(dial2))))
print((len(set(dial2))))
y_train = encoder.fit_transform(c)
y_test = encoder.fit_transform(c2)

model = Sequential()
model.add(Dense(1024, input_shape=(None, 64, 1)))
model.add(Conv2D(filters=32, kernel_size=8, activation='relu'))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Dropout(.2))
model.add(Activation('sigmoid'))
model.summary()
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics="accuracy")
model.fit(x_train, y_train, epochs=10)
But I got an error like this:
ValueError Traceback (most recent call last)
in ()
----> 1 model.fit(x_train, y_train, epochs=10)
1 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in autograph_handler(*args, **kwargs)
1145 except Exception as e: # pylint:disable=broad-except
1146 if hasattr(e, "ag_error_metadata"):
-> 1147 raise e.ag_error_metadata.to_exception(e)
1148 else:
1149 raise
ValueError: in user code:
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1021, in train_function *
return step_function(self, iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1010, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1000, in run_step **
outputs = model.train_step(data)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 859, in train_step
y_pred = self(x, training=True)
File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py", line 249, in assert_input_compatibility
f'Input {input_index} of layer "{layer_name}" is '
ValueError: Exception encountered when calling layer "sequential_16" (type Sequential).
Input 0 of layer "dense_23" is incompatible with the layer: expected axis -1 of input shape to have value 1, but received input with shape (32, 6995)
Call arguments received:
• inputs=tf.Tensor(shape=(32, 6995), dtype=float32)
• training=True
• mask=None

WARNING:tensorflow:Model was constructed with shape (4, 112, 112, 3) for input ..., but it was called on an input with incompatible shape (None, 112)

I trained a model and saved it as an h5 file. When I reload the model and try to make a prediction on an image, it throws an error saying that the input is incompatible.
How can I eliminate the warning?
Here is the code:
CATEGORIES = ["Tuberculosis", "Normal"]
def prepare(filepath):
IMG_SIZE = 112
img_array = cv2.imread(filepath, cv2.IMREAD_GRAYSCALE)
new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
return new_array.reshape(-1, IMG_SIZE, IMG_SIZE, 1)
model = tf.keras.models.load_model("tuberculosis.h5")
prediction = model.predict([prepare("test.jpg")])
print(CATEGORIES[int(prediction[0][0])])
Here is the error it throws:
WARNING:tensorflow:Model was constructed with shape (4, 112, 112, 3) for input KerasTensor(type_spec=TensorSpec(shape=(4, 112, 112, 3), dtype=tf.float32, name='conv2d_input'), name='conv2d_input', description="created by layer 'conv2d_input'"), but it was called on an input with incompatible shape (None, 112).
ValueError                                Traceback (most recent call last)
<ipython-input-79-6d3f8245e17a> in <module>()
----> 1 prediction = model.predict([prepare("test.jpg")])
2 print(CATEGORIES[int(prediction[0][0])])
1 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in autograph_handler(*args, **kwargs)
1145 except Exception as e: # pylint:disable=broad-except
1146 if hasattr(e, "ag_error_metadata"):
-> 1147 raise e.ag_error_metadata.to_exception(e)
1148 else:
1149 raise
ValueError: in user code:
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1801, in predict_function *
return step_function(self, iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1790, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1783, in run_step **
outputs = model.predict_step(data)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1751, in predict_step
return self(x, training=False)
File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py", line 228, in assert_input_compatibility
raise ValueError(f'Input {input_index} of layer "{layer_name}" '
ValueError: Exception encountered when calling layer "sequential" (type Sequential).
Input 0 of layer "conv2d" is incompatible with the layer: expected min_ndim=4, found ndim=2. Full shape received: (None, 112)
Call arguments received:
• inputs=('tf.Tensor(shape=(None, 112), dtype=uint8)',)
• training=False
• mask=None
This error comes from the shape of the input. You can try this:
import tensorflow as tf
import cv2

IMAGE_CHANNEL = 1  # or 3

def prepare(filepath):
    IMG_SIZE = 112
    img_array = cv2.imread(filepath, cv2.IMREAD_GRAYSCALE)
    new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
    return new_array.reshape(-1, IMG_SIZE, IMG_SIZE, IMAGE_CHANNEL)

x = tf.keras.Input(shape=(112, 112, IMAGE_CHANNEL))
y = tf.keras.layers.Dense(16, activation='softmax')(x)
model = tf.keras.Model(x, y)
model.summary()

prediction = model.predict([prepare("test.jpg")])
print(prediction)
Output:
Model: "model_11"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_17 (InputLayer) [(None, 112, 112, 1)] 0
dense_13 (Dense) (None, 112, 112, 16) 32
=================================================================
Total params: 32
Trainable params: 32
Non-trainable params: 0
_________________________________________________________________
[[[[2.40530210e-15 1.25257872e-18 2.81339079e-01 ... 1.10927344e-20
1.28210900e-22 3.45369773e-24]
[1.21484684e-15 5.40451430e-19 2.79041141e-01 ... 4.33733043e-21
4.56826763e-23 1.14132918e-24]
[3.09763760e-16 1.00567346e-19 2.74375856e-01 ... 6.62814917e-22
5.79706900e-24 1.24585123e-25]
...
The error is saying you are passing a wrongly shaped image; the model expects an input with 3 channels. This might work if your input image has 3 channels:
def prepare(filepath):
    IMG_SIZE = 112
    img_array = cv2.imread(filepath)
    img_array = cv2.cvtColor(img_array, cv2.COLOR_BGR2RGB)
    new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
    return new_array.reshape(-1, IMG_SIZE, IMG_SIZE, 3)

logits and labels must have the same first dimension, got logits shape [327680,7] and labels shape [983040]

I am trying to perform semantic segmentation by using the U-Net architecture.
When I fit the model:
history = model.fit(imgs_train,
                    masks_train,
                    batch_size=5,
                    epochs=5)
I keep getting the error below.
imgs_train.shape: (1500, 256, 256, 3)
masks_train.shape: (1500, 256, 256, 3)
# Building Unet using encoder and decoder blocks
from keras.models import Model
from keras.layers import (Input, Conv2D, MaxPooling2D, concatenate, Conv2DTranspose,
                          BatchNormalization, Dropout, Lambda)
from keras.layers import Activation, MaxPool2D, Concatenate

def conv_block(input, num_filters=64):
    # first conv layer
    x = Conv2D(num_filters, kernel_size=(3, 3), padding='same')(input)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    # second conv layer
    x = Conv2D(num_filters, kernel_size=(3, 3), padding='same')(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    return x

def encoder_block(input, num_filters=64):
    # conv block
    x = conv_block(input, num_filters)
    # maxpooling
    p = MaxPool2D(strides=(2, 2))(x)
    p = Dropout(0.4)(p)
    return x, p

def decoder_block(input, skip_features, num_filters=64):
    x = Conv2DTranspose(num_filters, (2, 2), strides=2, padding='same')(input)
    x = Concatenate()([x, skip_features])
    x = conv_block(x, num_filters)
    return x

num_classes = 7

def unet_architect(input_shape=(256, 256, 3)):
    """ Input Layer """
    inputs = Input(input_shape)

    """ Encoder """
    s1, p1 = encoder_block(inputs, 64)
    s2, p2 = encoder_block(p1, 128)
    s3, p3 = encoder_block(p2, 256)
    s4, p4 = encoder_block(p3, 512)

    """ Bridge """
    b1 = conv_block(p4, 1024)

    """ Decoder """
    d1 = decoder_block(b1, s4, 512)
    d2 = decoder_block(d1, s3, 256)
    d3 = decoder_block(d2, s2, 128)
    d4 = decoder_block(d3, s1, 64)

    """ Output Layer """
    outputs = Conv2D(num_classes, (1, 1), padding='same', activation='softmax')(d4)

    model = Model(inputs, outputs, name='U-Net')
    return model

model = unet_architect()
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
I tried to change sparse_categorical_crossentropy to categorical_crossentropy, but another error shows up.
And when I change the batch size while fitting the model, logits.shape and labels.shape change accordingly.
ERROR
Epoch 1/5
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-35-4c0704d8f65d> in <module>()
2 masks_train,
3 batch_size= 10,
----> 4 epochs = 5)
1 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in autograph_handler(*args, **kwargs)
1145 except Exception as e: # pylint:disable=broad-except
1146 if hasattr(e, "ag_error_metadata"):
-> 1147 raise e.ag_error_metadata.to_exception(e)
1148 else:
1149 raise
ValueError: in user code:
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1021, in train_function *
return step_function(self, iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1010, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1000, in run_step **
outputs = model.train_step(data)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 860, in train_step
loss = self.compute_loss(x, y, y_pred, sample_weight)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 919, in compute_loss
y, y_pred, sample_weight, regularization_losses=self.losses)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/compile_utils.py", line 201, in __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 141, in __call__
losses = call_fn(y_true, y_pred)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 245, in call **
return ag_fn(y_true, y_pred, **self._fn_kwargs)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 1863, in sparse_categorical_crossentropy
y_true, y_pred, from_logits=from_logits, axis=axis)
File "/usr/local/lib/python3.7/dist-packages/keras/backend.py", line 5203, in sparse_categorical_crossentropy
labels=target, logits=output)
ValueError: `labels.shape` must equal `logits.shape` except for the last dimension. Received: labels.shape=(1966080,) and logits.shape=(655360, 7)
Notebook link: https://github.com/Tamimi123600/Deep-Learning/blob/main/Image_Segmentation1.ipynb
Thanks in advance
Why are your mask images (GT targets) of shape (1500, 256, 256, 3) and not (1500, 256, 256)? You have num_classes=7, so your GT images should have a single channel with values {0...6} representing the class of each pixel.
Please check how you load and process your target images -- the issue is there.
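If the masks encode classes as colours, a minimal sketch of the conversion might look like the following; the colour-to-class palette here is purely hypothetical, so substitute the dataset's own mapping.

import numpy as np

# hypothetical colour -> class-id palette; replace with the dataset's real one
PALETTE = {
    (0, 0, 0): 0,
    (255, 0, 0): 1,
    (0, 255, 0): 2,
    (0, 0, 255): 3,
    (255, 255, 0): 4,
    (255, 0, 255): 5,
    (0, 255, 255): 6,
}

def rgb_to_class_ids(masks_rgb):
    # convert (N, H, W, 3) colour masks into (N, H, W) integer label maps
    class_ids = np.zeros(masks_rgb.shape[:3], dtype=np.int32)
    for colour, cls in PALETTE.items():
        class_ids[np.all(masks_rgb == np.array(colour), axis=-1)] = cls
    return class_ids

# masks_train = rgb_to_class_ids(masks_train)  # -> (1500, 256, 256), values 0..6

Masks of shape (1500, 256, 256) with values 0-6 then line up with the model's (256, 256, 7) softmax output under sparse_categorical_crossentropy.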
