ValueError in model.fit keras and user code - python

I'm learning deep learning in Python using Keras and TensorFlow. I'm using EfficientNetB0 pretrained on the ImageNet dataset. I split the data into training and testing sets and performed one-hot encoding. I have 17 folders, i.e. 17 image classes.
effnet = EfficientNetB0(weights='imagenet', include_top=False, input_shape=(image_size, image_size, 3))
model = effnet.output
model = tf.keras.layers.GlobalAveragePooling2D()(model)
model = tf.keras.layers.Dropout(rate=0.5)(model)
model = tf.keras.layers.Dense(17, activation='softmax')(model)
model = tf.keras.models.Model(inputs=effnet.input, outputs=model)
model.compile(loss='categorical_crossentropy', optimizer='Adam', metrics=['accuracy'])
Everything ran smoothly until I trained the model:
history = model.fit(X_train, y_train, validation_split=0.1, epochs=10, verbose=1, batch_size=32, callbacks=[tensorboard, checkpoint, reduce_lr])
ValueError Traceback (most recent call last)
Input In [22], in <cell line: 1>()
----> 1 history = model.fit(X_train,y_train,validation_split=0.1, epochs =20, verbose=1,batch_size=32, callbacks=[tensorboard,checkpoint,reduce_lr])
File C:\Ken\Conda\lib\site-packages\keras\utils\traceback_utils.py:67, in filter_traceback.<locals>.error_handler(*args, **kwargs)
65 except Exception as e: # pylint: disable=broad-except
66 filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67 raise e.with_traceback(filtered_tb) from None
68 finally:
69 del filtered_tb
File ~\AppData\Local\Temp\__autograph_generated_filexkmxbaog.py:15, in outer_factory.<locals>.inner_factory.<locals>.tf__train_function(iterator)
13 try:
14 do_return = True
---> 15 retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
16 except:
17 do_return = False
ValueError: in user code:
File "C:\Ken\Conda\lib\site-packages\keras\engine\training.py", line 1051, in train_function *
return step_function(self, iterator)
File "C:\Ken\Conda\lib\site-packages\keras\engine\training.py", line 1040, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "C:\Ken\Conda\lib\site-packages\keras\engine\training.py", line 1030, in run_step **
outputs = model.train_step(data)
File "C:\Ken\Conda\lib\site-packages\keras\engine\training.py", line 890, in train_step
loss = self.compute_loss(x, y, y_pred, sample_weight)
File "C:\Ken\Conda\lib\site-packages\keras\engine\training.py", line 948, in compute_loss
return self.compiled_loss(
File "C:\Ken\Conda\lib\site-packages\keras\engine\compile_utils.py", line 201, in __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
File "C:\Ken\Conda\lib\site-packages\keras\losses.py", line 139, in __call__
losses = call_fn(y_true, y_pred)
File "C:\Ken\Conda\lib\site-packages\keras\losses.py", line 243, in call **
return ag_fn(y_true, y_pred, **self._fn_kwargs)
File "C:\Ken\Conda\lib\site-packages\keras\losses.py", line 1787, in categorical_crossentropy
return backend.categorical_crossentropy(
File "C:\Ken\Conda\lib\site-packages\keras\backend.py", line 5119, in categorical_crossentropy
target.shape.assert_is_compatible_with(output.shape)
ValueError: Shapes (None, 18) and (None, 17) are incompatible

I fixed the error!
I changed the Dense layer to 18 units and deleted the Dropout layer. At first I didn't know which change did it, but the Dense layer is the culprit: the error shows the one-hot targets have shape (None, 18), so the labels were encoded with 18 columns and the output layer must have 18 units to match. The Dropout layer is unrelated to the error; dropout only helps the model avoid overfitting.
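A minimal sketch (reusing the y_train from the question) that derives the output width from the one-hot labels instead of hard-coding it, so the Dense layer always matches the encoding:
# y_train is the one-hot-encoded label matrix; its second dimension
# is the number of classes the output layer must produce.
num_classes = y_train.shape[1]  # 18 in this case, not 17
model = tf.keras.layers.Dense(num_classes, activation='softmax')(model)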

Related

Keras ValueError: Dimensions must be equal LSTM

I'm creating a Bidirectional LSTM, but I faced the following error:
ValueError: Dimensions must be equal, but are 5 and 250 for '{{node Equal}} = Equal[T=DT_INT64, incompatible_shape_error=true](ArgMax, ArgMax_1)' with input shapes: [?,5], [?,250]
I have no idea what is wrong or how to fix it!
I have a text dataset with 59k rows for training the model, which I want to divide into 15 classes; I will then use those classes for class-based text similarity on newly received texts.
Based on other posts I played with the loss function, but it still doesn't solve the issue.
The Sequential model is as follows:
model_lstm = Sequential()
model_lstm.add(InputLayer(250,))
model_lstm.add(Embedding(input_dim=max_words+1, output_dim=200, weights=[embedding_matrix],
                         mask_zero=True, trainable=True, name='corpus_embed'))
enc_lstm = Bidirectional(LSTM(128, activation='sigmoid', return_sequences=True, name='LSTM_Encod'))
model_lstm.add(enc_lstm)
model_lstm.add(Dropout(0.25))
model_lstm.add(Bidirectional(LSTM(128, activation='sigmoid', dropout=0.25, return_sequences=True, name='LSTM_Decod')))
model_lstm.add(Dropout(0.25))
model_lstm.add(Dense(15, activation='softmax'))
model_lstm.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['Accuracy'])
## Feed the model
history = model_lstm.fit(x=corpus_seq_train,
                         y=target_seq_train,
                         batch_size=128,
                         epochs=50,
                         validation_data=(corpus_seq_test, target_seq_test),
                         callbacks=[tensorboard],
                         sample_weight=sample_wt_mat)
This is the model summary:
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
corpus_embed (Embedding) (None, 250, 200) 4000200
bidirectional (Bidirectiona (None, 250, 256) 336896
l)
dropout (Dropout) (None, 250, 256) 0
bidirectional_1 (Bidirectio (None, 250, 256) 394240
nal)
dropout_1 (Dropout) (None, 250, 256) 0
dense (Dense) (None, 250, 15) 3855
=================================================================
Total params: 4,735,191
Trainable params: 4,735,191
Non-trainable params: 0
_________________________________
And the dataset shapes:
corpus_seq_train.shape, target_seq_train.shape
((59597, 250), (59597, 5, 8205))
Finally, here is the error:
Epoch 1/50
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
C:\Users\AMIRSH~1\AppData\Local\Temp/ipykernel_10004/3838451254.py in <module>
9 ## Feed the model
10
---> 11 history = model_lstm.fit(x=corpus_seq_train,
12 y=target_seq_train,
13 batch_size=128,
C:\ProgramData\Anaconda3\lib\site-packages\keras\utils\traceback_utils.py in error_handler(*args, **kwargs)
65 except Exception as e: # pylint: disable=broad-except
66 filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67 raise e.with_traceback(filtered_tb) from None
68 finally:
69 del filtered_tb
C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py in tf__train_function(iterator)
13 try:
14 do_return = True
---> 15 retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
16 except:
17 do_return = False
ValueError: in user code:
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 1051, in train_function *
return step_function(self, iterator)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 1040, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 1030, in run_step **
outputs = model.train_step(data)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 894, in train_step
return self.compute_metrics(x, y, y_pred, sample_weight)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 987, in compute_metrics
self.compiled_metrics.update_state(y, y_pred, sample_weight)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\compile_utils.py", line 501, in update_state
metric_obj.update_state(y_t, y_p, sample_weight=mask)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\utils\metrics_utils.py", line 70, in decorated
update_op = update_state_fn(*args, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\metrics\base_metric.py", line 140, in update_state_fn
return ag_update_state(*args, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\metrics\base_metric.py", line 646, in update_state **
matches = ag_fn(y_true, y_pred, **self._fn_kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\metrics\metrics.py", line 3295, in categorical_accuracy
return metrics_utils.sparse_categorical_matches(
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\utils\metrics_utils.py", line 893, in sparse_categorical_matches
matches = tf.cast(tf.equal(y_true, y_pred), backend.floatx())
ValueError: Dimensions must be equal, but are 5 and 250 for '{{node Equal}} = Equal[T=DT_INT64, incompatible_shape_error=true](ArgMax, ArgMax_1)' with input shapes: [?,5], [?,250].
The problem is caused by the loss function and the shape of the y labels: with sparse_categorical_crossentropy the targets must be plain integer class IDs.
We should not pad y_label; it should be fed to the model directly without any other processing.
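A minimal sketch of what that implies here (an inference from the error shapes, not code from the thread): with 15 classes and one label per text, the targets should be a flat integer vector, and the last recurrent layer should stop returning sequences so the output becomes (None, 15):
import numpy as np

# Hypothetical: class_ids_train holds one integer label (0..14) per text,
# so the target array has shape (59597,) with no padding or one-hot encoding.
target_seq_train = np.asarray(class_ids_train, dtype='int64')

# In the model, set return_sequences=False on the second Bidirectional LSTM so
# Dense(15, activation='softmax') outputs (None, 15) instead of (None, 250, 15).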

ValueError of Input and Output values during LSTM training

I was trying to implement a basic LSTM network using some random data, and I got the following error during execution of the code:
'''
Traceback (most recent call last):
File "C:/Users/dell/Desktop/test run for LSTM thingy.py", line 39, in <module>
history = model.fit(x_train, y_train, epochs=1, batch_size=16, verbose=1)
File "C:\Users\dell\AppData\Local\Programs\Python\Python310\lib\site-packages\keras\utils\traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "C:\Users\dell\AppData\Local\Temp\__autograph_generated_fileu1zdna1b.py", line 15, in tf__train_function
retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
ValueError: in user code:
File "C:\Users\dell\AppData\Local\Programs\Python\Python310\lib\site-packages\keras\engine\training.py", line 1051, in train_function *
return step_function(self, iterator)
File "C:\Users\dell\AppData\Local\Programs\Python\Python310\lib\site-packages\keras\engine\training.py", line 1040, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "C:\Users\dell\AppData\Local\Programs\Python\Python310\lib\site-packages\keras\engine\training.py", line 1030, in run_step **
outputs = model.train_step(data)
File "C:\Users\dell\AppData\Local\Programs\Python\Python310\lib\site-packages\keras\engine\training.py", line 890, in train_step
loss = self.compute_loss(x, y, y_pred, sample_weight)
File "C:\Users\dell\AppData\Local\Programs\Python\Python310\lib\site-packages\keras\engine\training.py", line 948, in compute_loss
return self.compiled_loss(
File "C:\Users\dell\AppData\Local\Programs\Python\Python310\lib\site-packages\keras\engine\compile_utils.py", line 201, in __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
File "C:\Users\dell\AppData\Local\Programs\Python\Python310\lib\site-packages\keras\losses.py", line 139, in __call__
losses = call_fn(y_true, y_pred)
File "C:\Users\dell\AppData\Local\Programs\Python\Python310\lib\site-packages\keras\losses.py", line 243, in call **
return ag_fn(y_true, y_pred, **self._fn_kwargs)
File "C:\Users\dell\AppData\Local\Programs\Python\Python310\lib\site-packages\keras\losses.py", line 1787, in categorical_crossentropy
return backend.categorical_crossentropy(
File "C:\Users\dell\AppData\Local\Programs\Python\Python310\lib\site-packages\keras\backend.py", line 5119, in categorical_crossentropy
target.shape.assert_is_compatible_with(output.shape)
ValueError: Shapes (None, 133, 1320) and (None, 133, 5) are incompatible
'''
This is what my code looks like at the moment:
import tensorflow as tf

x_train = tf.random.normal((28, 133, 1320))
y_train = tf.random.normal((28, 133, 1320))

model = tf.keras.Sequential()
model.add(tf.keras.layers.LSTM(5, activation='tanh', recurrent_activation='sigmoid',
                               input_shape=(x_train.shape[1], x_train.shape[2]), return_sequences=True))
model.add(tf.keras.layers.Dense(5, activation="softmax"))
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001), loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
history = model.fit(x_train, y_train, epochs=1, batch_size=16, verbose=1)
Could anyone help me debug this code? I need to use something similar in another side project that involves X and Y input data of similar shapes, and I was not able to find a solution to the problem. I know it has something to do with the loss function, but that's all.
Shape of Y - (28, 133, 1320)
Shape of X - (28, 133, 1320)
Categories needed - 5
You are currently trying to do categorical classification with 5 classes, but y has the shape (28, 133, 1320), which cannot work. Also, when you use categorical_crossentropy, you need one-hot-encoded labels. Here is a working example as orientation:
import tensorflow as tf

x_train = tf.random.normal((28, 133, 1320))
# one-hot encoded labels: one of 5 classes per sample
y_train = tf.keras.utils.to_categorical(tf.random.uniform((28,), maxval=5, dtype=tf.int32))

model = tf.keras.Sequential()
model.add(tf.keras.layers.LSTM(5, activation='tanh', recurrent_activation='sigmoid',
                               input_shape=(x_train.shape[1], x_train.shape[2]), return_sequences=False))
model.add(tf.keras.layers.Dense(5, activation="softmax"))
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001), loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
history = model.fit(x_train, y_train, epochs=1, batch_size=16, verbose=1)
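If the goal were instead one prediction per time step (sequence labeling), a hedged variant of the same example keeps return_sequences=True and one-hot encodes a label for every step:
# Per-timestep variant (assumption: one of 5 classes at each of the 133 steps)
y_train = tf.keras.utils.to_categorical(
    tf.random.uniform((28, 133), maxval=5, dtype=tf.int32), num_classes=5)
# With return_sequences=True the model output is (None, 133, 5), matching y_train.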

ValueError: Input 0 of layer "model_1" is incompatible with the layer: expected shape=(None, 224, 224, 3), found shape=(None, 290, 290, 3)

I am trying to implement the game of Rock, Paper, Scissors in a Jupyter notebook using TensorFlow with a neural network. The code I am trying to implement is this one: https://learnopencv.com/playing-rock-paper-scissors-with-ai/
It works correctly when I use my webcam, but it doesn't work when I use a DSLR camera.
The specific line where the code breaks is:
history = model.fit(x=augment.flow(trainX, trainY, batch_size=batchsize), validation_data=(testX, testY),
                    steps_per_epoch=len(trainX) // batchsize, epochs=epochs)
The complete error is:
Epoch 1/15
7/7 [==============================] - ETA: 0s - loss: 1.0831 - accuracy: 0.6154
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_17300/1526770187.py in <module>
4
5 # Start training
----> 6 history = model.fit(x=augment.flow(trainX, trainY, batch_size=batchsize), validation_data=(testX, testY),
7 steps_per_epoch= len(trainX) // batchsize, epochs=epochs)
8
C:\ProgramData\Anaconda3\lib\site-packages\keras\utils\traceback_utils.py in error_handler(*args, **kwargs)
65 except Exception as e: # pylint: disable=broad-except
66 filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67 raise e.with_traceback(filtered_tb) from None
68 finally:
69 del filtered_tb
C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py in tf__test_function(iterator)
13 try:
14 do_return = True
---> 15 retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
16 except:
17 do_return = False
ValueError: in user code:
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 1557, in test_function *
return step_function(self, iterator)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 1546, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 1535, in run_step **
outputs = model.test_step(data)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 1499, in test_step
y_pred = self(x, training=False)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\utils\traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\input_spec.py", line 264, in assert_input_compatibility
raise ValueError(f'Input {input_index} of layer "{layer_name}" is '
ValueError: Input 0 of layer "model_1" is incompatible with the layer: expected shape=(None, 224, 224, 3), found shape=(None, 290, 290, 3)
The complete code of the program is here: https://learnopencv.com/playing-rock-paper-scissors-with-ai/
From the error, it seems the shape of the input images is (290, 290, 3). Resizing the images to (224, 224, 3) will solve the issue. Please add the following lines before normalizing.
#Resizing images
images = np.resize(images,(400, 224, 224, 3))
#Normalizing images
images = np.array(images, dtype="float") / 255.0
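One caveat worth hedging: np.resize fills the new shape by repeating the flattened buffer, so it changes the array shape without actually resampling pixels. If the images need true resizing, a sketch using TensorFlow (assuming images is a batch of RGB arrays) would be:
import tensorflow as tf

# Resample every image to 224x224 with bilinear interpolation,
# then normalize to [0, 1].
images = tf.image.resize(images, (224, 224)).numpy()
images = images.astype("float32") / 255.0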

Running transfer learning for my binary classification model with a ResNet50V2 model on TensorFlow: ValueError

I am trying to apply transfer learning (ResNet50V2 & EfficientNetB0) to my binary image classification model, but I got a ValueError while fitting the model.
I added the final layer as layers.Dense(num_classes, activation='sigmoid', name='output_layer') with num_classes = 1, and compiled the model using loss='binary_crossentropy'.
I got the following error:
ValueError: logits and labels must have the same shape, received ((None, 2) vs (None, 1)).
Any help/suggestions are welcome.
# Resnet 50 V2 feature vector
resnet_url = "https://tfhub.dev/google/imagenet/resnet_v2_50/feature_vector/4"
# Original: EfficientNetB0 feature vector (version 1)
efficientnet_url = "https://tfhub.dev/tensorflow/efficientnet/b0/feature-vector/1"
-----------------------------------------------------------------------------------------
def create_model(model_url, num_classes=1):
    # Download the pretrained model and save it as a Keras layer
    feature_extractor_layer = hub.KerasLayer(model_url,
                                             trainable=False,  # freeze the underlying patterns
                                             name='feature_extraction_layer',
                                             input_shape=IMAGE_SHAPE+(3,))  # define the input image shape
    # Create our own model
    model = tf.keras.Sequential([
        feature_extractor_layer,  # use the feature extraction layer as the base
        layers.Dense(num_classes, activation='sigmoid', name='output_layer')  # create our own output layer
    ])
    return model
-----------------------------------------------------------------------------------------------------
# Create model
resnet_model = create_model(resnet_url, num_classes=train_data.num_classes)
# Compile
resnet_model.compile(loss='binary_crossentropy',
                     optimizer=tf.keras.optimizers.Adam(),
                     metrics=['accuracy'])
------------------------------------------------------------------------------------------------------
# Fit the model
resnet_history = resnet_model.fit(train_data,
                                  epochs=5,
                                  steps_per_epoch=len(train_data),
                                  validation_data=val_data,
                                  validation_steps=len(val_data),
                                  # Add TensorBoard callback to model (callbacks parameter takes a list)
                                  callbacks=[create_tensorboard_callback(dir_name="tensorflow_hub",  # save experiment logs here
                                                                         experiment_name="resnet50V2")])  # name of log files
And the error I got:
Saving TensorBoard log files to: tensorflow_hub/resnet50V2/20220418-221123
Epoch 1/5
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-24-196f7d30141c> in <module>()
7 # Add TensorBoard callback to model (callbacks parameter takes a list)
8 callbacks=[create_tensorboard_callback(dir_name="tensorflow_hub", # save experiment logs here
----> 9 experiment_name="resnet50V2")]) # name of log files
1 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in autograph_handler(*args, **kwargs)
1145 except Exception as e: # pylint:disable=broad-except
1146 if hasattr(e, "ag_error_metadata"):
-> 1147 raise e.ag_error_metadata.to_exception(e)
1148 else:
1149 raise
ValueError: in user code:
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1021, in train_function *
return step_function(self, iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1010, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1000, in run_step **
outputs = model.train_step(data)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 860, in train_step
loss = self.compute_loss(x, y, y_pred, sample_weight)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 919, in compute_loss
y, y_pred, sample_weight, regularization_losses=self.losses)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/compile_utils.py", line 201, in __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 141, in __call__
losses = call_fn(y_true, y_pred)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 245, in call **
return ag_fn(y_true, y_pred, **self._fn_kwargs)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 1932, in binary_crossentropy
backend.binary_crossentropy(y_true, y_pred, from_logits=from_logits),
File "/usr/local/lib/python3.7/dist-packages/keras/backend.py", line 5247, in binary_crossentropy
return tf.nn.sigmoid_cross_entropy_with_logits(labels=target, logits=output)
ValueError: `logits` and `labels` must have the same shape, received ((None, 2) vs (None, 1)).
For binary classification you don't need a unit in the Dense layer for each class, since that would be redundant, and with the binary_crossentropy loss it can't work in the first place. Note that calling create_model(resnet_url, num_classes=train_data.num_classes) overrides your default of 1 with 2, which is where the (None, 2) logits shape comes from. Try adjusting layers.Dense(num_classes) to layers.Dense(1).
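A minimal sketch of that fix, keeping everything else from the question unchanged: stop forwarding train_data.num_classes and pin the output to a single sigmoid unit:
# Create model with one sigmoid output unit for binary classification
resnet_model = create_model(resnet_url, num_classes=1)
resnet_model.compile(loss='binary_crossentropy',
                     optimizer=tf.keras.optimizers.Adam(),
                     metrics=['accuracy'])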

ValueError: Can not squeeze dim[1], expected a dimension of 1

EDIT: I got past that error message by reshaping my data as follows:
train_x = np.array(train_x)
train_y = np.array(train_y)
x_size = train_x.shape[0] * train_x.shape[1]
train_x = train_x.reshape(x_size, train_x.shape[2])
train_x = np.expand_dims(train_x, 1)
train_x = train_x.transpose(0,2,1)
train_y = train_y.flatten()
shape = train_x.shape # 3D: number of texts * number of padded paragraphs, number of features, 1
time_steps = shape[0] # number of padded pars * number of texts
features = shape[1] # number of features
model = Sequential()
model.add(layers.Masking(mask_value=0, input_shape=(time_steps, features)))
model.add(layers.LSTM(128, return_sequences=True, return_state=False, input_shape=(time_steps, features))) # 128 internal units
model.add(layers.TimeDistributed(layers.Dense(1, activation='sigmoid')))
#model.add(layers.Dense(len(train_y))) # Dense layer
model.compile(loss='binary_crossentropy', optimizer='adam')
model.fit(train_x, train_y, batch_size=train_y.shape[0])
predictions = model.predict(test_x)
I get a new error message:
ValueError: Input 0 is incompatible with layer lstm: expected shape=(None, None, 3), found shape=[288, 3, 1]
I'll keep updating this question in case someone runs into a similar problem.
Still happy about any input.
Original question:
I want to build a sequential LSTM model that makes a binary classification at every time step. More precisely, I want to predict an output for every paragraph in my texts (48 is the number of paragraphs). This is my code:
shape = np.shape(train_x) # 3D: number of texts, number of padded paragraphs, number of features
n = shape[0] # number of texts
time_steps = shape[1] # number of padded pars
features = shape[2] # number of features
model = Sequential()
model.add(layers.Masking(mask_value=0.0, input_shape=(time_steps, features)))
model.add(layers.LSTM(128, return_sequences=True, return_state=False))
model.add(layers.TimeDistributed(layers.Dense(1)))
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.summary()
#train_x = np.array(train_x).reshape(2, input_shape, 3)
train_x = tf.convert_to_tensor(train_x) # data needs to be tensor object
train_y = tf.convert_to_tensor(train_y)
model.fit(train_x, train_y, batch_size=2)
predictions = model.predict(test_x)
This is the error message I get:
ValueError: Can not squeeze dim[1], expected a dimension of 1,
got 48 for '{{node categorical_crossentropy/weighted_loss/Squeeze}} = Squeeze[T=DT_FLOAT,
squeeze_dims=[-1]](Cast)' with input shapes: [2,48].
I don't really know what to do with this. Do I need to reshape my data, and how? Or do I need to change something in the model?
Thanks!
(changing the loss function to 'binary_crossentropy' raises the same error)
This is the entire traceback:
Traceback (most recent call last):
File "program.py", line 247, in <module>
eval_scores = train_classifier(x_train, y_train_sc, x_test, y_test_sc)
File "program.py", line 201, in train_classifier
model.fit(train_x, train_y, batch_size=2)
File "C:\Python38\lib\site-packages\tensorflow\python\keras\engine\training.py", line 108, in _method_wrapper
return method(self, *args, **kwargs)
File "C:\Python38\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1098, in fit
tmp_logs = train_function(iterator)
File "C:\Python38\lib\site-packages\tensorflow\python\eager\def_function.py", line 780, in __call__
result = self._call(*args, **kwds)
File "C:\Python38\lib\site-packages\tensorflow\python\eager\def_function.py", line 823, in _call
self._initialize(args, kwds, add_initializers_to=initializers)
File "C:\Python38\lib\site-packages\tensorflow\python\eager\def_function.py", line 696, in _initialize
self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
File "C:\Python38\lib\site-packages\tensorflow\python\eager\function.py", line 2855, in _get_concrete_function_internal_garbage_collected
graph_function, _, _ = self._maybe_define_function(args, kwargs)
File "C:\Python38\lib\site-packages\tensorflow\python\eager\function.py", line 3213, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "C:\Python38\lib\site-packages\tensorflow\python\eager\function.py", line 3065, in _create_graph_function
func_graph_module.func_graph_from_py_func(
File "C:\Python38\lib\site-packages\tensorflow\python\framework\func_graph.py", line 986, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "C:\Python38\lib\site-packages\tensorflow\python\eager\def_function.py", line 600, in wrapped_fn
return weak_wrapped_fn().__wrapped__(*args, **kwds)
File "C:\Python38\lib\site-packages\tensorflow\python\framework\func_graph.py", line 973, in wrapper
raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:
C:\Python38\lib\site-packages\tensorflow\python\keras\engine\training.py:806 train_function *
return step_function(self, iterator)
C:\Python38\lib\site-packages\tensorflow\python\keras\engine\training.py:796 step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
C:\Python38\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:1211 run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
C:\Python38\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:2585 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
C:\Python38\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:2945 _call_for_each_replica
return fn(*args, **kwargs)
C:\Python38\lib\site-packages\tensorflow\python\keras\engine\training.py:789 run_step **
outputs = model.train_step(data)
C:\Python38\lib\site-packages\tensorflow\python\keras\engine\training.py:748 train_step
loss = self.compiled_loss(
C:\Python38\lib\site-packages\tensorflow\python\keras\engine\compile_utils.py:204 __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
C:\Python38\lib\site-packages\tensorflow\python\keras\losses.py:150 __call__
return losses_utils.compute_weighted_loss(
C:\Python38\lib\site-packages\tensorflow\python\keras\utils\losses_utils.py:111 compute_weighted_loss
weighted_losses = tf_losses_utils.scale_losses_by_sample_weight(
C:\Python38\lib\site-packages\tensorflow\python\ops\losses\util.py:142 scale_losses_by_sample_weight
losses, _, sample_weight = squeeze_or_expand_dimensions(
C:\Python38\lib\site-packages\tensorflow\python\ops\losses\util.py:95 squeeze_or_expand_dimensions
sample_weight = array_ops.squeeze(sample_weight, [-1])
C:\Python38\lib\site-packages\tensorflow\python\util\dispatch.py:201 wrapper
return target(*args, **kwargs)
C:\Python38\lib\site-packages\tensorflow\python\util\deprecation.py:507 new_func
return func(*args, **kwargs)
C:\Python38\lib\site-packages\tensorflow\python\ops\array_ops.py:4259 squeeze
return gen_array_ops.squeeze(input, axis, name)
C:\Python38\lib\site-packages\tensorflow\python\ops\gen_array_ops.py:10043 squeeze
_, _, _op, _outputs = _op_def_library._apply_op_helper(
C:\Python38\lib\site-packages\tensorflow\python\framework\op_def_library.py:742 _apply_op_helper
op = g._create_op_internal(op_type_name, inputs, dtypes=None,
C:\Python38\lib\site-packages\tensorflow\python\framework\func_graph.py:591 _create_op_internal
return super(FuncGraph, self)._create_op_internal( # pylint: disable=protected-access
C:\Python38\lib\site-packages\tensorflow\python\framework\ops.py:3477 _create_op_internal
ret = Operation(
C:\Python38\lib\site-packages\tensorflow\python\framework\ops.py:1974 __init__
self._c_op = _create_c_op(self._graph, node_def, inputs,
C:\Python38\lib\site-packages\tensorflow\python\framework\ops.py:1815 _create_c_op
raise ValueError(str(e))
ValueError: Can not squeeze dim[1], expected a dimension of 1, got 48 for '{{node categorical_crossentropy/weighted_loss/Squeeze}} = Squeeze[T=DT_FLOAT, squeeze_dims=[-1]](Cast)' with input shapes: [2,48].
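A hedged reading of the original error (my suggestion, not an answer from the thread): the loss sees targets of shape [2, 48] while TimeDistributed(Dense(1)) emits (2, 48, 1), so the trailing-axis squeeze fails. For a binary label per paragraph, giving the targets an explicit last dimension and pairing a sigmoid output with binary_crossentropy aligns the shapes:
import numpy as np

# Assumption: train_y has shape (n_texts, 48) with a 0/1 label per paragraph.
# A trailing axis makes it (n_texts, 48, 1), matching the output of
# TimeDistributed(Dense(1, activation='sigmoid')).
train_y = np.expand_dims(np.asarray(train_y), axis=-1)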
