Hausdorff loss for U-Net segmentation Keras - python

I wanted to use the weighted Hausdorff distance I found here (https://github.com/N0vel/weighted-hausdorff-distance-tensorflow-keras-loss) as the loss in my U-Net, but when I try to do so I get the following error:
ValueError: Dimensions must be equal, but are 3 and 2 for 'loss/conv2d_19_loss/MatMul' (op: 'MatMul') with input shapes: [?,3], [2,16384].
I don't really understand the code of the Hausdorff distance, but I did put the batch size my network uses into the loop. I also tried to print the shapes of y_true and y_pred from another loss function to see what sizes were expected, but it only printed Tensor("loss/conv2d_19_loss/strided_slice:0", shape=(?, ?, ?), dtype=float32).
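For reference, a static shape of (?, ?, ?) is expected when printing at compile time in graph mode; getting the runtime shapes needs an op in the graph. A minimal sketch of such a debug loss, assuming the TF1-style API shown in the traceback below (debug_loss is an illustrative name):

import tensorflow as tf

def debug_loss(y_true, y_pred):
    # tf.Print is a TF1 identity op that logs its data when the graph runs,
    # so the runtime shapes appear once training actually starts.
    y_true = tf.Print(y_true, [tf.shape(y_true), tf.shape(y_pred)],
                      message='y_true / y_pred shapes: ')
    return tf.reduce_mean(tf.square(y_pred - y_true))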
I tried to find the problem by following the path of the error, but I didn't understand the code.
Traceback (most recent call last):
File "/home/etudiant_master/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/common_shapes.py", line 686, in _call_cpp_shape_fn_impl
input_tensors_as_shapes, status)
File "/home/etudiant_master/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 473, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimensions must be equal, but are 3 and 2 for 'loss/conv2d_19_loss/MatMul' (op: 'MatMul') with input shapes: [?,3], [2,16384].
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/etudiant_master/Documents/Marouane/Scripts/Test_parameters.py", line 367, in <module>
test_hyperparam_list('Loss', LOSS)
File "/home/etudiant_master/Documents/Marouane/Scripts/Test_parameters.py", line 339, in test_hyperparam_list
test_ligne = leave_one_out_model_test(batch_size, nb_epoch, validation_split, kernels, kernel_size, dropout_rate, pooling_size, block_number, path_test = path_test, iteration = var, metrics = metrics, optimizer = optimizer, loss = loss, activation = activation, activation2 = activation2)
File "/home/etudiant_master/Documents/Marouane/Scripts/Test_parameters.py", line 123, in leave_one_out_model_test
model = train_model.CNNs_layers(kernels = kernels, kernel_size = kernel_size, dropout_rate = dropout_rate, pooling_size = pooling_size, block_number = block_number, activation = activation, activation2 = activation2, metrics = metrics, optimizer = optimizer, loss = loss)
File "/home/etudiant_master/Documents/Marouane/Scripts/train_model.py", line 120, in CNNs_layers
model.compile(optimizer= optimizer, loss=loss, metrics=[metrics])
File "/home/etudiant_master/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/keras/_impl/keras/engine/training.py", line 849, in compile
output_loss = weighted_loss(y_true, y_pred, sample_weight, mask)
File "/home/etudiant_master/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/keras/_impl/keras/engine/training.py", line 454, in weighted
score_array = fn(y_true, y_pred)
File "/home/etudiant_master/Documents/Marouane/Scripts/Metrics.py", line 73, in Weighted_Hausdorff_loss
d_matrix = tf.sqrt(tf.maximum(tf.reshape(tf.reduce_sum(gt_b*gt_b, axis=1), (-1, 1)) + tf.reduce_sum(all_img_locations*all_img_locations, axis=1)-2*(tf.matmul(gt_b, tf.transpose(all_img_locations))), 0.0))
File "/home/etudiant_master/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py", line 2022, in matmul
a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
File "/home/etudiant_master/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/gen_math_ops.py", line 2516, in _mat_mul
name=name)
File "/home/etudiant_master/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/etudiant_master/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3162, in create_op
compute_device=compute_device)
File "/home/etudiant_master/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3208, in _create_op_helper
set_shapes_for_outputs(op)
File "/home/etudiant_master/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2427, in set_shapes_for_outputs
return _set_shapes_for_outputs(op)
File "/home/etudiant_master/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2400, in _set_shapes_for_outputs
shapes = shape_func(op)
File "/home/etudiant_master/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2330, in call_with_requiring
return call_cpp_shape_fn(op, require_shape_fn=True)
File "/home/etudiant_master/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/common_shapes.py", line 627, in call_cpp_shape_fn
require_shape_fn)
File "/home/etudiant_master/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/common_shapes.py", line 691, in _call_cpp_shape_fn_impl
raise ValueError(err.message)
ValueError: Dimensions must be equal, but are 3 and 2 for 'loss/conv2d_19_loss/MatMul' (op: 'MatMul') with input shapes: [?,3], [2,16384].
I found nothing else on Hausdorff loss implementations, so I hope someone can spot the problem.
Thanks a lot!
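For context, the failing line builds the pairwise distance matrix through the identity ||a - b||^2 = ||a||^2 + ||b||^2 - 2*a·b, which requires both point sets to have the same coordinate dimension. The [?,3] vs [2,16384] in the error means gt_b carries 3 coordinates per point while all_img_locations carries 2 (16384 would be a 128*128 location grid). A minimal sketch with illustrative values, not the repo's actual data:

import tensorflow as tf

# Both point sets must be [N, 2] / [M, 2] (one (row, col) pair per pixel).
# A gt_b of shape [?, 3] typically means tf.where ran over a 3-D mask,
# adding a batch or channel coordinate to every point.
gt_b = tf.constant([[0., 0.], [3., 4.]])                         # [N=2, 2]
all_img_locations = tf.constant([[0., 0.], [1., 1.], [3., 4.]])  # [M=3, 2]

d_matrix = tf.sqrt(tf.maximum(
    tf.reshape(tf.reduce_sum(gt_b * gt_b, axis=1), (-1, 1))         # [N, 1]
    + tf.reduce_sum(all_img_locations * all_img_locations, axis=1)  # [M]
    - 2 * tf.matmul(gt_b, tf.transpose(all_img_locations)),         # [N, M]
    0.0))  # -> [N, M] pairwise distances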

Related

RandomCrop causing INVALID_ARGUMENT: required broadcastable shapes

I'm training a neural network with Keras and trying to use a RandomCrop layer. I'm using a dynamically sized dataset (varying resolutions), but I've found that's not the cause of this issue.
When I run model.fit(), after a short while I receive the above-mentioned error, INVALID_ARGUMENT: required broadcastable shapes. I am able to get a summary of my model, so it's not some mismatch there.
My model works fine when I remove this layer, but I need it to reduce the size of my inputs (hence using RandomCrop).
full traceback + tensorflow status
2022-03-23 13:27:28.772937: W tensorflow/core/framework/op_kernel.cc:1733] INVALID_ARGUMENT: required broadcastable shapes
Traceback (most recent call last):
File "c:\Users\samue\Desktop\rcrop\main.py", line 37, in <module>
conv_model.fit(
File "C:\Users\samue\AppData\Roaming\Python\Python310\site-packages\keras\utils\traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "C:\Users\samue\AppData\Roaming\Python\Python310\site-packages\tensorflow\python\eager\execute.py", line 54, in quick_execute
tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InvalidArgumentError: Graph execution error:
Detected at node 'mean_squared_error/SquaredDifference' defined at (most recent call last):
File "C:\Program Files\Python310\lib\threading.py", line 966, in _bootstrap
self._bootstrap_inner()
File "C:\Program Files\Python310\lib\threading.py", line 1009, in _bootstrap_inner
self.run()
File "C:\Users\samue\AppData\Roaming\Python\Python310\site-packages\keras\engine\training.py", line 1000, in run_step
outputs = model.train_step(data)
File "C:\Users\samue\AppData\Roaming\Python\Python310\site-packages\keras\engine\training.py", line 860, in train_step
loss = self.compute_loss(x, y, y_pred, sample_weight)
File "C:\Users\samue\AppData\Roaming\Python\Python310\site-packages\keras\engine\training.py", line 918, in compute_loss
return self.compiled_loss(
File "C:\Users\samue\AppData\Roaming\Python\Python310\site-packages\keras\engine\compile_utils.py", line 201, in __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
File "C:\Users\samue\AppData\Roaming\Python\Python310\site-packages\keras\losses.py", line 141, in __call__
losses = call_fn(y_true, y_pred)
File "C:\Users\samue\AppData\Roaming\Python\Python310\site-packages\keras\losses.py", line 245, in call
return ag_fn(y_true, y_pred, **self._fn_kwargs)
File "C:\Users\samue\AppData\Roaming\Python\Python310\site-packages\keras\losses.py", line 1329, in mean_squared_error
return backend.mean(tf.math.squared_difference(y_pred, y_true), axis=-1)
Node: 'mean_squared_error/SquaredDifference'
Detected at node 'mean_squared_error/SquaredDifference' defined at (most recent call last):
File "C:\Program Files\Python310\lib\threading.py", line 966, in _bootstrap
self._bootstrap_inner()
File "C:\Program Files\Python310\lib\threading.py", line 1009, in _bootstrap_inner
self.run()
File "C:\Users\samue\AppData\Roaming\Python\Python310\site-packages\keras\engine\training.py", line 1000, in run_step
outputs = model.train_step(data)
File "C:\Users\samue\AppData\Roaming\Python\Python310\site-packages\keras\engine\training.py", line 860, in train_step
loss = self.compute_loss(x, y, y_pred, sample_weight)
File "C:\Users\samue\AppData\Roaming\Python\Python310\site-packages\keras\engine\training.py", line 918, in compute_loss
return self.compiled_loss(
File "C:\Users\samue\AppData\Roaming\Python\Python310\site-packages\keras\engine\compile_utils.py", line 201, in __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
File "C:\Users\samue\AppData\Roaming\Python\Python310\site-packages\keras\losses.py", line 141, in __call__
losses = call_fn(y_true, y_pred)
File "C:\Users\samue\AppData\Roaming\Python\Python310\site-packages\keras\losses.py", line 245, in call
return ag_fn(y_true, y_pred, **self._fn_kwargs)
File "C:\Users\samue\AppData\Roaming\Python\Python310\site-packages\keras\losses.py", line 1329, in mean_squared_error
return backend.mean(tf.math.squared_difference(y_pred, y_true), axis=-1)
Node: 'mean_squared_error/SquaredDifference'
2 root error(s) found.
(0) INVALID_ARGUMENT: required broadcastable shapes
[[{{node mean_squared_error/SquaredDifference}}]]
[[div_no_nan/ReadVariableOp/_84]]
(1) INVALID_ARGUMENT: required broadcastable shapes
[[{{node mean_squared_error/SquaredDifference}}]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_1308]
How to reproduce
I've created a minimal reproducible example with only two images, each with a resolution of [10, 10], both saved as .png in the RGB colorspace.
Running main.py loads these images and tries to start training (failing with an error).
When I exclude the RandomCrop layer, it works just fine.
folder structure
/main_folder
--main.py
--/data
----001.png
----002.png
main.py
import cv2, os
import keras
import tensorflow as tf
from keras import layers

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    input_layer = keras.Input(shape=(None, None, 3))
    cropped = layers.RandomCrop(32, 32)(input_layer)
    out = layers.Conv2D(3, (3, 3), activation='sigmoid', padding='same')(cropped)

    conv_model = keras.Model(input_layer, out)
    conv_model.compile(
        optimizer='adam',
        loss=tf.keras.losses.MeanSquaredError()
    )
    conv_model.summary()

path = "data"
data = [cv2.imread(os.path.join(path, f)) / 255 for f in os.listdir(os.path.join(path))]

def data_generator():
    for i in range(len(data)):
        yield data[i], data[i]

dataset = tf.data.Dataset.from_generator(
    data_generator,
    output_types=(tf.float32, tf.float32),
    output_shapes=((None, None, 3), (None, None, 3))
).batch(1)

conv_model.fit(
    dataset,
    epochs=1,
    validation_data=dataset
)
So, I wanted to use this for an autoencoder (as in the example). That means I'd need the same crop applied to both the input and the target image. This doesn't sound like something RandomCrop could do, but since I'm already using a custom generator, I can implement it right there:
def data_generator():
    for i in range(len(data)):
        # Custom function to determine the patch size
        x, x1, y, y1 = randomly_choose(data[i].shape)
        yield data[i][x: x1, y: y1], data[i][x: x1, y: y1]
This gives me full control over the generation process, allowing me to include image flipping, rotation, and other alterations.
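randomly_choose is left undefined above; a minimal sketch of one possible implementation, assuming a fixed square patch (the size of 32 mirrors the RandomCrop layer, but is an assumption):

import random

def randomly_choose(shape, patch=32):
    # shape is (height, width, channels); pick a random patch x patch
    # window that fits entirely inside the image.
    x = random.randint(0, shape[0] - patch)
    y = random.randint(0, shape[1] - patch)
    return x, x + patch, y, y + patch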

Trying to concatenate keras models: ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type float)

I'm trying to concatenate two parallel models in keras, each with different inputs. The relevant code is below.
# model 1
model1_in = Input(shape=(train_x_1.shape[1], train_x_1.shape[2]))
model1_out = LSTM(50, activation='relu',return_sequences=False, name='layer_1')(model1_in)
model1 = Model(model1_in, model1_out)
# model 2
model2_in = Input(shape=(1))
model2_out = Dense(8, activation='relu', name='layer_2')(model2_in)
model2 = Model(model2_in, model2_out)
concatenated = concatenate(inputs=[model1.output, model2.output])
out = Dense(1, activation='relu', name='output_layer')(concatenated)
model = Model([model1_in, model2_in], out)
model.compile(loss='mean_absolute_error', optimizer='adam')
# fit network
history = model.fit([train_x_1,train_x_2], train_y, epochs=100, batch_size=72, validation_data=([test_x_1,test_x_2], test_y), verbose=2, shuffle=False)
The error I'm getting is
ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type float).
and occurs at the model.fit line.
I'm running in IDLE. The train and test values are all arrays, and I've checked that all training inputs are of the same length:
#train_x_1.shape[0]
15465
#train_y.shape[0]
15465
#train_x_2.shape[0]
15465
#test_x_1.shape[0]
1719
#test_x_2.shape[0]
1719
#test_y.shape[0]
1719
#test_x_1
array([[[0.6243922 ],
[0.5463666 ],
[0.7083546 ], ... etc ...
Any help would be greatly appreciated- thanks in advance!
Full error trace is as below:
Traceback (most recent call last):
File "filepath.py", line 220, in <module>
history = model.fit([train_x_1,train_x_2], train_y, epochs=100, batch_size=72, validation_data=([test_x_1,test_x_2], test_y), verbose=2, shuffle=False)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 728, in fit
use_multiprocessing=use_multiprocessing)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 224, in fit
distribution_strategy=strategy)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 547, in _process_training_inputs
use_multiprocessing=use_multiprocessing)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 606, in _process_inputs
use_multiprocessing=use_multiprocessing)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/data_adapter.py", line 217, in __init__
x = _process_numpy_inputs(x)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/data_adapter.py", line 703, in _process_numpy_inputs
inputs = nest.map_structure(_convert_non_tensor, inputs)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow_core/python/util/nest.py", line 535, in map_structure
structure[0], [func(*x) for x in entries],
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow_core/python/util/nest.py", line 535, in <listcomp>
structure[0], [func(*x) for x in entries],
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/data_adapter.py", line 700, in _convert_non_tensor
return ops.convert_to_tensor(x)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 1184, in convert_to_tensor
return convert_to_tensor_v2(value, dtype, preferred_dtype, name)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 1242, in convert_to_tensor_v2
as_ref=False)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 1296, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow_core/python/framework/tensor_conversion_registry.py", line 52, in _default_conversion_function
return constant_op.constant(value, dtype, name=name)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow_core/python/framework/constant_op.py", line 227, in constant
allow_broadcast=True)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow_core/python/framework/constant_op.py", line 235, in _constant_impl
t = convert_to_eager_tensor(value, ctx, dtype)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow_core/python/framework/constant_op.py", line 96, in convert_to_eager_tensor
return ops.EagerTensor(value, ctx.device_name, dtype)
ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type float).
Specifying the solution in the answer section (even though it is present in the comments section), for the benefit of the community.
The values of x_2 were all of type float, whilst the x_1 values were float32.
Converting x_2 to float32 using x_2.astype('float32') resolved the issue.
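A minimal sketch of that check and fix, using the question's variable names (the exact inferred dtype may differ):

# Inspect the dtypes Keras will try to convert.
print(train_x_1.dtype, train_x_2.dtype)  # e.g. float32 vs float64/object

# Cast the offending arrays so everything reaches model.fit as float32.
train_x_2 = train_x_2.astype('float32')
test_x_2 = test_x_2.astype('float32')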

ValueError: A `Concatenate` layer should be called on a list of at least 2 inputs

I'm trying to use a sigmoid to join the outputs of two models with different embedding matrices, but I keep getting the error at the concatenate line. I have tried other suggestions from similar questions, but it keeps giving the same error. I feel I'm missing something but I can't find it. Please help explain. Thanks
############################ MODEL 1 ######################################
input_tensor=Input(shape=(35,))
input_layer= Embedding(vocab_size, 300, input_length=35, weights=[embedding_matrix],trainable=True)(input_tensor)
conv_blocks = []
filter_sizes = (2,3,4)
for fx in filter_sizes:
    conv_layer = Conv1D(100, kernel_size=fx, activation='relu', data_format='channels_first')(input_layer) #filters=100, kernel_size=3
    maxpool_layer = MaxPooling1D(pool_size=4)(conv_layer)
    flat_layer = Flatten()(maxpool_layer)
    conv_blocks.append(flat_layer)
conc_layer=concatenate(conv_blocks, axis=1)
graph = Model(inputs=input_tensor, outputs=conc_layer)
model = Sequential()
model.add(graph)
model.add(Dropout(0.2))
############################ MODEL 2 ######################################
input_tensor_1=Input(shape=(35,))
input_layer_1= Embedding(vocab_size, 300, input_length=35, weights=[embedding_matrix_1],trainable=True)(input_tensor_1)
conv_blocks_1 = []
filter_sizes_1 = (2,3,4)
for fx in filter_sizes_1:
    conv_layer_1 = Conv1D(100, kernel_size=fx, activation='relu', data_format='channels_first')(input_layer_1) #filters=100, kernel_size=3
    maxpool_layer_1 = MaxPooling1D(pool_size=4)(conv_layer_1)
    flat_layer_1 = Flatten()(maxpool_layer_1)
    conv_blocks_1.append(flat_layer_1)
conc_layer_1=concatenate(conv_blocks_1, axis=1)
graph_1 = Model(inputs=input_tensor_1, outputs=conc_layer_1)
model_1 = Sequential()
model_1.add(graph_1)
model_1.add(Dropout(0.2))
fused = concatenate([graph, graph_1], axis=-1)
prediction = Dense(3, activation='sigmoid')(fused)
model = Model(inputs=[input_tensor,input_tensor_1], outputs=[prediction])
model.compile(loss='sparse_categorical_crossentropy',optimizer='Adagrad', metrics=['accuracy'])
model.summary()
This is the error trace
Traceback (most recent call last):
File "DL_Ensemble.py", line 145, in <module>
fused = concatenate([graph, graph_1], axis= 1 )
File "/usr/pkg/lib/python3.8/site-packages/tensorflow_core/python/keras/layers/merge.py", line 705, in concatenate
return Concatenate(axis=axis, **kwargs)(inputs)
File "/usr/pkg/lib/python3.8/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 887, in __call__
self._maybe_build(inputs)
File "/usr/pkg/lib/python3.8/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 2141, in _maybe_build
self.build(input_shapes)
File "/usr/pkg/lib/python3.8/site-packages/tensorflow_core/python/keras/utils/tf_utils.py", line 306, in wrapper
output_shape = fn(instance, input_shape)
File "/usr/pkg/lib/python3.8/site-packages/tensorflow_core/python/keras/layers/merge.py", line 378, in build
raise ValueError('A `Concatenate` layer should be called '
ValueError: A `Concatenate` layer should be called on a list of at least 2 inputs
UPDATE: I have applied the answer given by @VivekMehta; however, I now have this error.
File "DL_Ensemble.py", line 165, in <module>
model.fit([train_sequences,train_sequences], train_y, epochs=10,
verbose=False, batch_size=32, class_weight={0: 6.0, 1: 1.0, 2: 2.0})
File "/usr/pkg/lib/python3.8/site-
packages/tensorflow_core/python/keras/engine/training.py", line 709, in fit
return func.fit(
File "/usr/pkg/lib/python3.8/site-
packages/tensorflow_core/python/keras/engine/training_v2.py", line 313, in fit
training_result = run_one_epoch(
File "/usr/pkg/lib/python3.8/site-
packages/tensorflow_core/python/keras/engine/training_v2.py", line 123, in run_one_epoch
batch_outs = execution_function(iterator)
File "/usr/pkg/lib/python3.8/site-
packages/tensorflow_core/python/keras/engine/training_v2_utils.py",
line
86, in execution_function
distributed_function(input_fn))
File "/usr/pkg/lib/python3.8/site-
packages/tensorflow_core/python/eager/def_function.py", line 457, in __call__
result = self._call(*args, **kwds)
File "/usr/pkg/lib/python3.8/site-
packages/tensorflow_core/python/eager/def_function.py", line 520, in _call
return self._stateless_fn(*args, **kwds)
File "/usr/pkg/lib/python3.8/site-
packages/tensorflow_core/python/eager/function.py", line 1823, in __call__
return graph_function._filtered_call(args, kwargs) # pylint:
disable=protected-access
File "/usr/pkg/lib/python3.8/site-
packages/tensorflow_core/python/eager/function.py", line 1137, in _filtered_call
return self._call_flat(
File "/usr/pkg/lib/python3.8/site-
packages/tensorflow_core/python/eager/function.py", line 1223, in _call_flat
flat_outputs = forward_function.call(
File "/usr/pkg/lib/python3.8/site-
packages/tensorflow_core/python/eager/function.py", line 506, in call
outputs = execute.execute(
File "/usr/pkg/lib/python3.8/site-
packages/tensorflow_core/python/eager/execute.py", line 67, in quick_execute
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError:
Conv2DCustomBackpropInputOp only supports NHWC.
[[node Conv2DBackpropInput (defined at /usr/pkg/lib/python3.8/site-
packages/tensorflow_core/python/framework/ops.py:1751) ]] [Op:__inference_distributed_function_2250]
Function call stack:
distributed_function
I also wanted to add that when the code is run on a GPU, as opposed to a CPU, the error occurs on the same line as before, but the message changes to:
File "DL_Ensemble.py", line 166, in <module>
model.fit([train_sequences,train_sequences], train_y, epochs=10, verbose=False, batch_size=32, class_weight={0: 6.0, 1: 1.0, 2: 2.0})
File "/home/kosimadukwe/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 880, in fit
validation_steps=validation_steps)
File "/home/kosimadukwe/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training_arrays.py", line 329, in model_iteration
batch_outs = f(ins_batch)
File "/home/kosimadukwe/.local/lib/python3.7/site-packages/tensorflow/python/keras/backend.py", line 3073, in __call__
self._make_callable(feed_arrays, feed_symbols, symbol_vals, session)
File "/home/kosimadukwe/.local/lib/python3.7/site-packages/tensorflow/python/keras/backend.py", line 3019, in _make_callable
callable_fn = session._make_callable_from_options(callable_opts)
File "/home/kosimadukwe/.local/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1471, in _make_callable_from_options
return BaseSession._Callable(self, callable_options)
File "/home/kosimadukwe/.local/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1425, in __init__
session._session, options_ptr, status)
File "/home/kosimadukwe/.local/lib/python3.7/site-packages/tensorflow/python/framework/errors_impl.py", line 528, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Conv2DCustomBackpropInputOp only supports NHWC.
[[{{node training/Adagrad/gradients/conv1d_5/conv1d/Conv2D_grad/Conv2DBackpropInput}}]]
Exception ignored in: <function BaseSession._Callable.__del__ at 0x7fe4dd06a730>
Traceback (most recent call last):
File "/home/kosimadukwe/.local/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1455, in __del__
self._session._session, self._handle, status)
File "/home/kosimadukwe/.local/lib/python3.7/site-packages/tensorflow/python/framework/errors_impl.py", line 528, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: No such callable handle: 94697914208640
So from your stack trace, the code is throwing the error at:
fused = concatenate([graph, graph_1], axis= 1 )
print(type(graph))
# output: <class 'tensorflow.python.keras.engine.training.Model'>
This error occurs because concatenate expects a list of tensors to concatenate, while you are passing graph and graph_1, which are not tensors but Model instances.
So from your code I assume you want to concatenate the outputs of these two models. In that case you'll have to change the above line to:
fused = concatenate([graph.outputs[0], graph_1.outputs[0]], axis=-1)
Here, graph.outputs gives the list of output tensors of the Model. Since each model gives us one output, we take the 0th index of each.
Change this part and you'll get the model summary you are expecting.
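A minimal, self-contained sketch of the pattern (toy layer sizes, not the question's architecture):

from tensorflow.keras.layers import Input, Dense, concatenate
from tensorflow.keras.models import Model

in_a = Input(shape=(35,))
in_b = Input(shape=(35,))
graph = Model(in_a, Dense(16, activation='relu')(in_a))
graph_1 = Model(in_b, Dense(16, activation='relu')(in_b))

# concatenate wants tensors, so take each Model's output tensor:
fused = concatenate([graph.outputs[0], graph_1.outputs[0]], axis=-1)
prediction = Dense(3, activation='sigmoid')(fused)
model = Model(inputs=[in_a, in_b], outputs=prediction)
model.summary()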

Keras | Can't define Model - ValueError: Invalid reduction dimension 2 for input with 2 dimensions

When I try to create this audio-generating RNN, I get a weird error concerning my input, and I don't really know how to interpret it.
I'm creating two input tensors, noise and label, with dims (100,) and (1,). Then I embed the labels, build the proper model input from them, and return the finished Model with input and output.
The error says that it's an invalid reduction over dimension 2 for an input with 2 dimensions, and that the input would have shapes [?,100], [2], which is "not" the case?
Thanks in advance!
Code:
call: build_audio_generator(100, 1)
def build_audio_generator(latent_dim, num_classes):
    model = Sequential()
    model.add(LSTM(512, input_dim=latent_dim, return_sequences=True))
    model.add(Dropout(0.3))
    model.add(LSTM(512, return_sequences=True))
    model.add(Dropout(0.3))
    model.add(LSTM(512))
    model.add(Dense(256))
    model.add(Dropout(0.3))
    model.add(Dense(num_classes))
    model.add(Activation('softmax'))
    model.summary()

    noise = Input(shape=(latent_dim,))
    label = Input(shape=(1,), dtype='int32')
    label_embedding = Flatten()(Embedding(num_classes, 100)(label))
    model_input = multiply([noise, label_embedding])
    sound = model(model_input)

    return Model([noise, label], sound)
Error:
Traceback (most recent call last):
File "C:\Users\MrGrimod\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\common_shapes.py", line 686, in _call_cpp_shape_fn_impl
input_tensors_as_shapes, status)
File "C:\Users\MrGrimod\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 473, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Invalid reduction dimension 2 for input with 2 dimensions. for 'sequential_3/lstm_1/Sum' (op: 'Sum') with input shapes: [?,100], [2] and with comput
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "main.py", line 94, in <module>
main()
File "main.py", line 67, in main
audio_generator = build_audio_generator(latent_dim, num_classes)
File "C:\Users\MrGrimod\Desktop\gan-audio-generator\model.py", line 70, in build_audio_generator
sound = model(model_input)
File "C:\Users\MrGrimod\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\topology.py", line 603, in __call__
output = self.call(inputs, **kwargs)
File "C:\Users\MrGrimod\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\models.py", line 546, in call
return self.model.call(inputs, mask)
File "C:\Users\MrGrimod\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\topology.py", line 2061, in call
output_tensors, _, _ = self.run_internal_graph(inputs, masks)
File "C:\Users\MrGrimod\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\topology.py", line 2212, in run_internal_graph
output_tensors = _to_list(layer.call(computed_tensor, **kwargs))
File "C:\Users\MrGrimod\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\layers\recurrent.py", line 2023, in call
initial_state=initial_state)
File "C:\Users\MrGrimod\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\layers\recurrent.py", line 540, in call
initial_state = self.get_initial_state(inputs)
File "C:\Users\MrGrimod\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\layers\recurrent.py", line 469, in get_initial_state
initial_state = K.sum(initial_state, axis=(1, 2)) # (samples,)
File "C:\Users\MrGrimod\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\backend\tensorflow_backend.py", line 1242, in sum
return tf.reduce_sum(x, axis=axis, keep_dims=keepdims)
File "C:\Users\MrGrimod\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\math_ops.py", line 1307, in reduce_sum
name=name)
File "C:\Users\MrGrimod\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 4681, in _sum
keep_dims=keep_dims, name=name)
File "C:\Users\MrGrimod\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "C:\Users\MrGrimod\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 2958, in create_op
set_shapes_for_outputs(ret)
File "C:\Users\MrGrimod\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 2209, in set_shapes_for_outputs
shapes = shape_func(op)
File "C:\Users\MrGrimod\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 2159, in call_with_requiring
return call_cpp_shape_fn(op, require_shape_fn=True)
File "C:\Users\MrGrimod\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\common_shapes.py", line 627, in call_cpp_shape_fn
require_shape_fn)
File "C:\Users\MrGrimod\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\common_shapes.py", line 691, in _call_cpp_shape_fn_impl
raise ValueError(err.message)
ValueError: Invalid reduction dimension 2 for input with 2 dimensions. for 'sequential_3/lstm_1/Sum' (op: 'Sum') with input shapes: [?,100], [2] and with computed input tensors: input[1] = <1 2>.
As you know, LSTM input should have ndim=3, which in your case would be (None, None, 100). I'm not sure about your input, but your noise and model_input are 2-D with shape (None, 100) based on the code. This doesn't match the requirement and thus triggers the error. Maybe you want to reshape your input as noise = Input(shape=(None, latent_dim))?
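A minimal sketch of the two usual fixes, either declaring a 3-D input directly or tiling the 2-D noise across timesteps with RepeatVector (the timestep count here is an arbitrary assumption):

from keras.layers import Input, RepeatVector, LSTM

latent_dim, timesteps = 100, 4

# Option 1: a 3-D input from the start -- (batch, timesteps, features).
noise_3d = Input(shape=(None, latent_dim))

# Option 2: keep the 2-D noise and repeat it across timesteps.
noise_2d = Input(shape=(latent_dim,))        # (batch, 100)
seq = RepeatVector(timesteps)(noise_2d)      # (batch, 4, 100)
out = LSTM(512, return_sequences=True)(seq)  # LSTM now receives ndim=3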

Python / Tensorflow - Input to reshape is a tensor with 92416 values, but the requested shape requires a multiple of 2304

I have the following code portion for a convolutional neural network:
import numpy as np
import matplotlib.pyplot as plt
import cifar_tools
import tensorflow as tf
data, labels = cifar_tools.read_data('C:\\Users\\abc\\Desktop\\temp')
x = tf.placeholder(tf.float32, [None, 150 * 150])
y = tf.placeholder(tf.float32, [None, 2])
w1 = tf.Variable(tf.random_normal([5, 5, 1, 64]))
b1 = tf.Variable(tf.random_normal([64]))
w2 = tf.Variable(tf.random_normal([5, 5, 64, 64]))
b2 = tf.Variable(tf.random_normal([64]))
w3 = tf.Variable(tf.random_normal([6*6*64, 1024]))
b3 = tf.Variable(tf.random_normal([1024]))
w_out = tf.Variable(tf.random_normal([1024, 2]))
b_out = tf.Variable(tf.random_normal([2]))
def conv_layer(x, w, b):
    conv = tf.nn.conv2d(x, w, strides=[1,1,1,1], padding='SAME')
    conv_with_b = tf.nn.bias_add(conv, b)
    conv_out = tf.nn.relu(conv_with_b)
    return conv_out

def maxpool_layer(conv, k=2):
    return tf.nn.max_pool(conv, ksize=[1,k,k,1], strides=[1,k,k,1], padding='SAME')

def model():
    x_reshaped = tf.reshape(x, shape=[-1,150,150,1])

    conv_out1 = conv_layer(x_reshaped, w1, b1)
    maxpool_out1 = maxpool_layer(conv_out1)
    norm1 = tf.nn.lrn(maxpool_out1, 4, bias=1.0, alpha=0.001/9.0, beta=0.75)
    conv_out2 = conv_layer(norm1, w2, b2)
    maxpool_out2 = maxpool_layer(conv_out2)
    norm2 = tf.nn.lrn(maxpool_out2, 4, bias=1.0, alpha=0.001/9.0, beta=0.75)

    maxpool_reshaped = tf.reshape(maxpool_out2, [-1, w3.get_shape().as_list()[0]])
    local = tf.add(tf.matmul(maxpool_reshaped, w3), b3)
    local_out = tf.nn.relu(local)

    out = tf.add(tf.matmul(local_out, w_out), b_out)
    return out
model_op = model()
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(model_op, y))
train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)
correct_pred = tf.equal(tf.argmax(model_op, 1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred,tf.float32))
I'm reading 150x150 grayscale images, but I couldn't understand the following error I'm getting:
EPOCH 0
Traceback (most recent call last):
File "C:\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1021, in _do_call
return fn(*args)
File "C:\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1003, in _run_fn
status, run_metadata)
File "C:\Python35\lib\contextlib.py", line 66, in __exit__
next(self.gen)
File "C:\Python35\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 469, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input to reshape is a tensor with 92416 values, but the requested shape requires a multiple of 2304
[[Node: Reshape_1 = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/cpu:0"](MaxPool_1, Reshape_1/shape)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "cnn.py", line 70, in <module>
_, accuracy_val = sess.run([train_op, accuracy], feed_dict={x: batch_data, y: batch_onehot_vals})
File "C:\Python35\lib\site-packages\tensorflow\python\client\session.py", line 766, in run
run_metadata_ptr)
File "C:\Python35\lib\site-packages\tensorflow\python\client\session.py", line 964, in _run
feed_dict_string, options, run_metadata)
File "C:\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1014, in _do_run
target_list, options, run_metadata)
File "C:\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1034, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input to reshape is a tensor with 92416 values, but the requested shape requires a multiple of 2304
[[Node: Reshape_1 = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/cpu:0"](MaxPool_1, Reshape_1/shape)]]
Caused by op 'Reshape_1', defined at:
File "cnn.py", line 50, in <module>
model_op = model()
File "cnn.py", line 43, in model
maxpool_reshaped = tf.reshape(maxpool_out2, [-1,w3.get_shape().as_list()[0]])
File "C:\Python35\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 2448, in reshape
name=name)
File "C:\Python35\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 759, in apply_op
op_def=op_def)
File "C:\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 2240, in create_op
original_op=self._default_original_op, op_def=op_def)
File "C:\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 1128, in __init__
self._traceback = _extract_stack()
InvalidArgumentError (see above for traceback): Input to reshape is a tensor with 92416 values, but the requested shape requires a multiple of 2304
[[Node: Reshape_1 = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/cpu:0"](MaxPool_1, Reshape_1/shape)]]
EDIT-1
I got this new error after modifying the code based on those edits:
x_reshaped = tf.reshape(x, shape=[-1,150,150,1])
batch_size = x_reshaped.get_shape().as_list()[0]
... Same code as above ...
maxpool_reshaped = tf.reshape(maxpool_out2, [batch_size, -1])
Error:
Traceback (most recent call last):
File "cnn.py", line 52, in <module>
model_op = model()
File "cnn.py", line 45, in model
maxpool_reshaped = tf.reshape(maxpool_out2, [batch_size, -1])
File "C:\Python35\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 2448, in reshape
name=name)
File "C:\Python35\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 493, in apply_op
raise err
File "C:\Python35\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 490, in apply_op
preferred_dtype=default_dtype)
File "C:\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 669, in convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "C:\Python35\lib\site-packages\tensorflow\python\framework\constant_op.py", line 176, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "C:\Python35\lib\site-packages\tensorflow\python\framework\constant_op.py", line 165, in constant
tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape))
File "C:\Python35\lib\site-packages\tensorflow\python\framework\tensor_util.py", line 441, in make_tensor_proto
tensor_proto.string_val.extend([compat.as_bytes(x) for x in proto_values])
File "C:\Python35\lib\site-packages\tensorflow\python\framework\tensor_util.py", line 441, in <listcomp>
tensor_proto.string_val.extend([compat.as_bytes(x) for x in proto_values])
File "C:\Python35\lib\site-packages\tensorflow\python\util\compat.py", line 65, in as_bytes
(bytes_or_text,))
TypeError: Expected binary or unicode string, got None
EDIT-2
After making the following edits (in addition to removing batch_size):
w3 = tf.Variable(tf.random_normal([361, 256]))
...
...
w_out = tf.Variable(tf.random_normal([256, 2]))
I'm having the following error:
EPOCH 0
W c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\framework\op_kernel.cc:975] Invalid argument: logits and labels must be same size: logits_size=[256,2] labels_size=[1,2]
[[Node: SoftmaxCrossEntropyWithLogits = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](Reshape_2, Reshape_3)]]
Traceback (most recent call last):
File "C:\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1021, in _do_call
return fn(*args)
File "C:\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1003, in _run_fn
status, run_metadata)
File "C:\Python35\lib\contextlib.py", line 66, in __exit__
next(self.gen)
File "C:\Python35\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 469, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: logits and labels must be same size: logits_size=[256,2] labels_size=[1,2]
[[Node: SoftmaxCrossEntropyWithLogits = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](Reshape_2, Reshape_3)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "cnn.py", line 73, in <module>
_, accuracy_val = sess.run([train_op, accuracy], feed_dict={x: batch_data, y: batch_onehot_vals})
File "C:\Python35\lib\site-packages\tensorflow\python\client\session.py", line 766, in run
run_metadata_ptr)
File "C:\Python35\lib\site-packages\tensorflow\python\client\session.py", line 964, in _run
feed_dict_string, options, run_metadata)
File "C:\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1014, in _do_run
target_list, options, run_metadata)
File "C:\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1034, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: logits and labels must be same size: logits_size=[256,2] labels_size=[1,2]
[[Node: SoftmaxCrossEntropyWithLogits = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](Reshape_2, Reshape_3)]]
Caused by op 'SoftmaxCrossEntropyWithLogits', defined at:
File "cnn.py", line 55, in <module>
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(model_op, y))
File "C:\Python35\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 1449, in softmax_cross_entropy_with_logits
precise_logits, labels, name=name)
File "C:\Python35\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 2265, in _softmax_cross_entropy_with_logits
features=features, labels=labels, name=name)
File "C:\Python35\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 759, in apply_op
op_def=op_def)
File "C:\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 2240, in create_op
original_op=self._default_original_op, op_def=op_def)
File "C:\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 1128, in __init__
self._traceback = _extract_stack()
InvalidArgumentError (see above for traceback): logits and labels must be same size: logits_size=[256,2] labels_size=[1,2]
[[Node: SoftmaxCrossEntropyWithLogits = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](Reshape_2, Reshape_3)]]
EDIT-3
This is what the binary (pickled) file looks like, as [label, filename, data]:
[array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1]), array(['1.jpg', '10.jpg', '2.jpg', '3.jpg', '4.jpg', '5.jpg', '6.jpg',
'7.jpg', '8.jpg', '9.jpg'],
dtype='<U6'), array([[142, 138, 134, ..., 128, 125, 122],
[151, 151, 149, ..., 162, 159, 157],
[120, 121, 122, ..., 132, 128, 122],
...,
[179, 175, 177, ..., 207, 205, 203],
[126, 129, 130, ..., 134, 130, 134],
[165, 170, 175, ..., 193, 193, 187]])]
How can I solve this issue?
Let's come to your original error:
Input to reshape is a tensor with 92416 values, but the requested shape requires a multiple of 2304
This is because you adapted your code from code written for an original input image size of 24*24: after two convolution and two max-pooling layers, a 24*24 input yields a tensor of shape [-1, 6, 6, 64] (and 6*6*64 = 2304). However, since your input image shape is 150*150, the intermediate shape becomes [-1, 38, 38, 64].
Try changing w3:
w3 = tf.Variable(tf.random_normal([38*38*64, 1024]))
You should always keep an eye on your tensor shape flow.
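One way to keep an eye on it here is simply to print each tensor's static shape while building model(); the commented values below follow from 150*150 inputs with SAME padding and k=2 pooling:

# Inside model(), after each op:
print(conv_out1.get_shape())     # (?, 150, 150, 64)
print(maxpool_out1.get_shape())  # (?, 75, 75, 64)
print(conv_out2.get_shape())     # (?, 75, 75, 64)
print(maxpool_out2.get_shape())  # (?, 38, 38, 64) -> 38*38*64 = 92416 values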
The error is happening here:
maxpool_reshaped = tf.reshape(maxpool_out2, [-1,w3.get_shape().as_list()[0]])
As it states: Input to reshape is a tensor with 92416 values, but the requested shape requires a multiple of 2304
Meaning
w3.get_shape().as_list()[0] = 2304
and
maxpool_out2 has 92416 values
but 92416 / 2304 leaves a fractional remainder, so TensorFlow can't infer an integer size for the "-1" dimension.
So you need to recheck the shape of w3 against what you expect it to be.
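A quick NumPy illustration of why the -1 cannot be inferred here:

import numpy as np

values = np.zeros(92416)                   # what one image's maxpool_out2 flattens to
print(92416 / 2304)                        # 40.11... -- not an integer
print(values.reshape(-1, 38*38*64).shape)  # (1, 92416): divides evenly
# values.reshape(-1, 2304)                 # would raise: not a multiple of 2304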
Update: an alternative suggestion:
x_reshaped = tf.reshape(x, shape=[-1,150,150,1])
batch_size = x_reshaped.get_shape().as_list()[0]
... Same code as above ...
maxpool_reshaped = tf.reshape(maxpool_out2, [batch_size, -1])
I faced the same issue. I tried printing each tensor layer of the CNN for a given 300*200 input image:
Tensor("add_35:0", shape=(?, 300, 200, 16), dtype=float32)
Tensor("MaxPool_21:0", shape=(?, 100, 150, 16), dtype=float32)
Tensor("MaxPool_22:0", shape=(?, 75, 50, 32), dtype=float32)
Tensor("MaxPool_23:0", shape=(?, 38, 25, 64), dtype=float32)
Each max-pooling layer halves the spatial dimensions. In the fully connected layer we can therefore use 38*25*64 (the output of the previous layer):
'w_fc_layer' : tf.Variable(tf.random_normal([38*25*64, 1024]))
