Python Array Reshaping Issue to Array with Shape (None, 192)

I have this error and I'm not sure how to reshape an array when one of the dimensions is None.
Exception: Error when checking : expected input_1 to have shape (None, 192) but got array with shape (192, 1)
How do I reshape an array to (None, 192)?
I have the array accuracy with shape (12, 16), and accuracy.reshape(-1) gives (192,). However, this is not (None, 192).

In keras/keras/engine/training.py:
def standardize_input_data(data, names, shapes=None,
                           check_batch_dim=True,
                           exception_prefix=''):
    ...
    # check shapes compatibility
    if shapes:
        for i in range(len(names)):
            ...
            for j, (dim, ref_dim) in enumerate(zip(array.shape, shapes[i])):
                if not j and not check_batch_dim:
                    # skip the first axis
                    continue
                if ref_dim:
                    if ref_dim != dim:
                        raise Exception('Error when checking ' + exception_prefix +
                                        ': expected ' + names[i] +
                                        ' to have shape ' + str(shapes[i]) +
                                        ' but got array with shape ' +
                                        str(array.shape))
Comparing that with the error
Error when checking : expected input_1 to have shape (None, 192) but got array with shape (192, 1)
So it is comparing (None, 192) with (192, 1), and skipping the 1st axis; that is, comparing 192 and 1. If the array had shape (n, 192) it would probably pass.
So basically, whatever is generating the (192, 1) shape, as opposed to (1, 192) or a broadcastable (192,), is causing the error.
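For illustration, a minimal sketch of a shape that passes the check (the (12, 16) array stands in for the accuracy array from the question): since None is the batch axis, reshaping to (1, 192), i.e. one sample of 192 features, satisfies (None, 192).
import numpy as np

accuracy = np.zeros((12, 16))    # stand-in for the accuracy array from the question
batch = accuracy.reshape(1, -1)  # (1, 192): one sample of 192 features
print(batch.shape)               # the batch axis (here 1) matches the None dimension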
I'm adding keras to the tags on the guess that this is the problem module.
Searching other keras tagged SO questions:
Exception: Error when checking model target: expected dense_3 to have shape (None, 1000) but got array with shape (32, 2)
Error: Error when checking model input: expected dense_input_6 to have shape (None, 784) but got array with shape (784L, 1L)
Dimensions not matching in keras LSTM model
Getting shape dimension errors with a simple regression using Keras
Deep autoencoder in Keras converting one dimension to another i
I don't know enough about keras to understand the answers, but there's more to it than simply reshaping your input array.

Related

KAGGLE BERT : ValueError: Shapes (None, 2) and (None, 3) are incompatible

I'm trying to follow a Kaggle notebook for the BERT model: https://www.kaggle.com/code/ludovicocuoghi/twitter-sentiment-analysis-with-bert-roberta/notebook
At a step near the end I get this error:
ValueError: Shapes (None, 2) and (None, 3) are incompatible
The cell:
history_bert = model.fit([train_input_ids, train_attention_masks], y_train,
                         validation_data=([val_input_ids, val_attention_masks], y_valid),
                         epochs=4, batch_size=32)
I tried many things but I didn't find a solution. If someone can enlighten me, it would be helpful!
The shapes of the different variables:
train_input_ids SHAPE: (640740, 128)
train_attention_masks SHAPE: (640740, 128)
y_train SHAPE: (640740, 2)
val_input_ids SHAPE: (71194, 128)
val_attention_masks SHAPE: (71194, 128)
y_valid SHAPE: (71194, 2)
In my dataset, Sentiment has only 2 values (0/1); I don't know if this has an impact on the error.
I switched the variables' places in the fit function.
I searched for similar errors on different forums to find a solution.
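A hedged guess at the cause, for illustration only (the notebook's model definition is not shown here, so the 3-unit output layer and the input size below are assumptions): the error suggests the model's classification head outputs 3 classes while y_train is one-hot encoded with 2 columns, so the head's width would need to match the label width.
import tensorflow as tf

# Hypothetical sketch, not the notebook's actual code: the labels have shape
# (None, 2), so the classification head must also output 2 units.
num_classes = 2                                  # y_train.shape[1] in the question
features = tf.keras.Input(shape=(768,))          # assumed pooled BERT output size
logits = tf.keras.layers.Dense(num_classes, activation='softmax')(features)
head = tf.keras.Model(features, logits)          # output shape (None, 2), matching y_train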

Keras Can't Add similar shapes together

I am trying to build an Inception model as described here:
https://towardsdatascience.com/deep-learning-for-time-series-classification-inceptiontime-245703f422db
It all works so far, but when I try to implement the shortcut layer and add the two tensors together, I get an error.
Here is my shortcut code:
def shortcut_layer(inputs, z_interception):
    print(inputs.shape)
    inputs = keras.layers.Conv1D(filters=int(z_interception.shape[-1]), kernel_size=1,
                                 padding='same', use_bias=False)(inputs)
    print(z_interception.shape[-1])
    print(inputs.shape, z_interception.shape)
    inputs = keras.layers.BatchNormalization()(inputs)
    z = keras.layers.Add()([inputs, z_interception])
    print('zshape: ', z.shape)
    return keras.layers.Activation('relu')(z)
The output is as follows:
(None, 160, 8)
128
(None, 160, 128) (None, 160, 128)
The output is exactly as I expect it to be, but I still get the error:
ValueError: Operands could not be broadcast together with shapes (160, 128) (160, 8)
which doesn't make sense to me as I try to add the two tensors with shape: (None, 160, 128)
I hope someone can help me with this. Thank you in advance.

ValueError: Shape must be rank 3 but is rank 2. A `Concatenate` layer requires inputs with matching shapes except for the concat

I am trying to use Tensorflow Functional API to define a multi input neural network.
This is my code:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Input, Embedding, concatenate
from keras_self_attention import SeqSelfAttention

Input1 = Input(shape=(120,), name="Input1")
Input2 = Input(shape=(10,), name="Input2")
embedding_layer = Embedding(30, 5, input_length=120)(Input1)
lstm_layer = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(units=512))(embedding_layer)
attention = SeqSelfAttention(attention_activation='sigmoid')(lstm_layer)
merge = concatenate([attention, Input2])
However, I get the following error:
ValueError: A `Concatenate` layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, None, 1024), (None, 10)].
If I change shape of Input2 to (None,10, ), then I get this error:
ValueError: Shape must be rank 3 but is rank 2 for '{{node model/concatenate/concat}} = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32](model/dense/BiasAdd, model/Cast_1, model/concatenate/concat/axis)' with input shapes: [?,?,1024], [?,10], [].
and if I change shape of Input2 to (1,10, ), then I get this error:
ValueError: Shape must be rank 3 but is rank 2 for '{{node model/concatenate/concat}} = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32](model/dense/BiasAdd, model/Cast_1, model/concatenate/concat/axis)' with input shapes: [?,?,1024], [?,10], [].
How can I reshape output of attention layer from (None, None, 1024) to something which I can concatenate with (None, 10)?
The inputs to the concatenate layer do not have matching ranks. You can add a reshape layer in front of the concatenate layer to alleviate this issue.
[(None, None, 1024), (None, 10)].
Here one is rank three and one is rank two. Reshape the first input to (None, 1024) or reshape the second input to (None, 1, 10), whichever suits your need.
reshape_layer = tf.keras.layers.Reshape((1024,))(attention)
merge = concatenate([reshape_layer, Input2])
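And a minimal sketch of the other option mentioned above, lifting the second input to rank three instead (whether the subsequent concatenation lines up then depends on the chosen axis and the remaining dimensions):
# Sketch of the second option: lift Input2 from rank 2 to rank 3.
reshaped_input2 = tf.keras.layers.Reshape((1, 10))(Input2)  # (None, 10) -> (None, 1, 10)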

Any way to rescale and stack feature maps with different shapes in TensorFlow?

I intend to use the concept of skip connections in my experiment. Basically, in my pipeline, feature maps produced after Conv2D are going to be stacked or concatenated. But the feature maps have different shapes, and trying to stack them together into one tensor gave me an error. Does anyone know a possible way of doing this correctly in TensorFlow? Any thoughts or ideas to make this happen? Thanks.
Idea flowchart
Here is the pipeline flowchart of what I want to do:
My case is a little different because an extra building block is used after Conv2D, and its output is now a feature map of 15x15x64 and so on. I want to stack those feature maps into one and then feed it to Conv2D again.
My attempt:
This is my reproducible attempt:
import tensorflow as tf
from tensorflow.keras.layers import Dense, Dropout, Activation, Conv2D, Flatten, MaxPooling2D, BatchNormalization
inputs = tf.keras.Input(shape=(32, 32, 3))
x = inputs
x = Conv2D(32, (3, 3), input_shape=(32,32,3))(x)
x = BatchNormalization(axis=-1)(x)
x = Activation('relu')(x)
fm1 = MaxPooling2D(pool_size=(2,2))(x)
x = Conv2D(32,(3, 3), input_shape=(15,15,32))(fm1)
x = BatchNormalization(axis=-1)(x)
x = Activation('relu')(x)
fm2 = MaxPooling2D(pool_size=(2,2))(x)
concatted = tf.keras.layers.Concatenate(axis=1)([fm1, fm2])
But this way I ended up with the following error: ValueError: A Concatenate layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 15, 15, 32), (None, 6, 6, 32)]. I am not sure what the correct way to stack feature maps with different shapes would be. How can we make this right? Any thoughts?
Desired output
In my actual model, the shapes of the feature maps are TensorShape([None, 15, 15, 128]) and TensorShape([None, 6, 6, 128]). I need to find a way to merge or stack them into one. Ideally, the shape of the concatenated or stacked feature maps would be [None, 21, 21, 128]. Is there any way of stacking them into one? Any idea?
What you're trying to achieve doesn't work mathematically. Let me illustrate. Take the simple 1D problem (like 1D convolution). You have a (None, 64, 128) (fm1) sized output and a (None, 32, 128) (fm2) output that you want to concatenate. Then,
concatted = tf.keras.layers.Concatenate(axis=1)([fm1, fm2])
works totally fine, giving you an output of size (None, 96, 128).
Let's come to the 2D problem. Now you got two tensors (None, 15, 15, 128) and (None, 6, 6, 128) and want to end up with a (None, 21, 21, 128) sized output. Well the math doesn't work here. To understand why, reduce this to 1D format. Then you got
fm1 -> (None, 225, 128)
fm2 -> (None, 36, 128)
By concat you get,
concatted -> (None, 261, 128)
For the math to work out you would need (None, 441, 128), which is reshapeable to (None, 21, 21, 128). So this cannot be achieved unless you pad the smaller tensor with 441 - 261 = 180 extra positions after reshaping, and then reshape the result to the desired shape. The following is an example of how you can do it:
from tensorflow.keras import backend as K

concatted = tf.keras.layers.Lambda(
    lambda x: K.reshape(
        K.concatenate(
            [K.reshape(x[0], (-1, 225, 128)),
             tf.pad(
                 K.reshape(x[1], (-1, 36, 128)), [(0, 0), (0, 180), (0, 0)]
             )],
            axis=1
        ),
        (-1, 21, 21, 128)
    )
)([fm1, fm2])
Important: I can't guarantee the performance of your model; this just solves your problem mathematically. From a machine learning perspective, I wouldn't advise it. The best approach would be making sure the outputs are compatible in size for concatenation. A few ways would be:
Not reducing the size of the convolution outputs (strides=1 and padding='same')
Using a transposed convolution to upsample the smaller one
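As a rough sketch of the second suggestion (the kernel size and strides below are assumptions chosen only so that 6x6 maps up to 15x15, not code from the answer): upsample the smaller feature map with a transposed convolution so both tensors share the same spatial size, then concatenate along the channel axis.
import tensorflow as tf

# Stand-ins for the two feature maps from the question's actual model.
fm1 = tf.keras.Input(shape=(15, 15, 128))
fm2 = tf.keras.Input(shape=(6, 6, 128))

# With 'valid' padding, output size = (6 - 1) * 2 + 5 = 15, so this maps
# (None, 6, 6, 128) to (None, 15, 15, 128).
fm2_up = tf.keras.layers.Conv2DTranspose(
    filters=128, kernel_size=5, strides=2, padding='valid')(fm2)

# With matching spatial sizes, concatenating along the channel axis works.
merged = tf.keras.layers.Concatenate(axis=-1)([fm1, fm2_up])  # (None, 15, 15, 256)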

Keras Error when checking input

I want to explore an intermediate layer of a TensorFlow model defined with Keras:
from keras.layers import Input, Dense
from keras.models import Model
from keras import regularizers

input_dim = 30
# encoding_dim is defined elsewhere in the original code
input_layer = Input(shape=(input_dim, ))
encoder = Dense(encoding_dim, activation="tanh",
                activity_regularizer=regularizers.l1(10e-5))(input_layer)
encoder = Dense(int(encoding_dim / 2), activation="relu")(encoder)
decoder = Dense(int(encoding_dim / 2), activation='tanh')(encoder)
decoder = Dense(input_dim, activation='relu')(decoder)
autoencoder = Model(inputs=input_layer, outputs=decoder)

#### TRAINING....

# inspect layer 1
intermediate_layer_model = Model(inputs=autoencoder.layers[0].input,
                                 outputs=autoencoder.layers[1].output)
xtest = ...  # array of dim (30,)
intermediate_output = intermediate_layer_model.predict(xtest)
print(intermediate_output)
However, I get a dimension error when I inspect it:
/usr/local/lib/python2.7/site-packages/keras/engine/training_utils.pyc in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
134 ': expected ' + names[i] + ' to have shape ' +
135 str(shape) + ' but got array with shape ' +
--> 136 str(data_shape))
137 return data
138
ValueError: Error when checking input: expected input_4 to have shape (30,) but got array with shape (1,)
Any help appreciated
From the Keras docs:
shape: A shape tuple (integer), not including the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors.
When specifying the model, you do not need to provide a batch dimension; model.predict(), however, expects your array to include one.
Reshape your xtest to contain a batch dimension: xtest = np.reshape(xtest, (1, -1)) and set the batch_size argument of model.predict() to 1.
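For illustration, a minimal sketch of that fix (xtest here is a random stand-in for the (30,)-shaped array, and intermediate_layer_model is the model built in the question):
import numpy as np

xtest = np.random.rand(30)                  # stand-in for the (30,) array
xtest_batched = np.reshape(xtest, (1, -1))  # (1, 30): a batch containing one sample
intermediate_output = intermediate_layer_model.predict(xtest_batched, batch_size=1)
print(intermediate_output.shape)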
