How to concatenate 2 Keras model outputs? - python

I would like to merge two Keras models using numpy.concatenate. I created two Keras models, each with output shape (None, 1). What I want to do is create a new layer with output shape (None, 2) by concatenating these two outputs. I tried the code below but received "ValueError: zero-dimensional arrays cannot be concatenated":
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

def test():
    model = Sequential()
    model.add(Dense(1, input_shape=(1,)))
    return model

a = test()
b = test()
x = np.concatenate([a.output, b.output], axis=1)  # raises the ValueError
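Keras layer outputs such as a.output are symbolic tensors, not NumPy arrays, so numpy.concatenate cannot operate on them; joining them is done with a Keras Concatenate layer instead. A minimal sketch building on the a and b models above (the wrapping Model is illustrative):

from keras.layers import Concatenate
from keras.models import Model

merged = Concatenate(axis=1)([a.output, b.output])  # symbolic tensor, shape (None, 2)
model = Model(inputs=[a.input, b.input], outputs=merged)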

Related

How to change the input shape of a model in Keras

I have a model that I load this way:
import tensorflow as tf
from tensorflow.keras.layers import Flatten, Dense, Reshape
from tensorflow.keras.models import Model, Sequential

def YOLOv3_pretrained(n_classes=12, n_bbox=3):
    yolo3 = tf.keras.models.load_model("yolov3/yolo3.h5")
    yolo3.trainable = False
    l3 = yolo3.get_layer('leaky_re_lu_71').output
    l3_flat = tf.keras.layers.Flatten()(l3)
    out3 = tf.keras.layers.Dense(100*(4+1+n_classes))(l3_flat)
    out3 = Reshape((100, (4+1+n_classes)), input_shape=(12,))(out3)
    yolo3 = Model(inputs=yolo3.input, outputs=[out3])
    return yolo3
I want to add a Dense layer at the end of it, but since the model takes an input with shape (None, 416, 416, 3) it doesn't let me do it and returns an error:
ValueError: The last dimension of the inputs to a Dense layer should be defined. Found None. Full input shape received: (None, None)
I also tried this way with a Sequential (I want to use just the last output of yolo):
def YOLOv3_Dense(n_classes=12):
    yolo3 = tf.keras.models.load_model("yolov3/yolo3.h5")
    model = Sequential()
    model.add(yolo3)
    model.add(Flatten())
    model.add(Dense(100*(4+1+n_classes)))
    model.add(Reshape((100, (4+1+n_classes)), input_shape=(413, 413, 3)))
    return model
But it returns another error:
ValueError: All layers in a Sequential model should have a single output tensor. For multi-output layers, use the functional API.
Is there a way to add the final Dense layer?
The problem is that you are trying to reduce (flatten) an output with multiple None dimensions, which will not work if you want to use the output as input to another layer. You can try using a GlobalAveragePooling2D or GlobalMaxPooling2D instead:
import tensorflow as tf
yolo3 = tf.keras.models.load_model("yolo3.h5")
yolo3.trainable = False
l3 = yolo3.get_layer('leaky_re_lu_71').output
l3_flat = tf.keras.layers.GlobalMaxPooling2D()(l3)
out3 = tf.keras.layers.Dense(100*(4+1+12))(l3_flat)
out3 = tf.keras.layers.Reshape((100, (4+1+12)), input_shape=(12,))(out3)
yolo3 = tf.keras.Model(inputs=yolo3.input, outputs=[out3])
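Global pooling collapses the two unknown spatial dimensions into a fixed-length vector, so the Dense layer sees a defined last dimension. A quick sanity check, assuming the model builds and n_classes=12 as above:

print(yolo3.output_shape)  # expected (None, 100, 17), i.e. 100 boxes x (4+1+12) values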

ValueError with Concatenate Layer

I am trying to concatenate two sequential models. I have a model which is a concatenation of two sub-models, each of which is a concatenation of two sequential models. I have the following code, but it doesn't work with Keras 2.3.0:
model = Sequential()
sub_model1 = Sequential()
sub_model_channel1 = Sequential()
sub_model_channel2 = Sequential()
sub_model_channel1.add(Dropout(dropout_prob[0], input_shape=(channels, sequence_length,sequence_length)))
sub_model_channel2.add(Dropout(dropout_prob[0], input_shape=(channels, sequence_length,sequence_length)))
in1 = Input(shape=(channels, sequence_length,sequence_length))
in2 = Input(shape=(channels, sequence_length,sequence_length))
convs1 = model_unichannel(in1)
convs2 = model_unichannel(in2)
out1 = Concatenate()(convs1)
out2 = Concatenate()(convs2)
m1 = Model(inputs=in1, outputs=out1)
m2 = Model(inputs=in2, outputs=out2)
sub_model_channel1.add(m1)
sub_model_channel2.add(m2)
m = Concatenate()([sub_model_channel1, sub_model_channel2])
sub_model1.add(m)
model.add(sub_model1)
I am getting the following error:
ValueError: Layer concatenate_3 was called with an input that isn't a symbolic tensor. Received type: <class 'keras.engine.sequential.Sequential'>.
in the line m = Concatenate()([sub_model_channel1, sub_model_channel2]).
I have already looked at the following solutions, but nothing really solves my problem:
1) ValueError with Concatenate Layer (Keras functional API)
2) Merge 2 sequential models in Keras
I modified my code following the approach in the second link.
model = Sequential()
sub_model_channel1 = Sequential()
sub_model_channel2 = Sequential()
sub_model_channel1.add(Dropout(dropout_prob[0], input_shape=(channels, sequence_length,sequence_length)))
sub_model_channel2.add(Dropout(dropout_prob[0], input_shape=(channels, sequence_length,sequence_length)))
in1 = Input(shape=(channels, sequence_length,sequence_length))
in2 = Input(shape=(channels, sequence_length,sequence_length))
convs1 = model_unichannel(in1) #adds Conv, MaxPooling and Flatten layer
convs2 = model_unichannel(in2)
out1 = Concatenate()(convs1)
out2 = Concatenate()(convs2)
m1 = Model(inputs=in1, outputs=out1)
m2 = Model(inputs=in2, outputs=out2)
sub_model_channel1.add(m1)
sub_model_channel2.add(m2)
m = Concatenate()([sub_model_channel1.output, sub_model_channel2.output])
sub_model1 = Model([sub_model_channel1.input,sub_model_channel2.input], m)
model.add(sub_model1)
In this case I am getting an error: ValueError: Layer model_3 expects 2 inputs, but it received 1 input tensors. Input received: [<tf.Tensor 'model_3_input:0' shape=(?, 7, 145, 145) dtype=float32>]. I understand this is because my model is also Sequential, but how do I define the inputs? Also, is there any alternative way (apart from approach two) of doing this?
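One way to sidestep both errors is to drop the Sequential wrappers and stay in the functional API throughout, since a Sequential model cannot wrap a two-input Model. A sketch under the question's own definitions (model_unichannel, channels, sequence_length and dropout_prob are assumed from the code above):

from keras.layers import Input, Dropout, Concatenate
from keras.models import Model

in1 = Input(shape=(channels, sequence_length, sequence_length))
in2 = Input(shape=(channels, sequence_length, sequence_length))
d1 = Dropout(dropout_prob[0])(in1)
d2 = Dropout(dropout_prob[0])(in2)
out1 = Concatenate()(model_unichannel(d1))  # model_unichannel returns a list of tensors
out2 = Concatenate()(model_unichannel(d2))
merged = Concatenate()([out1, out2])
model = Model(inputs=[in1, in2], outputs=merged)

The model is then trained with a two-element input list, e.g. model.fit([X1, X2], y).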

Behavior difference when building TF Keras RNN with two different methods

I am building an RNN text generator, mostly following the TensorFlow docs here.
My question, I have defined the model two ways:
Method (1):
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embed_dim,
                              batch_input_shape=[batch_size, None]),
    tf.keras.layers.GRU(rnn_units, return_sequences=True, stateful=True,
                        recurrent_initializer='glorot_uniform'),
    tf.keras.layers.Dense(vocab_size)
])
Method (2):
model = tf.keras.Sequential()
model.add(tf.keras.layers.Embedding(vocab_size, embed_dim,
                                    batch_input_shape=[BATCH_SIZE, None]))
model.add(tf.keras.layers.GRU(rnn_units, return_sequences=True,
                              stateful=True,
                              recurrent_initializer='glorot_uniform'))
model.add(tf.keras.layers.Dense(vocab_size))
In my mind, these both do the same thing. However when generating text with:
def generate_text(model, start_string, length=1000):
    # converting start string to numbers (vectorisation)
    input_eval = [char2idx[s] for s in start_string]
    input_eval = tf.expand_dims(input_eval, 0)
    # initialise empty list to store results
    text = []
    model.reset_states()
    for i in range(length):
        predictions = model(input_eval)
        # remove batch dimension
        predictions = tf.squeeze(predictions, 0)
        # use categorical distribution to predict character returned by model
        predicted_id = tf.random.categorical(predictions, num_samples=1)[-1, 0].numpy()
        # we pass the predicted character as the next input to the model
        # along with the previous hidden state
        input_eval = tf.expand_dims([predicted_id], 0)
        # append predicted character
        text.append(idx2char[predicted_id])
    return (start_string + ''.join(text))
Which I pass:
print(generate_text(model, start_string=u'From '))
Method (1) works perfectly, but method (2) throws the following error:
WARNING:tensorflow:Model was constructed with shape Tensor("embedding_1_input:0", shape=(64, None), dtype=float32) for input (64, None), but it was re-called on a Tensor with incompatible shape (1, 5).
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-33-eb814780c9fe> in <module>()
----> 1 print(generate_text(model, start_string=u'From ', length=PRINT))
14 frames
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py in set_shape(self, shape)
1086 raise ValueError(
1087 "Tensor's shape %s is not compatible with supplied shape %s" %
-> 1088 (self.shape, shape))
1089
1090 # Methods not supported / implemented for Eager Tensors.
ValueError: Tensor's shape (5, 64, 1024) is not compatible with supplied shape [5, 1, 1024]
If anyone could help me understand the difference between these two methods, that would be amazing, thank you!
Edit:
Including model saving and loading code. I use this to save the model (with a batch size of 64) and then load it with a batch size of 1 for text generation.
Saving weights:
checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath='./training_checkpoints/ckpt_{epoch}',
    save_weights_only=True
)
Loading weights into new model (batch size = 1):
model = build_model(len(vocab_char), EMBED_DIM, UNITS, 1)
model.load_weights(tf.train.latest_checkpoint('./training_checkpoints'))
model.build(tf.TensorShape([1, None]))
model.summary()
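For reference, the build_model helper called above is not shown in the question; a sketch consistent with Method (1) and the TensorFlow text-generation tutorial, parameterized by batch size so the same weights can be reloaded with a batch size of 1:

def build_model(vocab_size, embed_dim, rnn_units, batch_size):
    return tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, embed_dim,
                                  batch_input_shape=[batch_size, None]),
        tf.keras.layers.GRU(rnn_units, return_sequences=True, stateful=True,
                            recurrent_initializer='glorot_uniform'),
        tf.keras.layers.Dense(vocab_size)
    ])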

Keras Input raises value error when dimensions match

I am starting to develop a model and am getting stuck with dimensions. My X_train and Y_train are numpy arrays of shape (65337, 19)
Input_1 = Input(shape=(19,))
x = Dense(100, activation='relu')(Input_1)
out1 = Dense(1, activation='linear')(x)
out2 = Dense(1, activation='linear')(x)
...
out19 = Dense(1, activation='linear')(x)
model = Model(inputs=Input_1, outputs=[out1, out2, out3, out4, out5, out6,
                                       out7, out8, out9, out10, out11, out12,
                                       out13, out14, out15, out16, out17, out18, out19])
model.compile(optimizer='rmsprop', loss='mse')
model.fit(X_train, y_train, epochs=5)
When I run this, I get the value error:
ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 19 array(s), but instead got the following list of 1 arrays:
Looking at other questions here, it seems using .fit(np.array(X_train), np.array(y_train)) has helped some, but I get the same error (which makes sense, since it tells me I am passing an array).
You are expecting 19 different outputs, so you need to feed your network the 19 slices of your array of labels:
model.fit(X_train, [y_train[:, 0], y_train[:, 1], y_train[:, 2], ..., y_train[:, 18]], epochs=5)
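Equivalently, a list comprehension builds all 19 slices without writing them out:

model.fit(X_train, [y_train[:, i] for i in range(19)], epochs=5)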

MobileNet ValueError: Error when checking target: expected dense_1 to have 4 dimensions, but got array with shape (24, 2)

I am trying to implement a number of networks using Keras applications. Here I am attaching a piece of code; this code works fine for ResNet50 and VGG16, but with MobileNet it generates the error:
ValueError: Error when checking target: expected dense_1 to have 4 dimensions, but got array with shape (24, 2)
I am working with 224x224 images with 3 channels and a batch size of 24, and I am trying to classify them into 2 classes, so the number 24 mentioned in the error is the batch size, but I am not sure about the number 2 (probably it is the number of classes).
Btw, does anyone know why I am receiving this error for keras.applications.mobilenet?
# basic_model = ResNet50()
# basic_model = VGG16()
basic_model = MobileNet()
classes = list(iter(train_generator.class_indices))
basic_model.layers.pop()
for layer in basic_model.layers[:25]:
    layer.trainable = False
last = basic_model.layers[-1].output
temp = Dense(len(classes), activation="softmax")(last)
fineTuned_model = Model(basic_model.input, temp)
fineTuned_model.classes = classes
fineTuned_model.compile(optimizer=Adam(lr=0.0001), loss='categorical_crossentropy', metrics=['accuracy'])
fineTuned_model.fit_generator(
    train_generator,
    steps_per_epoch=3764 // batch_size,
    epochs=100,
    validation_data=validation_generator,
    validation_steps=900 // batch_size)
fineTuned_model.save('mobile_model.h5')
From the source code, we can see that you're popping a Reshape() layer, exactly the one that transforms the convolution's output (4D) into a class tensor (2D).
Source code:
if include_top:
    if K.image_data_format() == 'channels_first':
        shape = (int(1024 * alpha), 1, 1)
    else:
        shape = (1, 1, int(1024 * alpha))

    x = GlobalAveragePooling2D()(x)
    x = Reshape(shape, name='reshape_1')(x)
    x = Dropout(dropout, name='dropout')(x)
    x = Conv2D(classes, (1, 1),
               padding='same', name='conv_preds')(x)
    x = Activation('softmax', name='act_softmax')(x)
    x = Reshape((classes,), name='reshape_2')(x)
But all the Keras convolutional models are meant to be used in a different way. If you want your own number of classes, you should create these models with include_top=False. This way, the final part of the model (the classes part) will simply not exist, and you just add your own layers:
basic_model = MobileNet(include_top=False)
for layer in basic_model.layers:
    layer.trainable = False
furtherOutputs = YourOwnLayers()(basic_model.output)  # YourOwnLayers is a placeholder for your own head
You should probably try to copy that final part shown in the Keras source, changing classes to your own number of classes. Or maybe try popping 3 layers from the complete model (the Reshape, the Activation and the Conv2D) and replacing them with your own, as sketched below.
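A concrete sketch of the include_top=False route (a common pattern, not the only one; the global-pooling head here stands in for MobileNet's own classification block, with 2 classes as in the question):

from keras.applications.mobilenet import MobileNet
from keras.layers import GlobalAveragePooling2D, Dense
from keras.models import Model

basic_model = MobileNet(include_top=False, input_shape=(224, 224, 3))
for layer in basic_model.layers:
    layer.trainable = False  # freeze the pretrained backbone

x = GlobalAveragePooling2D()(basic_model.output)  # collapse to (None, 1024)
preds = Dense(2, activation='softmax')(x)         # 2 classes, as in the question
fineTuned_model = Model(basic_model.input, preds)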
