Output of model.summary() is not as expected in TensorFlow 2 (Python)

I've defined a complex deep learning model, but for the purpose of this question, I'll use a simple one.
Consider the following:
import tensorflow as tf
from tensorflow.keras import layers, models

def simpleMLP(in_size, hidden_sizes, num_classes, dropout_prob=0.5):
    in_x = layers.Input(shape=(in_size,))
    hidden_x = models.Sequential(name="hidden_layers")
    for i, num_h in enumerate(hidden_sizes):
        hidden_x.add(layers.Dense(num_h, input_shape=(in_size,) if i == 0 else []))
        hidden_x.add(layers.Activation('relu'))
        hidden_x.add(layers.Dropout(dropout_prob))
    out_x = layers.Dense(num_classes, activation='softmax', name='baseline')
    return models.Model(inputs=in_x, outputs=out_x(hidden_x(in_x)))
I will call the function in the following manner:
mdl = simpleMLP(28*28, [500, 300], 10)
Now when I do mdl.summary() I get the following:
Model: "functional_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 784)] 0
_________________________________________________________________
hidden_layers (Sequential) (None, 300) 542800
_________________________________________________________________
baseline (Dense) (None, 10) 3010
=================================================================
Total params: 545,810
Trainable params: 545,810
Non-trainable params: 0
_________________________________________________________________
The problem is that the Sequential block is condensed, showing only its final output shape together with the sum total of its parameters.
In my complex model, I have multiple Sequential blocks that are all hidden like this.
Is there a way to make the summary more verbose? Am I doing something wrong in the model definition?
Edit
When using PyTorch I don't see the same behaviour, given the following example (taken from here):
import torch
import torch.nn as nn

class MyCNNClassifier(nn.Module):
    def __init__(self, in_c, n_classes):
        super().__init__()
        self.conv_block1 = nn.Sequential(
            nn.Conv2d(in_c, 32, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU()
        )
        self.conv_block2 = nn.Sequential(
            nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU()
        )
        self.decoder = nn.Sequential(
            nn.Linear(32 * 28 * 28, 1024),
            nn.Sigmoid(),
            nn.Linear(1024, n_classes)
        )

    def forward(self, x):
        x = self.conv_block1(x)
        x = self.conv_block2(x)
        x = x.view(x.size(0), -1)  # flatten
        x = self.decoder(x)
        return x
When printing it I get:
MyCNNClassifier(
  (conv_block1): Sequential(
    (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
  )
  (conv_block2): Sequential(
    (0): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
  )
  (decoder): Sequential(
    (0): Linear(in_features=25088, out_features=1024, bias=True)
    (1): Sigmoid()
    (2): Linear(in_features=1024, out_features=10, bias=True)
  )
)

There is nothing wrong with the model summary in TensorFlow 2.x.
import tensorflow as tf
from tensorflow.keras import layers, models

def simpleMLP(in_size, hidden_sizes, num_classes, dropout_prob=0.5):
    in_x = layers.Input(shape=(in_size,))
    hidden_x = models.Sequential(name="hidden_layers")
    for i, num_h in enumerate(hidden_sizes):
        hidden_x.add(layers.Dense(num_h, input_shape=(in_size,) if i == 0 else []))
        hidden_x.add(layers.Activation('relu'))
        hidden_x.add(layers.Dropout(dropout_prob))
    out_x = layers.Dense(num_classes, activation='softmax', name='baseline')
    return models.Model(inputs=in_x, outputs=out_x(hidden_x(in_x)))

mdl = simpleMLP(28*28, [500, 300], 10)
mdl.summary()
Output:
Model: "functional_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 784)] 0
_________________________________________________________________
hidden_layers (Sequential) (None, 300) 542800
_________________________________________________________________
baseline (Dense) (None, 10) 3010
=================================================================
Total params: 545,810
Trainable params: 545,810
Non-trainable params: 0
_________________________________________________________________
You can use get_layer to retrieve a layer by either its name or its index.
If both name and index are provided, index takes precedence.
Indices are based on the order of horizontal graph traversal (bottom-up).
Here, to get the details of the Sequential layer (index 1 in mdl), you can try
mdl.get_layer(index=1).summary()
Output:
Model: "hidden_layers"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_2 (Dense) (None, 500) 392500
_________________________________________________________________
activation_2 (Activation) (None, 500) 0
_________________________________________________________________
dropout_2 (Dropout) (None, 500) 0
_________________________________________________________________
dense_3 (Dense) (None, 300) 150300
_________________________________________________________________
activation_3 (Activation) (None, 300) 0
_________________________________________________________________
dropout_3 (Dropout) (None, 300) 0
=================================================================
Total params: 542,800
Trainable params: 542,800
Non-trainable params: 0
_________________________________________________________________
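If you want every nested Sequential expanded without calling get_layer block by block, two more options may help. Note that the expand_nested argument of summary() only exists in more recent TF/Keras releases, so treat its availability as an assumption about your version:
mdl.summary(expand_nested=True)
Or simply loop over the layers and print a sub-summary for anything that is itself a model (Sequential subclasses tf.keras.Model, so isinstance catches it):
for layer in mdl.layers:
    if isinstance(layer, tf.keras.Model):
        layer.summary()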

Related

Tensorflow VGG16 SENet implementation prediction problem

I need to implement an SENet (squeeze-and-excitation blocks) at key points of my VGG16 CNN. Everything runs fine, but when I decode the prediction the result is very strange: I give it a picture of a panda and it tells me that it is a "cloak". When I test with the official VGG16 I do not have the same problem, whether with or without the SENet block.
import tensorflow as tf
import numpy as np
from tensorflow.keras.layers import (Activation, Conv2D, Input, BatchNormalization, Reshape,
                                     GlobalAveragePooling2D, Dense, ReLU, Flatten, Dropout, Layer)
from tensorflow.keras.activations import sigmoid, softmax, relu, tanh
from tensorflow.keras import Sequential
import keras
tf.compat.v1.disable_eager_execution()
import vis
import matplotlib.pyplot as plt
%matplotlib inline
import cv2
from keras.preprocessing import image
from keras import backend as K
from keras.applications.vgg16 import preprocess_input, decode_predictions, VGG16
from vis.utils import utils
from tensorflow.keras.preprocessing.image import load_img

class SENET_Attn(Layer):
    def __init__(self, out_dim, ratio, layer_name="SENET"):
        super(SENET_Attn, self).__init__()
        self.out_dim = out_dim
        self.ratio = ratio
        self.layer_name = layer_name

    def build(self, input_shape):
        self.Global_Average_Pooling = GlobalAveragePooling2D(keepdims=True)
        self.Fully_connected_1_1 = Dense(units=self.out_dim / self.ratio,
                                         name=self.layer_name + '_fully_connected1')
        self.Relu = ReLU()
        self.Fully_connected_2 = Dense(units=self.out_dim,
                                       name=self.layer_name + '_fully_connected2',
                                       activation="tanh")
        self.Sigmoid = Activation("sigmoid")

    def call(self, inputs):
        inputs = tf.cast(inputs, dtype="float32")
        squeeze = self.Global_Average_Pooling(inputs)
        excitation = self.Fully_connected_1_1(squeeze)
        excitation = self.Relu(excitation)
        excitation = self.Fully_connected_2(excitation)
        excitation = self.Sigmoid(excitation)
        excitation = tf.reshape(excitation, [-1, 1, 1, self.out_dim])
        scale = inputs * excitation
        return scale
Vgg = VGG16(weights='imagenet', include_top=True)
# SENET-RATIO
ratio = 8
input_layer = tf.keras.Input(shape=(224,224,3))
out = Vgg.layers[1](input_layer) # Block 1 of VGG16
out = Vgg.layers[2](out)
out = Vgg.layers[3](out)
#out = SENET_Attn(out.shape[-1], ratio, )(out) # SENET Attention
out = Vgg.layers[4](out) # Block 2 of VGG16
out = Vgg.layers[5](out)
out = Vgg.layers[6](out)
#out = SENET_Attn(out.shape[-1], ratio, )(out) # SENET Attention
out = Vgg.layers[7](out) # Block 3 of VGG16
out = Vgg.layers[8](out)
out = Vgg.layers[9](out)
out = Vgg.layers[10](out)
#out = SENET_Attn(out.shape[-1], ratio, )(out) # SENET Attention
out = Vgg.layers[11](out) # Block 4 of VGG16
out = Vgg.layers[12](out)
out = Vgg.layers[13](out)
out = Vgg.layers[14](out)
#out = SENET_Attn(out.shape[-1], ratio, )(out) # SENET Attention
out = Vgg.layers[15](out) #Block 5 of VGG16
out = Vgg.layers[16](out)
out = Vgg.layers[17](out)
out = Vgg.layers[18](out)
#out = SENET_Attn(out.shape[-1], ratio, )(out) # SENET Attention
flatten = Flatten()(out)
out = Dense(4096, activation='relu')(flatten)
out = Dropout(0.5)(out)
out = Dense(4096, activation='relu')(out)
out = Dropout(0.5)(out)
out = Dense(1000, activation='softmax')(out)
model = tf.keras.Model(inputs=input_layer, outputs= out)
model.compile('adam', loss ='mae', metrics=['accuracy'])
model.summary()
Model: "model_24"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_99 (InputLayer) [(None, 224, 224, 3)] 0
block1_conv1 (Conv2D) (None, 224, 224, 64) 1792
block1_conv2 (Conv2D) (None, 224, 224, 64) 36928
block1_pool (MaxPooling2D) (None, 112, 112, 64) 0
block2_conv1 (Conv2D) (None, 112, 112, 128) 73856
block2_conv2 (Conv2D) (None, 112, 112, 128) 147584
block2_pool (MaxPooling2D) (None, 56, 56, 128) 0
block3_conv1 (Conv2D) (None, 56, 56, 256) 295168
block3_conv2 (Conv2D) (None, 56, 56, 256) 590080
block3_conv3 (Conv2D) (None, 56, 56, 256) 590080
block3_pool (MaxPooling2D) (None, 28, 28, 256) 0
block4_conv1 (Conv2D) (None, 28, 28, 512) 1180160
block4_conv2 (Conv2D) (None, 28, 28, 512) 2359808
block4_conv3 (Conv2D) (None, 28, 28, 512) 2359808
block4_pool (MaxPooling2D) (None, 14, 14, 512) 0
block5_conv1 (Conv2D) (None, 14, 14, 512) 2359808
block5_conv2 (Conv2D) (None, 14, 14, 512) 2359808
block5_conv3 (Conv2D) (None, 14, 14, 512) 2359808
block5_pool (MaxPooling2D) (None, 7, 7, 512) 0
flatten_24 (Flatten) (None, 25088) 0
dense_72 (Dense) (None, 4096) 102764544
dropout_14 (Dropout) (None, 4096) 0
dense_73 (Dense) (None, 4096) 16781312
dropout_15 (Dropout) (None, 4096) 0
dense_74 (Dense) (None, 1000) 4097000
=================================================================
Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0
_________________________________________________________________
img = load_img('Panda.jpg',target_size=(224,224))
x = tf.keras.utils.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print('The most accurate possibility is :', tf.keras.applications.vgg16.decode_predictions(model.predict(x), top=3)[0])
The most accurate possibility is : [('n03045698', 'cloak', 0.9999887), ('n01692333', 'Gila_monster', 5.1300117e-06), ('n02965783', 'car_mirror', 2.17886e-06)]
I imagine that one of the layers is missing, but the summary of my model shows the same layers and the same parameters. Does anyone have a solution? Thank you for your help.
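One quick way to narrow this down (not part of the original post; it assumes the Vgg and model variables defined above): compare the weights of the rebuilt model against the official VGG16 layer by layer. Matching layer names and parameter counts in summary() do not imply matching weight values, so any pair that differs was re-created rather than reused.
vgg_weighted = [l for l in Vgg.layers if l.get_weights()]
new_weighted = [l for l in model.layers if l.get_weights()]
for ref, new in zip(vgg_weighted, new_weighted):
    same = all(np.array_equal(a, b) for a, b in zip(ref.get_weights(), new.get_weights()))
    print(ref.name, 'vs', new.name, '-> identical weights:', same)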

tensorflow.keras.Model inherit

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

class KerasSupervisedModelWrapper(keras.Model):
    def __init__(self, batch_size, **kwargs):
        super().__init__()
        self.batch_size = batch_size

    def summary(self, input_shape):  # temporary fix for a bug
        x = layers.Input(shape=input_shape)
        model = keras.Model(inputs=[x], outputs=self.call(x))
        return model.summary()

class ExampleModel(KerasSupervisedModelWrapper):
    def __init__(self, batch_size):
        super().__init__(batch_size)
        self.conv1 = layers.Conv2D(32, kernel_size=(3, 3), activation='relu')

    def call(self, x):
        x = self.conv1(x)
        return x

model = ExampleModel(15)
model.summary([28, 28, 1])
output:
Model: "model_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 28, 28, 1)] 0
conv2d_2 (Conv2D) (None, 26, 26, 32) 320
=================================================================
Total params: 320
Trainable params: 320
Non-trainable params: 0
_________________________________________________________________
I'm writing a wrapper around keras.Model to pre-define some useful methods and variables, as above.
I'd like to modify the wrapper so that it can take a list of layers and compose a model the way keras.Sequential does.
Therefore, I added a Sequential method that assigns a new call method, as below.
class KerasSupervisedModelWrapper(keras.Model):
    ...(continue)...

    @staticmethod
    def Sequential(layers, **kwargs):
        model = KerasSupervisedModelWrapper(**kwargs)
        pipe = keras.Sequential(layers)

        def call(self, x):
            return pipe(x)

        model.call = call
        return model
However, it does not work as I intended. Instead, it shows the error message below.
model = KerasSupervisedModelWrapper.Sequential([
    layers.Conv2D(32, kernel_size=(3, 3), activation="relu")
], batch_size=15)
model.summary((28, 28, 1))
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/tmp/ipykernel_91471/2826773946.py in <module>
1 # model.build((None, 28, 28, 1))
2 # model.compile('adam', loss=keras.losses.SparseCategoricalCrossentropy(), metrics=['accuracy'])
----> 3 model.summary((28, 28, 1))
/tmp/ipykernel_91471/3696340317.py in summary(self, input_shape)
10 def summary(self, input_shape): # temporary fix for a bug
11 x = layers.Input(shape=input_shape)
---> 12 model = keras.Model(inputs=[x], outputs=self.call(x))
13 return model.summary()
14
TypeError: call() missing 1 required positional argument: 'x'
What can I do so that the wrapper can build a keras.Sequential-style model while still using its other properties?
You could try something like this:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

class KerasSupervisedModelWrapper(keras.Model):
    def __init__(self, batch_size, **kwargs):
        super().__init__()
        self.batch_size = batch_size

    def summary(self, input_shape):  # temporary fix for a bug
        x = layers.Input(shape=input_shape)
        model = keras.Model(inputs=[x], outputs=self.call(x))
        return model.summary()

    @staticmethod
    def Sequential(layers, **kwargs):
        model = KerasSupervisedModelWrapper(**kwargs)
        pipe = keras.Sequential(layers)
        model.call = pipe
        return model

class ExampleModel(KerasSupervisedModelWrapper):
    def __init__(self, batch_size):
        super().__init__(batch_size)
        self.conv1 = layers.Conv2D(32, kernel_size=(3, 3), activation='relu')

    def call(self, x):
        x = self.conv1(x)
        return x

model = ExampleModel(15)
model.summary([28, 28, 1])

model = KerasSupervisedModelWrapper.Sequential([
    layers.Conv2D(32, kernel_size=(3, 3), activation="relu")
], batch_size=15)
model.summary((28, 28, 1))
print(model(tf.random.normal((1, 28, 28, 1))).shape)
Model: "model_9"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_14 (InputLayer) [(None, 28, 28, 1)] 0
conv2d_17 (Conv2D) (None, 26, 26, 32) 320
=================================================================
Total params: 320
Trainable params: 320
Non-trainable params: 0
_________________________________________________________________
Model: "model_10"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_15 (InputLayer) [(None, 28, 28, 1)] 0
sequential_8 (Sequential) (None, 26, 26, 32) 320
=================================================================
Total params: 320
Trainable params: 320
Non-trainable params: 0
_________________________________________________________________
(1, 26, 26, 32)
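If you prefer not to replace call on an instance at all, a small subclass achieves the same thing (a sketch built on the wrapper above; SequentialWrapper is a made-up name and not part of the original answer):
class SequentialWrapper(KerasSupervisedModelWrapper):
    def __init__(self, layer_list, batch_size):
        super().__init__(batch_size)
        # storing the Sequential as an attribute lets Keras track its weights
        self.pipe = keras.Sequential(layer_list)

    def call(self, x):
        return self.pipe(x)

model = SequentialWrapper([layers.Conv2D(32, kernel_size=(3, 3), activation='relu')],
                          batch_size=15)
model.summary((28, 28, 1))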

ValueError: Error when checking target: expected time_distributed_1 to have 3 dimensions, but got array with shape (18912, 35)

I'm constructing an encoder-decoder using a BLSTM to model word inflection generation.
I'm not sure why I am getting the titular error message at the model.fit step. I am passing in a matrix of integer-encoded word vectors, but I was under the impression that my input would be converted to three dimensions when passed through the Embedding layer.
encoder_inputs = Input(shape=(enc_len,))
encoder_embedding = Embedding(vocab_size, 100, mask_zero=True)(encoder_inputs)
encoder_outputs = Bidirectional(LSTM(100))(encoder_embedding)
e = Dense(200)(encoder_outputs)
e = RepeatVector(35)(e)
decoder_inputs_lemm = Input(shape=(dec_len,))
decoder_inputs_infl = Input(shape=(dec_len,))
embedding_layer = Embedding(vocab_size, 100) # shared weights
decoder_embedding_lemm = embedding_layer(decoder_inputs_lemm)
decoder_embedding_infl = embedding_layer(decoder_inputs_infl)
concat = Concatenate()([decoder_embedding_lemm, decoder_embedding_infl, e])
decoder_outputs = LSTM(100, return_sequences=True)(concat)
decoder_outputs = TimeDistributed(Dense(dec_len, activation='softmax'))(decoder_outputs)
# prepare input data
enc_lemma = pad_sequences([x[0] for x in data['train']], enc_len, padding='pre')
dec_lemma = pad_sequences([x[0] for x in data['train']], dec_len, padding='post')
dec_infl_shifted = pad_sequences([x[1] for x in data['train']], enc_len, padding='post')
dec_infl_shifted = np.hstack((np.full((dec_infl_shifted.shape[0], 1), 2), dec_infl_shifted))
dec_infl_target = pad_sequences([x[1] for x in data['train']], enc_len, padding='post') # not shifted
dec_infl_target = np.hstack((dec_infl_target, np.full((dec_infl_target.shape[0], 1), 0)))
model = Model([encoder_inputs, decoder_inputs_lemm, decoder_inputs_infl], decoder_outputs)
model.compile(optimizer='adadelta', loss='categorical_crossentropy')
model.fit([enc_lemma, dec_lemma, dec_infl_shifted], dec_infl_target, epochs=30, verbose=1)
Here is the summary:
Model: "model_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 34) 0
__________________________________________________________________________________________________
embedding_1 (Embedding) (None, 34, 100) 6300 input_1[0][0]
__________________________________________________________________________________________________
bidirectional_1 (Bidirectional) (None, 200) 160800 embedding_1[0][0]
__________________________________________________________________________________________________
input_2 (InputLayer) (None, 35) 0
__________________________________________________________________________________________________
input_3 (InputLayer) (None, 35) 0
__________________________________________________________________________________________________
dense_1 (Dense) (None, 200) 40200 bidirectional_1[0][0]
__________________________________________________________________________________________________
embedding_2 (Embedding) (None, 35, 100) 6300 input_2[0][0]
input_3[0][0]
__________________________________________________________________________________________________
repeat_vector_1 (RepeatVector) (None, 35, 200) 0 dense_1[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 35, 400) 0 embedding_2[0][0]
embedding_2[1][0]
repeat_vector_1[0][0]
__________________________________________________________________________________________________
lstm_2 (LSTM) (None, 35, 100) 200400 concatenate_1[0][0]
__________________________________________________________________________________________________
time_distributed_1 (TimeDistrib (None, 35, 35) 3535 lstm_2[0][0]
==================================================================================================
Total params: 417,535
Trainable params: 417,535
Non-trainable params: 0
__________________________________________________________________________________________________
None
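This is not an answer from the original thread, but the mismatch in the error can be read directly off the shapes (assuming the variables defined above): the model output time_distributed_1 is 3-D, (batch, 35, 35) class probabilities per timestep, while dec_infl_target is a 2-D array of integer codes with shape (18912, 35). categorical_crossentropy expects one-hot targets matching the 3-D output, so the two usual options are to one-hot encode the targets or to switch to the sparse loss:
from keras.utils import to_categorical  # tensorflow.keras.utils if you are on tf.keras

print(model.output_shape)      # (None, 35, 35)
print(dec_infl_target.shape)   # (18912, 35)

# Option 1: one-hot encode, giving (18912, 35, 35); num_classes has to match the
# last dimension of the model output (dec_len here)
dec_infl_target_oh = to_categorical(dec_infl_target, num_classes=dec_len)

# Option 2: keep integer targets and use the sparse loss instead
model.compile(optimizer='adadelta', loss='sparse_categorical_crossentropy')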

Keras ValueError: Dimensions must be equal issue

Even after applying the suggestions in the answer and comments, the dimension mismatch issue persists. Here are the exact code and data files to reproduce it: https://drive.google.com/drive/folders/1q67s0VhB-O7J8OtIhU2jmj7Kc4LxL3sf?usp=sharing
How can this be corrected? The latest code, the model summary, the functions used, and the error I get are below.
type_ae=='dcor'
#Wrappers for keras
def custom_loss1(y_true, y_pred):
    dcor = -1 * distance_correlation(y_true, encoded_layer)
    return dcor

def custom_loss2(y_true, y_pred):
    recon_loss = losses.categorical_crossentropy(y_true, y_pred)
    return recon_loss

input_layer = Input(shape=(64, 64, 1))
encoded_layer = Conv2D(filters=128, kernel_size=(5, 5), padding='same', activation='relu',
                       input_shape=(64, 64, 1))(input_layer)
encoded_layer = MaxPool2D(pool_size=(2, 2))(encoded_layer)
encoded_layer = Dropout(0.25)(encoded_layer)
encoded_layer = Conv2D(filters=64, kernel_size=(3, 3), padding='same', activation='relu')(encoded_layer)
encoded_layer = MaxPool2D(pool_size=(2, 2))(encoded_layer)
encoded_layer = Dropout(0.25)(encoded_layer)
encoded_layer = Conv2D(filters=64, kernel_size=(3, 3), padding='same', activation='relu')(encoded_layer)
encoded_layer = MaxPool2D(pool_size=(2, 2))(encoded_layer)
encoded_layer = Dropout(0.25)(encoded_layer)
encoded_layer = Conv2D(filters=1, kernel_size=(3, 3), padding='same', activation='relu',
                       input_shape=(64, 64, 1), strides=1)(encoded_layer)
encoded_layer = ZeroPadding2D(padding=(28, 28), data_format=None)(encoded_layer)
decoded_imag = Conv2D(8, (2, 2), activation='relu', padding='same')(encoded_layer)
decoded_imag = UpSampling2D((2, 2))(decoded_imag)
decoded_imag = Conv2D(8, (3, 3), activation='relu', padding='same')(decoded_imag)
decoded_imag = UpSampling2D((2, 2))(decoded_imag)
decoded_imag = Conv2D(16, (3, 3), activation='relu', padding='same')(decoded_imag)
decoded_imag = UpSampling2D((2, 2))(decoded_imag)
decoded_imag = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(decoded_imag)
flat_layer = Flatten()(decoded_imag)
dense_layer = Dense(256, activation="relu")(flat_layer)
dense_layer = Dense(64, activation="relu")(dense_layer)
dense_layer = Dense(32, activation="relu")(dense_layer)
output_layer = Dense(9, activation="softmax")(dense_layer)
autoencoder = Model(input_layer, [encoded_layer, output_layer])
autoencoder.summary()
autoencoder.compile(optimizer='adadelta', loss=[custom_loss1, custom_loss2])
autoencoder.fit(x_train, [x_train, y_train], batch_size=32, epochs=3, shuffle=True,
                validation_data=(x_val, [x_val, y_val]))
The data is of dimensions:
x_train.shape: (4000, 64, 64, 1)
x_val.shape: (1000, 64, 64, 1)
y_train.shape: (4000, 9)
y_val.shape: (1000, 9)
losses look like:
def custom_loss1(y_true, y_pred):
    dcor = -1 * distance_correlation(y_true, encoded_layer)
    return dcor

def custom_loss2(y_true, y_pred):
    recon_loss = losses.categorical_crossentropy(y_true, y_pred)
    return recon_loss
The correlation function is based on tensors as follows:
def distance_correlation(y_true, y_pred):
    pred_r = tf.reduce_sum(y_pred * y_pred, 1)
    pred_r = tf.reshape(pred_r, [-1, 1])
    pred_d = pred_r - 2 * tf.matmul(y_pred, tf.transpose(y_pred)) + tf.transpose(pred_r)
    true_r = tf.reduce_sum(y_true * y_true, 1)
    true_r = tf.reshape(true_r, [-1, 1])
    true_d = true_r - 2 * tf.matmul(y_true, tf.transpose(y_true)) + tf.transpose(true_r)
    concord = 1 - tf.matmul(y_true, tf.transpose(y_true))
    #print(pred_d)
    #print(tf.reshape(tf.reduce_mean(pred_d, 1), [-1, 1]))
    #print(tf.reshape(tf.reduce_mean(pred_d, 0), [1, -1]))
    #print(tf.reduce_mean(pred_d))
    tf.check_numerics(pred_d, 'pred_d has NaN')
    tf.check_numerics(true_d, 'true_d has NaN')
    A = pred_d - tf.reshape(tf.reduce_mean(pred_d, 1), [-1, 1]) - tf.reshape(tf.reduce_mean(pred_d, 0), [1, -1]) + tf.reduce_mean(pred_d)
    B = true_d - tf.reshape(tf.reduce_mean(true_d, 1), [-1, 1]) - tf.reshape(tf.reduce_mean(true_d, 0), [1, -1]) + tf.reduce_mean(true_d)
    #dcor = -tf.reduce_sum(concord*pred_d)/tf.reduce_sum((1-concord)*pred_d)
    dcor = -tf.log(tf.reduce_mean(A * B)) + tf.log(tf.sqrt(tf.reduce_mean(A * A) * tf.reduce_mean(B * B)))  # -tf.reduce_sum(concord*pred_d)/tf.reduce_sum((1-concord)*pred_d)
    #print(dcor.shape)
    #tf.Print(dcor, [dcor])
    #dcor = tf.tile([dcor], batch_size)
    return dcor
model summary looks like:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_5 (InputLayer) (None, 64, 64, 1) 0
_________________________________________________________________
conv2d_30 (Conv2D) (None, 64, 64, 128) 3328
_________________________________________________________________
max_pooling2d_13 (MaxPooling (None, 32, 32, 128) 0
_________________________________________________________________
dropout_13 (Dropout) (None, 32, 32, 128) 0
_________________________________________________________________
conv2d_31 (Conv2D) (None, 32, 32, 64) 73792
_________________________________________________________________
max_pooling2d_14 (MaxPooling (None, 16, 16, 64) 0
_________________________________________________________________
dropout_14 (Dropout) (None, 16, 16, 64) 0
_________________________________________________________________
conv2d_32 (Conv2D) (None, 16, 16, 64) 36928
_________________________________________________________________
max_pooling2d_15 (MaxPooling (None, 8, 8, 64) 0
_________________________________________________________________
dropout_15 (Dropout) (None, 8, 8, 64) 0
_________________________________________________________________
conv2d_33 (Conv2D) (None, 8, 8, 1) 577
_________________________________________________________________
zero_padding2d_5 (ZeroPaddin (None, 64, 64, 1) 0
_________________________________________________________________
conv2d_34 (Conv2D) (None, 64, 64, 8) 40
_________________________________________________________________
up_sampling2d_10 (UpSampling (None, 128, 128, 8) 0
_________________________________________________________________
conv2d_35 (Conv2D) (None, 128, 128, 8) 584
_________________________________________________________________
up_sampling2d_11 (UpSampling (None, 256, 256, 8) 0
_________________________________________________________________
conv2d_36 (Conv2D) (None, 256, 256, 16) 1168
_________________________________________________________________
up_sampling2d_12 (UpSampling (None, 512, 512, 16) 0
_________________________________________________________________
conv2d_37 (Conv2D) (None, 512, 512, 1) 145
_________________________________________________________________
flatten_4 (Flatten) (None, 262144) 0
_________________________________________________________________
dense_13 (Dense) (None, 256) 67109120
_________________________________________________________________
dense_14 (Dense) (None, 64) 16448
_________________________________________________________________
dense_15 (Dense) (None, 32) 2080
_________________________________________________________________
dense_16 (Dense) (None, 9) 297
=================================================================
Total params: 67,244,507
Trainable params: 67,244,507
Non-trainable params: 0
_________________________________________________________________
This is the error:
InvalidArgumentError Traceback (most recent call last)
~/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs)
1658 try:
-> 1659 c_op = c_api.TF_FinishOperation(op_desc)
1660 except errors.InvalidArgumentError as e:
InvalidArgumentError: Dimensions must be equal, but are 1 and 64 for 'loss_1/zero_padding2d_5_loss/MatMul' (op: 'BatchMatMul') with input shapes: [?,64,64,1], [1,64,64,?].
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-11-0e924885fc6b> in <module>
40 autoencoder = Model(input_layer, [encoded_layer,output_layer])
41 autoencoder.summary()
---> 42 autoencoder.compile(optimizer='adadelta', loss=[custom_loss1,custom_loss2])
43 autoencoder.fit(x_train,[x_train, y_train],batch_size=32,epochs=3,shuffle=True,
44 validation_data=(x_val, [x_val,y_val]))
~/anaconda3/lib/python3.6/site-packages/keras/engine/training.py in compile(self, optimizer, loss, metrics, loss_weights, sample_weight_mode, weighted_metrics, target_tensors, **kwargs)
340 with K.name_scope(self.output_names[i] + '_loss'):
341 output_loss = weighted_loss(y_true, y_pred,
--> 342 sample_weight, mask)
343 if len(self.outputs) > 1:
344 self.metrics_tensors.append(output_loss)
~/anaconda3/lib/python3.6/site-packages/keras/engine/training_utils.py in weighted(y_true, y_pred, weights, mask)
402 """
403 # score_array has ndim >= 2
--> 404 score_array = fn(y_true, y_pred)
405 if mask is not None:
406 # Cast the mask to floatX to avoid float64 upcasting in Theano
<ipython-input-11-0e924885fc6b> in custom_loss1(y_true, y_pred)
2 #Wrappers for keras
3 def custom_loss1(y_true,y_pred):
----> 4 dcor = -1*distance_correlation(y_true,encoded_layer)
5 return dcor
6
<ipython-input-6-f282528532cc> in distance_correlation(y_true, y_pred)
2 pred_r = tf.reduce_sum(y_pred*y_pred,1)
3 pred_r = tf.reshape(pred_r,[-1,1])
----> 4 pred_d = pred_r - 2*tf.matmul(y_pred,tf.transpose(y_pred))+tf.transpose(pred_r)
5 true_r = tf.reduce_sum(y_true*y_true,1)
6 true_r = tf.reshape(true_r,[-1,1])
~/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py in matmul(a, b, transpose_a, transpose_b, adjoint_a, adjoint_b, a_is_sparse, b_is_sparse, name)
2415 adjoint_b = True
2416 return gen_math_ops.batch_mat_mul(
-> 2417 a, b, adj_x=adjoint_a, adj_y=adjoint_b, name=name)
2418
2419 # Neither matmul nor sparse_matmul support adjoint, so we conjugate
~/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gen_math_ops.py in batch_mat_mul(x, y, adj_x, adj_y, name)
1421 adj_y = _execute.make_bool(adj_y, "adj_y")
1422 _, _, _op = _op_def_lib._apply_op_helper(
-> 1423 "BatchMatMul", x=x, y=y, adj_x=adj_x, adj_y=adj_y, name=name)
1424 _result = _op.outputs[:]
1425 _inputs_flat = _op.inputs
~/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py in _apply_op_helper(self, op_type_name, name, **keywords)
786 op = g.create_op(op_type_name, inputs, output_types, name=scope,
787 input_types=input_types, attrs=attr_protos,
--> 788 op_def=op_def)
789 return output_structure, op_def.is_stateful, op
790
~/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py in new_func(*args, **kwargs)
505 'in a future version' if date is None else ('after %s' % date),
506 instructions)
--> 507 return func(*args, **kwargs)
508
509 doc = _add_deprecated_arg_notice_to_docstring(
~/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in create_op(***failed resolving arguments***)
3298 input_types=input_types,
3299 original_op=self._default_original_op,
-> 3300 op_def=op_def)
3301 self._create_op_helper(ret, compute_device=compute_device)
3302 return ret
~/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in __init__(self, node_def, g, inputs, output_types, control_inputs, input_types, original_op, op_def)
1821 op_def, inputs, node_def.attr)
1822 self._c_op = _create_c_op(self._graph, node_def, grouped_inputs,
-> 1823 control_input_ops)
1824
1825 # Initialize self._outputs.
~/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs)
1660 except errors.InvalidArgumentError as e:
1661 # Convert to ValueError for backwards compatibility.
-> 1662 raise ValueError(str(e))
1663
1664 return c_op
ValueError: Dimensions must be equal, but are 1 and 64 for 'loss_1/zero_padding2d_5_loss/MatMul' (op: 'BatchMatMul') with input shapes: [?,64,64,1], [1,64,64,?].
You have two loss functions, so you have to pass two ground truths (y values) for evaluating each loss with respect to its predictions.
Your first prediction is the output of the encoded_layer, which has a size of (None, 8, 8, 128), as observed from the model.summary() entry for conv2d_59 (Conv2D).
But what you are passing to fit for y is [x_train, y_train]. loss_1 is expecting an input of size (None, 8, 8, 128), but you are passing x_train, which has a different size.
If you want loss_1 to find the correlation of the input image with the encoded image, then stack the convolutions such that their output has the same shape as your x_train images. Use model.summary() to see the output shape of the convolutions.
Now use the padding, strides, and kernel size of the convolution layers to get the desired output size. Use the formulas W2 = (W1 − F + 2P)/S + 1 and H2 = (H1 − F + 2P)/S + 1 to find the output width and height of the convolutions. Check this reference.
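As a quick illustration of those formulas (a throwaway helper, not part of the original answer):
def conv_output_size(w1, f, p, s):
    # W2 = (W1 - F + 2P) / S + 1
    return (w1 - f + 2 * p) // s + 1

# e.g. the first Conv2D in the question: 64x64 input, 5x5 kernel, 'same' padding (P=2), stride 1
print(conv_output_size(64, 5, 2, 1))   # 64
# a 2x2 max-pooling with stride 2 then halves it to 32
print(conv_output_size(64, 2, 0, 2))   # 32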
There are two major issues with your approach.
1. Your loss function checks the correlation between the encoded image and the actual image. The correct way to do it is to decode the image back from the encoded image and then check the correlation between the decoded image and the actual image (along the lines of an autoencoder).
2. Your loss 1 is using numpy arrays. For a loss function to be part of the computation graph it should use tensor operations, not numpy operations.
Below is working code. However, for loss 1 I am using the L2 norm of the two images. If you want to use correlation, then you have to somehow convert it into tensor operations (which is a different issue from this question).
def image_loss(y_true, y_pred):
    return tf.norm(y_true - y_pred)

def label_loss(y_true, y_pred):
    return categorical_crossentropy(y_true, y_pred)

input_img = Input(shape=(64, 64, 1))
encoded_imag = Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)
encoded_imag = MaxPooling2D((2, 2), padding='same')(encoded_imag)
encoded_imag = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded_imag)
encoded_imag = MaxPooling2D((2, 2), padding='same')(encoded_imag)
encoded_imag = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded_imag)
encoded_imag = MaxPooling2D((2, 2), padding='same')(encoded_imag)
decoded_imag = Conv2D(8, (2, 2), activation='relu', padding='same')(encoded_imag)
decoded_imag = UpSampling2D((2, 2))(decoded_imag)
decoded_imag = Conv2D(8, (3, 3), activation='relu', padding='same')(decoded_imag)
decoded_imag = UpSampling2D((2, 2))(decoded_imag)
decoded_imag = Conv2D(16, (3, 3), activation='relu', padding='same')(decoded_imag)
decoded_imag = UpSampling2D((2, 2))(decoded_imag)
decoded_imag = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(decoded_imag)
flat_layer = Flatten()(encoded_imag)
dense_layer = Dense(32, activation="relu")(flat_layer)
output_layer = Dense(9, activation="softmax")(dense_layer)
model = Model(input_img, [decoded_imag, output_layer])
model.compile(optimizer='adadelta', loss=[image_loss, label_loss])
images = np.random.randn(10, 64, 64, 1)
model.fit(images, [images, np.random.randn(10, 9)])
The distance_correlation loss function you have coded assumes that each row in y_true and y_pred represents an image. When you use Dense layers it works, because a Dense layer outputs a batch of (row) vectors, where each vector represents an individual image. However, 2D convolutions operate on a batch of 2D tensors with multiple channels (you have only 1 channel). So, to use the distance_correlation loss function you have to reshape your tensors so that each row corresponds to an image. Add the two lines below to reshape your tensors.
def distance_correlation(y_true, y_pred):
    y_true = tf.reshape(tf.squeeze(y_true), [-1, 64*64])
    y_pred = tf.reshape(tf.squeeze(y_pred), [-1, 64*64])
    .... REST OF THE CODE ....
The intention is to use the original image in custom_loss1 and the scalar label values in custom_loss2. I think the working code by @mujjiga in his answer is almost correct. I suggest one slight modification.
In model.compile(), pass the input tensor into the loss that needs it. Keep the other one the same. model.fit() then just passes the labels.
model.compile(optimizer='adadelta', loss=[custom_loss1(input_layer), custom_loss2])
model.fit(x_train, y_train)
Inside the custom loss functions:
def custom_loss1(input):
    def loss1(y_true, y_pred):
        return tf.norm(input - y_pred)  # use your custom loss 1
    return loss1

def custom_loss2(y_true, y_pred):
    return categorical_crossentropy(y_true, y_pred)  # use your custom loss 2
Try this with simple in-built Keras loss functions first. If that works well, look into your custom loss functions.
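A different technique worth knowing about (this is not the method from the answers above, just an alternative sketch; it relies on the functional-API add_loss support in TF 2.x Keras, which is an assumption about your version): a loss that depends on the input tensor can be attached directly to the graph with model.add_loss, so compile() only needs the label loss. Toy shapes and built-in ops only:
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(64, 64, 1))
x = layers.Conv2D(8, 3, padding='same', activation='relu')(inp)
recon = layers.Conv2D(1, 3, padding='same', activation='sigmoid', name='recon')(x)
label = layers.Dense(9, activation='softmax', name='label')(layers.Flatten()(recon))

model = Model(inp, label)
model.add_loss(tf.reduce_mean(tf.square(inp - recon)))  # reconstruction term built from the input tensor
model.compile(optimizer='adadelta', loss='categorical_crossentropy')

x_train = np.random.rand(8, 64, 64, 1).astype('float32')
y_train = tf.keras.utils.to_categorical(np.random.randint(0, 9, size=8), 9)
model.fit(x_train, y_train, epochs=1, verbose=0)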

Adversarial Discriminative Domain Adaptation (ADDA)

I am trying to implement ADDA in Keras. Here is my code:
class ADDA_Images(object):
    def __init__(self, modelInput):
        self.img_rows = 28
        self.img_cols = 28
        self.channels = 3
        self.img_shape = (self.img_rows, self.img_cols, self.channels)
        optimizer = opt.Adam(0.001)

        self.source_generator = self.build_generator(modelInput)
        self.target_generator = self.build_generator(modelInput)

        outputFeatureExtraction = layers.Input(shape=self.target_generator.output_shape[1:])
        self.source_classificator = self.build_classifier(outputFeatureExtraction)
        self.discriminator_model = self.build_discriminator(outputFeatureExtraction)
        self.discriminator_model.compile(optimizer, loss='binary_crossentropy', metrics=['acc'])
        self.discriminator_model.name = 'disk'

        input = layers.Input(shape=self.img_shape)
        fe_rep = self.source_generator(input)
        cl = self.source_classificator(fe_rep)
        self.source_model = Model(input, cl)
        self.source_model.compile(optimizer, loss='categorical_crossentropy', metrics=['acc'])

        input = layers.Input(shape=self.img_shape)
        fe_rep = self.target_generator(input)
        cl = self.source_classificator(fe_rep)
        self.target_model = Model(input, cl)
        self.target_model.compile(optimizer, loss='categorical_crossentropy', metrics=['acc'])

        self.combined_model = Sequential()
        self.combined_model.add(self.target_generator)
        self.combined_model.add(self.discriminator_model)
        self.combined_model.get_layer('disk').trainable = False
        self.combined_model.compile(optimizer, loss='binary_crossentropy', metrics=['acc'])

        print('Source model')
        self.source_model.summary()
        print('Target model')
        self.target_model.summary()
        print('Discriminator')
        self.discriminator_model.summary()
        print('Combined model')
        self.combined_model.summary()

    def build_generator(self, modelInput):
        gen = layers.Conv2D(filters=20, kernel_size=5, padding='valid')(modelInput)
        gen = layers.MaxPooling2D(pool_size=2, strides=2)(gen)
        gen = layers.Conv2D(filters=50, kernel_size=5, padding='valid')(gen)
        gen = layers.MaxPooling2D(pool_size=2, strides=2)(gen)
        gen = layers.Flatten()(gen)
        model = Model(modelInput, gen)
        print('Generator summary')
        model.summary()
        return model

    def build_classifier(self, modelInput):
        cl = layers.Dense(3072, activation='relu')(modelInput)
        cl = layers.Dense(2048, activation='relu')(cl)
        cl = layers.Dense(10, activation='softmax')(cl)
        model = Model(modelInput, cl)
        print('Classificator summary')
        model.summary()
        return model

    def build_discriminator(self, modelInput):
        disc = layers.Dense(500, activation='relu')(modelInput)
        disc = layers.Dense(500, activation='relu')(disc)
        disc = layers.Dense(2, activation='softmax')(disc)
        model = Model(modelInput, disc)
        print('Discriminator summary')
        model.summary()
        return model
But it seems that target_generator is not connected to the target model. I loaded the pretrained source model into the target model and then trained the discriminator and the combined model in the ADDA way. But the target model does not change: it has the same predictions (accuracies and losses) as the source model all the time.
Here is the summary of the models:
Source model
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) (None, 28, 28, 3) 0
_________________________________________________________________
model_1 (Model) (None, 800) 26570
_________________________________________________________________
model_3 (Model) (None, 10) 8774666
=================================================================
Total params: 8,801,236
Trainable params: 8,801,236
Non-trainable params: 0
_________________________________________________________________
Target model
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_3 (InputLayer) (None, 28, 28, 3) 0
_________________________________________________________________
model_2 (Model) (None, 800) 26570
_________________________________________________________________
model_3 (Model) (None, 10) 8774666
=================================================================
Total params: 8,801,236
Trainable params: 8,801,236
Non-trainable params: 0
_________________________________________________________________
Discriminator
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 800) 0
_________________________________________________________________
dense_4 (Dense) (None, 500) 400500
_________________________________________________________________
dense_5 (Dense) (None, 500) 250500
_________________________________________________________________
dense_6 (Dense) (None, 2) 1002
=================================================================
Total params: 1,304,004
Trainable params: 652,002
Non-trainable params: 652,002
_________________________________________________________________
Combined model
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
model_2 (Model) (None, 800) 26570
_________________________________________________________________
disk (Model) (None, 2) 652002
=================================================================
Total params: 678,572
Trainable params: 26,570
Non-trainable params: 652,002
I validated the outputs of target_model's second layer (which should be target_generator by construction) and they are not the same as the output of target_generator on the same input. So it seems that those two models are not connected as reported in the summaries.
Can someone help me figure out what is wrong?
I am using Keras 2 with the TensorFlow backend.
The problem was in the training part: I loaded the pretrained source model into the target model with load_model, and that caused problems because it changed the reference to the generator model. Instead of load_model, I should have used load_weights.
So, loading the pretrained model in a way that works and does not break the references is:
source_model = load_model(modelName)
target_model.set_weights(source_model.get_weights())
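As a quick way to convince yourself that the references stay intact after copying weights (a sketch, not from the original post; it assumes the Keras 2 imports used by the class above, e.g. layers, Model and opt, are in scope):
adda = ADDA_Images(layers.Input(shape=(28, 28, 3)))

# copy values only; the layer objects themselves are untouched
adda.target_model.set_weights(adda.source_model.get_weights())

# the generator nested inside target_model and combined_model should still be the
# very same object, so training the combined model also moves the target model
print(adda.target_model.layers[1] is adda.target_generator)    # expected: True
print(adda.combined_model.layers[0] is adda.target_generator)  # expected: True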
