Can intermediate layers be accessed directly within a Keras loss function?

I am curious whether a loss function can use intermediate layer outputs in Keras without designing the model to expose those intermediate layers as outputs. I have seen that one workaround is to redesign the architecture to return the intermediate layer alongside the final prediction, but I'm unclear whether a layer's output can be accessed directly from a loss function.

I'm unclear whether a layer output can be accessed directly from a loss function
It certainly can.
By way of an example, consider this model using the functional API:
import tensorflow as tf
from tensorflow import keras

inp = keras.layers.Input(shape=(28, 28))
flat = keras.layers.Flatten()(inp)
dense = keras.layers.Dense(128, activation=tf.nn.relu)(flat)
out = keras.layers.Dense(10, activation=tf.nn.softmax)(dense)
model = keras.models.Model(inputs=inp, outputs=out)

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
If, say, we wanted to introduce a new loss function that also penalised the largest value among the outputs of our dense layer, then we could write a custom loss function something like this:
def my_funky_loss_fn(y_true, y_pred):
    return (keras.losses.sparse_categorical_crossentropy(y_true, y_pred)
            + keras.backend.max(dense))
which we can use in our model just by passing our new loss function to the compile() method:
model.compile(optimizer='adam',
              loss=my_funky_loss_fn,
              metrics=['accuracy'])
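Note that this loss function closes over the symbolic dense tensor, which works when the model runs in graph mode but may fail under eager execution in newer TF2 versions. A common alternative in that case (a sketch under that assumption, not part of the original answer) is to attach the extra penalty with add_loss() and keep the standard loss in compile():

# Sketch: attach the penalty via add_loss() instead of closing over
# `dense` inside the loss function (supported for symbolic tensors
# in tf.keras 2.x).
model.add_loss(keras.backend.max(dense))
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])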

Related

Extracting hidden features from Autoencoders using Pytorch

Following the tutorials in this post, I am trying to train an autoencoder and extract the features from its hidden layer.
So here are my questions:
In the autoencoder class, there is a "forward" function. However, I cannot see this function being called anywhere in the code. So how does the model get trained?
My question above is because I feel that if I want to extract the features, I should add another function ("forward_hidden") to the autoencoder class:
def forward(self, features):
    # print("in forward")
    # print(type(features))
    activation = self.encoder_hidden_layer(features)
    activation = torch.relu(activation)
    code = self.encoder_output_layer(activation)
    code = torch.relu(code)
    activation = self.decoder_hidden_layer(code)
    activation = torch.relu(activation)
    activation = self.decoder_output_layer(activation)
    reconstructed = torch.relu(activation)
    return reconstructed

def forward_hidden(self, features):
    activation = self.encoder_hidden_layer(features)
    activation = torch.relu(activation)
    code = self.encoder_output_layer(activation)
    code = torch.relu(code)
    return code
Then, after training, which means after this line in the main code:
print("AE, epoch : {}/{}, loss = {:.6f}".format(epoch + 1, epochs_AE, loss))
I can put the following code to retrieve the features from the hidden layer:
hidden_features = model_AE.forward_hidden(my_input)
Is this way correct? Still, I am wondering how the "forward" function is used for training, because I cannot see it being called anywhere in the code.
forward is the essence of your model and actually defines what the model does.
It is implicitly called with model(input) during training.
If you are asking how to extract intermediate features after running the model, you can register a forward hook, as described here, that will "catch" the values for you.
When you create a class that subclasses nn.Module in PyTorch, the forward function is called implicitly; you do not need to call it separately.
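To illustrate both answers, here is a small sketch (model_AE, my_input, and encoder_output_layer are taken from the question; the rest is illustrative):

import torch

# model_AE(my_input) routes through nn.Module.__call__, which invokes
# forward() (plus any registered hooks) for you:
reconstructed = model_AE(my_input)

# A forward hook "catches" an intermediate layer's output during that call:
captured = {}

def hook(module, inputs, output):
    captured["code"] = output.detach()

handle = model_AE.encoder_output_layer.register_forward_hook(hook)
model_AE(my_input)   # the hook fires during this forward pass
handle.remove()      # remove the hook once you are done
hidden_features = captured["code"]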

How do I specify to a model what to take as input of a custom loss function?

I'm having issues understanding/implementing a custom loss function in my model.
I have a Keras model composed of 3 sub-models, as you can see in the model architecture.
Now, I'd like to use the outputs of model and model_2 in my custom loss function.
I understand that in the loss function definition I can write:
def custom_mse(y_true, y_pred):
    *calculate stuff*
    return loss
But how do I tell the model to take its 2 outputs as inputs to the loss function?
Maybe, and I hope so, it's super trivial, but I didn't find anything online; if you could help me, it'd be fantastic.
Thanks in advance.
Context:
model and model_2 are the same pretrained model, a binary classifier, which predicts the interaction between 2 inputs (of image-like type).
model_1 is a generative model which will edit one of the inputs.
Therefore:
complete_model = Model(inputs=[input_1, input_2], outputs=[out_model, out_model2])
opt = *an optimizer*
complete_model.compile(loss=custom_mse,
                       ??????,
                       optimizer=opt,
                       metrics=['whatever'])
The main goal is to compare the prediction on the edited input against the one on the un-edited input, so the model will output the 2 interactions, which I need to use in the loss function.
EDIT:
Thank you Andrey for the solution.
Now, however, I can't manage to make the 2 loss functions work together, namely the one registered with add_loss(func) and a classic binary_crossentropy in model.compile(loss='binary_crossentropy', ...).
Can I maybe add an add_loss specifying model_2.output and the label? If yes, do you know how?
They work by themselves, but not together; when I try to run the code, they raise:
ValueError: Shapes must be equal rank, but are 0 and 4 From merging shape 0 with other shapes. for '{{node AddN}} = AddN[N=2, T=DT_FLOAT](binary_crossentropy/weighted_loss/value, complete_model/generator/tf_op_layer_SquaredDifference_3/SquaredDifference_3)' with input shapes: [], [?,500,400,1].
You can set a loss with compile() only for the standard loss-function signature (y_true, y_pred). You cannot use it here because your signature is effectively (y_true, (y_pred1, y_pred2)). Use the add_loss() API instead. See: https://keras.io/api/losses/
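A minimal sketch of the add_loss() route, assuming the input_1, input_2, out_model, out_model2, and opt names from the question (the squared-difference penalty is illustrative):

import tensorflow as tf
from tensorflow.keras.models import Model

complete_model = Model(inputs=[input_1, input_2],
                       outputs=[out_model, out_model2])

# add_loss() accepts an arbitrary tensor, so the (y_true, y_pred)
# signature restriction of compile() does not apply. Reducing the
# penalty to a scalar with reduce_mean also avoids rank-mismatch
# errors like the one quoted in the edit.
complete_model.add_loss(tf.reduce_mean(tf.square(out_model - out_model2)))

# A standard per-output loss can still be passed to compile() alongside it.
complete_model.compile(optimizer=opt, loss='binary_crossentropy')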

TensorFlow Keras Sequential Model Trains Differently if Loss is a String or Function

So I have been working with a sequential Keras model in TensorFlow and have come across an odd behavior: if I compile the model with the loss given as a function, the result differs from when the loss is given as a string.
The network (convolutional in nature) is defined as such:
model = tf.keras.Sequential()
add_model_layers(model)  # Can provide if needed

adam_opt = tf.train.AdamOptimizer(learning_rate=0.001,
                                  beta1=0.9,
                                  beta2=0.999)
The network is then compiled:
loss_param1 = tf.keras.losses.categorical_crossentropy
loss_param2 = "categorical_crossentropy"

model.compile(optimizer=adam_opt,
              loss=loss_param,  # ADD NUMBER TO END
              metrics=[tf.keras.metrics.categorical_accuracy])
After which it is trained:
# Records are tf.data.Dataset objects built from TFRecord files
model.fit(train_records,
          epochs=400,
          use_multiprocessing=False,
          validation_data=validation_records)

# And for completeness, tested
model.evaluate(test_records)
If the loss parameter of model.compile is loss_param1, training begins with a high loss value (in my case, around 192 after an epoch or 5). On the other hand, if loss_param2 is used, training begins at a much lower loss (around 41).
Does anyone know why this would be occurring?
(As an additional note, I am also running into a similar issue where I get a different result if the metric is given as a string. However, in model.fit, if use_multiprocessing is True the effect is negated; this also applies the other way around.)
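One quick diagnostic (a sketch, not from the original thread) is to check what the string identifier resolves to, since tf.keras maps loss strings to functions through a registry:

import tensorflow as tf

# tf.keras.losses.get() resolves a string identifier to the registered
# loss; comparing it with the function passed directly can rule out a
# simple aliasing difference between the two compile() calls.
resolved = tf.keras.losses.get("categorical_crossentropy")
print(resolved)
print(tf.keras.losses.categorical_crossentropy)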

Using the output of an internal layer to fit a Keras model?

I have a model M that has two inputs: x_train1, x_train2. After passing through heavy transformations, these inputs are concatenated into one single array x1_x2. Later it is plugged into an autoencoder whose output should be x1_x2. But when I try to fit the model, I get the following error:
ValueError: When feeding symbolic tensors to a model, we expect the tensors to have a static batch size. Got tensor with shape: (None, 2080)
I know that the problem lies in how I am specifying the expected output. I was able to run the code using a dummy array such as np.zeros((96, 2080)), but not by setting the output of an internal layer.
I do the following to fit the model:
autoencoder.fit([x_train1, x_train2],
                autoencoder.layers[-7].output,
                epochs=50,
                batch_size=8,
                shuffle=True,
                validation_split=0.2)
How can I make Keras understand that the expected output should be the output of an internal layer with shape (number_of_input_images, 2080)?
I'd do the following: Import the Model class from Keras and create an additional model.
from tensorflow.keras.models import Model

# model = your existing model
new_model = Model(inputs=model.input,
                  outputs=model.get_layer(name_of_desired_output_layer).output)
That's it, now you can use your new model and train it instead.
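For the training target itself, one option (a sketch, with names taken from the question) is to materialise the internal layer's output as a concrete array first, since fit() cannot consume a symbolic tensor as y:

# Compute the internal layer's output as a plain NumPy array, then use
# it as the target instead of the symbolic autoencoder.layers[-7].output.
targets = new_model.predict([x_train1, x_train2])
autoencoder.fit([x_train1, x_train2],
                targets,
                epochs=50,
                batch_size=8,
                shuffle=True,
                validation_split=0.2)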

Keras encoder-decoder model RuntimeError: You must compile your model before using it

I am trying to reproduce the results of an image captioning model but I get this error. The code for the two models is the following:
image_model = Sequential()
image_model.add(Dense(EMBEDDING_DIM, input_dim=4096, activation='relu'))
image_model.add(RepeatVector(self.max_length))

lang_model = Sequential()
lang_model.add(Embedding(self.vocab_size, 256, input_length=self.max_length))
lang_model.add(LSTM(256, return_sequences=True))
lang_model.add(TimeDistributed(Dense(EMBEDDING_DIM)))

model = Sequential()
model.add(Concatenate([image_model, lang_model]))
model.add(LSTM(1000, return_sequences=False))
model.add(Dense(self.vocab_size))
model.add(Activation('softmax'))
print("Model created!")

model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop', metrics=['accuracy'])
The model is then created and trained by the following code:
sd = SceneDesc.scenedesc()
model = sd.create_model()
batch_size = 512
model.fit_generator(sd.data_process(batch_size=batch_size),
                    steps_per_epoch=sd.no_samples / batch_size,
                    epochs=epoch, verbose=2, callbacks=None)
However, when fit_generator is called, that particular error is raised. Is there anything wrong with the concatenation of the models?
In Keras, there is a concept called compiling your model.
Basically, this configures the loss function and sets the optimizer for the model you want to train.
For example, model.compile(loss='mse', optimizer='Adam') will configure your model to use the mse loss function and the Adam optimization algorithm. What you use instead of these will depend heavily on the type of problem.
Your code throws an error because the model cannot train until the loss function and optimizer have been configured with the compile method. Simply call model.compile() with your choice of loss function and optimizer, and then you will be able to train your model.
You need to call the method model.compile(loss, optimizer) before you can fit it.
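As both answers note, the expected order is build, then compile, then fit. A minimal sketch (with illustrative layer sizes, not the question's model):

from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(64, activation='relu', input_shape=(100,)),
    keras.layers.Dense(10, activation='softmax'),
])
model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])
# Only after compile() can training calls such as fit() or
# fit_generator() run without raising the RuntimeError above.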
