Model not improving with GradientTape but with model.fit() - python

I am currently trying to train a model using tf.GradientTape, because model.fit(...) from keras will not be able to handle my data input in the future. However, while a test run with model.fit(...) and my model works perfectly, tf.GradientTape does not.
During training, the loss of the tf.GradientTape custom workflow first decreases slightly, but then gets stuck and does not improve any further, no matter how many epochs I run. The chosen metric also does not change after the first few batches. Additionally, the loss per batch is unstable and jumps between nearly zero and something very large. The running loss is more stable, but still shows the model not improving.
This is all in contrast to using model.fit(...), where loss and metrics are improving immediately.
My code:
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import (Input, Embedding, Concatenate, Bidirectional,
                                     LSTM, Activation, Dense)
from tensorflow.keras.regularizers import l2
from tensorflow.keras.optimizers import Adam

def build_model(kernel_regularizer=l2(0.0001), dropout=0.001, recurrent_dropout=0.):
    x1 = Input(62)
    x2 = Input((62, 3))
    x = Embedding(30, 100, mask_zero=True)(x1)
    x = Concatenate()([x, x2])
    x = Bidirectional(LSTM(500,
                           return_sequences=True,
                           kernel_regularizer=kernel_regularizer,
                           dropout=dropout,
                           recurrent_dropout=recurrent_dropout))(x)
    x = Bidirectional(LSTM(500,
                           return_sequences=False,
                           kernel_regularizer=kernel_regularizer,
                           dropout=dropout,
                           recurrent_dropout=recurrent_dropout))(x)
    x = Activation('softmax')(x)
    x = Dense(1000)(x)
    x = Dense(500)(x)
    x = Dense(250)(x)
    x = Dense(1, bias_initializer='ones')(x)
    x = tf.math.abs(x)
    return Model(inputs=[x1, x2], outputs=x)
optimizer = Adam(learning_rate=0.0001)
model = build_model()
model.compile(optimizer=optimizer, loss='mse', metrics='mse')

options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.DATA

dat_train = tf.data.Dataset.from_generator(
    generator=lambda: <load_function()>,
    output_types=((tf.int32, tf.float32), tf.float32)
)
dat_train = dat_train.with_options(options)

# keras training
model.fit(dat_train, epochs=50)

# custom training
for epoch in range(50):
    for (x1, x2), y in dat_train:
        with tf.GradientTape() as tape:
            y_pred = model((x1, x2), training=True)
            loss = model.loss(y, y_pred)
        grads = tape.gradient(loss, model.trainable_variables)
        model.optimizer.apply_gradients(zip(grads, model.trainable_variables))
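One thing I noticed while debugging (not the eventual fix, just a general Keras behaviour): model.fit(...) automatically adds the kernel_regularizer penalties collected in model.losses to the objective, while model.loss(y, y_pred) in my loop does not. A sketch of the loop with those terms included:

# Sketch only: same loop as above, but adding the L2 regularization penalties
# collected in model.losses, which fit() would include automatically.
for epoch in range(50):
    for (x1, x2), y in dat_train:
        with tf.GradientTape() as tape:
            y_pred = model((x1, x2), training=True)
            loss = model.loss(y, y_pred)
            # kernel_regularizer terms live in model.losses
            loss += tf.add_n(model.losses) if model.losses else 0.0
        grads = tape.gradient(loss, model.trainable_variables)
        model.optimizer.apply_gradients(zip(grads, model.trainable_variables))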
I could use relu at the output layer; however, I found abs to be more robust, and changing it does not change the outcome. The input x1 of the model is a sequence; x2 contains some additional features that are later concatenated to the embedded x1 sequence. For my actual approach I'm not using MSE, but the behavior is the same either way.
I could provide some data, however, my dataset is quite large, so I would need to extract a bit out of it.
All in all, my problem seems to be similar to:
Keras model doesn't train when using GradientTape
Edit 1
The softmax activation is currently not necessary, but is relevant for my future goal of splitting the model.
Additionally, some things I noticed:
The custom training takes roughly 2x the amount of time compared to model.fit(...).
The gradients in the custom training seem very small and range from ±1e-3 to ±1e-9 inside the model. I don't know if that's normal and I don't know how to compare it to the gradients produced by model.fit(...); a small sketch for inspecting them is shown below.
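In case it helps with the comparison, one way to inspect gradient magnitudes inside the custom loop (a debugging sketch, not part of the original code) is:

# Debugging sketch: inspect gradient magnitudes right after computing them
grads = tape.gradient(loss, model.trainable_variables)
print("global norm:", tf.linalg.global_norm(grads).numpy())
for var, grad in zip(model.trainable_variables, grads):
    if grad is not None:
        print(var.name, float(tf.reduce_max(tf.abs(grad))))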
Edit 2
I've added a Google Colab notebook to reproduce the issue:
https://colab.research.google.com/drive/1pk66rbiux5vHZcav9VNSBhdWWIhQM-nF?usp=sharing
The loss and MSE over 20 epochs are shown in two plots, one for the custom training run and one for the keras training run.
While I only used a portion of my data in the notebook, it will still run for a very long time. For the custom training run, the loss for each batch is simply stored in losses; it matches the behavior shown in the custom training plot.
So far, I've noticed two ways of improving the performance of the custom training:
The usage of custom layer initialization
Using MSE as a loss function
Using MSE, compared to my own loss function, actually improves the custom training performance. Still, using MSE and/or different initialization doesn't come close to the performance of keras fit.

I have found the solution: it was a simple shape mismatch, which was somehow not picked up by any error check and which occurred both with my custom loss function and with MSE. Using x = Reshape(())(x) as the final layer did the trick.
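To illustrate the kind of silent broadcasting such a mismatch can cause (a sketch with made-up shapes, not the exact tensors from the notebook): if the targets have shape (batch,) while the predictions have shape (batch, 1), elementwise operations broadcast to (batch, batch) instead of raising an error, so the per-sample pairing is lost:

import tensorflow as tf

y_true = tf.zeros((32,))     # targets with shape (batch,)
y_pred = tf.ones((32, 1))    # model output with shape (batch, 1)

# Broadcasting silently produces a (32, 32) difference matrix instead of an error.
print((y_true - y_pred).shape)                          # (32, 32)

# Removing the trailing axis (which is what the Reshape(()) output layer achieves
# per sample) restores the intended element-wise pairing.
print((y_true - tf.squeeze(y_pred, axis=-1)).shape)     # (32,)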

Related

Keras multiple input, output, loss model

I am working on a super-resolution GAN and have some doubts about code I found on GitHub. In particular, I have multiple inputs and multiple outputs in the model. Also, I have two different loss functions.
In the following code will the mse loss be applied to img_hr and fake_features?
# Build and compile the discriminator
self.discriminator = self.build_discriminator()
self.discriminator.compile(loss='mse',
                           optimizer=optimizer,
                           metrics=['accuracy'])

# Build the generator
self.generator = self.build_generator()

# High res. and low res. images
img_hr = Input(shape=self.hr_shape)
img_lr = Input(shape=self.lr_shape)

# Generate high res. version from low res.
fake_hr = self.generator(img_lr)

# Extract image features of the generated img
fake_features = self.vgg(fake_hr)

# For the combined model we will only train the generator
self.discriminator.trainable = False

# Discriminator determines validity of generated high res. images
validity = self.discriminator(fake_hr)

self.combined = Model([img_lr, img_hr], [validity, fake_features])
self.combined.compile(loss=['binary_crossentropy', 'mse'],
                      loss_weights=[1e-3, 1],
                      optimizer=optimizer)
In the following code, will the mse loss be applied to img_hr and fake_features?
From the documentation, https://keras.io/models/model/#compile
"If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses."
In this case, the mse loss will be applied to fake_features and the corresponding y_true passed as part of self.combined.fit().
In neural networks, a loss is applied to the outputs of a network in order to have a measurement of "how wrong is this output?", so you can take this value and minimize it via gradient descent and backpropagation.
Following this intuition, the losses in Keras are given as a list with the same length as the outputs of your model. They are applied to the output with the same index.
self.combined = Model([img_lr, img_hr], [validity, fake_features])
This gives you a model with 2 inputs (img_lr, img_hr) and 2 outputs (validity, fake_features). So combined.compile(loss=['binary_crossentropy', 'mse'], ...) uses the binary_crossentropy loss for validity and mean squared error for fake_features.
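To make the index matching concrete, a corresponding training call (a sketch with hypothetical target arrays, not taken from the original repository) would pair targets with outputs in the same order:

# Sketch: targets are passed in the same order as the model's outputs, so
# binary_crossentropy sees `valid` and mse sees `real_features`.
# `imgs_lr`, `imgs_hr`, `valid` and `real_features` are hypothetical arrays here.
self.combined.fit([imgs_lr, imgs_hr],       # inputs:  img_lr, img_hr
                  [valid, real_features],   # targets: for validity, for fake_features
                  batch_size=16, epochs=1)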

Custom Neural Network Implementation on MNIST using Tensorflow 2.0?

I tried to write a custom implementation of a basic neural network with two hidden layers on the MNIST dataset using TensorFlow 2.0 beta, but I'm not sure what went wrong here: my training loss and accuracy seem to be stuck at 1.5 and around 85% respectively. But if I build the same network using Keras, I get very low training loss and accuracy above 95% with just 8-10 epochs.
I believe that maybe I'm not updating my weights or something? Do I need to assign the new weights that I compute in the backprop function back to their respective weight/bias variables?
I would really appreciate it if someone could help me out with this and with the few more questions that I've mentioned below.
A few more questions:
1) How to add a Dropout and Batch Normalization layer in this custom implementation? (i.e. making it work for both train and test time)
2) How can I use callbacks in this code? i.e. (making use of EarlyStopping and ModelCheckpoint callbacks)
3) Is there anything else in my code below that I can optimize further, like maybe making use of the tensorflow 2.x @tf.function decorator etc.?
4) I would also need to extract the final weights that I obtain, for plotting and checking their distributions, to investigate issues like gradient vanishing or exploding. (E.g. maybe with Tensorboard.)
5) I also want help in writing this code in a more generalized way so I can easily implement other networks like ConvNets (i.e. Conv, MaxPool, etc.) based on this code.
Here's my full code for easy reproducibility :
Note: I know I can use high-level API like Keras to build the model much easier but that is not my goal here. Please understand.
import numpy as np
import os
import logging
logging.getLogger('tensorflow').setLevel(logging.ERROR)
import tensorflow as tf
import tensorflow_datasets as tfds

(x_train, y_train), (x_test, y_test) = tfds.load('mnist', split=['train', 'test'],
                                                 batch_size=-1, as_supervised=True)

# reshaping
x_train = tf.reshape(x_train, shape=(x_train.shape[0], 784))
x_test = tf.reshape(x_test, shape=(x_test.shape[0], 784))

ds_train = tf.data.Dataset.from_tensor_slices((x_train, y_train))
# rescaling
ds_train = ds_train.map(lambda x, y: (tf.cast(x, tf.float32)/255.0, y))

class Model(object):
    def __init__(self, hidden1_size, hidden2_size, device=None):
        # layer sizes along with input and output
        self.input_size, self.output_size, self.device = 784, 10, device
        self.hidden1_size, self.hidden2_size = hidden1_size, hidden2_size
        self.lr_rate = 1e-03

        # weights initialization
        self.glorot_init = tf.initializers.glorot_uniform(seed=42)
        # weights b/w input to hidden1 --> 1
        self.w_h1 = tf.Variable(self.glorot_init((self.input_size, self.hidden1_size)))
        # weights b/w hidden1 to hidden2 ---> 2
        self.w_h2 = tf.Variable(self.glorot_init((self.hidden1_size, self.hidden2_size)))
        # weights b/w hidden2 to output ---> 3
        self.w_out = tf.Variable(self.glorot_init((self.hidden2_size, self.output_size)))

        # bias initialization
        self.b1 = tf.Variable(self.glorot_init((self.hidden1_size,)))
        self.b2 = tf.Variable(self.glorot_init((self.hidden2_size,)))
        self.b_out = tf.Variable(self.glorot_init((self.output_size,)))

        self.variables = [self.w_h1, self.b1, self.w_h2, self.b2, self.w_out, self.b_out]

    def feed_forward(self, x):
        if self.device is not None:
            with tf.device('gpu:0' if self.device == 'gpu' else 'cpu'):
                # layer1
                self.layer1 = tf.nn.sigmoid(tf.add(tf.matmul(x, self.w_h1), self.b1))
                # layer2
                self.layer2 = tf.nn.sigmoid(tf.add(tf.matmul(self.layer1, self.w_h2), self.b2))
                # output layer
                self.output = tf.nn.softmax(tf.add(tf.matmul(self.layer2, self.w_out), self.b_out))
        return self.output

    def loss_fn(self, y_pred, y_true):
        self.loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y_true,
                                                                   logits=y_pred)
        return tf.reduce_mean(self.loss)

    def acc_fn(self, y_pred, y_true):
        y_pred = tf.cast(tf.argmax(y_pred, axis=1), tf.int32)
        y_true = tf.cast(y_true, tf.int32)
        predictions = tf.cast(tf.equal(y_true, y_pred), tf.float32)
        return tf.reduce_mean(predictions)

    def backward_prop(self, batch_xs, batch_ys):
        optimizer = tf.keras.optimizers.Adam(learning_rate=self.lr_rate)
        with tf.GradientTape() as tape:
            predicted = self.feed_forward(batch_xs)
            step_loss = self.loss_fn(predicted, batch_ys)
        grads = tape.gradient(step_loss, self.variables)
        optimizer.apply_gradients(zip(grads, self.variables))

n_shape = x_train.shape[0]
epochs = 20
batch_size = 128

ds_train = ds_train.repeat().shuffle(n_shape).batch(batch_size).prefetch(batch_size)

neural_net = Model(512, 256, 'gpu')

for epoch in range(epochs):
    no_steps = n_shape // batch_size
    avg_loss = 0.
    avg_acc = 0.
    for (batch_xs, batch_ys) in ds_train.take(no_steps):
        preds = neural_net.feed_forward(batch_xs)
        avg_loss += float(neural_net.loss_fn(preds, batch_ys) / no_steps)
        avg_acc += float(neural_net.acc_fn(preds, batch_ys) / no_steps)
        neural_net.backward_prop(batch_xs, batch_ys)
    print(f'Epoch: {epoch}, Training Loss: {avg_loss}, Training ACC: {avg_acc}')
# output for 10 epochs:
Epoch: 0, Training Loss: 1.7005115111824125, Training ACC: 0.7603832868262543
Epoch: 1, Training Loss: 1.6052448933478445, Training ACC: 0.8524806404020637
Epoch: 2, Training Loss: 1.5905528008006513, Training ACC: 0.8664196092868224
Epoch: 3, Training Loss: 1.584107405738905, Training ACC: 0.8727630912326276
Epoch: 4, Training Loss: 1.5792385798413306, Training ACC: 0.8773203844903037
Epoch: 5, Training Loss: 1.5759121985174716, Training ACC: 0.8804754322627559
Epoch: 6, Training Loss: 1.5739163148682564, Training ACC: 0.8826455712551251
Epoch: 7, Training Loss: 1.5722616605926305, Training ACC: 0.8840812018606812
Epoch: 8, Training Loss: 1.569699136307463, Training ACC: 0.8867688354803249
Epoch: 9, Training Loss: 1.5679460542742163, Training ACC: 0.8885049475356936
I wondered where to start with your multi-part question, and I decided to do so with a statement:
Your code definitely should not look like that and is nowhere near current TensorFlow best practices.
Sorry, but debugging it step by step is a waste of everyone's time and would not benefit either of us.
Now, moving to the third point:
Is there anything else in my code below that I can optimize further in this code, like maybe making use of the tensorflow 2.x @tf.function decorator etc.?
Yes, you can use tensorflow 2.0 functionalities, and it seems like you are running away from those (the tf.function decorator is actually of no use here; leave it for the time being).
Following new guidelines would alleviate your problems with your 5th point as well, namely:
I also want help in writing this code in a more generalized way so
I can easily implement other networks like ConvNets (i.e Conv, MaxPool
etc.) based on this code easily.
as it's designed specifically for that. After a little introduction I will try to introduce you to those concepts in a few steps:
1. Divide your program into logical parts
Tensorflow did much harm when it comes to code readability; everything in tf1.x was usually crunched in one place: globals followed by a function definition, followed by more globals or maybe data loading, all in all a mess. It's not really the developers' fault, as the system's design encouraged those practices.
Now, in tf2.0 the programmer is encouraged to divide their work similarly to the structure one can see in pytorch, chainer and other more user-friendly frameworks.
1.1 Data loading
You were on a good path with Tensorflow Datasets, but you turned away for no apparent reason.
Here is your code with commentary on what's going on:
# You already have tf.data.Dataset objects after load
(x_train, y_train), (x_test, y_test) = tfds.load('mnist', split=['train', 'test'],
                                                 batch_size=-1, as_supervised=True)

# But you are reshaping them in a strange manner...
x_train = tf.reshape(x_train, shape=(x_train.shape[0], 784))
x_test = tf.reshape(x_test, shape=(x_test.shape[0], 784))

# And building from slices...
ds_train = tf.data.Dataset.from_tensor_slices((x_train, y_train))
# Unreadable rescaling (there are built-ins for that)
You can easily generalize this idea for any dataset; place this in a separate module, say datasets.py:
import tensorflow as tf
import tensorflow_datasets as tfds

class ImageDatasetCreator:
    @classmethod
    # More portable and readable than dividing by 255
    def _convert_image_dtype(cls, dataset):
        return dataset.map(
            lambda image, label: (
                tf.image.convert_image_dtype(image, tf.float32),
                label,
            )
        )

    def __init__(self, name: str, batch: int, cache: bool = True, split=None):
        # Load dataset, every dataset has default train, test split
        dataset = tfds.load(name, as_supervised=True, split=split)
        # Convert to float range
        try:
            self.train = ImageDatasetCreator._convert_image_dtype(dataset["train"])
            self.test = ImageDatasetCreator._convert_image_dtype(dataset["test"])
        except KeyError as exception:
            raise ValueError(
                f"Dataset {name} does not have train and test, write your own custom dataset handler."
            ) from exception

        if cache:
            self.train = self.train.cache()  # speed things up considerably
            self.test = self.test.cache()

        self.batch: int = batch

    def get_train(self):
        # shuffle requires a buffer size; 10000 is an arbitrary choice here
        return self.train.shuffle(buffer_size=10000).batch(self.batch).repeat()

    def get_test(self):
        return self.test.batch(self.batch).repeat()
So now you can load more than mnist using a simple command:
from datasets import ImageDatasetCreator

if __name__ == "__main__":
    dataloader = ImageDatasetCreator("mnist", batch=64, cache=True)
    train, test = dataloader.get_train(), dataloader.get_test()
And you can use any name other than mnist to load datasets from now on.
Please stop making everything deep-learning related into one-off scripts; you are a programmer as well.
1.2 Model creation
Since tf2.0 there are two advised ways one can proceed, depending on the model's complexity:
tensorflow.keras.models.Sequential - this way was shown by @Stewart_R, no need to reiterate his points. It is used for the simplest models (you should use this one with your feedforward network).
Inheriting from tensorflow.keras.Model and writing a custom model. This one should be used when you have some kind of logic inside your module or it's more complicated (things like ResNets, multipath networks etc.). All in all, it is more readable and customizable.
Your Model class tried to resemble something like that, but it went south again; backprop is definitely not part of the model itself, and neither is loss or accuracy; separate them into another module or function, definitely not a member!
That said, let's code the network using the second approach (you should place this code in model.py for brevity). Before that, I will code a YourDense feedforward layer from scratch by inheriting from tf.keras.layers.Layer (this one might go into a layers.py module):
import tensorflow as tf

class YourDense(tf.keras.layers.Layer):
    def __init__(self, units):
        # It's Python 3, you don't have to specify super parents explicitly
        super().__init__()
        self.units = units

    # Use build to create variables, as shape can be inferred from previous layers
    # If you were to create layers in __init__, one would have to provide input_shape
    # (same as it occurs in PyTorch for example)
    def build(self, input_shape):
        # You could use different initializers here as well
        self.kernel = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer="random_normal",
            trainable=True,
        )
        # You could define bias in __init__ as well as it's not input dependent
        self.bias = self.add_weight(shape=(self.units,), initializer="random_normal")
        # Oh, trainable=True is default

    def call(self, inputs):
        # Use overloaded operators instead of tf.add, better readability
        return tf.matmul(inputs, self.kernel) + self.bias
Regarding your
How to add a Dropout and Batch Normalization layer in this custom
implementation? (i.e making it work for both train and test time)
I suppose you would like to create a custom implementation of those layers.
If not, you can just import from tensorflow.keras.layers import Dropout and use it anywhere you want, as @Leevo pointed out.
Inverted dropout with different behaviour during train and test below:
import tensorflow as tf
from tensorflow.keras import layers

class CustomDropout(layers.Layer):
    def __init__(self, rate, **kwargs):
        super().__init__(**kwargs)
        self.rate = rate

    def call(self, inputs, training=None):
        if training:
            # You could simply create binary mask and multiply here
            return tf.nn.dropout(inputs, rate=self.rate)
        # You would need to multiply by dropout rate if you were to do that
        return inputs
Layers taken from here and modified to better fit showcasing purposes.
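For completeness, the binary-mask variant hinted at in the comments could look roughly like this (a sketch, not part of the original answer); the mask keeps each unit with probability 1 - rate and rescales the survivors so the expected activation matches test time:

import tensorflow as tf
from tensorflow.keras import layers

class ManualInvertedDropout(layers.Layer):
    # Sketch of inverted dropout using an explicit binary mask.
    def __init__(self, rate, **kwargs):
        super().__init__(**kwargs)
        self.rate = rate

    def call(self, inputs, training=None):
        if training:
            # Keep each unit with probability (1 - rate)...
            keep_mask = tf.cast(
                tf.random.uniform(tf.shape(inputs)) >= self.rate, inputs.dtype
            )
            # ...and rescale so the expected value matches inference time.
            return inputs * keep_mask / (1.0 - self.rate)
        return inputs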
Now you can finally create your model (a simple double feedforward):
import tensorflow as tf
from layers import YourDense

class Model(tf.keras.Model):
    def __init__(self):
        super().__init__()
        # Use Sequential here for readability
        self.network = tf.keras.Sequential(
            [YourDense(100), tf.keras.layers.ReLU(), YourDense(10)]
        )

    def call(self, inputs):
        # You can use non-parametric layers inside call as well
        flattened = tf.keras.layers.Flatten()(inputs)
        return self.network(flattened)
Of course, you should use built-ins as much as possible in general implementations.
This structure is pretty extensible, so generalization to convolutional nets, resnets, senets, whatever should be done via this module. You can read more about it here.
I think it fulfills your 5th point:
I also want help in writing this code in a more generalized way so
I can easily implement other networks like ConvNets (i.e Conv, MaxPool
etc.) based on this code easily.
Last thing, you may have to use model.build(shape) in order to build your model's graph.
model.build((None, 28, 28, 1))
This would be for MNIST's 28x28x1 input shape, where None stands for batch.
1.3 Training
Once again, training could be done in two separate ways:
standard Keras model.fit(dataset) - useful in simple tasks like classification
tf.GradientTape - more complicated training schemes, most prominent example would be Generative Adversarial Networks, where two models optimize orthogonal goals playing minmax game
As pointed out by @Leevo once again, if you are to use the second way, you won't be able to simply use the callbacks provided by Keras, hence I'd advise sticking with the first option whenever possible.
In theory you could call the callbacks' functions manually, like on_batch_begin() and others where needed, but it would be cumbersome and I'm not sure how well this would work; a rough sketch is shown below.
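Roughly, such manual wiring could look like this (an assumption about how one might drive a callback from a custom loop, not something tested here); model and dataset are assumed to exist already:

import tensorflow as tf

# Sketch: driving a Keras callback by hand inside a custom training loop.
callback = tf.keras.callbacks.TensorBoard(log_dir="logs")
callback.set_model(model)

callback.on_train_begin()
for epoch in range(5):
    callback.on_epoch_begin(epoch)
    for step, (x, y) in enumerate(dataset):
        callback.on_batch_begin(step)
        # ... forward pass, tape.gradient, optimizer.apply_gradients ...
        callback.on_batch_end(step, logs={"loss": 0.0})  # pass the real loss here
    callback.on_epoch_end(epoch, logs={})
callback.on_train_end()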
When it comes to the first option, you can use tf.data.Dataset objects directly with fit. Here it is, presented inside another module (preferably train.py):
import datetime
import pathlib

import tensorflow as tf

def train(
    model: tf.keras.Model,
    path: str,
    train: tf.data.Dataset,
    epochs: int,
    steps_per_epoch: int,
    validation: tf.data.Dataset,
    steps_per_validation: int,
    stopping_epochs: int,
    optimizer=tf.optimizers.Adam(),
):
    model.compile(
        optimizer=optimizer,
        # I used logits as output from the last layer, hence this
        loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=[tf.metrics.SparseCategoricalAccuracy()],
    )
    model.fit(
        train,
        epochs=epochs,
        steps_per_epoch=steps_per_epoch,
        validation_data=validation,
        validation_steps=steps_per_validation,
        callbacks=[
            # Tensorboard logging
            tf.keras.callbacks.TensorBoard(
                pathlib.Path("logs")
                / pathlib.Path(datetime.datetime.now().strftime("%Y%m%d-%H%M%S")),
                histogram_freq=1,
            ),
            # Early stopping with best weights preserving
            tf.keras.callbacks.EarlyStopping(
                monitor="val_sparse_categorical_accuracy",
                patience=stopping_epochs,
                restore_best_weights=True,
            ),
        ],
    )
    model.save(path)
The more complicated approach is very similar (almost copy and paste) to PyTorch training loops, so if you are familiar with those, they should not pose much of a problem.
You can find examples throughout tf2.0 docs, e.g. here or here.
2. Other things
2.1 Unanswered questions
Is there anything else in the code that I can optimize further in this code? i.e. (making use of the tensorflow 2.x @tf.function decorator etc.)
The above already transforms the model into graphs, hence I don't think you would benefit from calling it in this case. And premature optimization is the root of all evil; remember to measure your code before doing this.
You would gain much more with proper caching of data (as described at the beginning of point 1.1) and a good pipeline than from anything else.
Also I need a way to extract all my final weights for all layers
after training so I can plot them and check their distributions. To
check issues like gradient vanishing or exploding.
As pointed out by @Leevo above,
weights = model.get_weights()
would get you the weights. You may transform them into an np.array and plot them using seaborn or matplotlib, analyze them, check them, or whatever else you want.
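A minimal sketch of such a check (assuming a trained Keras model named model) could be:

import matplotlib.pyplot as plt
import numpy as np

# Sketch: flatten every weight tensor and look at its distribution.
for i, w in enumerate(model.get_weights()):
    values = np.asarray(w).ravel()
    plt.hist(values, bins=50, alpha=0.5, label=f"tensor {i}")
plt.legend()
plt.title("Weight distributions after training")
plt.show()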
2.2 Putting it all together
All in all, your main.py (or entrypoint or something similar) would consist of this (more or less):
from datasets import ImageDatasetCreator
from model import Model
from train import train

# You could use argparse for things like batch, epochs etc.
if __name__ == "__main__":
    dataloader = ImageDatasetCreator("mnist", batch=64, cache=True)
    # renamed to train_ds/test_ds so the imported train() function is not shadowed
    train_ds, test_ds = dataloader.get_train(), dataloader.get_test()

    model = Model()
    model.build((None, 28, 28, 1))

    train(
        model, path, train_ds, epochs, steps_per_epoch, test_ds, validation_steps, ...
    )  # provide the necessary arguments appropriately

    # Do whatever you want with those
    weights = model.get_weights()
Oh, and remember that the functions above are not meant for copy-pasting and should be treated more like a guideline. Hit me up if you have any questions.
3. Questions from comments
3.1 How to initialize custom and built-in layers
3.1.1 TLDR of what you are about to read
A custom Poisson initialization function is used, but it takes three arguments
The tf.keras.initializers API needs two arguments (see the last point in their docs), hence one is specified via a Python lambda inside the custom layer we have written before
An optional bias for the layer is added, which can be turned off with a boolean
Why is it so uselessly complicated? To show that in tf2.0 you can finally use Python's functionality: no more graph hassle, if instead of tf.cond etc.
3.1.2 From TLDR to implementation
Keras initializers can be found here and Tensorflow's flavor here.
Please note the API inconsistencies (capital letters like classes, small letters with underscores like functions), especially in tf2.0, but that's beside the point.
You can use them by passing a string (as is done in YourDense above) or during object creation.
To allow for custom initialization in your custom layers, you can simply add an additional argument to the constructor (the tf.keras.Model class is still a Python class and its __init__ should be used the same as Python's).
Before that, I will show you how to create custom initialization:
# Poisson custom initialization because why not.
def my_dumb_init(shape, lam, dtype=None):
    return tf.squeeze(tf.random.poisson(shape, lam, dtype=dtype))
Notice that its signature takes three arguments, while it should take (shape, dtype) only. Still, one can "fix" this easily while creating one's own layer, like the extended YourDense below:
import typing

import tensorflow as tf

class YourDense(tf.keras.layers.Layer):
    # It's still Python, use it as Python, that's the point of tf 2.0
    @classmethod
    def register_initialization(cls, initializer):
        # Set defaults if init not provided by user
        if initializer is None:
            # let's make the signature proper for init in tf.keras
            return lambda shape, dtype: my_dumb_init(shape, 1, dtype)
        return initializer

    def __init__(
        self,
        units: int,
        bias: bool = True,
        # can be string or callable, some typing info added as well...
        kernel_initializer: typing.Union[str, typing.Callable] = None,
        bias_initializer: typing.Union[str, typing.Callable] = None,
    ):
        super().__init__()
        self.units: int = units
        self.kernel_initializer = YourDense.register_initialization(kernel_initializer)
        if bias:
            self.bias_initializer = YourDense.register_initialization(bias_initializer)
        else:
            self.bias_initializer = None

    def build(self, input_shape):
        # Simply pass your init here
        self.kernel = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer=self.kernel_initializer,
            trainable=True,
        )
        if self.bias_initializer is not None:
            self.bias = self.add_weight(
                shape=(self.units,), initializer=self.bias_initializer
            )
        else:
            self.bias = None

    def call(self, inputs):
        weights = tf.matmul(inputs, self.kernel)
        if self.bias is not None:
            return weights + self.bias
        return weights
I have added my_dumb_init as the default (if the user does not provide one) and made the bias optional with the bias argument. Note that you can use if freely as long as it's not data dependent. If it is (or is dependent on tf.Tensor somehow), one has to use the @tf.function decorator, which changes Python's flow to its TensorFlow counterpart (e.g. if to tf.cond).
See here for more on autograph, it's very easy to follow.
If you want to incorporate the above initializer changes into your model, you have to create the appropriate object and that's it.
... # Previous part of Model here
        self.network = tf.keras.Sequential(
            [
                YourDense(100, bias=False, kernel_initializer="lecun_uniform"),
                tf.keras.layers.ReLU(),
                YourDense(10, bias_initializer=tf.initializers.Ones()),
            ]
        )
... # and the same afterwards
With the built-in tf.keras.layers.Dense layers one can do the same (argument names differ, but the idea holds).
3.2 Automatic Differentiation using tf.GradientTape
3.2.1 Intro
The point of tf.GradientTape is to allow users normal Python control flow and gradient calculation of variables with respect to other variables.
Example taken from here but broken into separate pieces:
def f(x, y):
    output = 1.0
    for i in range(y):
        if i > 1 and i < 5:
            output = tf.multiply(output, x)
    return output
This is a regular Python function with for and if flow control statements.
def grad(x, y):
    with tf.GradientTape() as t:
        t.watch(x)
        out = f(x, y)
    return t.gradient(out, x)
Using gradient tape you can record all operations on Tensors (and their intermediate states as well) and "play" them backwards (perform automatic backward differentiation using the chain rule).
Every Tensor within the tf.GradientTape() context manager is recorded automatically. If some Tensor is out of scope, use the watch() method as one can see above.
Finally, the gradient of output with respect to x (the input) is returned.
3.2.2 Connection with deep learning
What was described above is the backpropagation algorithm. Gradients w.r.t. (with respect to) outputs are calculated for each node in the network (or rather for every layer). Those gradients are then used by various optimizers to make corrections, and so it repeats.
Let's continue and assume you have your tf.keras.Model, optimizer instance, tf.data.Dataset and loss function already set up.
One can define a Trainer class which will perform training for us. Please read comments in the code if in doubt:
class Trainer:
    def __init__(self, model, optimizer, loss_function):
        self.model = model
        self.loss_function = loss_function
        self.optimizer = optimizer
        # You could pass custom metrics in constructor
        # and adjust train_step and test_step accordingly
        self.train_loss = tf.keras.metrics.Mean(name="train_loss")
        self.test_loss = tf.keras.metrics.Mean(name="test_loss")

    def train_step(self, x, y):
        # Setup tape
        with tf.GradientTape() as tape:
            # Get current predictions of network
            y_pred = self.model(x)
            # Calculate loss generated by predictions
            loss = self.loss_function(y, y_pred)
        # Get gradients of loss w.r.t. EVERY trainable variable (iterable returned)
        gradients = tape.gradient(loss, self.model.trainable_variables)
        # Change trainable variable values according to gradient by applying optimizer policy
        self.optimizer.apply_gradients(zip(gradients, self.model.trainable_variables))
        # Record loss of current step
        self.train_loss(loss)

    def train(self, dataset):
        # For N epochs iterate over dataset and perform train steps each time
        for x, y in dataset:
            self.train_step(x, y)

    def test_step(self, x, y):
        # Record test loss separately
        self.test_loss(self.loss_function(y, self.model(x)))

    def test(self, dataset):
        # Iterate over whole dataset
        for x, y in dataset:
            self.test_step(x, y)

    def __str__(self):
        # You need Python 3.7 with f-string support
        # Just return metrics
        return f"Loss: {self.train_loss.result()}, Test Loss: {self.test_loss.result()}"
Now, you could use this class in your code really simply like this:
EPOCHS = 5

# model, optimizer, loss defined beforehand
trainer = Trainer(model, optimizer, loss)
for epoch in range(EPOCHS):
    trainer.train(train_dataset)  # Same for training and test datasets
    trainer.test(test_dataset)
    print(f"Epoch {epoch}: {trainer}")
The print would tell you the training and test loss for each epoch. You can mix training and testing any way you want (e.g. 5 epochs of training and 1 of testing), and you could add different metrics etc.
See here if you want a non-OOP oriented approach (IMO less readable, but to each their own).
Also If there's something I could improve in the code do let me know
as well.
Embrace the high-level API for something like this. You can do it in just a few lines of code and it's much easier to debug, read and reason about:
import tensorflow as tf
import tensorflow_datasets as tfds

(x_train, y_train), (x_test, y_test) = tfds.load('mnist', split=['train', 'test'],
                                                 batch_size=-1, as_supervised=True)
x_train = tf.cast(tf.reshape(x_train, shape=(x_train.shape[0], 784)), tf.float32)
x_test = tf.cast(tf.reshape(x_test, shape=(x_test.shape[0], 784)), tf.float32)

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(512, activation='sigmoid'),
    tf.keras.layers.Dense(256, activation='sigmoid'),
    tf.keras.layers.Dense(10, activation='softmax')
])
# compile step (added here; integer labels with a softmax output)
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
I tried to write a custom implementation of basic neural network with two hidden layers on MNIST dataset using tensorflow 2.0 beta but I'm not sure what went wrong here but my training loss and accuracy seems to be stuck at 1.5 and around 85% respectively.
Where is the training part? Training of TF 2.0 models is done either with Keras' syntax or with Eager execution using tf.GradientTape(). Can you paste the code with the conv and dense layers, and how you trained it?
Other questions:
1) How to add a Dropout layer in this custom implementation? i.e
(making it work for both train and test time)
You can add a Dropout() layer with:
from tensorflow.keras.layers import Dropout
And then you insert it into a Sequential() model just with:
Dropout(dprob) # where dprob = dropout probability
2) How to add Batch Normalization in this code?
Same as before, with:
from tensorflow.keras.layers import BatchNormalization
The choice of where to put batchnorm in the model, well, that's up to you. There is no rule of thumb; I suggest you make experiments. With ML it's always a trial and error process.
3) How can I use callbacks in this code? i.e (making use of
EarlyStopping and ModelCheckpoint callbacks)
If you are training using Keras' syntax, you can simply use them. Please check this very thorough tutorial on how to use callbacks. It just takes a few lines of code.
If you are running a model in Eager execution, you have to implement these techniques yourself, with your own code. It's more complex, but it also gives you more freedom in the implementation.
4) Is there anything else in the code that I can optimize further in this code? i.e. (making use of the tensorflow 2.x @tf.function decorator etc.)
It depends. If you are using Keras syntax, I don't think you need to add more to it. In case you are training the model in Eager execution, then I'd suggest you use the @tf.function decorator on some function to speed things up a bit.
You can see a practical TF 2.0 example on how to use the decorator in this Notebook.
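As a rough illustration of what that typically looks like (a generic sketch, not taken from the linked notebook; model, optimizer and loss_fn are assumed to exist already), the decorator is usually put on the per-batch step:

import tensorflow as tf

# Sketch: compiling the per-batch training step into a graph with @tf.function.
@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        y_pred = model(x, training=True)
        loss = loss_fn(y, y_pred)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss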
Other than this, I suggest you play with regularization techniques such as weight initializations, L1-L2 regularization, etc.
5) Also I need a way to extract all my final weights for all layers
after training so I can plot them and check their distributions. To
check issues like gradient vanishing or exploding.
Once the model is trained, you can extract its weights with:
weights = model.get_weights()
or:
weights = model.trainable_weights
if you want to keep only the trainable ones.
6) I also want help in writing this code in a more generalized way so
I can easily implement other networks like convolutional network (i.e
Conv, MaxPool etc.) based on this code easily.
You can pack all your code into a function and then reuse it. At the end of this Notebook I did something like this (it's for a feed-forward NN, which is much simpler, but that's a start and you can change the code according to your needs).
---
UPDATE:
Please check my TensorFlow 2.0 implementation of a CNN classifier. This might be a useful hint: it is trained on the Fashion MNIST dataset, which makes it very similar to your task.

Transfer learning with pretrained model by tf.GradientTape can't converge

I would like to perform transfer learning with a pretrained Keras model:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

base_model = keras.applications.MobileNetV2(input_shape=(96, 96, 3), include_top=False, pooling='avg')
x = base_model.outputs[0]
outputs = layers.Dense(10, activation=tf.nn.softmax)(x)
model = keras.Model(inputs=base_model.inputs, outputs=outputs)
Training with the keras compile/fit functions converges:
model.compile(optimizer=keras.optimizers.Adam(), loss=keras.losses.SparseCategoricalCrossentropy(), metrics=['accuracy'])
history = model.fit(train_data, epochs=1)
The results are: loss: 0.4402 - accuracy: 0.8548
I want to train with tf.GradientTape, but it can't converge:
optimizer = keras.optimizers.Adam()
train_loss = keras.metrics.Mean()
train_acc = keras.metrics.SparseCategoricalAccuracy()

def train_step(data, labels):
    with tf.GradientTape() as gt:
        pred = model(data)
        loss = keras.losses.SparseCategoricalCrossentropy()(labels, pred)
    grads = gt.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    train_loss(loss)
    train_acc(labels, pred)

for xs, ys in train_data:
    train_step(xs, ys)

print('train_loss = {:.3f}, train_acc = {:.3f}'.format(train_loss.result(), train_acc.result()))
But the results are: train_loss = 7.576, train_acc = 0.101
If I only train the last layer by setting
base_model.trainable = False
it converges, and the results are: train_loss = 0.525, train_acc = 0.823.
What's the problem with the code? How should I modify it? Thanks.
Try ReLU as the activation function. It may be a vanishing gradient issue, which occurs if you use an activation function other than ReLU.
Following my comment, the reason why it didn't converge is that you picked a learning rate that was too big. This causes the weights to change too much and the loss to explode. When setting base_model.trainable to False, most of the weights in the network were fixed and the learning rate was a good fit for your last layers.
As a general rule, your learning rate should always be chosen for each experiment.
Edit: Following Wilson's comment, I'm not sure this is the reason you have different results, but this could be it:
When you specify your loss, it is computed on each element of the batch; then, to get the loss of the whole batch, you can take either the sum or the mean of the per-element losses. Depending on which one you choose, you get a different magnitude. For example, if your batch size is 64, summing the losses will yield a 64-times bigger loss, which will yield 64-times bigger gradients, so choosing sum over mean with a batch size of 64 is like picking a 64-times bigger learning rate.
So maybe the reason you have different results is that by default a keras.losses object wrapped by model.compile uses a different reduction method. In the same vein, if the loss is reduced by a sum, the magnitude of the loss depends on the batch size: if you double the batch size, you get (on average) twice the loss and twice the gradient, which is like doubling the learning rate.
My advice is to check the reduction method used by the loss to make sure it's the same in both cases, and if it's sum, to check that the batch size is the same. I would advise using mean reduction in general since it is not influenced by batch size.
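To make the reduction point concrete, here is a small sketch (with made-up values) showing how the two reduction modes scale the loss with batch size:

import tensorflow as tf

labels = tf.constant([1, 0, 3, 2])      # batch of 4 integer labels
logits = tf.random.normal((4, 10))      # hypothetical predictions
probs = tf.nn.softmax(logits)

mean_loss = tf.keras.losses.SparseCategoricalCrossentropy(
    reduction=tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE)(labels, probs)
sum_loss = tf.keras.losses.SparseCategoricalCrossentropy(
    reduction=tf.keras.losses.Reduction.SUM)(labels, probs)

# SUM is batch_size times larger than the mean, so with SUM the effective
# gradient (and hence the effective learning rate) grows with the batch size.
print(float(mean_loss), float(sum_loss), float(sum_loss) / float(mean_loss))  # ratio ≈ 4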

Training and testing CNN with pytorch. With and without model.eval()

I have two questions:
I am trying to train a convolutional neural network initialized with some pre-trained weights (the network contains batch normalization layers as well), taking reference from here. Before training I want to calculate a validation error using loss_fn = torch.nn.MSELoss().cuda().
In the reference, the author uses model.eval() before calculating the validation error. With it, however, the CNN model's output is off from what it should be, whereas when I comment out model.eval(), the output is good (what it should be with pre-trained weights). What could be the reason behind this? I have read in many posts that model.eval() should be used before testing the model and model.train() before training it.
While calculating the validation error with pre-trained weights and the above-mentioned loss function, what should the batch size be? Shouldn't it be 1, as I want an output for each of my inputs, to calculate the error against the ground truth and in the end take the average of all results? If I use a higher batch size, the error increases. So the question is: can I use a higher batch size, and if yes, what is the right way? In the given code I have err = float(loss_local) / num_samples, but I observed that without averaging, i.e. err = float(loss_local), the error is different for different batch sizes. I am doing this without model.eval() right now.
import numpy as np
import cv2
import torch
from torch.autograd import Variable
from torchvision import transforms
# NyuDepthLoader, Model, load_weights, flow_transforms and val_lists come from the referenced project

batch_size = 1
data_path = 'path_to_data'
dtype = torch.FloatTensor
weight_file = 'path_to_weight_file'

val_loader = torch.utils.data.DataLoader(NyuDepthLoader(data_path, val_lists),
                                         batch_size=batch_size, shuffle=True, drop_last=True)

model = Model(batch_size)
model.load_state_dict(load_weights(model, weight_file, dtype))
loss_fn = torch.nn.MSELoss().cuda()

# accumulators (not initialized in the original snippet)
loss_local = 0
num_samples = 0

# model.eval()
with torch.no_grad():
    for input, depth in val_loader:
        input_var = Variable(input.type(dtype))
        depth_var = Variable(depth.type(dtype))

        output = model(input_var)

        input_rgb_image = input_var[0].data.permute(1, 2, 0).cpu().numpy().astype(np.uint8)
        input_gt_depth_image = depth_var[0][0].data.cpu().numpy().astype(np.float32)
        pred_depth_image = output[0].data.squeeze().cpu().numpy().astype(np.float32)
        print(format(type(depth_var)))

        pred_depth_image_resize = cv2.resize(pred_depth_image, dsize=(608, 456),
                                             interpolation=cv2.INTER_LINEAR)
        target_depth_transform = transforms.Compose([flow_transforms.ArrayToTensor()])
        pred_depth_image_tensor = target_depth_transform(pred_depth_image_resize)

        # both inputs to loss_fn are 'torch.Tensor'
        loss_local += loss_fn(pred_depth_image_tensor, depth_var)

        num_samples += 1
        print('num_samples {}'.format(num_samples))

err = float(loss_local) / num_samples
print('val_error before train:', err)
What could be reason behind it as I have read on many posts that model.eval should be used before testing the model and model.train() before training it.
Note: testing the model is called inference.
As explained in the official documentation:
Remember that you must call model.eval() to set dropout and batch normalization layers to evaluation mode before running inference. Failing to do this will yield inconsistent inference results.
So this code must be present once you load the model from a file and do inference.
# Model class must be defined somewhere
model = torch.load(PATH)
model.eval()
This is because dropout works as regularization for preventing overfitting during training; it is not needed for inference. The same goes for the batch norms.
When you use eval(), it just sets the module's training flag to False and affects only certain types of modules, in particular Dropout and BatchNorm.
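A small sketch (made-up layer sizes) showing that toggling between train() and eval() only changes how Dropout and BatchNorm behave, by making repeated forward passes deterministic in eval mode:

import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(8, 8), nn.BatchNorm1d(8), nn.Dropout(p=0.5))
x = torch.randn(4, 8)

net.train()
print(torch.equal(net(x), net(x)))   # usually False: dropout masks differ between calls

net.eval()
print(torch.equal(net(x), net(x)))   # True: dropout is a no-op, BatchNorm uses running stats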

keras model.evaluate() does not show loss

I've created a neural network of the following form in keras:
from keras.layers import Dense, Activation, Input
from keras import Model

input_dim_v = 3
output_dim_v = 1   # assumed here; the original snippet referenced self.output_dim_v from surrounding code
hidden_dims = [100, 100, 100]

inputs = Input(shape=(input_dim_v,))
net = inputs
for h_dim in hidden_dims:
    net = Dense(h_dim)(net)
    net = Activation("elu")(net)
outputs = Dense(output_dim_v)(net)

model_v = Model(inputs=inputs, outputs=outputs)
model_v.compile(optimizer='adam', loss='mean_squared_error', metrics=['mse'])
Later, I train it on single examples using model_v.train_on_batch(X[i], y[i]).
To test whether the neural network is becoming a better function approximator, I want to periodically evaluate the model on the accumulated X and y (in my case, X and y grow over time). However, when I call model_v.evaluate(X, y), only the characteristic progress bars appear in the console, but neither the loss value nor the mse metric (which are the same in this case) is printed.
How can I change that?
The loss and metric values are not shown in the progress bar of the evaluate() method. Instead, they are returned as the output of the evaluate() method, so you can print them:
for i in range(n_iter):
    # ... get the i-th batch or sample
    # ... train the model using the `train_on_batch` method

    # evaluate the model on whole or part of test data
    loss_metric = model.evaluate(test_data, test_labels)
    print(loss_metric)
According to the documentation, if your model has multiple outputs and/or metrics, you can use the model.metrics_names attribute to find out what the values in loss_metric correspond to.
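For example (a small sketch assuming the model above, compiled with an MSE loss and an 'mse' metric), pairing the names with the returned values makes the output readable:

# Sketch: evaluate() returns [loss, metric1, ...] in the order given by metrics_names.
values = model_v.evaluate(X, y, verbose=0)
for name, value in zip(model_v.metrics_names, values):
    print(f"{name}: {value:.4f}")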
