Custom Neural Network Implementation on MNIST using TensorFlow 2.0 - python

I tried to write a custom implementation of a basic neural network with two hidden layers on the MNIST dataset using *TensorFlow 2.0 beta*, but I'm not sure what went wrong here: my training loss and accuracy seem to be stuck at 1.5 and around 85% respectively. But if I build the same model using Keras, I get a very low training loss and accuracy above 95% with just 8-10 epochs.
I believe that maybe I'm not updating my weights or something? Do I need to assign the new weights that I compute in the backprop function back to their respective weight/bias variables?
I would really appreciate it if someone could help me out with this and with the few more questions I've listed below.
A few more questions:
1) How do I add a Dropout and a Batch Normalization layer in this custom implementation (i.e. making them work at both train and test time)?
2) How can I use callbacks in this code (i.e. making use of the EarlyStopping and ModelCheckpoint callbacks)?
3) Is there anything else in my code below that I can optimize further, for example by making use of the TensorFlow 2.x @tf.function decorator?
4) I would also need to extract the final weights I obtain, for plotting and checking their distributions, to investigate issues like vanishing or exploding gradients. (E.g. maybe with TensorBoard.)
5) I also want help in writing this code in a more generalized way so that I can easily implement other networks, like ConvNets (i.e. Conv, MaxPool, etc.), based on this code.
Here's my full code for easy reproducibility:
Note: I know I can use a high-level API like Keras to build the model much more easily, but that is not my goal here. Please understand.
import numpy as np
import os
import logging
logging.getLogger('tensorflow').setLevel(logging.ERROR)
import tensorflow as tf
import tensorflow_datasets as tfds
(x_train, y_train), (x_test, y_test) = tfds.load('mnist', split=['train', 'test'],
                                                 batch_size=-1, as_supervised=True)
# reshaping
x_train = tf.reshape(x_train, shape=(x_train.shape[0], 784))
x_test = tf.reshape(x_test, shape=(x_test.shape[0], 784))
ds_train = tf.data.Dataset.from_tensor_slices((x_train, y_train))
# rescaling
ds_train = ds_train.map(lambda x, y: (tf.cast(x, tf.float32)/255.0, y))
class Model(object):
    def __init__(self, hidden1_size, hidden2_size, device=None):
        # layer sizes along with input and output
        self.input_size, self.output_size, self.device = 784, 10, device
        self.hidden1_size, self.hidden2_size = hidden1_size, hidden2_size
        self.lr_rate = 1e-03

        # weights initialization
        self.glorot_init = tf.initializers.glorot_uniform(seed=42)
        # weights b/w input to hidden1 --> 1
        self.w_h1 = tf.Variable(self.glorot_init((self.input_size, self.hidden1_size)))
        # weights b/w hidden1 to hidden2 ---> 2
        self.w_h2 = tf.Variable(self.glorot_init((self.hidden1_size, self.hidden2_size)))
        # weights b/w hidden2 to output ---> 3
        self.w_out = tf.Variable(self.glorot_init((self.hidden2_size, self.output_size)))

        # bias initialization
        self.b1 = tf.Variable(self.glorot_init((self.hidden1_size,)))
        self.b2 = tf.Variable(self.glorot_init((self.hidden2_size,)))
        self.b_out = tf.Variable(self.glorot_init((self.output_size,)))

        self.variables = [self.w_h1, self.b1, self.w_h2, self.b2, self.w_out, self.b_out]

    def feed_forward(self, x):
        if self.device is not None:
            with tf.device('gpu:0' if self.device == 'gpu' else 'cpu'):
                # layer1
                self.layer1 = tf.nn.sigmoid(tf.add(tf.matmul(x, self.w_h1), self.b1))
                # layer2
                self.layer2 = tf.nn.sigmoid(tf.add(tf.matmul(self.layer1, self.w_h2), self.b2))
                # output layer
                self.output = tf.nn.softmax(tf.add(tf.matmul(self.layer2, self.w_out), self.b_out))
        return self.output

    def loss_fn(self, y_pred, y_true):
        self.loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y_true,
                                                                   logits=y_pred)
        return tf.reduce_mean(self.loss)

    def acc_fn(self, y_pred, y_true):
        y_pred = tf.cast(tf.argmax(y_pred, axis=1), tf.int32)
        y_true = tf.cast(y_true, tf.int32)
        predictions = tf.cast(tf.equal(y_true, y_pred), tf.float32)
        return tf.reduce_mean(predictions)

    def backward_prop(self, batch_xs, batch_ys):
        optimizer = tf.keras.optimizers.Adam(learning_rate=self.lr_rate)
        with tf.GradientTape() as tape:
            predicted = self.feed_forward(batch_xs)
            step_loss = self.loss_fn(predicted, batch_ys)
        grads = tape.gradient(step_loss, self.variables)
        optimizer.apply_gradients(zip(grads, self.variables))


n_shape = x_train.shape[0]
epochs = 20
batch_size = 128

ds_train = ds_train.repeat().shuffle(n_shape).batch(batch_size).prefetch(batch_size)

neural_net = Model(512, 256, 'gpu')

for epoch in range(epochs):
    no_steps = n_shape // batch_size
    avg_loss = 0.
    avg_acc = 0.
    for (batch_xs, batch_ys) in ds_train.take(no_steps):
        preds = neural_net.feed_forward(batch_xs)
        avg_loss += float(neural_net.loss_fn(preds, batch_ys) / no_steps)
        avg_acc += float(neural_net.acc_fn(preds, batch_ys) / no_steps)
        neural_net.backward_prop(batch_xs, batch_ys)
    print(f'Epoch: {epoch}, Training Loss: {avg_loss}, Training ACC: {avg_acc}')
# output for 10 epochs:
Epoch: 0, Training Loss: 1.7005115111824125, Training ACC: 0.7603832868262543
Epoch: 1, Training Loss: 1.6052448933478445, Training ACC: 0.8524806404020637
Epoch: 2, Training Loss: 1.5905528008006513, Training ACC: 0.8664196092868224
Epoch: 3, Training Loss: 1.584107405738905, Training ACC: 0.8727630912326276
Epoch: 4, Training Loss: 1.5792385798413306, Training ACC: 0.8773203844903037
Epoch: 5, Training Loss: 1.5759121985174716, Training ACC: 0.8804754322627559
Epoch: 6, Training Loss: 1.5739163148682564, Training ACC: 0.8826455712551251
Epoch: 7, Training Loss: 1.5722616605926305, Training ACC: 0.8840812018606812
Epoch: 8, Training Loss: 1.569699136307463, Training ACC: 0.8867688354803249
Epoch: 9, Training Loss: 1.5679460542742163, Training ACC: 0.8885049475356936

I wondered where to start with your multi-part question, and I decided to do so with a statement:
Your code definitely should not look like that and is nowhere near current TensorFlow best practices.
Sorry, but debugging it step by step is a waste of everyone's time and would not benefit either of us.
Now, moving to the third point:
Is there anything else in my code below that I can optimize further, for example by making use of the TensorFlow 2.x @tf.function decorator?
Yes, you can use TensorFlow 2.0 functionalities, and it seems like you are running away from those (the tf.function decorator is of no use here actually; leave it for the time being).
Following the new guidelines would alleviate your problems with your 5th point as well, namely:
I also want help in writing this code in a more generalized way so
I can easily implement other networks like ConvNets (i.e Conv, MaxPool
etc.) based on this code easily.
as it's designed specifically for that. After a little introduction I will try to introduce you to those concepts in a few steps:
1. Divide your program into logical parts
TensorFlow did a lot of harm when it comes to code readability; in tf1.x everything was usually crunched in one place: globals followed by function definitions followed by more globals or maybe data loading, all in all a mess. It's not really the developers' fault, as the system's design encouraged those practices.
Now, in tf2.0 programmers are encouraged to divide their work similarly to the structure one can see in PyTorch, Chainer and other more user-friendly frameworks.
1.1 Data loading
You were on a good path with TensorFlow Datasets, but you turned away for no apparent reason.
Here is your code with commentary on what's going on:

# You already have tf.data.Dataset objects after load
(x_train, y_train), (x_test, y_test) = tfds.load('mnist', split=['train', 'test'],
                                                 batch_size=-1, as_supervised=True)

# But you are reshaping them in a strange manner...
x_train = tf.reshape(x_train, shape=(x_train.shape[0], 784))
x_test = tf.reshape(x_test, shape=(x_test.shape[0], 784))

# And building from slices...
ds_train = tf.data.Dataset.from_tensor_slices((x_train, y_train))

# Unreadable rescaling (there are built-ins for that)
ds_train = ds_train.map(lambda x, y: (tf.cast(x, tf.float32) / 255.0, y))
You can easily generalize this idea for any dataset; place it in a separate module, say datasets.py:
import tensorflow as tf
import tensorflow_datasets as tfds


class ImageDatasetCreator:
    @classmethod
    # More portable and readable than dividing by 255
    def _convert_image_dtype(cls, dataset):
        return dataset.map(
            lambda image, label: (
                tf.image.convert_image_dtype(image, tf.float32),
                label,
            )
        )

    def __init__(self, name: str, batch: int, cache: bool = True, split=None):
        # Load dataset, every dataset has default train, test split
        dataset = tfds.load(name, as_supervised=True, split=split)
        # Convert to float range
        try:
            self.train = ImageDatasetCreator._convert_image_dtype(dataset["train"])
            self.test = ImageDatasetCreator._convert_image_dtype(dataset["test"])
        except KeyError as exception:
            raise ValueError(
                f"Dataset {name} does not have train and test, write your own custom dataset handler."
            ) from exception

        if cache:
            self.train = self.train.cache()  # speed things up considerably
            self.test = self.test.cache()

        self.batch: int = batch

    def get_train(self):
        # shuffle needs an explicit buffer size
        return self.train.shuffle(buffer_size=10000).batch(self.batch).repeat()

    def get_test(self):
        return self.test.batch(self.batch).repeat()
So now you can load more than mnist using a simple command:

from datasets import ImageDatasetCreator

if __name__ == "__main__":
    dataloader = ImageDatasetCreator("mnist", batch=64, cache=True)
    train, test = dataloader.get_train(), dataloader.get_test()

And from now on you can load any dataset other than mnist just by changing the name.
Please stop turning everything deep-learning related into one-off scripts; you are a programmer as well.
1.2 Model creation
Since tf2.0 there are two advised ways one can proceed, depending on the model's complexity:
tensorflow.keras.models.Sequential - this way was shown by @Stewart_R, no need to reiterate his points. It is used for the simplest models (you should use this one with your feedforward network).
Inheriting tensorflow.keras.Model and writing a custom model. This one should be used when you have some kind of logic inside your module or it's more complicated (things like ResNets, multipath networks, etc.). All in all it's more readable and customizable.
Your Model class tried to resemble something like that, but it went south again; backprop is definitely not part of the model itself, and neither are loss or accuracy. Separate them into another module or function, definitely not a member!
That said, let's code the network using the second approach (you should place this code in model.py for brevity). Before that, I will code a YourDense feedforward layer from scratch by inheriting from tf.keras.layers.Layer (this one might go into a layers.py module):
import tensorflow as tf
class YourDense(tf.keras.layers.Layer):
    def __init__(self, units):
        # It's Python 3, you don't have to specify super parents explicitly
        super().__init__()
        self.units = units

    # Use build to create variables, as shape can be inferred from previous layers
    # If you were to create layers in __init__, one would have to provide input_shape
    # (same as it occurs in PyTorch for example)
    def build(self, input_shape):
        # You could use different initializers here as well
        self.kernel = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer="random_normal",
            trainable=True,
        )
        # You could define bias in __init__ as well as it's not input dependent
        self.bias = self.add_weight(shape=(self.units,), initializer="random_normal")
        # Oh, trainable=True is default

    def call(self, inputs):
        # Use overloaded operators instead of tf.add, better readability
        return tf.matmul(inputs, self.kernel) + self.bias
Regarding your
How do I add a Dropout and Batch Normalization layer in this custom implementation (i.e. making it work at both train and test time)?
I suppose you would like to create a custom implementation of those layers.
If not, you can just import from tensorflow.keras.layers import Dropout and use it anywhere you want, as @Leevo pointed out.
Inverted dropout with different behaviour during train and test is shown below:
from tensorflow.keras import layers


class CustomDropout(layers.Layer):
    def __init__(self, rate, **kwargs):
        super().__init__(**kwargs)
        self.rate = rate

    def call(self, inputs, training=None):
        if training:
            # You could simply create binary mask and multiply here
            return tf.nn.dropout(inputs, rate=self.rate)
        # You would need to multiply by dropout rate if you were to do that
        return inputs
Layer taken from here and modified to better fit showcasing purposes.
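To see the difference, you can call the layer directly and pass the training flag yourself (when the layer sits inside a Keras model, fit/predict set it for you). A minimal sketch, using the CustomDropout defined above with an arbitrary rate:

dropout = CustomDropout(rate=0.5)
x = tf.ones((2, 4))

print(dropout(x, training=True))   # roughly half the entries zeroed, the rest scaled by 1/(1 - rate)
print(dropout(x, training=False))  # identical to the input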
Now you can finally create your model (a simple double feedforward):
import tensorflow as tf

from layers import YourDense


class Model(tf.keras.Model):
    def __init__(self):
        super().__init__()
        # Use Sequential here for readability
        self.network = tf.keras.Sequential(
            [YourDense(100), tf.keras.layers.ReLU(), YourDense(10)]
        )

    def call(self, inputs):
        # You can use non-parametric layers inside call as well
        flattened = tf.keras.layers.Flatten()(inputs)
        return self.network(flattened)
Of course, you should use built-ins as much as possible in general implementations.
This structure is pretty extensible, so generalization to convolutional nets, ResNets, SENets, whatever else, should be done via this module. You can read more about it here.
I think it fulfills your 5th point:
I also want help in writing this code in a more generalized way so
I can easily implement other networks like ConvNets (i.e Conv, MaxPool
etc.) based on this code easily.
One last thing: you may have to call model.build(shape) in order to build your model's graph.

model.build((None, 28, 28, 1))

This would be for MNIST's 28x28x1 input shape, where None stands for the batch dimension.
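As a concrete illustration of that extensibility, a minimal convolutional variant (just a sketch, assuming the built-in Conv2D and MaxPool2D layers; the filter counts are arbitrary) could reuse exactly the same structure:

import tensorflow as tf


class ConvModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.network = tf.keras.Sequential(
            [
                tf.keras.layers.Conv2D(32, kernel_size=3, activation="relu"),
                tf.keras.layers.MaxPool2D(),
                tf.keras.layers.Conv2D(64, kernel_size=3, activation="relu"),
                tf.keras.layers.MaxPool2D(),
                tf.keras.layers.Flatten(),
                tf.keras.layers.Dense(10),  # logits, same as the feedforward Model above
            ]
        )

    def call(self, inputs):
        # Expects image input of shape (batch, height, width, channels)
        return self.network(inputs)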
1.3 Training
Once again, training can be done in two separate ways:
standard Keras model.fit(dataset) - useful in simple tasks like classification
tf.GradientTape - more complicated training schemes; the most prominent example would be Generative Adversarial Networks, where two models optimize orthogonal goals playing a minimax game
As pointed out by @Leevo once again, if you use the second way, you won't be able to simply use the callbacks provided by Keras, hence I'd advise sticking with the first option whenever possible.
In theory you could call the callbacks' functions manually, like on_batch_begin() and others where needed, but it would be cumbersome and I'm not sure how this would work.
When it comes to the first option, you can use tf.data.Dataset objects directly with fit. Here it is presented inside another module (preferably train.py):
import datetime
import pathlib

import tensorflow as tf


def train(
    model: tf.keras.Model,
    path: str,
    train: tf.data.Dataset,
    epochs: int,
    steps_per_epoch: int,
    validation: tf.data.Dataset,
    steps_per_validation: int,
    stopping_epochs: int,
    optimizer=tf.optimizers.Adam(),
):
    model.compile(
        optimizer=optimizer,
        # I used logits as output from the last layer, hence this
        loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=[tf.metrics.SparseCategoricalAccuracy()],
    )

    model.fit(
        train,
        epochs=epochs,
        steps_per_epoch=steps_per_epoch,
        validation_data=validation,
        validation_steps=steps_per_validation,
        callbacks=[
            # Tensorboard logging
            tf.keras.callbacks.TensorBoard(
                pathlib.Path("logs")
                / pathlib.Path(datetime.datetime.now().strftime("%Y%m%d-%H%M%S")),
                histogram_freq=1,
            ),
            # Early stopping with best weights preserving
            tf.keras.callbacks.EarlyStopping(
                monitor="val_sparse_categorical_accuracy",
                patience=stopping_epochs,
                restore_best_weights=True,
            ),
        ],
    )
    model.save(path)
The more complicated approach is very similar (almost copy and paste) to PyTorch training loops, so if you are familiar with those, it should not pose much of a problem.
You can find examples throughout tf2.0 docs, e.g. here or here.
2. Other things
2.1 Unanswered questions
Is there anything else in the code that I can optimize further, e.g. making use of the TensorFlow 2.x @tf.function decorator, etc.?
The model.fit approach above already runs the Model in graph mode, hence I don't think you would benefit from the decorator in this case. And premature optimization is the root of all evil; remember to measure your code before doing this.
You would gain much more from proper caching of data (as described at the beginning of 1.1) and a good pipeline than from anything else.
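For reference, a typical input-pipeline sketch (the buffer size is just a placeholder) that combines caching, shuffling, batching and prefetching might look like this:

import tensorflow as tf

def make_pipeline(dataset: tf.data.Dataset, batch_size: int) -> tf.data.Dataset:
    return (
        dataset.cache()                                    # keep decoded examples in memory
        .shuffle(buffer_size=10000)                        # shuffle with an explicit buffer
        .batch(batch_size)
        .prefetch(tf.data.experimental.AUTOTUNE)           # overlap preprocessing and training
    )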
Also I need a way to extract all my final weights for all layers
after training so I can plot them and check their distributions. To
check issues like gradient vanishing or exploding.
As pointed out by @Leevo above,
weights = model.get_weights()
would get you the weights. You may convert them into np.array and plot them using seaborn or matplotlib, then analyze, check, or do whatever else you want.
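A minimal sketch of plotting the weight distributions (matplotlib is assumed to be available; the titles just show each tensor's shape):

import matplotlib.pyplot as plt
import numpy as np

weights = model.get_weights()  # list of numpy arrays, one entry per variable

fig, axes = plt.subplots(1, len(weights), figsize=(4 * len(weights), 3))
for ax, w in zip(np.atleast_1d(axes), weights):
    ax.hist(np.ravel(w), bins=50)      # distribution of a single weight/bias tensor
    ax.set_title(f"shape={w.shape}")
plt.tight_layout()
plt.show()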
2.2 Putting it all together
All in all, your main.py (or entry point or something similar) would consist of this (more or less):

from datasets import ImageDatasetCreator
from model import Model
from train import train

# You could use argparse for things like batch, epochs etc.
if __name__ == "__main__":
    dataloader = ImageDatasetCreator("mnist", batch=64, cache=True)
    # renamed so the imported train function isn't shadowed
    train_dataset, test_dataset = dataloader.get_train(), dataloader.get_test()

    model = Model()
    model.build((None, 28, 28, 1))

    train(
        model, path, train_dataset, epochs, len(train_dataset) // batch, test_dataset, len(test_dataset) // batch, ...
    )  # provide the necessary arguments appropriately

    # Do whatever you want with those
    weights = model.get_weights()

Oh, remember that the functions above are not for copy-pasting and should be treated more like a guideline. Hit me up if you have any questions.
3. Questions from comments
3.1 How to initialize custom and built-in layers
3.1.1 TL;DR of what you are about to read
A custom Poisson initialization function is shown, but it takes three arguments
The tf.keras initializers API expects only two arguments (see the last point in their docs), hence one is fixed via a Python lambda inside the custom layer we have written before
An optional bias for the layer is added, which can be turned off with a boolean flag
Why is it so uselessly complicated? To show that in tf2.0 you can finally use Python's functionality: no more graph hassle, if instead of tf.cond, etc.
3.1.2 From TLDR to implementation
Keras initializers can be found here and TensorFlow's flavor here.
Please note the API inconsistencies (capital letters for classes, lowercase with underscores for functions), especially in tf2.0, but that's beside the point.
You can use them by passing a string (as is done in YourDense above) or during object creation.
To allow for custom initialization in your custom layers, you can simply add an additional argument to the constructor (the tf.keras.Model class is still a Python class and its __init__ should be used the same way as any Python __init__).
Before that, I will show you how to create custom initialization:
# Poisson custom initialization because why not.
def my_dumb_init(shape, lam, dtype=None):
    return tf.squeeze(tf.random.poisson(shape, lam, dtype=dtype))
Notice that its signature takes three arguments, while it should take (shape, dtype) only. Still, one can "fix" this easily while creating their own layer, like the extended YourDense below:
import typing

import tensorflow as tf


class YourDense(tf.keras.layers.Layer):
    # It's still Python, use it as Python, that's the point of tf 2.0
    @classmethod
    def register_initialization(cls, initializer):
        # Set defaults if init not provided by user
        if initializer is None:
            # let's make the signature proper for init in tf.keras
            return lambda shape, dtype: my_dumb_init(shape, 1, dtype)
        return initializer

    def __init__(
        self,
        units: int,
        bias: bool = True,
        # can be string or callable, some typing info added as well...
        kernel_initializer: typing.Union[str, typing.Callable] = None,
        bias_initializer: typing.Union[str, typing.Callable] = None,
    ):
        super().__init__()
        self.units: int = units
        self.kernel_initializer = YourDense.register_initialization(kernel_initializer)
        if bias:
            self.bias_initializer = YourDense.register_initialization(bias_initializer)
        else:
            self.bias_initializer = None

    def build(self, input_shape):
        # Simply pass your init here
        self.kernel = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer=self.kernel_initializer,
            trainable=True,
        )
        if self.bias_initializer is not None:
            self.bias = self.add_weight(
                shape=(self.units,), initializer=self.bias_initializer
            )
        else:
            self.bias = None

    def call(self, inputs):
        weights = tf.matmul(inputs, self.kernel)
        if self.bias is not None:
            return weights + self.bias
        return weights
I have added my_dumb_init as the default (if the user does not provide one) and made the bias optional with the bias argument. Note that you can use if freely as long as it's not data-dependent. If it is (or depends on a tf.Tensor somehow), one has to use the @tf.function decorator, which changes Python's flow to its TensorFlow counterpart (e.g. if to tf.cond).
See here for more on autograph; it's very easy to follow.
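As a small illustration of that conversion (just a sketch; the function and threshold are arbitrary), a data-dependent if inside a @tf.function is rewritten by autograph into graph ops for you:

import tensorflow as tf

@tf.function
def clip_negative(x):
    # Data-dependent condition: autograph converts this branch into tf.cond
    if tf.reduce_sum(x) < 0:
        result = tf.zeros_like(x)
    else:
        result = x
    return result

print(clip_negative(tf.constant([-1.0, -2.0])).numpy())  # [0. 0.]
print(clip_negative(tf.constant([1.0, 2.0])).numpy())    # [1. 2.]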
If you want to incorporate the above initializer changes into your model, you have to create the appropriate objects and that's it.
...  # Previous code of Model here
self.network = tf.keras.Sequential(
    [
        YourDense(100, bias=False, kernel_initializer="lecun_uniform"),
        tf.keras.layers.ReLU(),
        YourDense(10, bias_initializer=tf.initializers.Ones()),
    ]
)
...  # and the same afterwards

With built-in tf.keras.layers.Dense layers one can do the same (argument names differ, but the idea holds).
3.2 Automatic Differentiation using tf.GradientTape
3.2.1 Intro
The point of tf.GradientTape is to allow users normal Python control flow and gradient calculation of variables with respect to another variable.
Example taken from here but broken into separate pieces:
def f(x, y):
    output = 1.0
    for i in range(y):
        if i > 1 and i < 5:
            output = tf.multiply(output, x)
    return output
A regular Python function with for and if flow control statements
def grad(x, y):
    with tf.GradientTape() as t:
        t.watch(x)
        out = f(x, y)
    return t.gradient(out, x)
Using gradient tape you can record all operations on Tensors (and their intermediate states as well) and "play" them backwards (perform automatic backward differentiation using the chain rule).
Every Tensor within the tf.GradientTape() context manager is recorded automatically. If some Tensor is out of scope, use the watch() method as shown above.
Finally, the gradient of the output with respect to x (the input) is returned.
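A quick usage sketch (values chosen arbitrarily): for y = 6, f multiplies by x three times (i = 2, 3, 4), so f(x) = x^3 and the gradient at x = 2 is 3 * 2^2 = 12:

x = tf.constant(2.0)
print(grad(x, 6))  # tf.Tensor(12.0, ...) since d(x^3)/dx = 3 * x^2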
3.2.2 Connection with deep learning
What was described above is the backpropagation algorithm. Gradients of the loss w.r.t. (with respect to) every trainable variable in the network (or rather in every layer) are calculated. Those gradients are then used by various optimizers to make corrections, and so it repeats.
Let's continue and assume you have your tf.keras.Model, optimizer instance, tf.data.Dataset and loss function already set up.
One can define a Trainer class which will perform the training for us. Please read the comments in the code if in doubt:
class Trainer:
    def __init__(self, model, optimizer, loss_function):
        self.model = model
        self.loss_function = loss_function
        self.optimizer = optimizer
        # You could pass custom metrics in constructor
        # and adjust train_step and test_step accordingly
        self.train_loss = tf.keras.metrics.Mean(name="train_loss")
        self.test_loss = tf.keras.metrics.Mean(name="test_loss")

    def train_step(self, x, y):
        # Setup tape
        with tf.GradientTape() as tape:
            # Get current predictions of network
            y_pred = self.model(x)
            # Calculate loss generated by predictions
            loss = self.loss_function(y, y_pred)
        # Get gradients of loss w.r.t. EVERY trainable variable (iterable returned)
        gradients = tape.gradient(loss, self.model.trainable_variables)
        # Change trainable variable values according to gradient by applying optimizer policy
        self.optimizer.apply_gradients(zip(gradients, self.model.trainable_variables))
        # Record loss of current step
        self.train_loss(loss)

    def train(self, dataset):
        # For N epochs iterate over dataset and perform train steps each time
        for x, y in dataset:
            self.train_step(x, y)

    def test_step(self, x, y):
        # Record test loss separately
        self.test_loss(self.loss_function(y, self.model(x)))

    def test(self, dataset):
        # Iterate over whole dataset
        for x, y in dataset:
            self.test_step(x, y)

    def __str__(self):
        # You need Python 3.6+ for f-string support
        # Just return metrics
        return f"Loss: {self.train_loss.result()}, Test Loss: {self.test_loss.result()}"
Now you could use this class in your code really simply, like this:

EPOCHS = 5

# model, optimizer, loss defined beforehand
trainer = Trainer(model, optimizer, loss)

for epoch in range(EPOCHS):
    trainer.train(train_dataset)  # Same for training and test datasets
    trainer.test(test_dataset)
    print(f"Epoch {epoch}: {trainer}")
The print would tell you the training and test loss for each epoch. You can mix training and testing any way you want (e.g. 5 epochs of training and 1 of testing), and you could add different metrics, etc.
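For example, adding an accuracy metric could be done by extending the Trainer above (a sketch, assuming sparse integer labels; the predictions are recomputed per batch for simplicity):

class TrainerWithAccuracy(Trainer):
    def __init__(self, model, optimizer, loss_function):
        super().__init__(model, optimizer, loss_function)
        self.train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name="train_accuracy")

    def train_step(self, x, y):
        super().train_step(x, y)
        # Update accuracy with fresh predictions for this batch
        self.train_accuracy(y, self.model(x))

    def __str__(self):
        return super().__str__() + f", Accuracy: {self.train_accuracy.result()}"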
See here if you want a non-OOP oriented approach (IMO less readable, but to each their own).

Also, if there's something I could improve in the code, do let me know as well.
Embrace the high-level API for something like this. You can do it in just a few lines of code and it's much easier to debug, read and reason about:
(x_train, y_train), (x_test, y_test) = tfds.load('mnist', split=['train', 'test'],
                                                 batch_size=-1, as_supervised=True)

x_train = tf.cast(tf.reshape(x_train, shape=(x_train.shape[0], 784)), tf.float32)
x_test = tf.cast(tf.reshape(x_test, shape=(x_test.shape[0], 784)), tf.float32)

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(512, activation='sigmoid'),
    tf.keras.layers.Dense(256, activation='sigmoid'),
    tf.keras.layers.Dense(10, activation='softmax')
])

# compile is required before calling fit
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)

I tried to write a custom implementation of a basic neural network with two hidden layers on the MNIST dataset using TensorFlow 2.0 beta, but I'm not sure what went wrong here: my training loss and accuracy seem to be stuck at 1.5 and around 85% respectively.
Where is the training part? Training of TF 2.0 models is done either with Keras' syntax or with eager execution using tf.GradientTape(). Can you paste the code with the conv and dense layers, and show how you trained it?
Other questions:
1) How do I add a Dropout layer in this custom implementation (i.e. making it work at both train and test time)?
You can add a Dropout() layer with:
from tensorflow.keras.layers import Dropout
And then you insert it into a Sequential() model just with:
Dropout(dprob) # where dprob = dropout probability
2) How to add Batch Normalization in this code?
Same as before, with:
from tensorflow.keras.layers import BatchNormalization
The choice of where to put batch normalization in the model is, well, up to you. There is no rule of thumb; I suggest you experiment. With ML it's always a trial and error process.
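A minimal sketch of both layers dropped into a Sequential() model (layer sizes and the dropout probability are arbitrary):

from tensorflow.keras.layers import Dense, Dropout, BatchNormalization
from tensorflow.keras.models import Sequential

dprob = 0.3  # dropout probability

model = Sequential([
    Dense(512, activation='relu', input_shape=(784,)),
    BatchNormalization(),   # normalizes the activations of the previous layer
    Dropout(dprob),         # randomly zeroes activations during training only
    Dense(256, activation='relu'),
    Dropout(dprob),
    Dense(10, activation='softmax')
])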
3) How can I use callbacks in this code (i.e. making use of the EarlyStopping and ModelCheckpoint callbacks)?
If you are training using Keras' syntax, you can simply use them. Please check this very thorough tutorial on how to do so; it just takes a few lines of code.
If you are running a model in eager execution, you have to implement these techniques yourself, with your own code. It's more complex, but it also gives you more freedom in the implementation.
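For the Keras case, a minimal sketch (the monitored metric, patience, file path and validation split are just placeholders; the model is assumed to be compiled already):

from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

callbacks = [
    EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True),
    ModelCheckpoint('best_model.h5', monitor='val_loss', save_best_only=True),
]

model.fit(x_train, y_train, validation_split=0.1, epochs=50, callbacks=callbacks)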
4) Is there anything else in the code that I can optimize further, i.e. making use of the TensorFlow 2.x @tf.function decorator, etc.?
It depends. If you are using Keras syntax, I don't think you need to add more to it. In case you are training the model in eager execution, then I'd suggest you use the @tf.function decorator on some functions to speed things up a bit.
You can see a practical TF 2.0 example on how to use the decorator in this Notebook.
Other than this, I suggest you play with techniques such as weight initialization, L1-L2 regularization, etc.
5) Also I need a way to extract all my final weights for all layers
after training so I can plot them and check their distributions. To
check issues like gradient vanishing or exploding.
Once the model is trained, you can extract its weights with:
weights = model.get_weights()
or:
weights = model.trainable_weights
if you want to keep only the trainable ones.
6) I also want help in writing this code in a more generalized way so
I can easily implement other networks like convolutional network (i.e
Conv, MaxPool etc.) based on this code easily.
You can pack all your code into a function, then call it whenever you need it. At the end of this Notebook I did something like this (it's for a feed-forward NN, which is much simpler, but that's a start and you can change the code according to your needs).
---
UPDATE:
Please check my TensorFlow 2.0 implementation of a CNN classifier. This might be a useful hint: it is trained on the Fashion MNIST dataset, which makes it very similar to your task.

Related

Model not improving with GradientTape but with model.fit()

I am currently trying to train a model using tf.GradientTape, as model.fit(...) from keras will not be able to handle my data input in the future. However, while a test run with model.fit(...) and my model works perfectly, tf.GradientTape does not.
During training, the loss with the tf.GradientTape custom workflow will first slightly decrease, but then become stuck and not improve any further, no matter how many epochs I run. The chosen metric will also not change after the first few batches. Additionally, the loss per batch is unstable and jumps between nearly zero and something very large. The running loss is more stable but shows the model not improving.
This is all in contrast to using model.fit(...), where loss and metrics are improving immediately.
My code:
def build_model(kernel_regularizer=l2(0.0001), dropout=0.001, recurrent_dropout=0.):
    x1 = Input(62)
    x2 = Input((62, 3))
    x = Embedding(30, 100, mask_zero=True)(x1)
    x = Concatenate()([x, x2])
    x = Bidirectional(LSTM(500,
                           return_sequences=True,
                           kernel_regularizer=kernel_regularizer,
                           dropout=dropout,
                           recurrent_dropout=recurrent_dropout))(x)
    x = Bidirectional(LSTM(500,
                           return_sequences=False,
                           kernel_regularizer=kernel_regularizer,
                           dropout=dropout,
                           recurrent_dropout=recurrent_dropout))(x)
    x = Activation('softmax')(x)
    x = Dense(1000)(x)
    x = Dense(500)(x)
    x = Dense(250)(x)
    x = Dense(1, bias_initializer='ones')(x)
    x = tf.math.abs(x)
    return Model(inputs=[x1, x2], outputs=x)
optimizer = Adam(learning_rate=0.0001)
model = build_model()
model.compile(optimizer=optimizer, loss='mse', metrics='mse')

options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = AutoShardPolicy.DATA

dat_train = tf.data.Dataset.from_generator(
    generator=lambda: <load_function()>,
    output_types=((tf.int32, tf.float32), tf.float32)
)
dat_train = dat_train.with_options(options)
# keras training
model.fit(dat_train, epochs=50)

# custom training
for epoch in range(50):
    for (x1, x2), y in dat_train:
        with tf.GradientTape() as tape:
            y_pred = model((x1, x2), training=True)
            loss = model.loss(y, y_pred)
        grads = tape.gradient(loss, model.trainable_variables)
        model.optimizer.apply_gradients(zip(grads, model.trainable_variables))
I could use relu at the output layer; however, I found the abs to be more robust, and changing it does not change the outcome. The input x1 of the model is a sequence, and x2 are some additional features that are later concatenated to the embedded x1 sequence. For my approach, I'm not using the MSE, but it works either way.
I could provide some data, however, my dataset is quite large, so I would need to extract a bit out of it.
All in all, my problem seems to be similar to:
Keras model doesn't train when using GradientTape
Edit 1
The softmax activation is currently not necessary, but is relevant for my future goal of splitting the model.
Additionally, some things I noticed:
The custom training takes roughly 2x the amount of time compared to model.fit(...).
The gradients in the custom training seem very small and range from ±1e-3 to ±1e-9 inside the model. I don't know if that's normal and don't know how to compare it to the gradients provided by model.fit(...).
Edit 2
I've added a Google Colab notebook to reproduce the issue:
https://colab.research.google.com/drive/1pk66rbiux5vHZcav9VNSBhdWWIhQM-nF?usp=sharing
The loss and MSE for 20 epochs is shown here:
custom training
keras training
While I only used a portion of my data in the notebook, it will still run for a very long time. For the custom training run, the loss for each batch is simply stored in losses. It matches the behavior in the custom training run image.
So far, I've noticed two ways of improving the performance of the custom training:
The usage of custom layer initialization
Using MSE as a loss function
Using the MSE, compared to my own loss function actually improves the custom training performance. Still, using MSE and/or different initialization won't come close to the performance of keras fit.
I have found the solution, it was a simple shape mismatch, which was somehow not picked up by any error check and worked both with my custom loss function and MSE. Using x = Reshape(())(x) as final layer did the trick.

Stateful LSTM in custom training loop

I am writing a simple custom model + training loop in TensorFlow. My goal is to build a stateful LSTM-based model and be able to reset the states when I want to.
So far this is my custom model:
class ResNetModel(Model):
    def __init__(self, num_inputs, **kwargs):
        """
        The class initialiser should call the base class initialiser, passing any keyword
        arguments along. It should also create the layers of the network according to the
        above specification.
        """
        super(ResNetModel, self).__init__(**kwargs)
        self.lstm_1 = tf.keras.layers.LSTM(units=32, input_shape=(None, num_inputs), return_sequences=True)
        self.dense = tf.keras.layers.Dense(units=1, activation=None)

    def call(self, inputs, training=False):
        """
        This method should contain the code for calling the layer according to the above
        specification, using the layer objects set up in the initialiser.
        """
        x = self.lstm_1(inputs)
        y = self.dense(x)
        return y + inputs
And this is my custom training loop (I am omitting the rest of the code because it is quite big, but the function is self-contained for the purpose of my question):
def run_training(self, in_train, out_train, epoch_loss, epoch_error, n_skip, n_block):
    n_samples = in_train.shape[1]
    self.model.reset_states()  # clear existing state
    self.model(in_train[:, :n_skip, :])  # process some samples to build up state

    for n in range(n_skip, n_samples - n_block, n_block):
        # compute loss
        with tf.GradientTape() as tape:
            y_pred = self.model(in_train[:, n:n + n_block, :])
            loss = self.loss_func(out_train[:, n:n + n_block, :], y_pred)
        grads = tape.gradient(loss, self.model.trainable_variables)
        self.opt.apply_gradients(zip(grads, self.model.trainable_variables))

        epoch_loss.update_state(loss)
        epoch_error.update_state(out_train[:, n:n + n_block, :], y_pred)
And it trains fine, the whole code works as expected.
Then I make predictions like this:
for i in range(0, math.floor(24000 / 4096)):
    predictions[i * 4096: (i + 1) * 4096] = np.array(
        residual_net.model(X_test[idx][i * 4096: (i + 1) * 4096].reshape(1, 4096, 1))
    ).ravel()
So basically I am passing my test input to my model with residual_net.model(my_test_data) (the numpy slicing etc. is just to make my input data coherent with the network; it works fine).
However, when I make predictions with my trained network (to give some context, it is working with audio data), I get output audio that is as expected (an input song processed by the network, which adds some distortion), but there are clicks in the output audio that are directly related to the input buffer size.
To make this point clearer: if I predict on chunks of 512 samples, I have clicks every 512 samples; if I predict every 4096 samples, I have clicks every 4096.
This behaviour is pretty similar to the one you get with IIR filters that do not carry their filter state across audio buffers, so that got me thinking that my LSTM network is not working statefully as I expected.
So my question is:
Does TensorFlow automatically reset the state of the network after each processed buffer (even in the case of custom training/prediction loops) if the parameter stateful=True is not specified in the LSTM layer?
I found no information about this, but I expected that behaviour for "standard" training (the .fit/.predict functions) and not for custom training loops.
Does this also hold for the training step? (So basically, am I messing up the training as well?)

Why is this tensorflow training taking so long?

I'm learning DRL with the book Deep Reinforcement Learning in Action. In chapter 3, they present the simple game Gridworld (instructions here, in the rules section) with the corresponding code in PyTorch.
I've experimented with the code and it takes less than 3 minutes to train the network with 89% of wins (won 89 of 100 games after training).
As an exercise, I have migrated the code to tensorflow. All the code is here.
The problem is that with my TensorFlow port it takes nearly 2 hours to train the network to a win rate of 84%. Both versions use only the CPU to train (I don't have a GPU).
Training loss figures seem correct, and so does the win rate (we have to take into consideration that the game is random and can have impossible states). The problem is the performance of the overall process.
I'm doing something terribly wrong, but what?
The main differences are in the training loop; in torch it is this:
loss_fn = torch.nn.MSELoss()
learning_rate = 1e-3
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
....

Q1 = model(state1_batch)
with torch.no_grad():
    Q2 = model2(state2_batch)  #B

Y = reward_batch + gamma * ((1 - done_batch) * torch.max(Q2, dim=1)[0])
X = Q1.gather(dim=1, index=action_batch.long().unsqueeze(dim=1)).squeeze()
loss = loss_fn(X, Y.detach())
optimizer.zero_grad()
loss.backward()
optimizer.step()
and in the tensorflow version:
loss_fn = tf.keras.losses.MSE
learning_rate = 1e-3
optimizer = tf.keras.optimizers.Adam(learning_rate)
...

Q2 = model2(state2_batch)  #B
with tf.GradientTape() as tape:
    Q1 = model(state1_batch)
    Y = reward_batch + gamma * ((1 - done_batch) * tf.math.reduce_max(Q2, axis=1))
    X = [Q1[i][action_batch[i]] for i in range(len(action_batch))]
    loss = loss_fn(X, Y)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
Why is the training taking so long?
Why is TensorFlow slow
TensorFlow has two execution modes: eager execution and graph mode. TensorFlow's default behavior, since version 2, is eager execution. Eager execution is great, as it enables you to write code close to how you would write standard Python. It's easier to write, and it's easier to debug. Unfortunately, it's really not as fast as graph mode.
So the idea is, once the function is prototyped in eager mode, to make TensorFlow execute it in graph mode. For that you can use tf.function. tf.function compiles a callable into a TensorFlow graph. Once the function is compiled into a graph, the performance gain is usually quite significant. The recommended approach when developing in TensorFlow is the following:
Debug in eager mode, then decorate with @tf.function.
Don't rely on Python side effects like object mutation or list appends.
tf.function works best with TensorFlow ops; NumPy and Python calls are converted to constants.
I would add: think about the critical parts of your program, and which ones should be converted first into graph mode. It's usually the parts where you call a model to get a result. That's where you will see the best improvements.
You can find more information in the following guides:
Better performance with tf.function
Introduction to graphs and tf.function
Applying tf.function to your code
So, there are at least two things you can change in your code to make it run quite faster:
The first one is to not use model.predict on a small amount of data. The function is made to work on a huge dataset or on a generator. (See this comment on Github). Instead, you should call the model directly, and for performance enhancement, you can wrap the call to the model in a tf.function.
Model.predict is a top-level API designed for batch-predicting outside of any loops, with the fully-features of the Keras APIs.
The second one is to make your training step a separate function, and to decorate that function with @tf.function.
So, I would declare the following things before your training loop:
# to call instead of model.predict
model_func = tf.function(model)

def get_train_func(model, model2, loss_fn, optimizer):
    """Wrapper that creates a train step using the two models passed"""
    @tf.function
    def train_func(state1_batch, state2_batch, done_batch, reward_batch, action_batch):
        Q2 = model2(state2_batch)  #B
        with tf.GradientTape() as tape:
            Q1 = model(state1_batch)
            Y = reward_batch + gamma * ((1 - done_batch) * tf.math.reduce_max(Q2, axis=1))
            # gather is more efficient than a list comprehension, and needed in a tf.function
            X = tf.gather(Q1, action_batch, batch_dims=1)
            loss = loss_fn(X, Y)
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        return loss

    return train_func

# train step is a callable
train_step = get_train_func(model, model2, loss_fn, optimizer)
And you can use that function in your training loop:
if len(replay) > batch_size:
    minibatch = random.sample(replay, batch_size)
    state1_batch = np.array([s1 for (s1, a, r, s2, d) in minibatch]).reshape((batch_size, 64))
    action_batch = np.array([a for (s1, a, r, s2, d) in minibatch])  # TODO: possible differences
    reward_batch = np.float32([r for (s1, a, r, s2, d) in minibatch])
    state2_batch = np.array([s2 for (s1, a, r, s2, d) in minibatch]).reshape((batch_size, 64))
    done_batch = np.array([d for (s1, a, r, s2, d) in minibatch]).astype(np.float32)

    loss = train_step(state1_batch, state2_batch, done_batch, reward_batch, action_batch)
    losses.append(loss)
There are other changes that you could make to make your code more TensorFlow-esque, but with those modifications, your code takes ~2 minutes on my CPU (with a 97% win rate).

Adding Dropout to testing/inference phase

I've trained the following model for some timeseries in Keras:
input_layer = Input(batch_shape=(56, 3864))
first_layer = Dense(24, input_dim=28, activation='relu',
                    activity_regularizer=None,
                    kernel_regularizer=None)(input_layer)
first_layer = Dropout(0.3)(first_layer)
second_layer = Dense(12, activation='relu')(first_layer)
second_layer = Dropout(0.3)(second_layer)
out = Dense(56)(second_layer)
model_1 = Model(input_layer, out)
model_1 = Model(input_layer, out)
Then I defined a new model with the trained layers of model_1 and added dropout layers with a different rate, drp, to it:
input_2 = Input(batch_shape=(56, 3864))
first_dense_layer = model_1.layers[1](input_2)
first_dropout_layer = model_1.layers[2](first_dense_layer)
new_dropout = Dropout(drp)(first_dropout_layer)
snd_dense_layer = model_1.layers[3](new_dropout)
snd_dropout_layer = model_1.layers[4](snd_dense_layer)
new_dropout_2 = Dropout(drp)(snd_dropout_layer)
output = model_1.layers[5](new_dropout_2)
model_2 = Model(input_2, output)
Then I'm getting the prediction results of these two models as follow:
result_1 = model_1.predict(test_data, batch_size=56)
result_2 = model_2.predict(test_data, batch_size=56)
I was expecting to get completely different results because the second model has new dropout layers and these two models are different (IMO), but that's not the case. Both are generating the same result. Why is that happening?
As I mentioned in the comments, the Dropout layer is turned off in the inference phase (i.e. test mode), so when you use model.predict() the Dropout layers are not active. However, if you would like to have a model that uses Dropout in both the training and the inference phase, you can pass the training argument when calling it, as suggested by François Chollet:
# ...
new_dropout = Dropout(drp)(first_dropout_layer, training=True)
# ...
Alternatively, if you have already trained your model and now want to use it in inference mode while keeping the Dropout layers (and possibly other layers which behave differently in the training/inference phase, such as BatchNormalization) active, you can define a backend function that takes the model's inputs as well as the Keras learning phase:
from keras import backend as K
func = K.function(model.inputs + [K.learning_phase()], model.outputs)
# to use it pass 1 to set the learning phase to training mode
outputs = func([input_arrays] + [1.])
Your question has a simple solution in the latest version of TensorFlow: you can set the training argument of the call method to True.
You can run code like the line below:
model(input, training=True)
By using training=True, TensorFlow applies the Dropout layer even in inference mode.
As there are already some working code solutions above, I will simply add a few more details regarding dropout during inference to prevent confusion.
Based on the original paper, Dropout layers play the role of turning off (zeroing the activations of) a random subset of neuron nodes during training to reduce overfitting. However, once we finish training and start testing the model, we do not 'touch' any neurons; thus, all the units are considered when making the decision at inference time. This causes the contribution of the previously 'dropped' neurons to be larger than expected. To prevent this, a scaling factor is applied to balance the network. To be more precise, if a unit is retained with probability p during training, the outgoing weights of that unit are multiplied by p during the prediction stage.
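A tiny numeric sketch of that scaling rule (keep probability p = 0.5 and values chosen arbitrarily; this shows the classic non-inverted scaling described above):

import numpy as np

rng = np.random.default_rng(0)
p_keep = 0.5
activations = np.array([2.0, 4.0, 6.0, 8.0])

# Training: drop units with probability 1 - p_keep
mask = rng.random(activations.shape) < p_keep
train_out = activations * mask

# Prediction: keep every unit, but scale by p_keep
test_out = activations * p_keep

print(train_out)  # some entries zeroed
print(test_out)   # [1. 2. 3. 4.] -> matches the training output in expectation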

using pre-loaded data in TensorFlow

I am using TensorFlow to run some Kaggle competitions. Since I don't have much training data, I am using TF constants to pre-load all of my training and test data into the Graph for efficiency. My code looks like this
... lots of stuff ...

with tf.Graph().as_default():
    train_images = tf.constant(train_data[:36000, 1:], dtype=tf.float32)
    ... more stuff ...
    train_set = tf.train.slice_input_producer([train_images, train_labels])
    images, labels = tf.train.batch(train_set, batch_size=100)

    # this is where my model graph is built
    model = MLP(hidden=[512, 512])
    logits = model._create_model(images)
    loss = model._create_loss_op(logits, labels)
    train = model._create_train_op(loss)
    # I know I am not supposed to call _something() methods
    # from outside of the class. I used to call these internally
    # but refactoring is still in progress
Now, when I was using a feed dictionary to feed the data, I could build the model only once but easily switch the inputs between, for example, my training data and my validation data (and my test data). But with pre-loading it seems that I have to build a separate copy of the graph for every set of inputs I have. Currently, I do exactly that, and I use variable reuse to make sure the same weights and biases are being used by the graphs. But I cannot help but feel that this is a weird way of doing things. So, for example, here are some bits and pieces of my MLP class and my validation code:
class MLP(object):
    ... lots of stuff happens here ...

    def _create_dense_layer(self, name, inputs, n_in, n_out, reuse=None, activation=True):
        with tf.variable_scope(name, reuse=reuse):
            weights = self._weights([n_in, n_out])
            self.graph.add_to_collection('weights', weights)
            layer = tf.matmul(inputs, weights)
            if self.biases:
                biases = self._biases([n_out])
                layer = layer + biases
            if activation:
                layer = self.activation(layer)
            return layer

... and back to the training code ...
valid_images = tf.constant(train_data[36000:, 1:], dtype=tf.float32)
valid_logits = model._create_model(valid_images, reuse=True)
valid_accuracy = model._create_accuracy_op(valid_logits, valid_labels)
So, do I really have to create a complete copy of my model for each set of data I want to use it on or am I missing something in TF and there is an easier way of doing it?
