Training WGAN-GP: weird generated images

I'm studying GANs, and at the moment I'm trying to implement a WGAN-GP, based on code presented in books and Keras tutorials. I was able to train a regular GAN as shown here. When adapting that code to be a WGAN-GP, though, I'm getting weird generated images:
I'm trying to understand what I'm doing wrong. This is the definition of the WGAN-GP:
class WGAN_GP(Model):
    """Implements a Wasserstein GAN with Gradient Penalty"""

    def __init__(
        self,
        discriminator,
        generator,
        latent_dim,
        discriminator_extra_steps=3,
        gp_weight=10.0,
    ):
        super().__init__()
        self.discriminator = discriminator
        self.generator = generator
        self.latent_dim = latent_dim
        self.d_steps = discriminator_extra_steps
        self.gp_weight = gp_weight

    def compile(self, d_optimizer, g_optimizer, d_loss_fn, g_loss_fn):
        super().compile()
        self.d_optimizer = d_optimizer
        self.g_optimizer = g_optimizer
        self.d_loss_fn = d_loss_fn
        self.g_loss_fn = g_loss_fn

    def gradient_penalty(self, batch_size, real_images, fake_images):
        """
        Calculates the gradient penalty.
        This loss is calculated on an interpolated image
        and added to the discriminator loss.
        """
        # Get the interpolated image
        alpha = tf.random.normal([batch_size, 1, 1, 1], 0.0, 1.0)
        diff = fake_images - real_images
        interpolated = real_images + alpha * diff

        with tf.GradientTape() as gp_tape:
            gp_tape.watch(interpolated)
            # 1. Get the discriminator output for this interpolated image.
            pred = self.discriminator(interpolated)

        # 2. Calculate the gradients w.r.t. this interpolated image.
        grads = gp_tape.gradient(pred, [interpolated])[0]
        # 3. Calculate the norm of the gradients.
        norm = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3]))
        gp = tf.reduce_mean((norm - 1.0) ** 2)
        return gp

    def train_step(self, real_images) -> dict:
        # 1. Train the discriminator
        # The original paper recommends training
        # the discriminator for `x` more steps (typically 5) as compared to
        # one step of the generator.
        batch_size = tf.shape(real_images)[0]
        for i in range(self.d_steps):
            # Sample random points in the latent space
            # and decode them to fake images
            random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim))
            generated_images = self.generator(random_latent_vectors)
            generated_logits = self.discriminator(generated_images)
            real_logits = self.discriminator(real_images)

            with tf.GradientTape() as tape:
                d_cost = self.d_loss_fn(real_logits, generated_logits)
                # Calculate the gradient penalty
                gp = self.gradient_penalty(batch_size, generated_images, real_images)
                # Add the gradient penalty to the original discriminator loss
                d_loss = d_cost + gp * self.gp_weight

            grads = tape.gradient(d_loss, self.discriminator.trainable_weights)
            self.d_optimizer.apply_gradients(
                zip(grads, self.discriminator.trainable_weights)
            )

        # 2. Train the generator
        # Sample random points in the latent space and calculate the loss
        # (note that we should *not* update the weights of the discriminator)!
        random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim))
        with tf.GradientTape() as tape:
            predictions = self.discriminator(self.generator(random_latent_vectors))
            g_loss = self.g_loss_fn(predictions)

        grads = tape.gradient(g_loss, self.generator.trainable_weights)
        self.g_optimizer.apply_gradients(zip(grads, self.generator.trainable_weights))

        return {
            "d_loss": d_loss,
            "g_loss": g_loss,
        }
# Define the loss functions for the discriminator,
# which should be (fake_loss - real_loss).
# We will add the gradient penalty later to this loss function.
def discriminator_loss(real_img, fake_img):
    real_loss = tf.reduce_mean(real_img)
    fake_loss = tf.reduce_mean(fake_img)
    return fake_loss - real_loss


# Define the loss functions for the generator.
def generator_loss(fake_img):
    return -tf.reduce_mean(fake_img)
The full code can be found here.
I've tried changing the network and adjusting the learning rate, but it didn't solve the issue.

Looks like your generator is suffering from mode collapse; it has found one image (or a small number of images) that fools the discriminator every time. From this answer: https://datascience.stackexchange.com/questions/51276/gan-am-i-seeing-mode-collapse-common-fixes-not-working:
"Be sure that your learning rate is small enough. My first problem was a large learning rate (I used 0.001 with Adam, then realized that my model only works with something small like 0.0002)."
"Make sure learning is happening. Track the loss values over time and make sure they make sense. There shouldn't be any spikes in loss values, else something is wrong."
Unfortunately, tuning GANs can be more of an art than a science, so play around with the hyperparameters and carefully note what works and what doesn't. Hope this helps!
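One concrete, low-risk experiment along those lines is to compile the model with a smaller Adam learning rate and the beta values commonly used for WGAN-GP (learning rate around 2e-4, beta_1 = 0.5, beta_2 = 0.9). A minimal sketch, reusing the WGAN_GP class and loss functions from the question and assuming the discriminator, generator and a latent size of 128 are already defined; the optimizer settings are common heuristics, not values verified against the full code:

from tensorflow import keras

wgan = WGAN_GP(
    discriminator=discriminator,
    generator=generator,
    latent_dim=128,  # assumed latent size
    discriminator_extra_steps=3,
    gp_weight=10.0,
)
wgan.compile(
    # Small learning rate and beta_1=0.5 are common WGAN-GP heuristics.
    d_optimizer=keras.optimizers.Adam(learning_rate=2e-4, beta_1=0.5, beta_2=0.9),
    g_optimizer=keras.optimizers.Adam(learning_rate=2e-4, beta_1=0.5, beta_2=0.9),
    d_loss_fn=discriminator_loss,
    g_loss_fn=generator_loss,
)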

Related

Control flow in Tensorflow 2 - gradients are None

I have a Tensorflow 2.x model with the purpose of dynamically choosing a computational path. Here's a schematic drawing of this model:
The only trainable block is the Decision Module (DM), which is essentially a fully connected layer with a single binary output (0 or 1; it's made differentiable using a technique called Improved Semantic Hashing). Nets A & B have the same network architecture.
During training, I feed a batch of images forward up to the output of the DM, and then process the decision image by image, directing each image to the chosen net (A or B). The predictions are concatenated into a single tensor, which is used to evaluate the performance. Here's the training code (sigma is the output of the DM; model includes the feature extractor and the DM):
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam()

train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')


@tf.function
def train_step(images, labels):
    with tf.GradientTape() as tape:
        # training=True is only needed if there are custom_layers with different
        # behavior during training versus inference (e.g. Dropout).
        _, sigma = model(images, training=True)
        out = []
        for img, s in zip(images, sigma):
            if s == 0:
                o = binary_classifier_model_a(tf.expand_dims(img, axis=0), training=False)
            else:
                o = binary_classifier_model_b(tf.expand_dims(img, axis=0), training=False)
            out.append(o)
        predictions = tf.concat(out, axis=0)
        loss = loss_object(labels, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))

    train_loss(loss)
    train_accuracy(labels, predictions)
The problem - when running this code, gradients returns [None, None].
What I know for now is:
The first part of the model (up to the DM's output) is differentiable; I tested it by running only this section, applying a loss function (MSE) and then calling tape.gradient - I got actual gradients.
I tried choosing a single (constant) net - e.g. net A - and simply multiplying its output by s (which is either 0 or 1); this is done instead of the if-else block in the code. In this case I also got gradients.
My concern is that such a thing might not be possible - quoting from the official docs:
x = tf.constant(1.0)
v0 = tf.Variable(2.0)
v1 = tf.Variable(2.0)

with tf.GradientTape(persistent=True) as tape:
    tape.watch(x)
    if x > 0.0:
        result = v0
    else:
        result = v1**2
Depending on the value of x in the above example, the tape either
records result = v0 or result = v1**2. The gradient with respect to
x is always None.
dx = tape.gradient(result, x)
print(dx)
>> None
I'm not 100% sure that this is my case, but I wanted to ask here for the experts' opinion.
Is what I'm trying to do possible? And if yes - what should I change in order for this to work?
Thanks
You correctly identified the issue. The control statement of the conditional is not differentiable, so you lose your link to the model variables that produced sigma.
In your case, because you state that sigma is either 1 or 0, you can use the value of sigma as a mask, and skip the conditional statement (and even the loop).
with tf.GradientTape() as tape:
    _, sigma = model(images, training=True)
    predictions = (1.0 - sigma) * binary_classifier_model_a(images, training=False) \
                  + sigma * binary_classifier_model_b(images, training=False)
    loss = loss_object(labels, predictions)
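Completing that snippet with the gradient step (these lines continue directly after the loss computation, reusing the optimizer and metric objects from the question; sigma is assumed to have a shape that broadcasts against the classifier outputs, e.g. [batch, 1]):

gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
train_loss(loss)
train_accuracy(labels, predictions)

Because sigma now participates in the arithmetic that produces predictions (there is no non-differentiable branch), tape.gradient returns actual gradients for the DM's variables.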
It seems the solution to your problem is to use control flow operations. Try using tf.where; you can implement your condition by doing something like this.
a = tf.constant([1, 1])
b = tf.constant([2, 2])
p = tf.constant([True, False])
x = tf.where(p, a + b, a * b)
For more information, please refer to this.

What could cause a VAE(Variational AutoEncoder) to output random noise even after training?

I have trained a VAE on CIFAR10 data-set. However, when I try to generate images from the VAE all I get is a bunch of gray noise back. The implementation of this VAE follows the implementation from the book Generative Deep Learning, but instead of TensorFlow the code uses PyTorch.
The notebook containing the training as well as the generation can be found here, while the actual implementation of the VAE can be found here.
I have tried:
Disabling dropouts.
Increasing the dimension of the latent space.
None of the methods show any improvement at all.
I have verified that:
The input size matches the output size
Back-propagation runs successfully as the loss decreases during training.
Thanks for providing the code and a link to a Colab notebook! +1!
Also, your code is well-written and easy to read. Unless I missed something, I think there are two problems with your code:
The data normalization
The implementation of the VAE loss.
About 1., your CIFAR10DataModule class normalizes the RGB channels of the CIFAR10 images using mean = 0.5 and std = 0.5. Since the pixel values are initially in the [0,1] range, the normalized images have pixel values in the [-1,1] range. However, your Decoder class applies a nn.Sigmoid() activation to the reconstructed images. Therefore, your reconstructed images have pixel values in the [0,1] range. I suggest removing this mean-std normalization so that both the "true" images and the reconstructed images have their pixel values in the [0,1] range.
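A minimal sketch of what that change might look like, assuming CIFAR10DataModule builds its transforms with torchvision (the exact attribute names are not shown in the question, so treat this as illustrative):

from torchvision import transforms

# ToTensor already scales pixels to [0, 1]; dropping the Normalize step keeps
# the targets in the same range as the Sigmoid-activated reconstructions.
transform = transforms.Compose([
    transforms.ToTensor(),
    # transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),  # removed
])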
About 2.: since you're dealing with RGB images, the MSE loss makes sense. The idea behind the MSE loss is the "Gaussian decoder". This decoder assumes the pixel values of a "true image" are generated by independent Gaussian distributions whose means are the pixel values of the reconstructed image (i.e. the output of the decoder), with a given variance. Your implementation of the reconstruction loss (namely r_loss = F.mse_loss(predictions, targets)) is equivalent to a fixed variance. Using ideas from this paper, we can do better and obtain an analytic expression for the "optimal value" of this variance parameter. Finally, the reconstruction loss should be summed over all pixels (reduction = 'sum'). To understand why, have a look at the analytic expression of the reconstruction loss (see, for instance, this blog post which considers the BCE loss).
Here is what the refactored LitVAE class looks like:
class LitVAE(pl.LightningModule):
    def __init__(self,
                 learning_rate: float = 0.0005,
                 **kwargs) -> None:
        """
        Parameters
        ----------
        - `learning_rate: float`:
            learning rate for the optimizer
        - `**kwargs`:
            arguments to pass to the variational autoencoder constructor
        """
        super(LitVAE, self).__init__()
        self.learning_rate = learning_rate
        self.vae = VariationalAutoEncoder(**kwargs)

    def forward(self, x) -> _tensor_size_3_t:
        return self.vae(x)

    def training_step(self, batch, batch_idx):
        r_loss, kl_loss, sigma_opt = self.shared_step(batch)
        loss = r_loss + kl_loss
        self.log("train_loss_step", loss)
        return {"loss": loss, 'log': {"r_loss": r_loss / len(batch[0]), "kl_loss": kl_loss / len(batch[0]), 'sigma_opt': sigma_opt}}

    def training_epoch_end(self, outputs) -> None:
        # add computation graph
        if self.current_epoch == 0:
            sample_input = torch.randn((1, 3, 32, 32))
            sample_model = LitVAE(**MODEL_PARAMS)
            self.logger.experiment.add_graph(sample_model, sample_input)

        epoch_loss = self.average_metric(outputs, "loss")
        self.logger.experiment.add_scalar("train_loss_epoch", epoch_loss, self.current_epoch)

    def validation_step(self, batch, batch_idx):
        r_loss, kl_loss, _ = self.shared_step(batch)
        loss = r_loss + kl_loss
        self.log("valid_loss_step", loss)
        return {"loss": loss}

    def validation_epoch_end(self, outputs) -> None:
        epoch_loss = self.average_metric(outputs, "loss")
        self.logger.experiment.add_scalar("valid_loss_epoch", epoch_loss, self.current_epoch)

    def test_step(self, batch, batch_idx):
        r_loss, kl_loss, _ = self.shared_step(batch)
        loss = r_loss + kl_loss
        self.log("test_loss_step", loss)
        return {"loss": loss}

    def test_epoch_end(self, outputs) -> None:
        epoch_loss = self.average_metric(outputs, "loss")
        self.logger.experiment.add_scalar("test_loss_epoch", epoch_loss, self.current_epoch)

    def configure_optimizers(self):
        return optim.Adam(self.parameters(), lr=self.learning_rate)

    def shared_step(self, batch) -> torch.TensorType:
        # images are both samples and targets thus original
        # labels from the dataset are not required
        true_images, _ = batch

        # perform a forward pass through the VAE
        # mean and log_variance are used to calculate the KL Divergence loss
        # decoder_output represents the generated images
        mean, log_variance, generated_images = self(true_images)

        r_loss, kl_loss, sigma_opt = self.calculate_loss(mean, log_variance, generated_images, true_images)
        return r_loss, kl_loss, sigma_opt

    def calculate_loss(self, mean, log_variance, predictions, targets):
        mse = F.mse_loss(predictions, targets, reduction='mean')
        log_sigma_opt = 0.5 * mse.log()
        r_loss = 0.5 * torch.pow((targets - predictions) / log_sigma_opt.exp(), 2) + log_sigma_opt
        r_loss = r_loss.sum()

        kl_loss = self._compute_kl_loss(mean, log_variance)
        return r_loss, kl_loss, log_sigma_opt.exp()

    def _compute_kl_loss(self, mean, log_variance):
        return -0.5 * torch.sum(1 + log_variance - mean.pow(2) - log_variance.exp())

    def average_metric(self, metrics, metric_name):
        avg_metric = torch.stack([x[metric_name] for x in metrics]).mean()
        return avg_metric
After 10 epochs, that's what the reconstructed images look like:

Should we stop training discriminator while training generator in CycleGAN tutorial?

In the code provided in the TensorFlow tutorial for CycleGAN, they train the discriminator and the generator simultaneously.
def train_step(real_x, real_y):
    # persistent is set to True because the tape is used more than
    # once to calculate the gradients.
    with tf.GradientTape(persistent=True) as tape:
        # Generator G translates X -> Y
        # Generator F translates Y -> X.
        fake_y = generator_g(real_x, training=True)
        cycled_x = generator_f(fake_y, training=True)

        fake_x = generator_f(real_y, training=True)
        cycled_y = generator_g(fake_x, training=True)

        # same_x and same_y are used for identity loss.
        same_x = generator_f(real_x, training=True)
        same_y = generator_g(real_y, training=True)

        disc_real_x = discriminator_x(real_x, training=True)
        disc_real_y = discriminator_y(real_y, training=True)

        disc_fake_x = discriminator_x(fake_x, training=True)
        disc_fake_y = discriminator_y(fake_y, training=True)

        # calculate the loss
        gen_g_loss = generator_loss(disc_fake_y)
        gen_f_loss = generator_loss(disc_fake_x)

        total_cycle_loss = calc_cycle_loss(real_x, cycled_x) + calc_cycle_loss(real_y, cycled_y)

        # Total generator loss = adversarial loss + cycle loss
        total_gen_g_loss = gen_g_loss + total_cycle_loss + identity_loss(real_y, same_y)
        total_gen_f_loss = gen_f_loss + total_cycle_loss + identity_loss(real_x, same_x)

        disc_x_loss = discriminator_loss(disc_real_x, disc_fake_x)
        disc_y_loss = discriminator_loss(disc_real_y, disc_fake_y)

    # Calculate the gradients for generator and discriminator
    generator_g_gradients = tape.gradient(total_gen_g_loss,
                                          generator_g.trainable_variables)
    generator_f_gradients = tape.gradient(total_gen_f_loss,
                                          generator_f.trainable_variables)

    discriminator_x_gradients = tape.gradient(disc_x_loss,
                                              discriminator_x.trainable_variables)
    discriminator_y_gradients = tape.gradient(disc_y_loss,
                                              discriminator_y.trainable_variables)

    # Apply the gradients to the optimizer
    generator_g_optimizer.apply_gradients(zip(generator_g_gradients,
                                              generator_g.trainable_variables))
    generator_f_optimizer.apply_gradients(zip(generator_f_gradients,
                                              generator_f.trainable_variables))

    discriminator_x_optimizer.apply_gradients(zip(discriminator_x_gradients,
                                                  discriminator_x.trainable_variables))
    discriminator_y_optimizer.apply_gradients(zip(discriminator_y_gradients,
                                                  discriminator_y.trainable_variables))
But while training a GAN we need to stop training the discriminator while we are training the generator network.
What's the benefit of using it?
In GANs, you don't stop training D or G; they are trained simultaneously.
Here they first calculate the gradients for each network (so that neither D nor G changes before the current losses are computed), then update the weights using those gradients.
It's not clear from your question what you mean - the benefit of what?
For the training to happen in an adversarial way, the gradients of the discriminator and generator networks should be computed and applied separately. The discriminator becomes stronger because the generator produces more realistic samples, and vice versa. If you updated these networks together as one, the "adversarial" training would not be happening - to the best of my knowledge you are unlikely to obtain pleasing synthetic samples that way.
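To make that distinction concrete, here is a minimal sketch of the explicitly alternating variant (one tape and one optimizer step per network) for the X-side only. It reuses the names from the tutorial snippet above and is illustrative rather than the tutorial's actual code; the tutorial's single persistent tape is equivalent in effect, because each loss's gradients are only ever applied to that network's own variables.

def train_step_alternating(real_x, real_y):
    # Discriminator-X update: gradients are applied only to discriminator_x.
    with tf.GradientTape() as tape:
        fake_x = generator_f(real_y, training=True)
        disc_real_x = discriminator_x(real_x, training=True)
        disc_fake_x = discriminator_x(fake_x, training=True)
        disc_x_loss = discriminator_loss(disc_real_x, disc_fake_x)
    grads = tape.gradient(disc_x_loss, discriminator_x.trainable_variables)
    discriminator_x_optimizer.apply_gradients(zip(grads, discriminator_x.trainable_variables))

    # Generator-F update: gradients are applied only to generator_f, so the
    # discriminator's weights stay fixed even though it appears in the loss.
    with tf.GradientTape() as tape:
        fake_x = generator_f(real_y, training=True)
        disc_fake_x = discriminator_x(fake_x, training=True)
        gen_f_loss = generator_loss(disc_fake_x)
    grads = tape.gradient(gen_f_loss, generator_f.trainable_variables)
    generator_f_optimizer.apply_gradients(zip(grads, generator_f.trainable_variables))
    # The same pattern applies to generator_g / discriminator_y and to the
    # cycle and identity terms, omitted here to keep the sketch short.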

Generating MNIST numbers using LSTM-CGAN in TensorFlow

Inspired by this article, I'm trying to build a Conditional GAN which will use an LSTM to generate MNIST numbers. I hope I'm using the same architecture as in the image below (except for the bidirectional RNN in the discriminator, taken from this paper):
When I run this model, I get very strange results. This image shows my model generating the number 3 after each epoch. It should look more like this. It's really bad.
The loss of my discriminator network decreases really fast, down to close to zero. However, the loss of my generator network oscillates around some fixed point (maybe diverging slowly). I really don't know what's happening. Here is the most important part of my code (full code here):
timesteps = 28
X_dim = 28
Z_dim = 100
y_dim = 10

X = tf.placeholder(tf.float32, [None, timesteps, X_dim])  # reshaped MNIST image to 28x28
y = tf.placeholder(tf.float32, [None, y_dim])  # one-hot label
Z = tf.placeholder(tf.float32, [None, timesteps, Z_dim])  # numpy.random.uniform noise in range [-1; 1]

y_timesteps = tf.tile(tf.expand_dims(y, axis=1), [1, timesteps, 1])  # [None, timesteps, y_dim] - replicate y along axis=1


def discriminator(x, y):
    with tf.variable_scope('discriminator', reuse=tf.AUTO_REUSE) as vs:
        inputs = tf.concat([x, y], axis=2)
        D_cell = tf.contrib.rnn.LSTMCell(64)
        output, _ = tf.nn.dynamic_rnn(D_cell, inputs, dtype=tf.float32)
        last_output = output[:, -1, :]
        logit = tf.contrib.layers.fully_connected(last_output, 1, activation_fn=None)
        pred = tf.nn.sigmoid(logit)
        variables = [v for v in tf.all_variables() if v.name.startswith(vs.name)]
        return variables, pred, logit


def generator(z, y):
    with tf.variable_scope('generator', reuse=tf.AUTO_REUSE) as vs:
        inputs = tf.concat([z, y], axis=2)
        G_cell = tf.contrib.rnn.LSTMCell(64)
        output, _ = tf.nn.dynamic_rnn(G_cell, inputs, dtype=tf.float32)
        logit = tf.contrib.layers.fully_connected(output, X_dim, activation_fn=None)
        pred = tf.nn.sigmoid(logit)
        variables = [v for v in tf.all_variables() if v.name.startswith(vs.name)]
        return variables, pred


G_vars, G_sample = run_generator(Z, y_timesteps)
D_vars, D_real, D_logit_real = run_discriminator(X, y_timesteps)
_, D_fake, D_logit_fake = run_discriminator(G_sample, y_timesteps)

D_loss = -tf.reduce_mean(tf.log(D_real) + tf.log(1. - D_fake))
G_loss = -tf.reduce_mean(tf.log(D_fake))

D_solver = tf.train.AdamOptimizer().minimize(D_loss, var_list=D_vars)
G_solver = tf.train.AdamOptimizer().minimize(G_loss, var_list=G_vars)
There is most likely something wrong with my model. Could anyone help me make the generator network converge?
There are a few things you can do to improve your network architecture and training phase.
Remove the tf.nn.sigmoid(logit) from both the generator and the discriminator and return the raw logits instead.
Use a numerically stable function to calculate your losses, and fix the loss definitions:
D_loss = -tf.reduce_mean(tf.log(D_real) + tf.log(1. - D_fake))
G_loss = -tf.reduce_mean(tf.log(D_fake))
should be:
D_loss_real = tf.nn.sigmoid_cross_entropy_with_logits(
    logits=D_real,
    labels=tf.ones_like(D_real))
D_loss_fake = tf.nn.sigmoid_cross_entropy_with_logits(
    logits=D_fake,
    labels=tf.zeros_like(D_fake))
D_loss = tf.reduce_mean(D_loss_real + D_loss_fake)

G_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
    logits=D_fake,
    labels=tf.ones_like(D_fake)))
Once you've fixed the loss and are using a numerically stable function, things will go better. Also, as a rule of thumb, if there's too much noise in the loss, reduce the learning rate (the default learning rate of Adam is usually too high when training GANs).
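As a minimal illustration of that last point, the solver definitions from the question could be given an explicit, lower learning rate (2e-4 and beta1=0.5 are common GAN heuristics, not values tested on this exact model):

# Lower, explicit learning rate instead of Adam's default 1e-3; beta1=0.5 is a
# common choice when training GANs.
D_solver = tf.train.AdamOptimizer(learning_rate=2e-4, beta1=0.5).minimize(D_loss, var_list=D_vars)
G_solver = tf.train.AdamOptimizer(learning_rate=2e-4, beta1=0.5).minimize(G_loss, var_list=G_vars)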
Hope it helps

Convergence of LSTM network using Tensorflow

I am trying to detect micro-events in a long time series. For this purpose, I will train an LSTM network.
Data. Input for each time sample is 11 different features somewhat normalized to fit 0-1. Output will be either one of two classes.
Batching. Due to the huge class imbalance, I have extracted the data in batches of 60 time samples each, of which at least 5 will always be class 1 and the rest class 0. In this way the class imbalance is reduced from 150:1 to around 12:1. I have then randomized the order of all my batches.
Model. I am attempting to train an LSTM, with an initial configuration of 3 different cells with 5 delay steps. I expect the micro-events to arrive in sequences of at least 3 time steps.
Problem: When I try to train the network, it quickly converges towards saying that EVERYTHING belongs to the majority class. When I implement a weighted loss function, at a certain threshold it changes to saying that EVERYTHING belongs to the minority class. I suspect (without being an expert) that there is no learning in my LSTM cells, or that my configuration is off?
Below is the code for my implementation. I am hoping that someone can tell me
Is my implementation correct?
What other reasons could there be for such behaviour?
ar_model.py
import numpy as np
import tensorflow as tf
from tensorflow.models.rnn import rnn
import ar_config

config = ar_config.get_config()


class ARModel(object):

    def __init__(self, is_training=False, config=None):

        # Config
        if config is None:
            config = ar_config.get_config()

        # Placeholders
        self._features = tf.placeholder(tf.float32, [None, config.num_features], name='ModelInput')
        self._targets = tf.placeholder(tf.float32, [None, config.num_classes], name='ModelOutput')

        # Hidden layer
        with tf.variable_scope('lstm') as scope:
            lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(config.num_hidden, forget_bias=0.0)
            cell = tf.nn.rnn_cell.MultiRNNCell([lstm_cell] * config.num_delays)
            self._initial_state = cell.zero_state(config.batch_size, dtype=tf.float32)
            outputs, state = rnn.rnn(cell, [self._features], dtype=tf.float32)

        # Output layer
        output = outputs[-1]
        softmax_w = tf.get_variable('softmax_w', [config.num_hidden, config.num_classes], tf.float32)
        softmax_b = tf.get_variable('softmax_b', [config.num_classes], tf.float32)
        logits = tf.matmul(output, softmax_w) + softmax_b

        # Evaluate
        ratio = (60.00 / 5.00)
        class_weights = tf.constant([ratio, 1 - ratio])
        weighted_logits = tf.mul(logits, class_weights)
        loss = tf.nn.softmax_cross_entropy_with_logits(weighted_logits, self._targets)
        self._cost = cost = tf.reduce_mean(loss)
        self._predict = tf.argmax(tf.nn.softmax(logits), 1)
        self._correct = tf.equal(tf.argmax(logits, 1), tf.argmax(self._targets, 1))
        self._accuracy = tf.reduce_mean(tf.cast(self._correct, tf.float32))
        self._final_state = state

        if not is_training:
            return

        # Optimize
        optimizer = tf.train.AdamOptimizer()
        self._train_op = optimizer.minimize(cost)

    @property
    def features(self):
        return self._features

    @property
    def targets(self):
        return self._targets

    @property
    def cost(self):
        return self._cost

    @property
    def accuracy(self):
        return self._accuracy

    @property
    def train_op(self):
        return self._train_op

    @property
    def predict(self):
        return self._predict

    @property
    def initial_state(self):
        return self._initial_state

    @property
    def final_state(self):
        return self._final_state
ar_train.py
import os
from datetime import datetime
import numpy as np
import tensorflow as tf
from tensorflow.python.platform import gfile
import ar_network
import ar_config
import ar_reader

config = ar_config.get_config()


def main(argv=None):
    if gfile.Exists(config.train_dir):
        gfile.DeleteRecursively(config.train_dir)
    gfile.MakeDirs(config.train_dir)
    train()


def train():
    train_data = ar_reader.ArousalData(config.train_data, num_steps=config.max_steps)
    test_data = ar_reader.ArousalData(config.test_data, num_steps=config.max_steps)

    with tf.Graph().as_default(), tf.Session() as session, tf.device('/cpu:0'):
        initializer = tf.random_uniform_initializer(minval=-0.1, maxval=0.1)
        with tf.variable_scope('model', reuse=False, initializer=initializer):
            m = ar_network.ARModel(is_training=True)
            s = tf.train.Saver(tf.all_variables())
        tf.initialize_all_variables().run()

        for batch_input, batch_target in train_data:
            step = train_data.iter_steps
            dict = {
                m.features: batch_input,
                m.targets: batch_target
            }
            session.run(m.train_op, feed_dict=dict)
            state, cost, accuracy = session.run([m.final_state, m.cost, m.accuracy], feed_dict=dict)

            if not step % 10:
                test_input, test_target = test_data.next()
                test_accuracy = session.run(m.accuracy, feed_dict={
                    m.features: test_input,
                    m.targets: test_target
                })
                now = datetime.now().time()
                print('%s | Iter %4d | Loss= %.5f | Train= %.5f | Test= %.3f' % (now, step, cost, accuracy, test_accuracy))

            if not step % 1000:
                destination = os.path.join(config.train_dir, 'ar_model.ckpt')
                s.save(session, destination)


if __name__ == '__main__':
    tf.app.run()
ar_config.py
class Config(object):

    # Directories
    train_dir = '...'
    ckpt_dir = '...'
    train_data = '...'
    test_data = '...'

    # Data
    num_features = 13
    num_classes = 2
    batch_size = 60

    # Model
    num_hidden = 3
    num_delays = 5

    # Training
    max_steps = 100000


def get_config():
    return Config()
UPDATED ARCHITECTURE:
# Placeholders
self._features = tf.placeholder(tf.float32, [None, config.num_features, config.num_delays], name='ModelInput')
self._targets = tf.placeholder(tf.float32, [None, config.num_output], name='ModelOutput')

# Weights
weights = {
    'hidden': tf.get_variable('w_hidden', [config.num_features, config.num_hidden], tf.float32),
    'out': tf.get_variable('w_out', [config.num_hidden, config.num_classes], tf.float32)
}
biases = {
    'hidden': tf.get_variable('b_hidden', [config.num_hidden], tf.float32),
    'out': tf.get_variable('b_out', [config.num_classes], tf.float32)
}

# Layer in
with tf.variable_scope('input_hidden') as scope:
    inputs = self._features
    inputs = tf.transpose(inputs, perm=[2, 0, 1])  # (BatchSize, NumFeatures, TimeSteps) -> (TimeSteps, BatchSize, NumFeatures)
    inputs = tf.reshape(inputs, shape=[-1, config.num_features])  # (TimeSteps, BatchSize, NumFeatures) -> (TimeSteps*BatchSize, NumFeatures)
    inputs = tf.add(tf.matmul(inputs, weights['hidden']), biases['hidden'])

# Layer hidden
with tf.variable_scope('hidden_hidden') as scope:
    inputs = tf.split(0, config.num_delays, inputs)  # -> n_steps * (batchsize, features)
    cell = tf.nn.rnn_cell.BasicLSTMCell(config.num_hidden, forget_bias=0.0)
    self._initial_state = cell.zero_state(config.batch_size, dtype=tf.float32)
    outputs, state = rnn.rnn(cell, inputs, dtype=tf.float32)

# Layer out
with tf.variable_scope('hidden_output') as scope:
    output = outputs[-1]
    logits = tf.add(tf.matmul(output, weights['out']), biases['out'])
Odd elements
Weighted loss
I am not sure your "weighted loss" does what you want it to do:
ratio = (60.00 / 5.00)
class_weights = tf.constant([ratio, 1 - ratio])
weighted_logits = tf.mul(logits, class_weights)
This weighting is applied before the loss function is calculated (also, I think you wanted an element-wise multiplication here, and since your ratio is above 1 the second class weight, 1 - ratio, is negative), so it forces your logits to behave in a certain way before the softmax is applied.
If you want a weighted loss you should apply the weighting after the cross-entropy, i.e. compute
loss = tf.nn.softmax_cross_entropy_with_logits(logits, self._targets)
on the unweighted logits and then do an element-wise multiplication with your weights,
loss = loss * example_weights
where example_weights holds one weight per example, derived from a class-weight vector of shape [2,].
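A minimal sketch of that per-example weighting, reusing the names from ar_model.py (the class-weight values are illustrative, and the old positional softmax_cross_entropy_with_logits signature from the question is kept):

class_weights = tf.constant([1.0, 12.0])  # one weight per class; up-weight the rare class
loss = tf.nn.softmax_cross_entropy_with_logits(logits, self._targets)  # unweighted, on the raw logits
# Pick one weight per example according to its one-hot target.
example_weights = tf.reduce_sum(class_weights * self._targets, reduction_indices=1)
self._cost = cost = tf.reduce_mean(loss * example_weights)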
However, I would not recommend using weighted losses at all. Perhaps try increasing the sampling ratio even further than 1:6 instead.
Architecture
As far as I can read, you are using 5 stacked LSTMs with 3 hidden units per layer?
Try removing the MultiRNNCell and just use a single LSTM/GRU (maybe even just a vanilla RNN), and jack the hidden units up to ~100-1000; for example:
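A minimal sketch of that change inside ar_model.py's hidden layer (128 hidden units is an illustrative value, not a tuned one):

with tf.variable_scope('lstm') as scope:
    # One LSTM cell with a much larger state, instead of 5 stacked 3-unit cells.
    cell = tf.nn.rnn_cell.BasicLSTMCell(128, forget_bias=1.0)  # 1.0 is the default forget bias
    self._initial_state = cell.zero_state(config.batch_size, dtype=tf.float32)
    outputs, state = rnn.rnn(cell, [self._features], dtype=tf.float32)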
Debugging
Often when you are facing problems with an oddly behaving network, it can be a good idea to:
Print everything
Literally print the shapes and values of every tensor in your model, use sess to fetch it and then print it. Your input data, the first hidden representation, your predictions, your losses etc.
You can also use TensorFlow's tf.Print(): x_tensor = tf.Print(x_tensor, [tf.shape(x_tensor)])
Use tensorboard
Using TensorBoard summaries of your gradients, accuracy metrics and histograms will reveal patterns that might explain certain behaviour, such as what led to exploding weights. Maybe your forget bias goes to infinity, or you're not tracking the gradient through a certain layer, etc.
Other questions
How large is your dataset?
How long are your sequences?
Are the 13 features categorical or continuous? You should not normalize categorical variables or represent them as integers; instead, you should use one-hot encoding (see the sketch below).
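For example, a hypothetical categorical feature column taking four integer values could be expanded into four binary columns before being concatenated with the continuous features (a NumPy sketch; the names and values are made up):

import numpy as np

categorical_column = np.array([2, 0, 3, 1])  # hypothetical feature with values 0..3
num_values = 4
one_hot = np.eye(num_values)[categorical_column]  # shape (num_samples, num_values)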
Gunnar has already made lots of good suggestions. A few more small things worth paying attention to in general for this sort of architecture:
Try tweaking the Adam learning rate. You should determine the proper learning rate by cross-validation; as a rough start, you could just check whether a smaller learning rate saves your model from crashing on the training data.
You should definitely use more hidden units. It's cheap to try larger networks when you first start out on a dataset. Go as large as necessary to avoid the underfitting you've observed. Later you can regularize / pare down the network after you get it to learn something useful.
Concretely, how long are the sequences you are passing into the network? You say you have a 30k-long time sequence; I assume you are passing in subsections/samples of this sequence?
