Difference between these implementations of LSTM Autoencoder? - python

Specifically what spurred this question is the return_sequence argument of TensorFlow's version of an LSTM layer.
The docs say:
Boolean. Whether to return the last output in the output sequence, or the full sequence.
Default: False.
I've seen some implementations, especially of autoencoders, that use this argument to strip away everything but the last element of the output sequence, which then serves as the output of the 'encoder' half of the autoencoder.
Below are three different implementations. I'd like to understand the reasons behind the differences, as they seem very large, yet all call themselves the same thing.
Example 1 (TensorFlow):
This implementation strips away all outputs of the LSTM except the last element of the sequence, and then repeats that element some number of times to reconstruct the sequence:
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, RepeatVector, TimeDistributed, Dense

# n_in / n_out: input and output sequence lengths (defined elsewhere)
model = Sequential()
model.add(LSTM(100, activation='relu', input_shape=(n_in, 1)))  # encoder: keeps only the last output
# Decoder below
model.add(RepeatVector(n_out))
model.add(LSTM(100, activation='relu', return_sequences=True))
model.add(TimeDistributed(Dense(1)))
When looking at implementations of autoencoders in PyTorch, I don't see authors doing this. Instead they use the entire output of the LSTM for the encoder (sometimes followed by a dense layer and sometimes not).
Example 1 (PyTorch):
This implementation trains an embedding BEFORE an LSTM layer is applied... It seems to almost defeat the idea of an LSTM based auto-encoder... The sequence is already encoded by the time it hits the LSTM layer.
import torch.nn as nn

class EncoderLSTM(nn.Module):
    def __init__(self, input_size, hidden_size, n_layers=1, drop_prob=0):
        super(EncoderLSTM, self).__init__()
        self.hidden_size = hidden_size
        self.n_layers = n_layers
        self.embedding = nn.Embedding(input_size, hidden_size)
        self.lstm = nn.LSTM(hidden_size, hidden_size, n_layers, dropout=drop_prob, batch_first=True)

    def forward(self, inputs, hidden):
        # Embed input words
        embedded = self.embedding(inputs)
        # Pass the embedded word vectors into LSTM and return all outputs
        output, hidden = self.lstm(embedded, hidden)
        return output, hidden
Example 2 (PyTorch):
This example encoder first expands the input with one LSTM layer, then does its compression via a second LSTM layer with a smaller number of hidden nodes. Besides the expansion, this seems in line with this paper I found: https://arxiv.org/pdf/1607.00148.pdf
However, in this implementation's decoder, there is no final dense layer. The decoding happens through a second LSTM layer that expands the encoding back to the same dimension as the original input. See it here. This is not in line with the paper (although I don't know if the paper is authoritative or not).
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, seq_len, n_features, embedding_dim=64):
        super(Encoder, self).__init__()
        self.seq_len, self.n_features = seq_len, n_features
        self.embedding_dim, self.hidden_dim = embedding_dim, 2 * embedding_dim
        self.rnn1 = nn.LSTM(
            input_size=n_features,
            hidden_size=self.hidden_dim,
            num_layers=1,
            batch_first=True
        )
        self.rnn2 = nn.LSTM(
            input_size=self.hidden_dim,
            hidden_size=embedding_dim,
            num_layers=1,
            batch_first=True
        )

    def forward(self, x):
        x = x.reshape((1, self.seq_len, self.n_features))
        x, (_, _) = self.rnn1(x)
        x, (hidden_n, _) = self.rnn2(x)
        return hidden_n.reshape((self.n_features, self.embedding_dim))
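For reference, a decoder matching the description above (no final dense layer, with the second LSTM expanding the embedding back to the input dimension) might look roughly like the sketch below. This is a reconstruction of what the question describes, not the linked author's exact code, and it assumes n_features = 1 (a single channel) as in the encoder.

import torch.nn as nn

class Decoder(nn.Module):
    # Hypothetical sketch: the second LSTM expands the embedding back to
    # n_features, and there is no dense layer at the end.
    def __init__(self, seq_len, n_features, embedding_dim=64):
        super(Decoder, self).__init__()
        self.seq_len, self.n_features = seq_len, n_features
        self.embedding_dim = embedding_dim
        self.rnn1 = nn.LSTM(input_size=embedding_dim, hidden_size=embedding_dim,
                            num_layers=1, batch_first=True)
        self.rnn2 = nn.LSTM(input_size=embedding_dim, hidden_size=n_features,
                            num_layers=1, batch_first=True)

    def forward(self, x):
        # x: (1, embedding_dim) coming from the encoder above (assumes n_features == 1)
        x = x.repeat(self.seq_len, 1).reshape((1, self.seq_len, self.embedding_dim))
        x, _ = self.rnn1(x)
        x, _ = self.rnn2(x)
        return x.reshape((self.seq_len, self.n_features))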
Question:
I'm wondering about this discrepancy between implementations. The differences seem quite large. Are all of these valid ways to accomplish the same thing, or are some of them misguided attempts at a "real" LSTM autoencoder?

There is no official or correct way of designing the architecture of an LSTM-based autoencoder... The only specifics the name provides are that the model should be an autoencoder and that it should use an LSTM layer somewhere.
The implementations you found are each different and unique, even though they could all be used for the same task.
Let's describe them:
TF implementation:
It assumes the input has only one channel, meaning that each element in the sequence is just a number and that this is already preprocessed.
The default behaviour of the LSTM layer in Keras/TF is to output only the last output of the sequence; you can make it output all time steps with the return_sequences parameter (see the shape check after this description).
In this case the input data has been shrunk to (batch_size, LSTM_units).
Consider that the last output of an LSTM is of course a function of the previous outputs (specifically if it is a stateful LSTM).
It applies a Dense(1) in the last layer in order to get the same shape as the input.
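As a quick sanity check of the point about return_sequences (toy shapes, TF 2.x eager mode; not the asker's data):

import numpy as np
from tensorflow.keras.layers import LSTM

x = np.random.rand(4, 10, 1).astype("float32")    # (batch, timesteps, features)

last_only = LSTM(100)(x)                           # default: return_sequences=False
full_seq = LSTM(100, return_sequences=True)(x)     # keep every time step

print(last_only.shape)   # (4, 100)      -> (batch_size, LSTM_units)
print(full_seq.shape)    # (4, 10, 100)  -> (batch_size, timesteps, LSTM_units)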
PyTorch 1:
They apply an embedding to the input before it is fed to the LSTM.
This is standard practice, and it helps, for example, to transform each input element into vector form (see word2vec, where each word in a text sequence, which isn't a vector itself, is mapped into a vector space). It is only a preprocessing step, so that the data has a more meaningful form.
This does not defeat the idea of the LSTM autoencoder, because the embedding is applied independently to each element of the input sequence, so it is not encoded when it enters the LSTM layer.
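A minimal illustration of that point (toy numbers, not from the question): the embedding maps each token id to a vector independently, so the sequence itself is not summarized before the LSTM sees it.

import torch
import torch.nn as nn

emb = nn.Embedding(num_embeddings=10, embedding_dim=4)   # vocab of 10, vectors of size 4
tokens = torch.tensor([[1, 5, 5, 2]])                    # one sequence of 4 token ids

vectors = emb(tokens)
print(vectors.shape)                                     # torch.Size([1, 4, 4])
# Identical token ids get identical vectors; positions are not mixed together.
print(torch.equal(vectors[0, 1], vectors[0, 2]))         # True (both are token 5)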
PyTorch 2:
In this case the input shape is not (seq_len, 1) as in the first TF example, so the decoder doesn't need a dense layer afterwards. The author used a number of units in the LSTM layer equal to the input dimension.
In the end, you choose the architecture of your model depending on the data you want to train on, specifically: its nature (text, audio, images), the input shape, the amount of data you have, and so on...

Related

LSTM autoencoder for anomaly detection

I'm testing out different implementations of an LSTM autoencoder for anomaly detection on 2D inputs.
My question is not about the code itself but about understanding the underlying behavior of each network.
Both implementations have the same number of units (16). Model 2 is a "typical" seq-to-seq autoencoder with the last output of the encoder repeated "n" times to match the input of the decoder.
I'd like to understand why Model 1 seems to easily outperform Model 2, and why Model 2 isn't able to do better than the mean.
Model 1:
from tensorflow.keras import Model, layers

class LSTM_Detector(Model):
    def __init__(self, flight_len, param_len, hidden_state=16):
        super(LSTM_Detector, self).__init__()
        self.input_dim = (flight_len, param_len)
        self.units = hidden_state
        self.encoder = layers.LSTM(self.units,
                                   return_state=True,
                                   return_sequences=True,
                                   activation="tanh",
                                   name='encoder',
                                   input_shape=self.input_dim)
        self.decoder = layers.LSTM(self.units,
                                   return_sequences=True,
                                   activation="tanh",
                                   name="decoder",
                                   input_shape=(self.input_dim[0], self.units))
        self.dense = layers.TimeDistributed(layers.Dense(self.input_dim[1]))

    def call(self, x):
        output, hs, cs = self.encoder(x)
        encoded_state = [hs, cs]  # see https://www.tensorflow.org/guide/keras/rnn
        decoded = self.decoder(output, initial_state=encoded_state)
        output_decoder = self.dense(decoded)
        return output_decoder
Model 2:
class Seq2Seq_Detector(Model):
    def __init__(self, flight_len, param_len, hidden_state=16):
        super(Seq2Seq_Detector, self).__init__()
        self.input_dim = (flight_len, param_len)
        self.units = hidden_state
        self.encoder = layers.LSTM(self.units,
                                   return_state=True,
                                   return_sequences=False,
                                   activation="tanh",
                                   name='encoder',
                                   input_shape=self.input_dim)
        self.repeat = layers.RepeatVector(self.input_dim[0])
        self.decoder = layers.LSTM(self.units,
                                   return_sequences=True,
                                   activation="tanh",
                                   name="decoder",
                                   input_shape=(self.input_dim[0], self.units))
        self.dense = layers.TimeDistributed(layers.Dense(self.input_dim[1]))

    def call(self, x):
        output, hs, cs = self.encoder(x)
        encoded_state = [hs, cs]  # see https://www.tensorflow.org/guide/keras/rnn
        repeated_vec = self.repeat(output)
        decoded = self.decoder(repeated_vec, initial_state=encoded_state)
        output_decoder = self.dense(decoded)
        return output_decoder
I fitted these 2 models for 200 epochs on a sample of data of shape (89, 1500, 77), each input being a 2D array of shape (1500, 77), with test data of shape (10, 1500, 77). Both models had only 16 units.
Here are the results of the autoencoder on one feature of the test data.
Results Model 1: (black line is the truth, red is the reconstruction)
Results Model 2:
I understand the second one is more restrictive, since all the information from the input sequence is compressed into one step, but I'm still surprised that it's barely able to do better than predicting the average.
On the other hand, I feel Model 1 tends to be more "influenced" by new data without giving back the input; see the example below of Model 1 with a flat line as input:
PS: I know it's not a lot of data for that kind of model; I have much more available, but at this stage I'm just experimenting and trying to build my understanding.
PS 2: Neither model overfitted its data, and the training and validation curves are almost textbook-like.
Is anyone able to explain why there is such a gap in terms of behavior?
Thank you
In model 1, each point of 77 features is compressed and decompressed this way: 77 -> 16 -> 16 -> 77, plus some info from the previous steps. It seems that replacing the LSTMs with just TimeDistributed(Dense(...)) may also work in this case, but I cannot say for sure as I don't know the data. The third image may become better.
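For completeness, the TimeDistributed(Dense(...)) alternative mentioned above could look something like the sketch below. The sizes are taken from the question (1500 steps, 77 features, a bottleneck of 16), but this is only an untested illustration of the idea, not a drop-in replacement.

from tensorflow.keras import Sequential, layers

timesteps, n_features, bottleneck = 1500, 77, 16

# Per-time-step 77 -> 16 -> 77 compression, with no recurrence at all.
model = Sequential([
    layers.TimeDistributed(layers.Dense(bottleneck, activation="tanh"),
                           input_shape=(timesteps, n_features)),
    layers.TimeDistributed(layers.Dense(n_features)),
])
model.compile(optimizer="adam", loss="mse")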
What model 2 predicts usually happens when there is no useful signal in the input, and the best thing the model can do (well, optimize to do) is just to predict the mean target value of the training set.
In model 2 you have:
...
self.encoder = layers.LSTM(self.units,
                           return_state=True,
                           return_sequences=False,
...
and then
self.repeat = layers.RepeatVector(self.input_dim[0])
So, in fact, when it does
repeated_vec = self.repeat(output)
decoded = self.decoder(repeated_vec, initial_state=encoded_state)
it just takes only the last output from the encoder (which in this case represents the last of the 1500 steps), copies it 1500 times (input_dim[0]), and tries to predict all 1500 values from information about only the last few steps. This is where the model loses most of the useful signal. It does not have enough/any information about the input, and the best thing it can learn in order to minimize the loss function (which I suppose in this case is MSE or MAE) is to predict the mean value for each of the features.
Also, a seq-to-seq model usually passes the prediction of one decoder step as the input to the next decoder step; in the current case, the input is always the same value.
TL;DR 1) seq-to-seq is not the best model for this case; 2) due to the bottleneck it cannot really learn to do anything better than just to predict the mean value for each feature.
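To make the bottleneck concrete, here is what RepeatVector does to the encoder output (a toy tensor with the question's sizes, not the actual model):

import numpy as np
from tensorflow.keras import layers

encoded = np.random.rand(1, 16).astype("float32")       # (batch, units): one vector per sequence
repeated = layers.RepeatVector(1500)(encoded)           # (batch, 1500, units)

print(repeated.shape)                                   # (1, 1500, 16)
# Every one of the 1500 decoder inputs is the same vector:
print(np.allclose(repeated[0, 0], repeated[0, 1499]))   # True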

Adding softmax layer to LSTM network "freezes" output

I've been trying to teach myself the basics of RNNs with a personal project in PyTorch. I want to produce a simple network that is able to predict the next character in a sequence (the idea is mainly from this article http://karpathy.github.io/2015/05/21/rnn-effectiveness/ but I wanted to do most of the stuff myself).
My idea is this: I take a batch of B input sequences of size n (np array of n integers), one-hot encode them, and pass them through my network composed of several LSTM layers, one fully connected layer, and one softmax unit.
I then compare the output to the target sequences which are the input sequences shifted one step ahead.
My issue is that when I include the softmax layer, the output is the same every single epoch for every single batch. When I don't include it, the network seems to learn appropriately. I can't figure out what's wrong.
My implementation is the following:
import torch.nn as nn

class Model(nn.Module):
    def __init__(self, one_hot_length, dropout_prob, num_units, num_layers):
        super().__init__()
        self.num_units = num_units  # needed by forward_pass below
        self.LSTM = nn.LSTM(one_hot_length, num_units, num_layers, batch_first=True, dropout=dropout_prob)
        self.dropout = nn.Dropout(dropout_prob)
        self.fully_connected = nn.Linear(num_units, one_hot_length)
        self.softmax = nn.Softmax(dim=1)
        # dim = 1 as the tensor is of shape (batch_size*seq_length, one_hot_length) when entering the softmax unit

    def forward_pass(self, input_seq, hc_states):
        output, hc_states = self.LSTM(input_seq, hc_states)
        output = output.view(-1, self.num_units)
        output = self.fully_connected(output)
        # I simply comment out the next line when I run the network without the softmax layer
        output = self.softmax(output)
        return output, hc_states
one_hot_length is the size of my character dictionary (~200, also the size of a one-hot encoded vector);
num_units is the number of hidden units in an LSTM cell, and num_layers is the number of LSTM layers in the network.
The inside of the training loop (simplified) goes as follows:
input, target = next_batches(data, batch_pointer)
input = nn.functional.one_hot(input, num_classes=one_hot_length).float()
for state in hc_states:
    state.detach_()
optimizer.zero_grad()
output, states = net.forward_pass(input, hc_states)
loss = nn.CrossEntropyLoss(output, target)
loss.backward()
nn.utils.clip_grad_norm_(net.parameters(), MaxGradNorm)
optimizer.step()
Here hc_states is a tuple of the hidden-state tensor and the cell-state tensor, input is a tensor of size (B, n, one_hot_length), and target is of size (B, n).
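For reference, a quick shape check with dummy tensors (hypothetical sizes, using the Model class above) reproduces the shapes described:

import torch

# Hypothetical sizes, just to sanity-check shapes with the Model class above.
B, n, one_hot_length, num_units, num_layers = 8, 50, 200, 128, 2

net = Model(one_hot_length, dropout_prob=0.1, num_units=num_units, num_layers=num_layers)
dummy = torch.randn(B, n, one_hot_length)               # stand-in for the one-hot input
hc_states = (torch.zeros(num_layers, B, num_units),     # h0
             torch.zeros(num_layers, B, num_units))     # c0

out, hc_states = net.forward_pass(dummy, hc_states)
print(out.shape)   # torch.Size([400, 200]), i.e. (B*n, one_hot_length)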
I'm training on a really small dataset (sentences in a .txt file of ~400 KB) just to tune my code, and did 4 different runs with different parameters; each time the outcome was the same: the network doesn't learn at all when it has the softmax layer, and trains somewhat appropriately without it.
I don't think it is an issue with tensor shapes, as I'm almost sure I checked everything.
My understanding of my problem is that I'm trying to do classification, and the usual approach is to put a softmax unit at the end to get "probabilities" of each character appearing, but clearly something isn't right.
Any ideas to help me?
I'm also fairly new to PyTorch and RNNs, so I apologize in advance if my architecture/implementation is some kind of monstrosity to a knowledgeable person. Feel free to correct me, and thanks in advance.

Puzzled by stacked bidirectional RNN in TensorFlow 2

I'm learning how to build a seq2seq model based on this TensorFlow 2 NMT tutorial, and I'm trying to expand upon it by stacking multiple RNN layers for the encoder and decoder. However, I'm having trouble retrieving the output which corresponds to the hidden state of the encoder.
Here's my code for building the stacked bidirectional GRUCell layers in the encoder:
# Encoder initializer
def __init__(self, n_layers, dropout, ...):
    ...
    gru_cells = [layers.GRUCell(units,
                                recurrent_initializer='glorot_uniform',
                                dropout=dropout)
                 for _ in range(n_layers)]
    self.gru = layers.Bidirectional(layers.RNN(gru_cells,
                                               return_sequences=True,
                                               return_state=True))
Assuming the above is correct, I then call the layer I created:
# Encoder call method
def call(self, inputs, state):
    ...
    list_outputs = self.gru(inputs, initial_state=state)
    print(len(list_outputs))  # test
list_outputs has length 3 when n_layers = 1, which is expected behavior according to this SO post. When I increase n_layers by one, I find that the number of outputs increases by two, which I presume are the forward and reverse final states of the new layer. So 2 layers -> 5 outputs, 3 layers -> 7 outputs, etc. However, I can't figure out which output corresponds to which layer and in which direction.
Ultimately what I'd like to know is: how can I get the forward and reverse final states of the last layer in this stacked bidirectional RNN? If I understand the seq2seq model correctly, they make up the hidden state that is passed to the decoder.
After digging through TensorFlow source code for the RNN and Bidirectional classes, my best guess for the output format of a stacked bidirectional RNN layer is the following 1+2n tuple, where n is the number of stacked layers:
[0] concatenation of forward and backward state across the RNN
[1 : len//2 + 1] final state of forward layers, from first to last
[len//2 + 1:] final state of reverse layers, from first to last
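One way to check this guess empirically is to build a small stacked bidirectional RNN and print the shape of everything it returns (toy sizes, TF 2.x; not the tutorial's encoder):

import numpy as np
from tensorflow.keras import layers

units, n_layers = 8, 2
gru_cells = [layers.GRUCell(units) for _ in range(n_layers)]
bi_rnn = layers.Bidirectional(layers.RNN(gru_cells,
                                         return_sequences=True,
                                         return_state=True))

x = np.random.rand(4, 10, 3).astype("float32")   # (batch, timesteps, features)
outputs = bi_rnn(x)
for i, o in enumerate(outputs):
    print(i, o.shape)
# Index 0 is the concatenated sequence output, shape (4, 10, 2 * units);
# the remaining 2 * n_layers entries are per-layer final states of shape (4, units).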

Time Series Forecasting model with LSTM in Tensorflow predicts a constant

I am building a hurricane track predictor using satellite data. I have a many-to-many output in a multilayer LSTM model, with input and output arrays following the structure [samples[time[features]]]. The input and output features are the coordinates of the hurricane, WS, and other dimensions.
The problem is that the error stops decreasing and, as a consequence, the model always predicts a constant. After reading several posts, I standardized the data and removed some unnecessary layers, but still the model always predicts the same output.
I think the model is big enough, and the activation functions make sense given that the outputs are all within [-1, 1].
So my question is: what am I doing wrong?
The model is the following:
from tensorflow.keras import Sequential, layers
from tensorflow.keras.layers import LSTM, Dense, Reshape
from tensorflow.keras.callbacks import EarlyStopping

class Stacked_LSTM():
    def __init__(self, training_inputs, training_outputs, n_steps_in, n_steps_out,
                 n_features_in, n_features_out, metrics, optimizer, epochs):
        self.training_inputs = training_inputs
        self.training_outputs = training_outputs
        self.epochs = epochs
        self.n_steps_in = n_steps_in
        self.n_steps_out = n_steps_out
        self.n_features_in = n_features_in
        self.n_features_out = n_features_out
        self.metrics = metrics
        self.optimizer = optimizer
        self.stop = EarlyStopping(monitor='loss', min_delta=0.000000000001, patience=30)

        self.model = Sequential()
        self.model.add(LSTM(360, activation='tanh', return_sequences=True,
                            input_shape=(self.n_steps_in, self.n_features_in,)))  # kernel_regularizer=regularizers.l2(0.001) was not a good idea
        self.model.add(layers.Dropout(0.1))
        self.model.add(LSTM(360, activation='tanh'))
        self.model.add(layers.Dropout(0.1))
        self.model.add(Dense(self.n_features_out * self.n_steps_out))
        self.model.add(Reshape((self.n_steps_out, self.n_features_out)))
        self.model.compile(optimizer=self.optimizer, loss='mae', metrics=[metrics])

    def fit(self):
        return self.model.fit(self.training_inputs, self.training_outputs,
                              callbacks=[self.stop], epochs=self.epochs)

    def predict(self, input):
        return self.model.predict(input)
Notes
1) In this particular problem, the time series data is not "continuous", because each time series belongs to a particular hurricane. I have therefore adapted the training and test samples of the time series to each hurricane. The implication of this is that I cannot use stateful=True in my layers, because it would then mean that the model doesn't make any difference between the different hurricanes (if my understanding is correct).
2) No image data, so no convolutional model needed.
A few suggestions, based on my experience:
4 layers of LSTM is too much. Stick to two, maximum three.
Don't use relu as activations for LSTMs.
Do not use BatchNormalization for time-series.
Other than these, I'd also suggest removing the dense layers between two LSTM layers.
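Putting those suggestions together, a trimmed-down stack might look like the sketch below (hypothetical layer sizes; it keeps the question's tanh activations and MAE loss):

from tensorflow.keras import Sequential, layers

# n_steps_in/out and n_features_in/out as in the question; values here are placeholders.
n_steps_in, n_features_in, n_steps_out, n_features_out = 10, 6, 5, 2

model = Sequential([
    layers.LSTM(128, activation="tanh", return_sequences=True,
                input_shape=(n_steps_in, n_features_in)),
    layers.LSTM(64, activation="tanh"),
    layers.Dense(n_steps_out * n_features_out),
    layers.Reshape((n_steps_out, n_features_out)),
])
model.compile(optimizer="adam", loss="mae")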

Generation of sequences from latent space [pytorch]

I have several questions about best practice in using recurrent networks in PyTorch for the generation of sequences.
The first one: if I want to build a decoder net, should I use nn.GRU (or nn.LSTM) instead of nn.LSTMCell (nn.GRUCell)? From my experience, if I work with LSTMCell the speed of calculation is dramatically lower (up to 100 times) than if I use nn.LSTM. Maybe it is related to the cuDNN optimisation for the LSTM (and GRU) modules? Is there any way to speed up LSTMCell calculations?
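For what it's worth, the speed gap is usually attributed to the fused (cuDNN-backed on GPU) kernel behind nn.LSTM versus a Python-level loop over nn.LSTMCell. A rough way to see it (toy sizes; absolute timings will vary a lot by hardware):

import time
import torch
import torch.nn as nn

batch, steps, size = 32, 200, 256
x = torch.randn(steps, batch, size)   # (seq_len, batch, input_size)

lstm = nn.LSTM(size, size)            # one fused call over the whole sequence
cell = nn.LSTMCell(size, size)        # one Python-level call per time step

t0 = time.perf_counter()
out, _ = lstm(x)
t1 = time.perf_counter()

h = torch.zeros(batch, size)
c = torch.zeros(batch, size)
for t in range(steps):
    h, c = cell(x[t], (h, c))
t2 = time.perf_counter()

print(f"nn.LSTM: {t1 - t0:.3f}s, nn.LSTMCell loop: {t2 - t1:.3f}s")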
I am trying to build an autoencoder that accepts sequences of variable length. My autoencoder looks like:
import torch.nn as nn

class SimpleAutoencoder(nn.Module):
    def __init__(self, input_size, hidden_size, n_layers=3):
        super(SimpleAutoencoder, self).__init__()
        self.n_layers = n_layers
        self.hidden_size = hidden_size
        self.gru_encoder = nn.GRU(input_size, hidden_size, n_layers, batch_first=True)
        self.gru_decoder = nn.GRU(input_size, hidden_size, n_layers, batch_first=True)
        self.h2o = nn.Linear(hidden_size, input_size)  # Hidden to output

    def encode(self, input):
        output, hidden = self.gru_encoder(input, None)
        return output, hidden

    def decode(self, input, hidden):
        output, hidden = self.gru_decoder(input, hidden)
        return output, hidden

    def h2o_apply(self, input):
        return self.h2o(input)
My training loop looks like:
one_hot_batch = list(map(lambda x: Variable(torch.FloatTensor(x)), one_hot_batch))
packed_one_hot_batch = pack_padded_sequence(pad_sequence(one_hot_batch, batch_first=True).cuda(),
                                            batch_lens, batch_first=True)
_, latent = vae.encode(packed_one_hot_batch)
outputs, _ = vae.decode(packed_one_hot_batch, latent)
packed = pad_packed_sequence(outputs, batch_first=True)
for string, length, index in zip(*packed, range(batch_size)):
    decoded_string_without_sos_symbol = vae.h2o_apply(string[1:length])
    loss += criterion(decoded_string_without_sos_symbol, real_strings_batch[index][1:])
loss /= len(batch)
Training in such a manner, as I understand it, is teacher forcing, because at the decoding stage the network is fed the real inputs (outputs, _ = vae.decode(packed_one_hot_batch, latent)). But for my task it leads to a situation where, at test time, the network can generate sequences very well only if I use the real symbols (as in training mode); if I instead feed the output of the previous step, the network generates rubbish (just infinite repetition of one specific symbol).
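For illustration, the free-running mode described above (feeding the previous prediction back in) could be sketched like this with the SimpleAutoencoder above. This is not the poster's code; it assumes a batch of one, a hidden state latent of shape (n_layers, 1, hidden_size), and a hypothetical sos_one_hot start-of-sequence vector of length input_size.

import torch

def free_run_decode(vae, latent, sos_one_hot, max_len):
    # Feed the previous prediction (as a hard one-hot) back in, one step at a time.
    inp = sos_one_hot.unsqueeze(0).unsqueeze(0)   # (1, 1, input_size)
    hidden = latent                               # (n_layers, 1, hidden_size)
    outputs = []
    for _ in range(max_len):
        out, hidden = vae.decode(inp, hidden)     # a single decoding step
        logits = vae.h2o_apply(out[:, -1])        # (1, input_size)
        outputs.append(logits)
        idx = logits.argmax(dim=-1)               # pick the predicted symbol
        inp = torch.zeros_like(inp)
        inp[0, 0, idx] = 1.0                      # next input = previous prediction
    return torch.cat(outputs, dim=0)              # (max_len, input_size)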
I tried another approach: I generated "fake" inputs (just ones) to make the model generate only from the hidden state.
one_hot_batch_fake = list(map(lambda x: torch.ones_like(x).cuda(), one_hot_batch))
packed_one_hot_batch_fake = pack_padded_sequence(pad_sequence(one_hot_batch_fake, batch_first=True).cuda(),
                                                 batch_lens, batch_first=True)
_, latent = vae.encode(packed_one_hot_batch)
outputs, _ = vae.decode(packed_one_hot_batch_fake, latent)
packed = pad_packed_sequence(outputs, batch_first=True)
It works, but very inefficiently: the quality of the reconstruction is very low. So, the second question: what is the right way to generate sequences from a latent representation?
I suppose a good idea is to apply teacher forcing with some probability, but for that, how can one use the nn.GRU layer so that the output of the previous step is the input for the next step?
