I am reading an example of using an RNN with TensorFlow here: ptb_word_lm.py
I can't figure out what embedding and embedding_lookup are doing here. How can they add another dimension to the tensor, going from (20, 25) to (20, 25, 200)? In this case (20, 25) is a batch size of 20 with 25 time steps. I can't understand how/why the hidden_size of the cell can become a dimension of the input data. Typically the input data would be a matrix of size [batch_size, num_features], and the model would map num_features ---> hidden_dims with a matrix of size [num_features, hidden_dims], yielding an output of size [batch_size, hidden_dims]. So how can hidden_dims be a dimension of the input tensor?
input_data, targets = reader.ptb_producer(train_data, 20, 25)
cell = tf.nn.rnn_cell.BasicLSTMCell(200, forget_bias=1.0, state_is_tuple=True)
initial_state = cell.zero_state(20, tf.float32)
embedding = tf.get_variable("embedding", [10000, 200], dtype=tf.float32)
inputs = tf.nn.embedding_lookup(embedding, input_data)
input_data_train # <tf.Tensor 'PTBProducer/Slice:0' shape=(20, 25) dtype=int32>
inputs # <tf.Tensor 'embedding_lookup:0' shape=(20, 25, 200) dtype=float32>
outputs = []
state = initial_state
for time_step in range(25):
    if time_step > 0:
        tf.get_variable_scope().reuse_variables()
    cell_output, state = cell(inputs[:, time_step, :], state)
    outputs.append(cell_output)
output = tf.reshape(tf.concat(1, outputs), [-1, 200])
outputs # list of 25: <tf.Tensor 'BasicLSTMCell/mul_2:0' shape=(20, 200) dtype=float32>
output # <tf.Tensor 'Reshape_2:0' shape=(500, 200) dtype=float32>
softmax_w = tf.get_variable("softmax_w", [config.hidden_size, config.vocab_size], dtype=tf.float32)
softmax_b = tf.get_variable("softmax_b", [config.vocab_size], dtype=tf.float32)
logits = tf.matmul(output, softmax_w) + softmax_b
loss = tf.nn.seq2seq.sequence_loss_by_example([logits], [tf.reshape(targets, [-1])],[tf.ones([20*25], dtype=tf.float32)])
cost = tf.reduce_sum(loss) / batch_size
OK, I'm not going to try to explain this specific code, but I will try to answer the "what is an embedding?" part of the title.
Basically, an embedding is a mapping of the original input data into a set of real-valued dimensions, and the "position" of each input within those dimensions is organised during training to improve the task.
In TensorFlow, imagine a text input field that takes the values "king", "queen", "girl" and "boy", and you use 2 embedding dimensions. Hopefully backprop will train the embedding to put the concept of royalty on one axis and gender on the other. In that case, what was a 4-value categorical feature gets "boiled down" to a floating-point embedding feature with 2 dimensions.
Embeddings are implemented as a lookup table, indexed either by hashing the original values or by a dictionary ordering. For a fully trained one, you might put in "queen" and get out, say, [1.0, 1.0]; put in "boy" and get out [0.0, 0.0].
TensorFlow backpropagates the error into this lookup table, so what starts off as a randomly initialised table gradually comes to look like the example above.
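As a toy sketch of that idea (the vocabulary, variable names and values here are made up purely for illustration, not taken from the question's code):
import tensorflow as tf
vocab = {"king": 0, "queen": 1, "boy": 2, "girl": 3}
# Trainable lookup table: 4 words, 2 embedding dimensions.
embedding = tf.get_variable("toy_embedding", [4, 2], dtype=tf.float32)
word_ids = tf.constant([vocab["queen"], vocab["boy"]])   # shape (2,)
word_vecs = tf.nn.embedding_lookup(embedding, word_ids)  # shape (2, 2)
# Gradients from whatever loss you attach flow back into this table, so the
# initially random rows drift towards something like [1.0, 1.0] for "queen"
# and [0.0, 0.0] for "boy" if that helps the task.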
Hope this helps. If not, look at: http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/
At simplest,
input_data: a batch of sequences of word IDs (shape (20, 25))
inputs: a batch of sequences of word embeddings (shape (20, 25, 200))
How does input_data become inputs, you might ask? This is what the word embedding lookup does. The easiest way to imagine it (see the sketch after these steps) is:
unwrap the input_data to a single batch of shape (20*25,).
Now assign a vector of size 200 for each element in that unwrapped input_data which gives you a matrix of shape (20*25,200).
Now, reshape the matrix to shape (20,25,200).
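A minimal sketch of these three steps, using the shapes from the question (the variable names are illustrative, and the manual version is only there to show what embedding_lookup does in one call):
import numpy as np
import tensorflow as tf
ids = tf.constant(np.random.randint(0, 10000, size=(20, 25)), dtype=tf.int32)
table = tf.get_variable("embedding_demo", [10000, 200], dtype=tf.float32)
# One call does it all:
looked_up = tf.nn.embedding_lookup(table, ids)   # (20, 25, 200)
# The same thing spelled out as the three steps above:
flat_ids = tf.reshape(ids, [-1])                 # (20*25,)
flat_vecs = tf.gather(table, flat_ids)           # (20*25, 200)
manual = tf.reshape(flat_vecs, [20, 25, 200])    # (20, 25, 200)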
The unwrapping works because embedding lookup is not a time-series process; you learn word embeddings with a feed-forward network. The next important question is: how do you learn the word embeddings?
Initialise a big TensorFlow variable of shape (vocabulary_size, 200) (i.e. embedding in the code).
Optimise the embedding so that a given word can predict any word from its context (e.g. in "dog barked at the mailman", if "at" is the target word, then "dog", "barked", "the" and "mailman" are context words).
This process gives you a vector (200 long in this example) for each word, such that semantics are preserved (i.e. the vector for "dog" is close to "cat", but far away from "pen"). A rough sketch of this training setup follows.
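A rough sketch of the context-prediction objective (skip-gram style, trained with NCE loss; the placeholder names, sizes and optimiser here are illustrative and not part of the PTB code above):
vocab_size, emb_dim, num_sampled = 10000, 200, 64
center_ids = tf.placeholder(tf.int32, [None])       # e.g. the word "at"
context_ids = tf.placeholder(tf.int64, [None, 1])   # e.g. "dog", "barked", ...
embedding = tf.get_variable("w2v_embedding", [vocab_size, emb_dim], tf.float32)
nce_w = tf.get_variable("nce_w", [vocab_size, emb_dim], tf.float32)
nce_b = tf.get_variable("nce_b", [vocab_size], tf.float32)
center_vecs = tf.nn.embedding_lookup(embedding, center_ids)   # (batch, 200)
loss = tf.reduce_mean(tf.nn.nce_loss(nce_w, nce_b, context_ids, center_vecs,
                                     num_sampled, vocab_size))
train_op = tf.train.GradientDescentOptimizer(1.0).minimize(loss)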
Related
I am trying to train an LSTM using two types of embedding layers. Let's say that the following is my tokenized sentence:
tokenized_sentence = ['I', 'am', 'trying', 'to', 'TARGET_TOKEN', 'my', 'way', 'home']
For the words surrounding "TARGET_TOKEN" I have an embedding layer of size (vocab_size x 128), while for the token 'TARGET_TOKEN' at index 4 I have an embedding layer of size (vocab_size x 512). So I need to transform the TARGET_TOKEN embedding from 512 to 128 dimensions and then insert this 128-dimensional vector at index 4 (the index will change depending on the feature) of the output of the surrounding-words embedding layer, before feeding the concatenated tensor to the LSTM. In my situation the positioning of the words/tokens is very important, so I do not wish to lose the position of the token 'TARGET_TOKEN' in the sentence.
Initially I was looking at how to reduce the size of the 512-dimensional embeddings, and I found that with numpy I could take the average of every 4 adjacent values, going from 512 dimensions down to 128. However, my understanding is that this might no longer represent the vectors correctly.
Let's call the token 'TARGET_TOKEN' "target_token" and the rest of the words "context_tokens". So instead, after further reading, I thought I could take the output of the target_token embedding layer and pass it through a Dense layer with 128 units (thus reducing its size to 128). Following this, I would concatenate the output of the Dense layer with the output of the context_tokens embedding layer. So far I know how to do this. My issue is that positioning is important, and it is important that my LSTM learns the target_token embedding with respect to its surrounding context. So, long story short, I need to concatenate at index 4 (maybe I'm looking at this the wrong way, but that's how I understand it).
However, the Concatenate layer in Keras does not have such a parameter, and I can only concatenate the two layers without taking the positioning into consideration.
My model will take three inputs:
input1 = target_token
input2 = context_tokens
input3 = target_token_index
and one output (as a sequence).
My code looks like this:
target_token_input = Input((1,))
sentence_input = Input((None,))
index_input = Input((1,), dtype="int32")
target_token_embedding_layer = Embedding(500, 512, weights=[], trainable=False)(target_token_input)
target_token_dense_layer = Dense(128, activation="relu")(target_token_embedding_layer)
context_embedding_layer = Embedding(self.vocab_size, 128, weights=[self.weight_matrix],
                                    trainable=False)(sentence_input)
concatenation_layer = Concatenate()([target_token_dense_layer, context_embedding_layer])
bidirectional = Bidirectional(LSTM(64, return_sequences=self.return_sequences, dropout=0.2, recurrent_dropout=0.2))(concatenation_layer)
normalization_layer = BatchNormalization()(bidirectional)
output_layer = TimeDistributed(Dense(self.output_size, activation=self.activation))(normalization_layer)
model = Model([target_token_input, sentence_input, index_input],[output_layer])
My expected result should be the following, where the numbers represent the dimensions of the token vectors.
original_tokens = ['I', 'am', 'trying', 'to', 'eng-12345', 'my', 'way', 'home']
vector_original_tokens = [128, 128, 128, 128, 512, 128, 128, 128]
post_concatenation_tokens = [128, 128, 128, 128, 128, 128, 128, 128]
Notice how at index 4 the embedding went from 512 to 128. I am looking into the possibility of transforming the tensor into a list, inserting the output of the target_token_embedding_layer into this list at the desired index and then transforming the list back to a tensor and using that tensor as input for the LSTM. However, I'm still trying to figure this out.
Does anyone know how to do this? Any help would be greatly appreciated!
Umm, the short answer is you can't. You could do it programmatically, where things would look like how you want them to look, but the Keras LSTM wouldn't understand it. The Keras LSTM needs to understand the connections between tokens. When you get a list of embeddings from one "universe" and try to mash it with embeddings from another "universe", things just don't work.
You will have to tokenize / embed all the words in the same dimensions for it to work. I'm assuming that the TARGET_TOKEN has different embeddings (512) compared to the rest of the dictionary? You could create a new embedding which is (128+512). However, since you mention that you reduced the original 512 embeddings to 128, my suggestion is that you should just go back to 512 embeddings.
Note how I mention the Keras LSTM: what I wrote is in response to the common, well-known LSTM layer with the standard 4 gates. If you are writing your own variation of the LSTM layer from scratch (say in Tensorflow or numpy), all bets are off.
I'm guessing you're trying to build some kind of network that fills in the blanks? What was your reason to switch from 512 embeddings to 128? Saving memory?
I am working to understand Erik Linder-Norén's implementation of the Categorical GAN model, and am confused by the generator in that model:
def build_generator(self):
    model = Sequential()
    # ...some lines removed...
    model.add(Dense(np.prod(self.img_shape), activation='tanh'))
    model.add(Reshape(self.img_shape))
    model.summary()

    noise = Input(shape=(self.latent_dim,))
    label = Input(shape=(1,), dtype='int32')
    label_embedding = Flatten()(Embedding(self.num_classes, self.latent_dim)(label))
    model_input = multiply([noise, label_embedding])
    img = model(model_input)

    return Model([noise, label], img)
My question is: How does the Embedding() layer work here?
I know that noise is a vector that has length 100, and label is an integer, but I don't understand what the label_embedding object contains or how it functions here.
I tried printing the shape of label_embedding to try and figure out what's going on in that Embedding() line but that returns (?,?).
If anyone could help me understand how the Embedding() lines here work, I'd be very grateful for their assistance!
To keep in mind why an embedding is used here at all: the alternative is to concatenate the noise with the conditioning class, which may cause the generator to completely ignore the noise values, generating data with high similarity within each class (or even just one sample per class).
From the documentation, https://keras.io/layers/embeddings/#embedding,
Turns positive integers (indexes) into dense vectors of fixed size.
eg. [[4], [20]] -> [[0.25, 0.1], [0.6, -0.2]]
In the GAN model, the input integer (0-9) is converted to a vector of shape 100. With this short code snippet, we can feed some test input to check the output shape of the Embedding layer.
from keras.layers import Input, Embedding
from keras.models import Model
import numpy as np
latent_dim = 100
num_classes = 10
label = Input(shape=(1,), dtype='int32')
label_embedding = Embedding(num_classes, latent_dim)(label)
mod = Model(label, label_embedding)
test_input = np.zeros((1))
print(f'output shape is {mod.predict(test_input).shape}')
mod.summary()
output shape is (1, 1, 100)
From the model summary, the output shape of the embedding layer is (None, 1, 100), which matches the output of predict.
embedding_1 (Embedding) (None, 1, 100) 1000
One additional point: in the output shape (1, 1, 100), the leftmost 1 is the batch size and the middle 1 is the input length. In this case, we provided an input of length 1.
The embedding stores the per label state. If I read the code correctly, each label corresponds to a digit; i.e. there is an embedding that captures how to generate a 0, 1, ... 9.
This code takes some random noise and multiplies it to this per label state. The result should be a vector that leads the generator to display the digit corresponding to the label (i.e. 0..9).
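To see concretely what multiply([noise, label_embedding]) produces, here is a small shape check in the same spirit as the snippet above (the values fed in are dummy; only the sizes come from the model):
import numpy as np
from keras.layers import Input, Embedding, Flatten, multiply
from keras.models import Model
latent_dim, num_classes = 100, 10
noise = Input(shape=(latent_dim,))
label = Input(shape=(1,), dtype='int32')
label_embedding = Flatten()(Embedding(num_classes, latent_dim)(label))  # (?, 100)
model_input = multiply([noise, label_embedding])                        # (?, 100)
m = Model([noise, label], model_input)
print(m.predict([np.random.randn(1, latent_dim), np.array([[3]])]).shape)  # (1, 100)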
So currently I'm sitting on a text-classification problem, but I can't even set up my model in TensorFlow. I have a batch of sentences of length 70 (using padding) and I'm using embedding_lookup with an embedding size of 300. Here is the code for the embedding:
embedding = tf.constant(embedding_matrix, name="embedding")
inputs = tf.nn.embedding_lookup(embedding, input_.input_data)
So now inputs should be of shape [batch_size, sentence_length, embedding_size], which is not surprising. Sadly, I'm getting a ValueError for my LSTMCell since it expects ndim=2 and obviously inputs has ndim=3. I have not found a way to change the expected input shape of the LSTM layer. Here is the code for my LSTMCell initialisation:
for i in range(num_layers):
    cells.append(LSTMCell(num_units, forget_bias, state_is_tuple,
                          reuse=reuse, name='lstm_{}'.format(i)))
cell = tf.contrib.rnn.MultiRNNCell(cells, state_is_tuple=True)
The error is triggered in the call function of the cell, that looks like this:
for step in range(self.num_steps):
    if step > 0: tf.get_variable_scope().reuse_variables()
    (cell_output, state) = cell(inputs[:, step, :], state)
Similar question but not helping: Understanding Tensorflow LSTM Input shape
I was able to solve the problem myself. As it turns out, the LSTMCell implementation is more hands-on and closer to how an LSTM actually works; the Keras LSTM layer takes care of things I have to handle myself when using TensorFlow directly. The example I'm using is from the following official TensorFlow example:
https://github.com/tensorflow/models/tree/master/tutorials/rnn/ptb
Since we want to feed our LSTM layer a sequence, we need to feed the cell one word after another. Because calling the cell produces two outputs (cell output and cell state), we loop over all words in all sentences, feeding the cell and reusing the cell state. This way we create the output of our layer, which we can then use for further operations. The code for this looks like this:
self._initial_state = cell.zero_state(config.batch_size, data_type())
state = self._initial_state
outputs = []
with tf.variable_scope("RNN"):
    for time_step in range(self.num_steps):
        if time_step > 0: tf.get_variable_scope().reuse_variables()
        (cell_output, state) = cell(inputs[:, time_step, :], state)
        outputs.append(cell_output)
output = tf.reshape(tf.concat(outputs, 1), [-1, config.hidden_size])
num_steps represents the number of words per sentence that we are going to use.
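For reference, tf.nn.dynamic_rnn can perform this per-time-step loop for you and accepts the 3-D (batch_size, num_steps, embedding_size) tensor directly. A rough equivalent of the loop above (this is not part of the PTB example itself):
outputs, state = tf.nn.dynamic_rnn(cell, inputs,
                                   initial_state=self._initial_state)
output = tf.reshape(outputs, [-1, config.hidden_size])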
I have the following idea to implement:
Input -> CNN-> LSTM -> Dense -> Output
The input has 100 time steps, and each step has a 64-dimensional feature vector.
A Conv1D layer will extract features at each time step. The CNN layer contains 64 filters, each of length 16 taps. Then a max-pooling layer will extract the single maximum value of each convolutional output, so a total of 64 features will be extracted at each time step.
Then the output of the CNN layer will be fed into an LSTM layer with 64 neurons. The number of recurrences equals the number of input time steps, i.e. 100. The LSTM layer should return a sequence of 64-dimensional outputs (sequence length == number of time steps == 100, so there should be 100*64 = 6400 numbers).
mfcc_input = Input(shape=(100, 64), dtype='float', name='mfcc_input')
CNN_out = TimeDistributed(Conv1D(64, 16, activation='relu'))(mfcc_input)
CNN_out = BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True)(CNN_out)
CNN_out = TimeDistributed(MaxPooling1D(pool_size=(64-16+1), strides=None, padding='valid'))(CNN_out)
LSTM_out = LSTM(64,return_sequences=True)(CNN_out)
... (more code) ...
But this doesn't work. The second line reports "list index out of range" and I don't understand what's going on.
I'm new to Keras, so I appreciate sincerely if anyone could help me with it.
(A picture here explained how the CNN should be applied to EACH TIME STEP.)
The problem is with your input. Your input is of shape (100, 64), in which the first dimension is the time steps. So, ignoring that, each time step presents an input of shape (64,) to the Conv1D.
Now refer to the Keras Conv1D documentation, which states that the input should be a 3D tensor (batch_size, steps, input_dim). Ignoring the batch size, your input should be a 2D tensor (steps, input_dim).
So you are providing a 1D tensor as input where a 2D tensor is expected. For example, if you were providing natural-language input to the Conv1D in the form of words, with 64 words in your sentence and each word encoded as a vector of length 50, your input should be (64, 50).
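As a quick shape check of that 3-D requirement (the sizes here are illustrative, matching the 64-words / 50-dimensional example above):
from keras.layers import Input, Conv1D
from keras.models import Model
words = Input(shape=(64, 50))                  # 64 steps (words), 50 features each
features = Conv1D(64, 16, activation='relu')(words)
Model(words, features).summary()               # output shape (None, 49, 64): 64-16+1 positions, 64 filters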
Also, make sure that you are feeding the right input to LSTM as given in the code below.
So, the correct code should be
embedding_size = 50 # Set this accordingly
mfcc_input = Input(shape=(100, 64, embedding_size), dtype='float', name='mfcc_input')
CNN_out = TimeDistributed(Conv1D(64, 16, activation='relu'))(mfcc_input)
CNN_out = BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True)(CNN_out)
CNN_out = TimeDistributed(MaxPooling1D(pool_size=(64-16+1), strides=None, padding='valid'))(CNN_out)
# Feeding CNN_out directly to the LSTM will also raise an error, since the 3rd dimension is 1; you need to purge it as follows:
CNN_out = Reshape((int(CNN_out.shape[1]), int(CNN_out.shape[3])))(CNN_out)
LSTM_out = LSTM(64,return_sequences=True)(CNN_out)
... (more code) ...
I've got a question about the TensorFlow LSTM implementation. There are currently several implementations in TF, but I use:
cell = tf.contrib.rnn.BasicLSTMCell(n_units)
where n_units is the amount of 'parallel' LSTM-Cells.
Then to get my output I call:
rnn_outputs, rnn_states = tf.nn.dynamic_rnn(cell, x, initial_state=initial_state,
                                            time_major=False)
where (as time_major=False) x is of shape (batch_size, time_steps, input_length)
where batch_size is my batch_size
where time_steps is the amount of timesteps my RNN will go through
where input_length is the length of one of my input vectors (vector fed into the network on one specific timestep on one specific batch)
I expect rnn_outputs to be of shape (batch_size, time_steps, n_units, input_length) as I have not specified another output size.
Documentation of nn.dynamic_rnn tells me that output is of shape (batch_size, input_length, cell.output_size).
The documentation of tf.contrib.rnn.BasicLSTMCell does have a property output_size, which is defaulted to n_units (the amount of LSTM-cells I use).
So does each LSTM-Cell only output a scalar for every given timestep? I would expect it to output a vector of the length of the input vector. This seems not to be the case from how I understand it right now, so I am confused. Can you tell me whether that's the case or how I could change it to output a vector of size of the input vector per single lstm-cell maybe?
I think the primary confusion is about the terminology of the LSTM cell's argument num_units. Unfortunately it does not mean, as the name might suggest, "the number of LSTM cells" (which would equal your number of time steps). It actually corresponds to the number of dimensions in the hidden state (the cell state and the hidden state vector).
The call to dynamic_rnn() returns a tensor of shape [batch_size, time_steps, output_size], where (please note this):
output_size = num_units, if num_proj is None in the LSTM cell;
output_size = num_proj, if it is defined.
Now, typically, you will extract the last time_step's result and project it to the size of output dimensions using a mat-mul + biases operation manually, or use the num_proj argument in the LSTM cell.
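A minimal sketch of that manual projection (n_classes and the variable names here are illustrative, not from your code):
last_output = rnn_outputs[:, -1, :]               # (batch_size, n_units)
proj_w = tf.get_variable("proj_w", [n_units, n_classes], tf.float32)
proj_b = tf.get_variable("proj_b", [n_classes], tf.float32)
logits = tf.matmul(last_output, proj_w) + proj_b  # (batch_size, n_classes)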
I have been through the same confusion and had to look really deep to get it cleared. Hope this answer clears some of it.