I am not sure why we have an output vector of size only 32 from the embedding layer, while the LSTM has 100 units?
What confuses me is this: if each word is only a 32-dimensional vector, and we feed that into the LSTM, shouldn't an LSTM of 32 units be big enough to hold it?
model.add(Embedding(5000, 32))
model.add(LSTM(100))
Those are hyper-parameters of your model, and there is no best way of setting them without experimentation. In your case, embedding single words into a vector of dimension 32 might be enough, but the LSTM processes a sequence of them and might require more capacity (i.e. more dimensions) to store information about multiple words. Without knowing the objective or the dataset, it is difficult to make an educated guess at what those parameters should be. Often we look at past research papers tackling similar problems, see what hyper-parameters they used, and then tune them via experimentation.
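As a hedged illustration (the unit counts and the binary classification head are assumptions for the sketch), note that the embedding dimension and the LSTM width are independent hyper-parameters:

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

model = Sequential()
model.add(Embedding(5000, 32))             # each of 5000 words becomes a 32-dim vector
model.add(LSTM(100))                       # 100 units summarize the whole sequence of vectors
model.add(Dense(1, activation='sigmoid'))  # e.g. a binary classification head (assumed task)

The LSTM reads one 32-dim vector per timestep, but its 100-dim state must accumulate information across all timesteps, which is why its width can usefully exceed the embedding size.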
I am building a text classifier using an RNN in PyTorch. The embeddings I'm using are GloVe. However, I am feeding variable-length index references into the model. This will lead to variable-length embeddings, which I take it will not work. How do I get around this and make the embedding output the same length for all sentences?
def forward(self, sentence):
    embeds = self.embedding(sentence)         # (seq_len, batch, embedding_dim)
    hidden = self.init_hidden(size)           # fresh hidden state for this pass
    output, hidden = self.rnn(embeds, hidden)
    out = self.hidden2out(output)
    return out
Also, if someone could tell me how to choose the hidden layer size that would be great.
Your data in this case can be represented in PyTorch as having shape [seq_len, batch, vocab_size]. When you create a tensor for your model this way, the batch size is up to you, and the sequence length is variable.
When you employ an embedding, such as GloVe or word2vec, what you're doing is condensing each word's representation (from a vocabulary-sized one-hot vector of many thousands of dimensions down to a few hundred) to make it more information-dense and to give your model some contextual information about each word. So you can see that the output from an embedding layer will be of shape [seq_len, batch, embedding_size]. Therefore, it doesn't matter how long your sequences are at this stage, because the output per timestep is sequence-length-independent.
Without knowing your embedding layer, I just have to assume that it's operating along the correct dimensions. If it is not, a couple of transpose operations to ensure that it is will do the trick.
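If you do need equal-length tensors to stack sentences into a batch, padding is the usual workaround. A minimal PyTorch sketch, assuming a list of variable-length index tensors (all sizes here are hypothetical):

import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence

sentences = [torch.tensor([4, 8, 15]), torch.tensor([16, 23, 42, 7, 9])]  # two dummy sentences
padded = pad_sequence(sentences)          # (max_seq_len, batch) = (5, 2), zero-padded at the end
embedding = nn.Embedding(num_embeddings=10000, embedding_dim=300)  # GloVe-sized dims assumed
embeds = embedding(padded)                # (5, 2, 300): one fixed-size vector per position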
The only way to gain intuition about the hidden layer size (I'm assuming you're not talking about your h_0 input to your RNN) and the number of hidden layers is, unfortunately, to try things out. I can't really even give you a guess, because I have no idea what dataset you're using or what problem you're trying to solve. There is no one-size-fits-all policy in machine learning!
I have 50 time series, each having at least 500 data points (some series have as many as 2000+ data points). All the time series go from a value of 1.089 down to 0.886, so you can see that the resolution of each dataset is on the order of 10^-4, i.e. the data looks something like:
1.079299, 1.078809, 1.078479, 1.078389, 1.078362,... and so on in a decreasing fashion from 1.089 to 0.886 for all 50 time-series.
My questions hence, are:
Can LSTMs handle such dense data?
In order to avoid overfitting, what can be the suggested number of epochs, timesteps per batch, batches, hidden layers and neurons per layer?
I have been struggling with this for more than a week, and no other source that I could find talks about this specific case, so it could also help others.
A good question, and I can understand why you did not find many explanations: there are many tutorials covering the basic concepts and aspects, but not necessarily custom problems like this.
You have 50 time series. However, the sampling frequency is not the same for each one. You have to interpolate in order to reach the same number of samples for each time series if you want to construct the dataset properly.
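A sketch of that interpolation step (the target length of 500, matching the shortest series, is just an assumption):

import numpy as np

def resample(series, target_len=500):
    # Linearly interpolate a 1-D series onto target_len evenly spaced points
    old_grid = np.linspace(0.0, 1.0, num=len(series))
    new_grid = np.linspace(0.0, 1.0, num=target_len)
    return np.interp(new_grid, old_grid, series)

# e.g. dataset = np.stack([resample(s) for s in all_50_series])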
LSTMs can handle such dense data. It can be both a classification and a regression problem, neural networks can adapt to such situations.
In order to avoid overfitting (LSTMs are very prone to it), the first major aspect to consider is the number of hidden layers and the number of units per layer. People tend to use 256-512 units by default, since those sizes suit Natural Language Processing, where you process huge datasets. In my experience, for simpler regression/classification problems you do not need such a big number; it will only lead to overfitting on smaller problems.
Therefore, taking (1) and (2) into consideration, start with an LSTM/GRU of 32 units and then the output layer. If you do not see good results, add another layer (64 units first, 32 second) and then the output layer.
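A minimal Keras sketch of that progression (the input shape and the single-unit regression head are assumptions):

from keras.models import Sequential
from keras.layers import LSTM, Dense

timesteps, features = 500, 1    # e.g. 500 samples per window, 1 value per step (assumed)

# Start small: one recurrent layer of 32 units, then the output layer
model = Sequential()
model.add(LSTM(32, input_shape=(timesteps, features)))
model.add(Dense(1))

# If results are poor, deepen: 64 units feeding 32, then the output layer
# model.add(LSTM(64, return_sequences=True, input_shape=(timesteps, features)))
# model.add(LSTM(32))
# model.add(Dense(1))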
Admittedly, timesteps per batch is of crucial importance. This cannot be determined here; you have to iterate manually through values and see what yields the best results. I assume you create your dataset in a sliding-window manner (sketched below); consider the window size another hyperparameter to tune before arriving at the batch and epoch ones.
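For concreteness, a sliding-window construction might look like the following sketch; the window size of 50 is an arbitrary starting value, not a recommendation:

import numpy as np

def make_windows(series, window_size):
    # Turn a 1-D series into (samples, window_size) inputs and next-step targets
    X, y = [], []
    for i in range(len(series) - window_size):
        X.append(series[i:i + window_size])
        y.append(series[i + window_size])
    return np.array(X), np.array(y)

# X, y = make_windows(series, window_size=50)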
I often see natural language processing tasks use an LSTM in a way where an embedding layer is followed by an LSTM layer of the same size as the embedding, i.e. if a word is represented by a 1x300 vector, LSTM(300) is used.
E.g.:
model = Sequential()
model.add(Embedding(vocabulary, hidden_size, input_length=num_steps))
model.add(LSTM(hidden_size, return_sequences=True))
Is there a particular reason for doing so? Like a better representation of the meaning?
I don't think there's any special reason or need for this, and frankly I haven't seen that many cases myself where it is done (i.e. setting the LSTM hidden units equal to the embedding size). The only effect it has is that there's a single memory cell for each embedding vector element (which I don't think is a requirement or a necessity).
Having said that, I thought I might mention something extra: there is a very good reason for having an Embedding layer in this setup at all. Let's consider the two options:
1. Using one-hot encoding for word representations
2. Using embeddings for word representations
Option 2 has several advantages over option 1.
The dimensionality of the inputs is way smaller when you use an embedding layer (e.g. 300 as opposed to 50000)
You're giving the model the flexibility to learn a word representation that is in fact suited to the task you're solving. In other words, you are not restricting the representation of words to remain constant during the training process.
If you use pretrained word embeddings to initialize the Embedding layer, even better: you are bringing word semantics into the task you are solving, and that always helps. This is analogous to asking a toddler who doesn't understand the meaning of words to do something text-related (e.g. order words into the correct grammatical order) versus asking a 3-year-old to do the same task. Both might eventually do it, but one will do it quicker and better.
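As a sketch of that initialization in Keras (the sizes are hypothetical, and the zero matrix is a stand-in for vectors you would actually load from a GloVe file):

import numpy as np
from keras.layers import Embedding

vocab_size, embed_dim = 10000, 300                    # hypothetical sizes
embedding_matrix = np.zeros((vocab_size, embed_dim))  # stand-in: fill from a GloVe file in practice

embedding_layer = Embedding(
    input_dim=vocab_size,
    output_dim=embed_dim,
    weights=[embedding_matrix],  # start from pretrained word semantics
    trainable=True,              # allow fine-tuning to the task, keeping option 2's flexibility
)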
I am working on a sequence prediction problem where my inputs are of size (numOfSamples, numOfTimeSteps, features), where each sample is independent, the number of time steps is uniform for each sample (after pre-padding the length with 0's using keras.pad_sequences), and my number of features is 2. To summarize my question(s): I am wondering how to structure my Y-label dataset to feed the model, and I want to gain some insight on how to properly structure my model to output what I want.
My first feature is a categorical variable encoded as a unique int, and my second is numerical. I want to be able to predict the next categorical value as well as an associated feature2 value, and then feed this back into the network to predict a sequence until the EOS category is output.
This is the main source I've been referencing to try to understand how to create a generator for use with keras.fit_generator.
[1]
There is no confusion about how the mini-batch of "X" data is grabbed, but for the "Y" data I am not sure about the proper format for what I am trying to do. Since I am trying to predict a category, I figured a one-hot vector representation of the t+1 timestep would be the proper way to encode the first feature, which I guess results in a 4-dimensional numpy array? But I'm kinda lost on how to deal with the second, numerical feature.
Now, this leads me to questions concerning architecture and how to structure a model to do what I am wanting. Does the following architecture make sense? I believe there is something missing that I am not understanding.
Proposed architecture (parameters loosely filled in, nothing set yet):
model = Sequential()
model.add(Masking(mask_value=0., input_shape=(timesteps, features)))
model.add(LSTM(hidden_size, return_sequences=True))
model.add(TimeDistributed(Dense(vocab_size)))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['categorical_accuracy'])
model.fit_generator(...) # I'll figure this out
So, at the end, a softmax activation can predict the next categorical value for feature1. How do I also output a value for feature2 so that I can feed the new prediction for both features back as the next time-step? Do I need some sort of parallel architecture with two LSTMs that are combined somehow?
This is my first attempt at doing anything with neural networks or Keras, and I would not say I'm "great" at python, I can get by though. However, I feel I have a decent grasp at the fundamental theoretical concepts, but lack the practice.
This question is sorta open ended, with encouragement to pick apart my current strategy.
Once again, the overall goal is to predict both features (categorical, numeric) in order to predict "full sequences" from intermediate length sequences.
Ex. I train on these padded max-len sequences, but in production I want to use this to predict the remaining part of the currently unseen time-steps, which would be variable length.
Okay, so if I understand you properly (correct me if I'm wrong), you would like to predict the next features based on the current ones.
When it comes to the categorical variable, you are on point: your Dense layer should output an N-1 vector containing the probability of each class (while we are at it, if you happen to use pandas.get_dummies, remember to specify the argument drop_first=True; a similar approach should be employed with whatever you use for one-hot encoding).
Besides that N-1 vector for each sample, the network should output one more number for the numerical value.
Remember to output logits (no activation; don't use softmax at the end like you currently do). Afterwards, the network output should be separated into the N-1 part (your categorical feature) and passed to a loss function able to handle logits (e.g. in TensorFlow it is tf.nn.softmax_cross_entropy_with_logits_v2, which applies a numerically stable softmax for you).
Now, the N-th element of the network output should be passed to a different loss, probably Mean Squared Error.
Based on the values of those two losses (you could take the mean of both to obtain a single loss value), you backpropagate through the network, and it might do just fine.
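A minimal sketch of that loss wiring in PyTorch (all shapes and the random stand-in tensors are hypothetical; the sketch is simplified to use a full set of class logits with CrossEntropyLoss, PyTorch's logit-handling counterpart of the TensorFlow function above):

import torch
import torch.nn as nn

# Hypothetical shapes: batch of 8, 4 class logits + 1 numerical value per sample
batch, n_classes = 8, 4
output = torch.randn(batch, n_classes + 1, requires_grad=True)  # stand-in for the network's raw output
logits, value = output[:, :n_classes], output[:, -1]

class_targets = torch.randint(0, n_classes, (batch,))  # dummy category indices
value_targets = torch.randn(batch)                     # dummy feature2 targets

# CrossEntropyLoss consumes raw logits (it applies log-softmax internally)
loss = (nn.CrossEntropyLoss()(logits, class_targets)
        + nn.MSELoss()(value, value_targets)) / 2      # mean of the two losses
loss.backward()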
Unfortunately I'm not skilled enough in Keras to help you with the code, but I think you will figure it out yourself. While we're at it, I would like to suggest PyTorch for more custom neural networks (I think yours fits this description), though it's definitely doable in Keras as well; your choice.
Additional 'maybe helpful' thought: you may want to check out Teacher Forcing for your kind of task. More on the topic and the theory behind it can be found in the outstanding Deep Learning Book, and a code example (though in PyTorch once again) can be found in their docs.
BTW, interesting idea; mind if I use it in connection with my current research trajectory (with kudos going to you, of course)? Comment on this answer if so, and we can talk it out in chat.
Basically every answer I was looking for was exemplified and explained in this tutorial, an absolutely great resource for understanding how to model multi-output networks. It goes through a lengthy walkthrough of a multi-output CNN architecture. It only took me about three weeks to stumble upon it, however.
https://www.pyimagesearch.com/2018/06/04/keras-multiple-outputs-and-multiple-losses/
I have recently started working on ECG signal classification into various classes. It is basically a multi-label classification task (4 classes in total). I am new to Deep Learning, LSTMs, and Keras, which is why I am confused about a few things.
I am thinking about giving the normalized original signal as input to the network; is this a good approach?
I also need to understand the training input shape for an LSTM, as ECG signals are of variable length (9000 to 18000 samples) and a classifier usually needs fixed-size input. How can I handle this type of input in the case of an LSTM?
Finally, what should the structure of a deep LSTM network be for such lengthy input, and how many layers should I use?
I am thinking about giving the normalized original signal as input to the network; is this a good approach?
Yes, this is a good approach. It is actually quite standard in Deep Learning to feed algorithms normalized or rescaled inputs.
This usually helps your model converge faster, as you are now working within a smaller range (e.g. [-1, 1]) instead of the larger, un-normalized range of your original input (say [0, 1000]). It also helps you get better, more precise results, as it mitigates problems like vanishing gradients and plays better with modern activation functions and optimizers.
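For instance, a min-max rescaling into [-1, 1] might look like this sketch (the target range is a common choice, not a requirement):

import numpy as np

def rescale(signal, lo=-1.0, hi=1.0):
    # Min-max normalize a 1-D signal into [lo, hi]
    s_min, s_max = signal.min(), signal.max()
    return lo + (signal - s_min) * (hi - lo) / (s_max - s_min)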
I also need to understand the training input shape for an LSTM, as ECG signals are of variable length (9000 to 18000 samples) and a classifier usually needs fixed-size input. How can I handle this type of input in the case of an LSTM?
This part is really important. You are correct: an LSTM expects to receive inputs with a fixed shape, one that you know beforehand (in fact, any Deep Learning layer expects fixed-shape inputs). This is also explained in the Keras docs on Recurrent Layers, where they say:
Input shape
3D tensor with shape (batch_size, timesteps, input_dim).
As we can see, it expects your data to have a number of timesteps as well as a dimension for each of those timesteps (the batch size is usually 1). To exemplify, suppose your input data consists of elements like [[1,4],[2,3],[3,2],[4,1]]. Then, using a batch_size of 1, the shape of your data would be (1, 4, 2), as you have 4 timesteps, each with 2 features.
So, bottom line: you have to make sure that you pre-process your data so it has a fixed shape that you can then pass to your LSTM layers. This one you will have to figure out yourself, as you know your data and problem better than we do.
Maybe you can fix the number of samples you take from each signal, discarding some and keeping others so every signal is the same length (since your signals are between 9k and 18k samples, choosing 9000 could be the logical choice, discarding the extra samples from the longer ones). You could even apply some other transformation that maps inputs of 9000-18000 samples to a fixed size.
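One hedged way to implement that: truncate longer recordings and zero-pad shorter ones to a fixed target length (9000 here, following the reasoning above):

import numpy as np

def to_fixed_length(signal, target_len=9000):
    # Truncate or zero-pad a 1-D signal to exactly target_len samples
    if len(signal) >= target_len:
        return signal[:target_len]
    return np.pad(signal, (0, target_len - len(signal)))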
Finally, what should the structure of a deep LSTM network be for such lengthy input, and how many layers should I use?
This one is really quite broad and doesn't have a unique answer. It would depend on the nature of your problem, and determining those parameters a priori is not so straightforward.
What I recommend you do is to start with a simple model first, and then add layers and blocks (neurons) incrementally until you are satisfied with the results.
Try just one hidden layer first, train and test your model and check your performance. You can then add more blocks and see if your performance improved. You can also add more layers and check for the same until you are satisfied.
This is a good way to create Deep Learning models, as you will arrive at the results you want while keeping your network as lean as possible, which in turn helps with execution time and complexity. Good luck with your coding; I hope you find this useful.