Can LSTMs handle incredibly dense time-series data? - python

I have 50 time series, each having at least 500 data points (some series have as many as 2000+ data points). All the time series go from a value of 1.089 down to 0.886, so the resolution of each dataset is on the order of 10^-4, i.e. the data looks something like:
1.079299, 1.078809, 1.078479, 1.078389, 1.078362, ... and so on, decreasing from 1.089 to 0.886 for all 50 time series.
Hence, my questions are:
Can LSTMs handle such dense data?
To avoid overfitting, what would be a suggested number of epochs, timesteps per batch, batches, hidden layers, and neurons per layer?
I have been struggling with this for more than a week, and no other source that I could find talks about this specific case, so it could also help others.

A good question, and I can understand why you did not find many explanations: most tutorials cover the basic concepts and aspects, not custom problems like this one.
You have 50 time series, but the sampling frequency is not the same for each one. If you want to construct the dataset properly, you have to interpolate so that every series has the same number of samples.
LSTMs can handle such dense data. It can be framed as either a classification or a regression problem; neural networks adapt to both.
To avoid overfitting (LSTMs are very prone to it), the first major aspect to consider is the number of hidden layers and the number of units per layer. People tend to use 256-512 units by default because those sizes are suitable in Natural Language Processing, where the datasets are huge. In my experience, simpler regression/classification problems do not need such large numbers; they only lead to overfitting on smaller problems.
Therefore, taking the points above into consideration, start with a single LSTM/GRU layer with 32 units followed by the output layer. If you do not get good results, add another layer (64 units first, then 32) before the output layer.
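As a rough sketch of that starting point (assuming a regression setup, a sliding window of 30 timesteps and a single feature per timestep; all of these are placeholders to adjust to your data):

from tensorflow import keras

window_size = 30   # placeholder; treat the window size as a hyperparameter
n_features = 1     # one value per timestep in your series

model = keras.Sequential([
    keras.layers.Input(shape=(window_size, n_features)),
    keras.layers.LSTM(32),    # start small: a single layer with 32 units
    keras.layers.Dense(1),    # regression output; swap for softmax if classifying
])
model.compile(optimizer='adam', loss='mse')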
Admittedly, the number of timesteps per batch is of crucial importance. It cannot be determined here; you have to iterate over values manually and see what yields the best results. I assume you create your dataset in a sliding-window manner; treat the window size as a hyperparameter as well, to tune before you settle on batch size and number of epochs.
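If it helps, here is a minimal NumPy sketch of that sliding-window construction (the function name and the horizon parameter are illustrative, not from the question):

import numpy as np

def make_windows(series, window_size, horizon=1):
    # series: 1-D array holding one time series.
    # Returns X with shape (n_windows, window_size, 1) and y with shape (n_windows,).
    X, y = [], []
    for start in range(len(series) - window_size - horizon + 1):
        X.append(series[start:start + window_size])
        y.append(series[start + window_size + horizon - 1])
    return np.array(X)[..., np.newaxis], np.array(y)

# e.g. X, y = make_windows(my_series, window_size=30)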

Related

How to arrange multiple multivariate time series of different length before passing it to Keras LSTM layer

I have a number of multivariate time series that are produced by the same kind of process but:
are of significantly different lengths;
each time series is an independent instance, and the measurements are taken at different, quite random timestamps;
each time series is related at every timestamp to two targets.
In other words:
each time series has a shape of (n_timestamps, n_features)
each target series has a shape of (n_timestamps, 2).
To give an example, this could be treated as stocks of different companies, each described by a few features, where the targets at a given timestamp are the probabilities that the final price at the end of the year will be higher than x, except that we learn them directly from magically given ground-truth probabilities (instead of observed 0/1 responses).
I want to be able to predict the target at each time point and I wanted to give RNNs a try. However, I'm having issues with figuring out how I should arrange the data before passing it to Keras LSTM layers. The main things I'm wondering about are:
I want my RNN to use data starting from the beginning of the series to make a prediction at time t, not only the last k timestamps. I can't really use the whole history directly without exploding the gradient (it's too long), therefore I need a way to "remember" previously learned weights even though in reality my RNN will loop over the last k timestamps.
Each time series has a different length, so I'm unsure how to make things compatible with each other. I'm aware of padding as an option, but since the difference in length between examples can be as significant as 1000 vs 3000, this will result in many training examples that consist only of the padding value.
Since measurements are taken at different timestamps, I believe it may affect my network in the sense that it can't really learn that e.g. the last 10 timestamps are the most important. Or even if it can, these last 10 timestamps will have different lengths in reality for each input time series... How big a problem is this? Should I start by resampling all examples to the same time points (e.g. by interpolating)?
My current thinking is that:
I can pad each of my example sequences to the same length (max(n_timestamps))
Create batches of short sequences of length k, where k represents the length of the loop of the RNN layer. In consequence, assuming I have 200 example sequences and the longest one has 3000 timestamps and my selected k is 50, it would result in 3000/50=60 batches of (200, 50) shape. Or should I make 3000-1 batches where one batch differs from the next one only by one timestamp (i.e. while the first batch has timestamps from 1 to 50, the next batch has timestamps from 2 to 51, etc.)?
Since padding was used, I would need to use a Masking layer. Some (quite many) of the rows in the prepared batches would consist of inputs that should be ignored completely (as they would contain only the padding value for all 50 elements).
Is this the correct way to prepare the data for my problem? Can it be done better, so as not to introduce bottlenecks such as learning from examples that contain only the padding value (which should be ignored by the masking layer)? Or how else can I prepare the data to address points 1, 2 and 3 described above?
each time series has a shape of (n_timestamps, n_features)
each target series has a shape of (n_timestamps, 2).
Okay, this is pretty standard so far.
I want my RNN to use data starting from the beginning of the series to make a prediction at time t, not only the last k timestamps. I can't really use the whole history directly without exploding the gradient (it's too long), therefore I need a way to "remember" previously learned weights even though in reality my RNN will loop over the last k timestamps.
Check and make sure you actually need this. An RNN (or a Transformer) could use any or all of the history that you give it, but that assumes the history is actually useful for the predictions you're making.
I'd try training on standard-sized random clips of the data (like in this tutorial). I'd retrain it a few times with longer and longer clips and see if the model's performance plateaus before I run out of memory.
But in Keras it is relatively simple to do exactly the thing you're asking.
Keras RNNs (LSTM, GRU) have the return_state argument. It allows you to run the model over part of a sequence, pause, execute a training step, and then continue running exactly where you left off.
(The stateful argument is another mechanism that provides a similar effect.)
The code ends up looking something like this:
import tensorflow as tf
from tensorflow import keras

class MyModel(keras.Model):
    ...
    def train_step(self, args):
        inputs, labels = args
        state = self.get_initial_state()
        # Walk through the sequence 100 steps at a time, carrying the RNN state
        # across chunks and running one gradient update per chunk. This assumes
        # call() accepts and returns the state and that self.loss is callable.
        while tf.shape(inputs)[1] != 0:
            in_slice, inputs = inputs[:, :100], inputs[:, 100:]
            label_slice, labels = labels[:, :100], labels[:, 100:]
            with tf.GradientTape() as tape:
                result, state = self(in_slice, state)
                loss = self.loss(label_slice, result)
            vars = self.trainable_variables
            grads = tape.gradient(loss, vars)
            self.optimizer.apply_gradients(zip(grads, vars))
It may also be possible to use ForwardAccumulator to collect the gradients. In that case you wouldn't need to cut the sequences into chunks, because the memory used by the forward accumulator doesn't grow with sequence length. I've never tried it, so I don't have example code.
Each time series has a different length, so I'm unsure how to make things compatible with each other. I'm aware of padding as an option, but since the difference in length between examples can be as significant as 1000 vs 3000, this will result in many training examples that consist only of the padding value.
That might be okay, just inefficient. You can make batches of similar sequence lengths using Dataset.bucket_by_sequence_length.
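A sketch of how that could look; the toy generator, feature counts and bucket boundaries below are placeholders standing in for your real data pipeline:

import tensorflow as tf

# Toy stand-in for the real data: (sequence, target) pairs of different
# lengths, with shapes (n_timestamps, n_features) and (n_timestamps, 2).
def gen():
    for n in (800, 1700, 2900):                       # illustrative lengths
        yield tf.random.normal([n, 4]), tf.random.normal([n, 2])

dataset = tf.data.Dataset.from_generator(
    gen,
    output_signature=(tf.TensorSpec([None, 4], tf.float32),
                      tf.TensorSpec([None, 2], tf.float32)),
)

# Group sequences of similar length so each batch carries less padding.
batched = dataset.bucket_by_sequence_length(
    element_length_func=lambda x, y: tf.shape(x)[0],
    bucket_boundaries=[1000, 2000],                   # illustrative cut-offs
    bucket_batch_sizes=[8, 4, 2],                     # one more entry than boundaries
)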
Since measurements are taken at different timestamps, I believe it may affect my network in the sense that it can't really learn that e.g. the last 10 timestamps are the most important. Or even if it can, these last 10 timestamps will have different lengths in reality for each input time series... How big a problem is this? Should I start by resampling all examples to the same time points (e.g. by interpolating)?
Interpolating to a fixed rate might be a reasonable thing to try if it doesn't make your data too much longer. Just think carefully about making predictions on interpolated values: there is some data leaking back in time from a future measurement.
Another approach would be to make the size of the time step a feature. If each input is tagged with how long it has been since the previous input, the model can learn how to handle small or large steps.
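A minimal sketch of that idea (hypothetical names; timestamps is the 1-D array of measurement times for one series and features its (n_timestamps, n_features) matrix):

import numpy as np

def add_time_delta_feature(timestamps, features):
    # Time elapsed since the previous measurement; 0 for the first one.
    deltas = np.diff(timestamps, prepend=timestamps[0])
    # Append the step size as one extra feature column.
    return np.concatenate([features, deltas[:, np.newaxis]], axis=1)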
I can pad each of my example sequences to the same length (max(n_timestamps))
Yes. Pad, or make clips of a fixed size.
Create batches of short sequences of length k, where k represents the length of the loop of the RNN layer. In consequence, assuming I have 200 example sequences and the longest one has 3000 timestamps and my selected k is 50, it would result in 3000/50=60 batches of (200, 50) shape.
That would line up with the code example I gave.
Or should I make 3000-1 batches where one batch differs from the next one only by one timestamp
Either way is fine. But if you want to carry the state over from batch to batch (I'm skeptical that you actually need the carry over) then you need to do them chunk by chunk, not by single-stepping your window.
Since padding was used, I would need to use a Masking layer. Some (quite many) of the rows in the prepared batches would consist of inputs that should be ignored completely (as they would contain only the padding value for all 50 elements).
Yeah, that'll be wasted computation, but it won't hurt anything.
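For reference, a minimal way to wire the masking up in Keras; the mask value, unit count and feature count are illustrative:

from tensorflow import keras

n_features = 4                                         # illustrative
model = keras.Sequential([
    keras.layers.Input(shape=(None, n_features)),      # variable-length sequences
    keras.layers.Masking(mask_value=0.0),              # skip padded timesteps
    keras.layers.LSTM(32, return_sequences=True),      # output at every timestep
    keras.layers.Dense(2),                             # two targets per timestep
])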

How to force RNN output to become smoother?

I am using experimental data with a Keras LSTM to model a complicated physical system. The problem is that the output value tends to change drastically between two consecutive points in certain places. All physical systems must show some continuous/smooth behavior. How can I make my output smoother? Is there some kind of layer or regularization for this?
I tried introducing l1-l2 regularization and drop-outs... They help, but I could not get good results. What I seek is some kind of layer which limits sudden changes in the values. By the way, I work with a rather small amount of data; I am using 2 series to train and validate, and 1 to test.
Network structure: I get similar results for 2 LSTM + 1 Dense layer or 1 LSTM + 1 Dense layer. (With/without dropout layers between LSTM and Dense, and some l2 regularization)
The time-series data represents some measurements. Measurements are taken in short intervals, which results in repeated values from time to time. I remove some of the repeated lines as well. (I concatenated the series together and then removed duplicate rows with respect to one of the inputs; I tried doing this for several inputs. But as you can understand, I did not remove all the repeated lines with this approach; could this be the source of the problem?)
I use sklearn.StandardScaler or sklearn.MinMaxScaler to normalize the input data, not much difference between the two.
You can see a sample result on test data, which has l2 regularization - please note the first two peaks at the start. There are around 20,000 points in the graph and these peaks occur over 3-5 points. In the training set there are some jumps as well, but they are far smoother and more spread out. Is there some way to smooth the output within the neural net, without adding external filters?

On training LSTMs efficiently but well, parallelism vs training regime

For a model that I intend to use to spontaneously generate sequences, I find that training it sample by sample and keeping the state in between feels most natural. I've managed to construct this in Keras after reading many helpful resources. (SO: Q and two fantastic answers, Machine Learning Mastery 1, 2, 3)
First a sequence is constructed (in my case one-hot encoded too). X and Y are produced from this sequence by shifting Y forward one time step. Training is done in batches of one sample and one time step.
For Keras this looks something like this:
from keras.models import Sequential
from keras.layers import LSTM, Dense

data = get_some_data()  # Shape (samples, features)
Y = data[1:, :]  # Shape (samples-1, features)
X = data[:-1, :].reshape((-1, 1, data.shape[-1]))  # Shape (samples-1, 1, features)

model = Sequential()
model.add(LSTM(256, batch_input_shape=(1, 1, X.shape[-1]), stateful=True))
model.add(Dense(Y.shape[-1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['categorical_accuracy'])

# One pass over the full sequence per epoch, resetting the LSTM state in between.
for epoch in range(10):
    model.fit(X, Y, batch_size=1, shuffle=False)
    model.reset_states()
It does work. However, after consulting my Task Manager, it seems it's only using ~10% of my GPU resources, which are already quite limited. I'd like to improve this to speed up training. Increasing the batch size would allow for parallel computations.
The network in its current state presumably "remembers" things even from the start of the training sequence. For training in batches one would need to first set up the sequence and then predict one value - and do this for multiple values. To train on the full sequence one would need to generate data in the shape (samples-steps, steps, features). I imagine it wouldn't be uncommon to have a sequence spanning at least a couple of hundred time steps. So that would mean a huge increase in the data amount.
Between framing the problem a bit differently and requiring more data to be stored in memory, and utilising only a small amount of processing resources, I must ask:
Is my intuition of the natural way of training and statefulness correct?
Are there other downsides to this training with one sample per batch?
Could the utilisation issues be resolved any other way?
Finally, is there an accepted way of performing this kind of training to generate long sequences?
Any help is greatly appreciated, I'm fairly new with LSTMs.
I do not know your specific application; however, sending in only one timestep of data is surely not a good idea. You should instead give the LSTM the entire sequence of previously seen one-hot vectors (presumably words), and pre-pad with zeros if necessary, since it appears you are working with sequences of varying length. Also consider using an Embedding layer before your LSTM if these are indeed words. Read the documentation carefully.
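A rough sketch of what that could look like, assuming integer-encoded tokens (so an Embedding layer can be used) and toy data in place of your real sequences:

import numpy as np
from tensorflow import keras

vocab_size = 100                                        # illustrative vocabulary size
max_len = 50                                            # illustrative sequence length

# Toy integer-encoded sequences of varying length, pre-padded with zeros.
raw = [np.random.randint(1, vocab_size, size=n) for n in (20, 35, 50)]
X = keras.preprocessing.sequence.pad_sequences(raw, maxlen=max_len, padding='pre')
# Toy next-token targets, one per sequence, as one-hot vectors.
Y = keras.utils.to_categorical(np.random.randint(0, vocab_size, size=len(raw)),
                               num_classes=vocab_size)

model = keras.Sequential([
    keras.layers.Embedding(vocab_size, 32, mask_zero=True),  # ignores the zero padding
    keras.layers.LSTM(256),
    keras.layers.Dense(vocab_size, activation='softmax'),
])
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit(X, Y, batch_size=2, epochs=1)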
Your GPU utilization being low is not a problem in itself; you simply do not have enough data in each batch to fully utilize the hardware. Training batch after batch is a sequential process, and there is no straightforward way to parallelize it, at least not one that is introductory and beneficial to what your goals appear to be. If you do give the LSTM more data per timestep, however, this will surely increase your utilization.
stateful in an LSTM does not do what you think it does. An LSTM always remembers the sequence it is iterating over as it updates its internal hidden states, h and c. Furthermore, the weight transformations that "build" those internal states are learned during training. What stateful does is preserve the previous hidden state at each batch index. Meaning, the final hidden state of the third element in one batch is passed as the initial hidden state for the third element of the next batch, and so on. I do not believe this is useful for your application.
There are downsides to training the LSTM with one sample per batch. In general, training with mini-batches increases stability. However, you appear not to be training with one sample per batch but instead with one timestep per sample.
Edit (from comments)
If you use stateful and send the next 'character' of your sequence at the same index of the next batch, this would be analogous to sending the full sequence of timesteps per sample. I would still recommend the approach described above in order to improve the speed of the application and to be more in line with other LSTM applications. I see no disadvantages to sending the full sequence per sample instead of spreading it across batches. The advantages of speed, being able to shuffle your input data per batch, and being more readable/consistent would be worth the change IMO.

Neural Network - Input Normalization

It is a common practice to normalize input values (to a neural network) to speed up the learning process, especially if features have very large scales.
In theory, normalization is easy to understand. But I wonder how it is done when the training data set is very large, say 1 million training examples. If the number of features per training example is large as well (say, 100), two problems pop up all of a sudden:
- It will take some time to normalize all training samples
- Normalized training examples need to be saved somewhere, so we roughly double the necessary disk space (especially if we do not want to overwrite the original data).
How is input normalization solved in practice, especially if the data set is very large?
One option may be to normalize the inputs dynamically in memory, per mini-batch, while training. But the normalization results will then change from one mini-batch to another. Is that tolerable?
Maybe someone on this platform has hands-on experience with this question; I would really appreciate it if you could share your experience.
Thank you in advance.
A large number of features actually makes it easier to parallelize the normalization of the dataset, so that part is not really an issue. Normalization of large datasets is easily GPU-accelerated, and it is quite fast, even for datasets of the size you describe. One of the frameworks I have written can normalize the entire MNIST dataset in under 10 seconds on a 4-core, 4-thread CPU; a GPU could easily do it in under 2 seconds. Computation is not the problem.
While for smaller datasets you can hold the entire normalized dataset in memory, for larger datasets like the one you mention you will need to swap out to disk if you normalize the entire dataset up front.
However, if you are using reasonably large batch sizes, about 128 or higher, your minimums and maximums will not fluctuate that much, depending upon the dataset. This allows you to normalize each mini-batch right before you train the network on it, but again this depends upon the network. I would recommend experimenting with your datasets and choosing the best method.
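As a small sketch of the per-mini-batch variant described above (with batch sizes of about 128 or more the per-batch statistics tend to stay fairly stable); the function name is illustrative:

import numpy as np

def minmax_normalize(batch_x, eps=1e-8):
    # Per-feature min-max scaling computed on the mini-batch itself,
    # applied right before the batch is fed to the network.
    mins = batch_x.min(axis=0, keepdims=True)
    maxs = batch_x.max(axis=0, keepdims=True)
    return (batch_x - mins) / (maxs - mins + eps)

# Illustrative usage inside a training loop:
#     model.train_on_batch(minmax_normalize(x_batch), y_batch)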

Multilabel classification using LSTM on variable length signal using Keras

I have recently started working on ECG signal classification into various classes. It is basically a multi-label classification task (4 classes in total). I am new to Deep Learning, LSTMs and Keras, which is why I am confused about a few things.
I am thinking about giving the normalized original signal as input to the network; is this a good approach?
I also need to understand the training input shape for the LSTM, as ECG signals are of variable length (9000 to 18000 samples) and a classifier usually needs a fixed-size input. How can I handle this type of input in the case of an LSTM?
Finally, what should the structure of a deep LSTM network be for such lengthy input, and how many layers should I use?
Thanks for your time.
Regards
I am thinking about giving the normalized original signal as input to the network; is this a good approach?
Yes, this is a good approach. It is actually quite standard to give Deep Learning algorithms normalized or rescaled input.
This usually helps your model converge faster, as you are now inside a smaller range (e.g. [-1, 1]) instead of the larger, un-normalized range of your original input (say [0, 1000]). It also helps you get better, more precise results, as it mitigates problems like vanishing gradients and works better with modern activation functions and optimizers.
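A minimal sketch of that rescaling with scikit-learn (the feature range and the toy signal are illustrative):

import numpy as np
from sklearn.preprocessing import MinMaxScaler

signal = np.random.randn(9000, 1)              # toy one-channel ECG-like signal

scaler = MinMaxScaler(feature_range=(-1, 1))
normalized = scaler.fit_transform(signal)      # values now lie inside [-1, 1]
# In practice, fit the scaler on training signals only and reuse it for test data.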
I also need to understand the training input shape for the LSTM, as ECG signals are of variable length (9000 to 18000 samples) and a classifier usually needs a fixed-size input. How can I handle this type of input in the case of an LSTM?
This part is really important. You are correct: an LSTM expects to receive inputs with a fixed shape, one that you know beforehand (in fact, any Deep Learning layer expects fixed-shape inputs). This is also explained in the Keras docs on Recurrent Layers, where they say:
Input shape
3D tensor with shape (batch_size, timesteps, input_dim).
As we can see, it expects your data to have a number of timesteps as well as a feature dimension at each of those timesteps (in this example the batch size is 1). To give an example, suppose your input data consists of elements like [[1,4],[2,3],[3,2],[4,1]]. Then, using a batch_size of 1, the shape of your data would be (1, 4, 2), as you have 4 timesteps, each with 2 features.
So, bottom line, you have to make sure that you pre-process your data so it has a fixed shape you can then pass to your LSTM layers. This is something you will have to work out yourself, as you know your data and problem better than we do.
Maybe you can fix the number of samples you take from each signal, discarding some and keeping others so every signal has the same length (since you say your signals are between 9k and 18k samples, choosing 9000 could be the logical choice, discarding samples from the longer ones). You could even apply some other transformation to your data that maps inputs of 9000-18000 samples to a fixed size.
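One possible sketch of that truncation, stacking the signals into the (batch, timesteps, features) layout an LSTM expects; the toy lengths stand in for your real recordings:

import numpy as np

fixed_len = 9000                                    # the shortest signal length

# Toy stand-in for the variable-length, one-channel ECG recordings.
signals = [np.random.randn(n) for n in (9000, 12000, 18000)]

X = np.stack([s[:fixed_len] for s in signals])      # truncate to a common length
X = X[..., np.newaxis]                              # shape: (batch, 9000, 1)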
Finally, what should the structure of a deep LSTM network be for such lengthy input, and how many layers should I use?
This one is really quite broad and doesn't have a unique answer. It would depend on the nature of your problem, and determining those parameters a priori is not so straightforward.
What I recommend you do is to start with a simple model first, and then add layers and blocks (neurons) incrementally until you are satisfied with the results.
Try just one hidden layer first, train and test your model and check your performance. You can then add more blocks and see if your performance improved. You can also add more layers and check for the same until you are satisfied.
This is a good way to create Deep Learning models, as you will arrive at the results you want while keeping your network as lean as possible, which in turn helps with execution time and complexity. Good luck with your coding; I hope you find this useful.
