I want to build a classifier that provides labels given a time series of vectors. I have the code for a static LSTM-based classifier, but I don't know how I can incorporate the time information:
Training set:
time = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10,11,12,13,14,15,16,17,18]
f1 = [1, 1, 1, 2, 2, 2, 1, 1, 1, 2, 2, 2, 1, 1, 1, 2, 2, 2]
f2 = [2, 1, 3, 2, 4, 2, 3, 1, 9, 2, 1, 2, 1, 6, 1, 8, 2, 2]
labels = [a, a, b, b, a, a, b, b, a, a, b, b, a, a, b, b, a, a]
Test set:
time = [1, 2, 3, 4, 5, 6]
f1 = [2, 2, 2, 1, 1, 1]
f2 = [2, 1, 2, 1, 6, 1]
labels = [?, ?, ?, ?, ?, ?]
Following this post, I implemented the following in pybrain:
from pybrain.datasets import SequentialDataSet
from itertools import cycle
import matplotlib.pyplot as plt
from pybrain.tools.shortcuts import buildNetwork
from pybrain.structure.modules import LSTMLayer
from pybrain.supervised import RPropMinusTrainer
from sys import stdout
data = [1,2,3,4,5,6,7]
ds = SequentialDataSet(1, 1)
for sample, next_sample in zip(data, cycle(data[1:])):
    ds.addSample(sample, next_sample)
print ds
net = buildNetwork(1, 5, 1, hiddenclass=LSTMLayer, outputbias=False, recurrent=True)
trainer = RPropMinusTrainer(net, dataset=ds)
train_errors = [] # save errors for plotting later
EPOCHS_PER_CYCLE = 5
CYCLES = 100
EPOCHS = EPOCHS_PER_CYCLE * CYCLES
for i in xrange(CYCLES):
    trainer.trainEpochs(EPOCHS_PER_CYCLE)
    train_errors.append(trainer.testOnData())
    epoch = (i+1) * EPOCHS_PER_CYCLE
    print("\r epoch {}/{}".format(epoch, EPOCHS))
    stdout.flush()
print()
print("final error =", train_errors[-1])
plt.plot(range(0, EPOCHS, EPOCHS_PER_CYCLE), train_errors)
plt.xlabel('epoch')
plt.ylabel('error')
plt.show()
for sample, target in ds.getSequenceIterator(0):
    print(" sample = %4.1f" % sample)
    print("predicted next sample = %4.1f" % net.activate(sample))
    print(" actual next sample = %4.1f" % target)
    print()
This trains a classifier, but I don't know how to incorporate the time information. How can I include the information about the order of the vectors?
This is how I implemented my sequence labeling. I have six label classes and 20 sample sequences for each class. Each sequence consists of 100 timesteps of datapoints with 10 variables.
from pybrain.datasets import SequenceClassificationDataSet

input_variable = 10
output_label = 1
trndata = SequenceClassificationDataSet(input_variable, output_label, nb_classes=6)
# input 1st sequence into dataset for class label 0
for i in range(100):
    trndata.appendLinked(sequence1_class0[i,:], [0])
trndata.newSequence()
# input 2nd sequence into dataset for class label 0
for i in range(100):
    trndata.appendLinked(sequence2_class0[i,:], [0])
trndata.newSequence()
......
......
# input 20th sequence into dataset for class label 5
for i in range(100):
    trndata.appendLinked(sequence20_class5[i,:], [5])
trndata.newSequence()
You could eventually put all of this in a for loop; trndata.newSequence() is called every time a new sample sequence is added to the dataset.
The training of the network should be similar to your existing code.
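For example, a rough sketch of the dataset loop plus training, assuming the sequences are stored in a nested list sequences[class_label] of twenty 100x10 arrays per class (the hidden-layer size and the one-hot target conversion are illustrative assumptions, not from the original code):

from pybrain.datasets import SequenceClassificationDataSet
from pybrain.tools.shortcuts import buildNetwork
from pybrain.structure.modules import LSTMLayer, SoftmaxLayer
from pybrain.supervised import RPropMinusTrainer

trndata = SequenceClassificationDataSet(10, 1, nb_classes=6)
for class_label in range(6):
    for seq in sequences[class_label]:      # seq is a 100x10 array
        for t in range(100):
            trndata.appendLinked(seq[t, :], [class_label])
        trndata.newSequence()

# expand the single class index into six one-hot target columns
trndata._convertToOneOfMany()

net = buildNetwork(trndata.indim, 20, trndata.outdim, hiddenclass=LSTMLayer,
                   outclass=SoftmaxLayer, outputbias=False, recurrent=True)
trainer = RPropMinusTrainer(net, dataset=trndata)
trainer.trainEpochs(100)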
Suppose I have the following toy data:
import pandas as pd
from linearmodels.panel import PanelOLS
y = pd.DataFrame(
    index=[[1, 1, 1, 2, 2, 2], [1, 2, 3, 1, 2, 3]],
    data=[70, 60, 50, 30, 33, 27],
    columns=["y"],
)
y.index.set_names(["Entity", "Time"], inplace=True)
x = pd.DataFrame(
    index=[[1, 1, 1, 2, 2, 2], [1, 2, 3, 1, 2, 3]],
    data=[[100], [89], [62], [29], [49], [23]],
    columns=["X"],
)
x.index.set_names(["Entity", "Time"], inplace=True)
And build a model using PanelOLS with entity_effects=True:
model_within = PanelOLS(dependent=y, exog=x, entity_effects=True).fit()
And then wanted to use the predict() method to see how a new "entity" would be modelled. First creating a new entity with:
new_x = pd.DataFrame(
    index=[[3, 3, 3], [1, 2, 3]],
    data=[[40], [70], [33]],
    columns=["X"],
)
new_x.index.set_names(["Entity", "Time"], inplace=True)
Then predicting with:
model_within.predict(new_x)
To get the following output:
             predictions
Entity Time
3      1       16.136230
       2       28.238403
       3       13.312390
According to Wooldridge, 2012, pg 485, the within estimator is estimating:
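$$ y_{it} - \bar{y}_i = \beta_1 (x_{it} - \bar{x}_i) + (u_{it} - \bar{u}_i) $$

(the standard time-demeaned equation, shown here with a single regressor to match the example)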
Since this is modelling a change in expected y from the average of past y's for this entity, how should the predictions be interpreted? My intuition is that the prediction is saying:
For this new entity, 3, in time period 1, given these X inputs, y at time 1 should be 16 units higher than its average y across all time for this entity. Is this interpretation correct? How might it be improved?
linearmodels .predict() documentation
Posting results from seeking clarification through the repo:
https://github.com/bashtage/linearmodels/issues/465
"The model is always Y=XB + epsilon + (eta_t ) + (nu_i ). The effects are treated as errors, and so when you predict you get new_x # params and so the entity effects are not used."
So the predictions are for actual values of y, not time-demeaned predictions. However, to achieve time-demeaned predictions, one can create the same model using data that has first been time-demeaned, and pass in new time-demeaned data to predict on.
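For example, a minimal sketch of that demeaning approach, reusing the toy frames from the question (the groupby-based demeaning and the variable names are illustrative, not from the linearmodels documentation):

# Demean y and x within each entity (the "within"/time-demeaning transform).
y_within = y - y.groupby(level="Entity").transform("mean")
x_within = x - x.groupby(level="Entity").transform("mean")

# Entity effects are already removed by the demeaning, so fit without entity_effects.
model_demeaned = PanelOLS(dependent=y_within, exog=x_within).fit()

# Demean the new entity's data the same way before predicting.
new_x_within = new_x - new_x.groupby(level="Entity").transform("mean")
print(model_demeaned.predict(new_x_within))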
I'm training a network to reconstruct coordinates of specific structures in an image. Until now, my loss function contains three 2D vectors (i.e. 6 variables) for the coordinates, learned via MSE, and three corresponding classifiers, learned via SigmoidFocalCrossEntropy, indicating whether 0, 1, 2 or 3 of these structures are present. I thought it might be beneficial to tell TensorFlow that the order in which the vectors are reconstructed does not matter, as long as the corresponding classifier is still correct. A simple example:
loss(tf.constant([[30, 20, 15, 7, 0, 0, 1, 1, 0]], dtype=tf.float32),
tf.constant([[0, 0, 15, 7, 30, 20, 0, 1, 1]], dtype=tf.float32)) == 0
To implement this I used tf.argsort on the magnitude of each vector:
def sort(tensor):
    x = tf.unstack(tensor, axis=-1)
    squ = []
    for i in range(len(x) // 2):
        i *= 2
        squ.append(x[i] ** 2 + x[i+1] ** 2)
    new = tf.stack(squ, axis=-1)
    return tf.argsort(new, axis=-1, direction='ASCENDING',
                      stable=False, name=None)
and consecutively permuted the tensor:
def permute_tensor_structure(tensor, indices):
    c = indices + 6
    x = indices * 2
    y = indices * 2 + 1
    v = tf.transpose([x, y])
    v = tf.reshape(v, [indices.shape[0], -1])
    perm = tf.concat([v, c], axis=-1)
    return tf.gather(tensor, perm, batch_dims=1, name=None, axis=-1)
I did the same for my ground truth and got the network up and running.
Minimal example extracted from my code:
import tensorflow as tf
from tensorflow_addons.losses import SigmoidFocalCrossEntropy
def compute_permute_loss(truth, predict):
    l2 = tf.keras.losses.MeanSquaredError()
    ce = SigmoidFocalCrossEntropy()
    indices = sort(predict[:, 0:6])
    indices2 = sort(truth[:, 0:6])
    predict = permute_tensor_structure(predict, indices)
    truth = permute_tensor_structure(truth, indices2)
    L2 = l2(predict[:, 0:6], truth[:, 0:6])
    BCE = tf.reduce_sum(ce(truth[:, 6:], predict[:, 6:]))
    return 3 * L2 + BCE
class Test(tf.test.TestCase):
    def test_permutation_loss(self):
        tensor1 = tf.constant(
            [[30, 20, 15, 7, 0, 0, 1, 1, 0]],
            dtype=tf.float32)
        tensor2 = permute_tensor_structure(tensor1, tf.constant([[2, 1, 0]]))
        loss = compute_permute_loss(tensor1, tensor2)
        self.assertEqual(loss, 0,
                         msg="loss for only permuted tensors is not zero")
        tensor3 = tf.constant(
            [[29, 22, 15, 7, 0, 0, 1, 1, 0]],
            dtype=tf.float32)
        loss = compute_permute_loss(tensor3, tensor2)
        self.assertAllClose(loss, (1.0 + 4.0) / 2.0,
                            msg="loss for values is not rmse")

if __name__ == "__main__":
    tf.test.main()
# [ RUN ] Test.test_permutation_loss
# [ OK ] Test.test_permutation_loss
However, I'm afraid the permutation in the loss could backfire and impair TensorFlow's backpropagation. Has somebody already faced a similar problem, or does anyone have deeper knowledge of TensorFlow's graph building and backpropagation? I would be grateful for every suggestion or input.
I am building a randomly generated world, and I'm starting off with basic graphing, trying to get something similar to Perlin noise. I wrote everything below, but the last (important) part didn't work.
import math
import random
import matplotlib.pyplot as plt
print('ur seed')
a = input()
seed = int(a)
b = (math.cos(seed) * 100)
c = round(b)
# print(c)
for i in range(10):
    z = (random.randint(-1, 2))
    change = (z + c)
    gener = []
    gener.append(change)
    time = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    #print(gener)
    #print(change)
plt.ylabel('generated')
plt.xlabel('time')
#Here I wanna add them to the graph and it is Erroring a lot
plt.scatter(time, gener)
plt.title('graph')
plt.show()
The problem is that you're setting gener to [] inside the loop rather than outside it. Also, you don't need the time variable inside the loop.
Change
for i in range(10):
    z = (random.randint(-1, 2))
    change = (z + c)
    gener = []
    gener.append(change)
    time = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
to
gener = []
for i in range(10):
    z = (random.randint(-1, 2))
    change = (z + c)
    gener.append(change)
time = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
I want to plot a time series and forecasts of its future values. I am using an autoregression model, and I am not sure whether the strategy I am using is correct, but I want to estimate the time series 15 timestamps into the future. Eventually I would like a fill between the different forecast possibilities:
import numpy as np
from numpy import convolve
import matplotlib.pyplot as plt
plt.style.use('ggplot')
def moving_average(y, period):
    buffer = []
    for i in range(period, len(y)):
        buffer.append(y[i - period : i].mean())
    return buffer

def auto_regressive(y, p, d, q, future_count):
    """
    p = the order (number of time lags)
    d = degree of differencing
    q = the order of the moving-average
    """
    buffer = np.copy(y).tolist()
    for i in range(future_count):
        ma = moving_average(np.array(buffer[-p:]), q)
        forecast = buffer[-1]
        for n in range(0, len(ma), d):
            forecast -= buffer[-1 - n] - ma[n]
        buffer.append(forecast)
    return buffer
y=[60, 2, 0, 0, 1, 1, 0, -1, -2, 0, -2, 6, 0, 2, 0, 4, 0, 1, 3, 2, 1, 2, 1, 0, 2, 2, 0, 1, 0, 1, 3, -1, 0, 2, 2, 1, 3, 2, 4, 2, 3, 0, 0, 2, 2, 0, 3, 1, 0, 2]
x=[1549984749, 1549984751, 1549984755, 1549984761, 1549984768, 1549984769, 1549984770, 1549984774, 1549984780, 1549984783, 1549984786, 1549984787, 1549984788,
1549984794, 1549984797, 1549984855, 1549984923, 1549984930, 1549984955, 1549985006, 1549985008, 1549985027, 1549985086, 1549985091, 1549985101, 1549985115,
1549985116, 1549985118, 1549985130, 1549985130, 1549985139, 1549985141, 1549985146, 1549985154, 1549985178, 1549985192, 1549985203, 1549985217, 1549985245,
1549985288, 1549985311, 1549985316, 1549985425, 1549985447, 1549985460, 1549985463, 1549985489, 1549985561, 1549985595, 1549985610]
x=np.array(x)
print(np.size(x))
y=np.array(y)
print(np.size(y))
future_count = 15
predicted_15 = auto_regressive(y,20,1,2,future_count)
plt.plot(x[len(x) - len(predicted_15):], predicted_15)
plt.plot(x, y, 'o-')
plt.show()
But I got this error:
ValueError: x and y must have same first dimension, but have shapes (15,) and (65,)
You are getting the error because predicted_15 contains y as well as your forecasted values, so it has length 65 while the x slice has length 15. You want to plot only the forecasted values (length 15):
plt.plot(x[len(x) - len(predicted_15):], predicted_15[len(x):])
Having said this, you need to consider what x-values these predicted y values correspond to.
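For example, a minimal sketch that places the 15 forecasts on their own future x-values, assuming for illustration that they are spaced by the average observed time step (the real sampling is irregular, so this spacing is an assumption):

# Space the forecast points by the mean gap between the observed timestamps.
step = np.mean(np.diff(x))
future_x = x[-1] + step * np.arange(1, future_count + 1)

plt.plot(x, y, 'o-', label='observed')
plt.plot(future_x, predicted_15[len(x):], label='forecast')
plt.legend()
plt.show()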
I am trying to implement a simple neural net. I want to print the initial pattern, weights, and activations, and then print the learning process (i.e. every pattern it goes through as it learns). So far I am unable to do this - it returns the initial and final pattern (when I put print p in appropriate places), but nothing else. Hints and tips appreciated - I'm a complete newbie to Python!
#!/usr/bin/python
import random
p = [ [1, 1, 1, 1, 1],
[1, 1, 1, 1, 1],
[0, 0, 0, 0, 0],
[1, 1, 1, 1, 1],
[1, 1, 1, 1, 1] ] # pattern I want the net to learn
n = 5
alpha = 0.01
activation = [] # unit activations
weights = [] # weights
output = [] # output
def initWeights(n): # set weights to zero, n is the number of units
    global weights
    weights = [[[0]*n]*n] # initialised to zero

def initNetwork(p): # initialises units to activation
    global activation
    activation = p

def updateNetwork(k): # pick unit at random and update k times
    for l in range(k):
        unit = random.randint(0,n-1)
        activation[unit] = 0
        for i in range(n):
            activation[unit] += output[i] * weights[unit][i]
        output[unit] = 1 if activation[unit] > 0 else -1

def learn(p):
    for i in range(n):
        for j in range(n):
            weights += alpha * p[i] * p[j]
You have a problem with the line:
weights = [[[0]*n]*n]
When you use *, you multiply object references, so you are reusing the same length-n list of zeros every time. This will cause (shown here with n = 3):
>>> weights[0][1][0] = 8
>>> weights
[[[8, 0, 0], [8, 0, 0], [8, 0, 0]]]
The first item of every sublist is 8, because they are one and the same list: you stored the same reference multiple times, so modifying the n-th item through any one of them alters all of them.
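A minimal sketch of one way to build independent rows instead (note this is a plain n x n matrix, which also matches the weights[unit][i] indexing used in updateNetwork):

# Each row comes from its own [0] * n call, so the rows are independent lists.
weights = [[0] * n for _ in range(n)]

weights[0][1] = 8     # only row 0 changes now
print(weights[1][1])  # still 0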
This is the line where you get
"IndexError: list index out of range"
output[unit] = 1 if activation[unit] > 0 else -1
because output = [] is empty to begin with; you should use output.append() or ...
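For example, a minimal sketch of pre-filling it before updateNetwork runs (the starting value of +1 is just an assumption):

# Give output one slot per unit so output[unit] can be read and assigned.
output = [1] * n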