How can I compare weights of different Keras models? - python

I've saved a number of models in .h5 format and want to compare their characteristics, such as their weights.
I don't have any idea how to compare them appropriately, especially in the form of tables and figures.
Thanks in advance.

Weight introspection is a fairly advanced endeavor, and requires model-specific treatment. Visualizing weights is a largely technical challenge, but what you do with that information is a different matter - I'll mostly address the former, but touch upon the latter.
Update: I also recommend the See RNN package for visualizing weights, gradients, and activations.
Visualizing weights: one approach is as follows:
Retrieve weights of layer of interest. Ex: model.layers[1].get_weights()
Understand weight roles and dimensionality. Ex: LSTMs have three sets of weights: kernel, recurrent, and bias, each serving a different purpose. Within each weight matrix are gate weights - Input, Cell, Forget, Output. For Conv layers, the distinction is between filters (dim0), kernels, and strides.
Organize the weight matrices for visualization in a meaningful manner per (2). Ex: for Conv, unlike for LSTM, feature-specific treatment isn't really necessary, and we can simply flatten kernel weights and bias weights and visualize them in a histogram.
Select a visualization method: histogram, heatmap, scatterplot, etc. - for flattened data, a histogram is the best bet.
Interpreting weights: a few approaches are:
Sparsity: if weight norm ("average") is low, the model is sparse. May or may not be beneficial.
Health: if too many weights are zero or near-zero, it's a sign of too many dead neurons; this can be useful for debugging, as once a layer's in such a state, it usually does not revert - so training should be restarted
Stability: if weights are changing greatly and quickly, or if there are many high-valued weights, it may indicate impaired gradient performance, remedied by e.g. gradient clipping or weight constraints
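As a rough illustration of these checks, here is a small sketch (the near-zero threshold of 1e-3 is an arbitrary choice, not a standard value) that prints the average magnitude and the fraction of near-zero weights for each layer of a Keras model:
import numpy as np

def weight_stats(model, zero_tol=1e-3):
    # Rough indicators per weight array: mean |w| (low -> sparse layer) and
    # fraction of near-zero weights (high -> possible dead units).
    for i, layer in enumerate(model.layers):
        for w in layer.get_weights():
            flat = w.ravel()
            print("layer %d %s  mean|w|=%.4f  near-zero frac=%.2f" % (
                i, w.shape, np.mean(np.abs(flat)),
                np.mean(np.abs(flat) < zero_tol)))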
Model comparison: there's no way to simply look at two sets of weights from separate models side by side and decide "this is the better one"; analyze each model separately, for example as above, then decide which one's upsides outweigh its downsides.
The ultimate tiebreaker, however, will be validation performance - and it's also the more practical one. It goes as follows:
Train model for several hyperparameter configurations
Select one with best validation performance
Fine-tune that model (e.g. via further hyperparameter configs)
Weight visualization should be mainly kept as a debugging or logging tool - as, put simply, even with our best current understanding of neural networks one cannot tell how well the model will generalize just by looking at the weights.
Suggestion: also visualize layer outputs - see this answer and sample output at bottom.
Visual example:
import numpy as np
import matplotlib.pyplot as plt

from tensorflow.keras.layers import Input, Conv2D, Dense, Flatten
from tensorflow.keras.models import Model

ipt = Input(shape=(16, 16, 16))
x   = Conv2D(12, 8, 1)(ipt)
x   = Flatten()(x)
out = Dense(16)(x)
model = Model(ipt, out)
model.compile('adam', 'mse')

X = np.random.randn(10, 16, 16, 16)  # toy data
Y = np.random.randn(10, 16)          # toy labels
for _ in range(10):
    model.train_on_batch(X, Y)

def get_weights_print_stats(layer):
    W = layer.get_weights()
    print(len(W))
    for w in W:
        print(w.shape)
    return W

def hist_weights(weights, bins=500):
    for weight in weights:
        plt.hist(np.ndarray.flatten(weight), bins=bins)

W = get_weights_print_stats(model.layers[1])
# 2
# (8, 8, 16, 12)
# (12,)
hist_weights(W)
Conv1D outputs visualization: (source)

To compare weights of two models, I vectorize model weights (i.e., create 1D array) for each model. Then, I calculate the percent difference between respective weights and construct a histogram of these percent differences. If all values are close to zero, there is suggestion (but not proof) that the models are practically the same. This is just one approach out of many for comparing models using their weights.
As an aside, I will note that I use this method when I want some indication that my model has converged on a global, rather than local, minimum. I will train models with several different initializations. If all the initializations result in convergence to the same weights, it suggests that the minimum is a global minimum.
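A minimal sketch of that percent-difference comparison, assuming model_a and model_b are two already-trained Keras models with identical architectures (the function names and the eps guard are mine, not from any particular library):
import numpy as np
import matplotlib.pyplot as plt

def flatten_weights(model):
    # Concatenate every weight array of the model into one 1D vector.
    return np.concatenate([w.ravel() for w in model.get_weights()])

def compare_models(model_a, model_b, bins=200, eps=1e-12):
    wa, wb = flatten_weights(model_a), flatten_weights(model_b)
    # Percent difference of corresponding weights; eps avoids division by zero.
    pct_diff = 100.0 * (wb - wa) / (np.abs(wa) + eps)
    plt.hist(pct_diff, bins=bins)
    plt.xlabel("% difference per weight")
    plt.show()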

Related

Finding patterns in time series with PyTorch

I started with PyTorch for image recognition. Now I want to test (very basically) with pure NumPy arrays, but I struggle to get the setup to work. Basically, I have vectors with values between 0 and 1 (normalized curves). Those vectors are always of length 1500, and I want to detect things like "high values at the beginning", "sine-wave-like shape", "convex", "concave", and so on - just the shapes of those curves.
My training set consists of many vectors with their classes; I have chosen 7 classes. The net should be trained to classify a vector into one or more of those 7 classes (not one hot).
I'm struggling with multiple issues, but first, here is my very basic Net:
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np

class Net(nn.Module):
    def __init__(self, input_dim, hidden_dim, layer_dim, output_dim):
        super(Net, self).__init__()
        self.hidden_dim = hidden_dim
        self.layer_dim = layer_dim
        self.rnn = nn.RNN(input_dim, hidden_dim, layer_dim)
        self.fc = nn.Linear(self.hidden_dim, output_dim)

    def forward(self, x):
        h0 = torch.zeros(self.layer_dim, x.size(1), self.hidden_dim).requires_grad_()
        out, h0 = self.rnn(x, h0.detach())
        out = out[:, -1, :]
        out = self.fc(out)
        return out

network = Net(1500, 70, 20, 7)
optimizer = optim.SGD(network.parameters(), lr=learning_rate, momentum=momentum)
This is just a copy-paste from an RNN demo. Here is my first issue: is an RNN the right choice? It is a time series, but then again, it becomes an image-recognition problem if you plot the curve.
Next is an attempt to batch the data. The data object contains all training curves together with the correct classifiers.
def train(epoch):
    network.train()
    network.float()
    batching = True
    index = 0
    # monitor the cumulative loss for an epoch
    cummloss = []
    # start batching some curves
    while batching:
        optimizer.zero_grad()
        # here I start collecting some curves into a batch and normalize the curves
        _input = []
        batch_size = min(len(data)-1, index+batch_size_train) - index
        for d in data[index:min(len(data)-1, index+batch_size_train)]:
            y = np.array(d['data']['y'], dtype='d')
            y = np.multiply(y, y.max())
            y = y[0:1500]
            y = np.pad(y, (0, max(1500-len(y), 0)), 'edge')
            if len(_input) == 0:
                _input = y
            else:
                _input = np.vstack((_input, y))
        input = torch.from_numpy(_input).float()
        input = torch.reshape(input, (1, batch_size, len(y)))
        target = np.zeros((1, 7))
        # the correct classes have indices, so I create a vector with 1 at the correct locations
        for _index in np.array(d['classifier']):
            target[0, _index-1] = 1
        target = torch.from_numpy(target)
        # get the result from the network
        output = network(input)
        # is this a good loss function?
        loss = F.l1_loss(output, target)
        loss.backward()
        cummloss.append(loss.item())
        optimizer.step()
        index = index + batch_size_train
        if index > len(data):
            print(np.mean(cummloss))
            batching = False

for e in range(1, n_epochs):
    print('Epoch: ' + str(e))
    train(0)
The problem I'm facing right now is that the loss barely changes, even with hundreds of epochs.
Are there existing examples of this kind of problem? I didn't find any, just pure png/jpg image recognition. When I convert the curves to png I can train a net without much trouble - I used DenseNet and it worked just fine - but it seems like massive overkill for this simple task.
This is just a copy-paste from an RNN demo. Here is my first issue. Is an RNN the right choice?
In theory, the model you choose doesn't matter as much as how you formulate your problem.
But in your case the most obvious limitation you're going to face is your sequence length: 1500. RNNs store information across steps and typically run into trouble over long sequences due to vanishing or exploding gradients.
LSTM networks were developed to circumvent this limitation with a memory cell, but even then, for long sequences they are still limited by the amount of information stored in the cell.
You could also try a CNN and think of the curve as an image.
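As a rough illustration of that CNN idea (layer sizes and names here are illustrative guesses, not tuned values), one could treat each curve as a 1-channel 1-D signal and predict 7 independent class probabilities:
import torch
import torch.nn as nn

class CurveCNN(nn.Module):
    def __init__(self, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),        # -> (batch, 32, 1)
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                   # x: (batch, 1, 1500)
        z = self.features(x).squeeze(-1)    # (batch, 32)
        return self.head(z)                 # raw logits, one per class

net = CurveCNN()
x = torch.randn(8, 1, 1500)                 # toy batch of curves
logits = net(x)
# multi-label setup: one sigmoid/BCE term per class
loss = nn.BCEWithLogitsLoss()(logits, torch.zeros(8, 7))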
Are there existing examples of this kind of problem?
I don't know of any, but I have some suggestions: if I understood your problem correctly, you're going from a (1500, 1) input to a (7, 1) output, where the positions are 0 except for the corresponding classes, which are 1.
I don't see any activation function. Usually, when dealing with multiple classes, you don't compute the loss directly on the output of the dense layer; you first apply a normalizing function like softmax and then compute the loss.
From your description of features in the form of sine-like structures, the closest thing that comes to mind is the frequency domain. So, given an input curve, just transform it to the frequency domain with a Fourier transform and use that as your feature input.
It might be best to look for such projects online; one project whose research paper or video you might want to look at is from this group (they have some Jupyter notebooks for you to try), or any similar work. They use Fourier features that go through a multi-layer perceptron (MLP).
I am not sure exactly what you want to do, but it seems like a classification task. You would use an RNN if you want your neural network to work on a sequence; to me it seems like the 1500 dimensions are independent and, as such, can just be treated as input features.
Regarding the last layer: for a classification problem it is usually a probability distribution obtained by applying softmax (if the classes are mutually exclusive - i.e. the probabilities sum to 1), in which case, given an input, the net gives the probability of it belonging to each class. If we are predicting multiple labels per input, we use a sigmoid as the last layer of the neural network.
Regarding your loss, there are many losses you can try to see which works better. Once again, for different outputs you have to know what exactly the measure of distance is (i.e. how different two things are). Check out this website, or any explanation of loss functions on the net.
So you should try a simple MLP on top of Fourier features as a starting point, assuming that is your feature vector; a rough sketch follows below.
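A minimal sketch of that starting point, assuming the curves come in batches of shape (batch, 1500); the choice of 64 Fourier magnitudes and the layer sizes are arbitrary:
import torch
import torch.nn as nn

def fourier_features(curves, k=64):
    # curves: (batch, 1500) float tensor; keep the magnitudes of the first
    # k Fourier coefficients as a compact shape descriptor.
    spectrum = torch.fft.rfft(curves, dim=1)   # (batch, 751) complex
    return spectrum[:, :k].abs()               # (batch, k) real magnitudes

mlp = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 7),        # 7 logits; pair with sigmoid/BCE for multi-label
)

curves = torch.rand(16, 1500)                  # toy batch
logits = mlp(fourier_features(curves))
loss = nn.BCEWithLogitsLoss()(logits, torch.zeros(16, 7))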
Image recognition is different from time-series data. In the imaging domain, your dataset has more in common with problems like activity recognition and video recognition, which have a temporal component, so I'd recommend looking into models for those.
As for the current model, I'd recommend using an LSTM instead of a plain RNN. Also, for classification you need an activation function in your final layer: this should be softmax with a cross-entropy-based loss, or sigmoid with an MSE loss.
Keras has a TimeDistributed wrapper which makes it easy to handle the time component. You can use a similar approach in PyTorch by applying linear layers followed by an LSTM.
Look into these for better understanding:
Activity Recognition : https://www.narayanacharya.com/vision/2019-12-30-Action-Recognition-Using-LSTM
https://discuss.pytorch.org/t/any-pytorch-function-can-work-as-keras-timedistributed/1346
How to implement time-distributed dense (TDD) layer in PyTorch
Activation Function ::
https://pytorch.org/docs/stable/generated/torch.nn.Softmax.html
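For concreteness, a minimal PyTorch sketch of the LSTM-plus-final-activation setup suggested above (layer sizes are illustrative; with a sigmoid output one would typically pair it with a binary cross-entropy loss):
import torch
import torch.nn as nn

class CurveLSTM(nn.Module):
    def __init__(self, hidden=64, n_classes=7):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, 1500, 1)
        _, (h_n, _) = self.lstm(x)     # h_n: (1, batch, hidden)
        return torch.sigmoid(self.fc(h_n[-1]))   # per-label probabilities

model = CurveLSTM()
probs = model(torch.rand(4, 1500, 1))  # (4, 7) probabilities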

How to use Multivariate time-series prediction with Keras, when multiple samples are used

As the title states, I am doing multivariate time-series prediction. I have some experience with this situation and was able to successfully setup and train a working model in TF Keras.
However, I did not know the 'proper' way to handle having multiple unrelated time-series samples. I have about 8000 unique sample 'blocks' with anywhere from 800 time steps to 30,000 time steps per sample. Of course I couldn't concatenate them all into one single time series because the first points of sample 2 are not related in time with the last points of sample 1.
Thus my solution was to fit each sample individually in a loop (at great inefficiency).
My new idea is: can/should I pad the start of each sample with empty time steps equal to the amount of look-back for the RNN, and then concatenate the padded samples into one time series? This would mean the first time steps have look-back data of mostly 0's, which sounds like another 'hack' for my problem and not the right way to do it.
The main challenge is the 800 vs. 30,000 timesteps, but it's nothing you can't handle.
Model design: group sequences into chunks - for example, 30 sequences of 800-to-900 timesteps (padded), then 60 sequences of 900-to-1000, etc. - they don't have to be contiguous (i.e. the next chunk can be 1200-to-1500); a bucketing sketch follows the model code below
Input shape: (samples, timesteps, channels) - or equivalently, (sequences, timesteps, features)
Layers: Conv1D and/or RNNs - e.g. GRU, LSTM. Each can handle variable timesteps
Concatenation: don't do it. If each of your sequences is independent, then each must be fed along dimension 0 in Keras - the batch or samples dimension. If they are dependent, e.g. a multivariate timeseries like many channels in a signal - then feed them along the channels dimension (dim 2). But never concatenate along the timesteps dimension, as it implies causal continuity where none exists
Stateful RNNs: can help in processing long sequences - info on how they work here
RNN capability: is limited w.r.t. long sequences, and 800 is already in the danger zone even for LSTMs; I'd suggest dimensionality reduction via either autoencoders or CNNs with strides > 1 at the input, then feeding their outputs to the RNNs
RNN training: is difficult. Long train times, hyperparameter sensitivity, vanishing gradients - but, with proper regularization, they can be powerful. More info here
Zero-padding: before/after/both - debatable; you can read about it, but probably steer clear of "both", as learning to ignore padding is easier with one locality; I personally use "before"
RNN variant: use CuDNNLSTM or CuDNNGRU whenever possible, as they are 10x faster
Note: "samples" above, and in machine learning, refers to independent examples / observations, rather than measured signal datapoints (which would be referred to as timesteps).
Below is a minimal code for what a timeseries-suited model would look like:
from tensorflow.keras.layers import Input, Conv1D, LSTM, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
import numpy as np

def make_data(batch_shape):  # dummy data
    return (np.random.randn(*batch_shape),
            np.random.randint(0, 2, (batch_shape[0], 1)))

def make_model(batch_shape):  # example model
    ipt = Input(batch_shape=batch_shape)
    x   = Conv1D(filters=16, kernel_size=10, strides=2, padding='valid')(ipt)
    x   = LSTM(units=16)(x)
    out = Dense(1, activation='sigmoid')(x)  # assuming binary classification
    model = Model(ipt, out)
    model.compile(Adam(lr=1e-3), 'binary_crossentropy')
    return model

batch_shape = (32, 100, 16)  # 32 samples, 100 timesteps, 16 channels
x, y = make_data(batch_shape)
model = make_model(batch_shape)
model.train_on_batch(x, y)
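And here is a rough sketch of the bucketing idea from the list above - grouping sequences by length, padding each bucket, and feeding one bucket per batch. It assumes a model built with a flexible timesteps dimension (e.g. Input(shape=(None, 16)) rather than a fixed batch_shape), and labels_for is a hypothetical helper:
import numpy as np

def make_buckets(sequences, bucket_width=100):
    # Group sequences whose lengths fall in the same bucket_width window,
    # e.g. all 800-to-899-step sequences share one bucket.
    buckets = {}
    for seq in sequences:
        buckets.setdefault(len(seq) // bucket_width, []).append(seq)
    return buckets

def pad_bucket(bucket, n_channels):
    # Zero-pad every sequence in the bucket ("before"-style) to the bucket's
    # own maximum length, giving one (samples, timesteps, channels) array.
    max_len = max(len(s) for s in bucket)
    batch = np.zeros((len(bucket), max_len, n_channels))
    for i, seq in enumerate(bucket):
        batch[i, -len(seq):] = seq
    return batch

# Hypothetical usage - labels_for() stands in for your own label lookup:
# for bucket in make_buckets(all_sequences).values():
#     model.train_on_batch(pad_bucket(bucket, n_channels=16), labels_for(bucket))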

Diverging losses in PPO + ICM using LSTM

I have tried to implement Proximal Policy Optimization (PPO) with intrinsic curiosity rewards for a stateful LSTM neural network.
The losses in both PPO and ICM are diverging, and I would like to find out whether it's a bug in the code or badly selected hyperparameters.
Code (where the implementation might be wrong):
In the ICM model I also use an LSTM as the first layer, to match input dimensions.
In the ICM, the whole dataset is propagated at once, with zeros as the initial hidden state (the resulting tensors are different than they would be if I propagated only one state or batch and re-used the hidden cells).
In the PPO advantage and discounted-reward processing, the dataset is propagated one by one and the hidden cells are re-used (the exact opposite of the ICM, because here it uses the same model for selecting actions and this approach is "real-time-like").
In PPO training, the model is trained on batches with re-use of the hidden cells.
I have used https://github.com/adik993/ppo-pytorch as the base code and reworked it to run on my environment and use an LSTM.
I may provide code samples later if specifically requested, due to the large number of lines.
Hyperparameters:
def __init_curiosity(self):
    curiosity_factory = ICM.factory(MlpICMModel.factory(), policy_weight=1,
                                    reward_scale=0.1, weight=0.2,
                                    intrinsic_reward_integration=0.01,
                                    reporter=self.reporter)
    self.curiosity = curiosity_factory.create(self.state_converter,
                                              self.action_converter)
    self.curiosity.to(self.device, torch.float32)
    self.reward_normalizer = StandardNormalizer()

def __init_PPO_trainer(self):
    self.PPO_trainer = PPO(agent=self,
                           reward=GeneralizedRewardEstimation(gamma=0.99, lam=0.95),
                           advantage=GeneralizedAdvantageEstimation(gamma=0.99, lam=0.95),
                           learning_rate=1e-3,
                           clip_range=0.3,
                           v_clip_range=0.3,
                           c_entropy=1e-2,
                           c_value=0.5,
                           n_mini_batches=32,
                           n_optimization_epochs=10,
                           clip_grad_norm=0.5)
    self.PPO_trainer.to(self.device, torch.float32)
Training graphs:
(Notice large numbers on y axis)
UPDATE
For now I have reworked the LSTM processing to use batches and hidden memory in all places (for both the main model and the ICM), but the problem is still present. I have traced it to the output of the ICM's model; there, the output diverges mainly in the action_hat tensor.
Found the problem... In the main model I use softmax for eval runs and log_softmax for training in the output layer, and according to the PyTorch docs CrossEntropyLoss applies log_softmax internally, so as advised I used NLLLoss - but also for the computation of the ICM model's loss, whose output layer has no softmax at all! Switching back to CrossEntropyLoss (which was originally in the reference code) solved the ICM loss divergence.
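For anyone hitting the same issue, the distinction in a nutshell (toy tensors, not the actual ICM code):
import torch
import torch.nn.functional as F

logits = torch.randn(4, 3)             # raw network outputs, no softmax applied
targets = torch.tensor([0, 2, 1, 0])

# CrossEntropyLoss == log_softmax + NLLLoss, so it expects raw logits:
loss_ce = F.cross_entropy(logits, targets)

# NLLLoss expects log-probabilities, i.e. you apply log_softmax yourself:
loss_nll = F.nll_loss(F.log_softmax(logits, dim=1), targets)

assert torch.allclose(loss_ce, loss_nll)   # identical by construction
# Mixing them up (NLLLoss on raw logits, or CrossEntropyLoss on softmax
# output) trains on the wrong quantity and can make losses diverge.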

Appropriate Deep Learning Structure for multi-class classification

I have the following data
feat_1 feat_2 ... feat_n label
gene_1 100.33 10.2 ... 90.23 great
gene_2 13.32 87.9 ... 77.18 soso
....
gene_m 213.32 63.2 ... 12.23 quitegood
The size of M is large ~30K rows, and N is much smaller ~10 columns.
My question is: what is an appropriate deep learning structure to learn from and test on data like the above?
At the end of the day, the user will give a vector of genes with expression.
gene_1 989.00
gene_2 77.10
...
gene_N 100.10
And the system will determine which label applies to each gene, e.g. great or soso, etc...
By structure I mean one of these:
Convolutional Neural Network (CNN)
Autoencoder
Deep Belief Network (DBN)
Restricted Boltzman Machine
To expand a little on @sung-kim's comment:
CNNs are used primarily for problems in computer imaging, such as classifying images. They are modelled on the animal visual cortex; they basically have a connection network such that there are tiles of features with some overlap. Typically they require a lot of data, more than 30k examples.
Autoencoders are used for feature generation and dimensionality reduction. They start with lots of neurons in each layer, then this number is reduced, and then increased again. Each object is trained on itself. This results in the middle layers (a low number of neurons) providing a meaningful projection of the feature space in a low dimension.
While I don't know much about DBNs, they appear to be a supervised extension of the autoencoder. Lots of parameters to train.
Again, I don't know much about Boltzmann machines, but they aren't widely used for this sort of problem (to my knowledge).
As with all modelling problems though, I would suggest starting from the most basic model to look for signal. Perhaps a good place to start is Logistic Regression before you worry about deep learning.
If you have got to the point where you want to try deep learning, for whatever reason, then for this type of data a basic feed-forward network is the best place to start. In terms of deep learning, 30k data points is not a large number, so it's always best to start out with a small network (1-3 hidden layers, 5-10 neurons) and then get bigger. Make sure you have a decent validation set when performing parameter optimisation, though. If you're a fan of the scikit-learn API, I suggest Keras as a good place to start.
One further comment: you will want to use a OneHotEncoder on your class labels before you do any training.
EDIT
I see from the bounty and the comments that you want to see a bit more about how these networks work. Please see the example below of how to build a feed-forward model and do some simple parameter optimisation:
import numpy as np
from sklearn import preprocessing
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Dropout

# Create some random data
np.random.seed(42)
X = np.random.random((10, 50))

# Similar labels
labels = ['good', 'bad', 'soso', 'amazeballs', 'good']
labels += labels
labels = np.array(labels)
np.random.shuffle(labels)

# Change the labels to the required format
numericalLabels = preprocessing.LabelEncoder().fit_transform(labels)
numericalLabels = numericalLabels.reshape(-1, 1)
y = preprocessing.OneHotEncoder(sparse=False).fit_transform(numericalLabels)

# Simple Keras model builder
def buildModel(nFeatures, nClasses, nLayers=3, nNeurons=10, dropout=0.2):
    model = Sequential()
    model.add(Dense(nNeurons, input_dim=nFeatures))
    model.add(Activation('sigmoid'))
    model.add(Dropout(dropout))
    for i in range(nLayers - 1):
        model.add(Dense(nNeurons))
        model.add(Activation('sigmoid'))
        model.add(Dropout(dropout))
    model.add(Dense(nClasses))
    model.add(Activation('softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='sgd')
    return model

# Do an exhaustive search over a given parameter space
for nLayers in range(2, 4):
    for nNeurons in range(5, 8):
        model = buildModel(X.shape[1], y.shape[1], nLayers, nNeurons)
        modelHist = model.fit(X, y, batch_size=32, nb_epoch=10,
                              validation_split=0.3, shuffle=True, verbose=0)
        minLoss = min(modelHist.history['val_loss'])
        epochNum = modelHist.history['val_loss'].index(minLoss)
        print('{0} layers, {1} neurons best validation at'.format(nLayers, nNeurons),
              'epoch {0} loss = {1:.2f}'.format(epochNum, minLoss))
Which outputs
2 layers, 5 neurons best validation at epoch 0 loss = 1.18
2 layers, 6 neurons best validation at epoch 0 loss = 1.21
2 layers, 7 neurons best validation at epoch 8 loss = 1.49
3 layers, 5 neurons best validation at epoch 9 loss = 1.83
3 layers, 6 neurons best validation at epoch 9 loss = 1.91
3 layers, 7 neurons best validation at epoch 9 loss = 1.65
A deep learning structure would be recommended if you were dealing with raw data and wanted to find features that work towards your classification goal automatically. But based on the names of your columns and their number (only 10), it seems your features are already engineered.
For this reason, you could just go with a standard multi-layer neural network and use supervised learning (backpropagation). Such a network would have a number of inputs matching the number of your columns (10), followed by a number of hidden layers, followed by an output layer with a number of neurons matching the number of your labels. You could experiment with different numbers of hidden layers and neurons, different neuron types (sigmoid, tanh, rectified linear, etc.), and so on.
Alternatively you could use the raw data (if it's available) and then go with DBNs (they're known to be robust and achieve good results across different problems) or auto-encoders.
If you expect the output to be thought of as scores for each label (as I understood from your question), try a supervised multi-class logistic regression classifier (the highest score takes the label).
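A minimal baseline sketch of that suggestion using scikit-learn; the random arrays are only stand-ins for the gene table:
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy stand-in: ~30k rows, 10 engineered features, string labels.
X = np.random.rand(30000, 10)
y = np.random.choice(["great", "soso", "quitegood"], size=30000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print(clf.score(X_test, y_test))       # accuracy
print(clf.predict_proba(X_test[:1]))   # per-class scores; argmax is the label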
If you're bound to use deep learning:
A simple feed-forward ANN should do, with supervised learning through backpropagation: an input layer with N neurons, plus one or two hidden layers - not more than that. There is no need to go 'deep' and add more layers for this data; with more layers you risk overfitting easily, it becomes tricky to figure out what the problem is, and the test accuracy suffers greatly.
Simply plotting or visualizing the data, e.g. with t-SNE, can be a good start if you need to figure out which features are important (or any correlations that may exist).
You can then play with higher powers of those feature dimensions, or add increased weight to their scores.
For problems like this, deep learning probably isn't very well suited, but a simpler ANN architecture like this should work well depending on the data.

neural networks regression using pybrain

I need to solve a regression problem with a feed-forward network, and I've been trying to use PyBrain to do it. Since there are no regression examples in pybrain's reference, I tried to adapt its classification example for regression instead, but with no success (the classification example can be found here: http://pybrain.org/docs/tutorial/fnn.html). Following is my code:
This first function converts my data, in NumPy array form, to a pybrain SupervisedDataSet. I use the SupervisedDataSet because, according to pybrain's reference, it is the dataset to use when the problem is regression. The parameters are an array with the feature vectors (data) and their expected outputs (values):
def convertDataNeuralNetwork(data, values):
    fulldata = SupervisedDataSet(data.shape[1], 1)
    for d, v in zip(data, values):
        fulldata.addSample(d, v)
    return fulldata
Next is the code to run the regression. train_data and train_values are the training feature vectors and their expected outputs; test_data and test_values are the test feature vectors and their expected outputs:
regressionTrain = convertDataNeuralNetwork(train_data, train_values)
regressionTest = convertDataNeuralNetwork(test_data, test_values)

fnn = FeedForwardNetwork()

inLayer = LinearLayer(regressionTrain.indim)
hiddenLayer = LinearLayer(5)
outLayer = GaussianLayer(regressionTrain.outdim)

fnn.addInputModule(inLayer)
fnn.addModule(hiddenLayer)
fnn.addOutputModule(outLayer)

in_to_hidden = FullConnection(inLayer, hiddenLayer)
hidden_to_out = FullConnection(hiddenLayer, outLayer)

fnn.addConnection(in_to_hidden)
fnn.addConnection(hidden_to_out)

fnn.sortModules()

trainer = BackpropTrainer(fnn, dataset=regressionTrain, momentum=0.1, verbose=True, weightdecay=0.01)

for i in range(10):
    trainer.trainEpochs(5)
    res = trainer.testOnClassData(dataset=regressionTest)
    print(res)
When I print res, all of its values are 0. I've tried to use the buildNetwork function as a shortcut to build the network, but it didn't work either. I've also tried different kinds of layers and different numbers of nodes in the hidden layer, with no luck.
Does somebody have any idea of what I am doing wrong? Also, some pybrain regression examples would really help! I couldn't find any when I looked.
Thanks in advance
pybrain.tools.neuralnets.NNregression is a tool which "learns to numerically predict the targets of a set of data, with optional online progress plots", so it seems like something well suited for constructing a neural network for your regression task.
I think there could be a couple of things going on here.
First, I'd recommend using a different configuration of layer activations than what you're using. In particular, for starters, try to use sigmoidal nonlinearities for the hidden layers in your network, and linear activations for the output layer. This is by far the most common setup for a typical supervised network and should help you get started.
The second thing that caught my eye is that you have a relatively large value for the weightDecay parameter in your trainer (though what constitutes "relatively large" depends on the natural scale of your input and output values). I would remove that parameter for starters, or set its value to 0. The weight decay is a regularizer that will help prevent your network from overfitting, but if you increase the value of that parameter too much, your network weights will all go to 0 very quickly (and then your network's gradient will be basically 0, so learning will halt). Only set weightDecay to a nonzero value if your performance on a validation dataset starts to decrease during training.
As originally pointed out by Ben Allison, for the network to be able to approximate arbitrary values (i.e. not necessarily in the range 0..1) it is important not to use an activation function with limited output range in the final layer. A linear activation function for example should work well.
Here is a simple regression example built from the basic elements of pybrain:
#----------
# build the dataset
#----------
from pybrain.datasets import SupervisedDataSet
import numpy, math

xvalues = numpy.linspace(0, 2 * math.pi, 1001)
yvalues = 5 * numpy.sin(xvalues)

ds = SupervisedDataSet(1, 1)
for x, y in zip(xvalues, yvalues):
    ds.addSample((x,), (y,))

#----------
# build the network
#----------
from pybrain.structure import SigmoidLayer, LinearLayer
from pybrain.tools.shortcuts import buildNetwork

net = buildNetwork(1,
                   100,  # number of hidden units
                   1,
                   bias=True,
                   hiddenclass=SigmoidLayer,
                   outclass=LinearLayer
                   )

#----------
# train
#----------
from pybrain.supervised.trainers import BackpropTrainer
trainer = BackpropTrainer(net, ds, verbose=True)
trainer.trainUntilConvergence(maxEpochs=100)

#----------
# evaluate
#----------
import pylab

# neural net approximation
pylab.plot(xvalues,
           [net.activate([x]) for x in xvalues], linewidth=2,
           color='blue', label='NN output')

# target function
pylab.plot(xvalues,
           yvalues, linewidth=2, color='red', label='target')

pylab.grid()
pylab.legend()
pylab.show()
A side remark (since in your code example you have a hidden layer with linear activation functions): In any hidden layer, linear functions are not useful because:
the weights at the input side to this layer form a linear transformation
the activation function is linear
the weights at the output side to this layer form a linear transformation
which can be reduced to a single linear transformation, i.e. the corresponding layer may as well be eliminated without any reduction in the set of functions which can be approximated. An essential point of neural networks is that the activation functions in the hidden layers are non-linear.
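Written out, two consecutive linear layers collapse into one:
W_2 (W_1 x + b_1) + b_2 = (W_2 W_1) x + (W_2 b_1 + b_2) = W' x + b'
so the hidden layer adds no expressive power.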
As Andre Holzner explained, the hidden layer should be nonlinear. Andre's code example is great, but it doesn't work well when you have more features and not so much data. In his case, because of the large hidden layer, we get quite a good approximation, but when you are dealing with more complex data, a linear function in the output layer alone is not enough; you should normalize the features and the target to be in the range [0..1].
