Neural network from scratch in Python using sigmoid activation - python

I am new to Python and trying to learn machine learning in Python. I have tried to write a neural network from scratch with one hidden layer on the famous Iris dataset. It is a three-class classifier whose output is one-hot vectors. I have also taken help from already-written implementations; for instance, I used the same set for training and testing.
It is a lot of code to go through, so my main question is: how do we subtract the label output 'y' (a one-hot matrix of dimensions (150, 3)) from my softmax output, which has shape (150, 21)? This is my biggest problem. I tried to look online and everyone uses this method, but since I am weak in Python I don't understand it. This is the line of code:
delta3[range(m1), y] -= 1
If m1 is the size (150), I get
arrays used as indices must be of integer (or boolean) type
and if I give it the size (150, 3), then
delta3[range(m1), y] -= 1
TypeError: range() integer end argument expected, got tuple.
Remember: m1 = 150, my y matrix is (150, 3), and my softmax is (150, 21).
My code is:
#labels or classes
#1=iris-setosa
#2=iris-versicolor
#0=iris-virginica
#features
#sepallength
#sepalwidth
#petallengthcm
#petalwidth
import pandas as pd
import matplotlib.pyplot as plt
import csv
import numpy as np

df=pd.read_csv('Iris.csv')
df.convert_objects(convert_numeric=True)
df.fillna(0,inplace=True)
df.drop(['Id'],1,inplace=True)

#function to convert three labels into values 0,1,2
def handle_non_numericaldata(df):
    columns=df.columns.values
    for column in columns:
        text_digit_vals={}
        def convert_to_int(val):
            return text_digit_vals[val]
        if df[column].dtype!=np.int64 and df[column].dtype!=np.float:
            column_contents=df[column].values.tolist()
            unique_elements=set(column_contents)
            x=0
            for unique in unique_elements:
                if unique not in text_digit_vals:
                    text_digit_vals[unique]=x
                    x+=1
            df[column]=list(map(convert_to_int,df[column]))
    return(df)

handle_non_numericaldata(df)

x=np.array(df.drop(['Species'],1).astype(float))
c=np.array(df['Species'])
n_values=(np.max(c)+1)
y=(np.eye(n_values)[c])
m1=np.size(c)
theta=np.ones(shape=(4,1))
theta2=np.ones(shape=(1,21))
#no of examples "m"
#learning rate alpha
alpha=0.01
#regularization parameter
lamda=0.01

for i in range(1,1000):
    z1=np.dot(x,theta)
    sigma=1/(1+np.exp(-z1))
    #activation layer 2.
    a2=sigma
    z2=np.dot(a2,theta2)
    probs=np.exp(z2)
    softmax=probs/np.sum(probs,axis=1,keepdims=True)
    delta3=softmax
    delta3[range(m1), y] -= 1
    A2=np.transpose(a2)
    dw2 = (A2).dot(delta3)
    W2=np.transpose(theta2)
    delta2=delta3.dot(W2)*sigma*(1-sigma)
    X2=np.transpose(x)
    dw1=np.dot(X2,delta2)
    dw2=dw2-lamda*theta2
    dw1=dw1-lamda*theta
    theta =theta -alpha* dw1
    theta2= theta2-alpha * dw2
    correct_logprobs=0
    correct_logprobs=correct_logprobs-np.log(probs[range(m1),y])
    data_loss=np.sum(correct_logprobs)
    data_loss+=lamda/2*(np.sum(np.square(theta))+ np.square(theta2))
    loss=1./m1*data_loss
    if 1000%i==0:
        print("loss after iteration%i:%f",loss)

final1=x.dot(theta)
sigma=1/(1+np.exp(-final1))
z2=sigma.dot(theta2)
exp_scores=np.exp(z2)
probs=exp_scores/np.sum(exp_scores,axis=1,keepdims=True)
print(np.argmax(probs,axis=1))

In Python, range(x, y) generates the sequence of integers from x up to (but not including) y. range(10) is the same as 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. Lists in Python take a single integer index such as list[0] or list[4], not list[0, 4]; however, Python has built-in slicing that gives access from index x to index y with the syntax list[0:4]. This returns every value at indices 0 through 3, so if list = [0,10,3,4,12,5,3] then list[0:4] returns [0,10,3,4].
Try taking a look at list data structures in the Python docs, as well as at how generators work in Python.
I think what you're looking for is something like this: delta3 = [[z-1 for z in delta3[x:y]] for x in range(m1)]. This list comprehension combines two patterns: [x-1 for x in l], which subtracts one from every element of a list, and [l[x:y] for x in range(m)], which builds a list of slices. Though I'm not sure I fully understand what your end goal is.
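As an aside, with NumPy arrays the original delta3[range(m1), y] -= 1 line relies on integer fancy indexing, so it only works when y holds integer class labels of shape (150,), not one-hot rows. A minimal sketch, assuming the output layer has one column per class (3 for Iris rather than 21), showing that the fancy-indexing form and a plain one-hot subtraction give the same gradient:

import numpy as np

m1 = 150          # number of examples
n_classes = 3     # Iris has 3 classes
rng = np.random.default_rng(0)

# fake softmax output: one row per example, one column per class
softmax = rng.random((m1, n_classes))
softmax /= softmax.sum(axis=1, keepdims=True)

y_int = rng.integers(0, n_classes, size=m1)   # integer labels, shape (150,)
y_onehot = np.eye(n_classes)[y_int]           # one-hot labels, shape (150, 3)

# form 1: fancy indexing with integer class labels
delta3_a = softmax.copy()
delta3_a[np.arange(m1), y_int] -= 1

# form 2: plain elementwise subtraction with one-hot labels
delta3_b = softmax - y_onehot

assert np.allclose(delta3_a, delta3_b)

Either way, this presupposes that the second weight matrix maps the hidden layer to 3 output columns (one per class); with a (1, 21)-shaped theta2 the softmax has 21 columns and the one-hot subtraction cannot line up.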

What is a Neural Network?
The term 'neural' comes from the nervous system's basic functional unit, the neuron, i.e. the nerve cells found in the brain and other parts of the human (animal) body. A neural network is a group of algorithms that captures the underlying relationships in a set of data, loosely in the way the human brain does. The network learns to transform the input so that it gives the best result without the output procedure having to be redesigned.
Now, a code example:
import numpy as np

#assign input values
input_value=np.array([[0.26,0.77,0.25],[0.42,0.8,0.25],[0.56,0.53,0.25],[0.29,0.79,0.25]])
input_value.shape

#assign output values
output=np.array([0.644045,0.651730,0.707523,0.644395])
output=output.reshape(4,1)
output

#assign weights
weights=np.array([[0.1],[0.1],[0.1]])
weights.shape
weights

#add bias
bias=0.3

#activation function
def sigmoid_func(x):
    return 1/(1+np.exp(-x))

#derivative of sigmoid function
def der(x):
    return sigmoid_func(x)*(1-sigmoid_func(x))

#updating weights
for epochs in range(10000):
    input_arr=input_value
    #print(input_arr)
    weighted_sum=np.dot(input_arr,weights)+bias
    ### CALCULATION OF PRE ACTIVATION FUNCTION
    first_output=sigmoid_func(weighted_sum)
    #print(first_output)
    error=first_output - output
    #print(error)
    total_error=np.square(np.subtract(first_output,output)).mean()
    #print total error
    first_der=error
    second_der=der(first_output)
    derivative=first_der*second_der
    t_input=input_value.T
    final_derivative=np.dot(t_input,derivative)
    #update weights
    weights=weights-0.05*final_derivative
    #update bias
    for i in derivative:
        bias=bias-0.05*i

print(weights)
print(bias)

#prediction for 1st item
pred=np.array([0.26,0.77,0.25])
result=np.dot(pred,weights)+bias
res=sigmoid_func(result)
print(res)

#prediction for 2nd item
pred=np.array([0.42,0.8,0.25])
result=np.dot(pred,weights)+bias
res=sigmoid_func(result)
print(res)

#prediction for 3rd item
pred=np.array([0.56,0.53,0.25])
result=np.dot(pred,weights)+bias
res=sigmoid_func(result)
print(res)

#prediction for 4th item
pred=np.array([0.29,0.79,0.25])
result=np.dot(pred,weights)+bias
res=sigmoid_func(result)
print(res)

Related

Newbie: Conceptual Understanding of W and U in RNNs

First Post, so please go easy on me :) Please post any comments regarding my questioning and forum skills, they will be gratefully received!
I'm trying to understand the matrix sizes and manipulations that make up an RNN. I'll go through what I understand already so hopefully we're all on the same page. (Alternatively you can TL;DR down to the question at the bottom)
X_Sets is a 2D array which has some sine wave values, and Y_Sets is a 1D array which holds the next sine wave value in the sequence for each record. The goal here is to accurately predict what the next value of the sine wave will be.
Initial Values:
learning_rate = 0.0001
nepoch = 25
T = 50 # sequence length
hidden_dim = 100
output_dim = 1
U = np.random.uniform(0, 1, (hidden_dim, T))
W = np.random.uniform(0, 1, (hidden_dim, hidden_dim))
V = np.random.uniform(0, 1, (output_dim, hidden_dim))
Here's a snippet of the code I'm working with at the moment, its part of the forward propagation function. explanations in comments.
for i in range(Y_Sets.shape[0]):
    #select the first record from both data sets and print out the sizes for all to see
    x, y = X_Sets[i], Y_Sets[i]
    print(Y_Sets.shape) #(100, 1)
    print(X_Sets.shape) #(100, 50, 1)
    print(x.shape)      #(50, 1)
    print(y.shape)      #(1,)

    #clear the prev_s values as the computed hidden values will be different for each record.
    prev_s = np.zeros((hidden_dim, 1))

    #loop for one record.
    for t in range(T):
        #new input array is 0'd every loop
        new_input = np.zeros(x.shape)
        #we only fill the array in the t'th position, everything else is 0
        new_input[t] = x[t]
        #See Question
        mulu = np.dot(U, new_input)
        #Same issue here
        mulw = np.dot(W, prev_s) #why is W a 2D matrix?
        add = mulw + mulu
        s = sigmoid(add)
        mulv = np.dot(V, s)
        prev_s = s
Question:
I understand that there are 100 hidden layers and that every hidden layer will have its own U, so it makes sense to multiply each individual x[t] by a column of U. But on the next loop round, t will be 2, and x[2] will be in the 2nd column, which will be dot-producted with a different set of 100 Us.
Now, I was led to believe that the whole point of RNNs is that they are efficient because U, V and W are constant over the entire sequence, whereas here we can see that they differ over the sequence.
Why?
Edit: Here's the Guide I'm following: https://www.analyticsvidhya.com/blog/2019/01/fundamentals-deep-learning-recurrent-neural-networks-scratch-python/
I think you are mistaken. First of all, there is only one hidden layer with 100 nodes. Second, U is not changing after every time step; from the code snippet it looks like U is fixed, and it will probably be updated only after the whole sequence has been seen. The same goes for V and W. I don't see the update equations here.
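To illustrate that point, here is a minimal forward-pass sketch (toy shapes mirroring the snippet, with made-up sequence data): the same U, W and V are reused at every time step, and only the input slice and the hidden state change from step to step.

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

T, hidden_dim, output_dim = 50, 100, 1
rng = np.random.default_rng(0)

# one set of parameters, created once and shared by every time step
U = rng.uniform(0, 1, (hidden_dim, T))
W = rng.uniform(0, 1, (hidden_dim, hidden_dim))
V = rng.uniform(0, 1, (output_dim, hidden_dim))

x = rng.normal(size=(T, 1))          # a single sequence of length T
prev_s = np.zeros((hidden_dim, 1))   # hidden state

for t in range(T):
    new_input = np.zeros(x.shape)
    new_input[t] = x[t]                                     # only the input slice changes
    s = sigmoid(np.dot(U, new_input) + np.dot(W, prev_s))   # same U and W at every step
    mulv = np.dot(V, s)                                     # same V at every step
    prev_s = s

# U, W and V are never modified inside the loop; any gradient update would happen
# after the sequence has been processed, in a separate backward pass.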

Python2: Fitting a multi-parameter sum of functions with scipy.optimize.curve_fit

This is my first post on Stack Overflow so please be patient if any info is missing.
I am trying to fit a function to data using Python 2.7.15 (Ubuntu 18.04) with scipy.optimize.curve_fit(). The fitting function consists of a sum of a variable number of exponentials, whose parameters are passed through the *args parameter of my fitting function.
I have tried passing vectors of parameters to my fitting function. Unfortunately, it seems like the sum of exponentials I build with a 'for' loop is actually interpreted as a numpy.ndarray, where it should be a single value to be returned to the fitting algorithm.
Find below a (simplified) example of what I tried:
import numpy as np
import scipy
import math
from scipy import optimize

# Fitting function:
def fitFuncTau(amplitude, nFit, t, *args):
    C0=args[0]
    C=list(args[1:(nFit+1)])
    tau=list(args[(nFit+1):(2*nFit+2)])
    sumFit=0
    for i in range(0, nFit):
        sumFit+=C[i]*np.exp(-t/tau[i])
    print sumFit
    return C0+amplitude*sumFit

#Fitting args: C0 parameter, then two lists C[] and tau[] (size nFit)
fitArgs=[1, 0.01, 0.01, 0.1, 0.1]
nFit=2
amplitude=1

# Dummy fitting data
x=np.linspace(0, 4, 100)
np.random.seed(1729)
y=np.random.normal(size=x.size)

#Fit
wrapFunc=lambda t, *args: fitFuncTau(amplitude, nFit, t, *args)
fit_opt, fit_cov = scipy.optimize.curve_fit(wrapFunc, x, y, p0=fitArgs)
Any help would be much appreciated!
Try using your fitFuncTau function standalone. fitFuncTau(1, 2, 3, 4, 5, 6, 7, 8) (or whatever values you want to provide to fill the correct number of parameters) prints just a number, not a list.
I cannot find any doc or reference to prove it, but I guess it is just a printing optimization done by curve_fit().
All the print calls due to each element of x are collected in a list and the list is printed. If you check, the length of the printed list is the same as that of your x array (100 in your case).
It should not affect the result of the fit. Check if the values in fit_opt are reasonable.
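For what it's worth, a standalone sketch (my own toy example with made-up parameter values, not the question's exact setup): curve_fit evaluates the model on the whole x array at once, so inside the function t is an array and the sum of exponentials is an array of model values, which is what the least-squares routine compares against y.

import numpy as np
from scipy import optimize

# Two-exponential model, same shape as fitFuncTau with nFit=2 and amplitude=1
def model(t, C0, C1, C2, tau1, tau2):
    sum_fit = C1 * np.exp(-t / tau1) + C2 * np.exp(-t / tau2)
    # t is the full x array here, so sum_fit is an array of model values
    return C0 + sum_fit

# Synthetic data drawn from the model itself, so the fit has something to find
x = np.linspace(0, 4, 100)
np.random.seed(1729)
y = model(x, 0.2, 0.8, 0.5, 0.5, 2.0) + 0.05 * np.random.normal(size=x.size)

popt, pcov = optimize.curve_fit(model, x, y, p0=[0.5, 1.0, 0.3, 0.4, 3.0])
print(popt)  # fitted C0, C1, C2, tau1, tau2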

What is the purpose of keras utils normalize?

I'd like to normalize my training set before passing it to my NN, so instead of doing it manually (subtract the mean and divide by the std), I tried keras.utils.normalize() and I am surprised by the results I got.
Running this:
r = np.random.rand(3000) * 1000
nr = normalize(r)
print(np.mean(r))
print(np.mean(nr))
print(np.std(r))
print(np.std(nr))
print(np.min(r))
print(np.min(nr))
print(np.max(r))
print(np.max(nr))

Results in:
495.60440066771866
0.015737914577213984
291.4440194021
0.009254802974329002
0.20755517410064872
6.590913227674956e-06
999.7631481267636
0.03174747238214018
Unfortunately, the docs don't explain what's happening under the hood. Can you please explain what it does and if I should use keras.utils.normalize instead of what I would have done manually?
It is not the kind of normalization you expect. Actually, it uses np.linalg.norm() under the hood to normalize the given data using Lp-norms:
def normalize(x, axis=-1, order=2):
    """Normalizes a Numpy array.

    # Arguments
        x: Numpy array to normalize.
        axis: axis along which to normalize.
        order: Normalization order (e.g. 2 for L2 norm).

    # Returns
        A normalized copy of the array.
    """
    l2 = np.atleast_1d(np.linalg.norm(x, order, axis))
    l2[l2 == 0] = 1
    return x / np.expand_dims(l2, axis)
For example, in the default case it normalizes the data using L2 normalization (i.e. the sum of the squared elements along the last axis equals one).
You can either use this function, or if you don't want to do mean and std normalization manually, you can use StandardScaler() from sklearn or even MinMaxScaler().
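If it helps, here is a small comparison sketch (my own example, not from the Keras docs) of the two behaviours: L2 normalization rescales each sample so its squared elements sum to one, while mean/std standardization centers and scales each feature.

import numpy as np
from sklearn.preprocessing import StandardScaler

x = np.random.rand(5, 3) * 1000

# L2 normalization along the last axis (what the quoted normalize() does by default)
l2 = np.linalg.norm(x, ord=2, axis=-1, keepdims=True)
x_l2 = x / l2
print(np.sum(x_l2 ** 2, axis=-1))             # each row's squared elements sum to 1

# mean/std standardization per feature (column)
x_std = StandardScaler().fit_transform(x)
print(x_std.mean(axis=0), x_std.std(axis=0))  # means ~0, stds ~1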

Neural network from scratch using one hidden layer and sigmoid activation

I am making a neural network from scratch for practice, and I am not a very experienced Python programmer.
I know most of the maths concepts behind neural networks, but my model is not behaving well. The derivative of the sigmoid function is h(x)*(1-h(x)), yet I am not sure that line of code is correct; I searched on Google and everyone uses the tanh activation. I am also not really sure about delta2, and I have no idea where my code is going wrong. I also have doubts about how we subtract the prediction from the label (y); this is a three-class classifier.
delta3[range(m1), y] -= 1
This line of code is also not clear to me. I copied it from online, just putting my m1 (total number of examples) into it, because my y matrix (labels in the form 0, 1, 2) is a vector of order (150, 1) and the prediction matrix is (150, 21), so how do we subtract them?
#labels or classes
#1=iris-setosa
#2=iris-versicolor
#0=iris-virginica
#features
#sepallength
#sepalwidth
#petallengthcm
#petalwidth
import pandas as pd
import matplotlib.pyplot as plt
import csv
import numpy as np

df=pd.read_csv('Iris.csv')
df.convert_objects(convert_numeric=True)
df.fillna(0,inplace=True)
df.drop(['Id'],1,inplace=True)

#function to convert three labels into values 0,1,2
def handle_non_numericaldata(df):
    columns=df.columns.values
    for column in columns:
        text_digit_vals={}
        def convert_to_int(val):
            return text_digit_vals[val]
        if df[column].dtype!=np.int64 and df[column].dtype!=np.float:
            column_contents=df[column].values.tolist()
            unique_elements=set(column_contents)
            x=0
            for unique in unique_elements:
                if unique not in text_digit_vals:
                    text_digit_vals[unique]=x
                    x+=1
            df[column]=list(map(convert_to_int,df[column]))
    return(df)

handle_non_numericaldata(df)

x=np.array(df.drop(['Species'],1).astype(float))
y=np.array(df['Species'])
m1=np.size(y)
theta=np.ones(shape=(4,1))
theta2=np.ones(shape=(1,21))
#no of examples "m"
#learning rate alpha
alpha=0.01
#regularization parameter
lamda=0.01

for i in range(1,2):
    z1=np.dot(x,theta)
    sigma=1/(1+np.exp(-z1))
    #activation layer 2.
    a2=sigma
    z2=np.dot(a2,theta2)
    probs=np.exp(z2)
    softmax=probs/np.sum(probs,axis=1,keepdims=True)
    delta3=softmax
    print(softmax)
    delta3[range(m1), y] -= 1
    A2=np.transpose(a2)
    dw2 = (A2).dot(delta3)
    W2=np.transpose(theta2)
    delta2=delta3.dot(W2)*sigma*(1-sigma)
    X2=np.transpose(x)
    dw1=np.dot(X2,delta2)
    dw2=dw2-lamda*theta2
    dw1=dw1-lamda*theta
    theta =theta -alpha* dw1
    theta2= theta2-alpha * dw2
    correct_logprobs=-np.log(probs[range(m1),y])
    data_loss=np.sum(correct_logprobs)
    data_loss+=lamda/2*(np.sum(np.square(theta))+ np.square(theta2))
    ( 1./m1*data_loss)
My output for theta (weights) is
[[ 1.22833047]
[ 1.22591229]
[ 1.22341726]
[ 1.22162091]]
which obviously cannot be correct.

Neural Network predictions are always the same while testing an fMRI dataset with pyBrain. Why?

I am quite new to fMRI analysis. I am trying to determine which object (out of 9 objects) a person is thinking about just by looking at their brain images. I am using the dataset at https://openfmri.org/dataset/ds000105/ . So I am using a neural network, feeding in 2D slices of brain images and getting one of the 9 objects as the output. There are details about every step and the images in the code below.
import os, mvpa2, pyBrain
import numpy as np
from os.path import join as opj
from mvpa2.datasets.sources import OpenFMRIDataset
from pybrain.datasets import SupervisedDataSet,classification

path = opj(os.getcwd() , 'datasets','ds105')
of = OpenFMRIDataset(path)

#12th run of the 1st subject
ds = of.get_model_bold_dataset(model_id=1, subj_id=1,run_ids=[12])
#Get the unique list of 8 objects (scissors, ...) and 'None'.
target_list = np.unique(ds.sa.targets).tolist()
#Returns Nibabel Image instance
img = of.get_bold_run_image(subj=1,task=1,run=12)
# Getting the actual image from the proxy image
img_data = img.get_data()
#Get the middle voxels of the brain samples
mid_brain_slices = [x/2 for x in img_data.shape]

# Each image in img_data is a 3D image of 40 x 64 x 64 voxels,
# and there are 121 such samples taken periodically every 2.5 seconds.
# Thus, a single person's brain is scanned for about 300 seconds (121 x 2.5).
# This is a 4D array of 3 dimensions of space and 1 dimension of time,
# which forms a matrix of (40 x 64 x 64 x 121).
# I only want to extract the 2D slice of the brain in its top view,
# i.e. a series of 2D images of 40 x 64.
# So I take the middle slice of the brain, hence compute mid_brain_slices.
DS = classification.ClassificationDataSet(40*64, class_labels=target_list)

# Loop over every brain image
for i in range(0,121):
    #Image of brain at the i-th time interval
    brain_instance = img_data[:,:,:,i]
    # We will slice the brain to create 2D plots and use those 'pixels'
    # as the features
    slice_0 = img_data[mid_brain_slices[0],:,:,i] #64 x 64
    slice_1 = img_data[:,mid_brain_slices[1],:,i] #40 x 64
    slice_2 = img_data[:,:,mid_brain_slices[2],i] #40 x 64
    #Note : we may actually only need one of these slices (the one with top view)
    X = slice_2 #Possibly top view
    # Reshape X from 40 x 64 to 1D vector 2560 x 1
    X = np.reshape(X,40*64)
    #Get the target at this instance (y)
    y = ds.sa.targets[i]
    y = target_list.index(y)
    DS.appendLinked(X,y)

print DS.calculateStatistics()
print DS.classHist
print DS.nClasses
print DS.getClass(1)

# Generate y as a 9 x 1 matrix with eight 0's and only one 1 (in this training set)
DS._convertToOneOfMany(bounds=[0, 1])

#Split into Train and Test sets
test_data, train_data = DS.splitWithProportion( 0.25 )
#Note : I think splitWithProportion will also internally shuffle the data

#Build neural network
from pybrain.tools.shortcuts import buildNetwork
from pybrain.structure.modules import SoftmaxLayer
nn = buildNetwork(train_data.indim, 64, train_data.outdim, outclass=SoftmaxLayer)
from pybrain.supervised.trainers import BackpropTrainer
trainer = BackpropTrainer(nn, dataset=train_data, momentum=0.1, learningrate=0.01 , verbose=True, weightdecay=0.01)
trainer.trainUntilConvergence(maxEpochs = 20)
The line nn.activate(X_test[i]) should take the 2560 inputs and generate a probability output in the predicted y vector (shape 9 x 1), right?
So I assume the highest of the 9 values should be taken as the answer, but that is not the case when I verify it with y_test[i]. Furthermore, I get similar values for every test sample. Why is this so?
#Just splitting the test and trainset
X_train = train_data.getField('input')
y_train = train_data.getField('target')
X_test = test_data.getField('input')
y_test = test_data.getField('target')

#Testing the network
for i in range(0,len(X_test)):
    print nn.activate(X_test[i])
    print y_test[i]
When I include the code above, here are some of the printed values for the test samples:
.
.
.
nn.activated = [ 0.44403205 0.06144328 0.04070154 0.09399672 0.08741378 0.05695479 0.08178353 0.0623408 0.07133351]
y_test [0 1 0 0 0 0 0 0 0]
nn.activated = [ 0.44403205 0.06144328 0.04070154 0.09399672 0.08741378 0.05695479 0.08178353 0.0623408 0.07133351]
y_test [1 0 0 0 0 0 0 0 0]
nn.activated = [ 0.44403205 0.06144328 0.04070154 0.09399672 0.08741378 0.05695479 0.08178353 0.0623408 0.07133351]
y_test [0 0 0 0 0 0 1 0 0]
.
.
.
So the probability of the test sample being index 0 is 44.4% in every case, irrespective of the sample value. The actual values keep varying, though.
print 'print predictions: ' , trainer.testOnClassData (dataset=test_data)
x = []
for item in y_test:
    x.extend(np.where(item == 1)[0])
print 'print actual: ' , x
Here, the output comparison is :
print predictions: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
print actual: [7, 0, 4, 8, 2, 0, 2, 1, 0, 6, 1, 4]
All the predictions are for the first item. I don't know what the problem is. The total error seems to be decreasing, which is a good sign though :
Total error: 0.0598287764931
Total error: 0.0512272330797
Total error: 0.0503835076374
Total error: 0.0486402801867
Total error: 0.0498354140541
Total error: 0.0495447833038
Total error: 0.0494208449895
Total error: 0.0491162599037
Total error: 0.0486775862084
Total error: 0.0486638648161
Total error: 0.0491337891419
Total error: 0.0486965691406
Total error: 0.0490016912735
Total error: 0.0489939195858
Total error: 0.0483910986235
Total error: 0.0487459940103
Total error: 0.0485516142106
Total error: 0.0477407360102
Total error: 0.0490661144891
Total error: 0.0483103097669
Total error: 0.0487965594586
I can't be sure -- because I haven't used all of these tools together before, or worked specifically in this kind of project -- but I would look at the documentation and be sure that your nn is being created as you expect it to.
Specifically, it mentions here:
http://pybrain.org/docs/api/tools.html?highlight=buildnetwork#pybrain.tools.shortcuts.buildNetwork
that "If the recurrent flag is set, a RecurrentNetwork will be created, otherwise a FeedForwardNetwork.", and you can read here:
http://pybrain.org/docs/api/structure/networks.html?highlight=feedforwardnetwork
that "FeedForwardNetworks are networks that do not work for sequential data. Every input is treated as independent of any previous or following inputs.".
Did you mean to create a "FeedForward" network object?
You're testing by looping over an index and activating each "input" field that's based off the instantiation of a FeedForwardNetwork object, which the documentation suggests are treated as independent of other inputs. This may be why you're getting such similar results each time, when you are expecting better convergences.
You initialize your dataset ds object with the parameters model_id=1, subj_id=1,run_ids=[12], suggesting that you're only looking at a single subject and model, but 12 "runs" from that subject under that model, right?
Most likely there's nothing semantically or grammatically wrong with your code, but a general confusion from the PyBrain library's presumed and assumed models, parameters, and algorithms. So don't tear your hair out looking for code "errors"; this is definitely a common difficulty with under-documented libraries.
Again, I may be off base, but in my experience with similar tools and libraries, it's most often that the benefit of taking an extremely complicated process and simplifying it to just a couple dozen lines of code, comes with a TON of completely opaque and fixed assumptions.
My guess is that you're essentially re-running "new" tests on "new" or independent training data, without all the actual information and parameters that you thought you had set up in the previous code lines. You are exactly correct that the highest value (read: largest probability) is the "most likely" (that's exactly what each value is, a "likeliness") answer, especially if your probability array represents a unimodal distribution.
Because there are no obvious code syntax errors -- like accidentally looping over a range iterator equivalent to the list [0,0,0,0,0,0], which you can rule out because you reuse the i index both in printing y_test (which varies) and in nn.activate(X_test[i]) (which doesn't) -- most likely what's happening is that you're basically restarting your test every time, and that's why you're getting an identical result, not just similar but identical, for every printout of that nn.activate(...) method's results.
This is a complex, but very well written and well illustrated question, but unfortunately I don't think there will be a simple or blatantly obvious solution.
Again, you're getting the benefits of PyBrain's simplification of neural networks, data training, heuristics, data reading, sampling, statistical modelling, classification, and so on and so forth, all reduced into single-line or two-line commands. There are assumptions being made, TONS of them. That's what the documentation needs to be illuminating, and we have to be very, very careful when we use tools like these: it's not just a matter of correct syntax, but of an actually correct (read: expected) algorithm, assumptions and all.
Good luck!
(P.S. -- Open source libraries also, despite a lack of documentation, give you the benefit of checking the source code to see [assumptions and all] what they're actually doing: https://github.com/pybrain/pybrain )
