Hello, this is my first time working with TensorFlow. I am trying to adapt the multilayer perceptron example from TensorFlow-Examples to a regression problem on the Boston housing dataset. Basically, I only changed the cost function, the dataset, the number of inputs, and the number of targets, but when I run it the MLP doesn't converge (even with a very low learning rate). I tested it with the Adam optimizer and with gradient descent optimization, and I get the same behavior.
I appreciate your suggestions and ideas!
Observation: when I run this program without the modifications described above, the cost function value always decreases.
Here is the evolution when I run the model: the cost function oscillates even with a very low learning rate. At worst, I would expect the model to converge to some value; for example, epoch 944 shows 0.2267548, and if no better value is found, that value should hold until the optimization finishes.
Epoch: 0942 cost= 0.445707272
Epoch: 0943 cost= 0.389314095
Epoch: 0944 cost= 0.226754842
Epoch: 0945 cost= 0.404150135
Epoch: 0946 cost= 0.382190095
Epoch: 0947 cost= 0.897880572
Epoch: 0948 cost= 0.481954243
Epoch: 0949 cost= 0.269408980
Epoch: 0950 cost= 0.427961614
Epoch: 0951 cost= 1.206053280
Epoch: 0952 cost= 0.834200084
from __future__ import print_function
# Import MNIST data
#from tensorflow.examples.tutorials.mnist import input_data
#mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
import tensorflow as tf
import ToolInputData as input_data
ALL_DATA_FILE_NAME = "boston_normalized.csv"
## Load the complete dataset, which is then split into training, validation and test sets
completedDatabase = input_data.Databases(databaseFileName=ALL_DATA_FILE_NAME, targetLabel="MEDV", trainPercentage=0.70, valPercentage=0.20, testPercentage=0.10,
randomState=42, inputdataShuffle=True, batchDataShuffle=True)
# Parameters
learning_rate = 0.0001
training_epochs = 1000
batch_size = 5
display_step = 1
# Network Parameters
n_hidden_1 = 10 # 1st layer number of neurons
n_hidden_2 = 10 # 2nd layer number of neurons
n_input = 13 # number of features of my database
n_classes = 1 # one target value (float)
# tf Graph input
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])
# Create model
def multilayer_perceptron(x, weights, biases):
# Hidden layer with RELU activation
layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
layer_1 = tf.nn.relu(layer_1)
# Hidden layer with RELU activation
layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
layer_2 = tf.nn.relu(layer_2)
# Output layer with linear activation
out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
return out_layer
# Store layers weight & bias
weights = {
'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
}
biases = {
'b1': tf.Variable(tf.random_normal([n_hidden_1])),
'b2': tf.Variable(tf.random_normal([n_hidden_2])),
'out': tf.Variable(tf.random_normal([n_classes]))
}
# Construct model
pred = multilayer_perceptron(x, weights, biases)
# Define loss and optimizer
cost = tf.reduce_mean(tf.square(pred-y))
#cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
# Initializing the variables
init = tf.initialize_all_variables()
# Launch the graph
with tf.Session() as sess:
sess.run(init)
# Training cycle
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(completedDatabase.train.num_examples/batch_size)
# Loop over all batches
for i in range(total_batch):
batch_x, batch_y = completedDatabase.train.next_batch(batch_size)
# Run optimization op (backprop) and cost op (to get loss value)
_, c = sess.run([optimizer, cost], feed_dict={x: batch_x,
y: batch_y})
# Compute average loss
avg_cost += c / total_batch
# Display logs per epoch step
if epoch % display_step == 0:
print("Epoch:", '%04d' % (epoch+1), "cost=", \
"{:.9f}".format(avg_cost))
print("Optimization Finished!")
A couple of points.
Your model is quite shallow, being only two layers. Granted, you would need more data to train a larger model, and I don't know how much data there is in the Boston dataset.
What are your labels? Knowing that would better inform whether squared error is the right loss for your model.
Also, your learning rate is quite low. A hedged starting point is sketched below.
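For example, here are some hypothetical hyperparameter tweaks, reusing the names from your code; the exact values are guesses to experiment with, not a definitive recipe:
# Hypothetical starting values (adjust based on your validation results)
learning_rate = 0.001   # Adam's default; 0.0001 is likely too conservative here
n_hidden_1 = 32         # wider hidden layers than 10, still a small model
n_hidden_2 = 32
batch_size = 32         # larger batches give a less noisy per-epoch average cost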
You stated that your labels are in the range [0, 1], but I cannot see that the predictions are in the same range. To make them comparable to the labels, you should transform them to the same range before returning, for example using the sigmoid function:
out_layer = tf.matmul(...)
out = tf.sigmoid(out_layer)
return out
Maybe this fixes the stability problem. You might also want to increase the batch size a bit, for example to 20 examples per batch. If this improves performance, you can probably increase the learning rate a bit as well.
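For concreteness, here is a minimal sketch of that change, reusing the variable names from the question; the sigmoid output only makes sense because the labels are normalized to [0, 1]:
# Same two-layer MLP as in the question, but the output is squashed into [0, 1]
def multilayer_perceptron(x, weights, biases):
    layer_1 = tf.nn.relu(tf.add(tf.matmul(x, weights['h1']), biases['b1']))
    layer_2 = tf.nn.relu(tf.add(tf.matmul(layer_1, weights['h2']), biases['b2']))
    out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
    return tf.sigmoid(out_layer)   # predictions now live in the same range as y

pred = multilayer_perceptron(x, weights, biases)
cost = tf.reduce_mean(tf.square(pred - y))   # mean squared error, as before
With the output bounded, a batch size of around 20 and a slightly higher learning rate are reasonable next experiments.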
It's a simple model architecture based on this tutorial. The dataset would look like this, although in 10 dimensions:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras import layers, optimizers
from sklearn.datasets import make_blobs
def pre_processing(inputs, targets):
inputs = tf.cast(inputs, tf.float32)
targets = tf.cast(targets, tf.int64)
return inputs, targets
def get_data():
inputs, targets = make_blobs(n_samples=1000, n_features=10, centers=7, cluster_std=1)
data = tf.data.Dataset.from_tensor_slices((inputs, targets))
data = data.map(pre_processing)
data = data.take(count=1000).shuffle(buffer_size=1000).batch(batch_size=256)
return data
model = Sequential([
layers.Dense(8, input_shape=(10,), activation='relu'),
layers.Dense(16, activation='relu'),
layers.Dense(32, activation='relu'),
layers.Dense(7)])
#tf.function
def compute_loss(logits, labels):
return tf.reduce_mean(
tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=logits, labels=labels))
#tf.function
def compute_accuracy(logits, labels):
predictions = tf.argmax(logits, axis=1)
return tf.reduce_mean(tf.cast(tf.equal(predictions, labels), tf.float32))
#tf.function
def train_step(model, optim, x, y):
with tf.GradientTape() as tape:
logits = model(x)
loss = compute_loss(logits, y)
grads = tape.gradient(loss, model.trainable_variables)
optim.apply_gradients(zip(grads, model.trainable_variables))
accuracy = compute_accuracy(logits, y)
return loss, accuracy
def train(epochs, model, optim):
train_ds = get_data()
loss = 0.
acc = 0.
for step, (x, y) in enumerate(train_ds):
loss, acc = train_step(model, optim, x, y)
if step % 500 == 0:
print(f'Epoch {epochs} loss {loss.numpy()} acc {acc.numpy()}')
return loss, acc
optim = optimizers.Adam(learning_rate=1e-6)
for epoch in range(100):
loss, accuracy = train(epoch, model, optim)
Epoch 85 loss 2.530677080154419 acc 0.140625
Epoch 86 loss 3.3184046745300293 acc 0.0
Epoch 87 loss 3.138179063796997 acc 0.30078125
Epoch 88 loss 3.7781732082366943 acc 0.0
Epoch 89 loss 3.4101686477661133 acc 0.14453125
Epoch 90 loss 2.2888522148132324 acc 0.13671875
Epoch 91 loss 5.993691444396973 acc 0.16015625
What have I done wrong?
There are two problems in your code:
The first problem is that you are generating a new training dataset in each epoch (see the first line of the train function: get_data is called every epoch). Since you use sklearn.datasets.make_blobs to generate the data clusters, there is no guarantee that the clusters generated by different calls follow the same distribution and/or label mapping. Therefore, the best the model can do on a completely different dataset each epoch is a random guess (hence the average 1/7 ≈ 0.14 accuracy you see in the results). To fix this, move the data generation out of the train function, i.e. generate the data once at the global level by calling get_data, and pass the generated dataset to train as an argument each epoch.
The second problem is that you are using a very low learning rate (1e-6) for the optimizer, so the model is stuck and effectively not training at all. Instead, use the default learning rate of the Adam optimizer (1e-3), and change it only as needed (e.g. based on the results of experiments you perform). A sketch of both fixes follows.
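Here is a minimal sketch of both fixes, reusing get_data, train_step, model and optimizers exactly as defined in the question; only the place where the data is generated and the learning rate value change:
# Generate the dataset once, at the global level, so every epoch sees the
# same clusters and the same label mapping.
train_ds = get_data()

def train(epoch, model, optim, train_ds):
    loss, acc = 0., 0.
    for step, (x, y) in enumerate(train_ds):
        loss, acc = train_step(model, optim, x, y)
    print(f'Epoch {epoch} loss {loss.numpy()} acc {acc.numpy()}')
    return loss, acc

# Default Adam learning rate (1e-3) instead of 1e-6.
optim = optimizers.Adam(learning_rate=1e-3)
for epoch in range(100):
    loss, accuracy = train(epoch, model, optim, train_ds)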
I am in the process of learning about artificial neural networks using the Keras library, and to make sure I have a good understanding of the basics of neural network classification, I have been trying to reproduce a network written with Keras using only TensorFlow. However, I have run into some problems.
training_epochs = 100
n_input = 11
n_hidden_1 = 6
n_hidden_2 = 6
n_output = 1
classifier = Sequential()
classifier.add(Dense(output_dim=n_hidden_1, init='uniform', activation='relu', input_dim=n_input))
classifier.add(Dense(output_dim=n_hidden_2, init='uniform', activation='relu'))
classifier.add(Dense(output_dim=n_output, init='uniform', activation='sigmoid'))
classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
classifier.fit(X_train, y_train, batch_size=10, nb_epoch=training_epochs)
y_pred = classifier.predict(X_test)
y_pred = (y_pred > 0.5)
cm = confusion_matrix(y_test, y_pred)
print(cm)
So essentially I am using a neural network with 2 hidden layers of size 6, an input layer of size 11, and an output of size 1. My output uses the sigmoid function to generate probabilities in order to classify training data into binary categories. I tried to reproduce this with tensorflow as follows:
training_epochs = 100
n_input = 11
n_hidden_1 = 6
n_hidden_2 = 6
n_output = 1
def neuralNetwork(x, weights):
layer_1 = tf.matmul(x, weights['h1'])
layer_1 = tf.nn.relu(layer_1)
layer_2 = tf.matmul(layer_1, weights['h2'])
layer_2 = tf.nn.relu(layer_2)
output_layer = tf.matmul(layer_2, weights['output'])
return output_layer
weights = {
'h1': tf.Variable(tf.random_uniform([n_input, n_hidden_1])),
'h2': tf.Variable(tf.random_uniform([n_hidden_1, n_hidden_2])),
'output': tf.Variable(tf.random_uniform([n_hidden_2, n_output]))
}
x = tf.placeholder('float', [None, n_input]) # [?, 11]
y = tf.placeholder('float', [None, n_output]) # [?, 1]
logits = neuralNetwork(x, weights)
prediction = tf.nn.softmax(logits)
cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits,labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
init = tf.global_variables_initializer()
with tf.Session() as session:
session.run(init)
for epoch in range(training_epochs):
loss, accuracy = session.run([optimizer, cost], feed_dict={x:X_train, y:y_train})
print('Epoch: {} Acc: {}'.format(epoch+1, accuracy))
print('Model has completed training.')
However, I keep getting the error:
Cannot feed value of shape (8000,) for Tensor 'Placeholder_1:0', which has shape '(?, 1)'
My input data has 8000 rows with 11 columns and my output data has 8000 rows and 1 column. In order to try to reshape my data, I tried feeding it in row by row, but I kept getting more errors. Am I going about this the right way? Any help would be appreciated!
Edit: So I updated my code following the given suggestions. I am now getting output for accuracy; however, it seems to settle at around 4-5%. Furthermore, the accuracy also seems to decrease over time rather than improve. When I increase the number of training epochs to 200, the accuracy dips even lower (to around 2%).
Epoch: 1 Acc: 7.641509056091309
...
...
Epoch: 100 Acc: 4.339457035064697
I'm comparing the performance of Tensorflow with sklearn on two datasets:
A toy dataset in sklearn
MNIST dataset
Here is my code (Python):
from __future__ import print_function
# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
import tensorflow as tf
from sklearn.datasets import load_digits
import numpy as np
# digits = load_digits()
# data = digits.data
# labels = digits.target
# convert to binary labels
# y = np.zeros((labels.shape[0],10))
# y[np.arange(labels.shape[0]),labels] = 1
x_train = mnist.train.images
y_train = mnist.train.labels
x_test = mnist.test.images
y_test = mnist.test.labels
n_train = mnist.train.images.shape[0]
# import pdb;pdb.set_trace()
# Parameters
learning_rate = 1e-3
lambda_val = 1e-5
training_epochs = 30
batch_size = 200
display_step = 1
# Network Parameters
n_hidden_1 = 300 # 1st layer number of neurons
n_input = x_train.shape[1] # MNIST data input (img shape: 28*28)
n_classes = 10 # MNIST total classes (0-9 digits)
# tf Graph input
X = tf.placeholder("float", [None, n_input])
Y = tf.placeholder("float", [None, n_classes])
# Store layers weight & bias
weights = {
'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
'out': tf.Variable(tf.random_normal([n_hidden_1, n_classes]))
}
biases = {
'b1': tf.Variable(tf.random_normal([n_hidden_1])),
'out': tf.Variable(tf.random_normal([n_classes]))
}
# Create model
def multilayer_perceptron(x):
# Hidden fully connected layer with 256 neurons
layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
# Activation
layer_1_relu = tf.nn.relu(layer_1)
# Output fully connected layer with a neuron for each class
out_layer = tf.matmul(layer_1_relu, weights['out']) + biases['out']
return out_layer
# Construct model
logits = multilayer_perceptron(X)
# Define loss and optimizer
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y)) + lambda_val*tf.nn.l2_loss(weights['h1']) + lambda_val*tf.nn.l2_loss(weights['out'])
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)
# Test model
pred = tf.nn.softmax(logits) # Apply softmax to logits
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(Y, 1))
# Calculate accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
# Initializing the variables
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
# Training cycle
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(n_train/batch_size)
# Loop over all batches
ptr = 0
for i in range(total_batch):
next_ptr = ptr + batch_size
if next_ptr > len(x_train):
next_ptr = len(x_train)
batch_x, batch_y = x_train[ptr:next_ptr],y_train[ptr:next_ptr]
ptr += batch_size
# Run optimization op (backprop) and cost op (to get loss value)
_, c = sess.run([train_op, loss_op], feed_dict={X: batch_x,
Y: batch_y})
# Compute average loss
avg_cost += c / total_batch
# Display logs per epoch step
if epoch % display_step == 0:
print("Epoch:", '%04d' % (epoch+1), "cost={:.9f}".format(avg_cost))
print("Optimization Finished!")
print("Accuracy on training set: ", accuracy.eval({X:x_train,Y:y_train}))
print("Accuracy on testing set:", accuracy.eval({X: x_test, Y: y_test}))
print("Experimenting sklearn...")
# now experiment with sklearn
from sklearn.datasets import load_digits
import numpy as np
from sklearn.neural_network import MLPClassifier
import time
# use MLP
t_start = time.time()
print('fitting MLP...')
clf = MLPClassifier(solver='adam', alpha=1e-5, hidden_layer_sizes=(300,),max_iter=training_epochs)
clf.fit(x_train,y_train)
print('fitted MLP in {:.2f} seconds'.format(time.time() - t_start))
print('predicting...')
labels_predicted = clf.predict(x_test)
print('accuracy: {:.2f} %'.format(np.mean(np.argmax(y_test,axis=1) == np.argmax(labels_predicted,axis=1)) * 100))
The code is adapted from a GitHub repository. For this test, I'm using a traditional neural network (an MLP) with a single hidden layer of size 300.
Following is the result for the both datasets:
sklearn digits: ~83% (tensorflow), ~90% (sklearn)
MNIST: ~94% (tensorflow), ~97% (sklearn)
I'm using the same model for both libraries. All the parameters (number of hidden layers, number of hidden units, learning_rate, l2 regularization constant, number of training epochs, batch size) and optimization algorithms are the same (Adam optimizer, beta parameters for Adam optimizer, no momentum, etc).
Does sklearn have some magic in its implementation that my TensorFlow model is missing? Can anyone help explain the difference?
Thank you very much.
I am using two tutorials to figure out how to take a CSV file of the format:
feature1,feature2....feature20,label
feature1,feature2....feature20,label
...
and train a neural network on it. What I do in the code below is read in the CSV file and group 100 lines at a time into batches: x_batch and y_batch. Next, I try to have the NN learn on these batches. However, I get the following error:
"ValueError: Cannot feed value of shape (99,) for Tensor 'Placeholder_1:0', which has shape '(?, 4)'"
I am wondering what I am doing wrong and what another approach might be.
import tensorflow as tf
filename_queue = tf.train.string_input_producer(["VOL_TRAIN.csv"])
line_reader = tf.TextLineReader(skip_header_lines=1)
_, csv_row = line_reader.read(filename_queue)
# Type information and column names based on the decoded CSV.
record_defaults = [[0.0],[0.0],[0.0],[0.0],[0.0],[0.0],[0.0],[0.0],[0.0],[0.0],[0.0],[0.0],[0.0],[0.0],[0.0],[0.0],[0.0],[0.0],[0.0],[0.0],[0]]
in1,in2,in3,in4,in5,in6,in7,in8,in9,in10,in11,in12,in13,in14,in15,in16,in17,in18,in19,in20,out = \
tf.decode_csv(csv_row, record_defaults=record_defaults)
# Turn the features back into a tensor.
features = tf.pack([in1,in2,in3,in4,in5,in6,in7,in8,in9,in10,in11,in12,in13,in14,in15,in16,in17,in18,in19,in20])
# Parameters
learning_rate = 0.001
training_epochs = 15
batch_size = 100
display_step = 1
num_examples= 33500
# Network Parameters
n_hidden_1 = 256 # 1st layer number of features
n_hidden_2 = 256 # 2nd layer number of features
n_input = 20 # MNIST data input (img shape: 28*28)
n_classes = 4 # MNIST total classes (0-9 digits)
# tf Graph input
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])
# Create model
def multilayer_perceptron(x, weights, biases):
# Hidden layer with RELU activation
layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
layer_1 = tf.nn.relu(layer_1)
# Hidden layer with RELU activation
layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
layer_2 = tf.nn.relu(layer_2)
# Output layer with linear activation
out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
return out_layer
# Store layers weight & bias
weights = {
'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
}
biases = {
'b1': tf.Variable(tf.random_normal([n_hidden_1])),
'b2': tf.Variable(tf.random_normal([n_hidden_2])),
'out': tf.Variable(tf.random_normal([n_classes]))
}
# Construct model
pred = multilayer_perceptron(x, weights, biases)
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
# Initializing the variables
init = tf.global_variables_initializer()
with tf.Session() as sess:
#tf.initialize_all_variables().run()
sess.run(init)
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(num_examples/batch_size)
# Loop over all batches
for i in range(total_batch):
batch_x = []
batch_y = []
for iteration in range(1, batch_size):
example, label = sess.run([features, out])
batch_x.append(example)
batch_y.append(label)
# Run optimization op (backprop) and cost op (to get loss value)
_, c = sess.run([optimizer, cost], feed_dict={x: batch_x,
y: batch_y})
# Compute average loss
avg_cost += c / total_batch
# Display logs per epoch step
if epoch % display_step == 0:
print ("Epoch:", '%04d' % (epoch+1), "cost=", \
"{:.9f}".format(avg_cost))
print ("Optimization Finished!")
coord.request_stop()
coord.join(threads)
Your placeholder y specifies that you will feed an array of unknown length whose elements are arrays of length n_classes (which is 4). In your feed_dict you pass batch_y, which is an array of 99 plain numbers (99 rather than 100 because your inner loop, for iteration in range(1, batch_size), runs batch_size - 1 times).
What you want to do is change your batch_y variable to hold one-hot vectors instead of scalar labels. Please let me know if this works!
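For example, here is a minimal sketch (assuming the label column holds integer class indices 0-3, which is what n_classes = 4 suggests) that one-hot encodes each label before it is appended to the batch:
import numpy as np

def to_one_hot(label, n_classes=4):
    # Turn an integer class index into a one-hot row vector of length n_classes.
    vec = np.zeros(n_classes, dtype=np.float32)
    vec[int(label)] = 1.0
    return vec

# Inside the batch loop of the question:
#     example, label = sess.run([features, out])
#     batch_x.append(example)
#     batch_y.append(to_one_hot(label))   # shape (4,), matching the y placeholder
Also note that range(1, batch_size) yields only 99 iterations; use range(batch_size) if you want the full 100 examples per batch.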
I am trying to train a single-layer perceptron (basing my code on this) on the following data file in TensorFlow:
1,1,0.05,-1.05
1,1,0.1,-1.1
....
where the last column is the label (a function of the three parameters) and the first three columns are the function arguments. The code that reads the data and trains the model (simplified for readability):
import tensorflow as tf
... # some basics to read the data
example, label = read_file_format(filename_queue)
... # model construction and parameter setting
n_hidden_1 = 4 # 1st layer number of features
n_input = 3
n_output = 1
...
# calls a function which produces a prediction
pred = multilayer_perceptron(x, weights, biases)
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
# Initializing the variables
init = tf.initialize_all_variables()
# Launch the graph
with tf.Session() as sess:
sess.run(init)
for epoch in range(training_epochs):
_, c = sess.run([optimizer, cost], feed_dict={x: example.reshape(1,3), y: label.reshape(-1,1)})
# Display logs per epoch step
if epoch % display_step == 0:
print("Epoch:", '%04d' % (epoch+1), "Cost:",c)
but when I run it, something seems to be very wrong:
('Epoch:', '0001', 'Cost:', nan)
('Epoch:', '0002', 'Cost:', nan)
....
('Epoch:', '0015', 'Cost:', nan)
This is the complete code for the multilayer_perceptron function, the parameters, etc.:
# Parameters
learning_rate = 0.001
training_epochs = 15
display_step = 1
# Network Parameters
n_hidden_1 = 4 # 1st layer number of features
n_input = 3
n_output = 1
# tf Graph input
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_output])
# Create model
def multilayer_perceptron(x, weights, biases):
layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
layer_1 = tf.nn.relu(layer_1)
# Output layer with linear activation
out_layer = tf.matmul(layer_1, weights['out']) + biases['out']
return out_layer
# Store layers weight & bias
weights = {
'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
'out': tf.Variable(tf.random_normal([n_hidden_1, n_output]))
}
biases = {
'b1': tf.Variable(tf.random_normal([n_hidden_1])),
'out': tf.Variable(tf.random_normal([n_output]))
}
Are you feeding one example at a time? I would switch to batches and increase the batch size to 128 or similar, as long as you keep getting NaNs.
When I get NaNs it is usually one of three things:
- batch size too small (in your case just 1)
- log(0) somewhere
- learning rate too high combined with uncapped gradients (see the sketch below)
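For the last point, here is a minimal sketch of capping (clipping) the gradients in the TF1-style API used in the question; the clip value of 1.0 is an arbitrary starting point, not a recommendation:
# Clip each gradient into [-1, 1] before applying it, so a single bad step
# cannot blow the weights (and the loss) up to NaN.
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
grads_and_vars = optimizer.compute_gradients(cost)
capped = [(tf.clip_by_value(g, -1.0, 1.0), v)
          for g, v in grads_and_vars if g is not None]
train_op = optimizer.apply_gradients(capped)
It may also be worth checking the loss itself: with a single continuous target, softmax_cross_entropy_with_logits is not a meaningful cost, and a squared-error cost such as tf.reduce_mean(tf.square(pred - y)) is the usual choice for this kind of regression.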