I wrote a TensorFlow program for linear regression, using the gradient descent algorithm to optimize (minimize) the loss function. But the value of the loss function increases while the program runs. My program and its output follow.
import tensorflow as tf
W = tf.Variable([.3],dtype=tf.float32)
b = tf.Variable([-.3],dtype=tf.float32)
X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
lm = W*X + b
delta = tf.square(lm-Y)
loss = tf.reduce_sum(delta)
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
for i in range(8):
    print(sess.run([W, b]))
    print("loss= %f" % sess.run(loss, {X: [10,20,30,40], Y: [1,2,3,4]}))
    sess.run(train, {X: [10,20,30,40], Y: [1,2,3,4]})
sess.close()
The output of my program is:
2017-12-07 14:50:10.517685: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
[array([ 0.30000001], dtype=float32), array([-0.30000001],dtype=float32)]
loss= 108.359993
[array([-11.09999943], dtype=float32), array([-0.676], dtype=float32)]
loss= 377836.000000
[array([ 662.25195312], dtype=float32), array([ 21.77807617], dtype=float32)]
loss= 1318221568.000000
[array([-39110.421875], dtype=float32), array([-1304.26794434], dtype=float32)]
loss= 4599107289088.000000
[array([ 2310129.25], dtype=float32), array([ 77021.109375], dtype=float32)]
loss= 16045701465112576.000000
[array([ -1.36451664e+08], dtype=float32), array([-4549399.], dtype=float32)]
loss= 55981405829796462592.000000
[array([ 8.05974733e+09], dtype=float32), array([ 2.68717856e+08], dtype=float32)]
loss= 195312036582209632600064.000000
Please explain why the value of the loss is increasing instead of decreasing.
Did you try changing the learning rate? Using a lower learning rate (~1e-4) and more iterations should work.
Here is some more justification for why a lower learning rate is required. Note that your loss function is
L = \sum_i (W x_i + b - Y_i)^2,
its gradient is
dL/dW = \sum_i 2 (W x_i + b - Y_i) x_i,
and its Hessian is
d^2L/dW^2 = \sum_i 2 x_i^2.
Your loss is diverging because the learning rate is larger than the inverse of this Hessian, which here is roughly 1/(2*3000) ≈ 1.7e-4 (since 10^2 + 20^2 + 30^2 + 40^2 = 3000). So you should decrease the learning rate.
Note: I wasn't sure how to add math to a Stack Overflow answer, so I had to add it this way.
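For reference, here is a minimal sketch (mine, not the asker's) of the same script with a smaller learning rate; the 1e-4 value comes from the inverse-Hessian bound above, and everything else mirrors the question's code:
import tensorflow as tf

W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)

lm = W * X + b
loss = tf.reduce_sum(tf.square(lm - Y))

# 1 / (2 * sum(x^2)) = 1 / 6000 ~ 1.7e-4, so stay below that bound
optimizer = tf.train.GradientDescentOptimizer(1e-4)
train = optimizer.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(1000):
        sess.run(train, {X: [10, 20, 30, 40], Y: [1, 2, 3, 4]})
    print(sess.run([W, b, loss], {X: [10, 20, 30, 40], Y: [1, 2, 3, 4]}))
With this learning rate the loss shrinks steadily instead of blowing up.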
To do a linear regression, this is the code I've been using with NumPy:
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import pandas as pd
print(tf.__version__)
%matplotlib inline
plt.rcParams['figure.figsize'] = (10, 6)
x = np.arange(start=0.0, stop=5.0, step=0.1)
# You can adjust the slope and intercept to verify the changes in the graph
W = 1
b = 0
# We define the linear equation
y = W*x + b
# And plot it thanks to matplotlib
plt.plot(x,y)
plt.ylabel('Dependent Variable')
plt.xlabel('Independent Variable')
plt.show()
With TensorFlow, you can use something similar to the code below to do a linear regression:
def graph_formula_vs_data(formula, x_vector, y_vector):
    """
    This function graphs a formula in the form of a line, vs. data points
    """
    x = np.array(range(0, int(max(x_vector))))
    y = eval(formula)
    plt.plot(x, y)
    plt.plot(x_vector, y_vector, "ro")
    plt.show()
df=pd.read_csv('./linear_reg_exam_dataset.csv',usecols = [0,1],skiprows = [0],header=None)
d = df.values
data = np.float32(d)
dataset = pd.DataFrame({'x': data[:, 0], 'y': data[:, 1]})
# Number of epochs (times we make the model go through all the data)
n_epochs = 100
# Model parameters
W = tf.Variable([0.], tf.float32)
b = tf.Variable([0.], tf.float32)
y = dataset['y'] # define the target variable (dependent variable) as y
x = dataset['x']
msk = np.random.rand(len(df)) < 0.8
# Model input and output
x_train = x[msk].values.tolist()
y_train = y[msk].values.tolist()
# Validation data (with this we validate that the model has learned to generalize the problem)
x_val = x[~msk].values.tolist()
y_val = y[~msk].values.tolist()
# Model definition
#tf.function
def linear_model(x, W, b):
    return W*x + b
# Cost function
loss = lambda: tf.reduce_sum(tf.math.squared_difference(y_train,linear_model(x_train, W, b)))
# optimizer to do the gradient descent
optimizer = tf.optimizers.SGD(0.0000000000001)
# We perform n_epochs training iterations
for i in range(n_epochs):
    optimizer.minimize(loss, var_list=[W, b])
    # Every 10 epochs we print the data of how W, b evolve and the amount of error there is
    if i % 10 == 0 or i == n_epochs-1:
        print("Epoch {}".format(i))
        print("W: {}".format(W.numpy()))
        print("b: {}".format(b.numpy()))
        print("loss: {}".format(loss()))
        # This formula represents w * x + b in string form to be able to graph it
        stringfied_formula = str(W.numpy()) + "*x +" + str(b.numpy())
        graph_formula_vs_data(formula=stringfied_formula, x_vector=x_train, y_vector=y_train)
        print("\n")
Epoch 99
W: [0.39189553]
b: [0.00059491]
loss: 1458421628928.0
# Evaluation of the model with validation data
stringfied_formula=str(W.numpy()) + "*x +" + str(b.numpy())
graph_formula_vs_data(formula=stringfied_formula, x_vector=x_val, y_vector=y_val)
loss = lambda: tf.reduce_sum(tf.math.squared_difference(y_val,linear_model(x_val, W, b)))
print("\nValidation: ")
print("W: {}".format(W.numpy()))
print("b: {}".format(b.numpy()))
print("loss: {}".format(loss()))
graph_formula_vs_data(formula=stringfied_formula, x_vector=x_val, y_vector=y_val)
Validation:
W: [75.017586]
b: [0.11139687]
loss: 8863.4775390625
I'm trying to implement and train a neural network using the JAX library and its little neural network submodule, "Stax". Since this library doesn't come with an implementation of binary cross entropy, I wrote my own:
def binary_cross_entropy(y_hat, y):
    bce = y * jnp.log(y_hat) + (1 - y) * jnp.log(1 - y_hat)
    return jnp.mean(-bce)
I implemented a simple neural network and trained it on MNIST, and started to get suspicious of some of the results I was getting. So I implemented the same setup in Keras, and I immediately got wildly different results! The same model, trained in the same way on the same data, was getting 90% training accuracy in Keras instead of around 50% in JAX. Eventually I tracked down part of the issue to my naive implementation of cross-entropy, which is supposedly numerically unstable. Following this post and this code I found, I wrote the following new version:
def binary_cross_entropy_stable(y_hat, y):
    y_hat = jnp.clip(y_hat, 0.000001, 0.9999999)
    logits = jnp.log(y_hat/(1 - y_hat))
    max_logit = jnp.clip(logits, 0, None)
    bces = logits - logits * y + max_logit + jnp.log(jnp.exp(-max_logit) + jnp.exp(-logits - max_logit))
    return jnp.mean(bces)
This works a little better. Now my JAX implementation gets up to 80% training accuracy, but that's still a lot less than the 90% Keras gets. What I want to know is: what is going on? Why do my two implementations not behave the same way?
Below, I condensed my two implementations down to a single script. In this script, I implement the same model in JAX and in Keras. I initialize both with the same weights, and train them using full-batch gradient descent for 10 steps on 1000 datapoints from MNIST, the same data for each model. JAX finishes with 80% training accuracy, while Keras finishes with 90%. Specifically, I get this output:
Initial Keras accuracy: 0.4350000023841858
Initial JAX accuracy: 0.435
Final JAX accuracy: 0.792
Final Keras accuracy: 0.9089999794960022
JAX accuracy (Keras weights): 0.909
Keras accuracy (JAX weights): 0.7919999957084656
And actually, when I vary the conditions a little (using different random initial weights or a different training set), sometimes I get back the 50% JAX accuracy and 90% Keras accuracy.
I swap the weights at the end to verify that the weights obtained from training are indeed the issue, not something to do with the actual computation of the network predictions, or the way I calculate accuracy.
The code:
import numpy as np
import jax
from jax import jit, grad
from jax.experimental import stax, optimizers
import jax.numpy as jnp
import keras
import keras.datasets.mnist
def binary_cross_entropy(y_hat, y):
    bce = y * jnp.log(y_hat) + (1 - y) * jnp.log(1 - y_hat)
    return jnp.mean(-bce)

def binary_cross_entropy_stable(y_hat, y):
    y_hat = jnp.clip(y_hat, 0.000001, 0.9999999)
    logits = jnp.log(y_hat/(1 - y_hat))
    max_logit = jnp.clip(logits, 0, None)
    bces = logits - logits * y + max_logit + jnp.log(jnp.exp(-max_logit) + jnp.exp(-logits - max_logit))
    return jnp.mean(bces)

def binary_accuracy(y_hat, y):
    return jnp.mean((y_hat >= 1/2) == (y >= 1/2))
########################################
#                                      #
#            Create dataset            #
#                                      #
########################################
input_dimension = 784
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data(path="mnist.npz")
xs = np.concatenate([x_train, x_test])
xs = xs.reshape((70000, 784))
ys = np.concatenate([y_train, y_test])
ys = (ys >= 5).astype(np.float32)
ys = ys.reshape((70000, 1))
train_xs = xs[:1000]
train_ys = ys[:1000]
########################################
#                                      #
#           Create JAX model           #
#                                      #
########################################
jax_initializer, jax_model = stax.serial(
stax.Dense(1000),
stax.Relu,
stax.Dense(1),
stax.Sigmoid
)
rng_key = jax.random.PRNGKey(0)
_, initial_jax_weights = jax_initializer(rng_key, (1, input_dimension))
########################################
#                                      #
#          Create Keras model          #
#                                      #
########################################
initial_keras_weights = [*initial_jax_weights[0], *initial_jax_weights[2]]
keras_model = keras.Sequential([
keras.layers.Dense(1000, activation="relu"),
keras.layers.Dense(1, activation="sigmoid")
])
keras_model.compile(
optimizer=keras.optimizers.SGD(learning_rate=0.01),
loss=keras.losses.binary_crossentropy,
metrics=["accuracy"]
)
keras_model.build(input_shape=(1, input_dimension))
keras_model.set_weights(initial_keras_weights)
if __name__ == "__main__":
    ########################################
    #                                      #
    #       Compare untrained models       #
    #                                      #
    ########################################
    initial_keras_predictions = keras_model.predict(train_xs, verbose=0)
    initial_jax_predictions = jax_model(initial_jax_weights, train_xs)
    _, keras_initial_accuracy = keras_model.evaluate(train_xs, train_ys, verbose=0)
    jax_initial_accuracy = binary_accuracy(jax_model(initial_jax_weights, train_xs), train_ys)
    print("Initial Keras accuracy:", keras_initial_accuracy)
    print("Initial JAX accuracy:", jax_initial_accuracy)
    ########################################
    #                                      #
    #           Train JAX model            #
    #                                      #
    ########################################
    L = jit(binary_cross_entropy_stable)
    gradL = jit(grad(lambda w, x, y: L(jax_model(w, x), y)))
    opt_init, opt_apply, get_params = optimizers.sgd(0.01)
    network_state = opt_init(initial_jax_weights)
    for _ in range(10):
        wT = get_params(network_state)
        gradient = gradL(wT, train_xs, train_ys)
        network_state = opt_apply(
            0,
            gradient,
            network_state
        )
    final_jax_weights = get_params(network_state)
    final_jax_training_predictions = jax_model(final_jax_weights, train_xs)
    final_jax_accuracy = binary_accuracy(final_jax_training_predictions, train_ys)
    print("Final JAX accuracy:", final_jax_accuracy)
    ########################################
    #                                      #
    #          Train Keras model           #
    #                                      #
    ########################################
    for _ in range(10):
        keras_model.fit(
            train_xs,
            train_ys,
            epochs=1,
            batch_size=1000,
            verbose=0
        )
    final_keras_loss, final_keras_accuracy = keras_model.evaluate(train_xs, train_ys, verbose=0)
    print("Final Keras accuracy:", final_keras_accuracy)
    ########################################
    #                                      #
    #             Swap weights             #
    #                                      #
    ########################################
    final_keras_weights = keras_model.get_weights()
    final_keras_weights_in_jax_format = [
        (final_keras_weights[0], final_keras_weights[1]),
        tuple(),
        (final_keras_weights[2], final_keras_weights[3]),
        tuple()
    ]
    jax_accuracy_with_keras_weights = binary_accuracy(
        jax_model(final_keras_weights_in_jax_format, train_xs),
        train_ys
    )
    print("JAX accuracy (Keras weights):", jax_accuracy_with_keras_weights)
    final_jax_weights_in_keras_format = [*final_jax_weights[0], *final_jax_weights[2]]
    keras_model.set_weights(final_jax_weights_in_keras_format)
    _, keras_accuracy_with_jax_weights = keras_model.evaluate(train_xs, train_ys, verbose=0)
    print("Keras accuracy (JAX weights):", keras_accuracy_with_jax_weights)
Try changing the PRNG seed (the argument to jax.random.PRNGKey, set to 0 above) to a different value to run the experiment using different initial weights.
Your binary_cross_entropy_stable function does not match the output of keras.losses.binary_crossentropy; for example:
x = np.random.rand(10)
y = np.random.rand(10)
print(keras.losses.binary_crossentropy(x, y))
# tf.Tensor(0.8134677734043875, shape=(), dtype=float64)
print(binary_cross_entropy_stable(x, y))
# 0.9781515
That is where I would start if you're trying to exactly duplicate the model.
You can view the source of the keras loss function here: keras/losses.py#L1765-L1810, with the main part of the implementation here: keras/backend.py#L4972-L5017
One detail: it appears that with a sigmoid activation function, Keras re-uses some cached logits to compute the binary cross entropy while avoiding problematic values: keras/backend.py#L4988-L4997. I'm not sure how to easily replicate that behavior using JAX & stax.
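If you want to try it, here is a rough sketch (my own, not from either codebase) of computing the loss from logits in JAX, which is essentially what Keras does internally; it assumes you drop the final stax.Sigmoid layer from the model and only apply a sigmoid when computing accuracy:
import jax.numpy as jnp
from jax.nn import log_sigmoid

def binary_cross_entropy_from_logits(logits, y):
    # log(sigmoid(z)) and log(1 - sigmoid(z)) == log_sigmoid(-z) are both
    # evaluated stably, so no clipping of probabilities is needed
    bce = y * log_sigmoid(logits) + (1 - y) * log_sigmoid(-logits)
    return jnp.mean(-bce)
The gradient would then be taken through something like gradL = jit(grad(lambda w, x, y: binary_cross_entropy_from_logits(jax_model(w, x), y))).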
I am new to TensorFlow, so I am trying to get my hands dirty by working on a binary classification problem on Kaggle. I trained the model using a sigmoid function and got very good accuracy when testing, but when I try to export the predictions to a DataFrame for submission, I get the error below. I have attached the code, the predictions, and the output. Please suggest what I am doing wrong; I suspect it has to do with my sigmoid function. Thanks.
This is the output of the predictions; the expected values are 1s and 0s:
INFO:tensorflow:Restoring parameters from ./movie_review_variables
Prections are [[3.8743019e-07]
[9.9999821e-01]
[1.7650980e-01]
...
[9.9997473e-01]
[1.4901161e-07]
[7.0333481e-06]]
#Importing tensorflow
import tensorflow as tf
#defining hyperparameters
learning_rate = 0.01
training_epochs = 1000
batch_size = 100
num_labels = 2
num_features = 5000
train_size = 20000
#defining the placeholders and encoding the y placeholder
X = tf.placeholder(tf.float32, shape=[None, num_features])
Y = tf.placeholder(tf.int32, shape=[None])
y_oneHot = tf.one_hot(Y, 1)
#defining the model parameters -- weight and bias
W = tf.Variable(tf.zeros([num_features, 1]))
b = tf.Variable(tf.zeros([1]))
#defining the sigmoid model and setting up the learning algorithm
y_model = tf.nn.sigmoid(tf.add(tf.matmul(X, W), b))
cost = tf.nn.sigmoid_cross_entropy_with_logits(logits=y_model, labels=y_oneHot)
train_optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
#defining operation to measure success rate
correct_prediction = tf.equal(tf.argmax(y_model, 1), tf.argmax(y_oneHot, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
#saving variables
saver = tf.train.Saver()
#executing the graph and saving the model variables
with tf.Session() as sess:  # new session
    tf.global_variables_initializer().run()
    # Iteratively updating parameters batch by batch
    for step in range(training_epochs * train_size // batch_size):
        offset = (step * batch_size) % train_size
        batch_xs = x_train[offset:(offset + batch_size), :]
        batch_labels = y_train[offset:(offset + batch_size)]
        # run optimizer on batch
        err, _ = sess.run([cost, train_optimizer], feed_dict={X: batch_xs, Y: batch_labels})
        if step % 1000 == 0:
            print(step, err)  # print ongoing result
    # Print final learned parameters
    w_val = sess.run(W)
    print('w', w_val)
    b_val = sess.run(b)
    print('b', b_val)
    print('Accuracy', accuracy.eval(feed_dict={X: x_test, Y: y_test}))
    save_path = saver.save(sess, './movie_review_variables')
    print('Model saved in path {}'.format(save_path))

# creating csv file for kaggle submission
with tf.Session() as sess:
    saver.restore(sess, './movie_review_variables')
    predictions = sess.run(y_model, feed_dict={X: test_data_features})
    subm2 = pd.DataFrame(data={'id': test['id'], 'sentiment': predictions})
    subm2.to_csv('subm2nlp.csv', index=False, quoting=3)
    print("I am done predicting")
INFO:tensorflow:Restoring parameters from ./movie_review_variables
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
<ipython-input-85-fd74ed82109c> in <module>()
5 # print('Prections are {}'.format(predictions))
6
----> 7 subm2 = pd.DataFrame(data={'id':test['id'], 'sentiment':predictions})
8 subm2.to_csv('subm2nlp.csv', index=False, quoting=3)
9 print("I am done predicting")
Exception: Data must be 1-dimensional
You'll need to set some threshold for the sigmoidal output, e.g. split the outputs into bins with a spacing of 0.5 between them:
>>> import numpy as np
>>> x = np.linspace(0, 10, 20)
>>> x
array([ 0. , 0.52631579, 1.05263158, 1.57894737, 2.10526316,
2.63157895, 3.15789474, 3.68421053, 4.21052632, 4.73684211,
5.26315789, 5.78947368, 6.31578947, 6.84210526, 7.36842105,
7.89473684, 8.42105263, 8.94736842, 9.47368421, 10. ])
>>> q = 0.5 # The continuous value between two discrete points
>>> y = q * np.round(x/q)
>>> y
array([ 0. , 0.5, 1. , 1.5, 2. , 2.5, 3. , 3.5, 4. , 4.5, 5.5,
6. , 6.5, 7. , 7.5, 8. , 8.5, 9. , 9.5, 10. ])
You can see from the definition of the sigmoid function that it always has a continuous output. If you want to discretize your output, you need to determine some threshold above which you set your solution to 1 and below which you set it to 0.
pred = tf.math.greater(y_model, tf.constant(0.5))
However, you must be careful choosing an appropriate threshold, as it is not guaranteed that your model is well calibrated with respect to probability. You can choose a suitable threshold based on the best discrimination on some held-out validation set.
Note that this step is for evaluation only, as you will not be able to backpropagate your loss signal through this op.
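Putting this together for the original "Data must be 1-dimensional" error, here is a sketch (reusing the question's predictions, test and output filename, which I assume are in scope) that thresholds the (n, 1) sigmoid outputs and flattens them before building the DataFrame:
import pandas as pd

# predictions has shape (n, 1): threshold at 0.5 and flatten to 1-D
binary_predictions = (predictions.ravel() > 0.5).astype(int)
subm2 = pd.DataFrame(data={'id': test['id'], 'sentiment': binary_predictions})
subm2.to_csv('subm2nlp.csv', index=False, quoting=3)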
For a complex text classification task I am training an RNN model. Somehow the weights of the RNN only change at the beginning, and then only in very tiny steps (the gradients are on the order of 1e-7):
For those of you who are not familiar with tensorboard: This shows the distribution of values for the bias and the weights of the RNN (x axis is the value, y axis the number of values and z the training iterations).
I constructed a toy example which does not make sense but reproduces the same behaviour:
import numpy as np
import tensorflow as tf
tensorboard_save_path = "../RNN/tensorboard/supersimple/"
x = np.random.normal(size=(33, 20, 5000))
y = np.array([1 if i>0.5 else 0 for i in np.random.random(33)])
##### NETWORK #########
with tf.name_scope("RNN"):
rnn_cell = tf.contrib.rnn.BasicRNNCell(1)
outputs, states = tf.nn.dynamic_rnn(rnn_cell, x, dtype=tf.float64)
rnn_weights, rnn_biases = rnn_cell.variables
tf.summary.histogram("RNN weights", rnn_weights)
tf.summary.histogram("RNN biases", rnn_biases)
pred = tf.sigmoid(outputs[:,-1])
with tf.name_scope("cost"):
cost = tf.losses.mean_squared_error(predictions=pred, labels=np.reshape(y, (33,1)))
with tf.name_scope("train"):
optimizer = tf.train.AdagradOptimizer(learning_rate=0.5).minimize(cost)
init = tf.global_variables_initializer()
merged_summary = tf.summary.merge_all()
writer = tf.summary.FileWriter(tensorboard_save_path)
print("\ttensorboard --logdir=" + tensorboard_save_path)
sess = tf.Session()
sess.run(init)
for i in range(1000):
    sess.run([optimizer])
    if i % 50 == 0:
        c, s = sess.run([cost, merged_summary])
        writer.add_summary(s, i)
        print("cost is %f" % c)
The expected behaviour would be that the model overfits, given the huge number of variables compared to the few training samples. Any idea what's going wrong here?
I'm trying to make some price predictions on a Kaggle dataset with TensorFlow.
My neural network is learning, but my cost function is really high and my predictions are far from the real output.
I tried to change my network by adding or removing some layers, neurons and activation functions.
I tried a lot with my hyper-parameters, but that doesn't change much.
I don't think the problem comes from my data; I checked on Kaggle and it's the dataset most people use.
If you have any idea why my cost is so high and how to reduce it, and if you could explain it to me, it would be really great!
Here's my code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
from sklearn.utils import shuffle
df = pd.read_csv(r"C:\Users\User\Documents\TENSORFLOW\Prediction prix\train2.csv", sep=';')
df.head()
df = df.loc[:, ['OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'FullBath', 'SalePrice']]
df = df.replace(np.nan, 0)
df
%matplotlib inline
plt = sns.pairplot(df)
plt
df = shuffle(df)
df_train = df[0:1000]
df_test = df[1001:1451]
inputX = df_train.drop('SalePrice', 1).as_matrix()
inputX = inputX.astype(int)
inputY = df_train.loc[:, ['SalePrice']].as_matrix()
inputY = inputY.astype(int)
inputX_test = df_test.drop('SalePrice', 1).as_matrix()
inputX_test = inputX_test.astype(int)
inputY_test = df_test.loc[:, ['SalePrice']].as_matrix()
inputY_test = inputY_test.astype(int)
# Parameters
learning_rate = 0.01
training_epochs = 1000
batch_size = 500
display_step = 50
n_samples = inputX.shape[0]
x = tf.placeholder(tf.float32, [None, 5])
y = tf.placeholder(tf.float32, [None, 1])
def add_layer(inputs, in_size, out_size, activation_function=None):
    Weights = tf.Variable(tf.random_normal([in_size, out_size], stddev=0.1))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    if activation_function is None:
        output = Wx_plus_b
    else:
        output = activation_function(Wx_plus_b)
    return output
l1 = add_layer(x, 5, 3, activation_function=tf.nn.relu)
pred = add_layer(l1, 3, 1)
# Mean squared error
cost = tf.reduce_sum(tf.pow(pred-y, 2))/(2*n_samples)
# Gradient descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
# Initializing the variables
init = tf.global_variables_initializer()
# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = batch_size
        # Loop over all batches
        for i in range(total_batch):
            # Run optimization op (backprop) and cost op (to get loss value)
            _, c = sess.run([optimizer, cost], feed_dict={x: inputX,
                                                          y: inputY})
            # Compute average loss
            avg_cost += c / total_batch
        # Display logs per epoch step
        if epoch % display_step == 0:
            print("Epoch:", '%04d' % (epoch+1), "cost=",
                  "{:.9f}".format(avg_cost))
    print("Optimization Finished!")
    # Test model
    correct_prediction = tf.equal(pred, y)
    # Calculate accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    print("Accuracy:", accuracy.eval({x: inputX, y: inputY}))
    print(sess.run(pred, feed_dict={x: inputX_test}))
Epoch: 0001 cost= 10142407502702304395526144.000000000
Epoch: 0051 cost= 3256106752.000019550
Epoch: 0101 cost= 3256106752.000019550
Epoch: 0151 cost= 3256106752.000019550
Epoch: 0201 cost= 3256106752.000019550
...
Thanks for your help!
I see a couple of problems with the implementation:
Inputs are not scaled.
Use sklearn's StandardScaler to scale the inputs inputX and inputY (and also inputX_test and inputY_test) to zero mean and unit variance. You can use inverse_transform to convert the outputs back to the proper scale again.
from sklearn.preprocessing import StandardScaler

sc = StandardScaler().fit(inputX)
inputX = sc.transform(inputX)
inputX_test = sc.transform(inputX_test)
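As a small sketch of that last point (my addition; it assumes a second scaler sc1 fitted on inputY, as in the working code below, and runs inside the training session):
# Map the scaled network outputs back to the original SalePrice scale
pred_scaled = sess.run(pred, feed_dict={x: inputX_test})
pred_prices = sc1.inverse_transform(pred_scaled)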
The batch_size is too large; you are passing the entire set as a single batch. This should not cause the particular problem you are facing, but for better convergence try a reduced batch size. Implement a get_batch() generator function and do the following:
for batch_X, batch_Y in get_batch(input_X, input_Y, batch_size):
    _, c = sess.run([optimizer, cost],
                    feed_dict={x: batch_X, y: batch_Y})
Try a smaller weight initialization (stddev) if you still see issues.
WORKING CODE BELOW:
inputX = df_train.drop('SalePrice', 1).as_matrix()
inputX = inputX.astype(int)
sc = StandardScaler().fit(inputX)
inputX = sc.transform(inputX)
inputY = df_train.loc[:, ['SalePrice']].as_matrix()
inputY = inputY.astype(int)
sc1 = StandardScaler().fit(inputY)
inputY = sc1.transform(inputY)
inputX_test = df_test.drop('SalePrice', 1).as_matrix()
inputX_test = inputX_test.astype(int)
inputX_test = sc.transform(inputX_test)
inputY_test = df_test.loc[:, ['SalePrice']].as_matrix()
inputY_test = inputY_test.astype(int)
inputY_test = sc1.transform(inputY_test)
learning_rate = 0.01
training_epochs = 1000
batch_size = 50
display_step = 50
n_samples = inputX.shape[0]
x = tf.placeholder(tf.float32, [None, 5])
y = tf.placeholder(tf.float32, [None, 1])
def get_batch(inputX, inputY, batch_size):
    duration = len(inputX)
    for i in range(0, duration//batch_size):
        idx = i*batch_size
        yield inputX[idx:idx+batch_size], inputY[idx:idx+batch_size]
def add_layer(inputs, in_size, out_size, activation_function=None):
    Weights = tf.Variable(tf.random_normal([in_size, out_size], stddev=0.005))
    biases = tf.Variable(tf.zeros([1, out_size]))
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    if activation_function is None:
        output = Wx_plus_b
    else:
        output = activation_function(Wx_plus_b)
    return output
l1 = add_layer(x, 5, 3, activation_function=tf.nn.relu)
pred = add_layer(l1, 3, 1)
# Mean squared error
cost = tf.reduce_mean(tf.pow(tf.subtract(pred, y), 2))
# Gradient descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
# Initializing the variables
init = tf.global_variables_initializer()
# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = batch_size
        # Loop over all batches
        # for i in range(total_batch):
        for batch_x, batch_y in get_batch(inputX, inputY, batch_size):
            # Run optimization op (backprop) and cost op (to get loss value)
            _, c, _l1, _pred = sess.run([optimizer, cost, l1, pred], feed_dict={x: batch_x, y: batch_y})
            # Compute average loss
            avg_cost += c / total_batch
        # Display logs per epoch step
        if epoch % display_step == 0:
            print("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f} ".format(avg_cost))
            # print(_l1, _pred)
    print("Optimization Finished!")
I have had a similar problem before: a very high cost is reached after a few training steps, and then the cost stays constant there. For me it was a kind of overflow, with gradients that were too big and created NaN values quite early in training. I solved it by starting with a smaller learning rate (potentially much smaller) until the cost and gradients become more reasonable (a few dozen steps), and then going back to a regular one (higher at the start, potentially decaying).
See my answer to this post for a similar case that was solved just by using a smaller learning rate at the start.
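A minimal sketch of that warm-up idea in the same TF1 style (the lr placeholder and the particular schedule are illustrative only; batch_x and batch_y stand for whatever batches you feed):
# Feed the learning rate so it can start small and be raised later
lr = tf.placeholder(tf.float32, shape=[])
optimizer = tf.train.GradientDescentOptimizer(lr).minimize(cost)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(1000):
        # tiny warm-up rate for the first few dozen steps, then a larger one
        current_lr = 1e-6 if step < 50 else 1e-2
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y, lr: current_lr})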
You can also clip your gradients to avoid this problem, using tf.clip_by_value. It sets a minimum and a maximum value for your gradients, which avoids huge ones that send your weights straight to NaN after the first few iterations. To use it (with min and max at -1 and 1, which is probably too tight), replace
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
by
opt= tf.train.GradientDescentOptimizer(learning_rate)
gvs = opt.compute_gradients(cost)
capped_gvs = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gvs]
optimizer = opt.apply_gradients(capped_gvs)
I am trying to implement a linear regression model in Tensorflow, with additional constraints (coming from the domain) that the W and b terms must be non-negative.
I believe there are a couple of ways to do this.
We can modify the cost function to penalize negative weights [Lagrangian approach] [See: TensorFlow - best way to implement weight constraints]
We can compute the gradients ourselves and project them on [0, infinity] [Projected gradient approach]
Approach 1: Lagrangian
When I tried the first approach, I would often end up with negative b.
I had modified the cost function from:
cost = tf.reduce_sum(tf.pow(pred-Y, 2))/(2*n_samples)
to:
cost = tf.reduce_sum(tf.pow(pred-Y, 2))/(2*n_samples)
nn_w = tf.reduce_sum(tf.abs(W) - W)
nn_b = tf.reduce_sum(tf.abs(b) - b)
constraint = 100.0*nn_w + 100*nn_b
cost_with_constraint = cost + constraint
Making the coefficients of nn_b and nn_w very high leads to instability and a very high cost.
Here is the complete code.
import numpy as np
import tensorflow as tf
n_samples = 50
train_X = np.linspace(1, 50, n_samples)
train_Y = 10*train_X + 6 +40*np.random.randn(50)
X = tf.placeholder("float")
Y = tf.placeholder("float")
# Set model weights
W = tf.Variable(np.random.randn(), name="weight")
b = tf.Variable(np.random.randn(), name="bias")
# Construct a linear model
pred = tf.add(tf.multiply(X, W), b)
# Gradient descent
learning_rate=0.0001
# Initializing the variables
init = tf.global_variables_initializer()
# Mean squared error
cost = tf.reduce_sum(tf.pow(pred-Y, 2))/(2*n_samples)
nn_w = tf.reduce_sum(tf.abs(W) - W)
nn_b = tf.reduce_sum(tf.abs(b) - b)
constraint = 1.0*nn_w + 100*nn_b
cost_with_constraint = cost + constraint
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost_with_constraint)
training_epochs=200
with tf.Session() as sess:
    sess.run(init)
    # Fit all training data
    cost_array = np.zeros(training_epochs)
    W_array = np.zeros(training_epochs)
    b_array = np.zeros(training_epochs)
    for epoch in range(training_epochs):
        for (x, y) in zip(train_X, train_Y):
            sess.run(optimizer, feed_dict={X: x, Y: y})
        W_array[epoch] = sess.run(W)
        b_array[epoch] = sess.run(b)
        cost_array[epoch] = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
The following is the mean of b across 10 different runs.
0 -1.101268
1 0.169225
2 0.158363
3 0.706270
4 -0.371205
5 0.244424
6 1.312516
7 -0.069609
8 -1.032187
9 -1.711668
Clearly, the first approach is not optimal. Further, there is a lot of art involved in choosing the coefficients of the penalty terms.
Approach 2: Projected gradient
I then thought of using the second approach, which is guaranteed to enforce the constraint.
gr = tf.gradients(cost, [W, b])
We manually compute the gradients and update W and b:
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(training_epochs):
        for (x, y) in zip(train_X, train_Y):
            W_del, b_del = sess.run(gr, feed_dict={X: x, Y: y})
            W = max(0, (W - W_del)*learning_rate)  # Project the gradient on [0, infinity]
            b = max(0, (b - b_del)*learning_rate)  # Project the gradient on [0, infinity]
This approach seems to be very slow.
I am wondering if there is a better way to run the second approach, or guarantee the results with the first approach. Can we somehow allow the optimizer to ensure that the learnt weights are non-negative?
Edit: How to do this in Autograd
https://github.com/HIPS/autograd/issues/207
If you modify your linear model to:
pred = tf.add(tf.multiply(X, tf.abs(W)), tf.abs(b))
it will have the same effect as using only positive W and b values.
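As an aside (my addition, not part of the original answer): with this reparameterization the trained variables themselves can still be negative; the effective non-negative parameters are tf.abs(W) and tf.abs(b), so read those back after training, e.g.:
# The learned non-negative coefficients under the abs() reparameterization
effective_W, effective_b = sess.run([tf.abs(W), tf.abs(b)])
print("W:", effective_W, "b:", effective_b)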
The reason your second approach is slow is that you clip the W and b values outside of the tensorflow graph. (Also, it will not converge, because (W - W_del)*learning_rate should instead be W - W_del*learning_rate.)
edit:
You can implement the clipping inside the tensorflow graph like this:
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
with tf.control_dependencies([train_step]):
    clip_W = W.assign(tf.maximum(0., W))
    clip_b = b.assign(tf.maximum(0., b))
    train_step_with_clip = tf.group(clip_W, clip_b)
In this case W and b values will be clipped to 0 and not to small positive numbers.
Here is a small mnist example with clipping:
import tensorflow as tf
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x = tf.placeholder(tf.uint8, [None, 28, 28])
x_vec = tf.cast(tf.reshape(x, [-1, 784]), tf.float32) / 255.
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.matmul(x_vec, W) + b
y_target = tf.placeholder(tf.uint8, [None])
y_target_one_hot = tf.one_hot(y_target, 10)
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=y_target_one_hot, logits=y))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
with tf.control_dependencies([train_step]):
    clip_W = W.assign(tf.maximum(0., W))
    clip_b = b.assign(tf.maximum(0., b))
    train_step_with_clip = tf.group(clip_W, clip_b)
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_target_one_hot, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
with tf.Session() as sess:
    tf.global_variables_initializer().run()
    for i in range(1000):
        sess.run(train_step_with_clip, feed_dict={
            x: x_train[(i*100)%len(x_train):((i+1)*100)%len(x_train)],
            y_target: y_train[(i*100)%len(x_train):((i+1)*100)%len(x_train)]})
        if not i%100:
            print("Min_W:", sess.run(tf.reduce_min(W)))
            print("Min_b:", sess.run(tf.reduce_min(b)))
            print("Accuracy:", sess.run(accuracy, feed_dict={
                x: x_test,
                y_target: y_test}))
I actually was not able to reproduce your problem of getting negative b values with your first approach.
But I do agree that this is not optimal for your use case and can result in negative values.
You should be able to constrain your parameters to non-negative values like so:
W *= tf.cast(W > 0., tf.float32)
b *= tf.cast(b > 0., tf.float32)
(Replace > with >= if necessary; the cast is needed because the comparison operators produce boolean values.)
You would then optimize for the "standard cost" without the additional constraints.
However, this does not work in every case. For example, initializing W or b with negative values should be avoided.
Your second (and probably better) approach can be accelerated by defining the update logic in the general computational graph, i.e. after the definition of cost:
params = [W, b]
grads = tf.gradients(cost, params)
optimizer = [tf.assign(param, tf.maximum(0., param - grad*learning_rate))
for param, grad in zip(params, grads)]
I think your solution is slow because it creates new computation nodes every time, which is probably very costly and is repeated a lot inside the loops.
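For completeness, a sketch of driving those assign ops with the question's feed loop (optimizer here is the list of assign ops defined just above; x_i and y_i are the individual training points):
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(training_epochs):
        for (x_i, y_i) in zip(train_X, train_Y):
            # One projected-gradient step: update W and b, then clamp them at zero
            sess.run(optimizer, feed_dict={X: x_i, Y: y_i})
    print("W:", sess.run(W), "b:", sess.run(b))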
Update using the tensorflow optimizer
In my solution above, it is not the gradients that are clipped but rather the resulting update values.
Along the lines of this answer, you could clip the gradients to be at most the value of the parameter being updated, like so:
params = [W, b]
opt = tf.train.GradientDescentOptimizer(learning_rate)
grads_and_vars = opt.compute_gradients(cost, params)
clipped_grads_vars = [(tf.clip_by_value(grad, -np.inf, var), var) for grad, var in grads_and_vars]
optimizer = opt.apply_gradients(clipped_grads_vars)
This way, an update will never decrease a parameter to a value below 0.
However, I think this will not work when the updated variable is already negative, or when the optimization algorithm somehow multiplies the clipped gradient by a value greater than 1.
The latter might actually never happen, but I'm not 100% sure.