I have an input tensor
data = tf.placeholder(tf.int32, [None])
which will be embedded by
embedding_matrix = tf.get_variable("embedding_matrix", [5,3], tf.float32, initializer=tf.random_normal_initializer())
input_vectors = tf.nn.embedding_lookup(params=embedding_matrix, ids=data)
I perform a linear transformation on the input vector using output1_weights to get network_output1
output1_weights = tf.get_variable("output1", [3,4], tf.float32, initializer=tf.random_normal_initializer())
network_output1 = tf.matmul(input_vectors, output1_weights)
The loss will be very standard stuff
loss1 = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=output1, logits=network_output1)
Now I want to use the logits network_output1 as input to compute another linear transformation
output2_weights = tf.get_variable("output2", [4,5], tf.float32, initializer=tf.random_normal_initializer())
network_output2 = tf.matmul(network_output1, output2_weights)
Again cross-entropy loss on the second output
loss2 = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=output2, logits=network_output2)
Here is what I want to achieve. In a joint-loss setting, I want to back-prop only the gradient of output1_weights when minimizing loss1, and only the gradient of output2_weights when minimizing loss2. In other words, when optimizing loss2 I don't want the gradients to flow all the way back and tamper with output1_weights. I am aware of the compute_gradients function in the optimizer class, which can take a var_list argument, but it seems it cannot stop the gradients from flowing for separate losses. I have also considered separating the losses and minimizing them individually, but that would be a bad solution in my setting.
All you have to do is select the trainable variables that belong to each loss and pass them via var_list.
First, count the trainable variables as you build each part of the graph.
import numpy as np
import tensorflow as tf
data = tf.placeholder(tf.int32, [None])
output1 = tf.placeholder(tf.int32, [None])
output2 = tf.placeholder(tf.int32, [None])
embedding_matrix = tf.get_variable("embedding_matrix", [5,3], tf.float32, initializer=tf.random_normal_initializer())
input_vectors = tf.nn.embedding_lookup(params=embedding_matrix, ids=data)
# count
params_num0 = len(tf.trainable_variables())
output1_weights = tf.get_variable("output1", [3,4], tf.float32, initializer=tf.random_normal_initializer())
network_output1 = tf.matmul(input_vectors, output1_weights)
# count
params_num1 = len(tf.trainable_variables())
loss1 = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=output1, logits=network_output1)
output2_weights = tf.get_variable("output2", [4,5], tf.float32, initializer=tf.random_normal_initializer())
network_output2 = tf.matmul(network_output1, output2_weights)
loss2 = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=output2, logits=network_output2)
Then print the counts and all trainable variables.
params = tf.trainable_variables()
print(params_num0)
print(params_num1)
print(params)
# 1
# 2
# [<tf.Variable 'embedding_matrix:0' shape=(5, 3) dtype=float32_ref>, <tf.Variable 'output1:0' shape=(3, 4) dtype=float32_ref>, <tf.Variable 'output2:0' shape=(4, 5) dtype=float32_ref>]
You can see that there are three trainable variables: loss1 should train the second (output1), and loss2 the third (output2).
# if you want to back-prop the gradient of embedding_matrix,
# params1 = params[:params_num1]
# params2 = params[:params_num0] + params[params_num1:]
params1 = params[params_num0:params_num1]
params2 = params[params_num1:]
print(params1)
print(params2)
# [<tf.Variable 'output1:0' shape=(3, 4) dtype=float32_ref>]
# [<tf.Variable 'output2:0' shape=(4, 5) dtype=float32_ref>]
Next, compute the gradients of each loss with respect to its corresponding variables.
opt = tf.train.AdamOptimizer(0.01)
grads_vars = opt.compute_gradients(loss1,var_list=params1)
grads_vars2 = opt.compute_gradients(loss2,var_list=params2)
print(grads_vars)
print(grads_vars2)
# [(<tf.Tensor 'gradients/MatMul_grad/tuple/control_dependency_1:0' shape=(3, 4) dtype=float32>, <tf.Variable 'output1:0' shape=(3, 4) dtype=float32_ref>)]
# [(<tf.Tensor 'gradients_1/MatMul_1_grad/tuple/control_dependency_1:0' shape=(4, 5) dtype=float32>, <tf.Variable 'output2:0' shape=(4, 5) dtype=float32_ref>)]
Finally, we can use apply_gradients() to update the trainable variables.
train_op = opt.apply_gradients(grads_vars+grads_vars2)
Experiment
data_np = np.random.randint(0, 5, size=(100))  # ids must index rows of the [5, 3] embedding matrix
output1_np = np.random.randint(0,4,size=(100))
output2_np = np.random.randint(0,5,size=(100))
feed_dict_v = {data: data_np, output1: output1_np, output2: output2_np}
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(2):
        print("epoch:{}".format(i))
        sess.run(train_op, feed_dict=feed_dict_v)
        print("embedding_matrix value:\n", sess.run(embedding_matrix, feed_dict=feed_dict_v))
        print("output1_weights value:\n", sess.run(output1_weights, feed_dict=feed_dict_v))
        print("output2_weights value:\n", sess.run(output2_weights, feed_dict=feed_dict_v))
The result:
epoch:0
embedding_matrix value:
[[ 0.7646786 -0.44221798 -1.6374763 ]
[-0.4061512 -0.70626575 0.09637168]
[ 1.3499098 0.38479885 -0.10424987]
[-1.3999717 0.67008936 1.8843309 ]
[-0.11357951 -1.1893668 1.1205566 ]]
output1_weights value:
[[-0.22709225 0.70598644 0.10429419 -2.2737694 ]
[-0.6364337 -0.08602498 1.9750406 0.8664075 ]
[ 0.3656631 -0.25182125 -0.14689662 -0.03764082]]
output2_weights value:
[[ 0.00554644 -0.49370843 -0.75148153 0.6645286 1.0131303 ]
[ 0.21612553 0.07851358 0.05937392 -0.3236267 -0.8081816 ]
[ 0.82237226 0.17242427 -1.3059226 -1.1134574 0.22402465]
[-1.6996336 -0.58993673 -0.7071007 0.8407903 0.62416744]]
epoch:1
embedding_matrix value:
[[ 0.7646786 -0.44221798 -1.6374763 ]
[-0.4061512 -0.70626575 0.09637168]
[ 1.3499098 0.38479885 -0.10424987]
[-1.3999717 0.67008936 1.8843309 ]
[-0.11357951 -1.1893668 1.1205566 ]]
output1_weights value:
[[-0.21710345 0.6959941 0.11408082 -2.2637703 ]
[-0.64639646 -0.07603455 1.9650643 0.85640883]
[ 0.35567763 -0.24182947 -0.15682784 -0.04763966]]
output2_weights value:
[[ 0.01553426 -0.5036415 -0.7415529 0.65454334 1.003145 ]
[ 0.20613036 0.08847766 0.04942677 -0.31363514 -0.7981894 ]
[ 0.8323502 0.16245098 -1.2959852 -1.1234138 0.21408063]
[-1.6896346 -0.59990865 -0.6971453 0.8307945 0.6141711 ]]
You can see that embedding_matrix never changes, while output1_weights and output2_weights are each updated only by their own gradients.
Addendum
In fact, you can combine loss1 and loss2 on output2_weights. For example:
grads_vars3 = opt.compute_gradients(loss1+loss2,var_list=params2)
You will find that grads_vars2 and grads_vars3 are equal when loss1 and loss2 are combined by addition, because in loss1 + loss2 the gradient of loss1 with respect to output2_weights is zero (loss1 does not use output2_weights). However, grads_vars2 and grads_vars3 are not equal when loss1 and loss2 are combined by multiplication, as in the following case:
grads_vars3 = opt.compute_gradients(loss1*loss2,var_list=params2)
These cases show that we can combine losses for the corresponding trainable variables according to our own needs.
In your scenario, network_output2 depends on network_output1, so we have to split the losses and specify a var_list for each. If network_output2 did not depend on network_output1, we could directly optimize loss1 + loss2.
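For illustration, here is a minimal sketch of the independent case (the second head and the names output2_weights_b / loss2_b are hypothetical additions): because the second head branches directly off input_vectors, minimizing loss1 + loss2_b lets each head's weights receive gradients only from its own loss, while the shared embedding_matrix receives both.
output2_weights_b = tf.get_variable("output2_b", [3, 5], tf.float32, initializer=tf.random_normal_initializer())
network_output2_b = tf.matmul(input_vectors, output2_weights_b)  # does NOT use network_output1
loss2_b = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=output2, logits=network_output2_b)
# One joint minimize() is enough here: the gradient of loss2_b never reaches output1_weights.
joint_train_op = tf.train.AdamOptimizer(0.01).minimize(tf.reduce_mean(loss1) + tf.reduce_mean(loss2_b))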
About gradients
input = tf.constant([[1,2,3]],tf.float32)
label1 = tf.constant([[1,2,3,4]],tf.float32)
label2 = tf.constant([[1,2,3,4,5]],tf.float32)
weight1 = tf.reshape(tf.range(12,dtype=tf.float32),[3,4])
output1 = tf.matmul(input , weight1)
loss1 = tf.reduce_sum(output1 - label1)
weight2 = tf.reshape(tf.range(20,dtype=tf.float32),[4,5])
output2 = tf.matmul(output1 , weight2)
loss2 = tf.reduce_sum(output2 - label2)
grad1 = tf.gradients(loss1,weight1)
grad2 = tf.gradients(loss2,weight2)
grad3 = tf.gradients(loss1+loss2,weight2)
with tf.Session() as sess:
    print(sess.run(grad1))
    print(sess.run(grad2))
    print(sess.run(grad3))
# [array([[1., 1., 1., 1.],
# [2., 2., 2., 2.],
# [3., 3., 3., 3.]], dtype=float32)]
# [array([[32., 32., 32., 32., 32.],
# [38., 38., 38., 38., 38.],
# [44., 44., 44., 44., 44.],
# [50., 50., 50., 50., 50.]], dtype=float32)]
# [array([[32., 32., 32., 32., 32.],
# [38., 38., 38., 38., 38.],
# [44., 44., 44., 44., 44.],
# [50., 50., 50., 50., 50.]], dtype=float32)]
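To see the multiplication case mentioned above, the same example can be extended (grad4 is a name introduced here):
grad4 = tf.gradients(loss1*loss2, weight2)
with tf.Session() as sess:
    print(sess.run(grad4))
# By the product rule, grad4 = loss2 * d(loss1)/d(weight2) + loss1 * d(loss2)/d(weight2)
# = 0 + loss1 * grad2, i.e. grad2 scaled by the value of loss1 (154 in this example),
# so grad4 no longer equals grad2 or grad3.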
Related
I'm trying to reimplement the Categorical Cross Entropy loss from Keras so that I can customize it. I got the following
# (ops, math_ops, clip_ops, epsilon and _constant_to_tensor are the Keras backend
# internals this snippet was copied from)
from tensorflow.python.framework import ops
from tensorflow.python.ops import clip_ops, math_ops
from tensorflow.python.keras.backend import epsilon, _constant_to_tensor

def CustomCrossEntropy(output, target, axis=-1):
    target = ops.convert_to_tensor_v2_with_dispatch(target)
    output = ops.convert_to_tensor_v2_with_dispatch(output)
    target.shape.assert_is_compatible_with(output.shape)
    # scale preds so that the class probas of each sample sum to 1
    output = output / math_ops.reduce_sum(output, axis, True)
    # compute cross entropy from probabilities
    epsilon_ = _constant_to_tensor(epsilon(), output.dtype.base_dtype)
    output = clip_ops.clip_by_value(output, epsilon_, 1. - epsilon_)
    return -math_ops.reduce_sum(target * math_ops.log(output), axis)
It produces different results than the internal function which confuses me, as I just copied the code from github so far. What am I missing here?
To demonstrate:
y_true = [[0., 1., 0.], [0., 0., 1.]]
y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]
loss = tf.keras.losses.categorical_crossentropy(y_true, y_pred)
customLoss = CustomCrossEntropy(y_true, y_pred)
assert loss.shape == (2,)
print(loss)
print(customLoss)
>>tf.Tensor([0.05129331 2.3025851 ], shape=(2,), dtype=float32)
>>tf.Tensor([ 0.8059049 14.506287 ], shape=(2,), dtype=float32)
You have inverted the arguments in your definition of CustomCrossEntropy. If you double-check the source code on GitHub, you will see that the first argument is target and the second one is output. If you switch them back, you will get the expected results.
import tensorflow as tf
from tensorflow.keras.backend import categorical_crossentropy as CustomCrossEntropy
y_true = [[0., 1., 0.], [0., 0., 1.]]
y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]
y_true = tf.convert_to_tensor(y_true)
y_pred = tf.convert_to_tensor(y_pred)
loss = tf.keras.losses.categorical_crossentropy(y_true, y_pred)
print(loss)
# tf.Tensor([0.05129331 2.3025851 ], shape=(2,), dtype=float32)
loss = CustomCrossEntropy(y_true, y_pred)
print(loss)
# tf.Tensor([0.05129331 2.3025851 ], shape=(2,), dtype=float32)
loss = CustomCrossEntropy(y_pred, y_true)
print(loss)
# tf.Tensor([ 0.8059049 14.506287 ], shape=(2,), dtype=float32)
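For completeness, here is a minimal sketch of the custom function with the arguments in the (target, output) order, written with public TensorFlow ops instead of the private Keras backend helpers (the function name is my own):
import tensorflow as tf

def CustomCrossEntropyFixed(target, output, axis=-1):
    target = tf.convert_to_tensor(target)
    output = tf.convert_to_tensor(output)
    # scale preds so that the class probas of each sample sum to 1
    output = output / tf.reduce_sum(output, axis, True)
    # compute cross entropy from probabilities, clipping to avoid log(0)
    epsilon_ = tf.constant(tf.keras.backend.epsilon(), dtype=output.dtype.base_dtype)
    output = tf.clip_by_value(output, epsilon_, 1. - epsilon_)
    return -tf.reduce_sum(target * tf.math.log(output), axis)

y_true = [[0., 1., 0.], [0., 0., 1.]]
y_pred = [[0.05, 0.95, 0.], [0.1, 0.8, 0.1]]
print(CustomCrossEntropyFixed(y_true, y_pred))
# should match: tf.Tensor([0.05129331 2.3025851 ], shape=(2,), dtype=float32)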
Suppose I multiply a vector with a scalar, e.g.:
a = tf.Variable(3.)
b = tf.Variable([1., 0., 1.])
with tf.GradientTape() as tape:
    c = a*b
grad = tape.gradient(c, a)
The resulting gradient I get is a scalar,
<tf.Tensor: shape=(), dtype=float32, numpy=2.0>
whereas we would expect the vector:
<tf.Variable 'Variable:0' shape=(3,) dtype=float32, numpy=array([1., 0., 1.], dtype=float32)>
Looking at other examples, it appears that tensorflow sums the expected vector, also for scalar-matrix multiplication and so on.
Why does tensorflow do this? This can probably be avoided using @custom_gradient; is there another, less cumbersome way to get the correct gradient?
There appear to be some related questions, but these all seem to consider the gradient of a loss function that aggregates over a training batch. No loss function or aggregation is used here, so I think the issue is something else.
You're getting a scalar value because you took the gradient with respect to a scalar. You would get a vector if you took the gradient with respect to a vector. Take a look at the following example:
import tensorflow as tf
a = tf.Variable(3., trainable=True)
b = tf.Variable([1., 0, 1.], trainable=True)
c = tf.Variable(2., trainable=True)
d = tf.Variable([2., 1, 2.], trainable=True)
with tf.GradientTape(persistent=True) as tape:
    e = a*b*c*d  # abcd, abcd, abcd
    tf.print(e)
grad = tape.gradient(e, [a, b, c, d])
grad[0].numpy(), grad[1].numpy(), grad[2].numpy(), grad[3].numpy()
[12 0 12]
(8.0,
array([12., 6., 12.], dtype=float32),
12.0,
array([6., 0., 6.], dtype=float32))
Formally, what I was looking for was the differential of the vector field that is a function of the variable a. For a vector field the differential is the same as the Jacobian. It turns out that what I was looking for can be done by tape.jacobian.
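For reference, a minimal sketch of the tape.jacobian approach on the original example:
import tensorflow as tf

a = tf.Variable(3.)
b = tf.Variable([1., 0., 1.])
with tf.GradientTape() as tape:
    c = a * b
# jacobian keeps one entry dc_i/da per element of c instead of summing them
jac = tape.jacobian(c, a)
print(jac)  # tf.Tensor([1. 0. 1.], shape=(3,), dtype=float32)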
I want to do some experiments, and I need to get the Keras model weights, flatten them into a 1D array, and then restore them to their initial shapes.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np
model = tf.keras.Sequential()
# Adds a densely-connected layer with 64 units to the model:
model.add(layers.Dense( 4, input_dim = 5 ,activation='relu'))
# Add another:
model.add(layers.Dense(3, activation='relu'))
# Add an output layer with 10 output units:
model.add(layers.Dense(2))
# Configure a model for mean-squared error regression.
model.compile(optimizer=tf.keras.optimizers.Adam(0.01),
              loss='mse',       # mean squared error
              metrics=['mae'])  # mean absolute error
weights = (model.get_weights())
# make the weights a 1D array
# make the 1D array back into the initial shapes
model.set_weights(weights)
Why do I want to do this?
Because I want to do some mutation using another module, and that module requires a 1D array.
How can I do this?
As we know, Keras model weights look like this:
[array([[-0.24053234, 0.4722855 , 0.29863954, 0.22805429],
[ 0.45101106, -0.00229341, -0.6142864 , -0.2751704 ],
[ 0.159172 , 0.43983865, 0.61577237, 0.24255097],
[ 0.24160242, 0.422235 , 0.8066592 , -0.2711717 ],
[-0.30763668, -0.4841219 , 0.767977 , 0.23558974]],
dtype=float32), array([0., 0., 0., 0.], dtype=float32), array([[ 0.24129152, -0.4890638 , 0.18787515],
[ 0.8663894 , -0.09163451, -0.86416066],
[-0.01754427, 0.32654428, -0.78837514],
[ 0.589849 , 0.5886531 , 0.27824092]], dtype=float32), array([0., 0., 0.], dtype=float32), array([[ 0.8456359 , -0.26292562],
[-1.0447757 , -0.43539298],
[ 1.0835328 , -0.43536085]], dtype=float32), array([0., 0.], dtype=float32)]
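One way to do this (a sketch; the helper names flatten_weights and unflatten_weights are my own) is to remember each array's shape, concatenate the flattened pieces, and split and reshape them on the way back:
import numpy as np

def flatten_weights(weights):
    # remember each array's shape so the flat vector can be restored later
    shapes = [w.shape for w in weights]
    flat = np.concatenate([w.ravel() for w in weights])
    return flat, shapes

def unflatten_weights(flat, shapes):
    # cut the flat vector back into pieces and reshape each one
    weights, index = [], 0
    for shape in shapes:
        size = int(np.prod(shape))
        weights.append(flat[index:index + size].reshape(shape))
        index += size
    return weights

flat, shapes = flatten_weights(model.get_weights())
# ... mutate `flat` with the other module here ...
model.set_weights(unflatten_weights(flat, shapes))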
When I use the assign method of tf.Variable to change the value of a variable, it breaks the tf.GradientTape gradient; e.g., see the code for a toy example below:
(NOTE: I am interested in TensorFlow 2 only.)
x = tf.Variable([[2.0,3.0,4.0], [1.,10.,100.]])
patch = tf.Variable([[0., 1.], [2., 3.]])
with tf.GradientTape() as g:
    g.watch(patch)
    x[:2,:2].assign(patch)
    y = tf.tensordot(x, tf.transpose(x), axes=1)
    o = tf.reduce_mean(y)
do_dpatch = g.gradient(o, patch)
Then it gives me None for the do_dpatch.
Note that if I do the following it works perfectly fine:
x = tf.Variable([[2.0,3.0,4.0], [1.,10.,100.]])
patch = tf.Variable([[0., 1.], [2., 3.]])
with tf.GradientTape() as g:
    g.watch(patch)
    x[:2,:2].assign(patch)
    y = tf.tensordot(x, tf.transpose(x), axes=1)
    o = tf.reduce_mean(y)
do_dx = g.gradient(o, x)
and gives me:
>>>do_dx
<tf.Tensor: id=106, shape=(2, 3), dtype=float32, numpy=
array([[ 1., 2., 52.],
[ 1., 2., 52.]], dtype=float32)>
This behavior does make sense. Let's take your first example
x = tf.Variable([[2.0,3.0,4.0], [1.,10.,100.]])
patch = tf.Variable([[1., 1.], [1., 1.]])
with tf.GradientTape() as g:
    g.watch(patch)
    x[:2,:2].assign(patch)
    y = tf.tensordot(x, tf.transpose(x), axes=1)
dy_dpatch = g.gradient(y, patch)
You are computing dy/d(patch). But your y depends on x only not on patch. Yes, you do assign values to x from patch. But this operation doesn't carry a reference to the patch Variable. It just copies the values.
In short, you are trying to get a gradient w.r.t something it doesn't depend on. So you will get None.
Let's look at the second example and why it works.
x = tf.Variable([[2.0,3.0,4.0], [1.,10.,100.]])
with tf.GradientTape() as g:
    g.watch(x)
    x[:2,:2].assign([[1., 1.], [1., 1.]])
    y = tf.tensordot(x, tf.transpose(x), axes=1)
dy_dx = g.gradient(y, x)
This example is perfectly fine. y depends on x, and you are computing dy/dx, so you get actual gradients in this example.
As explained HERE (see the quote below from alextp), tf.assign does not support gradients.
"There is no plan to add a gradient to tf.assign because it's not possible in general to connect the uses of the assigned variable with the graph which assigned it."
So, the above problem can be resolved by the following code:
x = tf.Variable([[0.0, 0.0, 4.0], [0., 0., 100.]])
patch = tf.Variable([[0., 1.], [2., 3.]])

with tf.GradientTape() as g:
    g.watch(patch)
    padding = tf.constant([[0, 0], [0, 1]])
    padded_patch = tf.pad(patch, padding, mode='CONSTANT', constant_values=0)
    revised_x = x + padded_patch
    y = tf.tensordot(revised_x, tf.transpose(revised_x), axes=1)
    o = tf.reduce_mean(y)
do_dpatch = g.gradient(o, patch)
which results in
do_dpatch
<tf.Tensor: id=65, shape=(2, 2), dtype=float32, numpy=
array([[1., 2.],
[1., 2.]], dtype=float32)>
I tried to build a simple MLP with an input layer (2 neurons), a hidden layer (5 neurons) and an output layer (1 neuron). I planned to train and feed it with [[0., 0.], [0., 1.], [1., 0.], [1., 1.]] for getting the desired output of [0., 1., 1., 0.] (elementwise).
Unfortunately my code refuses to run. I keep getting dimensionality errors no matter what I try. Quite frustrating :/ I think I'm missing something but I cannot figure out what is wrong.
For better readability I also uploaded the code to a pastebin: code
Any ideas?
import tensorflow as tf
#####################
# preparation stuff #
#####################
# define input and output data
input_data = [[0., 0.], [0., 1.], [1., 0.], [1., 1.]] # XOR input
output_data = [0., 1., 1., 0.] # XOR output
# create a placeholder for the input
# None indicates a variable batch size for the input
# one input's dimension is [1, 2]
n_input = tf.placeholder(tf.float32, shape=[None, 2])
# number of neurons in the hidden layer
hidden_nodes = 5
################
# hidden layer #
################
b_hidden = tf.Variable(0.1) # hidden layer's bias neuron
W_hidden = tf.Variable(tf.random_uniform([hidden_nodes, 2], -1.0, 1.0)) # hidden layer's weight matrix
# initialized with a uniform distribution
hidden = tf.sigmoid(tf.matmul(W_hidden, n_input) + b_hidden) # calc hidden layer's activation
################
# output layer #
################
W_output = tf.Variable(tf.random_uniform([hidden_nodes, 1], -1.0, 1.0)) # output layer's weight matrix
output = tf.sigmoid(tf.matmul(W_output, hidden)) # calc output layer's activation
############
# learning #
############
cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(output, n_input) # calc cross entropy between current
# output and desired output
loss = tf.reduce_mean(cross_entropy) # mean the cross_entropy
optimizer = tf.train.GradientDescentOptimizer(0.1) # take a gradient descent for optimizing with a "stepsize" of 0.1
train = optimizer.minimize(loss) # let the optimizer train
####################
# initialize graph #
####################
init = tf.initialize_all_variables()
sess = tf.Session() # create the session and therefore the graph
sess.run(init) # initialize all variables
# train the network
for epoch in xrange(0, 201):
    sess.run(train)  # run the training operation
    if epoch % 20 == 0:
        print("step: {:>3} | W: {} | b: {}".format(epoch, sess.run(W_hidden), sess.run(b_hidden)))
EDIT: I am still getting errors :/
hidden = tf.sigmoid(tf.matmul(n_input, W_hidden) + b_hidden)
outputs line 27 (...) ValueError: Dimensions Dimension(2) and Dimension(5) are not compatible. Altering the line to:
hidden = tf.sigmoid(tf.matmul(W_hidden, n_input) + b_hidden)
seems to be working, but then the error appears in:
output = tf.sigmoid(tf.matmul(hidden, W_output))
telling me: line 34 (...) ValueError: Dimensions Dimension(2) and Dimension(5) are not compatible
Turning the statement to:
output = tf.sigmoid(tf.matmul(W_output, hidden))
also throws an exception: line 34 (...) ValueError: Dimensions Dimension(1) and Dimension(5) are not compatible.
EDIT2: I do not really understand this. Shouldn't hidden be W_hidden x n_input.T, since dimensionally this would be (5, 2) x (2, 1)? If I transpose n_input, hidden still works (I don't even understand why it works without a transpose at all). However, output keeps throwing errors, but dimensionally this operation should be (1, 5) x (5, 1)?!
(0) It's helpful to include the error output - it's also a useful thing to look at, because it does identify exactly where you were having shape problems.
(1) The shape errors arose because you have the arguments to matmul backwards in both of your matmuls, and the tf.Variable shapes backwards as well. The general rule is that the weights for a layer with input_size, output_size should have shape [input_size, output_size], and the matmul should be tf.matmul(input_to_layer, weights_for_layer) (and then add the biases, which have shape [output_size]).
So with your code,
W_hidden = tf.Variable(tf.random_uniform([hidden_nodes, 2], -1.0, 1.0))
should be:
W_hidden = tf.Variable(tf.random_uniform([2, hidden_nodes], -1.0, 1.0))
and
hidden = tf.sigmoid(tf.matmul(W_hidden, n_input) + b_hidden)
should be tf.matmul(n_input, W_hidden); and
output = tf.sigmoid(tf.matmul(W_output, hidden))
should be tf.matmul(hidden, W_output)
(2) Once you've fixed those bugs, your run needs to be fed a feed_dict:
sess.run(train)
should be:
sess.run(train, feed_dict={n_input: input_data})
At least, I presume that this is what you're trying to achieve.
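Putting those fixes together, a minimal corrected sketch might look like the following. Note this goes slightly beyond points (1) and (2): the original loss line compares the output against the network input, so a label placeholder (here called n_output, my own addition) is needed as well, and the raw logits rather than the sigmoid output are passed to sigmoid_cross_entropy_with_logits.
import tensorflow as tf

input_data = [[0., 0.], [0., 1.], [1., 0.], [1., 1.]]
output_data = [[0.], [1.], [1.], [0.]]                   # labels reshaped to [batch, 1]

n_input = tf.placeholder(tf.float32, shape=[None, 2])
n_output = tf.placeholder(tf.float32, shape=[None, 1])   # hypothetical label placeholder

hidden_nodes = 5
W_hidden = tf.Variable(tf.random_uniform([2, hidden_nodes], -1.0, 1.0))  # [input_size, output_size]
b_hidden = tf.Variable(0.1)
hidden = tf.sigmoid(tf.matmul(n_input, W_hidden) + b_hidden)             # (batch, 2) x (2, 5)

W_output = tf.Variable(tf.random_uniform([hidden_nodes, 1], -1.0, 1.0))
logits = tf.matmul(hidden, W_output)                                      # (batch, 5) x (5, 1)

loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=n_output, logits=logits))
train = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(201):
        _, current_loss = sess.run([train, loss],
                                   feed_dict={n_input: input_data, n_output: output_data})
        if epoch % 20 == 0:
            print("step: {:>3} | loss: {}".format(epoch, current_loss))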