I am trying to build a neural network with custom activation functions. I followed the solution given here, and it works when the input and output vectors have the same size, but not when they have different sizes (as in a pooling function). Here is my problem so far:
I am trying to generalize this to the case where the input and the output have different sizes. In my code the input x has size (2,4), the output y has size (1,2), and the activation function MEX(.) does the mapping y = MEX(x). I have computed the gradient of MEX() as d_MEX(), where d_MEX(x) has the same size as x, that is (2,4). Nevertheless, I get this error:
InvalidArgumentError (see above for traceback): Incompatible shapes: [1,2] vs. [2,4]
Shouldn't the gradient of MEX(x) be of the same size as x? Here is my complete code:
import tensorflow as tf
import numpy as np

# This is our target function
def MEX(x):
    '''
    :param x: is a row vector which is the concatenation of [input, beta]
    :return MEX_{beta}(x): scalar output
    '''
    # lenx = np.size(x) # Number of columns (ROW vector)
    lenx = x.shape[1]
    N = x.shape[0]
    out = np.zeros((1, N))
    for ii in range(N):
        c = x[ii, 0:lenx-1]
        beta = x[ii, lenx-1]
        out[0, ii] = 1./beta * np.log(np.mean(np.exp(beta*c)))
    return np.array(out)
# Now we should write its derivative.
def d_MEX(x):
    # lenx = np.size(x) # Number of columns (ROW vector)
    lenx = x.shape[1]
    N = x.shape[0]
    out = np.zeros((N, lenx))
    for ii in range(N):
        c = x[ii, 0:lenx-1]
        beta = x[ii, lenx-1]
        d_beta = np.array([0.])
        d_beta[0] = -1./beta*( MEX(np.array([x[ii, :]])) - np.mean(np.multiply(c, np.exp(beta*c)))/np.mean(np.exp(beta*c)) )
        d_c = 1./lenx*np.exp(beta*c) / np.mean(np.exp(beta*c))
        out[ii, :] = np.concatenate((d_c, d_beta), axis=0)
    return out
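# (Not part of the original code: a quick shape check with illustrative values,
#  showing that a (2,4) input maps to a (1,2) output while d_MEX keeps the
#  input's (2,4) shape.)
x_check = np.array([[0.2, 0.7, 1.2, 1.7],
                    [0.2, 0.7, 1.2, 1.7]])
print(MEX(x_check).shape)    # (1, 2)
print(d_MEX(x_check).shape)  # (2, 4)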
# The first step is making it into a numpy function, this is easy:
np_MEX = np.vectorize(MEX, excluded=['x'])  # IMPORTANT!! Otherwise np.vectorize() doesn't work
np_d_MEX = np.vectorize(d_MEX, excluded=['x'])  # IMPORTANT!! Otherwise np.vectorize() doesn't work

# Now we make a tensorflow function
'''
Turning a numpy function into a tensorflow function: We will start by making np_d_MEX_32 into a tensorflow function.
There is a function in tensorflow, tf.py_func(func, inp, Tout, stateful=stateful, name=name) [doc],
which transforms any numpy function into a tensorflow function, so we can use it:
'''
np_d_MEX_32 = lambda x: np_d_MEX(x=x).astype(np.float32)

def tf_d_MEX(x, name=None):
    with tf.name_scope(name, "d_MEX", [x]) as name:
        y = tf.py_func(np_d_MEX_32,
                       [x],
                       [tf.float32],
                       name=name,
                       stateful=False)
        return y[0]
'''
tf.py_func acts on lists of tensors (and returns a list of tensors), that is why we have [x] (and return y[0]).
The stateful option is to tell tensorflow whether the function always gives the same output for the same input (stateful = False)
in which case tensorflow can simplify the graph; this is our case and will probably be the case in most situations.
One thing to be careful of at this point is that numpy uses float64 but tensorflow uses float32, so you need to convert
your function to use float32 before you can convert it to a tensorflow function otherwise tensorflow will complain.
This is why we need to make np_d_MEX_32 first.
What about the Gradients? The problem with only doing the above is that even though we now have tf_d_MEX which is the
tensorflow version of np_d_MEX, we couldn't use it as an activation function if we wanted to because tensorflow doesn't
know how to calculate the gradients of that function.
Hack to get Gradients: As explained in the sources mentioned above, there is a hack to define gradients of a function
using tf.RegisterGradient [doc] and tf.Graph.gradient_override_map [doc]. Copying the code from harpone we can modify
the tf.py_func function to make it define the gradient at the same time:
'''
def py_func(func, inp, Tout, stateful=True, name=None, grad=None):
    # Need to generate a unique name to avoid duplicates:
    rnd_name = 'PyFuncGrad' + str(np.random.randint(0, 1E+8))
    tf.RegisterGradient(rnd_name)(grad)  # see _MySquareGrad for grad example
    g = tf.get_default_graph()
    with g.gradient_override_map({"PyFunc": rnd_name}):
        return tf.py_func(func, inp, Tout, stateful=stateful, name=name)
'''
Now we are almost done. The only thing left is that the grad function we pass to the above py_func needs to
take a special form: it takes in an operation and the gradient arriving from downstream of the operation, and it
propagates the gradient backward through the operation.
Gradient Function: So for our MEX activation function this is how we would do it:
'''
def MEXgrad(op, grad):
    x = op.inputs[0]
    # x = op
    n_gr = tf_d_MEX(x)
    return grad * n_gr
'''
The activation function has only one input, that is why x = op.inputs[0]. If the operation had many inputs, we would
need to return a tuple, one gradient for each input. For example, if the operation was a - b, the gradient with respect to a
is +1 and with respect to b is -1, so we would have return +1*grad, -1*grad. Notice that we need to return tensorflow
functions of the input; that is why we need tf_d_MEX, and np_d_MEX would not have worked because it cannot act on
tensorflow tensors. Alternatively, we could have written the derivative using tensorflow functions:
'''
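'''
(A rough sketch of that alternative, not from the original post: np_d_MEX transcribed to
native tensorflow ops. It assumes the same layout as above, i.e. the last column of x is
beta and the rest is c, and a TF 1.x version recent enough to support keepdims in tf.reduce_mean.)
'''
def tf_d_MEX_native(x):
    lenx = tf.cast(tf.shape(x)[1], tf.float32)
    c = x[:, :-1]                                       # (N, lenx-1)
    beta = x[:, -1:]                                    # (N, 1)
    e = tf.exp(beta * c)
    mean_e = tf.reduce_mean(e, axis=1, keepdims=True)
    mex = 1. / beta * tf.log(mean_e)
    d_c = 1. / lenx * e / mean_e
    d_beta = -1. / beta * (mex - tf.reduce_mean(c * e, axis=1, keepdims=True) / mean_e)
    return tf.concat([d_c, d_beta], axis=1)             # same shape as x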
# Combining it all together: Now that we have all the pieces, we can combine them all together:
np_MEX_32 = lambda x: np_MEX(x=x).astype(np.float32)

def tf_MEX(x, name=None):
    with tf.name_scope(name, "MEX", [x]) as name:
        y = py_func(np_MEX_32,
                    [x],
                    [tf.float32],
                    name=name,
                    grad=MEXgrad)  # <-- here's the call to the gradient
        return y[0]
with tf.Session() as sess:
    x = tf.constant([[0.2, 0.7, 1.2, 1.7], [0.2, 0.7, 1.2, 1.7]])
    y = tf_MEX(x)
    tf.global_variables_initializer().run()
    print(x.eval(), y.eval(), tf.gradients(y, [x])[0].eval())
In the console, I have checked that the variables have the "correct" shapes:
x.eval()
Out[9]:
array([[ 0.2 , 0.69999999, 1.20000005, 1.70000005],
[ 0.2 , 0.69999999, 1.20000005, 1.70000005]], dtype=float32)
y.eval()
Out[10]: array([[ 0.83393127, 0.83393127]], dtype=float32)
tf_d_MEX(x).eval()
Out[11]:
array([[ 0.0850958 , 0.19909413, 0.46581003, 0.07051659],
[ 0.0850958 , 0.19909413, 0.46581003, 0.07051659]], dtype=float32)
My bad, I just found the mistake.
It's here:

def MEXgrad(op, grad):
    x = op.inputs[0]
    # x = op
    n_gr = tf_d_MEX(x)
    return n_gr

The incoming grad has the shape of the output, [1,2], while n_gr has the shape of the input, [2,4], so the elementwise product grad * n_gr is what triggered the incompatible-shapes error. I wonder if there is a typo in the answer I followed, since this same mistake appears there.
Related
My model predicts values by minimising a loss function L. But the loss function doesn't have a single global minimum; rather, there are a large number of places where it achieves the global minimum.
So, the model is set up like this:
Model input is an [nXn] tensor (let's say: inp=[ [i_11, i_12, i_13, ..., i_1n],[i_21, i_22, ..., i_2n],...,[i_n1,i_n2, ..., i_nn] ]) and model output is an [nX1] tensor (let's say: out1=[o_1, o_2,..., o_n ])
The output tensor out1 is passed through a function f to get out2 (let's say: f(o_1, o_2, o_3,..., o_n)=[O_1, O_2, O_3, ..., O_n] )
The difference between these two values (i.e., out1 and out2) is minimised using MSELoss, i.e., Loss = ||out1 - out2||
Now, there are a lot of values of [o_1, o_2, ..., o_n] for which the Loss reaches its minimum.
But I want the values of [o_1, o_2, ..., o_n] for which |o_1| + |o_2| + |o_3| + ... + |o_n| is maximum.
Right now, the weights are initialised randomly:
self.weight = torch.nn.parameter.Parameter(torch.FloatTensor(in_features, out_features)) for some value of in_features and out_features
But by doing this, I am getting the values of [o_1, o_2, ..., o_n] for which |o_1| + |o_2| + |o_3| + ... + |o_n| is minimum.
I know this problem can be solved without using deep learning, but I am trying to get results like this for some task computation.
Is there a way to change this to get the largest values predicted at the output of the neural net?
Or is there any other technique (backpropagation change) to change it to get the desired largest valued output?
Thanks in advance.
EDIT 1:
Based on the answer, out1=[o_1, o_2,..., o_n] is tending to a zero-valued tensor. In the initial epochs, out2=[O_1, O_2, O_3, ..., O_n] takes very large values, but it subsequently comes down to lower values.
A snippet of code below will give the idea:
import time  # needed for the timing printout in train_model

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np

class Model(nn.Module):
    def __init__(self, inp_l, hid_l, out_l=1):
        super(Model, self).__init__()
        self.lay1 = nn.Linear(inp_l, hid_l)
        self.lay2 = nn.Linear(hid_l, out_l)
        self.dp = nn.Dropout(p=0.5)

    def forward(self, inp):
        self.out1 = torch.tensor([]).float()
        for row in range(inp.shape[0]):  # iterate over the rows of the input batch
            y = self.lay1(inp[row])
            y = F.relu(y)
            y = self.dp(y.float())
            y = self.lay2(y)
            y = F.relu(y)
            self.out1 = torch.cat((self.out1, y))
        return self.out1.view(inp.shape[0], -1)

def function_f(inp, out1):
    '''
    Some functional computation is done to return out2.
    (Placeholder in this snippet; out2 is computed from inp and out1 in the real code.)
    '''
    return out2

def train_model(epoch):
    model.train()
    t = time.time()
    optimizer.zero_grad()
    out1 = model(inp)
    out2 = function_f(inp, out1)
    loss1 = ((out1 - out2)**2).mean()
    loss2 = -out1.abs().mean()
    loss_train = loss1 + loss2
    loss_train.backward(retain_graph=True)
    optimizer.step()
    if epoch % 40 == 0:
        print('Epoch: {:04d}'.format(epoch+1),
              'loss_train: {:.4f}'.format(loss_train.item()),
              'time: {:.4f}s'.format(time.time() - t))

model = Model(inp_l=10, hid_l=5, out_l=1)
optimizer = optim.Adam(model.parameters(), lr=0.001)
inp = torch.randint(100, (10, 10)).float()  # cast to float so nn.Linear accepts it
for ep in range(100):
    train_model(ep)
But the out1 value goes to the trivial solution, i.e., a zero-valued tensor, which is the minimum-valued solution. As mentioned before the EDIT, I want to get the max-valued solution instead.
Thank you.
I am not sure I understand what you want.
Your weight initialization is overly complicated as well, you may just do:
self.weight = torch.nn.Linear(in_features, out_features)
If you want to have the largest value of a batch of inputs you may simply do:
y = self.weight(x)
return y.max(dim=0)[0]
But I am not entirely sure that is what you meant with your question.
EDIT:
It seems you have two objectives. The first thing I would try is to convert both of them in losses to be minimized by the optimizer.
loss1 = MSE(out1, out2)
loss2 = - out1.abs().mean()
loss = loss1 + loss2
Minimizing loss will simultaneously minimize the MSE between out1 and out2 and maximize the absolute values of out1 (minimizing -out1.abs().mean() is the same as maximizing out1.abs().mean()).
Notice that it is possible your neural net will just create large biases and zero the weights as a lazy solution to the objective. You may turn off biases to avoid the problem, but I would still expect some other training problems.
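For reference, turning off the biases would just mean passing bias=False to the linear layers; a small illustrative fragment using the layer names from the snippet in the question:

self.lay1 = nn.Linear(inp_l, hid_l, bias=False)
self.lay2 = nn.Linear(hid_l, out_l, bias=False)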
Background
In Tensorflow 2, there exists a class called GradientTape which is used to record operations on tensors, the result of which can then be differentiated and fed to some minimization algorithm. For example, from the documentation we have this example:
x = tf.constant(3.0)
with tf.GradientTape() as g:
    g.watch(x)
    y = x * x
dy_dx = g.gradient(y, x)  # Will compute to 6.0
The docstring for the gradient method implies that the first argument can be not just a tensor, but a list of tensors:
def gradient(self,
             target,
             sources,
             output_gradients=None,
             unconnected_gradients=UnconnectedGradients.NONE):
    """Computes the gradient using operations recorded in context of this tape.

    Args:
      target: a list or nested structure of Tensors or Variables to be
        differentiated.
      sources: a list or nested structure of Tensors or Variables. `target`
        will be differentiated against elements in `sources`.
      output_gradients: a list of gradients, one for each element of
        target. Defaults to None.
      unconnected_gradients: a value which can either hold 'none' or 'zero' and
        alters the value which will be returned if the target and sources are
        unconnected. The possible values and effects are detailed in
        'UnconnectedGradients' and it defaults to 'none'.

    Returns:
      a list or nested structure of Tensors (or IndexedSlices, or None),
      one for each element in `sources`. Returned structure is the same as
      the structure of `sources`.

    Raises:
      RuntimeError: if called inside the context of the tape, or if called more
        than once on a non-persistent tape.
      ValueError: if the target is a variable or if unconnected gradients is
        called with an unknown value.
    """
In the above example, it is easy to see that y, the target, is the function to be differentiated, and x is the dependent variable the "gradient" is taken with respect to.
From my limited experience, it appears that the gradient method returns a list of tensors, one per each element of sources, and each of these gradients is a tensor that is the same shape as the corresponding member of sources.
Question
The above description of the behavior of gradients makes sense if target contains a single 1x1 "tensor" to be differentiated, because mathematically a gradient vector should be the same dimension as the domain of the function.
However, if target is a list of tensors, each output of gradient still has the same shape as the corresponding source. Why is this the case? If target is thought of as a list of functions, shouldn't the output resemble something like a Jacobian? How am I to interpret this behavior conceptually?
This is how tf.GradientTape().gradient() is defined. It has the same functionality as the tf.gradients(), except that the latter can't be used in eager mode. From the docs of tf.gradients():
It returns a list of Tensor of length len(xs) where each tensor is the sum(dy/dx) for y in ys
where xs are sources and ys are target.
Example 1:
So let's say target = [y1, y2] and sources = [x1, x2]. The result will be:
[dy1/dx1 + dy2/dx1, dy1/dx2 + dy2/dx2]
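A small runnable illustration of Example 1 (my own numbers, assuming TF 2.x):

import tensorflow as tf

x1 = tf.constant(2.0)
x2 = tf.constant(3.0)
with tf.GradientTape() as g:
    g.watch(x1)
    g.watch(x2)
    y1 = x1 * x2   # dy1/dx1 = x2 = 3, dy1/dx2 = x1 = 2
    y2 = x1 * x1   # dy2/dx1 = 2*x1 = 4, dy2/dx2 = 0
grads = g.gradient([y1, y2], [x1, x2])
print([t.numpy() for t in grads])  # [7.0, 2.0] == [dy1/dx1 + dy2/dx1, dy1/dx2 + dy2/dx2]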
Example 2:
Compute gradients for loss-per-sample (tensor) vs reduced loss (scalar)
Let w, b be two variables.
xentropy = [y1, y2]                 # tensor
reduced_xentropy = 0.5 * (y1 + y2)  # scalar

grads = [dy1/dw + dy2/dw, dy1/db + dy2/db]
reduced_grads = [d(reduced_xentropy)/dw, d(reduced_xentropy)/db]
              = [d(0.5 * (y1 + y2))/dw, d(0.5 * (y1 + y2))/db]
              = 0.5 * grads
Tensorflow example of the above snippet:
import tensorflow as tf
print(tf.__version__)  # 2.1.0

inputs = tf.convert_to_tensor([[0.1, 0], [0.5, 0.51]])  # two two-dimensional samples
w = tf.Variable(initial_value=inputs)
b = tf.Variable(tf.zeros((2,)))
labels = tf.convert_to_tensor([0, 1])

def forward(inputs, labels, var_list):
    w, b = var_list
    logits = tf.matmul(inputs, w) + b
    xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=labels, logits=logits)
    return xentropy

# `xentropy` has two elements (gradient of a tensor: one
# loss value per sample in the batch)
with tf.GradientTape() as g:
    xentropy = forward(inputs, labels, [w, b])
    reduced_xentropy = tf.reduce_mean(xentropy)
grads = g.gradient(xentropy, [w, b])
print(xentropy.numpy())  # [0.6881597  0.71584916]
print(grads[0].numpy())  # [[ 0.20586157 -0.20586154]
                         #  [ 0.2607238  -0.26072377]]

# `reduced_xentropy` is a scalar (gradient of a scalar)
with tf.GradientTape() as g:
    xentropy = forward(inputs, labels, [w, b])
    reduced_xentropy = tf.reduce_mean(xentropy)
grads_reduced = g.gradient(reduced_xentropy, [w, b])
print(reduced_xentropy.numpy())   # 0.70200443 <-- scalar
print(grads_reduced[0].numpy())   # [[ 0.10293078 -0.10293077]
                                  #  [ 0.1303619  -0.13036188]]
If you compute loss (xentropy) for each element in a batch the final gradients of each variable will be the sum of all gradients for each sample in a batch (which makes sense).
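As an aside (not part of the original answer, assuming TF 2.x): if you want the per-sample derivatives rather than their sum, GradientTape also exposes a jacobian() method:

# Per-sample derivatives instead of the summed gradient (sketch reusing `forward` above).
with tf.GradientTape() as g:
    xentropy = forward(inputs, labels, [w, b])
jac_w = g.jacobian(xentropy, w)
print(jac_w.shape)  # (2, 2, 2): one (2, 2) gradient w.r.t. `w` per sample in the batch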
I've attempted converting a Python-side training loop to Tensorflow to (hypothetically) make the code run faster, by not having to pass control back to the CPU constantly. However, I can't manage to use tf.while_loop.
Here's the code that works:
import numpy as np
import tensorflow as tf
from tqdm import tqdm
from sklearn.datasets import load_iris
from sklearn.preprocessing import RobustScaler

x, y = load_iris(True)
x = RobustScaler().fit_transform(x)

shape = (10, 10)
max_epochs = 1000

graph = tf.Graph()
sess = tf.Session(graph=graph)
x = x.astype(np.float64)

# Construct graph
with graph.as_default():
    weights = tf.get_variable(
        'weights', shape, initializer=tf.constant_initializer, dtype=tf.float64
    )
    curr_epoch = tf.placeholder(dtype=tf.int64, shape=())

    with tf.name_scope('data'):
        data = tf.data.Dataset.from_tensor_slices(x)
        data = data.shuffle(buffer_size=10000)
        data = data.repeat(max_epochs)
        data = data.batch(1)
        data = data.make_one_shot_iterator().get_next()

    with tf.name_scope('update'):
        update_op = make_update_op(weights)

    init = tf.global_variables_initializer()

sess.run(init)

for i in tqdm(range(max_epochs)):
    for _ in range(x.shape[0]):
        sess.run(update_op, feed_dict={
            curr_epoch: i
        })

np_weights = sess.run(weights)
print(np_weights)  # Correctly prints an array of 150's.
Now, if I create an update function to pass tf.while_loop, an error is thrown.
def make_update_op(w):
    return w.assign(
        w + 0.001
    )

# In the code above:
update_op = tf.while_loop(lambda _: True, make_update_op, (weights,), maximum_iterations=x.shape[0])

# No inner loop:
for i in tqdm(range(max_epochs)):
    sess.run(update_op, feed_dict={
        curr_epoch: i
    })

Line 22, in make_update_op
    return w.assign(
AttributeError: 'Tensor' object has no attribute 'assign'
I don't quite understand what is happening even after reading the documentation. weights is a Variable after all. What could be done to correctly make the training loop?
The tensor that you're trying to assign a new value to inside the while loop is the result of a sequence of operations and tensors (an operation is a node in the graph, while a tensor is a directed edge). In particular, the while loop will produce:
Variable/Read-->while/Enter-->while/Merge-->while/Switch-->while/Identity
What you're trying to assign here is a tensor while/Identity.
tf.while_loop is usually used to iterate over the dimensions of a tensor (including None, the unknown dimension). You're trying to iterate over variables that are fully defined, so you don't need a tf.while_loop for that. Just create operations that update each variable and group these operations together:
update_ops = [w.assign(w + 0.001) for w in weights]
update_op = tf.group(update_ops)
Now, when you execute the update_op with tf.Session() interface it will update all variables.
Example:
import tensorflow as tf

v1 = tf.Variable(tf.ones((1, 2), dtype=tf.float32))
v2 = tf.Variable(2*tf.ones((1, 3), dtype=tf.float32))

update_ops = [w.assign(w + 0.001) for w in [v1, v2]]
update_op = tf.group(update_ops)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print('before update:')
    print(v1.eval(), v2.eval())
    print('after update:')
    sess.run(update_op)  # <-- update your variables
    print(v1.eval(), v2.eval())

# before update:
# [[1. 1.]] [[2. 2. 2.]]
# after update:
# [[1.001 1.001]] [[2.001 2.001 2.001]]
Turns out, all that was missing was the fact that one cannot assign to a variable inside a while loop, as Vlad pointed out. Instead, one can return the new value of the variable.
def make_update_op(w):
    return w + 0.001

new_w = tf.while_loop(lambda _: True, make_update_op, (weights,), maximum_iterations=x.shape[0])
update_op = weights.assign(new_w)
To use more variables one would need to return the same amount from the function and unpack them in Python, but the principle is the same.
def make_update_op(w, d):
    return w + 0.001, d

new_w, _ = tf.while_loop(lambda *_: True, make_update_op, (weights, data), maximum_iterations=x.shape[0])
update_op = weights.assign(new_w)
I am trying to use the Keras.backend ops to write a function that I will wrap as a Lambda to use in my model.
There are two tensors, X and Y. X is not trainable. Y is trainable.
The python function that is wrapped is:
import keras.backend as K
from keras.activations import softmax

def _attention(inputs):
    X, Y = inputs
    attention_weight = K.dot(X, K.expand_dims(Y))
    attention_weight = K.squeeze(attention_weight, axis=-1)
    attention_weight = softmax(attention_weight, axis=-1)
    return attention_weight
which I wanted to wrap as:
Y = K.random_normal_variable(shape=(200,), mean=0.0, scale=1.0)
attend = Lambda(_attention)
attention = attend((X,Y))
When I call:
model = Model(inputs=[input], outputs=[attention])
I receive the message
ValueError: Output tensors to a Model must be the output of a TensorFlow `Layer` (thus holding past layer metadata). Found: Tensor("lambda_2/Softmax:0", shape=(?, ?), dtype=float32)
Do I really need to make a custom layer for the expand_dims, dot product, and squeeze method? I know I could always reshape Y from (dim,) -> (dim,1) but I am still stuck with the squeeze.
I am creating a tf.Variable() and then a simple function using that variable. Then I flatten the original variable using tf.reshape(), and I take the tf.gradients() between the function and the flattened variable. Why does that return [None]?
var = tf.Variable(np.ones((5,5)), dtype = tf.float32)
f = tf.reduce_sum(tf.reduce_sum(tf.square(var)))
var_f = tf.reshape(var, [-1])
print(tf.gradients(f, var_f))
The above code block, when executed, returns [None]. Is this a bug? Please help!
You are finding the derivative of f with respect to var_f, but f is not a function of var_f; it is a function of var. The reshape produces a new tensor downstream of var, and f is never computed from it, so there is no path from var_f to f in the graph and tf.gradients returns [None]. Now if you change the code to:
var = tf.Variable(np.ones((5,5)), dtype = tf.float32)
var_f = tf.reshape(var, [-1])
f = tf.reduce_sum(tf.reduce_sum(tf.square(var_f)))
grad = tf.gradients(f,var_f)
print(grad)
your gradients will be defined:
[<tf.Tensor 'gradients_28/Square_32_grad/mul_1:0' shape=(25,) dtype=float32>]
The visualization of the graphs for the following code is given below:
var = tf.Variable(np.ones((5,5)), dtype = tf.float32, name='var')
f = tf.reduce_sum(tf.reduce_sum(tf.square(var)), name='f')
var_f = tf.reshape(var, [-1], name='var_f')
grad_1 = tf.gradients(f,var_f, name='grad_1')
grad_2 = tf.gradients(f,var, name='grad_2')
The gradient grad_1 is not defined, while grad_2 is. The back-propagation (gradient) graphs of the two are shown in the visualization.
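A quick check of the above (a minimal sketch, reusing the variables defined in the snippet):

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(grad_1)                     # [None]: f does not depend on var_f
    print(sess.run(grad_2[0]).shape)  # (5, 5): df/dvar has the same shape as var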