How to create an optimizer in TensorFlow - python

I want to write a new optimization algorithm for my network on TensorFlow. I am hoping to implement the Levenberg-Marquardt optimization algorithm, which is now excluded from the TF API. I found poor documentation on how to write a custom optimizer, so I am asking whether someone can give me any advice. Thanks.

The simplest example of an optimizer is probably the gradient descent optimizer. It shows how one creates an instance of the basic optimizer class. The optimizer base class documentation explains what the methods do.
The python side of the optimizers adds new nodes to the graph that compute and apply the gradients being back-propagated. It supplies the parameters that get passed to the ops and does some of the high-level management of the optimizer. Then, you need the actual "Apply" op.
Ops have both a Python and a C++ component. Writing a training op is a specialized case of the general process of adding an Op to TensorFlow.
For an example set of training ops that compute and apply gradients, see
python/training/training_ops.py - this is the Python glue for the actual training ops. Note that the code here is mostly about shape inference - the computation is going to be in the C++.
The actual math for applying the gradients is handled by an Op (recalling that, in general, ops are written in C++). In this case, the apply gradients ops are defined in core/kernels/training_ops.cc. You can see, for example, the implementation of ApplyGradientDescentOp in there, which references a functor ApplyGradientDescent:
var.device(d) -= grad * lr();
The implementation of the Op itself follows the implementation of any other op as described in the adding-an-op docs.

Before running the TensorFlow session, one should instantiate an optimizer as seen below:
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
tf.train.GradientDescentOptimizer(learning_rate) creates an instance of the class GradientDescentOptimizer which, as the name says, implements the gradient descent algorithm.
The method minimize() is called with the cost as a parameter; internally it chains the two methods compute_gradients() and apply_gradients().
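For reference, in the TF 1.x API minimize() is shorthand for these two explicit calls (optimizer here is any tf.train.Optimizer instance), so a custom optimizer can hook into either step:
# Roughly equivalent to train_op = optimizer.minimize(cost):
grads_and_vars = optimizer.compute_gradients(cost)
train_op = optimizer.apply_gradients(grads_and_vars)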
For most (custom) optimizer implementations, the method apply_gradients() needs to be adapted.
This method relies on the (new) Optimizer (class), which we will create, to implement the following methods: _create_slots(), _prepare(), _apply_dense(), and _apply_sparse().
_create_slots() and _prepare() create and initialise additional variables, such as momentum.
_apply_dense() and _apply_sparse() implement the actual Ops, which update the variables.
Ops are generally written in C++. Without having to change the C++ headers yourself, you can still return a Python wrapper for some Ops through these methods.
This is done as follows:
def _create_slots(self, var_list):
    # Create slots for allocation and later management of additional
    # variables associated with the variables to train.
    # For example: the first and second moments.
    '''
    for v in var_list:
        self._zeros_slot(v, "m", self._name)
        self._zeros_slot(v, "v", self._name)
    '''

def _apply_dense(self, grad, var):
    # Define your favourite variable update.
    # For example:
    '''
    # Here we apply gradient descent by subtracting from the variables
    # the gradient times the learning_rate (defined in __init__).
    var_update = state_ops.assign_sub(var, self.learning_rate * grad)
    '''
    # The trick is now to pass the Ops to control_flow_ops.group, which
    # also collects any computation on the slots you wish to keep
    # track of. For example:
    '''
    m_t = ...m...  # do something with m and grad
    v_t = ...v...  # do something with v and grad
    '''
    return control_flow_ops.group(*[var_update, m_t, v_t])
For a more detailed explanation with an example, see this blog post: https://www.bigdatarepublic.nl/custom-optimizer-in-tensorflow/
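To make the pieces above concrete, here is a minimal, self-contained sketch of a complete custom optimizer against the TF 1.x API. Plain momentum SGD stands in for the actual Levenberg-Marquardt update, which would replace the body of _apply_dense(); the class name and hyperparameters are illustrative, not a library API:
import tensorflow as tf
from tensorflow.python.framework import ops
from tensorflow.python.ops import control_flow_ops, math_ops, state_ops


class SimpleMomentumOptimizer(tf.train.Optimizer):
    """Sketch only: momentum SGD as a stand-in for a custom update."""

    def __init__(self, learning_rate=0.01, momentum=0.9,
                 use_locking=False, name="SimpleMomentum"):
        super(SimpleMomentumOptimizer, self).__init__(use_locking, name)
        self._lr = learning_rate
        self._mu = momentum

    def _create_slots(self, var_list):
        # One accumulator slot "m" per trainable variable.
        for v in var_list:
            self._zeros_slot(v, "m", self._name)

    def _prepare(self):
        # Convert the Python-number hyperparameters to tensors once.
        self._lr_t = ops.convert_to_tensor(self._lr, name="learning_rate")
        self._mu_t = ops.convert_to_tensor(self._mu, name="momentum")

    def _apply_dense(self, grad, var):
        lr = math_ops.cast(self._lr_t, var.dtype.base_dtype)
        mu = math_ops.cast(self._mu_t, var.dtype.base_dtype)
        m = self.get_slot(var, "m")
        # m_t = mu * m + grad ; var -= lr * m_t
        m_t = state_ops.assign(m, mu * m + grad,
                               use_locking=self._use_locking)
        var_update = state_ops.assign_sub(var, lr * m_t,
                                          use_locking=self._use_locking)
        return control_flow_ops.group(*[var_update, m_t])

    def _apply_sparse(self, grad, var):
        raise NotImplementedError("Sparse updates not covered by this sketch.")
Usage then mirrors the built-in optimizers, e.g. train_op = SimpleMomentumOptimizer(0.01, 0.9).minimize(cost).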

Related

Setting up an optimization solver on top of a neural network model

I have a trained neural network model developed using the Keras framework in a Jupyter notebook. It is a regression problem, where I am trying to predict an output variable using some 14 input variables or features.
As a next step, I would like to minimize my output and want to determine what configuration/values these 14 inputs would take to get to the minimal value of the output.
So, essentially, I would like to pass the trained model object as my objective function in a solver, and also a bunch of constraints on the input variables to optimize/minimize the objective.
What is the best Python solver that can help me get there?
Thanks in advance!
So you already have your trained model, which we can think of as f(x) = y.
The standard SciPy method to minimize this is appropriately named scipy.optimize.minimize.
To use it, you just need to adapt your f(x) = y function to fit the API that SciPy uses: the first function argument is the list of params to optimize over, and any further arguments are filled in from the optional args tuple, which holds everything that stays fixed for the entire optimization (e.g. your trained model).
def score_trained_model(params, model):
    # `model` arrives via the `args` tuple (scipy unpacks it
    # positionally) and stays fixed; only `params` is varied.
    # Run the model on the params, return the output.
    return model_predict(model, params)
With this, plus an initial guess, you can use the minimize function now:
import scipy.optimize

# Nelder-Mead is my go-to to start with.
# But it doesn't take advantage of the gradient.
# Something that does, e.g. BFGS, may perform better for your case.
method = 'Nelder-Mead'

# All zeros is fine, but improving this initial guess can help.
guess_params = [0] * 14

# Given a trained model, optimize the inputs to minimize the output.
result = scipy.optimize.minimize(
    score_trained_model,
    guess_params,
    args=(trained_model,),
    method=method,
)
optim_params = result.x  # minimize() returns an OptimizeResult
It is possible to supply constraints and bounds to some of the optimization methods. Nelder-Mead does not support them, but you can simply return a very large error whenever a constraint is violated.
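For example, here is a hypothetical penalty wrapper around the scoring function above; the [0, 1] box constraint is purely illustrative:
def score_with_penalty(params, model):
    # Hypothetical box constraint: keep all 14 inputs in [0, 1].
    if any(p < 0 or p > 1 for p in params):
        return 1e12  # a huge score steers the simplex back into bounds
    return model_predict(model, params)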
Older answer (kept for reference; note that the OP wants to optimize the inputs x, not the hyperparameters):
It sounds like you want to do hyperparameter optimization. My Python library of choice is hyperopt: https://github.com/hyperopt/hyperopt
Given that you already have some training and scoring code, for example:
def train_and_score(args):
    # Unpack args and train your model.
    model = make_model(**args)
    trained = train_model(model, **args)
    # Return the output you want to minimize.
    return score_model(trained)
You can easily use hyperopt to tune parameters like the learning rate, dropout, or choice of activations:
import numpy as np
from hyperopt import fmin, hp, tpe, space_eval, Trials

space = {
    'lr': hp.loguniform('lr', np.log(0.01), np.log(0.5)),
    'dropout': hp.uniform('dropout', 0, 1),
    'activation': hp.choice('activation', ['relu', 'sigmoid']),
}

# Minimize the training score over the space.
trials = Trials()
best = fmin(train_and_score, space, trials=trials, algo=tpe.suggest, max_evals=100)

# Print details about the best results and hyperparameters.
print(best)
print(space_eval(space, best))
There are also libraries that will help you directly integrate this with Keras. A popular choice is hyperas: https://github.com/maxpumperla/hyperas

When should tf.losses.add_loss() be used in TensorFlow?

I cannot find an answer to this question in the TensorFlow documentation. I once read that one should add losses from tf.nn functions but it isn't necessary for functions from tf.losses. Therefore:
When should I use tf.losses.add_loss()?
Example:
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=ground_truth, logits=predictions))
tf.losses.add_loss(loss)  # <-- when is this required?
Thank you.
One would use this method to register a loss defined by the user.
Namely, if you have created a tensor that defines your loss, for example my_loss = tf.reduce_mean(output), you can use this method to add it to the loss collection. You might want to do that if you are not tracking all your losses manually, for example if you are using a method like tf.losses.get_total_loss().
The inside of tf.losses.add_loss is very straightforward:
def add_loss(loss, loss_collection=ops.GraphKeys.LOSSES):
  if loss_collection and not context.executing_eagerly():
    ops.add_to_collection(loss_collection, loss)
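As a hedged usage sketch (my_output, labels, and predictions are placeholders for tensors from your own graph):
import tensorflow as tf

# A hand-built loss tensor is not registered anywhere by itself...
my_loss = tf.reduce_mean(my_output)
tf.losses.add_loss(my_loss)  # ...so register it explicitly.

# Losses created through tf.losses.* register themselves, and
# get_total_loss() now sums both contributions (plus regularization).
mse = tf.losses.mean_squared_error(labels, predictions)
total_loss = tf.losses.get_total_loss()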

How to apply Optimizer on Variable in Chainer?

Here is an example in Pytorch:
optimizer = optim.Adam([modifier_var], lr=0.0005)
And here in Tensorflow:
self.train = self.optimizer.minimize(self.loss, var_list=[self.modifier])
But Chainer's optimizers can only be used on a Link; how can I apply an Optimizer to a Variable in Chainer?
In short, there is no way to directly assign chainer.Variable (or even chainer.Parameter) to chainer.Optimizer.
What follows is a somewhat lengthy explanation.
First, I re-define Variable and Parameter to avoid confusion.
Variable is (1) torch.Tensor in PyTorch v0.4, (2) torch.autograd.Variable in PyTorch v0.3, and (3) chainer.Variable in Chainer v4.
A Variable is an object that holds two tensors, .data and .grad. That is a necessary and sufficient condition, so a Variable is not necessarily a learnable parameter, which is what an optimizer targets.
In both libraries there is another class, Parameter, which is similar to but not the same as Variable. Parameter is torch.nn.Parameter in PyTorch and chainer.Parameter in Chainer.
A Parameter must be a learnable parameter and should be optimized.
Therefore, there should be no case where you register a Variable (as opposed to a Parameter) with an Optimizer (although PyTorch allows you to register a Variable with an Optimizer; this is just for backward compatibility).
Second, in PyTorch torch.optim.Optimizer directly optimizes Parameter, but in Chainer chainer.Optimizer DOES NOT optimize Parameter: instead, chainer.UpdateRule does. The Optimizer just registers UpdateRules with the Parameters in a Link.
Therefore, it is only natural that chainer.Optimizer does not receive Parameter as an argument; it is just a "delivery-man" for UpdateRule.
If you want to attach a different UpdateRule to each Parameter, you should directly create an instance of an UpdateRule subclass and attach it to the Parameter.
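A hedged sketch of that pattern (model refers to a Link such as the MyChain defined below; one convenient route uses the optimizer's create_update_rule() factory, which setup() itself uses internally, rather than instantiating an UpdateRule subclass by hand):
import chainer

# Default rule for every parameter: plain SGD.
optimizer = chainer.optimizers.SGD(lr=0.01)
optimizer.setup(model)

# Override the rule of one specific parameter, e.g. the bias of l1,
# so that it alone is updated with Adam.
model.l1.b.update_rule = chainer.optimizers.Adam(alpha=0.0005).create_update_rule()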
Below is an example that learns a regression task with a MyChain MLP model using the Adam optimizer in Chainer.
import chainer
import chainer.functions as F
import chainer.links as L
from chainer import Chain, Variable

# Prepare your model (neural network) as `Link` or `Chain`
class MyChain(Chain):
    def __init__(self):
        super(MyChain, self).__init__(
            l1=L.Linear(None, 30),
            l2=L.Linear(None, 30),
            l3=L.Linear(None, 1)
        )

    def __call__(self, x):
        h = self.l1(x)
        h = self.l2(F.sigmoid(h))
        return self.l3(F.sigmoid(h))

model = MyChain()

# Then you can instantiate the optimizer
optimizer = chainer.optimizers.Adam()
# Register the model with the optimizer (to indicate which parameters to update)
optimizer.setup(model)

# Calculate the loss, and update the parameters as follows.
def lossfun(x, y):
    loss = F.mean_squared_error(model(x), y)
    return loss

# This iteration is the "training" loop that fits the model to the data;
# x and y are assumed to be prepared training arrays.
for i in range(300):
    optimizer.update(lossfun, x, y)
So in summary, you need to set up the model with the optimizer; after that you can use the update function to calculate the loss and update the model's parameters.
The above code comes from here
Also, there are other ways to write training code, using the Trainer module. For a more detailed tutorial on Chainer, please refer to the links below:
chainer-handson
deep-learning-tutorial-with-chainer

How can I apply custom regularization in CNTK (using python)?

Do you know how I can apply a custom regularization function in CNTK?
In particular, I would like to add to the loss the derivative of the function w.r.t. the inputs; something like
newLoss = loss + lambda * gradient_F(inputs)
where F is the function learned by the model and inputs are the inputs to the model.
How can I achieve this in CNTK? I don't know how to access the gradients w.r.t. the inputs, or how to take the gradient of the regularizer w.r.t. the weights.
First, the gradient is not a scalar, so it doesn't make a lot of sense to optimize it. The gradient norm might be an interesting thing to add to your loss. To do that, CNTK would have to take the gradient of the gradient norm, which at the time of this writing (July 2017) is not supported. It is, however, an important feature we want to add in the next few months.
Update: One workaround is to do something like this
import cntk as C

noisy_inputs = x + C.random.normal_like(x, scale=0.01)
noisy_model = model.clone('share', {x: noisy_inputs})
auxiliary_loss = C.squared_error(model, noisy_model)
but you will have to tune the scale of the noise for your problem.
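Assuming loss is your original training criterion, the penalty can then be folded in with a tunable weight (lam here is illustrative):
lam = 0.1  # regularization weight; tune together with the noise scale
total_loss = loss + lam * auxiliary_loss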
CNTK learners only accept numbers as regularizer (L1/L2) values. If you really want to add your custom regularizer, you can easily implement your own Learner. You will have access to the gradients you need. You will find a couple of examples of how to implement your own Learner here.
Here's the code to do this:
def cross_entropy_with_softmax_plus_regularization(model, labels, l2_regularization_weight):
w_norm = C.Constant(0);
for p in (model.parameters):
w_norm = C.plus(w_norm, 0.5*C.reduce_sum(C.square(p)))
return C.reduce_log_sum_exp(model.output) -
C.reduce_log_sum_exp(C.times_transpose(labels, model.output)) + l2_regularization_weight*w_norm
and my blog post about it: http://www.telesens.co/2017/09/29/spiral_cntk/

Train only some of the variables in tensorflow

I'm using TensorFlow to do gradient descent classification.
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
Here, cost is the cost function that I use in the optimization.
After launching the Graph in the Session, the Graph can be fed as:
sess.run(train_op, feed_dict)
And with this, all the variables in the cost function will be updated in order to minimize the cost.
Here is my question: how can I update only some of the variables in the cost function when training? Is there a way to convert created variables into constants or something?
There are several good answers; this subject should already be closed:
stackoverflow
Quora
Just to avoid another click for people getting here:
The minimize function of the TensorFlow optimizer takes a var_list argument for exactly that purpose:
first_train_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,
                                     "scope/prefix/for/first/vars")
first_train_op = optimizer.minimize(cost, var_list=first_train_vars)

second_train_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,
                                      "scope/prefix/for/second/vars")
second_train_op = optimizer.minimize(cost, var_list=second_train_vars)
I took it as is from mrry.
To get the list of names you should use instead of "scope/prefix/for/second/vars", you can use:
tf.get_default_graph().get_collection_ref(tf.GraphKeys.TRAINABLE_VARIABLES)
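And a hedged sketch of how the two ops might then be used in a session, alternating which subset of variables is updated (feed_dict stands for your own input feeds):
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(1000):
        # Each op updates only the variables in its own var_list;
        # everything else is effectively frozen for that step.
        op = first_train_op if step % 2 == 0 else second_train_op
        sess.run(op, feed_dict=feed_dict)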
