I am trying to learn how to use Theano. I work with survival analysis very frequently, so I wanted to try implementing a standard survival model using Theano's automatic differentiation and gradient descent. The model I am trying to implement is the Cox model; here is the Wikipedia article: https://en.wikipedia.org/wiki/Proportional_hazards_model
Very helpfully, the article gives the partial likelihood function, which is what is maximized when estimating the parameters of a Cox model. I am quite new to Theano and am having difficulty implementing this cost function, so I am looking for some guidance.
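For reference, the log partial likelihood being maximized (in the article's notation, where C_i = 1 marks an observed event) is

\ell(\beta) = \sum_{i : C_i = 1} \left( X_i \cdot \beta - \log \sum_{j : Y_j \ge Y_i} e^{X_j \cdot \beta} \right)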
Here is the code I have written so far. My dataset has 137 records, hence the hard-coded value. T refers to Theano's tensor module, W refers to what the Wikipedia article calls beta, and status is what Wikipedia calls C. The remaining variables follow Wikipedia's notation.
def negative_log_likelihood(self, y, status):
    v = 0
    for i in xrange(137):
        if T.eq(status[i], 1):
            v += T.dot(self.X[i], self.W)
            u = 0
            for j in xrange(137):
                if T.gt(y[j], y[i]):
                    u += T.exp(T.dot(self.X[j], self.W))
            v -= T.log(u)
    return T.sum(-v)
Unfortunately, when I run this code I am met with an infinite recursion error. This makes me think that I have not implemented the cost function in the way that Theano expects, so I am hoping for some guidance on how to improve this code so that it works.
You are mixing symbolic and non-symbolic operations, which doesn't work.
For example, T.eq does not perform the comparison there and then; it returns a non-executable symbolic expression object that merely represents the idea of comparing two things for equality. Since any non-None object is truthy in Python, execution will always continue inside the if statement.
If you need to construct a Theano computation involving conditionals, you need to use one of its two symbolic conditional operations: T.switch or theano.ifelse.ifelse. See the documentation for examples and details.
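For instance, a minimal sketch of a symbolic conditional (variable names are illustrative):

import theano.tensor as T

# Instead of a Python `if`, T.switch builds the branch into the graph itself.
status_i = T.scalar('status_i')
xw = T.scalar('xw')
# contributes xw when status_i == 1, and 0 otherwise
contribution = T.switch(T.eq(status_i, 1), xw, 0.0)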
You are also using Python loops, which is probably not what you need. To construct a Theano computation that explicitly loops you need to use theano.scan. However, if you can express your computation in terms of matrix operations (dot products, reductions, etc.) then it will run much, much faster than something using scans.
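For example, here is a hedged sketch of a fully vectorized version of your cost, assuming X is an n-by-p matrix, y and status are length-n vectors, and using >= for the risk set as in the Wikipedia article (your code used >):

import theano.tensor as T

X = T.matrix('X')
y = T.vector('y')
status = T.vector('status')
W = T.vector('W')

xw = T.dot(X, W)  # linear predictor, shape (n,)
# risk[i, j] = 1 when y[j] >= y[i], i.e. subject j is in the risk set of i
risk = T.cast(T.ge(y.dimshuffle('x', 0), y.dimshuffle(0, 'x')), 'float64')
log_risk = T.log(T.dot(risk, T.exp(xw)))  # log of the exp-sum over each risk set
neg_log_lik = -T.sum(status * (xw - log_risk))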
I suggest you work through some more Theano tutorials before trying to implement something complex from scratch.
Related
I've been trying to understand how automatic differentiation (autodiff) works. Several implementations of it can be found in TensorFlow, PyTorch, and other libraries.
There are three aspects of automatic differentiation that currently seem vague to me.
The exact process used to calculate the gradients
How autodiff works with respect to inputs
How autodiff works with respect to a singular value as input
So far, it seems to roughly follow the following steps:
Break up the original function into elementary operations (individual arithmetic operations, compositions, and function calls).
The elementary operations are combined to form a computational graph, in such a way that the original function can be calculated using the computational graph.
The computational graph is executed for a certain input, and each operation is recorded.
Walking through the recorded operations in reverse using the chain rule gives us the gradient.
First of all, is this a correct overview of the steps that are taken in automatic differentiation?
Secondly, how would the above process work for a derivative with respect to the inputs? For instance, computing a derivative numerically would need a difference in the x value. Does that mean that the derivative can only be calculated after at least two different x values have been provided as input? Or does it require multiple inputs at once (i.e. a vector input) over which it can calculate a difference? And how does this compare to calculating the gradient with respect to the model weights (i.e. as done in backpropagation)?
Thirdly, how can we take the derivative of a singular value? Take, for instance, the following Python code, where the derivative of y = x**2 is calculated:
import tensorflow as tf

x = tf.constant(3.0)
with tf.GradientTape() as tape:
    tape.watch(x)
    y = x**2
# dy = 2x * dx
dy_dx = tape.gradient(y, x)
print(dy_dx.numpy())  # prints: 6.0
Since dx is the difference between several x inputs, would that not mean that dx = 0?
I found that this paper gives a pretty good overview of the various modes of autodiff, as well as of the differences compared to numerical and symbolic differentiation. However, it did not bring me a full understanding, and I would still like to understand the autodiff process in the context of these traditional differentiation techniques.
Rather than applying it practically, I would love to get a more theoretical understanding.
I had similar questions in my mind a few weeks ago, until I started to code my own automatic differentiation package tensortrax in Python. It uses forward-mode AD with a hyper-dual number approach. I wrote a Readme (landing page of the repository, section Theory) with an example which may be of interest to you.
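To give the flavor of forward-mode AD, here is a toy dual-number sketch (illustrative only, not the actual tensortrax implementation): every value carries its derivative through each elementary operation.

class Dual:
    def __init__(self, value, deriv=0.0):
        self.value = value   # f(x)
        self.deriv = deriv   # f'(x)

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (f*g)' = f'*g + f*g'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

# Seed x with derivative 1; y = x * x then carries dy/dx along automatically.
x = Dual(3.0, 1.0)
y = x * x
print(y.value, y.deriv)  # 9.0 6.0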
I think what you first need to understand is what a derivative is; many math textbooks could help you with that. The notation dx means an infinitesimal variation, so you do not actually compute any difference, but perform a symbolic operation on your function f that transforms it into a function f', also written df/dx, which you can then evaluate at any point where it is defined.
Regarding the algorithm used for automatic differentiation, you understood it right. The part you seem to be missing is how the derivatives of elementary operations are computed and what they mean, but it would be hard to do a crash course on that in a SO answer.
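As a hint at that missing part, here is a toy reverse-mode sketch (illustrative only) for y = x * x: each elementary operation records how to push its output gradient back to its inputs, and walking the record in reverse applies the chain rule.

class Node:
    def __init__(self, value):
        self.value = value
        self.grad = 0.0
        self._backward = lambda: None

def mul(a, b):
    out = Node(a.value * b.value)
    def _backward():
        # d(a*b)/da = b and d(a*b)/db = a; accumulate via the chain rule
        a.grad += b.value * out.grad
        b.grad += a.value * out.grad
    out._backward = _backward
    return out

x = Node(3.0)
y = mul(x, x)
y.grad = 1.0      # seed dy/dy = 1
y._backward()     # replay the recorded operation in reverse
print(x.grad)     # 6.0, matching the GradientTape example above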
I'm trying to implement direct collocation from scratch in a MathematicalProgram such that each constraint is a Python function, meaning that I can get the gradient of each constraint and cost with respect to their inputs. My goal is to use these gradients for a downstream task.
I'm converting the "Direct Collocation for the Pendulum" part of this notebook: https://github.com/RussTedrake/underactuated/blob/master/trajopt.ipynb to a MathematicalProgram. I've been able to convert the cost and constraints to lambdas instead of Formulas. For instance:
torque_limit = 3.0 # N*m.
u = dircol.input()
dircol.AddConstraintToAllKnotPoints(-torque_limit <= u[0])
dircol.AddConstraintToAllKnotPoints(u[0] <= torque_limit)
becomes
torque_limit = 3.0 # N*m.
for n in range(N-1):
    constraint = prog.AddConstraint(lambda u: u, [-torque_limit], [torque_limit], [u[n][0]])
This means I can use ExtractGradient with that constraint:
ExtractGradient(constraint.evaluator().Eval(InitializeAutoDiff(arbitrary_u_vector)))
The one constraint I have not been able to turn into a Python lambda/function is the constraint for Pendulum's dynamics, since that is implemented inside the black box of DirectCollocation. I've read the C++ for DirectCollocation, but I haven't been able to figure it out.
What would it take to either reimplement trajectory optimization's dynamics constraint in Python as a function/lambda or access the dynamics constraint's gradient in Python in some other way? I would like to be able to do this for an arbitrary MultibodyPlant, say Quadrotor or Acrobot, not just Pendulum.
It should definitely be possible to implement the DirectCollocation constraint in Python. This notebook has a bunch of relevant examples of using Python functions as constraints; it could help. But it should be a fairly straightforward port of the DirectCollocationConstraint::Eval you've found in C++.
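As a very rough sketch of what such a port might look like, here is the standard Hermite-Simpson collocation defect written by hand, with a hand-coded pendulum standing in for the dynamics (this is not DirectCollocationConstraint::Eval itself; names, parameter values, and the variable packing are illustrative, and I haven't verified it against pydrake's AutoDiffXd):

import numpy as np

def pendulum_dynamics(x, u, m=1.0, l=0.5, b=0.1, g=9.81):
    # xdot for state x = [theta, thetadot]; parameter values are placeholders
    theta, thetadot = x[0], x[1]
    return np.array([thetadot,
                     (u[0] - b * thetadot - m * g * l * np.sin(theta)) / (m * l**2)])

def collocation_constraint(vars, h=0.05):
    # vars stacks [x_k (2), u_k (1), x_{k+1} (2), u_{k+1} (1)]
    x0, u0 = vars[0:2], vars[2:3]
    x1, u1 = vars[3:5], vars[5:6]
    f0, f1 = pendulum_dynamics(x0, u0), pendulum_dynamics(x1, u1)
    # cubic (Hermite) interpolation evaluated at the interval midpoint
    xc = 0.5 * (x0 + x1) + (h / 8.0) * (f0 - f1)
    uc = 0.5 * (u0 + u1)
    xdotc = -1.5 * (x0 - x1) / h - 0.25 * (f0 + f1)
    return xdotc - pendulum_dynamics(xc, uc)  # defect, constrained to zero

# e.g. prog.AddConstraint(collocation_constraint, np.zeros(2), np.zeros(2), stacked_vars)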
But you can also always get the gradients from the constraints that are being added by DirectCollocation. That class is just a helper that makes it easier to set up the MathematicalProgram for you. You can still get that program back out and evaluate any of its constraints individually if you like. I'm not convinced you gain anything by reimplementing it yourself, and it will certainly have worse performance in Python.
I apparently have a complicated enough constraint function that it takes over 1 minute just to call prog.AddConstraint(), presumably because it's spending a long time constructing the symbolic function (although if you have other insights into why it takes so long, I would appreciate them). What I would like to do is pull out the symbolic constraint function and cache it, so that once it's created I don't need to wait 1 minute; it can just be loaded from disk.
My issue is that I don't know how to access that function. I can clearly see the expression when I print it, as in this simplified example:
from pydrake.all import MathematicalProgram
prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
c = prog.AddConstraint(x[0] * x[1] == 1)
print(c)
I get the output:
ExpressionConstraint
0 <= (-1 + (x(0) * x(1))) <= 0
I realize I could parse that string and pull out the part that represents the function, but it seems like there should be a better way. I'm thinking I could pull out the expression function and the upper/lower bounds and use them in subsequent AddConstraint calls.
In my real use-case, I need to apply the same constraint to multiple timesteps in a trajectory, so I think it would be helpful to create the symbolic function once when I call AddConstraint for the first timestep and then all subsequent timesteps shouldn't have to re-create the symbolic function, I can just use the cached version and apply it to the relevant variables. The expression involves a Cholesky decomposition of a covariance matrix so there are a lot of constraints being applied.
Any help is greatly appreciated.
I would strongly encourage you to write this constraint using a function evaluation instead of a symbolic expression. For example, if you want to impose the constraint lb <= my_evaluator(x) <= ub, you can call it this way:
def my_evaluator(x):
    # Do the Cholesky decomposition and whatever else is needed, and return the evaluation result.
    return result

prog.AddConstraint(my_evaluator, lb, ub, x)
Adding a constraint via a symbolic expression is convenient when your constraint is linear in the decision variables; otherwise it is better to avoid symbolic expressions when adding constraints. (Evaluating a symbolic expression, especially one involving a Cholesky decomposition, is really time consuming.)
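This also addresses the caching concern: the Python function is constructed once, and each AddConstraint call only binds it to that timestep's variables, so nothing symbolic is rebuilt per call. A sketch (N and x_vars are illustrative names):

# One evaluator, bound to each timestep's decision variables
for n in range(N):
    prog.AddConstraint(my_evaluator, lb, ub, x_vars[n])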
For more details on adding generic nonlinear constraints, you could refer to our tutorial.
Usually I use Mathematica, but now I am trying to shift to Python, so this question might be a trivial one; I am sorry about that.
Anyway, is there any built-in function in Python similar to Mathematica's Interval[{min,max}]? The link is: http://reference.wolfram.com/language/ref/Interval.html
What I am trying to do is minimize a function, but it is a constrained minimization; by that I mean the parameters of the function are only allowed within particular intervals.
For a very simple example, let's say f(x) is a function with parameter x and I am looking for the value of x which minimizes the function, with x constrained to an interval (min, max). [Obviously the actual problem is not one-dimensional but multi-dimensional, so different parameters may have different intervals.]
Since it is an optimization problem, of course I do not want to pick the parameter randomly from an interval.
Any help will be highly appreciated, thanks!
If it's a highly non-linear problem, you'll need to use an algorithm such as the Generalized Reduced Gradient (GRG) Method.
The idea of the generalized reduced gradient algorithm (GRG) is to solve a sequence of subproblems, each of which uses a linear approximation of the constraints. (Ref)
You'll need to ensure that certain conditions, known as the KKT conditions, are met, but for most continuous problems with reasonable constraints you'll be able to apply this algorithm.
This is a good reference for such problems with a few examples provided. Ref. pg. 104.
Regarding implementation:
While I am not familiar with Python, I have built solver libraries in C++ using templates as well as function pointers, so you can pass functions (for the objective as well as the constraints) as arguments to the solver and get your result, hopefully in polynomial time for convex problems or in cases where the initial values are reasonable.
If that ability exists in Python, it shouldn't be difficult to build a generalized GRG solver.
The Python Solution:
Edit: Here is the Python solution to your problem: Python constrained non-linear optimization
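In practice, scipy handles the simple box-constrained case directly; here is a minimal sketch (the function and bounds are illustrative):

from scipy.optimize import minimize

def f(params):
    x, y = params
    return (x - 1.0)**2 + (y + 2.0)**2

# bounds plays the role of Mathematica's Interval[{min, max}], one pair per parameter
result = minimize(f, x0=[0.0, -1.0], method='L-BFGS-B',
                  bounds=[(-5.0, 5.0), (-3.0, 0.0)])
print(result.x)  # approximately [1.0, -2.0]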
I am building a script that generates input data [parameters] for another program to calculate. I would like to optimize the resulting data. Previously I have been using scipy's Powell optimization. The pseudocode looks something like this:
def value(param):
    run_program(param)
    # Parse output
    return value

scipy.optimize.fmin_powell(value, param)
This works great; however, it is incredibly slow, as each iteration of the program can take days to run. What I would like to do is coarse-grain parallelize this. So instead of running a single iteration at a time, it would run (number of parameters)*2 at a time. For example:
# Initial guess
param = [1, 2, 3, 4, 5]
# Modify guess by plus/minus another matrix that is changeable at each iteration
jump = [1, 1, 1, 1, 1]
# Modify each variable plus/minus jump.
for num, a in enumerate(param):
    new_param1 = param[:]
    new_param1[num] = new_param1[num] + jump[num]
    run_program(new_param1)
    new_param2 = param[:]
    new_param2[num] = new_param2[num] - jump[num]
    run_program(new_param2)
# Wait until all programs are complete -> Parse output
Output = [[value, param], ...]
# Create new guess
# Repeat
The number of variables can range from 3 to 12, so something like this could potentially speed the code up from taking a year to taking a week. All variables are dependent on each other and I am only looking for local minima from the initial guess. I have started an implementation using Hessian matrices; however, that is quite involved. Is there anything out there that already does this, is there a simpler way, or do you have any suggestions for getting started?
So the primary question is the following:
Is there an algorithm that takes a starting guess, generates multiple guesses, then uses those multiple guesses to create a new guess, and repeats until a threshold is met? No analytic derivatives are available. What is a good way of going about this, is there something already built that does this, or are there other options?
Thank you for your time.
As a small update, I do have this working by fitting a simple parabola through the three points in each dimension and then using the minimum as the next guess. This seems to work decently, but it is not optimal. I am still looking for additional options.
My current best implementation parallelizes the inner loop of Powell's method.
Thank you everyone for your comments. Unfortunately, it looks like there is simply no concise answer to this particular problem. If I get around to implementing something that does this I will paste it here; however, as the project is not particularly important nor the need for results pressing, I will likely be content letting it take up a node for a while.
I had the same problem while I was at university; we had a Fortran algorithm to calculate the efficiency of an engine based on a group of variables. At the time we used modeFRONTIER and, if I recall correctly, none of its algorithms were able to generate multiple guesses.
The normal approach would be to have a DOE, and there were some algorithms to generate the DOE that best fit your problem. After that we would run the individual DOE entries in parallel, and an algorithm would "watch" the development of the optimizations, showing the current best design.
Side note: if you don't have a cluster and need more computing power, HTCondor may help you.
Are derivatives of your goal function available? If yes, you can use gradient descent (old, slow, but reliable) or conjugate gradients. If not, you can approximate the derivatives using finite differences and still use these methods. I think that, in general, if you are using finite-difference approximations to the derivatives, you are much better off using conjugate gradients rather than Newton's method.
A more modern method is SPSA, which is a stochastic method and doesn't require derivatives. For somewhat well-behaved problems, SPSA requires far fewer evaluations of the goal function than finite-difference conjugate gradients for the same rate of convergence.
There are two ways of estimating gradients, one easily parallelizable, one not:
1) around a single point, e.g. (f(x + h*direction_i) - f(x)) / h; this is easily parallelizable up to Ndim;
2) "walking" gradient: walk from x0 in direction e0 to x1, then from x1 in direction e1 to x2, and so on; this is sequential.
Minimizers that use gradients are highly developed and powerful, and converge quadratically (on smooth enough functions). The user-supplied gradient function can of course be a parallel gradient estimator.
A few minimizers use "walking" gradients, among them Powell's method; see Numerical Recipes p. 509. So I'm confused: how do you parallelize its inner loop?
I'd suggest scipy's fmin_tnc with a parallel gradient estimator, maybe using central rather than one-sided differences.
(Fwiw, this compares some of the scipy no-derivative optimizers on two 10-d functions; ymmv.)
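As a sketch of what such a parallel central-difference gradient estimator could look like when fed to fmin_tnc (run_program is a stand-in for the expensive external evaluation; the step size and test objective are illustrative):

import numpy as np
from multiprocessing import Pool
from scipy.optimize import fmin_tnc

def run_program(param):
    # placeholder for the expensive external program
    return float(np.sum((np.asarray(param) - 1.0) ** 2))

def parallel_grad(x, h=1e-4):
    # evaluate all 2*Ndim perturbed points in parallel, then combine
    x = np.asarray(x, dtype=float)
    points = []
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        points.extend([xp, xm])
    with Pool() as pool:
        values = pool.map(run_program, points)
    return np.array([(values[2*i] - values[2*i + 1]) / (2*h)
                     for i in range(len(x))])

def objective_and_grad(x):
    # fmin_tnc accepts a function returning (value, gradient)
    return run_program(x), parallel_grad(x)

if __name__ == '__main__':
    xopt, nfeval, rc = fmin_tnc(objective_and_grad, np.zeros(3))
    print(xopt)  # approaches [1.0, 1.0, 1.0]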
I think what you want to do is use the threading capabilities built into Python.
Provided your working function has more or less the same run-time whatever the params, this would be efficient.
Create 8 threads in a pool, run 8 instances of your function, get the 8 results, run your optimization algorithm to change the params using those 8 results, repeat... profit?
If I haven't misunderstood what you are asking, you are trying to minimize your function one parameter at a time.
You can do this by creating a set of functions of a single argument, where for each function you freeze all the arguments except one.
Then you loop over the functions, optimizing each variable and updating the partial solution.
This method can speed things up by a great deal for functions of many parameters where the energy landscape is not too complex (i.e. the dependency between the parameters is not too strong).
Given a function
energy(*args) -> value
you create the guess and the functions (note the * unpacking, since energy takes separate arguments):
guess = [1, 1, 1, 1]
funcs = [lambda x, i=i: energy(*(guess[:i] + [x] + guess[i+1:])) for i in range(len(guess))]
Then you put them in a while loop for the optimization:
while not converged:
    for func in funcs:
        # optimize over func's single argument
        # update the corresponding entry of guess
    # check for convergence
This is a very simple yet effective method of simplifying your minimization task. I can't really recall what this method is called (it is essentially coordinate descent), but a close look at the Wikipedia entry on minimization should do the trick.
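A runnable sketch of this scheme, using a toy energy and scipy's scalar minimizer for each coordinate (all names are illustrative):

from scipy.optimize import minimize_scalar

def energy(*args):
    # toy energy with its minimum at (0, 1, 2, 3)
    return sum((a - i) ** 2 for i, a in enumerate(args))

guess = [1.0, 1.0, 1.0, 1.0]
for sweep in range(10):
    for i in range(len(guess)):
        f = lambda x, i=i: energy(*(guess[:i] + [x] + guess[i+1:]))
        guess[i] = minimize_scalar(f).x
print(guess)  # close to [0.0, 1.0, 2.0, 3.0]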
You could parallelize at two levels: 1) parallelize the calculation of a single iteration, or 2) start N initial guesses in parallel.
For 2) you need a job controller to manage the N initial-guess discovery threads.
Have your program produce an extra output, a "lower bound", indicating that descents from the current input parameters won't go below that value.
The N initial-guess threads can then compete with each other: if one thread's lower bound is higher than another thread's current value, the first thread can be dropped by your job controller.
Parallelizing local optimizers is intrinsically limited: they start from a single initial point and try to work downhill, so later points depend on the values of previous evaluations. Nevertheless there are some avenues where a modest amount of parallelization can be added.
As another answer points out, if you need to evaluate your derivative using a finite-difference method, preferably with an adaptive step size, this may require many function evaluations, but the derivative with respect to each variable may be independent; you could maybe get a speedup by a factor of twice the number of dimensions of your problem. If you've got more processors than you know what to do with, you can use higher-order-accurate gradient formulae that require more (parallel) evaluations.
Some algorithms, at certain stages, use finite differences to estimate the Hessian matrix; this requires about half the square of the number of dimensions of your problem in function evaluations, and all of them can be done in parallel.
Some algorithms may also be able to use more parallelism at a modest algorithmic cost. For example, quasi-Newton methods try to build an approximation of the Hessian matrix, often updating this by evaluating a gradient. They then take a step towards the minimum and evaluate a new gradient to update the Hessian. If you've got enough processors so that evaluating a Hessian is as fast as evaluating the function once, you could probably improve these by evaluating the Hessian at every step.
As far as implementations go, I'm afraid you're somewhat out of luck. There are a number of clever and/or well-tested implementations out there, but they're all, as far as I know, single-threaded. Your best bet is to use an algorithm that requires a gradient and compute your own in parallel. It's not that hard to write an adaptive one that runs in parallel and chooses sensible step sizes for its numerical derivatives.