How to speed up an optimization routine in Python?

I have a constrained optimization problem for which I use the sp.optimize.minimize() function, with the SLSQP (Sequential Least Squares Programming) method.
A single evaluation of the objective function is computationally quick. My problem is that the minimize() routine performs many fast evaluations, then suddenly stalls for a long time, then does many fast iterations and waits again, and so on. On the whole it is slow, so is there anything I can do to alleviate this problem?
Are there any alternatives for constrained optimization other than SLSQP in scipy, like PyOpt for example?
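If the stalls come from SLSQP's internal finite-difference gradient estimation, one standard fix is to supply analytic gradients for the objective and the constraints. Below is a minimal sketch of that setup; the objective and constraint here are placeholders, not the actual problem.

import numpy as np
from scipy.optimize import minimize

def objective(x):
    return np.sum(x**2)

def objective_grad(x):
    return 2.0 * x

# Constraint x0 + x1 >= 1, with its own analytic Jacobian
constraints = [{
    "type": "ineq",
    "fun": lambda x: x[0] + x[1] - 1.0,
    "jac": lambda x: np.array([1.0, 1.0, 0.0]),
}]

x0 = np.array([1.0, 1.0, 1.0])
res = minimize(objective, x0, jac=objective_grad,
               constraints=constraints, method="SLSQP")
print(res.x, res.fun)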

Related

SciPy rootfinding algorithm 'gives up' too fast

Is there any way to force the 'hybr' method of scipy.optimize's 'root' to keep working even after it finds that convergence is too slow? In my problem, the solver nearly reaches the desired precision, but right before it does, the algorithm terminates because of slow convergence... Is it possible to make 'hybr' more 'self-confident'?
I use the root-finding algorithm root from the scipy.optimize module to solve a system of two algebraic, non-linear equations. Since the equations have to be solved many times for various parameter values, it is important to find a numerical method that is most stable for this problem.
I have compared the performance of all the methods provided by scipy.optimize module. To visualize their performance I have used the following procedure:
The algebraic equations were rearranged so that they have zero on the R.H.S.
Then, at each step made by the algorithm, the sum of the L.H.S. squared of all the equations was computed and printed.
In my case, the most efficient method is the default "hybr". Other built-in methods either do not converge at all or are significantly slower. Unfortunately, in some cases the desired method gives up too fast. Lowering the precision and/or providing additional options to the functions did not help.
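A minimal sketch of the comparison procedure described above, assuming a placeholder two-equation system: the residual function is wrapped so the sum of squared residuals is printed at every call, and several scipy.optimize.root methods are tried.

import numpy as np
from scipy.optimize import root

def residuals(z):
    x, y = z
    return [x**2 + y**2 - 4.0,   # first equation, rearranged to = 0
            x * y - 1.0]         # second equation, rearranged to = 0

def logged_residuals(z):
    r = residuals(z)
    print(np.sum(np.square(r)))  # sum of the squared left-hand sides
    return r

for method in ["hybr", "lm", "broyden1", "krylov"]:
    print("---", method)
    sol = root(logged_residuals, x0=[1.0, 1.0], method=method)
    print(method, "converged:", sol.success)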

Regularizing viscosity with scipy's ode solvers

Consider, for the sake of simplicity, the following equation (Burgers' equation): du/dt + u*dx(u) = nu*d2x(u)
Let's solve it using scipy (in my case scipy.integrate.ode.set_integrator("zvode", ..).integrate(T)) with a variable time-step solver.
The issue is the following: if we use the naïve implementation in Fourier space
then the viscosity term nu * d2x(u[t]) can cause an overshoot if the time step is too big. This can lead to a fair amount of noise in the solutions, or even to (fake) diverging solutions (even with stiff solvers, on slightly more complex version of this equation).
One way to regularize this is to evaluate the viscosity term at step t+dt, and the update step becomes
This solution works well when programmed explicitly. How can I use scipy's variable-step ODE solver to implement it? To my surprise I haven't found any documentation on this fairly elementary but thorny issue...
You actually can't, or at the other extreme, odeint and ode->zvode already do that for any given problem.
For the first, you would need to supply the two parts of the equation separately. Obviously, that is not part of the solver interface. Look at DDE and SDE solvers, where such a partition of the equation is actually required.
To the second, odeint and ode->zvode use implicit multi-step methods, which means that the values of u(t+dt) and the right side there enter the computation and the underlying local approximation.
You could still try to hack your original approach into the solver by providing a Jacobian function that only contains the second derivative term, but quite probably you will not achieve an improvement.
You could operator-partition the ODE and solve the linear part separately by introducing
vhat(k,t) = exp(nu*k^2*t)*uhat(k,t)
so that
d/dt vhat(k,t) = -i*k*exp(nu*k^2*t)*conv(uhat(.,t),uhat(.,t))(k)
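A minimal sketch of this integrating-factor idea for Burgers' equation on a periodic grid, using the same ode(...).set_integrator("zvode") interface mentioned in the question; the grid size, viscosity and initial condition are illustrative choices, and the nonlinear term is written in the standard form -(i*k/2)*FFT(u^2).

import numpy as np
from scipy.integrate import ode

N = 128
L = 2 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers
nu = 0.01

u0 = np.sin(x)
vhat0 = np.fft.fft(u0)                        # at t = 0, vhat == uhat

def rhs(t, vhat):
    # Recover uhat from vhat = exp(nu*k^2*t)*uhat, then u(x)
    uhat = np.exp(-nu * k**2 * t) * vhat
    u = np.fft.ifft(uhat).real
    # Only the nonlinear term is left for the adaptive solver;
    # the stiff viscous term is absorbed by the integrating factor.
    return np.exp(nu * k**2 * t) * (-0.5j * k * np.fft.fft(u * u))

solver = ode(rhs).set_integrator("zvode", method="adams")
solver.set_initial_value(vhat0, 0.0)
T = 1.0
vhatT = solver.integrate(T)
uT = np.fft.ifft(np.exp(-nu * k**2 * T) * vhatT).real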

Alternatives to fmincon in python for constrained non-linear optimisation problems

I am having trouble solving an optimisation problem in python, involving ~20,000 decision variables. The problem is non-linear and I wish to apply both bounds and constraints to the problem. In addition to this, the gradient with respect to each of the decision variables may be calculated.
The bounds are simply that each decision variable must lie in the interval [0, 1], and there is a monotonic constraint placed upon the variables, i.e. each decision variable must be greater than the previous one.
I initially intended to use the L-BFGS-B method provided by the scipy.optimize package however I found out that, while it supports bounds, it does not support constraints.
I then tried using the SLSQP method, which does support both constraints and bounds. However, because it requires more memory than L-BFGS-B and I have a large number of decision variables, I ran into memory errors fairly quickly.
The paper this problem comes from used the fmincon solver in Matlab to optimise the function, which, to my knowledge, supports both bounds and constraints and is more memory-efficient than the SLSQP method provided by scipy. I do not have access to Matlab, however.
Does anyone know of an alternative I could use to solve this problem?
Any help would be much appreciated.
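For reference, this is what the SLSQP setup described above looks like on a toy-sized version of the problem: a placeholder objective, [0, 1] bounds, and the monotonicity constraint expressed as x[i+1] - x[i] >= 0. At ~20,000 variables this is exactly the formulation that runs out of memory, so it is shown here only to make the constraint structure concrete.

import numpy as np
from scipy.optimize import minimize

n = 10
rng = np.random.default_rng(0)
target = np.sort(rng.random(n))              # placeholder data

def objective(x):
    return np.sum((x - target) ** 2)

def objective_grad(x):
    return 2.0 * (x - target)

bounds = [(0.0, 1.0)] * n
# Monotonicity: every consecutive difference must be non-negative
constraints = {"type": "ineq", "fun": lambda x: np.diff(x)}

x0 = np.linspace(0.1, 0.9, n)
res = minimize(objective, x0, jac=objective_grad,
               bounds=bounds, constraints=constraints, method="SLSQP")
print(res.x)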

Minimizing Log Likelihood in Python

This is a general question about optimisation in Python. The problem I am attempting to solve has 91 parameters which need to be optimised in order to minimise the log-likelihood function. Currently, I am using scipy.optimize.minimize; however, I find the process very slow compared to using Excel's Solver, despite my expectation that Python would be much faster.
Is this a case of scipy.optimize.minimize simply not being as fast as Excel's Solver in general, or is the number of parameters too great for scipy? And if so, is there an alternative to Excel's Solver in Python that would be faster?
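For context, a minimal sketch of a likelihood fit in scipy with an analytic gradient supplied via jac=, which is usually the single biggest speedup over the default finite-difference gradients; the two-parameter Gaussian model and data below are placeholders, not the 91-parameter model from the question.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.5, size=1000)

def neg_log_likelihood(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)                 # keeps sigma positive
    z = (data - mu) / sigma
    return 0.5 * np.sum(z**2) + data.size * (log_sigma + 0.5 * np.log(2 * np.pi))

def neg_log_likelihood_grad(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    z = (data - mu) / sigma
    return np.array([-np.sum(z) / sigma,          # d/d mu
                     -np.sum(z**2) + data.size])  # d/d log_sigma

res = minimize(neg_log_likelihood, x0=np.array([0.0, 0.0]),
               jac=neg_log_likelihood_grad, method="L-BFGS-B")
print(res.x)                                   # estimated [mu, log(sigma)]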

Parallel many dimensional optimization

I am building a script that generates input data [parameters] for another program to calculate. I would like to optimize the resulting data. Previously I have been using scipy's Powell optimization (fmin_powell). The pseudo-code looks something like this:
import scipy.optimize

def value(param):
    output = run_program(param)      # run the external program (placeholder)
    return parse_output(output)      # parse its output into a single number (placeholder helper)

scipy.optimize.fmin_powell(value, param)
This works great; however, it is incredibly slow, as each iteration of the program can take days to run. What I would like to do is coarse-grain parallelize this, so that instead of running a single iteration at a time it would run (number of parameters)*2 at a time. For example:
# Initial guess
param = [1, 2, 3, 4, 5]
# Modify the guess by plus/minus another matrix that is changeable at each iteration
jump = [1, 1, 1, 1, 1]
# Modify each variable plus/minus jump
for num, a in enumerate(param):
    new_param1 = param[:]
    new_param1[num] = new_param1[num] + jump[num]
    run_program(new_param1)
    new_param2 = param[:]
    new_param2[num] = new_param2[num] - jump[num]
    run_program(new_param2)
# Wait until all programs are complete -> parse output
# Output = [[value, param], ...]
# Create new guess
# Repeat
The number of variables can range from 3 to 12, so something like this could potentially speed the code up from taking a year to taking a week. All variables are dependent on each other and I am only looking for local minima starting from the initial guess. I have started an implementation using Hessian matrices; however, that is quite involved. Is there anything out there that either does this, is there a simpler way, or any suggestions to get started?
So the primary question is the following:
Is there an algorithm that takes a starting guess, generates multiple guesses, then uses those multiple guesses to create a new guess, and repeats until a threshold is found? No analytic derivatives are available. What is a good way of going about this, is there something already built that does this, and are there other options?
Thank you for your time.
As a small update, I do have this working by fitting simple parabolas through the three points in each dimension and then using the minima as the next guess. This seems to work decently, but is not optimal. I am still looking for additional options.
My current best implementation is parallelizing the inner loop of Powell's method.
Thank you everyone for your comments. Unfortunately it looks like there is simply no concise answer to this particular problem. If I get around to implementing something that does this I will paste it here; however, as the project is not particularly important and the need for results is not pressing, I will likely be content letting it take up a node for a while.
I had the same problem while I was at university; we had a Fortran algorithm to calculate the efficiency of an engine based on a group of variables. At the time we used modeFRONTIER and, if I recall correctly, none of the algorithms were able to generate multiple guesses.
The normal approach would be to have a DOE (design of experiments), and there were some algorithms to generate the DOE that best fit your problem. After that we would run the individual DOE entries in parallel, and an algorithm would "watch" the development of the optimizations, showing the current best design.
Side note: if you don't have a cluster and need more computing power, HTCondor may help you.
Are derivatives of your goal function available? If yes, you can use gradient descent (old, slow but reliable) or conjugate gradient. If not, you can approximate the derivatives using finite differences and still use these methods. I think in general, if using finite difference approximations to the derivatives, you are much better off using conjugate gradients rather than Newton's method.
A more modern method is SPSA, which is a stochastic method and doesn't require derivatives. For somewhat well-behaved problems, SPSA requires far fewer evaluations of the goal function than finite-difference approximations to conjugate gradients for the same rate of convergence.
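A minimal sketch of basic SPSA, assuming the standard form with Bernoulli +/-1 perturbations: the gradient is estimated from only two evaluations of the goal function per iteration, regardless of dimension. The gain sequences and the toy quadratic are illustrative choices.

import numpy as np

def spsa_minimize(f, x0, n_iter=200, a=0.1, c=0.1, alpha=0.602, gamma=0.101, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_iter + 1):
        ak = a / k**alpha                          # step-size gain
        ck = c / k**gamma                          # perturbation gain
        delta = rng.choice([-1.0, 1.0], size=x.shape)
        # Two evaluations give a gradient estimate in all directions at once
        ghat = (f(x + ck * delta) - f(x - ck * delta)) / (2.0 * ck * delta)
        x = x - ak * ghat
    return x

print(spsa_minimize(lambda x: np.sum((x - 3.0) ** 2), np.zeros(5)))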
There are two ways of estimating gradients, one easily parallelizable, one not:
around a single point, e.g. (f(x + h*direction_i) - f(x)) / h; this is easily parallelizable up to Ndim
"walking" gradient: walk from x0 in direction e0 to x1,
then from x1 in direction e1 to x2 ...;
this is sequential.
Minimizers that use gradients are highly developed, powerful, converge quadratically (on smooth enough functions).
The user-supplied gradient function can of course be a parallel-gradient-estimator.
A few minimizers use "walking" gradients, among them Powell's method; see Numerical Recipes p. 509.
So I'm confused: how do you parallelize its inner loop?
I'd suggest scipy's fmin_tnc with a parallel-gradient-estimator, maybe using central, not one-sided, differences.
(Fwiw, this compares some of the scipy no-derivative optimizers on two 10-d functions; ymmv.)
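A minimal sketch of that suggestion, assuming the expensive goal function can be evaluated in separate processes: fmin_tnc is driven by a user-supplied gradient that computes central differences in parallel, one task per perturbed point. The quadratic objective is a stand-in for the real program, and re-creating the pool on every gradient call is only for brevity.

import numpy as np
from multiprocessing import Pool
from scipy.optimize import fmin_tnc

def objective(x):
    return float(np.sum((x - 1.0) ** 2))

def parallel_central_gradient(x, h=1e-4, processes=4):
    x = np.asarray(x, dtype=float)
    points = []
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = h
        points.extend([x + step, x - step])
    with Pool(processes) as pool:
        values = np.asarray(pool.map(objective, points)).reshape(-1, 2)
    # each row holds (f(x + h e_i), f(x - h e_i))
    return (values[:, 0] - values[:, 1]) / (2.0 * h)

if __name__ == "__main__":
    x_opt, nfeval, rc = fmin_tnc(objective, np.zeros(5),
                                 fprime=parallel_central_gradient)
    print(x_opt)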
I think what you want to do is use the threading capabilities built into Python.
Provided your working function has more or less the same run-time whatever the params, it would be efficient.
Create 8 threads in a pool, run 8 instances of your function, get 8 results, run your optimisation algo to change the params using those 8 results, repeat... profit?
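A minimal sketch of that pool idea, with the external program replaced by a dummy so the sketch runs; one batch of (number of parameters)*2 candidates is evaluated concurrently, as in the question, and the results would feed the next-guess step.

import time
from concurrent.futures import ThreadPoolExecutor

def run_program(params):
    # Stand-in for the expensive external program
    time.sleep(0.1)
    return sum((p - 3.0) ** 2 for p in params)

def evaluate_batch(candidates, workers=8):
    # Threads are fine if each evaluation is really an external process call
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_program, candidates))

param = [1, 2, 3, 4, 5]
jump = [1, 1, 1, 1, 1]
batch = []
for i in range(len(param)):
    up = param[:];   up[i] += jump[i]
    down = param[:]; down[i] -= jump[i]
    batch.extend([up, down])

values = evaluate_batch(batch)
print(list(zip(values, batch)))   # feed these into the next guess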
If I haven't misunderstood what you are asking, you are trying to minimize your function one parameter at a time.
You can achieve this by creating a set of functions of a single argument, where for each function you freeze all the arguments except one.
Then you loop over them, optimizing each variable and updating the partial solution.
This method can speed up the minimization of functions of many parameters by a great deal when the energy landscape is not too complex (i.e. the dependency between the parameters is not too strong).
given a function
energy(*args) -> value
you create the guess and the functions:
guess = [1, 1, 1, 1]
funcs = [lambda x, i=i: energy(*(guess[:i] + [x] + guess[i+1:])) for i in range(len(guess))]
then you put them in a while loop for the optimization:
while not converged:                     # convergence check is problem-specific
    for i, func in enumerate(funcs):
        # optimize along coordinate i, e.g. with scipy.optimize.minimize_scalar
        guess[i] = minimize_scalar(func).x
    # re-check convergence on the updated guess
This is a very simple yet effective way of simplifying your minimization task. I can't really recall what this method is called, but a close look at the Wikipedia entry on minimization should do the trick.
You could parallelize at two levels: 1) parallelize the calculation of a single iteration, or 2) start N initial guesses in parallel.
For 2) you need a job controller to manage the N initial-guess discovery threads.
Add an extra output to your program: a "lower bound", indicating that the objective values reachable by descending from the current input parameters will not drop below this bound.
The N initial-guess threads can then compete with each other: if one thread's lower bound is higher than another thread's current value, that thread can be dropped by your job controller.
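A rough sketch of such a job controller, under stated assumptions: local_search is a placeholder (not an existing API) that runs one initial guess in its own process and periodically puts (worker_id, current_value, lower_bound) on a shared queue; the controller terminates any search whose lower bound can no longer beat the best value seen so far.

import multiprocessing as mp
from queue import Empty

def controller(initial_guesses):
    q = mp.Queue()
    workers = {}
    for wid, guess in enumerate(initial_guesses):
        # local_search(wid, guess, queue) is an assumed worker function
        p = mp.Process(target=local_search, args=(wid, guess, q))
        p.start()
        workers[wid] = p

    best_value = float("inf")
    while any(p.is_alive() for p in workers.values()):
        try:
            wid, current_value, lower_bound = q.get(timeout=1.0)
        except Empty:
            continue
        best_value = min(best_value, current_value)
        if lower_bound > best_value and workers[wid].is_alive():
            workers[wid].terminate()      # this search can no longer win
    return best_value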
Parallelizing local optimizers is intrinsically limited: they start from a single initial point and try to work downhill, so later points depend on the values of previous evaluations. Nevertheless there are some avenues where a modest amount of parallelization can be added.
As another answer points out, if you need to evaluate your derivative using a finite-difference method, preferably with an adaptive step size, this may require many function evaluations, but the derivative with respect to each variable may be independent; you could maybe get a speedup by a factor of twice the number of dimensions of your problem. If you've got more processors than you know what to do with, you can use higher-order-accurate gradient formulae that require more (parallel) evaluations.
Some algorithms, at certain stages, use finite differences to estimate the Hessian matrix; this requires about half the square of the number of dimensions of your problem, and can all be done in parallel.
Some algorithms may also be able to use more parallelism at a modest algorithmic cost. For example, quasi-Newton methods try to build an approximation of the Hessian matrix, often updating this by evaluating a gradient. They then take a step towards the minimum and evaluate a new gradient to update the Hessian. If you've got enough processors so that evaluating a Hessian is as fast as evaluating the function once, you could probably improve these by evaluating the Hessian at every step.
As far as implementations go, I'm afraid you're somewhat out of luck. There are a number of clever and/or well-tested implementations out there, but they're all, as far as I know, single-threaded. Your best bet is to use an algorithm that requires a gradient and compute your own in parallel. It's not that hard to write an adaptive one that runs in parallel and chooses sensible step sizes for its numerical derivatives.
