Maximize a function with many parameters (python)

First, let me say that I lack experience with scientific math or statistics, so this might be a very well-known problem, but I don't know where to start.
I have a function f(x1, x2, ..., xn) where I need to guess the x's and find the highest value of f. The function has the following properties:
the total number of parameters is usually around 40 to 60, so a brute-force approach is impossible.
the possible values for each x range from 0.01 to 2.99
the function is steady, meaning that a higher f value indicates a better guess for the parameters, and vice versa.
So far, I have implemented a pretty basic method in Python. It initially sets all parameters to 1, randomly guesses new values, and checks whether f is higher than before. If not, it rolls back to the previous values.
In a loop with 10,000 iterations this seems to work somehow, but the result is probably far from perfect.
Any suggestions on how to improve the search for the optimal parameters will be appreciated. When googling this issue, things like MCMC came up, but that seems like a very advanced method and I would need a lot of time even to understand it.
Basic hints or concepts would help me more than elaborate methods and algorithms.

Don't do it yourself. Install SciPy and use its optimization routines. scipy.optimize.minimize looks like a good fit.

I think you want to take a look at scipy.optimize (http://docs.scipy.org/doc/scipy-0.10.0/reference/tutorial/optimize.html). Maximizing a function is the same as minimizing -1 times that function.
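For example, here is a minimal sketch, with a made-up objective standing in for the real f, that negates the function and enforces the 0.01 to 2.99 bounds from the question on each parameter:

import numpy as np
from scipy.optimize import minimize

def f(x):
    # hypothetical stand-in for the real f(x1, ..., xn)
    return -np.sum((x - 1.5) ** 2)

n = 50                           # the question mentions 40 to 60 parameters
x0 = np.ones(n)                  # start with all parameters at 1
bounds = [(0.01, 2.99)] * n      # allowed range for each x

res = minimize(lambda x: -f(x), x0, bounds=bounds)   # maximize f via -f
print(res.x, -res.fun)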

Related

Computational cost of scipy.newton_krylov

I am trying to compare the computational cost of several methods of solving a nonlinear partial differential equation. One of the methods uses scipy.optimize.newton_krylov. The other methods use scipy.sparse.linalg.lgmres.
My first thought was to count the number of iterations of newton_krylov using the 'callback' keyword. This worked, to a point. The counter that I set up counts the number of iterations made by the newton_krylov solver.
I would also like to count the number of steps it takes in each (Newton) iteration to solve the linear system. I thought that I could use a similar counter and try using a keyword like 'inner_callback', but that gave the number of iterations the newton_krylov solver used rather than the number of steps to solve the inner linear system.
It is possible that I don't understand how newton_krylov was implemented and that it is impossible to separate the two types of iterations (Newton steps vs lgmres steps), but it would be great if there were a way.
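For reference, a minimal sketch of the counting setup described above, with a toy residual standing in for the real PDE; 'callback' counts Newton iterations, while 'inner_callback' is forwarded to the inner Krylov solver (and, as noted, may not count what one expects):

import numpy as np
from scipy.optimize import newton_krylov

newton_iters = 0
inner_iters = 0

def count_newton(x, f):
    # called once per Newton iteration
    global newton_iters
    newton_iters += 1

def count_inner(xk):
    # forwarded to the inner solver as its callback
    global inner_iters
    inner_iters += 1

def residual(x):
    # toy nonlinear system standing in for the real problem
    return np.cos(x) - x

sol = newton_krylov(residual, np.zeros(5),
                    callback=count_newton, inner_callback=count_inner)
print(newton_iters, inner_iters)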
I tried posting this on scicomp.stackexchange, but I didn't get any answers. I think that the question might be too specific to Scipy to get a good answer there, so I hope that someone here can help me. Thank you for your help!

Function to determine a reasonable initial guess for scipy.optimize?

I'm using scipy.optimize.minimize to find the minimum of a 4D function that is rather sensitive to the initial guess used. If I vary it a little bit, the solution will change considerably.
There are many questions similar to this one already in SO (e.g.: 1, 2, 3), but no real answer.
In an old question of mine, one of the developers of the zunzun.com site (apparently no longer online) explained how they managed this:
Zunzun.com uses the Differential Evolution genetic algorithm (DE) to find initial parameter estimates which are then passed to the Levenberg-Marquardt solver in scipy. DE is not actually used as a global optimizer per se, but rather as an "initial parameter guesser".
The closest I've found to this algorithm is this answer where a for block is used to call the minimizing function many times with random initial guesses. This generates multiple minimized solutions, and finally the best (smallest value) one is picked.
Is there something like what the zunzun dev described already implemented in Python?
There is no general answer to such a question, as the problem of minimizing an arbitrary function is impossible to solve in general. You can do better or worse on particular classes of functions; it is rather a domain for a mathematician to analyze what your function probably looks like.
Obviously you can also work with dozens of so-called "meta-optimizers", which are just bunches of heuristics that might (or might not) work for your particular application. Those include randomly sampling starting points in a loop, using genetic algorithms, or (the most mathematically justified approach, as far as I know) using Bayesian optimization. In general, the idea is to model your function at the same time as you try to minimize it; this way you can make an informed guess about where to start next time (which is one level of abstraction higher than random guessing or using genetic algorithms/differential evolution). Thus, I would order these methods in the following way (a sketch of the DE-then-polish idea follows the list):
grid search / random sampling - uses no information from previous runs, thus the worst results
genetic approaches, evolutionary algorithms, basin-hopping, annealing - use information from previous runs as (x, f(x)) pairs for a limited period of time (generations), thus average results
Bayesian optimization (and similar methods) - uses information from all previous experiences by modeling the underlying function and selecting samples based on expected improvement, thus the best results (at the cost of the most complex methods)
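As an illustration of the approach the zunzun developer described, here is a minimal sketch with a made-up 4-D objective: scipy's differential_evolution produces a rough estimate that is then handed to a local minimizer for polishing:

import numpy as np
from scipy.optimize import differential_evolution, minimize

def f(x):
    # hypothetical multimodal 4-D objective
    return np.sum((x - 1.5) ** 2) + np.sum(np.sin(5 * x))

bounds = [(0.0, 3.0)] * 4

rough = differential_evolution(f, bounds, maxiter=50, seed=0)  # global "guesser"
refined = minimize(f, rough.x, bounds=bounds)                  # local polish
print(refined.x, refined.fun)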

parameter within an interval while optimizing

Usually I use Mathematica, but I am now trying to shift to Python, so this question might be a trivial one; I am sorry about that.
Anyway, is there any built-in function in Python similar to Mathematica's Interval[{min,max}]? The link is: http://reference.wolfram.com/language/ref/Interval.html
What I am trying to do is minimize a function, but it is a constrained minimization; by that I mean the parameters of the function are only allowed within some particular interval.
For a very simple example, let's say f(x) is a function with parameter x and I am looking for the value of x which minimizes the function, but x is constrained to an interval (min, max). [Obviously the actual problem is not one-dimensional but multi-dimensional, so different parameters may have different intervals.]
Since it is an optimization problem, of course I do not want to pick the parameter randomly from an interval.
Any help will be highly appreciated, thanks!
If it's a highly non-linear problem, you'll need to use an algorithm such as the Generalized Reduced Gradient (GRG) Method.
The idea of the generalized reduced gradient algorithm (GRG) is to solve a sequence of subproblems, each of which uses a linear approximation of the constraints. (Ref)
You'll need to ensure that certain conditions known as the KKT conditions are met, etc. but for most continuous problems with reasonable constraints, you'll be able to apply this algorithm.
This is a good reference for such problems, with a few examples provided (Ref. pg. 104).
Regarding implementation:
While I am not familiar with Python, I have built solver libraries in C++ using templates as well as function pointers, so you can pass functions (for the objective as well as the constraints) as arguments to the solver and you'll get your result, hopefully in polynomial time for convex problems or in cases where the initial values are reasonable.
If an ability to do that exists in Python, it shouldn't be difficult to build a generalized GRG solver.
The Python Solution:
Edit: Here is the Python solution to your problem: Python constrained non-linear optimization
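For reference, the closest scipy analogue to per-parameter intervals is the bounds argument of scipy.optimize.minimize; a minimal sketch with a made-up two-parameter objective:

from scipy.optimize import minimize

def f(x):
    # hypothetical objective; the real function would replace this
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2

bounds = [(0.5, 1.5),     # interval for x[0]
          (-2.0, 0.0)]    # interval for x[1]

res = minimize(f, x0=[1.0, -1.0], bounds=bounds, method="L-BFGS-B")
print(res.x)              # each component stays inside its interval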

Advantage of Metropolis-Hastings or Monte Carlo methods over a simple grid search?

I have a relatively simple function with three unknown input parameters for which I only know the upper and lower bounds. I also know what the output Y should be for all of my data.
So far I have done a simple grid search in python, looping through all of the possible parameter combinations and returning those results where the error between Y predicted and Y observed is within a set limit.
I then look at the results to see which set of parameters performs best for each group of samples, look at the trade-off between parameters, see how outliers affect the data, etc.
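(For concreteness, a minimal sketch of this kind of grid search, with a made-up model and data standing in for the real ones:)

import itertools
import numpy as np

def model(a, b, c, x):
    # hypothetical stand-in for the real three-parameter function
    return a * x ** 2 + b * x + c

x_data = np.linspace(0.0, 1.0, 50)
y_obs = model(1.0, 0.5, 0.2, x_data)       # stand-in for the observed Y

grids = [np.linspace(lo, hi, 21)           # known lower/upper bounds
         for lo, hi in [(0, 2), (0, 1), (0, 1)]]
limit = 0.05                               # acceptable error

keep = [(a, b, c)
        for a, b, c in itertools.product(*grids)
        if np.abs(model(a, b, c, x_data) - y_obs).max() < limit]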
So really my question is: whilst the grid search method I'm using is a bit cumbersome, what advantages would there be in using Monte Carlo methods such as Metropolis-Hastings instead?
I am currently reading up on MCMC methods, but I don't have any practical experience in using them and, in this instance, can't quite see what might be gained.
I’d greatly appreciate any comments or suggestions
Many Thanks
MCMC methods tend to be useful when the underlying function is complex (sometimes too complicated to directly compute) and/or in high-dimensional spaces. They are often used when nothing else is feasible or works well. Since you have a simple, low-dimensional problem, I wouldn't expect MCMC approaches to be especially helpful for you.
If you can perform the grid search at a sufficiently-fine scale in a small enough amount of time for your problem domain, it's likely a good approach.
If your function is convex, there are many well-known approaches such as gradient descent.
If your function has a simple functional form that can easily be solved but you have large amounts of data with gross outliers, RANSAC can be helpful.
If your function has many local minima at unknown locations, simulated annealing can work well.
When the search space becomes larger, it can become infeasible to do an exhaustive search. So we turn to Monte Carlo methods out of necessity.

Parallel many dimensional optimization

I am building a script that generates input data [parameters] for another program to calculate. I would like to optimize the resulting data. Previously I have been using scipy's Powell optimization. The pseudo-code looks something like this:
def value(param):
    output = run_program(param)
    # parse the program's output into a single number (details omitted)
    return parse_output(output)

scipy.optimize.fmin_powell(value, param)
This works great; however, it is incredibly slow, as each iteration of the program can take days to run. What I would like to do is coarse-grain parallelize this. So instead of running a single iteration at a time it would run (number of parameters)*2 at a time. For example:
# Initial guess
param = [1, 2, 3, 4, 5]
# Modify guess by plus/minus another matrix that is changeable at each iteration
jump = [1, 1, 1, 1, 1]
# Modify each variable plus/minus jump.
for num, a in enumerate(param):
    new_param1 = param[:]
    new_param1[num] = new_param1[num] + jump[num]
    run_program(new_param1)
    new_param2 = param[:]
    new_param2[num] = new_param2[num] - jump[num]
    run_program(new_param2)
# Wait until all programs are complete -> parse output
Output = [[value, param], ...]
# Create new guess
# Repeat
The number of variables can range from 3 to 12, so something like this could potentially speed the code up from taking a year to taking a week. All variables are dependent on each other and I am only looking for local minima from the initial guess. I have started an implementation using Hessian matrices; however, that is quite involved. Is there anything out there that already does this, is there a simpler way, or any suggestions on how to get started?
So the primary question is the following:
Is there an algorithm that takes a starting guess, generates multiple guesses, then uses those multiple guesses to create a new guess, and repeats until a threshold is reached? No analytic derivatives are available. What is a good way of going about this: is there something already built that does this, are there other options?
Thank you for your time.
As a small update, I do have this working by fitting simple parabolas through three points along each dimension and then using the minima as the next guess. This seems to work decently, but it is not optimal. I am still looking for additional options.
My current best implementation parallelizes the inner loop of Powell's method.
Thank you everyone for your comments. Unfortunately it looks like there is simply no concise answer to this particular problem. If I get around to implementing something that does this I will paste it here; however, as the project is not particularly important and the need for results is not pressing, I will likely be content letting it take up a node for a while.
I had the same problem while I was at university; we had a Fortran algorithm to calculate the efficiency of an engine based on a group of variables. At the time we used modeFRONTIER and, if I recall correctly, none of the algorithms were able to generate multiple guesses.
The normal approach would be to have a DOE (design of experiments), and there were some algorithms to generate the DOE that best fit your problem. After that we would run the individual DOE entries in parallel, and an algorithm would "watch" the development of the optimizations, showing the current best design.
Side note: if you don't have a cluster and need more computing power, HTCondor may help you.
Are derivatives of your goal function available? If yes, you can use gradient descent (old, slow but reliable) or conjugate gradient. If not, you can approximate the derivatives using finite differences and still use these methods. I think in general, if using finite difference approximations to the derivatives, you are much better off using conjugate gradients rather than Newton's method.
A more modern method is SPSA, which is a stochastic method and doesn't require derivatives. SPSA requires far fewer evaluations of the goal function than the finite-difference approximation to conjugate gradients for the same rate of convergence, for somewhat well-behaved problems.
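For intuition, a minimal sketch of the SPSA gradient estimate with a toy quadratic objective: it needs only two function evaluations per step regardless of dimension, versus 2n for central differences:

import numpy as np

rng = np.random.default_rng(0)

def spsa_gradient(f, x, c=1e-2):
    # simultaneous perturbation: one random +/-1 direction, two evaluations
    delta = rng.choice([-1.0, 1.0], size=x.shape)
    return (f(x + c * delta) - f(x - c * delta)) / (2.0 * c * delta)

f = lambda v: float(np.sum((v - 1.0) ** 2))    # toy stand-in objective
x = np.zeros(5)
for k in range(200):
    x -= 0.1 / (k + 1) ** 0.602 * spsa_gradient(f, x)  # standard SPSA gain decay
print(x)                                        # should drift toward all ones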
There are two ways of estimating gradients, one easily parallelizable, one not:
- around a single point, e.g. (f(x + h*e_i) - f(x)) / h for each direction e_i; this is easily parallelizable up to Ndim
- "walking" gradient: walk from x0 in direction e0 to x1, then from x1 in direction e1 to x2, ...; this is sequential.
Minimizers that use gradients are highly developed and powerful; they converge quadratically (on smooth enough functions). The user-supplied gradient function can of course be a parallel-gradient-estimator.
A few minimizers use "walking" gradients, among them Powell's method; see Numerical Recipes p. 509. So I'm confused: how do you parallelize its inner loop?
I'd suggest scipy fmin_tnc with a parallel-gradient-estimator, maybe using central, not one-sided, differences.
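A minimal sketch of that suggestion, assuming run_program stands in for the expensive black-box evaluation (here a cheap placeholder): a central-difference gradient whose 2*Ndim evaluations are farmed out to a process pool, plugged into fmin_tnc:

import numpy as np
from concurrent.futures import ProcessPoolExecutor
from scipy.optimize import fmin_tnc

def run_program(x):
    # cheap placeholder for the expensive external program
    return float(np.sum((np.asarray(x) - 1.0) ** 2))

H = 1e-4                      # central-difference step

def func_and_grad(x):
    # build the 2*Ndim perturbed points and evaluate them in parallel
    points = []
    for i in range(len(x)):
        for sign in (+1.0, -1.0):
            p = x.copy()
            p[i] += sign * H
            points.append(p)
    with ProcessPoolExecutor() as pool:
        vals = list(pool.map(run_program, points))
    grad = np.array([(vals[2 * i] - vals[2 * i + 1]) / (2.0 * H)
                     for i in range(len(x))])
    return run_program(x), grad   # fmin_tnc accepts (value, gradient)

if __name__ == "__main__":
    x_opt, nfeval, rc = fmin_tnc(func_and_grad, np.zeros(5))
    print(x_opt)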
(Fwiw,
this
compares some of the scipy no-derivative optimizers on two 10-d functions; ymmv.)
I think what you want to do is use the threading capabilities built into Python.
Provided your working function has more or less the same run-time whatever the params, it would be efficient.
Create 8 threads in a pool, run 8 instances of your function, get 8 results, run your optimisation algo to change the params based on those 8 results, repeat... profit?
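A minimal sketch of that loop, with hypothetical evaluate and propose_candidates stand-ins for the real program and proposal rule:

import random
from concurrent.futures import ThreadPoolExecutor

def evaluate(params):
    # stand-in for running the external program and parsing its output
    return sum((p - 3.0) ** 2 for p in params)

def propose_candidates(guess, n=8):
    # stand-in proposal rule: random jitter around the current guess
    return [[g + random.uniform(-0.5, 0.5) for g in guess] for _ in range(n)]

guess = [1.0, 2.0, 3.0, 4.0, 5.0]
with ThreadPoolExecutor(max_workers=8) as pool:
    for step in range(100):
        candidates = [guess] + propose_candidates(guess)
        results = list(pool.map(evaluate, candidates))
        guess = candidates[results.index(min(results))]   # keep the best
print(guess)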
If I haven't misunderstood what you are asking, you are trying to minimize your function one parameter at a time.
You can do this by creating a set of single-argument functions, where for each function you freeze all the arguments except one.
Then you loop, optimizing each variable and updating the partial solution.
This method can greatly speed up the minimization of functions of many parameters where the energy landscape is not too complex (i.e. the dependency between the parameters is not too strong).
given a function
energy(*args) -> value
you create the guess and the functions:
guess = [1, 1, 1, 1]
funcs = [lambda x, i=i: energy(*(guess[:i] + [x] + guess[i+1:]))
         for i in range(len(guess))]
then you put them in a while loop for the optimization:
while not converged:
    for func in funcs:
        optimize func
        update the guess
    check for convergence
This is a very simple yet effective method to simplify your minimization task. I can't really recall what this method is called, but a close look at the Wikipedia entry on minimization should do the trick.
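A runnable version of this scheme (often called coordinate descent), assuming a toy quadratic energy and scipy's minimize_scalar for the one-dimensional inner optimizations:

from scipy.optimize import minimize_scalar

def energy(*args):
    # toy stand-in energy with minimum at (0, 1, 2, 3)
    return sum((a - i) ** 2 for i, a in enumerate(args))

guess = [1.0, 1.0, 1.0, 1.0]
for sweep in range(20):                        # crude convergence loop
    for i in range(len(guess)):
        func = lambda x, i=i: energy(*(guess[:i] + [x] + guess[i + 1:]))
        guess[i] = minimize_scalar(func).x     # optimize, then update the guess
print(guess)                                   # approaches [0, 1, 2, 3]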
You could parallelize at two levels: 1) parallelize the calculation of a single iteration, or 2) start N initial guesses in parallel.
For 2) you need a job controller to manage the N initial-guess discovery threads.
Add an extra output to your program, a "lower bound", indicating that descents from the current input parameters won't go below that value.
The N initial-guess threads can then compete with each other; if one thread's lower bound is higher than another thread's current value, that thread can be dropped by your job controller.
Parallelizing local optimizers is intrinsically limited: they start from a single initial point and try to work downhill, so later points depend on the values of previous evaluations. Nevertheless there are some avenues where a modest amount of parallelization can be added.
As another answer points out, if you need to evaluate your derivative using a finite-difference method, preferably with an adaptive step size, this may require many function evaluations, but the derivative with respect to each variable may be independent; you could maybe get a speedup by a factor of twice the number of dimensions of your problem. If you've got more processors than you know what to do with, you can use higher-order-accurate gradient formulae that require more (parallel) evaluations.
Some algorithms, at certain stages, use finite differences to estimate the Hessian matrix; this requires about half the square of the number of dimensions of your problem in function evaluations, and all of them can be done in parallel.
Some algorithms may also be able to use more parallelism at a modest algorithmic cost. For example, quasi-Newton methods try to build an approximation of the Hessian matrix, often updating this by evaluating a gradient. They then take a step towards the minimum and evaluate a new gradient to update the Hessian. If you've got enough processors so that evaluating a Hessian is as fast as evaluating the function once, you could probably improve these by evaluating the Hessian at every step.
As far as implementations go, I'm afraid you're somewhat out of luck. There are a number of clever and/or well-tested implementations out there, but they're all, as far as I know, single-threaded. Your best bet is to use an algorithm that requires a gradient and compute your own in parallel. It's not that hard to write an adaptive one that runs in parallel and chooses sensible step sizes for its numerical derivatives.
