Alternatives to fmincon in Python for constrained non-linear optimisation problems

I am having trouble solving an optimisation problem in Python involving ~20,000 decision variables. The problem is non-linear and I wish to apply both bounds and constraints. In addition, the gradient with respect to each decision variable can be calculated.
The bounds are simply that each decision variable must lie in the interval [0, 1], and there is a monotonic constraint placed upon the variables, i.e. each decision variable must be greater than the previous one.
I initially intended to use the L-BFGS-B method provided by the scipy.optimize package; however, I found out that, while it supports bounds, it does not support constraints.
I then tried the SLSQP method, which does support both constraints and bounds. However, because it requires more memory than L-BFGS-B and I have a large number of decision variables, I ran into memory errors fairly quickly.
The paper this problem comes from used the fmincon solver in Matlab to optimise the function, which, to my knowledge, supports both bounds and constraints while being more memory efficient than the SLSQP method provided by scipy. I do not have access to Matlab, however.
Does anyone know of an alternative I could use to solve this problem?
Any help would be much appreciated.
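One option worth trying from within scipy itself is minimize with method='trust-constr', which accepts both bounds and (sparse) linear constraints; the monotonicity condition x[i+1] >= x[i] can be written as a sparse difference matrix. Below is a minimal sketch with a stand-in quadratic objective (not the paper's actual objective) and a modest n, since the dense quasi-Newton Hessian used here can still get heavy at ~20,000 variables; interior-point solvers such as IPOPT (via cyipopt) are another commonly suggested alternative.

```python
# Minimal sketch: bounds [0, 1] plus a sparse monotonicity constraint with
# scipy's 'trust-constr'. The objective is a toy quadratic stand-in.
import numpy as np
from scipy import sparse
from scipy.optimize import minimize, Bounds, LinearConstraint, BFGS

n = 1000                                        # toy size; memory use grows with n
target = np.sin(np.linspace(0, 3 * np.pi, n))   # deliberately non-monotone target

def fun(x):
    return 0.5 * np.sum((x - target) ** 2)

def grad(x):
    return x - target

# x[i+1] - x[i] >= 0, expressed as a sparse (n-1) x n difference matrix
D = sparse.diags([-np.ones(n - 1), np.ones(n - 1)], offsets=[0, 1], shape=(n - 1, n))
monotone = LinearConstraint(D, 0, np.inf)

res = minimize(fun, x0=np.full(n, 0.5), jac=grad,
               hess=BFGS(),          # dense quasi-Newton Hessian; costly for very large n
               method='trust-constr',
               bounds=Bounds(0.0, 1.0),
               constraints=[monotone])
print(res.status, res.fun)
```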

Related

Why would sp.optimize.leastsq converge when sp.optimize.least_squares doesn't?

I'm trying to fit my data and have so far used sp.optimize.leastsq. I changed to sp.optimize.least_squares to add bounds to the parameters, but both with and without bounds the search doesn't converge, even on data sets that sp.optimize.leastsq fitted just fine.
Shouldn't these functions work the same?
What could be the difference between them that makes the newer one fail to find solutions the older one did?
leastsq is a wrapper around MINPACK’s lmdif and lmder algorithms.
least_squares implements other methods in addition to the MINPACK algorithm.
method : {‘trf’, ‘dogbox’, ‘lm’}, optional
Algorithm to perform minimization.
‘trf’ : Trust Region Reflective algorithm, particularly suitable for large sparse problems with bounds. Generally robust method.
‘dogbox’ : dogleg algorithm with rectangular trust regions, typical use case is small problems with bounds. Not recommended for problems with rank-deficient Jacobian.
‘lm’ : Levenberg-Marquardt algorithm as implemented in MINPACK. Doesn’t handle bounds and sparse Jacobians. Usually the most efficient method for small unconstrained problems.
Default is ‘trf’. See Notes for more information.
It is possible for some problems that the ‘lm’ method does not converge while ‘trf’ does.
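A quick way to check whether the method choice (rather than the new API) is the issue is to call least_squares with method='lm', which runs the same MINPACK code as leastsq. A small sketch with a made-up exponential model:

```python
# Compare the old wrapper, the new wrapper with the same MINPACK method,
# and the default bounded 'trf' method on a toy exponential fit.
import numpy as np
from scipy.optimize import leastsq, least_squares

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.5 * np.exp(-0.7 * x) + 0.05 * rng.standard_normal(x.size)

def residuals(p):
    a, b = p
    return a * np.exp(-b * x) - y

p0 = [1.0, 1.0]
p_old, _ = leastsq(residuals, p0)                     # MINPACK lmdif
p_lm = least_squares(residuals, p0, method='lm').x    # same MINPACK code, new wrapper
p_trf = least_squares(residuals, p0, method='trf',    # TRF, supports bounds
                      bounds=([0, 0], [10, 10])).x
print(p_old, p_lm, p_trf)
```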

SciPy rootfinding algorithm 'gives up' too fast

Is there any way to force the 'hybr' method of scipy.optimize's root to keep working even after it finds that convergence is too slow? In my problem, the solver nearly reaches the desired precision, but right before it, the algorithm terminates because of slow convergence... Is it possible to make 'hybr' more 'self-confident'?
I use the root-finding algorithm root from scipy.optimize module to solve a system of two algebraic, non-linear equations. Since the equations have to be solved many times for various parameter values it is important to find a numerical method that would be most stable for this problem.
I have compared the performance of all the methods provided by scipy.optimize module. To visualize their performance I have used the following procedure:
The algebraic equations were rearranged so that they have zero on the R.H.S.
Then, at each step made by the algorithm, the sum of the L.H.S. squared of all the equations was computed and printed.
In my case, the most efficient method is the default "hybr". Other built-in methods either do not converge at all or are significantly slower. Unfortunately, in some cases the desired method gives up too fast. Lowering the precision and/or providing additional options to the functions did not help.
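As an illustration of that procedure, here is a rough sketch (with a toy two-equation system, not the asker's actual model) that prints the sum of squares at every residual evaluation and tightens 'hybr' through its options; note that the options can only push the termination test so far.

```python
# Trace the residual norm at each call and pass tighter options to 'hybr'.
import numpy as np
from scipy.optimize import root

def equations(v):
    x, y = v
    return [x**2 + y**2 - 1.0,   # both equations rearranged to ... = 0
            x - y**3]

def traced(v):
    r = np.asarray(equations(v), dtype=float)
    print("sum of squares:", np.sum(r**2))
    return r

sol = root(traced, x0=[0.5, 0.5], method='hybr',
           options={'xtol': 1e-12, 'maxfev': 5000})
print(sol.success, sol.x, sol.message)
```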

scipy curve_fit and local minima: get to global minima as fast as possible

My problem at hand: I am using scipy curve_fit to fit a curve (https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html), but on many occasions the estimated parameters correspond to one of the many "local" minima and not the "global" minimum. This is to be expected given how curve_fit was designed. Still, I really need my global minimum.
In order to find it, my initial hunch would be to try multiple starting points, run multiple curve_fit instances and choose the one with the lowest fit error, but I would suffer from a number of biases in my personal initial guess estimates (also, the number of combinations could potentially be considerable, which would be detrimental to performance).
Do you happen to know better, faster and/or methodologically sounder methods for how to proceed? (They do not need to be least squares; I can build ad hoc stuff if necessary.)
There are a couple of possible approaches. One would be to do a "brute force" search through your parameter space to find candidate starting points for the local solver in curve_fit. Another would be to use a global solver such as differential evolution. For sure, both of these can be much slower than a single curve_fit, but they do aim at finding "global minima". Within scipy.optimize, these methods are brute and differential_evolution, respectively. It should be noted that neither of these is actually a global optimizer, as both require upper and lower bounds for the search space of all parameters. Still, within those boundaries, they do attempt to find the best result, not just a local minimum close to your starting values.
A common approach is to use brute with medium-sized steps for each parameter, then take the best ten of those and use Levenberg-Marquardt (from leastsq, as used in curve_fit) starting from each of these. Similarly, you can use differential_evolution and then refine.
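A rough sketch of that recipe, using a made-up two-parameter sine model: differential_evolution searches the boxed parameter space for a good starting point, and curve_fit then refines it.

```python
# Global search (differential evolution) followed by local refinement (curve_fit).
import numpy as np
from scipy.optimize import differential_evolution, curve_fit

rng = np.random.default_rng(1)
xdata = np.linspace(0, 10, 200)
ydata = 3.0 * np.sin(1.7 * xdata) + 0.2 * rng.standard_normal(xdata.size)

def model(x, amp, freq):
    return amp * np.sin(freq * x)

def sse(params):                        # objective for the global stage
    return np.sum((model(xdata, *params) - ydata) ** 2)

bounds = [(0, 10), (0, 5)]              # search box: required by differential_evolution
global_best = differential_evolution(sse, bounds, seed=1).x
popt, pcov = curve_fit(model, xdata, ydata, p0=global_best)   # local refine
print(global_best, popt)
```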
You might find lmfit (https://lmfit.github.io/lmfit-py) helpful, as it allows you to set up the model once and switch between solvers, including brute, differential_evolution, and leastsq. Lmfit also makes it easy to fix some parameters or place upper/lower bounds on some parameters. It also provides a higher-level interface to model building and curve-fitting, and methods to explore the confidence intervals of parameters.
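A rough sketch of that lmfit workflow, assuming lmfit is installed and reusing the same made-up sine model: the model is defined once, the parameters are given bounds, and the fitting method is switched by name.

```python
# lmfit: define the model once, bound the parameters, switch solvers by name.
import numpy as np
from lmfit import Model

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 200)
y = 3.0 * np.sin(1.7 * x) + 0.2 * rng.standard_normal(x.size)

def sine_model(x, amp, freq):
    return amp * np.sin(freq * x)

model = Model(sine_model)
params = model.make_params(amp=1.0, freq=1.0)
params['amp'].set(min=0, max=10)     # finite bounds are required for the global methods
params['freq'].set(min=0, max=5)

global_fit = model.fit(y, params, x=x, method='differential_evolution')
refined = model.fit(y, global_fit.params, x=x, method='leastsq')
print(refined.fit_report())
```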

parameter within an interval while optimizing

Usually I use Mathematica, but now trying to shift to python, so this question might be a trivial one, so I am sorry about that.
Anyway, is there any built-in function in Python which is similar to the function named Interval[{min,max}] in Mathematica? The link is: http://reference.wolfram.com/language/ref/Interval.html
What I am trying to do is the following: I have a function and I am trying to minimize it, but it is a constrained minimization; by that I mean the parameters of the function are only allowed within some particular interval.
For a very simple example, let's say f(x) is a function with parameter x and I am looking for the value of x which minimizes the function, but x is constrained within an interval (min, max). [Obviously the actual problem is not one-dimensional but rather a multi-dimensional optimization, so different parameters may have different intervals.]
Since it is an optimization problem, of course I do not want to pick the parameter randomly from an interval.
Any help will be highly appreciated, thanks!
If it's a highly non-linear problem, you'll need to use an algorithm such as the Generalized Reduced Gradient (GRG) Method.
The idea of the generalized reduced gradient algorithm (GRG) is to solve a sequence of subproblems, each of which uses a linear approximation of the constraints. (Ref)
You'll need to ensure that certain conditions known as the KKT conditions are met, etc. but for most continuous problems with reasonable constraints, you'll be able to apply this algorithm.
This is a good reference for such problems with a few examples provided. Ref. pg. 104.
Regarding implementation:
While I am not familiar with Python, I have built solver libraries in C++ using templates as well as function pointers, so you can pass functions (for the objective as well as the constraints) as arguments to the solver and get your result - hopefully in polynomial time for convex problems or in cases where the initial values are reasonable.
If an ability to do that exists in Python, it shouldn't be difficult to build a generalized GRG solver.
The Python Solution:
Edit: Here is the Python solution to your problem: Python constrained non-linear optimization
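For completeness, a minimal sketch of the bounds-only case in scipy (toy objective and made-up intervals): minimize takes one (min, max) pair per parameter, which plays the role of Mathematica's Interval[{min, max}] here.

```python
# Box-constrained minimization: one (min, max) interval per parameter.
import numpy as np
from scipy.optimize import minimize

def f(p):
    x, y = p
    return (x - 2.0) ** 2 + (y + 1.0) ** 2 + np.sin(3 * x)

bounds = [(0.0, 1.5),     # x is only allowed in [0, 1.5]
          (-2.0, 0.0)]    # y is only allowed in [-2, 0]

res = minimize(f, x0=[0.5, -0.5], method='L-BFGS-B', bounds=bounds)
print(res.x, res.fun)
```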

Constrained least-squares estimation in Python

I'm trying to perform a constrained least-squares estimation using Scipy such that all of the coefficients are in the range (0,1) and sum to 1 (this functionality is implemented in Matlab's LSQLIN function).
Does anybody have tips for setting up this calculation using Python/Scipy? I believe I should be using scipy.optimize.fmin_slsqp(), but am not entirely sure what parameters I should be passing to it.[1]
Many thanks for the help,
Nick
[1] The one example in the documentation for fmin_slsqp is a bit difficult for me to parse without the referenced text -- and I'm new to using Scipy.
scipy-optimize-leastsq-with-bound-constraints on SO gives leastsq_bounds, which is leastsq with bound constraints such as 0 <= x_i <= 1.
The constraint that they sum to 1 can be added in the same way.
(I've found leastsq_bounds / MINPACK to be good on synthetic test functions in 5d, 10d, 20d; how many variables do you have?)
Have a look at this tutorial, it seems pretty clear.
Since MATLAB's lsqlin is a bounded linear least squares solver, you would want to check out scipy.optimize.lsq_linear.
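A short sketch of that suggestion with random toy data: lsq_linear handles the [0, 1] box directly, though the sum-to-one condition would still need to be handled separately (see the next answer).

```python
# Bounded linear least squares with lsq_linear.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 4))
b = A @ np.array([0.1, 0.2, 0.3, 0.4]) + 0.01 * rng.standard_normal(50)

res = lsq_linear(A, b, bounds=(0.0, 1.0))   # 0 <= x_i <= 1
print(res.x)
```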
Non-negative least squares optimization using scipy.optimize.nnls is a robust way of doing it. Note that, if the coefficients are constrained to be positive and sum to unity, they are automatically limited to the interval [0, 1]; that is, one need not additionally constrain them from above.
scipy.optimize.nnls automatically makes the variables non-negative using the Lawson and Hanson algorithm, whereas the sum constraint can be taken care of as discussed in this thread and this one.
Scipy's nnls uses an old Fortran backend, which is apparently widely used in equivalent implementations of nnls in other software.
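One common way to fold the sum-to-one condition into nnls (along the lines of what the linked threads discuss) is to append a heavily weighted row of ones to A and the value 1 (times the same weight) to b, so the constraint acts as a strong soft penalty. A sketch with random toy data:

```python
# nnls with a heavily weighted extra row enforcing sum(x) ~= 1.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 4))
b = A @ np.array([0.1, 0.2, 0.3, 0.4]) + 0.01 * rng.standard_normal(50)

w = 1e6                                         # weight on the sum constraint
A_aug = np.vstack([A, w * np.ones((1, A.shape[1]))])
b_aug = np.append(b, w * 1.0)

coeffs, residual = nnls(A_aug, b_aug)
print(coeffs, coeffs.sum())                     # coefficients >= 0, sum close to 1
```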
