I have this formula:
1 - exp(log(0.5) * (x / beta)^alpha)
where alpha and beta are the variables I have to find.
x is a bunch of images (my data), and I can compare the formula's output with ground truth that comes from a user test. Basically, I can build a loss function that I would like to minimize. To find the best alpha and beta I tried TensorFlow, but gradient descent and other optimizers appear to fail, as the function is not convex (I tried different initial conditions). Is there a global optimization tool in Python that I can use to solve this problem?
You could use NLopt, which has some global optimizers, e.g. DIRECT (download at Gohlke). Or there is scipy's basinhopping. Another nice option is NOMAD, a very good black-box optimizer. It also has a Python interface, but it is not that user-friendly or intuitive.
You can find other hints on local and global optimization in this answer or this answer.
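For instance, here is a minimal sketch with scipy's basinhopping; the xdata/ydata arrays here are made-up stand-ins for your per-image values and user-test scores:

import numpy as np
from scipy.optimize import basinhopping

# stand-in data: replace with your per-image values and user-test scores
xdata = np.linspace(0.1, 10.0, 100)
ydata = 1 - np.exp(np.log(0.5) * (xdata / 2.0) ** 1.5)

def model(x, alpha, beta):
    return 1 - np.exp(np.log(0.5) * (x / beta) ** alpha)

def loss(params):
    alpha, beta = params
    return np.sum((model(xdata, alpha, beta) - ydata) ** 2)

# basinhopping restarts a local minimizer from randomly perturbed points
result = basinhopping(
    loss, x0=[1.0, 1.0], niter=200, seed=0,
    minimizer_kwargs={"method": "L-BFGS-B",
                      "bounds": [(1e-6, None), (1e-6, None)]})
print(result.x)  # estimated (alpha, beta)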
I've been trying to understand how automatic differentiation (autodiff) works. There are several implementations of it in TensorFlow, PyTorch, and other frameworks.
There are three aspects of automatic differentiation that currently seem vague to me.
The exact process used to calculate the gradients
How autodiff works with respect to inputs
How autodiff works with respect to a singular value as input
So far, it seems to roughly follow these steps:
Break up the original function into elementary operations (individual arithmetic operations, composition, and function calls).
The elementary operations are combined to form a computational graph, in such a way that the original function can be calculated using that graph.
The computational graph is executed for a certain input, and each operation is recorded.
Walking through the recorded operations in reverse using the chain rule gives us the gradient (I sketch what I mean below).
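To make these steps concrete, here is a minimal hand-rolled sketch of what I think is going on; the names are my own invention, not any library's API:

class Var:
    """A value that records how it was computed (steps 1-3 above)."""
    def __init__(self, value):
        self.value = value
        self.grad = 0.0
        self.parents = []  # (input Var, local derivative) pairs

    def __add__(self, other):
        out = Var(self.value + other.value)
        out.parents = [(self, 1.0), (other, 1.0)]  # d(a+b)/da = d(a+b)/db = 1
        return out

    def __mul__(self, other):
        out = Var(self.value * other.value)
        out.parents = [(self, other.value), (other, self.value)]  # product rule
        return out

def backward(node, seed=1.0):
    """Step 4: walk the recorded operations in reverse with the chain rule."""
    node.grad += seed
    for parent, local in node.parents:
        backward(parent, seed * local)

x = Var(3.0)
y = x * x + x   # the forward pass records the graph
backward(y)
print(x.grad)   # dy/dx = 2*3 + 1 = 7.0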
First of all, is this a correct overview of the steps that are taken in automatic differentiation?
Secondly, how would the above process work for a derivative with respect to the inputs? For instance, a function would need a difference in the x value. Does that mean that the derivative can only be calculated after at least two different x values have been provided as input? Or does it require multiple inputs at once (i.e., a vector input) over which it can calculate a difference? And how does this compare to calculating the gradient with respect to the model weights (i.e., as done in backpropagation)?
Thirdly, how can we take the derivative of a singular value? Take, for instance, the following Python code, where the derivative of y = x**2 is calculated:
import tensorflow as tf

x = tf.constant(3.0)
with tf.GradientTape() as tape:
    tape.watch(x)
    y = x**2
# dy = 2x * dx
dy_dx = tape.gradient(y, x)
print(dy_dx.numpy())  # prints: 6.0
Since dx is the difference between several x inputs, would that not mean that dx = 0?
I found that this paper had a pretty good overview of the various modes of autodiff, as well as how they differ from numerical and symbolic differentiation. However, it did not give me a full understanding, and I would still like to understand the autodiff process in the context of these traditional differentiation techniques.
Rather than applying it practically, I would love to get a more theoretical understanding.
I had similar questions in my mind a few weeks ago, until I started to code my own automatic differentiation package, tensortrax, in Python. It uses forward-mode AD with a hyper-dual number approach. I wrote a Readme (landing page of the repository, section Theory) with an example which could be of interest to you.
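To give a flavour of the idea, here is a minimal plain-dual-number sketch of forward-mode AD; it is illustrative only and not tensortrax's actual API:

class Dual:
    """Carries a value together with its derivative (the dual part)."""
    def __init__(self, value, deriv=0.0):
        self.value = value
        self.deriv = deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (fg)' = f'g + fg'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

# d/dx (x*x + x) at x = 3: seed the derivative slot with 1.0
x = Dual(3.0, 1.0)
y = x * x + x
print(y.value, y.deriv)  # 12.0 7.0

Because the derivative is propagated alongside the value, a single forward evaluation at x = 3.0 yields both f(3) and f'(3); no difference between two x values is ever computed.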
I think what you need to understand first is what a derivative is; many math textbooks can help you with that. The notation dx means an infinitesimal variation, so you do not actually compute any difference; rather, you apply a symbolic operation to your function f that transforms it into a function f', also written df/dx, which you can then evaluate at any point where it is defined.
Regarding the algorithm used for automatic differentiation, you understood it correctly. The part you seem to be missing is how the derivatives of elementary operations are computed and what they mean, but it would be hard to do a crash course on that in an SO answer.
I want to find the minimum of a function y = f(x) in Python.
Problem: the solver tries to compute the gradient with super close x values (delta x around 1e-8), and my function f is not sensitive to such a small step (i.e., y only visibly varies for delta x around 1e-1).
Hence the gradient is 0 to the solver, and it cannot find the proper solution.
I've tried the following solvers from scipy, but I can't find the option I'm looking for:
scipy.optimize.minimize
scipy.optimize.fmin
In MATLAB's fmincon, there is an option that does the job: 'DiffMinChange', the minimum change in variables for finite-difference gradients (a positive scalar).
You may want to try and use L-BFGS-B from scipy:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin_l_bfgs_b.html
Provide the “epsilon” parameter as something around 0.05-0.1 and see if that makes it better. I am of course assuming that you will let the solver compute the gradient for you by numerical differentiation (i.e., you pass fprime=None and approx_grad=True to the routine).
I personally despise the “minimize” interface to various solvers so I prefer to deal with the actual solvers themselves.
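For instance, a minimal sketch; the objective here is a made-up stand-in for your step-insensitive f:

import numpy as np
from scipy.optimize import fmin_l_bfgs_b

def f(x):
    # stand-in objective that is flat at scales below ~1e-1
    return np.round(x[0] ** 2, 1)

# approx_grad=True makes the solver build finite-difference gradients
# itself; epsilon is the step it uses for those differences.
x_opt, f_opt, info = fmin_l_bfgs_b(f, x0=np.array([3.0]),
                                   approx_grad=True, epsilon=0.05)
print(x_opt, f_opt)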
I want to solve the following optimization problem with Python:
I have a black box function f with multiple variables as input.
The execution of the black box function is quite time consuming, therefore I would like to avoid a brute force approach.
I would like to find the optimum input parameters for that black box function f.
In the following, for simplicity I just write the dependency for one dimension x.
An optimum parameter x is defined as one that maximizes the cost function cost(x), which is a weighted sum of the value of f(x) and a maximum standard deviation of f(x):
cost(x) = A * f(x) + B * max(standardDeviation(f(x)))
The parameters A and B are fixed.
E.g., for the picture below, the value of x at position 'U' would be preferred over the value of x at position 'V'.
My question is:
Is there any easily adaptable framework or process that I could utilize (similar to, e.g., simulated annealing or Bayesian optimisation)?
As mentioned, I would like to avoid a brute force approach.
I’m still not 100% sure of your approach, but does this formula ring true to you:
A * max(f(x)) + B * max(standardDeviation(f(x)))
?
If it does, then I guess you may want to consider that maximizing f(x) may (or may not) be compatible with maximizing the standard deviation of f(x), which means you may be facing a multi-objective optimization problem.
Again, you haven’t specified what f(x) returns - is it a vector? I hope it is, otherwise I’m unclear on what you can calculate the standard deviation on.
The picture you posted is not so obvious to me. f(x) is the entire black curve; it has a maximum at the point V, but what can you say about the standard deviation? To calculate the standard deviation of f(x) you have to take into account the entire f(x) curve (including the point U), not just the neighbourhood of U and V. If you only want the standard deviation in an interval around a maximum of f(x), then I think you're out of luck when it comes to frameworks. The best thing that comes to my mind is to use a local (or maybe global, better) optimization algorithm to hunt for the maximum of f(x) - simulated annealing, differential evolution, tunnelling, and so on - and then, once you have found a maximum of f(x), sample a few points to the left and right of your optimum and calculate the standard deviation of those evaluations. Then you'll have to decide whether the combination of the maximum of f(x) and this standard deviation is good enough compared to any previous "optimal" point found.
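Something like the following sketch, where f is of course a cheap stand-in for your expensive black box, and the neighbourhood width and the weights A and B are guesses you would have to tune:

import numpy as np
from scipy.optimize import differential_evolution

def f(x):
    # cheap stand-in for the expensive black-box function
    return np.sin(3.0 * x) * np.exp(-0.1 * x ** 2)

A, B = 1.0, 0.5

# step 1: global hunt for a maximum of f (minimize -f)
res = differential_evolution(lambda v: -f(v[0]), bounds=[(-5.0, 5.0)], seed=0)
x_star = res.x[0]

# step 2: sample a few points around the optimum and measure the spread
neighbourhood = x_star + np.linspace(-0.5, 0.5, 11)
samples = np.array([f(x) for x in neighbourhood])
score = A * f(x_star) + B * samples.std()
print(x_star, score)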
This is all speculation, as I’m unsure that your problem is really an optimization one or simply a “peak finding” exercise, for which there are many different - and more powerful and adequate- methods.
Andrea.
I'm trying to perform a constrained least-squares estimation using Scipy such that all of the coefficients are in the range (0,1) and sum to 1 (this functionality is implemented in Matlab's LSQLIN function).
Does anybody have tips for setting up this calculation using Python/SciPy? I believe I should be using scipy.optimize.fmin_slsqp(), but am not entirely sure what parameters I should be passing to it.[1]
Many thanks for the help,
Nick
[1] The one example in the documentation for fmin_slsqp is a bit difficult for me to parse without the referenced text -- and I'm new to using Scipy.
scipy-optimize-leastsq-with-bound-constraints on SO gives leastsq_bounds, which is leastsq with bound constraints such as 0 <= x_i <= 1. The constraint that they sum to 1 can be added in the same way. (I've found leastsq_bounds / MINPACK to be good on synthetic test functions in 5d, 10d, 20d; how many variables do you have?)
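If you do want the fmin_slsqp route the question mentions, a minimal sketch might look like this; A and b here are random stand-ins for your design matrix and target vector:

import numpy as np
from scipy.optimize import fmin_slsqp

# random stand-ins for the real design matrix and target vector
rng = np.random.default_rng(0)
A = rng.random((20, 4))
b = rng.random(20)

def objective(x):
    return np.sum((A @ x - b) ** 2)

x0 = np.full(4, 0.25)  # feasible start: inside (0, 1) and summing to 1
coeffs = fmin_slsqp(objective, x0,
                    f_eqcons=lambda x: np.array([np.sum(x) - 1.0]),  # sum to 1
                    bounds=[(0.0, 1.0)] * 4)
print(coeffs, coeffs.sum())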
Have a look at this tutorial; it seems pretty clear.
Since MATLAB's lsqlin is a bounded linear least squares solver, you would want to check out scipy.optimize.lsq_linear.
Non-negative least squares optimization using scipy.optimize.nnls is a robust way of doing it. Note that, if the coefficients are constrained to be non-negative and to sum to unity, they are automatically limited to the interval [0,1]; that is, one need not additionally constrain them from above.
scipy.optimize.nnls keeps the variables non-negative using the Lawson-Hanson algorithm, whereas the sum constraint can be taken care of as discussed in this thread and this one.
SciPy's nnls uses an old Fortran backend, which is apparently widely used in equivalent implementations of nnls in other software.
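Here is a minimal sketch of that sum-to-one trick with nnls; the weight w is a tuning choice, and A and b are stand-ins for your problem data:

import numpy as np
from scipy.optimize import nnls

# stand-ins for the real least-squares system
rng = np.random.default_rng(0)
A = rng.random((20, 4))
b = rng.random(20)

# enforce sum(x) = 1 softly by appending a heavily weighted row of ones
w = 1e6
A_aug = np.vstack([A, w * np.ones((1, A.shape[1]))])
b_aug = np.append(b, w * 1.0)

x, rnorm = nnls(A_aug, b_aug)
print(x, x.sum())  # non-negative and summing to ~1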
I'm using scipy.optimize.curve_fit, but I suspect it is converging to a local minimum and not the global minimum.
I tried using simulated annealing in the following way:
import numpy as np
import scipy.optimize

def fit(params):
    return np.sum((ydata - specf(xdata, *params)) ** 2)

p = scipy.optimize.anneal(fit, [1000, 1E-10])
where specf is the curve I am trying to fit. The results in p though are clearly worse than the minimum returned by curve_fit even when the return value indicates the global minimum was reached (see anneal).
How can I improve the results? Is there a global curve fitter in SciPy?
You're right: it only converges towards a local minimum (when it converges at all), since it uses the Levenberg-Marquardt algorithm. There is no global curve fitter in SciPy; you have to write your own using the existing global optimizers. But be aware that even this is not guaranteed to converge to the value you want; that's impossible to guarantee in most cases.
The only other method to improve your result is to guess the starting parameters quite well.
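One practical pattern, sketched here with a toy specf, is to let a global optimizer find rough starting parameters and then polish them with curve_fit:

import numpy as np
from scipy.optimize import differential_evolution, curve_fit

def specf(x, a, b):
    # toy stand-in for the real model being fitted
    return a * np.exp(-b * x)

xdata = np.linspace(0.0, 4.0, 50)
ydata = specf(xdata, 1000.0, 2.5) + np.random.default_rng(0).normal(0.0, 5.0, 50)

def sse(params):
    return np.sum((ydata - specf(xdata, *params)) ** 2)

# global search for rough parameters, then a local polish
rough = differential_evolution(sse, bounds=[(1.0, 1e4), (0.0, 10.0)], seed=0)
popt, pcov = curve_fit(specf, xdata, ydata, p0=rough.x)
print(popt)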
You might want to try using leastsq() directly (curve_fit actually uses it, but you don't get the full output) or the ODR package instead of curve_fit.
The full output of leastsq() gives you a lot more information, such as the chi-squared value (if you want to use that as a quick and dirty goodness-of-fit test).
If you need to weight the fit, you can do it this way:
import numpy as np
from scipy.optimize import leastsq

# pinit, x, y, xerr are your initial guess and data
fitfunc = lambda p, x: p[0] + p[1] * np.exp(-x)
errfunc = lambda p, x, y, xerr: (y - fitfunc(p, x)) / xerr
out = leastsq(errfunc, pinit, args=(x, y, xerr), full_output=1)
infodict = out[2]  # the info dict from the full output holds the residuals
chisq = np.sum(infodict['fvec'] ** 2)
This is a nontrivial problem. Have you considered using Evolutionary Strategies? I have had great success with ecspy (see http://code.google.com/p/ecspy/) and the community is small but very helpful.