How to determine the scale factors of the variables in scipy.optimize.fsolve? - python

The fsolve function of the scipy library has an optional input argument, diag, which is documented as follows:
diag: sequence, optional
N positive entries that serve as scale factors for the variables.
My question is, what are these scale factors?
It appears that diag influences the step taken by the algorithm in each iteration during the search for the roots. Is that correct? If so, it could be very useful for improving convergence. How, then, do I determine the scale factors? I know the lower and upper bounds of my variables.
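The scipy documentation does not spell out how to choose diag, but fsolve wraps MINPACK's hybrd/hybrj routines, where these entries act as multiplicative scale factors on the variables. One commonly suggested heuristic (an assumption here, not something from the scipy docs) is to set diag[i] to roughly 1 / (typical magnitude of x[i]), e.g. derived from your known bounds, so that the scaled variables diag[i] * x[i] are all of order one. A minimal sketch with a made-up system and placeholder bounds:

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical system whose variables live on very different scales:
# the root is near x[0] = 1e-3 and x[1] = 1e3.
def equations(x):
    return [1e3 * x[0] - 1.0,
            1e-3 * x[1] - 1.0]

# Assumed heuristic: derive a "typical magnitude" for each variable from
# the known lower/upper bounds and set diag[i] ~ 1 / typical magnitude,
# so the scaled variables diag[i] * x[i] are all of order one.
lower = np.array([1e-4, 1e2])
upper = np.array([1e-2, 1e4])
typical = 0.5 * (np.abs(lower) + np.abs(upper))

root = fsolve(equations, x0=[5e-3, 5e3], diag=1.0 / typical)
print(root)   # approximately [1e-3, 1e3]
```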

Related

Is there a function in python or R that estimates the limit of a sequence based on a group of points?

Say I have 5 points that follow the path of a converging function, but I don't know the function. Is there a function that uses the points to estimate the limit?
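One option, sketched below as an illustration rather than a ready-made library call, is Aitken's delta-squared acceleration: for a sequence that converges roughly geometrically, it extrapolates the limit from consecutive triples of points. The sample points are placeholders.

```python
import numpy as np

def aitken_limit(seq):
    """Estimate the limit of a convergent sequence using Aitken's
    delta-squared acceleration on consecutive triples of points."""
    s = np.asarray(seq, dtype=float)
    num = (s[1:-1] - s[:-2]) ** 2
    den = s[2:] - 2.0 * s[1:-1] + s[:-2]
    accelerated = s[:-2] - num / den
    return accelerated[-1]   # last accelerated term is the best estimate

# Example: 5 points of a sequence converging to 1
points = [0.5, 0.75, 0.875, 0.9375, 0.96875]
print(aitken_limit(points))   # close to 1.0
```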

How to set one fitting parameter larger than the other as a constraint in iminuit in python?

I have two related fitting parameters. They have the same fitting range. Let's call them r1 and r2. I know I can limit the fitting range using minuit.limits, but I have an additional criterion that r2 has to be smaller than r1. Can I do that in iminuit?
I've found this, I hope this can help you!
Extracted from: https://iminuit.readthedocs.io/en/stable/faq.html
**Can I have parameter limits that depend on each other (e.g. x^2 + y^2 < 3)?**
MINUIT was only designed to handle box constraints, meaning that the limits on the parameters are independent of each other and constant during the minimisation. If you want limits that depend on each other, you have three options (all with caveats), which are listed in increasing order of difficulty:
1. Change the variables so that the limits become independent. For example, transform from cartesian coordinates to polar coordinates for a circle. This is not always possible, of course.
2. Use another minimiser that supports complex boundaries to locate the minimum. The nlopt library and scipy.optimize have such minimisers. Once the minimum is found, and if it is not near the boundary, place box constraints around the minimum and run iminuit to get the uncertainties (make sure that the box constraints are not too tight around the minimum). Neither nlopt nor scipy can give you the uncertainties.
3. Artificially increase the negative log-likelihood in the forbidden region. This is not as easy as it sounds.
The third method done properly is known as the interior point or barrier method. A glance at the Wikipedia article shows that one has to either run a series of minimisations with iminuit (and find a clever way of knowing when to stop) or implement this properly at the level of a Newton step, which would require changes to the complex and convoluted internals of MINUIT2.
Warning: you cannot just add a large value to the likelihood when the parameter boundary is violated. MIGRAD expects the likelihood function to be differentiable everywhere, because it uses the gradient of the likelihood to go downhill. The derivative at such a step is infinite, and it is zero inside the forbidden region. MIGRAD does not like this at all.
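For the specific r2 < r1 case, the first option (change of variables) is often enough: instead of fitting r2 directly, fit delta = r1 - r2 with an independent box limit delta >= 0. A minimal sketch, assuming a placeholder least-squares cost function and made-up limits:

```python
from iminuit import Minuit

def cost(r1, r2):
    # placeholder for your actual cost function
    return (r1 - 3.0) ** 2 + (r2 - 1.0) ** 2

def cost_reparam(r1, delta):
    r2 = r1 - delta                   # r2 <= r1 holds by construction
    return cost(r1, r2)

cost_reparam.errordef = Minuit.LEAST_SQUARES

m = Minuit(cost_reparam, r1=1.0, delta=0.5)
m.limits["r1"] = (0.0, 10.0)          # the shared fitting range (assumed)
m.limits["delta"] = (0.0, None)       # delta >= 0  <=>  r2 <= r1
m.migrad()

r1_fit = m.values["r1"]
r2_fit = r1_fit - m.values["delta"]
print(r1_fit, r2_fit)
```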

How to effectively solve a compound cost function optimisation problem?

I want to solve the following optimization problem with Python:
I have a black box function f with multiple variables as input.
The execution of the black-box function is quite time-consuming, so I would like to avoid a brute-force approach.
I would like to find the optimum input parameters for that black box function f.
In the following, for simplicity I just write the dependency for one dimension x.
An optimum parameter x is defined as one that maximizes the cost function cost(x), which is the weighted sum of
- the value of f(x), and
- the maximum standard deviation of f(x):
cost(x) = A * f(x) + B * max(standardDeviation(f(x)))
The parameters A and B are fixed.
E.g., for the picture below, the value of x at the position 'U' would be preferred over the value of x at the position 'V'.
My question is:
Is there any easily adaptable framework or process that I could utilize (similar to e.g. simulated annealing or Bayesian optimisation)?
As mentioned, I would like to avoid a brute force approach.
I’m still not 100% sure of your approach, but does this formula ring true to you:
A * max(f(x)) + B * max(standardDeviation(f(x)))
?
If it does, then I guess you may want to consider that maximizing f(x) may (or may not) be compatible with maximizing the standard deviation of f(x), which means you may be facing a multi-objective optimization problem.
Again, you haven’t specified what f(x) returns - is it a vector? I hope it is, otherwise I’m unclear on what you can calculate the standard deviation on.
The picture you posted is not so obvious to me. f(x) is the entire black curve; it has a maximum at the point v, but what can you say about the standard deviation? To calculate the standard deviation of f(x) you have to take into account the entire f(x) curve (including the point u), not just the neighborhood of u and v.

If you only want the standard deviation in an interval around a maximum of f(x), then I think you're out of luck when it comes to frameworks. The best thing that comes to my mind is to use a local (or, better, global) optimization algorithm to hunt for the maximum of f(x) - simulated annealing, differential evolution, tunnelling, and so on - and then, once you have found a maximum, sample a few points to the left and right of the optimum and calculate the standard deviation of those evaluations. Then you'll have to decide whether the combination of the maximum of f(x) and this standard deviation is good enough compared to any previous "optimal" point found.
This is all speculation, as I'm unsure whether your problem is really an optimization one or simply a "peak finding" exercise, for which there are many different - and more powerful and adequate - methods.
Andrea.
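Below is a rough sketch of the two-step idea Andrea describes (a global search for the maximum of f, then a local spread estimate around it), using scipy's differential evolution. The black-box function, bounds, weights, and neighbourhood width are all placeholder assumptions, not part of the original question:

```python
import numpy as np
from scipy.optimize import differential_evolution

A, B = 1.0, 0.5                      # fixed weights (assumed values)

def f(x):
    # placeholder for the expensive black-box function of one variable
    return np.exp(-(x - 2.0) ** 2) + 0.5 * np.exp(-(x + 1.0) ** 2)

# Step 1: global search for the maximum of f by minimising -f.
result = differential_evolution(lambda x: -f(x[0]), bounds=[(-5.0, 5.0)])
x_opt = result.x[0]

# Step 2: sample a small neighbourhood around the optimum and use the
# spread of f there as the "standard deviation" term of the cost.
neighbourhood = x_opt + np.linspace(-0.5, 0.5, 11)
local_std = np.std([f(x) for x in neighbourhood])

cost = A * f(x_opt) + B * local_std
print(x_opt, cost)
```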

How to restrict values of np.random with some non-normal distribution to an interval?

I am seeking a random array with non-normal distribution and values in some defined interval, say, from -100 to 100. I cannot see how to do that with the arguments available in numpy.random.name_of_distribution. For example,
k = np.random.noncentral_chisquare(1, 50, 100)
has arguments (degrees of freedom, noncentrality and size). The range of values seems to depend on where I set that noncentrality; in other words, changing it to 90 moves the top end of the distribution up to 120 or 130.
np.random.exponential(10,100)
is producing a range from 0 to 60 or so. Must one be clever and rescale the output algebraically, or is there a quicker fix?
If the input distribution has a bounded range, then rescaling algebraically is the simplest solution. However, your chosen example of a noncentral_chisquare is unbounded above. In that case you'll need to use something like an acceptance/rejection scheme to throw away values larger than some upper bound. That can be done either before or after algebraic rescaling, as long as you adjust the rescaling parameterization appropriately.
There are other alternatives if you're generating the target distribution directly via inversion sampling, but acceptance/rejection is going to be your best bet with library functions.
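Here is a small sketch of that acceptance/rejection plus rescaling combination, using the noncentral chi-square example from the question; the upper cut-off and target interval are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng()

def truncated_noncentral_chisq(df, nonc, size, upper):
    """Draw until `size` values fall below `upper` (rejection step)."""
    out = np.empty(0)
    while out.size < size:
        draws = rng.noncentral_chisquare(df, nonc, size)
        out = np.concatenate([out, draws[draws < upper]])
    return out[:size]

samples = truncated_noncentral_chisq(df=1, nonc=50, size=100, upper=150.0)

# Algebraic rescale of the now-bounded samples onto [-100, 100].
lo, hi = samples.min(), samples.max()
rescaled = -100.0 + 200.0 * (samples - lo) / (hi - lo)
print(rescaled.min(), rescaled.max())
```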

Find all roots of a nonlinear function

Let us assume I have a smooth, nonlinear function f: R^n -> R with a (known) maximum number of roots N. How can I find the roots efficiently? Right now I evaluate the function on a grid over a preselected area, refine the grid wherever the function is below a predefined threshold, and repeat that routine. This does not seem very efficient, though, because I have found it difficult to select the area correctly beforehand and to define the threshold accordingly.
There are several ways to go about this, of course. scipy contains what is widely regarded as the safest and most efficient method for finding a single root, provided you know an interval that brackets it:
scipy.optimize.brentq
To find roots starting from an estimate you can use:
scipy.optimize.fsolve
If you would rather build your own method, de Moivre's formula gives a root-finding recipe that is fairly quick in comparison to others: given a complex number z = r(cos θ + i sin θ), its n roots are
z_k = r^(1/n) * (cos((θ + 2πk)/n) + i sin((θ + 2πk)/n)),
where k varies over the integer values from 0 to n − 1.
Alternatively, you can square the function and use global optimization software to locate all the minima inside a domain, then pick those with a zero value. Stochastic multistart global optimization methods with clustering are well suited to this task.
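As a concrete one-dimensional illustration of the bracketing idea: scan a coarse grid for sign changes and let brentq polish each bracketed root. The test function and search interval below are placeholders; for f: R^n -> R you would instead square the function and run a multistart minimiser as suggested above.

```python
import numpy as np
from scipy.optimize import brentq

def f(x):
    return np.sin(x) - 0.3 * x          # example with several real roots

xs = np.linspace(-10.0, 10.0, 400)      # coarse scan of the search area
ys = f(xs)

roots = []
for a, b, fa, fb in zip(xs[:-1], xs[1:], ys[:-1], ys[1:]):
    if fa == 0.0:
        roots.append(a)
    elif fa * fb < 0.0:                 # sign change brackets a root
        roots.append(brentq(f, a, b))

print(np.round(roots, 6))
```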
