I am converting some Matlab code into Python using NumPy. Everything worked pretty smoothly, but recently I encountered the fminsearch function.
So, to cut it short: is there an easy way to make in python something like this:
banana = @(x)100*(x(2)-x(1)^2)^2+(1-x(1))^2;
[x,fval] = fminsearch(banana,[-1.2, 1])
which will return
x = 1.0000 1.0000
fval = 8.1777e-010
Up till now I have not found anything that looks similar in NumPy. The only similar thing I found is scipy.optimize.fmin. Based on its documentation, it will
Minimize a function using the downhill simplex algorithm.
But right now I cannot figure out how to write the above-mentioned Matlab code using this function.
It's just a straightforward conversion from Matlab syntax to Python syntax:
import scipy.optimize
banana = lambda x: 100*(x[1]-x[0]**2)**2+(1-x[0])**2
xopt = scipy.optimize.fmin(func=banana, x0=[-1.2,1])
with output:
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 85
Function evaluations: 159
array([ 1.00002202, 1.00004222])
fminsearch implements the Nelder-Mead method; see the reference section of the Matlab documentation: http://www.mathworks.com/help/matlab/ref/fminsearch.html.
To find its equivalent in scipy, you just need to check the docstrings of the methods provided in scipy.optimize. See http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin.html#scipy.optimize.fmin: fmin also implements the Nelder-Mead method.
The names do not always translate directly from Matlab to scipy and are sometimes even misleading. For example, Brent's method is implemented as fminbnd in Matlab but as optimize.brentq in scipy. So checking the docstrings is always a good idea.
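For newer versions of scipy, the same conversion can also be written with the general scipy.optimize.minimize interface, selecting Nelder-Mead explicitly; a minimal sketch, equivalent to the fmin call above:
import scipy.optimize

# Rosenbrock "banana" function from the question
banana = lambda x: 100*(x[1] - x[0]**2)**2 + (1 - x[0])**2

# method='Nelder-Mead' is the downhill simplex algorithm used by fminsearch/fmin
res = scipy.optimize.minimize(banana, x0=[-1.2, 1], method='Nelder-Mead')
print(res.x)    # approximately [1., 1.]
print(res.fun)  # close to 0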
Related
I have a Mathematica function whose output is a sum of Sinc (https://reference.wolfram.com/language/ref/Sinc.html) functions. I need to send this output to a coworker who uses Pyomo (https://www.pyomo.org/) for optimization. We have discovered that this optimization software doesn't understand Sinc even though regular Python does. I need to know if there is a way to change the output so that, instead of using Sinc, it returns Sin(x)/x.
I have looked for a solution in Mathworks, but the function seems very limited. I have also checked questions like https://mathematica.stackexchange.com/questions/19855/simplify-sinx-x-to-sincx/19856 and https://mathematica.stackexchange.com/questions/144899/simplify-is-excluding-indeterminate-expression-from-output.
However, I haven't found a way to solve the issue.
I have attempted to define sinc by hand as sin(x)/x, but this doesn't work due to the indeterminate form at 0.
This is how I define sinc:
sinc = Sinc[Pi #] & ;
sincB = (Sin[Pi #]/(Pi #)) & ;
This is where I use the data to construct an analytic expression. The upper one is the one I used in the past and the lower one is the one that I have constructed now.
shannonIP[v_, w_] =
  Total[#3*sinc[(v - #1)/dDelta]*sinc[(w - #2)/dDelta] & @@@
    interpolatedData]
shannonIPB[v_, w_] =
  Total[#3*sincB[(v - #1)/dDelta]*sincB[(w - #2)/dDelta] & @@@
    interpolatedData]
The upper code returns a sum of Sincs; the lower code returns a sum of sin(x)/x terms, but when it is evaluated at some points I run into 1/0 errors.
Is there a way to "fix" the output of the lower code or to transform the output of the upper one to an expression readable by Pyomo?
This figure is the function constructed using Sinc.
This figure is the function constructed using Sin[x]/(x+0.0000000000000001)
For arguments near zero, you should compute sinc(x) with its truncated Taylor series, 1 - x^2/6 + x^4/120.
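If the final expression is going to be evaluated on the Python/Pyomo side anyway, one option is a small numerically safe sinc helper there. A minimal sketch; the function name and the switch-over threshold are my own choices, not from the original post:
import math

def safe_sinc(x, eps=1e-4):
    # Near zero, use the truncated Taylor series 1 - x^2/6 + x^4/120
    # to avoid the 0/0 indeterminate form of sin(x)/x.
    if abs(x) < eps:
        return 1 - x**2/6 + x**4/120
    return math.sin(x) / x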
I need to perform a 2D integration (one dimension has an infinite bound). In MATLAB, I have done it with integral2:
int_x = integral2(fun, 0, inf, 0, a, 'abstol', 0, 'reltol', 1e-6);
In Python, I've tried scipy's dblquad:
int_x = scipy.integrate.dblquad(fun, 0, numpy.inf, lambda x: 0, lambda x: a, epsabs=0, epsrel=1e-6)
and have also tried using nested single quads. Unfortunately, both of the scipy options take ~80x longer than MATLAB's.
My question is: is there a different implementation of 2D integrals within Python that might be faster (I've tried "quadpy" without much benefit)? Alternatively, could I compile MatLab's integral2 function and call it from python without needing the MatLab runtime (and is that even kosher)?
Thanks in advance!
Brad
Update:
Turns out that I don't have the "reputation" to post an image of the equation, so please bear with the formatting: fun(N,t) = P(N) N^2 S(N,t), where P(N) is a lognormal probability distribution and S(N,t) is fairly convoluted but is an exponential in its simplest form and a hypergeometric function (truncated series) in its most complex form. N is integrated from 0 to infinity and t from 0 to pi.
First, profile. If the profile tells you that the time is spent in evaluations of fun, then your best bet is to either numba.jit it or rewrite it in Cython.
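For example, a minimal sketch of the numba route; the integrand here is only a stand-in with the same shape as fun(N, t) = P(N) N^2 S(N, t), not the actual function from the question:
import numpy as np
from numba import njit
from scipy.integrate import dblquad

@njit
def fun(t, N):
    # stand-in integrand: dblquad calls the integrand as fun(inner_var, outer_var)
    return np.exp(-N) * N**2 * np.sin(t)**2

# N from 0 to infinity (outer variable), t from 0 to pi (inner variable)
val, err = dblquad(fun, 0, np.inf, lambda N: 0, lambda N: np.pi,
                   epsabs=0, epsrel=1e-6)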
I created quadpy once because the scipy quadrature functions were too slow for me. If you can bring your integrand into one of the respective forms (e.g., the 2D plane with weight function exp(-x) or exp(-x^2)), you should take a look.
I need to use ctypes functions to reduce the running time of quad in Python. Here is my original question, but now I know what path I need to follow: the same steps as in this similar problem.
However, in my case the function that will be handled in the numerical integration calls another Python function, like this:
from sklearn.neighbors import KernelDensity
from scipy.integrate import quad
import numpy as np

funcA = lambda x: np.exp(kde_bad.score_samples([[x]]))
quad(funcA, 0, cut_off)
where cut_off is just a scalar that I decide in my code, and kde_bad is the kernel object created using KernelDensity.
So my question is: how do I need to specify the function in C? That is, the equivalent of this:
//testlib.c
double f(int n, double args[n])
{
    return args[0] - args[1] * args[2]; // corresponds to x0 - x1 * x2
}
Any input is appreciated!
You can do this using ctypes's callback function facilities.
That said, it's debatable whether or not you'll actually achieve any speed gains if your function calls something from Python. There are essentially two reasons that ctypes speeds up integration: (1) the integrand function itself is faster as compiled C than as Python bytecode, and (2) it avoids calling back to Python from the compiled (Fortran!) QUADPACK routines. What you're proposing completely eliminates the second of these sources of performance gains, and might even increase the penalty if you make such a call more than once. If, however, the large bulk of the execution time of your integrand is in its own code, rather than in these other Python functions that you need to call, then you might see some benefit.
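For illustration, a minimal sketch of such a callback; the integrand here is a placeholder, not your KDE-based funcA, and per the caveat above it may not end up any faster:
import ctypes
import numpy as np
from scipy import integrate, LowLevelCallable

def py_integrand(n, args):
    # args is a C array of doubles; args[0] is the integration variable
    x = args[0]
    return float(np.exp(-x**2))  # placeholder for the KDE-based integrand

# The signature must match one scipy accepts: double f(int n, double *args)
c_sig = ctypes.CFUNCTYPE(ctypes.c_double, ctypes.c_int,
                         ctypes.POINTER(ctypes.c_double))
c_func = c_sig(py_integrand)

val, err = integrate.quad(LowLevelCallable(c_func), 0, 2)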
As answered in the other question, quadpy is here to save the day with its vectorized computation capabilities.
I'm currently trying to solve numerically a minimization problem and I tried to use the optimization library available in SciPy.
My function and derivative are a bit too complicated to be presented here, but they are based on the following functions, the minimization of which does not work either:
import numpy as np

def func(x):
    return np.log(1 + np.abs(x))

def grad(x):
    return np.sign(x) / (1.0 + np.abs(x))
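The call looks roughly like this (a sketch: x0 = 10 as stated below, and passing grad as fprime is assumed):
from scipy.optimize import fmin_bfgs

x_opt = fmin_bfgs(func, x0=10.0, fprime=grad)
print(x_opt)  # returns the initial point, as described below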
When calling the fmin_bfgs function (and initializing the descent method to x=10), I get the following message:
Warning: Desired error not necessarily achieved due to precision loss.
Current function value: 2.397895
Iterations: 0
Function evaluations: 24
Gradient evaluations: 22
and the output is equal to 10 (i.e., the initial point). I suppose that this error may be caused by one of two problems:
The objective function is not convex: however I checked with other non-convex functions and the method gave me the right result.
The objective function is "very flat" when far from the minimum because of the log.
Are my suppositions true? Or does the problem come from anything else?
Whatever the error can be, what can I do to correct this? In particular, is there any other available minimization method that I could use?
Thanks in advance.
abs(x) is always somewhat dangerous as it is non-differentiable at zero. Most solvers expect problems to be smooth. Note that, because log is monotonically increasing, we can drop the log from your objective function and then drop the 1, so we are left with minimizing abs(x). Often this can be done better with the following reformulation.
Instead of min abs(x) use
min t
-t <= x <= t
Of course this requires a solver that can solve (linearly) constrained NLPs.
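For min abs(x) itself, this epigraph reformulation is just a linear program, so a minimal sketch with scipy.optimize.linprog (variables z = [x, t]) would be:
from scipy.optimize import linprog

c = [0.0, 1.0]                        # minimize t
A_ub = [[ 1.0, -1.0],                 #  x - t <= 0
        [-1.0, -1.0]]                 # -x - t <= 0
b_ub = [0.0, 0.0]
bounds = [(None, None), (0.0, None)]  # x free, t >= 0
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x)  # approximately [0., 0.], i.e. x = 0 minimizes abs(x)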
I'm having trouble optimizing a very simple function I'm using as a test case before moving on to something more complex. I've tried different optimization methods, giving the method a bound and even giving the exact solution as the initial guess.
Function I'm trying to optimize: f(x) = 1 / x - x
Here is my code:
import scipy.optimize

def testfun(x): return 1 / x - x

sol = scipy.optimize.minimize(testfun, 1).x
It returns a large number (3.2e+08) as the solution.
Am I using the optimization function incorrectly?
As Victor mentioned, the optimization function is working correctly. I was actually looking to solve f(x) = 0, which requires a root-finding method rather than an optimization routine.
for example:
scipy.optimize.root(testfun, 1) or scipy.optimize.newton(testfun, 1)
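A minimal, self-contained sketch of the root-finding version (starting from 0.5 rather than the exact root at x = 1):
import scipy.optimize

def testfun(x): return 1 / x - x

sol = scipy.optimize.root(testfun, 0.5)
print(sol.x)  # approximately [1.]

root = scipy.optimize.newton(testfun, 0.5)
print(root)   # approximately 1.0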