scipy.optimize gets trapped in local minima. What can I do?

from scipy.optimize import minimize

def f(X):
    x, y = X
    return x**4 - 3.5*x**3 - 2*x**2 + 12*x + y**2 - 2*y

bnds = ((1, 5), (0, 2))
min_test = minimize(f, [1, 0.1], bounds=bnds)
print(min_test.x)
My function f(X) has a local minimum at x = 2.557, y = 1 which I should be able to find.
The code shown above only gives a result where x = 1. I have tried different tolerances and all three methods: L-BFGS-B, TNC and SLSQP.
This is the thread I have been looking at so far:
Scipy.optimize: how to restrict argument values
How can I fix this?
I am using Spyder (Python 3.6).

You have just encountered the problem with local optimization: it depends strongly on the initial (start) values you pass in. If you supply [2, 1], it will find the correct minimum.
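A quick check, reusing the question's own f and bounds:

from scipy.optimize import minimize

min_test = minimize(f, [2, 1], bounds=((1, 5), (0, 2)))
print(min_test.x)  # approximately [2.557, 1.0]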
Common solutions are:
use your optimization in a loop with random starting points inside your boundaries
import numpy as np
from scipy.optimize import minimize

def f(X):
    x, y = X
    return x**4 - 3.5*x**3 - 2*x**2 + 12*x + y**2 - 2*y

bnds = ((1, 3), (0, 2))
for i in range(100):
    x_init = np.random.uniform(low=bnds[0][0], high=bnds[0][1])
    y_init = np.random.uniform(low=bnds[1][0], high=bnds[1][1])
    min_test = minimize(f, [x_init, y_init], bounds=bnds)
    print(min_test.x, min_test.fun)
use an algorithm that can escape local minima; I can recommend scipy's basinhopping()
use a global optimization algorithm and use its result as the initial value for a local algorithm. Recommendations are NLopt's DIRECT or the MADS algorithms (e.g. NOMAD). There is also another one in scipy, shgo, that I have not tried yet (a minimal sketch follows).
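A minimal sketch of the shgo option, applied to the question's f and bounds (shgo has been part of scipy.optimize since SciPy 1.2; this illustration is not from the original answer):

from scipy.optimize import shgo

# Samples the feasible box globally, then polishes candidate minima locally
res = shgo(f, bounds=((1, 5), (0, 2)))
print(res.x, res.fun)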

Try scipy.optimize.basinhopping. It simply repeats your minimization procedure multiple times from perturbed starting points and collects the local minima found; the smallest of them is reported as the global minimum. With the question's f and starting point:
from scipy.optimize import basinhopping

minimizer_kwargs = {"method": "L-BFGS-B"}
res = basinhopping(f, [1, 0.1], niter=100, minimizer_kwargs=minimizer_kwargs)
print(res.x, res.fun)

Related

Is there a way to retrieve the nodes automatically computed by the mpmath.quad integration routine?

I am trying to calculate an integral with mpmath.quad. I basically have to calculate three moments of a distribution, call it f (pseudocode):
integrate(f(x)/x, 0, infinity)
integrate(f(x)*x, 0, infinity)
integrate(f(x)*ln(x)/x, 0, infinity)
As far as I understand, the tanh-sinh algorithm from mpmath applies a coordinate transformation to the integration interval, uses it to find suitable nodes x_k and weights w_k, and returns (pseudocode):
sum(f(x_k)*w_k for k in 1..N)
with N the number of nodes. Since in my original problem the calculation takes a long time, I was hoping to reuse the values of f calculated during the first integration for the other integrals, i.e. to compute them from discrete samples with something like scipy.integrate.trapezoid or simpson. Using a Simulator class I simplified from an answer on this forum, I managed to cache the x_k and f(x_k) values, but not the weights. I checked that the x_k from the different integrations are the same.
Now, if I naively apply a standard quadrature from the scipy module to my samples, say trapezoid(fs, xs), the result differs from the one calculated by mpmath.quad, especially when the samples are few. While this does not surprise me, I would like to find out how to retrieve the weights mpmath.quad used in the first calculation, say the one for f(x)/x, so I can avoid running the time-consuming algorithm three times.
I cannot understand the documentation on this point. The mpmath documentation gives plenty of examples, but none concerning the retrieval of the calculated nodes, although it states several times that the nodes are "cached". Where are they? mpmath.quad only returns the integral result!
So I would like to know: is what I am trying to achieve sensible at all? And if it is, how could I accomplish the task?
Below is code that reproduces the behaviour. Any help is much appreciated.
import numpy as np
import mpmath as mp
import matplotlib.pyplot as plt
from scipy.integrate import trapezoid, simpson

class Simulator:
    """Wraps a function and caches every abscissa/value pair it is called with."""
    def __init__(self, func):
        self.func = func
        self.storex = np.array([])
        self.storef = np.array([])

    def simulate(self, x, *args):
        result = self.func(x, *args)
        self.storex = np.append(self.storex, x)
        self.storef = np.append(self.storef, result)
        return result

def lorentz(x):
    x = mp.mpf(x)
    return mp.mpf(1)/(mp.power(x - mp.mpf(1), 2) + mp.mpf(1))

# One Simulator per integral, so each cache only holds its own samples
simratio = Simulator(lorentz)
simproduct = Simulator(lorentz)
integralratio = mp.quad(lambda x: simratio.simulate(x)/x, [0, 1, mp.inf])
integralproduct = mp.quad(lambda x: simproduct.simulate(x)*x, [0, 1, mp.inf])
# Sort the cached samples by abscissa: the xs and f(x)s from the first integration
ratiox, ratioy = np.transpose(sorted(zip(simratio.storex, simratio.storef)))
integralproduct_trap = trapezoid(ratioy*ratiox, ratiox)
print("int[f(x)/x], quad: ", integralratio)
print("int[f(x)*x], quad: ", integralproduct)
print("int[f(x)*x], scipy.trapezoid: ", integralproduct_trap)

Differential evolution doesn't give back the global Minimum

I am trying to find the global minimum of a sum of squared differences which depends on five parameters x[0], x[1], x[2], x[3], x[4] coming from an affine transformation of data. Since I have a lot of data to compare, the differential_evolution approach was sped up with numba, as you can see in the code below. (More details on the code can be found here: How to speed up differential_evolution to find the minimum of Sum of Squared Errors with five variables)
My problem now is that I get different values for my parameters when running the same code several times. This problem is also described in Differential evolution algorithm different results for different runs, but I use the scipy.optimize package to import differential_evolution instead of mystic. I tried to use mystic instead, but then I get the following error message:
TypeError: CPUDispatcher(<function tempsum3 at 0x000002AA043999D0>) is not a Python function
My question now is whether there is any approach that really gives me back the global minimum of my function. An approach using the Powell method with the minimize command gives back worse values for my parameters than differential_evolution (it gets stuck in local minima even faster). And if there is no other way than differential_evolution, how can I be sure that I really have the global minimum at some point? And in case the mystic approach is expedient, how do I get it going?
I would be thankful for any kind of advice.
Here is my function to minimize, using numba to speed it up, and the mystic package, which gives me back the error message:
from math import floor
import matplotlib.pyplot as plt
import numpy as np
import mystic as my
from mystic.solvers import diffev
import numba as nb

# Two small test images; A1 is the "shifted" version of A2
w1, h1 = 5, 3    # A1
w2, h2 = 8, 5    # A2
hw1 = w1*h1
hw2 = w2*h2
A1 = np.ones((h1, w1))
A2 = np.ones((h2, w2))
for n1 in np.arange(2, 4):
    A1[1][n1] = 1000
for n2 in np.arange(5, 7):
    A2[3][n2] = 1000
    A2[4][n2] = 1000
# Norm the raw data
A1 = A1/np.max(A1)
A2 = A2/np.max(A2)

fig, axes = plt.subplots(1, 2)
axes[0].imshow(A2)
axes[0].set_title('original-A2')
axes[1].imshow(A1)
axes[1].set_title('shifted-A1')
plt.show()

@nb.njit
def xy_to_pixelvalue(x, y, dataArray):    # value of the normed pixel at (x, y)
    height = dataArray.shape[0]
    width = dataArray.shape[1]
    xcoord = floor((x + 1)/2*(width - 1))
    ycoord = floor((y + 1)/2*(height - 1))
    if -1 <= x <= 1 and -1 <= y <= 1:
        return dataArray[ycoord][xcoord]
    else:
        return 0

# Norming pixel coordinates
A1x = np.linspace(-1, 1, w1)
A1y = np.linspace(-1, 1, h1)
A2x = np.linspace(-1, 1, w2)
A2y = np.linspace(-1, 1, h2)
# Normed coordinates of A2 in a matrix
Ap2 = np.zeros((h2, w2, 2))
for i in np.arange(0, h2):
    for j in np.arange(0, w2):
        Ap2[i][j] = (A2x[j], A2y[i])
# Defining a vector with the coordinates of A2
d = []
cdata = Ap2[:, :, 0:2]
for i in np.arange(0, h2):
    for j in np.arange(0, w2):
        d.append((cdata[i][j][0], cdata[i][j][1]))
d = np.asarray(d)
coora2 = d.transpose()
coora2 = np.array(coora2, dtype=np.float64)

@nb.njit('(float64[::1],)')
def tempsum3(x):
    # Sum of squared differences between A2 and the affinely transformed A1
    tempsum = 0
    for l in np.arange(0, hw2):
        tempsum += (xy_to_pixelvalue(np.cos(np.radians(x[2]))*x[0]*coora2[0][l] - np.sin(np.radians(x[2]))*x[1]*coora2[1][l] + x[3],
                                     np.sin(np.radians(x[2]))*x[0]*coora2[0][l] + np.cos(np.radians(x[2]))*x[1]*coora2[1][l] + x[4],
                                     A1)
                    - xy_to_pixelvalue(coora2[0][l], coora2[1][l], A2))**2
    return tempsum

x0 = np.array([1, 1, -0.5, 0, 0])
bounds = [(0.1, 5), (0.1, 5), (-1, 2), (-4, 4), (-4, 4)]
result = diffev(tempsum3, x0, npop=5*15, bounds=bounds, ftol=1e-11,
                gtol=3500, maxiter=1024**3, maxfun=1024**3, full_output=True, scale=0.8)
print(result)
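Concerning the mystic TypeError, one possible workaround (an untested sketch, not from the original post) is to hide the numba CPUDispatcher behind an ordinary Python function, since mystic appears to accept only plain Python callables:

import numpy as np

# Hypothetical shim: mystic sees a normal Python function, numba still does the work
def tempsum3_python(x):
    return tempsum3(np.asarray(x, dtype=np.float64))

result = diffev(tempsum3_python, x0, npop=5*15, bounds=bounds, ftol=1e-11,
                gtol=3500, maxiter=1024**3, maxfun=1024**3, full_output=True, scale=0.8)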

Calculate Integral over array in Python with output array

I'd like to calculate an integral of the form
integral from 0 to infinity of g(x)*exp(-1j*w*x) dx
where I want the results as an array (to eventually plot them as a function of omega). I have
import numpy as np
import pylab as plt
from scipy import integrate

w = np.linspace(-5, 5, 1000)

def g(x):
    return np.exp(-2*x)

def complexexponential(x, w):
    return np.exp(-1j*w*x)

def integrand(x, w):
    return g(x)*complexexponential(x, w)

integrated = np.real(integrate.quad(integrand, 0, np.inf, args=(w)))
which gives me the error "supplied function does not return a valid float". I am not very familiar with Scipy's integrate functions. Many thanks for your help in advance!
Scipy's integrate.quad doesn't support vector output. If you loop over all your values of w and pass only one of them at a time as args, your code works fine.
It also doesn't handle complex integration, which you can get around using the procedure outlined in this answer.
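A minimal sketch combining both points, reusing the question's names (splitting the integrand into real and imaginary parts is one standard way around quad's real-only limitation):

import numpy as np
from scipy import integrate

w = np.linspace(-5, 5, 1000)

def integrand(x, w):
    return np.exp(-2*x)*np.exp(-1j*w*x)

# One quad call per value of w; quad integrates real-valued functions only,
# so handle the real and imaginary parts separately.
re = [integrate.quad(lambda x: np.real(integrand(x, wi)), 0, np.inf)[0] for wi in w]
im = [integrate.quad(lambda x: np.imag(integrand(x, wi)), 0, np.inf)[0] for wi in w]
result = np.array(re) + 1j*np.array(im)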

Definite integral over one variable in a function with two variables in Scipy

I am trying to calculate the definite integral of a function with multiple variables over just one variable in scipy.
This is roughly what my code looks like:
from scipy.integrate import quad
import numpy as np

def integrand(x, y):
    return x*np.exp(x/y)

quad(integrand, 1, 2, args=())
And it returns this TypeError:
TypeError: integrand() takes exactly 2 arguments (1 given)
However, it works if I put a number into args. But I don't want to, because I want y to remain a symbol and not a number. Does anyone know how this can be done?
EDIT: Sorry, I don't think I was clear. I want the end result to be a function of y, with y still being a symbol.
Thanks to mdurant, here's what works:
from sympy import integrate, Symbol, exp
from sympy.abc import x
y=Symbol('y')
f=x*exp(x/y)
integrate(f, (x, 1, 2))
Answer:
-(-y**2 + y)*exp(1/y) + (-y**2 + 2*y)*exp(2/y)
You probably just want the result to be a function of y, right?
from scipy.integrate import quad
import numpy as np

def integrand(x, y):
    return x*np.exp(x/y)

partial_int = lambda y: quad(integrand, 1, 2, args=(y,))
print(partial_int(5))
# (2.050684698584342, 2.2767173686148355e-14)
The best you can do is use functools.partial to bind whatever arguments you already have. But one fundamentally cannot numerically evaluate a definite integral if the entire domain isn't specified yet; in that case the result necessarily still contains symbolic parts, so the intermediate result isn't numerical.
(Assuming that you are talking about computing the definite integral over x given a specific, fixed value of y.)
You could use a lambda:
quad(lambda x: integrand(x, 10), 1, 2)
or functools.partial():
import functools
quad(functools.partial(integrand, y=10), 1, 2)
from scipy.integrate import quad
import numpy as np

def integrand(x, y):
    return x*np.exp(x/y)

# Vectorise the quad call over y (quad itself cannot take an array of y values);
# y = 0 is excluded, since exp(x/y) is undefined there
partial_int = np.vectorize(lambda y: quad(integrand, 1, 2, args=(y,))[0])
y = np.linspace(0.1, 10, 100)
partial_int(y)

Computing definite integrals in python

I'm trying to write a loop that calculates the value of a definite integral at each step. The function bigF is very complicated. To put it in simple terms, it integrates a bunch of terms with respect to s, from s=tn-(n/2) to s=tn+(n/2). After the integration, bigF still has a variable t. So you can say bigF(t) = integral(f(s,t)), where f(s,t) is the big mess of terms after integrate.integ. In the last line, I want to evaluate bigF(t) at t=tn after bigF computes the integral of f(s,t)
After running, I get the error global name 's' is not defined. But s was meant to be just a dummy variable in the integration, since I am computing a convolution. What do I need to do?
import numpy as np
import scipy.integrate as integ
import math

nt = 5001          # since (50-0)/.01 = 5000
dt = 0.01          # = H
H = 0.01
theta_n = np.ones(nt)
theta_n[1] = 0     # theta_o
omega_n = np.ones(nt)
omega_n[1] = -0.4  # omega_o
epsilon = 1e-6     # note: 10^(-6) means bitwise XOR in Python, not a power
eta = epsilon*10
t_o = 0

def bigF(t, n):
    # this line raises the reported NameError: s (like tn and omega) is not
    # defined here -- scipy has no symbolic dummy variables, and its numerical
    # routines expect a callable integrand, not an expression
    return integrate.integ((422.11/eta)*math.exp((5*(4*((eta*t - s - tn)**2)/eta**2) - 1)**(-1))*omega,
                           s, tn - (n/2), tn + (n/2))

for n in range(1, 4999):
    tn = t_o + n*dt
    theta_n[n + 1] = theta_n[n] + H*bigF(tn, n)
If you're doing a convolution, sounds like you want numpy.convolve.
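A minimal sketch of that suggestion (the kernel and grid here are illustrative stand-ins, not the question's actual integrand): on a uniform grid with step H, the convolution integral becomes a discrete convolution scaled by H.

import numpy as np

H = 0.01
t = np.arange(0, 50, H)
omega = np.ones_like(t)          # placeholder for omega_n
kernel = np.exp(-(t - 25)**2)    # hypothetical kernel f(s)
# integral of f(s)*omega(t - s) ds ~= H times the discrete convolution
conv = H*np.convolve(omega, kernel, mode='same')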
