Minimize a function in a given interval with scipy.optimize.brute - python

I am trying to minimize a function in a given interval; in my case the interval is [-pi/2, pi/2].
Here is what I wrote in my script:
ranges = slice(-pi/2, pi/2, pi/200)
res = optimize.brute(g, (ranges,))
with
def g(x):
    # z, a, b and c are global
    return (-(z+1) * (((a/4) * (3*cos(x/3) + cos(3*x/2)) +
                       (b/4) * (-3*sin(x/2) - 3*sin(3*x/2)))**2 +
                      ((a/4) * (sin(x/3) + sin(3*x/2)) + (b/4) *
                       (cos(x/2) + 3*cos(3*x/2)))**2) + 4*(c*cos(x/2))**2)
and the result res is
array([-3.14159265])
The problem I encounter while plotting my solutions is that some of the solutions of the minimization are outside the interval [-pi/2, pi/2]. Any help?

The "problem" is with the default "finishing function": brute has the option of supplying a finishing minimization function. It does this so that the brute force method can be used as a first guess, and then the result can be "polished" using a better minimization function.
If this function is set to None, nothing happens, which is likely what you want here. Unfortunately in this case, the default is set to fmin, which is the downhill simplex (Nelder-Mead) method, and this will simply ignore any range/grid specification. Thus, for a function like sin(0.5 * x), it will start at the lowest point that the brute function found (-pi/2) and continue from there, finding -pi to be the (closest-by) global minimum.
The solution is simple:
res = optimize.brute(g, (ranges,), finish=None)
will give what you want.
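For illustration, here is a minimal self-contained sketch of the difference, using the toy objective sin(0.5*x) mentioned above rather than the asker's g:
import numpy as np
from scipy import optimize

def h(x):
    x = np.squeeze(x)      # brute may pass a scalar or a length-1 array
    return np.sin(0.5 * x)

ranges = (slice(-np.pi/2, np.pi/2, np.pi/200),)

print(optimize.brute(h, ranges))               # about -3.14159: the default fmin polish walks off the grid
print(optimize.brute(h, ranges, finish=None))  # about -1.5708: stays inside [-pi/2, pi/2]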
Mandatory link to the scipy.optimize.brute documentation.

You can just write your objective function to return np.inf if the parameter it is passed is outside your desired range. So, for example:
def g(x, x_limit):
    if x > x_limit:
        return np.inf
    else:
        return (-(z+1) * (((a/4) * (3*cos(x/3) + cos(3*x/2)) +
                           (b/4) * (-3*sin(x/2) - 3*sin(3*x/2)))**2 +
                          ((a/4) * (sin(x/3) + sin(3*x/2)) + (b/4) *
                           (cos(x/2) + 3*cos(3*x/2)))**2) + 4*(c*cos(x/2))**2)
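To connect this with brute from the question: the limits can be passed through brute's args tuple, and it may be worth penalizing both ends of the interval. A sketch of that wiring (g_bounded and the two-sided check are illustrative additions; g is the asker's original one-argument objective):
import numpy as np
from scipy import optimize

def g_bounded(x, lo, hi):
    x = float(np.squeeze(x))   # brute/fmin pass a length-1 array here
    if not (lo <= x <= hi):
        return np.inf          # infinite penalty outside [lo, hi]
    return g(x)                # the original objective from the question

ranges = (slice(-np.pi/2, np.pi/2, np.pi/200),)
# with the default finish=fmin, the np.inf wall keeps the polishing step inside the interval
res = optimize.brute(g_bounded, ranges, args=(-np.pi/2, np.pi/2))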


evalf and subs in sympy on single variable expression returns expression instead of expected float value

I'm new to sympy and I'm trying to use it to get the values of higher order Greeks of options (basically higher order derivatives). My goal is to do a Taylor series expansion. The function in question is the first derivative.
f(x) = N(d1)
N(d1) is the P(X <= d1) of a standard normal distribution. d1 in turn is another function of x (x in this case is the price of the stock, for anybody who's interested).
d1 = (np.log(x/100) + (0.01 + 0.5*0.11**2)*0.5)/(0.11*np.sqrt(0.5))
As you can see, d1 is a function of only x. This is what I have tried so far.
import sympy as sp
from math import pi
from sympy.stats import Normal,P
x = sp.symbols('x')
u = (sp.log(x/100) + (0.01 + 0.5*0.11**2)*0.5)/(0.11*np.sqrt(0.5))
N = Normal('N',0,1)
f = sp.simplify(P(N <= u))
print(f.evalf(subs={x:100})) # This should be 0.5155
f1 = sp.simplify(sp.diff(f,x))
f1.evalf(subs={x:100}) # This should also return a float value
The last line of code, however, returns an expression, not a float value as I expected (as happened with f). I feel like I'm making a very simple mistake but I can't figure out why. I'd appreciate any help.
Thanks.
If you define x with positive=True (which is implied by the log in the definition of u, assuming u is real, which in turn is implied by the definition of f), it looks like you get almost the expected result. (Running f1.subs({x:100}) in the version without the positive-x assumption shows that the trouble comes from unevaluated polar_lift(0) terms.)
import sympy as sp
from sympy.stats import Normal, P
x = sp.symbols('x', positive=True)
u = (sp.log(x/100) + (0.01 + 0.5*0.11**2)*0.5)/(0.11*sp.sqrt(0.5)) # changed np to sp
N = Normal('N',0,1)
f = sp.simplify(P(N <= u))
print(f.evalf(subs={x:100})) # 0.541087287864516
f1 = sp.simplify(sp.diff(f,x))
print(f1.evalf(subs={x:100})) # 0.0510177033783834
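As a quick numerical cross-check (an addition, not part of the original answer), lambdify can turn the symbolic derivative into a plain numeric function:
f1_num = sp.lambdify(x, f1)  # x and f1 as defined in the snippet above
print(f1_num(100))           # should agree with the evalf result, roughly 0.0510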

Converting All Redundant Floats in a String to Integers

I'm using Sympy to make a custom function which converts complex square roots into their complex numbers. When I input -sqrt(-2 + 2*sqrt(3)*I) I get the expected output of -1 - sqrt(3)*I, however, inputting -sqrt(-2.0 + 2*sqrt(3)*I) (has a -2.0 instead of -2), I get the output -1.0 - 0.707106781186547*sqrt(6)*I.
I've tried converting the input expression to a string, removing the '.0 ', and then executing a piece of code to convert it back to the type sympy.core.add.Mul, which usually works with other strings, but the variable expression remains a string.
expression = str(input_expression).replace('.0 ', '')
exec(f'expression = {expression}')
How do I get rid of the redundant use of floats in my expression, while maintaining its type of sympy.core.add.Mul, so that my function will give a nice output?
P.S. The number 0.707106781186547 is an approximation of 1/sqrt(2). The fact that this number is present in the second output means that my function is running properly; it just isn't outputting in the desired form.
Edit:
For whatever reason, when I unindent the code and run it on its own, outside of a function, it gives the expected output. It's only when the code is in function form that it doesn't work.
Code as Requested:
from IPython.display import display, Math
from sympy.abc import *
from sympy import *

def imaginary_square_root(x, y):
    # calculates the square root of a complex number x + y*I
    return (sqrt((x + sqrt(x**2 + y**2)) / (2)) + I*((y*sqrt(2)) / (2*sqrt(x + sqrt(x**2 + y**2)))))

def find_imaginary_square_root(polynomial):  # 'polynomial' used because this function is meant to change expressions including variables such as 'x'
    polynomial = str(polynomial).replace('.0 ', ' ')
    exec(f'polynomial = {polynomial}')
    list_of_square_roots = []          # list of string instances of square roots and their contents
    list_of_square_root_indexes = []   # list of indexes at which the square roots can be found in the string
    polynomial_string = str(polynomial)
    temp_polynomial_string = polynomial_string  # string used and chopped up, hence the prefix 'temp_...'
    current_count = 0                  # counter variable used for two separate jobs
    while 'sqrt' in temp_polynomial_string:  # gets indexes of every instance of 'sqrt'
        list_of_square_root_indexes.append(temp_polynomial_string.index('sqrt') + current_count)
        temp_polynomial_string = temp_polynomial_string[list_of_square_root_indexes[-1] + 4:]
        current_count += list_of_square_root_indexes[-1] + 4
    for square_root_location in list_of_square_root_indexes:
        current_count = 1              # second job for 'current_count'
        for index, char in enumerate(polynomial_string[square_root_location + 5:]):
            if char == '(':
                current_count += 1
            elif char == ')':
                current_count -= 1
            if not current_count:      # when current_count == 0, the end of the sqrt contents has been reached
                list_of_square_roots.append(polynomial_string[square_root_location:square_root_location + index + 6])  # adds the square root with contents to a list
                break
    for individual_square_root in list_of_square_roots:
        if individual_square_root in str(polynomial):
            evaluate = individual_square_root[5:-1]
            x = re(evaluate)
            y = im(evaluate)
            polynomial = polynomial.replace(eval(individual_square_root), imaginary_square_root(x, y))  # replace here is Sympy's replace method for expressions
    return polynomial

poly = str(-sqrt(-2.0 + 2*sqrt(3)*I))
display(Math(latex(find_imaginary_square_root(poly))))
What exactly are you trying to accomplish? I still do not understand; you have a whole chunk of code. Try this out:
from sympy import *
def parse(expr): print(simplify(expr).evalf().nsimplify())
parse(-sqrt(-2.0 + 2*sqrt(3)*I))
-1 - sqrt(3)*I
I think everything you're fighting to do here can be made easier with what sympy has built in. First, assuming you're taking in user-given strings, I'd recommend using sympy's built-in parsers. Second, sympy will do this exact calculation for you, although with a caveat.
from sympy.parsing.sympy_parser import parse_expr

def simplify_string(polynomial_str):
    polynomial = parse_expr(polynomial_str)
    return polynomial.powsimp().evalf()
Usage examples:
>>> simplify_string('-sqrt(-2 + 2*sqrt(3)*I)')
-1.0 - 1.73205080756888*I
>>> simplify_string('sqrt(sqrt(1 + sqrt(2)*I) + I*sqrt(3 - I*sqrt(5)))')
1.54878147282944 + 0.78803305913*I
>>> simplify_string('sqrt((3 + sqrt(2 + sqrt(3)*I)*I)*x**2 + (3 + sqrt(5)*I)*x + I*4)')
(x**2*(3.0 + I*(2.0 + 1.73205080756888*I)**0.5) + x*(3.0 + 2.23606797749979*I) + 4.0*I)**0.5
The problem is that sympy will either work in floats or work exactly. If you want sympy to calculate the numerical value of a square root, it's going to display what could be an int as a float for clarity. You can't fix the typecasting, but a lot of the work you're trying to do by hand is built into sympy under the hood.
Edit
You can use .nsimplify() on the polynomial to bring things back to nice-looking numbers where possible, but you won't be able to have both evaluated roots and nice displays in the same form.
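For example (an addition, reusing simplify_string from above):
>>> simplify_string('-sqrt(-2.0 + 2*sqrt(3)*I)').nsimplify()
-1 - sqrt(3)*I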
The sqrtdenest batteries are already included. If you replace ints expressed as floats it will work:
>>> from sympy import sqrtdenest, sqrt, Float
>>> eq = -sqrt(-2.0 + 2*sqrt(3)*I)
Define a function that will extract Floats that are equal to ints
>>> intfloats = lambda x: dict([(i,int(i)) for i in x.atoms(Float) if i==int(i)])
Use it to transform eq and then apply the sqrtdenest
>>> eq.xreplace(intfloats(eq))
-sqrt(-2 + 2*sqrt(3)*I)
>>> sqrtdenest(_)
-1 + sqrt(3)
A problem with using nsimplify (or any mass simplification) is that it may do more than you want. It's best to use the most specific transformation possible, to limit the impact (and the work).
/!\ sqrtdenest appears to have a problem that I will report: it is dropping the I

Numpy: different values when calculating a sum of a sequence

I'm using scipy.integrate's odeint function to evaluate the time evolution of solutions to the equation
$$ \dot x = -\frac{f(x)}{g(x)}, $$
where $f$ and $g$ are both functions of $x$. $f,g$ are given by series of the form
$$ f(x) = x(1 + \sum_k b_k x^{k/2}) $$
$$ g(x) = 1 + \sum_k a_k (1 + k/2) x^{k/2}. $$
All positive initial values for $x$ should result in the solution blowing up in time, but they don't... well, not always.
The coefficients $a_k, b_k$ are long polynomials, where $b_k$ depends on $x$ in a certain way, and $a_k$ depends on several terms being held constant.
Depending on the way I compute $g(x)$, I get very different behavior.
The first way I tried is as follows. 'a' and 'b' are 1x8 and 1x9 numpy arrays. Note that in the function g(x, a), a is multiplied by gterms in the return statement and does not appear in the list comprehension.
def g(x, a):
    gterms = [(0.5*k + 1.) * x**(0.5*k) for k in range(len(a))]
    return 1. + np.sum(a*gterms)

def rhs(u, t):
    x = u
    a, b = An(), Bn(x)  # An() and Bn(x) are functions that return an array of coefficients
    return -f(x, b)/g(x, a)

t = np.linspace(.,.,.)
solution = odeint(rhs, <some initial value>, t)
The second way was this:
def g(x, a):
    gterms = [(0.5*k + 1.) * a[k] * x**(0.5*k) for k in range(len(a))]
    return 1. + np.sum(gterms)

def rhs(u, t):
    x = u
    a, b = An(), Bn(x)  # An() and Bn(x) are functions that return an array of coefficients
    return -f(x, b)/g(x, a)

t = np.linspace(.,.,.)
solution = odeint(rhs, <some initial value>, t)
Note the difference: using the first method, I stuck the array 'a' into the sum in the return statement, whereas using the second method, I stuck the values of 'a' into the list 'gterms' in the comprehension instead.
The first method gives the expected behavior: solutions blow up for positive x. However, the second method does not. It gives a bifurcation at some x0 > 0 that acts as a source: for initial conditions greater than x0, solutions blow up as expected, but for initial conditions less than x0, the solutions tend to 0 very slowly.
Something else of note: in the rhs function, if I change it from
def rhs(u, t):
    x = u
    ...
    return .
to
def rhs(u, t):
    x = u[0]
    ...
    return .
the exact same change occurs.
So my question is: what is the difference between the two methods I used? I can't tell for the life of me what is actually going on here. Sorry for being so verbose.
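A minimal comparison sketch of the two implementations (an addition; the np.ones coefficients are placeholders, not the real An/Bn output) suggests where they can diverge, namely when x arrives as a length-1 array, as it does from odeint:
import numpy as np

def g_first(x, a):
    gterms = [(0.5*k + 1.) * x**(0.5*k) for k in range(len(a))]
    return 1. + np.sum(a*gterms)

def g_second(x, a):
    gterms = [(0.5*k + 1.) * a[k] * x**(0.5*k) for k in range(len(a))]
    return 1. + np.sum(gterms)

a = np.ones(8)  # placeholder coefficients
print(g_first(2.0, a), g_second(2.0, a))  # scalar x: the two agree
print(g_first(np.array([2.0]), a), g_second(np.array([2.0]), a))
# length-1 x (what odeint passes when the state is 1-D): a*gterms broadcasts
# the (8,) coefficients against the (8, 1) term array, so the first version
# sums an 8x8 grid of products instead of 8 terms, and the results differ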

Writing a function for x * sin(3/x) in python

I have to write a function, s(x) = x * sin(3/x) in python that is capable of taking single values or vectors/arrays, but I'm having a little trouble handling the cases when x is zero (or has an element that's zero). This is what I have so far:
def s(x):
    result = zeros(size(x))
    for a in range(0, size(x)):
        if (x[a] == 0):
            result[a] = 0
        else:
            result[a] = float(x[a] * sin(3.0/x[a]))
    return result
Which...doesn't work for x = 0. And it's kinda messy. Even worse, I'm unable to use sympy's integrate function on it, or use it in my own simpson/trapezoidal rule code. Any ideas?
When I use integrate() on this function, I get the following error message: "Symbol" object does not support indexing.
This takes about 30 seconds per integrate call:
import sympy as sp
x = sp.Symbol('x')
int2 = sp.integrate(x*sp.sin(3./x),(x,0.000001,2)).evalf(8)
print int2
int1 = sp.integrate(x*sp.sin(3./x),(x,0,2)).evalf(8)
print int1
The results are:
1.0996940
-4.5*Si(zoo) + 8.1682775
Clearly you want to start the integration from a small positive number to avoid the problem at x = 0.
You can also assign x*sin(3./x) to a variable, e.g.:
s = x*sin(3./x)
int1 = sp.integrate(s, (x, 0.00001, 2))
My original answer using scipy to compute the integral:
import scipy.integrate
import math

def s(x):
    if abs(x) < 0.00001:
        return 0
    else:
        return x*math.sin(3.0/x)

s_exact = scipy.integrate.quad(s, 0, 2)
print s_exact
See the scipy docs for more integration options.
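For the array-handling part of the question, a vectorized variant along these lines (a sketch of one possible approach, not from the original answer) avoids the explicit loop:
import numpy as np

def s_vec(x):
    x = np.asarray(x, dtype=float)
    safe = np.where(x == 0, 1.0, x)  # dummy value where x == 0, to avoid dividing by zero
    return np.where(x == 0, 0.0, x * np.sin(3.0 / safe))

print(s_vec(0.0))              # 0.0
print(s_vec([0.0, 0.5, 2.0]))  # handles arrays and scalars alike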
If you want to use SymPy's integrate, you need a symbolic function. A wrong value at a point doesn't really matter for integration (at least mathematically), so you shouldn't worry about it.
It seems there is a bug in SymPy that gives an answer in terms of zoo at 0, because it isn't using limit correctly. You'll need to compute the limits manually. For example, the integral from 0 to 1:
In [14]: res = integrate(x*sin(3/x), x)
In [15]: ans = limit(res, x, 1) - limit(res, x, 0)
In [16]: ans
Out[16]: -9*pi/4 + 3*cos(3)/2 + sin(3)/2 + 9*Si(3)/2
In [17]: ans.evalf()
Out[17]: -0.164075835450162
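The same trick should reproduce the 0-to-2 value computed numerically in the earlier answers (an extrapolation of the session above, not verified output):
In [18]: (limit(res, x, 2) - limit(res, x, 0)).evalf()  # expect roughly 1.0996940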

Best way to find roots of a multidimensional, scalar function with SciPy

Suppose I have a function whose range is a scalar but whose domain is a vector. For example:
def func(x):
    return x[0] + 1 + x[1]**2
What's a good way to find a root of this function? scipy.optimize.fsolve and scipy.optimize.root expect func to return a vector (rather than a scalar), and scipy.optimize.newton only takes scalar arguments. I can redefine func as
def func(x):
    return [x[0] + 1 + x[1]**2, 0]
Then root and fsolve can find a root, but the zeros in the Jacobian mean it won't always do a good job. For example:
fsolve(func, array([0,2]))
=> array([-5, 2])
It'll only vary the first parameter, not the second, meaning that it often finds a zero that's far away.
EDIT: it looks like the following redefinition of func works better:
def func(x):
    fx = x[0] + 1 + x[1]**2
    return [fx, fx]
fsolve(func, array([0,5]))
=>array([-16.27342781, 3.90812331])
So it's now willing to change both parameters. The code is still kind of ugly though.
Have you tried minimizing the absolute value of your function using fmin?
For example:
>>> import scipy.optimize as op
>>> import numpy as np
>>> def func(x):
...     return x[0] + 1 + x[1]**2
...
>>> func1 = lambda x: np.abs(func(x))
>>> tmp = op.fmin(func1, [10000., 10000.])
>>> func(tmp)
0.0
>>> print tmp
[-8346.12025122    91.35162971]
Since -- for my problem -- I have a good initial guess and a non-crazy function, Newton's method works well. For a scalar, multidimensional function, the Newton update becomes:
$$ x_{n+1} = x_n - \frac{f(x_n)\,\nabla f(x_n)}{\lVert \nabla f(x_n) \rVert^2} $$
Here's a rough code example:
from numpy import array
from numpy.linalg import norm

def func(x):  # the function to find a root of
    return x[0] + 1 + x[1]**2

def dfunc(x):  # the gradient of that function
    return array([1, 2*x[1]])

def newtRoot(x0, func, dfunc):
    x = array(x0)
    for n in xrange(100):  # do at most 100 iterations
        f = func(x)
        df = dfunc(x)
        if abs(f) < 1e-6:  # exit if we're close enough to a root
            break
        x = x - df*f/norm(df)**2  # update guess
    return x
In use:
newtRoot([0, 2], func, dfunc)
=> array([-1.0052546 , 0.07248865])
func([-1.0052546 , 0.07248865])
=> 4.3788225025098715e-09
Not bad! Of course, this function is very rough, but you get the idea. It also won't work well for "tricky" functions or where you don't have a good starting guess. I think I'll use something like this but then fall back to fsolve or root if Newton's method doesn't converge.
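One way to wire up that fallback (a sketch of the idea mentioned above; robust_root and its tolerance are illustrative names, and the padded [f, 0] objective is the workaround from the question):
from scipy.optimize import fsolve

def robust_root(x0, func, dfunc, tol=1e-6):
    x = newtRoot(x0, func, dfunc)
    if abs(func(x)) >= tol:  # Newton didn't converge; fall back to fsolve
        x = fsolve(lambda v: [func(v), 0.], x)
    return x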
