Suppose I have a function whose range is a scalar but whose domain is a vector. For example:
def func(x):
    return x[0] + 1 + x[1]**2
What's a good way to find a root of this function? scipy.optimize.fsolve and scipy.optimize.root expect func to return a vector (rather than a scalar), and scipy.optimize.newton only takes scalar arguments. I can redefine func as
def func(x):
    return [x[0] + 1 + x[1]**2, 0]
Then root and fsolve can find a root, but the zeros in the Jacobian mean it won't always do a good job. For example:
fsolve(func, array([0,2]))
=> array([-5, 2])
It'll only vary the first parameter but not the second, meaning that it often finds a zero that's far away.
EDIT: it looks like the following redefinition of func works better:
def func(x):
    fx = x[0] + 1 + x[1]**2
    return [fx, fx]
fsolve(func, array([0,5]))
=> array([-16.27342781, 3.90812331])
So it's now willing to change both parameters. The code is still kind of ugly though.
Have you tried minimizing the absolute value of your function with fmin?
For example:
>>> import scipy.optimize as op
>>> import numpy as np
>>> def func(x):
...     return x[0] + 1 + x[1]**2
>>> func1 = lambda x: np.abs(func(x))
>>> tmp = op.fmin(func1, [10000., 10000.])
>>> func(tmp)
0.0
>>> print tmp
[-8346.12025122 91.35162971]
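The same idea can be written with the newer scipy.optimize.minimize interface (a sketch, not part of the original answer; the starting point is the same arbitrary one used above):
import numpy as np
import scipy.optimize as op

def func(x):
    return x[0] + 1 + x[1]**2

# minimize |func(x)| with Nelder-Mead, the same algorithm fmin uses
res = op.minimize(lambda x: np.abs(func(x)), x0=[10000., 10000.], method='Nelder-Mead')
print(res.x, func(res.x))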
Since, for my problem, I have a good initial guess and a non-crazy function, Newton's method works well. For a scalar-valued function of a vector, the Newton update becomes
x_{n+1} = x_n - f(x_n) * ∇f(x_n) / ||∇f(x_n)||^2
Here's a rough code example:
from numpy import array
from numpy.linalg import norm

def func(x): # the function to find a root of
    return x[0] + 1 + x[1]**2

def dfunc(x): # the gradient of that function
    return array([1, 2*x[1]])

def newtRoot(x0, func, dfunc):
    x = array(x0, dtype=float)
    for n in xrange(100): # do at most 100 iterations
        f = func(x)
        df = dfunc(x)
        if abs(f) < 1e-6: # exit if we're close enough
            break
        x = x - df*f/norm(df)**2 # Newton update for a scalar function of a vector
    return x
In use:
newtRoot([0, 2], func, dfunc)
=> array([-1.0052546 , 0.07248865])
func([-1.0052546 , 0.07248865])
=> 4.3788225025098715e-09
Not bad! Of course, this function is very rough, but you get the idea. It also won't work well for "tricky" functions or where you don't have a good starting guess. I think I'll use something like this but then fall back to fsolve or root if Newton's method doesn't converge.
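A rough sketch of that fallback, reusing newtRoot from above and the [f, f] trick from the question (the tolerance is arbitrary):
from scipy.optimize import fsolve

def robust_root(x0, func, dfunc, tol=1e-6):
    x = newtRoot(x0, func, dfunc)
    if abs(func(x)) < tol:  # Newton converged
        return x
    # otherwise fall back to fsolve on the [f, f] reformulation
    return fsolve(lambda v: [func(v), func(v)], x0)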
I need to define a function that checks if the input function is continuous at a point with sympy.
I searched the sympy documents with the keyword "continuity" and there is no existing function for that.
I think maybe I should consider doing it with limits, but I'm not sure how.
from sympy import sympify, SympifyError, Symbol

def check_continuity(f, var, a):
    try:
        f = sympify(f)
    except SympifyError:
        return "Invalid input"
    else:
        x1 = Symbol(var, positive=True)
        x2 = Symbol(var, negative=True)
        # I don't know what to do after this
I would suggest you use the function continuous_domain. This is defined in the calculus.util module.
Example usage:
>>> from sympy import Symbol, S, sin
>>> from sympy.calculus.util import continuous_domain
>>> x = Symbol("x")
>>> f = sin(x)/x
>>> continuous_domain(f, x, S.Reals)
Union(Interval.open(-oo, 0), Interval.open(0, oo))
This is documented in the SymPy docs here. You can also view the source code here.
Yes, you need to use the limits.
The formal definition of continuity at a point has three conditions that must be met.
A function f(x) is continuous at a point where x = c if
lim x -> c f(x) exists
f(c) exists (That is, c is in the domain of f.)
lim x -> c f(x) = f(c)
SymPy can compute symbolic limits with the limit function.
>>> limit(sin(x)/x, x, 0)
1
See: https://docs.sympy.org/latest/tutorial/calculus.html#limits
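Putting those three conditions together, a rough sketch of a checker (my own, not an official SymPy routine; unlike the question's version it takes a Symbol rather than a variable name) could look like this:
from sympy import Symbol, limit, sin

x = Symbol("x")

def check_continuity(f, x, a):
    # condition 2: f(a) must exist and be finite (a is in the domain of f)
    value = f.subs(x, a)
    if value.is_finite is not True:
        return False
    # condition 1: the one-sided limits must exist and agree
    left = limit(f, x, a, dir='-')
    right = limit(f, x, a, dir='+')
    if left != right:
        return False
    # condition 3: the limit must equal the value of the function
    return bool(left == value)

print(check_continuity(sin(x)/x, x, 0))   # False: x = 0 is not in the domain
print(check_continuity(sin(x)/x, x, 1))   # True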
Here is a simpler way to check whether a function is continuous at a specific value:
import sympy as sp
x = sp.Symbol("x")
f = 1/x
value = 0
def checkifcontinus(func, x, symbol):
    return sp.limit(func, symbol, x).is_real
print(checkifcontinus(f,value,x))
The output of this code will be False.
I'm using scipy.integrate's odeint function to evaluate the time evolution of $x$, i.e. to find solutions to the equation
$$ \dot x = -\frac{f(x)}{g(x)}, $$
where $f$ and $g$ are both functions of $x$. $f,g$ are given by series of the form
$$ f(x) = x(1 + \sum_k b_k x^{k/2}) $$
$$ g(x) = 1 + \sum_k a_k (1 + k/2) x^{k/2}. $$
All positive initial values for $x$ should result in the solution blowing up in time, but they don't... well, not always.
The coefficients $a_n, b_n$ are long polynomials, where $b_n$ is dependent on $x$ in a certain way, and $a_n$ is dependent on several terms being held constant.
Depending on the way I compute $g(x)$, I get very different behavior.
The first way I tried is as follows. 'a' and 'b' are 1x8 and 1x9 numpy arrays. Note that in the function g(x, a), a is multiplied by gterms in line 3, and does not appear in line 2.
def g(x, a):
    gterms = [(0.5*k + 1.) * x**(0.5*k) for k in range( len(a) )]
    return 1. + np.sum(a*gterms)

def rhs(u, t):
    x = u
    a, b = An(), Bn(x)  # An() and Bn(x) are functions that return an array of coefficients
    return -f(x, b)/g(x, a)
t = np.linspace(.,.,.)
solution = odeint(rhs, <some initial value>, t)
The second way was this:
def g(x, a):
    gterms = [(0.5*k + 1.) * a[k] * x**(0.5*k) for k in range( len(a) )]
    return 1. + np.sum(gterms)

def rhs(u, t):
    x = u
    a, b = An(), Bn(x)  # An() and Bn(x) are functions that return an array of coefficients
    return -f(x, b)/g(x, a)
t = np.linspace(.,.,.)
solution = odeint(rhs, <some initial value>, t)
Note the difference: using the first method, I stuck the array 'a' into the sum in line 3, whereas using the second method, I stuck the values of 'a' into the list 'gterms' in line 2 instead.
The first method gives the expected behavior: solutions blow up for positive x. However, the second method does not do this. The second method gives a bifurcation for some x0 > 0 that acts as a source. For initial conditions greater than x0, solutions blow up as expected, but initial conditions less than x0 have the solutions tending to 0 very slowly.
Something else of note: in the rhs function, if I change it from
def rhs(u, t):
    x = u
    ...
    return .
to
def rhs(u, t):
    x = u[0]
    ...
    return .
exactly the same change in behavior occurs.
So my question is: what is the difference between the two different methods I used? I can't tell for the life of me what is actually going on here. Sorry for being so verbose.
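For what it's worth, here is one concrete difference between the two definitions that is easy to check (my own observation, not part of the original post): odeint passes u to rhs as a length-1 array, so with x = u each term in gterms is itself a length-1 array. In the first version, 'a * gterms' then broadcasts to an (8, 8) array before the sum, while the second version sums exactly 8 terms; with x = u[0] both collapse to the same scalar sum, which matches the observation about changing x = u to x = u[0]. A small sketch:
import numpy as np

a = np.arange(1., 9.)                       # stand-in for the 8 coefficients
x = np.array([2.0])                         # odeint passes u as a length-1 array, so x = u looks like this

gterms = [(0.5*k + 1.) * x**(0.5*k) for k in range(len(a))]
print(np.shape(a * gterms))                 # (8, 8): 'a' broadcasts against the (8, 1) array of terms
print(np.sum(a * gterms))                   # first version: sums all 64 products

gterms = [(0.5*k + 1.) * a[k] * x**(0.5*k) for k in range(len(a))]
print(np.sum(gterms))                       # second version: sum of 8 terms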
I have to write a function, s(x) = x * sin(3/x), in Python that is capable of taking single values or vectors/arrays, but I'm having a little trouble handling the cases where x is zero (or has an element that's zero). This is what I have so far:
from numpy import zeros, size, sin

def s(x):
    result = zeros(size(x))
    for a in range(0, size(x)):
        if x[a] == 0:
            result[a] = 0
        else:
            result[a] = float(x[a] * sin(3.0/x[a]))
    return result
Which...doesn't work for x = 0. And it's kinda messy. Even worse, I'm unable to use sympy's integrate function on it, or use it in my own simpson/trapezoidal rule code. Any ideas?
When I use integrate() on this function, I get the following error message: "Symbol" object does not support indexing.
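As a side note on the vectorization half of the question (my own sketch, separate from the integration answers below): numpy can handle both scalars and arrays, and the zero entries, without an explicit loop:
import numpy as np

def s(x):
    x = np.asarray(x, dtype=float)          # accepts scalars, lists, or arrays
    with np.errstate(divide='ignore', invalid='ignore'):
        result = np.where(x == 0, 0.0, x * np.sin(3.0 / x))
    return result

print(s(0))                 # 0.0
print(s([0.0, 0.5, 2.0]))   # [ 0.         -0.1397...   1.9949...]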
This takes about 30 seconds per integrate call:
import sympy as sp
x = sp.Symbol('x')
int2 = sp.integrate(x*sp.sin(3./x),(x,0.000001,2)).evalf(8)
print int2
int1 = sp.integrate(x*sp.sin(3./x),(x,0,2)).evalf(8)
print int1
The results are:
1.0996940
-4.5*Si(zoo) + 8.1682775
Clearly you want to start the integration from a small positive number to avoid the problem at x = 0.
You can also assign x*sin(3./x) to a variable, e.g.:
s = x*sp.sin(3./x)
int1 = sp.integrate(s, (x, 0.00001, 2))
My original answer using scipy to compute the integral:
import scipy.integrate
import math
def s(x):
    if abs(x) < 0.00001:
        return 0
    else:
        return x*math.sin(3.0/x)
s_exact = scipy.integrate.quad(s, 0, 2)
print s_exact
See the scipy docs for more integration options.
If you want to use SymPy's integrate, you need a symbolic function. A wrong value at a point doesn't really matter for integration (at least mathematically), so you shouldn't worry about it.
It seems there is a bug in SymPy that gives an answer in terms of zoo at 0, because it isn't using limit correctly. You'll need to compute the limits manually. For example, the integral from 0 to 1:
In [14]: res = integrate(x*sin(3/x), x)
In [15]: ans = limit(res, x, 1) - limit(res, x, 0)
In [16]: ans
Out[16]:
  9⋅π   3⋅cos(3)   sin(3)   9⋅Si(3)
- ─── + ──────── + ────── + ───────
   4       2         2         2
In [17]: ans.evalf()
Out[17]: -0.164075835450162
I am trying to minimize a function in a given interval; in my case the interval is [-pi/2, pi/2].
Here is what I wrote in my script:
ranges = slice(-pi/2, pi/2, pi/200)
res = optimize.brute(g, (ranges,))
with
def g(x):
    # z, a, b and c are global
    return (-(z+1) * (((a/4) * (3*cos(x/3) + cos(3*x/2)) +
                       (b/4) * (-3*sin(x/2) - 3*sin(3*x/2)))**2 +
                      ((a/4) * (sin(x/3) + sin(3*x/2)) + (b/4) *
                       (cos(x/2) + 3*cos(3*x/2)))**2) + 4*(c*cos(x/2))**2)
and the result res is
array([-3.14159265])
The problem I encounter while plotting my solutions is that some of the solutions of the minimization are outside the interval [-pi/2, pi/2]. Any help?
The "problem" is with the default "finishing function": brute has the option of supplying a finishing minimization function. It does this so that the brute force method can be used as a first guess, and then the result can be "polished" using a better minimization function.
If this function is set to None, nothing happens, which is likely what you want here. Unfortunately in this case, the default is set to fmin, which is the downhill simplex (Nelder-Mead) method, and this will simply ignore any range/grid specification. Thus, for a function like sin(0.5 * x), it will start at the lowest point that the brute function found (-pi/2) and continue from there, finding -pi to be the (closest-by) global minimum.
The solution is simple:
res = optimize.brute(g, (ranges,), finish=None)
will give what you want.
Mandatory link to the scipy.optimize.brute documentation.
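A minimal sketch of the difference, using sin(0.5*x) as a stand-in for g (the original g needs the global constants, so this is just for illustration):
import numpy as np
from scipy import optimize

def g(x):
    return np.sin(0.5 * np.atleast_1d(x)[0])   # brute passes a length-1 array for a 1-D problem

ranges = (slice(-np.pi/2, np.pi/2, np.pi/200),)

print(optimize.brute(g, ranges))               # default finish=fmin polishes off the grid, ends near -pi
print(optimize.brute(g, ranges, finish=None))  # stays on the grid, i.e. near -pi/2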
You can just write your objective function to return np.inf if the parameter it is passed is outside your desired range. So, for example:
def g(x, x_limit):
    if x > x_limit:
        return np.inf
    else:
        return (-(z+1) * (((a/4) * (3*cos(x/3) + cos(3*x/2)) +
                           (b/4) * (-3*sin(x/2) - 3*sin(3*x/2)))**2 +
                          ((a/4) * (sin(x/3) + sin(3*x/2)) + (b/4) *
                           (cos(x/2) + 3*cos(3*x/2)))**2) + 4*(c*cos(x/2))**2)
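Note that as written this only rejects values above x_limit; the lower bound of the interval would need a second comparison. If you take this route with brute, the extra parameter can be passed through the args keyword (a usage sketch, with pi/2 standing in for the interval bound from the question):
res = optimize.brute(g, (ranges,), args=(np.pi/2,), finish=None)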
Suppose that I want to generate a function to be incorporated later in a set of equations to be solved with SciPy's fsolve function. I want to create a function like this:
x_1 + x_2 + ... + x_n = 1
in which the number of variables will be dependent on the number of components. For example, if I have 2 components:
f = lambda x: x[0] + x[1] - 1
for 3:
f = lambda x: x[0] + x[1] + x[2] - 1
I specify the components as an array within the arguments of the function to be called:
def my_func(components):
    for component in components:
        .....
        .....
    return f
I just can't find a way of doing this. It has to work this way, because this function and other functions need to be solved together with fsolve:
x0 = scipy.optimize.fsolve(f, [0, 0, 0, 0 ....])
Any help would be appreciated
Thanks!
Since I'm not sure which is the best way of doing this, I will fully explain what I'm trying to do:
I'm trying to generate these two functions (shown in the examples below) to be solved later.
So I want to create a function teste([list of components]) that returns these two equations (Psat(T) is a function I can call for each component, and P is a constant with value 760).
Example:
teste(['Benzene','Toluene'])
would return:
xBenzene + xToluene = 1
xBenzene*Psat('Benzene') + xToluene*Psat('Toluene') = 760
in the case of calling:
teste(['Benzene','Toluene','Cumene'])
it would return:
xBenzene + xToluene + xCumene = 1
xBenzene*Psat('Benzene') + xToluene*Psat('Toluene') + xCumene*Psat('Cumene') = 760
All these x values are not something I can calculate and turn into a list I can sum. They are variables that are created as a function of the number of components I have in the system...
Hope this helps to find the best way of doing this
A direct translation would be:
f = lambda *x: sum(x) - 1
But not sure if that's really what you want.
You can dynamically build a lambda with a string then parse it with the eval function like this:
a = [1, 2, 3]
s = "lambda x: "
s += " + ".join(["x[" + str(i) + "]" for i in xrange(0, 3)]) # Specify any range
s += " - 1"
print s
f = eval(s)
print f(a)
I would take advantage of numpy and do something like:
def teste(molecules):
    P = np.array([Psat(molecule) for molecule in molecules])
    f1 = lambda x: np.sum(x) - 1
    f2 = lambda x: np.dot(x, P) - 760
    return f1, f2
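The two callables can then be stacked into the single vector-valued function that fsolve expects (a sketch; it assumes a Psat function is available and uses an arbitrary starting guess):
from scipy.optimize import fsolve

f1, f2 = teste(['Benzene', 'Toluene'])
solution = fsolve(lambda x: [f1(x), f2(x)], [0.5, 0.5])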
Actually what you are trying to solve is a possibly underdetermined system of linear equations, of the form A.x = b. You can construct A and b as follows:
A = np.vstack((np.ones((len(molecules),)),
[Psat(molecule) for molecule in molecules]))
b = np.array([1, 760])
And you could then create a single lambda function returning a 2 element vector as:
return lambda x: np.dot(A, x) - b
But I really don't think that is the best approach to solving your equations: either you have a single solution, which you can get with np.linalg.solve(A, b), or you have a linear system with infinitely many solutions, in which case what you want to find is a basis of the solution space, not a single point in that space, which is what you will get from a numerical solver that takes a function as input.
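For two components the system is square, so it can be solved directly (a sketch; the Psat values here are made-up placeholders, not real vapour pressures):
import numpy as np

psat = {'Benzene': 1176.0, 'Toluene': 485.0}    # made-up values, just for illustration
molecules = ['Benzene', 'Toluene']

A = np.vstack((np.ones(len(molecules)),
               [psat[m] for m in molecules]))
b = np.array([1.0, 760.0])

x = np.linalg.solve(A, b)
print(x)   # mole fractions satisfying both equations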
If you really want to define a function by building it up iteratively, you can. I can't think of any situation where this would be the best answer, or even a reasonable one, but it's what you asked for, so:
def my_func(components):
    f = lambda x: -1
    for component in components:
        def wrap(f):
            # note: `component` is looked up only when the final lambda is called,
            # so every layer ends up using the last component in the list
            return lambda x: component * x[0] + f(x[1:])
        f = wrap(f)
    return f
Now:
>>> f = my_func([1, 2, 3])
>>> f([4,5,6])
44
Of course this will be no fun to debug. For example, look at the traceback from calling f([4,5]).
def make_constraint_function(components):
    def constraint(vector):
        return sum(vector[component] for component in components) - 1
    return constraint
You could do it with a lambda, but a named function may be more readable. deffed functions can do anything lambdas can and more. Make sure to give the function a good docstring, and use variable and function names appropriate for your program.
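Usage would look something like this (my own example values; the components here are index positions into the solution vector):
constraint = make_constraint_function([0, 1, 2])
print(constraint([0.2, 0.3, 0.5]))   # 0.0 -- the three fractions sum to 1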