I'm using scipy.integrate's odeint function to evaluate the time evolution of solutions to the equation
$$ \dot x = -\frac{f(x)}{g(x)}, $$
where $f$ and $g$ are both functions of $x$, given by series of the form
$$ f(x) = x(1 + \sum_k b_k x^{k/2}) $$
$$ g(x) = 1 + \sum_k a_k (1 + k/2) x^{k/2}. $$
All positive initial values of $x$ should result in the solution blowing up in time, but they don't... well, not always.
The coefficients $a_k, b_k$ are long polynomials, where the $b_k$ depend on $x$ in a certain way, and the $a_k$ depend on several quantities that are held constant.
Depending on the way I compute $g(x)$, I get very different behavior.
The first way I tried is as follows ('a' and 'b' are 1x8 and 1x9 NumPy arrays). Note that in the function g(x, a), the array a multiplies gterms inside np.sum in the return statement, and does not appear in the list comprehension.
def g(x, a):
    gterms = [(0.5*k + 1.) * x**(0.5*k) for k in range(len(a))]
    return 1. + np.sum(a*gterms)

def rhs(u, t):
    x = u
    a, b = An(), Bn(x)  # An() and Bn(x) are functions that return an array of coefficients
    return -f(x, b)/g(x, a)

t = np.linspace(.,.,.)
solution = odeint(rhs, <some initial value>, t)
The second way was this:
def g(x, a):
    gterms = [(0.5*k + 1.) * a[k] * x**(0.5*k) for k in range(len(a))]
    return 1. + np.sum(gterms)

def rhs(u, t):
    x = u
    a, b = An(), Bn(x)  # An() and Bn(x) are functions that return an array of coefficients
    return -f(x, b)/g(x, a)

t = np.linspace(.,.,.)
solution = odeint(rhs, <some initial value>, t)
Note the difference: in the first method, I multiply the array 'a' into the sum in the return statement, whereas in the second method, I multiply the values of 'a' into the list 'gterms' in the comprehension instead.
The first method gives the expected behavior: solutions blow up for positive x. The second method does not. Instead, it produces some x0 > 0 that acts as a source: for initial conditions greater than x0, solutions blow up as expected, but for initial conditions less than x0, the solutions tend to 0 very slowly.
Something else of note: if I change the rhs function from

def rhs(u, t):
    x = u
    ...
    return ...

to

def rhs(u, t):
    x = u[0]
    ...
    return ...

exactly the same change in behavior occurs.
So my question is: what is the difference between the two different methods I used? I can't tell for the life of me what is actually going on here. Sorry for being so verbose.
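One thing that may be worth checking (this is my own guess, not something established in the question): odeint passes the state to rhs as a length-1 array, so with x = u, each x**(0.5*k) is itself a length-1 array, and the two methods can then differ through broadcasting. A quick sketch:

import numpy as np

a = np.ones(8)
x = np.array([2.0])  # what rhs sees when x = u rather than x = u[0]

# First method: gterms is a list of length-1 arrays, so it converts to shape (8, 1),
# and multiplying by a (shape (8,)) broadcasts to an (8, 8) matrix of cross terms.
gterms = [(0.5*k + 1.) * x**(0.5*k) for k in range(len(a))]
print(np.shape(a * gterms))  # (8, 8): np.sum then adds 64 terms, not 8

# Second method: a[k] is folded in term by term, so only the 8 intended terms remain.
gterms2 = [(0.5*k + 1.) * a[k] * x**(0.5*k) for k in range(len(a))]
print(np.shape(gterms2))  # (8, 1)

With x = u[0], x is a scalar, both shapes collapse to (8,), and the two methods agree, which would be consistent with the u[0] observation above.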
def f(x):
    f = 'exp(x)-x-2'
    y = eval(f)
    print(y)
    return y

def bissection(f, f_line, f_2lines, a, b, epsilon1, epsilon2):
    x = a
    result_a = f(a)
    x = b
    result_b = f(b)
    if (f.evalf(a)*f.evalf(b) >= 0):
        print("Interval [a,b] does not contain a zero ")
        exit()
    zeta = min(epsilon1, epsilon2)/10
    x = a
    while (f_line(x) > 0):
        if (x < b or x > -b):
            x = x + zeta
        else:
            stop
    ak = a
    bk = b
    xk = (ak + bk)/2
    k = 0
    if (f(xk)*f(ak) < 0):
        ak = ak
        bk = xk
    if (f(xk)*f(bk) < 0):
        ak = xk
        bk = bk
    k = k + 1

from sympy import *
import math

x = Symbol('x')
f = exp(x) - x - 2
f_line = f.diff(x)
f_2lines = f_line.diff(x)
print("Derivative of f:", f_line)
print("2nd Derivative of f:", f_2lines)
a = int(input('Beginning of interval: '))
b = int(input('End of interval: '))
epsilon1 = input('1st tolerance: ')
epsilon2 = input('2nd tolerance: ')
bissection(f, f_line, f_2lines, a, b, epsilon1, epsilon2)
This program is an attempt to implement the Bisection Method. I've tried writing two functions:
The first one, f, is supposed to receive one of the extremes of the interval that may or may not contain a root (a or b) and return the value of the function evaluated at that point.
The second one, bissection, should receive the function, the function's first and second derivatives, the extremes of the interval (a, b), and two tolerances (epsilon1, epsilon2).
What I want to do is pass each value, a and b, one at a time, as arguments to the function f, which is supposed to return f(a) and f(b), that is, the values of the function at each of the points a and b.
Then, it should test two conditions:
1) Whether the function values at the extremes of the interval have opposite signs. If they don't, the method won't converge on this interval, and the program should terminate.
if (f.evalf(a)*f.evalf(b) >= 0):
    exit()
2)
while (f_line(x) > 0):     # while the first derivative of the function evaluated at x is positive
    if (x < b or x > -b):  # this should test whether x belongs to the interval [a,b]
        x = x + zeta       # if it does, x should receive x plus zeta
    else:
        stop
At the end of this loop, my objective was to determine whether the first derivative is strictly positive (I haven't done the negative case yet).
The problem: I'm getting the error
Traceback (most recent call last):
  File "bissec.py", line 96, in <module>
    bissection(f, f_line, f_2lines, a, b, epsilon1, epsilon2)
  File "bissec.py", line 41, in bissection
    result_a = f(a)
TypeError: 'Add' object is not callable
How can I properly call the function so that it returns the value of the function (in this case, f(x)=exp(x)-x-2), for every x needed? That is, how can I evaluate f(a) and f(b)?
OK, so I've figured out where your program was failing, and I've got four reasons why.
First of all, and the main topic of your question: if you want to evaluate a function f at a determined x value, let's say a, you need to use f.subs(x, a).evalf(), as described in the SymPy documentation. You used two different (and wrong) forms, f.evalf(a) and f_line(x); both need to be replaced by the correct syntax.
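For example, with the f from the question (this is just an illustration of the syntax):

>>> from sympy import exp, Symbol
>>> x = Symbol('x')
>>> f = exp(x) - x - 2
>>> f.subs(x, 1).evalf()
-0.281718171540955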
Second, if you want to stop a while loop, you should use the keyword break, not "stop" as written in your code.
Third, avoid using the same name for variables and functions. In your f function, you also used f as the name of a variable. In the bissection function, you passed f as a parameter and tried to call the f function; that will fail too. Instead, I've changed the f function to func_calc and applied the correct syntax from my first point in it.
Fourth, your epsilon1 and epsilon2 inputs were missing a float() conversion. I've added that.
I've also edited your code to follow good practices and applied PEP 8.
This code should fix this error that you're getting and a few others:
from sympy import *

def func_calc(func, x, val):
    """Evaluate a given function func, whose variable is x, at the value val."""
    return func.subs(x, val).evalf()

def bissection(x, f, f_line, f_2lines, a, b, epsilon1, epsilon2):
    """Applies the Bisection Method."""
    result_a = func_calc(f, x, a)
    result_b = func_calc(f, x, b)
    if result_a * result_b >= 0:
        print("Interval [a,b] does not contain a zero")
        exit()
    zeta = min(epsilon1, epsilon2) / 10
    x_val = a
    while func_calc(f_line, x, x_val) > 0:  # evaluate the derivative at the moving point x_val, not at a
        if a <= x_val <= b:                 # test whether x_val belongs to the interval [a, b]
            x_val = x_val + zeta
        else:
            break  # the keyword you're looking for is break, instead of "stop"
    print(x_val)
    ak = a
    bk = b
    xk = (ak + bk) / 2
    k = 0
    if func_calc(f, x, xk) * func_calc(f, x, ak) < 0:
        ak = ak
        bk = xk
    if func_calc(f, x, xk) * func_calc(f, x, bk) < 0:
        ak = xk
        bk = bk
    k = k + 1

def main():
    x = Symbol('x')
    f = exp(x) - x - 2
    f_line = f.diff(x)
    f_2lines = f_line.diff(x)
    print("Derivative of f:", f_line)
    print("2nd Derivative of f:", f_2lines)
    a = int(input('Beginning of interval: '))
    b = int(input('End of interval: '))
    epsilon1 = float(input('1st tolerance: '))
    epsilon2 = float(input('2nd tolerance: '))
    bissection(x, f, f_line, f_2lines, a, b, epsilon1, epsilon2)

if __name__ == '__main__':
    main()
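As a quick sanity check of the sign test (my own example, using the interval [1, 2]):

>>> from sympy import exp, Symbol
>>> x = Symbol('x')
>>> f = exp(x) - x - 2
>>> (f.subs(x, 1) * f.subs(x, 2)).evalf(3)
-0.955

The product is negative, so [1, 2] brackets a root and bissection gets past the initial test.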
Note: I am brand new to SymPy and trying to figure out how it works.
What I have now:
I do get the correct solutions, but it takes 35-50 seconds.
Goal:
To speed up the calculation by defining the symbolic equation once and then reusing it with different variables.
Set up:
I need to find the roots of a polynomial G(t) (6 roots) on every iteration of a loop (220 iterations total).
G(t) contains 6 other variables, which are calculated and known on every iteration.
These variables are different on every iteration.
First try (slow):
I simply put everything into one Python function, where I defined G(t) symbolically and solved for t.
It was running in around 35-40 seconds; function_G is called on every iteration.
import sympy as sp

def function_G(f1, f2, a, b, c, d):
    t = sp.symbols('t')
    left = t * ((a*t + b)**2 + f2**2 * (c*t + d)**2)**2
    right = (a*d - b*c) * (1 + f1**2 * t**2)**2 * (a*t + b) * (c*t + d)
    eq = sp.expand(left - right)
    roots = sp.solveset(eq, t)
    return roots
Then a person gave me a hint:
"You should only need to (symbolically) solve for the coefficients of the polynomial once, as a preprocessing step. After that, when processing each iteration, you simply calculate the polynomial coefficients, then solve for the roots."
When I asked for clarification, the person added:
"So I defined the function g(t) and then used sympy.expand to work out all parentheses/exponents, and then sympy.collect to collect terms by powers of t. Finally I used .coeff on the output of collect to get the coefficients to feed into numpy.roots."
Second try:
To follow the advice, I defined G(t) symbolically first and passed it to the function that runs the loop, along with its symbolic parameters. The function constructGt() is thus called only once.
def constructGt():
    t, a, b, c, d, f1, f2 = sp.symbols('t a b c d f1 f2')
    left = t * ((a*t + b)**2 + f2**2 * (c*t + d)**2)**2
    right = (a*d - b*c) * (1 + f1**2 * t**2)**2 * (a*t + b) * (c*t + d)
    gt = left - right  # solveset treats a plain expression as equal to zero
    expanded = sp.expand(gt)
    expanded = sp.collect(expanded, t)
    g_vars = {
        "t": t,  # the unknown has to be passed along too
        "a": a,
        "b": b,
        "c": c,
        "d": d,
        "f1": f1,
        "f2": f2
    }
    return expanded, g_vars
Then, on every iteration, I passed the expression and its parameters to get the roots:
# Variable values on one iteration:
# a = 0.00011713490404073987
# b = 0.00020253296124588926
# c = 4.235688216068313e-07
# d = 0.012262546040805029
# f1 = -0.012553203944721956
# f2 = 0.018529776776949003

def function_G(f1_, f2_, a_, b_, c_, d_, Gt, v):
    Gt = Gt.subs([(v['a'], a_), (v['b'], b_),
                  (v['c'], c_), (v['d'], d_),
                  (v['f1'], f1_), (v['f2'], f2_)])
    roots = sp.solveset(Gt, v['t'])  # t only exists inside constructGt, so it must come from v
    return roots
But it got even slower: around 56 seconds.
Question:
I do not understand what I am doing wrong. I also do not understand how this person is using .coeff() and then np.roots on the result.
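For what it's worth, here is my best guess at what the hint describes, as a sketch (not the hinter's actual code): extract the coefficients of each power of t once with sympy.Poly, compile them with lambdify, and then call np.roots per iteration with plain floats. The sample values are the ones listed above.

import numpy as np
import sympy as sp

t, a, b, c, d, f1, f2 = sp.symbols('t a b c d f1 f2')
left = t * ((a*t + b)**2 + f2**2 * (c*t + d)**2)**2
right = (a*d - b*c) * (1 + f1**2 * t**2)**2 * (a*t + b) * (c*t + d)

# Preprocessing, done once: coefficients of each power of t, highest degree first.
coeffs = sp.Poly(sp.expand(left - right), t).all_coeffs()
coeff_funcs = sp.lambdify((a, b, c, d, f1, f2), coeffs)

# Per iteration: plug in the numbers and let numpy find the roots of the polynomial.
vals = (0.00011713490404073987, 0.00020253296124588926, 4.235688216068313e-07,
        0.012262546040805029, -0.012553203944721956, 0.018529776776949003)
print(np.roots(coeff_funcs(*vals)))  # numeric roots, computed without any symbolic solving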
Even if your f1 and f2 are linear in a variable, you are working with a quartic polynomial, and the symbolic roots of that are very long expressions. If this is univariate, it would be better to just use the expression: solve it at some value of the constants where the solution is known, then use that root together with new constants that are relatively close to the old ones, and use nsolve to get the next root. If you are interested in more than one solution, you may have to "follow" each root separately with nsolve... but I think you will be much happier with the overall performance. Using real_roots is another option, especially if the expression is simply a polynomial in some variable.
Given that you are working with a quartic you should keep this in mind: the general solution is so long and complicated (except for very special cases) that it is not efficient to work with the general solution and substitute in values as they are known. It is very easy to solve for numerical values, however, and it is much faster:
First create the symbolic expression into which values will be substituted; assumptionless "vanilla" symbols are used:
from sympy import *

t, a, b, c, d, f1, f2 = symbols('t a b c d f1 f2')
left = t * ((a*t + b)**2 + f2**2 * (c*t + d)**2)**2
right = (a*d - b*c) * (1 + f1**2 * t**2)**2 * (a*t + b) * (c*t + d)
eq = left - right
Next, define a dictionary of replacements to substitute into the expression, noting that dict(x=1) creates {'x': 1}, and when this is used with subs, a vanilla Symbol will be created for "x":
reps = dict(
    a = 0.00011713490404073987,
    b = 0.00020253296124588926,
    c = 4.235688216068313e-07,
    d = 0.012262546040805029,
    f1 = -0.012553203944721956,
    f2 = 0.018529776776949003)
Evaluate the real roots of the expression:
from time import time
t0 = time(); [i.n(3) for i in real_roots(eq.subs(reps))]; '%s sec' % round(time() - t0)
[-11.5, -1.73, 8.86, 1.06e+8]
'3 sec'
Find all 6 roots of the expression but take only the real parts:
>>> roots(eq.subs(reps))
{-11.4594523988215: 1, -1.73129179415963: 1, 8.85927293271708: 1,
 106354884.436542: 1, -1.29328524826433 - 10.3034942999005*I: 1,
 -1.29328524826433 + 10.3034942999005*I: 1}
>>> [re(i).n(3) for i in _]
[-11.5, -1.73, 8.86, 1.06e+8, -1.29, -1.29]
Change one or more values and do it again:
reps.update(dict(a=2))
[i.n(3) for i in real_roots(eq.subs(reps))]
[-0.0784, -0.000101, 0.0782, 3.10e+16]
Update values in a loop:
>>> a = 1
>>> for i in range(3):
... a += 1
... reps.update(dict(a=a))
... a, real_roots(eq.subs(reps))[0].n(3)
...
(2, -0.0784)
(3, -0.0640)
(4, -0.0554)
Note: when using roots, the real roots will come first, in sorted order, and then the imaginary roots will come in conjugate pairs (but otherwise in no particular order).
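To make the "follow each root with nsolve" idea concrete, here is a minimal sketch (my own illustration, reusing eq, reps, and the symbol t from above, with a hypothetical 1% drift in a per iteration):

guess = -11.5  # a real root found with the previous constants
for step in range(3):
    reps.update(dict(a=reps['a'] * 1.01))    # constants change slightly each iteration
    guess = nsolve(eq.subs(reps), t, guess)  # refine, starting from the previous root
    print(guess)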
I cannot seem to get an output when I pass numbers to the function. I need to get the computed value and subtract it from the exact value. Is there something I am not getting right?
def f1(x):
    f1 = np.exp(x)
    return f1

def trapezoid(f, a, b, n):
    '''Computes the integral of functions using the trapezoid rule
    f = function of x
    a = upper limit of the function
    b = lower limit of the function
    N = number of divisions'''
    h = (b-a)/N
    xi = np.linspace(a, b, N+1)
    fi = f(xi)
    s = 0.0
    for i in range(1, N):
        s = s + fi[i]
    s = np.array((h/2)*(fi[0] + fi[N]) + h*s)
    print(s)
    return s

exactValue = np.full((20), math.exp(1)-1)
a = 0.0; b = 1.0  # integration interval [a,b]
computed = np.empty(20)
E = np.zeros(20)
exact = np.zeros(20)
N = 20

def convergence_tests(f, a, b, N):
    n = np.zeros(N, 1)
    E = np.zeros(N, 1)
    Exact = math.exp(1)-1
    for i in range(N):
        n[i] = 2^i
        computed[i] = trapezoid(f, a, b, n[i])
    E = abs(Exact - computed)
    print(E, computed)
    return E, computed
You have defined several functions, but your main program never calls any of them. In fact, your "parent" function convergence_tests is never called at all: it's defined at the bottom of the program with nothing after it.
I suggest that you use incremental programming: write a few lines; test those before you proceed to the next mini-task in your code. In the posting, you've written about 30 lines of active code, without realizing that virtually none of it actually executes. There may well be several other errors in this; you'll likely have a difficult time fixing all of them to get the expected output.
Start small and grow incrementally.
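In that spirit, a minimal first increment might look something like this (my own sketch, not a fix of every bug above): define the function, call it once, and compare against the exact value before adding the convergence loop.

import math
import numpy as np

def f1(x):
    return np.exp(x)

def trapezoid(f, a, b, n):
    '''Composite trapezoid rule for f on [a, b] with n subintervals.'''
    xi = np.linspace(a, b, n + 1)
    fi = f(xi)
    h = (b - a) / n
    return (h / 2) * (fi[0] + fi[-1]) + h * np.sum(fi[1:-1])

# The step the original program is missing: actually calling the functions.
computed = trapezoid(f1, 0.0, 1.0, 20)
exact = math.exp(1) - 1
print(computed, exact, abs(exact - computed))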
I am trying to solve an equation for the variable X in Python, where some of the variables in the equation (ABC, PQR) are outputs from a function, calculations. The problem is that in order to get the output from the function, I need to pass the variable X as an argument itself, so I am stuck in a loop. I tried two different approaches but didn't have any success. Is there a way I can solve the equation?
Any help/direction is really appreciated.
My first approach was to start with a small value and run a loop. I tried to use math.isclose(), but I receive a 'math bound error' once the values go out of range, and it runs into an infinite loop.
My second approach was to write out the complete expression and use scipy.optimize's fsolve(), but I am unable to understand how to implement it properly.
# function
def calculations(X, a, b, c):
    ABC = X*a*b + c
    XYZ = X*b*c + a
    PQR = X*a*c + b
    return ABC, XYZ, PQR

# ABC, PQR is the output from a function which uses X as input
# solve for X
func = 2*ABC + sqrt(PQR*ABC) + X*100*0.5

# Approach 1
X = 0.001
step = 0.001
while True:
    # function call
    Var_ABC, Var_XYZ, Var_PQR = calculations(X, a, b, c)
    func = 2*Var_ABC + math.sqrt(Var_PQR * Var_ABC) + X*100*0.5
    if (math.isclose(func, 0.0, rel_tol=0.1) == True):
        break
    else:
        X = X + step

# Approach 2
# Since I don't know the value of X, how can I get the output from the function and then solve it again?
func_output[0] = calculations(X, a, b, c)  # ABC
func_output[2] = calculations(X, a, b, c)  # PQR
func = 2*func_output[0] + math.sqrt(func_output[0] * func_output[2]) + X*100*0.5

from scipy.optimize import fsolve
desired_output_X = fsolve(func, [0.01, 1])
This may help you get started with fsolve:
import math
from scipy.optimize import fsolve

# function of X (calculations as defined in the question)
def func(X):
    func_output = calculations(X, a, b, c)
    func = 2*func_output[0] + math.sqrt(func_output[0] * func_output[2]) + X*100*0.5
    return func

# extra arguments for the calculations function; dummy values used: set them as desired
a, b, c = 1, 2, 6

# initiating X = 0.01 and solving for X
desired_output_X = fsolve(func, x0=0.01)
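fsolve returns an ndarray, so one way to sanity-check the answer (my addition) is to look at the residual at the returned root:

root = desired_output_X[0]
print(root, func(root))  # func(root) should be close to 0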
Suppose I have a function whose range is a scalar but whose domain is a vector. For example:
def func(x):
    return x[0] + 1 + x[1]**2
What's a good way to find a root of this function? scipy.optimize.fsolve and scipy.optimize.root expect func to return a vector (rather than a scalar), and scipy.optimize.newton only takes scalar arguments. I can redefine func as
def func(x):
    return [x[0] + 1 + x[1]**2, 0]
Then root and fsolve can find a root, but the zeros in the Jacobian mean it won't always do a good job. For example:
fsolve(func, array([0,2]))
=> array([-5, 2])
It'll only vary the first parameter but not the second, meaning that it often finds a zero that's far away.
EDIT: it looks like the following redefinition of func works better:
def func(x):
    fx = x[0] + 1 + x[1]**2
    return [fx, fx]
fsolve(func, array([0,5]))
=>array([-16.27342781, 3.90812331])
So it's now willing to change both parameters. The code is still kind of ugly though.
Have you tried the minimization of the absolute value of your function using fmin?
For example:
>>> import scipy.optimize as op
>>> import numpy as np
>>> def func(x):
...     return x[0] + 1 + x[1]**2
...
>>> func1 = lambda x: np.abs(func(x))
>>> tmp = op.fmin(func1, [10000., 10000.])
>>> func(tmp)
0.0
>>> print(tmp)
[-8346.12025122    91.35162971]
Since, for my problem, I have a good initial guess and a non-crazy function, Newton's method works well. For a scalar, multidimensional function, the Newton update becomes
$$ x_{n+1} = x_n - \frac{f(x_n)}{\|\nabla f(x_n)\|^2} \, \nabla f(x_n). $$
Here's a rough code example:
from numpy import array
from numpy.linalg import norm

def func(x):  # the function to find a root of
    return x[0] + 1 + x[1]**2

def dfunc(x):  # the gradient of that function
    return array([1, 2*x[1]])

def newtRoot(x0, func, dfunc):
    x = array(x0, dtype=float)
    for n in range(100):  # do at most 100 iterations
        f = func(x)
        df = dfunc(x)
        if abs(f) < 1e-6:  # exit the loop if we're close enough
            break
        x = x - df*f/norm(df)**2  # the update step from the formula above
    return x
In use:
newtRoot([0, 2], func, dfunc)
=> array([-1.0052546 ,  0.07248865])

func([-1.0052546 ,  0.07248865])
=> 4.3788225025098715e-09
Not bad! Of course, this function is very rough, but you get the idea. It also won't work well for "tricky" functions or when you don't have a good starting guess. I think I'll use something like this, but fall back to fsolve or root if Newton's method doesn't converge.