Solving multiple nonlinear equations (with power functions) - Python

I would like to solve equations of this kind:
a*85**b+c=100
a*90**b+c=66
a*92**b+c=33
I tried this:
import scipy.optimize

def fun(variables):
    (a, b, c) = variables
    eq0 = a*85**b + c - 100
    eq1 = a*90**b + c - 66
    eq2 = a*92**b + c - 33
    return [eq0, eq1, eq2]

result = scipy.optimize.fsolve(fun, (1, -1, 0))
print(result)
But I get ValueError: Integers to negative integer powers are not allowed.
Then I tried the equivalent:
from math import log

def fun(variables):
    (a, b, c) = variables
    eq0 = log(a) + b*log(85) - log(100 - c)
    eq1 = log(a) + b*log(90) - log(66 - c)
    eq2 = log(a) + b*log(92) - log(33 - c)
    return [eq0, eq1, eq2]

result = scipy.optimize.fsolve(fun, (1, -1, 0))
print(result)
I get a solution, but it is equal to the initial values (1, -1, 0).
Thus, when I test fun(result), I get values different from zero.
I have noticed that the same problem occurs with this example:
import scipy.optimize

def fun(variables):
    (x, y) = variables
    eqn_1 = x**2 + y - 4
    eqn_2 = x + y**2 + 3
    return [eqn_1, eqn_2]

result = scipy.optimize.fsolve(fun, (0.1, 1))
print(result)
fun(result)
Would anyone know how I could do this? Thank you.
PS: I posted here about sympy last week:
Resolution of multiple equations (with exponential)

When the initial condition is not well known, it is sometimes best to try other methods first. For small problems, simplex (Nelder-Mead) minimization is useful:
import numpy as np
from scipy.optimize import fsolve, minimize

def func(x):
    a, b, c = x
    eq0 = np.log(a) + b*np.log(85) - np.log(100 - c)
    eq1 = np.log(a) + b*np.log(90) - np.log(66 - c)
    eq2 = np.log(a) + b*np.log(92) - np.log(33 - c)
    return eq0**2 + eq1**2 + eq2**2

def func_vec(x):
    a, b, c = x
    eq0 = np.log(a) + b*np.log(85) - np.log(100 - c)
    eq1 = np.log(a) + b*np.log(90) - np.log(66 - c)
    eq2 = np.log(a) + b*np.log(92) - np.log(33 - c)
    return eq0, eq1, eq2

out = minimize(func, [1, 1, 0], method="Nelder-Mead", options={"maxfev": 100000})
print("roots:", out.x)
print("value at roots:", func_vec(out.x))
# roots: [ 7.87002460e+11 -1.07401055e-09 -7.87002456e+11]
# value at roots: (6.0964566728216596e-12, -1.2086331935279304e-11, 6.235012506294879e-12)
Note: I also tried [1,1,1] as an initial condition and found that it converged to the wrong solution. Further increasing maxfev from 1e5 to 1e7 allowed [1,1,1] to converge to the proper solution, but then perhaps there are better methods to solve this.
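As a side note on the original fsolve attempt: the ValueError most likely comes from the all-integer initial guess (1, -1, 0), which fsolve turns into an integer array, and NumPy refuses to raise integers to negative integer powers. A minimal sketch of a workaround is to use float literals and a float initial guess (whether fsolve then converges to the desired root still depends on the guess):
import scipy.optimize

def fun(variables):
    a, b, c = variables
    eq0 = a * 85.0**b + c - 100
    eq1 = a * 90.0**b + c - 66
    eq2 = a * 92.0**b + c - 33
    return [eq0, eq1, eq2]

# float initial guess, so the residual is evaluated with float arrays throughout
result = scipy.optimize.fsolve(fun, (1.0, -1.0, 0.0))
print(result, fun(result))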

Related

Using scipy.linalg.solve with symmetric coefficient matrix (assume_a='sym')

I am trying to solve a system of linear equations A * x = b for the unknown x using scipy's linalg.solve function. Here is an example that works fine:
import numpy as np
import scipy.linalg as linalg
A = np.array([[ 0.18666667,  0.06222222, -0.01777778],
              [ 0.01777778,  0.18666667,  0.01777778],
              [-0.01777778,  0.06222222,  0.18666667]])
b = np.array([0.26666667, -0.26666667, -0.4])
x = linalg.solve(A, b, assume_a='gen')
It results in x = [1.77194417, -1.4555256, -1.48892533], which is a correct solution. This can be verified by computing A.dot(x), which results in [0.26666667, -0.26666667, -0.4]. As this is the same as b, the solution is correct.
However, the matrix of coefficients A is symmetric, i.e., the values above and below the main diagonal are the same. If I understand the documentation correctly, for solving such a problem more efficiently, the solve function allows setting the argument assume_a='sym'. Unfortunately, using the following code (given the same A and b) results in an incorrect solution being found:
x = linalg.solve(A, b, assume_a='sym')
It results in x = [1.88811181, -1.88811181, -1.78321672], which is different from the solution above. Computing A.dot(x) results in [0.26666667, -0.35058274, -0.48391607]. As this is different from b, the solution seems to be incorrect.
I am wondering if there is a problem with my code, or if my understanding of symmetric matrices or of the expected result is simply wrong. Maybe the matrix must satisfy additional constraints to be used with assume_a='sym'?
I appreciate your answers. Thanks in advance!
I think that won't happen. I provide a short answer about it below.
Non-symmetric A
import numpy as np
import scipy.linalg as linalg

A = np.array([[ 0.18666667,  0.06222222, -0.01777778],
              [ 0.01777778,  0.18666667,  0.01777778],
              [-0.01777778,  0.06222222,  0.18666667]])
b = np.array([0.26666667, -0.26666667, -0.4])
x = linalg.solve(A, b, assume_a='gen')
np.allclose(A @ x, b)
Out:
True
This shows that the solver works well.
Symmetric A
# use your upper triangular part of A to build a symmetric matrix
A_symm = np.triu(A) + np.triu(A).T - np.diag(A.diagonal())
# solve the equations
x = linalg.solve(A_symm, b, assume_a='sym')
np.allclose(A_symm @ x, b)
Out:
True
It still works.
If you pass a non-symmetric matrix A to the solver and specify assume_a='sym', the solver will only use the upper triangular part of A, see below:
x = linalg.solve(A, b, assume_a='sym')
np.allclose(A @ x, b), x
Out:
(False, array([ 1.88811181, -1.88811181, -1.78321672]))
The result shows that the solver seems to work "wrong", but the returned x is the same as the result of linalg.solve(A_symm, b, assume_a='sym').
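Since assume_a='sym' silently uses only one triangle of A, a small defensive sketch (my addition, not part of the original answer) is to verify symmetry first and fall back to the general solver otherwise:
import numpy as np
import scipy.linalg as linalg

def solve_checked(A, b):
    # assume_a='sym' looks at only one triangle of A, so verify symmetry first
    if np.allclose(A, A.T):
        return linalg.solve(A, b, assume_a='sym')
    return linalg.solve(A, b, assume_a='gen')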

How do you feed Scipy's bvp only the BCs you have?

The only example/docs I can find are on the Scipy docs page.
To test, I'm looking at a time-independent Schrödinger equation in a 1d infinite potential well. This has a neat analytic solution found by solving the DE and inserting the boundary conditions ψ(0) = 0, ψ(L) = 0, and that the wavefunction normalizes to 1, but this question applies to solving any DE where the BCs we know aren't initial values.
You can solve it numerically with Scipy's solve_ivp by starting with ψ(0) = 0 and cheating to place ψ'(0) appropriately using the analytic solution. You can use a shooting method to find an appropriate E value, e.g. via the normalization condition above.
These are two sets of BCs: ψ(0) = 0 for both, normalization for both, and a second value of ψ for the analytic approach, and an initial value of ψ' for the ivp approach. Scipy's solve_bvp seems to offer a solution using the first set of BCs numerically (since we're cheating by inserting ψ'), but I can't get it working. This pseudocode describes the problem, and is how I expect the API to behave:
bcs = {0: (0, None), L: (0, None)} # Two BCs on ψ; no BCs on derivative
x_span = (0, L)
sol = solve_bvp(rhs, bcs, x_span)
In reality, the code looks something like this, and I can't get it to work:
def bc(ψ_a, ψ_b):
    return np.array([ψ_a[0], ψ_b[0]])

x_span = (0, L)
x_eval = np.linspace(x_span[0], x_span[1], int(1e5))
x_guess = np.array([0, L])
ψ_guess = np.array([[0, 1], [0, -1]])

res = solve_bvp(rhs_1d, bc, x_guess, ψ_guess)
I've no idea how to build the bc function and don't know why the guesses are set up the way they are. I'm also unsure how I can provide a guess for ψ without also inserting a guess for ψ' (the docs imply you can). Also of note, the docs show an example implying you can use solve_bvp for a normalization BC as well, but I'm not sure how to approach that (the example is too sparse).
The equivalent and working ivp code, for ref: (Compare to my solve_bvp pseudocode)
Python code:
ψ_0 = (0, sqrt(2/L) * n*π/L)
x_span = (0, L)
sol = solve_ivp(rhs_1d, x_span, ψ_0)
For the eigenvalue problem
-u''+V(x)u = c*u
with boundary conditions
u(0)=0=u(L)
and normalization
int(u(x)^2, x=0 to L)=1
set up the integral as a third component. With the eigenvalue as a parameter this gives four unknowns, allowing for four boundary conditions; the additional two are that the integral at 0 is zero and that the integral at L has the value 1.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_bvp

# some length
L = 10
# some potential function
def V(x): return 1 + (2*x - L)**2

# the ODE function
def odesys(x, y, p):
    u, v, S = y
    c = p[0]
    return [v, (V(x) - c)*u, u**2]

# the boundary conditions
def boundary(y0, yL, c):
    return [y0[0], yL[0], y0[2], yL[2] - 1]
With the initial guess you select, approximately, which eigenfunction/eigenvalue you will get.
n = 11
w = (np.pi*n)/L
x_init = np.linspace(0, L, 4*n + 1)
u_init = np.sin(w*x_init)
v_init = np.cos(w*x_init)*w
y_init = [u_init, v_init, x_init/L]
There is no need to put too many points into the guess, just enough that the structure of the first component is faithfully represented.
Then call the solver with the prepared data. Take notice that the default tolerance is 1e-3; if you want better, you have to allow for a finer subdivision. If everything runs fine, plot the solution.
res = solve_bvp(odesys, boundary, x_init, y_init, p=[w**2], max_nodes=10000, tol=1e-6)
print(res.message)
if res.success:
    x_disp = np.linspace(0, L, 3001)
    y_disp = res.sol(x_disp)
    plt.plot(x_disp, y_disp[0])
    plt.title(r"eigenfunction to eigenvalue $\lambda=%.6f$" % res.p[0])
    plt.grid()
    plt.show()

python: Initial condition in solving differential equation

I want to solve this differential equation:
y′′+2y′+2y=cos(2x) with initial conditions:
1. y(1)=2, y′(2)=0.5
2. y′(1)=1, y′(2)=0.8
3. y(1)=0, y(2)=1
and its code is:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def dU_dx(U, x):
    return [U[1], -2*U[1] - 2*U[0] + np.cos(2*x)]

U0 = [1, 0]
xs = np.linspace(0, 10, 200)
Us = odeint(dU_dx, U0, xs)
ys = Us[:, 0]

plt.xlabel("x")
plt.ylabel("y")
plt.title("Damped harmonic oscillator")
plt.plot(xs, ys)
How can I incorporate these conditions?
Your conditions are not initial conditions, as they give values at two different points. These are all boundary conditions.
def bc1(u1,u2): return [u1[0]-2.0,u2[1]-0.5]
def bc2(u1,u2): return [u1[1]-1.0,u2[1]-0.8]
def bc3(u1,u2): return [u1[0]-0.0,u2[0]-1.0]
You need a BVP solver to solve these boundary value problems.
You can either build your own solver using the shooting method, in case 1 as
from scipy.optimize import fsolve

def shoot(b): return odeint(dU_dx, [2, b], [1, 2])[-1, 1] - 0.5

b = fsolve(shoot, 0)[0]   # fsolve returns an array; take the scalar
T = np.linspace(1, 2, N)  # N: the desired number of output points
U = odeint(dU_dx, [2, b], T)
or use the secant method instead of scipy.optimize.fsolve; as the problem is linear, this should converge in one, at most two, steps.
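A minimal sketch of that secant iteration on the shooting residual (the generic helper name secant_root is my own; it reuses the shoot function from case 1 above):
def secant_root(g, b0=0.0, b1=1.0, tol=1e-12, maxit=20):
    # classic secant iteration; for a residual that is linear in b the
    # first update already lands on the root
    g0, g1 = g(b0), g(b1)
    for _ in range(maxit):
        b2 = b1 - g1 * (b1 - b0) / (g1 - g0)
        if abs(b2 - b1) < tol:
            return b2
        b0, g0, b1, g1 = b1, g1, b2, g(b2)
    return b1

b = secant_root(shoot)  # replaces the fsolve call above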
Or you can use the scipy.integrate.solve_bvp solver (which is perhaps newer than the question?). Your task is similar to the documented examples. Note that the argument order in the ODE function is switched in all the other solvers (t comes first); even in odeint you can use the option tfirst=True.
def dudx(x,u): return [u[1], np.cos(2*x)-2*(u[1]+u[0])]
Solutions generated with solve_bvp; the nodes are the automatically generated subdivision of the integration interval, and their density indicates how "non-flat" the ODE is in that region.
xplot = np.linspace(1, 2, 161)
for k, bc in enumerate([bc1, bc2, bc3]):
    res = solve_bvp(dudx, bc, [1.0, 2.0], [[0, 0], [0, 0]], tol=1e-5)
    print(res.message)
    l, = plt.plot(res.x, res.y[0], 'x')
    c = l.get_color()
    plt.plot(xplot, res.sol(xplot)[0], c=c, label="%d." % (k+1))
Solutions generated with the shooting method, using the initial values at x=0 as unknown parameters, to then obtain the solution trajectories on the interval [0,3].
x = np.linspace(0, 3, 301)
for k, bc in enumerate([bc1, bc2, bc3]):
    def shoot(u0):
        u = odeint(dudx, u0, [0, 1, 2], tfirst=True)
        return bc(u[1], u[2])
    u0 = fsolve(shoot, [0, 0])
    u = odeint(dudx, u0, x, tfirst=True)
    l, = plt.plot(x, u[:, 0], label="%d." % (k+1))
    c = l.get_color()
    plt.plot(x[::100], u[::100, 0], 'x', c=c)
You can use the scipy.integrate.ode class. It is similar to scipy.integrate.odeint, but it allows a jac parameter, which is the Jacobian df/dy of the right-hand side (here, the derivative of dU_dx with respect to U).
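A minimal sketch of how that might look for the ODE above (the integrator choice, step size, and initial values are illustrative assumptions; the Jacobian of the right-hand side with respect to U is constant here):
import numpy as np
from scipy.integrate import ode

def f(x, U):    # note: for ode the independent variable comes first
    return [U[1], -2*U[1] - 2*U[0] + np.cos(2*x)]

def jac(x, U):  # Jacobian d f / d U
    return [[0.0, 1.0], [-2.0, -2.0]]

solver = ode(f, jac).set_integrator("vode", method="bdf", with_jacobian=True)
solver.set_initial_value([1.0, 0.0], 0.0)

xs, ys = [0.0], [1.0]
while solver.successful() and solver.t < 10.0:
    solver.integrate(solver.t + 0.05)
    xs.append(solver.t)
    ys.append(solver.y[0])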

integrate.ode sets t0 values outside of my data range

I would like to solve the ODE dy/dt = -2y + data(t), between t=0..3, for y(t=0)=1.
I wrote the following code:
import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import interp1d

t = np.linspace(0, 3, 4)
data = [1, 2, 3, 4]
linear_interpolation = interp1d(t, data)

def func(y, t0):
    print('t0', t0)
    return -2*y + linear_interpolation(t0)

soln = odeint(func, 1, t)
When I run this code, I get several errors like this:
ValueError: A value in x_new is above the interpolation range.
odepack.error: Error occurred while calling the Python function named
func
My interpolation range is between 0.0 and 3.0.
Printing the value of t0 in func, I realized that t0 is actually sometimes above my interpolation range: 3.07634612585, 3.0203768998, 3.00638459329, ... That's why linear_interpolation(t0) raises the ValueError exceptions.
I have a few questions:
how does integrate.ode make t0 vary? Why does it make t0 exceed the upper bound (3.0) of my interpolation range?
in spite of these errors, integrate.ode returns an array which seems to contain correct values. So, should I just catch and ignore these errors? Should I ignore them whatever the differential equation(s), the t range and the initial condition(s)?
if I shouldn't ignore these errors, what is the best way to avoid them? 2 suggestions:
in interp1d, I could set bounds_error=False and fill_value=data[-1], since the t0 values outside of my interpolation range seem to be close to t[-1]:
linear_interpolation = interp1d(t, data, bounds_error=False, fill_value=data[-1])
But first I would like to be sure that with any other func and any other data the t0 will always remain close to t[-1]. For example, if integrate.ode chooses a t0 below my interpolation range, the fill_value would still be data[-1], which would not be correct. Maybe knowing how integrate.ode makes t0 vary would help me to be sure of that (see my first question).
in func, I could enclose the linear_interpolation call in a try/except block, and, when I catch a ValueError, I recall linear_interpolation but with t0 truncated:
def func(y, t0):
    try:
        interpolated_value = linear_interpolation(t0)
    except ValueError:
        interpolated_value = linear_interpolation(int(t0))  # truncate t0
    return -2*y + interpolated_value
At least this solution permits linear_interpolation to still raise an exception if integrate.ode makes t0 >= 4.0 or t0 <= -1.0. I can then be alerted of incoherent behavior. But it is not really readable, and the truncation seems a little arbitrary to me.
Maybe I'm just overthinking these errors. Please let me know.
It is normal for the odeint solver to evaluate your function at time values past the last requested time. Most ODE solvers work this way--they take internal time steps with sizes determined by their error control algorithm, and then use their own interpolation to evaluate the solution at the times requested by the user. Some solvers (e.g. the CVODE solver in the Sundials library) allow you to specify a hard bound on the time, beyond which the solver is not allowed to evaluate your equations, but odeint does not have such an option.
If you don't mind switching from scipy.integrate.odeint to scipy.integrate.ode, it looks like the "dopri5" and "dop853" solvers don't evaluate your function at times beyond the requested time. Two caveats:
The ode solvers use a different convention for the order of the arguments that define the differential equation. In the ode solvers, t is the first argument. (Yeah, I know, grumble, grumble...)
The "dopri5" and "dop853" solvers are for non-stiff systems. If your problem is stiff, they should still give correct answers, but they will do a lot more work than a stiff solver would do.
Here's a script that shows how to solve your example. To emphasize the change in the arguments, I renamed func to rhs.
import numpy as np
from scipy.integrate import ode
from scipy.interpolate import interp1d

t = np.linspace(0, 3, 4)
data = [1, 2, 3, 4]
linear_interpolation = interp1d(t, data)

def rhs(t, y):
    """The "right-hand side" of the differential equation."""
    # print('t', t)
    return -2*y + linear_interpolation(t)

# Initial condition
y0 = 1

solver = ode(rhs).set_integrator("dop853")
solver.set_initial_value(y0)

k = 0
soln = [y0]
while solver.successful() and solver.t < t[-1]:
    k += 1
    solver.integrate(t[k])
    soln.append(solver.y)

# Convert the list to a numpy array.
soln = np.array(soln)
The rest of this answer looks at how you could continue to use odeint.
If you are only interested in linear interpolation, you could simply extend your data linearly, using the last two points of the data. A simple way to extend the data array is to append the value 2*data[-1] - data[-2] to the end of the array, and do the same for the t array. If the last time step in t is small, this might not be a sufficiently long extension to avoid the problem, so in the following, I've used a more general extension.
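As a sketch, the simple one-point extension just described could look like this (using the same t and data as above):
import numpy as np
from scipy.interpolate import interp1d

t = np.linspace(0, 3, 4)
data = [1, 2, 3, 4]

# append one linearly extrapolated point to both arrays
extended_t = np.append(t, 2*t[-1] - t[-2])
extended_data = np.append(data, 2*data[-1] - data[-2])
linear_interpolation = interp1d(extended_t, extended_data)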
Example:
import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import interp1d

t = np.linspace(0, 3, 4)
data = [1, 2, 3, 4]

# Slope of the last segment.
m = (data[-1] - data[-2]) / (t[-1] - t[-2])
# Amount of time by which to extend the interpolation.
dt = 3.0
# Extended final time.
t_ext = t[-1] + dt
# Extended final data value.
data_ext = data[-1] + m*dt
# Extended arrays.
extended_t = np.append(t, t_ext)
extended_data = np.append(data, data_ext)
linear_interpolation = interp1d(extended_t, extended_data)

def func(y, t0):
    print('t0', t0)
    return -2*y + linear_interpolation(t0)

soln = odeint(func, 1, t)
If simply using the last two data points to extend the interpolator linearly is too crude, then you'll have to use some other method to extrapolate a little beyond the final t value given to odeint.
Another alternative is to include the final t value as an argument to func, and explicitly handle t values larger than it in func. Something like this, where extrapolation is something you'll have to figure out:
def func(y, t0, tmax):
    if t0 > tmax:
        f = -2*y + extrapolation(t0)
    else:
        f = -2*y + linear_interpolation(t0)
    return f

soln = odeint(func, 1, t, args=(t[-1],))

Scipy: Integration of Hermite function with quadrature weights

I want to integrate the product of two time- and frequency-shifted Hermite functions using scipy.integrate.quad.
However, since large-order polynomials are involved, numerical errors occur. Here's my code:
import numpy as np
import scipy.integrate
import scipy.special as sp
from math import pi

def makeFuncs():
    # Create the 0th, 4th, 8th, 12th and 16th order Hermite functions
    return [lambda t, n=n: np.exp(-0.5*t**2)*sp.hermite(n)(t) for n in np.arange(5)*4]

def ambgfun(funcs, i, k, tau, f):
    # Integrate f1(t)*f2(t+tau)*exp(-j*2*pi*f*t) over t from -inf to inf
    f1 = funcs[i]
    f2 = funcs[k]
    func = lambda t: np.real(f1(t) * f2(t+tau) * np.exp(-1j*(2*pi)*f*t))
    return scipy.integrate.quad(func, -np.inf, np.inf)

def main():
    f = makeFuncs()
    print("A00(0,0):", ambgfun(f, 0, 0, 0, 0))
    print("A01(0,0):", ambgfun(f, 0, 1, 0, 0))
    print("A34(0,0):", ambgfun(f, 3, 4, 0, 0))

if __name__ == '__main__':
    main()
The Hermite functions are orthogonal, thus the integrals for different orders should be equal to zero. However, they are not, as the output shows:
A00(0,0): (1.7724538509055159, 1.4202636805184462e-08)
A01(0,0): (8.465450562766819e-16, 8.862237123626351e-09)
A34(0,0): (-10.1875, 26.317246925873935)
How can I make this calculation more accurate? The Hermite function from scipy contains a weights variable which should be usable for Gaussian quadrature, as given in the documentation (http://docs.scipy.org/doc/scipy/reference/special.html#orthogonal-polynomials). However, I have not found a hint in the docs about how to use these weights.
I hope you can help :)
Thanks, Max
The answer is that the result you get is numerically as close to zero as it gets. I don't think it's really possible to get much better results if you work with floating point numbers --- you are facing a general problem in numerical integration.
Consider this:
import numpy as np
from scipy import integrate, special

f = lambda t: np.exp(-t**2) * special.eval_hermite(12, t) * special.eval_hermite(16, t)
abs_ig, abs_err = integrate.quad(lambda t: abs(f(t)), -np.inf, np.inf)
ig, err = integrate.quad(f, -np.inf, np.inf)

print(ig)
# -10.203125
print(abs_ig)
# 2.22488114805e+15
print(ig / abs_ig, err / abs_ig)
# -4.58591912155e-15 1.18053770382e-14
The value of the integral has therefore been computed to an accuracy, relative to the magnitude of the integrand, comparable to the floating point epsilon. Because of the rounding error in subtracting values of a large-magnitude oscillating integrand, it's not really possible to get better results.
So how to proceed? In my experience, what you'd need to do now is to approach the problem not numerically, but analytically. Importantly, the Fourier transform of Hermite polynomials times the weight function is known, so you can work in the Fourier space all the time here.
