Sympy: solve a differential equation - python

I want to find an elegant way of solving the following differential equation:
from sympy import *
init_printing()
M, m, t, r = symbols('M m t r')
phi = Function('phi')
eq = Eq(-M * phi(t).diff(t), Rational(3, 2) * m * r**2 * phi(t).diff(t) * phi(t).diff(t, t))
I assume that phi(t).diff(t) is not zero, so the common factor can be cancelled from both sides.
This is how I get to the solution:
# I assume d/dt(phi(t)) != 0
theta = symbols('theta')
eq = eq.subs({phi(t).diff(t, 2): theta})  # temporarily hide the second derivative
eq = eq.subs({phi(t).diff(t): 1})         # cancel the first derivative
eq = eq.subs({theta: phi(t).diff(t, 2)})  # restore the second derivative
dsolve(eq, phi(t))
How do I solve this more elegantly?

Ideally dsolve() would be able to solve the equation directly, but it doesn't know how (it needs to learn that it can factor an equation and solve the factors independently). I opened an issue for it.
My only other suggestion is to divide phi' out directly:
eq = Eq(eq.lhs/phi(t).diff(t), eq.rhs/phi(t).diff(t))
You can also use
eq.xreplace({phi(t).diff(t): 1})
to replace the first derivative with 1 without modifying the second derivative (unlike subs, xreplace has no mathematical knowledge of what it is replacing; it just replaces expressions exactly).
And don't forget that phi(t) = C1 is also a solution (for when phi' does equal 0).
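Putting that together, a minimal sketch of the xreplace route (assuming eq and phi as defined in the question):
eq2 = eq.xreplace({phi(t).diff(t): 1})  # exact replacement; phi''(t) is left intact
dsolve(eq2, phi(t))  # Eq(phi(t), C1 + C2*t - M*t**2/(3*m*r**2))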

Finding roots to equation using a modulo-divided numpy.poly1d in python

I've created a polynomial object using numpy.poly1d and some arbitrary coefficients (a, b, c) so that I can find the roots of the equation ax^2 + bx + c = y0 at a given y0. In principle, that can be done fairly easily by using the roots attribute of the poly1d object.
The only issue is that the actual equation I am trying to solve is the one written above, but modulo-divided by 2π: I need to find x such that the polynomial, taken modulo 2π, equals y0 (i.e., find x for (a*x**2 + b*x + (c - y0)) mod 2π = 0).
However, it seems that I can't apply this modulo operator to a poly1d object.
Is there a way of doing that using NumPy?
Here are some lines of code:
import numpy as np

def x_to_y(x, a, b, c):
    return (a*x**2 + b*x + c) % (2*np.pi)

def y_to_x(y0, a, b, c):
    eq = np.poly1d([a, b, c]) % (2*np.pi)  # throws an error: can't apply % to a poly1d object
    return (eq - y0).roots
It seems that you can use np.mod instead of %.
The only problem being that np.mod returns an array in that case, not a poly1d object.
Actually I managed to bodge it by simply adding 2kπ to the zeroth-order term of the polynomial and sweeping over a few values of k until I get a root that lies within the correct bounds.
Not ideal but it works. Still open to clever ways of doing it though!
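For reference, a rough sketch of that bodge (the helper name, the k range, and the tolerance check are mine; adapt the bounds test to your own use case):
import numpy as np

def y_to_x(y0, a, b, c, k_max=10):
    # Solve (a*x**2 + b*x + c) mod 2*pi == y0 by trying
    # a*x**2 + b*x + (c - y0 - 2*pi*k) == 0 for a few integers k.
    for k in range(-k_max, k_max + 1):
        roots = np.poly1d([a, b, c - y0 - 2*np.pi*k]).roots
        for x in roots[np.isreal(roots)].real:
            if np.isclose((a*x**2 + b*x + c) % (2*np.pi), y0 % (2*np.pi)):
                return x
    return None  # no root found for the swept values of k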

How do I tell sympy that i^2 = -1?

I'm currently using sympy to check my algebra on some nasty equations involving second order derivatives and complex numbers.
import sympy
from sympy.abc import a, e, i, h, t, x, m, A
# define a wavefunction
Psi = A * sympy.exp(-a * ((m*x**2 /h)+i*t))
# take the first order time derivative
Psi_dt = sympy.diff(Psi, t)
# take the second order space derivative
Psi_d2x = sympy.diff(Psi, x, 2)
# write an expression for the energy potential (rearrange Schrödinger's equation)
V = 1/Psi * (i*h*Psi_dt + (((h**2)/2*m) * Psi_d2x))
# simplify the potential function
sympy.simplify(V)
Which yields this nice thing:
a*(-h*i**2 + m**2*(2*a*m*x**2 - h))
It would be really nice if sympy simplified i^2 to -1.
So how do I tell it that i represents the square root of -1?
On a related note, it would also be really nice to tell sympy that e is Euler's number, so that calling sympy.diff(e**x, x) gives e**x as output.
You need to use the SymPy built-ins, rather than treating those symbols as free variables. In particular:
from sympy import I, E
I is sqrt(-1); E is Euler's number.
Then use SymPy's complex-number methods (re, im, conjugate, Abs, and so on) to manipulate complex expressions as needed.
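For illustration, here is the question's code with the built-in constants swapped in (a sketch; the expression itself is kept exactly as in the question):
import sympy
from sympy import I, E
from sympy.abc import a, h, t, x, m, A

Psi = A * sympy.exp(-a * ((m*x**2 / h) + I*t))
Psi_dt = sympy.diff(Psi, t)
Psi_d2x = sympy.diff(Psi, x, 2)
V = 1/Psi * (I*h*Psi_dt + (((h**2)/2*m) * Psi_d2x))
sympy.simplify(V)    # I**2 now simplifies to -1 automatically
sympy.diff(E**x, x)  # exp(x), since E is Euler's number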

ODE with time-varying coefficients in scipy

I am evaluating a set of ODEs with time varying coefficients
def deriv(y, t, N, coefficients):
    S, I, R = y
    dSdt = -coefficients['beta'](t) * S * I / N
    dIdt = coefficients['beta'](t) * S * I / N - coefficients['gamma'] * I
    dRdt = coefficients['gamma'] * I
    return dSdt, dIdt, dRdt
In particular, I have 'beta' values in a pre-calculated array, of size equal to int(max(t)).
from scipy.integrate import odeint

def beta_f(t):
    return mybetas.iloc[int(t)]

coefficients = {'beta': beta_f, 'gamma': 0.1}

# Initial conditions vector
y0 = (S0, I0, R0)
# Integrate the SIR equations over the time grid, t.
ret = odeint(deriv, y0, t, args=(N, coefficients))
When I run odeint, it also evaluates the derivative at values beyond max(t), which raises an index-out-of-bounds error in beta_f.
How can I limit the evaluation span for odeint?
Since len(mybetas) == int(max(t)), you can get an out-of-bounds error even for values of t which are not beyond max(t).
For example, mybetas.iloc[int(max(t))] will give you the out-of-bounds error, even though int(max(t)) <= max(t) for positive values of t.
But to your point, odeint does indeed check some values outside of the domain of integration. I had to deal with a problem similar to yours just a few weeks ago, and the following two discussions on stackoverflow were really helpful:
integrate.ode sets t0 values outside of my data range
Solve ODEs with discontinuous input/forcing data
The second link explains why it might be computationally faster to solve the ODE with odeint over each individual integer time step one after the other in a for loop, instead of letting odeint deal with the discontinuities in your derivative caused by jumps in the values of your betas.
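A rough sketch of that stepwise approach (assuming deriv, y0, t, N, and mybetas from the question; beta is held constant on each unit interval):
import numpy as np
from scipy.integrate import odeint

states = [y0]
for k in range(int(max(t))):
    beta_k = mybetas.iloc[k]  # beta is constant on [k, k+1)
    coeffs_k = {'beta': lambda s, bk=beta_k: bk, 'gamma': 0.1}
    seg = odeint(deriv, states[-1], [k, k + 1], args=(N, coeffs_k))
    states.append(seg[-1])  # chain the final state into the next step
states = np.array(states)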
Otherwise, if this is appropriate for your study case, you can interpolate your betas, and let the function beta_f return interpolated values of beta. Of course, you will have to extend the interpolation domain slightly beyond your integration domain, since odeint might want to evaluate the derivative for some t larger than max(t).
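A sketch of that interpolation route, assuming mybetas is a pandas Series with one beta per integer time step; the clamped fill_value extends the domain beyond max(t) for the extra points odeint probes:
import numpy as np
from scipy.interpolate import interp1d

t_knots = np.arange(len(mybetas))
beta_f = interp1d(t_knots, np.asarray(mybetas), kind='linear',
                  bounds_error=False,
                  fill_value=(mybetas.iloc[0], mybetas.iloc[-1]))
coefficients = {'beta': beta_f, 'gamma': 0.1}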

Unknown error with self-defined function for approximation of an integral

I've defined the following function as a method of approximating an integral using Boole's Rule:
import numpy as np

def integrate_boole(f, l, r, N):
    h = (r - l) / N
    xN = np.linspace(l, r, N+1)
    fN = f(xN)
    return ((2*h)/45)*(7*fN[0] + 32*np.sum(fN[1:-2:2]) + 12*np.sum(fN[2:-3:4]) + 14*np.sum(fN[4:-5]) + 7*fN[-1])
I used the function to get the value of the integral for sin(x)dx between 0 and pi (where N=8) and assigned it to a variable sine_int.
The answer given was 1.3938101893248442
After working the original equation out by hand, I realised this answer was quite inaccurate.
The sums of fN are giving incorrect values, but I'm not sure why. For example, np.sum(fN[4:-5]) evaluates to 0 (with N=8 the slice is empty).
Is there a better way of coding the sums involved, or is there an error in my parameters that's causing the calculations to be inaccurate?
Thanks in advance.
EDIT
I should have made it clearer that this is supposed to be a composite version of the rule, i.e. approximating over N points where N is divisible by 4. So the typical 5 points with 4 intervals isn't going to cut it here, unfortunately. I would copy the equation I'm using in here, but I don't have an image of it and LaTeX isn't an option. It should be clear from the code after the return.
From a quick inspection, it looks like the term multiplying f(x_4) should be 32, not 14:
def integrate_boole(f, l, r, N):
    h = (r - l) / N
    xN = np.linspace(l, r, N+1)
    fN = f(xN)
    return ((2*h)/45)*(7*fN[0] + 32*np.sum(fN[1:-2:2]) +
                       12*np.sum(fN[2:-3:4]) + 32*np.sum(fN[4:-5]) + 7*fN[-1])
First, one of your coefficients was wrong, as pointed out by @nixon. Second, I think you do not really understand how Boole's rule works: it approximates the integral of a function using only 5 points, so terms like np.sum(fN[1:-2:2]) make no sense. You only need five points, which you can obtain with xN = np.linspace(l,r,5). Your h is simply the distance between two contiguous points, h = xN[1] - xN[0]. And then, easy peasy:
import numpy as np

def integrate_boole(f, l, r):
    xN = np.linspace(l, r, 5)
    h = xN[1] - xN[0]
    fN = f(xN)
    return ((2*h)/45)*(7*fN[0] + 32*fN[1] + 12*fN[2] + 32*fN[3] + 7*fN[4])

def f(x):
    return np.sin(x)

I = integrate_boole(f, 0, np.pi)
print(I)  # Outputs 1.99857...
I'm not sure what you're hoping your code does with respect to Boole's rule. Why are you summing over samples of the function (e.g. np.sum(fN[2:-3:4]))? Your N parameter also seems ill-defined; I'm not sure what it's supposed to represent. Maybe you're using another rule I'm not familiar with; I'll let you decide.
Regardless, here's an implementation of Boole's rule as Wikipedia defines it. Variables map to the Wikipedia version you linked:
def integ_boole(func, left, right):
    h = (right - left) / 4
    x1 = left
    x2 = left + h
    x3 = left + 2*h
    x4 = left + 3*h
    x5 = right  # or left + 4*h
    result = (2*h / 45) * (7*func(x1) + 32*func(x2) + 12*func(x3) + 32*func(x4) + 7*func(x5))
    return result
then, to test:
import numpy as np
print(integ_boole(np.sin, 0, np.pi))
outputs 1.9985707318238357, which is extremely close to the correct answer of 2.
HTH.
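Since the EDIT asks for a composite version over N subintervals with N divisible by 4, here is a hedged sketch of what the question's slicing was presumably aiming for; interior points shared by two adjacent panels pick up the coefficient 14 = 7 + 7:
import numpy as np

def integrate_boole_composite(f, l, r, N):
    # Composite Boole's rule over N subintervals; N must be divisible by 4.
    if N % 4 != 0:
        raise ValueError("N must be divisible by 4")
    h = (r - l) / N
    fN = f(np.linspace(l, r, N + 1))
    return (2*h/45) * (7*(fN[0] + fN[-1])
                       + 32*np.sum(fN[1::2])     # odd indices
                       + 12*np.sum(fN[2:-1:4])   # indices 2, 6, 10, ...
                       + 14*np.sum(fN[4:-1:4]))  # interior panel boundaries 4, 8, ...

print(integrate_boole_composite(np.sin, 0, np.pi, 8))  # very close to the exact value 2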

need to improve accuracy in fsolve to find multiple roots

I'm using this code to get the zeros of a nonlinear function. The function should certainly have either 1 or 3 zeros:
import numpy as np
import matplotlib.pylab as plt
from scipy.optimize import fsolve

a, b, c = 5, 10, 0

def func(x):
    return -(x + a) + b / (1 + np.exp(-(x + c)))

x = np.linspace(-10, 10, 1000)
print(fsolve(func, [-10, 0, 10]))
plt.plot(x, func(x))
plt.show()
In this case the code gives the 3 expected roots without any problem.
But with c = -1.5 the code misses a root, and with c = -3 it finds a non-existent root.
I want to calculate the roots for many different parameter combinations, so changing the initial guesses manually is not a practical solution.
I appreciate any solution, trick or advice.
What you need is an automatic way to obtain good initial estimates of the roots of the function. This is in general a difficult task; however, for univariate continuous functions it is rather simple. The idea is to note that (a) this class of functions can be approximated to arbitrary precision by a polynomial of suitably large order, and (b) there are efficient algorithms for finding (all) the roots of a polynomial. Fortunately, Numpy provides functions both for performing the polynomial approximation and for finding polynomial roots.
Let's consider a specific function
a, b, c = 5, 10, -1.5

def func(x):
    return -(x + a) + b / (1 + np.exp(-(x + c)))
The following code uses polyfit and poly1d to approximate func over the range of interest (-10<x<10) by a polynomial function f_poly of order 10.
x_range = np.linspace(-10, 10, 100)
y_range = func(x_range)
pfit = np.polyfit(x_range, y_range, 10)
f_poly = np.poly1d(pfit)
Plotting f_poly against func over this range shows that it is indeed a good approximation. Even greater accuracy can be obtained by increasing the order. However, there is no point in pursuing extreme accuracy in the polynomial approximation, since we are only looking for approximate estimates of the roots that will later be refined by fsolve.
The roots of the polynomial approximation can be simply obtained as
roots = np.roots(pfit)
roots
array([-10.4551+1.4893j, -10.4551-1.4893j,  11.0027+0.j,
         8.6679+2.482j ,   8.6679-2.482j ,  -5.7568+3.2928j,
        -5.7568-3.2928j,  -4.9269+0.j,       4.7486+0.j,      2.9158+0.j])
As expected, Numpy returns 10 complex roots. However, we are only interested for real roots within the interval [-10,10]. These can be extracted as follows:
x0 = roots[(roots.imag == 0) & (roots.real > -10) & (roots.real < 10)].real
x0
array([-4.9269, 4.7486, 2.9158])
Array x0 can serve as the initialization for fsolve:
fsolve(func, x0)
array([-4.9848, 4.5462, 2.7192])
Remark: The pychebfun package provides a function that directly gives all the roots of a function within an interval. It is also based on polynomial approximation, but uses a more sophisticated (and more efficient) approach. It automatically chooses the polynomial order of the approximation (no user input), and the polynomial roots are practically equal to the true ones (no need to refine them via fsolve).
This simple code gives the same roots as those found by fsolve:
import pychebfun

f_cheb = pychebfun.Chebfun.from_function(func, domain=(-10, 10))
f_cheb.roots()
Between two stationary points (i.e., points where df/dx = 0) the function is monotonic, so each such interval contains one root or none. In your case it is possible to calculate the two stationary points analytically:
[-c + log(1/(b - sqrt(b*(b - 4)) - 2)) + log(2),
-c + log(1/(b + sqrt(b*(b - 4)) - 2)) + log(2)]
So you have three intervals in which to look for a zero. Using SymPy saves you from doing the calculations by hand; its sy.nsolve() can robustly find a zero within an interval:
import sympy as sy

a, b, c, x = sy.symbols("a, b, c, x", real=True)

# The function:
f = -(x + a) + b / (1 + sy.exp(-(x + c)))
df = f.diff(x)  # calculate f' = df/dx
xxs = sy.solve(df, x)  # solving f' = 0 gives two solutions

# numerical values:
pp = {a: 5, b: 10, c: .5}  # values for a, b, c
fpp = f.subs(pp)
xxs_pp = [xpr.subs(pp).evalf() for xpr in xxs]  # numerical stationary points
xxs_pp.sort()  # in ascending order

# resulting intervals:
xx_low = [-1e9, xxs_pp[0], xxs_pp[1]]
xx_hig = [xxs_pp[0], xxs_pp[1], 1e9]

# calculate roots for each interval:
xx0 = []
for xl_, xh_ in zip(xx_low, xx_hig):
    try:
        x0 = sy.nsolve(fpp, (xl_, xh_), solver="bisect")  # find a zero in [xl_, xh_]
    except ValueError:  # no solution found in this interval
        continue
    xx0.append(x0)

print("The zeros are:")
print(xx0)
sy.plot(fpp)  # plot the function
