sympy not ignoring unimportant decimals in exponential expression - python

I have code that calculates some mathematical equations, and when I look at the simplified results, SymPy cannot equate 2.0 with 2 inside a power. That is logical, since one is a float and the other an integer, but it was SymPy's decision to produce these two values, not mine.
Here is an expression from my results that SymPy does not simplify:
from sympy import *
x = symbols('x')
y = -exp(2.0*x) + exp(2*x)
print(simplify(y)) # output is -exp(2.0*x) + exp(2*x)
y = -exp(2*x) + exp(2*x)
print(simplify(y)) # output is 0
y = -2.0*x + 2*x
print(simplify(y)) # output is 0
y = -x**2.0 + x**2
print(simplify(y)) # output is -x**2.0 + x**2
Is there any way to work around this problem? I am looking for a way to make SymPy assume that everything other than symbols is a float, and to prevent it from deciding which values are floats and which are integers.
This problem has been asked before by Gerardo Suarez, but without a satisfactory answer.

There is another sympy function you can use called nsimplify. When I run your examples they all return zero:
from sympy import *
x = symbols("x")
y = -exp(2.0 * x) + exp(2 * x)
print(nsimplify(y)) # output is 0
y = -exp(2 * x) + exp(2 * x)
print(nsimplify(y)) # output is 0
y = -2.0 * x + 2 * x
print(nsimplify(y)) # output is 0
y = -(x ** 2.0) + x ** 2
print(nsimplify(y)) # output is 0
Update
As @Shoaib Mirzaei mentioned, you can also use the rational argument of the simplify() function, like this:
simplify(y, rational=True)
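For example, applied to the first expression above, this should also return zero:
from sympy import *
x = symbols('x')
y = -exp(2.0*x) + exp(2*x)
print(simplify(y, rational=True)) # output is 0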

Related

Access current time step in scipy.integrate.odeint within the function

Is there a way to access what the current time step is in scipy.integrate.odeint?
I am trying to solve a system of ODEs where the form of the ODE depends on whether or not a population will be depleted. Basically, I take from population x provided x doesn't go below a threshold. If the amount I need to take this time step would push x below that threshold, I take x only down to that point and take the rest from z.
I am trying to do this by checking how much I will take this time step, and then allocating between populations x and z in the DEs.
To do this I need to be able to access the step size within the ODE solver to calculate what will be taken this time step. I am using scipy.integrate.odeint - is there a way to access the time step within the function defining the odes?
Alternatively, can you access what the last time was in the solver? I know it won't necessarily be the next time step, but it's likely a good enough approximation for me if that is the best I can do. Or is there another option I've not thought of to do this?
The MWE below is not my actual system of equations, but it's what I could come up with to illustrate what I'm doing. The problem is that on the first time step, if the step size were 1 the population would go too low, but since the step size will be small, you can initially take everything from x.
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
plt.interactive(False)
tend = 5
tspan = np.linspace(0.0, tend, 1000)
A = 3
B = 4.09
C = 1.96
D = 2.29
def odefunc(P, t):
    x = P[0]
    y = P[1]
    z = P[2]
    if A * x - B * x * y < 0.6:
        dxdt = A/5 * x
        dydt = -C * y + D * x * y
        dzdt = -B * z * y
    else:
        dxdt = A * x - B * x * y
        dydt = -C * y + D * x * y
        dzdt = 0
    dPdt = np.ravel([dxdt, dydt, dzdt])
    return dPdt
init = ([0.75,0.95,100])
sol = odeint(odefunc, init, tspan, hmax = 0.01)
x = sol[:, 0]
y = sol[:, 1]
z = sol[:, 2]
plt.figure(1)
plt.plot(tspan,x)
plt.plot(tspan,y)
plt.plot(tspan,z)
Of course you can hack something together that might work.
You could log t, but be aware that the values are not guaranteed to increase monotonically; that depends on the ODE algorithm and how it evaluates the right-hand side internally. Still, it will give you an idea of whereabouts you are.
logger = []  # visible in odefunc
def odefunc(P, t):
    x = P[0]
    y = P[1]
    z = P[2]
    print(t)
    logger.append(t)
    if logger:                # if the list is not empty
        if logger[-1] > 2.5:  # then read the last value
            print('hua!')
    if A * x - B * x * y < 0.6:
        dxdt = A/5 * x
        dydt = -C * y + D * x * y
        dzdt = -B * z * y
    else:
        dxdt = A * x - B * x * y
        dydt = -C * y + D * x * y
        dzdt = 0
    dPdt = np.ravel([dxdt, dydt, dzdt])
    return dPdt
print(logger)
As pointed out in the other answer, time may not be strictly increasing at each call to the ODE function in odeint, especially for stiff problems.
The most robust way to handle this kind of discontinuity in the ODE function is to use an event to find the location of the zero of (A * x - B * x * y) - 0.6 in your example. For a discontinuous solution, use a terminal event to stop the computation precisely at the zero, and then change the ODE function. In solve_ivp you can do this with the events parameter; see the solve_ivp documentation, and specifically the examples related to the cannonball trajectories. odeint does not support events, but solve_ivp has an LSODA method available that calls the same Fortran library as odeint.
Here is a short example, but you may want to additionally check that sol1 reached the terminal event before solving for sol2.
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt
tend = 10
def discontinuity_zero(t, y):
    return y[0] - 10
discontinuity_zero.terminal = True

def ode_func1(t, y):
    return y

def ode_func2(t, y):
    return -y**2
sol1 = solve_ivp(ode_func1, t_span=[0, tend], y0=[1], events=discontinuity_zero, rtol=1e-8)
t1 = sol1.t[-1]
y1 = [sol1.y[0, -1]]
print(f'time={t1} y={y1} discontinuity_zero={discontinuity_zero(t1, y1)}')
sol2 = solve_ivp(ode_func2, t_span=[t1, tend], y0=y1, rtol=1e-8)
plt.plot(sol1.t, sol1.y[0,:])
plt.plot(sol2.t, sol2.y[0,:])
plt.show()
This prints the following, where the time of the discontinuity is accurate to 7 digits.
time=2.302584885712467 y=[10.000000000000002] discontinuity_zero=1.7763568394002505e-15
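As a minimal sketch of the check mentioned above (using sol1 from this example): solve_ivp sets sol.status to 1 when a termination event stopped the integration and stores the event times in sol.t_events, so you can verify before computing sol2:
if sol1.status != 1:
    raise RuntimeError('the terminal event was never reached')
print(sol1.t_events[0])  # times at which discontinuity_zero crossed zero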

Indefinite integral with constant

I need to use Python to get the indefinite integral of 1/(x^4 * sqrt(x^2 - a^2)), where a > 0.
I know how to integrate in Python, but not when there is a constant involved. Just to be clear, the answer needs to be in terms of x and a, because the integral is indefinite.
Using SymPy, you can compute the symbolic integral for that expression like this:
import sympy
# 1/(x^4(sqrt(x^2-a^2))), a > 0
x, a = sympy.symbols('x, a')
exp = 1 / ((x ** 4) * sympy.sqrt(x ** 2 - a ** 2))
exp_int = sympy.integrate(exp, x)
print(exp_int)
Which results in:
Piecewise((I*sqrt(a**2/x**2 - 1)/(3*a**2*x**2) + 2*I*sqrt(a**2/x**2 - 1)/(3*a**4), Abs(a**2/x**2) > 1), (sqrt(-a**2/x**2 + 1)/(3*a**2*x**2) + 2*sqrt(-a**2/x**2 + 1)/(3*a**4), True))
Or written out with LaTeX (the same Piecewise result):
\begin{cases}
\frac{i\sqrt{\frac{a^{2}}{x^{2}} - 1}}{3a^{2}x^{2}} + \frac{2i\sqrt{\frac{a^{2}}{x^{2}} - 1}}{3a^{4}} & \text{for } \left|\frac{a^{2}}{x^{2}}\right| > 1 \\
\frac{\sqrt{1 - \frac{a^{2}}{x^{2}}}}{3a^{2}x^{2}} + \frac{2\sqrt{1 - \frac{a^{2}}{x^{2}}}}{3a^{4}} & \text{otherwise}
\end{cases}

Python How to get the value of one specific point of derivative?

from sympy import *
x = Symbol('x')
y = x ** 2
dx = diff(y, x)
This code can get the derivative of y.
It's easy: dx = 2 * x.
Now I want to get the value of dx at x = 2.
Clearly, dx = 2 * 2 = 4 when x = 2.
But how can I do this in Python code?
Thanks for your help!
Probably the most versatile way is to lambdify:
sympy.lambdify creates and returns a function that you can assign to a name, and call, like any other python callable.
from sympy import *
x = Symbol('x')
y = x**2
dx = diff(y, x)
print(dx, dx.subs(x, 2))  # this substitutes 2 for x, as suggested by @BugKiller in the comments
ddx = lambdify(x, dx) # this creates a function that you can call
print(ddx(2))
According to SymPy's documentation you have to evaluate the value of the function after substituting x with the desired value:
>>> dx.evalf(subs={x: 2})
4.00000000000000
or
>>> dx.evalf(2, subs={x: 2})
4.0
to limit the output to two digits.

How to compute argmax with sympy?

I was wondering if there is any way to get the parameter for which a given expression attains its maximum value.
You normally find the parameter first and then evaluate the function to obtain its value there. For example:
from sympy import *
x = Symbol('x', real=True) # parameter
f = -2 * x**2 + 4*x # function
derivative = f.diff(x) # -4*x + 4
solve(derivative, x) # -4*x + 4 = 0
would get you x=1.
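To also get the maximum value itself (and confirm the critical point really is a maximum), you can substitute the solution back into f. A minimal sketch continuing the example above (x_max is just an illustrative name):
x_max = solve(derivative, x)[0]  # x = 1
print(x_max)                     # 1
print(f.subs(x, x_max))          # 2, the maximum value of f
print(f.diff(x, 2))              # -4, negative, so x = 1 is indeed a maximum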

Euler method (explicit and implicit)

I'd like to implement Euler's method (the explicit and the implicit one)
(https://en.wikipedia.org/wiki/Euler_method) for the following model:
x'(t) = q * (x_M - x(t)) * x(t)
x(0) = x_0
where q, x_M, and x_0 are real numbers.
I already know the (theoretical) method, but I couldn't figure out where to insert / change the model. Could anybody help?
EDIT: You were right, I hadn't understood the method correctly. Now, after a few hours, I think I've really got it! I'm fairly sure about the explicit method (nevertheless, could anybody please have a look at my code?).
With the implicit implementation I'm not so sure it's correct. Could anyone please have a look at the implicit method and give me feedback on what is correct and what isn't?
def explizit_euler():
    ''' x(t)' = q(xM - x(t))x(t)
        x(0) = x0 '''
    q = 2.
    xM = 2
    x0 = 0.5
    T = 5
    dt = 0.01
    N = T / dt
    x = x0
    t = 0.
    for i in range(0, int(N)):
        t = t + dt
        x = x + dt * (q * (xM - x) * x)
        print('%6.3f %6.3f' % (t, x))

def implizit_euler():
    ''' x(t)' = q(xM - x(t))x(t)
        x(0) = x0 '''
    q = 2.
    xM = 2
    x0 = 0.5
    T = 5
    dt = 0.01
    N = T / dt
    x = x0
    t = 0.
    for i in range(0, int(N)):
        t = t + dt
        x = (1.0 / (1.0 - q * (xM + x) * x))
        print('%6.3f %6.3f' % (t, x))
Pre-emptive note: Although the general idea should be correct, I did all the algebra in place in the editor box so there might be mistakes there. Please, check it yourself before using for anything really important.
I'm not sure how you came to the "implicit" formula
x = (1.0 / (1.0 - q *(xM + x) * x))
but this is wrong, and you can check it by comparing your "explicit" and "implicit" results: they should diverge only slightly, but with this formula they diverge drastically.
To understand the implicit Euler method, you should first get the idea behind the explicit one. The idea is really simple and is explained in the Derivation section of the wiki: since the derivative y'(x) is the limit of (y(x+h) - y(x))/h, you can approximate y(x+h) by y(x) + h*y'(x) for small h, assuming our original differential equation is
y'(x) = F(x, y(x))
Note that the reason this is only an approximation rather than the exact value is that even over the small range [x, x+h] the derivative y'(x) changes slightly. This means that if you want a better approximation of y(x+h), you need a better approximation of the "average" derivative y' over the range [x, x+h]. One way to improve it is to determine y' and y(x+h) simultaneously by requiring that y' actually be y'(x+h), i.e. the derivative at the end of the step. This results in the following system of equations:
y'(x+h) = F(x+h, y(x+h))
y(x+h) = y(x) + h*y'(x+h)
which is equivalent to a single "implicit" equation:
y(x+h) - y(x) = h * F(x+h, y(x+h))
It is called "implicit" because here the target y(x+h) is also a part of F. And note that quite similar equation is mentioned in the Modifications and extensions section of the wiki article.
Turning now to your case, that equation becomes
x(t+dt) - x(t) = dt*q*(xM -x(t+dt))*x(t+dt)
or equivalently
dt*q*x(t+dt)^2 + (1 - dt*q*xM)*x(t+dt) - x(t) = 0
This is a quadratic equation with two solutions:
x(t+dt) = [(dt*q*xM - 1) ± sqrt((dt*q*xM - 1)^2 + 4*dt*q*x(t))]/(2*dt*q)
Obviously we want the solution that is "close" to x(t), which is the "+" solution. So the code should be something like:
b = (q * xM * dt - 1)
x(t+h) = (b + (b ** 2 + 4 * q * x(t) * dt) ** 0.5) / 2 / q / dt
(editor note:) Multiplying numerator and denominator by the conjugate gives a numerically more stable form for small dt (where b < 0):
x(t+h) = (2 * x(t)) / ((b ** 2 + 4 * q * x(t) * dt) ** 0.5 - b)
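Plugging this update into the loop from the question gives a corrected implicit Euler step. A minimal sketch with the same q, xM, x0, T and dt as above (implizit_euler_fixed is just an illustrative name):
def implizit_euler_fixed():
    ''' x(t)' = q(xM - x(t))x(t),  x(0) = x0 '''
    q = 2.
    xM = 2
    x0 = 0.5
    T = 5
    dt = 0.01
    x = x0
    t = 0.
    for i in range(int(T / dt)):
        t = t + dt
        b = q * xM * dt - 1
        # take the stable "+" root of dt*q*x_new**2 + (1 - dt*q*xM)*x_new - x = 0
        x = 2 * x / ((b ** 2 + 4 * q * x * dt) ** 0.5 - b)
        print('%6.3f %6.3f' % (t, x))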
