Python: How to get the value of a derivative at a specific point?

from sympy import *
x = Symbol('x')
y = x ** 2
dx = diff(y, x)
This code computes the derivative of y, which is simply dx = 2 * x.
Now I want to evaluate dx at x = 2.
Clearly, dx = 2 * 2 = 4 when x = 2, but how can I do this in Python?
Thanks for your help!

Probably the most versatile way is to lambdify:
sympy.lambdify creates and returns a function that you can assign to a name and call like any other Python callable.
from sympy import *
x = Symbol('x')
y = x**2
dx = diff(y, x)
print(dx, dx.subs(x, 2)) # this substitutes 2 for x, as suggested by @BugKiller in the comments
ddx = lambdify(x, dx) # this creates a function that you can call
print(ddx(2))
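If you need the derivative at many points at once, lambdify can also target numpy (a small sketch; the 'numpy' backend and the array input go beyond the original answer):
import numpy as np
from sympy import Symbol, diff, lambdify
x = Symbol('x')
dx = diff(x**2, x)
ddx = lambdify(x, dx, 'numpy')         # vectorized callable backed by numpy
print(ddx(np.array([1.0, 2.0, 3.0])))  # [2. 4. 6.]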

According to SymPy's documentation, you evaluate the expression after substituting the desired value for x:
>>> dx.evalf(subs={x: 2})
4.00000000000000
or
>>> dx.evalf(2, subs={x: 2})
4.0
to limit the output to two significant digits.

Related

sympy not ignoring unimportant decimals in exponential expression

I have code that evaluates some mathematical expressions, and when I look at the simplified results, SymPy cannot equate 2.0 with 2 inside a power. That is logical, since one is a float and the other is an integer, but it was SymPy's choice where to put these two values, not mine.
Here is the kind of expression in my results that SymPy is not simplifying:
from sympy import *
x = symbols('x')
y = -exp(2.0*x) + exp(2*x)
print(simplify(y)) # output is -exp(2.0*x) + exp(2*x)
y = -exp(2*x) + exp(2*x)
print(simplify(y)) # output is 0
y = -2.0*x + 2*x
print(simplify(y)) # output is 0
y = -x**2.0 + x**2
print(simplify(y)) # output is -x**2.0 + x**2
Is there any way to work around this problem? I am looking for a way to make SymPy assume that everything other than the symbols is a float, preventing it from deciding which value is a float and which is an integer.
This problem has been asked before by Gerardo Suarez, but without a satisfactory answer.
There is another SymPy function you can use called nsimplify. When I run your examples through it, they all return zero:
from sympy import *
x = symbols("x")
y = -exp(2.0 * x) + exp(2 * x)
print(nsimplify(y)) # output is 0
y = -exp(2 * x) + exp(2 * x)
print(nsimplify(y)) # output is 0
y = -2.0 * x + 2 * x
print(nsimplify(y)) # output is 0
y = -(x ** 2.0) + x ** 2
print(nsimplify(y)) # output is 0
Update
As @Shoaib Mirzaei mentioned, you can also use the rational argument of the simplify() function, like this:
simplify(y,rational=True)
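Applied to the first expression from the question, this gives the same result as nsimplify (a small sketch to confirm the behaviour):
from sympy import *
x = symbols('x')
y = -exp(2.0*x) + exp(2*x)
print(simplify(y, rational=True)) # output is 0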

How to get the value of an intermediate variable in a function that needs to use 'fsolve'?

My first .py file contains the function whose roots I want to find, like this:
def myfun(unknowns, a, b):
    x = unknowns[0]
    y = unknowns[1]
    eq1 = a*y + b
    eq2 = x**b
    z = x*y + y/x
    return eq1, eq2
And my second one finds the values of x and y from a starting point, given the parameter values a and b:
import scipy.optimize
a = 3
b = 2
x0 = 1
y0 = 1
x, y = scipy.optimize.fsolve(myfun, (x0, y0), args=(a, b))
My question is: I actually need the value of z after plugging in the x and y that were found, and I don't want to repeat z = x*y + y/x + ... again; in my real case it is an intermediate variable without an explicit expression.
However, I cannot simply change the last line of the function to return eq1, eq2, z, since fsolve only finds the roots of eq1 and eq2.
The only solution I have now is to rewrite this function so that it returns z, and then plug in x and y to get z.
Is there a better solution to this problem?
I believe that's the wrong approach. Since z is a direct function of x and y, what you need is to retrieve those two values. In the listed case it's easy enough: given b, you can recover x by inverting eq2; likewise, given a, you can invert eq1 to get y.
For clarity, I'm renaming your return variables:
ret1, ret2 = scipy.optimize.fsolve(myfun, (x0, y0), args=(a, b))
Now, invert the two functions:
# eq2 = x**b
x = ret2**(1/b)
# eq1 = a*y+b
y = (ret1 - b) / a
... and finally ...
z = x*y + y/x
Note that you should remove the z computation from your function, as it serves no purpose.

Access current time step in scipy.integrate.odeint within the function

Is there a way to access what the current time step is in scipy.integrate.odeint?
I am trying to solve a system of ODEs where the form of the ODE depends on whether or not a population will be depleted. Basically I take from population x provided x doesn't go below a threshold. If the amount I need to take this time step would push x below that threshold, I take all of x down to that point and the rest from z.
I am trying to do this by checking how much I will take this time step, and then allocating between populations x and z in the DEs.
To do this I need to be able to access the step size within the ODE solver to calculate what will be taken this time step. I am using scipy.integrate.odeint - is there a way to access the time step within the function defining the odes?
Alternatively, can you access what the last time was in the solver? I know it won't necessarily be the next time step, but it's likely a good enough approximation for me if that is the best I can do. Or is there another option I've not thought of to do this?
The MWE below is not my actual system of equations, but it is what I could come up with to illustrate what I'm doing. The problem is that on the first time step, if the step size were 1 the population would go too low, but since the step size will be small, initially you can take everything from x.
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
plt.interactive(False)
tend = 5
tspan = np.linspace(0.0, tend, 1000)
A = 3
B = 4.09
C = 1.96
D = 2.29
def odefunc(P, t):
    x = P[0]
    y = P[1]
    z = P[2]
    if A * x - B * x * y < 0.6:
        dxdt = A/5 * x
        dydt = -C * y + D * x * y
        dzdt = -B * z * y
    else:
        dxdt = A * x - B * x * y
        dydt = -C * y + D * x * y
        dzdt = 0
    dPdt = np.ravel([dxdt, dydt, dzdt])
    return dPdt
init = ([0.75,0.95,100])
sol = odeint(odefunc, init, tspan, hmax = 0.01)
x = sol[:, 0]
y = sol[:, 1]
z = sol[:, 2]
plt.figure(1)
plt.plot(tspan,x)
plt.plot(tspan,y)
plt.plot(tspan,z)
Of course you can hack something together that might work.
You could log t, but you have to be aware that the logged values might not be monotonically increasing. This depends on the ODE algorithm and how it works (forward, backward, and central finite differences). But it will give you an idea of whereabouts you are.
logger = []  # visible inside odefunc
def odefunc(P, t):
    x = P[0]
    y = P[1]
    z = P[2]
    print(t)
    logger.append(t)
    if logger:                 # if the list is not empty
        if logger[-1] > 2.5:   # then read the last value
            print('hua!')
    if A * x - B * x * y < 0.6:
        dxdt = A/5 * x
        dydt = -C * y + D * x * y
        dzdt = -B * z * y
    else:
        dxdt = A * x - B * x * y
        dydt = -C * y + D * x * y
        dzdt = 0
    dPdt = np.ravel([dxdt, dydt, dzdt])
    return dPdt
print(logger)
As pointed out in the other answer, time may not be strictly increasing at each call to the ODE function in odeint, especially for stiff problems.
The most robust way to handle this kind of discontinuity in the ODE function is to use an event to find the location of the zero of (A * x - B * x * y) - 0.6 in your example. For a discontinuous solution, use a terminal event to stop the computation precisely at the zero, and then change the ODE function. In solve_ivp you can do this with the events parameter. See the solve_ivp documentation, and specifically the examples on cannonball trajectories. odeint does not support events, but solve_ivp has an LSODA method available that calls the same Fortran library as odeint.
Here is a short example, but you may want to additionally check that sol1 reached the terminal event before solving for sol2.
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt
tend = 10
def discontinuity_zero(t, y):
    return y[0] - 10
discontinuity_zero.terminal = True

def ode_func1(t, y):
    return y

def ode_func2(t, y):
    return -y**2
sol1 = solve_ivp(ode_func1, t_span=[0, tend], y0=[1], events=discontinuity_zero, rtol=1e-8)
t1 = sol1.t[-1]
y1 = [sol1.y[0, -1]]
print(f'time={t1} y={y1} discontinuity_zero={discontinuity_zero(t1, y1)}')
sol2 = solve_ivp(ode_func2, t_span=[t1, tend], y0=y1, rtol=1e-8)
plt.plot(sol1.t, sol1.y[0,:])
plt.plot(sol2.t, sol2.y[0,:])
plt.show()
This prints the following, where the time of the discontinuity is accurate to 7 digits.
time=2.302584885712467 y=[10.000000000000002] discontinuity_zero=1.7763568394002505e-15
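A minimal way to do the check mentioned above (a sketch; in a solve_ivp result, status == 1 means a terminal event stopped the integration, and the t_event name is just illustrative):
if sol1.status == 1:               # a terminal event ended the integration
    t_event = sol1.t_events[0][0]  # time at which discontinuity_zero crossed zero
else:
    t_event = tend                 # the event was never reached in [0, tend]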

How to evaluate this line integral (Python-Sympy)

The idea is to compute the line integral of the following vector field and curve:
This is the code I have tried:
import numpy as np
from sympy import *
from sympy import Curve, line_integrate
from sympy.abc import x, y, z, t
C = Curve([cos(t) + 1, sin(t) + 1, 1 - cos(t) - sin(t)], (t, 0, 2*np.pi))
line_integrate(y * exp(x) + x**2 + exp(x) + z**2 * exp(z), C, [x, y, z])
But this raises ValueError: Function argument should be (x(t), y(t)) but got [cos(t) + 1, sin(t) + 1, -sin(t) - cos(t) + 1].
How can I compute this line integral then?
I think this line integral may contain integrals that don't have an exact solution. It is also fine if you provide a numerical approximation method.
Thanks
In this case you can compute the integral using line_integrate, because we can reduce the 3D integral to a 2D one. I'm sorry to say I don't know Python well enough to write the code, but here's the drill:
If we write
C(t) = x(t),y(t),z(t)
then the thing to notice is that
z(t) = 3 - x(t) - y(t)
and so
dz = -dx - dy
So, we can write
F.dr = Fx*dx + Fy*dy + Fz*dz
= (Fx-Fz)*dx + (Fy-Fz)*dy
So we have reduced the problem to a 2D one: we integrate
G = (Fx-Fz)*i + (Fy-Fz)*j
around the curve
t -> (x(t), y(t))
Note that in G we need to get rid of z by substituting
z = 3 - x - y
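A minimal SymPy sketch of this reduction, carried out with an Integral over the parameter t rather than line_integrate (the field components are taken from the integrand in the question, and the numerical fallback via evalf() is an assumption on my part):
from sympy import symbols, cos, sin, exp, pi, Integral

t = symbols('t')
# parametrize the curve
x = cos(t) + 1
y = sin(t) + 1
z = 3 - x - y                       # the plane containing the curve
# field components F = (Fx, Fy, Fz) evaluated on the curve
Fx = y * exp(x)
Fy = x**2 + exp(x)
Fz = z**2 * exp(z)
# reduced 2D integrand: (Fx - Fz)*dx + (Fy - Fz)*dy
work = Integral((Fx - Fz) * x.diff(t) + (Fy - Fz) * y.diff(t), (t, 0, 2*pi))
print(work.evalf())                 # numerical value; work.doit() would attempt a closed form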
The ValueError you receive does not come from your call to line_integrate; it arises because, according to the source code for the Curve class, only functions in 2D Euclidean space are supported. This integral can still be computed without using SymPy, following a research blog post I found simply by searching Google for a workable method.
The code you need looks like this:
import autograd.numpy as np
from autograd import jacobian
from scipy.integrate import quad
def F(X):
    x, y, z = X
    return [y * np.exp(x), x**2 + np.exp(x), z**2 * np.exp(z)]

def C(t):
    return np.array([np.cos(t) + 1, np.sin(t) + 1, 1 - np.cos(t) - np.sin(t)])

dCdt = jacobian(C, 0)

def integrand(t):
    return F(C(t)) @ dCdt(t)  # dot the field with the curve's tangent vector
I, e = quad(integrand, 0, 2 * np.pi)
The variable I then stores the numerical solution to your question.
You can define a function:
import sympy as sp
from sympy import *

x, y, z, t = symbols('x y z t')  # symbols used by the field and the curve

def linea3(f, C):
    P = f[0].subs([(x, C[0]), (y, C[1]), (z, C[2])])
    Q = f[1].subs([(x, C[0]), (y, C[1]), (z, C[2])])
    R = f[2].subs([(x, C[0]), (y, C[1]), (z, C[2])])
    dx = diff(C[0], t)
    dy = diff(C[1], t)
    dz = diff(C[2], t)
    m = integrate(P*dx + Q*dy + R*dz, (t, C[3], C[4]))
    return m
Then use the example:
f = [x**2*z**2,y**2*z**2,x*y*z]
C = [2*cos(t),2*sin(t),4,0,2*sp.pi]
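To evaluate the integral, call the function on these arguments:
print(linea3(f, C))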

How to compute argmax with sympy?

I was wondering if there is any way to get the parameter for which a given expression attains its maximum value.
You normally find the parameter first and then evaluate the expression to obtain the function value. For example:
from sympy import *
x = Symbol('x', real=True) # parameter
f = -2 * x**2 + 4*x # function
derivative = f.diff(x) # -4*x + 4
solve(derivative, x) # -4*x + 4 = 0
would get you x=1.
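Putting it together (a small sketch; the second-derivative check is an addition to confirm the critical point is a maximum):
from sympy import Symbol, solve

x = Symbol('x', real=True)
f = -2*x**2 + 4*x
x_max = solve(f.diff(x), x)[0]   # x = 1
print(x_max, f.subs(x, x_max))   # 1 2
print(f.diff(x, 2))              # -4, negative, so x = 1 is indeed a maximum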
