SymPy gives a residual value when trying to solve the logistic equation - python

If I try solving the logistic differential equation in SymPy, I get a residual value (about 10^(-13)) which prevents SymPy from getting the correct values for the initial conditions. If I run this code:
import numpy as np
import sympy as sp
M = 10000
a = 0.03
x = sp.symbols("x")
# x, a, M = sp.symbols("x a M")
f = sp.Function('f')
fl = sp.Derivative(f(x),x)
sol = sp.dsolve(fl - a*(1 - f(x)/M)*f(x), f(x));sol
I get:
Eq(f(x), (9.09494701772928e-13*exp(0.03*C1 - 0.03*x) - 10000.0)/(exp(0.03*C1 - 0.03*x) - 1))
How can one get rid of these residuals in the solution?

Either don't use Float (use a = Rational(3, 100)), or, if you know you want those 1e-13-magnitude numbers to be 0, replace them with 0:
>>> eq
Eq(f(x), (9.09494701772928e-13*exp(0.03*C1 - 0.03*x) - 10000.0
... )/(exp(0.03*C1 - 0.03*x) - 1))
>>> eq.replace(lambda x: x.is_Float and abs(x) < 1e-12, lambda x: 0)
Eq(f(x), -10000.0/(exp(0.03*C1 - 0.03*x) - 1))
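A minimal sketch of the first suggestion (assuming the same setup as in the question); with an exact Rational coefficient the stray 1e-13 terms never appear:
import sympy as sp

M = 10000                     # exact integer
a = sp.Rational(3, 100)       # exact rational instead of the float 0.03
x = sp.symbols("x")
f = sp.Function('f')
fl = sp.Derivative(f(x), x)
sol = sp.dsolve(fl - a*(1 - f(x)/M)*f(x), f(x))
print(sol)                    # contains only exact numbers, no 1e-13 residuals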

Related

Calculating a definite integral with variable boundaries in Python

I am currently working on calculating a definite integral. I need to obtain some numerical outputs, but I cannot. My two attempts to solve the integral are below. How can I get either of these methods to work?
My first way to calculate the integral is
import numpy as np
from sympy import *

Nt = 17
alpha = .99
t = np.linspace(0, .85, Nt)
s = np.linspace(0, .85, Nt)
for k in range(Nt):
    for i in range(k):
        Int = integrate((t[k] - s) ** -int(alpha), (s, t[i], t[i + 1]))
        print(Int)
Error:
ValueError: Invalid limits given: ((array([0. , 0.053125, 0.10625 , 0.159375, 0.2125 , 0.265625,
0.31875 , 0.371875, 0.425 , 0.478125, 0.53125 , 0.584375,
0.6375 , 0.690625, 0.74375 , 0.796875, 0.85 ]), 0.0, 0.053125),)
My second way is
import numpy as np
from sympy import *
from numba import jit, prange

Nt = 17
alpha = .99
t = np.linspace(0, .85, Nt)
s = np.linspace(0, .85, Nt)

#jit(nopython=True)
def Int(alpha):
    Int = 0
    for k in prange(Nt):
        for i in prange(k):
            Int = Int + integrate((t[k] - s) ** -int(alpha), (s, t[i], t[i + 1]))
            print(Int)
    return Int
There is no output at all.
EDIT: For my first attempt, I made a small change and got some numerical results. I am not sure about them. Any comments are still welcome.
Nt = 17
alpha = .99
t = np.linspace(0, .85, Nt)
s = symbols('s')
for k in range(Nt):
    for i in range(k):
        Int = integrate((t[k] - s) ** -int(alpha), (s, t[i], t[i + 1]))
        print(Int)
Output:
0.0531250000000000
0.0531250000000000
0.0531250000000000
0.0531250000000000...
If you use SymPy you can get a symbolic expression:
>>> var('s t e ti tj'); i = integrate(1/(t - s)**e, (s, ti, tj)); i
Piecewise(
((t - ti)**(1 - e)/(1 - e) - (t - tj)**(1 - e)/(1 - e),
(e > -oo) & (e < oo) & Ne(e, 1)),
(log(t - ti) - log(t - tj),
True))
With e = -int(.99) = 0 -- are you sure that is what you want? -- this is
>>> i0 = i.subs(e,-int(.99)); i0
-ti + tj
In this case, the integral does not depend on t, only the difference in the limits. As Oscar says, you can now just substitute in values of interest. Or, knowing that it is just the difference of points, you could compute those differences directly:
>>> ts = [.85/16*_ for _ in range(17)]
>>> i0.subs({ti:ts[0], tj:ts[1]})
0.053125
>>> ts[1] - ts[0]
0.053125
This is the same value you got. Now you (hopefully) understand why you were getting the same value every time.
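If the exponent was actually meant to stay at 0.99 rather than int(0.99) == 0 (an assumption on my part), here is a sketch continuing the session above with an exact rational exponent:
from sympy import Rational

e99 = Rational(99, 100)      # keep the exponent at 0.99 instead of int(0.99) == 0
i99 = i.subs(e, e99)         # the first Piecewise branch applies because e99 != 1
# i99 equals (t - ti)**(1/100)/(1/100) - (t - tj)**(1/100)/(1/100), which does depend on t
print(i99.subs({t: 0.85, ti: 0, tj: 0.85/16}))   # example values only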

Minimize system of nonlinear equation (integral on exponent)

General:
I am using maximum entropy to find a distribution over positive integer vectors. I can estimate the mean and variance, and I have three equations from which I am trying to find a and b.
The equations:
integral(exp(a*x^2 + b*x + c)) - 1
integral(x*exp(a*x^2 + b*x + c)) - mean
integral(x^2*exp(a*x^2 + b*x + c)) - mean^2 - var
(all integrals are taken over [0, ∞))
The problem:
I am trying to use a numerical solver, so I used scipy's fsolve together with sympy for the integrals.
But I guess I am missing some knowledge.
My code:
import numpy as np
import sympy as sym
from scipy.optimize import *

def myFunction(x, *data):
    y = sym.symbols('y')
    m, v = data
    F = [0]*3
    x[0] = -abs(x[0])
    print(x)
    F[0] = (sym.integrate(sym.exp(x[0] * y ** 2 + x[1] * y + x[2]), (y, 0, sym.oo)) - 1).evalf()
    F[1] = (sym.integrate(y * sym.exp(x[0] * y ** 2 + x[1] * y + x[2]), (y, 0, sym.oo)) - m).evalf()
    F[2] = (sym.integrate((y**2) * sym.exp(x[0] * y ** 2 + x[1] * y + x[2]), (y, 0, sym.oo)) - v - m).evalf()
    print(F)
    return F

data = (10, 3.5)  # mean and var for example
xGuess = [1, 1, 1]
z = fsolve(myFunction, xGuess, args=data)
print(z)
My results are not that accurate; is there a better way to solve it?
integral(exp(a*x^2 + b*x + c)) - 1 = 5.67659292676884
integral(x*exp(a*x^2 + b*x + c)) - mean = -1.32123173796713
integral(x^2*exp(a*x^2 + b*x + c)) - mean^2 - var = -2.20825624606312
Thanks
I have rewritten the problem replacing sympy with numpy/scipy and lambdas (inline functions).
Also note that in your problem statement the third equation subtracts mean^2, but in your code you only subtract mean.
import numpy as np
from scipy.optimize import minimize
from scipy.integrate import quad

def myFunction(x, data):
    m, v = data
    F = np.zeros(3)  # use a numpy array
    # use scipy.integrate.quad to integrate the lambda functions;
    # quad returns (result, error), so take element [0]
    F[0] = quad(lambda y: np.exp(x[0] * y ** 2 + x[1] * y + x[2]), 0, np.inf)[0] - 1
    F[1] = quad(lambda y: y * np.exp(x[0] * y ** 2 + x[1] * y + x[2]), 0, np.inf)[0] - m
    F[2] = quad(lambda y: (y**2) * np.exp(x[0] * y ** 2 + x[1] * y + x[2]), 0, np.inf)[0] - v - m**2
    # minimize the squared error
    return np.sum(F**2)

data = (10, 3.5)  # mean and var for example
xGuess = [-1, 1, 1]
z = minimize(lambda x: myFunction(x, data), x0=xGuess,
             bounds=((None, 0), (None, None), (None, None)))  # bounds keep the first coefficient negative
print(z)
# x: array([-0.99899311, 2.18819689, 1.85313181])
Does this seem more reasonable?
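As a quick sanity check (a sketch, not part of the original answer), you can plug the fitted coefficients back into the three integrals and look at the residuals of the three equations:
m, v = data
a, b, c = z.x                # fitted coefficients returned by minimize
norm   = quad(lambda y: np.exp(a*y**2 + b*y + c), 0, np.inf)[0]
mean   = quad(lambda y: y*np.exp(a*y**2 + b*y + c), 0, np.inf)[0]
second = quad(lambda y: y**2*np.exp(a*y**2 + b*y + c), 0, np.inf)[0]
print(norm - 1, mean - m, second - (v + m**2))   # residuals of the three equations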

How can I solve y = (x+1)**3 -2 for x in sympy?

I'd like to solve y = (x+1)**3 - 2 for x in sympy to find its inverse function.
I tried using solve, but I didn't get what I expected.
Here's what I wrote in IPython console in cmd (sympy 1.0 on Python 3.5.2):
In [1]: from sympy import *
In [2]: x, y = symbols('x y')
In [3]: n = Eq(y,(x+1)**3 - 2)
In [4]: solve(n,x)
Out [4]:
[-(-1/2 - sqrt(3)*I/2)*(-27*y/2 + sqrt((-27*y - 54)**2)/2 - 27)**(1/3)/3 - 1,
-(-1/2 + sqrt(3)*I/2)*(-27*y/2 + sqrt((-27*y - 54)**2)/2 - 27)**(1/3)/3 - 1,
-(-27*y/2 + sqrt((-27*y - 54)**2)/2 - 27)**(1/3)/3 - 1]
I was looking at the last element in the list in Out [4], but it doesn't equal x = (y+2)**(1/3) - 1 (which I was expecting).
Why did sympy output the wrong result, and what can I do to make sympy output the solution I was looking for?
I tried using solveset, but I got the same results as using solve.
In [13]: solveset(n,x)
Out[13]: {-(-1/2 - sqrt(3)*I/2)*(-27*y/2 + sqrt((-27*y - 54)**2)/2 - 27)**(1/3)/3 - 1,
 -(-1/2 + sqrt(3)*I/2)*(-27*y/2 + sqrt((-27*y - 54)**2)/2 - 27)**(1/3)/3 - 1,
 -(-27*y/2 + sqrt((-27*y - 54)**2)/2 - 27)**(1/3)/3 - 1}
Sympy gave you the correct result: your last result is equivalent to (y+2)**(1/3) - 1.
What you're looking for is simplify:
>>> from sympy import symbols, Eq, solve, simplify
>>> x, y = symbols("x y")
>>> n = Eq(y, (x+1)**3 - 2)
>>> s = solve(n, x)
>>> simplify(s[2])
(y + 2)**(1/3) - 1
Edit: this worked with sympy 0.7.6.1; after updating to 1.0 it doesn't work anymore.
If you declare that x and y are positive, then there is only one solution:
import sympy as sy
x, y = sy.symbols("x y", positive=True)
n = sy.Eq(y, (x+1)**3 - 2)
s = sy.solve(n, x)
print(s)
yields
[(y + 2)**(1/3) - 1]
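As a quick check (a sketch, not part of either answer), substituting the candidate inverse back into y = (x+1)**3 - 2 should reduce to zero:
import sympy as sy

y = sy.symbols("y", positive=True)
inverse = (y + 2)**sy.Rational(1, 3) - 1
residual = sy.simplify((inverse + 1)**3 - 2 - y)
print(residual)   # prints 0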

Finding the minimum of a function on a closed interval with Python

Updated: How do I find the minimum of a function on a closed interval [0, 3.5] in Python? So far I have found the local max and min, but I am unsure how to pick out the minimum on the interval from here.
import sympy as sp
x = sp.symbols('x')
f = (x**3 / 3) - (2 * x**2) + (3 * x) + 1
fprime = f.diff(x)
all_solutions = [(xx, f.subs(x, xx)) for xx in sp.solve(fprime, x)]
print (all_solutions)
Since this PR you should be able to do the following:
from sympy import symbols, Interval
from sympy.calculus.util import *

x = symbols('x')
f = (x**3 / 3) - (2 * x**2) - 3 * x + 1
ivl = Interval(0, 3)
print(minimum(f, x, ivl))
print(maximum(f, x, ivl))
print(stationary_points(f, x, ivl))
Perhaps something like this
from sympy import solveset, symbols, Interval, Min

x = symbols('x')
lower_bound = 0
upper_bound = 3.5
function = (x**3/3) - (2*x**2) - 3*x + 1
# The candidates for the minimum on a closed interval are the endpoints
# and the critical points (zeros of the derivative) inside the interval.
critical_points = solveset(function.diff(x), x, domain=Interval(lower_bound, upper_bound))
assert critical_points.is_FiniteSet  # if there are infinitely many solutions the next line will hang
ans = Min(function.subs(x, lower_bound),
          function.subs(x, upper_bound),
          *[function.subs(x, c) for c in critical_points])
Here's a possible solution using sympy:
import sympy as sp

x = sp.Symbol('x', real=True)
f = (x**3 / 3) - (2 * x**2) - 3 * x + 1
#f = 3 * x**4 - 4 * x**3 - 12 * x**2 + 3
fprime = f.diff(x)
all_solutions = [(xx, f.subs(x, xx)) for xx in sp.solve(fprime, x)]
interval = [0, 3.5]
# filter() returns a lazy iterator on Python 3, so wrap it in list() before printing
interval_solutions = list(filter(
    lambda p: interval[0] <= p[0] <= interval[1], all_solutions))
print(all_solutions)
print(interval_solutions)
all_solutions gives you all points where the first derivative is zero; interval_solutions restricts those solutions to the closed interval. This should give you some good clues for finding minima and maxima :-)
The f.subs calls sketched below show two ways of displaying the value of the given function at x = 3.5, the first as a decimal approximation, the second as the exact fraction.
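A plausible reconstruction of those calls (an assumption, since the original snippet is not shown here), reusing the f and x defined above:
print(f.subs(x, 3.5))                # decimal approximation at x = 3.5
print(f.subs(x, sp.Rational(7, 2)))  # exact fraction at x = 7/2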

Python's sympy solver returning bad roots on 4th degree equation

I need to solve a 4th degree equation with python. For this I'm using the sympy module.
When I run the script, sympy returns the 4 solutions of the equation as complex numbers (see output), while, in fact, all of them are real.
What is making sympy return the wrong answer?
import numpy as np
import math
from numpy import linalg as la
import sympy as sy
from matplotlib.pyplot import *
L = np.array([0,1,-20.0])
S = np.array([0,0,-10.0])
a = np.dot(S,S)
b = np.dot(S,L)
c = np.dot(L,L)
k0 = a - 1
k1 = 2*(a-b)
k2 = a + 2*b + c - 4*a*c
k3 = -4*(a*c - b**2)
k4 = 4*c*(a*c - b**2)
y = sy.Symbol('y')
r = sy.solvers.solve(k4*y**4 + k3*y**3 + k2*y**2 + k1*y + k0, y)
print r
y = np.linspace(-1.1, 1.1, 1000)
x = k4*y**4 + k3*y**3 + k2*y**2 + k1*y + k0
figure()
plot(y, x)
grid(True)
show()
Output:
[-0.994999960838935 + 1.66799419488535e-31*I,
-0.0255580200028216 - 6.34301512012529e-30*I,
0.0243009597954184 + 6.32628752256216e-30*I,
0.998750786632373 - 1.50071821925406e-31*I]
Plot (there are 4 zero-crossings):
Notice that the results are actually real, up to numerical precision: 1e-30 is a really small number. The solutions reported are also consistent with the plot, so there is nothing to worry about.
Real values can also be obtained directly from nroots:
>>> eq=k4*y**4 + k3*y**3 + k2*y**2 + k1*y + k0
>>> eq
160400.0*y**4 - 400.0*y**3 - 159499.0*y**2 - 200.0*y + 99.0
>>> nroots(eq)
[-0.994999960838935, -0.0255580200028216, 0.0243009597954184, 0.998750786632373]
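If you prefer to keep the solve output, one option (a sketch, not from the original answer) is to chop the vanishing imaginary parts with evalf:
real_roots = [root.evalf(chop=True) for root in r]
print(real_roots)   # the same real values that nroots reports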
