I would like to solve the equation (x+1)e^x = c in Python.
I have successfully solved the equation by hand using Lambert W functions, as depicted in the figure below:
Using the same steps, I would like to solve (x+1)e^x = c programmatically. I have coded it with the SymPy module, following the steps shown in the figure above, but without success.
Is there any way to solve these kinds of equations in Python?
import numpy as np
from sympy import *

n = symbols('n')
sigmao = 0.06866
sigmas = 0.142038295
theta = 38.9076
rad = (np.pi / 180) * theta
sec = 1 / np.cos(rad)
out = (0.06 * 0.7781598455 * n * (1 - exp(-2 * 0.42 * sec * n)) + exp(-2 * 0.42 * n * sec) * sigmas) / sigmao
# Differentiate the above expression with respect to n.
fin = diff(out, n)
print(solve(fin, n))
from scipy.optimize import fsolve
import numpy as np

const = 20.0

def func(x):
    return [(x[0] + 1) * np.exp(x[0]) - const]

result = fsolve(func, [1])[0]
print('constant: ', const, ', solution: ', result)
# check
print('check: ', (result + 1) * np.exp(result))

# Output:
constant: 20.0 , solution: 1.9230907433218063
check: 20.0

Preview: https://onlinegdb.com/By8Z2Jwgw
Your expression is heavily numeric. Since SymPy's solve tries to find an exact symbolic solution, it runs into trouble.
To find numeric solutions, SymPy has nsolve (which accepts SymPy expressions but behind the scenes calls mpmath's numeric solver). Unlike solve, nsolve needs an initial guess:
from sympy import symbols, exp, diff, nsolve, pi, cos
n = symbols('n')
sigmao = 0.06866
sigmas = 0.142038295
theta = 38.9076
rad = (pi / 180) * theta
sec = 1 / cos(rad)
out = (0.06 * 0.7781598455 * n * (1 - exp(-2 * 0.42 * sec * n)) + exp(-2 * 0.42 * n * sec) * sigmas) / sigmao
# Apply diff for the above expression.
fin = diff(out, n)
result = nsolve(fin, n, 1)
print(result, fin.subs(n, result).evalf())
Result: 1.05992379637846 -7.28565300819065e-17
Note that when working with numeric values, you should be careful to carry as many digits as possible to avoid accumulating errors. Whenever you have an exact expression, it is recommended to leave that expression in the code instead of replacing it with digits. (Calculations usually use 64 bits, about 16 significant digits, though intermediate results may be carried at 80 bits.)
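For instance (a small illustration of my own, not part of the original computation), SymPy lets you keep the angle exact and round only once at the end:
from sympy import pi, cos, Rational

# Rational('38.9076') and SymPy's pi carry no rounding error
rad_exact = pi / 180 * Rational('38.9076')
print(cos(rad_exact).evalf(30))  # evaluate to 30 digits in a single final step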
To solve the original equation with SymPy:
from sympy import symbols, Eq, exp, solve

x = symbols('x')
solutions = solve(Eq((x + 1) * exp(x), 20))
for s in solutions:
    print(s.evalf())
Result: 1.92309074332181
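Note that solve actually recovers the Lambert W closed form here, matching the hand derivation. On my SymPy version the symbolic solution prints as:
print(solve(Eq((x + 1) * exp(x), 20)))  # [-1 + LambertW(20*E)]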
When evaluating a function f(x) with SymPy, the final output keeps sin(n*pi) and cos(n*pi) in symbolic form, but does not evaluate them to 0 and (-1)**n respectively, even when n is defined to be a positive integer. Symbolab, however, does perform this evaluation.
Is there a way to get SymPy to evaluate the expression the way Symbolab does?
Minimal example to reproduce the behavior in SymPy:
# Load libraries
import sympy as sp
from sympy import *
from IPython.display import display
init_printing()

# To turn off bugging warnings
import warnings
warnings.filterwarnings('ignore')

x, L, pi = symbols('x, L, pi', real=True, positive=True)
n = symbols('n', positive=True, integer=True)

a_n_x = 2/L
a_n_in = x * cos((n*pi*x)/L)
display(a_n_in)
a_n = a_n_x * integrate(a_n_in, (x, 0, 2))
display(a_n)
a_n = a_n.subs(L, 2)
display(a_n)
The problem is on this line:
x, L, pi = symbols('x, L, pi', real=True, positive=True)
This defines pi as a positive real variable, so it is treated just like any other positive real variable would be - in particular, sin(n * pi) and cos(n * pi) cannot be simplified, any more than sin(n * x) or cos(n * x) could be. The fact that you named it pi doesn't matter.
To fix it, use the symbol pi defined in the sympy module itself, which SymPy understands to mean the constant π. This is already imported in the line from sympy import *, so you just need to not replace it with your own variable.
You need to ensure n is an integer, otherwise the identity does not hold:
n = symbols('n', integer=True)
trigsimp( cos(pi * n) ) # Prints (-1)**n
To get the desired result, we have to use sympy.cos() and SymPy's own pi. The updated minimal reproducible example below demonstrates this convention in more detail:
# Load libraries
import sympy as sp
from sympy import *
from IPython.display import display
init_printing()

# To turn off bugging warnings
import warnings
warnings.filterwarnings('ignore')

x, L = symbols('x, L', real=True, positive=True)
n = symbols('n', positive=True, integer=True)

a_n_x = 2/L
a_n_in = x * sp.cos((n*pi*x)/L)  # pi here is sympy.pi from the star import
display(a_n_in)
a_n = a_n_x * integrate(a_n_in, (x, 0, 2))
display(a_n)
a_n = a_n.subs(L, 2)
display(a_n.simplify())
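As a quick extra check (not part of the original example), with integer n and SymPy's own pi the troublesome factors now evaluate on their own:
print(sp.sin(n * sp.pi))  # 0
print(sp.cos(n * sp.pi))  # (-1)**n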
EDIT (12/4/2021): updated based on @kaya3's detailed explanation of how pi is defined in SymPy.
I'm working on a solver in Python, preferably using PyTorch tensors, that would solve a series of non-linear equations very quickly and efficiently. The solver essentially needs to find the variable s in the formula below, given a value of ob; i.e. the value returned by the function f should be zero.
from scipy.optimize import fsolve
from scipy.stats import norm
import numpy as np

d = 0.01
r = 0.02
v_base = 0.19
v_skew1 = -0.0035
v_skew2 = -0.0021
t = 1

def v(s):
    return v_base + v_skew1 * (s - 1) + v_skew2 * (s - 1.1)

def f(s, ob):
    v_temp = v(s)
    k1 = (np.log(1 / s) + (r - d + 0.5 * v_temp**2) * t) / (v_temp * np.sqrt(t))
    k2 = k1 - v_temp * np.sqrt(t)
    # Black-Scholes-style price with the spot normalized to 1 and strike s
    result = np.exp(-d * t) * norm.cdf(k1) - s * np.exp(-r * t) * norm.cdf(k2)
    return result - ob

ob = 0.015  # only one value of ob for now, but need to solve for thousands of ob values
answer = fsolve(f, 0.01, args=(ob,))
ob is going to be calculated using PyTorch tensors for performance, so I would prefer that the approach for solving also use tensors. Moreover, the runtime blows up when I try to solve for thousands of ob values, even when I pass tensor arguments into fsolve. Is there a way to run all of these individual fsolves in parallel using tensors? And is there any way to implement such solvers using torch.optim?
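As a minimal sketch of the tensor-based direction being asked about (my own illustration, not a tested solver): Newton's method vectorizes naturally over a whole tensor of ob values, with torch.autograd supplying the elementwise derivative. The constants are copied from the question; the initial guess and iteration count are assumptions.
import math
import torch

d, r, t = 0.01, 0.02, 1.0
v_base, v_skew1, v_skew2 = 0.19, -0.0035, -0.0021
normal = torch.distributions.Normal(0.0, 1.0)

def f(s, ob):
    # same pricing function as above, written with torch ops so it broadcasts
    v = v_base + v_skew1 * (s - 1) + v_skew2 * (s - 1.1)
    k1 = (torch.log(1 / s) + (r - d + 0.5 * v**2) * t) / (v * math.sqrt(t))
    k2 = k1 - v * math.sqrt(t)
    return math.exp(-d * t) * normal.cdf(k1) - s * math.exp(-r * t) * normal.cdf(k2) - ob

def solve_batched(ob, s0=1.1, iters=50):
    s = torch.full_like(ob, s0, requires_grad=True)
    for _ in range(iters):
        residual = f(s, ob)
        # each residual[i] depends only on s[i], so the gradient of the
        # sum is exactly the elementwise derivative d(residual)/d(s)
        (deriv,) = torch.autograd.grad(residual.sum(), s)
        with torch.no_grad():
            s -= residual / deriv  # one Newton step for every element at once
    return s.detach()

obs = torch.full((1000,), 0.015)
print(solve_batched(obs)[:3])
As with fsolve, whether this converges depends on the initial guess.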
I have the following code where I need to find the roots of an expression. The expression needs to be solved for omega.
import numpy as np
from sympy import Symbol,lambdify
import scipy
from mpmath import findroot, exp
eta = 1.5
tau = 5 /1000
omega = Symbol("omega")
Tf = exp(1j * omega * tau)
symFun = 1 + Tf * (eta - 1)
denom = lambdify((omega), symFun, "scipy")
Tf_high = 1j * 2 * np.pi * 1000 * tau
sol = findroot(denom, [0+1j,Tf_high])
The program gives an error that I am not able to correct: TypeError: cannot create mpf from 0.005*I*omega
Edit 1 - I have tried two different approaches based on the comments. The first approach uses the sympy.solveset module, the second fsolve from scipy.optimize. Neither gives the proper output.
For clarity, I am copying the relevant code for each approach along with the output I am getting.
Approach 1 - SymPy
import numpy as np
from sympy import Symbol, exp
from sympy.solvers.solveset import solveset, solveset_real, solveset_complex
import matplotlib.pyplot as plt

def denominator(eta, Tf):
    return 1 + Tf * (eta - 1)

if __name__ == "__main__":
    eta = 1.5
    tau = 5 / 1000
    omega = Symbol("omega")
    n = 1
    Tf = exp(1j * omega * tau)
    denom = 1 + Tf * (eta - 1)
    symFun = denominator(eta, Tf)
    sol = solveset_real(denom, omega)
    sol1 = solveset_complex(denom, omega)
    print('In real domain', sol)
    print('In imaginary domain', sol1)
Output:
In real domain EmptySet
In imaginary domain ImageSet(Lambda(_n, -200.0*I*(I*(2*_n*pi + pi) + 0.693147180559945)), Integers)
Approach 2 - SciPy
import numpy as np
from scipy.optimize import fsolve, root

def denominator(eta, tau, n, omega):
    Tf = n * np.exp(1j * omega * tau)
    return 1 + Tf * (eta - 1)

if __name__ == "__main__":
    eta = 1.5
    tau = 5 / 1000
    n = 1
    func = lambda omega: 1 + (eta - 1) * (n * np.exp(1j * omega * tau))
    sol = fsolve(func, 10)
    print(sol)
Output:
Cannot cast array data from dtype('complex128') to dtype('float64') according to the rule 'safe'
How do I correct the program? Please suggest an approach that will give proper results.
SymPy is a computer algebra system and solves the equation like a human would. SciPy uses numeric optimization. If you want ALL the solutions, I suggest going with SymPy. If you want one solution, I suggest going with SciPy.
Approach 1 - SymPy
The solutions SymPy gives will be more "interactive" for you as the developer. But it will be perfectly correct almost all the time.
from sympy import *
eta = S(3)/2
tau = S(5) / 1000
omega = Symbol("omega")
n = 1
Tf = exp(I * omega * tau)
denom = 1 + Tf * (eta - 1)
sol = solveset(denom, omega)
print(sol)
Giving
ImageSet(Lambda(_n, -200*I*(I*(2*_n*pi + pi) + log(2))), Integers)
This is the true mathematical solution.
Notice how I wrapped the integers in S before dividing. When you divide plain Python integers, you lose exactness because the result is a floating-point number; converting to SymPy objects keeps full accuracy.
Since we know we have an ImageSet over integers, we can start listing a few solutions:
for n in range(-3, 3):
    print(complex(sol.lamda(n)))
Which gives
(-3141.5926535897934-138.62943611198907j)
(-1884.9555921538758-138.62943611198907j)
(-628.3185307179587-138.62943611198907j)
(628.3185307179587-138.62943611198907j)
(1884.9555921538758-138.62943611198907j)
(3141.5926535897934-138.62943611198907j)
With some experience, you could automate this so that the whole program returns exactly one solution, regardless of the type of set returned by solveset.
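For instance, a rough way to always extract a single number (assuming the ImageSet shape seen above):
from sympy import ImageSet

if isinstance(sol, ImageSet):
    one_solution = complex(sol.lamda(0))  # the n = 0 member of the family
else:
    one_solution = complex(next(iter(sol)))  # arbitrary element of a finite set
print(one_solution)  # (628.3185307179587-138.62943611198907j)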
Approach 2 - SciPy
The solutions SciPy gives will be more automated. You will never have a perfect answer, and different choices of initial conditions may not always converge.
import numpy as np
from scipy.optimize import root
from typing import Tuple

eta = 1.5
tau = 5 / 1000
n = 1

def f(omega: Tuple):
    omega_real, omega_imag = omega
    omega: complex = omega_real + omega_imag * 1j
    result: complex = 1 + (eta - 1) * (n * np.exp(1j * omega * tau))
    return result.real, result.imag

sol = root(f, [100, 100])
print(sol)
print(sol.x[0] + sol.x[1] * 1j)
Which gives
fjac: array([[ 0.00932264, 0.99995654],
[-0.99995654, 0.00932264]])
fun: array([-2.13074003e-12, -8.86389816e-12])
message: 'The solution converged.'
nfev: 30
qtf: array([ 2.96274855e-09, -6.82780898e-10])
r: array([-0.00520194, -0.00085702, -0.00479143])
status: 1
success: True
x: array([ 628.31853072, -138.62943611])
(628.3185307197314-138.62943611241522j)
Looks like that's one of the solutions SymPy found. So we must be doing something right. Note that there are many initial values that don't converge, for example, sol = root(f, [1, 1]).
How do I simplify a*sin(wt) + b*cos(wt) into c*sin(wt+theta) using SymPy? For example:
f = sin(t) + 2*cos(t) = 2.236*sin(t + 1.107)
I tried the following:
from sympy import *

t = symbols('t')
f = sin(t) + 2*cos(t)
trigsimp(f)     # returns sin(t) + 2*cos(t)
simplify(f)     # returns sin(t) + 2*cos(t)
f.rewrite(sin)  # returns sin(t) + 2*sin(t + pi/2)
PS: I don't have direct access to a, b and w, only to f.
Any suggestions?
The general answer can be achieved by noting that you want to have
a * sin(t) + b * cos(t) = A * (cos(c)*sin(t) + sin(c)*cos(t))
This leads to the simultaneous equations a = A * cos(c) and b = A * sin(c).
Dividing the second equation by the first, we can solve for c. Substituting that solution into the first equation, you can solve for A.
I followed the same pattern, but solved for a cos form instead. If you want it in terms of sin, you can use Rodrigo's formula.
The following code should be able to take any linear combination of the form x * sin(t - w) or y * cos(t - z); there can be multiple sin and cos terms.
from sympy import *

t = symbols('t', real=True)
expr = sin(t) + 2*cos(t)  # unknown

# collect the sin(t) and cos(t) coefficients after expanding any shifted terms
d = collect(expr.expand(trig=True), [sin(t), cos(t)], evaluate=False)
a = d[sin(t)]
b = d[cos(t)]

# for amplitude*cos(t - phase): a = amplitude*sin(phase), b = amplitude*cos(phase)
cos_phase = atan(a / b)
amplitude = a / sin(cos_phase)
print(amplitude.evalf() * cos(t - cos_phase.evalf()))
Which gives
2.23606797749979*cos(t - 0.463647609000806)
This seems to be a satisfactory match after plotting both graphs.
You could even have something like
expr = 2*sin(t - 3) + cos(t) - 3*cos(t - 2)
and it should work fine.
a * sin(wt) + b * cos(wt) = sqrt(a**2 + b**2) * sin(wt + acos(a / sqrt(a**2 + b**2)))
While the amplitude is the radical sqrt(a**2 + b**2), the phase is given by the arccosine of the ratio a / sqrt(a**2 + b**2), which may not be expressible in terms of arithmetic operations and radicals. Hence, you may be asking SymPy to do the impossible. Better to use floating-point values, but then you do not need SymPy for that.
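If you only need numbers, a few lines of plain Python implement that identity. I use atan2 instead of acos so the phase comes out right for every sign of a and b (the acos form above assumes b >= 0):
import math

def amplitude_phase(a, b):
    """Return (c, theta) with a*sin(w*t) + b*cos(w*t) == c*sin(w*t + theta)."""
    c = math.hypot(a, b)      # sqrt(a**2 + b**2)
    theta = math.atan2(b, a)  # phase, valid in every quadrant
    return c, theta

print(amplitude_phase(1, 2))  # (2.23606797749979, 1.1071487177940904)
The output matches the 2.236*sin(t + 1.107) from the question.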
I can implement the error function, erf, myself, but I'd prefer not to. Is there a python package with no external dependencies that contains an implementation of this function? I have found this but this seems to be part of some much larger package (and it's not even clear which one!).
Since version 2.7, the standard math module contains an erf function. This should be the easiest way.
http://docs.python.org/2/library/math.html#math.erf
I recommend SciPy for numerical functions in Python, but if you want something with no dependencies, here is a function whose error is less than 1.5 * 10**-7 for all inputs:
import math

def erf(x):
    # save the sign of x
    sign = 1 if x >= 0 else -1
    x = abs(x)

    # constants
    a1 =  0.254829592
    a2 = -0.284496736
    a3 =  1.421413741
    a4 = -1.453152027
    a5 =  1.061405429
    p  =  0.3275911

    # A&S formula 7.1.26
    t = 1.0 / (1.0 + p*x)
    y = 1.0 - (((((a5*t + a4)*t) + a3)*t + a2)*t + a1)*t*math.exp(-x*x)
    return sign*y  # erf(-x) = -erf(x)
The algorithm comes from Handbook of Mathematical Functions, formula 7.1.26.
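A quick sanity check I added, comparing against the standard library (math.erf exists since Python 2.7):
print(erf(1.0))       # about 0.8427008, within 1.5e-7 of the true value
print(math.erf(1.0))  # 0.8427007929497149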
I would recommend you download numpy (to have efficient matrices in Python) and scipy (a Matlab toolbox substitute, which uses numpy). The erf function lies in scipy:
>>> from scipy.special import erf
>>> help(erf)
You can also use the erf function defined in pylab, but this is more intended for plotting the results of the things you compute with numpy and scipy. If you want an all-in-one installation of this software, you can use the Enthought Python Distribution directly.
A pure Python implementation can be found in the mpmath module (http://code.google.com/p/mpmath/).
From the docstring:
>>> from mpmath import *
>>> mp.dps = 15
>>> print erf(0)
0.0
>>> print erf(1)
0.842700792949715
>>> print erf(-1)
-0.842700792949715
>>> print erf(inf)
1.0
>>> print erf(-inf)
-1.0
For large real x, \mathrm{erf}(x) approaches 1 very
rapidly::
>>> print erf(3)
0.999977909503001
>>> print erf(5)
0.999999999998463
The error function is an odd function::
>>> nprint(chop(taylor(erf, 0, 5)))
[0.0, 1.12838, 0.0, -0.376126, 0.0, 0.112838]
:func:erf implements arbitrary-precision evaluation and
supports complex numbers::
>>> mp.dps = 50
>>> print erf(0.5)
0.52049987781304653768274665389196452873645157575796
>>> mp.dps = 25
>>> print erf(1+j)
(1.316151281697947644880271 + 0.1904534692378346862841089j)
Related functions
See also :func:erfc, which is more accurate for large x,
and :func:erfi which gives the antiderivative of
\exp(t^2).
The Fresnel integrals :func:fresnels and :func:fresnelc
are also related to the error function.
To answer my own question, I have ended up using the following code, adapted from a Java version I found elsewhere on the web:
import math

# from: http://www.cs.princeton.edu/introcs/21function/ErrorFunction.java.html
# Implements the Gauss error function.
#   erf(z) = 2 / sqrt(pi) * integral(exp(-t*t), t = 0..z)
#
# fractional error in math formula less than 1.2 * 10 ^ -7.
# although subject to catastrophic cancellation when z is very close to 0
# from Chebyshev fitting formula for erf(z) from Numerical Recipes, 6.2
def erf(z):
    t = 1.0 / (1.0 + 0.5 * abs(z))
    # use Horner's method
    ans = 1 - t * math.exp(-z*z - 1.26551223 +
                  t * (1.00002368 +
                  t * (0.37409196 +
                  t * (0.09678418 +
                  t * (-0.18628806 +
                  t * (0.27886807 +
                  t * (-1.13520398 +
                  t * (1.48851587 +
                  t * (-0.82215223 +
                  t * 0.17087277)))))))))
    if z >= 0.0:
        return ans
    else:
        return -ans
I have a function which does 10^5 erf calls. On my machine (three-run averages, code taken from the above posters):
scipy.special.erf times at 6.1 s,
the Handbook of Mathematical Functions erf takes 8.3 s,
the Numerical Recipes 6.2 erf takes 9.5 s.
One note for those aiming for higher performance: vectorize, if possible.
import numpy as np
from scipy.special import erf

def vectorized(n):
    x = np.random.randn(n)
    return erf(x)

def loopstyle(n):
    x = np.random.randn(n)
    return [erf(v) for v in x]

%timeit vectorized(10**6)
%timeit loopstyle(10**6)
gives results
# vectorized
10 loops, best of 3: 108 ms per loop
# loops
1 loops, best of 3: 2.34 s per loop
SciPy has an implementation of the erf function; see scipy.special.erf.
From the comments in CPython's implementation of math.erf, it uses up to 50 terms in its approximations:
Implementations of the error function erf(x) and the complementary error
function erfc(x).
Method: we use a series approximation for erf for small x, and a continued
fraction approximation for erfc(x) for larger x;
combined with the relations erf(-x) = -erf(x) and erfc(x) = 1.0 - erf(x),
this gives us erf(x) and erfc(x) for all x.
The series expansion used is:
erf(x) = x*exp(-x*x)/sqrt(pi) * [
2/1 + 4/3 x**2 + 8/15 x**4 + 16/105 x**6 + ...]
The coefficient of x**(2k-2) here is 4**k*factorial(k)/factorial(2*k).
This series converges well for smallish x, but slowly for larger x.
The continued fraction expansion used is:
erfc(x) = x*exp(-x*x)/sqrt(pi) * [1/(0.5 + x**2 -) 0.5/(2.5 + x**2 - )
3.0/(4.5 + x**2 - ) 7.5/(6.5 + x**2 - ) ...]
after the first term, the general term has the form:
k*(k-0.5)/(2*k+0.5 + x**2 - ...).
This expansion converges fast for larger x, but convergence becomes
infinitely slow as x approaches 0.0. The (somewhat naive) continued
fraction evaluation algorithm used below also risks overflow for large x;
but for large x, erfc(x) == 0.0 to within machine precision. (For
example, erfc(30.0) is approximately 2.56e-393).
Parameters: use series expansion for abs(x) < ERF_SERIES_CUTOFF and
continued fraction expansion for ERF_SERIES_CUTOFF <= abs(x) <
ERFC_CONTFRAC_CUTOFF. ERF_SERIES_TERMS and ERFC_CONTFRAC_TERMS are the
numbers of terms to use for the relevant expansions.
#define ERF_SERIES_CUTOFF 1.5
#define ERF_SERIES_TERMS 25
#define ERFC_CONTFRAC_CUTOFF 30.0
#define ERFC_CONTFRAC_TERMS 50
Error function, via power series.
Given a finite float x, return an approximation to erf(x).
Converges reasonably fast for small x.
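Here is a rough Python transcription of that series (my own sketch mirroring the C logic; each term is the previous one times x**2 / (k + 0.5), which the loop evaluates Horner-style from the last term inward):
import math

ERF_SERIES_TERMS = 25

def erf_series(x):
    # power series for erf(x); CPython uses it for abs(x) < 1.5
    x2 = x * x
    acc = 0.0
    fk = ERF_SERIES_TERMS + 0.5
    for _ in range(ERF_SERIES_TERMS):
        acc = 2.0 + x2 * acc / fk  # builds 2/1 + 4/3 x**2 + 8/15 x**4 + ...
        fk -= 1.0
    return acc * x * math.exp(-x2) / math.sqrt(math.pi)

print(erf_series(1.0))  # about 0.8427007929497149, matching math.erf(1.0)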