I want to do something like h = f(g(x)) and be able to differentiate h, like h.diff(x). For a single function like h = cos(x) this is in fact possible, and the documentation makes it clear.
But for function compositions it is not so clear. If you have done this, kindly show me an example or link me to the relevant documentation.
(If SymPy can't do this, do you know of any other package that does, even if it is not Python?)
Thank you.
It seems that function composition works as you would expect in SymPy:
import sympy
h = sympy.cos('x')
g = sympy.sin(h)
g
Out[245]: sin(cos(x))
Or if you prefer:
from sympy.abc import x, y
g = sympy.sin(y)
f = g.subs(y, h)
Then you can just call diff to get your derivative:
f.diff()
Out[246]: -sin(x)*cos(cos(x))
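If you want to keep f and g abstract rather than concrete, SymPy's undefined functions also obey the chain rule under diff. A minimal sketch (f and g here are arbitrary undefined functions invented for illustration, not objects from the snippets above):
import sympy

x = sympy.Symbol('x')
f = sympy.Function('f')  # undefined (abstract) function
g = sympy.Function('g')

h = f(g(x))        # symbolic composition h = f(g(x))
print(h.diff(x))   # the chain rule, with the derivative of f left unevaluated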
It's named compose, although I can't find it in the docs.
from sympy import Symbol, compose
x = Symbol('x')
f = x**2 + 2
g = x**2 + 1
compose(f,g)
Out: x**4 + 2*x**2 + 3
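Note that compose does polynomial composition, so for arbitrary (non-polynomial) expressions the subs approach from the previous answer is the general tool. A quick check that the two agree for the f and g defined just above:
f.subs(x, g).expand()   # x**4 + 2*x**2 + 3, matching compose(f, g)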
I'm new to SymPy and I'm trying to use it to get the values of higher-order Greeks of options (basically higher-order derivatives). My goal is to do a Taylor series expansion. The function in question is the first derivative.
f(x) = N(d1)
N(d1) is the P(X <= d1) of a standard normal distribution. d1 is in turn another function of x (x in this case is the price of the stock, for anybody who's interested).
d1 = (np.log(x/100) + (0.01 + 0.5*0.11**2)*0.5)/(0.11*np.sqrt(0.5))
As you can see, d1 is a function of only x. This is what I have tried so far.
import sympy as sp
import numpy as np   # np.sqrt is used in the definition of u below
from sympy.stats import Normal, P
x = sp.symbols('x')
u = (sp.log(x/100) + (0.01 + 0.5*0.11**2)*0.5)/(0.11*np.sqrt(0.5))
N = Normal('N',0,1)
f = sp.simplify(P(N <= u))
print(f.evalf(subs={x:100})) # This should be 0.5155
f1 = sp.simplify(sp.diff(f,x))
f1.evalf(subs={x:100}) # This should also return a float value
The last line of code, however, returns an expression, not a float value as I expected (as in the case of f). I feel like I'm making a very simple mistake, but I can't figure out why. I'd appreciate any help.
Thanks.
If you define x with positive=True, it looks like you get almost the expected result. The positivity assumption is implied by the log in the definition of u (assuming u is real, which is in turn implied by the definition of f). Also, using f1.subs({x:100}) in the version without the positive assumption shows that the trouble is unevaluated polar_lift(0) terms:
import sympy as sp
from sympy.stats import Normal, P
x = sp.symbols('x', positive=True)
u = (sp.log(x/100) + (0.01 + 0.5*0.11**2)*0.5)/(0.11*sp.sqrt(0.5)) # changed np to sp
N = Normal('N',0,1)
f = sp.simplify(P(N <= u))
print(f.evalf(subs={x:100})) # 0.541087287864516
f1 = sp.simplify(sp.diff(f,x))
print(f1.evalf(subs={x:100})) # 0.0510177033783834
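Since the stated goal was a Taylor series expansion, note that once the differentiation works you can let SymPy build the expansion directly. A small sketch reusing the f from the snippet above (the expansion point 100 and order 3 are arbitrary choices here):
taylor = f.series(x, 100, 3)   # expand f around x = 100, dropping terms of order (x - 100)**3
print(taylor)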
Apologies if this has been asked before; I'm a little stuck.
If I had two variables
a = 3x + 4y
b = 2x + 2y
how could I make it so that a + b gives 5x + 6y? The way I've currently been doing it is with numpy and an imaginary variable; however, that doesn't extend to more than one variable.
My current one variable code looks like this:
from numpy import *
a = 1+3j
b = 2+7j
Then I can just use the real and imag functions to get the appropriate coefficients.
Thanks
You can use SymPy. Define x and y as symbols, build a and b from them, and sums of such expressions collect like terms automatically:
from sympy import symbols

x, y = symbols('x y')
a = 3*x + 4*y
b = 2*x + 2*y
a + b   # 5*x + 6*y
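To recover the coefficients themselves (what real and imag were doing in the one-variable numpy trick), you can ask the expression directly. A small sketch using the a and b defined above:
total = a + b      # 5*x + 6*y
total.coeff(x)     # 5
total.coeff(y)     # 6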
I need to define a function that checks whether the input function is continuous at a point with SymPy.
I searched the SymPy documentation with the keyword "continuity" and there is no existing function for that.
I think maybe I should consider doing it with limits, but I'm not sure how.
from sympy import sympify, Symbol, SympifyError

def check_continuity(f, var, a):
    try:
        f = sympify(f)
    except SympifyError:
        return "Invalid input"
    else:
        x1 = Symbol(var, positive=True)
        x2 = Symbol(var, negative=True)
        # I don't know what to do after this
I would suggest you use the function continuous_domain. This is defined in the calculus.util module.
Example usage:
>>> from sympy import Symbol, S, sin
>>> from sympy.calculus.util import continuous_domain
>>> x = Symbol("x")
>>> f = sin(x)/x
>>> continuous_domain(f, x, S.Reals)
Union(Interval.open(-oo, 0), Interval.open(0, oo))
This is documented in the SymPy docs under sympy.calculus.util.continuous_domain, where you can also view the source code.
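Since the original goal was a check at a single point, one way to use this is to test whether the point of interest lies in the continuous domain. A small sketch building on the snippet above:
>>> domain = continuous_domain(f, x, S.Reals)
>>> domain.contains(0)
False
>>> domain.contains(1)
True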
Yes, you need to use limits.
The formal definition of continuity at a point has three conditions that must be met.
A function f(x) is continuous at a point x = c if
lim x→c f(x) exists
f(c) exists (that is, c is in the domain of f)
lim x→c f(x) = f(c)
SymPy can compute symbolic limits with the limit function.
>>> limit(sin(x)/x, x, 0)
1
See: https://docs.sympy.org/latest/tutorial/calculus.html#limits
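Putting the three conditions together, a point check could look like the following. This is a minimal sketch (check_continuity is a name invented here, not a SymPy function): it compares the two one-sided limits against each other and against the function's value at c:
from sympy import Symbol, limit, sin, oo, zoo, nan

x = Symbol('x')

def check_continuity(f, x, c):
    left = limit(f, x, c, dir='-')        # left-hand limit
    right = limit(f, x, c, dir='+')       # right-hand limit
    if left != right or left in (oo, -oo, zoo, nan):
        return False                      # condition 1 fails: no finite two-sided limit
    value = f.subs(x, c)
    if value in (oo, -oo, zoo, nan):
        return False                      # condition 2 fails: f(c) is undefined
    return left == value                  # condition 3: the limit equals f(c)

print(check_continuity(sin(x)/x, x, 0))   # False: the limit is 1 but f(0) is undefined
print(check_continuity(sin(x), x, 0))     # True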
Here is a simpler way to check whether a function is continuous at a specific value:
import sympy as sp

x = sp.Symbol("x")
f = 1/x
value = 0

def checkifcontinus(func, x, symbol):
    return sp.limit(func, symbol, x).is_real

print(checkifcontinus(f, value, x))
This code's output will be False.
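One caveat with this shortcut: it only tests that a finite limit exists (condition 1 from the previous answer), so it reports True for removable discontinuities where the function value itself is undefined. For example, reusing the definitions above:
f2 = sp.sin(x)/x
print(checkifcontinus(f2, 0, x))   # True, even though f2 is undefined at x = 0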
I am using SymPy to solve a very simple equation symbolically, but the solution I get for the variable is an empty list! Here is the code:
from sympy import *
x = Symbol('x')
l_x = Symbol('l_x')
x_min = -6
x_max = 6
precision_x = 10**8
sol = solve(((x_max-x_min)/((2**l_x)-1))/precision_x, l_x)
print(sol)   # []
I tried some other simple equations, such as:
solve(x**2 - 4, x)
and the latter works perfectly; I just do not understand why the former does not!
The expression given to solve has an assumed rhs of 0, which no value of l_x can satisfy. Try something like this instead:
from sympy import *
q, r, s, t = symbols("q r s t")
eq = (q-r)/(2**s-1)/t
solve(eq-1,s)
The output is:
[log((q - r + t)/t)/log(2)]
To explicitly create an equation object with a non-zero rhs, you can do something like:
solve(Eq(eq,1),s)
It is simple: your equation has no solution.
The equation is 12/((2**l_x)-1)/1e8 = 0, and no value of l_x satisfies it.
Plot y = 12/((2**x)-1)/1e8 (e.g. in WolframAlpha) and you will see that it approaches zero but never reaches it.
To compare, try solving e.g. 12/((2**l_x)-1)/1e8 = 1 instead:
>>> solve(((x_max-x_min)/((2**l_x)-1))/precision_x - 1, l_x)
[(-log(25000000) + log(25000003))/log(2)]
Works like a charm!
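If you want a numeric value rather than the symbolic one, evalf finishes the job (the [0] assumes the one-element list shown above):
>>> sol = solve(((x_max-x_min)/((2**l_x)-1))/precision_x - 1, l_x)
>>> sol[0].evalf()   # roughly 1.73e-7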
I have a function like:
np.sqrt(X**2 + Y**2) / np.sqrt(X**2 + Y**2 + d**2)
I wrote a program that calculates its integral as a discrete sum over a grid:
for i in range(num):      # loop over X
    print(i)
    Y = -distance
    for j in range(num):  # loop over Y
        f = np.sqrt(X**2 + Y**2) / np.sqrt(X**2 + Y**2 + d**2)
        Y = Y + delta
        sum += (f*(delta**2))/((2*distance)**2)
    X = X + delta
print(sum)
And it works fine for me, but it takes too long for more complex functions.
Is there a Python module for integrating this function over -2.0 < X, Y < 2.0 (or something similar)?
I guess you want to integrate fun for x between a and b and y between c and d. In that case, here is what you have to do:
import numpy as np
from scipy.integrate import dblquad

# Define 'd' to whatever value you need
d = 1.

# Function to integrate; note that dblquad passes y as the first argument
fun = lambda y, x: np.sqrt(x**2. + y**2.) / np.sqrt(x**2. + y**2. + d**2.)

# Limits of integration (the y limits get their own names so they don't overwrite d)
a, b = -2., 2.
ymin, ymax = -2., 2.
gfun = lambda x: ymin
hfun = lambda x: ymax

# Perform integration
result, err = dblquad(fun, a, b, gfun, hfun)
If you need more complex limits of integration, you just need to change gfun and hfun. If you are interested in more advanced features, take a look at the documentation of dblquad: http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.dblquad.html#scipy.integrate.dblquad
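If the default tolerances are too loose or too slow for your integrand, dblquad also accepts epsabs and epsrel keyword arguments. A brief sketch reusing the names defined above:
result, err = dblquad(fun, a, b, gfun, hfun, epsabs=1e-10, epsrel=1e-10)
print(result, err)   # tighter error tolerances, at the cost of more function evaluations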
There's a library for this: scipy.integrate.
This should be pretty straightforward to do. Since the integrand depends on both X and Y, the two-dimensional dblquad is the right routine rather than the one-dimensional quad:
import numpy as np
from scipy import integrate

d = 1.0
func = lambda y, x: np.sqrt(x**2 + y**2) / np.sqrt(x**2 + y**2 + d**2)
a, b = -2.0, 2.0
integrate.dblquad(func, a, b, lambda x: a, lambda x: b)
This should do it. I would consult the SciPy documentation for more info.
Edit: If there are issues, make sure you're using floats instead of ints.