Here's what I'm doing in a SymPy session:
from sympy import *
xi1,xi2,xi3 = symbols('xi_1,xi_2,xi_3')
N1 = 1-xi1-xi2-xi3
N2 = xi3
N3 = xi1
N4 = xi2
x1,x2,x3,x4 = symbols('x_1, x_2, x_3, x_4')
x = N1*x1+N2*x2+N3*x3+N4*x4
subdict = {x1:Matrix([0.025,1.0,0.0]), x2 : Matrix([0,1,0]), x3:Matrix([0, 0.975, 0]), x4:Matrix([0,0.975,0.025])}
test = x.subs(subdict)
test.subs({xi1:1, xi2:0, xi3:0})
To me this is simply multiplying some scalars by some vectors and then adding them up. SymPy disagrees, however, and throws an enormous traceback whose last line is:
TypeError: cannot add <class 'sympy.matrices.immutable.ImmutableDenseMatrix'> and <class 'sympy.core.numbers.Zero'>
Why is this a problem? Is there a workaround for what I am trying to do?
I suspect what is happening is that a scalar 0 is substituted before the matrix, which makes 0*matrix_symbol evaluate to the scalar Zero instead of a matrix of zeros. Terms that end up being matrices cannot be added to that scalar zero, hence the error. My attempts at using the simultaneous flag or xreplace instead of subs give the same result (on sympy.live.org). I then tried doing the substitutions in reverse order, passing them as a list with the matrices first; that didn't work either. It looks like subs assumes that 0*foo is 0. If there isn't already an existing issue for this at the SymPy issue tracker, one should be raised.
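The failing addition can be reproduced in isolation (a minimal sketch, assuming a SymPy version with the same behavior as in the question):
from sympy import Matrix, S
Matrix([1, 2]) + S.Zero  # TypeError: cannot add Matrix and Zero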
The workaround is to do the scalar substitutions first, allowing the zero terms to disappear, and then do a second subs with the matrices. This requires two calls to subs.
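With the names from the question, the two-stage version looks like this (since xi1 = 1 makes N3 the only surviving shape function, the result should be x3's vector):
test = x.subs({xi1: 1, xi2: 0, xi3: 0})  # scalar substitutions first: the 0*x_i terms vanish
test.subs(subdict)  # now substituting the matrices is safe -> Matrix([0, 0.975, 0])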
A truly hackish workaround for doing the substitution with 0 in a single pass is this:
from sympy import symbols, Mul, randMatrix
x, y = symbols('x y')

def remul(m):
    rv = 1
    for i in m.args:
        rv *= i
    return rv

expr = x*y
mat = expr.subs(x, randMatrix(2))  # replace x with a matrix
expr = mat.replace(  # replace y with scalar 0
    lambda m: m.is_Mul,
    lambda m: remul(Mul(*[i.subs(y, 0) for i in m.args], evaluate=False)))
I'm new to SymPy and I'm trying to use it to get the values of higher-order Greeks of options (basically higher-order derivatives). My goal is to do a Taylor series expansion. The function in question is the first derivative.
f(x) = N(d1)
N(d1) is the P(X <= d1) of a standard normal distribution. d1 in turn is another function of x (x in this case is the price of the stock to anybody who's interested).
d1 = (np.log(x/100) + (0.01 + 0.5*0.11**2)*0.5)/(0.11*np.sqrt(0.5))
As you can see, d1 is a function of only x. This is what I have tried so far.
import sympy as sp
import numpy as np
from math import pi
from sympy.stats import Normal,P
x = sp.symbols('x')
u = (sp.log(x/100) + (0.01 + 0.5*0.11**2)*0.5)/(0.11*np.sqrt(0.5))
N = Normal('N',0,1)
f = sp.simplify(P(N <= u))
print(f.evalf(subs={x:100})) # This should be 0.5155
f1 = sp.simplify(sp.diff(f,x))
f1.evalf(subs={x:100}) # This should also return a float value
The last line of code, however, returns an expression rather than a float value, unlike in the case of f. I feel like I'm making a very simple mistake, but I can't figure out why. I'd appreciate any help.
Thanks.
If you define x with positive=True, you get almost the expected result. (The assumption is justified: the log in the definition of u implies x > 0 whenever u is real, and the definition of f requires u to be real.) Also, evaluating f1.subs({x:100}) in the version without the positive assumption shows that the trouble comes from unevaluated polar_lift(0) terms:
import sympy as sp
from sympy.stats import Normal, P
x = sp.symbols('x', positive=True)
u = (sp.log(x/100) + (0.01 + 0.5*0.11**2)*0.5)/(0.11*sp.sqrt(0.5)) # changed np to sp
N = Normal('N',0,1)
f = sp.simplify(P(N <= u))
print(f.evalf(subs={x:100})) # 0.541087287864516
f1 = sp.simplify(sp.diff(f,x))
print(f1.evalf(subs={x:100})) # 0.0510177033783834
I am using SymPy to solve a very simple equation symbolically, but the solution I get for the variable is an empty list! Here is the code:
from sympy import *
x = Symbol('x')
l_x = Symbol('l_x')
x_min = -6
x_max = 6
precision_x = 10**8
solve(((x_max-x_min)/((2**l_x)-1))/precision_x, l_x)
print(l_x)
I tried some other simple equations, such as:
solve(x**2 - 4, x)
and the latter works perfectly; I just do not understand why the former does not!
The expression given to solve has an assumed rhs of 0, which no value of l_x can satisfy. Try something like this instead:
from sympy import *
q, r, s, t = symbols("q r s t")
eq = (q-r)/(2**s-1)/t
solve(eq-1,s)
The output is:
[log((q - r + t)/t)/log(2)]
To explicitly create an equation object with a non-zero rhs, you can do something like:
solve(Eq(eq,1),s)
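As a quick sanity check, substituting the question's numbers (q - r = 12, t = 10**8) into the general solution should reproduce the answer given further below:
solve(Eq(eq, 1), s)[0].subs({q: 6, r: -6, t: 10**8})  # -> log(25000003/25000000)/log(2)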
It is simple: your equation has no solution.
The equation is 12/((2**l_x)-1)/1e8 = 0, and no value of l_x satisfies it.
To see why, plot y = 12/((2**x)-1)/1e8 (for example on WolframAlpha): the curve approaches zero as x grows but never reaches it.
To compare, try solving e.g. 12/((2**l_x)-1)/1e8 = 1 instead:
>>> solve(((x_max-x_min)/((2**l_x)-1))/precision_x - 1, l_x)
[(-log(25000000) + log(25000003))/log(2)]
Works like a charm!
I'm using scipy.integrate's odeint function to evaluate the time evolution of solutions to the equation
$$ \dot x = -\frac{f(x)}{g(x)}, $$
where $f$ and $g$ are both functions of $x$. $f,g$ are given by series of the form
$$ f(x) = x(1 + \sum_k b_k x^{k/2}) $$
$$ g(x) = 1 + \sum_k a_k (1 + k/2) x^{k/2}. $$
All positive initial values for $x$ should result in the solution blowing up in time, but they don't...well, not always.
The coefficients $a_n, b_n$ are long polynomials, where $b_n$ is dependent on $x$ in a certain way, and $a_n$ is dependent on several terms being held constant.
Depending on the way I compute $g(x)$, I get very different behavior.
The first way I tried is as follows. 'a' and 'b' are 1x8 and 1x9 numpy arrays. Note that in the function g(x, a), a is multiplied by gterms in line 3, and does not appear in line 2.
def g(x, a):
    gterms = [(0.5*k + 1.) * x**(0.5*k) for k in range(len(a))]
    return 1. + np.sum(a*gterms)
def rhs(u, t):
    x = u
    a, b = An(), Bn(x)  # An() and Bn(x) are functions that return an array of coefficients
    return -f(x, b)/g(x, a)

t = np.linspace(.,.,.)
solution = odeint(rhs, <some initial value>, t)
The second way was this:
def g(x, a):
    gterms = [(0.5*k + 1.) * a[k] * x**(0.5*k) for k in range(len(a))]
    return 1. + np.sum(gterms)
def rhs(u, t):
    x = u
    a, b = An(), Bn(x)  # An() and Bn(x) are functions that return an array of coefficients
    return -f(x, b)/g(x, a)

t = np.linspace(.,.,.)
solution = odeint(rhs, <some initial value>, t)
Note the difference: using the first method, I stuck the whole array 'a' into the sum in line 3, whereas using the second method, I stuck the values of 'a' into the list 'gterms' in line 2 instead.
The first method gives the expected behavior: solutions blow up for positive x. The second method does not. Instead, it gives a bifurcation at some x0 > 0 that acts as a source: initial conditions greater than x0 blow up as expected, but initial conditions less than x0 tend to 0 very slowly.
Something else of note: in the rhs function, if I change it from
def rhs(u, t):
    x = u
    ...
    return .
to
def rhs(u, t):
    x = u[0]
    ...
    return .
the exact same change occurs.
So my question is: what is the difference between the two different methods I used? I can't tell for the life of me what is actually going on here. Sorry for being so verbose.
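One thing that may be worth checking (a sketch of a possible culprit, assuming odeint passes the state to rhs as a length-1 array when it is not unpacked): np.sum(a*gterms) and the sum of the per-term products a[k]*gterms[k] are only equivalent when the entries of gterms are scalars. If x is a shape-(1,) array, gterms becomes a list of shape-(1,) arrays, and multiplying it by 'a' broadcasts to a 2-D array, silently summing the wrong thing:
import numpy as np
a = np.arange(1., 9.)        # stand-in coefficients, shape (8,)
x = np.array([2.0])          # what rhs sees with x = u instead of x = u[0]
gterms = [(0.5*k + 1.) * x**(0.5*k) for k in range(len(a))]  # list of shape-(1,) arrays
print(np.sum(a * np.asarray(gterms)))                     # (8,)*(8,1) broadcasts to (8,8)
print(np.sum([a[k] * gterms[k] for k in range(len(a))]))  # the intended sum of 8 terms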
How can I evaluate the constants C1 and C2 in a solution of a differential equation that SymPy gives me? The initial conditions are f(0)=0 and f(pi/2)=3.
>>> from sympy import *
>>> f = Function('f')
>>> x = Symbol('x')
>>> dsolve(f(x).diff(x,2)+f(x),f(x))
f(x) == C1*sin(x) + C2*cos(x)
I tried the ics argument, but it's not working. Example:
>>> dsolve(f(x).diff(x,2)+f(x),f(x), ics={f(0):0, f(pi/2):3})
f(x) == C1*sin(x) + C2*cos(x)
By the way: C2 = 0 and C1 = 3.
There's a pull request implementing initial/boundary conditions, which was merged and should be released in SymPy 1.2. Meanwhile, one can solve for constants like this:
import math
sol = dsolve(f(x).diff(x,2)+f(x), f(x)).rhs
constants = solve([sol.subs(x, 0), sol.subs(x, math.pi/2) - 3])
final_answer = sol.subs(constants)
The code returns final_answer as 3.0*sin(x).
Remarks
solve may return a list of solutions, in which case one would have to substitute constants[0], etc. To force it to return a list in any case (for consistency), use dict=True:
constants = solve([sol.subs(x,0), sol.subs(x, math.pi/2) - 3], dict=True)
final_answer = sol.subs(constants[0])
If the equation contains parameters, solve may or may not solve for the variables you want (C1 and C2). This can be ensured as follows:
constants = solve([sol.subs(x,0), sol.subs(x, math.pi/2) - 3], symbols('C1 C2'))
where again, dict=True would force the list format of the output.
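For reference, once SymPy 1.2 is released, the ics syntax from the question should work directly (a sketch, assuming the merged pull request behaves as advertised; pi here is SymPy's pi):
dsolve(f(x).diff(x,2) + f(x), f(x), ics={f(0): 0, f(pi/2): 3})  # -> Eq(f(x), 3*sin(x))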
I have to write a function, s(x) = x * sin(3/x), in Python that is capable of taking single values or vectors/arrays, but I'm having a little trouble handling the case when x is zero (or has an element that's zero). This is what I have so far:
from numpy import zeros, size, sin

def s(x):
    result = zeros(size(x))
    for a in range(0, size(x)):
        if x[a] == 0:
            result[a] = 0
        else:
            result[a] = float(x[a] * sin(3.0/x[a]))
    return result
Which...doesn't work for x = 0. And it's kinda messy. Even worse, I'm unable to use SymPy's integrate function on it, or use it in my own Simpson/trapezoidal rule code. Any ideas?
When I use integrate() on this function, I get the following error message: "Symbol" object does not support indexing.
This takes about 30 seconds per integrate call:
import sympy as sp
x = sp.Symbol('x')
int2 = sp.integrate(x*sp.sin(3./x), (x, 0.000001, 2)).evalf(8)
print(int2)
int1 = sp.integrate(x*sp.sin(3./x), (x, 0, 2)).evalf(8)
print(int1)
The results are:
1.0996940
-4.5*Si(zoo) + 8.1682775
Clearly you want to start the integration from a small positive number to avoid the problem at x = 0.
You can also assign x*sin(3./x) to a variable, e.g.:
s = x*sin(3./x)
int1 = sp.integrate(s, (x, 0.00001, 2))
My original answer using scipy to compute the integral:
import scipy.integrate
import math

def s(x):
    if abs(x) < 0.00001:
        return 0
    else:
        return x*math.sin(3.0/x)

s_exact = scipy.integrate.quad(s, 0, 2)
print(s_exact)
See the scipy docs for more integration options.
If you want to use SymPy's integrate, you need a symbolic function. A wrong value at a point doesn't really matter for integration (at least mathematically), so you shouldn't worry about it.
It seems there is a bug in SymPy that gives an answer in terms of zoo at 0, because it isn't using limit correctly. You'll need to compute the limits manually. For example, the integral from 0 to 1:
In [14]: res = integrate(x*sin(3/x), x)
In [15]: ans = limit(res, x, 1) - limit(res, x, 0)
In [16]: ans
Out[16]: -9⋅π/4 + 3⋅cos(3)/2 + sin(3)/2 + 9⋅Si(3)/2
In [17]: ans.evalf()
Out[17]: -0.164075835450162
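As a rough numerical cross-check (reusing scipy.integrate.quad from the earlier answer; quad may warn about slow convergence since the integrand oscillates near 0, and the guard at 0 mirrors the workaround above):
import math
import scipy.integrate
val, err = scipy.integrate.quad(lambda t: t*math.sin(3.0/t) if t > 0 else 0.0, 0, 1)
print(val)  # should land near -0.16408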