Writing a function for x * sin(3/x) in python

I have to write a function, s(x) = x * sin(3/x) in python that is capable of taking single values or vectors/arrays, but I'm having a little trouble handling the cases when x is zero (or has an element that's zero). This is what I have so far:
from numpy import zeros, size, sin

def s(x):
    result = zeros(size(x))
    for a in range(0, size(x)):
        if x[a] == 0:
            result[a] = 0
        else:
            result[a] = float(x[a] * sin(3.0/x[a]))
    return result
Which...doesn't work for x = 0. And it's kinda messy. Even worse, I'm unable to use sympy's integrate function on it, or use it in my own simpson/trapezoidal rule code. Any ideas?
When I use integrate() on this function, I get the following error message: "Symbol" object does not support indexing.
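For reference, here is a minimal vectorized sketch (assuming NumPy, not from the original thread) that accepts scalars as well as arrays; a boolean mask keeps the division away from zero entries, so no explicit loop is needed:
import numpy as np

def s(x):
    x = np.asarray(x, dtype=float)   # accepts scalars, lists, or arrays
    result = np.zeros_like(x)
    nz = x != 0                      # mask: only divide where it is safe
    result[nz] = x[nz] * np.sin(3.0 / x[nz])
    return result if result.ndim else float(result)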

This takes about 30 seconds per integrate call:
import sympy as sp
x = sp.Symbol('x')
int2 = sp.integrate(x*sp.sin(3./x),(x,0.000001,2)).evalf(8)
print int2
int1 = sp.integrate(x*sp.sin(3./x),(x,0,2)).evalf(8)
print int1
The results are:
1.0996940
-4.5*Si(zoo) + 8.1682775
Clearly you want to start the integration from a small positive number to avoid the problem at x = 0.
You can also assign x*sin(3./x) to a variable, e.g.:
s = x*sp.sin(3./x)
int1 = sp.integrate(s, (x, 0.00001, 2))
My original answer using scipy to compute the integral:
import scipy.integrate
import math

def s(x):
    if abs(x) < 0.00001:
        return 0
    else:
        return x*math.sin(3.0/x)

s_exact = scipy.integrate.quad(s, 0, 2)
print s_exact
See the scipy docs for more integration options.
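If quad struggles with the oscillations of sin(3/x) near zero, one option (a sketch; 200 is an arbitrary choice) is to raise its limit argument, the cap on the number of subintervals, which defaults to 50:
# reuses s() from the snippet above
s_osc = scipy.integrate.quad(s, 0, 2, limit=200)
print s_osc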

If you want to use SymPy's integrate, you need a symbolic function. A wrong value at a point doesn't really matter for integration (at least mathematically), so you shouldn't worry about it.
It seems there is a bug in SymPy that gives an answer in terms of zoo at 0, because it isn't using limit correctly. You'll need to compute the limits manually. For example, the integral from 0 to 1:
In [14]: res = integrate(x*sin(3/x), x)
In [15]: ans = limit(res, x, 1) - limit(res, x, 0)
In [16]: ans
Out[16]: -9*pi/4 + 3*cos(3)/2 + sin(3)/2 + 9*Si(3)/2
In [17]: ans.evalf()
Out[17]: -0.164075835450162
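As a numerical sanity check (a sketch assuming SciPy; quad may warn about the oscillatory endpoint), quadrature over the same interval lands on the same value:
import scipy.integrate
import math
# guard x = 0 as before, then integrate over [0, 1]
val, err = scipy.integrate.quad(lambda t: t*math.sin(3.0/t) if t else 0.0, 0, 1)
print val # approximately -0.16408, matching ans.evalf()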


evalf and subs in sympy on single variable expression returns expression instead of expected float value

I'm new to sympy and I'm trying to use it to get the values of higher order Greeks of options (basically higher order derivatives). My goal is to do a Taylor series expansion. The function in question is the first derivative.
f(x) = N(d1)
N(d1) is the P(X <= d1) of a standard normal distribution. d1 in turn is another function of x (x in this case is the price of the stock to anybody who's interested).
d1 = (np.log(x/100) + (0.01 + 0.5*0.11**2)*0.5)/(0.11*np.sqrt(0.5))
As you can see, d1 is a function of only x. This is what I have tried so far.
import numpy as np
import sympy as sp
from math import pi
from sympy.stats import Normal, P
x = sp.symbols('x')
u = (sp.log(x/100) + (0.01 + 0.5*0.11**2)*0.5)/(0.11*np.sqrt(0.5))
N = Normal('N',0,1)
f = sp.simplify(P(N <= u))
print(f.evalf(subs={x:100})) # This should be 0.5155
f1 = sp.simplify(sp.diff(f,x))
f1.evalf(subs={x:100}) # This should also return a float value
The last line of code however returns an expression, not a float value as I expected like in the case with f. I feel like I'm making a very simple mistake but I can't find out why. I'd appreciate any help.
Thanks.
If you define x with positive=True (which is implied by the log in the definition of u, assuming u is real, which in turn is implied by the definition of f), you get almost the expected result. Using f1.subs({x:100}) in the version without the positive-x assumption also shows that the trouble comes from unevaluated polar_lift(0) terms:
import sympy as sp
from sympy.stats import Normal, P
x = sp.symbols('x', positive=True)
u = (sp.log(x/100) + (0.01 + 0.5*0.11**2)*0.5)/(0.11*sp.sqrt(0.5)) # changed np to sp
N = Normal('N',0,1)
f = sp.simplify(P(N <= u))
print(f.evalf(subs={x:100})) # 0.541087287864516
f1 = sp.simplify(sp.diff(f,x))
print(f1.evalf(subs={x:100})) # 0.0510177033783834

Sympy outputs a derivative with log(e)

I'm using Sympy to calculate derivatives and some other things. I tried to calculate the derivative of "e**x + x + 1", and it returns e**x*log(e) + 1 as the result, but as far as I know the correct result should be e**x + 1. What's going on here?
Full code:
from sympy import *
from sympy.parsing.sympy_parser import parse_expr
x = symbols("x")
_fOfX = "e**x + x + 1"
sympyFunction = parse_expr(_fOfX)
dSeconda = diff(sympyFunction,x,1)
print(dSeconda)
The answer correctly includes log(e) because you never specified what "e" is. It's just a letter like "a" or "b".
The Euler number 2.71828... is represented as E in SymPy. But usually, writing exp(x) is preferable because the notation is unambiguous, and also because SymPy is going to return exp(x) anyway. Examples:
>>> fx = E**x + x + 1
>>> diff(fx, x, 1)
exp(x) + 1
or with exp notation:
>>> fx = exp(x) + x + 1
>>> diff(fx, x, 1)
exp(x) + 1
Avoid creating expressions by parsing strings, unless you really need to and know why you need it.
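If you really do need to parse strings, you can tell the parser up front what "e" should mean through parse_expr's local_dict argument (a minimal sketch):
from sympy import E, symbols, diff
from sympy.parsing.sympy_parser import parse_expr

x = symbols("x")
# map the bare name "e" to SymPy's Euler constant before parsing
expr = parse_expr("e**x + x + 1", local_dict={"e": E})
print(diff(expr, x, 1)) # exp(x) + 1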

Integral function (x,y) in python

I have a function like:
(np.sqrt((X)**2 + (Y)**2))/(np.sqrt((X)**2 + (Y)**2 + d**2))
I wrote a program that approximates the integral with a simple double sum:
# num, delta, distance and d are assumed to be defined beforehand
sum = 0.
X = -distance
for i in range(num): # for X
    print i
    Y = -distance
    for j in range(num): # for Y
        f = (np.sqrt((X)**2 + (Y)**2))/(np.sqrt((X)**2 + (Y)**2 + d**2))
        Y = Y + delta
        sum += (f*(delta**2))/((2*distance)**2)
    X = X + delta
print sum
And it works fine for me, but it takes too long for more complex functions.
Is there a python module for integrating this function when -2.0 < X < 2.0 and -2.0 < Y < 2.0? (or something else)
I guess that you want to integrate fun for x between a and b and y between ymin and ymax. In this case what you have to do is:
import numpy as np
# Define 'd' to whatever value you need
d = 1.
# Function to integrate; dblquad passes the inner variable (y) first.
# The integrand is symmetric in x and y here, so the order doesn't change the result.
fun = lambda y, x: np.sqrt(x**2. + y**2.) / np.sqrt(x**2. + y**2. + d**2.)
# Limits of integration (don't reuse the name 'd' for a limit, or the
# lambda above would silently pick up the new value)
a, b = -2., 2.
ymin, ymax = -2., 2.
gfun = lambda x: ymin
hfun = lambda x: ymax
# Perform integration
from scipy.integrate import dblquad
val, err = dblquad(fun, a, b, gfun, hfun)
If you need more complex limits of integration you just need to change gfun and hfun. If you are interested in more advanced feature you can take a look at the documentation of dblquad: http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.dblquad.html#scipy.integrate.dblquad
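For instance (a hypothetical variant reusing fun, a and b from the snippet above), integrating over the disk x**2 + y**2 <= 4 only requires x-dependent limit functions:
# the y-limits now trace the lower and upper halves of the circle of radius 2
gfun = lambda x: -np.sqrt(4. - x**2.)
hfun = lambda x: np.sqrt(4. - x**2.)
disk_val, disk_err = dblquad(fun, a, b, gfun, hfun)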
There's a library for this, scipy.integrate.
This should be pretty straightforward to do:
# X and d are assumed to be defined beforehand; quad integrates over one
# variable at a time, so this integrates over y for a fixed X
func = lambda y: np.sqrt(X**2 + y**2)/np.sqrt(X**2 + y**2 + d**2)
a, b = -2, 2
from scipy import integrate
integrate.quad(func, a, b)
This should do it. I would consult the documentation for SciPy for more info.
Edit: If there are issues, make sure you're using floats instead of ints.
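The reason is Python 2's integer division, which truncates silently:
>>> 3/2 # two ints: the result is truncated to an int
1
>>> 3./2 # make at least one operand a float
1.5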

Best way to find roots of a multidimensional, scalar function with SciPy

Suppose I have a function whose range is a scalar but whose domain is a vector. For example:
def func(x):
    return x[0] + 1 + x[1]**2
What's a good way to find a root of this function? scipy.optimize.fsolve and scipy.optimize.root expect func to return a vector (rather than a scalar), and scipy.optimize.newton only takes scalar arguments. I can redefine func as
def func(x):
    return [x[0] + 1 + x[1]**2, 0]
Then root and fsolve can find a root, but the zeros in the Jacobian mean it won't always do a good job. For example:
fsolve(func, array([0,2]))
=> array([-5, 2])
It'll only vary the first parameter but not the second, meaning that it often finds a zero that's far away.
EDIT: it looks like the following redefinition of func works better:
def func(x):
    fx = x[0] + 1 + x[1]**2
    return [fx, fx]

fsolve(func, array([0,5]))
=> array([-16.27342781, 3.90812331])
So it's now willing to change both parameters. The code is still kind of ugly though.
Have you tried minimizing the absolute value of your function using fmin?
For example:
>>> import scipy.optimize as op
>>> import numpy as np
>>> def func(x):
...     return x[0] + 1 + x[1]**2
>>> func1 = lambda x: np.abs(func(x))
>>> tmp = op.fmin(func1, [10000., 10000.])
>>> func(tmp)
0.0
>>> print tmp
[-8346.12025122 91.35162971]
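A variation worth trying (my own suggestion, not part of the original answer): minimizing the square of the function instead of its absolute value gives an objective that is smooth at the root, which some minimizers handle better than the kink abs() creates:
>>> func2 = lambda x: func(x)**2
>>> tmp = op.fmin(func2, [10000., 10000.])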
Since -- for my problem -- I have a good initial guess and a non-crazy function, Newton's method works well. For a scalar, multidimensional function, the Newton update becomes:
x_{n+1} = x_n - f(x_n) * grad f(x_n) / ||grad f(x_n)||**2
Here's a rough code example:
from numpy import array
from numpy.linalg import norm

def func(x): # the function to find a root of
    return x[0] + 1 + x[1]**2

def dfunc(x): # the gradient of that function
    return array([1, 2*x[1]])

def newtRoot(x0, func, dfunc):
    x = array(x0)
    for n in xrange(100): # do at most 100 iterations
        f = func(x)
        df = dfunc(x)
        if abs(f) < 1e-6: # exit function if we're close enough
            break
        x = x - df*f/norm(df)**2 # update guess
    return x
In use:
newtRoot([0,2], func, dfunc)
=> array([-1.0052546 , 0.07248865])
func([-1.0052546 , 0.07248865])
=> 4.3788225025098715e-09
Not bad! Of course, this function is very rough, but you get the idea. It also won't work well for "tricky" functions or where you don't have a good starting guess. I think I'll use something like this but then fall back to fsolve or root if Newton's method doesn't converge.
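That fallback might look something like this (a sketch; robustRoot is a hypothetical name, and the [fx, fx] wrapper reuses the duplicated-residual trick from the question so fsolve sees a square system):
from scipy.optimize import fsolve

def robustRoot(x0, func, dfunc, tol=1e-6):
    x = newtRoot(x0, func, dfunc)
    if abs(func(x)) > tol:
        # Newton stalled; fall back to fsolve on the duplicated system
        x = fsolve(lambda v: [func(v), func(v)], x0)
    return x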

Imitating 'ppoints' R function in python

The R ppoints function is described as:
Ordinates for Probability Plotting
Description:
Generates the sequence of probability points ‘(1:m - a)/(m + (1-a)-a)’ where ‘m’ is either ‘n’, if ‘length(n)==1’, or ‘length(n)’.
Usage:
ppoints(n, a = ifelse(n <= 10, 3/8, 1/2))
...
I've been trying to replicate this function in python and I have a couple of doubts.
1- The first m in (1:m - a)/(m + (1-a)-a) is always an integer: int(n) (i.e. the integer part of n) if length(n)==1 and length(n) otherwise.
2- The second m in the same equation is NOT an integer if length(n)==1 (it assumes the real value of n) and it IS an integer (length(n)) otherwise.
3- The n in a = ifelse(n <= 10, 3/8, 1/2) is the real number n if length(n)==1 and the integer length(n) otherwise.
These points are not made clear at all in the description and I'd very much appreciate it if someone could confirm that this is the case.
Added:
Well, this was initially posted at https://stats.stackexchange.com/ because I was hoping to get the input of statisticians who work with the ppoints function. Since it has been migrated here, I'll paste below the function I wrote to replicate ppoints in python. I've tested it and both seem to give back the same results, but it'd be great if someone could clarify the points made above because they are not made at all clear by the function's description.
def ppoints(vector):
    '''
    Mimics R's function 'ppoints'.
    '''
    m_range = int(vector[0]) if len(vector)==1 else len(vector)
    n = vector[0] if len(vector)==1 else len(vector)
    a = 3./8. if n <= 10 else 1./2
    m_value = n if len(vector)==1 else m_range
    pp_list = [((m+1)-a)/(m_value+(1-a)-a) for m in range(m_range)]
    return pp_list
I would implement this with numpy:
import numpy as np

def ppoints(n, a):
    """numpy analogue of R's `ppoints` function
    see details at http://stat.ethz.ch/R-manual/R-patched/library/stats/html/ppoints.html
    :param n: array type or number"""
    try:
        n = np.float(len(n))
    except TypeError:
        n = np.float(n)
    return (np.arange(n) + 1 - a)/(n + 1 - 2*a)
Sample output:
>>> ppoints(5, 1./2)
array([ 0.1, 0.3, 0.5, 0.7, 0.9])
>>> ppoints(5, 1./4)
array([ 0.13636364, 0.31818182, 0.5 , 0.68181818, 0.86363636])
>>> n = 10
>>> a = 3./8. if n <= 10 else 1./2
>>> ppoints(n, a)
array([ 0.06097561, 0.15853659, 0.25609756, 0.35365854, 0.45121951,
0.54878049, 0.64634146, 0.74390244, 0.84146341, 0.93902439])
One can use R-Fiddle to test the implementation against R's ppoints.
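For example (values computed by hand from the formula above, so treat them as approximate), passing an array makes n become its length:
>>> ppoints(np.arange(7), 3./8.) # len(n) == 7, so n becomes 7.0
array([ 0.0862069 , 0.22413793, 0.36206897, 0.5 , 0.63793103,
        0.77586207, 0.9137931 ])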
