Unexpected output in Monte Carlo Integration - python

First time posting, so please excuse me if I'm not as concise as I need to be. I'm trying to implement the algorithm for Monte Carlo integration and I've got the code below. However, when I run it with the parameters specified in main, I get 0 as output. I'm pretty sure it has something to do with the way Python handles floats, but I don't know where else to look. Any help is always appreciated. I know I will kick myself when I see my mistake.
One more thing, I'm using Python 2.7.
montecarloint.py
from random import uniform
from math import exp

def estimate_area(f, a, b, m, n=1000):
    hits = 0
    total = m * (b - a)
    for i in range(n):
        x = uniform(a, b)
        y = uniform(0, m)
        if y <= f(x):
            hits += 1
    frac = hits / n
    return frac * total

def f(x):
    return exp(-x ** 2)

def main():
    print(estimate_area(f, 0, 2, 1))

main()
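For what it's worth, the zero almost certainly comes from the line frac = hits / n: on Python 2.7 both operands are ints, so / performs integer division and truncates to 0 whenever hits < n. A minimal sketch of the fix, either forcing float division or switching to Python 3 division semantics:

# Likely fix: force float division (Python 2.7 truncates int / int)
frac = float(hits) / n

# Or, at the top of the file, make / behave as in Python 3
from __future__ import division

Either change should make estimate_area(f, 0, 2, 1) return a sensible estimate of the integral (around 0.88 for 1000 samples).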

Related

How to write Bessel function using power series method in Python without Sympy?

I am studying Computational Physics with a lecturer who always asks me to write Python and Matlab code without using "instant code" (a library that gives me the final answer without showing the mathematical expression). So I tried to write the Bessel function of the first kind using its power series, because I thought it would be easier than other methods (I am not sure). I don't know why the result is still so different, far from the answer that sympy's besselj provides.
Here is my code for x = 5 and n = 3
import math

def bessel_function(n, x, num_terms):
    # Initialize the power series expansion with the first term
    series_sum = (x / 2) ** n
    # Calculate the remaining terms of the power series expansion
    for k in range(0, num_terms):
        term = ((-1) ** k) * ((x / 2) ** (2 * k)) / (math.factorial(k)**2)*(2**2*k)
        series_sum = series_sum + term
    return series_sum

# Test the function with n = 3, x = 5, and num_terms = 10
print(bessel_function(3, 5, 30))
print(bessel_function(3, 5, 15))
And here is the code using the sympy and mpmath libraries:

from mpmath import *
mp.dps = 15; mp.pretty = True
print(besselj(3, 5))

import sympy

def bessel_function(n, x):
    # Use the besselj function from sympy to calculate the Bessel function
    return sympy.besselj(n, x)

# Calculate the numerical value of the Bessel function using evalf
numerical_value = bessel_function(3, 5).evalf()
print(numerical_value)
It is a big waste to compute the terms like you do, each from scratch with a power and a factorial. It is much more efficient to compute each term from the previous one.
For J_N, consecutive terms are related by
T_k / T_{k-1} = -(X/2)² / (k(k+N))
with T_0 = (X/2)^N / N!.
import math

N = 3
X = 5

# First term
X *= 0.5
Term = pow(X, N) / math.factorial(N)
Sum = Term
print(Sum)

# Next terms
X *= -X
for k in range(1, 21):
    Term *= X / (k * (k + N))
    Sum += Term
    print(Sum)
The successive sums are
2.6041666666666665
-1.4648437499999996
1.0782877604166665
0.19525598596643523
0.39236129276336185
0.3615635885763421
0.365128137672062
0.3648098743599441
0.3648324782883616
0.36483117019065225
0.3648312330799652
0.36483123052763916
0.3648312306162616
0.3648312306135987
0.3648312306136686
0.364831230613667
0.36483123061366707
0.36483123061366707
0.36483123061366707
0.36483123061366707
0.36483123061366707
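For comparison, here is a minimal sketch of the direct power series with the term written out in full (assuming integer n), i.e. J_n(x) = sum over k of (-1)^k (x/2)^(2k+n) / (k! (k+n)!), which converges to the same value:

import math

def bessel_j(n, x, num_terms=30):
    # Direct series: J_n(x) = sum_k (-1)^k / (k! * (k+n)!) * (x/2)^(2k+n)
    total = 0.0
    for k in range(num_terms):
        total += ((-1) ** k) * (x / 2) ** (2 * k + n) / (math.factorial(k) * math.factorial(k + n))
    return total

print(bessel_j(3, 5))  # ~0.36483123061366707, matching besselj(3, 5)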

Fast Fourier Transform algorithm wrong by a single minus sign

So after watching this video on the Fast Fourier Transform https://www.youtube.com/watch?v=h7apO7q16V0
I analysed the pseudocode and implemented it in Python, only to find that it produces a different output from that of many FFT calculator sites. My values all seem to be there, it's just odd that the order is out of place. Does anyone know why? Is it a different kind of algorithm or implementation?
import cmath
import math

def FFT(P):
    n = len(P)
    if n == 1:
        return P
    omega = cmath.exp((2 * cmath.pi * 1j) / n)
    p_even = P[::2]
    p_odd = P[1::2]
    y_even = FFT(p_even)
    y_odd = FFT(p_odd)
    y = [0] * n
    for i in range(n // 2):
        y[i] = y_even[i] + omega**i * y_odd[i]
        y[i + n//2] = y_even[i] - omega**i * y_odd[i]
    return y

poly = [0, 1, 2, 3]
print(FFT([0, 1, 2, 3]))
The site I tested it against was https://tonysader.github.io/FFT_Calculator/? and when I input 0,1,2,3 into it I obtained: 6, -2+2J, -2, -2-2J,
whilst my Python program output: 6, -2-2J, -2, -2+2J.
The pseudocode I followed:
I think the program you're running is executing the inverse FFT. Try
omega = cmath.exp((-2 * cmath.pi * 1j)/n). Note the minus sign.
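For reference, a minimal sketch of that one-line change applied to the code above (everything else unchanged), which reproduces the calculator's output:

import cmath

def FFT(P):
    n = len(P)
    if n == 1:
        return P
    omega = cmath.exp((-2 * cmath.pi * 1j) / n)  # forward transform: note the minus sign
    y_even, y_odd = FFT(P[::2]), FFT(P[1::2])
    y = [0] * n
    for i in range(n // 2):
        y[i] = y_even[i] + omega**i * y_odd[i]
        y[i + n//2] = y_even[i] - omega**i * y_odd[i]
    return y

print(FFT([0, 1, 2, 3]))  # ~[6, -2+2j, -2, -2-2j]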

How can I optimize this code in python? For solving stochastic differential equations

I am developing a code that uses a method called Platen to solve stochastic differential equations. Then I must solve that stochastic differential equation many times (on the order of 10,000 times) to average all the results. My code is:
import numpy as np
import random
import numba

@numba.jit(nopython=True)
def integrador2(y, t, h):  # this is the integrator of the function that solves the SDE
    m = 6.6551079E-26      # parameters
    gamma = 0.05
    T = 5E-3
    k_b = 1.3806488E-23
    b = np.sqrt(2*m*gamma*T*k_b)
    c = np.sqrt(h)
    for i in range(len(t)):
        dW = c*random.gauss(0, 1)
        A = np.array([y[i, -1]/m, -gamma*y[i, -1]])  # this is the Platen method that is applied at
        B_dW = np.array([0, b*dW])                   # each time step
        z = y[i] + A*h + B_dW
        Az = np.array([z[-1]/m, -gamma*z[-1]])
        y[i+1] = y[i] + 1/2*(Az + A)*h + B_dW
    return y
def media(args):  # args is a tuple with the parameters
    y = args[0]
    t = args[1]
    k = args[2]
    x = 0
    p = 0
    for n in range(k):  # k = number of trajectories
        y = integrador2(y, t, h)
        x = (1./(n+1))*(n*x + y[:, 0])  # I do the average like this so as not to have to save all the
        p = (1./(n+1))*(n*p + y[:, 1])  # solutions in memory
    return x, p
The variables y, t and h are:

y0 = np.array([initial position, initial momentum])  # initial conditions
t = np.linspace(initial time, final time, number of time intervals)  # time array
y = np.zeros((len(t)+1, len(y0)))  # array of positions and momenta
y[0, :] = np.array(y0)  # I keep the initial condition
h = (final time - initial time)/(number of time intervals)  # time increment
I need to be able to run the program for a number of time intervals of 10 ** 7 and solve it 10 ** 4 times (k = 10 ** 4).
I feel that I have already reached a dead end: I already accelerate the function that calculates the result with Numba, and then (although I do not show it here) I parallelize the "media" function to work with the four cores that my computer has. Even doing all this, my program takes an hour and a half to execute for 10 ** 6 time intervals and k = 10 ** 4. I have not had the courage to execute it for 10 ** 7 time intervals because my intuition tells me that it would take more than 10 hours.
I would really appreciate if someone could advise me to make some parts of the code faster.
Finally, I apologize if I have not expressed myself completely correctly in any part of the question, I am a physicist, not a computer scientist and my English is far from perfect.
I can save about 75% of compute time by simplifying the math in the loop:
def integrador2(y, t, h):  # this is the integrator of the function that solves the SDE
    m = 6.6551079E-26      # parameters
    gamma = 0.05
    T = 5E-3
    k_b = 1.3806488E-23
    b = np.sqrt(2*m*gamma*T*k_b)
    c = np.sqrt(h)
    h = h * 1.
    coeff0 = h/m - gamma*h**2/(2.*m)
    coeff1 = (1. - gamma*h + gamma**2*h**2/2.)
    coeffd = c*b*(1. - gamma*h/2.)
    for i in range(len(t)):
        dW = np.random.normal()
        # Method 2
        y[i+1] = np.array([y[i][0] + y[i][1]*coeff0, y[i][1]*coeff1 + dW*coeffd])
    return y
Here's a method using filters with scipy, which I don't think is compatible with Numba, but is slightly faster than the solution above:
from scipy import signal

# @numba.jit(nopython=True)  # disabled: this version uses scipy, which I don't think Numba can compile
def integrador2(y, t, h):  # this is the integrator of the function that solves the SDE
    m = 6.6551079E-26      # parameters
    gamma = 0.05
    T = 5E-3
    k_b = 1.3806488E-23
    b = np.sqrt(2*m*gamma*T*k_b)
    c = np.sqrt(h)
    h = h * 1.
    coeff0a = 1.
    coeff0b = h/m - gamma*h**2/(2.*m)
    coeff1 = (1. - gamma*h + gamma**2*h**2/2.)
    coeffd = c*b*(1. - gamma*h/2.)
    noise = np.zeros(y.shape[0])
    noise[1:] = np.random.normal(0., coeffd*1., y.shape[0]-1)
    noise[0] = y[0, 1]
    a = [1, -coeff1]
    b = [1]
    y[1:, 1] = signal.lfilter(b, a, noise)[1:]
    a = [1, -coeff0a]
    b = [coeff0b]
    y[1:, 0] = signal.lfilter(b, a, y[:, 1])[1:]
    return y
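To see why lfilter reproduces the explicit loop: with a = [1, -coeff1] and b = [1], lfilter applies the recurrence out[i] = in[i] + coeff1 * out[i-1], which is exactly the momentum update above. A small self-contained check, with a made-up coefficient purely for illustration:

import numpy as np
from scipy import signal

coeff1 = 0.95                      # made-up value, only to illustrate the equivalence
noise = np.random.normal(size=5)

# Explicit recurrence out[i] = noise[i] + coeff1 * out[i-1]
direct = np.zeros(5)
direct[0] = noise[0]
for i in range(1, 5):
    direct[i] = noise[i] + coeff1 * direct[i - 1]

filtered = signal.lfilter([1], [1, -coeff1], noise)
print(np.allclose(direct, filtered))  # True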

How to solve for roots given a summation of a function of an unknown and a variable?[Scipy][Root-finding]

I am trying to solve the following equation; however, there is very little documentation in the SciPy reference guide, so I am not sure how to go about doing it.
I have an array of 10 uniform random variables, let's call it X.
I have the following function, which takes a parameter theta and a uniform RV from X and returns e raised to the power of their product:

import math

def f(x_i, theta):
    return math.exp(x_i * theta)

I am trying to find theta such that this equation holds:

(1/10) * sum over i = 1..10 of exp(x_i * theta) = 20

OK, you can move the 20 to the other side of the equation so that it equals 0, but I am still not sure how to optimize for this quantity given the summation.
Any help would be appreciated! Thank you.
I am wondering what the hiccup is. In essence, you just have a function g(theta) with
g(theta) = sum(i, f(x[i],theta))/10 - 20
I assume x is data.
So, we could do:
from scipy.optimize import root_scalar
from random import uniform, seed
from math import exp

def f(x_i, theta):
    return exp(x_i * theta)

def g(theta, x):
    s = 0
    for x_i in x:
        s += f(x_i, theta)
    return s/10 - 20

def solve():
    n = 10
    seed(100)
    x = [uniform(0, 1) for i in range(n)]
    print(x)
    res = root_scalar(g, args=x, bracket=(0, 100))
    print(res)

solve()
I see:
[0.1456692551041303, 0.45492700451402135, 0.7707838056590222, 0.705513226934028, 0.7319589730332557, 0.43351443489540376, 0.8000204571334277, 0.5329014146425713, 0.08015370917850195, 0.45594588118356716]
converged: True
flag: 'converged'
function_calls: 16
iterations: 15
root: 4.856897057972411
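As a quick sanity check (a sketch that reuses the same seeded sample as above), the mean of exp(x_i * theta) evaluated at the reported root should come out close to 20:

from random import uniform, seed
from math import exp

seed(100)
x = [uniform(0, 1) for _ in range(10)]
theta = 4.856897057972411  # root reported above
print(sum(exp(x_i * theta) for x_i in x) / 10)  # ~20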

Plotting Fourier Series coefficients in Python using Simpson's Rule

I want to 1. express Simpson's Rule as a general function for integration in Python and 2. use it to compute and plot the Fourier Series coefficients of the function f(t) = sin(t).
I've stolen and adapted this code for Simpson's Rule, which seems to work fine for integrating simple test functions.
Given the period T = 2*pi, the Fourier Series coefficients are computed as
b_k = (1/pi) * integral from 0 to 2*pi of f(t)*sin(k*t) dt
where k = 1, 2, 3, ...
I am having difficulty figuring out how to express the integrand f(t)*sin(k*t) for varying k. I'm aware that a_k = 0 since this function is odd, but I would like to be able to compute it in general for other functions.
Here's my attempt so far:
import matplotlib.pyplot as plt
from numpy import *

def f(t):
    k = 1
    for k in range(1, 10000):  # to give some representation of k's span
        k += 1
    return sin(t)*sin(k*t)

def trapezoid(f, a, b, n):
    h = float(b - a) / n
    s = 0.0
    s += f(a)/2.0
    for j in range(1, n):
        s += f(a + j*h)
    s += f(b)/2.0
    return s * h

print trapezoid(f, 0, 2*pi, 100)
This doesn't give the correct answer of 0 at all, since it increases as k increases, and I'm sure I'm approaching it with tunnel vision in terms of the for loop. My difficulty in particular is with stating the function so that k is read as k = 1, 2, 3, ...
The problem I've been given unfortunately doesn't specify what the coefficients are to be plotted against, but I am assuming it's meant to be against k.
Here's one way to do it, if you want to run your own integration or Fourier coefficient determination instead of using numpy or scipy's built-in methods:
import numpy as np

def integrate(f, a, b, n):
    t = np.linspace(a, b, n)
    return (b - a) * np.sum(f(t)) / n

def a_k(f, k):
    def ker(t): return f(t) * np.cos(k * t)
    return integrate(ker, 0, 2*np.pi, 2**10+1) / np.pi

def b_k(f, k):
    def ker(t): return f(t) * np.sin(k * t)
    return integrate(ker, 0, 2*np.pi, 2**10+1) / np.pi

print(b_k(np.sin, 0))
This gives the result
0.0
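As a usage sketch (reusing a_k and b_k from the block above), for f(t) = sin(t) you would expect a_k to be roughly 0 for every k and b_k roughly 1 only at k = 1, which is a handy check before plotting the coefficients against k:

for k in range(5):
    print(k, a_k(np.sin, k), b_k(np.sin, k))
# a_k is ~0 for all k; b_k is ~1 at k = 1 and ~0 elsewhere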
On a side note, trapezoid integration is not very useful for uniform time intervals. But if you desire:
def trap_integrate(f, a, b, n):
    t = np.linspace(a, b, n)
    f_t = f(t)
    dt = t[1:] - t[:-1]
    f_ab = f_t[:-1] + f_t[1:]
    return 0.5 * np.sum(dt * f_ab)
There's also np.trapz if you want to use pre-built functionality. Similarly, there's also scipy.integrate.trapz.
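For reference, a quick sketch of the same kind of coefficient integral done with np.trapz (the integrand here is just an example):

import numpy as np

t = np.linspace(0, 2*np.pi, 2**10 + 1)
y = np.sin(t) * np.sin(3*t)    # integrand of b_3 for f(t) = sin(t)
print(np.trapz(y, t) / np.pi)  # ~0.0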
