Finite difference approximations in Python

I am trying to calculate the derivative of a function at x = 0, but I keep getting odd answers with every function I have tried. For example, with f(x) = x**2 I get the derivative to be 2 at all points. My finite difference coefficients are correct; the scheme is second-order accurate for the second derivative with respect to x.
from numpy import *
from matplotlib.pyplot import *

def f1(x):
    return x**2

n = 100                       # grid points
x = zeros(n+1, dtype=float)   # array to store values of x
step = 0.02/float(n)          # step size
f = zeros(n+1, dtype=float)   # array to store values of f
df = zeros(n+1, dtype=float)  # array to store values of calculated derivative

for i in range(0, n+1):       # adds values to arrays for x and f(x)
    x[i] = -0.01 + float(i)*step
    f[i] = f1(x[i])

# have to calculate end points separately using one-sided form
df[0] = (f[2]-2*f[1]+f[0])/step**2
df[1] = (f[3]-2*f[2]+f[1])/step**2
df[n-1] = (f[n-1]-2*f[n-2]+f[n-3])/step**2
df[n] = (f[n]-2*f[n-1]+f[n-2])/step**2

for i in range(2, n-1):       # add values to array for derivative
    df[i] = (f[i+1]-2*f[i]+f[i-1])/step**2

print(df)  # returns an array full of 2...

The second derivative of x^2 is the constant 2, and you used the central difference quotient for the second derivative, as you can also see from the square in the denominator. Your result is absolutely correct; your code does exactly what you told it to do.
To get the first derivative with a symmetric difference quotient, use
df[i] = ( f[i+1] - f[i-1] ) / ( 2*step )
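For completeness, a minimal sketch of that loop reusing the arrays from the question; the one-sided formulas at the two end points are my own addition (simple first-order forward/backward differences), not part of the answer:
# first derivative: central differences in the interior,
# simple one-sided differences at the two end points
df[0] = (f[1] - f[0]) / step            # forward difference at the left end
df[n] = (f[n] - f[n-1]) / step          # backward difference at the right end
for i in range(1, n):
    df[i] = (f[i+1] - f[i-1]) / (2*step)
# for f(x) = x**2 this gives values close to 2*x[i]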

The first-order derivative of f1 at a point x (here for f1(x) = x**2) can be approximated with a forward difference quotient:
def f1(x):
    return x**2

def derivative(f, x, step=0.0000000000001):
    return (f(x+step) - f(x)) / step
Hope that helps.
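A usage note (my addition, not part of the original answer): with a step as small as 1e-13 the forward quotient loses accuracy to floating-point cancellation, so a central difference with a moderate step is usually more reliable. A small sketch:
def central_derivative(f, x, step=1e-6):
    # symmetric quotient; second-order accurate in the step size
    return (f(x + step) - f(x - step)) / (2*step)

print(derivative(f1, 1.0))          # forward quotient with a very small step
print(central_derivative(f1, 1.0))  # close to the exact value 2.0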

Related

In Python, how do I add an incrementing element of a polynomial?

I am constructing a Newton polynomial based on a given simple sine function. I have implemented the intermediate calculations, but I am stuck at the final stage: obtaining the formula of the polynomial. Recursion may help here, but it's inaccurate. Here is the formula of the polynomial.
The formula iterates over the values from the table below: we go through the column of x values and the first row of the calculated deltas (we go up to the delta that corresponds to the degree of the polynomial). For example, if the degree is 2, then we take 2 deltas in the first row and values up to 2.512 in the column of x values (9 brackets with x differences will be in the last block of the polynomial).
In the formula there is a set of constant blocks whose values are iterated through, but I have a snag with the element (x - x_0)**[n]. Here n is the degree of the polynomial, which the user sets, and [n] means that the expression in the parentheses is expanded.
I use the sympy library for symbolic calculations: x in the formula of the future polynomial should remain x (a symbol, not its value). How do I implement the part of a block that repeats in the polynomial and grows by a new bracket with the degree of the polynomial?
Code:
import numpy as np
from sympy import *
import pandas as pd
from scipy.special import factorial

def func(x):
    return np.sin(x)

def poly(order):
    # building columns X and Y:
    x_i_list = [round((0.1*np.pi*i), 4) for i in range(0, 11)]
    y_i_list = []
    for x in x_i_list:
        y_i = round(func(x), 4)
        y_i_list.append(y_i)
    # we get deltas:
    n = order
    if n < len(y_i_list):
        result = [np.diff(y_i_list, n=d) for d in np.arange(1, len(y_i_list))]
        print(result)
    else:
        print(f'Determine the order of the polynomial less than {len(y_i_list)}')
    # We determine the index in the x column based on the degree of the polynomial:
    delta_index = len(result[order-1]) - 1
    x_index = delta_index
    h = (x_i_list[x_index] - x_i_list[0]) / n  # calculate h
    b = x_i_list[x_index]
    a = x_i_list[0]
    y_0 = x_i_list[0]
    string_one = []  # list with deltas of the first row (including the degree column of the polynomial)
    for elen in result:
        string_one.append(round(elen[0], 4))
    # creating a list for the subsequent passage through the x's
    x_col_list = []
    for col in x_i_list:
        if col <= x_i_list[x_index]:
            x_col_list.append(col)
    x = Symbol('x')  # for symbolic representation of x's
    # we go along the deltas of the first line:
    for delta in string_one:
        # we go along the column of x's
        for arg in x_col_list:
            for n in range(1, order+1):
                polynom = (delta/(factorial(n)*h**n)) * (x - arg)  # Here I stopped
I guess you're looking for something like this:
In [52]: from sympy import symbols, prod
In [53]: x = symbols('x')
In [54]: nums = [1, 2, 3, 4]
In [55]: prod((x-n) for n in nums)
Out[55]: (x - 4)⋅(x - 3)⋅(x - 2)⋅(x - 1)
EDIT: Actually it's more efficient to do this with Mul rather than prod:
In [134]: Mul(*((x-n) for n in nums))
Out[134]: (x - 4)⋅(x - 3)⋅(x - 2)⋅(x - 1)
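Building on that, here is a rough sketch of how the growing product of brackets could be folded into a Newton forward-difference polynomial. This is my own illustration with simplified names (x0, h, y0, deltas stand in for the question's a, h, y_i_list[0] and string_one), not the poster's exact table layout:
from sympy import Symbol, Mul, expand, factorial

x = Symbol('x')

def newton_forward(x0, h, y0, deltas):
    # deltas[k-1] is the k-th forward difference at the first node
    # P(x) = y0 + sum_k deltas[k-1]/(k! * h**k) * (x-x0)(x-x0-h)...(x-x0-(k-1)h)
    poly = y0
    for k, delta in enumerate(deltas, start=1):
        brackets = Mul(*[(x - (x0 + j*h)) for j in range(k)])
        poly += delta / (factorial(k) * h**k) * brackets
    return expand(poly)

# tiny usage example with made-up numbers
print(newton_forward(x0=0.0, h=0.5, y0=1.0, deltas=[0.3, -0.1]))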

Calculating the derivative of points in python

I want to calculate the derivative of points; a few internet posts suggested using the np.diff function. However, I compared np.diff against manually calculated results (I chose a random polynomial equation and differentiated it) to see if I would end up with the same results. I used the equation Y = (X^3) + (X^2) + 7 and the results I ended up with were different. Any ideas why? Is there any other method to calculate the derivative?
In the problem I am trying to solve, I have received data points of a fitted spline function (not the original data that needs to be fitted by a spline, but the points of the already fitted spline). The x-values are at equal intervals. I only have the points and no equation; what I need is to calculate the first, second and third derivatives, i.e. dy/dx, d2y/dx2, d3y/dx3. Any ideas on how to do this? Thanks in advance.
xval = [1, 2, 3, 4, 5]
yval = []
yval_dashList = []

# selected a polynomial equation
def calc_Y(X):
    Y = (X**3) + (X**2) + 7
    return Y

# calculate y values using the equation
for i in xval:
    yval.append(calc_Y(i))
# output: yval = [9, 19, 43, 87, 157]

# manually differentiated the equation or use sympy library (sym.diff(x**3 + x**2 + 7))
def calc_diffY(X):
    yval_dash = 3*(X**2) + 2**X

# store differentiated y-values in a list
for i in xval:
    yval_dashList.append(yval_dash(i))
# output: yval_dashList = [5, 16, 35, 64, 107]

# use numpy diff method on the y values (yval)
numpyDiff = np.diff(yval)
# output: [10, 24, 44, 60]
The values from the numpy diff method, [10, 24, 44, 60], are different from yval_dashList = [5, 16, 35, 64, 107].
The idea behind what you are trying to do is correct, but there are a couple of points to make it work as intended:
There is a typo in calc_diffY(X), the derivative of X**2 is 2*X, not 2**X:
def calc_diffY(X):
    yval_dash = 3*(X**2) + 2*X
With just this fix, the results still don't match:
yval_dash = [5, 16, 33, 56, 85]
numpyDiff = [10. 24. 44. 70.]
To calculate the numerical derivative you should use a difference quotient, which is an approximation of the derivative:
numpyDiff = np.diff(yval)/np.diff(xval)
The approximation gets better and better as the points become more densely spaced.
The spacing between your points on the x axis is 1, so you end up in this situation (in blue the analytical derivative, in red the numerical):
If you reduce the spacing of your x points to 0.1 you get this, which is much better:
Just to add something to this, have a look at this image from Wikipedia showing the effect of reducing the distance between the points at which the derivative is numerically calculated:
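To make the effect of the spacing concrete, here is a small sketch of my own (using the question's polynomial; the 0.1 spacing mirrors the plot described above) comparing the difference quotient at spacing 1 and spacing 0.1 against the analytical derivative:
import numpy as np

def f(x):
    return x**3 + x**2 + 7

def f_prime(x):
    return 3*x**2 + 2*x

for dx in (1.0, 0.1):
    n = int(round(4.0/dx)) + 1
    xv = np.linspace(1.0, 5.0, n)
    yv = f(xv)
    num = np.diff(yv) / np.diff(xv)         # difference quotient
    exact = f_prime(xv[:-1])                # analytical derivative at the left points
    print(dx, np.max(np.abs(num - exact)))  # the maximum error shrinks with dx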
I like @lgsp's answer. I will add that you can directly estimate the derivative without having to worry about the spacing between the values. This just uses the symmetric formula for calculating finite differences, described at this Wikipedia page.
Take note, though, of the way delta is specified. I found that when it is too small, higher-order estimates fail. There's probably not a 100% generic value that will always work well!
Also, I simplified your code by taking advantage of numpy broadcasting over arrays to eliminate for loops.
import numpy as np

# select a polynomial equation
def f(x):
    y = x**3 + x**2 + 7
    return y

# manually differentiate the equation
def f_prime(x):
    return 3*x**2 + 2*x

# numerically estimate the first three derivatives
def d1(f, x, delta=1e-10):
    return (f(x + delta) - f(x - delta)) / (2 * delta)

def d2(f, x, delta=1e-5):
    return (d1(f, x + delta, delta) - d1(f, x - delta, delta)) / (2 * delta)

def d3(f, x, delta=1e-2):
    return (d2(f, x + delta, delta) - d2(f, x - delta, delta)) / (2 * delta)

# demo output
# note that the functions operate in parallel on numpy arrays -- no for loops!
xval = np.array([1, 2, 3, 4, 5])
print('y  = ', f(xval))
print('y\' = ', f_prime(xval))
print('d1 = ', d1(f, xval))
print('d2 = ', d2(f, xval))
print('d3 = ', d3(f, xval))
And the outputs:
y = [ 9 19 43 87 157]
y' = [ 5 16 33 56 85]
d1 = [ 5.00000041 16.00000132 33.00002049 56.00000463 84.99995374]
d2 = [ 8.0000051 14.00000116 20.00000165 25.99996662 32.00000265]
d3 = [6. 6. 6. 6. 5.99999999]

How to iterate for loop over epsilon=1e-8 to implement Simpson's integrator

I have implemented the following logic and had previously asked a different question about it (regarding the array range). I'm getting output, but the for loop does not iterate as expected because I have used frange(start, stop, step).
Explanation
"""Approximate definite integral of function from a to b using Simpson's method.
This function is vectorized, it uses numpy array operations to calculate the approximation.
This is an adaptive implementation, the method starts out with N=2 intervals, and try
successive sizes of N (by doubling the size), until the desired precision, is reached.
This adaptive solution uses our improved approach/equation for Simpson's method, to
avoid unnecessary recalculations of the integrand function.
a, b - Scalar float values, the begin, and endpoints of the interval we are to
integrate the function over.
f - A vectorized function, should accept a numpy array of x values, and compute the
corresponding y values for all points according to some function.
epsilon - The desired precision to calculate the integral to. Default is 8 decimal places
of precision (1e-8)
returns - A tuple, (ival, error). A scalar float value, the approximated integral of
the function over the given interval, and a scaler float value of the
approximation error on the integral
"""
Code:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
import pylab as pl

def simpsons_adaptive_approximation(a, b, f, epsilon=1e-8):
    N_prev = 2                  # the previous number of slices
    h_prev = (b - a) / N_prev   # previous interval width
    x = np.arange(a+h_prev, b, h_prev)  # x locations of the previous interval
    I_prev = h_prev * (0.5 * f(a) + 0.5 * f(b) + np.sum(f(x)))
    # set up variables to adaptively iterate successively better approximations
    N_cur = 2      # the current number of slices
    I_cur = 0.0    # calculated in loop iteration
    error = 1.0    # calculated in loop iteration
    itr = 1        # keep track of the number of iterations we perform, for display/debug
    h = (b-a)/float(epsilon)
    I_cur = f(a) + f(b)
    while error > epsilon:
        for i in pl.frange(1, epsilon, 1):
            print('Hello')
            if i % 2 == 0:
                print('Hello')
                I_cur = I_cur + (2*(f(a + i*h)))
            else:
                I_cur = I_cur + (4*(f(a + i*h)))
        error = np.abs((1.0/3.0) * (I_cur - I_prev))
        print("At iteration %d (N=%d), val=%0.16f prev=%0.16f error=%e" % (itr, N_cur, I_cur, I_prev, error))
        I_cur *= (h/3.0)
        I_prev = I_cur
        N_prev = N_cur
        N_cur *= 2
        itr += 1
    return (I_cur, error)
Another function calling the above-mentioned function:
def f2(x):
    return x**4 - 2*x + 1

a = 0.0
b = 2.0
eps = 1e-10

(val, err) = simpsons_adaptive_approximation(a, b, f2, eps)
print("Calculated value: %0.16f error: %e for an epsilon of: %e" % (val, err, eps))
Following is the outcome
At iteration 1 (N=2), val=14.0000000000000000 prev=7.0000000000000000 error=2.333333e+00
At iteration 2 (N=4), val=93333333333.3333435058593750 prev=93333333333.3333435058593750 error=0.000000e+00
Calculated value: 622222222222222295040.0000000000000000 error: 0.000000e+00 for an epsilon of: 1.000000e-10
It should perform more iterations.
Can anyone help me fix the for loop so that it iterates and produces more results?
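For reference, a rough sketch of one way the adaptive loop could be structured (my own illustration, not a posted answer: it simply recomputes the full composite Simpson sum with N doubled on each pass, rather than reusing previous evaluations as the docstring intends, and it uses the standard Richardson factor of 1/15 for the error estimate):
import numpy as np

def simpsons_adaptive_sketch(a, b, f, epsilon=1e-8, max_iter=30):
    # composite Simpson's rule with N slices, doubling N until the
    # error estimate falls below epsilon
    N = 2
    h = (b - a) / N
    x = np.linspace(a, b, N + 1)
    I_prev = h/3.0 * (f(x[0]) + 4*np.sum(f(x[1:-1:2])) + f(x[-1]))
    for _ in range(max_iter):
        N *= 2
        h = (b - a) / N
        x = np.linspace(a, b, N + 1)
        I_cur = h/3.0 * (f(x[0])
                         + 4*np.sum(f(x[1:-1:2]))   # odd-indexed interior points
                         + 2*np.sum(f(x[2:-1:2]))   # even-indexed interior points
                         + f(x[-1]))
        error = abs(I_cur - I_prev) / 15.0
        if error < epsilon:
            return I_cur, error
        I_prev = I_cur
    return I_cur, error

def f2(x):
    return x**4 - 2*x + 1

val, err = simpsons_adaptive_sketch(0.0, 2.0, f2, 1e-10)
print(val, err)   # the exact integral is 32/5 - 4 + 2 = 4.4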

Solving an ordinary differential equation on a fixed grid (preferably in python)

I have a differential equation of the form
dy(x)/dx = f(y,x)
that I would like to solve for y.
I have an array xs containing all of the values of x for which I need ys.
For only those values of x, I can evaluate f(y,x) for any y.
How can I solve for ys, preferably in python?
MWE
import numpy as np

# these are the only x values that are legal
xs = np.array([0.15, 0.383, 0.99, 1.0001])

# some made up function --- I don't actually have an analytic form like this
def f(y, x):
    if not np.any(np.isclose(x, xs)):
        return np.nan
    return np.sin(y + x**2)

# now I want to know which array of ys satisfies dy(x)/dx = f(y,x)
Assuming you can use something simple like Forward Euler...
Numerical solutions will rely on approximate solutions at previous times. So if you want a solution at t = 1 it is likely you will need the approximate solution at t<1.
My advice is to figure out what step size will allow you to hit the times you need, and then find the approximate solution on an interval containing those times.
import numpy as np

# from your example, the smallest step size required to hit all x values would be 0.0001
a = 0          # start point
b = 1.5        # possible end point
h = 0.0001
n = int(round((b - a) / h)) + 1

y = np.zeros(n)
t = np.linspace(a, b, n)
y[0] = 0.1     # initial condition here

for i in range(1, n):
    y[i] = y[i-1] + h*f(y[i-1], t[i-1])
Alternatively, you could use an adaptive step method (which I am not prepared to explain right now) to take larger steps between the times you need.
Or, you could find an approximate solution over an interval using a coarser mesh and interpolate the solution.
Any of these should work.
I think you should first solve the ODE on a regular grid, and then interpolate the solution onto your fixed grid. Approximate code for your problem:
import numpy as np
from scipy.integrate import odeint
from scipy import interpolate

xs = np.array([0.15, 0.383, 0.99, 1.0001])

# dy/dx = f(x,y)
def dy_dx(y, x):
    return np.sin(y + x ** 2)

y0 = 0.0                     # initial condition
x = np.linspace(0, 10, 200)  # here you can control the accuracy
sol = odeint(dy_dx, y0, x)
f = interpolate.interp1d(x, np.ravel(sol))
ys = f(xs)
But dy_dx(y, x) should always return something reasonable (not np.nan).
Here is the drawing for this case
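As a further option (my own suggestion, not from either answer above), scipy's solve_ivp accepts a t_eval argument, so the solver can be asked to report the solution exactly at the fixed x values without a separate interpolation step:
import numpy as np
from scipy.integrate import solve_ivp

xs = np.array([0.15, 0.383, 0.99, 1.0001])

# solve_ivp expects fun(t, y); here t plays the role of x
def dy_dx(x, y):
    return np.sin(y + x**2)

sol = solve_ivp(dy_dx, t_span=(0.0, xs[-1]), y0=[0.0], t_eval=xs)
ys = sol.y[0]    # solution values at exactly the x values in xs
print(ys)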

Integrating an array that returns the same size array instead of a tuple in python?

My apologies for the lengthy title. I have been working on a project for a while and I'm in a rut with a certain part of my code. I'll do my best to be thorough.
I have a numpy array of masses, M, of size 167197.
## Non-constant
M = data['m200']  # kg // Mass of dark matter haloes
R = []            # Km // Radius of sphere
for masses in M:
    R.append(((3*masses)/(RHO_C*4*(3.14))**(1.0/3.0)))
I have a fitting function with independent values of k that are part of my question; k is a defined value in my code.
def T(k):  # Fitting Function // Assuming a lambdaCDM model
    q = k/((OMEGA_M)*H**2)*((T_CMB)/27)**2
    L = np.log(euler + 1.84*q)
    C = 14.4 + 325/(1 + 60.5*q**1.11)
    return L/(L + C*q**2)

##############################################################################

def P(k):  # Linear Power Spectrum
    A = 0.75   # LambdaCDM Power Normalization
    n = 0.95   # current constraints from WMAP+LSS
    return A*k**n*T(k)**2
* For the actual problem *
I have a Fourier transform W(kR):
def W(R):  # Fourier transform of the top-hat function
    return (3*(np.sin(k*R) - (k*R)*np.cos(k*R)))/(k*R)**(3)

W_a = []
for radii in R:
    W_a.append(W(radii))
In this case, I'm treating R as the independent value instead of the combined kR.
Printing the length of W_a gives me exactly the same size as my numpy array, so all is well.
This function plays a part in the integral that is included in the following sigma function:
def sigma(R):  # Mass Variance
    k1 = lambda k: k**2*P(k)*W(R)**2
    norm1 = 1/(2*np.pi**2)
    return (integrate.quad(k1, 0, np.Inf))

sigma_a = []
for radii in R:
    sigma_a.extend(sigma(radii))
The integral will create a tuple, of course, for each value in R. I want to build a list or an array, but when using .extend() the length of my array is doubled, now 334394.
How do I correct this so the integral is evaluated for each R in W(kR) and returns an array of the same size, 167197?
First just a Python note:
R = []  # Km // Radius of sphere
for masses in M:
    R.append(((3*masses)/(RHO_C*4*(3.14))**(1.0/3.0)))
can be expressed as:
R = [((3*masses)/(RHO_C*4*(3.14))**(1.0/3.0)) for masses in M]
In:
return (integrate.quad(k1, 0, np.Inf))
the outer set of () doesn't make a difference.
return integrate.quad(k1, 0, np.Inf)
should return the same thing.
Now where does the doubling come from? In the quad docs we see it returns 2 values, the integral and an error term. That's shown as a tuple in some examples, but it is also unpacked in others:
y, err = integrate.quad(f, 0, 1, args=(3,))
If you want just the integral, and not err, you can index it: integrate.quad(...)[0].
sigma_a = []
for radii in R:
    sigma_a.append(sigma(radii)[0])
or
sigma_a = [sigma(radii)[0] for radii in R]
or
def sigma1(R):  # Mass Variance
    k1 = lambda k: k**2*P(k)*W(R)**2
    norm1 = 1/(2*np.pi**2)
    y, err = integrate.quad(k1, 0, np.Inf)
    return y  # return just the integral

sigma_a = [sigma1(radii) for radii in R]
If you want to collect both y and err, but in separate lists, use zip(*...) to repack them (something like the numpy transpose):
ll = [sigma(radii) for radii in R]
# [(y0,err0),(y1,err1), ...]
ys, errs = zip(*ll)
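A tiny usage demo of that repacking pattern, with a placeholder integrand standing in for the question's k**2*P(k)*W(R)**2 (P, W and the real R arrays are not reproduced here):
import numpy as np
from scipy import integrate

def sigma_pair(R):
    # placeholder integrand just for illustration
    k1 = lambda k: np.exp(-k) * R**2
    return integrate.quad(k1, 0, np.inf)   # (integral, error) tuple

R = [1.0, 2.0, 3.0]
ys, errs = zip(*(sigma_pair(radii) for radii in R))
ys = np.array(ys)      # same length as R
errs = np.array(errs)
print(ys)              # [1. 4. 9.] since the placeholder integrates to R**2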
