How to implement linear interpolation in Python?

I want to implement a function interpolate(x, y, x_new) that computes the linear interpolation of the unknown function f at a new point x_new, without using any import statements. The sample is given in the form of two sequences x and y. Both sequences have the same length, and their elements are numbers. The x sequence contains the points where the function has been sampled, and the y sequence contains the function values at the corresponding points.

As I understand your question, you want to write some function y = interpolate(x_values, y_values, x), which will give you the y value at some x? The basic idea then follows these steps:
Find the indices of the values in x_values which define an interval containing x. For instance, for x=3 with your example lists, the containing interval would be [x1,x2]=[2.5,3.4], and the indices would be i1=1, i2=2.
Calculate the slope on this interval as (y_values[i2]-y_values[i1])/(x_values[i2]-x_values[i1]) (i.e. dy/dx).
The value at x is then the value at x1 plus the slope multiplied by the distance from x1.
You will additionally need to decide what happens if x is outside the interval covered by x_values: either treat it as an error, or extrapolate, assuming the slope is the same as in the first/last interval.
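Putting those steps together, here is a minimal sketch in plain Python (no imports); it assumes x_values is sorted in increasing order, and the sample lists at the bottom are made up for illustration:
def interpolate(x_values, y_values, x):
    # Find the interval [x_values[i], x_values[i+1]] containing x;
    # clamp to the first/last interval so values outside the range are extrapolated.
    i = 0
    while i < len(x_values) - 2 and x > x_values[i + 1]:
        i += 1
    # Slope on this interval (dy/dx)
    slope = (y_values[i + 1] - y_values[i]) / (x_values[i + 1] - x_values[i])
    # Value at x1 plus slope times the distance from x1
    return y_values[i] + slope * (x - x_values[i])

print(interpolate([1.0, 2.5, 3.4, 5.8], [2.0, 4.0, 5.8, 4.3], 3))  # 5.0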
Did this help, or did you need more specific advice?

Related

How to find the maximum of the polynomial fit of this Python array?

I have at most nine points of a function, given as the following array in Python:
array([0.04625943, 0.04646331, 0.04636401, 0.04636489, 0.04651253,
0.0462647 , 0.04549576, 0.04484105, 0.04463366], dtype=float32)
I use the numpy library and need to fit a second-order polynomial (polyfit) to this array and find the maximum of that fit.
How can this be achieved?
In order to fit a polynomial, you need arrays of x and y values. Since your data consists of a single array, I am not sure how you use it with np.polyfit.
Assuming though that you have two arrays x and y, np.polyfit(x, y, 2) will return an array of coefficients [a, b, c] of a second-degree polynomial. If this polynomial has a maximum (i.e. if a is negative), that maximum is attained at the point x0 = -b/(2*a). Thus, you just need to compute x0 and then evaluate the polynomial at this value.
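A minimal sketch of that, reusing the array from the question and assuming (since the question does not say) that the x values are simply 0, 1, ..., 8:
import numpy as np

y = np.array([0.04625943, 0.04646331, 0.04636401, 0.04636489, 0.04651253,
              0.0462647, 0.04549576, 0.04484105, 0.04463366], dtype=np.float32)
x = np.arange(len(y))          # assumed x values

a, b, c = np.polyfit(x, y, 2)  # coefficients of a*x**2 + b*x + c
x0 = -b / (2 * a)              # vertex of the parabola (the maximum if a < 0)
print(x0, np.polyval([a, b, c], x0))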

Non-uniform spacing, multivariate derivative with numpy.gradient

So I'm trying to get the second derivative of the following formula using numpy.gradient: I want to differentiate it once by S[:,0] and then by S[:,1].
S = np.random.multivariate_normal(mean, covariance, N)
formula = (S[:,0]**2) * (S[:,1]**2)
But the thing is when I use spacings as the second argument of numpy.gradient
dx = np.diff(S[:,0])
dy = np.diff(S[:,1])
dfdx = np.gradient(formula,dx)
I get the error saying
ValueError: when 1d, distances must match the length of the corresponding dimension
And I get that it's because the spacing vector's length is one element less than the formula's, but I don't know what to do to fix that.
I've also read somewhere that you can pass the coordinates of the points rather than the spacings as the second argument, but when I tried checking the result of that by differentiating the formula first by S[:,0] and then by S[:,1], and then in the opposite order (first by S[:,1], then by S[:,0]), and comparing the two results, which should be similar, there was a huge difference between them.
Can anybody explain to me what I'm doing wrong here?
When passing the coordinates of the points (rather than spacings) to NumPy's gradient, you have to be careful to either pass one coordinate array per dimension of your function, or to specify with the axis argument along which axis you want to calculate the gradient.
When you checked both ways of differentiation, I think the problem is that your formula isn't actually two-dimensional but one-dimensional: even though it is built from two variables, note that your formula array has only one dimension.
Take a look at this little script in which we verify that, indeed, the order of differentiation doesn't alter the result (assuming your function is well-behaved).
import numpy as np
# Dummy arrays and function
x = np.linspace(0, 1, 50)
y = np.linspace(0, 2, 50)
f = np.sin(2 * np.pi * x[:, None]) * np.cos(2 * np.pi * y)
dfdx = np.gradient(f, x, axis=0)
df2dy = np.gradient(dfdx, y, axis=1)
dfdy = np.gradient(f, y, axis=1)
df2dx = np.gradient(dfdy, x, axis=0)
# Check how many values are essentially different
print(np.sum(~np.isclose(df2dx, df2dy)))
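For completeness, here is the other option mentioned above: pass one coordinate array per dimension and let np.gradient return the gradient along every axis at once. A small sketch reusing the arrays defined above:
# Passing both coordinate arrays: np.gradient returns one array per axis
dfdx_all, dfdy_all = np.gradient(f, x, y)
print(np.allclose(dfdx_all, dfdx), np.allclose(dfdy_all, dfdy))  # True True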
Does this apply to your problem?

Integrate a function depending on two arrays

Initially, I have two arrays that correspond to the values of x and y of a function; I don't know that function, I just know that the values of y depend on x. Then, I calculate a function that depends on both arrays.
I need to calculate in python the integral of that last function to obtain the total area under the curve between the first value of x and the last. Any idea of how to do that?
x = [array]
y(x) = [array]
a = 2.839*10**25
b = 4*math.pi
alpha = 0.5
z = 0.003642
def L(x, y, a, b, alpha, z):
    return x * ((y * b * a) / (1 + z)**(1 + alpha))
Your function L is a function of x (in that, given a value of x, it spits out a value), so first you should repackage it as such: introduce a function yy which, given x, produces the corresponding y, then write LL(x) = L(x, yy(x), a, b, alpha, z), and finally use scipy.integrate to integrate LL between the first and last values of x.
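A minimal sketch of that idea with SciPy, using dummy sample arrays in place of the real x and y (which are not shown in the question); np.trapz(L(x, y, a, b, alpha, z), x) on the raw samples would be a simpler alternative:
import math
import numpy as np
from scipy.interpolate import interp1d
from scipy.integrate import quad

a = 2.839 * 10**25
b = 4 * math.pi
alpha = 0.5
z = 0.003642

def L(x, y, a, b, alpha, z):
    return x * ((y * b * a) / (1 + z)**(1 + alpha))

# Dummy data standing in for the real arrays
x = np.linspace(1.0, 10.0, 50)
y = np.sqrt(x)

yy = interp1d(x, y)                          # y as a function of x
LL = lambda t: L(t, yy(t), a, b, alpha, z)   # integrand as a function of x only
area, _ = quad(LL, x[0], x[-1])              # total area under the curve
print(area)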

Why is the output of linspace and interp1d always the same?

So I was doing my assignment and we are required to use linear interpolation for it. We have been asked to use the interp1d function from scipy.interpolate to generate new y values given new x values and the old coordinates (x1,y1) and (x2,y2).
To get the new x coordinates (let's call them x_new) I used np.linspace between (x1,x2), and the new y coordinates (let's call them y_new) I found using the interp1d function on x_new.
However, I also noticed that applying np.linspace to (y1,y2) generates exactly the same values of y_new that we got from interp1d on x_new.
Can anyone please explain to me why this is so? And if this is true, is it always true?
And if this is always true, why do we need the interp1d function at all when we can use np.linspace in its place?
Here is the code I wrote:
import scipy.interpolate as ip
import numpy as np
x = [-1.5, 2.23]
y = [0.1, -11]
x_new = np.linspace(start=x[0], stop=x[-1], num=10)
print(x_new)
y_new = np.linspace(start=y[0], stop=y[-1], num=10)
print(y_new)
f = ip.interp1d(x, y)
y_new2 = f(x_new)
print(y_new2) # y_new2 values always the same as y_new
The reason why you stumbled upon this is that you only use two points for an interpolation of a linear function. You have as input two different x values with corresponding y values. You then ask interp1d to find the linear function f(x)=m*x+b that best fits your input data. As you only have two points as input, there is an exact solution, because a linear function is exactly defined by two points. To see this: take a piece of paper, draw two dots and then think about how many straight lines you can draw to connect them.
The linear function that you get from two input points is defined by the parameters m=(y1-y2)/(x1-x2) and b=y1-m*x1, where (x1,y1),(x2,y2) are your two input points (or the elements of your x and y arrays in your code snippet).
So, now what does np.linspace(start, stop, num, ...) do? It gives you num evenly spaced points between start and stop. These points are start, start + delta, ..., stop, where the step width delta is given by delta=(stop-start)/(num-1). The -1 comes from the fact that you want to include the endpoint. So the nth point in your interval lies at xn=x1+n*(x2-x1)/(num-1). At what y values will these points end up after we apply the linear function from interp1d? Let's plug it in:
f(xn) = m*xn + b = (y1-y2)/(x1-x2)*(x1+n*(x2-x1)/(num-1)) + y1 - (y1-y2)/(x1-x2)*x1. Simplifying this gives f(xn) = (y2-y1)*n/(num-1) + y1, and this is exactly what you get from np.linspace(y1, y2, num), i.e. f(xn)=yn!
Now, does this always work? No! We made use of the fact that our linear function is defined by the two endpoints of the intervals we use in np.linspace. So this will not work in general. Try adding one more x value and one more y value to your input lists and then compare the results.
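For example, a quick check with three (made-up) points shows that the two approaches no longer agree:
import numpy as np
import scipy.interpolate as ip

x = [-1.5, 0.0, 2.23]
y = [0.1, 5.0, -11.0]
x_new = np.linspace(start=x[0], stop=x[-1], num=10)
y_lin = np.linspace(start=y[0], stop=y[-1], num=10)  # ignores the middle point entirely
y_int = ip.interp1d(x, y)(x_new)                     # follows the piecewise-linear path
print(np.allclose(y_lin, y_int))                     # False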

Local linear approximation in numpy

I have some x and y data, where for every entry in the x vector there's a corresponding entry in the y vector. Furthermore, the x data are not evenly spaced.
I'd like to interpolate between the x samples to obtain an even spacing in the x dimension, and to approximate the corresponding y values. SciPy's interp1d seems like a natural solution, but my problem has a caveat: the x values are not monotonically increasing (because both x and y are functions of time). The interp1d function, and the other functions from scipy.interpolate, thus give weird results at those points where x reverses direction.
What I'd really like to do is simply fit a straight line between every set of two adjacent x points and then interpolate based on this very local approximation. Is there a function to do this in numpy or do I have to rig something up myself?
Could you sort your xy pairs and then use interp1d? Something like this?
xy = zip(x, y)
new_xy = sorted(xy, key=lambda pair: pair[0])  # sort the pairs by their x value
x = [pair[0] for pair in new_xy]
y = [pair[1] for pair in new_xy]
Now your x's are monotonically increasing and the relationships have been preserved.
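From there, resampling onto an evenly spaced grid is straightforward; a sketch assuming x and y are the sorted sequences from above (np.interp does the same piecewise-linear job as interp1d here):
import numpy as np

x_even = np.linspace(x[0], x[-1], num=200)  # evenly spaced x grid over the data range
y_even = np.interp(x_even, x, y)            # linearly interpolated y values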
