Cubic interpolation with derivatives in numpy - python

I'm a recent newcomer to Python and scientific computing with Python. This question is to avoid duplicating code that might already exist.
I have a field that is sampled as a function of x and y on a regular grid. I want to interpolate the data to obtain not only the value of the field at any point on the grid, but also the first and second derivatives. If I interpolate it with bicubic interpolation using interp2d, I can obtain the value of the field.
Does anyone have a suggestion on how to obtain the first and second derivatives of the field using an EXISTING numpy or scipy function?
Thanks!

The scipy.interpolate.interp2d.__call__ method has options dx and dy to evaluate derivatives at a point (at least since version 0.14.0). Note that interp2d has since been deprecated and removed from SciPy; scipy.interpolate.RectBivariateSpline accepts the same dx and dy arguments in its __call__ method.
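A minimal sketch of evaluating derivatives this way, using RectBivariateSpline (which takes the same dx/dy arguments and is available in current SciPy); the sample field z = x²y and the evaluation point are made up for illustration:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# regular grid and a sample field (z = x**2 * y, chosen for illustration)
x = np.linspace(0.0, 4.0, 30)
y = np.linspace(0.0, 4.0, 30)
Z = np.outer(x**2, y)                   # Z[i, j] = x[i]**2 * y[j]

spline = RectBivariateSpline(x, y, Z)   # bicubic by default (kx=ky=3)

val     = spline(2.5, 1.5)[0, 0]        # value of the field
dz_dx   = spline(2.5, 1.5, dx=1)[0, 0]  # first derivative in x
d2z_dx2 = spline(2.5, 1.5, dx=2)[0, 0]  # second derivative in x
```

Because the sample field is a low-degree polynomial, the bicubic spline reproduces it (and its derivatives) essentially exactly; for general data the derivatives are those of the fitted spline.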

Related

Is there a Python function or methodology for locating the first elbow of a plot?

I have a situation where I need to find the first x-value for which an "elbow" (or backwards "L") in the data occurs. For example, I have drawn an arrow on a plot to show what I mean:
What's the fastest way to find this in Python? Is there a function for this?
You can calculate the 1st derivative of your data with the numpy function gradient, for example.
Let's assume your data is stored in a numpy array 'x'; the 1st derivative is then calculated with np.gradient(x). Basically, the derivative calculates the rate of change of your function.
With this you can specify a tolerance for the derivative: since the function grows a lot at the elbow, you can check where the gradient becomes greater than the tolerance (some big number).
P.S.: to really understand it you need to know a little calculus, so I encourage you to read up on the derivative of a function.
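A minimal sketch of this approach; the data, the tolerance value, and the variable names are all made up for illustration:

```python
import numpy as np

# hypothetical data: flat at first, then a sharp rise (the "elbow")
x = np.concatenate([np.ones(50), np.exp(np.linspace(0, 5, 50))])

grad = np.gradient(x)       # 1st derivative (rate of change)
tolerance = 10.0            # "some big number", tuned to the data's scale

# first index where the slope exceeds the tolerance
elbow_index = np.argmax(grad > tolerance)
```

The right tolerance depends entirely on the scale of your data; normalizing the gradient (e.g. by its maximum) before thresholding makes the cutoff easier to choose.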

Monotonic Interpolation in 2D

I would like to use scipy.interpolate.interp2d or scipy.interpolate.RectBivariateSpline to interpolate a 2D image, but would like for the interpolation to be monotonic.
Is there an equivalent function that assures monotonicity or a way to force interp2d or RectBivariateSpline to return monotonic interpolations?
I believe I am looking for something similar to PchipInterpolator, but for 2D (or n-dimensional).
The first question is: how do you define monotonicity in 2D (and in higher dimensions)? The definition does not seem to be unique. I searched and found this paper: Piecewise polynomial monotonic interpolation of 2D gridded data (https://hal.inria.fr/hal-01059532/document). Maybe this helps.
The function RectBivariateSpline in scipy is a wrapper around FITPACK (as far as I remember). If that is the case, it won't be monotonic, since it is B-spline-based.
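The B-spline point is easy to check: feed RectBivariateSpline monotonic step-like data and look along the diagonal. The 6x6 test grid below is made up for illustration:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# monotonic (0/1 step) data, nondecreasing in both x and y
x = y = np.arange(6, dtype=float)
Z = (np.add.outer(x, y) >= 5).astype(float)

spline = RectBivariateSpline(x, y, Z)   # cubic B-spline interpolant
t = np.linspace(0.0, 5.0, 201)
g = spline.ev(t, t)                     # values along the diagonal

# the data are monotone along the diagonal; the spline rings near the
# jump, so its values are not
not_monotone = np.any(np.diff(g) < 0)
```

This is the 2D analogue of the overshoot that makes PchipInterpolator necessary in 1D.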

Symmetrical log scale array in python

I am trying to solve a 1D nonlinear Poisson equation (the charge distribution depends on the potential). I have been treating it as a minimization problem and have been using the fsolve function from the scipy.optimize module.
While I get a qualitatively reasonable answer, I noticed that it varies with the distance between points in the array. That is to be expected, as the solution (and its derivatives) is exponential. The solution is most affected near the boundaries of the space over which the problem is defined.
It appears that the time required for fsolve to complete its calculation increases dramatically with the number of points in the array. I have been looking into nonlinear spacing with the help of numpy's logspace function. However, that function gives tighter spacing at one end of the array only. I have tried generating two arrays with logspace and concatenating them, but have not managed to get the required outcome.
To clarify, I require an array in the range [0, x] (x is some float value) where the spacing between points becomes smaller as they approach 0 or x. Any suggestions on how to accomplish this?
The following should give you a log-scale spacing between 0 and 1, so you can scale it to your requirements. I've included two solutions, with and without the boundary values.
import numpy
import math

num = 6  # number of points in each half; adjust as needed
logrange = numpy.logspace(0, math.log10(11), num=num)

# including boundary points
inclusive = numpy.hstack([logrange - 1, 21 - logrange[-2:0:-1], 20]) / 20
print(inclusive)

# excluding boundary points
exclusive = numpy.hstack([logrange[1:] - 1, 21 - logrange[-2:0:-1]]) / 20
print(exclusive)
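The same mirror-and-concatenate idea can be wrapped in a small helper for arbitrary ranges; the function name and signature below are my own invention:

```python
import numpy as np

def symlog_points(x_max, num_half=6):
    """Hypothetical helper: 2*num_half - 1 points in [0, x_max],
    log-clustered toward both endpoints."""
    # log-spaced points, shifted and normalized to [0, 1]
    half = np.logspace(0, np.log10(num_half + 1), num=num_half) - 1
    half /= half[-1]
    # first half covers [0, 0.5]; mirror it onto (0.5, 1]
    return x_max * np.concatenate([half / 2, 1 - half[-2::-1] / 2])
```

The result is symmetric about x_max/2, with the smallest spacing at both ends of the interval.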

Create 3D- polynomial via numpy etc. from given coordinates

Given some coordinates in 3D (on the x-, y- and z-axes), what I would like to do is get a polynomial (fifth order). I know how to do it in 2D (for example just in the x- and y-directions) via numpy. So my question is: is it also possible with the third (z) axis?
Sorry if I missed a question somewhere.
Thank you.
Numpy has functions for multi-variable polynomial evaluation in the polynomial package -- polyval2d, polyval3d -- the problem is getting the coefficients. For fitting, you need the polyvander2d, polyvander3d functions that create the design matrices for the least squares fit. The multi-variable polynomial coefficients thus determined can then be reshaped and used in the corresponding evaluation functions. See the documentation for those functions for more details.
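A sketch of that workflow, fitting z = f(x, y) with a fifth-order polynomial via polyvander2d and a least-squares solve; the synthetic data points are made up for illustration:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# made-up 3D coordinates sampled from a known low-degree polynomial
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = rng.uniform(-1, 1, 200)
z = 1 + 2 * x - 3 * y + 0.5 * x * y

deg = [5, 5]                           # fifth order in x and in y
V = P.polyvander2d(x, y, deg)          # design matrix, shape (200, 36)
coef_flat, *_ = np.linalg.lstsq(V, z, rcond=None)
coef = coef_flat.reshape(deg[0] + 1, deg[1] + 1)

z_fit = P.polyval2d(x, y, coef)        # evaluate the fitted polynomial
```

For points in four dimensions (a field w(x, y, z)), polyvander3d and polyval3d follow the same pattern.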

How calculate the Error for trapezoidal rule if I only have data? (Python)

I have an array of data and need to calculate the area under the curve, so I used the functions trapz from Numpy and integrate.simps from Scipy for numerical integration, and both gave me really nice results.
The problem now is that I need the error for each one, or at least the error for the trapezoidal rule. The thing is, the error formula asks for a function, which I obviously don't have. I have been researching a way to obtain the error but always return to the same point...
Here are the pages for scipy.integrate (http://docs.scipy.org/doc/scipy/reference/integrate.html) and numpy.trapz (http://docs.scipy.org/doc/numpy/reference/generated/numpy.trapz.html). I have looked at a lot of numerical-integration code and prefer to use the existing functions...
Any ideas please?
While cel is right that you cannot determine an integration error if you don't know the function, there is something you can do.
You can use curve fitting to fit a function through the available data points. You can then use that function for error estimation.
If you expect the data to fit a certain kind of function, like a sine, log or exponential, it is good to use that as a basis for the curve fitting.
For instance, if you are measuring the drag on a moving car, it is known that this is mostly proportional to the velocity squared because of air resistance.
However, if you have no knowledge of the applicable function, then given N data points there is a polynomial of degree N-1 that fits exactly through all those data points. Determining such a polynomial from the data amounts to solving a system of linear equations; see e.g. polynomial interpolation. You could use this polynomial as an estimate of the unknown real function. Note, however, that outside the range of the data points this polynomial may be wildly inaccurate.
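One way to put a number on the error along these lines: fit a spline through the samples as a stand-in for the unknown function, integrate the spline exactly, and take the difference from the trapezoidal result as an error estimate. A sketch with made-up data (scipy.integrate.trapezoid implements the same rule as numpy's trapz):

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.interpolate import CubicSpline

# made-up samples of an "unknown" smooth function
x = np.linspace(0.0, np.pi, 9)
y = np.sin(x)

trap = trapezoid(y, x)                # trapezoidal-rule estimate

spline = CubicSpline(x, y)            # stand-in for the unknown function
refined = spline.integrate(x[0], x[-1])

error_estimate = abs(trap - refined)  # rough error bar on the trapz result
```

This is only an estimate, of course: it assumes the spline tracks the true function much more closely than the trapezoids do, which holds for smooth, well-sampled data.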
