Non-evenly spaced np.array with more points near the boundary - python

I have an interval, say (0, 9), and I have to generate points within it such that they are denser at both boundaries. I know the number of points, say n_x. A parameter alpha decides the "denseness" of the system, such that points are evenly spaced if alpha = 1.
The cross product of n_x and n_y is supposed to look like this (image omitted): points clustered near the boundaries and sparse in the middle.
So far the closest I've been to this is by using np.geomspace, but it's only dense near the left-hand side of the domain:
In [55]: np.geomspace(1,10,15) - 1
Out[55]:
array([0. , 0.17876863, 0.38949549, 0.63789371, 0.93069773,
1.27584593, 1.6826958 , 2.16227766, 2.72759372, 3.39397056,
4.17947468, 5.1054023 , 6.19685673, 7.48342898, 9. ])
I also tried dividing the domain into two parts, (0,4), (5,10) but that did not help either (since geomspace gives more points only at the LHS of the domain).
In [29]: np.geomspace(5,10, 15)
Out[29]:
array([ 5. , 5.25378319, 5.52044757, 5.80064693, 6.09506827,
6.40443345, 6.72950096, 7.07106781, 7.42997145, 7.80709182,
8.20335356, 8.61972821, 9.05723664, 9.51695153, 10. ])
Apart from that, I am a bit confused about which mathematical function I could use to generate such an array.

You can use the cumulative beta distribution function and map it to your range.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import beta
def denseboundspace(size=30, start=0, end=9, alpha=.5):
    x = np.linspace(0, 1, size)
    return start + beta.cdf(x, 2. - alpha, 2. - alpha) * (end - start)
n_x = denseboundspace()
#[0. 0.09681662 0.27092155 0.49228501 0.74944966 1.03538131
# 1.34503326 1.67445822 2.02038968 2.38001283 2.75082572 3.13054817
# 3.51705806 3.9083439 4.30246751 4.69753249 5.0916561 5.48294194
# 5.86945183 6.24917428 6.61998717 6.97961032 7.32554178 7.65496674
# 7.96461869 8.25055034 8.50771499 8.72907845 8.90318338 9. ]
plt.vlines(n_x, 0,2);
n_x = denseboundspace(size=13, start=1.2, end=7.8, alpha=1.0)
#[1.2 1.75 2.3 2.85 3.4 3.95 4.5 5.05 5.6 6.15 6.7 7.25 7.8 ]
plt.vlines(n_x, 0,2);
The spread is continuously controlled by the alpha parameter.
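As a quick sanity check (a sketch of my own, not part of the original answer): since the beta shape parameters are 2 - alpha, alpha must stay below 2, and pushing alpha above 1 should cluster the points toward the middle instead of the boundaries:
# assumes denseboundspace() from above is already defined
print(denseboundspace(size=9, alpha=0.2))  # strongly boundary-dense
print(denseboundspace(size=9, alpha=1.8))  # should cluster toward the center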


Scipy won't find the optimum (simple cosine function)

I am trying to estimate the argument of a cosine function using the scipy optimizer (yes, I am aware arccos could be used, but I don't want to do that).
The code + a demonstration:
import numpy
import scipy.optimize

def solver(data):
    Z = numpy.zeros(len(data))
    a = 0.003
    for i in range(len(data)):
        def minimizer(b):
            return numpy.abs(data[i] - numpy.cos(b))
        Z[i] = scipy.optimize.minimize(minimizer, a, bounds=[(0, numpy.pi)], method="L-BFGS-B").x[0]
    return Z

Y = numpy.zeros(100)
for i in range(100):
    Y[i] = numpy.cos(i / 25)
solver(Y)
The result is not good: when the argument of the cos function reaches values above 2, the estimation "skips over" the values and returns the maximum argument value instead.
array([0. , 0.04 , 0.08 , 0.12 , 0.16 ,
0.2 , 0.24 , 0.28 , 0.32 , 0.36 ,
0.4 , 0.44 , 0.48 , 0.52 , 0.56 ,
0.6 , 0.64 , 0.67999999, 0.72 , 0.75999999,
0.8 , 0.83999999, 0.88 , 0.92 , 0.95999999,
1. , 1.04 , 1.08 , 1.12 , 1.16 ,
1.2 , 1.24 , 1.28 , 1.32 , 1.36 ,
1.4 , 1.44 , 1.48 , 1.52 , 1.56 ,
1.6 , 1.64 , 1.68 , 1.72 , 1.76 ,
1.8 , 1.84 , 1.88 , 1.91999999, 1.95999999,
2. , 2.04 , 3.14159265, 3.14159265, 3.14159265,
3.14159265, 3.14159265, 3.14159265, 3.14159265, 3.14159265,...
What causes this phenomenon? Are there some other optimizers/settings that could help with the issue?
The reason is that for the function (for example) f = abs(cos(0.75*pi) - cos(z)), the gradient f' happens to vanish at z = pi, as can be seen from the following plot (image omitted).
If you check the result of the optimization procedure, you'll see that:
fun: array([0.29289322])
hess_inv: <1x1 LbfgsInvHessProduct with dtype=float64>
jac: array([0.])
message: b'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL'
nfev: 16
nit: 2
status: 0
success: True
x: array([3.14159265])
So the optimization procedure reached one of its convergence criteria. More detailed information about the criterion can be found in the L-BFGS-B documentation, which says:
gtol : float
The iteration will stop when max{|proj g_i | i = 1, ..., n} <= gtol where pg_i is the i-th component of the projected gradient.
So it eventually reaches a point z >= pi which is then projected back to z = pi due to the constraint and at this point the gradient of the function is zero and hence it stops. You can observe that by registering a callback which prints the current parameter vector:
def new_callback():
    step = 1
    def callback(xk):
        nonlocal step
        print('Step #{}: xk = {}'.format(step, xk))
        step += 1
    return callback
scipy.optimize.minimize(..., callback=new_callback())
Which outputs:
Step #1: xk = [0.006]
Step #2: xk = [3.14159265]
So at the second step it hit z >= pi which is projected back to z = pi.
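For illustration (my own addition, not from the original answer): tightening that tolerance does not help here, since the projected gradient at z = pi is exactly zero. You can verify by passing a stricter gtol through options, assuming the question's minimizer and a are in scope:
res = scipy.optimize.minimize(minimizer, a, bounds=[(0, numpy.pi)],
                              method="L-BFGS-B", options={'gtol': 1e-12})
print(res.x)  # still [3.14159265]; a projected gradient of 0 satisfies any positive gtol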
You can circumvent this problem by reducing the bounds, for example bounds=[(0, 0.99*np.pi)]. This will give you the expected result; however, the method won't converge and you will see something like:
fun: array([1.32930966e-09])
hess_inv: <1x1 LbfgsInvHessProduct with dtype=float64>
jac: array([0.44124484])
message: b'ABNORMAL_TERMINATION_IN_LNSRCH'
nfev: 160
nit: 6
status: 2
success: False
x: array([2.35619449])
Note the message ABNORMAL_TERMINATION_IN_LNSRCH. This is due to the nature of abs(x) and the fact that its derivative has a discontinuity at x = 0 (you can read more about that here).
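A related workaround (my own note, not from the original answer): since the kink comes from abs, minimizing the squared residual instead gives an objective that is differentiable everywhere, which avoids the abnormal line-search termination. As a drop-in replacement for the inner function in the question's solver:
def minimizer(b):
    # same minimum location as abs(...), but smooth at the zero crossing
    return (data[i] - numpy.cos(b))**2
Note that this does not cure the z = pi stall above, since the gradient of the squared residual also vanishes there.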
Alternative approach (finding the root)
For all the lines above we were trying to find a value z for which cos(z) == cos(0.75*pi) (or abs(cos(z) - cos(0.75*pi)) < eps). This problem is actually finding the root of the function f = cos(z) - cos(0.75*pi) where we can make use of the fact that cos is a continuous function. We need to set the boundaries a, b such that f(a)*f(b) < 0 (i.e. they have opposite sign). For example using bisect method:
res = scipy.optimize.bisect(f, 0, np.pi)
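Filling in the pieces the snippet above leaves out (a sketch of my own; f is not defined there), the question's loop could be rewritten as:
import numpy
import scipy.optimize

def solver_bisect(data):
    Z = numpy.zeros(len(data))
    for i in range(len(data)):
        # f(0) = data[i] - 1 and f(pi) = data[i] + 1 have opposite signs
        # whenever data[i] is strictly inside (-1, 1); values of exactly
        # +/-1 would need special-casing, since the bracket degenerates.
        f = lambda b: data[i] - numpy.cos(b)
        Z[i] = scipy.optimize.bisect(f, 0, numpy.pi)
    return Z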
Besides the general minimize method, SciPy has minimize_scalar specifically for 1-dimensional problems like here, and least_squares for minimizing a particular kind of functions that measure the difference between two quantities (such as the difference between cos(b) and diff[i] here). The latter performs well here, even without fine-tuning.
for i in range(len(data)):
    Z[i] = scipy.optimize.least_squares(lambda b: data[i] - numpy.cos(b), a, bounds=(0, numpy.pi)).x[0]
The function passed to least_squares is the thing we'd like to be close to 0, without an absolute value on it. I'll add that a=0.003 seems a suboptimal choice for a starting point, being so close to the boundary; nonetheless it works.
Also, as a_guest already posted, a scalar root finding method should do the same thing while throwing fewer surprises here, given that we already have a nice bracketing interval [0, pi]. Bisection is reliable but slow; Brent's method is what I'd probably use.
for i in range(len(data)):
    Z[i] = scipy.optimize.brentq(lambda b: data[i] - numpy.cos(b), 0, numpy.pi)

Python: Coordinates Boxes around Polyline

I have the following problem. I have a numpy array of coordinates (entries 0 to 2) and want to define the coordinates of small boxes between pairs of my coordinate list, instead of creating one huge box around the minimum and maximum of all the coordinates in the list. The boxes should have a range of 5 around the coordinate pairs, for example.
My list for example looks like:
[[ 24.313 294.679 1.5 1. 0. ]
[ 25.51 295.263 1.5 2. 0. ]
[ 26.743 294.526 1.5 3. 0. ]
...,
[ 30.362 307.242 10.779 95. 0. ]
[ 29.662 307.502 10.38 96. 0. ]
[ 29.947 308.99 11.147 97. 0. ]]
My first idea is to calculate the minimum and maximum of each pair and use itertools.product to create the coordinates for the small boxes. So I want to have a box around 24.313 294.679 1.5 and 25.51 295.263 1.5, next a box around 25.51 295.263 1.5 and 26.743 294.526 1.5, and so on. For better understanding, I want the coordinates like in the first picture (image omitted), but in 3D of course, and not like in the second picture (image omitted).
Is there any easy numpy, scipy approach to do this?
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
# create some data; in 2D so we can plot stuff
x = np.linspace(0, 2*np.pi, 10)
y = np.sin(x)
data = np.c_[x,y]
# --------------------------------------------------
# core bit: get boxes
# bboxes = np.array([data[:-1], np.diff(data, axis=0)]).transpose([1,2,0]) # shorter but with negative widths, etc
data_pairs = np.array([data[:-1], data[1:]])
minima = data_pairs.min(axis=0)
maxima = data_pairs.max(axis=0)
widths = maxima-minima
bboxes = np.array([minima, widths]).transpose(1,2,0)
# --------------------------------------------------
# plot
fig, ax = plt.subplots(1,1)
ax.plot(data[:,0], data[:,1], 'ko')
for bbox in bboxes:
    patch = Rectangle(xy=bbox[:,0], width=bbox[0,1], height=bbox[1,1], linewidth=0., alpha=0.5)
    ax.add_artist(patch)
plt.show()
with pads:
# padded boxes:
pad = 0.1
N, D = data.shape
correction = pad*np.ones((N-1,D))
padded = bboxes.copy()
padded[:,:,0] -= correction
padded[:,:,1] += 2*correction
fig, ax = plt.subplots(1,1)
ax.plot(data[:,0], data[:,1], 'ko')
for bbox in padded:
    patch = Rectangle(xy=bbox[:,0], width=bbox[0,1], height=bbox[1,1], linewidth=0., alpha=0.5, facecolor='red')
    ax.add_artist(patch)
ax.set_xlim(0-pad, 2*np.pi+pad)
ax.set_ylim(-1-pad, 1+pad)
plt.show()
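The question asks for 3D, and the min/max logic above is dimension-agnostic. Here is a sketch of my own (not part of the original answer), using the pad of 5 from the question and assuming coords holds only the x, y, z columns:
import numpy as np

def pairwise_boxes(coords, pad=5.0):
    # coords: (N, 3) array; one axis-aligned box per consecutive pair
    pairs = np.array([coords[:-1], coords[1:]])  # shape (2, N-1, 3)
    lo = pairs.min(axis=0) - pad                 # lower corner of each box
    hi = pairs.max(axis=0) + pad                 # upper corner of each box
    return lo, hi                                # each of shape (N-1, 3)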

Different values weibull pdf

I was wondering why the values of the Weibull PDF computed with the prebuilt function dweibull.pdf are roughly half of what they should be.
I did a test. For the same x I created the Weibull PDF for A=10 and K=2 twice, once by writing the formula myself and once with the prebuilt dweibull function.
import numpy as np
from scipy.stats import exponweib, dweibull
import matplotlib.pyplot as plt

K = 2.0
A = 10.0
x = np.arange(0., 20., 1)

# own function
def weib(data, a, k):
    return (k / a) * (data / a)**(k - 1) * np.exp(-(data / a)**k)

pdf1 = weib(x, A, K)
print(sum(pdf1))

# prebuilt function
dist = dweibull(K, 1, A)
pdf2 = dist.pdf(x)
print(sum(pdf2))

f = plt.figure()
suba = f.add_subplot(121)
suba.plot(x, pdf1)
suba.set_title('pdf own function')
subb = f.add_subplot(122)
subb.plot(x, pdf2)
subb.set_title('pdf dweibull')
f.show()
It seems that with dweibull the PDF values are about half what they should be, but that is wrong: the summation should total about 1, not around 0.5 as it does with dweibull. With the formula I wrote myself, the summation is around 1.
scipy.stats.dweibull implements the double Weibull distribution. Its support is the real line. Your function weib corresponds to the PDF of scipy's weibull_min distribution.
Compare your function weib to weibull_min.pdf:
In [128]: from scipy.stats import weibull_min
In [129]: x = np.arange(0, 20, 1.0)
In [130]: K = 2.0
In [131]: A = 10.0
Your implementation:
In [132]: weib(x, A, K)
Out[132]:
array([ 0. , 0.019801 , 0.03843158, 0.05483587, 0.0681715 ,
0.07788008, 0.08372116, 0.0857677 , 0.08436679, 0.08007445,
0.07357589, 0.0656034 , 0.05686266, 0.04797508, 0.03944036,
0.03161977, 0.02473752, 0.01889591, 0.014099 , 0.0102797 ])
scipy.stats.weibull_min.pdf:
In [133]: weibull_min.pdf(x, K, scale=A)
Out[133]:
array([ 0. , 0.019801 , 0.03843158, 0.05483587, 0.0681715 ,
0.07788008, 0.08372116, 0.0857677 , 0.08436679, 0.08007445,
0.07357589, 0.0656034 , 0.05686266, 0.04797508, 0.03944036,
0.03161977, 0.02473752, 0.01889591, 0.014099 , 0.0102797 ])
By the way, there is a mistake in this line of your code:
dist=dweibull(K,1,A)
The order of the parameters is shape, location, scale, so you are setting the location parameter to 1. That's why the values in your second plot are shifted by one. That line should have been
dist = dweibull(K, 0, A)
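A quick numerical check of the factor of two (a sketch of my own, not from the original answer): for x above the location parameter, the double Weibull density is exactly half the weibull_min density with the same shape and scale, because the other half of the probability mass sits below the location:
import numpy as np
from scipy.stats import dweibull, weibull_min

x = np.arange(0.0, 20.0, 1.0)
K, A = 2.0, 10.0
print(np.allclose(dweibull.pdf(x, K, 0, A), 0.5 * weibull_min.pdf(x, K, scale=A)))
# True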

Improving a numpy implementation of a simple spring network

I wanted a very simple spring system written in numpy. The system would be defined as a simple network of knots, linked by links. I'm not interested in evaluating the system over time; instead I want to go from an initial state, change a variable (usually move a knot to a new position) and solve the system until it reaches a stable state (last applied force is below a given threshold). The knots have no mass, there's no gravity, and the forces are all derived from each link's current length/initial length. The only "special" variable is that each knot can be set as "anchored" (it doesn't move).
So I wrote this simple solver below, and included a simple example. Jump to the very end for my question.
import numpy as np

# numpy.core.umath_tests.inner1d (row-wise dot products) was removed from
# recent NumPy; this einsum equivalent keeps the call sites below unchanged
def inner1d(a, b):
    return np.einsum('ij,ij->i', a, b)

np.set_printoptions(precision=4)
np.set_printoptions(suppress=True)
np.set_printoptions(linewidth=150)
np.set_printoptions(threshold=10)
def solver(kPos, kAnchor, link0, link1, w0, cycles=1000, precision=0.001, dampening=0.1, debug=False):
    """
    kPos : vector array - knot positions
    kAnchor : float array - knot's anchor state, 0 = moves freely, 1 = anchored (not moving)
    link0 : int array - array of links connecting each knot. each index corresponds to a knot
    link1 : int array - array of links connecting each knot. each index corresponds to a knot
    w0 : float array - initial link lengths
    cycles : int - eval stops when n cycles reached
    precision : float - eval stops when highest applied force is below this value
    dampening : float - keeps system stable during each iteration
    """
    kPos = np.asarray(kPos)
    pos = np.array(kPos)  # copy of kPos
    kAnchor = 1 - np.clip(np.asarray(kAnchor).astype(float), 0, 1)[:, None]
    link0 = np.asarray(link0).astype(int)
    link1 = np.asarray(link1).astype(int)
    w0 = np.asarray(w0).astype(float)
    F = np.zeros(pos.shape)
    i = 0
    for i in range(cycles):
        # Init force applied per knot
        F = np.zeros(pos.shape)
        # Calculate forces
        AB = pos[link1] - pos[link0]   # get link vectors between knots
        w1 = np.sqrt(inner1d(AB, AB))  # get link lengths
        AB /= w1[:, None]              # normalize link vectors
        f = (w1 - w0)[:, None] * AB    # calculate force vectors
        # Apply force vectors on each knot
        np.add.at(F, link0, f)
        np.subtract.at(F, link1, f)
        # Update point positions
        pos += F * dampening * kAnchor
        # If the maximum force applied is below our precision criteria, exit
        if np.amax(F) < precision:
            break
    # Debug info
    if debug:
        print('Iterations: %s' % i)
        print('Max Force: %s' % np.amax(F))
    return pos
Here's some test data to show how it works. In this case I'm using a grid, but in reality this can be any type of network, like a string with many knots or a mess of polygons:
import cProfile
# Create a 5x5 3D knot grid
z = np.linspace(-0.5, 0.5, 5)
x = np.linspace(-0.5, 0.5, 5)[::-1]
x,z = np.meshgrid(x,z)
kPos = np.array([np.array(thing) for thing in zip(x.flatten(), z.flatten())])
kPos = np.insert(kPos, 1, 0, axis=1)
'''
array([[-0.5 , 0. , 0.5 ],
[-0.25, 0. , 0.5 ],
[ 0. , 0. , 0.5 ],
...,
[ 0. , 0. , -0.5 ],
[ 0.25, 0. , -0.5 ],
[ 0.5 , 0. , -0.5 ]])
'''
# Define the links connecting each knots
link0 = [0,1,2,3,5,6,7,8,10,11,12,13,15,16,17,18,20,21,22,23,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19]
link1 = [1,2,3,4,6,7,8,9,11,12,13,14,16,17,18,19,21,22,23,24,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24]
AB = kPos[link0]-kPos[link1]
w0 = np.sqrt(inner1d(AB,AB)) # this is a square grid, each link's initial length will be 0.25
# Set the anchor states
kAnchor = np.zeros(len(kPos)) # All knots will be free floating
kAnchor[12] = 1 # Middle knot will be anchored
This is what the grid looks like (image omitted).
If we run my code using this data, nothing will happen since the links aren't pushing or stretching:
print(np.allclose(kPos, solver(kPos, kAnchor, link0, link1, w0, debug=True)))
# Returns True
# Iterations: 0
# Max Force: 0.0
Now lets move that middle anchored knot up a bit and solve the system:
# Move the center knot up a little
kPos[12] = np.array([0,0.3,0])
# eval the system
new = solver(kPos, kAnchor, link0, link1, w0, debug=True) # positions will have moved
#Iterations: 102
#Max Force: 0.000976603249133
# Rerun with cProfile to see how fast it runs
cProfile.run('solver(kPos, kAnchor, link0, link1, w0)')
# 520 function calls in 0.008 seconds
And here's what the grid looks like after being pulled by that single anchored knot (image omitted).
Question:
My actual use cases are a little more complex than this example and solve a little too slowly for my taste: 100-200 knots with a network of anywhere between 200-300 links take a few seconds to solve.
How can I make my solver function run faster? I'd consider Cython, but I have zero experience with C. Any help would be greatly appreciated.
Your method, at a cursory glance, appears to be an explicit under-relaxation type of method. Calculate the residual force at each knot, apply a factor of that force as a displacement, repeat until convergence. It's the repeating until convergence that takes the time. The more points you have, the longer each iteration takes, but you also need more iterations for the constraints at one end of the mesh to propagate to the other.
Have you considered an implicit method? Write the equation for the residual force at each non-constrained node, assemble them into a large matrix, and solve in one step. Information now propagates across the entire problem in a single step. As an additional benefit, the matrix you construct should be sparse, which scipy has a module for.
Wikipedia: explicit and implicit methods
EDIT: Example of an implicit method matching (roughly) your problem. This solution is linear, so it doesn't take into account the effect of the calculated displacement on the force. You would need to iterate (or use non-linear techniques) to calculate this. Hope it helps.
#!/usr/bin/python3
import matplotlib.pyplot as pp
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import scipy as sp
import scipy.sparse
import scipy.sparse.linalg
#------------------------------------------------------------------------------#
# Generate a grid of knots
nX = 10
nY = 10
x = np.linspace(-0.5, 0.5, nX)
y = np.linspace(-0.5, 0.5, nY)
x, y = np.meshgrid(x, y)
knots = list(zip(x.flatten(), y.flatten()))
# Create links between the knots
links = []
# Horizontal links
for i in range(0, nY):
    for j in range(0, nX - 1):
        links.append((i*nX + j, i*nX + j + 1))
# Vertical links
for i in range(0, nY - 1):
    for j in range(0, nX):
        links.append((i*nX + j, (i + 1)*nX + j))
# Create constraints. This dict takes a knot index as a key and returns the
# fixed z-displacement associated with that knot.
constraints = {
    0          : 0.0,
    nX - 1     : 0.0,
    nX*(nY - 1): 0.0,
    nX*nY - 1  : 1.0,
    2*nX + 4   : 1.0,
}
#------------------------------------------------------------------------------#
# Matrix i-coordinate, j-coordinate and value
Ai = []
Aj = []
Ax = []
# Right hand side array
B = np.zeros(len(knots))
# Loop over the links
for link in links:
    # Link geometry
    displacement = np.array([ knots[link[1]][i] - knots[link[0]][i] for i in range(2) ])
    distance = np.sqrt(displacement.dot(displacement))
    # For each node
    for i in range(2):
        # If it is not a constraint, add the force associated with the link to
        # the equation of the knot
        if link[i] not in constraints:
            Ai.append(link[i])
            Aj.append(link[i])
            Ax.append(-1/distance)
            Ai.append(link[i])
            Aj.append(link[not i])
            Ax.append(+1/distance)
        # If it is a constraint add a diagonal and a value
        else:
            Ai.append(link[i])
            Aj.append(link[i])
            Ax.append(+1.0)
            B[link[i]] += constraints[link[i]]
# Create the matrix and solve
A = sp.sparse.coo_matrix((Ax, (Ai, Aj))).tocsr()
X = sp.sparse.linalg.lsqr(A, B)[0]
#------------------------------------------------------------------------------#
# Plot the links
fg = pp.figure()
ax = fg.add_subplot(111, projection='3d')
for link in links:
    x = [ knots[i][0] for i in link ]
    y = [ knots[i][1] for i in link ]
    z = [ X[i] for i in link ]
    ax.plot(x, y, z)
pp.show()

2 dimensional interpolation problem

I have data on the x and y axes and the output is on z. For example:
y = 10
x = [1,2,3,4,5,6]
z = [2.3,3.4,5.6,7.8,9.6,11.2]
y = 20
x = [1,2,3,4,5,6]
z = [4.3,5.4,7.6,9.8,11.6,13.2]
y = 30
x = [1,2,3,4,5,6]
z = [6.3,7.4,8.6,10.8,13.6,15.2]
How can I find the value of z when y = 15 and x = 3.5?
I was trying to use scipy, but I am very new to it.
Thanks a lot for the help,
vibhor
scipy.interpolate.bisplrep
Reference:
http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.bisplrep.html
import numpy
from scipy import interpolate

x = [1,2,3,4,5,6]
y = [10,20,30]
Y = numpy.array([[i]*len(x) for i in y])
X = numpy.array([x for i in y])
Z = numpy.array([[2.3,3.4,5.6,7.8,9.6,11.2],
                 [4.3,5.4,7.6,9.8,11.6,13.2],
                 [6.3,7.4,8.6,10.8,13.6,15.2]])
tck = interpolate.bisplrep(X, Y, Z)
print(interpolate.bisplev(3.5, 15, tck))
7.84921875
EDIT:
The solution above does not give you a perfect fit. Check:
print(interpolate.bisplev(x, y, tck))
[[ 2.2531746 4.2531746 6.39603175]
[ 3.54126984 5.54126984 7.11269841]
[ 5.5031746 7.5031746 8.78888889]
[ 7.71111111 9.71111111 10.9968254 ]
[ 9.73730159 11.73730159 13.30873016]
[ 11.15396825 13.15396825 15.2968254 ]]
To overcome this, interpolate with polynomials of 5th degree in the x direction and 2nd degree in the y direction:
tck = interpolate.bisplrep(X, Y, Z, kx=5, ky=2)
print(interpolate.bisplev(x, y, tck))
[[ 2.3 4.3 6.3]
[ 3.4 5.4 7.4]
[ 5.6 7.6 8.6]
[ 7.8 9.8 10.8]
[ 9.6 11.6 13.6]
[ 11.2 13.2 15.2]]
This yields:
print(interpolate.bisplev(3.5, 15, tck))
7.88671875
Plotting:
reference http://matplotlib.sourceforge.net/examples/mplot3d/surface3d_demo.html
import matplotlib.pyplot as plt
from matplotlib import cm

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.jet)
plt.show()
Given (not as Python code, since the second assignment would obliterate the first in each case, of course;-):
y = 10
x = [1,2,3,4,5,6]
z = [2.3,3.4,5.6,7.8,9.6,11.2]
y = 20
x = [1,2,3,4,5,6]
z = [4.3,5.4,7.6,9.8,11.6,13.2]
you ask: "how can i find the value of z when y = 15 x = 3.5"?
Since you're looking at a point exactly equidistant in both x and y from the given "grid", you just take the midpoint between the grid values (if you had values not equidistant, you'd take a proportional midpoint, see later). So for y=10, the z values for x 3 and 4 are 5.6 and 7.8, so for x 3.5 you estimate their midpoint, 6.7; and similarly for y=20 you estimate the midpoint between 7.6 and 9.8, i.e., 8.7. Finally, since you have y=15, the midpoint between 6.7 and 8.7 is your final interpolated value for z: 7.7.
Say you had y=13 and x=3.8 instead. Then for x you'd take the values 80% of the way, i.e.:
for y=10, 0.2*5.6 + 0.8*7.8 -> 7.36
for y=20, 0.2*7.6 + 0.8*9.8 -> 9.36
Now you want the z 30% of the way between these, 0.7*7.36 + 0.3*9.36 -> 7.96, and that's z.
This is linear interpolation, and it's really very simple. Do you want to compute it by hand, or find routines that do it for you (given e.g. numpy arrays as "the grids")? Even in the latter case, I hope this "manual" explanation (showing what you're doing in the most elementary of arithmetical terms) can help you understand what you're doing...;-).
There are more advanced forms of interpolation, of course -- do you need those, or does linear interpolation suffice for your use case?
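If you'd rather have a routine do it, here is a minimal sketch (my own, using scipy.interpolate.RegularGridInterpolator, which performs exactly this bilinear computation on a regular grid):
import numpy as np
from scipy.interpolate import RegularGridInterpolator

x = np.array([1, 2, 3, 4, 5, 6])
y = np.array([10, 20, 30])
# z[i, j] is the value at (x[i], y[j])
z = np.array([[2.3, 3.4, 5.6, 7.8, 9.6, 11.2],
              [4.3, 5.4, 7.6, 9.8, 11.6, 13.2],
              [6.3, 7.4, 8.6, 10.8, 13.6, 15.2]]).T

interp = RegularGridInterpolator((x, y), z, method='linear')
print(interp([[3.5, 15]]))  # [7.7], matching the hand computation above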
I would say just take the average of the values around it. So if you need X=3.5 and Y=15, you average the z values at (3,10), (3,20), (4,10) and (4,20). Since I have no idea what data you are dealing with, I am not sure if the exact proximity would matter, in which case you can just stick with the average, or if you need to do some sort of inverse distance weighting; a sketch of the latter follows.
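For the inverse-distance-weighting variant just mentioned, a small sketch of my own (the four surrounding grid points are hard-coded for the (3.5, 15) query):
import numpy as np

# Four surrounding grid points as (x, y, z), plus the query point
pts = np.array([[3, 10, 5.6], [3, 20, 7.6], [4, 10, 7.8], [4, 20, 9.8]])
q = np.array([3.5, 15.0])

# Weight each neighbour by the inverse of its distance to the query
d = np.linalg.norm(pts[:, :2] - q, axis=1)
w = 1.0 / d
print(np.sum(w * pts[:, 2]) / np.sum(w))
# 7.7 here, the plain average, since all four neighbours are equidistant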
