Compute the B-spline basis of a bivariate spline - Python

I need to evaluate (u, v) queries on a bivariate spline in the B-spline basis. With this answer I have a good function (copied below) that leverages scipy.dfitpack.bispeu to get the results I need.
import numpy as np
import scipy.interpolate as si
def fitpack_bispeu(cv, u, v, count_u, count_v, degree_u, degree_v):
    # cv = grid of control vertices
    # u, v = lists of u and v component queries
    # count_u, count_v = grid counts along the u and v directions
    # degree_u, degree_v = curve degree along the u and v directions

    # Calculate knot vectors for both u and v
    tck_u = np.clip(np.arange(count_u+degree_u+1)-degree_u, 0, count_u-degree_u)  # knot vector in the u direction
    tck_v = np.clip(np.arange(count_v+degree_v+1)-degree_v, 0, count_v-degree_v)  # knot vector in the v direction

    # Compute queries
    positions = np.empty((u.shape[0], cv.shape[1]))
    for i in range(cv.shape[1]):
        positions[:, i] = si.dfitpack.bispeu(tck_u, tck_v, cv[:, i], degree_u, degree_v, u, v)[0]
    return positions
This function works, but it occurred to me that I could get better performance by calculating the bivariate basis ahead of time and then getting my result via a dot product. Here's what I wrote to compute the basis.
def basis_bispeu(cv, u, v, count_u, count_v, degree_u, degree_v):
    # Calculate knot vectors for both u and v
    tck_u = np.clip(np.arange(count_u+degree_u+1)-degree_u, 0, count_u-degree_u)  # knot vector in the u direction
    tck_v = np.clip(np.arange(count_v+degree_v+1)-degree_v, 0, count_v-degree_v)  # knot vector in the v direction

    # Compute basis for each control vertex
    basis = np.empty((u.shape[0], cv.shape[0]))
    cv_ = np.identity(len(cv))
    for i in range(cv.shape[0]):
        basis[:, i] = si.dfitpack.bispeu(tck_u, tck_v, cv_[i], degree_u, degree_v, u, v)[0]
    return basis
Let's compare and profile with cProfile:
# A test grid of control vertices
cv = np.array([[-0.5 , -0. , 0.5 ],
[-0.5 , -0. , 0.33333333],
[-0.5 , -0. , 0. ],
[-0.5 , 0. , -0.33333333],
[-0.5 , 0. , -0.5 ],
[-0.16666667, 1. , 0.5 ],
[-0.16666667, -0. , 0.33333333],
[-0.16666667, 0.5 , 0. ],
[-0.16666667, 0.5 , -0.33333333],
[-0.16666667, 0. , -0.5 ],
[ 0.16666667, -0. , 0.5 ],
[ 0.16666667, -0. , 0.33333333],
[ 0.16666667, -0. , 0. ],
[ 0.16666667, 0. , -0.33333333],
[ 0.16666667, 0. , -0.5 ],
[ 0.5 , -0. , 0.5 ],
[ 0.5 , -0. , 0.33333333],
[ 0.5 , -0.5 , 0. ],
[ 0.5 , 0. , -0.33333333],
[ 0.5 , 0. , -0.5 ]])
count_u = 4
count_v = 5
degree_u = 3
degree_v = 3
n = 10**6 # make 1 million random queries
u = np.random.random(n) * (count_u-degree_u)
v = np.random.random(n) * (count_v-degree_v)
# get the result from fitpack_bispeu
result_bispeu = fitpack_bispeu(cv,u,v,count_u,count_v,degree_u,degree_v) # 0.482 seconds
# precompute the basis for the same grid
basis = basis_bispeu(cv,u,v,count_u,count_v,degree_u,degree_v) # 2.124 seconds
# get results via dot product
result_basis = np.dot(basis,cv) # 0.028 seconds (17x faster than fitpack_bispeu)
# all close?
print(np.allclose(result_basis, result_bispeu)) # True
With a 17x speed increase, pre-calculating the basis seems like the way to go, but basis_bispeu is rather slow.
QUESTION
Is there a faster way to compute the basis of a bivariate spline? I know of de Boor's algorithm for computing a similar basis on a curve. Are there similar algorithms for bivariate splines that, once written with Numba or Cython, could yield better performance?
Otherwise, can the basis_bispeu function above be improved to compute the basis faster? Maybe there are built-in NumPy functions I'm not aware of that could help.
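For what it's worth, since the surface is a tensor product of two curves, one possible approach (just a sketch, not a verified answer) is to build the two univariate bases with the same identity trick via scipy.interpolate.BSpline and combine them with a per-query outer product. That needs only count_u + count_v one-dimensional evaluations instead of count_u * count_v bispeu calls. The u-major flattening below is an assumption meant to match the control-vertex ordering used by fitpack_bispeu, so it should be checked with np.allclose against basis_bispeu:
def basis_outer(u, v, count_u, count_v, degree_u, degree_v):
    # Same open-uniform knot vectors as in the functions above
    tck_u = np.clip(np.arange(count_u+degree_u+1)-degree_u, 0, count_u-degree_u)
    tck_v = np.clip(np.arange(count_v+degree_v+1)-degree_v, 0, count_v-degree_v)

    # Univariate bases via the identity trick: one 1-D spline per control point
    basis_u = np.empty((u.shape[0], count_u))
    basis_v = np.empty((v.shape[0], count_v))
    eye_u = np.identity(count_u)
    eye_v = np.identity(count_v)
    for i in range(count_u):
        basis_u[:, i] = si.BSpline(tck_u, eye_u[i], degree_u)(u)
    for j in range(count_v):
        basis_v[:, j] = si.BSpline(tck_v, eye_v[j], degree_v)(v)

    # Tensor-product basis: per-query outer product, flattened with u as the
    # outer index (assumed to match the flat cv ordering used above)
    return np.einsum('ni,nj->nij', basis_u, basis_v).reshape(u.shape[0], -1)
If the ordering assumption holds, the result would be used the same way as above: np.dot(basis_outer(u, v, count_u, count_v, degree_u, degree_v), cv).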

Related

python3.7: qpsolver.solve_qp() method gets stuck

I have a QP problem in my reinforcement learning system, which calculates the safe action with the minimum distance to the original action. This is the background.
Here is part of my code:
P = np.matrix([[1.,0.,0.,0.],[0.,1.,0.,0.],[0.,0.,1.,0.],[0.,0.,0.,1.]])
q = np.array(-u0).flatten() #
G = gx
h = np.array(c - fx).flatten()
lower_bound = np.array([0., -140., -200 * self.Pgs_M, 0.])
upper_bound = np.array([143., 140., 200 * self.Pgs_M, 0.])
safe_action = solve_qp(P,q,G,h,lb=lower_bound, ub=upper_bound)
u0 is a matrix with the shape [4, 1], and my formulation is correct; G and h can be calculated from u0, and self.Pgs_M is a constant number. Meanwhile, I have considered the condition that solve_qp returns None.
When my reinforcement learning training is running, it sometimes gets stuck. I printed the control input and initial state right before my program got stuck. It shows:
===============debug================
u0 = [[ 38.28203142]
[-140. ]
[-144.34985435]
[ 0. ]]
ini_spd = [[ 0. ]
[ 0. ]
[-203.67992371]
[ 203.67992371]
[ 0. ]
[ -0. ]]
So I used these inputs to check my QP-solving code on its own; it actually worked and returned None, because this problem cannot be solved:
===============debug================
u0 = [[ 38.28203142]
[-140. ]
[-144.34985435]
[ 0. ]]
ini_spd = [[ 0.00000000e+00]
[ 0.00000000e+00]
[-2.03679924e+10]
[ 2.03679924e+10]
[ 0.00000000e+00]
[-0.00000000e+00]]
safe_aciton = None
I was wondering why my program might get stuck, even though I have already handled the None return. What factors could influence the QP solvers and the Python program?
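For anyone who wants to reproduce the standalone check described above, a minimal sketch could look like the following. gx, c, fx and Pgs_M are not shown in the question, so the values below are hypothetical placeholders, and passing an explicit solver is an assumption (older qpsolvers releases fall back to a default solver):
import numpy as np
from qpsolvers import solve_qp

# Hypothetical placeholders for the quantities not shown in the question
u0 = np.array([[38.28203142], [-140.], [-144.34985435], [0.]])
Pgs_M = 1.0
gx = np.eye(4)            # stands in for G
c_minus_fx = np.ones(4)   # stands in for h = c - fx

P = np.eye(4)
q = np.asarray(-u0).flatten()
lb = np.array([0., -140., -200 * Pgs_M, 0.])
ub = np.array([143., 140., 200 * Pgs_M, 0.])

safe_action = solve_qp(P, q, gx, c_minus_fx, lb=lb, ub=ub, solver="quadprog")
if safe_action is None:
    # Infeasible problem or solver failure: fall back instead of blocking
    print("QP returned None, falling back to a clipped u0")
else:
    print("safe_action =", safe_action)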

PyMC3 Normal with variance per column

I am trying to define a pymc3.Normal variable with the following as mu:
import numpy as np
import pymc3 as pm
mx = np.array([[0.25 , 0.5 , 0.75 , 1. ],
[0.25 , 0.333, 0.25 , 0. ],
[0.25 , 0.167, 0. , 0. ],
[0.25 , 0. , 0. , 0. ]])
epsilon = pm.Gamma('epsilon', alpha=10, beta=10)
p_ = pm.Normal('p_', mu=mx, shape = mx.shape, sd = epsilon)
The problem is that all random variables in p_ get the same std (epsilon). I would like the first row to use epsilon1, the second row epsilon2, etc.
How can I do that?
One can pass an argument for the shape parameter to achieve this. To demonstrate this, let's make some fake data to pass as observed, where we use fixed values for epsilon that we can compare against the inferred ones.
Example Model
import numpy as np
import pymc3 as pm
import arviz as az
# priors
mu = np.array([[0.25 , 0.5 , 0.75 , 1. ],
[0.25 , 0.333, 0.25 , 0. ],
[0.25 , 0.167, 0. , 0. ],
[0.25 , 0. , 0. , 0. ]])
alpha, beta = 10, 10
# fake data
np.random.seed(2019)
# row vector will use a different sd for each column
sd = np.random.gamma(alpha, 1.0/beta, size=(1,4))
# generate 100 fake observations of the (4,4) random variables
Y = np.random.normal(loc=mu, scale=sd, size=(100,4,4))
# true column sd's
print(sd)
# [[0.90055471 1.24522079 0.85846659 1.19588367]]
# mean sd's per column
print(np.mean(np.std(Y, 0), 0))
# [0.92028042 1.24437592 0.83383181 1.22717313]
# model
with pm.Model() as model:
    # use a (1,4) matrix to pool variance by columns
    epsilon = pm.Gamma('epsilon', alpha=10, beta=10, shape=(1, mu.shape[1]))
    p_ = pm.Normal('p_', mu=mu, sd=epsilon, shape=mu.shape, observed=Y)
    trace = pm.sample(random_seed=2019)
This samples well, and gives a posterior summary (shown as a table in the original post) whose HPD intervals clearly bound the true values of the standard deviations.
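(The summary table in the original post was an image; something along the lines of the call below should reproduce it, although the exact API depends on the pymc3/arviz versions in use.)
# Posterior summary for the per-column standard deviations
with model:
    print(az.summary(trace, var_names=["epsilon"]))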

Automatically calculate distances between nodes in a graph using NetworkX or another Python framework

Let's say we have a complete graph G with nodes A, B, C created with the networkx library.
Each node has a coordinate attribute like {x: 2, y: 4}. Currently the edge weights are 1, but they should be the Euclidean distance between nodes. I can calculate them with for loops, but that is extremely inefficient.
So my question is how can I calculate the edge weights in an efficient manner?
Note: I found this but it is an old question.
Edit: I created my network as follows:
# Get a complete graph
rag = nx.complete_graph(L)
if L > 0:
    for i, node in enumerate(nodes):
        x, y = get_coord()  # This function can't be changed
        rag.nodes[i]["x"] = x
        rag.nodes[i]["y"] = y
If you have the data in advance, you can use numpy and/or pandas to first calculate the distances in bulk, and then load the data into a graph.
For instance, we can first construct an n×2 matrix with:
import numpy as np
A = np.array([list(get_coord()) for _ in range(L)])
We can then use scipy to calculate a 2D matrix of distances, for example:
from scipy.spatial.distance import pdist, squareform
B = squareform(pdist(A))
If for instance A is:
>>> A
array([[ 0.16401235, -0.60536247],
[ 0.19705099, 1.74907373],
[ 1.13078545, 2.03750256],
[ 0.52009543, 0.25292921],
[-0.8018697 , -1.45384157],
[-1.37731085, 0.20679761],
[-1.52384856, 0.14468123],
[-0.12788698, 0.22348265],
[-0.27158565, 0.21804304],
[-0.03256846, -2.85381269]])
then B will be:
>>> B
array([[ 0. , 2.354668 , 2.81414033, 0.92922536, 1.28563016,
1.74220584, 1.84700839, 0.8787431 , 0.93152683, 2.25702734],
[ 2.354668 , 0. , 0.97726722, 1.53062279, 3.35507213,
2.20391262, 2.35277933, 1.5598118 , 1.60114811, 4.60861026],
[ 2.81414033, 0.97726722, 0. , 1.88617187, 3.99056885,
3.10516145, 3.26034573, 2.20792312, 2.29718907, 5.02775867],
[ 0.92922536, 1.53062279, 1.88617187, 0. , 2.15885579,
1.897967 , 2.04680841, 0.64865114, 0.79244935, 3.15551623],
[ 1.28563016, 3.35507213, 3.99056885, 2.15885579, 0. ,
1.75751388, 1.7540036 , 1.80766956, 1.75396674, 1.59741777],
[ 1.74220584, 2.20391262, 3.10516145, 1.897967 , 1.75751388,
0. , 0.1591595 , 1.24953527, 1.10578239, 3.34300278],
[ 1.84700839, 2.35277933, 3.26034573, 2.04680841, 1.7540036 ,
0.1591595 , 0. , 1.39818396, 1.25440996, 3.34886281],
[ 0.8787431 , 1.5598118 , 2.20792312, 0.64865114, 1.80766956,
1.24953527, 1.39818396, 0. , 0.14380159, 3.07877122],
[ 0.93152683, 1.60114811, 2.29718907, 0.79244935, 1.75396674,
1.10578239, 1.25440996, 0.14380159, 0. , 3.08114051],
[ 2.25702734, 4.60861026, 5.02775867, 3.15551623, 1.59741777,
3.34300278, 3.34886281, 3.07877122, 3.08114051, 0. ]])
And now we can construct a graph based on that matrix:
G = nx.from_numpy_matrix(B)
Now we see that the weights match:
>>> G.get_edge_data(2,5)
{'weight': 3.105161451820312}
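If the coordinate attributes are still needed on the rebuilt graph, they can be copied over afterwards, for example:
# Reattach the original coordinates to the nodes of the new graph
for i, (x, y) in enumerate(A):
    G.nodes[i]["x"] = x
    G.nodes[i]["y"] = y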

Raise diagonal matrix to the negative power 1/2

I am trying to compute the matrix given by the following equation:
S = (D^−1/2) * W * (D^−1/2)
where D is a diagonal matrix of this form:
array([[ 0.59484625, 0. , 0. , 0. ],
[ 0. , 0.58563893, 0. , 0. ],
[ 0. , 0. , 0.58280472, 0. ],
[ 0. , 0. , 0. , 0.58216725]])
and W:
array([[ 0. , 0.92311635, 0.94700586, 0.95599748],
[ 0.92311635, 0. , 0.997553 , 0.99501248],
[ 0.94700586, 0.997553 , 0. , 0.9995501 ],
[ 0.95599748, 0.99501248, 0.9995501 , 0. ]])
I tried to compute D^-1/2 using the numpy functions linalg.matrix_power(D, -1/2) and numpy.power(D, -1/2). The matrix_power function raises TypeError: exponent must be an integer, and the numpy.power function raises RuntimeWarning: divide by zero encountered in power.
How can I compute the power -1/2 of a diagonal matrix? Please help.
If you can update D in place (as in your own answer), then simply update the items at its diagonal indices and call np.dot:
>>> D[np.diag_indices(4)] = 1/ (D.diagonal()**0.5)
>>> np.dot(D, W).dot(D)
array([[ 0. , 0.32158153, 0.32830723, 0.33106193],
[ 0.32158153, 0. , 0.34047794, 0.33923936],
[ 0.32830723, 0.34047794, 0. , 0.33913717],
[ 0.33106193, 0.33923936, 0.33913717, 0. ]])
Or create a new zeros array and then fill its diagonal elements with 1/ (D.diagonal()**0.5):
>>> arr = np.zeros(D.shape)
>>> np.fill_diagonal(arr, 1/ (D.diagonal()**0.5))
>>> np.dot(arr, W).dot(arr)
array([[ 0. , 0.32158153, 0.32830723, 0.33106193],
[ 0.32158153, 0. , 0.34047794, 0.33923936],
[ 0.32830723, 0.34047794, 0. , 0.33913717],
[ 0.33106193, 0.33923936, 0.33913717, 0. ]])
I got the answer by working it out in explicit mathematical terms, but would love to see any straightforward one-liners :)
import math

def compute_diagonal_to_negative_power():
    for i in range(4):
        for j in range(4):
            if i == j:
                element = D[i][j]
                numerator = 1
                denominator = math.sqrt(element)
                D[i][j] = numerator / denominator
    return D
diagonal_matrix = compute_diagonal_to_negative_power()
S = np.dot(diagonal_matrix, W).dot(diagonal_matrix)
print(S)
"""
[[ 0. 0.32158153 0.32830723 0.33106193]
[ 0.32158153 0. 0.34047794 0.33923936]
[ 0.32830723 0.34047794 0. 0.33913718]
[ 0.33106193 0.33923936 0.33913718 0. ]]
"""
Source: https://math.stackexchange.com/questions/340321/raising-a-square-matrix-to-a-negative-half-power
You can do the following:
numpy.power(D,-1/2, where=(D!=0))
And then you will avoid the warning:
RuntimeWarning: divide by zero encountered in power
With the where mask, numpy raises each matrix entry to the power -1/2 (i.e. one over its square root) only where the entry is nonzero, so you no longer divide by zero.
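One caveat worth double-checking against your numpy version: when no out array is supplied, the entries where the where condition is False are left uninitialized rather than set to zero, so it is safer to pass a pre-zeroed output array. A sketch:
# Pre-zeroed output keeps the masked-out (off-diagonal) entries at exactly zero
D_inv_sqrt = np.zeros_like(D)
np.power(D, -0.5, where=(D != 0), out=D_inv_sqrt)
S = np.dot(D_inv_sqrt, W).dot(D_inv_sqrt)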

Speeding up a moving time delta with irregular time intervals in a numpy array

I want to calculate the 10-second difference of a dataset where the time increments are irregular. The data exists in two 1-D arrays of equal length, one for the time and the other for the data values.
After some poking around I was able to come up with a solution, but it's too slow, (I suspect) because it has to iterate through every item in the array.
My general method is to iterate through the time array, and for each time value find the index of the time value that is x seconds earlier. I then use those indices on the data array to calculate the difference.
The code is shown below.
First, the find_closest function from Bi Rico:
def find_closest(A, target):
    # A must be sorted
    idx = A.searchsorted(target)
    idx = np.clip(idx, 1, len(A)-1)
    left = A[idx-1]
    right = A[idx]
    idx -= target - left < right - target
    return idx
Which I then use in the following manner:
def trailing_diff(time_array, data_array, seconds):
    trailing_list = []
    for i in xrange(len(time_array)):
        now = time_array[i]
        if now < seconds:
            trailing_list.append(0)
        else:
            then = find_closest(time_array, now-seconds)
            trailing_list.append(data_array[i]-data_array[then])
    return np.asarray(trailing_list)
Unfortunately this doesn't scale particularly well, and I'd like to be able to calculate this (and plot it) on the fly.
Any thoughts on how I can make it faster?
EDIT: input/output
In [48]:time1
Out[48]:
array([ 0.57200003, 0.579 , 0.58800006, 0.59500003,
0.5999999 , 1.05999994, 1.55900002, 2.00900006,
2.57599998, 3.05599999, 3.52399993, 4.00699997,
4.09599996, 4.57299995, 5.04699993, 5.52099991,
6.09299994, 6.55999994, 7.04099989, 7.50900006,
8.07500005, 8.55799985, 9.023 , 9.50699997,
9.59399986, 10.07200003, 10.54200006, 11.01999998,
11.58899999, 12.05699992, 12.53799987, 13.00499988,
13.57599998, 14.05599999, 14.52399993, 15.00199985,
15.09299994, 15.57599998, 16.04399991, 16.52199984,
17.08899999, 17.55799985, 18.03699994, 18.50499988,
19.0769999 , 19.5539999 , 20.023 , 20.50099993,
20.59099984, 21.07399988])
In [49]:weight1
Out[49]:
array([ 82.268, 82.268, 82.269, 82.272, 82.275, 82.291, 82.289,
82.288, 82.287, 82.287, 82.293, 82.303, 82.303, 82.314,
82.321, 82.333, 82.356, 82.368, 82.386, 82.398, 82.411,
82.417, 82.419, 82.424, 82.424, 82.437, 82.45 , 82.472,
82.498, 82.515, 82.541, 82.559, 82.584, 82.607, 82.617,
82.626, 82.626, 82.629, 82.63 , 82.636, 82.651, 82.663,
82.686, 82.703, 82.728, 82.755, 82.773, 82.8 , 82.8 ,
82.826])
In [50]:trailing_diff(time1,weight1,10)
Out[50]:
array([ 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,
0. , 0.169, 0.182, 0.181, 0.209, 0.227, 0.254, 0.272,
0.291, 0.304, 0.303, 0.305, 0.305, 0.296, 0.274, 0.268,
0.265, 0.265, 0.275, 0.286, 0.309, 0.331, 0.336, 0.35 ,
0.35 , 0.354])
Use a ready-made interpolation routine. If you really want nearest neighbor behavior, I think it will have to be scipy's scipy.interpolate.interp1d, but linear interpolation seems a better option, and then you could use numpy's numpy.interp:
def trailing_diff(time, data, diff):
    ret = np.zeros_like(data)
    mask = (time - time[0]) >= diff
    ret[mask] = data[mask] - np.interp(time[mask] - diff, time, data)
    return ret
time = np.arange(10) + np.random.rand(10)/2
weight = 82 + np.random.rand(10)
>>> time
array([ 0.05920317, 1.23000929, 2.36399981, 3.14701595, 4.05128494,
5.22100886, 6.07415922, 7.36161563, 8.37067107, 9.11371986])
>>> weight
array([ 82.14004969, 82.36214992, 82.25663272, 82.33764514,
82.52985723, 82.67820915, 82.43440796, 82.74038368,
82.84235675, 82.1333915 ])
>>> trailing_diff(time, weight, 3)
array([ 0. , 0. , 0. , 0.18093749, 0.20161107,
0.4082712 , 0.10430073, 0.17116831, 0.20691594, -0.31041841])
To get nearest-neighbor behavior, you would do:
from scipy.interpolate import interp1d

def trailing_diff(time, data, diff):
    ret = np.zeros_like(data)
    mask = (time - time[0]) >= diff
    interpolator = interp1d(time, data, kind='nearest')
    ret[mask] = data[mask] - interpolator(time[mask] - diff)
    return ret
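As a side note (untimed, just a sketch): if you want to keep the exact nearest-index semantics of the original code, find_closest is already vectorized, since searchsorted accepts an array of targets, so the explicit Python loop can be dropped:
def trailing_diff_vectorized(time_array, data_array, seconds):
    # One vectorized find_closest call replaces the per-element loop
    then = find_closest(time_array, time_array - seconds)
    result = data_array - data_array[then]
    result[time_array < seconds] = 0.0  # same guard as the original loop
    return result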
