Efficient way to calculate all IoUs of two lists - python

I have a function to calculate the IoU of two rectangles/bounding boxes.
def intersection_over_union(boxA, boxB):
    # determine the (x, y)-coordinates of the intersection rectangle
    xA = max(boxA[0], boxB[0])
    yA = max(boxA[1], boxB[1])
    xB = min(boxA[2], boxB[2])
    yB = min(boxA[3], boxB[3])
    # compute the area of the intersection rectangle
    interArea = max(0, xB - xA + 1) * max(0, yB - yA + 1)
    # compute the area of both the prediction and ground-truth rectangles
    boxAArea = (boxA[2] - boxA[0] + 1) * (boxA[3] - boxA[1] + 1)
    boxBArea = (boxB[2] - boxB[0] + 1) * (boxB[3] - boxB[1] + 1)
    # compute the intersection over union by taking the intersection
    # area and dividing it by the sum of prediction + ground-truth
    # areas - the intersection area
    iou = interArea / float(boxAArea + boxBArea - interArea)
    # return the intersection over union value
    return iou
Now I want to calculate all IoUs of bboxes of one list with bboxes of another list, i.e. if list A contains 4 bboxes and list B contains 3 bboxes, then I want a 4x3 matrix with all possible IoUs as a result.
Of course I can do this with two loops like this
import numpy as np
n_i = len(bboxes_a)
n_j = len(bboxes_b)
iou_mat = np.empty((n_i, n_j))
for i in range(n_i):
    for j in range(n_j):
        iou_mat[i, j] = intersection_over_union(bboxes_a[i], bboxes_b[j])
but this approach is very slow, especially when the lists become very big.
I'm struggling to find a more efficient way. There must be a way to use numpy to get rid of the loops, but I can't figure it out. Also, the complexity is currently O(m*n). Is there a way to reduce it?

Vectorize:
import numpy as np

low = np.s_[..., :2]
high = np.s_[..., 2:]

def iou(A, B):
    A, B = A.copy(), B.copy()
    A[high] += 1
    B[high] += 1
    intrs = (np.maximum(0, np.minimum(A[high], B[high])
                        - np.maximum(A[low], B[low]))).prod(-1)
    return intrs / ((A[high] - A[low]).prod(-1) + (B[high] - B[low]).prod(-1) - intrs)

AB = iou(A[:, None], B[None])
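For example, starting from the two Python lists in the question (a minimal usage sketch; bboxes_a and bboxes_b are assumed to be lists of [x1, y1, x2, y2] boxes):

A = np.asarray(bboxes_a)             # shape (M, 4)
B = np.asarray(bboxes_b)             # shape (N, 4)
iou_mat = iou(A[:, None], B[None])   # shape (M, N), matches the double-loop result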
Complexity:
Since you are calculating M x N values, reducing the complexity below O(M x N) is impossible unless most of the values are zero and a sparse representation of the matrix is acceptable.
This can be done by argsorting (separately for x and y) all the box edges of A and B, which is O((M+N) log(M+N)). (Edit: since the coordinates are integers, linear complexity may even be possible here.) The sorted order can then be used to prefilter A x B; the cost of filtering and computing the nonzero entries is O(M + N + number of nonzeros).
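A rough sketch of the prefiltering idea (my own simplification: sort B once by its left edge and use searchsorted on the x-axis only; the full scheme described above would also exploit the y-axis):

import numpy as np

def sparse_iou_pairs(A, B):
    # A, B: arrays of shape (M, 4) and (N, 4) in (x1, y1, x2, y2) form.
    # Returns (i, j, iou) triples only for pairs of boxes that actually overlap.
    order = np.argsort(B[:, 0])  # sort B by left edge
    B_sorted = B[order]
    triples = []
    for i, a in enumerate(A):
        # candidates: boxes in B that start no further right than a's right edge
        hi = np.searchsorted(B_sorted[:, 0], a[2], side='right')
        cand, idx = B_sorted[:hi], order[:hi]
        # keep candidates that also overlap a on the other x side and in y
        keep = (cand[:, 2] >= a[0]) & (cand[:, 3] >= a[1]) & (cand[:, 1] <= a[3])
        for j, b in zip(idx[keep], cand[keep]):
            triples.append((i, int(j), intersection_over_union(a, b)))
    return triples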

You can use product() from itertools to replace the nested for-loop. Using built-in functions is usually better, I think. For example:
import numpy as np
l1 = np.random.randint(0, 10, (4, 4))
l2 = np.random.randint(0, 10, (3, 4))
print(f'l1:\n{l1}')
print(f'l2:\n{l2}')
from itertools import product
ious = np.array([intersection_over_union(box1, box2) for box1, box2 in product(l1, l2)]).reshape(len(l1), len(l2))  # rows correspond to boxes in l1
print(f'ious:\n{ious}')
What's more, you should change iou = interArea / float(boxAArea + boxBArea - interArea) to iou = interArea / float(boxAArea + boxBArea - interArea + 1e-16) to avoid a division-by-zero error.

Related

Which loss function calculates the distance between two contours

In my contour generation network I am using nn.L1Loss() to calculate how many pixels are wrong. This works for training, but the 2D distance between the real contour and the fake one would be much better. My aim is to measure the length of the generated contours afterwards. This code example with two binary images shows where nn.L1Loss() fails.
import cv2
import numpy as np
import torch
from torch import nn

p1 = [(15, 15), (45, 45)]
p2 = [(16, 15), (46, 45)]
real = cv2.rectangle(np.ones((60, 60)), p1[0], p1[1], color=0, thickness=1)
fake = cv2.rectangle(np.ones((60, 60)), p2[0], p2[1], color=0, thickness=1)
cv2.imshow('image', np.hstack((real, fake)))
cv2.waitKey(0)
real = torch.tensor(real)
fake = torch.tensor(fake)
losses = [nn.L1Loss(), nn.MSELoss(), nn.BCELoss(), nn.HingeEmbeddingLoss(), nn.SmoothL1Loss()]
# print(my_loss(real, fake))  # my_loss is not defined in this snippet
for k, loss in enumerate(losses):
    err = loss(real, fake)
    print(err * 60)
If I move the rectangle 1 pixel to the right:
-> L1 loss is 0.0333 * 60 = 2
If I move the rectangle 1 pixel to the right and 1 down:
-> L1 loss is 0.0656 * 60 = 3.933
If I move the rectangle 10 pixels to the right and 10 down:
-> L1 loss is 0.0656 * 60 = 3.933, still the same! Which is no surprise, since the number of wrong pixels is the same. But the distance to them changed by 10 * 2**(1/2).
I also thought about the distance between both centers with this:
M = cv2.moments(c)
cX = int(M['m10'] /M['m00'])
cY = int(M['m01'] /M['m00'])
centers.append([cX,cY])
The problem here is that the generated contours are not identical to the real ones and thus have different centers.
This answer is close to what I am looking for, but it is computationally very expensive:
https://stackoverflow.com/a/36505073/12337147
Is there a custom loss function to determine the distance as I described?
Is this the quantity you want: the accumulated squared distance from each point in one contour to the closest point in the other contour, i.e. roughly
sum over p in C1 of min_{q in C2} ||p - q||²,
as opposed to the area between the curves (which this approximates when the contours are sufficiently similar to each other)?
Given two arrays with the points on the contours, I can compute this with complexity O(M * N), where C1 has M points and C2 has N points, directly on GPU. Alternatively, it could be computed in O(W * H), where W * H is the dimension of the images.
If this is exactly what you want, I can post a solution.
The solution
First, let's create some example data.
import torch
import math
from torch import nn
import matplotlib.pyplot as plt;
# Number of points in each contour
M, N = 1000, 1500
t1 = torch.linspace(0, 2*math.pi, M).view(1, -1)
t2 = torch.linspace(0, 2*math.pi, N).view(1, -1)
c1 = torch.stack([torch.sin(t1),torch.cos(t1)], dim=2) # (1 x M x 2)
c2 = 1 - 2* torch.sigmoid(torch.stack([torch.sin(t2)*3 + 1, torch.cos(t2)*3 + 2], dim=2)) # (1 x N x 2)
With this we can compute the distance between every pair of points using torch.cdist. Here I used torch.argmin to find the position of the minimum of each column in the array. For computing the loss function what is important is the distance itself, and that can be computed with torch.amin.
distances = torch.cdist(c1, c2); # (1 x M x N)
plt.imshow(distances[0]);
plt.xlabel('index in contour 1');
plt.ylabel('index in contour 2');
plt.plot(torch.argmin(distances[0], axis=0), '.r')
Now, however, what you want to accumulate is not the distance itself but a function of the distance. This could easily be obtained as torch.min(f(distances)), and assuming f(.) is monotonic it can be simplified to f(torch.min(distances)).
To approximate the integral we can use the trapezoidal rule, which integrates a linear interpolation of a sampled function; in our case, the contour sampled at the points you give.
This gives you a loss function
def contour_divergence(c1, c2, func = lambda x: x**2):
c1 = torch.atleast_3d(c1);
c2 = torch.atleast_3d(c2);
f = func(torch.amin(torch.cdist(c1, c2), dim=2));
# this computes the length of each segment connecting two consecutive points
df = torch.sum((c1[:, 1:, :] - c1[:, :-1, :])**2, axis=2)**0.5;
# here is the trapesoid rule
return torch.sum((f[:, :-1] + f[:, 1:]) * df[:, :], axis=1) / 4.0;
def contour_dist(c1, c2, func = lambda x: x**2):
return contour_divergence(c1, c2, func) + contour_divergence(c2, c1, func)
For the case where the line connecting the closest points is always perpendicular to the contour, contour_dist(c1, c2, lambda x: x) gives the area between them.
This will give the area of the circle of radius 1 (all the points of the second "circle" are at the origin):
print(contour_dist(c1, c1*0, lambda x: x) / math.pi) # should print 1
Now consider the distance between a circle of radius 1 and a circle of radius 0.5 (it will be pi * (1 - 1/4) = 0.75*pi):
print(contour_dist(c1, c1*0.5, lambda x: x) / math.pi) # should print 0.75
And if you want a loss that accumulates the squared distance you can just use contour_dist(c1, c2); you can also pass an arbitrary function as a parameter. You can backpropagate the loss as long as the passed func is itself differentiable.
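A rough usage sketch (my own, assuming the definitions above): the loss is differentiable, so a perturbed copy of a contour can be pulled back toward a target with plain gradient descent.

# Minimal sketch: optimize a noisy copy of c1 back toward c1 using the loss above.
c_target = c1.detach()
c_pred = (c1 + 0.1 * torch.randn_like(c1)).detach().requires_grad_(True)
opt = torch.optim.Adam([c_pred], lr=1e-2)
for step in range(200):
    opt.zero_grad()
    loss = contour_dist(c_pred, c_target).sum()
    loss.backward()
    opt.step()
print(loss.item())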

Python - Function return for a 2D array

I am converting an old Matlab code into a Python script before I lose access to my Matlab license.
The purpose of the Python code is to write a script that calculates an optimal I-beam geometry given a specified force requirement and distance from the force loading point.
When the function does not return anything and just prints 'vals' (i.e. print(vals)), the output shows me all possible combinations which is desirable. However when I return the vals array (i.e. return vals), the output is just 0.0. I am not too sure why I am unable to return vals.
Ideally I want to achieve a 2D array of all possible combinations of vals (it should be a 980x980 array), then extract the maximal value of vals using a different function call.
This is my code:
## Defining the system inputs
import numpy as np

sigma = 100  # Max stress not allowed to exceed (N/mm^2)
F = 100 * 10**3  # Force applied (N)
dist = [100, 1000, 2000, 3000, 4000]  # Distance from force applied (mm)
t = 10  # Thickness of I-beam (mm)
y = np.zeros(1000 - 2*t)  # Pre-allocating vector with zeroes
h = np.arange(1, 1001 - 2*t, 1)  # Height of I-beam
w = np.arange(1, 1001 - 2*t, 1)  # Width of I-beam
tol = 2.00  # tolerance for the system (how close we want our desired shear modulus to be to the limit)

# Writing the IBeam function
def IBeam(sigma, F, t, h, w, dist, tol):
    M = F * dist  # calculating moment
    zShear = M / sigma  # calculating shear modulus based on given conditions
    # iterating through all combinations of height and width
    for i in range(0, len(h)):  # iterating through the height vector
        for j in range(0, len(w)):  # iterating through the width vector
            XSA = 2*t*w[j] + t*h[i]  # calculating XSA for each h/w combination
            Ix = ((t*h[i]**3)/12) + (w[j]/12)*((2*t+h[i])**3 - h[i]**3)  # calculating Ix for each h/w combination
            y = h[i]/2 + t  # calculating y for each h combination
            zInt = Ix / y  # calculating zInt for each Ix/y combination
            # Check to see if our zInt <= zShear since this is not ideal
            if (zInt <= zShear):
                zInt = 0
            # Check to see if our zInt >= tol * zShear since this is not ideal
            if (zInt >= tol * zShear):
                zInt = 0
            # Creating the relationship between zInt and XSA
            vals = zInt / XSA  # overwritten on every iteration; only the last value survives
    # return vals
    return vals

# Calling the IBeam and max function
iBeamVals = IBeam(sigma, F, t, h, w, dist[0], tol)
print(iBeamVals)
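For what it's worth, a minimal sketch of how the full grid of values could be collected (hypothetical function name IBeam_2d, using numpy broadcasting instead of the loops; this follows my reading of the formulas above, not the original Matlab code):

import numpy as np

def IBeam_2d(sigma, F, t, h, w, dist, tol):
    # returns a len(h) x len(w) array with one value per height/width combination
    M = F * dist
    zShear = M / sigma
    H, W = np.meshgrid(h, w, indexing='ij')  # all h/w combinations at once
    XSA = 2*t*W + t*H
    Ix = (t*H**3)/12 + (W/12)*((2*t + H)**3 - H**3)
    zInt = Ix / (H/2 + t)
    zInt = np.where((zInt <= zShear) | (zInt >= tol*zShear), 0, zInt)
    return zInt / XSA

iBeamVals = IBeam_2d(sigma, F, t, h, w, dist[0], tol)  # shape (980, 980)
print(iBeamVals.max())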
Apologies if this question has been asked before in a similar capacity.
Thanks
AJ

Smooth a curve in Python while preserving the value and slope at the end points

I actually have two solutions to this problem, and both are applied below to a test case. The thing is that neither of them is perfect: the first one only takes into account the two end points, and the other one can't be made "arbitrarily smooth": there is a limit to the amount of smoothing one can achieve (the one I am showing).
I am sure there is a better solution, one that goes gradually from the first solution to the second and all the way to no smoothing at all. It may already be implemented somewhere. Maybe by solving a minimization problem with an arbitrary number of equidistributed splines?
Thank you very much for your help
Ps: the seed used is a challenging one
import sys
import matplotlib.pyplot as plt
from scipy import interpolate
from scipy.signal import savgol_filter
import numpy as np
import random

def scipy_bspline(cv, n=100, degree=3, periodic=False):
    """ Calculate n samples on a bspline
        cv :      Array of control vertices
        n  :      Number of samples to return
        degree:   Curve degree
        periodic: True if the curve is closed
    """
    cv = np.asarray(cv)
    count = cv.shape[0]
    degree = np.clip(degree, 1, count-1)
    kv = np.clip(np.arange(count+degree+1)-degree, 0, count-degree)
    # Return samples
    max_param = count - (degree * (1-periodic))
    spl = interpolate.BSpline(kv, cv, degree)
    return spl(np.linspace(0, max_param, n))

def round_up_to_odd(f):
    return int(np.ceil(f / 2.) * 2 + 1)

def generateRandomSignal(n=1000, seed=None):
    """
    Parameters
    ----------
    n : integer, optional
        Number of points in the signal. The default is 1000.

    Returns
    -------
    sig : numpy array
    """
    np.random.seed(seed)
    print("Seed was:", seed)
    steps = np.random.choice(a=[-1, 0, 1], size=(n-1,))
    roughSig = np.concatenate([np.array([0]), steps]).cumsum(0)
    sig = savgol_filter(roughSig, round_up_to_odd(n/10), 6)
    return sig

# Generate a random signal to illustrate my point
n = 1000
t = np.linspace(0, 10, n)
seed = 45136  # Challenging seed
sig = generateRandomSignal(n=1000, seed=seed)
sigInit = np.copy(sig)

# Add noise to the signal
mean = 0
std = sig.max()/3.0
num_samples = n//5
idxMin = n//2 - 100
idxMax = idxMin + num_samples
tCut = t[idxMin+1:idxMax]
noise = np.random.normal(mean, std, size=num_samples-1) + 2*std*np.sin(2.0*np.pi*tCut/0.4)
sig[idxMin+1:idxMax] += noise

# Define filtering range enclosing the noisy area of the signal
idxMin -= 20
idxMax += 20

# Extreme filtering solution
# Spline between first and last points, the points in between have no influence
sigTrim = np.delete(sig, np.arange(idxMin, idxMax))
tTrim = np.delete(t, np.arange(idxMin, idxMax))
f = interpolate.interp1d(tTrim, sigTrim, kind='quadratic')
sigSmooth1 = f(t)

# My attempt. Not bad but not perfect because there is a limit in the maximum
# amount of smoothing we can add (degree=len(tSlice) is the maximum)
# If I could do degree=10*len(tSlice) and converge to the first solution
# I would be done!
sigSlice = sig[idxMin:idxMax]
tSlice = t[idxMin:idxMax]
cv = np.stack((tSlice, sigSlice)).T
p = scipy_bspline(cv, n=len(tSlice), degree=len(tSlice))
tSlice = p.T[0]
sigSliceSmooth = p.T[1]
sigSmooth2 = np.copy(sig)
sigSmooth2[idxMin:idxMax] = sigSliceSmooth

# Plot
plt.figure()
plt.plot(t, sig, label="Signal")
plt.plot(t, sigSmooth1, label="Solution 1")
plt.plot(t, sigSmooth2, label="Solution 2")
plt.plot(t[idxMin:idxMax], sigInit[idxMin:idxMax], label="What I'd want (kind of, smoother would be even better actually)")
plt.plot([t[idxMin], t[idxMax]], [sig[idxMin], sig[idxMax]], "o")
plt.legend()
plt.show()
sys.exit()
Yes, a minimization is a good way to approach this smoothing problem.
Least squares problem
Here is a suggestion for a least squares formulation: let s[0], ..., s[N] denote the N+1 samples of the given signal to smooth, and let L and R be the desired slopes to preserve at the left and right endpoints. Find the smoothed signal u[0], ..., u[N] as the minimizer of
min_u (1/2) sum_n (u[n] - s[n])² + (λ/2) sum_n (u[n+1] - 2 u[n] + u[n-1])²
subject to
s[0] = u[0], s[N] = u[N] (value constraints),
L = u[1] - u[0], R = u[N] - u[N-1] (slope constraints),
where in the minimization objective, the sums are over n = 1, ..., N-1 and λ is a positive parameter controlling the smoothing strength. The first term tries to keep the solution close to the original signal, and the second term penalizes u for bending to encourage a smooth solution.
The slope constraints require that
u[1] = L + u[0] = L + s[0] and u[N-1] = u[N] - R = s[N] - R. So we can consider the minimization as over only the interior samples u[2], ..., u[N-2].
Finding the minimizer
The minimizer satisfies the Euler–Lagrange equations
(u[n] - s[n]) / λ + (u[n+2] - 4 u[n+1] + 6 u[n] - 4 u[n-1] + u[n-2]) = 0
for n = 2, ..., N-2.
An easy way to find an approximate solution is by gradient descent: initialize u = np.copy(s), set u[1] = L + s[0] and u[N-1] = s[N] - R, and do 100 iterations or so of
u[2:-2] -= 0.05 * ((u - s)[2:-2] / lam + np.convolve(u, [1, -4, 6, -4, 1])[4:-4])
where lam is the parameter λ.
But with some more work, it is possible to do better than this by solving the E-L equations directly. For each n, move the known quantities to the right-hand side: s[n] and also the endpoint values u[0] = s[0], u[1] = L + s[0], u[N-1] = s[N] - R, u[N] = s[N]. Then you will have a linear system "A u = b", where matrix A has rows like
0, ..., 0, 1, -4, (6 + 1/λ), -4, 1, 0, ..., 0.
Finally, solve the linear system to find the smoothed signal u. You could use numpy.linalg.solve to do this if N is not too large, or if N is large, try an iterative method like conjugate gradients.
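For completeness, here is a minimal sketch of that direct solve (my own variable and function names; lam is λ, and a dense solve is fine for moderate N):

import numpy as np

def smooth_with_constraints(s, L, R, lam=1e3):
    # s: samples s[0..N]; L, R: slopes to keep at the ends; lam: smoothing strength
    s = np.asarray(s, dtype=float)
    N = len(s) - 1
    u = s.copy()
    u[1] = s[0] + L      # left slope constraint
    u[N-1] = s[N] - R    # right slope constraint
    m = N - 3            # unknowns u[2], ..., u[N-2]
    A = np.zeros((m, m))
    b = s[2:N-1] / lam
    stencil = [1.0, -4.0, 6.0 + 1.0/lam, -4.0, 1.0]
    for row, n in enumerate(range(2, N-1)):
        for k, off in enumerate(range(-2, 3)):
            col = n + off - 2
            if 0 <= col < m:
                A[row, col] = stencil[k]
            else:
                # known values u[0], u[1], u[N-1], u[N] move to the right-hand side
                b[row] -= stencil[k] * u[n + off]
    u[2:N-1] = np.linalg.solve(A, b)
    return u

Applied to the example above, one would pass the noisy slice (for instance sig[idxMin:idxMax]) together with the one-sample slopes at its two ends.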
You can apply a simple smoothing method and plot the smoothed curves for different smoothness values to see which one works best.
def smoothing(data, smoothness=0.5):
    last = data[0]
    new_data = [data[0]]
    for datum in data[1:]:
        new_value = smoothness * last + (1 - smoothness) * datum
        new_data.append(new_value)
        last = datum
    return new_data
You can plot this curve for multiple values of smoothness and pick the one that suits your needs. You can also apply this method only to a range of values in the actual curve by defining a start and an end index, as in the sketch below.
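For instance (start and end are hypothetical indices bounding the noisy region):

smoothed = np.copy(sig)
smoothed[start:end] = smoothing(sig[start:end], smoothness=0.8)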

Integrating 2D data over an irregular grid in python

So I have a 2D function which is sampled irregularly over a domain, and I want to calculate the volume underneath the surface. The data is organised as [x, y, z] triples; taking a simple example:
import numpy as np

def f(x, y):
    return np.cos(10*x*y) * np.exp(-x**2 - y**2)

datrange1 = np.linspace(-5, 5, 1000)
datrange2 = np.linspace(-0.5, 0.5, 1000)
ar = []
for x in datrange1:
    for y in datrange2:
        ar += [[x, y, f(x, y)]]
# denser sampling ranges around the origin (xrange2, yrange2, defined analogously)
for x in xrange2:
    for y in yrange2:
        ar += [[x, y, f(x, y)]]
val_arr1 = np.array(ar)
data = np.unique(val_arr1, axis=0)
xlist, ylist, zlist = data.T
where np.unique (with axis=0) sorts the data by the first column and then by the second. The data is arranged in this way because I need to sample more heavily around the origin, where there is a sharp feature that must be resolved.
Now I wondered about constructing a 2D interpolating function using scipy.interpolate.interp2d, then integrating over this using dblquad. As it turns out, this is not only inelegant and slow, but also kicks out the error:
RuntimeWarning: No more knots can be added because the number of B-spline
coefficients already exceeds the number of data points m.
Is there a better way to integrate data arranged in this fashion or overcoming this error?
If you can sample the data with high enough resolution around the feature of interest, then more sparsely everywhere else, the problem definition then becomes how to define the area under each sample. This is easy with regular rectangular samples, and could likely be done stepwise in increments of resolution around the origin. The approach I went after is to generate the 2D Voronoi cells for each sample in order to determine their area. I pulled most of the code from this answer, as it had almost all the components needed already.
import numpy as np
from scipy.spatial import Voronoi

# taken from: https://stackoverflow.com/questions/28665491/getting-a-bounded-polygon-coordinates-from-voronoi-cells
# computes voronoi regions bounded by a bounding box
def square_voronoi(xy, bbox):  # bbox: (min_x, max_x, min_y, max_y)
    # Select points inside the bounding box
    points_center = xy[np.where((bbox[0] <= xy[:, 0]) & (xy[:, 0] <= bbox[1])
                                & (bbox[2] <= xy[:, 1]) & (xy[:, 1] <= bbox[3]))]
    # Mirror points across each edge of the bounding box
    points_left = np.copy(points_center)
    points_left[:, 0] = bbox[0] - (points_left[:, 0] - bbox[0])
    points_right = np.copy(points_center)
    points_right[:, 0] = bbox[1] + (bbox[1] - points_right[:, 0])
    points_down = np.copy(points_center)
    points_down[:, 1] = bbox[2] - (points_down[:, 1] - bbox[2])
    points_up = np.copy(points_center)
    points_up[:, 1] = bbox[3] + (bbox[3] - points_up[:, 1])
    points = np.concatenate((points_center, points_left, points_right, points_down, points_up), axis=0)
    # Compute Voronoi
    vor = Voronoi(points)
    # Filter regions (center points should* be guaranteed to have a valid region)
    # center points come first and do not change in number
    regions = [vor.regions[vor.point_region[i]] for i in range(len(points_center))]
    vor.filtered_points = points_center
    vor.filtered_regions = regions
    return vor

# also from: https://stackoverflow.com/questions/28665491/getting-a-bounded-polygon-coordinates-from-voronoi-cells
def area_region(vertices):
    # Polygon's signed area (shoelace formula)
    A = 0
    for i in range(0, len(vertices) - 1):
        s = (vertices[i, 0] * vertices[i + 1, 1] - vertices[i + 1, 0] * vertices[i, 1])
        A = A + s
    return np.abs(0.5 * A)

def f(x, y):
    return np.cos(10*x*y) * np.exp(-x**2 - y**2)

# sampling could easily be shaped to sample the origin more heavily
sample_x = np.random.rand(1000) * 10 - 5  # same range as example linspace
sample_y = np.random.rand(1000) - .5
sample_xy = np.array([sample_x, sample_y]).T

vor = square_voronoi(sample_xy, (-5, 5, -.5, .5))  # using bbox from samples
points = vor.filtered_points
sample_areas = np.array([area_region(vor.vertices[verts + [verts[0]], :]) for verts in vor.filtered_regions])
sample_z = np.array([f(p[0], p[1]) for p in points])
volume = np.sum(sample_z * sample_areas)
I haven't exactly tested this, but the principle should work, and the math checks out.

Calculate distance between neighbors efficiently

I have geographically scattered data without any kind of pattern, and I need to create an image where the value of each pixel is an average of the neighbors of that pixel that are less than X meters away.
For this I use the library scipy.spatial to generate a KDTree with the data (cKDTree). Once the data structure is generated, I locate each pixel geographically and find the geographic points that are closest to it.
# Generate scattered data points
coord_cart = [
    [
        feat.geometry().GetY(),
        feat.geometry().GetX(),
        feat.GetField(feature),
    ] for feat in layer
]

# Create KDTree structure
tree = cKDTree(coord_cart)

# Get raster image dimensions
pixel_size = 5
source_layer = shapefile.GetLayer()
x_min, x_max, y_min, y_max = source_layer.GetExtent()
x_res = int((x_max - x_min) / pixel_size)
y_res = int((y_max - y_min) / pixel_size)

# Create grid
x = np.linspace(x_min, x_max, x_res)
y = np.linspace(y_min, y_max, y_res)
X, Y = np.meshgrid(x, y)
grid = np.array(zip(Y.ravel(), X.ravel()))

# Get points that are less than 10 meters away
inds = tree.query_ball_point(grid, 10)

# inds is an np.array of lists of different length, so I need to convert it
# into an array of n_points x maximum number of neighbors
ll = np.array([len(l) for l in inds])
maxlen = max(ll)
arr = np.zeros((len(ll), maxlen), int)

# I don't know why but inds is an array of lists, so I convert it into an
# array of arrays to use grid[inds]
# I THINK THIS IS A LITTLE INEFFICIENT
for i in range(len(inds)):
    inds[i].extend([i] * (maxlen - len(inds[i])))
    arr[i] = np.array(inds[i], dtype=int)

# AND THIS DOESN'T WORK
d = np.linalg.norm(grid - grid[inds])
Is there a better way to do this? I'm trying to use IDW to perform the interpolation between the points. I found this snippet that uses a function which gets the N nearest points, but it does not work for me because I need the pixel value to be 0 if there is no point within a radius R.
d, inds = tree.query(zip(xt, yt, zt), k = 10)
w = 1.0 / d**2
air_idw = np.sum(w * air.flatten()[inds], axis=1) / np.sum(w, axis=1)
air_idw.shape = lon_curv.shape
Thanks in advance!
This may be one of the cases where KDTrees are not a good solution. This is because you are mapping to a grid, which is a very simple structure meaning there is nothing to gain from the KDTree's sophistication. Nearest grid point and distance can be found by simple arithmetic.
Below is a simple example implementation. I'm using a Gaussian kernel, but changing that to IDW if you prefer should be straightforward (a rough IDW sketch follows the example below).
import numpy as np
from scipy import stats

def rasterize(coords, feature, gu, cutoff, kernel=stats.norm(0, 2.5).pdf):
    # compute overlap (filter size / grid unit)
    ovlp = int(np.ceil(cutoff/gu))
    # compute raster dimensions
    mn, mx = coords.min(axis=0), coords.max(axis=0)
    reso = np.ceil((mx - mn) / gu).astype(int)
    base = (mx + mn - reso * gu) / 2
    # map coordinates to raster, the residual is the distance
    grid_res = coords - base
    grid_coords = np.rint(grid_res / gu).astype(int)
    grid_res -= gu * grid_coords
    # because of overlap we must add neighboring grid points to the nearest
    gcovlp = np.c_[-ovlp:ovlp+1, np.zeros(2*ovlp+1, dtype=int)]
    grid_coords = (gcovlp[:, None, None, :] + gcovlp[None, :, None, ::-1]
                   + grid_coords).reshape(-1, 2)
    # the corresponding residuals have the same offset with opposite sign
    gdovlp = -gu * (gcovlp+1/2)
    grid_res = (gdovlp[:, None, None, :] + gdovlp[None, :, None, ::-1]
                + grid_res).reshape(-1, 2)
    # discard off-fov grid points and points outside the cutoff
    valid, = np.where(((grid_coords >= 0) & (grid_coords <= reso)).all(axis=1) & (
        np.einsum('ij,ij->i', grid_res, grid_res) <= cutoff*cutoff))
    grid_res = grid_res[valid]
    feature = feature[valid // (2*ovlp+1)**2]
    # flatten grid so we can use bincount
    grid_flat = np.ravel_multi_index(grid_coords[valid].T, reso+1)
    return np.bincount(
        grid_flat,
        feature * kernel(np.sqrt(np.einsum('ij,ij->i', grid_res, grid_res))),
        (reso + 1).prod()).reshape(reso+1)
gu = 5
cutoff = 10
coords = np.random.randn(10_000, 2) * (100, 20)
coords[:, 1] += 80 * np.sin(coords[:, 0] / 40)
feature = np.random.uniform(0, 1000, (10_000,))
from timeit import timeit
print(timeit("rasterize(coords, feature, gu, cutoff)", globals=globals(), number=100)*10, 'ms')
pic = rasterize(coords, feature, gu, cutoff)
import pylab
pylab.pcolor(pic, cmap=pylab.cm.jet)
pylab.colorbar()
pylab.show()
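If you do want IDW rather than the Gaussian kernel, one rough way (my own sketch, untested against the code above) is to rasterize both the feature and a constant-one field with a 1/d² kernel and divide; pixels with no sample inside the cutoff then stay at 0, which matches the requirement from the question:

eps = 1e-6  # avoids division by zero for samples that land exactly on a grid point
idw_kernel = lambda d: 1.0 / (d**2 + eps)
num = rasterize(coords, feature, gu, cutoff, kernel=idw_kernel)
den = rasterize(coords, np.ones_like(feature), gu, cutoff, kernel=idw_kernel)
pic_idw = np.divide(num, den, out=np.zeros_like(num), where=den > 0)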
