2D Rotation of Image - python

I am trying to rotate an image by any given angle, using the center of the image as the origin.
But the code is not doing the rotation as expected.
I am attaching the code below.
import math
import numpy as np
import cv2
im = cv2.imread("Samples\\baboon.jpg", cv2.IMREAD_GRAYSCALE)
new = np.zeros(im.shape,np.uint8)
new_x = im.shape[0] // 2
new_y = im.shape[1] // 2
x = int(input("Enter the angle : "))
trans_mat = np.array([[math.cos(x), math.sin(x), 0],[-math.sin(x), math.cos(x), 0],[0, 0, 1]])
for i in range(-new_x, im.shape[0] - new_x):
    for j in range(-new_y, im.shape[1] - new_y):
        vec = np.matmul([i, j, 1], trans_mat)
        if round(vec[0] + new_x) < 512 and round(vec[1] + new_y) < 512:
            new[round(vec[0]+new_x), round(vec[1]+new_y)] = im[i+new_x, j+new_y]
cv2.imshow("rot",new)
cv2.imshow("1",im)
cv2.waitKey(0)
cv2.destroyAllWindows()

It looks like you are trying to implement a nearest-neighbor resampler. What you are doing is going through the image and mapping each input pixel to a new location in the output image. This can lead to problems like pixels overwriting each other incorrectly, output pixels being left empty, and similar.
I would suggest (based on experience) that you are looking at the problem backwards. Rather than looking at where an input pixel ends up in the output, you should consider where each output pixel originates in the input. That way, you have no ambiguity about nearest neighbors, and the entire image array will be filled.
You want to rotate about the center. The current rotation matrix you are using rotates about (0, 0). To compensate for that, you need to translate the center of the image to (0, 0), rotate, and then translate back. Rather than developing the full affine matrix, I will show you how to do the individual operations manually, and then how to combine them into the transform matrix.
Manual Computation
First get an input and output image:
im = cv2.imread("Samples\\baboon.jpg", cv2.IMREAD_GRAYSCALE)
new = np.zeros_like(im)
Then determine the center of rotation. Be clear about your dimensions: x is usually the column (dim 1), not the row (dim 0):
center_row = im.shape[0] // 2
center_col = im.shape[1] // 2
Compute the radial coordinates of each pixel in the image, shaped to the corresponding dimension:
row_coord = np.arange(im.shape[0])[:, None] - center_row
col_coord = np.arange(im.shape[1]) - center_col
row_coord and col_coord are the distances from center in the output image. Now compute the locations where they came from in the input. Notice that we can use broadcasting to avoid the need for a loop. I'm following your original convention for angle definitions here, and finding the inverse rotation to determine the source location. The big difference here is that the input in degrees is converted to radians, since that's what the trigonometric functions expect:
angle = float(input('Enter Angle in Degrees: ')) * np.pi / 180.0
source_row = row_coord * np.cos(angle) - col_coord * np.sin(angle) + center_row
source_col = row_coord * np.sin(angle) + col_coord * np.cos(angle) + center_col
If all the indices were guaranteed to fall within the input image, you wouldn't even need to pre-allocate the output. You could literally just do new = im[source_row, source_col]. However, you need to mask the indices:
mask = (source_row >= 0) & (source_row < im.shape[0]) & (source_col >= 0) & (source_col < im.shape[1])
new[mask] = im[source_row[mask].round().astype(int), source_col[mask].round().astype(int)]
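Stitched together, the manual version might look like this (a sketch following the code above; the one deviation is rounding before masking, so a coordinate that rounds up to the image edge can't slip past the bounds check):
import numpy as np
import cv2

im = cv2.imread("Samples\\baboon.jpg", cv2.IMREAD_GRAYSCALE)
new = np.zeros_like(im)

center_row = im.shape[0] // 2
center_col = im.shape[1] // 2

# output coordinates relative to the center, shaped for broadcasting
row_coord = np.arange(im.shape[0])[:, None] - center_row
col_coord = np.arange(im.shape[1]) - center_col

angle = float(input('Enter Angle in Degrees: ')) * np.pi / 180.0

# inverse rotation: where does each output pixel come from in the input?
source_row = row_coord * np.cos(angle) - col_coord * np.sin(angle) + center_row
source_col = row_coord * np.sin(angle) + col_coord * np.cos(angle) + center_col

# round first, then mask, so a source that rounds up to the edge is excluded
src_r = source_row.round().astype(int)
src_c = source_col.round().astype(int)
mask = (src_r >= 0) & (src_r < im.shape[0]) & (src_c >= 0) & (src_c < im.shape[1])
new[mask] = im[src_r[mask], src_c[mask]]

cv2.imshow("rot", new)
cv2.imshow("1", im)
cv2.waitKey(0)
cv2.destroyAllWindows()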
Affine Transforms
Now let's take a look at using Affine transforms. First you want to subtract the center from your coordinates. Let's say you have a column vector [[r], [c], [1]]. A translation to zero would be the matrix
[[r']    [[1 0 -rc]   [[r]
 [c']  =  [0 1 -cc] .  [c]
 [1 ]]    [0 0  1 ]]   [1]]
Then the (backwards) rotation is applied:
[[r'']    [[cos(a) -sin(a) 0]   [[r']
 [c'']  =  [sin(a)  cos(a) 0] .  [c']
 [ 1 ]]    [  0       0    1]]   [1 ]]
And finally, you need to translate back to center:
[[r''']    [[1 0 rc]   [[r'']
 [c''']  =  [0 1 cc] .  [c'']
 [ 1  ]]    [0 0  1]]   [ 1 ]]
If you multiply these three matrices out in order from right to left, you get
    [[cos(a) -sin(a)  cc * sin(a) - rc * cos(a) + rc]
M =  [sin(a)  cos(a) -cc * cos(a) - rc * sin(a) + cc]
     [  0       0                   1               ]]
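If you want to convince yourself of that product, a quick numerical sketch (with arbitrary test values for the angle and center) confirms it:
import numpy as np

a, rc, cc = 0.3, 256, 256  # arbitrary test angle and center

translate_to_origin = np.array([[1, 0, -rc],
                                [0, 1, -cc],
                                [0, 0, 1]])
rotate = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0,          0,         1]])
translate_back = np.array([[1, 0, rc],
                           [0, 1, cc],
                           [0, 0, 1]])

M = np.array([[np.cos(a), -np.sin(a),  cc * np.sin(a) - rc * np.cos(a) + rc],
              [np.sin(a),  np.cos(a), -cc * np.cos(a) - rc * np.sin(a) + cc],
              [0, 0, 1]])

# multiplied right to left: translate to origin, rotate, translate back
print(np.allclose(translate_back @ rotate @ translate_to_origin, M))  # True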
If you build a full matrix of output coordinates rather than the subset arrays we started with, you can use np.matmul, a.k.a. the @ operator, to do the multiplication for you. There is no need for this level of complexity for such a simple case, though:
matrix = np.array([[np.cos(angle), -np.sin(angle), center_col * np.sin(angle) - center_row * np.cos(angle) + center_row],
                   [np.sin(angle),  np.cos(angle), -center_col * np.cos(angle) - center_row * np.sin(angle) + center_col],
                   [0, 0, 1]])
coord = np.ones((*im.shape, 3, 1))
coord[..., 0, :] = np.arange(im.shape[0]).reshape(-1, 1, 1)
coord[..., 1, :] = np.arange(im.shape[1]).reshape(-1, 1)
source = (matrix @ coord)[..., :2, 0]
The remainder of the processing is fairly similar to the manual computations:
mask = ((source >= 0) & (source < im.shape)).all(axis=-1)
new[mask] = im[source[mask, 0].round().astype(int), source[mask, 1].round().astype(int)]

I tried to implement MadPhysicist's matrix multiplication method. Here is the implementation, for those who care:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import numpy as np
import matplotlib.pyplot as plt
from pathlib import Path
path = Path(".")
img = plt.imread(path.resolve().parent / "img_align" / "faces_imgs" / "4.jpg")
angle = 15
def _transform(rot_mat, x, y):
    """
    Convenience method for matrix multiplication.
    """
    return np.matmul(rot_mat, np.array([x, y, 1]))

def rotate(img, angle):
    angle %= 360
    angle = np.radians(angle)
    new = np.zeros_like(img)
    cx, cy = tuple(x / 2 for x in img.shape[:2])
    # Angles are reversed, as we are interpolating from destination to source
    rot_mat = np.array(
        [
            [np.cos(-angle), -np.sin(-angle), 0],
            [np.sin(-angle), np.cos(-angle), 0],
            [0, 0, 1],
        ]
    )
    rot_mat[0, 2], rot_mat[1, 2], _ = _transform(rot_mat, -cx, -cy)
    # build the combined affine transformation matrix
    rot_mat[0, 2] += cx
    rot_mat[1, 2] += cy
    coord = np.ones((*img.shape, 3, 1))  # [576x336x3x3x1]
    coord[..., 0, :] = np.arange(img.shape[0]).reshape(-1, 1, 1, 1)
    coord[..., 1, :] = np.arange(img.shape[1]).reshape(-1, 1, 1)
    source = (rot_mat @ coord)[..., :2, 0]
    x_mask = source[..., 0]
    y_mask = source[..., 1]
    mask = (
        (x_mask >= 0)
        & (x_mask < img.shape[0])
        & (y_mask >= 0)
        & (y_mask < img.shape[1])
    ).all(axis=-1)
    # Clipping values to avoid IndexError
    new[mask] = img[
        x_mask[..., 0][mask].round().astype(int).clip(None, img.shape[0] - 1),
        y_mask[..., 1][mask].round().astype(int).clip(None, img.shape[1] - 1),
    ]
    plt.imsave("test.jpg", new)

if __name__ == "__main__":
    rotate(img, angle)

I think this is what you are looking for:
Properly rotate image in OpenCV?
Here is the code
ang = int(input("Enter the angle : "))
im = cv2.imread("Samples\\baboon.jpg", cv2.IMREAD_GRAYSCALE)
def rotimage(image):
    row, col = image.shape[0:2]
    center = tuple(np.array([col, row]) / 2)
    rot_mat = cv2.getRotationMatrix2D(center, ang, 1.0)
    new_image = cv2.warpAffine(image, rot_mat, (col, row))
    return new_image
new_image = rotimage(im)
cv2.imshow("1",new_image)
cv2.waitKey(0)

Related

Mandelbrot set using normalized iteration count

I have the following Python program that endeavours to use the normalized iteration count algorithm to colour the Mandelbrot set:
from PIL import Image
import numpy as np
from matplotlib.colors import hsv_to_rgb
steps = 256 # maximum iterations
bailout_radius = 64 # bailout radius
def normalized_iteration(n, abs_z):
    return n + 1 - np.log2(np.log(abs_z))/np.log(2)

def make_set(real_start, real_end, imag_start, imag_end, height):
    width = int(abs(height * (real_end - real_start) / (imag_end - imag_start)))
    real_axis = np.linspace(real_start, real_end, num=width)
    imag_axis = np.linspace(imag_start, imag_end, num=height)
    complex_plane = np.zeros((height, width), dtype=np.complex_)
    real, imag = np.meshgrid(real_axis, imag_axis)
    complex_plane.real = real
    complex_plane.imag = imag
    pixels = np.zeros((height, width, 3), dtype=np.float_)
    new = np.zeros_like(complex_plane)
    is_not_done = np.ones((height, width), dtype=bool)
    # cosine_interpolation = lambda x: (np.cos(x * np.pi + np.pi) + 1) / 2
    for i in range(steps):
        new[is_not_done] = new[is_not_done] ** 2 + complex_plane[is_not_done]
        mask = np.logical_and(np.absolute(new) > bailout_radius, is_not_done)
        pixels[mask, :] = (i, 0.6, 1)
        is_not_done = np.logical_and(is_not_done, np.logical_not(mask))
    new_after_mask = np.zeros_like(complex_plane)
    new_after_mask[np.logical_not(is_not_done)] = new[np.logical_not(is_not_done)]
    new_after_mask[is_not_done] = bailout_radius
    pixels[:, :, 0] = normalized_iteration(pixels[:, :, 0], np.absolute(new_after_mask)) / steps
    image = Image.fromarray((hsv_to_rgb(np.flipud(pixels)) * 255).astype(np.uint8))
    image.show()

make_set(-2, 1, -1, 1, 2000)
It produces a fairly nice image. However, when I compare it to other sets employing this algorithm, the colours in my set barely change. If I reduce steps, I get a more varied gradient, but that reduces the quality of the fractal. The important parts of this code are my normalized_iteration definition, which varies slightly from this Wikipedia article's version,
def normalized_iteration(n, abs_z):
    return n + 1 - np.log2(np.log(abs_z))/np.log(2)
where I use that definition (mapping the function to the array of pixels),
pixels[:, :, 0] = normalized_iteration(pixels[:, :, 0], np.absolute(new_after_mask)) / steps
and the final array, where I convert the HSV format to RGB and turn the pixel values on [0, 1) to values on [0, 255)
image = Image.fromarray((hsv_to_rgb(np.flipud(pixels)) * 255).astype(np.uint8))
I have been fighting with this problem for a while now, and I am not sure of what is going wrong. Thanks for helping me determine how to make the gradient more varied in colour and for bearing with my perhaps hard-to-read code. Also, I realize that there is room for optimization in there.
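One thing worth noting about the normalized_iteration definition above: np.log2(x) is already np.log(x)/np.log(2), so the extra division by np.log(2) divides the fractional part twice, which compresses the gradient. A direct transcription of the Wikipedia formula ν = n + 1 − log₂(log|z|) would use one or the other, not both:
import numpy as np

def normalized_iteration(n, abs_z):
    # nu = n + 1 - log(log|z|) / log(2); np.log2 already performs
    # the division by log(2), so no further division is needed
    return n + 1 - np.log2(np.log(abs_z))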

Need to speed up very slow loop for image manipulation on Python

I am currently completing a program in Python (3.6) as per an internal requirement. As part of it, I am having to loop through a colour image (3 bytes per pixel, R, G & B) and distort the image pixel by pixel.
I have the same code in other languages (C++, C#), and non-optimized code executes in about two seconds, while optimized code executes in less than a second. By non-optimized code I mean that the matrix multiplication is performed by a 10 line function I implemented. The optimized version just uses external libraries for multiplication.
In Python, this code takes close to 300 seconds. I can't think of a way to vectorize this logic or speed it up, as there are a couple of "if"s inside the nested loop. Any help would be greatly appreciated.
import numpy as np
#for test purposes:
#roi = rect.rect(0, 0, 1200, 1200)
#input = DCImage.DCImage(1200, 1200, 3)
#correctionImage = DCImage.DCImage(1200,1200,3)
#siteToImage= np.zeros((3,3), np.float32)
#worldToSite= np.zeros ((4, 4))
#r11 = r12 = r13 = r21 = r22 = r23 = r31 = r32 = r33 = 0.0
#xMean = yMean = zMean = 0
#tx = ty = tz = 0
#epsilon = np.finfo(float).eps
#fx = fy = cx = cy = k1 = k2 = p1 = p2 = 0
for i in range(roi.x, roi.x + roi.width):
    for j in range(roi.y, roi.y + roi.height):
        if (input.pixels[i][j] == [255, 0, 0]).all():
            # Coordinates conversion
            siteMat = np.matmul(siteToImage, [i, j, 1])
            world = np.matmul(worldToSite, [siteMat[0], siteMat[1], 0.0, 1.0])
            xLocal = world[0] - xMean
            yLocal = world[1] - yMean
            zLocal = z_ortho - zMean
            # From World to camera
            xCam = r11*xLocal + r12*yLocal + r13*zLocal + tx
            yCam = r21*xLocal + r22*yLocal + r23*zLocal + ty
            zCam = r31*xLocal + r32*yLocal + r33*zLocal + tz
            if zCam > epsilon or zCam < -epsilon:
                xCam = xCam / zCam
                yCam = yCam / zCam
                # DISTORTIONS
                r2 = xCam*xCam + yCam*yCam
                a1 = 2*xCam*yCam
                a2 = r2 + 2*xCam*xCam
                a3 = r2 + 2*yCam*yCam
                cdist = 1 + k1*r2 + k2*r2*r2
                u = int((xCam * cdist + p1 * a1 + p2 * a2) * fx + cx + 0.5)
                v = int((yCam * cdist + p1 * a3 + p2 * a1) * fy + cy + 0.5)
                if u >= 0 and u < correctionImage.width and v >= 0 and v < correctionImage.height:
                    input.pixels[i][j] = correctionImage.pixels[u][v]
You normally vectorize this kind of thing by making a displacement map.
Make a complex image where each pixel has the value of its own coordinate, apply the usual math operations to compute whatever transform you want, then apply the map to your source image.
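To make the idea concrete in plain NumPy/SciPy first, here is a minimal sketch (assuming a 2-D grayscale array; scipy.ndimage.map_coordinates performs the resampling):
import numpy as np
from scipy import ndimage

def wobble(img, amplitude=5.0, frequency=0.05):
    # index image: each pixel holds its own (row, col) coordinate
    rows, cols = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
    # distance of each pixel from the image center
    r = np.hypot(rows - img.shape[0] / 2, cols - img.shape[1] / 2)
    # radial sine displacement
    d = amplitude * np.sin(frequency * r)
    # resample the source image at the displaced coordinates
    return ndimage.map_coordinates(img, [rows + d, cols + d], order=1)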
For example, in pyvips you might write:
import sys
import pyvips
image = pyvips.Image.new_from_file(sys.argv[1])
# this makes an image where pixel (0, 0) (at the top-left) has value [0, 0],
# and pixel (image.width, image.height) at the bottom-right has value
# [image.width, image.height]
index = pyvips.Image.xyz(image.width, image.height)
# make a version with (0, 0) at the centre, negative values up and left,
# positive down and right
centre = index - [image.width / 2, image.height / 2]
# to polar space, so each pixel is now distance and angle in degrees
polar = centre.polar()
# scale sin(distance) by 1/distance to make a wavey pattern
d = 10000 * (polar[0] * 3).sin() / (1 + polar[0])
# and back to rectangular coordinates again to make a set of vectors we can
# apply to the original index image
distort = index + d.bandjoin(polar[1]).rect()
# distort the image
distorted = image.mapim(distort)
# pick pixels from either the distorted image or the original, depending on some
# condition
result = ((d.abs() > 10) | (image[2] > 100)).ifthenelse(distorted, image)
result.write_to_file(sys.argv[2])
That's just a silly wobble pattern, but you can swap it for any distortion you want. Then run as:
$ /usr/bin/time -f %M:%e ./wobble.py ~/pics/horse1920x1080.jpg x.jpg
54572:0.31
300ms and 55MB of memory on this two-core 2015 laptop to make the distorted image.
After much testing, the only way to speed up the function without writing it in C++ was disassembling and vectorizing it. The way to do it in this particular instance is to create an array with the valid indexes at the beginning of the function and use them as tuples to index the final solution.
subArray[roi.y:roi.y+roi.height, roi.x:roi.x+roi.width] = input.pixels[roi.y:roi.y+roi.height, roi.x:roi.x+roi.width]

# Calculate valid XY indexes
y_index, x_index = np.where(np.all(subArray == np.array([255, 0, 0]), axis=-1))

# ....
# do stuff
# ....

# Join result values with XY indexes
ij_xy = np.column_stack((i, j, y_index, x_index))

# Only keep valid ij values
valids_ij_xy = ij_xy[(ij_xy[:, 0] >= 0) & (ij_xy[:, 0] < correctionImage.height) & (ij_xy[:, 1] >= 0) & (ij_xy[:, 1] < correctionImage.width)]

# Assign values
input.pixels[tuple(valids_ij_xy[:, 2:].T)] = correctionImage.pixels[tuple(valids_ij_xy[:, :2].T)]

Integrating 2D data over an irregular grid in python

So I have a 2D function which is sampled irregularly over a domain, and I want to calculate the volume underneath the surface. The data is organised in terms of [x, y, z]; taking a simple example:
def f(x, y):
    return np.cos(10*x*y) * np.exp(-x**2 - y**2)

datrange1 = np.linspace(-5, 5, 1000)
datrange2 = np.linspace(-0.5, 0.5, 1000)
ar = []
for x in datrange1:
    for y in datrange2:
        ar += [[x, y, f(x, y)]]
# second, denser sampling pass near the origin (xrange2/yrange2 are the finer ranges)
for x in xrange2:
    for y in yrange2:
        ar += [[x, y, f(x, y)]]
val_arr1 = np.array(ar)
data = np.unique(val_arr1, axis=0)
xlist, ylist, zlist = data.T
where np.unique sorts the data in the first column then the second. The data is arranged in this way as I need to sample more heavily around the origin as there is a sharp feature that must be resolved.
Now I wondered about constructing a 2D interpolating function using scipy.interpolate.interp2d, then integrating over this using dblquad. As it turns out, this is not only inelegant and slow, but also kicks out the error:
RuntimeWarning: No more knots can be added because the number of B-spline
coefficients already exceeds the number of data points m.
Is there a better way to integrate data arranged in this fashion or overcoming this error?
If you can sample the data with high enough resolution around the feature of interest, then more sparsely everywhere else, the problem definition then becomes how to define the area under each sample. This is easy with regular rectangular samples, and could likely be done stepwise in increments of resolution around the origin. The approach I went after is to generate the 2D Voronoi cells for each sample in order to determine their area. I pulled most of the code from this answer, as it had almost all the components needed already.
import numpy as np
from scipy.spatial import Voronoi

# taken from: https://stackoverflow.com/questions/28665491/getting-a-bounded-polygon-coordinates-from-voronoi-cells
# computes voronoi regions bounded by a bounding box
def square_voronoi(xy, bbox):  # bbox: (min_x, max_x, min_y, max_y)
    # Select points inside the bounding box
    points_center = xy[np.where((bbox[0] <= xy[:, 0]) * (xy[:, 0] <= bbox[1]) * (bbox[2] <= xy[:, 1]) * (xy[:, 1] <= bbox[3]))]
    # Mirror points across each edge of the bounding box
    points_left = np.copy(points_center)
    points_left[:, 0] = bbox[0] - (points_left[:, 0] - bbox[0])
    points_right = np.copy(points_center)
    points_right[:, 0] = bbox[1] + (bbox[1] - points_right[:, 0])
    points_down = np.copy(points_center)
    points_down[:, 1] = bbox[2] - (points_down[:, 1] - bbox[2])
    points_up = np.copy(points_center)
    points_up[:, 1] = bbox[3] + (bbox[3] - points_up[:, 1])
    points = np.concatenate((points_center, points_left, points_right, points_down, points_up), axis=0)
    # Compute Voronoi
    vor = Voronoi(points)
    # Filter regions (center points should* be guaranteed to have a valid region)
    # center points should come first and not change in size
    regions = [vor.regions[vor.point_region[i]] for i in range(len(points_center))]
    vor.filtered_points = points_center
    vor.filtered_regions = regions
    return vor

# also stolen from: https://stackoverflow.com/questions/28665491/getting-a-bounded-polygon-coordinates-from-voronoi-cells
def area_region(vertices):
    # Polygon's signed area via the shoelace formula
    A = 0
    for i in range(0, len(vertices) - 1):
        s = vertices[i, 0] * vertices[i + 1, 1] - vertices[i + 1, 0] * vertices[i, 1]
        A = A + s
    return np.abs(0.5 * A)

def f(x, y):
    return np.cos(10*x*y) * np.exp(-x**2 - y**2)

# sampling could easily be shaped to sample the origin more heavily
sample_x = np.random.rand(1000) * 10 - 5  # same range as the example linspace
sample_y = np.random.rand(1000) - .5
sample_xy = np.array([sample_x, sample_y]).T
vor = square_voronoi(sample_xy, (-5, 5, -.5, .5))  # using bbox from samples
points = vor.filtered_points
# close each polygon by appending its first vertex, then compute its area
sample_areas = np.array([area_region(vor.vertices[verts + [verts[0]], :]) for verts in vor.filtered_regions])
sample_z = np.array([f(p[0], p[1]) for p in points])
volume = np.sum(sample_z * sample_areas)
I haven't exactly tested this, but the principle should work, and the math checks out.
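One way to check it is against a dense rectangular Riemann sum over the same bounding box (a quick sketch, reusing f from above):
xs = np.linspace(-5, 5, 2000)
ys = np.linspace(-0.5, 0.5, 2000)
X, Y = np.meshgrid(xs, ys)
dx, dy = xs[1] - xs[0], ys[1] - ys[0]
reference = np.sum(f(X, Y)) * dx * dy
print(volume, reference)  # the two estimates should be roughly comparable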

Align a face to a plane with numpy

I have a face made from 4 xyz vertices.
I want to align it with the z axis so it is parallel with it.
If I calculate the normals I can calculate the angle between them but that is just the angle. I need an x rotation and a y rotation.
I am using numpy on Python 3.
Thanks.
To rotate a unit vector onto, say, the 1st axis, you can use a QR decomposition, like so:
normal = np.random.random(3)
normal /= np.sqrt(normal @ normal)
some_base = np.identity(3)
some_base[:, 0] = normal
Q, R = np.linalg.qr(some_base)
Q.T @ normal
# array([-1.00000000e+00, -2.77555756e-17,  1.11022302e-16])
As you can see, you may have to flip one or two of the columns of Q, so that the normal maps to the positive first axis and the determinant stays +1:
if (Q.T @ normal)[0] < 0:
    if np.linalg.det(Q) < 0:
        rot = (Q * [-1, 1, 1]).T
    else:
        rot = (Q * [-1, -1, 1]).T
else:
    if np.linalg.det(Q) < 0:
        rot = (Q * [1, -1, 1]).T
    else:
        rot = Q.T
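A quick sanity check (a sketch) verifies that rot is a proper rotation that maps the normal onto the first axis:
print(rot @ normal)                              # approximately [1, 0, 0]
print(np.linalg.det(rot))                        # approximately +1.0
print(np.allclose(rot @ rot.T, np.identity(3)))  # orthogonal: True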

Calculate distance between neighbors efficiently

I have data geographically scattered without any kind of pattern, and I need to create an image where the value of each pixel is an average of that pixel's neighbors that are less than X meters away.
For this I use the library scipy.spatial to generate a KDTree with the data (cKDTree). Once the data structure is generated, I locate the pixel geographically and locate the geographic points that are closest.
# Generate scattered data points
coord_cart = [
    [
        feat.geometry().GetY(),
        feat.geometry().GetX(),
        feat.GetField(feature),
    ] for feat in layer
]
# Create KDTree structure
tree = cKDTree(coord_cart)
# Get raster image dimensions
pixel_size = 5
source_layer = shapefile.GetLayer()
x_min, x_max, y_min, y_max = source_layer.GetExtent()
x_res = int((x_max - x_min) / pixel_size)
y_res = int((y_max - y_min) / pixel_size)
# Create grid
x = np.linspace(x_min, x_max, x_res)
y = np.linspace(y_min, y_max, y_res)
X, Y = np.meshgrid(x, y)
grid = np.array(list(zip(Y.ravel(), X.ravel())))
# Get points that are less than 10 meters away
inds = tree.query_ball_point(grid, 10)
# inds is an np.array of lists of different length, so I need to convert it into an array of n_points x maximum number of neighbors
ll = np.array([len(l) for l in inds])
maxlen = max(ll)
arr = np.zeros((len(ll), maxlen), int)
# I don't know why, but inds is an array of lists, so I convert it into an array of arrays to use grid[inds]
# I THINK THIS IS A LITTLE INEFFICIENT
for i in range(len(inds)):
    inds[i].extend([i] * (maxlen - len(inds[i])))
    arr[i] = np.array(inds[i], dtype=int)
# AND THIS DOESN'T WORK
d = np.linalg.norm(grid - grid[inds])
Is there a better way to do this? I'm trying to use IDW to perform the interpolation between the points. I found this snippet that uses a function that gets the N nearest points, but it does not work for me because I need the pixel value to be 0 if there are no points within a radius R.
d, inds = tree.query(zip(xt, yt, zt), k = 10)
w = 1.0 / d**2
air_idw = np.sum(w * air.flatten()[inds], axis=1) / np.sum(w, axis=1)
air_idw.shape = lon_curv.shape
Thanks in advance!
This may be one of the cases where KDTrees are not a good solution. This is because you are mapping to a grid, which is a very simple structure meaning there is nothing to gain from the KDTree's sophistication. Nearest grid point and distance can be found by simple arithmetic.
Below is a simple example implementation. I'm using a Gaussian kernel, but changing that to IDW if you prefer should be straightforward (see the sketch after the code).
import numpy as np
from scipy import stats
def rasterize(coords, feature, gu, cutoff, kernel=stats.norm(0, 2.5).pdf):
    # compute overlap (filter size / grid unit)
    ovlp = int(np.ceil(cutoff/gu))
    # compute raster dimensions
    mn, mx = coords.min(axis=0), coords.max(axis=0)
    reso = np.ceil((mx - mn) / gu).astype(int)
    base = (mx + mn - reso * gu) / 2
    # map coordinates to raster, the residual is the distance
    grid_res = coords - base
    grid_coords = np.rint(grid_res / gu).astype(int)
    grid_res -= gu * grid_coords
    # because of overlap we must add neighboring grid points to the nearest
    gcovlp = np.c_[-ovlp:ovlp+1, np.zeros(2*ovlp+1, dtype=int)]
    grid_coords = (gcovlp[:, None, None, :] + gcovlp[None, :, None, ::-1]
                   + grid_coords).reshape(-1, 2)
    # the corresponding residuals have the same offset with opposite sign
    gdovlp = -gu * (gcovlp + 1/2)
    grid_res = (gdovlp[:, None, None, :] + gdovlp[None, :, None, ::-1]
                + grid_res).reshape(-1, 2)
    # discard off fov grid points and points outside the cutoff
    valid, = np.where(((grid_coords >= 0) & (grid_coords <= reso)).all(axis=1) & (
        np.einsum('ij,ij->i', grid_res, grid_res) <= cutoff*cutoff))
    grid_res = grid_res[valid]
    feature = feature[valid // (2*ovlp+1)**2]
    # flatten grid so we can use bincount
    grid_flat = np.ravel_multi_index(grid_coords[valid].T, reso+1)
    return np.bincount(
        grid_flat,
        feature * kernel(np.sqrt(np.einsum('ij,ij->i', grid_res, grid_res))),
        (reso + 1).prod()).reshape(reso+1)
gu = 5
cutoff = 10
coords = np.random.randn(10_000, 2) * (100, 20)
coords[:, 1] += 80 * np.sin(coords[:, 0] / 40)
feature = np.random.uniform(0, 1000, (10_000,))
from timeit import timeit
print(timeit("rasterize(coords, feature, gu, cutoff)", globals=globals(), number=100)*10, 'ms')
pic = rasterize(coords, feature, gu, cutoff)
import pylab
pylab.pcolor(pic, cmap=pylab.cm.jet)
pylab.colorbar()
pylab.show()
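To switch from the Gaussian to IDW weighting, the kernel argument can be swapped for an inverse-distance function; a minimal sketch (eps is a hypothetical guard against division by zero at zero distance):
def idw_kernel(d, eps=1e-6):
    # inverse-distance-squared weights; eps keeps d == 0 from blowing up
    return 1.0 / (d * d + eps)

pic_idw = rasterize(coords, feature, gu, cutoff, kernel=idw_kernel)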
