Draw triangles in pygame given only side lengths - python

I have three line lengths and I need to plot a triangle on the screen with them.
Say I have:
len1 = 30
len2 = 50
len3 = 70
(these are randomly generated)
I can draw the first line at the bottom like this
pygame.draw.line(screen, red, (500,500), (500+len1,500), 10)
The other two lines will start at (500,500) and (500+len1,500) respectively and will share the same endpoint, but I can't figure out the math to get that location.

I converted the formula in Jody Muelaner's answer here to Python:
import math

def thirdpoint(a, b, c):
    # Put the base of length a on the x-axis from (0, 0) to (a, 0);
    # b is the side leaving (0, 0) and c the side leaving (a, 0).
    x = ((a**2) + (b**2) - (c**2)) / (a*2)   # offset of the apex along the base
    y = math.sqrt((b**2) - (x**2))           # height of the apex above the base
    return [x, y]
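A hedged usage sketch that puts the pieces together (screen, red and the base corner (500, 500) are taken from the question; it assumes the three lengths satisfy the triangle inequality, otherwise math.sqrt fails, and it subtracts the height because pygame's y-axis points down):
x, y = thirdpoint(len1, len2, len3)
apex = (500 + x, 500 - y)                                            # shared endpoint of the other two lines
pygame.draw.line(screen, red, (500, 500), (500 + len1, 500), 10)     # base of length len1
pygame.draw.line(screen, red, (500, 500), apex, 10)                  # side of length len2
pygame.draw.line(screen, red, (500 + len1, 500), apex, 10)           # side of length len3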

Related

How to generate points within rectangle, at random locations and without overlap?

I have an image with width: 1980 and height: 1080.
Ultimately, I want to place various shapes within the image, but at random locations and in such a way that they do not overlap. The 0,0 coordinates of the image are in the center.
Before rendering the shapes into the image (I don't need help with this), I need to write an algorithm to generate the XY points/locations. I want to be able to specify the minimum distance any given point is allowed to get to any other points.
How can I do this?
All I have been able to do is generate points at equally spaced locations and then add a bit of randomness to each point. But this is not ideal, because it means points only vary within some 'cell' of a grid, and if the randomness value is too high, they will end up outside of the rectangle. Here is my code:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
from random import randrange

def is_square(integer):
    root = np.sqrt(integer)
    return integer == int(root + 0.5) ** 2

def perfect_sqr(n):
    nextN = np.floor(np.sqrt(n)) + 1
    return int(nextN * nextN)

def generate_cells(width=1920, height=1080, n=9, show_plot=False):
    # If the number is not a perfect square, we need to find the next number which is,
    # so that we can get the root N, which will be used to determine the number of rows/columns
    if not is_square(n):
        n = perfect_sqr(n)
    N = np.sqrt(n)
    # generate x and y lists, where each represents an array of points evenly spaced between 0 and the width/height
    x = np.array(list(range(0, width, int(width/N))))
    y = np.array(list(range(0, height, int(height/N))))
    # center the points within each 'cell'
    x_centered = x + int(width/N)/2
    y_centered = y + int(height/N)/2
    x_centered = [a + randrange(50) for a in x_centered]
    y_centered = [a + randrange(50) for a in y_centered]
    # generate a grid with the points
    xv, yv = np.meshgrid(x_centered, y_centered)
    if show_plot:
        plt.scatter(xv, yv)
        plt.gca().add_patch(Rectangle((0, 0), width, height, edgecolor='red', facecolor='none', lw=1))
        plt.show()
    # convert the arrays to 1D
    xx = xv.flatten()
    yy = yv.flatten()
    # Merge them side-by-side
    zips = zip(xx, yy)
    # convert to set of points/tuples and return
    return set(zips)

coords = generate_cells(width=1920, height=1080, n=15, show_plot=True)
print(coords)
Assuming you simply want to randomly choose non-overlapping coordinates within the confines of a maximum image size, this might be a good solution.
import numpy as np

def locateImages(field_height: int, field_width: int, min_sep: int, points: int) -> np.array:
    h_range = np.array(range(min_sep//2, field_height - (min_sep//2), min_sep))
    w_range = np.array(range(min_sep//2, field_width - (min_sep//2), min_sep))
    mx_len = max(len(h_range), len(w_range))
    if len(h_range) < mx_len:
        xtra = np.random.choice(h_range, mx_len - len(h_range))
        h_range = np.append(h_range, xtra)
    if len(w_range) < mx_len:
        xtra = np.random.choice(w_range, mx_len - len(w_range))
        w_range = np.append(w_range, xtra)
    h_points = np.random.choice(h_range, points, replace=False)
    w_points = np.random.choice(w_range, points, replace=False)
    return np.concatenate((np.vstack(h_points), np.vstack(w_points)), axis=1)
Then, given:
field_height = the maximum vertical coordinate of the image space
field_width = the maximum horizontal coordinate of the image space
min_sep = the minimum spacing between images
points = the number of coordinates to be selected
Then:
locateImages(15, 8, 2, 5) will yield:
array([[13, 1],
       [ 7, 3],
       [ 1, 5],
       [ 5, 5],
       [11, 5]])
Render the output:
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

points = locateImages(1080, 1920, 100, 15)
y, x = zip(*points)   # first column is the vertical coordinate, second the horizontal
plt.scatter(x, y)
plt.gca().add_patch(Rectangle((0, 0), 1920, 1080, edgecolor='red', facecolor='none', lw=1))
plt.show()

How do I properly fill outside a curved shape in pycairo?

I am trying to fill the image area outside of a custom curved shape in Pycairo, but I am struggling to achieve this. I have managed to get the result I need by stroking the shape with a large thickness and drawing multiple shapes of increasing size on top of each other, but this solution is inefficient (I care about efficiency because I need to draw 1200 shapes quickly, which currently takes about a minute). I think there might be a way to use a mask or clip or something similar, but I can't find anything online that helps. If there is a way to specify that the stroke is drawn only outside the path, not on both sides, that could also be a solution.
Does anyone out there know of a better way to achieve this?
Here's the code I use to draw a curved shape. The calculate_curve_handles function just returns two curve handles between the two sides of the shape, based on the curve_point_1 and curve_point_2 offsets. The polygon function returns the vertex locations for an N-sided polygon.
vertices = polygon(num_sides, shape_radius + (scale * (line_thickness-20)), rotation, [x + offset[0], y + offset[1]])
for i in range(len(vertices)):
    start_point = [vertices[i][0], vertices[i][1]]
    cr.move_to(start_point[0], start_point[1])
    if i == len(vertices)-1:
        end_point = [vertices[0][0], vertices[0][1]]
    else:
        end_point = [vertices[i+1][0], vertices[i+1][1]]
    point_1, point_2 = calculate_curve_handles(start_point, end_point, curve_point_1_offset, curve_point_2_offset)
    cr.curve_to(point_1[0], point_1[1], point_2[0], point_2[1], end_point[0], end_point[1])
cr.set_line_cap(cairo.LINE_CAP_ROUND)
cr.fill()
This is the desired result, achieved with many stroked objects layered on top of each other:
This is what I get when I try to use cr.fill() on the curved path:
Ok, I just figured out that if I move the move_to() function outside of the for loop for the vertices, it draws the shape properly.
Then, by setting the fill rule to cr.set_fill_rule(cairo.FILL_RULE_EVEN_ODD) and drawing a large rectangle behind the shape, I can get the desired effect in even less time (a sketch of the combined path follows the loop below).
cr.move_to(vertices[0][0], vertices[0][1])
for i in range(0, len(vertices)):
    start_point = [vertices[i][0], vertices[i][1]]
    if i == len(vertices)-1:
        end_point = [vertices[0][0], vertices[0][1]]
    else:
        end_point = [vertices[i+1][0], vertices[i+1][1]]
    point_1, point_2 = calculate_curve_handles(start_point, end_point, curve_point_1_offset, curve_point_2_offset)
    cr.curve_to(point_1[0], point_1[1], point_2[0], point_2[1], end_point[0], end_point[1])
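The fill-rule half of that is not shown above, so here is a hedged sketch of the idea: add a rectangle covering the whole surface to the same path as the curved shape, switch to the even-odd fill rule, and fill once. WIDTH and HEIGHT are placeholder names for the surface size, the fill colour is arbitrary, and calculate_curve_handles and the offsets are the question's own helpers:
cr.set_fill_rule(cairo.FILL_RULE_EVEN_ODD)
cr.rectangle(0, 0, WIDTH, HEIGHT)                 # outer rectangle covering the whole image
cr.move_to(vertices[0][0], vertices[0][1])
for i in range(len(vertices)):
    start_point = vertices[i]
    end_point = vertices[(i + 1) % len(vertices)]
    point_1, point_2 = calculate_curve_handles(start_point, end_point,
                                               curve_point_1_offset, curve_point_2_offset)
    cr.curve_to(point_1[0], point_1[1], point_2[0], point_2[1], end_point[0], end_point[1])
cr.close_path()
cr.set_source_rgb(0, 0, 0)                        # fill colour is an assumption
cr.fill()                                         # fills between the rectangle and the curve, not inside the curve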
I found a solution that works for now. Basically, for every side of the shape, I find a point extended along the vector from the centre of the object to the vertex, well outside the drawing area. Then I fill each line segment as a separate shape.
import numpy as np

def calculate_bounds(start_point, end_point, centre_point):
    direction = np.subtract(start_point, centre_point)
    normalised_dir = direction / np.sqrt(np.sum(direction ** 2))
    bound_1 = start_point + normalised_dir * 5000
    direction = np.subtract(end_point, centre_point)
    normalised_dir = direction / np.sqrt(np.sum(direction ** 2))
    bound_2 = end_point + normalised_dir * 5000
    return bound_1, bound_2
Then the code for drawing the polygon is:
for i in range(0, len(vertices)):
    start_point = [vertices[i][0], vertices[i][1]]
    cr.move_to(start_point[0], start_point[1])
    if i == len(vertices)-1:
        end_point = [vertices[0][0], vertices[0][1]]
    else:
        end_point = [vertices[i+1][0], vertices[i+1][1]]
    point_1, point_2 = calculate_curve_handles(start_point, end_point, curve_point_1_offset, curve_point_2_offset)
    cr.curve_to(point_1[0], point_1[1], point_2[0], point_2[1], end_point[0], end_point[1])
    bound_1, bound_2 = calculate_bounds(start_point, end_point, [x + offset[0], y + offset[1]])
    cr.line_to(bound_2[0], bound_2[1])
    cr.line_to(bound_1[0], bound_1[1])
    cr.fill_preserve()
    cr.stroke()

Interpolate between two images

I'm trying to interpolate between two images in Python.
The images are of shape (188, 188).
I wish to interpolate the image 'in-between' these two images. Say Image_1 is at location z=0 and Image_2 is at location z=2. I want the interpolated image at location z=1.
I believe this answer (MATLAB) contains a similar problem and solution.
Creating intermediate slices in a 3D MRI volume with MATLAB
I've tried to convert this code to Python as follows:
import numpy as np
from scipy.interpolate import interpn
from scipy.interpolate import griddata

# Construct 3D volume from images
# arr.shape = (2, 182, 182)
arr = np.r_['0,3', image_1, image_2]
slices, rows, cols = arr.shape

# Construct meshgrids
[X, Y, Z] = np.meshgrid(np.arange(cols), np.arange(rows), np.arange(slices))
[X2, Y2, Z2] = np.meshgrid(np.arange(cols), np.arange(rows), np.arange(slices*2))

# Run n-dim interpolation
Vi = interpn([X, Y, Z], arr, np.array([X2, Y2, Z2]).T)
However, this produces an error:
ValueError: The points in dimension 0 must be strictly ascending
I suspect I am not constructing my meshgrid(s) properly but am kind of lost on whether or not this approach is correct.
Any ideas?
---------- Edit -----------
Found some MATLAB code that appears to solve this problem:
Interpolating Between Two Planes in 3d space
I attempted to convert this to Python:
import numpy as np
from scipy.ndimage.morphology import distance_transform_edt
from scipy.interpolate import interpn

def ndgrid(*args, **kwargs):
    """
    Same as calling ``meshgrid`` with *indexing* = ``'ij'`` (see
    ``meshgrid`` for documentation).
    """
    kwargs['indexing'] = 'ij'
    return np.meshgrid(*args, **kwargs)

def bwperim(bw, n=4):
    """
    perim = bwperim(bw, n=4)
    Find the perimeter of objects in binary images.
    A pixel is part of an object perimeter if its value is one and there
    is at least one zero-valued pixel in its neighborhood.
    By default the neighborhood of a pixel is the 4 nearest pixels, but
    if `n` is set to 8 the 8 nearest pixels will be considered.
    Parameters
    ----------
    bw : A black-and-white image
    n : Connectivity. Must be 4 or 8 (default: 4)
    Returns
    -------
    perim : A boolean image
    From Mahotas: http://nullege.com/codes/search/mahotas.bwperim
    """
    if n not in (4, 8):
        raise ValueError('mahotas.bwperim: n must be 4 or 8')
    rows, cols = bw.shape

    # Translate image by one pixel in all directions
    north = np.zeros((rows, cols))
    south = np.zeros((rows, cols))
    west = np.zeros((rows, cols))
    east = np.zeros((rows, cols))

    north[:-1, :] = bw[1:, :]
    south[1:, :] = bw[:-1, :]
    west[:, :-1] = bw[:, 1:]
    east[:, 1:] = bw[:, :-1]
    idx = (north == bw) & \
          (south == bw) & \
          (west == bw) & \
          (east == bw)
    if n == 8:
        north_east = np.zeros((rows, cols))
        north_west = np.zeros((rows, cols))
        south_east = np.zeros((rows, cols))
        south_west = np.zeros((rows, cols))
        north_east[:-1, 1:] = bw[1:, :-1]
        north_west[:-1, :-1] = bw[1:, 1:]
        south_east[1:, 1:] = bw[:-1, :-1]
        south_west[1:, :-1] = bw[:-1, 1:]
        idx &= (north_east == bw) & \
               (south_east == bw) & \
               (south_west == bw) & \
               (north_west == bw)
    return ~idx * bw

def signed_bwdist(im):
    '''
    Find perim and return masked image (signed/reversed)
    '''
    im = -bwdist(bwperim(im))*np.logical_not(im) + bwdist(bwperim(im))*im
    return im

def bwdist(im):
    '''
    Find distance map of image
    '''
    dist_im = distance_transform_edt(1-im)
    return dist_im

def interp_shape(top, bottom, num):
    if num < 0 and round(num) == num:
        print("Error: number of slices to be interpolated must be integer > 0")
    top = signed_bwdist(top)
    bottom = signed_bwdist(bottom)
    r, c = top.shape
    t = num + 2

    print("Rows - Cols - Slices")
    print(r, c, t)
    print("")

    # rejoin top, bottom into a single array of shape (2, r, c)
    # MATLAB: cat(3,bottom,top)
    top_and_bottom = np.r_['0,3', top, bottom]
    #top_and_bottom = np.rollaxis(top_and_bottom, 0, 3)

    # create ndgrids
    x, y, z = np.mgrid[0:r, 0:c, 0:t-1]    # existing data
    x1, y1, z1 = np.mgrid[0:r, 0:c, 0:t]   # including new slice

    print("Shape x y z:", x.shape, y.shape, z.shape)
    print("Shape x1 y1 z1:", x1.shape, y1.shape, z1.shape)
    print(top_and_bottom.shape, len(x), len(y), len(z))

    # Do interpolation
    out = interpn((x, y, z), top_and_bottom, (x1, y1, z1))

    # MATLAB: out = out(:,:,2:end-1)>=0;
    array_lim = out[-1]-1
    out[out[:, :, 2:out] >= 0] = 1
    return out
I call this as follows:
new_image = interp_shape(image_1,image_2, 1)
I'm pretty sure this is 80% of the way there, but I still get this error when running:
ValueError: The points in dimension 0 must be strictly ascending
Again, I am probably not constructing my meshes correctly. I believe np.mgrid should produce the same result as MATLAB's ndgrid, though.
Is there a better way to construct the ndgrid equivalents?
I figured this out. Or at least a method that produces desirable results.
Based on: Interpolating Between Two Planes in 3d space
import numpy as np
from scipy.ndimage.morphology import distance_transform_edt
from scipy.interpolate import interpn

def signed_bwdist(im):
    '''
    Find perim and return masked image (signed/reversed)
    '''
    im = -bwdist(bwperim(im))*np.logical_not(im) + bwdist(bwperim(im))*im
    return im

def bwdist(im):
    '''
    Find distance map of image
    '''
    dist_im = distance_transform_edt(1-im)
    return dist_im

def interp_shape(top, bottom, precision):
    '''
    Interpolate between two contours

    Input: top
               [X,Y] - Image of top contour (mask)
           bottom
               [X,Y] - Image of bottom contour (mask)
           precision
               float - % between the images to interpolate
               Ex: precision=0.5 - Interpolate the middle image between top and bottom image
    Output: out
               [X,Y] - Interpolated image at precision (%) between top and bottom
    '''
    if precision > 2:
        print("Error: Precision must be between 0 and 1 (float)")

    top = signed_bwdist(top)
    bottom = signed_bwdist(bottom)

    # rows, cols definition
    r, c = top.shape

    # Reverse % indexing
    precision = 1 + precision

    # rejoin top, bottom into a single array of shape (2, r, c)
    top_and_bottom = np.stack((top, bottom))

    # create ndgrids
    points = (np.r_[0, 2], np.arange(r), np.arange(c))
    xi = np.rollaxis(np.mgrid[:r, :c], 0, 3).reshape((r**2, 2))
    xi = np.c_[np.full((r**2), precision), xi]

    # Interpolate for new plane
    out = interpn(points, top_and_bottom, xi)
    out = out.reshape((r, c))

    # Threshold distmap to values above 0
    out = out > 0

    return out

# Run interpolation
out = interp_shape(image_1, image_2, 0.5)
Example output:
I came across a similar problem where I needed to interpolate the shift between frames in which the change was not merely a translation but also a change to the shape itself. I solved this problem by:
Using center_of_mass from scipy.ndimage.measurements to calculate the center of the object we want to move in each frame
Defining a continuous parameter t, where t=0 corresponds to the first frame and t=1 to the last
Interpolating the motion between the two nearest frames (with regard to a specific t value) by shifting the images back/forward via shift from scipy.ndimage.interpolation and overlaying them.
Here is the code:
import numpy as np
from scipy.ndimage.measurements import center_of_mass
from scipy.ndimage.interpolation import shift

def inter(images, t):
    # input:
    #   images: list of arrays/frames ordered according to motion
    #   t: parameter ranging from 0 to 1 corresponding to first and last frame
    # returns: interpolated image

    # direction of movement, assumed to be approx. linear
    a = np.array(center_of_mass(images[0]))
    b = np.array(center_of_mass(images[-1]))

    # find index of two nearest frames
    arr = np.array([center_of_mass(images[i]) for i in range(len(images))])
    v = a + t*(b-a)  # convert t into vector
    idx1 = (np.linalg.norm((arr - v), axis=1)).argmin()
    arr[idx1] = np.array([0, 0])  # this is sloppy, should be changed if relevant values are near [0,0]
    idx2 = (np.linalg.norm((arr - v), axis=1)).argmin()

    if idx1 > idx2:
        b = np.array(center_of_mass(images[idx1]))  # center of mass of nearest contour
        a = np.array(center_of_mass(images[idx2]))  # center of mass of second nearest contour
        tstar = np.linalg.norm(v-a)/np.linalg.norm(b-a)  # parameter from 0 to 1 between the two nearest frames
        im1_shift = shift(images[idx2], (b-a)*tstar)       # shift frame 1
        im2_shift = shift(images[idx1], -(b-a)*(1-tstar))  # shift frame 2
        return im1_shift + im2_shift  # return average
    if idx1 < idx2:
        b = np.array(center_of_mass(images[idx2]))
        a = np.array(center_of_mass(images[idx1]))
        tstar = np.linalg.norm(v-a)/np.linalg.norm(b-a)
        im1_shift = shift(images[idx2], -(b-a)*(1-tstar))
        im2_shift = shift(images[idx1], (b-a)*tstar)
        return im1_shift + im2_shift
Result example
I don't know the solution to your problem, but I don't think it's possible to do this with interpn.
I corrected the code that you tried, and used the following input images:
But the result is:
Here's the corrected code:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from scipy import interpolate
n = 8
img1 = np.zeros((n, n))
img2 = np.zeros((n, n))
img1[2:4, 2:4] = 1
img2[4:6, 4:6] = 1
plt.figure()
plt.imshow(img1, cmap=cm.Greys)
plt.figure()
plt.imshow(img2, cmap=cm.Greys)
points = (np.r_[0, 2], np.arange(n), np.arange(n))
values = np.stack((img1, img2))
xi = np.rollaxis(np.mgrid[:n, :n], 0, 3).reshape((n**2, 2))
xi = np.c_[np.ones(n**2), xi]
values_x = interpolate.interpn(points, values, xi, method='linear')
values_x = values_x.reshape((n, n))
print(values_x)
plt.figure()
plt.imshow(values_x, cmap=cm.Greys)
plt.clim((0, 1))
plt.show()
I think the main difference between your code and mine is in the specification of xi. interpn tends to be somewhat confusing to use, and I've explained it in greater detail in an older answer. If you're curious about the mechanics of how I've specified xi, see this answer of mine explaining what I've done.
This result is not entirely surprising, because interpn just linearly interpolated between the two images: so the parts which had 1 in one image and 0 in the other simply became 0.5.
Over here, since one image is the translation of the other, it's clear that we want an image that's translated "in-between". But how would interpn interpolate two general images? If you had one small circle and one big circle, is it in any way clear that there should be a circle of intermediate size "between" them? What about interpolating between a dog and a cat? Or a dog and a building?
I think you are essentially trying to "draw lines" connecting the edges of the two images and then figure out the image in between. This is similar to sampling a moving video at half a frame. You might want to check out something like optical flow, which connects adjacent frames using displacement vectors. I'm not aware of which Python packages/implementations are available, though.
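One widely used option is OpenCV's dense optical flow. Below is a minimal, hedged sketch, not an exact intermediate-frame algorithm: it assumes two single-channel binary masks (values 0/1) of the same shape, that the opencv-python package is installed, and it uses a common backward-mapping approximation for the warp.
import numpy as np
import cv2  # assumption: pip install opencv-python

def interp_flow(img1, img2, t=0.5):
    # Farneback flow expects 8-bit single-channel images; scale 0/1 masks to 0/255
    a = (img1 * 255).astype(np.uint8)
    b = (img2 * 255).astype(np.uint8)
    # positional parameters: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(a, b, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # sample img1 a fraction t of the way along the flow towards img2 (approximation)
    map_x = (grid_x - t * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - t * flow[..., 1]).astype(np.float32)
    return cv2.remap(a, map_x, map_y, cv2.INTER_LINEAR)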

Python: objects in a list

How do I make it so that the circles in this list can be modified or removed later on? Isn't the list different from the actual objects?
import random
import graphics              # assumes Zelle's graphics.py module
from graphics import color_rgb

def drawAllBubbles(window, numOfBubbles):
    bublist = list()
    for x in range(numOfBubbles):
        p1 = random.randrange(1000)
        p2 = random.randrange(1000)
        center = graphics.Point(p1, p2)
        bubx = center.getX()
        buby = center.getY()
        r = random.randint(1, 255)  # randomize rgb values
        g = random.randint(1, 255)
        b = random.randint(1, 255)
        circle = graphics.Circle(center, 5)
        circle.setFill(color_rgb(r, g, b))
        circle.setOutline("black")
        circle.draw(window)
        bublist.append(circle)
    return bublist

window.getMouse()
This part of the script essentially draws the bubbles and then returns a list of circles.
The circle objects are contained in bublist.
If you iterate over the list, you can change, remove, or redraw the circles. For example:
for bubble in bublist:
    bubble.setOutline("green")
    bubble.draw(window)
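If you later need to remove a bubble from the window as well as from the list, a small hedged sketch (assuming Zelle-style graphics objects, which provide an undraw() method):
bubble = bublist.pop()   # take the last circle out of the list
bubble.undraw()          # remove it from the window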

Generate coordinates inside Polygon

I want to bin the values of polygons to a fine regular grid.
For instance, I have the following coordinates:
data = 2.353
data_lats = np.array([57.81000137, 58.15999985, 58.13000107, 57.77999878])
data_lons = np.array([148.67999268, 148.69999695, 148.47999573, 148.92999268])
My regular grid looks like this:
delta = 0.25
grid_lons = np.arange(-180, 180, delta)
grid_lats = np.arange(90, -90, -delta)
llx, lly = np.meshgrid( grid_lons, grid_lats )
rows = lly.shape[0]
cols = llx.shape[1]
grid = np.zeros((rows,cols))
Now I can find the grid pixel that corresponds to the center of my polygon very easily:
centerx, centery = np.mean(data_lons), np.mean(data_lats)
row = int(np.floor( centery/delta ) + (grid.shape[0]/2))
col = int(np.floor( centerx/delta ) + (grid.shape[1]/2))
grid[row,col] = data
However, there are probably a couple of grid pixels that still intersect with the polygon. Hence, I would like to generate a bunch of coordinates inside my polygon (data_lons, data_lats) and find their corresponding grid pixels as before. Do you have a suggestion for generating the coordinates randomly or systematically? I failed, but am still trying.
Note: One data set contains around ~80000 polygons, so it has to be really fast (a couple of seconds). That is also why I chose this approach, because it does not account for the area of overlap... (like my earlier question Data binning: irregular polygons to regular mesh, which is VERY slow)
I worked on a quick and dirty solution by simply calculating the coordinates between corner pixels. Take a look:
dlats = np.zeros((data_lats.shape[0], 4)) + np.nan
dlons = np.zeros((data_lons.shape[0], 4)) + np.nan
idx = [0, 1, 3, 2, 0]  # rearrange the corner pixels
for cc in range(4):
    dlats[:, cc] = np.mean((data_lats[:, idx[cc]], data_lats[:, idx[cc+1]]), axis=0)
    dlons[:, cc] = np.mean((data_lons[:, idx[cc]], data_lons[:, idx[cc+1]]), axis=0)
data_lats = np.column_stack((data_lats, dlats))
data_lons = np.column_stack((data_lons, dlons))
Thus, the red dots represent the original corners - the blue ones the intermediate pixels between them.
I can do this one more time and include the center pixel (geo[:,[4,9]])
dlats = np.zeros((data.shape[0], 8))
dlons = np.zeros((data.shape[0], 8))
for cc in range(8):
    dlats[:, cc] = np.mean((data_lats[:, cc], geo[:, 4]), axis=0)
    dlons[:, cc] = np.mean((data_lons[:, cc], geo[:, 9]), axis=0)
data_lats = np.column_stack((data_lats, dlats, geo[:, 4]))
data_lons = np.column_stack((data_lons, dlons, geo[:, 9]))
This works really nice, and I can assign each point directly to its corresponding grid pixel like this:
row = np.floor( data_lats/delta ) + (llx.shape[0]/2)
col = np.floor( data_lons/delta ) + (llx.shape[1]/2)
However the final binning now takes ~7sec!!! How can I speed this code up:
for ii in np.arange(len(data)):
    for cc in np.arange(data_lats.shape[1]):
        final_grid[row[ii, cc], col[ii, cc]] += data[ii]
        final_grid_counts[row[ii, cc], col[ii, cc]] += 1
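As a side note, the accumulation itself can be vectorised with np.add.at, which handles repeated indices correctly. This is only a sketch, assuming row, col and data are NumPy arrays shaped as in the loop above:
import numpy as np

rows = row.astype(int)                              # shape (n_polygons, n_points_per_polygon)
cols = col.astype(int)
vals = np.broadcast_to(data[:, None], rows.shape)   # each polygon's value repeated for its points
np.add.at(final_grid, (rows, cols), vals)           # unbuffered scatter-add, safe with duplicate indices
np.add.at(final_grid_counts, (rows, cols), 1)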
You'll need to test the following approach to see if it is fast enough. First, you should transform all your lats and lons into (possibly fractional) indices into your grid:
idx_lats = (data_lats - lat_grid_start) / lat_grid_step
idx_lons = (data_lons - lon_grid_start) / lon_grid_step
Next, we want to split your polygons into triangles. For any convex polygon, you could take the center of the polygon as one vertex of all triangles, and then the vertices of the polygon in consecutive pairs. But if your polygons are all quadrilaterals, it is going to be faster to divide them into only 2 triangles, using vertices 0, 1, 2 for the first, and 0, 2, 3 for the second.
To know if a certain point is inside a triangle, I am going to use the barycentric coordinates approach described here. This first function checks whether a bunch of points are inside a triangle:
def check_in_triangle(x, y, x_tri, y_tri):
    A = np.vstack((x_tri[0], y_tri[0]))
    lhs = np.vstack((x_tri[1:], y_tri[1:])) - A
    rhs = np.vstack((x, y)) - A
    uv = np.linalg.solve(lhs, rhs)
    # Equivalent to (uv[0] >= 0) & (uv[1] >= 0) & (uv[0] + uv[1] <= 1)
    return np.logical_and.reduce(uv >= 0, axis=0) & (np.sum(uv, axis=0) <= 1)
Given a triangle by its vertices, you can get the lattice points inside it, by running the above function on the lattice points in the bounding box of the triangle:
def lattice_points_in_triangle(x_tri, y_tri):
    x_grid = np.arange(np.ceil(np.min(x_tri)), np.floor(np.max(x_tri)) + 1)
    y_grid = np.arange(np.ceil(np.min(y_tri)), np.floor(np.max(y_tri)) + 1)
    x, y = np.meshgrid(x_grid, y_grid)
    x, y = x.reshape(-1), y.reshape(-1)
    idx = check_in_triangle(x, y, x_tri, y_tri)
    return x[idx], y[idx]
And for a quadrilateral, you simply call this last function twice :
def lattice_points_in_quadrilateral(x_quad, y_quad):
    return map(np.concatenate,
               zip(lattice_points_in_triangle(x_quad[:3], y_quad[:3]),
                   lattice_points_in_triangle(x_quad[[0, 2, 3]],
                                              y_quad[[0, 2, 3]])))
If you run this code on your example data, you will get two empty arrays returned: that's because the order of the quadrilateral points is a surprising one: indices 0 and 1 define one diagonal, 2 and 3 the other. My function above was expecting the vertices to be ordered around the polygon. If you really are doing things this other way, you need to change the second call to lattice_points_in_triangle inside lattice_points_in_quadrilateral so that the indices used are [0, 1, 3] instead of [0, 2, 3].
And now, with that change :
>>> idx_lats = (data_lats - (-180) ) / 0.25
>>> idx_lons = (data_lons - (-90) ) / 0.25
>>> lattice_points_in_quadrilateral(idx_lats, idx_lons)
[array([952]), array([955])]
If you change the resolution of your grid to 0.1:
>>> idx_lats = (data_lats - (-180) ) / 0.1
>>> idx_lons = (data_lons - (-90) ) / 0.1
>>> lattice_points_in_quadrilateral(idx_lats, idx_lons)
[array([2381, 2380, 2381, 2379, 2380, 2381, 2378, 2379, 2378]),
array([2385, 2386, 2386, 2387, 2387, 2387, 2388, 2388, 2389])]
Timing wise this approach is going to be, in my system, about 10x too slow for your needs:
In [8]: %timeit lattice_points_in_quadrilateral(idx_lats, idx_lons)
1000 loops, best of 3: 269 us per loop
So you are looking at over 20 sec. to process your 80,000 polygons.
