Hello all (first time posting here so I hope I'm not doing anything horribly wrong)...
I'm trying to randomly generate a set of convex polygons with 3 to 2l sides in Python such that each side of each polygon is parallel to one of l predetermined lines. If anybody knows of a way of doing this (with or without the aid of a computational geometry package like CGAL or Shapely), that'd be fantastic.
I start with a list containing 2l angles: the direction of each line, plus each direction + pi, so that both orientations of every line are available. For each polygon I make, I randomly choose 3 to 2l angles from this list and sort them in increasing order, making sure no angle differs by more than pi from its predecessor, to ensure that the angles are capable of defining a polygon. After that, however, I am unable to ensure that the polygons I generate remain convex and only contain sides parallel to the lines I chose. My code currently looks like this:
import math
import random

def generate(l, n, w, h):
    """Generate n polygons with sides parallel to
    at most l vectors in a w x h plane."""
    L = []
    polygons = []
    while len(L) < 2*l:
        i = random.uniform(0, math.pi)
        if i != math.pi and i not in L:
            L.append(i)
            L.append(i + math.pi)
    L.sort()
    while len(polygons) < n:
        Lp = list(L)
        rm = random.randint(0, 2*l - 3)
        # Filter out rm lines, if possible
        for _ in range(rm):
            i = random.randint(0, len(Lp) - 1)
            for j in list(range(i, len(Lp))) + list(range(0, i)):
                nxt = Lp[(j+1) % len(Lp)]
                prv = Lp[(j-1) % len(Lp)]
                if prv < nxt < prv + math.pi or nxt < (prv + math.pi) % (2*math.pi) - 1e-14 < prv:
                    del Lp[j]
                    break
        # Choose a "center" point, then generate a polygon consisting of points
        # a fixed distance away in the direction perpendicular to each angle.
        # This does not work, however; resulting polygons may have sides not
        # parallel to one of the original lines.
        cx, cy = random.uniform(-w/2, w/2), random.uniform(-h/2, h/2)
        points = []
        r = random.uniform(10, 100)
        for theta in Lp:
            # New point is r away from "center" in direction
            # perpendicular to theta
            x = cx + r * math.sin(theta)
            y = cy - r * math.cos(theta)
            points.append(polygon.Vector(x, y))
        polygons.append(polygon.Polygon(points))
    return polygons
The problem lies in the selection of your angles. You have to respect two constraints.
First constraint: the sum of the interior angles of a convex polygon is 180*(n-2) degrees, where n is the number of sides of your convex polygon.
Second constraint: given two consecutive lines, there are two possible angles between them, and only one of the two keeps the polygon convex. Your selection criterion is not very clear in your description, so I can't be sure there isn't a mistake. To select the good angle, I think the simplest thing to do is to consider a direction vector for each line. Compute u, the direction vector of your last line (pointing towards the new line). Compute v, a direction vector of the new line. If u^v > 0, v is not correctly oriented, so you want to take -v. Else if u^v < 0, v is correctly oriented. Details: u^v = u.x*v.y - u.y*v.x (the 2-D cross product).
This leads us to our second constraint: considering u, the direction vector of a side, and u_next, the direction vector of the next side, we must have u^u_next < 0.
I think the second constraint is sufficient. We won't need the first one (but it is still good to know for general knowledge).
What to do: here's what I would do for your problem:
Select a random line. Compute its direction vector u0 such that u0.x > 0 (if u0.x = 0, choose u0 such that u0.y > 0). Initialize the list listDV of direction vectors with u0.
While(listDV.last^listDV.first < 0) {select a random line, compute its direction vector u such that listDV.last^u < 0, push u at the end of listDV}.
Discard the last vector of listDV.
So now you have a list of direction vectors, which are parallel to your lines. The list forms a convex polygon.
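Here is a minimal Python sketch of that selection loop (pick_edge_directions and cross are names I made up, and it assumes at least two distinct lines). Since last^first is zero while the list holds a single vector, the sketch tracks the cumulative clockwise turn instead of testing last^first literally:
import math
import random

def cross(u, v):
    # 2-D cross product: negative when v is clockwise of u (within pi).
    return u[0] * v[1] - u[1] * v[0]

def pick_edge_directions(line_angles):
    """line_angles: the l line directions, each in [0, pi).
    Returns edge direction vectors, turning clockwise."""
    theta = random.choice(line_angles)
    u = (math.cos(theta), math.sin(theta))
    if u[0] < 0 or (u[0] == 0 and u[1] < 0):
        u = (-u[0], -u[1])                 # ensure u0.x > 0 (or u0.y > 0)
    dirs = [u]
    turned = 0.0                           # cumulative clockwise turn
    while turned < 2 * math.pi:            # stop once we wrap past the start
        theta = random.choice(line_angles)
        v = (math.cos(theta), math.sin(theta))
        if cross(dirs[-1], v) > 0:         # wrong orientation: take -v instead
            v = (-v[0], -v[1])
        c = cross(dirs[-1], v)
        if c < 0:                          # skip lines parallel to the last edge
            dot = dirs[-1][0] * v[0] + dirs[-1][1] * v[1]
            turned += math.atan2(-c, dot)  # clockwise angle between the two
            dirs.append(v)
    return dirs[:-1]                       # discard the overshooting vector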
Next will be the creation of your polygon. If you need help on this, let me know!
I have a list of points, let's say 5 points. I want to crop the area that this polygon covers out of the image. Here, the red areas are the points and I want to crop inside of the white area from the black background.
I am able to do this with the cv2.fillConvexPoly() function, but I will run this code on a GPU, so I cannot use cv2. I want to do this with only numpy arrays. I have the X and Y coordinates of the points and the order in which to draw the edges. I could not implement the code without using libraries like PIL or opencv, so any advice would be helpful.
I don't think you could achieve a more optimized approach than cv2 by using only Python. But if you're wondering what a Python + NumPy implementation of cv2.fillConvexPoly() would look like, this is how I would do it:
For each pixel in an image, check if it is inside the polygon
If it is not inside, change the alpha value for that pixel to 0 (assuming the image has an alpha channel. Or you could just make that pixel black)
In order to know if a pixel is inside a polygon, you could use the Winding Number Algorithm / Nonzero-rule which states:
For any point inside the polygon the winding number would be non-zero.
Therefore it is also known as the nonzero-rule algorithm.
And:
For a given curve C and a given point P: construct a ray (a straight line) heading out from P in any direction towards infinity. Find all the intersections of C with this ray. Score up the winding number as follows: for every clockwise intersection (the curve passing through the ray from left to right, as viewed from P) subtract 1; for every counter-clockwise intersection (curve passing from right to left, as viewed from P) add 1. If the total winding number is zero, P is outside C; otherwise, it is inside.
In my approach I won't be adding or subtracting 1; instead, I'll think of it as the number of revolutions: if the angles subtended at the point by all the edges sum to 360 degrees, the point is inside the polygon.
import numpy as np

def _angle_between_three_points(A, B, C):
    # Angle at vertex B of the triangle A-B-C.
    a, b, c = np.array(A), np.array(B), np.array(C)
    ba = a - b
    bc = c - b
    cosine_angle = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    angle = np.arccos(cosine_angle)  # in radians
    return np.degrees(angle)         # in degrees

def _get_edges_from_points(points):
    # Reconstruct the edges by repeatedly hopping to the nearest remaining point.
    edges = []
    dist = lambda p1, p2: np.hypot(p2[0] - p1[0], p2[1] - p1[1])
    _p = points.copy()
    for p in points:
        _p.pop(0)
        try:
            next_point = sorted(map(lambda pn: (pn, dist(p, pn)), _p), key=lambda x: x[1])[0][0]
        except IndexError:
            next_point = points[0]   # last point: close the polygon
        edges.append((p, next_point))
    return edges

def is_point_inside(point, polygon):
    point = [point[0], point[1]]
    angles = map(lambda edge: _angle_between_three_points(edge[0], point, edge[1]), _get_edges_from_points(polygon))
    # Compare with a tolerance: the floating-point sum rarely hits 360 exactly.
    return abs(sum(angles) - 360) < 1e-6
Now you can just apply the is_point_inside() to every pixel.
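For example, a usage sketch (the image size and polygon are made up; this pure-Python double loop is slow, roughly width x height x edges operations, and pixels exactly on a vertex or edge may be misclassified due to floating point):
h, w = 64, 64
polygon = [(10, 10), (50, 15), (55, 45), (20, 50)]

mask = np.zeros((h, w), dtype=bool)
for y in range(h):
    for x in range(w):
        mask[y, x] = is_point_inside((x, y), polygon)

image = np.zeros((h, w, 4), dtype=np.uint8)  # an RGBA image
image[..., 3] = np.where(mask, 255, 0)       # zero alpha outside the polygon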
NOTE: It is worth checking out this article from Medium's Towards Data Science
Given a 10x10 grid (2d-array) filled randomly with the numbers 0, 1 or 2, how can I find the Euclidean distance (the l2-norm of the distance vector) between two given points while taking periodic boundaries into account?
Let us consider an arbitrary grid point called centre. Now, I want to find the nearest grid point containing the same value as centre. I need to take periodic boundaries into account, such that the matrix/grid should be seen as a torus rather than a flat plane. In that case, say centre = matrix[0,2], and we find the same number in matrix[9,2], which is at the southern boundary of the matrix. The Euclidean distance computed with my code would be np.sqrt(0**2 + 9**2) = 9.0 for this example. However, because of periodic boundaries, the distance should actually be 1, because matrix[9,2] is the northern neighbour of matrix[0,2]. Hence, if periodic boundaries are implemented correctly, distances above 8 should not occur.
So, I would be interested in how to implement in Python a function that computes the Euclidean distance between two arbitrary points on a torus by applying a wrap-around at the boundaries.
import numpy as np

matrix = np.random.randint(0, 3, (10, 10))
centre = matrix[0, 2]

# rewrite the centre to be the number 5 (to exclude itself as shortest distance)
matrix[0, 2] = 5

# find the points where entries are the same as centre
same = np.where(matrix == centre)
idx_row, idx_col = same

# find distances from centre to all cells of the same value
dist = np.zeros(len(same[0]))
for i in range(len(same[0])):
    delta_row = same[0][i] - 0  # row coord of centre
    delta_col = same[1][i] - 2  # col coord of centre
    dist[i] = np.sqrt(delta_row**2 + delta_col**2)

# retrieve the index of the smallest distance
idx = dist.argmin()
print('Centre value: %i. The nearest cell with same value is at (%i,%i)'
      % (centre, same[0][idx], same[1][idx]))
For each axis, you can check whether the distance is shorter when you wrap around or when you don't. Consider the row axis, with rows i and j.
When not wrapping around, the difference is abs(i - j).
When wrapping around, the difference is "flipped", as in 10 - abs(i - j). In your example with i == 0 and j == 9 you can check that this correctly produces a distance of 1.
Then simply take whichever is smaller:
delta_row = abs(same[0][i] - 0)  # row coord of centre
delta_row = min(delta_row, 10 - delta_row)
And similarly for delta_col.
The final dist[i] calculation needs no changes.
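Putting it together as a small helper (torus_distance is a name I made up; n is the grid size):
import numpy as np

def torus_distance(p, q, n=10):
    # Per-axis difference, wrapped around the boundary when that is shorter.
    dr = abs(p[0] - q[0])
    dc = abs(p[1] - q[1])
    dr = min(dr, n - dr)
    dc = min(dc, n - dc)
    return np.sqrt(dr**2 + dc**2)

torus_distance((0, 2), (9, 2))  # 1.0, as expected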
I have a working 'sketch' of how this could work. In short, I calculate the distance 9 times: once for the normal distance, and 8 more for shifted copies, to possibly correct for a closer 'torus' distance.
As n gets larger, the calculation cost can go sky high. But the torus effect is probably not needed, as there is almost always a point nearby without wrap-around.
You can easily test this: for a grid of size 1, if a point is found at distance 1/2 or closer, you know there is no closer torus point (right?)
import numpy as np
n=10000
np.random.seed(1)
A = np.random.randint(low=0, high=10, size=(n,n))
I create a 10000x10000 grid, and store the locations of the 1s in ONES.
ONES = np.argwhere(A == 1)
Now I define my torus distance, which tries which of the 9 mirrors is the closest:
from sklearn.neighbors import BallTree

def distance_on_torus(point=[500, 500]):
    # The 8 mirror images: shift column 1, column 0, or both, by +/- n.
    index_diff = [[1], [1], [0], [0], [0, 1], [0, 1], [0, 1], [0, 1]]
    coord_diff = [[-1], [1], [-1], [1], [-1, -1], [-1, 1], [1, -1], [1, 1]]

    # Nearest neighbour in the unshifted grid.
    tree = BallTree(ONES, leaf_size=5*n, metric='euclidean')
    dist, indi = tree.query([point], k=1, return_distance=True)
    distances = [dist[0]]

    for indici_to_shift, coord_direction in zip(index_diff, coord_diff):
        MIRROR = ONES.copy()
        for i, shift in zip(indici_to_shift, coord_direction):
            MIRROR[:, i] = MIRROR[:, i] + (shift * n)
        tree = BallTree(MIRROR, leaf_size=5*n, metric='euclidean')
        dist, indi = tree.query([point], k=1, return_distance=True)
        distances.append(dist[0])

    return np.min(distances)
%%time
distance_on_torus([2,3])
It is slow: the above takes 15 minutes, while for n = 1000 it takes less than a second.
An optimisation would be to first consider the non-torus distance, and only if that minimum could possibly be beaten, calculate with the minimal set of extra 'blocks' around. This would greatly increase speed.
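A sketch of that idea (distance_maybe_torus is a name I made up). A mirrored copy of any point lies outside the box, so it can never be closer than the distance from the query point to the nearest wall; if the plain nearest neighbour is already closer than that, the mirrors cannot win:
def distance_maybe_torus(point):
    tree = BallTree(ONES, leaf_size=40, metric='euclidean')
    dist, _ = tree.query([point], k=1, return_distance=True)
    d = dist[0][0]
    # Distance from the query point to the nearest boundary of the n x n box.
    wall = min(point[0], point[1], n - point[0], n - point[1])
    if d <= wall:
        return d                         # no wrapped point can be closer
    return distance_on_torus(point)      # fall back to the full 9-mirror search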
I am trying to pack hard-spheres in a unit cubical box, such that these spheres cannot overlap on each other. This is being done in Python.
I am given some packing fraction f, and the number of spheres in the system is N.
So, I say that the diameter of each sphere will be
d = (6*f/(math.pi*N))**(1/3)
(solving f = N*(math.pi/6)*d**3 for d in a unit box).
My box has periodic boundary conditions, meaning there is a recurring image of the box in every direction. If a particle at the edge of the box has a portion going beyond the wall, that portion sticks out at the other side.
My attempt:
Create a numpy N-by-3 array box which holds the position vector of each particle [x,y,z]
The first particle is fine as it is.
The next particle in the array is checked against all the previous particles. If the distance between them is more than d, move on to the next particle. If they overlap, randomly change the position vector of the particle in question; if the new position does not overlap with the previous particles, accept it.
Repeat steps 2-3 for the next particle.
I am trying to populate my box with these hard spheres, in the following manner:
import numpy as np

# f (packing fraction) and N (number of spheres) are given; the box is the unit cube.
L = 1.0
diameter = (6*f/(np.pi*N))**(1/3)
box = np.random.uniform(0, 1, (N, 3))  # one position vector [x, y, z] per particle

for i in range(1, N):
    mybool = True
    print("particles in box: " + str(i))
    # If we place a bad particle, we change its position and restart the checking.
    while mybool:
        for j in range(0, i):
            displacement = box[j, :] - box[i, :]
            # Minimum-image convention for the periodic boundaries.
            for k in range(3):
                if abs(displacement[k]) > L/2:
                    displacement[k] -= L*np.sign(displacement[k])
            # Check distance between the ith particle and the trailing j particles.
            distance = np.linalg.norm(displacement, 2)
            if distance < diameter:
                # Overlap: move the ith particle somewhere random and restart.
                box[i, :] = np.random.uniform(0, 1, (1, 3))
                break
            if j == i-1 and distance > diameter:
                mybool = False
                break
The problem with this code is that for f = 0.45 it takes a really, really long time to converge. Is there a better, more efficient method to solve this problem?
I think what you are looking for is either the hexagonal close-packed (HCP) lattice or the cubic close-packed (CCP) one, the latter also known as face-centered cubic (FCC). See e.g. Wikipedia on Close-packing of equal spheres.
Since your space has periodic conditions, I believe it doesn't matter which one you choose (HCP or CCP); they both achieve the same density of ~74.04%, which Gauss proved to be the highest density achievable by lattice packing.
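That figure is pi/(3*sqrt(2)):
import math
math.pi / (3 * math.sqrt(2))  # 0.7404804896930611, i.e. ~74.04%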
Update:
For the follow-up question on how to generate efficiently one such lattice, let's take as an example the HCP lattice. First, let's create a bunch of (i, j, k) indices [(0,0,0), (1,0,0), (2,0,0), ..., (0,1,0), ...]. Then, get xyz coordinates from those indices and return a DataFrame with them:
import numpy as np
import pandas as pd

def hcp(n):
    dim = 3
    k, j, i = [v.flatten()
               for v in np.meshgrid(*([range(n)] * dim), indexing='ij')]
    df = pd.DataFrame({
        'x': 2 * i + (j + k) % 2,
        'y': np.sqrt(3) * (j + 1/3 * (k % 2)),
        'z': 2 * np.sqrt(6) / 3 * k,
    })
    return df
We can plot the result as scatter3d using plotly for interactive exploration:
import plotly.graph_objects as go

df = hcp(12)
fig = go.Figure(data=go.Scatter3d(
    x=df.x, y=df.y, z=df.z, mode='markers',
    marker=dict(size=df.x*0 + 30, symbol="circle", color=-df.z, opacity=1),
))
fig.show()
Note: plotly's scatter3d is not a very good rendering of spheres: the marker sizes are constant (so when you zoom in and out, the "spheres" will appear to change relative size), and there is no shading, limited z-ordering faithfulness, etc., but it's convenient to interact with the plot.
Resize and clip to the unit box:
Here is a strict clipping (each sphere needs to be completely inside the unit box). Your "periodic boundary condition" is something you will need to address separately (see further below for ideas).
def hcp_unitbox(r):
    # enough lattice cells to cover the unit box in every direction
    n = int(np.ceil(1 / (np.sqrt(3) * r)))
    df = hcp(n) * r                     # scale: unit spheres -> radius r
    df += r                             # shift so spheres touch the walls from inside
    df = df[(df <= 1 - r).all(axis=1)]  # keep only fully enclosed spheres
    return df
With this, you find that a radius of 0.06 gives you 608 fully enclosed spheres:
hcp_unitbox(.06).shape # (608, 3)
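As a quick sanity check (my addition), the packing fraction those 608 spheres achieve in the unit box; it is well below the ~0.74 HCP bound because the strict clipping at the walls costs density:
r = 0.06
count = hcp_unitbox(r).shape[0]  # 608
count * 4/3 * np.pi * r**3       # ~0.55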
Where you would go next:
You may dig deeper into the effect of your so-called "periodic boundary conditions", and perhaps play with some rotations (and small translations).
To do so, you may try to generate an HCP-lattice that is large enough that any rotation will still fully enclose your unit cube. For example:
r = 0.2 # example
n = int(np.ceil(2 / r))
df = hcp(n) * r - 1
Then rotate it (by any amount) and translate it (by up to 1 radius in any direction) as you wish for your research, and clip. The "periodic boundary conditions", as you call them, present a bit of an extra challenge, as the clipping becomes trickier: first, clip any sphere whose center is outside your box; then select spheres close enough to the boundaries (or partition the regions of interest into overlapping regions along the walls of your cube), and check for collisions among the spheres that fall in each such region, as per your periodic boundary conditions.
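A sketch of the rotate-translate-clip part only (the periodic-boundary bookkeeping is left out; scipy's Rotation is one way to draw a random rotation):
import numpy as np
from scipy.spatial.transform import Rotation

r = 0.2
n = int(np.ceil(2 / r))
pts = (hcp(n) * r - 1)[['x', 'y', 'z']].to_numpy()

R = Rotation.random().as_matrix()        # a uniformly random rotation matrix
pts = pts @ R.T
pts += np.random.uniform(-r, r, size=3)  # translate by up to one radius

inside = ((pts >= r) & (pts <= 1 - r)).all(axis=1)
pts = pts[inside]                        # strict clip, as before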
I have the following problem. Imagine you have a set of coordinates that are somewhat organized in a regular pattern, such as the one shown below.
What I want to do is automatically extract coordinates such that they are ordered from left to right and top to bottom. In addition, the total number of coordinates should be as large as possible, but only include coordinates that lie on a nearly rectangular grid (even if the coordinates have a different symmetry, e.g. hexagonal). I always want to extract coordinates that follow a rectangular unit-cell structure.
For the example shown above, the largest set with such an orthorhombic arrangement would be 8 x 8 coordinates (let's call these dimensions m x n), as framed by the red rectangle.
The problem is that the given coordinates are noisy and distorted.
My approach was to generate an artificial lattice and minimize the difference to the given coordinates, taking into account some rotation, shift and a simple distortion of the lattice. However, it turned out to be tricky to define a cost function that covers the complexity of the problem, i.e. one that minimizes the difference between the given coordinates and the fitted lattice while also maximizing the grid dimensions m x n.
If anyone has a smart idea how to tackle this problem, maybe also with machine learning algorithms, I would be very thankful.
Here is the code that I have used so far:
A function to generate the artificial lattice with m x n coordinates that are spaced by a and b in the "n" and "m" directions. The angle theta allows for a rotation of the lattice.
import numpy as np

def lattice(m, n, a, b, theta):
    coords = []
    for j in range(m):
        for i in range(n):
            coords.append([np.sin(theta)*a*i + np.cos(theta)*b*j,
                           np.cos(theta)*a*i - np.sin(theta)*b*j])
    return np.array(coords)
I used the following function to measure the mean minimal distance between points, which is a good starting point for fitting:
from scipy.spatial import distance

def mean_min_distance(coords):
    cd = distance.cdist(coords, coords)
    cd_1 = np.where(cd == 0, np.nan, cd)
    return np.mean(np.nanmin(cd_1, axis=1))
The following function provides all possible combinations of m x n that theoretically fit the number of coordinates l, whose arrangement is assumed to be unknown. The ability to limit this to minimal and maximal values is included already:
def get_all_mxn(l, min_m=2, min_n=2, max_m=None, max_n=None):
    poss = []
    if max_m is None:
        max_m = l + 1
    if max_n is None:
        max_n = l + 1
    for i in range(min_m, max_m):
        for j in range(min_n, max_n):
            if i * j <= l:
                poss.append([i, j])
    return np.array(poss)
The definition of the cost function I used (for one particular m x n); I first wanted to get a good fit for a certain m x n arrangement. Note that m, n and coords are used as globals here:
def cost(x0):
    a, b, theta, shift_a, shift_b, dd1 = x0
    # generate lattice (m, n and coords are globals)
    l = lattice(m, n, a, b, theta)
    # distort lattice by an affine transformation
    distortion_matr = np.array([[1, dd1], [0, 1]])
    l = np.dot(distortion_matr, l.T).T
    # shift lattice
    l = l + np.array((shift_b, shift_a))
    # some padding to make the arrays the same length
    len_diff = coords.shape[0] - l.shape[0]
    l = np.append(l, (1e3, 1e3)*len_diff).reshape((l.shape[0] + len_diff, 2))
    # calculate all distances between all points
    cd = distance.cdist(coords, l)
    # minimum distance between each artificial lattice point and all coords
    cd_min = np.min(cd[:, :coords.shape[0] - len_diff], axis=0)
    # return the root of the summed squares of all minimal distances
    return np.sqrt(np.sum(np.abs(cd_min) ** 2))
I then run the minimization:
from scipy.optimize import minimize

md = mean_min_distance(coords)
# initial guess: spacings from the mean minimal distance, plus a small
# rotation, shift and distortion
x0 = np.array((md, md, np.deg2rad(-3.), 3, 1, 0.12))
res = minimize(cost, x0)
However, the results are extremely dependent on the initial parameters x0, and I have not even included a fitting of m and n.
Below given is an example image where 'center-point' is (x0,y0) (the center of the wheel). The other points are the far ends of the spokes. The distance between the 'center-point' and the far end of a spoke may differ (spokes of different lengths). All these points are in the cartesian coordinate system.
I need to find the largest angle made by any two consecutive spokes. In this figure all the angles are the same, but if any one of the spokes were missing, we would have that angle as the largest angle at the origin.
My take:
I calculate the angle created by each edge with respect to the x axis, one at a time, subtracting the previous one (which gives the angle between two spokes). I keep track of the largest angle, updating it every time I encounter an angle larger than the previous one. My method works, but I am wondering if a more efficient method is available.
Assuming you want the angle between two spokes, I suggest you convert the data points to polar/complex coordinates. This is made easy by the cmath module, and allows you to do something like the following (phase extracts just the angle about the centre):
import cmath

def largest_spoke_angle(centre, peripheral):
    per_from_centre = [complex(z[0]-centre[0], z[1]-centre[1]) for z in peripheral]
    per_angles = [cmath.phase(z) for z in per_from_centre]
    per_angles.sort()
    differences = [per_angles[n+1] - per_angles[n] for n in range(len(per_angles)-1)] \
                  + [per_angles[0] + 2*cmath.pi - per_angles[-1]]
    return max(differences)  # in radians

centre = (0., 0.)
peripheral = [(1., 2.), (3., 4.), (3., 5.)]
print(largest_spoke_angle(centre, peripheral))
I think I would do something like this:
angles = [get_angle_from_xaxis(origin,point) for point in points]
#make sure the angles are in order
angles.sort()
#need to compare last one with first one
angles.insert(0,angles[-1]-360.0) #360 if degrees, otherwise 2*math.pi.
#Now calculate the difference between adjacent angles and take the maximum
maxangle = max( angles[i] - angle for i,angle in enumerate(angles[:-1],1) )
This is basically the solution you describe. The only thing I've added is a check between the last and first and a sort to make sure we have the angles in the right order.
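The helper get_angle_from_xaxis is not defined above; one possible definition (in degrees, measured counter-clockwise from the positive x axis) would be:
import math

def get_angle_from_xaxis(origin, point):
    # angle of the vector origin -> point with respect to the x axis
    return math.degrees(math.atan2(point[1] - origin[1], point[0] - origin[0]))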
The answer of @user1597034 is correct, but it's not possible to tell which spokes produced the largest angle.
The code below finds the indices of the two vectors forming the largest angle:
import cmath
import numpy as np

center = (0., 0.)
peripheral = np.array([(-1., -1.), (0., 1.), (1., -0.55), (0, -1), (-1, 1)])
per_from_centre = [complex(z[0]-center[0], z[1]-center[1]) for z in peripheral]
per_angles = [cmath.phase(z) for z in per_from_centre]
id_ord = np.argsort(per_angles, axis=-1)  # indices that sort the angles
per_angles.sort()
differences = [per_angles[n+1] - per_angles[n] for n in range(len(per_angles)-1)] \
              + [per_angles[0] + 2*cmath.pi - per_angles[-1]]
# ----- same code as @user1597034's answer up to here -----

# find the indices of the adjacent spokes forming the largest angle
max_value = max(differences)  # maximum value
for i in range(len(differences)):
    if max_value == differences[i]:
        if i == (len(differences) - 1):
            pairs = [id_ord[0], id_ord[-1]]   # the wrap-around gap
        else:
            pairs = [id_ord[i]] + [id_ord[i+1]]
print('pair index of largest angle:', pairs)
pair index of largest angle: [2, 1]