Creating and offsetting points outside polygon on a discrete grid - python

I am working on a discrete 2D grid of points in which there are "shapes" that I would like to create points outside of. I have been able to identify the vertices of these shapes and take their convex hulls. So far this works well: in the image I attached, the purple is the shape in question and the red line is the convex contour I have computed.
What I would like to do now is create two neighborhoods of points outside this shape. The first one is a set of points directly outside (as close as the grid size will allow), the second is another set of points but offset some distance away (the distance is not fixed, but rather an input).
I have attempted to write this in Python and get okay results. Here is an example of my current output. The problem is that the offsets are not perfect; for example, look at the bottom-most point in the image I attached. It kinks downwards whereas the original shape does not. It's not too bad in this example, but in other cases, where the shape is smaller or where I take a smaller offset, it gets worse. I also have an issue where the offsets sometimes overlap, even though they are supposed to be some distance apart. I would also like there to be one line in each section of the contour, not two (for example, in the top left).
My current attempt uses the Shapely package to handle most of the computational geometry. An outline of what I do once I have found the vertices of the convex contour: offset these vertices by some amount, then interpolate along each pair of vertices to obtain many points along these lines. Afterwards I use a coordinate transform to snap every point to the nearest grid point. This is how I obtain my final set of points. Below is the actual code I have written.
How can I improve this so I don't run into the issues I described?
Function #1 - Computes the offset points
def OutsidePoints(vertices, dist):
    poly_line = LinearRing(vertices)
    poly_line_offset = poly_line.buffer(dist, resolution=1, join_style=2, mitre_limit=1).exterior
    new_vertices = list(poly_line_offset.coords)
    new_vertices = np.asarray(new_vertices)
    shape = sg.Polygon(new_vertices)
    points = []
    for t in np.arange(0, shape.length, step_size):
        temp_points = np.transpose(shape.exterior.interpolate(t).xy)
        points.append(temp_points[0])
    points = np.array(points)
    points = np.unique(points, axis=0)
    return points
Function #2 - Transforming these points into points that are on my grid
def IndexFinder(points):
    index_points = invCoordinateTransform(points)
    for i in range(len(index_points)):
        for j in range(2):
            index_points[i][j] = math.floor(index_points[i][j])
    index_points = np.unique(index_points, axis=0)
    return index_points
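For context, here is roughly how the two functions above would be combined to build the two neighbourhoods (a sketch only; hull_vertices, grid_spacing and offset_distance are placeholder names, not my actual variables):
# hypothetical usage of the two functions above
inner_points = IndexFinder(OutsidePoints(hull_vertices, grid_spacing))     # points directly outside the shape
outer_points = IndexFinder(OutsidePoints(hull_vertices, offset_distance))  # points offset farther away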
Many thanks!

Related

Numpy: How to find all pixels within a certain distance of a given pixel and weigh them accordingly?

Please take a look at the image A below.
Some introduction to my question:
My goal is to obtain an accurate coordinate trace of the illustrated DNA molecule. The coordinates of the trace are shown as the blue dots in the image and are stored as a 2-d numpy array in Python, i.e. trace: nd-array; shape (N, 2), where N is the number of trace points. The plot was done using plt.scatter(trace[:, 1], trace[:, 0]).
Now, please take a closer look at the following function:
def rel_coords_to_trace(trace, distance_limit=5.0):
    """
    Finds the pixels in the image that are within the 'distance_limit' of the 'trace' points. For those pixels the
    relative coordinates to the closest trace point are calculated.
    Args:
        trace ([N, 2] array): Initial trace of the DNA strand
        distance_limit (float): Maximum distance a pixel can have from the trace to be taken into account
    Returns:
        pixels: Array with row/column coordinates of the pixels within the distance limit from the trace
        trace_id: Int relating each pixel from the 'pixels' array to the point in the 'trace' it is closest to
        relative_coords ([N, 2] array): Relative x and y distances of all pixels from the closest point of the trace
        heights ([N, ] array): Height of the image at the position of the pixel
    """
    min_r, min_c = np.floor(trace.min(axis=0) - distance_limit).astype(int).clip(min=0)
    max_r, max_c = np.ceil(trace.max(axis=0) + distance_limit).astype(int).clip(max=mol_filtered.shape)
    pixels_pos = np.mgrid[min_r:max_r, min_c:max_c].reshape([2, -1]).T  # all potential pixels
    # kdTree finds the nearest neighbour between a specific pixel and all trace points.
    # Returns distances between pixels and nn and the id of the nn. Distances are inf if bigger than distance_limit.
    kdtree = cKDTree(trace)
    distances, trace_id = kdtree.query(pixels_pos, k=1, distance_upper_bound=distance_limit)
    pixels = pixels_pos[distances != np.inf]
    trace_id = trace_id[distances != np.inf]
    rel_coords = pixels - trace[trace_id]
    return rel_coords, pixels, trace_id
Its execution is illustrated in Image B
My question:
Now, when there is a sharp turn in my coordinate trace, I get comparatively many white arrows pointing to a specific trace point from more or less one direction. My goal is to quantify how many more white arrows point from one side of the trace (in the direction normal to the trace) than from the other side. This quantification doesn't have to be exact; I just want to add a suitable weight into the mix.
How can I achieve this quantification?
I don't understand what you need to quantify exactly.
For example, how do you define, in this image, whether pixel (8, 7) is normal to segment AB or to BC?
I mean, cKDTree works point to point, and you want the neighbouring points to be aligned to the grid (but they could be anywhere else).
How did you define the pixel-to-line relationship?
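To make the intended quantification more concrete, here is a minimal sketch (purely illustrative; it assumes that "side" is given by the sign of the 2D cross product between the local trace tangent and rel_coords, and it reuses the names from the function above):
import numpy as np
# sketch: classify each pixel as lying on one side of the trace or the other
tangents = np.gradient(trace, axis=0)        # rough local tangent at every trace point
t = tangents[trace_id]                       # tangent of the nearest trace point for each pixel
cross = t[:, 0] * rel_coords[:, 1] - t[:, 1] * rel_coords[:, 0]
side_a = np.sum(cross > 0)                   # pixels on one side of the trace
side_b = np.sum(cross < 0)                   # pixels on the other side
weight = (side_a - side_b) / max(side_a + side_b, 1)   # signed imbalance, usable as a weight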

How to recalculate the coordinates of a point after scaling and rotation?

I have the coordinates of 6 points in an image
(170.01954650878906, 216.98866271972656)
(201.3812255859375, 109.42137145996094)
(115.70114135742188, 210.4272918701172)
(45.42426300048828, 97.89037322998047)
(167.0367889404297, 208.9329833984375)
(70.13690185546875, 140.90538024902344)
I have a point as center: [89.2458, 121.0896]. I am trying to recalculate the position of the points in Python using 4 rotation angles (0, 90, -90, 180 degrees) and 7 scaling factors (0.5, 0.75, 1, 1.10, 1.25, 1.35, 1.5).
My question is: how can I rotate and scale the above-mentioned points relative to the center point and get the new coordinates of those 6 points?
Your help is really appreciated.
Mathematics
A mathematical approach would be to represent this data as vectors from the center to the image-points, translate these vectors to the origin, apply the transformation and relocate them around the center point. Let's look at how this works in detail.
Representation as vectors
We can show these vectors in a grid; this produces the following image:
This image provides a nice way to look at these points, so we can see our actions happening in a visual way. The center point is marked with a dot at the beginning of all the arrows, and the end of each arrow is the location of one of the points supplied in the question.
A vector can be seen as a list of the values of the coordinates of the point so
my_vector = [point[0], point[1]]
could be a representation of a vector in Python; it just holds the coordinates of a point, so the format in the question can be used as is. Notice that I will use position 0 for the x-coordinate and 1 for the y-coordinate throughout my answer.
I have only added this representation as a visual aid; we can look at any pair of coordinates as a vector. No calculation is needed, it is only a different way of looking at those points.
Translation to origin
The first calculations happen here. We need to translate all these vectors to the origin. We can very easily do this by subtracting the location of the center point from all the other points, for example (can be done in a simple loop):
point_origin_x = point[0] - center_point[0] # Xvalue point - Xvalue center
point_origin_y = point[1] - center_point[1] # Yvalue point - Yvalue center
The resulting points can now be rotated around the origin and scaled with respect to the origin. The new points (as vectors) look like this:
In this image, I deliberately left the scale untouched, so that it is clear that these are exactly the same vectors (arrows), in size and orientation, only shifted to be around (0, 0).
Why the origin
So why translate these points to the origin? Well, rotations and scaling actions are easy to do (mathematically) around the origin and not as easy around other points.
Also, from now on, I will only include the 1st, 2nd and 4th point in these images to save some space.
Scaling around the origin
A scaling operation is very easy around the origin. Just multiply the coordinates of the point with the factor of the scaling:
scaled_point_x = point[0] * scaling_factor
scaled_point_y = point[1] * scaling_factor
In a visual way, that looks like this (scaling all by 1.5):
Where the blue arrows are the original vectors and the red ones are the scaled vectors.
Rotating
Now for rotating. This is a little bit harder, because a rotation is most generally described by a matrix multiplication with this vector.
The matrix to multiply with is the standard rotation matrix (from Wikipedia: Rotation matrix):
R(t) = [[cos(t), -sin(t)],
        [sin(t),  cos(t)]]
So if V is the vector, then we need to perform V_r = R(t) * V to get the rotated vector V_r. This rotation will always be counterclockwise! In order to rotate clockwise, we simply use R(-t).
Because only multiples of 90° are needed in the question, the matrix becomes almost trivial. For a rotation of 90° counterclockwise it is R(90°) = [[0, -1], [1, 0]], which is basically, in code:
rotated_point_x = -point[1] # new x is negative of old y
rotated_point_y = point[0] # new y is old x
Again, this can be nicely shown in a visual way:
Where I have matched the colors of the vectors.
A rotation of 90° clockwise will then be:
rotated_clockwise_point_x = point[1]   # new x is old y
rotated_clockwise_point_y = -point[0]  # new y is negative of old x
A rotation of 180° is just taking the negative of both coordinates; or you could just scale by a factor of -1, which is essentially the same.
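In the same style as the snippets above, that would be:
rotated_180_point_x = -point[0] # new x is negative of old x
rotated_180_point_y = -point[1] # new y is negative of old y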
As a last point about these operations, note that you can scale and/or rotate as many times as you want in a sequence to get the desired result.
Translating back to the center point
After the scaling actions and/or rotations, the only thing left is to translate the vectors back to the center point:
retranslated_point_x = new_point[0] + center_point_x
retranslated_point_y = new_point[1] + center_point_y
And all is done.
Just a recap
So to recap this long post:
1. Subtract the coordinates of the center point from the coordinates of the image point.
2. Scale by a factor with a simple multiplication of the coordinates.
3. Use the idea of the matrix multiplication to think about the rotation (you can easily find these things on Google or Wikipedia).
4. Add the coordinates of the center point to the new coordinates of the image point.
I realize now that I could have just given this recap, but this way the post contains at least some visual aid and a bit of mathematical background, which is also nice. I really believe that such problems should be looked at from a mathematical angle; the mathematical description can help a lot.
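To tie the recap together, here is a minimal sketch of the whole pipeline in code (an illustration of the steps above, not the only way to write it):
import numpy as np

def transform_point(point, center, scale=1.0, angle_deg=0.0):
    # 1. translate the vector to the origin
    v = np.asarray(point, dtype=float) - np.asarray(center, dtype=float)
    # 2. scale about the origin
    v = scale * v
    # 3. rotate about the origin with the counterclockwise rotation matrix R(t)
    t = np.radians(angle_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    v = R @ v
    # 4. translate back to the center point
    return v + np.asarray(center, dtype=float)

# example: the first point from the question, scaled by 1.5 and rotated 90° counterclockwise
print(transform_point((170.01954650878906, 216.98866271972656), (89.2458, 121.0896), 1.5, 90))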

Numpy griddata interpolation up to certain radius

I'm using griddata() to interpolate my (irregular) 2-dimensional depth measurements: x, y, depth. The method does a great job, but it interpolates over the entire grid wherever it can find two opposing points. I don't want that behaviour. I'd like the interpolation to stay around the existing measurements, say up to a certain radius.
Is it possible to tell numpy/scipy: don't interpolate if you're too far from an existing measurement? Resulting in a NODATA-value? ideal = griddata(.., .., .., radius=5.0)
edit example:
In the image below, the black dots are the measurements. Shades of blue are the cells interpolated by numpy. The area marked in green is in fact part of the picture but is considered NODATA by numpy (because there are no points in between). Now, the red areas are interpolated, but I want to get rid of them. Any ideas?
OK, cool. I don't think there is a built-in option for griddata() that does what you want, so you will need to write it yourself.
This comes down to calculating the distances between N input data points and M interpolation points. That is simple enough to do, but if you have a lot of points it can be slow, at ~O(M*N). Here's an example that, for each interpolation point, calculates the distances to all N data points. If the number of data points within the radius is at least neighbors, it keeps the value; otherwise it writes NODATA.
neighbors is 4 because griddata() uses bilinear interpolation, which needs points bounding the interpolant in each dimension (2*2 = 4).
import numpy as np

# invec - input points, Nx2 numpy array
# mvec - interpolation points, Mx2 numpy array
# just some random points for the example
N = 100
invec = 10*np.random.random([N, 2])
M = 50
mvec = 10*np.random.random([M, 2])
# --- here you would put your griddata() call, returning interpolated_values
interpolated_values = np.zeros(M)
NODATA = np.nan
radius = 5.0
neighbors = 4
for m in range(M):
    data_in_radius = np.sqrt(np.sum((invec - mvec[m])**2, axis=1)) <= radius
    if np.sum(data_in_radius) < neighbors:
        interpolated_values[m] = NODATA
Edit:
OK, I re-read the question and noticed the input really is 2D, so I have modified the example.
Just as an additional comment, this could be greatly accelerated if you first build a coarse mapping from each interpolation point mvec[m] to a subset of the relevant data points.
The costliest step in the loop would change from
np.sqrt(np.sum( (invec - mvec[m])**2, axis=1))
to something like
np.sqrt(np.sum( (invec[subset[m]] - mvec[m])**2, axis=1))
There are plenty of ways to do this, for example using a quadtree, a hashing function, or a 2D index. Whether this gives a performance advantage depends on the application, how your data is structured, etc.
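For example, here is a minimal sketch of such a coarse mapping using scipy's cKDTree as the 2D index (an assumption; any of the structures above would do), reusing invec, mvec, radius and neighbors from the snippet above:
from scipy.spatial import cKDTree

tree = cKDTree(invec)                            # build the spatial index once
subset = tree.query_ball_point(mvec, r=radius)   # subset[m]: indices of data points within radius of mvec[m]
for m in range(len(mvec)):
    if len(subset[m]) < neighbors:
        interpolated_values[m] = NODATA
    # otherwise, distances only need to be computed against invec[subset[m]]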

Straighten B-Spline

I've interpolated a spline to fit pixel data from an image with a curve that I would like to straighten. I'm not sure what tools are appropriate to solve this problem. Can someone recommend an approach?
Here's how I'm getting my spline:
import numpy as np
from skimage import io
from scipy import interpolate
import matplotlib.pyplot as plt
from sklearn.neighbors import NearestNeighbors
import networkx as nx
# Read a skeletonized image, return an array of points on the skeleton, and divide them into x and y coordinates
skeleton = io.imread('skeleton.png')
curvepoints = np.where(skeleton==False)
xpoints = curvepoints[1]
ypoints = -curvepoints[0]
# reformats x and y coordinates into a 2-dimensional array
inputarray = np.c_[xpoints, ypoints]
# runs a nearest neighbors algorithm on the coordinate array
clf = NearestNeighbors(2).fit(inputarray)
G = clf.kneighbors_graph()
T = nx.from_scipy_sparse_matrix(G)
# sorts coordinates according to their nearest neighbors order
order = list(nx.dfs_preorder_nodes(T, 0))
xx = xpoints[order]
yy = ypoints[order]
# Loops over all points in the coordinate array as origin, determining which results in the shortest path
paths = [list(nx.dfs_preorder_nodes(T, i)) for i in range(len(inputarray))]
mindist = np.inf
minidx = 0
for i in range(len(inputarray)):
    p = paths[i]  # order of nodes
    ordered = inputarray[p]  # ordered nodes
    # find cost of that order by the sum of euclidean distances between points (i) and (i+1)
    cost = (((ordered[:-1] - ordered[1:])**2).sum(1)).sum()
    if cost < mindist:
        mindist = cost
        minidx = i
opt_order = paths[minidx]
xxx = xpoints[opt_order]
yyy = ypoints[opt_order]
# fits a spline to the ordered coordinates
tckp, u = interpolate.splprep([xxx, yyy], s=3, k=2, nest=-1)
xpointsnew, ypointsnew = interpolate.splev(np.linspace(0,1,270), tckp)
# prints spline variables
print(tckp)
# plots the spline
plt.plot(xpointsnew, ypointsnew, 'r-')
plt.show()
My broader project is to follow the approach outlined in A novel method for straightening curved text-lines in stylistic documents. That article is reasonably detailed about finding the line that describes curved text, but much less so where straightening the curve is concerned. I have trouble visualizing it; the only reference to straightening I can see is in the abstract:
find the angle between the normal at a point on the curve and the vertical line, and finally visit each point on the text and rotate by their corresponding angles.
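To make the quoted idea concrete for myself, here is a rough sketch of how those angles might be obtained from the spline fitted above (splev with der=1 gives the tangent; whether this matches the paper's intent is my assumption):
# rough sketch: angle between the curve normal and the vertical at each sampled spline point
u = np.linspace(0, 1, 270)
dx, dy = interpolate.splev(u, tckp, der=1)   # tangent components along the spline
tangent_angle = np.arctan2(dy, dx)           # angle of the tangent to the horizontal
# the angle between the normal and the vertical equals the angle between the tangent and the horizontal,
# so each pixel near the curve would be rotated by -tangent_angle of its closest spline point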
I also found Geometric warp of image in python, which seems promising. If I could rectify the spline, I think that would allow me to set a range of target points for the affine transform to map to. Unfortunately, I haven't found an approach to rectify my spline and test it.
Finally, this program implements an algorithm to straighten splines, but the paper on the algorithm is behind a paywall and I can't make sense of the JavaScript.
Basically, I'm lost and in need of pointers.
Update
The affine transformation was the only approach I had any idea how to start exploring, so I've been working on that since I posted. I generated a set of destination coordinates by performing an approximate rectification of the curve based on the euclidean distance between points on my b-spline.
From where the last code block left off:
# calculate euclidean distances between adjacent points on the curve
newcoordinates = np.c_[xpointsnew, ypointsnew]
l = len(newcoordinates) - 1
pointsteps = []
for index, obj in enumerate(newcoordinates):
    if index < l:
        ord1 = np.c_[newcoordinates[index][0], newcoordinates[index][1]]
        ord2 = np.c_[newcoordinates[index + 1][0], newcoordinates[index + 1][1]]
        length = spatial.distance.cdist(ord1, ord2)
        pointsteps.append(length)
# calculate euclidean distance between the first point and each consecutive point
xpositions = np.asarray(pointsteps).cumsum()
# compose target coordinates for the line after the transform
targetcoordinates = [(0, 0), ]
for element in xpositions:
    targetcoordinates.append((element, 0))
# perform affine transformation with newcoordinates as control points and targetcoordinates as target coordinates
tform = PiecewiseAffineTransform()
tform.estimate(newcoordinates, targetcoordinates)
I'm presently hung up on errors with the affine transform (scipy.spatial.qhull.QhullError: QH6154 Qhull precision error: Initial simplex is flat (facet 1 is coplanar with the interior point)), but I'm not sure whether it's because of a problem with how I'm feeding the data in, or because I'm abusing the transform to do my projection.
I got the same error as you when using scipy.spatial.ConvexHull.
First, let me explain my project: what I wanted to do is to segment a person from the background (image matting). In my code, I first read an image and a trimap, then, according to the trimap, I segment the original image into foreground, background and unknown pixels. Here is part of the code:
img = scipy.misc.imread('sweater_black.png') #color_image
trimap = scipy.misc.imread('sw_trimap.png', flatten='True') #trimap
bg = trimap == 0 #background
fg = trimap == 255 #foreground
unknown = True ^ np.logical_or(fg,bg) #unknown pixels
fg_px = img[fg] #here i got the rgb value of the foreground pixels,then send them to the ConvexHull
fg_hull = scipy.spatial.ConvexHull(fg_px)
But I got an error here. So I checked the array fg_px and found that it is n*4, which means every point I send to ConvexHull has four values; however, the input to ConvexHull here should be 3-dimensional.
I traced the error to the input colour image being 32-bit (RGB channels plus an alpha channel). After converting the image to 24-bit (RGB channels only), the code works.
In one sentence: the input to ConvexHull should be n*3 here, not n*4, so check your input data! Hope this works for you~
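A minimal sketch of that fix, assuming the last axis of the image array holds RGBA channels:
fg_px = img[fg]              # shape (n, 4) while the image still has an alpha channel
if fg_px.shape[1] == 4:
    fg_px = fg_px[:, :3]     # keep only the RGB channels
fg_hull = scipy.spatial.ConvexHull(fg_px)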

How to get start and end coordinates (x, y) of major axis of a rotating ellipse in opencv?

I am performing motion tracking of an object, and I am trying to identify the front and back of the object. The object is asymmetrical, which means that the centroid of the contour is closer to the front than the back. Using this information, I am approaching this as follows:
Draw contours of object
Find centroid
centroidx, centroidy = int(moments['m10']/moments['m00']), int(moments['m01']/moments['m00'])
Draw bounding ellipse
cv2.fitEllipse(contour)
Calculate major axis length as follows (and as shown in the figure)
MAx, MAy = int(0.5 * ellipseMajorAxisx*math.sin(ellipseAngle)), int(0.5 * ellipseMajorAxisy*math.cos(ellipseAngle))
Calculate beginning and ending x, y coordinates of the major axis
MAxtop, MAytop = int(ellipseCentrex + MAx), int(ellipseCentrey + MAy)
MAxbot, MAybot = int(ellipseCentrex - MAx), int(ellipseCentrey - MAy)
Identify which of the points is closer to the centroid of the contour
distancetop = math.sqrt((centroidx - MAxtop)**2 + (centroidy - MAytop)**2)
distancebot = math.sqrt((centroidx - MAxbot)**2 + (centroidy - MAybot)**2)
min(distancetop, distancebot)
The problem I am encountering is that, while I get the "front" end of the ellipse correct most of the time, occasionally the point is a little way off. As far as I have observed, this seems to happen in such a way that the x value is correct but the y value is different (in effect, I think this represents the major axis of an ellipse that is perpendicular to mine). I am not sure if this is an issue with opencv's calculation of angles or (more than likely) my calculations are incorrect. I do realize this is a complicated example; I hope my figures help!
EDIT: When I get the wrong point, it is not from a perpendicular ellipse, but of a mirror image of my ellipse. And it happens with the x values too, not just y.
After following ssm's suggestion below, I am getting the desired point most of the time. The point still goes wrong occasionally, but "snaps back" into place soon after. For example, this is a few frames when this happens:
By the way, the above images are after "correcting" for angle by using this code:
if angle > 90:
    angle = 180 - angle
If I do not do the correction, I get the wrong point at other times, as shown below for the same frames.
So it looks like I get it right for some angles with angle correction and the other angles without correction. How do I get all the right points in both conditions?
(White dot inside the ellipse is the centroid of the contour, whereas the dot on or outside the ellipse is the point I am getting)
I think your only problem is MAytop. You can consider doing the following:
if ycen < yc:
    # switch MAytop and MAybot
    temp = MAytop
    MAytop = MAybot
    MAybot = temp
You may have to do a similar check on the x scale.
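A similar check on the x coordinates would look like this (the variable names mirror the snippet above and are assumptions):
if xcen < xc:
    # switch MAxtop and MAxbot
    MAxtop, MAxbot = MAxbot, MAxtop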
