I need to rotate a polygon (which is provided to me as a KML file) on a map, so that its shape remains the same.
Here is the function I use on each coordinate in turn to perform the rotation:
import numpy as np

def rotate(points, origin, angle):
    # Convert degrees to radians:
    rotation_angle = np.deg2rad(angle)
    # Build the 2D rotation matrix:
    R = np.array([[np.cos(rotation_angle), -np.sin(rotation_angle)],
                  [np.sin(rotation_angle),  np.cos(rotation_angle)]])
    # Convert the inputs to at least 2-dimensional arrays:
    o = np.atleast_2d(origin)
    p = np.atleast_2d(points)
    # Rotate about the origin and return the transformed points:
    return np.squeeze((R @ (p.T - o.T) + o.T).T)
When I apply this to every coordinate of a rectangle and rotate by 90 degrees anti-clockwise, I get a parallelogram. I suspect this is because a degree of longitude and a degree of latitude do not correspond to the same distance on the ground.
Here is an example of what I am seeing:
The green rectangle is the original shape, the red parallelogram is the rotated shape, and the origin is shown for ease of understanding.
I have spent too long trying to make this work and there is obviously a simple way to achieve this which both I and trusty Google are missing!
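The only workaround I can think of is to scale the longitudes by the cosine of the origin's latitude before rotating (so that a degree east and a degree north cover roughly the same distance) and undo the scaling afterwards, along the lines of the sketch below, but I don't know whether this is the proper way:
import numpy as np

def rotate_on_map(points, origin, angle):
    # Sketch only: approximate, assumes the shape is small compared to the Earth.
    p = np.atleast_2d(points).astype(float)   # columns: lon, lat (degrees)
    lon0, lat0 = float(origin[0]), float(origin[1])
    k = np.cos(np.deg2rad(lat0))              # a degree of longitude is shorter by this factor

    # Move to a locally isotropic frame centred on the origin:
    x = (p[:, 0] - lon0) * k
    y = p[:, 1] - lat0

    # Ordinary 2D rotation:
    t = np.deg2rad(angle)
    xr = x * np.cos(t) - y * np.sin(t)
    yr = x * np.sin(t) + y * np.cos(t)

    # Back to lon/lat:
    return np.squeeze(np.column_stack([lon0 + xr / k, lat0 + yr]))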
I have two 2D images: one is the source image and the other is a target image. I need to rotate the source image to match the target image using Python (scikit & numpy). I have 3 landmark coordinates for each image, as follows:
image1_points = [(12,16),(7,4),(25,20)]
image2_points = [(15,22),(1,22),(25,10)]
I believe the following steps are what's needed:
Create a rotation matrix using a least-squares approach on the 3 landmark coordinates
Use the rotation matrix to get theta
Convert theta to degrees (for the angle)
Use the apply_angle method with the angle to rotate the image
I've been trying to use these points and the least squares approach to compute a linear transformation matrix that transforms points from the source to the target image.
I know I need to create a rotation matrix, but having never taken linear algebra I'm a bit lost. I've done lots of reading and tried using scipy's built-in procrustes to do an affine transformation (below), which may be all wrong.
from math import atan, cos, radians
from numpy.linalg import norm
import numpy as np
import scipy.spatial

# target_points / source_points correspond to the landmark lists above;
# apply_angle and source_img come from the rest of my code.
m1, m2, d = scipy.spatial.procrustes(target_points, source_points)
a = np.dot(m1.T, m2, out=None) / norm(m1)**2
#separate x and y for the sake of convenience
ref_x = m2[::2]
ref_y = m2[1::2]
x = m1[::2]
y = m1[1::2]
b = np.sum(x*ref_y - ref_x*y) / norm(m1)**2
scale = np.sqrt(a**2+b**2)
theta = atan(b / max(a.all(), 10**-10)) #avoid dividing by 0
degrees = cos(radians(theta))
apply_angle(source_img, degrees)
However, this is not giving me the result I would expect: it gives an angle of around 1 degree, where I would expect around 72 degrees. My assumption is that this angle is what should be passed as the angle parameter to rotate the image.
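For reference, my current understanding of step 1 is that the rotation can be estimated from the two centred landmark sets with a least-squares fit (the Kabsch method); below is a sketch of what I think that looks like in plain numpy, though I may well have misunderstood it:
import numpy as np

image1_points = np.array([(12, 16), (7, 4), (25, 20)], dtype=float)   # source landmarks
image2_points = np.array([(15, 22), (1, 22), (25, 10)], dtype=float)  # target landmarks

# Centre both point sets so only rotation (and scale) remains:
src = image1_points - image1_points.mean(axis=0)
dst = image2_points - image2_points.mean(axis=0)

# Least-squares rotation from the SVD of the cross-covariance (Kabsch method):
H = src.T @ dst
U, S, Vt = np.linalg.svd(H)
d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against a reflection
R = Vt.T @ np.diag([1.0, d]) @ U.T

theta = np.degrees(np.arctan2(R[1, 0], R[0, 0]))  # rotation angle in degrees
# apply_angle(source_img, theta)                  # apply_angle / source_img are my own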
Any help would be hugely appreciated. Thank you!
I am working in a discrete 2D grid of points in which there are "shapes" that I would like to create points outside of. I have been able to identify the vertices of these shapes and take their convex hulls. So far, this gives the result shown below and all is well. The purple here is the shape in question and the red line is the convex contour I have computed.
What I would like to do now is create two neighborhoods of points outside this shape. The first one is a set of points directly outside (as close as the grid size will allow), the second is another set of points but offset some distance away (the distance is not fixed, but rather an input).
I have attempted to write this in Python and get okay results. Here is an example of my current output. The problem is that the offsets are not perfect; for example, look at the bottom-most point in the image I attached. It kinks downwards, whereas the original shape does not. It's not too bad in this example, but in other cases where the shape is smaller, or if I take a smaller offset, it gets worse. I also have an issue where the offsets sometimes overlap, even if they are supposed to be some distance apart. I would also like there to be one line in each section of the contour, not two lines (for example in the top left).
My current attempt uses the Shapely package to handle most of the computational geometry. An outline of what I do, once I have found the vertices of the convex contour, is to offset these vertices by some amount and interpolate along each pair of vertices to obtain many points along these lines. Afterwards I use a coordinate transform to snap all points to the nearest grid point. This is how I obtain my final set of points. Below is the actual code I have written.
How can I improve this so I don't run into the issues I described?
Function #1 - Computes the offset points
import numpy as np
import shapely.geometry as sg
from shapely.geometry import LinearRing

def OutsidePoints(vertices, dist):
    poly_line = LinearRing(vertices)
    poly_line_offset = poly_line.buffer(dist, resolution=1, join_style=2, mitre_limit=1).exterior
    new_vertices = list(poly_line_offset.coords)
    new_vertices = np.asarray(new_vertices)
    shape = sg.Polygon(new_vertices)
    points = []
    # step_size is defined elsewhere in my code and controls how densely the ring is sampled
    for t in np.arange(0, shape.length, step_size):
        temp_points = np.transpose(shape.exterior.interpolate(t).xy)
        points.append(temp_points[0])
    points = np.array(points)
    points = np.unique(points, axis=0)
    return points
Function #2 - Transforming these points into points that are on my grid
import math

def IndexFinder(points):
    # invCoordinateTransform is my own routine that maps x/y coordinates to (fractional) grid indices
    index_points = invCoordinateTransform(points)
    for i in range(len(index_points)):
        for j in range(2):
            index_points[i][j] = math.floor(index_points[i][j])
    index_points = np.unique(index_points, axis=0)
    return index_points
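For completeness, this is roughly how the two functions are chained (hull_vertices, step_size and invCoordinateTransform come from the rest of my code; the offset distances here are just placeholder values):
inner_points = IndexFinder(OutsidePoints(hull_vertices, dist=1.0))  # points directly outside the shape
outer_points = IndexFinder(OutsidePoints(hull_vertices, dist=5.0))  # points offset further away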
Many thanks!
I am asking this question as a trimmed version of my previous question. I now have a face looking at some position on the screen, as well as the gaze coordinates (pitch and yaw) of both eyes. Let us say
Left_Eye = [-0.06222888 -0.06577308]
Right_Eye = [-0.04176027 -0.44416167]
I want to identify the screen coordinates where the person is probably looking. Is this possible? Please help!
What you need is:
3D position and direction for each eye
You claim you have this, but pitch and yaw are just Euler angles, and you also need some reference frame and order of transforms to convert them back into a 3D vector. It's better to leave the direction in vector form (which I suspect you had in the first place). Along with the direction, you need the position in 3D in the same coordinate system too...
3D definition of your projection plane
So you need at least a start position and 2 basis vectors defining your planar rectangle. It is much better to use a 4x4 homogeneous transform matrix for this, because that allows very easy transforms into and out of its local coordinate system...
So I see it like this:
So now it's just a matter of finding the intersection between the rays and the plane:
P(s) = R0 + s*R
P(t) = L0 + t*L
P(u,v) = P0 + u*U +v*V
Solving this system gives u,v, which is the 2D coordinate inside your plane that you are looking at. Of course, because of inaccuracies this will not be solvable algebraically, so it is better to convert the rays into plane-local coordinates, compute the point on each ray with w = 0.0 (a simple linear equation with a single unknown), and average the results for the left and right eyes (in case they do not align perfectly).
So if R0', R', L0', L' are the converted values in UVW local coordinates, then:
R0z' + s*Rz' = 0.0
s = -R0z'/Rz'
// so...
R1 = R0' - R'*R0z'/Rz'
L1 = L0' - L'*L0z'/Lz'
P = 0.5 * (R1 + L1)
Where P is the point you are looking at in the UVW coordinates...
The conversion is easy: depending on your notation, you multiply either the inverse or the direct matrix representing the plane by (R,0), (L,0), (R0,1), (L0,1). The fourth coordinate just tells whether you are transforming a vector (0) or a point (1).
Without knowing more about your coordinate systems, data accuracy, and what knowns and unknowns you have, it is hard to be more specific than this.
If your plane is the camera projection plane, then U,V are the x and y axes of the image taken from the camera and W is normal to it (its direction is just a matter of notation).
As you are using camera input, which uses a perspective projection, I hope your positions and vectors are corrected for it.
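To make this more concrete, here is a small numpy sketch of the plane-local approach (the plane matrix, eye positions and gaze directions at the bottom are made-up placeholders; use your own values and coordinate system):
import numpy as np

def gaze_point_on_plane(R0, R, L0, L, plane):
    # plane is a 4x4 homogeneous matrix whose columns are (U, V, W, P0);
    # it maps plane-local (u, v, w, 1) coordinates into world coordinates.
    plane_inv = np.linalg.inv(plane)              # world -> plane-local transform

    def to_local(vec, w):                         # w = 1.0 for points, 0.0 for directions
        return (plane_inv @ np.append(vec, w))[:3]

    R0l, Rl = to_local(R0, 1.0), to_local(R, 0.0)
    L0l, Ll = to_local(L0, 1.0), to_local(L, 0.0)

    R1 = R0l - Rl * (R0l[2] / Rl[2])              # point on the right-eye ray with w = 0
    L1 = L0l - Ll * (L0l[2] / Ll[2])              # point on the left-eye ray with w = 0
    P = 0.5 * (R1 + L1)                           # average the two intersections
    return P[:2]                                  # (u, v) inside the plane

# Made-up example: a screen plane at z = 2 with U = +x, V = +y, W = +z
U, V, W, P0 = np.array([1., 0, 0]), np.array([0, 1., 0]), np.array([0, 0, 1.]), np.array([0, 0, 2.])
plane = np.column_stack([np.append(U, 0.), np.append(V, 0.), np.append(W, 0.), np.append(P0, 1.)])

uv = gaze_point_on_plane(R0=np.array([0.03, 0.0, 0.0]), R=np.array([-0.1, 0.05, 1.0]),
                         L0=np.array([-0.03, 0.0, 0.0]), L=np.array([-0.1, 0.05, 1.0]))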
I have the coordinates of 6 points in an image
(170.01954650878906, 216.98866271972656)
(201.3812255859375, 109.42137145996094)
(115.70114135742188, 210.4272918701172)
(45.42426300048828, 97.89037322998047)
(167.0367889404297, 208.9329833984375)
(70.13690185546875, 140.90538024902344)
I have a point as center [89.2458, 121.0896]. I am trying to recalculate the positions of the points in Python using 4 rotation angles (0, 90, -90, 180 degrees) and a set of scaling factors (0.5, 0.75, 1, 1.10, 1.25, 1.35, 1.5).
My question is: how can I rotate and scale the points above relative to the center point and get the new coordinates of those 6 points?
Your help is really appreciated.
Mathematics
A mathematical approach would be to represent this data as vectors from the center to the image-points, translate these vectors to the origin, apply the transformation and relocate them around the center point. Let's look at how this works in detail.
Representation as vectors
We can show these vectors on a grid, which produces the following image:
This image provides a nice way to look at these points, so we can see our actions happening in a visual way. The center point is marked with a dot at the beginning of all the arrows, and the end of each arrow is the location of one of the points supplied in the question.
A vector can be seen as a list of the coordinate values of the point, so
my_vector = [point[0], point[1]]
could be a representation of a vector in Python: it just holds the coordinates of a point, so the format in the question can be used as is! Note that I will use position 0 for the x-coordinate and position 1 for the y-coordinate throughout my answer.
I have only added this representation as a visual aid; we can look at any set of two points as being a vector. No calculation is needed, this is only a different way of looking at those points.
Translation to origin
The first calculations happen here. We need to translate all these vectors to the origin. We can very easily do this by subtracting the location of the center point from all the other points, for example (can be done in a simple loop):
point_origin_x = point[0] - center_point[0] # Xvalue point - Xvalue center
point_origin_y = point[1] - center_point[1] # Yvalue point - Yvalue center
The resulting points can now be rotated around the origin and scaled with respect to the origin. The new points (as vectors) look like this:
In this image, I deliberately left the scale untouched, so that it is clear that these are exactly the same vectors (arrows), in size and orientation, only shifted to be around (0, 0).
Why the origin
So why translate these points to the origin? Well, rotations and scaling actions are easy to do (mathematically) around the origin and not as easy around other points.
Also, from now on, I will only include the 1st, 2nd and 4th point in these images to save some space.
Scaling around the origin
A scaling operation is very easy around the origin. Just multiply the coordinates of the point by the scaling factor:
scaled_point_x = point[0] * scaling_factor
scaled_point_y = point[1] * scaling_factor
In a visual way, that looks like this (scaling all by 1.5):
Where the blue arrows are the original vectors and the red ones are the scaled vectors.
Rotating
Now for rotating. This is a little bit harder, because a rotation is most generally described by a matrix multiplication applied to the vector.
The matrix to multiply with is the standard 2D rotation matrix (from Wikipedia: Rotation matrix):
R(t) = [[cos(t), -sin(t)],
        [sin(t),  cos(t)]]
So if V is the vector, then we need to perform V_r = R(t) * V to get the rotated vector V_r. This rotation will always be counterclockwise! In order to rotate clockwise, we simply need to use R(-t).
Because only multiples of 90° are needed in the question, the matrix becomes almost trivial. For a rotation of 90° counterclockwise, the matrix is:
R(90°) = [[0, -1],
          [1,  0]]
Which is basically in code:
rotated_point_x = -point[1] # new x is negative of old y
rotated_point_y = point[0] # new y is old x
Again, this can be nicely shown in a visual way:
Where I have matched the colors of the vectors.
A rotation of 90° clockwise will then be
rotated_counter_point_x = point[1] # x is old y
rotated_counter_point_y = -point[0] # y is negative of old x
A rotation of 180° is just taking the negative of both coordinates; alternatively, you could scale by a factor of -1, which is essentially the same.
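In code, mirroring the snippets above:
rotated_180_point_x = -point[0] # new x is negative of old x
rotated_180_point_y = -point[1] # new y is negative of old y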
As a final note about these operations, you can scale and/or rotate as many times as you want in sequence to get the desired result.
Translating back to the center point
After the scaling actions and/or rotations, the only thing left is to translate the vectors back to the center point.
retranslated_point_x = new_point[0] + center_point_x
retranslated_point_y = new_point[1] + center_point_y
And all is done.
Just a recap
So to recap this long post:
Subtract the coordinates of the center point from the coordinates of the image-point
Scale by a factor with a simple multiplication of the coordinates
Use the idea of the matrix multiplication to think about the rotation (you can easily find these things on Google or Wikipedia).
Add the coordinates of the center point to the new coordinates of the image-point
I realize now that I could have just given this recap, but now there is at least some visual aid and a slight mathematical background in this post, which is also nice. I really believe that such problems should be looked at from a mathematical angle; the mathematical description can help a lot.
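To put the recap together in one place, here is a small sketch of the whole pipeline, using the center point and points from the question and an example angle and scale (pick whichever values you need):
from math import cos, sin, radians

center_point = [89.2458, 121.0896]
points = [(170.01954650878906, 216.98866271972656),
          (201.3812255859375, 109.42137145996094),
          (115.70114135742188, 210.4272918701172),
          (45.42426300048828, 97.89037322998047),
          (167.0367889404297, 208.9329833984375),
          (70.13690185546875, 140.90538024902344)]

def transform(point, center, angle_degrees, scale):
    # 1. Translate so the center sits at the origin:
    x = point[0] - center[0]
    y = point[1] - center[1]
    # 2. Scale around the origin:
    x, y = x * scale, y * scale
    # 3. Rotate counterclockwise around the origin:
    t = radians(angle_degrees)
    x, y = x * cos(t) - y * sin(t), x * sin(t) + y * cos(t)
    # 4. Translate back to the center point:
    return (x + center[0], y + center[1])

new_points = [transform(p, center_point, angle_degrees=90, scale=1.25) for p in points]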
I have a coarse skymap made up of 128 points, from which I would like to make a smooth healpix map (see the attached Figure, LHS).
I load my data, then make new longitude and latitude arrays of the appropriate pixel length for the final map (with e.g. nside=32).
My input data are:
from numpy import pi
import numpy as np
import healpy as hp

lats = pi/2 + ths   # theta from 0 to pi, size 8
lons = phs          # phi from 0 to 2pi, size 16
data = sky_data[0]  # shape (8, 16)
New lon/lat array size based on number of pixels from nside:
nside = 32
pixIdx = hp.nside2npix(nside) # number of pixels I can get from this nside
pixIdx = np.arange(pixIdx) # pixel index numbers
I then find the new data values for those pixels by interpolation, and then convert back from angles to pixels.
from scipy.interpolate import RectSphereBivariateSpline

# new lon/lat
new_lats = hp.pix2ang(nside, pixIdx)[0]  # thetas at which I need interpolated values
new_lons = hp.pix2ang(nside, pixIdx)[1]  # phis, same
# interpolation
lut = RectSphereBivariateSpline(lats, lons, data, pole_values=4e-14)
data_interp = lut.ev(new_lats.ravel(), new_lons.ravel())  # interpolate the data
pix = hp.ang2pix(nside, new_lats, new_lons)  # convert latitudes and longitudes back to pixels
Then, I construct a healpy map with the interpolated values:
healpix_map = np.zeros(hp.nside2npix(nside), dtype=np.double) # create empty map
healpix_map[pix] = data_interp # assign pixels to new interpolated values
testmap = hp.mollview(healpix_map)
The result of the map is the upper RHS of the attached Figure.
(Forgive the use of jet -- viridis doesn't have a "white" zero, so using that colormap adds a blue background.)
The map doesn't look right: you can see from the coarse map in the Figure that there should be a "hotspot" on the lower RHS, but here it appears in the upper left.
As a sanity check, I used matplotlib to make a scatter plot of the interpolated points in a mollview projection, Figure 2, where I removed the edges of the markers to make it look like a map ;)
import matplotlib.pyplot as plt

ax = plt.subplot(111, projection='astro mollweide')
ax.grid()
colors = data_interp
sky = plt.scatter(new_lons, new_lats - pi/2, c=colors, edgecolors='none', cmap='jet')
plt.colorbar(sky, orientation='horizontal')
You can see that this map, lower RHS of attached Figure, produces exactly what I expect! So the coordinates are ok, and I am completely confused.
Has anyone encountered this before? What can I do? I'd like to use the healpy functions on this and future maps, so just using matplotlib isn't an option.
Thanks!
I figured it out -- I had to add pi/2 to my thetas for the interpolation to work, so in the end I need to apply the following transformation for the image to render correctly:
newnew_lats = pi - new_lats
newnew_lons = pi + new_lons
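Spelled out with the variables above, the map construction then becomes, roughly:
pix = hp.ang2pix(nside, newnew_lats, newnew_lons)
healpix_map = np.zeros(hp.nside2npix(nside), dtype=np.double)
healpix_map[pix] = data_interp
testmap = hp.mollview(healpix_map)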
There still seems to be a bit of an issue with the interpolation, although the seam is not so visible now. I may try a different interpolation to compare.
I'm no expert in healpix (actually I've never used it before - I'm a particle physicist), but as far as I can tell it's just a matter of conventions: in a Mollweide projection, healpy places the north pole (positive latitude) at the bottom of the map, for some reason. I'm not sure why it would do that, or whether this is intentional behavior, but it seems pretty clear that's what is happening. If I mask out everything below the equator, i.e. keep only the positive-latitude points
mask = new_lats - pi/2 > 0
pix = hp.ang2pix(nside, new_lats[mask], new_lons[mask])
healpix_map = np.zeros(hp.nside2npix(nside), dtype=np.double)
healpix_map[pix] = data_interp[mask]
testmap = hp.mollview(healpix_map)
it comes up with a plot with no data above the center line:
At least it's easy enough to fix. mollview admits a rot parameter that will effectively rotate the sphere around the viewing axis before projecting it, and a flip parameter which can be set to 'astro' (default) or 'geo' to set whether east is shown at the left or right. A little experimentation shows that you get the coordinate system you want with
hp.mollview(healpix_map, rot=(180, 0, 180), flip='geo')
In the tuple, the first two elements are longitude and latitude of the point to set in the center of the plot, and the third element is the rotation. All are in degrees. With no mask it gives this:
which I believe is just what you're looking for.