I am trying to perform a simple task using simple math in Python, and I suspect that the inherent error in converting from radians to degrees is the result of floating point math (as garnered from another question on the topic; please don't mark this as a duplicate, it's not).
I am trying to extend a line by 500 m. To do this I take the endpoint coordinates of a supplied line and use the existing heading of that line to generate the coordinates of the point that is 500 m further along the same heading.
Heading is important in this case as it is the source of my error. Or so I suspect.
I use the following function to calculate the interior angle of my right angle triangle, built using the existing line, or in this case my hypotenuse:
import math

def intangle(xypoints):
    angle = []
    for i in xypoints:
        x1 = i[0][0]
        x2 = i[1][0]
        y1 = i[0][1]
        y2 = i[1][1]
        gradient = (x1 - x2)/(y1 - y2)
        radangle = math.atan(gradient)
        angle.append(math.degrees(radangle))
    return angle
My input points are, for example:
(22732.23679147904, 6284399.7935522054)
(20848.591367954294, 6281677.926560438)
I know going into this that my angle is 35°, as these coordinates are programmatically generated by a separate function, and when plotted they are out by around 3.75" for each km. That is another error resulting from converting radians to degrees, but one acceptable in its scope.
The error generated by the above function, however, results in an angle that plots my new endpoint in such a place that the line is no longer perfectly straight when I connect the dots, and I absolutely have to have a straight line.
How can I do this differently to account for the floating point error? Is it even possible? If not, what would be an acceptable method of extending my line by however many metres using Euclidean geometry?
To add to this, I have already done all relevant geographic conversions and I am 100% sure that I am working on a 2D plane so the ellipsoid and such do not play a role in this at all.
Using angles is unnecessary, and there are problems in the way you do it: atan will only give you angles between -pi/2 and pi/2, and you will get the same value for opposite directions.
You should rather use the intercept theorem (Thales):
import math
a = (22732.23679147904, 6284399.7935522054)
b = (20848.591367954294, 6281677.926560438)
def extend_line(a, b, length):
    """
    Returns the coordinates of point C at length beyond B in the direction of A->B"""
    ab = math.sqrt((a[0]-b[0])**2 + (a[1]-b[1])**2)
    coeff = (ab + length)/ab
    return (a[0] + coeff*(b[0]-a[0]), a[1] + coeff*(b[1]-a[1]))
print(extend_line(a, b, 500))
# (20564.06031560228, 6281266.7792872535)
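If you want to convince yourself that the extended point stays on the original line, a quick check (continuing from the snippet above) is the 2D cross product of the two direction vectors, which should be tiny compared with the coordinate magnitudes when the three points are collinear:

c = extend_line(a, b, 500)
# Cross product of AB and AC; a value near zero (relative to the magnitudes
# involved) means A, B and C are collinear up to floating point noise.
cross = (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
print(cross)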
I am asking this question as a trimmed version of my previous question. I now have a face looking at some position on screen and also the gaze coordinates (pitch and yaw) of both eyes. Let us say
Left_Eye = [-0.06222888 -0.06577308]
Right_Eye = [-0.04176027 -0.44416167]
I want to identify the screen coordinates the person is probably looking at. Is this possible? Please help!
What you need is:
3D position and direction for each eye
you claim you have it, but pitch and yaw are just Euler angles and you also need some reference frame and order of transforms to convert them back into a 3D vector. It's better to leave the direction in vector form (which I suspect you got in the first place). Along with the direction you need the position in 3D in the same coordinate system too...
3D definition of your projection plane
so you need at least a start position and 2 basis vectors defining your planar rectangle. Much better is to use a 4x4 homogeneous transform matrix for this, because that allows very easy transforms from and into its local coordinate system...
So I see it like this:
So now it's just a matter of finding the intersection between the rays and the plane
P(s) = R0 + s*R
P(t) = L0 + t*L
P(u,v) = P0 + u*U +v*V
Solving this system will give you u,v, which is also the 2D coordinate inside your plane that you are looking at. Of course, because of inaccuracies this will not be solvable algebraically. So it's better to convert the rays into the plane's local coordinates and just compute the point on each ray with w=0.0 (making this a simple linear equation with a single unknown), then average the position from the left eye and the one from the right eye (in case they do not align perfectly).
So if R0',R',L0',L' are the converted values in UVW local coordinates, then:
R0z' + s*Rz' = 0.0
s = -R0z'/Rz'
// so...
R1 = R0' - R'*R0z'/Rz'
L1 = L0' - L'*L0z'/Lz'
P = 0.5 * (R1 + L1)
Where P is the point you are looking at in the UVW coordinates...
The conversion is done easily: depending on your notation you either multiply the inverse or the direct matrix representing the plane by (R,0),(L,0),(R0,1),(L0,1). The fourth coordinate (0 or 1) just tells whether you are transforming a vector or a point.
Without knowing more about your coordinate systems, data accuracy, and what knowns and unknowns you have, it is hard to be more specific than this.
If your plane is the camera projection plane, then U,V are the x and y axes of the image taken from the camera and W is normal to it (the direction is just a matter of notation).
As you are using camera input, which uses a perspective projection, I hope your positions and vectors are corrected for it.
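For illustration only, here is a minimal NumPy sketch of that plane-local intersection. The function name gaze_point is hypothetical, and the plane origin P0, basis vectors U,V and eye rays (R0,R), (L0,L) are placeholders you would have to supply in a common world coordinate system:

import numpy as np

def gaze_point(P0, U, V, R0, R, L0, L):
    W = np.cross(U, V)                  # plane normal
    M = np.column_stack((U, V, W))      # local -> world basis
    Minv = np.linalg.inv(M)             # world -> local

    def hit(O, D):
        O_l = Minv @ (np.asarray(O, float) - P0)   # ray origin in plane-local coords
        D_l = Minv @ np.asarray(D, float)          # ray direction in plane-local coords
        s = -O_l[2] / D_l[2]                       # solve local w = 0
        return O_l + s * D_l                       # intersection, (u, v, ~0)

    R1 = hit(R0, R)                     # right-eye intersection
    L1 = hit(L0, L)                     # left-eye intersection
    P = 0.5 * (R1 + L1)                 # average the two rays
    return P[:2]                        # (u, v) inside the plane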
I have two curves which meet around an origin in z and y, given below. When I plot them according to certain functions I get the attached plot.
origin_z = 260
origin_y = 244
plt.plot(phi_z+origin_z,phi_y+origin_y,'b')
plt.plot(phi_z+origin_z,phi_y+origin_y,'r')
Where phi_z and phi_y are some functions (which I have avoided posting for the sake of clarity). I want to rotate both lines 45 degrees clockwise around the specified origin, but when I try the following code, it merely shifts the curves further along each axis rather than rotating them:
phi_z_rot = origin_z + np.cos(45) * (phi_z - origin_z) - np.sin(45) * (phi_z - origin_z)
phi_y_rot = origin_y + np.cos(45) * (phi_y - origin_y) - np.sin(45) * (phi_y - origin_y)
Can anyone tell me what I'm doing wrong? Sorry for not posting more of the functions, but hopefully it isn't necessary.
Without much information there is very little I can explicitly provide. Anyhow, your rotation is wrong: first, the angles are in degrees instead of radians, and second, you use an incorrect rotation matrix.
Leaving aside the translation of the coordinates, the proper rotation is as follows:
rot = np.pi/4
phi_z_rot = phi_z*np.cos(rot)+phi_y*np.sin(rot)
phi_y_rot = -phi_z*np.sin(rot)+phi_y*np.cos(rot)
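If you also want the rotation to happen around your specified origin rather than around (0, 0), a sketch of the same matrix combined with the usual translate-rotate-translate pattern might look like this (the phi_z and phi_y values here are dummy stand-ins just so the snippet runs; substitute your own arrays):

import numpy as np

# Dummy stand-ins for the asker's curve arrays and origin
phi_z = np.linspace(0, 10, 50)
phi_y = np.sqrt(phi_z)
origin_z, origin_y = 260, 244

rot = np.pi/4                    # 45 degrees, in radians
dz = phi_z - origin_z            # shift so the rotation origin sits at (0, 0)
dy = phi_y - origin_y
phi_z_rot = origin_z + dz*np.cos(rot) + dy*np.sin(rot)   # rotate ...
phi_y_rot = origin_y - dz*np.sin(rot) + dy*np.cos(rot)   # ... then shift back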
I am new to Python.
I have two vectors in 3D space, and I want to know the angle between the two.
I tried:
vec1=[x1,y1,z1]
vec2=[x2,y2,z2]
angle=np.arccos(np.dot(vec1,vec2)/(np.linalg.norm(vec1)*np.linalg.norm(vec2)))
but when I change the order to vec2, vec1 I obtain the same angle, not a larger one.
I want it to give me a greater angle when the order of the vectors changes.
Use a function to help you choose which angle you want. At the beginning of your code, write:
import numpy as np

def angle(v1, v2, acute):
    # v1 is your first vector
    # v2 is your second vector
    angle = np.arccos(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
    if acute == True:
        return angle
    else:
        return 2 * np.pi - angle
Then, when you want to calculate an angle (in radians) in your program just write
angle(vec1, vec2, True)
for acute angles, and
angle(vec2, vec1, False)
for obtuse angles.
For example:
vec1 = [1, -1, 0]
vec2 = [1, 1, 0]
#I am explicitly converting from radian to degree
print(180* angle(vec1, vec2, True)/np.pi) #90 degrees
print(180* angle(vec2, vec1, False)/np.pi) #270 degrees
If you're working with 3D vectors, you can do this concisely using the toolbelt vg. It's a light layer on top of numpy.
import numpy as np
import vg
vec1 = np.array([x1, y1, z1])
vec2 = np.array([x2, y2, z2])
vg.angle(vec1, vec2)
You can also specify a viewing angle to compute the angle via projection:
vg.angle(vec1, vec2, look=vg.basis.z)
Or compute the signed angle via projection:
vg.signed_angle(vec1, vec2, look=vg.basis.z)
I created the library at my last startup, where it was motivated by uses like this: simple ideas which are verbose or opaque in NumPy.
What you are asking is impossible as the plane that contains the angle can be oriented two ways and nothing in the input data gives a clue about it.
All you can do is to compute the smallest angle between the vectors (or its complement to 360°), and swapping the vectors can't have an effect.
The dot product isn't guilty here, this is a geometric dead-end.
The dot product is commutative: it doesn't care about the order, so you'll have to use a different metric.
Since the dot product is commutative, simply reversing the order you put the variables into the function will not work.
If your objective is to find the obtuse (larger) angle rather than the acute (smaller) one, subtract the value returned by your function from 360 degrees. Since you seem to have a criterion for when you want to switch the variables around, use that same criterion to determine when to subtract your found value from 360. This will give you the value you are looking for in those cases.
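To illustrate the point above: an order-dependent angle only becomes well defined once you supply extra information, such as a reference normal n for the plane you measure in. A sketch along these lines (n is an assumption, not part of the original data, and signed_angle is a hypothetical helper):

import numpy as np

def signed_angle(v1, v2, n):
    # Angle from v1 to v2, measured counter-clockwise around the reference normal n.
    v1, v2, n = np.asarray(v1, float), np.asarray(v2, float), np.asarray(n, float)
    a = np.arctan2(np.dot(np.cross(v1, v2), n / np.linalg.norm(n)), np.dot(v1, v2))
    return a % (2 * np.pi)   # map to 0..2*pi, so swapping v1 and v2 changes the result

print(np.degrees(signed_angle([1, -1, 0], [1, 1, 0], [0, 0, 1])))  # ~90 degrees
print(np.degrees(signed_angle([1, 1, 0], [1, -1, 0], [0, 0, 1])))  # ~270 degrees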
I am performing motion tracking of an object, and I am trying to identify the front and back of the object. The object is asymmetrical, which means that the centroid of the contour is closer to the front than the back. Using this information, I am approaching this as follows:
Draw contours of object
Find centroid
centroidx, centroidy = int(moments['m10']/moments['m00']), int(moments['m01']/moments['m00'])
Draw bounding ellipse
cv2.fitEllipse(contour)
Calculate major axis length as follows (and as shown in the figure)
MAx, MAy = int(0.5 * ellipseMajorAxisx*math.sin(ellipseAngle)), int(0.5 * ellipseMajorAxisy*math.cos(ellipseAngle))
Calculate beginning and ending x, y coordinates of the major axis
MAxtop, MAytop = int(ellipseCentrex + MAx), int(ellipseCentrey + MAy)
MAxbot, MAybot = int(ellipseCentrex - MAx), int(ellipseCentrey - MAy)
Identify which of the points is closer to the centroid of the contour
distancetop = math.sqrt((centroidx - MAxtop)**2 + (centroidy - MAytop)**2)
distancebot = math.sqrt((centroidx - MAxbot)**2 + (centroidy - MAybot)**2)
min(distancetop, distancebot)
The problem I am encountering is that, while I get the "front" end of the ellipse correct most of the time, occasionally the point is a little bit off. As far as I have observed, this happens when the x value is correct but the y value is different (in effect, I think this represents the major axis of an ellipse that is perpendicular to mine). I am not sure if this is an issue with OpenCV's calculation of angles or (more than likely) my calculations are incorrect. I do realize this is a complicated example; hope my figures help!
EDIT: When I get the wrong point, it is not from a perpendicular ellipse, but of a mirror image of my ellipse. And it happens with the x values too, not just y.
After following ssm's suggestion below, I am getting the desired point most of the time. The point still goes wrong occasionally, but "snaps back" into place soon after. For example, here are a few frames where this happens:
By the way, the above images are after "correcting" for angle by using this code:
if angle > 90:
angle = 180 - angle
If I do not do the correction, I get the wrong point at other times, as shown below for the same frames.
So it looks like I get it right for some angles with angle correction and the other angles without correction. How do I get all the right points in both conditions?
(White dot inside the ellipse is the centroid of the contour, whereas the dot on or outside the ellipse is the point I am getting)
I think your only problem is MAytop. You can consider doing the following:
if ycen < yc:
    # switch MAytop and MAybot
    temp = MAytop
    MAytop = MAybot
    MAybot = temp
You may have to do a similar check on the x scale.
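If it helps, here is a hedged sketch of how one might pick the front point without the angle correction, by computing both candidate endpoints of the major axis and taking whichever is nearer the contour centroid. Note that cv2.fitEllipse reports its angle in degrees while math.sin/math.cos expect radians, so the conversion matters; the numbers below are dummy stand-ins for your fitEllipse and moments output, and the exact axis convention is an assumption:

import math

# Dummy stand-ins for the question's values (normally from cv2.fitEllipse and moments):
ellipseCentrex, ellipseCentrey = 320.0, 240.0
ellipseMajorAxis, ellipseAngle = 120.0, 30.0   # full major-axis length, angle in degrees
centroidx, centroidy = 335.0, 255.0

theta = math.radians(ellipseAngle)             # math.sin/cos expect radians
MAx = 0.5 * ellipseMajorAxis * math.sin(theta)
MAy = 0.5 * ellipseMajorAxis * math.cos(theta)

top = (ellipseCentrex + MAx, ellipseCentrey + MAy)
bot = (ellipseCentrex - MAx, ellipseCentrey - MAy)

# The "front" is whichever major-axis endpoint lies closer to the contour centroid.
front = min((top, bot), key=lambda p: math.hypot(centroidx - p[0], centroidy - p[1]))
print(front)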
I'm currently working with the turtle library of python.
I'm working on my midterm project for my coding class and my project is to draw cos, sin, and tangent curves using turtle as well as their inverse functions.
My problem is that when I'm coding inverse sin, the graph shows up way too small and is impossible to be seen by the user. I was wondering if there was a zoom function or a way to stretch the graph to make it bigger?
Here is my code for arcsin:
def drawarcsincurve(amplitude, period, horShift, verShift):
    turtle.speed(0)
    startPoint = -1
    turtle.goto(startPoint, math.asin(startPoint))
    turtle.pendown()
    for angles in range(-1,1):
        y = math.asin(angles)
        turtle.goto(angles,y)
Your main problem here, I think, is the range over which you are iterating your angles variable. The line
for angles in range(-1,1):
will execute the loop only twice, with angles == -1 and angles == 0 - i.e. it is equivalent to using
for angles in [-1,0]:
Type range(-1,1) in a Python interpreter window to see what I mean.
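For example, in Python 3 (wrapping it in list() so the values are shown):

>>> list(range(-1, 1))
[-1, 0]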
You might be getting confused over names as well. You call your loop variable angles, but it's actually representing a ratio (the sine value whose inverse you are calculating).
What you probably really want is something that iterates over the range -1 to 1 in fairly small steps. Let's choose 0.01 as our step (that's arbitrary).
I've altered your code directly rather than doing my own implementation.
I've put in a scale factor (plot_scale) which is equivalent to the zoom that I think you want in your original question.
I've left your original function arguments in, although I don't use them. I thought you might want to play with them later.
import math
import turtle

def drawarcsincurve(amplitude, period, horShift, verShift):
    plot_scale = 100  # Arbitrary value - up to you - similar to "zoom"
    turtle.speed(1)
    turtle.penup()
    startPoint = -1
    turtle.goto(plot_scale*startPoint, plot_scale*math.asin(startPoint))
    turtle.pendown()
    for angles in range(-100, 100):
        sinval = 1.0 * angles / 100  # this will run -1 to 1 in 0.01 steps
        y = math.asin(sinval)
        turtle.goto(plot_scale*sinval, plot_scale*y)
This outputs: