Python Programming: plotting points with mathematical expressions

I need to plot points on a graph using my already-coded Cartesian coordinate system. The window's geometry works like this: (0, 0) is at the top left, x increases going right, and y increases going down, so the bottom-right corner is (800, 600).
My Cartesian (0, 0) is actually at window point (400, 300), and that's where I want my graphs to be aligned.
My code for getting input, converting it to an expression, and graphing these points using small rectangular dots is:
expression = input("Enter a mathematical expression in x: ")
for x in range(0, 800):
    y = eval(expression)
    rect(x, y, 2, 2)
My problem is that the code needs to read and graph all the usual mathematical expressions properly, such as x, x**2, x**3, etc. (Python syntax, since the string goes straight to eval), but on my drawn Cartesian plane the values all come out positive because of the window's odd quadrant system created by the graphics library.
I'm not getting correct plotting when my program starts to plot out and map all these coordinates.
Could someone shed some light on what I'm supposed to do to convert these graphics coordinates to match my Cartesian plane coordinates?
NOTE: every 30 graphics units = 1 tick unit on my Cartesian plane.

If your problem is what I think it is, try the following code:
expression = input("Enter a mathematical expression in x: ")
for x in range(0, 800):
    x_val = x - 400                         # shift so the window centre is x = 0
    y_val = eval(expression, {"x": x_val})  # evaluate the expression with x bound to x_val
    y = -y_val + 300                        # flip y (window y grows downward) and re-centre
    rect(x, y, 2, 2)
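Note that this leaves out the 30-graphics-units-per-tick scale from the question's NOTE. A hedged sketch of how to fold it in, assuming the expression should be evaluated in tick units (the {"x": x_val} binding is the same assumption as above):
x_val = (x - 400) / 30                    # graphics x -> tick units
y_val = eval(expression, {"x": x_val})
y = int(-y_val * 30 + 300)                # tick units -> graphics y
rect(x, y, 2, 2)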

Related

Python-Creating List Of Grid Positions Within A Certain Bound Given A Distance

I have an x-axis size, a y-axis size, a render distance, a list of grid-position numbers, and a central grid position.
I am trying to create a list of all the grid positions within the render distance of the central grid position.
The x- and y-axis sizes may differ independently. Ideally the algorithm would not return positions where the render distance extends past the edge of the x or y axis.
Thanks.
I'm writing this to help you answer your own question, in the style that I would go about it. As with anything in coding, you need to break a big problem down into multiple smaller ones.
Design two functions to convert to and from (x, y) coordinates. This is optional; it will make your life easier, though not as efficient (personally I would avoid it for a bit of a challenge).
Given n, size and distance, calculate up, down, left and right. If the size differs per axis, just pass the function the correct one.
eg.
def right(n, size, distance):
    # column-major numbering assumed: moving right jumps one whole
    # column of `size` cells per step
    return n + size * distance

def down(n, size, distance):
    # moving down steps through the current column one cell at a time
    return n + distance
Given size, make sure the above functions don't go off the edge of the grid. Converting the points to (x, y) coordinates may help for this part.
Now that you have the sides of the square, run the functions again to get the corners. For example, to get the top-right corner, you could do right(up(*args)) or up(right(*args)).
With the corners, you can now calculate what's inside your square. Converting the points to (x, y) coordinates will make this easier, as in the sketch below.
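With those pieces in place, here is a minimal sketch of the whole thing. It assumes column-major numbering (n = x * height + y), matching the right()/down() example above; the function name and signature are mine, not from the question:

def positions_in_range(center, width, height, distance):
    # convert the central position number to (x, y) coordinates
    cx, cy = divmod(center, height)
    # clamp the square of side 2 * distance + 1 to the grid edges
    xs = range(max(0, cx - distance), min(width, cx + distance + 1))
    ys = range(max(0, cy - distance), min(height, cy + distance + 1))
    # convert every in-range (x, y) back to a position number
    return [x * height + y for x in xs for y in ys]

print(positions_in_range(center=12, width=5, height=5, distance=1))
# -> [6, 7, 8, 11, 12, 13, 16, 17, 18] on a 5 x 5 grid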

Plotting robot path and orientation using python +/- matplotlib

I have a 1000 x 3 numpy array of coordinates that consist of an (x, y, theta in radians) pose for a moving robot at various times (from time = 0 to time = 1000). Is it possible to graph this position and orientation information using python so that at each point (x,y) there is a small arrow that points in the theta direction? Perhaps a matplotlib type graph would be possible for this?
Have you tried the arrow function in matplotlib (see the documentation)?
Assuming that theta is angle in radians from the x axis, perhaps something like the following for each point will do it.
arrow(x, y, cos(theta), sin(theta))
The above call draws an arrow from (x, y) to (x + dx, y + dy), where dx = cos(theta) and dy = sin(theta).
Another option is matplotlib.pyplot.quiver (see the documentation); the quiver function gives you much more control over the arrows' length and appearance.
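For instance, a minimal quiver sketch; the poses array here is random placeholder data standing in for the 1000 x 3 array from the question:

import numpy as np
import matplotlib.pyplot as plt

poses = np.random.rand(1000, 3) * [10.0, 10.0, 2 * np.pi]  # placeholder (x, y, theta) data
x, y, theta = poses[:, 0], poses[:, 1], poses[:, 2]

# one unit-length arrow per pose, pointing along theta
plt.quiver(x, y, np.cos(theta), np.sin(theta), angles='xy')
plt.axis('equal')
plt.show()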

How to recalculate the coordinates of a point after scaling and rotation?

I have the coordinates of 6 points in an image
(170.01954650878906, 216.98866271972656)
(201.3812255859375, 109.42137145996094)
(115.70114135742188, 210.4272918701172)
(45.42426300048828, 97.89037322998047)
(167.0367889404297, 208.9329833984375)
(70.13690185546875, 140.90538024902344)
I have a point as center [89.2458, 121.0896]. I am trying to re-calculate the positions of the points in Python using 4 rotation angles (0, 90, -90, 180 degrees) and 7 scaling factors (0.5, 0.75, 1, 1.10, 1.25, 1.35, 1.5).
My question is: how can I rotate and scale the above-mentioned points relative to the center point and get the new coordinates of those 6 points?
Your help is really appreciated.
Mathematics
A mathematical approach would be to represent this data as vectors from the center to the image-points, translate these vectors to the origin, apply the transformation and relocate them around the center point. Let's look at how this works in detail.
Representation as vectors
We can show these vectors in a grid, which produces the following image:
This image provides a nice way to look at these points, so we can see our actions happening in a visual way. The center point is marked with a dot at the beginning of all the arrows, and the end of each arrow is the location of one of the points supplied in the question.
A vector can be seen as a list of the coordinates of a point, so
my_vector = [point[0], point[1]]
could be a representation of a vector in Python; it just holds the coordinates of a point, so the format in the question can be used as-is! Notice that I will use position 0 for the x-coordinate and position 1 for the y-coordinate throughout my answer.
I have only added this representation as a visual aid; we can look at any pair of coordinates as a vector. No calculation is needed, it is just a different way of looking at those points.
Translation to origin
The first calculations happen here. We need to translate all these vectors to the origin, which is easily done by subtracting the location of the center point from every other point, for example (this can be done in a simple loop):
point_origin_x = point[0] - center_point[0] # Xvalue point - Xvalue center
point_origin_y = point[1] - center_point[1] # Yvalue point - Yvalue center
The resulting points can now be rotated around the origin and scaled with respect to the origin. The new points (as vectors) look like this:
In this image, I deliberately left the scale untouched, so that it is clear that these are exactly the same vectors (arrows), in size and orientation, only shifted to be around (0, 0).
Why the origin
So why translate these points to the origin? Well, rotations and scaling actions are easy to do (mathematically) around the origin and not as easy around other points.
Also, from now on, I will only include the 1st, 2nd and 4th point in these images to save some space.
Scaling around the origin
A scaling operation is very easy around the origin. Just multiply the coordinates of the point with the factor of the scaling:
scaled_point_x = point[0] * scaling_factor
scaled_point_y = point[1] * scaling_factor
In a visual way, that looks like this (scaling all by 1.5):
Where the blue arrows are the original vectors and the red ones are the scaled vectors.
Rotating
Now for rotating. This is a little harder, because a rotation is most generally described by a matrix multiplication with the vector.
The matrix to multiply with is the standard rotation matrix (from Wikipedia: Rotation matrix):
R(t) = [[cos(t), -sin(t)],
        [sin(t),  cos(t)]]
So if V is the vector, we need to perform V_r = R(t) * V to get the rotated vector V_r. This rotation is always counterclockwise; to rotate clockwise, simply use R(-t).
Because only multiples of 90° are needed in the question, the matrix becomes almost trivial. For a rotation of 90° counterclockwise it is:
R(90°) = [[0, -1],
          [1,  0]]
Which is basically, in code:
rotated_point_x = -point[1] # new x is negative of old y
rotated_point_y = point[0] # new y is old x
Again, this can be nicely shown in a visual way:
Where I have matched the colors of the vectors.
A rotation of 90° clockwise is then:
rotated_cw_point_x = point[1]   # new x is old y
rotated_cw_point_y = -point[0]  # new y is negative of old x
A rotation of 180° just negates both coordinates; equivalently, you could scale by a factor of -1, which is essentially the same thing.
As a last point about these operations: you can chain as many scalings and/or rotations as you want to get the desired result.
Translating back to the center point
After the scaling actions and/or rotations, the only thing left is to translate the vectors back to the center point:
retranslated_point_x = new_point[0] + center_point_x
retranslated_point_y = new_point[1] + center_point_y
And all is done.
Just a recap
So to recap this long post:
Subtract the coordinates of the center point from the coordinates of the image-point
Scale by a factor with a simple multiplication of the coordinates
Use the idea of the matrix multiplication to think about the rotation (you can easily find these things on Google or Wikipedia).
Add the coordinates of the center point to the new coordinates of the image-point
I realize now that I could have just given this recap, but this way the post at least has some visual aid and a bit of mathematical background, which is also nice. I really believe that such problems should be approached from a mathematical angle; the mathematical description can help a lot. A short sketch pulling the four steps together follows below.
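To make the recap concrete, here is a minimal sketch of all four steps in one function. The name, signature, and counterclockwise angle convention are my choices, matching the matrix above rather than anything from the question:

import math

def transform_points(points, center, angle_deg, scale):
    t = math.radians(angle_deg)
    cos_t, sin_t = math.cos(t), math.sin(t)
    cx, cy = center
    result = []
    for px, py in points:
        vx, vy = px - cx, py - cy           # step 1: translate to the origin
        vx, vy = vx * scale, vy * scale     # step 2: scale around the origin
        rx = vx * cos_t - vy * sin_t        # step 3: rotate counterclockwise
        ry = vx * sin_t + vy * cos_t
        result.append((rx + cx, ry + cy))   # step 4: translate back to the center
    return result

points = [(170.01954650878906, 216.98866271972656),
          (201.3812255859375, 109.42137145996094)]
print(transform_points(points, (89.2458, 121.0896), 90, 1.25))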

Move a vertex along a plane, given the plane normal

I have a 3D vector and a 3D face normal. How do I go along to move this vector along the given face normal using Python (with or without numpy)?
Ideally, I'd build a matrix using the face normal with the x and y and multiply it by the original vector or something like that, but I can't get my head around on how to build it. It's been a while since Linear Algebra.
EDIT:
Thanks for pointing out that my question was too broad.
My goal is to get a new point that is x and y units away from the original point, along the face defined by its normal.
Example: If the point is (0,0,0) and the normal is (0, 0, 1), the result would be (x, y, 0).
Example 2: If the point is (1, 0, 0) and the normal is (0, 1, 0), the result would be (1+x, 0, y).
I'd need to extrapolate that to work with any point, normal, x and y.
The projection of a vector onto a plane defined by its normal is:
def projection(vector, normal):
    # assumes numpy arrays and a unit-length normal
    return vector - vector.dot(normal) * normal
Presumably this means you want something like:
x + projection(y, normal)
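For example, with numpy arrays (the formula assumes the normal has unit length):

import numpy as np

normal = np.array([0.0, 0.0, 1.0])  # unit normal of the plane
v = np.array([2.0, 3.0, 5.0])
print(projection(v, normal))        # [2. 3. 0.] -- the in-plane part of v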
def give_me_a_new_vertex_position_along_normal(old_vertex_position, normal):
    new_vertex_position = old_vertex_position + normal
    return new_vertex_position
There is a difference between direction vectors (your normal) and points (your vertices): a point has a fixed location, while a direction vector only carries a direction and a magnitude. Adding a direction vector to a point translates the point along that direction; it is the projection above, which removes the normal component, that keeps a displacement inside the plane.
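Since a normal alone does not pin down unique x and y directions on the plane, any orthonormal tangent pair will do. Here is a minimal numpy sketch under that assumption; the function name and the helper-vector trick are mine, not from any library:

import numpy as np

def move_on_plane(point, normal, dx, dy):
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    # pick any vector not parallel to n to build the first in-plane axis
    helper = np.array([0.0, 1.0, 0.0]) if abs(n[0]) > 0.9 else np.array([1.0, 0.0, 0.0])
    u = np.cross(n, helper)
    u = u / np.linalg.norm(u)
    v = np.cross(n, u)  # second in-plane axis, orthogonal to both n and u
    return np.asarray(point, dtype=float) + dx * u + dy * v

print(move_on_plane((0, 0, 0), (0, 0, 1), 2.0, 3.0))  # result stays in the z = 0 plane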

How to get start and end coordinates (x, y) of major axis of a rotating ellipse in opencv?

I am performing motion tracking of an object, and I am trying to identify the front and back of the object. The object is asymmetrical, which means that the centroid of the contour is closer to the front than the back. Using this information, I am approaching this as follows:
Draw contours of object
Find centroid
centroidx, centroidy = int(moments['m10']/moments['m00']), int(moments['m01']/moments['m00'])
Draw bounding ellipse
cv2.fitEllipse(contour)
Calculate major axis length as follows (and as shown in the figure)
MAx, MAy = int(0.5 * ellipseMajorAxisx*math.sin(ellipseAngle)), int(0.5 * ellipseMajorAxisy*math.cos(ellipseAngle))
Calculate beginning and ending x, y coordinates of the major axis
MAxtop, MAytop = int(ellipseCentrex + MAx), int(ellipseCentrey + MAy)
MAxbot, MAybot = int(ellipseCentrex - MAx), int(ellipseCentrey - MAy)
Identify which of the points is closer to the centroid of the contour
distancetop = math.sqrt((centroidx - MAxtop)**2 + (centroidy - MAytop)**2)
distancebot = math.sqrt((centroidx - MAxbot)**2 + (centroidy - MAybot)**2)
min(distancetop, distancebot)
The problem I am encountering: while I get the "front" end of the ellipse correct most of the time, occasionally the point is a little way off. As far as I have observed, this seems to happen such that the x value is correct but the y value is different (in effect, I think this represents the major axis of an ellipse perpendicular to mine). I am not sure if this is an issue with OpenCV's calculation of angles or (more than likely) my calculations are incorrect. I do realize this is a complicated example; I hope my figures help!
EDIT: When I get the wrong point, it is not from a perpendicular ellipse but from a mirror image of my ellipse. And it happens with the x values too, not just y.
After following ssm's suggestion below, I am getting the desired point most of the time. The point still goes wrong occasionally, but "snaps back" into place soon after. For example, this is a few frames when this happens:
By the way, the above images are after "correcting" for angle by using this code:
if angle > 90:
    angle = 180 - angle
If I do not do the correction, I get the wrong point at other times, as shown below for the same frames.
So it looks like I get it right for some angles with angle correction and the other angles without correction. How do I get all the right points in both conditions?
(White dot inside the ellipse is the centroid of the contour, whereas the dot on or outside the ellipse is the point I am getting)
I think your only problem is MAytop. You can consider doing the following:
if ycen < yc:
    # swap MAytop and MAybot
    MAytop, MAybot = MAybot, MAytop
You may have to do a similar check on the x scale.
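For completeness, here is a hedged sketch of steps 4-6 with that swap idea folded in, using the names from the question. Note that cv2.fitEllipse reports its angle in degrees, so it must be converted before calling math.sin/math.cos; the axis ordering and angle convention are worth double-checking against your own data, and the placeholder values below are mine:

import math

ellipse = ((320.0, 240.0), (40.0, 100.0), 30.0)  # placeholder for cv2.fitEllipse(contour)
centroidx, centroidy = 330, 250                  # placeholder contour centroid
(ellipseCentrex, ellipseCentrey), (minorAxis, majorAxis), angleDeg = ellipse

theta = math.radians(angleDeg)                   # fitEllipse angles are in degrees
MAx = int(0.5 * majorAxis * math.sin(theta))
MAy = int(0.5 * majorAxis * math.cos(theta))

MAxtop, MAytop = ellipseCentrex + MAx, ellipseCentrey + MAy
MAxbot, MAybot = ellipseCentrex - MAx, ellipseCentrey - MAy

# the front of the object is whichever endpoint lies closer to the contour centroid
front = min([(MAxtop, MAytop), (MAxbot, MAybot)],
            key=lambda p: math.hypot(centroidx - p[0], centroidy - p[1]))
print(front)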
