glTranslatef gets messed up after glRotatef - python

I'm a newbie to PyOpenGL. I made a simple program that displays a cube and lets you navigate around it using the keyboard. However, I noticed that after a glRotatef call, your perspective changes but the position of the objects does not, whereas glTranslatef does not work on your perspective but rather on a literal coordinate system. In other words, after a glRotatef of 90°, a glTranslatef call that would previously have moved you forward now moves you to the left. Is there a function like glTranslatef that changes with the rotation so you don't get these odd motions, or some workaround to adjust the values you pass to glTranslatef based on the rotation?

Operations like glTranslate and glRotate each define a matrix (a translation or rotation matrix, respectively) and multiply the current matrix by that new matrix.
Matrix multiplication is not commutative. If you want to rotate the model and translate it independently of the rotation, then you have to do the rotation before the translation, i.e. multiply the translation matrix by the rotation matrix
(OpenGL matrix multiplications are read from right to left):
modelmatrix = translation * rotation
which means calling glTranslate before glRotate:
glTranslate(x, y, z)
glRotate(angle, ax, ay, az)
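A minimal sketch of that order of operations in PyOpenGL, rebuilding the modelview matrix every frame from stored position and angle values (the variable names and key handling are assumptions, not code from the question):
from OpenGL.GL import (GL_MODELVIEW, glLoadIdentity, glMatrixMode,
                       glRotatef, glTranslatef)

cam_x, cam_y, cam_z = 0.0, 0.0, -10.0   # hypothetical offsets, updated by the keyboard handler
angle = 0.0                             # hypothetical rotation angle in degrees

def apply_view():
    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()                    # start from a clean matrix every frame
    glTranslatef(cam_x, cam_y, cam_z)   # translation first ...
    glRotatef(angle, 0.0, 1.0, 0.0)     # ... then rotation, so the translation stays independent of it
Rebuilding the matrix from scratch each frame avoids accumulating rotations into later translations, which is what produces the sideways drift described above.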

Related

How to create a curved line vector plot of a triangle in Python?

Question
Suppose one has 3 random coordinates with 3 random functions that describe the continuous lines between them*; how would one create a vector plot in Python whose lines stay smooth after (in principle) infinite zooming in?
Example
The functions should be rotated and translated from their specification to map onto the corresponding edge/line of the geometry. For example, one curved line may be specified as -x(x-5)=0, describing the line from (x,y) coordinates (2,6) to (5,2) (which has length 5). Another curved line, from (x,y) coordinates (2,2) to (2,6), may be specified as sin(x/4*pi)=0. One can assume all bulges point outward (of the triangle in this case).
Approach
I can perform a translation and rotation of the respective functions onto the lines between the coordinates, and then save the plot as a .eps or .pdf. Before doing that, however, I thought it would be wise to ask how these functions are represented and how these plots are generated, as I expect the dpi setting may simply produce a (very) high-resolution raster plot instead of something that still gives smooth lines after infinite scrolling.
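As a minimal sketch of this approach with matplotlib (curve_on_edge is an illustrative helper name, not an existing API): sample the function in a local frame, rotate/translate the samples onto the edge, and save with a vector backend such as .pdf or .eps. Note that the result is still a sampled polyline stored as vector paths, so it scales without pixelation but is not an analytic curve.
import numpy as np
import matplotlib.pyplot as plt

def curve_on_edge(p0, p1, f, n=400):
    # Sample f on [0, L] (L = edge length), then rotate/translate the samples
    # so they run along the edge from p0 to p1, bulging out along its normal.
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    length = np.hypot(d[0], d[1])
    t = np.linspace(0.0, length, n)
    height = f(t)                        # local "height" above the straight edge
    u = d / length                       # unit vector along the edge
    normal = np.array([-u[1], u[0]])     # unit normal to the edge
    return p0 + np.outer(t, u) + np.outer(height, normal)

pts = curve_on_edge((2, 2), (2, 6), lambda t: np.sin(t / 4 * np.pi))
plt.plot(pts[:, 0], pts[:, 1])
plt.axis("equal")
plt.savefig("triangle_edges.pdf")        # vector output: paths, not pixels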
Doubt
I can imagine that using a sinusoid does not allow for infinitely smooth scrolling, as it may be stored numerically. If the representation is finite for sinusoids but analytical/symbolic for polynomials, I would be happy to constrain this question to polynomials only, in order to get smooth, infinitely scrollable images (like fractals).

Is it possible to get the index of 'invisible' vertices in python-opengl like gloo, glumpy?

For example, when I draw a 3D sphere in a scenegraph and rotate the object using a turntable camera, half of the vertices are invisible, and which indices those are changes whenever I rotate.
If I turn on the 'cull_face' option, OpenGL will not draw them, but is there any way to get the indices of those vertices that are 'undrawn' or invisible because they are blocked by other vertices?
but is there any way to get the indices of those vertices that are 'undrawn' or invisible because they are blocked by other vertices?
OpenGL does not offer such functionality in a direct way. It can still be achieved, of course, but you need to implement it yourself. Here are some ideas:
After drawing the scene, use OpenGL occlusion queries, rendering each vertex as a separately queried point. I would not recommend it performance-wise, but it can be done.
Since you are basically interested in face culling, just calculate the face culling yourself (see the sketch after this list). For each triangle, compute the normal vector, which is just the cross product of two edges, and check whether the angle between the view direction and the normal is above or below 90 degrees, which is a simple dot product. This approach can easily be ported to GPU compute shaders and will trivially run in parallel, as each triangle can be tested independently. It is also worth doing on the GPU because you can use it to transform the vertices with exactly the same matrices as before; the viewing direction is then the constant (0,0,1) in window space, so the dot product reduces to the z component of the normal, which is just the signed area of the triangle's 2D projection in window space.
You could also do simple ray casting, checking a ray from the camera to each vertex for intersections. You can apply the projection matrix first, so the view rays all become parallel to the z axis and you can simply test against the z-buffer. This approach could also be implemented on the GPU.
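A rough CPU sketch of the second idea with NumPy (the function names and the counter-clockwise winding convention are assumptions); it flags triangles whose projected winding faces the viewer and collects the vertex indices they use. This covers back-face culling only, not occlusion by nearer geometry:
import numpy as np

def front_facing(vertices, triangles, mvp):
    # vertices: (N, 3) positions, triangles: (M, 3) vertex indices,
    # mvp: the same 4x4 modelview-projection matrix used for rendering.
    homo = np.c_[vertices, np.ones(len(vertices))] @ mvp.T
    ndc = homo[:, :3] / homo[:, 3:4]          # perspective divide
    a = ndc[triangles[:, 0], :2]
    b = ndc[triangles[:, 1], :2]
    c = ndc[triangles[:, 2], :2]
    e1, e2 = b - a, c - a
    signed_area = e1[:, 0] * e2[:, 1] - e1[:, 1] * e2[:, 0]   # z of the 2D cross product
    return signed_area > 0                    # flip the comparison for clockwise winding

def visible_vertex_indices(vertices, triangles, mvp):
    mask = front_facing(vertices, triangles, mvp)
    return np.unique(triangles[mask])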

How to compare the orientation of a 3D vector against a plane in three dimensions

I am currently trying to plot a plane in three-dimensional space, but I am not sure how to do it for the problem I have.
Currently I have code that defines a 3D vector from coordinates I have; this includes the ability to rotate, translate, and work out the angle between vectors.
The next step is to define a plane. I am not sure of the best way to do this, however. The plane will be in a 100x100x100 box, be flat, and likely sit at a z height of around 30.
My issue comes because I need this plane to do a couple of things:
1: I need to be able to rotate it around the three axes.
2: I need to be able to measure the smallest angle between the plane and the vector I have defined where the vector intersects the plane.
I was initially playing around with filling a numpy array with 1s where the plane would be, etc., but I don't see that really working the way I need it to.
Does anyone know of any other tool that I would be able to use in this situation? Many thanks.
First of all, you'll need the normal vector to the plane. From there and following this link it should be easy for you to figure it out :)
Basically, you take the arcsin of the scalar product of your vector and the plane's normal vector, divided by the product of the norms of both vectors.
PS: If the plane is parallel to the XY plane, then its normal vector is just (0,0,1).
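A small illustration of that formula with NumPy (angle_vector_plane is a made-up helper name):
import numpy as np

def angle_vector_plane(v, n):
    # Smallest angle (radians) between vector v and the plane with normal n.
    v, n = np.asarray(v, float), np.asarray(n, float)
    s = np.dot(v, n) / (np.linalg.norm(v) * np.linalg.norm(n))
    return np.arcsin(abs(s))   # the angle to the plane is 90 deg minus the angle to the normal

print(np.degrees(angle_vector_plane((1, 0, 1), (0, 0, 1))))   # 45.0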

Method to determine polygon surface rotation from top-down camera

I have a webcam looking down on a surface which rotates about a single-axis. I'd like to be able to measure the rotation angle of the surface.
The camera position and the rotation axis of the surface are both fixed. The surface is a distinct solid color right now, but I do have the option to draw features on the surface if it would help.
Here's an animation of the surface moving through its full range, showing the different apparent shapes:
My approach thus far:
Record a series of "calibration" images, where the surface is at a known angle in each image
Threshold each image to isolate the surface.
Find the four corners with cv2.approxPolyDP(). I iterate through various epsilon values until I find one that yields exactly 4 points.
Order the points consistently (top-left, top-right, bottom-right, bottom-left)
Compute the angles between adjacent points with atan2.
Use the angles to fit a sklearn linear_model.LinearRegression()
This approach gets me predictions within about 10% of the actual angle with only 3 training images (covering the full positive, full negative, and middle positions). I'm pretty new to both OpenCV and sklearn; is there anything I should consider doing differently to improve the accuracy of my predictions? (Probably increasing the number of training images is a big one?)
I did experiment with cv2.moments directly as my model features, and then some values derived from the moments, but these did not perform as well as the angles. I also tried using a RidgeCV model, but it seemed to perform about the same as the linear model.
If I understand correctly, you want to estimate the rotation of the polygon with respect to the camera. If you know the object's dimensions in 3D, you can use solvePnP to estimate the pose of the object, from which you can get its rotation.
Steps:
Calibrate your webcam and get the intrinsic matrix and distortion matrix.
Get the 3D measurements of the object corners and find the corresponding points in 2D. Assuming a rectangular planar object, the corners in 3D would be (0,0,0), (0, 100, 0), (100, 100, 0), (100, 0, 0).
Use solvePnP to get the rotation and translation of the object
The rotation will be the rotation of your object about its axis. Here you can find an example of estimating the pose of a head; you can modify it to suit your application.
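A minimal sketch of those steps with OpenCV (the corner coordinates, camera matrix, and distortion values below are placeholders, not measured data):
import cv2
import numpy as np

object_points = np.array([(0, 0, 0), (0, 100, 0), (100, 100, 0), (100, 0, 0)],
                         dtype=np.float64)          # corners in the object's own frame
image_points = np.array([(320, 120), (330, 400), (600, 390), (590, 130)],
                        dtype=np.float64)           # matching corners from approxPolyDP, same order

camera_matrix = np.array([[800.0, 0.0, 320.0],      # from your webcam calibration
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)                           # or your measured distortion coefficients

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
rotation_matrix, _ = cv2.Rodrigues(rvec)            # 3x3 rotation of the surface
angle_deg = np.degrees(np.linalg.norm(rvec))        # rotation magnitude about rvec's axis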
Your first step is good -- everything after that becomes way way way more complicated than necessary (if I understand correctly).
Don't think of it as 'learning'; just think of it as a reference. Every time you're in a position where you DON'T know the angle, take a picture and find the reference picture that looks most like it. Guess it's THAT angle. You're done! (There may well be indeterminacies, maybe the relationship isn't bijective, but that's where I'd start.)
You can consider this a 'nearest-neighbor classifier,' if you want, but that's just to make it sound better. Measure a simple distance (Euclidean! Why not!) between the uncertain picture, and all the reference pictures -- meaning, between the raw image vectors, nothing fancy -- and choose the angle that corresponds to the minimum distance between observed, and known.
If this isn't working -- and maybe do this anyway -- stop throwing away so much information! You're stripping things down, then trying to re-estimate them, propagating error all over the place for no obvious (to me) benefit. So when you do the nearest-neighbor lookup against the reference pictures, why not just use the full picture? (Maybe other elements will change in it? That's a more complicated question, but basically, throw away as little as possible; it should all be useful later in accurately choosing your 'nearest neighbor.')
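A bare-bones version of that nearest-neighbor lookup (reference_images and reference_angles are placeholder names for your own calibration frames and their known angles):
import numpy as np

def estimate_angle(image, reference_images, reference_angles):
    # Euclidean distance between raw pixel vectors, nothing fancy.
    query = image.astype(np.float64).ravel()
    dists = [np.linalg.norm(query - ref.astype(np.float64).ravel())
             for ref in reference_images]
    return reference_angles[int(np.argmin(dists))]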
Another option that is rather easy to implement, especially since you've already done part of the work, is the following (I've used it to compute the orientation of a cylindrical part from 3 images acquired while the tube was rotating):
Threshold each image to isolate the surface.
Find the four corners with cv2.approxPolyDP(); alternatively, you could find the four sides of your part with LineSegmentDetector (available from OpenCV 3).
Compute the angle alpha, as depicted in the image below.
When your part is rotating, this angle alpha will follow a sine curve. That is, you will measure alpha(theta) = A sin(theta + B) + C. Given alpha you want to know theta, but first you need to determine A, B and C.
You've acquired many "calibration" or reference images; you can use all of them to fit a sine curve and determine A, B and C.
Once this is done, you can determine theta from alpha.
Notice that you have to deal with the fact that the sine is not one-to-one over a period (sin(x) = sin(pi - x)), so a single alpha can correspond to two values of theta. This is not a problem if you acquire more than one image sequentially; if you have a single static image, you have to use an extra mechanism.
Hope I'm clear enough; the implementation really shouldn't be a problem given what you have done already.
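A possible sketch of that sine fit with SciPy's curve_fit (the calibration numbers below are invented placeholders):
import numpy as np
from scipy.optimize import curve_fit

thetas = np.radians([-45.0, 0.0, 45.0])       # known surface angles from the calibration images
alphas = np.radians([10.0, 25.0, 40.0])       # measured corner angle alpha at each of them

def model(theta, A, B, C):
    return A * np.sin(theta + B) + C

(A, B, C), _ = curve_fit(model, thetas, alphas, p0=[0.3, 0.0, float(np.mean(alphas))])

def theta_from_alpha(alpha):
    # arcsin returns one of the two candidates per period; disambiguate with
    # a neighbouring frame or prior knowledge, as noted above.
    return np.arcsin(np.clip((alpha - C) / A, -1.0, 1.0)) - B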

Python get transformation matrix from two sets of points

I have two images, one from a simulation and one from real data, each with bright spots.
Simulation:
Reality:
I can detect the spots just fine and get their coordinates. Now I need to compute the transformation matrix (scale, rotation, translation, maybe shear) between the two coordinate systems. If needed, I can pick some (5-10) corresponding points by hand to give to the algorithm.
I tried a lot of approaches already, including:
2 implementations of ICP:
https://engineering.purdue.edu/kak/distICP/ICP-2.0.html#ICP
https://github.com/KojiKobayashi/iterative_closest_point_2d
Implementing affine transformations:
https://math.stackexchange.com/questions/222113/given-3-points-of-a-rigid-body-in-space-how-do-i-find-the-corresponding-orienta/222170#222170
Implementations of affine transformations:
Determining a homogeneous affine transformation matrix from six points in 3D using Python
how to perform coordinates affine transformation using python? part 2
Most of them simply fail somewhat like this:
The red points are the spots from the simulation transformed into the reality coordinate system.
The best approach so far is the one from how to perform coordinates affine transformation using python? part 2, yielding this:
As you see, the scaling and translating mostly works, but the image still needs to be rotated / mirrored.
Any ideas on how to get a working algorithm? If necessary, I can provide my current non-working implementations, but they are basically as linked.
I found the error.
I used plt.imshow to display both the simulated and the real image and picked from them the reference points used to calculate the transformation.
Turns out, due to the usual array-to-image index-flipping voodoo (or a misunderstanding of the transformation on my side), I need to swap the x and y indices of the reference points from the simulated image.
With this, everything works fine using how to perform coordinates affine transformation using python? part 2.
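For reference, a hedged sketch of estimating the transform from hand-picked correspondences with OpenCV's estimateAffine2D (the point lists are invented placeholders, related here by a scale of 3 plus a shift; note that plt.imshow indexes arrays as (row, column), i.e. (y, x), which is what caused the swap above):
import cv2
import numpy as np

sim_pts = np.array([[10, 12], [40, 15], [25, 60], [70, 55], [55, 90]], dtype=np.float64)
real_pts = np.array([[130, 86], [220, 95], [175, 230], [310, 215], [265, 320]], dtype=np.float64)

# Full 2D affine: rotation, scale, shear, translation, and mirroring if present.
M, inliers = cv2.estimateAffine2D(sim_pts, real_pts)

# Map the simulated spots into the real image's coordinate system.
mapped = np.c_[sim_pts, np.ones(len(sim_pts))] @ M.T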
