I am trying to output a square, and am getting a rather distorted rhombus instead. Like so:
And though I can tell that this is in fact the cube I had intended, the cube is strangely distorted. When I wrote my own simple 3D projection program, I ran into a similar problem when I failed to offset the projected 2D points to the middle of the screen; however, I know of no way to inform OpenGL of such an offset...
For anyone who may be wondering, my current camera object looks like [in python]:
from OpenGL.GL import *
from OpenGL.GLU import *

class Camera:
    def __init__(self, x, y, z, fov=45, zNear=0.1, zFar=50):
        self.x, self.y, self.z = x, y, z
        self.pitch, self.yaw, self.roll = 0, 0, 0
        glMatrixMode(GL_PROJECTION)
        gluPerspective(fov, 1, zNear, zFar)

    def __goto__(self, x, y, z, pitch, yaw, roll):
        print "loc:", x, y, z
        print "ang:", pitch, yaw, roll
        glMatrixMode(GL_MODELVIEW)
        glLoadIdentity()
        glRotatef(pitch, 1, 0, 0)
        glRotatef(yaw, 0, 1, 0)
        glRotatef(roll, 0, 0, 1)
        glTranslatef(-x, -y, -z)

    def __flushloc__(self):
        self.__goto__(self.x, self.y, self.z, self.pitch, self.yaw, self.roll)
with a cube being rendered in the following manner:
class Cube:
    def __init__(self, x, y, z, width):
        self.vertices = []
        for x in [x, x+width]:
            for y in [y, y+width]:
                for z in [z, z+width]:
                    self.vertices.append((x, y, z))
        self.faces = [
            [0, 1, 3, 2],
            [4, 5, 7, 6],
            [0, 2, 6, 4],
            [1, 3, 7, 5],
            [0, 1, 5, 4],
            [2, 3, 7, 6]]

    def __render__(self):
        glBegin(GL_QUADS)
        for face in self.faces:
            for vertex in face:
                glVertex3fv(self.vertices[vertex])
        glEnd()
Perhaps I should also mention that the window is 400px by 400px, so the aspect ratio is 1.
Two things:
First, I suspect the "distortion" you're seeing is likely a normal effect of the perspective projection matrix; a square in 3D space can render as a skewed rhombus in 2D once perspective is applied. It's hard to tell in this case what the expected output should be, because you haven't included the coordinates of the camera and the cube. However, if you used an orthographic projection (via glOrtho), my guess is that it would come out as a (rotated, translated, and squashed) parallelogram.
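If you want to test that, here is a minimal sketch of what swapping an orthographic projection into Camera.__init__ might look like (the clipping bounds here are placeholders you would tune to your scene):

from OpenGL.GL import *

glMatrixMode(GL_PROJECTION)
glLoadIdentity()
# left, right, bottom, top chosen to keep the 1:1 aspect ratio; zNear/zFar as before
glOrtho(-2.0, 2.0, -2.0, 2.0, 0.1, 50.0)
glMatrixMode(GL_MODELVIEW)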
Second, I think you're only seeing a single face of your cube displayed. The picture might make more sense if you could see the other visible faces of the cube, but your vertices list is getting trashed because you've reused the x,y,z variables in Cube.__init__.
If you fix this name collision, does your render get better? You might try renaming one set of x,y,z to something else, like this:
def __init__(self, x, y, z, width):
    self.vertices = []
    for xv in [x, x+width]:
        for yv in [y, y+width]:
            for zv in [z, z+width]:
                self.vertices.append((xv, yv, zv))
The complex transformation 1/z should map the line to a circle passing through the origin; likewise, it should map the circle of radius 1 centered at (1, 0, 0) to a line. Instead, both are behaving weirdly.
from manim import *

#config['frame_height'] = 10.0
#config['frame_width'] = 10.0

class Method(Scene):
    def construct(self):
        text = Tex(r"Applying Complex Transformations")
        self.play(Create(text))

class Complex(Scene):
    def construct(self):
        d = ComplexPlane()
        k = d.copy()
        self.play(Create(d))
        self.add(k)
        line = Line(start=[2, 0, 0], end=[2, 5, 0], stroke_width=3, color=RED)
        circle = Circle().shift(RIGHT)
        self.add(line)
        c = Circle().shift(RIGHT)
        self.play(c.animate.apply_complex_function(lambda z: z**2))  # works correctly
        self.play(line.animate.apply_complex_function(lambda z: 1/z), run_time=5)  # behaving weirdly
        self.play(circle.animate.apply_complex_function(lambda z: 1/z), run_time=5)  # behaving weirdly
(screenshot: 1/z applied to the line)
(screenshot: 1/z applied to the circle)
Well, the function 1/z diverges at z=0, so every point that starts at (or near) that position will have problems and give you weird artifacts. I tried your code and the line seems fine to me, but the circle is the one that might do weird stuff, and it just so happens that the circle passes through (0, 0) in the complex plane. Try shifting it a bit further so that it doesn't:
circle = Circle().shift(RIGHT*1.5)
You can also, of course, change the function so that it doesn't diverge at z=0. Something like 1/(z+2) should work.
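For instance, a small sketch inside the same construct method as the question's code, keeping the original circle but moving the pole of the map instead:

# the pole of 1/(z+2) sits at z = -2, safely away from the unit circle
circle = Circle().shift(RIGHT)
self.play(circle.animate.apply_complex_function(lambda z: 1 / (z + 2)), run_time=5)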
I'm estimating the translation and rotation of a single camera using the following code.
E, mask = cv2.findEssentialMat(k1, k2,
                               focal=SCALE_FACTOR * 2868,
                               pp=(1920/2 * SCALE_FACTOR, 1080/2 * SCALE_FACTOR),
                               method=cv2.RANSAC,
                               prob=0.999,
                               threshold=1.0)
points, R, t, mask = cv2.recoverPose(E, k1, k2)
where k1 and k2 are my matching set of key points, which are Nx2 matrices where the first column is the x-coordinates and the second column is y-coordinates.
I collect all the translations over several frames and generate a path that the camera traveled like this.
def generate_path(rotations, translations):
    path = []
    current_point = np.array([0, 0, 0])
    for R, t in zip(rotations, translations):
        path.append(current_point)
        # don't care about the rotation of a single point
        current_point = current_point + t.reshape((3,))
    return np.array(path)
So, I have a few issues with this.
The OpenCV camera coordinate system suggests that if I want to view the 2D "top down" view of the camera's path, I should plot the translations along the X-Z plane.
plt.plot(path[:,0], path[:,2])
The resulting plot is completely wrong.
However, if I write this instead
plt.plot(path[:,0], path[:,1])
I get the following (after doing some averaging), and this path is basically perfect.
So, perhaps I am misunderstanding the coordinate system convention used by cv2.recoverPose? Why should the "birds eye view" of the camera path be along the XY plane and not the XZ plane?
Another, perhaps unrelated issue is that the reported Z-translation appears to decrease linearly, which doesn't really make sense.
I'm pretty sure there's a bug in my code since these issues appear systematic - but I wanted to make sure my understanding of the coordinate system was correct so I can restrict the search space for debugging.
First of all, your method is not producing a real path. The translation t produced by recoverPose() is always a unit vector, so in your "path", every frame moves exactly 1 "meter" from the previous frame. The correct method would be: 1) initialize (featureMatch, findEssentialMat, recoverPose), then 2) track (triangulate, featureMatch, solvePnP). If you would like to dig deeper, tutorials on monocular visual SLAM would help.
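As a rough, hypothetical sketch of that two-stage idea (assuming a 3x3 intrinsic matrix K and Nx2 float keypoint arrays; the feature matching and bookkeeping are omitted):

import numpy as np
import cv2

# k1, k2 are matched keypoints between frames 1 and 2; k3 holds the pixel
# positions of those same features in a later frame (tracking identical
# features across frames is assumed to happen elsewhere).

def initialize(k1, k2, K):
    """Bootstrap: relative pose (unit-scale t) plus an initial 3D map."""
    E, mask = cv2.findEssentialMat(k1, k2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, k1, k2, K)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first camera at origin
    P2 = K @ np.hstack([R, t])                         # second camera
    pts4d = cv2.triangulatePoints(P1, P2, k1.T, k2.T)  # homogeneous, 4xN
    pts3d = (pts4d[:3] / pts4d[3]).T                   # Nx3 map points
    return R, t, pts3d

def track(pts3d, k3, K):
    """Track: absolute pose of a later frame against the existing map."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float64), k3.astype(np.float64), K, None)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec

Note that even then the bootstrap t is unit length, so the map's scale is arbitrary; recovering absolute scale needs outside information such as a stereo baseline, an IMU, or an object of known size.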
Secondly, you might have mixed up the camera coordinate system and the world coordinate system. If you want to plot the trajectory, you should use the world coordinate system rather than the camera coordinate system. Besides, the results of recoverPose() are also in the world coordinate system, and that world coordinate system is: x-axis pointing right, y-axis pointing forward, z-axis pointing up. Thus, when you want to plot the "bird's eye view", it is correct that you should plot along the X-Y plane.
I'm trying to use gdal_grid to make an elevation grid from a surface in a geojson. I use this command:
gdal_grid -a linear:radius=0 inputSurface.geojson outputFile.tif
It seems to give the correct pixel values, but if I open the result in Global Mapper or QGIS, the image is flipped/mirrored about a horizontal axis, such that the TIFF sits directly below the surface and upside-down.
What is the reason for this, and how do I fix it?
Update
I already tried changing the geotransform, but it hasn't totally fixed my problem.
I looked at the resulting image with gdalinfo and found out that the upper-left corner is actually the lower-left corner, so I set it using SetGeoTransform. This moved it to the correct location, but it is still upside-down. (This may be dependent on the projection, which might cause problems later.)
I also tried looking at the pixel height in the geotransform, as mentioned below:
Xgeo = GT[0] + Xpixel*GT[1] + Yline*GT[2]
Ygeo = GT[3] + Xpixel*GT[4] + Yline*GT[5]
The image returned by gdal_grid has a positive GT[5], but unfortunately changing it to -GT[5] doesn't change anything.
The code I used to change the geotransform:
transform = list(ds.GetGeoTransform())
transform = [upperLeftX, transform[1], 0, upperLeftY, 0, -transform[5]]
ds.SetGeoTransform(transform)
GDAL's georeferencing is commonly specified by two sets of parameters. The first is the spatial reference, which defines the coordinate system (UTM, WGS84, something more localized). The spatial reference for a raster is set using gdal.Dataset.SetProjection(). The second piece of georeferencing is the GeoTransform, which translates (row, column) pixel indices into coordinates in the coordinate system. It is likely the GeoTransform that you need to update to make your image "unflipped".
The GeoTransform is a tuple of 6 values which relate raster indices to coordinates:
Xgeo = GT[0] + Xpixel*GT[1] + Yline*GT[2]
Ygeo = GT[3] + Xpixel*GT[4] + Yline*GT[5]
Because these are raster images, the (line, pixel) or (row, col) coordinates start from the top left of the image.
[ ]----> column
|
|
v row
This means that GT[1] will be positive when the image is positioned "upright" in the coordinate system. Similarly, and sometimes counter-intuitively, GT[5] will be negative because the y value should decrease for every increasing row in the image. This isn't a requirement, but it is very common.
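As a small illustration with made-up numbers (a typical "north-up" geotransform with 10 m pixels and a negative GT[5]):

def pixel_to_geo(GT, Xpixel, Yline):
    Xgeo = GT[0] + Xpixel * GT[1] + Yline * GT[2]
    Ygeo = GT[3] + Xpixel * GT[4] + Yline * GT[5]
    return Xgeo, Ygeo

GT = (440720.0, 10.0, 0.0, 3751320.0, 0.0, -10.0)  # made-up origin and resolution
print(pixel_to_geo(GT, 0, 0))      # upper-left corner: (440720.0, 3751320.0)
print(pixel_to_geo(GT, 100, 100))  # 100 px right and down: (441720.0, 3750320.0)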
Modifying the GeoTransform
You state that the image is upside down and below where it should be. This isn't guaranteed to be a fix, but it will get you started. It's easier if you have the image in front of you and can experiment or compare coordinates...
from osgeo import gdal

# open dataset as readable/writable
ds = gdal.Open('input.tif', gdal.GA_Update)
# get the GeoTransform as a tuple
gt = ds.GetGeoTransform()
# change gt[5] to its negative, flipping the image
gt_new = (gt[0], gt[1], gt[2], gt[3], gt[4], -1 * gt[5])
# set the new GeoTransform, effectively flipping the image
ds.SetGeoTransform(gt_new)
# delete the dataset reference, flushing the cache of changes
del ds
I ended up having more problems with gdal_grid, where it just crashed at seemingly random places, so I'm using the scipy.interpolate function griddata instead. This uses a meshgrid to get the coordinates of the grid, and I had to tile it up because of the memory requirements of meshgrid.
import scipy.interpolate as il  # for griddata
import numpy as np

# (xi, yi, coordsT, nodata, raster, nrows, and the tile indices r, c with
# tile size trows x tcols and rtrows come from the surrounding tiling code)

# meshgrid of coords in this tile
gridX, gridY = np.meshgrid(xi[c*tcols:(c+1)*tcols], yi[r*trows:(r+1)*trows][::-1])

# creating the DEM in this tile; fill_value prevents NaN at the polygon outline
zi = il.griddata((coordsT[0], coordsT[1]), coordsT[2], (gridX, gridY),
                 method='linear', fill_value=nodata)
raster.GetRasterBand(1).WriteArray(zi, c*tcols, nrows-r*trows-rtrows)
The linear interpolation seems to do the same as gdal_grid is supposed to. The flip was achieved by making the 5th element of the geotransform negative, as described in the question update.
See description at scipy.interpolate.griddata.
A few things to note:
- The point used in the geotransform should be the upper-left corner
- The resolution in the y-direction should be negative
- In the projections I use, the positive y-direction is up
- In numpy arrays, the positive y-direction is down
- gdal's WriteArray places the array starting from the upper-left corner
Hope this helps clear up other people's confusion.
I've solved a similar issue by simply re-projecting the results of the gdal_grid. Give this a try (replacing the epsg code with your projection and replacing the input/output filepaths):
gdalwarp -s_srs epsg:4326 -t_srs epsg:4326 gdal_grid_result.tif inverted_output.tif
It does not. It is simply a quirk of how the tool renders it. Try opening it in QGIS and you'll notice it is right side up.
I have a point (x, y, z) in 3D that I would like to rotate. First I would like to rotate the point 360 degrees around another point (0, 0, 0). Then I would like to tilt the plane that the point rotates in by 1 degree and repeat. I have been looking at the rotation_matrix function in http://www.lfd.uci.edu/~gohlke/code/transformations.py.html; however, it seems as if the rotation only goes around the x, y, or z axis rather than an arbitrary axis. Does anyone know how to accomplish this?
Rotating around the x, y, and z axes pretty much is the only way to do it; rotating a plane is exactly like rotating around an axis. Check out my Scratch project for the math.
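For completeness, rotating a point about an arbitrary axis through the origin can also be done directly with Rodrigues' rotation formula; here is a minimal numpy sketch (my own illustration, not taken from the linked project):

import numpy as np

def rotate_about_axis(point, axis, angle):
    """Rotate `point` about the axis direction `axis` (through the origin)
    by `angle` radians, using Rodrigues' rotation formula."""
    p = np.asarray(point, dtype=float)
    k = np.asarray(axis, dtype=float)
    k = k / np.linalg.norm(k)  # the formula needs a unit axis
    return (p * np.cos(angle)
            + np.cross(k, p) * np.sin(angle)
            + k * np.dot(k, p) * (1.0 - np.cos(angle)))

# e.g. rotating (1, 0, 0) by 90 degrees about the z-axis gives ~(0, 1, 0)
print(rotate_about_axis((1, 0, 0), (0, 0, 1), np.pi / 2))

Sweeping angle from 0 to 2*pi and then tilting axis by 1 degree reproduces the motion described in the question.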
I've been trying all morning to figure this problem out, eventually had to resort to SO.
I'm trying to rotate a set of 'objects' which have a 3D position and rotation (it's actually for another program but I am writing a quick Python tool to parse the data, rotate it how I want and spit it back out).
In my program there are two classes:
class Object:
    def __init__(self, mod, px, py, pz, rx, ry, rz):
        self.mod = mod
        self.pos = [px, py, pz]
        self.rot = [rx, ry, rz]

    def rotate(self, axisx, axisy, axisz, rotx, roty, rotz):
        """rotates object around axis"""
        ?
This is my 'object' class (okay, I realise now how badly that's named!). Ignore 'mod'; the class is very simple, it just exists in space with a position and rotation (in degrees).
I have no idea what to write into the rotate part. I sort of get matrices, but only in the mathematical form; I've never actually written code for them and wondered if there are any libraries out there to help.
My other class is a simple group for these objects. The only other attribute is an averaged position which is actually the axis that I want to rotate each of the objects around:
class ObjectMap:
    def __init__(self, objs):
        self.objs = objs
        tpx = 0.0
        tpy = 0.0
        tpz = 0.0
        for obj in objs:
            tpx += obj.pos[0]
            tpy += obj.pos[1]
            tpz += obj.pos[2]
        # calculate average position for this object map
        # somewhere in the middle of all the objects
        self.apx = tpx / len(objs)
        self.apy = tpy / len(objs)
        self.apz = tpz / len(objs)

    def rotate(self, rotx, roty, rotz):
        """rotate the entire object map around the averaged position in the centre"""
        for o in self.objs:
            o.rotate(self.apx, self.apy, self.apz, rotx, roty, rotz)
As you can see, there is a rotate function for this class which simply runs through all the objects contained within it and rotates them about the "average position" axis which should be somewhere in the middle since it's an average.
I made a quick animation to better explain what I am after here:
http://puu.sh/i3DxU/adfe44a99d.gif
Where the sphere shapes are my "objects" and the shape in the middle is the axis they are rotating around (the apx, apy, apz coordinates of the ObjectMap class).
I tried to get this library working, but it just wasn't working, so I abandoned that idea. I'm using Python 3 and have numpy installed, as I figured it would help. I've also tried numerous bits of code from the internet, but they just aren't working (or they are for old Python versions, or simply fail to install).
I'd love it if someone could point me in the right direction for getting these rotations working. Even just a link to an example of matrices in Python or a useful library would be great!
Edit: My original answer avoided pitch, roll, and yaw entirely. Based on a clarification of the question, it seems this code may be using data structures and/or APIs that require the use of pitch, roll, and yaw, so I will now try to address this requirement.
There are several ways to specify a rotation in a three-dimensional Cartesian coordinate system:
Euler angles (3 numeric parameters)
Axis and angle (4 numeric parameters)
Rotation matrix (9 numeric parameters)
Quaternions (4 numeric parameters)
Yaw, pitch, and roll are Euler angles (at least according to any applicable definitions I know of those three terms). But transformations.py says there are 24 possible ways to interpret a sequence of three Euler angles, and every single one of those interpretations has different results from each of the others for at least some sequences of angles. It's not obvious how to translate "yaw, pitch, and roll" to one of the 24 possible "axis sequences" in transformations.py.

In fact, unless you know exactly how the existing data/software you are interfacing with applies yaw, pitch, and roll to objects that are to be rotated, I don't think you can really say what "yaw, pitch, and roll" means in this application, and you are unlikely to guess the correct "axis sequence" to use in transformations.py. I suspect this may be the main reason why you have not been able to get transformations.py to work for you.
On top of all that ambiguity, it's unclear to me what the parameters axisx, axisy, and axisz represent in rotate(self, axisx, axisy, axisz, rotx, roty, rotz). Conventionally, yaw, pitch, and roll refer to rotations about three axes of the body being rotated; generally one is supposed to define what those axes are, and the order in which the rotations are applied, before doing any rotations, and never change those definitions. So it really makes no sense to specify axes every time one has to do another rotation; the software should already know exactly which axes to use, even if they are body axes rather than world axes. (I'm assuming now that each of the parameters axisx, axisy, and axisz is an axis by itself, and that these three parameters are not somehow being used to specify a single axis as I assumed in my initial answer.)
To add to the confusion, while pitch, roll, and yaw are typically applied about body axes, you are supposed to be rotating an entire ensemble of objects, which seems to imply you should be rotating around world axes rather than individual body axes.
In practical terms, once you figure out what yaw, pitch, and roll really mean in your application, and what the parameters of your rotate function are supposed to mean, the first thing I would do with any rotation is convert it to a representation that is not any kind of Euler angles. Rotation matrices look like a good choice. If you know the correct "axis sequence" that represents your definition of yaw, pitch, and roll in transformations.py, euler_matrix promises to compute that matrix for you.
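For example, a hedged sketch of what that call might look like; the 'sxyz' axis sequence is chosen purely for illustration, and whether it matches your data's notion of roll, pitch, and yaw is exactly the open question above:

import numpy as np
import transformations  # transformations.py by Christoph Gohlke

roll, pitch, yaw = np.radians([10.0, 20.0, 30.0])  # made-up angles
# static (world) axes, applied in x-y-z order -- one of the 24 sequences
M = transformations.euler_matrix(roll, pitch, yaw, axes='sxyz')  # 4x4 matrix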
You can further rotate objects by doing a matrix multiplication of the new rotation matrix and the matrix of the existing rotation; the result is a third matrix. The new matrix goes on the left in the multiplication if it is a rotation in world coordinates, and on the right if it is a rotation in body coordinates.
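Continuing the sketch above, composing a further rotation is a single matrix multiplication, and which side the new matrix goes on encodes world versus body axes:

# a new rotation of 5 degrees about the z direction
M_new = transformations.rotation_matrix(np.radians(5.0), [0, 0, 1])
M_old = M  # the existing orientation from the sketch above

M_if_world = M_new @ M_old  # new rotation applied about world axes
M_if_body = M_old @ M_new   # new rotation applied about body axes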
Once you have reoriented your objects using the new rotation matrix, if you really need to store the resulting orientation of the object as a sequence of Euler angles (roll, pitch, and yaw) somewhere, euler_from_matrix in transformations.py promises to tell you what those angles are (but once again, you have to know how your "roll, pitch, and yaw" are defined and how transformations.py represents that definition as an axis sequence).
Below the line is material from my original answer (that is, thoughts about how one might do things if one were not forced to use Euler angles).
My recommendations:
For Object, the rotation function signature should be something equivalent to:
def rotate(self, axisx, axisy, axisz, angle)
For ObjectMap the signature should be equivalent to
def rotate(self, angle)
The idea is that once you choose an axis (either through input variables to the function or implicitly already computed as in ObjectMap), the only difference between any two rotations is the angle of rotation around that axis, described by a single scalar parameter angle.
(I recommend that the units of angle be radians.)
To relate this to your GIF: each of the colored arcs has its own axis perpendicular to the plane of the arc. The radius of the arc does not really matter; the only other thing that controls how the spheres move (when you rotate around that axis) is how far the pointer has moved back or forth along the arc. Motion back or forth is something that is described by a single scalar parameter, in this case the angle of rotation.
For the contents of Object, it takes one set of coordinates to specify the location of the object's "position" (which could actually be any reference point of your choosing on the object, but the center is usually a good choice if there's an obvious center):
def __init__(self, mod, px, py, pz):
    self.mod = mod
    self.position = [px, py, pz]
If you also want to represent the orientation of the object (for example, so that the spheres in your GIF appear to rotate around their own axes as they revolve around the axis of the ObjectMap), add sufficient points to the Object (in addition to the "position") so that you can draw it wherever it may end up after rotation. The minimum is two additional points; for example, to draw a sphere like one of the ones in your GIF, it is sufficient to know the location of the north pole and the location of one point on the equator. (It is actually possible to know the exact orientation of the sphere with only three scalar coordinates, rather than the six involved in these two points, but I would not recommend it unless you're willing to seriously study the mathematics involved.)
This leads to something like this:
def __init__(self, mod, px, py, pz,
             vector1x, vector1y, vector1z,
             vector2x, vector2y, vector2z):
    self.mod = mod
    self.position = [px, py, pz]
    self.feature1 = [px + vector1x, py + vector1y, pz + vector1z]
    self.feature2 = [px + vector2x, py + vector2y, pz + vector2z]
The rationale for px + vector1x, etc., is that you might find it convenient to describe the features of your objects by vectors from the object's center to each feature, but for the drawing and rotation of objects you may prefer all the points to be described by their global coordinates. (I should note, however, that it is not necessary to describe the object entirely in global coordinates.)
The rotation of an Object then becomes something like this pseudocode:
def rotate(self, axisx, axisy, axisz, angle):
    rotate self.position around [axisx, axisy, axisz] by angle radians
    rotate self.feature1 around [axisx, axisy, axisz] by angle radians
    rotate self.feature2 around [axisx, axisy, axisz] by angle radians
If all your coordinates are global coordinates, this will move all of them around the global axis, resulting in movement and rotation of the objects made of those points. If you decide that only the center of an Object has a global coordinate and all the other parts of it are described relative to that point, using vectors, you can still use the same rotate function; the result will be the same as if you simply rotated the global coordinates of each point.
The actual details of how to rotate a point in 3D space around a given axis by a given angle are written up in various places, such as Python - Rotation of 3D vector (which includes at least one numpy implementation).
You may find it beneficial to create a "rotation" object (probably a matrix, but you can let the library take care of the details of setting its values) via something like

rotation = define_rotation(axisx, axisy, axisz, angle)

so that you can repeatedly use this same object to compute the rotated position of every point within every Object. This tends to yield faster execution than computing every rotation of every point from the original axis coordinates and angle value.
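A minimal sketch of that idea, using rotation_matrix from transformations.py (define_rotation and apply_rotation here are hypothetical helper names, not library functions):

import numpy as np
import transformations  # transformations.py by Christoph Gohlke

def define_rotation(axisx, axisy, axisz, angle):
    """Build a reusable 4x4 rotation matrix about the axis direction
    [axisx, axisy, axisz] through the origin."""
    return transformations.rotation_matrix(angle, [axisx, axisy, axisz])

def apply_rotation(rotation, point):
    """Apply the precomputed matrix to a 3-element point."""
    p = np.array([point[0], point[1], point[2], 1.0])  # homogeneous coordinates
    return (rotation @ p)[:3]

rot = define_rotation(0.0, 0.0, 1.0, np.pi / 4)  # 45 degrees about z
print(apply_rotation(rot, (1.0, 0.0, 0.0)))      # ~(0.707, 0.707, 0.0)

rotation_matrix also accepts a point argument for axes that do not pass through the origin, which is handy for rotating about the ObjectMap's average position rather than the origin.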
If this were my code, I'd rather define a Point class and/or a Vector class (or use classes from an existing library) consisting of the x, y, and z coordinates of a single point or vector, so that I would not have to keep passing parameters in groups of three and writing out vector-addition formulas throughout my code.
For example, instead of
self.feature1 = [px + vector1x, py + vector1y, pz + vector1z]
I might have
self.feature1 = p.add(vector1)
But that is a design choice that you can make independently of what math you ultimately choose to do for the rotations.
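For what it's worth, plain numpy arrays already provide this behavior; a tiny sketch of that design choice:

import numpy as np

p = np.array([1.0, 2.0, 3.0])        # an object's position
vector1 = np.array([0.5, 0.0, 0.0])  # offset from the position to a feature

feature1 = p + vector1  # vector addition without spelling out x, y, and z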