Animation of sphere rotation in Mayavi - python

I am using Mayavi to simulate satellite trajectories, so I need the sphere that represents the central body to rotate as the satellite moves. However, I cannot find a way to change the orientation of the sphere, which is in fact a TVTK Actor object, since I am using this method to apply the texture. Is there a way to do this, or will I have to find another way to apply the texture to the sphere and then rotate it? Thank you for the help. I am also open to using another Python library that can help me achieve this.

In this example, we animate the rotation of a spherical scalar field in Mayavi. We create the field on a structured grid, rotate it with a TransformData filter, and add a scalar cut plane to visualize the rotation.
import numpy as np
from mayavi import mlab
from tvtk.api import tvtk

# Build a spherical scalar field on a structured grid. (The original snippet used
# mlab.pipeline.scalar_field with undefined x, y, z; a StructuredGrid is used here
# because the TransformData filter does not operate on image data.)
x, y, z = np.mgrid[-1:1:30j, -1:1:30j, -1:1:30j]
sgrid = tvtk.StructuredGrid(dimensions=x.shape)
sgrid.points = np.column_stack((x.ravel(order='F'), y.ravel(order='F'), z.ravel(order='F')))
sgrid.point_data.scalars = (x ** 2 + y ** 2 + z ** 2).ravel(order='F')
sgrid.point_data.scalars.name = 'radius_squared'
sphere = mlab.pipeline.add_dataset(sgrid)

# Rotate the data with a TransformData filter
transform = mlab.pipeline.transform_data(sphere)

# Add a scalar cut plane to visualize the rotation
cut_plane = mlab.pipeline.scalar_cut_plane(transform)

# Animate the rotation: mlab.animate decorates a generator that yields once per frame
@mlab.animate(delay=100)
def anim():
    for i in range(100):
        transform.transform.rotate_x(0.2)   # rotate 0.2 degrees about the x axis
        transform.filter.update()
        mlab.draw()
        # mlab.savefig('frame_%03d.png' % i)  # optionally save frames to assemble a GIF
        yield

anim()
mlab.show()
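Since the question is specifically about a textured TVTK Actor (the usual texture recipe builds the sphere with tvtk rather than mlab), here is a minimal sketch of rotating such an actor directly. It assumes a local texture file named earth.jpg, which is only illustrative:
from mayavi import mlab
from tvtk.api import tvtk

fig = mlab.figure()

# Textured sphere built with tvtk (the usual texture-mapping recipe)
sphere = tvtk.TexturedSphereSource(radius=1.0, theta_resolution=180, phi_resolution=180)
img = tvtk.JPEGReader(file_name='earth.jpg')        # illustrative file name
texture = tvtk.Texture(input_connection=img.output_port, interpolate=1)
mapper = tvtk.PolyDataMapper(input_connection=sphere.output_port)
actor = tvtk.Actor(mapper=mapper, texture=texture)
fig.scene.add_actor(actor)

# tvtk.Actor inherits rotate_x/rotate_y/rotate_z from vtkProp3D, so the
# textured sphere can be spun without touching the texture pipeline
@mlab.animate(delay=50)
def spin():
    while True:
        actor.rotate_z(1.0)        # one degree per frame about the z axis
        fig.scene.render()
        yield

spin()
mlab.show()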

Related

How to place a virtual spherical camera centered inside a 3D object/mesh?

I'm currently working with some 3D models from the ModelNet40 dataset and I'd like to place a virtual spherical camera centered inside the 3D object, which is a 3D triangular mesh, in order to trace rays from the camera and store information about each triangle that a ray hits.
I've only computed some basic information such as vertices, normals, and triangles:
import numpy as np
import open3d as o3d

mesh = o3d.io.read_triangle_mesh(r"...\airplane_0627.off")
print('Vertices:')
print(np.asarray(mesh.vertices),"\n")
print('Triangles:')
print(np.asarray(mesh.triangles),"\n")
print("Computing normal and rendering it:")
mesh.compute_vertex_normals()
print(np.asarray(mesh.triangle_normals))
meshVertices = np.asarray(mesh.vertices)
triangleNormals = np.asarray(mesh.triangle_normals)
meshTriangles = np.asarray(mesh.triangles)
o3d.visualization.draw_geometries([mesh], mesh_show_wireframe=True)
And tried to use the camera positioning from Open3D with some random pose parameters to see where the camera is placed:
vis = o3d.visualization.VisualizerWithKeyCallback()
vis.create_window()
view_ctl = vis.get_view_control()
vis.add_geometry(mesh)
pose = np.array([ 0.23522,-7.0289e-17,-0.97194,-129.54,
0.97194,-7.3988e-17,0.23522,59.464,
-8.8446e-17,-1,5.0913e-17,2.11,
0,0,0,1]).reshape(4,4)
cam = view_ctl.convert_to_pinhole_camera_parameters()
cam.extrinsic = pose
view_ctl.convert_from_pinhole_camera_parameters(cam)
vis.run()
vis.destroy_window()
What is the best way to place the camera inside the centre of a 3D object/triangular mesh?
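One way to do this (a minimal sketch, assuming Open3D's pinhole extrinsic maps world to camera coordinates, X_cam = R * X_world + t): keep whatever rotation R you like and set the translation to t = -R * C, where C = mesh.get_center() is the mesh centroid, so the camera centre lands exactly on C.
import numpy as np
import open3d as o3d

mesh = o3d.io.read_triangle_mesh(r"...\airplane_0627.off")
mesh.compute_vertex_normals()

vis = o3d.visualization.VisualizerWithKeyCallback()
vis.create_window()
vis.add_geometry(mesh)
view_ctl = vis.get_view_control()

cam = view_ctl.convert_to_pinhole_camera_parameters()
R = np.asarray(cam.extrinsic)[:3, :3]      # keep the current viewing direction
C = mesh.get_center()                      # centroid of the mesh vertices
extrinsic = np.eye(4)
extrinsic[:3, :3] = R
extrinsic[:3, 3] = -R @ C                  # places the camera centre on C
cam.extrinsic = extrinsic
view_ctl.convert_from_pinhole_camera_parameters(cam)

vis.run()
vis.destroy_window()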

Mayavi - Mouse hovering interaction & plot same scene from different perspectives

I am currently struggling to figure out how to interact appropriately with a Mayavi-rendered scene.
I have a lidar point cloud that is plotted with points3d(). I have also added a bounding box around a car within the point cloud, and I would like to change the color of the points inside the box as soon as I hover over the bounding box with my mouse.
Can you tell me how I can just select the points inside the bbox and change their color?
And my second question: how can I show the same point-cloud scene in a 3D view and a bird's-eye view concurrently?
Thank you very much :]
I have found a solution for the color problem (I don't know if it is best practice), but I still need help with determining the points inside the bounding box; a sketch for that is included after the code below. I would also like to create a GUI that lets the user modify the size and orientation of the bounding box, but that is another topic.
import numpy as np
from mayavi.mlab import draw, points3d
from tvtk.api import tvtk
# Primitives
N = 3000 # Number of points
ones = np.ones(N) #np.hstack((np.tile(np.array([1]), int(N/2)).T, np.tile(np.array([4000]), int(N/2)).T))
scalars = ones #np.arange(N) # Key point: set an integer for each point
# Define per-point RGB colors, which must be uint8 values in [0, 255]
colors = np.vstack((np.tile(np.array([[255],[255],[0]]), int(N/2)).T, np.tile(np.array([[0],[0],[255]]), int(N/2)).T))
# Define coordinates and points
x, y, z = (np.random.random((N, 3))*255).astype(np.uint8).T # Assign x, y, z values to match color
pts = points3d(x, y, z, scale_factor=10) # Create points
#pts.glyph.color_mode = 'color_by_vector' # Color by scalar
# Set look-up table and redraw
#pts.module_manager.scalar_lut_manager.lut.table = colors
pts.glyph.scale_mode = 'scale_by_vector'
sc=tvtk.UnsignedCharArray()
sc.from_array(colors)
pts.mlab_source.dataset.point_data.scalars = sc
draw()
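As for selecting the points inside the bounding box, here is a minimal NumPy sketch continuing from the snippet above. It assumes an oriented box given by its centre, half-extents, and rotation matrix (the values below are made up): transform the points into the box frame and compare against the half-extents.
def points_in_box(points, center, half_extents, R):
    # points: (N, 3); center, half_extents: (3,); R: (3, 3) with the box axes as columns
    local = (points - center) @ R          # coordinates of the points in the box frame
    return np.all(np.abs(local) <= half_extents, axis=1)

# Made-up box parameters for illustration
box_center = np.array([128.0, 128.0, 128.0])
box_half_extents = np.array([30.0, 20.0, 15.0])
box_R = np.eye(3)                          # axis-aligned box

mask = points_in_box(np.column_stack((x, y, z)), box_center, box_half_extents, box_R)
colors[mask] = [255, 0, 0]                 # recolor the selected points
sc = tvtk.UnsignedCharArray()
sc.from_array(colors)
pts.mlab_source.dataset.point_data.scalars = sc
draw()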

Rotated image coordinates after scipy.ndimage.interpolation.rotate?

I have a numpy array for an image that I read in from a FITS file. I rotated it by N degrees using scipy.ndimage.interpolation.rotate. Then I want to figure out where some point (x,y) in the original non-rotated frame ends up in the rotated image -- i.e., what are the rotated frame coordinates (x',y')?
This should be a very simple rotation-matrix problem, but if I apply the usual mathematical or programming-based rotation equations, the computed (x',y') does not land on the corresponding feature in the rotated image. I suspect this has something to do with needing a translation matrix as well, because the scipy rotate function is based on the origin (0,0) rather than the actual center of the image array.
Can someone please tell me how to get the rotated frame (x',y')? As an example, you could use
from scipy import misc
from scipy.ndimage import rotate
data_orig = misc.face()
data_rot = rotate(data_orig,66) # data array
x0,y0 = 580,300 # left eye; (xrot,yrot) should point there
P.S. The following two related questions' answers do not help me:
Find new coordinates of a point after rotation
New coordinates after image rotation using scipy.ndimage.rotate
As usual with rotations, one needs to translate to the origin, then rotate, then translate back. Here, we can take the center of the image as origin.
import numpy as np
import matplotlib.pyplot as plt
from scipy import misc
from scipy.ndimage import rotate
data_orig = misc.face()
x0,y0 = 580,300 # left eye; (xrot,yrot) should point there
def rot(image, xy, angle):
    im_rot = rotate(image, angle)
    org_center = (np.array(image.shape[:2][::-1]) - 1) / 2.
    rot_center = (np.array(im_rot.shape[:2][::-1]) - 1) / 2.
    org = xy - org_center
    a = np.deg2rad(angle)
    new = np.array([org[0]*np.cos(a) + org[1]*np.sin(a),
                    -org[0]*np.sin(a) + org[1]*np.cos(a)])
    return im_rot, new + rot_center

fig, axes = plt.subplots(2, 2)
axes[0,0].imshow(data_orig)
axes[0,0].scatter(x0, y0, c="r")
axes[0,0].set_title("original")

for i, angle in enumerate([66, -32, 90]):
    data_rot, (x1, y1) = rot(data_orig, np.array([x0, y0]), angle)
    axes.flatten()[i+1].imshow(data_rot)
    axes.flatten()[i+1].scatter(x1, y1, c="r")
    axes.flatten()[i+1].set_title("Rotation: {}deg".format(angle))

plt.show()

How do I make an elevation model from a 3d polygon?

I have a number of polygons in 3d from a geojson file, and I would like to make an elevation model. This means that I want a raster, where every pixel is the height of the polygon in this position.
I tried looking at gdal_rasterize, but the description says
As of now, only points and lines are drawn in 3D.
gdal_rasterize
I ended up using the scipy.interpolate function griddata. It uses a meshgrid to get the coordinates of the grid, and I had to tile the grid up because of the memory requirements of meshgrid.
import scipy.interpolate as il  # for griddata

# Meshgrid of coordinates in this tile: xi, yi are the full grid coordinate vectors,
# r/c are the tile row/column indices, and trows/tcols are the tile dimensions
gridX, gridY = np.meshgrid(xi[c*tcols:(c+1)*tcols], yi[r*trows:(r+1)*trows][::-1])

# Creating the DEM in this tile; coordsT holds the polygon vertices as (x, y, z) rows
zi = il.griddata((coordsT[0], coordsT[1]), coordsT[2], (gridX, gridY), method='linear',
                 fill_value=nodata)  # fill_value prevents NaN at the polygon outline
The linear interpolation seems to do exactly what I want. See description at https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.griddata.html
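For reference, a small self-contained sketch of the same idea with synthetic polygon vertices and no tiling, just to show the griddata call end to end; the vertex values are made up:
import numpy as np
from scipy.interpolate import griddata

# Made-up 3D polygon vertices (x, y, z)
verts = np.array([[0.0, 0.0, 10.0],
                  [100.0, 0.0, 12.0],
                  [100.0, 80.0, 15.0],
                  [0.0, 80.0, 11.0]])
nodata = -9999.0

# Target raster grid, one unit per pixel
xi = np.arange(0.0, 100.0, 1.0)
yi = np.arange(0.0, 80.0, 1.0)
gridX, gridY = np.meshgrid(xi, yi[::-1])   # flip y so row 0 is the top of the raster

# Linear interpolation of the vertex heights over the grid
dem = griddata((verts[:, 0], verts[:, 1]), verts[:, 2], (gridX, gridY),
               method='linear', fill_value=nodata)
print(dem.shape)  # (80, 100)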

Render image on the surface with Mayavi and Python

I am trying to create a visualization using Python and Mayavi.
The purpose of that visualization is to show a trajectory and camera frustums at different stages of the path.
The thing I struggle with is texturing the camera frustum polygons with actual images.
I am willing to put performance considerations aside for now, and want to find a way to texture a mayavi-created surface with an image provided by numpy.
The most promising suggestions were found there, yet I was unable to construct a surface when I implemented them.
def render_image(self, frustum, timestamp):
    surf = mayavi.mlab.surf(frustum[0, :-1],
                            frustum[1, :-1],
                            frustum[2, :-1],
                            color=(1.0, 1.0, 1.0))
That's the code for surface creation, where the rows of the numpy array frustum are the x, y, z coordinates respectively, and the last (fifth) point is the tip of the pyramid and hence not needed for the mesh.
x [-8.717184671492793, -8.623419637172622, -8.363581977642212, -8.269816943322041]
y [-4.563044562134721, -4.941612408713827, -4.37100415350352, -4.749572000082626]
z [13.614485323873417, 13.703336344550703, 14.059553426925493, 14.148404447602779]
That is an example of function input - four 3D points representing vertices of a desired polygon.
Yet, the surf function fails on that input:
File "/usr/local/lib/python2.7/dist-packages/mayavi/tools/helper_functions.py", line 679, in __call_internal__
aspect_ratios = [(zf - zi) / (xf - xi), (zf - zi) / (yf - yi)]
ZeroDivisionError: float division by zero
Note: I was able to render images with mayavi.mlab.imshow, but I find it error-prone and onerous to specify the image pose and size in terms of axis angles and scale vectors, so I'm reluctant to accept answers pointing in that direction.
Your help is greatly appreciated.
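One observation that may help: mlab.surf expects z sampled on a regular (x, y) grid, which is presumably why it fails with a division by zero for four arbitrary 3D corners, whereas mlab.mesh accepts arbitrary 3D quadrilaterals. A minimal sketch, assuming frustum is laid out as described above (rows are x, y, z and the last column is the tip):
from mayavi import mlab

def render_quad(frustum):
    corners = frustum[:3, :-1]             # x, y, z rows of the four corner points (tip dropped)
    # Reshape each coordinate row to 2x2; the corners must be ordered so that
    # (0,0)-(0,1) and (1,0)-(1,1) are opposite edges, otherwise swap two corners
    x = corners[0].reshape(2, 2)
    y = corners[1].reshape(2, 2)
    z = corners[2].reshape(2, 2)
    return mlab.mesh(x, y, z, color=(1.0, 1.0, 1.0))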
I managed to draw textured cameras with Mayavi! The way I've done it uses mlab.imshow, though, so maybe this is the type of answer you don't want. See this code:
obj=mlab.imshow(image.T)
obj.actor.orientation = np.rad2deg(camera.w_Rt_c.euler)
pp = np.array([0, 0, camera.f])[:,None]
w_pp = camera.w_Rt_c.forward(pp)
obj.actor.position = w_pp.ravel()
obj.actor.scale = [0.8, 0.8, 0.8]
image is an (n, m) numpy array; for some reason imshow would show the image rotated by 90 degrees, which is why I transpose it.
obj.actor.orientation expects yaw, pitch, and roll angles in degrees. The rotation of the image is the product of the individual rotation matrices Rx(yaw)*Ry(pitch)*Rz(roll). In the code I use the camera-to-world Euler angles of my camera class (I can't share that code at the moment).
The position of the image is set to the 3d position where the principal point of my camera would be transformed to world coordinates.
Why the scale factor is 0.8 is a mystery; if I leave it at 1, the image plane appears larger than the frustum.
I encapsulate the above in a class that expects a camera and an image and draws the frustum and the image at the position and orientation of the given camera.
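For anyone without such a camera class, here is a small self-contained variant of the same idea. It assumes you have a camera-to-world rotation matrix R_wc, the camera centre t_wc in world coordinates, and a focal length f in scene units, and it relies on the Rx(yaw)*Ry(pitch)*Rz(roll) convention described above (the names are hypothetical):
import numpy as np
from scipy.spatial.transform import Rotation
from mayavi import mlab

def show_camera_image(image, R_wc, t_wc, f, scale=0.8):
    obj = mlab.imshow(image.T)                        # transpose, as noted above
    # Decompose R_wc into intrinsic x-y-z Euler angles (degrees), matching the
    # Rx(yaw) * Ry(pitch) * Rz(roll) product described above
    obj.actor.orientation = Rotation.from_matrix(R_wc).as_euler('XYZ', degrees=True)
    # Place the image plane at the camera's principal point, in world coordinates
    obj.actor.position = R_wc @ np.array([0.0, 0.0, f]) + t_wc
    obj.actor.scale = [scale, scale, scale]
    return obj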
