How do I make an elevation model from a 3d polygon? - python

I have a number of polygons in 3D from a GeoJSON file, and I would like to make an elevation model: a raster where every pixel holds the height of the polygon at that position.
I tried looking at gdal_rasterize, but its documentation says:
As of now, only points and lines are drawn in 3D.

I ended up using the scipy.interpolate function griddata. It needs a meshgrid of the target grid coordinates, and I had to process the grid in tiles because of the memory demands of meshgrid.
import scipy.interpolate as il #for griddata
# meshgrid of coords in this tile
gridX, gridY = np.meshgrid(xi[c*tcols:(c+1)*tcols], yi[r*trows:(r+1)*trows][::-1])
## Creating the DEM in this tile
zi = il.griddata((coordsT[0], coordsT[1]), coordsT[2], (gridX, gridY),method='linear',fill_value = nodata) # fill_value to prevent NaN at polygon outline
The linear interpolation seems to do exactly what I want. See description at https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.griddata.html
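For completeness, here is a self-contained sketch of the tiled approach (not my exact script): the polygon vertices, raster extent, resolution, tile size and nodata value are made-up stand-ins for the GeoJSON data, and the grid is laid out north-up (row 0 = largest y).
import numpy as np
import scipy.interpolate as il

rng = np.random.default_rng(0)
coordsT = rng.uniform(0, 1000, (3, 5000))      # rows: x, y, z of all polygon vertices (synthetic)

nodata = -9999.0
res = 1.0                                      # pixel size in map units
nrows = ncols = 1000
xi = res * (np.arange(ncols) + 0.5)            # pixel-centre x, west to east
yi = 1000 - res * (np.arange(nrows) + 0.5)     # pixel-centre y, north to south
trows = tcols = 256                            # tile size, chosen to keep meshgrid small

dem = np.full((nrows, ncols), nodata)
for r in range(0, nrows, trows):
    for c in range(0, ncols, tcols):
        # meshgrid only over this tile to limit memory use
        gridX, gridY = np.meshgrid(xi[c:c + tcols], yi[r:r + trows])
        zi = il.griddata((coordsT[0], coordsT[1]), coordsT[2],
                         (gridX, gridY), method='linear', fill_value=nodata)
        dem[r:r + gridY.shape[0], c:c + gridX.shape[1]] = zi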

Related

Creating an image mask using polygon points coordinates

I have a grayscale image of shape (1920, 1080) that I'm trying to create a mask for. I used external software to manually pick the points of interest (a polygon). I now have 27 coordinate points representing a polygon in the middle of the image.
I created a mask using the following:
import numpy as np
import numpy.ma as ma
import matplotlib.pyplot as plt
from skimage.draw import polygon2mask
#image= grayscale with shape (1920,1080)
coordinates = ([1080.15, 400.122], [1011.45, 400.90], .......) #27 points
polygon = np.array(coordinates)
mask = polygon2mask(image.shape, polygon)
result = ma.masked_array(image, np.invert(mask))
plt.imshow(result)
The problem I'm facing is that the output ends up in the wrong place. It should be roughly centred, because I took the coordinates from the centre of the image, but it actually appears at the edge of the image (bottom).
Also, the size seems a bit smaller than expected. I'm not sure what is causing this; I must have done something wrong in my code. Kindly help me identify the problem.
You swapped the x and y coordinates: polygon2mask expects points in (row, col), i.e. (y, x), order.
Add
coordinates = [[y,x] for [x,y] in coordinates]
after defining coordinates, and you'll probably get what you expected.
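A minimal runnable version on a synthetic image (the polygon below is made up), showing polygon2mask taking points in (row, col) = (y, x) order:
import numpy as np
import numpy.ma as ma
import matplotlib.pyplot as plt
from skimage.draw import polygon2mask

image = np.random.rand(1920, 1080)                               # grayscale stand-in
coords_xy = [[400, 800], [700, 800], [700, 1200], [400, 1200]]   # (x, y) as exported by the tool
coords_rc = [[y, x] for x, y in coords_xy]                       # swap to (row, col) order
mask = polygon2mask(image.shape, np.array(coords_rc))
result = ma.masked_array(image, ~mask)                           # keep only pixels inside the polygon
plt.imshow(result, cmap="gray")
plt.show()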

Animation of sphere rotation in Mayavi

I am using Mayavi to simulate satellite trajectories, so I need the sphere that is the central body to rotate as the satellite moves. However, I cannot find a way to change the orientation of the sphere, which is in fact a TVTK Actor object, since I am using this method to apply the texture. Is there a way, or will I have to find another way to apply the texture to the sphere and then rotate it? Thank you for the help. I am also open to using another Python library that can help me achieve this.
Since the textured sphere is already a TVTK actor, you can rotate that actor directly rather than rebuilding the geometry: change its orientation (or call one of its rotate_* methods) inside an mlab.animate loop and redraw each frame. Below is a sketch where a plain mesh sphere stands in for your textured actor; swap in your own actor object.
import numpy as np
from mayavi import mlab

# Build a unit sphere as a parametric mesh (stand-in for the textured sphere)
phi, theta = np.mgrid[0:np.pi:91j, 0:2 * np.pi:181j]
x = np.sin(phi) * np.cos(theta)
y = np.sin(phi) * np.sin(theta)
z = np.cos(phi)
sphere = mlab.mesh(x, y, z, color=(0.4, 0.5, 0.9))

# Rotate the underlying TVTK actor by a small angle on every animation frame
@mlab.animate(delay=50)
def anim():
    while True:
        sphere.actor.actor.rotate_z(1.0)   # degrees per frame, about the z axis
        mlab.draw()
        yield

anim()
mlab.show()

How would you render a 3D arrow (cone + cylinder) onto a 2D image in Python without OpenGL?

Assuming I've been given image I and a 3D vector V_3d = [V_x, V_y, V_z]. To make things easier, assume they're both in the same coordinate system. That is, the image plane normal is Z axis. The Y axis points downwards and X points to the right.
The closest example I could find is complicated and requires OpenGL.
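One OpenGL-free sketch (the focal length, image size, arrow anchor and proportions below are assumptions): build a cone + cylinder mesh along V_3d, project the vertices with a plain pinhole model (Z forward, Y down, X right, principal point at the image centre), and rasterize the triangles back-to-front onto the image with OpenCV's fillConvexPoly.
import numpy as np
import cv2

def arrow_mesh(origin, direction, length=1.0, shaft_r=0.03, head_r=0.08, head_len=0.25, n=24):
    # Vertices and triangles of a cylinder (shaft) plus cone (head) along `direction`
    d = direction / np.linalg.norm(direction)
    # Orthonormal frame (u, v, d) so we can sweep circles around the axis
    tmp = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(d, tmp); u /= np.linalg.norm(u)
    v = np.cross(d, u)
    ang = np.linspace(0, 2 * np.pi, n, endpoint=False)
    ring = np.cos(ang)[:, None] * u + np.sin(ang)[:, None] * v
    base = origin + shaft_r * ring                              # shaft bottom ring
    top = origin + (length - head_len) * d + shaft_r * ring     # shaft top ring
    brim = origin + (length - head_len) * d + head_r * ring     # cone base ring
    tip = origin + length * d                                   # cone apex
    verts = np.vstack([base, top, brim, tip[None, :]])
    tris = []
    for i in range(n):
        j = (i + 1) % n
        tris += [(i, j, n + i), (j, n + j, n + i)]              # shaft side quads as two triangles
        tris += [(2 * n + i, 2 * n + j, 3 * n)]                 # cone side triangles
    return verts, np.array(tris)

def project(pts, f, cx, cy):
    # Pinhole projection: u = f*X/Z + cx, v = f*Y/Z + cy (Y points down, so no sign flip)
    return np.stack([f * pts[:, 0] / pts[:, 2] + cx,
                     f * pts[:, 1] / pts[:, 2] + cy], axis=1)

img = np.full((1080, 1920, 3), 255, np.uint8)                   # stand-in for image I
V_3d = np.array([0.5, -0.3, 0.2])                               # the given vector (made up here)
verts, tris = arrow_mesh(origin=np.array([0.0, 0.0, 5.0]), direction=V_3d, length=1.5)
pix = project(verts, f=1200.0, cx=960.0, cy=540.0)
# Painter's algorithm: draw far triangles first so nearer ones overwrite them
depth = verts[:, 2][tris].mean(axis=1)
for t in tris[np.argsort(depth)[::-1]]:
    cv2.fillConvexPoly(img, pix[t].astype(np.int32), (0, 0, 200))
cv2.imwrite("arrow_overlay.png", img)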

Rotated image coordinates after scipy.ndimage.interpolation.rotate?

I have a numpy array for an image that I read in from a FITS file. I rotated it by N degrees using scipy.ndimage.interpolation.rotate. Then I want to figure out where some point (x,y) in the original non-rotated frame ends up in the rotated image -- i.e., what are the rotated frame coordinates (x',y')?
This should be a very simple rotation matrix problem but if I do the usual mathematical or programming based rotation equations, the new (x',y') do not end up where they originally were. I suspect this has something to do with needing a translation matrix as well because the scipy rotate function is based on the origin (0,0) rather than the actual center of the image array.
Can someone please tell me how to get the rotated frame (x',y')? As an example, you could use
from scipy import misc
from scipy.ndimage import rotate
data_orig = misc.face()
data_rot = rotate(data_orig,66) # data array
x0,y0 = 580,300 # left eye; (xrot,yrot) should point there
P.S. The following two related questions' answers do not help me:
Find new coordinates of a point after rotation
New coordinates after image rotation using scipy.ndimage.rotate
As usual with rotations, one needs to translate to the origin, then rotate, then translate back. Here, we can take the center of the image as origin.
import numpy as np
import matplotlib.pyplot as plt
from scipy import misc
from scipy.ndimage import rotate
data_orig = misc.face()
x0,y0 = 580,300 # left eye; (xrot,yrot) should point there
def rot(image, xy, angle):
    im_rot = rotate(image, angle)
    # centres of the original and of the (larger) rotated image, in (x, y)
    org_center = (np.array(image.shape[:2][::-1]) - 1) / 2.
    rot_center = (np.array(im_rot.shape[:2][::-1]) - 1) / 2.
    # shift to the centre, rotate, then shift back to the new centre
    org = xy - org_center
    a = np.deg2rad(angle)
    new = np.array([org[0] * np.cos(a) + org[1] * np.sin(a),
                    -org[0] * np.sin(a) + org[1] * np.cos(a)])
    return im_rot, new + rot_center

fig, axes = plt.subplots(2, 2)
axes[0,0].imshow(data_orig)
axes[0,0].scatter(x0, y0, c="r")
axes[0,0].set_title("original")
for i, angle in enumerate([66, -32, 90]):
    data_rot, (x1, y1) = rot(data_orig, np.array([x0, y0]), angle)
    axes.flatten()[i+1].imshow(data_rot)
    axes.flatten()[i+1].scatter(x1, y1, c="r")
    axes.flatten()[i+1].set_title("Rotation: {}deg".format(angle))
plt.show()

Render image on the surface with Mayavi and Python

I am trying to create visualization with use of Python and Mayavi.
The purpose of that visualization is to show a trajectory and camera frustums at different stages of the path.
What I am struggling with is texturing the camera frustum polygons with actual images.
I am willing to put performance considerations aside for now, and want to find a way to texture a mayavi-created surface with an image provided by numpy.
The most promising suggestions were found there, but I was unable to construct a surface when I implemented them.
def render_image(self, frustum, timestamp):
    surf = mayavi.mlab.surf(frustum[0, :-1],
                            frustum[1, :-1],
                            frustum[2, :-1],
                            color=(1.0, 1.0, 1.0))
That's the code for the surface creation, where the rows of the numpy array frustum are the x, y, z coordinates respectively; the last, fifth point is the tip of the pyramid and hence not needed for the mesh.
x [-8.717184671492793, -8.623419637172622, -8.363581977642212, -8.269816943322041]
y [-4.563044562134721, -4.941612408713827, -4.37100415350352, -4.749572000082626]
z [13.614485323873417, 13.703336344550703, 14.059553426925493, 14.148404447602779]
That is an example of function input - four 3D points representing vertices of a desired polygon.
Yet, the surf function fails on that input:
File "/usr/local/lib/python2.7/dist-packages/mayavi/tools/helper_functions.py", line 679, in __call_internal__
aspect_ratios = [(zf - zi) / (xf - xi), (zf - zi) / (yf - yi)]
ZeroDivisionError: float division by zero
Note: I was able to render images with mayavi.mlab.imshow, but I find it error-prone and onerous to specify image pose and size in terms of axis angles and scale vectors, so I'm reluctant to accept answers pointing to that direction.
Your help is greatly appreciated.
I got to draw textured cameras with mayavi!
Although the way I've done it uses mlab.imshow, so maybe this is the type of answer you don't want. See this code:
obj=mlab.imshow(image.T)
obj.actor.orientation = np.rad2deg(camera.w_Rt_c.euler)
pp = np.array([0, 0, camera.f])[:,None]
w_pp = camera.w_Rt_c.forward(pp)
obj.actor.position = w_pp.ravel()
obj.actor.scale = [0.8, 0.8, 0.8]
image is an (n, m) numpy array; for some reason imshow shows the image rotated by 90 degrees, which is why I transpose it.
obj.actor.orientation expects yaw, pitch and roll angles in degrees. The rotation of the image is the product of the individual rotation matrices Rx(yaw)*Ry(pitch)*Rz(roll). In the code I use the camera-to-world Euler angles from my camera class (I can't share that code at the moment).
The position of the image is set to the 3d position where the principal point of my camera would be transformed to world coordinates.
Why the scale factor is 0.8 is a mystery; if I leave it at 1, the image plane appears larger than the frustum.
I encapsulate the above in a class that expects a camera and an image and draws the frustum and the image at the position and orientation of the given camera.
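A stripped-down, self-contained version of the recipe (the pose angles, position and scale below are made up, since I can't share the camera class):
import numpy as np
from mayavi import mlab

image = np.random.randint(0, 255, (120, 160)).astype(np.uint8)   # stand-in for the camera image

obj = mlab.imshow(image.T)                  # transposed, as noted above
obj.actor.orientation = [0.0, 30.0, 0.0]    # yaw, pitch, roll in degrees (made-up pose)
obj.actor.position = [0.0, 0.0, 2.0]        # world position of the principal point (made up)
obj.actor.scale = [0.01, 0.01, 0.01]        # shrink the pixel-sized plane to scene units
mlab.show()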
