I have the coordinates of four 3D points bounding a planar region representing a wall, and I want to create a 3D object of that plane using Python. The red points shown in the figure are the boundary of the plane, while the green points are the boundaries of voids in the plane. Any suggestions for a simple code snippet, a library, or a method?
PS: The shown figure is a mesh, but I want a flat planar surface, something like that.
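One common approach, assuming the four red points are roughly coplanar: build an orthonormal basis in the plane with NumPy, express the boundary (and any void boundaries) as 2D coordinates in that basis, triangulate in 2D with a library that supports holes (e.g. a constrained-triangulation package), and lift the triangles back to 3D. The corner values below are made-up examples; a minimal sketch of the basis/projection step:

```python
import numpy as np

# Four (roughly coplanar) 3D corner points of the wall -- example values
corners = np.array([
    [0.0, 0.0, 0.0],
    [4.0, 0.0, 0.0],
    [4.0, 0.0, 3.0],
    [0.0, 0.0, 3.0],
])

origin = corners[0]
u = corners[1] - origin                 # first in-plane direction
u = u / np.linalg.norm(u)
n = np.cross(corners[1] - origin, corners[3] - origin)
n = n / np.linalg.norm(n)               # plane normal
v = np.cross(n, u)                      # second in-plane direction

# 2D coordinates of each corner within the plane
uv = np.stack([(corners - origin) @ u, (corners - origin) @ v], axis=1)

def to_3d(a, b):
    """Map a 2D point (a, b) in plane coordinates back to 3D."""
    return origin + a * u + b * v

print(uv)
```

The void (green) boundaries can be projected into the same `uv` frame, handed to the 2D triangulator as holes, and the resulting triangle vertices mapped back with `to_3d` to build the final mesh.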
I am adding some cross markers to my plot using:
points = mlab.points3d(*vertices.T, mode='2dcross')
But the orientation of the 2D crosses is fixed. Ideally I want the crosses to always face the camera. I have seen this question (Python3.0 Mayavi rotating cube glyph), which shows how to rotate 3D cube glyphs, but the same approach makes no difference for 2D glyphs.
Any suggestions?
I projected the 3D point cloud acquired through the depth camera onto a 2D plane.
As a result, I got a list of points projected onto the plane, as in the example image (fig. 1).
How can I get the real area of a polygon made up of a point cloud projected onto a 2D plane?
I tried to calculate the area using shapely's polygon.area, but that function expects the ordered boundary coordinates of the polygon as input, so passing the raw projected points failed.
(https://shapely.readthedocs.io/en/stable/reference/shapely.Polygon.html#shapely.Polygon)
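Shapely's `Polygon` needs an ordered boundary, not a raw point set. If the outline is convex, `shapely.geometry.MultiPoint(points).convex_hull.area` does it in one line; for concave outlines an alpha-shape library (e.g. the `alphashape` package) can recover the boundary first. As a dependency-free sketch of the convex case, here is a monotone-chain hull followed by the shoelace formula:

```python
def convex_hull(points):
    """Andrew's monotone chain; points is a list of (x, y) tuples.
    Returns the hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_area(poly):
    """Shoelace formula for an ordered polygon boundary."""
    area = 0.0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Interior points do not change the hull area
cloud = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1), (0.5, 1.2)]
print(polygon_area(convex_hull(cloud)))  # 4.0
```

Note this is exact only for a convex outline; a concave "real area" needs the alpha-shape route instead.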
I am trying to transform a picture with OpenCV in python.
Therefore I have points in a grid placed on the image that I can also move.
I then split each grid rectangle into two triangles and I have their coordinates:
where they were at the beginning and
where they are after I moved some points around
Now I want to transform the image so it fits the new mesh, without visible seams along the triangle edges and without image pieces being torn apart by neighbouring triangles transforming differently.
Help!
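The usual recipe is a piecewise-affine warp: for each triangle, compute the affine transform from its old to its new vertices, warp the whole image with it, and composite only the pixels inside that triangle (`cv2.warpAffine` plus a `cv2.fillConvexPoly` mask). Because adjacent triangles share vertices, their transforms agree exactly on the shared edge, so nothing rips; `skimage.transform.PiecewiseAffineTransform` also wraps this whole pipeline. A NumPy-only sketch of the per-triangle step (equivalent to what `cv2.getAffineTransform` returns, under that assumption):

```python
import numpy as np

def affine_from_triangles(src, dst):
    """Solve for the 2x3 affine matrix M with dst = M @ [x, y, 1]^T,
    given three corresponding points."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = np.hstack([src, np.ones((3, 1))])   # 3x3: rows [x, y, 1]
    # Solve A @ M.T = dst, so M ends up 2x3
    M = np.linalg.solve(A, dst).T
    return M

src_tri = [(0, 0), (1, 0), (0, 1)]
dst_tri = [(2, 1), (3, 1), (2, 3)]          # x' = x + 2, y' = 2y + 1
M = affine_from_triangles(src_tri, dst_tri)
pt = np.array([0.25, 0.25, 1.0])
print(M @ pt)  # [2.25 1.5]
```

Because the transform is exact at all three vertices, two triangles sharing an edge map that edge identically, which is what keeps the seams invisible.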
I'm doing something similar to the tutorial here: http://docs.opencv.org/3.1.0/d7/d53/t... regarding pose estimation. Essentially, I'm creating an axis in the model coordinate system and using ProjectPoints, along with my rvecs, tvecs, and cameraMatrix, to project the axis onto the image plane.
In my case, I'm working in the world coordinate space, and I have an rvec and tvec telling me the pose of an object. I'm creating an axis using world coordinate points (which assumes the object wasn't rotated or translated at all), and then using projectPoints() to draw the axes of the object in the image plane.
I was wondering if it is possible to eliminate the projection and get the world coordinates of those axes once they've been rotated and translated. To test, I've done the rotation and translation on the axis points manually, and then used projectPoints to project them onto the image plane (passing the identity matrix and zero vector for rotation and translation respectively), but the results seem way off. How can I eliminate the projection step to just get the coordinates of the axes once they've been rotated and translated? Thanks!
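Before projecting, projectPoints applies the rigid transform X' = R·X + t, where R = cv2.Rodrigues(rvec)[0]. Applying that transform yourself and skipping the projection gives the transformed 3D coordinates directly; note that rvec/tvec from pose estimation map object points into *camera* coordinates, not world coordinates, which is one possible reason the manually transformed results seemed off. A NumPy sketch of the transform (Rodrigues formula written out, so no OpenCV is needed):

```python
import numpy as np

def rodrigues(rvec):
    """Rotation vector -> 3x3 rotation matrix (what cv2.Rodrigues computes)."""
    rvec = np.asarray(rvec, dtype=float).reshape(3)
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def transform_points(points, rvec, tvec):
    """Apply the rigid transform projectPoints uses before projection."""
    R = rodrigues(rvec)
    return points @ R.T + np.asarray(tvec, dtype=float).reshape(3)

axis = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
rvec = [0, 0, np.pi / 2]   # 90 degrees about the z axis -- example pose
tvec = [1, 2, 3]
print(transform_points(axis, rvec, tvec))
```

With OpenCV available, the same thing is `(cv2.Rodrigues(rvec)[0] @ axis.T).T + tvec`.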
I am wondering if there is a way to generate 2D contour plots from a set of 3D data points using either Python or MATLAB. The 3D data points form a roughly spherical shape, and the contour would represent the outer border of this spheroid on planes angled from 0 to 180 degrees.
I am thinking one way is to generate a 3D surface from the point cloud using numpy.meshgrid and then plot_surface. However, I cannot find a way to take angled slices of the interpolated 3D data to obtain a contour.
Ideas very welcome!
Thanks,
Jesse
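One way that avoids interpolating a surface entirely: for each slicing angle, keep the points lying within a thin slab around the slicing plane, project them into the plane's 2D coordinates, and trace their outer border (convex hull, or a contour of a 2D histogram). A NumPy sketch, assuming the planes all pass through the origin and are rotated about the z axis; the thickness value is an assumption to tune against the point density:

```python
import numpy as np

def slice_points(points, angle_deg, thickness=0.1):
    """Return the in-plane 2D coordinates of the points lying within
    `thickness` of a plane through the origin, rotated `angle_deg`
    about the z axis."""
    a = np.radians(angle_deg)
    n = np.array([-np.sin(a), np.cos(a), 0.0])   # plane normal
    u = np.array([np.cos(a), np.sin(a), 0.0])    # in-plane axis 1
    v = np.array([0.0, 0.0, 1.0])                # in-plane axis 2
    d = points @ n                               # signed distance to plane
    keep = np.abs(d) < thickness
    return np.stack([points[keep] @ u, points[keep] @ v], axis=1)

# Sanity check: points on a unit sphere -- every central slice is a unit circle
rng = np.random.default_rng(0)
p = rng.normal(size=(2000, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)
sl = slice_points(p, 45.0)
radii = np.linalg.norm(sl, axis=1)
print(radii.min(), radii.max())
```

The returned 2D points can then go straight into a hull/outline routine per angle to build the family of contours from 0 to 180 degrees.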