Can I assign an ID to a vtkPoint myself? - python

I want to create a VTK Unstructured Grid from FE data in Python.
I have the nodes/points and their IDs in a NumPy array. Can I write a VTK file from this and keep the IDs assigned by the FE model?
I need a VTK point to which I can assign an arbitrary ID.
Example:
import numpy as np

points = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [1.0, 1.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [1.0, 0.0, 1.0],
    [1.0, 1.0, 1.0],
    [0.0, 1.0, 1.0]
])
ids = np.array([10, 21, 22, 23, 34, 15, 36, 7])
Thanks for any help.
When I try to use vtk.SetPoint I get the following error: ValueError: expects 0 <= id && id < GetNumberOfPoints()

In VTK, the point id is simply the index into the points array, so it always has to satisfy the constraint shown in the error message; you cannot choose it freely.
To keep a trace of your original ids, store them in a PointData array instead (the syntax below assumes the grid is wrapped with vtk.numpy_interface.dataset_adapter):
ugrid.Points = points
ugrid.PointData.append(ids, "originalIds")
See the VTK documentation for more details.
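For completeness, a minimal end-to-end sketch of this approach (my own, assuming the vtk.numpy_interface dataset_adapter wrapping that the snippet above implies; the output file name is arbitrary):
import vtk
from vtk.numpy_interface import dataset_adapter as dsa

# Wrap an empty unstructured grid so it accepts numpy arrays directly
ugrid = dsa.WrapDataObject(vtk.vtkUnstructuredGrid())
ugrid.Points = points                        # the (N, 3) coordinate array from the question
ugrid.PointData.append(ids, "originalIds")   # keep the FE node ids as point data

# Write the result; cells would still have to be added for a complete FE mesh
writer = vtk.vtkXMLUnstructuredGridWriter()
writer.SetFileName("mesh.vtu")
writer.SetInputData(ugrid.VTKObject)
writer.Write()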


Open3D registration with ICP shows error of 0 and returns the input transformation

I am trying to use the ICP algorithm of Open3D to find a transformation that minimizes the distance between 2 point clouds, loosely following their tutorial page: http://www.open3d.org/docs/latest/tutorial/pipelines/icp_registration.html
(I use Ubuntu 20.04)
I tried to use point clouds from my ouster128, but it didn't work, so I decided to use 2 'dummy' point clouds that I create with numpy. The ICP registration method gets a transformation as input and, in my case, always returns the input transformation (it basically does nothing, probably because the errors are 0). Here's the code (should be ready to use when copy-pasted):
import numpy as np
import copy
import open3d as o3d


def draw_registration_result(source, target, transformation):
    source_temp = copy.deepcopy(source)
    target_temp = copy.deepcopy(target)
    source_temp.paint_uniform_color([1, 0.206, 0])
    target_temp.paint_uniform_color([0, 0.651, 0.929])
    print("Transformation: " + str(transformation))
    source_temp.transform(transformation)
    coord_frame = o3d.geometry.TriangleMesh.create_coordinate_frame()
    o3d.visualization.draw_geometries([source_temp, target_temp, coord_frame],
                                      zoom=0.5,
                                      front=[0.9288, -0.2951, -0.2242],
                                      lookat=[0, 1, 1],
                                      up=[0, 0, 1])
src_points = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [2.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [1.0, 1.0, 0.0],
    [2.0, 1.0, 0.0],
    [0.0, 2.0, 0.0],
    [1.0, 2.0, 0.0],
    [2.0, 2.0, 0.0],
    [0.0, 3.0, 0.0],
    [1.0, 3.0, 0.0],
    [2.0, 3.0, 0.0],
])
tgt_points = np.array([
    [0.0, 0.0, 0.1],  # Due to the 0.1 the clouds do not match perfectly
    [1.0, 0.0, 0.1],
    [2.0, 0.0, 0.1],
    [0.0, 1.0, 0.1],
    [1.0, 1.0, 0.1],
    [2.0, 1.0, 0.1],
    [0.0, 2.0, 0.1],
    [1.0, 2.0, 0.1],
    [2.0, 2.0, 0.1],
    [0.0, 3.0, 0.1],
    [1.0, 3.0, 0.1],
    [2.0, 3.0, 0.1],
])
o3d.utility.set_verbosity_level(o3d.utility.VerbosityLevel.Debug)
source = o3d.geometry.PointCloud()
source.points = o3d.utility.Vector3dVector(src_points)
target = o3d.geometry.PointCloud()
target.points = o3d.utility.Vector3dVector(tgt_points)
trans_init = np.asarray([[1.0, 0.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0, 0.0],
                         [0.0, 0.0, 1.0, 0.0],
                         [0.0, 0.0, 0.0, 1.0]])
threshold = 0.02
reg_p2p = o3d.pipelines.registration.registration_icp(
    source, target, threshold, trans_init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
print("Post Registration")
print("Inlier Fitness: ", reg_p2p.fitness)
print("Inlier RMSE: ", reg_p2p.inlier_rmse)
draw_registration_result(source, target, reg_p2p.transformation)
source and target are the same point cloud; the only difference is that target is translated by 0.1 in the z direction. The initial transformation is the identity matrix. The matrix I would expect as output is the identity with the z-translation entry set to 0.1 (i.e. I[2][3] = 0.1). Now, the fitness and inlier_rmse are 0, which makes no sense (unless I have completely misunderstood something), as that would mean the clouds match perfectly, which they obviously don't. Sometimes the fitness is not zero (for example, when source and target are the same clouds, except that 3 points of target are translated by 0.1).
What I tried before posting this thread:
- 2 different versions of Open3D (0.15.2 and 0.16.0)
- different point clouds
- different initial transformations
- several thresholds: 2e-10, 2e-6, 0.2
(The visualization window is white and the camera has to be rotated to view the clouds.)
So, what am I doing wrong here? Thanks in advance.
Ok, I fixed it in the end.
I'd blame the documentation in this case, as it says:
"max_correspondence_distance (float) – Maximum correspondence points-pair distance."
In their tutorial this parameter is called "threshold", and I expected the algorithm to run until the error drops below that threshold. In reality it is the maximum distance at which two points are treated as a correspondence at all, so with a threshold smaller than the initial misalignment no correspondences are found and the algorithm effectively does nothing. This should be stated more precisely in the documentation, and especially in the tutorial. If I use a threshold of 50 it works as expected.
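For illustration, a minimal sketch (mine, not from the original post) that re-runs the registration from the question with a correspondence distance larger than the 0.1 offset:
# Any max_correspondence_distance comfortably larger than the 0.1 z-offset works
threshold = 0.5
reg_p2p = o3d.pipelines.registration.registration_icp(
    source, target, threshold, trans_init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
print(reg_p2p.fitness, reg_p2p.inlier_rmse)  # non-zero fitness, small RMSE
print(reg_p2p.transformation)                # ~identity with a 0.1 translation in z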

How to calculate a deformation gradient in Python. (On a cube in 3D) [closed]

I am trying to calculate large (Lagrangian/Green) strains in Python on a cube I have artificially deformed. I am having trouble calculating the deformation gradient matrix, F:
(Figures of the non-deformed and the deformed cube are omitted here.)
Cube coordinates are x1 (undeformed) and x2 (deformed):
import numpy as np

x1 = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0], [1.0, 1.0, 1.0]]
x2 = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.5, 0.0, 1.5], [0.0, 1.5, 1.5], [2.0, 2.0, 2.0]]
global_coords = [x1, x2]

def positions_at_t(global_coords, t):
    gc = np.array(global_coords)
    new_coords = gc[0] + t*(gc[1] - gc[0])
    return list(new_coords)
gc[1] - gc[0] works as a gradient over time, as we can see above. But the problem is that it is an 8x3 matrix, whereas we would expect a 3x3 matrix for a deformation gradient.
Side note/question:
On this site the author generates equations that define the movement of points in (x, y, z), which are then differentiated with respect to (X, Y, Z) to get F. Is there a library I can use to get these equations (or the gradients)?
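(As an aside that is not part of the original thread: sympy can build such symbolic maps and differentiate them. A small sketch follows; the deformation map below is purely illustrative and is not fitted to the cube above.)
import sympy as sp

# Reference coordinates and an assumed, purely illustrative deformation map x(X, Y, Z), y(...), z(...)
X, Y, Z = sp.symbols('X Y Z')
x = X + 0.5*X*Z
y = Y + 0.5*Y*Z
z = Z + 0.5*Z**2

# F = d(x, y, z) / d(X, Y, Z): the Jacobian of the deformed coordinates w.r.t. the reference ones
F = sp.Matrix([x, y, z]).jacobian(sp.Matrix([X, Y, Z]))
print(F)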
Deformation Gradient, F
The deformation gradient F maps line elements in the reference configuration to line elements in the deformed configuration: dx = F · dX.
So solving for F will be:
F = dX\dx
Where '\' is left matrix division, roughly equivalent to inv(A) * B.
This is a linear algebra problem with 9 variables and 9 equations.
Green Strain, E
From http://www.continuummechanics.org/greenstrain.html, the Green (Lagrangian) strain is E = 1/2 (F^T F - I), where C = F^T F is the right Cauchy-Green deformation tensor.
Implementing this in Python using numpy looks like:
dx = x1
dX = x2
# use three linearly independent vertices (rows 5-7): 9 equations for the 9 unknowns in F
F = np.linalg.solve(dX[5:], dx[5:])
C = F.T @ F
E = .5*(C - np.identity(3))
E
Output:
array([[ 1.5  ,  0.125, -0.125],
       [ 0.125,  1.5  , -0.125],
       [-0.125, -0.125,  0.   ]])
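If you want to use all eight vertices rather than a hand-picked three, a least-squares fit is one option (a sketch of my own, not part of the original answer, keeping the same dx/dX assignments as above):
dx = np.asarray(x1, dtype=float)  # same assignment as in the answer above
dX = np.asarray(x2, dtype=float)

# Solve dX @ F ≈ dx in the least-squares sense using all 8 vertices
F, residuals, rank, sv = np.linalg.lstsq(dX, dx, rcond=None)
C = F.T @ F
E = 0.5 * (C - np.identity(3))
print(E)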

Plot and fill 3D volumes in Python

I am working with some 3D (volumetric) data using Python, and for every tetrahedron I have not only the vertices' coordinates but also a fourth dimension, which is the value of some parameter for that tetrahedron's volume.
For example:
# nodes coordinates that defines a tetrahedron volume:
x = [0.0, 1.0, 0.0, 0.0]
y = [0.0, 0.0, 1.0, 0.0]
z = [0.0, 0.0, 0.0, 1.0]
# Scalar value of the potential for the given volume:
c = 100.0
I would like to plot the 3D volume (given by the node coordinates) filled with a solid color representing the given value c.
How could I do that in Python 3.6 using its plotting libraries?
You can use mayavi.mlab.triangular_mesh():
import numpy as np
from mayavi import mlab
from itertools import combinations, chain

x = [0.0, 1.0, 0.0, 0.0, 2.0, 3.0, 0.0, 0.0]
y = [0.0, 0.0, 1.0, 0.0, 2.0, 0.0, 3.0, 0.0]
z = [0.0, 0.0, 0.0, 1.0, 2.0, 0.0, 0.0, 3.0]
c = [20, 30]

# Each tetrahedron contributes the 4 triangular faces formed by its 4 vertices
triangles = list(chain.from_iterable(combinations(range(s, s + 4), 3) for s in range(0, len(x), 4)))
# Repeat each scalar 4 times so every vertex of a tetrahedron carries its cell value
c = np.repeat(c, 4)

mlab.triangular_mesh(x, y, z, triangles, scalars=c)
mlab.show()

Plotting mesh data from vtk python using matplotlib

The following question makes use of vtk python, but what I am attempting to do should not require any knowledge of vtk, because I have converted the data I wish to plot into the numpy arrays described below. If anyone does know of an improvement to the way I go about processing the vtk data into numpy, please let me know!
I have some data that I have extracted using vtk python. The data consists of a 3D unstructured grid and has several 'blocks'. The block I am interested in is block0. The data is contained at each cell rather than at each point. I wish to plot a contourf plot of a scalar variable on this grid using matplotlib. In essence my problem comes down to the following:
Given a set of cell faces with known vertices in space and a known scalar field variable, create a contour plot as one would get if one had created a numpy.meshgrid and used plt.contourf/plt.pcolormesh etc. Basically I post process my vtk data like so:
numCells = block0.GetCells().GetNumberOfCells()
# Array of the 8 vertices that make up a cell in 3D
cellPtsArray = np.zeros((numCells, 8, 3))
# Array of the 4 vertices that make up a cell face
facePtsArray = np.zeros((numCells, 4, 3))
# Array to store scalar field value from each cell
valueArray = np.zeros((numCells, 1))

for i in xrange(numCells):
    cell = block0.GetCell(i)
    numCellPts = cell.GetNumberOfPoints()
    for j in xrange(numCellPts):
        cellPtsArray[i, j, :] = block0.GetPoint(cell.GetPointId(j))
    valueArray[i] = block0.GetCellData().GetArray(3).GetValue(i)
    xyFacePts = cell.GetFaceArray(3)
    facePtsArray[i, :, :] = cellPtsArray[i, xyFacePts, :]
Now I wish to create a contour plot of this data (fill each cell in space according to an appropriate colormap of the scalar field variable). Is there a good built-in function in matplotlib to do this? Note that I cannot use any form of automatic triangulation: the connectivity of the mesh is already specified by facePtsArray, since the connections between the points of a cell have been ordered correctly (see my plot below).
Here is some test data:
import numpy as np
import matplotlib.pyplot as plt
# An example of the array containing the mesh information: In this case the
# dimensionality is (9,4,3) denoting 9 adjacent cells, each with 4 vertices and
# each vertex having (x,y,z) coordinates.
facePtsArray = np.asarray([[[0.0, 0.0, 0.0],
                            [1.0, 0.0, 0.0],
                            [1.0, 0.5, 0.0],
                            [0.0, 0.5, 0.0]],
                           [[0.0, 0.5, 0.0],
                            [1.0, 0.5, 0.0],
                            [1.0, 1.0, 0.0],
                            [0.0, 1.0, 0.0]],
                           [[0.0, 1.0, 0.0],
                            [1.0, 1.0, 0.0],
                            [1.0, 1.5, 0.0],
                            [0.0, 1.5, 0.0]],
                           [[1.0, 0.0, 0.0],
                            [2.0, -0.25, 0.0],
                            [2.0, 0.25, 0.0],
                            [1.0, 0.5, 0.0]],
                           [[1.0, 0.5, 0.0],
                            [2.0, 0.25, 0.0],
                            [2.0, 0.75, 0.0],
                            [1.0, 1.0, 0.0]],
                           [[1.0, 1.0, 0.0],
                            [2.0, 0.75, 0.0],
                            [2.0, 1.25, 0.0],
                            [1.0, 1.5, 0.0]],
                           [[2.0, -0.25, 0.0],
                            [2.5, -0.75, 0.0],
                            [2.5, -0.25, 0.0],
                            [2.0, 0.25, 0.0]],
                           [[2.0, 0.25, 0.0],
                            [2.5, -0.25, 0.0],
                            [2.5, 0.25, 0.0],
                            [2.0, 0.75, 0.0]],
                           [[2.0, 0.75, 0.0],
                            [2.5, 0.25, 0.0],
                            [2.5, 0.75, 0.0],
                            [2.0, 1.25, 0.0]]])
valueArray = np.random.rand(9)  # Scalar field values for each cell

plt.figure()
for i in xrange(9):
    plt.plot(facePtsArray[i, :, 0], facePtsArray[i, :, 1], 'ko-')
plt.show()
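One possible direction (a sketch of my own, not an answer from the original thread): matplotlib.collections.PolyCollection can fill each face with a color taken from a colormap, using the test data above:
import matplotlib.pyplot as plt
from matplotlib.collections import PolyCollection

# Each face is a closed polygon in the x-y plane, so drop the z coordinate
verts = facePtsArray[:, :, :2]

coll = PolyCollection(verts, edgecolors='k', cmap='viridis')
coll.set_array(valueArray)  # one scalar per polygon drives the colormap

fig, ax = plt.subplots()
ax.add_collection(coll)
ax.autoscale_view()
fig.colorbar(coll, ax=ax, label='scalar field')
plt.show()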

Python: adjust coordinates

I have this Python script which should adjust the coordinates of a set of triangular elements. The script should move the coordinates of each element's nodes towards the element's centre of gravity. The image below is a sketch I made of the problem.
However something is wrong in my script, and I can't figure out what. The coordinates are not changed in the correct direction and extra new coordinates are generated, while I only want to adjust the existing coordinates.
Does anyone know how to correctly program this in Python?
coords = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.0, 1.0], [1.0, 1.0], [0.0, 2.0], [0.0, 2.0], [1.0, 1.0], [1.0, 2.0], [1.0, 1.0], [2.0, 1.0], [1.0, 2.0], [1.0, 2.0], [2.0, 1.0], [2.0, 2.0], [1.0, 1.0], [2.0, 0.0], [2.0, 1.0], [1.0, 0.0], [2.0, 0.0], [1.0, 1.0]]
elems = [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11], [12, 13, 14], [15, 16, 17], [18, 19, 20], [21, 22, 23]]
#define vectors
def add_vectors(*points):
    new_x = 0.0
    new_y = 0.0
    for point in points:
        new_x += point[0]
        new_y += point[1]
    return [new_x, new_y]

def subtract_vectors(a, b):
    new_x = a[0] - b[0]
    new_y = a[1] - b[1]
    return [new_x, new_y]

def mul_by_scalar(vector, scalar):
    new_x = vector[0] * scalar
    new_y = vector[1] * scalar
    return [new_x, new_y]

#define triangles
triangles = []
for elem in elems:
    triangles += [coords[e] for e in elem]

#adjust coordinates
CM = mul_by_scalar(add_vectors(*triangles), 1.0/3)
point_to_CM_vectors = []
for point in triangles:
    point_to_CM_vectors.append(subtract_vectors(CM, point))

new_triangle = []
for elem in elems:
    for point, motion in zip(triangles, point_to_CM_vectors):
        new_triangle.append(add_vectors(point, mul_by_scalar(motion, 0.01)))

print 'triangles =', triangles
print 'new_triangle =', new_triangle
Thanks in advance for any help!
Here's a reworking of your problem using the vectorization operators provided by numpy.
import numpy as np
#define triangles
triangles = np.array([[coords[e] for e in elem] for elem in elems])
#find centroid of each triangle
CM = np.mean(triangles,axis=1)
#find vector from each point in triangle pointing towards centroid
point_to_CM_vectors = CM[:,np.newaxis] - triangles
#calculate similar triangles 1% smaller
new_triangle = triangles + 0.01*point_to_CM_vectors
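If the adjusted coordinates then need to go back into the flat coords layout, a one-liner works here (my own addition; it relies on elems numbering the 24 coords entries one-to-one, as in the data above):
# new_triangle has shape (n_elems, 3, 2); flattening restores the original node order
new_coords = new_triangle.reshape(-1, 2).tolist()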
