In Blender, my attempt (see the code below) to script a BEVEL on selected edges with the bevel tool (not the modifier) fails: all edges are beveled, selected and non-selected alike.
What I want :
What I get:
Question 1: why?
Question 2: Is there a way to apply the bevel by script without using context-dependent operators (bpy.ops)?
Thank you.
Code that does not work:
import bpy
import bmesh
import numpy as np

def selectEdgesAlongX(mesh):
    """Select all edges of an object directed along X"""
    bm = bmesh.new()
    bm.from_mesh(mesh)
    edges, verts = bm.edges, bm.verts
    verts.ensure_lookup_table()
    for edge in bm.edges:
        indices = (edge.verts[0].index, edge.verts[1].index)
        vs = [verts[index].co for index in indices]
        edge.select = abs((vs[1] - vs[0])[0]) > 1e-4
    bm.to_mesh(mesh)
    bm.free()

def printSelectedEdges(mesh):
    print([edge.select for edge in mesh.edges])
# box
dims = (depth, height, width) = (0.4, 0.6, 1)
thickness, radius = 0.05, 0.05
center = (0, 0, 0)
bpy.ops.mesh.primitive_cube_add(size=1, enter_editmode=False, align='WORLD',
                                location=center, scale=np.array(dims) / 2 - thickness)
interior = bpy.context.view_layer.objects.active
interior.name = 'interior'
printSelectedEdges(interior.data)
selectEdgesAlongX(interior.data)
printSelectedEdges(interior.data)
bpy.ops.object.editmode_toggle()
bpy.ops.mesh.bevel(offset=0.03, offset_pct=0, segments=10, affect='EDGES')
print outputs:
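As an aside, the axis test used in selectEdgesAlongX can be exercised without Blender at all; here is a minimal sketch with plain coordinate tuples (hypothetical data, not bmesh types):

```python
def is_along_x(v0, v1, tol=1e-4):
    """Mirror of the selection test above: an edge counts as 'along X'
    when its endpoints differ in X by more than tol. Note this also
    matches diagonal edges; for a cube, whose edges are axis-aligned,
    the two criteria coincide."""
    return abs(v1[0] - v0[0]) > tol
```

For a unit cube exactly the four edges running along X pass this test, which matches the selection printed above.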
I am trying to create a volume in Gmsh (using the Python API) by cutting some small cylinders out of a bigger one.
When I do that, I expect one surface for each cut region; instead, I get the result in the figure. I have highlighted in red the surfaces that give me the problem (some cut regions behave as expected). As you can see, instead of one surface I get two, and sometimes they aren't even equal.
gmsh creates more surfaces than expected:
So, my questions are:
Why does gmsh behave like this?
How can I fix it? I need predictable behavior.
Below is the code I used to generate the geometry.
The code requires some parameters, such as core_height, core_inner_radius and core_outer_radius, the number of small cylinders, and their radius.
gmsh.initialize(sys.argv)
#gmsh.initialize()
gmsh.clear()
gmsh.model.add("circle_extrusion")
inner_cyl_tag = 1
outer_cyl_tag = 2
inner_cyl = gmsh.model.occ.addCylinder(0,0,0, 0, 0, core_height, core_inner_radius, tag = inner_cyl_tag)
outer_cyl = gmsh.model.occ.addCylinder(0,0,0, 0, 0, core_height, core_outer_radius, tag = outer_cyl_tag)
core_tag = 3
cut1 = gmsh.model.occ.cut([(3,outer_cyl)],[(3,inner_cyl)], tag = core_tag)
#create a set of filled cylinders
#set position
angle_vector = np.linspace(0,2*np.pi,number_of_hp+1)
pos_x = hp_radial_position*np.cos(angle_vector)
pos_y = hp_radial_position*np.sin(angle_vector)
pos_z = 0.0
#cut one cylinder at the time and assign the new core tag
for ii in range(0,len(angle_vector)):
old_core_tag = core_tag
heat_pipe = gmsh.model.occ.addCylinder(pos_x[ii], pos_y[ii], pos_z, 0, 0, core_height,hp_outer_radius, tag =-1)
core_tag = heat_pipe+1
core = gmsh.model.occ.cut([(3,old_core_tag)],[(3,heat_pipe)], tag = core_tag)
gmsh.model.occ.synchronize()
#get volume entities and assign physical groups
volumes = gmsh.model.getEntities(dim=3)
solid_marker = 1
gmsh.model.addPhysicalGroup(volumes[0][0], [volumes[0][1]],solid_marker)
gmsh.model.setPhysicalName(volumes[0][0],solid_marker, "solid_volume")
#get surfaces entities and apply physical groups
surfaces = gmsh.model.getEntities(dim=2)
surface_markers= np.arange(1,len(surfaces)+1,1)
for ii in range(0,len(surfaces)):
gmsh.model.addPhysicalGroup(2,[surfaces[ii][1]],tag = surface_markers[ii])
#We finally generate and save the mesh:
gmsh.model.mesh.generate(3)
gmsh.model.mesh.refine()
gmsh.model.mesh.refine()
gmsh.option.setNumber("Mesh.MshFileVersion", 2.2) #save in ASCII 2 format
gmsh.write(mesh_name+".msh")
# Launch the GUI to see the results:
#if '-nopopup' not in sys.argv:
# gmsh.fltk.run()
gmsh.finalize()
I do not think that you have additional surfaces in the sense of gmsh.model.occ surfaces. To me it looks like your volume mesh is sticking out of your surface mesh, i.e. the volume and surface meshes do not fit together.
Here is what I did to check your case:
First I added the following lines at the beginning of your code to get a minimal working example:
import gmsh
import sys
import numpy as np
inner_cyl_tag = 1
outer_cyl_tag = 2
core_height = 1
core_inner_radius = 0.1
core_outer_radius = 0.2
number_of_hp = 5
hp_radial_position = 0.1
hp_outer_radius = 0.05
What I get with this code is the following:
To visualize it like this go to "Tools"-->"Options"-->"Mesh" and check "2D element faces", "3D element edges" and "3D element faces".
You can see that there are some purple triangles sticking out of the green/yellowish surface triangles of the inner surfaces.
You could visualize your case the same way and toggle the "3D element faces" checkbox a few times.
So here is the explanation for this behaviour (I did not know gmsh behaved like this myself): when you create your mesh and then refine it, the refinement is applied to the 2D surface mesh and the 3D volume mesh separately, which means those two meshes are no longer connected after the refinement. What I did next was to try what happens if you create the 2D mesh only, refine it, and then create the 3D mesh, i.e.:
replace:
gmsh.model.mesh.generate(3)
gmsh.model.mesh.refine()
gmsh.model.mesh.refine()
by:
gmsh.model.mesh.generate(2)
gmsh.model.mesh.refine()
gmsh.model.mesh.refine()
gmsh.model.mesh.generate(3)
The result then looks like this:
I hope that this was actually your problem. In the future, though, it would help if you could provide a minimal working example that we can copy-paste to reproduce the visualization in your image.
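One further detail worth checking in the question's setup (my observation, not part of the answer above): np.linspace(0, 2*np.pi, number_of_hp+1) includes both endpoints, so the first and last cylinder positions coincide and the loop cuts the same location twice. If number_of_hp evenly spaced, distinct positions are intended, endpoint=False avoids the duplicate:

```python
import numpy as np

number_of_hp = 5  # value assumed from the parameter list above

# as in the question: both endpoints included, so 0 and 2*pi map to the same (x, y)
angle_vector = np.linspace(0, 2 * np.pi, number_of_hp + 1)
assert np.allclose(
    [np.cos(angle_vector[0]), np.sin(angle_vector[0])],
    [np.cos(angle_vector[-1]), np.sin(angle_vector[-1])],
)

# evenly spaced, non-repeating alternative
angles = np.linspace(0, 2 * np.pi, number_of_hp, endpoint=False)
```

Whether the duplicate cut contributes to the extra surfaces I cannot say, but it is easy to rule out.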
I have a project to detect the distance between people. The project runs smoothly when the centroid and the plotted line are at the center of each person, but I want to move them to the people's feet. I have successfully moved the centroid; however, the plotted line does not move along with it. Here's the code:
utils.py (def distancing)
def distancing(people_coords, img, dist_thres_lim=(200, 250)):
    # Plot lines connecting people
    already_red = dict()  # dictionary to store if a plotted rectangle has already been labelled as high risk
    centers = []
    for i in people_coords:
        centers.append(((int(i[0])+int(i[2]))//2, (int(max(i[3]), (i[1])))))
    for j in centers:
        already_red[j] = 0
    x_combs = list(itertools.combinations(people_coords, 2))
    radius = 10
    thickness = 5
    for x in x_combs:
        xyxy1, xyxy2 = x[0], x[1]
        cntr1 = ((int(xyxy1[2])+int(xyxy1[0]))//2, (int(xyxy1[3])+int(xyxy1[1]))//2)
        cntr2 = ((int(xyxy2[2])+int(xyxy2[0]))//2, (int(xyxy2[3])+int(xyxy2[1]))//2)
        dist = ((cntr2[0]-cntr1[0])**2 + (cntr2[1]-cntr1[1])**2)**0.5
The problem is in the xy coordinate computed in the people_coords loop. I tried changing the code to (int(max(i[3]), (i[1]))), but when I run it I get an error (TypeError: iteration over a 0-d tensor). What should I do to make the plotted line move along with the centroid?
Here is the centroid code:
def plot_dots_on_people(x, img):
    # Plotting centers of people with a green dot.
    thickness = -1
    color = [0, 255, 0]  # green
    center = ((int(x[0]) + int(x[2])) // 2, int(max(x[3], x[1])))
    radius = 10
    cv2.circle(img, center, radius, color, thickness)
I hope someone can help me. Thank you.
I suggest printing the value of people_coords and running people_coords.size(). The error comes from iterating over a tensor with a size of torch.Size([]); an example of a tensor with that size is torch.tensor(5).
One way to solve this error is to make sure people_coords holds the value you expect it to, using the debugging steps illustrated above, or you can use unsqueeze to turn torch.tensor(x) into torch.tensor([x]) and thus make it iterable.
Sarthak Jain
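For reference, the feet-anchored geometry the question is after can be sketched framework-free, with plain (x1, y1, x2, y2) boxes instead of tensors (box values hypothetical, function names my own):

```python
def feet_center(box):
    """Bottom-centre of an (x1, y1, x2, y2) box: the horizontal midpoint
    paired with the larger (lower on screen) of the two y values."""
    x1, y1, x2, y2 = box
    return ((int(x1) + int(x2)) // 2, int(max(y1, y2)))

def feet_distance(box_a, box_b):
    """Euclidean distance between the feet points of two boxes."""
    (ax, ay), (bx, by) = feet_center(box_a), feet_center(box_b)
    return ((bx - ax) ** 2 + (by - ay) ** 2) ** 0.5
```

Using the same anchor in both plot_dots_on_people and distancing keeps the dot and the line endpoints together, e.g. feet_center((0, 0, 10, 20)) gives (5, 20) and feet_distance((0, 0, 10, 20), (30, 0, 40, 20)) gives 30.0.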
I'm trying to compute the normals of a mesh in VTK, one per vertex.
When I display them, the normals at green vertices seem to be shorter than those at white vertices.
I don't understand why I get this result.
To color the mesh I used this function:
def update_colors(self, couleurs=None, Vertex_correspondance=None):
    Colors = vtk.vtkUnsignedCharArray()
    Colors.SetNumberOfComponents(3)
    Colors.SetName("Colors")
    for i in range(self.points.shape[0]):
        if i not in Vertex_correspondance:
            Colors.InsertNextTuple3(255, 255, 255)
        else:
            Colors.InsertNextTuple3(couleurs[0], couleurs[1], couleurs[2])
    self.GetOutput.GetPointData().SetScalars(Colors)
    self.GetOutput.Modified()
So I assign a green color to some vertices of the mesh (30 vertices).
To compute the normals, I use :
poly_data = self.actor.GetMapper().GetInput()
normalsCalc = vtk.vtkPolyDataNormals()
normalsCalc.SetInputData(poly_data)
normalsCalc.ComputePointNormalsOn()
normalsCalc.ComputeCellNormalsOff()
normalsCalc.SplittingOff()
normalsCalc.FlipNormalsOff()
normalsCalc.ConsistencyOn()
normalsCalc.AutoOrientNormalsOn()
normalsCalc.Update()
arrowSource = vtk.vtkArrowSource()
glyph3D = vtk.vtkGlyph3D()
glyph3D.SetSourceConnection(arrowSource.GetOutputPort())
glyph3D.SetVectorModeToUseNormal()
glyph3D.SetInputData(normalsCalc.GetOutput())
glyph3D.SetScaleFactor(0.02)
glyph3D.OrientOn()
glyph3D.Update()
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(glyph3D.GetOutputPort())
self.glyphActor = vtk.vtkActor()
self.glyphActor.SetMapper(mapper)
self.glyphActor.GetProperty().SetColor([0, 0, 1])
here is the display I get
Also, if I compute the normal lengths afterwards with
normals = []
array = normalsCalc.GetOutput().GetPointData().GetNormals()
for i in range(array.GetNumberOfTuples()):
    normals.append(array.GetTuple(i))
self.Normals = np.array(normals)
np.linalg.norm(self.Normals, axis=1)
I get numbers really close to 1, so the normals seem to have been computed correctly...
Maybe you need to use SetScaleModeToDataScalingOff(). By default vtkGlyph3D scales each glyph by the input scalars (here your color array), which is presumably why white and green vertices get arrows of different lengths. This seems to work:
from vedo import Ellipsoid, show
import vtk
s = Ellipsoid().computeNormals()
arr = s.points()[:,2]
s.cmap('jet_r', arr)
arrowSource = vtk.vtkArrowSource()
glyph3D = vtk.vtkGlyph3D()
glyph3D.SetSourceConnection(arrowSource.GetOutputPort())
glyph3D.SetVectorModeToUseNormal()
glyph3D.SetInputData(s.polydata())
glyph3D.SetScaleFactor(0.2)
glyph3D.OrientOn()
glyph3D.SetScaleModeToDataScalingOff() ###### <--
glyph3D.Update()
show(s, glyph3D, axes=1)
I'm looking for a way, in MEL or Python, to determine whether a particular UV shell spans multiple texture tiles in Maya. I know Mudbox has a check for this, but I can't find an easy command for it in Maya's interface or scripting.
One approach I thought of is to compare the shells in each tile and see if any shell appears in more than one tile.
I found a page here someone figured out how to select each UV in a given set of texture tiles, but it operates on UVs, not shells. Here's the page:
http://forums.cgsociety.org/showthread.php?t=1123235
Thanks,
Mike
There's no one-liner for it.
Here's an example of a method for getting the UV bounding boxes of the shells in Python. It uses the API to get bounding boxes for all the UV shells in an object (note: it should be a mesh shape object; I didn't add any checks).
import maya.api.OpenMaya as api
from math import floor

def get_shells(obj, uvset=''):
    selected = api.MGlobal.getSelectionListByName(obj)
    node = selected.getDependNode(0)
    mesh = api.MFnMesh(node)
    howmany, shell_ids = mesh.getUvShellsIds(uvset)
    u_list, v_list = mesh.getUVs()
    shells = []
    for shell_num in range(howmany):
        umin = vmin = 9999999999999
        umax = vmax = -9999999999999
        for idx, value in enumerate(shell_ids):
            if value == shell_num:
                umin = min(umin, u_list[idx])
                umax = max(umax, u_list[idx])
                vmin = min(vmin, v_list[idx])
                vmax = max(vmax, v_list[idx])
        shells.append((umin, vmin, umax, vmax))
    return shells

print(get_shells('pCylinderShape1'))
# [(0.3437499701976776, -7.450580596923828e-08, 0.65625, 0.3125), (0.375, 0.3125, 0.6249997615814209, 0.6884398460388184), (0.3437499701976776, 0.6874999403953552, 0.65625, 1.0)]
The result is a list of 4-item tuples, the bounding boxes of the UV shells in the object stored as (u min, v min, u max, v max). To catch a tile boundary, check whether the minimum and maximum corners of the box fall in the same integer tile:
for shell in get_shells('pCylinderShape1'):
    mintile = int(shell[0]), int(shell[1])
    maxtile = int(shell[2]), int(shell[3])
    if mintile != maxtile:
        print("shell crosses UV boundary", shell)
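One caveat worth adding (my note, not part of the answer above): int() truncates toward zero, so a shell whose minimum dips just below zero, like the -7.45e-08 in the sample output, falls in tile 0 under int() but in tile -1 under math.floor(), and floor is what matches UDIM-style tile numbering for negative coordinates. In practice you may also want to snap values within a small epsilon before flooring, since that -7e-08 is almost certainly float noise. A standalone sketch:

```python
from math import floor

def tile_of(u, v, eps=0.0):
    """UV tile containing a coordinate. floor() handles negative values
    correctly, unlike int(), which truncates toward zero. A small eps
    snaps near-integer float noise (e.g. -7e-08) back onto the integer."""
    def snap(x):
        return round(x) if abs(x - round(x)) < eps else x
    return (floor(snap(u)), floor(snap(v)))

def crosses_tiles(bbox, eps=0.0):
    """True if a (umin, vmin, umax, vmax) shell bounding box spans tiles."""
    umin, vmin, umax, vmax = bbox
    return tile_of(umin, vmin, eps) != tile_of(umax, vmax, eps)
```

With eps=0, crosses_tiles((-7.45e-08, 0.0, 0.65, 0.9)) reports a crossing; with eps=1e-6 the noise is snapped away and it does not.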
Hi, I am trying to map a texture to a 3D mesh using Mayavi and the Python bindings of VTK. I am visualising a Wavefront .obj file; the obj is a 3D photograph of a face. The texture image is a composite of three 2D photographs.
Each node in the mesh has a (u, v) co-ordinate in the image, which defines its color. Different regions of the mesh draw their colours from different sections of the image. To illustrate this I have replaced the actual texture image with this one:
And mapped this to the mesh instead.
The problem I am having is illustrated around the nose. At the border between red and green there is an outline of blue. Closer inspection of this region in wireframe mode shows that it is not a problem with the uv mapping, but with how VTK interpolates colour between two nodes. For some reason it adds a patch of blue between two nodes where one is red and one is green.
This causes serious problems when visualising using the real texture
Is there a way to force vtk to choose the colour of one or the other neighbouring nodes for the colour between them? I tried turning "edge-clamping" on, but this did not achieve anything.
The code that I am using is below, and you can access the files in question here: https://www.dropbox.com/sh/ipel0avsdiokr10/AADmUn1-qmsB3vX7BZObrASPa?dl=0
I hope this has a simple solution.
from numpy import *
from mayavi import mlab
from tvtk.api import tvtk
import os
from vtk.util import numpy_support
def obj2array(f):
    """function for reading a Wavefront obj"""
    if type(f) == str:
        if os.path.isfile(f) == False:
            raise ValueError('obj2array: unable to locate file ' + str(f))
        f = open(f)
    vertices = list()
    connectivity = list()
    uv = list()
    vt = list()
    fcount = 0
    for l in f:
        line = l.rstrip('\n')
        data = line.split()
        if len(data) == 0:
            pass
        else:
            if data[0] == 'v':
                vertices.append(atleast_2d(array([float(item) for item in data[1:4]])))
            elif data[0] == 'vt':
                uv.append(atleast_2d(array([float(item) for item in data[1:3]])))
            elif data[0] == 'f':
                nverts = len(data) - 1  # number of vertices comprising each face
                if fcount == 0:  # on first face establish face format
                    fcount = fcount + 1
                    if data[1].find('/') == -1:  # Case 1: v
                        case = 1
                    elif '//' in data[1]:  # Case 4: v//vn (the original `.find('//') == True` only matched at index 1)
                        case = 4
                    elif len(data[1].split('/')) == 2:  # Case 2: v/vt
                        case = 2
                    elif len(data[1].split('/')) == 3:  # Case 3: v/vt/vn
                        case = 3
                if case == 1:
                    f = atleast_2d([int(item) for item in data[1:len(data)]])
                    connectivity.append(f)
                if case == 2:
                    splitdata = [item.split('/') for item in data[1:len(data)]]
                    f = atleast_2d([int(item[0]) for item in splitdata])
                    connectivity.append(f)
                if case == 3:
                    splitdata = [item.split('/') for item in data[1:len(data)]]
                    f = atleast_2d([int(item[0]) for item in splitdata])
                    connectivity.append(f)
                if case == 4:
                    splitdata = [item.split('//') for item in data[1:len(data)]]
                    f = atleast_2d([int(item[0]) for item in splitdata])
                    connectivity.append(f)
    vertices = concatenate(vertices, axis=0)
    if len(uv) == 0:
        uv = None
    else:
        uv = concatenate(uv, axis=0)
    if len(connectivity) != 0:
        try:
            conarray = concatenate(connectivity, axis=0)
        except ValueError:
            # NB: `triangulate` / `triangulate_mesh` are not defined in the snippet as posted
            if triangulate == True:
                conarray = triangulate_mesh(connectivity, vertices)
            else:
                raise ValueError('obj2array: not all faces triangles?')
        if conarray.shape[1] == 4:
            if triangulate == True:
                conarray = triangulate_mesh(connectivity, vertices)
    return vertices, conarray, uv
# load texture image
texture_img = tvtk.Texture(interpolate=1, edge_clamp=1)
texture_img.input = tvtk.BMPReader(file_name='HM_1_repose.bmp').output
# load obj
verts, triangles, uv = obj2array('HM_1_repose.obj')
# make 0-indexed
triangles = triangles - 1
surf = mlab.triangular_mesh(verts[:, 0], verts[:, 1], verts[:, 2], triangles)
tc = numpy_support.numpy_to_vtk(uv)
pd = surf.mlab_source.dataset._vtk_obj.GetPointData()
pd.SetTCoords(tc)
surf.actor.actor.mapper.scalar_visibility = False
surf.actor.enable_texture = True
surf.actor.actor.texture = texture_img
mlab.show(stop=True)
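As an aside on the obj2array parser above: the four OBJ face-token formats it distinguishes (v, v/vt, v/vt/vn, v//vn) can be tested in isolation. A standalone sketch of that classification (function name my own):

```python
def face_case(token):
    """Classify the first face token of an OBJ 'f' line, mirroring the
    parser's cases: 1 = v, 2 = v/vt, 3 = v/vt/vn, 4 = v//vn."""
    if '/' not in token:
        return 1
    if '//' in token:
        return 4
    return 2 if len(token.split('/')) == 2 else 3
```

For example, face_case('12') is 1, face_case('12/5') is 2, face_case('12/5/7') is 3, and face_case('12//7') is 4.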
You can turn off all interpolation (change interpolate=1 to interpolate=0 in your example), but there is no way to turn off interpolation only at the places where it would interpolate across sub-images of the texture, at least not without writing your own fragment shader. The result will likely look crude.
Another solution would be to create 3 texture images with transparent texels at each location that is not part of the actor's face. Then render the same geometry with the same texture coordinates but a different image each time (i.e., have 3 actors each with the same polydata but a different texture image).
I just ran into this exact problem as well and found that it happens because VTK assumes a 1-to-1 relationship between points in the polydata and uv coordinates when rendering the actor and the associated vtkTexture. However, in my case and the OP's, there are neighboring triangles that are mapped to different sections of the image, so they have very different uv coordinates. A point shared by such neighboring faces can only have one uv coordinate (Tcoord) associated with it, but it actually needs 2 (or more, depending on your case).
My solution was to loop through and duplicate the points that lie on the seams/borders and create a new vtkCellArray of triangles with these duplicated point ids. Then I simply replaced the vtkPolyData Polys() list with the new triangles. It would have been much easier to duplicate the points and update the existing point ids for each of the triangles that needed it, but I couldn't find a way to update the cells properly.
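A minimal numpy sketch of that duplication step, under an assumed data layout where each face stores both point indices and uv indices per corner (names and layout are mine, not VTK's):

```python
import numpy as np

def split_seam_points(points, uvs, face_pts, face_uvs):
    """Duplicate any point referenced with more than one uv coordinate,
    so every output point carries exactly one position and one (u, v).
    face_pts / face_uvs are (n_faces, 3) index arrays into points / uvs."""
    pair_to_new = {}          # (point_id, uv_id) -> new point id
    new_points, new_uvs = [], []
    new_faces = np.empty_like(face_pts)
    for f in range(len(face_pts)):
        for c in range(face_pts.shape[1]):
            key = (face_pts[f, c], face_uvs[f, c])
            if key not in pair_to_new:
                pair_to_new[key] = len(new_points)
                new_points.append(points[key[0]])
                new_uvs.append(uvs[key[1]])
            new_faces[f, c] = pair_to_new[key]
    return np.asarray(new_points), np.asarray(new_uvs), new_faces
```

Two triangles sharing an edge but mapped to different image regions end up with the shared points duplicated, so each copy keeps its own texture coordinate and VTK no longer interpolates across the seam.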