I am using Blender 2.8. I want to import an object into Blender that is made up of a few pieces that aren't connected, split the object up, and export only the largest of the pieces.
So let's say there are three pieces in one object: one big and two small. I'm able to turn this object into three objects, each containing one of the pieces. I would like to delete the two smaller objects and keep only the largest one. I'm thinking I could somehow find the surface area of the three objects and keep the largest while deleting all the others? I'm pretty new at Blender.
import bpy

context = bpy.context

bpy.ops.import_mesh.stl(filepath='path/of/file.stl')
bpy.ops.mesh.separate(type='LOOSE')
amount_of_pieces = len(context.selected_objects)
if amount_of_pieces > 1:
    highest_surface_area = 0
    # the rest is pseudocode
    for object in scene:
        if object.area > highest_surface_area:
            highest_surface_area = object.area
        else:
            bpy.ops.object.delete()
bpy.ops.export_mesh.stl(filepath='path/of/new/file.stl')
The steps would be:
import file
break into multiple objects
for safety, get a list of mesh objects
list the surface area of each object
get the max from the list of areas
delete the not biggest objects
export the largest
cleanup
We don't need bmesh to get the surface area; the regular mesh data includes polygon.area.
Using list comprehensions, we can get most steps down to one line each.
import bpy

# import and separate
file = (r'path/of/file.stl')
bpy.ops.import_mesh.stl(filepath=file)
bpy.ops.mesh.separate(type='LOOSE')

# list of mesh objects
mesh_objs = [o for o in bpy.context.scene.objects
             if o.type == 'MESH']

# dict with surface area of each object
obj_areas = {o: sum(f.area for f in o.data.polygons)
             for o in mesh_objs}

# which is biggest
big_obj = max(obj_areas, key=obj_areas.get)

# select everything except the biggest, then delete
for o in mesh_objs:
    o.select_set(o is not big_obj)
bpy.ops.object.delete(use_global=False, confirm=False)

# export
bpy.ops.export_mesh.stl(filepath='path/of/new/file.stl')

# cleanup
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete(use_global=False, confirm=False)
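The one-liner that picks the biggest piece relies on how max treats a dict: iterating over a dict yields its keys, and key=obj_areas.get ranks each key by its stored value. A plain-Python illustration with made-up object names and areas:

```python
# hypothetical surface areas keyed by object name
obj_areas = {'Part': 12.5, 'Part.001': 0.8, 'Part.002': 3.1}

# max() iterates over the dict's keys; key=obj_areas.get compares their values
big_obj = max(obj_areas, key=obj_areas.get)
print(big_obj)  # -> Part
```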
I was able to write code that works for this; however, it is very long and chaotic. I would appreciate any advice on cleaning it up.
import bpy
import os
import bmesh

context = bpy.context
file = (r'path\to\file.stl')
bpy.ops.import_mesh.stl(filepath=file)
fileName = os.path.basename(file)[:-4].capitalize()
bpy.ops.mesh.separate(type='LOOSE')
bpy.ops.object.select_all(action='SELECT')
piece = len(context.selected_objects)
bpy.ops.object.select_all(action='DESELECT')
high = 0
if piece > 1:
    bpy.data.objects[fileName].select_set(True)
    obj = bpy.context.active_object
    bm = bmesh.new()
    bm.from_mesh(obj.data)
    area = sum(f.calc_area() for f in bm.faces)
    high = area
    bm.free()
    bpy.ops.object.select_all(action='DESELECT')
    for x in range(1, piece):
        name = fileName + '.00' + str(x)
        object = bpy.data.objects[name]
        context.view_layer.objects.active = object
        bpy.data.objects[name].select_set(True)
        obj = bpy.context.active_object
        bm = bmesh.new()
        bm.from_mesh(obj.data)
        newArea = sum(f.calc_area() for f in bm.faces)
        bm.free()
        if newArea > high:
            high = newArea
            bpy.ops.object.select_all(action='DESELECT')
        else:
            bpy.ops.object.delete()
    bpy.ops.object.select_all(action='DESELECT')
    if area != high:
        bpy.data.objects[fileName].select_set(True)
        bpy.ops.object.delete()
bpy.ops.export_mesh.stl(filepath='path/to/export/file.stl')
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete(use_global=False, confirm=False)
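One quick cleanup: the bmesh area computation appears twice, so it can be pulled into a helper. This is a sketch assuming the same Blender 2.8 bmesh API as the code above; surface_area is a name introduced here, and the import guard only lets the snippet parse outside Blender:

```python
try:
    import bmesh  # only available inside Blender
except ImportError:
    bmesh = None

def surface_area(obj):
    """Total face area of a mesh object, computed via a throwaway bmesh."""
    bm = bmesh.new()
    bm.from_mesh(obj.data)
    area = sum(f.calc_area() for f in bm.faces)
    bm.free()
    return area
```

With this, each of the two duplicated blocks becomes a single surface_area(obj) call, and the comparison loop shrinks to a few lines.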
I have been trying for days to assign new materials and random colors to meshes that already exist in a Blender scene.
I need to do it in Python, but I cannot find a solution.
I would like each mesh to have a different material and a different color.
I have found two scripts that are very useful, but I can't manage to combine them; can anyone help, please?
Thank you in advance
Script 1) assign a new material to all the meshes in a scene (but color not assigned):
import bpy

bpy.ops.object.select_all(action='SELECT')

# I separated the creation of nodes to a function so it's easier
# to edit later if needed
def create_nodes(mat):
    mat.use_nodes = True
    nodes = mat.node_tree.nodes
    for every_node in nodes:  # this removes all existing nodes
        nodes.remove(every_node)
    # creating Principled node and moving it:
    node = nodes.new('ShaderNodeBsdfPrincipled')
    node.location = (-190, 100)
    # creating Output node and moving it:
    output_node = nodes.new('ShaderNodeOutputMaterial')
    output_node.location = (40, 100)
    # creating the link between the two nodes:
    links = mat.node_tree.links
    link = links.new(node.outputs[0], output_node.inputs[0])

# this saves the currently active object so it can be restored later
active = bpy.context.object

# let's loop through all selected objects
for every_object in bpy.context.selected_objects:
    # I only want to work with objects capable of having a material
    if every_object.type in {'MESH', 'CURVE', 'SURFACE', 'META', 'FONT'}:
        if every_object.name not in bpy.data.materials:
            # if there is no material named after the object yet let's make one
            mat = bpy.data.materials.new(every_object.name)
            # and let's create the nodes for it
            create_nodes(mat)
        else:
            # if the material already exists let's just use it
            mat = bpy.data.materials.get(every_object.name)
        if len(every_object.material_slots) == 0:  # if there are no material slots
            every_object.data.materials.append(mat)
        # The only thing left now is to assign the material to
        # all material slots. We probably do not want to lose the info
        # about how the object is divided into separate materials
        for every_slot in every_object.material_slots:
            every_slot.material = mat
Script 2) assign a new material and a random color to an existing object in the scene (but works only for 1 object):
import bpy, random

ob = bpy.data.objects.get("Cube")
if ob != None:
    # Create materials.
    mat_one = bpy.data.materials.get("mat_one")
    if mat_one == None:
        mat_one = bpy.data.materials.new("mat_one")
    mat_one.diffuse_color = (random.random(), random.random(), random.random(), random.random())
    # Add materials to slots.
    if len(ob.material_slots) != 1:
        ob.data.materials.append(mat_one)
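Here is a possible way to combine the two scripts, as a sketch rather than a finished tool: it assumes a Blender 2.8x API, relies on use_nodes creating the default node named 'Principled BSDF' (true for an English UI), and keeps alpha at 1.0 instead of randomizing it as Script 2 does. assign_random_materials and random_rgba are names introduced here, and the try/except only lets the snippet parse outside Blender:

```python
import random

try:
    import bpy  # only available inside Blender
except ImportError:
    bpy = None

def random_rgba(alpha=1.0):
    """Random RGB color; alpha stays opaque by default."""
    return (random.random(), random.random(), random.random(), alpha)

def assign_random_materials():
    """Give every material-capable object its own material with a random color."""
    for obj in bpy.context.scene.objects:
        if obj.type not in {'MESH', 'CURVE', 'SURFACE', 'META', 'FONT'}:
            continue
        # reuse a material named after the object, or create one (as in Script 1)
        mat = bpy.data.materials.get(obj.name) or bpy.data.materials.new(obj.name)
        mat.use_nodes = True  # creates a default Principled BSDF + Output pair
        color = random_rgba()
        mat.diffuse_color = color  # viewport display color (as in Script 2)
        bsdf = mat.node_tree.nodes.get('Principled BSDF')
        if bsdf is not None:
            bsdf.inputs['Base Color'].default_value = color  # rendered color
        if not obj.material_slots:
            obj.data.materials.append(mat)
        for slot in obj.material_slots:
            slot.material = mat

if bpy is not None:  # run only inside Blender
    assign_random_materials()
```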
I'm trying to get up to speed on Maya Python, so I have been reading this book (http://www.maya-python.com/); there is an online PDF of it. I'm on a section (The For Statement) where I'm not getting the correct result, but I am also not getting any errors. If anyone could look at this and give me an idea of what's causing the problem, that'd be great.
It creates the 3 file nodes fine, and it is supposed to rename all 3 nodes with the prefix 'dirt_', but it only renames 'file1' and not the other two nodes.
Here is the process:
#The FOR statement
import maya.cmds;

def process_all_textures(**kwargs):
    pre = kwargs.setdefault('prefix', 'my_');
    textures = kwargs.setdefault('texture_nodes');
    new_texture_names = [];
    for texture in textures:
        new_texture_names.append(
            maya.cmds.rename(
                texture,
                '%s%s'%(pre, texture)
            )
        );
        return new_texture_names;

#create new Maya scene & list 3 file nodes & print their names
maya.cmds.file(new=True, f=True);
textures = [];
for i in range(3):
    textures.append(
        maya.cmds.shadingNode(
            'file',
            asTexture=True
        )
    );
print(textures);

#pass new texture list to process_all_textures() func and print resulting names
new_textures = process_all_textures(
    texture_nodes = textures,
    prefix = 'dirt_'
);
print(new_textures);
[u'file1', u'file2', u'file3']
[u'dirt_file1']
The line return new_texture_names must be indented by four spaces, not eight: it belongs at the function-body level, outside the for loop.
As written, the return statement runs on the first pass through the loop and immediately ends the function, so only the first texture gets renamed.
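The effect is easy to reproduce in plain Python, independent of Maya:

```python
def rename_all_buggy(textures):
    new_names = []
    for texture in textures:
        new_names.append('dirt_' + texture)
        return new_names  # inside the loop: ends the function on iteration one

def rename_all_fixed(textures):
    new_names = []
    for texture in textures:
        new_names.append('dirt_' + texture)
    return new_names  # after the loop: every item gets processed

print(rename_all_buggy(['file1', 'file2', 'file3']))  # -> ['dirt_file1']
print(rename_all_fixed(['file1', 'file2', 'file3']))  # -> ['dirt_file1', 'dirt_file2', 'dirt_file3']
```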
#The FOR statement
import maya.cmds as mc

def process_all_textures(**kwargs):
    pre = kwargs.setdefault('prefix', 'my_')
    textures = kwargs.setdefault('texture_nodes')
    new_texture_names = []
    for texture in textures:
        new_texture_names.append(mc.rename(texture, '%s%s' % (pre, texture)))
    return new_texture_names

#create new Maya scene & list 3 file nodes & print their names
mc.file(new=True, f=True)
textures = []
for i in range(3):
    textures.append(mc.shadingNode('file', asTexture=True))
print(textures)

#pass new texture list to process_all_textures() func and print resulting names
new_textures = process_all_textures(texture_nodes=textures, prefix='dirt_')
print(new_textures)
[u'file1', u'file2', u'file3']
[u'dirt_file1', u'dirt_file2', u'dirt_file3']
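As a side note, kwargs.setdefault('texture_nodes') returns None when that argument is omitted, so the for loop would raise a TypeError; explicit keyword parameters make both the intent and the failure mode clearer. A plain-Python sketch without the Maya calls (the names here are illustrative):

```python
def process_all_textures(texture_nodes=(), prefix='my_'):
    # mc.rename is Maya-specific; here we just build the new names
    return [prefix + texture for texture in texture_nodes]

print(process_all_textures(texture_nodes=['file1', 'file2'], prefix='dirt_'))
# -> ['dirt_file1', 'dirt_file2']
print(process_all_textures())  # omitted argument degrades to an empty list
# -> []
```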
I am using code I found and slightly modified for my purposes. The problem is that it is not doing exactly what I want, and I am stuck on what to change to fix it.
I am searching for all neighbouring polygons that share a common border (a line), not just a single point.
My goal: 135/12 is a neighbour of 319/2, 135/4 and 317, but not of 320/1.
What I get in my QGIS table after I run my script:
NEIGHBORS are the neighbouring polygons,
SUM is the number of neighbouring polygons.
The code I use also includes 320/1 as a neighbouring polygon. How do I fix it?
from qgis.utils import iface
from PyQt4.QtCore import QVariant

_NAME_FIELD = 'Nr'
_SUM_FIELD = 'calc'
_NEW_NEIGHBORS_FIELD = 'NEIGHBORS'
_NEW_SUM_FIELD = 'SUM'

layer = iface.activeLayer()
layer.startEditing()
layer.dataProvider().addAttributes(
    [QgsField(_NEW_NEIGHBORS_FIELD, QVariant.String),
     QgsField(_NEW_SUM_FIELD, QVariant.Int)])
layer.updateFields()

feature_dict = {f.id(): f for f in layer.getFeatures()}

index = QgsSpatialIndex()
for f in feature_dict.values():
    index.insertFeature(f)

for f in feature_dict.values():
    print 'Working on %s' % f[_NAME_FIELD]
    geom = f.geometry()
    intersecting_ids = index.intersects(geom.boundingBox())

    neighbors = []
    neighbors_sum = 0
    for intersecting_id in intersecting_ids:
        intersecting_f = feature_dict[intersecting_id]
        if (f != intersecting_f and
                not intersecting_f.geometry().disjoint(geom)):
            neighbors.append(intersecting_f[_NAME_FIELD])
            neighbors_sum += intersecting_f[_SUM_FIELD]
    f[_NEW_NEIGHBORS_FIELD] = ','.join(neighbors)
    f[_NEW_SUM_FIELD] = neighbors_sum
    layer.updateFeature(f)

layer.commitChanges()
print 'Processing complete.'
I have found somewhat of a workaround. Before running the script, I create a small buffer (0.01 m was enough for my purposes) around all joints. Then I use the Difference tool to remove the buffer areas from my main layer, which removes the unwanted point-contact neighbours. The code then works fine.
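The distinction the script needs is "shares a border line" versus "merely touches at a point". In QGIS itself that could be tested geometrically, for example by checking that intersecting_f.geometry().intersection(geom) has nonzero length instead of using disjoint() (an untested alternative to the buffer workaround). For polygons whose common borders are whole edges, the idea can be sketched in plain Python with made-up unit squares:

```python
def edges(polygon):
    """Polygon as an ordered list of (x, y) vertices; returns its edges,
    normalised with frozenset so edge direction doesn't matter."""
    return {frozenset((polygon[i], polygon[(i + 1) % len(polygon)]))
            for i in range(len(polygon))}

def shares_edge(a, b):
    """True when two polygons have a whole edge in common."""
    return bool(edges(a) & edges(b))

# two unit squares side by side share the edge (1,0)-(1,1)...
left   = [(0, 0), (1, 0), (1, 1), (0, 1)]
right  = [(1, 0), (2, 0), (2, 1), (1, 1)]
# ...while this square touches `left` only at the corner point (1, 1)
corner = [(1, 1), (2, 1), (2, 2), (1, 2)]

print(shares_edge(left, right))   # -> True  (line contact: a neighbour)
print(shares_edge(left, corner))  # -> False (point contact: not a neighbour)
```

This simplification assumes neighbours meet along complete edges; real parcels with partially overlapping borders need the geometric intersection-length test instead.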
I am trying to write a script to remove cells from a part in ABAQUS if the cell volume is smaller than a given value.
Is there a simple command to delete a cell?
This is what I have tried:
# Keeps cells bigger than a certain minimum value 'paramVol': paramVol = volCell/part_volume_r
cellsVolume = []
pfacesInter_clean = []
allCells = pInterName.cells
mask_r = pInter.cells.getMask();
cellobj_sequence_r = pInter.cells.getSequenceFromMask(mask=mask_r);
part_volume_r = pInterName.getVolume(cells=cellobj_sequence_r);
volume_sliver = 0

# get faces
for i in range(0, len(allCells)):
    volCell = allCells[i].getSize()
    cellsVolume.append(volCell)
    paramVol = volCell / part_volume_r
    print 'paramVol= ' + str(paramVol)
    if paramVol < 0.01:
        print 'sliver Volume'
        #session.viewports['Viewport: 1'].setColor(initialColor='#FF0000') #-->RED
        faces = allCells[i].getFaces()
        highlight(allCells[i].getFaces())
        #pfacesInter_clean = [x for i, x in enumerate(pfacesInter) if i not in faces]
        volume_sliver += volCell
    else:
        print 'Not a sliver Volume'
Thanks!
How about this, assuming pInter is a Part object:
pInter.RemoveFaces(faceList=[pInter.faces[j] for j in pInter.cells[i].getFaces()])
Update: once the common face of two cells is deleted, both cells cease to exist. Therefore, we need to do a little workaround:
faces_preserved = # List of faces that belong to cells with 'big' volume.
for cell in pInter.cells:
    pInter.RemoveFaces(faceList=[face for face in pInter.faces if \
                                 face not in faces_preserved])
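The volume filter itself is ordinary Python and can be checked outside ABAQUS; only getSize(), getVolume() and RemoveFaces() are ABAQUS-specific. A sketch with hypothetical cell volumes (sliver_indices is a name introduced here):

```python
def sliver_indices(volumes, threshold=0.01):
    """Indices of cells whose share of the total volume is below threshold."""
    total = sum(volumes)
    return [i for i, v in enumerate(volumes) if v / total < threshold]

# only the 0.05 cell is under 1% of the total volume (15.05)
print(sliver_indices([10.0, 0.05, 5.0]))  # -> [1]
```

The indices returned would then drive the ABAQUS-side deletion of the matching cells.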
I am extracting 150 different cell values from 350,000 ASCII raster files (20 kB each). My current code is fine for processing the 150 cell values from hundreds of the ASCII files; however, it is very slow when run on the full data set.
I am still learning Python, so are there any obvious inefficiencies, or suggestions to improve the code below?
I have tried closing the 'dat' file in the 2nd function; no improvement:
dat = None
First: I have a function which returns the row and column locations from a Cartesian grid.
def world2Pixel(gt, x, y):
    ulX = gt[0]
    ulY = gt[3]
    xDist = gt[1]
    yDist = gt[5]
    rtnX = gt[2]
    rtnY = gt[4]
    pixel = int((x - ulX) / xDist)
    line = int((ulY - y) / xDist)
    return (pixel, line)
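As a quick sanity check of the mapping, here is a self-contained copy of the function with a hypothetical geotransform in GDAL's order (origin X, pixel width, row rotation, origin Y, column rotation, pixel height):

```python
def world2Pixel(gt, x, y):
    ulX = gt[0]
    ulY = gt[3]
    xDist = gt[1]
    pixel = int((x - ulX) / xDist)
    line = int((ulY - y) / xDist)  # note: uses xDist, i.e. assumes square pixels
    return (pixel, line)

# hypothetical geotransform: origin (1000, 2000), 10 m square pixels
gt = (1000.0, 10.0, 0.0, 2000.0, 0.0, -10.0)
print(world2Pixel(gt, 1055.0, 1985.0))  # -> (5, 1)
```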
Second: a function to which I pass lists of 150 'id', 'x' and 'y' values in a for loop. The first function is called within it and used to extract the cell value, which is appended to a new list. I also have a list of files, 'asc_list', and corresponding times in 'date_list'. Please ignore count/enumerate, as I use these later; unless they are impeding efficiency.
def asc2series(id, x, y):
    #count = 1
    ls_id = []
    ls_p = []
    ls_d = []
    for n, (asc, date) in enumerate(zip(asc, date_list)):
        dat = gdal.Open(asc_list)
        gt = dat.GetGeoTransform()
        pixel, line = world2Pixel(gt, east, nort)
        band = dat.GetRasterBand(1)
        #dat = None
        value = band.ReadAsArray(pixel, line, 1, 1)[0, 0]
        ls_id.append(id)
        ls_p.append(value)
        ls_d.append(date)
Many thanks
In world2Pixel you are setting rtnX and rtnY, which you don't use.
You probably meant gdal.Open(asc) -- not asc_list.
You could move gt = dat.GetGeoTransform() out of the loop. (On rereading, you can't really, since each file has its own geotransform.)
You could cache calls to world2Pixel.
You're opening a dat file for each pixel -- you should probably turn the logic around: open each file once and look up all the pixels mapped to that file.
Benchmark and profile; the links in this podcast show how: http://talkpython.fm/episodes/show/28/making-python-fast-profiling-python-code
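The "cache calls to world2Pixel" suggestion can be done with the standard library, since the geotransform tuple and the coordinates are all hashable. A sketch with made-up numbers:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def world2Pixel(gt, x, y):
    # same arithmetic as the question's function, minus the unused fields
    ulX, xDist, ulY = gt[0], gt[1], gt[3]
    return (int((x - ulX) / xDist), int((ulY - y) / xDist))

gt = (1000.0, 10.0, 0.0, 2000.0, 0.0, -10.0)
world2Pixel(gt, 1055.0, 1985.0)  # computed
world2Pixel(gt, 1055.0, 1985.0)  # served from the cache
print(world2Pixel.cache_info().hits)  # -> 1
```

This only pays off when the same (gt, x, y) triples recur across files; the bigger win is still opening each raster once.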