I'm totally new to this field. I'm using Python and Trimesh to create a 3D model of a bone, and I need to figure out how to create or represent the thickness of the bone. I only have the surface of the bone, which is fine, but I also need to represent what the bone looks like inside that surface. Does anyone have a clue on how to approach this problem?
Ask me if you need more information; I'm not sure what you need to know to understand the problem.
Thanks in advance.
You can use gmsh along with mmg to generate a volume mesh from your surface.
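Not from the answer above, but as a rough sketch of what the gmsh Python API route could look like (loosely following gmsh's own STL remeshing example; 'bone.stl' is a placeholder filename and the classification angle is arbitrary):

import math
import gmsh

gmsh.initialize()
gmsh.merge('bone.stl')                                   # load the surface triangulation
gmsh.model.mesh.classifySurfaces(angle=40 * math.pi / 180)
gmsh.model.mesh.createGeometry()                         # build a geometry from the STL
surfaces = [tag for (dim, tag) in gmsh.model.getEntities(2)]
loop = gmsh.model.geo.addSurfaceLoop(surfaces)
gmsh.model.geo.addVolume([loop])
gmsh.model.geo.synchronize()
gmsh.model.mesh.generate(3)                              # tetrahedral volume mesh
gmsh.write('bone_volume.msh')
gmsh.finalize()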
Depending on what you need: if you want to display volumetric information, you have two options:
generate a voxel grid filling the volume delimited by your surface
triangulate your volume with tetrahedra (I have heard that gmsh is good for this)
If you just want successive inner surfaces, you can use offset/deflated surfaces. This can simply be achieved by offsetting the points of your first surface along their normals; you can use, for instance, inflate or thicken from the pymadcad module. (A rough Trimesh sketch of both ideas follows below.)
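Since the question uses Trimesh, here is a minimal sketch of both the voxel and the offset ideas in Trimesh itself; 'bone.stl', the voxel pitch, and the thickness value are placeholders, and the surface is assumed to be closed with outward-facing normals:

import trimesh

mesh = trimesh.load('bone.stl')              # the closed bone surface

# option 1: voxelise the interior of the surface
voxels = mesh.voxelized(pitch=1.0)           # pitch = voxel edge length, in mesh units
voxels.fill()                                # mark voxels enclosed by the surface as filled
print(len(voxels.points), "filled voxels")

# option 2: a crude inner surface, offset inwards along the vertex normals
thickness = 2.0
inner = trimesh.Trimesh(vertices=mesh.vertices - thickness * mesh.vertex_normals,
                        faces=mesh.faces)
trimesh.util.concatenate([mesh, inner]).show()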
I am working on my graduation project, and one of my tasks is to draw a 3D shape (for example an ellipsoid using vtkSampleFunction); this represents the heart, for example. I need to change the color of certain areas of that shape and make color gradients using two colors, for example. How can this be achieved? All I could find is that it can't be done without having polydata (points), and I don't know how to access specific points on the outline of my shape. Any help would be appreciated.
I have tried many ways to access points on the outline of my shape, but I can't find any way to do it.
I am new to VTK, so please try to simplify any answer. Thank you.
If you are looking for a way to extract all the points inside a surface, you can use vtkSelectEnclosedPoints. For example, if you want to find out which points in pointsPolydata lie inside the surface surfacePolydata, you can use the example below.
from vtk import vtkSelectEnclosedPoints

select = vtkSelectEnclosedPoints()
select.CheckSurfaceOn()                  # verify that the surface is closed
select.SetTolerance(0.001)
select.SetInputData(pointsPolydata)      # the points to classify
select.SetSurfaceData(surfacePolydata)   # the enclosing surface
select.Update()
outPut = select.GetOutput()
The outPut polydata will have an array named "SelectedPoints", with 0 for points outside the surface and 1 for points inside the surface.
For more details, refer to vtkSelectEnclosedPoints.
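If it helps, here is a small follow-on sketch (not part of the original answer) that pulls that array back into NumPy to get the indices of the enclosed points:

from vtk.util.numpy_support import vtk_to_numpy

# 1 = inside the surface, 0 = outside
selected = vtk_to_numpy(outPut.GetPointData().GetArray("SelectedPoints"))
inside_ids = selected.nonzero()[0]   # indices of the enclosed points in pointsPolydata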
I'm just a newbie in Blender.
I'm going to create a jigsaw-puzzle sphere model, like the one on Wikipedia, or those plastic 3D puzzles you have probably seen.
For now, I have created a Python script which creates arbitrary flat 2D puzzles with Bezier curves, which can later easily be converted to a mesh.
But how do I wrap it around a sphere?
PS. I just had an idea:
unwrap a cube onto the puzzle plane, and copy the edges as negatives as shown below
(there is no copy of the edges in the picture yet).
Then, with affine transformations, transform each 2D cube face to its respective 3D place, and then apply Object->Transform->To Sphere.
What do you think? Is there a better way to create a puzzled sphere?
Thanks for your attention!
EDIT: You know, there is also a dodecahedron, which can likewise be assembled from pentagonal faces.
I have turned the code into a Blender add-on:
https://bitbucket.org/ios29A/blender_puzzle_generator, maybe it will help someone.
It is actually about 30 KB of Python code, and the cube faces are transformed by hand.
I have just finished it with affine transformations of the cube faces.
See the heart made from a 7x7x7 cube, so the transform is plane->cube->sphere->lattice.
I think this method makes it possible to create any 3D shape from squares, not only spheres.
I was going to print it in plastic before metal.
Here it is at 3x3x3, made right before February 14th.
It's much simpler than you may imagine.
1. Download the add-on from the link given above (https://bitbucket.org/ios29A/blender_puzzle_generator).
2. Install it (refer to the documentation).
3. In Blender, create a cuboid puzzle (Add -> Curve -> Puzzle; choose Cuboid in the shape option).
4. Convert the curve into a mesh (Object -> Convert to -> Mesh).
5. Select the cuboid and enter edit mode (Tab).
6. Select all (A).
7. Mesh -> Transform -> To Sphere.
8. Move your mouse.
A scripted equivalent of steps 4-8 is sketched below.
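For reference, this is a minimal bpy sketch of steps 4-8 only (the add-on step itself has to be run from its own menu entry), assuming the cuboid puzzle curve is the active object:

import bpy

bpy.ops.object.convert(target='MESH')        # curve -> mesh
bpy.ops.object.mode_set(mode='EDIT')         # enter edit mode
bpy.ops.mesh.select_all(action='SELECT')     # select everything
bpy.ops.transform.tosphere(value=1.0)        # Mesh -> Transform -> To Sphere, full amount
bpy.ops.object.mode_set(mode='OBJECT')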
I have a data set which contains latitude and longitude (these are car-racing data). I would like to draw a map in pyqtgraph based on these coordinates and then interact with it. The problem is that I cannot find the proper way to draw the map with the current pyqtgraph API (or maybe I am missing something).
Does anybody know how I can draw the map with pyqtgraph?
I would start with Qt's primitives like QGraphicsPathItem or QGraphicsPolygonItem. If you are starting from a numpy array of coordinates, then you might find pg.arrayToQPath() useful as well.
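As an illustration only (written against a recent pyqtgraph; the coordinate arrays below are placeholders for your latitude/longitude data), a sketch along those lines might look like this:

import numpy as np
import pyqtgraph as pg
from pyqtgraph.Qt import QtWidgets

app = pg.mkQApp()

# placeholder track; substitute your longitude/latitude arrays here
t = np.linspace(0, 2 * np.pi, 500)
lon = np.cos(t) + 0.3 * np.cos(5 * t)
lat = np.sin(t) + 0.3 * np.sin(5 * t)

path = pg.arrayToQPath(lon, lat, connect='all')   # build one QPainterPath from the arrays
track = QtWidgets.QGraphicsPathItem(path)
track.setPen(pg.mkPen('w', width=2))

plot = pg.plot(title='Track map')
plot.setAspectLocked(True)    # keep the track's proportions
plot.addItem(track)

pg.exec()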
I have been all over the internet trying to find the answer to this one, and I've reached the end of my patience. What I am trying to achieve is pretty much what the standard bend deformer does; the only difference is that I want it to unfold along a pre-defined curve. There are literally hundreds of tutorials about the bend deformer, all doing the same thing, unfolding along a flat plane, but none on how to do it along a curved surface. I have also tried Paint Effects with control curves, to no avail, and baking the bend deformer into the geometry and then curving it afterwards. This last option didn't work, as I no longer had the control I required. It seems from my search that probably the only way to do this would be through a MEL or Python script, and I was wondering whether anyone would be able to help.
Something like this?
Constrain -> Motion Paths -> Attach to Motion Path + Flow Path Object
Lattice deformers themselves are skinnable, so you can run a skeleton or bend deformer through a lattice to manipulate its shape. This will also let you control the twist along the deformation. Animate the object you want to deform into the lattice's area of influence, while animating the deformation of the lattice itself at the same time, to create the follow effect.
Or, you can just make the lattice follow the path using a spline IK control.
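A very rough maya.cmds sketch of the lattice-plus-bend idea above (the mesh name 'unfoldMesh', the lattice divisions, and the keyframe values are all placeholders):

import maya.cmds as cmds

# lattice around the mesh we want to unroll
ffd, lattice, base = cmds.lattice('unfoldMesh', divisions=(2, 5, 2), objectCentered=True)

# bend the lattice points themselves so the deformation space curves
bend, bendHandle = cmds.nonLinear(lattice, type='bend', curvature=90)

# push the mesh through the lattice's region of influence over time
cmds.setKeyframe('unfoldMesh', attribute='translateZ', time=1, value=-10)
cmds.setKeyframe('unfoldMesh', attribute='translateZ', time=120, value=10)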
I have a question regarding Python scripting in Blender, and I'd really appreciate it if someone could give me at least some conceptual guidelines on how I could do this:
Basically, I have around 100 planes (simple primitive planes); each of them has its own material, and each material has its own transparency map applied to it.
I need a way to snap each of those planes' respective pivots to their opaque parts. I.e., is there a way to tell Blender the following through Python: "hey, go over every one of these planes, and for each one, snap the 3D cursor to an opaque part of the plane (it doesn't matter where exactly, as long as it's inside an opaque part of the plane) and then snap the plane's pivot point to the 3D cursor"?
Of course, I don't expect anyone to write a full algorithm for this; I am just asking for a little help and a push in the right direction, as I do have experience with Python, but not with Blender :/
Any help would be appreciated.
You can find documentation on blender's python api here.
Within Blender's image class you can access the pixel data at image.pixels as an array of floats, 4 floats per pixel (RGBA I think). image.size[0] is the width in pixels and image.size[1] the height.
Given that bpy.data.objects['Plane'].bound_box is an [8][3] array of points defining the outer extremes of the plane, you can locate the point on the plane corresponding to the pixel location to get the target point for the origin. You will also find bpy.data.objects['Plane'].matrix_world useful for translating object coordinates to global coordinates.
bpy.context.scene.cursor_location = Vector((x,y,z)) will move the cursor to where you want.
bpy.ops.object.origin_set(type='ORIGIN_CURSOR') will set the active objects origin to the cursor. Note that this works on the active object, so you will need to alter your selection as you go.
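Putting those pieces together, here is an illustrative sketch only, written against the same pre-2.8 API the answer uses. It assumes each plane lies in its local XY plane, that its UVs span the whole image, and that you already know how to fetch the transparency image for each plane (that part depends on your material setup):

import bpy
from mathutils import Vector

def snap_origin_to_opaque(obj, image, alpha_threshold=0.5):
    """Move obj's origin to a point on the plane that maps to an opaque pixel."""
    w, h = image.size
    px = image.pixels[:]                       # flat RGBA floats, row-major
    # first pixel whose alpha is above the threshold
    idx = next((i for i in range(w * h) if px[i * 4 + 3] > alpha_threshold), None)
    if idx is None:
        return False                           # fully transparent image
    u, v = (idx % w) / float(w), (idx // w) / float(h)
    # interpolate (u, v) across the plane's local-space bounding box
    corners = [Vector(c) for c in obj.bound_box]
    xs = [c.x for c in corners]
    ys = [c.y for c in corners]
    local = Vector((min(xs) + u * (max(xs) - min(xs)),
                    min(ys) + v * (max(ys) - min(ys)),
                    0.0))
    # move the cursor to that point in world space, then set the origin there
    bpy.context.scene.cursor_location = obj.matrix_world * local
    bpy.context.scene.objects.active = obj
    obj.select = True
    bpy.ops.object.origin_set(type='ORIGIN_CURSOR')
    return True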