I have a question regarding Python scripting in Blender, and I'd really appreciate it if someone could give me at least some conceptual guidelines on how I could do this:
Basically I have around 100 planes (simple primitive planes), each with its own material, and each material has its own transparency map applied to it.
I need a way to snap each of those planes' pivots to their opaque parts. In other words, I want to tell Blender, through Python: "hey, go over every one of these planes, and do the following for each - snap the 3D cursor to an opaque part of the plane (it doesn't matter where exactly, as long as it's inside an opaque part of the plane) and then snap the plane's pivot point to the 3D cursor".
Of course I don't expect anyone to write me a full algorithm for this; I am just asking for a little help and a push in the right direction, as I do have experience with Python, but not with Blender :/
Any help would be appreciated.
You can find documentation on Blender's Python API here.
Within Blender's Image class you can access the pixel data at image.pixels as a flat array of floats, 4 floats per pixel (RGBA I think). image.size[0] is the width in pixels and image.size[1] the height.
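For example, a minimal sketch of scanning those pixels for an opaque one (the 0.5 alpha threshold is arbitrary, and image is assumed to be the transparency map you already looked up for the plane):

```python
def find_opaque_pixel(image, alpha_threshold=0.5):
    """Return (u, v) in [0, 1] of the first pixel whose alpha exceeds the threshold."""
    width, height = image.size
    pixels = image.pixels[:]  # copy once; repeated element access on image.pixels is slow
    for i in range(0, len(pixels), 4):
        if pixels[i + 3] > alpha_threshold:  # every 4th float is the alpha channel
            index = i // 4
            return (index % width) / width, (index // width) / height
    return None
```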
Given that bpy.data.objects['Plane'].bound_box is an [8][3] array of points defining the outer extremes of the plane, you can map the pixel location onto the plane to get the target point for the origin. You will also find bpy.data.objects['Plane'].matrix_world useful for converting the object-space coordinates to global ones.
bpy.context.scene.cursor_location = Vector((x,y,z)) will move the cursor to where you want.
bpy.ops.object.origin_set(type='ORIGIN_CURSOR') will set the active object's origin to the cursor. Note that this works on the active object, so you will need to alter your selection as you go.
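Putting those pieces together, here is a rough sketch written against the 2.7x API used above (it assumes each plane lies flat in its local XY plane so that bound_box corners 0, 3 and 4 span it, and get_transparency_image() is a placeholder for however you look the image up from the plane's material):

```python
import bpy
from mathutils import Vector

def origin_to_opaque(obj, image):
    uv = find_opaque_pixel(image)               # helper sketched above
    if uv is None:
        return
    # Interpolate across the bounding box: corner 0 is the minimum corner,
    # corner 4 differs from it in local X, corner 3 in local Y.
    bb = [Vector(c) for c in obj.bound_box]
    local = bb[0] + (bb[4] - bb[0]) * uv[0] + (bb[3] - bb[0]) * uv[1]
    world = obj.matrix_world * local            # use '@' instead of '*' in Blender 2.8+

    bpy.context.scene.cursor_location = world   # scene.cursor.location in 2.8+
    bpy.ops.object.select_all(action='DESELECT')
    obj.select = True                           # obj.select_set(True) in 2.8+
    bpy.context.scene.objects.active = obj      # view_layer.objects.active in 2.8+
    bpy.ops.object.origin_set(type='ORIGIN_CURSOR')

for obj in bpy.data.objects:
    if obj.name.startswith('Plane'):            # however you identify your 100 planes
        origin_to_opaque(obj, get_transparency_image(obj))  # placeholder lookup, not a real API
```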
I have a point cloud and meshes (whose vertices are the points of the point cloud).
I want to project the point cloud with a certain virtual camera.
Here, since the point cloud is sparse, the rendered result includes the points which should be occluded by foreground objects.
To resolve this issue, I want to use mesh information to identify which points should be occluded.
Is there any smart way to do this in python?
Kind advice will be greatly appreciated.
After hours of searching, I concluded that I would have to implement a custom rendering pipeline from scratch to achieve my goal.
So, instead of this, I use a mesh-based renderer to render a depth map.
And then I simply project the points of the point cloud with a projection matrix.
Here, I use the depth map to check whether each point's depth is consistent with it.
If the projected point is the one that should be occluded, then the depth of the point would be larger than the depth map value at the corresponding pixel.
So, such points should be ignored while rendering.
I know that this is an inelegant and inefficient trick, but it works very well :)
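For what it's worth, a minimal sketch of that depth test (my own variable names; it assumes the points are already in camera coordinates and K is the intrinsics of the same virtual camera that rendered the depth map):

```python
import numpy as np

def visible_points(points_cam, depth_map, K, eps=1e-3):
    """Keep only the points whose depth agrees with the rendered depth map.

    points_cam -- (N, 3) points already transformed into camera coordinates
    depth_map  -- (H, W) depth rendered from the mesh with the same camera
    K          -- (3, 3) camera intrinsics
    """
    H, W = depth_map.shape
    z = points_cam[:, 2]
    # Pinhole projection: u = fx * x / z + cx, v = fy * y / z + cy.
    uv = (K @ points_cam.T).T
    u = np.round(uv[:, 0] / z).astype(int)
    v = np.round(uv[:, 1] / z).astype(int)
    # Ignore points behind the camera or projecting outside the image.
    in_img = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    keep = np.zeros(len(points_cam), dtype=bool)
    idx = np.where(in_img)[0]
    # A point is visible if it is not (noticeably) behind the mesh surface.
    keep[idx] = z[idx] <= depth_map[v[idx], u[idx]] + eps
    return points_cam[keep]
```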
I intend to make a 3D model based on multi-view stereo images (basically 2D images of the same object from different angles and orientations) inside Blender from scratch. However, I am new to Blender.
I wanted to know if there are any tutorials on how to project a single pixel or point into Blender's 3D environment using Python. If not a tutorial, any documentation would help. I am still learning about this whole 3D reconstruction thing and am pretty new to it, so I am not sure whether these points are represented using a 3-dimensional matrix/array?
Basically I want to implement 3D reconstruction based on a paper written by some researchers. Almost every such project is in C++. I want to do it in Python in Blender and, if I am capable enough, make these libraries open source.
Please suggest any prerequisites you think would help. I have just started the 3rd year of my BSc Computer Science course, and I'm very new to the world of computer graphics.
(My skillset is C, Java and Python.)
I would be very glad and appreciate any help.
Thank You
Link: https://vision.in.tum.de/research/image-based_3d_reconstruction/multiviewreconstruction
Yes, it can very likely be done in Blender, and in Python at least for small geometries / low resolution.
A valid approach for the kind of scenarios you seem to want to play with is based on the idea of "space carving" or "silhouette projection". A good description is in an old paper by Kutulakos and Seitz, which was based in part on earlier work by Szeliski.
Given a good estimation of the silhouettes, these methods can correctly reconstruct all convex portions of the object's surface, and the subset of concavities that are resolved in the photo hull. The remaining concavities are "patched" over and need to be reconstructed using a different method (e.g. stereo, or structured light). For the surfaces that can be reconstructed, space carving is generally more robust than stereo (since it is insensitive to the color and surface texture of the object), and can work on surfaces where structured light struggles (e.g. surfaces with specularities, or very dark objects with low reflectance for a laser stripe).
The basic idea is to use the silhouettes of the projection of the object in cameras around it to "remove" mass from an initial volume (e.g. a box) encompassing the object, a bit like a sculptor carving a statue by removing material from a block of marble.
Computationally, you can do this by representing the volume of space of interest using an octree, initialized with a minimal level of subdivision and then progressively refined. The refinement consists of projecting the vertices of the octree leaves into the cameras and identifying which leaves are completely outside or partially inside the silhouettes. The former are pruned, while the latter are split, and the process continues until no more leaves can be split or a maximum level of subdivision is reached. The hull of the octree is then extracted as a "watertight" mesh using standard methods.
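As a rough illustration of that refinement loop (not the paper's implementation; the camera matrices and silhouette masks are assumed inputs, and for brevity only the eight cube corners are tested rather than the cube's full projected footprint):

```python
import numpy as np

def project(P, pts):
    """Project 3D points with a 3x4 camera matrix P; returns (N, 2) pixel coords."""
    h = (P @ np.c_[pts, np.ones(len(pts))].T).T
    return h[:, :2] / h[:, 2:3]

def carve(cube_min, size, cameras, silhouettes, depth=0, max_depth=7):
    """Recursively refine an axis-aligned cube against all silhouettes.

    cube_min    -- np.array with the cube's minimum corner
    cameras     -- list of 3x4 projection matrices
    silhouettes -- list of boolean (H, W) masks, True where the object is seen
    Returns a list of (min_corner, size) cubes that survive the carving.
    """
    offsets = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)])
    corners = cube_min + offsets * size          # the 8 corners of the cube

    fully_inside = True
    for P, sil in zip(cameras, silhouettes):
        uv = np.round(project(P, corners)).astype(int)
        H, W = sil.shape
        valid = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
        hits = np.zeros(len(corners), dtype=bool)
        hits[valid] = sil[uv[valid, 1], uv[valid, 0]]
        if not hits.any():
            return []                            # completely outside one silhouette: prune
        if not hits.all():
            fully_inside = False                 # straddles the silhouette: must be split

    if fully_inside or depth == max_depth:
        return [(cube_min, size)]                # keep as a leaf of the reconstruction

    half = size / 2.0                            # split into 8 children and recurse
    children = []
    for off in offsets:
        children += carve(cube_min + off * half, half, cameras, silhouettes,
                          depth + 1, max_depth)
    return children
```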
Apart from the above paper, a way more detailed description can be found in an old patent by Geometrix, which sold a scanner based on the above ideas around the year 2000. Here is what it looked like:
I have a shirt displayed as a 3D model in the file format "obj" or "fbx". I would like to calculate the object's width at a specific height. It would be best if I had the coordinates of all points at a specific height. Can anyone tell me a Python or JavaScript framework for that, or suggest how I can calculate this manually?
If you're using the OBJ format, then you have no unit data: it's triangles, but no absolute scale.
What you're looking for should be easy to moderately difficult to determine. 3D-printing slicer software does exactly this when it calculates tool paths for 3D printers. You'll take your 3D model, make sure it's oriented so that "up" makes sense (the neck of the shirt in your example), then run the slicer on it at various heights.
You'll get a 2D slice of the 3D object as the intersection of a plane with the model at that height. You'll then have to compute the bounding box around the slice and express the width in whatever units your model uses.
A good place to start might be this library: https://pypi.org/project/meshcut/
or else look for open source 3D printer slicing software.
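For instance, a sketch using that meshcut library together with trimesh for loading the model (the file name, the choice of y as the "up" axis and of x as the width axis are all assumptions to adapt):

```python
import numpy as np
import trimesh   # to load the OBJ; any loader that yields vertices + faces works
import meshcut

mesh = trimesh.load('shirt.obj', force='mesh')   # hypothetical file name

# Slice the shirt with a horizontal plane at the desired height (in model units).
height = 0.75
sections = meshcut.cross_section(mesh.vertices, mesh.faces,
                                 plane_orig=(0, height, 0),
                                 plane_normal=(0, 1, 0))

# Each section is an (N, 3) polyline of points lying at that height; the width
# is the extent of all slice points along the chosen horizontal axis (here x).
points = np.vstack(sections)
width = points[:, 0].max() - points[:, 0].min()
print("width at height", height, ":", width, "(in the model's own units)")
```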
What do negative coordinates mean when I apply the function:
transformed_coordinates = cv2.perspectiveTransform(points, homography)
The documentation mentions nothing about this. Could someone please explain it?
Negative coordinates are entirely normal. That means that the projected points from 3D space to 2D image space are out of bounds or defined outside of the image boundaries. It's not documented because it's implicit.
Now you are probably wondering why you're getting these. I have no idea where points came from, but I suspect that you are visualizing some point cloud in 3D space and the transform maps visible points from the point cloud to where the camera is located. Therefore, it's perfectly normal to have points that are outside the field of view of the camera be mapped to negative coordinates which tells you they simply cannot appear or be visualized when projected to image space.
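To make that concrete, a small made-up example (the homography and image size are arbitrary); points landing at negative coordinates simply fall outside the destination image and can be filtered out before drawing:

```python
import numpy as np
import cv2

# An example homography: a pure translation 50 px left and 80 px up.
H = np.array([[1.0, 0.0, -50.0],
              [0.0, 1.0, -80.0],
              [0.0, 0.0,   1.0]])

# cv2.perspectiveTransform expects float points with shape (N, 1, 2).
points = np.float32([[10, 20], [200, 300], [5, 5]]).reshape(-1, 1, 2)
transformed = cv2.perspectiveTransform(points, H).reshape(-1, 2)
print(transformed)   # [[-40. -60.] [150. 220.] [-45. -75.]] -> two points left the frame

# Keep only the points that actually fall inside a (made-up) 640x480 image.
w, h = 640, 480
inside = transformed[(transformed[:, 0] >= 0) & (transformed[:, 0] < w) &
                     (transformed[:, 1] >= 0) & (transformed[:, 1] < h)]
```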
This question is probably more about finding the right terminology and subjects to search for than anything else. It feels like a simple enough concept that tools should be available in Python/NumPy to do it, but I just have no idea what to look for.
I recently watched a video on space carving and I would like to implement the basic concept using video game sprites to attempt to voxelize them. I have sprites of characters from 8 equally spaced angles (front on, camera 45 degrees right, fully right, back, etc.).
I haven't found a library dedicated to this concept, but I think it should be fairly simple to implement. My thinking is that I can make a 3D array, sized to the sprite's maximum extent in all dimensions, that starts as a solid block of "clay". Then I need a 2D representation of that 3D array at each rotation angle. For each pixel in that representation I need to be able to iterate down through each block in the 3D array that a "laser" fired from that position would hit.
The first step would be removing the clay wherever the sprite is just transparent alpha (i.e. setting a bool to False). The next would be "painting" the shape where possible.
The problem is I just have no idea what mathematical or programmatic terms are associated with these concepts. I could get so far as to make a 3D block of clay. But how do I get 2D representations of a 3D array at several rotated angles about one axis that I can essentially fire lasers at?
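In case it helps make the idea concrete, here is a rough NumPy/SciPy sketch of that carving step (it assumes orthographic views, sprite masks already resized to the voxel grid's resolution, and it rotates the grid itself with nearest-neighbour interpolation instead of firing actual rays):

```python
import numpy as np
from scipy import ndimage

def carve_from_sprites(masks, angles, size):
    """Carve a cubic voxel block of 'clay' using sprite alpha masks.

    masks  -- list of (size, size) boolean arrays, True where the sprite is opaque
    angles -- matching yaw angles in degrees around the vertical axis (0, 45, ...)
    """
    clay = np.ones((size, size, size), dtype=np.uint8)   # axes: (row/up, column, depth)
    for mask, angle in zip(masks, angles):
        # Rotate the block around the vertical axis so this view looks straight
        # down the depth axis; order=0 keeps the voxels as hard 0/1 values.
        view = ndimage.rotate(clay, angle, axes=(1, 2), reshape=False, order=0)
        # A voxel survives this view only if the "laser" through its (row, column)
        # position hits an opaque sprite pixel.
        view *= mask[:, :, None].astype(np.uint8)
        # Rotate back and intersect with what the earlier views left standing.
        back = ndimage.rotate(view, -angle, axes=(1, 2), reshape=False, order=0)
        clay = np.minimum(clay, back)
    return clay.astype(bool)
```

The usual search terms for this are "visual hull", "shape from silhouette" and "voxel/space carving".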