Multi-view stereo 3D reconstruction in Blender - Python

I intend to build a 3D model from multi-view stereo images (basically 2D images of the same object taken from different angles and orientations) inside Blender, from scratch. However, I am new to Blender.
I wanted to know if there are any tutorials, or failing that any documentation, on how to place a single pixel or point in Blender's 3D space using Python. I am still learning about 3D reconstruction and am quite new to it, so I am not sure whether these points are represented using a three-dimensional matrix/array?
Basically, I want to implement 3D reconstruction based on a paper written by some researchers. Almost every such project is in C++. I want to do it in Python inside Blender and, if I am capable enough, release the resulting libraries as open source.
Please suggest any prerequisites you think would help me. I have just started the third year of my BSc Computer Science course and am very new to the world of computer graphics.
(My skillset is C, Java and Python.)
I would really appreciate any help. Thank you.
Link to website: https://vision.in.tum.de/research/image-based_3d_reconstruction/multiviewreconstruction

Yes, it can very likely be done in Blender, and in Python at least for small geometries / low resolution.
A valid approach for the kind of scenario you seem to want to explore is based on the idea of "space carving" or "silhouette projection". A good description is in an old paper by Kutulakos and Seitz, which was based in part on earlier work by Szeliski.
Given a good estimate of the silhouettes, these methods can correctly reconstruct all convex portions of the object's surface, and the subset of concavities that are resolved in the photo hull. The remaining concavities are "patched over" and need to be reconstructed using a different method (e.g. stereo, or structured light). For the surfaces that can be reconstructed, space carving is generally more robust than stereo (since it is insensitive to the color and surface texture of the object), and it can work on surfaces where structured light struggles (e.g. surfaces with specularities, or very dark objects with low reflectance for a laser stripe).
The basic idea is to use the silhouettes of the projection of the object in cameras around it to "remove" mass from an initial volume (e.g. a box) encompassing the object, a bit like a sculptor carving a statue by removing material from a block of marble.
Computationally, you can do it by representing the volume of interest with an octree, initialized at a minimal level of subdivision and then progressively refined. The refinement consists of projecting the vertices of the octree leaves into the cameras and identifying which leaves are completely outside and which are partially inside the silhouettes. The former are pruned, while the latter are split, and the process continues until no more leaves can be split or a maximum level of subdivision is reached. The hull of the octree is then extracted as a "watertight" mesh using standard methods.
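If you want to experiment with the carving test itself before worrying about octrees, here is a minimal Python/NumPy sketch that carves a plain voxel grid (the octree refinement described above is a more efficient way to organise the same inside/outside test). The projection matrices and silhouette masks are assumed to be given, and all names are illustrative:

```python
import numpy as np

def carve_voxels(voxel_centers, cameras, silhouettes):
    """Keep only the voxels whose projection falls inside every silhouette.

    voxel_centers : (N, 3) array of candidate 3D points
    cameras       : list of 3x4 projection matrices P = K [R | t]
    silhouettes   : list of 2D boolean masks (True = inside the object)
    """
    keep = np.ones(len(voxel_centers), dtype=bool)
    homo = np.hstack([voxel_centers, np.ones((len(voxel_centers), 1))])  # (N, 4)
    for P, mask in zip(cameras, silhouettes):
        proj = homo @ P.T                          # (N, 3) homogeneous image points
        in_front = proj[:, 2] > 0                  # ignore points behind the camera
        uv = proj[:, :2] / proj[:, 2:3]            # perspective divide
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        h, w = mask.shape
        valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        inside = np.zeros(len(voxel_centers), dtype=bool)
        inside[valid] = mask[v[valid], u[valid]]
        keep &= inside                             # carve away anything outside this view
    return voxel_centers[keep]

# Example: a 64^3 grid spanning a unit cube around the origin.
# xs = np.linspace(-0.5, 0.5, 64)
# grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1).reshape(-1, 3)
# surviving = carve_voxels(grid, cameras, silhouettes)
```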
Apart from the above paper, a much more detailed description can be found in an old patent by Geometrix, which sold a scanner based on these ideas around the year 2000.
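Since the question asks specifically how to place points in Blender's 3D space from Python, here is a minimal sketch (assuming the Blender 2.8+ bpy API) that turns a list of 3D points, such as the surviving voxel centres from the sketch above, into a vertex-only mesh object:

```python
import bpy

def add_point_cloud(points, name="Reconstruction"):
    """Create a mesh object whose vertices are the given (x, y, z) points."""
    mesh = bpy.data.meshes.new(name)
    mesh.from_pydata([tuple(p) for p in points], [], [])  # vertices only, no edges/faces
    mesh.update()
    obj = bpy.data.objects.new(name, mesh)
    bpy.context.collection.objects.link(obj)  # Blender 2.8+; use scene.objects.link on 2.7x
    return obj

# add_point_cloud([(0, 0, 0), (1, 0, 0), (0, 1, 0)])
```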

Related

Is there any algorithm for converting 2D images into a 3D model? [duplicate]

If I take a picture with a camera, so that I know the distance from the camera to the object (say, a scale model of a house), I would like to turn this into a 3D model that I can maneuver around so I can comment on different parts of the house.
If I sit down and think about taking more than one picture and labeling direction and distance, I should be able to figure out how to do this, but I thought I would ask whether someone knows of a paper that may help explain it more.
What language you explain in doesn't matter, as I am looking for the best approach.
Right now I am considering showing the house and then letting the user provide some assistance for heights, such as the distance from the camera to the top of that part of the model; given enough of this, it should be possible to start calculating heights for the rest, especially if there is a top-down image plus pictures from angles on the four sides from which to calculate relative heights.
Then I expect that parts will also need to differ in color to help separate out the various parts of the model.
As mentioned, the problem is very hard and is often also referred to as multi-view object reconstruction. It is usually approached by solving the stereo-view reconstruction problem for each pair of consecutive images.
Performing stereo reconstruction requires that pairs of images are taken that have a good amount of visible overlap of physical points. You need to find corresponding points such that you can then use triangulation to find the 3D co-ordinates of the points.
Epipolar geometry
Stereo reconstruction is usually done by first calibrating your camera setup so you can rectify your images using the theory of epipolar geometry. This simplifies finding corresponding points as well as the final triangulation calculations.
If you have:
the intrinsic camera parameters (requiring camera calibration),
the camera's position and rotation (its extrinsic parameters), and
8 or more physical points with matching known positions in two photos (when using the eight-point algorithm)
you can calculate the fundamental and essential matrices using only matrix theory and use these to rectify your images. This requires some theory about co-ordinate projections with homogeneous co-ordinates and also knowledge of the pinhole camera model and camera matrix.
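As a hedged sketch of what that looks like with OpenCV's Python bindings (the intrinsics, poses and points below are synthetic placeholders purely so the snippet runs; in practice pts1/pts2 come from your own correspondence step and K from a prior calibration):

```python
import cv2
import numpy as np

# Synthetic correspondences so the sketch is self-contained.
rng = np.random.default_rng(0)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                                   # assumed intrinsics
X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 6.0], size=(50, 3))  # points in front of both cameras

def project(points, R, t):
    x = (K @ (R @ points.T + t.reshape(3, 1))).T
    return (x[:, :2] / x[:, 2:3]).astype(np.float32)

R_true, _ = cv2.Rodrigues(np.array([0.0, 0.1, 0.0]))   # small rotation about Y
t_true = np.array([0.5, 0.0, 0.0])                     # baseline along X
pts1 = project(X, np.eye(3), np.zeros(3))
pts2 = project(X, R_true, t_true)

# Fundamental matrix from pixel correspondences alone (eight-point algorithm).
F, f_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)

# Essential matrix needs the intrinsics; recoverPose then gives R and a unit-scale t.
E, e_mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
retval, R, t, pose_mask = cv2.recoverPose(E, pts1, pts2, K)
```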
If you want a method that doesn't need the camera parameters and works for unknown camera set-ups you should probably look into methods for uncalibrated stereo reconstruction.
Correspondence problem
Finding corresponding points is the tricky part that requires you to look for points of the same brightness or colour, or to use texture patterns or some other features to identify the same points in pairs of images. Techniques for this either work locally by looking for a best match in a small region around each point, or globally by considering the image as a whole.
If you already have the fundamental matrix, it will allow you to rectify the images such that corresponding points in two images will be constrained to a line (in theory). This helps you to use faster local techniques.
There is currently still no ideal technique to solve the correspondence problem, but possible approaches fall into these categories:
Manual selection: have a person hand-select matching points.
Custom markers: place markers or use specific patterns/colours that you can easily identify.
Sum of squared differences: take a region around a point and find the best-matching region in the other image (a minimal sketch follows this list).
Graph cuts: a global optimisation technique based on optimisation using graph theory.
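To make the sum-of-squared-differences idea concrete, here is a minimal sketch assuming rectified grayscale images, so the search is restricted to the same row; names and parameter values are illustrative, and the pixel is assumed to be far enough from the image border:

```python
import numpy as np

def ssd_match(left, right, row, col, patch=5, max_disp=64):
    """Find the disparity of pixel (row, col) in the left image by scanning
    the same row of the right image and minimising the SSD over a small patch."""
    r = patch // 2
    ref = left[row - r:row + r + 1, col - r:col + r + 1]
    best_disp, best_cost = 0, np.inf
    for d in range(0, max_disp + 1):
        c = col - d                              # candidate column in the right image
        if c - r < 0:
            break
        cand = right[row - r:row + r + 1, c - r:c + r + 1]
        cost = np.sum((ref - cand) ** 2)         # sum of squared differences
        if cost < best_cost:
            best_cost, best_disp = cost, d
    return best_disp

# Tiny example: a bright square shifted 7 pixels to the left in the "right" image.
left = np.zeros((40, 80))
left[18:23, 40:45] = 1.0
right = np.zeros((40, 80))
right[18:23, 33:38] = 1.0
print(ssd_match(left, right, row=20, col=42))   # expected disparity: 7
```

OpenCV ships an optimised version of this local approach as cv2.StereoBM_create; graph-cut methods instead minimise a global energy over the whole disparity map.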
For specific implementations you can use Google Scholar to search through the current literature. Here is one highly cited paper comparing various techniques:
A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms.
Multi-view reconstruction
Once you have the corresponding points, you can then use epipolar geometry theory for the triangulation calculations to find the 3D co-ordinates of the points.
This whole stereo reconstruction would then be repeated for each pair of consecutive images (implying that you need an order to the images or at least knowledge of which images have many overlapping points). For each pair you would calculate a different fundamental matrix.
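A hedged sketch of the triangulation step with OpenCV, assuming the two 3x4 projection matrices have already been assembled from K, R and t (the numbers below are placeholders, not real measurements):

```python
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                              # assumed intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])            # first camera at the origin
R, t = np.eye(3), np.array([[0.5], [0.0], [0.0]])            # illustrative relative pose
P2 = K @ np.hstack([R, t])

# Matched pixel co-ordinates as 2xN arrays: row 0 holds x values, row 1 holds y values.
pts1 = np.array([[320.0, 100.0],
                 [240.0, 260.0]], dtype=np.float32)
pts2 = np.array([[250.0,  30.0],
                 [240.0, 260.0]], dtype=np.float32)

X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)              # 4xN homogeneous points
X = (X_h[:3] / X_h[3]).T                                     # Nx3 Euclidean co-ordinates
```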
Of course, due to noise or inaccuracies at each of these steps you might want to consider how to solve the problem in a more global manner. For instance, if you have a series of images that are taken around an object and form a loop, this provides extra constraints that can be used to improve the accuracy of earlier steps using something like bundle adjustment.
As you can see, both stereo and multi-view reconstruction are far from solved problems and are still actively researched. The less you want to do in an automated manner the more well-defined the problem becomes, but even in these cases quite a bit of theory is required to get started.
Alternatives
If it's within the constraints of what you want to do, I would recommend considering dedicated hardware sensors (such as the XBox's Kinect) instead of only using normal cameras. These sensors use structured light, time-of-flight or some other range imaging technique to generate a depth image which they can also combine with colour data from their own cameras. They practically solve the single-view reconstruction problem for you and often include libraries and tools for stitching/combining multiple views.
Epipolar geometry references
My knowledge is actually quite thin on most of the theory, so the best I can do is to further provide you with some references that are hopefully useful (in order of relevance):
I found a PDF chapter on Multiple View Geometry that contains most of the critical theory. In fact the textbook Multiple View Geometry in Computer Vision should also be quite useful (sample chapters available here).
Here's a page describing a project on uncalibrated stereo reconstruction that seems to include some source code that could be useful. They find matching points in an automated manner using one of many feature detection techniques. If you want this part of the process to be automated as well, then SIFT feature detection is commonly considered to be an excellent non-real-time technique (since it's quite slow).
A paper about Scene Reconstruction from Multiple Uncalibrated Views.
A slideshow on Methods for 3D Reconstruction from Multiple Images (it has some more references below its slides towards the end).
A paper comparing different multi-view stereo reconstruction algorithms can be found here. It limits itself to algorithms that "reconstruct dense object models from calibrated views".
Here's a paper that goes into a lot of detail for the case where you have stereo cameras that take multiple images: Towards robust metric reconstruction via a dynamic uncalibrated stereo head. They then find methods to self-calibrate the cameras.
I'm not sure how helpful all of this is, but hopefully it includes enough useful terminology and references to find further resources.
Research has made significant progress, and these days it is possible to obtain pretty good-looking 3D shapes from 2D images. For instance, our recent research work titled "Synthesizing 3D Shapes via Modeling Multi-View Depth Maps and Silhouettes With Deep Generative Networks" took a big step towards solving the problem of obtaining 3D shapes from 2D images. In our work, we show that you can not only go from 2D to 3D directly and get a good, approximate 3D reconstruction, but also learn a distribution of 3D shapes in an efficient manner and generate/synthesize new 3D shapes. Our results show that we are able to do 3D reconstruction even from a single silhouette or depth map, which we compare against the ground-truth 3D shapes.
The approach we took has some contributions related to cognitive science, or the way the brain works: the model we built shares parameters across all shape categories instead of being specific to only one category. It also obtains consistent representations and takes the uncertainty of the input view into account when producing a 3D shape as output. Therefore, it is able to naturally give meaningful results even for very ambiguous inputs. If you look at the papers citing ours, you can see even more progress just in terms of going from 2D images to 3D shapes.
This problem is known as Photogrammetry.
Google will supply you with endless references, just be aware that if you want to roll your own, it's a very hard problem.
Check out The Deadalus Project. Although that website does not contain a gallery with illustrative information about the solution, it posts several papers and information about the working method.
I watched a lecture by one of the main researchers of the project (Roger Hubbold), and the image results are quite amazing, although it is a complex and lengthy problem. There are a lot of tricky details to take into account to get an approximation of the 3D data. Take, for example, the 3D information from wall surfaces, for which the heuristic works as follows: take a photo of the scene under normal illumination, then retake the picture from the same position with the flash at full power; subtract the two images and divide the result by a pre-taken flash calibration image; apply a box filter to this result and then post-process it to estimate depth values. The whole process is explained in detail in a paper that is also posted/referenced on the project website.
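The paper has considerably more to it, but as a rough sketch of the arithmetic described above (the synthetic images and the inverse-square depth mapping at the end are my own placeholders, not the paper's exact post-processing):

```python
import cv2
import numpy as np

# Synthetic stand-ins for the real captures: same viewpoint, one ambient-light
# exposure and one full-flash exposure, plus a pre-taken flash calibration image.
rng = np.random.default_rng(1)
ambient = rng.uniform(20, 60, size=(480, 640)).astype(np.float32)
flash = ambient + rng.uniform(40, 200, size=(480, 640)).astype(np.float32)
flash_calib = np.full((480, 640), 180.0, dtype=np.float32)

# Flash-only component: the difference of the two exposures.
flash_only = cv2.subtract(flash, ambient)

# Normalise by the flash calibration image (guard against division by zero).
ratio = flash_only / np.maximum(flash_calib, 1e-3)

# Smooth with a box filter, then turn brightness into a rough depth proxy.
# Under an inverse-square falloff assumption, brighter ~ closer, so depth ~ 1/sqrt(ratio).
smoothed = cv2.boxFilter(ratio, ddepth=-1, ksize=(9, 9))
depth_proxy = 1.0 / np.sqrt(np.maximum(smoothed, 1e-6))
```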
Google Sketchup (free) has a photo matching tool that allows you to take a photograph and match its perspective for easy modeling.
EDIT: It appears that you're interested in developing your own solution. I thought you were trying to obtain a 3D model of an image in a single instance. If this answer isn't helpful, I apologize.
Hope this helps if you are trying to construct a 3D volume from a 2D stack of images! You can use an open-source tool such as ImageJ (Fiji), which comes with a 3D viewer plugin:
https://quppler.com/creating-a-classifier-using-image-j-fiji-for-3d-volume-data-preparation-from-stack-of-images/

SLAM vs. Registration

I am working with 3D point clouds acquired from an object, and I need to align them into a single global point cloud. I am having a hard time understanding the difference between SLAM and registration, especially since both of them can use ICP.
The point clouds were acquired in spatial and temporal order and have extended overlapping areas; therefore I should be able to use SLAM to align them.
Can anyone clarify this point for me?
Thanks!
anna
Based on your question it sounds like you are working with a depth sensor of some kind moving through an environment. You would like to create a consistent map or point cloud with this sensor. I would have commented this, but my reputation is too low at the moment.
Registration just refers to the alignment of two measurements through some transformation. For image registration, this typically means finding some transformation, whether a simple translation or an affine warp, that makes the two images 'look' similar. Point cloud registration typically refers to finding a rotation and translation that align two point clouds.
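For point clouds, that rotation and translation can be computed in closed form once you have point correspondences; ICP simply alternates a nearest-neighbour correspondence search with this step. A minimal NumPy sketch of the closed-form solution (Kabsch/Umeyama without scale; names are illustrative):

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid registration between corresponding (N, 3) point sets.

    Returns R (3x3) and t (3,) such that R @ src[i] + t ~= dst[i].
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Example: recover a known rotation about Z plus a translation.
# theta = np.radians(30)
# Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
#                [np.sin(theta),  np.cos(theta), 0],
#                [0, 0, 1]])
# src = np.random.default_rng(0).normal(size=(100, 3))
# dst = src @ Rz.T + np.array([1.0, 2.0, 3.0])
# R, t = rigid_align(src, dst)
```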
SLAM, as you probably know, refers to simultaneous localization and mapping. The goal of SLAM is to find the sensor's motion through a scene and to map the scene at the same time.
I think the reason you are having a hard time seeing the difference between the two is that, for your application, registration is a way of accomplishing a simple form of SLAM. The reason is that ICP essentially finds the relative transformation of your depth sensor between two measurements. This acts as odometry for your sensor.
However, registration is not necessarily going to give you a relative sensor pose in all applications. For example a KLT tracker is a form of simple image registration, but it does not give the relative transformation of two cameras directly.
I hope this clears it up.

Creating own object detection for Euro pallets

I thought about tackling a new project in which I use the TensorFlow object detection API to detect Euro pallets (e.g. pic).
My ultimate goal is to know how far away I am from the pallet and what my position relative to it is. So I thought about first detecting the Euro pallet in an RGB feed from a Kinect camera and then using its depth/3D data to get the distance to the pallet.
But how do I work out the relative position of the pallet? I could create different classes, for example "front view of a lying pallet", "side view of a lying pallet", and so on, but I think for that to be accurate I'd need quite a few pictures for each class, maybe 200 per class?
Since my guess is that no such labeled datasets exist yet, that would be quite a pain to create by myself.
Another way I could think of is labeling my pallets with segmentation instead of bounding boxes; maybe then there is another way to find my position relative to the pallet? I have never done semantic segmentation labeling myself, but can anyone name any good programs I could use?
I'm hoping someone can help point me in the right direction. Any help would be appreciated.
Some ideas: assuming detection and segmentation with classifier(s) work, one could then try feature detection such as edges/lines to obtain clues about the pallet's orientation within the bounding box (see the sketch below).
Of course this will be tricky for simple feature detection because of very different surfaces (wood, dirt), backgrounds and lighting.
Also, "markerless tracking" (a topic in augmented reality) and "bin picking" (actually applied in the automation industry) may be keywords for similar problems, although you are probably not starting with an unordered pile of pallets.

Constructing rough 3D points with a structure-from-motion technique

I am using OpenCV and Python.
Let's say I have a video sequence of a car, and I have tracked some 'interesting points' on the car with cv2.goodFeaturesToTrack and cv2.calcOpticalFlowPyrLK. Now, given the tracked points, I want to estimate a very rough shape (maybe a 3D box) for the car and its distance from the camera. It doesn't need to be that accurate.
On top of that, I want it to keep updating in real time. The closest YouTube video I can find that gives a view of what I am trying to achieve is this. I have found a new Structure from Motion module in OpenCV, but it is more about building a 3D model from a collection of points.
The question is, what is the best way of achieving this, and what kind of library can I use (especially for constructing the 3D space)?
It is also OK if I somehow need to use C++ for this (although I am not good at it yet).
Thanks.

Python OpenCV stereo camera position

I'd like to determine the position and orientation of a stereo camera relative to its previous position in world coordinates. I'm using a bumblebee XB3 camera and the motion between stereo pairs is on the order of a couple feet.
Would this be on the correct track?
Obtain rectified image for each pair
Detect/match feature points in the rectified images
Compute Fundamental Matrix
Compute Essential Matrix
Thanks for any help!
Well, it sounds like you have a fair understanding of what you want to do! Having a pre-calibrated stereo camera (like the Bumblebee) will deliver point-cloud data when you need it - but it also sounds like you basically want to use the same images to perform visual odometry (certainly the correct term) and provide absolute orientation from a last known GPS position, when GPS breaks down.
First things first - I wonder if you've had a look at the literature for some more ideas: As ever, it's often just about knowing what to google for. The whole idea of "sensor fusion" for navigation - especially in built up areas where GPS is lost - has prompted a whole body of research. So perhaps the following (intersecting) areas of research might be helpful to you:
Navigation in 'urban canyons'
Structure-from-motion for navigation
SLAM
Ego-motion
Issues you are going to encounter with all these methods include:
Handling static vs. dynamic scenes (i.e. ones that change purely based on the camera motion - c.f. others that change as a result of independent motion occurring in the scene: trees moving, cars driving past, etc.).
Relating amount of visual motion to real-world motion (the other form of "calibration" I referred to - are objects small or far away? This is where the stereo information could prove extremely handy, as we will see...)
Factorisation/optimisation of the problem - especially with handling accumulated error along the path of the camera over time and with outlier features (all the tricks of the trade: bundle adjustment, ransac, etc.)
So, anyway, pragmatically speaking, you want to do this in python (via the OpenCV bindings)?
If you are using OpenCV 2.4 the (combined C/C++ and Python) new API documentation is here.
As a starting point I would suggest looking at the following sample:
/OpenCV-2.4.2/samples/python2/lk_homography.py
Which provides a nice instance of basic ego-motion estimation from optic flow using the function cv2.findHomography.
Of course, this homography H only applies if the points are co-planar (i.e. lying on the same plane under the same projective transform - so it'll work on videos of nice flat roads). BUT - by the same principle we could use the Fundamental matrix F to represent motion in epipolar geometry instead. This can be calculated by the very similar function cv2.findFundamentalMat.
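For reference, a hedged sketch of that homography route (the point arrays below are synthetic stand-ins; lk_homography.py derives them from tracked optical flow instead):

```python
import cv2
import numpy as np

# Tracked points in the previous and current frame, e.g. from
# cv2.goodFeaturesToTrack followed by cv2.calcOpticalFlowPyrLK.
pts_prev = np.array([[10, 10], [200, 15], [30, 180], [220, 190], [120, 100]],
                    dtype=np.float32)
pts_curr = pts_prev + np.float32([5, 2])          # pretend the whole plane shifted

# Planar scene (or pure rotation): a homography explains the motion.
H, inliers = cv2.findHomography(pts_prev, pts_curr, cv2.RANSAC, 3.0)

# General scene: fall back to the fundamental matrix instead.
# F, mask = cv2.findFundamentalMat(pts_prev, pts_curr, cv2.FM_RANSAC)
```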
Ultimately, as you correctly specify above in your question, you want the Essential matrix E - since this is the one that operates in actual physical coordinates (not just mapping between pixels along epipoles). I always think of the Fundamental matrix as a generalisation of the Essential matrix in which the (inessential) knowledge of the camera intrinsic calibration (K) is omitted, and vice versa.
Thus, the relationships can be formally expressed as:
E = K'^T F K
So, you'll need to know something of your stereo camera calibration K after all! See the famous Hartley & Zisserman book for more info.
You could then decompose the Essential matrix to recover your R orientation and t displacement (cv2.decomposeProjectionMatrix is the related helper for full 3x4 projection matrices; a direct SVD-based decomposition of E is sketched below).
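A sketch of that decomposition, following the standard SVD recipe in Hartley & Zisserman; newer OpenCV versions bundle this, plus the cheirality check, into cv2.recoverPose:

```python
import numpy as np

def decompose_essential(E):
    """Return the two candidate rotations and the translation direction from E.

    E factors as [t]_x R, with t recoverable only up to scale. Of the four
    (R, t) combinations, the physically valid one places triangulated points
    in front of both cameras (the cheirality check, not shown here).
    """
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:          # enforce proper rotations (det = +1)
        U[:, -1] *= -1
    if np.linalg.det(Vt) < 0:
        Vt[-1, :] *= -1
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    t = U[:, 2]                       # translation direction (unit vector, sign ambiguous)
    return R1, R2, t

# E itself comes from the relation above, e.g. E = K.T @ F @ K for a single camera
# moving through the scene (so K' = K).
```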
Hope this helps! One final word of warning: this is by no means a "solved problem" for the complexities of real world data - hence the ongoing research!
