Convert camera input into 3D models - Python

I want to work on a personal project that involves creating a 3D animated environment.
Is there a way to convert camera input into a 3D model with Python code?
If so, how many cameras do I need?

Yes. You can use a stereo or trinocular camera rig with OpenCV:
https://docs.opencv.org/master/dd/d53/tutorial_py_depthmap.html
Alternatively, you can build a 3D scanner from a single camera and photogrammetry software:
https://hackaday.com/2019/04/07/get-great-3d-scans-with-open-photogrammetry/

Related

Stereo optical flow with OpenCV and Python: reconstructing 3D coordinates with cv.triangulatePoints

I am trying to write a script for stereo optical flow using OpenCV. My problem is reconstructing Cartesian coordinates from two images. I have two virtual cameras placed at a 90° angle to each other. I used cv.stereoCalibrate to calibrate them, but I don't know how to recover the Cartesian coordinate system from the two views. Can someone help me, please?
Thank you for your time.

Camera Calibration using one Circle

I am looking to implement camera calibration using OpenCV in Python. Instead of a chessboard as the calibration target, I would like to use a single circle of fixed radius on a board.
Is there an existing implementation of this? It would be great if someone could share a repository. Can I use OpenCV's built-in calibration functions?
Thanks in advance!
Jubair

Calculate the motion of a simple object in 3D

I'm a beginner with Python and the Raspberry Pi. For a school project we're using a Raspberry Pi 3 with a fixed camera module. The idea is that when you move an object in the camera frame, the program outputs the object's position relative to the camera (in 3D) and the distance it has traveled, e.g. (x=2.98m, y=5.56m, z=3.87m, distance=0.677m), all using optical flow and Python. Is it possible to do this, and if not, is there something close or similar?
Any help appreciated.
The first thing you can do is camera calibration. If you have the intrinsics of the camera you can infer the direction of the vector in 3D from your camera to your object.
The problem is how to find the distance, or the length of this vector.
There are two solutions I can think of:
Use two cameras: if you have two calibrated cameras (intrinsics and extrinsics) you can triangulate the object's detected image points and get its position in 3D.
Use a known object size: if you know the size of the object, you can infer its distance from the camera using the intrinsics.
Hope that helps.
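The known-size case is just the pinhole camera model rearranged; a tiny sketch with made-up numbers:

```python
# Pinhole model: an object of real width W metres that appears w pixels
# wide sits at distance Z = f * W / w, where f is the focal length in
# pixels (taken from the camera matrix obtained by calibration).

def distance_from_size(focal_px, real_width_m, width_px):
    return focal_px * real_width_m / width_px

# e.g. f = 800 px and a 0.20 m wide object imaged 80 px wide -> 2.0 m away
d = distance_from_size(800.0, 0.20, 80.0)
```

Combined with the 3D direction vector from the intrinsics, this distance gives you the full (x, y, z) position of the object.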

Is there any way in Python to handle 3D image processing, like pasting one 3D image over another 3D image completely?

I've been doing transparent pasting of image object over one another using PIL.
from PIL import Image

img1 = Image.open("bg")   # background image
img2 = Image.open("fg")   # foreground image
# Use the foreground's alpha channel as the paste mask
img1.paste(img2, (0, 0), img2.convert("RGBA"))
img1.save("final.png", "PNG")
This script works fine for 2D images; I just want someone to point me in the right direction. I want to create characters in 3D, so I'm looking for a solution.
Thanks in advance. :)
If you have a 3D model of a human and another of a hat, you can load both into the same 3D engine, adjust transformations (e.g. position, rotate and scale the hat so it looks right on the human) and render the unified scene as a single image.
Most 3D engines support this; it depends on what you're comfortable with.
While you could, in theory, use OpenCV built from source with contributed modules such as viz (which uses VTK behind the scenes and includes samples), or even better the ovis module, which uses Ogre3D,
in practice there are so many layers in between that I'd go straight for the engine rather than OpenCV with an integration.
For example, Ogre3D has Python bindings available directly, and there's pyglet and many other 3D libraries.
I would warmly recommend trying Open3D, though.
It has a wealth of 3D computer vision tools available, but for your scenario in particular, its 3D renderer is great and easy to use.
To load a 3D model, check out the mesh file I/O tutorial, and for rendering look at visualisation.
Note that Open3D ships with plenty of Python examples and even Jupyter notebooks (e.g. file I/O, visualisation) to get started.

how to overlay 3d objects on webcam with opencv and python?

I want to overlay 3D objects on a live webcam feed for my project. I have already overlaid 2D objects like goggles, a mustache, and a hat. Now I want to overlay 3D objects like a beard, a mustache, and hair. I searched some articles and tutorials, but all of them were ambiguous and didn't explain where to start. From what I have learned, I need to create 3D objects using Blender, import them using OpenGL, and then somehow anchor them to the facial landmarks. I want to know what I need to learn to achieve this. I have read the previous question on this (source1) and it didn't help much. Also, I have read several blogs like this,
this, and the best one here, and various others while surfing. I know I am entering AR/VR territory, but I'm ready to learn what's required to get my work done.
There are various libraries/frameworks that don't require you to code anything, like SparkAR, Virtualtryon, ditto, etc., but they don't teach you anything. I want to learn how to do these things myself. If I create a 3D object like hair (hair like Goku's Super Saiyan), a mustache, or eyes in Blender, how can I overlay it in real time on a webcam using OpenCV or any other Python-compatible library/framework? I mean overlaying 3D hair over my own hair, overlaying 3D eyes over my eyes, and so on. What is required to do such a task? What do I need to learn?
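As a concrete starting point, the final compositing step (pasting a pre-rendered RGBA image onto a webcam frame) is plain alpha blending. Here overlay_rgba is an illustrative helper; in a real pipeline the anchor position would come from facial landmarks and the sprite from a pose-aware 3D render (e.g. exported from Blender):

```python
import numpy as np

def overlay_rgba(frame_bgr, sprite_rgba, x, y):
    """Alpha-blend an RGBA sprite onto a BGR frame at top-left pixel (x, y).

    In a live setup, frames come from cv2.VideoCapture and (x, y) from a
    facial-landmark detector (e.g. dlib or mediapipe).
    """
    h, w = sprite_rgba.shape[:2]
    roi = frame_bgr[y:y + h, x:x + w].astype(np.float32)
    rgb = sprite_rgba[..., :3].astype(np.float32)
    alpha = sprite_rgba[..., 3:4].astype(np.float32) / 255.0
    blended = alpha * rgb + (1.0 - alpha) * roi
    frame_bgr[y:y + h, x:x + w] = blended.astype(np.uint8)
    return frame_bgr

# Synthetic demo: a solid red, fully opaque 10x10 sprite on a black frame
frame = np.zeros((100, 100, 3), dtype=np.uint8)
sprite = np.zeros((10, 10, 4), dtype=np.uint8)
sprite[..., 2] = 255   # red channel (BGR order)
sprite[..., 3] = 255   # fully opaque
out = overlay_rgba(frame, sprite, 20, 30)
```

This is the "last mile"; the harder part (estimating head pose and rendering the 3D asset from the matching viewpoint each frame) is what the OpenGL/3D-engine side handles.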
