I want to overlay 3D objects on a live webcam feed for my project. I have already overlaid 2D objects like goggles, a mustache, and a hat. Now I want to overlay 3D objects like a beard, a mustache, and hair. I have searched through some articles and tutorials, but all of them were vague and didn't explain where to start. From what I have learned, I need to create the 3D objects in Blender, import them using OpenGL, and then somehow anchor them to the facial landmarks. I want to know what I need to learn to achieve this objective. I have read the previous question on this source1 and it didn't help much. Also, I have read several blogs like this,
this, and the best one here, and various others while surfing. I know I am entering AR/VR territory, but I am ready to learn whatever is required to get my work done.
There are various libraries/frameworks which don't require you to code anything, like SparkAR, Virtualtryon, ditto, etc., but they don't teach you anything. I want to learn how to do these things by myself. If I create my 3D object, like hair (hair like Goku in Super Saiyan), a mustache, or eyes in Blender, how can I overlay it in real time on the webcam feed using OpenCV or any other Python-compatible library/framework? I mean overlaying 3D hair over my own hair, overlaying 3D eyes over my eyes, and so on. What are the things required to do such a task? What do I need to learn?
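As far as I understand, the bridge between 2D facial landmarks and placing a 3D object is head-pose estimation. Below is a minimal sketch of that step only, assuming landmark positions from something like dlib's 68-point predictor; the generic 3D reference points and camera numbers are rough assumptions, not values from any particular library:

import cv2
import numpy as np

# approximate 3D reference points for six landmarks (nose tip, chin, eye corners,
# mouth corners) in an arbitrary model coordinate system -- assumed values
model_points = np.array([
    (0.0, 0.0, 0.0),           # nose tip
    (0.0, -330.0, -65.0),      # chin
    (-225.0, 170.0, -135.0),   # left eye, outer corner
    (225.0, 170.0, -135.0),    # right eye, outer corner
    (-150.0, -150.0, -125.0),  # left mouth corner
    (150.0, -150.0, -125.0),   # right mouth corner
], dtype=np.float64)

def estimate_head_pose(image_points, frame_w, frame_h):
    # image_points: (6, 2) float array with the same six landmarks in the webcam frame
    focal = frame_w  # crude pinhole approximation: focal length ~ frame width
    camera_matrix = np.array([[focal, 0, frame_w / 2],
                              [0, focal, frame_h / 2],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))  # assume no lens distortion
    ok, rvec, tvec = cv2.solvePnP(model_points, image_points,
                                  camera_matrix, dist_coeffs)
    return rvec, tvec  # rotation/translation with which to place the 3D asset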
Related
I've been doing transparent pasting of image objects over one another using PIL:
from PIL import Image

img1 = Image.open("bg")  # background frame
img2 = Image.open("fg")  # foreground overlay with transparency
# paste the foreground at (0, 0), using its alpha channel as the mask
img1.paste(img2, (0, 0), img2.convert("RGBA"))
img1.save("final.png", "PNG")
This script works fine for 2D images; I just want someone to point me in the right direction. I want to create the characters in 3D, so I'm looking for a solution that works with 3D models.
Thanks in advance. :)
If you have a 3D model of a human and another one of a hat, you can load both in the same 3D engine, adjust the transformations (e.g. position, rotate and scale the hat so it sits correctly on the human) and render the unified scene as a single image.
Most 3D engines support this; it depends on what you're comfortable with.
While you could, in theory, use OpenCV built from source with contributed modules such as viz (which uses VTK behind the scenes and includes samples), or even better the ovis module, which uses Ogre3D, in practice there are so many layers in between that I'd go straight for the engine rather than through an OpenCV integration.
For example, Ogre3D has Python bindings available directly, and there's also pyglet and many other 3D libraries.
I would warmly recommend trying Open3D though.
It's got a wealth of 3D computer vision tools available, but for your scenario in particular its 3D renderer is great and easy to use.
To load a 3D model check out the Mesh file io tutorial and for rendering look at visualisation.
Note that Open3D ships with plenty of Python examples and even Jupyter notebooks (e.g. file io, visualisation) to get you started.
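For reference, here is a minimal sketch of that workflow with Open3D; the file name is a placeholder for whatever mesh you export from Blender (.obj, .ply, etc.):

import open3d as o3d

# load the artist-made asset (placeholder path) and compute normals so shading looks right
mesh = o3d.io.read_triangle_mesh("hair.obj")
mesh.compute_vertex_normals()

# quick interactive render; see the visualisation tutorials above for off-screen
# rendering and compositing with a camera frame
o3d.visualization.draw_geometries([mesh])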
I'm writing an application for my undergrad dissertation which, at the fundamental level, allows tracking of multiple objects in a video feed using the OpenCV library. To develop this idea a bit more, I'd like to be able to draw a line on the screen displaying the history of where the bounding box has been around the object I'm tracking.
I've noticed there's no sort of inbuilt function for doing this, so any pointers towards crafting something like this would be useful, since there isn't much online about this topic.
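One common way to craft this yourself is to keep the recent bounding-box centres in a buffer and draw them each frame with cv2.polylines. A minimal sketch, assuming the bounding box arrives as (x, y, w, h) from whatever tracker is in use:

from collections import deque

import cv2
import numpy as np

trail = deque(maxlen=200)  # keep the last 200 centre points

def draw_trail(frame, bbox):
    x, y, w, h = bbox                        # bbox format assumed from your tracker
    trail.append((x + w // 2, y + h // 2))   # store the box centre
    if len(trail) > 1:
        pts = np.array(trail, dtype=np.int32).reshape(-1, 1, 2)
        cv2.polylines(frame, [pts], isClosed=False, color=(0, 255, 0), thickness=2)
    return frame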
I am trying to make a project in which I manipulate an image with several tampering attacks. Since I am new to this, I referred to some research papers and found that to attack an image, one simply takes an object from one image and pastes it onto another. For example, you can see this image:
I want to implement the same (adding or cutting out a specific object from an image) in my project, but I am not able to figure out how to implement this with Spyder and OpenCV. I tried to search, but the results were about frameworks and deep learning, which are out of my league for now. Is there any simpler way to achieve this?
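The simplest form of such a splicing/copy-move attack is just copying a rectangular block of pixels with NumPy slicing; a minimal sketch (the file names and coordinates are made-up placeholders):

import cv2

img = cv2.imread("target.jpg")   # image to tamper with (placeholder name)
donor = cv2.imread("donor.jpg")  # image to take the object from; can be the same image

# region of the object to copy: x, y, width, height (placeholder values)
x, y, w, h = 100, 50, 80, 120
patch = donor[y:y + h, x:x + w].copy()

# paste the patch at a new location in the target image
tx, ty = 300, 200
img[ty:ty + h, tx:tx + w] = patch

cv2.imwrite("tampered.jpg", img)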
I would like to generate 2D images of 3D books with custom covers on demand.
Ideally, I'd like to import a 3D model of a book (created by an artist), change the cover texture to the custom one, and export a bitmap image (jpeg, png, etc...). I'm fairly ignorant about 3D graphics, so I'm not sure if that's possible or feasible, but it describes what I want to do. Another method would be fine if it accomplishes something similar. Like maybe I could start with a rendered 2D image and distort the custom cover somehow then put it in the right place over the original image?
It would be best if I could do this using Python, but if that's not possible, I'm open to other solutions.
Any suggestions on how to accomplish this?
Sure it's possible.
Blender would probably be overkill, but you can script Blender with Python, so that's one solution.
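A rough sketch of that scripting route (run headless as blender book.blend --background --python render_cover.py); the image datablock name "cover" and the file paths are assumptions about how the artist set up the .blend file:

import bpy

# swap the cover texture: assumes the .blend references an image datablock named "cover"
cover = bpy.data.images["cover"]
cover.filepath = "//custom_cover.png"  # path relative to the .blend file
cover.reload()

# render the scene to a still image
scene = bpy.context.scene
scene.render.filepath = "//book_render.png"
bpy.ops.render.render(write_still=True)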
The latter solution (distorting a flat 2D cover and compositing it over a pre-rendered book image) is (I'm pretty sure) what most of those e-book cover generators do, which is why they always look a little off.
PIL is an excellent tool for manipulating images and pixel data, so if you wanted to distort your own cover it would be a great tool to look at; if it's too slow, it's trivial to convert the image to a NumPy array to get some speedup.
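As a concrete sketch of the PIL route: warp the flat cover onto the four corners of the book face in a pre-rendered image, then composite. The corner coordinates and file names below are placeholders you'd measure from your own render:

import numpy as np
from PIL import Image

def perspective_coeffs(dst_quad, src_quad):
    # solve for the 8 coefficients PIL's PERSPECTIVE transform expects,
    # mapping output-image points (dst_quad) back to cover-image points (src_quad)
    rows = []
    for (x, y), (u, v) in zip(dst_quad, src_quad):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y])
    A = np.array(rows, dtype=np.float64)
    b = np.array(src_quad, dtype=np.float64).reshape(8)
    return np.linalg.solve(A, b)

book = Image.open("book_render.png").convert("RGBA")    # pre-rendered book (placeholder)
cover = Image.open("custom_cover.png").convert("RGBA")  # flat custom cover (placeholder)

# where the four cover corners should land on the book render (placeholder values)
dst_quad = [(120, 80), (420, 60), (430, 560), (110, 540)]
src_quad = [(0, 0), (cover.width, 0), (cover.width, cover.height), (0, cover.height)]

coeffs = perspective_coeffs(dst_quad, src_quad)
warped = cover.transform(book.size, Image.PERSPECTIVE, coeffs, Image.BICUBIC)

# pixels outside the warped quad come out fully transparent, so the warped image
# can serve as its own paste mask
book.paste(warped, (0, 0), warped)
book.save("final_book.png")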
I am trying to detect a marker in a webcam video feed and overlay it with a 3d object - pretty much exactly like this: http://www.morethantechnical.com/2009/06/28/augmented-reality-with-nyartoolkit-opencv-opengl/
I know ARToolkit is the best module for this, but I was hoping to just use OpenCV in Python since I don't know nearly enough C/C++ to be able to use ARToolkit. I am hoping someone will be able to get me on the right track towards detecting the marker and determining its location and orientation etc., since I have no idea how best to go about this or what functions I should be using.
OpenCV doesn't have marker detection / tracking functionality out of the box. However, it provides all the algorithms needed, so it's fairly easy to implement your own.
The article you are referring to uses OpenCV only for video grabbing. The marker detection is done by NyARToolkit, which is derived from ARToolkit. NyARToolkit has versions for Java, C# and ActionScript.
ARToolkit is mostly written in plain C without using fancy C++ features. It's probably easier to use than you think. The documentation contains well-explained tutorials, e.g. http://www.hitl.washington.edu/artoolkit/documentation/devstartup.htm
The introductory documentation can help you understand the process of marker detection even if you decide not to use ARToolkit.
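To give a rough idea of that process with plain OpenCV: threshold the frame, look for large quadrilateral contours, and recover the marker's pose from its four corners with solvePnP. A simplified sketch (marker size and camera parameters are assumed; a real ARToolkit-style detector also checks the marker's interior pattern and orders the corners consistently):

import cv2
import numpy as np

marker_size = 5.0  # marker side length in your chosen units (assumed)
# 3D coordinates of the marker corners in the marker's own coordinate system
object_corners = np.array([[0, 0, 0], [marker_size, 0, 0],
                           [marker_size, marker_size, 0], [0, marker_size, 0]],
                          dtype=np.float64)

def detect_marker(frame, camera_matrix, dist_coeffs):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]
    for cnt in contours:
        approx = cv2.approxPolyDP(cnt, 0.05 * cv2.arcLength(cnt, True), True)
        # keep reasonably large quadrilaterals as marker candidates
        if len(approx) == 4 and cv2.contourArea(approx) > 1000:
            # note: in practice the corners must be re-ordered to match object_corners
            image_corners = approx.reshape(4, 2).astype(np.float64)
            ok, rvec, tvec = cv2.solvePnP(object_corners, image_corners,
                                          camera_matrix, dist_coeffs)
            if ok:
                return rvec, tvec  # pose with which to place the 3D object
    return None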
I think the most common way to perform marker detection using Python and OpenCV is to use SURF descriptors.
I have found this video and the code linked on this page very useful; you can download the code here. I don't know how to overlay it with a 3D object, but I'm sure you can do something with pygame or matplotlib.
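As a rough sketch of that descriptor-matching approach (using ORB here instead of SURF, since SURF sits in the non-free opencv-contrib modules; the marker image path is a placeholder): match features between a reference picture of the marker and each frame, then estimate a homography to locate the marker's corners:

import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

marker = cv2.imread("marker.png", cv2.IMREAD_GRAYSCALE)  # placeholder reference image
kp_m, des_m = orb.detectAndCompute(marker, None)

def locate_marker(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp_f, des_f = orb.detectAndCompute(gray, None)
    if des_f is None:
        return None
    matches = sorted(matcher.match(des_m, des_f), key=lambda m: m.distance)[:50]
    if len(matches) < 10:
        return None
    src = np.float32([kp_m[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None
    # project the marker's corners into the frame; from those four points the pose
    # can be recovered with cv2.solvePnP as in the earlier sketch
    h, w = marker.shape
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(corners, H)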