I am looking to implement camera calibration using OpenCV in Python. Instead of a chessboard as the target object, I would like to use a single circle with a fixed radius on a board.
Is there any existing implementation of this? It would be great if someone could share any repositories. Can I use OpenCV's built-in calibration functions?
Thanks in advance!
Jubair
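For what it's worth: OpenCV's built-in calibration has no detector for a single circle; cv2.calibrateCamera is normally fed corners from a chessboard or blob centers from a circle grid found with cv2.findCirclesGrid. A minimal sketch of the circle-grid route, assuming a standard 4x11 asymmetric grid and hypothetical filenames:

```python
import glob
import cv2
import numpy as np

# Assumed target: a 4x11 asymmetric circle GRID (a single circle is
# not supported by the built-in detectors). Units are arbitrary; the
# pattern scale does not affect the intrinsics.
pattern_size = (4, 11)
obj_p = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
for i in range(pattern_size[1]):        # rows
    for j in range(pattern_size[0]):    # columns; every other row is offset
        obj_p[i * pattern_size[0] + j] = ((2 * j + i % 2), i, 0)

obj_points, img_points = [], []
for fname in glob.glob("calib_*.png"):  # hypothetical filenames
    img = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, centers = cv2.findCirclesGrid(
        img, pattern_size, flags=cv2.CALIB_CB_ASYMMETRIC_GRID)
    if found:
        obj_points.append(obj_p)
        img_points.append(centers)

ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, img.shape[::-1], None, None)
print("camera matrix:\n", K)
```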
Related
I'm trying to automatically draw a mesh or grid over a face, similar to the image below, to use the result in a blog post that I'm writing. However, my knowledge of computer vision is not enough to recognize which model or algorithm is behind these types of cool visualizations.
Could someone help by pointing me to a link to read or a starting point?
Using Python, OpenCV and dlib, the closest thing I found is something called Delaunay triangulation, but judging by the results I'm not sure if that's exactly what I'm looking for.
Putting it in a few words, what I have so far is:
Detect all faces in the image and calculate their landmarks using the dlib.get_frontal_face_detector() and dlib.shape_predictor() methods from dlib.
Use the cv2.Subdiv2D() method from OpenCV to compute a 2D subdivision based on my landmarks. In particular, I'm getting the Delaunay triangulation using the getTriangleList() method on the resulting subdivision.
The complete code is available here.
However, the result is not so attractive, perhaps because the subdivision uses triangles instead of polygons, and I want to check whether I can improve it!
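For reference, a minimal sketch of the two steps above; the predictor file is dlib's standard 68-landmark model, and the image path is a placeholder:

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.imread("face.jpg")                    # placeholder image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
h, w = img.shape[:2]

for rect in detector(gray, 1):
    shape = predictor(gray, rect)
    points = [(float(shape.part(i).x), float(shape.part(i).y))
              for i in range(68)]

    # Delaunay subdivision over the image bounds.
    subdiv = cv2.Subdiv2D((0, 0, w, h))
    for p in points:
        subdiv.insert(p)

    # Each row of getTriangleList() is (x1, y1, x2, y2, x3, y3).
    for t in subdiv.getTriangleList():
        pts = t.reshape(3, 2).astype(np.int32)
        # Skip triangles that touch the virtual outer vertices.
        if (pts >= 0).all() and (pts[:, 0] < w).all() and (pts[:, 1] < h).all():
            cv2.polylines(img, [pts], True, (0, 255, 0), 1)

cv2.imwrite("mesh.jpg", img)
```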
I'm a beginner in Python and Raspberry Pi. For a school project we're using a Raspberry Pi 3 and a (fixed) camera module, and the idea is that when you move something in the camera frame, the program outputs where this object is relative to the camera (in 3D) and the distance the object has traveled. It would be something like (x=2.98m, y=5.56m, z=3.87m, distance=0.677m), all of this using optical flow and Python. Is it possible to make this, and if not, is there something close or similar to this?
Any help appreciated.
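For the optical-flow part, a minimal sketch with OpenCV's Lucas-Kanade tracker (the camera index is an assumption). Note this only gives 2D motion in pixels, which is why the answer below brings in calibration:

```python
import cv2

cap = cv2.VideoCapture(0)                       # assumed camera index
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                             qualityLevel=0.3, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    p1, st, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
    if p1 is None:
        break                                   # lost all points; re-detect in practice
    good = st.flatten() == 1
    flow = (p1 - p0)[good]                      # per-point 2D displacement (pixels)
    if len(flow):
        print("mean 2D motion (pixels):", flow.mean(axis=0).ravel())
    prev_gray, p0 = gray, p1[good].reshape(-1, 1, 2)
```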
The first thing you can do is camera calibration. If you have the intrinsics of the camera, you can infer the direction of the 3D vector from your camera to your object.
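For illustration, a small sketch of that back-projection; the intrinsic matrix and pixel coordinates below are made-up values:

```python
import numpy as np

# Back-project a pixel (u, v) to a direction ray in the camera frame,
# given the intrinsic matrix K from calibration (values are placeholders).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
u, v = 400.0, 300.0                     # detected object position (example)

ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
ray /= np.linalg.norm(ray)              # unit direction in camera coordinates
print("direction to object:", ray)
```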
The problem is how to find the distance, or the length of this vector.
There are two solutions I can think of:
Use two cameras: if you have two calibrated cameras (intrinsics and extrinsics), you can triangulate the two image points of the detected object and get the object's position in 3D (a sketch follows below).
If you know the size of the object, you can infer its distance from the camera using the intrinsics.
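A sketch of the first option using cv2.triangulatePoints; the projection matrices and image points below are placeholders you would replace with your calibration results:

```python
import cv2
import numpy as np

# Projection matrices P = K @ [R | t] of the two calibrated cameras.
# Placeholder setup: identical intrinsics, 10 cm horizontal baseline.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

# Matching image points of the object in each camera (2x1 arrays).
pt1 = np.array([[340.0], [240.0]])
pt2 = np.array([[300.0], [240.0]])

X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)   # 4x1 homogeneous point
X = (X_h[:3] / X_h[3]).ravel()                  # 3D position in meters
print("object position:", X, "distance:", np.linalg.norm(X))
```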
Hope this helps.
I followed the OpenCV tutorial called 'Camera Calibration and 3D Reconstruction' (https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_calib3d/py_table_of_contents_calib3d/py_table_of_contents_calib3d.html) and I do not understand what the correlation is between camera calibration and creating a depth map. I can see that the focal length is in the equation for disparity, but according to the code there is no need to compute the camera matrix because it is not used anywhere.
Am I right, or can someone point out my mistake?
All code is in the opencv link.
I'm using python.
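For reference, the usual connection is the rectified-stereo relation Z = f * B / d: the focal length f comes from the camera matrix and B is the baseline, so calibration is hiding inside the disparity-to-depth step even when the sample code never builds the camera matrix explicitly. A toy calculation (all numbers made up):

```python
# Depth from disparity for a rectified stereo pair (toy numbers):
f = 700.0        # focal length in pixels, from the camera matrix
B = 0.06         # baseline between the two cameras, in meters
d = 42.0         # disparity of a pixel, in pixels

Z = f * B / d    # depth of that pixel, in meters
print(f"depth: {Z:.3f} m")   # depth: 1.000 m
```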
I have a fixed camera and I need to check whether its position or orientation has changed. I am trying to use OpenCV for this (calculating differences between a reference image and a new one), but I am pretty new to OpenCV (and image processing in general), and I am not really sure which specific algorithm would be best to use, or how to interpret the results to tell whether the camera has been moved or rotated. Any ideas?
Please help!
One way to do it would be to register the two frames to each other using affine image registration from OpenCV. From this you can extract the rotation and displacement difference between the two frames. Unfortunately, this will only work well for in-plane rotations, but I still think it is your best bet.
If you post some sample code and data I would be happy to take a look.
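Here's roughly what I mean, sketched with ORB feature matches and cv2.estimateAffinePartial2D; the image paths are placeholders:

```python
import cv2
import numpy as np

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
cur = cv2.imread("current.png", cv2.IMREAD_GRAYSCALE)

# Match ORB features between the reference frame and the new frame.
orb = cv2.ORB_create(1000)
k1, d1 = orb.detectAndCompute(ref, None)
k2, d2 = orb.detectAndCompute(cur, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

src = np.float32([k1[m.queryIdx].pt for m in matches])
dst = np.float32([k2[m.trainIdx].pt for m in matches])

# Robustly estimate a similarity transform (rotation + translation + scale).
M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)

angle = np.degrees(np.arctan2(M[1, 0], M[0, 0]))   # in-plane rotation
shift = M[:, 2]                                    # displacement in pixels
print(f"rotation: {angle:.2f} deg, shift: {shift}")
```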
You can use Canny or HoughLinesP to find lines; from these you get lines in both frames that you can compare. This may be effective against a simple background. If there are objects in your picture, try SIFT or another feature extractor; you can use the matched features to find the relationship between the two frames.
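A sketch of the line-based route; the image paths are placeholders and the thresholds are rough guesses to tune for your scene:

```python
import cv2
import numpy as np

def detect_lines(gray):
    # Edges first, then the probabilistic Hough transform for segments.
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=80, minLineLength=60, maxLineGap=10)
    # Each entry is (x1, y1, x2, y2); compare endpoints/angles across frames.
    return [] if lines is None else lines.reshape(-1, 4)

ref_lines = detect_lines(cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE))
new_lines = detect_lines(cv2.imread("current.png", cv2.IMREAD_GRAYSCALE))
print(len(ref_lines), "vs", len(new_lines), "line segments")
```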
I want to measure the distance to an object using a stereo camera set. So far I have calibrated and rectified my cameras and generated a disparity map, using OpenCV's stereo_match.
But I am stuck on how to move forward and find the distance of an object in the image using the disparity matrix.
I am also in doubt about the disparity matrix itself; can someone please explain what its contents are?
Also, how can I improve the disparity map? I have tried changing the parameters, but with little success.
Any help would be appreciated: some sample code or a link.
Thank you
There is code on stereo vision, using OpenCV and Python, available on GitHub:
https://github.com/LearnTechWithUs/Stereo-Vision
The explanations are in German though...
There is also a YouTube video of the project: https://www.youtube.com/watch?v=xjx4mbZXaNc
The program is able to measure the distance of an object present in the image.
See https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#reprojectimageto3d
"Q – 4 x 4 perspective transformation matrix that can be obtained with stereoRectify()."
If you follow https://github.com/opencv/opencv/blob/master/samples/python/stereo_match.py you will note on line 57 that you need to divide disp by 16.0 to get the true disparity values.
As for the parameters: keep trying, it really depends on your setup. For minDisparity and numDisparities you can manually inspect your images to find a suitable range.
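Putting the pieces together, a rough sketch: compute the disparity with StereoSGBM, divide by 16, and reproject with the Q matrix. The Q below is a placeholder built from assumed values (f = 700 px, principal point (320, 240), 6 cm baseline); in practice use the Q returned by cv2.stereoRectify.

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder paths
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Tune minDisparity/numDisparities by inspecting your image pair;
# numDisparities must be a multiple of 16.
stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                               blockSize=5)
disp = stereo.compute(left, right).astype(np.float32) / 16.0

# Q is the 4x4 reprojection matrix from cv2.stereoRectify; placeholder
# values here assume f = 700 px, cx = 320, cy = 240, baseline 0.06 m.
Q = np.float32([[1, 0, 0,      -320],
                [0, 1, 0,      -240],
                [0, 0, 0,       700],
                [0, 0, 1 / 0.06,  0]])

points_3d = cv2.reprojectImageTo3D(disp, Q)     # HxWx3 array of 3D points
mask = disp > 0                                 # keep pixels with valid disparity
depths = points_3d[..., 2][mask]
print("median depth of valid pixels:", np.median(depths), "m")
```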