I want to measure the distance of an object using a stereo camera pair. So far I have calibrated and rectified my cameras and generated a disparity map using OpenCV's stereo_match sample.
But I am stuck on how to move forward and find the distance of an object in the image using the disparity matrix.
I am also in doubt about the disparity matrix. Can someone please explain what it contains?
Also, how can I improve the disparity map? I have tried changing the parameters, but with little success.
Any help would be appreciated: sample code or any links.
Thank you
There is code for stereo vision, using OpenCV and Python, available on GitHub:
https://github.com/LearnTechWithUs/Stereo-Vision
The explanations are in German though...
There is also a YouTube video of the project: https://www.youtube.com/watch?v=xjx4mbZXaNc
The program is able to measure the distance of an object present in the image.
See https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#reprojectimageto3d
"Q – 4 x 4 perspective transformation matrix that can be obtained with stereoRectify()."
If you follow https://github.com/opencv/opencv/blob/master/samples/python/stereo_match.py you will note in line 57 that you need to divide disp by 16.0 to get real disparity values: StereoBM/StereoSGBM return fixed-point disparities with 4 fractional bits.
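Putting the two pieces together, here is a minimal sketch of going from disparity to distance. The file names, the pixel location, and the matcher settings are placeholders; Q is the reprojection matrix from your own cv2.stereoRectify() call.

    import cv2
    import numpy as np

    # Rectified left/right images (placeholders; yours come from remap()
    # after stereoRectify/initUndistortRectifyMap).
    img_left = cv2.imread("rect_left.png", cv2.IMREAD_GRAYSCALE)
    img_right = cv2.imread("rect_right.png", cv2.IMREAD_GRAYSCALE)

    stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96, blockSize=5)
    # StereoSGBM returns fixed-point disparities with 4 fractional bits,
    # hence the division by 16 noted above.
    disp = stereo.compute(img_left, img_right).astype(np.float32) / 16.0

    # Q: the 4x4 reprojection matrix from cv2.stereoRectify() (placeholder file).
    Q = np.load("Q.npy")
    points_3d = cv2.reprojectImageTo3D(disp, Q)  # per-pixel (X, Y, Z)

    # Distance at one pixel, in the units of your calibration (e.g. cm if
    # the checkerboard square size was given in cm). Invalid disparities
    # produce huge or infinite Z values, so mask them out in practice.
    y, x = 240, 320  # placeholder pixel on the object
    X, Y, Z = points_3d[y, x]
    print("Z = %.1f, euclidean distance = %.1f" % (Z, np.linalg.norm((X, Y, Z))))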
As for the parameters: keep trying; it really depends on your setup. For minDisparity and numDisparities you can manually inspect your images to find a suitable range.
I have an image of a dartboard (with the numbers 1-20). If I rotate this image by, say, 80 degrees, I want my Python code to compare the second image to the first one and determine that the second image is rotated 80 degrees relative to the first. How should I start developing such a piece of code?
Thank you!
Scikit-image is an amazing package with very good documentation. I would start by looking at the RANSAC algorithm.
In that example, they use corner detection via skimage.feature.corner_harris.
My idea: if the dartboard image doesn't have any distinct corners, you may try an edge operator instead. Then apply RANSAC and find the angle between the old coordinates and the new coordinates using basic trigonometry.
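A minimal sketch of that pipeline with scikit-image, assuming two images of the same board (the file names are placeholders, and ORB features are used here instead of corner_harris for simplicity):

    import numpy as np
    from skimage import io
    from skimage.feature import ORB, match_descriptors
    from skimage.measure import ransac
    from skimage.transform import EuclideanTransform

    img0 = io.imread("dartboard.png", as_gray=True)          # reference
    img1 = io.imread("dartboard_rotated.png", as_gray=True)  # rotated copy

    # Detect and describe keypoints in both images.
    orb = ORB(n_keypoints=500)
    orb.detect_and_extract(img0)
    kp0, desc0 = orb.keypoints, orb.descriptors
    orb.detect_and_extract(img1)
    kp1, desc1 = orb.keypoints, orb.descriptors

    # Match descriptors, then fit a rigid (rotation + translation) transform
    # with RANSAC so that bad matches are rejected as outliers.
    matches = match_descriptors(desc0, desc1, cross_check=True)
    src = kp0[matches[:, 0]][:, ::-1]  # (row, col) -> (x, y)
    dst = kp1[matches[:, 1]][:, ::-1]
    model, inliers = ransac((src, dst), EuclideanTransform,
                            min_samples=3, residual_threshold=2, max_trials=1000)

    print("estimated rotation: %.1f degrees" % np.rad2deg(model.rotation))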
I'm trying to automatically draw a mesh or grid over a face, similar to the image below, to use the result in a blog post that I'm writing. However, my knowledge of computer vision is not enough to recognize which model or algorithm is behind these kinds of visualizations.
Could someone point me to a link to read or a starting point?
Using Python, OpenCV and dlib, the closest thing I found is something called Delaunay triangulation, but looking at the results I'm not sure it's exactly what I'm after.
Putting it in a few words, what I have so far is:
Detect all faces in the image and calculate their landmarks using the dlib.get_frontal_face_detector() and dlib.shape_predictor() methods from dlib.
Use the cv2.Subdiv2D() method from OpenCV to compute a 2D subdivision based on my landmarks. In particular, I'm getting the Delaunay subdivision using the getTriangleList() method of the resulting subdivision (see the sketch below).
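Condensed, the two steps look roughly like this (the image path is a placeholder, and the predictor path stands for dlib's pretrained 68-landmark model, which is downloaded separately):

    import cv2
    import dlib
    import numpy as np

    img = cv2.imread("face.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    for face in detector(gray):
        shape = predictor(gray, face)

        # Delaunay subdivision over the 68 landmark points.
        subdiv = cv2.Subdiv2D((0, 0, img.shape[1], img.shape[0]))
        for p in shape.parts():
            subdiv.insert((float(p.x), float(p.y)))

        # getTriangleList() returns each triangle as 6 floats: x1,y1,x2,y2,x3,y3.
        for t in subdiv.getTriangleList():
            pts = t.reshape(3, 2).astype(np.int32)
            cv2.polylines(img, [pts], True, (0, 255, 0), 1)

    cv2.imwrite("face_mesh.jpg", img)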
The complete code is available here.
However, the result is not so attractive, perhaps because the subdivision uses triangles instead of polygons, and I want to check whether I can improve it!
I have a fixed camera and I need to check whether its position or orientation has changed. I am trying to use OpenCV for this (calculating differences between a reference image and a new one), but I am pretty new to OpenCV and image processing in general, and I am not sure which algorithm would be best to use, or how to interpret the results to tell whether the camera has been moved or rotated. Any ideas?
Please help.
One way to do it would be to register the two frames to each other using affine image registration from OpenCV. From this you can extract the rotation and displacement difference between the two frames. Unfortunately this will only work well for in-plane rotations, but I still think it is your best bet.
If you post some sample code and data I would be happy to take a look.
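In the meantime, here is a minimal sketch of the idea using OpenCV's ECC alignment (one way to do the registration; feature matching plus cv2.estimateAffinePartial2D would also work). File names and thresholds are placeholders:

    import cv2
    import numpy as np

    ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
    cur = cv2.imread("current.png", cv2.IMREAD_GRAYSCALE)

    # Estimate a Euclidean (rotation + translation) warp aligning cur to ref.
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 500, 1e-6)
    cc, warp = cv2.findTransformECC(ref, cur, warp, cv2.MOTION_EUCLIDEAN, criteria)

    # Extract in-plane rotation and translation from the 2x3 warp matrix.
    angle = np.rad2deg(np.arctan2(warp[1, 0], warp[0, 0]))
    dx, dy = warp[0, 2], warp[1, 2]
    print("rotation: %.2f deg, shift: (%.1f, %.1f) px" % (angle, dx, dy))

    # Flag movement if either exceeds a tolerance (tune for your camera).
    moved = abs(angle) > 0.5 or np.hypot(dx, dy) > 2.0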
You can use Canny or HoughLinesP to find lines; from these you can extract lines in both frames and compare them. This may be effective against a simple background. If there are objects in your picture, try SIFT or another feature extractor; you can match features to find the relationship between the two frames.
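A rough sketch of the line-comparison idea (only sensible for simple scenes with strong straight edges; file names and thresholds are placeholders):

    import cv2
    import numpy as np

    def line_angles(path):
        # Detect straight edges and return their orientations in degrees.
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        edges = cv2.Canny(img, 50, 150)
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                                minLineLength=50, maxLineGap=5)
        if lines is None:
            return np.array([])
        x1, y1, x2, y2 = lines[:, 0].T
        return np.rad2deg(np.arctan2(y2 - y1, x2 - x1))

    a_ref = line_angles("reference.png")
    a_cur = line_angles("current.png")
    # If the median line orientation shifted noticeably, the camera likely moved.
    print("median angle change: %.2f deg" % abs(np.median(a_cur) - np.median(a_ref)))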
I have a photo taken by a camera whose focal length, principal point, and distortion coefficients I know. The photo shows an 8cm x 8cm post-it on a table, and the center of the post-it is the origin (0, 0), again in cm. I've also indicated the positive y-axis on the post-it.
From this information, is it possible to compute the location of the camera and the direction in which the camera is looking, in Python using OpenCV? If someone has a snippet of code that does that (assuming you know the coordinates of the post-it corners already), that would be amazing!
Use OpenCV's solvePnP, specifying SOLVEPNP_IPPE_SQUARE in the flags. With only 4 points (and a post-it) the solution will be quite sensitive to how accurately you mark their images, so ask yourself whether you really need the camera pose and location for your application, and how accurately. E.g., if you just want to make a flat CG "sticker" stay fixed on the table while the camera moves, all you need is to estimate a homography, a much simpler task.
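A minimal sketch of that call, assuming the four post-it corners have already been located in the image. The pixel coordinates and intrinsics below are placeholders; K and dist come from your calibration. Note that SOLVEPNP_IPPE_SQUARE expects the object points in exactly this order:

    import cv2
    import numpy as np

    # 3D corners of the 8x8 cm post-it in its own frame (z = 0 plane, cm):
    # top-left, top-right, bottom-right, bottom-left.
    obj_pts = np.array([[-4,  4, 0], [ 4,  4, 0],
                        [ 4, -4, 0], [-4, -4, 0]], dtype=np.float64)
    img_pts = np.array([[312, 208], [498, 215],
                        [505, 398], [305, 390]], dtype=np.float64)  # placeholders

    K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)
    dist = np.zeros(5)  # substitute your distortion coefficients

    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist,
                                  flags=cv2.SOLVEPNP_IPPE_SQUARE)

    # Camera position in post-it coordinates, and the viewing direction
    # (the camera's +z axis) expressed in the same frame.
    R, _ = cv2.Rodrigues(rvec)
    cam_pos = (-R.T @ tvec).ravel()
    view_dir = R.T @ np.array([0.0, 0.0, 1.0])
    print("camera at (cm):", cam_pos, "looking along:", view_dir)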
It does look like you have all the information required. The marker you use can be easily segmented, and shape analysis will provide the corners. I did something similar to get basic eye tracking:
Here is a complete example.
Segmentation result for the example:
Please note that accuracy really matters here, so it might be useful to rely on several sets of points.
I have an image captured by an Android camera. Is it possible to calculate the depth of an object in the image? The image contains only an object and the background. Any suggestions, explanations, or links you think could help me would be appreciated.
OpenCV is the library you need.
I did some depth identification of water levels against a pure white background a few days ago. Generally, if you want to identify depth, you can convert the question into identifying the edge between the changing colors. In this case, you can convert the color pictures to grayscale and identify the changes at the white-black-gray interface. OpenCV is capable of doing the job at high speed.
Hope it helps. Let me know if you need further help.
Edits:
If you want to find the actual depths, you need to project the coordinate system of your pictures to the real world, or vice versa. To do it, you have to know a fixed location as your reference and the relationship between pixels and real distances.
What I did was find the fixed location and set it as zero. Afterwards, I measured the length of an object in the picture and also counted how many pixels it spanned, which gave me the relationship between pixels and real distances.
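In code, that calibration is just a scale factor; a toy version with placeholder numbers (and valid only for objects in the same plane as the reference):

    # Measure a known object once to get the cm-per-pixel scale.
    known_length_cm = 10.0   # real length of the reference object
    known_length_px = 250.0  # its measured length in the image
    cm_per_px = known_length_cm / known_length_px

    # Convert any pixel distance from the fixed zero reference to cm.
    depth_px = 480.0
    depth_cm = depth_px * cm_per_px
    print("estimated depth: %.1f cm" % depth_cm)  # 19.2 cm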
Note that these procedures may involve errors in the identification. I did it very carefully and the error was acceptable in my case.
With only one image, accurate depth estimation is nearly impossible. However, there are various methods of estimating depth under certain assumptions or when the camera calibration matrix is available. As mentioned by @WenlongLiu, OpenCV is a very good place to start.