Why does OpenCV distort the image with undistort()? - python

I am following the OpenCV camera calibration tutorial and have used about 100 images for the calibration. After obtaining the camera matrix and the distortion coefficients, I use them to undistort another set of images. What I noticed is that the undistorted image is heavily distorted on both sides.
One of the example images used for computing the camera matrix:
Using the camera matrix to undistort my experimental image gives very unreasonable results.
Original image:
After applying undistort():
Clearly, the undistortion only handled the center of the image correctly. How can I make it undistort the whole image properly?
Thank you very much!
UPDATE:
Using images that cover the field of view as much as possible helps. Here is the new result for the same image:
I have one more question: how do I know whether the calibration returned satisfying results? The RMS reprojection error is one metric, but it is not very robust.

Related

How to implement radial motion blur by Python OpenCV?

I can find motion blur kernels for the horizontal and vertical directions, e.g. in this link.
However, how can I implement radial motion blur like in the following pictures? I can find this functionality in Photoshop etc., but I cannot find any kernel reference online. How can I implement it with Python OpenCV? Thanks
I don't think OpenCV has something like this built in, but DIPlib does: dip.AdaptiveGauss(). It blurs the image with a different Gaussian at every pixel: one image indicates the orientation of the Gaussian, another one its scaling.
This is how I replicated your blurred image:
import diplib as dip
img = dip.ImageRead('rose.jpg')
# per-pixel scaling: distance from the image center, scaled down
scale = dip.CreateRadiusCoordinate(img.Sizes()) / 100
# per-pixel orientation: angle around the image center
angle = dip.CreatePhiCoordinate(img.Sizes())
# anisotropic Gaussian steered by the angle image, sized by the scale image
out = dip.AdaptiveGauss(img, [angle, scale], [1, 5])
dip.Show(out)
Disclaimer: I'm an author of DIPlib.

Inverse filtering a blurred image - Python

For an assignment I have an image corrupted by atmospheric turbulence, and I want to deblur it using inverse filtering. I have done some research, and it seems I need the original image for this procedure, but I only have the blurred image. How can I construct the degradation function that was used to blur this image? I am not allowed to use the original image. Thank you in advance.
This is the image:
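A common textbook approach (e.g. Gonzalez & Woods) is to assume the atmospheric-turbulence degradation model H(u,v) = exp(-k((u-M/2)^2 + (v-N/2)^2)^(5/6)) and pick the constant k by trial and error (k around 0.0025 is often quoted for severe turbulence). A sketch of direct inverse filtering under that assumed model, with a floor on |H| so the division stays stable (both k and the floor are assumptions to tune):

```python
import numpy as np

def turbulence_otf(shape, k=0.0025):
    # H(u, v) = exp(-k * ((u - M/2)^2 + (v - N/2)^2)^(5/6)), centered spectrum
    M, N = shape
    U, V = np.meshgrid(np.arange(M) - M / 2, np.arange(N) - N / 2, indexing="ij")
    return np.exp(-k * (U ** 2 + V ** 2) ** (5 / 6))

def apply_otf(img, H):
    # blur in the frequency domain (H is centered, hence the shifts)
    F = np.fft.fftshift(np.fft.fft2(img))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))

def inverse_filter(blurred, H, floor=1e-3):
    # direct inverse filtering; clamp tiny |H| so noise does not blow up
    G = np.fft.fftshift(np.fft.fft2(blurred))
    H_safe = np.where(np.abs(H) < floor, floor, H)
    return np.real(np.fft.ifft2(np.fft.ifftshift(G / H_safe)))

# demo on a synthetic image: blur with the assumed model, then invert it
img = np.random.default_rng(0).random((64, 64))
H = turbulence_otf(img.shape)
blurred = apply_otf(img, H)
restored = inverse_filter(blurred, H)
```

With real noisy data the plain inverse filter amplifies noise wherever H is small, so in practice a Wiener-style regularized inverse works better than this raw division.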

Image Processing: Bad Quality of Disparity Image with OpenCV

I want to create a disparity image using two images from low-resolution USB cameras. I am using OpenCV 4.0.0. The frames I use are taken from a video. The results I am currently getting are very bad (see below).
Both cameras were calibrated, and the calibration data was used to undistort the images. Is it because of the low resolution of the left and right images?
Left:
Right:
To have a better guess there also is an overlay of both images.
Overlay:
The values for the cv2.StereoSGBM_create() function are based on the ones in the example code that ships with OpenCV (located at OpenCV/samples/python/stereo_match.py).
I would be really thankful for any help or suggestions.
Here is my code:
import cv2
import numpy as np
import matplotlib.pyplot

# convert both images to grayscale
left = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
right = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)
# set the disparity matcher
window_size = 3
min_disp = 16
num_disp = 112-min_disp
stereo = cv2.StereoSGBM_create(minDisparity = min_disp,
numDisparities = num_disp,
blockSize = 16,
P1 = 8*3*window_size**2,
P2 = 32*3*window_size**2,
disp12MaxDiff = 1,
uniquenessRatio = 10,
speckleWindowSize = 100,
speckleRange = 32
)
# compute disparity
dis = stereo.compute(left, right).astype(np.float32) / 16.0
# display the computed disparity image
matplotlib.pyplot.imshow(dis, 'gray')
matplotlib.pyplot.show()
Most stereo algorithms require the input images to be rectified. Rectification transforms the images so that corresponding epipolar lines become the same horizontal lines in both images. For rectification, you need to know both the intrinsic and extrinsic parameters of your cameras.
OpenCV has all the tools required to perform both calibration and rectification. If you need to perform calibration, you need to have a calibration pattern (chessboard) available as well.
In short:
Compute intrinsic camera parameters using calibrateCamera().
Use the intrinsic parameters with stereoCalibrate() to perform extrinsic calibration of the stereo pair.
Using the parameters from stereoCalibrate(), compute rectification parameters with stereoRectify().
Using the rectification parameters, calculate the maps used for rectification and undistortion with initUndistortRectifyMap().
Now your cameras are calibrated and you can perform rectification and undistortion using remap() for images taken with the camera pair (as long as the cameras do not move relatively to each other). The rectified images calculated by remap() can now be used to calculate disparity images.
Additionally, I recommend checking out some relevant text book on the topic. Learning OpenCV: Computer Vision with the OpenCV Library has a very practical description of the process.
I agree with @Catree's comment and @sebasth's answer, mainly because your images are not rectified at all.
However, another issue may occur and I would like to warn you about it. I tried to leave a comment on @sebasth's answer, but I can't comment yet...
Since you said you are using low-resolution USB cameras, I suspect they use rolling-shutter sensors. For scenes in motion and constant change, global-shutter cameras are the ideal choice. This is especially relevant if you intend to use this for moving scenes.
So with rolling-shutter sensors you will also have to be careful about camera synchronization.
It can work with rolling-shutter cameras, but you will need to take care with synchronization, preferably in a controlled environment (even with little change in lighting).
Also remember to turn off automatic camera parameters such as white balance and especially exposure.
Best regards!

Images registration for low resolution highway images

I am using OpenCV with Python, and I have a collection of images of highways with a fixed resolution of 352×288. They were taken by mounted cameras that rotate horizontally and diagonally, and I want to align these highway images into piles.
I have tried feature-based image registration using SIFT, SURF and ORB. They provide good results, but when the image is rotated diagonally, or when there is a small amount of zoom, the alignment breaks down.
I have tried intensity-based image registration using findTransformECC, and it is somewhat acceptable, but when I tried intensity-based registration in Matlab the results were much better.
example of images:
first image
second image

Face morphing using opencv

Can I get some ideas on how to morph a face in a live video using OpenCV? I have tried Face Substitution, but it is implemented using openFrameworks.
I would like to implement the same using OpenCV. Are there any other methods available in OpenCV besides directly porting the Face Substitution code from openFrameworks to OpenCV?
I have also gone through this link, but a few people mentioned that face morphing is deprecated in OpenCV?
I recently wrote an article on face morphing using OpenCV, with the code shared in C++ / Python. You can find the details here:
http://www.learnopencv.com/face-morph-using-opencv-cpp-python/
The basic idea is as follows.
Find Point Correspondences in the two input images.
Do Delaunay triangulation on the points.
The amount of morphing is controlled by a parameter alpha. E.g. for alpha = 0 you will get Ted Cruz in the example below, and for alpha = 1 you will get Hillary Clinton; for any other alpha it will be a blend of the two. Use alpha to calculate the location of the points in the output (morphed) image by taking a weighted average of the corresponding points in the two input images.
Calculate the affine transform between every triangle in the input images and the destination (morphed) image.
Warp each triangle from the input images to the output image, and blend the pixels based on alpha. Do this for every triangle and you get the morphed output image.
Hope this helps.
I don't know of any libraries that do this specifically, but you could cobble together something yourself. You'd need a set of common fiducial points that you reference in all faces. Then you'd use those points to do a Delaunay triangulation.
Now you can either do the transform directly from one face chassis to the other, or you can write it to an intermediary normalized face, make changes to that and then write it anywhere.
Here are the steps of a face-morphing implementation with the mesh-warping algorithm (you could implement it with OpenCV or with Python scipy / scikit-image):
Defining correspondences: find point correspondences between the faces to be aligned using facial landmarks (e.g., detected with dlib).
Delaunay triangulation: provide a triangulation (e.g., Delaunay) of these points that will be used for morphing (e.g., with scipy.spatial's Delaunay).
Computing the mid-way (morphed) face: compute the average shape, then warp both faces into that shape (calculate the affine transforms between the triangles in the source images and the corresponding triangles in the morphed image, warp the points inside the triangles, and alpha-blend the warped images to obtain the final morphed image, e.g., using scikit-image's warp()).
The next animation shows the output of an implementation of the mesh-warping algorithm with scipy / numpy / scikit-image in Python (a sequence of morph images starting from the Mona Lisa image and ending at the Leonardo da Vinci image). It can be found here too.
Another popular algorithm is the Beier–Neely morphing algorithm (https://en.wikipedia.org/wiki/Beier%E2%80%93Neely_morphing_algorithm).
Also check out a face-morphing tool in Python using OpenCV.
