Perspective correction in OpenCV using Python

I am trying to do a perspective correction of a tilted rectangle (a credit card) that is tilted in all four directions. I can find its four corners and the respective tilt angles, but I cannot find the exact coordinates to which it should be projected. I am using cv2.getPerspectiveTransform to do the transformation.
I have the aspect ratio of the actual (non-tilted) card, and I want destination coordinates that preserve this aspect ratio. I have tried using a bounding rectangle, but this increases the size of the card.
Any help would be appreciated.

Here is the approach to follow.
For simplicity, I have resized your image to a smaller size.
Compute the quadrangle vertices for the source image. Here I found them manually, but you could use edge detection, Hough lines, etc. (a rough automatic-detection sketch follows the vertex list below):
Q1=manual calculation;
Q2=manual calculation;
Q3=manual calculation;
Q4=manual calculation;
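If you would rather find those vertices automatically, a rough Python sketch could look like the following. The file name and thresholds are made up here, it simply assumes the card is the largest contour, and it is not part of the original answer:
import cv2
import numpy as np

img = cv2.imread('card.jpg')                          # hypothetical file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)                      # example thresholds
# OpenCV 4.x returns (contours, hierarchy); OpenCV 3.x returns an extra first value
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
card = max(contours, key=cv2.contourArea)             # assume the card is the largest contour
approx = cv2.approxPolyDP(card, 0.02 * cv2.arcLength(card, True), True)
if len(approx) == 4:
    Q1, Q2, Q3, Q4 = [tuple(p[0]) for p in approx]    # quadrangle vertices, still unordered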
Compute the quadrangle vertices in the destination image while keeping the aspect ratio. Here you can take the card's height from the source quadrangle vertices above and compute the width by multiplying it by the aspect ratio:
// compute the size of the card by keeping aspect ratio.
double ratio=1.6;
double cardH=sqrt((Q3.x-Q2.x)*(Q3.x-Q2.x)+(Q3.y-Q2.y)*(Q3.y-Q2.y)); //Or you can give your own height
double cardW=ratio*cardH;
Rect R(Q1.x,Q1.y,cardW,cardH);
Now that you have the quadrangle vertices for both the source and the destination, apply warpPerspective.
You can refer to the C++ code below:
//Compute quad point for edge
Point Q1=Point2f(90,11);
Point Q2=Point2f(596,135);
Point Q3=Point2f(632,452);
Point Q4=Point2f(90,513);
// compute the size of the card by keeping aspect ratio.
double ratio=1.6;
double cardH=sqrt((Q3.x-Q2.x)*(Q3.x-Q2.x)+(Q3.y-Q2.y)*(Q3.y-Q2.y));//Or you can give your own height
double cardW=ratio*cardH;
Rect R(Q1.x,Q1.y,cardW,cardH);
Point R1=Point2f(R.x,R.y);
Point R2=Point2f(R.x+R.width,R.y);
Point R3=Point2f(R.x+R.width,R.y+R.height);
Point R4=Point2f(R.x,R.y+R.height);
std::vector<Point2f> quad_pts;
std::vector<Point2f> square_pts;
quad_pts.push_back(Q1);
quad_pts.push_back(Q2);
quad_pts.push_back(Q3);
quad_pts.push_back(Q4);
square_pts.push_back(R1);
square_pts.push_back(R2);
square_pts.push_back(R3);
square_pts.push_back(R4);
Mat transmtx = getPerspectiveTransform(quad_pts,square_pts);
int offsetSize=150;
Mat transformed = Mat::zeros(R.height+offsetSize, R.width+offsetSize, CV_8UC3);
warpPerspective(src, transformed, transmtx, transformed.size());
//rectangle(src, R, Scalar(0,255,0),1,8,0);
line(src,Q1,Q2, Scalar(0,0,255),1,CV_AA,0);
line(src,Q2,Q3, Scalar(0,0,255),1,CV_AA,0);
line(src,Q3,Q4, Scalar(0,0,255),1,CV_AA,0);
line(src,Q4,Q1, Scalar(0,0,255),1,CV_AA,0);
imshow("quadrilateral", transformed);
imshow("src",src);
waitKey();

I have a better solution that is much easier:
The red rectangle on the original image marks the source points (the corner points of the rectangle).
We use cv2.getPerspectiveTransform(src, dst), which takes the source points and destination points as arguments and returns the transformation matrix that warps the source image to the destination image, as shown in the diagram.
We use this transformation matrix in cv2.warpPerspective().
As you can see, the results are better: you get a very nice bird's-eye view of the image.
import cv2
import matplotlib.pyplot as plt
import numpy as np
def unwarp(img, src, dst, testing):
    h, w = img.shape[:2]
    # use cv2.getPerspectiveTransform() to get M, the transform matrix, and Minv, the inverse
    M = cv2.getPerspectiveTransform(src, dst)
    # use cv2.warpPerspective() to warp your image to a top-down view
    warped = cv2.warpPerspective(img, M, (w, h), flags=cv2.INTER_LINEAR)
    if testing:
        f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20, 10))
        f.subplots_adjust(hspace=.2, wspace=.05)
        ax1.imshow(img)
        x = [src[0][0], src[2][0], src[3][0], src[1][0], src[0][0]]
        y = [src[0][1], src[2][1], src[3][1], src[1][1], src[0][1]]
        ax1.plot(x, y, color='red', alpha=0.4, linewidth=3, solid_capstyle='round', zorder=2)
        ax1.set_ylim([h, 0])
        ax1.set_xlim([0, w])
        ax1.set_title('Original Image', fontsize=30)
        ax2.imshow(cv2.flip(warped, 1))
        ax2.set_title('Unwarped Image', fontsize=30)
        plt.show()
    else:
        return warped, M

im = cv2.imread("so.JPG")
h, w = im.shape[0], im.shape[1]
# We will first manually select the source points
# we will select the destination points which will map the source points in the
# original image to destination points in the unwarped image
src = np.float32([(20, 1),
                  (540, 130),
                  (20, 520),
                  (570, 450)])
dst = np.float32([(600, 0),
                  (0, 0),
                  (600, 531),
                  (0, 531)])
unwarp(im, src, dst, True)
cv2.imshow("so", im)
cv2.waitKey(0)
cv2.destroyAllWindows()

Here is the answer provided by @Haris, rewritten in Python.
import cv2
import math
import numpy as np
import matplotlib.pyplot as plt
img = cv2.imread('test.jpg')
rows,cols,ch = img.shape
pts1 = np.float32([[360,50],[2122,470],[2264, 1616],[328,1820]])
ratio=1.6
cardH=math.sqrt((pts1[2][0]-pts1[1][0])*(pts1[2][0]-pts1[1][0])+(pts1[2][1]-pts1[1][1])*(pts1[2][1]-pts1[1][1]))
cardW=ratio*cardH
pts2 = np.float32([[pts1[0][0],pts1[0][1]], [pts1[0][0]+cardW, pts1[0][1]], [pts1[0][0]+cardW, pts1[0][1]+cardH], [pts1[0][0], pts1[0][1]+cardH]])
M = cv2.getPerspectiveTransform(pts1,pts2)
offsetSize=500
transformed = np.zeros((int(cardW+offsetSize), int(cardH+offsetSize)), dtype=np.uint8)
dst = cv2.warpPerspective(img, M, transformed.shape)
plt.subplot(121),plt.imshow(img),plt.title('Input')
plt.subplot(122),plt.imshow(dst),plt.title('Output')
plt.show()


How to resize an image with "varying scaling"?

I need to resize an image, but with a "varying scaling" in the y axis, after warping:
(Images: plotted comparison, original input image, warped output image.)
The image (left one) was taken at an angle, so I've used the getPerspectiveTransform and warpPerspective OpenCV functions to get the top/plan view of the image (right one).
But now the top half of the warped image is stretched and the bottom half is squashed, and this amount of stretch/squash varies continuously as you go down the image. So I need to do the opposite.
For example: The zebra crossing lines in the warped image are thicker at the top of the image and thinner at the bottom. I want them to all be the same thickness and same vertical distance from each other essentially.
Badly drawn, but something like this (ignoring the two people, I think this is what the final output image should look like):
predicted output image
My end goal is to measure distance between people's feet in an image (shown by green dots), but I've got that section sorted already.
Vertically scaling the warped image to make it linear will allow me to accurately measure the real distance in the x and y directions from a top/plan view (i.e. each pixel in the x or y direction corresponds to, say, 1 cm in real distance).
I was thinking of multiplying each row of the image by a factor (e.g. multiplying the top rows by a smaller factor such as 0.8 or 0.9, and the bottom rows by a larger factor such as 1.1 or 1.2), but I really don't know how to do that.
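For what it's worth, a minimal sketch of that per-row scaling idea might look like the following. The scale profile below is made up purely for illustration, and dst stands for the warped image produced by the code further down:
import cv2 as cv
import numpy as np

h, w = dst.shape[:2]                          # dst = the warped image
scale = np.linspace(0.8, 1.2, h)              # hypothetical per-row factors (shrink top, stretch bottom)
# cv.remap needs the output->source mapping, so accumulate the reciprocal of the factors
src_rows = np.cumsum(1.0 / scale)
src_rows *= (h - 1) / src_rows[-1]            # normalise so the mapping spans the full height

map_y = np.repeat(src_rows[:, None], w, axis=1).astype(np.float32)
map_x = np.tile(np.arange(w, dtype=np.float32), (h, 1))
rescaled = cv.remap(dst, map_x, map_y, cv.INTER_LINEAR)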
Code:
import cv2 as cv
from matplotlib import pyplot as plt
import numpy as np
# READ IMAGE
imgOrig = cv.imread('.jpg')
# RESIZE IMAGE
width = int(1000)
ratio = imgOrig.shape[1]/width
height = int(imgOrig.shape[0]/ratio)
dsize = (width, height)
img = cv.resize(imgOrig, dsize)
feetLocation = [[280, 500], [740, 496]]
cv.circle(img,(280, 500),5,(0,255,0),thickness= 10)
cv.circle(img,(740, 496),5,(0,255,0),thickness= 10)
# WARPING
pts1 = np.float32([[0, -0], [width, 0], [-1800, height], [width + 1800, height]])
pts2 = np.float32([[0, 0], [width, 0], [0, height], [width, height]])
M = cv.getPerspectiveTransform(pts1, pts2)
dst = cv.warpPerspective(img, M, (width, height))
#DISPLAY IMAGES
plt.subplot(121),plt.imshow(img),plt.title('Original Image')
plt.subplot(122),plt.imshow(dst),plt.title('Warped Image')
plt.show()
I was working on a solution before the several edits were applied, and I focused on the actual boxes only. If, instead, you actually need the surrounding area too, the following approach won't help you much, I'm afraid. Also, I assumed the bottom box to be fully included, so if that one is somehow cut, as presented in your new desired final output, additional work would be needed to handle that case.
From the given image, you could mask the gray-ish parts around and between the single boxes using the saturation and value channels of the HSV color space:
Next, sum all mask pixels row-wise, apply a moving average to clean the signal, and detect the peaks in that signal:
The bottom image border must be added manually, since there is no gray-ish border there (most likely because the box is somehow cut).
Now, for each of these "peak rows", determine the first and last masked pixels, and build boxes from each pair of neighbouring "peak rows". Finally, for each of these boxes, apply a distinct perspective transform to a given size. If needed, stack those boxes vertically, for example:
That'd be the whole code:
import cv2
import matplotlib.pyplot as plt
import numpy as np
from scipy.signal import find_peaks
# Read original image
imgOrig = cv2.cvtColor(cv2.imread('DInAq.jpg'), cv2.COLOR_BGR2RGB)
# Resize image
width = int(1000)
ratio = imgOrig.shape[1] / width
height = int(imgOrig.shape[0] / ratio)
dsize = (width, height)
img = cv2.resize(imgOrig, dsize)
# Mask low saturation and medium to high value (i.e. gray-ish/white-ish colors)
img_gauss = cv2.GaussianBlur(img, (5, 5), -1)
h, s, v = cv2.split(cv2.cvtColor(img_gauss, cv2.COLOR_BGR2HSV))
mask = (s < 24) & (v > 64)
# Row-wise sum mask pixels, apply moving average filter, and find peaks
row_sum = np.sum(mask, axis=1)
row_sum = np.convolve(row_sum, np.ones(5)/5, 'same')
peaks = find_peaks(row_sum, prominence=50)[0]
peaks = np.insert(peaks, 4, img.shape[0]-1)
# Find first and last pixels per "peak row"
x1 = [np.argwhere(mask[p, :]).min() for p in peaks]
x2 = [np.argwhere(mask[p, :]).max() for p in peaks]
# Collect single boxes
boxes = []
for i in np.arange(len(peaks)-1, 0, -1):
    boxes.append([[x1[i], peaks[i]],
                  [x1[i-1], peaks[i-1]],
                  [x2[i-1], peaks[i-1]],
                  [x2[i], peaks[i]]])

# Warp each box individually to a given size
warped = []
bw, bh = [400, 400]
for box in reversed(boxes):
    pts1 = np.float32(box)
    pts2 = np.float32([[0, bh-1], [0, 0], [bw-1, 0], [bw-1, bh-1]])
    M = cv2.getPerspectiveTransform(pts1, pts2)
    warped.append(cv2.warpPerspective(img, M, (bw, bh)))

# Output
plt.figure(1)
plt.subplot(121), plt.imshow(img), plt.title('Original image')
for box in boxes:
    pts = np.array(box)
    plt.plot(pts[:, 0], pts[:, 1], 'rx')
plt.subplot(122), plt.imshow(np.vstack(warped)), plt.title('Warped image')
plt.tight_layout(), plt.show()
That's kind of an automated way to detect and extract the single boxes. For better results, you could set up a simple GUI (solely using OpenCV, for example), and let the user click on the exact corners, and build the boxes to be transformed from there.
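For instance, a minimal sketch of such a click-to-pick GUI might look like this (the window name, loop logic and four-corner assumption are mine, not from the answer):
import cv2
import numpy as np

clicked = []

def on_mouse(event, x, y, flags, param):
    # collect one corner per left click
    if event == cv2.EVENT_LBUTTONDOWN:
        clicked.append((x, y))

img = cv2.imread('DInAq.jpg')
cv2.namedWindow('pick corners')
cv2.setMouseCallback('pick corners', on_mouse)
while len(clicked) < 4:                      # wait for the four corners of one box
    vis = img.copy()
    for pt in clicked:
        cv2.circle(vis, pt, 5, (0, 0, 255), -1)
    cv2.imshow('pick corners', vis)
    if cv2.waitKey(20) & 0xFF == 27:         # Esc aborts
        break
cv2.destroyAllWindows()
pts1 = np.float32(clicked)                   # feed these into getPerspectiveTransform as before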
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.9.1
PyCharm: 2021.1
Matplotlib: 3.4.1
NumPy: 1.20.2
OpenCV: 4.5.1
SciPy: 1.6.2
----------------------------------------

Image processing - fill in hollow circles

I have a binary black-and-white image that looks like this:
I want to fill in those white circles so they become solid white disks. How can I do this in Python, preferably using skimage?
You can detect circles with skimage's methods hough_circle and hough_circle_peaks and then draw over them to "fill" them.
In the following example, most of the code is doing "hierarchy" computation for the best-fitting circles, to avoid drawing circles that are inside one another:
# skimage version 0.14.0
import math
import numpy as np
import matplotlib.pyplot as plt
from skimage import color
from skimage.io import imread
from skimage.transform import hough_circle, hough_circle_peaks
from skimage.feature import canny
from skimage.draw import circle
from skimage.util import img_as_ubyte
INPUT_IMAGE = 'circles.png' # input image name
BEST_COUNT = 6 # how many circles to draw
MIN_RADIUS = 20 # min radius should be bigger than noise
MAX_RADIUS = 60 # max radius of circles to be detected (in pixels)
LARGER_THRESH = 1.2 # circle is considered significantly larger than another one if its radius is at least so much bigger
OVERLAP_THRESH = 0.1 # circles are considered overlapping if this part of the smaller circle is overlapping
def circle_overlap_percent(centers_distance, radius1, radius2):
    '''
    Calculating the percentage area overlap between circles
    See Gist for comments:
    https://gist.github.com/amakukha/5019bfd4694304d85c617df0ca123854
    '''
    R, r = max(radius1, radius2), min(radius1, radius2)
    if centers_distance >= R + r:
        return 0.0
    elif R >= centers_distance + r:
        return 1.0
    R2, r2 = R**2, r**2
    x1 = (centers_distance**2 - R2 + r2 )/(2*centers_distance)
    x2 = abs(centers_distance - x1)
    y = math.sqrt(R2 - x1**2)
    a1 = R2 * math.atan2(y, x1) - x1*y
    if x1 <= centers_distance:
        a2 = r2 * math.atan2(y, x2) - x2*y
    else:
        # far-side case: take the major segment of the smaller circle
        a2 = math.pi * r2 - (r2 * math.atan2(y, x2) - x2*y)
    overlap_area = a1 + a2
    return overlap_area / (math.pi * r2)

def circle_overlap(c1, c2):
    d = math.sqrt((c1[0]-c2[0])**2 + (c1[1]-c2[1])**2)
    return circle_overlap_percent(d, c1[2], c2[2])

def inner_circle(cs, c, thresh):
    '''Is circle `c` "inside" one of the `cs` circles?'''
    for dc in cs:
        # if new circle is larger than existing -> it's not inside
        if c[2] > dc[2]*LARGER_THRESH: continue
        # if new circle is smaller than existing one...
        if circle_overlap(dc, c)>thresh:
            # ...and there is a significant overlap -> it's inner circle
            return True
    return False

# Load picture and detect edges
image = imread(INPUT_IMAGE, 1)
image = img_as_ubyte(image)
edges = canny(image, sigma=3, low_threshold=10, high_threshold=50)

# Detect circles of specific radii
hough_radii = np.arange(MIN_RADIUS, MAX_RADIUS, 2)
hough_res = hough_circle(edges, hough_radii)

# Select the most prominent circles (in order from best to worst)
accums, cx, cy, radii = hough_circle_peaks(hough_res, hough_radii)

# Determine BEST_COUNT circles to be drawn
drawn_circles = []
for crcl in zip(cy, cx, radii):
    # Do not draw circles if they are mostly inside better fitting ones
    if not inner_circle(drawn_circles, crcl, OVERLAP_THRESH):
        # A good circle found: exclude smaller circles it covers
        i = 0
        while i < len(drawn_circles):
            if circle_overlap(crcl, drawn_circles[i]) > OVERLAP_THRESH:
                t = drawn_circles.pop(i)
            else:
                i += 1
        # Remember the new circle
        drawn_circles.append(crcl)
    # Stop after having found more circles than needed
    if len(drawn_circles) > BEST_COUNT:
        break
drawn_circles = drawn_circles[:BEST_COUNT]

# Actually draw circles
colors = [(250, 0, 0), (0, 250, 0), (0, 0, 250)]
colors += [(200, 200, 0), (0, 200, 200), (200, 0, 200)]
fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(10, 4))
image = color.gray2rgb(image)
for center_y, center_x, radius in drawn_circles:
    circy, circx = circle(center_y, center_x, radius, image.shape)
    color = colors.pop(0)
    image[circy, circx] = color
    colors.append(color)
ax.imshow(image, cmap=plt.cm.gray)
plt.show()
Result:
Do a morphological closing (explanation) to fill those tiny gaps, to complete the circles. Then fill the resulting binary image.
Code:
from skimage import io
from skimage.morphology import binary_closing, disk
import scipy.ndimage as nd
import matplotlib.pyplot as plt
# Read image, binarize
I = io.imread("FillHoles.png")
bwI =I[:,:,1] > 0
fig=plt.figure(figsize=(24, 8))
# Original image
fig.add_subplot(1,3,1)
plt.imshow(bwI, cmap='gray')
# Dilate -> Erode. You might not want to use a disk in this case,
# more asymmetric structuring elements might work better
strel = disk(4)
I_closed = binary_closing(bwI, strel)
# Closed image
fig.add_subplot(1,3,2)
plt.imshow(I_closed, cmap='gray')
I_closed_filled = nd.morphology.binary_fill_holes(I_closed)
# Filled image
fig.add_subplot(1,3,3)
plt.imshow(I_closed_filled, cmap='gray')
Result:
Note how the segmentation trash has melded into your object on the lower right, and how the small cape on the lower part of the middle object has been closed. You might want to continue with a morphological erosion or opening after this.
EDIT: Long response to comments below
The disk(4) was just the example I used to produce the results seen in the image. You will need to find a suitable value yourself. Too big of a value will lead to small objects being melded into bigger objects near them, like on the right side cluster in the image. It will also close gaps between objects, whether you want it or not. Too small of a value will lead to the algorithm failing to complete the circles, so the filling operation will then fail.
Morphological erosion will erase a structuring element sized zone from the borders of the objects. Morphological opening is the inverse operation of closing, so instead of dilate->erode it will do erode->dilate. The net effect of opening is that all objects and capes smaller than the structuring element will vanish. If you do it after filling then the large objects will stay relatively the same. Ideally it should remove a lot of the segmentation artifacts caused by the morphological closing I used in the code example, which might or might not be pertinent to you based on your application.
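As a rough illustration (the disk radius below is just an example value, not taken from the answer), an opening applied after the filling step could look like:
from skimage.morphology import binary_opening, disk

# erode -> dilate: removes objects and protrusions smaller than the structuring element,
# while larger objects keep roughly their shape
I_cleaned = binary_opening(I_closed_filled, disk(2))   # radius 2 is an arbitrary example
plt.imshow(I_cleaned, cmap='gray')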
I don't know skimage, but if you use OpenCV, I would do a Hough transform for circles and then just draw them over.
The Hough transform is robust; if there are some small holes in the circles, that is no problem.
Something like:
circles = cv2.HoughCircles(gray, cv2.cv.CV_HOUGH_GRADIENT, 1.2, 100)
# ensure at least some circles were found
if circles is not None:
    # convert the (x, y) coordinates and radius of the circles to integers
    circles = np.round(circles[0, :]).astype("int")
    # loop over the (x, y) coordinates and radius of the circles
    # you can check size etc here.
    for (x, y, r) in circles:
        # draw the circle in the output image
        # you can fill here.
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)
    # show the output image
    cv2.imshow("output", np.hstack([image, output]))
    cv2.waitKey(0)
See more info here: https://www.pyimagesearch.com/2014/07/21/detecting-circles-images-using-opencv-hough-circles/

How to calculate an epipolar line with a stereo pair of images in Python OpenCV

How can I take two images of an object from different angles and draw epipolar lines on one based on points from the other?
For example, I would like to be able to select a point on the left picture using a mouse, mark the point with a circle, and then draw an epipolar line on the right image corresponding to the marked point.
I have 2 XML files which contain a 3x3 camera matrix and a list of 3x4 projection matrices for each picture. The camera matrix is K. The projection matrix for the left picture is P_left. The projection matrix for the right picture is P_right.
I have tried this approach (a rough sketch of these steps follows the list):
Choose a pixel coordinate (x,y) in the left image (via mouse click)
Calculate a point p in the left image with K^-1 * (x,y,1)
Calculate the pseudo-inverse matrix P+ of P_left (using np.linalg.pinv)
Calculate the epipole e' of the right image: P_right * (0,0,0,1)
Calculate the skew-symmetric matrix e'_skew of e'
Calculate the fundamental matrix F: e'_skew * P_right * P+
Calculate the epipolar line l' on the right image: F * p
Calculate a point p' in the right image: P_right * P+ * p
Transform p' and l' back to pixel coordinates
Draw a line using cv2.line through p' along l'
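For reference, here is a rough NumPy sketch of the F = e'_skew * P_right * P+ construction from the list above, working directly in pixel coordinates. The clicked point and the name img_right are placeholders, and the (0,0,0,1) epipole step assumes the left camera sits at the world origin:
import cv2
import numpy as np

def skew(v):
    # skew-symmetric matrix so that skew(v) @ x == np.cross(v, x)
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

# P_left, P_right: the 3x4 projection matrices from the XML files (assumed already loaded)
P_plus = np.linalg.pinv(P_left)                 # pseudo-inverse of P_left
C = np.array([0.0, 0.0, 0.0, 1.0])              # left camera centre, assuming P_left = K[I|0]
e_prime = P_right @ C                           # epipole in the right image
F = skew(e_prime) @ P_right @ P_plus            # fundamental matrix

x, y = 320, 240                                 # hypothetical clicked pixel in the left image
p = np.array([x, y, 1.0])
l_prime = F @ p                                 # epipolar line a*x + b*y + c = 0 in the right image

a, b, c = l_prime
h, w = img_right.shape[:2]                      # img_right: the right image (placeholder name)
pt1 = (0, int(round(-c / b)))
pt2 = (w, int(round(-(c + a * w) / b)))
cv2.line(img_right, pt1, pt2, (0, 0, 255), 2)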
I just did this a few days ago and it works just fine. Here's the method I used:
Calibrate the camera(s) to obtain the camera matrices and distortion matrices (using OpenCV getCorners and calibrateCamera; you can find lots of tutorials on this, but it sounds like you already have this info).
Perform stereo calibration with OpenCV stereoCalibrate(). It takes as parameters all of the camera and distortion matrices. You need this to determine the correlation between the two visual fields. You will get back several matrices: the rotation matrix R, the translation vector T, the essential matrix E and the fundamental matrix F.
You then want to do undistortion using OpenCV getOptimalNewCameraMatrix and undistort(). This will get rid of a lot of camera aberrations (it will give you better results).
Finally, use OpenCV's computeCorrespondEpilines to calculate the lines and plot them. I will include some code below that you can try out in Python. When I run it, I can get images like this (the colored points have their corresponding epilines drawn in the other image).
Here's some code (Python 3). It uses two static images and static points, but you could easily select the points with the cursor. You can also refer to the OpenCV docs on calibration and stereo calibration here.
import cv2
import numpy as np
import os

# find object corners from chessboard pattern and create a correlation with image corners
def getCorners(images, chessboard_size, show=True):
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    # prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
    objp = np.zeros((chessboard_size[1] * chessboard_size[0], 3), np.float32)
    objp[:, :2] = np.mgrid[0:chessboard_size[0], 0:chessboard_size[1]].T.reshape(-1, 2)*3.88  # multiply by 3.88 for large chessboard squares
    # Arrays to store object points and image points from all the images.
    objpoints = []  # 3d point in real world space
    imgpoints = []  # 2d points in image plane.
    for image in images:
        frame = cv2.imread(image)
        # height, width, channels = frame.shape  # get image parameters
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        ret, corners = cv2.findChessboardCorners(gray, chessboard_size, None)  # Find the chess board corners
        if ret:  # if corners were found
            objpoints.append(objp)
            corners2 = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)  # refine corners
            imgpoints.append(corners2)  # add to corner array
            if show:
                # Draw and display the corners
                frame = cv2.drawChessboardCorners(frame, chessboard_size, corners2, ret)
                cv2.imshow('frame', frame)
                cv2.waitKey(100)
    cv2.destroyAllWindows()  # close open windows
    return objpoints, imgpoints, gray.shape[::-1]

# perform undistortion on provided image
def undistort(image, mtx, dist):
    img = cv2.imread(image, cv2.IMREAD_GRAYSCALE)
    image = os.path.splitext(image)[0]
    h, w = img.shape[:2]
    newcameramtx, _ = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
    dst = cv2.undistort(img, mtx, dist, None, newcameramtx)
    return dst

# draw the provided points on the image
def drawPoints(img, pts, colors):
    for pt, color in zip(pts, colors):
        cv2.circle(img, tuple(pt[0]), 5, color, -1)

# draw the provided lines on the image
def drawLines(img, lines, colors):
    _, c, _ = img.shape
    for r, color in zip(lines, colors):
        x0, y0 = map(int, [0, -r[2]/r[1]])
        x1, y1 = map(int, [c, -(r[2]+r[0]*c)/r[1]])
        cv2.line(img, (x0, y0), (x1, y1), color, 1)

if __name__ == '__main__':
    # chessboard_size, colors, the camera/distortion matrices (mtxL, distL, mtxR, distR),
    # the fundamental matrix F and combineSideBySide come from the calibration steps omitted here
    # undistort our chosen images using the left and right camera and distortion matrices
    imgL = undistort("2L/2L34.bmp", mtxL, distL)
    imgR = undistort("2R/2R34.bmp", mtxR, distR)
    imgL = cv2.cvtColor(imgL, cv2.COLOR_GRAY2BGR)
    imgR = cv2.cvtColor(imgR, cv2.COLOR_GRAY2BGR)
    # use getCorners to get the new image locations of the chessboard corners (undistort will have moved them a little)
    _, imgpointsL, _ = getCorners(["2L34_undistorted.bmp"], chessboard_size, show=False)
    _, imgpointsR, _ = getCorners(["2R34_undistorted.bmp"], chessboard_size, show=False)
    # get 3 image points of interest from each image and draw them
    ptsL = np.asarray([imgpointsL[0][0], imgpointsL[0][10], imgpointsL[0][20]])
    ptsR = np.asarray([imgpointsR[0][5], imgpointsR[0][15], imgpointsR[0][25]])
    drawPoints(imgL, ptsL, colors[3:6])
    drawPoints(imgR, ptsR, colors[0:3])
    # find epilines corresponding to points in right image and draw them on the left image
    epilinesR = cv2.computeCorrespondEpilines(ptsR.reshape(-1, 1, 2), 2, F)
    epilinesR = epilinesR.reshape(-1, 3)
    drawLines(imgL, epilinesR, colors[0:3])
    # find epilines corresponding to points in left image and draw them on the right image
    epilinesL = cv2.computeCorrespondEpilines(ptsL.reshape(-1, 1, 2), 1, F)
    epilinesL = epilinesL.reshape(-1, 3)
    drawLines(imgR, epilinesL, colors[3:6])
    # combine the corresponding images into one and display them
    combineSideBySide(imgL, imgR, "epipolar_lines", save=True)
Hopefully this helps!

Calculate the Convex Hull of Clustered Centers from K-means Numpy Array

I have a Numpy array of colors that is returned when using K-means quantization on an image. After K-means is used, I need to take those centers and calculate the convex hull of the set of color centers.
Essentially I need to find which points/colors don't have enough contrast from another color. I think the answer to this would be to find the points inside the hull (simplices?) that aren't the vertices of the hull.
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
from scipy.spatial import ConvexHull
from skimage import color, exposure
from sklearn import cluster
def quantize(pixels, n_colors):
    width, height, depth = pixels.shape
    reshaped_pixels = np.reshape(pixels, (width * height, depth))
    model = cluster.KMeans(n_clusters=n_colors, n_init=100, precompute_distances=True)
    labels = model.fit_predict(reshaped_pixels)
    centers = model.cluster_centers_
    quantized_pixels = np.reshape(centers[labels], (width, height, centers.shape[1]))
    return quantized_pixels, centers, labels

def scale_image(img, max_width=100):
    img_width, img_height = img.size
    ratio = (max_width / float(img_width))
    new_height = int((float(img_height)*float(ratio)))
    img.thumbnail((max_width, new_height), Image.NEAREST)
    return img

# not defined in the original post; one common way to test hull membership
# is via the hull's half-space equations
def point_in_hull(point, hull, tolerance=1e-12):
    return all(np.dot(eq[:-1], point) + eq[-1] <= tolerance for eq in hull.equations)

# Open image, convert to RGB just in case
img = Image.open('image2.jpg').convert('RGB')
# Scale image to speed up processing
img = scale_image(img, 80)
# Convert to numpy array
np_img_array = np.array(img)
# Convert rgb to lab colorspace
lab_pixels = color.rgb2lab(np_img_array)
# Kmeans quantization
quantized_lab_pixels, centers, labels = quantize(lab_pixels, 16)
# Convert pixels back to rgb
quantized_rgb_pixels = (color.lab2rgb(quantized_lab_pixels)*255).astype('uint8')
# Attempt to use convex hull around cluster centers
hull = ConvexHull(centers)
for simplex in hull.simplices:
    plt.plot(centers[simplex, 0], centers[simplex, 1])
plt.scatter(*centers.T, alpha=.5, color='k', marker='v')
for p in centers:
    point_is_in_hull = point_in_hull(p, hull)
    marker = 'x' if point_is_in_hull else 'd'
    color = 'g' if point_is_in_hull else 'm'
    plt.scatter(p[0], p[1], marker=marker, color=color)
plt.show()
Update: Here is my rendered plot output. I'm not using the color labels in the plotting in this render.
Here is more of what would be helpful to see. However, the plots aren't what I need; I need to find out which of the cluster centers (colors) make up the vertices of the hull, and then pair them back to their labels.
Here is an example of the RGB colors outputted (on the left), and then on the right you would have the final colors, which would be after excluding cluster centers that fall outside of the region.
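For reference, scipy's ConvexHull already exposes this: hull.vertices holds the indices of the points that form the hull. A minimal sketch reusing hull and centers from the code above (the variable names below are mine):
import numpy as np

vertex_idx = np.asarray(hull.vertices)                  # indices of centers that form the hull
interior_idx = np.setdiff1d(np.arange(len(centers)), vertex_idx)

hull_colors = centers[vertex_idx]                       # Lab colors on the hull
interior_colors = centers[interior_idx]                 # Lab colors inside the hull (candidates to drop)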

How to change the K matrix as well as the distortion coefficients to simulate barrel distortion in Python with the help of OpenCV

I have the intrinsic parameters of my camera, as well as the distortion coefficients, and I know how to correct the barrel distortion, mainly from this blog post:
Barrel distortion calculation
However, now I wish to add the barrel distortion, as it would be introduced by the camera itself.
The code for correcting the barrel distortion is the following:
import numpy as np
import cv2
from matplotlib import pyplot as plt
# Define camera matrix K
K = np.array([[1.051e+03,0,0],
[0, 1.0845e+03,0],
[964.4480,544.2625,1.]])
#Matrix was written in matlab style, hence it has to be transposed ...
K = K.transpose()
# Define distortion coefficients d
d = np.array([0.0719,-0.0833,0.0013,-6.1840e-04,0])
# Read an example image and acquire its size
img = cv2.imread("grid.png")
h, w = img.shape[:2]
# Generate new camera matrix from parameters
newcameramatrix, roi = cv2.getOptimalNewCameraMatrix(K, d, (w,h), 0)
# Generate look-up tables for remapping the camera image
mapx, mapy = cv2.initUndistortRectifyMap(K, d, None, newcameramatrix, (w, h), 5)
# Remap the original image to a new image
newimg = cv2.remap(img, mapx, mapy, cv2.INTER_LINEAR)
# Display old and new image
fig, (oldimg_ax, newimg_ax) = plt.subplots(1, 2)
oldimg_ax.imshow(img)
oldimg_ax.set_title('Original image')
newimg_ax.imshow(newimg)
newimg_ax.set_title('Unwarped image')
plt.show()
I tried to simulate the barrel distortion by using the inverse of the K matrix or the transposed K matrix, as well as by multiplying the d vector by -1.
I transposed it via:
K = K.transpose()
or inverted it via:
K = np.linalg.inv(K)
However, this gave me only a black image. If I do not invert / transpose it at all, I just get a negative radial distortion, but I need a positive radial distortion.
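For what it's worth, one way that should simulate the distortion without modifying K or negating d is to build remap look-up tables with cv2.undistortPoints: for every pixel of the distorted output you look up where it lands in the undistorted source image and sample there. A rough sketch reusing K, d and img from the code above (this is my suggestion, not something from the original post):
# build a grid of all destination (distorted) pixel coordinates
h, w = img.shape[:2]
xs, ys = np.meshgrid(np.arange(w, dtype=np.float32), np.arange(h, dtype=np.float32))
grid = np.stack([xs.ravel(), ys.ravel()], axis=-1).reshape(-1, 1, 2)

# undistortPoints maps distorted pixel coordinates to undistorted ones;
# P=K projects the result back into pixel coordinates of the source image
undistorted = cv2.undistortPoints(grid, K, d, P=K)
mapx = undistorted[:, 0, 0].reshape(h, w).astype(np.float32)
mapy = undistorted[:, 0, 1].reshape(h, w).astype(np.float32)

# sampling the undistorted image at these locations re-applies the barrel distortion
distorted_img = cv2.remap(img, mapx, mapy, cv2.INTER_LINEAR)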
