I would like to find the midpoint of a rotated rectangle. The rotated rectangle has got the following coordinate
[[317, 80], [183, 291], [479, 150], [378, 387]]
I have the following code to determine the center:
cx = (coord[0][0] + coord[2][0])//2
cy = (coord[0][1] + coord[1][1])//2
Unfortunately, the result doesn't correspond to the actual center. How do I find the exact center of the above coordinates?
The centroid of a rectangle is the midpoint of either diagonal. You used different point pairs for your two coordinate computations. Also note that, for some reason, the points are not listed in the usual adjacent order: the diagonals are points 1 & 2 and points 0 & 3. Use either pair, for example:
coord = [[317, 80], [183, 291], [479, 150], [378, 387]]
# Variables to make the computations easier to read
pt1 = 1
pt2 = 2
x = 0
y = 1
cx = (coord[pt1][x] + coord[pt2][x]) // 2
cy = (coord[pt1][y] + coord[pt2][y]) // 2
print(cx, cy)
Better yet, research some simple shape modules. Most of these will have a straightforward midpoint method.
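For example, a minimal sketch with the shapely package (an assumption on my part; any geometry library with a centroid property would do), with the corner points listed in boundary order:
from shapely.geometry import Polygon
# Corner points in boundary (adjacent) order, taken from the question's coordinates
quad = Polygon([(183, 291), (378, 387), (479, 150), (317, 80)])
# For a true rectangle this equals the diagonal midpoint;
# for a general quadrilateral it is the area centroid.
print(quad.centroid)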
I’m not sure I’m following you, but if you want to calculate the centroid of a rectangle, it goes like this. Suppose you have the following rectangle aligned with the origin. The centroid is shown in green.
The centroid is computed as:
Cx = 0.5 * w
Cy = 0.5 * h
You can then apply a linear transformation. In this example, a rotation matrix, which is given as:
R = [ cos ϴ  -sin ϴ ]
    [ sin ϴ   cos ϴ ]
The rectangle is now rotated by the angle ϴ with respect to the original coordinate system:
The centroid equations become:
Cx’ = cx cos ϴ - cy sin ϴ
Cy’ = cx sin ϴ + cy cos ϴ
Translating back to the original coordinate system, we get:
Cx’’ = x + cx cos ϴ - cy sin ϴ
Cy’’ = y + cx sin ϴ + cy cos ϴ
The function should be something like this (written here as runnable Python rather than pseudo-code):
import math

def compute_rotated_centroid(x, y, width, height, theta_degrees):
    # Centroid of the axis-aligned rectangle, relative to its corner (x, y)
    cx = 0.5 * width
    cy = 0.5 * height
    theta = math.radians(theta_degrees)
    cos_theta = math.cos(theta)
    sin_theta = math.sin(theta)
    # Rotate the centroid about the corner and translate back to the original coordinate system
    final_cx = x + cx * cos_theta - cy * sin_theta
    final_cy = y + cx * sin_theta + cy * cos_theta
    return (final_cx, final_cy)
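A quick usage sketch (the values are just an illustration): an axis-aligned 4 x 2 rectangle with its corner at the origin has its centroid at (2.0, 1.0), and rotating it by 90 degrees about that corner moves the centroid to roughly (-1.0, 2.0).
print(compute_rotated_centroid(0, 0, 4, 2, 0))   # (2.0, 1.0)
print(compute_rotated_centroid(0, 0, 4, 2, 90))  # approximately (-1.0, 2.0)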
You can find the midpoint of a rotated rectangle (the centroid) with cv2.minAreaRect. The function returns the following information:
(centroid, (width, height), angle) = cv2.minAreaRect(cnts)
Here's a simple example. Input image:
Result with the centroid highlighted in green
Coordinates
(157.6988067626953, 132.07565307617188)
Code
import cv2
import numpy as np
# Load image, grayscale, Otsu's threshold
image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
# Find contours and find centroid information on rotated rectangle
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
centroid, dimensions, angle = cv2.minAreaRect(cnts[0])
cv2.circle(image, (int(centroid[0]), int(centroid[1])), 5, (36,255,12), -1)
print(centroid)
cv2.imshow('thresh', thresh)
cv2.imshow('image', image)
cv2.waitKey()
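If you already have the four corner coordinates from the question rather than an image, a minimal sketch of the same idea is to pass them to cv2.minAreaRect directly as a point array. Note that the returned center is the center of the minimum-area rectangle enclosing the points, which coincides with the quadrilateral's centroid only if the points really form a rectangle:
import cv2
import numpy as np
coords = np.array([[317, 80], [183, 291], [479, 150], [378, 387]], dtype=np.float32)
(center, (width, height), angle) = cv2.minAreaRect(coords)
print(center)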
There are several issues.
1.) The points are not ordered correctly for your formula to work. They should be ordered so that every pair of points that are adjacent in the list (as well as the last and the first entry) forms a side of the rectangle, like this:
points = [[183, 291], [378, 387], [479, 150], [317, 80]]
2.) There is a mistake in your formula. It should be the formula for the midpoint of a line segment, in this case the midpoint of a diagonal (e.g. between point 0 and point 2):
cx = (coord[idx1][0] + coord[idx2][0]) / 2
cy = (coord[idx1][1] + coord[idx2][1]) / 2
where idx1, idx2 are either 0,2 or 1,3
For a rectangle cx, cy will be identical regardless of whether you use idx1=0, idx2=2 or idx1=1, idx2=3
3.) This formula (midpoint of a diagonal) determines the centroid only for rectangles. What you have is a quadrangle, which is almost, but not exactly a rectangle, so the formula does not apply at all.
Calculate cx, cy with idx1, idx2 = 0, 2 and with idx1, idx2 = 1, 3 and you will see that you get different results. Thus you do not have a rectangle.
Either there is a typo in the coordinates that you posted, or there is an error in the formula that produced your rectangle, or your question really meant to calculate the centroid of a quadrangle, which is a different problem.
In that case it might be advisable to adapt the title of the question.
You can find the formula for a polygon here https://en.wikipedia.org/wiki/Centroid#Of_a_polygon
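For completeness, a small sketch of that polygon centroid formula (the shoelace-based version from the linked article), assuming the vertices are given in boundary order:
def polygon_centroid(points):
    # Shoelace-based centroid of a simple polygon with vertices in order
    a = cx = cy = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return (cx / (6.0 * a), cy / (6.0 * a))

print(polygon_centroid([[183, 291], [378, 387], [479, 150], [317, 80]]))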
I have a contour with 4 points that form a parallelogram-like shape, and I want to shrink the contour points and draw it inside a regular-sized version with cv2.drawContours.
When I use the following code to resize it, I end up with the before and after shown here.
import cv2
import numpy as np

# cnt: the 4-point contour, scale: the shrink factor (e.g. 0.9)
M = cv2.moments(cnt)
cx = int(M['m10']/M['m00'])
cy = int(M['m01']/M['m00'])
cnt_norm = cnt - [cx, cy]
cnt_scaled = cnt_norm * scale
cnt_scaled = cnt_scaled + [cx, cy]
cnt_scaled = cnt_scaled.astype(np.int32)
As you can see, it's not quite symmetrical on the top and bottom due to the skew. How can I fix this?
So I am trying to implement a method from this paper. I am stuck at the part where I have to find the angle between the major axis of the lesion’s best-fit ellipse and the x-axis of the coordinate system.
Here is the sample image:
Here is what I got so far:
Is it possible to find that angle? And after the angle has been found, I have to flip the RoI along x-axis by the angle.
UPDATE ----------
Google drive link to Roi Image: RoI image
Implementing the method step by step based on the paper.
First, I should recenter the RoI to the center of the image coordinate system. In the paper, they centered the RoI using its centroid. I managed to do it based on code I found in this answer. The result is fine if my RoI is small and not touching the image border, but with a large image the result is really bad. So I ended up centering the RoI using boundingRect. Here is the result of centering:
Code for centering RoI:
import math
import cv2
import numpy as np
import matplotlib.pyplot as plt
# read image
cont_img = cv2.imread(r"C:\Users\Pandu\Desktop\IMD064_lesion.bmp", 0)
cont_rgb = cv2.cvtColor(cont_img, cv2.COLOR_GRAY2RGB)
# fit ellipse and find ellipse properties
hh, ww = cont_img.shape
contours, hierarchy = cv2.findContours(cont_img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
ellipse = cv2.fitEllipse(contours[0])
(xc, yc), (d1, d2), angle = ellipse
# centering by centroid
half_width = int(ww/2)
half_height = int(hh/2)
offset_x = (half_width-xc)
offset_y = (half_height-yc)
T = np.float32([[1, 0, offset_x], [0, 1, offset_y]])
centered_by_centroid = cv2.warpAffine(cont_img.copy(), T, (ww, hh))
plt.imshow(centered_by_centroid, cmap=plt.cm.gray)
# centering by boundingRect
# This centered RoI is (L)
x, y, w, h = cv2.boundingRect(contours[0])
startx = (ww - w)//2
starty = (hh - h)//2
centered_by_boundingRect = np.zeros_like(cont_img)
centered_by_boundingRect[starty:starty+h, startx:startx+w] = cont_img[y:y+h, x:x+w]
plt.imshow(centered_by_boundingRect, cmap=plt.cm.gray)
Second, after centering the RoI, I should find the orientation angle, rotate the RoI by that angle, and then flip it. I'm using code from this answer. (Is this the correct way to rotate the RoI?):
# find ellipse properties of centered RoI
contours, hierarchy = cv2.findContours(centered_by_boundingRect, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
ellipse = cv2.fitEllipse(contours[0])
(xc, yc), (d1, d2), angle = ellipse
roi_centroid = (xc, yc)
rot_angle = 90 - angle
if rot_angle < 0:
rot_angle += 180
# This rotated RoI is (Lx)
M = cv2.getRotationMatrix2D(roi_centroid, -rot_angle, 1.0)
rot_im = cv2.warpAffine(centered_by_boundingRect, M, (ww, hh))
plt.imshow(rot_im, cmap=plt.cm.gray)
# (Ly)
# passing 0 to flip() should flip the image around the x-axis, but I get the same result as the paper
res_flip_y = cv2.flip(rot_im.copy(), 0)
plt.imshow(res_flip_y , cmap=plt.cm.gray)
# (L) (xor) (Lx)
res_x_xor = cv2.bitwise_xor(centered_by_boundingRect, rot_im)
plt.imshow(res_x_xor, cmap=plt.cm.gray)
# (L) (xor) (Ly)
res_y_xor = cv2.bitwise_xor(centered_by_boundingRect, res_flip_y)
plt.imshow(res_y_xor, cmap=plt.cm.gray)
I still can't get the same result as the paper, and the rotation operation also produces a bad result on a large RoI. Help...
UPDATE ---------- 20/03/2021
Small RoI: fine result on rotation, and it looks similar to the paper, but I'm still not getting the same end result for L (xor) Lx or L (xor) Ly.
Large RoI: bad result on rotation, as the RoI goes out of the border/image.
The angle you're looking for is returned by fitEllipse; it's just measured against a different reference frame. You can get your counter-clockwise rotation angle by computing 90 - angle. As for rotating the ROI, you can either use minAreaRect to get a minimum-fit rectangle directly, or you can fit a bounding box to the contour and rotate each point individually.
The green rectangle is the minAreaRect(), the red rectangle is the boundingRect() after it's been rotated.
import cv2
import numpy as np
import math
# rotate point
def rotate2D(point, deg):
rads = math.radians(deg);
x, y = point;
rcos = math.cos(rads);
rsin = math.sin(rads);
rx = x * rcos - y * rsin;
ry = x * rsin + y * rcos;
rx = round(rx);
ry = round(ry);
point[0] = rx;
point[1] = ry;
# translate point
def translate2D(src, target, sign):
tx, ty = target;
src[0] += tx * sign;
src[1] += ty * sign;
# read image
cont_img = cv2.imread("blob.png", 0)
cont_rgb = cv2.cvtColor(cont_img, cv2.COLOR_GRAY2RGB)
# find contour
_, contours, hierarchy = cv2.findContours(cont_img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
# fit ellipse and get ellipse properties
ellipse = cv2.fitEllipse(contours[0])
(xc, yc), (d1, d2), angle = ellipse
# -------- NEW STUFF IN HERE --------------
# calculate counter-clockwise angle relative to x-axis
rot_angle = 90 - angle;
if rot_angle < 0:
rot_angle += 180;
print(rot_angle);
# if you want a rotated ROI I would recommend using minAreaRect rather than rotating a different rectangle
# fit a minrect to the image # this is taken directly from OpenCV's tutorials
rect = cv2.minAreaRect(contours[0]);
box = cv2.boxPoints(rect);
box = np.int0(box);
cv2.drawContours(cont_rgb, [box], 0, (0,255,0), 2);
# but if you really want to use a different rectangle and rotate it, here's how to do it
# create rectangle
x,y,w,h = cv2.boundingRect(contours[0]);
rect = [];
rect.append([x,y]);
rect.append([x+w,y]);
rect.append([x+w,y+h]);
rect.append([x,y+h]);
# rotate it
rotated_rect = [];
center = [x + w/2, y + h/2];
for point in rect:
# for each point, center -> rotate -> uncenter
translate2D(point, center, -1);
rotate2D(point, 90 - rot_angle); # "90 - angle" is because rotation goes clockwise
translate2D(point, center, 1);
rotated_rect.append([point]);
rotated_rect = np.array(rotated_rect);
cv2.drawContours(cont_rgb, [rotated_rect.astype(int)], -1, (0,0,255), 2);
# ------------- END OF NEW STUFF -----------------
# draw fitted ellipse and centroid
target_ellipse = cv2.ellipse(cont_rgb.copy(), ellipse, (37, 99, 235), 10)
centroid = cv2.circle(target_ellipse.copy(), (int(xc), int(yc)), 20, (250, 204, 21), -1)
# draw major axis
rmajor = max(d1, d2)/2
if angle > 90:
angle = angle - 90
else:
angle = angle + 90
xtop_major = xc + math.cos(math.radians(angle))*rmajor
ytop_major = yc + math.sin(math.radians(angle))*rmajor
xbot_major = xc + math.cos(math.radians(angle+180))*rmajor
ybot_major = yc + math.sin(math.radians(angle+180))*rmajor
top_major = (int(xtop_major), int(ytop_major))
bot_major = (int(xbot_major), int(ybot_major))
target_major_axis = cv2.line(centroid.copy(),
top_major, bot_major,
(0, 255, 255), 5)
## image center coordinate
hh, ww = target_major_axis.shape[:2];
x_center_start = (0, int(hh/2))
x_center_end = (int(ww), int(hh/2))
y_center_start = (int(ww/2), 0)
y_center_end = (int(ww/2), int(hh))
img_x_middle_coor = cv2.line(target_major_axis.copy(), x_center_start, x_center_end, (219, 39, 119), 10)
img_y_middle_coor = cv2.line(img_x_middle_coor.copy(), y_center_start,
y_center_end, (190, 242, 100), 10)
# show
cv2.imshow("image", img_y_middle_coor);
cv2.waitKey(0);
For the future: check that your code runs before pasting it on here. Aside from the missing "import" lines it was also missing this line:
hh, ww = target_major_axis.shape[:2]
If the sample code you paste has errors, then everyone who wants to help will have to waste some time bug-stomping before they can begin working on a solution.
I have been working on a script which calculates the rotational shift between two images using cv2's phaseCorrelate method.
I have two images, the second is a 90 degree rotated version of the first image. After loading in the images, I convert them to log-polar before passing them into the phaseCorrelate function.
From what I have read, I believe that this should yield a rotational shift between two images.
The code below describes the implementation.
#bitwise right binary shift function
def rshift(val, n): return (val % 0x100000000) >> n
base_img = cv2.imread('img1.jpg')
cur_img = cv2.imread('dataa//t_sv_1.jpg')
curr_img = rotateImage(cur_img, 90)
rows,cols,chan = base_img.shape
x, y, c = curr_img.shape
#convert images to valid type
ref32 = np.float32(cv2.cvtColor(base_img, cv2.COLOR_BGR2GRAY))
curr32 = np.float32(cv2.cvtColor(curr_img, cv2.COLOR_BGR2GRAY))
value = np.sqrt(((rows/2.0)**2.0)+((cols/2.0)**2.0))
value2 = np.sqrt(((x/2.0)**2.0)+((y/2.0)**2.0))
polar_image = cv2.linearPolar(ref32,(rows/2, cols/2), value, cv2.WARP_FILL_OUTLIERS)
log_img = cv2.linearPolar(curr32,(x/2, y/2), value2, cv2.WARP_FILL_OUTLIERS)
shift = cv2.phaseCorrelate(polar_image, log_img)
sx = shift[0][0]
sy = shift[0][1]
sf = shift[1]
polar_image = polar_image.astype(np.uint8)
log_img = log_img.astype(np.uint8)
cv2.imshow("Polar Image", polar_image)
cv2.imshow('polar', log_img)
#get rotation from shift along y axis
rotation = sy * 180 / (rshift(y, 1));
print(rotation)
cv2.waitKey(0)
cv2.destroyAllWindows()
I am unsure how to interpret the results of this function. The expected outcome is a value close to 90 degrees; however, I get the value below.
Output: -0.00717516014538333
How can I make the output correct?
A method typically referred to as the Fourier-Mellin transform, published as:
B. Srinivasa Reddy and B.N. Chatterji, "An FFT-Based Technique for Translation, Rotation, and Scale-Invariant Image Registration", IEEE Trans. on Image Proc. 5(8):1266-1271, 1996
uses the FFT and the log-polar transform to obtain the translation, rotation and scaling of one image to match the other. I find this tutorial to be very clear and informative; I will give a summary here:
Compute the magnitude of the FFT of the two images (apply a windowing function first to avoid issues with periodicity of the FFT).
Compute the log-polar transform of the magnitude of the frequency-domain images (typically a high-pass filter is applied first, but I have not seen its usefulness).
Compute the cross-correlation (actually phase correlation) between the two. This leads to a knowledge of scale and rotation.
Apply the scaling and rotation to one of the original input images.
Compute the cross-correlation (actually phase correlation) of the original input images, after correction for scaling and rotation. This leads to knowledge of the translation.
This works because:
The magnitude of the FFT is translation-invariant, so we can focus solely on scaling and rotation without worrying about translation. Note that the rotation of the image is identical to the rotation of the FFT, and that scaling of the image is inverse to the scaling of the FFT.
The log-polar transform converts rotation into a vertical translation, and scaling into a horizontal translation. Phase correlation allows us to determine these translations. Converting them to a rotation and scaling is non-trivial (especially the scaling is hard to get right, but a bit of math shows the way).
If the tutorial linked above is not clear enough, one can look at the C++ code that comes with it, or at this other Python code.
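As a rough illustration of steps 1-3 above, here is a minimal sketch of my own (not taken from the tutorial): the windowing, filtering and scale conversion are simplified, and the conversion factors assume OpenCV's logPolar maps the full angular range onto the image height, so treat it as a starting point rather than a finished implementation.
import cv2
import numpy as np

def fft_magnitude(img):
    # Step 1: window the image, then take the magnitude of its FFT
    win = cv2.createHanningWindow((img.shape[1], img.shape[0]), cv2.CV_32F)
    f = np.fft.fftshift(np.fft.fft2(img * win))
    return np.abs(f).astype(np.float32)

def rotation_and_scale(img1, img2):
    # img1, img2: same-size single-channel float32 images
    h, w = img1.shape
    center = (w / 2, h / 2)
    m = w / np.log(min(h, w) / 2)  # radial scale factor for logPolar
    # Step 2: log-polar transform of both magnitude spectra
    lp1 = cv2.logPolar(fft_magnitude(img1), center, m, cv2.WARP_FILL_OUTLIERS)
    lp2 = cv2.logPolar(fft_magnitude(img2), center, m, cv2.WARP_FILL_OUTLIERS)
    # Step 3: phase correlation; vertical shift -> rotation, horizontal shift -> scale
    (sx, sy), _ = cv2.phaseCorrelate(lp1, lp2)
    rotation = -sy / h * 360.0  # sign convention may need flipping for your setup
    scale = np.exp(sx / m)      # or its inverse, depending on which image is the reference
    return rotation, scale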
OP is interested only in the rotation aspect of the method above. If we can assume that the translation is 0 (this means we know around which point the rotation was made; if we don't know the origin we need to estimate it as a translation), then we don't need to compute the magnitude of the FFT (remember it is used to make the problem translation invariant), and we can apply the log-polar transform directly to the images. But note that we need to use the center of rotation as the origin for the log-polar transform. If we additionally assume that the scaling is 1, we can simplify things further by taking the linear-polar transform, since the logarithmic scaling of the radius axis is only necessary to estimate scaling.
OP is doing this more or less correctly, I believe. Where OP's code goes wrong is in the extent of the radius axis in the polar transform. By going all the way to the extreme corners of the image, OpenCV needs to fill in parts of the transformed image with zeros. These parts are dictated by the shape of the image, not by the contents of the image. That is, both polar images contain exactly the same sharp, high-contrast curve between image content and filled-in zeros. The phase correlation is aligning these curves, leading to an estimate of 0 degree rotation. The image content is more or less ignored because its contrast is much lower.
Instead, make the extent of the radius axis that of the largest circle that fits completely inside the image. This way, no parts of the output need to be filled with zeros, and the phase correlation can focus on the actual image content. Furthermore, considering the two images are rotated versions of each other, the data in the corners of the images likely do not match anyway, so there is no need to take them into account at all!
Here is code I implemented quickly based on OP's code. I read in Lena, rotated the image by 38 degrees, computed the linear-polar transform of the original and rotated images, then the phase correlation between these two, and then determined a rotation angle based on the vertical translation. The result was 37.99560, plenty close to 38.
import cv2
import numpy as np
base_img = cv2.imread('lena512color.tif')
base_img = np.float32(cv2.cvtColor(base_img, cv2.COLOR_BGR2GRAY)) / 255.0
(h, w) = base_img.shape
(cX, cY) = (w // 2, h // 2)
angle = 38
M = cv2.getRotationMatrix2D((cX, cY), angle, 1.0)
curr_img = cv2.warpAffine(base_img, M, (w, h))
cv2.imshow("base_img", base_img)
cv2.imshow("curr_img", curr_img)
base_polar = cv2.linearPolar(base_img,(cX, cY), min(cX, cY), 0)
curr_polar = cv2.linearPolar(curr_img,(cX, cY), min(cX, cY), 0)
cv2.imshow("base_polar", base_polar)
cv2.imshow("curr_polar", curr_polar)
(sx, sy), sf = cv2.phaseCorrelate(base_polar, curr_polar)
rotation = -sy / h * 360;
print(rotation)
cv2.waitKey(0)
cv2.destroyAllWindows()
These are the four image windows shown by the code:
I created a figure that shows the phase correlation values for multiple rotations. This has been edited to reflect Cris Luengo's comment. The image is cropped to get rid of the edges of the square insert.
import cv2
import numpy as np
paths = ["lena.png", "rotate45.png", "rotate90.png", "rotate135.png", "rotate180.png"]
import os
os.chdir('/home/stephen/Desktop/rotations/')
images, rotations, polar = [],[], []
for image_path in paths:
alignedImage = cv2.imread('lena.png')
rotatedImage = cv2.imread(image_path)
rows,cols,chan = alignedImage.shape
x, y, c = rotatedImage.shape
x,y,w,h = 220,220,360,360
alignedImage = alignedImage[y:y+h, x:x+w].copy()
rotatedImage = rotatedImage[y:y+h, x:x+w].copy()
#convert images to valid type
ref32 = np.float32(cv2.cvtColor(alignedImage, cv2.COLOR_BGR2GRAY))
curr32 = np.float32(cv2.cvtColor(rotatedImage, cv2.COLOR_BGR2GRAY))
value = np.sqrt(((rows/2.0)**2.0)+((cols/2.0)**2.0))
value2 = np.sqrt(((x/2.0)**2.0)+((y/2.0)**2.0))
polar_image = cv2.linearPolar(ref32,(rows/2, cols/2), value, cv2.WARP_FILL_OUTLIERS)
log_img = cv2.linearPolar(curr32,(x/2, y/2), value2, cv2.WARP_FILL_OUTLIERS)
shift = cv2.phaseCorrelate(polar_image, log_img)
(sx, sy), sf = shift
polar_image = polar_image.astype(np.uint8)
log_img = log_img.astype(np.uint8)
sx, sy, sf = round(sx, 4), round(sy, 4), round(sf, 4)
text = image_path + "\n" + "sx: " + str(sx) + " \nsy: " + str(sy) + " \nsf: " + str(sf)
images.append(rotatedImage)
rotations.append(text)
polar.append(polar_image)
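The loop above collects the rotated images, their phase-correlation text and the polar images but stops short of producing the figure; a minimal sketch of one way to display them with matplotlib (my addition, the exact layout of the original figure is an assumption):
import matplotlib.pyplot as plt

fig, axes = plt.subplots(2, len(images), figsize=(15, 6))
for col, (img, text, pol) in enumerate(zip(images, rotations, polar)):
    axes[0][col].imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    axes[0][col].set_title(text, fontsize=8)
    axes[0][col].axis('off')
    axes[1][col].imshow(pol, cmap='gray')
    axes[1][col].axis('off')
plt.tight_layout()
plt.show()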
Here's an approach to determine the rotational shift between two images in degrees. The idea is to find the skew angle for each image in relation to a horizontal line. If we can find this skew angle, we can calculate the angle difference between the two images. Here are some example images to illustrate this concept:
Original unrotated image
Rotated counterclockwise by 10 degrees (neg_10) and counterclockwise by 35 degrees (neg_35)
Rotated clockwise by 7.9 degrees (pos_7_9) and clockwise by 21 degrees (pos_21)
For each image, we want to determine the skew angle in relation to a horizontal line, with negative meaning rotated counterclockwise and positive meaning rotated clockwise.
Here's the helper function to determine this skew angle:
def compute_angle(image):
# Convert to grayscale, invert, and Otsu's threshold
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = 255 - gray
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# Find coordinates of all pixel values greater than zero
# then compute minimum rotated bounding box of all coordinates
coords = np.column_stack(np.where(thresh > 0))
angle = cv2.minAreaRect(coords)[-1]
# The cv2.minAreaRect() function returns values in the range
# [-90, 0) so need to correct angle
if angle < -45:
angle = -(90 + angle)
else:
angle = -angle
# Rotate image to horizontal position
(h, w) = image.shape[:2]
center = (w // 2, h // 2)
M = cv2.getRotationMatrix2D(center, angle, 1.0)
rotated = cv2.warpAffine(image, M, (w, h), flags=cv2.INTER_CUBIC, \
borderMode=cv2.BORDER_REPLICATE)
return (angle, rotated)
After determining the skew angle for each image, we can simply calculate the difference
angle1, rotated1 = compute_angle(image1)
angle2, rotated2 = compute_angle(image2)
# Both angles are positive
if angle1 >= 0 and angle2 >= 0:
difference_angle = abs(angle1 - angle2)
# One positive, one negative
elif (angle1 < 0 and angle2 > 0) or (angle1 > 0 and angle2 < 0):
difference_angle = abs(angle1) + abs(angle2)
# Both negative
elif angle1 < 0 and angle2 < 0:
angle1 = abs(angle1)
angle2 = abs(angle2)
difference_angle = max(angle1, angle2) - min(angle1, angle2)
Here's a step-by-step walkthrough of what's going on. Using pos_21 and neg_10, the compute_angle() function will return the skew angle and the normalized image.
For pos_21, we normalize the image and determine the skew angle. Left (before) -> right (after)
20.99871826171875
Similarly for neg_10, we also normalize the image and determine the skew angle. Left (before) -> right (after)
-10.007980346679688
Now that we have both angles, we can compute the difference angle. Here's the result
31.006698608398438
Here's results with other combinations. With neg_10 and neg_35 we get
24.984039306640625
With pos_7_9 and pos_21,
13.09155559539795
Finally with pos_7_9 and neg_35,
42.89918231964111
Here's the full code
import cv2
import numpy as np
def rotational_shift(image1, image2):
def compute_angle(image):
# Convert to grayscale, invert, and Otsu's threshold
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = 255 - gray
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# Find coordinates of all pixel values greater than zero
# then compute minimum rotated bounding box of all coordinates
coords = np.column_stack(np.where(thresh > 0))
angle = cv2.minAreaRect(coords)[-1]
# The cv2.minAreaRect() function returns values in the range
# [-90, 0) so need to correct angle
if angle < -45:
angle = -(90 + angle)
else:
angle = -angle
# Rotate image to horizontal position
(h, w) = image.shape[:2]
center = (w // 2, h // 2)
M = cv2.getRotationMatrix2D(center, angle, 1.0)
rotated = cv2.warpAffine(image, M, (w, h), flags=cv2.INTER_CUBIC, \
borderMode=cv2.BORDER_REPLICATE)
return (angle, rotated)
angle1, rotated1 = compute_angle(image1)
angle2, rotated2 = compute_angle(image2)
# Both angles are positive
if angle1 >= 0 and angle2 >= 0:
difference_angle = abs(angle1 - angle2)
# One positive, one negative
elif (angle1 < 0 and angle2 > 0) or (angle1 > 0 and angle2 < 0):
difference_angle = abs(angle1) + abs(angle2)
# Both negative
elif angle1 < 0 and angle2 < 0:
angle1 = abs(angle1)
angle2 = abs(angle2)
difference_angle = max(angle1, angle2) - min(angle1, angle2)
return (difference_angle, rotated1, rotated2)
if __name__ == '__main__':
image1 = cv2.imread('pos_7_9.png')
image2 = cv2.imread('neg_35.png')
angle, rotated1, rotated2 = rotational_shift(image1, image2)
print(angle)
I need to remove the outer area of the circle.
I need to use only the inner area of the circle to avoid errors in image processing; currently I can only find the circle and mark it.
I don't know if I'm doing it right, using cv2.circle.
Please help me.
circles = cv2.HoughCircles(cinza, cv2.HOUGH_GRADIENT, 1, 20)
if circles is not None:
maior = 0
circles = np.uint16(np.around(circles))
for i in circles[0, :]:
radius = i[2]
if radius > maior:
maior = radius
for i in circles[0, :]:
center = (i[0], i[1])
radius = i[2]
if radius == maior:
cv2.circle(image, center, 1, (0, 100, 100), 3)
cv2.circle(image, center, radius, (255, 0, 255), 3)
First use numpy meshgrid to get matrices of x and y indices for the image. Then calculate the distance of each pixel to the center of your circle by subtracting the centroid from the index matrices, for x and y separately.
Afterwards calculate the distances of the pixels to the centroid using
distances = (x**2 + y**2)**0.5
x, y, image and distances must have the same shape now.
Then use boolean indexing on the distances matrix to pick the pixels outside the circle and set them, e.g., to zero:
image[distances > radius] = 0.0
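Put together, a minimal sketch of this approach (assuming image is the image array, and center and radius come from the HoughCircles result in the question):
import numpy as np

def keep_inner_circle(image, center, radius):
    # Index grids: y holds row indices, x holds column indices
    y, x = np.meshgrid(np.arange(image.shape[0]), np.arange(image.shape[1]), indexing='ij')
    # Distance of every pixel to the circle center; center is (cx, cy)
    distances = ((x - center[0])**2 + (y - center[1])**2)**0.5
    out = image.copy()
    out[distances > radius] = 0  # zero out everything outside the circle
    return out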
You can use OpenCV's masking functions. In this link you can see masking in C++; I could not find the Python function on the official pages, but there is sample code for masking with OpenCV in Python.
In summary, you can mask out the outer part of a circle with OpenCV's mask functions to remove the area outside the circle.
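For instance, a small sketch of that idea (my own, assuming center and radius from the HoughCircles loop above): draw a filled white circle on a black mask and apply it with bitwise_and.
import cv2
import numpy as np

mask = np.zeros(image.shape[:2], dtype=np.uint8)
cv2.circle(mask, center, radius, 255, -1)          # filled circle as the mask
masked = cv2.bitwise_and(image, image, mask=mask)  # keep only the inner area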
I want to rotate an image at several angles sequentially. I do that using cv2.getRotationMatrix2D and cv2.warpAffine. Given a pair of pixel coordinates [x, y], where x = cols and y = rows (in this case), I want to find their new coordinates in the rotated images.
I used the following slightly changed code courtesy of http://www.pyimagesearch.com/2017/01/02/rotate-images-correctly-with-opencv-and-python/ and the explanation from Affine Transformation to try to map the points in the rotated image : http://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/warp_affine/warp_affine.html.
The problem is that my mapping or my rotation is wrong, because the calculated transformed coordinates are wrong. (I tried to compute the corners manually for a simple verification.)
CODE:
def rotate_bound(image, angle):
# grab the dimensions of the image and then determine the
# center
(h, w) = image.shape[:2]
(cX, cY) = ((w-1) // 2.0, (h-1)// 2.0)
# grab the rotation matrix (applying the negative of the
# angle to rotate clockwise), then grab the sine and cosine
# (i.e., the rotation components of the matrix)
M = cv2.getRotationMatrix2D((cX, cY), -angle, 1.0)
cos = np.abs(M[0, 0])
sin = np.abs(M[0, 1])
# compute the new bounding dimensions of the image
nW = int((h * sin) + (w * cos))
nH = int((h * cos) + (w * sin))
print(nW, nH)
# adjust the rotation matrix to take into account translation
M[0, 2] += ((nW-1) / 2.0) - cX
M[1, 2] += ((nH-1) / 2.0) - cY
# perform the actual rotation and return the image
return M, cv2.warpAffine(image, M, (nW, nH))
#function that calculates the updated locations of the coordinates
#after rotation
def rotated_coord(points,M):
points = np.array(points)
ones = np.ones(shape=(len(points),1))
points_ones = np.concatenate((points,ones), axis=1)
transformed_pts = M.dot(points_ones.T).T
return transformed_pts
#READ IMAGE & CALL FCT
img = cv2.imread("Lenna.png")
points = np.array([[511, 511]])
#rotate by 90 angle for example
M, rotated = rotate_bound(img, 90)
#find out the new locations
transformed_pts = rotated_coord(points,M)
If I have, for example, the coordinates [511, 511], I obtain [-0.5, 511.50] ([col, row]) when I expect to obtain [0, 511].
If I instead use w // 2, a black border is added to the image and my updated rotated coordinates are off again.
Question: How can I find the correct location of a pair of pixels coordinates in a rotated image (by a certain angle) using Python ?
For this case of image rotation, where both the image size and the reference point change after rotation, the transformation matrix has to be modified. The new width and height can be calculated using the following relations:
new.width  = h * |sin(θ)| + w * |cos(θ)|
new.height = h * |cos(θ)| + w * |sin(θ)|
Since the image size changes (hence the black border you might see), the coordinates of the rotation point (the centre of the image) change too, and that has to be taken into account in the transformation matrix.
I explain an example in my blog post: image rotation bounding box opencv
def rotate_box(bb, cx, cy, h, w):
new_bb = list(bb)
for i,coord in enumerate(bb):
# opencv calculates standard transformation matrix
M = cv2.getRotationMatrix2D((cx, cy), theta, 1.0)
# Grab the rotation components of the matrix)
cos = np.abs(M[0, 0])
sin = np.abs(M[0, 1])
# compute the new bounding dimensions of the image
nW = int((h * sin) + (w * cos))
nH = int((h * cos) + (w * sin))
# adjust the rotation matrix to take into account translation
M[0, 2] += (nW / 2) - cx
M[1, 2] += (nH / 2) - cy
# Prepare the vector to be transformed
v = [coord[0],coord[1],1]
# Perform the actual rotation and return the image
calculated = np.dot(M,v)
new_bb[i] = (calculated[0],calculated[1])
return new_bb
## Calculate the new bounding box coordinates
new_bb = {}
for i in bb1:
new_bb[i] = rotate_box(bb1[i], cx, cy, heigth, width)
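The snippet assumes several variables defined elsewhere (note that rotate_box reads theta from the enclosing scope). Purely as an illustration, plausible made-up definitions that would have to appear before the loop above might look like:
theta = 45                        # rotation angle in degrees
heigth, width = 400, 600          # image dimensions (spelling kept from the snippet)
cx, cy = width // 2, heigth // 2  # rotation centre
bb1 = {'obj1': [(100, 100), (200, 100), (200, 150), (100, 150)]}  # boxes as corner-point lists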
Here is the C++ equivalent of @cristianpb's Python code above, in case someone is looking for C++ like I was:
// pass the original angle in degrees, i.e. don't convert it to radians
cv::Point2f rotatePointUsingTransformationMat(const cv::Point2f& inPoint, const cv::Point2f& center, const double& rotAngle)
{
cv::Mat rot = cv::getRotationMatrix2D(center, rotAngle, 1.0);
float cos = std::abs(rot.at<double>(0,0));
float sin = std::abs(rot.at<double>(0,1));
int newWidth = int( ((center.y*2)*sin) + ((center.x*2)*cos) );
int newHeight = int( ((center.y*2)*cos) + ((center.x*2)*sin) );
rot.at<double>(0,2) += newWidth/2.0 - center.x;
rot.at<double>(1,2) += newHeight/2.0 - center.y;
int v[3] = {static_cast<int>(inPoint.x),static_cast<int>(inPoint.y),1};
int mat3[2][1] = {{0},{0}};
for(int i=0; i<rot.rows; i++)
{
for(int j=0; j<= 0; j++)
{
int sum=0;
for(int k=0; k<3; k++)
{
sum = sum + rot.at<double>(i,k) * v[k];
}
mat3[i][j] = sum;
}
}
return Point2f(mat3[0][0],mat3[1][0]);
}