How to rotate a rectangle/bounding box together with an image - python

I'm working on data augmentation and I'm trying to generate a synthetic version of every image in my dataset. So I need to rotate the images, and the bounding boxes in them, as well.
I'm only going to rotate images by 90, 180 or 270 degrees.
I'm using the Pascal VOC annotation format as shown here. As a result I have the following info:
x_min, y_min, x_max, y_max, plus the origin of the image (I can get it from the image size).
I've searched a lot, but I couldn't find any solution for rotating bounding boxes (or rectangles).
I've tried something like this;
I got this solution from here and tried to adapt it, but it didn't work.
def rotateRect(bndbox, img_size, angle):
    angle = angle * math.pi / 180  # conversion from degrees to radians
    y_min, y_max, x_min, x_max = bndbox
    ox, oy = img_size[0] / 2, img_size[1] / 2  # coordinates of the image center (origin of rotation)
    # coordinates of the corner points of the bounding box rectangle
    rect = [[x_min, y_min], [x_min, y_max], [x_max, y_min], [x_max, y_max]]
    nrp = [[0, 0], [0, 0], [0, 0], [0, 0]]  # new rectangle positions
    for i, pt in enumerate(rect):
        newPx = int(ox + math.cos(angle) * (pt[0] - ox) - math.sin(angle) * (pt[1] - oy))  # new x coordinate of the point
        newPy = int(oy + math.sin(angle) * (pt[0] - ox) + math.cos(angle) * (pt[1] - oy))  # new y coordinate of the point
        nrp[i] = newPx, newPy
    nx_min, ny_min, nx_max, ny_max = nrp[0][0], nrp[0][1], nrp[2][0], nrp[2][1]  # new bounding box values
    return [ny_min, ny_max, nx_min, nx_max]
thanks.
EDIT:
I need this rotation applied to both the image and the bounding box.
The first picture is the original, the second one is rotated by 90 degrees (counter-clockwise), and the third one is rotated by -90 degrees (clockwise).
I rotated manually in Paint to be precise, and got these results.
original image size: 640x480

         orig    90 (CCW)   -90 (CW)
x_min      98      345         17
y_min     345      218         98
x_max     420      462        420
y_max     462      540        134

I've found a simpler way.
Based on this approach, we can do the calculation without any trigonometry, like this:
def rotate90Deg(bndbox, img_width):  # just passing the width of the image is enough for a 90 degree rotation
    x_min, y_min, x_max, y_max = bndbox
    new_xmin = y_min
    new_ymin = img_width - x_max
    new_xmax = y_max
    new_ymax = img_width - x_min
    return [new_xmin, new_ymin, new_xmax, new_ymax]

rotate90Deg([98, 345, 420, 462], 640)
This can be applied repeatedly, and it returns the new bounding box values in Pascal VOC format.
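Since each call handles a single 90 degree (counter-clockwise) step, 180 and 270 degree rotations come from chaining it. A minimal sketch assuming the 640x480 example above; note that the width argument swaps with the height after every step, because the rotated image's dimensions swap:

bbox = [98, 345, 420, 462]
w, h = 640, 480                      # original image size

bbox_90 = rotate90Deg(bbox, w)       # rotated image is now 480 wide (h)
bbox_180 = rotate90Deg(bbox_90, h)   # image is 640 wide again (w)
bbox_270 = rotate90Deg(bbox_180, w)

print(bbox_90, bbox_180, bbox_270)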

OK, maybe this can help. Assuming your rectangle is stored as a set of 4 points marking the corners, this will do an arbitrary rotation around another point. If you store the points in circular order, the plot will even look like a rectangle. I'm not forcing the aspect ratio on the plot, so the rotated rectangle looks like it is skewed, but it's not.
import math
import matplotlib.pyplot as plt

def rotatebox(rect, center, degrees):
    rads = math.radians(degrees)
    newpts = []
    for pts in rect:
        diag_x = center[0] - pts[0]
        diag_y = center[1] - pts[1]
        # Rotate the diagonal from center to top left
        newdx = diag_x * math.cos(rads) - diag_y * math.sin(rads)
        newdy = diag_x * math.sin(rads) + diag_y * math.cos(rads)
        newpts.append((center[0] + newdx, center[1] + newdy))
    return newpts

# Return a set of X and Y for plotting.
def corners(rect):
    return [k[0] for k in rect] + [rect[0][0]], [k[1] for k in rect] + [rect[0][1]]

rect = [[50, 50], [50, 120], [150, 120], [150, 50]]
plt.plot(*corners(rect))
rect = rotatebox(rect, (100, 100), 135)
plt.plot(*corners(rect))
plt.show()
The code can be made simpler for the 90/180/270 cases, because no trigonometry is needed. It's just addition, subtraction, and swapping points. Here, the rectangle is just stored as [minx, miny, maxx, maxy].
import matplotlib.pyplot as plt

def rotaterectcw(rect, center):
    x0 = rect[0] - center[0]
    y0 = rect[1] - center[1]
    x1 = rect[2] - center[0]
    y1 = rect[3] - center[1]
    return center[0] + y0, center[1] - x0, center[0] + y1, center[1] - x1

def corners(rect):
    x0, y0, x1, y1 = rect
    return [x0, x0, x1, x1, x0], [y0, y1, y1, y0, y0]

rect = (50, 50, 150, 120)
plt.plot(*corners(rect))
rect = rotaterectcw(rect, (60, 100))
plt.plot(*corners(rect))
rect = rotaterectcw(rect, (60, 100))
plt.plot(*corners(rect))
rect = rotaterectcw(rect, (60, 100))
plt.plot(*corners(rect))
plt.show()
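For the opposite direction, a counter-clockwise variant follows the same pattern with the other coordinate negated. A minimal sketch (rotaterectccw is my own name, not from the answer above, and corners() is reused from the snippet above):

def rotaterectccw(rect, center):
    x0 = rect[0] - center[0]
    y0 = rect[1] - center[1]
    x1 = rect[2] - center[0]
    y1 = rect[3] - center[1]
    # counter-clockwise quarter turn: (x, y) -> (-y, x) around the center
    return center[0] - y0, center[1] + x0, center[0] - y1, center[1] + x1

rect = (50, 50, 150, 120)
plt.plot(*corners(rect))
plt.plot(*corners(rotaterectccw(rect, (60, 100))))
plt.show()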

I tried the implementations mentioned in the other answers, but none of them worked for me. I had to rotate the image and the bounding box clockwise by 90 degrees, so I made this method:
def rotate90Deg(bndbox, image_width):
    """
    image_width: Width of the image after clockwise rotation of 90 degrees
    """
    x_min, y_min, x_max, y_max = bndbox
    new_xmin = image_width - y_max  # Reflection about center X-line
    new_ymin = x_min
    new_xmax = image_width - y_min  # Reflection about center X-line
    new_ymax = x_max
    return [new_xmin, new_ymin, new_xmax, new_ymax]
Usage:
from PIL import Image

image = Image.open("...")
image = image.rotate(-90, expand=True)  # expand so the rotated canvas keeps the full image
new_bbox = rotate90Deg(bbox, image.width)

Related

Rotate polygons without cutting edges

I am writing augmentation code to rotate annotated polygons inside images. I wrote some code, but it's not working right. Just copy-paste the code and you can reproduce the results. Thank you for helping me out.
Image:
I need to rotate the image as well as the polygon by the respective angle. Currently, I am not able to rotate the polygon:
the image is rotated, but the polygon stays in its place.
I tried this code. It rotates the polygon, but not to the right position.
import math
from PIL import Image, ImageDraw
import matplotlib.pyplot as plt
from math import sin, cos, radians
import requests
from io import BytesIO

def rotatePolygon(polygon, degrees, height, width):
    """
    Description:
        Rotate a polygon by the given angle about the image center.
    Input:
        polygon (list of tuples): list of (x, y) coordinates, e.g. [(1, 2), (2, 3), (4, 5)]
        degrees (int): rotation angle in degrees
    Output:
        polygon (list of tuples): polygon rotated by the angle, e.g. [(1, 2), (2, 3), (4, 5)]
    """
    # Convert angle to radians
    theta = radians(degrees)

    # Get sin and cos of theta
    cosang, sinang = cos(theta), sin(theta)

    # Split the polygon into its two coordinate lists
    y, x = [i for i in zip(*polygon)]

    # Use the center of the image as the pivot
    cx = width / 2
    cy = height / 2

    # Rotate every point
    new_points = []
    for x, y in zip(x, y):
        tx, ty = x - cx, y - cy
        new_x = (tx * cosang + ty * sinang) + cx
        new_y = (-tx * sinang + ty * cosang) + cy
        new_points.append((new_y, new_x))
    return new_points

# Polygon
xy = [(85, 384), (943, 374), (969, 474), (967, 527), (12, 540), (7, 490)]
degrees = 270

# Get the image from the URL (or from the local copy if it exists)
try:
    img = Image.open("polygon_image.png")
except:
    url = "https://github.com/SohaibAnwaar/Mask---RCNN-Polygons-/blob/main/2_image_augmentation/extras/problamatic_image.jpg?raw=true"
    response = requests.get(url)
    img = Image.open(BytesIO(response.content))
    img.save("polygon_image.png")

# Rotate the image
rotated_image = img.rotate(degrees, expand=True)
h, w = img.size

print("NotRotated", xy)
rotated_xy = rotatePolygon(xy, 360 - degrees, h, w)

# Plot the rotated image
img1 = ImageDraw.Draw(rotated_image)
img1.polygon(rotated_xy, fill="#FFF000", outline="blue")

# Plot the original image
img1 = ImageDraw.Draw(img)
img1.polygon(xy, fill="#FFF000", outline="blue")

plt.imshow(rotated_image)
plt.show()

plt.imshow(img)
plt.show()
The rotation equations are:
xnew = x * cos(theta) - y * sin(theta)
ynew = x * sin(theta) + y * cos(theta)
The only mistake you are making is in these lines:
new_x = (tx*cosang - ty*sinang) + cy
new_y = (tx*sinang + ty*cosang) + cx
After rotating the image, cx and cy should be changed as well, since the rotated image has different dimensions.
Your complete code is below:
import math
import numpy as np
from PIL import Image, ImageDraw
import matplotlib.pyplot as plt
from math import sin, cos, radians
import requests
from io import BytesIO

def rotatePolygon(polygon, degrees, height, width):
    """
    Description:
        Rotate a polygon by the given angle about the image center.
    Input:
        polygon (list of tuples): list of (x, y) coordinates, e.g. [(1, 2), (2, 3), (4, 5)]
        degrees (int): rotation angle in degrees
    Output:
        polygon (list of tuples): polygon rotated by the angle, e.g. [(1, 2), (2, 3), (4, 5)]
    """
    # Convert angle to radians
    theta = radians(degrees)

    # Get sin and cos of theta
    cosang, sinang = cos(theta), sin(theta)

    # Split the polygon into its two coordinate lists
    y, x = [i for i in zip(*polygon)]

    # Pivots: center of the original image and center of the rotated image
    cx1 = width[0] / 2
    cy1 = height[0] / 2
    cx2 = width[1] / 2
    cy2 = height[1] / 2

    # Rotate every point
    new_points = []
    for x, y in zip(x, y):
        tx, ty = x - cx1, y - cy1
        new_x = (tx * cosang - ty * sinang) + cx2
        new_y = (tx * sinang + ty * cosang) + cy2
        new_points.append((new_y, new_x))
    return new_points

# Polygon
xy = [(85, 384), (943, 374), (969, 474), (967, 527), (12, 540), (7, 490)]
degrees = 270

# Get the image from the URL (or from the local copy if it exists)
try:
    img = Image.open("polygon_image.png")
except:
    url = "https://github.com/SohaibAnwaar/Mask---RCNN-Polygons-/blob/main/2_image_augmentation/extras/problamatic_image.jpg?raw=true"
    response = requests.get(url)
    img = Image.open(BytesIO(response.content))
    img.save("polygon_image.png")

# Rotate the image
rotated_image = img.rotate(degrees, expand=True)
h1, w1 = img.size
h2, w2 = rotated_image.size

print("NotRotated", xy)
rotated_xy = rotatePolygon(xy, degrees, [h1, h2], [w1, w2])

# Plot the rotated image
img1 = ImageDraw.Draw(rotated_image)
img1.polygon(rotated_xy, fill="#FFF000", outline="blue")

# Plot the original image
img1 = ImageDraw.Draw(img)
img1.polygon(xy, fill="#FFF000", outline="blue")

plt.imshow(rotated_image)
plt.show()

plt.imshow(img)
plt.show()
For a rotation to the right (clockwise) by a right angle, use the following equations (assuming X/U run left-to-right and Y/V top-down):
U = H - Y
V = X
where the image size is W x H.
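A minimal sketch of that mapping applied to the polygon from the question (the helper name is mine; img is assumed to be the PIL image loaded above, so its height comes from img.size):

def rotate_polygon_cw90(points, image_height):
    # clockwise quarter turn: (X, Y) -> (U, V) = (H - Y, X); the rotated image is H x W
    return [(image_height - y, x) for (x, y) in points]

w, h = img.size  # PIL gives (width, height)
xy_cw = rotate_polygon_cw90(xy, h)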

Find the angle between the major axis of an ellipse and the x-axis of the coordinate system (help me implement a method from a paper)

So I am trying to implement a method from this paper. I am stuck at the part where I have to find the angle between the major axis of the lesion’s best-fit ellipse and the x-axis of the coordinate system.
Here is the sample image:
Here is what I got so far:
Is it possible to find that angle? And after the angle has been found, I have to flip the RoI along the x-axis by that angle.
UPDATE ----------
Google drive link to Roi Image: RoI image
Implementing the method step by step based on the paper:
First, I should recenter the RoI to the center of the image coordinate system. In the paper, they centered the RoI using its centroid. I managed to do it based on the code I found in this answer. The result is fine if my RoI is small and not touching the image border, but if I have a large image the result is really bad. So I ended up centering the RoI using boundingRect. Here is the result of centering:
Code for centering RoI:
import math
import cv2
import numpy as np
import matplotlib.pyplot as plt
# read image
cont_img = cv2.imread(r"C:\Users\Pandu\Desktop\IMD064_lesion.bmp", 0)
cont_rgb = cv2.cvtColor(cont_img, cv2.COLOR_GRAY2RGB)
# fit ellipse and find ellipse properties
hh, ww = cont_img.shape
contours, hierarchy = cv2.findContours(cont_img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
ellipse = cv2.fitEllipse(contours[0])
(xc, yc), (d1, d2), angle = ellipse
# centering by centroid
half_width = int(ww/2)
half_height = int(hh/2)
offset_x = (half_width-xc)
offset_y = (half_height-yc)
T = np.float32([[1, 0, offset_x], [0, 1, offset_y]])
centered_by_centroid = cv2.warpAffine(cont_img.copy(), T, (ww, hh))
plt.imshow(centered_by_centroid, cmap=plt.cm.gray)
# centering by boundingRect
# This centered RoI is (L)
x, y, w, h = cv2.boundingRect(contours[0])
startx = (ww - w)//2
starty = (hh - h)//2
centered_by_boundingRect = np.zeros_like(cont_img)
centered_by_boundingRect[starty:starty+h, startx:startx+w] = cont_img[y:y+h, x:x+w]
plt.imshow(centered_by_boundingRect, cmap=plt.cm.gray)
Second, after centering the RoI, I should find the orientation angle, rotate the RoI based on that angle, and then flip it, using code from this answer (is this the correct way to rotate the RoI?):
# find ellipse properties of the centered RoI
contours, hierarchy = cv2.findContours(centered_by_boundingRect, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
ellipse = cv2.fitEllipse(contours[0])
(xc, yc), (d1, d2), angle = ellipse
roi_centroid = (xc, yc)

rot_angle = 90 - angle
if rot_angle < 0:
    rot_angle += 180

# This rotated RoI is (Lx)
M = cv2.getRotationMatrix2D(roi_centroid, -rot_angle, 1.0)
rot_im = cv2.warpAffine(centered_by_boundingRect, M, (ww, hh))
plt.imshow(rot_im, cmap=plt.cm.gray)

# (Ly)
# passing 0 to flip() should flip the image around the x-axis, but I get the same result as the paper
res_flip_y = cv2.flip(rot_im.copy(), 0)
plt.imshow(res_flip_y, cmap=plt.cm.gray)

# (L) (xor) (Lx)
res_x_xor = cv2.bitwise_xor(centered_by_boundingRect, rot_im)
plt.imshow(res_x_xor, cmap=plt.cm.gray)

# (L) (xor) (Ly)
res_y_xor = cv2.bitwise_xor(centered_by_boundingRect, res_flip_y)
plt.imshow(res_y_xor, cmap=plt.cm.gray)
I still can't get the same result as the paper, and the rotation also produces a bad result on a large RoI. Help...
UPDATE ---------- 20/03/2021
Small RoI: fine result on rotation that looks similar to the paper, but I still don't get the same end result for L (xor) Lx or L (xor) Ly.
Large RoI: bad result on rotation, as the RoI goes out of the border/image.
The angle you're looking for is returned from fitEllipse. It's just rotated a bit according to a different reference frame. You can get your counter-clockwise rotation angle with 90 - angle. As for rotating the RoI, you can either use minAreaRect to get a minimum-fit rectangle directly, or you can fit a bounding box to the contour and rotate each point individually.
The green rectangle is the minAreaRect(); the red rectangle is the boundingRect() after it has been rotated.
import cv2
import numpy as np
import math

# rotate point
def rotate2D(point, deg):
    rads = math.radians(deg)
    x, y = point
    rcos = math.cos(rads)
    rsin = math.sin(rads)
    rx = x * rcos - y * rsin
    ry = x * rsin + y * rcos
    rx = round(rx)
    ry = round(ry)
    point[0] = rx
    point[1] = ry

# translate point
def translate2D(src, target, sign):
    tx, ty = target
    src[0] += tx * sign
    src[1] += ty * sign

# read image
cont_img = cv2.imread("blob.png", 0)
cont_rgb = cv2.cvtColor(cont_img, cv2.COLOR_GRAY2RGB)

# find contour
_, contours, hierarchy = cv2.findContours(cont_img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

# fit ellipse and get ellipse properties
ellipse = cv2.fitEllipse(contours[0])
(xc, yc), (d1, d2), angle = ellipse

# -------- NEW STUFF IN HERE --------------

# calculate counter-clockwise angle relative to x-axis
rot_angle = 90 - angle
if rot_angle < 0:
    rot_angle += 180
print(rot_angle)

# if you want a rotated ROI I would recommend using minAreaRect rather than rotating a different rectangle
# fit a minrect to the image # this is taken directly from OpenCV's tutorials
rect = cv2.minAreaRect(contours[0])
box = cv2.boxPoints(rect)
box = np.int0(box)
cv2.drawContours(cont_rgb, [box], 0, (0, 255, 0), 2)

# but if you really want to use a different rectangle and rotate it, here's how to do it
# create rectangle
x, y, w, h = cv2.boundingRect(contours[0])
rect = []
rect.append([x, y])
rect.append([x + w, y])
rect.append([x + w, y + h])
rect.append([x, y + h])

# rotate it
rotated_rect = []
center = [x + w / 2, y + h / 2]
for point in rect:
    # for each point, center -> rotate -> uncenter
    translate2D(point, center, -1)
    rotate2D(point, 90 - rot_angle)  # "90 - angle" is because rotation goes clockwise
    translate2D(point, center, 1)
    rotated_rect.append([point])
rotated_rect = np.array(rotated_rect)
cv2.drawContours(cont_rgb, [rotated_rect.astype(int)], -1, (0, 0, 255), 2)

# ------------- END OF NEW STUFF -----------------

# draw fitted ellipse and centroid
target_ellipse = cv2.ellipse(cont_rgb.copy(), ellipse, (37, 99, 235), 10)
centroid = cv2.circle(target_ellipse.copy(), (int(xc), int(yc)), 20, (250, 204, 21), -1)

# draw major axis
rmajor = max(d1, d2) / 2
if angle > 90:
    angle = angle - 90
else:
    angle = angle + 90
xtop_major = xc + math.cos(math.radians(angle)) * rmajor
ytop_major = yc + math.sin(math.radians(angle)) * rmajor
xbot_major = xc + math.cos(math.radians(angle + 180)) * rmajor
ybot_major = yc + math.sin(math.radians(angle + 180)) * rmajor
top_major = (int(xtop_major), int(ytop_major))
bot_major = (int(xbot_major), int(ybot_major))
target_major_axis = cv2.line(centroid.copy(),
                             top_major, bot_major,
                             (0, 255, 255), 5)

## image center coordinates
hh, ww = target_major_axis.shape[:2]
x_center_start = (0, int(hh / 2))
x_center_end = (int(ww), int(hh / 2))
y_center_start = (int(ww / 2), 0)
y_center_end = (int(ww / 2), int(hh))
img_x_middle_coor = cv2.line(target_major_axis.copy(), x_center_start, x_center_end, (219, 39, 119), 10)
img_y_middle_coor = cv2.line(img_x_middle_coor.copy(), y_center_start,
                             y_center_end, (190, 242, 100), 10)

# show
cv2.imshow("image", img_y_middle_coor)
cv2.waitKey(0)
For the future: check that your code runs before pasting it here. Aside from the missing "import" lines, it was also missing this line:
hh, ww = target_major_axis.shape[:2]
If the sample code you paste has errors, then everyone who wants to help has to waste time bug-stomping before they can begin working on a solution.

rotate an image based on the two coordinates

I have many images that contain two points, one at the top and another at the bottom, and I have the coordinates stored in an Excel file as well. I want to rotate each image so that the line between the points is at 90 degrees. Below is an image which contains the two coordinates.
The red line shows the actual orientation based on the coordinates, where the angle is about 85 degrees; I want to rotate the image to make it 90 degrees, as shown in yellow in the figure.
Can someone tell me which API or functions to use? (I am using Python.)
It is basic math with angles in a triangle.
If you have two points (x1, y1), (x2, y2), then you can calculate dx = x2 - x1, dy = y2 - y1, then tangent_alpha = dy/dx and alpha = arctan(tangent_alpha), and that gives you the angle you have to use to compute the rotation: 90 - alpha.
In Python it looks like below. I took the points from your image.
Because the image has (0, 0) in the top left corner, not in the bottom left corner like in math, I use dy = -(y2 - y1) to flip it.
import math
x1 = 295
y1 = 605
x2 = 330
y2 = 100
dx = x2 - x1
dy = -(y2 - y1)
alpha = math.degrees(math.atan2(dy, dx))
rotation = 90-alpha
print(alpha, rotation)
And now you can use PIL/pillow or cv2+imutils to rotate it
import math
import cv2
import imutils
x1 = 295
y1 = 605
x2 = 330
y2 = 100
dx = x2 - x1
dy = -(y2 - y1)
alpha = math.degrees(math.atan2(dy, dx))
rotation = 90-alpha
print(alpha, rotation)
img = cv2.imread('image.jpg')
img_2 = imutils.rotate(img, rotation)
cv2.imwrite('rotate.jpg', img_2)
img_3 = imutils.rotate_bound(img, -rotation)
cv2.imwrite('rotate_bound.jpg', img_3)
cv2.imshow('rotate', img_2)
cv2.imshow('rotate_bound', img_3)
cv2.waitKey(0)
rotate.jpg
rotate_bound.jpg
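The same rotation can also be done with Pillow, which the answer mentions as an alternative; a minimal sketch assuming the rotation value computed in the script above (rotate_pillow.jpg is just my output name):

from PIL import Image

img_pil = Image.open('image.jpg')
# positive angles rotate counter-clockwise in Pillow, like imutils.rotate above;
# expand=True grows the canvas, like imutils.rotate_bound
img_pil = img_pil.rotate(rotation, expand=True)
img_pil.save('rotate_pillow.jpg')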

Crop area from image using Pillow in Python

I want to crop a rectangular area from an image using Pillow in Python. The problem is that the rectangle is not necessarily parallel to the image margins, so I cannot use the .crop((left, top, right, bottom)) function.
Is there a way to achieve this with Pillow? (Assume we know the coordinates of all 4 corners of the rectangle.)
If not, how can it be done using a different Python library?
You can use the minimum rotated rectangle in OpenCV:
rect = cv2.minAreaRect(cnt)
box = cv2.boxPoints(rect)
box = np.int0(box)
As a result you have: the center coordinates (x, y), width, height, and rotation angle of the rectangle. You can rotate the whole image by the angle from this rectangle. Your image will now be rotated:
You can calculate the new coordinates of the four rectangle vertices (you have the angle). Then just compute the normal rectangle for these points (normal rectangle = axis-aligned, without any rotation). With this rect you can crop your rotated image, and the crop will contain what you want, if I understand you correctly. Something like that:
So you only need OpenCV. Maybe there is some library with which you can do it more easily.
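A minimal sketch of that workflow, assuming cnt is a contour of the region you want; the helper name crop_min_rect and the use of cv2.getRectSubPix are my own choices, not part of the answer above:

import cv2

def crop_min_rect(image, cnt):
    # minimum rotated rectangle around the contour
    (cx, cy), (w, h), angle = cv2.minAreaRect(cnt)

    # rotate the whole image about the rectangle center so the box becomes axis-aligned
    # (minAreaRect's angle convention differs between OpenCV versions, so the sign of
    # the angle and the width/height order may need adjusting)
    M = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    rotated = cv2.warpAffine(image, M, (image.shape[1], image.shape[0]))

    # crop the now axis-aligned w x h patch centered on the same point
    return cv2.getRectSubPix(rotated, (int(w), int(h)), (cx, cy))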
Here's a solution based on scikit-image (not Pillow) that you might find useful.
You could pass the vertices of the region you wish to crop to the function skimage.draw.polygon and then use the retrieved pixel coordinates to mask the original image (for example, through the alpha channel).
import numpy as np
from skimage import io, draw

img = io.imread('https://i.stack.imgur.com/x5Ym4.png')
vertices = np.asarray([[150, 140],
                       [300, 240],
                       [210, 420],
                       [90, 320],
                       [150, 150]])
rows, cols = draw.polygon(vertices[:, 0], vertices[:, 1])
crop = img.copy()
crop[:, :, -1] = 0
crop[rows, cols, -1] = 255
io.imshow(crop)
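If you also want the output trimmed to the polygon rather than keeping the full canvas, a small follow-up using the rows/cols returned above (my addition, not part of the original answer):

# trim the masked image to the polygon's axis-aligned bounding box
cropped = crop[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
io.imshow(cropped)
io.show()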
I adapted this OpenCV-based solution (sub_image) for use with PIL. It takes a (center, size, theta) rect which I'm getting from cv2.minAreaRect, but it could be constructed mathematically from points, etc.
I've seen a few other solutions, but they left some weird artifacts.
import math
import numpy as np
from PIL import Image

def crop_tilted_rect(image, rect):
    """Crop rect out of image, handling rotation.
    rect in this case is a tuple of ((center_x, center_y), (width, height), theta),
    which I get from opencv's cv2.minAreaRect(contour)
    """
    # Get center, size, and angle from rect
    center, size, theta = rect
    width, height = [int(d) for d in size]
    if 45 < theta <= 90:
        theta = theta - 90
        width, height = height, width
    theta *= math.pi / 180  # convert to radians
    v_x = (math.cos(theta), math.sin(theta))
    v_y = (-math.sin(theta), math.cos(theta))
    s_x = center[0] - v_x[0] * (width / 2) - v_y[0] * (height / 2)
    s_y = center[1] - v_x[1] * (width / 2) - v_y[1] * (height / 2)
    mapping = np.array([v_x[0], v_y[0], s_x, v_x[1], v_y[1], s_y])
    return image.transform((width, height), Image.AFFINE, data=mapping, resample=0, fill=1, fillcolor=(255, 255, 255))
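A hypothetical usage sketch; the thresholding and contour steps (and the file name scan.jpg) are my assumptions and only illustrate where the minAreaRect tuple comes from:

import cv2
from PIL import Image

pil_img = Image.open('scan.jpg')
gray = cv2.imread('scan.jpg', cv2.IMREAD_GRAYSCALE)
_, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x returns two values
rect = cv2.minAreaRect(contours[0])  # ((center_x, center_y), (width, height), theta)
patch = crop_tilted_rect(pil_img, rect)
patch.save('patch.jpg')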

How to get x,y coordinates of a text that has been rotated by an angle in PIL python?

I'm adding text to an image at a position (x, y) and then drawing a rectangle around it: (x, y, x+text_width, y+text_height). Now I'm rotating the image by an angle of 30 degrees. How can I get the new coordinates?
from PIL import Image, ImageDraw, ImageFont

im = Image.open('img.jpg')
font = ImageFont.truetype('myfont.ttf')
textlayer = Image.new("RGBA", im.size, (0, 0, 0, 0))
textdraw = ImageDraw.Draw(textlayer)
textsize = textdraw.textsize('Hello World', font=font)
textdraw.text((75, 267), 'Hello World', font=font, fill=(255, 255, 255))
textlayer = textlayer.rotate(30)
I tried this, but I'm not getting the point correctly. Can anyone point out what I'm doing wrong?
textpos = (75,267)
theta = 30
x0,y0 = 0,0
h = textsize[0] - textsize[1]
x,y = textpos[0], textpos[1]
xNew = (x-x0)*cos(theta) - (h-y-y0)*sin(theta) + x0
yNew = -(x-x0)*sin(theta) - (h-y-y0)*cos(theta) + (h-y0)
In PIL, the rotation happens about the center of the image. So, considering that the center of your image is given by:
cx = int(image_width / 2)
cy = int(image_height / 2)
a specified rotation angle:
theta = 30
and given coordinates (px, py), the new coordinates can be obtained using the following equations:
rad = radians(theta)
new_px = cx + int(float(px-cx) * cos(rad) + float(py-cy) * sin(rad))
new_py = cy + int(-(float(px-cx) * sin(rad)) + float(py-cy) * cos(rad))
Note that the trigonometric functions take the angle in radians, not degrees, hence the conversion.
This answer is inspired by the following blog post.
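A minimal sketch tying this back to the question's text box, assuming im, textsize and the (75, 267) position from the code above (the helper name rotate_point is mine):

from math import radians, sin, cos

def rotate_point(px, py, cx, cy, theta_deg):
    # track a point through PIL's counter-clockwise rotate() about the image center
    rad = radians(theta_deg)
    new_px = cx + int(float(px - cx) * cos(rad) + float(py - cy) * sin(rad))
    new_py = cy + int(-(float(px - cx) * sin(rad)) + float(py - cy) * cos(rad))
    return new_px, new_py

cx, cy = int(im.size[0] / 2), int(im.size[1] / 2)
x, y = 75, 267                      # text position from the question
w, h = textsize                     # (text_width, text_height)
box = [(x, y), (x + w, y), (x + w, y + h), (x, y + h)]
rotated_box = [rotate_point(px, py, cx, cy, 30) for (px, py) in box]
print(rotated_box)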
