I have an image and 3 points. I want to rotate the image and the points together. To this end, I rotate the image by some angle a and the points by the same angle.
When a is fixed to a python scalar (say pi/3), the rotation works fine (cf. image below, the blue dots are on the dark squares).
When the angle is randomly chosen with angle = tf.random_uniform([]), there is an offset between the rotated image and the rotated points.
Below is the full code reproducing this behaviour.
My question is: how to explain this behaviour and correct it?
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
# create toy image
square = np.zeros((1, 800, 800, 3))
square[:, 100:400, 100:400] = 1
square[:, 140:180, 140:180] = 0
square[:, 240:280, 240:280] = 0
square[:, 280:320, 280:320] = 0
kp = np.array([[160, 160], [260, 260], [300, 300]])
kp = np.expand_dims(kp, axis=0)
def _rotate(image, keypoints, angle, keypoints_num):
    image = tf.contrib.image.rotate(image, angle)
    cos, sin = tf.cos(angle), tf.sin(angle)
    x0, y0 = .5, .5
    rot_mat = tf.Variable([[cos, -sin], [sin, cos]], trainable=False)
    keypoints -= (x0, y0)
    keypoints = tf.reshape(keypoints, shape=[-1, 2])
    keypoints = tf.matmul(keypoints, rot_mat)
    keypoints = tf.reshape(keypoints, shape=[-1, keypoints_num, 2])
    keypoints += (x0, y0)
    return image, keypoints
image = tf.placeholder(tf.float32, [None, 800, 800, 3])
keypoints = tf.placeholder(tf.float32, [None, 3, 2])
angle = np.pi / 3 # fix angle, works fine
#angle = tf.random_uniform([]) # random angle, does not work
image_r, keypoints_r = _rotate(image, keypoints / 800, angle, 3)
keypoints_r *= 800
sess = tf.Session()
sess.run(tf.initialize_all_variables())
imr, kr = sess.run([image_r, keypoints_r], feed_dict={image: square, keypoints:kp})
# displaying output
plt.imshow(imr[0])
plt.scatter(*zip(*kr[0]))
plt.savefig('rotation.jpg')
The problem is here:
rot_mat = tf.Variable([[cos, -sin], [sin, cos]], trainable=False)
Since rot_mat is a variable, its value is being set only when variables are initialized, here:
sess.run(tf.initialize_all_variables())
So at that point rot_mat gets some value (using cos and sin, which in turn depend on angle, which is random) and it does not change anymore. Then when you do:
imr, kr = sess.run([image_r, keypoints_r], feed_dict={image: square, keypoints: kp})
It is a different call to run, so tf.random_uniform produces a new value, but rot_mat still keeps the same value from when it was initialized. Since the image is rotated with:
image = tf.contrib.image.rotate(image, angle)
And the key points are rotated with:
keypoints = tf.matmul(keypoints, rot_mat)
The rotations do not match. The easiest fix is not to use a variable for rot_mat:
rot_mat = [[cos, -sin], [sin, cos]]
With this, the code works fine. If you really need rot_mat to be a variable, it is possible, but it is a bit more work and it does not seem to be needed here. If you do not like rot_mat being a list and want a proper tensor instead, you can use tf.convert_to_tensor:
rot_mat = tf.convert_to_tensor([[cos, -sin], [sin, cos]])
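To make the failure mode concrete, here is a minimal sketch (same TF 1.x API as above; the variable names are mine) contrasting the two: the plain tensor is recomputed on every sess.run and therefore always matches the angle sampled in that call, while the variable keeps whatever angle was sampled when it was initialized.
import numpy as np
import tensorflow as tf

angle = tf.random_uniform([])
cos, sin = tf.cos(angle), tf.sin(angle)
rot_tensor = tf.convert_to_tensor([[cos, -sin], [sin, cos]])  # recomputed each run
rot_var = tf.Variable([[cos, -sin], [sin, cos]], trainable=False)  # frozen at init

sess = tf.Session()
sess.run(tf.initialize_all_variables())
a, t, v = sess.run([angle, rot_tensor, rot_var])
print(a, t[0][0], v[0][0])  # t[0][0] equals cos(a); v[0][0] comes from an older, init-time angle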
I'm trying to randomly rotate some images together with their annotations, and to understand how to get a point's new location after rotation. My images have different shapes.
For example, I use this function to calculate a point's new position, based on this answer (here):
def new_pixel(x,y,theta,X,Y):
    sin = math.sin(theta)
    cos = math.cos(theta)
    x_new = (x-X/2)*cos + (y-X/2)*sin + X/2
    y_new = -(x-X/2)*sin + (y-Y/2)*cos + Y/2
    return int(x_new),int(y_new)
the code to open the original image:
img = cv2.imread('D://ubun/1.jpg')
print(img.shape)
X, Y, c = img.shape
p1 = (124,291)
p2 = (168,291)
p3 = (169,391)
p4 = (125,391)
img1 = img.copy()
cv2.circle(img1, p1, 10, color=(255,0,0), thickness=2)
plt.imshow(img1)
the red dot is the point
the code for rotation:
rotated = ndimage.rotate(img, 45)
print(rotated.shape)
p11 = new_pixel(p1[0],p1[1],45,X,Y)
p22 = new_pixel(p2[0],p2[1],45,X,Y)
p33 = new_pixel(p3[0],p3[1],45,X,Y)
p44 = new_pixel(p4[0],p4[1],45,X,Y)
cv2.circle(rotated, p11, 10, color=(255,0,0), thickness=2)
plt.imshow(rotated)
The image after rotation; you can see the point is not in the correct position:
I noticed that the image shape is different after rotation; does this affect the calculations?
This reply is really late, but even I faced the same issue. So there are basically two issues with your code.
math.sin/math.cos take radians as input, not degrees. So just convert the angle to radians to get the right result.
X, Y: you are taking the height and width of the initial, unrotated image. This would be right if the resulting image were the same size as the initial one, but since the rotation resizes the output to fit the complete rotated image, you need to take X, Y as the new width and height. Note that you use the new width and height only when adding the center back after the transformation. So the formula would be:
x_new = (x - old_width/2)*cos + (y - old_height/2)*sin + new_width/2
y_new = -(x - old_width/2)*sin + (y - old_height/2)*cos + new_height/2
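Putting both fixes together, a corrected helper might look like this (a sketch; the parameter names are mine, and the new size comes from the rotated image, e.g. new_h, new_w = rotated.shape[:2]):
import math

def new_pixel(x, y, theta_degrees, old_w, old_h, new_w, new_h):
    theta = math.radians(theta_degrees)  # fix 1: sin/cos expect radians
    sin, cos = math.sin(theta), math.cos(theta)
    # fix 2: subtract the OLD center, add back the NEW center
    x_new = (x - old_w / 2) * cos + (y - old_h / 2) * sin + new_w / 2
    y_new = -(x - old_w / 2) * sin + (y - old_h / 2) * cos + new_h / 2
    return int(x_new), int(y_new)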
I know this post is too late, but I hope this helps out anyone who faces the same issue.
I have a dataset of images for keypoint detection. Each image is labeled with one keypoint (x|y).
I use numpy to flip images for data augmentation.
I flip an image horizontally with this code:
img = img[:, ::-1]
And vertically with this code:
img = img[::-1]
So far so good. But I also have to recalculate the keypoints (labels), e.g. [85 35].
I know it's basic math, but I haven't come up with a solution.
Thanks in advance.
Use the rotation matrix:
x_new = x_old * np.cos(alpha) - y_old * np.sin(alpha)
y_new = x_old * np.sin(alpha) + y_old * np.cos(alpha)
alpha is the rotation angle in radians; however, I don't know what angle img = img[:, ::-1] corresponds to.
If you are flipping by 180° (a plain vertical or horizontal flip), there is no need for a rotation matrix. Just get the shape of your image:
X = img.shape[1]
Y = img.shape[0]
Recompute X-Position when flipping horizontally.
X_Position_New = X - X_Position_Old
Recompute Y-Position when flipping vertically.
Y_Position_New = Y - Y_Position_Old
If you flip the image horizontally then the pixel that was 85 units from the left will be 85 units from the right. The same goes for the vertical flip, 35 units from the top will be 35 units from the bottom.
So now you can either calculate the new location using the image size (img.shape), or use the fact that you can access the image with negative indices: point [85 35] becomes [-85 35], which is the same as [width_of_image-85 35].
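A minimal sketch of both flips with the keypoint recomputation (this assumes the keypoint is stored as (x, y), i.e. column first; swap the roles if yours is (row, col)):
import numpy as np

img = np.zeros((100, 120))  # toy image: 100 rows (height), 120 columns (width)
kp = np.array([85, 35])     # (x, y), assumed

H, W = img.shape[:2]

# horizontal flip: x is now measured from the other side
img_h = img[:, ::-1]
kp_h = np.array([W - 1 - kp[0], kp[1]])  # the -1 is because indices are zero-based

# vertical flip: y is now measured from the other side
img_v = img[::-1]
kp_v = np.array([kp[0], H - 1 - kp[1]])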
I'm using a set of 32x32x32 grayscale images and I want to apply random rotations on the images as a part of data augmentation while training a CNN by tflearn + tensorflow. I was using the following code to do so:
# Real-time data preprocessing
img_prep = ImagePreprocessing()
img_prep.add_featurewise_zero_center()
img_prep.add_featurewise_stdnorm()
# Real-time data augmentation
img_aug = ImageAugmentation()
img_aug.add_random_rotation(max_angle=360.)
# Input data
with tf.name_scope('Input'):
    X = tf.placeholder(tf.float32, shape=(None, image_size, image_size, image_size, num_channels), name='x-input')
    Y = tf.placeholder(tf.float32, shape=(None, label_cnt), name='y-input')
# Convolutional network building
network = input_data(shape=[None, 32, 32, 32, 1],
                     placeholder=X,
                     data_preprocessing=img_prep,
                     data_augmentation=img_aug)
(I'm using a combination of tensorflow and tflearn to be able to use the features from both, so please bear with me. Let me know if something is wrong with the way I'm using placeholders, etc.)
I found that add_random_rotation (which itself uses scipy.ndimage.interpolation.rotate) treats the third dimension of my grayscale images as channels (like RGB channels) and rotates all 32 slices of the third dimension by the same random angle around the z-axis (it treats my 3D image as a 2D image with 32 channels). But I want the image to be rotated in space (around all three axes). Do you have any idea how I can do that? Is there a function or package for easily rotating 3D images in space?
import random
import numpy as np
import scipy.ndimage

def random_rotation_3d(batch, max_angle):
    """Rotate each 3D image in a batch by random angles in (-max_angle, max_angle).

    Arguments:
        batch: the batch of 3D images.
        max_angle: `float`. The maximum rotation angle.

    Returns:
        batch of rotated 3D images
    """
    size = batch.shape
    batch = np.squeeze(batch)
    batch_rot = np.zeros(batch.shape)
    for i in range(batch.shape[0]):
        if bool(random.getrandbits(1)):
            image1 = np.squeeze(batch[i])
            # rotate along z-axis
            angle = random.uniform(-max_angle, max_angle)
            image2 = scipy.ndimage.interpolation.rotate(image1, angle, mode='nearest', axes=(0, 1), reshape=False)
            # rotate along y-axis
            angle = random.uniform(-max_angle, max_angle)
            image3 = scipy.ndimage.interpolation.rotate(image2, angle, mode='nearest', axes=(0, 2), reshape=False)
            # rotate along x-axis
            angle = random.uniform(-max_angle, max_angle)
            batch_rot[i] = scipy.ndimage.interpolation.rotate(image3, angle, mode='nearest', axes=(1, 2), reshape=False)
        else:
            batch_rot[i] = batch[i]
    return batch_rot.reshape(size)
It is more difficult to incorporate into ImageAugmentation(), but the scipy.ndimage.rotate function rotates 3D images correctly by default and takes an axes argument which specifies the plane of rotation (https://docs.scipy.org/doc/scipy-0.16.1/reference/generated/scipy.ndimage.interpolation.rotate.html). Rotating around the first axis (x) means you pass axes=(1, 2); to rotate around the second axis (y), use axes=(0, 2).
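A short illustration of that axes mapping on a random volume (a sketch):
import numpy as np
from scipy import ndimage

vol = np.random.random((32, 32, 32))
# axes names the plane of rotation; the remaining axis is the rotation axis
rot_about_0 = ndimage.rotate(vol, 30, axes=(1, 2), reshape=False)  # about axis 0
rot_about_1 = ndimage.rotate(vol, 30, axes=(0, 2), reshape=False)  # about axis 1
rot_about_2 = ndimage.rotate(vol, 30, axes=(0, 1), reshape=False)  # about axis 2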
If you want to rotate any 3D image around its center and keep it centered, use scipy's affine_transform with an offset, as follows:
import numpy as np
from scipy.ndimage import affine_transform

# create a 3D image
image = np.random.random((20,20,20))
# output shape
output_shape = np.array(image.shape)
# rotation matrix around z axis
theta = 0.01
cosine = np.cos(theta)
sinus = np.sin(theta)
M = np.array([[cosine, -sinus, 0],
[sinus, cosine, 0],
[0, 0, 1]])
# offset: maps the center of the output grid back onto the center of the input
offset = (np.array(image.shape) - M.dot(np.array(output_shape)))
offset = offset / 2.0  # halving centers the rotation; this step is important
# affine transformation
f_data = affine_transform(np.asarray(image), np.asarray(M),
                          output_shape=output_shape, offset=offset)
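The offset works because affine_transform maps each output coordinate o to the input coordinate M·o + offset; choosing offset = c_in - M·c_out (which is what the halved difference above computes, up to the half-pixel convention) keeps the center fixed under the rotation.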
I'm trying to rotate an image by some degrees and then show it in a window.
My idea is to rotate the image and then show it in a new window, whose width and height are calculated from the old width and height:
new_width = x * cos(angle) + y * sin(angle)
new_height = y * cos(angle) + x * sin(angle)
I was expecting the result to look like below:
but it turns out the result looks like this:
and my code is here:
#!/usr/bin/env python -tt
#coding:utf-8
import sys
import math
import cv2
import numpy as np
def rotateImage(image, angle):  # parameter angle in degrees
    if len(image.shape) > 2:  # check colorspace
        shape = image.shape[:2]
    else:
        shape = image.shape
    image_center = tuple(np.array(shape)/2)  # rotation center
    radians = math.radians(angle)
    x, y = im.shape
    print 'x =', x
    print 'y =', y
    new_x = math.ceil(math.cos(radians)*x + math.sin(radians)*y)
    new_y = math.ceil(math.sin(radians)*x + math.cos(radians)*y)
    new_x = int(new_x)
    new_y = int(new_y)
    rot_mat = cv2.getRotationMatrix2D(image_center, angle, 1.0)
    print 'rot_mat =', rot_mat
    result = cv2.warpAffine(image, rot_mat, shape, flags=cv2.INTER_LINEAR)
    return result, new_x, new_y
def show_rotate(im, width, height):
    # width = width/2
    # height = height/2
    # win = cv2.cv.NamedWindow('ro_win', cv2.cv.CV_WINDOW_NORMAL)
    # cv2.cv.ResizeWindow('ro_win', width, height)
    win = cv2.namedWindow('ro_win')
    cv2.imshow('ro_win', im)
    if cv2.waitKey() == '\x1b':
        cv2.destroyWindow('ro_win')
if __name__ == '__main__':
    try:
        im = cv2.imread(sys.argv[1], 0)
    except:
        print '\n', "Can't open image, OpenCV or file missing."
        sys.exit()
    rot, width, height = rotateImage(im, 30.0)
    print width, height
    show_rotate(rot, width, height)
There must be some stupid mistakes in my code leading to this problem, but I cannot figure them out...
and I know my code is not pythonic enough :( ..sorry for that..
Can anyone help me?
Best,
bearzk
As BloodyD's answer said, cv2.warpAffine doesn't auto-center the transformed image. Instead, it simply transforms each pixel using the transformation matrix. (This could move pixels anywhere in Cartesian space, including out of the original image area.) Then, when you specify the destination image size, it grabs an area of that size, beginning at (0,0), i.e. the upper left of the original frame. Any parts of your transformed image that don't lie in that region will be cut off.
Here's Python code to rotate and scale an image, with the result centered:
def rotateAndScale(img, scaleFactor=0.5, degreesCCW=30):
    (oldY, oldX) = img.shape  # note: numpy uses (y,x) convention but most OpenCV functions use (x,y)
    M = cv2.getRotationMatrix2D(center=(oldX/2, oldY/2), angle=degreesCCW, scale=scaleFactor)  # rotate about center of image.

    # choose a new image size.
    newX, newY = oldX*scaleFactor, oldY*scaleFactor
    # include this if you want to prevent corners being cut off
    r = np.deg2rad(degreesCCW)
    newX, newY = (abs(np.sin(r)*newY) + abs(np.cos(r)*newX), abs(np.sin(r)*newX) + abs(np.cos(r)*newY))

    # the warpAffine function call, below, basically works like this:
    # 1. apply the M transformation on each pixel of the original image
    # 2. save everything that falls within the upper-left "dsize" portion of the resulting image.
    # So I will find the translation that moves the result to the center of that region.
    (tx, ty) = ((newX-oldX)/2, (newY-oldY)/2)
    M[0, 2] += tx  # third column of matrix holds translation, which takes effect after rotation.
    M[1, 2] += ty

    rotatedImg = cv2.warpAffine(img, M, dsize=(int(newX), int(newY)))
    return rotatedImg
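For example (hypothetical file name; note that the function unpacks img.shape into exactly two values, so it expects a single-channel image):
img = cv2.imread('input.png', 0)  # 0 loads the image as grayscale
out = rotateAndScale(img, scaleFactor=1.0, degreesCCW=30)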
When you get the rotation matrix like this:
rot_mat = cv2.getRotationMatrix2D(image_center, angle, 1.0)
Your "scale" parameter is set to 1.0, so if you use it to transform your image matrix to your result matrix of the same size, it will necessarily be clipped.
You can instead get a rotation matrix like this:
rot_mat = cv2.getRotationMatrix2D(image_center, angle, 0.5)
that will both rotate and shrink, leaving room around the edges (you can scale it up first so that you will still end up with a big image).
Also, it looks like you are confusing the numpy and OpenCV conventions for image sizes. OpenCV uses (x, y) for image sizes and point coordinates, while numpy uses (y,x). That is probably why you are going from a portrait to landscape aspect ratio.
I tend to be explicit about it like this:
imageHeight = image.shape[0]
imageWidth = image.shape[1]
pointcenter = (imageWidth/2, imageHeight/2)  # (x, y), per the OpenCV convention
etc...
Ultimately, this works fine for me:
def rotateImage(image, angle):  # parameter angle in degrees
    height = image.shape[0]
    width = image.shape[1]
    height_big = height * 2
    width_big = width * 2
    image_big = cv2.resize(image, (width_big, height_big))
    image_center = (width_big/2, height_big/2)  # rotation center
    rot_mat = cv2.getRotationMatrix2D(image_center, angle, 0.5)
    result = cv2.warpAffine(image_big, rot_mat, (width_big, height_big), flags=cv2.INTER_LINEAR)
    return result
Update:
Here is the complete script that I executed. Just cv2.imshow("winname", image) and cv2.waitKey() with no arguments to keep it open:
import cv2
def rotateImage(image, angle):  # parameter angle in degrees
    height = image.shape[0]
    width = image.shape[1]
    height_big = height * 2
    width_big = width * 2
    image_big = cv2.resize(image, (width_big, height_big))
    image_center = (width_big/2, height_big/2)  # rotation center
    rot_mat = cv2.getRotationMatrix2D(image_center, angle, 0.5)
    result = cv2.warpAffine(image_big, rot_mat, (width_big, height_big), flags=cv2.INTER_LINEAR)
    return result

imageOriginal = cv2.imread("/Path/To/Image.jpg")
# this was an iPhone image that I wanted to resize to something manageable to view
# so I knew beforehand that this is an appropriate size
imageOriginal = cv2.resize(imageOriginal, (600, 800))
imageRotated = rotateImage(imageOriginal, 45)
cv2.imshow("Rotated", imageRotated)
cv2.waitKey()
Really not a lot there... And you were definitely right to use if __name__ == '__main__': if it is a real module that you're working on.
Well, this question seems not up-to-date, but I had the same problem and took a while to solve it without scaling the original image up and down. I will just post my solution (unfortunately C++ code, but it could easily be ported to Python if needed):
#include <math.h>
#define PI 3.14159265
#define SIN(angle) sin(angle * PI / 180)
#define COS(angle) cos(angle * PI / 180)

void rotate(const Mat src, Mat &dest, double angle, int borderMode, const Scalar &borderValue){
    int w = src.size().width, h = src.size().height;

    // resize the destination image
    Size2d new_size = Size2d(abs(w * COS((int)angle % 180)) + abs(h * SIN((int)angle % 180)),
                             abs(w * SIN((int)angle % 180)) + abs(h * COS((int)angle % 180)));
    dest = Mat(new_size, src.type());

    // this is our rotation point
    Size2d old_size = src.size();
    Point2d rot_point = Point2d(old_size.width / 2.0, old_size.height / 2.0);

    // and this is the rotation matrix
    // same as in the opencv docs, but in 3x3 form
    double a = COS(angle), b = SIN(angle);
    Mat rot_mat = (Mat_<double>(3,3) <<
        a, b, (1 - a) * rot_point.x - b * rot_point.y,
        -1 * b, a, b * rot_point.x + (1 - a) * rot_point.y,
        0, 0, 1);

    // next the translation matrix
    double offsetx = (new_size.width - old_size.width) / 2,
           offsety = (new_size.height - old_size.height) / 2;
    Mat trans_mat = (Mat_<double>(3,3) << 1, 0, offsetx, 0, 1, offsety, 0, 0, 1);

    // multiply them: we rotate first, then translate, so the order is important!
    // inverse order, so that the transformations are done right
    Mat affine_mat = Mat(trans_mat * rot_mat).rowRange(0, 2);

    // now just apply the affine transformation matrix
    warpAffine(src, dest, affine_mat, new_size, INTER_LINEAR, borderMode, borderValue);
}
The general solution is to rotate and then translate the rotated picture to the right position. So we create two transformation matrices (the first for the rotation, the second for the translation) and multiply them into the final affine transformation. As the matrix returned by OpenCV's getRotationMatrix2D is only 2x3, I had to create the matrices by hand in 3x3 format so they could be multiplied. Then just take the first two rows and apply the affine transformation.
EDIT: I have created a Gist, because I have needed this functionality too often in different projects. There is also a Python version of it: https://gist.github.com/BloodyD/97917b79beb332a65758
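For readers who want to stay in Python, here is a rough port of the same approach (a sketch, not the gist's exact code):
import cv2
import numpy as np

def rotate(src, angle_degrees):
    h, w = src.shape[:2]
    rad = np.deg2rad(angle_degrees)
    # bounding box of the rotated image
    new_w = int(abs(w * np.cos(rad)) + abs(h * np.sin(rad)))
    new_h = int(abs(w * np.sin(rad)) + abs(h * np.cos(rad)))
    # rotation about the old center, promoted to 3x3 homogeneous form
    rot = np.eye(3)
    rot[:2] = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_degrees, 1.0)
    # translation to the new center
    trans = np.array([[1.0, 0.0, (new_w - w) / 2.0],
                      [0.0, 1.0, (new_h - h) / 2.0],
                      [0.0, 0.0, 1.0]])
    # rotate first, then translate; keep the top two rows for warpAffine
    affine = (trans @ rot)[:2]
    return cv2.warpAffine(src, affine, (new_w, new_h), flags=cv2.INTER_LINEAR)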
I'm having a hard time finding examples for rotating an image around a specific point by a specific (often very small) angle in Python using OpenCV.
This is what I have so far; the resulting image is rotated somewhat, but it looks very strange:
def rotateImage(image, angle):
    if image != None:
        dst_image = cv.CloneImage(image)
        rotate_around = (0, 0)
        transl = cv.CreateMat(2, 3, cv.CV_32FC1)
        matrix = cv.GetRotationMatrix2D(rotate_around, angle, 1.0, transl)
        cv.GetQuadrangleSubPix(image, dst_image, transl)
        cv.GetRectSubPix(dst_image, image, rotate_around)
    return dst_image
import numpy as np
import cv2

def rotate_image(image, angle):
    image_center = tuple(np.array(image.shape[1::-1]) / 2)
    rot_mat = cv2.getRotationMatrix2D(image_center, angle, 1.0)
    result = cv2.warpAffine(image, rot_mat, image.shape[1::-1], flags=cv2.INTER_LINEAR)
    return result
Assuming you're using the cv2 version, that code finds the center of the image you want to rotate, calculates the transformation matrix, and applies it to the image.
Or, much easier, use SciPy:

from scipy import ndimage

# rotation angle in degrees
rotated = ndimage.rotate(image_to_rotate, 45)

See here for more usage info.
def rotate(image, angle, center=None, scale=1.0):
    (h, w) = image.shape[:2]
    if center is None:
        center = (w / 2, h / 2)
    # Perform the rotation
    M = cv2.getRotationMatrix2D(center, angle, scale)
    rotated = cv2.warpAffine(image, M, (w, h))
    return rotated
I had issues with some of the above solutions with getting the correct "bounding box" (new size) of the image, so here is my version:
import math
import cv2

def rotation(image, angleInDegrees):
    h, w = image.shape[:2]
    img_c = (w / 2, h / 2)

    rot = cv2.getRotationMatrix2D(img_c, angleInDegrees, 1)

    rad = math.radians(angleInDegrees)
    sin = math.sin(rad)
    cos = math.cos(rad)
    b_w = int((h * abs(sin)) + (w * abs(cos)))
    b_h = int((h * abs(cos)) + (w * abs(sin)))

    rot[0, 2] += ((b_w / 2) - img_c[0])
    rot[1, 2] += ((b_h / 2) - img_c[1])

    outImg = cv2.warpAffine(image, rot, (b_w, b_h), flags=cv2.INTER_LINEAR)
    return outImg
The cv2.warpAffine function takes the shape parameter in reverse order, (col, row), which the answers above do not mention. Here is what worked for me:
import cv2
import numpy as np

def rotateImage(image, angle):
    row, col = image.shape
    center = tuple(np.array([row, col]) / 2)
    rot_mat = cv2.getRotationMatrix2D(center, angle, 1.0)
    new_image = cv2.warpAffine(image, rot_mat, (col, row))
    return new_image
import imutils
from imutils.video import VideoStream

vs = VideoStream(src=0).start()
...
while (1):
    frame = vs.read()
    ...
    frame = imutils.rotate(frame, 45)
More: https://github.com/jrosebr1/imutils
You can simply use the imutils package to do the rotation. It has two methods:
rotate: rotates the image by the specified angle. The drawback is that the image might get cropped if it is not square.
rotate_bound: overcomes the problem with rotate by adjusting the size of the output image so the whole rotated image fits.
You can find more info on this blog post:
https://www.pyimagesearch.com/2017/01/02/rotate-images-correctly-with-opencv-and-python/
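A minimal usage sketch (the image path is hypothetical):
import cv2
import imutils

image = cv2.imread('input.jpg')
rotated = imutils.rotate(image, 45)             # corners may be cut off
rotated_full = imutils.rotate_bound(image, 45)  # output grows to fit the whole image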
A quick tweak to @alex-rodrigues' answer... deals with shape including the number of channels.
import cv2
import numpy as np

def rotateImage(image, angle):
    # shape[1::-1] drops any channel dimension and reverses to (x, y)
    center = tuple(np.array(image.shape[1::-1]) / 2)
    rot_mat = cv2.getRotationMatrix2D(center, angle, 1.0)
    # warpAffine expects dsize as (width, height)
    return cv2.warpAffine(image, rot_mat, image.shape[1::-1], flags=cv2.INTER_LINEAR)
You need a homogeneous matrix of size 2x3. The first 2x2 block is the rotation matrix and the last column is a translation vector.
Here's how to build your transformation matrix:
import cv2
import numpy as np

# Example with the image center point:
# angle = np.pi/6
# specific_point = np.array(img.shape[:2][::-1])/2
def rotate(img: np.ndarray, angle: float, specific_point: np.ndarray) -> np.ndarray:
    warp_mat = np.zeros((2, 3))
    cos, sin = np.cos(angle), np.sin(angle)
    warp_mat[:2, :2] = [[cos, -sin], [sin, cos]]
    # translation column: keeps specific_point fixed under the rotation
    warp_mat[:2, 2] = specific_point - np.matmul(warp_mat[:2, :2], specific_point)
    return cv2.warpAffine(img, warp_mat, img.shape[:2][::-1])
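For example, rotating about the image center by pi/6 (the values from the comment above; the image path is hypothetical):
img = cv2.imread('input.jpg')
rotated = rotate(img, np.pi / 6, np.array(img.shape[:2][::-1]) / 2)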
You can easily rotate images using OpenCV in Python:
def funcRotate(degree=0):
    degree = cv2.getTrackbarPos('degree', 'Frame')
    rotation_matrix = cv2.getRotationMatrix2D((width / 2, height / 2), degree, 1)
    rotated_image = cv2.warpAffine(original, rotation_matrix, (width, height))
    cv2.imshow('Rotate', rotated_image)
If you are thinking of creating a trackbar, simply create one using cv2.createTrackbar() and call the funcRotate() function from your main script; a sketch of that wiring follows. Then you can easily rotate the image to any degree you want. Full details about the implementation can be found here as well: Rotate images at any degree using Trackbars in opencv
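A minimal sketch of that wiring (it assumes funcRotate as defined above and a hypothetical image path):
import cv2

original = cv2.imread('input.jpg')
height, width = original.shape[:2]
cv2.namedWindow('Frame')
# cv2.createTrackbar(trackbar_name, window_name, initial_value, max_value, callback)
cv2.createTrackbar('degree', 'Frame', 0, 360, funcRotate)
funcRotate(0)  # draw once before the first trackbar event
cv2.waitKey(0)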
Here's an example for rotating about an arbitrary point (x, y) using only OpenCV:
import cv2 as cv
import numpy as np

def rotate_about_point(x, y, degree, image):
    frm_hgt, frm_wdt = image.shape[:2]
    rot_mtx = cv.getRotationMatrix2D((x, y), degree, 1)
    abs_cos = abs(rot_mtx[0, 0])
    abs_sin = abs(rot_mtx[0, 1])
    rot_wdt = int(frm_hgt * abs_sin + frm_wdt * abs_cos)
    rot_hgt = int(frm_hgt * abs_cos + frm_wdt * abs_sin)
    # shift so the rotation point lands at the center of the enlarged frame
    rot_mtx += np.asarray([[0, 0, rot_wdt / 2 - x],
                           [0, 0, rot_hgt / 2 - y]])
    rot_img = cv.warpAffine(image, rot_mtx, (rot_wdt, rot_hgt),
                            borderMode=cv.BORDER_CONSTANT)
    return rot_img
You can use the following code:
import numpy as np
from PIL import Image
import math
def shear(angle, x, y):
    tangent = math.tan(angle / 2)
    # shear 1
    new_x = round(x - y * tangent)
    new_y = y
    # shear 2
    new_y = round(new_x * math.sin(angle) + new_y)
    # there is no change in new_x according to the shear matrix
    # shear 3
    new_x = round(new_x - new_y * tangent)
    # there is no change in new_y according to the shear matrix
    return new_y, new_x
image = np.array(Image.open("test.png"))  # Load the image

angle = -int(input("Enter the angle :- "))  # Ask the user to enter the angle of rotation

# Define the most occurring variables
angle = math.radians(angle)  # converting degrees to radians
cosine = math.cos(angle)
sine = math.sin(angle)

height = image.shape[0]  # height of the image
width = image.shape[1]   # width of the image

# Height and width of the new image that is to be formed
new_height = round(abs(image.shape[0] * cosine) + abs(image.shape[1] * sine)) + 1
new_width = round(abs(image.shape[1] * cosine) + abs(image.shape[0] * sine)) + 1

output = np.zeros((new_height, new_width, image.shape[2]))
image_copy = output.copy()

# Find the centre of the original image, about which we have to rotate
original_centre_height = round(((image.shape[0] + 1) / 2) - 1)  # with respect to the original image
original_centre_width = round(((image.shape[1] + 1) / 2) - 1)   # with respect to the original image

# Find the centre of the new image that will be obtained
new_centre_height = round(((new_height + 1) / 2) - 1)  # with respect to the new image
new_centre_width = round(((new_width + 1) / 2) - 1)    # with respect to the new image

for i in range(height):
    for j in range(width):
        # coordinates of the pixel with respect to the centre of the original image
        y = image.shape[0] - 1 - i - original_centre_height
        x = image.shape[1] - 1 - j - original_centre_width

        # applying the shear transformation
        new_y, new_x = shear(angle, x, y)
        new_y = new_centre_height - new_y
        new_x = new_centre_width - new_x

        output[new_y, new_x, :] = image[i, j, :]

pil_img = Image.fromarray((output).astype(np.uint8))
pil_img.save("rotated_image.png")