How to track a pixel location after rotating an image? - python

I'm trying to randomly rotate some annotated images, and I need to understand how to get the new location of a point after the rotation.
My images have different shapes.
Here is what I'm doing so far: this function calculates the new position according to this answer (here):
def new_pixel(x, y, theta, X, Y):
    sin = math.sin(theta)
    cos = math.cos(theta)
    x_new = (x - X/2)*cos + (y - Y/2)*sin + X/2
    y_new = -(x - X/2)*sin + (y - Y/2)*cos + Y/2
    return int(x_new), int(y_new)
The code that opens the original image:
import math
import cv2
from matplotlib import pyplot as plt
from scipy import ndimage

img = cv2.imread('D://ubun/1.jpg')
print(img.shape)
X, Y, c = img.shape  # note: img.shape is (height, width, channels)
p1 = (124,291)
p2 = (168,291)
p3 = (169,391)
p4 = (125,391)
img1 = img.copy()
cv2.circle(img1, p1, 10, color=(255,0,0), thickness=2)
plt.imshow(img1)
The red dot marks the point.
The code for the rotation:
rotated = ndimage.rotate(img, 45)
print(rotated.shape)
p11 = new_pixel(p1[0],p1[1],45,X,Y)
p22 = new_pixel(p2[0],p2[1],45,X,Y)
p33 = new_pixel(p3[0],p3[1],45,X,Y)
p44 = new_pixel(p4[0],p4[1],45,X,Y)
cv2.circle(rotated, p11, 10, color=(255,0,0), thickness=2)
plt.imshow(rotated)
The image after rotation, where you can see the point is not in the correct position:
I noticed that the image shape is different after rotation. Does this affect the calculations?

This reply is really late, but I faced the same issue. There are basically two issues with your code:
math.sin / math.cos take radians as input, not degrees. So just convert the angle to radians to get the right result.
X, Y: you are taking the height and width of the initial, unrotated image. That would be right if the resulting image were the same size as the initial one, but since the rotation resizes the output to fit the complete rotated image, you need to use the new width and height, but only for the offset you add back after the transformation. So the formula would be:
x_new = (x-old_Width/2)*cos + (y-old_height/2)*sin + new_width/2
y_new = -(x-old_Width/2)*sin + (y-old_height/2)*cos + new_height/2
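Putting both fixes together, here is a minimal sketch of the corrected function (taking the old and new image dimensions explicitly as parameters is my own choice, not part of the original code):
import math

def new_pixel(x, y, angle_deg, old_w, old_h, new_w, new_h):
    # math.sin / math.cos expect radians, so convert the angle first
    theta = math.radians(angle_deg)
    sin, cos = math.sin(theta), math.cos(theta)
    # shift into a frame centered on the old image, rotate,
    # then shift back using the center of the *new* (larger) image
    x_new = (x - old_w / 2) * cos + (y - old_h / 2) * sin + new_w / 2
    y_new = -(x - old_w / 2) * sin + (y - old_h / 2) * cos + new_h / 2
    return int(x_new), int(y_new)
Usage against the question's example, remembering that img.shape is (height, width, channels):
old_h, old_w = img.shape[:2]
new_h, new_w = rotated.shape[:2]
p11 = new_pixel(p1[0], p1[1], 45, old_w, old_h, new_w, new_h)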
I know this post is very late, but I hope this helps anyone who faces the same issue.

Related

OpenCV inverse warpPolar output out of frame

When I try to apply an inverse polar transformation to my image, the output falls outside the output image. There are also some weird white patterns on the top. I tried to make the output image larger, but the circle ends up on the left side, so it didn't help.
I am trying to turn a line into a circle using the warpPolar function. For that, I first rotate the line and give it a black area as shown in the image, then call cv2.warpPolar with the WARP_INVERSE_MAP flag.
My question is: how can I fully draw the circle and get its bounding box?
import cv2
import numpy as np

line = np.ones(shape=(20,475),dtype=np.uint8)*255
flipped = cv2.rotate(line,cv2.ROTATE_90_CLOCKWISE)
cv2.imshow('flipped',flipped)
h,w = flipped.shape
radius = int(h / (2*np.pi))
new_image = np.zeros(shape=(h,radius+w),dtype=np.uint8)
h2,w2 = new_image.shape
new_image[: ,w2-w:w2] = flipped
cv2.imshow('polar',new_image)
h,w = new_image.shape
center = (w/2,h)
output= cv2.warpPolar(new_image,center=center,maxRadius=radius,dsize=(1500,1500),flags=cv2.WARP_INVERSE_MAP + cv2.WARP_POLAR_LINEAR)
cv2.imshow('output',output)
cv2.waitKey(0)
Note: I am not getting the same result as you showed above when I run the same code. Are some code lines missing?
If I didn't misunderstand your problem, you are trying to get this result (if I am wrong, I will update the answer accordingly):
The only point you are missing is how the center and radius are defined. You are doing the inverse transform here, and the input was created by you, not by warpPolar. Since you define the output size as (1500,1500), you need to update the center and radius accordingly. Here is my code giving this result:
import cv2
import numpy as np
line = np.ones(shape=(20,475),dtype=np.uint8)*255
flipped = cv2.rotate(line,cv2.ROTATE_90_CLOCKWISE)
cv2.imshow('flipped',flipped)
h,w = flipped.shape
radius = int(h / (2*np.pi))
new_image = np.zeros(shape=(h,radius+w),dtype=np.uint8)
h2,w2 = new_image.shape
new_image[: ,w2-w:w2] = flipped
cv2.imshow('polar',new_image)
h,w = new_image.shape
center = (750,750)
maxRadius = 750
output = cv2.warpPolar(new_image, center=center, maxRadius=maxRadius, dsize=(1500,1500), flags=cv2.WARP_INVERSE_MAP + cv2.WARP_POLAR_LINEAR)
cv2.imshow('output',output)
cv2.waitKey(0)
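The question also asks for the bounding box of the drawn circle. One way to get it (a sketch of my own, not part of the answer above) is to take the bounding rectangle of the non-zero pixels of output:
# collect all non-black pixels and fit an axis-aligned bounding rectangle around them
points = cv2.findNonZero(output)
x, y, w_box, h_box = cv2.boundingRect(points)
cv2.rectangle(output, (x, y), (x + w_box, y + h_box), 127, 2)
cv2.imshow('output with bounding box', output)
cv2.waitKey(0)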

Orienting an Object in an image horizontally

I have images of food trays oriented at various angles, and I would like to make all the trays horizontal. For this I tried finding the longest edge of the tray using the Hough transform, calculated its orientation with respect to the image border, and rotated the image by that angle. It works for only a few cases. I would like to make it work for all the images I have. Can anyone help me with this? I have attached some sample images in the link below, along with the code I am currently using.
Link for images
import math
import cv2
import numpy as np

def Enquiry(lis1):
    return np.array(lis1)

img = cv2.imread('path/to/image')
canny = cv2.Canny(img, 100, 200)
minLineLength = 200
maxLineGap = 10
lines = cv2.HoughLinesP(canny, 1, np.pi / 180, 100,
                        minLineLength=minLineLength, maxLineGap=maxLineGap)
if Enquiry(lines).size >= 4:
    lines1 = lines[:, 0, :]
    max_length = 0
    index = 0
    i = 0
    for x1, y1, x2, y2 in lines1:
        length = (x1-x2)*(x1-x2) + (y1-y2)*(y1-y2)
        if length > max_length:
            max_length = length
            index = i
        i += 1
    [x1, y1, x2, y2] = lines1[index]
    degree = math.atan(abs(y1-y2)/abs(x1-x2))
    angle = degree*180/np.pi
    H, W = img.shape[:2]
    rotation_matrix = cv2.getRotationMatrix2D((W/2, H/2), -angle, 1)
    img_rotation = cv2.warpAffine(img, rotation_matrix, (W, H))
    cv2.imwrite('rotated_image.jpg', img_rotation)
Rotation alone will not help. In your test images the tray also has some shear and skew.
What I would suggest is to find the corners of the tray from the intersections of the lines, at least 3 corners, and then compute the affine transformation between those corners and the expected actual corners of the tray, as sketched below.
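A minimal sketch of that last step, assuming img is the tray image loaded above and that you have already extracted three tray corners (top-left, top-right, bottom-left); the corner coordinates and output size below are made-up placeholders:
import cv2
import numpy as np

# three measured tray corners, in (x, y) pixel order (hypothetical values)
src_corners = np.float32([[120, 80], [540, 130], [100, 400]])
# where those corners should land in the straightened, horizontal tray image
dst_corners = np.float32([[0, 0], [450, 0], [0, 320]])

M = cv2.getAffineTransform(src_corners, dst_corners)  # 2x3 affine matrix
straightened = cv2.warpAffine(img, M, (450, 320))     # dsize is (width, height)
cv2.imwrite('straightened_tray.jpg', straightened)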

Calculate new Position of coordinates with numpy

I have a dataset of images for keypoint detection. Each image is labeled with one keypoint (x|y).
I use numpy to flip the images for data augmentation.
I flip an image horizontally with this code:
img = img[:, ::-1]
and vertically with this code:
img = img[::-1]
So far so good. But I also have to recalculate the keypoints (labels), e.g. [85 35].
I know it's basic math, but I haven't come up with a solution.
Thanks in advance.
Use the rotation matrix:
x_new = x_old * np.cos(alpha) - y_old * np.sin(alpha)
y_new = x_old * np.sin(alpha) + y_old * np.cos(alpha)
Alpha is the rotation angle in radians, but I don't know what angle img = img[:, ::-1] corresponds to.
If you are flipping by 180° (a plain vertical or horizontal flip) there is no need for a rotation matrix. Just get the shape of your image.
X = img.shape[1]
Y = img.shape[0]
Recompute X-Position when flipping horizontally.
X_Position_New = X - X_Position_Old
Recompute Y-Position when flipping vertically.
Y_Position_New = Y - Y_Position_Old
If you flip the image horizontally then the pixel that was 85 units from the left will be 85 units from the right. The same goes for the vertical flip, 35 units from the top will be 35 units from the bottom.
So now you can either calculate the location with the help of the size img.shape of your image or you use the fact that you can access the image with negative indexes. So point [85 35] will be point [-85 35] or [width_of_image-85 35]
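A small sketch putting this together (the image size and keypoint below are made up; note it uses W - 1 - x rather than W - x so the result stays a valid 0-based pixel index):
import numpy as np

img = np.zeros((100, 200), dtype=np.uint8)  # hypothetical image: height=100, width=200
kp_x, kp_y = 85, 35                         # hypothetical keypoint (x, y)
H, W = img.shape[:2]

# horizontal flip: mirror the x coordinate
img_h = img[:, ::-1]
kp_h = (W - 1 - kp_x, kp_y)

# vertical flip: mirror the y coordinate
img_v = img[::-1]
kp_v = (kp_x, H - 1 - kp_y)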

How would I achieve this in opencv with an affine transform?

I was wondering how I would replicate what is being done in this image:
To break it down:
Get facial landmarks using dlib (green dots)
Rotate the image so that the eyes are horizontal
Find the midpoint of the face by averaging the left-most and right-most landmarks (blue dot) and center the image on the x-axis
Fix the position along the y-axis by placing the eye center 45% from the top of the image, and the mouth center 25% from the bottom of the image
Right now this is what I have:
I am kind of stuck on step 3, which I think can be done by an affine transform? But I'm completely stumped on step 4, I have no idea how I would achieve it.
Please tell me if you need me to provide the code!
EDIT: After looking at @Gal Dreiman's answer I was able to center the face perfectly, so that the blue dot is in the center of my image.
Although when I implemented the second part of his answer I end up getting something like this:
I see that the points have been transformed to the right places, but it's not the outcome I had desired, as the image is sheared quite dramatically. Any ideas?
EDIT 2:
After switching the x, y coordinates for the center points around, this is what I got:
Regarding your step 3, the easiest way to do that is:
Find the faces in the image:
faces = faceCascade.detectMultiScale(
    gray,
    scaleFactor=1.1,
    minNeighbors=5,
    minSize=(30, 30),
    flags=cv2.cv.CV_HAAR_SCALE_IMAGE
)
For each face calculate the midpoint:
for (x, y, w, h) in faces:
    mid_x = x + int(w/2)
    mid_y = y + int(h/2)
Affine transform the image to center the blue dot you already calculated:
height, width = img.shape[:2]
x_dot = ...
y_dot = ...
dx_dot = int(width/2) - x_dot
dy_dot = int(height/2) - y_dot
M = np.float32([[1,0,dx_dot],[0,1,dy_dot]])
dst = cv2.warpAffine(img, M, (width, height))
Hope it was helpful.
Edit:
Regarding section 4:
In order to stretch (resize) the image, all you have to do is perform an affine transform. To find the transformation matrix, we need three points from the input image and their corresponding locations in the output image.
p_1 = [eye_x, eye_y]
p_2 = [int(width/2), int(height/2)]  # default: center of the image
p_3 = [mouth_x, mouth_y]
target_p_1 = [eye_x, int(eye_y * 0.45)]
target_p_2 = [int(width/2), int(height/2)]  # don't want to change
target_p_3 = [mouth_x, int(mouth_y * 0.75)]
pts1 = np.float32([p_1, p_2, p_3])
pts2 = np.float32([target_p_1, target_p_2, target_p_3])
M = cv2.getAffineTransform(pts1, pts2)
output = cv2.warpAffine(image, M, (width, height))
To clear things up:
eye_x / eye_y is the location of the eye center.
The same applies to mouth_x / mouth_y, for the mouth center.
target_p_1/2/3 are the target points.
Edit 2:
I see you are in trouble; I hope this time my suggestion will work for you.
There is another approach I can think of: you can perform a sort of "crop" of the image by picking 4 points. Let's define them as the 4 points that wrap the face, and change the image perspective according to their new positions:
up_left = [x,y]
up_right = [...]
down_left = [...]
down_right = [...]
pts1 = np.float32([up_left,up_right,down_left,down_right])
pts2 = np.float32([[0,0],[300,0],[0,300],[300,300]])
M = cv2.getPerspectiveTransform(pts1,pts2)
dst = cv2.warpPerspective(img,M,(300,300))
So all you have to do is define those 4 points. My suggestion: calculate the contour around the face (which you already did) and then add (or subtract) delta_x and delta_y to the coordinates, as in the sketch below.
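A rough sketch of how those 4 points could be derived from the landmarks (the landmark array, the margin and the 300x300 output size are assumptions for illustration, not part of the answer):
import cv2
import numpy as np

# hypothetical: landmark_points is an (N, 2) list of (x, y) dlib landmark coordinates
landmarks = np.array(landmark_points, dtype=np.int32)
x, y, w, h = cv2.boundingRect(landmarks)

margin = 20  # expand the face box a little in every direction
up_left    = [x - margin,     y - margin]
up_right   = [x + w + margin, y - margin]
down_left  = [x - margin,     y + h + margin]
down_right = [x + w + margin, y + h + margin]

pts1 = np.float32([up_left, up_right, down_left, down_right])
pts2 = np.float32([[0, 0], [300, 0], [0, 300], [300, 300]])
M = cv2.getPerspectiveTransform(pts1, pts2)
dst = cv2.warpPerspective(img, M, (300, 300))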

Python 2.7.3 + OpenCV 2.4 after rotation window doesn't fit Image

I'm trying to rotate an image by some degrees and then show it in a window.
My idea is to rotate the image and then show it in a new window whose width and height are calculated from the old width and height:
new_width = x * cos(angle) + y * sin(angle)
new_height = y * cos(angle) + x * sin(angle)
I was expecting the result to look like below:
but it turns out the result looks like this:
and my code is here:
#!/usr/bin/env python -tt
#coding:utf-8
import sys
import math
import cv2
import numpy as np
def rotateImage(image, angle):  # parameter angle in degrees
    if len(image.shape) > 2:  # check colorspace
        shape = image.shape[:2]
    else:
        shape = image.shape
    image_center = tuple(np.array(shape)/2)  # rotation center
    radians = math.radians(angle)
    x, y = shape
    print 'x =', x
    print 'y =', y
    new_x = math.ceil(math.cos(radians)*x + math.sin(radians)*y)
    new_y = math.ceil(math.sin(radians)*x + math.cos(radians)*y)
    new_x = int(new_x)
    new_y = int(new_y)
    rot_mat = cv2.getRotationMatrix2D(image_center, angle, 1.0)
    print 'rot_mat =', rot_mat
    result = cv2.warpAffine(image, rot_mat, shape, flags=cv2.INTER_LINEAR)
    return result, new_x, new_y
def show_rotate(im, width, height):
    # width = width/2
    # height = height/2
    # win = cv2.cv.NamedWindow('ro_win', cv2.cv.CV_WINDOW_NORMAL)
    # cv2.cv.ResizeWindow('ro_win', width, height)
    win = cv2.namedWindow('ro_win')
    cv2.imshow('ro_win', im)
    if cv2.waitKey() == '\x1b':
        cv2.destroyWindow('ro_win')
if __name__ == '__main__':
    try:
        im = cv2.imread(sys.argv[1], 0)
    except:
        print '\n', "Can't open image, OpenCV or file missing."
        sys.exit()
    rot, width, height = rotateImage(im, 30.0)
    print width, height
    show_rotate(rot, width, height)
There must be some stupid mistake in my code leading to this problem, but I cannot figure it out...
I also know my code is not pythonic enough :( ..sorry for that..
Can anyone help me?
Best,
bearzk
As BloodyD's answer said, cv2.warpAffine doesn't auto-center the transformed image. Instead, it simply transforms each pixel using the transformation matrix. (This could move pixels anywhere in Cartesian space, including out of the original image area.) Then, when you specify the destination image size, it grabs an area of that size, beginning at (0,0), i.e. the upper left of the original frame. Any parts of your transformed image that don't lie in that region will be cut off.
Here's Python code to rotate and scale an image, with the result centered:
def rotateAndScale(img, scaleFactor=0.5, degreesCCW=30):
    (oldY, oldX) = img.shape  # note: numpy uses (y,x) convention but most OpenCV functions use (x,y)
    M = cv2.getRotationMatrix2D(center=(oldX/2, oldY/2), angle=degreesCCW, scale=scaleFactor)  # rotate about center of image.

    # choose a new image size.
    newX, newY = oldX*scaleFactor, oldY*scaleFactor
    # include this if you want to prevent corners being cut off
    r = np.deg2rad(degreesCCW)
    newX, newY = (abs(np.sin(r)*newY) + abs(np.cos(r)*newX), abs(np.sin(r)*newX) + abs(np.cos(r)*newY))

    # the warpAffine function call, below, basically works like this:
    # 1. apply the M transformation on each pixel of the original image
    # 2. save everything that falls within the upper-left "dsize" portion of the resulting image.
    # So I will find the translation that moves the result to the center of that region.
    (tx, ty) = ((newX-oldX)/2, (newY-oldY)/2)
    M[0,2] += tx  # third column of the matrix holds the translation, which takes effect after rotation.
    M[1,2] += ty

    rotatedImg = cv2.warpAffine(img, M, dsize=(int(newX), int(newY)))
    return rotatedImg
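A quick usage sketch (the file name is a placeholder; the image is loaded as grayscale because the function unpacks img.shape into exactly two values):
img = cv2.imread('input.jpg', 0)
rotated = rotateAndScale(img, scaleFactor=1.0, degreesCCW=45)
cv2.imwrite('rotated.jpg', rotated)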
When you get the rotation matrix like this:
rot_mat = cv2.getRotationMatrix2D(image_center, angle, 1.0)
Your "scale" parameter is set to 1.0, so if you use it to transform your image matrix to your result matrix of the same size, it will necessarily be clipped.
You can instead get a rotation matrix like this:
rot_mat = cv2.getRotationMatrix2D(image_center, angle, 0.5)
that will both rotate and shrink, leaving room around the edges (you can scale it up first so that you will still end up with a big image).
Also, it looks like you are confusing the numpy and OpenCV conventions for image sizes. OpenCV uses (x, y) for image sizes and point coordinates, while numpy uses (y,x). That is probably why you are going from a portrait to landscape aspect ratio.
I tend to be explicit about it like this:
imageHeight = image.shape[0]
imageWidth = image.shape[1]
pointcenter = (imageHeight/2, imageWidth/2)
etc...
Ultimately, this works fine for me:
def rotateImage(image, angle):  # parameter angle in degrees
    height = image.shape[0]
    width = image.shape[1]
    height_big = height * 2
    width_big = width * 2
    image_big = cv2.resize(image, (width_big, height_big))
    image_center = (width_big/2, height_big/2)  # rotation center
    rot_mat = cv2.getRotationMatrix2D(image_center, angle, 0.5)
    result = cv2.warpAffine(image_big, rot_mat, (width_big, height_big), flags=cv2.INTER_LINEAR)
    return result
Update:
Here is the complete script that I executed. Just cv2.imshow("winname", image) and cv2.waitKey() with no arguments to keep the window open:
import cv2
def rotateImage(image, angle):  # parameter angle in degrees
    height = image.shape[0]
    width = image.shape[1]
    height_big = height * 2
    width_big = width * 2
    image_big = cv2.resize(image, (width_big, height_big))
    image_center = (width_big/2, height_big/2)  # rotation center
    rot_mat = cv2.getRotationMatrix2D(image_center, angle, 0.5)
    result = cv2.warpAffine(image_big, rot_mat, (width_big, height_big), flags=cv2.INTER_LINEAR)
    return result
imageOriginal = cv2.imread("/Path/To/Image.jpg")
# this was an iPhone image that I wanted to resize to something manageable to view
# so I knew beforehand that this is an appropriate size
imageOriginal = cv2.resize(imageOriginal, (600,800))
imageRotated= rotateImage(imageOriginal, 45)
cv2.imshow("Rotated", imageRotated)
cv2.waitKey()
Really not a lot there... And you were definitely right to use if __name__ == '__main__': if it is a real module that you're working on.
Well, this question may not be up to date, but I had the same problem and it took me a while to solve it without scaling the original image up and down. I will just post my solution (unfortunately C++ code, but it can easily be ported to Python if needed):
#include <math.h>
#include <opencv2/opencv.hpp>
using namespace cv;

#define PI 3.14159265
#define SIN(angle) sin(angle * PI / 180)
#define COS(angle) cos(angle * PI / 180)

void rotate(const Mat src, Mat &dest, double angle, int borderMode, const Scalar &borderValue){
    int w = src.size().width, h = src.size().height;

    // resize the destination image
    Size2d new_size = Size2d(abs(w * COS((int)angle % 180)) + abs(h * SIN((int)angle % 180)),
                             abs(w * SIN((int)angle % 180)) + abs(h * COS((int)angle % 180)));
    dest = Mat(new_size, src.type());

    // this is our rotation point
    Size2d old_size = src.size();
    Point2d rot_point = Point2d(old_size.width / 2.0, old_size.height / 2.0);

    // and this is the rotation matrix
    // same as in the opencv docs, but in 3x3 form
    double a = COS(angle), b = SIN(angle);
    Mat rot_mat = (Mat_<double>(3,3) << a, b, (1 - a) * rot_point.x - b * rot_point.y,
                                        -1 * b, a, b * rot_point.x + (1 - a) * rot_point.y,
                                        0, 0, 1);

    // next the translation matrix
    double offsetx = (new_size.width - old_size.width) / 2,
           offsety = (new_size.height - old_size.height) / 2;
    Mat trans_mat = (Mat_<double>(3,3) << 1, 0, offsetx, 0, 1, offsety, 0, 0, 1);

    // multiply them: we rotate first, then translate, so the order is important!
    // inverse order, so that the transformations are done right
    Mat affine_mat = Mat(trans_mat * rot_mat).rowRange(0, 2);

    // now just apply the affine transformation matrix
    warpAffine(src, dest, affine_mat, new_size, INTER_LINEAR, borderMode, borderValue);
}
The general solution is to rotate and then translate the rotated picture into the right position. So we create two transformation matrices (the first for the rotation, the second for the translation) and multiply them into the final affine transformation. As the matrix returned by OpenCV's getRotationMatrix2D is only 2x3, I had to create the matrices by hand in 3x3 form so they could be multiplied. Then just take the first two rows and apply the affine transformation.
EDIT: I have created a Gist, because I have needed this functionality too often in different projects. There is also a Python-Version of it: https://gist.github.com/BloodyD/97917b79beb332a65758
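For readers working in Python, here is a rough sketch of the same rotate-then-translate idea (my own port under the same assumptions, not the Gist's exact code):
import cv2
import numpy as np

def rotate_bound(src, angle_deg):
    (h, w) = src.shape[:2]
    rad = np.deg2rad(angle_deg)

    # output canvas large enough to hold the whole rotated image
    new_w = int(abs(w * np.cos(rad)) + abs(h * np.sin(rad)))
    new_h = int(abs(w * np.sin(rad)) + abs(h * np.cos(rad)))

    # 3x3 rotation about the old center, as in the C++ version above
    R = np.vstack([cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0), [0, 0, 1]])
    # 3x3 translation that shifts the result to the center of the new canvas
    T = np.array([[1, 0, (new_w - w) / 2.0],
                  [0, 1, (new_h - h) / 2.0],
                  [0, 0, 1]])

    # rotate first, then translate; keep only the first two rows for warpAffine
    M = np.dot(T, R)[:2]
    return cv2.warpAffine(src, M, (new_w, new_h), flags=cv2.INTER_LINEAR)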
