What I'm trying to do in this example is wrap an image around a circle, like below.
To wrap the image I simply calculated the x,y coordinates using trig.
The problem is that the calculated X and Y positions are rounded to make them integers. This causes the blank pixels seen in the wrapped image above. The x,y positions have to be integers because they are indices into lists.
I've done this again in the code below, but without any images, to make things easier to see. All I've done is create two arrays with binary values, one array black and the other white, then wrapped one onto the other.
The output of the code is:
import math as m
from PIL import Image  # only used for showing output as image

width = 254.0
height = 24.0
Ro = 40.0

img = [[1 for x in range(int(width))] for y in range(int(height))]
cir = [[0 for x in range(int(Ro * 2))] for y in range(int(Ro * 2))]

def shom_im(img):  # for showing data as image
    list_image = [item for sublist in img for item in sublist]
    new_image = Image.new("1", (len(img[0]), len(img)))
    new_image.putdata(list_image)
    new_image.show()

increment = m.radians(360 / width)
rad = Ro - 0.5

for i, row in enumerate(img):
    hyp = rad - i
    for j, column in enumerate(row):
        alpha = j * increment
        x = m.cos(alpha) * hyp + rad
        y = m.sin(alpha) * hyp + rad
        # put value from original image to its position in new image
        cir[int(round(y))][int(round(x))] = img[i][j]

shom_im(cir)
I later found out about the Midpoint Circle Algorithm, but I had worse results with that:
from PIL import Image  # only used for showing output as image

width, height = 254, 24
ro = 40

img = [[(0, 0, 0, 1) for x in range(int(width))]
       for y in range(int(height))]
cir = [[(0, 0, 0, 255) for x in range(int(ro * 2))] for y in range(int(ro * 2))]

def shom_im(img):  # for showing data as image
    list_image = [item for sublist in img for item in sublist]
    new_image = Image.new("RGBA", (len(img[0]), len(img)))
    new_image.putdata(list_image)
    new_image.show()

def putpixel(x0, y0):
    global cir
    cir[y0][x0] = (255, 255, 255, 255)

def drawcircle(x0, y0, radius):
    x = radius
    y = 0
    err = 0
    while (x >= y):
        putpixel(x0 + x, y0 + y)
        putpixel(x0 + y, y0 + x)
        putpixel(x0 - y, y0 + x)
        putpixel(x0 - x, y0 + y)
        putpixel(x0 - x, y0 - y)
        putpixel(x0 - y, y0 - x)
        putpixel(x0 + y, y0 - x)
        putpixel(x0 + x, y0 - y)
        y += 1
        err += 1 + 2 * y
        if (2 * (err - x) + 1 > 0):
            x -= 1
            err += 1 - 2 * x

for i, row in enumerate(img):
    rad = ro - i
    drawcircle(int(ro - 1), int(ro - 1), rad)

shom_im(cir)
Can anybody suggest a way to eliminate the blank pixels?
You are having problems filling up your circle because you are approaching this from the wrong way – quite literally.
When mapping from a source to a target, you need to fill your target and map each target pixel back into the source image. Then there is no chance at all that you miss a pixel, and, equally, you will never draw (nor look up) a pixel more than once.
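In pseudo-Python the core loop looks something like this (just a sketch; map_target_to_source is a hypothetical stand-in for whatever inverse transform you use):

# iterate over the *target*, look each pixel up in the *source*
for ty in range(target_height):
    for tx in range(target_width):
        sx, sy = map_target_to_source(tx, ty)  # hypothetical inverse transform
        if 0 <= sx < source_width and 0 <= sy < source_height:
            target[ty][tx] = source[int(sy)][int(sx)]

Every target pixel is visited exactly once, so no holes can appear.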
The following is a bit rough-and-ready, it only serves as a concept example. I first wrote some code to draw a filled circle, top to bottom. Then I added some more code to remove the center part (and added a variable Ri, for "inner radius"). This leads to a solid ring, where all pixels are only drawn once: top to bottom, left to right.
How you exactly draw the ring is not actually important! I used trig at first because I thought of re-using the angle bit, but it can be done with Pythagoras' theorem as well, and even with Bresenham's circle routine. All you need to keep in mind is that you iterate over the target rows and columns, not the source. This provides actual x,y coordinates that you can feed into the remapping procedure.
With the above done and working, I wrote the trig functions to translate the coordinates at which I would put a pixel back into the original image. For this, I created a test image containing some text:
and a good thing that was, too, as in the first attempt I got the text twice (once left, once right) and mirrored – that needed a few minor tweaks. Also note the background grid. I added that to check if the 'top' and 'bottom' lines – the outermost and innermost circles – got drawn correctly.
Running my code with this image and Ro,Ri at 100 and 50, I get this result:
You can see that the trig functions make it start at the rightmost point, move clockwise, and have the top of the image pointing outwards. All can be trivially adjusted, but this way it mimics the orientation that you want your image drawn.
This is the result with your iris-image, using 33 for the inner radius:
and here is a nice animation, showing the stability of the mapping:
Finally, then, my code is:
import math as m
from PIL import Image

Ro = 100.0
Ri = 50.0

# img = [[1 for x in range(int(width))] for y in range(int(height))]
cir = [[0 for x in range(int(Ro * 2))] for y in range(int(Ro * 2))]

# image = Image.open('0vWEI.png')
image = Image.open('this-is-a-test.png')
# data = image.convert('RGB')
pixels = image.load()
width, height = image.size

def shom_im(img):  # for showing data as image
    list_image = [item for sublist in img for item in sublist]
    new_image = Image.new("RGB", (len(img[0]), len(img)))
    new_image.putdata(list_image)
    new_image.save("result1.png", "PNG")
    new_image.show()

for i in range(int(Ro)):
    # outer_radius = Ro*m.cos(m.asin(i/Ro))
    outer_radius = m.sqrt(Ro*Ro - i*i)
    for j in range(-int(outer_radius), int(outer_radius)):
        if i < Ri:
            # inner_radius = Ri*m.cos(m.asin(i/Ri))
            inner_radius = m.sqrt(Ri*Ri - i*i)
        else:
            inner_radius = -1
        if j < -inner_radius or j > inner_radius:
            # this is the destination
            # solid:
            # cir[int(Ro-i)][int(Ro+j)] = (255,255,255)
            # cir[int(Ro+i)][int(Ro+j)] = (255,255,255)
            # textured:
            x = Ro+j
            y = Ro-i
            # calculate source
            angle = m.atan2(y-Ro, x-Ro)/2
            distance = m.sqrt((y-Ro)*(y-Ro) + (x-Ro)*(x-Ro))
            distance = m.floor((distance-Ri+1)*(height-1)/(Ro-Ri))
            # if distance >= height:
            #     distance = height-1
            cir[int(y)][int(x)] = pixels[int(width*angle/m.pi) % width, height-distance-1]
            y = Ro+i
            # calculate source
            angle = m.atan2(y-Ro, x-Ro)/2
            distance = m.sqrt((y-Ro)*(y-Ro) + (x-Ro)*(x-Ro))
            distance = m.floor((distance-Ri+1)*(height-1)/(Ro-Ri))
            # if distance >= height:
            #     distance = height-1
            cir[int(y)][int(x)] = pixels[int(width*angle/m.pi) % width, height-distance-1]

shom_im(cir)
The commented-out lines draw a solid white ring. Note the various tweaks here and there to get the best result. For instance, the distance is measured from the center of the ring, and so returns a low value for close to the center and the largest values for the outside of the circle. Mapping that directly back onto the target image would display the text with its top "inwards", pointing to the inner hole. So I inverted this mapping with height - distance - 1, where the -1 is to make it map from 0 to height again.
A similar fix is in the calculation of distance itself; without the tweaks Ri+1 and height-1 either the innermost or the outermost row would not get drawn, indicating that the calculation is just one pixel off (which was exactly the purpose of that grid).
I think what you need is a noise filter. There are many implementations, of which I think a Gaussian filter would give a good result. You can find a list of filters here. If it gets blurred too much:
keep your first calculated image
calculate the filtered image
copy the fixed pixels from the filtered image into the first calculated image (see the sketch below)
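For the filtering step, a rough sketch with scipy.ndimage.gaussian_filter (one implementation among many; this assumes the black-and-white case, with cir converted to a numpy float array) could look like:

import numpy as np
from scipy.ndimage import gaussian_filter

cir_arr = np.array(cir, dtype=float)         # the first calculated image
blurred = gaussian_filter(cir_arr, sigma=1)  # the filtered image
mask = (cir_arr == 0)                        # the blank (missing) pixels
cir_arr[mask] = blurred[mask]                # copy the fixed pixels back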
Here is a crude average filter written by hand:
cir_R = int(Ro*2)                   # outer circle 2*r
inner_r = int(Ro - 0.5 - len(img))  # inner circle r

for i in range(1, cir_R-1):
    for j in range(1, cir_R-1):
        if cir[i][j] == 0:              # missing pixel
            dx = int(i-Ro)
            dy = int(j-Ro)
            pix_r2 = dx*dx + dy*dy      # squared distance to center
            if pix_r2 <= Ro*Ro and pix_r2 >= inner_r*inner_r:
                cir[i][j] = (cir[i-1][j] + cir[i+1][j] + cir[i][j-1] +
                             cir[i][j+1]) / 4

shom_im(cir)
and the result:
This basically scans between the two radii, checks for missing pixels, and replaces each with the average of its 4 adjacent pixels. In this black-and-white case they are all white.
Hope it helps!
Related
Essentially, my original image has N instances of a certain object. I have the bounding box coordinates and the class for all of them in a text file. This is basically a dataset for YoloV3 and darknet. I want to generate additional images by slicing the original one in such a way that each slice contains at least 1 instance of one of those objects; if it does, I save the image and the new bounding box coordinates of the objects in that image.
The following is the code for slicing the image:
x1 = random.randint(0, 1200)
width = random.randint(0, 800)
y1 = random.randint(0, 1200)
height = random.randint(30, 800)
slice_img = img[x1:x1+width, y1:y1+height]
plt.imshow(slice_img)
plt.show()
My next step is to use template matching to find if my sliced image is in the original one:
w, h = slice_img.shape[:-1]
res = cv2.matchTemplate(img, slice_img, cv2.TM_CCOEFF_NORMED)
threshold = 0.6
loc = np.where(res >= threshold)
for pt in zip(*loc[::-1]):  # Switch columns and rows
    cv2.rectangle(img, pt, (pt[0] + w, pt[1] + h), (0, 0, 255), 5)
cv2.imwrite('result.png', img)
At this stage, I am quite lost and not sure how to proceed any further.
Ultimately, I need many new images with corresponding text files containing the class and coordinates. Any advice would be appreciated. Thank you.
P.S I cannot share my images with you, unfortunately.
Template matching is way overkill for this. Template matching essentially slides a kernel image over your main image and compares the pixels of each, performing many, many computations. There's no need to search the image, because you already know where the objects are within it. Essentially, you are trying to determine whether one rectangle (the bounding box for an object) overlaps sufficiently with the slice, and you know the exact coordinates of each rectangle. Thus, it's a geometry problem rather than a computer vision problem.
(As an aside: the correct term for what you are calling a slice would probably be crop; slice generally means taking an N-dimensional array (say 3 x 4 x 5) and selecting a single index along one dimension to get an (N-1)-dimensional subset, e.g. taking index 0 on dimension 0 gives a 4 x 5 array.)
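To make the distinction concrete, a quick numpy illustration:

import numpy as np

a = np.zeros((3, 4, 5))
sliced = a[0]             # select one index on dimension 0: shape (4, 5), i.e. N-1 dimensional
cropped = a[:, 1:3, 2:4]  # take subranges: shape (3, 2, 2), still N dimensional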
Here's a brief example of how you might do this. Let x1, x2, y1, y2 be the min and max x and y coordinates of the crop you generate, and ox1, ox2, oy1, oy2 the min and max x and y coordinates of an object:
NO_SUCCESSFUL_CROPS = True
while NO_SUCCESSFUL_CROPS:
    # Generate crop
    x1 = random.randint(0, 1200)
    width = random.randint(0, 800)
    y1 = random.randint(0, 1200)
    height = random.randint(30, 800)
    x2 = x1 + width
    y2 = y1 + height
    # for each bounding box,
    # check if at least (nominally) 70% of the object is within the crop
    threshold = 0.7
    for bbox in all_objects:
        # assign bbox to ox1 ox2 oy1 oy2
        ox1, ox2, oy1, oy2 = bbox
        # compute percentage of bbox that is within the crop
        minx = max(ox1, x1)
        miny = max(oy1, y1)
        maxx = min(ox2, x2)
        maxy = min(oy2, y2)
        area_in_crop = max(0, maxx-minx) * max(0, maxy-miny)  # zero if there is no overlap
        area_of_bbox = (ox2-ox1)*(oy2-oy1)
        ratio = area_in_crop / area_of_bbox
        if ratio > threshold:
            # break loop
            NO_SUCCESSFUL_CROPS = False
            # crop image as above
            crop_image = image[y1:y2, x1:x2]  # y first: in a numpy/OpenCV array, y is the row and x the column
            cv2.imwrite("output_file.png", crop_image)
            # shift bbox coords since (x1,y1) is the new (0,0) pixel in crop_image
            ox1 -= x1
            ox2 -= x1
            oy1 -= y1
            oy2 -= y1
            break  # no need to continue (although you could alternately continue until you have N crops, or even make sure you get one crop with each object)
Extracting table data from digital PDFs has been simple using camelot and tabula. However, the solution doesn't work with scanned images of document pages, specifically when the table doesn't have borders and inner grids. I have been trying to generate the vertical and horizontal lines using OpenCV. However, since the scanned images have slight rotation angles, it is difficult to proceed with this approach.
How can we utilize OpenCV to generate grids (horizontal and vertical lines) and borders for the scanned document page which contains table data (along with paragraphs of text)? If this is feasible, how to nullify the rotation angle of the scanned image?
I wrote some code to estimate the horizontal lines from the printed letters in the page. The same could be done for vertical ones, I guess. The code below rests on some general assumptions; here are the basic steps in pseudo-code style:
prepare picture for contour detection
do contour detection
we assume most contours are letters
calc mean width of all contours
calc mean area of all contours
filter all contours with two conditions:
a) contour (letter) heights < meanHigh * 2
b) contour area > 4/5 meanArea
calc center points of all remaining contours
assume we have line regions (bins)
list all center points which are inside a region
do linear regression on the points of each region
save slope and intercept
calc mean slope and intercept
Here is the full code:
import cv2
import numpy as np
from scipy import stats

def resizeImageByPercentage(img, scalePercent=60):
    width = int(img.shape[1] * scalePercent / 100)
    height = int(img.shape[0] * scalePercent / 100)
    dim = (width, height)
    # resize image
    return cv2.resize(img, dim, interpolation=cv2.INTER_AREA)

def calcAverageContourWithAndHeigh(contourList):
    hs = list()
    ws = list()
    for cnt in contourList:
        (x, y, w, h) = cv2.boundingRect(cnt)
        ws.append(w)
        hs.append(h)
    return np.mean(ws), np.mean(hs)

def calcAverageContourArea(contourList):
    areaList = list()
    for cnt in contourList:
        # minAreaRect returns (center, (w, h), angle); the rect area is w * h
        (_, (w, h), _) = cv2.minAreaRect(cnt)
        areaList.append(w * h)
    return np.mean(areaList)

def calcCentroid(contour):
    houghMoments = cv2.moments(contour)
    # calculate x,y coordinate of centroid
    if houghMoments["m00"] != 0:  # case no contour could be calculated
        cX = int(houghMoments["m10"] / houghMoments["m00"])
        cY = int(houghMoments["m01"] / houghMoments["m00"])
    else:
        # set values as what you need in the situation
        cX, cY = -1, -1
    return cX, cY

def getCentroidWhenSizeInRange(contourList, letterSizeWidth, letterSizeHigh, deltaOffset, minLetterArea=10.0):
    centroidList = list()
    for cnt in contourList:
        (x, y, w, h) = cv2.boundingRect(cnt)
        (_, (rw, rh), _) = cv2.minAreaRect(cnt)
        area = rw * rh
        # calc diff
        diffW = abs(w - letterSizeWidth)
        diffH = abs(h - letterSizeHigh)
        # threshold A: almost smaller than mean letter size +- offset
        # when almost letterSize
        if diffW < deltaOffset and diffH < deltaOffset:
            # threshold B: > min area
            if area > minLetterArea:
                cX, cY = calcCentroid(cnt)
                if cX != -1 and cY != -1:
                    centroidList.append((cX, cY))
    return centroidList
DEBUGMODE = True
# read image; do git clone https://github.com/WZBSocialScienceCenter/pdftabextract.git for the example
img = cv2.imread('pdftabextract/examples/catalogue_30s/data/ALA1934_RR-excerpt.pdf-2_1.png')
# get some basic infos
imgHeigh, imgWidth, imgChannelAmount = img.shape
if DEBUGMODE:
    cv2.imwrite("img00original.jpg", resizeImageByPercentage(img, 30))
    cv2.imshow("original", img)

# prepare img
imgGrey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# apply Gaussian filter
imgGaussianBlur = cv2.GaussianBlur(imgGrey, (5, 5), 0)
# make binary img, black or white
_, imgBinThres = cv2.threshold(imgGaussianBlur, 130, 255, cv2.THRESH_BINARY)

## detect contours
contours, _ = cv2.findContours(imgBinThres, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

# we get some letter parameters
averageLetterWidth, averageLetterHigh = calcAverageContourWithAndHeigh(contours)
threshold1AllowedLetterSizeOffset = averageLetterHigh * 2  # double size
averageContourAreaSizeOfMinRect = calcAverageContourArea(contours)
threshHold2MinArea = 4 * averageContourAreaSizeOfMinRect / 5  # 4/5 * mean
print("mean letter Width: ", averageLetterWidth)
print("mean letter High: ", averageLetterHigh)
print("threshold 1 tolerance: ", threshold1AllowedLetterSizeOffset)
print("mean letter area ", averageContourAreaSizeOfMinRect)
print("threshold 2 min letter area ", threshHold2MinArea)

# we get all centroids of letter-sized contours; the others we ignore
centroidList = getCentroidWhenSizeInRange(contours, averageLetterWidth, averageLetterHigh, threshold1AllowedLetterSizeOffset, threshHold2MinArea)

if DEBUGMODE:
    # debug print all centers:
    imgFilteredCenter = img.copy()
    for cX, cY in centroidList:
        # draw in red color as BGR
        cv2.circle(imgFilteredCenter, (cX, cY), 5, (0, 0, 255), -1)
    cv2.imwrite("img01letterCenters.jpg", resizeImageByPercentage(imgFilteredCenter, 30))
    cv2.imshow("letterCenters", imgFilteredCenter)

# we estimate a bin width
amountPixelFreeSpace = averageLetterHigh  # TODO get better estimate out of histogram
estimatedBinWidth = round(averageLetterHigh + amountPixelFreeSpace)  # TODO round better?
binCollection = dict()  # range(0,imgHeigh,estimatedBinWidth)

# we separate the center points into bins by y coordinate
for i in range(0, imgHeigh, estimatedBinWidth):
    listCenterPointsInBin = list()
    yMin = i
    yMax = i + estimatedBinWidth
    for cX, cY in centroidList:
        if yMin < cY < yMax:  # if it fits in the bin
            listCenterPointsInBin.append((cX, cY))
    binCollection[i] = listCenterPointsInBin
# we assume all points in a bin are on one line
# model = slope (x) + intercept
# model = m (x) + n
mList = list()  # slope abs in img
nList = list()  # intercept abs in img
nListRelative = list()  # intercept relative to bin start
minAmountRegressionElements = 12  # is also alias for letter amount we expect
# we do a regression for the points of every bin
for startYOfBin, values in binCollection.items():
    # we reform values
    xValues = []  # TODO use a shorter transform
    yValues = []
    for x, y in values:
        xValues.append(x)
        yValues.append(y)
    # we assume a min limit of points in a bin
    if len(xValues) >= minAmountRegressionElements:
        slope, intercept, r, p, std_err = stats.linregress(xValues, yValues)
        mList.append(slope)
        nList.append(intercept)
        # we calc the relative intercept
        nRelativeToBinStart = intercept - startYOfBin
        nListRelative.append(nRelativeToBinStart)

if DEBUGMODE:
    # we debug print all lines in one picture
    imgLines = img.copy()
    colorOfLine = (0, 255, 0)  # green
    for i in range(0, len(mList)):
        slope = mList[i]
        intercept = nList[i]
        startPoint = (0, int(intercept))  # TODO round better?
        endPointY = int(slope * imgWidth + intercept)
        if endPointY < 0:
            endPointY = 0
        endPoint = (imgWidth, endPointY)
        cv2.line(imgLines, startPoint, endPoint, colorOfLine, 2)
    cv2.imwrite("img02lines.jpg", resizeImageByPercentage(imgLines, 30))
    cv2.imshow("linesOfLetters ", imgLines)

# we assume in the mean we got it right
meanIntercept = np.mean(nListRelative)
meanSlope = np.mean(mList)
print("meanIntercept :", meanIntercept)
print("meanSlope ", meanSlope)
# TODO calc angle with math.atan(slope) ...

if DEBUGMODE:
    cv2.waitKey(0)
original:
center point of letters:
lines:
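As for the TODO at the end: to actually nullify the skew, a short follow-on sketch (building on the variables from the script above; the sign of the angle may need flipping depending on your scan) could be:

# convert the mean slope of the detected text lines into an angle in degrees
angleDegrees = np.degrees(np.arctan(meanSlope))
# rotate the page back around its center to deskew it
rotationMat = cv2.getRotationMatrix2D((imgWidth / 2, imgHeigh / 2), angleDegrees, 1.0)
imgDeskewed = cv2.warpAffine(img, rotationMat, (imgWidth, imgHeigh))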
I had the same problem some time ago, and this tutorial is the solution to that. It explains using pdftabextract, a Python library by Markus Konrad that leverages OpenCV's Hough transform to detect the lines, and it works even if the scanned document is a bit tilted. The tutorial walks you through parsing a 1920s German newspaper.
I'm trying to work on video stabilization using Python and template matching via skimage. The code is supposed to track a single point through the whole video, but the tracking is awfully imprecise, and I suspect it's not even working correctly.
This is the track_point function, which is supposed to take a video and the coordinates of a point as input and return an array of tracked points for each frame:
import numpy as np
from skimage.feature import match_template
from skimage.color import rgb2gray

def track_point(video, x, y, patch_size=4, search_size=40):
    length, height, width, _ = video.shape
    frame = rgb2gray(np.squeeze(video[1, :, :, :]))  # convert image to grayscale
    x1 = int(max(1, x - patch_size / 2))
    y1 = int(max(1, y - patch_size / 2))
    x2 = int(min(width, x + patch_size / 2 - 1))
    y2 = int(min(height, y + patch_size / 2 - 1))
    template = frame[y1:y2, x1:x2]  # cut the reference patch (template) from the first frame
    track_x = [x]
    track_y = [y]
    # plt.imshow(template)
    half = int(search_size / 2)
    for i in range(1, length):
        prev_x = int(track_x[i-1])
        prev_y = int(track_y[i-1])
        frame = rgb2gray(np.squeeze(video[i, :, :, :]))  # extract current frame and convert it to grayscale
        # cut out a region of search_size x search_size from 'frame' with its center at the point's previous position (i-1)
        image = frame[prev_x-half:prev_x+half, prev_y-half:prev_y+half]
        # compare the region to the template using match_template
        result = match_template(image, template, pad_input=False, mode='constant', constant_values=0)
        # select the best match (maximum) and determine its position; update x and y and append the new values to track_x, track_y
        ij = np.unravel_index(np.argmax(result), result.shape)
        x, y = ij[::-1]
        x += x1
        y += y1
        track_x.append(x)
        track_y.append(y)
    return track_x, track_y
And this is how the function is called:
points = track_point(video, point[0], point[1])
# Draw trajectory on top of the first frame from video
image = np.squeeze(video[1, :, :, :])
figure = plt.figure()
plt.gca().imshow(image)
plt.gca().plot(points[0], points[1])
I expected the plot to be fairly regular, since the video isn't that shaky, but it's not.
For some reason the plot covers almost all of the coordinates of the search region.
EDIT: Here's the link for the video: https://upload-video.net/a11073n9Y11-noau
What am I doing wrong?
I need to synthesize many fisheye images with different intrinsic matrices based on normal pictures. I am following the method mentioned in this paper.
Ideally, if the algorithm is correct, the fisheye effect should look like this:
But when I use my algorithm to convert a picture, it looks like this:
So below is my code's flow:
1. First, I read the raw image:
from scipy import ndimage
import numpy as np

def read_img(image):
    img = ndimage.imread(image)  # this returns an array of [R,G,B,255] pixels
    img_shape = img.shape
    print(img_shape)
    # get the pixel coordinates
    w = img_shape[1]  # the width
    # print(w)
    h = img_shape[0]  # the height
    # print(h)
    uv_coord = []
    for u in range(w):
        for v in range(h):
            uv_coord.append([float(u), float(v)])  # this records the coords in the fashion of [x1,y1],[x1,y2],[x1,y3]...
    return np.array(uv_coord)
Then, based on the paper:
r(θ) = k1θ + k2θ^3 + k3θ^5 + k4θ^7, (1)
where the k's are the distortion coefficients.
Given pixel coordinates (x, y) in the pinhole projection image, the corresponding image coordinates (x', y') in the fisheye can be computed as:
x' = r(θ) cos(ϕ), y' = r(θ) sin(ϕ), (2)
where ϕ = arctan((y − y0)/(x − x0)), and (x0, y0) are the coordinates of the principal point in the pinhole projection image.
The image coordinates (x', y') are then converted into pixel coordinates (xf, yf):
xf = mu * x' + u0, yf = mv * y' + v0, (3)
where (u0, v0) are the coordinates of the principal point in the fisheye image, and mu, mv denote the number of pixels per unit distance in the horizontal and vertical directions. So I am guessing these are just [fx, fy] from the intrinsic matrix, and u0, v0 are [cx, cy].
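To check my understanding of equations (1)-(3), here is a minimal sketch for a single point (only illustrative; pinhole_to_fisheye is my own hypothetical helper, and treating θ as the arctangent of the distance to the principal point over a focal length normalized to 1 is my assumption):

import math

def pinhole_to_fisheye(x, y, x0, y0, k, mu, mv, u0, v0):
    phi = math.atan2(y - y0, x - x0)
    theta = math.atan(math.hypot(x - x0, y - y0))  # assumption: focal length normalized to 1
    # eq. (1): r(theta) = k1*theta + k2*theta**3 + k3*theta**5 + k4*theta**7
    r = k[0]*theta + k[1]*theta**3 + k[2]*theta**5 + k[3]*theta**7
    # eq. (2): image coordinates in the fisheye
    xp, yp = r * math.cos(phi), r * math.sin(phi)
    # eq. (3): back to pixel coordinates
    return mu * xp + u0, mv * yp + v0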
def add_distortion(sourceUV, dmatrix, Kmatrix):
    '''This function remaps the pixel coords of the given original image.
    input arguments:
    dmatrix -- the distortion coefficients [k1, k2, k3, k4] for tweaking purposes
    Kmatrix -- [fx, fy, cx, cy, s]'''
    u = sourceUV[:, 0]  # width in x
    v = sourceUV[:, 1]  # height in y
    rho = np.sqrt(u**2 + v**2)
    # get theta
    theta = np.arctan(rho, np.full_like(u, 1))
    # rho_mat = np.array([rho, rho**3, rho**5, rho**7])
    rho_mat = np.array([theta, theta**3, theta**5, theta**7])
    # get rho(theta) = k1*theta + k2*theta**3 + k3*theta**5 + k4*theta**7
    rho_d = dmatrix @ rho_mat
    # get phi
    phi = np.arctan2((v - Kmatrix[3]), (u - Kmatrix[2]))
    xd = rho_d * np.cos(phi)
    yd = rho_d * np.sin(phi)
    # convert the coords from the image plane back to pixel coords
    ud = Kmatrix[0] * (xd + Kmatrix[4] * yd) + Kmatrix[2]
    vd = Kmatrix[1] * yd + Kmatrix[3]
    return np.column_stack((ud, vd))
Then, after obtaining the distorted coordinates, I move the pixels in this way, which is where I think the problem might be:
import cv2
from PIL import Image

def main():
    image_name = "original.png"
    img = cv2.imread(image_name)
    img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)  # cv2 reads the image as BGR
    w = img.shape[1]
    h = img.shape[0]
    uv_coord = read_img(image_name)
    # for adding distortion
    dmatrix = [-0.391942708316175, 0.012746418822063, -0.001374061848026, 0.005349692659231]
    # the intrinsic matrix of the original picture
    Kmatrix = np.array([9.842439e+02, 9.808141e+02, 1392/2, 2.331966e+02, 0.000000e+00])
    # Kmatrix = np.array([2234.23470710156, 2223.78349134123, 947.511596277837, 647.103139639432, -3.20443253476976])  # the distorted intrinsics
    uv = add_distortion(uv_coord, dmatrix, Kmatrix)
    i = 0
    dstimg = np.zeros_like(img)
    for x in range(w):
        for y in range(h):
            if i > (512 * 1392 - 1):
                break
            xu = uv[i][0]
            yu = uv[i][1]
            i += 1
            # if the new pixel is in bounds, copy the source pixel to the destination pixel
            if 0 <= xu < img.shape[1] and 0 <= yu < img.shape[0]:
                dstimg[int(yu)][int(xu)] = img[int(y)][int(x)]
    img = Image.fromarray(dstimg, 'RGB')
    img.save('my.png')
    img.show()

if __name__ == '__main__':
    main()
However, this code does not perform the way I want. Could you please help me debug it? I have spent 3 days on it but still cannot see the problem. Thanks!!
I'm trying to rotate an image by some number of degrees and then show it in a window.
My idea is to rotate it and then show it in a new window whose width and height are calculated from the old width and height:
new_width = x * cos angle + y * sin angle
new_height = y * cos angle + x * sin angle
I was expecting the result to look like below:
but it turns out the result looks like this:
and my code is here:
#!/usr/bin/env python -tt
# coding:utf-8

import sys
import math
import cv2
import numpy as np

def rotateImage(image, angle):  # parameter angle in degrees
    if len(image.shape) > 2:  # check colorspace
        shape = image.shape[:2]
    else:
        shape = image.shape
    image_center = tuple(np.array(shape)/2)  # rotation center
    radians = math.radians(angle)
    x, y = im.shape
    print 'x =', x
    print 'y =', y
    new_x = math.ceil(math.cos(radians)*x + math.sin(radians)*y)
    new_y = math.ceil(math.sin(radians)*x + math.cos(radians)*y)
    new_x = int(new_x)
    new_y = int(new_y)
    rot_mat = cv2.getRotationMatrix2D(image_center, angle, 1.0)
    print 'rot_mat =', rot_mat
    result = cv2.warpAffine(image, rot_mat, shape, flags=cv2.INTER_LINEAR)
    return result, new_x, new_y

def show_rotate(im, width, height):
    # width = width/2
    # height = height/2
    # win = cv2.cv.NamedWindow('ro_win', cv2.cv.CV_WINDOW_NORMAL)
    # cv2.cv.ResizeWindow('ro_win', width, height)
    win = cv2.namedWindow('ro_win')
    cv2.imshow('ro_win', im)
    if cv2.waitKey() == '\x1b':
        cv2.destroyWindow('ro_win')

if __name__ == '__main__':
    try:
        im = cv2.imread(sys.argv[1], 0)
    except:
        print '\n', "Can't open image, OpenCV or file missing."
        sys.exit()
    rot, width, height = rotateImage(im, 30.0)
    print width, height
    show_rotate(rot, width, height)
There must be some stupid mistake in my code leading to this problem, but I cannot figure it out...
and I know my code is not pythonic enough :( ..sorry for that..
Can anyone help me?
Best,
bearzk
As BloodyD's answer said, cv2.warpAffine doesn't auto-center the transformed image. Instead, it simply transforms each pixel using the transformation matrix. (This could move pixels anywhere in Cartesian space, including out of the original image area.) Then, when you specify the destination image size, it grabs an area of that size, beginning at (0,0), i.e. the upper left of the original frame. Any parts of your transformed image that don't lie in that region will be cut off.
Here's Python code to rotate and scale an image, with the result centered:
def rotateAndScale(img, scaleFactor=0.5, degreesCCW=30):
    (oldY, oldX) = img.shape  # note: numpy uses (y,x) convention but most OpenCV functions use (x,y)
    M = cv2.getRotationMatrix2D(center=(oldX/2, oldY/2), angle=degreesCCW, scale=scaleFactor)  # rotate about center of image.

    # choose a new image size.
    newX, newY = oldX*scaleFactor, oldY*scaleFactor
    # include this if you want to prevent corners being cut off
    r = np.deg2rad(degreesCCW)
    newX, newY = (abs(np.sin(r)*newY) + abs(np.cos(r)*newX), abs(np.sin(r)*newX) + abs(np.cos(r)*newY))

    # the warpAffine function call, below, basically works like this:
    # 1. apply the M transformation on each pixel of the original image
    # 2. save everything that falls within the upper-left "dsize" portion of the resulting image.

    # so I will find the translation that moves the result to the center of that region.
    (tx, ty) = ((newX-oldX)/2, (newY-oldY)/2)
    M[0, 2] += tx  # third column of matrix holds translation, which takes effect after rotation.
    M[1, 2] += ty

    rotatedImg = cv2.warpAffine(img, M, dsize=(int(newX), int(newY)))
    return rotatedImg
When you get the rotation matrix like this:
rot_mat = cv2.getRotationMatrix2D(image_center, angle, 1.0)
Your "scale" parameter is set to 1.0, so if you use it to transform your image matrix to your result matrix of the same size, it will necessarily be clipped.
You can instead get a rotation matrix like this:
rot_mat = cv2.getRotationMatrix2D(image_center, angle, 0.5)
that will both rotate and shrink, leaving room around the edges (you can scale it up first so that you will still end up with a big image).
Also, it looks like you are confusing the numpy and OpenCV conventions for image sizes. OpenCV uses (x, y) for image sizes and point coordinates, while numpy uses (y,x). That is probably why you are going from a portrait to landscape aspect ratio.
I tend to be explicit about it like this:
imageHeight = image.shape[0]
imageWidth = image.shape[1]
pointcenter = (imageWidth/2, imageHeight/2)
etc...
Ultimately, this works fine for me:
def rotateImage(image, angle):  # parameter angle in degrees
    height = image.shape[0]
    width = image.shape[1]
    height_big = height * 2
    width_big = width * 2
    image_big = cv2.resize(image, (width_big, height_big))
    image_center = (width_big/2, height_big/2)  # rotation center
    rot_mat = cv2.getRotationMatrix2D(image_center, angle, 0.5)
    result = cv2.warpAffine(image_big, rot_mat, (width_big, height_big), flags=cv2.INTER_LINEAR)
    return result
Update:
Here is the complete script that I executed. Just cv2.imshow("winname", image) and cv2.waitKey() with no arguments to keep it open:
import cv2

def rotateImage(image, angle):  # parameter angle in degrees
    height = image.shape[0]
    width = image.shape[1]
    height_big = height * 2
    width_big = width * 2
    image_big = cv2.resize(image, (width_big, height_big))
    image_center = (width_big/2, height_big/2)  # rotation center
    rot_mat = cv2.getRotationMatrix2D(image_center, angle, 0.5)
    result = cv2.warpAffine(image_big, rot_mat, (width_big, height_big), flags=cv2.INTER_LINEAR)
    return result

imageOriginal = cv2.imread("/Path/To/Image.jpg")
# this was an iPhone image that I wanted to resize to something manageable to view
# so I knew beforehand that this is an appropriate size
imageOriginal = cv2.resize(imageOriginal, (600, 800))
imageRotated = rotateImage(imageOriginal, 45)
cv2.imshow("Rotated", imageRotated)
cv2.waitKey()
Really not a lot there... And you were definitely right to use if __name__ == '__main__': if it is a real module that you're working on.
Well, this question seems not up-to-date, but I had the same problem and took a while to solve it without scaling the original image up and down. I will just post my solution (unfortunately C++ code, but it could be easily ported to Python if needed):
#include <math.h>

#define PI 3.14159265
#define SIN(angle) sin(angle * PI / 180)
#define COS(angle) cos(angle * PI / 180)

void rotate(const Mat src, Mat &dest, double angle, int borderMode, const Scalar &borderValue){
    int w = src.size().width, h = src.size().height;

    // resize the destination image
    Size2d new_size = Size2d(abs(w * COS((int)angle % 180)) + abs(h * SIN((int)angle % 180)),
                             abs(w * SIN((int)angle % 180)) + abs(h * COS((int)angle % 180)));
    dest = Mat(new_size, src.type());

    // this is our rotation point
    Size2d old_size = src.size();
    Point2d rot_point = Point2d(old_size.width / 2.0, old_size.height / 2.0);

    // and this is the rotation matrix
    // same as in the opencv docs, but in 3x3 form
    double a = COS(angle), b = SIN(angle);
    Mat rot_mat = (Mat_<double>(3,3) << a, b, (1 - a) * rot_point.x - b * rot_point.y,
                                        -1 * b, a, b * rot_point.x + (1 - a) * rot_point.y,
                                        0, 0, 1);

    // next the translation matrix
    double offsetx = (new_size.width - old_size.width) / 2,
           offsety = (new_size.height - old_size.height) / 2;
    Mat trans_mat = (Mat_<double>(3,3) << 1, 0, offsetx, 0, 1, offsety, 0, 0, 1);

    // multiply them: we rotate first, then translate, so the order is important!
    // inverse order, so that the transformations are done right
    Mat affine_mat = Mat(trans_mat * rot_mat).rowRange(0, 2);

    // now just apply the affine transformation matrix
    warpAffine(src, dest, affine_mat, new_size, INTER_LINEAR, borderMode, borderValue);
}
The general solution is to rotate the picture and then translate it to the right position. So we create two transformation matrices (first for the rotation, second for the translation) and multiply them to get the final affine transformation. As the matrix returned by OpenCV's getRotationMatrix2D is only 2x3, I had to create the matrices by hand in 3x3 form so they could be multiplied. Then just take the first two rows and apply the affine transformation.
EDIT: I have created a Gist, because I have needed this functionality too often in different projects. There is also a Python-Version of it: https://gist.github.com/BloodyD/97917b79beb332a65758
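For convenience, here is a rough Python sketch of the same approach (untested; it composes the 3x3 rotation and translation matrices exactly as the C++ version does):

import cv2
import numpy as np

def rotate(src, angle, borderMode=cv2.BORDER_CONSTANT, borderValue=0):
    h, w = src.shape[:2]
    rad = np.deg2rad(angle)
    # new bounding size, large enough for the whole rotated image
    new_w = int(abs(w * np.cos(rad)) + abs(h * np.sin(rad)))
    new_h = int(abs(w * np.sin(rad)) + abs(h * np.cos(rad)))
    # 2x3 rotation about the old center, promoted to 3x3
    rot = np.vstack([cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0),
                     [0, 0, 1]])
    # translation that centers the result on the new canvas
    trans = np.array([[1, 0, (new_w - w) / 2.0],
                      [0, 1, (new_h - h) / 2.0],
                      [0, 0, 1]])
    # rotate first, then translate; warpAffine takes the top two rows
    affine = (trans @ rot)[:2]
    return cv2.warpAffine(src, affine, (new_w, new_h), flags=cv2.INTER_LINEAR,
                          borderMode=borderMode, borderValue=borderValue)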