Image-Processing: Converting normal pictures into FishEye images with intrinsic matrix - python

I need to synthesize many fisheye images with different intrinsic matrices from normal pictures. I am following the method described in this paper.
Ideally, if the algorithm is correct, the fisheye effect should look like this:
[expected fisheye result]
But when I use my algorithm to convert a picture, it looks like this:
[my distorted output]
Below is my code's flow:
1. First, I read the raw image and build the pixel-coordinate grid:
import numpy as np
import imageio  # scipy.ndimage.imread was removed in SciPy 1.2; imageio is a drop-in replacement here

def read_img(image):
    img = imageio.imread(image)  # returns an (h, w, 4) RGBA array
    img_shape = img.shape
    print(img_shape)
    # get the pixel coordinates
    w = img_shape[1]  # the width
    h = img_shape[0]  # the height
    uv_coord = []
    for u in range(w):
        for v in range(h):
            # records the coords in the order [x1,y1], [x1,y2], [x1,y3], ...
            uv_coord.append([float(u), float(v)])
    return np.array(uv_coord)
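As an aside (my suggestion, not from the post), the double loop can be replaced by a single np.mgrid call that builds the same (w*h, 2) array in the same order:

# Equivalent vectorized construction of uv_coord (same [u, v] ordering).
us, vs = np.mgrid[0:w, 0:h].astype(float)  # u varies slowest, as in the loops
uv_coord = np.column_stack((us.ravel(), vs.ravel()))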
Then, based on the paper:
r(θ) = k1·θ + k2·θ^3 + k3·θ^5 + k4·θ^7,    (1)
where the k's are the distortion coefficients.
Given pixel coordinates (x, y) in the pinhole projection image, the corresponding image coordinates (x', y') in the fisheye can be computed as:
x' = r(θ)·cos(φ),  y' = r(θ)·sin(φ),    (2)
where φ = arctan((y − y0)/(x − x0)) and (x0, y0) are the coordinates of the principal point in the pinhole projection image.
The image coordinates (x', y') are then converted into pixel coordinates (xf, yf):
xf = mu·x' + u0,  yf = mv·y' + v0,    (3)
where (u0, v0) are the coordinates of the principal point in the fisheye, and mu, mv denote the number of pixels per unit distance in the horizontal and vertical directions. So I am guessing these are just the [fx, fy] from the intrinsic matrix, and u0, v0 are the [cx, cy].
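For reference, a minimal vectorized sketch of what I read eqs. (1)-(3) to mean, assuming θ is the angle of the incoming ray, computed from the distance to the principal point in normalized (focal-divided) coordinates. This is my interpretation, not code from the paper:

def fisheye_forward(u, v, k, fx, fy, cx, cy):
    """Sketch of eqs. (1)-(3): map pinhole pixel coords (u, v) to fisheye
    pixel coords. k = [k1, k2, k3, k4]. Assumes theta is the ray angle,
    measured from the principal point in normalized coordinates."""
    # normalized pinhole coordinates relative to the principal point
    x = (u - cx) / fx
    y = (v - cy) / fy
    rho = np.sqrt(x**2 + y**2)   # radial distance on the image plane
    theta = np.arctan(rho)       # ray angle; the pinhole model gives rho = tan(theta)
    r = k[0]*theta + k[1]*theta**3 + k[2]*theta**5 + k[3]*theta**7  # eq. (1)
    phi = np.arctan2(y, x)
    xp, yp = r * np.cos(phi), r * np.sin(phi)  # eq. (2)
    return fx * xp + cx, fy * yp + cy          # eq. (3)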
def add_distortion(sourceUV, dmatrix, Kmatrix):
    '''Remaps the pixel coordinates of the given original image.
    input arguments:
    dmatrix -- the distortion coefficients [k1, k2, k3, k4], for tweaking purposes
    Kmatrix -- the intrinsics [fx, fy, cx, cy, s]'''
    u = sourceUV[:, 0]  # width in x
    v = sourceUV[:, 1]  # height in y
    rho = np.sqrt(u**2 + v**2)
    # get theta
    theta = np.arctan(rho, np.full_like(u, 1))
    # rho_mat = np.array([rho, rho**3, rho**5, rho**7])
    rho_mat = np.array([theta, theta**3, theta**5, theta**7])
    # get rho(theta) = k1*theta + k2*theta**3 + k3*theta**5 + k4*theta**7
    rho_d = dmatrix @ rho_mat
    # get phi
    phi = np.arctan2((v - Kmatrix[3]), (u - Kmatrix[2]))
    xd = rho_d * np.cos(phi)
    yd = rho_d * np.sin(phi)
    # convert the coords from the image plane back to pixel coords
    ud = Kmatrix[0] * (xd + Kmatrix[4] * yd) + Kmatrix[2]
    vd = Kmatrix[1] * yd + Kmatrix[3]
    return np.column_stack((ud, vd))
Then, after obtaining the distorted coordinates, I move the pixels as follows, which is where I think the problem might be:
import cv2
import numpy as np
from PIL import Image

def main():
    image_name = "original.png"
    img = cv2.imread(image_name)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # cv2 reads the image as BGR
    w = img.shape[1]
    h = img.shape[0]
    uv_coord = read_img(image_name)
    # for adding distortion
    dmatrix = [-0.391942708316175, 0.012746418822063, -0.001374061848026, 0.005349692659231]
    # the intrinsic matrix of the original picture
    Kmatrix = np.array([9.842439e+02, 9.808141e+02, 1392/2, 2.331966e+02, 0.000000e+00])
    # Kmatrix = np.array([2234.23470710156, 2223.78349134123, 947.511596277837, 647.103139639432, -3.20443253476976])  # the distorted intrinsics
    uv = add_distortion(uv_coord, dmatrix, Kmatrix)
    i = 0
    dstimg = np.zeros_like(img)
    for x in range(w):  # iterate in the same order uv_coord was built
        for y in range(h):
            if i > (h * w - 1):
                break
            xu = uv[i][0]
            yu = uv[i][1]
            i += 1
            # if the new pixel is in bounds, copy the source pixel to the destination
            if 0 <= xu < img.shape[1] and 0 <= yu < img.shape[0]:
                dstimg[int(yu)][int(xu)] = img[int(y)][int(x)]
    img = Image.fromarray(dstimg, 'RGB')
    img.save('my.png')
    img.show()
However, this code does not behave the way I want. Could you please help me debug it? I've spent three days on it and still cannot see the problem. Thanks!!
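One likely culprit (an observation on my part, not from the original post): the loop scatters source pixels forward into the destination, so any destination pixel that no source pixel lands on stays black, which produces exactly the speckled output shown. The usual fix, used by the remap-based answers further down this page, is to iterate over destination pixels and sample the source instead, e.g. with cv2.remap:

# Sketch only: build inverse maps (destination -> source) and let OpenCV
# interpolate. inverse_distort() is a hypothetical function that inverts
# eqs. (1)-(3), e.g. by solving r(theta) numerically for each radius.
ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
map_x, map_y = inverse_distort(xs, ys, dmatrix, Kmatrix)  # hypothetical
dstimg = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)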

Related

Turning Circle into Square. Solved: squircle package

Given a photo of a round object, how can you output that object flattened, while adjusting for surface area? Here is an example image of an input:
Soccer ball
It is similar to adjusting for camera distortion (turning a round object into a flat one), but in this case the distortion comes from the object itself and not the camera.
Distorted image:
Undistorted image:
Any suggestions would help. Thank you!
Edit: The package squircle is just what I needed, thank you fmw42!
Here is a solution in Python/OpenCV. It creates transformation maps that define the equations from output back to input and applies them using cv2.remap(). The equations come from https://arxiv.org/pdf/1509.06344.pdf for the Elliptical Grid Mapping approach.
Input:
import numpy as np
import cv2
import math

# References:
# https://arxiv.org/pdf/1509.06344.pdf
# http://squircular.blogspot.com/2015/09/mapping-circle-to-square.html

# Evaluate:
# u = x*sqrt(1 - y**2/2)
# v = y*sqrt(1 - x**2/2)
# u,v are input circle coordinates and x,y are output square coordinates

# read input
img = cv2.imread("rings.png")

# get dimensions and center
h, w = img.shape[:2]
xcent = w / 2
ycent = h / 2

# set up the maps as float32 from output square (x,y) to input circle (u,v)
map_u = np.zeros((h, w), np.float32)
map_v = np.zeros((h, w), np.float32)

# create u and v maps where x,y is measured from the center and scaled from -1 to 1
for y in range(h):
    Y = (y - ycent) / ycent
    for x in range(w):
        X = (x - xcent) / xcent
        map_u[y, x] = xcent * X * math.sqrt(1 - 0.5*Y**2) + xcent
        map_v[y, x] = ycent * Y * math.sqrt(1 - 0.5*X**2) + ycent

# do the remap
result = cv2.remap(img, map_u, map_v, cv2.INTER_LINEAR, borderMode=cv2.BORDER_REFLECT_101, borderValue=(0,0,0))

# save results
cv2.imwrite("rings_circle2square.png", result)

# display images
cv2.imshow('img', img)
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
Here is another example:
Input:
Result:
And here is a 3rd example:
Input:
Result:
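As a performance aside (my addition, not part of the answer): the per-pixel Python loop above can be vectorized with NumPy. This sketch builds the same Elliptical Grid maps in a few array operations:

# Vectorized construction of the same Elliptical Grid maps (a sketch).
ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
X = (xs - xcent) / xcent
Y = (ys - ycent) / ycent
map_u = (xcent * X * np.sqrt(1 - 0.5 * Y**2) + xcent).astype(np.float32)
map_v = (ycent * Y * np.sqrt(1 - 0.5 * X**2) + ycent).astype(np.float32)
result = cv2.remap(img, map_u, map_v, cv2.INTER_LINEAR,
                   borderMode=cv2.BORDER_REFLECT_101)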
ADDITION
Here is an alternate approach based upon the Simple Stretch equations in the reference above:
import numpy as np
import cv2
import math

# References:
# https://arxiv.org/pdf/1509.06344.pdf
# Simple stretch equations

# read input
img = cv2.imread("rings.png")
#img = cv2.imread("ICM.png")
#img = cv2.imread("soccerball_small.jpg")

# get dimensions and center
h, w = img.shape[:2]
xcent = w / 2
ycent = h / 2

# set up the maps as float32 from output square (x,y) to input circle (u,v)
map_u = np.zeros((h, w), np.float32)
map_v = np.zeros((h, w), np.float32)

# create u and v maps where x,y is measured from the center and scaled from -1 to 1
# note: math.copysign(1, x) returns 1.0 or -1.0 according to the sign of x
for y in range(h):
    Y = (y - ycent) / ycent
    for x in range(w):
        X = (x - xcent) / xcent
        X2 = X*X
        Y2 = Y*Y
        XY = X*Y
        R = math.sqrt(X2 + Y2)
        if R == 0:
            map_u[y, x] = xcent
            map_v[y, x] = ycent
        elif X2 >= Y2:
            map_u[y, x] = xcent * math.copysign(1, X) * X2/R + xcent
            map_v[y, x] = ycent * math.copysign(1, X) * XY/R + ycent
        else:
            map_u[y, x] = xcent * math.copysign(1, Y) * XY/R + xcent
            map_v[y, x] = ycent * math.copysign(1, Y) * Y2/R + ycent

# do the remap
result = cv2.remap(img, map_u, map_v, cv2.INTER_LINEAR, borderMode=cv2.BORDER_REFLECT_101, borderValue=(0,0,0))

# save results
cv2.imwrite("rings_circle2square2.png", result)
#cv2.imwrite("ICM_circle2square2.png", result)
#cv2.imwrite("soccerball_small_circle2square2.png", result)

# display images
cv2.imshow('img', img)
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Input:
Result:
Input:
Result:
Input:
Result:

Python OpenCV How to Apply Radial/Barrel Distortion?

A lot of questions about removing radial (or barrel) distortion, but how would I add it?
Visually, I want to take my input, which is presumed to be image (a), and distort it, to be like image (b):
And ideally I'd like a tunable parameter of some "radius" to control "how much distortion" I get. Based on what I want to do, it looks like I'd just need one parameter to control the 'radius of distortion' or whatever it would be called (correct me if I'm wrong).
How can I achieve this with OpenCV? I figure it must be possible because a lot of people try going the other way for things like this. I'm just not as familiar with the proper math operations and library calls to do it.
Any help much appreciated, cheers.
The program below creates barrel distortion:

from wand.image import Image
import numpy as np
import cv2

with Image(filename='Path/to/Img/') as img:
    print(img.size)
    img.virtual_pixel = 'transparent'
    img.distort('barrel', (0.1, 0.0, 0.0, 1.0))  # play around with these values to create distortion
    img.save(filename='filename.png')
    # convert to opencv/numpy array format while the image is still open
    img_opencv = np.array(img)

# display result with opencv
cv2.imshow("BARREL", img_opencv)
cv2.waitKey(0)
cv2.destroyAllWindows()
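For the tunable "how much distortion" knob the question asks about: ImageMagick's barrel distortion computes the source radius as r_src = r*(A*r^3 + B*r^2 + C*r + D), where (A, B, C, D) are the four values passed to distort above, so increasing the first value strengthens the effect. A small sweep, as a sketch (filenames are placeholders):

from wand.image import Image

# Sketch: vary only the cubic coefficient A to scale the barrel strength.
# ImageMagick applies r_src = r*(A*r**3 + B*r**2 + C*r + D) per pixel.
for a in (0.05, 0.1, 0.3):
    with Image(filename='input.png') as img:  # placeholder filename
        img.virtual_pixel = 'transparent'
        img.distort('barrel', (a, 0.0, 0.0, 1.0))
        img.save(filename='barrel_%.2f.png' % a)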
Here is one way to produce barrel or pincushion distortion in Python/OpenCV by creating the X and Y distortion maps and then using cv.remap() to do the warping.
Input:
import numpy as np
import cv2 as cv
import math

img = cv.imread('lena.jpg')

# grab the dimensions of the image
(h, w, _) = img.shape

# set up the x and y maps as float32
map_x = np.zeros((h, w), np.float32)
map_y = np.zeros((h, w), np.float32)

scale_x = 1
scale_y = 1
center_x = w/2
center_y = h/2
radius = w/2
#amount = -0.75  # negative values produce pincushion
amount = 0.75    # positive values produce barrel

# create map with the barrel pincushion distortion formula
for y in range(h):
    delta_y = scale_y * (y - center_y)
    for x in range(w):
        # determine if pixel is within an ellipse
        delta_x = scale_x * (x - center_x)
        distance = delta_x * delta_x + delta_y * delta_y
        if distance >= (radius * radius):
            map_x[y, x] = x
            map_y[y, x] = y
        else:
            factor = 1.0
            if distance > 0.0:
                factor = math.pow(math.sin(math.pi * math.sqrt(distance) / radius / 2), amount)
            map_x[y, x] = factor * delta_x / scale_x + center_x
            map_y[y, x] = factor * delta_y / scale_y + center_y

# do the remap
dst = cv.remap(img, map_x, map_y, cv.INTER_LINEAR)

# save the result
#cv.imwrite('lena_pincushion.jpg', dst)
cv.imwrite('lena_barrel.jpg', dst)

# show the result
cv.imshow('src', img)
cv.imshow('dst', dst)
cv.waitKey(0)
cv.destroyAllWindows()
Barrel (positive amount):
Pincushion (negative amount):

Python: Show cartesian image in polar plot

Description:
I have this data represented in a cartesian coordinate system with 256 columns and 640 rows. Each column represents an angle, theta, from -65 deg to 65 deg. Each row represents a range, r, from 0 to 20 m.
An example is given below:
With the following code I try to make a grid and transform each pixel location to the location it would have on a polar grid:
def polar_image(image, bearings):
    (h, w) = image.shape
    x_max = (np.ceil(np.sin(np.deg2rad(np.max(bearings)))*h)*2+1).astype(int)
    y_max = (np.ceil(np.cos(np.deg2rad(np.min(np.abs(bearings))))*h)+1).astype(int)
    blank = np.zeros((y_max, x_max, 1), np.uint8)
    for i in range(w):
        for j in range(h):
            X = (np.sin(np.deg2rad(bearings[i]))*j)
            Y = (-np.cos(np.deg2rad(bearings[i]))*j)
            blank[(Y+h).astype(int), (X+562).astype(int)] = image[h-1-j, w-1-i]
    return blank
This returns an image as below:
Questions:
This is sort of what I actually want to achieve, except for two things:
1) There seem to be some artifacts in the new image, and the mapping seems a bit coarse. Does anyone have a suggestion on how to interpolate to get rid of these?
2) The image remains in a Cartesian representation, meaning that I don't have any polar gridlines, nor can I visualize intervals of range/angle. Does anybody know how to visualize the polar grids with axis ticks in theta and range?
You can use pyplot.pcolormesh() to plot the converted mesh:
import numpy as np
import pylab as pl
img = pl.imread("c:/tmp/Wnov4.png")
angle_max = np.deg2rad(65)
h, w = img.shape
angle, r = np.mgrid[-angle_max:angle_max:h*1j, 0:20:w*1j]
x = r * np.sin(angle)
y = r * np.cos(angle)
fig, ax = pl.subplots()
ax.set_aspect("equal")
pl.pcolormesh(x, y, img, cmap="gray");
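To also get the polar gridlines asked about in question (2), the same mesh can be drawn on a polar axes instead. A sketch (the 90-degree shift only rotates the fan so it opens upward, and thetamin/thetamax trim the visible wedge to the +/- 65 degree span):

# Sketch: same data on a true polar axes, giving theta/range gridlines.
fig = pl.figure()
ax = fig.add_subplot(111, projection="polar")
ax.pcolormesh(angle + np.pi/2, r, img, cmap="gray")  # rotate fan to open upward
ax.set_thetamin(90 - 65)  # degrees
ax.set_thetamax(90 + 65)
ax.set_rmax(20)
pl.show()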
or you can use the remap() in OpenCV to convert it to a new image:
import cv2
import numpy as np
from PIL import Image
img = cv2.imread(r"c:/tmp/Wnov4.png", cv2.IMREAD_GRAYSCALE)
angle_max = np.deg2rad(65)
r_max = 20
x = np.linspace(-20, 20, 800)
y = np.linspace(20, 0, 400)
y, x = np.ix_(y, x)
r = np.hypot(x, y)
a = np.arctan2(x, y)
map_x = r / r_max * img.shape[1]
map_y = a / (2 * angle_max) * img.shape[0] + img.shape[0] * 0.5
img2 = cv2.remap(img, map_x.astype(np.float32), map_y.astype(np.float32), cv2.INTER_CUBIC)
Image.fromarray(img2)

Find [x,y] rotated coordinates locations in image [OpenCV / Python]

I want to rotate an image at several angles sequentially. I do that using cv2.getRotationMatrix2D and cv2.warpAffine. Given a pair of pixel coordinates [x, y], where x = cols and y = rows (in this case), I want to find their new coordinates in the rotated images.
I used the following slightly changed code, courtesy of http://www.pyimagesearch.com/2017/01/02/rotate-images-correctly-with-opencv-and-python/, and the explanation from the Affine Transformation tutorial to try to map the points in the rotated image: http://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/warp_affine/warp_affine.html.
The problem is that my mapping or my rotation is wrong, because the transformed calculated coordinates are wrong. (I computed the corners manually for a simple verification.)
CODE:
def rotate_bound(image, angle):
    # grab the dimensions of the image and then determine the center
    (h, w) = image.shape[:2]
    (cX, cY) = ((w-1) // 2.0, (h-1) // 2.0)
    # grab the rotation matrix (applying the negative of the
    # angle to rotate clockwise), then grab the sine and cosine
    # (i.e., the rotation components of the matrix)
    M = cv2.getRotationMatrix2D((cX, cY), -angle, 1.0)
    cos = np.abs(M[0, 0])
    sin = np.abs(M[0, 1])
    # compute the new bounding dimensions of the image
    nW = int((h * sin) + (w * cos))
    nH = int((h * cos) + (w * sin))
    print(nW, nH)
    # adjust the rotation matrix to take into account translation
    M[0, 2] += ((nW-1) / 2.0) - cX
    M[1, 2] += ((nH-1) / 2.0) - cY
    # perform the actual rotation and return the image
    return M, cv2.warpAffine(image, M, (nW, nH))
# function that calculates the updated locations of the coordinates
# after rotation
def rotated_coord(points, M):
    points = np.array(points)
    ones = np.ones(shape=(len(points), 1))
    points_ones = np.concatenate((points, ones), axis=1)
    transformed_pts = M.dot(points_ones.T).T
    return transformed_pts
# READ IMAGE & CALL FCT
img = cv2.imread("Lenna.png")
points = np.array([[511, 511]])
# rotate by a 90 degree angle, for example
M, rotated = rotate_bound(img, 90)
# find out the new locations
transformed_pts = rotated_coord(points, M)
If I have, for example, the coordinates [511, 511], I obtain [-0.5, 511.5] ([col, row]) when I expect to obtain [0, 511].
If I instead use w // 2, a black border is added to the image and my updated rotated coordinates are off again.
Question: How can I find the correct location of a pair of pixel coordinates in an image rotated by a certain angle, using Python?
For this case of image rotation, where the image size changes after rotation, and the reference point with it, the transformation matrix has to be modified. The new width and height can be calculated using the following relations:
new_width = h * |sin(θ)| + w * |cos(θ)|
new_height = h * |cos(θ)| + w * |sin(θ)|
Since the image size changes, because of the black border that you might see, the coordinates of the rotation point (the centre of the image) change too. This has to be taken into account in the transformation matrix.
I explain an example on my blog: image rotation bounding box opencv
def rotate_box(bb, cx, cy, h, w, theta):  # theta: rotation angle in degrees
    new_bb = list(bb)
    for i, coord in enumerate(bb):
        # opencv calculates the standard transformation matrix
        M = cv2.getRotationMatrix2D((cx, cy), theta, 1.0)
        # grab the rotation components of the matrix
        cos = np.abs(M[0, 0])
        sin = np.abs(M[0, 1])
        # compute the new bounding dimensions of the image
        nW = int((h * sin) + (w * cos))
        nH = int((h * cos) + (w * sin))
        # adjust the rotation matrix to take into account translation
        M[0, 2] += (nW / 2) - cx
        M[1, 2] += (nH / 2) - cy
        # prepare the vector to be transformed
        v = [coord[0], coord[1], 1]
        # perform the actual rotation and store the transformed point
        calculated = np.dot(M, v)
        new_bb[i] = (calculated[0], calculated[1])
    return new_bb

## Calculate the new bounding box coordinates
new_bb = {}
for i in bb1:
    new_bb[i] = rotate_box(bb1[i], cx, cy, height, width, theta)
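As a side check (mine, not part of the answer): the question's rotate_bound computes the centre with floor division, (w - 1) // 2.0, which gives 255.0 instead of 255.5 for a 512-pixel width, and that half-pixel error is exactly the [-0.5, 511.5] the question reports. With true division used consistently, the expected corner mapping falls out:

# Sketch: verify that with the exact pixel center (w-1)/2, the corner
# [511, 511] of a 512x512 image rotated by 90 degrees maps to [0, 511].
import numpy as np
import cv2

w = h = 512
cX, cY = (w - 1) / 2.0, (h - 1) / 2.0    # true division, not //
M = cv2.getRotationMatrix2D((cX, cY), -90, 1.0)
nW, nH = h, w                            # 90 degrees: |sin|=1, |cos|=0
M[0, 2] += (nW - 1) / 2.0 - cX
M[1, 2] += (nH - 1) / 2.0 - cY
print(M.dot([511, 511, 1]))              # -> [  0. 511.]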
Here is the corresponding C++ code for @cristianpb's Python code above, in case someone is looking for C++ like me:
#include <cmath>
#include <opencv2/imgproc.hpp>

// send the original angle, i.e. don't convert it to radians
cv::Point2f rotatePointUsingTransformationMat(const cv::Point2f& inPoint, const cv::Point2f& center, const double& rotAngle)
{
    cv::Mat rot = cv::getRotationMatrix2D(center, rotAngle, 1.0);
    double cosA = std::abs(rot.at<double>(0, 0));
    double sinA = std::abs(rot.at<double>(0, 1));
    int newWidth  = int(((center.y * 2) * sinA) + ((center.x * 2) * cosA));
    int newHeight = int(((center.y * 2) * cosA) + ((center.x * 2) * sinA));
    rot.at<double>(0, 2) += newWidth / 2.0 - center.x;
    rot.at<double>(1, 2) += newHeight / 2.0 - center.y;

    // apply the 2x3 affine matrix to the homogeneous point (x, y, 1),
    // keeping double precision instead of truncating to int
    double v[3] = { inPoint.x, inPoint.y, 1.0 };
    double out[2] = { 0.0, 0.0 };
    for (int i = 0; i < rot.rows; i++)
    {
        double sum = 0.0;
        for (int k = 0; k < 3; k++)
        {
            sum += rot.at<double>(i, k) * v[k];
        }
        out[i] = sum;
    }
    return cv::Point2f(out[0], out[1]);
}

Wrap image around a circle

What I'm trying to do in this example is wrap an image around a circle, like below.
To wrap the image I simply calculated the x,y coordinates using trig.
The problem is that the calculated X and Y positions are rounded to make them integers. This causes the blank pixels seen in the wrapped image above. The x,y positions have to be integers because they are indices into lists.
I've done this again in the following code, but without any images, to make things easier to see. All I've done is create two arrays with binary values, one array black and the other white, then wrapped one onto the other.
The output of the code is:
import math as m
from PIL import Image  # only used for showing output as image

width = 254.0
height = 24.0
Ro = 40.0

img = [[1 for x in range(int(width))] for y in range(int(height))]
cir = [[0 for x in range(int(Ro * 2))] for y in range(int(Ro * 2))]

def shom_im(img):  # for showing data as image
    list_image = [item for sublist in img for item in sublist]
    new_image = Image.new("1", (len(img[0]), len(img)))
    new_image.putdata(list_image)
    new_image.show()

increment = m.radians(360 / width)
rad = Ro - 0.5

for i, row in enumerate(img):
    hyp = rad - i
    for j, column in enumerate(row):
        alpha = j * increment
        x = m.cos(alpha) * hyp + rad
        y = m.sin(alpha) * hyp + rad
        # put value from original image to its position in new image
        cir[int(round(y))][int(round(x))] = img[i][j]

shom_im(cir)
I later found out about the Midpoint Circle Algorithm, but I got worse results with that:
from PIL import Image  # only used for showing output as image

width, height = 254, 24
ro = 40

img = [[(0, 0, 0, 1) for x in range(int(width))]
       for y in range(int(height))]
cir = [[(0, 0, 0, 255) for x in range(int(ro * 2))] for y in range(int(ro * 2))]

def shom_im(img):  # for showing data as image
    list_image = [item for sublist in img for item in sublist]
    new_image = Image.new("RGBA", (len(img[0]), len(img)))
    new_image.putdata(list_image)
    new_image.show()

def putpixel(x0, y0):
    global cir
    cir[y0][x0] = (255, 255, 255, 255)

def drawcircle(x0, y0, radius):
    x = radius
    y = 0
    err = 0
    while (x >= y):
        putpixel(x0 + x, y0 + y)
        putpixel(x0 + y, y0 + x)
        putpixel(x0 - y, y0 + x)
        putpixel(x0 - x, y0 + y)
        putpixel(x0 - x, y0 - y)
        putpixel(x0 - y, y0 - x)
        putpixel(x0 + y, y0 - x)
        putpixel(x0 + x, y0 - y)
        y += 1
        err += 1 + 2 * y
        if (2 * (err - x) + 1 > 0):
            x -= 1
            err += 1 - 2 * x

for i, row in enumerate(img):
    rad = ro - i
    drawcircle(int(ro - 1), int(ro - 1), rad)

shom_im(cir)
Can anybody suggest a way to eliminate the blank pixels?
You are having problems filling up your circle because you are approaching this from the wrong way – quite literally.
When mapping from a source to a target, you need to fill your target, and map each translated pixel from this into the source image. Then, there is no chance at all you miss a pixel, and, equally, you will never draw (nor lookup) a pixel more than once.
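In rough Python, the inverse approach looks like this; a sketch with made-up names (target, source_pixels, cx, cy, target_w, target_h), only to show the direction of the mapping, since the real code follows below:

import math

# Sketch of inverse mapping: walk the TARGET, look up the SOURCE.
for ty in range(target_h):                  # every destination pixel...
    for tx in range(target_w):
        dx, dy = tx - cx, ty - cy
        r = math.hypot(dx, dy)              # polar form of (tx, ty)
        if Ri <= r <= Ro:                   # inside the ring?
            a = math.atan2(dy, dx) % (2 * math.pi)         # angle -> source column
            sx = int(width * a / (2 * math.pi)) % width
            sy = int((r - Ri) * (height - 1) / (Ro - Ri))  # radius -> source row
            target[ty][tx] = source_pixels[sx, height - 1 - sy]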
The following is a bit rough-and-ready, it only serves as a concept example. I first wrote some code to draw a filled circle, top to bottom. Then I added some more code to remove the center part (and added a variable Ri, for "inner radius"). This leads to a solid ring, where all pixels are only drawn once: top to bottom, left to right.
How exactly you draw the ring is not actually important! I used trig at first because I thought of re-using the angle bit, but it can be done with Pythagoras' theorem as well, and even with Bresenham's circle routine. All you need to keep in mind is that you iterate over the target rows and columns, not the source. This provides actual x,y coordinates that you can feed into the remapping procedure.
With the above done and working, I wrote the trig functions to translate from the coordinates I would put a pixel at into the original image. For this, I created a test image containing some text:
and a good thing that was, too, as in the first attempt I got the text twice (once left, once right) and mirrored – that needed a few minor tweaks. Also note the background grid. I added that to check if the 'top' and 'bottom' lines – the outermost and innermost circles – got drawn correctly.
Running my code with this image and Ro,Ri at 100 and 50, I get this result:
You can see that the trig functions make it start at the rightmost point, move clockwise, and have the top of the image pointing outwards. All can be trivially adjusted, but this way it mimics the orientation that you want your image drawn.
This is the result with your iris-image, using 33 for the inner radius:
and here is a nice animation, showing the stability of the mapping:
Finally, then, my code is:
import math as m
from PIL import Image
Ro = 100.0
Ri = 50.0
# img = [[1 for x in range(int(width))] for y in range(int(height))]
cir = [[0 for x in range(int(Ro * 2))] for y in range(int(Ro * 2))]
# image = Image.open('0vWEI.png')
image = Image.open('this-is-a-test.png')
# data = image.convert('RGB')
pixels = image.load()
width, height = image.size
def shom_im(img): # for showing data as image
list_image = [item for sublist in img for item in sublist]
new_image = Image.new("RGB", (len(img[0]), len(img)))
new_image.putdata(list_image)
new_image.save("result1.png","PNG")
new_image.show()
for i in range(int(Ro)):
# outer_radius = Ro*m.cos(m.asin(i/Ro))
outer_radius = m.sqrt(Ro*Ro - i*i)
for j in range(-int(outer_radius),int(outer_radius)):
if i < Ri:
# inner_radius = Ri*m.cos(m.asin(i/Ri))
inner_radius = m.sqrt(Ri*Ri - i*i)
else:
inner_radius = -1
if j < -inner_radius or j > inner_radius:
# this is the destination
# solid:
# cir[int(Ro-i)][int(Ro+j)] = (255,255,255)
# cir[int(Ro+i)][int(Ro+j)] = (255,255,255)
# textured:
x = Ro+j
y = Ro-i
# calculate source
angle = m.atan2(y-Ro,x-Ro)/2
distance = m.sqrt((y-Ro)*(y-Ro) + (x-Ro)*(x-Ro))
distance = m.floor((distance-Ri+1)*(height-1)/(Ro-Ri))
# if distance >= height:
# distance = height-1
cir[int(y)][int(x)] = pixels[int(width*angle/m.pi) % width, height-distance-1]
y = Ro+i
# calculate source
angle = m.atan2(y-Ro,x-Ro)/2
distance = m.sqrt((y-Ro)*(y-Ro) + (x-Ro)*(x-Ro))
distance = m.floor((distance-Ri+1)*(height-1)/(Ro-Ri))
# if distance >= height:
# distance = height-1
cir[int(y)][int(x)] = pixels[int(width*angle/m.pi) % width, height-distance-1]
shom_im(cir)
The commented-out lines draw a solid white ring. Note the various tweaks here and there to get the best result. For instance, the distance is measured from the center of the ring, and so returns a low value for close to the center and the largest values for the outside of the circle. Mapping that directly back onto the target image would display the text with its top "inwards", pointing to the inner hole. So I inverted this mapping with height - distance - 1, where the -1 is to make it map from 0 to height again.
A similar fix is in the calculation of distance itself; without the tweaks Ri+1 and height-1 either the innermost or the outermost row would not get drawn, indicating that the calculation is just one pixel off (which was exactly the purpose of that grid).
I think what you need is a noise filter. There are many implementations, of which I think a Gaussian filter would give a good result. You can find a list of filters here. If it blurs too much:
- keep your first calculated image
- calculate the filtered image
- copy the fixed pixels from the filtered image into the first calculated image
Here is a crude average filter written by hand:
cir_R = int(Ro*2)                   # outer circle 2*r
inner_r = int(Ro - 0.5 - len(img))  # inner circle r

for i in range(1, cir_R-1):
    for j in range(1, cir_R-1):
        if cir[i][j] == 0:  # missing pixel
            dx = int(i-Ro)
            dy = int(j-Ro)
            pix_r2 = dx*dx + dy*dy  # squared distance to center
            if pix_r2 <= Ro*Ro and pix_r2 >= inner_r*inner_r:
                cir[i][j] = (cir[i-1][j] + cir[i+1][j] + cir[i][j-1] + cir[i][j+1]) / 4

shom_im(cir)
and the result:
This basically scans between the two radii, checks for missing pixels, and replaces each with the average of its 4 adjacent pixels. In this black-and-white case they are all white.
Hope it helps!
