How to remove variations in the y-axis of template matching (OpenCV) - Python

With the function below, the returned coordinates differ slightly on each call, because the template matching finds the same template a few pixels up or down. Increasing the threshold does not help; if it is set any higher, nothing is found at all.
How could I make it always return the same coordinates after finding the template, without this little variation in the y-axis (I don't care about the precision of the x-axis)?
The function, which runs inside an infinite loop, is this:
def fishing_region(img_gray, region_template_gray, w, h):  # w, h: width and height of the template
    region_detected = False
    green_bar_region = img_gray[y-5:470+y, 347+x:488+x]
    res = cv2.matchTemplate(img_gray, region_template_gray, cv2.TM_CCOEFF_NORMED)
    threshold = 0.65
    loc = np.where(res >= threshold)
    for pt in zip(*loc[::-1]):
        x1, y1 = pt[0], pt[1]  # top-left corner
        x2, y2 = pt[0] + w, pt[1] + h  # bottom-right corner
        coords_list = [y1, y2, x1 + 55, x2 - 35]
        region_detected = True
        print("Region detected")
        break  # only finds the template 1 time per function call
    if not region_detected:
        print("No region")
    return region_detected, coords_list
EDIT: Here is the rectangle drawn with the coordinates, together with the template: album.
Also, would masking the template image, removing the parts that change colors, be possible?

If you require only one matching region per image, using the cv2.minMaxLoc() function on res will find the global maximum. This will be stable for a given image and template.
To replicate the threshold you have, you could use the following pseudocode:
_, maxVal, _, maxLoc = cv2.minMaxLoc(res)
if maxVal > thresh:
    rest_of_function()
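A minimal sketch of how this could slot into the question's function, keeping the question's 0.65 threshold and its +55/-35 x-axis offsets (the return value of None for the no-match case is an assumption, not taken from the original code):

import cv2

def fishing_region(img_gray, region_template_gray, w, h):
    """Single best match via minMaxLoc, so the result is deterministic."""
    res = cv2.matchTemplate(img_gray, region_template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(res)   # global maximum of the correlation surface
    if max_val > 0.65:                            # same threshold as the original code
        x1, y1 = max_loc                          # top-left corner of the best match
        x2, y2 = x1 + w, y1 + h                   # bottom-right corner
        print("Region detected")
        return True, [y1, y2, x1 + 55, x2 - 35]   # offsets copied from the question
    print("No region")
    return False, None

Since cv2.minMaxLoc always returns the single strongest peak of the correlation surface, repeated calls on the same frame give identical coordinates; any remaining jitter would come from changes in the frames themselves, not from which of several above-threshold matches happens to be found first.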

Related

Why does this object detection code using template matching not run successfully?

The output that I get is just the reference image, and no bounding box is seen.
I have tried this code from this website: https://www.sicara.fr/blog-technique/object-detection-template-matching
Here's the reference image
Reference Image
Here are the templates:
Template 1:
Template 2:
Template 3:
According to the website, the output of the code should look like this:
Expected Output:
I was expecting this output as shown on the website; however, when I run the code, nothing seems to be detected. Here is the code that I copied:
import cv2
import numpy as np

DEFAULT_TEMPLATE_MATCHING_THRESHOLD = 0.9

class Template:
    """
    A class defining a template
    """
    def __init__(self, image_path, label, color, matching_threshold=DEFAULT_TEMPLATE_MATCHING_THRESHOLD):
        """
        Args:
            image_path (str): path of the template image
            label (str): the label corresponding to the template
            color (List[int]): the color associated with the label (to plot detections)
            matching_threshold (float): the minimum similarity score to consider an object is detected by
                template matching
        """
        self.image_path = image_path
        self.label = label
        self.color = color
        self.template = cv2.imread(image_path)
        self.template_height, self.template_width = self.template.shape[:2]
        self.matching_threshold = matching_threshold

image = cv2.imread("reference.jpg")
templates = [
    Template(image_path="Component1.jpg", label="1", color=(0, 0, 255), matching_threshold=0.99),
    Template(image_path="Component2.jpg", label="2", color=(0, 255, 0), matching_threshold=0.91),
    Template(image_path="Component3.jpg", label="3", color=(0, 191, 255), matching_threshold=0.99),
]
detections = []
for template in templates:
    template_matching = cv2.matchTemplate(template.template, image, cv2.TM_CCORR_NORMED)
    match_locations = np.where(template_matching >= template.matching_threshold)
    for (x, y) in zip(match_locations[1], match_locations[0]):
        match = {
            "TOP_LEFT_X": x,
            "TOP_LEFT_Y": y,
            "BOTTOM_RIGHT_X": x + template.template_width,
            "BOTTOM_RIGHT_Y": y + template.template_height,
            "MATCH_VALUE": template_matching[y, x],
            "LABEL": template.label,
            "COLOR": template.color,
        }
        detections.append(match)

def compute_iou(boxA, boxB):
    xA = max(boxA["TOP_LEFT_X"], boxB["TOP_LEFT_X"])
    yA = max(boxA["TOP_LEFT_Y"], boxB["TOP_LEFT_Y"])
    xB = min(boxA["BOTTOM_RIGHT_X"], boxB["BOTTOM_RIGHT_X"])
    yB = min(boxA["BOTTOM_RIGHT_Y"], boxB["BOTTOM_RIGHT_Y"])
    interArea = max(0, xB - xA + 1) * max(0, yB - yA + 1)
    boxAArea = (boxA["BOTTOM_RIGHT_X"] - boxA["TOP_LEFT_X"] + 1) * (boxA["BOTTOM_RIGHT_Y"] - boxA["TOP_LEFT_Y"] + 1)
    boxBArea = (boxB["BOTTOM_RIGHT_X"] - boxB["TOP_LEFT_X"] + 1) * (boxB["BOTTOM_RIGHT_Y"] - boxB["TOP_LEFT_Y"] + 1)
    iou = interArea / float(boxAArea + boxBArea - interArea)
    return iou

def non_max_suppression(objects, non_max_suppression_threshold=0.5, score_key="MATCH_VALUE"):
    """
    Filter objects overlapping with IoU over threshold by keeping only the one with maximum score.
    Args:
        objects (List[dict]): a list of object dictionaries, with:
            {score_key} (float): the object score
            TOP_LEFT_X (float): the top-left x-axis coordinate of the object bounding box
            TOP_LEFT_Y (float): the top-left y-axis coordinate of the object bounding box
            BOTTOM_RIGHT_X (float): the bottom-right x-axis coordinate of the object bounding box
            BOTTOM_RIGHT_Y (float): the bottom-right y-axis coordinate of the object bounding box
        non_max_suppression_threshold (float): the minimum IoU value used to filter overlapping boxes when
            conducting non-max suppression.
        score_key (str): score key in objects dicts
    Returns:
        List[dict]: the filtered list of dictionaries.
    """
    sorted_objects = sorted(objects, key=lambda obj: obj[score_key], reverse=True)
    filtered_objects = []
    for object_ in sorted_objects:
        overlap_found = False
        for filtered_object in filtered_objects:
            iou = compute_iou(object_, filtered_object)
            if iou > non_max_suppression_threshold:
                overlap_found = True
                break
        if not overlap_found:
            filtered_objects.append(object_)
    return filtered_objects

NMS_THRESHOLD = 0.2
detections = non_max_suppression(detections, non_max_suppression_threshold=NMS_THRESHOLD)

image_with_detections = image.copy()
for detection in detections:
    cv2.rectangle(
        image_with_detections,
        (detection["TOP_LEFT_X"], detection["TOP_LEFT_Y"]),
        (detection["BOTTOM_RIGHT_X"], detection["BOTTOM_RIGHT_Y"]),
        detection["COLOR"],
        2,
    )
    cv2.putText(
        image_with_detections,
        f"{detection['LABEL']} - {detection['MATCH_VALUE']}",
        (detection["TOP_LEFT_X"] + 2, detection["TOP_LEFT_Y"] + 20),
        cv2.FONT_HERSHEY_SIMPLEX, 0.5,
        detection["COLOR"], 1,
        cv2.LINE_AA,
    )
# NMS_THRESHOLD = 0.2
# detection = non_max_suppression(detections, non_max_suppression_threshold=NMS_THRESHOLD)
print("Image written to file-system: ", status)  # note: `status` is never defined in this script
cv2.imshow("res", image_with_detections)
cv2.waitKey(0)
This is how his final output looks:
Here's my attempt at detecting the larger components; the code was able to detect them, and here is the result:
Result
Here are the resized templates and the original components that I wanted to detect but unfortunately couldn't:
1st 2nd 3rd
Here is a method of finding multiple matches in template matching in Python/OpenCV, using your reference and smallest template. I have removed all the white padding you had around your template. My method simply draws a black rectangle over the correlation image where it matches and then repeats, looking for the next best match in the modified correlation image.
I have used cv2.TM_CCORR_NORMED and a match threshold of 0.90. You have 4 of these templates showing in your reference image, so I set my search number to 4 and spacing of 10 for the non-maximum suppression by masking. You have other small items of the same shape and size, but the text on them is different. So you will need different templates for each.
Reference:
Template:
import cv2
import numpy as np

# read image
img = cv2.imread('circuit_board.jpg')

# read template
tmplt = cv2.imread('circuit_item.png')
hh, ww, cc = tmplt.shape

# set arguments
match_thresh = 0.90       # stopping threshold for match value
num_matches = 4           # stopping threshold for number of matches
match_radius = 10         # approx radius of match peaks
match_radius2 = match_radius//2

# get correlation surface from template matching
corrimg = cv2.matchTemplate(img, tmplt, cv2.TM_CCORR_NORMED)
hc, wc = corrimg.shape

# get locations of all peaks higher than match_thresh for up to num_matches
imgcopy = img.copy()
corrcopy = corrimg.copy()
for i in range(0, num_matches):
    # get max value and location of max
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(corrcopy)
    x1 = max_loc[0]
    y1 = max_loc[1]
    x2 = x1 + ww
    y2 = y1 + hh
    loc = str(x1) + "," + str(y1)
    if max_val > match_thresh:
        print("match number:", i+1, "match value:", max_val, "match x,y:", loc)
        # draw white bounding box to define match location
        cv2.rectangle(imgcopy, (x1,y1), (x2,y2), (255,255,255), 1)
        # insert black rectangle over copy of corr image so it does not find that match again
        corrcopy[y1-match_radius2:y1+match_radius2, x1-match_radius2:x1+match_radius2] = 0
    else:
        break

# save results
# power of 4 exaggeration of correlation image to emphasize peaks
cv2.imwrite('circuit_board_multi_template_corr.png', (255*cv2.pow(corrimg,4)).clip(0,255).astype(np.uint8))
cv2.imwrite('circuit_board_multi_template_corr_masked.png', (255*cv2.pow(corrcopy,4)).clip(0,255).astype(np.uint8))
cv2.imwrite('circuit_board_multi_template_match.png', imgcopy)

# show results
# power of 4 exaggeration of correlation image to emphasize peaks
cv2.imshow('image', img)
cv2.imshow('template', tmplt)
cv2.imshow('corr', cv2.pow(corrimg,4))
cv2.imshow('corr masked', cv2.pow(corrcopy,4))
cv2.imshow('result', imgcopy)
cv2.waitKey(0)
cv2.destroyAllWindows()
Original Correlation Image:
Modified Correlation Image after 4 matches:
Matches Marked on Input as White Rectangles:
Match Locations:
match number: 1 match value: 0.9982172250747681 match x,y: 128,68
match number: 2 match value: 0.9762057065963745 match x,y: 128,90
match number: 3 match value: 0.9755787253379822 match x,y: 128,48
match number: 4 match value: 0.963689923286438 match x,y: 127,107

Counting drops in an image

I have an image of drops and I want to count them.
Here is the original image:
And here it is after thresholding:
I tried a lot of functions in OpenCV and the result is never right.
Do you have any ideas on how to do this?
Thanks
The best result I got was by using the following:
(img_morph is my binarized image)
from skimage.measure import label, regionprops
import matplotlib.pyplot as plt

rbc_bw = label(img_morph)
rbc_props = regionprops(rbc_bw)
fig, ax = plt.subplots(figsize=(18, 8))
ax.imshow(img_morph)

rbc_count = 0
for i, prop in enumerate(filter(lambda x: x.area > 250, rbc_props)):
    y1, x1, y2, x2 = (prop.bbox[0], prop.bbox[1],
                      prop.bbox[2], prop.bbox[3])
    width = x2 - x1
    height = y2 - y1
    r = plt.Rectangle((x1, y1), width=width, height=height,
                      color='b', fill=False)
    ax.add_patch(r)
    rbc_count += 1
print('Red Blood Cell Count:', rbc_count)
plt.show()
All my circles are detected here, but so are the gaps in between.
A more difficult image:
Core idea: matchTemplate.
Approach:
pick a template manually from the picture
histogram equalization for badly lit inputs (or always)
matchTemplate with suitable matching mode
also using copyMakeBorder to catch instances clipping the border
thresholding and non-maximum suppression
I'll skip the boring parts and use the first example input.
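For completeness, the skipped preprocessing steps (loading, equalization, border padding) might look roughly like this; the file names and the use of BORDER_REPLICATE are assumptions, not details from the answer:

import cv2 as cv

# hypothetical file names; the answer does not give them
haystack = cv.imread("drops.png", cv.IMREAD_GRAYSCALE)
template = cv.imread("template.png", cv.IMREAD_GRAYSCALE)

# histogram equalization for badly lit inputs (or always);
# the template should be picked from the already-equalized image
haystack = cv.equalizeHist(haystack)

# pad so instances clipped by the image border can still produce a peak;
# padding by the full template size is a generous choice, half of it also works
ph, pw = template.shape
haystack = cv.copyMakeBorder(haystack, ph, ph, pw, pw, cv.BORDER_REPLICATE)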
Manually picked template:
scores = cv.matchTemplate(haystack, template, cv.TM_CCOEFF_NORMED)
Thresholding and NMS:
levelmask = (scores >= 0.3)
localmax = cv.dilate(scores, None, iterations=26)
localmax = (scores == localmax)
candidates = levelmask & localmax
(nlabels, labels, stats, centroids) = cv.connectedComponentsWithStats(candidates.astype(np.uint8), connectivity=8)
print(nlabels-1, "found") # background counted too
# and then draw a circle for each centroid except label 0
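A rough completion of that last comment, assuming a grayscale haystack as above (the circle radius, color, and output file name are arbitrary choices):

canvas = cv.cvtColor(haystack, cv.COLOR_GRAY2BGR)
for cx, cy in centroids[1:]:   # skip label 0, the background
    cv.circle(canvas, (int(round(cx)), int(round(cy))), 15, (0, 0, 255), 2)
cv.imwrite("drops_found.png", canvas)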
And that finds 766 instances. I see a few false negatives (missed) and saw a false positive once too, but that looks like less than 1%.

Is there a way to slice an image using either numpy or opencv such that the sliced image has at least one instance of the objects of interest?

Essentially, my original image has N instances of a certain object. I have the bounding box coordinates and the class for all of them in a text file. This is basically a dataset for YoloV3 and darknet. I want to generate additional images by slicing the original one in a way such that it contains at least 1 instance of one of those objects and if it does, save the image, and the new bounding box coordinates of the objects in that image.
The following is the code for slicing the image:
x1 = random.randint(0, 1200)
width = random.randint(0, 800)
y1 = random.randint(0, 1200)
height = random.randint(30, 800)
slice_img = img[x1:x1+width, y1:y1+height]
plt.imshow(slice_img)
plt.show()
My next step is to use template matching to find if my sliced image is in the original one:
w, h = slice_img.shape[:-1]
res = cv2.matchTemplate(img, slice_img, cv2.TM_CCOEFF_NORMED)
threshold = 0.6
loc = np.where(res >= threshold)
for pt in zip(*loc[::-1]):  # switch columns and rows
    cv2.rectangle(img, pt, (pt[0] + w, pt[1] + h), (0, 0, 255), 5)
cv2.imwrite('result.png', img)
At this stage, I am quite lost and not sure how to proceed any further.
Ultimately, I need many new images with corresponding text files containing the class and coordinates. Any advice would be appreciated. Thank you.
P.S I cannot share my images with you, unfortunately.
Template matching is way overkill for this. Template matching essentially slides a kernel image over your main image and compares pixels of each, performing many many computations. There's no need to search the image because you already know where the objects are within the image. Essentially, you are trying to determine whether one rectangle (bounding box for an object) overlaps sufficiently with the slice, and you know the exact coordinates of each rectangle. Thus, it's a geometry problem rather than a computer vision problem.
(As an aside: the correct term for what you are calling a slice would probably be crop; slice generally means you're taking an N-dimensional array (say 3 x 4 x 5) and taking a subset of data that is N-1 dimensional by selecting a single index for one dimension, i.e. taking index 0 on dimension 0 to get a 1 x 4 x 5 array.)
Here's a brief example of how you might do this. Let x1, x2, y1, y2 be the min and max x and y coordinates of the crop you generate, and ox1, ox2, oy1, oy2 the min and max x and y coordinates of an object:
NO_SUCCESSFUL_CROPS = True
while NO_SUCCESSFUL_CROPS:
    # generate a crop
    x1 = random.randint(0, 1200)
    width = random.randint(0, 800)
    y1 = random.randint(0, 1200)
    height = random.randint(30, 800)
    x2 = x1 + width
    y2 = y1 + height
    # for each bounding box, check whether at least (nominally) 70% of the object is within the crop
    threshold = 0.7
    for bbox in all_objects:
        ox1, ox2, oy1, oy2 = bbox
        # compute the fraction of the bbox that lies within the crop;
        # clamping at 0 makes disjoint boxes give zero area instead of a negative one
        minx = max(ox1, x1)
        miny = max(oy1, y1)
        maxx = min(ox2, x2)
        maxy = min(oy2, y2)
        area_in_crop = max(0, maxx - minx) * max(0, maxy - miny)
        area_of_bbox = (ox2 - ox1) * (oy2 - oy1)
        ratio = area_in_crop / area_of_bbox
        if ratio > threshold:
            NO_SUCCESSFUL_CROPS = False
            # crop the image; y comes first because rows are the first axis of a numpy array
            crop_image = image[y1:y2, x1:x2]
            cv2.imwrite("output_file.png", crop_image)
            # shift bbox coords since (x1, y1) is the new (0, 0) pixel in crop_image
            ox1 -= x1
            ox2 -= x1
            oy1 -= y1
            oy2 -= y1
            break  # no need to continue (although you could alternately continue until you have N crops, or even make sure you get one crop with each object)

Crop overlapping images with Pillow

I need help, please.
I'm trying to select and crop the overlapping area of two images with the Python Pillow library.
I have the upper-left pixel coordinate of the two pictures. With these, I can find out which one is located above the other.
I wrote a function, taking two images as arguments:
def function(img1, img2):
    x1 = 223  # x coordinate of the first image
    y1 = 197  # y coordinate of the first image
    x2 = 255  # x coordinate of the second image
    y2 = 197  # y coordinate of the second image
    dX = x1 - x2
    dY = y1 - y2
    if y1 <= y2:  # if the first image is above the other
        upper = img1
        lower = img2
        flag = False
    else:
        upper = img2
        lower = img1
        flag = True
    if dX <= 0:  # if the lower image is on the left
        box = (abs(dX), abs(dY), upper.size[0], upper.size[1])
        a = upper.crop(box)
        box = (0, 0, upper.size[0] - abs(dX), upper.size[1] - abs(dY))
        b = lower.crop(box)
    else:
        box = (0, abs(dY), lower.size[0] - abs(dX), upper.size[1])
        a = upper.crop(box)
        box = (abs(dX), 0, lower.size[0], upper.size[1] - abs(dY))
        b = lower.crop(box)
    if flag:
        return b, a  # switch the two images again
    else:
        return a, b
I know for sure that the result is wrong (It's a school assignment).
Thanks for your help.
First of all, I don't quite get what you mean by one picture being "above" the other (shouldn't that be a z-position?), but take a look at this: How to make rect from the intersection of two? The first answer might be a good lead. :)
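Following that lead, a minimal sketch of the rectangle-intersection approach might look like this (assuming the top-left coordinates are passed in as parameters rather than hard-coded, which differs from the question's function):

def crop_overlap(img1, xy1, img2, xy2):
    """Return the overlapping region of two PIL images, cropped from each.

    xy1, xy2 are the (x, y) positions of each image's top-left corner in a
    shared coordinate system. Returns (crop1, crop2), or None if no overlap.
    """
    # intersection rectangle in the shared coordinate system
    left = max(xy1[0], xy2[0])
    top = max(xy1[1], xy2[1])
    right = min(xy1[0] + img1.size[0], xy2[0] + img2.size[0])
    bottom = min(xy1[1] + img1.size[1], xy2[1] + img2.size[1])
    if right <= left or bottom <= top:
        return None  # the images do not overlap
    # translate the rectangle into each image's local coordinates and crop
    crop1 = img1.crop((left - xy1[0], top - xy1[1], right - xy1[0], bottom - xy1[1]))
    crop2 = img2.crop((left - xy2[0], top - xy2[1], right - xy2[0], bottom - xy2[1]))
    return crop1, crop2

Computing the intersection rectangle once and translating it into each image's local coordinates avoids the four-way case analysis in the question's code, which is where its bug most likely hides.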

Wrap image around a circle

What I'm trying to do in this example is wrap an image around a circle, like below.
To wrap the image I simply calculated the x,y coordinates using trig.
The problem is that the calculated X and Y positions are rounded to make them integers. This causes the blank pixels seen in the wrapped image above. The x,y positions have to be integers because they are indices into lists.
I've done this again in the code following but without any images to make things easier to see. All I've done is create two arrays with binary values, one array is black the other white, then wrapped one onto the other.
The output of the code is:
import math as m
from PIL import Image  # only used for showing output as image

width = 254.0
height = 24.0
Ro = 40.0

img = [[1 for x in range(int(width))] for y in range(int(height))]
cir = [[0 for x in range(int(Ro * 2))] for y in range(int(Ro * 2))]

def shom_im(img):  # for showing data as image
    list_image = [item for sublist in img for item in sublist]
    new_image = Image.new("1", (len(img[0]), len(img)))
    new_image.putdata(list_image)
    new_image.show()

increment = m.radians(360 / width)
rad = Ro - 0.5

for i, row in enumerate(img):
    hyp = rad - i
    for j, column in enumerate(row):
        alpha = j * increment
        x = m.cos(alpha) * hyp + rad
        y = m.sin(alpha) * hyp + rad
        # put value from original image to its position in new image
        cir[int(round(y))][int(round(x))] = img[i][j]

shom_im(cir)
I later found out about the midpoint circle algorithm, but I had worse results with that:
from PIL import Image  # only used for showing output as image

width, height = 254, 24
ro = 40

img = [[(0, 0, 0, 1) for x in range(int(width))]
       for y in range(int(height))]
cir = [[(0, 0, 0, 255) for x in range(int(ro * 2))] for y in range(int(ro * 2))]

def shom_im(img):  # for showing data as image
    list_image = [item for sublist in img for item in sublist]
    new_image = Image.new("RGBA", (len(img[0]), len(img)))
    new_image.putdata(list_image)
    new_image.show()

def putpixel(x0, y0):
    global cir
    cir[y0][x0] = (255, 255, 255, 255)

def drawcircle(x0, y0, radius):
    x = radius
    y = 0
    err = 0
    while x >= y:
        putpixel(x0 + x, y0 + y)
        putpixel(x0 + y, y0 + x)
        putpixel(x0 - y, y0 + x)
        putpixel(x0 - x, y0 + y)
        putpixel(x0 - x, y0 - y)
        putpixel(x0 - y, y0 - x)
        putpixel(x0 + y, y0 - x)
        putpixel(x0 + x, y0 - y)
        y += 1
        err += 1 + 2 * y
        if 2 * (err - x) + 1 > 0:
            x -= 1
            err += 1 - 2 * x

for i, row in enumerate(img):
    rad = ro - i
    drawcircle(int(ro - 1), int(ro - 1), rad)

shom_im(cir)
Can anybody suggest a way to eliminate the blank pixels?
You are having problems filling up your circle because you are approaching this from the wrong way – quite literally.
When mapping from a source to a target, you need to fill your target, and map each translated pixel from this into the source image. Then, there is no chance at all you miss a pixel, and, equally, you will never draw (nor lookup) a pixel more than once.
The following is a bit rough-and-ready, it only serves as a concept example. I first wrote some code to draw a filled circle, top to bottom. Then I added some more code to remove the center part (and added a variable Ri, for "inner radius"). This leads to a solid ring, where all pixels are only drawn once: top to bottom, left to right.
How exactly you draw the ring is not actually important! I used trig at first because I thought of re-using the angle bit, but it can be done with Pythagoras as well, and even with Bresenham's circle routine. All you need to keep in mind is that you iterate over the target rows and columns, not the source. This provides actual x,y coordinates that you can feed into the remapping procedure.
With the above done and working, I wrote the trig functions to translate from the coordinates I would put a pixel at into the original image. For this, I created a test image containing some text:
and a good thing that was, too, as in the first attempt I got the text twice (once left, once right) and mirrored – that needed a few minor tweaks. Also note the background grid. I added that to check if the 'top' and 'bottom' lines – the outermost and innermost circles – got drawn correctly.
Running my code with this image and Ro,Ri at 100 and 50, I get this result:
You can see that the trig functions make it start at the rightmost point, move clockwise, and have the top of the image pointing outwards. All can be trivially adjusted, but this way it mimics the orientation that you want your image drawn.
This is the result with your iris-image, using 33 for the inner radius:
and here is a nice animation, showing the stability of the mapping:
Finally, then, my code is:
import math as m
from PIL import Image
Ro = 100.0
Ri = 50.0
# img = [[1 for x in range(int(width))] for y in range(int(height))]
cir = [[0 for x in range(int(Ro * 2))] for y in range(int(Ro * 2))]
# image = Image.open('0vWEI.png')
image = Image.open('this-is-a-test.png')
# data = image.convert('RGB')
pixels = image.load()
width, height = image.size
def shom_im(img): # for showing data as image
list_image = [item for sublist in img for item in sublist]
new_image = Image.new("RGB", (len(img[0]), len(img)))
new_image.putdata(list_image)
new_image.save("result1.png","PNG")
new_image.show()
for i in range(int(Ro)):
# outer_radius = Ro*m.cos(m.asin(i/Ro))
outer_radius = m.sqrt(Ro*Ro - i*i)
for j in range(-int(outer_radius),int(outer_radius)):
if i < Ri:
# inner_radius = Ri*m.cos(m.asin(i/Ri))
inner_radius = m.sqrt(Ri*Ri - i*i)
else:
inner_radius = -1
if j < -inner_radius or j > inner_radius:
# this is the destination
# solid:
# cir[int(Ro-i)][int(Ro+j)] = (255,255,255)
# cir[int(Ro+i)][int(Ro+j)] = (255,255,255)
# textured:
x = Ro+j
y = Ro-i
# calculate source
angle = m.atan2(y-Ro,x-Ro)/2
distance = m.sqrt((y-Ro)*(y-Ro) + (x-Ro)*(x-Ro))
distance = m.floor((distance-Ri+1)*(height-1)/(Ro-Ri))
# if distance >= height:
# distance = height-1
cir[int(y)][int(x)] = pixels[int(width*angle/m.pi) % width, height-distance-1]
y = Ro+i
# calculate source
angle = m.atan2(y-Ro,x-Ro)/2
distance = m.sqrt((y-Ro)*(y-Ro) + (x-Ro)*(x-Ro))
distance = m.floor((distance-Ri+1)*(height-1)/(Ro-Ri))
# if distance >= height:
# distance = height-1
cir[int(y)][int(x)] = pixels[int(width*angle/m.pi) % width, height-distance-1]
shom_im(cir)
The commented-out lines draw a solid white ring. Note the various tweaks here and there to get the best result. For instance, the distance is measured from the center of the ring, and so returns a low value for close to the center and the largest values for the outside of the circle. Mapping that directly back onto the target image would display the text with its top "inwards", pointing to the inner hole. So I inverted this mapping with height - distance - 1, where the -1 is to make it map from 0 to height again.
A similar fix is in the calculation of distance itself; without the tweaks Ri+1 and height-1 either the innermost or the outermost row would not get drawn, indicating that the calculation is just one pixel off (which was exactly the purpose of that grid).
I think what you need is a noise filter. There are many implementations, of which I think a Gaussian filter would give a good result. You can find a list of filters here. If it gets blurred too much (see the sketch after this list):
keep your first calculated image
calculate filtered image
copy fixed pixels from filtered image to first calculated image
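A minimal sketch of those three steps, assuming the ring from the first code block is converted to a numpy array first (the array name and the kernel size are assumptions, not from the answer):

import cv2
import numpy as np

cir_arr = np.array(cir, dtype=np.uint8) * 255   # 0/1 ring values -> 8-bit image
blurred = cv2.GaussianBlur(cir_arr, (3, 3), 0)  # small kernel to limit blurring

# copy filtered values only into the missing (zero) pixels,
# keeping the originally calculated pixels untouched;
# a radius check like in the hand-written filter below
# would additionally keep the background black
fixed = cir_arr.copy()
fixed[cir_arr == 0] = blurred[cir_arr == 0]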
Here is a crude average filter written by hand:
cir_R = int(Ro * 2)                 # outer circle 2*r
inner_r = int(Ro - 0.5 - len(img))  # inner circle r

for i in range(1, cir_R - 1):
    for j in range(1, cir_R - 1):
        if cir[i][j] == 0:          # missing pixel
            dx = int(i - Ro)
            dy = int(j - Ro)
            pix_r2 = dx*dx + dy*dy  # squared distance to center
            if pix_r2 <= Ro*Ro and pix_r2 >= inner_r*inner_r:
                cir[i][j] = (cir[i-1][j] + cir[i+1][j] + cir[i][j-1] +
                             cir[i][j+1]) / 4

shom_im(cir)
and the result:
This basically scans between the two radii, checks for missing pixels, and replaces them with the average of the 4 pixels adjacent to them. In this black-and-white case, that is all white.
Hope it helps!
