I have two images:
Fragments of the painting
The whole painting
I need to solve two issues:
1st: On the first image, I need to remove the black outline from each fragment. I've tried thresholding and erosion, but neither worked. How can I do that?
2nd: I can't overlay the first image on the second, and I really don't know why. The result is always the first image covering the second completely, with black pixels where the second image should show through.
I'm using Python 3 and OpenCV 3.2 on Ubuntu 18.04.
My program:
from PIL import Image
from matplotlib import pyplot as plt
import numpy as np
import cv2
import sys
plano_f = cv2.imread("Domenichino_Virgin-and-unicorn.jpg")
sobrepor = cv2.imread("Domenichino_Virgin-and-unicorn_img.png")
plano_f = cv2.cvtColor(plano_f, cv2.COLOR_BGR2GRAY, -1)
#sobrepor_BGRA = cv2.cvtColor(sobrepor, cv2.COLOR_BGR2BGRA)
sobrepor_BGRA = cv2.imread("nova_png.png", -1)
plt.imshow(sobrepor_BGRA),plt.show()
rows, cols, han = sobrepor_BGRA.shape
total = rows*cols
#printProgressBar(0, total, prefix="Executando...", suffix="completo", length=50)
'''for i in range(rows):
    for j in range(cols):
        if(sobrepor_BGRA[i, j][0] <= 5 and sobrepor_BGRA[i, j][1] <= 5 and sobrepor_BGRA[i, j][2] <= 5 and sobrepor_BGRA[i, j][3] != 0):
            sobrepor_BGRA[i, j] = (0, 0, 0, 0)
        #printProgressBar(i*j, total, prefix='Executando...', suffix='completo', length=50)
    sys.stdout.write("\rExecutando linha " + str(i) + " de " + str(rows) + "...")
    sys.stdout.flush()
cv2.imwrite("nova_png.png", sobrepor_BGRA)'''
kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (3,3))
#sobrepor_BGRA = cv2.cvtColor(sobrepor_BGRA, cv2.COLOR_BGRA2GRAY, -1)
sobrepor_BGRA = cv2.erode(sobrepor_BGRA, kernel, iterations=3)
#sobrepor_BGRA = cv2.cvtColor(sobrepor_BGRA, cv2.COLOR_GRAY2BGRA)
cv2.imwrite("nova_png2.png", sobrepor_BGRA)
#sobrepor_RGBA = cv2.cvtColor(sobrepor_BGRA, cv2.COLOR_BGRA2RGBA)
#plt.imshow(sobrepor_RGBA),plt.show()
sys.stdout.write("\nPronto!")
nova_img = cv2.addWeighted(sobrepor_BGRA, 1, plano_f, 0, 0)
cv2.imwrite("combined.png", nova_img)
plt.imshow(nova_img),plt.show()
You can use bitwise operations to do this. The idea is to obtain a mask of the missing sections of the fragments, then bitwise-or the two sections together. Here are two halves of the image: one is the fragments you already have and the other is the missing sections.
We combine both halves to get the whole painting.
import cv2
import numpy as np

fragment = cv2.imread('1.jpg')
whole = cv2.imread('2.jpg')

# Zero out every pixel that is not near-white so the fragment image acts as a mask
fragment[np.where((fragment <= [250,250,250]).all(axis=2))] = [0]

# Keep the painting where the mask is set, and where it is not set
result1 = cv2.bitwise_and(whole, fragment)
result2 = cv2.bitwise_and(whole, 255 - fragment)

# Combine both halves to get the whole painting
final = result1 + result2

cv2.imshow('result1', result1)
cv2.imshow('result2', result2)
cv2.imshow('final', final)
cv2.waitKey()
1st - Your image is a JPEG, which means the black lines around the pieces are imperfect due to compression artifacts; a simple threshold or dilation isn't going to remove them perfectly. You can try saving in a lossless format and cleaning up by hand in an image editor; you may even want to do that after an erosion pass has removed most of the outline.
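If you'd rather script most of that cleanup, here is a minimal sketch of one way to do it (the file names and the darkness tolerance of 40 are placeholders, not your actual values):
import cv2
import numpy as np

img = cv2.imread('fragments.png', cv2.IMREAD_UNCHANGED)  # placeholder file name

# Treat "almost black" as outline, since JPEG compression smears pure black
# into dark greys; 40 is an assumed tolerance to tune per image.
outline = (img[:, :, :3] < 40).all(axis=2).astype(np.uint8) * 255

# Close small gaps so speckled artifact pixels join the outline mask.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
outline = cv2.morphologyEx(outline, cv2.MORPH_CLOSE, kernel)

# Punch the outline out as transparency (add an alpha plane if it is missing).
if img.shape[2] == 3:
    img = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
img[outline > 0] = (0, 0, 0, 0)
cv2.imwrite('fragments_clean.png', img)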
2nd - Why don't you just copy with a mask, using the copyTo function? Here is an example:
import cv2

img1 = cv2.imread('x2djw.jpg')
img2 = cv2.imread('5RnNh.jpg')
img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

# Mask of everything in the fragment image that is not near-white
thr, img1_mask = cv2.threshold(img1, 250, 255, cv2.THRESH_BINARY_INV)
img1_mask = img1_mask[:, :, 0] & img1_mask[:, :, 1] & img1_mask[:, :, 2]

# Shrink the mask slightly to eat into the dark outline
el = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
img1_mask = cv2.erode(img1_mask, el)

# Copy the fragments onto the grayscale painting (expanded back to 3 channels)
img2 = cv2.merge((img2, img2, img2))
img2 = cv2.copyTo(img1, img1_mask, img2)

cv2.imwrite('test_result.png', img2)
I want to detect the circlip in the fixture; if the circlip is not present, the program should print the message "circlip not present".
Binarization applied to the saturation component gives interesting results (compare the original image with the binarized saturation channel). But the circlip needs to remain tinted.
The solution provided by @YvesDaoust gives good insight into solving the problem.
Thresholding on the saturation channel as suggested by @YvesDaoust, followed by morphological closing and largest-connected-component extraction, solves this specific problem.
Note that this solution is not general for all illumination conditions, resolutions, rotation, color, etc...
But it might work for similar conditions.
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import cv2
import numpy as np
img = cv2.imread("input2.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
s = hsv[...,1]
th = 100
s[s<th]=0
op = cv2.MORPH_CLOSE
morph_elem = cv2.MORPH_ELLIPSE
morph_size = 5
element = cv2.getStructuringElement(morph_elem, (2*morph_size + 1, 2*morph_size+1), (morph_size, morph_size))
mph = cv2.morphologyEx(s, op, element)
# Reference: https://stackoverflow.com/a/47057324
def lcc(image):
    image = image.astype('uint8')
    nb_components, output, stats, centroids = cv2.connectedComponentsWithStats(image, connectivity=4)
    sizes = stats[:, -1]
    max_label = 1
    max_size = sizes[1]
    for i in range(2, nb_components):
        if sizes[i] > max_size:
            max_label = i
            max_size = sizes[i]
    img2 = np.zeros(output.shape)
    img2[output == max_label] = 255
    img2 = img2.astype(np.uint8)
    return img2
mask = lcc(mph)
thresh = 20000000
if np.sum(mask) < thresh:
    print("circlip not present")
    res = img
else:
    res = cv2.bitwise_and(img, img, mask=mask)
cv2.namedWindow("img", cv2.WINDOW_NORMAL)
cv2.imshow("img", res)
cv2.waitKey(0)
I want to get rid of the skeletonized lines, keeping only the contours, using Python.
And I want to extract only the largest contour.
(Actually, I made a skeletonized line from the segmented mask and got the main stem with a contour like the picture above. Among the contours, I want to extract only the one with the largest area.)
I don't know how to do it.
Please help me if you have any idea.
Thanks in advance.
import os
import numpy as np
import cv2
from plantcv.plantcv import find_objects
from plantcv.plantcv import image_subtract
from plantcv.plantcv.morphology import segment_sort
from plantcv.plantcv.morphology import segment_skeleton
from plantcv.plantcv.morphology import _iterative_prune
from plantcv.plantcv import print_image
from plantcv.plantcv import plot_image
from plantcv.plantcv import params
from cv2.ximgproc import thinning
def find_large_contour(img, mask):
    params.device += 1
    mask1 = np.copy(mask)
    ori_img = np.copy(img)
    # If the reference image is grayscale convert it to color
    if len(np.shape(ori_img)) == 2:
        ori_img = cv2.cvtColor(ori_img, cv2.COLOR_GRAY2BGR)
    objects, hierarchy = cv2.findContours(mask1, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)[-2:]
    for i, cnt in enumerate(objects):
        cv2.drawContours(ori_img, objects, i, (255, 102, 255), -1, lineType=8, hierarchy=hierarchy)
    if params.debug == 'print':
        print_image(ori_img, os.path.join(params.debug_outdir, str(params.device) + '_id_objects.png'))
    elif params.debug == 'plot':
        plot_image(ori_img)
    return objects, hierarchy, ori_img
def prune(skel_img, size=2, mask=None):
    # Store debug
    debug = params.debug
    params.debug = None
    pruned_img = skel_img.copy()
    # Check to see if the skeleton has multiple objects
    skel_objects, _ = find_objects(skel_img, skel_img)
    _, objects = segment_skeleton(skel_img)
    kept_segments = []
    removed_segments = []
    if size > 0:
        # If size > 0 then check for segments that are smaller than size pixels long
        # Sort through segments since we don't want to remove primary segments
        secondary_objects, primary_objects = segment_sort(skel_img, objects)
        # Keep segments longer than specified size
        for i in range(0, len(secondary_objects)):
            if len(secondary_objects[i]) > size:
                kept_segments.append(secondary_objects[i])
            else:
                removed_segments.append(secondary_objects[i])
        # Draw the contours that got removed
        removed_barbs = np.zeros(skel_img.shape[:2], np.uint8)
        cv2.drawContours(removed_barbs, removed_segments, -1, 255, 1, lineType=8)
        # Subtract all short segments from the skeleton image
        pruned_img = image_subtract(pruned_img, removed_barbs)
        pruned_contour_img = image_subtract(pruned_img, removed_barbs)
        pruned_img = _iterative_prune(pruned_img, 1)
    # Reset debug mode
    params.debug = debug
    # Make debugging image
    if mask is None:
        pruned_plot = np.zeros(skel_img.shape[:2], np.uint8)
    else:
        pruned_plot = mask.copy()
    pruned_plot = cv2.cvtColor(pruned_plot, cv2.COLOR_GRAY2RGB)
    pruned_obj, pruned_hierarchy, large_contour = find_large_contour(pruned_img, pruned_img)
    cv2.drawContours(pruned_plot, removed_segments, -1, (0, 0, 255), params.line_thickness, lineType=8)
    cv2.drawContours(pruned_plot, pruned_obj, -1, (150, 150, 150), params.line_thickness, lineType=8)
    # Auto-increment device
    params.device += 1
    if params.debug == 'print':
        print_image(pruned_img, os.path.join(params.debug_outdir, str(params.device) + '_pruned.png'))
        print_image(pruned_plot, os.path.join(params.debug_outdir, str(params.device) + '_pruned_debug.png'))
    elif params.debug == 'plot':
        plot_image(pruned_img, cmap='gray')
        plot_image(pruned_plot)
    # Segment the pruned skeleton
    segmented_img, segment_objects = segment_skeleton(pruned_img, mask)
    return pruned_img, segmented_img, segment_objects, large_contour
vseg = cv2.imread("vseg.png", cv2.IMREAD_GRAYSCALE)
gray = thinning(vseg, thinningType=cv2.ximgproc.THINNING_GUOHALL)
pruned, seg_img, edge_objects, large_contour = prune(skel_img=gray, size=3, mask=vseg)
img_cont_gray = cv2.cvtColor(large_contour, cv2.COLOR_BGR2GRAY)
ret_cont, thresh_cont = cv2.threshold(img_cont_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("first_cont111.png", thresh_cont)
## then I want to extract only the contour with the largest area
Use morphology as the first step:
ret, thresh = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
rect = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, rect)
Then find the connected component of maximal area:
max_component = np.full(opening.shape, 0, np.uint8)
nb_components, labels, stats, centroids = cv2.connectedComponentsWithStats(opening, 8)
max_component[labels == np.argmax(stats[1:, -1]) + 1] = 255
Using skimage (if you don't have it: conda install scikit-image):
import numpy as np
import scipy.ndimage as ndi
from skimage.morphology import binary_erosion, binary_dilation
from skimage.measure import regionprops
from skimage import io
#
img = io.imread("first_cont111.png") > 0 # open image and ensure 0,1 data
# get rid of 1-pixel lines
img = binary_erosion(img)
img = binary_dilation(img)
# find individual objects and give them unique labels
label_img, _ = ndi.label(img)
props = regionprops(label_img)
# find the label that corresponds to the object with maximum area:
objects = sorted([(p.label, p.area) for p in props], key=lambda x: x[1], reverse=True)
obj = objects[0][0]
# make an image of the same size as the input image:
output_img = np.zeros_like(img)
# and use fancy indexing to copy the largest object
output_img[label_img == obj] = 1
# now make the contour by subtracting the eroded shape
# (XOR rather than "-", since the arrays are boolean)
output_img = output_img ^ binary_erosion(output_img)
Edit: Quick summary so far: I use the watershed algorithm, but I probably have a problem with the threshold: it doesn't detect the brighter circles.
New: a fast radial symmetry transform approach, which didn't quite work either (Edit 6).
I want to detect circles with different sizes. The use case is to detect coins in an image and to extract them individually, i.e. get each single coin as a single image file.
For this I used the Hough Circle Transform of OpenCV:
(https://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.html)
import sys
import cv2 as cv
import numpy as np
def main(argv):
    ## [load]
    default_file = "data/newcommon_1euro.jpg"
    filename = argv[0] if len(argv) > 0 else default_file
    # Loads an image
    src = cv.imread(filename, cv.IMREAD_COLOR)
    # Check if image is loaded fine
    if src is None:
        print('Error opening image!')
        print('Usage: hough_circle.py [image_name -- default ' + default_file + '] \n')
        return -1
    ## [load]
    ## [convert_to_gray]
    # Convert it to gray
    gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY)
    ## [convert_to_gray]
    ## [reduce_noise]
    # Reduce the noise to avoid false circle detection
    gray = cv.medianBlur(gray, 5)
    ## [reduce_noise]
    ## [houghcircles]
    rows = gray.shape[0]
    circles = cv.HoughCircles(gray, cv.HOUGH_GRADIENT, 1, rows / 8,
                              param1=100, param2=30,
                              minRadius=0, maxRadius=120)
    ## [houghcircles]
    ## [draw]
    if circles is not None:
        circles = np.uint16(np.around(circles))
        for i in circles[0, :]:
            center = (i[0], i[1])
            # circle center
            cv.circle(src, center, 1, (0, 100, 100), 3)
            # circle outline
            radius = i[2]
            cv.circle(src, center, radius, (255, 0, 255), 3)
    ## [draw]
    ## [display]
    cv.imshow("detected circles", src)
    cv.waitKey(0)
    ## [display]
    return 0

if __name__ == "__main__":
    main(sys.argv[1:])
I tried all the parameters (rows, param1, param2, minRadius, and maxRadius) to optimize the results. This worked very well for one specific image, but other images with differently sized coins didn't work.
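A rough sketch of sweeping one parameter instead of tuning it by hand (the candidate values here are illustrative):
import cv2 as cv

# Illustrative sweep: count detections for several accumulator thresholds.
src = cv.imread('data/newcommon_1euro.jpg')
gray = cv.medianBlur(cv.cvtColor(src, cv.COLOR_BGR2GRAY), 5)
for param2 in (20, 30, 40, 50):
    circles = cv.HoughCircles(gray, cv.HOUGH_GRADIENT, 1, gray.shape[0] / 8,
                              param1=100, param2=param2,
                              minRadius=0, maxRadius=120)
    found = 0 if circles is None else circles.shape[1]
    print('param2=%d: %d circles' % (param2, found))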
Examples, with these parameters:
circles = cv.HoughCircles(gray, cv.HOUGH_GRADIENT, 1, rows / 16,
param1=100, param2=30,
minRadius=0, maxRadius=120)
With the same parameters on another image the result was off, and changing the minimum distance to rows / 8 didn't help either.
I also tried two other approaches from this thread: writing robust (color and size invariant) circle detection with opencv (based on Hough transform or other features).
The approach of fireant leads to this result:
The approach of fraxel didn't work either.
For the first approach: this happens with all the different sizes and also with the min and max radius.
How could I change the code so that the coin size doesn't matter, or so that it finds the parameters itself?
Thank you in advance for any help!
Edit:
I tried the watershed algorithm of OpenCV, as suggested by Alexander Reynolds: https://docs.opencv.org/3.4/d3/db4/tutorial_py_watershed.html
import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt
img = cv.imread('data/P1190263.jpg')
gray = cv.cvtColor(img,cv.COLOR_BGR2GRAY)
ret, thresh = cv.threshold(gray,0,255,cv.THRESH_BINARY_INV+cv.THRESH_OTSU)
# noise removal
kernel = np.ones((3,3),np.uint8)
opening = cv.morphologyEx(thresh,cv.MORPH_OPEN,kernel, iterations = 2)
# sure background area
sure_bg = cv.dilate(opening,kernel,iterations=3)
# Finding sure foreground area
dist_transform = cv.distanceTransform(opening,cv.DIST_L2,5)
ret, sure_fg = cv.threshold(dist_transform,0.7*dist_transform.max(),255,0)
# Finding unknown region
sure_fg = np.uint8(sure_fg)
unknown = cv.subtract(sure_bg,sure_fg)
# Marker labelling
ret, markers = cv.connectedComponents(sure_fg)
# Add one to all labels so that sure background is not 0, but 1
markers = markers+1
# Now, mark the region of unknown with zero
markers[unknown==255] = 0
markers = cv.watershed(img,markers)
img[markers == -1] = [255,0,0]
#Display:
cv.imshow("detected circles", img)
cv.waitKey(0)
It works very well on the test image from the OpenCV website:
But it performs very badly on my own images:
I can't really think of a good reason why it's not working on my images.
Edit 2:
As suggested, I looked at the intermediate images (thresh, opening, dist_transform, sure_bg, and sure_fg). The thresh does not look good in my opinion; next, there is no visible difference between opening and dist_transform; and the corresponding sure_fg shows the detected regions.
Edit 3:
I tried all the distanceTypes and maskSizes I could find, but the results were much the same (https://www.tutorialspoint.com/opencv/opencv_distance_transformation.htm).
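A sketch of that sweep, assuming the opening image from the watershed code above:
# Illustrative loop over the distance-transform variants mentioned above
for dist_type in (cv.DIST_L1, cv.DIST_L2, cv.DIST_C):
    for mask_size in (3, 5):
        dist_transform = cv.distanceTransform(opening, dist_type, mask_size)
        print(dist_type, mask_size, dist_transform.max())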
Edit 4:
Furthermore, I tried changing the (first) threshold function, using fixed threshold values instead of the Otsu function. The best was 160, but it was far from good:
In the tutorial it looks like this:
It seems like the coins are somehow too bright to be detected by this algorithm, but I don't know how to improve that.
Edit 5:
Changing the overall contrast and brightness of the image (with cv.convertScaleAbs) didn't improve the results. Increasing the contrast should increase the "difference" between foreground and background, at least on the normal image, but it got even worse. The corresponding threshold image didn't improve either (no additional white pixels).
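For reference, the adjustment was along these lines (the alpha and beta values here are illustrative, not the exact ones tried):
# alpha > 1 raises contrast, beta shifts brightness; values are illustrative
adjusted = cv.convertScaleAbs(img, alpha=1.5, beta=30)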
Edit 6: I tried another approach, the fast radial symmetry transform (from https://github.com/ceilab/frst_python):
import cv2
import numpy as np
def gradx(img):
    img = img.astype('int')
    rows, cols = img.shape
    # Use hstack to add back in the columns that were dropped as zeros
    return np.hstack((np.zeros((rows, 1)), (img[:, 2:] - img[:, :-2]) / 2.0, np.zeros((rows, 1))))

def grady(img):
    img = img.astype('int')
    rows, cols = img.shape
    # Use vstack to add back the rows that were dropped as zeros
    return np.vstack((np.zeros((1, cols)), (img[2:, :] - img[:-2, :]) / 2.0, np.zeros((1, cols))))
# Performs fast radial symmetry transform
# img: input image, grayscale
# radii: integer value for radius size in pixels (n in the original paper); also used to size gaussian kernel
# alpha: Strictness of symmetry transform (higher=more strict; 2 is good place to start)
# beta: gradient threshold parameter, float in [0,1]
# stdFactor: Standard deviation factor for gaussian kernel
# mode: BRIGHT, DARK, or BOTH
def frst(img, radii, alpha, beta, stdFactor, mode='BOTH'):
    mode = mode.upper()
    assert mode in ['BRIGHT', 'DARK', 'BOTH']
    dark = (mode == 'DARK' or mode == 'BOTH')
    bright = (mode == 'BRIGHT' or mode == 'BOTH')
    workingDims = tuple((e + 2 * radii) for e in img.shape)
    # Set up output and M and O working matrices
    output = np.zeros(img.shape, np.uint8)
    O_n = np.zeros(workingDims, np.int16)
    M_n = np.zeros(workingDims, np.int16)
    # Calculate gradients
    gx = gradx(img)
    gy = grady(img)
    # Find gradient vector magnitude
    gnorms = np.sqrt(np.add(np.multiply(gx, gx), np.multiply(gy, gy)))
    # Use beta to set threshold - speeds up transform significantly
    gthresh = np.amax(gnorms) * beta
    # Find x/y distance to affected pixels
    gpx = np.multiply(np.divide(gx, gnorms, out=np.zeros(gx.shape), where=gnorms != 0), radii).round().astype(int)
    gpy = np.multiply(np.divide(gy, gnorms, out=np.zeros(gy.shape), where=gnorms != 0), radii).round().astype(int)
    # Iterate over all pixels (w/ gradient above threshold)
    for coords, gnorm in np.ndenumerate(gnorms):
        if gnorm > gthresh:
            i, j = coords
            # Positively affected pixel
            if bright:
                ppve = (i + gpx[i, j], j + gpy[i, j])
                O_n[ppve] += 1
                M_n[ppve] += gnorm
            # Negatively affected pixel
            if dark:
                pnve = (i - gpx[i, j], j - gpy[i, j])
                O_n[pnve] -= 1
                M_n[pnve] -= gnorm
    # Abs and normalize O matrix
    O_n = np.abs(O_n)
    O_n = O_n / float(np.amax(O_n))
    # Normalize M matrix
    M_max = float(np.amax(np.abs(M_n)))
    M_n = M_n / M_max
    # Elementwise multiplication
    F_n = np.multiply(np.power(O_n, alpha), M_n)
    # Gaussian blur
    kSize = int(np.ceil(radii / 2))
    kSize = kSize + 1 if kSize % 2 == 0 else kSize
    S = cv2.GaussianBlur(F_n, (kSize, kSize), int(radii * stdFactor))
    return S
img = cv2.imread('data/P1190263.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
result = frst(gray, 60, 2, 0, 1, mode='BOTH')
cv2.imshow("detected circles", result)
cv2.waitKey(0)
I only get a nearly black output (with some very dark grey shadows). I don't know what to change and would be thankful for help!
I just want to flip an image vertically without cv2.flip(), but the output is a completely black image. Where is my mistake?
import cv2
import numpy as np
def flipv(imgg):
    for i in range(480):
        img2 = np.zeros([480, 640, 3], np.uint8)
        img2[i, :] = imgg[480 - i - 1, :]
    return img2
img = cv2.imread("foto\\test.jpg", 1)
ads= flipv(img)
cv2.imshow("qw",ads)
cv2.waitKey(0)
cv2.destroyAllWindows()
Simple function to flip images using OpenCV:
def flip(img, axes):
    if axes == 0:
        # vertical flip (around the x-axis)
        return cv2.flip(img, 0)
    elif axes == 1:
        # horizontal flip (around the y-axis)
        return cv2.flip(img, 1)
    elif axes == -1:
        # flip in both directions
        return cv2.flip(img, -1)

bflp = flip(img, -1)
plt.imshow(bflp)
You can flip with a negative-step slice, as shown below. It is a lot faster:
cv2.imshow("flipped image", im[::-1])
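For completeness, a short sketch of the slicing idea along both axes (the file name is a placeholder):
import cv2

img = cv2.imread('test.jpg')    # placeholder file name
flip_v = img[::-1]              # reverse the rows -> vertical flip
flip_h = img[:, ::-1]           # reverse the columns -> horizontal flip
flip_both = img[::-1, ::-1]     # reverse both axes

cv2.imshow('flipped image', flip_v)
cv2.waitKey(0)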
Your flipping code is correct (it depends on how you want to perform your flip, but anyway), but you should create img2 outside of your for loop, only once:
def flipv(imgg):
    img2 = np.zeros([480, 640, 3], np.uint8)
    for i in range(480):
        img2[i, :] = imgg[480 - i - 1, :]
    return img2
Using numpy.flip:
flip_v = np.flip(img,0)
flip_h = np.flip(img,1)
See also: numpy.flip
I'm trying to split an image into several sub-images with OpenCV by identifying templates of the original image and then copying the regions where I matched those templates. I'm a TOTAL newbie to OpenCV! I've identified the sub-images using:
result = cv2.matchTemplate(img, template, cv2.TM_CCORR_NORMED)
After some cleanup I get a list of tuples called points, over which I iterate to show the rectangles. tw and th are the template width and height respectively.
for pt in points:
    re = cv2.rectangle(img, pt, (pt[0] + tw, pt[1] + th), 0, 2)
    print('%s, %s' % (str(pt[0]), str(pt[1])))
    count += 1
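The cleanup itself is roughly a thresholding of the match map; a sketch of what it might look like (the 0.8 cutoff is an assumption):
import numpy as np

threshold = 0.8  # assumed match-quality cutoff
loc = np.where(result >= threshold)
points = list(zip(*loc[::-1]))  # (x, y) tuples of match top-left corners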
What I would like to accomplish is to save the octagons (https://dl.dropbox.com/u/239592/region01.png) into separate files.
How can I do this? I've read something about contours, but I'm not sure how to use them. Ideally I would like to contour the octagons.
Thanks a lot for your help!
If template matching is working for you, stick to it. For instance, I considered the following template:
Then, we can pre-process the input in order to make it binary and discard small components. After this step, the template matching is performed. Then it is a matter of filtering the matches by discarding ones that are too close together (I've used a naive method for that, so if there are too many matches you could see it taking some time). After we decide which points are far apart (and thus identify different hexagons), we can do minor adjustments to them in the following manner:
Sort by y-coordinate;
If two adjacent items start at a y-coordinate that is too close, then set them both to the same y-coord.
Now you can sort this point list in an appropriate order such that the crops are done in raster order. The cropping part is easily achieved using slicing provided by numpy.
import sys
import cv2
import numpy
outbasename = 'hexagon_%02d.png'
img = cv2.imread(sys.argv[1])
template = cv2.cvtColor(cv2.imread(sys.argv[2]), cv2.COLOR_BGR2GRAY)
theight, twidth = template.shape[:2]
# Binarize the input based on the saturation and value.
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
saturation = hsv[:,:,1]
value = hsv[:,:,2]
value[saturation > 35] = 255
value = cv2.threshold(value, 0, 255, cv2.THRESH_OTSU)[1]
# Pad the image.
value = cv2.copyMakeBorder(255 - value, 3, 3, 3, 3, cv2.BORDER_CONSTANT, value=0)
# Discard small components.
img_clean = numpy.zeros(value.shape, dtype=numpy.uint8)
contours, _ = cv2.findContours(value, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for i, c in enumerate(contours):
    area = cv2.contourArea(c)
    if area > 500:
        cv2.drawContours(img_clean, contours, i, 255, 2)

def closest_pt(a, pt):
    if not len(a):
        return (float('inf'), float('inf'))
    d = a - pt
    return a[numpy.argmin((d * d).sum(1))]
match = cv2.matchTemplate(img_clean, template, cv2.TM_CCORR_NORMED)
# Filter matches.
threshold = 0.8
dist_threshold = twidth / 1.5
loc = numpy.where(match > threshold)
ptlist = numpy.zeros((len(loc[0]), 2), dtype=int)
count = 0
print "%d matches" % len(loc[0])
for pt in zip(*loc[::-1]):
cpt = closest_pt(ptlist[:count], pt)
dist = ((cpt[0] - pt[0]) ** 2 + (cpt[1] - pt[1]) ** 2) ** 0.5
if dist > dist_threshold:
ptlist[count] = pt
count += 1
# Adjust points (could do for the x coords too).
ptlist = ptlist[:count]
view = ptlist.ravel().view([('x', int), ('y', int)])
view.sort(order=['y', 'x'])
for i in range(1, ptlist.shape[0]):
    prev, curr = ptlist[i - 1], ptlist[i]
    if abs(curr[1] - prev[1]) < 5:
        y = min(curr[1], prev[1])
        curr[1], prev[1] = y, y
# Crop in raster order.
view.sort(order=['y', 'x'])
for i, pt in enumerate(ptlist, start=1):
    cv2.imwrite(outbasename % i,
                img[pt[1]-2:pt[1]+theight-2, pt[0]-2:pt[0]+twidth-2])
    print('Wrote %s' % (outbasename % i))
If you want only the contours of the hexagons, then crop on img_clean instead of img (but then it is pointless to sort the hexagons in raster order).
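That is, the crop line in the final loop would simply index img_clean instead:
# same indices as above, but cropping the cleaned binary image instead
cv2.imwrite(outbasename % i,
            img_clean[pt[1]-2:pt[1]+theight-2, pt[0]-2:pt[0]+twidth-2])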
Here is a representation of the different regions that would be cut for your two examples without modifying the code above:
I am sorry, I didn't understand from your question how you relate matchTemplate and contours.
Anyway, below is a small technique using contours. It is based on the assumption that your other images are like the one you provided. I am not sure it works with your other images, but I think it will help you get started. Try it yourself and make the necessary adjustments and modifications.
What I did:
1 - I needed the edges of the octagons, so I thresholded the image using Otsu and applied dilation and erosion (or use any method you like that works well for all your images; beware of the edges at the left border of the image).
2 - Then found contours (more about contours: http://goo.gl/r0ID0).
3 - For each contour, found its convex hull and the hull's area (A) and perimeter (P).
4 - For a perfect octagon, P*P/A is approximately 13.25 (see the quick check after this list). I used that to select the octagons, cropped them, and saved them.
5 - You can see that cropping also removes some edges of the octagons; if you want them, adjust the cropping dimensions.
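As a quick check on that constant: a regular octagon with side s has P = 8s and A = 2(1 + sqrt(2))s^2, so P*P/A = 64 / (2(1 + sqrt(2))) ≈ 13.25, independent of s:
import math

ratio = 64 / (2 * (1 + math.sqrt(2)))
print(round(ratio, 2))  # 13.25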
Code:
import cv2
import numpy as np
img = cv2.imread('region01.png')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
thresh = cv2.dilate(thresh,None,iterations = 2)
thresh = cv2.erode(thresh,None)
contours,hierarchy = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
number = 0
for cnt in contours:
    hull = cv2.convexHull(cnt)
    area = cv2.contourArea(hull)
    P = cv2.arcLength(hull, True)
    if (area != 0) and (13 <= P**2/area <= 14):
        #cv2.drawContours(img,[hull],0,255,3)
        x, y, w, h = cv2.boundingRect(hull)
        number = number + 1
        roi = img[y:y+h, x:x+w]
        cv2.imshow(str(number), roi)
        cv2.imwrite("1"+str(number)+".jpg", roi)
cv2.imshow('img',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Those 6 octagons will be stored as separate files.
Hope it helps!