I am defining a problem: I have two pictures, e.g. two photos each showing a 1€ coin.
How can I compare the two images to conclude "yes, they both contain a 1€ coin"? Of course, the test should return false if the second picture contains a 2€ coin instead.
I tried the OpenCV methods, but none of them is precise enough.
Also, an ML approach has to handle the issue of recognising two objects in two images without any previous exposure to them.
EDIT: I realise the question is a bit too vague, so I am trying to restate it here.
Given two images, how do I write a boolean function are_the_same(img1, img2) returning True if both images contain the same object?
Here is what I have tried so far:
SIFT: find keypoints in both images; if enough of them match, conclude that the images contain the same object. (A minimal sketch of this route follows below.)
Siamese CNN: train a network to encode pictures of the same object to nearby points in an embedding space, and pictures of different objects to points that are far apart.
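For illustration, here is a minimal sketch of the SIFT route, assuming a recent OpenCV build where cv2.SIFT_create is available; the file paths, min_matches, and the 0.75 ratio are placeholders that would need tuning:

import cv2

def are_the_same(img1_path, img2_path, min_matches=10, ratio=0.75):
    # hypothetical sketch: paths and thresholds are placeholders
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return False
    # Lowe's ratio test: keep a match only if it is clearly better than the runner-up
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
    return len(good) >= min_matches

Note that a raw match count cannot reliably tell a €1 coin from a €2 coin, which is exactly the precision problem described above.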
It depends a lot on what types of images you have, but if they are clear top-down images, you can use the goldish band/center to distinguish between them.
First, a mask is made based on the goldish color. (You'll probably have to make the color range more specific - I had an easy image. I used this convenient script to determine the color range.) Next, some noise is removed and then contours are detected. Contours that have no child or parent contour are the solid center of a €2 coin. Contours with a child but no parent are the band of a €1 coin. Contours with a parent but no child are the center of a €1 coin and are ignored.
A €2 coin gets drawn red, a €1 coin blue.
import cv2
import numpy as np

# load image
img = cv2.imread("E1E2.jpg")

# convert to HSV
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# define range of wanted color in HSV
lower_val = np.array([0, 25, 0])
upper_val = np.array([179, 255, 255])

# threshold the HSV image to get only goldish colors
mask = cv2.inRange(hsv, lower_val, upper_val)

# remove noise
kernel = np.ones((5, 5))
mask_open = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask_close = cv2.morphologyEx(mask_open, cv2.MORPH_CLOSE, kernel)

# find contours
contours, hier = cv2.findContours(mask_close, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)

# loop through contours, check the hierarchy, draw contours
# (hierarchy entries are [next, previous, first_child, parent])
for i, cnt in enumerate(contours):
    (nxt, prev, child, parent) = hier[0][i]
    if child == -1 and parent == -1:
        # solid center: €2
        cv2.drawContours(img, [cnt], 0, (0, 0, 255), 3)
    if child != -1 and parent == -1:
        # band around a hole: €1
        cv2.drawContours(img, [cnt], 0, (255, 0, 0), 3)

# display image
cv2.imshow("Res", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Related
Problem:
I'm working with a dataset that contains many images that look something like this:
Now I need all these images to be oriented horizontally or vertically, such that the color palette is either at the bottom or the right side of the image. This can be done by simply rotating the image, but the tricky part is figuring out which images should be rotated and which shouldn't.
What I have tried:
I thought that the best way to do this is by detecting the white line that separates the color palette from the image. I decided to rotate all images that have the palette at the bottom so that they have it at the right side.
# yes, I am mixing PIL and OpenCV (I like the PIL resizing more)
import PIL.Image
import cv2
import numpy as np

# img is a PIL image loaded earlier, e.g. (placeholder path):
# img = PIL.Image.open('palette.jpg')

# resize image to be 128 by 128 pixels
img = img.resize((128, 128), PIL.Image.BILINEAR)
img = np.array(img)

# perform edge detection; not sure if these are the best parameters for Canny
edges = cv2.Canny(img, 30, 50, apertureSize=3)

has_line = False
# take a numpy slice of the area where the white line usually is
# (not always exactly in the same spot, which probably has to do with the way I resize my image)
for line in edges[75:80]:
    # check if most of this row consists of white pixels
    counts = np.bincount(line)
    if np.argmax(counts) == 255:
        has_line = True

# rotate if we found such a line
if has_line:
    img = np.rot90(img)
An example of it working correctly:
An example of it working incorrectly:
This works on maybe 98% of images, but there are some cases where it rotates images that shouldn't be rotated or fails to rotate images that should be. Maybe there is an easier way to do this, or a more elaborate way that is more consistent? I could do it manually, but I'm dealing with a lot of images. Thanks for any help and/or comments.
Here are some images where my code fails for testing purposes:
You can start by thresholding your image with a very high threshold like 250, to take advantage of the property that your lines are white. This makes the whole background black. Now create a special horizontal kernel with a shape like (1, 15) and erode your image with it. This removes the vertical lines from the image, so that only the horizontal lines are left.
import cv2
import numpy as np

img = cv2.imread('horizontal2.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# keep only the near-white pixels (the separating lines are white)
_, thresh = cv2.threshold(gray, 250, 255, cv2.THRESH_BINARY)

# erode with a wide horizontal kernel: vertical lines disappear, horizontal lines survive
kernel_hor = np.ones((1, 15), dtype=np.uint8)
erode = cv2.erode(thresh, kernel_hor)
As stated in the question, the color palettes can only be on the right or at the bottom, so we can check how many contours the right region has. For this, just divide the image in half and take the right part. Before finding contours, dilate the result with a normal (3, 3) kernel to fill in any gaps. Using cv2.RETR_EXTERNAL, find the contours and count them; if the count is greater than a certain number, the image is right side up and there is no need to rotate.
# take the right half of the eroded image
right = erode[:, erode.shape[1]//2:]

# dilate with a normal (3, 3) kernel to fill in any gaps
kernel = np.ones((3, 3), dtype=np.uint8)
right = cv2.dilate(right, kernel)

# count the external contours in the right half
cnts, _ = cv2.findContours(right, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if len(cnts) > 3:
    print('No need to rotate')
else:
    print('rotate')
    # ADD YOUR ROTATE CODE HERE
P.S. I tested on all four images you provided and it worked well. If it does not work for some image, let me know.
Hi, I need to write a program that removes demarcation marks from a grayscale image (an image with text in it).
I have read about thresholding and blurring, but I still don't see how to do it.
My image is an image of Hebrew text, like this:
I need to remove the demarcation marks (assuming they are the smallest elements in the image); the output needs to be something like this:
I want to write the code in Python using OpenCV. What topics do I need to learn to be able to do that, and how?
Thank you.
Edit:
I can only use cv2 functions.
The symbols you want to remove are significantly smaller than all other shapes; you can use that to determine which ones to remove.
First, use threshold to convert the image to binary. Next, you can use findContours to detect the shapes and then contourArea to determine if a shape is larger than a threshold.
Finally, you can create a mask to remove the unwanted shapes, draw the larger symbols on a new image, or draw the smaller symbols in white over the original symbols in the original image - making them disappear. I used that last technique in the code below.
Result:
Code:
import cv2

# load image as grayscale
img = cv2.imread('1MioS.png', 0)

# convert to binary; inverted, so you get white symbols on a black background
_, thres = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY_INV)

# find contours in the thresholded image (this gives all symbols)
contours, hierarchy = cv2.findContours(thres, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# loop through the contours; if the area of a contour is below a threshold,
# draw a white shape over it in the input image
for cnt in contours:
    if cv2.contourArea(cnt) < 250:
        cv2.drawContours(img, [cnt], 0, (255), -1)

# display result
cv2.imshow('res', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Update
To find the largest contour, you can loop through them and keep track of the largest value:
maxArea = 0
for cnt in contours:
    currArea = cv2.contourArea(cnt)
    if currArea > maxArea:
        maxArea = currArea
print(maxArea)
I also whipped up a slightly more complex version that creates a sorted list of the indexes and sizes of the contours. It then looks for the largest relative difference in size between consecutive contours, so you know which contours are 'small' and which are 'large'. I do not know if this works for all letters / fonts.
# create a list of the indexes of the contours and their sizes
contour_sizes = []
for index, cnt in enumerate(contours):
    contour_sizes.append([index, cv2.contourArea(cnt)])

# sort the list based on the contour size;
# this changes the order of the elements in the list
contour_sizes.sort(key=lambda x: x[1])

# loop through the list and determine the largest relative size difference
indexOfMaxDifference = 0
currentMaxDifference = 0
for i in range(1, len(contour_sizes)):
    sizeDifference = contour_sizes[i][1] / contour_sizes[i-1][1]
    if sizeDifference > currentMaxDifference:
        currentMaxDifference = sizeDifference
        indexOfMaxDifference = i

# loop through the list again, ending (or starting) at indexOfMaxDifference, to draw the contours
for i in range(0, indexOfMaxDifference):
    cv2.drawContours(img, contours, contour_sizes[i][0], (255), -1)
To get the background color, you can use minMaxLoc. This returns the lowest color value of an image and its position (also the max value, but you don't need that). If you apply it to the thresholded image - where the background is black - it will return the location of a background pixel (odds are it will be (0, 0)). You can then look up this pixel in the original color image.
# get the location of a pixel with the background color
min_val, _, min_loc, _ = cv2.minMaxLoc(thres)

# load color image
img_color = cv2.imread('1MioS.png')

# get the BGR values of the background
# (minMaxLoc returns an (x, y) point, while numpy indexes as [row, col], so swap)
b, g, r = img_color[min_loc[1], min_loc[0]]

# convert from numpy types
background_color = (int(b), int(g), int(r))
and then to draw the contours
cv2.drawContours(img_color,contours,contour_sizes[i][0],background_color,-1)
and of course
cv2.imshow('res', img_color)
This looks like a problem for template matching since you have what looks like a known font and can easily understand what the characters and/or demarcations are. Check out https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_template_matching/py_template_matching.html
Admittedly, the tutorial talks about finding the match; modification is up to you. In that case, you know the exact shape of the template itself, so using that information along with the location of the match, just overwrite the image data with the appropriate background color (based on the examples above, 255).
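For what it's worth, here is a minimal sketch of that route; the template file, the 0.8 similarity threshold, and the white background value are all assumptions:

import cv2
import numpy as np

# hypothetical sketch: 'template.png' would be one cropped demarcation mark
img = cv2.imread('1MioS.png', cv2.IMREAD_GRAYSCALE)
template = cv2.imread('template.png', cv2.IMREAD_GRAYSCALE)
h, w = template.shape

# normalised cross-correlation; high values mean close matches
res = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(res >= 0.8)

# overwrite each match with the background color (white, per the note above)
for x, y in zip(xs, ys):
    img[y:y+h, x:x+w] = 255

You would need one template per distinct mark, which is feasible for a known font.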
You can solve it by removing all the small clusters.
I found a Python solution (using OpenCV) here.
For supporting smaller fonts, I added the following heuristic:
"The largest size of the demarcation cluster is 1/500 of the largest letter cluster".
The heuristic can be refined by statistical analysis (or improved with other heuristics, such as the locations of the demarcation marks relative to the letters).
import numpy as np
import cv2

I = cv2.imread('Goodluck.png', cv2.IMREAD_GRAYSCALE)
J = 255 - I  # invert I
img = cv2.threshold(J, 127, 255, cv2.THRESH_BINARY)[1]  # convert to binary

# https://answers.opencv.org/question/194566/removing-noise-using-connected-components/
nlabel, labels, stats, centroids = cv2.connectedComponentsWithStats(img, connectivity=8)

# find the largest cluster, skipping label 0 (the background),
# so the heuristic really uses the largest letter cluster
max_size = np.max(stats[1:, cv2.CC_STAT_AREA])
thresh_size = max_size / 500  # set the threshold to the maximum cluster size divided by 500

labels_small = []
areas_small = []
for i in range(1, nlabel):
    if stats[i, cv2.CC_STAT_AREA] < thresh_size:
        labels_small.append(i)
        areas_small.append(stats[i, cv2.CC_STAT_AREA])

# paint the small clusters white in the original image
for i in labels_small:
    I[labels == i] = 255

cv2.imshow('I', I)
cv2.waitKey(0)
Here is a MATLAB code sample (kept threshold = 200):
clear
I = imbinarize(rgb2gray(imread('בהצלחה.png')));
figure; imshow(I);
J = ~I;

% Clustering
CC = bwconncomp(J);

% Cover all small clusters with zeros.
for i = 1:CC.NumObjects
    C = CC.PixelIdxList{i}; % cluster coordinates
    % Fill small clusters with zeros.
    if numel(C) < 200
        J(C) = 0;
    end
end

J = ~J;
figure; imshow(J);
Result:
For a little experiment in Python, I want to find small scratches on fruits. The scratches are very small and hard to detect with the human eye.
I'm using a high resolution camera for that experiment.
Here is the defect I want to detect:
Original Image:
This is my result with very few lines of code:
So I found the contours of my fruit. How can I proceed to find the scratch? The RGB value is similar to other parts of the fruit, so how can I differentiate between a scratch and a normal part of the fruit?
My code:
# imports
import numpy as np
import cv2
import time

# read image & convert to HSV
img = cv2.imread('IMG_0441.jpg')
result = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# filtering
lower = np.array([1, 60, 50])
upper = np.array([255, 255, 255])
result = cv2.inRange(result, lower, upper)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
result = cv2.dilate(result, kernel)

# contours (OpenCV 3.x returns three values here)
im2, contours, hierarchy = cv2.findContours(result.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
result = cv2.cvtColor(result, cv2.COLOR_GRAY2BGR)
if len(contours) != 0:
    for (i, c) in enumerate(contours):
        area = cv2.contourArea(c)
        if area > 100000:
            print(area)
            cv2.drawContours(img, c, -1, (255, 255, 0), 12)
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 12)

# stack results
result = np.vstack((result, img))
resultOrig = result.copy()

# save image to file before resizing
cv2.imwrite(str(time.time()) + '_0_result.jpg', resultOrig)

# resize
max_dimension = float(max(result.shape))
scale = 900 / max_dimension
result = cv2.resize(result, None, fx=scale, fy=scale)

# show results
cv2.imshow('res', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
I changed your image to HSL colour space.
I can't see the scratch in the L channel, so the greyscale approach suggested earlier is going to be difficult.
But the scratch is quite noticeable in the hue plane.
You could use an edge detector to find the blemish in the hue channel. Here I use a difference of gaussians detector (with sizes 20 and 4).
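As a rough illustration, a difference of gaussians on the hue channel might look like this (the sigmas 4 and 20 follow the sizes mentioned above; the exact detector used is not specified, so treat this as an assumption):

import cv2

img = cv2.imread('IMG_0441.jpg')
# OpenCV's HLS conversion puts hue in channel 0
hue = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)[:, :, 0].astype('float32')

# difference of gaussians: a small blur minus a large blur acts as a band-pass filter
blur_small = cv2.GaussianBlur(hue, (0, 0), sigmaX=4)
blur_large = cv2.GaussianBlur(hue, (0, 0), sigmaX=20)
dog = cv2.convertScaleAbs(blur_small - blur_large)

cv2.imshow('DoG of hue', dog)
cv2.waitKey(0)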
My personal guess is to use some algorithm to detect the change in grayscale: the variation around the scratch should be bigger than the variation in other areas. The Sobel and Scharr derivatives could be an option; this is a link to the Python-OpenCV tutorial on image gradients. You could first crop out the fruit with your contour approach.
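A minimal sketch of that gradient idea (the file name is a placeholder, and it assumes the fruit has already been cropped out as suggested):

import cv2

gray = cv2.imread('fruit_crop.jpg', cv2.IMREAD_GRAYSCALE)

# Scharr is a more rotation-accurate 3x3 derivative kernel than the default Sobel
grad_x = cv2.Scharr(gray, cv2.CV_64F, 1, 0)
grad_y = cv2.Scharr(gray, cv2.CV_64F, 0, 1)
magnitude = cv2.magnitude(grad_x, grad_y)

# a scratch should show up as a ridge of unusually high gradient magnitude
cv2.imshow('gradient magnitude', cv2.convertScaleAbs(magnitude))
cv2.waitKey(0)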
If you really want to use conventional computer vision techniques, you should start with edges that can be detected on the fruit. Some of the edges are caused by the bumps on the fruit, so you have to look at various features of the area around the edges to find the difference between scratches and bumps. After you look at about a hundred scratches, you should be able to come up with some rules.
But this process is going to be very tiring, and my guess is you will not have much luck. A better way to approach this problem is to train a deep neural network by manually annotating scratches on about 100 images, and letting the network find out by itself how to distinguish scratches from the rest of the fruit.
If you are a beginner at this stuff, search for PyImageSearch and LearnOpenCV. Both are very resourceful sites where you can learn quickly.
Complete noob at OpenCV and numpy here. Here is the image, and here is my code:
import numpy as np
import cv2

im = cv2.imread('test.jpg')
imgray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
imgray = cv2.medianBlur(imgray, ksize=7)

# Otsu thresholding picks the threshold value automatically
ret, thresh = cv2.threshold(imgray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# OpenCV 3.x returns three values here
_, contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
print("number of contours detected before filtering %d -> " % len(contours))

# draw the last contour onto the image
new = np.zeros(imgray.shape)
new = cv2.drawContours(im, contours, len(contours) - 1, (0, 0, 255), 18)

cv2.namedWindow('Display', cv2.WINDOW_NORMAL)
cv2.imshow('Display', new)
cv2.waitKey()

# fill the last contour into a mask, then collect every non-zero pixel
mask = np.zeros(imgray.shape, np.uint8)
cv2.drawContours(mask, [contours[len(contours) - 1]], 0, 255, -1)
pixelpoints = cv2.findNonZero(mask)
cv2.imwrite("masked_image.jpg", mask)
print(len(pixelpoints))
print("type of pixelpoints is %s" % type(pixelpoints))
The length of pixelpoints is nearly 2 million, since it contains all the points covered by the contour. But I only require the bordering points of that contour. How do I do that? I have tried several methods from the OpenCV documentation, but I always get errors with tuples and sorting operations. Please help?
I only require the border points of the contour :(
Is this what you mean by border points of a contour?
The white lines you see are points that I have marked out in white against the blue drawn contours. There's a little spot at the bottom right because I think your black background isn't really black, so when I did thresholding and a floodfill to get this,
there was a tiny white speck at the same spot. But if you play around with the parameters and do a more proper thresholding and floodfill, it shouldn't be an issue.
In OpenCV's drawContours function, cnts contains a list of contours, and each contour contains an array of points; each point is of type numpy.ndarray. If you want to place all points of each contour in one place, so you get a set of boundary points (like the white dot outline in the image above), you can append them all into a list. You can try this:
# note: OpenCV uses BGR ordering, not RGB
contoured = cv2.drawContours(black, cnts, -1, (255, 0, 0), 3)

# list of ALL points of ALL contours
all_pixels = []
for i in range(0, len(cnts)):
    for j in range(0, len(cnts[i])):
        all_pixels.append(cnts[i][j])
When I tried to
print(len(all_pixels))
it returned 2139 points.
Do this if you want to mark out the points for visualization purposes (e.g. like my white points):
#contouredC is a copy of the contoured image above
contouredC[x_val, y_val]=[255,255,255]
If you want fewer points, just use a step in the iteration so that you only keep every nth point.
In Python, for loops are slow, so I think there are better ways, such as replacing the nested for loops with np.where() or other vectorised numpy operations; I will update this if/when I figure it out. Also, this needs better thresholding and binarization techniques. The floodfill technique is referenced from: Python 2.7: Area opening and closing binary image in Python not so accurate.
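For example, a hedged sketch of a vectorised replacement for the nested loops (each OpenCV contour is an (N, 1, 2) array, so they can be reshaped and stacked):

import numpy as np

# all contour points as one (M, 2) array, equivalent to the all_pixels list above
all_pixels = np.vstack([c.reshape(-1, 2) for c in cnts])
print(len(all_pixels))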
Hope it helps.
The image below shows an aerial photo of a house block (re-oriented with the longest side vertical), and the same image subjected to Adaptive Thresholding and Difference of Gaussians.
Images: Base; Adaptive Thresholding; Difference of Gaussians
The roof-print of the house is obvious (to the human eye) on the AdThresh image: it's a matter of connecting some obvious dots. In the sample image, finding the blue-bounded box below -
Image with desired rectangle marked in blue
I've had a crack at implementing HoughLinesP() and findContours(), but get nothing sensible (probably because there's some nuance that I'm missing). The python script-chunk that fails to find anything remotely like the blue box, is as follows:
import cv2
import numpy as np
from matplotlib import pyplot as plt

# read in the full (RGBA) image - to get the alpha layer to use as a mask
img = cv2.imread('rotated_12.png', cv2.IMREAD_UNCHANGED)
grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Otsu's thresholding after Gaussian filtering
blur_base = cv2.GaussianBlur(grey, (9, 9), 0)
blur_diff = cv2.GaussianBlur(grey, (15, 15), 0)
_, thresh1 = cv2.threshold(grey, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
thresh = cv2.adaptiveThreshold(grey, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2)
# note: uint8 subtraction wraps around; cv2.subtract would clamp at zero instead
DoG_01 = blur_base - blur_diff
edges_blur = cv2.Canny(blur_base, 70, 210)

# find contours (OpenCV 3.x returns three values here)
(ed, cnts, h) = cv2.findContours(grey, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:4]
for c in cnts:
    approx = cv2.approxPolyDP(c, 0.1 * cv2.arcLength(c, True), True)
    cv2.drawContours(grey, [approx], -1, (0, 255, 0), 1)

# Hough lines (minLineLength and maxLineGap must be passed as keyword arguments,
# otherwise they land in the wrong positional slots of HoughLinesP)
minLineLength = 30
maxLineGap = 5
lines = cv2.HoughLinesP(edges_blur, 1, np.pi / 180, 20, minLineLength=minLineLength, maxLineGap=maxLineGap)
print("lines found:", len(lines))
for line in lines:
    cv2.line(grey, (line[0][0], line[0][1]), (line[0][2], line[0][3]), (255, 0, 0), 2)

# plot all the images
images = [img, thresh, DoG_01]
titles = ['Base', 'AdThresh', 'DoG01']
for i in range(len(images)):
    plt.subplot(1, len(images), i + 1), plt.imshow(images[i], 'gray')
    plt.title(titles[i]), plt.xticks([]), plt.yticks([])
plt.savefig('a_edgedetect_12.png')
cv2.destroyAllWindows()
I am trying to set things up without excessive parameterisation. I'm wary of 'tailoring' an algorithm for just this one image since this process will be run on hundreds of thousands of images (with roofs/rooves of different colours which may be less distinguishable from background). That said, I would love to see a solution that 'hit' the blue-box target - that way I could at the very least work out what I've done wrong.
If anyone has a quick-and-dirty way to do this sort of thing, it would be awesome to get a Python code snippet to work with.
The 'base' image ->
Base Image
You should apply the following:
1. Contrast Limited Adaptive Histogram Equalization (CLAHE) and conversion to grayscale.
2. Gaussian blur & morphological transforms (dilation, erosion, etc.), as mentioned by @bad_keypoints. This will help you get rid of the background noise. This is the trickiest step, as the results depend on the order in which you apply the operations (first Gaussian blur and then morphological transforms, or vice versa) and on the window sizes you choose.
3. Adaptive thresholding.
4. Canny edge detection.
5. Finding the contour that has four corner points.
As said earlier, you need to tweak the input parameters of these functions and validate them against other images, since a setting that works for this case may not work for others. You will need to settle the parameter values by trial and error. A hedged sketch of these steps is given below.
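To make the steps concrete, here is a minimal sketch; every kernel size, clip limit, and threshold below is an assumption to be tuned, per the caveats above:

import cv2
import numpy as np

img = cv2.imread('rotated_12.png')
grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# 1. CLAHE (clip limit and tile grid size are guesses)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
grey = clahe.apply(grey)

# 2. Gaussian blur plus a morphological closing to suppress background noise
grey = cv2.GaussianBlur(grey, (5, 5), 0)
grey = cv2.morphologyEx(grey, cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8))

# 3. adaptive thresholding
thresh = cv2.adaptiveThreshold(grey, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2)

# 4. Canny edge detection
edges = cv2.Canny(thresh, 70, 210)

# 5. look for a large contour that approximates to four corner points
# ([-2] picks the contour list in both the OpenCV 3.x and 4.x return signatures)
cnts = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]
for c in sorted(cnts, key=cv2.contourArea, reverse=True):
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    if len(approx) == 4:
        cv2.drawContours(img, [approx], -1, (255, 0, 0), 2)
        break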