I use this code to find some blobs, and pick the biggest one.
contours, hierarchy = cv2.findContours(th1, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
if len(contours) != 0:
    c = max(contours, key=cv2.contourArea)
Now I need to change this code so that it returns the contour that is in the middle of the frame (i.e. whose bounding box covers the center pixel of the image).
I am not able to figure out how to do this except by getting the bounding box of every contour with
xbox, ybox, wbox, hbox = cv2.boundingRect(cont)
and then checking that xbox and ybox are smaller than the centre, and xbox+wbox and ybox+hbox are bigger than the centre. That does not look like an efficient way though, since there can be up to 500 small contours.
There is a function in OpenCV that will check if a given point is inside a contour (returns a 1), on the boundary (returns a 0) or outside (returns a -1).
cv2.pointPolygonTest()
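For example, a minimal sketch applying it to the code from the question (th1 and contours are assumed to come from the snippet above; with measureDist=False the function returns just the sign):

h, w = th1.shape[:2]
center = (w // 2, h // 2)

middle_contour = None
for cnt in contours:
    # measureDist=False: returns +1 inside, 0 on the edge, -1 outside
    if cv2.pointPolygonTest(cnt, center, False) >= 0:
        middle_contour = cnt
        break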
I'd like to suggest this approach, though maybe there is a more straightforward one. I'm not writing code here, but instead giving a possible algorithm:
Iterate over the contours and draw each single contour, filled, into a binary (b/w) mask
Check whether the center pixel (image width/2, image height/2) of that mask is set
That should work.
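A minimal sketch of that algorithm, again assuming th1 and contours from the question's code:

import numpy as np

h, w = th1.shape[:2]
middle_contour = None
for cnt in contours:
    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.drawContours(mask, [cnt], -1, 255, thickness=-1)  # draw this contour filled
    if mask[h // 2, w // 2]:  # center pixel falls inside this contour
        middle_contour = cnt
        break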
I have hundreds of ID card images, some of which are provided below:
(Disclaimer: I downloaded the images from the Web, all rights (if exist) belong to their respective authors)
As seen, they all differ in brightness, perspective angle, distance, and card orientation. I'm trying to extract only the rectangular card area and save it as a new image. To achieve this, I learned that I must convert the image to grayscale and apply some thresholding method. Then cv2.findContours() is applied to the thresholded image to get multiple vector points.
I have tried many methods and ended up using cv2.adaptiveThreshold(), since it determines the threshold value locally (I can't manually set threshold values for each image). However, when I apply it to the images, I am not getting what I want. For example:
My desired output should look like this:
It seems like the desired output also involves a perspective (or affine) transformation to straighten the card area (the Obama case), but I am finding that difficult to understand. If that's possible, I'd like to further extract and save the card image separately.
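For reference, such a straightening step usually boils down to a four-point perspective warp. Here is a minimal sketch, assuming pts already holds the four card corners (e.g. from cv2.approxPolyDP) in top-left, top-right, bottom-right, bottom-left order; this illustrates the idea only, it is not the method behind the sample output:

import cv2
import numpy as np

def warp_card(image, pts, out_w=640, out_h=400):
    # map the four detected corners onto an axis-aligned rectangle
    src = np.array(pts, dtype=np.float32)
    dst = np.array([[0, 0], [out_w - 1, 0],
                    [out_w - 1, out_h - 1], [0, out_h - 1]], dtype=np.float32)
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, M, (out_w, out_h))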
Is there any other method or algorithm that can achieve what I want? It should handle different lighting conditions and card orientations. I am hoping for a one-size-fits-all solution, given that there will be only one rectangular ID card per image. Please guide me through this with whatever you think will help.
Note that I can't use CNNs as an object detector; it must be based purely on image processing.
Thank you.
EDIT:
The code for the above results is pretty simple:
import cv2

# args["image"] holds the input path (argparse setup not shown)
image = cv2.imread(args["image"])
gray_img = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh_img = cv2.adaptiveThreshold(gray_img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 51, 9)
cnts = cv2.findContours(thresh_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]  # handle OpenCV version differences
area_threshold = 1000
for cnt in cnts:
    if cv2.contourArea(cnt) > area_threshold:
        x, y, w, h = cv2.boundingRect(cnt)
        print(x, y, w, h)
        cv2.rectangle(image, (x, y), (x + w, y + h), (36, 255, 12), 3)

resize = ResizeWithAspectRatio(image, width=640)  # helper that resizes keeping aspect ratio
cv2.imshow("image", resize)
cv2.waitKey()
EDIT 2:
I provide the gradient magnitude images below:
Does it mean I must cover both low and high intensity values? Because the edges of the ID cards at the bottom are barely noticeable.
As the title states, I'm trying to crop the largest circle out of an image, using OpenCV in Python. To be exact, it's a shooting target, which always has the same format, but the picture of it can be taken with any mobile device and in different lighting conditions (I include some examples below).
I'm completely new to image recognition, so I have been trying out many different ways of doing this, but couldn't figure out a universal solution that would work on all of my target images.
Why I'm trying to do this:
My assignment is to calculate the score of one or multiple shots on the given target image. I have tried color segmentation to find the shots, but since the shots can be on different backgrounds, this wouldn't work properly. So now I'm trying to compare the empty shooting-target image with the image of the target that has been shot on. I also need to be able to tell which target it was shot on (there are two target types). So I'm trying to crop out only the target from the image, to get rid of the background interference, and then continue with the shot identification.
What I have tried so far:
1) Finding the largest circle with HoughCircles. My next step would be to somehow remove the outer part of the found circle. I played with the configuration of the HoughCircles method for quite some time, but there was always one example image where the outermost circle wasn't highlighted correctly, or no circles were highlighted at all.
My final configuration looked something like this:
img = cv2.GaussianBlur(img, (3, 3), 0)
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 2, 10000, param1=50, param2=100, minRadius=200, maxRadius=0)
It seemed like using HoughCircles wouldn't be the right way to do this, so I moved on to another possible solution I found on the internet.
2) Finding all the contours by filtering for the 'black' color range in which the circles appear in the pictures, and then finding the largest one. The problem with this solution was that sometimes the picture had a shadow that destroyed the outer circle, which made it impossible to crop by it.
My code looked like this:
import cv2
import numpy as np

# black color boundaries [B, G, R]
lower = [0, 0, 0]
upper = [150, 150, 150]
# create NumPy arrays from the boundaries
lower = np.array(lower, dtype="uint8")
upper = np.array(upper, dtype="uint8")
# find the colors within the specified boundaries and apply the mask
mask = cv2.inRange(img, lower, upper)
output = cv2.bitwise_and(img, img, mask=mask)
ret, thresh = cv2.threshold(mask, 40, 255, 0)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
if len(contours) != 0:
    # draw in blue the contours that were found
    cv2.drawContours(output, contours, -1, 255, 3)
    # find the biggest contour (c) by area
    c = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(c)
After that, I would try to draw a circle from the largest found contour (c) and crop by it. But I could already see that the drawn circles weren't complete (probably due to a shadow in the picture), so this wouldn't work anyway.
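A rough sketch of that circle-and-crop step, continuing from c and img above (masking with the minimum enclosing circle is one possible reading of it):

(cx, cy), radius = cv2.minEnclosingCircle(c)
circle_mask = np.zeros(img.shape[:2], dtype=np.uint8)
cv2.circle(circle_mask, (int(cx), int(cy)), int(radius), 255, -1)  # filled circle
cropped = cv2.bitwise_and(img, img, mask=circle_mask)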
After those failures, I have tried so many solutions from other questions on here, but none would work for my problem.
Example images:
Target example 1
Target example 2
Target to calc score 1
Target to calc score 2
To be completely honest with you, I'm really lost on how to go about this. I would appreciate any help, advice, anything.
There are two different types of target in your samples. You may want to process them separately or ask the user what kind of target it is. Basically, you want to know how large the black part of the target is: does it cover rings 7-10 or rings 4-10?
Binarize your image. Build a histogram (projection profile) along X and along Y; from it you'll find the position of the black part of your target as (x_left, x_right, y_top, y_bottom). Once you know that, you can calculate the center as ((y_top + y_bottom)/2, (x_left + x_right)/2). After that you can easily calculate the score for every pixel of the image, since you know the center, the size of the black spot, and the number of different score rings within.
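A minimal sketch of that idea (the grayscale input gray, the threshold of 100, and the 0.5 profile cut-off are all assumptions):

import cv2
import numpy as np

_, binary = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY_INV)  # black target -> white

col_profile = binary.sum(axis=0)  # projection along X
row_profile = binary.sum(axis=1)  # projection along Y
cols = np.where(col_profile > 0.5 * col_profile.max())[0]
rows = np.where(row_profile > 0.5 * row_profile.max())[0]
x_left, x_right = cols[0], cols[-1]
y_top, y_bottom = rows[0], rows[-1]

center = ((y_top + y_bottom) // 2, (x_left + x_right) // 2)
black_radius = (x_right - x_left) / 2.0
# with the center, black_radius and the known ring layout (7-10 vs 4-10),
# the score of any pixel follows from its distance to the center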
I am writing code in Python 2.7.12 using OpenCV 2.4.9.1.
I have a 2D numpy array containing values in the range [0, 255].
My aim is to find the largest region containing values in a range [x, y].
I found
How to use python OpenCV to find largest connected component in a single channel image that matches a specific value? which is pretty well explained.
The only catch is that it is meant for OpenCV 3.
I can try to write a function of this type
[pseudo code]
def get_component(x, y, points):
    # depth-first flood fill: collect every pixel of one component
    points.append((x, y))
    visited[x][y] = 1
    # walk the not-yet-visited forward neighbours (right, down, diagonal)
    for dx, dy in ((1, 0), (0, 1), (1, 1)):
        if x + dx < m and y + dy < n and visited[x + dx][y + dy] == 0:
            get_component(x + dx, y + dy, points)

# MAIN
biggest_component = None
biggest_component_size = 0
low = lowest_value_in_user_input_range
high = highest_value_in_user_input_range
a = gray_image_of_size_m_x_n
visited = all_zero_matrix_of_size_m_x_n
for x in range(m):
    for y in range(n):
        if low <= a[x][y] <= high and visited[x][y] == 0:
            points = []
            get_component(x, y, points)
            if len(points) > biggest_component_size:
                biggest_component_size = len(points)
                biggest_component = points
# Get maximum x, maximum y, min x and min y from the list of coordinates
# of every point of the largest component to make rectangle R.
# Mission accomplished!
[/pseudo code]
Such an approach will not be efficient, I think.
Can you suggest functions for doing the same with my setup ?
Thanks.
Happy to see my answer linked! Indeed, connectedComponentsWithStats() and even connectedComponents() are OpenCV 3+ functions, so you can't use them. Instead, the easy thing to do is just use findContours().
You can calculate moments() of each contour, and included in the moments is the area of the contour.
Important note: The OpenCV function findContours() uses 8-way connectivity, not 4-way (i.e. it also checks diagonal connectivity, not just up, down, left, right). If you need 4-way, you'd need to use a different approach. Let me know if that's the case and I can update.
In the spirit of the other post, here's the general approach:
Binarize your image with the thresholds you're interested in.
Run cv2.findContours() to get the contour of each distinct component in the image.
For each contour, calculate the cv2.moments() of the contour and keep the maximum area contour (m00 in the dict returned from moments() is the area of the contour).
Either keep the contour as a list of points if that's what you need, otherwise draw them on a new blank image if you want it as a mask.
I lack creativity today, so you get the cameraman as our example image, since you didn't provide one.
import cv2
import numpy as np
img = cv2.imread('cameraman.png', cv2.IMREAD_GRAYSCALE)
Now, let's binarize to get some separated blobs:
bin_img = cv2.inRange(img, 50, 80)
Now let's find the contours.
contours = cv2.findContours(bin_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[0]
# For OpenCV 3.x, which returns (image, contours, hierarchy), use:
# contours = cv2.findContours(bin_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[1]
Now for the main bit: looping through the contours and finding the largest one:
max_area = 0
max_contour_index = 0
for i, contour in enumerate(contours):
    contour_area = cv2.moments(contour)['m00']
    if contour_area > max_area:
        max_area = contour_area
        max_contour_index = i
We now have the index max_contour_index of the largest contour by area, so you can access that contour directly with contours[max_contour_index]. You could of course just sort the contours list by contour area and grab the first (or last, depending on sort order); a short sketch of that follows below. If you want to make a mask of the one component, you can use
cv2.drawContours(new_blank_image, contours, max_contour_index, color=255, thickness=-1)
Note the -1 will fill the contour as opposed to outlining it. Here's an example drawing the contour over the original image:
Looks about right.
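As an aside, the sorting alternative mentioned above could look like this (same area-via-moments idea):

contours_by_area = sorted(contours, key=lambda c: cv2.moments(c)['m00'], reverse=True)
largest_contour = contours_by_area[0]  # biggest area first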
All in one function:
def largest_component_mask(bin_img):
    """Finds the largest component in a binary image and returns the component as a mask."""
    contours = cv2.findContours(bin_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[0]
    # use [1] here instead on OpenCV 3.x
    max_area = 0
    max_contour_index = 0
    for i, contour in enumerate(contours):
        contour_area = cv2.moments(contour)['m00']
        if contour_area > max_area:
            max_area = contour_area
            max_contour_index = i
    labeled_img = np.zeros(bin_img.shape, dtype=np.uint8)
    cv2.drawContours(labeled_img, contours, max_contour_index, color=255, thickness=-1)
    return labeled_img
I have some code which detects circular shapes but I am unable to understand how it works.
From this code:
How can I find the radius and center point of the circle?
What is the behaviour of cv2.approxPolyDP() for detecting circles?
# Now find the contours in the segmented mask
contours, hierarchy = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
# Sort the contours w.r.t. the X of their bounding rect
contours.sort(key=lambda x: cv2.boundingRect(x)[0])
for contour in contours:
    approx = cv2.approxPolyDP(contour, 0.01 * cv2.arcLength(contour, True), True)
    if len(approx) > 8:
        # Find the bounding rect of the contour.
        contour_bounding_rect = cv2.boundingRect(contour)
        mid_point = contour_bounding_rect[0] + contour_bounding_rect[2] / 2, contour_bounding_rect[1] + contour_bounding_rect[3] / 2
        print mid_point[1] / single_element_height, ", ",
So I have figured out the answer to your first question: determining the center and radius of circles in the image.
First I find all the contours present in the image. Then, using a for loop, I find the center and radius of every contour with cv2.minEnclosingCircle and print them to the console.
contours, hierarchy = cv2.findContours(thresh, 2, 1)  # 2 = cv2.RETR_CCOMP, 1 = cv2.CHAIN_APPROX_NONE
print len(contours)
cnt = contours
for i in range(len(cnt)):
    (x, y), radius = cv2.minEnclosingCircle(cnt[i])
    center = (int(x), int(y))
    radius = int(radius)
    cv2.circle(img, center, radius, (0, 255, 0), 2)
    print 'Circle' + str(i) + ': Center =' + str(center) + 'Radius =' + str(radius)
To answer your second question on cv2.approxPolyDP(): this function computes an approximate polygon around the object in the image based on a parameter called 'epsilon'. The higher the value of epsilon, the more coarsely the contour is approximated; for a lower value of epsilon, the approximation hugs almost every edge of the object. Visit THIS PAGE for a better understanding.
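To make the effect of epsilon concrete, a small sketch (the fractions of the arc length are arbitrary example values):

peri = cv2.arcLength(contour, True)
rough = cv2.approxPolyDP(contour, 0.05 * peri, True)   # large epsilon: few vertices
tight = cv2.approxPolyDP(contour, 0.005 * peri, True)  # small epsilon: hugs the edges
print len(rough), len(tight)  # a circle keeps many vertices even when approximated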
Hope this helped!! :)
I don't think approxPolyDP is the right way to go here.
If you have an image where you only have circles and you want to find the center and radius, try minEnclosingCircle().
If you have an image with various shapes and you want to find the circles, try the Hough transform (may take a long time) or fitEllipse(), where you check whether the bounding box it returns is square.
See the documentation for both of these functions.
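A sketch of the fitEllipse() check described above (the 10% tolerance is an arbitrary assumption):

for cnt in contours:
    if len(cnt) < 5:  # fitEllipse needs at least 5 points
        continue
    (cx, cy), (w, h), angle = cv2.fitEllipse(cnt)
    if abs(w - h) < 0.1 * max(w, h):  # near-square bounding box -> near-circular
        print 'circle at', (int(cx), int(cy)), 'radius ~', int((w + h) / 4)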
I'm trying to build a simple image analyzing tool that will find items that fit in a color range and then finds the centers of said objects.
As an example, after masking, I'm analyzing an image like this:
What I'm doing so far code-wise is rather simple:
import cv2
import numpy
bound = 30
inc = numpy.array([225,225,225])
lower = inc - bound
upper = inc + bound
img = cv2.imread("input.tiff")
cv2.imshow("Original", img)
mask = cv2.inRange(img, lower, upper)
cv2.imshow("Range", mask)
contours = cv2.findContours(mask, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_TC89_L1)
print contours
This, however, gives me a countless number of contours. I'm somewhat at a loss while reading the corresponding manpage. Can I use moments to reasonably analyze the contours? Are contours even the right tool for this?
I found this question, that vaguely covers finding the center of one object, but how would I modify this approach when there are multiple items?
How do I find the centers of the objects in the image? For example, in the above sample image I'm looking to find three points (the centers of the rectangle and the two circles).
Try print len(contours). That will give you something closer to the expected answer. The output you see is the full representation of the contours, which could be thousands of points.
Try this code:
import cv2
import numpy
img = cv2.imread('inp.png', 0)
_, contours, _ = cv2.findContours(img.copy(), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_TC89_L1)
print len(contours)
centres = []
for i in range(len(contours)):
    moments = cv2.moments(contours[i])
    # centroid from image moments: (m10/m00, m01/m00)
    centres.append((int(moments['m10'] / moments['m00']), int(moments['m01'] / moments['m00'])))
    cv2.circle(img, centres[-1], 3, (0, 0, 0), -1)
print centres
cv2.imshow('image', img)
cv2.imwrite('output.png',img)
cv2.waitKey(0)
This gives me 4 centres:
[(474, 411), (96, 345), (58, 214), (396, 145)]
The obvious thing to do here is to also check the area of each contour, and if it is too small as a percentage of the image, don't count it as a real contour; it will just be noise. Just add something like this to the top of the for loop:
    if cv2.contourArea(contours[i]) < 100:
        continue
For the return values from findContours, I'm not sure what the first value is for, as it is not present in the C++ version of OpenCV (which is what I use). The second value is obviously just the contours (an array of arrays), and the third value is a hierarchy holding information on the nesting of contours, which can be very handy.
You can use the OpenCV minEnclosingCircle() function on your contours to get the center of each object.
Check out this example, which is in C++, but you can adapt the logic: Example
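In Python, that might look like this short sketch (reusing the contours from the code above):

for cnt in contours:
    (x, y), radius = cv2.minEnclosingCircle(cnt)
    print (int(x), int(y))  # center of this object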