I am trying to find the corners of the four pillars, which are yellow, and also to detect the extreme corners of the board, which is white.
Basically, I want to calculate the area of the whole space after subtracting the area of each pillar.
For that, I am first trying to identify the corners of the pillars so I can find the area of each pillar.
Here is the code I tried; I am about halfway through it.
import numpy as np
import cv2

img = cv2.imread('Corner_0.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = np.float32(gray)

# Shi-Tomasi corner detection: up to 100 corners, quality 0.01, min distance 10 px
corners = cv2.goodFeaturesToTrack(gray, 100, 0.01, 10)
corners = corners.astype(int)   # np.int0 was removed in newer NumPy

# mark each detected corner with a small filled circle
for corner in corners:
    x, y = corner.ravel()
    cv2.circle(img, (x, y), 3, 255, -1)

cv2.imwrite('Detected_Corner_0.jpg', img)
I would like to detect the corners and calculate the area of each pillar.
When I use GrabCut I am able to segment one pillar; does this approach make sense?
Corner detectors often cannot be relied on: they report extra corners and miss the ones you expect. What's more, you have to identify and regroup them.
You can obtain interesting results by computing a saturation image (the S channel in LSH). Then, by binarization and blob analysis, you can easily find the areas.
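As a rough sketch of that pipeline (the S channel of HSV stands in here for the saturation image, and the blob-size cut-off is a guess to tune):
import cv2
import numpy as np

img = cv2.imread('Corner_0.jpg')

# the S channel of HSV stands in for the saturation image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
saturation = hsv[:, :, 1]

# binarization: saturated (coloured) pixels turn white, the white board stays black
_, binary = cv2.threshold(saturation, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# blob analysis: each connected component is a candidate pillar, and its
# pixel count is the pillar area to subtract from the board area
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
for label in range(1, num_labels):          # label 0 is the background
    area = stats[label, cv2.CC_STAT_AREA]
    if area > 100:                          # guessed cut-off for noise blobs
        print('pillar candidate at', centroids[label], 'area:', area)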
I am doing a project that checks whether the labels on ketchup bottles have shifted outside the boundaries we want or are placed correctly. I am using Python and OpenCV.
My goal is to set boundaries and check whether the label has exceeded those limits.
I want to add boundaries like this:
For example, in the red area between the green rectangle and the edge of the ketchup bottle, I am checking whether any pixel is present (which would mean the label has slipped) --> example areas
So far I have blurred the image and then found the edges with the Canny edge detector.
Blurred image:
After finding edges:
From here, I want to add a frame to the edge image and check whether any pixels fall outside the border, but I am stuck at this point.
I'm open to suggestions on how to do this.
This is my code:
import cv2

image = cv2.imread('ketchup1.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# suppress noise before edge detection
blur = cv2.GaussianBlur(gray, (5, 5), 0)
canny = cv2.Canny(blur, 75, 225)

cv2.imshow("blurred image", blur)
cv2.imshow("canny image", canny)
cv2.waitKey(0)
cv2.destroyAllWindows()
Such inspections will always start with a global localization of the object, because the bottles will not always be placed in exactly the same spot. This can be done by template matching, or by detection of the external edges (in a horizontal strip).
First solution:
Take a sample image and define a binary mask such as the one below. Then during inspection, after registering the mask on the image, count the edge pixels inside the mask area.
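A sketch of the counting step, assuming a hand-drawn mask ('mask.png' is a placeholder) that is white in the forbidden zones and already registered to the inspected image:
import cv2

image = cv2.imread('ketchup1.jpg')
mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)   # placeholder mask file

# same edge pipeline as in the question
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
canny = cv2.Canny(blur, 75, 225)

# count edge pixels that fall inside the forbidden zones
violations = cv2.countNonZero(cv2.bitwise_and(canny, mask))
print('edge pixels outside the allowed label area:', violations)
if violations > 20:     # tolerance threshold, to be tuned on good samples
    print('label out of bounds')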
Second solution:
Use template matching to locate just the label and compare its position to that of the bottle.
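A sketch of this idea, assuming two reference crops taken from a good sample ('bottle_template.png' and 'label_template.png' are placeholder file names):
import cv2

image = cv2.imread('ketchup1.jpg', cv2.IMREAD_GRAYSCALE)
bottle_tpl = cv2.imread('bottle_template.png', cv2.IMREAD_GRAYSCALE)
label_tpl = cv2.imread('label_template.png', cv2.IMREAD_GRAYSCALE)

def locate(img, tpl):
    # returns the top-left corner of the best match
    result = cv2.matchTemplate(img, tpl, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    return max_loc

bx, by = locate(image, bottle_tpl)
lx, ly = locate(image, label_tpl)

# the label offset relative to the bottle should stay within a tolerance
dx, dy = lx - bx, ly - by
print('label offset relative to bottle:', dx, dy)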
I need to find the coordinates of the corners of a rectangular plate in a picture, using Python and OpenCV.
This is what the original picture looks like:
Now I use the following script for edge detection:
import cv2
import numpy as np
import matplotlib.pyplot as plt
image_original = cv2.imread('c:/python_test/camera_pics/Basler_acA640-300gm__22354308__20210211_135725420_36.tiff', cv2.IMREAD_COLOR)
image_gray = cv2.cvtColor(image_original, cv2.COLOR_BGR2GRAY)
filtered_image = cv2.Canny(image_gray, threshold1=20, threshold2=200)
cv2.imwrite('c:/python_test/test.bmp',filtered_image)
And then it looks like this:
Now I am trying to find the corners of the plate, but I have no idea how I can do this. These corners can either be the corners of the inner rectangle or the outer rectangle. I edited the picture below in Paint to show which corners I mean.
Can you help me in writing a script to find the coordinates of the corners of this plate?
This is not an easy task, especially for the two lower corners of the upper face: contrast is low (even nonexistent in places) and the image is cluttered.
The convex hull will indeed give you the two top corners (provided the screws are always detectable). You can also find the bottom corners of the lower face by locating significant angles in the convex hull.
You can roughly estimate the locations of the corners of the upper face if you know the thickness of the plate and its projected height compared to its true height.
As the long edges are not so badly detected, you may consider the long, continuous line segments (in the 8-connectedness sense) and try to match them to the convex hull and to the estimated corners.
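As a starting point for the convex-hull part, a sketch along these lines might help ('plate.tiff' is a placeholder for your input, and the approximation epsilon is a guess):
import cv2
import numpy as np

image_gray = cv2.imread('plate.tiff', cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(image_gray, threshold1=20, threshold2=200)

# collect edge pixel coordinates as (x, y) points
ys, xs = np.where(edges > 0)
points = np.column_stack((xs, ys)).astype(np.int32)

hull = cv2.convexHull(points)

# simplify the hull; only vertices at significant angles survive
approx = cv2.approxPolyDP(hull, epsilon=10, closed=True)
print('corner candidates:')
for p in approx.reshape(-1, 2):
    print(tuple(p))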
As the title states, I'm trying to crop the largest circle out of an image. I'm using OpenCV in Python. To be exact, it's a shooting target, which always has the same format, but the picture of it can be taken with any mobile device and in different lighting conditions (I will include some examples below).
I'm completely new to image recognition, so I have been trying out many different ways of doing this, but couldn't figure out a universal solution that would work on all of my target images.
Why I'm trying to do this:
My assignment is to calculate the score of one or multiple shots on the given target image. I have tried color segmentation to find the shots, but since the shots can land on different backgrounds, this wouldn't work properly. So now I'm trying to see the difference between the empty shooting-target image and the image of the target after it has been shot at. Also, I need to be able to tell which target it was shot on (there are two target types). So I'm trying to crop only the target out of the image to get rid of background interference and then continue with the shot identification.
What I have tried so far:
1) Finding the largest circle with HoughCircles. My next step would be to somehow remove the outer part of that found circle. I have played with the configuration of the HoughCircles method for quite some time, but for every configuration at least one of the example images either didn't highlight the outermost circle correctly or didn't highlight any of the circles :/.
My final configuration looked something like this:
img = cv2.GaussianBlur(img, (3, 3), 0)
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 2, 10000, param1=50, param2=100, minRadius=200, maxRadius=0)
It seemed like using HoughCircles wasn't the right way to do this, so I moved on to another possible solution I found on the internet.
2) Finding all the contours by filtering the 'black' color range in which the circles seem to be on the pictures, and then finding the largest one. The problem with this solution was that sometimes the pictures had a shadow that destroyed the outer circle, which made it impossible to crop by it.
My code looked like this:
import cv2
import numpy as np

img = cv2.imread('target.jpg')   # placeholder for the input image

# black color boundaries [B, G, R]
lower = [0, 0, 0]
upper = [150, 150, 150]
# create NumPy arrays from the boundaries
lower = np.array(lower, dtype="uint8")
upper = np.array(upper, dtype="uint8")
# find the colors within the specified boundaries and apply the mask
mask = cv2.inRange(img, lower, upper)
output = cv2.bitwise_and(img, img, mask=mask)
ret, thresh = cv2.threshold(mask, 40, 255, 0)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
if len(contours) != 0:
    # draw in blue the contours that were found
    cv2.drawContours(output, contours, -1, 255, 3)
    # find the biggest contour (c) by area
    c = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(c)
After that, I would try to draw a circle from the largest found contour (c) and crop by it. But I have already seen that the drawn circles weren't complete (probably due to some shadow in the picture), so this wouldn't work anyway.
After those failures, I have tried so many solutions from other questions on here, but none would work for my problem.
Example images:
Target example 1
Target example 2
Target to calc score 1
Target to calc score 2
To be completely honest with you, I'm really lost on how to go about this. I would appreciate any help, advice, anything.
There are two different types of target in your samples. You may want to process them separately or ask the user what kind of target it is. Basically, you want to know how large the black part of the target is: does it cover rings 7-10 or rings 4-10?
Binarize your image and build a histogram along X and Y; you'll find the extent of the black part of your target as (x_left, x_right, y_top, y_bottom). Once you know that, you can calculate the center as ((top + bottom) / 2, (left + right) / 2). After that you can easily calculate the score for every pixel of the image, since you know the center, the black spot size, and the number of different score areas within.
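A rough sketch of this projection approach ('target.jpg' is a placeholder file name; the binarization threshold and the 1% projection cut-off are guesses to tune):
import cv2
import numpy as np

gray = cv2.imread('target.jpg', cv2.IMREAD_GRAYSCALE)
binary = (gray < 100).astype(np.uint8)      # 1 where the image is black

# project the black pixels onto the X and Y axes
col_hist = binary.sum(axis=0)
row_hist = binary.sum(axis=1)

# the black disc spans the columns/rows where the projection is significant
cols = np.where(col_hist > 0.01 * col_hist.max())[0]
rows = np.where(row_hist > 0.01 * row_hist.max())[0]
x_left, x_right = cols[0], cols[-1]
y_top, y_bottom = rows[0], rows[-1]

center = ((x_left + x_right) / 2, (y_top + y_bottom) / 2)
black_radius = (x_right - x_left) / 2
print('center:', center, 'black disc radius:', black_radius)
# score per pixel: distance from the center divided by the ring width,
# given how many rings the black disc covers (7-10 vs 4-10 per target type)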
I am using OpenCV to identify the iris region + pupil region (outer grey area + inner black circle), as seen in this image.
I tried the following approaches, but was unable to extract the iris region reliably.
Approach 1
Iris area detection based on the color values of the pixels in the image
import cv2
from PIL import Image

img = cv2.imread('i1.jpg')
im = Image.open('i1.jpg')
pix = im.load()

height, width = img.shape[:2]
print(height, width)
height = height - 1
width = width - 1
print(pix[width, height])
print(pix[0, 0])

# mark every very dark (near-black) pixel in green
for eh in range(height):
    for ew in range(width):
        r, g, b = pix[ew, eh]
        if r <= 30 and g <= 30 and b <= 30:
            print(eh, ew)
            cv2.circle(img, (ew, eh), 1, (0, 255, 0), 1)

print(height, width)
cv2.imshow('detected Edge', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Output of the above code:
Approach 2
Iris area detection using the Hough circles method
import cv2

img1 = cv2.imread('i.jpg')
img = cv2.imread('i.jpg', 0)

ret, thresh = cv2.threshold(img, 50, 255, cv2.THRESH_BINARY)
edges = cv2.Canny(thresh, 100, 200)
cimg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)

circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, 1, 10000,
                           param1=50, param2=30, minRadius=0, maxRadius=0)
# an alternative parameterisation that was also tried:
# circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 20,
#                            param1=50, param2=30, minRadius=0, maxRadius=0)
print(circles)

# draw each detected circle, slightly enlarged, in green
for i in circles[0, :]:
    i[2] = i[2] + 4
    cv2.circle(img1, (int(i[0]), int(i[1])), int(i[2]), (0, 255, 0), 1)

cv2.imshow('detected Edge', img1)
cv2.waitKey(0)
cv2.destroyAllWindows()
# extra waitKey calls help actually close the window on some platforms
for i in range(1, 5):
    cv2.waitKey(1)
Output of the above code:
Kindly guide us on how to automatically extract the circular black area in human eye pictures.
I used the following reference.
http://www.cvip.uofl.edu/wwwcvip/education/ECE523/Iris%20Biometrics.pdf
To identify the iris region in human eye images, you can use the following steps:
1) Identification of the pupil region: since the pupil's intensity is very close to zero, you can use a binary threshold to find it. Use connected-components labelling to get regions of the same intensity, then select the region with eccentricity near zero as the pupil circle. The centroid of this connected region is the circle's centre, and you can get the radius from the dimensions of the connected component's bounding box (see the sketch after these steps).
2) Identification of the iris region: now that you have the pupil region, you can use the Hough circle method to get the iris region. Use Canny edge detection to get an edge map. Constrain the centre of the iris circle to a box around the pupil centre, and constrain the iris radius to be larger than the pupil radius but smaller than some fixed amount. Generate circles with varying centre and radius within those bounds and count the number of edge-map points lying on each; the circle with the maximum number of edge points on it is the iris circle.
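A rough sketch of step 1 (pupil detection); the threshold, minimum area, and shape tests are guesses to tune, and the bounding-box aspect ratio plus fill ratio stand in for a true eccentricity measure:
import cv2
import numpy as np

gray = cv2.imread('i1.jpg', cv2.IMREAD_GRAYSCALE)
# dark pixels (the pupil) become white in the binary image
_, binary = cv2.threshold(gray, 50, 255, cv2.THRESH_BINARY_INV)

num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
best = None
for label in range(1, num_labels):          # label 0 is the background
    w = stats[label, cv2.CC_STAT_WIDTH]
    h = stats[label, cv2.CC_STAT_HEIGHT]
    area = stats[label, cv2.CC_STAT_AREA]
    # a pupil is roughly circular: near-square bounding box, well filled
    if area > 200 and 0.8 < w / h < 1.25 and area > 0.6 * w * h:
        if best is None or area > stats[best, cv2.CC_STAT_AREA]:
            best = label

if best is not None:
    cx, cy = centroids[best]
    pupil_radius = (stats[best, cv2.CC_STAT_WIDTH] +
                    stats[best, cv2.CC_STAT_HEIGHT]) / 4
    print('pupil centre:', (cx, cy), 'radius:', pupil_radius)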
Note: in my experience, finding the iris circle this way was very costly, because you have to generate many circles with varying centre and radius. One shortcut is to keep the circle centre fixed at the pupil centre and vary only the radius, since the iris centre is very near the pupil centre. However, this gave wrong results, because the eyelash edges at the top and bottom contributed spurious edge-map points. To solve this, I used a workaround: I kept the iris centre fixed at the pupil centre and found the iris radius using only the left-hand part of the image from the pupil centre; similarly, I found an iris radius for the right-hand side. I used the average of the two radii, with the pupil centre as the iris centre, to get the iris boundary. It worked for me.
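A sketch of that workaround, counting edge-map points on half-circles to the left and right of the pupil; cx, cy and pupil_radius are placeholders for the pupil result from step 1, and the angular window (to skip the eyelids) and radius range are assumptions:
import cv2
import numpy as np

def radius_by_edge_votes(edges, cx, cy, r_min, r_max, side):
    best_r, best_votes = r_min, -1
    h, w = edges.shape
    for r in range(r_min, r_max):
        votes = 0
        # sample only a half-circle on the requested side, away from eyelids
        for deg in range(-60, 61):
            a = np.deg2rad(deg if side == 'right' else 180 - deg)
            x, y = int(cx + r * np.cos(a)), int(cy + r * np.sin(a))
            if 0 <= x < w and 0 <= y < h and edges[y, x] > 0:
                votes += 1
        if votes > best_votes:
            best_r, best_votes = r, votes
    return best_r

edges = cv2.Canny(cv2.imread('i1.jpg', cv2.IMREAD_GRAYSCALE), 100, 200)
cx, cy, pupil_radius = 120, 80, 25   # placeholders from the pupil step
r_left = radius_by_edge_votes(edges, cx, cy, int(pupil_radius * 1.5),
                              int(pupil_radius * 4), 'left')
r_right = radius_by_edge_votes(edges, cx, cy, int(pupil_radius * 1.5),
                               int(pupil_radius * 4), 'right')
iris_radius = (r_left + r_right) / 2
print('iris radius:', iris_radius)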
Using approach 2, could you start at the center of the pupil and then travel outwards, staying in the same row (travelling left or right of the pupil center), until you hit the sclera of the eye? Use this distance as the radius of the circle containing the iris.
radius_iris = abs(first_column_of_sclera - column_of_pupil_center)
# this is the yellow line in the attached image
To find the sclera: take a small pixel region, like a 3x3 block (the green box in the image), and check two criteria:
The variance of the R, G, B channels is small. White (or grey) shades have R = G = B, which means white has low variance.
The RGB values are also above some threshold. Someone with grey or black eyes will meet criterion 1, but unless the pixels are very light (near white) we haven't reached the sclera.
Create an iris mask by drawing a circle centered at the pupil with radius_iris; if you want, you can also use the pupil mask to extract only the iris.
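A sketch of this row scan; the pupil centre values are placeholders, and the 3x3 block size, variance limit, and brightness threshold are guesses:
import cv2
import numpy as np

im = cv2.imread('i1.jpg')
cx, cy = 120, 80   # pupil centre (placeholder values from approach 2)

def find_sclera_column(im, cx, cy, step=1):
    h, w = im.shape[:2]
    col = cx
    while 1 <= col + step < w - 1:
        col += step
        block = im[cy - 1:cy + 2, col - 1:col + 2].astype(np.float32)
        channel_means = block.reshape(-1, 3).mean(axis=0)
        # criterion 1: R ~ G ~ B (low variance across channels)
        # criterion 2: bright enough to be sclera rather than a grey iris
        if channel_means.var() < 20 and channel_means.min() > 160:
            return col
    return None

first_column_of_sclera = find_sclera_column(im, cx, cy, step=1)  # scan right
if first_column_of_sclera is not None:
    radius_iris = abs(first_column_of_sclera - cx)
    print('iris radius:', radius_iris)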
To avoid wrong results and improve performance, you should always give HoughCircles proper bounds: iris and pupil radii will each lie within a certain range.
I would look for a black blob of reasonable size in the image to locate the pupil. Once you know where the pupil is, you know where to look for the iris. Extract a region of interest that will contain the iris (use the pupil size to estimate the iris size), but not much more. Then do two Hough transforms to get the iris and pupil positions and radii.
Afterwards, you can further improve accuracy by fitting a circle/ellipse using the knowledge from your Hough transform, if necessary.
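For example, a sketch with assumed numbers ('i1.jpg' is a placeholder, and all radius bounds in pixels are guesses to adapt to the expected eye size):
import cv2

img = cv2.imread('i1.jpg', cv2.IMREAD_GRAYSCALE)
img = cv2.medianBlur(img, 5)

# pupil: a small dark circle; only one candidate expected (large minDist)
pupil = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=1000,
                         param1=50, param2=30, minRadius=15, maxRadius=40)

if pupil is not None:
    x, y, r = pupil[0][0]
    # iris: roughly concentric with the pupil, so bound its radius
    # relative to the pupil radius found above
    iris = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=1000,
                            param1=50, param2=30,
                            minRadius=int(1.5 * r), maxRadius=int(4 * r))
    print('pupil:', (x, y, r), 'iris:', iris)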
I've been laboring on a pet project for a bit: finding a simple basketball in an image. I've tried a bunch of permutations of HoughCircles and related transforms over the last few weeks, but I can't seem to come anywhere close to isolating the basketball with the code examples and my own tinkering.
Here is an example photo:
And here is the result after a simple version of circle finding code I've been tinkering with:
Does anyone have any idea where I have gone wrong and how I can get it right?
Here is the code I am fiddling with:
import cv2.cv as cv   # legacy OpenCV 1.x-style API
import numpy as np

def draw_circles(storage, output):
    circles = np.asarray(storage)
    for circle in circles:
        # each detected circle is stored as (x, y, radius)
        x, y, Radius = int(circle[0][0]), int(circle[0][1]), int(circle[0][2])
        cv.Circle(output, (x, y), 1, cv.CV_RGB(0, 255, 0), -1, 8, 0)
        cv.Circle(output, (x, y), Radius, cv.CV_RGB(255, 0, 0), 3, 8, 0)

orig = cv.LoadImage('basket.jpg')
processed = cv.LoadImage('basket.jpg', cv.CV_LOAD_IMAGE_GRAYSCALE)
storage = cv.CreateMat(orig.width, 1, cv.CV_32FC3)

# use Canny, as HoughCircles seems to prefer ring-like circles to filled ones
cv.Canny(processed, processed, 5, 70, 3)
# smooth to reduce noise a bit more
cv.Smooth(processed, processed, cv.CV_GAUSSIAN, 7, 7)

cv.HoughCircles(processed, storage, cv.CV_HOUGH_GRADIENT, 2, 32.0, 30, 550)
draw_circles(storage, orig)
cv.SaveImage('found_basketball.jpg', orig)
I agree with the other posters that using the colour of the basketball is a good approach. Here is some simple code that does that:
import cv2
import numpy as np
im = cv2.imread('../media/basketball.jpg')
# convert to HSV space
im_hsv = cv2.cvtColor(im, cv2.COLOR_BGR2HSV)
# take only the orange, highly saturated, and bright parts
im_hsv = cv2.inRange(im_hsv, (7,180,180), (11,255,255))
# To show the detected orange parts:
im_orange = im.copy()
im_orange[im_hsv==0] = 0
# cv2.imshow('im_orange',im_orange)
# Perform opening to remove smaller elements
element = np.ones((5,5)).astype(np.uint8)
im_hsv = cv2.erode(im_hsv, element)
im_hsv = cv2.dilate(im_hsv, element)
points = np.dstack(np.where(im_hsv>0)).astype(np.float32)
# fit a bounding circle to the orange points
center, radius = cv2.minEnclosingCircle(points)
# draw this circle
cv2.circle(im, (int(center[1]), int(center[0])), int(radius), (255,0,0), thickness=3)
out = np.vstack([im_orange,im])
cv2.imwrite('out.png',out)
result:
I assume that:
Always one and only one basketball is present
The basketball is the principal orange item in the scene
With these assumptions, if we find anything of the correct colour, we can assume it's the ball and fit a circle to it. This way we don't do any circle detection at all.
As you can see in the upper image, there are some smaller orange elements (from the shorts) which would mess up our estimate of the ball radius. The code uses an opening operation (erosion followed by dilation) to remove them. This works nicely for your example image, but for other images a different method might be better: circle detection as well, contour shape or size, or, if we are dealing with video, tracking the ball position.
I ran this code (modified only for video) on a random short basketball video, and it worked surprisingly OK (not great... but OK).
A few thoughts:
Filter by color first to simplify the image. If you're looking specifically for an orange basketball, you can eliminate a lot of other colors. I'd recommend using the HSI color space instead of RGB, but in any case you should be able to exclude colors that are some distance in color 3-space from your trained basketball color.
Try substituting Sobel or some other kernel-based edge detector that doesn't rely on manual parameters (see the sketch after this list). Display the edge image to see if it looks "right" to you.
Allow for weaker edges. In the grayscale image, the contrast between the basketball and the player's dark jersey is not as great as the difference between the white undershirt and the black jersey.
Hough may yield unexpected results if the object is only nominally circular in cross-section but is actually elongated or has noisy edges in the real image. I usually write my own Hough algorithm and haven't touched the OpenCV implementation, so I'm not sure which parameter to change, but see if you can allow for fuzzier edges.
Maybe eliminate the smooth operation. In any case, smooth before finding edges rather than the other way around.
Try writing your own rough Hough algorithm. Although a quick implementation may not be as flexible as the OpenCV implementation, by getting your hands dirty you may stumble onto the source of the problem.
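To illustrate the Sobel suggestion above, here is a sketch ('basket.jpg' is the file from the question; the blur and kernel sizes are guesses), which also smooths before finding edges:
import cv2
import numpy as np

gray = cv2.imread('basket.jpg', cv2.IMREAD_GRAYSCALE)
gray = cv2.GaussianBlur(gray, (5, 5), 0)   # smooth BEFORE edge detection

# gradient magnitude from the two Sobel derivatives
gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
magnitude = cv2.magnitude(gx, gy)

# normalise for display and eyeball whether the ball outline is visible
edges = cv2.normalize(magnitude, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite('sobel_edges.jpg', edges)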