I have a video file and I need to circle all moving objects in a certain frame I select. My idea of a solution to this problem is:
Circle all moving objects (the white areas) in the video output of the motion detector, then circle the same areas on the original frame.
I am using BackgroundSubtractorGMG() from cv2 to detect movement
Below I show the way I expect this program to work (I used Paint, so I am not sure this is correct, but I hope it is good enough to demonstrate the concept).
As others have said in comments:
Get the mask from your background subtraction algorithm.
Use cv.findContours(mask, ...) to find contours.
(Optional) Select which contours you want to keep, using something like ((x, y), radius) = cv.minEnclosingCircle(contour) or a, b, w, h = cv.boundingRect(c), and keeping only contours with, say, radius > 5.
Use drawing functions like cv.rectangle or similar to draw the shape around the contour (like so: cv.rectangle(img, (a, b), (a + w, b + h), (0, 255, 0), 2)). A sketch putting these steps together follows.
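For illustration, here is a minimal, untested sketch of that pipeline. The 'video.mp4' path is a placeholder, and it uses createBackgroundSubtractorMOG2 from core OpenCV, since BackgroundSubtractorGMG lives in the opencv-contrib bgsegm module:

import cv2

cap = cv2.VideoCapture('video.mp4')  # placeholder path
subtractor = cv2.createBackgroundSubtractorMOG2()

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # 1. Get the mask from the background subtractor
    mask = subtractor.apply(frame)
    # Drop MOG2's gray (127) shadow pixels so only real motion stays white
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)

    # 2. Find contours of the white (moving) areas (OpenCV 4 signature)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    for contour in contours:
        # 3. Keep only blobs that are big enough
        (x, y), radius = cv2.minEnclosingCircle(contour)
        if radius > 5:
            # 4. Circle the moving object on the original frame
            cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 0), 2)

    cv2.imshow('circled', frame)
    if cv2.waitKey(30) == 27:  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()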
I am following this tutorial: https://www.pyimagesearch.com/2014/07/21/detecting-circles-images-using-opencv-hough-circles/
I was playing around with the parameters of HoughCircles (even those you don't see in the code, e.g. param2), and it seems very inaccurate. In my project, the disks you see in the picture will be placed at random spots, and I need to be able to detect them and their color.
Currently I am only able to detect a few circles, and sometimes some random circles are drawn where there are no circles, so I am a bit confused.
Is this the best way to do circle detection with OpenCV, or is there a more accurate way of doing it?
Also, why is my code not detecting every circle?
Initial board : https://imgur.com/BrPB5Ox
Circle drawn : https://imgur.com/dT7k29E
My code:
import cv2
import numpy as np

img = cv2.imread('Photos/board.jpg')
output = img.copy()
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detect circles in the image
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.2, 100)

# ensure at least some circles were found
if circles is not None:
    # convert the (x, y) coordinates and radius of the circles to integers
    circles = np.round(circles[0, :]).astype("int")
    # loop over the (x, y) coordinates and radius of the circles
    for (x, y, r) in circles:
        # draw the circle in the output image, then draw a rectangle
        # corresponding to the center of the circle
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)
        cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)

# show the output image
cv2.imshow("output", np.hstack([img, output]))
cv2.waitKey(0)
Thanks a lot.
First of all, you cannot expect HoughCircles to detect all circles in every kind of situation; it is not an AI. It has several parameters you need to tune to get the desired results. You can check here to learn more about those parameters.
HoughCircles is a contour-based function, so you should make sure the contours are being detected properly. In your example I am sure bad contour results will come up because of the lighting problem: metal materials cause strong glare in image processing, and this hurts contour finding badly.
What you should do:
Solve the lighting problem
Be sure about the HoughCircles parameters to get the desired output
Instead of using HoughCircles, you can detect each contour and its mass center (moments help you find the mass center). Then you can measure the distance from each contour point to that mass center; if they are all (roughly) equal, it's a circle.
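Here is a rough, untested sketch of that check; the tolerance value is an assumption you would tune:

import cv2
import numpy as np

def is_roughly_circular(contour, tolerance=0.15):
    # Mass center from the contour moments
    m = cv2.moments(contour)
    if m['m00'] == 0:
        return False
    cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']
    # Distance of every contour point to the mass center
    pts = contour.reshape(-1, 2).astype(np.float64)
    dists = np.sqrt((pts[:, 0] - cx) ** 2 + (pts[:, 1] - cy) ** 2)
    # If the distances are all (roughly) equal, it's a circle
    return dists.std() / dists.mean() < tolerance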
The Hough transform works best on a monochromatic/binary image, so you may want to preprocess it with some sort of threshold function first. The parameter values for the function are very important for proper recognition.
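For example, a small preprocessing sketch (the blur kernel and the Otsu threshold are assumptions you would tune for the board image):

import cv2

img = cv2.imread('Photos/board.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)  # smooth out noise before thresholding
# Otsu picks a global threshold automatically; adjust if it misbehaves
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
circles = cv2.HoughCircles(binary, cv2.HOUGH_GRADIENT, dp=1.2, minDist=100)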
Is this the best way to do circle detection with OpenCV, or is there a more accurate way of doing it? Also, why is my code not detecting every circle?
There's also the findContours function:
https://docs.opencv.org/master/d3/dc0/group__imgproc__shape.html#gadf1ad6a0b82947fa1fe3c3d497f260e0
which, to my liking, is more robust and general; you may want to give it a try.
As the title states, I'm trying to crop the largest circle out of an image. I'm using OpenCV in Python. To be exact, it's a shooting target, which always has the same format, but the picture of it can be taken with any mobile device and in different lighting conditions (I will include some examples below).
I'm completely new to image recognition, so I have been trying out many different ways of doing this, but couldn't figure out a universal solution that would work on all of my target images.
Why I'm trying to do this:
My assignment is to calculate the score of one or multiple shots on the given target image. I have tried color segmentation to find the shots, but since the shots can be on different backgrounds, this wouldn't work properly. So now I'm trying to see the difference between the empty shooting-target image and the image of the target that has been shot on. Also, I need to be able to tell which target it was shot on (there are two target types). So I'm trying to crop out only the target from the image to get rid of the background interference, and then continue with the shot identification.
What I have tried so far:
1) Finding the largest circle with HoughCircles. My next step would be to somehow remove the outer part of that found circle. I have played with the configuration of the HoughCircles method for quite some time, but there was always one example image where the outermost circle wasn't highlighted correctly, or no circles were highlighted at all :/.
My final configuration looked something like this:
img = cv2.GaussianBlur(img, (3, 3), 0)
cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 2, 10000, param1=50, param2=100, minRadius=200, maxRadius=0)
It seemed like using HoughCircles wouldn't be the right way to do this, so I moved on to another possible solution I found on the internet.
2) Finding all the contours by filtering the 'black' color range in which the circles seem to be in the pictures, and then finding the largest one. The problem with this solution seemed to be that sometimes the pictures had a shadow that destroyed the outer circle, and therefore it seemed impossible to crop by it.
My code looked like this:
# black color boundaries [B, G, R]
lower = [0, 0, 0]
upper = [150, 150, 150]

# create NumPy arrays from the boundaries
lower = np.array(lower, dtype="uint8")
upper = np.array(upper, dtype="uint8")

# find the colors within the specified boundaries and apply the mask
mask = cv2.inRange(img, lower, upper)
output = cv2.bitwise_and(img, img, mask=mask)

ret, thresh = cv2.threshold(mask, 40, 255, 0)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

if len(contours) != 0:
    # draw in blue the contours that were found
    cv2.drawContours(output, contours, -1, 255, 3)
    # find the biggest contour (c) by area
    c = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(c)
After that, I would try to draw a circle from the largest found contour (c) and crop by it. But I have already seen that the drawn circles weren't complete (probably due to some shadow in the picture), and therefore this wouldn't work anyway.
After those failures, I have tried so many solutions from other questions on here, but none would work for my problem.
Example images:
Target example 1
Target example 2
Target to calc score 1
Target to calc score 2
To be completely honest with you, I'm really lost on how to go about this. I would appreciate any help, advice, anything.
There are two different types of target in your samples. You may want to process them separately, or ask the user which kind of target it is. Basically, you want to know how large the black part of the target is: does it cover rings 7-10 or 4-10?
Binarize your image. Build a histogram along X and along Y; you'll find the position of the black part of your target as (x_left, x_right, y_top, y_bottom). Once you know that, you can calculate the center: ((top + bottom) / 2, (left + right) / 2). After that, you can easily calculate the score for every pixel of the image, since you know the center, the size of the black spot, and the number of different score areas within it.
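A rough, untested sketch of that idea; 'target.jpg', the binarization threshold, and the noise floor are placeholders to tune:

import cv2
import numpy as np

img = cv2.imread('target.jpg', cv2.IMREAD_GRAYSCALE)  # placeholder path
# Binarize: black target region -> 1, everything else -> 0
_, binary = cv2.threshold(img, 100, 1, cv2.THRESH_BINARY_INV)

# Projection histograms along X and Y
col_sums = binary.sum(axis=0)
row_sums = binary.sum(axis=1)

# The black disc shows up as a plateau; take the first and last
# column/row whose count clears a noise floor
noise_floor = 0.1 * max(col_sums.max(), row_sums.max())
cols = np.where(col_sums > noise_floor)[0]
rows = np.where(row_sums > noise_floor)[0]
x_left, x_right = cols[0], cols[-1]
y_top, y_bottom = rows[0], rows[-1]

center = ((x_left + x_right) // 2, (y_top + y_bottom) // 2)
black_radius = (x_right - x_left) // 2
# The score of each pixel then follows from its distance to 'center',
# scaled by 'black_radius' and the known ring layout of the target type.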
How to detect and split the stamps of each image using OpenCV (C++ or python)?
Below are two sample images:
One simple method would be to use contouring/edge detection to extract the desired shape.
You should try out the Canny edge detector; it is supported in OpenCV.
If you're trying contouring, you will have to experiment with the number of iterations of the dilation or erosion functions (depending on your code) so that each stamp remains a separate entity.
Following that, you can simply find each contour and extract it into a separate image. This snippet should help you out with the above-mentioned part:
x, y, w, h = cv2.boundingRect(contour)
cv2.rectangle(rgb_image, (x, y), (x + w, y + h), settings['outline_color'], settings['outline_thickness'])
roi = rgb_image[y:y + h, x:x + w]
cv2.imwrite("/Path/", roi)
# settings is a dictionary, ignore it
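For completeness, a hedged end-to-end sketch of that pipeline; 'stamps.jpg', the Canny thresholds, the kernel size, the iteration count, and the area filter are all assumptions to tune:

import cv2

img = cv2.imread('stamps.jpg')  # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Edge detection, then dilation so each stamp closes into a single blob;
# the kernel size and iteration count are the knobs mentioned above
edges = cv2.Canny(gray, 50, 150)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
closed = cv2.dilate(edges, kernel, iterations=2)

contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for i, contour in enumerate(contours):
    x, y, w, h = cv2.boundingRect(contour)
    if w * h < 1000:  # skip small noise blobs (tune this)
        continue
    roi = img[y:y + h, x:x + w]
    cv2.imwrite('stamp_%d.png' % i, roi)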
I have an image like so:
I would like to automatically identify the dense white box area in the top left and then fill it and black out the rest of image. Producing something like this:
Essentially, I just want to return the coordinates of the densest cluster. I have tried ad-hoc methods such as erosion, dilation, and binary closing, but they do not quite suit my needs. I'm not sure if I could use k-means here. Looking for an efficient method; any help is appreciated.
You could erode the image a little bit more, to remove more of the noise, and then find the contours and filter them by area. Here is what I would use (not tested):
kernel = np.ones((2, 2), np.uint8)
img = cv2.erode(img, kernel, iterations=2)

# Finding contours of the white square
# (OpenCV 3.x signature; on OpenCV 4 drop the first return value)
_, conts, hierarchy = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for cnt in conts:
    area = cv2.contourArea(cnt)
    # filter more noise
    if area > 200:  # optimize this number
        x1, y1, w, h = cv2.boundingRect(cnt)
        x2 = x1 + w  # (x1, y1) = top-left vertex
        y2 = y1 + h  # (x2, y2) = bottom-right vertex
        rect = cv2.rectangle(img, (x1, y1), (x2, y2), (255, 0, 0), 2)
One right approach here would be to apply a large square averaging filter. If you know approximately the size of the box you're looking for, then match that size with the filter. After applying this filter, the largest pixel value in the image will be at the middle of the densest region. Let's call this point p.
Next, apply segmentation and connected-component labeling to your original image. From your example image, it seems that the box you're looking for is connected. You might want to apply some morphological operations to make sure it's connected. You can also paint a reasonably-sized blob centered at point p; it will connect lots of small regions that together form a dense area.
Next, remove all connected components except the one containing point p. You can do this by finding the label at pixel p, and comparing all pixels in the labeled image for equality with that label.
This should leave you a connected, compact region. You can find the bounding box of this region, and paint it on your image, if you really want to enforce that the found area be a box.
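A sketch of this approach might look like the following (untested; 'dots.png', the filter size, and the painted blob radius are assumptions):

import cv2
import numpy as np

img = cv2.imread('dots.png', cv2.IMREAD_GRAYSCALE)  # placeholder name

# Large square averaging filter, sized roughly like the box we expect;
# the maximum of the response sits in the middle of the densest region
box = 51  # guess, match to the expected box size
density = cv2.blur(img, (box, box))
_, _, _, p = cv2.minMaxLoc(density)  # p = (x, y) of the peak

# Binarize and paint a blob at p so the dense area becomes one component
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.circle(binary, p, box // 2, 255, -1)

# Keep only the connected component containing p, black out the rest
_, labels = cv2.connectedComponents(binary)
keep = labels == labels[p[1], p[0]]
out = np.zeros_like(img)
out[keep] = 255

# Optional: enforce a box by filling the component's bounding rectangle
ys, xs = np.where(keep)
cv2.rectangle(out, (int(xs.min()), int(ys.min())), (int(xs.max()), int(ys.max())), 255, -1)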
I have extracted features with the hog.compute function and then used those features to train an SVM classifier. I used a script that I found online to separate rho and the support vectors from the classifier file.
import re
import pickle
import xml.etree.ElementTree as ET

tree = ET.parse('svm_data.xml')
root = tree.getroot()
SVs = root.getchildren()[0].getchildren()[-2].getchildren()[0]
rho = float(root.getchildren()[0].getchildren()[-1].getchildren()[0].getchildren()[1].text)
svmvec = [float(x) for x in re.sub(r'\s+', ' ', SVs.text).strip().split(' ')]
svmvec.append(-rho)
pickle.dump(svmvec, open("svm.pickle", 'wb'))
This code saved rho and the support vectors to a different file, which I provided to the hog.detectMultiScale function. Initially I got CheckDetectorSize errors, but somehow I dealt with them. But now that it finally executes, why does it always draw a rectangle in the center instead of on a person?
Check here
The final code that uses the file generated from the above code, to draw rectangles on the detected area(s):
import pickle
import cv2
import imutils
import numpy as np
from imutils.object_detection import non_max_suppression

hog = cv2.HOGDescriptor("hog.xml")
svm = pickle.load(open("svmcoeff.pickle", 'rb'))
hog.setSVMDetector(np.array(svm))

for i in range(1, 9):
    image = cv2.imread('test-' + str(i) + '.png')
    image = imutils.resize(image, width=min(300, image.shape[1]))
    orig = image.copy()
    (rects, weights) = hog.detectMultiScale(image)
    for (x, y, w, h) in rects:
        cv2.rectangle(orig, (x, y), (x + w, y + h), (0, 0, 255), 2)
    rects = np.array([[x, y, x + w, y + h] for (x, y, w, h) in rects])
    pick = non_max_suppression(rects, probs=None, overlapThresh=0.65)
    for (xA, yA, xB, yB) in pick:
        cv2.rectangle(image, (xA, yA), (xB, yB), (0, 255, 0), 2)
    cv2.imshow("Before NMS", orig)
    cv2.imshow("After NMS", image)
    key = cv2.waitKey(0)
    if key == 27:
        continue
I can't directly answer your question but I can suggest debugging steps.
First, have you checked what the SVM coefficients look like (the first code post)? You are doing a lot of content-dependent operations, such as picking the last element in an array, reading the text field there, doing text substitution, and so on. I don't know the format of your svm_data.xml, so I can't say anything about the correctness of these steps. But you should definitely check the output at every step of this code, especially svmvec. Compare these values to what you get from the original object/method/etc. you produced svm_data.xml with; most SVM implementations have ways to access these parameters interactively, e.g. in Matlab or Python.
You should correct the formatting of your second code post, which is significant for Python code. I guess you're somehow missing line breaks.
Here again first step should be to check the values you read into svm and compare to the originals.
Make sure the rectangle parameters returned by the HoG detector and those used by cv.rectangle are compatible. Your code looks OK (and checks out against online examples), but I would still try drawing a few rectangles manually just to check.
You also do a non-max suppression. Is there a difference before and after it? You should first verify that you get something meaningful before the suppression. First check that the HoG detector returns something meaningful in rects, something other than a rectangle at the center of the image. If it is not doing that, then your problem is before that point in the process. If rects contains nice rectangles but you don't see them drawn, then your problem is there and should be easy to fix.
If the problem is in rects, then you should go back and verify each step before it. I already mentioned tracking the SVM parameters from where you generate them up until you set them on the HoG detector. Then you can try running the HoG detector with its default person detector to see if you have the whole detection process working; a quick sanity check follows.
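Something like this minimal sanity check would do (the 'test-1.png' name follows your own test images):

import cv2

# If the stock people detector draws sensible boxes but your own
# coefficients do not, the problem is in the SVM export, not in the
# drawing or NMS code.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

image = cv2.imread('test-1.png')
rects, weights = hog.detectMultiScale(image, winStride=(4, 4), padding=(8, 8))
for (x, y, w, h) in rects:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imshow('default detector', image)
cv2.waitKey(0)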
These are what I can think of at this point.