I have some Python code that cleans up an image and then runs a blob detector to identify blobs in the cleaned-up image. While this works efficiently for small pictures, for example 549x549 pixels, it gets very slow on large pictures such as those that are 14790x13856 pixels. I was wondering if anyone had suggestions on how to make the blob detector (OpenCV) faster, or a replacement library that is faster.
Edit:
Here is an example of a picture:
[example image]
The real picture looks about like that, but is about 667x bigger.
I already have the code written, and on a small scale there are no errors:
import cv2
import numpy as np

# "third" is the cleaned-up image produced by the earlier preprocessing (not shown)
params = cv2.SimpleBlobDetector_Params()
# change thresholds
params.minThreshold = 0
params.maxThreshold = 255
# Filter by Area.
params.filterByArea = True
params.minArea = 0
params.maxArea = 35
# Filter by Circularity
params.filterByCircularity = True
params.minCircularity = 0
# Filter by Convexity
params.filterByConvexity = True
params.minConvexity = 0
# Filter by Inertia
params.filterByInertia = True
params.minInertiaRatio = 0
# Create a detector with the parameters
detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(third)
ImgPoints = cv2.drawKeypoints(third, keypoints, np.array([]), (0, 0, 255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
The result is a picture (TIFF) that I save, and there are no error messages.
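One approach worth trying (a sketch, not from the original post) is to tile the huge image, run the detector on each tile, and shift the keypoints back into full-image coordinates. The tile size and overlap below are assumptions; the overlap should be larger than the biggest blob you expect so border blobs are still seen whole:

import cv2
import numpy as np

def detect_blobs_tiled(image, detector, tile=2048, overlap=64):
    # Run a SimpleBlobDetector tile by tile on a very large image.
    # tile/overlap are illustrative values, not tuned for any particular data.
    h, w = image.shape[:2]
    keypoints = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            y1 = min(y + tile + overlap, h)
            x1 = min(x + tile + overlap, w)
            patch = image[y:y1, x:x1]
            for kp in detector.detect(patch):
                # shift the keypoint back into full-image coordinates
                keypoints.append(cv2.KeyPoint(kp.pt[0] + x, kp.pt[1] + y, kp.size))
    return keypoints

Blobs that land in an overlap region can be reported twice, so a small de-duplication pass (merging keypoints closer than a few pixels) may still be needed.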
I'm using OpenCV in Python to calibrate a lens.
I took photos of the pattern at different angles and locations, and then ran the blob detector in OpenCV:
import cv2
import numpy as np
img = cv2.imread('./image.png', cv2.IMREAD_GRAYSCALE)
params = cv2.SimpleBlobDetector_Params()
params.minThreshold = 50
params.maxThreshold = 255
params.filterByArea = True
params.minArea = 0
params.maxArea = 80
params.filterByColor = True
params.blobColor = 255
params.filterByCircularity = False
params.filterByConvexity = False
params.filterByInertia = False
detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(img)
The detected blobs are drawn as red circles.
The keypoints object contains the center locations of the detected blobs, which I collected like this:
ips = []
for keypoint in keypoints:
    ips.append((keypoint.pt[0], keypoint.pt[1]))
ips = np.array(ips, np.float32)  # image points
What I want is to store the center locations of those red circles in order, like cv2.findChessboardCorners does.
So I defined a function that sorts the points into rows by their vertical position and then orders each row horizontally:
def index_sorting_to_checkerboard(arr, shrinker):
    ind1 = arr[:, 1].argsort()
    arr = arr[ind1]
    floored = np.floor(arr).astype(int)
    floored_shrinked = np.zeros_like(floored)
    std = floored[0, 1]  # y of the first (lowest-y) point
    for i, (x, y) in enumerate(floored):
        if abs(y - std) <= shrinker:
            floored_shrinked[i] = (x, std)
        else:
            std = y
            floored_shrinked[i] = (x, y)
    # sort by the second column (y), then the first column (x)
    ind2 = np.lexsort((floored_shrinked[:, 0], floored_shrinked[:, 1]))
    ret = floored_shrinked[ind2]
    ind3 = ind1[ind2]  # accumulation of all index permutations
    return ret, ind3
shk, ind = index_sorting_to_checkerboard(ips, 30)
ips = ips[ind]
This function works when the image is only slightly tilted.
However, when the image is tilted a lot, it does not work well.
That's because a blob near the end of one row can end up vertically lower than a blob at the start of the next row.
That is, the vertical location of blob 13 (upper right) can be lower than blob 14 (below blob 0) when the image is tilted a lot.
So I have to change the value of 'shrinker' manually every time new images are taken.
Can you suggest a better algorithm to sort in chessboard order that works regardless of the inclination?
I think it is possible, because cv2.findChessboardCorners always returns the locations in this order,
but I don't know how it does that.
Since your pattern is elongated, you can estimate the approximate directions of the two grid axes (e.g. with PCA).
Based on the estimated directions and the distance between points, you can then search for each point's neighbours, so the order of the points can be recovered.
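A minimal sketch of that idea, assuming the detected points form a complete rows x cols grid and that, after rotating into the PCA frame, the rows are separable along the short axis (the function name and the rank-based row binning are illustrative, not from the answer):

import numpy as np

def sort_grid_points(pts, rows, cols):
    # Order blob centers row by row using the principal axes of the point cloud.
    pts = np.asarray(pts, dtype=np.float64)
    assert len(pts) == rows * cols
    centered = pts - pts.mean(axis=0)

    # For an elongated grid the largest eigenvector follows the long (column)
    # direction and the other eigenvector follows the rows.
    _, eigvecs = np.linalg.eigh(np.cov(centered.T))
    long_axis, short_axis = eigvecs[:, 1], eigvecs[:, 0]

    u = centered @ long_axis    # position along the long axis
    v = centered @ short_axis   # position along the short axis

    # Rank-based row binning: with exactly `cols` points per row, the ranks
    # along the short axis fall into consecutive groups of `cols`.
    row_ids = np.argsort(np.argsort(v)) // cols
    order = np.lexsort((u, row_ids))  # sort by row, then along the row
    return pts[order], order

The signs of the eigenvectors are arbitrary, so the scan direction may need to be flipped to match the ordering that cv2.findChessboardCorners uses.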
I have two images of moles. One is relatively round, but the other isn't. I want to find out how circular a mole is, say -1 being not at all circular, 0 being elliptical, and 1 being circular. I first converted the raw image to binary and then tried the code below. The code draws a circle around the circular mole but doesn't give any information on inertia, and the non-circular mole is not even detected as a blob. Am I understanding this concept incorrectly? How should I go about solving this problem?
# Standard imports
import cv2
import numpy as np
# Setup SimpleBlobDetector parameters.
params = cv2.SimpleBlobDetector_Params()
# Change thresholds
params.minThreshold = 10
params.maxThreshold = 200
# Filter by Area.
params.filterByArea = True
params.minArea = 1500
# Filter by Circularity
params.filterByCircularity = True
params.minCircularity = 0.1
# Filter by Convexity
params.filterByConvexity = True
params.minConvexity = 0.87
# Filter by Inertia
params.filterByInertia = True
params.minInertiaRatio = 0.01
# Create a detector with the parameters
detector = cv2.SimpleBlobDetector_create(params)
# Read image
im = cv2.imread("mole_torezo.png", cv2.IMREAD_GRAYSCALE)
# Detect blobs.
keypoints = detector.detect(im)
# Draw detected blobs as red circles.
# cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures the size of the circle corresponds to the size of blob
im_with_keypoints = cv2.drawKeypoints(im, keypoints, np.array([]), (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
# Show keypoints
cv2.imshow("Keypoints", im_with_keypoints)
cv2.waitKey(0)
Thanks to Blob Detection Using OpenCV (Python, C++) for the code.
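SimpleBlobDetector keypoints only expose a center and a size, not the circularity or inertia values computed internally, so one way to get an actual roundness number (a sketch, not from the original post) is to measure it on the contour of the binarized mole:

import cv2
import numpy as np

def roundness_measures(binary):
    # binary: 8-bit image with the mole in white on a black background.
    # The [-2] index keeps this working with both OpenCV 3.x and 4.x return values.
    cnt = max(cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)[-2],
              key=cv2.contourArea)

    area = cv2.contourArea(cnt)
    perimeter = cv2.arcLength(cnt, True)
    circularity = 4 * np.pi * area / (perimeter ** 2)  # 1.0 for a perfect circle

    # ratio of the fitted ellipse's axes, similar in spirit to the detector's inertia ratio
    _, axes, _ = cv2.fitEllipse(cnt)
    inertia_ratio = min(axes) / max(axes)
    return circularity, inertia_ratio

Both values approach 1 for a round mole and drop for elongated or irregular ones; mapping them onto the -1 to 1 scale in the question would be a separate normalization step.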
I am using OpenCV 3 in Python 2.7 to calibrate different cameras. I use the findCirclesGrid() function, which successfully finds a 4 by 11 circle pattern in a 1 megapixel image. However, when I try to detect the pattern up close in a higher-resolution image, the function fails. When the object is farther away in the image, it is still detected. I use the function as follows:
ret, corners = cv2.findCirclesGrid(image, (4, 11), flags=cv2.CALIB_CB_ASYMMETRIC_GRID)
With larger images, it returns False, None. It seems that the function can't handle circles whose area is too large. I tried adding cv2.CALIB_CB_CLUSTERING, but this doesn't seem to make a difference. Also, it seems that in C++ the user can specify which blob detector to use, but not in Python. Details: http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#findcirclesgrid
Can I increase the maximum detection size somehow or make the function detect the pattern in another way?
Edit: I found out how to edit parameters of the blobDetector by using
params = cv2.SimpleBlobDetector_Params()
params.maxArea = 100000
detector = cv2.SimpleBlobDetector_create(params)
ret, corners = cv2.findCirclesGrid(self.gray, (horsq, versq), None,
                                   flags=cv2.CALIB_CB_ASYMMETRIC_GRID,
                                   blobDetector=detector)
Still the same issue, though.
Edit2:
Now adding cv2.CALIB_CB_CLUSTERING resolves the issue!
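For reference, the final working call presumably combines the custom detector with the clustering flag, roughly like this (variable names taken from the edit above):

ret, corners = cv2.findCirclesGrid(self.gray, (horsq, versq), None,
                                   flags=cv2.CALIB_CB_ASYMMETRIC_GRID | cv2.CALIB_CB_CLUSTERING,
                                   blobDetector=detector)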
The main thing you probably need to do is tweak the minimum and maximum area of the blob detector.
Create a blob detector with params (don't use the default parameters), and adjust the minArea and maxArea that the detector will accept. You can first show all the found blobs before you pass the detector you have created into the findCirclesGrid function.
Python Sample code
import cv2
import numpy as np
from matplotlib import pyplot as plt

# Assumed context: "img"/"gray" are the color and grayscale calibration images,
# "fname" is their filename, and (cbcol, cbrow) is the circle-grid size.

# Setup SimpleBlobDetector parameters.
params = cv2.SimpleBlobDetector_Params()
print('params')
print(params)
print(type(params))
# Filter by Area.
params.filterByArea = True
params.minArea = 200
params.maxArea = 18000
params.minDistBetweenBlobs = 20
params.filterByColor = True
params.filterByConvexity = False
# tweak these as you see fit
# Filter by Circularity
# params.filterByCircularity = False
params.minCircularity = 0.2
# # # Filter by Convexity
# params.filterByConvexity = True
# params.minConvexity = 0.87
# Filter by Inertia
params.filterByInertia = True
# params.filterByInertia = False
params.minInertiaRatio = 0.01
detector = cv2.SimpleBlobDetector_create(params)
# Detect blobs.
keypoints = detector.detect(gray)
im_with_keypoints = cv2.drawKeypoints(img, keypoints, np.array([]), (0, 0, 255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
fig = plt.figure()
im_with_keypoints = cv2.drawKeypoints(gray, keypoints, np.array([]), (0, 0, 255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
plt.imshow(cv2.cvtColor(im_with_keypoints, cv2.COLOR_BGR2RGB),
           interpolation='bicubic')
titlestr = '%s found %d keypoints' % (fname, len(keypoints))
plt.title(titlestr)
fig.canvas.set_window_title(titlestr)
ret, corners = cv2.findCirclesGrid(gray, (cbcol, cbrow), flags=(cv2.CALIB_CB_ASYMMETRIC_GRID + cv2.CALIB_CB_CLUSTERING ), blobDetector=detector )
I am currently writing Python code for people headcounting with direction. I have used the 'moments' method to gather the coordinates, and when a centroid crosses a certain line the counter increments. But this method is proving to be very inefficient. My questions regarding blob detection are:
Is there any blob detection technique for OpenCV in Python? Or could it be done with cv2.findContours?
I'm working on a Raspberry Pi, so could anyone suggest how to get a blob library on Debian Linux?
Even if there is, how could I get a unique ID for each blob? Is there any algorithm for tagging unique IDs?
If there's any better method to do this, kindly suggest an algorithm.
Thanks in advance.
For blob detection you can use SimpleBlobDetector from OpenCV:
# Setup SimpleBlobDetector parameters.
params = cv2.SimpleBlobDetector_Params()
# Filter by Area.
params.filterByArea = True
params.minArea = 100
params.maxArea = 100000
# Don't filter by Circularity
params.filterByCircularity = False
# Don't filter by Convexity
params.filterByConvexity = False
# Don't filter by Inertia
params.filterByInertia = False
# Create a detector with the parameters
detector = cv2.SimpleBlobDetector_create(params)
# Detect blobs.
keypoints = detector.detect(imthresh)
# Draw detected blobs as red circles.
# cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures
# the size of the circle corresponds to the size of blob
im_with_keypoints = cv2.drawKeypoints(imthresh, keypoints, np.array([]), (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
For labelling, using scipy.ndimage.label is usually a better idea:
label_im, nb_labels = ndimage.label(mask)
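To address the unique-ID question, a small sketch that builds on the thresholded image from the answer above (the mask construction and print are illustrative): each connected component gets an integer label, and its centroid can then be used for the line-crossing test.

import numpy as np
from scipy import ndimage

mask = imthresh > 0                     # binary foreground from the thresholding above
label_im, nb_labels = ndimage.label(mask)

# centroid of every labelled blob; blob i corresponds to centroids[i - 1]
centroids = ndimage.center_of_mass(mask, label_im, range(1, nb_labels + 1))

for blob_id, (cy, cx) in enumerate(centroids, start=1):
    print("blob %d center at x=%.1f y=%.1f" % (blob_id, cx, cy))

Note these labels are only unique within one frame; keeping an ID stable across frames (needed for the direction count) still requires matching centroids between consecutive frames.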
I am trying to remove small symbols from graphs using Python. As an example, I attached a graph with some '+' and '-' marks annotating it. I don't want them there, but I don't want to remove them manually, as there are quite a few to go through. Is there an easy way to detect and remove them?
I'll give you a solution using blob analysis since I had it almost ready at hand, but would ask you to do the reading and explanation yourself, since you have not spent too much time on your own code. Maybe it helps anyway.
Resulting image:
import numpy as np
import cv2

imgray = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)

#### Blob analysis
# SimpleBlobDetector will find black blobs on a white surface
ret, imthresh = cv2.threshold(imgray, 160, 255, type=cv2.THRESH_BINARY)

# Remove small breaks in lines
kernel = np.ones((3, 3), np.uint8)
imthresh = cv2.erode(imthresh, kernel, iterations=1)
# Setup SimpleBlobDetector parameters.
params = cv2.SimpleBlobDetector_Params()
# Filter by Area.
params.filterByArea = True
params.minArea = 0
params.maxArea = 350
# Don't filter by Circularity
params.filterByCircularity = False
# Don't filter by Convexity
params.filterByConvexity = False
# Don't filter by Inertia
params.filterByInertia = False
# Create a detector with the parameters
detector = cv2.SimpleBlobDetector_create(params)
# Detect blobs.
keypoints = detector.detect(imthresh)
# Draw detected blobs as red circles.
# cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures
# the size of the circle corresponds to the size of blob
im_with_keypoints = cv2.drawKeypoints(imthresh, keypoints, np.array([]), (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
# Show blobs
cv2.imshow("Keypoints", im_with_keypoints)
cv2.imshow('threshold',imthresh)
cv2.waitKey(0)
cv2.destroyAllWindows()
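The code above only detects and displays the small blobs. Actually removing them from the graph is not part of the original answer, but one way to do it (a sketch, with the 350-pixel area limit carried over as an assumption) is to label connected components and paint the small ones white:

import cv2
import numpy as np

img = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)
# Invert the threshold so the dark marks become white foreground components.
_, binary = cv2.threshold(img, 160, 255, cv2.THRESH_BINARY_INV)

max_area = 350  # same limit as the blob detector above
n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)

cleaned = img.copy()
for i in range(1, n):  # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] <= max_area:
        cleaned[labels == i] = 255  # paint the small mark white

cv2.imwrite('image_cleaned.png', cleaned)

The axes and curves of the graph form much larger components, so they are left untouched; the area limit may still need tuning for a particular figure.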