I'm really new to OpenCV. :) I have been working on this for almost an entire day. After hours of sleepless work I would like to know if I can further improve my code.
I have written some code to select only the black markings on the images. These black markings are child contours. Whilst my code is able to select some contours, it isn't accurate. You can see the code draws contours around the shadows along with black markings.
Code 1
At first I tried to use Canny edge detection, but I was unable to overlay the result on the original image correctly.
import cv2
import numpy as np
image = cv2.imread('3.jpg')
image = cv2.resize(image, (500, 500))
image2 = image.copy()  # use a copy so drawing contours later does not also modify the original
cv2.waitKey(0)
# Grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Find Canny edges
edged = cv2.Canny(gray, 30, 200)
cv2.waitKey(0)
contours, hierarchy = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cv2.imshow('Canny', edged)
cv2.waitKey(0)
# print("Number of Contours found = " + str(len(contours)))
cv2.drawContours(image2, contours, -1, (0, 255, 0), 3)
cv2.imshow('Contours', image2)
cv2.waitKey(0)
cv2.destroyAllWindows()
Code 2
I was able to improve on Code 1 to be far more accurate. You should be able to see that it now only selects half of the thumb, none of the other fingers, and it doesn't select the indent on the background.
Additionally, changing the background of the image also increases the accuracy of the result.
import cv2
import numpy as np
image = cv2.imread('3.jpg', 0)
image2 = cv2.imread('3.jpg')
image = cv2.resize(image, (500, 500))
image2 = cv2.resize(image2, (500, 500))
cv2.waitKey(0)
ret, thresh_basic = cv2.threshold(image, 100, 255, cv2.THRESH_BINARY)
cv2.imshow("Thresh basic", thresh_basic)
# Taking a matrix of size 5 as the kernel
kernel = np.ones((5, 5), np.uint8)
img_erosion = cv2.erode(thresh_basic, kernel, iterations=1)
#####################
ret, thresh_inv = cv2.threshold(img_erosion, 100, 255, cv2.THRESH_BINARY_INV)
cv2.imshow("INV", thresh_inv)
#####################
# Find Canny edges
edged = cv2.Canny(img_erosion, 30, 200)
cv2.waitKey(0)
contours, hierarchy = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cv2.imshow('Canny', edged)
cv2.waitKey(0)
# print("Number of Contours found = " + str(len(contours)))
cv2.imshow('Original', image2)
cv2.drawContours(image2, contours, -1, (0, 255, 0), 3)
cv2.imshow('Contours', image2)
cv2.waitKey(0)
cv2.destroyAllWindows()
Can I improve my code further?
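One idea I have not fully explored yet (just a rough sketch, and I have not verified that it separates the markings from shadows): since the black markings are child contours, the hierarchy returned by cv2.findContours with cv2.RETR_CCOMP could be used to keep only the contours that have a parent:
import cv2
import numpy as np
image = cv2.imread('3.jpg')
image = cv2.resize(image, (500, 500))
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Bright areas become white; the dark markings become holes inside the white regions
ret, thresh = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY)
# RETR_CCOMP gives a two-level hierarchy: outer boundaries and the holes inside them
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
# hierarchy[0][i] = [next, previous, first_child, parent]; parent != -1 marks a child (hole) contour
children = [c for c, h in zip(contours, hierarchy[0]) if h[3] != -1]
cv2.drawContours(image, children, -1, (0, 255, 0), 2)
cv2.imshow('Child contours', image)
cv2.waitKey(0)
cv2.destroyAllWindows()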
Related
The task I want to do looks pretty simple: I take as input several images with an object centered in the photo and a little color chart needed for other purposes. My code normally works for the majority of cases, but sometimes it fails miserably and I just can't understand why.
For example (these are the source images), it works correctly on this https://imgur.com/PHfIqcb but not on this https://imgur.com/qghzO3V
Here's the code for the relevant part:
import cv2
import numpy as np
img = cv2.imread(path)
height, width, channel = img.shape
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
kernel = np.ones((31, 31), np.uint8)
dil = cv2.dilate(gray, kernel, iterations=1)
_, th = cv2.threshold(dil, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
th_er1 = cv2.bitwise_not(th)
_, contours, _= cv2.findContours(th_er1, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
areas = [cv2.contourArea(c) for c in contours]
max_index = np.argmax(areas)
cnt=contours[max_index]
x,y,w,h = cv2.boundingRect(cnt)
After that I'm just going to crop the image according to the given results (keeping the biggest rectangular contour), basically cutting the photo down to only the main object.
But as I said, with very similar images it sometimes works and sometimes doesn't.
Thank you in advance.
Maybe you could try not using Otsu's method and just set the threshold manually, if that's possible... ;)
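Something like this, for example (the value 190 is just a guess and would need tuning per image):
_, th = cv2.threshold(dil, 190, 255, cv2.THRESH_BINARY)  # fixed threshold instead of Otsu; 190 is a guess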
You can use the Canny edge detector. In the two images, there is a good threshold value to isolate the object in the center of the image. After applying the threshold, we blur the results and apply the Canny edge detector before finding the contours:
import cv2
import numpy as np
def process(img):
    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(img_gray, 190, 255, cv2.THRESH_BINARY_INV)
    img_blur = cv2.GaussianBlur(thresh, (3, 3), 1)
    img_canny = cv2.Canny(img_blur, 0, 0)
    kernel = np.ones((5, 5))
    img_dilate = cv2.dilate(img_canny, kernel, iterations=1)
    return cv2.erode(img_dilate, kernel, iterations=1)

def get_contours(img):
    contours, hierarchies = cv2.findContours(process(img), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    cnt = max(contours, key=cv2.contourArea)
    cv2.drawContours(img, [cnt], -1, (0, 255, 0), 30)
    x, y, w, h = cv2.boundingRect(cnt)
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 30)

img = cv2.imread("image.jpeg")
get_contours(img)
cv2.imshow("Result", img)
cv2.waitKey(0)
Input images:
Output images:
The green outlines are the contours of the objects, and the red outlines are the bounding boxes of the objects.
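If the goal is to crop rather than draw, the same bounding box can be used to slice the original image (a small sketch, assuming x, y, w and h come from cv2.boundingRect as above):
crop = img[y:y + h, x:x + w]  # cut the photo down to the detected object
cv2.imshow("Cropped", crop)
cv2.waitKey(0)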
I'm working on a script that uses different OpenCV operations to process an image of solar panels.
Original image:
Final image:
Can anybody help me with code to detect only the grid lines in the image, removing the other areas?
My code is the following:
image = cv2.imread("solar3.jpg")
cv2.imshow("Image", image)
image = cv2.resize(image, (482,406))
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
cv2.imshow("gray", gray)
blur = cv2.GaussianBlur(gray, (0,0), 2)
cv2.imshow("blur", blur)
lap = cv2.Laplacian(blur, cv2.CV_32F)
cv2.imshow("lap",lap)
_,thresh = cv2.threshold(lap, 0, 0, cv2.THRESH_TOZERO)
cv2.imshow("thresh", thresh)
thresh = thresh.astype(np.uint8)
contours, hierachy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
c = 0
for i in contours:
    area = cv2.contourArea(i)
    if area > 1000/2:
        cv2.drawContours(image, contours, c, (0, 255, 0), 2)
    c += 1
cv2.imshow("Final Image", image)
cv2.waitKey(0)
Can anybody help me with code for detecting all the grid lines in the image?
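One direction that might also be worth trying (just a sketch, the parameters are guesses and untuned) is to detect the grid lines directly with a probabilistic Hough transform instead of filtering contours by area:
import cv2
import numpy as np
image = cv2.imread("solar3.jpg")
image = cv2.resize(image, (482, 406))
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)
# threshold, minLineLength and maxLineGap are guesses and would need tuning
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80, minLineLength=100, maxLineGap=10)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(image, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.imshow("Grid lines", image)
cv2.waitKey(0)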
I wrote a simple script to search for circles in documents (since seals have a rounded shape).
But due to the poor image quality, the seal outline is fuzzy and OpenCV cannot always detect it. I edited the picture in Photoshop, enhanced the dark colors, saved it, and sent it for processing. That helped: OpenCV identified the circle of the low-quality seal (there are no such problems with high-quality documents). My code:
import numpy as np
import cv2
img = cv2.imread(r"C:\buh\doc.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# I tried experimenting with blur, but opencv doesn't see circles in that case
# blurred = cv2.bilateralFilter(gray.copy(), 15, 15, 15 )
# imS = cv2.resize(blurred, (960, 540))
# cv2.imshow('img', imS)
# cv2.waitKey(0)
minDist = 100
param1 = 30 #500
param2 = 100 #200 #smaller value-> more false circles
minRadius = 90
maxRadius = 200 #10
# docstring of HoughCircles: HoughCircles(image, method, dp, minDist[, circles[, param1[, param2[, minRadius[, maxRadius]]]]]) -> circles
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1, minDist, param1=param1, param2=param2, minRadius=minRadius, maxRadius=maxRadius)
if circles is not None:
    circles = np.uint16(np.around(circles))
    for i in circles[0, :]:
        cv2.circle(img, (i[0], i[1]), i[2], (0, 255, 0), 2)
# Show result for testing:
imS = cv2.resize(img, (960, 540))
cv2.imshow('img', imS)
cv2.waitKey(0)
The seals in the documents are circles as in the photo:
Unfortunately, I cannot add a photo of the document where the original seals are located, since this is private information...
So, I need to enhance the shades of black in the photo before trying to look for circles. How can I do this? I would also welcome other suggestions for improving the contours of the seals (stamps), if someone has already dealt with this.
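To give an idea of what I mean by enhancing the dark shades, something like CLAHE or a simple contrast stretch is what I imagine (a rough sketch, the values are guesses):
# Local contrast enhancement (CLAHE) on the grayscale image; clipLimit and tileGridSize are guesses
clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
gray_eq = clahe.apply(gray)
# Alternatively, a simple linear contrast/brightness adjustment; alpha and beta are guesses
gray_stretched = cv2.convertScaleAbs(gray, alpha=1.5, beta=-40)
# HoughCircles would then run on gray_eq (or gray_stretched) instead of gray
circles = cv2.HoughCircles(gray_eq, cv2.HOUGH_GRADIENT, 1, minDist, param1=param1, param2=param2, minRadius=minRadius, maxRadius=maxRadius)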
Thank you.
Example:
Here's a simple approach:
Obtain binary image. Load image, convert to grayscale, Gaussian blur, then Otsu's threshold.
Merge small contours into a single large contour. We dilate using cv2.dilate to merge circles into a single contour.
Find external contours. Finally, we find external contours with the cv2.RETR_EXTERNAL flag and draw them with cv2.drawContours().
Visualization of the image pipeline
Input image
Threshold for binary image
Dilate
Detected contours in green
Code
import cv2
import numpy as np
# Load image, grayscale, Gaussian blur, Otsu's threshold, dilate
image = cv2.imread('3.PNG')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (3,3), 0)
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3,3))
dilate = cv2.dilate(thresh, kernel, iterations=1)
# Find contours
cnts = cv2.findContours(dilate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    cv2.drawContours(image, [c], -1, (36,255,12), 3)
cv2.imshow('image', image)
cv2.imshow('dilate', dilate)
cv2.imshow('thresh', thresh)
cv2.waitKey()
I try to find the contour of the object using this code:
import cv2
import numpy as np
# Let's load a simple image with 3 black squares
image = cv2.imread('C://Users//gfg//shapes.jpg')
cv2.waitKey(0)
# Grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Find Canny edges
edged = cv2.Canny(gray, 30, 200)
cv2.waitKey(0)
# Finding Contours
# Use a copy of the image e.g. edged.copy()
# since findContours alters the image
contours, hierarchy = cv2.findContours(edged,
cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cv2.imshow('Canny Edges After Contouring', edged)
cv2.waitKey(0)
print("Number of Contours found = " + str(len(contours)))
# Draw all contours
# -1 signifies drawing all contours
cv2.drawContours(image, contours, -1, (0, 255, 0), 3)
cv2.imshow('Contours', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
I am trying to define a user frame so that the contours are expressed relative to that new user frame (right now the contours are relative to the camera frame, 0-640 and 0-480).
The new frame can be 5,5 - 10,6 - 7,10 - 15,12.
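What I think I need is roughly this (a sketch only; it assumes the four numbers are the corners of the new user frame and that I know which four pixel points in the camera image they correspond to):
# Four reference points in the camera frame (pixels) -- the corner choice here is only an assumption
src_pts = np.float32([[0, 0], [640, 0], [640, 480], [0, 480]])
# The same four points expressed in the new user frame (the pairing with the corners is an assumption)
dst_pts = np.float32([[5, 5], [10, 6], [15, 12], [7, 10]])
# Homography that maps camera coordinates into the user frame
M = cv2.getPerspectiveTransform(src_pts, dst_pts)
# Re-express every contour point in the new frame
contours_user = [cv2.perspectiveTransform(c.astype(np.float32), M) for c in contours]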
I have a portrait image, to which I:
Convert to grayscale
Then convert to binary
Then apply a morphological erosion operation.
Then apply a morphological dilation operation.
Now, as the last step, I am trying to find the contours and draw them. The contours are found and drawn, but they are drawn only in white, which makes it look as if the contours are not drawn at all. What am I doing wrong?
Here is my code:
import numpy as np
import cv2
img = cv2.imread("Test1.jpg")
image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cv2.imshow("GrayScaled", image)
cv2.waitKey(0)
ret, thresh = cv2.threshold(image, 127, 255, cv2.THRESH_BINARY_INV)
cv2.imshow("Black&White", thresh)
cv2.waitKey(0)
kernel1 = np.ones((2,2),np.uint8)
erosion = cv2.erode(thresh,kernel1,iterations = 4)
cv2.imshow("AfterErosion", erosion)
cv2.waitKey(0)
kernel2 = np.ones((1,1),np.uint8)
dilation = cv2.dilate(erosion,kernel2,iterations = 5)
cv2.imshow("AfterDilation", dilation)
cv2.waitKey(0)
contours, hierarchy = cv2.findContours(dilation, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cv2.drawContours(dilation, contours, -1, (255, 0, 0), 2)
cv2.imshow("Contours", dilation)
cv2.waitKey(0)
cv2.destroyAllWindows()
Here are images step by step:
Original Image:
GrayScaled Image:
Binary Image:
After Erosion:
After Dilation:
Contour:
I am defining the color of the contour boundary to be red in the above code. So why is it not showing red boundaries?
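A minimal sketch of one thing to check (assuming the issue is the single-channel canvas): dilation has only one channel, so a BGR color such as (255, 0, 0) collapses to the single value 255 and is drawn as white; drawing onto a 3-channel copy keeps the color:
canvas = cv2.cvtColor(dilation, cv2.COLOR_GRAY2BGR)  # 3-channel copy so a color can actually be stored
cv2.drawContours(canvas, contours, -1, (0, 0, 255), 2)  # (0, 0, 255) is red in BGR
cv2.imshow("Contours", canvas)
cv2.waitKey(0)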