Can anyone share code for changing the background color of the attached image to white, so that OCR software can recognize the foreground digits?
You need to convert to grayscale, boost the contrast, and apply a threshold to get this result:
import cv2
import numpy as np

img = cv2.imread("digits.jpg", cv2.IMREAD_COLOR)
# Use the L (lightness) channel of the LAB color space as the grayscale image.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)[..., 0]
blurred = cv2.GaussianBlur(gray, (7, 7), 0)
# CLAHE boosts local contrast before thresholding.
clahe = cv2.createCLAHE(clipLimit=5.0, tileGridSize=(32, 32))
contrast = clahe.apply(blurred)
ret, thresh = cv2.threshold(contrast, 20, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
while True:
    cv2.imshow("result", thresh)
    k = cv2.waitKey(30) & 0xff
    if k == 27:  # Esc quits
        break
cv2.destroyAllWindows()
I have an image of a bean on a white backdrop; the issue is that the backdrop is not perfectly white (as in 255,255,255). I have tried using the cv2.threshold() method but I kept getting a deformed image with spots. What is the best way to achieve this?
My Image
My Code
import cv2

img = cv2.imread("bean.jpg")  # path is a placeholder for the attached image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY)
img[thresh == 255] = 0  # paint near-white pixels black
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
erosion = cv2.erode(img, kernel, iterations=1)
cv2.namedWindow('image', cv2.WINDOW_NORMAL)
cv2.imshow("image", erosion)
cv2.waitKey(0)
In the end, the beans should be surrounded only by black pixels!
This could be a solution:
import cv2
import numpy as np

img = cv2.imread("data/bob.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img_blured = cv2.blur(gray, (5, 5))
ret, thres = cv2.threshold(img_blured, 130, 255, cv2.THRESH_BINARY)
neg = cv2.bitwise_not(thres)  # beans become white on black
erosion = cv2.erode(neg, np.ones((6, 6), np.uint8), iterations=1)
cv2.imshow("erosion", erosion)
img[erosion == 0] = 0  # black out everything outside the eroded bean mask
cv2.imshow("image", img)
cv2.waitKey(0)
Here I use cv2.threshold with a lower cutoff than yours (130 instead of 240), so it covers a bigger range, but I blur the image first, then negate and erode the result.
This cuts off a little of the bean itself; if that is critical, you should use a completely different algorithm, for example cv2.Canny to find the contour of the bean and process it further.
I'm trying to remove the blue background color in the image below.
The blue can be light or deep.
I tried the cv2.inRange() function but failed.
How can I do that?
import sys
import cv2
import numpy as np

image = cv2.imread(sys.argv[1])
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
# Blue hues span roughly 85-135 in OpenCV's 0-179 hue range;
# the wide S and V ranges cover both light and deep blue.
lower_blue = np.array([85, 50, 40])
upper_blue = np.array([135, 255, 255])
mask = cv2.inRange(hsv, lower_blue, upper_blue)
image[mask > 0] = (255, 255, 255)  # paint blue pixels white
cv2.imshow('image', image)
cv2.waitKey(0)
I removed the background and also did OCR on the image. Here is the result:
And the code I used:
import pytesseract
import cv2
pytesseract.pytesseract.tesseract_cmd = 'C:\\Program Files (x86)\\Tesseract-OCR\\tesseract.exe'
img = cv2.imread('idText.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
adaptiveThresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 35, 90)
config = '-l eng --oem 1 --psm 3'
text = pytesseract.image_to_string(adaptiveThresh, config=config)
print("Result: " + text)
cv2.imshow('original', img)
cv2.imshow('adaptiveThresh', adaptiveThresh)
cv2.waitKey(0)
Hope this helps.
You can try thresholding to obtain a binary image, and morphological transformations to smooth the text:
import cv2

image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 105, 255, cv2.THRESH_BINARY_INV)[1]
thresh = 255 - thresh  # invert back: net effect is a plain THRESH_BINARY
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
result = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=1)
cv2.imshow('thresh', thresh)
cv2.imshow('result', result)
cv2.imwrite('result.png', result)
cv2.waitKey()
I need to detect black objects in a real-time video. I found code on the internet for detecting blue objects, so I changed the upper and lower HSV values according to the BGR colour code (I am not clear on how to convert BGR to HSV), but it does not detect the black object in the video. The code I am using for blue detection is:
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
while True:
    _, frame = cap.read()
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    lower_red = np.array([110, 50, 50])
    upper_red = np.array([130, 255, 255])
    mask = cv2.inRange(hsv, lower_red, upper_red)
    res = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.imshow('frame', frame)
    cv2.imshow('mask', mask)
    cv2.imshow('res', res)
    k = cv2.waitKey(5) & 0xFF
    if k == 27:
        break
cv2.destroyAllWindows()
cap.release()
The output for blue color is:
original image:
The code I'm using for black is:
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
while True:
    _, frame = cap.read()
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    lower_red = np.array([0, 0, 0])
    upper_red = np.array([0, 0, 0])
    mask = cv2.inRange(hsv, lower_red, upper_red)
    res = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.imshow('frame', frame)
    cv2.imshow('mask', mask)
    cv2.imshow('res', res)
    k = cv2.waitKey(5) & 0xFF
    if k == 27:
        break
cv2.destroyAllWindows()
cap.release()
Result:
Nothing is displayed in the result for black. I think the problem is in the HSV conversion, but I am not sure. Also, the detected blue image is not accurate at all and contains noise. How can I achieve black detection and reduce the noise?
The easiest way to detect black is a binary threshold in grayscale. Black pixels always have a very low value, so it is easier to do this in a one-channel image than in a three-channel one. I would recommend:
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 15, 255, cv2.THRESH_BINARY_INV)
Adjust the value 15 until you get reasonable results; a lower value preserves only darker pixels. If you also want the locations of the pixels, you can find the contours, i.e.:
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2:]  # OpenCV 3.x returns (image, contours, hierarchy); 4.x returns (contours, hierarchy)
and then draw the contour back onto the original frame with:
frame = cv2.drawContours(frame, contours, -1,(0,0,255),3)
Alternatively, you might find it easier to invert the image first, so that you are extracting white pixels instead. This avoids confusion between the pixels you want to extract and the mask value (0). You can do this with a simple NumPy subtraction, then set your threshold to a very high value, i.e.:
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = 255 - gray  # invert: dark objects become bright
ret, thresh = cv2.threshold(gray, 225, 255, cv2.THRESH_BINARY)  # keep only the brightest (originally darkest) pixels
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2:]
frame = cv2.drawContours(frame, contours, -1, (0, 0, 255), 3)
black = np.array([0, 0, 0], np.uint8)
grayScale = np.array([0, 0, 29], np.uint8)

The value (29) depends on how much "brightness" you want to allow.
This page is where you can test your color ranges.
I'm attempting to extract a blue object, very much like the one described in https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.html#object-tracking
An example of a raw image with three blue shapes to extract is here:
The captured image is noisy and the unfiltered shape detection returns hundreds to thousands of "blue" shapes. In order to mitigate this, I applied the following steps:
Blurring the image before filtering it, resulting in closed surfaces
Converting the masked image (after bitwise_and) back to grayscale
Applying an OTSU threshold
Finally, detect the contours
The complete code is:
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    blur = cv2.GaussianBlur(frame, (15, 15), 0)
    hsv = cv2.cvtColor(blur, cv2.COLOR_BGR2HSV)
    lower_red = np.array([115, 50, 50])
    upper_red = np.array([125, 255, 255])
    mask = cv2.inRange(hsv, lower_red, upper_red)
    blue = cv2.bitwise_and(blur, blur, mask=mask)
    gray = cv2.cvtColor(blue, cv2.COLOR_BGR2GRAY)
    (T, ted) = cv2.threshold(gray, 0, 255, cv2.THRESH_OTSU)
    im2, contours, hierarchy = cv2.findContours(
        ted, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    for cnt in contours:
        cv2.drawContours(frame, [cnt], 0, (0, 255, 0), 3)
    font = cv2.FONT_HERSHEY_SIMPLEX
    cv2.putText(frame, str(len(contours)), (10, 500), font, 2, (0, 0, 255), 2, cv2.LINE_AA)
    cv2.imshow('mask', mask)
    cv2.imshow('blue', blue)
    cv2.imshow('grey', gray)
    cv2.imshow('thresholded', ted)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
Unfortunately, there are still 6-7 contours left whereas there should be three.
How can I further refine image processing to get just the three shapes?
You could use morphological operations coupled with connected components analysis:
Apply erosion on grayscale image: https://docs.opencv.org/2.4/modules/imgproc/doc/filtering.html?highlight=erode#erode
Find connected components, function cv::connectedComponents (https://docs.opencv.org/3.1.0/d3/dc0/group__imgproc__shape.html#gac2718a64ade63475425558aa669a943a)
Retain connected components whose area is bigger than a given threshold.
Dilate the resulting mask: https://docs.opencv.org/2.4/modules/imgproc/doc/filtering.html?highlight=dilate#dilate
If you're looking for specific shapes, you could also use shape descriptors.
Finally, I suggest trying to replace the Gaussian filter with a bilateral filter (https://docs.opencv.org/3.0-beta/modules/imgproc/doc/filtering.html#bilateralfilter) to better preserve the shapes. If you want an even better filter, have a look at this tutorial on the NL-means filter (https://docs.opencv.org/3.3.1/d5/d69/tutorial_py_non_local_means.html).
I want to use OpenCV Canny edge detection to crop the image along the edge. I have written the edge detection, but I still don't know how to crop the image along the edge.
import cv2
import numpy as np

image = cv2.imread("test.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # convert image to gray
blur_image = cv2.GaussianBlur(gray, (5, 5), 0)
edged = cv2.Canny(blur_image, 180, 900)

# apply a closing to join the edge fragments
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (10, 10))
close = cv2.morphologyEx(edged, cv2.MORPH_CLOSE, kernel)

# OpenCV 3.x signature; in 4.x: contours, hierarchy = cv2.findContours(...)
_, contours, hierarchy = cv2.findContours(close.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

cv2.drawContours(image, contours, -1, (0, 0, 255), 3)
cv2.imshow("result", image)
cv2.waitKey(0)
I just want to crop the image along the red contour shown above.