OpenCV Splitting Contours - python

I'm trying to detect a hand with OpenCV in Python.
I am working on this thresholded image:
And this is the state after drawing the contours:
I am trying to detect the hand, but the contour is too big; it captures my whole body. I need it like this:
My code:
import cv2

orImage = cv2.imread("f.png")
image = cv2.cvtColor(orImage, cv2.COLOR_BGR2GRAY)
image = cv2.blur(image, (15, 15))
(_, img_th) = cv2.threshold(image, 96, 255, 1)
(contours, _) = cv2.findContours(img_th, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if cv2.contourArea(c) > 15:
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(image, (x - 20, y - 20), (x + w + 20, y + h + 20), (0, 255, 0), 2)
cv2.drawContours(image, contours, -1, (255, 0, 0), 2)
cv2.imwrite("hi.jpg", image)
Thanks!

I have a solution (I got some help from HERE; that site has many other wonderful tutorials on image processing for OpenCV users).
I first converted the image you have uploaded to HSV color space:
HSV = cv2.cvtColor(orImage, cv2.COLOR_BGR2HSV)
I then set an approximate range for skin detection once the image is converted to HSV color space:
import numpy as np

l = np.array([0, 48, 80], dtype="uint8")
u = np.array([20, 255, 255], dtype="uint8")
I then applied this range to the HSV image:
skinDetect = cv2.inRange(HSV, l, u)
This is what I obtained (I also resized the image to make it smaller):
Now you can find the biggest contour in this image followed by morphological operations to obtain the hand perfectly.
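A minimal sketch of those last two steps, assuming skinDetect is the mask obtained from cv2.inRange above (the kernel size and iteration counts are guesses you would tune for your image):
import cv2
import numpy as np

# Clean up the skin mask with morphological closing and opening
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11))
cleaned = cv2.morphologyEx(skinDetect, cv2.MORPH_CLOSE, kernel, iterations=2)
cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_OPEN, kernel, iterations=2)

# Keep only the biggest contour, assumed to be the hand
# (OpenCV 4.x signature; OpenCV 3.x also returns the modified image)
contours, _ = cv2.findContours(cleaned, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
hand = max(contours, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(hand)
cv2.rectangle(orImage, (x, y), (x + w, y + h), (0, 255, 0), 2)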
Hope this helps.

Related

Detecting cardboard Box and Text on it using OpenCV

I want to count cardboard boxes and read a specific label, which will only contain 3 words on a white background, on a conveyor belt using OpenCV and Python. Attached is the image I am using for experiments. The problem so far is that I am unable to detect the complete box due to noise, and if I try to check w and h in x, y, w, h = cv2.boundingRect(cnt), then it simply filters out the text (in this case, ABC is written on the box). Also, the box I have detected has spikes on both the top and bottom, which I am not sure how to filter.
Below is the code I am using:
import cv2

# reading image
image = cv2.imread('img002.jpg')
# convert the image to grayscale format
img_gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# apply binary thresholding
ret, thresh = cv2.threshold(img_gray, 150, 255, cv2.THRESH_BINARY)
# visualize the binary image
cv2.imshow('Binary image', thresh)
# collecting contours
contours, h = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# looping through contours
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 215, 255), 2)
cv2.imshow('img', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
Also please suggest how to crop the text ABC and then apply an OCR on that to read the text.
Many Thanks.
EDIT 2: Many thanks for your answer. Based on your suggestion, I changed the code so that it can check for boxes in a video. It worked like a charm, except that it failed to identify one box for a long time. Below are my code and a link to the video I used. I have a couple of questions around this, as I am new to OpenCV, if you can find some time to answer.
import cv2
import numpy as np
from time import time as timer

def get_region(image):
    contours, hierarchy = cv2.findContours(image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    c = max(contours, key=cv2.contourArea)
    black = np.zeros((image.shape[0], image.shape[1]), np.uint8)
    mask = cv2.drawContours(black, [c], 0, 255, -1)
    return mask

cap = cv2.VideoCapture("Resources/box.mp4")
ret, frame = cap.read()
fps = 60
fps /= 1000
framerate = timer()
elapsed = int()

while(1):
    start = timer()
    ret, frame = cap.read()
    if not ret:
        # stop when the video ends instead of processing an empty frame
        break
    # convert the frame to the HSV color space
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Performing threshold on the hue channel `hsv[:,:,0]`
    thresh = cv2.threshold(hsv[:,:,0], 127, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
    mask = get_region(thresh)
    masked_img = cv2.bitwise_and(frame, frame, mask=mask)
    newImg = cv2.cvtColor(masked_img, cv2.COLOR_BGR2GRAY)
    # collecting contours
    c, h = cv2.findContours(newImg, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    cont_sorted = sorted(c, key=cv2.contourArea, reverse=True)[:5]
    x, y, w, h = cv2.boundingRect(cont_sorted[0])
    cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 5)
    #cv2.imshow('frame', masked_img)
    cv2.imshow('Out', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
    diff = timer() - start
    while diff < fps:
        diff = timer() - start

cap.release()
cv2.destroyAllWindows()
Link to video: https://www.storyblocks.com/video/stock/boxes-and-packages-move-along-a-conveyor-belt-in-a-shipment-factory-a-few-blank-boxes-for-your-custom-graphics-lmgxtwq
Questions:
1. How can we be 100% sure that the rectangle drawn is actually on top of a box and not on the belt or somewhere else?
2. Can you please tell me how I can use the function you provided in the original answer for the other boxes in this new code for the video?
3. Is it the correct way to convert the masked frame to grey again and find contours again to draw a rectangle, or is there a more efficient way to do it?
4. The final version of this code is intended to run on a Raspberry Pi, so what can we do to optimize the code's performance?
Many thanks again for your time.
There are 2 steps to be followed:
1. Box segmentation
We can assume there will be no background change since the conveyor belt is present. We can segment the box using a different color space. In the following I have used HSV color space:
import cv2
import numpy as np

img = cv2.imread('box.jpg')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Performing threshold on the hue channel `hsv[:,:,0]`
th = cv2.threshold(hsv[:,:,0], 127, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
Masking the largest contour in the binary image:
def get_region(image):
    contours, hierarchy = cv2.findContours(image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    c = max(contours, key=cv2.contourArea)
    black = np.zeros((image.shape[0], image.shape[1]), np.uint8)
    mask = cv2.drawContours(black, [c], 0, 255, -1)
    return mask

mask = get_region(th)
Applying the mask on the original image:
masked_img = cv2.bitwise_and(img, img, mask = mask)
2. Text Detection:
The text region is enclosed in white, which can be isolated again by applying a suitable threshold. (You might want to apply some statistical measure to calculate the threshold)
# Applying threshold at 220 on green channel of 'masked_img'
result = cv2.threshold(masked_img[:,:,1],220,255,cv2.THRESH_BINARY)[1]
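If you then want to crop the detected text and run OCR on it (as asked in the question), one possible sketch is to take the bounding box of the white pixels in result and feed the crop to an OCR engine. pytesseract and the padding value are assumptions here, not part of the original answer:
import numpy as np
import pytesseract  # assumed OCR backend; requires the Tesseract binary to be installed

# Bounding box of all white pixels in the thresholded text mask
ys, xs = np.where(result == 255)
x1, x2 = xs.min(), xs.max()
y1, y2 = ys.min(), ys.max()
pad = 5  # arbitrary padding around the text
text_crop = masked_img[max(y1 - pad, 0):y2 + pad, max(x1 - pad, 0):x2 + pad]
print(pytesseract.image_to_string(text_crop))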
Note:
The code is written for the shared image. For boxes of different sizes you can filter contours with approximately 4 vertices/sides.
# Function to extract rectangular contours above a certain area
def extract_rect(contours, area_threshold):
    rect_contours = []
    for c in contours:
        if cv2.contourArea(c) > area_threshold:
            perimeter = cv2.arcLength(c, True)
            approx = cv2.approxPolyDP(c, 0.02 * perimeter, True)
            if len(approx) == 4:
                cv2.drawContours(image, [approx], 0, (0, 255, 0), 2)
                rect_contours.append(c)
    return rect_contours
Experiment using a statistical value (mean, median, etc.) to find an optimal threshold to detect the text region.
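As a rough example of that idea (the mean-plus-two-sigma rule below is only an assumption to tune, not part of the original approach):
import cv2
import numpy as np

green = masked_img[:, :, 1]
# Use only pixels inside the box mask when computing the statistics
vals = green[mask == 255]
auto_thresh = int(vals.mean() + 2 * vals.std())
result = cv2.threshold(green, auto_thresh, 255, cv2.THRESH_BINARY)[1]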
Your additional questions warranted a separate answer:
1. How can we be 100% sure that the rectangle drawn is actually on top of a box and not on the belt or somewhere else?
PRO: For this very purpose I chose the hue channel of the HSV color space. Shades of grey, white and black (on the conveyor belt) are neutral in this channel. The brown color of the box contrasts with these and can easily be segmented using Otsu's threshold. Otsu's algorithm finds the optimal threshold value without user input.
CON: You might face problems when the boxes are of the same color as the conveyor belt.
2. Can you please tell me how I can use the function you provided in the original answer for the other boxes in this new code for the video?
PRO: If you want to find boxes using edge detection and without using color information, there is a high chance of getting many unwanted edges. By using the extract_rect() function (see the sketch after this list), you can filter contours that:
have approximately 4 sides (quadrilateral)
are above a certain area
CON: If you have parcels/packages/bags that have more than 4 sides, you might need to change this.
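A possible way to reuse it in the video loop is to pass the current frame into the function (a small change from the original extract_rect(), which drew on a global image); the area threshold of 10000 is only a placeholder you would tune for your frame size:
def extract_rect(contours, area_threshold, draw_on):
    # Return contours that are roughly quadrilateral and larger than area_threshold,
    # drawing the accepted ones on draw_on
    rect_contours = []
    for c in contours:
        if cv2.contourArea(c) > area_threshold:
            perimeter = cv2.arcLength(c, True)
            approx = cv2.approxPolyDP(c, 0.02 * perimeter, True)
            if len(approx) == 4:
                cv2.drawContours(draw_on, [approx], 0, (0, 255, 0), 2)
                rect_contours.append(c)
    return rect_contours

# Inside the video loop, after c, h = cv2.findContours(newImg, ...):
#     boxes = extract_rect(c, 10000, frame)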
3. Is it the correct way to convert the masked frame to grey again and find contours again to draw a rectangle, or is there a more efficient way to do it?
I felt this is the best way, because all that remains is the textual region enclosed in white. Applying a high threshold value was the simplest idea in my mind. There might be a better way :)
(I am not in a position to answer the 4th question :) )

How to crop an image along major axis of the blob/contour with a rectangular bounding box using Python

I would like to crop an image which has a hand-drawn highlighted area in orange, as shown below.
The result should be a cropped image along the major axis of the blob or contour, with a rectangular bounding box, as shown below.
Here's what I have tried:
import numpy as np
import cv2

# load the image
image = cv2.imread("frame50.jpg", 1)

# color boundaries [B, G, R]
lower = [0, 3, 30]
upper = [30, 117, 253]

# create NumPy arrays from the boundaries
lower = np.array(lower, dtype="uint8")
upper = np.array(upper, dtype="uint8")

# find the colors within the specified boundaries and apply
# the mask
mask = cv2.inRange(image, lower, upper)
output = cv2.bitwise_and(image, image, mask=mask)
ret, thresh = cv2.threshold(mask, 50, 255, 0)

if (int(cv2.__version__[0]) > 3):
    contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
else:
    im2, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

if len(contours) != 0:
    # find the biggest contour (c) by the area
    c = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(c)
    ROI = image[y:y+h, x:x+w]
    cv2.imshow('ROI', ROI)
    cv2.imwrite('ROI.png', ROI)

cv2.waitKey(0)
This does not seem to work most of the time. For some images, the following happens:
I would like to know if there is a better way to go about this, or how I can fix what I have right now. Note that the highlighted area is hand-drawn and can be of any shape, but it is closed (not left open), and the colour of the highlight is always that shade of orange.
And is there a way to only retain the content inside the circle and black out everything outside it?
EDIT 1:
I was able to fix the wrong clipping by varying the threshold more. But my main query now is: is there a way to only retain the content inside the circle and black out everything outside it? I can see the mask as shown below.
How do I fill this mask, retain the content inside the circle, and black out everything outside it, with the same rectangular bounding box?
Have you tried
image[x:x+w, y:y+h]
And could you check the bounding box with the code below?
cv2.rectangle(thresh, (x, y), (x + w, y + h), (255, 0, 0), 2)
First of all, it is always better to use an HSV image instead of a BGR image for masking (extracting a color). You can do this with the following code:
HSV_Image = cv2.cvtColor(Image, cv2.COLOR_BGR2HSV)
ThreshImage = cv2.inRange(HSV_Image, np.array([0, 28, 191]), np.array([24, 255, 255]))
The range numbers here were found for the orange color in this case.
Image is the input image and ThreshImage is the output image, with the orange region colored white and everything else black.
Now, finding the contours in ThreshImage with the cv2.RETR_EXTERNAL flag will give only one contour, which is the outer boundary of the orange region.
Contours, Hierarchy = cv2.findContours(ThreshImage, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
To crop the orange region:
BoundingRect = cv2.boundingRect(Contours[0])
(x, y, w, h) = BoundingRect
CroppedImage = Image[y:y+h, x:x+w].copy()
"CroppedImage" will store the cropped orange region as desired.
To get the contents of only the inside of the contour:
A bitwise AND operation will be useful here, as we have already detected the contour.
First, we have to create a black image with the same shape as the input image and draw the contour in it, filled with white.
ContourFilledImage = np.zeros(Image.shape, dtype=np.uint8)
cv2.drawContours(ContourFilledImage, Contours, -1, (255, 255, 255), -1)
Now perform a bitwise AND operation on the input image and "ContourFilledImage":
OnlyInnerData = cv2.bitwise_and(ContourFilledImage, Image)
"OnlyInnerData" is the desired output image, containing only the content inside the circle.

Recognizing handwritten digits off a scanned image

I'm trying to read all of the handwritten digits in the scanned image here
I tried looking through it pixel-by-pixel using PIL, cropping the sub-images, then feeding them through a neural network, but the regions being cropped never quite lined up, which led to a lot of inaccuracy.
I've also tried using OpenCV to find all the grey squares, then crop the images and feed them through a neural network, but I couldn't get it to find all of them, or even to miss only a few; it would miss ~30% of the squares. (I'm not very experienced with OpenCV, so I could be messing something up.)
So I'm just looking for a potential idea/solution for this problem; any suggestions would be appreciated. Thanks in advance!
I assume the input image name is "squares.jpg".
First of all, import the required libraries and load the image in both color and grayscale format:
import cv2
import numpy as np
image = cv2.imread("squares.jpg", 1)
image_gray = cv2.imread("squares.jpg", 0)
Then, we perform a simple operation to clean some noise from the input image using np.where() function:
image_gray = np.where(image_gray > 240, 255, image_gray)
image_gray = np.where(image_gray <= 240, 0, image_gray)
Because we want to grab the whole square regions from the image, we need to blur the image a little bit before applying the adaptive thresholding method:
image_gray = cv2.blur(image_gray, (5, 5))
im_th = cv2.adaptiveThreshold(image_gray, 255,
                              cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                              cv2.THRESH_BINARY, 115, 1)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
im_th = cv2.morphologyEx(im_th, cv2.MORPH_OPEN, kernel, iterations=3)
Use contour detection in OpenCV to find all possible regions:
# OpenCV 3.x signature; in OpenCV 4.x, findContours returns only (contours, hierarchy)
_, contours, _ = cv2.findContours(im_th.copy(), cv2.RETR_LIST,
                                  cv2.CHAIN_APPROX_SIMPLE)
contours = sorted(contours, key=cv2.contourArea, reverse=True)
contours.remove(contours[0])  # remove the biggest contour
Finally, try to find the potential square regions based on the ratio of height and width:
square_rects = []
square_areas = []
for i, cnt in enumerate(contours):
    (x, y, w, h) = cv2.boundingRect(cnt)
    ar = w / float(h)
    if 0.9 < ar < 1.1:
        square_rects.append(((x, y), (x+w, y+h)))
        square_areas.append(w*h)  # store area information
We need to remove anything that is too small from the list, as follows:
import statistics

median_size_limit = statistics.median(square_areas) * 0.8
square_rects = [rect for i, rect in enumerate(square_rects)
                if square_areas[i] > median_size_limit]
You can visually check the output by drawing all the rectangles on the original image:
for rect in square_rects:
    cv2.rectangle(image, rect[0], rect[1], (0, 255, 0), 2)
cv2.imwrite("_output_image.png", image)
cv2.imshow("image", image)
cv2.waitKey()
You can use "square_rects" to locate all the squares and crop them from the original image.
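For example, a small sketch of that cropping step (each entry of square_rects is a ((x1, y1), (x2, y2)) pair from the loop above; the output filenames are arbitrary):
# Crop every detected square from the original color image and save it
square_crops = [image[y1:y2, x1:x2] for ((x1, y1), (x2, y2)) in square_rects]
for i, crop in enumerate(square_crops):
    cv2.imwrite("square_%03d.png" % i, crop)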
The following is the preview of the final result.
Cheers.

Finding coordinates of point located on a curve in an image using Python

I have an image and I want to find the coordinates of some points that are located on a curve in this image. Please take a look at the image.
The red points are the points whose coordinates I need to calculate. Could you please tell me how I can do this in Python?
Thanks in advance for your help.
Best
First convert the image to HSV and separate out the red dots. Then getting their coordinates is simple by finding contours and their moments. In case you are using OpenCV:
Output:
Code:
import cv2
import numpy as np

frame = cv2.imread("dots.jpg")
dots = np.zeros_like(frame)
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
lower_hsv = np.array([112, 176, 174])
higher_hsv = np.array([179, 210, 215])
mask = cv2.inRange(hsv, lower_hsv, higher_hsv)
cnts, h = cv2.findContours(mask, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
mnts = [cv2.moments(cnt) for cnt in cnts]
centroids = [(int(round(m['m10'] / m['m00'])), int(round(m['m01'] / m['m00']))) for m in mnts]
for c in centroids:
    cv2.circle(dots, c, 5, (0, 255, 0))
    print(c)
cv2.imshow('red_dots', dots)
cv2.waitKey(0)
cv2.destroyAllWindows()

Circular contour detection in an image python opencv

I am trying to have the circle detected in the following image.
So I did color thresholding and finally got this result.
Because the lines in the center have been removed, the circle is split into many small parts, so if I do contour detection on this, it can only give me each piece separately.
But is there a way I can somehow combine the contours so I can get a circle instead of just pieces of it?
Here is my code for color thresholding:
import cv2
import numpy as np

# img is the input image loaded earlier
blurred = cv2.GaussianBlur(img, (9, 9), 9)
ORANGE_MIN = np.array((12, 182, 221), np.uint8)
ORANGE_MAX = np.array((16, 227, 255), np.uint8)
hsv_disk = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
disk_threshed = cv2.inRange(hsv_disk, ORANGE_MIN, ORANGE_MAX)
The task is much easier when performed with the red plane only.
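A minimal sketch of what that could look like, assuming the orange disk is brightest in the red plane (the filename and the threshold value of 170 are placeholders to tune):
import cv2

img = cv2.imread("disk.jpg")  # placeholder filename
red = img[:, :, 2]            # red plane of a BGR image
_, red_mask = cv2.threshold(red, 170, 255, cv2.THRESH_BINARY)
cv2.imwrite("red_mask.png", red_mask)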
I guess there was a problem with the thresholds for color segmentation, so the idea here was to generate a binary mask. By inspection, your region of interest seems to be brighter than the other regions of the input image, so thresholding can simply be done on a grayscale image to simplify the task. Note: you may change this step as per your requirement. After you are satisfied with the threshold output, you may use cv2.convexHull() to get the convex shape of your contour.
Also keep in mind to select the largest contour and ignore the small contours. The following code can be used to generate the required output:
import cv2
import numpy as np

# Loading the input image
img = cv2.imread("/Users/anmoluppal/Downloads/3xGG4.jpg")

# Converting the input image to grayscale
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Thresholding the image to get a binary mask
ret, img_thresh = cv2.threshold(img_gray, 145, 255, cv2.THRESH_BINARY)

# Dilating the mask image
kernel = np.ones((3, 3), np.uint8)
dilation = cv2.dilate(img_thresh, kernel, iterations=3)

# Getting all the contours (OpenCV 3.x signature; OpenCV 4.x returns only contours and hierarchy)
_, contours, __ = cv2.findContours(dilation, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

# Finding the index of the largest contour
largest_contour_area = 0
largest_contour_area_idx = 0
for i in range(len(contours)):
    if cv2.contourArea(contours[i]) > largest_contour_area:
        largest_contour_area = cv2.contourArea(contours[i])
        largest_contour_area_idx = i

# Get the convex hull for the largest contour
hull = cv2.convexHull(contours[largest_contour_area_idx])

# Drawing the contours for debugging purposes
img = cv2.drawContours(img, [hull], 0, [0, 255, 0])
cv2.imwrite("./garbage.png", img)
