Improving background extraction with shiny objects (OpenCV/Python) - python

I've been working on a 'fun' solution that combines multiple videos from a security camera into one video, the idea being to condense foreground motion from many hours into a few seconds for a 'quick preview'. Thanks to Stephen and Bahramdum for getting me on the right path.
The project is open source, for anyone to see. So far, I've played with the following background extractions:
OpenCV BackgroundSubtraction using a variety of algorithms
Mask R-CNN
Yolov3/TinyYolo
Optical flow
(I haven't yet tried detection+centroid tracking, will do so next)
Based on my experiments so far, OpenCV's background extraction generally works the best, because it extracts foreground purely based on motion. Plus, it's very fast. True, it also extracts things like moving leaves, but we can work on removing those.
Here is an example of 3 hours of video blended into one short video.
https://youtu.be/C-mJfzvFTdg
My current challenge is that it is doing a bad job with shiny objects, like cars.
Here is an example:
Background subtraction consistently does a bad job with extracting polygons for shiny objects and findContours does no better either.
I've tried several options, but my current approach is documented here, the gist of which is:
1. Convert the frame to HSV
2. Remove intensity (I read this in another SO thread for shiny objects)
3. Apply background subtraction
4. Clean up outside noise with MORPH_OPEN
5. Blur the mask to hopefully connect nearby white blobs
6. Find contours on the new mask
7. Only keep contours above a certain minimum area
8. Create a new mask, where we draw only these contours with fill
9. Do a final dilation to connect close filled contours of the new mask
10. Use this new mask to extract the foreground from the frame and overlay it onto the current blended video
Would anyone have suggestions on how to improve extraction for reflective objects?
self.fgbg = cv2.createBackgroundSubtractorMOG2(detectShadows=False,
                                               history=self.history)

frame_hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
frame_hsv[:, :, 2] = 0  # zero out the V channel to see if removing intensity helps
# gray = cv2.cvtColor(frame_hsv, cv2.COLOR_BGR2GRAY)
# create initial background subtraction
frame_mask = self.fgbg.apply(frame_hsv)
# remove noise
frame_mask = cv2.morphologyEx(frame_mask, cv2.MORPH_OPEN, self.kernel_clean)
# blur to merge nearby masks, hopefully
frame_mask = cv2.medianBlur(frame_mask, 15)
# frame_mask = cv2.GaussianBlur(frame_mask, (5,5), cv2.BORDER_DEFAULT)
# frame_mask = cv2.blur(frame_mask, (20,20))
h, w, _ = frame.shape
new_frame_mask = np.zeros((h, w), dtype=np.uint8)
copy_frame_mask = frame_mask.copy()
# find contours of mask
relevant = False
ctrs, _ = cv2.findContours(copy_frame_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
rects = []
# only select relevant contours
for contour in ctrs:
    area = cv2.contourArea(contour)
    if area >= self.min_blend_area:
        x, y, w, h = cv2.boundingRect(contour)
        pts = Polygon([[x, y], [x + w, y], [x + w, y + h], [x, y + h]])
        if g.poly_mask is None or g.poly_mask.intersects(pts):
            relevant = True
            cv2.drawContours(new_frame_mask, [contour], -1, (255, 255, 255), -1)
            rects.append([x, y, w, h])
# do a final dilation to combine the filled contours
frame_mask = cv2.dilate(new_frame_mask, self.kernel_fill, iterations=5)

I found way too many variations under different conditions to get something predictable using OpenCV's background extraction.
So I switched to YOLOv3 for object identification (added it as a new option, actually). While Tiny YOLO was pretty terrible, YOLOv3 seems adequate, albeit much slower.
Unfortunately, given that YOLOv3 only produces bounding rectangles and not object masks, I had to switch to cv2's addWeighted method to blend each new object on top of the blended frame.
Example: https://github.com/pliablepixels/zmMagik/blob/master/sample/1.gif
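For anyone curious, here is a minimal sketch of that cv2.addWeighted blending step, assuming blended_frame is the accumulating preview frame and box is a YOLOv3 detection rectangle (the function name and the alpha value are illustrative, not the project's actual code):

import cv2

def blend_detection(blended_frame, frame, box, alpha=0.6):
    # box is (x, y, w, h) from the detector; alpha controls how strongly
    # the new object shows through over the accumulated preview.
    x, y, w, h = box
    roi_old = blended_frame[y:y+h, x:x+w]
    roi_new = frame[y:y+h, x:x+w]
    # Weighted sum of the two regions; no per-pixel mask is needed since
    # YOLOv3 only gives a rectangle, not an object mask.
    blended_frame[y:y+h, x:x+w] = cv2.addWeighted(roi_new, alpha, roi_old, 1 - alpha, 0)
    return blended_frame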

Related

Detecting overlapping objects with very noisy background

I am currently trying to detect some reflective, overlapping objects in several images. These images all come with noisy backgrounds, and the objects I want to detect overlap badly.
Here are some examples of the images:
(example images: overlapping objects, and sometimes the background can be bad too)
Besides being able to detect the objects, I also want to count them in the future (so when there's an overlap it needs to know how many objects are overlapping).
What I tried was denoising the image first and then applying Canny edge detection.
import cv2
from matplotlib import pyplot as plt

image = cv2.imread('input.jpg')  # load the image (file name illustrative)
image = cv2.fastNlMeansDenoisingColored(image, None, 10, 10, 7, 21)  # denoise twice
image = cv2.fastNlMeansDenoisingColored(image, None, 10, 10, 7, 21)
imageblur = cv2.GaussianBlur(image, (5, 5), 0)  # blur
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edged = cv2.Canny(gray, 10, 250)
hist = cv2.calcHist([gray], [0], None, [256], [0, 256])  # check pixel histogram
# plt.hist(gray.ravel(), 256, [0, 256]); plt.show()
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
dilated = cv2.dilate(edged, kernel, iterations=1)
closed = cv2.morphologyEx(dilated, cv2.MORPH_CLOSE, kernel)
imcompare = cv2.hconcat([edged, closed])
imcompare = cv2.resize(imcompare, (0, 0), fx=0.7, fy=0.7)
cv2.imshow('Compare', imcompare)
(cnts, _) = cv2.findContours(closed.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
And these are my results:
(results: without bounding boxes, and with bounding boxes)
I'm not an expert in CV, and I know what I'm trying right now probably won't get me to the desired result, but I really hope I can get this project done, so any hints or directions are greatly appreciated! I'm willing to study any material that may help; I just want a starting point that I can dive into.
Btw, I also tried looking into different color spaces, pixel histograms, watershed, etc., but one of the challenging parts for me is that my "object" does not have a uniform color; it covers a big range of intensity (since it's reflective), and the background can be distracting too. I also thought about using a neural network, YOLO, or some other ML method, but I don't know which would be best (since I also want to separate the overlapping objects).
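(For reference, a minimal sketch of the marker-based watershed route mentioned above, which can also give a count of touching objects. It assumes mask is a binary uint8 foreground mask, e.g. the closed edge image from the code above, and image is the corresponding BGR image; all thresholds are illustrative.)

import cv2
import numpy as np

def split_touching_objects(image, mask):
    kernel = np.ones((3, 3), np.uint8)
    # Sure background: grow the mask outward.
    sure_bg = cv2.dilate(mask, kernel, iterations=3)
    # Sure foreground: pixels far from any blob boundary.
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
    sure_fg = np.uint8(sure_fg)
    unknown = cv2.subtract(sure_bg, sure_fg)
    # Label each sure-foreground region as a separate marker.
    n_labels, markers = cv2.connectedComponents(sure_fg)
    markers = markers + 1            # background becomes 1 instead of 0
    markers[unknown == 255] = 0      # leave the uncertain band for watershed
    markers = cv2.watershed(image, markers)
    # Every label greater than 1 is one separated object, so n_labels - 1
    # is an estimate of the object count even when blobs touch.
    return markers, n_labels - 1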
Thanks for any response!

How can I smoothly detect the welding joint using OpenCV-Python?

I have tried to detect the welding joint (bead) using the code attached in the last part of this question. I aim to draw a contour on the joint as shown in the third image, but my results are poor and don't look similar to the expected results. Here is a summary of what I did (the code itself should be clear):
1. Reading the image (.jpg format)
2. Image blurring
3. Thresholding
4. Morphological operations
5. Creating mask
6. Finding contour
But the results are not promising. How can I improve from here?
import cv2
import numpy as np

img = cv2.imread('weld.jpg')  # load the image (file name illustrative)
img_new = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur_image = cv2.bilateralFilter(img_new, 5, 21, 21)
thresh = cv2.adaptiveThreshold(blur_image, 255,
                               cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 15, 3)
kernel = np.ones((5, 5), dtype='uint8')
thresh_dilated = cv2.dilate(thresh, kernel, iterations=1)
# keep only connected components above a minimum size
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(thresh_dilated, connectivity=4)
sizes = stats[1:, -1]
min_size = 5000
num_labels = num_labels - 1
img2 = np.zeros((labels.shape))
for i in range(0, num_labels):
    if sizes[i] >= min_size:
        img2[labels == i + 1] = 255
closing = cv2.morphologyEx(img2, cv2.MORPH_CLOSE, kernel)
opening = cv2.morphologyEx(closing, cv2.MORPH_OPEN, kernel)
result = opening.copy()
new_result = result.astype(dtype=np.uint8)
# elliptical region-of-interest mask around the joint
black = np.full((new_result.shape[0], new_result.shape[1], 3), (0, 0, 0), np.uint8)
black1 = cv2.ellipse(black, (700, 750), (300, 140), 0, 0, 360, (255, 255, 255), -1)
grayscale = cv2.cvtColor(black1, cv2.COLOR_BGR2GRAY)
ret, b_mask = cv2.threshold(grayscale, 127, 255, 0)
img_result = cv2.bitwise_and(new_result, b_mask)
contours, hierarchy = cv2.findContours(img_result, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(img, contours, -1, (0, 255, 0), 2)
cv2.imshow('result', img)
cv2.waitKey(0)
Read any papers?
Is this for automatic processing of many welding joints on an assembly line? Does this have to work for many other similar images? If so, it does not make sense to fine-tune an algorithm for a single picture.
If yes, then you need to use more images to create the algorithm, maybe use a different light setup, maybe images taken with an IR camera right after the welding to get a "hot" mask. Or use light from the left and right side separately and combine two images for a single mark.
Another thing that would be very helpful is to get a "before" image of the parts, without the welding joint. In that case it could become easy: you would just compute the difference between the images, then do some filtering to remove the welding beads and the reddish layer.
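A minimal sketch of that before/after differencing idea, assuming a before.jpg taken prior to welding; the file names and the threshold are illustrative:

import cv2

before = cv2.imread('before.jpg', cv2.IMREAD_GRAYSCALE)  # part without the weld
after = cv2.imread('after.jpg', cv2.IMREAD_GRAYSCALE)    # same part with the weld
# The absolute per-pixel difference highlights only what changed: the bead.
diff = cv2.absdiff(after, before)
diff = cv2.GaussianBlur(diff, (5, 5), 0)
_, weld_mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
# Clean up spatter and small specks before looking for the joint contour.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
weld_mask = cv2.morphologyEx(weld_mask, cv2.MORPH_OPEN, kernel)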
Edit 1:
Another thing I forgot to mention: please look at the RGB layers separately. This is something you should always try early on. Often there is something useful to see; in your case, for example, the blue layer might be interesting. Please add the layers to your question.
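For instance, a quick way to look at the layers separately (a tiny sketch; the file name is illustrative):

import cv2

img = cv2.imread('weld.jpg')
b, g, r = cv2.split(img)   # OpenCV loads images in BGR order
cv2.imshow('blue', b)
cv2.imshow('green', g)
cv2.imshow('red', r)
cv2.waitKey(0)
cv2.destroyAllWindows()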

Image Processing: Algorithm Improvement for Real-Time FedEx Logo Detector

I've been working on a project involving image processing for logo detection. Specifically, the goal is to develop an automated system for a real-time FedEx truck/logo detector that reads frames from an IP camera stream and sends a notification on detection. Here's a sample of the system in action with the recognized logo surrounded by the green rectangle.
Some constraints on the project:
Uses raw OpenCV (no deep learning, AI, or trained neural networks)
Image background can be noisy
The brightness of the image can vary greatly (morning, afternoon, night)
The FedEx truck/logo can have any scale, rotation, or orientation since it could be parked anywhere on the sidewalk
The logo could potentially be fuzzy or blurry with different shades depending on the time of day
There may be many other vehicles with similar sizes or colors in the same frame
Real-time detection (~25 FPS from IP camera)
The IP camera is in a fixed position and the FedEx truck will always be in the same orientation (never backwards or upside down)
The Fedex Truck will always be the "red" variation instead of the "green" variation
Current Implementation/Algorithm
I have two threads:
Thread #1 - Captures frames from the IP camera using cv2.VideoCapture() and resizes them for further processing. I decided to handle grabbing frames in a separate thread to improve FPS by reducing I/O latency, since cv2.VideoCapture() is blocking. Dedicating an independent thread just to capturing frames allows the main processing thread to always have a frame available to perform detection on (a rough sketch of such a grabber appears below).
Thread #2 - Main processing/detection thread that detects the FedEx logo using color thresholding and contour detection.
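A rough sketch of such a capture thread, assuming a "latest frame only" design; the class name and the way the stream source is passed in are illustrative, not the project's actual code:

import threading
import cv2

class FrameGrabber:
    # Continuously reads frames on a background thread so the detection
    # thread always has the most recent frame available.
    def __init__(self, src):
        self.cap = cv2.VideoCapture(src)
        self.lock = threading.Lock()
        self.frame = None
        self.running = True
        threading.Thread(target=self._update, daemon=True).start()

    def _update(self):
        while self.running:
            ok, frame = self.cap.read()   # the blocking I/O stays on this thread
            if ok:
                with self.lock:
                    self.frame = frame

    def read(self):
        with self.lock:
            return None if self.frame is None else self.frame.copy()

    def stop(self):
        self.running = False
        self.cap.release()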
Overall Pseudo-algorithm
For each frame:
    Find bounding box for purple color of logo
    Find bounding box for red/orange color of logo
    If both bounding boxes are valid/adjacent and contours pass checks:
        Combine bounding boxes
        Draw combined bounding boxes on original frame
        Play sound notification for detected logo
Color thresholding for logo detection
For color thresholding, I have defined HSV (low, high) thresholds for purple and red to detect the logo.
colors = {
    'purple': ([120,45,45], [150,255,255]),
    'red': ([0,130,0], [15,255,255])
}
To find the bounding box coordinates for each color, I follow this algorithm:
Blur the frame
Erode and dilate the frame with a kernel to remove background noise
Convert frame from BGR to HSV color format
Perform a mask on the frame using the lower and upper HSV color bounds with set color thresholds
Find largest contour in the mask and obtain bounding coordinates
After performing a mask, I obtain these isolated purple (left) and red (right) sections of the logo.
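A minimal sketch of those five steps for a single colour, assuming the (lower, upper) HSV bounds from the colors dictionary above; the kernel and blur sizes are illustrative, and the OpenCV 4 findContours signature is used:

import cv2
import numpy as np

def find_color_bbox(frame, lower, upper):
    # Blur, then erode/dilate to knock out small background speckle.
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)
    kernel = np.ones((5, 5), np.uint8)
    cleaned = cv2.erode(blurred, kernel, iterations=1)
    cleaned = cv2.dilate(cleaned, kernel, iterations=1)
    # Convert to HSV and mask with the colour bounds.
    hsv = cv2.cvtColor(cleaned, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
    # The largest contour in the mask gives the bounding coordinates.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(largest)   # (x, y, w, h)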
False positive checks
Now that I have the two masks, I perform checks to ensure that the found bounding boxes actually form a logo. To do this, I use cv2.matchShapes() which compares the two contours and returns a metric showing the similarity. The lower the result, the higher the match. In addition, I use cv2.pointPolygonTest() which finds the shortest distance between a point in the image and a contour for additional verification. My false positive process involves:
Checking if the bounding boxes are valid
Ensuring the two bounding boxes are adjacent based on their relative proximity
If the bounding boxes pass the adjacency and similarity metric test, the bounding boxes are combined and a FedEx notification is triggered.
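A rough sketch of what such a check could look like (not the author's actual code): it compares the two contours with cv2.matchShapes and requires the purple box to sit just to the left of the red box; all thresholds are illustrative:

import cv2

def boxes_form_logo(c_purple, c_red, max_gap=50, max_shape_diff=1.5):
    # The shapes of the two logo halves should be reasonably similar.
    similarity = cv2.matchShapes(c_purple, c_red, cv2.CONTOURS_MATCH_I1, 0.0)
    if similarity > max_shape_diff:
        return False
    xp, yp, wp, hp = cv2.boundingRect(c_purple)
    xr, yr, wr, hr = cv2.boundingRect(c_red)
    # The purple section should be directly to the left of the red section
    # and roughly vertically aligned with it.
    horizontally_adjacent = 0 <= (xr - (xp + wp)) <= max_gap
    vertically_aligned = abs(yp - yr) <= max(hp, hr)
    return horizontally_adjacent and vertically_aligned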
Results
This check algorithm is not really robust as there are many false positives and failed detections. For instance, these false positives were triggered.
While this color thresholding and contour detection approach worked in basic cases where the logo was clear, it was severely lacking in some areas:
There are latency problems from having to compute bounding boxes on each frame
It occasionally produces false detections when the logo is not present
Brightness and time of day have a great impact on detection accuracy
When the logo is at a skewed angle, the color thresholding still finds both regions, but the check algorithm rejects the detection
Would anyone be able to help me improve my algorithm or suggest alternative detection strategies? Is there any other way to perform this detection since color thresholding is highly dependent on exact calibration? If possible, I would like to move away from color thresholding and the multiple layers of filters since it's not very robust. Any insight or advice is greatly appreciated!
You might want to take a look at feature matching. The goal is to find features in two images (a template image and a noisy image) and match them. This would allow you to find the template (the logo) in the noisy image (the camera image).
A feature is, in essence, something that humans would find interesting in an image, such as corners or open spaces. I would recommend using the scale-invariant feature transform (SIFT) as a feature detection algorithm. The reason I suggest SIFT is that it is invariant to image translation, scaling, and rotation, partially invariant to illumination changes, and robust to local geometric distortion. This matches your specification.
I generated the above image using code modified from the OpenCV docs on SIFT feature detection:
import numpy as np
import cv2
from matplotlib import pyplot as plt
img = cv2.imread('main.jpg',0) # target Image
# Create the sift object
sift = cv2.xfeatures2d.SIFT_create(700)
# Find keypoints and descriptors directly
kp, des = sift.detectAndCompute(img, None)
# Add the keypoints to the final image
img2 = cv2.drawKeypoints(img, kp, None, (255, 0, 0), 4)
# Show the image
plt.imshow(img2)
plt.show()
You will notice when doing this that a large number of the features do land on the FedEx logo (Above).
The next thing I did was try matching the features from the video feed to the features in the FedEx logo. I did this using the FLANN feature matcher. You could have gone with many approaches (including brute force), but because you are working on a video feed this is probably your best option. The code below is inspired by the OpenCV docs on feature matching:
import numpy as np
import cv2
from matplotlib import pyplot as plt

logo = cv2.imread('logo.jpg', 0)   # query image
img = cv2.imread('main2.jpg', 0)   # target image
# Create the sift object
sift = cv2.xfeatures2d.SIFT_create(700)
# Find keypoints and descriptors directly
kp1, des1 = sift.detectAndCompute(img, None)
kp2, des2 = sift.detectAndCompute(logo, None)
# FLANN parameters
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)   # or pass empty dictionary
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)
# Need to draw only good matches, so create a mask
matchesMask = [[0, 0] for i in range(len(matches))]
# ratio test as per Lowe's paper
for i, (m, n) in enumerate(matches):
    if m.distance < 0.7 * n.distance:
        matchesMask[i] = [1, 0]
# Draw lines
draw_params = dict(matchColor=(0, 255, 0),
                   singlePointColor=(255, 0, 0),
                   matchesMask=matchesMask,
                   flags=0)
# Display the matches
img3 = cv2.drawMatchesKnn(img, kp1, logo, kp2, matches, None, **draw_params)
plt.imshow(img3)
plt.show()
Using this I was able to get the following features matched, as seen below. You will notice that there are outliers, but the majority of features match:
The final step would then be to simply draw a bounding box around the matched region. I will link you to another Stack Overflow question which does something similar, but with the ORB detector. Here is another way to get a bounding box using the OpenCV docs.
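A minimal sketch of that last step, continuing from the FLANN matching code above (kp1/des1 come from the camera frame and kp2/des2 from the logo): collect the good matches, estimate a homography with RANSAC, and project the logo's corners into the camera frame. The thresholds are the usual tutorial defaults, not tuned values:

# continues from the matching snippet above
good = [m for m, n in matches if m.distance < 0.7 * n.distance]
if len(good) >= 4:   # findHomography needs at least four correspondences
    src_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)  # logo points
    dst_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)  # camera frame points
    M, inlier_mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    if M is not None:
        h, w = logo.shape
        corners = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
        projected = cv2.perspectiveTransform(corners, M)
        boxed = cv2.polylines(img.copy(), [np.int32(projected)], True, 255, 3, cv2.LINE_AA)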
I hope this helps!
You can help the detector by preprocessing the image; then you don't need as many training images.
First we reduce the barrel distortion.
import cv2
import numpy as np

img = cv2.imread('fedex.jpg')
margin = 150
# add a border, as the undistorted image is going to be larger
img = cv2.copyMakeBorder(img, margin, margin, margin, margin,
                         cv2.BORDER_CONSTANT, 0)
width = img.shape[1]
height = img.shape[0]
distCoeff = np.zeros((4, 1), np.float64)
k1 = -4.5e-5   # radial distortion coefficients
k2 = 0.0
p1 = 0.0       # tangential distortion coefficients
p2 = 0.0
distCoeff[0, 0] = k1
distCoeff[1, 0] = k2
distCoeff[2, 0] = p1
distCoeff[3, 0] = p2
cam = np.eye(3, dtype=np.float32)
cam[0, 2] = width / 2.0    # define center x
cam[1, 2] = height / 2.0   # define center y
cam[0, 0] = 12.            # define focal length x
cam[1, 1] = 12.            # define focal length y
dst = cv2.undistort(img, cam, distCoeff)
Then we warp the image as if the camera were facing the FedEx truck head on. That way, wherever along the curb the truck is parked, the FedEx logo will have almost the same size and orientation.
# use four points for homography estimation, coordinates taken from the undistorted image
# 1. top-left corner of F
# 2. bottom-left corner of F
# 3. top-right of E
# 4. bottom-right of E
pts_src = np.array([[1083, 235], [1069, 343], [1238, 301],[1201, 454]])
pts_dst = np.array([[1069, 235],[1069, 320],[1201, 235],[1201, 320]])
h, status = cv2.findHomography(pts_src, pts_dst)
im_out = cv2.warpPerspective(dst, h, (dst.shape[1], dst.shape[0]))

OpenCV Detect scratches on fruits

For a little experiment I'm doing in Python, I want to find small scratches on fruit. The scratches are very small and hard to detect with the human eye.
I'm using a high resolution camera for that experiment.
Here is the defect I want to detect:
Original Image:
This is my result with very few lines of code:
So I found the contours of my fruit. How can I proceed to finding the scratch? Its RGB values are similar to other parts of the fruit, so how can I differentiate between a scratch and a normal part of the fruit?
My code:
# Imports
import numpy as np
import cv2
import time

# Read Image & Convert
img = cv2.imread('IMG_0441.jpg')
result = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Filtering
lower = np.array([1, 60, 50])
upper = np.array([255, 255, 255])
result = cv2.inRange(result, lower, upper)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
result = cv2.dilate(result, kernel)

# Contours (OpenCV 3.x returns three values here)
im2, contours, hierarchy = cv2.findContours(result.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
result = cv2.cvtColor(result, cv2.COLOR_GRAY2BGR)
if len(contours) != 0:
    for (i, c) in enumerate(contours):
        area = cv2.contourArea(c)
        if area > 100000:
            print(area)
            cv2.drawContours(img, c, -1, (255, 255, 0), 12)
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 12)

# Stack results
result = np.vstack((result, img))
resultOrig = result.copy()

# Save image to file before resizing
cv2.imwrite(str(time.time()) + '_0_result.jpg', resultOrig)

# Resize
max_dimension = float(max(result.shape))
scale = 900 / max_dimension
result = cv2.resize(result, None, fx=scale, fy=scale)

# Show results
cv2.imshow('res', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
I changed your image to HSL colour space.
I can't see the scratch in the L channel, so the greyscale approach suggested earlier is going to be difficult.
But the scratch is quite noticeable in the hue plane.
You could use an edge detector to find the blemish in the hue channel. Here I use a difference-of-Gaussians detector (with sizes 20 and 4).
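A minimal sketch of that difference-of-Gaussians step on the hue plane, interpreting the 20 and 4 as the two blur scales; the file name and threshold are illustrative:

import cv2

img = cv2.imread('fruit.jpg')
hls = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
hue = hls[:, :, 0]   # hue channel of HLS
# Difference of Gaussians: subtract a coarse blur from a fine blur so that
# only features between the two scales (like the scratch) remain.
fine = cv2.GaussianBlur(hue, (0, 0), 4)
coarse = cv2.GaussianBlur(hue, (0, 0), 20)
dog = cv2.subtract(fine, coarse)
_, blemish = cv2.threshold(dog, 10, 255, cv2.THRESH_BINARY)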
My personal guess is to use some algorithm to detect the grayscale change. The grayscale variation around the scratch should be bigger than the variation in other areas. Sobel and Scharr derivatives could be an option; the OpenCV-Python documentation on image gradients covers them. You can first crop out the fruit using your contour step.
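For example, a small sketch of that gradient idea using Sobel derivatives (the file name is illustrative):

import cv2
import numpy as np

gray = cv2.cvtColor(cv2.imread('fruit.jpg'), cv2.COLOR_BGR2GRAY)
# Horizontal and vertical derivatives, combined into a gradient magnitude;
# the scratch should show up as a band of unusually high gradient.
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
magnitude = np.uint8(np.clip(cv2.magnitude(gx, gy), 0, 255))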
If you really want to use conventional computer vision techniques, you should start with edges that can be detected on the fruit. Some of the edges are caused by the bumps on the fruit, so you have to look at various features of the area around the edges to find the difference between scratches and bumps. After you look at about a hundred scratches, you should be able to come up with some rules.
But this process is going to be very tiring, and my guess is you will not have much luck. A better way to approach this problem is to train a deep neural network by manually annotating scratches on about 100 images, and letting the network find out by itself how to distinguish scratches from the rest of the fruit.
If you are a beginner at this stuff, search for PyImageSearch and LearnOpenCV. Both are very resourceful sites where you can learn quickly.

Hand gesture in OpenCV python

I am trying to implement a sign language interpreter using the OpenCV library. To do this, I need to detect the hand gesture as a first phase. So far I have achieved hand detection by converting the image from the RGB color space to YCbCr and then thresholding the skin color range.
ycc = cv2.cvtColor(img , cv2.COLOR_BGR2YCR_CB)
min_ycc = np.array([0,133,85], np.uint8)
max_ycc = np.array([255,170,125], np.uint8 )
skin = cv2.inRange(ycc, min_ycc, max_ycc)
opening = cv2.morphologyEx(skin, cv2.MORPH_OPEN, np.ones((5,5), np.uint8), iterations=3)
sure_bg = cv2.dilate(opening,np.ones((3,3),np.uint8), iterations=2)
_,contours,_ = cv2.findContours(sure_bg, cv2.RETR_LIST,cv2.CHAIN_APPROX_NONE)
This code works fine with low-detail backgrounds, but it picks up some noise when the background is detailed and includes colors close to skin tones.
My only remaining concern is how to determine which contour is the hand contour. I tried taking the maximum contour, but it did not work out very accurately.
You can remove background noise with erosion and dilation (morphological operations). Then you can set a threshold on contour area (area = cv2.contourArea(cnt)) and filter out the hand contour.
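A small sketch of that area filter, assuming contours comes from the findContours call in the question; the minimum area is illustrative:

import cv2

min_area = 4000   # tune for your camera distance and resolution
candidates = [c for c in contours if cv2.contourArea(c) > min_area]
if candidates:
    # Take the largest remaining contour as the hand.
    hand = max(candidates, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(hand)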
The other way is to use histogram backprojection.
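A minimal sketch of histogram backprojection, assuming skin_sample is a small BGR patch of the user's hand and img is the current frame (both names are placeholders):

import cv2

# Build a hue/saturation histogram from the skin sample...
roi_hsv = cv2.cvtColor(skin_sample, cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([roi_hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
# ...and backproject it onto the frame so skin-coloured pixels light up.
frame_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
backproj = cv2.calcBackProject([frame_hsv], [0, 1], roi_hist, [0, 180, 0, 256], 1)
# Smooth and threshold the backprojection to get a hand mask.
disc = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
backproj = cv2.filter2D(backproj, -1, disc)
_, hand_mask = cv2.threshold(backproj, 50, 255, cv2.THRESH_BINARY)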
