I am trying to do face detection, but no faces are detected.
This is the function I have created for face detection:
import cv2

def faceDetection(test_img):
    gray_img = cv2.cvtColor(test_img, cv2.COLOR_BGR2GRAY)
    # load the Haar cascade classifier for frontal faces
    face_haar_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
    faces = face_haar_cascade.detectMultiScale(gray_img, scaleFactor=1.32, minNeighbors=5)
    return faces, gray_img
This is how it is used (the module above is imported as fr):
test_img = cv2.imread('pic.png')
faces_detected, gray_img = fr.faceDetection(test_img)
print("faces_detected:", faces_detected)

for (x, y, w, h) in faces_detected:
    cv2.rectangle(test_img, (x, y), (x + w, y + h), (255, 0, 0), thickness=5)

resized_img = cv2.resize(test_img, (500, 500))
cv2.imshow("face", resized_img)
cv2.waitKey(0)
cv2.destroyAllWindows()
But when I run this script, it does not detect any face. The output is simply:
faces_detected: ()
and no box is drawn on the image.
Try using a different Haar cascade, for example haarcascade_frontalface_alt.xml instead of the default one:
face_haar_cascade = cv2.CascadeClassifier('haarcascade_frontalface_alt.xml')
Change the scale factor you use for the cascade. If that doesn't work, you can also reduce the number of neighbors, to maybe 2.
faces = face_haar_cascade.detectMultiScale(gray_img, scaleFactor=1.1, minNeighbors=5)
Check the number of faces you found:
print('Faces found: ', len(faces))
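Also check that the cascade actually loaded: with a bad path, CascadeClassifier loads nothing without raising an error, and detectMultiScale then returns an empty tuple, which matches the output above. A minimal sketch putting these suggestions together (cv2.data.haarcascades is the cascade folder bundled with the opencv-python packages):

import cv2

# load the cascade from the path bundled with opencv-python;
# a wrong path makes CascadeClassifier load nothing, silently
cascade_path = cv2.data.haarcascades + 'haarcascade_frontalface_alt.xml'
face_haar_cascade = cv2.CascadeClassifier(cascade_path)
assert not face_haar_cascade.empty(), "cascade failed to load"

test_img = cv2.imread('pic.png')
gray_img = cv2.cvtColor(test_img, cv2.COLOR_BGR2GRAY)

# a smaller scaleFactor scans more scales and tends to find more faces
faces = face_haar_cascade.detectMultiScale(gray_img, scaleFactor=1.1, minNeighbors=5)
print('Faces found: ', len(faces))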
I am trying to detect a black tape on a black background.
No tape, with tape (cropped pictures):
I first cropped the area of the tape from the original image and then performed thresholding on it. Below is the image when there is no tape:
You can notice there is an almost solid line. The black tape is placed right next to it, and when it is placed, this line becomes very light. Below is the image:
Are there any good image processing techniques I can use to detect when the black tape is placed and when it is not?
Below is the code I am currently using:
import cv2
import os
import imutils
from pathlib import Path
import numpy as np

def on_mouse(event, x, y, flags, param):
    if event == cv2.EVENT_LBUTTONDOWN:
        print("X: {} | Y: {}".format(x, y))

dirPath = Path(__file__).parents[2]
imgPath = os.path.join(dirPath, "img", "img.png")

win_name = "Image"
cv2.namedWindow(win_name)
cv2.setMouseCallback(win_name, on_mouse)

img = cv2.imread(imgPath)
img = imutils.resize(img, width=800)

# crop the region where the tape is usually placed, then threshold it
roiImg = img[298:337, 520:591]
img_gray = cv2.cvtColor(roiImg, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(img_gray, 50, 255, cv2.THRESH_BINARY)

cv2.imshow(win_name, img)
cv2.imshow("Thres", thresh)
cv2.waitKey(0)
cv2.destroyAllWindows()
Here is the link to test video: https://drive.google.com/file/d/1P3Xkx_SuHidDs1UdacS3-DZqA-CiXQOX/view?usp=sharing
Below is the image with the area where the tape is usually placed marked in red.
Thanks
There is no way to write stable image processing software here.
In this industrial environment you get differences in ambient light, reflections, shadows, different presentation angles of the part, sunlight, etc. These will affect the brightness of your image, partially or globally, much more than the presence of a nearly invisible tape, so it is not possible to find any good threshold.
So I guess you are on the right track with the "temporary solution" of detecting the gray hand.
If you really want to detect the tape, you need a hardware solution that gets you away from this black-on-black situation:
Use white tape on the black part, or black tape on a white part. Only to mention :-)
Use dark-field or bright-field illumination instead of ambient light. I guess this will not work here, because the angles of the part and the tape are similar.
Use a wavelength range with more contrast than visible light. This needs a specific camera and illumination, and the best wavelength depends a lot on the material, but it's the most professional and stable solution here.
I have a problem cropping and saving an image with OpenCV.
I'm trying to crop using the cv2.selectROI function, but after I drag on the image, cv2.imshow won't work properly.
Here's my code:
import cv2
import numpy as np

img = cv2.imread('C:/git/ML/Image/colorful.jpg')

x, y, w, h = cv2.selectROI('img', img, False)
if w and h:
    roi = img[y:y+h, x:x+w]
    cv2.imshow('cropped', roi)
    cv2.moveWindow('cropped', 0, 0)
    cv2.imwrite('cropped2.jpg', roi)

cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
print(x, y, w, h)
I've tried changing the directory in various ways and putting the imshow call just before selectROI, but none of that has worked so far.
cv2.imshow itself shouldn't be the problem, because when I don't use selectROI and instead code the cropping behaviour by hand from start to finish (handling the mouse left-button click, drag, and button-up events one by one), cv2.imshow, cv2.moveWindow and cv2.imwrite work just fine.
I'm also not confident that the code itself has an internal problem, because on another computer these actions (dragging, cropping, opening in a new window, saving) seem to work just fine.
Is there a possibility that I haven't installed something that is needed to run selectROI?
Anyway, any comments will be much appreciated. Please help me.
I am working on a virtual make-up application using Python, OpenCV, and dlib. Currently, I can get facial landmarks like the lips, nose, jaw, etc., but I am quite unsure about getting the points of the cheeks.
Are there any recommendations?
If you're using dlib 68 facial landmarks, here are the ROIs of the 2 cheeks:
from imutils import face_utils

# ... face detection part; rect is the detected face
shape = predictor(gray_img, rect)
shape = face_utils.shape_to_np(shape)

right_cheek = img[shape[29][1]:shape[33][1], shape[54][0]:shape[12][0]]  # right cheek
left_cheek = img[shape[29][1]:shape[33][1], shape[4][0]:shape[48][0]]    # left cheek
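For context, a minimal sketch of the detection part that produces rect and predictor above, assuming the standard dlib 68-point model file shape_predictor_68_face_landmarks.dat (downloaded separately) and a hypothetical input image face.jpg:

import cv2
import dlib
from imutils import face_utils

detector = dlib.get_frontal_face_detector()
# standard dlib 68-point landmark model; path is an assumption
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

img = cv2.imread('face.jpg')  # hypothetical input image
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for rect in detector(gray_img, 1):
    shape = face_utils.shape_to_np(predictor(gray_img, rect))
    # cheek ROIs bounded vertically by nose landmarks 29/33 and
    # horizontally by the mouth corners and the jawline
    right_cheek = img[shape[29][1]:shape[33][1], shape[54][0]:shape[12][0]]
    left_cheek = img[shape[29][1]:shape[33][1], shape[4][0]:shape[48][0]]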
I have been trying to detect moving vehicles. But due to varying light conditions caused by clouds (not cloud shadows, just changes in illumination), the background subtraction fails.
I have uploaded my input video here --> Youtube (30secs)
Here is what I got using the various background subtraction methods available in OpenCV:
import numpy as np
import cv2

cap = cv2.VideoCapture('traffic_finalns.mp4')

# fgbgKNN = cv2.createBackgroundSubtractorKNN()
fgbgMOG = cv2.bgsegm.createBackgroundSubtractorMOG(100, 5, 0.7, 0)
# fgbgGMG = cv2.bgsegm.createBackgroundSubtractorGMG()
# fgbgMOG2 = cv2.createBackgroundSubtractorMOG2()
# fgbgCNT = cv2.bgsegm.createBackgroundSubtractorCNT(15, True, 15*60, True)

while True:
    ret, frame = cap.read()

    # fgmaskKNN = fgbgKNN.apply(frame)
    fgmaskMOG = fgbgMOG.apply(frame)
    # fgmaskGMG = fgbgGMG.apply(frame)
    # fgmaskMOG2 = fgbgMOG2.apply(frame)
    # fgmaskCNT = fgbgCNT.apply(frame)

    # cv2.imshow('frame', frame)
    # cv2.imshow('fgmaskKNN', fgmaskKNN)
    cv2.imshow('fgmaskMOG', fgmaskMOG)
    # cv2.imshow('fgmaskGMG', fgmaskGMG)
    # cv2.imshow('fgmaskMOG2', fgmaskMOG2)
    # cv2.imshow('fgmaskCNT', fgmaskCNT)

    k = cv2.waitKey(20) & 0xff
    if k == 27:
        break

cap.release()
cv2.destroyAllWindows()
(The images below are from frame number 977.)
BackgroundSubtractorMOG : by varying the history parameter, some of the illumination changes could be suppressed, but not all, as the duration of the illumination changes is variable.
BackgroundSubtractorMOG2 :
BackgroundSubtractorGMG :
BackgroundSubtractorKNN :
BackgroundSubtractorCNT :
1] Improving results by OpenCV Background Subtraction
For varying light conditions it is important to normalize your pixel values between 0 and 1; in your code I do not see that happening.
Background subtraction will not work with a single image; it needs a sequence of frames.
If you are applying background subtraction to a sequence of frames, the first frame of the result is of no use.
You might want to adjust the arguments you pass to cv2.bgsegm.createBackgroundSubtractorMOG() to get the best results. Play around with the threshold and see what results you get.
You can also apply a Gaussian filter (cv2.GaussianBlur()) to the individual frames to reduce noise and get better results.
You can try cv2.equalizeHist() on individual frames to improve their contrast; a combined sketch is shown below.
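A minimal sketch of the blur and equalization steps applied per frame before the subtractor (the kernel size is an illustrative choice, not a tuned value):

import cv2

cap = cv2.VideoCapture('traffic_finalns.mp4')
fgbgMOG = cv2.bgsegm.createBackgroundSubtractorMOG()

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)             # flatten contrast differences
    gray = cv2.GaussianBlur(gray, (5, 5), 0)  # suppress pixel noise
    fgmask = fgbgMOG.apply(gray)
    cv2.imshow('fgmaskMOG', fgmask)
    if cv2.waitKey(20) & 0xff == 27:
        break

cap.release()
cv2.destroyAllWindows()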
Anyway, you say that you are trying to detect moving objects. Nowadays there are many modern methods that use deep learning for object detection.
2] Use tensorflow object detection api
It does object detection in real time and also gives you the bounding-box coordinates of the detected objects.
Here are the results of the Tensorflow object detection api:
3] How about trying OpenCV optical flow?
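This is only a rough sketch of dense optical flow with cv2.calcOpticalFlowFarneback (the parameter values are the common ones from the OpenCV examples, and the magnitude threshold is an assumption):

import cv2
import numpy as np

cap = cv2.VideoCapture('traffic_finalns.mp4')
ret, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # dense optical flow between consecutive frames
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # pixels with large motion magnitude belong to moving vehicles
    motion_mask = (mag > 2.0).astype(np.uint8) * 255
    cv2.imshow('motion', motion_mask)
    prev_gray = gray
    if cv2.waitKey(20) & 0xff == 27:
        break

cap.release()
cv2.destroyAllWindows()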
4] Simple subtraction
Your environment is static.
So take a frame of your environment and store it in a variable, say environment_frame.
Now read every frame from your video and simply subtract it from your environment frame: results = environment_frame - current_frame.
If np.sum(results) is greater than a threshold value, we know there is a moving object. But where?
The moving object is where the changed pixels cluster together, which you can easily find with a clustering algorithm.
Do not forget to normalize your pixel values between 0 and 1. A minimal sketch of this approach follows.
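A minimal sketch of this approach, assuming the first frame of the video can serve as environment_frame (the threshold value is an illustrative assumption that would need tuning):

import cv2
import numpy as np

cap = cv2.VideoCapture('traffic_finalns.mp4')

# assume the first frame shows the empty environment
ret, environment_frame = cap.read()
environment_frame = cv2.cvtColor(environment_frame, cv2.COLOR_BGR2GRAY) / 255.0

THRESHOLD = 500.0  # illustrative value; depends on resolution and noise

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # normalize pixel values between 0 and 1
    current_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) / 255.0
    # absolute difference so darker and brighter changes both count
    results = np.abs(environment_frame - current_frame)
    if np.sum(results) > THRESHOLD:
        print('moving object present in this frame')

cap.release()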
----------------------------UPDATED----------------------------------------
If you want to find helmets in real time, then your best bet is deep learning.
You can use a deep-learning technique like YOLO, which newer versions of OpenCV have ... but I do not think OpenCV has a Python binding for YOLO.
The other real-time technique can be RCNN, which the tensorflow object detection api already has ... I have mentioned it above.
If you want to use traditional computer vision methods, then you can try HOG and SVM on helmet data and then use a sliding-window technique to find the helmet in your frame (this won't be in real time); a rough sketch follows.
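A rough sketch of that idea, assuming you have already collected helmet_patches and background_patches (hypothetical lists of grayscale training crops); the window, step, and descriptor sizes are illustrative assumptions:

import cv2
import numpy as np
from sklearn.svm import LinearSVC

# HOG descriptor sized for 64x64 training patches (sizes are assumptions)
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def hog_features(patch):
    # compute HOG features for one grayscale patch, resized to 64x64
    return hog.compute(cv2.resize(patch, (64, 64))).flatten()

# helmet_patches / background_patches: hypothetical crops you collected
X = [hog_features(p) for p in helmet_patches + background_patches]
y = [1] * len(helmet_patches) + [0] * len(background_patches)
svm = LinearSVC().fit(X, y)

def find_helmets(gray_frame, step=16, win=64):
    # slide a window across the frame and classify each position
    hits = []
    for yy in range(0, gray_frame.shape[0] - win, step):
        for xx in range(0, gray_frame.shape[1] - win, step):
            patch = gray_frame[yy:yy + win, xx:xx + win]
            if svm.predict([hog_features(patch)])[0] == 1:
                hits.append((xx, yy, win, win))
    return hits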
I am busy working on some very simple vehicle detection software using Python and OpenCV. I want to take a screen capture the moment an object hits one of the lines I have created.
Searching on Google resulted in nothing, or in some very big C++ projects. Since I am very unskilled with C++, I thought I would try asking here.
My code:
import cv2

face_cascade = cv2.CascadeClassifier('cars.xml')
vc = cv2.VideoCapture('dataset/traffic3.mp4')

if vc.isOpened():
    rval, frame = vc.read()
else:
    rval = False

while rval:
    rval, frame = vc.read()
    cv2.line(frame, (430, 830), (430, 100), (0, 255, 0), 3)
    cv2.line(frame, (700, 700), (700, 100), (0, 0, 255), 3)
    cv2.imshow("Result", frame)
    cv2.waitKey(1)

vc.release()
So I want to take a screen capture the moment a vehicle passes one of the 2 lines.
Can somebody help me?
Thanks.
OpenCV's Cascade classifier will return a collection of Rect objects that correspond to bounding boxes around each car it detected in the image. If you don't know how to use the classifier, look at this tutorial in C++ to get an idea of how it works. Translating it to Python shouldn't be too hard.
Once you have these bounding boxes, you only need to test whether they intersect one of your lines to detect vehicles passing on the line.
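A hedged sketch of that idea built on the questioner's code (the crossing test, the use of only the green line's x-position, and the tolerance are illustrative assumptions):

import cv2

car_cascade = cv2.CascadeClassifier('cars.xml')
vc = cv2.VideoCapture('dataset/traffic3.mp4')
LINE_X = 430      # x-position of the green line from the question
TOLERANCE = 5     # how close a box edge may be and still count as a hit
shot_count = 0

while True:
    rval, frame = vc.read()
    if not rval:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cars = car_cascade.detectMultiScale(gray, 1.1, 3)
    for (x, y, w, h) in cars:
        # the box "hits" the vertical line when the line falls inside it
        if x - TOLERANCE <= LINE_X <= x + w + TOLERANCE:
            cv2.imwrite('capture_{}.png'.format(shot_count), frame)
            shot_count += 1
    cv2.line(frame, (430, 830), (430, 100), (0, 255, 0), 3)
    cv2.imshow('Result', frame)
    if cv2.waitKey(1) & 0xff == 27:
        break

vc.release()
cv2.destroyAllWindows()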