slow face detection on opencv and intel galileo gen2 - python

I want to program my Intel Galileo Gen 2 so that it counts the faces in front of the webcam and simply prints the number to the shell (using OpenCV). My code works, but the processing speed is really slow: it prints the number only about once every 15 seconds, which also makes it hard to check whether the count is correct. Is there any way to speed this up, or has someone done this before?
Here's the code:
import cv2
import sys
import time
cascPath = '/media/mmcblk0p1/haarcascade_frontalface_default.xml'
faceCascade = cv2.CascadeClassifier(cascPath)
video_capture = cv2.VideoCapture(0)
while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.1,
        minNeighbors=5,
        minSize=(30, 30),
        flags=cv2.cv.CV_HAAR_SCALE_IMAGE
    )
    print len(faces)
    time.sleep(0.033)

Although it's an Intel CPU, there aren't that many resources (400 MHz CPU, 256 MB RAM) on the Intel Galileo for advanced computer vision algorithms such as face detection.
The first thing I notice is you're not setting the capture dimension.
I don't know what the camera specifications are, but I'm guessing you're opening the camera at full resolution. I recommend opening the camera at a lower resolution, such as 320x240 or even 160x120 as there will be far less pixels to process.
Haar cascades are a bit intensive as well (especially on a system like the Intel Galileo Gen2). I recommend looking into Local Binary Patterns (LBP).
These are already implemented in OpenCV and you can check out an LBP C++ sample here. It should be easy to adapt it to the Python API or find a Python example. LBP cascades should be faster than Haar cascades.
Although less standard, depending on your camera, you may have lower-level access to it. If you do, either retrieve the images in grayscale directly or, if the raw colour stream is in YUV format, retrieve only the Y channel. This should give you a minor boost as you're no longer converting colourspaces, but pursue this only if it's easy to control the camera (or you have the time and resources to go deeper just for a partial boost).
Although slower to prototype with than Python, you might also want to try native C or C++ and check if there are any compiler optimization flags that can take advantage of the CPU as much as possible.
Note: you can find a C++ face detection sample for Intel Galileo here

Related

OpenCV changing VideoCapture resolution causes colour issues and glitches

I want to capture 1920x1080 video from my camera but I've run into two issues:
When I initialize a VideoCapture, it changes the width/height to 640/480
When I try to change the width/height in cv2, the image becomes messed up
Images
When setting 1920x1080 in cv2, the image becomes blue and has a glitchy bar at the bottom
cap = cv2.VideoCapture('/dev/video0')
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
Here's what's happening according to v4l2-ctl. The blue image doesn't seem to be the result of a pixel format change (e.g. RGB to BGR).
And finally, here's an example of an image captured at 640x480 that has the correct colouring. The only difference in the code is that the width/height is not set in cv2.
Problem:
Actually the camera you are using has 2 modes:
640x480
1920x1080
One is for the main stream, one is for the sub stream. I have also run into this problem a couple of times, and here are the possible reasons why it doesn't work.
Note: I assume you tried different ways to open the camera at full resolution (1920x1080), such as cv2.VideoCapture(0), cv2.VideoCapture(-1), cv2.VideoCapture(1), ...
Possible reasons
The first reason could be that the camera doesn't support the resolution you desire, but in your case we can see that it supports 1920x1080, so this cannot be the cause of your issue.
The second, more general, reason is that the OpenCV backend doesn't support your camera driver. Since you are using the VideoCaptureProperties of OpenCV, the documentation says:
Reading / writing properties involves many layers. Some unexpected result might happens along this chain. Effective behaviour depends from device hardware, driver and API Backend.
What you can do:
In this case, if you really need to reach that resolution and stay compatible with OpenCV, you should use the SDK of your camera (if it has one).

Extracting JPG frames from webcam MJPG stream using OpenCV

I am working on Ubuntu 18.04 with Python 2.7 and OpenCV 3.2. My application is the front-end of a video pipeline and entails extracting video frames from a webcam, possibly cropping and/or rotating them (90, 180, 270 deg), and then distributing them to one or more other pieces of code for further processing. The overall system tries to maximize efficiency at every step to e.g., improve options for adding functionality later on (compute power and bandwidth wise).
Functionally, I have the front-end working, but I want to improve its efficiency by processing JPEG frames extracted from the camera's MJPEG stream. This would allow efficient, lossless cropping and rotation in the JPEG domain, e.g. using jpegtran-cffi, and distribution of compressed frames that are smaller than the corresponding decoded ones. JPEG decoding will take place if/when/where necessary, with an overall expected gain. As an extra benefit, this approach allows efficient saving of the webcam video without loss of image quality due to decoding + re-coding.
The problem I run into is that OpenCV's VideoCapture class does not seem to allow access to the MJPEG stream:
import cv2
cam = cv2.VideoCapture()
cam.open(0)
if not cam.isOpened():
    print("Cannot open camera")
else:
    enabled = True
    while enabled:
        enabled, frame = cam.read()
        # do stuff
Here, frame is always in component (i.e., decoded) format. I looked at using cam.grab() + cam.retrieve() instead of cam.read() with the same result (in line with the OpenCV documentation). I also tried cam.set(cv2.CAP_PROP_CONVERT_RGB, False) but that only converts the decoded video to RGB (if it is in another component format) and does not prevent decoding. BTW I verified that the camera uses the MJPEG codec (via cam.get(cv2.CAP_PROP_FOURCC)).
So my questions are: am I missing something or will this approach not work? If the latter, is there an alternative?
A final point: the application has to be able to control the webcam within its capabilities; e.g., frame size, frame rate, exposure, gain, ... This is nicely supported by cv2.VideoCapture.
Thanks!
===
Follow-up: in absence of the solution I was looking for, I added explicit JPEG encoding:
jpeg_frame = cv2.imencode('.jpg', frame, [int(cv2.IMWRITE_JPEG_QUALITY), _JPEG_QUALITY])[1]
with _JPEG_QUALITY set to 90 (out of 100). While this adds computation and reduces image quality, both of which are in principle redundant, it allows me to experiment with trade-offs. --KvZ

Is there a way to adjust shutter speed or exposure time of a webcam using Python and OpenCV

In my robotic vision project, I need to detect a marker of a moving object but motion causes blurring effect in the image. Deconvolution methods are quite slow. So I was thinking to use a higher fps camera. Someone said I don't need higher fps, instead I need shorter exposure time.
OpenCV's Python interface cv2 provides a method to change camera settings, but it does not include an "Exposure Time" or "Shutter Speed" setting. I'm also afraid that webcams don't even support these kinds of settings.
Any other thoughts about:
Eliminating blurring effect using camera setting?
OR
Restoration of Image with real-time performance?
OR
Any suggestion about a low cost camera for real-time robotic applications?
There is a method available to change the properties of a VideoCapture object in OpenCV, which can be used to set the exposure of the input image.
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_EXPOSURE, 40)
However this parameter is not supported by all cameras. Each camera type offers a different interface to set its parameters. There are many branches in OpenCV code to support as many of them, but of course not all the possibilities are covered.
The same is the case with my camera, so I had to find a different solution: the v4l2-ctl utility, run from a command-line terminal.
v4l2-ctl -d /dev/video0 -c exposure_absolute=40
But this retains its value for the current video session only. That means you have to start the video preview first and then set this property. As soon as the VideoCapture is released, the exposure value is restored to its default.
I wanted to control exposure within my python script, so I used subprocess module to run linux bash command. e.g.
import subprocess
subprocess.check_call("v4l2-ctl -d /dev/video0 -c exposure_absolute=40",shell=True)
For instance
I was trying with a C920 for a while without success; it sometimes worked and sometimes didn't, using this:
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_EXPOSURE, -4)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240) # setting this after prevents the exposure change
But finally I realized that I was setting the width and height just afterwards, so I changed the order and now it works fine! Don't blow your mind trying to disable the CAP_PROP_AUTO_EXPOSURE flag! It's not necessary (at least with the C920 on Windows)!
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240) # setting this before works fine
cap.set(cv2.CAP_PROP_EXPOSURE, -4)
By the way, the range of exposure on the C920 is from -2 to -11 (on Windows 10)!
Thanks a lot

OpenCV darken oversaturated webcam image

I have a (fairly cheap) webcam which produces images that are far lighter than they should be. The camera does have brightness correction - the adjustments are obvious when moving from light to dark - but it is consistently far too bright.
I am looking for a way to reduce the brightness without iterating over the entire frame (OpenCV Python bindings on a Raspberry Pi). Does that exist? Or better, is there a standard way of sending hints to a webcam to reduce the brightness?
import cv2
# create video capture
cap = cv2.VideoCapture(0)
window = cv2.namedWindow("output", 1)
while True:
    # read the frames
    _, frame = cap.read()
    cv2.imshow("output", frame)
    if cv2.waitKey(33) == 27:
        break
# Clean up everything before leaving
cv2.destroyAllWindows()
cap.release()
I forgot Raspberry Pi is just running a regular OS. What an awesome machine. Thanks for the code which confirms that you just have a regular cv2 image.
Simple vectorized scaling (without touching each pixel individually) should do the trick. The snippet below just scales every pixel; it would be easy to add a few lines to normalize the image if it has a major offset.
import numpy
#...
scale = 0.5 # whatever scale you want
frame_darker = (frame * scale).astype(numpy.uint8)
#...
Does that look like the start of what you want?
The standard way to adjust webcam parameters is the VideoCapture set() method (provided your camera supports the interface; most do in my experience). This avoids the performance overhead of processing the image yourself.
VideoCapture::set
CV_CAP_PROP_BRIGHTNESS or CV_CAP_PROP_SATURATION would appear to be what you want (in current cv2 these are cv2.CAP_PROP_BRIGHTNESS and cv2.CAP_PROP_SATURATION).

How to set framerate with OpenCV camera capture

How can I set the capture framerate using OpenCV in Python? Here's my code, but the resulting framerate is less than the requested 30fps. Also, the video quality is very bad.
import cv
cv.NamedWindow('CamShiftDemo', 1)
device = -1
cap = cv.CaptureFromCAM(device)
size = (640, 480)
cv.SetCaptureProperty(cap, cv.CV_CAP_PROP_FPS, 30)
cv.SetCaptureProperty(cap, cv.CV_CAP_PROP_FRAME_WIDTH, size[0])
cv.SetCaptureProperty(cap, cv.CV_CAP_PROP_FRAME_HEIGHT, size[1])
while True:
    frame = cv.QueryFrame(cap)
    cv.ShowImage('CamShiftDemo', frame)
    cv.WaitKey(10)
You are limited by the hardware, namely:
your camera's capture capabilities, and
your computer's system resources.
If either of these cannot handle the requested capture parameters (in your case 640x480 resolution at 30fps), you're out of luck. Parameters you give to OpenCV are merely suggestions -- it tries to match them as best it can.
What model camera are you using? I would first look at the model specs to see if they advertise the parameters you desire.
