I've shown in various ways how to take images with a webcam in Python (see How can I take camera images with Python?). You can see that the images taken with Python are considerably darker than the images taken with JavaScript. What is wrong?
Image example
The image on the left was taken with http://martin-thoma.com/html5/webcam/, the one on the right with the Python code below. Both were taken under the same (controlled) lighting conditions (it was dark outside and I only had some electric lights on) and with the same webcam.
Code example
import cv2
camera_port = 0
camera = cv2.VideoCapture(camera_port)
return_value, image = camera.read()
cv2.imwrite("opencv.png", image)
del(camera) # so that others can use the camera as soon as possible
Question
Why is the image taken with Python considerably darker than the one taken with JavaScript, and how do I fix it?
(I want a similar image quality; simply brightening the image afterwards will probably not fix it.)
Note on the "how do I fix it": it does not need to be OpenCV. If you know a way to take webcam images with Python using another package (or no package at all), that is also fine.
I faced the same problem. I tried the following and it works:
import cv2

camera_port = 0
ramp_frames = 30  # number of throwaway frames while the camera adjusts to the light
camera = cv2.VideoCapture(camera_port)

def get_image():
    retval, im = camera.read()
    return im

# Discard the first frames so the camera can ramp up its automatic exposure
for i in xrange(ramp_frames):
    temp = camera.read()

camera_capture = get_image()
filename = "image.jpg"
cv2.imwrite(filename, camera_capture)
del(camera)
I think it's about the camera adjusting to the light.
former and latter images
I think that you have to wait for the camera to be ready.
This code works for me:
from SimpleCV import Camera
import time
cam = Camera()
time.sleep(3)
img = cam.getImage()
img.save("simplecv.png")
I took the idea from this answer and this is the most convincing explanation I found:
The first few frames are dark on some devices because it's the first
frame after initializing the camera and it may be required to pull a
few frames so that the camera has time to adjust brightness
automatically.
reference
So IMHO, to be sure about the quality of the image, regardless of the programming language, it is necessary to wait a few seconds after starting a camera device and/or discard a few frames before taking an image.
Tidying up Keerthana's answer results in my code looking like this:
import cv2
import time

def main():
    capture = capture_write()

def capture_write(filename="image.jpeg", port=0, ramp_frames=30, x=1280, y=720):
    camera = cv2.VideoCapture(port)
    # Set resolution
    camera.set(3, x)   # CAP_PROP_FRAME_WIDTH
    camera.set(4, y)   # CAP_PROP_FRAME_HEIGHT
    # Adjust camera lighting: discard the first ramp_frames frames
    for i in range(ramp_frames):
        temp = camera.read()
    retval, im = camera.read()
    cv2.imwrite(filename, im)
    del(camera)
    return True

if __name__ == '__main__':
    main()
I am trying to use the pygame camera to take a picture from my computer's webcam, but for some reason it always comes out completely black.
After some research I found out that the brightness of the camera (camera.get_controls()) is set to -1 and can't be changed with camera.set_controls().
Code:
import pygame
import pygame.camera

# initializing the camera
pygame.camera.init()

# make the list of all available cameras
camlist = pygame.camera.list_cameras()

# check whether a camera is detected or not
if camlist:
    # initializing the cam variable with the default camera
    cam = pygame.camera.Camera(camlist[3], (2952, 1944), "RGB")

    # opening the camera
    cam.start()

    # sets the brightness to 1
    cam.set_controls(True, False, 1)

    # capturing the single image
    image = cam.get_image()

    # saving the image
    pygame.image.save(image, str(cam.get_controls()) + ".png")
else:
    print("No camera on current device")
pygame.camera tries to be a non-blocking API, so that it can be used well in a game loop.
The problem is that you open the camera and immediately call get_image(). But no image has been generated yet, so it returns a fully black Surface. Maybe it should be changed on pygame's end so that the first get_image() is a blocking call.
On my system, it works if I put a one second delay before get_image().
You can also use the query_image() function to check whether a frame is ready.
*This may differ between operating systems, as pygame uses different camera backends on each.
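To illustrate the query_image() approach, here is a minimal sketch (the device index, resolution, and filename are placeholders rather than values from the question):
import time
import pygame
import pygame.camera

pygame.camera.init()
camlist = pygame.camera.list_cameras()
if camlist:
    cam = pygame.camera.Camera(camlist[0], (640, 480), "RGB")
    cam.start()
    # poll until the camera reports that a frame is actually ready,
    # instead of grabbing immediately and getting a black Surface
    while not cam.query_image():
        time.sleep(0.05)
    image = cam.get_image()
    pygame.image.save(image, "first_frame.png")
    cam.stop()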
I'm trying to take pictures with OpenCV 4.4.0.40 and only save them if a switch, read by an Arduino, is pressed.
So far everything works, but it's super slow: it takes about 15 seconds for the Switch value to change.
Arduino = SerialObject()

if os.path.exists(PathCouleur) and os.path.exists(PathGris):
    Images = cv2.VideoCapture(Camera)
    Images.set(3, 1920)
    Images.set(4, 1080)
    Images.set(10, Brightness)
    Compte = 0
    SwitchNumero = 0
    while True:
        Sucess, Video = Images.read()
        cv2.imshow("Camera", Video)
        Video = cv2.resize(Video, (Largeur, Hauteur))
        Switch = Arduino.getData()
        try:
            if Switch[0] == "1":
                blur = cv2.Laplacian(Video, cv2.CV_64F).var()
                if blur < MinBlur:
                    cv2.imwrite(PathCouleur + ".png", Video)
                    cv2.imwrite(PathGris + ".png", cv2.cvtColor(Video, cv2.COLOR_BGR2GRAY))
                    Compte += 1
        except IndexError as err:
            pass
        if cv2.waitKey(40) == 27:
            break
    Images.release()
    cv2.destroyAllWindows()
else:
    print("Erreur, aucun Path")
The saved images are 640 wide by 480 high and the displayed image is 1920x1080, but even without the imshow it's slow.
Can someone help me optimize this code, please?
I would say that this snippet of code is responsible for the delay, since you have two major calls that depend on external devices to respond (the OpenCV calls and Arduino.getData()).
Sucess, Video = Images.read()
cv2.imshow("Camera", Video)
Video = cv2.resize(Video, (Largeur, Hauteur))
Switch = Arduino.getData()
One solution that comes to mind is to use the threading library: separate the Arduino.getData() call from the main loop and run it in a separate class in the background (you should use a sleep on the secondary thread, something like a 50 or 100 ms delay).
That way you should get a much better response to the switch event when the main loop reads the value.
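A rough sketch of that idea, assuming the SerialObject/getData() interface from the question (the class name, attribute names, and the 100 ms poll interval are my own choices):
import threading
import time

class ArduinoReader:
    """Polls the Arduino in a background thread so the main loop never blocks on serial I/O."""
    def __init__(self, arduino, interval=0.1):
        self.arduino = arduino      # the SerialObject() instance from the question
        self.interval = interval    # delay between polls on the secondary thread
        self.latest = None          # last value returned by getData()
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while not self._stop.is_set():
            self.latest = self.arduino.getData()
            time.sleep(self.interval)

    def stop(self):
        self._stop.set()
        self._thread.join()
In the main loop you would then read reader.latest instead of calling Arduino.getData() directly, so the capture loop never waits on the serial port.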
cam = cv2.VideoCapture(0, cv2.CAP_DSHOW)
Use cv2.CAP_DSHOW (the DirectShow backend on Windows); it improves the camera's loading time because it gives you the video feed directly.
I'm getting the error below in my code:
TypeError: 'NoneType' object is not subscriptable
line : crop_img = img[70:300, 70:300]
Can anyone please help me with this? Thanks a lot.
img_dofh = cv2.imread("D.png",0)
ret, img = cap.read()
cv2.rectangle(img,(60,60),(300,300),(255,255,2),4) #outer most rectangle
crop_img = img[70:300, 70:300]
crop_img_2 = img[70:300, 70:300]
grey = cv2.cvtColor(crop_img, cv2.COLOR_BGR2GRAY)
You don't show where your img variable comes from. But somehow it is None instead of containing image data.
Often this happens when you write a function that is supposed to return a valid object for img, but you forget to include a return statement in the function, so it automatically returns None instead.
Check the code that creates img.
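For example, a capture helper that is missing its return statement will silently hand back None (a hypothetical illustration, not your actual code):
import cv2

cap = cv2.VideoCapture(0)

def grab_frame():
    ret, frame = cap.read()
    # no "return frame" here, so the function returns None

img = grab_frame()
crop_img = img[70:300, 70:300]  # TypeError: 'NoneType' object is not subscriptable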
UPDATE
Responding to your code posting:
It would be helpful if you could provide a minimal, reproducible example. That might look something like this:
import cv2

cap = cv2.VideoCapture(0)

if cap.isOpened():
    ret, img = cap.read()
    if img is None:
        print("img was not returned.")
    else:
        crop_img = img[70:300, 70:300]
        print(crop_img)  # should show an array of image data
Looking at the documentation, it appears that your camera may not have grabbed any frames by the time you reach this point in your code. The documentation says "If no frames has been grabbed (camera has been disconnected, or there are no more frames in video file), the methods return false and the functions return NULL pointer." I would bet that the .read() function is returning a NULL pointer, which gets converted to None when it is sent back to Python.
Unfortunately, since no one else has your particular camera setup, other people may have trouble reproducing your problem.
The code above works fine on my MacBook Pro, but I have to give Terminal permission to use the camera the first time I try it. Have you tried restarting your terminal app? Does your program have access to the camera?
I'm new to OpenCV and Python, and I've been facing a problem:
I've been doing a project on my Raspberry Pi where the webcam takes a grey scale image, removes the background, and saves it in a folder.
This is used by a machine learning algorithm to detect the object in the image.
The webcam is fixed at a particular point so I first take an image of the background and then take a picture of the object. The background is then removed from the object and it looks fine.
But if I repeat the process and overwrite the image, it becomes blurred.
This effect keeps building up, and after about three or four shots the image becomes so blurry that my program can't identify the object in it.
My code is:
# get background
import cv2

cam = cv2.VideoCapture(0)
ret, frame = cam.read()
if ret:
    img_name = '/home/pi/Desktop/background.png'
    grey_img = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imwrite(img_name, grey_img)
    print('{} written'.format(img_name))
cam.release()
# take image
import cv2
import numpy as np

cam = cv2.VideoCapture(0)  # assumed: the camera has to be opened in this script as well
ret, frame = cam.read()
back = cv2.imread('/home/pi/Desktop/background.png')
if ret:
    img_name = '/home/pi/Desktop/img_capture.png'
    grey_img = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imwrite(img_name, grey_img)
    grey_obj = cv2.subtract(cv2.imread(img_name), back)
    cv2.imwrite(img_name, grey_obj)
    print('{} written'.format(img_name))
cam.release()
I'm using a Logitech webcam, but I'm not sure of the exact model.
Please help me out, and thanks in advance.
Clear at the start:
Not so much:
Not at all:
I am building a robot. Pretty much all my code has been copied from other people's projects on tutorials.
I'm using a Raspberry Pi camera to detect faces, and I have this electric Nerf gun that I want to fire ONLY after OpenCV detects a face. Right now, my code fires the Nerf gun whether a face is detected or not. Please tell me what I have done wrong. I think the problem is in the if len(faces) > 0 area. Here is all my code.
program 1, called cbcnn.py
import RPi.GPIO as gpio
import time

def init():
    gpio.setmode(gpio.BOARD)
    gpio.setup(22, gpio.OUT)

def fire(tf):
    init()
    gpio.output(22, True)
    time.sleep(tf)
    gpio.cleanup()

print 'fire'
fire(3)
program 2, called cbfd2.py
import io
import picamera
import cv2
import numpy
from cbcnn import fire

# Create a memory stream so photos don't need to be saved in a file
stream = io.BytesIO()

# Get the picture (low resolution, so it should be quite fast)
# Here you can also specify other parameters (e.g.: rotate the image)
with picamera.PiCamera() as camera:
    camera.resolution = (320, 240)
    camera.vflip = True
    camera.capture(stream, format='jpeg')

# Convert the picture into a numpy array
buff = numpy.fromstring(stream.getvalue(), dtype=numpy.uint8)

# Now create an OpenCV image
image = cv2.imdecode(buff, 1)

# Load a cascade file for detecting faces
face_cascade = cv2.CascadeClassifier('/home/pi/cbot/faces.xml/haarcascade_frontalface_default.xml')

# Convert to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Look for faces in the image using the loaded cascade file
faces = face_cascade.detectMultiScale(gray, 1.1, 5)

print "Found "+str(len(faces))+" face(s)"

if len(faces) > 0:
    ("fire")

# Draw a rectangle around every found face
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x+w, y+h), (255, 0, 0), 2)

# Save the result image
cv2.imwrite('result.jpg', image)
It fires because you have a fire(3) call right after the definition of the fire(tf) function. And the statement below only evaluates a plain string, ("fire"); it does not call the fire function.
if len(faces) > 0:
    ("fire")
If you want to fire only when faces are detected, move that fire(3) under this IF and remove it from the top.
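For example (3 is just the duration already used in the question):
if len(faces) > 0:
    fire(3)  # only fire when at least one face was detected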
BTW, you're importing another thing here: from cbcnn import fire, with the same name as your function. This will overwrite your function name, and if you put fire(3) below your import line it will probably throw an error. Either change your fire function fire(tf) to fire_rocket(tf) and change your fire(3) to fire_rocket(3) under the IF.
Or add this on your import line (which you actually aren't even using!) and you can keep your fire function name as is:
from cbcnn import fire as Fire
Edit after question was changed:
Fix the IF I mentioned above and put fire(some number) in there.
The reason it fires is that when you import something from another module, the whole script is run. Because fire(3) is in there, the function is called automatically when you import it.
To avoid this you have to either:
remove other parts from your cbcnn.py: print and fire(3)
Or
put those parts in this IF statement to only run them when you actually run cbcnn.py yourself, and not by importing it:
if __name__ == '__main__':
    print('fire')
    fire(3)
I already answered you in another post, but regarding where you have:
cv2.rectangle(image,(x,y),(x+w,y+h),(255,0,0),2)
There you can put whatever you want. If you get into the for loop, it means you have detected a face. As you can see in the code you posted, in that case they are drawing a rectangle, but you can do anything there; the x, y, w and h give you the coordinates and size of the detected face.
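For example, assuming the fire(tf) helper is already defined or imported at that point (3 is just the duration used in the question), a sketch would be:
# draw a rectangle around every found face and fire for each one
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x+w, y+h), (255, 0, 0), 2)
    fire(3)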
import io
import picamera
import cv2
import numpy
import RPi.GPIO as gpio
import time

# Create a memory stream so photos don't need to be saved in a file
stream = io.BytesIO()

# Get the picture (low resolution, so it should be quite fast)
# Here you can also specify other parameters (e.g.: rotate the image)
with picamera.PiCamera() as camera:
    camera.resolution = (320, 240)
    camera.vflip = True
    camera.capture(stream, format='jpeg')

# Convert the picture into a numpy array
buff = numpy.fromstring(stream.getvalue(), dtype=numpy.uint8)

# Now create an OpenCV image
image = cv2.imdecode(buff, 1)

# Load a cascade file for detecting faces
face_cascade = cv2.CascadeClassifier('/home/pi/cbot/faces.xml/haarcascade_frontalface_default.xml')

# Convert to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Look for faces in the image using the loaded cascade file
faces = face_cascade.detectMultiScale(gray, 1.1, 5)

print "Found "+str(len(faces))+" face(s)"

# Draw a rectangle around every found face
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x+w, y+h), (255, 0, 0), 2)

def init():
    gpio.setmode(gpio.BOARD)
    gpio.setup(22, gpio.OUT)

def fire(tf):
    init()
    gpio.output(22, True)
    time.sleep(tf)
    gpio.cleanup()

print 'fire'
fire(3)

# Save the result image
cv2.imwrite('result.jpg', image)