I am applying a Sobel edge detector to a video using OpenCV. I display the result in a window and also write each frame to an output video. Even though the result looks right in the window, the result in the output file is not the same.
Here is the code, along with what I see in the window versus in the output file. Any idea what could cause this?
if between(cap, 0, 25000):  # apply the effect on specific milliseconds of the video
    # Sobel operator - I still need to add colors
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
    frame = cv2.Sobel(frame, cv2.CV_64F, 1, 0, ksize=5)
out.write(frame)
frame = cv2.resize(frame, None, fx=0.2, fy=0.2, interpolation=cv2.INTER_CUBIC)
cv2.imshow('frame', frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
    break
cap.release()
out.release()
cv2.destroyAllWindows()
print("done")
The question has been answered; I am adding the answer here as well:
The Sobel output is floating point because of cv2.CV_64F, and it contains values outside the 0-255 range, including negatives. cv2.imshow will still render float data (scaled differently), but the written video frame must be uint8 in the range 0 to 255, so VideoWriter does not handle the float frames correctly. Convert the result before writing, e.g. take the absolute value, clip to the range 0 to 255, and save as uint8: frame = np.absolute(cv2.Sobel(frame, cv2.CV_64F, 1, 0, ksize=5)).clip(0, 255).astype(np.uint8). Be sure to import numpy as np.
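For reference, here is a minimal end-to-end sketch of the fix (the file names, FOURCC, and FPS fallback are placeholder assumptions; it keeps the absolute Sobel response):
import cv2
import numpy as np

cap = cv2.VideoCapture('input.mp4')  # placeholder input path
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if the backend reports 0
out = cv2.VideoWriter('output.mp4', cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    sobel = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=5)
    # float64 with negatives -> uint8 in 0-255 before writing
    edges = np.absolute(sobel).clip(0, 255).astype(np.uint8)
    out.write(cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR))  # writer expects 3-channel BGR

cap.release()
out.release()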
I'm using OpenCV 3 and Python 3.7 to capture a live video stream from my webcam, and I want to control the brightness and contrast. I cannot control the camera settings using OpenCV's cap.set(cv2.CAP_PROP_BRIGHTNESS, ...) and cap.set(cv2.CAP_PROP_CONTRAST, ...) calls, so I want to apply the contrast and brightness after each frame is read. The NumPy array of each captured image has shape (480, 640, 3). The following code properly displays the video stream without any attempt to change the brightness or contrast.
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
I get a washed-out video stream when I use Numpy’s clip() method to control the contrast and brightness, even when I set contrast = 1.0 (no change to contrast) and brightness = 0 (no change to brightness). Here is my attempt to control contrast and brightness.
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    contrast = 1.0
    brightness = 0
    frame = np.clip(contrast * frame + brightness, 0, 255)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
How can I control the contrast and brightness of a video stream using OpenCV?
I found the solution using the numpy.clip() method, and #fmw42 provided a solution using the cv2.normalize() method. I like the cv2.normalize() solution slightly better because it normalizes the pixel values to 0-255 rather than clipping them at 0 or 255. Both solutions are provided here. (Incidentally, the washed-out display in my original attempt came from the dtype: contrast * frame + brightness promotes the frame to float64, and cv2.imshow expects floating-point images in the range 0.0-1.0, so a 0-255 float frame shows up mostly white. Casting back with .astype(np.uint8) also fixes that.)
The cv2.normalize() solution:
Brightness - shift the alpha and beta values by the same amount. Alpha can be negative and beta can be higher than 255. (If alpha >= 255, then the picture is white; if beta <= 0, then the picture is black.)
Contrast - widen or shorten the gap between alpha and beta.
Here is the code:
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    cv2.normalize(frame, frame, 0, 255, cv2.NORM_MINMAX)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
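As a rough illustration of the alpha/beta knobs described above (the numbers are arbitrary examples, not recommendations):
import cv2

cap = cv2.VideoCapture(0)
ret, frame = cap.read()

# Raise the floor of the output range: brighter shadows, less contrast.
brighter = cv2.normalize(frame, None, 64, 255, cv2.NORM_MINMAX)

# Lower the ceiling instead: darker overall.
darker = cv2.normalize(frame, None, 0, 192, cv2.NORM_MINMAX)

cap.release()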
The numpy.clip() solution:
This helped me solve the problem: How to fast change image brightness with python + OpenCV?. I need to:
1. Convert from BGR (OpenCV's channel order) to Hue-Saturation-Value (HSV) first ("Value" is the same as "Brightness")
2. "Slice" the NumPy array down to the Value channel and adjust brightness and contrast on that slice
3. Convert back from HSV to BGR.
Here is the working solution. Vary the contrast and brightness values. numpy.clip() ensures that the adjusted Value channel stays between 0 and 255 before it is merged back into the frame.
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    contrast = 1.25
    brightness = 50
    frame[:, :, 2] = np.clip(contrast * frame[:, :, 2] + brightness, 0, 255)
    frame = cv2.cvtColor(frame, cv2.COLOR_HSV2BGR)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
A variation of the cv2.normalize() solution, as a standalone script:
import cv2 as cv

cap = cv.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    # Normalize the frame
    frame = cv.normalize(
        frame, None, alpha=0, beta=255, norm_type=cv.NORM_MINMAX, dtype=cv.CV_8UC1
    )
    # Display the resulting frame
    cv.imshow("frame", frame)
    # Press q to quit
    if cv.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv.destroyAllWindows()
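As an aside, if you want a fixed gain/bias adjustment rather than per-frame min/max normalization, cv2.convertScaleAbs does the multiply, add, saturate, and uint8 cast in one call; a minimal sketch (the alpha/beta values are arbitrary examples):
import cv2

cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # alpha is contrast (gain), beta is brightness (bias);
    # the result is saturated to 0-255 and returned as uint8.
    adjusted = cv2.convertScaleAbs(frame, alpha=1.25, beta=50)
    cv2.imshow('frame', adjusted)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()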
I'm very new to image processing and OpenCV. I'm working on a helper-function to assist in tuning some key-frame detection parameters.
I'd like to be able to apply a (dramatic) color cast to a video frame if that frame is a member of a set of detected frames, so that I can play back the video with this function and see which frames were detected as members of each class.
The code below will briefly "hold" a frame if it's a member of one of the classes, and I used a simple method from the Getting Started with Video tutorial in the OpenCV-Python docs to display members of one class in grayscale. But as I have multiple classes, I need to be able to tell one class from another, so I'd like to apply striking color casts to signal members of the other classes.
I haven't seen anything that teaches simple color adjustment using OpenCV; any help would be greatly appreciated!
import cv2

def play_video(video, frames_class_a=None, frames_class_b=None, frames_class_c=None):
    """Plays a video, holding and color-coding certain frames."""
    if frames_class_a is None:
        frames_class_a = []
    if frames_class_b is None:
        frames_class_b = []
    if frames_class_c is None:
        frames_class_c = []

    cap = cv2.VideoCapture(video)
    i = 0
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        if i in frames_class_a:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            cv2.imshow('frame', gray)
            if cv2.waitKey(600) & 0xFF == ord('q'):
                break
        elif i in frames_class_b:
            cv2.imshow('frame', frame)  # apply "green" cast here
            if cv2.waitKey(600) & 0xFF == ord('q'):
                break
        elif i in frames_class_c:
            cv2.imshow('frame', frame)  # apply "red" cast here
            if cv2.waitKey(600) & 0xFF == ord('q'):
                break
        else:
            cv2.imshow('frame', frame)
            if cv2.waitKey(25) & 0xFF == ord('q'):
                break
        i = i + 1
    cap.release()
    cv2.destroyAllWindows()
Two simple options come to mind.
Let's start with this code:
import cv2
import numpy as np
frame = cv2.imread('lena3.png')
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
Option 1
Create a blank 3-channel image of the same shape as frame. numpy.zeros_like is ideal for this job.
Then, assign your grayscale image to one or two of the channels. For example, to get a result in shades of blue (channel 0, since OpenCV stores images in BGR order), the following code could be used:
result = np.zeros_like(frame)
result[...,0] = gray
cv2.imwrite('shaded_lena.png', result)
This produces the grayscale image rendered in shades of blue.
Option 2
The other option is to create a temporary 1-channel blank image of the same shape, and then use cv2.merge to combine it with the grayscale image.
The following code shows you all the possible combinations:
blank = np.zeros_like(gray)
shaded = [
    cv2.merge([gray, blank, blank]),
    cv2.merge([blank, gray, blank]),
    cv2.merge([blank, blank, gray]),
    cv2.merge([gray, gray, blank]),
    cv2.merge([gray, blank, gray]),
    cv2.merge([blank, gray, gray]),
]
cv2.imwrite('shaded_lena_all.png', np.hstack(shaded))
This produces all six tinted variants (stacked horizontally for ease of display).
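If it helps, here is a sketch of how this could slot into the play_video loop from the question; tint is a hypothetical helper name:
import cv2
import numpy as np

def tint(frame, channel):
    """Color-cast a BGR frame: channel 0/1/2 gives a blue/green/red cast."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    out = np.zeros_like(frame)
    out[..., channel] = gray
    return out

# e.g. in the frames_class_b branch:  cv2.imshow('frame', tint(frame, 1))
# and in the frames_class_c branch:   cv2.imshow('frame', tint(frame, 2))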
I am trying to detect a (photography) flash in a video using OpenCV.
I detected the frame in which the flash occurs (average brightness above a threshold) and now I'd like to get the frame number.
I tried using CV_CAP_PROP_POS_FRAMES from the OpenCV docs without any success.
import numpy as np
import cv2

cap = cv2.VideoCapture('file.MOV')

while cap.isOpened():
    ret, frame = cap.read()
    BW = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    average = np.average(v)  # computes the average brightness
    if average > 200:  # flash is detected
        cv2.imshow('frame', BW)
        frameid = cap.get(CV_CAP_PROP_POS_FRAMES)  # <--- this line does not work
        print(frameid)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Any tips?
You can either:
- use cap.get(cv2.CAP_PROP_POS_FRAMES) (see here, also), or
- increment a variable at each iteration: its current value is the current frame number.
A sketch of both options follows the note below.
From the OpenCV docs:
When querying a property that is not supported by the backend used by the VideoCapture class, value 0 is returned
Probably the property is not supported by your backend; in that case you have to count the frame number yourself.
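A minimal sketch of both approaches inside the question's loop (note that in the Python bindings the constant is cv2.CAP_PROP_POS_FRAMES, not CV_CAP_PROP_POS_FRAMES):
import cv2

cap = cv2.VideoCapture('file.MOV')  # path from the question
manual_count = 0

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # Option 1: ask the backend (may return 0 if unsupported).
    # CAP_PROP_POS_FRAMES is the index of the frame to be decoded next,
    # so the frame just read is that value minus 1.
    backend_pos = cap.get(cv2.CAP_PROP_POS_FRAMES)
    # Option 2: count the frames yourself.
    print(backend_pos - 1, manual_count)
    manual_count += 1

cap.release()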
I'm brand new to OpenCV and I can't seem to find a way to do this (it probably has to do with me not knowing the specific lingo).
I'm looping through the frames of a video and pulling out a mask from the video where it is green-screened, using inRange. I'm looking for a way to then insert an image into that location on the original frame. The code I'm using to pull the frames/mask is below.
import numpy as np
import cv2

cap = cv2.VideoCapture('vid.mov')
image = cv2.imread('photo.jpg')

# green digitally added, not much variance
lower = np.array([0, 250, 0])
upper = np.array([10, 260, 10])

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    cv2.imshow('frame', frame)
    # get mask of green area
    mask = cv2.inRange(frame, lower, upper)
    cv2.imshow('mask', mask)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
Use bitwise operations for masking and related binary operations. The code below shows how the bitwise operations are done.
Code
import numpy as np
import cv2

cap = cv2.VideoCapture('../video.mp4')
image = cv2.imread('photo.jpg')  # must be the same size as the video frames

# green digitally added, not much variance
lower = np.array([0, 250, 0])
upper = np.array([10, 260, 10])

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    cv2.imshow('frame', frame)
    # get mask of green area
    mask = cv2.inRange(frame, lower, upper)
    notMask = cv2.bitwise_not(mask)
    # keep the image where the mask is set, the frame everywhere else
    imagePart = cv2.bitwise_and(image, image, mask=mask)
    videoPart = cv2.bitwise_and(frame, frame, mask=notMask)
    output = cv2.bitwise_or(imagePart, videoPart)
    cv2.imshow('output', output)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
RGB is a bad color space for this
Since you are doing color processing, I would suggest you use an appropriate color space. HSV would be a good start; for finding a good range of HSV values, try this script.
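For example (a rough sketch; the HSV bounds are guesses for a pure digital green and will need tuning):
import cv2
import numpy as np

cap = cv2.VideoCapture('../video.mp4')  # same path as in the code above
ret, frame = cap.read()
if ret:
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Pure green (0, 255, 0) in BGR maps to hue 60 on OpenCV's 0-179 hue scale;
    # these bounds are assumptions, not measured values.
    lower = np.array([50, 200, 200])
    upper = np.array([70, 255, 255])
    mask = cv2.inRange(hsv, lower, upper)
cap.release()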
Generating Video Output
You need to create a video writer and, once all image processing is done, write each processed frame to a new video. I am pretty sure you cannot read from and write to the same video file.
Further, see the official docs here; they have examples of how to both read and write video.
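A minimal writer sketch under those constraints (the FOURCC and output name are placeholder assumptions):
import cv2

cap = cv2.VideoCapture('../video.mp4')
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if the backend reports 0
out = cv2.VideoWriter('output.mp4', cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # ... build `output` with the bitwise compositing shown above ...
    out.write(frame)  # write the processed uint8 BGR frame

cap.release()
out.release()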
I am making a program that uses a webcam to track objects in real time. The resulting display is always a few frames behind. For example, when I move the camera to point at a new spot, it still shows the first position for a few frames.
Here is my program; it should find the circles in the frame and return an image with them circled:
import cv2
import numpy as np

cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    circles = cv2.HoughCircles()  # parameters removed
    circles = np.uint16(np.around(circles))
    for i in circles[0, :]:
        # draw circle
        cv2.circle()  # parameters removed
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0XFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
As you can see, it takes some time to process each frame. I expected it to be choppy, but the result shows images from several seconds ago.