What would you have to do to get a single frame from a live webcam feed and update it repeatedly in an output file? I have seen this done before, so I know it is possible. I want to use something like Python if possible, but any help is welcome. Maybe this is possible using OpenCV?
This should meet your requirement of "saving each frame from the video feed":
import numpy as np
import cv2

cap = cv2.VideoCapture(0)
i = 0
while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    i += 1
    cv2.imwrite('database/{index}.png'.format(index=i), frame)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
In the code above, database is the directory where every frame is saved, named by the index i.
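Note that cv2.imwrite fails silently if the target directory does not exist, so create it first. And if you only want a single output file that is updated repeatedly, as the question asks, you can overwrite the same filename on every iteration. A minimal sketch of that variant (the filename current.png is just an example):

import os
import cv2

os.makedirs('database', exist_ok=True)  # make sure the output directory exists

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # Overwrite the same file on every iteration
    cv2.imwrite('database/current.png', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()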
OpenCV should be able to do this pretty easily:
import numpy as np
import cv2

cap = cv2.VideoCapture(0)
while True:
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
Source: http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_video_display/py_video_display.html
I’m using OpenCV 3 and Python 3.7 to capture a live video stream from my webcam, and I want to control the brightness and contrast. I cannot control the camera settings using OpenCV's cap.set(cv2.CAP_PROP_BRIGHTNESS, float) and cap.set(cv2.CAP_PROP_BRIGHTNESS, int) calls, so I want to apply the contrast and brightness after each frame is read. The Numpy array of each captured image is (480, 640, 3). The following code properly displays the video stream without any attempt to change the brightness or contrast.
import numpy as np
import cv2

cap = cv2.VideoCapture(0)
while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
I get a washed-out video stream when I use Numpy’s clip() method to control the contrast and brightness, even when I set contrast = 1.0 (no change to contrast) and brightness = 0 (no change to brightness). Here is my attempt to control contrast and brightness.
import numpy as np
import cv2

cap = cv2.VideoCapture(0)
while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    contrast = 1.0
    brightness = 0
    frame = np.clip(contrast * frame + brightness, 0, 255)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
How can I control the contrast and brightness of a video stream using OpenCV?
I found a solution using the numpy.clip() method, and #fmw42 provided a solution using the cv2.normalize() method. I like the cv2.normalize() solution slightly better because it normalizes the pixel values to 0-255 rather than clipping them at 0 or 255. Both solutions are provided here.
The cv2.normalize() solution:
Brightness - shift the alpha and beta values by the same amount. Alpha
can be negative and beta can be higher than 255. (If alpha >= 255,
then the picture is white, and if beta <= 0, then the picture is black.)
Contrast - widen or shorten the gap between alpha and beta.
Here is the code:
import numpy as np
import cv2

cap = cv2.VideoCapture(0)
while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    cv2.normalize(frame, frame, 0, 255, cv2.NORM_MINMAX)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
The numpy.clip() solution:
This helped me solve the problem: How to fast change image brightness with python + OpenCV? I need to:
Convert Red-Green-Blue (RGB) to Hue-Saturation-Value (HSV) first
(“Value” is the same as “Brightness”)
“Slice” the Numpy array down to the Value portion and adjust brightness and contrast on that slice
Convert back from HSV to RGB.
Here is the working solution. Vary the contrast and brightness values. numpy.clip() ensures that all the pixel values remain between 0 and 255 in each of the channels (R, G, and B).
import numpy as np
import cv2

cap = cv2.VideoCapture(0)
while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    contrast = 1.25
    brightness = 50
    frame[:, :, 2] = np.clip(contrast * frame[:, :, 2] + brightness, 0, 255)
    frame = cv2.cvtColor(frame, cv2.COLOR_HSV2BGR)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
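As a side note, the washed-out image in the original attempt was most likely a dtype problem: contrast * frame + brightness produces a float64 array, and cv2.imshow treats floating-point images as having a 0-1 range, so a 0-255 float image saturates toward white. Casting back to uint8 after clipping should make the plain BGR version work too; a minimal sketch of that fix:

import numpy as np
import cv2

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    contrast = 1.0
    brightness = 0
    # Cast back to uint8 so imshow interprets the 0-255 range correctly
    frame = np.clip(contrast * frame + brightness, 0, 255).astype(np.uint8)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()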
import cv2 as cv

cap = cv.VideoCapture(0)
while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    # Normalize the frame to the full 0-255 range
    frame = cv.normalize(
        frame, None, alpha=0, beta=255, norm_type=cv.NORM_MINMAX, dtype=cv.CV_8UC1
    )
    # Display the resulting frame
    cv.imshow("frame", frame)
    # Press q to quit
    if cv.waitKey(1) & 0xFF == ord("q"):
        break

# Release the capture when done
cap.release()
cv.destroyAllWindows()
I want to detect obstacles in a video based on their increasing size. To do that, I first applied SIFT to the grayscale image to get the feature points of the current frame. Next, to compare the feature points of the current frame with those of the previous frame, I want to apply the Brute-Force matcher. For that I need the feature points of the previous frame. How can I access the previous frame in OpenCV Python? And how do I avoid accessing the previous frame when the current frame is the first frame of the video?
Below is the code, written in Python, that gets the feature points of the current frame.
import cv2
import numpy as np

cap = cv2.VideoCapture('video3.mov')
while cap.isOpened():
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect key feature points
    sift = cv2.xfeatures2d.SIFT_create()
    kp, des = sift.detectAndCompute(gray, None)
    # Draw the detected key points
    img = cv2.drawKeypoints(gray, kp, gray, flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
    cv2.imshow("grayframe", img)
    if cv2.waitKey(100) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
You could also get/set the zero-based frame index (CAP_PROP_POS_FRAMES), which might be useful if you wanted flexibility to step back through more than one frame, compare to a specific frame, etc. Note though that this would reset the position for the next read(), so if you really only ever want the previous frame, storing it in a variable per the other answers is probably better.
next_frame = cap.get(cv2.CAP_PROP_POS_FRAMES)
current_frame = next_frame - 1
previous_frame = current_frame - 1

if previous_frame >= 0:
    cap.set(cv2.CAP_PROP_POS_FRAMES, previous_frame)
    ret, frame = cap.read()
There is no specific function in OpenCV to access the previous frame. Your problem can be solved by calling cap.read() once before entering the while loop. Use a variable prev_frame to store the previous frame just before reading the new one. Finally, as good practice, you should verify that the frame was properly read before doing computations on it. Your code could look something like:
import cv2
import numpy as np

cap = cv2.VideoCapture('video3.mov')
ret, frame = cap.read()
while cap.isOpened():
    prev_frame = frame.copy()  # keep an independent copy of the previous frame
    ret, frame = cap.read()
    if ret:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Detect key feature points
        sift = cv2.xfeatures2d.SIFT_create()
        kp, des = sift.detectAndCompute(gray, None)
        # ... some magic with prev_frame ...
        # Draw the detected key points
        img = cv2.drawKeypoints(gray, kp, gray, flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
        cv2.imshow("grayframe", img)
    else:
        print('Could not read frame')
    if cv2.waitKey(100) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
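For the "magic with prev_frame" step, a minimal sketch of the Brute-Force matching the question mentions might look like the following. It assumes you also keep the previous frame's descriptors; prev_des is a name introduced here for illustration (compute it from prev_frame the same way des is computed from the current frame):

# Hypothetical sketch: match SIFT descriptors of the current frame (des)
# against those of the previous frame (prev_des), using a ratio test.
bf = cv2.BFMatcher()  # default L2 norm suits SIFT descriptors
if prev_des is not None and des is not None:
    matches = bf.knnMatch(prev_des, des, k=2)
    # Lowe's ratio test keeps only distinctive matches
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]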
Simply save the current frame to be the previous frame in the next iteration. Use a list if you need more than one; see the sketch after the code below.
import cv2
import numpy as np

cap = cv2.VideoCapture('video3.mov')
previousFrame = None
while cap.isOpened():
    ret, frame = cap.read()
    if previousFrame is not None:
        # Use the previous frame here
        pass
    # Save the current frame for the next iteration
    previousFrame = frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect key feature points
    sift = cv2.xfeatures2d.SIFT_create()
    kp, des = sift.detectAndCompute(gray, None)
    # Draw the detected key points
    img = cv2.drawKeypoints(gray, kp, gray, flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
    cv2.imshow("grayframe", img)
    if cv2.waitKey(100) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
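If you need more than one previous frame, a bounded collections.deque keeps the last N frames automatically; a minimal sketch (N=5 is just an example):

from collections import deque
import cv2

cap = cv2.VideoCapture('video3.mov')
recent_frames = deque(maxlen=5)  # automatically discards frames older than the last 5
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    if recent_frames:
        oldest, newest = recent_frames[0], recent_frames[-1]
        # ... compare the current frame against oldest/newest here ...
    recent_frames.append(frame)

cap.release()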
I'm trying to play a video file using Python OpenCV. This is my code, but it is not showing the video file when I run it.
import numpy as np
import cv2

cap = capture = cv2.VideoCapture('C2.mp4')
while cap.isOpened():
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', gray)
    cv2.waitKey(1)

cap.release()
cv2.destroyAllWindows()
I tried the answer in this link, but it is not working either.
I think you just have to increase the number inside the cv2.waitKey() function to maybe 25 or 30. You should get the desired result.
Also, there is no need to write cap = capture = cv2.......
Simply writing
cap = cv2.VideoCapture('path of the video')
should work as well. Hope it works.
import numpy as np
import cv2

cap = cv2.VideoCapture('C2.mp4')
while cap.isOpened():
    ret, frame = cap.read()
    if ret == True:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        cv2.imshow('frame', gray)
        # & 0xFF is required for a 64-bit system
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break

cap.release()
cv2.destroyAllWindows()
This code worked for me. It shows both the original and the grayscale video output. Press 'q' to exit. I also didn't see the need for cap = capture... in your code.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', frame)
    cv2.imshow('grayF', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Try using this
cv.CaptureFromFile()
and also check out this link http://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_gui/py_video_display/py_video_display.html
First of all, there is no need for capture, since you are not using it in your code. The reason your video file is not showing is probably that it is not saved in the same directory as your code.
You can instead give the path to where the file is saved, as shown below:
cap = cv2.VideoCapture('(path to the video file)/cv2.mp4')
You also need to change the argument inside waitKey(); otherwise the program will not close the window that shows the video correctly.
Try the following. Put an if statement around the waitKey() call and increase its argument, which indicates the number of milliseconds it waits for a key event, to 25 or whatever number you like, so that when you press the ESC key the window is destroyed:
if cv2.waitKey(25) & 0xFF == 27:
    break
The problem in my code was in the while statement. It should be while(True) instead of what I had.
Rename opencv_ffmpeg.dll to opencv_ffmpeg2413.dll in your project directory if you installed opencv-2.4.13.exe.
I am working on a project that requires me to display 3 (and possibly more) webcam feeds side by side. To tackle this project, I am using OpenCV 3.0.0 Beta and Python 2.7.5 because I am slightly familiar with the language. Also, how do I display the video in color?
Here is my current code:
import cv2
import numpy as np

capture = cv2.VideoCapture(0)
capture1 = cv2.VideoCapture(1)
while True:
    ret, frame = capture.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow("frame", gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

capture.release()
cv2.destroyAllWindows()
import cv2
import numpy as np

capture = cv2.VideoCapture(0)
capture1 = cv2.VideoCapture(1)
while True:
    _, frame1 = capture.read()
    _, frame2 = capture1.read()
    # Note: cv2.imshow expects BGR, so this conversion swaps red and blue on
    # screen; drop these two lines to display the frames in their true colors.
    frame1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2RGB)
    frame2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2RGB)
    cv2.imshow("frame1", frame1)
    cv2.imshow("frame2", frame2)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

capture.release()
capture1.release()
cv2.destroyAllWindows()
To display color, simply don't convert to grayscale. To show two frames simultaneously, just call imshow() twice. As for side by side, you can play with the window positions if you really want; see the sketch below. Also note that cv2.imshow expects BGR frames, so the BGR-to-RGB conversion above will swap the red and blue channels on screen; to keep the original colors, show the frames exactly as they are read.
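For a literal side-by-side view, one option is to concatenate the frames into a single image with numpy before displaying; a minimal sketch, assuming both cameras deliver frames of the same size:

import cv2
import numpy as np

capture = cv2.VideoCapture(0)
capture1 = cv2.VideoCapture(1)
while True:
    ok1, frame1 = capture.read()
    ok2, frame2 = capture1.read()
    if not (ok1 and ok2):
        break
    # Stack the two frames horizontally into one image (sizes must match)
    combined = np.hstack((frame1, frame2))
    cv2.imshow("side by side", combined)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

capture.release()
capture1.release()
cv2.destroyAllWindows()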
I used the following code to capture a video file, flip it and save it.
# To save a video file
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

# Define the codec and create a VideoWriter object
fourcc = cv2.cv.CV_FOURCC(*'XVID')
out = cv2.VideoWriter('output.avi', fourcc, 20.0, (640, 480))

while cap.isOpened():
    ret, frame = cap.read()
    if ret == True:
        frame = cv2.flip(frame, 0)
        # Write the flipped frame
        out.write(frame)
        cv2.imshow('frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break

# Release everything when the job is finished
cap.release()
out.release()
cv2.destroyAllWindows()
This program saves the output as output.avi
Now, to play back the video file, I used the following program:
# Playing video from file
import numpy as np
import cv2

cap = cv2.VideoCapture('output.avi')
print cap.get(5)  # display the frame rate of the video
# print cap.get(cv2.cv.CV_CAP_PROP_FPS)

while cap.isOpened():
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # convert to grayscale
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
This program plays back the video file output.avi saved by the first program. The thing is, the video appears fast-forwarded. So I tried changing the delay value for cv2.waitKey(). The video looked fine when I used 100. How do I know which value to put there? Should it be related to the frame rate? I checked the frame rate of output.avi (see the cap.get(5) line in the second program) and got 20. But if I use 20 as the delay for cv2.waitKey(), the video is still too fast.
Any help would be appreciated.
From the OpenCV documentation:
The function cv.waitKey([, delay]) waits for a key event infinitely
(when delay <= 0) or for delay milliseconds, when it is positive.
If the FPS is 20, then you should wait 0.05 seconds between displaying consecutive frames. So just put waitKey(50) after imshow() to get the desired playback speed.
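Rather than hard-coding the number, you can read the frame rate from the capture and derive the delay. A minimal sketch (cv2.CAP_PROP_FPS is the OpenCV 3+ name; in 2.x it is cv2.cv.CV_CAP_PROP_FPS, as in the commented line in the question's code):

import cv2

cap = cv2.VideoCapture('output.avi')
fps = cap.get(cv2.CAP_PROP_FPS)  # e.g. 20.0 for the file above
delay = int(1000 / fps) if fps > 0 else 25  # milliseconds per frame, with a fallback

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('frame', frame)
    if cv2.waitKey(delay) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()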
For what it is worth, I have tried all sorts of tricks with setting the cv2.waitKey() delay time and they have all failed. What I have found to work is to put something like key = cv2.waitKey(1) inside the while cap.isOpened() loop, like so:
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

# Define the codec and create a VideoWriter object
fourcc = cv2.cv.CV_FOURCC(*'XVID')
out = cv2.VideoWriter('output.avi', fourcc, 20.0, (640, 480))

while cap.isOpened():
    ret, frame = cap.read()
    if ret == True:
        key = cv2.waitKey(1)
        frame = cv2.flip(frame, 0)
        # Write the flipped frame
        out.write(frame)
        cv2.imshow('frame', frame)
        if key & 0xFF == ord('q'):
            break
    else:
        break

# Release everything when the job is finished
cap.release()
out.release()
cv2.destroyAllWindows()
I hope this helps someone out there.
Put waitKey(60) after imshow() and the video will be displayed at normal speed.