I am using Windows 10, Python 3.7, OpenCV 4 and a Logitech C922 webcam. While the camera delivers 30 fps in the Windows Camera app, I cannot get more than 5-6 fps using OpenCV. The resolution is set to Full HD (1920x1080).
import cv2

cam = cv2.VideoCapture(cv2.CAP_DSHOW + 0)  # DirectShow backend, camera index 0
while True:
    ret, frame = cam.read()
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
In another post I found a suggested solution: change the codec to MJPG. However, the camera does not seem to accept the codec change. I tried:
cam.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('m','j','p','g'))
cam.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('M','J','P','G'))
cam.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG'))
cam.set(cv2.CAP_PROP_FOURCC, float(cv2.VideoWriter_fourcc('m','j','p','g')))
cam.set(cv2.CAP_PROP_FOURCC, float(cv2.VideoWriter_fourcc('M','J','P','G')))
cam.set(cv2.CAP_PROP_FOURCC, 1196444237.0)
The camera always returns "844715353.0" (which decodes to the FOURCC 'YUY2', an uncompressed format).
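For reference, the returned value can be decoded back into its four-character code (using the cam object from the snippet above):

fourcc = int(cam.get(cv2.CAP_PROP_FOURCC))
print("".join(chr((fourcc >> 8 * i) & 0xFF) for i in range(4)))  # prints 'YUY2' here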
How can I achieve higher fps?
It seems that order matters. As I understand it, OpenCV uses FFmpeg in the background. In FFmpeg the command would be something like this:
ffmpeg -f dshow -framerate 60 -video_size 1280x720 -input_format mjpeg -i video="my webcam" out.mkv
So your OpenCV code should look something like this:
import cv2

my_cam_index = 0
cap = cv2.VideoCapture(my_cam_index, cv2.CAP_DSHOW)
# Set FPS and resolution first, then the codec:
cap.set(cv2.CAP_PROP_FPS, 60.0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter.fourcc('M','J','P','G'))
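A quick, hedged way to verify whether the settings were actually accepted is to read the properties back, since set() returning True does not always guarantee the value took effect:

print(cap.get(cv2.CAP_PROP_FPS))     # should report 60.0 if accepted
print(cap.get(cv2.CAP_PROP_FOURCC))  # 1196444237.0 corresponds to 'MJPG'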
I work with a Logitech C920 webcam, and I tested the WebcamVideoStream class from the imutils module. It uses threading and a queue; the idea for improving FPS when processing video streams with OpenCV is to move the I/O (i.e., the reading of frames from the camera sensor) to a separate thread. You can also set a custom resolution in the file webcamvideostream.py.
You can use threading to obtain a higher FPS; maybe this solution helps you (a minimal sketch follows after the links).
Please see these links:
https://www.pyimagesearch.com/2015/12/21/increasing-webcam-fps-with-python-and-opencv/
https://www.pyimagesearch.com/2017/02/06/faster-video-file-fps-with-cv2-videocapture-and-opencv/
https://github.com/jrosebr1/imutils/blob/master/imutils/video/webcamvideostream.py
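For illustration, here is a minimal sketch of the same threaded-capture idea (not the exact imutils implementation; the ThreadedCapture name is just for this example). A background thread keeps grabbing frames so the main loop always processes the most recent one instead of blocking on I/O:

import threading
import cv2

class ThreadedCapture:
    def __init__(self, src=0):
        self.cap = cv2.VideoCapture(src)
        self.ret, self.frame = self.cap.read()
        self.stopped = False
        threading.Thread(target=self._update, daemon=True).start()
    def _update(self):
        # Read frames continuously in the background so read() never blocks.
        while not self.stopped:
            self.ret, self.frame = self.cap.read()
    def read(self):
        return self.ret, self.frame
    def stop(self):
        self.stopped = True
        self.cap.release()

stream = ThreadedCapture(0)
while True:
    ret, frame = stream.read()
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
stream.stop()
cv2.destroyAllWindows()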
I have the very simple code below; I need to view RTSP video from an IP camera. But when I run it, the video plays very slowly.
The on-screen timestamp 11:49:05 only advances to 11:49:06 after about 3 real seconds, so I see 1 second of video for every 3 seconds of real time. It is running very slowly. Where is my problem? How can I solve this?
When I check the camera's live view on its local IP address, it is fast and normal.
I installed FFmpeg on my system (Windows 10):
ffmpeg -version
ffmpeg version 2022-11-03-git-5ccd4d3060-essentials_build-www.gyan.dev Copyright (c) 2000-2022 the FFmpeg developers
built with gcc 12.1.0 (Rev2, Built by MSYS2 project)
And I also tried this:
cap = cv2.VideoCapture('rtsp://[USERNAME]:[PASS]@192.168.1.64/1', cv2.CAP_FFMPEG)
And nothing has changed.
This is the simple code:
import cv2

cap = cv2.VideoCapture('rtsp://[USERNAME]:[PASS]@192.168.1.64/1')
# cv2.CAP_FFMPEG
while cap.isOpened():
    ret, frame = cap.read()
    # frame = cv2.rotate(frame, cv2.ROTATE_90_CLOCKWISE)
    if not ret:
        break
    cv2.imshow('frame', frame)
    if cv2.waitKey(400) & 0xFF == ord('q'):
        break
cap.release()
# out.release() # saved
cv2.destroyAllWindows()
DEBUG WITH FFMPEG
When I run the command below in cmd, I have the same issue:
ffplay -i rtsp://[PRIVATE]:[PRIVATE]@192.168.1.64/Streaming/Channels/1
But when I run this:
ffplay -fflags nobuffer rtsp://[PRIVATE]:[PRIVATE]@192.168.1.64/Streaming/Channels/1
There is no time delay. How can I pass the -fflags nobuffer option in my OpenCV code?
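As far as I know, OpenCV's FFmpeg backend reads extra capture options from the OPENCV_FFMPEG_CAPTURE_OPTIONS environment variable ("key;value" pairs separated by "|"), which must be set before the capture is opened. A minimal sketch (not verified on this exact camera):

import os
# Must be set before the stream is opened; syntax is "key;value|key;value".
os.environ['OPENCV_FFMPEG_CAPTURE_OPTIONS'] = 'fflags;nobuffer|flags;low_delay'
import cv2
cap = cv2.VideoCapture('rtsp://[USERNAME]:[PASS]@192.168.1.64/1', cv2.CAP_FFMPEG)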
I attached a USB webcam to my Raspberry Pi Zero W through an OTG cable. When I run my Python script, the OpenCV video capture at first gave me select timeout errors:
import cv2 as cv

cap = cv.VideoCapture(0)
_, img = cap.read()
cv.imwrite(filename="image.jpg", img=img)
Then I tried:
rmmod uvcvideo
modprobe uvcvideo nodrop=1 timeout=5000 quirks=0x80
It doesn't give select timeout errors anymore, but the output image from the webcam looks corrupt.
I fixed it a while ago and I thought I should answer my own question:
cap.set(cv.CAP_PROP_FRAME_WIDTH, 480)
cap.set(cv.CAP_PROP_FRAME_HEIGHT, 360)
I just had to tell OpenCV the correct width and height of the camera.
I have recently set up a Raspberry Pi camera and am streaming the frames over RTSP. While it may not be completely necessary, here is the command I am using to broadcast the video:
raspivid -o - -t 0 -w 1280 -h 800 |cvlc -vvv stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/output.h264}' :demux=h264
This streams the video perfectly.
What I would now like to do is parse this stream with Python and read each frame individually. I would like to do some motion detection for surveillance purposes.
I am completely lost on where to start on this task. Can anyone point me to a good tutorial? If this is not achievable via Python, what tools/languages can I use to accomplish this?
Using the same method listed by "depu" worked perfectly for me.
I just replaced the video file with the RTSP URL of the actual camera.
The example below worked on an AXIS IP camera.
(This was not working for a while in previous versions of OpenCV; it works on OpenCV 3.4.1, Windows 10.)
import cv2

cap = cv2.VideoCapture("rtsp://root:pass@192.168.0.91:554/axis-media/media.amp")
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('frame', frame)
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
Bit of a hacky solution, but you can use the VLC Python bindings (you can install them with pip install python-vlc) and play the stream:
import vlc
player = vlc.MediaPlayer('rtsp://:8554/output.h264')
player.play()
Then take a snapshot every second or so:
import time

while True:
    time.sleep(1)
    player.video_take_snapshot(0, '.snapshot.tmp.png', 0, 0)
And then you can use SimpleCV or something for processing (just load the image file '.snapshot.tmp.png' into your processing library).
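For example, with OpenCV the snapshot file can be picked up like this (same path as used above; imread simply returns None if the file is not readable yet):

import cv2
img = cv2.imread('.snapshot.tmp.png')
if img is not None:
    print(img.shape)  # process the frame here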
Use OpenCV:
video = cv2.VideoCapture("rtsp url")
Then you can capture frames. Read the OpenCV documentation: https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_video_display/py_video_display.html
Depending on the stream type, you can probably take a look at this project for some ideas.
https://code.google.com/p/python-mjpeg-over-rtsp-client/
If you want to be mega-pro, you could use something like http://opencv.org/ (Python modules are available, I believe) for handling the motion detection.
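As a starting point for the motion detection itself, here is a hedged sketch using simple frame differencing with OpenCV (the stream URL matches the one above; both thresholds are arbitrary and need tuning):

import cv2

cap = cv2.VideoCapture('rtsp://:8554/output.h264')
ret, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Difference against the previous frame; a large changed area suggests motion.
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 5000:  # arbitrary sensitivity threshold
        print("motion detected")
    prev_gray = gray
cap.release()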
Here is yet one more option.
It's much more complicated than the other answers, but this way, with just one connection to the camera, you can "fork" the same stream simultaneously to several multiprocesses, to the screen, recast it into multicast, write it to disk, etc.
Of course, only in the case that you need something like that (otherwise you'd prefer the earlier answers).
Let's create two independent python programs:
Server program (rtsp connection, decoding) server.py
Client program (reads frames from shared memory) client.py
Server must be started before the client, i.e.
python3 server.py
And then in another terminal:
python3 client.py
Here is the code:
(1) server.py
import time
from valkka.core import *

# YUV => RGB interpolation to the small size is done every 1000 milliseconds
# and passed on to the shmem ringbuffer
image_interval = 1000
# define rgb image dimensions
width = 1920 // 4
height = 1080 // 4
# posix shared memory: identification tag and size of the ring buffer
shmem_name = "cam_example"
shmem_buffers = 10
shmem_filter = RGBShmemFrameFilter(shmem_name, shmem_buffers, width, height)
sws_filter = SwScaleFrameFilter("sws_filter", width, height, shmem_filter)
interval_filter = TimeIntervalFrameFilter("interval_filter", image_interval, sws_filter)
avthread = AVThread("avthread", interval_filter)
av_in_filter = avthread.getFrameFilter()
livethread = LiveThread("livethread")
ctx = LiveConnectionContext(LiveConnectionType_rtsp, "rtsp://user:password@192.168.x.x", 1, av_in_filter)
avthread.startCall()
livethread.startCall()
avthread.decodingOnCall()
livethread.registerStreamCall(ctx)
livethread.playStreamCall(ctx)
# all those threads are written in cpp and they are running in the
# background. Sleep for 20 seconds - or do something else while
# the cpp threads are running and streaming video
time.sleep(20)
# stop threads
livethread.stopCall()
avthread.stopCall()
print("bye")
(2) client.py
import cv2
from valkka.api2 import ShmemRGBClient

width = 1920 // 4
height = 1080 // 4
# This identifies the posix shared memory - must be the same as on the server side
shmem_name = "cam_example"
# Size of the shmem ringbuffer - must be the same as on the server side
shmem_buffers = 10
client = ShmemRGBClient(
    name=shmem_name,
    n_ringbuffer=shmem_buffers,
    width=width,
    height=height,
    mstimeout=1000,  # client times out if nothing has been received in 1000 milliseconds
    verbose=False
)
while True:
    index, isize = client.pull()
    if index is None:
        print("timeout")
    else:
        data = client.shmem_list[index][0:isize]
        img = data.reshape((height, width, 3))
        img = cv2.GaussianBlur(img, (21, 21), 0)
        cv2.imshow("valkka_opencv_demo", img)
        cv2.waitKey(1)
If you're interested, there is more at https://elsampsa.github.io/valkka-examples/
Reading frames from a video can be achieved using Python and OpenCV. Below is sample code; it works fine with Python and OpenCV 2.
import cv2
import os

# The code below captures video frames and saves them to a folder (in the current working directory)
dirname = 'myfolder'
os.makedirs(dirname, exist_ok=True)  # make sure the output folder exists
# video path
cap = cv2.VideoCapture("your rtsp url")
count = 0
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    else:
        cv2.imshow('frame', frame)
        # The received "frame" is saved here; you can also manipulate "frame" as per your needs.
        name = "rec_frame" + str(count) + ".jpg"
        cv2.imwrite(os.path.join(dirname, name), frame)
        count += 1
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
Use it like this:
cv2.VideoCapture("rtsp://username:password@IPAddress:PortNo(rest of the link after the IP address)")
I just got a high-end 1080p webcam. Opening it in the Windows 10 "Camera" app displays it flawlessly at 25 or 30 fps; however, when using OpenCV it's very laggy. I put a timer in the loop while disabling the display, and I get around 200 ms between frames.
Why?
import numpy as np
import cv2
import time

def getAvailableCameraIds(max_to_test):
    available_ids = []
    for i in range(max_to_test):
        temp_camera = cv2.VideoCapture(i)
        if temp_camera.isOpened():
            temp_camera.release()
            print "found camera with id {}".format(i)
            available_ids.append(i)
    return available_ids

def displayCameraFeed(cameraId, width, height):
    cap = cv2.VideoCapture(cameraId)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
    while True:
        start = time.time()
        ret, frame = cap.read()
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        end = time.time()
        print "time to read a frame : {} seconds".format(end - start)
        # DISABLED
        # cv2.imshow('frame', frame)
        # if cv2.waitKey(1) & 0xFF == ord('q'):
        #     break
    cap.release()
    cv2.destroyAllWindows()

# print getAvailableCameraIds(100)
displayCameraFeed(0, 1920, 1080)
Thanks.
OpenCV 3.1 on Windows 10 x64, with Python 2.7 x64.
I've faced the same problem on my Linux system, where I had a 150 ms delay between frames. In my case, the problem was that the camera's Auto Exposure feature was ON, which increased exposure times and caused the delay.
Turning OFF auto exposure reduced the delay to 49-51 ms.
Here is a link from OBSProject that talks about it https://obsproject.com/forum/threads/getting-the-most-out-of-your-webcam.1036/
I'm not sure how you'd do this on a Windows machine; a Google search suggests that changing it in your Skype settings changes it globally. (If you have software bundled with your camera, you could probably change it there as well.)
As for a Linux machine, running v4l2-ctl --list-ctrls lists the camera features that you can modify.
I set exposure_auto_priority (bool) to 0, which turns OFF auto exposure.
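If you'd rather stay inside OpenCV, there is also a CAP_PROP_AUTO_EXPOSURE property; a minimal sketch, with the caveat that the accepted values are backend-dependent (on the V4L2 backend, 0.25 commonly selects manual exposure and 0.75 re-enables auto):

import cv2

cap = cv2.VideoCapture(0)
# Backend-dependent magic values: 0.25 = manual, 0.75 = auto on V4L2.
cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.25)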
For me this did the trick on Windows 10 with a Logitech C922.
The order in which the methods are called seems to have an impact.
(I have 'import cv2 as cv' instead of 'import cv2'.)
cap = cv.VideoCapture(camera_index + cv.CAP_DSHOW)
cap.set(cv.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv.CAP_PROP_FRAME_HEIGHT, 1080)
cap.set(cv.CAP_PROP_FPS, 30)
cap.set(cv.CAP_PROP_FOURCC, cv.VideoWriter_fourcc(*'MJPG'))
How can I make cv2.VideoCapture(0) recognize the USB camera on my Raspberry Pi?
def OnRecord(self, evt):
    capture = cv2.VideoCapture(0)
    if not capture.isOpened():
        print "Error"
    # video recorder
    fourcc = cv2.cv.CV_FOURCC(*'XVID')  # cv2.VideoWriter_fourcc() does not exist
    video_writer = cv2.VideoWriter.open("output.mp4", fourcc, 20, (640, 480), True)
    # record video
    while capture.isOpened():
        ret, frame = capture.read()
        if ret == True:
            video_writer.write(frame)
            cv2.imshow('Video', frame)
        else:
            break

def OnCancel(self, evt):
    capture.release()
    video_writer.release()
    cv2.destroyAllWindows()
but it only prints Error.
So I guess the capture is not opening. What might be the reason?
I also tried this code from the OpenCV documentation, but it didn't work out for me:
import numpy as np
import cv2

cap = cv2.VideoCapture(0)
# Define the codec and create a VideoWriter object
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('output.avi', fourcc, 20.0, (640, 480))
while cap.isOpened():
    ret, frame = cap.read()
    if ret == True:
        frame = cv2.flip(frame, 0)
        # write the flipped frame
        out.write(frame)
        cv2.imshow('frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break
# Release everything when the job is finished
cap.release()
out.release()
cv2.destroyAllWindows()
Any help would be greatly appreciated.
Load the correct video driver for Linux:
sudo modprobe bcm2835-v4l2
In my experience with cv2, replacing a webcam source on Linux isn't always easy. OpenCV automatically draws from the system's default video source, which is (usually) known as video0. Unplug your USB webcam, go into a terminal, and type ls /dev/video*.
Remember the number it shows. Now plug in your USB webcam and type ls /dev/video* again, looking for any new /dev/videoX entry; that is your USB webcam. Now type mv /dev/videoX /dev/videoY, where X is the number of your USB webcam and Y the original number. This will replace your Pi's default camera.
This isn't permanent, as you will need to do it every time your Pi starts up. An alternative is creating a bash file that runs on startup. Create a text file and copy the following into it:
#!/bin/bash
mv /dev/videoX /dev/videoY
(replace the X and Y, of course)
Then place it in the /etc/init.d directory of your Pi. Don't forget that you may need to run
chmod 755 /etc/init.d/FILENAME.sh
to give it permission to execute.
Go to a terminal and type lsusb to check whether the USB camera is recognized. If it is recognized, then try a different device ID, such as 1, 2, or 3, rather than 0.
Looks like you might have an issue with the codec; try using the 'MJPG' codec instead of XVID.
For more details, have a look here.
Make sure that the camera you are using is UVC compatible, as OpenCV running on Linux-based systems (like a Raspberry Pi) starts to do silly things when working with non-UVC cameras.