import time
from datetime import datetime
from cv2 import *
import schedule

def main():
    capture = cv2.VideoCapture(0)
    while True:
        success, image = capture.read()
        cv2.imshow("Live Feed", image)
        cv2.waitKey(1)
        schedule.every(10).seconds.do(take_screenshot())

def take_screenshot():
    cv2.imwrite(f"test-{str(datetime.now())}", image)

if __name__ == '__main__':
    main()
I am working on a research project of my own. The web camera stays open for 60 minutes, and I want to capture a picture of the user every minute, i.e. take a screenshot of the live feed at one-minute intervals. I tried some videos and websites and found that the schedule library might solve my issue, but I'm getting an error that image is not defined. How can I pass the value of image to the screenshot function, or how can I take a screenshot after a specific interval and save it to a directory?
If you are using an external webcam, you should modify the code. In OpenCV, to use the built-in laptop camera:
capture = cv2.VideoCapture(0)
and for an external webcam:
capture = cv2.VideoCapture(1)
The index 0 or 1 selects which camera or webcam to use when multiple cameras are attached.
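If you're not sure which index is which, a quick way to check (this probing snippet is my own sketch, not part of the original answer; the range of 3 is an arbitrary assumption) is to loop over the first few indices and see which ones open:

import cv2

# Probe the first few device indices to see which cameras are available.
for index in range(3):
    cap = cv2.VideoCapture(index)
    if cap.isOpened():
        print(f"Camera available at index {index}")
        cap.release()
    else:
        print(f"No camera at index {index}")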
First time poster here, so go easy on me.
I'm working on a fun little project for myself and friends: basically, I want to be able to stream and receive video using ffmpeg, as a sort of screen-sharing application. I'm a complete Python noob and I'm just going off of the documentation for each.
Here's what I have for sending:
import ffmpeg
stream = ffmpeg.input("video.mp4")
stream = ffmpeg.output(stream, "tcp://127.0.0.1:1234", format="mpegts")
ffmpeg.run(stream)
It's simple, but it works: when I run ffplay.exe -i tcp://127.0.0.1:1234?listen -hide_banner in a command prompt and then run the code to send the video, it works perfectly. But when I try to use my own code to receive the video, all I get is audio, no video, and after the video has finished, the last second of audio is repeated.
Here's the receiving code:
from ffpyplayer.player import MediaPlayer

test = MediaPlayer("tcp://127.0.0.1:1234?listen")
while True:
    test.get_frame()
    if test == "eof":
        break
Thanks for any help, and sorry if I'm just being oblivious to something :P
You are only extracting frames from video.mp4 in your code; you never display them:

test = MediaPlayer("tcp://127.0.0.1:1234?listen")
while True:
    test.get_frame()
    if test == "eof":
        break
Now, you need to display them using a third-party library, since ffpyplayer doesn't provide any built-in feature for displaying frames in a loop.
The code below uses OpenCV to display the extracted frames. Install OpenCV and numpy with:
pip3 install numpy opencv-python
Change your receiver code to:
from ffpyplayer.player import MediaPlayer
import numpy as np
import cv2

player = MediaPlayer("tcp://127.0.0.1:1234?listen")
val = ''
while val != 'eof':
    frame, val = player.get_frame()
    if val != 'eof' and frame is not None:
        img, t = frame
        w = img.get_size()[0]  # width of the frame
        h = img.get_size()[1]  # height of the frame
        # rebuild an h x w x 3 uint8 RGB array from the frame's raw bytes
        arr = np.uint8(np.asarray(list(img.to_bytearray()[0])).reshape(h, w, 3))
        cv2.imshow('test', arr)
        if cv2.waitKey(25) & 0xFF == ord('q'):
            cv2.destroyAllWindows()
            break
You can also run the ffplay command directly using Python's subprocess module.
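For instance, a minimal sketch of that approach, assuming ffplay is on your PATH (the listen URL is copied from the question):

import subprocess

# Launch ffplay as a child process to listen for and display the stream,
# equivalent to running the command from the question in a terminal.
subprocess.run([
    "ffplay",
    "-i", "tcp://127.0.0.1:1234?listen",
    "-hide_banner",
])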
I have recently set up a Raspberry Pi camera and am streaming the frames over RTSP. While it may not be completely necessary, here is the command I am using to broadcast the video:
raspivid -o - -t 0 -w 1280 -h 800 |cvlc -vvv stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/output.h264}' :demux=h264
This streams the video perfectly.
What I would now like to do is parse this stream with Python and read each frame individually. I would like to do some motion detection for surveillance purposes.
I am completely lost on where to start on this task. Can anyone point me to a good tutorial? If this is not achievable via Python, what tools/languages can I use to accomplish this?
Using the same method listed by "depu" worked perfectly for me. I just replaced the video file with the RTSP URL of the actual camera. The example below worked on an AXIS IP camera. (This was not working for a while in previous versions of OpenCV; it works on OpenCV 3.4.1, Windows 10.)
import cv2

cap = cv2.VideoCapture("rtsp://root:pass@192.168.0.91:554/axis-media/media.amp")
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break  # stream ended or the frame could not be read
    cv2.imshow('frame', frame)
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
Bit of a hacky solution, but you can use the VLC Python bindings (you can install them with pip install python-vlc) and play the stream:

import vlc

player = vlc.MediaPlayer('rtsp://:8554/output.h264')
player.play()
Then take a snapshot every second or so:
import time

while True:
    time.sleep(1)
    player.video_take_snapshot(0, '.snapshot.tmp.png', 0, 0)
And then you can use SimpleCV or something for processing (just load the image file '.snapshot.tmp.png' into your processing library).
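For example, here is a minimal sketch (using OpenCV rather than SimpleCV, an assumption on my part, and reusing the player object from above) that loads each snapshot and flags motion by differencing consecutive snapshots:

import time
import cv2

prev = None
while True:
    time.sleep(1)
    player.video_take_snapshot(0, '.snapshot.tmp.png', 0, 0)
    frame = cv2.imread('.snapshot.tmp.png', cv2.IMREAD_GRAYSCALE)
    if frame is None:
        continue  # snapshot file not written yet
    if prev is not None and prev.shape == frame.shape:
        diff = cv2.absdiff(frame, prev)
        # 25 and 50000 are arbitrary assumptions; tune them for your scene
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        if cv2.countNonZero(mask) > 50000:
            print("motion detected")
    prev = frame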
Use OpenCV:
video = cv2.VideoCapture("rtsp url")
and then you can capture frames. For details, read the OpenCV documentation: https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_video_display/py_video_display.html
Depending on the stream type, you can probably take a look at this project for some ideas.
https://code.google.com/p/python-mjpeg-over-rtsp-client/
If you want to be mega-pro, you could use something like http://opencv.org/ (Python modules are available, I believe) for handling the motion detection.
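A minimal sketch of that idea, assuming OpenCV can open the RTSP stream from the raspivid command above directly (replace the <pi-address> placeholder with your Pi's IP; the motion threshold of 5000 pixels is an arbitrary assumption):

import cv2

cap = cv2.VideoCapture("rtsp://<pi-address>:8554/output.h264")
subtractor = cv2.createBackgroundSubtractorMOG2()  # learns the static background over time

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break  # stream ended or the frame could not be read
    mask = subtractor.apply(frame)  # non-zero pixels mark moving regions
    # crude motion score: count of changed pixels
    if cv2.countNonZero(mask) > 5000:
        print("motion detected")
cap.release()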
Here is yet one more option. It's much more complicated than the other answers, but this way, with just one connection to the camera, you can "fork" the same stream simultaneously to several multiprocesses, to the screen, recast it into multicast, write it to disk, etc. Of course, this is only worthwhile if you need something like that (otherwise you'd prefer the earlier answers).
Let's create two independent Python programs:
a server program (RTSP connection, decoding): server.py
a client program (reads frames from shared memory): client.py
The server must be started before the client, i.e.
python3 server.py
and then, in another terminal:
python3 client.py
Here is the code:
(1) server.py
import time
from valkka.core import *

# YUV => RGB interpolation to the small size is done every 1000 milliseconds
# and passed on to the shmem ringbuffer
image_interval = 1000

# define rgb image dimensions
width = 1920 // 4
height = 1080 // 4

# posix shared memory: identification tag and size of the ring buffer
shmem_name = "cam_example"
shmem_buffers = 10

# construct the filter chain from the end towards the source
shmem_filter = RGBShmemFrameFilter(shmem_name, shmem_buffers, width, height)
sws_filter = SwScaleFrameFilter("sws_filter", width, height, shmem_filter)
interval_filter = TimeIntervalFrameFilter("interval_filter", image_interval, sws_filter)

avthread = AVThread("avthread", interval_filter)
av_in_filter = avthread.getFrameFilter()

livethread = LiveThread("livethread")
ctx = LiveConnectionContext(LiveConnectionType_rtsp, "rtsp://user:password@192.168.x.x", 1, av_in_filter)

avthread.startCall()
livethread.startCall()
avthread.decodingOnCall()
livethread.registerStreamCall(ctx)
livethread.playStreamCall(ctx)

# all those threads are written in cpp and they are running in the
# background. Sleep for 20 seconds - or do something else while
# the cpp threads are running and streaming video
time.sleep(20)

# stop threads
livethread.stopCall()
avthread.stopCall()

print("bye")
(2) client.py
import cv2
from valkka.api2 import ShmemRGBClient

width = 1920 // 4
height = 1080 // 4

# This identifies the posix shared memory - must be the same as on the server side
shmem_name = "cam_example"
# Size of the shmem ringbuffer - must be the same as on the server side
shmem_buffers = 10

client = ShmemRGBClient(
    name=shmem_name,
    n_ringbuffer=shmem_buffers,
    width=width,
    height=height,
    mstimeout=1000,  # client times out if nothing has been received in 1000 milliseconds
    verbose=False
)

while True:
    index, isize = client.pull()
    if index is None:
        print("timeout")
    else:
        data = client.shmem_list[index][0:isize]
        img = data.reshape((height, width, 3))
        img = cv2.GaussianBlur(img, (21, 21), 0)  # example processing step
        cv2.imshow("valkka_opencv_demo", img)
        cv2.waitKey(1)
If you're interested, check out more examples at https://elsampsa.github.io/valkka-examples/
Hi, reading frames from a video can be achieved using Python and OpenCV. Below is the sample code; it works fine with Python and OpenCV 2.
import cv2
import os

# Below code will capture the video frames and save them to a folder (in the current working directory)
dirname = 'myfolder'
os.makedirs(dirname, exist_ok=True)  # make sure the output folder exists

# video path
cap = cv2.VideoCapture("your rtsp url")
count = 0
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    else:
        cv2.imshow('frame', frame)
        # The received "frame" will be saved. Or you can manipulate "frame" as per your needs.
        name = "rec_frame" + str(count) + ".jpg"
        cv2.imwrite(os.path.join(dirname, name), frame)
        count += 1
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
Use the URL in this form:
cv2.VideoCapture("rtsp://username:password@IPAddress:PortNo(rest of the link after the port number)")
I'm working with Python and OpenCV on a project that, as an initial step, involves capturing an image from a webcam. I tried to automate this process using capture = cv2.VideoCapture and capture.read(), but the camera's video-mode activation and its subsequent self-adjustment are too slow for what I want to achieve in the end.
Is there a more direct method of automatically capturing a screenshot with Python (and OpenCV)? If not, do you have any alternative suggestion?
Thanks
If you want your camera screenshot function to be responsive, you need to initialize the camera capture outside of this function.
In the following code snippet, the screenshot function is triggered by pressing c:
import cv2

def screenshot():
    global cam
    cv2.imshow("screenshot", cam.read()[1])  # shows the screenshot directly
    #cv2.imwrite('screenshot.png', cam.read()[1])  # or saves it to disk

if __name__ == '__main__':
    cam = cv2.VideoCapture(0)  # initializes video capture
    while True:
        ret, img = cam.read()
        # a window is needed as a context for key capturing (here, I display the
        # camera feed, but there could be anything in the window)
        cv2.imshow("cameraFeed", img)
        ch = cv2.waitKey(5)
        if ch == 27:  # ESC pressed: quit
            break
        if ch == ord('c'):  # calls screenshot function when 'c' is pressed
            screenshot()
    cv2.destroyAllWindows()
To clarify: the cameraFeed window is only here for the purpose of the demo (where the screenshot is triggered manually). If screenshot is called automatically in your program, then you don't need this part.
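For instance, here is a minimal sketch of calling it automatically on a timer instead (the 60-second interval is just an example value):

import time
import cv2

cam = cv2.VideoCapture(0)
last_shot = time.time()
count = 0
while True:
    ret, img = cam.read()
    if not ret:
        break
    if time.time() - last_shot >= 60:  # save one frame per minute
        cv2.imwrite(f"screenshot-{count}.png", img)
        count += 1
        last_shot = time.time()
cam.release()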
Hope it helps!
Basically you need to do 3 things:
import cv2

# init the cam
video_capture = cv2.VideoCapture(0)

# get a frame from the cam
ret, frame = video_capture.read()

# write that frame to disk
cv2.imwrite('screenshot.png', frame)
Of course, you should wait a while before saving; if not, you could save a weird black screen (or just the first thing the camera got :-) ).
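A minimal sketch of that warm-up, assuming discarding the first 30 frames is enough for the camera to self-adjust (tune this number for your hardware):

import cv2

video_capture = cv2.VideoCapture(0)

# discard some initial frames so the camera can finish auto-exposure/white balance
for _ in range(30):
    video_capture.read()

ret, frame = video_capture.read()
if ret:
    cv2.imwrite('screenshot.png', frame)
video_capture.release()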
The code
import cv

capture = cv.CaptureFromFile("a.avi")
while True:
    frame = cv.QueryFrame(capture)
    cv.ShowImage("a", frame)

shows the same initial frame from the video repeatedly (QueryFrame is not advancing the video and grabbing the next frame). It works fine if the video is captured from a webcam.
Any ideas?
I see the same mistakes over and over again, so this is probably the last time I'll address them. Hopefully people will start using the search box in the future and dig a little deeper.
Call cv.WaitKey() after displaying the frame. If you don't have a delay between displaying frames, problems can happen. I believe this is the problem.
Code defensively: if you are calling a function/method that can fail, believe in Murphy, and add the appropriate check to verify it doesn't:
import cv

capture = cv.CaptureFromFile("a.avi")
if not capture:
    print "Error loading video file"
    # Should exit the application

while True:
    frame = cv.QueryFrame(capture)
    if not frame:
        print "Could not retrieve frame"
        break  # no more frames (or the grab failed)
    cv.ShowImage("a", frame)
    k = cv.WaitKey(10)
    if k == 27:
        break  # ESC key was pressed
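For reference, the old cv module has since been removed from OpenCV; a rough equivalent of the same defensive loop in the modern cv2 API (my own sketch, not part of the original answer) looks like this:

import cv2

capture = cv2.VideoCapture("a.avi")
if not capture.isOpened():
    raise SystemExit("Error loading video file")

while True:
    ret, frame = capture.read()
    if not ret:
        print("Could not retrieve frame")
        break
    cv2.imshow("a", frame)
    if cv2.waitKey(10) & 0xFF == 27:  # ESC key was pressed
        break
capture.release()
cv2.destroyAllWindows()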