OpenCV buffer problem when using external trigger - python

I'm using OpenCV with a USB camera on a Raspberry Pi 4. I have set up an external trigger driven by one of the Pi's GPIO pins. I wrote a short Python script to test the trigger and confirmed it works. Then I wrote another Python program to save a single image captured by the camera. Here is the code:
import time
import cv2
import smbus
from gpiozero import LED

TRIG_ADDR = 17

def setup_trigger_control_gpio(pin):
    trigger = LED(pin)
    return trigger

def setup_camera(frame_width, frame_height, fps):
    cap = cv2.VideoCapture(0, cv2.CAP_ANY)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, frame_width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, frame_height)
    cap.set(cv2.CAP_PROP_FPS, fps)
    cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
    cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'))
    if not cap.isOpened():
        print("Cannot open camera. Exiting.")
        quit()
    return cap

trigger = setup_trigger_control_gpio(TRIG_ADDR)
cap = setup_camera(640, 480, 120)
trigger.on()
ret, frame = cap.read()
if ret:
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imwrite("images/frame.pgm", frame)
trigger.off()
cap.release()
I set the buffer size to 1, since I'm using a trigger to capture the frame exactly when I need it. The program, however, gets stuck on the cap.read() line as if it never received the trigger. When I then ran the short trigger-only program, the main program finished successfully. Sometimes I had to run the trigger program more than once before the main program finished, which I find a bit scary because of the inconsistency.
I tried setting the buffer size to 10, which seemingly worked, but the saved image was empty (all black). I also tried to "flush" the buffer by reading the first 10 empty frames, which worked, and I finally saved a proper image. The real problems occur when I try to process real-time video this way: even after flushing the buffer, the images do not correspond to the point in time when they should have been taken. Therefore I would love to use the feature of setting the buffer size to one, so that I know exactly which frame is being processed. I will be thankful for any ideas.
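For reference, here is a minimal sketch of the "flush" workaround described above. This is just an illustration, not an answer from the thread; the discard count of 10 and the device index are assumptions to tune for your driver:
import cv2

cap = cv2.VideoCapture(0, cv2.CAP_ANY)

# Discard whatever stale frames the driver has buffered. grab() fetches a
# frame without decoding it, so this loop is cheap.
for _ in range(10):  # 10 is a guess; tune it to your driver's buffer depth
    cap.grab()

# The next read() should now return the most recent (triggered) frame.
ret, frame = cap.read()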

Related

imshow() with desired framerate with opencv

Is there any workaround for using cv2.imshow() with a specific framerate? I'm capturing video via VideoCapture and doing some easy postprocessing on the frames (both in a separate thread, so it loads all frames into a Queue and the main thread isn't slowed by the computation). I tried to fix the framerate by calculating the time used for "reading" the image from the queue and then subtracting that value from the number of milliseconds available for one frame:
if I have an input video with 50 FPS and I want to play it back in real time, I do 1000/50 => 20 ms per frame,
and then wait that time using cv2.waitKey().
But I still get laggy output, which is slower than the source video.
I don't believe there is such a function in OpenCV, but maybe you could improve your method by adding a dynamic wait time using timers, e.g. timeit.default_timer():
calculate the time taken to process, subtract that from the per-frame budget, and maybe add a few ms of margin,
e.g. cv2.waitKey((1000/50) - (time processing finished - time read started) - 10)
Or you could have a more rigid timing, e.g. script start time + frame# * 20 ms - time processing finished.
I haven't tried this personally, so I'm not sure if it will actually work; it might also be worth adding a check so the number isn't below 1.
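To make that concrete, here is a rough, untested sketch of the dynamic wait. The 50 FPS figure and 'input.avi' are placeholders, and reading straight from VideoCapture stands in for the asker's queue:
import cv2
from timeit import default_timer as timer

cap = cv2.VideoCapture('input.avi')  # placeholder; in your setup, pull from the Queue instead
frame_ms = 1000.0 / 50  # 50 FPS source => 20 ms budget per frame

while True:
    start = timer()
    ret, frame = cap.read()
    if not ret:
        break
    # ... easy postprocessing here ...
    elapsed_ms = (timer() - start) * 1000
    wait = max(1, int(frame_ms - elapsed_ms))  # clamp so waitKey never waits < 1 ms
    cv2.imshow('frame', frame)
    if cv2.waitKey(wait) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()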
I faced the same issue in one of my projects, in which my source video had 2 FPS. To show it smoothly with cv2.imshow, I used a delay before displaying each frame. It's a kind of hack, but it works for me. The code for this hack is given below; hope you get some help from it. Peace!
import cv2
import numpy as np
import time

cap = cv2.VideoCapture(0)
width = 400
height = 350

while True:
    ret, frame = cap.read()
    frame = cv2.resize(frame, (width, height))
    flipped = cv2.flip(frame, 1)
    framerot = cv2.rotate(frame, cv2.ROTATE_90_COUNTERCLOCKWISE)
    framerot = cv2.resize(framerot, (width, height))
    StackImg = np.hstack([frame, flipped, framerot])
    # Put time of sleep according to your fps
    time.sleep(2)
    cv2.imshow("ImageStacked", StackImg)
    if cv2.waitKey(1) & 0xff == ord('q'):
        break

cv2.destroyAllWindows()

I want to receive RTSP stream in OpenCV from web URL [duplicate]

I have recently set up a Raspberry Pi camera and am streaming the frames over RTSP. While it may not be completely necessary, here is the command I am using to broadcast the video:
raspivid -o - -t 0 -w 1280 -h 800 |cvlc -vvv stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/output.h264}' :demux=h264
This streams the video perfectly.
What I would now like to do is parse this stream with Python and read each frame individually. I would like to do some motion detection for surveillance purposes.
I am completely lost on where to start on this task. Can anyone point me to a good tutorial? If this is not achievable via Python, what tools/languages can I use to accomplish this?
Using the same method listed by "depu" worked perfectly for me.
I just replaced the video file with the RTSP URL of the actual camera.
The example below worked on an AXIS IP camera.
(This was not working for a while in previous versions of OpenCV; it works on OpenCV 3.4.1, Windows 10.)
import cv2

cap = cv2.VideoCapture("rtsp://root:pass@192.168.0.91:554/axis-media/media.amp")

while cap.isOpened():
    ret, frame = cap.read()
    cv2.imshow('frame', frame)
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
A bit of a hacky solution, but you can use the VLC Python bindings (install them with pip install python-vlc) and play the stream:
import time  # used by the snapshot loop below
import vlc

player = vlc.MediaPlayer('rtsp://:8554/output.h264')
player.play()
Then take a snapshot every second or so:
while 1:
    time.sleep(1)
    player.video_take_snapshot(0, '.snapshot.tmp.png', 0, 0)
And then you can use SimpleCV or something for processing (just load the image file '.snapshot.tmp.png' into your processing library).
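For instance, a minimal OpenCV-based stub for that processing side (hypothetical; SimpleCV would work similarly):
import cv2

# Load the latest snapshot written by the VLC loop above.
img = cv2.imread('.snapshot.tmp.png')
if img is not None:  # imread returns None if the file is missing or mid-write
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # ... run your detection on gray here ...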
Use OpenCV:
video = cv2.VideoCapture("rtsp url")
and then you can capture frames. Read the OpenCV documentation: https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_video_display/py_video_display.html
Depending on the stream type, you can probably take a look at this project for some ideas.
https://code.google.com/p/python-mjpeg-over-rtsp-client/
If you want to be mega-pro, you could use something like http://opencv.org/ (Python modules are available, I believe) for handling the motion detection.
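To give a taste of that, here is a bare-bones frame-differencing sketch in OpenCV; the URL, the threshold of 25, and the trigger level of 500 changed pixels are all placeholders to tune:
import cv2

cap = cv2.VideoCapture("rtsp://your-stream-url")  # placeholder URL
ret, prev = cap.read()
if not ret:
    raise SystemExit("could not read from stream")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev_gray, gray)  # pixels that changed since the last frame
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)  # 25 = sensitivity
    if cv2.countNonZero(mask) > 500:  # 500 changed pixels = trigger level
        print("motion detected")
    prev_gray = gray

cap.release()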
Here is yet one more option.
It's much more complicated than the other answers, but this way, with just one connection to the camera, you can "fork" the same stream simultaneously to several multiprocesses, to the screen, recast it into multicast, write it to disk, etc.
Of course, that is only in case you need something like that; otherwise you'd prefer the earlier answers.
Let's create two independent python programs:
Server program (rtsp connection, decoding) server.py
Client program (reads frames from shared memory) client.py
Server must be started before the client, i.e.
python3 server.py
And then in another terminal:
python3 client.py
Here is the code:
(1) server.py
import time
from valkka.core import *

# YUV => RGB interpolation to the small size is done every 1000 milliseconds
# and passed on to the shmem ringbuffer
image_interval = 1000

# define rgb image dimensions
width = 1920 // 4
height = 1080 // 4

# posix shared memory: identification tag and size of the ring buffer
shmem_name = "cam_example"
shmem_buffers = 10

shmem_filter = RGBShmemFrameFilter(shmem_name, shmem_buffers, width, height)
sws_filter = SwScaleFrameFilter("sws_filter", width, height, shmem_filter)
interval_filter = TimeIntervalFrameFilter("interval_filter", image_interval, sws_filter)

avthread = AVThread("avthread", interval_filter)
av_in_filter = avthread.getFrameFilter()

livethread = LiveThread("livethread")
ctx = LiveConnectionContext(LiveConnectionType_rtsp, "rtsp://user:password@192.168.x.x", 1, av_in_filter)

avthread.startCall()
livethread.startCall()
avthread.decodingOnCall()
livethread.registerStreamCall(ctx)
livethread.playStreamCall(ctx)

# All those threads are written in cpp and they are running in the
# background. Sleep for 20 seconds - or do something else while
# the cpp threads are running and streaming video.
time.sleep(20)

# stop threads
livethread.stopCall()
avthread.stopCall()

print("bye")
(2) client.py
import cv2
from valkka.api2 import ShmemRGBClient

width = 1920 // 4
height = 1080 // 4

# This identifies posix shared memory - must be the same as on the server side
shmem_name = "cam_example"
# Size of the shmem ringbuffer - must be the same as on the server side
shmem_buffers = 10

client = ShmemRGBClient(
    name=shmem_name,
    n_ringbuffer=shmem_buffers,
    width=width,
    height=height,
    mstimeout=1000,  # client times out if nothing has been received in 1000 milliseconds
    verbose=False
)

while True:
    index, isize = client.pull()
    if index is None:
        print("timeout")
    else:
        data = client.shmem_list[index][0:isize]
        img = data.reshape((height, width, 3))
        img = cv2.GaussianBlur(img, (21, 21), 0)
        cv2.imshow("valkka_opencv_demo", img)
        cv2.waitKey(1)
If you're interested, check out more at https://elsampsa.github.io/valkka-examples/
Hi, reading frames from a video can be achieved using Python and OpenCV. Below is sample code; it works fine with Python and OpenCV 2.
import cv2
import os

# The code below will capture video frames and save them to a folder
# (in the current working directory).
dirname = 'myfolder'
os.makedirs(dirname, exist_ok=True)  # imwrite fails silently if the folder doesn't exist

# video path
cap = cv2.VideoCapture("your rtsp url")
count = 0

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('frame', frame)
    # The received "frame" will be saved, or you can manipulate it as per your needs.
    name = "rec_frame" + str(count) + ".jpg"
    cv2.imwrite(os.path.join(dirname, name), frame)
    count += 1
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Use this:
cv2.VideoCapture("rtsp://username:password@IPAddress:PortNo(rest of the link after the IP address)")

Direct way to get camera screenshot

I'm working with Python OpenCV on a project whose initial step involves capturing an image from a webcam; I tried to automate this process using capture = cv2.VideoCapture and capture.read(), but the camera's video-mode activation and its subsequent self-adjusting are too slow for what I want to achieve.
Is there a more direct method of automatically capturing a screenshot with Python (and OpenCV)? If not, do you have any alternative suggestions?
Thanks
If you want your camera screenshot function to be responsive, you need to initialize the camera capture outside of this function.
In the following code snippet, the screenshot function is triggered by pressing c:
import cv2

def screenshot():
    global cam
    cv2.imshow("screenshot", cam.read()[1])  # shows the screenshot directly
    # cv2.imwrite('screenshot.png', cam.read()[1])  # or saves it to disk

if __name__ == '__main__':
    cam = cv2.VideoCapture(0)  # initializes video capture
    while True:
        ret, img = cam.read()
        # A window is needed as a context for key capturing (here, the camera
        # feed is displayed, but the window could contain anything).
        cv2.imshow("cameraFeed", img)
        ch = cv2.waitKey(5)
        if ch == 27:  # Esc quits
            break
        if ch == ord('c'):  # calls the screenshot function when 'c' is pressed
            screenshot()
    cv2.destroyAllWindows()
To clarify: cameraFeed window is only here for the purpose of the demo (where screenshot is triggered manually). If screenshot is called automatically in your program, then you don't need this part.
Hope it helps!
Basically you need to do 3 things:
#init the cam
video_capture = cv2.VideoCapture(0)
#get a frame from cam
ret, frame = video_capture.read()
#write that to disk
cv2.imwrite('screenshot.png',frame)
Of course, you should wait a moment before capturing; otherwise you could save a weird black frame (or just the first thing the camera got :-) ).
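One way to implement that wait is to read and discard a few warm-up frames before the real capture. A sketch; the count of 30 is a guess to tune for your camera:
import cv2

video_capture = cv2.VideoCapture(0)

# Let auto-exposure and white balance settle by discarding warm-up frames.
for _ in range(30):
    video_capture.read()

ret, frame = video_capture.read()  # this frame should be properly exposed
if ret:
    cv2.imwrite('screenshot.png', frame)
video_capture.release()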

cv2.VideoCapture doesn't work on Raspberry Pi

How can I make cv2.VideoCapture(0) recognize the USB camera of a Raspberry Pi?
def OnRecord(self, evt):
    capture = cv2.VideoCapture(0)
    if not capture.isOpened():
        print "Error"
    # video recorder
    fourcc = cv2.cv.CV_FOURCC(*'XVID')  # cv2.VideoWriter_fourcc() does not exist
    video_writer = cv2.VideoWriter.open("output.mp4", fourcc, 20, (640, 480), True)
    # record video
    while capture.isOpened():
        ret, frame = capture.read()
        if ret == True:
            video_writer.write(frame)
            cv2.imshow('Video', frame)
        else:
            break

def OnCancel(self, evt):
    capture.release()
    video_writer.release()
    cv2.destroyAllWindows()
but it only prints Error.
So I guess the capture is not opening. What might be the reason?
I tried the following code from the OpenCV documentation, but it didn't work for me either.
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

# Define the codec and create a VideoWriter object
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('output.avi', fourcc, 20.0, (640, 480))

while cap.isOpened():
    ret, frame = cap.read()
    if ret == True:
        frame = cv2.flip(frame, 0)
        # write the flipped frame
        out.write(frame)
        cv2.imshow('frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break

# Release everything when the job is finished
cap.release()
out.release()
cv2.destroyAllWindows()
Any help would be greatly appreciated.
Load the correct Video4Linux driver:
sudo modprobe bcm2835-v4l2
In my experience with cv2, replacing a webcam source on Linux isn't always easy. OpenCV automatically draws from the system's default video source, which is (usually) known as video0. Unplug your USB webcam, go into a terminal, and type ls /dev/video*.
Remember the numbers it lists. Now plug in your USB webcam, type ls /dev/video* again, and look for any new /dev/video entry; this is your USB webcam. Now type mv /dev/videoX /dev/videoY, where X is the number of your USB webcam and Y the original number. This will replace your Pi's default camera.
This isn't permanent, as you will need to do it every time your Pi starts up; an alternative is creating a bash file that runs on startup. Create a text file and copy the following into it:
#!/bin/bash
mv /dev/videoX /dev/videoY
(replace the X and Y, of course) and place it in the /etc/init.d directory of your Pi. Don't forget you may need to use
chmod 755 /etc/init.d/FILENAME.sh
to give it permission to run.
Go to a terminal and type lsusb to check whether the USB camera is recognized. If it is, try giving a different device ID, such as 1, 2, or 3, rather than 0, as in the probing sketch below.
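For instance, a small probing sketch (purely illustrative) that tries indices 0 through 3 and keeps the first device that opens:
import cv2

cap = None
for device_id in range(4):  # try /dev/video0 .. /dev/video3
    candidate = cv2.VideoCapture(device_id)
    if candidate.isOpened():
        cap = candidate
        print("Using device", device_id)
        break
    candidate.release()

if cap is None:
    print("No camera found")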
Looks like you might have an issue with the codec; try using the 'MJPG' codec instead of 'XVID'.
For more details, have a look here.
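With the cv2.VideoWriter_fourcc API from the second snippet in the question, that swap would look roughly like this (the filename, FPS, and frame size are example values):
import cv2

fourcc = cv2.VideoWriter_fourcc(*'MJPG')
out = cv2.VideoWriter('output.avi', fourcc, 20.0, (640, 480))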
Make sure the camera you are using is UVC compatible, as OpenCV running on Linux-based systems (like a Raspberry Pi) starts to do some silly things when it is working with non-UVC cameras.

DestroyWindow does not close window on Mac using Python and OpenCV

My program generates a series of windows using the following code:
def display(img, name, fun):
    global clicked
    cv.NamedWindow(name, 1)
    cv.ShowImage(name, img)
    cv.SetMouseCallback(name, fun, img)
    while cv.WaitKey(33) == -1:
        if clicked == 1:
            clicked = 0
            cv.ShowImage(name, img)
    cv.DestroyWindow(name)
I press "q" within the gui window to close it. However, the code continues to the next call of the display function and displays a second gui window while not closing the first. I'm using a Mac with OpenCV 2.1, running the program in Terminal. How can I close the gui windows? Thanks.
You need to run cv.startWindowThread() after opening the window. I had the same issue and now this works for me.
Hope this helps future readers. There is also a cv2 binding; I advise using that instead of cv.
This code works for me:
import cv2 as cv
import time

WINDOW_NAME = "win"

image = cv.imread("ela.jpg", cv.CV_LOAD_IMAGE_COLOR)
cv.namedWindow(WINDOW_NAME, cv.CV_WINDOW_AUTOSIZE)
initialtime = time.time()
cv.startWindowThread()

while time.time() - initialtime < 5:
    print "in first while"
    cv.imshow(WINDOW_NAME, image)
    cv.waitKey(1000)

cv.waitKey(1)
cv.destroyAllWindows()
cv.waitKey(1)

initialtime = time.time()
while time.time() - initialtime < 6:
    print "in second while"
The same issue happens with the C++ version, on Linux:
Trying to close OpenCV window has no effect
There are a few peculiarities with the GUI in OpenCV. The destroyWindow call fails to close a window (at least under Linux, where the default backend was GTK+ until 2.1.0) unless waitKey is called to pump the events. Adding a waitKey(1) call right after destroyWindow may work.
Even so, closing is not guaranteed; the waitKey function is only intercepted if a window has focus, so if the window didn't have focus at the time you invoked destroyWindow, chances are it'll stay visible until the next destroyWindow call.
I'm assuming this is behaviour that stems from GTK+; the function didn't give me any trouble when I used it under Windows.
@Sayem2603: I tried your solution and it worked for me - thanks! I did some trial and error and discovered that looping 4 times did the trick for me, or posting the same code 4 times, just the same.
Further, I drilled down to:
cv2.destroyAllWindows()
cv2.waitKey(1)
cv2.waitKey(1)
cv2.waitKey(1)
cv2.waitKey(1)
or simply calling destroyAllWindows() and then looping the waitKey() code 4 times:
cv2.destroyAllWindows()
for i in range(1, 5):
    cv2.waitKey(1)
Worked as well. I am not savvy enough to know exactly why this works, though I assume it has something to do with the interruption and delay created by looping that code(?)
Matthäus Brandl said, above, that the third waitKey() worked for him, so perhaps it is slightly different on each system? (I am running Linux Mint with the 3.16.1 kernel and Python 2.7.)
The delay alone doesn't explain it, as simply increasing the delay time on the waitKey() does not do the trick. (I also looped print("Hello") 1000 times instead of using waitKey(), just to see if the delay it created helped; it did not.) It must have something more to do with how waitKey() interacts with window events.
The OpenCV docs say: "This function is the only method in HighGUI that can fetch and handle events, so it needs to be called periodically for normal event processing unless HighGUI is used within an environment that takes care of event processing."
Perhaps it creates an interrupt of sorts in the GUI display that allows the destroyAllWindows() action to be processed?
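Building on that event-pumping explanation, the workaround can be wrapped in a small helper; the loop count of 4 simply mirrors what worked in the experiments above:
import cv2

def close_all_windows():
    # destroyAllWindows only posts the close request; waitKey pumps the
    # HighGUI event loop so the request actually gets processed.
    cv2.destroyAllWindows()
    for _ in range(4):
        cv2.waitKey(1)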
Here is what worked for me:
cv2.namedWindow("image")
cv2.imshow('image', img)
cv2.waitKey(0) # close window when a key press is detected
cv2.destroyWindow('image')
cv2.waitKey(1)
This solution works for me (under Ubuntu 12.04 with python open in the shell):
Re-invoke cv.ShowImage after the window is 'destroyed'.
If you are using Spyder (the Anaconda package), that is the problem. None of the solutions worked for me, and I discovered that the problem wasn't the functions but Spyder itself. Try using a plain text editor plus running on the terminal, and you'll be fine using simply:
WINDOW_NAME = "win"
image = cv.imread("foto.jpg", 0)
cv.namedWindow(WINDOW_NAME, cv.CV_WINDOW_AUTOSIZE)
cv.startWindowThread()
cv.imshow(WINDOW_NAME, image)
cv.waitKey()
cv.destroyAllWindows()
I solved the problem by calling cv2.waitKey(1) in a for loop. I don't know why it works, but it gets my job done, so I didn't bother myself further.
for i in range(1, 10):
    cv2.destroyAllWindows()
    cv2.waitKey(1)
You are welcome to explain why.
None of the above solutions worked for me when running in a Jupyter Notebook (the window hangs on closing and you need to force-quit Python to close it).
I am on macOS High Sierra 10.13.4, Python 3.6.5, OpenCV 3.4.1.
The code below works if you run it as a .py file (source: https://www.learnopencv.com/read-write-and-display-a-video-using-opencv-cpp-python/). It opens the camera, records video, closes the window successfully upon pressing 'q', and saves the video in .avi format.
import cv2
import numpy as np

# Create a VideoCapture object
cap = cv2.VideoCapture(0)

# Check if camera opened successfully
if cap.isOpened() == False:
    print("Unable to read camera feed")

# Default resolutions of the frame are obtained. The default resolutions are system dependent.
# We convert the resolutions from float to integer.
frame_width = int(cap.get(3))
frame_height = int(cap.get(4))

# Define the codec and create a VideoWriter object. The output is stored in the 'outpy.avi' file.
out = cv2.VideoWriter('outpy.avi', cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'), 10, (frame_width, frame_height))

while True:
    ret, frame = cap.read()
    if ret == True:
        # Write the frame into the file 'outpy.avi'
        out.write(frame)
        # Display the resulting frame
        cv2.imshow('frame', frame)
        # Press Q on keyboard to stop recording
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    # Break the loop
    else:
        break

# When everything is done, release the video capture and video writer objects
cap.release()
out.release()

# Close all the frames
cv2.destroyAllWindows()
Fiddling around with this issue in the Python console, I observed the following behavior:
- Issuing a cv2.imshow after cv2.destroyWindow sometimes closes the window, albeit the old window pops up again with the next HighGUI call, e.g. cv2.namedWindow.
- The third call of cv2.waitKey after cv2.destroyWindow closed the window every time I tried; additionally, the closed window remained closed, even when using cv2.namedWindow afterwards.
Hope this helps somebody.
(I used Ubuntu 12.10 with Python 2.7.3, but OpenCV 2.4.2 from the 13.04 repos.)
After searching around for some time, none of the solutions provided worked for me, so since there's a bug in this function and I did not have time to fix it, I simply stopped using the cv2 window to show the frames. Once a few frames have been saved, you can open the file in a different viewer, like VLC or MoviePlayer (for Linux).
Here's how I did mine.
import cv2

threadDie = True  # change this to False elsewhere to stop getting the video

def getVideo(Message):
    print Message
    print "Opening url"
    video = cv2.VideoCapture("rtsp://username:password@IpAddress:554/axis-media/media.amp")
    print "Opened url"
    fourcc = cv2.cv.CV_FOURCC('X', 'V', 'I', 'D')
    fps = 25.0  # or 30.0 for a better quality stream
    writer = cv2.VideoWriter('out.avi', fourcc, fps, (640, 480), 1)
    i = 0
    print "Reading frames"
    while threadDie:
        ret, img = video.read()
        print "frame number: ", i
        i = i + 1
        writer.write(img)
    del(video)
    print "Finished capturing video"
Then open the file with a different viewer, probably in another function. If you like VLC, for instance, you can start it and pass the saved file as a parameter. On the terminal, I would do this:
vlc out.avi  # out.avi is my video file being saved by the function above
This worked for me on Arch Linux.
I had the same issue. The problem is that the while cap.isOpened(): loop does not finish, so I added the structure below. When the video has no frames left, read() returns ret as False. Normally I would put the destroyAllWindows command outside the loop, but I moved it into the loop, and it works properly in my code.
while cap.isOpened():
    ret, frame = cap.read()
    if ret == False:
        cap.release()
        cv2.waitKey(1)
        cv2.destroyAllWindows()
        cv2.waitKey(1)
This worked for me in Spyder:
import cv2 as cv

cv.namedWindow("image")
img = cv.imread("image_name.jpg")
cv.imshow("image", img)
cv.waitKey(5000)  # 5 s delay before the image window closes
cv.destroyWindow("image")
Remember to use cv.waitKey(<positive integer>) for this to work.
cv2.imshow("the image I want to show ",img)
cv2.waitKey(0)
cv2.destroyAllWindows()
cv2.waitKey(1) # to close the window.
The above code worked well for me.
I'm using Mac and python 3.7 .
Close the terminal, then close the window; that worked for me in Visual Studio Code on Windows. I had made a task to compile and run the executable in the terminal. The program used my webcam to capture video and display it in a Qt window; when I clicked the close button it didn't close but reopened itself and continued with the program, until I closed the terminal, after which I could close the program window without it reopening.
