Python 3.7 OpenCV - Slow processing

I'm trying to take pictures with OpenCV 4.4.0.40 and only save them if a switch, read by an Arduino, is pressed.
So far everything works, but it's super slow: it takes about 15 seconds for the switch value to change.
Arduino = SerialObject()
if os.path.exists(PathCouleur) and os.path.exists(PathGris):
    Images = cv2.VideoCapture(Camera)
    Images.set(3, 1920)
    Images.set(4, 1080)
    Images.set(10, Brightness)
    Compte = 0
    SwitchNumero = 0
    while True:
        Sucess, Video = Images.read()
        cv2.imshow("Camera", Video)
        Video = cv2.resize(Video, (Largeur, Hauteur))
        Switch = Arduino.getData()
        try:
            if Switch[0] == "1":
                blur = cv2.Laplacian(Video, cv2.CV_64F).var()
                if blur < MinBlur:
                    cv2.imwrite(PathCouleur + ".png", Video)
                    cv2.imwrite(PathGris + ".png", cv2.cvtColor(Video, cv2.COLOR_BGR2GRAY))
                    Compte += 1
        except IndexError as err:
            pass
        if cv2.waitKey(40) == 27:
            break
    Images.release()
    cv2.destroyAllWindows()
else:
    print("Erreur, aucun Path")
The saved images are 640x480 and the displayed image is 1920x1080, but even without the imshow it's slow.
Can someone help me optimize this code, please?

I would say that this snippet of code is responsible for the delay, since you have two major calls that depend on external devices to respond (the OpenCV calls and Arduino.getData()).
Sucess, Video = Images.read()
cv2.imshow("Camera", Video)
Video = cv2.resize(Video, (Largeur, Hauteur))
Switch = Arduino.getData()
One solution that comes to mind is to use the threading library: separate the Arduino.getData() call from the main loop and run it in a separate class in the background (you should use a sleep on the secondary thread, something like a 50 or 100 ms delay).
That way you should get a better response on the switch event when the main loop reads the value.
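The background-thread idea can be sketched with a small helper class. This is a minimal, illustrative sketch (the BackgroundPoller name, the 50 ms default interval, and passing Arduino.getData as the read function are all assumptions, not part of the original code):

```python
import threading
import time

class BackgroundPoller:
    """Keeps the most recent value from a slow, blocking read function."""

    def __init__(self, read_fn, interval=0.05):
        self._read_fn = read_fn     # e.g. Arduino.getData (hypothetical usage)
        self._interval = interval   # pause between polls, ~50 ms
        self._latest = None
        self._lock = threading.Lock()
        t = threading.Thread(target=self._loop, daemon=True)
        t.start()

    def _loop(self):
        while True:
            value = self._read_fn()  # the blocking read happens off the main loop
            with self._lock:
                self._latest = value
            time.sleep(self._interval)

    def latest(self):
        """Non-blocking: returns the last value seen by the poller thread."""
        with self._lock:
            return self._latest
```

The main loop would then call poller.latest() instead of Arduino.getData() directly, so the slow serial read never stalls the frame loop.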

cam = cv2.VideoCapture(0, cv2.CAP_DSHOW)
Use cv2.CAP_DSHOW: it will improve the camera's loading time because it gives you the video feed directly.

Related

OpenCV buffer problem when using external trigger

I'm using OpenCV with USB camera on a Raspberry Pi4. I have activated an external trigger by RPI's GPIO pin. I wrote a short Python code to test the trigger and made sure it worked. Then I wrote another Python program to save a single image captured by the camera. Here is the code:
import time
import cv2
import smbus
from gpiozero import LED

TRIG_ADDR = 17

def setup_trigger_control_gpio(pin):
    trigger = LED(pin)
    return trigger

def setup_camera(frame_width, frame_height, fps):
    cap = cv2.VideoCapture(0, cv2.CAP_ANY)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, frame_width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, frame_height)
    cap.set(cv2.CAP_PROP_FPS, fps)
    cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
    cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'))
    if cap.isOpened() is not True:
        print("Cannot open camera. Exiting.")
        quit()
    else:
        return cap

trigger = setup_trigger_control_gpio(TRIG_ADDR)
cap = setup_camera(640, 480, 120)
trigger.on()
ret, frame = cap.read()
if ret:
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imwrite("images/frame.pgm", frame)
trigger.off()
cap.release()
I set the buffer size to 1, since I'm using a trigger to capture the frame exactly when I need to. The program, however, gets stuck on the cap.read() line as if it did not receive the trigger. When I then ran the short trigger-only program, the main program finished successfully. Sometimes I had to run the trigger program more than once for the main program to finish, which I find a bit scary because of the inconsistency.
I tried to set the buffer size to 10, which seemingly worked; however, the saved image was empty (all black). I have also tried to "flush" the buffer by reading the first 10 empty frames, which worked, and I finally saved a proper image. The real problems occur when I try to process real-time video this way, as even after flushing the buffer, the images do not correspond to the point in time when they should've been taken. Therefore I would love to use the feature of setting the buffer size to one, so that I would know exactly which frame is being processed. I will be thankful for any ideas.
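The "flush the buffer" workaround described above can be factored into a small helper. This is a sketch assuming a cv2.VideoCapture-like object (the function name and the default of 10 frames are illustrative; grab() only advances the stream without decoding, so flushing this way is cheaper than calling read() in a loop):

```python
def flush_buffer(cap, n_frames=10):
    """Discard stale buffered frames so the next read() is (nearly) current.

    cap is expected to behave like cv2.VideoCapture: grab() advances to the
    next frame without decoding it, which makes discarding frames cheap.
    """
    for _ in range(n_frames):
        cap.grab()
```

After flush_buffer(cap), a single cap.read() should return a frame close to the moment of the trigger rather than an old buffered one.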

imshow() with desired framerate with opencv

Is there any workaround for using cv2.imshow() with a specific framerate? I'm capturing the video via VideoCapture and doing some easy postprocessing on the frames (both in a separate thread, so it loads all frames into a queue and the main thread isn't slowed by the computation). I tried to fix the framerate by calculating the time used for "reading" the image from the queue and then subtracting that value from the number of milliseconds available for one frame:
if I have an input video at 50 FPS and I want to play it back in real time, I do 1000/50 => 20 ms per frame,
and then wait that time using cv2.waitKey().
But I still get laggy output, which is slower than the source video.
I don't believe there is such a function in OpenCV, but maybe you could improve your method by adding a dynamic wait time using timers: timeit.default_timer().
Calculate the time taken to process, subtract that from the expected frame time, and maybe add a few ms of buffer.
E.g. cv2.waitKey((1000/50) - (time processing finished - time read started) - 10)
Or you could have a more rigid timing, e.g. script start time + frame# * 20ms - time processing finished.
I haven't tried this personally, so I'm not sure if it will actually work; it also might be worth adding a check so the number isn't below 1.
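That per-frame delay computation can be wrapped in a small helper, including the "not below 1" check mentioned above. A sketch (the function name and the slack_ms parameter are illustrative):

```python
def dynamic_wait_ms(frame_budget_ms, work_start_s, work_end_s, slack_ms=0):
    """Return the milliseconds left in this frame's time budget.

    The result is clamped to >= 1 because cv2.waitKey(0) would block
    indefinitely instead of waiting zero milliseconds.
    """
    elapsed_ms = (work_end_s - work_start_s) * 1000.0
    return max(1, int(frame_budget_ms - elapsed_ms - slack_ms))
```

Usage would look like: t0 = timeit.default_timer(); read and process the frame; then cv2.waitKey(dynamic_wait_ms(1000/50, t0, timeit.default_timer())).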
I faced the same issue in one of my projects, in which my source video had 2 FPS. In order to display it nicely with cv2.imshow, I used a delay before displaying each frame. It's a kind of hack, but it works for me. The code for this hack is given below. Hope you will get some help from it. Peace!
import cv2
import numpy as np
import time

cap = cv2.VideoCapture(0)
width = 400
height = 350
while True:
    ret, frame = cap.read()
    frame = cv2.resize(frame, (width, height))
    flipped = cv2.flip(frame, 1)
    framerot = cv2.rotate(frame, cv2.ROTATE_90_COUNTERCLOCKWISE)
    framerot = cv2.resize(framerot, (width, height))
    StackImg = np.hstack([frame, flipped, framerot])
    # Set the sleep time according to your fps
    time.sleep(2)
    cv2.imshow("ImageStacked", StackImg)
    if cv2.waitKey(1) & 0xff == ord('q'):
        break
cv2.destroyAllWindows()

I want to receive RTSP stream in OpenCV from web URL [duplicate]

I have recently set up a Raspberry Pi camera and am streaming the frames over RTSP. While it may not be completely necessary, here is the command I am using to broadcast the video:
raspivid -o - -t 0 -w 1280 -h 800 |cvlc -vvv stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/output.h264}' :demux=h264
This streams the video perfectly.
What I would now like to do is parse this stream with Python and read each frame individually. I would like to do some motion detection for surveillance purposes.
I am completely lost on where to start on this task. Can anyone point me to a good tutorial? If this is not achievable via Python, what tools/languages can I use to accomplish this?
Using the same method listed by "depu" worked perfectly for me.
I just replaced the video file with the RTSP URL of the actual camera.
The example below worked on an AXIS IP camera.
(This was not working for a while in previous versions of OpenCV;
it works on OpenCV 3.4.1, Windows 10.)
import cv2

cap = cv2.VideoCapture("rtsp://root:pass@192.168.0.91:554/axis-media/media.amp")
while cap.isOpened():
    ret, frame = cap.read()
    cv2.imshow('frame', frame)
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
Bit of a hacky solution, but you can use the VLC python bindings (you can install it with pip install python-vlc) and play the stream:
import vlc
player=vlc.MediaPlayer('rtsp://:8554/output.h264')
player.play()
Then take a snapshot every second or so:
while True:
    time.sleep(1)
    player.video_take_snapshot(0, '.snapshot.tmp.png', 0, 0)
And then you can use SimpleCV or something for processing (just load the image file '.snapshot.tmp.png' into your processing library).
Use OpenCV:
video = cv2.VideoCapture("rtsp url")
Then you can capture frames. Read the OpenCV documentation: https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_video_display/py_video_display.html
Depending on the stream type, you can probably take a look at this project for some ideas.
https://code.google.com/p/python-mjpeg-over-rtsp-client/
If you want to be mega-pro, you could use something like http://opencv.org/ (Python modules available I believe) for handling the motion detection.
Here is yet one more option.
It's much more complicated than the other answers.
But this way, with just one connection to the camera, you could "fork" the same stream simultaneously to several multiprocesses, to the screen, recast it into multicast, write it to disk, etc.
Of course, that's only in case you need something like that (otherwise you'd prefer the earlier answers).
Let's create two independent python programs:
Server program (rtsp connection, decoding) server.py
Client program (reads frames from shared memory) client.py
Server must be started before the client, i.e.
python3 server.py
And then in another terminal:
python3 client.py
Here is the code:
(1) server.py
import time
from valkka.core import *

# YUV => RGB interpolation to the small size is done every 1000 milliseconds
# and passed on to the shmem ringbuffer
image_interval = 1000
# define rgb image dimensions
width = 1920 // 4
height = 1080 // 4
# posix shared memory: identification tag and size of the ring buffer
shmem_name = "cam_example"
shmem_buffers = 10

shmem_filter = RGBShmemFrameFilter(shmem_name, shmem_buffers, width, height)
sws_filter = SwScaleFrameFilter("sws_filter", width, height, shmem_filter)
interval_filter = TimeIntervalFrameFilter("interval_filter", image_interval, sws_filter)

avthread = AVThread("avthread", interval_filter)
av_in_filter = avthread.getFrameFilter()

livethread = LiveThread("livethread")
ctx = LiveConnectionContext(LiveConnectionType_rtsp, "rtsp://user:password@192.168.x.x", 1, av_in_filter)

avthread.startCall()
livethread.startCall()
avthread.decodingOnCall()
livethread.registerStreamCall(ctx)
livethread.playStreamCall(ctx)

# all those threads are written in cpp and they are running in the
# background. Sleep for 20 seconds - or do something else while
# the cpp threads are running and streaming video
time.sleep(20)

# stop threads
livethread.stopCall()
avthread.stopCall()
print("bye")
(2) client.py
import cv2
from valkka.api2 import ShmemRGBClient

width = 1920 // 4
height = 1080 // 4
# This identifies posix shared memory - must be the same as on the server side
shmem_name = "cam_example"
# Size of the shmem ringbuffer - must be the same as on the server side
shmem_buffers = 10

client = ShmemRGBClient(
    name=shmem_name,
    n_ringbuffer=shmem_buffers,
    width=width,
    height=height,
    mstimeout=1000,  # client times out if nothing has been received in 1000 milliseconds
    verbose=False
)

while True:
    index, isize = client.pull()
    if index is None:
        print("timeout")
    else:
        data = client.shmem_list[index][0:isize]
        img = data.reshape((height, width, 3))
        img = cv2.GaussianBlur(img, (21, 21), 0)
        cv2.imshow("valkka_opencv_demo", img)
        cv2.waitKey(1)
If you got interested, check out some more in https://elsampsa.github.io/valkka-examples/
Hi, reading frames from a video can be achieved using Python and OpenCV. Below is sample code; it works fine with Python and OpenCV 2.
import cv2
import os

# The code below will capture the video frames and save them to a folder (in the current working directory)
dirname = 'myfolder'
# video path
cap = cv2.VideoCapture("your rtsp url")
count = 0
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    else:
        cv2.imshow('frame', frame)
        # The received "frame" will be saved. Or you can manipulate "frame" as per your needs.
        name = "rec_frame" + str(count) + ".jpg"
        cv2.imwrite(os.path.join(dirname, name), frame)
        count += 1
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
Use this:
cv2.VideoCapture("rtsp://username:password@IPAddress:PortNo(rest of the link after the IP address)")

How to get the latest frame from capture device (camera) in opencv

I want to connect to a camera, and only capture a frame when an event happens (e.g. keypress). A simplified version of what I'd like to do is this:
cap = cv2.VideoCapture(device_id)
while True:
    if event:
        img = cap.read()
        preprocess(img)
        process(img)
    cv2.waitKey(10)
However, cap.read seems to only capture the next frame in the queue, and not the latest. I did a lot of searching online, and there seem to be a lot of questions on this but no definitive answer. Only some dirty hacks which involve opening and closing the capture device just before and after grabbing (which won't work for me as my event might be triggered multiple times per second); or assuming a fixed framerate and reading a fixed n times on each event (which won't work for me as my event is unpredictable and could happen at any interval).
A nice solution would be:
while True:
    if event:
        while capture_has_frames:
            img = cap.read()
        preprocess(img)
        process(img)
    cv2.waitKey(10)
But what is capture_has_frames? Is it possible to get that info? I tried looking into CV_CAP_PROP_POS_FRAMES but it's always -1.
For now I have a separate thread where the capture is running at full fps, and on my event I'm grabbing the latest image from that thread, but this seems overkill.
(I'm on Ubuntu 16.04 btw, but I guess it shouldn't matter. I'm also using pyqtgraph for display)
I think the solution mentioned in the question, namely having a separate thread that clears the buffer, is the easiest non-brittle solution for this. Here is reasonably nice (I think) code for this:
import cv2, queue, threading, time

# bufferless VideoCapture
class VideoCapture:

    def __init__(self, name):
        self.cap = cv2.VideoCapture(name)
        self.q = queue.Queue()
        t = threading.Thread(target=self._reader)
        t.daemon = True
        t.start()

    # read frames as soon as they are available, keeping only the most recent one
    def _reader(self):
        while True:
            ret, frame = self.cap.read()
            if not ret:
                break
            if not self.q.empty():
                try:
                    self.q.get_nowait()  # discard previous (unprocessed) frame
                except queue.Empty:
                    pass
            self.q.put(frame)

    def read(self):
        return self.q.get()

cap = VideoCapture(0)
while True:
    time.sleep(.5)  # simulate time between events
    frame = cap.read()
    cv2.imshow("frame", frame)
    if chr(cv2.waitKey(1) & 255) == 'q':
        break
The frame reader thread is encapsulated inside the custom VideoCapture class, and communication with the main thread is via a queue.
I posted very similar code for a node.js question, where a JavaScript solution would have been better. My comments on another answer to that question give details why a non-brittle solution without separate thread seems difficult.
An alternative solution that is easier but supported only for some OpenCV backends is using CAP_PROP_BUFFERSIZE. The 2.4 docs state it is "only supported by DC1394 [Firewire] v 2.x backend currently." For Linux backend V4L, according to a comment in the 3.4.5 code, support was added on 9 Mar 2018, but I got VIDEOIO ERROR: V4L: Property <unknown property string>(38) not supported by device for exactly this backend. It may be worth a try first; the code is as easy as this:
cap.set(cv2.CAP_PROP_BUFFERSIZE, 0)
Here's a simplified version of Ulrich's solution.
OpenCV's read() function combines grab() and retrieve() in one call, where grab() just loads the next frame into memory and retrieve() decodes the latest grabbed frame (demosaicing & motion JPEG decompression).
We're only interested in decoding the frame we're actually reading, so this solution saves some CPU and removes the need for a queue.
import cv2
import threading

# bufferless VideoCapture
class VideoCapture:

    def __init__(self, name):
        self.cap = cv2.VideoCapture(name)
        self.lock = threading.Lock()
        self.t = threading.Thread(target=self._reader)
        self.t.daemon = True
        self.t.start()

    # grab frames as soon as they are available
    def _reader(self):
        while True:
            with self.lock:
                ret = self.cap.grab()
            if not ret:
                break

    # retrieve the latest frame
    def read(self):
        with self.lock:
            _, frame = self.cap.retrieve()
        return frame
EDIT: Following Arthur Tacca's comment, added a lock to avoid simultaneous grab & retrieve, which could lead to a crash as OpenCV isn't thread-safe.
It's also possible to always get the latest frame by using the cv2.CAP_GSTREAMER backend. If you have GStreamer support enabled in cv2.getBuildInformation(), you can initialize your video capture with the appsink parameters sync=false and drop=true.
Example:
cv2.VideoCapture("rtspsrc location=rtsp://... ! decodebin ! videoconvert ! video/x-raw,framerate=30/1 ! appsink drop=true sync=false", cv2.CAP_GSTREAMER)
On my Raspberry Pi 4,
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
does work and was all that I needed for my pi camera to give me the latest frame, with a consistent 3+ second delay between the scene in front of the camera and displaying that scene in the preview image. My code takes 1.3 seconds to process an image, so I'm not sure why the other 2 seconds of delay are present, but it's consistent and works.
Side note: since my code takes over a second to process an image, I also added
cap.set( cv2.CAP_PROP_FPS, 2 )
in case it reduces any unneeded activity, since I can't quite process a frame every second. When I set cv2.CAP_PROP_FPS to 1, though, I got strange output with all my frames almost entirely dark, so setting the FPS too low can cause an issue.
If you don't want to capture a frame when there is no event happening, why are you preprocessing/processing your frame? If you do not process a frame, you can simply discard it unless the event occurs. Your program should be able to capture, evaluate your condition, and discard at a sufficient speed, i.e. fast enough compared to your camera's FPS capture rate, to always get the last frame in the queue.
I'm not proficient in Python because I do my OpenCV in C++, but it should look similar to this:
vidcap = cv2.VideoCapture(filename)
while True:
    success, frame = vidcap.read()
    if not success:
        break
    if cv2.waitKey(1):
        process(frame)
As per the OpenCV reference, vidcap.read() returns a bool: if the frame is read correctly, it will be True, and the captured frame is stored in the variable frame. If there is no key press, the loop keeps going; when a key is pressed, you process your last captured frame.

Why are webcam images taken with Python so dark?

I've shown in various ways how to take images with a webcam in Python (see "How can I take camera images with Python?"). You can see that the images taken with Python are considerably darker than images taken with JavaScript. What is wrong?
Image example
The image on the left was taken with http://martin-thoma.com/html5/webcam/, the one on the right with the following Python code. Both were taken in the same (controlled) lighting situation (it was dark outside and I only had some electric lights on) and with the same webcam.
Code example
import cv2
camera_port = 0
camera = cv2.VideoCapture(camera_port)
return_value, image = camera.read()
cv2.imwrite("opencv.png", image)
del(camera) # so that others can use the camera as soon as possible
Question
Why is the image taken with Python considerably darker than the one taken with JavaScript, and how do I fix it?
(I want a similar image quality; simply making it brighter will probably not fix it.)
Note to the "how do I fix it": It does not need to be opencv. If you know a possibility to take webcam images with Python with another package (or without a package) that is also ok.
Faced the same problem. I tried this and it works.
import cv2

camera_port = 0
ramp_frames = 30
camera = cv2.VideoCapture(camera_port)

def get_image():
    retval, im = camera.read()
    return im

# discard the first frames while the camera adjusts to the light
for i in range(ramp_frames):
    temp = camera.read()

camera_capture = get_image()
filename = "image.jpg"
cv2.imwrite(filename, camera_capture)
del(camera)
I think it's about the camera adjusting to the light. (Compare the former and later images.)
I think that you have to wait for the camera to be ready.
This code works for me:
from SimpleCV import Camera
import time
cam = Camera()
time.sleep(3)
img = cam.getImage()
img.save("simplecv.png")
I took the idea from this answer and this is the most convincing explanation I found:
The first few frames are dark on some devices because it's the first
frame after initializing the camera and it may be required to pull a
few frames so that the camera has time to adjust brightness
automatically.
reference
So IMHO, in order to be sure about the quality of the image, regardless of the programming language, it is necessary at camera startup to wait a few seconds and/or discard a few frames before taking an image.
Tidying up Keerthana's answer results in my code looking like this
import cv2
import time

def main():
    capture = capture_write()

def capture_write(filename="image.jpeg", port=0, ramp_frames=30, x=1280, y=720):
    camera = cv2.VideoCapture(port)
    # Set resolution
    camera.set(3, x)
    camera.set(4, y)
    # Adjust camera to the lighting
    for i in range(ramp_frames):
        temp = camera.read()
    retval, im = camera.read()
    cv2.imwrite(filename, im)
    del(camera)
    return True

if __name__ == '__main__':
    main()
