Python video stream: skip frames

I want to live-search the screen of my Android phone with OpenCV and Python.
My phone streams its screen via HTTP, and I am reading the stream with code like this:
host = "192.168.178.168:8080"
hoststr = 'http://' + host + '/stream.mjpeg'
print 'Streaming ' + hoststr
stream = urllib2.urlopen(hoststr)
bytes = ''
drop_count = 0
while True:
    bytes += stream.read(1024)
    a = bytes.find('\xff\xd8')  # JPEG start marker
    b = bytes.find('\xff\xd9')  # JPEG end marker
    if a != -1 and b != -1:
        drop_count += 1
        if drop_count > 120:
            drop_count = 0
            jpg = bytes[a:b + 2]
            bytes = bytes[b + 2:]
            i = cv2.imdecode(np.fromstring(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
            cv2.imshow(hoststr, i)
            process_img(i)  # my image processing
            if cv2.waitKey(1) == 27:
                exit(0)
The problem is that my screen searching takes too long and creates a big delay. Since my phone sends far more fps than I need, I would like to process only one image every second or two. How can I do that? I cannot change the fps on my phone.
I can resize the screen image on the phone to 50% before sending it, but if I resize it more than 50% I can no longer find what I'm searching for with OpenCV. Even with the 50% resize the delay is too big.
If I use a simple counter and only process every 2nd/10th/30th image, that does not help.
EDIT: I added my simple counter implementation to drop frames (see above); it does not help. If I don't process the image at all, I get a constant small delay, with and without frame dropping.
EDIT²: Finally solved it. Sadly I don't remember where I read it, but it's very simple with OpenCV's VideoCapture:
screen_capture = cv2.VideoCapture(stream_url)  # init video capture
drop_count = 0  # init drop counter
while True:
    drop_count += 1
    ret = screen_capture.grab()  # grab frame but don't decode it
    if drop_count % 5 == 0:  # every 5th frame: actually decode it
        ret, img = screen_capture.retrieve()  # decode and process frame
This way the frames you want to drop really get dropped, and no delay arises.

I'm guessing the delay gets so long because you only display the processed frames. Effectively only 1 out of every 120 frames is shown, and that frame stays on screen for the next 120 frames plus the processing time. That would indeed seem laggy.
You should display all frames normally and only call the process_img() function on every 120th frame.
Try whether this improves things:
if a != -1 and b != -1:
    drop_count += 1
    jpg = bytes[a:b + 2]
    bytes = bytes[b + 2:]
    i = cv2.imdecode(np.fromstring(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
    cv2.imshow(hoststr, i)
    if drop_count > 120:  # only process every 120th frame
        drop_count = 0
        process_img(i)  # my image processing
    if cv2.waitKey(1) == 27:
        exit(0)

Related

Read video URLs from a list for a period of time and release streams, OpenCV Python

I have a list of video URLs I want to open/process using OpenCV. Example list below:
cam_name,address
Cam_01,vid1.mp4
Cam_02,vid1.mp4
Cam_03,vid1.mp4
I want to open the first camera/address, stream for 60 seconds, release it, start the second, release it, and so on. When I get to the last stream in the list (e.g. Cam_03), I want to start the list again from Cam_01.
I don't need threading; I just need to start a stream and stop it after a period of time.
Code so far:
# this calls the OpenCV stream
def main(cam_name, camID):
    main_bret(cam_name, camID)

df = 'cams_list.txt'
df = pd.read_csv(df)
for index, row in df.iterrows():
    main(row['cam_name'], row['address'])

########################################

def main_bret(cam_name, camID):
    vid = cv2.VideoCapture(camID)
    while True:
        ret, frame = vid.read()
        cv2.imshow(cam_name, frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    vid.release()
    cv2.destroyAllWindows()
How do I construct the loop to stream for 60 seconds, release, move to the next in the list, and repeat when the list is finished?
What is happening now (e.g. when just streaming an mp4) is that it plays the whole video, then starts the next one. I just need it to stop streaming after 60 seconds.
I have tried adding sleep() around cv2.waitKey(1), but then everything runs for that period, not the individual streams. I guess I am not putting the sleep in the correct place.
Thanks
I'm not sure I understood every detail of your program, but this is what I have:
df = 'cams_list.txt'
df = pd.read_csv(df)

def main_bret(cam_name, camID):
    vid = cv2.VideoCapture(camID)
    ret, frame = vid.read()
    cv2.imshow(cam_name, frame)
    cv2.waitKey(60_000)
    vid.release()
    cv2.destroyAllWindows()

index = 0
upper_index = df.shape[0]
while True:
    main_bret(df.iloc[index]["cam_name"], df.iloc[index]["address"])
    index = index + 1
    if index >= upper_index:
        index = 0
It's easier to iterate over a list again and again with a while loop, so I got rid of the for loop.
The waitKey call waits 60 seconds (60,000 ms), and afterwards the window is destroyed.
I hope I was helpful.
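If the goal is to keep showing live frames for the whole 60 seconds (instead of holding a single frame on screen), a timed read loop is another option. The sketch below keeps the timing logic separate by injecting the OpenCV calls as callables; `stream_source` is a made-up name, and with a real camera `read_frame` would be `vid.read` and `show_frame` would wrap `cv2.imshow` plus `cv2.waitKey(1)`:

```python
import time

def stream_source(read_frame, show_frame, seconds=60, clock=time.monotonic):
    """Pump frames from one source for `seconds` of wall-clock time.

    read_frame() -> (ok, frame) mimics cv2.VideoCapture.read();
    show_frame(frame) is whatever displays a frame.
    Returns the number of frames shown.
    """
    shown = 0
    deadline = clock() + seconds
    while clock() < deadline:
        ok, frame = read_frame()
        if not ok:  # source ran out early (e.g. a short mp4)
            break
        show_frame(frame)
        shown += 1
    return shown
```

The outer loop over the camera list then simply calls this once per row and wraps around with `while True`, releasing the capture between sources.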

OpenCV Recorded webcam-video is faster than real life Python

I am running into some problems recording video from my webcam. Because I am running an experiment and recording the video at the same time, it is very important that a real-world second equals a second of recorded video.
The code I am using is this:
def record_video(self, path_video="test"):
    vid_capture = cv2.VideoCapture(0)
    # vid_capture.set(cv2.CAP_PROP_FPS, 20)
    fps = vid_capture.get(cv2.CAP_PROP_FPS)
    # vid_capture.set(cv2.CAP_PROP_BUFFERSIZE, 2)
    vid_cod = cv2.VideoWriter_fourcc(*'XVID')
    output = cv2.VideoWriter("experiment_videos/" + path_video + ".avi",
                             vid_cod, fps, (640, 480))
    x = threading.Thread(target=socket_trigger)
    x.daemon = True
    x.start()
    print("Waiting")
    while start == 0:
        ret_cam, frame_cam = vid_capture.read()
    while True:
        ret_cam, frame_cam = vid_capture.read()
        output.write(frame_cam)
        # Close and break the loop after pressing the "x" key
        if cv2.waitKey(1) & 0xFF == ord('x'):
            break
        if end == 1:
            break
    # close the already opened camera
    vid_capture.release()
    # close the already opened file
    output.release()
    # close the window and de-allocate any associated memory usage
    cv2.destroyAllWindows()
All the code works, and the flag variables I receive work properly. The problem is that if I record, for instance, 5 minutes of real time, the output can be 4:52, 4:55 or even 5:00. It's just not exact.
I think it's because I write to an output file at 30 fps (that's what vid_capture.get(cv2.CAP_PROP_FPS) returns) while my camera sometimes captures fewer frames (say 28 or 29).
I don't know what to do. I have tried setting the camera to 20 fps, but it didn't work (vid_capture.set(cv2.CAP_PROP_FPS, 20)).
Any ideas? Matching the times is critical for me, and my only job is to record a webcam.
Thanks for all,
Jaime
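One way to attack the mismatch described above is to timestamp every captured frame and open the VideoWriter with the rate that was actually achieved, rather than trusting CAP_PROP_FPS. A minimal sketch of the rate calculation (`fps_for_writer` is a made-up helper; the recording scaffolding around it is only hinted at in the comments):

```python
def fps_for_writer(timestamps):
    """Return the frame rate that makes a written file's duration match
    the wall-clock capture time, given one timestamp per captured frame.
    """
    if len(timestamps) < 2:
        raise ValueError("need at least two frames to estimate fps")
    elapsed = timestamps[-1] - timestamps[0]
    return (len(timestamps) - 1) / elapsed

# Hypothetical use inside the recording loop:
# stamps, frames = [], []
# while recording:
#     ret, frame = vid_capture.read()
#     if ret:
#         frames.append(frame)
#         stamps.append(time.monotonic())
# out = cv2.VideoWriter(path, vid_cod, fps_for_writer(stamps), (640, 480))
# for f in frames:
#     out.write(f)
```

Buffering every frame in memory only works for short clips; for long recordings one would write to a temporary file first and remux it at the measured rate afterwards.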

Video record from an IP camera with variable frame rate with opencv and python

First of all, I want to explain what I'm trying to do.
I have an IP camera (FOSCAM 9800P) connected to my network through a router with an Ethernet cable, and I am trying to record video from it over RTSP. My intention is to add some OpenCV video processing in the middle later, but for the moment I just want to record the stream.
The main problem is that the camera delivers a variable number of frames per second: sometimes 18, sometimes 22, and so on. When recording at a fixed frame rate, what ends up happening is that the video plays back faster than it should.
Something weird: when I query get(CAP_PROP_FPS) with OpenCV, it returns a huge value like 180000.0.
To work around this, we read the frames and place them in a queue. Another thread, driven by a threading.Event() timer, takes them out and writes them to the video at regular time intervals in order to obtain a fixed frame rate.
The code is the following:
# time_per_frame, time_to_save_seconds, fps_to_save etc. are defined elsewhere
video_capture = cv2.VideoCapture("rtsp://" + user + ":" + password + "@" + ip + ":" + str(port) + "/videoMain")

if not video_capture.isOpened():
    print("Unable to read camera feed")
    sys.exit()

frame_width = int(video_capture.get(3))
frame_height = int(video_capture.get(4))
video_writer = cv2.VideoWriter(output_filename, cv2.VideoWriter_fourcc(*'MP4V'),
                               fps_to_save, (frame_width, frame_height))
input_buffer = queue.Queue(20)
finished = False
read_frames = 0

def readFile():
    global finished
    global read_frames
    while not finished:
        ret, frame = video_capture.read()
        if not ret:
            finished = True
        while not finished:
            try:
                input_buffer.put_nowait(frame)
                read_frames += 1
                break
            except queue.Full:
                print("queue.Full")

def processingFile():
    global finished
    written_frames = 0
    repeated_frames = 0
    time_per_frame_elapsed = 0.0
    start_time = time.time()
    ticker = threading.Event()
    while True:
        ticker.wait(time_per_frame - time_per_frame_elapsed)
        time_per_frame_start = time.time()
        try:
            frame = input_buffer.get_nowait()
            video_writer.write(frame)
            writing_time = time.time()
            if written_frames == 0:
                start_time = writing_time
            written_frames += 1
        except queue.Empty:
            if written_frames != 0:
                video_writer.write(frame)  # repeat the last frame
                writing_time = time.time()
                written_frames += 1
                repeated_frames += 1
        except:
            pass
        total_elapsed_time = time.time() - start_time
        print("total_elapsed_time:{:f}".format(total_elapsed_time))
        if total_elapsed_time > time_to_save_seconds:
            finished = True
            ticker.clear()
            print("Playback terminated.")
            break
        time_per_frame_elapsed = time.time() - time_per_frame_start
    print("Total frames read: {:d}".format(read_frames))
    print("Total frames repeated: {:d}".format(repeated_frames))
    print("Total frames written: {:d}".format(written_frames))

tReadFile = threading.Thread(target=readFile)
tProcessingFile = threading.Thread(target=processingFile)
tReadFile.start()
tProcessingFile.start()
tProcessingFile.join()
tReadFile.join()
The result is close to what we want, but sometimes there are significant differences in the times. We are testing with short videos of about 10 seconds and we get 9.8 seconds of recording.
At first that would not seem like a serious problem, but the error is cumulative: the longer the recording, the bigger it gets, so longer videos have more serious problems.
We would like to know how to solve this kind of recording problem with cameras that deliver frames at variable rates. Is our approach a good idea? What could be generating the cumulative error in the times?
Thanks in advance, and greetings to all!
I can say only one thing: in my experience, OpenCV's VideoCapture class works very badly with FFMPEG (which OpenCV uses to decode video) in online mode. There were lots of image artifacts and internal FFMPEG errors. VideoCapture does work perfectly with USB cameras, though. I solved the problem of online capture from an IP camera with XSplit Broadcaster: this package can emulate a USB camera backed by a physical IP camera. The only limitation is that the camera frame is resized to 640x480. The basic license of XSplit Broadcaster is free.

Parse video at lower frame rate

I am currently working on a project where I need to parse a video and pass it through several models. The videos come in at 60 fps. I do not need to run every frame through the models, but I am running into issues when trying to skip the unneeded frames. I have tried two methods, both of which are fairly slow.
Method 1 code snippet:
The issue here is that I am still reading every frame of the video, even though only every 4th frame is run through my model.
cap = cv2.VideoCapture(self.video)
count = 0
while cap.isOpened():
    success, frame = cap.read()
    if count % 4 != 0:
        count += 1
        continue
    if success:
        pass  # run frame through models
    else:
        break
Method 2 code snippet:
This method is even slower. Here I am trying to avoid reading the unnecessary frames at all.
cap = cv2.VideoCapture(video)
count = 0
while True:
    if count % 4 != 0:
        cap.set(cv2.CV_CAP_PROP_POS_FRAMES, count * 15)
        count += 1
        success, frame = cap.read()
Any advice on how to achieve this in the most efficient way would be greatly appreciated.
Getting and setting the frame position via CV_CAP_PROP_POS_FRAMES is not accurate (and is slow), because of how video compression works: it relies on keyframes.
It might help to not use the read() function at all. Instead, use grab() and only retrieve() the frames you need. From the documentation: the (read) methods/functions combine VideoCapture::grab() and VideoCapture::retrieve() in one call.
grab() fetches the frame data, and retrieve() decodes it (the computationally heavy part). So grab every frame, but only retrieve the ones you actually want to process.
Depending on your system and OpenCV build, you could also have FFMPEG decode the video using hardware acceleration.
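The grab/retrieve split can be wrapped in a small helper. This is a sketch: `skim_frames` is a made-up name, and it accepts anything exposing the VideoCapture grab()/retrieve() interface so the skipping logic is visible on its own:

```python
def skim_frames(capture, keep_every=4):
    """Advance through a stream with cheap grab() calls, decoding via
    retrieve() only every `keep_every`-th frame.
    """
    grabbed = 0
    while capture.grab():  # demux/advance without decoding
        grabbed += 1
        if grabbed % keep_every == 0:
            ok, frame = capture.retrieve()  # the expensive decode
            if ok:
                yield frame

# Hypothetical OpenCV usage:
# cap = cv2.VideoCapture("input.mp4")
# for frame in skim_frames(cap, keep_every=4):
#     run_models(frame)
```

Because grab() only demuxes and advances the stream, the skipped frames cost almost nothing compared to a full read().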
As I understand it, you are trying to process every fourth frame. You are using the condition:
if count % 4 != 0
which is triggered for 3 out of every 4 frames instead (you are processing frames 1, 2, 3, 5, 6, 7, etc.)! Use the opposite:
if count % 4 == 0
Also, judging from the snippets, the two methods do not process the same frames: although in both cases your counter increases by 1 per frame, in the second case you actually seek to 15x that counter (cap.set(cv2.CV_CAP_PROP_POS_FRAMES, count*15)).
Some comments on your code as well (maybe I misunderstood something):
Case 1:
while cap.isOpened():
    success, frame = cap.read()
    if count % 4 != 0:
        count += 1
        continue
Here you count only some frames (3 out of 4, as mentioned): for frames whose count is a multiple of 4, the condition count % 4 != 0 is not met and the counter is not incremented even though a frame was read. So your counter is inaccurate. How and where you process your frames isn't shown, so I can't judge that part.
Case 2:
while True:
    if count % 4 != 0:
        cap.set(cv2.CV_CAP_PROP_POS_FRAMES, count*15)
        count += 1
        success, frame = cap.read()
Here you only read a frame when the condition is met, so as posted you never actually read anything: count starts at 0, which does not trigger the condition, and the counter is never updated. If the counter update and the read are meant to be outside the if scope, that isn't clear from the snippet; more code would need to be shown to tell.
As a general rule, update the counter every time you read a frame.
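The effect of the two conditions can be checked with a tiny simulation (`picked_frames` is a made-up helper, not part of the question's code; it assumes the counter is bumped on every read):

```python
def picked_frames(total, every=4, invert=False):
    """Return the 1-based frame numbers that pass the modulo test.

    invert=False models `count % every == 0` (the suggested fix);
    invert=True models `count % every != 0` (the original condition).
    """
    keep = []
    for count in range(1, total + 1):
        hit = (count % every != 0) if invert else (count % every == 0)
        if hit:
            keep.append(count)
    return keep

# picked_frames(8)              -> [4, 8]            (1 in 4 frames)
# picked_frames(8, invert=True) -> [1, 2, 3, 5, 6, 7] (3 in 4 frames)
```

This reproduces the point above: the inverted condition keeps frames 1, 2, 3, 5, 6, 7, ... rather than every fourth frame.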
Instead of putting a threshold on the number of frames, which makes OpenCV process ALL of them (and, as you rightly pointed out, slows the processing down), it's better to use CAP_PROP_POS_MSEC and offload that work to cv2. With this property you can make cv2 sample one frame every n milliseconds. So setting subsample_rate = 1000 in vidcap.set(cv2.CAP_PROP_POS_MSEC, frame_count * subsample_rate) samples one frame every second (1000 milliseconds). Hopefully this improves your video processing speed.
def extractImagesFromVideo(path_in, subsample_rate, path_out, saveImage, resize=(), debug=False):
    vidcap = cv2.VideoCapture(path_in)
    if not vidcap.isOpened():
        raise IOError
    if debug:
        length = int(vidcap.get(cv2.CAP_PROP_FRAME_COUNT))
        width = int(vidcap.get(cv2.CAP_PROP_FRAME_WIDTH))
        height = int(vidcap.get(cv2.CAP_PROP_FRAME_HEIGHT))
        fps = vidcap.get(cv2.CAP_PROP_FPS)
        print('Length: %.2f | Width: %.2f | Height: %.2f | Fps: %.2f' % (length, width, height, fps))
    success, image = vidcap.read()  # extract first frame
    frame_count = 0
    while success:
        vidcap.set(cv2.CAP_PROP_POS_MSEC, frame_count * subsample_rate)
        success, image = vidcap.read()
        if saveImage and np.any(image):
            cv2.imwrite(os.path.join(path_out, "%d.png" % frame_count), image)
        frame_count = frame_count + 1
    return frame_count

OpenCV frame count wrong for interlaced videos, workaround?

Working with the code sample from "How does QueryFrame work?" I noticed that the program took a long time to exit if it ran to the end of the video. I wanted to exit quickly on the last frame, and I verified that it's much quicker if I don't try to play past the end of the video, but there are some details that don't make sense to me. Here's my code:
import cv2

# create a window
winname = "myWindow"
win = cv2.namedWindow(winname, cv2.CV_WINDOW_AUTOSIZE)

# load video file
invideo = cv2.VideoCapture("video.mts")
frames = invideo.get(cv2.cv.CV_CAP_PROP_FRAME_COUNT)
print "frame count:", frames

# interval between frames in ms
fps = invideo.get(cv2.cv.CV_CAP_PROP_FPS)
interval = int(1000.0 / fps)

# play video
while invideo.get(cv2.cv.CV_CAP_PROP_POS_FRAMES) < frames:
    print "Showing frame number:", invideo.get(cv2.cv.CV_CAP_PROP_POS_FRAMES)
    (ret, im) = invideo.read()
    if not ret:
        break
    cv2.imshow(winname, im)
    if cv2.waitKey(interval) == 27:  # ASCII 27 is the ESC key
        break

del invideo
cv2.destroyWindow(winname)
The only thing is that the reported frame count is 744, while the last played frame number is 371 (counting from 0, so 372 frames). I assume this is because the video is interlaced, and I guess I need to account for that by dividing both interval and frames by 2. But how do I figure out that I need to do this? There doesn't seem to be a property to check:
http://docs.opencv.org/modules/highgui/doc/reading_and_writing_images_and_video.html#videocapture-get
