I am trying to write code that keeps a video playing until a condition is met, runs some processes, and loads a new video immediately after those processes finish. The processes should continue without closing the current video, and the current video should only be closed once the new video is loaded and ready for visualization.
This is best illustrated by the example below:
import os
import random
import time

import cv2
import numpy as np
import pygame

def play_video(vid_loc):
    pygame.init()
    infos = pygame.display.Info()
    screen_size = (infos.current_w, infos.current_h)
    cap = cv2.VideoCapture(vid_loc)
    frame_counter = 0
    while True:
        ret, frame = cap.read()
        frame_counter += 1
        # Rewind to the first frame once the end of the video is reached.
        if frame_counter == cap.get(cv2.CAP_PROP_FRAME_COUNT):
            frame_counter = 0
            cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
        frame = cv2.resize(frame, screen_size)
        cv2.namedWindow('image', flags=cv2.WINDOW_GUI_NORMAL)
        cv2.setWindowProperty('image', cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
        cv2.imshow('image', frame)
        if cv2.waitKey(1000) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()
vid_loc = ["home/odd.ogv", "home/even.ogv"]
while True:
    n = random.randint(1, 100)
    if n % 2 == 0:
        play_video(vid_loc[1])
    else:
        play_video(vid_loc[0])
    time.sleep(10)
So this code generates a random number and checks whether it is even. If it is even it plays one video, and if it is odd it plays the other. My expectation was to keep generating a random number every 10 seconds while the video kept playing: the while loop would run again, generate another random number, and check whether it is even without closing the video, and if the number is odd, the other video would be played immediately after the current one is closed. However, this code just keeps the current video looping as intended but never generates another random number.
It basically gets stuck at cv2.imshow until the "q" key is pressed, and only then does the while loop run again.
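For reference, a minimal sketch of one way around this (the 10-second re-roll interval, the waitKey(30) pacing, and reusing the same 'image' window are my assumptions, not part of the question): do the periodic check inside the playback loop instead of sleeping outside it, and only release the current capture once the next video has opened successfully.

import random
import time

import cv2

vid_loc = ["home/odd.ogv", "home/even.ogv"]
cap = cv2.VideoCapture(vid_loc[0])
last_check = time.monotonic()

while True:
    ret, frame = cap.read()
    if not ret:
        # Loop the current video by rewinding to the first frame.
        cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
        continue
    cv2.imshow('image', frame)
    if cv2.waitKey(30) & 0xFF == ord('q'):
        break
    # Re-roll the random number every 10 seconds while the video keeps playing.
    if time.monotonic() - last_check >= 10:
        last_check = time.monotonic()
        n = random.randint(1, 100)
        new_cap = cv2.VideoCapture(vid_loc[1] if n % 2 == 0 else vid_loc[0])
        # Only drop the current video once the new one is loaded and ready.
        if new_cap.isOpened():
            cap.release()
            cap = new_cap

cap.release()
cv2.destroyAllWindows()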
I have a video URL list I want to open and process using OpenCV. Example list below:
cam_name,address
Cam_01,vid1.mp4
Cam_02,vid1.mp4
Cam_03,vid1.mp4
I want to open the first camera/address, stream for 60 seconds, release it and start the second, release it, and so on. When I get to the last stream in the list, e.g. Cam_03, start the list again, back to Cam_01.
I don't need threading; I just need to start a stream and stop it after a period of time, etc.
Code so far:
import cv2
import pandas as pd

# this calls the OpenCV stream
def main(cam_name, camID):
    main_bret(cam_name, camID)

df = 'cams_list.txt'
df = pd.read_csv(df)

for index, row in df.iterrows():
    main(row['cam_name'], row['address'])
########################################
def main_bret(cam_name, camID):
    vid = cv2.VideoCapture(camID)
    while True:
        ret, frame = vid.read()
        cv2.imshow(cam_name, frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    vid.release()
    cv2.destroyAllWindows()
How do I construct the loop so that it streams for 60 seconds, releases, and moves to the next in the list, and repeats when the list is finished?
What is happening now, e.g. when just streaming an mp4, is that it finishes the whole video and then starts the next. I just need it to stop streaming after 60 seconds.
I have tried adding sleep() calls around cv2.waitKey(1), but that pauses everything for that period rather than timing the individual streams. I guess I am not putting the sleep in the correct place.
tks
I'm not sure I understood every detail of your program, but this is what I have:
import time

import cv2
import pandas as pd

df = 'cams_list.txt'
df = pd.read_csv(df)

def main_bret(cam_name, camID):
    vid = cv2.VideoCapture(camID)
    t_end = time.monotonic() + 60  # stream this source for 60 seconds
    while time.monotonic() < t_end:
        ret, frame = vid.read()
        if not ret:
            break
        cv2.imshow(cam_name, frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    vid.release()
    cv2.destroyAllWindows()

index = 0
upper_index = df.shape[0]
while True:
    main_bret(df.iloc[index]["cam_name"], df.iloc[index]["address"])
    index = index + 1
    if index >= upper_index:
        index = 0
It's easier to iterate over a list again and again with a while loop, so I got rid of the for loop.
Inside main_bret, frames are read and shown until 60 seconds have elapsed; then the capture is released and the window is closed before moving on to the next entry in the list.
I hope I was helpful.
I am running into some problems recording video from my webcam. Because I am running an experiment and recording the video at the same time, it is very important that a real second matches a recorded second.
The code I am using is this one:
import threading

import cv2

def record_video(self, path_video="test"):
    vid_capture = cv2.VideoCapture(0)
    #vid_capture.set(cv2.CAP_PROP_FPS, 20)
    fps = vid_capture.get(cv2.CAP_PROP_FPS)
    #vid_capture.set(cv2.CAP_PROP_BUFFERSIZE, 2)
    vid_cod = cv2.VideoWriter_fourcc(*'XVID')
    output = cv2.VideoWriter("experiment_videos/" + path_video + ".avi", vid_cod, fps, (640, 480))
    # socket_trigger is defined elsewhere and sets the global flags `start` and `end`.
    x = threading.Thread(target=socket_trigger)
    x.daemon = True
    x.start()
    print("Waiting")
    # Discard frames until the start trigger arrives.
    while start == 0:
        ret_cam, frame_cam = vid_capture.read()
    while True:
        ret_cam, frame_cam = vid_capture.read()
        output.write(frame_cam)
        # Close and break the loop after pressing the "x" key
        if cv2.waitKey(1) & 0XFF == ord('x'):
            break
        if end == 1:
            break
    # close the already opened camera
    vid_capture.release()
    #cap.release()
    # close the already opened file
    output.release()
    # close the window and de-allocate any associated memory usage
    cv2.destroyAllWindows()
All the code works, and the flag variables I receive behave properly. The problem is that if I record a video of, say, 5 minutes (real time), the output can be 4:52, 4:55, or even 5:00. It's just not exact.
I think it's because I write to an output file at 30 fps (that's what vid_capture.get(cv2.CAP_PROP_FPS) returns) while my camera sometimes captures fewer frames per second (say 28 or 29).
I don't know what to do. I have tried setting the camera to 20 fps, but it didn't work (#vid_capture.set(cv2.CAP_PROP_FPS, 20)).
Any ideas? Matching times is critical for me, and my only job is to record a webcam.
Thanks for all,
Jaime
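One way to confirm (and partly work around) the mismatch described above, as a hedged sketch: time a short burst of reads to estimate the camera's real frame rate, and open the writer with that measured rate instead of the reported one. The probe length, output file name, and 640x480 frame size here are illustrative assumptions, not part of the original code.

import time

import cv2

vid_capture = cv2.VideoCapture(0)

# Probe: time how long it takes to actually read 120 frames.
n_probe = 120  # assumed probe length; longer probes give steadier estimates
start_t = time.monotonic()
for _ in range(n_probe):
    vid_capture.read()
elapsed = time.monotonic() - start_t
measured_fps = n_probe / elapsed

print("reported fps:", vid_capture.get(cv2.CAP_PROP_FPS))
print("measured fps:", measured_fps)

# Write with the measured rate so playback duration matches wall-clock time.
vid_cod = cv2.VideoWriter_fourcc(*'XVID')
output = cv2.VideoWriter("probe_test.avi", vid_cod, measured_fps, (640, 480))

If the camera's rate still drifts during a long recording, timestamping each frame and re-timing the file afterwards would be more robust, but that is beyond this sketch.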
I want to live-search the screen of my Android phone with OpenCV and Python.
My phone streams its screen via HTTP, and I am reading the stream with code like this:
host = "192.168.178.168:8080"
hoststr = 'http://' + host + '/stream.mjpeg'
print 'Streaming ' + hoststr
stream = urllib2.urlopen(hoststr)
bytes = ''
drop_count = 0
while True:
    bytes += stream.read(1024)
    a = bytes.find('\xff\xd8')  # JPEG start marker
    b = bytes.find('\xff\xd9')  # JPEG end marker
    if a != -1 and b != -1:
        drop_count += 1
        if drop_count > 120:
            drop_count = 0
            jpg = bytes[a:b+2]
            bytes = bytes[b+2:]
            i = cv2.imdecode(np.fromstring(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
            cv2.imshow(hoststr, i)
            process_img(i)  # my image processing
            if cv2.waitKey(1) == 27:
                exit(0)
The problem is that my screen searching takes too long and creates a big delay. Since the phone sends far more frames per second than I need, I would like to process only one image every second or two. How can I do that? I cannot change the FPS on the phone.
I can resize the screen image on the phone to 50% before sending it, but if I resize it by more than 50% I can no longer find what I'm searching for with OpenCV. Even with the 50% resize the delay is too big.
If I use a simple counter and only process every 2nd/10th/30th image, that does not help.
EDIT: I added my simple counter implementation for dropping frames above; it does not help. If I don't process the image at all, I get a constant small delay both with and without frame dropping.
EDIT²: I finally solved it. Sadly I don't remember where I read this, but it's very simple with OpenCV's VideoCapture:
screen_capture = cv2.VideoCapture(stream_url)  # init video capture
drop_count = 0  # init drop counter
while True:
    drop_count += 1
    ret = screen_capture.grab()  # grab the frame but don't decode it
    if drop_count % 5 == 0:  # every 5th frame is kept
        ret, img = screen_capture.retrieve()  # decode and process this frame
This way the frames you want to drop really get dropped, and no delay builds up.
I'm guessing the delay gets so long because you only display the processed frames. Effectively only 1 out of every 120 frames is shown, and that frame stays on screen for the next 120 frames plus the processing time. That would indeed seem laggy.
You should display all frames normally and only call the process_img() function on every 120th frame.
Try this and see if it improves things:
if a != -1 and b != -1:
    drop_count += 1
    jpg = bytes[a:b+2]
    bytes = bytes[b+2:]
    i = cv2.imdecode(np.fromstring(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
    cv2.imshow(hoststr, i)
    if drop_count > 120:  # only process every 120th frame
        drop_count = 0
        process_img(i)  # my image processing
    if cv2.waitKey(1) == 27:
        exit(0)
Working with the code sample from How does QueryFrame work?, I noticed that the program took a long time to exit if it ran to the end of the video. I wanted to exit quickly on the last frame, and I verified that it is much quicker if I don't try to play past the end of the video, but there are some details that don't make sense to me. Here's my code:
import cv2

# create a window
winname = "myWindow"
win = cv2.namedWindow(winname, cv2.CV_WINDOW_AUTOSIZE)

# load video file
invideo = cv2.VideoCapture("video.mts")
frames = invideo.get(cv2.cv.CV_CAP_PROP_FRAME_COUNT)
print "frame count:", frames

# interval between frames in ms
fps = invideo.get(cv2.cv.CV_CAP_PROP_FPS)
interval = int(1000.0 / fps)

# play video
while invideo.get(cv2.cv.CV_CAP_PROP_POS_FRAMES) < frames:
    print "Showing frame number:", invideo.get(cv2.cv.CV_CAP_PROP_POS_FRAMES)
    (ret, im) = invideo.read()
    if not ret:
        break
    cv2.imshow(winname, im)
    if cv2.waitKey(interval) == 27:  # ASCII 27 is the ESC key
        break

del invideo
cv2.destroyWindow(winname)
The only thing is that the frame count returned is 744, while the last played frame number is 371 (counting from 0, so that's 372 frames). I assume this is because the video is interlaced, and I guess I need to account for that by dividing both interval and frames by 2. But the question is: how do I figure out that I need to do this? There doesn't seem to be a property to check:
http://docs.opencv.org/modules/highgui/doc/reading_and_writing_images_and_video.html#videocapture-get
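As a hedged sketch of a sanity check (not a proper interlacing detector): read the file through once without displaying anything and compare how many frames actually decode against what the frame count property reports. A factor-of-two gap is consistent with the field-vs-frame counting suspected above. This uses the same old-style cv2.cv constants as the code in the question and can be slow for long files.

import cv2

invideo = cv2.VideoCapture("video.mts")
reported = invideo.get(cv2.cv.CV_CAP_PROP_FRAME_COUNT)

# Count the frames that can actually be decoded.
decoded = 0
while invideo.read()[0]:
    decoded += 1

print "reported:", reported, "decoded:", decoded
print "ratio:", reported / decoded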
I have a bunch of videos and depthmaps showing human poses from the Microsoft Kinect.
I can get a skeleton of the human in the video but what I want to do is recognize a certain pose from this skeleton data.
To do that I need to annotate each frame in the videos with a 0 or 1, corresponding to "bad pose" and "good pose", i.e. the frame has a binary state variable.
I want to be able to playback the avi file in matlab and then press space to switch between these two states and simultaneously add the state variable to an array giving the state for each frame in the video.
Is there a tool in matlab that can do this? Matlab is not a restriction, though; python, C++, or any other language is fine.
I have been googling around, and most of the stuff I have found is for annotating individual frames with a polygon. I want to do this at maybe half the regular framerate of the video.
EDIT: I used the solution provided by miindlek and decided to share a few things in case someone else runs into this. I needed to see in the video which annotation I was assigning to each frame, so I draw a small circle in the upper left corner of the video as I display it. Hopefully this will be useful to someone later. I also capture the key pressed with waitKey and then act on the output, which allows multiple keys to be used during the annotations.
import numpy as np
import cv2
import os

os.chdir('PathToVideo')

# Blue circle means that the annotation hasn't started
# Green circle is a good pose
# Red is a bad pose
# White circle means we are done, press d for that

# Instructions on how to use!
# Press space to swap between states; you have to press space when the person
# starts doing poses.
# Press d when the person finishes.
# Press q to quit early; then the annotations are not saved. You should only
# use this if you made a mistake and need to start over.

cap = cv2.VideoCapture('Video.avi')
# You can INCREASE the value of speed to make the video SLOWER
speed = 33
# Start with the state at 10 to indicate that the procedure has not started
current_state = 10
saveAnnotations = True
annotation_list = []
# We can check whether the video capture has been opened
cap.isOpened()
colCirc = (255, 0, 0)

# Iterate while the capture is open, i.e. while we still get new frames.
while cap.isOpened():
    # Read one frame.
    ret, frame = cap.read()
    # Break the loop if we don't get a new frame.
    if not ret:
        break
    # Draw the colored circle on the image to show the current state.
    cv2.circle(frame, (50, 50), 50, colCirc, -1)
    # Show one frame.
    cv2.imshow('frame', frame)
    # Wait for a keypress and act on it.
    k = cv2.waitKey(speed)
    if k == ord(' '):
        if current_state == 10:
            # First space press: the annotation procedure starts.
            current_state = 0
            colCirc = (0, 255, 0)
        elif current_state == 0:
            current_state = 1
            colCirc = (0, 0, 255)
        else:
            current_state = 0
            colCirc = (0, 255, 0)
    if k == ord('d'):
        current_state = 11
        colCirc = (255, 255, 255)
    # Press q to quit.
    if k == ord('q'):
        print "You quit! Restart the annotations by running this script again!"
        saveAnnotations = False
        break
    annotation_list.append(current_state)

# Release the capture and close the window.
cap.release()
cv2.destroyAllWindows()

# Only save if you did not quit.
if saveAnnotations:
    f = open('poseAnnot.txt', 'w')
    for item in annotation_list:
        print>>f, item
    f.close()
One way to solve your task is to use the OpenCV library with Python, as described in this tutorial.
import numpy as np
import cv2

cap = cv2.VideoCapture('video.avi')
current_state = False
annotation_list = []

while True:
    # Read one frame.
    ret, frame = cap.read()
    if not ret:
        break
    # Show one frame.
    cv2.imshow('frame', frame)
    # Check if the space bar is pressed to switch the mode.
    if cv2.waitKey(1) & 0xFF == ord(' '):
        current_state = not current_state
    annotation_list.append(current_state)

# Convert the list of boolean values to a list of int values.
annotation_list = map(int, annotation_list)
print annotation_list

cap.release()
cv2.destroyAllWindows()
The variable annotation_list contains all annotations for each frame. To switch between the two modes, you have to press the space bar.