I have a Python program with an OpenCV GUI that uses multiprocessing and threading inside a class instance.
After running the script below, pressing q quits the sub-loop and terminates or joins all existing threads and processes and closes the queues in the modules.
If I then press r, the sub-loop starts again, but this time the multiprocessing worker runs twice as slow. I have no idea why this is happening.
The code looks like this:
import cv2
import numpy as np

module_started = False
height = 480
width = 800
img_base = np.zeros((height, width, 3), np.uint8)
cv2.namedWindow("VIN", cv2.WINDOW_NORMAL)
while True:
    if not module_started:
        cam1 = PersonDetector(0)  # my worker class using multiprocessing/threading
        cam1.start()
        outputs = []
        module_started = True
        while True:
            if cam1.new_detection:
                outputs = cam1.get_output()
            if outputs:
                for xys in outputs:
                    (startX, startY, endX, endY) = xys
                    cv2.rectangle(cam1.frame, (startX, startY), (endX, endY), (255, 5, 150), 2)
            # show the output image
            cv2.imshow("VIN", cam1.frame)
            key = cv2.waitKey(40) & 0xFF
            # if the `q` key was pressed, break from the loop
            if key == ord("q"):
                break
        cam1.stop()
        del cam1
    # show the base frame while the sub-loop is stopped
    cv2.imshow("VIN", img_base)
    key = cv2.waitKey(40) & 0xFF
    if key == ord("e"):
        break
    if key == ord("r"):  # go back and start the sub-loop again
        module_started = False
cv2.destroyAllWindows()
This is on a Raspberry Pi 4, which has a 4-core CPU.
The first time, htop shows the process using over 300% CPU,
but the second time, htop shows only 99 or 100%.
Why does this happen?
I tested this myself for a whole day.
It was not caused by multiprocessing or threading.
It was caused by cv2.imshow(): when I removed imshow, everything kept the same speed, but when repeating imshow as in the code above, the program slows down. I still don't know why, though.
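One way to narrow down which call gets slower between runs is a small timing harness. The sketch below is pure Python; `fake_imshow` is just a placeholder for whatever call you want to measure (cv2.imshow, cam1.get_output, cv2.waitKey, ...):

```python
import time

def time_call(fn, n=100):
    """Run fn() n times and return the average wall-clock time per call."""
    t0 = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - t0) / n

# Placeholder standing in for the suspect call; in the real loop you would
# wrap cv2.imshow / cam1.get_output / cv2.waitKey the same way.
def fake_imshow():
    sum(range(1000))

avg = time_call(fake_imshow)
print(f"average per call: {avg * 1e6:.1f} us")
```

Comparing the per-call averages from the first and second run of the sub-loop tells you which call actually slowed down, instead of guessing from htop's aggregate CPU%.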
I am running into problems recording video from my webcam. Because I am recording the video while running an experiment, it is critical that a real-world second matches a second of recorded video.
The code I am using is this:
def record_video(self, path_video="test"):
    vid_capture = cv2.VideoCapture(0)
    #vid_capture.set(cv2.CAP_PROP_FPS, 20)
    fps = vid_capture.get(cv2.CAP_PROP_FPS)
    #vid_capture.set(cv2.CAP_PROP_BUFFERSIZE, 2)
    vid_cod = cv2.VideoWriter_fourcc(*'XVID')
    output = cv2.VideoWriter("experiment_videos/" + path_video + ".avi", vid_cod, fps, (640, 480))
    # socket_trigger sets the global flag variables `start` and `end`
    x = threading.Thread(target=socket_trigger)
    x.daemon = True
    x.start()
    print("Waiting")
    while start == 0:
        ret_cam, frame_cam = vid_capture.read()
    while True:
        ret_cam, frame_cam = vid_capture.read()
        output.write(frame_cam)
        # Close and break the loop after pressing "x" key
        if cv2.waitKey(1) & 0xFF == ord('x'):
            break
        if end == 1:
            break
    # close the already opened camera
    vid_capture.release()
    #cap.release()
    # close the already opened file
    output.release()
    # close the window and de-allocate any associated memory usage
    cv2.destroyAllWindows()
All the code works, and the flag variables I receive behave correctly. The problem is that if I record, say, 5 minutes of real time, the output can be 4:52, 4:55, or even exactly 5:00. It's just not exact.
I think this is because I write to an output file at 30 fps (that's what vid_capture.get(cv2.CAP_PROP_FPS) returns), while my camera sometimes captures fewer frames (say 28 or 29).
I don't know what to do. I tried setting the camera to 20 fps, but it didn't work (#vid_capture.set(cv2.CAP_PROP_FPS, 20)).
Any ideas? Matching times is critical for me, and my only job is to record a webcam.
Thanks for all,
Jaime
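Since cv2.VideoWriter needs the frame rate up-front, one possible workaround (an assumption, not tested against this setup) is to time the capture yourself and then re-write or re-mux the file using the measured rate instead of the nominal CAP_PROP_FPS. The measurement itself is simple:

```python
def effective_fps(frame_count, t_start, t_end):
    """Actual frames per second achieved during a capture,
    from the number of frames written and wall-clock timestamps."""
    elapsed = t_end - t_start
    if elapsed <= 0:
        raise ValueError("end time must be after start time")
    return frame_count / elapsed

# Example: 8700 frames captured over 300 s of wall-clock time
fps = effective_fps(8700, 0.0, 300.0)
print(fps)  # 29.0
```

In practice you would record time.time() just before and after the recording loop, count the frames you wrote, then open a second VideoWriter with the measured fps and copy the frames into it, so that playback duration matches real time.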
I am running some code called bcf.py. It is very long and convoluted, but in short it extracts 300 feature points from each image in a group of folders, so there could potentially be hundreds of images.
When I run the script, everything works, except that I have to press the Return key to extract the feature points, and repeat this key press for every single image, which is frustrating.
Why does it do this, and how would I fix it? The goal is to press a key once and have all the features extracted.
Thank you.
I do not know what this behavior is called, so I have not been able to search for an answer.
def _svm_classify_test(self):
    clf = self._load_classifier()
    label_to_cls = self._load_label_to_class_mapping()
    testing_data = []
    labels = []
    for (cls, idx) in self.data.keys():
        testing_data.append(self.data[(cls, idx)]['spp_descriptor'])
        labels.append(hash(cls))
    predictions = clf.predict(testing_data)
    correct = 0
    for (i, label) in enumerate(labels):
        if predictions[i] == label:
            correct += 1
        else:
            print("Mistook %s for %s" % (label_to_cls[label], label_to_cls[predictions[i]]))
    print("Correct: %s out of %s (Accuracy: %.2f%%)" % (correct, len(predictions), 100. * correct / len(predictions)))

def show(self, image):
    cv2.imshow('image', image)
    _ = cv2.waitKey()
The goal is to press the wait key once and have it automatically run through all the images and extract the features.
As explained in the documentation, the function cv2.waitKey([, delay]) takes an optional value you can think of as a timeout: pass 10 and it will block for at most 10 milliseconds waiting for keyboard input.
In your case, I do not see where in the code you call your show function, so I can't say exactly how you should change it; but as pseudocode to give you the idea, it would be something like:
filenames = []  # let's assume your filenames are here
for f in filenames:
    img = cv2.imread(f)
    cv2.imshow("image", img)
    cv2.waitKey(10)
If you want a pause at the beginning, you can do an imshow outside the loop followed by a waitKey(0). You can also play with the delay, e.g. 5000 to display each image for 5 seconds before continuing.
If the processing takes too long, you may consider putting the imshow part in a thread, since the window may become unresponsive after the waitKey while it waits for the feature extraction to finish. It may also be good to add something like a 'q' key to quit the app. These are just some suggestions :)
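The threaded-display suggestion can be sketched like this. The `show` callback is injected (in a real app it would wrap cv2.imshow plus a short waitKey), so the worker itself needs no GUI; note that many OpenCV builds require GUI calls on the main thread, in which case you would instead keep imshow in the main thread and push the feature extraction into the worker.

```python
import queue
import threading

def display_worker(frame_queue, show):
    """Pull frames off the queue and hand them to `show` until a None
    sentinel arrives. `show` is injected so this stays testable without
    a GUI; with OpenCV it would be cv2.imshow + cv2.waitKey."""
    while True:
        frame = frame_queue.get()
        if frame is None:  # sentinel: stop the worker
            break
        show(frame)

frames = queue.Queue()
shown = []
t = threading.Thread(target=display_worker, args=(frames, shown.append))
t.start()
for f in ["frame0", "frame1", "frame2"]:
    frames.put(f)
frames.put(None)
t.join()
print(shown)  # ['frame0', 'frame1', 'frame2']
```

The producer (your feature-extraction loop) just keeps calling frames.put(img) and never blocks on the display.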
I have a video of a cow farm. My objectives are:
(a) get the location of the corners of the cow pen (cowhouse)
(b) get the corners of the food container
Here is the approach I am thinking about:
(a) - capture the frame and freeze on the 1st frame
- the user will manually put the mouse on the corners
- the x,y locations will be saved in a list
- press the "p" key to proceed to the next frame
(b) - freeze on the second frame
- the user will manually put the mouse on the corners
- the x,y locations will be saved in another list
- press the "c" key to proceed through the remaining frames
I already have other code to carry out the other operations. I tried the following code to get points from an image (not a video). I am not sure how to pause the video and use the current frame as the input image:
import cv2
import numpy as np

ix, iy = -1, -1
# the list of locations
mouse = []

def get_location(event, x, y, flags, param):
    global ix, iy
    if event == cv2.EVENT_LBUTTONDBLCLK:
        ix, iy = x, y
        mouse.append([x, y])

# read the image (grayscale) and name the window
img = cv2.imread("colo.png", 0)
cv2.namedWindow('image')
cv2.setMouseCallback('image', get_location)
while True:
    cv2.imshow('image', img)
    k = cv2.waitKey(20) & 0xFF
    if k == 27:  # Esc quits
        break
    elif k == ord('a'):
        print(ix, iy)
        print(mouse)
cv2.destroyAllWindows()
The answers I am looking for are: (a) how do I freeze on a specific frame number, and (b) cv2.setMouseCallback('image', get_location) takes a string as its first argument, so how do I pass the frame in here?
a) Use a variable for the waitKey delay and set it to 0; the next frame will then only be shown after a key press. Change the variable once "c" is pressed, so the video runs normally:
waitTime = 0
k = cv2.waitKey(waitTime)
if k == ord('c'):
    waitTime = 20
b) The string argument is the name of the window that the callback is attached to. To 'insert the frame', just call imshow on that window; the code you have is fine in that regard.
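Putting a) together with the "p" key from the question, the key handling can be kept in a small helper that the video loop calls once per frame (the names here are illustrative, not from the original code):

```python
def next_wait_time(key, wait_time, paused_wait=0, running_wait=20):
    """Return the waitKey delay for the next frame.
    'c' resumes normal playback; 'p' pauses on the current frame
    (a delay of 0 makes waitKey block until the next key press)."""
    if key == ord('c'):
        return running_wait
    if key == ord('p'):
        return paused_wait
    return wait_time

# Start paused on the first frame, resume with 'c', re-pause with 'p'.
wait = 0
wait = next_wait_time(ord('c'), wait)
print(wait)  # 20
wait = next_wait_time(ord('p'), wait)
print(wait)  # 0
```

In the real loop you would write k = cv2.waitKey(wait) & 0xFF and then wait = next_wait_time(k, wait); while paused, the mouse callback keeps collecting corner clicks on the frozen frame.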
I am trying to write something that keeps a video playing until a condition is met and some processing is done, then loads a new video immediately afterwards. The processing should continue without closing the current video, and the current video should only be closed once the new video is loaded and ready for visualization.
This is best illustrated by the example below:
import os
import random
import time

import cv2
import numpy as np
import pygame

def play_video(vid_loc):
    pygame.init()
    infos = pygame.display.Info()
    screen_size = (infos.current_w, infos.current_h)
    cap = cv2.VideoCapture(vid_loc)
    frame_counter = 0
    while True:
        ret, frame = cap.read()
        frame_counter += 1
        # loop the video: rewind once the last frame is reached
        if frame_counter == cap.get(cv2.CAP_PROP_FRAME_COUNT):
            frame_counter = 0
            cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
        frame = cv2.resize(frame, screen_size)
        cv2.namedWindow('image', flags=cv2.WINDOW_GUI_NORMAL)
        cv2.setWindowProperty("image", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
        cv2.imshow('image', frame)
        if cv2.waitKey(1000) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()

vid_loc = ["home/odd.ogv", "home/even.ogv"]
while True:
    n = random.randint(1, 100)
    if n % 2 == 0:
        play_video(vid_loc[1])
    else:
        play_video(vid_loc[0])
    time.sleep(10)
This code generates a random number and checks whether it is even. If it is even it plays one video, and if odd, another. My expectation was that a new random number would keep being generated every 10 seconds while the video kept playing: the outer while loop would iterate again, generate another random number, and check its parity without closing the video, and only when the parity changed would the current video be closed and the other one started immediately. However, this code just keeps the current video looping, as intended, but never generates another random number.
It basically gets stuck inside play_video until the "q" key is pressed; only then does the outer while loop run again.
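One way to get the intended behaviour is to invert the structure: make the frame loop the outer loop, and check a timer once per frame instead of calling time.sleep(10), so the random draw happens while frames keep being shown. A pure-Python sketch of the timer part (names are illustrative):

```python
import random
import time

def pick_parity(last_pick_time, current_pick, now=None, interval=10.0):
    """Re-draw the even/odd choice every `interval` seconds; otherwise
    keep the current choice, so the frame loop never has to sleep()."""
    now = time.monotonic() if now is None else now
    if now - last_pick_time >= interval:
        return now, random.randint(1, 100) % 2
    return last_pick_time, current_pick

# 5 s since the last draw: the choice is kept; 12 s: a new one is drawn.
print(pick_parity(0.0, 0, now=5.0))   # (0.0, 0)
print(pick_parity(0.0, 0, now=12.0))  # e.g. (12.0, 1)
```

Inside the frame loop you would call this once per frame; only when the returned choice differs from the currently playing index do you release the current VideoCapture and open the other file, which keeps the window up the whole time.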
I have a bunch of videos and depth maps showing human poses from the Microsoft Kinect.
I can get a skeleton of the human in the video, but what I want to do is recognize a certain pose from this skeleton data.
To do that, I need to annotate each frame in the videos with a 0 or 1, corresponding to "bad pose" and "good pose", i.e. each frame has a binary state variable.
I want to play back the avi file in MATLAB, press space to switch between these two states, and simultaneously append the state variable to an array, giving the state for each frame of the video.
Is there a tool in MATLAB that can do this? Otherwise MATLAB is not a restriction; Python, C++, or any other language is fine.
I have been googling around, and most of what I have found annotates individual frames with a polygon. I want to do this at maybe half the regular framerate of the video.
EDIT: I used the solution provided by miindlek and decided to share a few things in case someone else runs into this. I needed to see in the video which annotation I was assigning to each frame, so I drew a small circle in the upper-left corner of the video as it is displayed. Hopefully this will be useful to someone later. I also capture the key pressed with waitKey and then act on the result, which allows multiple keys to be used during annotation.
import os

import cv2
import numpy as np

os.chdir('PathToVideo')
# Blue circle means that the annotation hasn't started
# Green circle is a good pose
# Red is a bad pose
# White circle means we are done, press d for that

# Instructions on how to use!
# Press space to swap between states; you have to press space when the person
# starts doing poses.
# Press d when the person finishes.
# Press q to quit early; the annotations are then not saved. You should only
# use this if you made a mistake and need to start over.
cap = cv2.VideoCapture('Video.avi')
# You can INCREASE the value of speed to make the video SLOWER
speed = 33
# Start in state 10 to indicate that the procedure has not started
current_state = 10
saveAnnotations = True
annotation_list = []
# We can check whether the video capture has been opened
cap.isOpened()
colCirc = (255, 0, 0)
# Iterate while the capture is open, i.e. while we still get new frames.
while cap.isOpened():
    # Read one frame.
    ret, frame = cap.read()
    # Break the loop if we don't get a new frame.
    if not ret:
        break
    # Add the colored circle on the image to show the current state
    cv2.circle(frame, (50, 50), 50, colCirc, -1)
    # Show one frame.
    cv2.imshow('frame', frame)
    # Wait for a keypress and act on it
    k = cv2.waitKey(speed)
    if k == ord(' '):
        if current_state == 0:
            current_state = 1
            colCirc = (0, 0, 255)
        else:
            current_state = 0
            colCirc = (0, 255, 0)
        if current_state == 10:
            current_state = 0
            colCirc = (0, 255, 0)
    if k == ord('d'):
        current_state = 11
        colCirc = (255, 255, 255)
    # Press q to quit
    if k == ord('q'):
        print("You quit! Restart the annotations by running this script again!")
        saveAnnotations = False
        break
    annotation_list.append(current_state)
# Release the capture and close the window
cap.release()
cv2.destroyAllWindows()
# Only save if you did not quit
if saveAnnotations:
    with open('poseAnnot.txt', 'w') as f:
        for item in annotation_list:
            print(item, file=f)
One way to solve your task is to use the OpenCV library with Python, as described in this tutorial:
import cv2
import numpy as np

cap = cv2.VideoCapture('video.avi')
current_state = False
annotation_list = []
while True:
    # Read one frame.
    ret, frame = cap.read()
    if not ret:
        break
    # Show one frame.
    cv2.imshow('frame', frame)
    # Check if the space bar is pressed to switch the mode.
    if cv2.waitKey(1) & 0xFF == ord(' '):
        current_state = not current_state
    annotation_list.append(current_state)
# Convert the list of boolean values to a list of int values.
annotation_list = list(map(int, annotation_list))
print(annotation_list)
cap.release()
cv2.destroyAllWindows()
The variable annotation_list contains the annotation for each frame. To switch between the two modes, press the space bar.
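If the per-frame list gets long, it can be handy to collapse it into runs before inspecting or saving it. A small helper for that (not part of the answer above; the name is illustrative):

```python
def to_segments(annotations):
    """Collapse a per-frame state list into (start, end, state) runs,
    with `end` exclusive, e.g. for eyeballing long annotation lists."""
    segments = []
    start = 0
    for i in range(1, len(annotations) + 1):
        # close the current run at the end of the list or on a state change
        if i == len(annotations) or annotations[i] != annotations[start]:
            segments.append((start, i, annotations[start]))
            start = i
    return segments

print(to_segments([0, 0, 1, 1, 1, 0]))  # [(0, 2, 0), (2, 5, 1), (5, 6, 0)]
```

Each tuple reads as "frames start..end-1 have this state", which maps directly onto the good-pose/bad-pose intervals in the video.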