I am running a camera calibration program in Python using OpenCV. I am using the built-in camera of an XPS 15 9575 to capture frames of a standard black-and-white checkerboard that I printed. For some reason, the program never detects the checkerboard.
I've run this program with previously saved images and it works. It only fails when I capture new frames and try to process them immediately.
This is the beginning of the code. It checks whether the corners are found before moving on to the next step; when running, it never makes it past this check.
import cv2

cam = cv2.VideoCapture(0)
cv2.namedWindow("test")
img_counter = 0
imgNames = []
size = (5, 5)

while True:
    ret, frame = cam.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow("test", gray)
    if not ret:
        break
    k = cv2.waitKey(1)
    if k % 256 == 27:      # ESC pressed
        break
    elif k % 256 == 32:    # SPACE pressed
        img_name = "{}.png".format(img_counter)
        imgtemp = cv2.imread(img_name)
        graytemp = cv2.cvtColor(imgtemp, cv2.COLOR_BGR2GRAY)
        ret, corners = cv2.findChessboardCorners(graytemp, size, None)
        print(ret)
        if ret == True:
            print("good!")
            imgNames.append(img_name)
            cv2.imwrite(img_name, frame)
            img_counter += 1
        else:
            print("again")
In your code above, you are trying to read an image that doesn't exist yet. See these lines:
img_name = "{}.png".format(img_counter)
imgtemp = cv2.imread(img_name)
Here, img_name is just a string; no image file with that name exists yet. One option is to capture a frame, save it under img_name, and only then read it back via cv2.imread, like below:
img_name = "{}.png".format(img_counter)
cv2.imwrite(img_name, frame)
imgtemp = cv2.imread(img_name)
Alternatively, you can replace imgtemp = cv2.imread(img_name) with imgtemp = frame. In that case you don't have to save a frame before processing it: once the spacebar is pressed, the current captured video frame is processed directly.
And don't forget to add these lines at the end of your code:
cam.release()
cv2.destroyAllWindows()
I'm trying to read a video, draw some shapes on it, and write it out using opencv-python (via the VideoWriter class):
def Mask_info(path):
    """This function will mask the information part of the video"""
    video = cv.VideoCapture(path)
    framenum = video.get(cv.CAP_PROP_FRAME_COUNT)
    fps = video.get(cv.CAP_PROP_FPS)
    fourcc = cv.VideoWriter_fourcc(*"vp09")
    width = int(video.get(cv.CAP_PROP_FRAME_WIDTH))
    height = int(video.get(cv.CAP_PROP_FRAME_HEIGHT))
    size = (width, height)
    if video.isOpened() == False:
        print("Error while reading the file.")
    result = cv.VideoWriter("masked_video.mp4", fourcc, fps, size)
    while True:
        isTrue, frame = video.read()
        cv.rectangle(frame, (65, 0), (255, 10), (0, 0, 0), -1)
        cv.rectangle(frame, (394, 0), (571, 10), (0, 0, 0), -1)
        if isTrue == True:
            result.write(frame)
            cv.imshow("Masked Video", frame)
            if cv.waitKey(1) & 0xFF == ord("d"):
                break
        else:
            break
    video.release()
    result.release()
    cv.destroyAllWindows()

Mask_info("samplesound.webm")
The problem is that the output video length is zero, while the input video is 10 seconds.
To elaborate on my comment above:
You should verify that video.read() returns a valid frame.
It could be that, due to a wrong path or some other issue, the VideoCapture failed to open the input file.
You attempt to draw the rectangles (using cv.rectangle) before the if that checks whether you have a valid frame. But if video.read() failed (e.g. when reaching the end of the input), frame will be None. Then cv.rectangle will throw an exception, causing the program to terminate without flushing and closing the output file.
Instead, you should do the drawing inside the isTrue == True branch of the if:
if isTrue == True:
    cv.rectangle(frame, (65, 0), (255, 10), (0, 0, 0), -1)
    cv.rectangle(frame, (394, 0), (571, 10), (0, 0, 0), -1)
    result.write(frame)
    # ...
I have a camera facing an LED sensor, and I want it to capture an image only when an object is in sight. I will designate a point or region whose pixel values change when an object arrives; the program should wait 3 seconds and then capture. After that, it should not capture the same object again while it remains in the designated region. Here is my code:
import cv2
import numpy as np
import os
import time

os.chdir('C:/Users/Man/Desktop')
previous_flag = 0
current_flag = 0
a = 0
video = cv2.VideoCapture(0)

while True:
    ret, frame = video.read()
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    cv2.imshow('Real Time', rgb)
    cv2.waitKey(1)
    if rgb[400, 200, 0] > 200:  # a point whose pixel values change when an object arrives
        current_flag = 0
        previous_flag = current_flag
        print('No object Detected')
    else:  # object arrived
        current_flag = 1
        change_in_flags = current_flag - previous_flag
        if change_in_flags == 1:
            time.sleep(3)
            cv2.imwrite('./trial/cam' + str(a) + '.jpg', rgb)
            a += 1
        previous_flag = current_flag

video.release()
cv2.destroyAllWindows()
When I run the program, the first case (the if branch) prints No object Detected over and over.
How can I reduce those repeated lines so the statement is printed only once, without repetition?
I also tried adding a while loop to hold the current status after a += 1, like:
while True:
    previous_flag = current_flag
    continue
It worked, but the system became very slow. Is there a way to avoid such a busy loop so the system stays fast? Any advice is appreciated.
import cv2
import numpy as np
import os
import time

os.chdir('C:/Users/Man/Desktop')
state1 = 0
a = 0
video = cv2.VideoCapture(0)

while True:
    ret, frame = video.read()
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    cv2.imshow('Real Time', rgb)
    cv2.waitKey(1)
    if rgb[400, 200, 0] > 200:  # a point whose pixel values change when an object arrives
        if state1 == 0:
            print('No object Detected')
            state1 = 1
    else:  # object arrived
        time.sleep(1)
        if state1 == 1:
            print('Object is detected')
            cv2.imwrite('./trial/cam' + str(a) + '.jpg', rgb)
            a += 1
            state1 = 0

video.release()
cv2.destroyAllWindows()
I'm looking for a way to merge 2 videos. I have 2 files (car_det.py, line_det.py) that each work perfectly on their own, but I need their results in one video. Recording the output video works for the "vehicle" frames but produces trouble with the "line" variable.
video_capture = cv2.VideoCapture('video6.mp4')
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('output.avi', fourcc, 30.0, (640, 480))

while video_capture.isOpened():
    ret, frame = video_capture.read()
    if ret:
        vehicle = processVideo(video_capture)
        line = processImage(frame)
        out.write(vehicle)
        cv2.imshow("vehicle", vehicle)
        cv2.imshow("line", line)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break

video_capture.release()
out.release()
cv2.destroyAllWindows()
If "merge" means merging two videos frame by frame, you must open both videos first, like this:
video_capture1 = cv2.VideoCapture('video1.mp4')
video_capture2 = cv2.VideoCapture('video2.mp4')
And in the loop, get a frame from each video:
ret1, frame1 = video_capture1.read()
ret2, frame2 = video_capture2.read()
Next, merge them. If there is no overlapping content between the two videos, a simple addition works; cv2.add is preferable to NumPy's + here because it saturates at 255 instead of wrapping around:
resultFrame = cv2.add(frame1, frame2)
Then write this frame to the VideoWriter:
out.write(resultFrame)
I'm trying to convert the following video to images:
https://www.signingsavvy.com/media/mp4-ld/24/24851.mp4
I have done it using OpenCV:
# Importing all necessary libraries
import cv2
import os

# Read the video from the specified path (raw string, so the backslashes are not escapes)
cam = cv2.VideoCapture(r"C:\Users\ahmad\Hi_ASL.mp4")
print(cam.get(cv2.CAP_PROP_FPS))

try:
    # creating a folder named data
    if not os.path.exists('data'):
        os.makedirs('data')
# if not created then raise error
except OSError:
    print('Error: Creating directory of data')

# frame counter
currentframe = 0

while True:
    # reading from frame
    ret, frame = cam.read()
    if ret:
        # if video is still left, continue creating images
        name = './data/frame' + str(currentframe) + '.jpg'
        # writing the extracted images
        cv2.imwrite(name, frame)
        # increasing counter so that it will
        # show how many frames are created
        currentframe += 1
    else:
        break

# Release all resources and windows once done
cam.release()
cv2.destroyAllWindows()
After that, I want to convert those images back into a video like the one above, and I wrote this code:
img = [img for img in os.listdir('data')]
frame = cv2.imread('data/' + img[0])
h, w, l = frame.shape
vid = cv2.VideoWriter('hiV.mp4', 0, 1, (w, h))
for imgg in img:
    vid.write(cv2.imread('data/' + imgg))
cv2.destroyAllWindows()
vid.release()
The problem is that the video produced by combining the images with OpenCV is not the same as the original. What is the problem? I want it to match the original.
The result of the code above is this video: https://drive.google.com/file/d/16vwT35wzc95tBleK5VCpZJQkaLxSiKVd/view?usp=sharing
Thanks.
You should change cv2.VideoWriter('hiV.mp4', 0, 1, (w, h)) to cv2.VideoWriter('hiV.mp4', 0, 30, (w, h)). The third argument sets the fps, so 1 means you write one frame per second instead of 30 (or 29.97 NTSC) like the original video.
Aim: detect motion and save only the motion periods to files named with the starting time.
I am now stuck on how to save the video to files named with the starting time.
What I tested:
I tested my program part by part. Each part seems to work well except the saving part.
Running status: no error, but there is no video in the saving folder. If I use a static saving path instead, the video is saved successfully, but it gets overwritten by the next video. My code is below:
import cv2
import numpy as np
import time

cap = cv2.VideoCapture(0)
bgst = cv2.createBackgroundSubtractorMOG2()
fourcc = cv2.VideoWriter_fourcc(*'DIVX')
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
n = "start_time"

while True:
    ret, frame = cap.read()
    dst = bgst.apply(frame)
    dst = np.array(dst, np.int8)
    if np.count_nonzero(dst) > 3000:  # use this value to adjust the "Sensitivity"
        print('something is moving %s' % (time.ctime()))
        path = r'E:\OpenCV\Motion_Detection\%s.avi' % n
        out = cv2.VideoWriter(path, fourcc, 50, size)
        out.write(frame)
        key = cv2.waitKey(3)
        if key == 32:
            break
    else:
        out.release()
        n = time.ctime()
        print("No motion Detected %s" % n)
What I meant is:
import cv2
import numpy as np
import time

cap = cv2.VideoCapture(0)
bgst = cv2.createBackgroundSubtractorMOG2()
fourcc = cv2.VideoWriter_fourcc(*'DIVX')
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
path = r'E:\OpenCV\Motion_Detection\%s.avi' % (time.ctime())
out = cv2.VideoWriter(path, fourcc, 16, size)

while True:
    ret, frame = cap.read()
    dst = bgst.apply(frame)
    dst = np.array(dst, np.int8)
    for i in range(number_of_frames_in_the_video):      # pseudocode
        if np.count_nonzero(dst) < 3000:  # use this value to adjust the "Sensitivity"
            print("No Motion Detected")
            out.release()
        else:
            print('something is moving %s' % (time.ctime()))
            # label each frame you want to output here
            out.write(frame(i))                         # pseudocode
    key = cv2.waitKey(1)
    if key == 32:
        break

cap.release()
cv2.destroyAllWindows()
As you can see, there is a for loop within which the saving is done.
I do not know the exact syntax for looping over frames, but I hope you get the gist of it: find the number of frames in the video and use that as the range of the for loop. Each frame then gets saved uniquely (see the else branch). Please adapt and follow this procedure.
Cheers!