I am running a script using OpenCV to record and save video from a USB camera. Using the same code on my laptop and a Raspberry Pi 3B, the laptop saves the video and I can watch it back, but the Raspberry Pi's video produces an error (0xc10100be) saying that "the file isn't playable because the file type is unsupported, the file extension is incorrect, or the file is corrupt". Can a Raspberry Pi not save an MP4 file, is there something I need to install on the Raspberry Pi to save videos, or is it likely something else?
Thanks.
import cv2

video_cam = cv2.VideoCapture(0)  # opens the camera for video capture; 0 for the built-in camera, 1 for a USB cam
width = int(video_cam.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(video_cam.get(cv2.CAP_PROP_FRAME_HEIGHT))
size = (width, height)
print(size)
print(type(video_cam))
print(video_cam.isOpened())  # returns True if video capturing is initialised, i.e. the VideoCapture constructor succeeded
out = cv2.VideoWriter('video.mp4', cv2.VideoWriter_fourcc(*'mp4v'), 20, size, True)
while video_cam.isOpened():
    check, frame = video_cam.read()  # grabs, decodes and returns a video frame
    if check:
        out.write(frame)  # writes the frame to the mp4/avi file
        cv2.imshow("image", frame)  # displays the frame; takes 2 inputs: the window name and the actual image
        # waitKey(0) displays a still image until a key is pressed;
        # waitKey(1) displays each frame for 1 ms before moving on.
        # Keep streaming until 'q' is pressed; & 0xFF masks the ASCII part,
        # since waitKey returns a 32-bit integer but we only want the last 8 bits.
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break
video_cam.release()  # closes the video camera
out.release()
cv2.destroyAllWindows()
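For anyone hitting the same symptom, it is also worth confirming that the writer actually opened and that each frame matches the size the writer was created with; if either check fails, OpenCV silently drops every write() and leaves an unplayable file. A minimal sketch, assuming the same setup as the code above (on a Pi, falling back to an AVI container with the 'XVID' FourCC is also a common workaround when the mp4v encoder is missing):

import cv2

video_cam = cv2.VideoCapture(0)
size = (int(video_cam.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(video_cam.get(cv2.CAP_PROP_FRAME_HEIGHT)))
out = cv2.VideoWriter('video.mp4', cv2.VideoWriter_fourcc(*'mp4v'), 20, size, True)
print(out.isOpened())  # False means the codec is unavailable on this machine
check, frame = video_cam.read()
if check:
    # frame.shape is (height, width, channels); width and height must match `size`
    print((frame.shape[1], frame.shape[0]) == size)
video_cam.release()
out.release()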
Edit: The problem was fixed after using VLC media player. The standard Windows Media Player could not read the videos.
I am attempting to take a video, run it through an object detector, add the bounding boxes, and output the video with the bounding boxes. However, I'm having a strange problem where the output video cannot be viewed: I get a "Server execution failed" error when attempting to play it on my local Windows 10 machine. I am developing over SSH on Ubuntu 20.04 with the VS Code remote development extension.
Here is some OpenCV code that does not work with my setup. The video is written to disk, and OpenCV is able to read frames back from output5.avi; however, output5.avi cannot be opened in a player, as described above.
import cv2

cap = cv2.VideoCapture("video.mov")
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
fourcc = cv2.VideoWriter_fourcc(*'MJPG')
out = cv2.VideoWriter('output5.avi', fourcc, 30, (width, height), isColor=True)
while cap.isOpened():
    # get the validity boolean and the current frame
    ret, frame = cap.read()
    # if the validity flag is false, the video has ended
    if not ret:
        break
    else:
        frame = cv2.resize(frame, (width, height))
        out.write(frame)
cap.release()
out.release()
I have also attempted to save the video through torchvision.io.write_video, but the exact same problem occurs. TensorBoard similarly doesn't work.
It must be something wrong with how the remote machine is set up, but I have no idea what could be wrong.
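One thing worth ruling out first: headless server builds of OpenCV sometimes lack the codecs needed for encoding, in which case write() silently produces a broken file. A quick sanity check, as a sketch:

import cv2

# Look for the FFMPEG entry in the "Video I/O" section of the build info;
# it should say YES if this build can encode video through FFmpeg.
for line in cv2.getBuildInformation().splitlines():
    if "FFMPEG" in line:
        print(line)

# Probe writer: if isOpened() is False, the MJPG encoder is unavailable
# and every subsequent write() would be silently dropped.
probe = cv2.VideoWriter('probe.avi', cv2.VideoWriter_fourcc(*'MJPG'),
                        30, (640, 480), isColor=True)
print(probe.isOpened())
probe.release()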
I am simply trying to read a video using OpenCV's VideoCapture and then write that same video back out using VideoWriter. But the resulting video is not playable, and each time I run the function the output file has a different size, even though the input video is the same.
import cv2

cap = cv2.VideoCapture("video_input.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
out = cv2.VideoWriter("video_output.mp4", cv2.VideoWriter_fourcc(*'mp4v'),
                      fps, (int(width), int(height)))
count_frame = 0
if not cap.isOpened():
    print("Error opening video stream or file")
while cap.isOpened():
    ret, frame = cap.read()
    if ret:
        out.write(frame)
        count_frame += 1
    else:
        break
cap.release()
out.release()
cv2.destroyAllWindows()
Note: the input video has 8000 frames; if I add a condition to write only the first 2000 frames, the video writer outputs those 2000 frames without any problem.
Does anyone know what my problem is? This exact code used to work fine a few weeks ago.
Edit: I would also like to add that I'm running this code in JupyterLab on a virtual machine. When I run it on another virtual machine, it works perfectly fine. The OpenCV versions are the same in both VMs.
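A quick way to narrow this down is to re-open the file you just wrote and compare the frame count OpenCV reports with the number of frames you wrote; if they diverge on the failing VM but match on the working one, the writer is being cut off mid-file (for example, by the VM running out of disk space). A minimal sketch, assuming it runs right after the loop above:

import cv2

check = cv2.VideoCapture("video_output.mp4")
readable = int(check.get(cv2.CAP_PROP_FRAME_COUNT))
check.release()
# count_frame comes from the writing loop above
print("frames written:", count_frame, "frames readable:", readable)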
First, my setup:
Windows 10, ASUS notebook, Logitech C920 HD Pro webcam, opencv-python 3.4.4.19
When I manually take a photo with the webcam using the Windows 10 Camera app, it is sharp. But if I take the photo from Python using OpenCV, it comes out blurred (unsharp). In the code below, a photo is taken when I press the space bar.
I already tried playing with the contrast, brightness, and FPS. Unfortunately, that didn't help.
import cv2
import os

cam = cv2.VideoCapture(1)
cv2.namedWindow("test")
cam.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cam.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
img_counter = 0
myfile = "XXX"
if os.path.isfile(myfile):
    os.remove(myfile)
else:
    print("Error: %s file not found" % myfile)
while True:
    ret, frame = cam.read()
    if not ret:
        break
    cv2.imshow("test", frame)
    k = cv2.waitKey(1)
    if k % 256 == 27:  # ESC pressed
        print("Escape hit, closing...")
        break
    elif k % 256 == 32:  # SPACE pressed
        img_name = "Bild{}.png".format(img_counter)
        cv2.imwrite(img_name, frame)
        print("{} written!".format(img_name))
        img_counter += 1
cam.release()
cv2.destroyAllWindows()
Are there any settings in OpenCV to get a sharper image?
In the final stage I will have 3 cameras that automatically take a photo one after the other.
[Image: unsharp photo taken with OpenCV]
[Image: sharp photo taken with the Windows 10 Camera app]
[Image: the cv2.imshow preview window]
Typical camera pipelines apply a sharpening filter to remove the blurring caused by the scene being slightly out of focus or by poor optics.
My guess is that OpenCV is capturing the image closer to "raw", without adding the sharpening filter. You should be able to apply a sharpening filter yourself. You can tell when a sharpening filter has been applied because there will be a "ringing" artifact around high-contrast edges.
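As a starting point, here is a minimal unsharp-masking sketch: blur a copy of the frame and subtract it from the original to boost edges. The file name follows the question's Bild{} naming, and the weights are illustrative rather than tuned for this camera:

import cv2

frame = cv2.imread("Bild0.png")
# Gaussian blur; the sigma value controls how wide the sharpening halo is
blurred = cv2.GaussianBlur(frame, (0, 0), 3)
# original * 1.5 - blurred * 0.5 preserves brightness while boosting edges
sharpened = cv2.addWeighted(frame, 1.5, blurred, -0.5, 0)
cv2.imwrite("Bild0_sharpened.png", sharpened)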
I have recently installed OpenCV for Python on my Mac by following this tutorial:
http://www.pyimagesearch.com/2016/12/19/install-opencv-3-on-macos-with-homebrew-the-easy-way/
I wrote some code to read a video file, which retrieves the FPS, the timestamp, and the total number of frames at each frame that is read:
import cv2

cap = cv2.VideoCapture(particle_name + video_file_type)
while True:
    time = cap.get(cv2.CAP_PROP_POS_MSEC)
    fps = cap.get(cv2.CAP_PROP_FPS)
    total_frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
    print(time, fps, total_frames)
    ret, frame = cap.read()
    if not ret:  # stop when the video ends
        break
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
I tried this code out on Video A, .mov format, Motion JPEG Video (dmb1) codec.
And Video B, .avi format, Motion JPEG Video (MJPG) codec.
For both Video A and Video B, the fps and total_frames printed out were constants. However, for Video A, time increased gradually (as it should), but for Video B, time remained constant at 0.
I thought it could be the format of the videos that caused this difference, so I changed the container of Video B to .mov while retaining the same codec; however, the problem persisted.
May I know how I can retrieve the accurate timestamp from Video B?
I'm not sure why cap.get(cv2.CAP_PROP_POS_MSEC) isn't returning the correct timestamp. This could be a codec issue; you could try other codecs like XVID, MP4V, etc. Note that the extension merely denotes the container of the file, and changing it may not result in any meaningful change to the video itself.
If you are still unable to get it to work, derive the timestamp from the frame count and the FPS of the video:
fps = cap.get(cv2.CAP_PROP_FPS)
frame_count = 0
while True:
    ret, frame = cap.read()
    if not ret:
        break
    frame_count += 1
    time = float(frame_count) / fps  # timestamp in seconds of the frame just read
EDIT:
You can change the codec using ffmpeg. Here's a sample tutorial for Macs https://www.macxdvd.com/mac-dvd-video-converter-how-to/ffmpeg-avi-to-mp4-free.htm.
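For example, re-encoding to H.264 in an MP4 container could be scripted like this (a sketch: it assumes ffmpeg is installed and on the PATH, and the file names are placeholders):

import subprocess

# Re-encode the MJPEG .avi into an H.264 .mp4
subprocess.run([
    "ffmpeg", "-i", "video_b.avi",
    "-c:v", "libx264",       # encode the video stream with H.264
    "-pix_fmt", "yuv420p",   # the pixel format most players accept
    "video_b.mp4",
], check=True)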
I am using OpenCV version 3.1.0 with zbar (latest version as of this post) and PIL (latest version as of this post).
import zbar
import Image
import cv2

# create a reader
scanner = zbar.ImageScanner()
# configure the reader
scanner.parse_config('enable')
# create the video capture feed
cap = cv2.VideoCapture(0)
while True:
    ret, cv = cap.read()
    cv = cv2.cvtColor(cv, cv2.COLOR_BGR2RGB)
    pil = Image.fromarray(cv)
    width, height = pil.size
    raw = pil.tostring()
    # wrap the image data
    image = zbar.Image(width, height, 'Y800', raw)
    # scan the image for barcodes
    scanner.scan(image)
    # extract the results
    for symbol in image:
        # do something useful with the results
        print 'decoded', symbol.type, 'symbol', '"%s"' % symbol.data
# clean up
print "\n ...Done"
I don't understand why this is not working. It is supposed to constantly check for QR codes in the current frame of the video stream, and if it sees one, decode it and print its contents. I hold printed QR codes up in front of my webcam and it does not work, even though the camera is clearly on and a video stream is being captured, so something must be going wrong somewhere in the while loop.
I tried it before with QR codes displayed on my computer screen (not printed out) and it worked fine.
I also tried showing the current frame with cv2.imshow("out", cv), but the program just displayed a big grey square where the video stream should be and then froze, so I had to kill NetBeans.
zbar works with grayscale images. Change cv = cv2.cvtColor(cv, cv2.COLOR_BGR2RGB) to cv = cv2.cvtColor(cv, cv2.COLOR_BGR2GRAY).
I'm guessing you based your program on this example code; it does the color-to-grayscale conversion with convert('L') on line 15.
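For completeness, here is a minimal sketch of the capture loop with that grayscale fix applied. It also adds a cv2.waitKey call, which is likely why the imshow window froze: without waitKey the window's event loop never runs. This assumes the same Python 2 zbar/PIL setup as the question:

import zbar
import Image
import cv2

scanner = zbar.ImageScanner()
scanner.parse_config('enable')
cap = cv2.VideoCapture(0)
while True:
    ret, cv = cap.read()
    if not ret:
        break
    # zbar expects single-channel data, so convert to grayscale
    gray = cv2.cvtColor(cv, cv2.COLOR_BGR2GRAY)
    pil = Image.fromarray(gray)
    width, height = pil.size
    image = zbar.Image(width, height, 'Y800', pil.tostring())
    scanner.scan(image)
    for symbol in image:
        print 'decoded', symbol.type, 'symbol', '"%s"' % symbol.data
    cv2.imshow("out", gray)
    # waitKey pumps the GUI event loop; without it imshow never updates
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()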