Edit: The problem was fixed after switching to VLC media player. The standard Windows Media Player could not read the videos.
I am attempting to take a video, run it through an object detector, draw the bounding boxes, and output the video with the bounding boxes. However, I'm having a strange problem where the video I output cannot be viewed: I get a "Server execution failed" error when attempting to play the video on my local Windows 10 machine. I am developing over SSH on Ubuntu 20.04 with the VS Code Remote Development extension.
Here is some OpenCV code that does not work with my setup. The video is written to disk, and OpenCV is able to read frames back from output5.avi; however, the output5.avi file cannot be opened as I described.
import cv2

cap = cv2.VideoCapture("video.mov")
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
fourcc = cv2.VideoWriter_fourcc(*'MJPG')
out = cv2.VideoWriter('output5.avi', fourcc, 30, (width, height), isColor=True)

while cap.isOpened():
    # get validity boolean and current frame
    ret, frame = cap.read()
    # if the validity flag is false, stop reading
    if not ret:
        break
    frame = cv2.resize(frame, (width, height))
    out.write(frame)

cap.release()
out.release()
I have also attempted to save the video through torchvision.io.write_video, but the exact same problem occurs. TensorBoard similarly doesn't work.
It must be something wrong with how the remote machine is set up, but I have no idea what that could be.
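For anyone hitting the same player-side codec issue: below is a minimal sketch of re-encoding the MJPG AVI to H.264 so that players without MJPG support can open it. It assumes ffmpeg is installed on the machine; the filenames are the ones from the snippet above.

import subprocess

# re-encode the MJPG AVI to H.264 in an MP4 container;
# the yuv420p pixel format is needed for broad player compatibility
subprocess.run([
    "ffmpeg", "-y",
    "-i", "output5.avi",
    "-c:v", "libx264",
    "-pix_fmt", "yuv420p",
    "output5_h264.mp4",
], check=True)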
Related
I am running a script using OpenCV to record and save a video from a USB camera. Using the same code on my laptop and a Raspberry Pi 3B, the laptop saves the video and I can watch it back, but the Raspberry Pi video ends up with an error (0xc10100be) saying that "the file isn't playable because the file type is unsupported, the file extension is incorrect or the file is corrupt". Can the Raspberry Pi not save an MP4 file, is there something I need to install on the Raspberry Pi to save videos, or is it likely something else?
Thanks.
import cv2

video_cam = cv2.VideoCapture(0)  # opens camera for video capture, 0 for inbuilt camera, 1 for USB cam
width = int(video_cam.get(3))
height = int(video_cam.get(4))
size = (width, height)
print(size)
print(type(video_cam))
print(video_cam.isOpened())  # returns True if video capturing initialised, i.e. the VideoCapture constructor succeeded

out = cv2.VideoWriter('video.mp4', cv2.VideoWriter_fourcc(*'mp4v'), 20, size, True)

while video_cam.isOpened():
    check, frame = video_cam.read()  # grabs, decodes and returns a video frame
    if check == True:
        out.write(frame)  # writes the frame to the mp4/avi file for saving
        cv2.imshow("image", frame)  # displays the image; takes 2 inputs, the window name and the actual image
        # waitKey(0) displays a still image that remains until a key is pressed
        # waitKey(1) displays the frame for 1 ms before moving on
        # keep streaming until 'q' is pressed; & 0xFF keeps only the last 8 bits of waitKey's 32-bit return value
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break

video_cam.release()  # closes the video camera
out.release()
cv2.destroyAllWindows()
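One thing worth checking here (a sketch, not a confirmed fix): cv2.VideoWriter can silently produce an empty or unplayable file when the requested codec isn't available in the local OpenCV build, so verifying isOpened() right after constructing the writer tells you whether the Pi actually has an mp4v encoder. The MJPG/.avi fallback below is just an assumption of a codec that is commonly available.

out = cv2.VideoWriter('video.mp4', cv2.VideoWriter_fourcc(*'mp4v'), 20, size, True)
if not out.isOpened():
    # mp4v is not available in this OpenCV build; fall back to MJPG in an AVI container
    print("mp4v writer failed to open, falling back to MJPG/.avi")
    out = cv2.VideoWriter('video.avi', cv2.VideoWriter_fourcc(*'MJPG'), 20, size, True)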
I am simply trying to read a video using OpenCV's VideoCapture and then write that same video back out using VideoWriter. But the resulting video is not playable, and each time I run the function, although the input video is the same, the output video has a different file size.
cap = cv2.VideoCapture("video_input.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
out = cv2.VideoWriter("video_output.mp4",cv2.VideoWriter_fourcc(*'mp4v'), fps, (int(width), int(height)))
count_frame = 0
if (cap.isOpened()== False):
print("Error opening video stream or file")
while(cap.isOpened()):
ret, frame = cap.read()
if ret == True:
out.write(frame)
count_frame = count_frame + 1
else:
break
cap.release()
out.release()
cv2.destroyAllWindows()
Note: The input video has 8000 frames. If I use a condition to only write the first 2000 frames, the video writer outputs those 2000 frames in a video without any problem.
Does anyone know what my problem is? This exact code used to work fine some weeks ago.
Edit: I would also like to add that I'm running this code on a virtual machine in JupyterLab. But when I try to run it on another virtual machine, it runs perfectly fine. The OpenCV versions are the same in both VMs.
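One way to narrow this down (a sketch, assuming the output file is at least partially written): re-open video_output.mp4 and compare its reported frame count with count_frame, which shows whether frames were dropped on write or the container was simply left unfinalized.

check = cv2.VideoCapture("video_output.mp4")
written = int(check.get(cv2.CAP_PROP_FRAME_COUNT))  # frame count as reported by the container
check.release()
print("frames written by the loop:", count_frame, "- frames readable from the output:", written)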
I'm trying to read images from an IDS GV-5240CP camera plugged into my laptop via Ethernet, using Python and OpenCV.
This is what I am supposed to get:
A 1280x1024 image (here resized for upload)
But using this code:
import cv2
cap = cv2.VideoCapture(1)
_, frame = cap.read()
print(frame.shape)
cv2.imshow("Out", frame)
cv2.waitKey(2000)
cap.release()
cv2.destroyAllWindows()
cv2.imwrite('Test2.png', frame)
I get:
A 640x480 cropped image
How can I set my video capture to the native resolution?
VideoCapture uses 640x480 by default.
If you want a different resolution, specify it in the constructor or use the set() method with CAP_PROP_FRAME_WIDTH and so on. Details in the docs.
cap = cv2.VideoCapture(1, apiPreference=cv2.CAP_ANY, params=[
cv2.CAP_PROP_FRAME_WIDTH, 1280,
cv2.CAP_PROP_FRAME_HEIGHT, 1024])
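Equivalently, with the set() method mentioned above (a minimal sketch; whether the camera honours the request depends on the driver, so it is worth reading the values back):

cap = cv2.VideoCapture(1)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1024)
# the driver may ignore unsupported resolutions, so confirm what was actually applied
print(cap.get(cv2.CAP_PROP_FRAME_WIDTH), cap.get(cv2.CAP_PROP_FRAME_HEIGHT))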
out = cv2.VideoWriter('E:/output.avi', fourcc, 20.0, (640, 480))
Try using this and change the resolution according to your need, though it will downscale the frames. Or use
cv2.resize(frame, (w, h), fx=0, fy=0, interpolation=cv2.INTER_CUBIC)
Since you don't want to resize, use the set() method in OpenCV to request a specific capture resolution. docs.opencv.org/4.x/d8/dfe/… - this link has the full info about set().
I am trying to stream and save video from my webcam with the Python script shown below but, for some reason, the 'myvideo.mp4' file has a very small size and cannot be opened with QuickTime (or other players) - it seems to be empty. However, the video stream itself works perfectly.
As suggested in other topics, I have tried different file formats and codecs, and I pass the exact fps, width and height that my webcam returns. Perhaps anyone knows what the issue could be here? Thanks in advance!
import cv2

cap = cv2.VideoCapture(0)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
fps = cap.get(cv2.CAP_PROP_FPS)
writer = cv2.VideoWriter('myvideo.mp4', cv2.VideoWriter_fourcc(*'mp4v'), fps, (width, height))

while True:
    ret, frame = cap.read()
    # OPERATIONS (DRAWING)
    writer.write(frame)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
writer.release()
cv2.destroyAllWindows()
I have also tried running the script as superuser but it did not help. I am using Mac.
QuickTime error:
The document “myvideo.mp4” could not be opened.
The file isn’t compatible with QuickTime Player.
Try changing the fourcc (the 4-character code of the codec used to compress the frames). Replace
writer = cv2.VideoWriter('myvideo.mp4',cv2.VideoWriter_fourcc(*'mp4v'),fps,(width,height))
with
writer = cv2.VideoWriter('myvideo.mp4',cv2.VideoWriter_fourcc(*'XVID'),fps,(width,height))
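If neither fourcc produces a playable file, a quick sanity check (a sketch that assumes nothing about your OpenCV build) is to loop over a few codec/container pairs and keep the first writer that actually opens:

candidates = [('mp4v', 'myvideo.mp4'), ('XVID', 'myvideo.avi'), ('MJPG', 'myvideo.avi')]
writer = None
for fourcc, filename in candidates:
    writer = cv2.VideoWriter(filename, cv2.VideoWriter_fourcc(*fourcc), fps, (width, height))
    if writer.isOpened():
        print("using", fourcc, "->", filename)
        break
    writer.release()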
Simply fix the typo.
Change
height = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
to
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
You were reading CAP_PROP_FRAME_WIDTH twice.
If the problem persists...
If this still does not help, try swapping width and height. It sounds odd, but it worked for me. I suspect the get() method somehow takes orientation into account while reading a frame ignores the orientation of the video (or the other way around; either way, they are inconsistent). I had exactly the same problem, and swapping width with height solved it.
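A related trick that sidesteps width/height mix-ups entirely (a sketch, assuming the first read succeeds): grab one frame and size the writer from frame.shape, which is always (height, width, channels) for a colour frame.

ret, frame = cap.read()
if ret:
    h, w = frame.shape[:2]  # shape is (height, width, channels)
    writer = cv2.VideoWriter('myvideo.mp4', cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
    writer.write(frame)  # don't forget to write the frame used for probing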
Extra
A somewhat old list of codecs tested on macOS.
I am using OpenCV version 3.1.0 with zbar (latest version as of this post) and PIL (latest version as of this post).
import zbar
import Image
import cv2

# create a reader
scanner = zbar.ImageScanner()
# configure the reader
scanner.parse_config('enable')

# create video capture feed
cap = cv2.VideoCapture(0)

while(True):
    ret, cv = cap.read()
    cv = cv2.cvtColor(cv, cv2.COLOR_BGR2RGB)
    pil = Image.fromarray(cv)
    width, height = pil.size
    raw = pil.tostring()
    # wrap image data
    image = zbar.Image(width, height, 'Y800', raw)
    # scan the image for barcodes
    scanner.scan(image)
    # extract results
    for symbol in image:
        # do something useful with results
        print 'decoded', symbol.type, 'symbol', '"%s"' % symbol.data

# clean up
print "\n ...Done"
I don't understand why this is not working. It is supposed to constantly check for QR codes in the current frame of the video stream, and if it sees one, decode it and print what it says inside. I hold printed-out QR codes up in front of my webcam and it does not work; it shows that my camera is on and that there is a video stream occurring, so somewhere in the while loop something is going wrong.
I tried it before with QR codes displayed on my computer, not printed out, and it worked fine.
I also tried having it show me the current frame with cv2.imshow("out", cv), but when I did, the program showed just a big grey square where it should show the video stream and then it froze, so I had to kill NetBeans.
zbar works with grayscale images. Change cv = cv2.cvtColor(cv, cv2.COLOR_BGR2RGB) to cv = cv2.cvtColor(cv, cv2.COLOR_BGR2GRAY).
I'm guessing you're basing your program on this example code. They do the color-to-grayscale conversion with convert('L') on line 15.
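For reference, a minimal sketch of the corrected conversion inside the loop (same variable names as in the question):

ret, cv = cap.read()
# zbar expects a single-channel (Y800 / grayscale) image
cv = cv2.cvtColor(cv, cv2.COLOR_BGR2GRAY)
pil = Image.fromarray(cv)
width, height = pil.size
raw = pil.tostring()
image = zbar.Image(width, height, 'Y800', raw)
scanner.scan(image)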