Information about adding an image to a video - python

I'm trying to add an image to a pre-existing video in Python. I haven't written any code yet, but I've done some research. Most people seem to use OpenCV or ffmpeg to manipulate images and video.
So my questions are:
Can we add images to a video with OpenCV or ffmpeg?
Can we do that without OpenCV or ffmpeg?
If yes, do you have any information on it?
If no, should I use OpenCV or ffmpeg? Maybe another library?
Which version of OpenCV should I use, and why?
I've read that there is a problem with OpenCV on Ubuntu; has it been fixed?
Any other information or advice is welcome.

In OpenCV you could extract all the frames from the video and then use the VideoWriter class to create a new video from those frames plus your extra images. One issue I’ve had in the past with VideoWriter is choosing the correct video codec; you will have to do some research on this and see what’s compatible with your particular setup.
https://docs.opencv.org/3.2.0/dd/d9e/classcv_1_1VideoWriter.html
You may be able to do something similar with PIL (Python Imaging Library), although I’m not entirely sure about that. As for ffmpeg, I’m also not sure.
The latest release of OpenCV would be sufficient for this. I currently run OpenCV 3.4 on Ubuntu 16.04 LTS in a virtual machine and have no problems using the VideoWriter class. Have a look at this blog post for advice on installing OpenCV on Ubuntu.
https://www.pyimagesearch.com/2016/10/24/ubuntu-16-04-how-to-install-opencv/
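As a rough sketch of that frame-by-frame approach (the file names, overlay position, and XVID codec here are assumptions, and the opencv-python package must be installed for cv2 to import):

```python
def clipped_size(x, y, ow, oh, fw, fh):
    """How much of an ow x oh overlay fits inside an fw x fh frame at (x, y)."""
    return max(0, min(ow, fw - x)), max(0, min(oh, fh - y))

def overlay_image_on_video(video_path, image_path, out_path, pos=(10, 10)):
    """Read video_path frame by frame, paste image_path at pos, write out_path."""
    import cv2  # third-party: pip install opencv-python
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if fps is unreadable
    fw = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    fh = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"XVID"), fps, (fw, fh))
    overlay = cv2.imread(image_path)
    x, y = pos
    w, h = clipped_size(x, y, overlay.shape[1], overlay.shape[0], fw, fh)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame[y:y + h, x:x + w] = overlay[:h, :w]  # paste the image onto the frame
        writer.write(frame)
    cap.release()
    writer.release()
```

You could extend this to paste the image only on certain frame ranges, or to blend it in with cv2.addWeighted instead of a hard paste.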

Related

Can I use the wand API in Python without installing ImageMagick?

I used the wand API (an ImageMagick binding) in Python for an ML project. I want to package my code into a single executable file so I can share it with my friends, but I ran into lots of difficulties bundling ImageMagick into one executable; that's why I want to use wand without ImageMagick.
An answer would help me a lot.
Thank you.

Unable to open a .avi file that I made using OpenCV

So I decided to try making a screen recorder, following a tutorial.
The code works exactly as it should, but when I try to play the resulting .avi file I get the following error:
I searched around the web but couldn't find an answer to my question. Do you have any ideas?
OpenCV uses FFMPEG under the hood and ships release builds without the proprietary codecs you often need. I’ve been compiling OpenCV from source for 10 years and have gotten good, repeatable codec support maybe once. Here are my recommendations...
Did the video play in VLC? It is often more forgiving. If the file plays there, use VLC while you troubleshoot the rest of your code.
Play with many different FOURCC codes. When using only free codecs, you may have to iterate through a lot of them before finding a valid match. I’ve also found that settings such as too low a frame rate can keep players from working properly.
Build a copy of FFMPEG from source with the non-free codecs enabled, then build OpenCV from source against it. This is a rabbit hole of doom, though, because each codec is itself a separate package and install. You will spend a lot of time learning how to build software.
Use OpenCV for image processing and write video using a video writer API directly. Libav/FFMPEG, libx264, and others all have direct callable APIs so you can use them.
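For the FOURCC iteration idea, a small probe like this can save time. The candidate list is an assumption on my part (a few commonly available free codecs), not an exhaustive set:

```python
def fourcc_candidates():
    # Common free codecs worth trying first -- an assumption, not an exhaustive list
    return ["MJPG", "XVID", "mp4v", "DIVX", "FMP4"]

def find_working_fourcc(probe_path="probe.avi", size=(640, 480), fps=30.0):
    """Return the first FOURCC code VideoWriter accepts on this machine, else None."""
    import cv2  # third-party: pip install opencv-python
    for code in fourcc_candidates():
        writer = cv2.VideoWriter(probe_path, cv2.VideoWriter_fourcc(*code), fps, size)
        opened = writer.isOpened()  # False when the codec is unavailable
        writer.release()
        if opened:
            return code
    return None
```

Note that isOpened() only tells you the writer initialized; some broken combinations still produce an unplayable file, so check the output in VLC as well.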

What can I use instead of opencv_createsamples (OpenCV 4.0 and later) on Windows?

I want to train my own Haar cascade to recognise my phone in images.
I have a large number of negative/background images and an image of my phone. I wanted to use opencv_createsamples() to generate a large number of positive images, using the negative/background images and the image of my phone.
However, according to How to build opencv correctly in windows to get "opencv_createsamples.exe", opencv_createsamples() has been disabled.
Is there another module I could use, or should I install an older version of OpenCV? I'm on Python (3.7.4) and Windows (10).
I've checked out https://docs.opencv.org/2.4/doc/tutorials/introduction/windows_install/windows_install.html but they say the tutorials may be obsolete. Could someone recommend an up-to-date source of information?
I really want to learn how to do this on Windows, not using a Linux server.

How can I capture frames from a PointGrey USB camera in OpenCV/Python/OSX?

PointGrey is a leading manufacturer of machine vision cameras, but unfortunately their support for Mac OS is very limited. A web search led me to guess that I needed to install libusb and libdc1394 for the camera to be recognized, which I did using brew. This did not work.
I don't understand exactly how libusb and libdc1394 libraries work under the hood, other than that they handle the hand-shaking with the camera and data transfer via the USB bus. OpenCV usually makes it incredibly easy to open a camera and start processing frames, but unfortunately when it doesn't work it's not clear how to debug. I found python wrappers for libusb and libdc1394 and included them in my code, which resulted in no errors, but no luck grabbing frames either.
If you don't have an exact answer, please suggest strategies for solving this problem, i.e. how should I systematically approach it and diagnose all the possible failure modes? Is there a way to see more of what's going on when OpenCV tries to detect and read from a camera?
My Python/OpenCV code works well with a simple capture = cv2.VideoCapture(0),
but doesn't work with capture = cv2.VideoCapture(1), giving the following output:
Warning, camera failed to properly initialize!
Cleaned up camera.
Typically cv2.VideoCapture(0) will give the built-in camera on my macbook, and from what I understand cv2.VideoCapture(1) will give the next available camera (i.e. plugged in through USB).
I know the camera works well on a windows machine (in Windows Movie Maker). Do I need to do something further under the hood to get python and opencv to recognize the camera under OS X?
Many years later, we can answer this question. FLIR, which acquired PointGrey, has released Spinnaker, a cross-platform SDK with a UI and a set of pre-compiled examples for accessing the camera: https://www.flir.com/support-center/iis/machine-vision/downloads/spinnaker-sdk-and-firmware-download/

Prosilica GigE camera with OpenCV/python

I'm using a GigE (ethernet) prosilica GC camera on Mac OS X, and have been able to read it out through the proprietary sample viewing software from Allied Vision.
I would like to be able to read out the camera using OpenCV. I have OpenCV installed correctly, but I am not sure how to read from the camera. The last person to ask this question (~2 years ago) was told to use the native camera API and then analyze the images with OpenCV:
OpenCV with GigE Vision Cameras
However, I would like to know if it is even possible to do this with Python/OpenCV. There seems to be very little information online about how to do this, so I'm curious whether anyone has managed to get it to work and could post some example code. I have all my camera information (IP address, model, etc.) if that helps, but I don't know how to even tell OpenCV where to look.
Thanks in advance,
Mike
I believe OpenCV interfaces to the Prosilica cameras via the PvAPI. You'll need to make sure OpenCV is compiled with this setting using the WITH_PVAPI CMake option (you will need to build from source). cap_pvapi.cpp is the wrapper for the PvAPI driver that will allow you to use the VideoCapture class.
To build OpenCV from source, take a look at the tutorials here.
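With such a build, a sketch of requesting the PvAPI backend explicitly, rather than letting OpenCV guess, looks like this (cv2.CAP_PVAPI is the backend constant; the camera index and error message are assumptions):

```python
def open_pvapi_camera(index=0):
    """Open a capture through OpenCV's PvAPI backend (build must have WITH_PVAPI=ON)."""
    import cv2  # third-party: pip install is NOT enough here -- needs a source build
    # The second argument forces a specific backend instead of auto-detection
    cap = cv2.VideoCapture(index, cv2.CAP_PVAPI)
    if not cap.isOpened():
        raise RuntimeError("PvAPI backend could not open camera %d" % index)
    return cap
```

Forcing the backend this way also gives you a clearer failure mode: if it raises, you know the PvAPI path specifically is the problem, not some other auto-selected backend.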
Try Pymba
You can use Allied Vision's new SDK, Vimba, and a Python interface called Pymba. Instructions are on the Pymba GitHub page, but basically you:
Install Vimba
Install Pymba via pip install pymba
There's example code in the repo. I've been using it and it's pretty straightforward.
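For reference, a single-frame grab with Pymba looks roughly like this. The exact method names have changed between Pymba releases, so treat this as a sketch and check the repo's examples for your installed version:

```python
def grab_single_frame():
    """Grab one frame from the first attached Vimba camera as a numpy array."""
    # Pymba's API has shifted across releases; this follows the pattern in the
    # project's README at the time of writing -- verify against your version.
    from pymba import Vimba  # third-party: pip install pymba (requires Vimba SDK)
    with Vimba() as vimba:
        camera = vimba.camera(0)  # first attached camera
        camera.open()
        camera.arm('SingleFrame')
        frame = camera.acquire_frame()
        image = frame.buffer_data_numpy()  # frame contents as a numpy array
        camera.disarm()
        camera.close()
        return image
```

From there you can hand the numpy array straight to OpenCV functions, which sidesteps the VideoCapture integration question entirely.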
