Prosilica GigE camera with OpenCV/Python

I'm using a GigE (Ethernet) Prosilica GC camera on Mac OS X, and have been able to read it out through Allied Vision's proprietary sample viewing software.
I would like to be able to read out the camera using OpenCV. I have OpenCV installed correctly, but I am not sure how to read out the camera. The last person to ask this question (~2 years ago) was told to use the native camera API to acquire the images, and then analyze them with OpenCV:
OpenCV with GigE Vision Cameras
However, I would like to know whether this is even possible with Python/OpenCV. There seems to be very little information online about how to do this, so I'm curious whether anyone has managed to get it to work and could post some example code. I have all my camera's IP address information, model, etc., if that helps, but I don't know how to even tell OpenCV where to look.
Thanks in advance,
Mike

I believe OpenCV interfaces to the Prosilica cameras via the PvAPI. You'll need to make sure OpenCV is compiled with the WITH_PVAPI CMake option (you will need to build from source). cap_pvapi.cpp is the wrapper for the PvAPI driver that allows you to use the VideoCapture class.
To build OpenCV from source, take a look at the tutorials here.

Try Pymba
You can use Allied Vision's newer SDK, Vimba, and a Python interface called Pymba. Instructions are on the Pymba GitHub page, but basically you:
Install Vimba
Install Pymba via pip install pymba
There's example code in the repo. I've been using it and it's pretty straightforward.
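The steps above can be sketched roughly as follows. This assumes a pymba 0.3-style API (the interface has changed between releases, so check the repo's examples for your version), and it requires the Vimba SDK plus a connected camera to actually run; the import lives inside the function so the snippet still loads where Pymba isn't installed:

```python
def grab_one_frame():
    """Grab a single frame from the first Vimba camera via Pymba.

    Requires the Vimba SDK installed, `pip install pymba`, and a
    connected camera. Returns the frame as a NumPy array.
    """
    from pymba import Vimba  # deferred so the module imports without Pymba

    with Vimba() as vimba:
        camera = vimba.camera(0)   # first detected camera
        camera.open()
        camera.arm('SingleFrame')
        frame = camera.acquire_frame()
        image = frame.buffer_data_numpy()  # copy out the pixel data
        camera.disarm()
        camera.close()
        return image
```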

Related

Unable to open a .avi file that I made using OpenCV

So I decided to try making a screen recorder, and I was following a tutorial.
The code works exactly as it should, but when I try playing the resulting .avi file I get the following error:
I searched around the web but couldn't find an answer to my question. Do you have any ideas?
OpenCV uses FFMPEG under the hood and ships release builds without the proprietary codecs you often need. I've been compiling OpenCV from source for 10 years and have gotten good, repeatable codec support maybe once. Here are my recommendations:
Did the video play in VLC? It is often more forgiving. If it plays there, use VLC first while you troubleshoot the rest of your code.
Play with many different FOURCC codes. When using only free codecs, you may have to iterate through a lot of them before finding a valid match. I've also found that settings such as too low a frame rate can cause some players to misbehave.
Build FFMPEG from source with the non-free codecs enabled, then build OpenCV from source against it. This is a rabbit hole of doom, though, because each codec is a separate package and install. You will spend a lot of time learning how to build software.
Use OpenCV for image processing and write video using a video writer API directly. Libav/FFMPEG, libx264, and others all have direct callable APIs so you can use them.

Using webcam as a QR code scanner in Python-3.6

I have spent weeks looking for a way to turn my webcam (built into the computer) into a QR scanner using Python but nothing has worked.
In the first instance, I tried installing this software, which supposedly would let me turn my camera into a barcode scanner whose video I could then decode in Python. I installed the scanner along with pywin32, which was supposedly the library I needed, but I couldn't get the two to communicate: my computer kept saying that pywin32 had not been installed (although it had).
Then I moved on to zbar/pyzbar. I downloaded all of the modules that were recommended (I followed the instructions set out here), but these each came with several more error messages, all to do with various libraries and modules not being installed. I've tried downloading PIL/Pillow, pyqrcode and a number of other things that are supposed to work but, for some reason, don't.
I don't feel that I can provide any evidence of code as I haven't got any code to fix for this particular issue -- I am simply looking for anyone who may know of a way to transform an ordinary webcam into a barcode/qr scanner using python.
Assuming none of the libraries I need are installed on my computer at the moment, could someone please explain to me exactly which libraries I will need to download, where I can find them, and how I could use them to make Python communicate with my webcam?
This is for my A Level Coursework and the scanner is absolutely fundamental to the program; if anyone can provide me with a useful, understandable solution then I would be really grateful. I apologise if this question is still a little too broad - I am a complete novice to coding and after searching endlessly for hours to find a solution, I feel that this is my final resort.
I did a project on zbar a couple of years ago, and it took 6 months to get zbar working :)
Here's how I setup zbar:
The zbar Python module requires the native ZBar library. Go to http://zbar.sourceforge.net/download.html and download either the "ZBar 0.10 Windows installer" if you're on Windows, or the Linux builds. Run zbar-0.10-setup.exe and follow the installation instructions.
The Zbar python module is available on pypi. That means a simple pip install zbar will install it.
To get the .py examples running, first download the ZBar source code (top link on http://zbar.sourceforge.net/download.html) and unzip the tar.bz2 file (7-Zip works). Inside the unzipped folder there is an /examples directory containing several examples (processor.py is a good one), which you can run just as you would any Python program.
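On the pyzbar route, a minimal webcam scanner can be sketched as below. This assumes `pip install pyzbar opencv-python` and a working default webcam; the imports are inside the function so the snippet loads even where those packages are missing, and the function name is just for illustration:

```python
def scan_qr_from_webcam(max_frames=300):
    """Read frames from the default webcam and return the first decoded
    QR/barcode payload as a string, or None if nothing was found."""
    import cv2
    from pyzbar import pyzbar

    cap = cv2.VideoCapture(0)  # device 0 is usually the built-in camera
    try:
        for _ in range(max_frames):
            ok, frame = cap.read()
            if not ok:
                break
            for symbol in pyzbar.decode(frame):
                return symbol.data.decode("utf-8")
    finally:
        cap.release()
    return None
```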

Information about adding an image to a video

I'm trying to add an image to a pre-existing video in Python. I haven't written any code yet, but I've done some research. Most people seem to use OpenCV or ffmpeg to manipulate images and video.
So my questions are:
Can we add images to a video with OpenCV or ffmpeg?
Can we do that without OpenCV or ffmpeg?
If yes, do you have any information on how?
If no, should I use OpenCV or ffmpeg? Maybe another library?
Which version of OpenCV should I use, and why?
I've read that there is a problem with OpenCV on Ubuntu; has it been fixed?
Do you have any other information or advice for me?
In OpenCV you could extract all the frames from the video and then use the VideoWriter class to create a new video from the frames plus your extra images. One issue I've had in the past with VideoWriter is choosing the correct video codec; you will have to do some research and see what's compatible with your particular setup.
https://docs.opencv.org/3.2.0/dd/d9e/classcv_1_1VideoWriter.html
You may be able to do something similar with PIL (Python Imaging Library) although I’m not entirely sure on that. As for ffmpeg I’m also not sure.
The latest release of OpenCV would be sufficient for this. I currently have OpenCV 3.4 with Ubuntu 16.04 LTS on a Virtual Machine and I have no problems using the VideoWriter class. Have a look at this blog post for advice on installing OpenCV on Ubuntu.
https://www.pyimagesearch.com/2016/10/24/ubuntu-16-04-how-to-install-opencv/
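The overlay step itself is just NumPy slicing once you have a frame. A sketch (frame and logo sizes here are made up; in a real script you would read each frame with cv2.VideoCapture and write the result back with cv2.VideoWriter):

```python
import numpy as np

def overlay_image(frame, overlay, x, y):
    """Paste `overlay` onto `frame` with its top-left corner at (x, y).

    Both arrays are HxWx3 uint8 (OpenCV-style). Returns a copy of the
    frame; the overlay is clipped at the frame border.
    """
    out = frame.copy()
    h = min(overlay.shape[0], out.shape[0] - y)
    w = min(overlay.shape[1], out.shape[1] - x)
    if h > 0 and w > 0:
        out[y:y + h, x:x + w] = overlay[:h, :w]
    return out

frame = np.zeros((480, 640, 3), dtype=np.uint8)        # black video frame
logo = np.full((64, 64, 3), 255, dtype=np.uint8)       # white square "logo"
stamped = overlay_image(frame, logo, 10, 20)
print(stamped[20, 10])  # -> [255 255 255]
```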

How to use another device camera as input for NAO robot

Is it possible to use as camera input another device (like a tablet camera) instead of the robot’s camera? If possible, how should I do that using Python or through libraries using command line?
In our team, we've made a custom build to address the problem of learning faces and recognizing them from image files instead of the standard robot camera.
You can clone our public repo:
git clone http://protolab.aldebaran.com:9000/protolab/facedetection_custom.git
You'll find compiled library binaries for various common versions; pick the right one.
Don't hesitate to ask questions if something's unclear.

How can I capture frames from a PointGrey USB camera in OpenCV/Python/OSX?

PointGrey is a leading manufacturer of machine vision cameras, but unfortunately their support for Mac OS is very limited. A web search led me to guess that I needed to install libusb and libdc1394 for the camera to be recognized, which I did using Homebrew. This did not work.
I don't understand exactly how libusb and libdc1394 libraries work under the hood, other than that they handle the hand-shaking with the camera and data transfer via the USB bus. OpenCV usually makes it incredibly easy to open a camera and start processing frames, but unfortunately when it doesn't work it's not clear how to debug. I found python wrappers for libusb and libdc1394 and included them in my code, which resulted in no errors, but no luck grabbing frames either.
If you don't have an exact answer, please suggest strategies for solving this problem, i.e. how I should systematically approach it and diagnose the possible failure modes. Is there a way to see more of what's going on when OpenCV tries to detect and read from a camera?
My python/opencv code works well with a simple capture = cv2.VideoCapture(0)
but doesn't work with capture = cv2.VideoCapture(1), giving output as follows:
Warning, camera failed to properly initialize!
Cleaned up camera.
Typically cv2.VideoCapture(0) will give the built-in camera on my macbook, and from what I understand cv2.VideoCapture(1) will give the next available camera (i.e. plugged in through USB).
I know the camera works well on a windows machine (in Windows Movie Maker). Do I need to do something further under the hood to get python and opencv to recognize the camera under OS X?
Many years later, we can answer this question. FLIR, which acquired PointGrey, has released Spinnaker, a cross-platform SDK with a UI and a set of pre-compiled examples for accessing the camera: https://www.flir.com/support-center/iis/machine-vision/downloads/spinnaker-sdk-and-firmware-download/
