I'm trying to access a live video feed from my Raspberry Pi with a PiCam attached. I have enabled the camera in the interface settings and even tested it by snapping some pictures and videos using the PiCam library. However, OpenCV is giving me trouble. I installed OpenCV following this tutorial (shoutout to Adrian Rosebrock).
Check out the screenshot below for the code and the error message. I'm running everything from within a virtual environment with OpenCV installed, like Adrian suggests.
Code and error message
VIDEOIO ERROR: V4L: can't open camera by index 0
From what I've read about similar problems other people have had, this error is sometimes resolved by changing the index value. If I change the index to -1 or 1, as most solutions suggest, I either get the same thing or a slightly different "can't access camera" error.
I'm relatively new to OpenCV and RPi so I might just be missing something simple. Any suggestion is much appreciated, thanks!
You may need to enable access to the camera. Try running sudo modprobe bcm2835-v4l2 in a terminal on your RPi.
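Once the module is loaded, the working device index still varies between setups, so a small probe loop saves guesswork. This is just a sketch: probe_indices is a hypothetical helper, and it only assumes the capture object follows cv2.VideoCapture's isOpened()/release() interface.

```python
def probe_indices(open_capture, indices=(0, -1, 1)):
    """Return the first index whose capture opens, or None.

    open_capture should behave like cv2.VideoCapture: calling it
    with an index returns an object that has isOpened() and release().
    """
    for idx in indices:
        cap = open_capture(idx)
        try:
            if cap.isOpened():
                return idx
        finally:
            # Always release, even for an index that failed to open.
            cap.release()
    return None

# With OpenCV installed you would call it as:
# import cv2
# working = probe_indices(cv2.VideoCapture)
```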
I tried varying the exposure settings of the Pi CSI camera using the libcamera library in Python, but none of the methods I tried worked. I was unable to change the exposure using the following example code:
cam.set(libcamera.AeEnable, 0)
cam.set(libcamera.ExposureValue, 4)
cam.set(libcamera.ExposureTime, 100000)
cam.set(libcamera.AnalogueGain, -0.1)
Varying the settings above through libcamera produced no change in exposure.
But I was successful in varying the brightness using the following command:
cam.set(libcamera.Brightness, -1)
I also tried using the PiCamera library and experienced the following error:
OSError: libmmal.so: cannot open shared object file: No such file or directory
I found that the community has already faced similar problems: the posts explain that this library is not present on the 64-bit OS.
I also tried sudo modprobe bcm2835-v4l2 in order to have the CSI camera detected as a device so it could be accessed through OpenCV, since several blog posts suggested this procedure. The camera was still not picked up by OpenCV, which returned nothing.
I have a DeLorme Earthmate LT-40 USB GPS device that I used years ago with a Windows XP program. Out of curiosity I plugged it into my Raspberry Pi to see if I could read the data. I've managed to see data using sudo gpsmon at the command prompt, so I would like to take this a step further and write a Python program to read the data. Not knowing very much about Python, I've searched YouTube and Google for possible solutions. It looks like I need to import pynmea2. I used pip install pynmea2 to install the module. I keep getting
"ModuleNotFoundError: No module named pynmea2"
when I try to run my script. I tried to reinstall pynmea2 again which gave me
Requirement already satisfied: pynmea2 in ./.local/lib/python2.7/site-packages (1.15.0).
I don't understand what I'm doing wrong. Any help would be greatly appreciated. Thank you.
I have the older LT-20 version of that GPS, and it usually presents itself as ttyUSB0 (in my case) when I plug it into the Raspberry Pi.
Just run dmesg to see which port it is recognized on, then you can run cat /dev/ttyUSB0 and see all the messages coming from your GPS. Each type of frame starts with $GP. See the detailed $GP descriptions at: http://aprs.gids.nl/nmea/ .
Then, from your Python program, you can open /dev/ttyUSB0 as a file (read-only), handle each frame, and interpret it according to its format.
Best regards
Flavio
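To make Flavio's suggestion concrete, here is a minimal sketch that parses one $GPGGA sentence by hand, with no pynmea2 needed. The device path and field layout are the standard NMEA ones, but the parse_gga helper itself is hypothetical, and it only pulls out latitude and longitude.

```python
def parse_gga(sentence):
    """Parse a $GPGGA NMEA sentence into (latitude, longitude) in degrees."""
    fields = sentence.strip().split(",")
    if not fields[0].endswith("GGA"):
        return None
    # NMEA encodes latitude as ddmm.mmmm and longitude as dddmm.mmmm
    lat = float(fields[2][:2]) + float(fields[2][2:]) / 60.0
    if fields[3] == "S":
        lat = -lat
    lon = float(fields[4][:3]) + float(fields[4][3:]) / 60.0
    if fields[5] == "W":
        lon = -lon
    return lat, lon

# Reading from the device would then look like:
# with open("/dev/ttyUSB0") as gps:
#     for line in gps:
#         if line.startswith("$GPGGA"):
#             print(parse_gga(line))
```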
I am trying to set up my Ubuntu desktop wallpaper so that it shows a live video feed from a webcam using OpenCV; after performing some object detection, I have an OpenCV feed that I can display using cv2.imshow.
But instead of using imshow, is it possible to cast the output of the object detection as a live Ubuntu desktop wallpaper?
I am even able to set up live video from YouTube as a live Ubuntu wallpaper using cvlc, but I can't work out how to do the same with OpenCV output.
You might be in for some trouble, as it looks like this has never been implemented, but you can try one of the following approaches:
Try saving the frames, deleting the old frame as you go, and changing the background from an os call each time.
If you are able to make cvlc run, you can try streaming the video to localhost with cv::VideoWriter and then use the same approach you were using for the YouTube feed.
You can try using a Streamlink/MPV/Xwinwrap fork.
You can save the desired results using cv2.imwrite() and then try the methods described here and here.
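The "change background from an os call" idea boils down to rewriting an image file and pointing the desktop at it. A minimal sketch, assuming a GNOME desktop (the gsettings schema is the standard GNOME one; wallpaper_command is a hypothetical helper, and other desktop environments need a different command):

```python
def wallpaper_command(image_path):
    """Build the gsettings command that points GNOME's wallpaper at a file."""
    return [
        "gsettings", "set", "org.gnome.desktop.background",
        "picture-uri", "file://" + image_path,
    ]

# The save-and-overwrite loop from the answer would then look roughly like:
# import cv2, subprocess
# cap = cv2.VideoCapture(0)
# while True:
#     ok, frame = cap.read()
#     if not ok:
#         break
#     cv2.imwrite("/tmp/wall.png", frame)  # overwrite the old frame
#     subprocess.run(wallpaper_command("/tmp/wall.png"), check=True)
```

Overwriting a single file keeps disk usage flat, but the desktop may briefly read a half-written image; alternating between two files avoids that.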
Hey guys, I want to get live video frames from my USB webcam connected to my Pynq FPGA. The goal is to run motion detection on each frame, but I've been struggling to get live video. I tried the first example in this link but I get a really bad frame rate. I tried to get a better rate by adding the following line: vc.set(cv2.CAP_PROP_FPS, 60), but it didn't change anything. I tried an example in MATLAB and had no problem connecting to the webcam, with a smooth frame rate.
I've read that OpenCV can't be used together with Python 3 yet, but I still get images in the notebook, which I don't understand. I also don't know how to install other packages or libraries like pygame for Jupyter notebooks on the Pynq; it says everywhere that I have to enter pip install "name" and put the library in the site-packages directory, but I haven't seen that directory from Jupyter notebooks. So I'm trying to find a way without installing new libraries.
I really need your help guys, do you have some suggestions how to get a live video stream from my webcam on jupyter notebooks?
OpenCV can work with Python 3; I am using it that way.
First you need to install pip, which is a pretty straightforward process.
After that, connect the board to the internet and use pip install.
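If you want the package to land in the exact environment the notebook runs in, one option is to call pip through the running interpreter from inside a notebook cell. A sketch (pip_install is a hypothetical helper; pygame is just the example package from the question):

```python
import subprocess
import sys

def pip_install(package):
    """Install a package into the interpreter running this notebook.

    Using sys.executable guarantees pip targets the same Python
    (and site-packages) that the notebook kernel is using.
    """
    subprocess.check_call([sys.executable, "-m", "pip", "install", package])

# In a notebook cell, with the board online:
# pip_install("pygame")
```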
I have written Python code using the OpenCV library to detect motion. If motion occurs, it takes a snapshot of the moving object. My problem is this: if I execute the program on my PC (Ubuntu 12.04), everything is OK.
But when I execute the program on my BeagleBone, which runs Angstrom Linux and has a US Robotics webcam attached, after a while it gives the following error:
libv4l2: error dequeuing buf: No such device
VIDIOC_DQBUF: No such device
How can I solve this problem?
Regards
edit: I installed Ubuntu 12.04 on my BeagleBone and everything is OK with it too. It seems my problem is related to the Angstrom image. Maybe a driver or library (libv4l2?) problem? Any ideas?
I was seeing this error with Ubuntu as well when the board was powered through the USB cable. When I powered the board with a 5V supply, the problem went away.
I experienced the same problem. I even changed my board because of this error. But when I switched back to my older power supply, it was gone. As simple as that.