Hey guys, I want to get live video frames from the USB webcam connected to my PYNQ FPGA board. The goal is to run motion detection on each frame, but I've been struggling to get a live video at all. I tried the first example in this link, but I get a really bad frame rate. I tried to improve it by adding the line vc.set(cv2.CAP_PROP_FPS, 60), but it didn't change anything. I also tried an example in MATLAB: there I had no problems connecting to the webcam and got a smooth frame rate.
I've read that OpenCV can't be used together with Python 3 yet, but I still get images in the notebook, which I don't understand. I also don't know how to install other packages or libraries such as pygame for Jupyter notebooks on the PYNQ; everywhere it says to run pip install "name" and put the library into the site-packages directory, but I haven't found that directory from the notebook. So I'm trying to find a way that doesn't require installing new libraries.
I really need your help, guys. Do you have any suggestions on how to get a live video stream from my webcam in Jupyter notebooks?
OpenCV does work with Python 3; I am using that combination myself.
First you need to install pip, which is a pretty painless process. After that, connect the board to the internet and use pip install to add the packages you need.
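As a rough sketch (assuming the webcam enumerates as device 0 and that a lower capture resolution is acceptable), a basic grab loop looks like this; in my experience, dropping the resolution usually helps the frame rate far more than CAP_PROP_FPS, which many UVC drivers simply ignore:

```python
import cv2

# Open the first USB webcam (index 0 is an assumption).
vc = cv2.VideoCapture(0)

# Many drivers ignore CAP_PROP_FPS; a smaller frame size is a more
# reliable way to speed up capture and later motion detection.
vc.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
vc.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

while vc.isOpened():
    ret, frame = vc.read()      # grab one frame
    if not ret:
        break
    # ... run motion detection on `frame` here ...

vc.release()
```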
I am trying to set my Ubuntu desktop wallpaper to a live video feed from a webcam using OpenCV. After performing some object detection I end up with an OpenCV feed that I can display using cv2.imshow.
But instead of using imshow, is it possible to cast the object-detection output to the Ubuntu desktop as a live wallpaper?
I am even able to set up live video from YouTube as an Ubuntu live wallpaper using cvlc, but I can't work out how to do the same with OpenCV output.
You might be in for a fair amount of trouble, as this never seems to have been implemented, but you can try one of the following approaches:

- Save the frames and continuously delete the old frame while changing the background from an OS call (see the sketch below).
- If you are able to make cvlc work, you can try streaming the video to localhost with cv::VideoWriter and then reuse the YouTube approach you were already using.
- Try the Streamlink/MPV/Xwinwrap fork.
- Save the desired results using cv2.imwrite() and then try the methods described here and here.
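A minimal sketch of the first idea (save a frame, then change the background with an OS call), assuming a GNOME desktop where gsettings controls the wallpaper; the detection step is a placeholder, and this will be nowhere near smooth video:

```python
import itertools
import subprocess
import cv2

cap = cv2.VideoCapture(0)   # webcam index 0 is an assumption

# Alternate between two files: GNOME may not redraw the wallpaper
# if picture-uri is set to exactly the same value again.
paths = itertools.cycle(['/tmp/frame_a.jpg', '/tmp/frame_b.jpg'])

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    # ... object detection on `frame` would go here ...

    frame_path = next(paths)
    cv2.imwrite(frame_path, frame)          # save the current frame
    subprocess.call([                       # change wallpaper via OS call
        'gsettings', 'set', 'org.gnome.desktop.background',
        'picture-uri', 'file://' + frame_path,
    ])

cap.release()
```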
I have created an app that displays RTSP streams in a Kivy grid view. It works just fine on my computer, but when I deploy it to another PC everything works up until the point where the video should play in the grid (I just get white squares in the lower left of each tile). I strongly suspect there is a package I need to install that the Kivy documentation does not mention.
I have pip installed all the Kivy dependencies, Cython, and Pillow on the other PC.
I would like to see video in each block, as I do on the PC that I built the app on.
No RTSP Stream Coming Through
Update: On the other PC (a mini PC), I uninstalled Python 3.7.3, reinstalled it, installed Kivy in the proper order according to their Windows install documentation, and installed Cython. This got the streams working, but now some of the text is missing in the app. I am also getting multiple .dll errors (libopus-0.dll and libgstopus.dll). I tried removing gstreamer from the Python share folder, and that put it back to just showing white boxes.
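For reference, this is the kind of minimal standalone test I can run on the problem PC to check whether the video provider itself works, independent of my app (the RTSP URL is just a placeholder):

```python
from kivy.app import App
from kivy.uix.video import Video


class RtspTestApp(App):
    def build(self):
        # Placeholder URL; in the real app each grid tile gets its own stream.
        return Video(source='rtsp://192.168.1.10:554/stream1', state='play')


if __name__ == '__main__':
    RtspTestApp().run()
```

If this also shows only a white square, the problem is in the video backend packaging rather than in my grid code.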
I can't install opencv-contrib-python from PyCharm using the settings dialog that PyCharm provides. I get these errors every time. First I tried without the --user option, as shown in image 1:
Then I tried using the --user option, and I got what is shown in image 2.
My knowledge of Python is very weak and I don't need to improve it right now. I have a script I need to run for a presentation coming up very soon (the script has many sections with examples of how to use SIFT, image homography, and other techniques used in image processing and artificial vision; I can't run the SIFT part, which is why I need opencv-contrib-python). I'm using OpenCV 3.4.3.18 and I need to make the script work with that distribution!
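Once the package does install, my plan for a quick sanity check (just my own snippet, using the 3.4.x contrib API) is:

```python
import cv2

print(cv2.__version__)

try:
    # In OpenCV 3.4.x, SIFT lives in the contrib xfeatures2d module.
    sift = cv2.xfeatures2d.SIFT_create()
    print('SIFT is available')
except (AttributeError, cv2.error) as err:
    print('SIFT is not available:', err)
```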
I haven't found anything useful so far, I'm in quite a hurry, and I'm still without a solution.
Thank you for the help!
I'm trying to access a live video feed from my Raspberry Pi with a PiCam attached. I have enabled the camera in the interface settings and even tested it by snapping some pictures and videos using the PiCam library. However, OpenCV is giving me trouble. I installed OpenCV following this tutorial (shout-out to Adrian Rosebrock).
Check out the screenshot below for the code and the error message. I'm running everything from within a virtual environment with OpenCV installed, like Adrian suggests.
Code and error message
VIDEOIO ERROR: V4L: can't open camera by index 0
From what I've read of other people's problems, this error is sometimes resolved by playing with the index value. If I change the index to -1 or 1, as most solutions suggest, I either get the same thing or a slightly different "can't access camera" error.
I'm relatively new to OpenCV and RPi so I might just be missing something simple. Any suggestion is much appreciated, thanks!
You may need to enable access to the camera. Try typing:
sudo modprobe bcm2835-v4l2
in a terminal on your RPi.
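Once the module is loaded, the PiCam should show up as /dev/video0. A quick check like the following (just a generic sanity test, nothing PiCam-specific) confirms whether OpenCV can actually grab a frame; to load the module on every boot you can add bcm2835-v4l2 to /etc/modules.

```python
import cv2

cap = cv2.VideoCapture(0)   # /dev/video0 once bcm2835-v4l2 is loaded
if not cap.isOpened():
    print('Camera could not be opened')
else:
    ret, frame = cap.read()
    print('Frame grabbed:', ret, 'shape:', frame.shape if ret else None)
cap.release()
```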
I am trying to write a script that will extract a single frame from a user uploaded video clip in order to create a thumbnail. It sounds like either OpenCV or ffmpeg will do what I need, but I am having trouble installing them.
I tried installing OpenCV with apt-get install libopencv-dev, and it looked like everything worked, but when I run import cv2 in Python it says there is no such module. I also tried installing using these instructions, but then the import hangs for a second or two and says Failed to initialize libdc1394.
I then tried to install ffmpeg with pyffmpeg, but the most recent version of pyffmpeg I could find was released three years ago and built for Python 2.6 on Ubuntu 10.10, while I am using 2.7 on Ubuntu 12.04.
Does anyone have experience installing either of these, or would recommend something else for this purpose?
If you are trying to install ffmpeg, the easiest way is to install it through Homebrew if you have it; it's a lot more complicated otherwise.
If you just want to get this task done, and it doesn't have to be one of the tools you listed, you could try ffmpeg-python. Once it is installed, you can use the get_video_thumbnail example to get a single image, and you could probably also tweak the read_frame_as_jpeg example to pull a single frame if you want.
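Roughly, the get_video_thumbnail example boils down to something like the following (the file paths, seek time, and thumbnail width here are placeholders):

```python
import ffmpeg

in_filename = 'uploaded_clip.mp4'    # placeholder: the uploaded video
out_filename = 'thumbnail.jpg'       # placeholder: where to save the thumbnail

(
    ffmpeg
    .input(in_filename, ss=1)         # seek roughly 1 second into the clip
    .filter('scale', 320, -1)         # 320 px wide, keep the aspect ratio
    .output(out_filename, vframes=1)  # write exactly one frame
    .overwrite_output()
    .run()
)
```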