I am currently working on a hobby project: streaming video from an IP camera (Giroptic 360) to a Raspberry Pi 3 board via RTSP.
I am particularly interested in the URL used to connect to said camera:
rtsp://[IP address]:[Port]/PSIA/Streaming/channels/2?videoCodecType=H.264
I am wondering whether the streamed resolution can be changed directly from the URL (by adding more parameters?) and whether any other functionality is available through the URL.
** I have tried changing the resolution via OpenCV's
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1000)   # cap = cv2.VideoCapture(rtsp_url)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 500)
But it still gave me the default 2048x1024 resolution.
No, you won't have any way of sending commands over the RTSP link using the OpenCV built-in functions. These work at the driver level, and all the RTSP link provides is a place to pull frames from.
If you want to resize the images, you can do this after you grab the frame using OpenCV's resize, as in the sketch below.
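A minimal sketch of that approach (the address is a placeholder for the camera URL above):

import cv2

# Placeholder address; substitute the camera's actual RTSP URL.
cap = cv2.VideoCapture("rtsp://192.168.0.10:554/PSIA/Streaming/channels/2?videoCodecType=H.264")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # The stream still arrives at 2048x1024; resize each frame after grabbing it.
    small = cv2.resize(frame, (1000, 500))
    cv2.imshow("preview", small)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()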
If you want to change the size of the stream itself, then you will need to re-encode the stream at the source, i.e. go into the camera's settings and change it. Although you may not have access to this if it is somebody else's stream, you don't have permissions, etc.
Related
I am trying to stream live video from an external camera using cv2. I was able to write the simple code to stitch the frames and stream them, but I am struggling to find out how to change the camera.
I tried to run it after disabling the main webcam from the task manager, but it still did not work.
So, if anyone can help me with some clue regarding the same, that would be a great help.
Cameras are numbered on Windows. You can try a few indices and check which camera index belongs to the camera you want.
capture = cv2.VideoCapture(index)
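A small probe loop (just a sketch; the 0-4 range is arbitrary) can report which indices open successfully:

import cv2

for index in range(5):
    capture = cv2.VideoCapture(index)
    if capture.isOpened():
        ok, frame = capture.read()
        print("index", index, "opened, frame grabbed:", ok)
    capture.release()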
I've been using OpenCV (mostly with Python) for video capture/processing (Win10/Linux). Is there a way to create an output video stream that would appear as a valid source (like another camera) in videoconferencing software like Zoom, Skype, etc.? For now I just share the imshow() window, but it is slow/ineffective.
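One possible approach, sketched under the assumption that the pyvirtualcam package and a virtual camera backend (e.g. OBS Virtual Camera) are installed, is to push frames into a virtual camera device:

import cv2
import pyvirtualcam  # assumption: pyvirtualcam plus a virtual camera backend are installed

cap = cv2.VideoCapture(0)
with pyvirtualcam.Camera(width=640, height=480, fps=30) as cam:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, (640, 480))
        cam.send(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # pyvirtualcam expects RGB frames
        cam.sleep_until_next_frame()

Zoom, Skype, etc. should then list the virtual camera as an ordinary video source.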
I need to display USB camera video from a Raspberry Pi in wxPython, with control buttons. I have managed to embed VLC into wxPython with control buttons (got it from Google) to play an existing video. Is there any way to stream the USB camera video in it?
Thanks in advance :)
If you've managed to embed VLC, you should just be able to point it at the v4l address for the webcam.
The v4l address should be along the lines of v4l:/dev/video0:size=640x480 (but it'll vary depending on your machine, I guess). You may find it easier to use the standalone VLC client to work out the address you need and then put it into your program.
This StackOverflow thread may be useful for later depending on what you're doing.
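For reference, a rough sketch with the python-vlc bindings, assuming the webcam is /dev/video0 and that `panel` is the wx.Panel already hosting the embedded player (newer VLC builds may want v4l2:// rather than v4l://):

import vlc

instance = vlc.Instance()
player = instance.media_player_new()
player.set_media(instance.media_new("v4l2:///dev/video0"))
player.set_xwindow(panel.GetHandle())  # `panel` is the existing wx.Panel (hypothetical name)
player.play()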
I have been working on some drone and robotics projects using Arduino and Python. There was a Kickstarter project for a neat little hexacopter that hasn't been managed well.
I was lucky: I got my copter, and some time later, after some frustrated email exchanges, I finally received the camera as well. To this day, their forum has people still complaining. Their maker forum is now down and their wiki hasn't been updated with any specifics on the camera.
http://www.flexbot.cc/wiki/index.php?title=Main_Page#Hardware
Their app to accompany the drone still doesn't support the camera module. Not that it'd matter, as their code isn't very well documented or annotated.
https://github.com/HexAirbot
There are some tips on switching on the camera in the comments on their Kickstarter campaign page.
https://www.kickstarter.com/projects/1387330585/hex-a-copter-that-anyone-can-fly/posts/1093716
So, sob story over, I'm stuck with this neat little WiFi camera that I am unsure how to connect to. I know how to switch it on, and it does have a micro-USB port on it.
What Python library could I use to stream images from this camera, given that it is a WiFi camera? Ideally I'd want the video stream as a NumPy matrix.
I need to interface with the camera, so I can connect and disconnect.
Then, I'd like to be able to read images frame by frame with ffmpeg. I have some Python modules that can detect and read from a camera, but how can my code ensure that the camera is connected?
Totally stuck. Any help would be appreciated.
Considering you are building for the Android platform, you will more than likely need to use some sort of Java/Python driver/interface, unless you just use Java.
Here is an article on Java/Python, and using Python from within Java:
Using Python from within Java
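Setting the Java/Android angle aside, the frame-by-frame reading and the connectivity check can be sketched with OpenCV, assuming the camera exposes a network stream URL (the address below is hypothetical):

import cv2

URL = "rtsp://192.168.1.1/live"  # hypothetical address; the real one depends on the camera

cap = cv2.VideoCapture(URL)
if not cap.isOpened():  # simple check that the camera is reachable before reading
    raise RuntimeError("camera not connected: " + URL)

while True:
    ok, frame = cap.read()  # frame is a NumPy array (BGR), ready for processing
    if not ok:              # a failed read usually means the stream dropped
        break

cap.release()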
Background
I'm attempting to craft a simple video playback script for a small cinema that automates the playing of videos and control of the projector, sound and lighting systems. I have two video outputs, one goes to a monitor in the projection booth, and the other directly to the projector. I desire to play video (and only video) fullscreen to the projector while putting controls and a small (~1/4 screen) preview on the monitor. This will allow the projectionist to view the video being output and control the playback from the monitor in the booth while all the audience ever sees is the video output.
Problem
I am currently using Python to control VLC player (with the libvlc Python bindings) to play back videos. I have everything working fine except that I can't figure out how to get a preview (direct copy) of the video being played fullscreen on the projector output into my GUI.
I have tried using the clone filter, but I can't get the cloned window to automagically appear full screen, nor in my GUI. The clone filter seems like the logical choice, but it seems to be VERY inflexible when it comes to specifying destination screens, fullscreen, etc. I must be able to open video windows full screen on the projector monitor. Professionalism is key, and it would look bad if the projectionist had to drag a window over and double-click on it when the movie started.
Currently Using:
Debian Linux
Python 2.7
wxPython
libvlc
I would like to continue using Python, as I already have the code for controlling the projector, sound processor, lighting and curtain written and tested. I chose VLC because it really seems bulletproof when it comes to video playback, but I am not committed to its continued use. I also chose wxWidgets for my GUI as a result of past experience, but I am not stuck on that either.
This describes the direct solution and does not concentrate on any alternative or the overall design of your application.
As your application and VLC media player are separate processes, you will not be able to get what you want directly, because there is no shared memory between those two applications. The best shot at "copying" the decoded frames from VLC is to stream them, e.g. as raw video muxed into an MPEG-TS stream (TS is usually used for this kind of use case), sent to e.g. udp://localhost:1234.
In your application, you will need to be able to receive the TS stream, decode it, and display it at the spot of interest.
To start, I would try whether you can do this using two VLC players that you control manually. Once the first VLC instance streams to UDP and outputs on the main display at the same time, and the second VLC player receives and plays the UDP stream, you can go on.
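A rough sketch of that two-player test with the libvlc Python bindings you already use (the file path and port are placeholders):

import vlc

# Sender: play full screen and duplicate the stream to UDP as MPEG-TS.
sender = vlc.Instance()
media = sender.media_new(
    "movie.mp4",  # placeholder path
    "sout=#duplicate{dst=display,dst=std{access=udp,mux=ts,dst=localhost:1234}}",
)
main_out = sender.media_player_new()
main_out.set_media(media)
main_out.set_fullscreen(True)
main_out.play()

# Receiver: a second player picking up the UDP stream for the booth preview.
receiver = vlc.Instance()
preview = receiver.media_player_new()
preview.set_media(receiver.media_new("udp://@:1234"))
preview.play()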
Next, find a player library that you can use directly in your wxPython application and check whether it can receive the UDP stream as well, e.g.:
https://wxpython.org/Phoenix/docs/html/wx.media.MediaCtrl.html
This player lib, for example, requires GStreamer as a base.
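A sketch of what that could look like (whether the UDP MRL is accepted depends on the GStreamer backend; the port matches the sender sketch above):

import wx
import wx.media

app = wx.App()
frame = wx.Frame(None, title="Booth preview", size=(512, 288))
player = wx.media.MediaCtrl(frame, szBackend=wx.media.MEDIABACKEND_GSTREAMER)
player.Bind(wx.media.EVT_MEDIA_LOADED, lambda evt: player.Play())
player.LoadURI("udp://@:1234")  # placeholder port from the sender sketch
frame.Show()
app.MainLoop()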
As a result, the main display and the picture in your application might have a latency of a few seconds. To get around this latency, the best way that I currently know of is using WebRTC, but this is a far more complex setup than the above.
https://www.sipwise.org/news/technical/tv-over-webrt/
Of course, in case you do some encoding for WebRTC or even for UDP, you would want to utilize a hardware encoder, e.g. Nvidia NVENC, in order to guarantee that the needed resources are always there.