Streaming media over HTTP - Python

I'm writing a web service to stream video (e.g., from a web cam), and I want to know what options I have.
For example, one naive way I can think of is to periodically fetch a JPEG from the source and display it.
But I also know that some media types, like MJPEG, can be streamed over HTTP. However, I do not know exactly how that is achieved technically. Any example would be welcome.
UPDATE:
I found the link below, which implements a live video stream over HTTP using MJPEG and Python WSGI:
Streaming MJPEG over HTTP with gstreamr and python – WSGI version (GIST)
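The core mechanism is the multipart/x-mixed-replace content type: the server keeps one HTTP response open and pushes a new JPEG part for every frame, and the browser replaces the displayed image each time a part arrives. Here is a minimal WSGI sketch, assuming a frame source that returns JPEG bytes (get_jpeg_frame() is a hypothetical placeholder):

    from wsgiref.simple_server import make_server

    BOUNDARY = "frame"

    def get_jpeg_frame():
        # hypothetical placeholder: return the latest camera frame as JPEG bytes
        with open("latest.jpg", "rb") as f:
            return f.read()

    def app(environ, start_response):
        start_response("200 OK", [
            ("Content-Type", "multipart/x-mixed-replace; boundary=%s" % BOUNDARY),
        ])
        def frames():
            while True:
                jpeg = get_jpeg_frame()
                # one multipart "part" per frame, pushed over the open response
                yield (b"--" + BOUNDARY.encode() + b"\r\n"
                       + b"Content-Type: image/jpeg\r\n"
                       + b"Content-Length: " + str(len(jpeg)).encode() + b"\r\n\r\n"
                       + jpeg + b"\r\n")
        return frames()

    if __name__ == "__main__":
        make_server("", 8080, app).serve_forever()

Pointing a browser at http://localhost:8080/ then renders the pushed JPEGs as live video.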

Forget about GStreamer or FFmpeg:
- Create a RAM disk.
- Install OpenCV; in an infinite loop, read() each frame and write it to the RAM disk as a JPEG with Pillow (see the sketch after this list).
- Install https://snapcraft.io/mjpg-streamer, point it at the RAM disk, and enable delete-after-read.
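A minimal sketch of the capture loop above, assuming a tmpfs mount such as one created with "sudo mount -t tmpfs -o size=64m tmpfs /mnt/ramdisk" (the mount point and filename are assumptions; the mjpg-streamer configuration is left out):

    import cv2
    from PIL import Image

    cap = cv2.VideoCapture(0)  # open the default camera
    while True:
        ok, frame = cap.read()  # grab one BGR frame
        if not ok:
            break
        # OpenCV delivers BGR; Pillow expects RGB
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        Image.fromarray(rgb).save("/mnt/ramdisk/frame.jpg", "JPEG")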

Related

How to decode videoData from iOS in OpenCV VideoCapture

I have an Objective-C app in which a callback receives video data as uint8_t bytes plus a length:

    -(void)videoFeed:(DJIVideoFeed *)videoFeed didUpdateVideoData:(NSData *)videoData {
        [[DJIVideoPreviewer instance] push:(uint8_t *)videoData.bytes length:(int)videoData.length];
        [_videoExtractor parseVideo:(uint8_t *)videoData.bytes
                             length:(int)videoData.length
                          withFrame:^(VideoFrameH264Raw *frame) {
I have Socket.IO implemented in the application, and I send this data to a Python server running OpenCV.
I have two problems:
I don't know how to take this data and actually turn it into a video.
I want to send the H.264 data through the socket, but the type (VideoFrameH264Raw) is not accepted by Objective-C:
    [self.socket emit:@"streaming" with:@[frame]];
I have tried to read the data with OpenCV's VideoCapture, but with no luck.
I am also a bit confused about the case where I get single frames through the socket and have to show them as a video in Python.
Hmm, it depends on how it is encoded.
If it's using the internal encoder (.mov), there is not much you can do right now: it is a proprietary compression algorithm, and Apple doesn't give a crap about a single researcher.
If it is the DJI internal data feed, they use an FFmpeg-like method, so you shouldn't have many problems capturing it with a UDP server.
To confirm this, you can try with H.264 directly, or inspect the raw video data; MP4 has a few identifying bits/bytes of key control sequence.
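A rough sketch of the UDP-capture route, assuming the sender pushes the raw H.264 feed to a UDP port and OpenCV was built with FFmpeg support (the bind address and port are placeholders):

    import cv2

    # let the FFmpeg backend demux/decode whatever arrives on the port
    cap = cv2.VideoCapture("udp://0.0.0.0:5000", cv2.CAP_FFMPEG)
    while True:
        ok, frame = cap.read()  # decoded BGR frame
        if not ok:
            continue
        cv2.imshow("stream", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break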

Efficient way of sending a large number of images from client to server

I'm working on a project where one client needs to take several snapshots from a camera (i.e. it's actually taking a short-duration video, hence a stream of frames), then send all images to a server which then performs some processing on these images and returns a result to the client.
Client and server are all running Python3 code.
The critical part is the image sending one.
Some background first: the images are 640x480 JPEG files. JPEG was chosen as a default, but lower-quality encodings can be selected as well. They are captured in sequence by a camera; we thus have approximately ~600 frames to send, each around 110 KiB.
The client is a Raspberry Pi 3 Model B+. It sends the frames via Wi-Fi to a 5c server. Server and client both reside in the same LAN for the prototype version, but future deployments might differ, both in terms of connectivity medium (wired or wireless) and area (LAN or metro).
I've implemented several solutions for this:
Using Python sockets on the server and the client: I either send one frame directly after each capture, or send all images in sequence once the whole stream capture is done (a minimal framing sketch follows below).
Using GStreamer: I launch a GStreamer endpoint on my client and send the frames directly to the server as I stream. I capture the stream on the server side with OpenCV compiled with GStreamer support, then save the frames to disk.
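For illustration, here is a minimal sketch of the framing I mean in the first solution, assuming each frame is already JPEG bytes (the helper names are just placeholders): every frame is prefixed with its length, so the server knows where one image ends and the next begins.

    import socket
    import struct

    def send_frame(sock, jpeg_bytes):
        # 4-byte big-endian length prefix, then the JPEG payload
        sock.sendall(struct.pack(">I", len(jpeg_bytes)) + jpeg_bytes)

    def recv_frame(conn):
        header = conn.recv(4, socket.MSG_WAITALL)  # block until 4 bytes arrive
        if len(header) < 4:
            return None  # connection closed
        (length,) = struct.unpack(">I", header)
        return conn.recv(length, socket.MSG_WAITALL)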
Now, the issue I'm facing is that even though both solutions work 'well' (they get the final job done, which is to send data to a server and receive a result based on some remote processing), I'm convinced there is a better way to send a large amount of data to a server, using either the Python socket library or any other available tool.
All the personal research I've done on the matter led me either to solutions similar to mine, using Python sockets, or to out-of-context ones relying on backends other than pure Python.
By a better way, I assume:
A solution that saves bandwidth as much as possible.
A solution that sends all data as fast as possible.
For 1., I slightly modified my first solution to archive and compress all captured frames into a .tgz file that I send over to the server (a sketch follows). It indeed decreases bandwidth usage but also increases the time spent on both ends (due to the compression/decompression processes). This is obviously particularly true when the dataset is large.
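The archiving step itself is short (a sketch, assuming the frames were saved as individual .jpg files; the names are placeholders):

    import tarfile

    def pack_frames(paths, archive="frames.tgz"):
        # bundle all frame files into one gzip-compressed tar archive
        with tarfile.open(archive, "w:gz") as tar:
            for p in paths:
                tar.add(p)
        return archive

Most of the win comes from doing a single transfer instead of ~600, since JPEG data itself compresses little further.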
For 2., GStreamer allowed me to have a negligible delay between the capture and the reception on my server. However, I have no compression at all, and for the reasons stated above I cannot really use this library for further development.
How can I send a large number of images from one client to one server with minimal bandwidth usage and delay in Python?
If you want to transfer images as frames, you can use an existing app like MJPEG-Streamer, which encodes images from a webcam interface to JPEG and thereby reduces the image size. If you need a more robust transfer with advanced encoding, you can use a Linux tool like FFmpeg with streaming, which is documented here.
If you want a lower-level implementation and to control the whole stream from your own code for modifications, you can use a web framework like Flask and transfer your images directly over the HTTP protocol. You can find a good example here.
If you don't want to stream, you can convert a whole set of images to a video in an encoded format like H.264 and then transfer the bytes over the network. You can use OpenCV to do this (see the sketch below).
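A hedged sketch of that last option, using OpenCV's VideoWriter (whether the "avc1" H.264 fourcc is available depends on your OpenCV/FFmpeg build; "mp4v" is a common fallback):

    import cv2

    def frames_to_video(frames, path="out.mp4", fps=30):
        # frames are BGR numpy arrays of identical size
        h, w = frames[0].shape[:2]
        fourcc = cv2.VideoWriter_fourcc(*"avc1")  # H.264, if the build supports it
        writer = cv2.VideoWriter(path, fourcc, fps, (w, h))
        for f in frames:
            writer.write(f)
        writer.release()
        return path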
There are also some good libraries written in Python, like pyffmpeg.
You can also restream the camera over the network with FFmpeg so that the client can read it either way; that will reduce delays.

Python RTSP to WebRTC

I want to pass an H.264 or MJPEG RTSP stream from an IP camera directly to a WebRTC session in a browser, without re-encoding. It has to be done in Python, because the target is to have such an RTSP/WebRTC gateway reside on the camera itself, which has a Python interpreter. The stream is one way, to the browser only. I'm a Python freshman, so any hints, ideas, or links to existing libraries are welcome.
I've seen the writeup at http://www.codeproject.com/Articles/800910/Broadcasting-of-a-Video-Stream-from-an-IP-camera-U, but it requires transcoding to VP8 (and is not Python).
I've also reviewed the thread at Use an IP-camera with webRTC and looked at the Kurento media server (Node.js) and the Janus gateway (C).
One of the commenters said "you could probably very easily use the native webrtc API and provide an RTSP stream through it." Do there exist any Python bindings to the native WebRTC API? Am I deranged for even thinking such a gateway application is possible in Python?
Firefox supports H.264 (via the OpenH264 plugin, which is automatically downloaded). Chrome will be adding H.264 "soon". Neither supports MJPEG, nor does the native webrtc.org code - though MJPEG is supported by all of them as a camera-capture source, and it wouldn't be particularly hard to add an MJPEG video codec to the native webrtc.org code. (Non-trivial, however, because of the number of things you'd need to change.)
Note that if this traverses the open internet (or even potentially a Wi-Fi link), your solution will be unable to easily adapt to bitrate changes without asking the IP camera to change its rate.
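For what it's worth, a pure-Python route now exists in the aiortc library, which implements WebRTC in Python; its MediaPlayer helper can open an RTSP URL through FFmpeg. A minimal one-way sketch (the RTSP URL and the signaling transport are placeholders, and whether re-encoding is actually avoided depends on codec negotiation with the browser):

    from aiortc import RTCPeerConnection, RTCSessionDescription
    from aiortc.contrib.media import MediaPlayer

    async def handle_offer(offer_sdp):
        pc = RTCPeerConnection()
        player = MediaPlayer("rtsp://camera.local/stream")  # placeholder URL
        pc.addTrack(player.video)  # one way: camera to browser
        await pc.setRemoteDescription(RTCSessionDescription(offer_sdp, "offer"))
        await pc.setLocalDescription(await pc.createAnswer())
        return pc.localDescription.sdp  # send this answer back over your signaling channel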

Capture/Restream streaming web cam video over network with VLC or Python

I have streaming video from a web cam that is accessible via a public IP I opened up on my machine. That is, I can go to http://ip/webview and view the video (I did have to install the ActiveX hidvrocx.cab plugin). The video source itself is H.264 and, according to Wireshark, is running on port 9000 over TCP.
What I would like to do is re-stream the raw video, but at this point I would settle for converting it to FLV so I can open it with VLC or something.
According to the technical support team of the webcam (Swann), "netviewer" (some third party software) can view the video feed so there is no encryption / special DRM.
I'm new to this whole thing streaming video world, so this is what I've tried / am considering:
- I've tried loading the stream in VLC at tcp://public_ip:9000, but according to Swann support, VLC cannot view the source because it is raw H.264. Is it possible to use VLC to convert this raw H.264 format into something readable by media players? Possibly something like:
vlc src --sout flv
- Is it possible to use the Python VideoCapture library? Is it strictly for capturing video directly from the device, or does it work over the network?
I'm completely lost right now, so even seeing the raw stream in a media player of any type would be an accomplishment.
TL;DR: I have a streaming video source from a webcam over a public IP that I would ultimately like to "redistribute" in its original format (H.264) or as FLV. How do I accomplish this?
vlc <input_stream> --sout=#std{access=http,mux=ts,dst=<server_ip>:<port>/}
This command will re-stream your input stream over HTTP with the TS muxer.
Also you can try rtp/rtsp:
vlc <input_stream> --sout=#rtp{sdp=rtsp://<server_ip>:<port>/XXX.sdp}
will re-stream over the RTSP protocol.
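The same kind of remux can also be driven from Python with FFmpeg via subprocess, assuming the feed really is raw H.264 over TCP (the address and output name are placeholders):

    import subprocess

    subprocess.run([
        "ffmpeg",
        "-f", "h264",                  # treat the input as raw H.264
        "-i", "tcp://PUBLIC_IP:9000",  # the camera's TCP feed
        "-c:v", "copy",                # remux only, no re-encoding
        "-f", "flv", "out.flv",
    ])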

Chopping media stream in HTML5 websocket server for webbased chat/video conference application

We are currently working on a chat + (file sharing +) video conference application using HTML5 WebSockets. To make our application more accessible, we want to implement adaptive streaming, using the following sequence:
Raw audio/video data goes from the client to the server
The stream is split into 1-second chunks
Each chunk is encoded at varying bandwidths
The client receives a manifest file describing the available segments
It downloads one segment using normal HTTP
The bandwidth for the next segment is chosen based on the performance of the previous one
The client may select from a number of different alternate streams at a variety of data rates
So, how do we split our audio/video data into chunks with Python?
We know Microsoft already built Expression Encoder 2, which enables adaptive streaming, but it only supports Silverlight and that's not what we want.
Edit:
There's also a solution called FFmpeg (with the PyFFmpeg wrapper for Python), but it only supports Apple adaptive streaming.
I think FFmpeg is the main tool you'll want to look at. It has become the best-supported open-source media manipulator. There is a Python wrapper for it, though it is also possible to drive the command line through the subprocess module (see the sketch below).
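A minimal sketch of that subprocess route, using FFmpeg's segment muxer to cut one-second chunks (the input file and codec settings are placeholders; run it once per target bitrate to build the alternate streams):

    import subprocess

    subprocess.run([
        "ffmpeg",
        "-i", "input.webm",                # the raw client recording
        "-c:v", "libx264", "-c:a", "aac",  # encode for one target bandwidth
        "-f", "segment",
        "-segment_time", "1",              # one-second chunks
        "-reset_timestamps", "1",
        "chunk%05d.ts",
    ])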
I've found some nice articles about how other people built stream segmenters for other platforms, so now we know how to build one in Python.
