Python livestream to RTMP server (Flash Media Server clone?)

I would like to play around with coding an application that could capture the desktop or a section of the screen (with height and width variables for the resolution) and stream it to an RTMP server (rtmp://server.com/live).
I saw something called rtmplite, but the description of this package is:
"This is a python implementation of the Flash RTMP server"
So I would ultimately like to achieve the following, but will implement it in pieces as I go along, without getting overwhelmed at the project scope:
Make a connection to an RTMP server (with authentication where needed) to a channel on ustream.com, justin.tv/twitch.tv, own3d.tv, etc.
Ability to select a region of the desktop (by height and width) or the entire desktop, and stream it live to that channel, as if I were using Flash Media Live Encoder.
Really I just want to make my own Python-based FMLE or Xsplit application so I can stream live on my own without using those applications.
Any libraries or reading material you can point me to that explain this FMLE-clone type of process would be helpful! Thanks.

I did some RTMP streaming from Python for the wiidiaplayer project: http://wiidiaplayer.org. This is by no means a full solution, but at least some RTMP functionality has been implemented in Python there.
Unfortunately it has been a long time since I touched that code; if you have any questions feel free to ask them; I'm not sure how much of the answers I will be able to provide.
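If implementing the RTMP protocol in pure Python turns out to be too much, a common shortcut (not from the answer above, just a sketch) is to let ffmpeg handle the screen capture and RTMP publishing and drive it from Python with subprocess. This assumes Linux/X11 with ffmpeg installed (Windows would use gdigrab, macOS avfoundation); the server URL and stream key are placeholders:

```python
# Minimal sketch: ffmpeg does the capture and RTMP publishing; Python just
# builds the command line. Assumes Linux/X11 and ffmpeg on the PATH; the
# RTMP URL and stream key below are placeholders.
import subprocess

def stream_desktop(width, height, x=0, y=0,
                   rtmp_url="rtmp://server.com/live/STREAM_KEY"):
    cmd = [
        "ffmpeg",
        "-f", "x11grab",                     # X11 screen capture
        "-framerate", "30",
        "-video_size", f"{width}x{height}",  # region size
        "-i", f":0.0+{x},{y}",               # display plus region offset
        "-c:v", "libx264", "-preset", "veryfast",
        "-pix_fmt", "yuv420p",               # expected by most RTMP ingests
        "-f", "flv",                         # RTMP carries an FLV container
        rtmp_url,
    ]
    return subprocess.Popen(cmd)

if __name__ == "__main__":
    proc = stream_desktop(1280, 720)
    proc.wait()
```

With this split, the Python side is free to handle the UI concerns (region selection, channel credentials) while ffmpeg deals with encoding and the wire protocol.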

Related

How to stream multiple ESP32-CAMs to a Python server and serve the stream to clients over the cloud

I am developing a live monitoring application using multiple resources. The server is a Python Flask application (running on GCP in a Docker container on Cloud Run). For the user database we chose Firestore, and authentication is done using JWT. The client application was created with React Native and receives information from the server using basic HTTP methods in JSON format.
As of now the whole system works, but it is still missing the live cameras. I have never played around with live video streaming or anything like that.
For hardware, we chose ESP32-CAM modules as they are cheap and easy to maintain.
The problem is that I have no idea how to stream the camera content to the server and serve that to the client. Remember that the cameras would be at various locations with no external IP access, so the server can't just make capture requests to the cameras; the cameras themselves will have to push the information to the server.
I took a look at some ESP32-CAM libraries and various communication methods, but none fit what I am trying to do or have enough documentation for a basic understanding. Some were RTSP, FFmpeg, sockets, HLS, and WebRTC. It must be compatible with the ESP32 (C++ and/or Arduino libraries) and with Python packages (with ports allowed to be accessed by GCP if needed).
Lastly, I would like suggestions on how and what to use to transmit the data to the final client.
Below is a quick schematic of the current infrastructure.
[Schematic image]
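Since the cameras have to initiate the connection, one pattern that fits these constraints is to have each ESP32-CAM POST JPEG frames to the Flask server over plain HTTP, and have the server re-serve the latest frame per camera as an MJPEG stream that the React Native client (or a browser) can display. A rough sketch of the server side; the endpoint names, port, and in-memory frame store are assumptions, not from the thread:

```python
# Rough sketch: each ESP32-CAM POSTs JPEG frames; Flask re-serves the latest
# frame per camera as an MJPEG stream. Endpoint names, port, and the
# in-memory frame store are assumptions for illustration.
import threading
import time
from flask import Flask, request, Response

app = Flask(__name__)
latest_frames = {}            # camera_id -> most recent JPEG bytes
lock = threading.Lock()

@app.route("/upload/<camera_id>", methods=["POST"])
def upload(camera_id):
    # The ESP32 sends each captured frame as the raw request body.
    with lock:
        latest_frames[camera_id] = request.get_data()
    return "", 204

def mjpeg(camera_id):
    # Wrap each stored JPEG in multipart boundaries for MJPEG playback.
    while True:
        with lock:
            frame = latest_frames.get(camera_id)
        if frame:
            yield (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n"
                   + frame + b"\r\n")
        time.sleep(0.05)      # ~20 fps cap; avoids a busy loop

@app.route("/stream/<camera_id>")
def stream(camera_id):
    return Response(mjpeg(camera_id),
                    mimetype="multipart/x-mixed-replace; boundary=frame")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080, threaded=True)
```

MJPEG over HTTP is bandwidth-hungry compared to RTSP or WebRTC, but it needs nothing on the ESP32 beyond an HTTP client, and it traverses NAT because the camera always dials out.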

Streaming live audio from audio interface over a local network via browser

This is my first post, so sorry if it's worded wrong or put in the wrong place.
I am currently designing a prototype product which can take an audio input (for example, the master channel from a sound desk) through an audio interface and make it available in a browser-based application for all devices on the local network to access, listen to, and download.
Currently I believe the best way to do this would be via WebRTC hosted on a server, such as a Raspberry Pi or equivalent, where WebRTC is run through Python Tornado/Flask in order to serve it to an HTTP page. I would like some advice on the best way to do this, or whether you believe there is a more efficient approach.
I was going to use the server to also act as a DHCP server as it will be for local environments only. Management features such as routing protocols, logging/debugging information and encryption handlers will also be displayed on this HTTP application.
I have attached a diagram of the topology that I am using, if I have not explained myself clearly.
Thanks.
[Network topology image]
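WebRTC (e.g. via aiortc) would give the lowest latency, but if a second or so of delay is acceptable, a much simpler starting point is chunked HTTP streaming of WAV from the interface, which any browser on the LAN can play in an audio tag. A rough sketch using Flask and the sounddevice package; the sample rate, channel count, and port are assumptions:

```python
# Rough sketch: stream the default input device as an endless WAV file over
# HTTP so any browser on the LAN can play it. Higher latency than WebRTC,
# but far simpler. Sample rate, channels, and port are assumptions.
import struct
import sounddevice as sd
from flask import Flask, Response

app = Flask(__name__)
RATE, CHANNELS = 48000, 2

def wav_header(rate, channels, bits=16):
    # WAV header with a bogus (max) length so the stream never "ends".
    byte_rate = rate * channels * bits // 8
    return (b"RIFF" + struct.pack("<I", 0xFFFFFFFF) + b"WAVEfmt " +
            struct.pack("<IHHIIHH", 16, 1, channels, rate, byte_rate,
                        channels * bits // 8, bits) +
            b"data" + struct.pack("<I", 0xFFFFFFFF))

def pcm_stream():
    yield wav_header(RATE, CHANNELS)
    with sd.RawInputStream(samplerate=RATE, channels=CHANNELS,
                           dtype="int16") as stream:
        while True:
            data, _overflowed = stream.read(1024)
            yield bytes(data)

@app.route("/listen")
def listen():
    return Response(pcm_stream(), mimetype="audio/wav")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000, threaded=True)
```

The same Flask app could also host your management pages (logging, routing, encryption settings), keeping the whole product on one box.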

Creating an Action Server & Client in ROS

I'm trying to set up a ROS Action server & client to handle sending images (encoded as base64 strings) between Python and ROS (with the goal of making the image something other scripts can pull from ROS). Being very new to all of this (Python, Ubuntu, Bash, ROS, etc.), I'm having a very difficult time determining HOW exactly to do this. I think part of the reason is that the ROS wiki tutorials/documentation are linear to a fault, and the process just comes across as convoluted and extraordinarily complicated. Does anyone out there know of any non-ROS-wiki tutorials to help me figure this out? Or can you create a concise step-by-step guide to establishing this system? I've been unable to find much of anything relating to this topic specifically, which makes me think it's either a very uncommon use, or it's super easy and I'm just not at that level yet.
My attempt at a solution is essentially just getting the information flow down. I want Python to be able to read in an image, convert it to bytes (using b64encode), and send it over to ROS to publish as an action. (Thus, a stream of images can be sent with no pause, as would be done with a service, if I understand correctly.) Anything subscribed to the node (or server, however that works, I'll figure it out when I get there) can then see the images and pull them from the action server.
Now, I'm being told an action is the best way to do this. Personally, I don't see why a service wouldn't suffice (and I've at least gotten one of those to work).
Thanks again for any help you all can provide!
Edit: The end application here is for video streaming. A server will grab live video, convert it to images, change those to byte strings, and stream them to the client, which will then publish them to a ROS Action Server.
I think you're overcomplicating it. I wouldn't implement it as an actionlib server, although that is one approach. I've created a few similar systems, and this is how I structured them:
1. Write a node that streams video by publishing images on a topic. You can implement this as an actionlib server, but that's not required. In my case, I used the pre-existing raspicam_node to read a Raspberry Pi's camera. If you want to implement this in Python, read the tutorial on creating a publisher.
2. Create a node that subscribes to your image topic and reads the image from the topic message. Again, the tutorial shows how to create a subscriber. The main difference is that you'd use either CompressedImage or Image from sensor_msgs.msg as your message type.
For the subscriber side, here's an example Python ROS node I wrote that implements an MJPEG streamer. It subscribes to a topic, reads the image data, and re-publishes it via a streaming HTTP response. Even though it's "slow Python", I get less than 1 second of latency.
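To make that publish/subscribe pattern concrete, here's a minimal sketch of both sides in rospy using CompressedImage. The topic name and the grab_jpeg_frame() helper are hypothetical, and each side would run as its own node/process:

```python
#!/usr/bin/env python
# Minimal sketch of the pattern above: one node publishes JPEG frames as
# CompressedImage messages, another subscribes to them. The topic name and
# the grab_jpeg_frame() helper are hypothetical; run each side as a
# separate node/process.
import rospy
from sensor_msgs.msg import CompressedImage

def publisher():
    rospy.init_node("video_publisher")
    pub = rospy.Publisher("/camera/image/compressed",
                          CompressedImage, queue_size=1)
    rate = rospy.Rate(30)                  # target 30 fps
    while not rospy.is_shutdown():
        msg = CompressedImage()
        msg.header.stamp = rospy.Time.now()
        msg.format = "jpeg"
        msg.data = grab_jpeg_frame()       # hypothetical: your capture code
        pub.publish(msg)
        rate.sleep()

def on_frame(msg):
    # msg.data is the raw JPEG byte string, ready to decode or re-serve
    # (e.g. through an MJPEG HTTP endpoint as in the answer above).
    rospy.loginfo("got frame, %d bytes", len(msg.data))

def subscriber():
    rospy.init_node("video_subscriber")
    rospy.Subscriber("/camera/image/compressed", CompressedImage, on_frame)
    rospy.spin()
```

Note there's no base64 step here: ROS messages carry raw bytes natively, so encoding to a string only adds overhead unless you need to cross a non-ROS boundary.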

webrtc without a browser

Right now I am using this project here. It is a Python script that runs a server which uses WebRTC to send the client's/browser's webcam to the server and perform face recognition. What I want to do is the same thing, but with a webcam or Pi cam hooked up to the Pi and without the use of a browser. Is there a way to do it with the current setup, or is there a better method to accomplish this?
You can use a native library and connect it to the face recognition server. You can use either the Google implementation of WebRTC or a more recent implementation (by Ericsson) called OpenWebRTC. The developers of OpenWebRTC are very proud of running their implementation on various pieces of hardware like the Raspberry Pi and iOS devices.
If you don't want to mess with a native library, you can use a Node.js binding for WebRTC (for example node-webrtc or EasyRTC).
If you want a Python implementation of WebRTC, give aiortc a try. It features support for audio, video and data channels and builds upon Python's asyncio framework.
The server example illustrates both how to perform image processing on a video stream and how to send video back to the remote party. Aside from signaling, there is no actual "server" or "client" role in WebRTC, so you can also run aiortc on your Raspberry Pi and have it send video frames to whatever WebRTC endpoint you want.
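For the Pi-side publisher, the aiortc pattern looks roughly like this. The signaling_send/signaling_recv transport and the device path are assumptions you'd replace with whatever your setup uses:

```python
# Rough sketch of a browser-less aiortc publisher: read the Pi's camera via
# V4L2 and stream it to a remote WebRTC peer (e.g. the face recognition
# server). The signaling_send/signaling_recv transport and the device path
# are assumptions -- swap in whatever signaling mechanism you use.
import asyncio
from aiortc import RTCPeerConnection
from aiortc.contrib.media import MediaPlayer

async def publish(signaling_send, signaling_recv):
    pc = RTCPeerConnection()
    # Read frames straight from the V4L2 device; no browser involved.
    player = MediaPlayer("/dev/video0", format="v4l2",
                         options={"video_size": "640x480"})
    pc.addTrack(player.video)

    await pc.setLocalDescription(await pc.createOffer())
    await signaling_send(pc.localDescription)   # hypothetical transport
    # signaling_recv() is expected to return an RTCSessionDescription.
    await pc.setRemoteDescription(await signaling_recv())
    await asyncio.Event().wait()                # keep the stream running
```

Because the Pi initiates the offer, the existing browser-oriented server needs little or no modification: it just sees another WebRTC peer.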

Webcam streaming in Django project

I am currently working on a Django project and would like to add the ability for users to enter a video conference with each other using their webcams. I understand HTML5 has capabilities for this, but I would like to stay away from it for now, as quite a few browsers don't yet support it. Does anyone have suggestions as to how I could do this? Thanks.
It's hard to say "use this one thing" when really it will be a collection of things that meets your individual needs. Here are some links to some resources that should get you started.
OpenCV - has Python wrappers for webcam capture
Tornado - Python web framework and asynchronous networking library
Twisted - event-driven networking engine written in Python
On the client side, you might want to look at getUserMedia.js for handling capturing the video from the camera - it implements a Flash fallback for browsers that don't support the getUserMedia() API.
On the server side, I think Drewness's answer covers it.
The short answer is that you have to use Flash or narrow down which browsers you want to support.
The act of getting the stream from your webcam into the browser is somewhat supported by HTML5 and fully supported by Flash in modern browsers.
The tricky part is streaming that to other people in a call. There are two approaches - have everyone pipe their feed to a central server which then beams the collected feeds down to everyone in the room, or have peers directly connect to one another.
For any kind of real time chat app, you want to be using the latter (the latency of a central server architecture makes it unusable).
On the web your options are WebRTC, RTMFP, HLS, or plugins. WebRTC is fantastic, but it is still a standard in progress. Most significantly, IE does not support it, so if you expect this to be a public-facing web app, a sizable percentage of your users will be out of luck. HLS is an Apple technology that also has patchy support (and isn't particularly efficient).
For RTMFP, have a look at Cirrus/Stratus. They have a sample app that illustrates the technology (BTW, this is what ChatRoulette uses). Of course this requires Flash, but IMO it's your best bet for covering as many platforms as possible without having your users install something first.
The choice of web framework (Django in your case) doesn't matter very much since you don't want your users sending their streams up to the server anyway. The server's job is simply to help with discovery and connection, and for this you should look into a push/comet server like APE.
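To illustrate that "discovery and connection" role: a bare-bones signaling relay in Tornado (mentioned in the other answer) could look like the sketch below. The endpoint path, port, and the naive broadcast-to-everyone room model are assumptions for illustration:

```python
# Bare-bones signaling relay: forwards WebRTC offers/answers and ICE
# candidates between connected peers over WebSockets. The media itself
# flows peer-to-peer and never touches this server. Endpoint, port, and
# the single-room broadcast model are assumptions.
import tornado.ioloop
import tornado.web
import tornado.websocket

clients = set()

class SignalingHandler(tornado.websocket.WebSocketHandler):
    def check_origin(self, origin):
        return True  # allow connections from any origin (demo only)

    def open(self):
        clients.add(self)

    def on_message(self, message):
        # Relay each signaling message to every other connected peer.
        for client in clients:
            if client is not self:
                client.write_message(message)

    def on_close(self):
        clients.discard(self)

app = tornado.web.Application([(r"/signal", SignalingHandler)])

if __name__ == "__main__":
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()
```

Your Django app would keep serving pages and auth as usual; this relay runs alongside it purely so peers can find each other and negotiate their direct connection.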
