Stream video frames from a Pi in real time - python

I am currently working with a Pi camera on a Raspberry Pi 2B. I would like to stream the captured images (collected continuously while the camera is on) in real time to a server that runs computer vision software. The processing on my server then returns data based on the image recognition, which has to be sent back to the Raspberry Pi.
Since the Raspberry Pi's video-capture and WiFi capabilities are rather limited, I am thinking about the best way to stream such data: both images/video frames from the Pi and (possibly) JSON-formatted, table-like generated data in the opposite direction.
I thought about the following possibilities which I want to implement in Python (easier) or C++ (faster if necessary):
Frames
entire frame in REST API, accessible through GET
streaming via TCP (see the sketch below these lists)
pushing and pulling from a SQL database on the server
Generated data
pushing and pulling from a SQL database on the server
pushing and pulling from a Redis database on the server
REST API on server, collecting via GET on the Pi
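To make the "streaming via TCP" option a bit more concrete, here is the kind of minimal sketch I have in mind; the server address, the port and the use of OpenCV for capture are placeholders rather than anything I have settled on:

```python
# Rough sketch only: stream JPEG-encoded frames over a TCP socket.
# SERVER_HOST/SERVER_PORT are placeholders; capture via OpenCV is one option.
import socket
import struct

import cv2

SERVER_HOST = "192.168.1.10"   # hypothetical server address
SERVER_PORT = 5000             # hypothetical port

def stream_frames():
    cap = cv2.VideoCapture(0)  # Pi camera exposed as a V4L2 device
    sock = socket.create_connection((SERVER_HOST, SERVER_PORT))
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            ok, jpeg = cv2.imencode(".jpg", frame)
            if not ok:
                continue
            data = jpeg.tobytes()
            # Length-prefix each frame so the server knows where it ends.
            sock.sendall(struct.pack("!I", len(data)) + data)
    finally:
        cap.release()
        sock.close()

if __name__ == "__main__":
    stream_frames()
```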
There are definitely other possibilities that I might not know about yet, so you're welcome to recommend your favourite solution. I am really looking forward to hearing about the pros and cons of each technology.
Thanks!

Related

How to stream multiple ESP32-CAM's to a Python Server and serve stream to clients over the cloud

I am developing a live monitoring application using multiple resources. The server is a Python Flask application (running on GCP in a Docker container on Cloud Run). For the database storing users we chose Firestore, and authentication is done using JWT. The client application was created with React Native and receives information from the server via basic HTTP methods in JSON format.
As of now, the whole system works, but it is still missing the live cameras. I have never played around with live video streaming or anything like that.
For hardware, we chose ESP32-CAM modules as they are cheap and easy to maintain.
The problem is that I have no idea how to stream the camera content to the server and serve that to the client. Keep in mind that the cameras will be at various locations with no external IP access, so the server cannot simply make capture requests to the cameras; the cameras themselves will have to push the information to the server.
I took a look at some ESP32-CAM libraries and various communication methods, but none fit what I am trying to do or have enough documentation for a basic understanding. Among them were RTSP, FFmpeg, sockets, HLS and WebRTC. It must be compatible with the ESP32 (C++ and/or Arduino libraries) and Python packages (with ports allowed to be accessed by GCP if needed).
Lastly, I would like suggestions on how and what to use to transmit the data to the final client.
Below is a quick schematic for the current infrastructure.
Schematic
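For reference, the kind of server-side endpoint I imagine needing is something like the sketch below, where each camera POSTs JPEG frames to the Flask application and the latest frame per camera is kept in memory; the route names and the in-memory store are made up and not part of the current system:

```python
# Hypothetical sketch of a push-style endpoint: each ESP32-CAM POSTs JPEG
# frames to the Flask app, which keeps the most recent frame per camera.
from flask import Flask, request, Response

app = Flask(__name__)
latest_frames = {}  # camera_id -> JPEG bytes (in-memory only, placeholder)

@app.route("/cameras/<camera_id>/frame", methods=["POST"])
def upload_frame(camera_id):
    latest_frames[camera_id] = request.get_data()
    return "", 204

@app.route("/cameras/<camera_id>/frame", methods=["GET"])
def get_frame(camera_id):
    frame = latest_frames.get(camera_id)
    if frame is None:
        return "no frame yet", 404
    return Response(frame, mimetype="image/jpeg")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```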

raspberry pi using a webcam to output to a website to view

I am currently working on a project in which I am using a webcam attached to a Raspberry Pi to show what the camera is seeing on a website, using a client/web-server approach in Python. I need to know how to link the Raspberry Pi to a website so it outputs what it sees through the camera, while also making the frames available to the Python script, but I don't know where to start.
If anyone could help me I would really appreciate it.
Many thanks.
So one way to do this with Python would be to capture the camera image with OpenCV in a loop and display it on a website hosted on the Pi using a Python web framework like Flask (or some other framework). However, as others have pointed out, the latency would be so bad that any processing you wish to do would be nearly impossible.
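A minimal sketch of that OpenCV-plus-Flask approach could look like the following; the camera index, port and route are assumptions, and as mentioned the latency will be noticeable:

```python
# Minimal sketch: capture frames with OpenCV and serve them as an MJPEG
# stream from Flask. Camera index and port are assumptions.
import cv2
from flask import Flask, Response

app = Flask(__name__)
camera = cv2.VideoCapture(0)  # first attached webcam

def generate_frames():
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        ok, jpeg = cv2.imencode(".jpg", frame)
        if not ok:
            continue
        # multipart/x-mixed-replace lets the browser swap frames in place
        yield (b"--frame\r\n"
               b"Content-Type: image/jpeg\r\n\r\n" + jpeg.tobytes() + b"\r\n")

@app.route("/video")
def video():
    return Response(generate_frames(),
                    mimetype="multipart/x-mixed-replace; boundary=frame")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```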
If you wish to do this without Python, take a look at mjpg-streamer, which can pull a video feed from an attached camera and display it on a localhost website. The quality is fairly good on localhost. You can then forward this to the web (if needed) using port forwarding or an application like nginx.
If you want to split the recorded stream into two (to forward one to Python and broadcast the other to a website), ffmpeg is your best bet, but the FPS and quality would likely be poor.

How to get live plot data from headless Raspberry Pi?

I'm new to using a Raspberry Pi. Until now I was experimenting with an Arduino.
If I connect an Arduino by USB it is recognized as a COM device, and with the Arduino serial plotter it was easy to live-plot sensor data.
For my next project I want to work with a Raspberry Pi Zero W and Python.
Is it possible to send serial data from a Python script over the charging USB cable, like with the Arduino? If not, what would be the easiest way to send sensor data, e.g. to matplotlib, to plot the data directly?
It is possible. However, I would not recommend using the USB serial port profile. While it works, it is severely limited compared to the alternative, which is using one of the various USB Ethernet gadget modes.
One tutorial for setting this up is e.g. http://www.circuitbasics.com/raspberry-pi-zero-ethernet-gadget/
The result of this is a full network interface that you can use not only to transfer data over a TCP/IP socket, but also to run an SSH connection at the same time to start and monitor your application, or even to develop with an SSH-enabled editor such as Emacs. So the possibilities are much greater than with a single-stream serial setup.
If it absolutely has to be serial, that's of course possible too - follow e.g. this tutorial: https://learn.adafruit.com/turning-your-raspberry-pi-zero-into-a-usb-gadget/serial-gadget
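To give an idea of what the TCP/IP route can look like, here is a rough two-part sketch; the host address, port and the sine-wave "sensor" are placeholders for whatever your setup actually provides. The Pi Zero pushes newline-delimited readings over a socket, and the PC plots them live with matplotlib.

```python
# Sender (runs on the Pi Zero): push readings over the USB network link.
import math
import socket
import time

HOST_PC = "10.0.0.1"                      # hypothetical address of the PC on the gadget network
PORT = 6000                               # hypothetical port

with socket.create_connection((HOST_PC, PORT)) as sock:
    t = 0.0
    while True:
        value = math.sin(t)               # stand-in for a real sensor reading
        sock.sendall(f"{value:.4f}\n".encode())
        time.sleep(0.1)
        t += 0.1
```

```python
# Receiver/plotter (runs on the PC): read newline-delimited values from the
# socket and update a matplotlib plot as they arrive.
import socket

import matplotlib.pyplot as plt

PORT = 6000
values = []

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("", PORT))
server.listen(1)
conn, _ = server.accept()

plt.ion()                                 # interactive mode for live updates
fig, ax = plt.subplots()
line, = ax.plot([], [])

buf = b""
while True:
    data = conn.recv(1024)
    if not data:
        break
    buf += data
    while b"\n" in buf:
        chunk, buf = buf.split(b"\n", 1)
        values.append(float(chunk))
    line.set_data(range(len(values)), values)
    ax.relim()
    ax.autoscale_view()
    plt.pause(0.01)                       # let matplotlib redraw
```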

Efficient way of sending a large number of images from client to server

I'm working on a project where one client needs to take several snapshots from a camera (i.e. it's actually taking a short-duration video, hence a stream of frames), then send all images to a server which then performs some processing on these images and returns a result to the client.
Client and server both run Python 3 code.
The critical part is sending the images.
Some background first: the images are 640×480 JPEG files. JPEG was chosen as a default, but lower-quality encodings can be selected as well. They are captured in sequence by a camera. We thus have approximately ~600 frames to send. Frame size is around 110 KiB.
The client is a Raspberry Pi 3 Model B+. It sends the frames via WiFi to a 5c server. Server and client both reside in the same LAN for the prototype version, but future deployments might be different, both in terms of connectivity medium (wired or wireless) and area (LAN or metro).
I've implemented several solutions for this:
Using Python sockets on the server and the client: I either send one frame directly after capturing it, or I send all images in sequence after the whole stream capture is done.
Using GStreamer: I launch a GStreamer endpoint on my client and send the frames directly to the server as I stream. I capture the stream on the server side with OpenCV compiled with GStreamer support, then save the frames to disk.
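For reference, the server-side capture in solution 2 looks roughly like the sketch below; the exact GStreamer pipeline shown here is only an illustrative guess and depends on how the stream is sent from the client:

```python
# Illustrative sketch: receive an RTP/JPEG stream with OpenCV built with
# GStreamer support and save frames to disk. The pipeline below is a guess.
import cv2

PIPELINE = (
    "udpsrc port=5000 ! application/x-rtp,encoding-name=JPEG,payload=26 ! "
    "rtpjpegdepay ! jpegdec ! videoconvert ! appsink"
)

cap = cv2.VideoCapture(PIPELINE, cv2.CAP_GSTREAMER)

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imwrite(f"frame_{frame_idx:05d}.jpg", frame)
    frame_idx += 1

cap.release()
```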
Now, the issue I'm facing is that even though both solutions work "well" (they get the final job done, which is to send data to a server and receive a result based on some remote processing), I'm convinced there is a better way to send a large amount of data to a server, using either the Python socket library or any other available tools.
All the personal research I have done on the matter has led me to solutions similar to mine, using Python sockets, or to ones that were out of context (relying on backends other than pure Python).
By a better way, I mean:
A solution that saves bandwidth as much as possible.
A solution that sends all data as fast as possible.
For 1., I slightly modified my first solution to archive and compress all captured frames into a .tgz file that I send over to the server. It does decrease bandwidth usage, but it also increases the time spent on both ends (due to the compression/decompression steps). This is particularly noticeable when the dataset is large.
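For completeness, the archiving step is essentially the standard-library approach sketched below (the paths are placeholders):

```python
# Sketch of the archive-and-compress step: pack all captured JPEG frames
# into a single .tgz before sending it over the socket. Paths are placeholders.
import tarfile
from pathlib import Path

FRAMES_DIR = Path("captured_frames")   # hypothetical capture directory
ARCHIVE = Path("frames.tgz")

with tarfile.open(ARCHIVE, "w:gz") as tar:
    for jpg in sorted(FRAMES_DIR.glob("*.jpg")):
        tar.add(jpg, arcname=jpg.name)
```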
For 2., GStreamer gave me a negligible delay between the capture and the reception on my server. However, I have no compression at all and, for the reasons stated above, I cannot really use this library for further development.
How can I send a large number of images from one client to one server with minimal bandwidth usage and delay in Python?
If you want to transfer images as frames, you can use an existing application like MJPEG-Streamer, which encodes images from a webcam interface to JPEG and thereby reduces the image size. If you need a more robust transfer with more advanced encoding, you can use Linux tools like FFmpeg with streaming, which is documented here.
If you want a lighter-weight implementation and want to control the whole stream from your own code, you can use a web framework like Flask and transfer your images directly over HTTP. You can find a good example here.
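As a rough illustration of that HTTP approach, the client side could be as small as the sketch below; the endpoint URL and the captured-frames directory are made up for the example:

```python
# Sketch of pushing frames over plain HTTP: the client POSTs each JPEG to a
# hypothetical Flask endpoint. URL and directory are made up.
import glob

import requests

SERVER_URL = "http://server.local:5000/upload"   # hypothetical endpoint

for path in sorted(glob.glob("captured_frames/*.jpg")):
    with open(path, "rb") as f:
        resp = requests.post(
            SERVER_URL,
            data=f.read(),
            headers={"Content-Type": "image/jpeg"},
        )
    resp.raise_for_status()
```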
If you don't want to stream, you can convert the whole set of images to an encoded video format like H.264 and then transfer the bytes over the network. You can use OpenCV to do this.
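A rough OpenCV sketch of that idea is shown below; whether an H.264 encoder ("avc1") is actually available depends on your OpenCV/FFmpeg build, so the example falls back to the "mp4v" codec:

```python
# Sketch: pack a directory of frames into a single compressed video file
# with OpenCV, then send that one file over the network. Codec availability
# depends on the local OpenCV/FFmpeg build.
import glob

import cv2

frames = sorted(glob.glob("captured_frames/*.jpg"))
first = cv2.imread(frames[0])
height, width = first.shape[:2]

fourcc = cv2.VideoWriter_fourcc(*"mp4v")   # use "avc1" for H.264 if your build supports it
writer = cv2.VideoWriter("frames.mp4", fourcc, 20.0, (width, height))

for path in frames:
    writer.write(cv2.imread(path))

writer.release()
```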
There are also some good libraries written in python like pyffmpeg.
You can also restream the camera over the network using ffmpeg so that the client can read it either way; this will reduce delays.

webrtc without a browser

Right now I am using this project here. It is a Python script that runs a server using WebRTC to send the client's/browser's webcam to the server and perform face recognition. What I want to do is the same thing with a webcam or Pi camera hooked up to the Pi, but without the use of a browser. Is there a way to do it with the current setup, or is there a better method to accomplish this?
You can use the native library and connect it to the face recognition server. You can use either the Google implementation of WebRTC or a more recent implementation by Ericsson called OpenWebRTC. The developers of OpenWebRTC are very proud of running their implementation on various pieces of hardware, like the Raspberry Pi and iOS devices.
If you don't want to mess with a native library, you can use a Node.js binding for WebRTC (for example node-webrtc or easyrtc).
If you want a Python implementation of WebRTC, give aiortc a try. It features support for audio, video and data channels and builds upon Python's asyncio framework.
The server example illustrates both how to perform image processing on a video stream and how to send video back to the remote party. Aside from signaling, there is no actual "server" or "client" role in WebRTC, so you can also run aiortc on your Raspberry Pi and have it send video frames to whatever WebRTC endpoint you want.
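To give a flavour of what this looks like, below is a trimmed-down sketch in the spirit of aiortc's server example: a MediaStreamTrack subclass that receives frames, processes them (a grayscale conversion stands in for real work such as face recognition), and hands them back. Signaling and RTCPeerConnection setup are omitted.

```python
# Sketch in the spirit of the aiortc "server" example: wrap an incoming video
# track, process each frame, and return the processed frame. Signaling and
# RTCPeerConnection setup are omitted here.
import cv2
from aiortc import MediaStreamTrack
from av import VideoFrame

class VideoTransformTrack(MediaStreamTrack):
    kind = "video"

    def __init__(self, track):
        super().__init__()
        self.track = track        # the remote track we receive frames from

    async def recv(self):
        frame = await self.track.recv()
        img = frame.to_ndarray(format="bgr24")

        # Stand-in for real image processing (e.g. face recognition).
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        img = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)

        new_frame = VideoFrame.from_ndarray(img, format="bgr24")
        new_frame.pts = frame.pts
        new_frame.time_base = frame.time_base
        return new_frame
```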
