I am currently working on a project in which a webcam attached to a Raspberry Pi shows what the camera is seeing on a website, using a client and web server approach in Python. However, I need to know how to link the Raspberry Pi to a website so it can output what the camera sees while also making the feed available to the Python script, and I don't know where to start.
If anyone could help me I would really appreciate it.
Many thanks.
So one way to do this with Python would be to capture the camera image using OpenCV in a loop and display it on a website hosted on the Pi using a Python web framework like Flask (or some other framework). However, as others have pointed out, the latency on this would be so bad that any processing you wish to do would be nearly impossible.
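To make that concrete, here is a minimal sketch of the OpenCV + Flask approach, serving an MJPEG stream from the first attached camera (the route name and port are just placeholders):

import cv2
from flask import Flask, Response

app = Flask(__name__)
camera = cv2.VideoCapture(0)  # first attached camera

def generate_frames():
    # Grab frames in a loop, JPEG-encode them, and yield them as an
    # MJPEG (multipart) stream that browsers can render in an <img> tag.
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        ok, jpg = cv2.imencode('.jpg', frame)
        if not ok:
            continue
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + jpg.tobytes() + b'\r\n')

@app.route('/video')
def video():
    return Response(generate_frames(),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

Pointing a browser on the network at http://<pi-address>:5000/video should then show the feed, latency caveats included.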
If you wish to do this without Python, take a look at mjpg-streamer, which can pull a video feed from an attached camera and display it on a localhost website. The quality is fairly good on localhost. You can then expose this to the web (if needed) using port forwarding or a reverse proxy like nginx.
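A typical invocation looks something like this (the plugin options and the www path vary by install, so treat this as a sketch):

mjpg_streamer -i "input_uvc.so -d /dev/video0 -r 640x480 -f 30" -o "output_http.so -p 8080 -w /usr/local/share/mjpg-streamer/www"

The stream is then available at http://localhost:8080 on the Pi.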
If you want to split the recorded stream in two (forwarding one copy to Python and broadcasting the other to a website), ffmpeg is your best bet, but the FPS and quality would likely be terrible.
I am currently working with a Pi camera on a Raspberry Pi 2B. I would like to stream the input images (which I am continuously collecting while it is turned on) in real time to a server that runs computer vision software. The processing on my server then produces data based on the image recognition, which has to be sent back to the Raspberry Pi.
Since the Raspberry Pi's video-capturing and WiFi capabilities are rather limited, I am thinking about the best way to stream such data: images/video frames from the Pi, and (maybe) JSON-formatted, table-like generated data in the opposite direction.
I thought about the following possibilities, which I want to implement in Python (easier) or C++ (faster, if necessary):
Frames
entire frame in REST API, accessible through GET
streaming via TCP (see the sketch after this list)
pushing and pulling from a SQL database on the server
Generated data
pushing and pulling from a SQL database on the server
pushing and pulling from a Redis database on the server
REST API on server, collecting via GET on the Pi
There are definitely other possibilities which I might not know about yet, so you are welcome to recommend your favourite solution. I am really looking forward to hearing about the pros and cons of each technology.
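To illustrate what I mean by the TCP option, here is a rough sketch of the Pi side I have been considering (the server address is a placeholder, and JPEG encoding is just one way to keep frames small):

import socket
import struct
import cv2

SERVER = ('192.168.0.10', 5000)  # placeholder: the CV server's address

cap = cv2.VideoCapture(0)
sock = socket.create_connection(SERVER)
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        ok, jpg = cv2.imencode('.jpg', frame)
        if not ok:
            continue
        data = jpg.tobytes()
        # Length-prefix each frame so the server knows where one ends.
        sock.sendall(struct.pack('>I', len(data)) + data)
finally:
    cap.release()
    sock.close()

And for the generated data, the REST option could be as simple as polling from the Pi (the URL is a placeholder):

import requests

resp = requests.get('http://192.168.0.10:8000/api/results/latest')  # placeholder URL
results = resp.json()  # e.g. the JSON-formatted, table-like data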
Thanks!
I am writing a web app where I would like to:
Display a LOCAL stream from a webcam - that is, I want to stream video from the server (I do not want to open the client's webcam)
Read QR codes and list them in a text box
I have already achieved both of these, but I came across some unexpected behaviour. The functionality I have described works perfectly, but only on localhost. I want to deploy it so it can be accessed from a different computer (it is meant to be used on a robot).
So, to describe my architecture: I am using a Jetson TX2 as the server (the webcam is connected to it). I am using the Django web framework, django-channels, daphne as the web server and nginx as a proxy. I am running daphne and a background process under supervisor.
I am using the worker (background process) to capture frames from the webcam and send them via Redis to the web backend.
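Simplified, the worker does something like this (the key name and Redis location here are just illustrative, not my exact code):

import cv2
import redis

r = redis.Redis(host='localhost', port=6379)
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    ok, jpg = cv2.imencode('.jpg', frame)
    if ok:
        # The web backend reads this key and serves it to the browser.
        r.set('latest-frame', jpg.tobytes())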
When I run it on localhost, everything works as expected. When I set DEBUG to False, add the Jetson's IP to ALLOWED_HOSTS and try to access the site from a different computer, this happens:
I can see that the webcam is accessed because its light turns on. I put a QR code in front of the webcam and the code appears in the text box on the web page! BUT the video is not there (when ALLOWED_HOSTS contains localhost, the video IS there). The background process which collects the camera frames gives the following error:
libv4l2: error setting pixformat: Device or resource busy
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline)
in cvCaptureFromCAM_GStreamer, file /home/nvidia/prototype/opencv/opencv-3.4.0/modules/videoio/$
VIDEOIO(cvCreateCapture_GStreamer (CV_CAP_GSTREAMER_FILE, filename)):raised OpenCV exception:
/home/nvidia/toyota_prototype/opencv/opencv-3.4.0/modules/videoio/src/cap_gstreamer.cpp:890: error: (-2)$
in function cvCaptureFromCAM_GStreamer
I will not post the whole code here, since I do not know where exactly the problem is. Does anyone have an idea where it could be?
Thank you for your help!
So, I figured it out. In my HTML template I had one line linking to the stream address:
<img src="http://127.0.0.1:8000/webcam-stream">
I think you can all see where the problem was: the browser on the other computer was trying to load the stream from its own 127.0.0.1. I needed to change the IP to the host's address.
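For example, a relative URL lets the browser resolve the host itself (assuming the stream is served from the same origin as the page):

<img src="/webcam-stream">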
I have a Raspberry Pi and a USB webcam, and I am trying to send the video to my laptop. I would be happy if there is a way to do it in Python. I have already tried the suggested methods:
1- reading and saving images via OpenCV and then sending them to the laptop
2- using the motion service in Ubuntu
But the problem is that both methods are slow: the final picture on the laptop lags far too much for my work, so it is not a live image.
So I am looking for a way to read the webcam data directly in its raw format (no conversion), send it to the laptop, and process and view it there.
Maybe you're looking for something like PiMotion. You can take snapshots and access them over your LAN (using Apache). Another possibility is something called "motion": it takes snapshots as soon as something moves, but it is able to stream to your web browser too. I'm not 100% sure, but I think you have to install it like this:
sudo apt-get install motion fswebcam
then edit /etc/default/motion by changing start_motion_daemon to "yes", and /etc/motion/motion.conf by setting "daemon" to "on" and "webcam_localhost" to "off".
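That is, the relevant lines should end up looking like this (option names differ slightly between motion versions; newer releases call the last one stream_localhost):

# /etc/default/motion
start_motion_daemon=yes

# /etc/motion/motion.conf
daemon on
webcam_localhost off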
Now you should be able to access your webcam at
http://192.168.2.103:8081
(your Pi probably has another IP)
I'm going to test it and leave a comment with the right way if I am successful.
Greetings, Marvin
Right now I am using this project here. It is a Python script that runs a server using WebRTC to send the client's/browser's webcam stream to the server and perform face recognition. What I want to do is the same thing, but with a webcam or Pi camera hooked up to the Pi and without the use of the browser. Is there a way to do it with the current setup, or is there a better method to accomplish this?
You can use a native library and connect it to the face recognition server. You can use either the Google implementation of WebRTC or a more recent implementation (by Ericsson) called OpenWebRTC. The developers of OpenWebRTC are very proud of running their implementation on various pieces of hardware, like Raspberry Pi and iOS devices.
If you don't want to mess with a native library, you can use a Node.js binding for WebRTC (for example node-webrtc or easyrtc).
If you want a Python implementation of WebRTC, give aiortc a try. It features support for audio, video and data channels and builds upon Python's asyncio framework.
The server example illustrates both how to perform image processing on a video stream and how to send video back to the remote party. Aside from signaling, there is no actual "server" or "client" role in WebRTC, so you can also run aiortc on your Raspberry Pi and have it send video frames to whatever WebRTC endpoint you want.
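As a starting point, a custom video track feeding camera frames into a peer connection might look like this (signaling is omitted, and the OpenCV capture is just one way to get frames):

import cv2
from aiortc import RTCPeerConnection, VideoStreamTrack
from av import VideoFrame

class CameraTrack(VideoStreamTrack):
    def __init__(self):
        super().__init__()
        self.cap = cv2.VideoCapture(0)

    async def recv(self):
        # next_timestamp() paces the track and supplies pts/time_base.
        pts, time_base = await self.next_timestamp()
        ok, frame = self.cap.read()
        if not ok:
            raise RuntimeError('camera read failed')
        video_frame = VideoFrame.from_ndarray(frame, format='bgr24')
        video_frame.pts = pts
        video_frame.time_base = time_base
        return video_frame

pc = RTCPeerConnection()
pc.addTrack(CameraTrack())
# ...offer/answer signaling with the remote peer goes here...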
I would like to play around with coding an application that could capture the desktop or a section of the screen (with height and width variables for resolution) and stream it to an RTMP server (rtmp://server.com/live).
I saw something called rtmplite, but the description of this package is:
"This is a python implementation of the Flash RTMP server"
So I would ultimately like to achieve the following, but will implement it in pieces as I go along, without getting overwhelmed by the project scope:
Make a connection to the RTMP server (with authentication where needed) to a channel on ustream.com, justin.tv/twitch.tv, own3d.tv, etc.
Select a height/width section of the desktop (or the entire desktop) and stream it live to that channel, as if I were using Flash Media Live Encoder.
Really I just want to make my own Python-based FMLE or Xsplit application so I can stream live on my own without using those applications.
Any libraries or information you can point me to that explain this FMLE-clone type of process would be helpful! Thanks
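For reference, the rough shape I have in mind is grabbing the screen from Python and piping raw frames into an ffmpeg subprocess that pushes to the RTMP server; all the parameters below are placeholders:

import subprocess
from PIL import ImageGrab  # screen capture; platform support varies

WIDTH, HEIGHT, FPS = 1280, 720, 15
RTMP_URL = 'rtmp://server.com/live/streamkey'  # placeholder stream key

ffmpeg = subprocess.Popen([
    'ffmpeg',
    '-f', 'rawvideo', '-pix_fmt', 'rgb24',
    '-s', '%dx%d' % (WIDTH, HEIGHT), '-r', str(FPS),
    '-i', '-',                      # raw frames arrive on stdin
    '-c:v', 'libx264', '-preset', 'veryfast',
    '-f', 'flv', RTMP_URL,
], stdin=subprocess.PIPE)

while True:
    frame = ImageGrab.grab().convert('RGB').resize((WIDTH, HEIGHT))
    ffmpeg.stdin.write(frame.tobytes())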
I did some RTMP streaming from Python for the wiidiaplayer project: http://wiidiaplayer.org. This is by no means a full solution, but at least some RTMP functionality has been implemented in Python there.
Unfortunately, it has been a long time since I touched that code; if you have any questions, feel free to ask them, though I'm not sure how many answers I will be able to provide.