I have an Objective-C app in which a callback hands me video data as uint8_t bytes plus a length:
-(void)videoFeed:(DJIVideoFeed *)videoFeed didUpdateVideoData:(NSData *)videoData {
    [[DJIVideoPreviewer instance] push:(uint8_t *)videoData.bytes length:(int)videoData.length];
    [_videoExtractor parseVideo:(uint8_t *)videoData.bytes
                         length:(int)videoData.length
                      withFrame:^(VideoFrameH264Raw *frame) {
        // frame handling happens in this block
    }];
}
I have Socket.IO implemented in the application, and I send this data to a Python server running OpenCV.
I have two problems:
1. I don't know how to take this data and actually turn it into a video.
2. I want to send the H.264 data through the socket, but the VideoFrameH264Raw type is not accepted by the Socket.IO emit call in Objective-C:
[self.socket emit:@"streaming" with:@[frame]];
I have tried to read the data with VideoCapture in OpenCV, but with no luck.
I am also a little confused about the case where I receive single frames through the socket and need to display them as a video in Python.
Hmm, it depends on how the data is encoded.
If it is using the internal encoder (.mov), there is not much you can do right now: it is a proprietary compression algorithm, and Apple doesn't give a crap about a single researcher.
If it is the DJI internal data feed, they use an FFmpeg-like pipeline, so you shouldn't have much trouble capturing it with a UDP server.
To confirm this, you can try treating it as H.264 directly, or inspect the raw video data: an MP4 container starts with a few recognizable control bytes, while a raw H.264 stream starts with NAL start codes.
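To make that check concrete, here is a minimal sketch (not from the answer itself) of a UDP receiver, assuming the raw bytes are pushed to a placeholder port; it dumps the feed to a file and looks for the H.264 Annex-B start code 00 00 00 01:

import socket

HOST, PORT = "0.0.0.0", 5000           # placeholder address the app would push to
START_CODE = b"\x00\x00\x00\x01"        # Annex-B NAL unit start code

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((HOST, PORT))

with open("feed.h264", "wb") as out:
    for _ in range(2000):               # grab a couple of thousand packets and stop
        data, _addr = sock.recvfrom(65535)
        out.write(data)
        if data.startswith(START_CODE):
            print("Annex-B start code found -> looks like raw H.264")

If ffplay -f h264 feed.h264 plays the dump, the feed really is raw H.264 and can be decoded on the server side.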
Related
I am making an app that controls a Tello drone using Kivy and Buildozer. The video-stream method in djitellopy, a library that supports the Tello drone, uses cv2.VideoCapture, but that approach does not work when the project is built as an Android app.
So I am trying to receive and decode the packets with recvfrom directly.
def receive_video_thread(self):
    # receive one UDP packet from the drone's video port
    res_string, ip = self.client_socket_video.recvfrom(2048)
    img = np.frombuffer(res_string, dtype=np.uint8)  # np.fromstring is deprecated
    self.frame = cv2.imdecode(img, cv2.IMREAD_COLOR)
This is a simplified version of the code, used to check whether packet reception and decoding work. The problematic part is "self.frame = cv2.imdecode(img, cv2.IMREAD_COLOR)" on the last line.
When I print self.frame here, it is None. While investigating this, I found out that the Tello drone's video is encoded as H.264, but I don't know what to use for H.264 decoding.
What should I do to solve this problem (for example, how do I install a module that supports H.264 decoding)? All other parts of the project work without problems. If there is anything else I need to check and report, please let me know; I will respond as soon as I read the comments.
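Not part of the original thread, but a hedged sketch of one common approach: cv2.imdecode only understands still-image formats, so the H.264 byte stream has to go through a video decoder instead, for example PyAV (the av package):

import av

codec = av.CodecContext.create("h264", "r")    # raw H.264 decoder context

def decode_chunk(chunk):
    """Feed one UDP payload to the decoder; returns a list of BGR frames (may be empty)."""
    frames = []
    for packet in codec.parse(chunk):          # split the byte stream into packets
        for frame in codec.decode(packet):     # a packet may or may not complete a picture
            frames.append(frame.to_ndarray(format="bgr24"))
    return frames

The key point is that H.264 is a stream codec: a single 2048-byte UDP payload rarely contains a complete picture, so frames only come out once enough packets have been fed in.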
I'm working on a project where one client needs to take several snapshots from a camera (i.e. it's actually taking a short-duration video, hence a stream of frames), then send all images to a server which then performs some processing on these images and returns a result to the client.
Client and server are all running Python3 code.
The critical part is the image sending one.
Some background first: the images are 640x480 JPEG files. JPEG was chosen as a default; lower-quality encodings can be selected as well. They are captured in sequence by a camera, so we have approximately ~600 frames to send. Frame size is around 110 KiB.
The client is a Raspberry Pi 3 Model B+. It sends the frames over Wi-Fi to a 5c server. Server and client both reside in the same LAN for the prototype version, but future deployments might be different, both in terms of connectivity medium (wired or wireless) and area (LAN or metro).
I've implemented several solutions for this:
Using Python sockets on the server and the client: I either send each frame right after it is captured, or send all images in sequence once the whole stream capture is done (a minimal sketch of this approach follows after this list).
Using GStreamer: I launch a GStreamer endpoint on my client and send the frames directly to the server as I stream. I capture the stream on the server side with OpenCV compiled with GStreamer support, then save the frames to disk.
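For reference, a minimal sketch of that socket approach (host, port, and the way frames are produced are placeholders, not the original code): each JPEG is sent with a 4-byte length prefix so the server knows where one frame ends and the next begins.

import socket
import struct

def send_frames(jpeg_frames, host="192.168.1.10", port=5001):
    """Send each JPEG byte string prefixed with its 4-byte big-endian length."""
    with socket.create_connection((host, port)) as sock:
        for data in jpeg_frames:
            sock.sendall(struct.pack("!I", len(data)) + data)
        sock.sendall(struct.pack("!I", 0))   # zero length marks end of stream

On the server side the reader loops: read exactly 4 bytes, unpack the length, then read exactly that many bytes to recover one JPEG.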
Now, the issue I'm facing is that even though both solutions work 'well' (they get the 'final' job done, which is to send data to a server and receive a result based on some remote processing), I'm convinced there is a better way to send a large amount of data to a server, using either the Python socket library or any other available tool.
All the research I have done on the matter led me either to solutions similar to mine, using Python sockets, or to solutions that were out of scope (relying on backends other than pure Python).
By a better way, I mean:
1. A solution that saves as much bandwidth as possible.
2. A solution that sends all the data as fast as possible.
For 1., I slightly modified my first solution to archive and compress all captured frames into a .tgz file that I send over to the server. It does decrease the bandwidth usage, but it also increases the time spent on both ends (because of the compression/decompression), which is especially noticeable when the dataset is large.
For 2., GStreamer gave me a negligible delay between capture and reception on the server. However, there is no compression at all, and for the reasons stated above I cannot really use this library for further development.
How can I send a large number of images from one client to one server with minimal bandwidth usage and delay in Python?
If you want to transfer images as individual frames, you can use an existing app such as MJPEG-Streamer, which encodes images from a webcam interface to JPEG and thereby reduces their size. If you need a more robust transfer with more advanced encoding, you can use Linux tools such as FFmpeg for streaming, which is documented here.
If you want a lighter implementation and full control of the stream from your own code, you can use a web framework like Flask and transfer your images directly over HTTP. You can find a good example here.
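For illustration, a minimal sketch of such a Flask route serving frames as a multipart/x-mixed-replace (MJPEG) response; the camera index, port, and route name are assumptions:

import cv2
from flask import Flask, Response

app = Flask(__name__)
camera = cv2.VideoCapture(0)              # assumed local camera; any frame source works

def mjpeg():
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        ok, jpg = cv2.imencode(".jpg", frame)
        if not ok:
            continue
        yield (b"--frame\r\n"
               b"Content-Type: image/jpeg\r\n\r\n" + jpg.tobytes() + b"\r\n")

@app.route("/stream")
def stream():
    return Response(mjpeg(), mimetype="multipart/x-mixed-replace; boundary=frame")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)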
If you don't want to stream, you can convert the whole set of images into an encoded video format such as H.264 and then transfer the bytes over the network; you can use OpenCV to do this (a sketch follows below).
There are also some good libraries written in Python, such as pyffmpeg.
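A hedged sketch of that "encode first, then send one file" route with OpenCV's VideoWriter; the glob pattern, output name, and FourCC are assumptions (which codecs are available depends on the local OpenCV/FFmpeg build):

import glob
import cv2

def images_to_video(pattern="frames/*.jpg", out_path="capture.mp4", fps=20.0):
    paths = sorted(glob.glob(pattern))
    first = cv2.imread(paths[0])
    h, w = first.shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for p in paths:
        writer.write(cv2.imread(p))
    writer.release()
    return out_path   # send this single file over the socket instead of ~600 JPEGs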
You can also re-stream the camera over the network using FFmpeg so that the client can read it either way; this reduces delays.
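As a rough illustration of that FFmpeg re-streaming idea (the input URL and destination are placeholders; -c copy avoids re-encoding):

import subprocess

# Re-publish the camera stream as MPEG-TS over UDP without re-encoding.
subprocess.run([
    "ffmpeg",
    "-i", "rtsp://camera.local/stream",   # placeholder source
    "-c", "copy",
    "-f", "mpegts",
    "udp://192.168.1.20:1234",            # placeholder destination
])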
I'm trying to create a low latency stream (sub second) using GStreamer and Python's aiortc library for creating a WebRTC peer for the stream data. I've modified the server example from aiortc and can send an audio file and hook into the video response but what classes/process do I need to use to leverage a GStreamer RTSP video stream?
Do I need to decode the samples with something like an appsink and send each frame individually or is there an aiortc class that can take the RTSP uri and stream the result for me to the peer?
I'm currently running with GStreamer 1.10.4.
This seems like a promising start, but you will need to do some NAL unit parsing. Also I believe this implementation decodes and re-encodes each frame, but if the encoded video formats are compatible, you ought to be able to send it without these extra steps.
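If no GStreamer-specific handling is required, one option worth mentioning (a hedged sketch, not from the answer above) is aiortc's bundled MediaPlayer, which wraps PyAV/FFmpeg rather than GStreamer and can open an RTSP URI directly; the URI below is a placeholder:

from aiortc import RTCPeerConnection
from aiortc.contrib.media import MediaPlayer

pc = RTCPeerConnection()
# MediaPlayer decodes the RTSP stream; aiortc re-encodes it for the WebRTC peer.
player = MediaPlayer("rtsp://camera.local:8554/stream")   # placeholder URI
if player.video:
    pc.addTrack(player.video)

This path decodes and re-encodes the video, which is exactly the overhead mentioned above; avoiding it means forwarding the encoded H.264 yourself with a custom track and the NAL-unit parsing noted in the answer.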
I'm writing a web service to stream video (e.g., from a web cam), and I want to know what options I have.
For example, one naive way I can think of is to periodically fetch a jpeg from the source and display it.
But I also know that some media types, like MJPEG, can be streamed over HTTP. However, I do not know exactly how that is achieved technically. Any example would be welcome.
UPDATE:
I found the link below, which implements a live video stream over HTTP using MJPEG and Python WSGI.
Streaming MJPEG over HTTP with gstreamr and python – WSGI version (GIST)
Forget about GStreamer or FFmpeg.
Create a RAM disk.
Install OpenCV; in an infinite loop, read() each frame and write it to the RAM disk as a JPEG (e.g. with Pillow).
Install https://snapcraft.io/mjpg-streamer, point it at the RAM disk, and enable delete-after-read.
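A minimal sketch of that capture loop (the RAM disk mount point and camera index are assumptions, and cv2.imwrite is used instead of Pillow for brevity):

import time
import cv2

RAMDISK = "/mnt/ramdisk"          # assumed mount point of the RAM disk
cap = cv2.VideoCapture(0)         # assumed camera index

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # overwrite a single JPEG on the RAM disk; mjpg-streamer serves whatever it finds there
    cv2.imwrite(f"{RAMDISK}/frame.jpg", frame)
    time.sleep(1 / 30)            # rough frame pacing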
I have streaming video from a web cam that is accessible via a public IP I opened up on my machine: I can go to http://ip/webview and view the video (I did have to install the ActiveX hidvrocx.cab plugin). The video source itself is H.264 and, according to Wireshark, is running on port 9000 over TCP.
What I would like to do is re-stream the raw video, but at this point I would settle for converting it to FLV so I can open it with VLC or something...
According to the technical support team of the webcam (Swann), "netviewer" (some third party software) can view the video feed so there is no encryption / special DRM.
I'm new to this whole thing streaming video world, so this is what I've tried / am considering:
- I've tried loading the stream in VLC at tcp://public_ip:9000, but according to Swann support VLC cannot view the source because it is raw H.264. Is it possible to use VLC to convert this raw H.264 into something readable by media players? Possibly something like:
vlc src --sout flv
- Is it possible to use the Python VideoCapture library? Is it strictly for capturing video directly from a device, or does it work over the network?
I'm completely lost right now, so even seeing the raw stream in a media player of any type would be an accomplishment.
TLDR; I have a streaming video source from a webcam over a public IP that I would ultimately like to "redistribute" in its original format (H.264) or as FLV. How do I accomplish this?
vlc <input_stream> --sout=#std{access=http,mux=ts,dst=<server_ip>:<port>/}
This command will re-stream your input stream over HTTP with the TS muxer.
You can also try RTP/RTSP:
vlc <input_stream> --sout=#rtp{sdp=rtsp://<server_ip>:<port>/XXX.sdp}
This will re-stream using the RTSP protocol.
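If you just want to verify the re-stream (or feed it to code), OpenCV built with FFmpeg support can usually open either output directly; a hedged sketch with placeholder addresses:

import cv2

# HTTP/TS output from the first command, or the RTSP/SDP URL from the second.
cap = cv2.VideoCapture("http://server_ip:port/")   # placeholder address
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("restream", frame)
    if cv2.waitKey(1) == 27:       # Esc to quit
        break
cap.release()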