I am an intern working on a project to build a WebSDR. I need to create a web interface that lets users observe activity on a frequency range of their choice (with a waterfall graph) and also stream the sound on the chosen frequency.
[image: example of a WebSDR]
For this purpose we have an SDR connected to a local server running GNU Radio. (Here is the block diagram as it stands; it is obviously not final.)
[image: global architecture]
I then wrote server code in Python that retrieves the data sent over UDP by the "UDP Sink" block and, for the moment, simply forwards it as text to the JavaScript client code for display on an HTML page. (I will send you the code if needed.)
[image: server]
[image: client.js and client.html]
I'm stuck now and can't find any resources online for the next steps. I would like to process the data on the server to create an audio stream for the web client, and also to build a waterfall graphic that I would render to an image and send to the client every second, so that the waterfall appears to refresh regularly.
Can you give me some pointers for building these two features? I am also open to other proposals if my approach is not the right one.
Thank you very much,
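For the waterfall side of this, a common approach is to FFT blocks of IQ samples on the server and map each block to one row of pixels. Below is a minimal, hedged sketch assuming NumPy and assuming the UDP payload has already been unpacked into complex samples; the function name is illustrative, not an established API.

```python
# Sketch: turn one block of complex IQ samples (as received over UDP
# from the GNU Radio "UDP Sink") into one row of a waterfall image.
import numpy as np

def waterfall_row(iq: np.ndarray, fft_size: int = 1024) -> np.ndarray:
    """Return one row of dB power values, DC-centered, for the waterfall."""
    block = iq[:fft_size]
    # fftshift puts the center frequency in the middle of the row
    spectrum = np.fft.fftshift(np.fft.fft(block, n=fft_size))
    # small epsilon avoids log10(0) for empty bins
    power_db = 10 * np.log10(np.abs(spectrum) ** 2 + 1e-12)
    return power_db
```

Each second, you can stack the most recent rows, map them through a colormap, encode the result as a PNG (e.g. with Pillow), and send the image to the browser; alternatively, streaming the raw rows over a WebSocket and drawing them on a canvas on the client avoids the image-encoding step entirely.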
I have a client and a server in my project. In the client (C++, Qt), the user uploads his own Excel file, which is sent to my server for processing. My Python AI code runs on the server and makes changes to the Excel file. Each time it makes a change, I want to send the updated version to the client so the user can see the change live. For example, say I have 10 functions on the server side, each of which changes some cells in the Excel file (I can get the indexes of the changed cells). When each function finishes, I will send the changed indexes to the client, and those places will be updated in the client's table.
At first I built the server with PHP, but calling my Python AI code externally (shell_exec) was not a good approach. That's why I want to write the server part in Python.
Is Django the best way for me?
What I've tried with Django:
I wanted to send data continuously from the server to the client with a StreamingHttpResponse object, but even though I used iter_content to receive the incoming data on the client, everything arrived at once when the code finished. When I set the chunk_size of iter_content to a small value, I could get data instantly, but it arrived cut mid-word. So I decided to use WebSockets.
My problem with WebSockets is that I can't send text and byte data at the same time.
While the client uploads the Excel file, I need to send some text data as parameters to my server.
Waiting for your help thank you!
You can send the bytes as a hexadecimal string.
Check out binascii.hexlify in the standard library.
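As a sketch of that idea (the field names here are illustrative, not a fixed protocol), you can bundle the text parameters and the hex-encoded file bytes into a single JSON text frame, so one WebSocket text message carries both:

```python
# Sketch: pack text metadata and binary Excel content into one JSON
# text frame; unpack on the server. Field names are illustrative.
import binascii
import json

def pack_upload(filename: str, params: dict, file_bytes: bytes) -> str:
    """Client side: bundle metadata and hex-encoded bytes into one text frame."""
    return json.dumps({
        "filename": filename,
        "params": params,
        "data": binascii.hexlify(file_bytes).decode("ascii"),
    })

def unpack_upload(message: str):
    """Server side: recover the metadata and the original bytes."""
    obj = json.loads(message)
    return obj["filename"], obj["params"], binascii.unhexlify(obj["data"])
```

Note that hex doubles the payload size; base64 (via the base64 module) is a more compact alternative if bandwidth matters.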
I am planning to write my own live stream server in python using the HLS protocol for a small project.
My idea is to use Amazon S3 for storage and have the python server just output the m3u8 file.
This is all straightforward. Now to the problem: I want to stream live video from a camera over an unreliable network, and if there is congestion the player could end up finishing playback of the last file referenced in the m3u8. Can I somehow mark the stream as a live stream so the player keeps retrying the m3u8 for some time, looking for the next segment? Or how should live streaming with HLS be handled?
Maybe live streaming over HLS is not supported?
This is explicitly allowed in the HLS spec as a "live Playlist". There are a few things you need to be aware of; notably, from Section 6.2.1:
The server MUST NOT change the Media Playlist file, except to:
o Append lines to it (Section 6.2.1).
And if we take a look at Section 4.3.3.4:
The EXT-X-ENDLIST tag indicates that no more Media Segments will be added to the Media Playlist file. It MAY occur anywhere in the Media Playlist file.
In other words, if a playlist does not include the #EXT-X-ENDLIST tag, it's expected that the player will keep loading the playlist from whatever source it originally loaded it from with some frequency, looking for the server to append segments to the playlist.
Most players do this taking current network conditions into account, so they have a chance to fetch new segments before playback catches up. If the server needs to interrupt the segments, or otherwise introduce a gap, it is responsible for inserting an #EXT-X-DISCONTINUITY tag.
Apple provides a more concrete example of a Live Playlist on their developer website.
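For illustration, a live Media Playlist might look like the fragment below: there is no #EXT-X-ENDLIST, and #EXT-X-MEDIA-SEQUENCE advances as old segments are removed from the sliding window. The segment URLs are hypothetical stand-ins for your S3 objects.

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXT-X-MEDIA-SEQUENCE:2680
#EXTINF:6.000,
https://example-bucket.s3.amazonaws.com/live/seg2680.ts
#EXTINF:6.000,
https://example-bucket.s3.amazonaws.com/live/seg2681.ts
#EXTINF:6.000,
https://example-bucket.s3.amazonaws.com/live/seg2682.ts
```

Your Python server regenerates this file as each new segment lands in S3; the player re-fetches it on a cadence derived from the target duration.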
What would be the best way to solve the following problem with Python?
I have a real-time data stream coming into my object storage from a user application (JSON files being stored in Amazon S3).
Upon receiving each JSON file, I have to process the data in the file within a certain time (1 s in this instance) and generate a response that is sent back to the user. The data is processed by a simple Python script.
My issue is that the real-time stream can generate even a few hundred JSON files from user applications at the same time, all of which need to run through my Python script, and I don't know the best way to approach this.
I understand that one way to tackle this would be trigger-based Lambdas that execute a job for each file as it is uploaded from the real-time stream, in a serverless environment; however, this option seems quite expensive compared to having a single server instance running and somehow triggering jobs inside it.
Any advice is appreciated. Thanks.
Serverless can actually be cheaper than using a server. It is much cheaper when there are periods of no activity, because you don't pay for a server doing nothing.
The hardest part of your requirement is sending the response back to the user. If an object is uploaded to S3, there is no easy way to send back a response, and it isn't even obvious which user sent the file.
You could process the incoming file and then store a response back in a similarly-named object, and the client could then poll S3 for the response. That requires the upload to use a unique name that is somehow generated.
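The client side of that polling pattern could be sketched like this. The key-naming scheme and helper names are made up for illustration, and `fetch` stands in for a thin wrapper around `s3.get_object` that returns None when the object does not exist yet:

```python
# Sketch: upload under a unique key, then poll for a similarly-named
# response object. `fetch(key) -> bytes | None` is injected so the
# naming/polling logic is visible without an AWS connection.
import time
import uuid

def upload_key() -> str:
    """Unique name for the uploaded JSON, e.g. uploads/<uuid>.json."""
    return f"uploads/{uuid.uuid4()}.json"

def response_key(upload: str) -> str:
    """Similarly-named object where the processor writes its response."""
    return upload.replace("uploads/", "responses/", 1)

def poll_for_response(fetch, upload: str, timeout_s=5.0, interval_s=0.2):
    """Poll until the response object appears or the timeout elapses."""
    key = response_key(upload)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        body = fetch(key)
        if body is not None:
            return body
        time.sleep(interval_s)
    return None  # caller decides how to handle a timeout
```

With a 1 s response budget, the polling interval has to be short, which is part of why the API Gateway + Lambda route below is usually the cleaner fit.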
An alternative would be for the data to be sent to AWS API Gateway, which can trigger an AWS Lambda function and then directly return the response to the requester. No server required, automatic scaling.
If you wanted to use a server, then you'd need a way for the client to send a message to the server with a reference to the JSON object in S3 (or with the data itself). The server would need to be running a web server that can receive the request, perform the work and provide back the response.
Bottom line: Think about the data flow first, rather than the processing.
First I would say I'm new to Azure.
Most of my cloud experience comes from AWS.
I'm using IoT Hub with a connected device that sends a message every 1 min.
So far, what I did was follow this guide from the Microsoft team:
https://learn.microsoft.com/en-us/azure/iot-hub/iot-hub-live-data-visualization-in-web-apps
Now I want to create something like a Lambda function in AWS; from what I understand, in Azure these are called Azure Functions. I want a function that is triggered every time a new message from my device is received, does something (let's say adds 1), and then sends it back (so I can pull the 'new' data into my backend).
So far, I created a new "Azure Function" (which I guess is like a container for functions?).
Then I tried to create a new function by clicking 'Add new' and choosing the 'IoT Hub (Event Hub)' template. But when I get to my code and try to test it, I get a 404 response. Do I need to create something else? Do I need to create a new 'event' in my IoT Hub? Do I need to create a new 'Event Hub'?
Thanks!
P.S.
I tried to google it, but most of the answers referred to the old portal or were in C#; I'm using Node and Python.
This scenario is covered in this sample. The sample is in JavaScript. It writes the messages to a database, but you can change this part if you want.
To answer some of your other questions:
IoT Hub comes with a built-in Event Hub-compatible endpoint, so there is no need to create anything else! Your Azure Function will use an Event Hub trigger to subscribe to events coming from IoT Hub. By default, every event that a device sends to IoT Hub ends up on that endpoint, so to 'create' a new event, use the device SDK (on a device or on your machine) to send a message to IoT Hub.
You mentioned 'sending it back', but in most cases you don't have to respond to IoT Hub messages. You can for instance store the message in a database and build a web application that reads from that database. You could also get real-time updates in your web application, but that's outside the scope of your question.
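As a hedged sketch of that flow in Python (assuming the Azure Functions Python programming model, where the entry point receives an Event Hub event whose binding name is set in function.json; the "add 1" logic is just the example from the question):

```python
# Sketch: an Event Hub-triggered function that applies the question's
# "add 1" step. The pure logic is separated out so it can run anywhere;
# `event` would be an azure.functions.EventHubEvent at runtime.
import json

def add_one(payload: dict) -> dict:
    """The 'do something' step: increment every numeric value."""
    return {k: v + 1 if isinstance(v, (int, float)) else v
            for k, v in payload.items()}

def main(event):  # event: azure.functions.EventHubEvent (assumption)
    body = json.loads(event.get_body().decode("utf-8"))
    result = add_one(body)
    # Rather than sending `result` back to IoT Hub, persist it to a
    # database or queue that your backend reads, as discussed above.
    return result
```

The binding name ("event" here) must match the "name" field in function.json for the trigger to wire up correctly.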
I have tried to answer your queries below:
I wanted to create a function that gets triggered every time a new message from my device has been received, do something (let's say add 1) and then send it back (so I can pull the 'new' data to my backend).
If you mean sending the data back to IoT Hub, that doesn't seem logical to me, as the manipulated data is not something the device sent. I would rather treat the Azure Function as the backend and save/send the data to a persistent store or a message broker where other consumers can access it.
So far what I did was to create a new "Azure Function" (which I guess it's like a container to functions?) And then I try to create a new function by click 'Add new' and click on the 'IoT Hub (Event Hub)' template. But when I get to my code and try to test it I get a 404 response. Do I need to create something else?
There are a couple of ways to create an Azure Function triggered from the built-in Event Hub-compatible endpoint. Check the image below. Relevant information about built-in endpoints can be found here.
Do I need to create a new 'event' in my IoT Hub?
Not sure exactly what you mean by this. The flow works like this:
Send telemetry messages from the device. A Node.js example can be found here.
Add message routing so that messages arriving at the IoT Hub are delivered to the built-in endpoint. Check the image below for routing telemetry messages to the built-in endpoint.
Similarly, you can route device twin change events and lifecycle events to the built-in endpoint.
Do I need to create a new 'Event Hub'?
Not required, as the built-in endpoint is Event Hub-compatible; check the documentation here. Unless you have a specific need for your business use case, a custom Event Hub endpoint is not required.
But when I get to my code and try to test it I get a 404 response.
Now we need to trigger the Azure Function whenever a new event/message is received on the built-in endpoint. You can do this in a couple of ways:
Azure portal
Command line
VS code
The main point to note is that your Azure Function binding [trigger] must be set correctly in the function.json file. Here is how the trigger looks:
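A minimal function.json along these lines would do it (a sketch following the standard eventHubTrigger binding shape; the binding name is illustrative):

```json
{
  "bindings": [
    {
      "type": "eventHubTrigger",
      "name": "myEventHubMessage",
      "direction": "in",
      "eventHubName": "MyEventHub",
      "connection": "myEventHubReadConnectionAppSetting"
    }
  ]
}
```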
The MyEventHub and myEventHubReadConnectionAppSetting values should be picked up from the Application settings. Check the image below.
I suggest you go through this page for an in-depth understanding of how the Event Hub trigger works with Azure Functions.
Once you have done all the above steps, you can open your Function App in the portal and go to the Functions section of the Function App blade. There you can monitor, code & test, and check integrations for your Azure Function.
I have a Python script running continuously as a WebJob on Azure. Roughly every 3 minutes it generates a new set of data. Once the data is generated, we want to send it to the UI (Angular) in real time.
What would be the ideal (fastest) approach to achieve this?
The generated data is a JSON object containing 50 key-value pairs. I read about SignalR, but can I use SignalR directly with my Python code? Is there another approach, such as sockets?
What you need is a WebSocket: a protocol that allows back-end servers to push data to connected web clients.
There are implementations of WebSocket for python (a quick search found me this one).
Once you have a WebSocket going, you can create a service in your Angular project to handle the messages from your Python service, most likely using observables.
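On the Python side, a minimal push server could be sketched as follows (assuming the third-party `websockets` package, installed with `pip install websockets`; the function names are illustrative):

```python
# Sketch: broadcast each new JSON result from the WebJob to every
# connected browser over WebSocket. The `websockets` package is a
# third-party assumption; the broadcast/handler names are made up.
import asyncio
import json

connected = set()  # currently connected client sockets

def make_payload(data: dict) -> str:
    """Serialize the ~50 key/value pairs produced every 3 minutes."""
    return json.dumps(data)

async def broadcast(data: dict):
    """Send the latest result to every connected client."""
    message = make_payload(data)
    for ws in list(connected):
        await ws.send(message)

async def handler(ws):
    """Track each client for the lifetime of its connection."""
    connected.add(ws)
    try:
        await ws.wait_closed()
    finally:
        connected.discard(ws)

async def main():
    import websockets  # third-party; assumed available
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever
```

The WebJob loop would call broadcast(...) each time a new data set is ready, and the Angular service would wrap the socket in an observable that the UI subscribes to.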
Hopefully this sets you on the right path.