I am currently running a simulation that produces 200k points per second. I want to send this data in as close to real time as possible, with minimal delay. The problem is that sending a single packet over LoRaWAN has delays, and some packets are not delivered, which is natural.
How can I send these 200k points in a single packet? For example, after 1 second I would send all the data (200k points) to the network in one packet.
BTW, I am using Python.
The use case you have is not one for LoRaWAN. LoRaWAN is for low-data-rate, low-power applications over wide areas. 200k points per second is, even at a single byte per point (which I must assume, given the name, it is not), at least 200 kB/s, or roughly 720 MB per hour. That is way too much.
This is never going to work. You need to move to WiFi/Bluetooth for those kinds of transfers, but your range is going to decrease dramatically.
I'm currently trying to use a sensor to measure a process's consistency. The sensor output varies wildly in its actual reading but displays features that are statistically different across three categories [dark, appropriate, light], with dark and light being out-of-control items. For example, one output could read approximately 0V, the process repeats, and the sensor then reads 0.6V. Both the 0V reading and the 0.6V reading could represent an in-control process. There is a consistent difference between sensor readings for out-of-control items vs in-control items. An example set of an in-control item can be found here and an example set of two out-of-control items can be found here.

Because of the wildness of the sensor and the characteristic shapes of each category's data, I think the best way to assess the readings is to process them with an AI model. This is my first foray into creating a model that makes a categorical prediction given a time-series window. I haven't been able to find anything on the internet with my searches (I'm possibly looking for the wrong thing). I'm certain that what I'm attempting is feasible and has a strong case for an AI model, I'm just not certain what the optimal way to build it is.

One idea that I had was to treat the data similarly to how an image is treated by an object detection model, with the readings as the input array and the category as the output, but I'm not certain that this is the best way to go about solving the problem. If anyone can help point me in the right direction or give me a resource, I would greatly appreciate it. Thanks for reading my post!
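To make the "treat the window like an image" idea concrete, one common approach is a 1D convolutional classifier over fixed-length windows. Below is a minimal sketch using Keras; the names (WINDOW_LEN, the 0/1/2 label encoding) are illustrative assumptions, and the random arrays are placeholders standing in for the real recordings:

import numpy as np
import tensorflow as tf

WINDOW_LEN = 512          # assumed number of samples per reading window
NUM_CLASSES = 3           # dark / appropriate / light

# Placeholder data: replace with real windows (shape: n_windows x WINDOW_LEN x 1)
X = np.random.randn(100, WINDOW_LEN, 1).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=100)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW_LEN, 1)),
    tf.keras.layers.Conv1D(16, 7, activation="relu"),   # learns local waveform shapes
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(32, 7, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=16)

The usual search term for this kind of problem is "time series classification".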
I used an Arduino (a Teensyduino) to intermittently print strings through Serial. These strings are basically integers ranging from 1 to 1000,
e.g.
Serial1.print("456");
delay(1000);
Serial1.print("999");
At the same time, I directly record the voltage output from the serial transmission pin, using some data acquisition system sampling at 30000 Hz. The voltage recording occurs over the span of an hour, where multiple strings are printed at random times during the hour. These strings are printed typically several seconds apart.
I have a recording of the entire voltage signal across an hour, which I will analyse offline in Python. Given this vector of 0-5V values across an hour, how do I detect all occurrences of strings printed, and also decode all the strings from the voltage values? e.g. retrieve '456' and '999' from the example above
Okay, if you want to do it from scratch, you're doing this wrong.
First thing you need to know is the transmission protocol. If you can transmit whatever you want from the Teensy, then you've got yourself what is called an oracle and you're already halfway to the goal: start transmitting different bit sequences (0xFF, 0xF0, 0x0F, 0x00) and see what gets transmitted along the line, and how. Since the Teensy is almost certainly using straight 9600 8N1, you are now at exactly this stage (you could reproduce the oscilloscope picture from the voltage data if you wanted).
Read those answers and you'll have the rest of the road to working Python code that translates voltage levels into bits and then into characters.
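As a rough illustration of what that Python code could look like, here is a minimal sketch assuming the line idles high near 5 V, 9600 baud 8N1 as suggested above, and the 30 kHz sampling rate from the question; the threshold and names are illustrative:

import numpy as np

FS = 30_000                  # sampling rate of the acquisition system (from the question)
BAUD = 9600                  # assumed UART setting; adjust if Serial1.begin() used another rate
SAMPLES_PER_BIT = FS / BAUD  # ~3.1 samples per bit; coarse, but workable
THRESHOLD = 2.5              # volts: logic high above, logic low below

def decode_8n1(voltage):
    """Decode 8N1 frames from a TTL-level TX trace that idles high."""
    high = np.asarray(voltage) > THRESHOLD
    chars = []
    i = 1
    while i < len(high):
        if high[i - 1] and not high[i]:                     # falling edge = start bit
            byte = 0
            for b in range(8):                              # 8 data bits, LSB first,
                pos = int(i + (b + 1.5) * SAMPLES_PER_BIT)  # each sampled mid-bit
                if pos < len(high) and high[pos]:
                    byte |= 1 << b
            chars.append(chr(byte))
            i += int(10 * SAMPLES_PER_BIT)                  # skip start + 8 data + stop bits
        else:
            i += 1
    return "".join(chars)

Note that 30 kHz against 9600 baud is only about 3 samples per bit, so mid-bit sampling is fragile; if you control the Teensy, a lower baud rate will make the recording much easier to decode.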
If you don't have an oracle, it gets more complicated. My own preference in that case would be to get a pliable Teensy of my own and do the first part there. Otherwise, you have to first read the post above, then work it backwards while looking at the data recordings, which will be much more difficult.
In a pinch, in the oracle scenario, you could even send yourself all codes from '0' to '9' - or from 0x00 to 0xFF, or from '0000' to '9999' if that's what it takes - then use a convolution to match the codes against whatever's on the wire, and that would get you the decoded signal without even knowing which protocol was used (I did it once and can guarantee that it can be done. It was back in the middle ages and the decoder ran on an 80286, so it took about four or five seconds to decode each character's millisecond-burst using a C program. Nowadays you could do it in real time, I guess).
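A rough sketch of that convolution idea, assuming you have "oracle" recordings of each candidate code to use as templates (function and variable names are made up):

import numpy as np

def match_codes(recording, templates):
    """Locate the best-matching known code in a voltage recording.

    `templates` maps a code string (e.g. '456') to the voltage trace captured
    while deliberately transmitting that code (the oracle recordings).
    Returns the best code and the sample offset where it matched.
    """
    rec = recording - recording.mean()
    best_code, best_score, best_pos = None, -np.inf, 0
    for code, tmpl in templates.items():
        t = tmpl - tmpl.mean()
        # Cross-correlate; for hour-long recordings use an FFT-based method
        # (e.g. scipy.signal.fftconvolve) instead of np.correlate.
        corr = np.correlate(rec, t, mode="valid") / (np.linalg.norm(t) + 1e-12)
        pos = int(np.argmax(corr))
        if corr[pos] > best_score:
            best_code, best_score, best_pos = code, corr[pos], pos
    return best_code, best_pos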
I have a multithreaded simulation code doing some calculation in discrete time steps. So for each time stamp I have a set of variables which I want to store and access later on.
Now my question is:
What is a good/the best data structure given the following conditions?
has to be thread-safe (so I guess an ordered dictionary might be best here). I have one thread writing data; other threads only read it.
I want to find data later given a time interval whose bounds are not necessarily multiples of the time-step size.
E.g.: I simulate values from t=0 to t=10 in steps of 1. If I get a request for all data in the range of t=5.6 to t=8.1, I want to get the simulated values such that the requested times are within the returned time range. In this case all data from t=5 to t=9.
the time-step size can vary from run to run. It is constant within a run, so the created data set always has a consistent time-step size. But I might want to restart the simulation with a better time resolution.
the number of time stamps calculated might be rather large (up to a million, maybe)
From searching the net I get the impression that some tree-like structure implemented as a dictionary might be a good idea, but I would also need some kind of iterator/index to go through the data, since I always want to fetch data from time intervals. I have no real idea what something like that could look like ...
There are posts about finding the key in a dictionary closest to a given value, but these always involve a lookup over all the keys in the dictionary, which might not be so great for a million keys (that is how it feels to me, at least).
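For what it's worth, since the step size is constant within a run, the interval lookup from the example above can be done with plain index arithmetic rather than any key search. A minimal sketch with illustrative names, assuming one writer and a lock for the readers:

import math
import threading

class TimeSeriesStore:
    """One writer appends values at a fixed time step; readers query intervals.

    Illustrative sketch: the value at index i corresponds to t = start + i * dt.
    """

    def __init__(self, start, dt):
        self.start = start
        self.dt = dt
        self._values = []
        self._lock = threading.Lock()

    def append(self, value):
        with self._lock:
            self._values.append(value)

    def query(self, t_from, t_to):
        # Widen outward so the requested times fall inside the returned range,
        # e.g. (5.6, 8.1) with dt=1 -> indices 5..9 -> times 5.0..9.0
        with self._lock:
            i0 = max(0, math.floor((t_from - self.start) / self.dt))
            i1 = min(len(self._values) - 1, math.ceil((t_to - self.start) / self.dt))
            return [(self.start + i * self.dt, self._values[i])
                    for i in range(i0, i1 + 1)]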
I have a numpy array which is continuously growing in size, with a function adding data to it every so often. This array is actually sound data, which I would like to play, not after the array is done, but while it is still growing. Is there a way I can do that using pyaudio? I have tried implementing a callback, but without success: it gives me choppy audio with delay.
You could perhaps intercept the event or pipeline that appends the data to your array.
In order to get rid of the choppiness you will need some kind of intermediate buffer: imagine that data arrives at random intervals; sometimes several data points are added simultaneously and sometimes there is no data for a period of time, but on longer timescales there is some average inflow rate. This is standard practice in streaming services to improve video quality.
Adjust the buffer size and this should eliminate the choppiness. It will of course introduce an initial delay in playing the data, i.e. it won't be "live", but it can be close to live with less choppiness.
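A minimal sketch of that buffering idea with PyAudio's callback mode, assuming mono float32 audio at 44.1 kHz; feed() is an illustrative name, and your producer would call it with each newly generated block of samples instead of (or in addition to) appending to the array:

import queue
import numpy as np
import pyaudio

RATE = 44100            # assumed sample rate of the generated audio
buf = queue.Queue()     # intermediate buffer between the producer and playback
pending = np.zeros(0, dtype=np.float32)

def feed(samples):
    """Call this wherever the array grows, with each newly generated block."""
    buf.put(np.asarray(samples, dtype=np.float32))

def callback(in_data, frame_count, time_info, status):
    global pending
    # Accumulate queued blocks until a full output block is available;
    # pad with silence if the producer has fallen behind (underrun).
    while len(pending) < frame_count and not buf.empty():
        pending = np.concatenate([pending, buf.get()])
    out, pending = pending[:frame_count], pending[frame_count:]
    if len(out) < frame_count:
        out = np.pad(out, (0, frame_count - len(out)))
    return (out.tobytes(), pyaudio.paContinue)

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paFloat32, channels=1, rate=RATE,
                output=True, stream_callback=callback, start=False)
# ... let feed() pre-fill a second or two of audio (the initial delay), then:
stream.start_stream()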
I have a Python program that continuously reads radar data from 8 different radars on the network, parses the data and writes it to JSON files. Back when I was working with just 1 radar, the flow of data received by my program looked much faster than it does now that I am working with 8 radars. What I mean by this is (I am going to give random numbers for the sake of explaining): when I worked with 1 radar, I would read 10 sentences from the radar per second, whereas now that I work with 8 radars, I read about 2 sentences from each radar per second. I read data from the radars much more slowly, since parsing and writing takes time and there are more radars to work with. So I definitely parse and write the data at a slower pace than it gets created and sent through the network. That means that at some point my socket buffer will overflow, right? Is there a way to check if it ever overflows? And if it does, how can I fix this problem? Would I need a virtual machine per 'x' number of radars? Are there other fixes?
Thanks.
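For illustration, one common pattern is to separate reading from parsing, so the sockets are drained as fast as possible and any backlog becomes visible in one place. If the radars use TCP, the kernel does not silently drop data when its receive buffer fills; the sender is throttled instead, but the backlog still accumulates somewhere. A minimal sketch assuming TCP connections, with made-up addresses and a bounded queue whose size tells you how far behind the parser is:

import socket
import threading
import queue

# Illustrative addresses; replace with the real radar hosts/ports
RADARS = [("192.168.0.%d" % (10 + i), 4001) for i in range(8)]
raw = queue.Queue(maxsize=10_000)   # bounded, so a growing backlog is visible

def reader(addr):
    """Drain one radar's socket as fast as possible and queue the raw bytes."""
    with socket.create_connection(addr) as s:
        while True:
            data = s.recv(4096)
            if not data:
                break
            raw.put(data)           # blocks if the parser falls too far behind

def parser():
    """Parse and write JSON in one place, off the hot receive path."""
    while True:
        chunk = raw.get()
        # ... parse `chunk` and append to the JSON files here ...
        # raw.qsize() tells you how far behind parsing is at any moment

for addr in RADARS:
    threading.Thread(target=reader, args=(addr,), daemon=True).start()
threading.Thread(target=parser, daemon=True).start()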