I have a multithreaded simulation code doing some calculation in discrete time steps. So for each time stamp I have a set of variables which I want to store and access later on.
Now my question is:
What is a good (or the best) data structure given the following conditions?
It has to be thread-safe (so I guess an ordered dictionary might be best here). One thread writes the data; the other threads only read it.
I want to find data later given a time interval which is not necessarily a multiple of the time-step size.
E.g.: I simulate values from t=0 to t=10 in steps of 1. If I get a request for all data in the range of t=5.6 to t=8.1, I want to get the simulated values such that the requested times are within the returned time range. In this case all data from t=5 to t=9.
The time-step size can vary from run to run. It is constant within a run, so the created data set always has a consistent time-step size, but I might want to restart the simulation with a better time resolution.
The number of time stamps calculated might be rather large (maybe up to a million).
From searching the net I get the impression that some tree-like structure implemented as a dictionary might be a good idea, but I would also need some kind of iterator/index to go through the data, since I always want to fetch data by time interval. I have no real idea what something like that could look like.
There are posts about finding the key in a dictionary closest to a given value, but those always involve a lookup over all the keys in the dictionary, which might not scale well to a million keys (that is my feeling, at least).
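Since there is a single writer appending time stamps in increasing order, one simple option is a sorted list of times queried with binary search rather than a scan over all keys. Here is a minimal sketch (class and method names are my own invention) using `bisect` and a lock, which widens the requested interval to the enclosing stored time stamps, as in the t=5.6 to t=8.1 example:

```python
import bisect
import threading

class TimeSeriesStore:
    """Single-writer, multi-reader store; times are appended in increasing order."""

    def __init__(self):
        self._lock = threading.Lock()
        self._times = []   # sorted by construction, since appends are in time order
        self._values = []

    def append(self, t, value):
        with self._lock:
            self._times.append(t)
            self._values.append(value)

    def query(self, t_start, t_end):
        """Return (time, value) pairs whose time range encloses [t_start, t_end]."""
        with self._lock:
            # index of the last stored time <= t_start (widen to the left)
            lo = max(bisect.bisect_right(self._times, t_start) - 1, 0)
            # index of the first stored time >= t_end (widen to the right)
            hi = bisect.bisect_left(self._times, t_end)
            return list(zip(self._times[lo:hi + 1], self._values[lo:hi + 1]))
```

With times 0..10 in steps of 1, `query(5.6, 8.1)` returns the entries for t=5 through t=9, matching the example above; each lookup is O(log n), so a million time stamps is not a problem.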
I am working on an application which generates a couple of hundred datasets every ten minutes. These datasets consist of a timestamp, and some corresponding values from an ongoing measurement.
(Almost) Naturally, I use pandas dataframes to manage the data in memory.
Now I need to do some work with historical data (e.g. averaging or summation over days/weeks/months, but not limited to that), and I need to update those accumulated values rather frequently (ideally also every ten minutes), so I am wondering: what would be the most access-efficient way to store the data on disk?
So far I have been storing the data for every ten-minute interval in a separate CSV file and then reading the relevant files into a new dataframe as needed. But I feel there must be a more efficient way, especially when it comes to working with a larger number of datasets. Computation cost and memory are not the central issue, as I am running the code on a comparatively powerful machine, but I still don't want to (and most likely can't afford to) read all the data into memory every time.
It seems to me that the answer should lie within the built-in serialization functions of pandas, but from the docs and my google findings I honestly can't really tell which would fit my needs best.
Any ideas how I could manage my data better?
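One option that supports both frequent appends and selective reads without loading everything into memory is a single on-disk database rather than many CSV files. pandas can talk to SQLite (in the standard library) directly; the sketch below (file and table names are made up for the example) appends one ten-minute batch and then reads back only a time slice:

```python
import sqlite3
import pandas as pd
from pathlib import Path

# Start fresh for this demo (hypothetical file name)
Path("measurements.db").unlink(missing_ok=True)

# One ten-minute batch of measurements, indexed by timestamp
df = pd.DataFrame(
    {"value": [1.0, 2.0, 3.0]},
    index=pd.date_range("2024-01-01", periods=3, freq="10min"),
)
df.index.name = "timestamp"

conn = sqlite3.connect("measurements.db")
# Append the latest batch; an index on the timestamp keeps range queries fast
df.to_sql("data", conn, if_exists="append", index=True)
conn.execute("CREATE INDEX IF NOT EXISTS idx_ts ON data (timestamp)")

# Later: pull only the rows needed for, e.g., a daily average,
# without reading the whole history into memory
day = pd.read_sql(
    "SELECT * FROM data WHERE timestamp >= ? AND timestamp < ?",
    conn,
    params=("2024-01-01 00:00:00", "2024-01-02 00:00:00"),
    index_col="timestamp",
    parse_dates=["timestamp"],
)
```

If you have PyTables installed, pandas' `HDFStore` with `format="table"` offers a similar append-plus-query workflow (`store.select("data", where=...)`) and is another common choice for this pattern.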
I have a series of points in time, say ~60 events over the course of an hour. I'd like to figure out how to segment this series into "sub-series" based on how close in time the events occur.
The thing is, I am extremely hesitant to define an arbitrary interval to split on; i.e., I don't want to say, let's group events at every ten minutes. In the same way, I'm hesitant to define a "break threshold"; I don't want to say, if five minutes has passed without an event, start a new segment.
(For the record, the events have no inherent value for the purposes of this grouping. It's simply a series of points in time.)
Is there any way to dynamically segment a time series? I researched a few different things, and the most promising line of investigation was Bayesian blocks, but I couldn't work out how to adapt it to what I need.
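One data-driven alternative (not Bayesian blocks itself, just a crude stand-in I am sketching here) is to let the inter-event gaps choose their own threshold: sort the gaps and cut at the largest jump between consecutive sorted gaps, so no interval or break threshold is picked by hand:

```python
import numpy as np

def segment_events(times):
    """Split sorted event times into bursts without a hand-picked threshold.

    The threshold is derived from the data: the biggest jump in the sorted
    inter-event gaps separates "within-burst" gaps from "between-burst" gaps.
    """
    times = np.sort(np.asarray(times, dtype=float))
    gaps = np.diff(times)
    if len(gaps) < 2:
        return [times.tolist()]
    sorted_gaps = np.sort(gaps)
    k = np.argmax(np.diff(sorted_gaps))           # largest jump in sorted gaps
    threshold = (sorted_gaps[k] + sorted_gaps[k + 1]) / 2
    segments, start = [], 0
    for i, g in enumerate(gaps):
        if g > threshold:                          # a "between-burst" gap
            segments.append(times[start:i + 1].tolist())
            start = i + 1
    segments.append(times[start:].tolist())
    return segments
```

This heuristic assumes the gap distribution really is bimodal (short within-burst gaps, long between-burst gaps); if it is not, a principled change-point method such as Bayesian blocks (implemented in astropy as `astropy.stats.bayesian_blocks`) is the more robust route.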
I'm looking for a neat way to detect particular events in time series data.
In my case, an event might consist of a value changing by more than a certain amount from one sample to the next, or it might consist of a sample being (for example) greater than a threshold while another parameter is less than another threshold.
e.g. imagine a time series list in which I've got three parameters; a timestamp, some temperature data and some humidity data:
time_series = []
# time, temp, humidity
time_series.append([0.0, 12.5, 87.5])
time_series.append([0.1, 12.8, 92.5])
time_series.append([0.2, 12.9, 95.5])
Obviously a useful time series would be much longer than this.
I can obviously loop through this data, checking each row (and potentially the previous row) to see whether it meets my criteria. But I'm wondering if there's a neat library or technique I can use to search time series data for particular events, especially where an event might be defined as a function of a number of contiguous samples, or as a function of samples in more than one column.
Does anyone know of such a library or technique?
You might like to investigate pandas, which includes time series tools see this pandas doc.
I think what you are trying to do is take "slices" through the data. [This link on earthpy.org](http://earthpy.org/pandas-basics.html) has a nice introduction to using time series data with pandas, and if you follow the examples it shows how to take slices, which I think would correspond to pulling out parameters that exceed thresholds, etc. in your data.
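Using the sample rows from the question, both kinds of criteria can be expressed as vectorised boolean masks in pandas, with no explicit loop (the 0.25 jump and the 90/12.85 thresholds are arbitrary values chosen just for this illustration):

```python
import pandas as pd

# The three rows from the question, as a DataFrame
df = pd.DataFrame(
    [[0.0, 12.5, 87.5], [0.1, 12.8, 92.5], [0.2, 12.9, 95.5]],
    columns=["time", "temp", "humidity"],
)

# Event type 1: temperature changed by more than 0.25 since the previous sample
jump = df["temp"].diff().abs() > 0.25

# Event type 2: humidity above one threshold while temperature is below another
combo = (df["humidity"] > 90) & (df["temp"] < 12.85)

# Rows satisfying either criterion
events = df[jump | combo]
```

`diff()` handles the "compared to the previous row" case, and `shift()` generalises this to any function of contiguous samples; `rolling()` covers window-based criteria.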
Here's the scenario. Let's say I have data from a visual psychophysics experiment, in which a subject indicates whether the net direction of motion in a noisy visual stimulus is to the left or to the right. The atomic unit here is a single trial and a typical daily session might have between 1000 and 2000 trials. With each trial are associated various parameters: the difficulty of that trial, where stimuli were positioned on the computer monitor, the speed of motion, the distance of the subject from the display, whether the subject answered correctly, etc. For now, let's assume that each trial has only one value for each parameter (e.g., each trial has only one speed of motion, etc.). So far, so easy: trial ids are the Index and the different parameters correspond to columns.
Here's the wrinkle. With each trial are also associated variable length time series. For instance, each trial will have eye movement data that's sampled at 1 kHz (so we get time of acquisition, the x data at that time point, and y data at that time point). Because each trial has a different total duration, the length of these time series will differ across trials.
So... what's the best means for representing this type of data in a pandas DataFrame? Is this something that pandas can even be expected to deal with? Should I go to multiple DataFrames, one for the single valued parameters and one for the time series like parameters?
I've considered adopting a MultiIndex approach where level 0 corresponds to trial number and level 1 corresponds to time of continuous data acquisition. Then all I'd need to do is repeat the single-valued columns to match the length of the time series on that trial. But I immediately foresee two problems. First, the number of single-valued columns is large enough that extending each one of them to match the length of the time series seems very wasteful, if not impractical. Second, and more importantly, if I want to do basic groupby-type analyses (e.g. getting the proportion of correct responses at a given difficulty level), this will give biased (incorrect) results, because whether each trial was correct or wrong will be repeated as many times as needed to match the length of the time series on that trial (which is irrelevant to the computation of the mean across trials).
I hope my question makes sense and thanks for suggestions.
I've also just been dealing with this type of issue. I have a bunch of motion-capture data that I've recorded, containing x- y- and z-locations of several motion-capture markers at time intervals of 10ms, but there are also a couple of single-valued fields per trial (e.g., which task the subject is doing).
I've been using this project as a motivation for learning about pandas so I'm certainly not "fluent" yet with it. But I have found it incredibly convenient to be able to concatenate data frames for each trial into a single larger frame for, e.g., one subject:
subject_df = pd.concat(
    [pd.read_csv(t) for t in subject_trials],
    keys=range(len(subject_trials)))
Anyway, my suggestion for how to combine single-valued trial data with continuous time recordings is to duplicate the single-valued columns down the entire index of your time recordings, like you mention toward the end of your question.
The only thing you lose by denormalizing your data in this way is that your data will consume more memory; however, provided you have sufficient memory, I think the benefits are worth it, because then you can do things like group individual time frames of data by the per-trial values. This can be especially useful with a stacked data frame!
As for removing the duplicates for, e.g., trial outcome analysis, it's straightforward to collapse back to one row per trial:
df.groupby(level=0).outcome.first()
assuming your data frame has an "outcome" column and the trial number is level 0 of the index; aggregating these per-trial values avoids the length bias you describe.
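To make the bias issue from the question concrete, here is a toy version (invented data, two trials of different lengths) showing the naive mean versus the per-trial mean on a stacked MultiIndex frame:

```python
import numpy as np
import pandas as pd

# Hypothetical toy data: two trials of different lengths, stacked with a
# MultiIndex (level 0 = trial, level 1 = sample within trial)
frames = []
for n, outcome in [(3, 1), (5, 0)]:
    frames.append(pd.DataFrame({
        "t": np.arange(n) / 1000.0,   # 1 kHz sample times
        "x": np.zeros(n),             # stand-in for eye-position data
        "outcome": outcome,           # single-valued column, duplicated per row
    }))
df = pd.concat(frames, keys=[0, 1], names=["trial", "sample"])

# A naive mean over all rows is biased by trial length...
biased = df["outcome"].mean()         # 3/8 here, not 1/2

# ...so collapse to one value per trial before aggregating
per_trial = df.groupby(level="trial")["outcome"].first()
unbiased = per_trial.mean()           # 0.5
```

The correct trial-level answer is 1/2 (one correct trial of two), which the groupby-then-aggregate pattern recovers regardless of how long each trial's time series is.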
I am trying to investigate differences between runs/experiments in a continuously logged data set. I am taking a fixed subset of a few months of this data set and analysing it to come up with an estimate of when each run was started. I have these start times stored in a series.
With this I then chop the data up into 30-hour chunks (the approximate time between runs) and put them into a dictionary:
data = {}
for time in times:
    timeNow = np.datetime64(time.to_datetime())
    time30hr = timeNow + np.timedelta64(30 * 60 * 60, 's')
    data[time] = df[timeNow:time30hr]
So now I have a dictionary of dataframes, indexed by start time, and each one contains all of my data for a run, plus some extra to make sure I have everything for every run. But to compare two runs I need a common x value to stack them on top of each other. Every run is different, and the point I want to consider "the same" varies depending on what I'm looking at. For the example below I have used the largest value in the dataset as the point to "pivot" on.
for time in data:
    A = data[time]
    # Find the time of the maximum value (take the first if there is more than one)
    maxTime = A[A['Value'] == A['Value'].max()]['DateTime'].iloc[0]
    # Keep from 12 hours before to 12 hours after that point
    new = A[maxTime - datetime.timedelta(hours=12):maxTime + datetime.timedelta(hours=12)]
    # Add a column with time relative to the pivot point
    new['RTime'] = new['DateTime'] - maxTime
    # Plot values against this relative time
    plot(new['RTime'], new['Value'])
This yields the kind of overlaid graph I want, except that I can't get a decent legend to tell which run was which and work out how much variation there is. I believe half my problem is that I'm iterating over a dictionary of dataframes, which is causing issues.
Could someone recommend how to organise this better (a dictionary of dataframes is all I could get to work)? I've thought of using a hierarchical dataframe and, instead of indexing by run time, assigning a set of identifiers to the runs (the actual time is contained within the dataframes themselves, so I have no problem losing the assumed start time) and then plotting it with a legend.
My final aim is to have a dataset and methodology that lets me investigate the similarity and differences between runs using different "pivot points", and to produce a graph of each one which I can then interrogate (or at least tell which data set is which, so I can interrogate the data directly), but I couldn't get past various errors when creating it.
I can upload a set of the data to a CSV if required, but I'm not sure of the best place to upload it to. Thanks
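The hierarchical-dataframe idea can be built directly from the existing dictionary with `pd.concat` and `keys`, which replaces the start-time index with run identifiers; this is a sketch with invented stand-in data, since the real frames come from the chunking loop above:

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for the dict of per-run DataFrames from the question
data = {
    "run_a": pd.DataFrame({"RTime": np.arange(5), "Value": np.arange(5) * 1.0}),
    "run_b": pd.DataFrame({"RTime": np.arange(5), "Value": np.arange(5) * 2.0}),
}

# One hierarchical frame: level 0 is a run identifier, level 1 the row.
# The assumed start times are dropped, as the question suggests is acceptable.
runs = pd.concat(data.values(), keys=range(len(data)), names=["run", "row"])

# Selecting a single run back out is a .loc away
first_run = runs.loc[0]

# groupby on the run level hands each run back together with its identifier,
# which is exactly what a plot legend needs
for run_id, group in runs.groupby(level="run"):
    label = f"run {run_id}"  # pass as label= to your plot call, then call legend()
```

With everything in one frame, comparing runs (e.g. aligning on different pivot points, or computing spread across runs at each `RTime`) becomes a groupby or unstack operation instead of a loop over a dictionary.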