Grouping Points in Time - python

I have a series of points in time, say about ~60 events over the course of an hour. I'd like to figure out how to segment this series into "sub-series" based on how close in time the events occur.
The thing is, I am extremely hesitant to define an arbitrary interval to split on; i.e., I don't want to say, let's group events at every ten minutes. In the same way, I'm hesitant to define a "break threshold"; I don't want to say, if five minutes has passed without an event, start a new segment.
(For the record, the events have no inherent value for the purposes of this grouping. It's simply a series of points in time.)
Is there any way to dynamically segment a time series? I researched a few different things, and the most promising line of investigation was Bayesian blocks, but I couldn't work out how to adapt it to what I need.
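One possible direction, sketched below on the assumption that the astropy package is available: its bayesian_blocks function implements the Bayesian blocks idea mentioned above and returns bin edges that adapt to the local event density, so no fixed interval or break threshold has to be chosen up front.

import numpy as np
# Assumes astropy is installed; this is a sketch, not part of the original question.
from astropy.stats import bayesian_blocks

# Hypothetical data: ~60 event times (in minutes) over one hour.
rng = np.random.default_rng(0)
events = np.sort(rng.uniform(0, 60, 60))

# fitness='events' treats the input as a point process; the returned edges
# adapt to where the events bunch up, with no hand-picked threshold.
edges = bayesian_blocks(events, fitness='events')

# Assign each event to a block using the interior edges.
labels = np.digitize(events, edges[1:-1])
for block in np.unique(labels):
    print(f"segment {block}: {events[labels == block]}")

The grouping then falls out of the detected block edges rather than any interval chosen in advance.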

Related

data structure to find values in an interval

I have a multithreaded simulation code doing some calculation in discrete time steps. So for each time stamp I have a set of variables which I want to store and access later on.
Now my question is:
What is a good/the best data structure given following conditions?:
has to be thread-safe (so I guess an ordered dictionary might be best here). I have one thread writing data; other threads only read it.
I want to find data later given a time interval which is not necessarily a multiple of the time-step size.
E.g.: I simulate values from t=0 to t=10 in steps of 1. If I get a request for all data in the range of t=5.6 to t=8.1, I want to get the simulated values such that the requested times are within the returned time range. In this case all data from t=5 to t=9.
the time-step size can vary from run to run. It is constant within a run, so the created data set always has a consistent time-step size. But I might want to restart the simulation with a better time resolution.
the number of time stamps calculated might be rather large (maybe up to a million)
From searching the net I get the impression that some tree-like structure implemented as a dictionary might be a good idea, but I would also need some kind of iterator/index to go through the data, since I always want to fetch data by time interval. I have no real idea what something like that could look like ...
There are posts about finding the key in a dictionary closest to a given value. But these always involve looking through all the keys in the dictionary, which might not scale well to a million keys (that is how I feel, at least).
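A minimal sketch of one option, under the assumptions stated in the question (one writer thread, readers only read, samples arrive in time order); the class and method names here are made up for illustration. A plain sorted list of times plus the standard-library bisect module gives O(log n) interval lookups without scanning all keys, and a lock keeps single-writer/multi-reader access safe.

import bisect
import threading

class TimeSeriesStore:
    """Hypothetical store: one writer appends samples in time order."""

    def __init__(self):
        self._lock = threading.Lock()
        self._times = []   # stays sorted because the writer appends increasing t
        self._values = []  # values[i] belongs to times[i]

    def append(self, t, value):
        with self._lock:
            self._times.append(t)
            self._values.append(value)

    def query(self, t_start, t_end):
        """Return samples whose time range covers [t_start, t_end]."""
        with self._lock:
            # Widen to the enclosing time steps: the sample at or before t_start
            # and the sample at or after t_end, as in the t=5.6..8.1 example.
            lo = max(bisect.bisect_right(self._times, t_start) - 1, 0)
            hi = bisect.bisect_left(self._times, t_end) + 1
            return list(zip(self._times[lo:hi], self._values[lo:hi]))

# Usage matching the example in the question: t = 0..10 in steps of 1,
# query 5.6..8.1 returns the samples from t=5 to t=9.
store = TimeSeriesStore()
for t in range(11):
    store.append(t, {"x": t * t})  # arbitrary per-step variables
print(store.query(5.6, 8.1))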

python create random list of numbers along with a fixed increment

I have a long-running (several hours) script that periodically sends queries to a server. The server is very sensitive to load, so the queries are sparse (not more than 1 every 3 minutes).
The server will always take exactly 10 minutes to process the query. So I can check the result of query 1 any time after 10 minutes of sending it.
So there are two types of operations, "sending query" and "checking result of query". I want all operations to happen at random intervals (subject to the constraint that there are at least 3 minutes between adjacent operations).
Following the advice in this answer (https://stackoverflow.com/a/51918697/10690958), I can generate a time series of integers such that there is a gap of at least 3 between them. Let's call this series 1.
I can also generate a similar time series of status-checking queries (3 minutes between them). Let's call this series 2.
Now series 1 is randomly spaced. Series 2 is also randomly spaced. But there is a correlation between series 1 and 2, i.e. "response time" = "query time" + 10 minutes.
Thus the union of series 1 and 2 won't be random. Furthermore, there is a (very small) possibility of collision. For example, query 2 might be going out exactly when one is checking the result of query 1.
Is there a way to make the union of the two sequences also perfectly random, as well as avoid the possibility of collisions? Ideally all traffic to the server (whether query or status check) should be at perfectly random intervals.
I realize that the title is not very descriptive, but could not figure out a better way to describe the situation. Please edit if you think you have a better description.
For example:
query_sequence=set([3,8,12,21,37])
check_result_sequence=set([13,18,22,31,47])
server_traffic=query_sequence.union(check_result_sequence)
But their union (server_traffic) is not random, since
check_result_sequence=query_sequence+10
P.S.:
Generating time points with more granularity might help with reducing the probability of collisions (as mentioned in the comment). As regards randomness of the union of the two sequences, I don't see any satisfactory solution. What I finally decided to do was
check_result_sequence = query_sequence + 10 + (5*random.random())
This adds a random "jitter" to the responses sequence, and so should help with reducing correlation between the two sequences.
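A rough sketch of that jitter approach (the helper name and the 5-minute jitter cap are illustrative, not from the original post): generate query times with at least the 3-minute gap, place each check at least 10 minutes after its query with random jitter, and nudge a check later whenever it would land within 3 minutes of an operation that is already scheduled, which also removes the collision risk.

import random

MIN_GAP = 3          # minimum minutes between any two server operations
PROCESS_TIME = 10    # the server needs 10 minutes before a result can be checked

def make_schedule(n_queries, max_extra=5.0):
    # Query times: consecutive queries are between 3 and 3+max_extra minutes apart.
    queries, t = [], 0.0
    for _ in range(n_queries):
        t += MIN_GAP + random.uniform(0, max_extra)
        queries.append(t)

    # Check times: at least 10 minutes after the query, plus random jitter,
    # pushed later if they would collide with anything already scheduled.
    scheduled = sorted(queries)
    checks = []
    for q in queries:
        c = q + PROCESS_TIME + random.uniform(0, max_extra)
        while any(abs(c - s) < MIN_GAP for s in scheduled):
            c += random.uniform(1, max_extra)
        checks.append(c)
        scheduled.append(c)
        scheduled.sort()
    return queries, checks, sorted(scheduled)

queries, checks, server_traffic = make_schedule(5)
print(server_traffic)  # merged schedule, every pair of operations >= 3 minutes apart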
1) I hardly see the necessity to randomize the interval between the requests
2) You could do a single list: a list which represents the available moments to submit a request
server_traffic=set([3,8,12,15,19,23,26,30,34,40])
for x in range(4):
    send_query(server_traffic)
while True:
    send_result_request(server_traffic)
    send_query(server_traffic)
Then each time, you decide whether you want to send a query or check a result, according to your own policy. This should make everything easier.

Detecting custom events in time series data in Python

I'm looking for a neat way to detect particular events in time series data.
In my case, an event might consist of a value changing by more than a certain amount from one sample to the next, or it might consist of a sample being (for example) greater than a threshold while another parameter is less than another threshold.
e.g. imagine a time series list in which I've got three parameters: a timestamp, some temperature data and some humidity data:
time_series = []
# time, temp, humidity
time_series.append([0.0, 12.5, 87.5])
time_series.append([0.1, 12.8, 92.5])
time_series.append([0.2, 12.9, 95.5])
Obviously a useful time series would be much longer than this.
I can obviously loop through this data checking each row (and potentially the previous row) to see if it meets my criteria, but I'm wondering if there's a neat library or technique that I can use to search time series data for particular events - especially where an event might be defined as a function of a number of contiguous samples, or a function of samples in more than one column.
Does anyone know of such a library or technique?
You might like to investigate pandas, which includes time series tools; see this pandas doc.
I think that what you are trying to do is take "slices" through the data. [This link on earthpy.org](http://earthpy.org/pandas-basics.html) has a nice introduction to using time series data with pandas, and if you follow down through the examples it shows how to take out slices, which I think would correspond to pulling out parameters that exceed thresholds, etc. in your data.
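For instance, both kinds of criteria described in the question can be written as boolean masks on a pandas DataFrame rather than an explicit loop over rows; the thresholds below are made up for illustration.

import pandas as pd

# Same layout as the time_series example above: time, temp, humidity.
df = pd.DataFrame(
    [[0.0, 12.5, 87.5], [0.1, 12.8, 92.5], [0.2, 12.9, 95.5]],
    columns=["time", "temp", "humidity"],
)

# Event type 1: temperature changes by more than 0.25 from one sample to the next.
jump = df["temp"].diff().abs() > 0.25

# Event type 2: temperature above one threshold while humidity is below another.
combo = (df["temp"] > 12.7) & (df["humidity"] < 93.0)

events = df[jump | combo]  # rows where either kind of event occurred
print(events)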

Pandas and the best method for representing variable-length time-series

Here's the scenario. Let's say I have data from a visual psychophysics experiment, in which a subject indicates whether the net direction of motion in a noisy visual stimulus is to the left or to the right. The atomic unit here is a single trial and a typical daily session might have between 1000 and 2000 trials. With each trial are associated various parameters: the difficulty of that trial, where stimuli were positioned on the computer monitor, the speed of motion, the distance of the subject from the display, whether the subject answered correctly, etc. For now, let's assume that each trial has only one value for each parameter (e.g., each trial has only one speed of motion, etc.). So far, so easy: trial ids are the Index and the different parameters correspond to columns.
Here's the wrinkle. With each trial are also associated variable length time series. For instance, each trial will have eye movement data that's sampled at 1 kHz (so we get time of acquisition, the x data at that time point, and y data at that time point). Because each trial has a different total duration, the length of these time series will differ across trials.
So... what's the best means for representing this type of data in a pandas DataFrame? Is this something that pandas can even be expected to deal with? Should I go to multiple DataFrames, one for the single valued parameters and one for the time series like parameters?
I've considered adopting a MultiIndex approach where level 0 corresponds to trial number and level 1 corresponds to time of continuous data acquisition. Then all I'd need to do is repeat the single-valued columns to match the length of the time series on that trial. But I immediately foresee two problems. First, the number of single-valued columns is large enough that extending each one of them to match the length of the time series seems very wasteful, if not impractical. Second, and more importantly, if I want to do basic groupby-type analyses (e.g. getting the proportion of correct responses at a given difficulty level), this will give biased (incorrect) results, because whether each trial was correct or wrong will be repeated as many times as necessary for its length to match the length of the time series on that trial (which is irrelevant to the computation of the mean across trials).
I hope my question makes sense and thanks for suggestions.
I've also just been dealing with this type of issue. I have a bunch of motion-capture data that I've recorded, containing x- y- and z-locations of several motion-capture markers at time intervals of 10ms, but there are also a couple of single-valued fields per trial (e.g., which task the subject is doing).
I've been using this project as a motivation for learning about pandas so I'm certainly not "fluent" yet with it. But I have found it incredibly convenient to be able to concatenate data frames for each trial into a single larger frame for, e.g., one subject:
import pandas as pd

subject_df = pd.concat(
    [pd.read_csv(t) for t in subject_trials],
    keys=[i for i, _ in enumerate(subject_trials)])
Anyway, my suggestion for how to combine single-valued trial data with continuous time recordings is to duplicate the single-valued columns down the entire index of your time recordings, like you mention toward the end of your question.
The only thing you lose by denormalizing your data in this way is that your data will consume more memory; however, provided you have sufficient memory, I think the benefits are worth it, because then you can do things like group individual time frames of data by the per-trial values. This can be especially useful with a stacked data frame!
As for removing the duplicates for doing, e.g., trial outcome analysis, it's really straightforward to do this:
df.outcome.unique()
assuming your data frame has an "outcome" column.
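If you do adopt the MultiIndex layout described in the question, one way to keep trial-level analyses unbiased (a sketch with hypothetical column names) is to collapse back to one row per trial with first() before aggregating the single-valued columns.

import numpy as np
import pandas as pd

# Level 0 = trial id, level 1 = sample time within the trial; the
# single-valued columns are duplicated down each trial's rows.
idx = pd.MultiIndex.from_tuples(
    [(0, 0.000), (0, 0.001), (1, 0.000), (1, 0.001), (1, 0.002)],
    names=["trial", "t"],
)
df = pd.DataFrame(
    {
        "eye_x": np.random.randn(5),             # the continuous recording
        "difficulty": [1, 1, 2, 2, 2],           # repeated per-trial value
        "correct": [True, True, False, False, False],
    },
    index=idx,
)

# One row per trial, so repeated rows don't bias trial-level statistics.
per_trial = df.groupby(level="trial")[["difficulty", "correct"]].first()
print(per_trial.groupby("difficulty")["correct"].mean())  # proportion correct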

Splitting time data into "runs" in order to plot and examine differences

I am trying to investigate differences between runs/experiments in a continuously logged data set. I am taking a fixed subset of a few months of this data set and then analysing it to come up with an estimate of when each run was started. I have this sorted into a series of times.
With this I then chop the data up into 30 hour chunks (approximate time between runs) and then put it into a dictionary:
data = {}
for time in times:
    timeNow = np.datetime64(time.to_datetime())
    time30hr = np.datetime64(time.to_datetime()) + np.timedelta64(30*60*60, 's')
    data[time] = df[timeNow:time30hr]
So now I have a dictionary of dataframes, indexed by StartTime, and each one contains all of my data for a run, plus some extra to ensure I have it all for every run. But to compare two runs I need a common X value to stack them on top of each other. Every run is different, and the point I want to consider "the same" varies depending on what I'm looking at. For the example below I have used the largest value in the dataset to "pivot" on.
for time in data:
    A = data[time]
    # Find max point for value, and take the first if there is more than 1
    maxTtime = A[A['Value'] == A['Value'].max()]['DateTime'][0]
    # Now we can say we want 12 hours before and 12 after.
    new = A[maxTtime-datetime.timedelta(0.5):maxTtime+datetime.timedelta(0.5)]
    # Stick on a new column with time from the 0 point:
    new['RTime'] = new['DateTime'] - maxTtime
    # Plot values against this new time
    plot(new['RTime'], new['Value'])
This yields a graph of each run's values plotted against the re-zeroed time, which is great except that I can't get a decent legend in order to tell which run was which and work out how much variation there is. I believe half my problem is that I'm iterating over a dictionary of dataframes, which is causing issues.
Could someone recommend how to better organise this? (A dictionary of dataframes is all I could do to get it to work.) I've thought of making a hierarchical dataframe and, instead of indexing it by run time, assigning a set of identifiers to the runs (the actual time is contained within the dataframes themselves, so I have no problem losing the assumed start time) and then plotting it with a legend.
My final aim is to have a dataset and methodology that mean I can investigate the similarity and differences between runs using different "pivot points", and produce a graph of each one which I can then interrogate (or at least tell which data set is which, to interrogate the data directly), but I couldn't get past various errors when creating it.
I can upload a set of the data to a csv if required, but am not sure of the best place to upload it to. Thanks
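One possible reorganisation, sketched under the assumption that df has a DatetimeIndex as in the code above (names such as aligned and run_id are illustrative): concatenate the aligned runs into a single frame keyed by a run identifier, then group on that key so each run gets its own labelled line and the legend falls out naturally.

import pandas as pd
import matplotlib.pyplot as plt

aligned = {}
for run_id, A in enumerate(data.values()):   # `data` is the dict built above
    # Same pivot logic as above: centre each run on its maximum value.
    maxTtime = A.loc[A['Value'] == A['Value'].max(), 'DateTime'].iloc[0]
    window = A[maxTtime - pd.Timedelta(hours=12):maxTtime + pd.Timedelta(hours=12)].copy()
    window['RTime'] = window['DateTime'] - maxTtime
    aligned[run_id] = window

runs = pd.concat(aligned, names=['run'])     # one frame, outer index level = run id

fig, ax = plt.subplots()
for run_id, grp in runs.groupby(level='run'):
    # Convert the timedelta offset to hours so matplotlib plots it cleanly.
    ax.plot(grp['RTime'].dt.total_seconds() / 3600, grp['Value'], label=f'run {run_id}')
ax.set_xlabel('hours from pivot point')
ax.legend()
plt.show()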
