I am trying to solve the following problem. I have code where each event is associated with a time in seconds. I am attaching these events to video, so I need a lead and a lag time for each one. I am able to create the lead time, but for some of the events the lag time depends on when the next event occurs.
For example, a shot has a lead time of 3 and a lag time of 7. For zone time, however, I need a lead time of 3 and a lag time equal to the next occurrence of the title 'whistle'.
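A rough sketch of one way to do this, assuming the events sit in a pandas DataFrame with 'title' and 'time' columns (the column names and sample values here are made up): the "next whistle" lag can be looked up with a sorted search.

import numpy as np
import pandas as pd

# Hypothetical event log: one row per event, time in seconds.
events = pd.DataFrame({
    "title": ["zone time", "shot", "whistle", "zone time", "whistle"],
    "time":  [10.0, 25.0, 40.0, 55.0, 90.0],
})

events["lead"] = events["time"] - 3                      # fixed 3-second lead for every event

# Fixed 7-second lag for shots.
shot = events["title"] == "shot"
events.loc[shot, "lag"] = events.loc[shot, "time"] + 7

# For zone time, the lag is the time of the next 'whistle'.
whistle_times = np.sort(events.loc[events["title"] == "whistle", "time"].to_numpy())
pos = np.searchsorted(whistle_times, events["time"].to_numpy(), side="right")
next_whistle = np.append(whistle_times, np.nan)[pos]     # NaN if no later whistle exists
zone = events["title"] == "zone time"
events.loc[zone, "lag"] = next_whistle[zone]

print(events)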
I have a dataset that looks as follows:
What I would like to do with this data is calculate how much time was spent in specific states, per day. Say, for example, I wanted to know how long the unit was running today: I would just like to know the sum of the time the unit spent in each state, e.g. RUNNING: 45 minutes, NOT_RUNNING: 400 minutes, WARMING_UP: 10 minutes, etc.
I know how to summarize the column data on its own, but I'm looking to use the available timestamps to subtract the first time it was in a state from the last time it was in that state and get that difference. I haven't had any luck searching for this solution, but there's no way I'm the first to come across it, and I know it can be done somehow; I'm just looking to learn how. Anything helps, thanks!
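A minimal sketch of that calculation, assuming one row per status reading with a timestamp column and a state column (the column names and sample values below are made up). Each reading lasts until the next one, so the per-state duration is the forward difference of the timestamps, grouped by day and state.

import pandas as pd

# Hypothetical layout: one row per status reading.
df = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2023-05-01 08:00", "2023-05-01 08:10", "2023-05-01 08:55",
        "2023-05-01 09:40", "2023-05-01 16:20",
    ]),
    "state": ["WARMING_UP", "RUNNING", "NOT_RUNNING", "RUNNING", "NOT_RUNNING"],
})

df = df.sort_values("timestamp")
# Each state lasts until the next reading, so its duration is the forward difference.
df["duration"] = df["timestamp"].shift(-1) - df["timestamp"]

per_day = (
    df.dropna(subset=["duration"])
      .groupby([df["timestamp"].dt.date, "state"])["duration"]
      .sum()
)
print(per_day)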
I have a time-series that contains a hidden periodicity in the data.
I already found the period itself (e.g., 60 minutes, 100 minutes).
How can I find the specific sequence that has the periodicity? Note that the periodicity can even start in the middle of the time series.
What can I do about it?
Check out cycle detection algorithms. If your cycle is exact, they give you the cycle length and the stretch leading up to the first cycle.
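For an exactly repeating sequence of samples, even a brute-force scan recovers both numbers; a small sketch (the function and the sample data are made up):

def find_cycle(seq):
    # Returns (start, period) for an exactly repeating tail of seq,
    # or None if no repetition is found. Brute force, roughly O(n^2).
    n = len(seq)
    for start in range(n):
        tail = seq[start:]
        for period in range(1, len(tail) // 2 + 1):
            if all(tail[i] == tail[i + period] for i in range(len(tail) - period)):
                return start, period
    return None

# Aperiodic lead-in [5, 3], then the cycle [1, 2, 4] repeating.
print(find_cycle([5, 3, 1, 2, 4, 1, 2, 4, 1, 2, 4]))   # -> (2, 3)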
I'm using python-vlc to build a custom playback app in PyQt. I have painted a nice custom slider to track along with the video, but I've hit a bit of an annoying problem.
No matter how often I tell my slider to update, it's quite glitchy (jumping every 1/4 second or so) and looks choppy (just the timeline, not the video).
Digging into it, I learned that
media_player.get_position()
has quite a low update rate. It returns the same value quite often, then jumps a large amount the next time it gives a new value.
So right now I've run some test metrics and found it tends to update every 0.25-0.3 seconds. I now have a system that basically stores the last value returned, the system time when a new value last came in, and the last jump distance between returned values, and does some basic math with those to fake linear timeline data between polls and make a very smooth timeline slider.
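A rough sketch of that interpolation (the class and attribute names here are made up):

import time

class SmoothedPosition:
    # Stores the last raw value from get_position(), when it arrived, and the
    # estimated rate of change, then extrapolates linearly between polls.
    def __init__(self):
        self.last_value = 0.0                 # last raw position (0.0 - 1.0)
        self.last_time = time.monotonic()
        self.rate = 0.0                       # estimated change per second

    def update(self, raw_value):
        now = time.monotonic()
        if raw_value != self.last_value:
            dt = now - self.last_time
            if dt > 0:
                self.rate = (raw_value - self.last_value) / dt
            self.last_value = raw_value
            self.last_time = now

    def smoothed(self):
        # Extrapolate from the last raw sample using the estimated rate.
        return self.last_value + self.rate * (time.monotonic() - self.last_time)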
The problem is that this assumes the 0.25-0.3 second interval is consistent across machines, hardware, video frame rates, etc.
Does anyone know of a better fix?
Maybe there's a way to increase the rate at which VLC updates that value so I get better data to begin with, or some better math to handle the smoothing?
Thanks
Using get_position() returns a value between 0.0 and 1.0, essentially the current position as a fraction of the total running time.
Instead, you can use get_time(), which returns the current position in milliseconds (1000ths of a second).
i.e.
print(self.player.get_time() / 1000) would print the current position in seconds.
You could also register a callback for the VLC event EventType.MediaPlayerTimeChanged, as mentioned in the other answer given by @mtz.
i.e.
Where self.player is defined as:
self.Instance = vlc.Instance()
self.player = self.Instance.media_player_new()
Then:
self.vlc_event_manager = self.player.event_manager()
self.vlc_event_manager.event_attach(vlc.EventType.MediaPlayerTimeChanged, self.media_time_changed)
def media_time_changed(self, event):
    print(event.u.new_time / 1000)
    print(self.player.get_time() / 1000)
    print(self.player.get_position())
Try using the libvlc_MediaPlayerPositionChanged or libvlc_MediaPlayerTimeChanged mediaplayer events instead.
https://www.videolan.org/developers/vlc/doc/doxygen/html/group__libvlc__event.html#gga284c010ecde8abca7d3f262392f62fc6a4b6dc42c4bc2b5a29474ade1025c951d
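A minimal sketch of attaching the position event with python-vlc (the file name is a placeholder). Note that the callback fires on a libvlc thread, so in a PyQt app you would forward the value to the GUI thread, e.g. via a signal, rather than touching the slider directly.

import vlc

instance = vlc.Instance()
player = instance.media_player_new()
player.set_media(instance.media_new("video.mp4"))   # placeholder path

def on_position_changed(event):
    # new_position is the played fraction (0.0 - 1.0), same scale as get_position().
    print("position:", event.u.new_position)

events = player.event_manager()
events.event_attach(vlc.EventType.MediaPlayerPositionChanged, on_position_changed)
player.play()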
I have some sensor data, which is essentially a timestamp and a status number. A new line with timestamp and status is recorded semi-periodically (anything from every second to every hour). The same status can be repeated over hundreds of lines. I want to represent how long the component has been in each of the states it can be in.
To get the accumulated time in each state I just need to loop over all the records and sum up the time spent in each state, right? But how do I visualize this? The idea is both to show how much time is spent in each state, accumulated, and to visualize the status and the status changes along a date axis.
Suggestions for plot types are welcome.
Example data (unixtime | state): http://pastebin.com/6TmXFZQd
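A rough sketch of the summation step plus one Gantt-style timeline option using matplotlib's broken_barh, assuming the unixtime | state layout (the sample values below are made up):

import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical data in the unixtime | state layout.
df = pd.DataFrame({
    "unixtime": [1400000000, 1400000060, 1400003600, 1400007200, 1400010800],
    "state":    [0, 1, 2, 1, 0],
})
df = df.sort_values("unixtime")
df["duration"] = df["unixtime"].shift(-1) - df["unixtime"]   # seconds until the next record
df = df.dropna(subset=["duration"])

# Accumulated time per state, in minutes.
print(df.groupby("state")["duration"].sum() / 60)

# Timeline of state changes: one horizontal band per state along the time axis.
states = sorted(df["state"].unique())
fig, ax = plt.subplots()
for i, state in enumerate(states):
    rows = df[df["state"] == state]
    ax.broken_barh(list(zip(rows["unixtime"], rows["duration"])), (i - 0.4, 0.8))
ax.set_yticks(range(len(states)))
ax.set_yticklabels(states)
ax.set_xlabel("unix time (s)")
plt.show()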
This design is what I would do to visualize the data described in the question:
I have a series of points in time, say ~60 events over the course of an hour. I'd like to figure out how to segment this series into "sub-series" based on how close in time the events occur.
The thing is, I am extremely hesitant to define an arbitrary interval to split on; i.e., I don't want to say, let's group events at every ten minutes. In the same way, I'm hesitant to define a "break threshold"; I don't want to say, if five minutes has passed without an event, start a new segment.
(For the record, the events have no inherent value for the purposes of this grouping. It's simply a series of points in time.)
Is there any way to dynamically segment a time series? I researched a few different things, and the most promising line of investigation was Bayesian blocks, but I couldn't work out how to adapt it to what I need.
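For reference, a Bayesian-blocks segmentation on bare event times only needs the timestamps; a sketch using astropy (the library choice and the synthetic burst data are assumptions):

import numpy as np
from astropy.stats import bayesian_blocks

# ~60 synthetic event times (seconds within one hour), clustered in bursts.
rng = np.random.default_rng(0)
t = np.sort(np.concatenate([
    rng.uniform(0, 600, 20),       # burst near the start
    rng.uniform(1800, 2100, 25),   # burst mid-hour
    rng.uniform(3000, 3600, 15),   # burst near the end
]))

# The 'events' fitness picks change points from the data itself,
# with no hand-chosen bin width or break threshold.
edges = bayesian_blocks(t, fitness="events")
print(edges)   # segment boundaries; consecutive pairs delimit the sub-series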