I have two datetime points for some data. I want to know whether a specific day and time of the week occurred at or between those points. Specifically, I want to know whether the window (Fri 17:00) - (Monday 07:00) occurred. I can check whether an individual point falls on the days I'm looking for with strftime('%a'), but that would not work for a span like
(Thursday 12/8) - (Monday 16/8)
Is this possible to achieve somewhat easily?
You can use the datetime.weekday() method and datetime.hour attribute to check if your dates are in your desired range. No need to compare the string representation of the time points.
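A minimal sketch of that idea (not from the original answer): start from the Friday 17:00 at or before the beginning of the span and step forward one week at a time, testing whether any Friday 17:00 - Monday 07:00 window overlaps the span. The function name and the 2021 example dates are my own.

from datetime import datetime, timedelta

def overlaps_weekend_window(start: datetime, end: datetime) -> bool:
    """Check whether [start, end] overlaps any Friday 17:00 - Monday 07:00 window."""
    # Friday 17:00 at or before `start` (weekday(): Monday=0 ... Friday=4).
    days_since_friday = (start.weekday() - 4) % 7
    friday_5pm = (start - timedelta(days=days_since_friday)).replace(
        hour=17, minute=0, second=0, microsecond=0
    )
    # Step forward one week at a time until the window begins after `end`.
    while friday_5pm <= end:
        monday_7am = friday_5pm + timedelta(days=2, hours=14)  # following Monday 07:00
        if monday_7am >= start:  # the two intervals overlap
            return True
        friday_5pm += timedelta(weeks=1)
    return False

# (Thursday 12/8) - (Monday 16/8) from the question, taken as 2021 dates:
print(overlaps_weekend_window(datetime(2021, 8, 12), datetime(2021, 8, 16)))  # True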
I'm working on rainfall data in a CSV file. I calculated the quantiles of my data, and now I would like to extract the days on which the rainfall amount exceeds the 99th quantile.
But I'm struggling to access the dates.
Here is a screenshot of my dataframe (named series_total) in my Jupyter notebook; series_total['Date'] raises a KeyError.
[screenshot of the series_total dataframe]
Any help or advice is appreciated.
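Not part of the original thread, but a minimal sketch of the usual pandas approach, assuming a hypothetical file name and a hypothetical 'Rainfall' column; the KeyError on 'Date' usually means the dates ended up in the index rather than in a column.

import pandas as pd

# Hypothetical file and column names; adjust them to the real CSV.
series_total = pd.read_csv("rainfall.csv", parse_dates=["Date"])

threshold = series_total["Rainfall"].quantile(0.99)           # 99th percentile
extreme = series_total[series_total["Rainfall"] > threshold]  # days above it
print(extreme["Date"])

# If series_total['Date'] raises a KeyError, 'Date' is probably the index
# (e.g. after set_index or a resample); turn it back into a column with:
# series_total = series_total.reset_index()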
I am looking for a way to use stock price data on micro timeframes (30 seconds, 15 seconds, 1 second, milliseconds, etc.).
I tried using the yfinance library, but it does not offer timeframes that small.
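For reference, the finest granularity yfinance exposes is 1-minute bars (and only for recent history), so sub-minute data has to come from another source. A quick check, with the ticker chosen arbitrarily:

import yfinance as yf

# "1m" is the smallest interval yfinance accepts; sub-minute bars are not available.
bars = yf.download("AAPL", period="1d", interval="1m")
print(bars.head())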
Add a column (called TIME_HOURS) based on the data in the TIME column, with the value rounded up to the nearest hour. For example, if the original TIME row said '02/28/2018 05:40:00 PM', we want '2018-02-28 18:00:00' (the change is that 5:40 pm was rounded up to 6:00 pm, and the TIME_HOURS column is a proper datetime, not a string).
You can use pandas.Series.dt.round:
df["TIME_HOURS"] = df["TIME"].dt.round("H")
If the value should always be rounded up rather than to the nearest hour, use pandas.Series.dt.ceil instead.
What is the fastest way to write a function for a time series calculation that counts consecutive values in the same series: a for loop or a vectorized approach?
Here is what my data looks like:
[screenshot of the data]
You can use a rolling window to calculate the sum over 4 consecutive hours:
rolled = df['Consumption'].groupby(level='Accounts').rolling(window=4).sum()
df['consumption4hr'] = rolled.droplevel(0)  # drop the duplicated group level so the index aligns
With that you can find the accounts that have a 0 in that column, for example:
df[df['consumption4hr'] == 0].index.get_level_values('Accounts').unique()
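A self-contained sketch of the same idea on a made-up two-account hourly frame (the column and index names follow the answer above and are assumptions about the real data):

import pandas as pd

idx = pd.MultiIndex.from_product(
    [["A", "B"], pd.date_range("2020-01-01", periods=6, freq="h")],
    names=["Accounts", "Time"],
)
df = pd.DataFrame({"Consumption": [1, 0, 0, 0, 0, 2, 1, 1, 1, 1, 1, 1]}, index=idx)

# 4-hour rolling sum, computed separately per account.
rolled = df["Consumption"].groupby(level="Accounts").rolling(window=4).sum()
df["consumption4hr"] = rolled.droplevel(0)

# Accounts with at least one run of 4 consecutive zero-consumption hours.
print(df[df["consumption4hr"] == 0].index.get_level_values("Accounts").unique())
# Index(['A'], dtype='object', name='Accounts')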
I have a list of CSV files, each containing several columns.
One column contains the length of my test in hh:mm:ss format.
I need to divide this data into two datasets based on length: <00:16:00 or >00:16:00.
How can I do that?
Thanks for helping, and sorry for my bad English.
Brute force: because the durations are zero-padded hh:mm:ss strings, plain string comparison already orders them chronologically.
value = "00:15:47"  # taken from the csv
if value < "00:16:00":
    ...  # handle shorter tests
else:
    ...  # handle longer tests
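A more explicit sketch with pandas, assuming a hypothetical file name and a hypothetical 'length' column holding the hh:mm:ss strings (adjust both to the real CSV):

import pandas as pd

df = pd.read_csv("tests.csv")               # hypothetical file name
durations = pd.to_timedelta(df["length"])   # parse the hh:mm:ss strings
cutoff = pd.Timedelta(minutes=16)

short_tests = df[durations < cutoff]        # tests shorter than 00:16:00
long_tests = df[durations >= cutoff]        # tests of 00:16:00 or longer

short_tests.to_csv("short_tests.csv", index=False)
long_tests.to_csv("long_tests.csv", index=False)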