I am trying to figure out the best way to create a list of timestamps in Python, where the values increment by one minute. The timestamps would be minute-by-minute and would cover the previous 24 hours. I need the timestamps in the format "MM/dd/yyyy HH:mm:ss", or at least containing all of those fields. The timestamps will be an axis for a graph of data that I am collecting.
Calculating the times alone isn't too bad, as I could just get the current time, convert it to seconds, and change the value by one minute very easily. However, I am kind of stuck on figuring out the date aspect of it without having to do a lot of checking, which doesn't feel very Pythonic.
Is there an easier way to do this? For example, in JavaScript, you can get a Date() object, and simply subtract one minute from the value and JS will take care of figuring out if any of the other fields need to change and how they need to change.
datetime is the way to go; you might want to check out This Blog.
import datetime

# Grab the current local time and show a few ways of formatting it.
now = datetime.datetime.now()
print(now)
print(now.ctime())
print(now.isoformat())
print(now.strftime("%Y%m%dT%H%M%S"))
This would output
2003-08-05 21:36:11.590000
Tue Aug 5 21:36:11 2003
2003-08-05T21:36:11.590000
20030805T213611
You can also do subtraction with datetime and timedelta objects:
now = datetime.datetime.now()
minute = datetime.timedelta(days=0, seconds=60, microseconds=0)
print(now - minute)
would output
2015-07-06 10:12:02.349574
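For completeness, a minimal sketch of the original goal (one timestamp string per minute for the previous 24 hours, newest first, in the requested format) could look like this:
import datetime

now = datetime.datetime.now()
minute = datetime.timedelta(minutes=1)
# 24 * 60 entries, stepping back one minute at a time from now.
timestamps = [(now - i * minute).strftime("%m/%d/%Y %H:%M:%S") for i in range(24 * 60)]
print(timestamps[:3])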
You are looking for datetime and timedelta objects. See the docs.
I get a 24-hour UTC time from an API which I need to convert to a different timezone. The year, month, day, and minutes are completely redundant and useless to me.
This function would basically be like this website: https://www.worldtimebuddy.com/utc-to-aest-converter (in 24-hour mode) but dynamic and programmatic. If somebody can share how to do this conversion I can extrapolate it and create the function myself.
The timezone I am converting to has to be changeable, as if it were a parameter of the function. The result does not need to be a datetime object; it could just be an integer of the converted hour.
Thanks for helping! Python 3.9
I found a really simple way to do it.
transformed_time = (utc_time + shift) % 24
I originally just didn't know this is how timezones worked, but it works perfectly. If you don't know the 'shift' of your timezone, look up "UTC offset for your timezone".
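A minimal sketch of the same idea as a function, with the shift as a parameter. Note that a fixed shift ignores daylight saving time, whereas the zoneinfo module that ships with Python 3.9 (an alternative, not part of the answer above) works the offset out for you; the zone name "Australia/Sydney" is just an example:
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def convert_hour(utc_hour, shift):
    # Fixed-offset conversion, exactly the (utc_time + shift) % 24 idea above.
    return (utc_hour + shift) % 24

def convert_hour_tz(utc_hour, tz_name="Australia/Sydney"):
    # Same conversion, but letting the tz database pick the current offset (handles DST).
    now_utc = datetime.now(timezone.utc).replace(hour=utc_hour, minute=0, second=0, microsecond=0)
    return now_utc.astimezone(ZoneInfo(tz_name)).hour

print(convert_hour(14, 10))   # 0
print(convert_hour_tz(14))    # 0 or 1, depending on whether AEST or AEDT is in effect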
I have a large list of timestamps in nanoseconds (which can easily be converted to milliseconds). I now want to make an instance of DatetimeIndex from these timestamps. Yet simply passing
timestamps = [3377536510631, 3377556564631, 3377576837400, 3377596513631, ...]
dti = DatetimeIndex(timestamps)
yields dates in 1970, yet they should be in 2017. Dividing them by a million to get milliseconds gives the same result. It seems the input isn't what is expected, but I don't know how to easily fix the input or how to set the parameters correctly.
Your timestamps probably have a false starting time (a wrong offset). This usually happens if the time is not set correctly on the measurement device. If you cold-start the measurement, it will probably start at timestamp 0, which is 01/01/1970.
If you know the exact time and date the measurement was started, simply subtract the .min() value from the timestamp column and add the timestamp of the actual start time to the result.
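A minimal sketch of that fix in pandas, assuming the values are nanoseconds counted from when the device started and that the real start time is known (the start time below is made up):
import pandas as pd

timestamps = pd.Series([3377536510631, 3377556564631, 3377576837400, 3377596513631])

actual_start = pd.Timestamp("2017-06-01 12:00:00")                    # hypothetical known start time
elapsed = pd.to_timedelta(timestamps - timestamps.min(), unit="ns")   # nanoseconds since the first sample
dti = pd.DatetimeIndex(actual_start + elapsed)
print(dti)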
I'm working with some video game speedrunning data (basically, races where people try to beat a game as fast as they can), and I have many different run timings in HH:MM:SS format. I know it's possible to convert to seconds, but I want to keep it in this format to make the axes on any graphs easy to read.
I have all the data in a data frame already and tried converting the timing data to datetime format, with format = '%H:%M:%S', but it just uses this as the time on 1900-01-01.
import pandas as pd

data = [['Aggy', '01:02:32'], ['Kirby', '01:04:54'], ['Sally', '01:06:04']]
df = pd.DataFrame(data, columns=['Runner', 'Time'])
df['Time'] = pd.to_datetime(df['Time'], format='%H:%M:%S')
I thought specifying the format as just hours/minutes/seconds would strip away any date, but when I print the head of my dataframe, it says the time data is now, for example, 1900-01-01 01:02:32, i.e. 1:02:32 AM on January 1st, 1900. I want Python to recognize 1:02:32 as a duration of time, not a datetime. What's the best way to go about this?
The format argument defines the format of the input date, not the format of the resulting datetime object (reference).
For your needs you can either use just the hour/minute/second part of the datetime, or use the to_timedelta method.
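A minimal sketch of the to_timedelta route, reusing the example frame from the question; total_seconds() is shown only as one convenient way to get plottable numbers:
import pandas as pd

data = [['Aggy', '01:02:32'], ['Kirby', '01:04:54'], ['Sally', '01:06:04']]
df = pd.DataFrame(data, columns=['Runner', 'Time'])

# Parse the strings as durations rather than points in time.
df['Time'] = pd.to_timedelta(df['Time'])

# Durations expose .dt accessors, e.g. total seconds for plotting.
df['Seconds'] = df['Time'].dt.total_seconds()
print(df)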
In pandas, I've converted some of my dataset from US/Eastern and some from America/Chicago:
data_f1 = data_f[:'2007-04-26 16:59:00']
data_f1.index = data_f1.index.tz_localize('US/Eastern', infer_dst=True).tz_convert('Europe/London')
data_f2 = data_f['2007-04-26 17:00:00':]
data_f2.index = data_f2.index.tz_localize('America/Chicago', infer_dst=True).tz_convert('Europe/London')
data = data_f1.append(data_f2)
I have two questions about this.
(1) Does tz_convert() account for the DST changes between NY (or Chicago) time and London? Is there any documentation to support this? I couldn't find it anywhere.
(2) The converted timestamps come out with "+01:00" at the end. I'm not sure what that suffix is, but I think it has something to do with DST transitions? What is the + relative to, exactly? I'm not sure what it means or why it's necessary: if I convert US/Eastern 14:00 to Europe/London, shouldn't it simply be 19:00, not 19:00+01:00? Why is that added?
When I output to Excel, I have to manually chop off everything after the "+". Is there any option to simply not output it in the first place (unless it's actually important)?
Thanks for your help in advance!
UPDATE:
The closest thing I've found to stripping the +'s is here: "Convert pandas timezone-aware DateTimeIndex to naive timestamp, but in certain timezone", but it seems like this may take a long time with a lot of data. Is there not a more efficient way?
The workaround I've been using is to output to a .csv and read it back in (which makes the index time-zone naive but keeps the local time it was in), then strip the +'s.
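A sketch of a lighter-weight way to drop the offsets is tz_localize(None), which keeps the London wall-clock time but makes the index naive; the index below is made up so the example is self-contained:
import pandas as pd

idx = pd.DatetimeIndex(['2007-04-26 14:00:00', '2007-04-26 15:00:00'])
london = idx.tz_localize('US/Eastern').tz_convert('Europe/London')   # 19:00:00+01:00, 20:00:00+01:00
naive = london.tz_localize(None)                                     # 19:00:00, 20:00:00, no offset suffix
print(naive)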
I am new to Python. I am looking for ways to extract/tag date- and time-specific information from text,
e.g.
1. I will meet you tomorrow
2. I had sent it two weeks back
3. Waiting for you last half an hour
I found timex in nltk_contrib; however, I ran into a couple of problems with it:
https://code.google.com/p/nltk/source/browse/trunk/nltk_contrib/nltk_contrib/timex.py
b. Not sure of the Date data type passed to ground(tagged_text, base_date)
c. It deals only with dates, i.e. granularity at the day level. It can't find expressions like "next one hour", etc.
Thank you for your help
b) The data type that you need to pass to ground(tagged_text, base_date) is an instance of the datetime.date class which you'd initialize using something like:
from datetime import date
base_date = date.today()
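A hedged sketch of how the two calls might fit together, assuming the linked timex.py is importable from nltk_contrib and exposes tag() and ground() as in its source:
from datetime import date
from nltk_contrib import timex  # assumption: nltk_contrib is installed and importable

text = "I will meet you tomorrow"
tagged = timex.tag(text)                       # wraps temporal expressions in <TIMEX2> tags
grounded = timex.ground(tagged, date.today())  # resolves them relative to today's date
print(grounded)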