I want to convert an array of date-time strings (YYYY-MM-DD hh:mm:ss) to GPS seconds (seconds after 2000-01-01 12:00:00) in Python.
To get the GPS seconds for a single date in a Linux Bash shell, I simply run date2sec datetimestring and it returns a number.
I could do this in a for-loop in Python, but how would I call date2sec from the Python script, given that it is an external program?
Or, is there another way to convert an array of date-time strings (or single date-time strings inside a for-loop) to GPS time without using date2sec?
Updated answer, using the Astropy library:
from astropy.time import Time
t = Time('2019-12-03 23:55:32', format='iso', scale='utc')
print(t.gps)
Here you are setting the date in UTC and t.gps converts the datetime to GPS seconds.
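Time also accepts a list of strings, so the whole input array can be converted in one call; a short sketch (the example strings are placeholders):
from astropy.time import Time
dateTime = ['2019-12-03 23:55:32', '2019-12-04 00:00:00']  # example array of date-time strings
t = Time(dateTime, format='iso', scale='utc')
print(t.gps)  # numpy array of GPS seconds, one per input string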
Further research showed that directly using datetime objects doesn't take leap seconds into account.
Other helpful links:
How to get current date and time from GPS unsegment time in python
Here is the solution that I used for an entire array of date-times in a for-loop:
import numpy as _np
J2000 = _np.datetime64('2000-01-01 12:00:00')  # Time origin
dateTime = [...]  # an array of date-times in 'YYYY-MM-DD hh:mm:ss' format
GPSarray_secs = _np.array([], dtype=int)  # Create new empty array
for i in range(len(dateTime)):  # For-loop conversion
    GPSseconds = (_np.datetime64(dateTime[i]) - J2000).astype(int)  # Calculate GPS seconds for entry i
    GPSarray_secs = _np.append(GPSarray_secs, GPSseconds)  # Append to the result array
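The loop is not strictly necessary; a sketch of the same conversion done in one vectorised step (the date-time strings are placeholders):
import numpy as _np
J2000 = _np.datetime64('2000-01-01 12:00:00')  # Time origin
dateTime = ['2019-12-03 23:55:32', '2019-12-04 00:00:00']  # example input strings
GPSarray_secs = (_np.array(dateTime, dtype='datetime64[s]') - J2000).astype(int)  # whole array at once
print(GPSarray_secs)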
The simple conversion for one date-time entry is:
import numpy as _np
J2000 = _np.datetime64('2000-01-01 12:00:00') # Time origin
GPSseconds = (_np.datetime64(dateTime) - J2000).astype(int) # Conversion where dateTime is in 'YYYY-MM-DD hh:mm:ss' format
Importing datetime should not be required.
Related
I have an array of Unix Epoch timestamps that I need to convert to datetime format in Python.
I can make the conversion OK using numpy and pandas, as below:
import numpy as np
import pandas as pd
tim = [1627599600, 1627599600, 1627599601, 1627649998, 1627649998, 1627649999]
tim_np = np.asarray(tim, dtype='datetime64[s]')
tim_pd = pd.to_datetime(tim, unit='s', utc=True)
print(tim_np)
print(tim_pd)
The problem I am running into is that the time zone is wrong: I am in NY, so I need it set to "EST".
I tried addressing this by setting utc=True in the pd.to_datetime call, but the result still defaults to "GMT" (5 hours ahead).
I also tried datetime.fromtimestamp(0), but it seemingly only works on single elements and not arrays - https://note.nkmk.me/en/python-unix-time-datetime/
Is there any efficient method to set the time zone when converting epochs?
Found that this can be achieved with pandas using the .tz_* methods:
.tz_localize - https://pandas.pydata.org/docs/reference/api/pandas.Series.dt.tz_localize.html
.tz_convert - https://pandas.pydata.org/docs/reference/api/pandas.Series.dt.tz_convert.html
Working code:
import pandas as pd
tim = [1627599600, 1627599600, 1627599601, 1627649998, 1627649998, 1627649999]
tim_pd = (pd.to_datetime(tim, unit='s')
          .tz_localize('utc')
          .tz_convert('America/New_York'))
print(tim_pd)
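The utc=True approach from the question also works; the missing step was the final conversion, so an equivalent one-liner is:
tim_pd = pd.to_datetime(tim, unit='s', utc=True).tz_convert('America/New_York')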
In LabVIEW, there exists a Waveform datatype for uniformly spaced data, which consists of:
A start timestamp t0
The spacing between samples, dt
An array containing the data points
How would something equivalent in python look? Would you just create a list of datetime based on start time and dt?
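One possible Python equivalent, as a minimal sketch (the Waveform class and its field names are illustrative, not an existing library type):
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Waveform:
    t0: datetime  # start timestamp
    dt: float     # spacing between samples, in seconds
    y: list       # data points

    def timestamps(self):
        # Rebuild the uniformly spaced time axis from t0 and dt
        return [self.t0 + timedelta(seconds=self.dt * i) for i in range(len(self.y))]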
OK, as was suggested, one way of achieving this is to convert the waveform into a time list and a value list. Here's one way of doing that on the LabVIEW side (using the JKI JSON Serializer).
And in Python:
import datetime
import json

def timestamp(waveform):
    obj = json.loads(waveform)
    # t0 is an ISO-format start timestamp, dt is the sample spacing in seconds
    date_time_obj = datetime.datetime.fromisoformat(obj['t0'])
    time_list = [date_time_obj + datetime.timedelta(seconds=obj['dt'] * i)
                 for i in range(len(obj['y']))]  # your timestamp list
    value_list = obj['y']                        # your value list
    return time_list, value_list
I have tested this with plot.ly and it works as expected.
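For example, a call with a hand-written JSON string (the exact JSON shape produced on the LabVIEW side is an assumption here):
waveform_json = '{"t0": "2019-12-18T12:06:55", "dt": 0.5, "y": [1.0, 2.0, 3.0]}'
times, values = timestamp(waveform_json)
print(times)   # three timestamps, 0.5 s apart, starting at t0
print(values)  # [1.0, 2.0, 3.0]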
I have a data set consisting of timestamps that look as follows:
['2019-12-18T12:06:55.697975Z"', '2019-12-18T12:06:55.707017Z"',...]
I need a for-loop which will go through all timestamps and convert each to a time structure. But what I created does not work - I am not sure about the data types, and also about indexing when using strings.
My attempt is following:
from time import struct_time
for idx in enumerate(tm_str):  # tm_str is a string: ['2019-12-18T12:06:55.697975Z"', ...]
    a = idx*30 + 2  # a should be the first digit of the year - third position in the string
    b = idx*30 + 6  # b should be the last digit of the year; 30 is the period to the next year
    tm_year = tm_str[a:b]
Month, day, hour etc. should be done in a similar way.
Have you tried the datetime library / datetime objects?
https://docs.python.org/3/library/datetime.html
This will create datetime objects that you can use to do a lot of handy calculations.
from datetime import datetime

# your original list:
time_stamp_list = ['2019-12-18T12:06:55.697975Z"', '2019-12-18T12:06:55.707017Z"']
# empty list of datetime objects:
datetime_list = []
# iterate over each time_stamp in the original list:
for time_stamp in time_stamp_list:
    # convert to datetime object using datetime.strptime():
    datetime_object = datetime.strptime(time_stamp, '%Y-%m-%dT%H:%M:%S.%fZ"')
    # append datetime object to datetime object list:
    datetime_list.append(datetime_object)
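The same loop can also be written as a list comprehension, and the parsed fields are then available as datetime attributes:
datetime_list = [datetime.strptime(ts, '%Y-%m-%dT%H:%M:%S.%fZ"') for ts in time_stamp_list]
print(datetime_list[0].year, datetime_list[0].microsecond)  # 2019 697975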
I'm trying to import CSV data from a file produced by a device which has a system clock which is set to 'Australia/Adelaide' time, but doesn't switch from standard to daylight time in summer. I can import it no problem as tz-naive but I need to correlate it with data which is tz-aware.
The following is incorrect as it assumes the data transitions to summer time on '2017-10-01'
data = pd.read_csv('~/dev/datasets/data.csv', parse_dates=['timestamp'], index_col=['timestamp'])
data.index.tz_localize('Australia/Adelaide')
tz_localize contains a number of arguments to deal with ambiguous dates - but I don't see any way to tell it that the data doesn't transition at all. Is there a way to specify a "custom" timezone that's 'Australia/Adelaide', no daylight savings?
Edit: I found this question - Create New Timezone in pytz which has given me some ideas - in this case the timestamps are a constant offset from UTC so i can probably add that to the date after importing, localise as UTC then convert to 'Australia/Adelaide'. I'll report back...
The solution I came up with is as follows:
Since the data is 'Australia/Adelaide' with no DST transition, the UTC offset is a constant (+10:30) all year. Hence a solution is to import the data as tz-naive, subtract 10 hours and 30 minutes, localise as UTC, then convert to 'Australia/Adelaide', i.e.
import pandas as pd

data = pd.read_csv('~/dev/datasets/data.csv', parse_dates=['timestamp'], index_col=['timestamp'])
data.index = data.index - pd.DateOffset(hours=10) - pd.DateOffset(minutes=30)
data.index = data.index.tz_localize('UTC').tz_convert('Australia/Adelaide')
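An alternative sketch that avoids the manual subtraction: localise with a fixed +10:30 offset built from the standard library, then convert:
import datetime
import pandas as pd

fixed_offset = datetime.timezone(datetime.timedelta(hours=10, minutes=30))  # +10:30, no DST rules
data = pd.read_csv('~/dev/datasets/data.csv', parse_dates=['timestamp'], index_col=['timestamp'])
data.index = data.index.tz_localize(fixed_offset).tz_convert('Australia/Adelaide')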
ALL,
I'm trying to get the stat data of a file. Doing so gives me the following:
atime - datetime timestamp representation
atime_nano - nanosecond resolution in addition to atime.
What I'd like to do is to convert atime.atime_nano to a datetime variable in Python.
So if I have:
atime = 1092847621L
atime_nano = 7100000L
I'd like to convert this to a datetime object in Python that has the correct date, including the milliseconds.
How can I do that?
Thank you.
Datetimes can have microseconds (1 microsecond = 1000 nanoseconds).
You can do the following for your example:
from datetime import datetime
dt = datetime.fromtimestamp(1092847621).replace(microsecond=7100000 // 1000)  # 7100000 ns -> 7100 µs
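Equivalently, the nanoseconds can be folded into the timestamp itself before conversion (subject to floating-point rounding at the microsecond level):
atime = 1092847621
atime_nano = 7100000
dt = datetime.fromtimestamp(atime + atime_nano / 1e9)  # 0.0071 s added to the epoch seconds
print(dt)  # local date-time including the sub-second part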