Need to take data from text file to spreadsheet for analysis - python

I am working with data which I get in text files, and which has to be subsequently analysed. I'm currently using Excel for this task. The original file looks like this:
Contact Angle (deg) 86.20
Wetting Tension (dy/cm) 4.836
Wetting Tension Left (dy/cm) 39.44
Wetting Tension Right (dy/cm) 39.44
Base Tilt Angle (deg) 0.00
Base (mm) 1.6858
Base Area (mm2) 2.2322
Height (mm) 0.7888
Tip Width (mm) 0.9707
Wetted Tip Width (mm) 0.9581
Sessile Volume (ul) 1.1374
Sessile Surface Area (mm2) 4.1869
Contrast (cts) 245
Sharpness (cts) 161
Black Peak (cts) 10
White Peak (cts) 255
Edge Threshold (cts) 111
Base Left X (mm) 4.138
Base Right X (mm) 5.821
Base Y (mm) 2.980
RMS Fit Error (mm) 3.545E-3
#1600
I don't need most of this information; for now, all I need is the Contact Angle at the top and the time (prefixed by '#') at the bottom. At the moment, I have a script that extracts the information I need and creates another text file for easy reading. The code is below:
infile = "in.txt"
outfile = "newout.out"
measure_time = ""
with open(infile) as f, open(outfile, 'w') as f2:
for line in f:
if line.split():
if line.split()[0] == "Contact":
contact_angle = line.split()[-1].strip()
f2.write("Contact Angle (deg): " + contact_angle + '\n')
if line.split()[0][0] == '#':
for i in range(1,5):
measure_time += (line.split()[0][i])
f2.write("Measured at: " + measure_time[:2] + ":" + measure_time[2:] + '\n')
measure_time = ""
else:
continue
What I am looking for is a way to get my data nicely formatted in a spreadsheet for easy analysis. I would like the angles in the same row, in adjoining cells, and the measurement times in the cells below, but I'm unsure of the best way to go about this.
Can anyone with more Python experience help me here?
EDIT: The image here shows what I tried to explain (poorly) above.
EDIT2: The solution posted below by @RonRosenfeld works, but I would still prefer a Python solution for this problem, as stated earlier. As I have no previous experience with Excel VBA, I would rather use something familiar to me.

I would just read the original file or files into Excel, selecting only those lines that begin with the Contact Angle or # token. I'm not sure how much error checking you need to do. The following assumes that you will select multiple files, and that each file is formatted as you demonstrated in your original data. It will output the angles in row 1, and the corresponding times in row 2. It does NOT check for proper formatting, or that every Angle has a corresponding Time.
It also does NOT handle the case where you select only one file, and will give an error; that capability can be added if necessary.
EDIT: modified to account for either TAB or SPACE as the separator; also added code to clear worksheet and autofit the columns
It should also be easy to modify if you want to select additional parameters.
Option Explicit
'Set Reference to Microsoft Scripting Runtime

Sub GetDataFromTextFiles()
    Dim FSO As FileSystemObject
    Dim TS As TextStream
    Dim F As File
    Dim sLines As Variant
    Dim I As Long, J As Long
    Dim sFilePath
    Dim S As String
    Dim vLines() As Variant
    Dim rExtract As Range

    'Hard coded here, but could also use a
    'user form to select multiple lines
    vLines = Array("#", "Contact Angle")
    Set rExtract = [b3]

    Cells.Clear
    [a3] = "Contact Angle (deg)"
    [a4] = "Measured At"

    sFilePath = Application.GetOpenFilename("Text Files (*.txt), *.txt", MultiSelect:=True)
    Set FSO = New FileSystemObject

    For J = LBound(sFilePath) To UBound(sFilePath)
        Set TS = FSO.OpenTextFile(sFilePath(J), ForReading)
        Do Until TS.AtEndOfStream = True
            S = Trim(Replace(TS.ReadLine, Chr(9), Chr(32)))
            For I = 0 To UBound(vLines)
                If InStr(1, S, vLines(I)) = 1 Then
                    Select Case I
                        Case 0 '#
                            With rExtract(2, 1)
                                .Value = TimeSerial(Int(Mid(S, 2) / 100), Mid(S, 2) Mod 100, 0)
                                .NumberFormat = "hh:mm"
                            End With
                        Case 1 'Contact Angle
                            rExtract(1, 1) = Mid(S, InStrRev(S, " ") + 1)
                            'advance to next column after outputting angle
                            Set rExtract = rExtract(1, 2)
                    End Select
                End If
            Next I
        Loop
    Next J

    Cells.EntireColumn.AutoFit
End Sub
Here is another macro that does NOT require setting a reference to Microsoft Scripting Runtime. It does not use the FileSystemObject, but rather uses built-in VBA routines to read the file. I have been told that it runs more quickly, but I've not tested that myself. In addition, there could be issues with certain types of data, but none seem to exist in your files, and it runs fine on your sample.
Option Explicit

Sub GetDataFromTextFiles()
    Dim sLines As Variant
    Dim I As Long, J As Long
    Dim sFilePath
    Dim S As String
    Dim vLines() As Variant
    Dim rExtract As Range

    'Hard coded here, but could also use a
    'user form to select multiple lines
    vLines = Array("#", "Contact Angle")
    Set rExtract = [b3]

    Cells.Clear
    [a3] = "Contact Angle (deg)"
    [a4] = "Measured At"

    sFilePath = Application.GetOpenFilename("Text Files (*.txt), *.txt", MultiSelect:=True)

    For J = LBound(sFilePath) To UBound(sFilePath)
        Open sFilePath(J) For Input As #1
        Do While Not EOF(1)
            Input #1, S
            S = Trim(Replace(S, Chr(9), Chr(32)))
            For I = 0 To UBound(vLines)
                If InStr(1, S, vLines(I)) = 1 Then
                    Select Case I
                        Case 0 '#
                            With rExtract(2, 1)
                                .Value = TimeSerial(Int(Mid(S, 2) / 100), Mid(S, 2) Mod 100, 0)
                                .NumberFormat = "hh:mm"
                            End With
                        Case 1 'Contact Angle
                            rExtract(1, 1) = Mid(S, InStrRev(S, " ") + 1)
                            'advance to next column after outputting angle
                            Set rExtract = rExtract(1, 2)
                    End Select
                End If
            Next I
        Loop
        Close #1
    Next J

    Cells.EntireColumn.AutoFit
End Sub
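Since you said you'd prefer to stay in Python: below is a minimal, hedged sketch of the same extraction in pure Python, writing a CSV file that Excel opens directly, with the angles in one row and the times in the row below. It assumes one measurement per .txt file (as in your sample) and reuses the parsing rules from your script; treat it as a starting point, not a finished solution.

import csv
import glob

angles = []
times = []
# Assumption: each .txt file holds one measurement formatted like the sample
for path in sorted(glob.glob("*.txt")):
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "Contact":
                # last token on the "Contact Angle (deg)" line is the value
                angles.append(parts[-1])
            elif parts[0].startswith("#"):
                # "#1600" -> "16:00"
                hhmm = parts[0][1:5]
                times.append(hhmm[:2] + ":" + hhmm[2:])

# One row of angles, with the corresponding times in the row below
with open("results.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["Contact Angle (deg)"] + angles)
    writer.writerow(["Measured At"] + times)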


win32com LineStyle Excel

Luckily I found this site, which defines many line types from the Excel enum XlLineStyle:
https://www.linuxtut.com/en/150745ae0cc17cb5c866/
xlContinuous = 1
xlDashDot = 4
xlDashDotDot = 5
xlSlantDashDot = 13
xlDash = -4115
xlDot = -4118
xlDouble = -4119
xlLineStyleNone = -4142
I ran a try/except loop setting line styles with roughly +/- 100,000 values, because I thought the [index] number for the line I want to put in my picture had to be in there somewhere, but it wasn't. Why not?
How can I set this line?
Why are some line style indexes in such a huge negative range, rather than just 1, 2, 3...?
How can I discover things like the right "number" for doing things like this?
And why is it even possible to send data to particular positions in another application? I want to dig a little deeper into that; where can I learn more about it?
(1) You can't find the medium dashed style in the lineStyle enum because there is none. The line that is drawn as a border is a combination of lineStyle and Weight. The lineStyle is xlDash; the weight is xlThin for value 03 in your table and xlMedium for value 08.
(2) To figure out how to set something like this in VBA, use the Macro recorder, it will reveal that lineStyle, Weight (and color) are set when setting a border.
(3) There are a lot of pages describing all the constants, e.g. have a look at the one @FaneDuru linked to in the comments. They can also be found at Microsoft itself: https://learn.microsoft.com/en-us/office/vba/api/excel.xllinestyle and https://learn.microsoft.com/en-us/office/vba/api/excel.xlborderweight. It seems that someone translated them to Python constants on the linuxtut page.
(4) Don't ask why the enums are not continuous values. I assume that especially the constants with negative numbers serve more than one purpose. Just never use the values directly; always use the defined constants.
(5) You can assume that numeric values that have no defined constant may work, but the results are somewhat unpredictable. It's unlikely that there is a value without a constant that results in something "new" (e.g. a different border style).
As you can see in the following table, not all combinations give different borders. Setting the weight to xlHairline will ignore the lineStyle. Setting it to xlThick will also ignore the lineStyle, except for xlDouble. On the other hand, xlDouble will be ignored when the weight is not xlThick.
Sub border()
    With ThisWorkbook.Sheets(1)
        With .Range("A1:J18")
            .Clear
            .Interior.Color = vbWhite
        End With

        Dim lStyles(), lWeights(), lStyleNames(), lWeightNames
        lStyles() = Array(xlContinuous, xlDash, xlDashDot, xlDashDotDot, xlDot, xlDouble, xlLineStyleNone, xlSlantDashDot)
        lStyleNames() = Array("xlContinuous", "xlDash", "xlDashDot", "xlDashDotDot", "xlDot", "xlDouble", "xlLineStyleNone", "xlSlantDashDot")
        lWeights = Array(xlHairline, xlThin, xlMedium, xlThick)
        lWeightNames = Array("xlHairline", "xlThin", "xlMedium", "xlThick")

        Dim x As Long, y As Long
        For x = LBound(lStyles) To UBound(lStyles)
            Dim row As Long
            row = x * 2 + 3
            .Cells(row, 1) = lStyleNames(x) & vbLf & "(" & lStyles(x) & ")"
            For y = LBound(lWeights) To UBound(lWeights)
                Dim col As Long
                col = y * 2 + 3
                If x = 1 Then .Cells(1, col) = lWeightNames(y) & vbLf & "(" & lWeights(y) & ")"
                With .Cells(row, col).Borders
                    .LineStyle = lStyles(x)
                    .Weight = lWeights(y)
                End With
            Next
        Next
    End With
End Sub
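Since the question was about win32com, here is a minimal Python sketch of the same lineStyle + Weight combination. It is a sketch under assumptions, not a definitive implementation: the constant values are taken from the Microsoft enum pages linked above, and Excel must be installed for Dispatch to work.

import win32com.client

# Enum values from the XlLineStyle / XlBorderWeight / XlBordersIndex docs
xlDash = -4115       # XlLineStyle
xlMedium = -4138     # XlBorderWeight
xlEdgeBottom = 9     # XlBordersIndex

excel = win32com.client.Dispatch("Excel.Application")
excel.Visible = True
ws = excel.Workbooks.Add().Worksheets(1)

# "Medium dashed" is the combination lineStyle=xlDash, weight=xlMedium
border = ws.Range("B2").Borders(xlEdgeBottom)
border.LineStyle = xlDash
border.Weight = xlMedium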

MODIS AQUA Data - Stacking / Mosaic data with python GDAL

I know how to access and plot subdatasets using gdal and python. However, I'm wondering if there's a way to use the GEO data contained in the HDF4 file so I could look at the same area over many years.
And if possible, can an area be cut out of the data and how?
UPDATE:
To be more specific: I plotted MODIS data, and as you can see below, the river moves downwards (rectangular structure, top left corner). So over a whole year it is not the same location that I'm observing.
There's a directory in the subdatasets called Geolocation Fields, with Long and Alt directories. Is it possible to access this information, or lay it over the data, to cut out a specific area?
If, for example, we take a look at the NASA picture below: would it be possible to cut it between 10-15 alt. and -5 to 0 long.?
You can download a sample file by copying the url below:
https://ladsweb.modaps.eosdis.nasa.gov/archive/allData/6/MYD021KM/2009/034/MYD021KM.A2009034.1345.006.2012058160107.hdf
UPDATE:
I ran
x0, dx, dxdy, y0, dydx, dy = hdf_file.GetGeoTransform()
which gave me the following output:
x0: 0.0
dx: 1.0
dxdy: 0.0
y0: 0.0
dydx: 0.0
dy: 1.0
As well as
gdal.Warp(workdir2+"/output.tif",workdir1+"/MYD021KM.A2009002.1345.006.2012058153105.hdf")
which gave me the following error:
ERROR 1: Input file /Volumes/Transcend/Master_Thesis/Data/AQUA_002_1345/MYD021KM.A2009002.1345.006.2012058153105.hdf has no raster bands.
UPDATE 2:
Here's my code on how I open and read my hdf files:
all_files is a list containing file names like:
MYD021KM.A2008002.1345.006.2012066153213.hdf
MYD021KM.A2008018.1345.006.2012066183305.hdf
MYD021KM.A2008034.1345.006.2012067035823.hdf
MYD021KM.A2008050.1345.006.2012067084421.hdf
etc .....
for fe in all_files:
    print "\nopening file: ", fe
    try:
        hdf_file = gdal.Open(workdir1 + "/" + fe)
        print "getting subdatasets..."
        subDatasets = hdf_file.GetSubDatasets()
        Emissiv_Bands = gdal.Open(subDatasets[2][0])
        print "getting bands..."
        Bands = Emissiv_Bands.ReadAsArray()
        print "unit conversion ... "
        get_name_tag = re.findall(".A(\d{7}).", all_files[i])[0]
        print "name tag of current file: ", get_name_tag

        # Code for 1 band:
        L_B_1 = radiance_scales[specific_band] * (Bands[specific_band] - radiance_offsets[specific_band])  # Source: MODIS Level 1B Product User's Guide, page 36, MOD_PR02 V6.1.12 (TERRA)/V6.1.15 (AQUA)
        data_1_band['%s' % get_name_tag] = L_B_1
        L_B_1_mean['%s' % get_name_tag] = L_B_1.mean()

        # Code for many different bands:
        data_all_bands["%s" % get_name_tag] = []
        for k in Band_nrs[lowest_band:highest_band]:  # Bands 8-11
            L_B = radiance_scales[k] * (Bands[k] - radiance_offsets[k])  # List with all bands
            print "Appending mean value of {} for band {} out of {}".format(L_B.mean(), Band_nrs[k], len(Band_nrs))
            data_all_bands['%s' % get_name_tag].append(L_B.mean())  # Mean radiance values
        i = i + 1
        print "data added. Adding i+1 = ", i
    except AttributeError:
        print "\n*******************************"
        print "Can't open file {}".format(workdir1 + "/" + fe)
        print "Skipping this file..."
        print "*******************************"
        broken_files.append(workdir1 + "/" + fe)
        i = i + 1
Without knowing your exact data source, desired output, etc., it is hard to give you a specific answer. With that said, it appears that you have the native .hdf format of MODIS images and wish to do some subsetting to get the images referenced to the same area, then plot, etc.
It might help for you to look at gdal.Warp() from the gdal module. This method is able to take a .hdf file and subset a series of images to the same bounding box, with the same resolution/number of rows and columns.
You can then analyse and plot these images and compare pixels, etc.
I hope this gives you a good starting point.
gdal.Warp docs: https://gdal.org/python/osgeo.gdal-module.html#Warp
More general warp help: https://www.gdal.org/gdalwarp.html
Something like this:
import gdal

# Set up the gdal.Warp options such as desired spatial resolution,
# resampling algorithm to use and output format.
# See: https://gdal.org/python/osgeo.gdal-module.html#WarpOptions
# for other options that can be specified.
warp_options = gdal.WarpOptions(format="GTiff",
                                outputBounds=[min_x, min_y, max_x, max_y],
                                xRes=res,
                                yRes=res,
                                # PROBABLY NEED TO SET t_srs TOO
                                )

# Apply the warp.
# (output_file, input_file, options)
gdal.Warp("/path/to/output_file.tif",
          "/path/to/input_file.hdf",
          options=warp_options)
Exact code to write:
# Apply the warp.
# (output_file, input_file, options)
gdal.Warp('/path/to/output_file.tif',
          '/path/to/HDF4_EOS:EOS_SWATH:"MYD021KM.A2009034.1345.006.2012058160107.hdf":MODIS_SWATH_Type_L1B:EV_1KM_RefSB',
          options=warp_options)
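The subdataset string passed to gdal.Warp does not have to be typed by hand; it can be read from the file itself. A small sketch, using the sample file from above:

import gdal

ds = gdal.Open("MYD021KM.A2009034.1345.006.2012058160107.hdf")
for name, description in ds.GetSubDatasets():
    # prints strings like:
    # HDF4_EOS:EOS_SWATH:"MYD021KM...hdf":MODIS_SWATH_Type_L1B:EV_1KM_RefSB
    print(name)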

Using astropy.fits and numpy to apply coincidence corrections to SWIFT fits image

This question may be a little specialist, but hopefully someone might be able to help. I normally use IDL, but for developing a pipeline I'm looking to use python to improve running times.
My fits file handling setup is as follows:
import numpy
from astropy.io import fits

# Directory: /Users/UCL_Astronomy/Documents/UCL/PHASG199/M33_UVOT_sum/UVOTIMSUM/M33_sum_epoch1_um2_norm.img
with fits.open('...') as ima_norm_um2:
    # Open UVOTIMSUM file once and close it after extracting the relevant values:
    ima_norm_um2_hdr = ima_norm_um2[0].header
    ima_norm_um2_data = ima_norm_um2[0].data

# Individual dimensions for number of x pixels and number of y pixels:
nxpix_um2_ext1 = ima_norm_um2_hdr['NAXIS1']
nypix_um2_ext1 = ima_norm_um2_hdr['NAXIS2']

# Compute the size of the images (you can also do this manually rather than calling these keywords from the header).
# Call the header and data from the UVOTIMSUM file with the relevant keyword extensions:
corrfact_um2_ext1 = numpy.zeros((ima_norm_um2_hdr['NAXIS2'], ima_norm_um2_hdr['NAXIS1']))
coincorr_um2_ext1 = numpy.zeros((ima_norm_um2_hdr['NAXIS2'], ima_norm_um2_hdr['NAXIS1']))

# Check that the dimensions are all the same:
print(corrfact_um2_ext1.shape)
print(coincorr_um2_ext1.shape)
print(ima_norm_um2_data.shape)

# Make a new image file to save the correction factors:
hdu_corrfact = fits.PrimaryHDU(corrfact_um2_ext1, header=ima_norm_um2_hdr)
fits.HDUList([hdu_corrfact]).writeto('.../M33_sum_epoch1_um2_corrfact.img')

# Make a new image file to save the corrected image to:
hdu_coincorr = fits.PrimaryHDU(coincorr_um2_ext1, header=ima_norm_um2_hdr)
fits.HDUList([hdu_coincorr]).writeto('.../M33_sum_epoch1_um2_coincorr.img')
I'm looking to then apply the following corrections:
# Define the variables from Poole et al. (2008) "Photometric calibration of the Swift ultraviolet/optical telescope":
alpha = 0.9842000
ft = 0.0110329
a1 = 0.0658568
a2 = -0.0907142
a3 = 0.0285951
a4 = 0.0308063

for i in range(nxpix_um2_ext1 - 1): #do begin
    for j in range(nypix_um2_ext1 - 1): #do begin
        if (numpy.less_equal(i, 4) | numpy.greater_equal(i, nxpix_um2_ext1-4) | numpy.less_equal(j, 4) | numpy.greater_equal(j, nxpix_um2_ext1-4)): #then begin
            #UVM2
            corrfact_um2_ext1[i,j] == 0
            coincorr_um2_ext1[i,j] == 0
        else:
            xpixmin = i-4
            xpixmax = i+4
            ypixmin = j-4
            ypixmax = j+4
            #UVM2
            ima_UVM2sum = total(ima_norm_um2[xpixmin:xpixmax,ypixmin:ypixmax])
            xvec_UVM2 = ft*ima_UVM2sum
            fxvec_UVM2 = 1 + (a1*xvec_UVM2) + (a2*xvec_UVM2*xvec_UVM2) + (a3*xvec_UVM2*xvec_UVM2*xvec_UVM2) + (a4*xvec_UVM2*xvec_UVM2*xvec_UVM2*xvec_UVM2)
            Ctheory_UVM2 = - alog(1-(alpha*ima_UVM2sum*ft))/(alpha*ft)
            corrfact_um2_ext1[i,j] = Ctheory_UVM2*(fxvec_UVM2/ima_UVM2sum)
            coincorr_um2_ext1[i,j] = corrfact_um2_ext1[i,j]*ima_sk_um2[i,j]
The above snippet is where it is messing up, as I have a mixture of IDL syntax and Python syntax. I'm just not sure how to convert certain aspects of IDL to Python. For example, I'm not quite sure how to handle ima_UVM2sum = total(ima_norm_um2[xpixmin:xpixmax,ypixmin:ypixmax]).
I'm also missing the part where it updates the correction factor and coincidence correction image files. If anyone has the patience to go over it with a fine-tooth comb and suggest the necessary changes, that would be excellent.
The original normalised image can be downloaded here; replace the '...' in the code above with this file.
One very important thing about numpy is that it applies every mathematical or comparison function element-wise, so you probably don't need to loop through the arrays at all.
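For instance (a tiny illustration, not taken from your code):

import numpy as np

a = np.arange(6)             # array([0, 1, 2, 3, 4, 5])
print(a * 2 + 1)             # [ 1  3  5  7  9 11] -- no explicit loop needed
print(np.less_equal(a, 3))   # [ True  True  True  True False False]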
So maybe start where you convolve your image with a sum filter. For 2D images this can be done with astropy.convolution.convolve or scipy.ndimage.filters.uniform_filter.
I'm not sure exactly what you want, but I think you want a 9x9 sum filter, which can be realized by:
from scipy.ndimage.filters import uniform_filter

# uniform_filter computes the local mean over the 9x9 window,
# so multiply by 9*9 = 81 to turn it into a sum (like IDL's total)
ima_UVM2sum = uniform_filter(ima_norm_um2_data, size=9) * 81
Since you want to discard any pixels that are at the borders (4 pixels), you can simply slice them away:
ima_UVM2sum_valid = ima_UVM2sum[4:-4, 4:-4]
This ignores the first and last 4 rows and the first and last 4 columns (the latter realized by making the stop value negative).
Now you want to calculate the corrections:
xvec_UVM2 = ft*ima_UVM2sum_valid
fxvec_UVM2 = 1 + (a1*xvec_UVM2) + (a2*xvec_UVM2**2) + (a3*xvec_UVM2**3) + (a4*xvec_UVM2**4)
Ctheory_UVM2 = -np.log(1 - (alpha*ima_UVM2sum_valid*ft)) / (alpha*ft)  # IDL's alog is the natural log, i.e. np.log
These are all arrays, so you still do not need to loop.
But then you want to fill your two images. Be careful, because the correction array is smaller (we ignored the first and last rows/columns), so you have to fill the same region in the correction images:
corrfact_um2_ext1[4:-4, 4:-4] = Ctheory_UVM2 * (fxvec_UVM2/ima_UVM2sum_valid)
coincorr_um2_ext1[4:-4, 4:-4] = corrfact_um2_ext1[4:-4, 4:-4] * ima_sk_um2
Still no loop, just numpy's mathematical functions. This means it is much faster (MUCH faster!) and does the same thing.
Maybe I have forgotten some slicing, which would yield a "not broadcastable" error; if so, please report back.
Just a note about your loop: Python's first axis is the second axis in FITS, and the second axis is the first FITS axis. So if you need to loop over the axes, bear that in mind so you don't end up with IndexErrors or unexpected results.
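To make that concrete, a tiny sketch with hypothetical dimensions:

import numpy as np

# A FITS image with NAXIS1 = 300 (x) and NAXIS2 = 200 (y) arrives in
# numpy with the axes swapped:
data = np.zeros((200, 300))   # shape is (NAXIS2, NAXIS1)
print(data.shape)             # (200, 300): numpy's first axis is the FITS y axis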

Python: slow for loop performance on reading, extracting and writing from a list of thousands of files

I am extracting 150 different cell values from 350,000 (20 kb) ascii raster files. My current code is fine for processing the 150 cell values from hundreds of the ascii files, but it is very slow when running on the full data set.
I am still learning Python, so are there any obvious inefficiencies, or suggestions to improve the code below?
I have tried closing the 'dat' file in the 2nd function; no improvement.
dat = None
First: I have a function which returns the row and column locations from a cartesian grid.
def world2Pixel(gt, x, y):
    ulX = gt[0]
    ulY = gt[3]
    xDist = gt[1]
    yDist = gt[5]
    rtnX = gt[2]
    rtnY = gt[4]
    pixel = int((x - ulX) / xDist)
    line = int((ulY - y) / xDist)
    return (pixel, line)
Second: a function to which I pass lists of 150 'id', 'x' and 'y' values in a for loop. The first function is called within it and used to extract the cell value, which is appended to a new list. I also have a list of files, 'asc_list', and corresponding times in 'date_list'. Please ignore count/enumerate, as I use them later; unless they are impeding efficiency.
def asc2series(id, x, y):
    #count = 1
    ls_id = []
    ls_p = []
    ls_d = []
    for n, (asc, date) in enumerate(zip(asc, date_list)):
        dat = gdal.Open(asc_list)
        gt = dat.GetGeoTransform()
        pixel, line = world2Pixel(gt, east, nort)
        band = dat.GetRasterBand(1)
        #dat = None
        value = band.ReadAsArray(pixel, line, 1, 1)[0, 0]
        ls_id.append(id)
        ls_p.append(value)
        ls_d.append(date)
Many thanks
In world2Pixel you are setting rtnX and rtnY, which you don't use.
You probably meant gdal.Open(asc) -- not asc_list.
You could move gt = dat.GetGeoTransform() out of the loop. (Rereading made me realize you can't really.)
You could cache calls to world2Pixel.
You're opening the dat file for each pixel -- you should probably turn the logic around and open each file only once, looking up all the pixels mapped to that file (see the sketch after this list).
Benchmark; check the links in this podcast to see how: http://talkpython.fm/episodes/show/28/making-python-fast-profiling-python-code
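A hedged sketch of that reordering; the names asc_list, date_list, and a points list of 150 (id, x, y) tuples are assumptions carried over from the question, and world2Pixel is your function from above:

import gdal

def extract_points(asc_list, date_list, points):
    """Open each raster once and read all 150 points from it."""
    records = []
    for asc, date in zip(asc_list, date_list):
        dat = gdal.Open(asc)
        gt = dat.GetGeoTransform()
        band = dat.GetRasterBand(1)
        for pid, x, y in points:
            pixel, line = world2Pixel(gt, x, y)
            value = band.ReadAsArray(pixel, line, 1, 1)[0, 0]
            records.append((pid, value, date))
        dat = None  # release the dataset before moving to the next file
    return records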

Export out keyframe within object range

Is it possible to export only the keyframes of a given object, within its own keyframed range?
For example, camA is keyframed in the range of frames 1 to 10. But when I try to export this camera in another format, the export takes the overall time slider into account instead, and hence exported_camA is keyframed in the range of frames 1 to 24 (24 being the max range of my time slider).
Will this be possible? I tried using cmds.playbackOptions, but apparently it also exports according to the time slider range.
def __init__(self, transform, startAnimation, endAnimation, cameraObj):
    self.fileExport = []
    print ">>> Exported : %s" % self.fileExport

    mayaGlobal = OpenMaya.MGlobal()
    mayaGlobal.viewFrame(OpenMaya.MTime(1))

    for i in range(startAnimation, endAnimation):
        focalLength = cameraObj.focalLength()
        vFilmApp = cameraObj.verticalFilmAperture()
        focalOut = 2 * math.degrees(math.atan(vFilmApp * 25.4 / (2 * focalLength)))

        myEuler = OpenMaya.MEulerRotation()
        spc = OpenMaya.MSpace.kWorld
        trans = transform.getTranslation(spc)
        rotation = transform.getRotation(myEuler)
        rotVector = OpenMaya.MVector(myEuler.asVector())

        self.fileExport.append((str(i) + '\t' + str(trans[0]) + "\t" + str(trans[1]) + "\t" + str(trans[2]) + "\t" + str(math.degrees(rotVector[0])) + "\t" + str(math.degrees(rotVector[1])) + "\t" + str(math.degrees(rotVector[2])) + "\t" + str(focalOut) + "\n"))

        mayaGlobal.viewFrame(OpenMaya.MTime(i + 1))
In cmds you can get the maximum and minimum times for a given animation like this:
key_times = cmds.keyframe('pCube1', attribute = 'translate', q=True, tc=True)
first_key = key_times[0]
last_key = key_times[-1]
Note that this has to be applied to a particular attribute (in this case, I used 'translate'), otherwise you will get the keys from the first anim curve Maya finds on the object.
That said, it's usually considered best to export either the scene keyframe range or an explicitly set frame range. Otherwise somebody may be working in a scene and scrubbing the time, then exporting and seeing fewer frames than expected.
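Querying the scene playback range for the explicit-range option looks like this (a small sketch):

import maya.cmds as cmds

# Current time slider (playback) range
start = cmds.playbackOptions(q=True, minTime=True)
end = cmds.playbackOptions(q=True, maxTime=True)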
I have also found the command cmds.findKeyframe, which captures the keyframes of the selected object's animation; it helps in my code as well.
Though I am not sure whether this will generate any adverse effects later on, I have yet to encounter one. :x
For example:
minTime = cmds.findKeyframe(which='first') # First keyframe
maxTime = cmds.findKeyframe(which='last') # Last keyframe
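The same query can also target a specific node instead of the current selection; 'camA' below is just the example name from the question:

minTime = cmds.findKeyframe('camA', which='first')  # first keyframe of camA
maxTime = cmds.findKeyframe('camA', which='last')   # last keyframe of camA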
