Export keyframes within an object's keyed range - python

Is it possible to export only the keyframes of a given object, within that object's own keyed range?
For example, camA is keyframed from frame 1 to 10, but when I export this camera to another format, the export takes the overall time slider range into account instead, so exported_camA ends up keyframed from frame 1 to 24 (24 being the maximum of my time slider).
Is this possible? I tried cmds.playbackOptions, but apparently that also exports according to the time slider range.
def __init__(self, transform, startAnimation, endAnimation, cameraObj):
    self.fileExport = []
    print ">>> Exported : %s" % self.fileExport
    mayaGlobal = OpenMaya.MGlobal()
    mayaGlobal.viewFrame(OpenMaya.MTime(1))
    for i in range(startAnimation, endAnimation):
        focalLength = cameraObj.focalLength()
        vFilmApp = cameraObj.verticalFilmAperture()
        # vertical field of view in degrees (film aperture is in inches, hence 25.4)
        focalOut = 2 * math.degrees(math.atan(vFilmApp * 25.4 / (2 * focalLength)))
        myEuler = OpenMaya.MEulerRotation()
        spc = OpenMaya.MSpace.kWorld
        trans = transform.getTranslation(spc)
        rotation = transform.getRotation(myEuler)
        rotVector = OpenMaya.MVector(myEuler.asVector())
        # one tab-separated line per frame: frame, tx, ty, tz, rx, ry, rz, vertical FOV
        self.fileExport.append(
            str(i) + '\t' + str(trans[0]) + '\t' + str(trans[1]) + '\t' + str(trans[2]) + '\t' +
            str(math.degrees(rotVector[0])) + '\t' + str(math.degrees(rotVector[1])) + '\t' +
            str(math.degrees(rotVector[2])) + '\t' + str(focalOut) + '\n')
        mayaGlobal.viewFrame(OpenMaya.MTime(i + 1))

In cmds you can get the maximum and minimum key times for a given animation like this:
key_times = cmds.keyframe('pCube1', attribute = 'translate', q=True, tc=True)
first_key = key_times[0]
last_key = key_times[-1]
Note that this has to be applied to a particular attribute (in this case, I used 'translate'), otherwise you will get the keys from the first anim curve Maya finds on the object.
That said, it's usually considered best to export either the scene keyframe range or an explicitly set frame range. Otherwise you may have somebody working in a scene and scrubbing the time, then exporting and seeing fewer frames.
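If you do want the export to cover only that keyed range, you could drive the loop from those values rather than the time slider. A minimal sketch (using 'camA' as a stand-in for your camera):

import maya.cmds as cmds

key_times = cmds.keyframe('camA', attribute='translate', q=True, tc=True)
if key_times:  # the query returns None when the attribute has no keys
    start, end = int(key_times[0]), int(key_times[-1])
    for frame in range(start, end + 1):
        cmds.currentTime(frame, edit=True)
        # ... sample the camera and append to your export list here ...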

I have also found the command cmds.findKeyframe, which captures the keyframes of the selected object's animation and also helps in my code.
Though I am not sure if this will generate any adverse effects later on, seeing that I have yet to encounter one :x
For example:
minTime = cmds.findKeyframe(which='first') # First keyframe
maxTime = cmds.findKeyframe(which='last') # Last keyframe
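One thing to watch: without an explicit object, cmds.findKeyframe queries the current selection, so it can pick up keys from whatever happens to be selected. Passing the node makes the query explicit (again with 'camA' as a placeholder):

minTime = cmds.findKeyframe('camA', which='first')
maxTime = cmds.findKeyframe('camA', which='last')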

Related

Problem using cloned array data without modifying original (Python)

I have a project to do for a Python initiation course, but I am stuck close to the end because of a problem.
My problem is the following:
I want to use a copy of my "tdata" data frame, composed of the values of the different attributes of a class called "world", to make changes to it (I am trying to do some forecasting with the current levels of the indicators).
I tried to do it by generating a new data frame "graphdat", which I used in a function to generate a graph.
My problem is that, in the end, my "tdata" array is also modified.
I tried to use graphdat = tdata.copy(), but it returns an AttributeError: 'world' object has no attribute 'copy'.
Would anyone know how I could do it another way?
Thank you!
def graph_ppm(self):
    self.price_ppm = 10
    self.budget -= self.price_ppm
    period = tdata.period
    graphdat = tdata
    while period < 30:
        period += 1
        graphdat.sup = (graphdat.emit - graphdat.absorb)
        graphdat.ppm += graphdat.sup
        yppm.append(round(graphdat.ppm, 2))
EDIT:
I think I misunderstood the whole problem.
As suggested by Md Imbesat Hassan Rizvi, I decided to use graphdat = copy.deepcopy(tdata), but since I want to use this function multiple times, I need to reinitialize graphdat to the current level of the parameters and the current period each time.
The problem is that I obtain this kind of graph if I run the function multiple times:
[graph image]
My maximum period is 30, and I want to get rid of the past values, creating a brand-new graph each run.
def graph_temp(self):
    self.price_temp = 10
    self.budget -= self.price_temp
    graphdat = copy.deepcopy(tdata)
    period = graphdat.period
    plx.clear_plot()
    while period < 30:
        period += 1
        graphdat.sup = (graphdat.emit - graphdat.absorb)
        graphdat.ppm += graphdat.sup
        if graphdat.ppm < 380:
            graphdat.temperature += graphdat.sup * 0.001
        if graphdat.ppm < 400:
            graphdat.temperature += (graphdat.sup) * 0.001
        if graphdat.ppm < 450:
            graphdat.temperature += (graphdat.sup) * 0.005
            graphdat.pop_satisfaction -= 1
        else:
            graphdat.temperature += (graphdat.sup) * 0.01
        ytemp.append(round(graphdat.temperature, 2))
    limittemp = [2] * 31
    recomtemp = [1.5] * 31
    plx.plot(ytemp, label="Temperatures forecast", line_marker="•")
    plx.plot(limittemp, label="Catastrophe level", line_marker="-")
    plx.plot(recomtemp, label="Limit level after period 30", line_marker="=")
    plx.xlabel('Temperatures')
    plx.ylabel('Period')
    plx.title('Title')
    plx.figsize(50, 25)
    plx.ticks(31, 11)
    return plx.show()
Since tdata appears to be an instance of a custom class world, for which a copy attribute doesn't exist, you can make a copy of it using methods from the copy module:
import copy
graphdat = copy.deepcopy(tdata)
Henceforth, graphdat and tdata will be different instances of the world class.
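To see the difference in isolation, here is a minimal sketch (the world class below is a stand-in for the real one):

import copy

class world:
    def __init__(self):
        self.ppm = 400.0

tdata = world()

alias = tdata               # both names point at the same object
alias.ppm += 10
print(tdata.ppm)            # 410.0 -- tdata changed too

graphdat = copy.deepcopy(tdata)  # independent copy
graphdat.ppm += 10
print(tdata.ppm)            # still 410.0 -- original untouched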

Finding the global minimum of a noisy function via simulated annealing in python

I'm trying to find the global minimum of the function from the Hundred-dollar, Hundred-digit Challenge, problem #4, as an exercise in simulated annealing.
As the basis of my understanding and approach to writing the code, I refer to the book Global Optimization Algorithms (version 3), which is available for free online.
Consequently, I initially came up with the following code:
The noisy func:
def noisy_func(x, y):
    return (math.exp(math.sin(50*x)) +
            math.sin(60*math.exp(y)) +
            math.sin(70*math.sin(x)) +
            math.sin(math.sin(80*y)) -
            math.sin(10*(x + y)) +
            0.25*(math.pow(x, 2) +
                  math.pow(y, 2)))
The function used to mutate the values:
def mutate(X_Value, Y_Value):
    mutationResult_X = X_Value + randomNumForInput()
    mutationResult_Y = Y_Value + randomNumForInput()
    while mutationResult_X > 4 or mutationResult_X < -4:
        mutationResult_X = X_Value + randomNumForInput()
    while mutationResult_Y > 4 or mutationResult_Y < -4:
        mutationResult_Y = Y_Value + randomNumForInput()
    mutationResults = [mutationResult_X, mutationResult_Y]
    return mutationResults
randomNumForInput simply returns a random number between -4 and 4 (the interval limits for the search), so it is equivalent to random.uniform(-4, 4).
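For completeness, a minimal version of that helper as described (an assumption, since the original definition isn't shown):

import random

def randomNumForInput():
    # interval limits for the search
    return random.uniform(-4, 4)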
This is the central function of the program.
def simulated_annealing(f):
    """Performs simulated annealing to find a solution"""
    # Start by initializing the current state with the initial state,
    # acquired by a random generation of a number and then using it
    # in the noisy func; also set solution (best_state) as current_state
    # for a start
    pCurSelect = [randomNumForInput(), randomNumForInput()]
    current_state = f(pCurSelect[0], pCurSelect[1])
    best_state = current_state
    # Begin time monitoring; this will represent the
    # number of steps over time
    TimeStamp = 1
    initial_temp = 100
    final_temp = .1
    alpha = 0.001
    num_of_steps = 1000000
    # calculates by how much the temperature should be tweaked
    # each iteration
    temp_Delta = calcTempDelta(initial_temp, final_temp, num_of_steps)
    # set current_temp via initial temp
    current_temp = getTemperature(initial_temp, temp_Delta)
    # max_iterations = 100
    # initial_temp = get_Temperature_Poly(TimeStamp)
    while current_temp > final_temp:
        # get a mutated value from the current value,
        # hence being a 'neighbour' value;
        # with it, acquire the neighbouring state
        # to the current state
        neighbour_values = mutate(pCurSelect[0], pCurSelect[1])
        neighbour_state = f(neighbour_values[0], neighbour_values[1])
        # calculate the difference between the newly mutated
        # neighbour state and the current state
        delta_E_Of_States = neighbour_state - current_state
        # Check if neighbour_state is the best state so far;
        # if the new solution is better (lower), accept it
        if delta_E_Of_States <= 0:
            pCurSelect = neighbour_values
            current_state = neighbour_state
            if current_state < best_state:
                best_state = current_state
        # if the new solution is not better, accept it with a probability of e^(-cost/temp)
        else:
            if random.uniform(0, 1) < math.exp(-(delta_E_Of_States) / current_temp):
                pCurSelect = neighbour_values
                current_state = neighbour_state
        # Here, we'd decrement the temperature or increase the timestamp, normally:
        # current_temp -= alpha
        # print("Run number: " + str(TimeStamp) + " current_state = " + str(current_state))
        # increment TimeStamp
        TimeStamp = TimeStamp + 1
        # calc temp for next iteration
        current_temp = getTemperature(current_temp, temp_Delta)
    # print("Iteration Count: " + str(TimeStamp))
    return best_state
alpha is not used for this implementation, however temperature is moderated linearly using the following funcs:
def calcTempDelta(T_Initial, T_Final, N):
    return (T_Initial - T_Final) / N

def getTemperature(T_old, T_new):
    return T_old - T_new
This is how I implemented the solution described on page 245 of the book. However, this implementation does not return the global minimum of the noisy function, but rather one of its nearby local minima.
The reasons I implemented the solution in this way are twofold:
1. It was provided to me as a working example of linear temperature moderation, and thus a working template.
2. Although I have tried to understand the other forms of temperature moderation laid out in the book on pages 248-249, it is not entirely clear to me how the variable "Ts" is calculated, and even after trying to look through some of the sources the book cites, it remains esoteric to me. So I figured I'd rather make this "simple" solution work correctly first, before attempting other approaches to temperature quenching (logarithmic, exponential, etc.).
Since then I have tried numerous ways to find the global minimum of the noisy func through different iterations of the code, too many to post here all at once. Among the rewrites I've tried:
- Decreasing the randomly rolled number over each iteration so as to search within a smaller scope every time; this produced more consistent but still incorrect results.
- Mutating by different increments, say between -1 and 1, etc. Same effect.
- Rewriting mutate to examine the points neighbouring the current point via some step size: adding/subtracting the step size from the current point's x/y values, checking the difference between each newly generated point and the current point (the delta of E's, basically), and returning whichever values produced the lowest value, thus being the closest-proximity neighbour.
- Reducing the interval limits over which the search occurs.
In the solutions involving step sizes, reduced limits, and checking neighbours by quadrants, I used movements comprised of some constant alpha times the time_stamp.
These and other attempted solutions have not worked, either producing even less accurate results (albeit in some cases more consistent ones) or, in one case, not working at all.
Therefore I must be missing something, whether it's to do with the temperature moderation or the precise formula by which I'm supposed to make the next step (mutate) in the algorithm.
I know it's a lot to take in and look at, but I'd appreciate any constructive criticism/help/advice you can provide.
If it would help to see code from the other solution attempts, I'll post them if asked.
It is important that you keep track of what you are doing.
I have put a few important tips in frigidum.
The alpha cooling generally works well; it makes sure you don't speed through the interesting sweet spot, where about 0.1 of the proposals are accepted.
Make sure your proposals are not too coarse. I have included an example where I only change x or y, but never both; the idea is that annealing will take what's best, or take a tour, and let the scheme decide.
I use the package frigidum for the algo, but it's pretty much the same as your code. Also notice I have 2 proposals, a large change and a small change; combinations usually work well.
Finally, I noticed it's hopping a lot. A small variation would be to pick the best-so-far before you go into the last 5% of your cooling.
I use/install frigidum
!pip install frigidum
And made a small change to make use of numpy arrays:
import math

def noisy_func(X):
    x, y = X
    return (math.exp(math.sin(50*x)) +
            math.sin(60*math.exp(y)) +
            math.sin(70*math.sin(x)) +
            math.sin(math.sin(80*y)) -
            math.sin(10*(x + y)) +
            0.25*(math.pow(x, 2) +
                  math.pow(y, 2)))
import frigidum
import numpy as np
import random

def random_start():
    return np.random.random(2) * 4

def random_small_step(x):
    if np.random.random() < .5:
        return np.clip(x + np.array([0, 0.02 * (random.random() - .5)]), -4, 4)
    else:
        return np.clip(x + np.array([0.02 * (random.random() - .5), 0]), -4, 4)

def random_big_step(x):
    if np.random.random() < .5:
        return np.clip(x + np.array([0, 0.5 * (random.random() - .5)]), -4, 4)
    else:
        return np.clip(x + np.array([0.5 * (random.random() - .5), 0]), -4, 4)

local_opt = frigidum.sa(random_start=random_start,
                        neighbours=[random_small_step, random_big_step],
                        objective_function=noisy_func,
                        T_start=10**2,
                        T_stop=0.00001,
                        repeats=10**4,
                        copy_state=frigidum.annealing.copy)
The output of the above was
---
Neighbour Statistics:
(proportion of proposals which got accepted *and* changed the objective function)
random_small_step : 0.451045
random_big_step : 0.268002
---
(Local) Minimum Objective Value Found:
-3.30669277
With the above code I sometimes get below -3, but I also noticed that sometimes it finds something around -2 and then gets stuck in the last phase.
So a small tweak would be to re-anneal the last phase of the annealing with the best-found-so-far.
Hope that helps; let me know if you have any questions.

Weird "demonic" xtick in matplotlib (jpeg artifacts? No way...)

So I'm comparing NBA betting lines between different sportsbooks over time.
Procedure:
Open pickle file of scraped data
Plot the scraped data
The pickle file is a dictionary of NBA betting lines over time. Each of the two teams are their own nested dictionary. Each key in these team-specific dictionaries represents a different sportsbook. The values for these sportsbook keys are lists of tuples, representing timeseries data. It looks roughly like this:
dicto = {
    'Time': <time that the game starts>,
    'Team1': {
        Market1: [(time1, value1), (time2, value2), etc...],
        Market2: [(time1, value1), (time2, value2), etc...],
        etc...
    },
    'Team2': {
        <SAME FORM AS TEAM1>
    }
}
There are no issues with scraping or manipulating this data. The issue comes when I plot it. Here is the code for the script that unpickles and plots these dictionaries:
import matplotlib.pyplot as plt
import pickle, datetime, os, time, re

IMAGEPATH = 'Images'
reg = re.compile(r'[A-Z]+#[A-Z]+[0-9|-]+')
noDate = re.compile(r'[A-Z]+#[A-Z]+')

# Turn 1 into '01'
def zeroPad(num):
    if num < 10:
        return '0' + str(num)
    else:
        return num

# Turn list of time-series tuples into an x list and y list
def unzip(lst):
    x = []
    y = []
    for i in lst:
        x.append(f'{i[0].hour}:{zeroPad(i[0].minute)}')
        y.append(i[1])
    return x, y

# Make exactly 5, evenly spaced xticks
def prune(xticks):
    last = len(xticks)
    first = 0
    mid = int(len(xticks) / 2) - 1
    upMid = int(mid + (last - mid) / 2)
    downMid = int((mid - first) / 2)
    out = []
    count = 0
    for i in xticks:
        if count in [last, first, mid, upMid, downMid]:
            out.append(i)
        else:
            out.append('')
        count += 1
    return out

def plot(filename, choice):
    IMAGEPATH = 'Images'
    IMAGEPATH = os.path.join(IMAGEPATH, choice)
    with open(filename, 'rb') as pik:
        dicto = pickle.load(pik)
    fig, axs = plt.subplots(2)
    gameID = noDate.search(filename).group(0)
    tm = dicto['Time']
    fig.suptitle(gameID + '\n' + str(tm))
    i = 0
    for team in dicto.keys():
        axs[i].set_title(team)
        if team == 'Time':
            continue
        for market in dicto[team].keys():
            lst = dicto[team][market]
            x, y = unzip(lst)
            axs[i].plot(x, y, label=market)
            axs[i].set_xticks(prune(x))
            axs[i].set_xticklabels(rotation=45, labels=x)
        i += 1
    plt.tight_layout()
    # Finish
    outputFile = reg.search(filename).group(0)
    date = (datetime.datetime.today() - datetime.timedelta(hours=6)).date()
    fig.savefig(os.path.join(IMAGEPATH, str(date), f'{outputFile}.png'))
    plt.close()
Here is the image that results from calling the plot function on one of the dictionaries that I described above. It is pretty much exactly as I intended it, except for one very strange and bothersome problem.
You will notice that the bottom right tick looks haunted, demonic, jpeggy, whatever you want to call it. I am highly suspicious that this problem occurs in the prune function, which I use to set the xtick values of the plot.
The reason I prune the values with a function like this is that these dictionaries are continuously updated, so setting a static number of xticks would not work. And if I don't prune the xticks, they end up unreadable due to overlapping one another.
I am quite confused as to what could cause an xtick to look like this. It happens consistently, for every dictionary, every time. Before I added the prune function (when the xticks were unbounded, overlapping one another), this issue did not occur. So when I say I'm suspicious of the prune function, I am really quite certain.
I will be happy to share an instance of one of these dictionaries, but they are saved as .pickle files, and I'm pretty sure it's bad practice to share pickle files over the internet. I have been warned about potential malware, so I'll just stay away from that. But if you need to see the dictionary, I can take the time to prettily print one and share a screenshot. Any help is greatly appreciated!
Matplotlib does this when many xticks or yticks are plotted on the same value. It is normal. If you can limit the number of times a specific value is plotted, you can make it appear indistinguishable from the rest of the xticks.
Plot a simple example to test this out and you will see for yourself.
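For instance, a minimal sketch (unrelated to your data) that mimics the effect by drawing the same rotated label many times at one spot:

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot(range(10))
# stacking the same rotated label 20 times reproduces the smeared,
# "demonic" look of an overplotted tick label
for _ in range(20):
    ax.text(8, -1.5, '19:59', rotation=45)
plt.show()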

MODIS AQUA Data - Stacking / Mosaic data with python GDAL

I know how to access and plot subdatasets using gdal and python. However, I'm wondering if there's a way to use the GEO data contained in the HDF4 file so I could look at the same area over many years.
And if possible, can an area be cut out of the data and how?
UPDATE:
To be more specific: I plotted MODIS data, and as you can see below, the river moves downwards (rectangular structure, top left corner). So over a whole year it is not the same location that I'm observing.
There's a directory in the subdatasets called Geolocation Fields, with Longitude and Latitude fields. So is it possible to access this information, or lay it over the data, to cut out a specific area?
If we, for example, take a look at the NASA picture below: would it be possible to cut it between 10-15 lat. and -5 to 0 long.?
You can download a sample file by copying the url below:
https://ladsweb.modaps.eosdis.nasa.gov/archive/allData/6/MYD021KM/2009/034/MYD021KM.A2009034.1345.006.2012058160107.hdf
UPDATE:
I ran
x0, dx, dxdy, y0, dydx, dy = hdf_file.GetGeoTransform()
which gave me the following output:
x0: 0.0
dx: 1.0
dxdy: 0.0
y0: 0.0
dydx: 0.0
dy: 1.0
As well as
gdal.Warp(workdir2+"/output.tif",workdir1+"/MYD021KM.A2009002.1345.006.2012058153105.hdf")
which gave me the following error:
ERROR 1: Input file /Volumes/Transcend/Master_Thesis/Data/AQUA_002_1345/MYD021KM.A2009002.1345.006.2012058153105.hdf has no raster bands.
UPDATE 2:
Here's my code on how I open and read my hdf files:
all_files is a list containing file names like:
MYD021KM.A2008002.1345.006.2012066153213.hdf
MYD021KM.A2008018.1345.006.2012066183305.hdf
MYD021KM.A2008034.1345.006.2012067035823.hdf
MYD021KM.A2008050.1345.006.2012067084421.hdf
etc .....
i = 0               # file counter (assumed initialized before the loop)
broken_files = []   # collects unreadable files (assumed initialized before the loop)
for fe in all_files:
    print "\nopening file: ", fe
    try:
        hdf_file = gdal.Open(workdir1 + "/" + fe)
        print "getting subdatasets..."
        subDatasets = hdf_file.GetSubDatasets()
        Emissiv_Bands = gdal.Open(subDatasets[2][0])
        print "getting bands..."
        Bands = Emissiv_Bands.ReadAsArray()
        print "unit conversion ... "
        get_name_tag = re.findall(r".A(\d{7}).", fe)[0]
        print "name tag of current file: ", get_name_tag
        # Code for 1 band:
        L_B_1 = radiance_scales[specific_band] * (Bands[specific_band] - radiance_offsets[specific_band])  # Source: MODIS Level 1B Product User's Guide Page 36 MOD_PR02 V6.1.12 (TERRA)/V6.1.15 (AQUA)
        data_1_band['%s' % get_name_tag] = L_B_1
        L_B_1_mean['%s' % get_name_tag] = L_B_1.mean()
        # Code for many different bands:
        data_all_bands["%s" % get_name_tag] = []
        for k in Band_nrs[lowest_band:highest_band]:  # Bands 8-11
            L_B = radiance_scales[k] * (Bands[k] - radiance_offsets[k])  # List with all bands
            print "Appending mean value of {} for band {} out of {}".format(L_B.mean(), Band_nrs[k], len(Band_nrs))
            data_all_bands['%s' % get_name_tag].append(L_B.mean())  # Mean radiance values
        i = i + 1
        print "data added. Adding i+1 = ", i
    except AttributeError:
        print "\n*******************************"
        print "Can't open file {}".format(workdir1 + "/" + fe)
        print "Skipping this file..."
        print "*******************************"
        broken_files.append(workdir1 + "/" + fe)
        i = i + 1
Without knowing your exact data source and desired output etc., it is hard to give you a specific answer. With that said, it appears that you have the native .hdf format of MODIS images and wish to do some subsetting to get the images referenced to the same area, then plot etc.
It might help to look at gdal.Warp() from the gdal module. This method can take a .hdf file and subset a series of images to the same bounding box with the same resolution/number of rows and columns.
You can then analyse and plot these images/compare pixels etc.
I hope this gives you a good starting point.
gdal.Warp docs: https://gdal.org/python/osgeo.gdal-module.html#Warp
More general warp help: https://www.gdal.org/gdalwarp.html
Something like this:
import gdal

# Set up the gdal.Warp options such as desired spatial resolution,
# resampling algorithm to use and output format.
# See: https://gdal.org/python/osgeo.gdal-module.html#WarpOptions
# for other options that can be specified.
warp_options = gdal.WarpOptions(format="GTiff",
                                outputBounds=[min_x, min_y, max_x, max_y],
                                xRes=res,
                                yRes=res,
                                # PROBABLY NEED TO SET t_srs TOO
                                )

# Apply the warp.
# (output_file, input_file, options)
gdal.Warp("/path/to/output_file.tif",
          "/path/to/input_file.hdf",
          options=warp_options)
Exact code to write (note the file path belongs inside the quotes of the subdataset string):

# Apply the warp.
# (output_file, input_file, options)
gdal.Warp('/path/to/output_file.tif',
          'HDF4_EOS:EOS_SWATH:"/path/to/MYD021KM.A2009034.1345.006.2012058160107.hdf":MODIS_SWATH_Type_L1B:EV_1KM_RefSB',
          options=warp_options)
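If you are unsure of the exact subdataset string for your file, you can list the available ones first; a small sketch in the same style as the code above:

import gdal

hdf_file = gdal.Open("MYD021KM.A2009034.1345.006.2012058160107.hdf")
for name, description in hdf_file.GetSubDatasets():
    print name, "->", description
# pass one of the printed 'name' strings to gdal.Warp()
# instead of the bare .hdf path (which has no raster bands itself)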

Need to take data from text file to spreadsheet for analysis

I am working with data which I get in text files, and which has to be subsequently analysed. I'm currently using Excel for this task. The original file looks like this:
Contact Angle (deg) 86.20
Wetting Tension (dy/cm) 4.836
Wetting Tension Left (dy/cm) 39.44
Wetting Tension Right (dy/cm) 39.44
Base Tilt Angle (deg) 0.00
Base (mm) 1.6858
Base Area (mm2) 2.2322
Height (mm) 0.7888
Tip Width (mm) 0.9707
Wetted Tip Width (mm) 0.9581
Sessile Volume (ul) 1.1374
Sessile Surface Area (mm2) 4.1869
Contrast (cts) 245
Sharpness (cts) 161
Black Peak (cts) 10
White Peak (cts) 255
Edge Threshold (cts) 111
Base Left X (mm) 4.138
Base Right X (mm) 5.821
Base Y (mm) 2.980
RMS Fit Error (mm) 3.545E-3
#1600
I don't need the majority of this information, and for now, all I need is the Contact Angle at the top, and the time (prefixed by the '#' at the bottom). At the moment, I have a script which extracts the information I need and creates another text file for easy reading. The code used is below:
infile = "in.txt"
outfile = "newout.out"
measure_time = ""
with open(infile) as f, open(outfile, 'w') as f2:
for line in f:
if line.split():
if line.split()[0] == "Contact":
contact_angle = line.split()[-1].strip()
f2.write("Contact Angle (deg): " + contact_angle + '\n')
if line.split()[0][0] == '#':
for i in range(1,5):
measure_time += (line.split()[0][i])
f2.write("Measured at: " + measure_time[:2] + ":" + measure_time[2:] + '\n')
measure_time = ""
else:
continue
What I am looking for is a way to get my data nicely formatted in a spreadsheet for easy analysis. I would like the angles in the same row, in adjoining cells, and the measurement times in the cells below that, but I'm unsure of the best way to go about this.
Can anyone with some more Python experience help me here?
EDIT: The image here shows what I tried to explain (poorly) above.
EDIT2: The solution posted below by @RonRosenfeld works, but I would still prefer a Python solution for this problem, as stated earlier. As I have no previous experience with Excel VBA, I would rather use something familiar to me.
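For a pure-Python route, here is a minimal sketch using only the standard library: it reuses the same parsing idea as your script and writes a CSV (which Excel opens directly), with angles across the first row and their times in the row below. The file names are placeholders to adapt:

import csv

infiles = ["in1.txt", "in2.txt"]   # hypothetical file names -- adapt to yours
angles, times = [], []

for infile in infiles:
    with open(infile) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "Contact" and parts[1] == "Angle":
                angles.append(parts[-1])
            elif parts[0].startswith('#'):
                raw = parts[0][1:5]            # e.g. '1600'
                times.append(raw[:2] + ":" + raw[2:])

with open("results.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(angles)   # angles across the first row
    writer.writerow(times)    # matching times in the row below

The csv module avoids any Excel dependency; if you need real .xlsx output later, packages such as openpyxl or xlsxwriter offer similar row-wise writes.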
I would just read the original file or files into Excel, selecting only those lines that begin with the "Contact Angle" or "#" token. I'm not sure how much error checking you need to do. The following assumes that you will select multiple files and that each file is formatted as you demonstrated in your original data. It will output the angles in row 1 and the corresponding times in row 2. It does NOT check for proper formatting, or that every angle has a corresponding time.
It will also give an error if you only select one file. That capability can be added, if necessary.
EDIT: modified to account for either TAB or SPACE as the separator; also added code to clear worksheet and autofit the columns
It should also be easy to modify if you want to select additional parameters.
Option Explicit
'Set Reference to Microsoft Scripting Runtime

Sub GetDataFromTextFiles()
    Dim FSO As FileSystemObject
    Dim TS As TextStream
    Dim F As File
    Dim sLines As Variant
    Dim I As Long, J As Long
    Dim sFilePath
    Dim S As String
    Dim vLines() As Variant
    Dim rExtract As Range

    'Hard coded here but could also use a
    'user form to select multiple lines
    vLines = Array("#", "Contact Angle")
    Set rExtract = [b3]

    Cells.Clear
    [a3] = "Contact Angle (deg)"
    [a4] = "Measured At"

    sFilePath = Application.GetOpenFilename("Text Files (*.txt), *.txt", MultiSelect:=True)
    Set FSO = New FileSystemObject

    For J = LBound(sFilePath) To UBound(sFilePath)
        Set TS = FSO.OpenTextFile(sFilePath(J), ForReading)
        Do Until TS.AtEndOfStream = True
            S = Trim(Replace(TS.ReadLine, Chr(9), Chr(32)))
            For I = 0 To UBound(vLines)
                If InStr(1, S, vLines(I)) = 1 Then
                    Select Case I
                        Case 0 '#
                            With rExtract(2, 1)
                                .Value = TimeSerial(Int(Mid(S, 2) / 100), Mid(S, 2) Mod 100, 0)
                                .NumberFormat = "hh:mm"
                            End With
                        Case 1 'Contact Angle
                            rExtract(1, 1) = Mid(S, InStrRev(S, " ") + 1)
                            'advance to next column after outputting angle
                            Set rExtract = rExtract(1, 2)
                    End Select
                End If
            Next I
        Loop
    Next J
    Cells.EntireColumn.AutoFit
End Sub
Here is another macro that does NOT require setting a reference to Microsoft Scripting Runtime. It does not use the FileSystemObject, but rather uses built-in VBA routines to read the file. I have been told that it will run more quickly, but I've not tested it myself. In addition, there could be issues with certain types of data, but they do not seem to exist in your files, and it runs fine on your sample.
Option Explicit

Sub GetDataFromTextFiles()
    Dim sLines As Variant
    Dim I As Long, J As Long
    Dim sFilePath
    Dim S As String
    Dim vLines() As Variant
    Dim rExtract As Range

    'Hard coded here but could also use a
    'user form to select multiple lines
    vLines = Array("#", "Contact Angle")
    Set rExtract = [b3]

    Cells.Clear
    [a3] = "Contact Angle (deg)"
    [a4] = "Measured At"

    sFilePath = Application.GetOpenFilename("Text Files (*.txt), *.txt", MultiSelect:=True)

    For J = LBound(sFilePath) To UBound(sFilePath)
        Open sFilePath(J) For Input As #1
        Do While Not EOF(1)
            Input #1, S
            S = Trim(Replace(S, Chr(9), Chr(32)))
            For I = 0 To UBound(vLines)
                If InStr(1, S, vLines(I)) = 1 Then
                    Select Case I
                        Case 0 '#
                            With rExtract(2, 1)
                                .Value = TimeSerial(Int(Mid(S, 2) / 100), Mid(S, 2) Mod 100, 0)
                                .NumberFormat = "hh:mm"
                            End With
                        Case 1 'Contact Angle
                            rExtract(1, 1) = Mid(S, InStrRev(S, " ") + 1)
                            'advance to next column after outputting angle
                            Set rExtract = rExtract(1, 2)
                    End Select
                End If
            Next I
        Loop
        Close #1
    Next J
    Cells.EntireColumn.AutoFit
End Sub
