Using pyTDMS in Python

I am currently running a LabVIEW script that uses a DAQmx controller to take in voltage readings from batteries. The script takes in the voltage data and the amount of time the test was run for, and writes a TDMS file.
I would like to write a Python script that takes in this TDMS file and averages some of the values. I have not been able to figure out how to use pyTDMS to read in this file.
Does anyone know how to use pyTDMS in Python?
This is what my file looks like when I open it in Notepad++:
TDSm® i ä ä /ÿÿÿÿ name junk071316_164717 /'DAQ Assistant_0'ÿÿÿÿ /'DAQ Assistant_0'/'Voltage'i ÿÿÿÿ NI_Scaling_Status unscaled NI_Number_Of_Scales NI_Scale[1]_Scale_Type Linear NI_Scale[1]_Linear_Slope
I H Hà4? NI_Scale[1]_Linear_Y_Intercept
¿Ú$À NI_Scale[1]_Linear_Input_Source /'DAQ Assistant_0'/'Voltage_0'i ÿÿÿÿ NI_Scaling_Status unscaled NI_Number_Of_Scales NI_Scale[1]_Scale_Type Linear NI_Scale[1]_Linear_Slope
òòòà4? NI_Scale[1]_Linear_Y_Intercept
`IÝ$À N
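For reference, the same kind of file can also be read with the npTDMS library, which exposes a documented group/channel API; this is only a minimal sketch, not the pyTDMS API itself. The filename, group name ('DAQ Assistant_0') and channel name ('Voltage') are guesses taken from the dump above, so adjust them to match your file.

from nptdms import TdmsFile

# Filename, group and channel names are assumptions based on the dump above.
tdms_file = TdmsFile.read("junk071316_164717.tdms")
channel = tdms_file["DAQ Assistant_0"]["Voltage"]
voltages = channel[:]                      # numpy array of the raw samples
print("average voltage:", voltages.mean())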

Related

How to design data provider

I have an application made of many individual scripts. The output of each of them is the input of the next one. Each script reads data at the beginning and saves the modified data as its last activity. In short:
script1.py: reads mariadb data to df -> does stuff -> saves raw data in mysql.sql (sqlite3 format)
script2.py: reads sqlite3 file -> does stuff -> saves raw data in data.txt - tab separated values
program3.exe: reads data.txt -> does stuff -> writes another.txt - tab separated values
script4.py: reads another.txt -> does stuff -> creates data4.csv
script5.py: reads data4.csv -> does stuff -> inserts mariadb entries
What I am searching and asking for is: is there any design pattern (or other mechanism) for creating a data provider for a situation like that? The "data provider" should be an abstraction layer which:
has different data source types (like a MariaDB connection, CSV files, TXT files, and others) predefined, with that list being easy to extend;
reads data from the specified source and delivers it to the given script/program (e.g. by executing the script with a parameter);
validates whether the output of each application part (each script/program) is valid, or takes over the task of generating this data.
In general, the "data provider" would run script1.py with some parameter (a dataframe?) in some sandbox, take over the data before it is saved, and prepare the data for script2.py's execution. Or it could just run script1.py with some parameter, wait for it to finish, check whether the output is valid, convert (if necessary) that output to another format, and run script2.py with well-prepared data.
I have access to the Python script sources (script1.py ... script5.py) and I can modify them. I cannot modify program3.exe's source code, but it will always be one part of the whole process. What is the best way (or just a way) to design such a layer?
Since you include a .exe file, I'll assume you are using Windows. You can write a batch file or a PowerShell script. On Linux the equivalent would be a bash script.
If your sources and destinations are hard-coded, then the batch file is going to be something like
python script1.py
REM assume the output file is named mysql.sql
python script2.py
REM assume the output file is data.txt with tab separated values
program3.exe
REM assume the output file is another.txt with tab separated values
python script4.py
REM creates data4.csv
python script5.py
The REM is short for REMARK in a batch file and allows for commenting.
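If you want something closer to the abstraction layer you describe, here is a minimal sketch in Python of one way to structure it. The script and file names are taken from your question; the non_empty validator is only a placeholder, and real validation logic would have to be data-specific.

import subprocess
from pathlib import Path

class Step(object):
    """One pipeline stage: a command to run and the file it is expected to produce."""
    def __init__(self, command, output_file=None, validator=None):
        self.command = command          # e.g. ['python', 'script1.py']
        self.output_file = Path(output_file) if output_file else None
        self.validator = validator      # optional callable(Path) -> bool

    def run(self):
        subprocess.run(self.command, check=True)
        if self.output_file is not None:
            if not self.output_file.exists():
                raise RuntimeError('%s did not produce %s' % (self.command, self.output_file))
            if self.validator and not self.validator(self.output_file):
                raise RuntimeError('%s failed validation' % self.output_file)

def non_empty(path):
    return path.stat().st_size > 0

pipeline = [
    Step(['python', 'script1.py'], 'mysql.sql', non_empty),
    Step(['python', 'script2.py'], 'data.txt', non_empty),
    Step(['program3.exe'], 'another.txt', non_empty),
    Step(['python', 'script4.py'], 'data4.csv', non_empty),
    Step(['python', 'script5.py']),     # writes to MariaDB, no output file to check
]

for step in pipeline:
    step.run()

Each script stays unchanged; the provider only decides what to run, in what order, and whether the hand-off files look sane before the next stage starts.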

How to load .mat file into workspace using Matlab Engine API for Python?

I have a .mat workspace file containing 4 character variables. These variables contain paths to various folders I need to be able to cd to and from relatively quickly. Usually, when using only Matlab I can load this workspace as follows (provided the .mat file is in the current directory).
load paths.mat
Currently I am experimenting with the Matlab Engine API for Python. The Matlab help docs recommend using the following Python formula to send variables to the current workspace in the desktop app:
import matlab.engine
eng = matlab.engine.start_matlab()
x = 4.0
eng.workspace['y'] = x
a = eng.eval('sqrt(y)')
print(a)
This works well. However, the whole point of the .mat file is that it can quickly load an entire set of variables the user is comfortable with, so the above is not efficient for loading the workspace.
I have also tried two different variations in Python:
eng.load("paths.mat")
eng.eval("load paths.mat")
The first variation successfully loads a dict variable in Python containing all four keys and values, but this does not propagate to the workspace in Matlab. The second variation throws an error:
File "", line unknown
SyntaxError: Error: Unexpected MATLAB expression.
How do I load up a workspace through the engine without having to manually do it in Matlab? This is an important part of my workflow....
You didn't specify the number of output arguments from the MATLAB engine, which is a possible reason for the error.
I would expect the error from eng.load("paths.mat") to read something like
TypeError: unsupported data type returned from MATLAB
The difference in error messages may arise from different versions of MATLAB or the engine API.
In any case, try specifying the number of output arguments like so,
eng.load("paths.mat", nargout=0)
This was giving me fits for a while. A few things to try: I was able to get this working on Matlab 2019a with Python 3.7. I had the most trouble when trying to build a string and use it as an argument for load and eval/evalin, so there might be some trickiness with single versus double quotes, or a need for an additional set of quotes in the string.
Make sure the MAT file is on the Matlab Path. You can use addpath and rmpath really easily with pathlib objects:
from pathlib import Path
mat_file = Path('local/path/from/cwd/example.mat').resolve()  # get absolute path
eng.addpath(str(mat_file.parent))
# Execute other commands
eng.rmpath(str(mat_file.parent))
Per dML's answer, make sure to specify nargout=0 when there are no outputs from the function, and always when calling a script. If there are one or more outputs you don't have to capture them in Python, and if there is more than one they will be returned as a tuple.
You can also turn your script into a function (just won't have access to base workspace without using evalin/assignin):
function load_example_matfile()
evalin('base','load example.mat')
end
eng.feval('load_example_matfile', nargout=0)
And it does seem to work with plain vanilla eval and load as well, but if you leave off the nargout=0 it either errors out or gives you the output of the file in Python directly.
Both of these work.
eng.eval('load example.mat', nargout=0)
eng.load('example.mat', nargout=0)
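Putting the two answers together, a minimal end-to-end sketch; the variable name data_dir is hypothetical, so substitute whichever character variables your paths.mat actually contains.

import matlab.engine

eng = matlab.engine.start_matlab()

# Load the whole .mat file into MATLAB's base workspace; nargout=0 because load returns nothing here
eng.eval("load('paths.mat')", nargout=0)

# The loaded variables are now visible both to MATLAB code and through eng.workspace
data_dir = eng.workspace['data_dir']       # hypothetical variable name from paths.mat
eng.eval("cd(data_dir)", nargout=0)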

How to operate on unsaved Excel file?

I'd like to automate a loop:
ABAQUS generates an Excel file;
Matlab utilises the data in the Excel file;
loop steps 1 and 2.
Now my question is: after step 1, the Excel file from ABAQUS remains unsaved, as Book1. I cannot use a Matlab command to save it. Is there a way to use the data in this ''Book1'' file without saving it? Or can I find where it is stored so I can use the data inside? (I assume that Excel saves the file somewhere even if the user doesn't?)
Thank you! 
As agentp mentioned, if you are running Abaqus via a Python script, you can just use Python to create a .txt file to save all the relevant information. If well structured, a .txt file can be as readable as an Excel spreadsheet. Because Matlab and Python both have built-in functions to read and write files, this communication is easy to set up.
As for Matlab calling Abaqus, you can use something similar to:
system('abaqus cae nogui=YOUR_SCRIPT.py')
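A sketch of the plain-text hand-off described above: the Abaqus-side Python script writes tab-separated rows. The file name results.txt and the rows themselves are placeholders standing in for whatever the Abaqus script actually extracts.

# results.txt is a placeholder name; 'rows' stands in for the values extracted in Abaqus
rows = [(0.0, 1.25), (0.1, 1.31), (0.2, 1.40)]

with open('results.txt', 'w') as f:
    for t, s in rows:
        f.write('%g\t%g\n' % (t, s))

On the Matlab side, something like readmatrix('results.txt') (or dlmread in older releases) then reads the file straight into a numeric array.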
Your script that pipes to Excel should have some code similar to this:
abq_ExcelUtilities.excelUtilities.XYtoExcel(
    xyDataNames='S:Mises PI: PART-1-1 E: 4309 IP: 1', trueName='')
Writing the same data to a report (.rpt) file, the code looks like this:
x0 = session.xyDataObjects['S:Mises PI: PART-1-1 E: 4309 IP: 1']
session.writeXYReport(fileName='abaqus.rpt', xyData=(x0, ))
Now, to "roll your own", use that x0 object: x0.data is a regular Python tuple holding the actual data, which you can write to a file however you like, e.g.:
with open('myfile.csv', 'w') as f:
    for point in x0.data:
        f.write('%g,%g\n' % point)
(You can comment out or delete the writeXYReport call.)

Accessing Running Python program from another Python program

I have the following program running
collector.py
data = 0
while True:
    # collects data
    data = data + 1
I have another program cool.py which wants to access the current data value. How can I do this?
Ultimately, something like:
cool.py
getData()
An idea would be to use a global variable for data?
You can use memory mapping.
http://docs.python.org/2/library/mmap.html
For example, you open a file in the tmp directory, then you map this file to memory in both programs and write your data to this file.
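A minimal sketch of that idea; the file name /tmp/shared_data and the 8-byte integer layout are assumptions, and on Windows you would pick a different path.

# collector.py -- writes the current value into a memory-mapped file
import mmap
import struct
import time

with open('/tmp/shared_data', 'wb') as f:
    f.write(b'\x00' * 8)                    # reserve 8 bytes for one integer

f = open('/tmp/shared_data', 'r+b')
mm = mmap.mmap(f.fileno(), 8)
data = 0
while True:
    data = data + 1
    mm[:8] = struct.pack('q', data)         # overwrite the value in place
    time.sleep(1)

# cool.py -- reads the current value written by collector.py
import mmap
import struct

f = open('/tmp/shared_data', 'r+b')
mm = mmap.mmap(f.fileno(), 8)
value, = struct.unpack('q', mm[:8])
print(value)

Both processes map the same file, so whatever collector.py writes is immediately visible to cool.py; for anything more elaborate, sockets or a multiprocessing manager may be easier to maintain.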

Failing to continually save and update a .CSV file after a period of time

I have written a small program that reads values from two pieces of equipment every minute and then saves them to a .csv file. I wanted the file to be updated and saved after every collected point so that if the PC crashes, or another problem occurs, no data loss occurs. To do that I open the file (ab mode), use writerow, and then close the file, in a loop. The time between collections is about 1 minute. This works quite well, but the problem is that after 5-6 hours of data collection it stops saving to the .csv file and does not bring up any errors; the code continues to run, with the graph being updated like nothing happened, but opening the .csv file reveals that data is lost. I would like to know if there is something wrong with the code I am using. I should also note that I am running a subprocess from this that does live plotting, but I do not think it would cause an issue... I added those code lines as well.
##Initial file declaration and header
with open(filename, 'wb') as wdata:
    savefile = csv.writer(wdata, dialect='excel')
    savefile.writerow(['System time', 'Time from Start(s)', 'Weight(g)', 'uS/cm', 'uS', 'Measured degC', '%/C', 'Ideal degC', '/cm'])

##Open plotting subprocess
draw = subprocess.Popen('TriPlot.py', shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE)

##Data collection loop
while True:
    # Collect data x and y
    # (waits for data, about 60 seconds; no sleep or pause command is used, the pyserial interface is used)

    ## Send data to subprocess
    draw.stdin.write('%d\n' % tnow)
    draw.stdin.write('%d\n' % s_w)
    draw.stdin.write('%d\n' % tnow)
    draw.stdin.write('%d\n' % float(s_c[5]))

    ## Saving data section
    wdata = open(filename, 'ab')
    savefile = csv.writer(wdata, dialect='excel')
    savefile.writerow([tcurrent, tnow, s_w, s_c[5], s_c[7], s_c[9], s_c[11], s_c[13], s_c[15]])
    wdata.close()
P.S. This code uses the following packages for code not shown: pyserial, csv, os, subprocess, Tkinter, string, numpy, time and wx.
If draw.stdin.write() blocks, it probably means that you are not consuming draw.stdout in a timely manner. The docs warn about the deadlock due to a full OS pipe buffer.
If you don't need the output you could set stdout=devnull, where devnull = open(os.devnull, 'wb'); otherwise there are several approaches to read the output without blocking your code: threads, select, tempfile.TemporaryFile.
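For example, a minimal sketch of the devnull variant, using the Popen call from the question (note that under Python 3 the writes to stdin would need to be bytes):

import os
import subprocess

# Discard the plotter's stdout so its pipe can never fill up and block our writes
devnull = open(os.devnull, 'wb')
draw = subprocess.Popen('TriPlot.py', shell=True,
                        stdin=subprocess.PIPE, stdout=devnull)

draw.stdin.write('%d\n' % 42)    # example write; the real loop sends tnow, s_w, ... as before
draw.stdin.flush()

If you do need the plotter's output, read it from a separate thread instead of discarding it.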
