Export .csv file from .mat file generated with OpenModelica - python

I am trying to export a .csv file from a .mat file, which was generated with OpenModelica. The following code seems to work quite well:
from scipy.io import loadmat
import numpy
x = loadmat('results.mat')
traj=x['data_2'][0]
numpy.savetxt("results.csv", traj, delimiter=",")
However, there is an issue that I cannot solve. The line traj=x['data_2'][0] takes the array with the values (over time) of the first variable in the file (index 0). The problem is that I cannot work out the correspondence between the variable I am looking for and its index. Let's say that I want to print the values of a variable called "My_model.T". How do I know the index of this variable?

The file format is described here: https://www.openmodelica.org/doc/OpenModelicaUsersGuide/1.17/technical_details.html#the-matv4-result-file-format
So you need to look up the variable's name in the name matrix, then check the dataInfo matrix to see whether the variable is stored in data_1 or data_2 and which index it has in that matrix, as in the sketch below.
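A minimal sketch of that lookup with scipy, assuming the transposed ("binTrans") layout described in the linked documentation, i.e. one variable per column of name and dataInfo; depending on how the file was written, these matrices may need transposing:

from scipy.io import loadmat
import numpy

x = loadmat('results.mat', chars_as_strings=False)

# Each column of 'name' holds one padded variable name.
names = ["".join(chars).rstrip('\x00 ') for chars in x['name'].T]
idx = names.index('My_model.T')

# dataInfo row 0: which data matrix holds the variable (1 or 2);
# dataInfo row 1: its 1-based index there, negative if the values are stored negated.
matrix = int(x['dataInfo'][0, idx])
pos = int(x['dataInfo'][1, idx])
traj = x['data_%d' % matrix][abs(pos) - 1]
if pos < 0:
    traj = -traj

numpy.savetxt("results.csv", traj, delimiter=",")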
Edit: And since the title asks how to create a CSV from a MAT-file... You can also do this from an OpenModelica .mos script:
filterSimulationResults("M_res.mat", "M_res.csv", readSimulationResultVars("M_res.mat"))
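Save that line in a script file, for example export.mos (the name is just an example), and run it with the OpenModelica compiler from the command line: omc export.mos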

Related

How to adjust text file saved by python?

I'm trying to save data from an array called Sevol. This matrix has 100 rows and 1000 columns, so each row Sevol[i] has 1000 elements (len(Sevol[i]) is 1000) and Sevol[0][0] is the first element of the first row.
I tried to save this array with the command
np.savetxt(path + '/data_Sevol.txt', Sevol[i], delimiter=" ")
It works fine. However, I would like the file to be organized as an array. Currently the file is saved with each value on its own line (as it appears in Notepad), and I would like the data to stay organized in rows and columns instead, as in the original array.
Is there an argument in the np.savetxt function or something I can do to better organize the text file?
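A minimal sketch of how a 2-D input and the fmt argument of np.savetxt keep a row/column layout; the random array here is only a stand-in for the real Sevol:

import numpy as np

Sevol = np.random.rand(100, 1000)  # stand-in for the real data

# Passing the whole 2-D array writes one matrix row per line of the file,
# with fixed-width, aligned columns thanks to fmt.
np.savetxt('data_Sevol.txt', Sevol, fmt='%12.6f', delimiter=' ')

# A single row Sevol[i] is 1-D, so savetxt writes one value per line;
# reshaping it to (1, 1000) keeps it on a single, column-aligned line.
np.savetxt('data_Sevol_row.txt', Sevol[0].reshape(1, -1), fmt='%12.6f', delimiter=' ')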

Insert data range into multidimensional array

I am trying to get data from an Excel file using xlwings (I am new to Python) and load it into a multidimensional array (or rather, a table) that I could then loop through later on, row by row.
What I would like to do:
import xlwings as xw

db = []
wdb = xw.Book(r'C:\temp\xlpython\db.xlsx')
db.append(wdb.sheets[0].range('A2:K2').expand('down'))
So this would load the data into my table 'db', and I could later loop through it using:
for i in range(len(db)):
    print(db[i][1])
if I wanted to retrieve the data originally in column B, for instance.
But instead of this, it loads the data in a single dimension, so if I run the code:
print(range(len(db)))
I get (0, 1) instead of the (0, 145) I would expect with 146 rows of data in the Excel file.
Is there a way to do this other than loading the table line by line?
Thanks
Have a look at the xlwings documentation on converters and options for converting the range to a NumPy array or specifying the dimensions.
import numpy as np
import xlwings as xw

db = []
wdb = xw.Book(r'C:\temp\xlpython\db.xlsx')
db.append(wdb.sheets[0].range('A2:K2').options(np.array, expand='down').value)
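For the "specifying the dimensions" part, the options call also accepts an ndim argument; a hedged variant of the same line, forcing a two-dimensional result even when only one row is read:

db.append(wdb.sheets[0].range('A2:K2').options(np.array, ndim=2, expand='down').value)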
After looking at NumPy arrays as suggested by Rawson, they seem to have the same behaviour as Python lists when appending a whole range, meaning the result comes out flat and the rows of the Excel range are not preserved in the array; at least I couldn't get it to work that way.
So finally I looked into a pandas DataFrame, and it does exactly what I need; you can even import the column titles, which is a plus.
import pandas as pd
import xlwings as xw

wdb = xw.Book(r'C:\temp\xlpython\db.xlsx')
db = pd.DataFrame(wdb.sheets[0].range('A2:K2').expand('down').value)
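xlwings also ships a pandas converter, so the DataFrame can be built directly from the range; a sketch assuming the column titles sit in row 1 (the 'A1:K1' range is an assumption, adjust it to the actual sheet):

import pandas as pd
import xlwings as xw

wdb = xw.Book(r'C:\temp\xlpython\db.xlsx')
# index=False keeps a plain integer index instead of using the first column as index
db = wdb.sheets[0].range('A1:K1').options(pd.DataFrame, expand='down', index=False).value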

Python - re-write a netcdf file after calculation

I have a netcdf4 file called test.nc
I am calculating the monthly median (from the daily values) with the following code:
import os
import xarray as xr

os.chdir(inbasedir)
data = xr.open_dataset('test.nc')
monthly_data = data.resample(freq='m', dim='time', how='median')
My question is: how can I write this output to a new netCDF file without having to re-write all the variables and metadata already included in the input file?
Not sure if it is what you want, but this creates a new netCDF file from the resampled Dataset:
monthly_data.to_netcdf('newfile.nc')
You might use .drop() on the Dataset to remove data which you don't want in the output.
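A minimal sketch of that combination, reusing monthly_data from the question's code; 'unwanted_var' is a hypothetical variable name, and the attributes of the variables you keep travel with the Dataset:

# drop what you don't need, then write the rest to a new netCDF file
monthly_data.drop('unwanted_var').to_netcdf('newfile.nc')

(Newer xarray versions spell the resampling as data.resample(time='1M').median() and the drop as drop_vars.)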

Need help using ascii to write .dat files

Currently, I'm trying to use astropy.io.ascii in Python (Anaconda) to write a .dat file that includes data I've already read in (using ascii) from a different .dat file. I defined a specific table in the pre-existing file to be Data. The problem with Data is that I need to multiply the first of its columns by a factor of 101325 to change its units, and I need the fourth of the four columns to disappear entirely. So I defined the first column as Pressure_pa and converted its units, then I defined the other two columns to be Altitude_km and Temperature_K. Is there any way I can use ascii's write function to tell it to write a .dat file containing the three columns I defined? And how would I go about it? Below is the code that has brought me to the point of having defined these three columns of data:
from astropy.io import ascii
Data=ascii.read('output_couple_121_100.dat',guess=False,header_start=384,data_start=385,data_end=485,delimiter=' ')
Pressure_pa=Data['P(atm)'][:]*101325
Altitude_km=Data['Alt(km)'][:]
Temperature_K=Data['T'][:]
Now I thought that I might be able to use ascii.write() to write Pressure_pa, Altitude_km and Temperature_K into the same .dat file. Is there any way to do this?
So I think I figured it out! I'll give a more generic version so it fits other cases:
from astropy.io import ascii
Data=ascii.read('filename.dat',guess=False,header_start=1,data_start=2,data_end=10,delimiter=' ')
#above: defining Data as a section of a .dat file spanning lines 2 through 10, with the headers in line 1
ascii.write(Data,'new_desired_file_name.dat',names=['col1','col2','col3','col4'],exclude_names=['col3'],delimiter=' ')
#above: telling ascii to take Data and create a .dat file from it; when defining the names, give a name to every column in Data and then use exclude_names to tell it not to include the specific columns you don't want
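If you want to write the three converted columns specifically (including the Pressure_pa unit change), another option is to build a small astropy Table from them first; the output column names here are only illustrative:

from astropy.io import ascii
from astropy.table import Table

# Columns as defined in the question, collected into a new table and written out.
out = Table([Pressure_pa, Altitude_km, Temperature_K], names=['P(Pa)', 'Alt(km)', 'T'])
ascii.write(out, 'new_desired_file_name.dat', delimiter=' ')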

python import .mat file: how to just import one wanted column from a matrix

I have this .mat file D887_ALL.mat and there are several matrices in it, one is called trigger_events and is a 671×2 matrix. I will only use the first column. Can I just import the first column in Python?
This is what I can do now: import the whole trigger_events matrix:
import scipy.io as scio

dataFile = '/Users/gaoyingqiang/Desktop/Python/D887_All.mat'
data = scio.loadmat(dataFile)
trigger_events = data['trigger_events']
How can I do this?
How about this if you want just the first column?
trigger_events_col1 = data['trigger_events'][:, 0]
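And if you also want to write that column out to CSV, a minimal sketch (the path is the one from the question, the output file name is illustrative):

import scipy.io as scio
import numpy as np

data = scio.loadmat('/Users/gaoyingqiang/Desktop/Python/D887_All.mat')
col1 = data['trigger_events'][:, 0]   # first column only
np.savetxt('trigger_events_col1.csv', col1, delimiter=',')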
