I was wondering if there is a way to read .npy files in MATLAB? I know I can convert them to MATLAB-style .mat files using scipy.io.savemat in Python; however, I'm more interested in native or plugin support for .npy files in MATLAB.
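For reference, the scipy.io.savemat route mentioned in the question is only a few lines. A minimal sketch (the file names and the variable name `im` are just examples):

```python
import numpy as np
import scipy.io

arr = np.arange(6).reshape(2, 3)   # stand-in for an array you already have
np.save('image.npy', arr)          # ... stored as a .npy file on disk

# Convert: load the .npy file and save it as a MATLAB-style .mat file.
# In MATLAB:  s = load('image.mat'); s.im
loaded = np.load('image.npy')
scipy.io.savemat('image.mat', {'im': loaded})
```

The dict passed to savemat maps MATLAB variable names to arrays, so one .mat file can hold several arrays.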
This did the job for me, I used it to read npy files.
https://github.com/kwikteam/npy-matlab
If you only want to read .npy files, all you need from the npy-matlab project are two files: readNPY.m and readNPYheader.m.
Usage is as simple as:
>> im = readNPY('/path/to/file.npy');
There is a C++ library available: https://github.com/rogersce/cnpy
You could write a MEX function to read the data. Personally, I would prefer to store everything in HDF5.
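To illustrate the HDF5 route (a sketch assuming h5py is installed; the file and dataset names are just examples):

```python
import numpy as np
import h5py

arr = np.arange(12).reshape(3, 4)          # stand-in for data from a .npy file
with h5py.File('data.h5', 'w') as f:
    f.create_dataset('im', data=arr)

# MATLAB reads HDF5 natively, no MEX function needed:
#   im = h5read('data.h5', '/im');
# Note: MATLAB is column-major, so the dimensions may appear transposed.
```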
A quick way would be to read it in Python, as below:
import numpy as np
data = np.load('/tmp/123.npz')
Then save it as .csv, again from Python:
np.savetxt('FileName.csv', arrayToSave)
(more documentation here)
Finally, you can read it in MATLAB using the following command:
csvread()
Quick Update:
As the user "Ender" mentioned in the comments, csvread() is now deprecated and readmatrix() has taken its place. (documentation)
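Putting the steps above together, a minimal sketch of the conversion (the paths and the key name `im` are just examples; note that np.load on a .npz archive returns a dict-like object that must be indexed by key):

```python
import numpy as np

arr = np.linspace(0.0, 1.0, 6).reshape(3, 2)   # stand-in data
np.savez('example.npz', im=arr)                # ... stored as a .npz archive

# .npz files hold named arrays, so pick one out before saving as CSV
with np.load('example.npz') as data:
    np.savetxt('example.csv', data['im'], delimiter=',')

# MATLAB side:  M = readmatrix('example.csv');
```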
Related
I have some VLP16 LiDAR data in .csv file format and have to load the data into ROS Rviz, for which I need a rosbag file (.bag). I have looked through the ROS tutorials; what I found was how to convert .bag to .csv, not the reverse.
I'm not actually an expert in processing .bag files, but I think you need to go through your CSV file and add the values using the rosbag Python API.
Not a direct answer, but check this script in Python, which might help you.
Regarding C++, I propose this repository: convert_csv_to_rosbag, which is even closer to what you asked for.
However, it seems that you need to do it yourself based on these examples.
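The first half of that job, parsing the CSV into points, needs nothing beyond the standard library. A sketch (the column names here are an assumption about your CSV layout; the actual bag writing, commented out, needs a ROS installation):

```python
import csv
import io

# Hypothetical VLP-16 CSV layout: one point per row, columns x, y, z, intensity.
SAMPLE = "x,y,z,intensity\n1.0,2.0,0.5,37\n-0.3,0.1,1.2,12\n"

def read_points(f):
    """Parse a CSV file object into (x, y, z, intensity) tuples."""
    return [(float(r['x']), float(r['y']), float(r['z']), float(r['intensity']))
            for r in csv.DictReader(f)]

points = read_points(io.StringIO(SAMPLE))

# With ROS installed, the points would then be packed into a
# sensor_msgs/PointCloud2 message and written to a bag, roughly:
#   import rosbag
#   bag = rosbag.Bag('out.bag', 'w')
#   bag.write('/velodyne_points', cloud_msg, t=stamp)
#   bag.close()
```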
I am trying to extract data from HDF files and compare the data. Is it possible to automate the process using Squish? Also, how can I compare the data of two HDF files of different versions? I am very new to this and have no clue where to start. Any help is appreciated.
Thank you!
If the HDF files are basically plain text, it may be possible to use test.compareTextFiles().
If you can convert the HDF files to XML, you can use test.compareXMLFiles(), which gains you the ability to ignore dynamic portions more easily.
And in general, if you can find any tools that do what you need (extraction, comparison/diff) then you can use them in Squish test scripts (see Article - Executing external applications).
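If you go the extraction route, the comparison itself is easy to script. A sketch in plain Python, assuming you have already extracted each file's contents into a nested dict (the data here is made up):

```python
def diff_trees(a, b, path=''):
    """Recursively compare two nested dicts; return a list of differences."""
    diffs = []
    for key in sorted(set(a) | set(b)):
        p = f'{path}/{key}'
        if key not in a:
            diffs.append(f'{p}: only in second file')
        elif key not in b:
            diffs.append(f'{p}: only in first file')
        elif isinstance(a[key], dict) and isinstance(b[key], dict):
            diffs.extend(diff_trees(a[key], b[key], p))   # descend into groups
        elif a[key] != b[key]:
            diffs.append(f'{p}: {a[key]!r} != {b[key]!r}')
    return diffs

old = {'meta': {'version': 1}, 'data': [1, 2, 3]}
new = {'meta': {'version': 2}, 'data': [1, 2, 3]}
report = diff_trees(old, new)
```

An empty report means the two trees match; otherwise each entry names the diverging path, which is easy to assert on in a Squish test script.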
I have this huge Excel (xls) file that I have to read data from. I tried using the xlrd library, but is pretty slow. I then found out that by converting the Excel file to CSV file manually and reading the CSV file is orders of magnitude faster.
But I cannot ask my client to save the xls as csv manually every time before importing the file. So I thought of converting the file on the fly, before reading it.
Has anyone done any benchmarking as to which procedure is faster:
Open the Excel file with the xlrd library and save it as a CSV file, or
Open the Excel file with the win32com library and save it as a CSV file?
I am asking because the slowest part is opening the file, so if I can get a performance boost from using win32com I would gladly try it.
If you need to read the file frequently, I think it is better to save it as CSV. Otherwise, just read it on the fly.
On the performance question, I think win32com outperforms xlrd; however, considering cross-platform compatibility, xlrd is better.
win32com is more powerful: with it, one can handle Excel in all ways (e.g. reading/writing cells or ranges).
However, if you are seeking a quick file conversion, pandas.read_excel also works.
I am using another package, xlwings, so I am also interested in a comparison among these packages.
In my opinion:
I would use pandas.read_excel for quick file conversion.
If more processing of the Excel file is demanded, I would choose win32com.
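Since the question asks for benchmarks, the fairest answer is to time both routes on your own file. A small harness sketch (the two conversion functions named in the comment are hypothetical stand-ins for your xlrd- and win32com-based converters):

```python
import time

def time_call(fn, *args, repeats=3):
    """Return the best wall-clock time over several runs of fn(*args)."""
    best = float('inf')
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

# Hypothetical usage, comparing the two conversion routes on the same file:
#   t_xlrd = time_call(convert_with_xlrd,    'book.xls', 'out.csv')
#   t_com  = time_call(convert_with_win32com, 'book.xls', 'out.csv')
```

Taking the best of several runs reduces noise from disk caching and background load.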
I want to open a .fif file around 800 MB in size. I googled and found that this kind of file can be opened with Photoshop. Is there a way to extract the images and store them in some other standard format using Python or C++?
This is probably an EEG or MEG data file. The full specification is here, and it can be read in with the MNE package in Python.
import mne
raw = mne.io.read_raw_fif('filename.fif')
FIF stands for Fractal Image Format and seems to be output of the Genuine Fractals Plugin for Adobe's Photoshop. Unfortunately, there is no format specification available and the plugin claims to use patented algorithms so you won't be able to read these files from within your own software.
There are, however, other tools which can do fractal compression. Here's some information about one example. While this won't allow you to open FIF files from the Genuine Fractals Plugin, it would allow you to compress the original file, if it's still available.
XnView seems to handle FIF files, but it's Windows-only. There is an MP (multiplatform) version, but it seems less complete and didn't work when I tried to view a FIF file.
Update: XnView MP, which does work on Linux and OS X, claims to support FIF, but I couldn't get it to work.
Update 2: There's also an open source project, Fiasco, that can work with fractal images, but I'm not sure it's compatible with the proprietary FIF format.
I'm working on a script to create an OpenOffice document, which I then want to save. Maybe later also as a PDF. Google doesn't give me any information on how to do this.
My question here is: What method should be used to save an openoffice-writer document?
Thanks in advance!
You should look at this similar question, whose answer covers both MS Word and OOWriter (by the way, creating a Word file may be the easiest way to produce something OpenOffice can read).
How can I create a Word document using Python?
Alexis
You can create an RTF file with PyRTF or its variants, and for PDF you can use ReportLab. These are libraries for use in Python, not for controlling OpenOffice remotely. There are other libraries for other formats.
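As an illustration of how lightweight the RTF route is: a minimal RTF document is plain text, so a sketch like this needs no library at all (the content and file name are just examples):

```python
# Minimal RTF: one group opening with \rtf1, a tiny font table, then text.
body = 'Hello from Python, generated without OpenOffice.'
rtf = (r'{\rtf1\ansi{\fonttbl{\f0 Times New Roman;}}'
       r'\f0\fs24 ' + body + '}')

with open('hello.rtf', 'w') as f:
    f.write(rtf)
```

Word, OpenOffice, and LibreOffice can all open the resulting file; for anything beyond plain paragraphs, a library such as PyRTF saves you from hand-writing the control words.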