I have a recording of eye-tracking data in .edf format (SR Research EyeLink). I want to convert it to ASC/CSV format in Python. I have the GUI application, but I want to do it programmatically (in Python).
I found the package pyEDFlib but couldn't find an example of how to convert the eye-tracking .edf file to .asc or .csv.
What would be the best way to do it?
Thanks
If I trust the page here: http://pyedflib.readthedocs.io/en/latest, you can run through all the signals in the file this way:
import pyedflib
import numpy as np
f = pyedflib.EdfReader("data/test_generator.edf")
n = f.signals_in_file
signal_labels = f.getSignalLabels()
sigbufs = np.zeros((n, f.getNSamples()[0]))
for i in np.arange(n):
    sigbufs[i, :] = f.readSignal(i)
The pyEDFlib library simply reads the file into an EdfReader object.
Then you just need to go through and make a row for each signal.
I assume that signal_labels (in the code above) will be a list with all the labels, so make a comma-separated string out of them:
signal_labels_row = ",".join(signal_labels)
Then do the same for each signal: one comma-separated string per signal.
Then simply write them to a file.
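For example, here is a minimal sketch using Python's csv module, reusing signal_labels and sigbufs from the snippet above (the output filename is just a placeholder, and it assumes every signal has the same number of samples):
import csv
# Header row of labels, then one row per sample across all signals.
with open("output.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(signal_labels)
    for sample_row in sigbufs.T:  # transpose: one row per sample
        writer.writerow(sample_row)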
I can see they provide an example of how to read a file and extract all the data you need here
https://github.com/holgern/pyedflib/blob/master/demo/readEDFFile.py
Based on your answers I have created this Python 3 script to export all signals to multiple .csv files: https://github.com/folkien/pyEdfToCsv
Related
I have a JSON file whose size is about 5 GB. I neither know how the JSON file is structured nor the names of the roots in the file. I'm not able to load the file on my local machine because of its size, so I'll be working on high-performance compute servers.
I need to load the file in Python and print the first N lines to understand the structure and proceed further with data extraction. Is there a way to load and print the first few lines of a JSON file in Python?
If you want to do it in Python, you can do this:
N = 3
with open("data.json") as f:
    for _ in range(N):
        print(f.readline(), end='')
Alternatively, you can use the head command to display the first N lines of the file (e.g. head -n 3 data.json) and get a sample of the JSON to see how it is structured.
Then use this sample to work on your data extraction.
Best regards
I'm trying to read in a file.
The text file itself is laid out in 9 columns with tons of data (454 lines total).
I'm trying to read in and retrieve certain columns of data so I can plot a diagram of mass against temperature (an HR diagram).
However, when I try to load the text using:
file = 'nameoftext.txt'  # the file itself is saved as a .txt from Notepad++
track1 = np.loadtxt(file, skiprows=70)  # skipping 70 rows of headers to reach the data (np is numpy)
I get an error saying:
ValueError: could not convert string to float: 'iso'
I have no idea what this means or what I'm doing wrong.
I'm also using np.loadtxt because that's the only way my professor showed us how to load files, and I have no idea how else to do it.
Another option for loading .txt files in Python is the genfromtxt() function, also in numpy. With this function the type of the values in each column can be specified, or you can let the function guess the type on its own (a short sketch follows below).
Check out the link below for a similar question.
Loading text file containing both float and string using numpy.loadtxt
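For example, a minimal sketch borrowing the filename and header count from the question; dtype=None lets genfromtxt infer each column's type, which should avoid the ValueError on the string 'iso':
import numpy as np
# dtype=None: infer each column's type instead of forcing float;
# encoding=None: return strings rather than bytes.
track1 = np.genfromtxt('nameoftext.txt', dtype=None, skip_header=70, encoding=None)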
I have a few lists which I want to save to a *.mat file. But according to the scipy.io.savemat documentation, I need to create a dictionary with the lists and then use the command to save it to a *.mat file.
If I save it the way the docs describe, the .mat file will contain a struct whose variables are the arrays I used in the dictionary. Now I have a problem here: another program (which is not editable) will use the .mat files and load them to plot some graphs from the data. That program cannot process the struct, because it is written so that when it loads a .mat file it directly processes the arrays in it.
So is there a way to save the .mat file without using dictionaries? Please see the image for a better understanding.
Thanks
This is the sample code I used to save my *.mat file:
import scipy.io as sio

x = [1, 2, 3, 4, 5]
y = [234, 5445, 778]  # can be 1000 lists
data = {}
data['x'] = x
data['y'] = y
sio.savemat('test.mat', {'interpolated_data': data})
How about
import numpy as np
import scipy.io

scipy.io.savemat('interpolated_data_max_compare.mat',
                 {'NA1_X_order10_ACCE_ms2': np.zeros((3000, 1)),
                  'NA1_X_order10_DISP_mm': np.ones((3000, 1))})
Should work fine...
According to the code you added in your question, instead of sio.savemat('...', {'interpolated_data':data}), just save
sio.savemat('...', data)
and you should be fine: data is already a dictionary; you don't need to add an extra level with {'interpolated_data': data} when saving.
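For example, a quick sketch to verify the result, reusing the data dict from the question (the filename is a placeholder):
import scipy.io as sio
sio.savemat('test.mat', data)  # x and y are saved as top-level variables
check = sio.loadmat('test.mat')
print([k for k in check if not k.startswith('__')])  # expected: ['x', 'y']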
You could use the writing primitives directly:
import scipy.io.matlab as ml

with open("something.mat", "wb") as f:  # 'with' ensures the file is closed
    mw = ml.mio5.MatFile5Writer(f)
    mw.put_variables({"testVar": 22})
This forum has been extremely helpful for a Python novice like me to improve my knowledge. I have generated a large amount of raw data in text format from my CFD simulations. My objective is to import these text files into Python and do some post-processing on them. This is the code I currently have:
import numpy as np
from matplotlib import pyplot as plt
import os
filename = np.array(['v1-0520.txt', 'v1-0878.txt', 'v1-1592.txt', 'v1-3020.txt', 'v1-5878.txt'])
for i in filename:
    format_name = i
    path = 'E:/Fall2015/Research/CFDSimulations_Fall2015/ddn310/Autoexport/v1'
    data = os.path.join(path, format_name)
    # X and Y are the coordinates; U, V, T, Tr are the dependent variables
    X, Y, U, V, T, Tr = np.loadtxt(data, usecols=(1, 2, 3, 4, 5, 6), skiprows=1, unpack=True)
    plt.figure(1)
    plt.plot(T, Y)
    plt.legend(['vt1a', 'vtb', 'vtc', 'vtd', 'vte', 'vtf'])
    plt.grid(True)
Is there a better way to do this, like importing all the text files (~10000 files) at once into Python and then accessing whichever file I need for post-processing (maybe by indexing)? All the text files have the same number of columns and rows.
I am just a beginner in Python. I would be grateful if someone could help me or point me in the right direction.
Based on a quick read, I think you are:
reading a file, making a small edit, and writing it back
then loading it into a numpy array and plotting it
Presumably the purpose of your edit is to correct some header or value.
You don't need to write the file back. You can use content directly in loadtxt.
content = content.replace("nodenumber", "#nodenumber")  # ignore the node-number column
data1 = np.loadtxt(content.splitlines())
Y = data1[:, 2]
temp = data1[:, 5]
loadtxt accepts anything that feeds it lines one by one; content.splitlines() makes a list of lines, which loadtxt can use.
The load could be more compact with:
Y, temp = np.loadtxt(content.splitlines(), usecols=(2, 5), unpack=True)
With usecols you might not even need the replace step. You haven't given us a sample file to test.
I don't understand your multiple-file needs. One way or another you need to open and read each file, one by one, and it is best to close each one before going on to the next. The with open(name) as f: syntax is great for ensuring that a file is closed.
You could collect the loaded data in larger lists or arrays. If Y and temp are identical in size for all files, they can be collected into a higher-dimensional array, e.g. YY[i,:] = Y for the ith file, where YY is preallocated. If they can vary in size, it is better to collect them in lists. A minimal sketch of both patterns follows.
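For example, a minimal sketch of both patterns (the file list and the common length n_samples are placeholders for your ~10000 files):
import numpy as np

filenames = ['v1-0520.txt', 'v1-0878.txt']  # placeholder list of files
n_samples = 454  # assumed common length of each file's columns
YY = np.zeros((len(filenames), n_samples))  # preallocated: one row per file
temps = []  # a plain list handles files of varying length
for i, name in enumerate(filenames):
    with open(name) as f:  # 'with' guarantees the file is closed
        Y, temp = np.loadtxt(f, usecols=(2, 5), skiprows=1, unpack=True)
    YY[i, :] = Y
    temps.append(temp)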
Hey, this might be a simple question, but I have never had to import any files into Python before.
I have a numpy matrix in a text file named dmatrix.txt; how would I be able to import the file into Python and use the matrix?
I am trying to use numpy.load(), but I am unsure how to use it.
Try numpy.loadtxt('dmatrix.txt'); you could add a delimiter argument if the file is comma-separated or something.
numpy.load is for files in numpy/Python binary formats, created by numpy.save, numpy.savez, or pickle.
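For example, a minimal sketch (assuming a whitespace-delimited file):
import numpy as np

# Load the whole matrix; pass delimiter=',' for a comma-separated file.
dmatrix = np.loadtxt('dmatrix.txt')
print(dmatrix.shape)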