How to read edf data in Python 3 - python

How can I read EDF data using Python? I want to analyze the data in an EDF file, but I cannot read it using pyEDFlib. It threw the error OSError: The file is discontinous and cannot be read and I'm not sure why.

I assume that your data are biological time series like EEG, is this correct? If so, you can use the MNE library.
You have to install it first; since it is not a standard library, take a look here. Then you can use the read_raw_edf() method.
For example:
import mne
file = "my_path\\my_file.edf"
data = mne.io.read_raw_edf(file)
raw_data = data.get_data()
# you can get the metadata included in the file and a list of all channels:
info = data.info
channels = data.ch_names
See the documentation in the links above for other properties of the data object.
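If the goal is tabular analysis, here is a minimal sketch (reusing the placeholder path from above) of putting the signal into a pandas DataFrame; get_data() returns an array of shape (n_channels, n_samples):
import mne
import pandas as pd

file = "my_path\\my_file.edf"              # placeholder path from above
data = mne.io.read_raw_edf(file)

raw_data = data.get_data()                 # shape: (n_channels, n_samples)
df = pd.DataFrame(raw_data.T, columns=data.ch_names)
df["time_s"] = data.times                  # sample times in seconds
print(data.info["sfreq"], "Hz sampling rate")
print(df.head())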

Related

Loading multiple .EDF files using MNE-python

Absolute noob here with EEG analysis. I used the following code to read one subject successfully:
import mne
file = "my_path\\my_file.edf"
data = mne.io.read_raw_edf(file)
raw_data = data.get_data()
channels = data.ch_names
This works perfectly fine, but my intention is to follow along with the MNE-Python documentation from this link, where they use
raws = [read_raw_edf(f, preload=True) for f in raw_fnames]
I have a dataset of 25 subjects, all in one directory and with the .edf extension. I am trying to append all the rows from all of these tables and I can't get this to work. Could anyone shed some light on this?
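One way to approach this (a sketch only; the directory path is an assumption, and it presumes all recordings share the same channel set) is to glob the .edf files and join them with mne.concatenate_raws:
import glob
import mne

# hypothetical folder holding the 25 subject recordings
raw_fnames = sorted(glob.glob("my_path/*.edf"))

# read every file; preload=True loads the signals into memory
raws = [mne.io.read_raw_edf(f, preload=True) for f in raw_fnames]

# join the recordings end to end into a single Raw object
raw_all = mne.concatenate_raws(raws)
print(raw_all.ch_names)
print(raw_all.get_data().shape)
If the channel sets differ between subjects, concatenate_raws will complain; in that case keep the Raw objects in a list and process them per subject instead.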

How to access the "Forecasting: Methods and Applications" datasets in Python?

I am using the book Forecasting: Methods and Applications by Makridakis, Wheelwright and Hyndman. I want to do the exercises along the way, but in Python, not R (as suggested in the book).
I do not know how to use R. I know that the datasets are available from an R package, fma. This is the link to the package.
Is there a possible script, in R or Python, which will allow me to download the datasets as .csv files? That way, I will be able to access them using Python.
One possibility:
## install and load package:
install.packages('fma')
library('fma')
## list example data of package fma:
data(package = 'fma')
## export single data as csv:
write.csv(cement, file = 'cement.csv')
## bulk export:
## data names are in the 3rd column ([,3]) of the "results" member
## of the `data(...)` output
for (data_name in data(package = 'fma')[['results']][,3]){
  write.csv(get(data_name), file = paste0(data_name, '.csv'))
}
Edit:
As Anirban noted, attaching the package {fma} exposes only a few datasets on the search path. The data can be obtained by cloning or downloading from Rob J. Hyndman's source (click the green Code button and choose a download option). The subfolder 'data' contains each dataset as an .rda file, which can be load()ed and converted. (Observe the licence conditions - GPL-3.0 - and acknowledge the authors' efforts.)
That said, you could load and convert the data like this:
setwd('path/to/fma-master/data')
for(data_name in dir()){
  cat(paste0('converting ', data_name, '... '))
  load(data_name)
  object_name <- gsub('\\.rda', '', data_name)
  write.csv(get(object_name),
            file = paste0(object_name, '.csv'),
            row.names = FALSE) ## overwrites the file if it already exists
}
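Once the datasets are exported as .csv files by either R snippet above, they can be read from Python with pandas; a minimal sketch, assuming the CSVs sit in the current working directory:
import glob
import pandas as pd

# read every exported csv into a dict of DataFrames keyed by dataset name
datasets = {name[:-4]: pd.read_csv(name) for name in glob.glob("*.csv")}

print(datasets["cement"].head())   # 'cement' was exported in the example above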

Is there any feasible solution to read WOT battle results .dat files?

I am new here, trying to solve one of my interesting questions about World of Tanks. I heard that every battle's data is saved on the client's disk in the Wargaming.net folder, and I want to do some batch data analysis of our clan's battle performances.
It is said that these .dat files are a kind of JSON file, so I tried to read one with a couple of lines of Python code, but failed.
import json
f = open('ex.dat', 'r', encoding='unicode_escape')
content = f.read()
a = json.loads(content)
print(type(a))
print(a)
f.close()
The code is very simple and obviously fails. Could anyone tell me what these files actually contain?
Added on Feb. 9th, 2022
After trying another piece of code in a Jupyter Notebook, it seems like something can be read from the .dat files:
import struct
import numpy as np
import matplotlib.pyplot as plt
import io
with open('C:/Users/xukun/Desktop/br/ex.dat', 'rb') as f:
    fbuff = io.BufferedReader(f)
    N = len(fbuff.read())
    print('byte length: ', N)

with open('C:/Users/xukun/Desktop/br/ex.dat', 'rb') as f:
    data = struct.unpack('b' * N, f.read(1 * N))
The result is a tuple of byte values, but I have no idea how to deal with it now.
Here's how you can parse some parts of it.
import pickle
import zlib
file = '4402905758116487.dat'
cache_file = open(file, 'rb') # This can be improved to not keep the file opened.
# To convert pickled items from Python 2 to Python 3 you need to use the "bytes" or "latin1" encoding.
legacyBattleResultVersion, brAllDataRaw = pickle.load(cache_file, encoding='bytes', errors='ignore')
arenaUniqueID, brAccount, brVehicleRaw, brOtherDataRaw = brAllDataRaw
# The data stored inside the pickled file will be a compressed pickle again.
vehicle_data = pickle.loads(zlib.decompress(brVehicleRaw), encoding='latin1')
account_data = pickle.loads(zlib.decompress(brAccount), encoding='latin1')
brCommon, brPlayersInfo, brPlayersVehicle, brPlayersResult = pickle.loads(zlib.decompress(brOtherDataRaw), encoding='latin1')
# Lastly you can print all of these and see a lot of data inside.
The response contains a mixture of more binary files as well as some data captured from the replays.
This is not a complete solution but it's a decent start to parsing these files.
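Building on that, here is a sketch of batch-processing a folder of battle result files; the field layout follows the snippet above and the directory is the one from the question, so treat both as assumptions:
import glob
import pickle
import zlib

def parse_battle_result(path):
    # layout as in the snippet above: a version tag plus a 4-tuple of blobs
    with open(path, 'rb') as cache_file:
        legacyBattleResultVersion, brAllDataRaw = pickle.load(
            cache_file, encoding='bytes', errors='ignore')
    arenaUniqueID, brAccount, brVehicleRaw, brOtherDataRaw = brAllDataRaw
    account_data = pickle.loads(zlib.decompress(brAccount), encoding='latin1')
    vehicle_data = pickle.loads(zlib.decompress(brVehicleRaw), encoding='latin1')
    return arenaUniqueID, account_data, vehicle_data

for path in glob.glob('C:/Users/xukun/Desktop/br/*.dat'):
    arena_id, account_data, vehicle_data = parse_battle_result(path)
    print(path, arena_id)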
First, you can look at the replay file itself in a text editor, but it won't be readable until the code at the beginning of the file has been cleaned out. Then there is a ton of info that you have to read in and figure out, but it is the stats for each player in the game. After that comes the part that holds the actual replay data; you don't need that stuff.
You can grab the player IDs and tank IDs from the WoT developer area API if you want.
After loading the pickled files as gabzo mentioned, you will see that each is simply a list of values, and without knowing what each value refers to, it's hard to make sense of them. The identifiers for the values can be extracted from your game installation:
import zipfile

WOT_PKG_PATH = "Your/Game/Path/res/packages/scripts.pkg"
BATTLE_RESULTS_PATH = "scripts/common/battle_results/"

archive = zipfile.ZipFile(WOT_PKG_PATH, 'r')
for file in archive.namelist():
    if file.startswith(BATTLE_RESULTS_PATH):
        archive.extract(file)
You can then decompile the Python files (e.g. with uncompyle6) and go through the code to see the identifiers for the values.
One thing to note is that the list of values for the main pickle objects (like brAccount from gabzo's code) always has a checksum as the first value. You can use this to check whether you have the right order and the correct identifiers for the values. The way these checksums are generated can be seen in the decompiled python files.
I have been tackling this problem for some time (albeit in Rust): https://github.com/dacite/wot-battle-results-parser/tree/main/datfile_parser.

How to link HDF5 files generated with Pandas?

Assume we have a folder with HDF5 files generated by pandas.to_hdf. I would like to create one master.h5 file that contains external links to all the DataFrames.
According to the documentation of h5py, the standard way to do this is
myfile = h5py.File('master.h5','w')
myfile['ext link'] = h5py.ExternalLink("some_sub_file.h5", "/path/to/resource")
But files generated by pandas.to_hdf contain not just datasets but also groups (h5py.Group objects). How exactly would you then set up the external link so that it works?
Links can point to any object in the HDF5 data structure (datasets or groups). The file object itself is a special group, called the root group, referenced with '/'. So, to link to an entire file, use h5py.ExternalLink(filename, '/').
You didn't say if you want a link for each dataframe/dataset in each file, or links for each file. It's simpler to create links to the file root groups. If you create individual links to the datasets, be sure you assign unique names.
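For the dataset-level variant, here is a minimal sketch (the sub-file names are hypothetical) that prefixes each link name with the file name so the names stay unique:
import h5py

# hypothetical sub-files, each holding one or more objects at the root level
sub_files = ['some_sub_file_1.h5', 'some_sub_file_2.h5']

with h5py.File('master.h5', 'w') as h5fw:
    for fname in sub_files:
        with h5py.File(fname, 'r') as h5fr:
            for objname in h5fr.keys():
                # file name + object name keeps every link name unique
                h5fw[f'{fname}_{objname}'] = h5py.ExternalLink(fname, objname)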
There are 2 answers that demonstrate each method. The questions were not specifically about h5py.ExternalLink(), but my answers to each question used external links. See these answers:
HDF5 Attributes of External Links: Creates links to root group in multiple files. (each file only has 1 dataset...but your process would be identical.)
I/O Issues in Loading Several Large H5PY Files : Creates links to multiple datasets in multiple files. (Requires unique dataset names to work "as-is". Can be modified if names are not unique.)
I modified the code from the second answer (70089964) to show how to create 3 external links from the master file to the root group in 3 files (where each file has 5 datasets).
Code to create 3 example files:
import h5py
import numpy as np

for fcnt in range(3):
    fname = f'file_{fcnt+1}.h5'
    with h5py.File(fname, 'w') as h5fw:
        for dscnt in range(1, 6, 1):
            arr = np.random.random(10).reshape(5, 2)
            h5fw.create_dataset(f'data_{fcnt*10+dscnt:02}', data=arr*dscnt)
Code to create links from the master file to the 3 files:
import h5py

fnames = ['file_1.h5', 'file_2.h5', 'file_3.h5']
with h5py.File(f'master_{len(fnames)}_links.h5', 'w') as h5fw:
    for fname in fnames:
        # an external link to '/' points at the root group of the linked file
        h5fw[fname] = h5py.ExternalLink(fname, '/')
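As a quick check that the links resolve (supporting the h5py point in the next paragraph), here is a short sketch that walks each link back to the datasets in the sub-files; it assumes the master file created by the snippet above:
import h5py

with h5py.File('master_3_links.h5', 'r') as h5fm:
    for link_name in h5fm.keys():
        grp = h5fm[link_name]             # resolves to the root group of the linked file
        first_ds = sorted(grp.keys())[0]  # e.g. 'data_01' in file_1.h5
        print(link_name, first_ds, grp[first_ds][:].shape)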
Additional research on Pandas and HDF5 links revealed an interesting discovery: there are limitations with links (you can create them in Pandas, but Pandas can't access the linked data). In other words, the links are there, and work fine with HDFView, h5py and PyTables. Reference these GitHub issues:
Pandas hdf functions should support the hdf5 ExternalLink functionality when reading/writing - Issue #6019
Presence of softlink in HDF5 file breaks HDFStore.keys() - Issue #20523
Status for both appears to be Open. My tests confirm previously reported errors.
The code below shows how to create both link types. It also shows the error message you will get when you try to access the linked data. (The error message is: KeyError: 'you cannot get attributes from this 'NoAttrs' instance'. This is due to attribute restrictions on HDF5 links: an HDFStore node has some required attributes, and the result is the 'NoAttrs' message when Pandas tries to read them.)
import pandas as pd

df1 = pd.DataFrame({ "a": [1,2,3,4], "b": [11,12,13,14] })
print(df1.to_string())

# Create file 1 with simple dataframe
f1 = "test_1.hdf"
with pd.HDFStore(f1, mode="w") as hdf1:
    hdf1.put("/key1", df1)

# Create file 2 with external link
f2 = "test_extlink.hdf"
with pd.HDFStore(f2, mode="w") as hdf2:
    hdf2._handle.create_external_link(hdf2._handle.root, "extlink_key1", f"{f1}:/key1")
    print("Successful external link write")

with pd.HDFStore(f2, mode="r") as hdf2:
    print(hdf2.keys())  # Notice that [] (no keys) is printed
    # following lines will trigger the 'NoAttrs' error message
    # df2test = pd.read_hdf(f2, key="extlink_key1")
    # print(df2test.to_string())
    print("End external link read")

# Create file 3 with simple dataframe and symbolic (soft) link
f3 = "test_symlink.hdf"
with pd.HDFStore(f3, mode="w") as hdf3:
    hdf3.put("/key1", df1)
    hdf3._handle.create_soft_link(hdf3._handle.root, "symlink_key1", "/key1")
    print("Successful symbolic link write")

with pd.HDFStore(f3, mode="r") as hdf3:
    print(hdf3.keys())  # Notice that only ['key1'] is printed
    # following lines will trigger the 'NoAttrs' error message
    # df3test = pd.read_hdf(f3, key="symlink_key1")
    # print(df3test.to_string())
    print("End symbolic link read")

Mongoexport exporting invalid json files

I collected some tweets from the Twitter API and stored them in MongoDB. I tried exporting the data to a JSON file and didn't have any issues there, until I tried to write a Python script to read the JSON and convert it to a CSV. I get this traceback error with my code:
json.decoder.JSONDecodeError: Extra data: line 367 column 1 (char 9745)
So, after digging around the internet I was pointed to check the actual JSON data in an online validator, which I did. This gave me the error of:
Multiple JSON root elements
from the site https://jsonformatter.curiousconcept.com/
Here are pictures of the beginning and end of the 1st and 2nd objects in the file:
or a link to the data here
Now, the problem is, I haven't found anything on the internet about how to handle that error. I'm not sure if it's a problem with the data I've collected or exported, or if I just don't know how to work with it.
My end game with these tweets is to make a network graph. I was looking at either Networkx or Gephi, which is why I'd like to get a csv file.
Robert Moskal is right. If you can address the issue at the source and use the --jsonArray flag with mongoexport, the problem becomes easier. If you can't address it at the source, then read the points below.
The code below will extract you the individual json objects from the given file and convert them to python dictionaries.
You can then apply your CSV logic to each individual dictionary.
If you are using the csv module, I would say use the unicodecsv module, as it will handle the Unicode data in your JSON objects.
import json

with open('path_to_your_json_file', 'r', encoding='utf-8') as infile:
    json_block = []
    for line in infile:
        json_block.append(line)
        if line.startswith('}'):
            # a line starting with '}' closes one pretty-printed JSON object
            json_dict = json.loads(''.join(json_block))
            json_block = []
            print(json_dict)
If you want to convert it to CSV using pandas you can use the below code:
import json
import pandas as pd

with open('path_to_your_json_file', 'r', encoding='utf-8') as infile:
    json_block = []
    dictlist = []
    for line in infile:
        json_block.append(line)
        if line.startswith('}'):
            json_dict = json.loads(''.join(json_block))
            dictlist.append(json_dict)
            json_block = []

df = pd.DataFrame(dictlist)
df.to_csv('out.csv', encoding='utf-8')
If you want to flatten out the JSON objects you can use the json_normalize() method (exposed as pandas.json_normalize() in recent pandas versions).
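For example, here is a small sketch of flattening nested records with json_normalize (the nested field names are invented, not taken from the Twitter data):
import pandas as pd

# hypothetical nested records; substitute the dictionaries parsed above
records = [
    {"id": 1, "user": {"name": "alice", "followers": 10}},
    {"id": 2, "user": {"name": "bob", "followers": 25}},
]

# nested keys become dotted column names: user.name, user.followers
df = pd.json_normalize(records)
df.to_csv("out_flat.csv", index=False, encoding="utf-8")
print(df.columns.tolist())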
Elaborating on MYGz's suggestion to use --jsonArray:
Your post doesn't show how you exported the data from Mongo. If you use the following via the terminal, you will get valid JSON from MongoDB:
mongoexport --collection=somecollection --db=somedb --jsonArray --out=validfile.json
Replace somecollection, somedb and validfile.json with your target collection, target database, and desired output filename respectively.
The following: mongoexport --collection=somecollection --db=somedb --out=validfile.json...will NOT give you the results you are looking for because:
By default mongoexport writes data using one JSON document for every MongoDB document. Ref
A bit of a late reply, and I am not sure this option was available at the time the question was posted. Anyway, there is now a simple way to import the mongoexport JSON data:
df = pd.read_json(filename, lines=True)
mongoexport writes each document as a JSON object on its own line, instead of the whole file being a single JSON value.
