"The data is already loaded from disk" issue in MNE-Python

I am following the tutorial
M/EEG analysis with MNE Python
and I hit an error in this fragment: 01:35:57 Working with BIDS data.
I followed all the preceding steps and implemented this code:
import pathlib

import matplotlib
import matplotlib.pyplot as plt
import mne
import mne_bids

matplotlib.use('Qt5Agg')

directory = 'C:/Users/User/mne_data/MNE-sample-data/MEG/sample/sample_audvis_raw.fif'
raw = mne.io.read_raw(directory)
# raw.plot()
# plt.show()

events = mne.find_events(raw)
# print(events)

event_id = {
    "Auditory/Left": 1,
    "Auditory/Right": 2,
    "Visual/Left": 3,
    "Visual/Right": 4,
    "Smiley": 5,
    "Button": 32,
}

raw.info['line_freq'] = 60
raw.load_data()

out_path = pathlib.Path("out_data/sample_bids")
bids_path = mne_bids.BIDSPath(subject='01', session='01', task='audiovisual',
                              run='01', root=out_path)
mne_bids.write_raw_bids(raw, bids_path=bids_path, events_data=events,
                        event_id=event_id, overwrite=True)
But when I run this code, I get this error:
ValueError: The data is already loaded from disk and may be altered. See warning for "allow_preload".
I can't understand the reason for this error. How can I fix it? Please help me.

Related

Run R Markdown in PyCharm with R and Python chunks using reticulate

It is really difficult to find anyone using R Markdown in a Python IDE (I am using PyCharm) with both R and Python chunks.
Here is my code so far. I am just trying to set up my R Markdown document to use both R and Python code; it seems like my Python chunk doesn't work. Any idea why? Thanks!
R environment
library(readODS) # excel data
library(glmmTMB) # mixed models
library(car) # ANOVA on mixed models
library(DHARMa) # goodness of fit of the model
library(emmeans) # post hoc
library(ggplot2) # plots
library(reticulate) # link between R and python
use_python('C:/Users/saaa/anaconda3/envs/Python_projects/python.exe')
Python environment
import pandas as pd
import os
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
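For reference, a minimal .Rmd skeleton for mixing the two languages via reticulate might look like the sketch below (the title and chunk contents are illustrative; the path is the asker's). The ```{r} and ```{python} chunk headers are what knitr dispatches on, and use_python() must point at an existing interpreter before the first Python chunk runs:

````markdown
---
title: "Mixed R and Python chunks"
output: html_document
---

```{r setup, include=FALSE}
library(reticulate)
# Point reticulate at an existing Python interpreter:
use_python('C:/Users/saaa/anaconda3/envs/Python_projects/python.exe')
```

```{r}
library(ggplot2)  # R chunk: R packages load here
```

```{python}
# Python chunk: runs in the interpreter chosen above
import pandas as pd
print("python chunk works")
```
````

One common gotcha: the Python chunks only execute when the document is knitted (or run through the R Markdown toolchain, e.g. PyCharm's R plugin); running the .Rmd as a plain script will not dispatch the ```{python} engine.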

.pkl file can't be read: "ValueError: unsupported pickle protocol: 5"

Here is the example code:
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import pickle5 as pickle

# Read
output = pd.read_pickle("Energy_data.pkl")
plt.figure()
# print(output)
output.plot()
I am using Python 3.7, and that is probably the reason for the error message, because these .pkl files were created in Python 3.8. If my colleague (who created the .pkl files) runs it, it works.
I tried to use the solution shown here (maybe I did not apply it correctly), but it did not work. Can someone show me how to import the .pkl files in Python 3.7 using the example above?
Thank you very much in advance!
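Two things are worth noting here. First, pd.read_pickle() uses the standard-library pickle internally, so `import pickle5 as pickle` has no effect on it; to use the backport you have to load the file yourself (`with open("Energy_data.pkl", "rb") as f: output = pickle.load(f)`), which is likely why the attempt above still failed. Second, if the colleague can re-save the files with an older protocol, they become readable on Python 3.7 without any extra packages. A minimal stdlib sketch of the protocol point, with made-up data standing in for Energy_data.pkl:

```python
import pathlib
import pickle
import tempfile

data = {"energy": [1.5, 2.0, 3.25]}
path = pathlib.Path(tempfile.mkdtemp()) / "energy_data_p4.pkl"

# Protocol 5 is the default from Python 3.8 onward and cannot be read by
# the 3.7 stdlib unpickler; protocol 4 is readable from Python 3.4 onward.
with path.open("wb") as f:
    pickle.dump(data, f, protocol=4)

with path.open("rb") as f:
    loaded = pickle.load(f)

print(loaded)  # {'energy': [1.5, 2.0, 3.25]}
```

So the colleague saving with pickle.dump(..., protocol=4) (or pd.DataFrame.to_pickle(..., protocol=4)) sidesteps the problem entirely.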

"'charmap' codec can't encode character '\u015f' in position 510335: character maps to <undefined>" when loading a csv in Altair

I am running this code in JupyterLab and I get the following error; I can't figure out what the problem is. I am using Altair/pandas on JupyterLab to visualise the Pleiades dataset.
import altair as alt
import altair_viewer
import pandas as pd
import numpy as np
from vega_datasets import data

alt.data_transformers.disable_max_rows()
alt.data_transformers.enable('csv')

location_data = pd.read_csv("pleiades-locations2.csv")
location_data.head()

alt.Chart(location_data).mark_point().encode(
    x='reprLat:Q',
    y='reprLong:Q',
    color='timePeriods:N',
    tooltip='featureType',
)
If anyone knows where the issue might be, I would greatly appreciate it.
EDIT: Apparently the problem had to do with Windows' default text encoding, because the same code worked fine when I ran it on a Linux machine.
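The 'charmap' codec is the mechanism behind Windows' legacy code pages (cp1252 on Western-locale systems), which Python uses by default when opening text files there; it has no mapping for characters such as U+015F (ş) in the Pleiades data, so the CSV that Altair's 'csv' data transformer writes out fails to encode. A minimal stdlib demonstration of the mismatch; forcing UTF-8 (for example by setting the environment variable PYTHONUTF8=1 before starting JupyterLab) is the usual fix on Windows:

```python
text = "\u015f"  # LATIN SMALL LETTER S WITH CEDILLA, as found in the dataset

# cp1252 (a 'charmap'-based codec, Windows' common default) cannot
# represent this character, which reproduces the reported error.
try:
    text.encode("cp1252")
    cp1252_ok = True
except UnicodeEncodeError:
    cp1252_ok = False

# UTF-8 can encode any Unicode character.
utf8_bytes = text.encode("utf-8")

print(cp1252_ok, utf8_bytes)  # False b'\xc5\x9f'
```

This also matches the EDIT above: on Linux the default encoding is already UTF-8, so the same code runs without error.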

import error when using spafe library for feature extraction

I am working on audio files and need to use the spafe library for LFCC, LPC, and so on, and I installed the library as described on the site: https://spafe.readthedocs.io/en/latest/
But when I try to extract features such as LFCC, MFCC, or LPC, I get an import error. For example, when I use this code:
import scipy.io.wavfile
import spafe.utils.vis as vis
from spafe.features.mfcc import lfcc
I get this error:
ImportError: cannot import name 'lfcc'
I don't understand, because I can import spafe and I have all the required dependencies at the correct versions (numpy, scipy, ...).
There seems to be a typo in the docs example (which I guess you are trying to follow); it should be
from spafe.features.lfcc import lfcc
i.e. spafe.features.lfcc, not spafe.features.mfcc (the mfcc module indeed has no lfcc, hence the error).

NetCDF Attribute not found when using metpy and siphon to get data

I'm trying to plot some meteorological data in NetCDF format accessed via the Unidata siphon package.
I've imported what the MetPy docs suggest are the relevant libraries
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
from netCDF4 import num2date
import numpy as np
import xarray as xr
from siphon.catalog import TDSCatalog
from datetime import datetime
import metpy.calc as mpcalc
from metpy.units import units
and I've constructed a query for data as per the Siphon docs
best_gfs = TDSCatalog('http://thredds.ucar.edu/thredds/catalog/grib/NCEP/GFS/Global_0p25deg/catalog.xml?dataset=grib/NCEP/GFS/Global_0p25deg/Best')
best_ds = best_gfs.datasets[0]
ncss = best_ds.subset()
query = ncss.query()
query.lonlat_box(north=55, south=20, east=-60, west=-90).time(datetime.utcnow())
query.accept('netcdf4')
query.variables('Vertical_velocity_pressure_isobaric','Relative_humidity_isobaric','Temperature_isobaric','u-component_of_wind_isobaric','v-component_of_wind_isobaric','Geopotential_height_isobaric')
data = ncss.get_data(query)
Unfortunately, when I attempt to parse the dataset using the code from the Metpy docs
data = data.metpy.parse_cf()
I get an error: "AttributeError: NetCDF: Attribute not found"
When attempting to fix this problem, I came across another SO post that seems to describe the same issue, but the solution suggested there (updating MetPy to the latest version) did not work for me. I updated MetPy using conda but got the same error as before. Any other ideas on how to get this resolved?
Right now the following code in Siphon
data = ncss.get_data(query)
will return a Dataset object from netcdf4-python. You need one extra step to hand this to xarray, which will make MetPy's parse_cf available:
from xarray.backends import NetCDF4DataStore
ds = xr.open_dataset(NetCDF4DataStore(data))
data = ds.metpy.parse_cf()
