I am trying to calculate some climatic indices from E-OBS daily data, which requires some computation; I am using the xarray package for that. However, I am getting the following error message:
File "C:\Users\filip\anaconda3\envs\gis\lib\site-packages\xarray\backends\netCDF4_.py", line 486, in prepare_variable
nc4_var = self.ds.createVariable(
File "netCDF4\_netCDF4.pyx", line 2768, in netCDF4._netCDF4.Dataset.createVariable
File "netCDF4\_netCDF4.pyx", line 3857, in netCDF4._netCDF4.Variable.__init__
File "netCDF4\_netCDF4.pyx", line 1887, in netCDF4._netCDF4._ensure_nc_success
RuntimeError: NetCDF: Invalid argument
A minimal reproducible example would be:
import xarray as xr

# Annual precipitation totals, then their mean over the years
prec = 'D:/inputs/eobs/rr_ens_mean_0.1deg_reg_v24.0e.nc'
ds_p = xr.open_dataset(prec)
pp = ds_p.resample({'time': 'YS'}).sum(min_count=1, keep_attrs=True)
pp_mean = pp.mean('time', keep_attrs=True)

# Annual mean temperature, then its mean over the years
temp = 'D:/inputs/eobs/tg_ens_mean_0.1deg_reg_v24.0e.nc'
ds_t = xr.open_dataset(temp)
tp = ds_t.resample({'time': 'YS'}).mean(keep_attrs=True)
tp_mean = tp.mean('time', keep_attrs=True)

# Index: mean annual precipitation over mean annual temperature, times 10
ind = (pp_mean['rr'] / tp_mean['tg']) * 10
ind.to_netcdf('D:/outputs/eobs/ind.nc')
Could you try and see if you get the same error message? Am I doing something wrong?
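One workaround worth trying (an assumed cause, not a confirmed diagnosis): the result may still carry on-disk encoding inherited from the source files, and stale entries there (chunk sizes, fill values) are a common trigger for "NetCDF: Invalid argument" on write. A minimal sketch, assuming 'ind' is an acceptable variable name for the output:

# Name the result and drop all inherited on-disk encodings before writing.
ds_out = ind.to_dataset(name='ind')   # 'ind' is an assumed output name
for v in ds_out.variables:
    ds_out[v].encoding.clear()        # discard chunksizes, _FillValue, etc.
ds_out.to_netcdf('D:/outputs/eobs/ind.nc')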
I am trying to simulate two FMUs together with PyFMI's Master, where one of them takes its inputs from a CSV file. What I have tried is the following:
from pyfmi import load_fmu
from pyfmi import Master
import pandas as pd

electricity_network = load_fmu(r"C:\Users\kosmy\Pandapower_Reduced.fmu")
pv = load_fmu(r"C:\Users\kosmy\Photovoltaics.Model.PVandWeather_simple.fmu")
load_file = r"C:\Users\kosmy\load_prof_sec_res.csv"
load = pd.read_csv(load_file)

models = [electricity_network, pv]
connections = [(pv, "P_MW", electricity_network, "P_pv1"),
               (pv, "P_MW", electricity_network, "P_pv2")]
master_simulator = Master(models, connections)
input_object = [((electricity_network, 'P_load1'), load),
                ((electricity_network, 'P_load2'), load)]
res = master_simulator.simulate(final_time = 86400, input = input_object)
I am getting the following error:
Traceback (most recent call last):
File "C:\Users\kosmy\run_csv_pyfmi.py", line 29, in <module>
res = master_simulator.simulate(final_time = 86400, input = input_object)
File "src\pyfmi\master.pyx", line 1474, in pyfmi.master.Master.simulate
File "src\pyfmi\master.pyx", line 1369, in pyfmi.master.Master.specify_external_input
TypeError: tuple indices must be integers or slices, not tuple
Apparently I am not passing the input in the correct format, but I have not found an example demonstrating the correct format when using the Master. Does anyone know how I can pass the input in this case?
The input for Master.simulate is a 2-tuple: first a list of (model, variable_name) pairs, then either a function of time or a data matrix. With a function:

import math

def load(t):
    return 10, math.cos(t)

input_object = ([(electricity_network, 'P_load1'),
                 (electricity_network, 'P_load2')], load)

Another option is a data matrix whose first column is time:

data = np.transpose(np.vstack((t, u, v)))
input_object = (['InputVarI', 'InputVarP'], data)
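Tying this back to the CSV file from the question, a hedged sketch (the column names 'time', 'P_load1' and 'P_load2' are assumptions about the file layout; electricity_network and master_simulator are the objects from the question):

import numpy as np
import pandas as pd

load = pd.read_csv(r"C:\Users\kosmy\load_prof_sec_res.csv")

# First column of the data matrix is time; the remaining columns line up
# with the listed input variables, in order.
t = load['time'].to_numpy()
u = load['P_load1'].to_numpy()
v = load['P_load2'].to_numpy()
data = np.transpose(np.vstack((t, u, v)))

input_object = ([(electricity_network, 'P_load1'),
                 (electricity_network, 'P_load2')], data)
res = master_simulator.simulate(final_time=86400, input=input_object)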
I have been working with the Alpha Vantage Python API for a while now, but I have only needed to pull daily and intraday time series data. Now I am trying to pull extended intraday data, without any luck so far. Trying to run the following code:
from alpha_vantage.timeseries import TimeSeries
apiKey = 'MY API KEY'
ts = TimeSeries(key = apiKey, output_format = 'pandas')
totalData, _ = ts.get_intraday_extended(symbol = 'NIO', interval = '15min', slice = 'year1month1')
print(totalData)
gives me the following error:
Traceback (most recent call last):
File "/home/pi/Desktop/test.py", line 9, in <module>
totalData, _ = ts.get_intraday_extended(symbol = 'NIO', interval = '15min', slice = 'year1month1')
File "/home/pi/.local/lib/python3.7/site-packages/alpha_vantage/alphavantage.py", line 219, in _format_wrapper
self, *args, **kwargs)
File "/home/pi/.local/lib/python3.7/site-packages/alpha_vantage/alphavantage.py", line 160, in _call_wrapper
return self._handle_api_call(url), data_key, meta_data_key
File "/home/pi/.local/lib/python3.7/site-packages/alpha_vantage/alphavantage.py", line 354, in _handle_api_call
json_response = response.json()
File "/usr/lib/python3/dist-packages/requests/models.py", line 889, in json
self.content.decode(encoding), **kwargs
File "/usr/lib/python3/dist-packages/simplejson/__init__.py", line 518, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3/dist-packages/simplejson/decoder.py", line 370, in decode
obj, end = self.raw_decode(s)
File "/usr/lib/python3/dist-packages/simplejson/decoder.py", line 400, in raw_decode
return self.scan_once(s, idx=_w(s, idx).end())
simplejson.errors.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
What is interesting is that the TimeSeries class states that extended intraday data is returned as a "time series in one csv_reader object", whereas everything else (which works for me) is returned as "two json objects". I am 99% sure this has something to do with the issue, but I'm not entirely certain, because I would expect the extended intraday call to at least return something, despite the different format, instead of just raising an error.
Another interesting note: the function refuses to take adjusted = True (or False) as an input, despite it being in the documentation. Likely unrelated, but it might help diagnose the problem.
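A quick way to confirm the CSV suspicion (a sketch; MY_API_KEY is a placeholder) is to call the endpoint directly and inspect the raw body, bypassing the wrapper:

import requests

url = ('https://www.alphavantage.co/query?function=TIME_SERIES_INTRADAY_EXTENDED'
       '&symbol=NIO&interval=15min&slice=year1month1&apikey=MY_API_KEY')
resp = requests.get(url)
print(resp.headers.get('Content-Type'))  # not application/json
print(resp.text[:200])                   # first rows of CSV text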
Seems like TIME_SERIES_INTRADAY_EXTENDED can return only CSV format, but the alpha_vantage wrapper applies JSON methods, which results in the error.
My workaround:
from alpha_vantage.timeseries import TimeSeries
import pandas as pd
apiKey = 'MY API KEY'
ts = TimeSeries(key = apiKey, output_format = 'csv')
#download the csv
totalData = ts.get_intraday_extended(symbol = 'NIO', interval = '15min', slice = 'year1month1')
#csv --> dataframe
df = pd.DataFrame(list(totalData[0]))
#setup of column and index
header_row=0
df.columns = df.iloc[header_row]
df = df.drop(header_row)
df.set_index('time', inplace=True)
#show output
print(df)
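As an optional follow-up to this workaround (an addition, not part of the original answer): the csv reader yields everything as strings, so you may want to coerce the index and columns to proper dtypes:

# Coerce the string index/columns produced by the csv reader.
df.index = pd.to_datetime(df.index)
df = df.apply(pd.to_numeric, errors='coerce')
print(df.dtypes)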
This is an easy way to do it.
import pandas as pd

ticker = 'IBM'
date = 'year1month2'
apiKey = 'MY API KEY'
df = pd.read_csv('https://www.alphavantage.co/query?function=TIME_SERIES_INTRADAY_EXTENDED&symbol='+ticker+'&interval=15min&slice='+date+'&apikey='+apiKey+'&datatype=csv&outputsize=full')

#Show output
print(df)
import pandas as pd

symbol = 'AAPL'
interval = '15min'
slice = 'year1month1'
api_key = ''
adjusted = '&adjusted=true'
csv_url = 'https://www.alphavantage.co/query?function=TIME_SERIES_INTRADAY_EXTENDED&symbol='+symbol+'&interval='+interval+'&slice='+slice+adjusted+'&apikey='+api_key
data = pd.read_csv(csv_url)
print(data.head())
I'm trying to query data in Python from the OWL file I created using the Owlready2 library, but I get the following error. What could be the reason? The code and the received error are as follows.
from owlready2 import *
from urllib.request import urlopen
from rdflib.graph import Graph

onto = default_world.get_ontology("http://muratkilinc.com/ontologies/izmir.owl").load()
graph = default_world.as_rdflib_graph()

r = list(graph.query_owlready("""
    PREFIX uni:<http://muratkilinc.com/ontologies/izmir.owl>
    SELECT ?adi ?soyadi ?yas
    WHERE
    {
        ?turistler uni:yas ?yas.
        ?turistler uni:adi ?adi.
        ?turistler uni:soyadi ?soyadi.
        FILTER(?yas > 35).
    }"""))

results = default_world.as_rdflib_graph().query_owlready(r)
results = list(results)
print(results)
Error:
* Owlready2 * Warning: optimized Cython parser module 'owlready2_optimized' is not available,
defaulting to slower Python implementation
Traceback (most recent call last):
File "c:/Users/BAUM-PC/Desktop/izmir/sparql.py", line 21, in <module>
results = list(results)
File "C:\Users\BAUM-PC\AppData\Local\Programs\Python\Python37-32\lib\site-
packages\owlready2\rdflib_store.py", line 261, in query_owlready
File "C:\Users\BAUM-PC\AppData\Local\Programs\Python\Python37-32\lib\site-
packages\rdflib\graph.py", line 1089, in query
query_object, initBindings, initNs, **kwargs))
File "C:\Users\BAUM-PC\AppData\Local\Programs\Python\Python37-32\lib\site-
packages\rdflib\plugins\sparql\processor.py", line 74, in query
parsetree = parseQuery(strOrQuery)
File "C:\Users\BAUM-PC\AppData\Local\Programs\Python\Python37-32\lib\site-
packages\rdflib\plugins\sparql\parser.py", line 1057, in parseQuery
q = expandUnicodeEscapes(q)
File "C:\Users\BAUM-PC\AppData\Local\Programs\Python\Python37-32\lib\site-
packages\rdflib\plugins\sparql\parser.py", line 1048, in expandUnicodeEscapes
return expandUnicodeEscapes_re.sub(expand, q)
TypeError: expected string or bytes-like object
You have to skip the second query, and the error message will go away:
from owlready2 import *
from rdflib.graph import Graph

onto = default_world.get_ontology("http://muratkilinc.com/ontologies/izmir.owl").load()
graph = default_world.as_rdflib_graph()

r = list(graph.query_owlready("""
    PREFIX uni:<http://muratkilinc.com/ontologies/izmir.owl>
    SELECT ?adi ?soyadi ?yas
    WHERE
    {
        ?turistler uni:yas ?yas.
        ?turistler uni:adi ?adi.
        ?turistler uni:soyadi ?soyadi.
        FILTER(?yas > 35).
    }"""))

print(list(r))
It gives an empty list, so it works without the error message.
The empty list is a different problem (with the query, not with the code), so you should ask a new question.
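As a hedged guess at the query problem (an assumption, not verified against the ontology): entity IRIs in OWL files usually end in #name, so the prefix may need a trailing # for the triple patterns to match anything:

# Same query with '#' appended to the prefix IRI (assumed naming scheme).
r = list(graph.query_owlready("""
    PREFIX uni:<http://muratkilinc.com/ontologies/izmir.owl#>
    SELECT ?adi ?soyadi ?yas
    WHERE
    {
        ?turistler uni:yas ?yas.
        ?turistler uni:adi ?adi.
        ?turistler uni:soyadi ?soyadi.
        FILTER(?yas > 35).
    }"""))
print(r)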
I am trying to follow the steps listed here to update a feature on AGOL from a local feature class. I keep getting a circular reference error within the for loop, and I'm not sure why it's happening.
Please see the code I'm using below.
import arcgis, arcpy, csv, os, time, copy, pandas as pd
from arcgis.gis import GIS
from pandas import DataFrame
from copy import deepcopy

gis = GIS("url", "username", "pass")
fc = gis.content.get('ItemID')
flayer = fc.layers[0]
fset = flayer.query()

fields = ('GPS_Time', 'Visibility', 'EngineeringSection', 'Condition')
UpdateLayer = "C:\\Users\\USer\\Documents\\ArcGIS\\Default.gdb\\Data"
UpdateTable = DataFrame(arcpy.da.FeatureClassToNumPyArray(UpdateLayer, fields, skip_nulls=True))
overlap_rows = pd.merge(left=fset.sdf, right=UpdateTable, how='inner', on='EngineeringSection')

features_for_update = []
all_features = fset.features
for EngSec in overlap_rows['EngineeringSection']:
    original_feature = [f for f in all_features if f.attributes['EngineeringSection'] == EngSec][0]
    feature_to_be_updated = deepcopy(original_feature)
    matching_row = UpdateTable.where(UpdateTable['EngineeringSection'] == EngSec).dropna()
    original_feature.attributes['GPS_Time'] = (matching_row['GPS_Time'])
    original_feature.attributes['Visibility'] = int(matching_row['Visibility'])
    original_feature.attributes['Condition'] = str(matching_row['Condition'])
    update_result = flayer.edit_features(updates=[original_feature])
flayer.edit_features(updates=features_for_update)
Here is the error I receive:
Traceback (most recent call last):
File "<stdin>", line 9, in <module>
File "C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\lib\site-packages\arcgis\features\layer.py", line 1249, in edit_features
default=_date_handler)
File "C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\lib\json\__init__.py", line 238, in dumps
**kw).encode(obj)
File "C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\lib\json\encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\lib\json\encoder.py", line 257, in iterencode
return _iterencode(o, 0)
ValueError: Circular reference detected
The line below assigns a whole pandas Series (the result of matching_row['GPS_Time']) as the attribute value, which the JSON encoder cannot serialize. Is that what you wanted?
original_feature.attributes['GPS_Time'] = (matching_row['GPS_Time'])
If you want to assign just the value, pull out the scalar:
original_feature.attributes['GPS_Time'] = matching_row['GPS_Time'].iloc[0]
Also, I think this line:
flayer.edit_features(updates= features_for_update)
should be:
flayer.edit_features(updates=[feature_to_be_updated])
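Putting both suggestions together, a sketch of the corrected loop (using .iloc[0] to pull scalars, and collecting the copies for one batched call):

features_for_update = []
for EngSec in overlap_rows['EngineeringSection']:
    original_feature = [f for f in all_features if f.attributes['EngineeringSection'] == EngSec][0]
    feature_to_be_updated = deepcopy(original_feature)
    matching_row = UpdateTable.where(UpdateTable['EngineeringSection'] == EngSec).dropna()
    # Assign scalars, not pandas objects, so the JSON encoder can serialize them.
    feature_to_be_updated.attributes['GPS_Time'] = matching_row['GPS_Time'].iloc[0]
    feature_to_be_updated.attributes['Visibility'] = int(matching_row['Visibility'].iloc[0])
    feature_to_be_updated.attributes['Condition'] = str(matching_row['Condition'].iloc[0])
    features_for_update.append(feature_to_be_updated)

# One batched call instead of one request per feature.
flayer.edit_features(updates=features_for_update)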
Thanks for your help, I was able to get it all running with the script below. I also added some timing to see how long it was taking:
import arcpy, csv, os, time
import pandas as pd
from arcgis.gis import GIS
from pandas import DataFrame
from copy import deepcopy

start_time = time.time()

gis = GIS("url", "user", "pass")
fc = gis.content.get('ContentID')
flayer = fc.layers[0]
fset = flayer.query()

fields = ('GPS_Time', 'Visibility', 'EngineeringSection', 'Condition')
UpdateLayer = "C:\\Users\\user\\Documents\\ArcGIS\\Default.gdb\\data"
UpdateTable = DataFrame(arcpy.da.FeatureClassToNumPyArray(UpdateLayer, fields, skip_nulls=True))
overlap_rows = pd.merge(left=fset.sdf, right=UpdateTable, how='inner', on='EngineeringSection')

features_for_update = []
all_features = fset.features
for EngSec in overlap_rows['EngineeringSection']:
    original_feature = [f for f in all_features if f.attributes['EngineeringSection'] == EngSec][0]
    feature_to_be_updated = deepcopy(original_feature)
    matching_row = UpdateTable.where(UpdateTable['EngineeringSection'] == EngSec).dropna()
    feature_to_be_updated.attributes['GPS_Time'] = matching_row['GPS_Time'].iloc[0]
    feature_to_be_updated.attributes['Visibility'] = int(matching_row['Visibility'])
    feature_to_be_updated.attributes['Condition'] = str(matching_row['Condition'].iloc[0])
    update_result = flayer.edit_features(updates=[feature_to_be_updated])

elapsed_time = time.time() - start_time
totaltime = time.strftime("%H:%M:%S", time.gmtime(elapsed_time))
print("Total processing time: " + totaltime)
I am trying to do some registration in Python using the nipype package. It worked for basic registration:
from nipype.interfaces import fsl
from nipype.testing import example_data
flt = fsl.FLIRT(bins=640, cost_func='mutualinfo')
flt.inputs.in_file = 'myInput.img'
flt.inputs.reference = 'myReference.img'
flt.inputs.out_file = 'moved_subject.nii'
flt.inputs.out_matrix_file = 'subject_to_template.mat'
res = flt.run()
This yielded a successful registration. Now I am trying to apply that transformation to a non-brain image in the same space as the input MRI, using the matrix written to flt.inputs.out_matrix_file ('subject_to_template.mat').
I tried the following:
from nipype.interfaces import fsl
flt = fsl.FLIRT(bins=640, cost_func='mutualinfo')
flt.inputs.in_file = 'myNonBrainImage.img'
flt.inputs.reference = 'myReference.img'
flt.inputs.out_file = 'regNonBrain.nii'
flt.inputs.in_matrix_file = 'subject_to_template.mat'
flt.inputs.apply_xfm = True
res = flt.run()
I was hoping that flt.inputs.in_matrix_file and the flt.inputs.apply_xfm = True flag would override the standard registration and just use the matrix to register the additional image, but I got this error:
INFO:interface:stderr 2011-08-10T14:59:17.307116:Unrecognised option D
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/bin/python27/lib/python2.7/site-packages/nipype-0.4.1-py2.7.egg/nipype/interfaces/base.py", line 775, in run
runtime = self._run_interface(runtime)
File "/usr/bin/python27/lib/python2.7/site-packages/nipype-0.4.1-py2.7.egg/nipype/interfaces/base.py", line 1050, in _run_interface
self.raise_exception(runtime)
File "/usr/bin/python27/lib/python2.7/site-packages/nipype-0.4.1-py2.7.egg/nipype/interfaces/base.py", line 1027, in raise_exception
raise RuntimeError(message)
RuntimeError: Command:
flirt -in RF8869_3D_XRT_Dose_CT_A.img -ref clo010809T1Gd.img -out regDose.nii -omat /root/Desktop/Test Data/RF8869_3D_XRT_Dose_CT_A_flirt.mat -applyxfm -bins 640 -searchcost mutualinfo -init subject_to_template.mat
Standard output:
Standard error:
Unrecognised option D
Return code: 255
Interface FLIRT failed to run.
Do you know why this happens and how I can solve it?
There is a space in the directory name containing your images:
/root/Desktop/Test Data
Rename Test Data to Test_Data and it will work.
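For example (a sketch; adjust the path to your layout):

import os

# Remove the space so the generated flirt command line is not split on it.
os.rename('/root/Desktop/Test Data', '/root/Desktop/Test_Data')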