Edit: I'm starting to suspect that the problems below are due to metadata, because even after correcting the issues raised regarding units, mpcalc.geostrophic_wind(z) still issues warnings about the coordinates and dimension ordering. Maybe the function is unable to identify the coordinates from the file? Perhaps this is because WRF output data is non-CF compliant?
I would like to compute geostrophic and ageostrophic winds from WRF-ARW data using the MetPy function mpcalc.geostrophic_wind.
My attempt results in a bunch of errors and I don't know what I'm doing wrong. Can someone tell me how to modify my code to get rid of these errors?
Here is my attempt so far:
import numpy as np
from netCDF4 import Dataset
import metpy.calc as mpcalc
from wrf import getvar
# Open the NetCDF file
filename = "wrfout_d01_2016-10-04_12:00:00"
ncfile = Dataset(filename)
# Extract the geopotential height and wind variables
z = getvar(ncfile, "z", units="m")
ua = getvar(ncfile, "ua", units="m s-1")
va = getvar(ncfile, "va", units="m s-1")
# Smooth height data
z = mpcalc.smooth_gaussian(z, 3)
# Compute the geostrophic wind
geo_wind_u, geo_wind_v = mpcalc.geostrophic_wind(z)
# Calculate ageostrophic wind components
ageo_wind_u = ua - geo_wind_u
ageo_wind_v = va - geo_wind_v
The computation of the geostrophic wind throws several warnings:
>>> # Compute the geostrophic wind
>>> geo_wind_u, geo_wind_v = mpcalc.geostrophic_wind(z)
/mnt/.../.../metpy_en/lib/python3.9/site-packages/metpy/xarray.py:355: UserWarning: More than one time coordinate present for variable.
warnings.warn('More than one ' + axis + ' coordinate present for variable'
/mnt/.../.../lib/python3.9/site-packages/metpy/xarray.py:1459: UserWarning: Horizontal dimension numbers not found. Defaulting to (..., Y, X) order.
warnings.warn('Horizontal dimension numbers not found. Defaulting to '
/mnt/.../.../lib/python3.9/site-packages/metpy/xarray.py:355: UserWarning: More than one time coordinate present for variable "XLAT".
warnings.warn('More than one ' + axis + ' coordinate present for variable'
/mnt/.../.../lib/python3.9/site-packages/metpy/xarray.py:1393: UserWarning: y and x dimensions unable to be identified. Assuming [..., y, x] dimension order.
warnings.warn('y and x dimensions unable to be identified. Assuming [..., y, x] '
/mnt/.../.../lib/python3.9/site-packages/metpy/calc/basic.py:1274: UserWarning: Input over 1.5707963267948966 radians. Ensure proper units are given.
warnings.warn('Input over {} radians. '
Can anyone tell me why I'm getting these warnings?
And then trying to compute an ageostrophic wind component results in a bunch of errors:
>>> # Calculate ageostrophic wind components
>>> ageo_wind_u = ua - geo_wind_u
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/mnt/.../lib/python3.9/site-packages/xarray/core/_typed_ops.py", line 209, in __sub__
    return self._binary_op(other, operator.sub)
  File "/mnt/.../lib/python3.9/site-packages/xarray/core/dataarray.py", line 4357, in _binary_op
    f(self.variable, other_variable)
  File "/mnt/.../lib/python3.9/site-packages/xarray/core/_typed_ops.py", line 399, in __sub__
    return self._binary_op(other, operator.sub)
  File "/mnt/.../lib/python3.9/site-packages/xarray/core/variable.py", line 2639, in _binary_op
    f(self_data, other_data) if not reflexive else f(other_data, self_data)
  File "/mnt/iusers01/fatpou01/sees01/w34926hb/.conda/envs/metpy_env/lib/python3.9/site-packages/pint/facets/numpy/quantity.py", line 61, in __array_ufunc__
    return numpy_wrap("ufunc", ufunc, inputs, kwargs, types)
  File "/mnt/.../lib/python3.9/site-packages/pint/facets/numpy/numpy_func.py", line 953, in numpy_wrap
    return handled[name](*args, **kwargs)
  File "/mnt/.../lib/python3.9/site-packages/pint/facets/numpy/numpy_func.py", line 513, in _subtract
    (x1, x2), output_wrap = unwrap_and_wrap_consistent_units(x1, x2)
  File "/mnt/.../lib/python3.9/site-packages/pint/facets/numpy/numpy_func.py", line 130, in unwrap_and_wrap_consistent_units
    args, _ = convert_to_consistent_units(*args, pre_calc_units=first_input_units)
  File "/mnt/.../lib/python3.9/site-packages/pint/facets/numpy/numpy_func.py", line 111, in convert_to_consistent_units
    tuple(convert_arg(arg, pre_calc_units=pre_calc_units) for arg in args),
  File "/mnt/.../lib/python3.9/site-packages/pint/facets/numpy/numpy_func.py", line 111, in <genexpr>
    tuple(convert_arg(arg, pre_calc_units=pre_calc_units) for arg in args),
  File "/mnt/.../lib/python3.9/site-packages/pint/facets/numpy/numpy_func.py", line 93, in convert_arg
    raise DimensionalityError("dimensionless", pre_calc_units)
pint.errors.DimensionalityError: Cannot convert from 'dimensionless' to 'meter / second'
Any help would be appreciated.
(By the way, I looked at the script at https://github.com/Unidata/python-training/blob/master/pages/gallery/Ageostrophic_Wind_Example.ipynb and did not find it helpful because I'm not sure which of the data manipulations near the top I need to do for the WRF data.)
wrf-python's getvar function, while it takes units as a parameter, only uses it (as far as I can tell) to convert the array values before returning them. To use the result with MetPy you need to attach proper units. I would do this using a small helper function:
from metpy.units import units

def metpy_getvar(file, name, units_str):
    # attach pint units to the plain values wrf-python returns
    return getvar(file, name, units=units_str) * units(units_str)

z = metpy_getvar(ncfile, "z", "m")
ua = metpy_getvar(ncfile, "ua", "m s-1")
va = metpy_getvar(ncfile, "va", "m s-1")
That should eliminate the complaints about missing units.
EDIT: Fix name collision in hastily written function.
The data presented by raw WRF-ARW datasets and by variables extracted via wrf-python do not have metadata that interact well with MetPy's assumptions about unit attributes, coordinate variables, and grid projections (from the CF Conventions). Instead, I would recommend using xwrf, a recently released package for working with WRF data in a more CF-Conventions-friendly way. With xwrf, your example would look like:
import metpy.calc as mpcalc
import xarray as xr
import xwrf
# Open the NetCDF file
filename = "wrfout_d01_2016-10-04_12:00:00"
ds = xr.open_dataset(filename).xwrf.postprocess()
# Extract the geopotential height and wind variables
z = ds['geopotential_height']
ua = ds['wind_east']
va = ds['wind_north']
# Smooth height data
z = mpcalc.smooth_gaussian(z, 3)
# Compute the geostrophic wind
geo_wind_u, geo_wind_v = mpcalc.geostrophic_wind(z)
# Calculate ageostrophic wind components
ageo_wind_u = ua - geo_wind_u
ageo_wind_v = va - geo_wind_v
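As an aside, MetPy also provides mpcalc.ageostrophic_wind, which should let you skip the manual subtraction. A minimal sketch, assuming the same xwrf-postprocessed fields as above and that MetPy can infer the grid information from the metadata xwrf attaches (worth verifying on your file):
# Sketch: MetPy's combined helper; dx/dy/latitude are left for MetPy to
# infer from the CF-style metadata attached by xwrf (an assumption)
ageo_wind_u, ageo_wind_v = mpcalc.ageostrophic_wind(z, ua, va)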
I've made some progress: an updated script and the resulting plot are included below. Part of the problem was that I needed to pass dx, dy, and lat into the function metpy.calc.geostrophic_wind, as they were seemingly not being read automatically from the numpy array.
There are still (at least) two problems:
I've passed x_dim=-2 and y_dim=-1 in an effort to set [X,Y] order. (The documentation at https://unidata.github.io/MetPy/latest/api/generated/metpy.calc.geostrophic_wind.html says the default is x_dim=-1 and y_dim=-2 for [..., Y, X] order, but does not say what to set x_dim and y_dim to for [..., X, Y] order, so I just guessed.) However, I am still getting "UserWarning: Horizontal dimension numbers not found. Defaulting to (..., Y, X) order."
Secondly, as you can see in the plot there is something weird going on with the geostrophic wind component at the coastlines.
u-component of geostrophic wind at 300 mb
Here is my current script:
import numpy as np
from netCDF4 import Dataset
import metpy.calc as mpcalc
from metpy.units import units
import matplotlib.pyplot as plt
from matplotlib.cm import get_cmap
from wrf import getvar, interplevel, to_np, get_basemap, latlon_coords
# Open the NetCDF file
filename = "wrfout_d01_2016-10-04_12:00:00"
ncfile = Dataset(filename)
z = getvar(ncfile, "z", units="m") * units.meter
# Smooth height data
z = mpcalc.smooth_gaussian(z, 3)
dx = 4000.0 * units.meter
dy = 4000.0 * units.meter
lat = getvar(ncfile, "lat") * units.degrees
geo_wind_u, geo_wind_v = mpcalc.geostrophic_wind(z, dx, dy, lat, x_dim=-2, y_dim=-1)
#####
p = getvar(ncfile, "pressure")
z = getvar(ncfile, "z", units="m")
ht_300 = interplevel(z, p, 300)
#geostrophic wind components on 300 mb level
geo_wind_u_300 = interplevel(geo_wind_u, p, 300)
geo_wind_v_300 = interplevel(geo_wind_v, p, 300)
# Get the lat/lon coordinates
lats, lons = latlon_coords(ht_300)
# Get the basemap object
bm = get_basemap(ht_300)
# Create the figure
fig = plt.figure(figsize=(12,12))
ax = plt.axes()
# Convert the lat/lon coordinates to x/y coordinates in the projection space
x, y = bm(to_np(lons), to_np(lats))
# Add the 300 mb height contours
levels = np.arange(8640., 9690., 40.)
contours = bm.contour(x, y, to_np(ht_300), levels=levels, colors="black")
plt.clabel(contours, inline=1, fontsize=10, fmt="%i")
# Add the wind contours
levels = np.arange(10, 70, 5)
geo_u_contours = bm.contourf(x, y, to_np(geo_wind_u_300), levels=levels, cmap=get_cmap("YlGnBu"))
plt.colorbar(geo_u_contours, ax=ax, orientation="horizontal", pad=.05, shrink=0.75)
# Add the geographic boundaries
bm.drawcoastlines(linewidth=0.25)
bm.drawstates(linewidth=0.25)
bm.drawcountries(linewidth=0.25)
plt.title("300 mb height (m) and u-component of geostrophic wind (m s-1) at 1200 UTC on 04-10-2016", fontsize=12)
plt.savefig('geo_u_300mb_04-10-2016_1200_smoothed.png', bbox_inches='tight')
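For reference, the nominal grid spacing is also stored as global attributes on the wrfout file, so it could be read instead of hard-coded; a sketch, assuming the standard DX/DY attributes are present in the file:
# Sketch: read the nominal grid spacing (in meters) from the wrfout
# global attributes instead of hard-coding 4000.0
dx = ncfile.getncattr("DX") * units.meter
dy = ncfile.getncattr("DY") * units.meter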
Related
Using NASA's SRTM data, I've generated a global elevation heatmap.
The problem is, however, the continents tend to blend in with the ocean because of the range of elevation values. Is it possible to change the colorbar's scale so that the edges of the continents are more distinct from the ocean? I've tried different cmaps, but they all seem to suffer from the problem.
Here is my code. I'm initializing a giant array (with 0s) to hold global elevation data, and then populating it file by file from the SRTM dataset. Each file is 1 degree latitude by 1 degree longitude.
Another question I had was regarding the map itself. For some reason, the Appalachian Mountains seem to have disappeared entirely.
import os
import numpy as np
from .srtm_map import MapGenerator
from ..utils.hgt_parser import HGTParser
from tqdm import tqdm
import cv2
import matplotlib.pyplot as plt
import richdem as rd

class GlobalMapGenerator():
    def __init__(self):
        self.gen = MapGenerator()
        self.base_dir = "data/elevation/"
        self.hgt_files = os.listdir(self.base_dir)
        self.global_elevation_data = None

    def shrink(self, data, rows, cols):
        # downsample by block-summing; integer division keeps reshape dims integral
        return data.reshape(rows, data.shape[0] // rows, cols, data.shape[1] // cols).sum(axis=1).sum(axis=2)

    def GenerateGlobalElevationMap(self, stride):
        res = 1201 // stride
        max_N = 59
        max_W = 180
        max_S = 56
        max_E = 179

        # N59 --> N00
        # S01 --> S56
        # E000 --> E179
        # W180 --> W001

        # Initialize array global elevation
        self.global_elevation_data = np.zeros(( res*(max_S+max_N+1), res*(max_E+max_W+1) ))
        print("Output Image Shape:", self.global_elevation_data.shape)

        for hgt_file in tqdm(self.hgt_files):
            lat_letter = hgt_file[0]
            lon_letter = hgt_file[3]
            lat = int(hgt_file[1:3])
            lon = int(hgt_file[4:7])

            if lat_letter == "S":
                # Shift south down by max_N, but south starts at S01 so we translate up by 1 too
                lat_trans = max_N + lat - 1
            else:
                # Bigger N lat means further up. E.g. N59 is at index 0 and is higher than N00
                lat_trans = max_N - lat

            if lon_letter == "E":
                # Shift east right by max_W
                lon_trans = max_W + lon
            else:
                # Bigger W lon means further left. E.g. W180 is at index 0 and is more left than W001
                lon_trans = max_W - lon

            # load in data from file as resized
            data = cv2.resize(HGTParser(os.path.join(self.base_dir, hgt_file)), (res, res))

            # generate bounds (x/y --> lon/lat) for this file's data within the giant array
            lat_bounds = [res*lat_trans, res*(lat_trans+1)]
            lon_bounds = [res*lon_trans, res*(lon_trans+1)]

            try:
                self.global_elevation_data[ lat_bounds[0]:lat_bounds[1], lon_bounds[0]:lon_bounds[1] ] = data
            except:
                print("REFERENCE ERROR: " + hgt_file)
                print("lat: ", lat_bounds)
                print("lon: ", lon_bounds)

        # generate figure (save before show, otherwise the saved image is blank)
        plt.figure(figsize=(20,20))
        plt.imshow(self.global_elevation_data, cmap="rainbow")
        plt.title("Global Elevation Heatmap")
        plt.colorbar()
        np.save("figures/GlobalElevationMap.npy", self.global_elevation_data)
        plt.savefig("figures/GlobalElevationMap.png")
        plt.show()

    def GenerateGlobalSlopeMap(self, stride):
        pass
Use a TwoSlopeNorm (docs) for your norm, like the example here.
From the example:
Sometimes we want to have a different colormap on either side of a conceptual center point, and we want those two colormaps to have different linear scales. An example is a topographic map where the land and ocean have a center at zero, but land typically has a greater elevation range than the water has depth range, and they are often represented by a different colormap.
If you set the midpoint at sea level (0), then you can have two very different scalings based on ocean elevation vs land elevation.
Example code (taken from the example linked above):
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import matplotlib.cbook as cbook
from matplotlib import cm
dem = cbook.get_sample_data('topobathy.npz', np_load=True)
topo = dem['topo']
longitude = dem['longitude']
latitude = dem['latitude']
fig, ax = plt.subplots()
# make a colormap that has land and ocean clearly delineated and of the
# same length (256 + 256)
colors_undersea = plt.cm.terrain(np.linspace(0, 0.17, 256))
colors_land = plt.cm.terrain(np.linspace(0.25, 1, 256))
all_colors = np.vstack((colors_undersea, colors_land))
terrain_map = colors.LinearSegmentedColormap.from_list(
'terrain_map', all_colors)
# make the norm: Note the center is offset so that the land has more
# dynamic range:
divnorm = colors.TwoSlopeNorm(vmin=-500., vcenter=0, vmax=4000)
pcm = ax.pcolormesh(longitude, latitude, topo, rasterized=True, norm=divnorm,
cmap=terrain_map, shading='auto')
# Simple geographic plot, set aspect ratio because distance between lines of
# longitude depends on latitude.
ax.set_aspect(1 / np.cos(np.deg2rad(49)))
ax.set_title('TwoSlopeNorm(x)')
cb = fig.colorbar(pcm, shrink=0.6)
cb.set_ticks([-500, 0, 1000, 2000, 3000, 4000])
plt.show()
See how it scales numbers with this simple usage (from docs):
>>> import matplotlib.colors as mcolors
>>> offset = mcolors.TwoSlopeNorm(vmin=-4000., vcenter=0., vmax=10000)
>>> data = [-4000., -2000., 0., 2500., 5000., 7500., 10000.]
>>> offset(data)
array([0., 0.25, 0.5, 0.625, 0.75, 0.875, 1.0])
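Applied to the plotting call in your GenerateGlobalElevationMap, a minimal sketch (the vmin/vmax here are guesses; pick bounds that bracket your actual elevation range):
import matplotlib.colors as mcolors

# Sketch: center the norm at sea level; vmin/vmax are assumed bounds
divnorm = mcolors.TwoSlopeNorm(vmin=-500., vcenter=0., vmax=6000.)
plt.imshow(self.global_elevation_data, cmap="terrain", norm=divnorm)
plt.colorbar()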
I am trying to fit a smoothing B-spline to some data and I found this very helpful post on here. However, I not only need the spline, but also its derivatives, so I tried to add the following code to the example:
tck_der = interpolate.splder(tck, n=1)
x_der, y_der, z_der = interpolate.splev(u_fine, tck_der)
For some reason this does not seem to work due to some data type issues. I get the following traceback:
Traceback (most recent call last):
File "interpolate_point_trace.py", line 31, in spline_example
tck_der = interpolate.splder(tck, n=1)
File "/home/user/anaconda3/lib/python3.7/site-packages/scipy/interpolate/fitpack.py", line 657, in splder
return _impl.splder(tck, n)
File "/home/user/anaconda3/lib/python3.7/site-packages/scipy/interpolate/_fitpack_impl.py", line 1206, in splder
sh = (slice(None),) + ((None,)*len(c.shape[1:]))
AttributeError: 'list' object has no attribute 'shape'
The reason for this seems to be that the second argument of the tck tuple contains a list of numpy arrays. I thought turning the input data to be a numpy array as well would help, but it does not change the data types of tck.
Does this behavior reflect an error in scipy, or is the input malformed?
I tried manually turning the list into an array:
tck[1] = np.array(tck[1])
but this (which didn't surprise me) also gave an error:
ValueError: operands could not be broadcast together with shapes (0,8) (7,1)
Any ideas of what the problem could be? I have used scipy before and on 1D splines the splder function works just fine, so I assume it has something to do with the spline being a line in 3D.
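For reference, a quick sketch that shows this structure on the tck from the example below:
# tck[0]: knot vector; tck[1]: a *list* of three coefficient arrays (x, y, z);
# tck[2]: spline degree -- the list in tck[1] is what trips up splder
knots, coeffs, degree = tck
print(type(coeffs), len(coeffs), np.asarray(coeffs).shape)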
------- edit --------
Here is a minimum working example:
import numpy as np
import matplotlib.pyplot as plt
from scipy import interpolate
from mpl_toolkits.mplot3d import Axes3D
total_rad = 10
z_factor = 3
noise = 0.1
num_true_pts = 200
s_true = np.linspace(0, total_rad, num_true_pts)
x_true = np.cos(s_true)
y_true = np.sin(s_true)
z_true = s_true / z_factor
num_sample_pts = 80
s_sample = np.linspace(0, total_rad, num_sample_pts)
x_sample = np.cos(s_sample) + noise * np.random.randn(num_sample_pts)
y_sample = np.sin(s_sample) + noise * np.random.randn(num_sample_pts)
z_sample = s_sample / z_factor + noise * np.random.randn(num_sample_pts)
tck, u = interpolate.splprep([x_sample, y_sample, z_sample], s=2)
x_knots, y_knots, z_knots = interpolate.splev(tck[0], tck)
u_fine = np.linspace(0, 1, num_true_pts)
x_fine, y_fine, z_fine = interpolate.splev(u_fine, tck)
# this is the part of the code I inserted: the line under this causes the crash
tck_der = interpolate.splder(tck, n=1)
x_der, y_der, z_der = interpolate.splev(u_fine, tck_der)
# end of the inserted code
fig2 = plt.figure(2)
ax3d = fig2.add_subplot(111, projection='3d')
ax3d.plot(x_true, y_true, z_true, 'b')
ax3d.plot(x_sample, y_sample, z_sample, 'r*')
ax3d.plot(x_knots, y_knots, z_knots, 'go')
ax3d.plot(x_fine, y_fine, z_fine, 'g')
fig2.show()
plt.show()
Stumbled into the same problem...
I circumvented the error by avoiding interpolate.splder(tck, n=1) and instead using interpolate.splev(spline_ev, tck, der=1), which returns the derivatives at the points spline_ev (see the SciPy docs).
If you need the spline I think you can then use interpolate.splprep() again.
In total something like:
import numpy as np
from scipy import interpolate
import matplotlib.pyplot as plt
points = np.random.rand(10,2) * 10
(tck, u), fp, ier, msg = interpolate.splprep(points.T, s=0, k=3, full_output=True)
spline_ev = np.linspace(0.0, 1.0, 100, endpoint=True)
spline_points = interpolate.splev(spline_ev, tck)
# Calculate derivative
spline_der_points = interpolate.splev(spline_ev, tck, der=1)
# splev returns a list of arrays, so pass it to splprep as-is (a list has no .T)
spline_der = interpolate.splprep(spline_der_points, s=0, k=3, full_output=True)
# Plot the data and derivative
fig = plt.figure()
plt.plot(points[:,0], points[:,1], '.-', label="points")
plt.plot(spline_points[0], spline_points[1], '.-', label="tck")
plt.plot(spline_der_points[0], spline_der_points[1], '.-', label="tck_der")
# Show tangent
plt.arrow(spline_points[0][23]-spline_der_points[0][23], spline_points[1][23]-spline_der_points[1][23], 2.0*spline_der_points[0][23], 2.0*spline_der_points[1][23])
plt.legend()
plt.show()
EDIT:
I also opened an issue on GitHub, and according to ev-br the usage of interpolate.splprep is deprecated; one should use make_interp_spline / BSpline instead.
As noted in other answers, splprep output is incompatible with splder, but is compatible with splev. And the latter can evaluate the derivatives.
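For reference, on the variables from the question, that one-liner would be something like this sketch:
# Sketch: first derivatives of the parametric spline, evaluated at u_fine
x_der, y_der, z_der = interpolate.splev(u_fine, tck, der=1)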
However, for interpolation, there is an alternative approach, which avoids splprep altogether. I'm basically copying a reply on the SciPy issue tracker (https://github.com/scipy/scipy/issues/10389):
Here's an example of replicating the splprep outputs. First let's make sense out of the splprep output:
# start with the OP example
import numpy as np
from scipy import interpolate
points = np.random.rand(10,2) * 10
(tck, u), fp, ier, msg = interpolate.splprep(points.T, s=0, k=3, full_output=True)
# check the meaning of the `u` array: evaluation of the spline at `u`
# gives back the original points (up to a list/transpose)
xy = interpolate.splev(u, tck)
xy = np.asarray(xy)
np.allclose(xy.T, points)
Next, let's replicate it without splprep. First, build the u array: the curve is represented parametrically, and u is essentially an approximation for the arc length. Other parametrizations are possible, but here let's stick to what splprep does. Translating the pseudocode from the doc page, https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.splprep.html
vv = np.sum((points[1:, :] - points[:-1, :])**2, axis=1)
vv = np.sqrt(vv).cumsum()
vv /= vv[-1]
vv = np.r_[0, vv]
# check:
np.allclose(u, vv)
Now, interpolate along the parametric curve: points vs vv:
spl = interpolate.make_interp_spline(vv, points)
# check spl.t vs knots from splprep
spl.t - tck[0]
The result, spl, is a BSpline object which you can evaluate, differentiate etc in a usual way:
np.allclose(points, spl(vv))
# differentiate: spl.derivative() returns a new BSpline, which we then evaluate at vv
spl_derivative = spl.derivative()(vv)
I want to retrieve all lat/lon coordinate pairs of a regular grid across the US. I've sourced the following code from an earlier Stack Overflow question:
import shapely.geometry
import pyproj
# https://stackoverflow.com/questions/40342355/how-can-i-generate-a-regular-geographic-grid-using-python
# Set up projections
p_ll = pyproj.Proj(init='epsg:4326') #4326
p_mt = pyproj.Proj(init='epsg:3857') # epsg:3857 metric; same as EPSG:900913
# Create corners of rectangle to be transformed to a grid
nw = shapely.geometry.Point((24.0,-89)) #actually SW
print(nw)
se = shapely.geometry.Point((25.0,-65)) #actually NE
# 24.0,-89
# 50.0,-65
stepsize = 50000 # 50 km grid step size
# Project corners to target projection
s = pyproj.transform(p_ll, p_mt, nw.x, nw.y) # Transform NW point to 3857
e = pyproj.transform(p_ll, p_mt, se.x, se.y) # .. same for SE
print(s)
print(e)
# Iterate over 2D area
gridpoints = []
x = s[0]
while x < e[0]:
    y = s[1]
    while y < e[1]:
        p = shapely.geometry.Point(pyproj.transform(p_mt, p_ll, x, y))
        gridpoints.append(p)
        y += stepsize
    x += stepsize
# with open('testout.csv', 'wb') as of:
#     of.write('lon;lat\n')
for p in gridpoints:
    print(str(p.x) + "," + str(p.y))
    # of.write('{:f};{:f}\n'.format(p.x, p.y))
Source: How can I generate a regular geographic grid using python?
The code doesn't work when my longitude exceeds -89.9999 (i.e. anywhere from -90 through -180), which results in the following error:
MacBook-Pro:Desktop user$ python updated_calculate_points.py
Traceback (most recent call last):
File "updated_calculate_points.py", line 19, in <module>
s = pyproj.transform(p_ll, p_mt, nw.x, nw.y) # Transform NW point to 3857
File "/Users/user/anaconda3/lib/python3.6/site-packages/pyproj/__init__.py", line 504, in transform
_proj._transform(p1,p2,inx,iny,inz,radians)
File "_proj.pyx", line 361, in _proj._transform (_proj.c:3762)
RuntimeError: b'latitude or longitude exceeded limits'
I don't fully understand how these projections work. Can someone assist?
Using the h3 library in combination with defined geographies (e.g. boundaries pulled from OpenStreetMap) is an elegant solution.
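A minimal sketch of the idea, assuming the h3-py v3 API and using a hypothetical rectangle as the region polygon (in practice you would substitute a real US boundary, e.g. from OpenStreetMap):
import h3

# Hypothetical GeoJSON polygon roughly covering the contiguous US
region = {
    "type": "Polygon",
    "coordinates": [[
        [-125.0, 24.0], [-65.0, 24.0], [-65.0, 50.0],
        [-125.0, 50.0], [-125.0, 24.0],
    ]],
}

# Fill the polygon with hexagons at a coarse resolution, then take each
# hexagon's center as a grid point (returned as (lat, lon) pairs)
cells = h3.polyfill(region, 4, geo_json_conformant=True)
gridpoints = [h3.h3_to_geo(c) for c in cells]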
I am trying to create an above-head view heat map of relatively sparse EEG data (27 electrodes). I convert x,y cartesian coordinates of the EEG electrodes to polar, and attempt to map them as such. Each x,y coordinate corresponds to a given value (if you want to know: Hurst exponent) to which I would like the color around that location to correspond.
I started with working code from this page and tried to adapt it to my problem. Unfortunately, my adaptation is not working.
Here is my code:
from __future__ import division, print_function, absolute_import
from pylab import *
import numpy as np
from scipy.interpolate import griddata
# Setting the parameters that define the circle
max_r = 1
max_theta = 2.0 * np.pi
###Cartesian coordinates of the 27 electrodes
###x axis goes from back of head to nose, with nose being the positive direction
###y axis goes from ear to ear, with toward right ear (from perspective of self) being the negative direction
# X coordinates
X = [0.95, 0.95, 0.673, 0.673, 0.000000000000000044, 0.000000000000000044,
-0.673, -0.673, -0.95, -0.95, 0.587, 0.587, 0.0000000000000000612, 0.0000000000000000612,
-0.587, -0.587, 0.719, 0.00000000000000000000000000000000375, -0.719,
0.375, 0.375, 0.999, -0.999, -0.375, -0.375, -0.9139, -0.9139,.5,.6,.7,.8]
# Y coordinates
Y = [0.309, -0.309, 0.545, -0.545, 0.719, -0.719, 0.545, -0.545,
0.309, -0.309, 0.809, -0.809, 0.999, -0.999, 0.809, -0.809, 0, -0.0000000000000000612,
-0.0000000000000000881, 0.375, -0.375, 0, -0.000000000000000122, 0.375, -0.375, 0.2063, -0.2063,.5,.6,.7,.8]
# Convert cartesian coordinates to polar
def convert_to_polar(x, y):
    theta = np.arctan2(y, x)
    r = np.sqrt(x ** 2 + y ** 2)
    return theta, r
# Arrays that house the theta and radii from converted cartesian coordinates.
Thetas = []
Rs = []
# Converting cartesian coordinates to polar, for each electrode
for i in range(0, 31):
    theta, r = convert_to_polar(X[i], Y[i])
    Thetas.append(theta)
    Rs.append(r)
# Making a two column list that contains the converted thetas and radii, so the appropriate shape is attained.
points = [Thetas,Rs]
values = [[.51,.71,.81,.91,.72,.87,.90,.67,.78,.89,.56,.45,.68,.96,.69,.63,.37,.85,.92,.70,.74,.97,.35,.76,.68,.46,.68,90,91,92,93],
[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,30,78,56,90]]
# now we create a grid of values, interpolated from our random sample above
theta = np.linspace(0.0, max_theta, 100)
r = np.linspace(0, max_r, 200)
grid_r, grid_theta = np.meshgrid(r, theta)
data = griddata(points, values, (grid_r, grid_theta),fill_value=0)
# Create a polar projection
ax1 = plt.subplot(projection="polar")
ax1.pcolormesh(theta, r, data.T)
plt.show()
I get an error:
Traceback (most recent call last):
  File "/Users/mac/NIH/EEG/Python/Testing heat map", line 50, in <module>
    data = griddata(points, values, (grid_r, grid_theta), fill_value=0)
  File "/Users/mac/anaconda/lib/python2.7/site-packages/scipy/interpolate/ndgriddata.py", line 217, in griddata
    rescale=rescale)
  File "scipy/interpolate/interpnd.pyx", line 246, in scipy.interpolate.interpnd.LinearNDInterpolator.__init__ (scipy/interpolate/interpnd.c:4980)
  File "scipy/spatial/qhull.pyx", line 1747, in scipy.spatial.qhull.Delaunay.__init__ (scipy/spatial/qhull.c:15918)
  File "scipy/spatial/qhull.pyx", line 415, in scipy.spatial.qhull._Qhull.__init__ (scipy/spatial/qhull.c:5108)
scipy.spatial.qhull.**QhullError: QH6214 qhull input error: not enough points(2) to construct initial simplex (need 33)**

While executing: | qhull d Qbb Qt Q12 Qx Qz Qc
Options selected for Qhull 2015.2.r 2016/01/18:
  run-id 1980533833  delaunay  Qbbound-last  Qtriangulate  Q12-no-wide-dup
  Qxact-merge  Qz-infinity-point  Qcoplanar-keep  _zero-centrum  Qinterior-keep
  Q3-no-merge-vertices-dim-high
The bolded portion is what I am trying to understand. When I add more points (that is, when I add more points to the lists X and Y that become polar coordinates), the number of points the error claims to need keeps increasing, always two steps ahead of how many points I have input.
Does anyone have any idea how to deal with this?
The first argument to griddata must have shape (n, D), where n is the number of points, and D is the dimension of those points. You passed in points = [Thetas,Rs], where Thetas and Rs are lists with length 31. When that input is converted to a two-dimensional array, it will have shape (2, 31). So griddata thinks you have passed in just two 31-dimensional points.
To fix this, you can create points using numpy.column_stack so that it is an array with shape (31, 2), e.g.:
points = np.column_stack((Thetas, Rs))
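Putting it together with the question's variables, a sketch (this also assumes the first row of values holds the one-per-electrode data, since griddata wants exactly one value per point):
points = np.column_stack((Thetas, Rs))   # shape (31, 2)
hurst = np.asarray(values[0])            # shape (31,), one value per electrode
# query coordinates are ordered (theta, r) to match the columns of points
data = griddata(points, hurst, (grid_theta, grid_r), fill_value=0)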
I have 3D measurement data on a sphere that is very coarse and I want to interpolate.
I found that RectSphereBivariateSpline from scipy.interpolate should be most suitable.
I used the example in the RectSphereBivariateSpline documentation as a starting point and now have the following code:
""" read csv input file, post process and plot 3D data """
import csv
import numpy as np
from mayavi import mlab
from scipy.interpolate import RectSphereBivariateSpline
# user input
nElevationPoints = 17 # needs to correspond with csv file
nAzimuthPoints = 40 # needs to correspond with csv file
threshold = - 40 # needs to correspond with how measurement data was captured
turnTableStepSize = 72 # needs to correspond with measurement settings
resolution = 0.125 # needs to correspond with measurement settings
# read data from file
patternData = np.empty([nElevationPoints, nAzimuthPoints]) # empty buffer
ifile = open('ttest.csv') # need the 'b' suffix to prevent blank rows being inserted
reader = csv.reader(ifile,delimiter=',')
reader.next() # skip first line in csv file as this is only text
for nElevation in range (0,nElevationPoints):
# azimuth
for nAzimuth in range(0,nAzimuthPoints):
patternData[nElevation,nAzimuth] = reader.next()[2]
ifile.close()
# post process
def r(thetaIndex,phiIndex):
"""r(thetaIndex,phiIndex): function in 3D plotting to return positive vector length from patternData[theta,phi]"""
radius = -threshold + patternData[thetaIndex,phiIndex]
return radius
#phi,theta = np.mgrid[0:nAzimuthPoints,0:nElevationPoints]
theta = np.arange(0,nElevationPoints)
phi = np.arange(0,nAzimuthPoints)
thetaMesh, phiMesh = np.meshgrid(theta,phi)
stepSizeRad = turnTableStepSize * resolution * np.pi / 180
theta = theta * stepSizeRad
phi = phi * stepSizeRad
# create new grid to interpolate on
phiIndex = np.linspace(1,360,360)
phiNew = phiIndex*np.pi/180
thetaIndex = np.linspace(1,180,180)
thetaNew = thetaIndex*np.pi/180
thetaNew,phiNew = np.meshgrid(thetaNew,phiNew)
# create interpolator object and interpolate
data = r(thetaMesh,phiMesh)
lut = RectSphereBivariateSpline(theta,phi,data.T)
data_interp = lut.ev(thetaNew.ravel(),phiNew.ravel()).reshape((360,180)).T
x = (data_interp(thetaIndex,phiIndex)*np.cos(phiNew)*np.sin(thetaNew))
y = (-data_interp(thetaIndex,phiIndex)*np.sin(phiNew)*np.sin(thetaNew))
z = (data_interp(thetaIndex,phiIndex)*np.cos(thetaNew))
# plot 3D data
obj = mlab.mesh(x, y, z, colormap='jet')
obj.enable_contours = True
obj.contour.filled_contours = True
obj.contour.number_of_contours = 20
mlab.show()
The example from the documentation works, but when I try to run the above code with the following test data (testdata), I get a ValueError at the code position where the RectSphereBivariateSpline interpolator object is declared:
ValueError:
ERROR: on entry, the input data are controlled on validity
the following restrictions must be satisfied.
-1<=iopt(1)<=1, 0<=iopt(2)<=1, 0<=iopt(3)<=1,
-1<=ider(1)<=1, 0<=ider(2)<=1, ider(2)=0 if iopt(2)=0.
-1<=ider(3)<=1, 0<=ider(4)<=1, ider(4)=0 if iopt(3)=0.
mu >= mumin (see above), mv >= 4, nuest >=8, nvest >= 8,
kwrk>=5+mu+mv+nuest+nvest,
lwrk >= 12+nuest*(mv+nvest+3)+nvest*24+4*mu+8*mv+max(nuest,mv+nvest)
0< u(i-1)<u(i)< pi, i=2,..,mu,
-pi<=v(1)< pi, v(1)<v(i-1)<v(i)<v(1)+2*pi, i=3,...,mv
if iopt(1)=-1: 8<=nu<=min(nuest,mu+6+iopt(2)+iopt(3)), 8<=nv<=min(nvest,mv+7)
if iopt(1)>=0: s>=0
if s=0: nuest>=mu+6+iopt(2)+iopt(3), nvest>=mv+7
if one of these conditions is found to be violated,control is
immediately repassed to the calling program. in that case there is no
approximation returned.
I have tried and tried, but I am absolutely clueless what I should change in order to satisfy the RectSphereBivariateSpline object.
Does anyone have any hint as to what I may be doing wrong?
-- EDIT --
With the suggestions from #HYRY, I now have the following code that runs without runtime errors:
""" read csv input file, post process and plot 3D data """
import csv
import numpy as np
from mayavi import mlab
from scipy.interpolate import RectSphereBivariateSpline
# user input
nElevationPoints = 17 # needs to correspond with csv file
nAzimuthPoints = 40 # needs to correspond with csv file
threshold = - 40 # needs to correspond with how measurement data was captured
turnTableStepSize = 72 # needs to correspond with measurement settings
resolution = 0.125 # needs to correspond with measurement settings
# read data from file
patternData = np.empty([nElevationPoints, nAzimuthPoints]) # empty buffer
ifile = open('ttest.csv') # need the 'b' suffix to prevent blank rows being inserted
reader = csv.reader(ifile,delimiter=',')
reader.next() # skip first line in csv file as this is only text
for nElevation in range (0,nElevationPoints):
# azimuth
for nAzimuth in range(0,nAzimuthPoints):
patternData[nElevation,nAzimuth] = reader.next()[2]
ifile.close()
# post process
def r(thetaIndex,phiIndex):
"""r(thetaIndex,phiIndex): function in 3D plotting to return positive vector length from patternData[theta,phi]"""
radius = -threshold + patternData[thetaIndex,phiIndex]
return radius
#phi,theta = np.mgrid[0:nAzimuthPoints,0:nElevationPoints]
theta = np.arange(0,nElevationPoints)
phi = np.arange(0,nAzimuthPoints)
thetaMesh, phiMesh = np.meshgrid(theta,phi)
stepSizeRad = turnTableStepSize * resolution * np.pi / 180
theta = theta * stepSizeRad
phi = phi * stepSizeRad
# create new grid to interpolate on
phiIndex = np.arange(1,361)
phiNew = phiIndex*np.pi/180
thetaIndex = np.arange(1,181)
thetaNew = thetaIndex*np.pi/180
thetaNew,phiNew = np.meshgrid(thetaNew,phiNew)
# create interpolator object and interpolate
data = r(thetaMesh,phiMesh)
theta[0] += 1e-6 # zero values for theta cause program to halt; phi makes no sense at theta=0
lut = RectSphereBivariateSpline(theta,phi,data.T)
data_interp = lut.ev(thetaNew.ravel(),phiNew.ravel()).reshape((360,180)).T
def rInterp(theta,phi):
"""rInterp(theta,phi): function in 3D plotting to return positive vector length from interpolated patternData[theta,phi]"""
thetaIndex = theta/(np.pi/180)
thetaIndex = thetaIndex.astype(int)
phiIndex = phi/(np.pi/180)
phiIndex = phiIndex.astype(int)
radius = data_interp[thetaIndex,phiIndex]
return radius
# recreate mesh minus one, needed otherwise the below gives index error, but why??
phiIndex = np.arange(0,360)
phiNew = phiIndex*np.pi/180
thetaIndex = np.arange(0,180)
thetaNew = thetaIndex*np.pi/180
thetaNew,phiNew = np.meshgrid(thetaNew,phiNew)
x = (rInterp(thetaNew,phiNew)*np.cos(phiNew)*np.sin(thetaNew))
y = (-rInterp(thetaNew,phiNew)*np.sin(phiNew)*np.sin(thetaNew))
z = (rInterp(thetaNew,phiNew)*np.cos(thetaNew))
# plot 3D data
obj = mlab.mesh(x, y, z, colormap='jet')
obj.enable_contours = True
obj.contour.filled_contours = True
obj.contour.number_of_contours = 20
mlab.show()
However, the plot is much different than the non-interpolated data, see picture here as reference.
Also, when running the interactive session, data_interp is much larger in value (>3e5) than the original data (this is around 20 max).
Any further tips?
It looks like theta[0] can't be 0: the FITPACK restrictions quoted in your error message require 0 < u(i) < pi strictly. If you change it a little before calling RectSphereBivariateSpline, it works:
theta[0] += 1e-6
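For anyone hitting the same ValueError, here is a self-contained sketch on synthetic data that keeps the colatitude grid strictly inside (0, pi), which is what the restrictions above ask for:
import numpy as np
from scipy.interpolate import RectSphereBivariateSpline

# Coarse synthetic grid; nudge theta off the poles so 0 < theta < pi holds
theta = np.linspace(0, np.pi, 17)
theta[0] += 1e-6
theta[-1] -= 1e-6
phi = np.linspace(0, 2 * np.pi, 40, endpoint=False)
data = 20 + np.cos(theta)[:, None] * np.sin(phi)[None, :]

lut = RectSphereBivariateSpline(theta, phi, data)

# Evaluate on a finer grid, again staying strictly inside the domain
theta_fine = np.linspace(1e-3, np.pi - 1e-3, 180)
phi_fine = np.linspace(0, 2 * np.pi - 1e-3, 360)
tg, pg = np.meshgrid(theta_fine, phi_fine, indexing='ij')
data_interp = lut.ev(tg.ravel(), pg.ravel()).reshape(180, 360)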