Counting each line intersection on a grid of polygons in geopandas - python

I have a large dataset (~20,000 storms) spanning 40 years, where each storm has a list of central points at 3-hour intervals. I'm trying to overlay a mesh grid onto a large area and count the number of times each storm has passed over any given grid cell. However, my current implementation only tracks the position at those three-hour intervals, leading to instances where the track jumps over a grid cell that should have been counted.
I am trying to address this problem using geopandas instead: create a LineString for each storm track, then intersect it with the mesh grid. However, I cannot find any working implementation that lets me do so.
To create the grid in geopandas, I am using the following solution from a previous question:
import numpy as np
import geopandas as gpd
from shapely.geometry import MultiLineString
from shapely.ops import polygonize

# np.linspace needs integer counts, so cast the cell counts to int
lonCount = int(((plotExtent[1]+360) - (plotExtent[0]+360)) * gridResolution)
latCount = int((plotExtent[3] - plotExtent[2]) * gridResolution)
lons = np.linspace(plotExtent[0], plotExtent[1], lonCount)
lats = np.linspace(plotExtent[2], plotExtent[3], latCount)
# Store the meshgrid in polygon format
xlines = [((x1, yi), (x2, yi)) for x1, x2 in zip(lons[:-1], lons[1:]) for yi in lats]
ylines = [((xi, y1), (xi, y2)) for y1, y2 in zip(lats[:-1], lats[1:]) for xi in lons]
# Save as a Shapely object, then store in geopandas
grids = list(polygonize(MultiLineString(xlines + ylines)))
polyFrame = gpd.GeoDataFrame(grids)
This creates a GeoDataFrame of ~5,600 polygon objects. I then loop through each of my storm objects to strip out the lat/lon pairs and convert them into a Shapely LineString, which is then read into geopandas as such:
polyLine = LineString(list(zip(storm_lons, storm_lats)))
coord_tests = gpd.GeoSeries(polyLine)
My goal from here is to simply do something like this:
I = coord_tests.intersects(polyFrame)
to collect the list of polygons that the LineString intersects. However, this prompts the following error:
AttributeError: No geometry data set yet (expected in column 'geometry'.)
I'm wondering if I have something formatted incorrectly here, am passing the call incorrectly to this function, or if there is a more efficient way to accomplish what I am trying to do here.
Any assistance would be greatly appreciated.
Thanks!

The GeoDataFrame never had its geometry column set — passing the polygons as a plain positional argument stores them in an ordinary column. Declare them as the geometry explicitly:
polyFrame = gpd.GeoDataFrame(geometry=grids)
:-)
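With the geometry column set, a spatial join is one way to get the per-cell counts directly. Below is a minimal sketch (not from the original answer): it assumes the storm tracks have been collected into a GeoDataFrame named trackFrame (a hypothetical name), one LineString per storm, and a recent geopandas (older versions spell the predicate keyword op):
import geopandas as gpd

# Pair every grid cell with every track that intersects it.
joined = gpd.sjoin(polyFrame, trackFrame, how="inner", predicate="intersects")
# Each cell appears once per intersecting track, so the per-cell storm
# count is the number of joined rows sharing that cell's index.
counts = joined.groupby(joined.index).size()
polyFrame["storm_count"] = counts.reindex(polyFrame.index, fill_value=0)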

Related

Finding if a line between two geo coordinates crosses land

Currently I'm working with a dataset that contains routes around the sea, but some of them either cross land or are on land (due to the fidelity of the data being quite low). I have been using the great https://github.com/toddkarin/global-land-mask tool from toddkarin to find which of my coordinates are on land so I can discard them (eventually I may find a way of moving them to the nearest point at sea).
My current problem is that I need a way of finding whether a line (given any two coordinates) crosses land (think of an island between two points in the sea).
My area of operation is the entire globe and I am using WGS84 if that changes anything. I have some very basic experience with matplotlib/Basemap but I'm not at all confident with it and I'm struggling to find where to start with this. Do I try to plot each coordinate along the line at a given distance/resolution and then use Todd's tool or is there a more efficient way?
Thanks in advance for any assistance. I've done a lot of digging and reading before posting but haven't found what I think I need.
I need the tool to be in python ideally but if I need to call another language/library/exe that can give me a True/False output that's good too.
A possible tool available in Python to perform these sorts of operations is Shapely.
If you're able to extract the polygon data of islands and other land masses, then you could use Shapely to perform an intersection test (see Line vs. Polygon Intersection Coordinates). This will work for checking intersections between points, lines and arbitrary polygons.
The quick and dirty way is as you propose yourself, to discretize the line between the two points and check each of these.
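As a rough sketch of that intersection test (the land polygon below is a stand-in for illustration; in practice you would load real coastline polygons, e.g. from a Natural Earth shapefile):
from shapely.geometry import LineString, Polygon

# Stand-in land mass; real code would read polygons from a shapefile.
land_polygon = Polygon([(100.80, 13.10), (100.85, 13.10),
                        (100.85, 13.30), (100.80, 13.30)])
# A route between two (lon, lat) coordinates.
route = LineString([(100.81099, 13.26077), (100.82993, 13.13237)])
# True if any part of the route touches or crosses the land polygon.
print(route.intersects(land_polygon))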
Thanks to some help from the answer to How to find all coordinates efficiently between two geo points/locations with certain interval using python, I came up with the following, which is now working.
from global_land_mask import globe

def crosses_land(x1, y1, x2, y2):
    # your geo points
    #x1, y1 = 13.26077, 100.81099
    #x2, y2 = 13.13237, 100.82993
    # the increment step (higher = faster)
    STEP = 0.0003
    if x1 > x2:  # x2 must be the bigger one here
        x1, x2 = x2, x1
        y1, y2 = y2, y1
    for i in range(int((x2 - x1) / STEP) + 1):
        try:
            x = x1 + i * STEP
            # linear interpolation between the two points
            y = (y1 - y2) / (x1 - x2) * (x - x1) + y1
        except ZeroDivisionError:
            # x1 == x2 (vertical segment), skip this sample
            continue
        is_on_land = globe.is_land(float(x), float(y))
        #if not is_on_land:
        #    print("in water")
        if is_on_land:
            #print("crosses land")
            return True
    return False

print(crosses_land(13.26077, 100.81099, 13.13237, 100.82993))

geopandas not recognizing point in polygon

I have two data frames. One has polygons of buildings (around 70K) and the other has points that may or not be inside the polygons (around 100K). I need to identify if a point is inside a polygon or not.
When I plot both dataframes (example below), the plot shows that some points are inside the polygons and others are not. However, when I use .within(), the outcome says none of the points are inside polygons.
I recreated the example creating one polygon and one point "by hand" rather than importing the data and in this case .within() does recognize that the point is in the polygon. Therefore, I assume I'm making a mistake but I don't know where.
Example: (I'll just post the part that corresponds to one point and one polygon for simplicity. In this case, each data frame contains either a single point or a single polygon)
1) Using the imported data. The data frame dmR has the points and the data frame dmf has the polygon
import pandas as pd
import geopandas as gpd
import numpy as np
import matplotlib.pyplot as plt
from shapely import wkt
from shapely.geometry import Point, Polygon
plt.style.use("seaborn")
# I'm skipping the data manipulation stage and
# going to the point where the data are used.
print(dmR)
geometry
35 POINT (-95.75207 29.76047)
print(dmf)
geometry
41964 POLYGON ((-95.75233 29.76061, -95.75194 29.760...
# Plot
fig, ax = plt.subplots(figsize=(5,5))
minx, miny, maxx, maxy = ([-95.7525, 29.7603, -95.7515, 29.761])
ax.set_xlim(minx, maxx)
ax.set_ylim(miny, maxy)
dmR.plot(ax=ax, c='Red')
dmf.plot(ax=ax, alpha=0.5)
plt.savefig('imported_data.png')
The outcome
shows that the point is inside the polygon. However,
print(dmR.within(dmf))
35 False
41964 False
dtype: bool
2) If I try to recreate this by hand, it would be as follows (there may be a better way to do this but I couldn't figure it out):
# Get the vertices of the polygon to create it by hand
poly1 = dmf['geometry']
g = [i for i in poly1]
x,y = g[0].exterior.coords.xy
x,y
(array('d', [-95.752332508564, -95.75193554162979, -95.75193151831627, -95.75232848525047, -95.752332508564]),
array('d', [29.760606530637265, 29.760607694859385, 29.76044470363038, 29.76044237518235, 29.760606530637265]))
# Create the polygon by hand using the corresponding vertices
coords = [(-95.752332508564, 29.760606530637265),
(-95.75193554162979, 29.760607694859385),
(-95.75193151831627, 29.7604447036303),
(-95.75232848525047, 29.76044237518235),
(-95.752332508564, 29.760606530637265)]
poly = Polygon(coords)
# Create point by hand (just copy the point from 1) above
p1 = Point(-95.75207, 29.76047)
# Create the GeoPandas data frames from the point and polygon
ex = gpd.GeoDataFrame()
ex['geometry']=[poly]
ex = ex.set_geometry('geometry')
ex_p = gpd.GeoDataFrame()
ex_p['geometry'] = [p1]
ex_p = ex_p.set_geometry('geometry')
# Plot and print
fig, ax = plt.subplots(figsize=(5,5))
ax.set_xlim(minx, maxx)
ax.set_ylim(miny, maxy)
ex_p.plot(ax=ax, c='Red')
ex.plot(ax = ax, alpha=0.5)
plt.savefig('by_hand.png')
In this case, the outcome also shows the point in the polygon. However,
ex_p.within(ex)
0 True
dtype: bool
which recognizes that the point is in the polygon. All suggestions on what to do are appreciated! Thanks.
I don't know if this is the most efficient way to do it, but I was able to do what I needed within Python using GeoPandas.
Instead of using the point.within(polygon) approach, I did a spatial join (geopandas.sjoin(df_1, df_2, how = 'inner', op = 'contains')). This results in a new data frame that contains the points that are within polygons and excludes the ones that are not. More information on how to do this can be found here.
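As a sketch of that join for the frames in the question (assuming dmf holds the polygons, dmR the points, and both share a CRS; newer geopandas spells the op keyword predicate):
import geopandas as gpd

# Keeps each polygon/point pair where the polygon contains the point.
matched = gpd.sjoin(dmf, dmR, how='inner', op='contains')
print(matched)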
I assume something is fishy about your coordinate reference system (CRS). I cannot tell about dmR as it is not provided, but ex_p is a naive geometry, as you generated it from points without specifying the CRS. You can check the CRS using:
dmR.crs
Let's assume it's in EPSG:4326; then it will return:
<Geographic 2D CRS: EPSG:4326>
Name: WGS 84
Axis Info [ellipsoidal]:
- Lat[north]: Geodetic latitude (degree)
- Lon[east]: Geodetic longitude (degree)
Area of Use:
- name: World
- bounds: (-180.0, -90.0, 180.0, 90.0)
Datum: World Geodetic System 1984
- Ellipsoid: WGS 84
- Prime Meridian: Greenwich
In this case you would need to set a CRS for ex_p first using:
ex_p = ex_p.set_crs(epsg=4326)
If you want to inherit the CRS of dmR dynamically you can also use:
ex_p = ex_p.set_crs(dmR.crs)
After you set a crs, you can re-project from one crs to another using:
ex_p = ex_p.to_crs(epsg=3395)
More on that topic:
https://geopandas.org/projections.html

Python: using polygons to create a mask on a given 2d grid

I have some polygons (Canadian provinces), read in with GeoPandas, and want to use these to create a mask to apply to gridded data on a 2-d latitude-longitude grid (read from a netcdf file using iris). An end goal would be to only have data for a given province remaining, with the rest of the data masked out. So the mask would be 1's for grid boxes within the province, and 0's or NaN's for grid boxes outside the province.
The polygons can be obtained from the shapefile here:
https://www.dropbox.com/s/o5elu01fetwnobx/CAN_adm1.shp?dl=0
The netcdf file I am using can be downloaded here:
https://www.dropbox.com/s/kxb2v2rq17m7lp7/t2m.20090815.nc?dl=0
I imagine there are two approaches here but I am struggling with both:
1) Use the polygon to create a mask on the latitude-longitude grid so that this can be applied to lots of datafiles outside of python (preferred)
2) Use the polygon to mask the data that have been read in and extract only the data inside the province of interest, to work with interactively.
My code so far:
import numpy as np
import iris
import geopandas as gpd
#read the shapefile and extract the polygon for a single province
#(province names stored as variable 'NAME_1')
Canada=gpd.read_file('CAN_adm1.shp')
BritishColumbia=Canada[Canada['NAME_1'] == 'British Columbia']
#get the latitude-longitude grid from netcdf file
cubelist=iris.load('t2m.20090815.nc')
cube=cubelist[0]
lats=cube.coord('latitude').points
lons=cube.coord('longitude').points
#create 2d grid from lats and lons (may not be necessary?)
[lon2d,lat2d]=np.meshgrid(lons,lats)
#HELP!
Thanks very much for any help or advice.
UPDATE: Following the great solution from @DPeterK below, my original data can now be masked as intended.
It looks like you have started well! Geometries loaded from shapefiles expose various geospatial comparison methods, and in this case you need the contains method. You can use this to test each point in your cube's horizontal grid for being contained within your British Columbia geometry. (Note that this is not a fast operation!) You can use this comparison to build up a 2D mask array, which could be applied to your cube's data or used in other ways.
I've written a Python function to do the above – it takes a cube and a geometry and produces a mask for the (specified) horizontal coordinates of the cube, and applies the mask to the cube's data. The function is below:
def geom_to_masked_cube(cube, geometry, x_coord, y_coord,
                        mask_excludes=False):
    """
    Convert a shapefile geometry into a mask for a cube's data.

    Args:

    * cube:
        The cube to mask.
    * geometry:
        A geometry from a shapefile to define a mask.
    * x_coord: (str or coord)
        A reference to a coord describing the cube's x-axis.
    * y_coord: (str or coord)
        A reference to a coord describing the cube's y-axis.

    Kwargs:

    * mask_excludes: (bool, default False)
        If False, the mask will exclude the area of the geometry from the
        cube's data. If True, the mask will include *only* the area of the
        geometry in the cube's data.

    .. note::
        This function does *not* preserve lazy cube data.

    """
    # Get horizontal coords for masking purposes.
    lats = cube.coord(y_coord).points
    lons = cube.coord(x_coord).points
    lon2d, lat2d = np.meshgrid(lons, lats)

    # Reshape to 1D for easier iteration.
    lon2 = lon2d.reshape(-1)
    lat2 = lat2d.reshape(-1)

    mask = []
    # Iterate through all horizontal points in cube, and
    # check for containment within the specified geometry.
    for lat, lon in zip(lat2, lon2):
        this_point = gpd.geoseries.Point(lon, lat)
        res = geometry.contains(this_point)
        mask.append(res.values[0])

    mask = np.array(mask).reshape(lon2d.shape)
    if mask_excludes:
        # Invert the mask if we want to include the geometry's area.
        mask = ~mask

    # Make sure the mask is the same shape as the cube.
    dim_map = (cube.coord_dims(y_coord)[0],
               cube.coord_dims(x_coord)[0])
    cube_mask = iris.util.broadcast_to_shape(mask, cube.shape, dim_map)

    # Apply the mask to the cube's data.
    data = cube.data
    masked_data = np.ma.masked_array(data, cube_mask)
    cube.data = masked_data
    return cube
If you just need the 2D mask you could return that before the above function applies it to the cube.
To use this function in your original code, add the following at the end of your code:
geometry = BritishColumbia.geometry
masked_cube = geom_to_masked_cube(cube, geometry,
                                  'longitude', 'latitude',
                                  mask_excludes=True)
If this doesn't mask anything, it might well mean that your cube and geometry are defined on different extents. That is, if your cube's longitude coordinate runs from 0° to 360° but the geometry's longitude values run from -180° to 180°, then the containment test will never return True. You can fix this by changing the extents of your cube with the following:
cube = cube.intersection(longitude=(-180, 180))
I found an alternative solution to the excellent one posted by @DPeterK above, which yields the same result. It uses matplotlib.path to test if points are contained within the exterior coordinates described by the geometries loaded from a shape file. I am posting this because this method is ~10 times faster than that given by @DPeterK (2:23 minutes vs 25:56 minutes). I'm not sure what is preferable: an elegant solution, or a speedy, brute force solution. Perhaps one can have both?!
One complication with this method is that some geometries are MultiPolygons - i.e. the shape consists of several smaller polygons (in this case, the province of British Columbia includes islands off of the west coast, which can't be described by the coordinates of the mainland British Columbia Polygon). The MultiPolygon has no exterior coordinates but the individual polygons do, so these each need to be treated individually. I found that the neatest solution to this was to use a function copied from GitHub (https://gist.github.com/mhweber/cf36bb4e09df9deee5eb54dc6be74d26), which 'explodes' MultiPolygons into a list of individual polygons that can then be treated separately.
The working code is outlined below, with my documentation. Apologies that it is not the most elegant code - I am relatively new to Python and I'm sure there are lots of unnecessary loops/neater ways to do things!
import numpy as np
import iris
import geopandas as gpd
from shapely.geometry import Point
import matplotlib.path as mpltPath
from shapely.geometry.polygon import Polygon
from shapely.geometry.multipolygon import MultiPolygon
#-----
#FIRST, read in the target data and latitude-longitude grid from netcdf file
cubelist=iris.load('t2m.20090815.minus180_180.nc')
cube=cubelist[0]
lats=cube.coord('latitude').points
lons=cube.coord('longitude').points
#create 2d grid from lats and lons
[lon2d,lat2d]=np.meshgrid(lons,lats)
#create a list of coordinates of all points within grid
points = []
for latit in range(0, 241):
    for lonit in range(0, 480):
        point = (lon2d[latit, lonit], lat2d[latit, lonit])
        points.append(point)
#turn into np array for later
points = np.array(points)
#get the cube data - useful for later
fld = np.squeeze(cube.data)
#create a mask array of zeros, same shape as fld, to be modified by
#the code below
mask = np.zeros_like(fld)
#NOW, read the shapefile and extract the polygon for a single province
#(province names stored as variable 'NAME_1')
Canada=gpd.read_file('/Users/ianashpole/Computing/getting_province_outlines/CAN_adm_shp/CAN_adm1.shp')
BritishColumbia=Canada[Canada['NAME_1'] == 'British Columbia']
#BritishColumbia.geometry.type reveals this to be a 'MultiPolygon'
#i.e. several (in this case, thousands...) of individual polygons.
#I ultimately want to get the exterior coordinates of the BritishColumbia
#polygon, but a MultiPolygon is a list of polygons and therefore has no
#exterior coordinates. There are probably many ways to progress from here,
#but the method I have stumbled upon is to 'explode' the multipolygon into
#it's individual polygons and treat each individually. The function below
#to 'explode' the MultiPolygon was found here:
#https://gist.github.com/mhweber/cf36bb4e09df9deee5eb54dc6be74d26
#---define function to explode MultiPolygons
def explode_polygon(indata):
    indf = indata
    outdf = gpd.GeoDataFrame(columns=indf.columns)
    for idx, row in indf.iterrows():
        if type(row.geometry) == Polygon:
            #note: now redundant, but function originally worked on
            #a shapefile which could have combinations of individual polygons
            #and MultiPolygons
            outdf = outdf.append(row, ignore_index=True)
        if type(row.geometry) == MultiPolygon:
            multdf = gpd.GeoDataFrame(columns=indf.columns)
            recs = len(row.geometry)
            multdf = multdf.append([row]*recs, ignore_index=True)
            for geom in range(recs):
                multdf.loc[geom, 'geometry'] = row.geometry[geom]
            outdf = outdf.append(multdf, ignore_index=True)
    return outdf
#-------
#Explode the BritishColumbia MultiPolygon into its constituents
EBritishColumbia=explode_polygon(BritishColumbia)
#Loop over each individual polygon and get external coordinates
for index, row in EBritishColumbia.iterrows():
    print('working on polygon', index)
    mypolygon = []
    for pt in list(row['geometry'].exterior.coords):
        print(index, ', ', pt)
        mypolygon.append(pt)
    #See if any of the original grid points read from the netcdf file earlier
    #lie within the exterior coordinates of this polygon
    #path.contains_points returns a boolean array (true/false), in the
    #shape of 'points'
    path = mpltPath.Path(mypolygon)
    inside = path.contains_points(points)
    #find the results in the array that were inside the polygon ('True')
    #and set them to missing. First, must reshape the result of the search
    #('points') so that it matches the mask & original data
    #reshape the result to the main grid array
    inside = np.array(inside).reshape(lon2d.shape)
    i = np.where(inside == True)
    mask[i] = 1

print('finished checking for points inside all polygons')
#mask now contains 0's for points that are not within British Columbia, and
#1's for points that are. FINALLY, use this to mask the original data
#(stored as 'fld')
i=np.where(mask == 0)
fld[i]=np.nan
#Done.

Finding Intersections Region Based Trajectories vs. Line Trajectories

I have two trajectories (i.e. two lists of points) and I am trying to find the intersection points between them. However, if I represent these trajectories as thin lines, I might miss real-world intersections where the two tracks only just miss each other.
What I would like to do is to represent the line as a polygon with certain width around the points and then find where the two polygons intersect with each other.
I am using the python spatial library but I was wondering if anyone has done this before. Here is a picture of the line segments which don't intersect because they just miss each other. Below is the sample data code that represents the trajectory of two objects.
object_trajectory=np.array([[-3370.00427248, 3701.46800775],
[-3363.69164715, 3702.21408203],
[-3356.31277271, 3703.06477984],
[-3347.25951787, 3704.10740164],
[-3336.739511 , 3705.3958357 ],
[-3326.29355823, 3706.78035903],
[-3313.4987339 , 3708.2076586 ],
[-3299.53433345, 3709.72507366],
[-3283.15486406, 3711.47077376],
[-3269.23487255, 3713.05635557]])
target_trajectory=np.array([[-3384.99966703, 3696.41922372],
[-3382.43687562, 3696.6739521 ],
[-3378.22995178, 3697.08802862],
[-3371.98983789, 3697.71490469],
[-3363.5900481 , 3698.62666805],
[-3354.28520354, 3699.67613798],
[-3342.18581931, 3701.04853915],
[-3328.51519511, 3702.57528111],
[-3312.09691577, 3704.41961271],
[-3297.85543763, 3706.00878621]])
plt.plot(object_trajectory[:,0], object_trajectory[:,1], color='b')
plt.plot(target_trajectory[:,0], target_trajectory[:,1], color='r')
Let's say you have two lines defined by numpy arrays x1, y1, x2, and y2.
import numpy as np
You can create an array distances[i, j] containing the distances between the ith point in the first line and the jth point in the second line.
distances = ((x1[:, None] - x2[None, :])**2 + (y1[:, None] - y2[None, :])**2)**0.5
Then you can find indices where distances is less than some threshold you want to define for intersection. If you're thinking of the lines as having some thickness, the threshold would be half of that thickness.
threshold = 0.1
intersections = np.argwhere(distances < threshold)
intersections is now an N-by-2 array containing all point pairs that are considered to be "intersecting" ([i, 0] is the index from the first line, and [i, 1] is the index from the second line). If you want to get the set of all the indices from each line that are intersecting, you can use something like
first_intersection_indices = np.asarray(sorted(set(intersections[:, 0])))
second_intersection_indices = np.asarray(sorted(set(intersections[:, 1])))
From here, you can also determine how many intersections there are by taking only the center value for any consecutive values in each list.
L1 = []
current_intersection = []
for i in range(first_intersection_indices.shape[0]):
    if len(current_intersection) == 0:
        current_intersection.append(first_intersection_indices[i])
    elif first_intersection_indices[i] == current_intersection[-1] + 1:
        # still part of the same consecutive run of indices
        current_intersection.append(first_intersection_indices[i])
    else:
        L1.append(int(np.median(current_intersection)))
        current_intersection = [first_intersection_indices[i]]
# flush the final run of indices
if current_intersection:
    L1.append(int(np.median(current_intersection)))
print(len(L1))
You can use these to print the coordinates of each intersection.
for i in L1:
    print(x1[i], y1[i])
It turns out that the shapely package already has a ton of convenience functions that get me very far with this.
from shapely.geometry import Point, LineString, MultiPoint

# I assume that self.line is of type LineString (i.e. a line trajectory).
# line.buffer essentially generates a nice interpolated bounding polygon
# around the trajectory.
region_polygon = self.line.buffer(self.lane_width)

# Now we can identify all the points in the other trajectory that intersect
# the region_polygon we just generated. You can also use .intersection if
# you want to simply generate two polygon trajectories and find the
# intersecting polygon as well.
is_in_region = [region_polygon.intersects(point) for point in points]
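A rough end-to-end sketch with the sample trajectories from the question (the buffer half-width of 2.0 is an arbitrary choice for illustration):
import numpy as np
from shapely.geometry import Point, LineString

object_line = LineString(object_trajectory)   # arrays from the question
target_line = LineString(target_trajectory)

# Buffer one trajectory into a region polygon around the track.
region_polygon = object_line.buffer(2.0)

# Which target points fall inside the buffered region?
hits = [region_polygon.intersects(Point(p[0], p[1])) for p in target_trajectory]
print([i for i, h in enumerate(hits) if h])

# Or buffer both trajectories and intersect the two regions directly.
overlap = region_polygon.intersection(target_line.buffer(2.0))
print(overlap.area)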

Interpolation over an irregular grid

So, I have three numpy arrays which store latitude, longitude, and some property value on a grid -- that is, I have LAT(y,x), LON(y,x), and, say temperature T(y,x), for some limits of x and y. The grid isn't necessarily regular -- in fact, it's tripolar.
I then want to interpolate these property (temperature) values onto a bunch of different lat/lon points (stored as lat1(t), lon1(t), for about 10,000 t...) which do not fall on the actual grid points. I've tried matplotlib.mlab.griddata, but that takes far too long (it's not really designed for what I'm doing, after all). I've also tried scipy.interpolate.interp2d, but I get a MemoryError (my grids are about 400x400).
Is there any sort of slick, preferably fast way of doing this? I can't help but think the answer is something obvious... Thanks!!
Try the combination of inverse-distance weighting and scipy.spatial.KDTree described in SO inverse-distance-weighted-idw-interpolation-with-python. Kd-trees work nicely in 2d, 3d, ...; inverse-distance weighting is smooth and local, and the k = number of nearest neighbours can be varied to trade off speed against accuracy.
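A minimal sketch of that combination, reusing the question's LAT, LON, T arrays and target points lon1, lat1 (treating degrees as planar distances, which is only a rough approximation):
import numpy as np
from scipy.spatial import cKDTree

# Build the tree over the flattened source grid.
tree = cKDTree(np.column_stack([LON.ravel(), LAT.ravel()]))
# k nearest source points for every target point.
dist, idx = tree.query(np.column_stack([lon1, lat1]), k=8)

# Inverse-distance-squared weights; the epsilon avoids division by
# zero when a target point coincides with a grid point.
w = 1.0 / (dist + 1e-12)**2
T_interp = (w * T.ravel()[idx]).sum(axis=1) / w.sum(axis=1)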
There is a nice inverse distance example by Roger Veciana i Rovira along with some code using GDAL to write to geotiff if you're into that.
This is of course for a regular grid, but assuming you project the data first to a pixel grid with pyproj or something, all the while being careful what projection is used for your data.
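For that projection step, a pyproj sketch along these lines could be used (the target CRS here is an arbitrary example):
from pyproj import Transformer

# Project lon/lat (EPSG:4326) to metres (EPSG:3857) before building
# a regular pixel grid to interpolate onto.
transformer = Transformer.from_crs("EPSG:4326", "EPSG:3857", always_xy=True)
x, y = transformer.transform(LON.ravel(), LAT.ravel())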
A copy of his algorithm and example script:
from math import pow, sqrt
import numpy as np
import matplotlib.pyplot as plt

def pointValue(x, y, power, smoothing, xv, yv, values):
    nominator = 0
    denominator = 0
    for i in range(0, len(values)):
        dist = sqrt((x-xv[i])*(x-xv[i]) + (y-yv[i])*(y-yv[i]) + smoothing*smoothing)
        #If the point is really close to one of the data points, return the
        #data point value to avoid singularities
        if dist < 0.0000000001:
            return values[i]
        nominator = nominator + (values[i] / pow(dist, power))
        denominator = denominator + (1 / pow(dist, power))
    #Return NODATA if the denominator is zero
    if denominator > 0:
        value = nominator / denominator
    else:
        value = -9999
    return value

def invDist(xv, yv, values, xsize=100, ysize=100, power=2, smoothing=0):
    valuesGrid = np.zeros((ysize, xsize))
    for x in range(0, xsize):
        for y in range(0, ysize):
            valuesGrid[y][x] = pointValue(x, y, power, smoothing, xv, yv, values)
    return valuesGrid

if __name__ == "__main__":
    power = 1
    smoothing = 20

    #Creating some data, with each coordinate and the values stored in separate lists
    xv = [10,60,40,70,10,50,20,70,30,60]
    yv = [10,20,30,30,40,50,60,70,80,90]
    values = [1,2,2,3,4,6,7,7,8,10]

    #Creating the output grid (100x100, in the example)
    ti = np.linspace(0, 100, 100)
    XI, YI = np.meshgrid(ti, ti)

    #Creating the interpolation function and populating the output matrix value
    ZI = invDist(xv, yv, values, 100, 100, power, smoothing)

    # Plotting the result
    n = plt.Normalize(0.0, 100.0)
    plt.subplot(1, 1, 1)
    plt.pcolor(XI, YI, ZI)
    plt.scatter(xv, yv, 100, values)
    plt.title('Inv dist interpolation - power: ' + str(power) + ' smoothing: ' + str(smoothing))
    plt.xlim(0, 100)
    plt.ylim(0, 100)
    plt.colorbar()
    plt.show()
There's a bunch of options here; which one is best will depend on your data... However, I don't know of an out-of-the-box solution for you.
You say your input data comes from tripolar data. There are three main cases for how this data could be structured:
1. Sampled from a 3d grid in tripolar space, projected back to 2d LAT, LON data.
2. Sampled from a 2d grid in tripolar space, projected into 2d LAT, LON data.
3. Unstructured data in tripolar space, projected into 2d LAT, LON data.
The easiest of these is 2. Instead of interpolating in LAT LON space, "just" transform your point back into the source space and interpolate there.
Another option that works for 1 and 2 is to search for the cells that map from tripolar space to cover your sample point. (You can use a BSP or grid-type structure to speed up this search.) Pick one of the cells, and interpolate inside it.
Finally, there's a heap of unstructured interpolation options... but they tend to be slow.
A personal favourite of mine is to use a linear interpolation of the nearest N points; finding those N points can again be done with gridding or a BSP. Another good option is to Delaunay-triangulate the unstructured points and interpolate on the resulting triangular mesh (see the sketch at the end of this answer).
Personally, if my mesh were case 1, I'd use an unstructured strategy, as I'd be worried about having to handle searching through cells with overlapping projections. Choosing the "right" cell would be difficult.
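A hedged sketch of that Delaunay route using scipy (the names LAT, LON, T, lon1, lat1 are taken from the question; LinearNDInterpolator triangulates the scattered points internally):
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Triangulate the scattered source points and interpolate linearly
# on the resulting mesh.
interp = LinearNDInterpolator(
    np.column_stack([LON.ravel(), LAT.ravel()]), T.ravel())
T_interp = interp(lon1, lat1)   # NaN outside the convex hull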
I suggest taking a look at GRASS (an open source GIS package) interpolation features (http://grass.ibiblio.org/gdp/html_grass62/v.surf.bspline.html). It's not in Python, but you can reimplement it or interface with C code.
Am I right in thinking your data grids look something like this (red is the old data, blue is the new interpolated data)?
(Image: http://www.geekops.co.uk/photos/0000-00-02%20%28Forum%20images%29/DataSeparation.png)
This might be a slightly brute-force-ish approach, but what about rendering your existing data as a bitmap? (OpenGL will do simple interpolation of colours for you with the right options configured, and you could render the data as triangles, which should be fairly fast.) You could then sample pixels at the locations of the new points.
Alternatively, you could sort your first set of points spatially and then find the closest old points surrounding your new point and interpolate based on the distances to those points.
There is a FORTRAN library called BIVAR, which is very suitable for this problem. With a few modifications you can make it usable in python using f2py.
From the description:
BIVAR is a FORTRAN90 library which interpolates scattered bivariate data, by Hiroshi Akima.
BIVAR accepts a set of (X,Y) data points scattered in 2D, with associated Z data values, and is able to construct a smooth interpolation function Z(X,Y), which agrees with the given data, and can be evaluated at other points in the plane.
