Path 'contains_points' yields incorrect results with Bezier curve - python

I am trying to select a region of data based on a matplotlib Path object, but when the path contains a Bezier curve (not just straight lines), the selected region doesn't completely fill in the curve. It looks like it's trying, but the far side of the curve gets chopped off.
For example, the following code defines a fairly simple closed path with one straight line and one cubic curve. When I look at the True/False result from the contains_points method, it does not seem to match either the curve itself or the raw vertices.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.path import Path
from matplotlib.patches import PathPatch
# Make the Path
verts = [(1.0, 1.5), (-2.0, 0.25), (-1.0, 0.0), (1.0, 0.5), (1.0, 1.5)]
codes = [Path.MOVETO, Path.CURVE4, Path.CURVE4, Path.CURVE4, Path.CLOSEPOLY]
path1 = Path(verts, codes)
# Make a field with points to select
nx, ny = 101, 51
x = np.linspace(-2, 2, nx)
y = np.linspace(0, 2, ny)
yy, xx = np.meshgrid(y, x)
pts = np.column_stack((xx.ravel(), yy.ravel()))
# Construct a True/False array of contained points
tf = path1.contains_points(pts).reshape(nx, ny)
# Make a PathPatch for display
patch1 = PathPatch(path1, facecolor='c', edgecolor='b', lw=2, alpha=0.5)
# Plot the true/false array, the patch, and the vertices
fig, ax = plt.subplots()
ax.imshow(tf.T, origin='lower', extent=(x[0], x[-1], y[0], y[-1]))
ax.add_patch(patch1)
ax.plot(*zip(*verts), 'ro-')
plt.show()
This gives me this plot:
It looks like there is some sort of approximation going on - is this just a fundamental limitation of the calculation in matplotlib, or am I doing something wrong?
I can calculate the points inside the curve myself, but I was hoping to not reinvent this wheel if I don't have to.
It's worth noting that a simpler construction using quadratic curves does appear to work properly:
I am using matplotlib 2.0.0.

This has to do with the space in which the paths are evaluated, as explained in GitHub issue #6076. From a comment by mdboom there:
Path intersection is done by converting the curves to line segments
and then converting the intersection based on the line segments. This
conversion happens by "sampling" the curve at increments of 1.0. This
is generally the right thing to do when the paths are already scaled
in display space, because sampling the curve at a resolution finer
than a single pixel doesn't really help. However, when calculating the
intersection in data space as you've done here, we obviously need to
sample at a finer resolution.
This is discussing intersections, but contains_points is also affected. This enhancement is still open, so we'll have to see if it is addressed in the next milestone. In the meantime, there are a couple of options:
1) If you are going to be displaying a patch anyway, you can use the display transformation. In the example above, adding the following demonstrates the correct behavior (based on a comment by tacaswell on duplicate issue #8734, now closed):
# Work in transformed (pixel) coordinates
hit_patch = path1.transformed(ax.transData)
tf1 = hit_patch.contains_points(ax.transData.transform(pts)).reshape(nx, ny)
ax.imshow(tf1.T, origin='lower', extent=(x[0], x[-1], y[0], y[-1]))
2) If you aren't using a display and just want to calculate using a path, the best bet is to simply form the Bezier curve yourself and make a path out of line segments. Replacing the formation of path1 with the following calculation of path2 will produce the desired result.
from scipy.special import binom
def bernstein(n, i, x):
    coeff = binom(n, i)
    return coeff * (1-x)**(n-i) * x**i

def bezier(ctrlpts, nseg):
    x = np.linspace(0, 1, nseg)
    outpts = np.zeros((nseg, 2))
    n = len(ctrlpts)-1
    for i, point in enumerate(ctrlpts):
        outpts[:,0] += bernstein(n, i, x) * point[0]
        outpts[:,1] += bernstein(n, i, x) * point[1]
    return outpts
verts1 = [(1.0, 1.5), (-2.0, 0.25), (-1.0, 0.0), (1.0, 0.5), (1.0, 1.5)]
nsegments = 31
verts2 = np.concatenate([bezier(verts1[:4], nsegments), np.array([verts1[4]])])
codes2 = [Path.MOVETO] + [Path.LINETO]*(nsegments-1) + [Path.CLOSEPOLY]
path2 = Path(verts2, codes2)
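As a quick sanity check (not part of the original snippet, and reusing pts, nx, ny, x and PathPatch from the question's script), the hit test against path2 should now fill the curved region:
tf2 = path2.contains_points(pts).reshape(nx, ny)
patch2 = PathPatch(path2, facecolor='c', edgecolor='b', lw=2, alpha=0.5)
fig, ax = plt.subplots()
ax.imshow(tf2.T, origin='lower', extent=(x[0], x[-1], y[0], y[-1]))
ax.add_patch(patch2)
plt.show()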
Either method yields something that looks like the following:

Related

Color gradient on one contour line

I'm very new to Python; I usually do my animations with AfterEffects, but it requires a lot of computation time for quite simple things.
• So I would like to create this kind of animation (or at least a still image):
AfterEffects graph (forget the shadows, I don't really need them at this point)
Those are circles merging together as they collide, one of them being highlighted (the orange one).
• For now I have only managed to do the "merging thing" by computing a "distance map" and plotting a contour line:
Python + Matplotlib graph, produced with the following code:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
part_size = 0.0002
nb_part = 200
mesh_res = 500 # resolution of grid
x = np.linspace(0, 1.9, mesh_res)
y = np.linspace(0, 1, mesh_res)
Xgrid, Ygrid = np.meshgrid(x, y)
centers = np.random.uniform(0,1,(nb_part,2)) # array filled with disk center positions
sizes = part_size*np.ones(nb_part) # array filled with disk sizes
#sizes = np.random.uniform(0,part_size,nb_part)
dist_map = np.zeros((mesh_res,mesh_res),float) # array to plot the contour of
for i in range(nb_part):
    dist_map += sizes[i] / ((Xgrid - centers[i][0]) ** 2 + (Ygrid - centers[i][1]) ** 2) # value of (almost) 1 on a circle, so we want the contour of this array
fig, ax = plt.subplots()
contour_opts = {'levels': np.linspace(0.9, 1., 1), 'colors': 'red', 'linewidths': 4} # to plot only the one-ish values of contour
ax.contour(x, y, dist_map, **contour_opts)
def update(frame_number):
    ax.collections = [] # reset the graph
    centers[:] += 0.01*np.sin(2*np.pi*frame_number/100+np.stack((np.arange(nb_part),np.arange(nb_part)),axis=-1)) # just to move circles "randomly"
    dist_map = np.zeros((mesh_res, mesh_res), float) # updating array of distances
    for i in range(nb_part):
        dist_map += sizes[i] / ((Xgrid - centers[i][0]) ** 2 + (Ygrid - centers[i][1]) ** 2)
    ax.contour(x, y, dist_map, **contour_opts) # calculate the new contour
ani = FuncAnimation(fig, update, interval=20)
plt.show()
The result is not that bad, but:
I can't figure out how to highlight just one circle while keeping the merging effect (ideally the colors should merge as well, and I would like to keep the image transparency when exporting).
It still requires some time to compute each frame (it is way faster than AfterEffects, though), so I guess I'm still far from using Python, NumPy, and matplotlib optimally. Maybe there are even libraries able to do this kind of thing? If there is a better strategy to implement it, I'll take it.
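In case it points in the right direction, here is a rough, untested sketch of how the per-circle loop could be vectorized with NumPy broadcasting (chunked, because broadcasting all circles at once would allocate nb_part * mesh_res**2 temporaries, roughly 400 MB here):
def distance_map(centers, sizes, Xgrid, Ygrid, chunk=50):
    # same quantity as the loop above, computed chunk-by-chunk over circles
    dist_map = np.zeros_like(Xgrid)
    for start in range(0, len(centers), chunk):
        cx = centers[start:start+chunk, 0, None, None]
        cy = centers[start:start+chunk, 1, None, None]
        s = sizes[start:start+chunk, None, None]
        dist_map += np.sum(s / ((Xgrid - cx)**2 + (Ygrid - cy)**2), axis=0)
    return dist_map
Most of the frame time may still be spent in ax.contour itself, so this is probably only part of the answer.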

interpolate.griddata shifts data northwards, is it a bug?

I observe unexpected results from scipy.interpolate.griddata. I am trying to visualize a set of irregularly spaced points using matplotlib.basemap and scipy.interpolate.griddata.
The data is given as three lists: latitudes, longitudes and values. To get them on the map I interpolate the data onto a regular grid and visualize it using Basemap's imshow function.
I observe that the interpolated data is shifted northwards from true positions.
Here is an example. Here I want to highlight a cell formed by two meridians and two parallels. I expect to get something like this:
However what I get is something like this:
You can see that the red rectangle is visibly shifted northwards.
I have tried to vary the grid resolution and the number of points, however this does not seem to have any effect on this observed shift.
Here is an IPython notebook that illustrates the issue.
Also below is the complete code:
import numpy as np
from numpy import random
from scipy import interpolate
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
# defining the region of interest
r = {'lon':[83.0, 95.5], 'lat':[48.5,55.5]}
# initializing Basemap
m = Basemap(projection='merc',
            llcrnrlon=r['lon'][0],
            llcrnrlat=r['lat'][0],
            urcrnrlon=r['lon'][1],
            urcrnrlat=r['lat'][1],
            lon_0=r['lon'][0],
            ellps='WGS84',
            fix_aspect=True,
            resolution='h')
# defining the highlighted block
block = {'lon':[89,91],'lat':[50.5,52.5]}
# generating the data
npixels = 100000
lat_range = r['lat'][1] - r['lat'][0]
lats = lat_range * random.random(npixels) + r['lat'][0]
lon_range = r['lon'][1] - r['lon'][0]
lons = lon_range * random.random(npixels) + r['lon'][0]
values = np.zeros(npixels)
for p in range(npixels):
    if block['lat'][0] < lats[p] < block['lat'][1] \
            and block['lon'][0] < lons[p] < block['lon'][1]:
        values[p] = 1.0
# plotting the original data without interpolation
plt.figure(figsize=(5, 5))
m.drawparallels(np.arange(r['lat'][0], r['lat'][1] + 0.25, 2.0),
                labels=[True,False,True,False])
m.drawmeridians(np.arange(r['lon'][0], r['lon'][1] + 0.25, 2.0),
                labels=[True,True,False,True])
m.scatter(lons,lats,c=values,latlon=True,edgecolors='none')
# interpolating on the regular grid
nx = ny = 500
mapx = np.linspace(r['lon'][0],r['lon'][1],nx)
mapy = np.linspace(r['lat'][0],r['lat'][1],ny)
mapgridx,mapgridy = np.meshgrid(mapx,mapy)
mapdata = interpolate.griddata(list(zip(lons,lats)), values,
                               (mapgridx,mapgridy), method='nearest')
# plotting the interpolated data
plt.figure(figsize=(5, 5))
m.drawparallels(np.arange(r['lat'][0], r['lat'][1] + 0.25, 2.0),
                labels=[True,False,True,False])
m.drawmeridians(np.arange(r['lon'][0], r['lon'][1] + 0.25, 2.0),
                labels=[True,True,False,True])
m.imshow(mapdata)
I am seeing this with SciPy 0.17.0.
Pauli Virtanen on the SciPy bug tracker answered the question.
The issue goes away if one replaces basemap.imshow() with matplotlib.pyplot.pcolormesh().
Replacing above
m.imshow(mapdata)
with
meshx,meshy = m(mapx,mapy)
plt.pcolormesh(meshx,meshy,mapdata)
produces a correctly aligned image.
It is not clear what I am doing wrong with basemap.imshow, but that is probably another question.
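My guess (not from the bug tracker, so take it with a grain of salt) is that basemap.imshow stretches the array uniformly across the map in projection coordinates, while the grid above is regular in latitude; under the Mercator projection those spacings differ, which would look exactly like a northward shift. If you want to keep imshow, here is a sketch of interpolating onto a grid that is regular in projection coordinates instead:
# interpolate onto a grid regular in projection (map) coordinates
px, py = m(lons, lats)   # data positions in projection coordinates
gridx = np.linspace(m.xmin, m.xmax, nx)
gridy = np.linspace(m.ymin, m.ymax, ny)
gridxx, gridyy = np.meshgrid(gridx, gridy)
mapdata2 = interpolate.griddata(list(zip(px, py)), values,
                                (gridxx, gridyy), method='nearest')
plt.figure(figsize=(5, 5))
m.imshow(mapdata2)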

Lorentzian scipy.optimize.leastsq fit to data fails

Since I took a lecture on Python, I have wanted to use it to fit my data. Although I have been trying for a while now, I still have no idea why this is not working.
What I would like to do
Take one data-file after another from a subfolder (here called: 'Test'), transform the data a little bit and fit it with a Lorentzian function.
Problem description
When I run the code posted below, it does not fit anything and just returns my initial parameters after 4 function calls. I tried scaling the data and playing around with ftol and maxfev after checking the Python documentation over and over again, but nothing improved. I also tried converting the lists to numpy arrays explicitly, as well as the solution given to the question scipy.optimize.leastsq returns best guess parameters not new best fit, x = x.astype(np.float64). No improvement. Strangely enough, for a few selected data files this same code worked at some point, but for the majority it never did. The data can definitely be fitted, since a Levenberg-Marquardt fitting routine gives reasonably good results in Origin.
Can someone tell me what is going wrong or point out alternatives...?
import numpy,math,scipy,pylab
from scipy.optimize import leastsq
import glob,os

for files in glob.glob("*.txt"):
    x=[]
    y=[]
    z=[]
    f = open(files, 'r')
    raw=f.readlines()
    f.close()
    del raw[0:8]    #delete Header
    for columns in ( raw2.strip().split() for raw2 in raw ):    #data columns
        x.append(float(columns[0]))
        y.append(float(columns[1]))
        z.append(10**(float(columns[1])*0.1))    #transform data for the fit
    def lorentz(p,x):
        return (1/(1+(x/p[0] - 1)**4*p[1]**2))*p[2]
    def errorfunc(p,x,z):
        return lorentz(p,x)-z
    p0=[3.,10000.,0.001]
    Params,cov_x,infodict,mesg,ier = leastsq(errorfunc,p0,args=(x,z),full_output=True)
    print Params
    print ier
Without seeing your data it is hard to tell what is going wrong. I generated some random noise and used your code to perform a fit to it. Everything works okay. This algorithm does not allow for parameter boundaries so you may run into problems if your p0 is close to zero. I did the following:
import numpy as np
from scipy.optimize import leastsq
import matplotlib.pyplot as plt
def lorentz(p,x):
    return p[2] / (1.0 + (x / p[0] - 1.0)**4 * p[1]**2)

def errorfunc(p,x,z):
    return lorentz(p,x)-z
p = np.array([0.5, 0.25, 1.0], dtype=np.double)
x = np.linspace(-1.5, 2.5, num=30, endpoint=True)
noise = np.random.randn(30) * 0.05
z = lorentz(p,x)
noisyz = z + noise
p0 = np.array([-2.0, -4.0, 6.8], dtype=np.double) #Initial guess
solp, ier = leastsq(errorfunc,
                    p0,
                    args=(x,noisyz),
                    Dfun=None,
                    full_output=False,
                    ftol=1e-9,
                    xtol=1e-9,
                    maxfev=100000,
                    epsfcn=1e-10,
                    factor=0.1)
plt.plot(x, z, 'k-', linewidth=1.5, alpha=0.6, label='Theoretical')
plt.scatter(x, noisyz, c='r', marker='+', label='Measured Data')
plt.plot(x, lorentz(solp,x), 'g--', linewidth=2, label='leastsq fit')
plt.xlim((-1.5, 2.5))
plt.ylim((0.0, 1.2))
plt.grid(which='major')
plt.legend(loc=8)
plt.show()
This yielded a solution of:
solp = array([ 0.51779002, 0.26727697, 1.02946179])
Which is close to the theoretical value:
np.array([0.5, 0.25, 1.0])
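Not part of the original question, but if parameter boundaries are what you need (leastsq itself has none), one option is scipy.optimize.curve_fit, which accepts a bounds argument in SciPy >= 0.17. A minimal sketch with the same Lorentzian and the x and noisyz arrays from above:
from scipy.optimize import curve_fit

def lorentz_f(x, x0, g, a):
    return a / (1.0 + (x / x0 - 1.0)**4 * g**2)

# keep x0 and the amplitude positive; the middle parameter is unconstrained
popt, pcov = curve_fit(lorentz_f, x, noisyz, p0=[0.4, 0.3, 1.5],
                       bounds=([0.0, -np.inf, 0.0], [np.inf, np.inf, np.inf]))
print(popt)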

scipy.interpolate.UnivariateSpline not smoothing regardless of parameters

I'm having trouble getting scipy.interpolate.UnivariateSpline to use any smoothing when interpolating. Based on the function's page as well as some previous posts, I believe it should provide smoothing with the s parameter.
Here is my code:
# Imports
import scipy.interpolate
import pylab
# Set up and plot actual data
x = [0, 5024.2059124920379, 7933.1645067836089, 7990.4664106277542, 9879.9717114947653, 13738.60563208926, 15113.277958924193]
y = [0.0, 3072.5653360000988, 5477.2689107965398, 5851.6866463790966, 6056.3852496014106, 7895.2332350173638, 9154.2956175610598]
pylab.plot(x, y, "o", label="Actual")
# Plot estimates using splines with a range of degrees
for k in range(1, 4):
    mySpline = scipy.interpolate.UnivariateSpline(x=x, y=y, k=k, s=2)
    xi = range(0, 15100, 20)
    yi = mySpline(xi)
    pylab.plot(xi, yi, label="Predicted k=%d" % k)
# Show the plot
pylab.grid(True)
pylab.xticks(rotation=45)
pylab.legend( loc="lower right" )
pylab.show()
Here is the result:
I have tried this with a range of s values (0.01, 0.1, 1, 2, 5, 50), as well as explicit weights, set to either the same thing (1.0) or randomized. I still can't get any smoothing, and the number of knots is always the same as the number of data points. In particular, I'm looking for outliers like that 4th point (7990.4664106277542, 5851.6866463790966) to be smoothed over.
Is it because I don't have enough data? If so, is there a similar spline function or cluster technique I can apply to achieve smoothing with this few datapoints?
Short answer: you need to choose the value for s more carefully.
The documentation for UnivariateSpline states that:
Positive smoothing factor used to choose the number of knots. Number of
knots will be increased until the smoothing condition is satisfied:
sum((w[i]*(y[i]-s(x[i])))**2,axis=0) <= s
From this one can deduce that "reasonable" values for smoothing, if you don't pass in explicit weights, are around s = m * v where m is the number of data points and v the variance of the data. In this case, s_good ~ 5e7.
EDIT: sensible values for s depend of course also on the noise level in the data. The docs seem to recommend choosing s in the range (m - sqrt(2*m)) * std**2 <= s <= (m + sqrt(2*m)) * std**2 where std is the standard deviation associated with the "noise" you want to smooth over.
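For the data in the question that rule of thumb gives roughly s ~ 5e7, e.g. (a minimal sketch reusing the x and y lists from the question):
import numpy as np
import scipy.interpolate

s_good = len(x) * np.var(y)   # ~5e7 for this data
mySpline = scipy.interpolate.UnivariateSpline(x=x, y=y, k=3, s=s_good)
print(len(mySpline.get_knots()))   # fewer knots than data points once smoothing kicks in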
@Zhenya's answer of manually setting knots in between data points was too rough to deliver good results in noisy data without being selective about how this technique is applied. However, inspired by his/her suggestion, I have had success with Mean-Shift clustering from the scikit-learn package. It performs auto-determination of the cluster count and seems to do a fairly good smoothing job (very smooth, in fact).
# Imports
import numpy
import pylab
import scipy.interpolate
import sklearn.cluster
# Set up original data - note that it's monotonically increasing by X value!
data = {}
data['original'] = {}
data['original']['x'] = [0, 5024.2059124920379, 7933.1645067836089, 7990.4664106277542, 9879.9717114947653, 13738.60563208926, 15113.277958924193]
data['original']['y'] = [0.0, 3072.5653360000988, 5477.2689107965398, 5851.6866463790966, 6056.3852496014106, 7895.2332350173638, 9154.2956175610598]
# Cluster data, sort it and save
inputNumpy = numpy.array([[data['original']['x'][i], data['original']['y'][i]] for i in range(0, len(data['original']['x']))])
meanShift = sklearn.cluster.MeanShift()
meanShift.fit(inputNumpy)
clusteredData = [[pair[0], pair[1]] for pair in meanShift.cluster_centers_]
clusteredData.sort(lambda pair1, pair2: cmp(pair1[0],pair2[0]))
data['clustered'] = {}
data['clustered']['x'] = [pair[0] for pair in clusteredData]
data['clustered']['y'] = [pair[1] for pair in clusteredData]
# Build a spline using the clustered data and predict
mySpline = scipy.interpolate.UnivariateSpline(x=data['clustered']['x'], y=data['clustered']['y'], k=1)
xi = range(0, round(max(data['original']['x']), -3) + 3000, 20)
yi = mySpline(xi)
# Plot the datapoints
pylab.plot(data['clustered']['x'], data['clustered']['y'], "D", label="Datapoints (%s)" % 'clustered')
pylab.plot(xi, yi, label="Predicted (%s)" % 'clustered')
pylab.plot(data['original']['x'], data['original']['y'], "o", label="Datapoints (%s)" % 'original')
# Show the plot
pylab.grid(True)
pylab.xticks(rotation=45)
pylab.legend( loc="lower right" )
pylab.show()
While I'm not aware of any library which will do it for you off-hand, I'd try a bit more DIY approach: I'd start from making a spline with knots in between the raw data points, in both x and y. In your particular example, having a single knot in between the 4th and 5th points should do the trick, since it'd remove the huge derivative at around x=8000.
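For reference (not part of the answer above), SciPy exposes this idea directly as scipy.interpolate.LSQUnivariateSpline, which takes the interior knots explicitly; a minimal sketch using the x and y lists from the question, with a single interior knot at x = 9000 (my own choice of value, roughly between the 4th and 5th points):
import scipy.interpolate

# one interior knot between the 4th and 5th data points
lsq_spline = scipy.interpolate.LSQUnivariateSpline(x, y, t=[9000.0], k=3)
print(lsq_spline.get_knots())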
I had trouble getting BigChef's answer running; here is a variation that works on Python 3.6:
# Imports
import pylab
import scipy.interpolate
import sklearn.cluster
# Set up original data - note that it's monotonically increasing by X value!
data = {}
data['original'] = {}
data['original']['x'] = [0, 5024.2059124920379, 7933.1645067836089, 7990.4664106277542, 9879.9717114947653, 13738.60563208926, 15113.277958924193]
data['original']['y'] = [0.0, 3072.5653360000988, 5477.2689107965398, 5851.6866463790966, 6056.3852496014106, 7895.2332350173638, 9154.2956175610598]
# Cluster data, sort it and save
import numpy
inputNumpy = numpy.array([[data['original']['x'][i], data['original']['y'][i]] for i in range(0, len(data['original']['x']))])
meanShift = sklearn.cluster.MeanShift()
meanShift.fit(inputNumpy)
clusteredData = [[pair[0], pair[1]] for pair in meanShift.cluster_centers_]
clusteredData.sort(key=lambda li: li[0])
data['clustered'] = {}
data['clustered']['x'] = [pair[0] for pair in clusteredData]
data['clustered']['y'] = [pair[1] for pair in clusteredData]
# Build a spline using the clustered data and predict
mySpline = scipy.interpolate.UnivariateSpline(x=data['clustered']['x'], y=data['clustered']['y'], k=1)
xi = range(0, int(round(max(data['original']['x']), -3)) + 3000, 20)
yi = mySpline(xi)
# Plot the datapoints
pylab.plot(data['clustered']['x'], data['clustered']['y'], "D", label="Datapoints (%s)" % 'clustered')
pylab.plot(xi, yi, label="Predicted (%s)" % 'clustered')
pylab.plot(data['original']['x'], data['original']['y'], "o", label="Datapoints (%s)" % 'original')
# Show the plot
pylab.grid(True)
pylab.xticks(rotation=45)
pylab.show()

Python interp1d vs. UnivariateSpline

I'm trying to port some MatLab code over to Scipy, and I've tried two different functions from scipy.interpolate, interp1d and UnivariateSpline. The interp1d results match the interp1d MatLab function, but the UnivariateSpline numbers come out different - and in some cases very different.
f = interp1d(row1,row2,kind='cubic',bounds_error=False,fill_value=numpy.max(row2))
return f(interp)
f = UnivariateSpline(row1,row2,k=3,s=0)
return f(interp)
Could anyone offer any insight? My x vals aren't equally spaced, although I'm not sure why that would matter.
I just ran into the same issue.
Short answer
Use InterpolatedUnivariateSpline instead:
f = InterpolatedUnivariateSpline(row1, row2)
return f(interp)
Long answer
UnivariateSpline is a 'one-dimensional smoothing spline fit to a given set of data points' whereas InterpolatedUnivariateSpline is a 'one-dimensional interpolating spline for a given set of data points'. The former smoothes the data whereas the latter is a more conventional interpolation method and reproduces the results expected from interp1d. The figure below illustrates the difference.
The code to reproduce the figure is shown below.
from pylab import *   # provides linspace, sin, pi, plot, subplot, legend, etc.
import scipy.interpolate as ip
#Define independent variable
sparse = linspace(0, 2 * pi, num = 20)
dense = linspace(0, 2 * pi, num = 200)
#Define function and calculate dependent variable
f = lambda x: sin(x) + 2
fsparse = f(sparse)
fdense = f(dense)
ax = subplot(2, 1, 1)
#Plot the sparse samples and the true function
plot(sparse, fsparse, label = 'Sparse samples', linestyle = 'None', marker = 'o')
plot(dense, fdense, label = 'True function')
#Plot the different interpolation results
interpolate = ip.InterpolatedUnivariateSpline(sparse, fsparse)
plot(dense, interpolate(dense), label = 'InterpolatedUnivariateSpline', linewidth = 2)
smoothing = ip.UnivariateSpline(sparse, fsparse)
plot(dense, smoothing(dense), label = 'UnivariateSpline', color = 'k', linewidth = 2)
ip1d = ip.interp1d(sparse, fsparse, kind = 'cubic')
plot(dense, ip1d(dense), label = 'interp1d')
ylim(.9, 3.3)
legend(loc = 'upper right', frameon = False)
ylabel('f(x)')
#Plot the fractional error
subplot(2, 1, 2, sharex = ax)
plot(dense, smoothing(dense) / fdense - 1, label = 'UnivariateSpline')
plot(dense, interpolate(dense) / fdense - 1, label = 'InterpolatedUnivariateSpline')
plot(dense, ip1d(dense) / fdense - 1, label = 'interp1d')
ylabel('Fractional error')
xlabel('x')
ylim(-.1,.15)
legend(loc = 'upper left', frameon = False)
tight_layout()
The reason why the results are different (but both likely correct) is that the interpolation routines used by UnivariateSpline and interp1d are different.
interp1d constructs a smooth B-spline using the x-points you gave to it as knots
UnivariateSpline is based on FITPACK, which also constructs a smooth B-spline. However, FITPACK tries to choose new knots for the spline, to fit the data better (probably to minimize chi^2 plus some penalty for curvature, or something similar). You can find out what knot points it used via g.get_knots().
So the reason why you get different results is that the interpolation algorithm is different. If you want B-splines with knots at the data points, use interp1d or splmake. If you want what FITPACK does, use UnivariateSpline. In the limit of dense data, both methods give the same results, but when the data is sparse, you may get different results.
(How do I know all this: I read the code :-)
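A quick, self-contained way to see that knot difference (my own sketch, not from the answer above):
import numpy as np
from scipy.interpolate import UnivariateSpline, InterpolatedUnivariateSpline

xs = np.linspace(0, 2 * np.pi, 20)
ys = np.sin(xs) + 2

print(InterpolatedUnivariateSpline(xs, ys).get_knots())   # knots essentially at the data points
print(UnivariateSpline(xs, ys).get_knots())               # FITPACK usually picks far fewer knots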
Works for me,
from scipy import allclose, linspace
from scipy.interpolate import interp1d, UnivariateSpline
from numpy.random import normal
from pylab import plot, show
n = 2**5
x = linspace(0,3,n)
y = (2*x**2 + 3*x + 1) + normal(0.0,2.0,n)
i = interp1d(x,y,kind=3)
u = UnivariateSpline(x,y,k=3,s=0)
m = 2**4
t = linspace(1,2,m)
plot(x,y,'r,')
plot(t,i(t),'b')
plot(t,u(t),'g')
print allclose(i(t),u(t)) # evaluates to True
show()
This gives me,
UnivariateSpline: A more recent wrapper of the FITPACK routines.
This might explain the slightly different values? (I also experienced that UnivariateSpline is much faster than interp1d.)
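If you want to check the speed claim on your own data, here is a rough timeit sketch (construction plus evaluation; illustrative only, and the numbers will vary):
import timeit

setup = """
import numpy as np
from scipy.interpolate import interp1d, UnivariateSpline
x = np.linspace(0, 3, 2**10)
y = 2*x**2 + 3*x + 1 + np.random.normal(0.0, 2.0, x.size)
t = np.linspace(1, 2, 2**8)
"""
print(timeit.timeit("interp1d(x, y, kind=3)(t)", setup=setup, number=200))
print(timeit.timeit("UnivariateSpline(x, y, k=3, s=0)(t)", setup=setup, number=200))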
