I am trying to fit a function to extract parameters from a binary 2D grating in Python.
Here is my code, which runs but does not produce a proper output:
import numpy as np
import pylab as plt
from scipy.optimize import curve_fit

def grid(X, Y, P, FS):
    """
    function to calculate Z(X, Y) of a binary grating with
    period P and feature size FS
    input:
        X, Y (np.array): from numpy meshgrid, the domain of the function
        P (float, int): period of the grating
        FS (float, int): size of the grating features
    output:
        Z (np.array): binary height profile of the grating containing 0 and 1,
            same shape as X and Y
    """
    Z = np.ones_like(X)
    Z[X % P > FS] = 0
    Z[Y % P > FS] = 0
    return Z
# domain of the example
x = np.arange(0, 500)
y = np.arange(0, 500)
X, Y = np.meshgrid(x, y)
# plot of the example grating
Z = grid(X, Y, 93, 42)
plt.contourf(X, Y, Z)
plt.show()
# here starts the fit
# np.ravel is used in combination with scipy.optimize.curve_fit like in every example I found online
# goal: find the values of P and FS used to generate Z
xdata = np.vstack((X.ravel(), Y.ravel()))
ydata = Z.ravel()
def _grid(xdata, P, FS):
    """
    helper function to call grid(X, Y, P, FS) with the flattened input used
    for curve_fit; returns Z flattened in the same manner
    """
    # unpack x, y and regenerate the meshgrid
    x, y = xdata
    x = np.unique(x)
    y = np.unique(y)
    X, Y = np.meshgrid(x, y)
    # call the original function and return the flattened result
    res = grid(X, Y, P, FS)
    return res.ravel()
# try to fit the parameters
popt, pcov = curve_fit(_grid, xdata, ydata, p0=[90, 40])
print(popt)
print(pcov)
Does anyone spot the problem? Or is there a better way, or a better-suited programming language, for this simple fit?
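A note on why this fails: grid() returns a binary, piecewise-constant surface, so the least-squares objective has zero gradient with respect to P and FS almost everywhere, and curve_fit's gradient-based solver cannot make progress from the initial guess. A derivative-free global search is one way around this; below is a minimal sketch using scipy.optimize.differential_evolution, where the parameter bounds are ad-hoc assumptions for this example:

from scipy.optimize import differential_evolution

def cost(params):
    # mean squared mismatch between the model pattern and the data Z
    P, FS = params
    return np.mean((grid(X, Y, P, FS) - Z) ** 2)

# bounds for P and FS are assumptions; adjust them to the expected ranges
result = differential_evolution(cost, bounds=[(10, 200), (5, 100)], seed=0)
print(result.x)  # for the example above this should land near (93, 42)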
I have some function z(x, y) and I would like to generate a quiver plot (a 2D plot of the gradients). Something like this:
In order to do it, I have to run gradient over a linear mesh and adjust the data to the format that matplotlib's quiver expects.
A naive way is to iterate forward and backward in a loop:
import torch

for i in range(10):
    for j in range(10):
        x = torch.tensor(1. * i, requires_grad=True)
        y = torch.tensor(1. * j, requires_grad=True)
        z = x ** 2 + y ** 2
        z.backward()
        print(x.grad, y.grad)
This is obviously very inefficient. There are some examples of how to generate a linear mesh from x, y, but I would later need to change the mesh back to the format of the forward formula, get the gradient vectors, put them back, and so on.
A simple example in numpy would be:
import numpy as np
import matplotlib.pyplot as plt
n = 25
x_range = np.linspace(-25, 25, n)
y_range = np.linspace(-25, 25, n)
X, Y = np.meshgrid(x_range, y_range)
Z = X**2 + Y**2
U, V = 2*X, 2*Y
plt.quiver(X, Y, U, V, Z, alpha=.9)
What would be the standard way of doing this with pytorch? Are there some simple examples available?
You can compute gradients of non-scalar tensors by passing a torch.Tensor of ones as the upstream gradient.
import matplotlib.pyplot as plt
import torch
# create meshgrid
n = 25
a = torch.linspace(-25, 25, n)
b = torch.linspace(-25, 25, n)
# repeat/tile a and b so that every (x, y) pair appears once
x = a.repeat(n)
y = b.repeat(n, 1).t().contiguous().view(-1)
x.requires_grad = True
y.requires_grad = True
z = x**2 + y**2
# this line will compute the gradients; the ones tensor matches z's shape
torch.autograd.backward([z], [torch.ones(z.size())])
# detach to plot
plt.quiver(x.detach(), y.detach(), x.grad, y.grad, z.detach(), alpha=.9)
plt.show()
If you need to do this repeatedly, you need to reset the gradients first (set x.grad = y.grad = None).
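For completeness, a minimal alternative sketch using torch.meshgrid and torch.autograd.grad; the indexing='xy' argument (available in recent PyTorch versions) is assumed here so the layout matches numpy.meshgrid:

import matplotlib.pyplot as plt
import torch

n = 25
# build the mesh directly; indexing='xy' matches numpy.meshgrid
x, y = torch.meshgrid(torch.linspace(-25, 25, n),
                      torch.linspace(-25, 25, n), indexing='xy')
x = x.clone().requires_grad_(True)
y = y.clone().requires_grad_(True)
z = x**2 + y**2
# gradients of a non-scalar output need grad_outputs of matching shape
u, v = torch.autograd.grad(z, [x, y], grad_outputs=torch.ones_like(z))
plt.quiver(x.detach(), y.detach(), u, v, z.detach(), alpha=.9)
plt.show()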
I am very new to Gaussian processes and to Python as well.
I am trying to produce a very simple Gaussian process regression for a 3D model.
I have a very simple Python code for a function:
import numpy as np

def exponential_cov(x, y, params):
    return params[0] * np.exp(-0.5 * params[1] * np.subtract.outer(x, y)**2)

def conditional(x_new, x, y, params):
    B = exponential_cov(x_new, x, params)
    C = exponential_cov(x, x, params)
    A = exponential_cov(x_new, x_new, params)
    mu = np.linalg.inv(C).dot(B.T).T.dot(y)
    sigma = A - B.dot(np.linalg.inv(C).dot(B.T))
    return (mu.squeeze(), sigma.squeeze())
import matplotlib.pylab as plt
# GP PRIOR
tu = [1, 10]
Si_tu = exponential_cov(0, 0, tu)
xpts = np.arange(-5, 5, step=0.01)
plt.errorbar(xpts, np.zeros(len(xpts)), yerr=Si_tu, capsize=0, color='#95daed', alpha=0.5, label='error') #error
plt.plot(xpts, np.zeros(len(xpts)), linestyle='dashed', color='#3105b2', linewidth=2.5, label='mu'); #mu
# GP FOR 1ST POINT
x = [1.]
y = np.sin(x)+np.cos(np.sqrt(15)*x)
Si_1 = exponential_cov(x, x, tu)
def predict(x, data, kernel, params, sigma, t):
    k = [kernel(x, y, params) for y in data]
    Sinv = np.linalg.inv(sigma)
    y_pred = np.dot(k, Sinv).dot(t)
    sigma_new = kernel(x, x, params) - np.dot(k, Sinv).dot(k)
    return y_pred, sigma_new
x_pred = np.linspace(-5, 5, 1000)  # change step here!!
print("x_pred =")
print(x_pred)
predictions = [predict(i, x, exponential_cov, tu, Si_1, y) for i in x_pred]
y_pred, sigmas = np.transpose(predictions)
print("y_pred =")
print(y_pred)
print("sigmas =")
print(sigmas)
# GP FOR 2ND POINT
m, s = conditional([-1], x, y, tu)
y2 = np.sin(-1)+np.cos(np.sqrt(15)*(-1))
x.append(-1)
y=np.append(y,y2)
Si_2 = exponential_cov(x, x, tu)
predictions = [predict(i, x, exponential_cov, tu, Si_2, y) for i in x_pred]
y_pred, sigmas = np.transpose(predictions)
print "y_pred ="
print(y_pred )
print "sigmas ="
print(sigmas )
By using this code I get very nice fitting results for the function np.sin(x) + np.cos(np.sqrt(15) * x), but what I really want to do is to try the same Gaussian process for the function Z = np.sin(2*X) * np.cos(2*Y) / 2.
I know that the idea is basically the same, but I cannot adapt my Python code to take the [x, y] input and produce z.
I will really appreciate your help, hints or links!
In your current code the input of the function is 1-D, whereas the new function takes 2-D input. You therefore have to change the covariance function: for example, use an ARD-based kernel (see the Kernel Cookbook). Alternatively, you can use an isotropic kernel for 2-D input; just make sure you choose a suitable distance function (e.g. the L2 norm) and a single lengthscale.
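As an illustration, here is a minimal sketch of an isotropic squared-exponential kernel on 2-D inputs, keeping the same params convention as your 1-D exponential_cov (the function name and the use of scipy's cdist are my choices, not part of your code):

import numpy as np
from scipy.spatial.distance import cdist

def exponential_cov_2d(x1, x2, params):
    # x1, x2: arrays of shape (n, 2) and (m, 2), one point per row
    # params: [signal variance, inverse squared lengthscale]
    d = cdist(np.atleast_2d(x1), np.atleast_2d(x2))  # pairwise L2 distances
    return params[0] * np.exp(-0.5 * params[1] * d**2)

The conditional and predict logic can stay essentially unchanged, since it only manipulates covariance matrices; the training inputs just become rows of an (n, 2) array, e.g. built with np.meshgrid followed by np.column_stack.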
I am learning ML with Python, and I came across the code below in the book I am reading.
x, y = np.array(x), np.array(y)
x = (x - x.mean()) / x.std()
x0 = np.linspace(-2, 4, 100)

def get_model(deg):
    return lambda input_x=x0: np.polyval(np.polyfit(x, y, deg), input_x)

def get_cost(deg, input_x, input_y):
    return 0.5 * ((get_model(deg)(input_x) - input_y) ** 2).sum()
I'm not sure why, in the get_cost function, the author applies get_model(deg) to input_x (which is x). In my understanding, get_model(deg) already returns the predicted y based on x0.
When I tried to understand what's happening, I typed get_model(4), and it returned <function __main__.get_model.<locals>.<lambda>>. To my surprise, it didn't return the predicted y based on x0 but a function?! I'm totally confused.
When I typed get_model(4)(x), it returned the predicted y based on x. I don't get it. Could someone please help me figure this out?
The call get_model(deg) does, as you noticed, not return predictions but a model (a function) for predicting.
If you execute get_model(1), it will return a linear model, which allows you to fit your values with a linear function:
import numpy as np
import matplotlib.pyplot as plt

fig = plt.gcf()
fig.set_size_inches(10, 5)

x = np.linspace(-2, 4, 200)
y = x**2
y += np.random.rand(len(x)) * 10
x0 = x

def get_model(deg):
    return lambda input_x=x0: np.polyval(np.polyfit(x, y, deg), input_x)

linear_model = get_model(1)
plt.scatter(x, y)
plt.scatter(x, linear_model(), c='red')
plt.show()
If you want to try another model, you can do this by changing the degree of the model:
plt.scatter(x, y)
plt.scatter(x, get_model(2)(), c='red')
plt.scatter(x, get_model(19)(), c='yellow')
plt.show()
I hope this helps you understand the code a bit better.
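The key detail is the default argument of the returned lambda: get_model(deg) gives you a function whose input defaults to x0, so calling it with no argument evaluates the polynomial at x0, while passing x evaluates it at the training points:

model = get_model(4)  # a function, not predictions
y_default = model()   # polynomial evaluated at the default grid x0
y_train = model(x)    # the same polynomial evaluated at x instead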
I'm new to Python so please be patient. I appreciate any help!
What I have: three 1D lists (xr, yr, zr), one containing x-values, the other two y- and z-values
What I want to do: create a 3D contour plot in matplotlib
I realized that I need to convert the three 1D lists into three 2D lists, by using the meshgrid function.
Here's what I have so far:
xr = np.asarray(xr)
yr = np.asarray(yr)
zr = np.asarray(zr)
X, Y = np.meshgrid(xr,yr)
znew = np.array([zr for x,y in zip(np.ravel(X), np.ravel(Y))])
Z = znew.reshape(X.shape)
Running this gives me the following error (for the last line I entered above):
total size of new array must be unchanged
I went digging around stackoverflow, and tried using suggestions from people having similar problems. Here are the errors I get from each of those suggestions:
Changing the last line to:
Z = znew.reshape(X.shape[0])
Gives the same error.
Changing the last line to:
Z = znew.reshape(X.shape[0], len(znew))
Gives the error:
Shape of x does not match that of z: found (294, 294) instead of (294, 86436).
Changing it to:
Z = znew.reshape(X.shape, len(znew))
Gives the error:
an integer is required
Any ideas?
Well, the sample code below works for me:
import numpy as np
import matplotlib.pyplot as plt
xr = np.linspace(-20, 20, 100)
yr = np.linspace(-25, 25, 110)
X, Y = np.meshgrid(xr, yr)
#Z = 4*X**2 + Y**2
zr = []
for i in range(0, 110):
    y = -25.0 + (50. / 110.) * float(i)
    for k in range(0, 100):
        x = -20.0 + (40. / 100.) * float(k)
        v = 4.0 * x * x + y * y
        zr.append(v)
Z = np.reshape(zr, X.shape)
print(X.shape)
print(Y.shape)
print(Z.shape)
plt.contour(X, Y, Z)
plt.show()
TL;DR
import matplotlib.pyplot as plt
import numpy as np
def get_data_for_mpl(X, Y, Z):
    result_x = np.unique(X)
    result_y = np.unique(Y)
    # shape (len(y), len(x)), which is what contourf expects
    result_z = np.zeros((len(result_y), len(result_x)))
    # result_z[:] = np.nan
    for x, y, z in zip(X, Y, Z):
        i = np.searchsorted(result_x, x)
        j = np.searchsorted(result_y, y)
        result_z[j, i] = z
    return result_x, result_y, result_z
xr, yr, zr = np.genfromtxt('data.txt', unpack=True)
plt.contourf(*get_data_for_mpl(xr, yr, zr), 100)
plt.show()
Detailed answer
At the beginning, you need to find out for which values of x and y the graph is being plotted. This can be done using the numpy.unique function:
result_x = numpy.unique(X)
result_y = numpy.unique(Y)
Next, you need to create a numpy.ndarray with function values for each point (x, y) from zip(X, Y):
result_z = numpy.zeros((len(result_y), len(result_x)))
for x, y, z in zip(X, Y, Z):
    i = search(result_x, x)
    j = search(result_y, y)
    result_z[j, i] = z
If the arrays are sorted, the search can be done in logarithmic rather than linear time, so it is enough to use numpy.searchsorted. For that, the arrays result_x and result_y must be sorted; fortunately, numpy.unique already returns sorted arrays, so nothing more needs to be done. It suffices to replace search (a placeholder shown only as an intermediate step; it is not implemented anywhere) with np.searchsorted.
Finally, to get the desired image, it is enough to call the matplotlib.pyplot.contour or matplotlib.pyplot.contourf method.
If the function value does not exist for every (x, y) combination of x from result_x and y from result_y, and you simply want nothing drawn there, it is enough to replace the missing values with NaN. Or, more simply, create result_z as a numpy.ndarray filled with NaN and then fill it in:
result_z = numpy.zeros((len(result_y), len(result_x)))
result_z[:] = numpy.nan
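If the (x, y) samples do not lie on a regular grid at all, it can be simpler to skip the regridding entirely: matplotlib can contour scattered data directly via Delaunay triangulation. A minimal sketch, assuming the same data.txt layout as above:

import matplotlib.pyplot as plt
import numpy as np

xr, yr, zr = np.genfromtxt('data.txt', unpack=True)
plt.tricontourf(xr, yr, zr, 100)  # triangulates the scattered points internally
plt.show()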
I am trying to fit some data points with y uncertainties in Python. The data are labeled in Python as x, y and yerr.
I need to do a linear fit on that data in log-log scale. As a reference to check whether the fit comes out properly, I compare the Python results with the ones from SciDAVis.
I tried curve_fit with
def func(x, a, b):
    return np.exp(a * np.log(x) + np.log(b))

popt, pcov = curve_fit(func, x, y, sigma=yerr)
as well as kmpfit with
def funcL(p, x):
    a, b = p
    return np.exp(a * np.log(x) + np.log(b))

def residualsL(p, data):
    a, b = p
    x, y, errorfit = data
    return (y - funcL(p, x)) / errorfit

a0 = 1
b0 = 0.1
p0 = [a0, b0]
fitterL = kmpfit.Fitter(residuals=residualsL, data=(x, y, yerr))
fitterL.parinfo = [{}, {}]
fitterL.fit(params0=p0)
When I try to fit the data with either of those without uncertainties (i.e. setting yerr=1), everything works just fine and the results are identical to the ones from SciDAVis. But if I set yerr to the uncertainties from the data file, I get some disturbing results.
In Python I get e.g. a=0.86, while SciDAVis gives a=0.14. I read that the errors are included as weights. Do I have to change anything in order to calculate the fit correctly? Or what am I doing wrong?
Edit: here is an example of a data file (x, y, yerr):
3.942387e-02 1.987800e+00 5.513165e-01
6.623142e-02 7.126161e+00 1.425232e+00
9.348280e-02 1.238530e+01 1.536208e+00
1.353088e-01 1.090471e+01 7.829126e-01
2.028446e-01 1.023087e+01 3.839575e-01
3.058446e-01 8.403626e+00 1.756866e-01
4.584524e-01 7.345275e+00 8.442288e-02
6.879677e-01 6.128521e+00 3.847194e-02
1.032592e+00 5.359025e+00 1.837428e-02
1.549152e+00 5.380514e+00 1.007010e-02
2.323985e+00 6.404229e+00 6.534108e-03
3.355974e+00 9.489101e+00 6.342546e-03
4.384128e+00 1.497998e+01 2.273233e-02
and the result:
In Python:
without uncertainties: a = 0.06216 +/- 0.00650; b = 8.53594 +/- 1.13985
with uncertainties: a = 0.86051 +/- 0.01640; b = 3.38081 +/- 0.22667
In SciDAVis:
without uncertainties: a = 0.06216 +/- 0.08060; b = 8.53594 +/- 1.06763
with uncertainties: a = 0.14154 +/- 0.005731; b = 7.38213 +/- 2.13653
I must be misunderstanding something. Your posted data does not look anything like
f(x,a,b) = np.exp(a*np.log(x)+np.log(b))
The red line is the result of scipy.optimize.curve_fit, the green line is the result of SciDAVis.
My guess is that neither algorithm is converging toward a good fit, so it is not surprising that the results do not match.
I can't explain how SciDAVis finds its parameters, but according to the definitions as I understand them, scipy finds parameters with lower least-squares residuals than SciDAVis:
import numpy as np
import matplotlib.pyplot as plt
import scipy.optimize as optimize
def func(x, a, b):
    return np.exp(a * np.log(x) + np.log(b))

def sum_square(residuals):
    return (residuals**2).sum()

def residuals(p, x, y, sigma):
    return 1.0 / sigma * (y - func(x, *p))
data = np.loadtxt('test.dat').reshape((-1,3))
x, y, yerr = np.rollaxis(data, axis = 1)
sigma = yerr
popt, pcov = optimize.curve_fit(func, x, y, sigma = sigma, maxfev = 10000)
print('popt: {p}'.format(p = popt))
scidavis = (0.14154, 7.38213)
print('scidavis: {p}'.format(p = scidavis))
print('''\
sum of squares for scipy: {sp}
sum of squares for scidavis: {d}
'''.format(
    sp=sum_square(residuals(popt, x=x, y=y, sigma=sigma)),
    d=sum_square(residuals(scidavis, x=x, y=y, sigma=sigma))
))
plt.plot(x, y, 'bo', x, func(x,*popt), 'r-', x, func(x, *scidavis), 'g-')
plt.errorbar(x, y, yerr)
plt.show()
yields
popt: [ 0.86051258 3.38081125]
scidavis: (0.14154, 7.38213)
sum of squares for scipy: 53249.9915654
sum of squares for scidavis: 239654.84276
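One more thing that may be worth checking: since the stated goal was a linear fit in log-log scale, you can also fit log(y) = a*log(x) + log(b) directly, with the uncertainties propagated to log space. A minimal sketch; the first-order error propagation sigma(log y) = yerr/y is an assumption, and I have not verified whether this reproduces SciDAVis's numbers:

import numpy as np

x, y, yerr = np.loadtxt('test.dat', unpack=True)
logx, logy = np.log(x), np.log(y)
w = y / yerr  # polyfit weights are 1/sigma; to first order sigma(log y) = yerr/y
(a, logb), cov = np.polyfit(logx, logy, 1, w=w, cov=True)
print('a = {:.5f}, b = {:.5f}'.format(a, np.exp(logb)))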