What I'm trying to do is fit a specific model to a dataset of x-y values and obtain the model's constants. I can get the constant (in this case there is only one) and then the fitted y_opt. Below is a working example of doing so:
import pandas as pd
from scipy.optimize import curve_fit
data = pd.read_csv(r'')
x_measured = data['x[-]'].values
y_measured = data['y[-]'].values
def y_NH(x_eng, D):
    y_comp = D * x_eng * (x_eng**2 + 3 * x_eng + 3) / (1 + x_eng)**2
    return y_comp

D = curve_fit(y_NH, x_measured, y_measured)  # returns (popt, pcov); D[0] holds the fitted constant(s)
y_opt = y_NH(x_measured, D[0])
This works well, but it's not exactly what I need.
The formula for y_comp is something I had to derive manually - originally I had another variable, say Y_comp, and obtained y_comp by differentiating Y_comp (with respect to x_eng, obviously). What I would like to achieve is to feed my function Y_comp (because there will be more like Z_comp, F_comp, etc.), have it differentiate that to get y_comp (z_comp, f_comp), and then fit the model to my dataset - the result would then be the constant(s) of the particular model.
I made a start, but I haven't gotten it working and would appreciate some help on this topic. The buggy code is:
import sympy as sy
from sympy.utilities.lambdify import lambdify

def y_NH2(x_eng, D):
    lambda1 = sy.Symbol('lambda1')
    x_eng = sy.Symbol('x_eng')
    #Gi = sy.Symbol('Gi')
    lambda1 = x_eng + 1
    W = lambda1**2 + 2 / lambda1
    y_comp_symb = sy.diff(W, x_eng)
    y_comp = lambdify(x_eng, y_comp_symb, 'numpy')
    y_return = D / 2 * y_comp(x_eng)
    return y_return

y_p = y_NH2(x_measured, 12)
print(y_p)
D = curve_fit(y_NH2, x_measured, y_measured)
y_opt = y_NH2(x_measured, D[0])
This raises an error in curve_fit: "error: Result from function call is not a proper array of floats."
Could you please give me a hint?
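(For reference, a minimal sketch of one way to restructure this so that curve_fit receives plain floats: build the symbolic derivative once, outside the model function, so the numeric x_eng argument is no longer shadowed by the sympy Symbol. The name y_NH2_fixed is hypothetical.)

import sympy as sy
from scipy.optimize import curve_fit

# Build the symbolic derivative once, then lambdify it for NumPy arrays.
x_sym = sy.Symbol('x_eng')
lambda1 = x_sym + 1
W = lambda1**2 + 2 / lambda1  # the manually derived Y_comp from the text
y_comp = sy.lambdify(x_sym, sy.diff(W, x_sym), 'numpy')

def y_NH2_fixed(x_eng, D):
    # x_eng stays a plain NumPy array, so the return value is an array of floats
    return D / 2 * y_comp(x_eng)

popt, pcov = curve_fit(y_NH2_fixed, x_measured, y_measured)
y_opt = y_NH2_fixed(x_measured, popt[0])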
I am using scipy.optimize.curve_fit (https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html) to get the coefficients of a curve-fitting function. The SciPy function takes the model function as its first argument, so if I want to make a linear curve fit, I pass it the following function:
def objective(x, a, b):
    return a * x + b
If I want a polynomial curve fit of second degree, I pass the following:
def objective(x, a, b, c):
    return a * x + b * x**2 + c
And so on. What I want to achieve is to make this model function generic: for example, if the user wants to fit a polynomial of 5th degree by inputting 5, it should change to
def objective(x, a, b, c, d, e, f):
    return (a * x) + (b * x**2) + (c * x**3) + (d * x**4) + (e * x**5) + f
while the code is running. Is this possible? And if it is not possible using SciPy, because it requires changing a function, is there any other way to achieve what I want?
If you really want to implement it on your own, you can either use a variable number of coefficients *coeffs:
def objective(x, *coeffs):
    # the constant term comes last; the remaining coefficients multiply
    # x, x**2, ... in increasing order of power
    result = coeffs[-1]
    for i, coeff in enumerate(coeffs[:-1]):
        result += coeff * x**(i + 1)
    return result
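Note that curve_fit cannot infer the number of parameters from a variadic *coeffs signature, so an initial guess p0 of the right length must be supplied; a sketch, assuming x and y are NumPy arrays:

import numpy as np
from scipy.optimize import curve_fit

# 6 initial guesses -> a 5th-degree polynomial (5 powers plus the constant)
popt, pcov = curve_fit(objective, x, y, p0=np.ones(6))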
or use np.polyval:

import numpy as np

def objective(x, *coeffs):
    # np.polyval takes the coefficients first (highest power to constant), then x
    return np.polyval(coeffs, x)
However, note that there's no need to use curve_fit. You can directly use np.polyfit to do a least-squares polynomial fit.
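For example, a sketch assuming x and y are 1-D arrays:

import numpy as np

degree = 5                         # hypothetical user input
coeffs = np.polyfit(x, y, degree)  # least-squares fit; highest power first
y_fit = np.polyval(coeffs, x)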
The task can be accomplished in a few ways. If you want to use scipy, you can simply create a dictionary that maps the user's numerical input to a specific function:
import scipy.optimize as optimization

polnum = 2  # suppose this is input from user

def objective1(x, a, b):
    return a * x + b

def objective2(x, a, b, c):
    return a * x + b * x**2 + c

# Include some more functions

# Do not include round brackets in the dictionary
object_dict = {
    1: objective1,
    2: objective2
    # Include the numbers with corresponding functions
}

opt = optimization.curve_fit(object_dict[polnum], x, y)  # Curve fitted
print(opt[0])  # Returns parameters
However, I would suggest going with a better way, where you do not have to define each function:
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LinearRegression

polnum = 2  # suppose this is input from user

# Creates a polynomial model of degree polnum
model = make_pipeline(PolynomialFeatures(polnum), LinearRegression())
# You can change the estimator LinearRegression to any other offered in sklearn.linear_model

# sklearn expects a 2-D feature array, so reshape 1-D x accordingly
model.fit(x[:, np.newaxis], y)

# Now you can use the model to predict for other values
y_predict = model.predict(x_predict[:, np.newaxis])
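If you also need the fitted coefficients themselves, they live on the regression step of the pipeline; a sketch (make_pipeline names each step after its lowercased class name):

lin = model.named_steps['linearregression']
print(lin.intercept_, lin.coef_)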
Referring to here, I would like to find the MLE of alpha and lam, given the following PDF:
import scipy.stats as st
import numpy as np
class Weib(st.rv_continuous):
    def _pdf(self, data, alpha, lam):
        t = data[0]
        delta = data[1]
        fx = (alpha * lam * (t**(alpha-1)))**(delta) * np.exp(-lam * (t**alpha))
        return fx

    def _argcheck(self, alpha, lam):
        a = alpha > 0
        l = lam > 0
        return (a & l)
And I tried
Weib_inst = Weib(name='Weib')
Samples = Weib_inst.rvs(alpha=1, lam=3, size=1000)
And it says
'float' object is not subscriptable
Weib_inst._fitstart([[1, 2], [2, 4]]) also returns the same error message.
It seems this occurs because the data is not 1-dimensional, but I cannot find a way to bypass this.
Any help might be appreciated.
You may try to define _fitstart in your subclass. The framework assumes univariate distributions, however.
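A minimal sketch of what that override could look like, with hypothetical fixed starting values (rv_continuous expects the shape parameters followed by loc and scale):

class Weib(st.rv_continuous):
    # _pdf and _argcheck as in the question ...

    def _fitstart(self, data, args=None):
        # hypothetical starting values: (alpha, lam, loc, scale)
        return (1.0, 1.0, 0.0, 1.0)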
Does statsmodels support nonlinear regression to an arbitrary equation? (I know that there are some forms that are already built in, e.g. for logistic regression, but I am after something more flexible)
In the solution https://stats.stackexchange.com/a/44249 to a question about non-linear regression, the code is in R and uses the function nls. There the equation's parameters are defined with start = list(a1=0, ...). These are of course just initial guesses, not the final fitted values. But what is different here compared to lm is that the parameters don't need to come from the columns of the input data.
I've been able to use statsmodels.formula.api.ols as an equivalent for R's lm, but when I try to use it with an equation that has parameters (and not weights for the inputs / combinations of inputs), statsmodels complains about the parameters not being defined. It does not seem to have an equivalent of the start= argument, so it isn't obvious how to introduce them.
Is there a different class or function in statsmodels that accepts definition of these initial parameter values?
EDIT:
My current attempt, and also a workaround following the suggestion to use lmfit:
from statsmodels.formula.api import ols
import numpy as np
import pandas as pd

def eqn_poly(x, a, b):
    ''' simple polynomial '''
    return a * x**2.0 + b * x

def eqn_nl(x, a, b):
    ''' fractional equation '''
    return 1.0 / ((a + x) * b)

x = np.arange(0, 3, 0.1)
y1 = eqn_poly(x, 0.1, 0.5)
y2 = eqn_nl(x, 0.1, 0.5)

sigma = 0.05
y1_noise = y1 + sigma * np.random.randn(*y1.shape)
y2_noise = y2 + sigma * np.random.randn(*y2.shape)

df1 = pd.DataFrame(np.vstack([x, y1_noise]).T, columns=['x', 'y'])
df2 = pd.DataFrame(np.vstack([x, y2_noise]).T, columns=['x', 'y'])

res1 = ols("y ~ 1 + x + I(x ** 2.0)", df1).fit()
print(res1.summary())

res3 = ols("y ~ 1 + x + I(x ** 2.0)", df2).fit()

#res2 = ols("y ~ eqn_nl(x, a, b)", df2).fit()
# ^^^ this fails if a, b are not initialised ^^^
# so initialise a, b
a, b = 1.0, 1.0
res2 = ols("y ~ eqn_nl(x, a, b)", df2).fit()
print(res2.summary())
# ===> and now the fitting is bad: it has an intercept of -4.79 and a weight
# on the equation of 15.7.
Giving lmfit the function, it is able to find the parameters:

import lmfit

mod = lmfit.Model(eqn_nl)
lm_result = mod.fit(y2_noise, x=x, a=1.0, b=1.0)
print(lm_result.fit_report())
# ===> this one works fine, a=0.101, b=0.4977
But trying to put y1, x into ols doesn't seem to work ("PatsyError: model is missing required outcome variables"). I didn't really follow that suggestion.
Consider scipy.optimize.curve_fit as the desired R nls-like function.
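Applied to the question's setup, that could look like the following sketch, where p0 plays the role of R's start= argument:

from scipy.optimize import curve_fit

# fit eqn_nl from the question to the noisy data, starting from a=b=1.0
popt, pcov = curve_fit(eqn_nl, x, y2_noise, p0=[1.0, 1.0])
print(popt)  # should come out close to (0.1, 0.5)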
I'm trying to optimize a function with scipy.optimize.minimize, but I can't figure out what goes where; I alternately get the error messages "ValueError: setting an array element with a sequence" and "TypeError: llf() takes 1 positional argument but 2 were given".
My code is as follows:
import numpy as np
import pandas as pd
import scipy.optimize

u = np.random.normal(0, 1, 50)
t = 25
x = t * u / (1 - u)
x = np.sort(x, axis=0)

theta = list(range(1, 1001, 1))
theta = np.divide(theta, 10)

xv, tv = np.meshgrid(x, theta)
xt_sum = xv + tv  # Each *theta* has been added to all values of *x*
xt_sum_inv = 1 / xt_sum
xt_sum_n = np.sum(xt_sum_inv, axis=1)  # A length-1000 vector; each entry equals sum(1/(theta + x))

def llf(arg):
    return -1 * (50 / arg - 2 * xt_sum_n)

res = scipy.optimize.minimize(llf, theta, method='BFGS')
theta is what I am trying to optimize for.
I feel I might have my positional arguments wrong, or my variables or function output are the wrong data structure. Any help would be much appreciated.
From the documentation of scipy.optimize.minimize:

Minimization of scalar function of one or more variables.

The keyword above is scalar. Your function does not return a single value (a scalar) but many, i.e. it returns a vector.
Whatever you are trying to achieve, you are using the wrong numerical function, or you are defining the wrong target function, i.e. your llf().
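For what it's worth, the posted llf looks like the score (the derivative with respect to theta) of a log-likelihood of the form 50*log(theta) - 2*sum(log(theta + x)). If that reading is correct (an assumption, not something stated in the question), a scalar objective for a single theta would be the negative log-likelihood:

import numpy as np
from scipy.optimize import minimize

def neg_llf(params):
    # minimize passes the parameters as an array, even for one parameter
    theta = params[0]
    # assumes theta + x > 0 for all data points
    return -(50 * np.log(theta) - 2 * np.sum(np.log(theta + x)))

res = minimize(neg_llf, x0=[1.0], method='BFGS')
print(res.x)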
At one step in the model I'm writing, I have to calculate the error function of a quantity. What I'm trying to do looks like this:
from math import erf
import numpy as np
import pymc as pm
sig = pm.Exponential('sig', beta=0.1, size=10)
x = erf(sig ** 2)
This fails because erf doesn't work on arrays. I tried:
@pm.deterministic
def x(sig=sig):
    return [erf(s) for s in sig]
but with no success. I know it's possible to get the result with:
np_erf = np.vectorize(erf)
x = np_erf((sig ** 2).value)
but this doesn't seem like the correct way, because it produces just a np.array rather than a pm.Deterministic. How can I do it instead? (PyMC is version 2.3.)
Edit: The above examples were simplified for clarity, here's the what the relevant passages look in the real code. Ideally, I would like this to work:
mu = pm.LinearCombination('mu', [...], [...])
sig2 = pm.exp(mu) ** 2
f = 1 / (pm.sqrt(np.pi * sig2 / 2.0) * erf(W / sig2))
but it fails with the message TypeError: only length-1 arrays can be converted to Python scalars. Going the np.vectorize route
np_erf = np.vectorize(erf)
f = 1 / (pm.sqrt(np.pi * sig2 / 2.0) * np_erf(W / sig2))
crashes with the same error message. The list comprehension
@pm.deterministic
def f(sig2=sig2):
    return [1 / (pm.sqrt(np.pi * s / 2.0) * erf(W / s)) for s in sig2]
works as such, but leads to an error later in the code at this spot:
@pm.observed(plot=True)
def y(value=df['dist'], sig2=sig2, f=f):
    return (np.log(np.exp(-(value ** 2) / 2.0 / sig2) * f)).sum()
and the error is AttributeError: log.
I've got the calculation of the error function working using a numerical approximation, which should mean that the general setup is correct. It would just be nicer and clearer to use the erf function directly.
I found the solution. I didn't realize that when you create a variable using the pymc.deterministic decorator, the parameters passed to the function are numpy.array, not pymc.Distribution. This allows you to numpy.vectorize the function and apply it to the variable. So instead of
sig = pm.Exponential('sig', beta=0.1, size=10)
x = erf(sig ** 2)
you need to use
sig = pm.Exponential('sig', beta=0.1, size=10)

np_erf = np.vectorize(erf)

@pm.deterministic
def x(sig=sig):
    return np_erf(sig ** 2)
and it works.