I am fitting a function to experimental data. The function is too complicated to post here, but my fitting code looks like this:
import numpy as np
import scipy.optimize

out_put = scipy.optimize.leastsq(func, initial_params, full_output=True, ftol=0.001, xtol=0.001, gtol=0.001)
fitter_sol = out_put[0]                 # best-fit parameters
error = np.sqrt(out_put[1].diagonal())  # parameter uncertainties from the covariance matrix
The last line raises an error at execution:
AttributeError: 'NoneType' object has no attribute 'diagonal'
What could be the potential source of this error?
The docs say the second result of leastsq is:
None if a singular matrix encountered (indicates very flat curvature in some direction).
So leastsq encountered a singular matrix during the fit and returned None instead of a covariance matrix; calling .diagonal() on None then fails.
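You can guard against this before taking the diagonal; a minimal sketch, assuming out_put comes from the leastsq call above:

fitter_sol, cov_x = out_put[0], out_put[1]
if cov_x is None:
    # very flat curvature in some direction; no parameter uncertainties available
    print("leastsq returned no covariance matrix (singular matrix encountered)")
else:
    error = np.sqrt(cov_x.diagonal())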
I am getting this error when I'm trying to plot a function. This is some of my code:
def f(x):
    return f1

xspace = np.linspace(-3, 3, 100)
plt.ylim([-3, 3])
plt.plot(xspace, f(xspace))
where f1 is calculated previously:
line1 = x**16 - 1
line2 = x**24 - 1
f1 = sym.cancel(line1/line2)
My question is: when I do return f1 I get the error above, but when I write the function out longhand instead of returning f1, it works. That seems weird to me, since they are both the same. Do I always have to write the function out when defining it before graphing, or can I set it to a variable? Writing it out every time seems very tedious, especially since I have to graph the second derivative for the next part.
Thanks in advance
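One likely culprit: f1 is a SymPy expression, not a callable, so f(xspace) just returns the symbolic expression and plt.plot cannot evaluate it on the array. A sketch of a possible fix with sym.lambdify (assuming x is a SymPy symbol):

import numpy as np
import sympy as sym
import matplotlib.pyplot as plt

x = sym.symbols('x')
f1 = sym.cancel((x**16 - 1) / (x**24 - 1))

# convert the symbolic expression into a NumPy-aware function
f = sym.lambdify(x, f1, modules='numpy')

xspace = np.linspace(-3, 3, 100)
plt.ylim([-3, 3])
plt.plot(xspace, f(xspace))
plt.show()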
I have divided my data into training and validation samples and have successfully fit my model with three types of linear models. What I cannot figure out is how to apply the model to the validation sample to evaluate the fit. (Sorry, I know this isn't a reproducible example, but I think the issue is pretty clear; I'm just putting the snippet here for completeness. Please be gentle!) When I attempt to apply the model to the holdout sample:
valid = validation.loc[:, x + ["sale_amt"]]
holdout1 = m1.predict(valid)
I get the following error message:
AttributeError Traceback (most recent call last)
in ()
8
9 valid = validation.loc[:, x + [ "sale_amt"]]
---> 10 holdout1 = m1.predict(valid)
AttributeError: 'OLS' object has no attribute 'predict'
Other Python OLS regression packages have a 'predict' method, but it doesn't seem that PySAL does. I realize that the function coefficients (betas) are available and will pursue applying them to my validation data directly, but I was hoping that there is a simple answer that I just missed.
I apologize if it is bad form to answer my own question, but I did come up with a solution. I contacted Daniel Arribas-Bel, one of the PySAL developers, and he helped guide me to the result I was seeking. Note that my PySAL OLS object is named m1, and my validation dataframe is called 'validation':
m1 = ps.model.spreg.OLS(...)
m1.intercept = m1.betas[0]      # intercept from the betas array
m1.coefficients = m1.betas[1:]  # slope coefficients from the betas array
validation['predicted_price'] = m1.intercept + validation.loc[:, x].dot(m1.coefficients)
Note that this is the method I would use for a non-spatial model adapted for the KNN model I built in PySAL and might not be technically fully correct for a spatial model. Caveat emptor.
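From there, evaluating the fit on the holdout sample is straightforward; a short sketch, assuming numpy is imported as np and 'sale_amt' (taken from the snippet above) is the observed target:

import numpy as np

residuals = validation['sale_amt'] - validation['predicted_price']
rmse = np.sqrt(np.mean(residuals**2))
print("Validation RMSE:", rmse)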
I was studying the AdaDelta optimization algorithm, so I tried to implement it in Python, but something is wrong with my code, since I get the following error:
AttributeError: 'numpy.ndarray' object has no attribute 'sqrt'
I could not find anything about what causes this error. According to the message, it comes from this line of code:
rms_grad = np.sqrt(self.e_grad + epsilon)
This line is similar to this equation:
RMS[g]_t = √(E[g²]_t + ε)
I got the core equations of the algorithm in this article: http://ruder.io/optimizing-gradient-descent/index.html#adadelta
Just one more detail: I'm initializing the E[g²]_t matrix like this:
self.e_grad = (1 - mu)*np.square(nabla)
where nabla is the gradient, similar to this equation:
E[g²]_t = γ·E[g²]_{t−1} + (1−γ)·g²_t
(the first term is equal to zero in the first iteration, just like the line of code above)
So I want to know whether I'm initializing the E matrix the wrong way or doing the square root inappropriately. I tried the pow() function instead, but it doesn't work either. If anyone could help me with this I would be very grateful; I've been trying for weeks.
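For reference, here are the two update rules above in plain NumPy (a runnable sketch, not the asker's code; the gradient values are made up):

import numpy as np

gamma, epsilon = 0.9, 1e-8
nabla = np.array([0.5, -1.2, 3.0])   # example gradient

e_grad = np.zeros_like(nabla)        # E[g²]_0 = 0
# one step of the running average and its RMS, per the equations above
e_grad = gamma * e_grad + (1 - gamma) * np.square(nabla)
rms_grad = np.sqrt(e_grad + epsilon)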
Additional details requested by andersource:
Here is the entire source code on GitHub: https://github.com/pedrovbeltran/neural-networks-and-deep-learning/blob/experimental/modified-networks/network2_with_adadelta.py
I think the problem is that self.e_grad_w is an object ndarray of shape (2,) that contains two further 2-D ndarrays, rather than holding the data directly. It seems to be initialized in e_grad_initializer, where nabla_w has the same structure. I didn't track where this comes from all the way back, but I believe that once you fix this issue the error will be resolved.
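To illustrate the failure mode (a standalone sketch, not from the linked repo): np.sqrt on an object-dtype array makes NumPy look for a .sqrt() method on each element, and plain ndarrays have none. Taking the square root layer by layer avoids it:

import numpy as np

# two weight-gradient arrays of different shapes stored in one object array,
# as happens when network layers have different sizes
e_grad_w = np.array([np.ones((3, 2)), np.ones((2, 1))], dtype=object)

epsilon = 1e-8
# np.sqrt(e_grad_w + epsilon) raises:
#   AttributeError: 'numpy.ndarray' object has no attribute 'sqrt'

# fix: apply the square root per layer
rms_grad_w = [np.sqrt(g + epsilon) for g in e_grad_w]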
I am a complete beginner, and I'm currently doing this tutorial about logit regression models in Python 3.4, with statsmodels 0.6.1 and PyCharm Community Edition 4.5.1:
http://blog.yhathq.com/posts/logistic-regression-and-python.html
It runs smoothly, and I'm adding my own lines to try out a few things.
After the part when I fit the data
train_cols = data.columns[1:]
logit = sm.Logit(data['admit'], data[train_cols])
result = logit.fit()
and I print out the summary
print(result.summary())
I tried to take a little detour from the tutorial and print only the goodness-of-fit measure (in this case, a pseudo R-squared value). According to the documentation it is a method of the result object (same as summary), so it should work like this:
print(result.prsquared())
However, running print(result.prsquared()) results in a TypeError:
TypeError: 'numpy.float64' object is not callable
It really bugs me, because if I want to compare several models, pseudo R-squared would be my first choice for doing it.
prsquared is an attribute, not a function. Try:
print(result.prsquared)
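Since it is a plain attribute on the fitted results object, comparing several models is just attribute access; a quick sketch, where result_a and result_b stand for two hypothetical fitted Logit results:

for name, res in [('model_a', result_a), ('model_b', result_b)]:
    print(name, res.prsquared)   # McFadden's pseudo R-squared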
Using the scipy.optimize.minimize() function, I went through different results using different methods for the same objective function. To evaluate the goodness of fit, I look at the reduced chi-squared as a first criterion. After some time I ended up at this useful guide, http://newville.github.io/lmfit-py/fitting.html#Minimizer, where it says that the reduced chi-squared is set as an attribute of the Minimizer object returned by the minimize() function. But if I do
minobj = scipy.optimize.minimize(...)
minobj.redchi
I get
AttributeError: redchi
Meanwhile minobj.message and minobj.success are correctly displayed.
Any guesses?
The documentation is a little misleading: if you look at lmfit/minimizer.py and search for "redchi" in the entire file, it appears only once, in the leastsq() method. So basically, lmfit only calculates the reduced chi-squared for least-squares fitting.
If you're feeling up to it, you could fork the lmfit GitHub repo, add redchi to the other methods in the appropriate places, and submit your changes.
In addition to Ashwin's answer, you could always just use:
result = lmfit.minimize(...)
x2 = result.chisqr     # chi-squared statistic
nfree = result.nfree   # degrees of freedom
red_x2 = x2 / nfree    # reduced chi-squared
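And if you stay with plain scipy.optimize.minimize, the same quantity is easy to compute by hand; a runnable sketch under the assumption that the objective returns the sum of squared residuals (the straight-line model and data here are made up):

import numpy as np
import scipy.optimize

xdata = np.linspace(0, 1, 20)
ydata = 2.0 * xdata + 1.0 + 0.1 * np.random.default_rng(0).normal(size=xdata.size)

def objective(p):
    # sum of squared residuals for a straight-line model y = p[0]*x + p[1]
    return np.sum((ydata - (p[0] * xdata + p[1]))**2)

res = scipy.optimize.minimize(objective, x0=[1.0, 0.0])
red_chi2 = res.fun / (ydata.size - 2)   # chi-squared / (N - N_varys)
print(red_chi2)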