How do you use the new polynomials sub-package in numpy to evaluate a fitted polynomial at new x values and get the corresponding y values?
https://numpy.org/doc/stable/reference/routines.polynomials.package.html
In prior versions of numpy it went something like this:
poly = np.poly1d(np.polyfit(x, y, 3))
new_x = np.linspace(0, 100)
new_y = poly(new_x)
With the new version I am struggling to give it x values and get the y values back:
from numpy.polynomial import Polynomial
poly = Polynomial(Polynomial.fit(x, y, 3))
When I give it an array of x it just returns the coefficients.
You can directly call the resulting series to evaluate it:
from numpy.polynomial import Polynomial
poly = Polynomial.fit(x, y, 3)   # fit() already returns a Polynomial; no extra wrapping needed
new_x = np.linspace(0, 100)      # whatever new x values you need
new_y = poly(new_x)
Check this page of the documentation; it has several examples.
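For instance, a minimal end-to-end sketch (the data here is made up purely for illustration):
import numpy as np
from numpy.polynomial import Polynomial

x = np.array([0., 10., 20., 30., 40., 50.])   # made-up sample data
y = 0.5*x**2 - 3.0*x + 7.0

poly = Polynomial.fit(x, y, 3)   # fit returns a Polynomial object
new_x = np.linspace(0, 100)      # the new x values to evaluate at
new_y = poly(new_x)              # calling the object gives the y values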
Unfortunately, the answer by @Joan Charmant and the supportive comment by @rh109019 do not work.
The intuitive way suggested by @Joan Charmant is, basically, what the question's about: it doesn't work.
Evidently, there is a function in numpy.polynomial.polynomial devoted specifically to evaluating polynomials: polyval (see the documentation).
Here's my code where I'm comparing the two approaches.
import numpy as np
Pgauge = np.asarray([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
NIST = np.asarray([1.1, 2.1, 3.1, 4.1, 5.1, 6.1, 7.1, 8.1])
calibrationCurve = np.polynomial.polynomial.Polynomial.fit(Pgauge, NIST, deg=1)
print("The polynomial: {}".format(calibrationCurve))
x = np.asarray([0, 1]) # values of x to evaluate the polynomial at
c = calibrationCurve.coef # coefficients of the polynomial
print("The intuitive (wrong) way: {}".format(calibrationCurve(x)))
print("The correct way: {}".format(np.polynomial.polynomial.polyval(x, c)))
The first print command prints out the polynomial: 4.6 + 3.5x.
If we want to evaluate it at the points 0 and 1 (x = np.asarray([0, 1])), we expect to get 4.6 and 8.1 respectively.
The second print command (the one that reads "The intuitive (wrong) way") uses the method suggested by @Joan Charmant. It gives [0.1, 1.1] as the result, which is wrong. At first glance it looks OK, since it gives two numbers as expected, but the numbers themselves don't match the printed polynomial, and I couldn't tell how they were calculated. With a bigger series of data I wouldn't want to go through it with a calculator and just assume the result is correct.
The last print command makes use of the polyval function suggested in the documentation cited above, and it works perfectly well: it gives [4.6, 8.1] as the result.
It so happens that my answer is wrong as well (see the comments below by @user2357112 supports Monica).
But still, I'll leave it here for the folks who, like me, fell victim to the confusing new numpy.polynomial library.
FIRST: why is my code wrong?
Everything's OK with it. But the line print("The polynomial: {}".format(calibrationCurve)) doesn't print what I thought it must print. By default, Polynomial.fit() rescales the x data into the window [-1, 1] for numerical stability, so the printed coefficients refer to that scaled variable rather than to the original x (the mapping is recorded in the series' domain and window attributes). The object still stores the correct polynomial, and doing the thing suggested by @Joan Charmant may give you the correct answer if you ask it properly.
SECOND: how to use the new numpy.polynomial library in order to get a correct result?
Due to that peculiarity, you have to introduce an extra step: do the Polynomial.fit() and immediately afterwards call the .convert() method. Then work with the converted polynomial only.
Here's my code that works correctly now.
import numpy as np
Pgauge = np.asarray([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
NIST = np.asarray([1.1, 2.1, 3.1, 4.1, 5.1, 6.1, 7.1, 8.1])
calibrationCurveMessedUp = np.polynomial.polynomial.Polynomial.fit(Pgauge, NIST, deg=1)
calibrationCurve = calibrationCurveMessedUp.convert()
print("The polynomial: {}".format(calibrationCurve))
print("The rounded polynomial coefficients: {}".format(calibrationCurve.coef))
x = np.asarray([0, 1]) # values of x to evaluate the polynomial at
print(calibrationCurve(x))
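With the data above, the converted polynomial prints as roughly 0.1 + 1.0x (the data lie exactly on the line y = x + 0.1), and the last print gives [0.1, 1.1] for x = 0 and 1.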
THIRD: a little note.
Apparently, there is a possibility to get the correct polynomial without the additional line of code. Probably, you have to give the correct window and domain parameters to the Polynomial.fit() function. Or maybe there is another way.
If anybody knows such a way, you're welcome to edit my current answer and add your code.
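For the record, one candidate based on the numpy documentation for Polynomial.fit() (a sketch, not verified beyond this example): passing domain=[] is documented to make fit() use the class domain directly, so no x rescaling happens and no .convert() step is needed, at the cost of potentially worse numerical conditioning for higher degrees.
import numpy as np
Pgauge = np.asarray([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
NIST = np.asarray([1.1, 2.1, 3.1, 4.1, 5.1, 6.1, 7.1, 8.1])

# domain=[] -> the fit is done directly in the original x variable
calibrationCurve = np.polynomial.polynomial.Polynomial.fit(Pgauge, NIST, deg=1, domain=[])
print(calibrationCurve.coef)                 # approximately [0.1, 1.0]
print(calibrationCurve(np.asarray([0, 1])))  # approximately [0.1, 1.1]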
I am trying to do a simple linear curve fit with scipy; normally this method works fine for me, but this time, for a reason unknown to me, it doesn't.
(I suspect that maybe the numbers are so big that it reaches the limit of what can be stored under a given data type.)
Regardless of the reason, the idea is to make a plot like a reference one I have, in which the numbers on both axes are of a common order of magnitude (plot omitted here). This time, however, I tried to fit much bigger data points, on the order of 1e10. For this I used the following code (here I present only the code for making a scatter plot and then fitting one data set):
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

ucrt_T = 2/np.sqrt(3)
ucrt_U = 0.1/np.sqrt(3)
T = [314.1, 325.1, 335.1, 345.1, 355.1, 365.1, 374.1, 384.1, 393.1]
T_to_4th = [9733560790.61, 11170378213.80, 12609495509.84, 14183383217.88, 15900203737.92, 17768359469.96, 19586229219.65, 21765930026.49, 23878782252.31]
ucrt_T_lst = [143130823.11, 158701221.00, 173801148.95, 189829733.26, 206814686.75, 224783722.22, 241820148.88, 261735288.93, 280568229.17]
UBlack = [1.9, 3.1, 4.4, 5.6, 7.0, 8.7, 10.2, 11.8, 13.4]

def lin_function(x, a, b):
    return a*x + b

def line_fit_2():
    # Add the data points to the plot
    plt.scatter(UBlack, T_to_4th, color='blue')
    plt.errorbar(UBlack, T_to_4th, yerr=ucrt_T, fmt='o')

    # BLACK series
    VltBlack = np.array(UBlack)
    Tt4 = np.array(T_to_4th)
    popt, pcov = curve_fit(lin_function, VltBlack, Tt4, absolute_sigma=False)
    perr = np.sqrt(np.diag(pcov))
    y = lin_function(VltBlack, *popt)

    # Plot styling and appearance
    #plt.plot(Pressure1, y, '--', color = 'g', label="fit with: $a={:.3f}\pm{:.3f}$, $b={:.3f}\pm{:.3f}$" .format(popt[0], perr[0], popt[1], perr[1]))
    plt.plot(VltBlack, y, '--', color='green')
    plt.ylabel(r'$T^4$ in $[K^4]$')
    plt.xlabel(r'Thermometer voltage U in [mV]')
    plt.legend(['Fit', 'Data points'])
    plt.grid()
    plt.show()

line_fit_2()
If you run it, you will find that the scatter plot is created but the fit isn't executed properly: only a horizontal line is added. Additionally, a warning is raised: OptimizeWarning: Covariance of the parameters could not be estimated.
I would be very happy to know what I am doing wrong or how to resolve this problem. All help is appreciated!
You've pretty much already answered your question, so I'll just confirm your suspicion: the OptimizeWarning is raised because the underlying optimization algorithm doesn't work properly/diverges when the parameter values involved are this large.
The solution is very simple, just scale your input parameters before using the fitting tool. Just keep the scaling in mind when you add labels to your x/y axis:
T_to_4th = np.array([9733560790.61, 11170378213.80, 12609495509.84, 14183383217.88, 15900203737.92, 17768359469.96, 19586229219.65, 21765930026.49, 23878782252.31])/10e6
ucrt_T_lst = np.array([143130823.11, 158701221.00, 173801148.95, 189829733.26, 206814686.75, 224783722.22, 241820148.88, 261735288.93, 280568229.17])/10e6
What I did is just divide the arrays with big numbers by 10e6 (i.e. 1e7 in Python notation). This means the plotted values are no longer in the original unit but in units 1e7 times larger, so keep that in mind when labelling the axes.
To divide the entire list by the same value, first convert it to a numpy array.
Hope this helps :)
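In case it helps, here is a minimal sketch of the whole idea: scale y down, fit, then undo the scaling on the fitted parameters (the factor 1e7 is the same as the 10e6 used above and is otherwise arbitrary):
import numpy as np
from scipy.optimize import curve_fit

def lin_function(x, a, b):
    return a*x + b

# the question's data
UBlack = np.array([1.9, 3.1, 4.4, 5.6, 7.0, 8.7, 10.2, 11.8, 13.4])
T_to_4th = np.array([9733560790.61, 11170378213.80, 12609495509.84,
                     14183383217.88, 15900203737.92, 17768359469.96,
                     19586229219.65, 21765930026.49, 23878782252.31])

scale = 1e7                      # the 10e6 factor from above
popt, pcov = curve_fit(lin_function, UBlack, T_to_4th/scale)
a, b = popt*scale                # for y = a*x + b, scaling y by s scales both a and b by s
print(a, b)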
I'm trying to fit a sigmoid curve onto a small set of points, basically generating a probability curve from a set of observations. I'm using scipy.optimize.curve_fit, with a slightly modified logistic function (so as to be bound completely within [0,1]). Currently I have had the greatest success with the dogbox method, and an exact tr_solver.
When I attempt to run the code, for certain data points it will raise:
ValueError: `x0` violates bound constraints.
I did not run into this issue (using the same code and data) until I updated to the most recent version of numpy/scipy (numpy 1.17.0, scipy 1.3.1), so I believe it to be a result of this update. (I cannot downgrade, as other libraries that I require for other aspects of this project require these versions.)
I'm running this on a large dataset (N ~15000), and for very specific values the curve fit fails, claiming that the initial guess is outside of the bound constraints. This is not the case, and even checking quickly via the print statement before the curve fit in the provided example confirms this.
At first I had thought that it was a numpy precision error and that a value this small was considered to be out of bounds, but altering it slightly or providing a new, arbitrary number of a similar magnitude does not cause a ValueError. Additionally, other failed values are as big as ~1e-10, so I assume it must be something else.
Here is an example that fails for me every time:
import numpy as np
import scipy as sp
from scipy.special import expit, logit
import scipy.optimize

def f(x, x0, g, c, k):
    y = c*expit(k*10.*(x-x0)) + g*(1.-c)
    return y

#              x0                      g                       c                       k
p0 = np.array([8.841357069490852e-01, 4.492363462957287e-19, 5.547073496706608e-01, 7.435378446218519e+00])
bounds = np.array([[-1.,1.], [0.,1.], [0.,1.], [0.,20.]])
x = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 0.8911796599834791, 1.0, 1.0, 1.0, 0.33232919909076103, 1.0])
y = np.array([0.999, 0.999, 0.999, 0.999, 0.999, 0.001, 0.001, 0.001, 0.001, 0.001, 0.001])
s = np.array([0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9])

print([pval >= b[0] and pval <= b[1] for pval, b in zip(p0, bounds)])

fit, cov = sp.optimize.curve_fit(f, x, y, p0=p0, sigma=s,
                                 bounds=([b[0] for b in bounds], [b[1] for b in bounds]),
                                 method='dogbox', tr_solver='exact')
print(fit)
print(cov)
Here is the specific error stack, everything after the above call to curve_fit:
File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\scipy\optimize\minpack.py", line 763, in curve_fit
**kwargs)
File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\scipy\optimize\_lsq\least_squares.py", line 927, in least_squares
tr_solver, tr_options, verbose)
File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\scipy\optimize\_lsq\dogbox.py", line 310, in dogbox
J = jac(x, f)
File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\scipy\optimize\_lsq\least_squares.py", line 874, in jac_wrapped
kwargs=kwargs, sparsity=jac_sparsity)
File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\scipy\optimize\_numdiff.py", line 362, in approx_derivative
raise ValueError("`x0` violates bound constraints.")
ValueError: `x0` violates bound constraints.
If anyone has any insight as to what may be causing this, I would greatly appreciate the help! I did some searching and couldn't find any answers that may relate to this scenario, so I decided to open this question up. Thanks!
EDIT 9/9/19:
np.__version__ is 1.17.2 and sp.__version__ is 1.3.1, when I originally posted this I was on numpy 1.17.0 but upgrading has not fixed the issue. I'm running this on Python 3.6.6 on 64-bit Windows 10.
If I change either the second or fourth bound to +/-np.inf (or change both), then the code does in fact complete, but I am still unsure how my x0 is invalid (and I still need to have the fit bounded to these values).
EDIT: 1/22/20
Upgraded np.__version__ to 1.18.1 and sp.__version__ to 1.4.1, to no avail. I have opened an issue on the scipy GitHub repository for this error; however, it seems that they are also unable to reproduce the issue and therefore cannot address it.
Horrible hack. Do not do it at home :) But if you just need to get work done, at your own risk:
In
C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\scipy\optimize\_numdiff.py
Find:
if np.any((x0 < lb) | (x0 > ub)):
    raise ValueError("`x0` violates bound constraints.")
Replace with:
if np.any(((x0 - lb) < -1e-12) | (x0 > ub)):
    raise ValueError("`x0` violates bound constraints.")
Here -1e-12 is whatever error you think your case can tolerate in the lower-bound constraint (x0 - lb) >= 0, where x0 is the guess and lb is the lower bound.
I do not know what numerical horrors would result out of this hack. But if you just want to get going...
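For what it's worth, the same tolerance can be given to the upper bound, in case that side ever trips (same file, same risks):
if np.any(((x0 - lb) < -1e-12) | ((x0 - ub) > 1e-12)):
    raise ValueError("`x0` violates bound constraints.")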
I am trying to use scipy.optimize.fsolve() to solve for the x that makes the function equal to zero, but I keep getting the error below. My code is:
import scipy.optimize as optimize
from scipy.stats import genextreme as gev

gevcombined = [(-0.139, 3.035, 0.871), (-0.0863, 3.103, 0.818), (-0.198, 3.13, 0.982)]
ratio = [0.225, 0.139, 0.294]
P = [0.5, 0.8, 0.9, 0.96, 0.98, 0.99]

def mixedpop(x):
    for j in range(len(ratio)):
        F = (ratio[j]*gev.cdf(x, gevcombined[j][0], gevcombined[j][1], gevcombined[j][2])) + ((1 - ratio[j])*gev.cdf(x, gevcombined[j][0], gevcombined[j][1], gevcombined[j][2])) - P
    return F

initial = 10
Rm = optimize.fsolve(mixedpop, initial)
I keep getting the error:
ValueError: the array returned by a function changed size between calls
What does this error mean? The expected output would be a value of x for each value of P, so Rm would be something like [3.5, 4, 5.4, 6.3, 7.2, 8.1] for each ratio.
Okay I figured out how to get fsolve to work for an array of solutions.
It works if I write the whole thing like this:
Rm = []
initial = [10, 10, 10, 10, 10, 10]
for j in range(len(ratio)):
    f = lambda x: (ratio[j]*gev.cdf(x, gevcombined[j][0], gevcombined[j][1], gevcombined[j][2])) + ((1 - ratio[j])*gev.cdf(x, gevcombined[j][0], gevcombined[j][1], gevcombined[j][2])) - P
    Rm.append(list(optimize.fsolve(f, initial)))
And my output is:
[[3.37, 4.37, 5.13, 6.43, 7.91, 9.88],[3.41, 4.42, 5.09, 6.13, 7.07, 8.18],[3.49, 4.87, 5.95, 7.51, 8.80, 10.19]]
The error occurs because the shape of initial does not match the shape of the array your function returns; fsolve needs one starting value per equation:
initial = np.ones(len(P))
However, I cannot get my head around what your function is doing; the above does the trick for me.
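To illustrate the shape requirement with a standalone toy example (not the asker's data): fsolve solves n equations in n unknowns, so the array returned by the callback must have the same length as the initial guess.
import numpy as np
from scipy.optimize import fsolve

P = np.array([0.5, 0.8, 0.9, 0.96, 0.98, 0.99])

def residual(x):
    # one equation per element of P; a toy CDF stands in for gev.cdf
    return (1.0 - np.exp(-x)) - P

initial = np.ones(len(P))          # one starting value per equation
roots = fsolve(residual, initial)
print(roots)                       # equals -np.log(1 - P)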
Hi I have two numpy arrays (in this case representing depth and percentage depth dose data) as follows:
depth = np.array([ 0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. ,
1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2. , 2.2,
2.4, 2.6, 2.8, 3. , 3.5, 4. , 4.5, 5. , 5.5])
pdd = np.array([ 80.40649399, 80.35692155, 81.94323956, 83.78981286,
85.58681373, 87.47056637, 89.39149833, 91.33721651,
93.35729334, 95.25343909, 97.06283306, 98.53761309,
99.56624117, 100. , 99.62820672, 98.47564754,
96.33163961, 93.12182427, 89.0940637 , 83.82699219,
77.75436857, 63.15528566, 46.62287768, 29.9665386 ,
16.11104226, 6.92774817, 0.69401413, 0.58247614,
0.55768992, 0.53290371, 0.5205106 ])
which when plotted give a curve that rises to a 100% peak (at a depth of about 1.3) and then falls off (plot omitted here).
I need to find the depth at which the pdd falls to a given value (initially 50%). I have tried slicing the arrays at the point where the pdd reaches 100% as I'm only interested in the points after this.
Unfortunately np.interp only appears to work where both x and y values are increasing.
Could anyone suggest where I should go next?
If I understand you correctly, you want to interpolate the function depth = f(pdd) at pdd = 50.0. For the purposes of the interpolation, it might help for you to think of pdd as corresponding to your "x" values, and depth as corresponding to your "y" values.
You can use np.argsort to sort your "x" and "y" by ascending order of "x" (i.e. ascending pdd), then use np.interp as usual:
import numpy as np
import matplotlib.pyplot as plt

# `idx` is an array of integer indices that sorts `pdd` in ascending order
idx = np.argsort(pdd)
depth_itp = np.interp([50.0], pdd[idx], depth[idx])

plt.plot(depth, pdd)
plt.plot(depth_itp, 50, 'xr', ms=20, mew=2)
This isn't really a programming solution, but it's how you can find the depth. I'm taking the liberty of renaming your variables, so x(i) = depth(i) and y(i) = pdd(i).
In a given interval [x(i),x(i+1)], your linear interpolant is
p_1(X) = y(i) + (X - x(i))*(y(i+1) - y(i))/(x(i+1) - x(i))
You want to find X such that p_1(X) = 50. First find i such that y(i) > 50 and y(i+1) < 50 (on the decreasing part of the curve); then the above equation can be rearranged to give
X = x(i) + (50 - y(i))*((x(i+1) - x(i))/(y(i+1) - y(i)))
For your data (with MATLAB; sorry, no python code) I make it approximately 2.359. This can then be verified with np.interp(X, depth, pdd)
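For reference, here is the same rearrangement in Python, using the question's depth and pdd arrays (a sketch that scans only the decreasing part of the curve after the 100% peak):
import numpy as np

i0 = np.argmax(pdd)           # index of the 100% peak
x, y = depth[i0:], pdd[i0:]   # the decreasing branch

# find the interval with y(i) > 50 >= y(i+1)
i = np.where((y[:-1] > 50) & (y[1:] <= 50))[0][0]
X = x[i] + (50 - y[i])*(x[i+1] - x[i])/(y[i+1] - y[i])
print(X)                      # approximately 2.359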
There are several methods to carry out interpolation. In your case, you are basically looking for the depth at 50%, which is not available in your data; the simplest interpolation is the linear case. I'm using the Numerical Recipes library in C++ to acquire the interpolated value via several techniques, therefore:
Linear Interpolation: see page 117
interpolated value depth(50%): 2.35915
Polynomial Interpolation: see page 117
interpolated value depth(50%): 2.36017
Cubic Spline Interpolation: see page 120
interpolated value depth(50%): 2.19401
Rational Function Interpolation: see page 124
interpolated value depth(50%): 2.35986
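In Python, scipy.interpolate can reproduce at least the linear and cubic variants (a sketch, again assuming the question's depth and pdd arrays are defined and restricting to the decreasing branch; the exact cubic value depends on which points are included):
import numpy as np
from scipy.interpolate import interp1d

i0 = np.argmax(pdd)           # restrict to the decreasing branch
x, y = depth[i0:], pdd[i0:]

lin = interp1d(y, x)          # depth as a function of pdd
cub = interp1d(y, x, kind='cubic')
print(lin(50.0))              # approximately 2.359
print(cub(50.0))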
I have two arrays: one with 30 years of observations, and one with 30 years of historical model runs. I want to calculate the standard deviation between observations and model results, to see how much the model deviates from observations. How do I go about doing this?
Edit
Here are the two arrays (Each number represents a year(1971-2000)):
obs = np.array([2790.90283203, 2871.02514648, 2641.31738281, 2721.64453125,
                2554.19384766, 2773.7746582, 2500.95825195, 3238.41186523,
                2571.62133789, 2421.93017578, 2615.80395508, 2271.70654297,
                2703.82275391, 3062.25366211, 2656.18359375, 2593.62231445,
                2547.87182617, 2846.01245117, 2530.37573242, 2535.79931641,
                2237.58032227, 2890.19067383, 2406.27587891, 2294.24975586,
                2510.43847656, 2395.32055664, 2378.36157227, 2361.31689453,
                2410.75, 2593.62915039])
model = np.array([2976.01928711, 3353.92114258, 3000.92700195, 3116.5078125,
                  2935.31787109, 2799.75805664, 3328.06225586, 3344.66333008,
                  3318.31689453, 3348.85302734, 3578.70800781, 2791.78198242,
                  4187.99902344, 3610.77124023, 2991.984375, 3112.97412109,
                  4223.96826172, 3590.92724609, 3284.6015625, 3846.34936523,
                  3955.84350586, 3034.26074219, 3574.46362305, 3674.80175781,
                  3047.98144531, 3209.56616211, 2654.86547852, 2780.55053711,
                  3117.91699219, 2737.67626953])
You want to compare two signals, e.g. A and B in the following example:
import numpy as np
A = np.random.rand(5)
B = np.random.rand(5)
print "A:", A
print "B:", B
Output:
A: [ 0.66926369 0.63547359 0.5294013 0.65333154 0.63912645]
B: [ 0.17207719 0.26638423 0.55176735 0.05251388 0.90012135]
Analyzing individual signals
The standard deviation of each single signal is not what you need:
print "standard deviation of A:", np.std(A)
print "standard deviation of B:", np.std(B)
Output:
standard deviation of A: 0.0494162021651
standard deviation of B: 0.304319034639
Analyzing the difference
Instead you might compute the difference and apply some common measure like the sum of absolute differences (SAD), the sum of squared differences (SSD) or the correlation coefficient:
print "difference:", A - B
print "SAD:", np.sum(np.abs(A - B))
print "SSD:", np.sum(np.square(A - B))
print "correlation:", np.corrcoef(np.array((A, B)))[0, 1]
Output:
difference: [ 0.4971865 0.36908937 -0.02236605 0.60081766 -0.2609949 ]
SAD: 1.75045448355
SSD: 0.813021824351
correlation: -0.38247081
Use numpy.
import numpy as np
data = [1.2, 2.3, 1.3, 1.2, 5.4]
np.std(data)
Or you could try this:
import numpy as np
obs = np.array([1.2, 2.3, 1.3, 1.2, 5.4])
model = np.array([1.1, 2.4, 1.2, 1.2, 5.3])
np.std(obs-model)
The standard deviation at the same index across multiple lists (e.g. comparing model vs. measurement, multiple measurement series, etc.), such as
import numpy as np
obs = np.array([0,1,2,3,4])
model = np.array([2,4,6,8,10])
can be calculated by stacking the data into one array:
arr = np.vstack((obs,model))
Now the standard deviation is calculated using np.std() with a specific axis
std = np.std(arr,axis=0)
Alternative one line solution:
std = np.std((model,obs),axis=0)
Output:
[1.0, 1.5, 2.0, 2.5, 3.0]
If you're doing anything more complicated than just finding the standard deviation and/or mean, use numpy/scipy. If that's all you need to do, use the statistics package from the Python Standard Library.
>>> import statistics
>>> statistics.stdev([1, 2, 3])
1.0
It was added in Python 3.4 (see PEP-450) as a lightweight alternative to Numpy for basic stats equations.