How do you create a logit-normal distribution in Python? - python

Following this post, I tried to create a logit-normal distribution by creating the LogitNormal class:
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import logit
from scipy.stats import norm, rv_continuous

class LogitNormal(rv_continuous):
    def _pdf(self, x, **kwargs):
        return norm.pdf(logit(x), **kwargs)/(x*(1-x))

class OtherLogitNormal:
    def pdf(self, x, **kwargs):
        return norm.pdf(logit(x), **kwargs)/(x*(1-x))

fig, ax = plt.subplots()
values = np.linspace(10e-10, 1-10e-10, 1000)
sigma, mu = 1.78, 0
ax.plot(
    values, LogitNormal().pdf(values, loc=mu, scale=sigma), label='subclassed'
)
ax.plot(
    values, OtherLogitNormal().pdf(values, loc=mu, scale=sigma),
    label='not subclassed'
)
ax.legend()
fig.show()
However, the LogitNormal class does not produce the desired results. When I don't subclass rv_continuous it works. Why is that? I need the subclassing to work because I also need the other methods that come with it like rvs.
By the way, the only reason I am creating my own logit-normal distribution in Python is that the only implementations of that distribution I could find were in the PyMC3 package and in the TensorFlow package, both of which are pretty heavy / overkill if you only need them for that one distribution. I already tried PyMC3, but it apparently doesn't play well with scipy and it always crashed for me. But that's a whole different story.

Forewords
I came across this problem this week and the only relevant issue I have found about it is this post. I have almost the same requirement as the OP:
Having a random variable for Logit Normal distribution.
But I also need:
To be able to perform statistical tests as well;
While being compliant with the scipy random variable interface.
As @Jacques Gaudin pointed out, the interface for rv_continuous (see distribution architecture for details) does not pass the loc and scale keyword arguments on to _pdf when you inherit from this class. This is somewhat misleading and unfortunate.
Implementing the __init__ method does of course create the missing binding, but the trade-off is that it breaks the pattern scipy currently uses to implement random variables (see an example of implementation for lognormal).
So I took the time to dig into the scipy code and created an MCVE for this distribution. Although it is not totally complete (it mainly lacks moment overrides), it fits the bill for both the OP's purposes and mine, with satisfactory accuracy and performance.
MCVE
An interface-compliant implementation of this random variable could be:
import numpy as np
from scipy import special, stats

class logitnorm_gen(stats.rv_continuous):

    def _argcheck(self, m, s):
        return (s > 0.) & (m > -np.inf)

    def _pdf(self, x, m, s):
        return stats.norm(loc=m, scale=s).pdf(special.logit(x))/(x*(1-x))

    def _cdf(self, x, m, s):
        return stats.norm(loc=m, scale=s).cdf(special.logit(x))

    def _rvs(self, m, s, size=None, random_state=None):
        return special.expit(m + s*random_state.standard_normal(size))

    def fit(self, data, **kwargs):
        return stats.norm.fit(special.logit(data), **kwargs)

logitnorm = logitnorm_gen(a=0.0, b=1.0, name="logitnorm")
This implementation unlocks most of the potential of scipy random variables:
N = 1000
law = logitnorm(0.24, 1.31) # Defining a RV
sample = law.rvs(size=N) # Sampling from RV
params = logitnorm.fit(sample) # Infer parameters w/ MLE
check = stats.kstest(sample, law.cdf) # Hypothesis testing
bins = np.arange(0.0, 1.1, 0.1) # Bin boundaries
expected = np.diff(law.cdf(bins)) # Expected bin probabilities
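The moment overrides noted above as missing could be supplied numerically if needed. Below is a minimal sketch of that idea (my own addition, not part of the original MCVE): it computes raw moments of X = expit(Z) by integrating against the underlying normal density, and could be wired into the class through scipy's _munp hook.
from scipy import integrate

def logitnorm_moment(n, m, s):
    # n-th raw moment of X = expit(Z) with Z ~ Normal(m, s),
    # obtained by numerical integration over the normal density.
    f = lambda z: special.expit(z)**n * stats.norm.pdf(z, loc=m, scale=s)
    return integrate.quad(f, -np.inf, np.inf)[0]

mean = logitnorm_moment(1, 0.24, 1.31)
variance = logitnorm_moment(2, 0.24, 1.31) - mean**2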
As it relies on the scipy normal distribution, we may assume the underlying functions have the same accuracy and performance as the normal random variable object. But it might indeed be subject to floating-point inaccuracy, especially when dealing with highly skewed distributions at the support boundary.
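As a quick sanity check on that boundary behaviour, one can evaluate the pdf and cdf very close to 0 and 1; this is my own check, not part of the original answer:
eps = 1e-12
law = logitnorm(0.24, 1.31)
print(law.pdf([eps, 1 - eps]))  # stays finite and tends to 0 at both ends
print(law.cdf([eps, 1 - eps]))  # close to 0 and 1 respectively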
Tests
To check how it performs, we draw some distributions of interest and check them.
Let's create some fixtures:
import itertools

def generate_fixtures(
        locs=[-2.0, -1.0, 0.0, 0.5, 1.0, 2.0],
        scales=[0.32, 0.56, 1.00, 1.78, 3.16],
        sizes=[100, 1000, 10000],
        seeds=[789, 123456, 999999]
):
    for (loc, scale, size, seed) in itertools.product(locs, scales, sizes, seeds):
        yield {"parameters": {"loc": loc, "scale": scale}, "size": size, "random_state": seed}
And perform checks on related distributions and samples:
eps = 1e-8
x = np.linspace(0. + eps, 1. - eps, 10000)

for fixture in generate_fixtures():
    # Reference:
    parameters = fixture.pop("parameters")
    normal = stats.norm(**parameters)
    sample = special.expit(normal.rvs(**fixture))
    # Logit Normal Law:
    law = logitnorm(m=parameters["loc"], s=parameters["scale"])
    check = law.rvs(**fixture)
    # Fit:
    p = logitnorm.fit(sample)
    trial = logitnorm(*p)
    resample = trial.rvs(**fixture)
    # Hypothesis Tests:
    ks = stats.kstest(check, trial.cdf)
    bins = np.histogram(resample)[1]
    obs = np.diff(trial.cdf(bins))*fixture["size"]
    ref = np.diff(law.cdf(bins))*fixture["size"]
    chi2 = stats.chisquare(obs, ref, ddof=2)
Some fits with n=1000, seed=789 (this sample is quite normal) are shown below:

If you look at the source code of the pdf method, you will notice that _pdf is called without the scale and loc keyword arguments.
if np.any(cond):
    goodargs = argsreduce(cond, *((x,)+args+(scale,)))
    scale, goodargs = goodargs[-1], goodargs[:-1]
    place(output, cond, self._pdf(*goodargs) / scale)
As a result, the kwargs in your overriding _pdf method is always an empty dictionary.
If you look a bit closer at the code, you will also notice that the scaling and location are handled by pdf as opposed to _pdf.
In your case, the _pdf method calls norm.pdf so the loc and scale parameters must somehow be available in LogitNormal._pdf.
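To see concretely how pdf handles loc and scale before delegating to _pdf, the following quick check (my own illustration, not from the original answer) compares norm.pdf with the standardized call that rv_continuous effectively makes internally:
import numpy as np
from scipy.stats import norm

x, loc, scale = 0.3, 0.1, 1.78
lhs = norm.pdf(x, loc=loc, scale=scale)
rhs = norm._pdf((x - loc) / scale) / scale  # what pdf does before calling _pdf
print(np.isclose(lhs, rhs))  # True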
You could for example pass scale and loc when creating an instance of LogitNormal and store the values as class attributes:
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import logit
from scipy.stats import norm, rv_continuous

class LogitNormal(rv_continuous):
    def __init__(self, scale=1, loc=0):
        super().__init__(a=0, b=1)  # the support of the logit-normal is (0, 1)
        self.scale = scale
        self.loc = loc

    def _pdf(self, x):
        return norm.pdf(logit(x), loc=self.loc, scale=self.scale)/(x*(1-x))

fig, ax = plt.subplots()
values = np.linspace(10e-10, 1-10e-10, 1000)
sigma, mu = 1.78, 0
ax.plot(
    values, LogitNormal(scale=sigma, loc=mu).pdf(values), label='subclassed'
)
ax.legend()
fig.show()
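With this subclass, rvs becomes available too, although without an explicit _rvs or _cdf it falls back on slow numerical inversion of the cdf. Since a logit-normal variate is just the logistic sigmoid (expit) of a normal variate, samples can also be drawn directly; a minimal sketch, my own addition rather than part of the answer above:
from scipy.special import expit
from scipy.stats import norm

mu, sigma = 0, 1.78
samples = expit(norm.rvs(loc=mu, scale=sigma, size=10000))  # logit-normal draws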

Related

Custom numpy (or scipy?) probability distribution for random number generation

The issue
Tl;dr: I would like a function that randomly returns a float (or optionally an ndarray of floats) in an interval, following a probability distribution that resembles the sum of a "Gaussian" and a uniform distribution.
The function (or class) - let's say custom_distr() - should have as inputs (with default values already given):
the lower and upper bounds of the interval: low=0.0, high=1.0
the mean and standard deviation parameters of the "Gaussian": loc=0.5, scale=0.02
the size of the output: size=None
size can be an integer or a tuple of integers. If so, then loc and scale can either both simultaneously be scalars, or ndarrays whose shape corresponds to size.
The output is a scalar or an ndarray, depending on size.
The output has to be normalized so that the cumulative distribution reaches 1 (I'm uncertain how to do this).
Note that I'm following numpy.random.Generator's naming convention from the uniform and normal distributions as a reference, but the nomenclature and the packages used do not really matter to me.
What I've tried
Since I couldn't find a way to "add" numpy.random.Generator's uniform and Gaussian distributions directly, I've tried using scipy.stats.rv_continuous subclassing, but I'm stuck at how to define the _rvs method, or the _ppf method to make it fast.
From what I've understood of the rv_continuous class definition on GitHub, _rvs uses numpy's random.RandomState (which is out of date in comparison to random.Generator) to make the distributions. This seems to defeat the purpose of using scipy.stats.rv_continuous subclassing.
Another option would be to define _ppf, the percent-point function of my custom distribution, since according to the rv_generic class definition on GitHub, the default _rvs implementation uses _ppf. But I'm having trouble defining this function by hand.
Below is an MWE, tested using low=0.0, high=1.0, loc=0.3 and scale=0.02. The names are different from those in the "The issue" section because numpy and scipy use different terminology.
import numpy as np
from scipy.stats import rv_continuous
import scipy.special as sc
import matplotlib.pyplot as plt
import time

# The class definition
class custom_distr(rv_continuous):
    def __init__(self, my_loc=0.5, my_scale=0.5, a=0.0, b=1.0, *args, **kwargs):
        # keyword args so that a and b are not mistaken for momtype
        super(custom_distr, self).__init__(*args, a=a, b=b, **kwargs)
        self.a = a
        self.b = b
        self.my_loc = my_loc
        self.my_scale = my_scale

    def _pdf(self, x):
        # uniform distribution
        aux = 1/(self.b-self.a)
        # gaussian distribution
        aux += 1/np.sqrt(2*np.pi*self.my_scale**2) * \
            np.exp(-(x-self.my_loc)**2/2/self.my_scale**2)
        return aux/2  # divide by 2?

    def _cdf(self, x):
        # uniform distribution
        aux = (x-self.a)/(self.b-self.a)
        # gaussian distribution
        aux += 0.5*(1+sc.erf((x-self.my_loc)/(self.my_scale*np.sqrt(2))))
        return aux/2  # divide by 2?

# Testing the class
if __name__ == "__main__":
    my_cust_distr = custom_distr(name="my_dist", my_loc=0.3, my_scale=0.02)
    x = np.linspace(0.0, 1.0, 10000)
    start_t = time.time()
    the_pdf = my_cust_distr.pdf(x)
    print("PDF calc time: {:4.4f}".format(time.time()-start_t))
    plt.plot(x, the_pdf, label='pdf')
    start_t = time.time()
    the_cdf = my_cust_distr.cdf(x)
    print("CDF calc time: {:4.4f}".format(time.time()-start_t))
    plt.plot(x, the_cdf, 'r', alpha=0.8, label='cdf')
    # Get 10000 random values according to the custom distribution
    start_t = time.time()
    r = my_cust_distr.rvs(size=10000)
    print("RVS calc time: {:4.4f}".format(time.time()-start_t))
    plt.hist(r, density=True, histtype='stepfilled', alpha=0.3, bins=40)
    plt.ylim([0.0, the_pdf.max()])
    plt.grid(which='both')
    plt.legend()
    print("Maximum of CDF is: {:2.1f}".format(the_cdf[-1]))
    plt.show()
The generated image is:
The output is:
PDF calc time: 0.0010
CDF calc time: 0.0010
RVS calc time: 11.1120
Maximum of CDF is: 1.0
The time computing the RVS method is too slow in my approach.
According to Wikipedia, the ppf, or percent-point function (also called the Quantile function), can be written as the inverse function of the cumulative distribution function (cdf), when the cdf increases monotonically.
From the figure shown in the question, the cdf of my custom distribution function does, indeed, increase monotonically - as is expected, since the cdf's of Gaussian and uniform distributions do so too.
The ppf of the general normal distribution can be found on this Wikipedia page under "Quantile function". And the ppf of a uniform distribution defined between a and b can be calculated simply as p*(b-a)+a, where p is the desired probability.
But the inverse function of the sum of two functions cannot (in general) be trivially written as a function of the inverses! See this Mathematics Exchange post for more information.
Therefore, the partial "solution" I have found thus far is to save an array containing the cdf of my custom distribution when instantiating an object and then finding the ppf (i.e. the inverse function of the cdf) via 1D interpolation, which only works as long as the cdf is indeed a monotonically increasing function.
NOTE 1: I still haven't fixed the bounds-check issue mentioned by Peter O.
NOTE 2: The proposed solution is unviable if an ndarray of locs were given, because of the lack of a closed-form expression for the quantile function. Therefore, the original question is still open.
The working code is now:
import numpy as np
from scipy.stats import rv_continuous
import scipy.special as sc
import matplotlib.pyplot as plt
import time

# The class definition
class custom_distr(rv_continuous):
    def __init__(self, my_loc=0.5, my_scale=0.5, a=0.0, b=1.0,
                 init_ppf=1000, *args, **kwargs):
        # keyword args so that a and b are not mistaken for momtype
        super(custom_distr, self).__init__(*args, a=a, b=b, **kwargs)
        self.a = a
        self.b = b
        self.my_loc = my_loc
        self.my_scale = my_scale
        self.x = np.linspace(a, b, init_ppf)
        self.cdf_arr = self._cdf(self.x)

    def _pdf(self, x):
        # uniform distribution
        aux = 1/(self.b-self.a)
        # gaussian distribution
        aux += 1/np.sqrt(2*np.pi)/self.my_scale * \
            np.exp(-0.5*((x-self.my_loc)/self.my_scale)**2)
        return aux/2  # equal-weight mixture of the two densities, hence the /2

    def _cdf(self, x):
        # uniform distribution
        aux = (x-self.a)/(self.b-self.a)
        # gaussian distribution
        aux += 0.5*(1+sc.erf((x-self.my_loc)/(self.my_scale*np.sqrt(2))))
        return aux/2  # equal-weight mixture of the two cdfs, hence the /2

    def _ppf(self, p):
        if np.any((p < 0.0) | (p > 1.0)):
            raise RuntimeError("Quantile function accepts only values between 0 and 1")
        return np.interp(p, self.cdf_arr, self.x)

# Testing the class
if __name__ == "__main__":
    a = 1.0
    b = 3.0
    my_loc = 1.5
    my_scale = 0.02
    my_cust_distr = custom_distr(name="my_dist", a=a, b=b,
                                 my_loc=my_loc, my_scale=my_scale)
    x = np.linspace(a, b, 10000)
    start_t = time.time()
    the_pdf = my_cust_distr.pdf(x)
    print("PDF calc time: {:4.4f}".format(time.time()-start_t))
    plt.plot(x, the_pdf, label='pdf')
    start_t = time.time()
    the_cdf = my_cust_distr.cdf(x)
    print("CDF calc time: {:4.4f}".format(time.time()-start_t))
    plt.plot(x, the_cdf, 'r', alpha=0.8, label='cdf')
    start_t = time.time()
    r = my_cust_distr.rvs(size=10000)
    print("RVS calc time: {:4.4f}".format(time.time()-start_t))
    plt.hist(r, density=True, histtype='stepfilled', alpha=0.3, bins=100)
    plt.ylim([0.0, the_pdf.max()])
    # plt.xlim([a, b])
    plt.grid(which='both')
    plt.legend()
    print("Maximum of CDF is: {:2.1f}".format(the_cdf[-1]))
    plt.show()
The resulting image is:
And the output is:
PDF calc time: 0.0010
CDF calc time: 0.0010
RVS calc time: 0.0010
Maximum of CDF is: 1.0
The code is faster than before, at the cost of using a bit more memory.
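Since the ppf is obtained by interpolating a cdf tabulated on init_ppf points, its accuracy depends on that grid size. A quick way to gauge the error (my own check, using the my_cust_distr instance from the script above) is to round-trip through cdf and ppf:
p = np.linspace(0.001, 0.999, 1001)
roundtrip_err = np.max(np.abs(my_cust_distr.cdf(my_cust_distr.ppf(p)) - p))
print("max |cdf(ppf(p)) - p| =", roundtrip_err)  # shrinks as init_ppf grows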

Gumbel half distribution

I have generated a probability density function in Python using scipy, with the code below:
import matplotlib.pyplot as plt
from scipy.stats import gumbel_l
import numpy as np
data = gumbel_l.rvs(size=100000)
data = np.sort(data)
plt.hist(data, bins=50, density=True)
plt.plot(data, gumbel_l.pdf(data))
plt.show()
My question is whether there is a way to get a one-tailed version of this distribution, namely the left side, both to generate samples from it and to fit a pdf to it.
You can create a custom rv_continuous subclass. The minimum requirement is that the custom class provides a pdf. The pdf can be obtained from gumbel_l's pdf up to x = 0, and is zero for positive x. The pdf needs to be normalized so its area equals 1, which we can do by dividing by gumbel_l's cdf(0).
With only the pdf implemented, you'll notice that obtaining random variates (.rvs) is rather slow. Scipy rv_continuous very slow explains that this can be remedied either by generating more variates than needed and throwing away the values that are too high, or by providing an implementation for the ppf.
As the ppf can be obtained straightforwardly from gumbel_l's ppf, the code below implements that solution. A similar approach can be used to truncate at another position, or even to truncate at two spots (see the sketch after the code).
import matplotlib.pyplot as plt
from scipy.stats import gumbel_l, rv_continuous
import numpy as np

class gumbel_l_trunc_gen(rv_continuous):
    "truncated gumbel_l distribution"
    def __init__(self, name='gumbel_l_trunc'):
        self.gumbel_l_cdf_0 = gumbel_l.cdf(0)
        self.gumbel_trunc_normalize = 1 / self.gumbel_l_cdf_0
        super().__init__(name=name)

    def _pdf(self, x):
        return np.where(x <= 0, gumbel_l.pdf(x) * self.gumbel_trunc_normalize, 0)

    def _cdf(self, x):
        return np.where(x <= 0, gumbel_l.cdf(x) * self.gumbel_trunc_normalize, 1)

    def _ppf(self, x):
        return gumbel_l.ppf(x * self.gumbel_l_cdf_0)

gumbel_l_trunc = gumbel_l_trunc_gen()
data = gumbel_l_trunc.rvs(size=100000)
x = np.linspace(min(data), 1, 500)
plt.hist(data, bins=50, density=True)
plt.plot(x, gumbel_l_trunc.pdf(x))
plt.show()
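As mentioned above, the same idea extends to truncation at two spots. Here is a minimal sketch of that generalisation (my own addition, not part of the original answer): it truncates gumbel_l to an interval [lo, hi] by rescaling the probability mass between the two cut points.
from scipy.stats import gumbel_l, rv_continuous
import numpy as np

class gumbel_l_twoside_gen(rv_continuous):
    "gumbel_l truncated to the interval [lo, hi]"
    def __init__(self, lo=-3.0, hi=0.0, name='gumbel_l_twoside'):
        self.cdf_lo = gumbel_l.cdf(lo)
        self.cdf_hi = gumbel_l.cdf(hi)
        self.mass = self.cdf_hi - self.cdf_lo  # probability kept after truncation
        super().__init__(a=lo, b=hi, name=name)

    def _pdf(self, x):
        return gumbel_l.pdf(x) / self.mass

    def _cdf(self, x):
        return (gumbel_l.cdf(x) - self.cdf_lo) / self.mass

    def _ppf(self, q):
        return gumbel_l.ppf(self.cdf_lo + q * self.mass)

gumbel_l_twoside = gumbel_l_twoside_gen(lo=-3.0, hi=0.0)
sample = gumbel_l_twoside.rvs(size=10000)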

Are these functions equivalent?

I am building a neural network that makes use of T-distribution noise. I am using functions defined in the numpy library np.random.standard_t and the one defined in tensorflow tf.distributions.StudentT. The link to the documentation of the first function is here and that to the second function is here. I am using the said functions like below:
a = np.random.standard_t(df=3, size=10000) # numpy's function
t_dist = tf.distributions.StudentT(df=3.0, loc=0.0, scale=1.0)
sess = tf.Session()
b = sess.run(t_dist.sample(10000))
In the documentation provided for the Tensorflow implementation, there's a parameter called scale whose description reads
The scaling factor(s) for the distribution(s). Note that scale is not technically the standard deviation of this distribution but has semantics more similar to standard deviation than variance.
I have set scale to be 1.0 but I have no way of knowing for sure if these refer to the same distribution.
Can someone help me verify this? Thanks
I would say they are, as their sampling is defined in almost the exact same way in both cases. This is how the sampling of tf.distributions.StudentT is defined:
def _sample_n(self, n, seed=None):
    # The sampling method comes from the fact that if:
    #   X ~ Normal(0, 1)
    #   Z ~ Chi2(df)
    #   Y = X / sqrt(Z / df)
    # then:
    #   Y ~ StudentT(df).
    seed = seed_stream.SeedStream(seed, "student_t")
    shape = tf.concat([[n], self.batch_shape_tensor()], 0)
    normal_sample = tf.random.normal(shape, dtype=self.dtype, seed=seed())
    df = self.df * tf.ones(self.batch_shape_tensor(), dtype=self.dtype)
    gamma_sample = tf.random.gamma([n],
                                   0.5 * df,
                                   beta=0.5,
                                   dtype=self.dtype,
                                   seed=seed())
    samples = normal_sample * tf.math.rsqrt(gamma_sample / df)
    return samples * self.scale + self.loc  # Abs(scale) not wanted.
So it is a standard normal sample divided by the square root of a chi-square sample with parameter df divided by df. The chi-square sample is taken as a gamma sample with parameter 0.5 * df and rate 0.5, which is equivalent (chi-square is a special case of gamma). The scale value, like the loc, only comes into play in the last line, as a way to "relocate" the distribution sample at some point and scale. When scale is one and loc is zero, they do nothing.
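For reference, the same recipe can be reproduced with plain NumPy, which makes the comparison with the standard t easy to carry out; this is a sketch of my own, not code taken from either library:
import numpy as np

rng = np.random.default_rng(0)
df, n = 3.0, 10000
z = rng.standard_normal(n)                         # X ~ Normal(0, 1)
g = rng.gamma(shape=0.5 * df, scale=2.0, size=n)   # Gamma(df/2, rate=1/2) == Chi2(df)
t = z / np.sqrt(g / df)                            # Y ~ StudentT(df)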
Here is the implementation for np.random.standard_t:
double legacy_standard_t(aug_bitgen_t *aug_state, double df) {
  double num, denom;

  num = legacy_gauss(aug_state);
  denom = legacy_standard_gamma(aug_state, df / 2);
  return sqrt(df / 2) * num / sqrt(denom);
}
So it is essentially the same thing, slightly rephrased. Here we also have a gamma with shape df / 2, but it is standard (rate one). However, the missing factor of 0.5 now appears in the numerator, as the / 2 inside sqrt(df / 2). So it's just moving the numbers around. Here there is no scale or loc, though.
In truth, the difference is that the TensorFlow distribution is a location-scale generalization of the t-distribution (the sample is shifted by loc and stretched by scale), which reduces to the standard t when loc=0.0 and scale=1.0. A simple empirical check that they are the same for loc=0.0 and scale=1.0 is to plot histograms for both distributions and see how close they look.
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

np.random.seed(0)
t_np = np.random.standard_t(df=3, size=10000)
with tf.Graph().as_default(), tf.Session() as sess:
    tf.random.set_random_seed(0)
    t_dist = tf.distributions.StudentT(df=3.0, loc=0.0, scale=1.0)
    t_tf = sess.run(t_dist.sample(10000))
plt.hist((t_np, t_tf), np.linspace(-10, 10, 20), label=['NumPy', 'TensorFlow'])
plt.legend()
plt.tight_layout()
plt.show()
Output:
That looks pretty close. Obviously, from the point of view of statistical samples, this is not any kind of proof. If you are still not convinced, there are statistical tools for testing whether a sample comes from a certain distribution, or whether two samples come from the same distribution, for example the two-sample Kolmogorov-Smirnov test sketched below.
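A minimal sketch of such a test (my own addition), assuming the t_np and t_tf samples from the snippet above:
from scipy.stats import ks_2samp

stat, p_value = ks_2samp(t_np, t_tf)
# A large p-value means the test finds no evidence that the two samples come
# from different distributions (it cannot prove that they are identical).
print(stat, p_value)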

Lognorm distribution fitting

I am trying to do a lognorm distribution fit, but the resulting parameters seem a bit odd. Could you please show me my mistake, or explain whether I am misinterpreting the parameters?
import numpy as np
import scipy.stats as st
data = np.array([1050000, 1100000, 1230000, 1300000, 1450000, 1459785, 1654000, 1888000])
s, loc, scale = st.lognorm.fit(data)
#calculating the mean
lognorm_mean = st.lognorm.mean(s = s, loc = loc, scale = scale)
The resulting mean is: 945853602904015.8.
But this doesn't make any sense.
The mean should be:
data_ln = np.log(data)
ln_mean = np.mean(data_ln)
ln_std = np.std(data_ln)
mean = np.exp(ln_mean + np.power(ln_std, 2)/2)
Here the resulting mean is 1391226.31. This should be correct.
Can you please help me with this topic?
Best regards
Norbi
I think you can tune the parameters of the minimizer to get an acceptable result:
import numpy as np
import scipy.stats as st
from scipy.optimize import minimize

data = np.array([1050000, 1100000, 1230000, 1300000,
                 1450000, 1459785, 1654000, 1888000])

def opti_wrap(fun, x0, args, disp=0, **kwargs):
    return minimize(fun, x0, args=args, method='SLSQP',
                    tol=1e-12, options={'maxiter': 1000}).x

s, loc, scale = st.lognorm.fit(data, optimizer=opti_wrap)
lognorm_mean = st.lognorm.mean(s=s, loc=loc, scale=scale)
print(lognorm_mean)  # should give 1392684.4350
The reason you are seeing a strange result is that the default minimizer fails to converge on the maximum likelihood result. This could be due to a misbehaving cost function with so few data points (you are trying to fit 3 parameters but only have 8 data points...). Note: I'm using scipy version 1.1.0.
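Another common way to stabilise a lognormal fit with so few points, which may be worth trying here, is to fix the location parameter at zero so that only two parameters are estimated. A sketch of that alternative (my own addition, not part of the original answer):
import numpy as np
import scipy.stats as st

data = np.array([1050000, 1100000, 1230000, 1300000,
                 1450000, 1459785, 1654000, 1888000])
s, loc, scale = st.lognorm.fit(data, floc=0)  # fix loc=0, fit only s and scale
print(st.lognorm.mean(s=s, loc=loc, scale=scale))  # close to exp(ln_mean + ln_std**2/2)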

using output of scipy.interpolate.UnivariateSpline later in python or in Matlab without needing original datapoints

I'm using scipy.interpolate.UnivariateSpline to smoothly interpolate a large amount of data. Works great. I get an object which acts like a function.
Now I want to save the spline points for later and use them in Matlab (and also Python, but that's less urgent), without needing the original data. How can I do this?
In scipy I have no clue; UnivariateSpline does not seem to offer a constructor with the previously-computed knots and coefficients.
In MATLAB, I've tried the Matlab functions spline() and pchip(), and while both come close, they have errors near the endpoints that look kind of like Gibbs ears.
Here is a sample set of data that I have, in Matlab format:
splinedata = struct('coeffs',[-0.0412739180955273 -0.0236463479425733 0.42393753107602 -1.27274336116436 0.255711720888164 1.93923263846732 -2.30438927604816 1.02078680231079 0.997156858475075 -2.35321792387215 0.667027554745454 0.777918416623834],...
'knots',[0 0.125 0.1875 0.25 0.375 0.5 0.625 0.75 0.875 0.9999],...
'y',[-0.0412739180955273 -0.191354308450615 -0.869601364377744 -0.141538578624065 0.895258135865578 -1.04292294390242 0.462652465278345 0.442550440125204 -1.03967756446455 0.777918416623834])
The coefficients and knots are the result of calling get_coeffs() and get_knots() on the scipy UnivariateSpline. The 'y' values are the values of the UnivariateSpline at the knots, or more precisely:
y = f(f.get_knots())
where f is my UnivariateSpline.
How can I use this data to make a spline that matches the behavior of the UnivariateSpline, without having to use the Curve-Fitting Toolbox? I don't need to do any data fitting in Matlab, I just need to know how to construct a cubic spline from knots/coefficients/spline values.
You can do it by using _eval_args and _from_tck() from the UnivariateSpline class. The first one returns the spline parameters (the tck tuple), which you can store and later use to create a similar spline object with the second one.
Here is an example:
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import UnivariateSpline
x = np.linspace(-3, 3, 50)
y = np.exp(-x**2) + 0.1 * np.random.randn(50)
spl1 = UnivariateSpline(x, y, s=.5)
xi = np.linspace(-3, 3, 1000)
tck = spl1._eval_args
spl2 = UnivariateSpline._from_tck(tck)
plt.plot(x, y, 'ro', ms=5, label='data')
plt.plot(xi, spl1(xi), 'b', label='original spline')
plt.plot(xi, spl2(xi), 'y:', lw=4, label='recovered spline')
plt.legend()
plt.show()
In scipy, try scipy.interpolate.splev, which takes
tck: a sequence ... containing the knots, coefficients, and degree of the spline.
Added: the following python class creates spline functions:
init with (knots, coefs, degree),
then use it just like spline functions created by UnivariateSpline( x, y, s ):
from scipy.interpolate import splev
# http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.splev.html

class Splinefunc:
    """ splinef = Splinefunc( knots, coefs, degree )
        ...
        y = splinef( x )  # __call__
        19june untested
    """
    def __init__( self, knots, coefs, degree ):
        self.knots = knots
        self.coefs = coefs
        self.degree = degree

    def __call__( self, x ):
        return splev( x, (self.knots, self.coefs, self.degree ))
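To address the MATLAB half of the question, the knots, coefficients and degree could be written to a .mat file with scipy.io.savemat and re-evaluated later without the original data. The following is a sketch of that workflow (my own suggestion; the file and variable names are illustrative):
import numpy as np
from scipy.interpolate import UnivariateSpline, splev
from scipy.io import savemat

x = np.linspace(-3, 3, 50)
y = np.exp(-x**2) + 0.1 * np.random.randn(50)
f = UnivariateSpline(x, y, s=0.5)

knots, coefs, degree = f._eval_args  # same (t, c, k) triple discussed above
savemat('splinedata.mat', {'knots': knots, 'coefs': coefs, 'degree': degree})

# Later, in Python, the spline can be evaluated without the original data:
y_new = splev(0.3, (knots, coefs, degree))
# The (knots, coefs, degree) triple fully describes the B-spline in de Boor form,
# so in MATLAB it can be evaluated with a B-spline (de Boor) evaluator.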
