In my program, I am applying a Box-Cox transform to my data, and at a certain step of my experiment I want to reverse the transformation. However, I noticed there are two variants of boxcox:
scipy.special.boxcox
scipy.stats.boxcox
I learned that the first option has an inverse function, scipy.special.inv_boxcox, that reverses the Box-Cox transform.
However, I just want to know why in scipy.special the lambda parameter cannot be None, while in scipy.stats it can be. In my code I am actually using scipy.stats and the lambda is None. Now, if I want to switch to scipy.special in order to use its inverse function, what should I set lambda to?
Here is my current code:
elif self.output_box:
    y_train, self.y_train_lambda_ = boxcox(y_train)
    y_test, self.y_test_lambda_ = boxcox(y_test)
They both use the same formula for the transformation, so the only difference is that scipy.stats.boxcox can also compute the optimal lambda for the data. If you call scipy.stats.boxcox with lmbda=None, it returns two values: the transformed array and the lambda that maximizes the log-likelihood function (and, if alpha is not None, the confidence interval for lambda as well). That is the lambda you have to use with the inverse transformation.
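A minimal round-trip sketch (assuming strictly positive data):

import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox

y = np.random.exponential(size=100)      # strictly positive data

# lmbda=None (the default): boxcox also returns the fitted lambda
y_trans, fitted_lambda = boxcox(y)

# invert the transform with that same lambda
y_back = inv_boxcox(y_trans, fitted_lambda)
assert np.allclose(y, y_back)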
In my ODE function I need to iteratively solve an equation for a parameter until convergence at each time step. I'd like to carry the latest parameter value over as the initial value at the next time step, so that the iterative update of the parameter takes less time. But I can't figure out how to do that. The code structure of the ODE function is like this:
from scipy.integrate import solve_ivp

def run(t, y):
    if t == 0:
        a = 1e-8
    nn = 0
    while nn <= 100:
        nn = nn + 1
        # update a until convergence
    return a * y
In some other languages I could return the updated parameter to be used by the integrator, but I don't see how that's possible with solve_ivp.
It's not clear what you're after: do you want to obtain a solution of an ODE at a series of parameter values (i.e., for each value of the parameter you solve the full ODE), or are you changing the parameter along with the ODE iterations (in other words, do you want inner or outer iterations)?
If the former, then just do a for loop over the parameters. If the latter, it's likely easier and cleaner to use the solver classes which implement specific solvers (DOPRI, Radau, RK, BDF etc.), which solve_ivp delegates the work to. They offer a step method, which performs a single step, so that you can adjust your parameters, control convergence etc. in whatever way is most relevant to this particular case.
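For illustration, a minimal sketch of that stepping pattern with RK45 and a made-up right-hand side:

import numpy as np
from scipy.integrate import RK45

def rhs(t, y):
    return -0.5 * y          # placeholder right-hand side

solver = RK45(rhs, t0=0.0, y0=np.array([1.0]), t_bound=10.0)
while solver.status == 'running':
    solver.step()            # advance exactly one internal step
    # adjust parameters / check convergence here, between steps
    print(solver.t, solver.y)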
I think what you are looking for is something in the following form:
class test:
    a = 1e-8

    def f(self, t, y):
        ## do iter on self.a
        return self.a * y

t = test()
# solve_ivp(t.f, .....)
This way you can always use the last value of a, since it is part of your instance of the test class. This is not exactly what you are asking for, since the iteration will run each time solve_ivp evaluates f, which can be multiple times per time step. However, I think this is the closest you can get, since solve_ivp does not appear to have a callback function that is invoked after each time step.
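To make the pattern concrete, here is a runnable sketch with a dummy update standing in for the real convergence loop:

import numpy as np
from scipy.integrate import solve_ivp

class Model:
    a = 1e-8

    def f(self, t, y):
        self.a = 0.5 * (self.a + 1.0)   # dummy update; replace with the real iteration on a
        return self.a * y

m = Model()
sol = solve_ivp(m.f, t_span=(0.0, 1.0), y0=[1.0])
print(sol.y[:, -1])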
I have two loss functions here to be minimized:
The first one is a local one, where:
min f1(x1),
min f2(x2),
min f3(x3),
...,
min fn(xn)
The other one is a global one, where:
min f(x1, x2, ..., xn) = f1(x1) + f2(x2) + ... + fn(xn)
For each local problem fi(xi), I have 2 variables to optimize, and I have 1000 local problems. Correspondingly, for the global problem, I have 2000 variables to optimize. Surely the 2nd one has more parameters to optimize, but since f1, f2, f3, ..., fn are independent of each other, I would hope the two are comparable.
I use the scipy minimize function for optimization (scipy.optimize.minimize). But the 2nd one is much, much slower than the 1st one.
The only drawback of the global one, I think, is that it takes more gradients than it actually needs to. For example, the gradient of x1 only comes from f1, but the global problem also computes its gradient from f2, f3, ..., fn, all of which contribute 0. That, I suspect, is what makes it slower. If that is the case, I hope there are some ways to accelerate it.
By the way, since I later need to add a global constraint to the optimization, I must use the global loss function instead of the local ones.
I think your guess is correct: the extra time is spent computing the gradients. Based on the documentation page for scipy.optimize.minimize (https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html), the method estimates the gradient numerically unless you provide it (jac is optional and off by default):
jac : bool or callable, optional
Jacobian (gradient) of objective function. Only for CG, BFGS, Newton-CG, L-BFGS-B, TNC, SLSQP, dogleg, trust-ncg, trust-krylov, trust-exact. If jac is a Boolean and is True, fun is assumed to return the gradient along with the objective function. If False, the gradient will be estimated numerically. jac can also be a callable returning the gradient of the objective. In this case, it must accept the same arguments as fun.
Based on the above, you can set jac=True and provide your function as a callable that returns the function value as well as the gradient. This should speed up the process.
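As a sketch, with a made-up separable objective standing in for f1 + ... + fn (your fi will differ); because the fi are independent, the full gradient is just the per-block gradients concatenated:

import numpy as np
from scipy.optimize import minimize

def fun_and_grad(x):
    blocks = x.reshape(-1, 2)            # 1000 blocks of 2 variables each
    f = np.sum((blocks - 1.0) ** 2)      # stand-in for f1(x1) + ... + fn(xn)
    g = 2.0 * (blocks - 1.0)             # analytic gradient, block by block
    return f, g.ravel()

x0 = np.zeros(2000)
res = minimize(fun_and_grad, x0, jac=True, method='L-BFGS-B')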
One other way is to write your own custom minimizer and pass it as a callable (the method argument also accepts a callable).
I am using the scipy.optimize.minimize function, and I'd like one parameter to be searched only over values with two decimals.
from scipy.optimize import minimize
from sklearn.metrics import mean_squared_error

def cost(parameters, input, target):
    output = self.model(parameters=parameters, input=input)
    cost = mean_squared_error(target.flatten(), output.flatten())
    return cost

parameters = [1, 1]  # initial parameters
res = minimize(fun=cost, x0=parameters, args=(input, target))
model_parameters = res.x
Here self.model is a function that performs some matrix manipulation based on the parameters; input and target are two matrices. The function works the way I want, except I would like parameters[1] to be constrained. Ideally I'd just like to give it a numpy array of allowed values, like np.arange(0, 10, 0.01). Is this possible?
In general this is very hard to do, as smoothness is one of the core assumptions of those optimizers.
Problems where some variables are discrete and some are not are hard, and usually tackled either by mixed-integer optimization (working well for MI linear programming; quite okay for MI convex programming, although there are fewer good solvers) or global optimization (usually derivative-free).
Depending on your task details, I recommend decomposing the problem:
outer loop over the np.arange(0, 10, 0.01)-like grid, fixing the variable
inner loop for optimizing, with this variable held fixed
return the model with the best objective (with status=success)
This will result in N inner optimizations, where N is the size of the state space of the variable to be fixed.
Depending on your task/data, it might be a good idea to traverse the fixing space monotonically (as np.arange does) and use the solution of iteration i as the initial point for problem i+1 (potentially fewer iterations are needed if the guess is good). But this is probably not relevant here; see the next part.
If you really have only 2 parameters, as indicated, this decomposition leads to an inner problem with only 1 variable. Then don't use minimize; use minimize_scalar (faster and more robust; does not need an initial point).
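A minimal sketch of that decomposition, assuming the cost function and two-parameter layout from the question:

import numpy as np
from scipy.optimize import minimize_scalar

# cost, input, target: as defined in the question above
best = None
for p1 in np.arange(0, 10, 0.01):               # fix the two-decimal variable
    res = minimize_scalar(lambda p0: cost([p0, p1], input, target))
    if res.success and (best is None or res.fun < best[0]):
        best = (res.fun, res.x, p1)             # keep the best successful inner solve

best_cost, best_p0, best_p1 = best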
I have a function that takes in a multivariate argument x. Here x = [x1,x2,x3]. Let's say my function looks like:
f(x, T) = np.dot(x, T) + np.exp(np.dot(x, T)), where T is a constant.
I am interested in finding df/dx1, df/dx2 and df/dx3 functions.
I have achieved some success using scipy's diff, but I am a bit skeptical because it uses numerical differences. Yesterday, my colleague pointed me to Autograd (github). Since it seems to be a popular package, I am hoping someone here knows how to get partial derivatives using this package. My initial tests with this library indicate that the grad function only differentiates with respect to the first argument. I am not sure how to extend it to other arguments. Any help would be greatly appreciated.
Thanks.
I found the following description of the grad function in the autograd source code:
def grad(fun, argnum=0):
    """Returns a function which computes the gradient of `fun` with
    respect to positional argument number `argnum`. The returned
    function takes the same arguments as `fun`, but returns the
    gradient instead. The function `fun` should be scalar-valued.
    The gradient has the same type as the argument."""
So
import autograd.numpy as np
from autograd import grad

def h(x, t):
    return np.dot(x, t) + np.exp(np.dot(x, t))

h_x = grad(h, 0)  # derivative with respect to x
h_t = grad(h, 1)  # derivative with respect to t
Also make sure to use the numpy library that comes with autograd:
import autograd.numpy as np
instead of
import numpy as np
in order to make use of all numpy functions.
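For example, evaluating the partial derivatives at a point (values made up for illustration):

import autograd.numpy as np
from autograd import grad

def h(x, t):
    return np.dot(x, t) + np.exp(np.dot(x, t))

h_x = grad(h, 0)

x = np.array([0.1, 0.2, 0.3])
T = np.array([1.0, 2.0, 3.0])
print(h_x(x, T))   # [df/dx1, df/dx2, df/dx3] evaluated at (x, T)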
I'm using the differential_evolution algorithm in scipy to fit some data with various exponential functions convolved with Gaussian functions. This in itself is not a problem; the function fits the data well.
However, it is not giving the Jacobian in the result dictionary (which I would like to use to calculate the errors on my fit constants), despite the fact that I have set polish (i.e., use scipy.optimize.minimize with the L-BFGS-B method to polish the best population member at the end) to True, in which case the documentation states it should return the Jacobian. My function takes the Gaussian width and any number of exponents, and is being fit like so:
result = differential_evolution(exponentialfit, bounds, args=(avgspectra, c, fitfrom, errors, numcomponents, 1), tol=0.000000000001, disp=True, polish=True)
Is there any reason it is not giving the Jacobian in the result output?