I am trying to estimate a few parameters using constrained maximum likelihood in R, more specifically with constrOptim() from the stats package. I am programming in Python and calling R via RPy2.
In my model, I am assuming that the data follow the Beta-distribution, so I created a simulated dataset by using prespecified values for the parameters and now I am trying to estimate these parameters in order to verify that my estimation program works fine.
What I have observed is that my estimation is quite sensitive to the initial parameters. For example, I have 11 parameters to estimate (let's call them pam1..pam11) and their true values are:
pam1=0.2 pam2=0.3 pam3=0.4 pam4=0.7 pam5=0.55 pam6=0.45 pam7=0.1 pam8=0.01 pam9=0.01 pam10=45 pam11=45
In the constrOptim() I am setting the starting parameters as:
start_param=FloatVector((pam1,pam2,pam3,pam4,pam5,pam6,pam7,pam8,pam9,pam10,pam11))
where I set the starting values. I have observed that the results change when I use different sets of starting values. For example, when I use the set
start_param=FloatVector((0.2,0.3,0.4,0.6,0.7,0.8,0.3,0.011,0.011,15,15))
and I obtain the following estimates
$par
[1]  0.20851065  0.30348571  0.43616932  0.73695654  0.58287221  0.45541506
[7]  0.11191879  0.02233908  0.01988878 46.57249043 45.48544918
$value
[1] -215.9711
$convergence
[1] 0
but when I use another set, for example:
start_param=FloatVector((0.2,0.3,0.4,0.75,0.55,0.45,0.3,0.05,0.05,59,59))
the results change and it seems that I lose convergence:
$par
[1] 0.17218738 0.27165359 0.48458978 0.80295773 0.62618983 0.43254786
[7] 0.12426385 0.02991442 0.01853252 57.78269692 59.35376216
$value
[1] -146.9858
$convergence
[1] 1
My question is the following:
I have seen that Stata has an option that searches for better starting values for the numerical optimization algorithm. I tried to supply multiple starting values by passing a matrix, but this did not work.
Is there an option in constrOptim that will allow me to do something like this?
Many thanks in advance.
For additional information, the specification I use for the constrOptim() is:
res=statsr.constrOptim(start_param,Rmaxlikelihood,grad='NULL',ui=ui,ci=ci,method="Nelder-Mead",control=list("maxit=3000,trace=F"))
I came across a function in R which does exactly what I was looking for.
The package 'Rsolnp' has the function gosolnp, which is described as performing random initialization and multiple restarts of the solnp solver.
It is quite efficient and the documentation provides examples on how to use it.
More: http://cran.r-project.org/web/packages/Rsolnp/Rsolnp.pdf
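For reference, a rough sketch of how it might be called from Python via rpy2, assuming the same R objective Rmaxlikelihood (gosolnp minimizes, like constrOptim) and using placeholder box bounds LB/UB in place of the ui/ci constraints:

from rpy2.robjects.packages import importr
from rpy2.robjects.vectors import FloatVector

rsolnp = importr('Rsolnp')
# importr maps R's dotted argument names (n.restarts, n.sim) to underscores
res = rsolnp.gosolnp(fun=Rmaxlikelihood,
                     LB=FloatVector([0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1]),      # placeholder lower bounds
                     UB=FloatVector([1, 1, 1, 1, 1, 1, 1, 1, 1, 100, 100]),  # placeholder upper bounds
                     n_restarts=5,    # number of random restarts
                     n_sim=2000)      # candidate starting points generated per restart
print(res.rx2('pars'))                # parameters of the best run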
I'm trying to simulate a system of ODEs. While doing so, I need to increase the current value of certain variables by some factor at specific time points while odeint is running.
I tried the following, but I noticed that the time values passed to the function are floating point. This makes it difficult to specify an if-condition for adding a certain value to the inputs that are going to be integrated further in the process.
Below is the problem case. Please help me out with this.
def myfunc(s, t):
    # whenever the time is an even day, increase the variable by 2
    if t % 2 == 0:
        addition = 2
    else:
        addition = 0
    dsdt = (2*s + 8) + addition
    return dsdt
Problem: The incoming time (t) in the function is a floating point number. This prevents me from applying an if-condition for specific discrete even values of 't'.
Detailed description:
(a) I define a timespan vector, Tspan = np.linspace(1,100,100), and an initial condition s0 = [3].
(b) When I run odeint(myfunc, s0, Tspan), I need to update the incoming 's' variable by some factor, but only at certain time points (say, for t = 25, 50, 75).
(c) But if I place print(t) inside myfunc(s,t), I can see that the incoming 't' is a float.
(d) One important note is that myfunc is called more often than there are time steps in the Tspan vector. This is why the runtime 't' takes floating point values.
(e) Consequently, if I try something like "if ceil(t) % 25 == 0" (or round), the same integer is returned for the next 4 to 5 function calls, because several function calls happen between two subsequent time points. As a result, an if-condition on the ceiled 't' updates the incoming 's' for 4 to 5 subsequent function calls instead of once at a specific time point, and this should be avoided.
I hope my problem is clear. Please help me out if you can, in some way. Thanks folks!
All "professional" solvers use an internal adaptive step size. The step size for the output has no or minimal influence on this. Each method step uses multiple evaluations of the ODE function. Depending on the output sampling frequency, you can have multiple internal steps per output step, or multiple output steps get interpolated from the same internal step.
What you describe as the desired mechanism is different from your example code. Your example code changes a parameter of the ODE; the description amounts to a state change. While the first can be done (with deleterious but recoverable effects on the step size controller), the second requires an event-action mechanism with a state-changing action. No such mechanism is implemented in any of the scipy solvers.
The best way is to solve for the segments between the changes, perform the state change at the end of each segment and restart the integration with the new state. Use array concatenation on the time and value segments to get the large solution function table.
import numpy as np
from scipy.integrate import odeint

# fun(u, t) is the ODE right-hand side and u0 the initial state from your setup
t1 = np.linspace(0, 25, 25+1)
u10 = u0
u1 = odeint(fun, u10, t1)

t2 = t1 + 25          # or a more specific construction for non-equal interval lengths
u20 = 3*u1[-1]        # or some other change of the last state u1[-1]
u2 = odeint(fun, u20, t2)

t3 = t2 + 25
u30 = u2[-1].copy()
u30[0] -= 5           # or whatever the actual state change was supposed to be
u3 = odeint(fun, u30, t3)

# combine the segments for plotting; gives vertical lines at the event locations
t = np.concatenate([t1, t2, t3])
u = np.concatenate([u1, u2, u3])
For more segments it is better to organize this in a loop, especially if the state change at the event locations follows a uniform procedure depending on a few parameters.
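A minimal sketch of such a loop, assuming events every 25 time units and a hypothetical apply_change() standing in for whatever the actual state change is:

def apply_change(u_last):
    # hypothetical placeholder for the actual state change at an event
    u_new = u_last.copy()
    u_new[0] -= 5
    return u_new

t_segments, u_segments = [], []
u_start = u0
for k in range(4):                        # four segments: 0-25, 25-50, 50-75, 75-100
    t_seg = np.linspace(25*k, 25*(k+1), 26)
    u_seg = odeint(fun, u_start, t_seg)
    t_segments.append(t_seg)
    u_segments.append(u_seg)
    u_start = apply_change(u_seg[-1])     # event action between segments

t = np.concatenate(t_segments)
u = np.concatenate(u_segments)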
The original problem
While translating MATLAB code to Python, I have the function [parmhat,parmci] = gpfit(x,alpha). This function fits a Generalized Pareto Distribution and returns the parameter estimates, parmhat, and the 100(1-alpha)% confidence intervals for the parameter estimates, parmci.
MATLAB also provides the function gplike that returns acov, the inverse of Fisher's information matrix. This matrix contains the asymptotic variances on the diagonal when using MLE. I have the feeling this can be coupled to the confidence intervals as well, however my statistics background is not strong enough to understand if this is true.
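My rough understanding is that, if the usual MLE asymptotics apply, the intervals would come from a normal approximation based on the diagonal of acov, roughly like the sketch below (here parmhat and acov are assumed to already be numpy arrays; MATLAB may apply the approximation on a transformed scale, so I am not sure this matches gpfit exactly):

import numpy as np
from scipy.stats import norm

# parmhat: array of MLE estimates, acov: inverse of the Fisher information matrix
alpha = 0.05                              # for 95% intervals
z = norm.ppf(1 - alpha/2)
se = np.sqrt(np.diag(acov))               # asymptotic standard errors
parmci = np.vstack([parmhat - z*se, parmhat + z*se])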
What I am looking for is Python code that gives me the parmci values (I can get the parmhat values by using scipy.stats.genpareto.fit). I have been scouring Google and Stackoverflow for 2 days now, and I cannot find any approach that works for me.
While I am specifically working with the Generalized Pareto Distribution, I think this question can apply to many more (if not all) distributions that scipy.stats has.
My data: I am interested in the shape and scale parameters of the generalized pareto fit, the location parameter should be fixed at 0 for my fit.
What I have done so far
scipy.stats: While scipy.stats provides nice fitting performance, this library does not offer a way to calculate confidence intervals on the parameter estimates of the distribution fit.
scipy.optimize.curve_fit: As an alternative, I have seen it suggested to use scipy.optimize.curve_fit instead, as this does provide the estimated covariance of the fitted parameters. However, that fitting method uses least squares, whereas I need to use MLE, and I did not see a way to make curve_fit use MLE instead. Therefore it seems that I cannot use curve_fit.
statsmodels GenericLikelihoodModel: Next I found a suggestion to use statsmodels' GenericLikelihoodModel. The original question there used a gamma distribution and asked for a non-zero location parameter. I altered the code to:
import numpy as np
from statsmodels.base.model import GenericLikelihoodModel
from scipy.stats import genpareto
# Data contains 24 experimentally obtained values
data = np.array([3.3768732 , 0.19022354, 2.5862942 , 0.27892331, 2.52901677,
0.90682787, 0.06842895, 0.90682787, 0.85465385, 0.21899145,
0.03701204, 0.3934396 , 0.06842895, 0.27892331, 0.03701204,
0.03701204, 2.25411215, 3.01049545, 2.21428639, 0.6701813 ,
0.61671203, 0.03701204, 1.66554224, 0.47953739, 0.77665706,
2.47123239, 0.06842895, 4.62970341, 1.0827188 , 0.7512669 ,
0.36582134, 2.13282122, 0.33655947, 3.29093622, 1.5082936 ,
1.66554224, 1.57606579, 0.50645878, 0.0793677 , 1.10646119,
0.85465385, 0.00534871, 0.47953739, 2.1937636 , 1.48512994,
0.27892331, 0.82967374, 0.58905024, 0.06842895, 0.61671203,
0.724393 , 0.33655947, 0.06842895, 0.30709881, 0.58905024,
0.12900442, 1.81854273, 0.1597266 , 0.61671203, 1.39384127,
3.27432715, 1.66554224, 0.42232511, 0.6701813 , 0.80323855,
0.36582134])
params = genpareto.fit(data, floc=0, scale=0)
# HOW TO ESTIMATE/GET ERRORS FOR EACH PARAM?
print(params)
print('\n')
class Genpareto(GenericLikelihoodModel):
    nparams = 2

    def loglike(self, params):
        # params = (shape, loc, scale)
        return genpareto.logpdf(self.endog, params[0], 0, params[2]).sum()
res = Genpareto(data).fit(start_params=params)
res.df_model = 2
res.df_resid = len(data) - res.df_model
print(res.summary())
This gives me a somewhat reasonable fit:
Scipy stats fit: (0.007194143471555344, 0, 1.005020562073944)
Genpareto fit: (0.00716650293, 8.47750397e-05, 1.00504535)
However, in the end I get a warning when it tries to calculate the covariance:
HessianInversionWarning: Inverting hessian failed, no bse or cov_params available
If I do return genpareto.logpdf(self.endog, *params).sum() I get a worse fit compared to scipy stats.
Bootstrapping: Lastly I found mentions of bootstrapping. While I did sort of understand the idea behind it, I have no clue how to implement it. What I understand is that you should resample N times (1000, for example) from your data set (24 points in my case), do a fit on each sub-sample, and register the fit result. Then do a statistical analysis on the N results, i.e. calculate the mean, the standard deviation and then the confidence interval, like Estimate confidence intervals for parameters of distribution in python or Compute a confidence interval from sample data assuming unknown distribution. I even found some old MATLAB documentation on the calculations behind gpfit explaining this.
However I need my code to run fast, and I am not sure if any implementation that I make will do this calculation fast.
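For concreteness, the kind of resampling loop I have in mind (untested, assuming the data array above, a location fixed at 0, and an arbitrary number of resamples) would be something like:

import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
n_boot = 1000
estimates = np.empty((n_boot, 2))
for i in range(n_boot):
    sample = rng.choice(data, size=len(data), replace=True)
    c, loc, scale = genpareto.fit(sample, floc=0)        # refit on the resample
    estimates[i] = (c, scale)

alpha = 0.05
lower, upper = np.percentile(estimates, [100*alpha/2, 100*(1 - alpha/2)], axis=0)
print("shape CI:", (lower[0], upper[0]))
print("scale CI:", (lower[1], upper[1]))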
Conclusions: Does anyone know of a Python function that calculates this in an efficient manner, or can anyone point me to a topic where this has already been explained in a way that works for my case at least?
I had the same issue with GenericLikelihoodModel and I came across this post (https://pystatsmodels.narkive.com/9ndGFxYe/mle-error-warning-inverting-hessian-failed-maybe-i-cant-use-matrix-containers), which suggests using different starting parameter values to get a result with a positive definite Hessian. That solved my problem.
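In terms of the code above, that just means handing fit() a hand-picked start vector instead of the scipy fit result, for example (the values here are only illustrative):

start = np.array([0.1, 0.0, 1.0])    # illustrative (shape, loc, scale) starting values
res = Genpareto(data).fit(start_params=start)
print(res.cov_params())              # parameter covariance, available if the Hessian inversion succeeds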
I need to write a semidefinite program that minimizes the trace of an operator, say R, subject to the constraint that tr_A(R)^{Tb} >> 0. That means that R represents a 3-qubit quantum system; the trace over the first system gives you an operator that represents the remaining 2-qubit system. Taking the partial transpose with respect to one of the qubits, you get the partially transposed quantum state of the restricted 2-qubit system. It is this state that I want to make positive semidefinite.
I am using PICOS (to write the SDP) and qutip (to do the operations).
P = pic.Problem()
Rho = P.add_variable('Rho',(n,n),'hermitian')
P.add_constraint(pic.trace(Rho)==1)
P.add_constraint(Rho>>0)
RhoQOBJ = Qobj(Rho)
RhoABtr = ptrace(RhoQOBJ, [0,1])
RhoABqbj = partial_transpose(RhoABtr, [0], method='dense')
RhoAB = RhoABqbj.full()
Problem: I need to make Rho a Qobj for qutip to be able to understand it, but Rho above is only an instance of the PICOS Variable class. Does anyone have any idea how to do this?
I also looked here, http://picos.zib.de/tuto.html#variables , but that only made things more confusing, as that approach puts the variable instances in a dictionary and only gives you back a key.
You need to be able to output a numpy array or sparse matrix to convert to a Qobj. I could not find anything in the picos docs that discusses this option.
I am seeing this post very late, but maybe I can help... I am not sure what the function Qobj() is doing; can you please tell me more about it?
Otherwise, there is now a new partial_transpose() function in PICOS (version released today), which hopefully does what you need.
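As a very rough illustration of how it could be used (a hypothetical, untested sketch: the exact names and signatures of the partial trace / partial transpose helpers differ between PICOS versions, so please check the documentation of your installed release):

import picos as pic

n = 8                                    # three qubits
P = pic.Problem()
Rho = P.add_variable('Rho', (n, n), 'hermitian')
P.add_constraint(pic.trace(Rho) == 1)
P.add_constraint(Rho >> 0)

# trace out the first qubit, then partially transpose one of the remaining qubits
RhoBC = pic.partial_trace(Rho, subsystems=0, dimensions=2)           # assumed signature
RhoBC_pt = pic.partial_transpose(RhoBC, subsystems=0, dimensions=2)  # assumed signature
P.add_constraint(RhoBC_pt >> 0)

P.solve()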
Best,
Guillaume.
The smooth.spline function in R allows a tradeoff between roughness (as defined by the integrated square of the second derivative) and fitting the points (as defined by summing the squares of the residuals). This tradeoff is accomplished by the spar or df parameter. At one extreme you get the least squares line, and at the other you get a very wiggly curve which intersects all of the data points (or the mean if you have duplicated x values with different y values).
I have looked at scipy.interpolate.UnivariateSpline and other spline variants in Python; however, they seem to trade off only by increasing the number of knots and setting a threshold (called s) for the allowed SS residuals. By contrast, smooth.spline in R allows having knots at all the x values, without necessarily having a wiggly curve that hits all the points -- the penalty comes from the second derivative.
Does Python have a spline fitting mechanism that behaves in this way? Allowing all knots but penalizing the second derivative?
You can use R functions in Python with rpy2:
import numpy as np
import matplotlib.pyplot as plt
import rpy2.robjects as robjects

# x_train, y_train are your data; x_smooth is the grid on which to evaluate the spline
r_y = robjects.FloatVector(y_train)
r_x = robjects.FloatVector(x_train)

r_smooth_spline = robjects.r['smooth.spline']             # extract the R function
spline1 = r_smooth_spline(x=r_x, y=r_y, spar=0.7)         # run the smoothing function
ySpline = np.array(robjects.r['predict'](spline1, robjects.FloatVector(x_smooth)).rx2('y'))
plt.plot(x_smooth, ySpline)
If you want to set lambda directly: spline1 = r_smooth_spline(x=r_x, y=r_y, lambda=42) doesn't work, because lambda already has another meaning in Python, but there is a solution: see How to use the lambda argument of smooth.spline in RPy WITHOUT Python interpreting it as lambda.
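One common workaround (the one described in that question, if I remember correctly) is to pass the argument through a keyword dictionary so that Python never parses the bare name lambda:

spline1 = r_smooth_spline(x=r_x, y=r_y, **{'lambda': 42})  # 'lambda' forwarded as an R argument name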
To get the code running you first need to define the data x_train and y_train, and you can define x_smooth=np.array(np.linspace(-3,5,1920)) if you want to plot it between -3 and 5 in full-HD resolution.
Note that this code is not fully compatible with Jupyter notebooks for the latest versions of rpy2. You can fix this by using !pip install -Iv rpy2==3.4.2, as described in "NotImplementedError: Conversion 'rpy2py' not defined for objects of type '<class 'rpy2.rinterface.SexpClosure'>' only after I run the code twice".
I've been looking for exactly the same thing, but would rather not have to translate the code to Python. The Splinter package seems like an option, however: https://github.com/bgrimstad/splinter
From research on Google, I concluded that:
By contrast, the smooth.spline in R allows having knots at all the x values, without necessarily having a wiggly curve that hits all the points -- the penalty comes from the second derivative.
For this example I developed a simple Modelica model based on the Fluid library of the MSL. I connected a MassFlowSource to a pipe and a Boundary_PT as a sink, as in the picture below:
http://www.casimages.com/img.php?i=14061806120359130.png
I generated an FMU package with OpenModelica (in model-exchange mode).
I drive this FMU package from Python with the code below:
import pyfmi, os
from pyfmi import load_fmu
myModel = load_fmu('PathToFolder\\test3.fmu')
res1 = myModel.simulate() # First simulation with m_flow in source set to [1] Kg/s
x = myModel.get('boundary1.m_flow') # Mass flow rate of the source
y = myModel.get('pipe.port_a.m_flow') # Mass flow rate in pipe
print(x, y)
myModel.set('boundary1.m_flow', 2)
option = myModel.simulate_options()
option['initialize'] = False # Do not re-initialize; continue from the final state of the previous run
res2 = myModel.simulate(options = option) # Second simulation with m_flow in source set to [2] Kg/s
x = myModel.get('boundary1.m_flow') # Mass flow rate of the source
y = myModel.get('pipe.port_a.m_flow') # Mass flow rate in pipe
print(x, y)
os.system('pause')
The objective is to show a problem that occurs when you change a parameter in the model, here the "m_flow" variable in the source component. Setting it to "2" should change the "m_flow" in the pipe, but it does not.
Results: In the first simulation both "m_flow" values are "1", which is expected because that is how the model is set up. In the second simulation, I set the parameter to "2" in the source, but the pipe "m_flow" stays at "1" (it should be "2").
http://www.casimages.com/img.php?i=140618060905759619.png
The Modelica model of the fluid source is the following (only the relevant part):
equation
if not use_m_flow_in then
m_flow_in_internal = m_flow;
end if;
connect(m_flow_in, m_flow_in_internal);
I think the FMU does not take a parameter into account when it appears inside an if-condition. For me this is a problem because I need to manage FMUs and to be sure that if I set a parameter, the simulation will use the new value. How can I be sure that FMU/FMI works correctly? Is there an exhaustive list of the kinds of parameters that cannot be changed in an FMU?
I already know that parameters which change the number of equations cannot be handled in FMU management (likewise for variables which change the index of the DAEs).
Note that OpenModelica has a concept of structural parameters and the Evaluate=true annotation. For example, if a parameter is used as an array dimension, it might be evaluated to an Integer value. All uses of that parameter will use the evaluated value, as if it was a constant.
Rather than including a picture of the diagram, the Modelica source code would have been easier to look at in order to find out what OpenModelica did to the system.
I suspect a parameter was evaluated. If you generate non-FMU code, you could inspect the modelName_init.xml generated by OpenModelica and find the entry for a parameter and look for the property isValueChangeable.
You could also use OMEdit to debug the system and view the initial equation (generate the executable including debug information). File->Open Transformations File, then select the modelName_info.xml file. Search for the variable you tried to change and go to the initial equation that defined it. It could very well be that a start-value (set by PyFMI) is ignored because it is not needed to produce a solution.
Whenever you try to set new values for the parameter, follow these steps:
1. Reset the model.
2. Set the new values for the parameter.
3. Simulate the model.
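In terms of the PyFMI code above, that would look roughly like the sketch below (assuming the model object returned by load_fmu supports reset(), which PyFMI models generally do):

myModel.reset()                          # 1. reset the model to its initial state
myModel.set('boundary1.m_flow', 2)       # 2. set the new parameter value
res2 = myModel.simulate()                # 3. simulate again, with initialization
print(myModel.get('pipe.port_a.m_flow'))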
I am not familiar with PyFMI, but I have encountered the same situation before. You could try a few of the things below.
Try to terminate/free the instance after your first simulation.
As most parameters cannot be changed after initialization, you could expose that parameter as an input connector, so that this specific parameter can be changed at any time (see the sketch after this list).
(In FMUs from Dymola) I also found that if that parameter is involved in your initial nonlinear system of equations, then you will get an error, "the model could not be initialized", if you try to initialize the model on the same instance.
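A rough sketch of the input-connector idea from the PyFMI side (hypothetical: it assumes the mass flow is exposed as an input named 'm_flow_in', and uses the (name, time/value table) form of the input argument of simulate()):

import numpy as np

t_points = np.array([0.0, 0.5, 1.0])
u_values = np.array([1.0, 2.0, 2.0])     # step the mass flow from 1 to 2 kg/s at t = 0.5
input_object = ('m_flow_in', np.transpose(np.vstack([t_points, u_values])))
res = myModel.simulate(final_time=1.0, input=input_object)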