Declare constraints for basinhopping optimization - python

I'm having trouble creating a dictionary for the constraints using scipy.optimize.basinhopping. I'm able to get my code to run (without constraints), but the answer doesn't make sense because I need to enforce some constraints. For now, I'm only trying to get one constraint working but for the final solution I need to figure out how to implement several constraints. The code I have now is:
x0 = [f1, f2, f3, f4, f5, f6, f7, f8, f9, f10, f11]
args = (arg1, arg2, arg3, arg4)
def func(x, *args):
    # Do some math
    return result
# This is where I need help most
cons = {'type': 'ineq', 'fun': lambda x: x[5] - x[4]}
minimizer_kwargs = {"method": "COBYLA", "args": "args", "constraints": "cons"}
ret = scipy.optimize.basinhopping(func, x0, minimizer_kwargs=minimizer_kwargs)
But I get this error when trying to run it:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 601, in runfile
execfile(filename, namespace)
File "C:\Python27\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 66, in execfile
exec(compile(scripttext, filename, 'exec'), glob, loc)
File "C:/Python27/Scripts/SpectralResearch/mainScripts/main.py", line 121, in <module>
ret = scipy.optimize.basinhopping(func,x0,minimizer_kwargs=minimizer_kwargs)
File "C:\Python27\lib\site-packages\scipy\optimize\_basinhopping.py", line 605, in basinhopping
accept_tests, disp=disp)
File "C:\Python27\lib\site-packages\scipy\optimize\_basinhopping.py", line 72, in __init__
minres = minimizer(self.x)
File "C:\Python27\lib\site-packages\scipy\optimize\_basinhopping.py", line 279, in __call__
return self.minimizer(self.func, x0, **self.kwargs)
File "C:\Python27\lib\site-packages\scipy\optimize\_minimize.py", line 432, in minimize
return _minimize_cobyla(fun, x0, args, constraints, **options)
File "C:\Python27\lib\site-packages\scipy\optimize\cobyla.py", line 218, in _minimize_cobyla
raise TypeError('Constraints must be defined using a '
TypeError: Constraints must be defined using a dictionary.
Essentially I need to enforce the constraint that certain variables are greater than others. I've been looking at the documentation ([1], [2]) and articles, but haven't found anything that works. Any ideas what I could be doing wrong?

You passed the strings "args" and "cons" instead of the variables with those names:
minimizer_kwargs = {"method": "COBYLA", "args": args, "constraints": cons}
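In case it helps, here is a minimal self-contained sketch of the corrected call with more than one constraint; the objective and the second constraint are placeholders, not the question's actual math. COBYLA only handles 'ineq' constraints (fun(x) >= 0), and several of them can be passed as a list of dictionaries:
import numpy as np
from scipy.optimize import basinhopping

def func(x, *args):
    # placeholder objective; the real one does "some math" with args
    return np.sum((x - 1.0) ** 2)

x0 = np.zeros(11)
args = (1, 2, 3, 4)

# one dictionary per constraint; each fun must be >= 0 at a feasible point
cons = [
    {'type': 'ineq', 'fun': lambda x: x[5] - x[4]},    # enforces x[5] >= x[4]
    {'type': 'ineq', 'fun': lambda x: x[10] - x[9]},   # placeholder: x[10] >= x[9]
]

minimizer_kwargs = {"method": "COBYLA", "args": args, "constraints": cons}
ret = basinhopping(func, x0, minimizer_kwargs=minimizer_kwargs)
print(ret.x)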

LU decomposition error in statsmodels ARIMA model

I know there is a very similar question and answer on Stack Overflow (here), but this seems to be distinctly different: I am using statsmodels v0.13.2, and an ARIMA model as opposed to a SARIMAX model.
I am trying to fit a list of time series data sets with an ARIMA model. The offending piece of my code is here:
import math
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
items = np.log(og_items)
items['count'] = items['count'].apply(lambda x: 0 if math.isnan(x) or math.isinf(x) else x)
model = ARIMA(items, order=(14, 0, 7))
trained = model.fit()
items is a dataframe containing a date index and a single column, count.
I apply the lambda because some counts can be zero, which produces negative infinity after the log is applied. The final data going into the ARIMA contains no NaNs or infinite numbers. However, when I skip the log step I do not get the error. It only occurs on certain series, and there seems to be no rhyme or reason to which ones are affected: one series had about half of its values zeroed out by the lambda, while another did not have a single zero. Here is the error:
Traceback (most recent call last):
File "item_pipeline.py", line 267, in <module>
main()
File "item_pipeline.py", line 234, in main
restaurant_predictions = make_predictions(restaurant_data=restaurant_data, models=models,
File "item_pipeline.py", line 138, in make_predictions
predictions = model(*data_tuple[:2], min_date=min_date, max_date=max_date,
File "/Users/rob/Projects/5out-ml/models/item_level/items/predict_arima.py", line 127, in predict_daily_arima
predict_date_arima(prediction_dict, item_dict, prediction_date, x_days_out=x_days_out, log_vals=log_vals,
File "/Users/rob/Projects/5out-ml/models/item_level/items/predict_arima.py", line 51, in predict_date_arima
raise e
File "/Users/rob/Projects/5out-ml/models/item_level/items/predict_arima.py", line 47, in predict_date_arima
fitted = model.fit()
File "/Users/rob/Projects/5out-ml/venv/lib/python3.8/site-packages/statsmodels/tsa/arima/model.py", line 390, in fit
res = super().fit(
File "/Users/rob/Projects/5out-ml/venv/lib/python3.8/site-packages/statsmodels/tsa/statespace/mlemodel.py", line 704, in fit
mlefit = super(MLEModel, self).fit(start_params, method=method,
File "/Users/rob/Projects/5out-ml/venv/lib/python3.8/site-packages/statsmodels/base/model.py", line 563, in fit
xopt, retvals, optim_settings = optimizer._fit(f, score, start_params,
File "/Users/rob/Projects/5out-ml/venv/lib/python3.8/site-packages/statsmodels/base/optimizer.py", line 241, in _fit
xopt, retvals = func(objective, gradient, start_params, fargs, kwargs,
File "/Users/rob/Projects/5out-ml/venv/lib/python3.8/site-packages/statsmodels/base/optimizer.py", line 651, in _fit_lbfgs
retvals = optimize.fmin_l_bfgs_b(func, start_params, maxiter=maxiter,
File "/Users/rob/Projects/5out-ml/venv/lib/python3.8/site-packages/scipy/optimize/_lbfgsb_py.py", line 199, in fmin_l_bfgs_b
res = _minimize_lbfgsb(fun, x0, args=args, jac=jac, bounds=bounds,
File "/Users/rob/Projects/5out-ml/venv/lib/python3.8/site-packages/scipy/optimize/_lbfgsb_py.py", line 362, in _minimize_lbfgsb
f, g = func_and_grad(x)
File "/Users/rob/Projects/5out-ml/venv/lib/python3.8/site-packages/scipy/optimize/_differentiable_functions.py", line 286, in fun_and_grad
self._update_grad()
File "/Users/rob/Projects/5out-ml/venv/lib/python3.8/site-packages/scipy/optimize/_differentiable_functions.py", line 256, in _update_grad
self._update_grad_impl()
File "/Users/rob/Projects/5out-ml/venv/lib/python3.8/site-packages/scipy/optimize/_differentiable_functions.py", line 173, in update_grad
self.g = approx_derivative(fun_wrapped, self.x, f0=self.f,
File "/Users/rob/Projects/5out-ml/venv/lib/python3.8/site-packages/scipy/optimize/_numdiff.py", line 505, in approx_derivative
return _dense_difference(fun_wrapped, x0, f0, h,
File "/Users/rob/Projects/5out-ml/venv/lib/python3.8/site-packages/scipy/optimize/_numdiff.py", line 576, in _dense_difference
df = fun(x) - f0
File "/Users/rob/Projects/5out-ml/venv/lib/python3.8/site-packages/scipy/optimize/_numdiff.py", line 456, in fun_wrapped
f = np.atleast_1d(fun(x, *args, **kwargs))
File "/Users/rob/Projects/5out-ml/venv/lib/python3.8/site-packages/scipy/optimize/_differentiable_functions.py", line 137, in fun_wrapped
fx = fun(np.copy(x), *args)
File "/Users/rob/Projects/5out-ml/venv/lib/python3.8/site-packages/statsmodels/base/model.py", line 531, in f
return -self.loglike(params, *args) / nobs
File "/Users/rob/Projects/5out-ml/venv/lib/python3.8/site-packages/statsmodels/tsa/statespace/mlemodel.py", line 939, in loglike
loglike = self.ssm.loglike(complex_step=complex_step, **kwargs)
File "/Users/rob/Projects/5out-ml/venv/lib/python3.8/site-packages/statsmodels/tsa/statespace/kalman_filter.py", line 983, in loglike
kfilter = self._filter(**kwargs)
File "/Users/rob/Projects/5out-ml/venv/lib/python3.8/site-packages/statsmodels/tsa/statespace/kalman_filter.py", line 903, in _filter
self._initialize_state(prefix=prefix, complex_step=complex_step)
File "/Users/rob/Projects/5out-ml/venv/lib/python3.8/site-packages/statsmodels/tsa/statespace/representation.py", line 983, in _initialize_state
self._statespaces[prefix].initialize(self.initialization,
File "statsmodels/tsa/statespace/_representation.pyx", line 1362, in statsmodels.tsa.statespace._representation.dStatespace.initialize
File "statsmodels/tsa/statespace/_initialization.pyx", line 288, in statsmodels.tsa.statespace._initialization.dInitialization.initialize
File "statsmodels/tsa/statespace/_initialization.pyx", line 406, in statsmodels.tsa.statespace._initialization.dInitialization.initialize_stationary_stationary_cov
File "statsmodels/tsa/statespace/_tools.pyx", line 1206, in statsmodels.tsa.statespace._tools._dsolve_discrete_lyapunov
numpy.linalg.LinAlgError: LU decomposition error.
The solution in the other stackoverflow post was to initialize the statespace differently. It looks like the statespace is involved, if you look at the last few lines of the error. However, it does not seem that that workflow is exposed in the newer version of statsmodels. Is it? If not, what else can I try to circumvent this error?
So far, I have tried manually initializing the model to approximate diffuse, and manually setting the initialize property to approximate diffuse. Neither seems to be valid in the new statsmodels code.
Turns out there's a new way to initialize. The second line below is the operative line.
model = ARIMA(items, order=(14, 0, 7))
model.initialize_approximate_diffuse() # this line
trained = model.fit()
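If it is useful, here is a hedged sketch of applying that fix only when the default initialization fails, reusing the question's data-cleaning line and catching the LinAlgError shown in the traceback:
import math
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

items = np.log(og_items)  # og_items as in the question
items['count'] = items['count'].apply(lambda x: 0 if math.isnan(x) or math.isinf(x) else x)

try:
    trained = ARIMA(items, order=(14, 0, 7)).fit()
except np.linalg.LinAlgError:
    # fall back to the approximate-diffuse initialization shown above
    model = ARIMA(items, order=(14, 0, 7))
    model.initialize_approximate_diffuse()
    trained = model.fit()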

Integer, multi-objective optimization with Platypus (Python)

I am exploring the Platypus library for multi-objective optimization in Python. Platypus appears to support integer variables (optimization parameters) out of the box; however, this simple problem (two objectives, three variables, no constraints, Integer variables, solved with SMPSO):
from platypus import *
def my_function(x):
    """Some objective function"""
    return [-x[0] ** 2 - x[2] ** 2, x[1] - x[0]]
def AsInteger():
    problem = Problem(3, 2)  # define 3 inputs and 2 objectives (and no constraints)
    problem.directions[:] = Problem.MAXIMIZE
    int1 = Integer(-50, 50)
    int2 = Integer(-50, 50)
    int3 = Integer(-50, 50)
    problem.types[:] = [int1, int2, int3]
    problem.function = my_function
    algorithm = SMPSO(problem)
    algorithm.run(10000)
results in:
Traceback (most recent call last):
File "D:\MyProjects\Drilling\test_platypus.py", line 62, in
AsInteger()
File "D:\MyProjects\Drilling\test_platypus.py", line 19, in AsInteger
algorithm.run(10000)
File "build\bdist.win-amd64\egg\platypus\core.py", line 405, in run
File "build\bdist.win-amd64\egg\platypus\algorithms.py", line 820, in step
File "build\bdist.win-amd64\egg\platypus\algorithms.py", line 838, in iterate
File "build\bdist.win-amd64\egg\platypus\algorithms.py", line 1008, in _update_velocities
TypeError: unsupported operand type(s) for -: 'list' and 'list'
Similarly, if I try to use another optimization technique in Platypus (CMAES instead of SMPSO):
Traceback (most recent call last):
File "D:\MyProjects\Drilling\test_platypus.py", line 62, in
AsInteger()
File "D:\MyProjects\Drilling\test_platypus.py", line 19, in AsInteger
algorithm.run(10000)
File "build\bdist.win-amd64\egg\platypus\core.py", line 405, in run
File "build\bdist.win-amd64\egg\platypus\algorithms.py", line 1074, in step
File "build\bdist.win-amd64\egg\platypus\algorithms.py", line 1134, in initialize
File "build\bdist.win-amd64\egg\platypus\algorithms.py", line 1298, in iterate
File "build\bdist.win-amd64\egg\platypus\core.py", line 378, in evaluate_all
File "build\bdist.win-amd64\egg\platypus\evaluator.py", line 88, in evaluate_all
File "build\bdist.win-amd64\egg\platypus\evaluator.py", line 55, in run_job
File "build\bdist.win-amd64\egg\platypus\core.py", line 345, in run
File "build\bdist.win-amd64\egg\platypus\core.py", line 518, in evaluate
File "build\bdist.win-amd64\egg\platypus\core.py", line 160, in call
File "build\bdist.win-amd64\egg\platypus\types.py", line 147, in decode
File "build\bdist.win-amd64\egg\platypus\tools.py", line 521, in gray2bin
TypeError: 'float' object has no attribute '__getitem__'
I get other types of error messages with other algorithms (OMOPSO, GDE3), while NSGAIII, NSGAII, SPEA2, etc. appear to work.
Has anyone ever encountered such issues? Maybe I am specifying the problem in the wrong way?
Thank you in advance for any suggestion.
Andrea.
Try changing the way you assign the problem types:
problem.types[:] = [Integer(-50, 50), Integer(-50, 50), Integer(-50, 50)]
It could work this way.
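For what it's worth, here is a sketch of the same problem run with NSGA-II, one of the algorithms the question reports as working with Integer types; the decode step is needed because Platypus keeps Integer variables in an encoded form on the returned solutions. This does not fix SMPSO or CMAES themselves, which appear to assume real-valued variables:
from platypus import NSGAII, Problem, Integer

def my_function(x):
    # same objectives as in the question; x arrives already decoded to ints
    return [-x[0] ** 2 - x[2] ** 2, x[1] - x[0]]

problem = Problem(3, 2)  # 3 decision variables, 2 objectives
problem.directions[:] = Problem.MAXIMIZE
problem.types[:] = [Integer(-50, 50), Integer(-50, 50), Integer(-50, 50)]
problem.function = my_function

algorithm = NSGAII(problem)
algorithm.run(10000)

for solution in algorithm.result:
    decoded = [t.decode(v) for t, v in zip(problem.types, solution.variables)]
    print(decoded, solution.objectives)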

PYOMO spreadsheet reading issue

We have a model that uses Pyomo's DataPortal to read parameters from several csv files. On a Windows laptop we run into the following error, while it is not reproducible on another computer. Any ideas what might be missing in this setup?
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
runfile('C:/Users/stianbac/OneDrive - NTNU/EMPIRE/EMPIRE in Pyomo/EMPIRE_Pyomo_version_4/Empire_draft4.py', wdir='C:/Users/stianbac/OneDrive - NTNU/EMPIRE/EMPIRE in Pyomo/EMPIRE_Pyomo_version_4')
File "C:\Users\stianbac\AppData\Local\Continuum\anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 710, in runfile
execfile(filename, namespace)
File "C:\Users\stianbac\AppData\Local\Continuum\anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 101, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/stianbac/OneDrive - NTNU/EMPIRE/EMPIRE in Pyomo/EMPIRE_Pyomo_version_4/Empire_draft4.py", line 107, in <module>
instance = model.create_instance(data)
File "C:\Users\stianbac\AppData\Local\Continuum\anaconda3\lib\site-packages\pyomo\core\base\DataPortal.py", line 138, in load
self.connect(**kwds)
File "C:\Users\stianbac\AppData\Local\Continuum\anaconda3\lib\site-packages\pyomo\core\base\DataPortal.py", line 98, in connect
self._data_manager.open()
File "C:\Users\stianbac\AppData\Local\Continuum\anaconda3\lib\site-packages\pyomo\core\plugins\data\sheet.py", line 54, in open
self.sheet = ExcelSpreadsheet(self.filename, ctype=self.ctype)
File "C:\Users\stianbac\AppData\Local\Continuum\anaconda3\lib\site-packages\pyutilib\excel\spreadsheet.py", line 79, in __new__
return ExcelSpreadsheet_win32com(*args, **kwds)
File "C:\Users\stianbac\AppData\Local\Continuum\anaconda3\lib\site-packages\pyutilib\excel\spreadsheet_win32com.py", line 59, in __init__
self.open(filename, worksheets, default_worksheet)
File "C:\Users\stianbac\AppData\Local\Continuum\anaconda3\lib\site-packages\pyutilib\excel\spreadsheet_win32com.py", line 80, in open
self._ws[wsid] = self.wb.Worksheets.Item(wsid)
File "C:\Users\stianbac\AppData\Local\Continuum\anaconda3\lib\site-packages\win32com\client\dynamic.py", line 516, in __getattr__
ret = self._oleobj_.Invoke(retEntry.dispid, 0, invoke_type, 1)
com_error: (-2147418111, 'Call was rejected by callee.', None, None)
Here is the beginning of the code:
from __future__ import division
from pyomo.environ import *
#from pyomo.core.expr import current as EXPR
#import numpy as np
import math
import csv
model = AbstractModel()
model.Nodes = Set()
model.Generators = Set() #g
...
data = DataPortal()
data.load(filename='Sets.xlsx',range='B1:B53',using='xlsx',format="set", set=model.Generators)
data.load(filename='Sets.xlsx',range='nodes',using='xlsx',format="set", set=model.Nodes)
...
instance = model.create_instance(data)
...
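As a side note, the prose mentions csv files but the snippet reads Sets.xlsx, which is what routes the load through the win32com Excel interface seen in the traceback. Here is a hedged sketch of the equivalent DataPortal calls against plain csv exports (the file names generators.csv and nodes.csv are hypothetical, and each file is assumed to hold one value per row under a header):
data = DataPortal()
data.load(filename='generators.csv', format='set', set=model.Generators)
data.load(filename='nodes.csv', format='set', set=model.Nodes)
instance = model.create_instance(data)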

How to interface Pyomo with GLPK?

opt = SolverFactory("glpk")
opt.options["mipgap"] = 0.05
opt.options["FeasibilityTol"] = 1e-05
solver_manager = SolverManagerFactory("serial")
# results = solver_manager.solve(instance, opt=opt, tee=True,timelimit=None, mipgap=0.1)
results = solver_manager.solve(model, opt=opt, tee=True, timelimit=None)
# sends results to stdout
# results.write()
def pyomo_save_results(options=None, instance=None, results=None):
    OUTPUT = open(r'Results_generic_hub.txt', 'w')
    print(results, file=OUTPUT)
    OUTPUT.close()
It generates the following error. GLPK is installed, and glpsol --help works from any directory. Is this a problem with the GLPK module, or with the model itself? Environment: Conda, Mac OS Yosemite.
File "<ipython-input-7-ba156f9322b2>", line 7, in <module>
results = solver_manager.solve(model, opt=opt, tee=True,timelimit=None)
File "/anaconda/lib/python3.6/site-
packages/pyomo/opt/parallel/async_solver.py", line 34, in solve
return self.execute(*args, **kwds)
File "/anaconda/lib/python3.6/site-
packages/pyomo/opt/parallel/manager.py", line 107, in execute
ah = self.queue(*args, **kwds)
File "/anaconda/lib/python3.6/site-
packages/pyomo/opt/parallel/manager.py", line 122, in queue
return self._perform_queue(ah, *args, **kwds)
File "/anaconda/lib/python3.6/site-
packages/pyomo/opt/parallel/local.py", line 59, in _perform_queue
results = opt.solve(*args, **kwds)
File "/anaconda/lib/python3.6/site-packages/pyomo/opt/base/solvers.py", line 582, in solve
self._presolve(*args, **kwds)
File "/anaconda/lib/python3.6/site-packages/pyomo/opt/solver/shellcmd.py", line 196, in _presolve
OptSolver._presolve(self, *args, **kwds)
File "/anaconda/lib/python3.6/site-packages/pyomo/opt/base/solvers.py", line 661, in _presolve
**kwds)
File "/anaconda/lib/python3.6/site-packages/pyomo/opt/base/solvers.py", line 729, in _convert_problem
**kwds)
File "/anaconda/lib/python3.6/site-packages/pyomo/opt/base/convert.py", line 110, in convert_problem
problem_files, symbol_map = converter.apply(*tmp, **tmpkw)
File "/anaconda/lib/python3.6/site-packages/pyomo/solvers/plugins/converter/model.py", line 86, in apply
io_options=io_options)
File "/anaconda/lib/python3.6/site-packages/pyomo/core/base/block.py", line 1646, in write
io_options)
File "/anaconda/lib/python3.6/site-packages/pyomo/repn/plugins/cpxlp.py", line 163, in __call__
include_all_variable_bounds=include_all_variable_bounds)
File "/anaconda/lib/python3.6/site-packages/pyomo/repn/plugins/cpxlp.py", line 575, in _print_model_LP
" cannot write legal LP file" % str(model.name))
ValueError: ERROR: No objectives defined for input model 'unknown'; cannot write legal LP file
The error you are seeing:
"ERROR: No objectives defined for input model 'unknown'; cannot write legal LP file"
indicates that Pyomo cannot find an active Objective component on your model (either you never added one to the model, or the Objective component(s) were all deactivated). Either way, valid LP files (which is how Pyomo interfaces with GLPK) require an objective. Fixing your model by adding an Objective should resolve this error.
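If it helps, here is a minimal sketch of a model that does have an active Objective and therefore writes a valid LP file for GLPK; the variables and the constraint are placeholders, not your actual model:
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           NonNegativeReals, minimize, SolverFactory)

model = ConcreteModel()
model.x = Var(within=NonNegativeReals)
model.y = Var(within=NonNegativeReals)
model.cost = Objective(expr=2 * model.x + 3 * model.y, sense=minimize)  # the missing piece
model.demand = Constraint(expr=model.x + model.y >= 10)

results = SolverFactory("glpk").solve(model, tee=True)
model.display()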
Try this code at the end of the script:
instance = model.create()
instance.pprint()
opt = SolverFactory("glpk")
results = opt.solve(instance)
print(results)

Exception: 'numpy.float64' object is not callable when optimising

I keep getting
Exception: 'numpy.float64' object is not callable
when trying to minimize a function.
I can call the function I'm trying to minimize as
def testLLCalc():
    mmc = MortalityModelCalibrator()
    a = mmc.log_likelihood(2000, np.array([[0.6, 0.2, 0.8]]))
but when I try to minimize it by doing
x0 = np.array([0, 0, 0])
res = minimize(-a[0], x0)
I get the exception above. Any help would be appreciated. Full traceback is:
Error
Traceback (most recent call last):
File "C:\Program Files (x86)\JetBrains\WinPython-64bit-3.5.3.0Qt5\python-3.5.3.amd64\lib\unittest\case.py", line 59, in testPartExecutor
yield
File "C:\Program Files (x86)\JetBrains\WinPython-64bit-3.5.3.0Qt5\python-3.5.3.amd64\lib\unittest\case.py", line 601, in run
testMethod()
File "C:\Program Files (x86)\JetBrains\WinPython-64bit-3.5.3.0Qt5\python-3.5.3.amd64\lib\site-packages\nose\case.py", line 198, in runTest
self.test(*self.arg)
File "C:\Users\Matt\Documents\PyCharmProjects\Mortality\src\PennanenMortalityModel_test.py", line 57, in testLLCalc
res = minimize(-a[0], x0)
File "C:\Program Files (x86)\JetBrains\WinPython-64bit-3.5.3.0Qt5\python-3.5.3.amd64\lib\site-packages\scipy\optimize\_minimize.py", line 444, in minimize
return _minimize_bfgs(fun, x0, args, jac, callback, **options)
File "C:\Program Files (x86)\JetBrains\WinPython-64bit-3.5.3.0Qt5\python-3.5.3.amd64\lib\site-packages\scipy\optimize\optimize.py", line 913, in _minimize_bfgs
gfk = myfprime(x0)
File "C:\Program Files (x86)\JetBrains\WinPython-64bit-3.5.3.0Qt5\python-3.5.3.amd64\lib\site-packages\scipy\optimize\optimize.py", line 292, in function_wrapper
return function(*(wrapper_args + args))
File "C:\Program Files (x86)\JetBrains\WinPython-64bit-3.5.3.0Qt5\python-3.5.3.amd64\lib\site-packages\scipy\optimize\optimize.py", line 688, in approx_fprime
return _approx_fprime_helper(xk, f, epsilon, args=args)
File "C:\Program Files (x86)\JetBrains\WinPython-64bit-3.5.3.0Qt5\python-3.5.3.amd64\lib\site-packages\scipy\optimize\optimize.py", line 622, in _approx_fprime_helper
f0 = f(*((xk,) + args))
File "C:\Program Files (x86)\JetBrains\WinPython-64bit-3.5.3.0Qt5\python-3.5.3.amd64\lib\site-packages\scipy\optimize\optimize.py", line 292, in function_wrapper
return function(*(wrapper_args + args))
Exception: 'numpy.float64' object is not callable
scipy's minimize expects a callable function as its first argument.
As you did not show your complete code it's partly a guessing game here, but this
res = minimize(-a[0], x0)
only makes sense if the first element of a is a function.
Seeing this line:
a = mmc.log_likelihood(2000, np.array([[0.6, 0.2, 0.8]]))
it does not look like one; a probably holds scalar values.
The effect is simple: scipy wants to call the given function with some argument (x0 at the beginning), but in your case it ends up calling a numpy value with that argument, which of course is not valid.
Review the docs:
minimize(fun, x0, args=(),...
fun : callable
Objective function.
x0 : ndarray
Initial guess.
args : tuple, optional
Extra arguments passed to the objective function and its derivatives
Do you know what a 'callable' is? It's a function (or something equivalent) that can be 'called' as fun(x0, arg0, arg1, ...).
The error tells us that a[0] is an element of the numpy array a, so -a[0] is just a number, not a function.
It's not clear whether you are trying to minimize this function, or whether this is part of using minimize. It can't be the source of a, because it doesn't return anything.
def testLLCalc():
    mmc = MortalityModelCalibrator()
    a = mmc.log_likelihood(2000, np.array([[0.6, 0.2, 0.8]]))
    # return a ????
So review your understanding of basic Python, especially the idea of a 'callable', and run some of the minimize examples to get a better feel for how to use this function.
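For concreteness, here is a hedged sketch of what the intended call probably looks like, assuming mmc.log_likelihood(year, params) takes a (1, 3) parameter array and returns an array whose first element is the log-likelihood (those shapes come from the question; everything else is an assumption):
import numpy as np
from scipy.optimize import minimize

mmc = MortalityModelCalibrator()  # from the question; assumed available

def neg_log_likelihood(params):
    # minimize() calls this with a 1-D vector; reshape to the (1, 3) array the
    # question uses and negate the result so minimizing maximizes the likelihood
    return -mmc.log_likelihood(2000, params.reshape(1, -1))[0]

x0 = np.array([0.6, 0.2, 0.8])
res = minimize(neg_log_likelihood, x0)
print(res.x, res.fun)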
