I am using the BARON solver (with an AMPL license) to solve a MINLP in Pyomo, running in Spyder.
I call BARON in my code as follows:
opt=SolverFactory('baron', executable='/home/LocalUser/ampl_linux-intel64/baron')
result=opt.solve(instance, keepfiles=True,tee=True)
But when I run my code, I get this error:
Solver log file: '/tmp/tmpil8__em2.baron.log'
Solver solution file: '/tmp/tmpvphbmoa8.baron.soln'
Solver problem files: ('/tmp/tmpl7sk5uym.pyomo.bar',)
/home/LocalUser/ampl_linux-intel64/baron: can't open /tmp/tmpl7sk5uym.pyomo.bar.nl
ERROR: Solver (baron) returned non-zero return code (1)
ERROR: See the solver log above for diagnostic information.
Traceback (most recent call last):
File "Platform-nonlinear.py", line 236, in <module>
result=opt.solve(instance, keepfiles=True,tee=True)
File "/home/LocalUser/.local/lib/python3.6/site-packages/pyomo/opt/base/solvers.py", line 596, in solve
"Solver (%s) did not exit normally" % self.name)
pyutilib.common._exceptions.ApplicationError: Solver (baron) did not exit normally
I tried it on both Windows and Linux, but I get the same error both times.
I really don't know how to fix it!
What I can suggest is to relocate your temp directory to a location where you have full access.
Some operating systems (macOS in my case) restrict access to TMPDIR to the system only, so your files probably can't be stored in that location and therefore can't be retrieved.
"printenv | grep TMP" or "echo $TMPDIR" will show you your temp folder's path.
If that is the case, a quick search will show you how to change TMPDIR for your OS.
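For example, here is a minimal sketch of redirecting Pyomo's temporary files to a writable folder (the /home/LocalUser/tmp path is only a placeholder, and the TempfileManager import location depends on your Pyomo version):
import os

# Use any directory you can write to; this path is only a placeholder.
tmp = '/home/LocalUser/tmp'
os.makedirs(tmp, exist_ok=True)
os.environ['TMPDIR'] = tmp

# Pyomo can also be pointed at the directory directly; the import location
# differs between Pyomo versions, so try both.
try:
    from pyomo.common.tempfiles import TempfileManager   # newer Pyomo
except ImportError:
    from pyutilib.services import TempfileManager        # older Pyomo
TempfileManager.tempdir = tmp

# ...then create the solver and call opt.solve(...) exactly as before.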
Does anyone know what this error message means?
I tried looking at old threads, but they used different solvers, so it seems I need another approach.
I am running an optimization problem with Pyomo in Python using the Gurobi solver.
My full error message:
File "C:\Users\frida.spyder-py3\26 january\optimization.py", line 183, in
solver.solve(m, tee=True)
File "C:\Users\frida\anaconda3\lib\site-packages\pyomo\solvers\plugins\solvers\direct_solver.py", line 183, in solve
default_variable_value=self._default_variable_value)
File "C:\Users\frida\anaconda3\lib\site-packages\pyomo\core\base\PyomoModel.py", line 226, in load_from
% str(results.solver.status))
ValueError: Cannot load a SolverResults object with bad status: error
ValueError: Cannot load a SolverResults object with bad status: error. This means that it is not possible to access the solution.
Your problem is either infeasible or unbounded, or a solution does not exist.
According to the Pyomo documentation, you can see the output of your solver by passing the option tee=True to the solve call:
SolverFactory('glpk').solve(model, tee=True)
You can also use pprint to inspect your model or its variables:
model.pprint()
model.x.pprint()
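If you want to handle the failure programmatically rather than hit this ValueError, one pattern (a sketch; model stands for your Pyomo model) is to solve with load_solutions=False and only load the results when the termination condition is acceptable:
from pyomo.environ import SolverFactory
from pyomo.opt import TerminationCondition

# Solve without automatically loading the (possibly nonexistent) solution,
# then inspect the termination condition before touching the results.
results = SolverFactory('gurobi').solve(model, tee=True, load_solutions=False)
if results.solver.termination_condition == TerminationCondition.optimal:
    model.solutions.load_from(results)
else:
    print('Solver status:', results.solver.status)
    print('Termination condition:', results.solver.termination_condition)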
When I try to reload firewalld, it tells me:
Error: COMMAND_FAILED: 'python-nftables' failed: internal:0:0-0: Error: Could not process rule: Numerical result out of range
JSON blob:
{"nftables": [{"metainfo": {"json_schema_version": 1}}, {"add": {"chain": {"family": "inet", "table": "firewalld", "name": "filter_IN_policy_allow-host-ipv6"}}}]}
I don't know why this happens; even after Googling, I still haven't been able to resolve it.
I had the same error message. I enabled verbose debugging on firewalld and tailed the logs to a file for a deeper dive. In my case, the exception originally happened in "nftables.py" at line 361.
Exception:
2022-01-23 14:00:23 DEBUG3: <class 'firewall.core.nftables.nftables'>: calling python-nftables with JSON blob: {"nftables": [{"metainfo": {"json_schema_version": 1}}, {"add": {"chain": {"family": "inet", "table": "firewalld", "name": "filter_IN_policy_allow-host-ipv6"}}}]}
2022-01-23 14:00:23 DEBUG1: Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/firewall/core/fw.py", line 888, in rules
backend.set_rule(rule, self._log_denied)
File "/usr/lib/python3.6/site-packages/firewall/core/nftables.py", line 390, in set_rule
self.set_rules([rule], log_denied)
File "/usr/lib/python3.6/site-packages/firewall/core/nftables.py", line 361, in set_rules
raise ValueError("'%s' failed: %s\nJSON blob:\n%s" % ("python-nftables", error, json.dumps(json_blob)))
ValueError: 'python-nftables' failed: internal:0:0-0: Error: Could not process rule: Numerical result out of range
Line 361 in "nftables.py":
self._loader(config.FIREWALLD_POLICIES, "policy")
Why this is a problem:
Basically, nftables is a backend service and firewalld is a frontend service; they depend on each other to function. Each time you restart firewalld, it has to reconcile the backend, in this case nftables. At some point during that reconciliation a conflict occurs in the Python code. That is unfortunate, as the only real solution will likely have to come from code improvements in how nftables populates policies into chains and tables.
A work-around:
The good news is that if you are like me and don't use IPv6, you can simply disable the policy rather than solve the underlying issue. The work-around steps are below.
Work-around Steps:
The proper way to remove the policy is the command "firewall-cmd --delete-policy=allow-host-ipv6 --permanent", but I encountered other errors and Python exceptions when attempting that. Since I don't care about IPv6, I manually deleted the XML from the configuration and restarted the firewalld service:
rm /usr/lib/firewalld/policies/allow-host-ipv6.xml
rm /etc/firewalld/policies/allow-host-ipv6.xml
systemctl restart firewalld
Side Note:
Once I fixed this conflict, I also had some additional conflicts between nftables, iptables, and fail2ban that had to be cleared up. For that I just used the command "fail2ban-client unban --all" to make fail2ban wipe clean all of the chains it had added to iptables.
I am solving a set of equations in simulation mode (IMODE=1, SOLVER=3). The IPOPT solver solves to an acceptable level and exits, but Gekko raises an error for this and does not return my solution. Per the IPOPT documentation, the tolerance for the acceptable level is 1.0e-6, which is the same as the default values for OTOL and RTOL used by Gekko (and the values I am using). I was able to modify the gekko.py source code to get my answer to return, but by doing so I've bypassed all types of errors. I don't want all errors bypassed, as they obviously help with debugging other issues like infeasibilities. Is there an m.solve option that I am missing, or another way to not trigger an error when IPOPT solves to an acceptable level?
One way to handle errors from a solver is to wrap the solve command in a try/except block. The APPINFO output may give you guidance on what type of error was encountered and let you respond differently to "infeasible solution", "solved to acceptable level", or other IPOPT error codes.
try:
    m.solve(disp=True)
except:
    print('Solver error, looking at APPINFO')
    if m.options.APPINFO == 1:
        print('APPINFO=1')
    elif m.options.APPINFO == 2:
        print('APPINFO=2')
Another option is to try a different solver such as APOPT or BPOPT.
m.options.SOLVER = 1
Edit: The parameter APPINFO isn't updated when Gekko raises a solver exception. Try the following instead with debug=0:
m.solve(disp=True, debug=0)
if m.options.APPINFO != 0:
    print('Solver error, looking at APPINFO')
    if m.options.APPINFO == 1:
        print('APPINFO=1')
    elif m.options.APPINFO == 2:
        print('APPINFO=2')
I just updated Gekko so that remote solves also bypass the raised exceptions and finish processing the options file with the APPINFO information. The APPINFO information is in options.json in the run directory when running locally and is read in with load_JSON in gk_post_solve.py.
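As a usage example, a minimal end-to-end sketch of that debug=0 pattern might look like this (the toy equation is only for illustration):
from gekko import GEKKO

# Toy simulation problem (IMODE=1) just to illustrate the pattern.
m = GEKKO(remote=False)
x = m.Var(value=1)
m.Equation(x**2 == 4)
m.options.IMODE = 1
m.options.SOLVER = 3            # IPOPT
m.solve(disp=True, debug=0)     # debug=0: don't raise on solver warnings

if m.options.APPINFO == 0:
    print('Solution found: x =', x.value[0])
else:
    print('Solver returned APPINFO =', m.options.APPINFO)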
I am running OpenCV 3.2.0 on Ubuntu 14.04 with the latest opencv_contrib.
I run the example:
https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/textdetection.py
But it shows this error:
$ python textdetection.py scenetext_word01.jpg
textdetection.py
A demo script of the Extremal Region Filter algorithm described in:
Neumann L., Matas J.: Real-Time Scene Text Localization and Recognition, CVPR 2012
Extracting Class Specific Extremal Regions from 9 channels ...
(...) this may take a while (...)
OpenCV Error: Bad argument (Default classifier file not found!) in ERClassifierNM1, file /home/vietnam/opencv_and_contri/opencv_contrib/modules/text/src/erfilter.cpp, line 1022
Traceback (most recent call last):
File "textdetection.py", line 38, in <module>
erc1 = cv2.text.loadClassifierNM1(pathname+'/trained_classifierNM1.xml')
cv2.error: /home/vietnam/opencv_and_contri/opencv_contrib/modules/text/src/erfilter.cpp:1022: error: (-5) Default classifier file not found! in function ERClassifierNM1
How can I solve this?
Try using relative paths in the parameters for cv2.text.loadClassifierNM1() and cv2.text.loadClassifierNM2()
So now that part of the code looks like this:
erc1 = cv2.text.loadClassifierNM1('./trained_classifierNM1.xml')
er1 = cv2.text.createERFilterNM1(erc1,16,0.00015,0.13,0.2,True,0.1)
erc2 = cv2.text.loadClassifierNM2('./trained_classifierNM2.xml')
er2 = cv2.text.createERFilterNM2(erc2,0.5)
I'm not sure why this works (it did for me), but I tried this after looking at a solution posted for a similar problem in VS2015 here: https://github.com/cesardelgadof/OpenCVBinaries/issues/1
Hope this helps.
Using an absolute path, e.g. "/usr/lib/opencv-3.2.0/opencv_contrib-3.2.0/modules/text/samples/trained_classifierNM1.xml", worked in my case on Ubuntu 16.04 with C++.
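Another way to sidestep the path issue (a sketch, assuming the .xml classifier files sit next to the script) is to build the absolute path from the script's own location:
import os
import cv2

# Resolve the classifier files relative to this script's location so the
# sample works regardless of the current working directory.
script_dir = os.path.dirname(os.path.abspath(__file__))
erc1 = cv2.text.loadClassifierNM1(os.path.join(script_dir, 'trained_classifierNM1.xml'))
er1 = cv2.text.createERFilterNM1(erc1, 16, 0.00015, 0.13, 0.2, True, 0.1)
erc2 = cv2.text.loadClassifierNM2(os.path.join(script_dir, 'trained_classifierNM2.xml'))
er2 = cv2.text.createERFilterNM2(erc2, 0.5)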
I am trying to do an incomplete Cholesky decomposition in Python, but I cannot find a Python package that implements it directly.
Since most of the code I can find online is written in Matlab, I want to take a detour by:
compiling the Matlab code to a shared library (I am using Mac OS and MATLAB R2014a, so it should produce a .dylib file)
loading the library in Python using ctypes
The following lists the detailed steps:
0. Download Matlab Source Code
The code can be downloaded from F. Bach's webpage (link to zip file), which contains the following files:
panc:csi-1.0 panc25$ ls
center.m csi.dll csi.mexglx csi_gaussian.dll csi_gaussian.mexglx readme.txt
csi.c csi.m csi_gaussian.c csi_gaussian.m demo_csi.m sqdist.m
1. Compiling the Matlab code to a shared library
Then, following this post, I run the command:
mcc -v -W cpplib:libcsi -T link:lib csi
After around a minute, the terminal prints MEX completed successfully, and my folder now contains:
panc:csi-1.0 panc25$ ls
center.m csi.m csi_gaussian.dll demo_csi.m libcsi.exports readme.txt
csi.c csi.mexglx csi_gaussian.m libcsi.cpp libcsi.h sqdist.m
csi.dll csi_gaussian.c csi_gaussian.mexglx libcsi.dylib mccExcludedFiles.log
where libcsi.dylib is the shared library I want.
2. Loading the library in Python
Then I open IPython and try to load the library:
In [1]: import ctypes
In [2]: ctypes.C
ctypes.CDLL ctypes.CFUNCTYPE
In [2]: ctypes.CDLL('libcsi.dylib')
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-2-b6d0c1a91651> in <module>()
----> 1 ctypes.CDLL('libcsi.dylib')
/Users/panc25/anaconda/lib/python2.7/ctypes/__init__.pyc in __init__(self, name, mode, handle, use_errno, use_last_error)
363
364 if handle is None:
--> 365 self._handle = _dlopen(self._name, mode)
366 else:
367 self._handle = handle
OSError: dlopen(libcsi.dylib, 6): Library not loaded: #rpath/libmwmclmcrrt.8.3.dylib
Referenced from: /Users/panc25/Downloads/csi-1.0/libcsi.dylib
Reason: image not found
This problem persists even after I replace the file name in ctypes.CDLL('libcsi.dylib') with the full path.
So I am confused. The shared library is there, so why does Python say "image not found"?
BTW
Since the source code also provides a C implementation through mex.h, I also tried to first create a MEX file and then compile that to a shared library as follows:
panc:csi-1.0 panc25$ mex csi.c
which created the csi.mexmaci64 file. Then according to this link, I called:
panc:csi-1.0 panc25$ mcc -B csharedlib:csi2 csi.mexmaci64
which produced csi2.dylib file.
But when I tried to load it in Python, I had the same error.
Could anyone let me know what is wrong?
I would avoid Matlab altogether, and instead use the Incomplete Cholesky Decomposition available in PyMC2:
from pymc.gp.incomplete_chol import ichol_full
The f2py-wrapped Fortran code, which was actually adapted from a MEX file, can be found here. So you could use it independently of PyMC2 if need be.
If you are interested, you could also propose adding this function to SciPy (see this GitHub issue).
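If you would rather avoid external dependencies altogether, a zero fill-in incomplete Cholesky (IC(0)) is short enough to write in plain NumPy. The sketch below is a dense, unoptimized illustration of the algorithm; for large sparse matrices you would use scipy.sparse and loop only over the stored nonzeros:
import numpy as np

def ichol0(A):
    # Zero fill-in incomplete Cholesky of a symmetric positive definite
    # matrix A: entries that are zero in the lower triangle of A stay zero
    # in the factor L (for a fully dense A this is the exact factor).
    n = A.shape[0]
    L = np.tril(np.array(A, dtype=float))
    for k in range(n):
        L[k, k] = np.sqrt(L[k, k])
        for i in range(k + 1, n):
            if L[i, k] != 0.0:
                L[i, k] /= L[k, k]
        for j in range(k + 1, n):
            for i in range(j, n):
                if L[i, j] != 0.0:
                    L[i, j] -= L[i, k] * L[j, k]
    return L

A = np.array([[4.0, 2.0], [2.0, 5.0]])
L = ichol0(A)
print(np.allclose(L @ L.T, A))   # True for this small dense SPD example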