My website runs a Python script that would be much better optimized if it were compiled with Cython. Recently I needed to add SymPy with lambdify, and this is not going well with Cython.
So I stripped the problem down to a minimal working example. In the code, I have a dictionary whose string keys map to lists, and I would like to use these keys as variables. In the following simplified example there's only 1 variable, but generally I need more. Please check the following example:
import numpy as np
from sympy.parsing.sympy_parser import parse_expr
from sympy.utilities.lambdify import lambdify, implemented_function
from sympy import S, Symbol
from sympy.utilities.autowrap import ufuncify
def CreateMagneticFieldsList(dataToSave,equationString,DSList):
    expression = S(equationString)
    numOfElements = len(dataToSave["MagneticFields"])
    #initialize the magnetic field output array
    magFieldsArray = np.empty(numOfElements)
    magFieldsArray[:] = np.NaN
    lam_f = lambdify(tuple(DSList),expression,modules='numpy')
    try:
        # pass
        for i in range(numOfElements):
            replacementList = np.zeros(len(DSList))
            for j in range(len(DSList)):
                replacementList[j] = dataToSave[DSList[j]][i]
            try:
                val = np.double(lam_f(*replacementList))
            except:
                val = np.nan
            magFieldsArray[i] = val
    except:
        print("Error while evaluating the magnetic field expression")
    return magFieldsArray
list={"MagneticFields":[1,2,3,4,5]}
out=CreateMagneticFieldsList(list,"MagneticFields*5.1",["MagneticFields"])
print(out)
Let's call this test.py. This works very well. Now I would like to cythonize this, so I use the following script:
#!/bin/bash
cython --embed -o test.c test.py
gcc -pthread -fPIC -fwrapv -Ofast -Wall -L/lib/x86_64-linux-gnu/ -lpython3.4m -I/usr/include/python3.4 -o test.exe test.c
Now if I execute ./test.exe, it throws an exception! Here's the exception:
Traceback (most recent call last):
File "test.py", line 42, in init test (test.c:1811)
out=CreateMagneticFieldsList(list,"MagneticFields*5.1",["MagneticFields"])
File "test.py", line 19, in test.CreateMagneticFieldsList (test.c:1036)
lam_f = lambdify(tuple(DSList),expression,modules='numpy')
File "/usr/local/lib/python3.4/dist-packages/sympy/utilities/lambdify.py", line 363, in lambdify
callers_local_vars = inspect.currentframe().f_back.f_locals.items()
AttributeError: 'NoneType' object has no attribute 'f_locals'
So the question is: How can I get lambdify to work with Cython?
Notes: I'm on Debian Jessie, which is why I'm using Python 3.4. I don't have any problem with Cython when I'm not using lambdify, and Cython is updated to the latest version with pip3 install cython --upgrade.
This is something of a workaround to the real problem (identified in the comments and @jjhakala's answer) that Cython doesn't generate full tracebacks/introspection information for compiled functions. I gather from your comments that you'd like to keep most of your program compiled with Cython for speed reasons.
The "solution" is to use the Python interpreter for only the individual function that needs to call lambdify and leave the rest in Cython. You can do this using exec.
A very simple example of the idea is
exec("""
def f(func_to_call):
return func_to_call()""")
# a Cython compiled version
def f2(func_to_call):
return func_to_call())
This can be compiled as a Cython module and imported; on import, the Python interpreter runs the code in the string and correctly adds f to the module globals. If we create a pure Python function
def g():
    return inspect.currentframe().f_back.f_locals
calling cython_module.f(g) gives me a dictionary with key func_to_call (as expected) while cython_module.f2(g) gives me the __main__ module globals (but this is because I'm running from an interpreter rather than using --embed).
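Spelled out, that quick check looks roughly like this (the module name cython_module is just a placeholder for whatever you compiled the snippet above into):
import inspect
import cython_module  # the compiled module containing f and f2

def g():
    return inspect.currentframe().f_back.f_locals

print(cython_module.f(g))   # a dict with the key 'func_to_call', because exec gave f a real frame
print(cython_module.f2(g))  # the caller's globals instead, since the compiled f2 has no frame of its own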
Edit: Full example, based on your code
from sympy import S, lambdify # I'm assuming "S" comes from sympy
import numpy as np
CreateMagneticFieldsList = None # stops a compile error about CreateMagneticFieldsList being undefined
exec("""def CreateMagneticFieldsList(dataToSave,equationString,DSList):
expression = S(equationString)
numOfElements = len(dataToSave["MagneticFields"])
#initialize the magnetic field output array
magFieldsArray = np.empty(numOfElements)
magFieldsArray[:] = np.NaN
lam_f = lambdify(tuple(DSList),expression,modules='numpy')
try:
# pass
for i in range(numOfElements):
replacementList = np.zeros(len(DSList))
for j in range(len(DSList)):
replacementList[j] = dataToSave[DSList[j]][i]
try:
val = np.double(lam_f(*replacementList))
except:
val = np.nan
magFieldsArray[i] = val
except:
print("Error while evaluating the magnetic field expression")
return magFieldsArray""")
list={"MagneticFields":[1,2,3,4,5]}
out=CreateMagneticFieldsList(list,"MagneticFields*5.1",["MagneticFields"])
print(out)
When compiled with your script this prints
[ 5.1 10.2 15.3 20.4 25.5 ]
Essentially all I've done is wrap your function in an exec call, so it's executed by the Python interpreter. This part won't see any benefit from Cython, however the rest of your program still will. If you want to maximise the amount compiled with Cython you could divide it up into multiple functions so that only the small part containing lambdify is in the exec.
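A rough sketch of that split (untested in this exact form; the helper name make_lam_f is mine, not from your code) might look like:
import numpy as np
from sympy import S

make_lam_f = None  # placeholder so Cython doesn't complain about an undefined name
exec("""def make_lam_f(DSList, expression):
    from sympy.utilities.lambdify import lambdify
    return lambdify(tuple(DSList), expression, modules='numpy')""")

def CreateMagneticFieldsList(dataToSave, equationString, DSList):
    # this part is still compiled by Cython
    expression = S(equationString)
    lam_f = make_lam_f(DSList, expression)  # interpreted, so lambdify sees a real caller frame
    numOfElements = len(dataToSave["MagneticFields"])
    magFieldsArray = np.empty(numOfElements)
    magFieldsArray[:] = np.nan
    for i in range(numOfElements):
        args = [dataToSave[name][i] for name in DSList]
        try:
            magFieldsArray[i] = np.double(lam_f(*args))
        except Exception:
            magFieldsArray[i] = np.nan
    return magFieldsArray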
It is stated in the Cython docs - limitations that
Stack frames
Currently we generate fake tracebacks as part of exception
propagation, but don’t fill in locals and can’t fill in co_code. To be
fully compatible, we would have to generate these stack frame objects
at function call time (with a potential performance penalty). We may
have an option to enable this for debugging.
f_locals in
AttributeError: 'NoneType' object has no attribute 'f_locals'
seems to point towards this incompatibility issue.
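Reduced to the failing line, what lambdify does there is essentially this, and the quoted limitation explains why f_back can come back as None when the caller is compiled code:
import inspect

def peek_caller_locals():
    caller = inspect.currentframe().f_back
    # under plain CPython this is the caller's frame;
    # when the caller is Cython-compiled (or module-level code under --embed) there may be no frame at all
    return caller.f_locals if caller is not None else None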
To get a feel for NumPy's source code, I want to start by adding my own dummy custom function. I've followed their docs to set up a development environment and have done an in-place build of NumPy as advised ($ python setup.py build_ext -i; export PYTHONPATH=$PWD).
Now I want to add this function:
def multiplybytwo(x):
    """
    Return the double of the input
    """
    y = x*2
    return y
But I do not know where to place it so that this code would run properly:
import numpy as np
a = np.array([10])
b = np.multiplybytwo(a)
This answer has been added at OP’s request.
As mentioned in my comments above, a starting point (of many) would be to investigate the top level __init__.py file in the numpy library. Crack this open and read through the various imports and initialising setup. This will help to get your bearings as to where the new function could be placed, as you’ll likely see from where other (familiar) top-level functions are imported.
Caveat:
While I’d generally discourage adding custom functionality to a library (as all changes will be lost on upgrade), I understand why you’re looking into this. Just keep in mind what is stated in the previous sentence.
If you want to add your function to NumPy, I'd:
clone github repo git clone https://github.com/numpy/numpy.git
write your function somewhere in /numpy/core (a sketch follows below)
compile it: python setup.py build_ext --inplace -- more info about compilation/installation: https://numpy.org/doc/stable/user/building.html
for a deeper feel of Numpy - check how to contribute: https://numpy.org/devdocs/dev/index.html
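As a concrete sketch of step 2 (the file name _custom.py and the exact import location are my own guesses, not an established NumPy convention), the change could be as small as:
# numpy/core/_custom.py  (hypothetical new file inside your clone)
def multiplybytwo(x):
    """Return the double of the input."""
    return x * 2

# then re-export it, e.g. near the other imports in numpy/core/__init__.py:
#   from ._custom import multiplybytwo
#   __all__ += ['multiplybytwo']
After rebuilding in place, the function should be importable from numpy.core (and, depending on how the top-level __init__.py re-exports names, possibly as np.multiplybytwo directly).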
Everything in Python is an object!
So, you can add your function to NumPy in the same way you say that x = 1
See below:
>>> def multiplybytwo(x):
...     """
...     Return the double of the input
...     """
...     y = x*2
...     return y
>>> import numpy as np
>>> np.multiplybytwo = multiplybytwo ## HERE IS THE "TRICK"
>>> a = np.array([10])
>>> b = np.multiplybytwo(a)
>>> b
array([20])
>>> print(b)
[20]
In that line, you are attaching your function to the NumPy module as a new attribute, so np.multiplybytwo now refers to the function you defined.
You can verify this using id:
>>> id(np.multiplybytwo)
140489045965856
>>> id(multiplybytwo)
140489045965856
>>>
Both ids are the same.
I would like to parallelise a MATLAB function using Python and ray.
In this example:
I used MATLAB Compiler SDK to wrap a very simple MATLAB function as a Python application (the function just returns the input integer as its result - see below). Instructions here: https://uk.mathworks.com/help/compiler_sdk/gs/create-a-python-application-with-matlab-code.html
The application (the function) is then called in python (see below). The function is called using the ray package to parallelise the call.
The function is called four times using as inputs some integers: 0, 1, 2 and 3. The result should be a list: [0, 1, 2, 3]
The call fails with error: ray/cloudpickle/cloudpickle_fast.py, TypeError: 'str' object is not callable. The error stays the same whatever the matlab function does and whatever (valid) matlab code I put in.
The same function (test_function_for_python_2) works well in a simple for loop and returns [0, 1, 2, 3], so the problem is most likely not in the MathWorks wrapping of the MatLab function into python.
In summary, ray needs to pickle the matlab/python library (the function), and the pickling encounters a problem I can't solve.
Any help would be most appreciated.
The operating system in this example is MacOSx Catalina, the python version is 3.7.9, the ray python module version is 1.0.0.
The mwpython version is 3.7.9 too, and mwpython (installed by the MatLab SDK runtime) must be used to run the python script, not python or python3. MatLab and MatLab SDK runtime are version R2020b.
Matlab function
function result = test_function_for_python_2(an_integer_input)
    result = an_integer_input;
end
Python script that parallelises the MATLAB function above
import TestTwo # a class with a simple function returning the input integer
import matlab
import ray # for parallelisation
# Initialise ===========
ray.shutdown() # shutdown any open CPU pool
ray.init() # create a pool of parallel threads
# initialise the matlab app containing the code of a simple test function (returns the same integer that is input)
my_test_function = TestTwo.initialize()
print(type(my_test_function))
print(type(my_test_function.test_function_for_python_2))
@ray.remote
def function_two(an_integer):
    '''Here a single integer is input and returned.
    The function responsible for it has been written in Matlab and wrapped as a python library in Mathworks.'''
    result = my_test_function.test_function_for_python_2(an_integer, 1) # nargout=1)
    return result
# input to the function
some_integers = [0, 1, 2, 3]
# run function and collect results with ray for parallelisation
ray_objects = [function_two.remote(an_integer=i) for i in some_integers]
future_results = ray.get(ray_objects)
print('Results: {}'.format(future_results))
# Terminate =====
my_test_function.terminate() # close the matlab app containing the code for the test function
ray.shutdown() # shutdown any open CPU pool
Error trace from mwpython console
% /Applications/MATLAB/MATLAB_Runtime/v99/bin/mwpython ~/Documents/TestTwo/for_redistribution_files_only/run_test_2_matlab_wrapped_in_python_parallel.py
2020-11-06 21:09:00,142 INFO services.py:1166 -- View the Ray dashboard at http://127.0.0.1:8265
/Applications/MATLAB/MATLAB_Runtime/v99/bin/maci64/mwpython3.7.app/Contents/MacOS/mwpython3.7: can't find '__main__' module in ''
<class 'matlab_pysdk.runtime.deployablepackage.DeployablePackage'>
<class 'matlab_pysdk.runtime.deployablefunc.DeployableFunc'>
Traceback (most recent call last):
File "/Users/me/Documents/TestTwo/for_redistribution_files_only/run_test_2_matlab_wrapped_in_python_parallel.py", line 32, in <module>
ray_objects = [function_two.remote(an_integer=i) for i in some_integers]
File "/Users/me/Documents/TestTwo/for_redistribution_files_only/run_test_2_matlab_wrapped_in_python_parallel.py", line 32, in <listcomp>
ray_objects = [function_two.remote(an_integer=i) for i in some_integers]
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ray/remote_function.py", line 99, in _remote_proxy
return self._remote(args=args, kwargs=kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ray/remote_function.py", line 207, in _remote
self._pickled_function = pickle.dumps(self._function)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ray/cloudpickle/cloudpickle_fast.py", line 70, in dumps
cp.dump(obj)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ray/cloudpickle/cloudpickle_fast.py", line 656, in dump
return Pickler.dump(self, obj)
TypeError: 'str' object is not callable
I am writing a Python program in Anaconda/Spyder on a 64-bit Windows 8 machine. I'm getting all the known issues with gcc.bat ("gcc.bat failed with exit status 1") and I have it fixed - almost. My pyx file (called testFunc.pyx) has the following code:
import numpy as np
cimport numpy as np
def funcMatUtility(np.ndarray[np.float64_t, ndim=1] vecX,
                   np.ndarray[np.float64_t, ndim=1] vecE):
    cdef np.ndarray[np.float64_t, ndim=2] out = \
        np.zeros((len(vecX),len(vecE)),dtype=np.float64)
    for iX, valX in enumerate(vecX):
        for iE, valE in enumerate(vecE):
            out[iX,iE] = valX + valE
    return out
I call this function by running the following py file in Spyder:
import os
import numpy as np
import pyximport
os.environ['CPATH'] = np.get_include()
mingw_setup_args = { 'options': { 'build_ext': { 'compiler': 'mingw32' } } }
pyximport.install(setup_args=mingw_setup_args)
import testFunc
x = testFunc.funcMatUtility(np.array([0.0,1.0,2.0]),np.array([0.0,1.0,2.0,3.0]))
Without the line os.environ['CPATH'] = np.get_include() I get the gcc.bat error message immediately. Without the setup arguments in install() I get another error message: Unable to find vcvarsall.bat.
So with these lines I can compile my Cython code, which suggests that this is what I needed to run the Cython compiler on my Windows machine in the first place. The problem, however, is that I can only do this once. If I want to import it again, for example because I am still developing my code and I only did a test run, I get the gcc.bat error message again (gcc.bat failed with exit status 1) unless I close and re-open Spyder. I tried the second import with just the import statement (i.e. not importing pyximport again), to no avail. What could be the reason that I can only compile the Cython code once?
I think I found the problem (and hence, the solution): I need to tell the compiler I'm using numpy. I found the explanation here: https://github.com/cython/cython/wiki/InstallingOnWindows
So the py file should be
import numpy
import pyximport
pyximport.install(
    setup_args={"script_args":["--compiler=mingw32"],
                "include_dirs":numpy.get_include()},
    reload_support=True)
import testFunc
x = testFunc.funcMatUtility(numpy.array([0.0,1.0,2.0]),numpy.array([0.0,1.0,2.0,3.0]))
The 'include_dirs' part tells the compiler I'm using numpy. This works in Spyder, also in repeated runs.
I'm trying to set the number of threads for numpy calculations with mkl_set_num_threads like this
import numpy
import ctypes
mkl_rt = ctypes.CDLL('libmkl_rt.so')
mkl_rt.mkl_set_num_threads(4)
but I keep getting a segmentation fault:
Program received signal SIGSEGV, Segmentation fault.
0x00002aaab34d7561 in mkl_set_num_threads__ () from /../libmkl_intel_lp64.so
Getting the number of threads is no problem:
print mkl_rt.mkl_get_max_threads()
How can I get my code working?
Or is there another way to set the number of threads at runtime?
Ophion led me the right way. Despite the documentation, one has to pass the parameter of mkl_set_num_threads by reference.
Now I have defined two functions for getting and setting the number of threads
import numpy
import ctypes
mkl_rt = ctypes.CDLL('libmkl_rt.so')
mkl_get_max_threads = mkl_rt.mkl_get_max_threads
def mkl_set_num_threads(cores):
    mkl_rt.mkl_set_num_threads(ctypes.byref(ctypes.c_int(cores)))
mkl_set_num_threads(4)
print mkl_get_max_threads() # says 4
and they work as expected.
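If you prefer, you can also declare the argument type once, which makes the by-reference contract explicit (just a ctypes nicety, the behaviour is the same):
import ctypes

mkl_rt = ctypes.CDLL('libmkl_rt.so')
# tell ctypes that this (Fortran-style) symbol expects a pointer to an int
mkl_rt.mkl_set_num_threads.argtypes = [ctypes.POINTER(ctypes.c_int)]
mkl_rt.mkl_set_num_threads(ctypes.byref(ctypes.c_int(4)))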
Edit: according to Rufflewind, the names of the C functions are written in CamelCase, and these expect their parameters by value:
import ctypes
mkl_rt = ctypes.CDLL('libmkl_rt.so')
mkl_set_num_threads = mkl_rt.MKL_Set_Num_Threads
mkl_get_max_threads = mkl_rt.MKL_Get_Max_Threads
Long story short, use MKL_Set_Num_Threads and its CamelCased friends when calling MKL from Python. The same applies to C if you don't #include <mkl.h>.
The MKL documentation seems to suggest that the correct type signature in C is:
void mkl_set_num_threads(int nt);
Okay, let's try a minimal program then:
void mkl_set_num_threads(int);

int main(void) {
    mkl_set_num_threads(1);
    return 0;
}
Compile it with GCC and boom, Segmentation fault again. So it seems the problem isn't restricted to Python.
Running it through a debugger (GDB) reveals:
Program received signal SIGSEGV, Segmentation fault.
0x0000… in mkl_set_num_threads_ ()
from /…/mkl/lib/intel64/libmkl_intel_lp64.so
Wait a second, mkl_set_num_threads_?? That's the Fortran version of mkl_set_num_threads! How did we end up calling the Fortran version? (Keep in mind that Fortran's calling convention requires arguments to be passed as pointers rather than by value.)
It turns out the documentation was a complete façade. If you actually inspect the header files for the recent versions of MKL, you will find this cute little definition:
void MKL_Set_Num_Threads(int nth);
#define mkl_set_num_threads MKL_Set_Num_Threads
… and now everything makes sense! The correct function to call (for C code) is MKL_Set_Num_Threads, not mkl_set_num_threads. Inspecting the symbol table reveals that there are actually four different variants defined:
nm -D /…/mkl/lib/intel64/libmkl_rt.so | grep -i mkl_set_num_threads
00000000000e3060 T MKL_SET_NUM_THREADS
…
00000000000e30b0 T MKL_Set_Num_Threads
…
00000000000e3060 T mkl_set_num_threads
00000000000e3060 T mkl_set_num_threads_
…
Why did Intel put in four different variants of one function despite there being only C and Fortran variants in the documentation? I don't know for certain, but I suspect it's for compatibility with different Fortran compilers. You see, Fortran calling convention is not standardized. Different compilers will mangle the names of the functions differently:
some use upper case,
some use lower case with a trailing underscore, and
some use lower case with no decoration at all.
There may even be other ways that I'm not aware of. This trick allows the MKL library to be used with most Fortran compilers without any modification, the downside being that C functions need to be "mangled" to make room for the 3 variants of the Fortran calling convention.
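In ctypes terms, calling the C-convention symbol by value then looks like this (a small sketch of the conclusion above):
import ctypes

mkl_rt = ctypes.CDLL('libmkl_rt.so')
# MKL_Set_Num_Threads is the C entry point: it takes the thread count by value, no byref needed
mkl_rt.MKL_Set_Num_Threads(4)
print(mkl_rt.MKL_Get_Max_Threads())  # expected to report 4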
For people looking for a cross platform and packaged solution, note that we have recently released threadpoolctl, a module to limit the number of threads used in C-level threadpools called by python (OpenBLAS, OpenMP and MKL). See this answer for more info.
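For reference, a minimal example of the threadpoolctl API mentioned above:
from threadpoolctl import threadpool_limits
import numpy as np

# limit BLAS-backed thread pools (MKL, OpenBLAS, ...) to 2 threads inside the block
with threadpool_limits(limits=2, user_api='blas'):
    a = np.random.rand(1000, 1000)
    b = a.dot(a)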
For people looking for the complete solution, you can use a context manager:
import ctypes
class MKLThreads(object):
    _mkl_rt = None

    @classmethod
    def _mkl(cls):
        if cls._mkl_rt is None:
            try:
                cls._mkl_rt = ctypes.CDLL('libmkl_rt.so')
            except OSError:
                cls._mkl_rt = ctypes.CDLL('mkl_rt.dll')
        return cls._mkl_rt

    @classmethod
    def get_max_threads(cls):
        return cls._mkl().mkl_get_max_threads()

    @classmethod
    def set_num_threads(cls, n):
        assert type(n) == int
        cls._mkl().mkl_set_num_threads(ctypes.byref(ctypes.c_int(n)))

    def __init__(self, num_threads):
        self._n = num_threads
        self._saved_n = self.get_max_threads()

    def __enter__(self):
        self.set_num_threads(self._n)
        return self

    def __exit__(self, type, value, traceback):
        self.set_num_threads(self._saved_n)
Then use it like:
with MKLThreads(2):
    # do some stuff on two cores
    pass
Or just manipulate the configuration by calling the following functions:
# Example
MKLThreads.set_num_threads(3)
print(MKLThreads.get_max_threads())
Code is also available in this gist.
I am new to Python. I am trying to run MATLAB from inside Python using the mlab package. I was following the guide on the website, and I entered this in the Python command line:
from mlab.releases import latest_release
The error I got was:
cannot import name find_available_releases
It seems that in matlabcom.py there is no find_available_releases function.
May I know if anyone knows how to resolve this? Thank you!
PS: I am using Windows 7, MATLAB 2012a and Python 2.7
I skimmed through the code, and I don't think all of the README file and its documentation match what's actually implemented. It appears to be mostly copied from the original mlabwrap project.
This is confusing because mlabwrap is implemented using a C extension module to interact with the MATLAB Engine API. However the mlab code seems to have replaced that part with a pure Python implementation as the MATLAB-bridge backend. It comes from "Dana Pe'er Lab" and it uses two different methods to interact with MATLAB depending on the platform (COM/ActiveX on Windows and pipes on Linux/Mac).
Now that we understand how the backend is implemented, you can start looking at the import error.
Note: the Linux/Mac part of the code tries to find the MATLAB executable in some hardcoded fixed locations, and allows choosing between different versions.
However you are working on Windows, and the code doesn't really implement any way of picking between MATLAB releases for this platform (so all of the methods like discover_location and find_available_releases are useless on Windows). In the end, the COM object is created as:
self.client = win32com.client.Dispatch('matlab.application')
As explained here, the ProgID matlab.application is not version-specific, and will simply use whatever was registered as the default MATLAB COM server. We can explicitly specify what MATLAB version we want (assuming you have multiple installations), for instance matlab.application.8.3 will pick MATLAB R2014a.
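For example (a small sketch, assuming an R2014a installation is registered), dispatching a specific version explicitly would look like:
import win32com.client

# 'matlab.application.8.3' maps to R2014a; the unversioned ProgID picks whatever is the default server
matlab = win32com.client.Dispatch('matlab.application.8.3')
print(matlab.Execute("disp(version)"))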
So to fix the code, IMO the easiest way would be to get rid of all that logic about multiple MATLAB versions (in the Windows part of the code), and just let it create the MATLAB COM object as is. I haven't attempted it, but I don't think it's too involved... Good luck!
EDIT:
I downloaded the module and managed to get it to work on Windows (I'm using Python 2.7.6 and MATLAB R2014a). Here are the changes:
$ git diff
diff --git a/src/mlab/matlabcom.py b/src/mlab/matlabcom.py
index 93f075c..da1c6fa 100644
--- a/src/mlab/matlabcom.py
+++ b/src/mlab/matlabcom.py
@@ -21,6 +21,11 @@ except:
print 'win32com in missing, please install it'
raise
+def find_available_releases():
+    # report we have all versions
+    return [('R%d%s' % (y,v), '')
+            for y in range(2006,2015) for v in ('a','b')]
+
def discover_location(matlab_release):
pass
@@ -62,7 +67,7 @@ class MatlabCom(object):
"""
self._check_open()
try:
- self.eval('quit();')
+ pass #self.eval('quit();')
except:
pass
del self.client
diff --git a/src/mlab/mlabraw.py b/src/mlab/mlabraw.py
index 3471362..16e0e2b 100644
--- a/src/mlab/mlabraw.py
+++ b/src/mlab/mlabraw.py
@@ -42,6 +42,7 @@ def open():
if is_win:
ret = MatlabConnection()
ret.open()
+ return ret
else:
if settings.MATLAB_PATH != 'guess':
matlab_path = settings.MATLAB_PATH + '/bin/matlab'
diff --git a/src/mlab/releases.py b/src/mlab/releases.py
index d792b12..9d6cf5d 100644
--- a/src/mlab/releases.py
+++ b/src/mlab/releases.py
@@ -88,7 +88,7 @@ class MatlabVersions(dict):
# Make it a module
sys.modules['mlab.releases.' + matlab_release] = instance
sys.modules['matlab'] = instance
- return MlabWrap()
+ return instance
def pick_latest_release(self):
return get_latest_release(self._available_releases)
First I added the missing find_available_releases function. I made it so that it reports that all MATLAB versions are available (like I explained above, it doesn't really matter because of the way the COM object is created). An even better fix would be to detect the installed/registered MATLAB versions using the Windows registry (check the keys HKCR\Matlab.Application.X.Y and follow their CLSID in HKCR\CLSID). That way you can truly choose and pick which version to run.
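A rough, untested sketch of that registry-based detection (standard library only; the key names follow the HKCR\Matlab.Application.X.Y pattern mentioned above):
import _winreg as winreg  # just 'winreg' on Python 3

def registered_matlab_progids():
    """Return ProgIDs like 'Matlab.Application.8.3' found under HKEY_CLASSES_ROOT."""
    progids = []
    root = winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, '')
    try:
        i = 0
        while True:
            try:
                name = winreg.EnumKey(root, i)
            except OSError:  # WindowsError on Python 2 is a subclass of OSError
                break
            if name.lower().startswith('matlab.application.'):
                progids.append(name)
            i += 1
    finally:
        winreg.CloseKey(root)
    return progids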
I also fixed two unrelated bugs (one where the author forgot the function return value, and the other unnecessarily creating the wrapper object twice).
Note: During testing, it might be faster NOT to start/shutdown a MATLAB instance each time the script is called. This is why I commented self.eval('quit();') in the close function. That way you can start MATLAB using matlab.exe -automation (do this only once), and then repeatedly re-use the session without shutting it down. Just kill the process when you're done :)
Here is a Python example to test the module (I also show a comparison against NumPy/SciPy/Matplotlib):
test_mlab.py
# could be anything from: latest_release, R2014b, ..., R2006a
# makes no difference :)
from mlab.releases import R2014a as matlab
# show MATLAB version
print "MATLAB version: ", matlab.version()
print matlab.matlabroot()
# compute SVD of a NumPy array
import numpy as np
A = np.random.rand(5, 5)
U, S, V = matlab.svd(A, nout=3)
print "S = \n", matlab.diag(S)
# compare MATLAB's SVD against Scipy's SVD
U, S, V = np.linalg.svd(A)
print S
# 3d plot in MATLAB
X, Y, Z = matlab.peaks(nout=3)
matlab.figure(1)
matlab.surf(X, Y, Z)
matlab.title('Peaks')
matlab.xlabel('X')
matlab.ylabel('Y')
matlab.zlabel('Z')
# compare against matplotlib surface plot
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap='jet')
ax.view_init(30.0, 232.5)
plt.title('Peaks')
plt.xlabel('X')
plt.ylabel('Y')
ax.set_zlabel('Z')
plt.show()
Here is the output I get:
C:\>python test_mlab.py
MATLAB version: 8.3.0.532 (R2014a)
C:\Program Files\MATLAB\R2014a
S =
[[ 2.41632007]
[ 0.78527851]
[ 0.44582117]
[ 0.29086795]
[ 0.00552422]]
[ 2.41632007 0.78527851 0.44582117 0.29086795 0.00552422]
EDIT2:
The above changes have been accepted and merged into mlab.
You are right in saying that find_available_releases() is not written. There are 2 ways to work this out:
Check out the code in Linux and work on it (you are working on Windows!)
Change the code as below
Add the following function in matlabcom.py, as in matlabpipe.py:
def find_available_releases():
    global _RELEASES
    if not _RELEASES:
        _RELEASES = list(_list_releases())
    return _RELEASES
If you look at the mlabraw.py file, the following code will give you a clear idea of why I am saying this:
import sys
is_win = 'win' in sys.platform
if is_win:
    from matlabcom import MatlabCom as MatlabConnection
    from matlabcom import MatlabError as error
    from matlabcom import discover_location, find_available_releases
    from matlabcom import WindowsMatlabReleaseNotFound as MatlabReleaseNotFound
else:
    from matlabpipe import MatlabPipe as MatlabConnection
    from matlabpipe import MatlabError as error
    from matlabpipe import discover_location, find_available_releases
    from matlabpipe import UnixMatlabReleaseNotFound as MatlabReleaseNotFound