SPM DICOM Convert in Python (IPython/Nipype)

I am new to Python, or more specifically IPython. I have been running through the steps to perform what should be a very simple DICOM conversion in a statistical package called SPM for an MRI image file, as described by Nipype. I can't get it to run and was wondering what I am doing wrong. I am not getting an error message; instead, there is no file change or output. It just hangs. Does anyone have any idea what I might be doing wrong? It's likely that I am missing something very simple here (sorry :(
import os
from pylab import *
from glob import glob
from nipype.interfaces.matlab import MatlabCommand as mlab
mlab.set_default_paths('/home/orkney_01/s1252042/matlab/spm8')
from nipype.interfaces.spm.utils import DicomImport as di
os.chdir('/sdata/images/projects/ASD_MM/1/datafiles/restingstate_files')
filename = "reststate_directories.txt"
restingstate_files_list = [line.strip() for line in open(filename)]
for x in restingstate_files_list:
    os.chdir(x)
    y = glob('*.dcm')
    conversion = di(in_files=y)
    print(res.outputs)

You are creating a DicomImport interface, but you are not actually running it. You should have res = di.run().
Also, it is best to tell the interface where to run, by setting di.base_dir = '/some/path' before running.
Finally, you may also want to print the contents of restingstate_files_list to check that you are finding the DICOM directories correctly.
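Putting those points together, a minimal sketch of the corrected script (paths and names taken from the question; untested, and the base_dir setting simply follows the advice above) might look like:
import os
from glob import glob
from nipype.interfaces.matlab import MatlabCommand as mlab
from nipype.interfaces.spm.utils import DicomImport

mlab.set_default_paths('/home/orkney_01/s1252042/matlab/spm8')

filename = "reststate_directories.txt"
restingstate_files_list = [line.strip() for line in open(filename)]
print(restingstate_files_list)  # sanity-check the directory list first

for x in restingstate_files_list:
    di = DicomImport(in_files=glob(os.path.join(x, '*.dcm')))
    di.base_dir = x     # tell the interface where to run
    res = di.run()      # actually run the conversion
    print(res.outputs)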


any existing code/library for image sharpness or blurriness estimation in python?

I would like to find some existing code/library for sharpness/blurriness estimation on normal images, preferably in Python. I will need to compare the performance of different algorithms later.
I have 10000+ MRI scan images of varying "quality" (sharpness/blurriness). I need to write code that filters images by a "quality" threshold chosen by the user, hence I am researching image sharpness/blurriness estimation on medical images. My supervisor told me there is lots of existing code for sharpness/blurriness estimation on normal images (perhaps no-reference sharpness metrics) on the internet. She asked me to search for them, try them on normal images first, and then learn about their algorithms.
I have searched for this on the internet and found some relevant pages; however, many of them are out of date.
For example, on the Image sharpness metric page, Cumulative probability of blur detection (CPBD) (https://ivulab.asu.edu/software/quality/cpbd) seems to no longer work. I guess the reason is that the "imread" function was removed from the newer "scipy" library (please see the code and error message below). I think I can try an old version of "scipy" later; however, I would like to find some more currently maintained code/library for image sharpness/blurriness estimation.
Also, my working environment will be in Windows 10 or CentOS-7.
I have tried the following code with CPBD:
import sys, cpbd
from scipy import ndimage

input_image1 = ndimage.imread(r'D:\Work\Project\scripts\test_images\blur1.png', mode='L')
input_image2 = ndimage.imread(r'D:\Work\Project\scripts\test_images\clr1.png', mode='L')

print("blurry image sharpness:")
cpbd.compute(input_image1)
print("clear image sharpness:")
cpbd.compute(input_image2)
Error message from Python 3.7 shell (ran in Window 10):
Traceback (most recent call last):
  File "D:\Work\Project\scripts\try_cpbd.py", line 1, in <module>
    import sys, cpbd
  File "D:\Program_Files_2\Python\lib\site-packages\cpbd\__init__.py", line 3, in <module>
    from .compute import compute
  File "D:\Program_Files_2\Python\lib\site-packages\cpbd\compute.py", line 14, in <module>
    from scipy.misc import imread #Original: from scipy.ndimage import imread
ImportError: cannot import name 'imread' from 'scipy.misc' (D:\Program_Files_2\Python\lib\site-packages\scipy\misc\__init__.py)
It seems that the cpbd package has not been updated for some time.
It worked for me with the following steps:
Edit "D:\Program_Files_2\Python\lib\site-packages\cpbd\compute.py" and comment out the last 4 lines, starting with:
#if __name__ == '__main__':
Then use this Python code:
import cpbd
import cv2

input_image1 = cv2.imread('blur1.png')
if input_image1 is None:
    print("error opening image")
    exit()

input_image1 = cv2.cvtColor(input_image1, cv2.COLOR_BGR2GRAY)
print("blurry image sharpness:")
cpbd.compute(input_image1)
Since scipy.misc.imread has been deprecated since SciPy 1.0.0 and was removed in 1.2.0, I would use skimage.io.imread instead (which is in most ways a drop-in replacement).
Edit the code in cpbd/compute.py:
import skimage.io
input_image1 = skimage.io.imread('blur1.png')
cv2 also works (as do other options: imageio, PIL, ...), but skimage tends to be a bit easier to install and use.
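For completeness, a minimal end-to-end sketch with skimage (hedged: cpbd expects a 2-D grayscale array, so a color image is converted to 8-bit grayscale first; the filename is just the example from above):
import cpbd
from skimage.io import imread
from skimage.color import rgb2gray
from skimage.util import img_as_ubyte

img = imread('blur1.png')
if img.ndim == 3:                      # color image: convert to 8-bit grayscale
    img = img_as_ubyte(rgb2gray(img))
print("sharpness:", cpbd.compute(img))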
The following steps worked for me:
Open compute.py from C:\ProgramData\Anaconda3\Lib\site-packages\cpbd\compute.py, or wherever you have installed it. You will find the following code:
from scipy.ndimage import imread
Replace it with:
from skimage.io import imread
If you can't save the compute.py file in place, copy it to the desktop, edit it as described above, and then replace the file at C:\ProgramData\Anaconda3\Lib\site-packages\cpbd\compute.py with it.
Following the answer from Baj Mile, I did the following and it worked for me:
Opened the cpbd\compute.py file.
Commented the line: from scipy.ndimage import imread
Added the line: import cv2
Made the following changes to the main section:
if __name__ == '__main__':
    #input_image = imread(argv[1], mode='L')
    input_image = cv2.imread(argv[1])
    sharpness = compute(input_image)
    print('CPBD sharpness for %s: %f' % (argv[1], sharpness))
Closed the compute.py file.
In the main code:
import cpbd
import cv2
input_image1 = cv2.imread('testimage.jpg')
input_image1 = cv2.cvtColor(input_image1, cv2.COLOR_BGR2GRAY)
cpbd.compute(input_image1)

Install issues with 'lr_utils' in python

I am trying to complete some homework in a DeepLearning.ai course assignment.
When I try the assignment on the Coursera platform everything works fine; however, when I try to do the same imports on my local machine it gives me an error:
ModuleNotFoundError: No module named 'lr_utils'
I have tried resolving the issue by installing lr_utils, but to no avail.
There is no mention of this module online, and now I have started to wonder if it's proprietary to deeplearning.ai?
Or can we resolve this issue in any other way?
You will be able to find lr_utils.py and all the other .py files (and thus the code inside them) required by the assignments:
Go to the first assignment (i.e. Python Basics with numpy), which you can always access whether you are a paid user or not.
Then click on the 'Open' button in the menu bar above.
Then you can include the code of the modules directly in your code.
As per the answer above, lr_utils is part of the deep learning course and is a utility used to download the data sets. It should readily work with the paid version of the course, but in case you 'lost' access to it, I noticed this GitHub project has lr_utils.py as well as some data sets:
https://github.com/andersy005/deep-learning-specialization-coursera/tree/master/01-Neural-Networks-and-Deep-Learning/week2/Programming-Assignments
Note:
The Chinese website links did not work when I looked at them. Maybe the server storing the files expired. I did see that this GitHub project has some datasets as well as the lr_utils file.
EDIT: The link no longer seems to work. Maybe this one will do?
https://github.com/knazeri/coursera/blob/master/deep-learning/1-neural-networks-and-deep-learning/2-logistic-regression-as-a-neural-network/lr_utils.py
Download the datasets from the answer above, and use this code (it's better than the above since it closes the files after usage):
import numpy as np
import h5py

def load_dataset():
    with h5py.File('datasets/train_catvnoncat.h5', "r") as train_dataset:
        train_set_x_orig = np.array(train_dataset["train_set_x"][:])
        train_set_y_orig = np.array(train_dataset["train_set_y"][:])
    with h5py.File('datasets/test_catvnoncat.h5', "r") as test_dataset:
        test_set_x_orig = np.array(test_dataset["test_set_x"][:])
        test_set_y_orig = np.array(test_dataset["test_set_y"][:])
        classes = np.array(test_dataset["list_classes"][:])
    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))
    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes
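It is then called exactly like the course helper, e.g.:
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()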
"lr_utils" is not official library or something like that.
Purpose of "lr_utils" is to fetch the dataset that is required for course.
option (didn't work for me): go to this page and there is a python code for downloading dataset and creating "lr_utils"
I had a problem with fetching data from provided url (but at least you can try to run it, maybe it will work)
option (worked for me): in the comments (at the same page 1) there are links for manually downloading dataset and "lr_utils.py", so here they are:
link for dataset download
link for lr_utils.py script download
Remember to extract dataset when you download it and you have to put dataset folder and "lr_utils.py" in the same folder as your python script that is using it (script with this line "import lr_utils").
The way I fixed this problem was by:
clicking File -> Open -> you will see the lr_utils.py file (it does not matter whether you have the paid or free version of the course);
opening the lr_utils.py file in Jupyter Notebook and clicking File -> Download (store it in your own folder), then rerunning the module imports. It will work like magic.
I did the same process for the datasets folder.
You can download the train and test datasets directly here: https://github.com/berkayalan/Deep-Learning/tree/master/datasets
And you need to add this code to the beginning:
import numpy as np
import h5py
import os

def load_dataset():
    train_dataset = h5py.File('datasets/train_catvnoncat.h5', "r")
    train_set_x_orig = np.array(train_dataset["train_set_x"][:]) # your train set features
    train_set_y_orig = np.array(train_dataset["train_set_y"][:]) # your train set labels
    test_dataset = h5py.File('datasets/test_catvnoncat.h5', "r")
    test_set_x_orig = np.array(test_dataset["test_set_x"][:]) # your test set features
    test_set_y_orig = np.array(test_dataset["test_set_y"][:]) # your test set labels
    classes = np.array(test_dataset["list_classes"][:]) # the list of classes
    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))
    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes
I faced a similar problem and followed these steps:
1. Import the following libraries:
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
2. Download train_catvnoncat.h5 and test_catvnoncat.h5 from either of the links below:
https://github.com/berkayalan/Neural-Networks-and-Deep-Learning/tree/master/datasets
or
https://github.com/JudasDie/deeplearning.ai/tree/master/Improving%20Deep%20Neural%20Networks/Week1/Regularization/datasets
3. Create a folder named datasets and paste these two files into it.
[Note: the datasets folder and your source code file should be in the same directory]
4. Run the following code:
def load_dataset():
    with h5py.File('datasets/train_catvnoncat.h5', "r") as train_dataset:
        train_set_x_orig = np.array(train_dataset["train_set_x"][:])
        train_set_y_orig = np.array(train_dataset["train_set_y"][:])
    with h5py.File('datasets/test_catvnoncat.h5', "r") as test_dataset:
        test_set_x_orig = np.array(test_dataset["test_set_x"][:])
        test_set_y_orig = np.array(test_dataset["test_set_y"][:])
        classes = np.array(test_dataset["list_classes"][:])
    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))
    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes
5. Load the data:
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
Check the datasets:
print(len(train_set_x_orig))
print(len(test_set_x_orig))
Your data set is now ready. You may check the lengths of the train_set_x_orig and train_set_y variables; for mine, they were 209 and 50.
I could download the dataset directly from the Coursera page.
Once you open the Coursera notebook, go to File -> Open and a file-browser window will be displayed.
Here the notebooks and datasets are shown; you can go to the datasets folder and download the required data for the assignment. The lr_utils.py module is also available for download.
Below is the code; just save it in a file named "lr_utils.py" and now you can use it.
import numpy as np
import h5py

def load_dataset():
    train_dataset = h5py.File('datasets/train_catvnoncat.h5', "r")
    train_set_x_orig = np.array(train_dataset["train_set_x"][:]) # your train set features
    train_set_y_orig = np.array(train_dataset["train_set_y"][:]) # your train set labels
    test_dataset = h5py.File('datasets/test_catvnoncat.h5', "r")
    test_set_x_orig = np.array(test_dataset["test_set_x"][:]) # your test set features
    test_set_y_orig = np.array(test_dataset["test_set_y"][:]) # your test set labels
    classes = np.array(test_dataset["list_classes"][:]) # the list of classes
    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))
    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes
If your code file cannot find the newly created lr_utils.py file, just add this code:
import sys
sys.path.append("full path of the directory where you saved the lr_utils.py file")
Here is the way to get the dataset, as per #ThinkBonobo:
https://github.com/andersy005/deep-learning-specialization-coursera/tree/master/01-Neural-Networks-and-Deep-Learning/week2/Programming-Assignments/datasets
Write a lr_utils.py file, as in #StationaryTraveller's answer above, and put it into any directory on sys.path.
def load_dataset():
    with h5py.File('datasets/train_catvnoncat.h5', "r") as train_dataset:
        ....
BUT make sure that you delete the 'datasets/' prefix, because the name of your data file is now just train_catvnoncat.h5.
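In other words (a one-line sketch of that change), the open line becomes:
with h5py.File('train_catvnoncat.h5', "r") as train_dataset: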
Restart the kernel, and good luck.
I may add to the answers that you can save the lr_utils script on disk and import it as a module using the importlib util functions in the following way.
The code below came from the general thread about importing functions from external files into the current user session:
How to import a module given the full path?
### Source load_dataset() function from a file
import importlib.util

# Specify a name (I think it can be whatever) and the path to the lr_utils.py script locally on your PC:
util_script = importlib.util.spec_from_file_location("utils function", "D:/analytics/Deep_Learning_AI/functions/lr_utils.py")
# Make a module
load_utils = importlib.util.module_from_spec(util_script)
# Execute it on the fly
util_script.loader.exec_module(load_utils)
# Load your function
load_utils.load_dataset()
# Then you can use the load_dataset() coming from the above specified 'module' called load_utils
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_utils.load_dataset()
# This could be a general way of calling different user-specified modules, so I did the same for the rest of the neural network functions and put them into a separate file to keep my script clean.
# Just remember that Python treats it like a module, so you need to prefix the function name with the 'module' name, e.g.:
# d = nnet_utils.model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1000, learning_rate = 0.005, print_cost = True)
nnet_script = importlib.util.spec_from_file_location("utils function", "D:/analytics/Deep_Learning_AI/functions/lr_nnet.py")
nnet_utils = importlib.util.module_from_spec(nnet_script)
nnet_script.loader.exec_module(nnet_utils)
This has been the most convenient way for me to source functions/methods from different files in Python so far.
I am coming from an R background, where you can call just one function, source(), to bring the contents of external scripts into your current session.
The above answers didn't help me; some links had expired.
So, lr_utils is not a pip library but a file in the same notebook folder as the assignment on the Coursera website.
You can click on "Open", and it'll open a file explorer where you can download everything that you would want to run in another environment.
(I used this in a browser.)
This is how I solved mine: I copied the lr_utils file and pasted it into my notebook, and thereafter I downloaded the dataset by zipping the files and extracting them, with the following code. Note: run the code in the Coursera notebook and select only the zipped file in the directory to download.
!pip install zipfile36

import zipfile
zf = zipfile.ZipFile('datasets/train_catvnoncat_h5.zip', mode='w')
try:
    zf.write('datasets/train_catvnoncat.h5')
    zf.write('datasets/test_catvnoncat.h5')
finally:
    zf.close()
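Once the zip file is on your machine, it can be unpacked with the standard library (a small sketch; the archive name matches the snippet above):
import zipfile
with zipfile.ZipFile('train_catvnoncat_h5.zip') as zf:
    zf.extractall()   # entries were stored as datasets/..., so this recreates the datasets folder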

Python script has to be run twice to execute

I have written a Python script to gather and analyze data from a .csv file and then plot it using the matplotlib.pyplot module. I'm using numpy.genfromtxt() to gather the data.
The first time I run the file, nothing happens. I get the console message:
>>> runfile('C:/my_filepath/thing.py')
and nothing more. If I run the file again, then it executes, prints the stuff, the plot comes up etc:
runfile('C:/my_filepath/thing.py')
~the stuff it's supposed to print~
More info: this problem only occurs on my laptop, which leads me to believe it has something to do with my matplotlib installation (mysterious to me, because I installed an identical Anaconda package on my desktop and there is no issue there). On the laptop the plot window is separate, and on the desktop the plot displays in the console. Maybe that's relevant.
Has anyone had this issue?
edit2: This only happens if I try to run the program multiple times in the same console. If I run the script in a fresh console it works fine, but then you have to run it twice for every execution after that. If I close the matplotlib window, kill the console, and open a new one, I can execute it fine every time.
edit: here is a working example which exhibits the odd behavior. Don't make fun of my code - I learned Python over a weekend.
import os
import re
import numpy as np
from numpy import genfromtxt as gft
import matplotlib.pyplot as plt

def getnames(Pattern):
    #construct the list of files in the directory
    CSVList = []
    for FileName in os.listdir():
        if re.compile(Pattern).search(FileName):
            CSVList.append(FileName)
    print(len(CSVList), 'files found.')
    #sort the list by the integer values of the last 4 digits of the filename
    CSVList.sort(key=lambda x: int(x.rsplit('.')[0][-4:]))
    return CSVList

def xy_extract(Data):
    x = np.array([row[0] for row in Data])
    y = np.array([row[1] for row in Data])
    return x, y

CSVList = getnames(r'(?i)\.csv$')
print(CSVList)
#plt.figure(figsize=(10,5))
plt.xlim(799,3999)
for Filename in CSVList:
    Data = gft(Filename, delimiter=',', skip_header=2)
    x, y = xy_extract(Data)
    plt.plot(x, y, label=Filename)
plt.show()

Write a for loop in Abaqus Macro (Python)

I've been using Abaqus for a while, but I'm new to macros and Python scripting. I'm sorry if this kind of question has already been asked; I did search on Google to see if there was a similar problem, but nothing worked.
My problem is the following:
I have a model in Abaqus; I've run an analysis with 2 steps, I created a path in it, and I'd like to extract the value of the Von Mises stress along this path for each frame of each step.
Ideally I'd love to save it into an Excel or .txt file for easy further analysis (e.g. in Matlab).
Edit: I solved part of the problem; my macro works and all my data is correctly saved in the XY-Data manager.
Now I'd like to save all the "Y" data to an Excel or text file, and I have no clue how to do that. I'll keep digging, but if anyone has an idea I'll take it!
Here's the code from the abaqusMacros.py file:
# -*- coding: mbcs -*-
# Do not delete the following import lines
from abaqus import *
from abaqusConstants import *
import __main__

def VonMises():
    import section
    import regionToolset
    import displayGroupMdbToolset as dgm
    import part
    import material
    import assembly
    import step
    import interaction
    import load
    import mesh
    import optimization
    import job
    import sketch
    import visualization
    import xyPlot
    import displayGroupOdbToolset as dgo
    import connectorBehavior

odbFile = session.openOdb(name='C:/Temp/Job-1.odb')
stepsName = odbFile.steps.keys()
for stepId in range(len(stepsName)):
    numberOfFrames = len(odbFile.steps.values()[stepId].frames)
    for frameId in range(numberOfFrames):
        session.viewports['Viewport: 1'].odbDisplay.setPrimaryVariable(
            variableLabel='S', outputPosition=INTEGRATION_POINT, refinement=(
            INVARIANT, 'Mises'))
        session.viewports['Viewport: 1'].odbDisplay.setFrame(step=stepId, frame=frameId)
        pth = session.paths['Path-1']
        session.XYDataFromPath(name='Step_'+str(stepId)+'_'+str(frameId), path=pth, includeIntersections=False,
            projectOntoMesh=False, pathStyle=PATH_POINTS, numIntervals=10,
            projectionTolerance=0, shape=DEFORMED, labelType=TRUE_DISTANCE)
First of all, your function VonMises contains only import statements; the other parts of the code are not properly indented, so they are outside of the function.
Second of all, the function is never called. If you run your script using 'File > Run script', then you should call the function at the end of your file.
These two things seem like obvious errors, but there are some other bad things as well.
Also, I don't see the point of writing import __name__ at the top of your file, because I really doubt you have a module named __name__; the Python environment used by Abaqus probably doesn't either.
There are some other things that might be improved as well, but you should try to fix the errors first.
If you get an actual error message from Abaqus (either in the window or in the abaqus.rpy file), it would be helpful if you posted it here.
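For example, once the loop body is indented so that it sits inside the function (the first point), a script run via 'File > Run script' would simply end with the call:
VonMises()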
Got it: I'll use the code posted above.
And I just discovered the "Excel Utilities" tool in Abaqus, which is enough for what I want to do.
Thanks y'all for your input.
Here is how you extract the XY-data:
from odbAccess import *

odb = session.odbs['C:/Job-Directory/Job-1.odb']
output = open('Result.dat', 'w')
number_of_xydata = 10   # set this to the number of XYData objects you want to extract
for i in range(number_of_xydata):
    xy1 = odb.userData.xyDataObjects['XYData-'+str(i)]
    for x in range(len(xy1)):
        output.write(str(xy1[x]) + "\n")
output.close()
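Since the question mentions Excel, a small variation of the same loop (a sketch under the same assumptions about the XY-data repository; untested outside Abaqus) writes one comma-separated row per point, which Excel opens directly:
from odbAccess import *

odb = session.odbs['C:/Job-Directory/Job-1.odb']
output = open('Result.csv', 'w')
for name in odb.userData.xyDataObjects.keys():
    xy = odb.userData.xyDataObjects[name]
    for x, y in xy:   # each XYData entry is an (x, y) pair
        output.write('%s,%g,%g\n' % (name, x, y))
output.close()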

Python Mlab - cannot import name find_available_releases

I am new to Python. I am trying to run MATLAB from inside Python using the mlab package. I was following the guide on the website, and I entered this in the Python command line:
from mlab.releases import latest_release
The error I got was:
cannot import name find_available_releases
It seems that there is no find_available_releases function in matlabcom.py.
May I know if anyone knows how to resolve this? Thank you!
PS: I am using Windows 7, MATLAB 2012a, and Python 2.7.
I skimmed through the code, and I don't think all of the README file and its documentation match what's actually implemented. It appears to be mostly copied from the original mlabwrap project.
This is confusing, because mlabwrap is implemented using a C extension module to interact with the MATLAB Engine API, whereas the mlab code seems to have replaced that part with a pure-Python implementation of the MATLAB-bridge backend. It comes from the "Dana Pe'er Lab" and uses two different methods to interact with MATLAB depending on the platform (COM/ActiveX on Windows, pipes on Linux/Mac).
Now that we understand how the backend is implemented, we can start looking at the import error.
Note: the Linux/Mac part of the code tries to find the MATLAB executable in some hardcoded fixed locations and allows choosing between different versions.
However, you are working on Windows, and the code doesn't really implement any way of picking between MATLAB releases for this platform (so all of the methods like discover_location and find_available_releases are useless on Windows). In the end, the COM object is created as:
self.client = win32com.client.Dispatch('matlab.application')
As explained here, the ProgID matlab.application is not version-specific and will simply use whatever was registered as the default MATLAB COM server. You can explicitly specify which MATLAB version you want (assuming you have multiple installations); for instance, matlab.application.8.3 will pick MATLAB R2014a.
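For instance, a version-specific dispatch would be a one-liner (the ProgID suffix depends on which versions are registered on your machine):
import win32com.client
client = win32com.client.Dispatch('matlab.application.8.3')   # explicitly request R2014a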
So to fix the code, IMO the easiest way would be to get rid of all that logic about multiple MATLAB versions (in the Windows part of the code), and just let it create the MATLAB COM object as is. I haven't attempted it, but I don't think it's too involved... Good luck!
EDIT:
I downloaded the module and managed to get it to work on Windows (I'm using Python 2.7.6 and MATLAB R2014a). Here are the changes:
$ git diff
diff --git a/src/mlab/matlabcom.py b/src/mlab/matlabcom.py
index 93f075c..da1c6fa 100644
--- a/src/mlab/matlabcom.py
+++ b/src/mlab/matlabcom.py
@@ -21,6 +21,11 @@ except:
   print 'win32com in missing, please install it'
   raise
 
+def find_available_releases():
+  # report we have all versions
+  return [('R%d%s' % (y,v), '')
+          for y in range(2006,2015) for v in ('a','b')]
+
 def discover_location(matlab_release):
   pass
@@ -62,7 +67,7 @@ class MatlabCom(object):
     """
     self._check_open()
     try:
-      self.eval('quit();')
+      pass #self.eval('quit();')
     except:
       pass
     del self.client
diff --git a/src/mlab/mlabraw.py b/src/mlab/mlabraw.py
index 3471362..16e0e2b 100644
--- a/src/mlab/mlabraw.py
+++ b/src/mlab/mlabraw.py
@@ -42,6 +42,7 @@ def open():
   if is_win:
     ret = MatlabConnection()
     ret.open()
+    return ret
   else:
     if settings.MATLAB_PATH != 'guess':
       matlab_path = settings.MATLAB_PATH + '/bin/matlab'
diff --git a/src/mlab/releases.py b/src/mlab/releases.py
index d792b12..9d6cf5d 100644
--- a/src/mlab/releases.py
+++ b/src/mlab/releases.py
@@ -88,7 +88,7 @@ class MatlabVersions(dict):
     # Make it a module
     sys.modules['mlab.releases.' + matlab_release] = instance
     sys.modules['matlab'] = instance
-    return MlabWrap()
+    return instance
   def pick_latest_release(self):
     return get_latest_release(self._available_releases)
First I added the missing find_available_releases function. I made it so that it reports that all MATLAB versions are available (as I explained above, it doesn't really matter, because of the way the COM object is created). An even better fix would be to detect the installed/registered MATLAB versions using the Windows registry (check the keys HKCR\Matlab.Application.X.Y and follow their CLSID in HKCR\CLSID). That way you could truly choose and pick which version to run.
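A rough sketch of that registry-based detection (an untested illustration using the standard _winreg module; the probed version range is an assumption):
import _winreg

def find_available_releases():
    # probe version-specific ProgIDs such as Matlab.Application.8.3 (R2014a)
    found = []
    for major in range(7, 10):
        for minor in range(15):
            progid = 'Matlab.Application.%d.%d' % (major, minor)
            try:
                _winreg.OpenKey(_winreg.HKEY_CLASSES_ROOT, progid).Close()
                found.append(progid)
            except WindowsError:
                pass   # this version is not registered
    return found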
I also fixed two unrelated bugs (one where the author forgot the function return value, and the other unnecessarily creating the wrapper object twice).
Note: during testing, it might be faster NOT to start/shut down a MATLAB instance each time the script is called. This is why I commented out self.eval('quit();') in the close function. That way you can start MATLAB using matlab.exe -automation (do this only once), and then repeatedly re-use the session without shutting it down. Just kill the process when you're done :)
Here is a Python example to test the module (I also show a comparison against NumPy/SciPy/Matplotlib):
test_mlab.py
# could be anything from: latest_release, R2014b, ..., R2006a
# makes no difference :)
from mlab.releases import R2014a as matlab
# show MATLAB version
print "MATLAB version: ", matlab.version()
print matlab.matlabroot()
# compute SVD of a NumPy array
import numpy as np
A = np.random.rand(5, 5)
U, S, V = matlab.svd(A, nout=3)
print "S = \n", matlab.diag(S)
# compare MATLAB's SVD against Scipy's SVD
U, S, V = np.linalg.svd(A)
print S
# 3d plot in MATLAB
X, Y, Z = matlab.peaks(nout=3)
matlab.figure(1)
matlab.surf(X, Y, Z)
matlab.title('Peaks')
matlab.xlabel('X')
matlab.ylabel('Y')
matlab.zlabel('Z')
# compare against matplotlib surface plot
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap='jet')
ax.view_init(30.0, 232.5)
plt.title('Peaks')
plt.xlabel('X')
plt.ylabel('Y')
ax.set_zlabel('Z')
plt.show()
Here is the output I get:
C:\>python test_mlab.py
MATLAB version: 8.3.0.532 (R2014a)
C:\Program Files\MATLAB\R2014a
S =
[[ 2.41632007]
 [ 0.78527851]
 [ 0.44582117]
 [ 0.29086795]
 [ 0.00552422]]
[ 2.41632007 0.78527851 0.44582117 0.29086795 0.00552422]
EDIT2:
The above changes have been accepted and merged into mlab.
You are right in saying that find_available_releases() is not written. There are two ways to work this out:
Check out the code in Linux and work on it (you are working on Windows!)
Change the code as below.
Add the following function to matlabcom.py, as it appears in matlabpipe.py:
def find_available_releases():
  global _RELEASES
  if not _RELEASES:
    _RELEASES = list(_list_releases())
  return _RELEASES
If you look at the mlabraw.py file, the following code will give you a clear idea of why I am saying this:
import sys

is_win = 'win' in sys.platform

if is_win:
  from matlabcom import MatlabCom as MatlabConnection
  from matlabcom import MatlabError as error
  from matlabcom import discover_location, find_available_releases
  from matlabcom import WindowsMatlabReleaseNotFound as MatlabReleaseNotFound
else:
  from matlabpipe import MatlabPipe as MatlabConnection
  from matlabpipe import MatlabError as error
  from matlabpipe import discover_location, find_available_releases
  from matlabpipe import UnixMatlabReleaseNotFound as MatlabReleaseNotFound
