ITK in Python: SimpleITK or not?

I have been working with ITK for a week, thanks to SimpleITK in Python. Although I am currently satisfied with SimpleITK, I have noticed that some features, such as the Powell or OnePlusOneEvolutionary optimization schemes, are not available in SimpleITK. The same seems to be true of landmark-based registration methods.
So I was wondering whether there is a way to access all the features available in ITK (in C++) from SimpleITK, or whether it is necessary to do my own wrapping.
If not, I will learn C++ later to do so!
Thanks!

You are correct. SimpleITK is limited in functionality. It is however possible to access the entire ITK library in Python using the WrapITK interface. This is an old interface which I don't believe has been updated for a while. However, it is still possible to compile new builds from source and use WrapITK.
The process is not exactly smooth. I have done the build on a Windows machine in the past and noted some of the not-so-smooth steps I ran into at the time. I'll lay those out here. Since you haven't mentioned your OS, I'm going to go ahead and explain the setup for Windows. Let's see if you're able to get it up and running.
ITK-4.6 + Python2.7 + CMake + VS2008 professional (all 32-bit)
Nothing later than VS2008 can compile GCCXML, which is required for generating python bindings
After configuring in CMake, the following flags additionally need to be set:
ITK_WRAP_PYTHON
ITK_WRAP_* (all types: float, double, etc.)
ITK_BUILD_SHARED_LIBS (gets automatically set if first flag is set)
In VS2008, build in Release mode only. The number of projects in the project explorer will be more than 500; around 300-350 should get built.
When building, make sure that you have an accessible internet connection for downloading GCCXML (which will likely get downloaded after you have started the build in VS2008). There should be no error while verifying the download. If there is, it might be because of directory creation permission errors.
The project should get built with NO ERRORS
Copy the WrapITK.pth file from the ITK build/Wrapping/Generators/Python/Release to Python/Lib/site-packages
Add the following to your Path variable:
C:\ProgramLibs\ITK\build2008\lib\Release
C:\ProgramLibs\ITK\build2008\bin\Release
C:\ProgramLibs\ITK\build2008\lib
Now ITK should work properly (below is a test Python file you can use as a sanity check on the build). The one exception: the first time you call itk.Image in your program or in the Python interpreter, a dozen warnings are emitted and the call takes a while to execute. This is a known issue. Once you are past it, it's smooth.
Test file
import itk

# 2-D unsigned-char image type
pixelType = itk.UC
imageType = itk.Image[pixelType, 2]

# templated reader and writer for that image type
readerType = itk.ImageFileReader[imageType]
writerType = itk.ImageFileWriter[imageType]

reader = readerType.New()
writer = writerType.New()

reader.SetFileName("<input image file location>")
writer.SetFileName("D:/Output.png")
writer.SetInput(reader.GetOutput())
writer.Update()

There is also WrapITK, which is a Python wrapping of ITK that you can enable when compiling ITK (so you'll have to compile it yourself, but at least you will not need to write the wrapping code). See http://kitware.com/blog/home/post/888, http://www.itk.org/Wiki/ITK/Wrapping and http://www.itk.org/Wiki/ITK/Release_4/Wrapping/WrapITK_Installation.t .
Note, however, that probably not all filters are wrapped (http://www.itk.org/Wiki/Proposals:Increasing_WrapITK_Coverage#List_of_Unwrapped_Filters , last updated in 2009, so the situation has probably improved since then).
The only compiled WrapITK package I came across is Devide-RE https://www.youtube.com/watch?v=-b1zS536R2M (with an older version of ITK, 3.2 if I remember correctly), but maybe Slicer and Vistrail have it too (http://www.itk.org/pipermail/insight-users/2009-August/031910.html).
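If you do get a WrapITK build working, a quick way to see whether a particular class made it into the wrapping is simply to probe the itk module. A minimal sketch; the class names below are just examples, not a definitive list of what is or isn't wrapped:
import itk

# anything that was not wrapped simply won't exist as an attribute of itk
for name in ("PowellOptimizer", "OnePlusOneEvolutionaryOptimizer"):
    print(name, "available:", hasattr(itk, name))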

Related

Where is mujoco_py.MjModel(<filepath>) defined?

I have been studying some of the OpenAI gym envs and came across this line:
self.model = mujoco_py.MjModel(fullpath)
(https://github.com/openai/gym/blob/master/gym/envs/mujoco/mujoco_env.py#L28)
Can anyone tell me where mujoco_py.MjModel() is defined? I assume this is somehow pulled from native MuJoCo / Cython...
EDIT
Also, when I search the install folder of mujoco_py (<Python-installation-directory>/Lib/site-packages/mujoco_py/), there is literally no MjModel to be found (Sublime full-text search, which might exclude some files). What I do find a lot of are 'mjModel' and 'PyMjModel', though.
I am confused because the instantiation through mujoco_py.MjModel() also seems to create a different kind of model than using functions like mujoco_py.load_model_from_path(). The former have a .data attribute while the latter apparently don't.
If you have installed mujoco-py, you can probably find MjModel in something like the following file:
<Python-installation-directory>/Lib/site-packages/mujoco_py/mjcore.py
You won't find that python file in the mujoco-py repository though. It probably gets generated from C++ code during the installation process (when running setup.py). It looks like MjModel is defined in the mjmodel.pxd file (for more info on .pxd files, see this).
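If you just want to confirm at runtime where the class ends up on your own install, a small sketch like the following should work (assuming mujoco-py is installed and exposes MjModel):
import mujoco_py

# where the installed package lives on disk
print(mujoco_py.__file__)

# which (possibly generated) module the class actually comes from
print(mujoco_py.MjModel.__module__)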

Looking for source code of gen_nn_ops in tensorflow

I am new to TensorFlow for deep learning and interested in the deconvolution (transposed convolution) operation in TensorFlow. I need to take a look at the source code of the deconvolution operation; the function is, I guess, conv2d_transpose() in nn_ops.py.
However, that function calls another function named gen_nn_ops.conv2d_backprop_input(). I need to look at what is inside this function, but I am unable to find it in the repository. Any help would be appreciated.
You can't find this source because it is automatically generated by bazel. If you build from source, you'll see this file inside bazel-genfiles. It's also present in your local distribution, which you can find using the inspect module. The file contains automatically generated Python wrappers for the underlying C++ implementations, so it basically consists of a bunch of one-line functions. A shortcut to find the underlying C++ implementation of such a generated Python op is to convert snake case to camel case, i.e. conv2d_backprop_input -> Conv2dBackpropInput:
import inspect
import tensorflow as tf
from tensorflow.python.ops import gen_nn_ops

# figure out where gen_nn_ops is
print(tf.nn.conv2d_transpose.__globals__['gen_nn_ops'])

# locate the generated file on disk (pass the function object, not a string)
inspect.getsourcefile(gen_nn_ops.conv2d_backprop_input)
'/Users/yaroslav/anaconda/lib/python3.5/site-packages/tensorflow/python/ops/gen_nn_ops.py'
If you cared to find out how this file really came about, you could follow the trail of bazel dependencies in BUILD files. To find the Bazel target that generated it from the tensorflow source tree:
fullname=$(bazel query tensorflow/python/ops/gen_nn_ops.py)
bazel query "attr('srcs', $fullname, ${fullname//:*/}:*)"
//tensorflow/python:nn_ops_gen
So now, going to the BUILD file inside tensorflow/python, you see that this is a target of type tf_gen_op_wrapper_private_py, which is defined here and calls tf_gen_op_wrapper_py from tensorflow/tensorflow.bzl, which looks like this:
def tf_gen_op_wrapper_py(name, out=None, hidden=None, visibility=None, deps=[],
    ....
    native.cc_binary(
        name = tool_name,
This native.cc_binary construct is a way to have a Bazel target that represents execution of an arbitrary command. In this case it calls tool_name with some arguments. With a couple more steps you can find that the "tool" here is compiled from framework/python_op_gen_main.cc.
The reason for this complication is that TensorFlow was designed to be language-agnostic. In an ideal world each op would be described in ops.pbtxt, and each op would have one implementation per hardware type registered with REGISTER_KERNEL_BUILDER, so all implementations would be done in C++/CUDA/Assembly and become automatically available to all language front-ends. There would be an equivalent translator like "python_op_gen_main" for every language, and all client library code would be generated automatically.
However, because Python is so dominant, there was pressure to add features on the Python side. So now there are two kinds of ops: pure TensorFlow ops seen in files like gen_nn_ops.py, and Python-only ops in files like nn_ops.py, which typically wrap the ops from automatically generated files such as gen_nn_ops.py but add extra features/syntactic sugar. Also, originally all names were camel case, but it was decided that the public-facing release should be PEP compliant with more common Python syntax, which is the reason for the camel-case/snake-case mismatch between the C++ and Python interfaces of the same op.
Unfortunately, TensorFlow code is not easy to read :(
To make things fast, the Python code has to call into C++ code, which in turn uses indirect dependencies.
gen_X functions are generated from their C++ counterparts; to find the C++ source, you need to search for Conv2dBackpropInput.
You can find the registration of the kernel op in ops/nn_ops.cc and concrete implementation in kernels/conv_grad_input_ops.cc.
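To automate the snake-case to camel-case conversion described above, a small helper is enough (a sketch, not part of TensorFlow itself):
def op_name_to_cpp(op_name):
    # turn a generated Python op name into the C++ op name to grep for
    return "".join(part.capitalize() for part in op_name.split("_"))

print(op_name_to_cpp("conv2d_backprop_input"))  # Conv2dBackpropInput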
This is a generated file, created when you build TensorFlow.
After you build TensorFlow from source, you should see a symbolic link named "bazel-genfiles" in the TensorFlow root directory. Go to the location it points to, and you can find the file at tensorflow/python/ops/gen_nn_ops.py.
I use TensorFlow 2 (TF2) on Google Colab:
import inspect
from tensorflow.python.ops import gen_nn_ops

inspect.getsourcefile(gen_nn_ops.conv2d_backprop_input)
'/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_nn_ops.py'

Freezing shared objects with cx_freeze on Linux

I am trying to cx_Freeze an application for the Linux platform. The Windows MSI installer works perfectly, but the Linux counterpart doesn't really function the way I want it to.
When the package is built it runs perfectly on the original system, but when ported to a different system (although same architecture) it generates a segfault. The first thing I did was check the libraries, and there are some huge version differences with libc, pthread and libdl. So I decided to include these in the build, like so:
if windows_build:
    build_exe_options['packages'].append("win32net")
    build_exe_options['packages'].append("win32security")
    build_exe_options['packages'].append("win32con")
    pywintypes_dll = 'pywintypes{0}{1}.dll'.format(*sys.version_info[0:2])  # e.g. pywintypes27.dll
    build_exe_options['include_files'].append((os.path.join(GetSystemDirectory(), pywintypes_dll), pywintypes_dll))
else:
    build_exe_options['packages'].append("subprocess")
    build_exe_options['packages'].append("encodings")
    arch_lib_path = ("/lib/%s-linux-gnu" % os.uname()[4])
    shared_objects = ["libc.so.6", "libpthread.so.0", "libz.so.1", "libdl.so.2", "libutil.so.1", "libm.so.6", "libgcc_s.so.1", "ld-linux-x86-64.so.2"]
    lib_paths = ["/lib", arch_lib_path, "/lib64"]
    for so in shared_objects:
        for lib in lib_paths:
            lib_path = "%s/%s" % (lib, so)
            if os.path.isfile(lib_path):
                build_exe_options['include_files'].append((lib_path, so))
                break
After checking the original cx_Freeze'd binary, it seems the dynamic libraries play their part and intercept the calls perfectly. However, I am now at the point where pthread segfaults because it picks up the system's libc instead of mine (checked with ldd and gdb).
My question is quite simple: the method I am trying is horrible, as it doesn't do recursive dependency resolving. So my question would be: "what is the better way of doing this, or should I write recursive dependency resolving into my installer?"
And to head off the suggestion "use native Python instead": we have some hardware appliances (think 2~4U) running Linux (with Bash access) where we want to run this as well. Porting an entire Python installation (with its dynamic links, etc.) seemed like way too much work when we can cx_Freeze it and ship the libraries with it.
I don't know about your other problems, but shipping libc.so.6 to another system the way you do it can not possibly work, as explained here.
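As for the recursive dependency resolving mentioned in the question: on Linux, ldd already reports the transitive closure of shared-object dependencies, so a rough sketch along those lines (illustrative only, and still subject to the caveat above about shipping libc) could look like this:
import subprocess

def shared_library_deps(binary_path):
    # ldd lines look like: "libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x...)"
    output = subprocess.check_output(["ldd", binary_path]).decode()
    deps = []
    for line in output.splitlines():
        if "=>" in line:
            target = line.split("=>", 1)[1].split("(", 1)[0].strip()
            if target and target != "not found":
                deps.append(target)
    return deps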

Extract Assembly Version from DLL using Python

I'm trying to extract some version information from a DLL using python. I read this question:
Python windows File Version attribute
It was helpful, but I also need to get the 'Assembly Version' from the DLL. It's there when I right-click and look on the Versions tab, but I'm not sure how to extract it with Python.
On this page:
http://timgolden.me.uk/python/win32_how_do_i/get_dll_version.html
Tim Golden says:
You can use the slightly more messy language-dependent code in the demos which come with pywin32 to find the strings in the box beneath it.
Can someone point me to an example that might be useful? I looked in the win32api directories but there's nothing obvious. Would I find a solution there?
If you would rather not introduce a dependency on Python.Net, you can also use the win32 api directly:
from win32api import GetFileVersionInfo, LOWORD, HIWORD

def get_version_number(filename):
    info = GetFileVersionInfo(filename, "\\")
    ms = info['FileVersionMS']
    ls = info['FileVersionLS']
    return HIWORD(ms), LOWORD(ms), HIWORD(ls), LOWORD(ls)
Source: http://timgolden.me.uk/python/win32_how_do_i/get_dll_version.html
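For example (the DLL path here is just an illustration):
print(get_version_number(r"C:\Windows\System32\kernel32.dll"))
# returns a 4-tuple of (major, minor, build, revision)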
I'm not sure you can get at this information using native code. The usual way of obtaining the assembly info is by running .NET code (e.g. C#). So I'm guessing that in order to do the same from Python you'll need to run some .NET Python interpreter. See for example http://pythonnet.github.io/
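For completeness, a minimal sketch of the Python.Net route (assumes the pythonnet package is installed and that the DLL is actually a .NET assembly; the path is just a placeholder):
import clr  # provided by the pythonnet package
from System.Reflection import AssemblyName

name = AssemblyName.GetAssemblyName(r"C:\path\to\SomeAssembly.dll")
print(name.Version)  # the Assembly Version, e.g. 1.2.3.4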

Detecting symlinks (mklink) on Vista/7 in Python without Pywin32

Currently the buildout recipe collective.recipe.omelette uses junction.exe on all versions of Windows to create symlinks. However, junction.exe does not come with Windows by default and, most importantly, does not support creating symlinks to files (only directories), which causes a problem with quite a few Python packages.
On NT6+ (Vista and 7) there is now the mklink utility, which not only comes by default but is also capable of creating symlinks to files as well as directories. I would like to update collective.recipe.omelette to use it when available, and I have done so except for one otherwise simple feature: detecting whether a file or folder is actually a symlink. Since this is a small buildout recipe, requiring Pywin32 is in my opinion a bit too much (unless setuptools could somehow download it only on Windows?).
Currently, on Windows, what omelette does is call junction.exe on the folder and then grep the response for "Substitute Name:", but I can't find anything as simple for mklink.
The only method I can think of is to call "dir" in the directory and then go through the response line by line looking for "<SYMLINK>" and the folder/file name on the same line. Surely there is something better?
See jaraco.windows.filesystem (part of the jaraco.windows package) for extensive examples on symlink operations in Windows without pywin32.
Could you use ctypes to access the various needed functions and structures? This patch, currently under discussion, is intended to add symlink functionality to the os module under Vista and Windows 7 -- but it won't land before Python 2.7 and 3.2, so the wait (and the requirement for the very latest versions when they do eventually come out) will likely be too long. A ctypes-based solution might tide you over, and the code in the patch shows what it takes in C to do it (ctypes-based programming in Python is only a bit harder than the same programming in C).
Unless somebody has already released some other stand-alone utility like junction.exe for this purpose, I don't see other workable approaches (until that patch finally makes it into a future Python's stdlib) beyond using ctypes, Pywin32, or dir, and you've already ruled out the last two of those three options...!-)
On Windows, junctions and symbolic links carry the FILE_ATTRIBUTE_REPARSE_POINT attribute (0x400) that marks reparse points. If you get the file's attributes, you can detect this flag.
You could use ctypes (as stated in the other answer) to access kernel32.dll's GetFileAttributes and check for this value.
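A minimal ctypes sketch along those lines (Windows only; note that this flags any reparse point, so it does not by itself distinguish a junction from a true symlink):
import ctypes
from ctypes import wintypes

FILE_ATTRIBUTE_REPARSE_POINT = 0x400
INVALID_FILE_ATTRIBUTES = 0xFFFFFFFF

_GetFileAttributesW = ctypes.windll.kernel32.GetFileAttributesW
_GetFileAttributesW.argtypes = [wintypes.LPCWSTR]
_GetFileAttributesW.restype = wintypes.DWORD

def is_reparse_point(path):
    # junctions and mklink symlinks both carry the reparse-point attribute
    attrs = _GetFileAttributesW(path)
    if attrs == INVALID_FILE_ATTRIBUTES:
        raise ctypes.WinError()
    return bool(attrs & FILE_ATTRIBUTE_REPARSE_POINT)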
You could leverage the Tcl you have available with Tkinter, as it has a 'file link' command that knows about junctions, unlike Python's os module.
I have searched widely for a better solution; however, Python 2.7 just does not have a good one for this. So eventually I ended up with the code below. It's admittedly ugly, but it's downright pretty compared to all the ctypes hacks I've seen.
import os, subprocess

def realpath(path):
    # not a folder path, ignore
    if not os.path.isdir(path):
        return path
    rootpath = os.path.abspath(path)
    oneup, foldername = os.path.split(rootpath)
    output = subprocess.check_output("dir " + oneup, shell=True)
    links = {}
    for line in output.splitlines():
        pos = line.find("<SYMLINKD>")
        if pos != -1:
            link = line[pos + 15:].split()
            links[link[0]] = link[1].strip("[]")
    return links[foldername] if foldername in links else rootpath
