I'm currently experiencing an issue when wrapping some Fortran subroutines for use in a Python 3 script. The issue has only come up since I attempted to use OpenMP in the subroutines.
For example, I compile a module 'test.pyd' using f2py -c -m --fcompiler=gfortran --compiler=mingw32 --f90flags='-fopenmp' test test.f90 -lgomp, where 'test.f90' is a Fortran subroutine containing a parallelized loop. Upon attempting to import this module into my script, I encounter ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed.
Removing the -fopenmp flag when compiling, or removing the !$omp directives from the Fortran subroutine, eliminates this error.
If I change the subroutine into a roughly equivalent standalone Fortran program, it compiles to a .exe and runs correctly in parallel.
I'm on Windows 10 with an AMD64 processor, using the GNU Fortran and C compilers from TDM-GCC.
I just tried your build command, and it looks perfectly fine. I am able to run a parallel subroutine from a Python module compiled exactly the way you are doing it.
How are you executing the Python code that uses your module? I think the problem is that you don't have the OpenMP DLL (which is named libgomp-1.dll) in your path.
I would advise you to run (from a bash shell):
where libgomp-1.dll
If the command can't find it, then you should probably add the path to your openmp dll (which is usually "C:\tools\mingw64\bin\") to your PATH.
In order to do this, you can use:
export PATH="$PATH:/c/tools/mingw64/bin" && python script_using_module.py
There is a good chance that the way you are executing your Python code doesn't set up the path properly, since you can run the standalone parallel executable without a problem.
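If you would rather handle this from inside the script, here is a minimal sketch; the MinGW path is an assumption, so point it at whichever directory actually contains libgomp-1.dll on your machine:

import os

# Directory that contains libgomp-1.dll -- adjust to your MinGW/TDM-GCC install.
mingw_bin = r"C:\tools\mingw64\bin"

if hasattr(os, "add_dll_directory"):  # Python 3.8+ on Windows
    os.add_dll_directory(mingw_bin)
else:  # older Pythons resolve extension-module DLL dependencies via PATH
    os.environ["PATH"] = mingw_bin + os.pathsep + os.environ.get("PATH", "")

import test  # the f2py-built module; import only after the DLL search path is set up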
Related
I have always been able to use MPI with C++, compiling programs via mpicxx. Some time ago, I also had to use MPI with Python. I installed the Python package mpi4py via Anaconda, and my Python MPI code worked fine.
Now, however, I have gone back to C++. When trying to compile with mpicxx, I get an error:
"
mpicxx -o TEST.exe TEST.cpp
The Open MPI wrapper compiler was unable to find the specified compiler
x86_64-conda_cos6-linux-gnu-c++ in your PATH.
Note that this compiler was either specified at configure time or in
one of several possible environment variables."
Apparently, installing mpi4py with Anaconda messed something up, since the system now looks for a compiler with "conda" in its name...
Can someone guide me to fix this in a way that lets me freely use both C++ and Python with MPI?
Edit: Using Ubuntu 18.04.
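I understand that Open MPI's wrapper compilers can be pointed at a different underlying compiler through the OMPI_CXX environment variable; would overriding it like this be a reasonable fix? A minimal sketch of driving that from Python (it assumes both g++ and mpicxx are on the PATH):

import os
import subprocess

# Override the compiler the Open MPI mpicxx wrapper calls underneath.
env = dict(os.environ, OMPI_CXX="g++")

# Invoke the wrapper exactly as before; only the underlying compiler changes.
subprocess.run(["mpicxx", "-o", "TEST.exe", "TEST.cpp"], env=env, check=True)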
I want to write code in Python but still have real-time capable code. This means I cannot use the Python interpreter. Mypyc looks promising for this very specific purpose, even though it is not a goal of the tool, as it is only meant to accelerate Python. Would it be possible to run mypyc-generated code without the Python interpreter?
I have tried the following things, without success:
Compiling __native.c with gcc and manually pointing it at the files it requires, such as Python.h (from the Python installation) and the mypyc runtime libraries.
Opening the .so file from a C program with dlopen and looking up functions with dlsym.
TL;DR
Adding pybind11 bindings to a working C++ DLL project allows me to import and use the resulting DLL in Python, but it breaks my ability to load it from C++ code via the boost/dll machinery.
Summary
I've got a C++ library that I compile to FooLib.dll. I use boost/dll's BOOST_DLL_ALIAS and boost::dll::import_alias() to export and load a class Foo that does some work in other C++ code.
Some details are omitted, but it all works great, following this recipe.
I'd like to be able to call the same library code from Python to do some complicated functional testing and do comparisons to numpy/scipy prototypes without having to write so much test code in C++.
So I tried to add pybind11 bindings to the FooLib DLL project using PYBIND11_MODULE.
It compiles, and I get a FooLib.dll. I can copy and rename it to FooLib.pyd, import it as a Python module, and it all works fine: Foo is exported as a Python class and behaves as expected.
However, when I compile in the pybind11 bindings, the boost/dll import machinery can no longer load the original FooLib.dll. I verify with boost::dll::library_info() that the appropriate CreateFoo symbol is still exported from the DLL, but loading with boost::dll::import_alias() fails with:
boost::dll::shared_library::load() failed: The specified module could not be found
Minimal Example
Unfortunately, something that needs a C++ executable, a Python interpreter, and compiled Boost isn't exactly minimal, but I did my best here:
https://github.com/danzimmerman/pybind-boostdll-minimal
Direct links to the source files:
DLL Project Files
HelloSayerLib.h
HelloSayerImp.cpp
C++ Test Code
HelloSayerLibCppTest.cpp
Python Test Code
HelloSayerLibPythonTests.py
Any advice for next steps?
Is it even possible to compile to one binary that works for both C++ and Python like this?
The suggestion in n. 'pronouns' m.'s comment is correct. Simply copying the Python DLL from the Anaconda distribution I built against into the C++ program's run directory fixes the problem. It makes sense in retrospect, but it didn't occur to me.
This makes it even more likely that I should keep the builds separate, or at least set up my real project to build the pybind11 bindings only on my machine.
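For anyone hitting the same thing, here is a small sketch of how to locate the right DLL to copy; it assumes a standard Anaconda-on-Windows layout, where pythonXY.dll sits in the environment root, and the destination path is just a placeholder:

import os
import sys

# Name and location of the python DLL for the interpreter the bindings were built against.
dll_name = "python{}{}.dll".format(*sys.version_info[:2])
dll_path = os.path.join(sys.base_prefix, dll_name)

print(dll_path, "exists:", os.path.exists(dll_path))

# Copy it next to the C++ test executable (placeholder path):
# shutil.copy2(dll_path, r"C:\path\to\cpp\test\bin")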
I'm just now reading up on Cython, and I'm wondering whether Cython compiles imported modules into the executable, or whether you still need to have those modules installed on the target machine to run the Cython binary.
The "interface" of a Cython module remains at the Python level. When you import a module in Cython, the module becomes available only at the Python level of the code and uses the regular Python import mechanism.
So:
Cython does not "compile in" the dependencies.
You need to install the dependencies on the target machine.
For "Cython level" code, including the question of "cimporting" module, Cython uses the equivalent of C headers (the .pxd declaration files) and dynamically loaded libraries to access external code. The .so files (for Linux, DLL for windows and dylib for mac) need to be present on the target machine.
I'm compiling a SWIG-wrapped C++ library into a Python module that should ideally be distributable, so that individuals can use the library transparently, like any other module. I'm building the library using CMake and SWIG on OS X 10.8.2 (system framework: Apple Python 2.7.2; installed framework: python.org Python 2.7.5).
The trouble I'm running into is that after linking with the framework, the compiled library is very selective about the version of Python that is running it, even though otool -L shows that it is compiled with "compatibility version 2.7.0". It appears that the different distributions have slightly different linker symbols, and things start to break.
The most common problem is that it crashes the Python kernel with Fatal Python error: PyThreadState_Get: no current thread (according to this question, indicative of a linking incompatibility). I can get my library to work in the Python it was compiled against.
Unfortunately, this library is for use in an academic laboratory, with computers of all different ages and operating systems, many of them kept permanently out of date in order to run proprietary software that hasn't been updated in years, and I certainly don't have time to play I.T. and fix all of them. Currently I've just been compiling against the version of Python that comes with the latest Enthought distribution, since most computers can get that one way or another. A lot of the researchers I work with use a Python IDE specific to their field that comes with a built-in interpreter, but it is not modifiable and not a framework build (so I can't build against it); for the time being they can run their experiment scripts in Enthought as a stop-gap, but it's not ideal. Even when I link against the python.org distribution that is the same version as the IDE's built-in Python (2.7.2, I think; it even has the same damn release number), it still breaks the same way.
In any case, the question is: is there any way to link a SWIG-wrapped Python library so that it will run (at least on OS X) regardless of which interpreter is importing it (given certain minimum conditions, e.g. guaranteed to be >= 2.7.0)?
EDIT
Compiling against the Canopy-installed Python with the following linker flags in CMake:
set (CMAKE_SHARED_LINKER_FLAGS "-L ~/Library/Enthought/Canopy_32bit/User/lib -ldl -framework CoreFoundation -lpython2.7 -u _PyMac_Error ~/Library/Enthought/Canopy_32bit/User/lib")
This results in an @rpath entry when examining the linked library with otool. It seems to work fine with Enthought/Canopy on other OS X systems. The -lpython2.7 seems to be optional; it adds an additional Python symbol in the library, referencing the OS X Python (not the system Python).
Compiling against the system Python with the following linker flags:
set (CMAKE_SHARED_LINKER_FLAGS "-L /Library/Frameworks/Python.framework/Versions/Current/lib/python2.7/config -ldl -framework CoreFoundation -u _PyMac_Error /Library/Frameworks/Python.framework/Versions/Current/Python")
This works in Enthought and the system Python.
Neither of these works with the Python bundled with PsychoPy, which is the target environment; compiling against the bundled Python works with PsychoPy but with no other Python.
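For reference, a small diagnostic that can be run in each interpreter (system, python.org, Canopy, PsychoPy) to compare the builds the module would be loaded into; it only prints standard interpreter and configuration values:

from __future__ import print_function  # keeps the sketch valid on Python 2.7
import sys
import sysconfig

print("executable:", sys.executable)
print("version:   ", sys.version.split()[0])
print("prefix:    ", sys.prefix)
print("libdir:    ", sysconfig.get_config_var("LIBDIR"))
print("ldlibrary: ", sysconfig.get_config_var("LDLIBRARY"))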
I've been getting the same error and having the same problem. I'd be interested to know if you've found a solution.
I have found that if I compile against the native Python include directory and run the native OS X Python binary /usr/bin/python, it always works fine. Even when I compile against some other Python library (like the one I find at /Applications/Canopy.app/appdata/canopy-1.0.3.1262.macosx-x86_64/Canopy.app/Contents/include), I can get the native OS X interpreter to work just fine.
I can't seem to get the Enthought version to work, ever. What directory are you compiling against for use with Enthought/Canopy?
There also seems to be some question of configuring SWIG at installation to know about a particular python library, but this might not be related: http://swig.10945.n7.nabble.com/SWIG-Installation-mac-osx-10-8-3-Message-w-o-attachments-td13183.html