I have always been able to use MPI with C++, compiling programs via mpicxx. Some time ago I also had to use MPI with Python, so I installed the mpi4py package via Anaconda, and my Python MPI codes worked fine.
Now, however, I went back to C++. When trying to compile with mpicxx, I get an error:
"
mpicxx -o TEST.exe TEST.cpp
The Open MPI wrapper compiler was unable to find the specified compiler
x86_64-conda_cos6-linux-gnu-c++ in your PATH.
Note that this compiler was either specified at configure time or in
one of several possible environment variables."
Apparently, installing mpi4py with Anaconda changed my setup: the mpicxx now being picked up looks for a compiler with "conda" in its name. Can someone guide me to a fix so that I can freely use MPI from both C++ and Python?
Edit: I'm using Ubuntu 18.04.
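Edit 2: The error itself suggests what happened: the mpicxx found first in PATH is now Anaconda's wrapper, which was configured at build time to call conda's own compiler. A sketch of the fixes I'm considering (paths assume a default ~/anaconda3 install and Ubuntu's openmpi-bin package):
which mpicxx                           # likely ~/anaconda3/bin/mpicxx after installing mpi4py
export OMPI_CXX=g++                    # tell the Open MPI wrapper to use the system g++
mpicxx -o TEST.exe TEST.cpp
/usr/bin/mpicxx -o TEST.exe TEST.cpp   # or bypass Anaconda's wrapper entirely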
I'm currently experiencing an issue wrapping some Fortran subroutines for use in a Python 3 script. The issue has only come up since I attempted to use OpenMP in the subroutines.
For example, I compile a module 'test.pyd' using f2py -c -m test --fcompiler=gnu95 --compiler=mingw32 --f90flags='-fopenmp' test.f90 -lgomp, where 'test.f90' contains a Fortran subroutine with a parallelized loop. Upon attempting to import this module into my script, I encounter ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed..
Removing the -fopenmp flag when compiling, or removing the !$omp comments from the Fortran subroutine, eliminates this error.
If I instead change the subroutine into a roughly equivalent stand-alone Fortran program, it compiles to a .exe and runs correctly in parallel.
I'm on a Windows 10 platform with an AMD64 processor, using the GNU Fortran and C compilers from TDM-GCC.
I just tried your build command, and it looks perfectly fine; I am myself able to run a parallel subroutine from a Python module compiled exactly the way you are doing it.
How are you executing the Python code that uses your module? I think the problem is that you don't have the OpenMP DLL (which is named libgomp-1.dll) in your PATH.
I would advise you to run (from a bash shell):
where libgomp-1.dll
If the command can't find it, then you should probably add the path to your OpenMP DLL (which is usually "C:\tools\mingw64\bin\") to your PATH.
In order to do this, you can use (note that in bash the Windows path C:\tools\mingw64\bin is written in /c/... form):
export PATH="$PATH:/c/tools/mingw64/bin" && python script_using_module.py
There is a good chance the way you are executing your python code doesn't account properly for the path, since you can run the parallel executable without a problem.
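If the DLL is found but the import still fails, a quick sanity check after extending PATH (module name taken from your question) might be:
export PATH="$PATH:/c/tools/mingw64/bin"
python -c "import test"    # should no longer raise the DLL load error
Note also that on Python 3.8 and newer, Windows extensions no longer resolve their DLL dependencies through PATH at all; there you would have to register the directory with os.add_dll_directory before the import.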
I'm trying to create a Python module with f2py using the Intel Fortran compiler, but WITHOUT the corresponding Intel or Microsoft C compiler. I had been told I could use the MinGW C compiler for this, though I'm starting to doubt it.
I have 32-bit Python 2.7.3 with numpy 1.8.1, Intel Fortran 11.1 (with the VS 2008 shell), and MinGW 4.8.1.
Using f2py with --fcompiler=intelv and --compiler=mingw32 seemed to work until it came time to link. I got many unresolved externals, as in this posting (I'm not currently able to access the makefile referenced in the answer), along with this warning:
LNK4078: multiple '.drectve' sections found with different attributes (00100A00)
I realized that some incorrect options were being passed to ifort, so I tried calling ifort directly to link. When I did, the link went through, but when I tried to import the .pyd file, Python complained that it did not have an init function defined. Looking at the .pyd file with Dependency Walker, I saw that it was not linked correctly against MSVCR90.DLL or LIBIFCOREMD.DLL. I tried listing the corresponding .lib files on the link line, but that didn't appear to help.
I also tried calling link directly (with the corresponding options), but it just refused to link at all.
Am I trying to do something impossible?
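One way to narrow this down (a sketch; 'test' stands in for your module name) is to inspect the .pyd's PE tables with MinGW's objdump:
objdump -p test.pyd | grep -i "init"       # Python 2 expects an exported init<modulename>
objdump -p test.pyd | grep "DLL Name"      # shows which runtime DLLs were actually linked in
If the init symbol is missing from the export table, the manual ifort link line dropped the export flag that f2py's normal link step would have added.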
I'm compiling a SWIG-wrapped C++ library into a Python module that should ideally be distributable, so that individuals can use the library transparently like any other module. I'm building the library using cmake and SWIG on OSX 10.8.2 (system framework: Apple Python 2.7.2; installed framework: python.org Python 2.7.5).
The trouble I'm running into is that after linking with the framework, the compiled library is very selective about the version of Python being run, even though otool -L shows that it is compiled with "compatibility version 2.7.0". It appears that the different distributions have slightly different linker symbols, and things start to break.
The most common problem is that it crashes the Python kernel with Fatal Python error: PyThreadState_Get: no current thread (according to this question, indicative of a linking incompatibility). I can get my library to work in the Python it was compiled against.
Unfortunately this library is for use in an academic laboratory, with computers of all different ages and operating systems, many of them kept permanently out of date in order to run proprietary software that hasn't been updated in years, and I certainly don't have time to play I.T. and fix all of them. For now I've just been compiling against the version of Python that comes with the latest Enthought distribution, since most computers can get that one way or another. A lot of the researchers I work with use a Python IDE specific to their field that comes with a built-in interpreter, which is not modifiable and not a framework build (so I can't build against it); for the time being they can run their experiment scripts in Enthought as a stop-gap, but it's not ideal. Even when I link against the python.org distribution that is the same version as the built-in IDE Python (2.7.2, I think; it even has the same damn release number), it still breaks the same way.
In any case, the question is: is there any way to link a SWIG-wrapped Python library so that it will run (at least on OSX) regardless of which interpreter imports it (given certain minimum conditions, like a guaranteed version >= 2.7.0)?
EDIT
Compiling against the Canopy-installed Python with the following linker flags in cmake:
set (CMAKE_SHARED_LINKER_FLAGS "-L ~/Library/Enthought/Canopy_32bit/User/lib -ldl -framework CoreFoundation -lpython2.7 -u _PyMac_Error ~/Library/Enthought/C\
anopy_32bit/User/lib")
This results in an @rpath reference when examining the linked library with otool. It seems to work fine with Enthought/Canopy on other OSX systems. The -lpython2.7 seems to be optional; it adds an additional Python symbol reference in the library to the OSX Python (not the system Python).
Compiling against the system Python with the following linker flags:
set (CMAKE_SHARED_LINKER_FLAGS "-L /Library/Frameworks/Python.framework/Versions/Current/lib/python2.7/config -ldl -framework CoreFoundation -u _PyMac_Error /Library/Frameworks/Python.framework/Versions/Current/Python")
This works in Enthought and system Python.
Neither of these works with the Python bundled with PsychoPy, which is the target environment; compiling against the bundled Python works with PsychoPy but with no other Python.
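One more avenue worth sketching (I haven't verified this in my setup): on OSX an extension can be linked with no libpython reference at all, leaving the Python symbols undefined so that whichever interpreter imports the module supplies them at load time:
set (CMAKE_SHARED_LINKER_FLAGS "-undefined dynamic_lookup")
Since nothing is bound to a specific Python at link time, the same binary should in principle load under any ABI-compatible 2.7 interpreter.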
I've been getting the same error/having the same problem. I'd be interested if you've found a solution.
I have found that if I compile against the native Python include directory and run the native OS X Python binary /usr/bin/python, it works just fine, always. Even when I compile against some other Python library (like the one I find at /Applications/Canopy.app/appdata/canopy-1.0.3.1262.macosx-x86_64/Canopy.app/Contents/include), I can get the native OS X interpreter to work just fine.
I can't seem to get the Enthought version to work, ever. What directory are you compiling against for use with Enthought/Canopy?
There also seems to be some question of configuring SWIG at installation to know about a particular python library, but this might not be related: http://swig.10945.n7.nabble.com/SWIG-Installation-mac-osx-10-8-3-Message-w-o-attachments-td13183.html
I have a Python module which wraps a C++ library. The library uses MPI and is compiled with mpicxx. Everything works great on some machines, but on others I get this:
ImportError: ./_pyCombBLAS.so: undefined symbol: _ZN3MPI3Win4FreeEv
So there's an undefined symbol from the MPI library. As far as I can tell mpicxx should link everything in, yet it doesn't. Any ideas why?
It turns out the problem was that the wrong libraries were being loaded at runtime.
As you know, a cluster is likely to have several versions of MPI installed, and sometimes the same version compiled with several compilers, all with the same library filenames. In my case, even though I compiled with, say, GNU-built MPICH, the default library path pointed to the PGI-built Open MPI libraries. I didn't realize this; I had assumed that compiling against GNU MPICH meant the GNU MPICH libraries would be found at runtime.
Of course I couldn't actually use PGI-compiled OpenMPI because Python was compiled with GCC and PGI doesn't emit binaries fully compatible with GCC.
The solution is to set the LD_LIBRARY_PATH environment variable to match the MPI distribution you used to compile your code.
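For example (the MPICH prefix below is hypothetical; substitute the one your mpicxx actually came from):
export LD_LIBRARY_PATH=/opt/mpich2-gnu/lib:$LD_LIBRARY_PATH
python -c "import _pyCombBLAS"    # the undefined-symbol ImportError should be gone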
It's a shared library problem. Try running ldd on the extension module on both the system where it works and the system where it fails.
ldd _extension.so
This should show you all the libraries your extension depends on so you can make sure they're available.
A simple way to work around it may be to statically link the dependencies into your extension.
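For this particular module the check (run on both the working and the failing machine, then compared) would look like:
ldd _pyCombBLAS.so | grep -i "mpi\|not found"    # flags missing or mismatched MPI libraries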
The symbol _ZN3MPI3Win4FreeEv is defined in libmpi_cxx.so, so one has to link with -lmpi_cxx rather than just -lmpi.
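You can confirm that the missing symbol belongs to the C++ bindings by demangling it:
c++filt _ZN3MPI3Win4FreeEv    # prints MPI::Win::Free()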
RDFLib needs C extensions to be compiled to install on ActiveState Python 2.5; as far as I can tell, there's no binary installer anywhere obvious on the web. On attempting to install with python setup.py install, it produces the following message:
error: Python was built with Visual Studio 2003;
extensions must be built with a compiler than can generate compatible binaries.
Visual Studio 2003 was not found on this system. If you have Cygwin installed,
you can try compiling with MingW32, by passing "-c mingw32" to setup.py.
There are various resources on the web about configuring a compiler for distutils that discuss using MinGW, although I haven't got this to work yet. As an alternative I have VS2005.
Can anyone categorically tell me whether you can use the C compiler in VS2005 to build Python extension modules for a VS2003 compiled Python (in this case ActiveState Python 2.5). If this is possible, what configuration is needed?
The main problem is the C run-time library. Python 2.4/2.5 is linked against msvcr71.dll, and therefore all C extensions should be linked against that DLL as well.
Another option is to use gcc (MinGW) instead of VS2005; you can use it to compile Python extensions only. There is a decent installer that allows you to configure gcc as the default compiler for your Python version:
http://www.develer.com/oss/GccWinBinaries
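With gcc configured as the default (or even without the installer), the MinGW route is, run from the RDFLib source tree:
python setup.py build --compiler=mingw32 install
Alternatively, creating Lib\distutils\distutils.cfg under your Python installation with a [build] section containing compiler = mingw32 makes MinGW the default for every build.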
I can't tell you categorically, but I don't believe you can. I've only run into this problem in the inverse situation (Python built with VS2005, trying to build with VS2003). Searching the web did not turn up any way to hack around it. My eventual solution was to get VC Express, since VC2005 is when Microsoft started releasing the free editions. But that's obviously not an option for you.
I don't use ActiveState Python, but is there a newer version you could use? The source ships with project files for VS2008, and I'm pretty sure the python.org binary builds stopped using VS2003 a while ago.
As of today, March 2012, I can categorically say it is possible with Python 2.4.4 (the only one I've tested) and Visual Studio 2005 and 2008. I'm just installing VS10 to check that as well. I don't know why it works, and I have problems using distutils, so I have to compile manually.