Compile Cython extensions from the command line with gcc (mingw32) on Windows - python

I'm trying to test a small cython module on win32, and I'm having trouble building it.
The file is called linalg_cython.pyx and has these contents:
from __future__ import absolute_import, division, print_function
import numpy as np
cimport numpy as np
import cython
##cython.boundscheck(False)
#np.ndarray[np.float32]
##cython.wraparound(False)
def L2_sqrd_float32(np.ndarray hist1, np.ndarray hist2):
    """ returns the squared L2 distance
    seealso L2
    Test:
        hist1 = np.random.rand(4, 2)
        hist2 = np.random.rand(4, 2)
        out = np.empty(hist1.shape, dtype=hist1.dtype)
    """
    return (np.abs(hist1 - hist2) ** 2).sum(-1)  # this is faster
L2_sqrd = L2_sqrd_float32
I was able to get this compiling by using a setup.py, but I don't want to have to rely on setup.py to build the extensions, because I haven't fully understood the Cython compilation process yet. I want to compile it on my own first, before I start trusting setup.py. That being said, I was able to get a good start by looking at the output of "setup.py build_ext":
C:\MinGW\bin\gcc.exe -mdll -O -Wall ^
-IC:\Python27\Lib\site-packages\numpy\core\include ^
-IC:\Python27\include -IC:\Python27\PC ^
-c vtool\linalg_cython.c ^
-o build\temp.win32-2.7\Release\vtool\linalg_cython.o
writing build\temp.win32-2.7\Release\vtool\linalg_cython.def
C:\MinGW\bin\gcc.exe -shared \
-s \
build\temp.win32-2.7\Release\vtool\linalg_cython.o \
build\temp.win32-2.7\Release\vtool\linalg_cython.def \
-LC:\Python27\libs \
-LC:\Python27\PCbuild \
-lpython27 \
-lmsvcr90 \
-o build\lib.win32-2.7\vtool\linalg_cython.pyd
The pyd file that this created seemed to work, but my goal is understanding, not just making it work.
Copying this format (and trying some things myself) I'm currently using these commands to build everything manually.
C:\Python27\Scripts\cython.exe vtool\linalg_cython.pyx
C:\MinGW\bin\gcc.exe -mdll -O -DNPY_NO_DEPRECATED_API -Wall -Wno-unknown-pragmas
-Wno-format -Wno-unused-function -m32 -shared
-IC:\Python27\Lib\site-packages\numpy\core\include -IC:\Python27\include
-IC:\Python27\PC -IC:\Python27\Lib\site-packages\numpy\core\include
-LC:\Python27\libs -LC:\Python27\PCbuild -lpython27 -lmsvcr90 -o
vtool\linalg_cython.pyd -c vtool\linalg_cython.c
The main difference between my command and the setup.py command is that I'm trying to call gcc in a single invocation instead of splitting it into two. I would have called it in two commands, but the .def file seems to be autogenerated by setup.py and I'm not sure what it's all about.
Its contents seem simple:
LIBRARY linalg_cython.pyd
EXPORTS
initlinalg_cython
but I'd like to know more about what it is before I split my command into two steps and autogenerate this .def file myself. Either way, shouldn't it be possible to create the .pyd in one call to gcc?
With the command that I'm using I'm able to get a .pyd file in the right place, but when I try to import it I get
<type 'exceptions.ImportError'>: DLL load failed: %1 is not a valid Win32 application.
which supposedly indicates an x86/x64 mismatch, which is why I tried adding the -m32 flag.
In summary: when trying to compile a simple Cython module, my gcc command gives me 32/64-bit errors. How do I fix my gcc command so that it generates a valid 32-bit .pyd file?

You didn't mention whether your Python is 32-bit or 64-bit. This kind of behavior usually happens when you try to import a 32-bit module in a 64-bit Python, or vice versa.
Make sure your Python and the module you're trying to import have the same bit architecture. The easiest way to fix this is to download and install the right Python.
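One quick way to check the interpreter side (a minimal sketch; run it with the exact python.exe that will be importing the module) is to ask Python how wide its pointers are:
# prints 32 or 64, i.e. the pointer size of the running interpreter in bits
import struct
print(struct.calcsize("P") * 8)
A .pyd built with -m32 will only import into an interpreter that prints 32 here.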

Related

Python setuptools not including C++ standard library headers

I'm trying to compile a Python wrapper to a small C++ library I've written. I've written the following setup.py script to try to use setuptools to compile the wrapper:
from setuptools import setup, Extension
import numpy as np
import os
atmcmodule = Extension(
    'atmc',
    include_dirs=[np.get_include(), '/usr/local/include'],
    libraries=['mcopt', 'c++'],  # my C++ library is at ./build/libmcopt.a
    library_dirs=[os.path.abspath('./build')],
    sources=['atmcmodule.cpp'],
    language='c++',
    extra_compile_args=['-std=c++11', '-v'],
)
setup(name='tracking',
      version='0.1',
      description='Particle tracking and MC optimizer module',
      ext_modules=[atmcmodule],
      )
However, when I run python setup.py build on OS X El Capitan, clang complains about not finding some C++ standard library headers:
In file included from atmcmodule.cpp:7:
In file included from ./mcopt.h:11:
In file included from ./arma_include.h:4:
/usr/local/include/armadillo:54:12: fatal error: 'initializer_list' file not found
#include <initializer_list>
^
1 error generated.
error: command 'gcc' failed with exit status 1
Passing the -v flag to the compiler shows that it is searching the following include paths:
#include <...> search starts here:
/Users/[username]/miniconda3/include
/Users/[username]/miniconda3/lib/python3.4/site-packages/numpy/core/include
/usr/local/include
/Users/[username]/miniconda3/include/python3.4m
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/usr/include/c++/4.2.1
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/usr/include/c++/4.2.1/backward
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/7.0.0/include
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/usr/include
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/System/Library/Frameworks (framework directory)
End of search list.
This apparently doesn't include the path to the C++ standard library headers. If I compile a small test C++ source with the -v option, I can see that clang++ normally also searches the path /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../include/c++/v1, and if I include this path in the include_dirs option for Extension in my setup.py script, then the extension module compiles correctly and works. However, hard-coding this path into the script doesn't seem like a good solution since this module also needs to work on Linux.
So, my question is how do I properly make setuptools include the required headers?
Update (11/22/2015)
As setuptools tries to compile the extension, it prints the first command it's running:
gcc -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/Users/[username]/miniconda3/include -arch x86_64 -I/Users/[username]/miniconda3/lib/python3.4/site-packages/numpy/core/include -I/Users/[username]/Documents/Code/ar40-aug15/monte_carlo/mcopt -I/usr/local/include -I/Users/[username]/miniconda3/include/python3.4m -c /Users/[username]/Documents/Code/ar40-aug15/monte_carlo/atmc/atmcmodule.cpp -o build/temp.macosx-10.5-x86_64-3.4/Users/[username]/Documents/Code/ar40-aug15/monte_carlo/atmc/atmcmodule.o -std=c++11 -fopenmp -v
If I paste this command into a terminal and run it myself, the extension compiles successfully. So I suspect either setuptools is modifying some environment variables I'm not aware of, or it's lying a little about the commands it's actually running.
Setuptools tries to compile C/C++ extension modules with the same flags used to compile the Python interpreter. After checking the flags used to compile my Python install (from Anaconda), I found it was compiling for a minimum Mac OS X version of 10.5. This seems to make it use the GCC libstdc++ instead of clang's libc++ (which supports C++11).
This can be fixed by either setting the environment variable MACOSX_DEPLOYMENT_TARGET to 10.9 (or later), or adding '-mmacosx-version-min=10.9' to extra_compile_args.
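For example, a sketch based on the setup.py from the question (either option alone should be enough, and the environment variable can just as well be exported in the shell before running setup.py; 10.9 is simply the first release for which clang defaults to libc++):
# setup.py sketch: force a deployment target recent enough for libc++ / C++11
import os
os.environ.setdefault('MACOSX_DEPLOYMENT_TARGET', '10.9')      # option 1

from setuptools import setup, Extension
import numpy as np

atmcmodule = Extension(
    'atmc',
    include_dirs=[np.get_include(), '/usr/local/include'],
    libraries=['mcopt', 'c++'],
    library_dirs=[os.path.abspath('./build')],
    sources=['atmcmodule.cpp'],
    language='c++',
    extra_compile_args=['-std=c++11', '-mmacosx-version-min=10.9'],  # option 2
)

setup(name='tracking', version='0.1', ext_modules=[atmcmodule])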

Error when trying to cross-compile SWIG Python extension for mingw32 using distutils

I am trying to cross-compile a simple SWIG Python extension on Linux for Windows (mingw32), using the distutils module.
The ultimate goal is to compile a Python wrapper for some library and be able to use it on Windows. Obviously I started with the most basic example, and unfortunately it fails.
Here are the files I am using:
example.c
/* File : example.c */
/* A global variable */
double Foo = 3.0;
/* Compute the greatest common divisor of positive integers */
int gcd(int x, int y) {
    int g;
    g = y;
    while (x > 0) {
        g = x;
        x = y % x;
        y = g;
    }
    return g;
}
example.i - SWIG interface file
/* File : example.i */
%module example
%inline %{
extern int gcd(int x, int y);
extern double Foo;
%}
setup.py
# setup.py
import distutils
from distutils.core import setup, Extension
setup(name = "SWIG example",
version = "1.0",
ext_modules = [Extension("_example", ["example.i","example.c"])])
In order to compile using the native (Linux) gcc compiler, I am invoking:
python setup.py build
Everything works like a charm! Unfortunately when trying to specify the Windows target:
python setup.py build --compiler=mingw32
I get an error saying that gcc doesn't recognize the -mdll switch:
running build
running build_ext
building '_example' extension
swigging example.i to example_wrap.c
swig -python -o example_wrap.c example.i
creating build
creating build/temp.linux-x86_64-2.7
gcc -mdll -O -Wall -I/home/jojek/anaconda/include/python2.7 -c example_wrap.c -o build/temp.linux-x86_64-2.7/example_wrap.o
gcc: error: unrecognized command line option ‘-mdll’
error: command 'gcc' failed with exit status 1
Fair enough, it makes perfect sense, since the toolchain is not valid. I made sure that mingw32 is installed on my machine. By calling dpkg -L mingw32 I know that the compiler is located in /usr/bin/i586-mingw32msvc-gcc.
My next step was to override the CC environment variable with the actual path to my compiler. When I try to compile again, I get the following error about a missing sys/select.h header file:
running build
running build_ext
building '_example' extension
swigging example.i to example_wrap.c
swig -python -o example_wrap.c example.i
creating build
creating build/temp.linux-x86_64-2.7
/usr/bin/i586-mingw32msvc-gcc -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/jojek/anaconda/include/python2.7 -c example_wrap.c -o build/temp.linux-x86_64-2.7/example_wrap.o
example_wrap.c:1: warning: -fPIC ignored for target (all code is position independent)
In file included from /home/jojek/anaconda/include/python2.7/Python.h:58,
from example_wrap.c:125:
/home/jojek/anaconda/include/python2.7/pyport.h:351:24: error: sys/select.h: No such file or directory
error: command '/usr/bin/i586-mingw32msvc-gcc' failed with exit status 1
Does anyone have an idea how to manage that task?
There's quite a bit going on behind the scenes when you compile Python modules using distutils. You're getting closer with each try in your question, however the problem you've now encountered is that you're using Linux header files with a Windows (cross) compiler. (sys/select.h isn't supported with mingw32, cygwin might be a different story though). In reality it's the lack of the configuration header file that's causing your cross compile to try and use the POSIX interfaces instead of Win32 alternatives.
My answer rewinds a few steps and starts out simply building the module by hand, using mingw32 on Linux and then we'll look at using distutils once we've proven that we have all that's required.
I'm also assuming that you don't have a Windows build box (or even a VM) available to simply build your extension on Windows natively since that's far simpler than cross compiling. If you're reading this and have the option to use a Windows box to build your Windows Python extensions, do that instead and save time and effort. That said it is possible to build Windows Python modules using only a Linux box.
Starting with mingw32 already installed and working on your Linux box (e.g. using the Debian/Ubuntu packages) the first step is to get the Windows header files (or configuration to be more specific). I'm assuming you're targeting the build that most people get when they type "python windows" into a search engine so I downloaded the Windows MSI installers from python.org and extracted them from there.
There are two things we want to get from the Python distribution:
python27.dll (Usually gets placed in c:\windows\system32 or c:\windows\syswow64)
The 'include' directory (Usually gets placed in c:\python27\include)
Under Linux there are a few different ways you can extract this. You could use Wine to install the MSI file. I used both cabextract and 7z with success in my testing though, for example with cabextract:
cabextract /tmp/python-2.7.10.msi -F '*.h'
cabextract /tmp/python-2.7.10.msi -F 'python27.dll'
(Note: if you use 7z you'll find the files you really want inside a second, inner archive named 'python').
At this point you could also extract the file 'libpython27.a' which usually lives inside c:\python27\libs\ however this file isn't sufficient or even useful for linking using mingw32.
Given the header files we've now got enough to compile our extension, although as noted above to get mingw32 to link against python27.dll we need to do a bit more work first. We're going to need a tool called pexports to list all the exported symbols in the Python DLL and let dlltool generate a stub library for mingw32 to link against. I downloaded pexports directly and then extracted it with:
tar xvf ~/Downloads/pexports-0.47-mingw32-bin.tar.xz
Once that's extracted we get a single Windows executable. I used Wine in my example here to run it directly; alternatively, you could extract the source, and build it as a tool to run natively on the Linux host:
tar xvf ~/Downloads/pexports-0.47-mingw32-src.tar.xz
(cd pexports-0.47 && ./configure && make)
Alternatively, if you were looking to avoid using Wine as well, you could duplicate the functionality of the tool with the Python module pefile (which runs fine cross-platform) to extract the exports we care about.
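A minimal sketch of that pefile route (assuming pip install pefile, and that python27.dll is the file extracted from the MSI; note it doesn't annotate data exports with DATA the way pexports -v does):
# dump the export table of python27.dll into a .def file, no Wine needed
import pefile

pe = pefile.PE('python27.dll')
with open('python27.def', 'w') as f:
    f.write('LIBRARY python27.dll\n')
    f.write('EXPORTS\n')
    for exp in pe.DIRECTORY_ENTRY_EXPORT.symbols:
        if exp.name:                      # skip exports that only have an ordinal
            f.write(exp.name.decode('ascii') + '\n')
The resulting python27.def can then be fed to dlltool exactly as shown below.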
Anyway with pexports you can generate a .def file that contains the information we need for dlltool:
wine bin/pexports.exe -v python27.dll > python27.def
or, (if you've built pexports as a native tool), simply:
./pexports-0.47/pexports -v python27.dll > python27.def
where python27.dll is what we extracted from the .msi file earlier.
(This was my pexports reference)
Once you've got the .def file you can use the mingw32 dlltool to generate a .a file that we'll use later to link our Python module against:
i586-mingw32msvc-dlltool -A --dllname python27.dll --def python27.def --output-lib libpython27.a
Now we've reached a point where we can think about running SWIG itself to generate the code for us to compile. I simplified your example interface even further to be just:
%module test
%inline %{
int gcd(int x, int y) {
int g;
g = y;
while (x > 0) {
g = x;
x = y % x;
y = g;
}
return g;
}
%}
And then ran SWIG on my Linux box as:
swig -Wall -python test.i
This generated test_wrap.c which I compiled with:
i586-mingw32msvc-gcc test_wrap.c -I../include -Wall -Wextra -shared -o _test.pyd ./libpython27.a
And there we have a Windows Python module built using just Linux.
To check it really runs I copied test.py and _test.pyd to a Windows box and then did:
Python 2.7.10 (default, May 23 2015, 09:40:32) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import test
>>> test.gcd(1024, 512)
512
>>>
Now all that remains is to make sure distutils can find the right include files and libraries to link against by manipulating its paths.
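A sketch of what that last step might look like (assumptions: the headers extracted from the MSI live in ./win-include, the dlltool-generated libpython27.a sits in ./win-libs, and the build is invoked with CC=i586-mingw32msvc-gcc python setup.py build --compiler=mingw32 as in the question; the directory names are mine, not anything standard):
# setup.py sketch for the cross-compile case
from distutils.core import setup, Extension

example = Extension('_example',
                    sources=['example.i', 'example.c'],
                    include_dirs=['./win-include'],   # Python.h/pyconfig.h from the Windows MSI
                    library_dirs=['./win-libs'],      # contains the dlltool-built libpython27.a
                    libraries=['python27'])

setup(name='SWIG example', version='1.0', ext_modules=[example])
Per-extension include_dirs are normally emitted before the host interpreter's own include path, so the Windows pyconfig.h should win, but it's worth double-checking the -I order in the compile command distutils prints.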
For 64 bits, I couldn't make it work with pexports, so I used gendef to generate the python27.def file.
gendef is a tool which generates .def files from DLLs. A .def file is a list of the symbols exported by a DLL. The primary use of this tool is to allow the creation of an import library for DLLs created by non-GCC compilers. It can handle both x86 (win32) and amd64 (win64) executables; a usage sketch follows the link below.
https://sourceforge.net/p/mingw-w64/wiki2/gendef/
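For reference, a sketch of the gendef route (assuming the mingw-w64 toolchain packages, whose dlltool carries the x86_64-w64-mingw32- prefix): gendef takes just the DLL name and writes python27.def next to it, and dlltool is then used much as before:
gendef python27.dll
x86_64-w64-mingw32-dlltool -A --dllname python27.dll --def python27.def --output-lib libpython27.a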
Hope it helps!

How can I wrap a C-Library in SWIG, which has usually to be linked during C-compilation?

I have a C library that has to be linked in at compile time if I want to use its functions. I want to access these functions in Python using SWIG. I can only find examples and introductions where C code (example.c) is wrapped using SWIG, but nothing on how to wrap a dynamic library (example.so).
All you need to do to make the .so (or .a) library case work is to link the library appropriately when you do the compile step of the example build process. You will still have to compile the example_wrap.c that gets generated; that is where you can link against things.
So modified from the SWIG docs that would be:
$ swig -python example.i
$ gcc -O2 -fPIC -c example.c
$ gcc -O2 -fPIC -c example_wrap.c -I/usr/local/include/python2.5
$ gcc -shared example_wrap.o -o _example.so -lmylib
In reality you can also skip linking the library at compile time and use dlopen at runtime instead, by injecting some extra code into the Python part of your module that calls dlopen before the shared object from SWIG gets loaded.
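A rough sketch of that runtime approach, using ctypes rather than a raw dlopen call and assuming the dependency is ./libmylib.so (RTLD_GLOBAL makes its symbols visible to the SWIG module loaded afterwards):
# load the dependency by hand before importing the SWIG-generated wrapper
import ctypes
ctypes.CDLL('./libmylib.so', mode=ctypes.RTLD_GLOBAL)

import example                  # _example.so can now resolve the library's symbols
print(example.gcd(1024, 512))
If you want this to happen automatically, SWIG can emit those two ctypes lines at the top of the generated example.py for you, e.g. from a %pythonbegin block in example.i.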

'ImportError' in Python extension module wrapping C library

(UPDATE 3 contains the questions I'd like to get answers to. UPDATE 2 refers to corrections I did trying to understand and fix this issue)
I'm trying to get a Python extension module to wrap a C library (in this case, it is only an example described in the book Python Cookbook, 3rd edition). The problem is that I'm encountering the classic error
ImportError: dynamic module does not define init function (initsample)
when I'm trying to import the module.
The book itself has the following comment:
For all of the recipes that follow, assume that the preceding code is found in a file named sample.c, that definitions are found in a file named sample.h and that it has been compiled into a library libsample that can be linked to other C code. The exact details of compilation and linking vary from system to system, but that is not the primary focus. It is assumed that if you're working with C code, you've already figured that out.
And I think I'm getting that wrong. This is what I'm doing: First of all, I have the following files:
➜ Sample ls
csample.pxd sample.c sample.h sample.pyx setup.py
The content of the files is taken from this github repository, which contains the files described in the book. Assuming everything is copied correctly, I then compile sample.c:
➜ Sample gcc -c -fPIC -I/path/to/python2.7 sample.c
This creates sample.o and I proceed to create a shared library:
➜ Sample gcc -shared sample.o -o libsample.so
It creates a libsample.so and finally run the setup.py file:
➜ Sample python setup.py build_ext --inplace
running build_ext
skipping 'sample.c' Cython extension (up-to-date)
building 'sample' extension
creating build
creating build/temp.linux-x86_64-2.7
gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/path/to/include/python2.7 -c sample.c -o build/temp.linux-x86_64-2.7/sample.o
gcc -pthread -shared build/temp.linux-x86_64-2.7/sample.o -L. -L/path/to/lib -lsample -lpython2.7 -o /path/to/sample.so
However, I obtain the same error. What am I missing?
This is my setup.py (I eventually added a part to clean up the previous build):
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
import sys
import os
import shutil
# clean previous build
for root, dirs, files in os.walk(".", topdown=False):
    for name in files:
        if (name.startswith("sample") and not(name.endswith(".pyx") or name.endswith(".pxd") or name.endswith(".h"))):
            os.remove(os.path.join(root, name))
    for name in dirs:
        if (name == "build"):
            shutil.rmtree(name)
ext_modules = [
    Extension('sample',
              ['sample.pyx'],
              libraries=['sample'],
              library_dirs=['.'])]
setup(
    name = 'Sample extension module',
    cmdclass = {'build_ext': build_ext},
    ext_modules = ext_modules
)
UPDATE 2
I was wondering if it is normal that after running
python setup.py build_ext --inplace
the file sample.c gets replaced with a cythonized version.
I assume that running setup.py is what cythonizes sample.pyx into sample.c, but I'm asking because yesterday I was mistakenly compiling the cythonized version of sample.c (after removing the libsample.so, sample.o and sample.so created in a previous run), and after updating sample.pyx I got this error:
>>> import sample
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: ./sample.so: undefined symbol: divide
After this, I restored sample.c to its original content, removed all remaining files from previous builds (including temporary files) and, crucially, I had to change sample.pyx in some way (e.g. by adding a blank line).
I noticed I have to do this to avoid the
ImportError: dynamic module does not define init function (initsample)
error. Then I proceeded as usual:
➜ Sample gcc -c -fPIC -I/path/to/python2.7 sample.c
➜ Sample gcc -shared sample.o -o libsample.so
➜ Sample python setup.py build_ext --inplace
running build_ext
cythoning sample.pyx to sample.c
warning: sample.pyx:27:42: Use boundscheck(False) for faster access
building 'sample' extension
creating build
creating build/temp.linux-x86_64-2.7
gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/path/to/include/python2.7 -c sample.c -o build/temp.linux-x86_64-2.7/sample.o
gcc -pthread -shared build/temp.linux-x86_64-2.7/sample.o -L. -L/path/to/lib -lsample -lpython2.7 -o /path/to/Sample/sample.so
➜ Sample python
>>> import sample
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: libsample.so: cannot open shared object file: No such file or directory
This time I get cythoning sample.pyx to sample.c in the output above and a new error. However, libsample.so is in the Sample directory. What could be the issue here?
UPDATE 3:
I was finally able to import this module successfully, but the last error
ImportError: libsample.so: cannot open shared object file: No such file or directory
implied that I needed to set the current directory in the environment variable LD_LIBRARY_PATH. However, I did it in a crude way:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/Sample
What is the correct way to do this? Maybe using some parameter in setup.py? Regarding the several issues I found, I'd like to get an answer as to why all of this happened:
1. sample.c gets replaced when running python setup.py build_ext --inplace. Is this okay?
2. Do I need to restore sample.c every time I go through this procedure?
3. Do I need to change sample.pyx in order to get a correct build (instead of the misleading skipping 'sample.c' Cython extension (up-to-date) output)?
4. What is the correct way to solve ImportError: libsample.so: cannot open shared object file: No such file or directory?
A possible solution to 4. is to add in setup.py:
runtime_library_dirs=['./'],
However, this requires libsample.so to be located in the same directory as setup.py.
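An alternative sketch (my suggestion, not something from the book) is to embed a relative rpath so the dynamic loader searches next to sample.so itself; on Linux the usual spelling is $ORIGIN, which has to reach the linker literally:
# same setup.py as above, with only the Extension entry changed
from distutils.extension import Extension

ext_modules = [
    Extension('sample',
              ['sample.pyx'],
              libraries=['sample'],
              library_dirs=['.'],
              # look for libsample.so in the directory containing sample.so at run time
              extra_link_args=['-Wl,-rpath,$ORIGIN'])]
This keeps the build relocatable: as long as libsample.so travels alongside sample.so, neither LD_LIBRARY_PATH nor an absolute runtime_library_dirs entry is needed.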

Compile .c files to .pyd

I used Cython to convert a .pyx file to .c. Now, I'm trying to compile it to .pyd, using the tcc compiler:
C:\Users\USER>"C:\Program Files\tcc\tcc.exe" tkExtra.c -o tkExtra.pyd -shared -IC\Python27\include -LC\Python27\libs -lpython27
However, I get this error:
tkExtra.c:8: error: include file 'pyconfig.h' not found
C:\Python27\include does have pyconfig.h. I had looked at this answer to get the command line code for this, only substituting gcc with tcc.
How can I fix this, or is there a better way to do this?
Seems like you're missing some colons. Try:
C:\Users\USER>"C:\Program Files\tcc\tcc.exe" tkExtra.c -o tkExtra.pyd -shared -IC:\Python27\include -LC:\Python27\libs -lpython27
