I have a simple example Python extension I want to use from C/C++. The code is as follows:
example.pyx:
from numpy import random

cdef public void somefunc():
    print(random.randint(500))
setup.py:
from setuptools import setup
from Cython.Build import cythonize
import numpy

setup(
    ext_modules=cythonize("example.pyx"),
    zip_safe=False,
    include_dirs=[numpy.get_include()]
)
Running python3 setup.py build_ext --inplace --compiler="mingw32" -DMS_WIN64 then creates example.c, example.h and example.cp310-win_amd64.pyd. The C++ code I am using to call somefunc is:
example.cpp:
#include <Python.h>
#include "example.h"
int main()
{
    Py_Initialize();
    somefunc();
    return 0;
}
This I compile using g++ example.cpp -DMS_WIN64. But that command seems to be incomplete: there are still objects left that need to be linked, namely the ones from example.pyx. How do I do this? I do not see any generated .dll or .lib or similar.
Additionally, if I use #include "example.c" in example.cpp, I get a very long list of missing symbols from the linker. The symbols are all named __imp_Py*.
I am using MINGW64 on Windows 10. The Python installation I am trying to link against is a regular system-wide installation. I have an environment variable CPLUS_INCLUDE_PATH=C:\Program Files\Python310\include.
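For what it's worth, here is a minimal sketch of how the missing pieces could fit together, following the same pattern as the transcendentals answer further down this page. The PyInit_example() call and the exact gcc/g++ flags are assumptions, not a verified recipe:

// example.cpp, sketch: the generated example.c must be compiled and linked
// into the executable, and the module must be initialized before calling
// into it, e.g.:
//   gcc -c example.c -DMS_WIN64 -I"C:\Program Files\Python310\include"
//   g++ example.cpp example.o -DMS_WIN64 -L"C:\Program Files\Python310\libs" -lpython310 -o example.exe
#include <Python.h>
#include "example.h"

int main()
{
    Py_Initialize();
    PyInit_example();  // initialize the embedded Cython module (Python 3)
    somefunc();
    Py_Finalize();
    return 0;
}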
This is a simple question, but it has been bothering me for 3 months now. When I use the setuptools/setup.py method to compile C++ code into a Python package on my Windows OS, it always defaults to MSVC, but part of the code uses libstdc++, which is only accessible with GNU. Is there some way to specify MinGW, or somehow change the default behavior? I have looked into other methods: cppimport does not support Windows, and the CMake method seems very complex to me.
For reference, a simple test to check whether the compiler is MSVC:
check_compiler.cpp
#include <iostream>
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>
#include <pybind11/stl.h>

namespace py = pybind11;

void test(){
#ifdef _MSVC_LANG
    py::print("Compiled with MSVC. ");
#else
    py::print("Not compiled with MSVC. ");
#endif
}

PYBIND11_MODULE(check_compiler, m) {
    m.def("test", &test);
}
setup.py
"""
python setup.py install
"""
from pybind11.setup_helpers import Pybind11Extension
from setuptools import setup
import os
import shutil
ext_modules = [
Pybind11Extension(
'check_compiler',
sources=['check_compiler.cpp'],
language='c++',
cxx_std=11
),
]
setup(
name='check_compiler',
author='XXX',
ext_modules=ext_modules
)
# copy the package
for filename in os.listdir("dist"):
if filename.endswith(".egg"):
shutil.copy(os.path.join("dist", filename), ".")
Then run python setup.py install; an .egg file will be copied from the subfolder to the current directory. Finally, run the following:
main.py
import check_compiler
check_compiler.test()
Similar question, but no accepted answer: How can I build a setup.py to compile C++ extension using Python, pybind11 and Mingw-w64?
Update: I was able to specify MinGW with CMake by adding -G "MinGW Makefiles" to the cmake command. I would still welcome an answer on how to do this with setuptools, as it is the most convenient method.
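For what it's worth, the classic distutils mechanism for choosing the compiler was either passing --compiler=mingw32 to build_ext (as in the first question above) or putting the equivalent into a setup.cfg next to setup.py. I cannot vouch for how well current setuptools still honors this on Windows, so treat it as something to try rather than a guaranteed fix:

setup.cfg

[build_ext]
compiler = mingw32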
I'm using a 3rd-party vendor who provides a Windows driver (DLL) and C header files. What I'm trying to do is use SWIG to recompile the header file into a Python module.
Here are my files:
- BTICard.i
- BTICard.h
- BTICARD64.dll
- BTICARD64.lib
SWIG interface source
%module BTICard
%include <windows.i>
%{
#define SWIG_FILE_WITH_INIT
#include "BTICard.H"
#define BTICardAPI
%}
In Cygwin, I used the following commands:
swig -python -py3 BTICard.i
Which then generated the following files:
- BTICard.py
- BTICard_wrap.c
In Cygwin, I compiled the Python module:
gcc -c -fpic BTICARD.H BTICard_wrap.c -I/usr/include/python3.8
This now allows BTICard to be imported in Python:
import BTICard
import ctypes
BTICarddll = ctypes.WinDLL('BTICARD64')
pRec1553 = SEQRECORD1553() # Doesn't initialize
The BTICard.H contains the following:
- typedef struct - used to initialize various field structures
- enum - constant declarations
According to the SWIG documentation, the typedef structs are supposed to be converted into Python classes. When I tried initializing the class, I got a NameError. I suspect the issue is with my interface file not recognizing these types so it failed to convert them.
Upon further investigation, I tried using the distutils approach and created the setup.py
#!/usr/bin/env python3.8
"""
setup.py file for SWIG
"""
from distutils.core import setup, Extension

example_module = Extension('_BTICard',
                           sources=['BTICard_wrap.c', 'BTICard.h'],)

setup(name='BTICard',
      version='0.1',
      author="TESTER",
      description="""BTICard API""",
      ext_modules=[example_module],
      py_modules=["BTICard"],
      )
To build the package:
$ python3.8 setup.py build_ext --inplace
running build_ext
building '_BTICard' extension
error: unknown file type '.h' (from 'BTICard.h')
What's the issue here?
Is there a way I can access the Python source file after creating the object from gcc?
All I am trying to do is to validate a separate Python Wrapper that seems to have issues (it's a completely separate topic). Is there another way to create this Python Module?
The .i file isn't including the interface to export. It should look like:
%module BTICard
%{
#include "BTICard.H" // this just makes the interface available to the wrapper.
%}
%include <windows.i>
%include "BTICard.h" // This wraps the interface defined in the header.
setup.py knows about SWIG interfaces, so include the .i file directly as a source. Headers are included by the sources and aren't listed as sources. You may need other options, but this should get you on the right track. You'll likely need the DLL's export library (BTICARD64.lib) and need to link against that as well:
example_module = Extension('_BTICard',
                           sources=['BTICard.i'],
                           libraries=['BTICARD64'])
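Putting it together, a sketch of what the corrected setup.py could look like; the swig_opts and library_dirs values are assumptions (they mirror the swig -py3 call above and assume BTICARD64.lib sits next to setup.py):

from distutils.core import setup, Extension

example_module = Extension('_BTICard',
                           sources=['BTICard.i'],   # distutils runs SWIG on .i sources itself
                           swig_opts=['-py3'],
                           library_dirs=['.'],      # where BTICARD64.lib lives (assumption)
                           libraries=['BTICARD64'])

setup(name='BTICard',
      version='0.1',
      ext_modules=[example_module],
      py_modules=['BTICard'])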
I'm trying to embed Cython code in C, following chapter 8 of the O'Reilly Cython book. I found this paragraph in Cython's documentation but still don't know what I should do:
If the C code wanting to use these functions is part of more than one shared library or executable, then import_modulename() function needs to be called in each of the shared libraries which use these functions. If you crash with a segmentation fault (SIGSEGV on linux) when calling into one of these api calls, this is likely an indication that the shared library which contains the api call which is generating the segmentation fault does not call the import_modulename() function before the api call which crashes.
I'm running Python 3.4, Cython 0.23 and GCC 5 on OS X. The source files are transcendentals.pyx and main.c:
main.c
#include "transcendentals_api.h"
#include <math.h>
#include <stdio.h>
int main(int argc, char **argv)
{
Py_SetPythonHome(L"/Users/spacegoing/anaconda");
Py_Initialize();
import_transcendentals();
printf("pi**e: %f\n", pow(get_pi(), get_e()));
Py_Finalize();
return 0;
}
transcendentals.pyx
cdef api double get_pi():
    return 3.1415926

cdef api double get_e():
    print("calling get_e()")
    return 2.718281828
I'm compiling those files using setup.py and Makefile:
setup.py:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize

setup(
    ext_modules=cythonize([
        Extension("transcendentals", ["transcendentals.pyx"])
    ])
)
Makefile
python-config=/Users/spacegoing/anaconda/bin/python3-config
ldflags:=$(shell $(python-config) --ldflags)
cflags:=$(shell $(python-config) --cflags)

a.out: main.c transcendentals.so
	gcc-5 $(cflags) $(ldflags) transcendentals.c main.c

transcendentals.so: setup.py transcendentals.pyx
	python setup.py build_ext --inplace
	cython transcendentals.pyx

clean:
	rm -r a.out a.out.dSYM build transcendentals.[ch] transcendentals.so transcendentals_api.h
However, I ran into the error Segmentation fault: 11. Any idea that could help with this? Thanks!
In that Makefile there is
transcendentals.so: setup.py transcendentals.pyx
	python setup.py build_ext --inplace
Unless python refers to /Users/spacegoing/anaconda/bin/python3, it should be replaced, since the module may otherwise be compiled for the wrong Python version and thus cannot be loaded.
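For instance, a sketch of the corrected rule (the variable name is arbitrary):

python := /Users/spacegoing/anaconda/bin/python3

transcendentals.so: setup.py transcendentals.pyx
	$(python) setup.py build_ext --inplace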
In main.c, the call to import_transcendentals() does not check the return value, i.e. whether the import fails or succeeds. In case of failure, get_pi() and get_e() point to invalid memory locations, and trying to call them causes a segmentation fault.
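A sketch of what the check could look like; in the generated transcendentals_api.h, the import function returns a negative value on failure:

if (import_transcendentals() < 0) {
    PyErr_Print();                        /* show why the import failed */
    fprintf(stderr, "import of transcendentals failed\n");
    return 1;
}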
Also, the module has to be located somewhere it can be found. It seems that, when embedding, the current directory is not searched for Python modules. The PYTHONPATH environment variable could be changed to include the directory where transcendentals.so is located.
The following is an alternative way of embedding the code in the C program; it sidesteps the import issues, since the module code is linked into the executable.
Essentially, a call to PyInit_transcendentals() is missing.
The file transcendentals.h will be generated when the Cython functions are declared public, i.e.

cdef public api double get_pi():
    ...

cdef public api double get_e():
    ...
Your main.c should have the include directives
#include <Python.h>
#include "transcendentals.h"
and then, in main:

Py_Initialize();
PyInit_transcendentals();
There should be no #include "transcendentals_api.h" and no import_transcendentals() call.
The first reason is that according to the documentation
However, note that you should include either modulename.h or
modulename_api.h in a given C file, not both, otherwise you may get
conflicting dual definitions.
The second reason is that, since transcendentals.c is linked into the program in

gcc $(cflags) $(ldflags) transcendentals.c main.c

there is no reason to import the transcendentals module. The module has to be initialized, though; PyInit_transcendentals() does that for Python 3.
I am trying to wrap a function foo in test.cpp with SWIG. I have a header foo.h which contains the declaration of the function foo. test.cpp depends on an external header ex.h and a shared object file libex.so located in /usr/lib64.
I followed the blog post from here.
I am able to build the module with python setup.py build_ext --inplace. However, when I try to import it, I get the following error, and I am not sure what I am missing, as most other questions with this error do not use a setup.py file. Below is an example of what I currently have.
The error on importing _foo:
>>> import _foo
ImportError: dynamic module does not define init function (init_foo)
test.i
%module foo
%{
#pragma warning(disable : 4996)
#define SWIG_FILE_WITH_INIT
#include "test.h"
%}
%include <std_vector.i>
%include <std_string.i>
%include "test.h"
test.cpp
#include "ex.h"
void foo(int i){
return;
};
test.h
#include "ex.h"
void foo(int i);
setup.py
try:
    from setuptools.command.build_ext import build_ext
    from setuptools import setup, Extension, Command
except ImportError:
    from distutils.command.build_ext import build_ext
    from distutils.core import setup, Extension, Command

foo_module = Extension('_foo',
                       sources=['foo.i', 'foo.cpp'],
                       swig_opts=['-c++'],
                       library_dirs=['/usr/lib64'],
                       libraries=['ex'],
                       include_dirs=['/usr/include'],
                       extra_compile_args=['-DNDEBUG', '-DUNIX', '-D__UNIX', '-m64',
                                           '-fPIC', '-O2', '-w', '-fmessage-length=0'])

setup(name='mymodule',
      ext_modules=[foo_module],
      py_modules=["foo"],
      )
It looks like there is some inconsistency in the use of foo and _foo, as the wrap file is generated, compiled and linked in.
Try changing the module name in test.i from
%module foo
to
%module _foo
or adjusting the Extension declaration in your setup.py from
Extension('_foo',
to
Extension('foo',
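As a sanity check once the names are consistent: the usual pattern is to import the SWIG-generated wrapper module, which loads the raw extension itself, rather than importing _foo directly. A hypothetical session:

import foo    # the generated foo.py imports the _foo extension internally
foo.foo(3)    # calls the wrapped C++ function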
I had the same error; however, it was due to which Python was in use. I was using a system with Python 2.7, 3.4 and 3.5. Only "python3" (symlinked to python3.5) worked. Importing with any of the other Pythons gave the "ImportError: dynamic module does not define init function" error.
Can I use Cython to create a shared library with exported C functions that have Python code at their core? Like wrapping Python with C?
It is to be used in plugins.
Using Cython, you can write functions declared as C ones with the cdef keyword (and public, which is important!), with Python code inside:
yourext.pyx
cdef public int func1(unsigned long l, float f):
    print(f)  # some Python code
Note: in the following it is assumed that we are working in the root of drive D:\
Building (setup.py)
from distutils.core import setup
from Cython.Build import cythonize
from Cython.Distutils import build_ext

setup(
    cmdclass={'build_ext': build_ext},
    name='My app',
    ext_modules=cythonize("yourext.pyx"),
)
Then run python setup.py build_ext --inplace
After running the setup.py (if you are using distutils), you'll get 2 files of interest:
yourext.h
yourext.c
Looking into the .c will show you that func1 is a C function, in the end.
Those two files are all we need to do the rest.
C main program for testing
// test.c
#include "Python.h"
#include "yourext.h"

int main()
{
    Py_Initialize();   // start the Python interpreter
    inityourext();     // initialize module yourext (Python 2)
    func1(12, 3.0);    // let's use the shared library...
    Py_Finalize();
    return 0;
}
As we don't use the extension (.pyd) by itself, we need a little trick/hack in the header file to disable the "DLL behavior". Add the following at the beginning of "yourext.h":

#undef DL_IMPORT          /* undefines the DL_IMPORT macro */
#define DL_IMPORT(t) t    /* redefines it to do nothing... */

so that the existing declaration

__PYX_EXTERN_C DL_IMPORT(int) func1(unsigned long, float);

resolves to a plain C prototype.
Compiling "yourext" as a shared library
gcc -shared yourext.c -IC:\Python27\include -LC:\Python27\libs -lpython27 -o libyourext.dll
Then compile our test program (linking against the DLL):
gcc test.c -IC:\Python27\include -LC:\Python27\libs -LD:\ -lpython27 -lyourext -o test.exe
Finally, run the program
$ test
3.0
This is not obvious, and there are many other ways to achieve the same thing, but this works (have a look at Boost.Python, etc.; other solutions may better fit your needs).
I hope this answers your question a little bit or, at least, gives you an idea...