I'm just now reading up on Cython, and I'm wondering whether Cython compiles imported modules into the executable, or if you still need to have the modules installed on the target machine to run the Cython binary.
The "interface" of a Cython module remains at the Python level. When you import a module in Cython, the module becomes available only at the Python level of the code and uses the regular Python import mechanism.
So:
Cython does not "compile in" the dependencies.
You need to install the dependencies on the target machine.
For "Cython level" code, including the question of "cimporting" module, Cython uses the equivalent of C headers (the .pxd declaration files) and dynamically loaded libraries to access external code. The .so files (for Linux, DLL for windows and dylib for mac) need to be present on the target machine.
Related
As distutils is being removed from Python in versions after 3.10, and setuptools will not be added to the stdlib, I want to replace an existing setup.py recipe for building/installing a C++ library Cython extension (i.e. not primarily a Python package, not run in a venv, etc.) with some custom code.
The Cython part is working fine, and I just about managed to construct an equivalent call to the C++ compiler from that previously executed by distutils, by using config-var info from sysconfig... though the latter was very trial and error, with no documentation or particular consistency to the config-var collection as far as I could tell.
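For illustration, the kind of config-var lookups I mean (whether these keys exist, and what they contain, varies across platforms, so treat them as assumptions):

    import sysconfig

    # Compiler and flags that were used to build CPython itself
    cxx = sysconfig.get_config_var("CXX")
    cflags = sysconfig.get_config_var("CFLAGS")
    include_dir = sysconfig.get_path("include")  # where Python.h lives
    print(cxx, cflags, include_dir)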
But I am now stuck on identifying which directory to install my built extension .so into, within the target prefix of my build. Depending on the platform, the path scheme in use, and the prefix itself, the subdirectories could involve lib or lib64, a pythonX.Y subdirectory of some sort, and a final site-packages, dist-packages or other directory. This decision was previously made by distutils, but I can't find any equivalent code in other stdlib packages that returns such path decisions.
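To make the question concrete, the closest stdlib call I can find is sysconfig.get_path, though I am not sure the "platlib" key and the vars override handle every platform/scheme combination:

    import sysconfig

    prefix = "/opt/myprefix"  # hypothetical target prefix
    # 'platlib' is the directory for platform-specific (compiled) modules;
    # it resolves the lib vs lib64, pythonX.Y and site-packages parts.
    dest = sysconfig.get_path(
        "platlib",
        vars={"base": prefix, "platbase": prefix},
    )
    print(dest)  # e.g. /opt/myprefix/lib/python3.11/site-packages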
Any suggestions of answers or best-practice approaches? (Other than "use setuptools", please!)
I need to use both nuitka (code obfuscation) and numba (code optimization), but nuitka crashes when executing @njit (understandably).
My idea is to precompile some code with numba using its AOT compilation feature, so nuitka can use it later when doing its thing.
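For context, a minimal sketch of the AOT step I mean, using numba.pycc (module and function names are placeholders):

    # build_aot.py -- ahead-of-time compilation with numba
    from numba.pycc import CC

    cc = CC("fastmath")  # output: fastmath.<platform-tag>.so / .pyd

    @cc.export("square", "f8(f8)")  # exported name and type signature
    def square(x):
        return x * x

    if __name__ == "__main__":
        cc.compile()

After running this, `import fastmath` works like any other extension module, which is the part nuitka then has to pick up.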
Issues I encountered:
when leaving only the precompiled .so file, nuitka ignores it => ImportError
when leaving the source .py file alongside the .so file, nuitka uses the .py file despite seeing both (it displays a warning about finding both files)
I tried adding --no-prefer-source-code (which should be the default), but it still uses the .py file instead of the precompiled .so
Has anyone ever managed to import external .so modules in a nuitka process?
NB: I know I can just copy the precompiled numba .so file and get it imported 'normally' by the compiled nuitka .so, but that's not scalable if I add more precompiled numba code in the future.
The idea would be to include the precompiled numba .so inside nuitka's .so (is that even possible?)
I have developed a Python module using Boost Python. The extension has a setup.py file to install it, which requests that the extension be linked against libboost_python by doing something like the following:
my_module = Extension('_mymodule',
                      sources=...,
                      libraries=['boost_python'],
                      ...)
Recently, the Boost developers seem to have changed the naming convention of Boost Python: the library that used to be named libboost_python is now named libboost_python27 (to reflect the fact that the corresponding library for Python 3.x was being called libboost_python3).
What is the best way, from the setup.py script, to detect how the Boost Python library should be named (given that it is installed in a non-standard location)?
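A minimal sketch of one possible probe (the candidate names and library directory are assumptions; a more robust check would try to actually link against each candidate):

    import glob
    import os

    def find_boost_python(libdir):
        """Return the first Boost.Python library name found in libdir,
        suitable for Extension(libraries=[...]), or None."""
        for name in ("boost_python27", "boost_python-py27", "boost_python"):
            if glob.glob(os.path.join(libdir, "lib%s.*" % name)):
                return name
        return None

    # hypothetical non-standard install location
    boost_lib = find_boost_python("/opt/boost/lib")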
I am wrapping C++ code for use in Python using SWIG. The C++ module I am wrapping depends on other C++ modules located in a different package. However, rather than directly importing/including those files, I would like to link against a previously created Python library/dynamic shared library to handle the dependencies. I want to use this method because I do not want to hard-code files from that package; I will simply have access to the shared library.
Currently, without importing the library, compiling the new module with the wrapper file results in the error:
"fatal error: string: No such file or directory compilation terminated."
as a header file the new module depends on is not available within this package. I do not want to copy the required headers into this package.
I would like to know if this is possible within either the SWIG interface file or CMake.
Thanks for your help.
I wrote a Python extension in C, and my python program uses that extension. In order for it to work, I would have to install the extension on the user's system before my program can run. Is there a way to bypass that installation step and somehow just have the extension in my python package? The only compiled part obviously is the extension (since it's in C).
You can avoid having someone install it independently, but you cannot avoid installation completely. If their computing platform differs from yours, they will have to build the extension.
What you can do is set up a package distribution using distutils. That way the package can be either installed or built, and you can include the C extension in your package.
For standard platforms, you can then provide binary package distributions. The user should retain the ability to rebuild if the binary package does not work for them.
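A minimal sketch of such a setup.py (package and file names are placeholders):

    # setup.py -- distribute a package that includes a C extension;
    # users without a matching binary build it from source on install
    from distutils.core import setup, Extension

    setup(
        name="mypackage",
        version="0.1",
        packages=["mypackage"],
        ext_modules=[Extension("mypackage._ext", sources=["src/ext.c"])],
    )

Then `python setup.py sdist` produces a source distribution, and `python setup.py bdist` produces a binary one for the build platform.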
Just put the compiled .pyd (the compiled Python DLL) in the same directory as your Python script; then you'll be able to import it.