I am writing a Python C extension on Linux (RHEL 6.3) with Python 2.6.6.
There is a shared library lib_common.so, and I have written C code (a Python C extension) to call the functions in lib_common.so.
I created a setup.py which includes the library and the C code.
It built the module mymod.so (mymod) successfully.
I copied this .so to the /usr/lib64/python2.6/site-packages/ directory, and I also copied lib_common.so to the same directory.
Now when I start the Python interpreter and import the module (mymod), I get an error saying that a function which is present in lib_common.so is undefined:
ImportError: /usr/lib64/python2.6/site-packages/mymod.so: undefined symbol: My_Fun
Am I missing a step here that is causing this error?
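For comparison, here is a minimal sketch of a setup.py that actually links the extension against lib_common.so (the source file name mymod.c and the library's location are assumptions on my part); if the extension is only compiled next to the library but never linked to it, the import fails with exactly this kind of undefined symbol error:
# Hypothetical setup.py sketch (not the asker's actual file): link the
# extension module against lib_common.so so that My_Fun is resolved when
# mymod.so is loaded.
from distutils.core import setup, Extension

mymod = Extension(
    "mymod",
    sources=["mymod.c"],                                   # assumed C source name
    library_dirs=["/usr/lib64/python2.6/site-packages"],   # where lib_common.so was copied
    libraries=["_common"],                                 # -l_common resolves to lib_common.so
    runtime_library_dirs=["/usr/lib64/python2.6/site-packages"],  # so the loader finds it at import time
)

setup(name="mymod", version="1.0", ext_modules=[mymod])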
import winshell
r = list(winshell.recycle_bin())
for index, value in enumerate(r):
    print(index, value.original_filename())
This is the simple script I wrote, but when I try running it (or anything else that uses winshell) I get this error:
ModuleNotFoundError: No module named 'win32'
And when I try running pip install win32 I get another error:
ERROR: Could not find a version that satisfies the requirement win32 (from versions: none)
ERROR: No matching distribution found for win32
So now I'm even more confused. Why does winshell need a different module? That module doesn't even seem to exist. Is it fine if I use some module other than the apparently non-existent win32? If so, which one? What am I supposed to do now?
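For context, the missing win32 package is not something you install under that name: winshell depends on pywin32, which is what provides the win32, win32com and related modules. A minimal check along these lines (the message text is just illustrative):
# winshell's "win32" imports come from the pywin32 distribution, so the
# package to install is "pywin32", not "win32".
try:
    import win32api  # provided by pywin32
except ImportError:
    raise SystemExit("pywin32 is missing; install it with: pip install pywin32")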
First, you have to run the pywin32_postinstall.py script found in the Scripts directory. Let's say your Python directory is C:\python3; then follow the commands below.
cd C:\python3
python Scripts/pywin32_postinstall.py -install
After that, the installation drops the DLL files under C:\Windows\System32. You need to move those two files (pythoncom310.dll and pywintypes310.dll) to the C:\python3\Lib\site-packages\win32 directory.
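A small sketch of that move step in Python, assuming the default paths used in this answer (adjust the version suffix in the DLL names to match your Python build, and run it from an elevated prompt since the files live under System32):
# Move the two pywin32 DLLs from System32 into site-packages\win32.
import os
import shutil

src_dir = r"C:\Windows\System32"
dst_dir = r"C:\python3\Lib\site-packages\win32"
for name in ("pythoncom310.dll", "pywintypes310.dll"):
    shutil.move(os.path.join(src_dir, name), os.path.join(dst_dir, name))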
After that, you need to edit the python310._pth file found inside the Python installation folder, and make the following changes:
Lib/site-packages
Lib/site-packages/win32
Lib/site-packages/win32/lib
Lib/site-packages/pythonwin
python310.zip
#Uncomment to run site.main() automatically
#import site
Save and try running your code again.
Troubleshoot
If you still get an error saying "ImportError: DLL load failed while importing win32api: The specified module could not be found.", make sure you have copied the two DLL files to the Lib\site-packages\win32 directory.
I'm on macOS and I'm trying to embed Python inside a C shared library. Disclaimer: there are two installations of Python involved in this question. One is my "main" Python, installed at /Library/Frameworks/Python.framework/Versions/3.10/bin/python3, which I will refer to as the "main python"; the other is a subdirectory of my current working directory containing the Python source code that I will build and embed, which I will refer to as the "embedded python".
For clarity, I have reproduced a minimalistic example of my problem that gives the same results.
Consider that the current working directory contains:
python310 (a directory containing python source code)
I configured it using the following command:
./configure --prefix=__path_to_the_directory_containing_python_source__ --exec-prefix=__path_to_the_directory_containing_python_source__ --enable-shared --enable-optimizations --with-lto
and compiled it using make && make altinstall.
I used altinstall because I will use this distribution only inside my application and didn't want to replace my normal Python installation.
test_interpreter.c:
#include "__path_to_the_directory_containing_python_source__/Include/Python.h"

int test() {
    Py_SetProgramName(L"__path_to_the_directory_containing_python_source__/python.exe");
    Py_Initialize();

    // adds the current directory to sys.path to make test_import.py importable
    PyObject* sysPath = PySys_GetObject("path");  // borrowed reference
    PyObject* cur_dir = PyUnicode_FromString(""); // new reference
    PyList_Append(sysPath, cur_dir);

    PyRun_SimpleString("import test_import");

    Py_DECREF(cur_dir);
    Py_Finalize();
    return 0;
}
test_import.py:
# will be called by embedded python
print("Inside python")
...
test_interpreter.py:
# will be called by main python
import ctypes
lib = ctypes.CDLL("/__path_to_current_directory__/libtest.so")
tp = ctypes.CFUNCTYPE(ctypes.c_int)
test = tp(("test", lib))
test()
Now the problem is that when test_import.py imports some built-in modules (note that it doesn't happen with every module, just some), I get errors such as a segfault (e.g. when importing ctypes) or an abort trap (e.g. when importing sqlite3). The most interesting part is that the errors do not occur if the interpreter is embedded in an executable rather than a shared library.
So, if I compile test_interpreter.c into libtest.so using: gcc -L__path_to_the_directory_containing_python_source__ -lpython3.10 -shared -install_name __current_directory_path__/libtest.so -o libtest.so test_interpreter.c,
then modify test_import.py to, for example:
# will be called by embedded python
print("Inside python")
import decimal
and execute python3 test_interpreter.py (using the main python; its version is also 3.10), I get:
Inside python
Segmentation fault: 11
Other modules that give me the same error message are:
tkinter
ctypes
Also, in case it's useful: I found that, when importing ctypes inside the embedded interpreter, the error occurs when the line from _ctypes import Union, Structure, Array (in ctypes' __init__.py) is executed.
If I modify test_import.py to:
print("Inside python")
import sqlite3
and run the same command i get:
Inside python
Python(86563,0x10a7105c0) malloc: *** error for object 0x101d56360: pointer being freed was not allocated
Python(86563,0x10a7105c0) malloc: *** set a breakpoint in malloc_error_break to debug
Abort trap: 6
Other modules that give me the same error message are:
dbm
distutils
random
json
zoneinfo
base64
csv
calendar
pickle
Note that if I compile test_interpreter.c as an executable (after changing the test function's name to main) using gcc -L__path_to_the_directory_containing_python_source__ -o test test_interpreter.c, or if I run the python executable directly (without embedding it) and try to import those modules, everything works fine.
Thanks in advance to everyone who will try to understand what's going on.
I had a working Cython program which wrapped some C libraries and custom C code. Recently, I had to switch my project to C++, so I renamed all my C code to *.cpp. Cython compiled fine and produced the .so file. However, when I try to import my library in Python, I get the following error.
File "example.py", line 1, in <module>
from tag36h11_detector import detect
ImportError: dlopen(/Users/ajay/Documents/Code/calibration/apriltag_detector/examples/tag36h11_detector.cpython-36m-darwin.so, 2): Symbol not found: _free_detection_payload
Referenced from: /Users/ajay/Documents/Code/calibration/apriltag_detector/examples/tag36h11_detector.cpython-36m-darwin.so
Expected in: flat namespace
in /Users/ajay/Documents/Code/calibration/apriltag_detector/examples/tag36h11_detector.cpython-36m-darwin.so
Because I'm not sure about the source of the error, I'm not sure what relevant information to provide.
Here's my setup.py
from distutils.core import setup, Extension
from Cython.Build import cythonize
import numpy
setup(ext_modules=cythonize(Extension(
    name='tag36h11_detector',
    sources=["tag36h11_detector.pyx",
             "tag36h11_detector/tag36h11_detector.cpp"],
    include_dirs=["/usr/local/include/apriltag", numpy.get_include()],
    libraries=["apriltag"])))
I compile it with python setup.py build_ext --inplace
Thanks for any assistance!
Add language="c++" to your setup:
setup(ext_modules=cythonize(Extension(
    name='XXX',
    ....
    language="c++",
)))
You probably use gcc. The frontend of gcc (and many other compilers) decides whether the file is compiled as C (cc1 is used) or C++ (cc1plus is used) depending on its extension: ".c" is C, ".cpp" is C++.
If you use extra_compile_args=["-v"] in your setup, you can see exactly which compiler is used:
Cython creates "tag36h11_detector.c", and because of its extension the C compiler (cc1) is used.
For the file "tag36h11_detector/tag36h11_detector.cpp" the C++ compiler (cc1plus) is used.
One of the differences between C and C++ is name mangling: C expects that the names of the symbols in the object files are not mangled, but C++ mangles them.
For example, for a function with signature int test(int), C tells the linker to search for a symbol called test, but C++ creates a symbol called _Z4testi instead, and thus the symbol cannot be found in the linkage step.
Now, what happens during linking? On Linux, the default behavior when linking a shared object is to allow undefined symbols. It is implicitly assumed that those symbols will be available at run time, when the shared library is loaded. That means the program fails only when the shared object is loaded and the symbol cannot be found, i.e. when you import your module.
You could add extra_link_args=["-Wl,--no-undefined"] to your setup to make the build fail if there are undefined symbols, in order to avoid surprises at runtime.
One way to fix it would be to tell the C++ compiler to emit unmangled names by using extern "C" in your code, as pointed out in the comments.
A less intrusive approach is to make clear to the compiler that the C++ convention is used, by adding language="c++" to the setup.
With language="c++", Cython creates "XXX.cpp" (instead of "XXX.c") from "XXX.pyx", and thus gcc chooses the C++ compiler for the cythonized file, which is aware of the right name mangling.
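Applied to the setup.py from the question, the change might look roughly like this (a sketch, not necessarily the asker's exact file; the commented-out -Wl,--no-undefined flag from above is optional and specific to GNU ld):
# Sketch of the question's setup.py with language="c++" added; paths are the
# ones from the question and may need adjusting.
from distutils.core import setup, Extension
from Cython.Build import cythonize
import numpy

setup(ext_modules=cythonize(Extension(
    name="tag36h11_detector",
    sources=["tag36h11_detector.pyx",
             "tag36h11_detector/tag36h11_detector.cpp"],
    include_dirs=["/usr/local/include/apriltag", numpy.get_include()],
    libraries=["apriltag"],
    language="c++",
    # extra_link_args=["-Wl,--no-undefined"],  # optional: fail at link time on undefined symbols (GNU ld only)
)))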
I have compiled scip with:
$ IPOPT=true make SHARED=true scipoptlib
It compiled successfully, and I ran python setup.py install for the Python interface.
However, when I run from pyscipopt.scip import Model in Python, I get the following error message:
ImportError: scip-3.2.1/interfaces/python/lib/libscipopt.so: undefined symbol: _ZTIN5Ipopt7JournalE
You need to adapt the setup.py to also include Ipopt as a library to link against.
The relevant setting is close to the end of the file and is called libraries in the definition of the Cython extension.
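Concretely, the edit might look something like this sketch (the extension name, source file, and directory paths are assumptions based on a typical PySCIPOpt-style setup.py, and the exact name of the Ipopt library on your system may differ):
# Hypothetical sketch of the relevant Extension definition near the end of
# setup.py: add the Ipopt library next to the SCIP library so that symbols
# such as _ZTIN5Ipopt7JournalE are resolved when the module is loaded.
from distutils.core import setup, Extension
from Cython.Build import cythonize

ext = Extension(
    "pyscipopt.scip",
    sources=["pyscipopt/scip.pyx"],
    include_dirs=["../../src"],        # assumed SCIP include directory
    library_dirs=["../../lib"],        # assumed directory containing libscipopt.so
    libraries=["scipopt", "ipopt"],    # was: ["scipopt"]; add "ipopt"
)

setup(name="pyscipopt", ext_modules=cythonize([ext]))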
I have built an executable using cxfreeze (inside a python3.2 virtualenv) on my local machine.
The executable runs correctly on the local machine.
I'm trying to run the executable on a separate target machine (of identical OS and architecture), but get the following error:
...
File "/home/chris/.virtualenvs/python3env/lib/python3.2/site-packages/psycopg2/__init__.py", line 67, in <module>
File "ExtensionLoader_psycopg2__psycopg.py", line 18, in <module>
ImportError: No module named None
All the shared library dependencies are met on the target machine (according to ldd).
Based on the trace my guess is that psycopg2 is trying to load the shared library _psycopg.cpython-32mu.so (locally python3.2/site-packages/psycopg2/_psycopg.cpython-32mu.so) but can't find it at runtime.
I tried placing the library in the same directory as the executable and setting LD_LIBRARY_PATH, but neither solved the (assumed) problem.
After running strace on each process, it appears that the pure python version is looking for the file _psycopg.cpython-32mu.so
open("/home/chris/.virtualenvs/python3env/lib/python3.2/site-packages/psycopg2/_psycopg.cpython-32mu.so", O_RDONLY|O_CLOEXEC) = 8
Whereas the binary built by cxfreeze is looking for the file psycopg2._psycopg.so
open("/path/to/psycopg2._psycopg.so", O_RDONLY|O_CLOEXEC) = 3
md5sum reveals these files to be identical, so it appears that the cxfreeze process changes the expected name of the dynamic library. It's worth noting that a version of this library, correctly named for the target, is included in the dist directory output by cxfreeze.
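If the renamed file is really the only thing missing, one possible workaround (an assumption on my part, not a verified fix) is to place the copy that cxfreeze already produced in the dist directory next to the frozen executable on the target, under the name the binary asks for:
# Hypothetical workaround sketch: copy the correctly named library from the
# cxfreeze dist output to the location the frozen binary was seen opening in
# strace. The destination path is a placeholder.
import shutil

shutil.copy("dist/psycopg2._psycopg.so", "/path/to/psycopg2._psycopg.so")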