Embedding Python: version inconsistent with ProgramFullPath

I have anaconda Python first on my PATH, but a simple Python embedding example reports my Mac's system Python version, even though ProgramFullPath correctly points to anaconda's python. Is there a way to correctly find and use anaconda Python?
Minimal example:
#include <Python.h>
#include <stdio.h>
int main(void) {
    Py_Initialize();
    printf("Python version:\n%s\n", Py_GetVersion());
    printf("Python Program Full Path:\n%s\n", Py_GetProgramFullPath());
    Py_Finalize();
    return 0;
}
I compile with,
gcc `python-config --cflags` example.c `python-config --ldflags`
or, expanding the results of the python-config calls,
gcc -I/Users/ryandwyer/anaconda/include/python2.7 \
-I/Users/ryandwyer/anaconda/include/python2.7 \
-fno-strict-aliasing -I/Users/ryandwyer/anaconda/include \
-arch x86_64 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes \
example.c -lpython2.7 -ldl -framework CoreFoundation -u _PyMac_Error
Running the program gives,
Python version:
2.7.5 (default, Mar 9 2014, 22:15:05)
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)]
Python Program Full Path:
/Users/ryandwyer/anaconda/bin/python
This seems to be the same problem as Embed python in c++: choose python version. I have also tried setting PYTHONHOME, Py_SetProgramName, and Py_SetPythonHome, but cannot get Py_GetVersion() to return the anaconda version.

There was a partial answer in the post you linked.
Option 1: Run your program as follows
LD_LIBRARY_PATH=/path_to_anaconda/lib ./program
Option 2: Run the following command in the terminal, then run your program
export LD_LIBRARY_PATH=/path_to_anaconda/lib
./program
Option 3: Add the following line to the end of your .bashrc file
export LD_LIBRARY_PATH=/path_to_anaconda/lib
(On macOS, note that the dynamic loader ignores LD_LIBRARY_PATH; use DYLD_LIBRARY_PATH instead.)
Why do you have to do this when embedding Python, but not when running the interpreter normally? I have no idea, but if some Python/C wizard stumbles on this post I'd love to know why.
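For what it's worth, a likely explanation: Py_GetVersion() reports the version compiled into whichever libpython the dynamic loader binds at startup, and that binding is resolved from the loader's search path, not from ProgramFullPath. The calls the asker tried only adjust the interpreter's search prefixes, which is presumably why the reported version doesn't change. For reference, a minimal sketch of those calls (Python 2 API, using the anaconda paths from the question):

#include <Python.h>
#include <stdio.h>

int main(void) {
    /* These set where the interpreter looks for its standard library;
       they do not control which libpython the loader already picked,
       which is what Py_GetVersion() actually reflects. */
    Py_SetProgramName("/Users/ryandwyer/anaconda/bin/python");
    Py_SetPythonHome("/Users/ryandwyer/anaconda");
    Py_Initialize();
    printf("Python version:\n%s\n", Py_GetVersion());
    Py_Finalize();
    return 0;
}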

Related

Why does the Python C API not work in an Apache module written in C?

I have a simple module written in C:
#include <stdio.h>
#include "apr_hash.h"
#include "ap_config.h"
#include "ap_provider.h"
#include "httpd.h"
#include "http_core.h"
#include "http_config.h"
#include "http_log.h"
#include "http_protocol.h"
#include "http_request.h"
#define PY_SSIZE_T_CLEAN
#include <Python.h>
static int example_handler(request_rec *r) {
    if (!r->handler || strcmp(r->handler, "example-handler")) {
        return (DECLINED);
    }
    PyObject* py_io = PyImport_ImportModule("io");
    Py_DECREF(py_io);
    ap_set_content_type(r, "text/html");
    ap_rprintf(r, "Filename: %s", r->filename);
    return OK;
}

static void register_hooks(apr_pool_t *pool) {
    ap_hook_handler(example_handler, NULL, NULL, APR_HOOK_LAST);
}

module AP_MODULE_DECLARE_DATA mod_example = {
    STANDARD20_MODULE_STUFF, NULL, NULL, NULL, NULL, NULL, register_hooks
};
And compile without apxs:
# Compile
gcc -D LINUX -D AMD64 \
$($(apxs -q APR_CONFIG) --cflags --includes) \
$(python3-config --cflags --includes) \
-fPIC -DSHARED_MODULE \
-I$(apxs -q INCLUDEDIR) $(apxs -q CFLAGS) \
-c mod_example.c;
# Link shared library
gcc -D LINUX -D AMD64 \
$($(apxs -q APR_CONFIG) --link-ld) \
$(python3-config --ldflags) \
-shared -o mod_example.so mod_example.o;
The compilation succeeds without errors or warnings:
$ file build/mod_example.so
build/mod_example.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=10f64f7e7c6d0ec9301e07cb61fe5d0249653704, with debug_info, not stripped
But when I upload it to a VM with apache2 installed, enable the module, and restart Apache, it fails with:
apache2: Syntax error on line 146 of /etc/apache2/apache2.conf: Syntax error on line 1 of /etc/apache2/mods-enabled/mod_example.load: Cannot load /usr/lib/apache2/modules/mod_example.so into server: /usr/lib/apache2/modules/mod_example.so: undefined symbol: PyImport_ImportModule
The function is documented here: https://docs.python.org/3/c-api/import.html
Why does the Python function declared in Python.h link fine at build time but come up undefined on the server? The host machine (Ubuntu Desktop 20.04 LTS) and the VM (Ubuntu Server 20.04 LTS) have the same Python and Apache versions (dev headers included):
$ python3 --version
Python 3.8.5
$ gcc --version
gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0 ...
$ find /usr/include -name "apr-*" | grep -o apr.*
apr-1.0
Expanding my comments into a proper answer:
The problem is that neither the module nor httpd is linked with the Python runtime library, which for you would probably be some version of libpython3.8.so. That has no negative impact on compilation proper, and presumably the linker accepts it on account of the object being built being a shared library, not a program. apxs may also be providing link flags that contribute.
If python3-config is behaving correctly in your build environment then it is emitting the appropriate -L and -l flags already, but link behavior is sensitive to the order of command-line arguments. In particular, you should designate supporting libraries after the objects that require them. Thus, you could get the libraries named at an appropriate point either by moving $(python3-config --ldflags) to the end of the link command or by appending $(python3-config --libs) to that command. It is not a problem for support libraries to be designated multiple times.
I am not sure what conventional practice is in the Python extension world, if there even is a convention. python3-config splits things up strangely, to my eye: given that it has separate --ldflags and --libs options, I find it a bit surprising that the output requested by the first is a superset of that requested by the second. To my mind, --ldflags ought to give the linker flags, if any, that go before all object names on the command line, and --libs should give the libraries that come after. But it looks like the two options can be used as if their output matched that expectation, even though the output of --ldflags does not:
gcc -D LINUX -D AMD64 \
$($(apxs -q APR_CONFIG) --link-ld) \
$(python3-config --ldflags) \
-shared -o mod_example.so mod_example.o \
$(python3-config --libs)
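Separately from the link error: the handler calls PyImport_ImportModule without the interpreter ever having been initialized, which will fail at request time even once the module loads. A minimal sketch of initializing Python as each child process starts (the hook function name is mine, not from the original post):

/* Sketch: initialize the embedded interpreter once per child process. */
static void example_child_init(apr_pool_t *pool, server_rec *s) {
    if (!Py_IsInitialized())
        Py_Initialize();
}

static void register_hooks(apr_pool_t *pool) {
    ap_hook_child_init(example_child_init, NULL, NULL, APR_HOOK_MIDDLE);
    ap_hook_handler(example_handler, NULL, NULL, APR_HOOK_LAST);
}

Note also that Py_DECREF(py_io) will crash if the import fails and returns NULL; Py_XDECREF is the NULL-safe variant.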

Python and C++ integration: problems with a dynamic library

I use SWIG (macOS 10.13).
My shell script:
swig -c++ -python -o example_wrap.cpp example.i
g++ -c -std=c++17 -fPIC example.cpp
g++ -c -std=c++17 -fPIC example_wrap.cpp -o example_wrap.o \
-I/usr/local/Cellar//python/3.7.2_2/Frameworks/Python.framework/Versions/3.7/include/python3.7m
ld -bundle -macosx_version_min 10.13 -flat_namespace \
-undefined suppress -o _example.so *.o
I spent quite some time researching how to create a C++ dynamic library for Python, but I had never used the last line before. Most often I create a library from an IDE.
g++ -shared is more familiar to me, but it doesn't work.
Many such errors appear:
Undefined symbols for architecture x86_64:
"_PyArg_ParseTuple", referenced from:
_wrap_printTree(_object*, _object*) in example_wrap.o
I know these methods come from Python.h.
So, the questions are: how does the last line (ld -bundle ...) work? Are there other ways to create the dynamic library? How can I use g++ -shared?
Here is a small CMakeLists.txt that should work for the example you posted:
cmake_minimum_required(VERSION 3.10) # change this to your needs
project(foo VERSION 0.0 LANGUAGES CXX C)
find_package(SWIG REQUIRED)
include(${SWIG_USE_FILE})
find_package(PythonLibs)
include_directories(${PYTHON_INCLUDE_PATH})
include_directories(${CMAKE_CURRENT_SOURCE_DIR})
set(CMAKE_SWIG_FLAGS "")
add_library(exampleImpl SHARED example.h example.cpp)
target_compile_features(exampleImpl PUBLIC cxx_std_17)
set_source_files_properties(example.i PROPERTIES CPLUSPLUS ON)
swig_add_library(example LANGUAGE python SOURCES example.i)
swig_link_libraries(example ${PYTHON_LIBRARIES} exampleImpl)
To make sure cmake uses the right python library, you can pass an appropriate option upon configure time, see here.
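To answer the g++ -shared part of the question directly: on macOS, Python extension modules are conventionally linked with the Python symbols left undefined, so that they resolve from the host interpreter at import time. A sketch using the object files from the script above (a hand-rolled equivalent, not the only way):

# Leave the Python symbols undefined; the interpreter supplies them at import.
g++ -std=c++17 -shared -undefined dynamic_lookup \
    example.o example_wrap.o -o _example.so

This is essentially what the ld -bundle ... -undefined suppress line does, expressed through the compiler driver instead of invoking ld directly.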

Error importing Boost Python module (function_impl_base9max_arityEv)

I am trying to build a hello world C++ Python extension using boost-python.
I got the following source code from https://www.mantidproject.org/Boost_Python_Introduction:
// test.cpp
#include <iostream>
#include <boost/python.hpp>

void sayHello()
{
    std::cout << "Hello, Python!\n";
}

BOOST_PYTHON_MODULE(test) // Name here must match the name of the final shared library, i.e. mantid.dll or mantid.so
{
    boost::python::def("sayHello", &sayHello);
}
I then compile using the following commands:
g++ -fPIC -I/usr/include/python3.6m test.cpp -c
g++ -shared test.o -o test.so -I/usr/include/python3.6m -I/lib64/libboost_python3
These commands compile the code successfully and create a library file, test.so.
However, when I try to import the module in python3, I get the following error:
ImportError: /home/yt/C++/test.so: undefined symbol: _ZNK5boost6python7objects21py_function_impl_base9max_arityEv
The link Import Error on boost python hello program suggests that the command I used above would solve the problem by adding -I/usr/include/python3.6m and -I/lib64/libboost_python3, but it does not.
What am I doing wrong?
Thanks!
OS: Fedora 29 x86_64
Thanks guys!
The problem was the linker command. The correct one is:
g++ -fPIC -I/usr/include/python3.6m test.cpp -c
g++ -L /lib64 -shared test.o -o test.so -lpython3.6m -lboost_python3
Now it works on Fedora 29
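A quick way to verify the fix, run from the directory containing test.so:

$ python3 -c "import test; test.sayHello()"
Hello, Python!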

Building Python 3.6.3 from scratch on openSUSE

I've been having a hard time building Python 3.6.3 from source on openSUSE Leap 42.3.
When I started configuring the build I ran:
./configure --prefix=/opt/python3.6 --with-pydebug --enable-optimizations --enable-shared
and in another rendition
./configure --prefix=/opt/python3.6 --with-pydebug --enable-optimizations --enable-shared --with-system-expat --with-system-ffi
Prior to both, CXX was defined with:
CXX="/usr/bin/g++"
Configuration goes well (or so it seems), and then when I start make, after some success, it ALWAYS fails with this:
gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 \
    -Wall -Wstrict-prototypes -std=c99 -Wextra -Wno-unused-result \
    -Wno-unused-parameter -Wno-missing-field-initializers -I. -I./Include \
    -DPy_BUILD_CORE -I./Modules/expat -DHAVE_EXPAT_CONFIG_H \
    -DUSE_PYEXPAT_CAPI -c ./Modules/expat/xmlparse.c -o Modules/xmlparse.o
./Modules/expat/xmlparse.c:92:3: error: #error You do not have support for any sources of high quality entropy enabled. For end user security, that is probably not what you want.
Your options include:
  * Linux + glibc >=2.25 (getrandom): HAVE_GETRANDOM,
  * Linux + glibc <2.25 (syscall SYS_getrandom): HAVE_SYSCALL_GETRANDOM,
  * BSD / macOS >=10.7 (arc4random_buf): HAVE_ARC4RANDOM_BUF,
  * BSD / macOS <10.7 (arc4random): HAVE_ARC4RANDOM,
  * libbsd (arc4random_buf): HAVE_ARC4RANDOM_BUF + HAVE_LIBBSD,
  * libbsd (arc4random): HAVE_ARC4RANDOM + HAVE_LIBBSD,
  * Linux / BSD / macOS (/dev/urandom): XML_DEV_URANDOM,
  * Windows (RtlGenRandom): _WIN32.
If you insist on not using any of these, bypass this error by defining XML_POOR_ENTROPY; you have been warned. If you have reasons to patch this detection code away or need changes to the build system, please open a bug. Thank you!
I googled but have yet to find anything on this error.
One last note: I tried various permutations of ./configure, removing various feature flags but always keeping the prefix.
Can someone suggest WHY this is failing (and how to fix it, please)? This is the first time it has happened to me, and I suspect very strongly that I forgot to install something, but expat and libexpat are all there.
My thanks
OK, so this answer is a workaround, and for the sake of my need it will suffice.
I remembered that SUSE has an open build service (which ROCKS \m/ - https://build.opensuse.org/)
From there, I found some enterprising dev who had created a repo for Python 3.6.3 (I will be sending them an email to find out how they did it).
It was then a simple matter of adding the repo [http://download.opensuse.org/repositories/devel:/languages:/python/openSUSE_Leap_42.3/] and doing a repo-specific distro upgrade (zypper dup --repo python3.6.3).
To be extra safe, I created a btrfs snapshot so I could roll back if things went sideways. They did not, and I am a happy camper.
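For anyone who would still rather build from source: the #error text quoted above offers its own escape hatch. A sketch of that bypass, assuming configure propagates CPPFLAGS into the Modules build (it normally does), and heeding the message's warning about entropy quality:

# Bypass expat's entropy detection, as the error message itself suggests.
CPPFLAGS="-DXML_POOR_ENTROPY" ./configure --prefix=/opt/python3.6 \
    --with-pydebug --enable-optimizations --enable-shared
make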

Python termination error when a ctypes DLL calls printf

I am developing a Python system with some core DLLs accessed via ctypes. I have reduced the problem to this condition: execute a module that loads (no need to call) two DLLs, one of which calls printf -- the following error occurs at exit.
This application has requested the Runtime to terminate it in an
unusual way. Please contact the application's support team for more
information.
My environment:
- Windows 7, SP1
- Python 2.7.8
- MinGW v 3.20
This test case is adapted from a tutorial on writing dlls with MinGW:
/* add_core.c */
__declspec(dllexport) int sum(int a, int b) {
    return a + b;
}

/* sub_core.c */
#include <stdio.h>
__declspec(dllexport) int sub(int a, int b) {
    printf("Hello from sub_core.c");
    return a - b;
}
prog.py
import ctypes
add_core_dll = ctypes.cdll.LoadLibrary('add_core.dll')
sub_core_dll = ctypes.cdll.LoadLibrary('sub_core.dll')
> make
gcc -Wall -O3 -g -ansi -c add_core.c -o add_core.o
gcc -g -L. -ansi -shared add_core.o -o add_core.dll
gcc -Wall -O3 -g -ansi -c sub_core.c -o sub_core.o
gcc -g -L. -ansi -shared sub_core.o -o sub_core.dll
>python prog.py
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
and pops up a message dialog to the same effect: "python.exe has stopped working ...".
Note that the programs execute as expected and produce normal output. This error at termination is just a big nuisance I'd like to be rid of.
The same happens for:
Windows 7 Enterprise, SP1
Python 2.7.11
mingw32-g++.exe 5.3.0
