I'm trying to embed some Python code into C; it's the first time I've done anything like that.
Here is the simple code of my first attempt, copied from a guide on the internet:
#include <Python.h>

void exec_pycode(const char* code)
{
    Py_Initialize();
    PyRun_SimpleString(code);
    Py_Finalize();
}

int main(int argc, char **argv)
{
    exec_pycode(argv[1]);
    return 0;
}
So I've installed the python3.4-dev package.
Then, to get the information for the linker, I typed:
pkg-config --cflags --libs python3
Then I tried to compile my code:
gcc -std=c99 -o main -I /usr/local/include/python3.4m -L /usr/local/lib -lpython3.4m main.c
(based on the output of the command above)
but this is the result:
/tmp/ccJFmdcr.o: In function `exec_pycode':
main.c:(.text+0xd): undefined reference to `Py_Initialize'
main.c:(.text+0x1e): undefined reference to `PyRun_SimpleStringFlags'
main.c:(.text+0x23): undefined reference to `Py_Finalize'
collect2: error: ld returned 1 exit status
It would seem there is a problem in the linking phase, but I can't understand where the problem is, given that I've passed the linker the exact paths of the header and of the library. How can I solve this problem?
Try reordering your compilation command so that all linking options are specified after your C source files: the linker processes libraries in the order given and only pulls in symbols that are still unresolved at that point, so a library listed before main.c contributes nothing:
gcc -std=c99 -o main -I /usr/local/include/python3.4m main.c \
-L /usr/local/lib -lpython3.4m
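If pkg-config knows about your Python build, you can also let it supply all of the flags in one go, which keeps the ordering right automatically. A minimal sketch, assuming the python3 pkg-config name from your earlier command points at the 3.4m installation:
gcc -std=c99 main.c $(pkg-config --cflags --libs python3) -o main
The source file comes first and the shell substitution appends the -I/-L/-l options after it, which is exactly the ordering the linker needs.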
Related
I installed python 3.9.1 on my Raspberry Pi following the instructions here https://www.ramoonus.nl/2020/10/06/how-to-install-python-3-9-on-raspberry-pi/ and set it as the default python interpreter. I got my compiling and linking parameters for embedded Python following the instructions here https://docs.python.org/3.9/extending/embedding.html#compiling-and-linking-under-unix-like-systems
I tried a simple test with the following code (test.c) :
#include <Python.h>

int
main(int argc, char *argv[])
{
    wchar_t *program = Py_DecodeLocale(argv[0], NULL);
    if (program == NULL) {
        fprintf(stderr, "Fatal error: cannot decode argv[0]\n");
        exit(1);
    }
    Py_SetProgramName(program);  /* optional but recommended */
    Py_Initialize();
    PyRun_SimpleString("from time import time,ctime\n"
                       "print('Today is', ctime(time()))\n");
    Py_Finalize();
    PyMem_RawFree(program);
    return 0;
}
and then
gcc -I/usr/local/opt/python-3.9.1/include/python3.9 -I/usr/local/opt/python-3.9.1/include/python3.9 -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -c test.c -o test.o
and
gcc -L/usr/local/opt/python-3.9.1/lib/python3.9/config-3.9-arm-linux-gnueabihf -L/usr/local/opt/python-3.9.1/lib -lcrypt -lpthread -ldl -lutil -lm -o test.o
and got
/usr/lib/gcc/arm-linux-gnueabihf/4.9/../../../arm-linux-gnueabihf/crt1.o: In function `_start':
/build/glibc-P1SmLh/glibc-2.19/csu/../ports/sysdeps/arm/start.S:119: undefined reference to `main'
collect2: error: ld returned 1 exit status
Trying to compile the example at https://docs.python.org/3.9/extending/embedding.html#pure-embedding throws the same error. What could the problem be?
Edit:
After Expolarity's comment I changed the linker command to:
gcc test.o -L/usr/local/opt/python-3.9.1/lib/python3.9/config-3.9-arm-linux-gnueabihf -L/usr/local/opt/python-3.9.1/lib -lcrypt -lpthread -ldl -lutil -lm -o test
which seems to have worked but threw me a bunch of other errors:
test.o: In function `main':
/home/pi/Downloads/test.c:6: undefined reference to `Py_DecodeLocale'
/home/pi/Downloads/test.c:11: undefined reference to `Py_SetProgramName'
/home/pi/Downloads/test.c:12: undefined reference to `Py_Initialize'
/home/pi/Downloads/test.c:13: undefined reference to `PyRun_SimpleStringFlags'
/home/pi/Downloads/test.c:15: undefined reference to `Py_Finalize'
/home/pi/Downloads/test.c:16: undefined reference to `PyMem_RawFree'
collect2: error: ld returned 1 exit status
This seems more serious. Any ideas?
Following tttapa's answer over here, it finally worked after adjusting the linker command like so:
gcc test.o -L/usr/local/opt/python-3.9.1/lib/python3.9/config-3.9-arm-linux-gnueabihf -L/usr/local/opt/python-3.9.1/lib -lcrypt -lpthread -ldl -lutil -lm -lpython3.9 -o test
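As an aside, the library list doesn't have to be maintained by hand. Here is a sketch of the same link step using the config script of that installation (the path to python3.9-config is an assumption based on the install prefix above; --embed is what adds -lpython3.9 on Python 3.8 and later):
gcc test.o $(/usr/local/opt/python-3.9.1/bin/python3.9-config --embed --ldflags) -o test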
Edit: Expolarity also answered correctly just after tttapa. Thanks a lot everyone!
I am trying to build a hello world C++ Python extension using boost-python.
I got the following source code from https://www.mantidproject.org/Boost_Python_Introduction:
// test.cpp
#include <iostream>
#include <boost/python.hpp>

void sayHello()
{
    std::cout << "Hello, Python!\n";
}

// Name here must match the name of the final shared library, i.e. mantid.dll or mantid.so
BOOST_PYTHON_MODULE(test)
{
    boost::python::def("sayHello", &sayHello);
}
I then compile it using the following commands:
g++ -fPIC -I/usr/include/python3.6m test.cpp -c
g++ -shared test.o -o test.so -I/usr/include/python3.6m -I/lib64/libboost_python3
These commands compile the code successfully and create a library file, test.so.
However, when I try to import the module in python3, I get the following error:
ImportError: /home/yt/C++/test.so: undefined symbol: _ZNK5boost6python7objects21py_function_impl_base9max_arityEv
The linked question Import Error on boost python hello program seems to suggest that the command I used above would solve the problem by adding -I/usr/include/python3.6m and -I/lib64/libboost_python3, but it does not.
What am I doing wrong?
Thanks!
OS: Fedora 29 x86_64
Thanks guys!
The problem was the linker command. The correct one is:
g++ -fPIC -I/usr/include/python3.6m test.cpp -c
g++ -L /lib64 -shared test.o -o test.so -lpython3.6m -lboost_python3
Now it works on Fedora 29
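A quick way to confirm the rebuilt module resolves its Boost.Python symbols is to import it directly (run from the directory containing test.so; sayHello is the function exported above):
python3 -c "import test; test.sayHello()"
If the undefined-symbol problem were still present, the import itself would fail instead of printing the greeting.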
Is there a good way to embed both a Python2 and a Python3 interpreter into a C program and then running either one or the other with the decision occurring at runtime?
Here's an example attempt:
Makefile:
all: main

main: main.c librun_in_py2.so librun_in_py3.so
	g++ main.c -lrun_in_py2 -lrun_in_py3 -L. -Wl,-rpath -Wl,$$ORIGIN -o main

librun_in_py2.so: run_in_py2.c
	g++ $$(python2.7-config --cflags --ldflags) -shared -fPIC $< -o $@

librun_in_py3.so: run_in_py3.c
	g++ $$(python3.4-config --cflags --ldflags) -shared -fPIC $< -o $@

clean:
	@-rm main *.so
main.c
void run_in_py2(const char* const str);
void run_in_py3(const char* const str);

static const char str2[] = "from time import time,ctime\n"
                           "import sys\n"
                           "print sys.version_info\n"
                           "print 'Today is',ctime(time())\n";

static const char str3[] = "from time import time,ctime\n"
                           "import sys\n"
                           "print(sys.version_info)\n"
                           "print('Today is', ctime(time()))\n";

int main(int argc, char* [])
{
    if (argc == 2)
        run_in_py2(str2);
    else
        run_in_py3(str3);
}
run_in_py2.c
#include <Python.h>

void run_in_py2(const char* const str)
{
    Py_Initialize();
    PyRun_SimpleString(str);
    Py_Finalize();
}
run_in_py3.c:
#include <Python.h>

void run_in_py3(const char* const str)
{
    Py_Initialize();
    PyRun_SimpleString(str);
    Py_Finalize();
}
Because of the order of library linking the result is always the same:
$ ./main
sys.version_info(major=2, minor=7, micro=9, releaselevel='final', serial=0)
('Today is', 'Thu Jun 4 10:59:29 2015')
Since the names are the same, it looks like the linker resolves everything with the Python 2 interpreter. Is there some way to isolate the names, or to encourage the linker to be lazier in resolving them? If possible, it'd be ideal to have the linker confirm that all names can be resolved but put off symbol resolution until the appropriate interpreter can be chosen.
A highly related question asks about running two independent embedded interpreters at the same time:
Multiple independent embedded Python Interpreters on multiple operating system threads invoked from C/C++ program
The suggestion there is to use two separate processes but I suspect there's a simpler answer to this question.
The reason behind the question is that I thought I understood from a conversation with someone way back when that there was a program that did this. And now I'm just curious about how it would be done.
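One direction not covered in this thread, sketched here only as an illustration: skip link-time resolution entirely and load whichever helper library is wanted at runtime with dlopen, keeping its symbols private with RTLD_LOCAL so the two libpython versions never share a namespace. This reuses the run_in_py2/run_in_py3 names from above; whether extension modules that expect libpython symbols in the global namespace still load cleanly is a separate concern.
/* main_dl.c - hypothetical variant of main.c that uses dlopen instead of linking
 * against librun_in_py2.so and librun_in_py3.so at build time. */
#include <dlfcn.h>
#include <stdio.h>

int main(int argc, char* argv[])
{
    /* Pick the interpreter at runtime, as in the original main.c */
    const char* lib  = (argc == 2) ? "./librun_in_py2.so" : "./librun_in_py3.so";
    const char* func = (argc == 2) ? "run_in_py2" : "run_in_py3";
    const char* code = (argc == 2) ? "print 'hello from 2'" : "print('hello from 3')";

    /* RTLD_LOCAL keeps this library's (and its libpython's) symbols out of the
     * global namespace, so the other variant cannot be resolved against it. */
    void* handle = dlopen(lib, RTLD_NOW | RTLD_LOCAL);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    void (*run)(const char*) = (void (*)(const char*))dlsym(handle, func);
    if (!run) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    run(code);
    dlclose(handle);
    return 0;
}
Built with something like gcc main_dl.c -ldl -o main, the helper libraries stop being link-time dependencies altogether.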
I have written a Python extension module in C and saved the file as foo.c.
Code:
#include <Python.h>
#include <stdio.h>

static PyObject *foo_add(PyObject *self, PyObject *args)
{
    int a;
    int b;
    if (!PyArg_ParseTuple(args, "ii", &a, &b))
    {
        return NULL;
    }
    return Py_BuildValue("i", a + b);
}

static PyMethodDef foo_methods[] = {
    { "add", (PyCFunction)foo_add, METH_VARARGS, NULL },
    { NULL, NULL, 0, NULL }
};

PyMODINIT_FUNC initfoo()
{
    Py_InitModule3("foo", foo_methods, "My first extension module.");
}
When I try to compile using the command below, I get a compilation error.
Command: gcc -shared -I/usr/include/python2.7 foo.c -o foo.so
Error:
gcc -shared -I/usr/include/python2.7 foo.c -o foo.so
/usr/bin/ld: /tmp/ccd6XiZp.o: relocation R_X86_64_32 against `.rodata' can not be used when making a shared object; recompile with -fPIC
/tmp/ccd6XiZp.o: error adding symbols: Bad value
collect2: error: ld returned 1 exit status
If I give the compilation command with the -c option, it compiles successfully and creates the object file foo.so (this is the executable file).
I have to create an object file (without using the -c option in the compilation command) and import it in the Python shell to verify it.
Please let me know what I am doing wrong here.
In your compilation flags you should include -fPIC to compile as position independent code. This is required for dynamically linked libraries.
e.g.
gcc -c -fPIC foo.c -o foo.o
gcc -shared foo.o -o foo.so
or in a single step
gcc -shared -fPIC foo.c -o foo.so
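Once foo.so is in the current directory, a quick sanity check from the Python 2 shell (add is the method registered in foo_methods above):
python2.7 -c "import foo; print(foo.add(2, 3))"
which should print 5 if the module built correctly.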
I want to write a Python extension in C. I work on a Mac, and I took the code from here:
#include <Python.h>

static PyObject* say_hello(PyObject* self, PyObject* args)
{
    const char* name;
    if (!PyArg_ParseTuple(args, "s", &name))
        return NULL;
    printf("Hello %s!\n", name);
    Py_RETURN_NONE;
}

static PyMethodDef HelloMethods[] =
{
    {"say_hello", say_hello, METH_VARARGS, "Greet somebody."},
    {NULL, NULL, 0, NULL}
};

PyMODINIT_FUNC
inithello(void)
{
    (void) Py_InitModule("hello", HelloMethods);
}
I compile it:
gcc -c -o py_module.o py_module.c -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/
gcc -o py_module py_module.o -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/ -lm
But I get this error:
Undefined symbols for architecture x86_64:
"_PyArg_ParseTuple", referenced from:
_say_hello in py_module.o
"_Py_InitModule4_64", referenced from:
_inithello in py_module.o
"__Py_NoneStruct", referenced from:
_say_hello in py_module.o
"_main", referenced from:
start in crt1.10.6.o
ld: symbol(s) not found for architecture x86_64
collect2: ld returned 1 exit status
make: *** [py_module] Error 1
How come Python doesn't support the x86_64 architecture?
Two things:
You need to link your extension as a shared object (you're attempting to link an executable, which is why the linker is looking for main());
You need to link against the Python library (e.g. -lpython2.7).
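Putting the two points together, one possible pair of commands (a sketch only: the include and lib paths are the ones from this question's framework install, and the output has to be named hello.so because Py_InitModule registers the module as "hello"):
gcc -c -fPIC py_module.c -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7
gcc -shared py_module.o -L/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/config -lpython2.7 -o hello.so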
Query the Python environment paths with these commands:
$ python-config --includes
-I/usr/include/python2.6 -I/usr/include/python2.6
$ python-config --ldflags
-lpthread -ldl -lutil -lm -lpython2.6
Generate the .o file:
$ g++ -fPIC -c -I/usr/include/python2.6 -I/usr/include/python2.6 xx.cpp
Generate the .so file:
g++ -shared xx.o -o xx.so
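Applied to the file from this question, the same recipe would look roughly like this (assuming the python-config on PATH belongs to the same 2.7 installation whose headers are used, and naming the output hello.so to match the module name):
gcc -fPIC -c $(python-config --includes) py_module.c
gcc -shared py_module.o $(python-config --ldflags) -o hello.so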
Thanks to @NPE, @glglgl and anatoly, here is my Makefile:
DIR=/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/
CC=gcc
CFLAGS=-I$(DIR)
ODIR=.
LIBS_DIR=/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/config/
LIBS=-lpython2.7

_DEPS =
DEPS = $(patsubst %,$(IDIR)/%,$(_DEPS))

_OBJ = py_module.o
OBJ = $(patsubst %,$(ODIR)/%,$(_OBJ))

$(ODIR)/%.o: %.c $(DEPS)
	$(CC) -c -o $@ $< $(CFLAGS)

py_module: $(OBJ)
	gcc -shared $^ $(CFLAGS) -L$(LIBS_DIR) $(LIBS) -o $@

.PHONY: clean

clean:
	rm -f $(ODIR)/*.o *~ core $(INCDIR)/*~
The Makefile template was taken from here.
In order to find the paths, one may use python-config --ldflags and python-config --includes.
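For instance, the hard-coded paths in the Makefile above could be replaced with shell calls (a sketch; it assumes the python-config found on PATH belongs to the same 2.7 framework the Makefile points at):
CFLAGS = $(shell python-config --includes)
LIBS = $(shell python-config --ldflags)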