I'm trying to call into an external C module from Python. The same code already works on Linux and Windows, but not on Solaris.
Can somebody take a look?
Original example is taken from http://csl.name/C-functions-from-Python/
C code (myModule.c)
#include <Python.h>

static PyObject* py_myFunction(PyObject* self, PyObject* args)
{
    char *s = "Hello from C!";
    return Py_BuildValue("s", s);
}

static PyObject* py_myOtherFunction(PyObject* self, PyObject* args)
{
    double x, y;
    if (!PyArg_ParseTuple(args, "dd", &x, &y))
        return NULL;
    return Py_BuildValue("d", x*y);
}

static PyMethodDef myModule_methods[] = {
    {"myFunction", py_myFunction, METH_VARARGS},
    {"myOtherFunction", py_myOtherFunction, METH_VARARGS},
    {NULL, NULL}
};

void initmyModule(void)
{
    (void) Py_InitModule("myModule", myModule_methods);
}
Python calling it
from myModule import *
print "Result from myFunction:", myFunction()
print "Result from myOtherFunction(4.0, 5.0):", myOtherFunction(4.0, 5.0)
Compiling on Linux (tested on RHEL)
gcc -fPIC -shared -I/usr/include/python2.6 -lpython2.6 -o myModule.so myModule.c
Compiling on Windows XP under MinGW
gcc -Ic:/Python27/include -Lc:/Python27/libs myModule.c -lpython27 -shared -o myModule.pyd
But I can't get it to work on Solaris. I can compile it with
gcc -fPIC -I/usr/include/python2.4 -L/usr/lib/python2.4 myModule.c -lpython2.4 -shared -o myModule.so
but it fails with an error
from myModule import *
ImportError: ld.so.1: python2.4: fatal: libgcc_s.so.1: open failed: No such file or directory
Can someone help me figure it out?
GCC is 3.4.6
Python is 2.4.6
Solaris 10 on an x86 machine
This should hook you up. The runtime linker can't find the GCC runtime library, so point /usr/lib at the copy shipped with your GCC installation:
pfexec rm /usr/lib/libgcc_s.so.1
pfexec ln -s /opt/ts/gcc/3.4/lib/libgcc_s.so.1 /usr/lib/libgcc_s.so.1
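If you'd rather not replace anything under /usr/lib, an alternative (untested here, and assuming the GCC runtime really is under /opt/ts/gcc/3.4/lib as above) is to point the runtime linker at it before starting Python:
LD_LIBRARY_PATH=/opt/ts/gcc/3.4/lib:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH
python2.4 -c "from myModule import *; print myFunction()"
Another option is to build the extension with the GCC runtime linked statically, so python2.4 never has to find libgcc_s.so.1 at all:
gcc -fPIC -I/usr/include/python2.4 myModule.c -lpython2.4 -shared -static-libgcc -o myModule.so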
Related
Is there a good way to embed both a Python 2 and a Python 3 interpreter into a C program and then run one or the other, with the decision made at runtime?
Here's an example attempt:
Makefile:
all: main

main: main.c librun_in_py2.so librun_in_py3.so
	g++ main.c -lrun_in_py2 -lrun_in_py3 -L. -Wl,-rpath -Wl,'$$ORIGIN' -o main

librun_in_py2.so: run_in_py2.c
	g++ $$(python2.7-config --cflags --ldflags) -shared -fPIC $< -o $@

librun_in_py3.so: run_in_py3.c
	g++ $$(python3.4-config --cflags --ldflags) -shared -fPIC $< -o $@

clean:
	-rm main *.so
main.c
void run_in_py2(const char* const str);
void run_in_py3(const char* const str);

static const char str2[] = "from time import time,ctime\n"
                           "import sys\n"
                           "print sys.version_info\n"
                           "print 'Today is',ctime(time())\n";

static const char str3[] = "from time import time,ctime\n"
                           "import sys\n"
                           "print(sys.version_info)\n"
                           "print('Today is', ctime(time()))\n";

int main(int argc, char* [])
{
    if (argc == 2)
        run_in_py2(str2);
    else
        run_in_py3(str3);
}
run_in_py2.c
#include <Python.h>

void run_in_py2(const char* const str)
{
    Py_Initialize();
    PyRun_SimpleString(str);
    Py_Finalize();
}
run_in_py3.c:
#include <Python.h>

void run_in_py3(const char* const str)
{
    Py_Initialize();
    PyRun_SimpleString(str);
    Py_Finalize();
}
Because of the order of library linking the result is always the same:
$ ./main
sys.version_info(major=2, minor=7, micro=9, releaselevel='final', serial=0)
('Today is', 'Thu Jun 4 10:59:29 2015')
Since the names are the same, it looks like the linker resolves everything against the Python 2 interpreter. Is there some way to isolate the names, or to encourage the linker to be lazier in resolving them? Ideally the linker would confirm up front that all names can be resolved, but put off the actual symbol resolution until the appropriate library can be chosen.
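One idea I haven't verified: skip link-time binding entirely and dlopen() each wrapper library with RTLD_LOCAL, so each one resolves against its own libpython instead of sharing a global symbol namespace. A rough sketch (link with -ldl; the library paths and snippet are placeholders):

/* sketch: choose the wrapper library at runtime instead of at link time */
#include <dlfcn.h>
#include <stdio.h>

int main(int argc, char* argv[])
{
    (void)argv;
    const char* lib  = (argc == 2) ? "./librun_in_py2.so" : "./librun_in_py3.so";
    const char* name = (argc == 2) ? "run_in_py2" : "run_in_py3";

    /* RTLD_LOCAL keeps the wrapper's symbols (and those of its libpython
       dependency) out of the global namespace; RTLD_NOW still checks that
       everything resolves when the library is loaded. */
    void* handle = dlopen(lib, RTLD_NOW | RTLD_LOCAL);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    void (*run)(const char*) = (void (*)(const char*))dlsym(handle, name);
    if (!run) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    run("import sys\nprint(sys.version_info)\n");
    dlclose(handle);
    return 0;
}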
A highly related question asks about running two independent embedded interpreters at the same time:
Multiple independent embedded Python Interpreters on multiple operating system threads invoked from C/C++ program
The suggestion there is to use two separate processes, but I suspect there's a simpler answer to this question.
The reason behind the question is that I thought I understood, from a conversation a while back, that there was a program that did this. Now I'm just curious how it would be done.
I'm trying to embed some Python code in a C program; it's the first time I've done anything like this.
Here is the simple code of my first attempt, copied from a guide on the internet:
#include <Python.h>

void exec_pycode(const char* code)
{
    Py_Initialize();
    PyRun_SimpleString(code);
    Py_Finalize();
}

int main(int argc, char **argv)
{
    exec_pycode(argv[1]);
    return 0;
}
So I've installed the python3.4-dev package.
Then, to get the info for the linker, I typed:
pkg-config --cflags --libs python3
Then I tried to compile my code:
gcc -std=c99 -o main -I /usr/local/include/python3.4m -L /usr/local/lib -lpython3.4m main.c
(based on the output of the command above)
but this is the result:
/tmp/ccJFmdcr.o: In function `exec_pycode':
main.c:(.text+0xd): undefined reference to `Py_Initialize'
main.c:(.text+0x1e): undefined reference to `PyRun_SimpleStringFlags'
main.c:(.text+0x23): undefined reference to `Py_Finalize'
collect2: error: ld returned 1 exit status
It would seem there is a problem in the linking phase, but I can't understand where the problem is, given that I've passed the linker the exact paths of the header and of the library. How can I solve this problem?
Try reordering your compilation command so that all linking options come after your C source files; the linker resolves symbols left to right, so a library listed before the object file that needs it may be skipped:
gcc -std=c99 -o main -I /usr/local/include/python3.4m main.c \
-L /usr/local/lib -lpython3.4m
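Once it links, a quick sanity check is to pass a snippet as the first argument, since exec_pycode hands argv[1] straight to PyRun_SimpleString:
./main 'print("hello from embedded Python")'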
I have written a Python extension module in C and saved the file as foo.c.
Code:
#include <Python.h>
#include <stdio.h>

static PyObject *foo_add(PyObject *self, PyObject *args)
{
    int a;
    int b;

    if (!PyArg_ParseTuple(args, "ii", &a, &b))
    {
        return NULL;
    }
    return Py_BuildValue("i", a + b);
}

static PyMethodDef foo_methods[] = {
    { "add", (PyCFunction)foo_add, METH_VARARGS, NULL },
    { NULL, NULL, 0, NULL }
};

PyMODINIT_FUNC initfoo()
{
    Py_InitModule3("foo", foo_methods, "My first extension module.");
}
When I try to compile using the command below, I get a compilation error.
Command: gcc -shared -I/usr/include/python2.7 foo.c -o foo.so
Error:
gcc -shared -I/usr/include/python2.7 foo.c -o foo.so
/usr/bin/ld: /tmp/ccd6XiZp.o: relocation R_X86_64_32 against `.rodata' can not be used when making a shared object; recompile with -fPIC
/tmp/ccd6XiZp.o: error adding symbols: Bad value
collect2: error: ld returned 1 exit status
If I add the "-c" option, the compilation succeeds and produces foo.so, but that is only an object file, not a loadable module.
I need to build a proper shared object (without the -c option) and import it in the Python shell to verify it.
Please let me know what I am doing wrong here.
In your compilation flags you should include -fPIC to compile as position-independent code; this is required for dynamically linked libraries.
e.g.
gcc -c -fPIC foo.c -o foo.o
gcc -shared foo.o -o foo.so
or in a single step
gcc -shared -fPIC foo.c -o foo.so
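After that, a quick check from a Python 2 shell run in the directory containing foo.so should look roughly like this:
>>> import foo
>>> foo.add(2, 3)
5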
I am developing a Python system with some core DLLs accessed via ctypes. I have reduced the problem to this condition: execute a module that loads (no need to call) two DLLs, one of which calls printf; this error occurs at exit.
This application has requested the Runtime to terminate it in an
unusual way. Please contact the application's support team for more
information.
My environment:
- Windows 7, SP1
- Python 2.7.8
- MinGW v 3.20
This test case is adapted from a tutorial on writing DLLs with MinGW:
/* add_core.c */
__declspec(dllexport) int sum(int a, int b) {
    return a + b;
}

/* sub_core.c */
#include <stdio.h>

__declspec(dllexport) int sum(int a, int b) {
    printf("Hello from sub_core.c");
    return a - b;
}
prog.py
import ctypes
add_core_dll = ctypes.cdll.LoadLibrary('add_core.dll')
sub_core_dll = ctypes.cdll.LoadLibrary('sub_core.dll')
> make
gcc -Wall -O3 -g -ansi -c add_core.c -o add_core.o
gcc -g -L. -ansi -shared add_core.o -o add_core.dll
gcc -Wall -O3 -g -ansi -c sub_core.c -o sub_core.o
gcc -g -L. -ansi -shared sub_core.o -o sub_core.dll
>python prog.py
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
It also pops up a message dialog to the same effect: "python.exe has stopped working ...".
Note that the programs execute as expected and produce normal output. This error at termination is just a big nuisance I'd like to be rid of.
The same happens for:
Windows 7 Enterprise, SP1
Python 2.7.11
mingw32-g++.exe 5.3.0
I want to use a C program to invoke a Python program.
OS: Ubuntu 12.10 x64
Python 2.7.3
C code:
#include <stdio.h>
#include <stdlib.h>
#include <python2.7/Python.h>

int main(int argc, char** argv)
{
    printf("Hello world!\n");
    Py_Initialize();
    Py_SetProgramName("c_python");
    PyRun_SimpleString("print \"Hello world,Python!\"\n");
    Py_Finalize();
    exit(0);
}
Compile command:
gcc -I/usr/include/python2.7 -L/usr/lib/python2.7 -Wall -fPIC c_python.c -o c_pyton
/tmp/cciuHgrf.o: In function `main':
c_python.c:(.text+0x1c): undefined reference to `Py_Initialize'
c_python.c:(.text+0x28): undefined reference to `Py_SetProgramName'
c_python.c:(.text+0x3e): undefined reference to `Py_Finalize'
collect2: error: ld returned 1 exit status
You need to link the Python interpreter into your executable: add -lpython2.7, and keep it after your source file on the command line.
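For example, something along these lines should link (the include path is taken from the question; adjust it for your system):
gcc -I/usr/include/python2.7 -Wall -fPIC c_python.c -o c_python -lpython2.7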