I have a C++ program that is already fairly far along in doing its own job, and now we'd like to add an additional piece of functionality to it. We thought that implementing this functionality in Python, and calling that Python code with the required inputs from the C++ program when needed, would be the best way to go, as it keeps the two separated and lets us use the Python script from elsewhere too.
As a first step I decided to write a test program to see how this would work, and it seems that was a good idea, because I can't get it to work.
How do I run a separate Python script from C++?
I have tried following this guide, and while it seems good, it doesn't give any information on what compiler options I should build this with.
I have two files, cpp.cpp and python.py
This is my cpp.cpp file:
#include <stdio.h>
#include <stdlib.h>
#include <iostream>
#include <ncurses.h>
#include <Python.h>
using namespace std;
int main() {
    std::cout << "C++ program started!\n";
    char filename[] = "python.py";
    FILE* fp;
    Py_Initialize();
    fp = _Py_fopen(filename, "r");
    PyRun_SimpleFile(fp, filename);
    Py_Finalize();
    std::cout << "C++ program is ending!\n";
    return 0;
}
and my Python file is just two print lines:
print('External Python program running...')
print('Hello World from Python program')
I then try to compile this, giving it all the includes it seems to want, and then execute the output file:
g++ -I . -I /home/ahomm/python3.6/Include -I /home/ahomm/python3.6/release cpp.cpp && ./a.out
This is the output I get:
/tmp/cccQsh1p.o: In function `main':
cpp.cpp:(.text+0x3f): undefined reference to `Py_Initialize'
cpp.cpp:(.text+0x52): undefined reference to `_Py_fopen'
cpp.cpp:(.text+0x70): undefined reference to `PyRun_SimpleFileExFlags'
cpp.cpp:(.text+0x75): undefined reference to `Py_Finalize'
collect2: error: ld returned 1 exit status
What am I missing? Is something just a little off, or is this completely wrong?
The .cpp and .py files are located in the same directory.
And how do I then read the output of the Python script in C++? I haven't even gotten to that yet...
You have to link your code with libpython3.x.a / python3.x.lib (x being the version of Python you use). Whether you link the *.a or the *.lib file depends on your OS. The files come with the Python distribution.
Here is a CMake setup that works for me:
cmake_minimum_required(VERSION 2.8.9)
project (embpy)
add_executable(embpy embpy.cpp)
target_include_directories(embpy PRIVATE /path-to-python/Python38/include/python3.8)
target_link_libraries(embpy /path-to-python/Python38/lib/libpython3.8.a)
The embpy.cpp is the same as yours.
Figured it out myself in the end; the problem was incomplete compiler arguments.
This is what I got it to work with:
g++ -fPIC $(python3.6-config --cflags) cpp.cpp $(python3.6-config --ldflags)
The key missing parts were $(python3.6-config --cflags) before and $(python3.6-config --ldflags) after the file to be compiled. The former gives g++ the compile options and the latter gives the flags for linking.
Found the solution in the Python docs, part 1.6.
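As for the other question, reading the Python script's output back in C++: one option is to run the file in the __main__ module and then pull variables out of it afterwards. Below is a minimal sketch of that idea (not from the guide); it assumes the script assigns a variable named result, a name made up for illustration, and it builds with the same python3.6-config flags as above.
// read_result.cpp -- sketch: run python.py, then read a variable the script is assumed to set
#include <Python.h>
#include <iostream>
#include <cstdio>
int main() {
    Py_Initialize();
    PyObject* main_module = PyImport_AddModule("__main__");         // borrowed reference
    PyObject* main_dict = PyModule_GetDict(main_module);            // borrowed reference
    FILE* fp = std::fopen("python.py", "r");                        // error handling omitted for brevity
    PyObject* run_result = PyRun_File(fp, "python.py", Py_file_input, main_dict, main_dict);
    std::fclose(fp);
    Py_XDECREF(run_result);                                         // NULL here means the script raised
    // Assumes python.py did something like: result = 42
    PyObject* result = PyDict_GetItemString(main_dict, "result");   // borrowed, NULL if missing
    if (result != NULL)
        std::cout << "Python said: " << PyLong_AsLong(result) << "\n";
    Py_Finalize();
    return 0;
}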
Related
I want to know how I can create a Python script that runs C++ code.
I did find some discussion of the subprocess module, but it's used to run commands.
I did find some discussion of Boost and SWIG, but as a beginner I didn't understand how to use them.
Testing subprocess:
import subprocess
subprocess.call(["g++", "main.cpp"],shell = True)
tmp=subprocess.call("main.cpp",shell = True)
print("printing result")
print(tmp)
Can anyone help me, please?
A simple example would be to create a .cpp file:
// cpy.cpp
#include <iostream>
int main()
{
    std::cout << "Hello World! from C++" << std::endl;
    return 0;
}
And a Python script:
# cpy.py
import subprocess
cmd = "cpy.cpp"
subprocess.call(["g++", cmd])
subprocess.call("./a.out")
Then in the terminal, run the Python script:
~ python cpy.py
~ Hello World! from C++
EDIT:
If you want control of calling C++ functions from Python, you will need to create bindings to extend Python with C++. This can be done in a number of ways: the Python docs have a thorough walkthrough of the raw implementation for simple cases, and there are also libraries such as pybind11 and Boost.Python that can do this for you.
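To give a flavour of the raw approach, a minimal hand-written extension module for Python 3 might look roughly like the sketch below (my own illustration, not taken from the docs; the Boost.Python example that follows targets Python 2.7, where initialization uses Py_InitModule instead):
// hello_raw.c -- sketch of a hand-written extension module (Python 3)
#include <Python.h>
#include <stdio.h>
static PyObject* hello_print_hello(PyObject* self, PyObject* args)
{
    printf("Hello, World! from C++\n");
    Py_RETURN_NONE;                        // every callable must return a Python object
}
static PyMethodDef HelloMethods[] = {
    {"print_hello", hello_print_hello, METH_NOARGS, "Print a greeting."},
    {NULL, NULL, 0, NULL}                  // sentinel
};
static struct PyModuleDef hellomodule = {
    PyModuleDef_HEAD_INIT, "hello", NULL, -1, HelloMethods
};
PyMODINIT_FUNC PyInit_hello(void)          // called when Python imports the module
{
    return PyModule_Create(&hellomodule);
}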
An example with boost.Python:
// boost-example.cpp
#include <iostream>
#include <boost/python.hpp>

using namespace boost::python;

void printHello()
{
    std::cout << "Hello, World! from C++" << std::endl;
}

BOOST_PYTHON_MODULE(hello)
{
    def("print_hello", printHello);
}
You will need to compile it into a shared object file (.so), making sure to include the appropriate Python headers and link the Python and Boost.Python libraries. An example might look like:
g++ boost-example.cpp -fPIC -shared -I/usr/include/python2.7 -L/usr/lib/python2.7/config-x86_64-linux-gnu/ -lpython2.7 -lboost_python -o hello.so
And in the same directory that you created the hello.so file:
python
>>> import hello
>>> hello.print_hello()
Hello, World! from C++
Boost.Python can be used to do some pretty magical things, including exposing classes, wrapping overloaded functions, exposing global and class variables for reading and writing, and building hybrid Python/C++ inheritance hierarchies, all while keeping the performance of the underlying C++.
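To give a flavour of the class support mentioned above, exposing a class might look roughly like the sketch below (Widget and its members are invented for illustration):
// widgets.cpp -- sketch of exposing a C++ class with Boost.Python
#include <string>
#include <boost/python.hpp>
struct Widget {
    Widget(const std::string& n) : name(n), count(0) {}
    void bump() { ++count; }                          // mutate some internal state
    std::string name;
    int count;
};
BOOST_PYTHON_MODULE(widgets)
{
    using namespace boost::python;
    class_<Widget>("Widget", init<std::string>())     // constructor taking a string
        .def("bump", &Widget::bump)                   // expose a member function
        .def_readwrite("count", &Widget::count)       // data member, read/write from Python
        .def_readonly("name", &Widget::name);         // data member, read-only from Python
}
Built into widgets.so the same way as above, Python code can then do w = widgets.Widget('w1'); w.bump(); print(w.count).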
I recommend going through these docs and getting to know the API if you are looking to go down this route.
As an alternative to compiling the C++ code into a separate program and executing that, you can also use cppyy (http://cppyy.org) to run the C++ code directly, through a JIT, within the same program.
Example:
import cppyy
cppyy.cppdef('''
void hello() {
    std::cout << "Hello, World!" << std::endl;
}''')
cppyy.gbl.hello() # calls the C++ function 'hello' defined above
You can use Python's os module to run OS commands.
import os
myCmd = 'ls -la'
os.system(myCmd)
Your command can be 'g++ main.cpp second.cpp -o run', and then you can use the same mechanism to run './run'.
Make sure you have the right permissions.
I'm trying to embed Cython code into C, following chapter 8 of the O'Reilly Cython book. I found this paragraph in Cython's documentation but still don't know what I should do:
If the C code wanting to use these functions is part of more than one shared library or executable, then import_modulename() function needs to be called in each of the shared libraries which use these functions. If you crash with a segmentation fault (SIGSEGV on linux) when calling into one of these api calls, this is likely an indication that the shared library which contains the api call which is generating the segmentation fault does not call the import_modulename() function before the api call which crashes.
I'm running Python 3.4, Cython 0.23 and GCC 5 on OS X. The source files are transcendentals.pyx and main.c:
main.c
#include "transcendentals_api.h"
#include <math.h>
#include <stdio.h>
int main(int argc, char **argv)
{
    Py_SetPythonHome(L"/Users/spacegoing/anaconda");
    Py_Initialize();
    import_transcendentals();
    printf("pi**e: %f\n", pow(get_pi(), get_e()));
    Py_Finalize();
    return 0;
}
transcendentals.pyx
cdef api double get_pi():
    return 3.1415926

cdef api double get_e():
    print("calling get_e()")
    return 2.718281828
I'm compiling those files using setup.py and Makefile:
setup.py:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize
setup(
    ext_modules=cythonize([
        Extension("transcendentals", ["transcendentals.pyx"])
    ])
)
Makefile
python-config=/Users/spacegoing/anaconda/bin/python3-config
ldflags:=$(shell $(python-config) --ldflags)
cflags:=$(shell $(python-config) --cflags)
a.out: main.c transcendentals.so
	gcc-5 $(cflags) $(ldflags) transcendentals.c main.c

transcendentals.so: setup.py transcendentals.pyx
	python setup.py build_ext --inplace
	cython transcendentals.pyx

clean:
	rm -r a.out a.out.dSYM build transcendentals.[ch] transcendentals.so transcendentals_api.h
However, I get the error Segmentation fault: 11. Any ideas that could help with this? Thanks!
In that Makefile there is
transcendentals.so: setup.py transcendentals.pyx
	python setup.py build_ext --inplace
Unless python refers to /Users/spacegoing/anaconda/bin/python3, it should be replaced, since the module may otherwise be compiled for the wrong Python version and thus cannot be loaded.
In main.c there is a call to import_transcendentals() that does not check the return value, i.e. whether the import fails or succeeds. In case of failure, get_pi() and get_e() point to invalid memory locations, and trying to call them causes a segmentation fault.
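If you keep the api-header approach, a guard along these lines (a sketch) makes the failure visible instead of crashing; the generated import function returns 0 on success and a negative value on failure, leaving a Python exception set:
if (import_transcendentals() != 0) {
    PyErr_Print();       /* shows why the import failed, e.g. module not on sys.path */
    Py_Finalize();
    return 1;
}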
Also, the module has to be located somewhere it can be found. It seems that when embedding, the current directory is not searched for Python modules. The PYTHONPATH environment variable could be changed to include the directory where transcendentals.so is located.
The following is an alternative way of embedding the code into the C program; it sidesteps the import issues since the module code is linked into the executable.
Essentially, a call to PyInit_transcendentals() is missing.
The file transcendentals.h will be generated when the Cython functions are declared public, i.e.
cdef public api double get_pi():
...
cdef public api double get_e():
Your main.c should have the include directives
#include <Python.h>
#include "transcendentals.h"
and then in main
Py_Initialize();
PyInit_transcendentals();
There should be no #include "transcendentals_api.h" and no import_transcendentals()
The first reason is that according to the documentation
However, note that you should include either modulename.h or modulename_api.h in a given C file, not both, otherwise you may get conflicting dual definitions.
The second reason is that, since transcendentals.c is linked into the program in
gcc $(cflags) $(ldflags) transcendentals.c main.c
there is no reason to import the transcendentals module. The module has to be initialized, though; PyInit_transcendentals() does that for Python 3.
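Putting those pieces together, a minimal main.c along these lines should work (a sketch that reuses the pow() call from the question, with a basic error check added):
#include <Python.h>
#include "transcendentals.h"
#include <math.h>
#include <stdio.h>
int main(void)
{
    Py_Initialize();
    /* Initialize the module that was linked into the executable (Python 3). */
    if (PyInit_transcendentals() == NULL) {
        PyErr_Print();
        Py_Finalize();
        return 1;
    }
    printf("pi**e: %f\n", pow(get_pi(), get_e()));
    Py_Finalize();
    return 0;
}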
I'm trying to compile a very simple file into a Python module on Windows and I'm having a bad time.
The .i file is testfile.i:
%module testfile
%include "stl.i"
%{
int get_num() {
    return 3;
}
%}

int get_num() {
    return 3;
}
The SWIG command:
{swig path}\swig.exe -c++ -python testfile.i
This works perfectly, I now got the testfile.py file and testfile_wrap.cxx file.
Now, I understand that I need to compile this into a library (a .pyd on Windows). I tried:
{gcc path}\gcc.exe -fPIC -shared testfile_wrap.cxx -o testfile_wrap.pyd -L. -LC:\Python27\libs\ -lpython27 -IC:\python27\include
Here is the problem: I get a lot of errors like these:
C:\Users\itay\AppData\Local\Temp\ccANsNeU.o:testfile_wrap.cxx:(.text+0xc00): undefined reference to `__imp_PyExc_MemoryError'
C:\Users\itay\AppData\Local\Temp\ccANsNeU.o:testfile_wrap.cxx:(.text+0xc13): undefined reference to `__imp_PyExc_IOError'
C:\Users\itay\AppData\Local\Temp\ccANsNeU.o:testfile_wrap.cxx:(.text+0xc26): undefined reference to `__imp_PyExc_RuntimeError'
C:\Users\itay\AppData\Local\Temp\ccANsNeU.o:testfile_wrap.cxx:(.text+0xc39): undefined reference to `__imp_PyExc_IndexError'
And it continues on and on.
What am I doing wrong?
Thank you for your help
Update:
I managed to call SWIG and compile/link using Visual Studio 2013, but I get the same error. I followed tutorials and it still does not work.
The solution: my Python was 64-bit and it didn't work. I changed to 32-bit Python and now it is working (Python 2.7.10).
So I'd like to call some Python code from C via Cython. I've managed to call Cython code from C, and I can also call Python code from Cython. But when I put it all together, some things are missing.
Here is my python code (quacker.pyx):
def quack():
    print "Quack!"
Here is my cython "bridge" (caller.pyx):
from quacker import quack
cdef public void call_quack():
    quack()
And here is the c code (main.c):
#include <Python.h>
#include "caller.h"
int main() {
    Py_Initialize();
    initcaller();
    call_quack();
    Py_Finalize();
    return 0;
}
When I run this I get this exception:
Exception NameError: "name 'quack' is not defined" in 'caller.call_quack' ignored
The missing pieces I'm suspecting:
I haven't called initquacker()
I haven't included quacker.h
Cython didn't produce any quacker.h - only quacker.c
caller.c doesn't import quacker.h or call initquacker()
I'm not really sure that it's even possible to do what I'm trying to do, but it seems to me that it ought to be. I'd love to hear any input you might have.
Edit:
This is how I cythonize / compile / link / run:
$ cython *.pyx
$ cc -c *.c -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7
$ cc -L/System/Library/Frameworks/Python.framework/Versions/2.7/lib -L/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/config -lpython2.7 -ldl *.o -o main
$ ./main
If you rename quacker.pyx to quacker.py, everything is actually correct. The only problem is that your program won't search the current directory for Python modules, resulting in the output:
Exception NameError: "name 'quack' is not defined" in 'caller.call_quack' ignored
If you add the current directory to the PYTHONPATH environment variable, however, the output becomes the one you'd expect:
$ PYTHONPATH=".:$PYTHONPATH" ./main
Quack!
When running the Python shell, according to the documentation, the current directory (or the directory containing the script) is added to the sys.path variable automatically, but when creating a simple program using Py_Initialize and Py_Finalize this does not seem to happen. Since the PYTHONPATH variable is also used to populate the sys.path Python variable, the workaround above produces the correct result.
Alternatively, below the Py_Initialize line, you could add an empty string to sys.path by executing some Python code, specified as a string:
PyRun_SimpleString("import sys\nsys.path.insert(0,'')");
After recompiling, just running ./main should then work.
Edit
It's actually interesting to see what's going on if you run the code as specified in the question, so without renaming the quacker.pyx file. In that case, the initcaller() function tries to import the quacker module, but since no quacker.py or quacker.pyc exists, the module cannot be found, and the initcaller() function produces an error.
Now, this error is reported the python way, by raising an exception. But the code in the main.c file doesn't check for this. I'm no expert in this, but in my tests adding the following code below initcaller() seemed to work:
if (PyErr_Occurred())
{
    PyErr_Print();
    return -1;
}
The output of the program then becomes the following:
Traceback (most recent call last):
File "caller.pyx", line 1, in init caller (caller.c:836)
from quacker import quack
ImportError: No module named quacker
By calling the initquacker() function before initcaller(), the module name quacker already gets registered so the import call that's done inside initcaller() will detect that it's already loaded and the call will succeed.
In case anyone is wondering how this would work in Python 3, here's my solution after struggling a bit as a Cython newbie.
main.c
#include <Python.h>
#include "caller.h"
int
main()
{
    PyImport_AppendInittab("caller", PyInit_caller);
    Py_Initialize();
    PyImport_ImportModule("caller");
    call_quack();
    Py_Finalize();
    return 0;
}
caller.pyx
# cython: language_level=3
import sys
sys.path.insert(0, '')
from quacker import quack
cdef public void call_quack():
    quack()
quacker.py
def quack():
    print("Quack!")
Finally, here's the Makefile that compiles everything:
target=main
cybridge=caller
CC=gcc
CFLAGS= `python3-config --cflags`
LDFLAGS=`python3-config --ldflags`
all:
	cython $(cybridge).pyx
	$(CC) $(CFLAGS) -c *.c
	$(CC) $(LDFLAGS) *.o -o $(target)

clean:
	rm -f $(cybridge).{c,h,o} $(target).o $(target)
	rm -rf __pycache__
Maybe this is not what you want, but I got it working with the following changes:
In quacker.pyx I added:
cdef public int i
to force Cython to generate the .h file.
And then in the main:
#include <Python.h>
#include "caller.h"
#include "quacker.h"
int main() {
    Py_Initialize();
    initquacker();
    initcaller();
    call_quack();
    Py_Finalize();
    return 0;
}
I needed to do this using CMake and ended up recreating this sample. You can find the repository with a complete working example here.
You can build and run the example using either Docker on the CLI or a Visual Studio devcontainer.
I've been trying to set up my computer (install and get the correct libraries) so I can start graphics programming.
I've visited the OpenGL site and found it unhelpful.
I tried the Wikibooks' Setting Up page, but that has install info specific to Debian and Debian-like systems, and I couldn't find the corresponding packages for Fedora.
I know C and Python and would prefer to work in C if possible. I did find PyOpenGL.noarch and installed it using yum.
I looked up a couple of other sites and didn't find much, but I managed to install freeglut-devel.
I checked and found the GL headers in the /usr/include/GL folder, but when I try to build the following code (taken from the Wikibooks site itself, so I'm assuming it works):
#include <stdio.h> /* printf */
#include <GL/glut.h> /* glut graphics library */
/*
* Linux c console program
* gcc f.c -lglut
* ./a.out
* */
int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutCreateWindow("red 3D lighted cube");
    printf("GL_VERSION = %s\n", glGetString(GL_VERSION)); /* GL_VERSION = 2.1.2 NVIDIA 195.36.24 */
    return 0;
}
And when I do gcc -lglut filename.c
I get the following errors:
/usr/bin/ld: /usr/lib/gcc/i686-redhat-linux/4.6.1/../../../libglut.so: undefined reference to symbol 'glGetString'
/usr/bin/ld: note: 'glGetString' is defined in DSO /usr/lib/libGL.so.1 so try adding it to the linker command line
/usr/lib/libGL.so.1: could not read symbols: Invalid operation
collect2: ld returned 1 exit status
And I have no idea what to do.
A basic step-by-step procedure would be much appreciated, but any help is welcome.
Try adding -lGL to the command line you use to compile it (that's what the error message is telling you to do).
This question suggests that you'll need -lGLU as well.
Additionally, I would put the libraries after the source files that use them, so:
gcc filename.c -lglut -lGL -lGLU
Instead of:
gcc -lglut -lGL -lGLU filename.c
There is some more information about why you get this message on Fedora here, but the basic fix is to explicitly link the missing library.