I have a cpp file that compiles fine with g++ by using the shell:
extern "C"{
#include <quadmath.h>
}
inline char* print( const __float128& val)
{
char* buffer = new char[128];
quadmath_snprintf(buffer,128,"%+.34Qe", val);
return buffer;
}
int main(){
__float128 a = 1.0;
print(a);
return 0;
}
However, when I try to compile it via a Python script, it fails with the following error:
"undefined reference to quadmath_snprintf"
Here the code of the python script:
import commands
import string
import os
(status, output) = commands.getstatusoutput("(g++ test/*.c -O3 -lquadmath -m64)")
Any idea how to solve this? Thanks.
When you open a shell, a whole lot of stuff is silently initialized for you, and, most importantly for your issue, environment variables are set. What you are most likely missing is the definition of LIBRARY_PATH, the variable the linker uses to look for libraries matching the ones you instruct it to link with the -lNAME flags.
What the linker needs is a list of directories in which it will search for files matching libNAME.{a,so}. You can also pass these directories directly using the -L flag, but in general you should probably use a program like CMake, Make, or another build tool.
That gives you access to commands like find_package and target_link_libraries (CMake) to find and add libraries to your build targets, respectively, instead of having to maintain a Python script to compile your code.
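That said, if you do want to keep driving g++ from Python, here is a minimal sketch using the subprocess module instead of the deprecated commands module; the library directory and source path are hypothetical and must be adjusted to your system:
import os
import subprocess

# Hypothetical location of libquadmath; point it at your actual toolchain directory,
# or drop it if LIBRARY_PATH is already set in your environment.
libdir = "/usr/lib/gcc/x86_64-linux-gnu/9"

env = os.environ.copy()
env["LIBRARY_PATH"] = libdir

# Equivalent alternative: pass the directory explicitly with -L on the command line.
status = subprocess.call(
    ["g++", "test/test.cpp", "-O3", "-m64", "-lquadmath", "-L" + libdir],
    env=env,
)
print("g++ exited with", status)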
Let's say I have a simple class in hello.h
#ifndef LIB_HELLO_GREET_H_
#define LIB_HELLO_GREET_H_
class A{
public:
    int a = 0;
    int b = 0;
    int add(){
        return a+b;
    }
};
#endif
with bazel build file:
load("#rules_cc//cc:defs.bzl", "cc_binary", "cc_library")
cc_library(
    name = "hello",
    hdrs = ["hello.h"],
)

cc_binary(
    name = "hello.so",
    deps = [
        ":hello",
    ],
    linkshared = True,
    linkstatic = False,
)
After I run bazel build hello.so, a shared object file is generated in bazel-bin/main and in bazel-bin/main/hello.so.runfiles/__main__/main/hello.so. Using those files, I want to call class A from a Python script. Ideally, I'd like to use cppyy or something similar.
I've tried with simple python scripts
import cppyy
cppyy.load_reflection_info('hello')
print(dir(cppyy.gbl))
or
import cppyy
cppyy.load_library('hello')
print(dir(cppyy.gbl))
with both .so files, but cppyy can't seem to detect class A: it is never inside cppyy.gbl.
I'm wondering what the best way to solve this problem is; any help is appreciated!
With cppyy, you also need to give it the header file, that is:
cppyy.include("hello.h")
and optionally use cppyy.add_include_path() to add the directory where hello.h resides, so that it can be found.
An alternative is to use so-called "dictionaries." These package (customizable) directory locations, the names of the necessary header files, and (through linking) the shared library with the implementation into a single new shared library (called a "dictionary" for historical reasons). There can be a so-called "rootmap" file on the side that allows a class loader to find A automatically on first use. See the example in the documentation.
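Putting the two pieces together, a minimal sketch for the Bazel layout above (the exact directories are assumptions and need to be adapted to your workspace) might look like:
import cppyy

# Assumed layout: hello.h and the built hello.so both end up under ./main
# relative to where this script runs.
cppyy.add_include_path('main')          # directory containing hello.h
cppyy.include('hello.h')                # make the declaration of A known
cppyy.load_library('main/hello.so')     # load the compiled implementation

a = cppyy.gbl.A()
a.a, a.b = 2, 3
print(a.add())                          # expected to print 5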
Dear WAF build system experts,
Let's suppose that you use the WAF build system to build a library fooLib and a program fooProg. You then want to test fooProg with a Python script, fooProgTest, that checks the output of fooProg.
Here is a minimal example for fooLib and fooProg:
$ cat fooLib/fooLib.cpp
int foo()
{
    return 42;
}
$ cat fooProg/fooProg.cpp
#include <iostream>
extern int foo();
int main()
{
    std::cout << foo() << std::endl;
    return 0;
}
In this example, my goal is to have a Python script that checks that fooProg outputs 42.
Here is my not-so-clean solution:
import os
from waflib.Tools import waf_unit_test

def options(opt):
    opt.load("compiler_cxx waf_unit_test python")

def configure(cnf):
    cnf.load("compiler_cxx waf_unit_test python")

def build(bld):
    bld.add_post_fun(waf_unit_test.summary)
    bld.options.clear_failed_tests = True
    bld(features="cxx cxxshlib",
        target="fooLib",
        source="fooLib/fooLib.cpp")
    bld(features="cxx cxxprogram",
        target="fooProg/fooProg",
        source="fooProg/fooProg.cpp",
        use="fooLib")
    testEnv = os.environ.copy()
    testEnv["FOO_EXE"] = bld.path.find_or_declare("fooProg/fooProg").abspath()
    bld(features="test_scripts",
        test_scripts_source="fooProgTest/fooProgTest.py",
        test_scripts_template="${PYTHON} ${SRC[0].abspath()}",
        test_scripts_paths={
            "LD_LIBRARY_PATH": bld.bldnode.abspath()
        },
        test_scripts_env=testEnv)
$ cat fooProgTest/fooProgTest.py
#!/usr/bin/env python
import os
import subprocess
assert subprocess.check_output("{}".format(
os.environ["FOO_EXE"])).startswith("42")
My questions are below:
Does anyone of you know how to avoid setting LD_LIBRARY_PATH manually?
How to avoid setting the path of fooProg via the environment variable "FOO_EXE"?
Thank you very much!
Does anyone of you know how to avoid setting LD_LIBRARY_PATH manually?
You can specify the runtime search path for your executable. Assuming that the file fooLib.so is in the same directory as fooProg, the following change to your wscript should suffice:
bld(features= "cxx cxxprogram",
target= "fooProg/fooProg",
source= "fooProg/fooProg.cpp",
use= "fooLib",
rpath= "$ORIGIN")
This makes the dynamic loader also consider the directory in which the executable is stored when searching for shared objects.
How to avoid setting the path of fooProg via the environment variable "FOO_EXE"?
With subprocess.check_output you can pass multiple arguments, e.g.:
subprocess.check_output([
    "your_executable_to_launch",
    "arg1",
    "arg2"
])
In your test script you would have to read the arguments either using sys.argv or argparse.
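For example, a sketch of the test script that takes the program to run as its first command-line argument instead of reading FOO_EXE (note that check_output returns bytes on Python 3):
#!/usr/bin/env python
import subprocess
import sys

# The program under test is passed as the first command-line argument.
output = subprocess.check_output([sys.argv[1]])
assert output.strip().startswith(b"42")
On the waf side, the path produced by bld.path.find_or_declare("fooProg/fooProg").abspath() could then be appended to test_scripts_template instead of being exported through testEnv.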
Extra:
Launching the interpreter just to launch your application seems a bit hacky. Instead, define a custom task (implement waflib.Task.Task) that runs subprocess.check_output1.
1 AFAIK waf gives you a convenient method to launch processes, although I cannot remember its name.
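For illustration, a rough and untested sketch of such a custom task, assuming waf's standard Task API:
from waflib import Task

class check_foo_output(Task.Task):
    """Runs the compiled program and fails the build if it does not print 42."""

    def run(self):
        import subprocess
        out = subprocess.check_output([self.inputs[0].abspath()])
        # A non-zero return value marks the task (and thus the test) as failed.
        return 0 if out.strip().startswith(b"42") else 1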
So I have a Python program that finds .txt file directories and then passes those directories as a list (I believe) to my C++ program. The problem I am having is that I am not sure how to pass the list to C++ properly. I have used:
subprocess.call(["path for C++ executable"] + file_list)
where file_list is the [] of txt file directories.
My arguments that my C++ code accepts are:
int main (int argc, string argv[])
Is this correct, or should I be using a vector? When I use this as my argument and try to print out the list, I get the directory of my executable, the list, then smiley faces and symbols, and then the program crashes.
Any suggestions? The main thing I am trying to find out is the proper syntax for using subprocess.call. Any help would be appreciated! Thanks!
Another option (not a direct answer) is to use Cython. Here is a simple, complete example:
Suppose you have the following files:
cython_file.cpp
python_file.py
setup.py
sum_my_vector.cpp
sum_my_vector.h
setup.py
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

ext_modules = [Extension(
    name="cython_file",
    sources=["cython_file.pyx", "sum_my_vector.cpp"],
    extra_compile_args=["-std=c++11"],
    language="c++",
)]

setup(
    name = 'cython_file',
    cmdclass = {'build_ext': build_ext},
    ext_modules = ext_modules,
)
cython_file.pyx
from libcpp.vector cimport vector
cdef extern from "sum_my_vector.h":
    int sum_my_vector(vector[int] my_vector)

def sum_my_vector_cpp(my_list):
    cdef vector[int] my_vector = my_list
    return sum_my_vector(my_vector)
sum_my_vector.cpp
#include <iostream>
#include <vector>
#include "sum_my_vector.h"
using namespace std;

int sum_my_vector(vector<int> my_vector)
{
    int my_sum = 0;
    for (auto iv = my_vector.begin(); iv != my_vector.end(); iv++)
        my_sum += *iv;
    return my_sum;
}
sum_my_vector.h
#ifndef SUM_MY_VECTOR
#define SUM_MY_VECTOR

#include <vector>

int sum_my_vector(std::vector<int> my_vector);

#endif
python_file.py
from cython_file import sum_my_vector_cpp
print(sum_my_vector_cpp([1, 2, 3, 5]))
Now run
python setup.py build_ext --inplace
and then you can run the Python file
python python_file.py
11
"Passing a list through Python to C++"
An alternative approach would be to use Boost.Python. This may not answer your question directly, but it is still worth pointing out another solution.
#include <boost/python.hpp>
#include <iostream>
#include <vector>
#include <string>

void get_dir_list( boost::python::list dir_list )
{
    for (int i = 0; i < len(dir_list); ++i)
    {
        std::string x = boost::python::extract<std::string>(dir_list[i]);
        // perform stuff
        std::cout << "This is " << x << std::endl;
    }
}

BOOST_PYTHON_MODULE(get_dir_list)
{
    def("get_dir_list", get_dir_list);
}
Compiled using:
g++ main.cpp -shared -fPIC -o get_dir_list.so -I/usr/include/python2.7 -lboost_python
Usage:
import get_dir_list
import os
get_dir_list.get_dir_list(os.listdir('.'))
I'll post this alternative solution since it would also work for other long lists of strings that need to be passed.
In your Python script, create a text file (I'll call it the "masterFile") and write the file paths to it, one path per line. Then pass the masterFile's path to your C++ program. This way you don't have to worry about the length of your command-line arguments. Let your C++ program open and read the file for processing.
You can use something like os.remove() in your Python script to delete the masterFile once the C++ program has finished.
Also, you mentioned in the comments that you need to do different tasks depending on the file path: a suggestion would be to add a character at the beginning of each line in the masterFile to signal what needs to be done for that particular file. Example:
a Random/path/aFile.txt # a could mean do task 1
b Random2/path2/differentFile.c # b could mean do task 2
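Putting it together, a sketch of the Python side might look like this (the executable name and file paths are hypothetical):
import os
import subprocess

file_list = ["Random/path/aFile.txt", "Random2/path2/differentFile.c"]
tasks = ["a", "b"]  # one task code per file

master_path = "masterFile.txt"
with open(master_path, "w") as master:
    for task, path in zip(tasks, file_list):
        master.write("{} {}\n".format(task, path))

# The C++ program (name is hypothetical) receives only one argument:
# the path of the master file, which it opens and reads line by line.
subprocess.call(["./my_cpp_program", master_path])
os.remove(master_path)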
You pass a list to subprocess.call. subprocess.call converts this to whatever the system needs (which may vary, but certainly isn't a Python list). The system then arranges for this to be copied somewhere into the new process and sets up the standard arguments to main, which are int and char**. In your C++ program, you must define main as int main( int argc, char** argv ); nothing else will work. (At least... a system could support int main( std::string const& ) or some such as an extension, but I've never heard of one that did.)
I am writing a C extension for Python. All I want to do is to take a size as input, create an object of that size, and return a reference to that created object. My code looks like:
static PyObject *capi_malloc(PyObject *self, PyObject *args)
{
    int size;
    if (!PyArg_ParseTuple(args, "i", &size))
        return NULL;
    // Do something to create an object or a character buffer of size `size`
    return something
}
How can I do this? I am able to allocate memory using PyMem_Malloc(), but I'm confused about returning a reference to an object.
If all you really want to do is to allocate a raw memory area of size size and return it (even though it's not really a correctly initialized PyObject), just do the following:
char *buf = (char *)PyMem_Malloc(size);
return (PyObject *)buf;
Not sure if that's useful in any way but it'll compile and get you a pointer to a raw memory buffer that's been cast as a PyObject pointer.
(This was not in your question, but if you really want an honest-to-goodness PyObject pointer, you'll have to deal with calling something like the PyObject_New() function. Docs here: http://docs.python.org/2/c-api/allocation.html )
Creating the previous example using SWIG is much more straightforward. To follow this path you need to get SWIG up and running first. To install it on an Ubuntu system, you might need to run the following commands:
$ sudo apt-get install swig
$ sudo apt-get install python-dev
After that create two files.
hellomodule.c
#include <stdio.h>

void say_hello(const char* name) {
    printf("Hello %s!\n", name);
}

hello.i
%module hello

extern void say_hello(const char* name);
Now comes the more difficult part, gluing it all together.
First we need to let SWIG do its work.
swig -python hello.i
This gives us the files hello.py and hello_wrap.c.
The next step is compiling (substitute /usr/include/python2.4/ with the correct path for your setup!).
gcc -fpic -c hellomodule.c hello_wrap.c -I/usr/include/python2.4/
Now linking and we are done!
gcc -shared hellomodule.o hello_wrap.o -o _hello.so
The module is used in the following way.
>>> import hello
>>> hello.say_hello("World")
Hello World!
If you want to work directly from an IDE, you can use Cython: http://cython.org/#about
And if you still want to develop your own wrapper, you may browse through the source code of Cython.
I have the task of "wrapping" a c library into a python class. The docs are incredibly vague on this matter. It seems they expect only advanced python users would implement ctypes.
Some step by step help would be wonderful.
So I have my C library. What do I do? What files do I put where? How do I import the library? I read that there might be a way to "auto wrap" it to Python?
(By the way, I did the ctypes tutorial on python.net and it doesn't work, which makes me think they assume I should be able to fill in the rest of the steps.)
In fact this is the error I get with their code:
File "importtest.py", line 1
>>> from ctypes import *
SyntaxError: invalid syntax
I could really use some step by step help on this!
Here's a quick and dirty ctypes tutorial.
First, write your C library. Here's a simple Hello world example:
testlib.c
#include <stdio.h>
void myprint(void);
void myprint()
{
printf("hello world\n");
}
Now compile it as a shared library (mac fix found here):
$ gcc -shared -Wl,-soname,testlib -o testlib.so -fPIC testlib.c
# or... for Mac OS X
$ gcc -shared -Wl,-install_name,testlib.so -o testlib.so -fPIC testlib.c
Then, write a wrapper using ctypes:
testlibwrapper.py
import ctypes
testlib = ctypes.CDLL('/full/path/to/testlib.so')
testlib.myprint()
Now execute it:
$ python testlibwrapper.py
And you should see the output
hello world
$
If you already have a library in mind, you can skip the non-python part of the tutorial. Make sure ctypes can find the library by putting it in /usr/lib or another standard directory. If you do this, you don't need to specify the full path when writing the wrapper. If you choose not to do this, you must provide the full path of the library when calling ctypes.CDLL().
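For example, assuming testlib.so has been copied into /usr/lib (or another directory the loader searches), the wrapper could load it by bare name:
import ctypes

# No full path needed once the library sits in a standard search directory.
testlib = ctypes.CDLL('testlib.so')
testlib.myprint()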
This isn't the place for a more comprehensive tutorial, but if you ask for help with specific problems on this site, I'm sure the community would help you out.
PS: I'm assuming you're on Linux because you've used ctypes.CDLL('libc.so.6'). If you're on another OS, things might change a little bit (or quite a lot).
The answer by Chinmay Kanchi is excellent, but I wanted an example of a function which passes and returns variables/arrays to and from C++ code. I thought I'd include it here in case it is useful to others.
Passing and returning an integer
The C++ code for a function which takes an integer and adds one to it:
extern "C" int add_one(int i)
{
    return i+1;
}
Saved as the file test.cpp; note the required extern "C" (this can be removed for C code).
This is compiled using g++, with arguments similar to those in Chinmay Kanchi's answer:
g++ -shared -o testlib.so -fPIC test.cpp
The Python code uses load_library from numpy.ctypeslib, assuming the shared library is in the same directory as the Python script:
import numpy.ctypeslib as ctl
import ctypes
libname = 'testlib.so'
libdir = './'
lib=ctl.load_library(libname, libdir)
py_add_one = lib.add_one
py_add_one.argtypes = [ctypes.c_int]
value = 5
results = py_add_one(value)
print(results)
This prints 6 as expected.
Passing and printing an array
You can also pass arrays. For C code to print the elements of an array:
extern "C" void print_array(double* array, int N)
{
for (int i=0; i<N; i++)
cout << i << " " << array[i] << endl;
}
which is compiled as before and then imported in the same way. The extra Python code to use this function would then be:
import numpy as np
py_print_array = lib.print_array
py_print_array.argtypes = [ctl.ndpointer(np.float64,
                                         flags='aligned, c_contiguous'),
                           ctypes.c_int]
A = np.array([1.4, 2.6, 3.0], dtype=np.float64)
py_print_array(A, 3)
where we specify the array, the first argument to print_array, as a pointer to a NumPy array of aligned, c_contiguous 64-bit floats, and the second argument as an integer which tells the C code the number of elements in the NumPy array. This is then printed by the C code as follows:
0 1.4
1 2.6
2 3
Firstly: the >>> you see in Python examples is a way to indicate that it is Python code. It's used to separate Python code from output. Like this:
>>> 4+5
9
Here we see that the line that starts with >>> is the Python code, and 9 is what it results in. This is exactly how it looks if you start a Python interpreter, which is why it's done like that.
You never enter the >>> part into a .py file.
That takes care of your syntax error.
Secondly, ctypes is just one of several ways of wrapping C libraries for Python. Another way is SWIG, which will look at your C library and generate a Python C extension module that exposes the C API. Yet another way is to use Cython.
They all have benefits and drawbacks.
SWIG will only expose your C API to Python. That means you don't get any objects or anything; you'll have to make a separate Python file for that. It is, however, common to have a module called, say, "wowza" and a SWIG module called "_wowza" that is the wrapper around the C API. This is a nice and easy way of doing things.
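As a sketch of that naming convention (the module and function names below are made up for illustration, not a real API):
# wowza.py -- hypothetical hand-written, object-oriented facade over the
# flat C API exposed by the SWIG-generated module _wowza.
import _wowza

class Wowza(object):
    def __init__(self, name):
        self._handle = _wowza.wowza_create(name)   # hypothetical C function

    def shout(self):
        return _wowza.wowza_shout(self._handle)    # hypothetical C function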
Cython generates a C extension file. It has the benefit that all of the Python code you write is turned into C, so the objects you write are also in C, which can be a performance improvement. But you'll have to learn how it interfaces with C, so it's a little extra work to learn how to use it.
ctypes has the benefit that there is no C code to compile, so it's very nice for wrapping standard libraries written by someone else that already exist in binary versions for Windows and OS X.