I want to include the Python.h and ruby.h headers in the same C/C++ file, because I want to work with both languages at the same time. What is the best way to include both and prevent the compiler/preprocessor from warning about multiple redefinitions of the same macros, or is there another way to use those languages from C?
MWE:
// file.cpp
#include <iostream>
#include <Python.h>
#include <ruby.h>
int main() {
return 0;
}
I get warnings like this from the preprocessor:
In file included from /path/to/Python/include/Python.h:8,
from /path/to/file.cpp:4:
/path/to/Python/include/pyconfig.h:61: warning: "HAVE_HYPOT" redefined
#define HAVE_HYPOT
In file included from /path/to/Ruby/include/ruby-2.7.0/ruby/ruby.h:24,
from /path/to/Ruby/include/ruby-2.7.0/ruby.h:33,
from /path/to/file.cpp:5:
/path/to/Ruby/include/ruby-2.7.0/x64-mingw32/ruby/config.h:211: note: this is the location of the previous definition
#define HAVE_HYPOT 1
Here is an example that demonstrates how to avoid redefinition complaints from headers.
Note that you may have to re-declare every function you need in a wrapper header.
P.h
#define AAA 2
R.h
#define AAA 1
Pwrapper.h
int get(void);
Pwrapper.cc
#inclide "P.h" //include here, so Pwrapper.h won't conflict definition with R.h
int get(void){
return AAA;
}
In your actual source file for operations:
#include "Pwrapper.h"
#include "R.h"
I'm not sure what's in Python.h, but since Python is an interpreted language, I'd assume Python.h references object code that interprets Python code held as text data in your C++ code, as in
char * pythonScript = "print \"Hello, World!\"";
pythonExec(pythonScript);
Python is an interpreted language (at least on Linux) and is also a foreign language to a C++ compiler.
The only time I've seen C++ directly support a foreign language is the implementation-dependent asm keyword, where some compilers allow you to write assembly-language code blocks directly in the C++ source. It's not supported by all compilers, and asm is the only place I've seen this style of support.
Debatably, things like OpenGL are languages in their own right, and there is a kind of foreign-language support in the sense that each OpenGL function and variable is replicated with a C++ function or variable, to the extent that the full language is mapped into C++.
Sorry I don't have a full answer, but given that I think what you're trying to do isn't really supported, I hope this provides some pointers to where you should be looking.
Maybe someone has a better answer.
Edit:
This does not work (as alluded to in a comment) for preprocessor macros, which are not #undef'd, so it does not answer the question exactly.
Original answer
I'm not sure about linking / global variables within implementation files, or about what your files contain, but for header files you can put the #include directive inside a namespace:
namespace a {
#include <Python.h>
}
namespace b {
#include <ruby.h>
}
Then you reference the proper namespace to use a variable:
a::SAME_NAME
b::SAME_NAME
Related
I have a couple of header files that are already defined in C (C++ if we're being technical, but the headers are C compatible) that I use to define a bunch of data types (structs). I would like to make use of these in a Python script that I am going to use to test the corresponding C++ application. This is mostly to avoid having to redefine them in Python, as some of the structs are unwieldy, but it would also be nice to have them defined in one place so that if changes happen down the road it will be easier to adapt.
When I started looking into this I thought it was certainly doable, but none of the examples I have come across get me quite there. The closest I got was using cffi. I got a simple example working the way I want it to:
Data types header:
// mylib.h
struct Point2D
{
float x;
float y;
};
struct Point3D
{
float x;
float y;
float z;
};
Python code:
from cffi import FFI
with open("./mylib.h", "r") as fo:
    header_text = fo.read()
ffi = FFI()
ffi.cdef(header_text)
point = ffi.new("struct Point2D*")
But this fails if I have #includes or #ifdefs in the header file, per the cffi documentation:
The declarations can contain types, functions, constants and global
variables. What you pass to the cdef() must not contain more than
that; in particular, #ifdef or #include directives are not supported.
Are there any tricks I can do to make this work?
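One workaround worth sketching (assuming gcc is on the PATH and that the flattened header stays within what cdef() accepts) is to run the real C preprocessor over the header first, so the #include and #ifdef directives are resolved before the text ever reaches cdef():

import subprocess
from cffi import FFI

# Run the C preprocessor; -E stops after preprocessing, -P drops line markers.
preprocessed = subprocess.run(
    ["gcc", "-E", "-P", "mylib.h"],
    capture_output=True, text=True, check=True
).stdout

ffi = FFI()
ffi.cdef(preprocessed)
point = ffi.new("struct Point2D*")

This is only a sketch: if the preprocessed output pulls in system headers with constructs that cdef() cannot parse, you would still need to trim it down.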
You cannot directly access C structs in Python. You will need to 'bind' C functions to Python functions. This only allows you to access C functions from Python - not a C struct.
Testing C++ is generally done using Google Test. If you require using Python to test C++ functionality then you will need to create bindings in Python to access the C++ functions (as C functions using extern "C").
You can only bind to a C/C++ library. Google "Call C functions in Python" for more.
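As a minimal sketch of that "bind C functions" route (the library name and the add() function here are hypothetical, and on the C/C++ side the function would be exported with extern "C"):

from ctypes import CDLL, c_int

# Hypothetical shared library exposing: extern "C" int add(int, int);
lib = CDLL("./libexample.so")
lib.add.argtypes = [c_int, c_int]
lib.add.restype = c_int

print(lib.add(2, 3))  # prints 5

The struct fields themselves are not visible this way; you would expose accessor functions for them.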
I'm working on building a library in rust that I think would be extremely useful in other languages. I would like to expose this functionality with idiomatic bindings to as many languages as possible with as little effort as I can get away with. Obviously SWIG is a great choice for this project.
I'm using a fantastic project called safer_ffi to produce the C interface to the Rust library. It removes a lot of the error-prone boilerplate on the Rust side but also limits my options on exactly what the C interface looks like. Currently it represents strings with this C type:
typedef struct {
uint8_t * ptr;
size_t len;
} slice_boxed_uint8_t;
I can't for the life of me set the ptr member of the struct without causing a TypeError in python. My interface file is simply:
%module swig_example
%{
/* Includes the header in the wrapper code */
#include "swig_example.h"
%}
%include "stdint.i"
%include "cstring.i"
/* Parse the header file to generate wrappers */
%include "swig_example.h"
and I try to set up the struct with the following Python:
def _str_to_slice(input: str) -> slice_boxed_uint8_t:
    slice = slice_boxed_uint8_t()
    slice.ptr = input
    slice.len = len(input)
    return slice
which produces the following error: "TypeError: in method 'slice_boxed_uint8_t_ptr_set', argument 2 of type 'uint8_t *'". I have tried all sorts of combinations of how to invoke it and how to generate the bindings. I've been walking through the generated C code but haven't found the issue yet. It looks like it understands that this pointer is a char but isn't making the connection that it is okay to use as a uint8_t*. I might have misunderstood some of the generated C code; I'm still not very deep into walking through that yet.
I did my best to include all relevant info, but I know I might be missing some important context in this post, so the code can be found here. The README.md points out where to find all relevant files and the reasoning behind how things are set up, and I checked in the SWIG-generated C and Python files. This project is the smallest subset of my original project I could make, to make it easier for others to troubleshoot.
A huge thank you for any help that anyone can provide!
In the documentation for writing CPython extensions, one can find the following code:
static PyObject *
spam_system(PyObject *self, PyObject *args)
{
const char *command;
int sts;
if (!PyArg_ParseTuple(args, "s", &command))
return NULL;
sts = system(command);
return PyLong_FromLong(sts);
}
As we can see, this function in the external C extension is able to use a function defined (I think) inside the main CPython interpreter source code: PyArg_ParseTuple.
If we were to simply build the extension source file directly (i.e., gcc -shared myextension.c, etc.), even while including the necessary header <Python.h>, the linker would complain about an undefined reference to PyArg_ParseTuple.
So how are CPython extensions built, in a way that allows them to freely reference functions from the CPython code base?
Are extensions built together with the source code of the actual interpreter? Are they linked with the object files of the actual interpreter? A different approach?
Please refer to approach relevant to Windows. Additional information about Linux is also welcome.
Well, in Linux the relevant library call is dlopen(), where "dl" stands for Dynamic Linking. The manual page says:
If the executable was linked with the flag "-rdynamic" (or, synonymously, "--export-dynamic"), then the global symbols in the executable will also be used to resolve references in a dynamically loaded library.
In other words, references from inside the dynamic loaded library are resolved at load time by the dynamic linker.
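For completeness, the usual way to get the compile and link settings right on both Windows and Linux is to let the build machinery supply them; a minimal build script (a sketch, reusing the "spam" example name, with spammodule.c as a placeholder file name) looks like this:

# setup.py
from setuptools import setup, Extension

setup(
    name="spam",
    version="1.0",
    ext_modules=[Extension("spam", sources=["spammodule.c"])],
)

Running python setup.py build_ext adds the Python include directory and, on Windows, links the extension against the pythonXY import library; on Linux the Python symbols are typically left undefined in the .so and resolved when the interpreter loads it, as described above.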
As we know, C/C++ has function-like macros that do text replacement; here is an example:
#include <string>
#include <iostream>
using namespace std;
#define FIRST_NAME Print("Moon")
#define FAMILY_NAME Print("Sun")
string Print(string name)
{
cout << name << endl;
return name;
}
int main()
{
string name = FIRST_NAME + FAMILY_NAME;
return 0;
}
As you can see, FIRST_NAME and FAMILY_NAME are macros which will be replaced by calls to the function Print().
My question is does Python have a similar feature? Or what can I do to create this kind of feature?
In C++, using macros is already not considered good practice. You can get the same performance, and better compiler checking, using functions or templates. These avoid the risk of repeated side effects and the bugs caused by multi-statement macros.
Therefore, a remaining legitimate use for macros, especially in C++, is metaprogramming, e.g. the Boost Preprocessor library.
In Python, there are no hard types, only duck typing (if it quacks like a duck, that's enough). It is a dynamic language (versus static), which reduces, if not completely eliminates, the need for macros.
Python has a taste (a philosophy), and it is called being "pythonic". Macros are not pythonic. Just don't use them.
In your case, make functions.
In Python you can achieve almost anything you want with its metaclasses (which customize the behavior of common Python operations on objects), runtime reflection, and runtime evaluation.
On the other hand, such an approach is discouraged unless you really need it because it balloons complexity and kills your ability to "statically" understand the code. That is, do not use them unless you are doing things like:
Dynamic code generation
Designing a Python-based DSL
Some kind of framework that needs a bit of magic to look best for users
There is also the possibility of applying any sort of preprocessor to your Python source code, of course, including C's preprocessor, or even a Python script.
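As a tiny illustration of the runtime-evaluation point (a sketch; the MACROS table and print_name are made-up names), the text-substitution behaviour of the C macros can be imitated by storing the replacement text and evaluating it where it is used:

# The "macro body" is a string of ordinary Python evaluated at run time.
MACROS = {
    "FIRST_NAME": 'print_name("Moon")',
    "FAMILY_NAME": 'print_name("Sun")',
}

def print_name(name):
    print(name)
    return name

name = eval(MACROS["FIRST_NAME"]) + eval(MACROS["FAMILY_NAME"])

As with C macros, this is hard to reason about statically, which is exactly why the advice above is to prefer plain functions.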
Yes, Python has a similar feature: assignment.
def MyPrint(name):
    print(name)
    return name
FIRST_NAME = MyPrint("Moon")
FAMILY_NAME = MyPrint("Sun")
Note that C++ also has this feature.
I'm just getting started with ctypes and would like to use a C++ class that I have exported in a DLL file from within Python using ctypes.
So let's say my C++ code looks something like this:
class MyClass {
public:
int test();
...
I would now create a .dll file that contains this class and then load the .dll file in Python using ctypes.
Now how would I create an object of type MyClass and call its test function? Is that even possible with ctypes? Alternatively I would consider using SWIG or Boost.Python, but ctypes seems like the easiest option for small projects.
Besides Boost.Python (which is probably a more friendly solution for larger projects that require one-to-one mapping of C++ classes to Python classes), you could provide a C interface on the C++ side. It's one solution of many, so it has its own trade-offs, but I will present it for the benefit of those who aren't familiar with the technique. For full disclosure, with this approach one wouldn't be interfacing C++ to Python, but C++ to C to Python. Below I included an example that meets your requirements, to show you the general idea of the extern "C" facility of C++ compilers.
//YourFile.cpp (compiled into a .dll or .so file)
#include <new> //For std::nothrow
//Either include a header defining your class, or define it here.
extern "C" //Tells the compile to use C-linkage for the next scope.
{
//Note: The interface in this linkage region needs to use C only.
void * CreateInstanceOfClass( void )
{
// Note: Inside the function body, I can use C++.
return new(std::nothrow) MyClass;
}
//Thanks Chris.
void DeleteInstanceOfClass (void *ptr)
{
delete reinterpret_cast<MyClass *>(ptr); // cast back before deleting; deleting a void* is undefined
}
int CallMemberTest(void *ptr)
{
// Note: A downside here is the lack of type safety.
// You could always internally (in the C++ library) save a reference to all
// pointers created of type MyClass and verify it is an element in that
// structure.
//
// Per comments with Andre, we should avoid throwing exceptions.
try
{
MyClass * ref = reinterpret_cast<MyClass *>(ptr);
return ref->test();
}
catch(...)
{
return -1; //assuming -1 is an error condition.
}
}
} //End C linkage scope.
You can compile this code with
gcc -shared -fPIC -o test.so test.cpp
# creates test.so in your current working directory.
In your python code you could do something like this (interactive prompt from 2.7 shown):
>>> from ctypes import cdll
>>> stdc=cdll.LoadLibrary("libc.so.6") # or similar to load c library
>>> stdcpp=cdll.LoadLibrary("libstdc++.so.6") # or similar to load c++ library
>>> myLib=cdll.LoadLibrary("/path/to/test.so")
>>> spam = myLib.CreateInstanceOfClass()
>>> spam
[outputs the pointer address of the element]
>>> value = myLib.CallMemberTest(spam)
[does whatever test does to the spam reference of the object]
I'm sure Boost.Python does something similar under the hood, but perhaps understanding the lower-level concepts is helpful. I would be more excited about this method if you were attempting to access functionality of a C++ library and a one-to-one mapping was not required.
For more information on C/C++ interaction check out this page from Sun: http://dsc.sun.com/solaris/articles/mixing.html#cpp_from_c
The short story is that there is no standard binary interface for C++ in the way that there is for C. Different compilers output different binaries for the same C++ dynamic libraries, due to name mangling and different ways to handle the stack between library function calls.
So, unfortunately, there really isn't a portable way to access C++ libraries in general. But, for one compiler at a time, it's no problem.
This blog post also has a short overview of why this currently won't work. Maybe after C++0x comes out, we'll have a standard ABI for C++? Until then, you're probably not going to have any way to access C++ classes through Python's ctypes.
The answer by AudaAero is very good but not complete (at least for me).
On my system (Debian Stretch x64 with GCC and G++ 6.3.0, Python 3.5.3) I get segfaults as soon as I call a member function that accesses a member value of the class.
I diagnosed, by printing pointer values to stdout, that the void* pointer, which is 64 bits wide in the wrappers, was being represented on 32 bits in Python. Thus big problems occur when it is passed back to a member-function wrapper.
The solution I found is to change:
spam = myLib.CreateInstanceOfClass()
into:
Class_ctor_wrapper = myLib.CreateInstanceOfClass
Class_ctor_wrapper.restype = c_void_p
spam = c_void_p(Class_ctor_wrapper())
So two things were missing: setting the return type to c_void_p (the default is int) and then creating a c_void_p object (not just an integer).
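In the same spirit (a sketch that simply continues the example above), the member-function wrapper should also have its argument type declared, so the pointer is passed through at full width rather than truncated to an int:

from ctypes import c_void_p, c_int

# CallMemberTest is the extern "C" wrapper from the earlier answer.
myLib.CallMemberTest.argtypes = [c_void_p]
myLib.CallMemberTest.restype = c_int

value = myLib.CallMemberTest(spam)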
I wish I could have written a comment but I still lack 27 rep points.
Extending AudaAero's and Gabriel Devillers' answer, I would complete the class object instance creation by:
spam = c_void_p(myLib.CreateInstanceOfClass())
Using the ctypes c_void_p data type ensures the proper representation of the class object pointer within Python.
Also make sure that the DLL's memory management is handled by the DLL (memory allocated in the DLL should also be deallocated in the DLL, not in Python)!
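To follow that rule with the wrapper from the earlier answer (a sketch; DeleteInstanceOfClass is the extern "C" destructor function shown above), free the object through the library that created it:

from ctypes import c_void_p

# Let the library that allocated the object free it as well.
myLib.DeleteInstanceOfClass.argtypes = [c_void_p]
myLib.DeleteInstanceOfClass.restype = None
myLib.DeleteInstanceOfClass(spam)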
I ran into the same problem. From trial and error and some internet research (not necessarily from knowing the g++ compiler or C++ very well), I came across this particular solution that seems to be working quite well for me.
//model.hpp
#include <cstdint> // for uint32_t
class Model{
public:
    static Model* CreateModel(char* model_name) asm("CreateModel"); // static method, creates an instance of the class
    double GetValue(uint32_t index) asm("GetValue"); // object method
};
#model.py
from ctypes import ...
if __name__ == '__main__':
    # load dll as model_dll
    # Static Method Signature
    fCreateModel = getattr(model_dll, 'CreateModel') # or model_dll.CreateModel
    fCreateModel.argtypes = [c_char_p]
    fCreateModel.restype = c_void_p
    # Object Method Signature
    fGetValue = getattr(model_dll, 'GetValue') # or model_dll.GetValue
    fGetValue.argtypes = [c_void_p, c_uint32] # Notice two params
    fGetValue.restype = c_double
    # Calling the Methods
    obj_ptr = fCreateModel(c_char_p(b"new_model"))
    val = fGetValue(obj_ptr, c_uint32(0)) # pass in obj_ptr as first param of obj method
>>> nm -Dg libmodel.so
U cbrt@GLIBC_2.2.5
U close@GLIBC_2.2.5
00000000000033a0 T CreateModel # <----- Static Method
U __cxa_atexit@GLIBC_2.2.5
w __cxa_finalize@GLIBC_2.2.5
U fprintf@GLIBC_2.2.5
0000000000002b40 T GetValue # <----- Object Method
w __gmon_start__
...
...
... # Mangled Symbol Names Below
0000000000002430 T _ZN12SHMEMWrapper4HashEPKc
0000000000006120 B _ZN12SHMEMWrapper8info_mapE
00000000000033f0 T _ZN5Model12DestroyModelEPKc
0000000000002b20 T _ZN5Model14GetLinearIndexElll
First, I was able to avoid the extern "C" directive completely by instead using the asm keyword, which, to my knowledge, asks the compiler to use a given name instead of the generated one when exporting the function to the shared object's symbol table. This allowed me to avoid the weird symbol names that the C++ compiler generates automatically. They look something like the _ZN1... pattern you see above. Then, in a program using Python ctypes, I was able to access the class functions directly using the custom name I gave them. The program looks like fhandle = mydll.myfunc or fhandle = getattr(mydll, 'myfunc') instead of fhandle = getattr(mydll, '_ZN12...myfunc...'). Of course, you could just use the long name; it would make no difference, but I figure the shorter name is a little cleaner and doesn't require using nm to read the symbol table and extract the names in the first place.
Second, in the spirit of Python's style of object-oriented programming, I decided to try passing my class's object pointer in as the first argument of the class object method, just like we pass self in as the first argument in Python object methods. To my surprise, it worked! See the Python section above. Apparently, if you set the first argument in fhandle.argtypes to c_void_p and pass in the pointer you get from your class's static factory method, the program should execute cleanly. Class static methods seem to work as one would expect, like in Python; just use the original function signature.
I'm using g++ 12.1.1, python 3.10.5 on Arch Linux. I hope this helps someone.