As we know, C/C++ has function-like macros that do text replacement. Here is an example:
#include <string>
#include <iostream>
using namespace std;

#define FIRST_NAME Print("Moon")
#define FAMILY_NAME Print("Sun")

string Print(string name)
{
    cout << name << endl;
    return name;
}

int main()
{
    string name = FIRST_NAME + FAMILY_NAME;
    return 0;
}
As you can see, FIRST_NAME and FAMILY_NAME are macros that are replaced by calls to the function Print().
My question is does Python have a similar feature? Or what can I do to create this kind of feature?
Even in C++, using macros is not considered good practice. You can get the same performance, and better compiler checking, using functions or templates. These carry less risk of repeated side effects or of bugs caused by multi-statement macros.
Therefore, a remaining legitimate use for macros, especially in C++, is metaprogramming, e.g. the Boost Preprocessor library.
In Python, there are no hard types, only duck types (if it quacks like a duck, that's enough). It is a dynamic language (versus static), which reduces, if not completely eliminates, the need for macros.
Python has a taste (a philosophy), and it is called being "pythonic". Macros are not pythonic. Just don't use them.
In your case, make functions.
In Python you can achieve almost anything you want with its metaclasses (customizing the behavior of common Python operations on objects), runtime reflection, and runtime evaluation.
On the other hand, such an approach is discouraged unless you really need it, because it balloons complexity and kills your ability to "statically" understand the code. That is, do not use these techniques unless you are doing things like:
Dynamic code generation
Designing a Python-based DSL
Some kind of framework that needs a bit of magic to look best for users
There is also, of course, the possibility of running any sort of preprocessor over your Python source code, including C's preprocessor, or even a Python script acting as one.
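A minimal sketch of that last idea, assuming GNU cpp is installed; the file names and template contents are made up for illustration (note that real Python comments starting with # would confuse cpp):

# preprocess.py -- run the C preprocessor over a Python "template" file.
# hello.py.in is assumed to contain, for example:
#     #define FIRST_NAME "Moon"
#     print(FIRST_NAME)
import subprocess

subprocess.run(["cpp", "-P", "hello.py.in", "-o", "hello.py"], check=True)
# hello.py now contains:  print("Moon")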
Yes, Python has a similar feature: assignment.
def MyPrint(name):
    print(name)
    return name

FIRST_NAME = MyPrint("Moon")
FAMILY_NAME = MyPrint("Sun")
Note that C++ also has this feature.
I have a couple of header files that are already defined in C (C++ if we're being technical, but the header is C compatible) that I use to define a bunch of data types (structs). I would like to make use of these in a Python script that I am going to use to test the corresponding C++ application. This is mostly to avoid having to redefine them in Python, as some of the structs are unwieldy, but it would also be nice to have them defined in one place so that if changes happen down the road it will be easier to adapt.
When I started looking into this I thought it was certainly doable, but none of the examples I have come across get me quite there. The closest I got was using cffi. I got a simple example working the way I want it to:
Data types header:
// mylib.h
struct Point2D
{
    float x;
    float y;
};

struct Point3D
{
    float x;
    float y;
    float z;
};
Python code:
from cffi import FFI

with open("./mylib.h", "r") as fo:
    header_text = fo.read()

ffi = FFI()
ffi.cdef(header_text)
point = ffi.new("struct Point2D*")
But this fails if I have #includes or #ifdefs in the header file, per the cffi documentation:
The declarations can contain types, functions, constants and global variables. What you pass to the cdef() must not contain more than that; in particular, #ifdef or #include directives are not supported.
Are there any tricks I can do to make this work?
You cannot directly access C structs in Python. You will need to 'bind' C functions to Python functions. This only allows you to access C functions from Python - not a C struct.
Testing C++ is generally done using Google Test. If you require using Python to test C++ functionality then you will need to create bindings in Python to access the C++ functions (as C functions using extern "C").
You can only bind to a C/C++ library. Google "Call C functions in Python" for more.
I want to include the Python.h and ruby.h headers in the same C/C++ file, as I want to work with both at the same time. What would be the best way to include both and prevent the compiler/preprocessor from warning about multiple redefinitions of the same macros, or is there another way to use those languages from C?
MWE:
// file.cpp
#include <iostream>
#include <Python.h>
#include <ruby.h>
int main() {
    return 0;
}
I get warnings like this from the preprocessor:
In file included from /path/to/Python/include/Python.h:8,
from /path/to/file.cpp:4:
/path/to/Python/include/pyconfig.h:61: warning: "HAVE_HYPOT" redefined
#define HAVE_HYPOT
In file included from /path/to/Ruby/include/ruby-2.7.0/ruby/ruby.h:24,
from /path/to/Ruby/include/ruby-2.7.0/ruby.h:33,
from /path/to/file.cpp:5:
/path/to/Ruby/include/ruby-2.7.0/x64-mingw32/ruby/config.h:211: note: this is the location of the previous definition
#define HAVE_HYPOT 1
An example demonstrating how to avoid redefinition complaints from the headers.
Though you may have to re-declare every function you need in a wrapper header.
P.h
#define AAA 2
R.h
#define AAA 1
Pwrapper.h
int get(void);
Pwrapper.cc
#inclide "P.h" //include here, so Pwrapper.h won't conflict definition with R.h
int get(void){
return AAA;
}
In your actual source file for the operations:
#include "Pwrapper.h"
#include "R.h"
Not sure what's in Python.h, but since Python is an interpreted language, I'd assume Python.h references object code that interprets Python code held as text data in your C++ code, as in
char * pythonScript = "print \"Hello, World!\"";
pythonExec(pythonScript);
Python is an interpreted language (at least on Linux) and is also a foreign language to a C++ compiler.
The only time I've seen C++ directly support a foreign language is the implementation-dependent asm keyword, where some compilers allow you to write assembly-language code blocks directly in the C++ source. It's not supported by all compilers, and asm is the only place I've seen this style of support.
Debatably, things like OpenGL are languages in their own right, and there is a kind of foreign-language support in the sense that each OpenGL function and variable is replicated by a C++ function or variable, to the extent that the full language is mapped into C++.
Sorry I don't have a full answer, but given that I think what you're trying to do isn't really supported, I hope this provides some pointers to where you should be looking.
Maybe someone has a better answer.
Edit:
This does not work (as alluded to in a comment) for preprocessor macros, which are not #undef'd, so it does not answer the question exactly.
Original answer
Not sure about linking / global variables within implementation files, or about what your files contain, but for header files, you can put the include directive within a namespace:
namespace a {
    #include <Python.h>
}

namespace b {
    #include <ruby.h>
}
Then you reference the proper namespace to use a variable:
a::SAME_NAME
b::SAME_NAME
I'm just getting started with ctypes and would like to use a C++ class that I have exported in a dll file from within python using ctypes.
So let's say my C++ code looks something like this:
class MyClass {
public:
    int test();
    ...
I would now create a .dll file that contains this class and then load the .dll file in Python using ctypes.
Now how would I create an Object of type MyClass and call its test function? Is that even possible with ctypes? Alternatively I would consider using SWIG or Boost.Python but ctypes seems like the easiest option for small projects.
Besides Boost.Python (which is probably a more friendly solution for larger projects that require one-to-one mapping of C++ classes to Python classes), you could provide a C interface on the C++ side. It's one solution of many, so it has its own trade-offs, but I will present it for the benefit of those who aren't familiar with the technique. For full disclosure, with this approach one wouldn't be interfacing C++ to Python, but C++ to C to Python. Below I included an example that meets your requirements, to show you the general idea of the extern "C" facility of C++ compilers.
//YourFile.cpp (compiled into a .dll or .so file)
#include <new> //For std::nothrow
//Either include a header defining your class, or define it here.
extern "C" //Tells the compile to use C-linkage for the next scope.
{
//Note: The interface this linkage region needs to use C only.
void * CreateInstanceOfClass( void )
{
// Note: Inside the function body, I can use C++.
return new(std::nothrow) MyClass;
}
//Thanks Chris.
void DeleteInstanceOfClass (void *ptr)
{
delete(std::nothrow) ptr;
}
int CallMemberTest(void *ptr)
{
// Note: A downside here is the lack of type safety.
// You could always internally(in the C++ library) save a reference to all
// pointers created of type MyClass and verify it is an element in that
//structure.
//
// Per comments with Andre, we should avoid throwing exceptions.
try
{
MyClass * ref = reinterpret_cast<MyClass *>(ptr);
return ref->Test();
}
catch(...)
{
return -1; //assuming -1 is an error condition.
}
}
} //End C linkage scope.
You can compile this code with
gcc -shared -fPIC -o test.so test.cpp
#creates test.so in your current working directory.
In your python code you could do something like this (interactive prompt from 2.7 shown):
>>> from ctypes import cdll
>>> stdc=cdll.LoadLibrary("libc.so.6") # or similar to load c library
>>> stdcpp=cdll.LoadLibrary("libstdc++.so.6") # or similar to load c++ library
>>> myLib=cdll.LoadLibrary("/path/to/test.so")
>>> spam = myLib.CreateInstanceOfClass()
>>> spam
[outputs the pointer address of the element]
>>> value=myLib.CallMemberTest(spam)
[does whatever Test does to the spam reference of the object]
I'm sure Boost.Python does something similar under the hood, but perhaps understanding the lower-level concepts is helpful. I would be more excited about this method if you were attempting to access functionality of a C++ library and a one-to-one mapping was not required.
For more information on C/C++ interaction check out this page from Sun: http://dsc.sun.com/solaris/articles/mixing.html#cpp_from_c
The short story is that there is no standard binary interface for C++ in the way that there is for C. Different compilers output different binaries for the same C++ dynamic libraries, due to name mangling and different ways to handle the stack between library function calls.
So, unfortunately, there really isn't a portable way to access C++ libraries in general. But, for one compiler at a time, it's no problem.
This blog post also has a short overview of why this currently won't work. Maybe after C++0x comes out, we'll have a standard ABI for C++? Until then, you're probably not going to have any way to access C++ classes through Python's ctypes.
The answer by AudaAero is very good but not complete (at least for me).
On my system (Debian Stretch x64 with GCC and G++ 6.3.0, Python 3.5.3) I got segfaults as soon as I called a member function that accesses a member value of the class.
By printing pointer values to stdout, I diagnosed that the void* pointer, which is 64 bits wide in the wrappers, was being represented as 32 bits in Python. Thus big problems occur when it is passed back to a member function wrapper.
The solution I found is to change:
spam = myLib.CreateInstanceOfClass()
Into
Class_ctor_wrapper = myLib.CreateInstanceOfClass
Class_ctor_wrapper.restype = c_void_p
spam = c_void_p(Class_ctor_wrapper())
So two things were missing: setting the return type to c_void_p (the default is int) and then creating a c_void_p object (not just an integer).
I wish I could have written a comment but I still lack 27 rep points.
Extending AudaAero's and Gabriel Devillers' answers, I would complete the class object instance creation with:
stdc=c_void_p(cdll.LoadLibrary("libc.so.6"))
Using the ctypes c_void_p data type ensures the proper representation of the class object pointer within Python.
Also make sure that the DLL's memory management is handled by the DLL (memory allocated in the DLL should also be deallocated in the DLL, not in Python)!
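A minimal sketch of that rule, reusing the function names from the earlier extern "C" example (the library path is hypothetical):

from ctypes import cdll, c_void_p

myLib = cdll.LoadLibrary("/path/to/test.so")
myLib.CreateInstanceOfClass.restype = c_void_p
myLib.DeleteInstanceOfClass.argtypes = [c_void_p]

spam = c_void_p(myLib.CreateInstanceOfClass())
# ... use the object through the other wrapper functions ...
myLib.DeleteInstanceOfClass(spam)  # deallocate in the library that allocated it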
I ran into the same problem. From trial and error and some internet research (not necessarily from knowing the g++ compiler or C++ very well), I came across this particular solution that seems to be working quite well for me.
// model.hpp
#include <cstdint>  // for uint32_t

class Model {
public:
    static Model* CreateModel(char* model_name) asm("CreateModel"); // static method, creates an instance of the class
    double GetValue(uint32_t index) asm("GetValue"); // object method
};
# model.py
from ctypes import ...

if __name__ == '__main__':
    # load dll as model_dll
    # Static Method Signature
    fCreateModel = getattr(model_dll, 'CreateModel')  # or model_dll.CreateModel
    fCreateModel.argtypes = [c_char_p]
    fCreateModel.restype = c_void_p
    # Object Method Signature
    fGetValue = getattr(model_dll, 'GetValue')  # or model_dll.GetValue
    fGetValue.argtypes = [c_void_p, c_uint32]  # Notice two params
    fGetValue.restype = c_double
    # Calling the Methods
    obj_ptr = fCreateModel(c_char_p(b"new_model"))
    val = fGetValue(obj_ptr, c_uint32(0))  # pass in obj_ptr as first param of obj method
>>> nm -Dg libmodel.so
U cbrt@GLIBC_2.2.5
U close@GLIBC_2.2.5
00000000000033a0 T CreateModel # <----- Static Method
U __cxa_atexit@GLIBC_2.2.5
w __cxa_finalize@GLIBC_2.2.5
U fprintf@GLIBC_2.2.5
0000000000002b40 T GetValue # <----- Object Method
w __gmon_start__
...
...
... # Mangled Symbol Names Below
0000000000002430 T _ZN12SHMEMWrapper4HashEPKc
0000000000006120 B _ZN12SHMEMWrapper8info_mapE
00000000000033f0 T _ZN5Model12DestroyModelEPKc
0000000000002b20 T _ZN5Model14GetLinearIndexElll
First, I was able to avoid the extern "C" directive completely by instead using the asm keyword which, to my knowledge, asks the compiler to use a given name instead of the generated one when exporting the function to the shared object lib's symbol table. This allowed me to avoid the weird symbol names that the C++ compiler generates automatically. They look something like the _ZN1... pattern you see above. Then in a program using Python ctypes, I was able to access the class functions directly using the custom name I gave them. The program looks like fhandle = mydll.myfunc or fhandle = getattr(mydll, 'myfunc') instead of fhandle = getattr(mydll, '_ZN12...myfunc...'). Of course, you could just use the long name; it would make no difference, but I figure the shorter name is a little cleaner and doesn't require using nm to read the symbol table and extract the names in the first place.
Second, in the spirit of Python's style of object-oriented programming, I decided to try passing in my class' object pointer as the first argument of the class object method, just like we pass self in as the first argument in Python object methods. To my surprise, it worked! See the Python section above. Apparently, if you set the first argument in the fhandle.argtypes list to c_void_p and pass in the pointer you get from your class' static factory method, the program should execute cleanly. Class static methods seem to work as one would expect in Python; just use the original function signature.
I'm using g++ 12.1.1, python 3.10.5 on Arch Linux. I hope this helps someone.
I have a bunch of functions that I've written in C and I'd like some code I've written in Python to be able to access those functions.
I've read several questions on here that deal with a similar problem (here and here for example) but I'm confused about which approach I need to take.
One question recommends ctypes and another recommends cython. I've read a bit of the documentation for both, and I'm completely unclear about which one will work better for me.
Basically I've written some python code to do some two dimensional FFTs and I'd like the C code to be able to see that result and then process it through the various C functions I've written. I don't know if it will be easier for me to call the Python from C or vice versa.
You should call C from Python by writing a ctypes wrapper. Cython is for making Python-like code run faster; ctypes is for making C functions callable from Python. What you need to do is the following:
Write the C functions you want to use. (You probably did this already)
Create a shared object (.so, for linux, os x, etc) or dynamically loaded library (.dll, for windows) for those functions. (Maybe you already did this, too)
Write the ctypes wrapper (It's easier than it sounds, I wrote a how-to for that)
Call a function from that wrapper in Python. (This is just as simple as calling any other python function)
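A minimal sketch of steps 3 and 4; the library name libmyfft.so and the function process_fft are made-up placeholders for whatever your step 2 produced:

# my_wrapper.py -- step 3: a thin ctypes wrapper
import ctypes

_lib = ctypes.CDLL("./libmyfft.so")  # hypothetical shared object from step 2
_lib.process_fft.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_size_t]
_lib.process_fft.restype = ctypes.c_double

def process_fft(values):
    arr = (ctypes.c_double * len(values))(*values)
    return _lib.process_fft(arr, len(values))

# step 4: call it like any other Python function
# result = process_fft([1.0, 2.0, 3.0])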
If I understand correctly, you have no preference between calling C from Python and calling Python from C.
In that case I would recommend Cython. It is quite open to many kinds of manipulation, especially, in your case, calling a function that has been written in Python from C.
Here is how it works (public api):
The following example assumes that you have a Python class (self is an instance of it), and that this class has a method (named method) you want to call on this class and deal with the result (here, a double) from C. This function, written in a Cython extension, would help you to make this call.
cdef public api double cy_call_func_double(object self, char* method, bint *error):
    if hasattr(self, method):
        error[0] = 0
        return getattr(self, method)()
    else:
        error[0] = 1
On the C side, you'll then be able to perform the call like so:
PyObject *py_obj = ....
...
if (py_obj) {
    int error;
    double result;
    result = cy_call_func_double(py_obj, (char*)"initSimulation", &error);
    cout << "Do something with the result : " << result << endl;
}
where PyObject is a struct provided by the Python/C API.
After having obtained the py_obj (by casting a regular Python object in your Cython extension, like this: <PyObject *>my_python_object), you would finally be able to call the initSimulation method on it and do something with the result.
(Here a double, but Cython can deal easily with vectors, sets, ...)
Well, I am aware that what I just wrote can be confusing if you have never written anything using Cython, but it aims to be a short demonstration of the numerous things it can do for you in terms of bridging the two languages.
On the other hand, this approach can take more time than recoding your Python code into C, depending on the complexity of your algorithms.
In my opinion, investing time into learning Cython is worthwhile only if you plan to have this kind of need quite often...
Hope this was at least informative...
Well, here you are referring to the two things below:
How to call c function within from python (Extending python)
How to call python function/script from C program (Embedding Python)
For #2, that is, 'Embedding Python',
you may use the code segment below:
#include "python.h"
int main(int argc, char *argv[]) {
Py_SetProgramName(argv[0]); /* optional but recommended */
Py_Initialize();
PyRun_SimpleString("from time import time,ctime\n"
"print 'Today is',ctime(time())\n");
/*Or if you want to run python file within from the C code*/
//pyRun_SimpleFile("Filename");
Py_Finalize();
return 0; }
For #1, that is, 'Extending Python',
the best bet would be to use ctypes (which, by the way, is portable across all variants of Python).
>>> from ctypes import *
>>> libc = cdll.msvcrt
>>> print libc.time(None)
1438069008
>>> printf = libc.printf
>>> printf("Hello, %s\n", "World!")
Hello, World!
14
>>> printf("%d bottles of beer\n", 42)
42 bottles of beer
19
For a detailed guide you may want to refer to my blog article.
It'll be easier to call C from Python. Your scenario sounds weird - normally people write most of the code in Python except for the processor-intensive portion, which is written in C. Is the two-dimensional FFT the computationally intensive part of your code?
There's a nice and brief tutorial on this from Digital Ocean here. Short version:
1. Write C Code
You've already done this, so super short example:
#include <stdio.h>
int addFive(int i) {
    return i + 5;
}
2. Create Shared Library File
Assuming the above C file is saved as c_functions.c, then to generate the .so file to call from python type in your terminal:
cc -fPIC -shared -o c_functions.so c_functions.c
3. Use Your C Code in Python!
Within your python module:
# Access your C code
from ctypes import *
so_file = "./c_functions.so"
c_functions = CDLL(so_file)
# Use your C code
c_functions.addFive(10)
That last line will output 15. You're done!
The answer from BLimitless quoting Digital Ocean is fine for simple cases, but it defaults to allowing int type arguments only. If you need a different type for your input argument, for example a float, you need to add this:
c_functions.addFive.argtypes=[ctypes.c_float]
And if you change the return type, for example to float, you need this:
c_functions.addFive.restype=ctypes.c_float
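Putting the two together, a minimal usage sketch; this assumes the C function was rebuilt to take and return float, which is not part of the original tutorial:

import ctypes

c_functions = ctypes.CDLL("./c_functions.so")
# Hypothetical float variant of addFive, declared on the C side as:
#   float addFive(float i) { return i + 5.0f; }
c_functions.addFive.argtypes = [ctypes.c_float]
c_functions.addFive.restype = ctypes.c_float
print(c_functions.addFive(10.0))  # prints 15.0 if the C side really uses float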
I have a C extension module, to which I would like to add some Python utility functions. Is there a recommended way of doing this?
For example:
import my_module
my_module.super_fast_written_in_C()
my_module.written_in_Python__easy_to_maintain()
I'm primarily interested in Python 2.x.
The usual way of doing this is: mymod.py contains the utility functions written in Python, and imports the goodies in the _mymod module which is written in C and is imported from _mymod.so or _mymod.pyd. For example, look at .../Lib/csv.py in your Python distribution.
Prefix your native extension with an underscore.
Then, in Python, create a wrapper module that imports that native extension and adds some other non-native routines on top of that.
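A minimal sketch of that layout, assuming the compiled extension is built as _mymod and reusing the function names from the question:

# mymod.py -- thin pure-Python wrapper over the compiled _mymod extension
from _mymod import super_fast_written_in_C  # re-export the C goodies

def written_in_Python__easy_to_maintain():
    # pure-Python utility; it can freely call the re-exported C functions
    return super_fast_written_in_C()

Users then simply import mymod and see both functions, matching the example in the question.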
The existing answers describe the method most often used: it has the potential advantage of allowing pure-Python (or other-language) implementations on platforms in which the compiled C extension is not available (including Jython and IronPython).
In a few cases, however, it may not be worth splitting the module into a C layer and a Python layer just to provide a few extras that are more sensibly written in Python than in C. For example, gmpy (lines 7113 ff at this time), in order to enable pickling of instances of gmpy's type, uses:
copy_reg_module = PyImport_ImportModule("copy_reg");
if (copy_reg_module) {
    char* enable_pickle =
        "def mpz_reducer(an_mpz): return (gmpy.mpz, (an_mpz.binary(), 256))\n"
        "def mpq_reducer(an_mpq): return (gmpy.mpq, (an_mpq.binary(), 256))\n"
        "def mpf_reducer(an_mpf): return (gmpy.mpf, (an_mpf.binary(), 0, 256))\n"
        "copy_reg.pickle(type(gmpy.mpz(0)), mpz_reducer)\n"
        "copy_reg.pickle(type(gmpy.mpq(0)), mpq_reducer)\n"
        "copy_reg.pickle(type(gmpy.mpf(0)), mpf_reducer)\n"
    ;
    PyObject* namespace = PyDict_New();
    PyObject* result = NULL;
    if (options.debug)
        fprintf(stderr, "gmpy_module imported copy_reg OK\n");
    PyDict_SetItemString(namespace, "copy_reg", copy_reg_module);
    PyDict_SetItemString(namespace, "gmpy", gmpy_module);
    PyDict_SetItemString(namespace, "type", (PyObject*)&PyType_Type);
    result = PyRun_String(enable_pickle, Py_file_input,
                          namespace, namespace);
If you want those few extra functions to "stick around" in your module (not necessary in this example case), you would of course use your module object as built by Py_InitModule3 (or whatever other method) and its PyModule_GetDict rather than a transient dictionary as the namespace in which to PyRun_String. And of course there are more sophisticated approaches than to PyRun_String the def and class statements you need, but, for simple enough cases, this simple approach may in fact be sufficient.