I'm trying to convert a Python function to Delphi using Python4Delphi (to educate myself and, I hope, to gain speed). However, I have no idea how this works between Delphi and Python. Here's my original function:
def MyFunc(img, curve):
    C = 160
    for i in xrange(img.dim()[0]):
        p = img[i]
        img[i] = (p[0], p[1], p[2] - curve[p[0]] + C)
(img is not a Python list, but a custom object.)
I found the related Demo09 in Python4Delphi, but couldn't find any guidance on how to iterate over that list, unpack the tuple, and modify the value.
Any pointers to documentation on creating extensions?
Python4Delphi handles the problem of loading Python's main DLL into a Delphi program, embedding the Python interpreter into your Delphi application, but it also has some demos for the reverse: writing an extension using Delphi. Below is some working example code.
I found a book reference on writing Python extensions using Delphi: page 469 of Python Programming on Win32, by Mark Hammond & Andy Robinson (O'Reilly).
A sample DLL skeleton for a Delphi DLL that implements a python extension might look like
this, taken from the Demo09 folder in Python4Delphi source distribution:
Project (.dpr) file source:
library demodll;

{$I Definition.Inc}

uses
  SysUtils,
  Classes,
  module in 'module.pas';

exports
  initdemodll;

{$IFDEF MSWINDOWS}
{$E pyd}
{$ENDIF}
{$IFDEF LINUX}
{$SONAME 'demodll'}
{$ENDIF}

begin
end.
Actual extension unit (module.pas):
unit module;

interface

uses PythonEngine;

procedure initdemodll; cdecl;

var
  gEngine : TPythonEngine;
  gModule : TPythonModule;

implementation

function Add( Self, Args : PPyObject ) : PPyObject; far; cdecl;
var
  a, b : Integer;
begin
  with GetPythonEngine do
  begin
    if PyArg_ParseTuple( Args, 'ii:Add', [@a, @b] ) <> 0 then
    begin
      Result := PyInt_FromLong( a + b );
    end
    else
      Result := nil;
  end;
end;

procedure initdemodll;
begin
  try
    gEngine := TPythonEngine.Create(nil);
    gEngine.AutoFinalize := False;
    gEngine.LoadDll;
    gModule := TPythonModule.Create(nil);
    gModule.Engine := gEngine;
    gModule.ModuleName := 'demodll';
    gModule.AddMethod( 'add', @Add, 'add(a,b) -> a+b' );
    gModule.Initialize;
  except
  end;
end;

initialization
finalization
  gEngine.Free;
  gModule.Free;
end.
Note that methods callable from Python must have exactly the signature Self, Args : PPyObject, where Args is a Python tuple (an immutable data structure similar to a vector or array). You then have to parse that tuple, which will contain one or more arguments of various types; each item inside it could be an integer, a string, a tuple, a list, a dictionary, and so on.
You're going to need to learn to call a method on a Python object, as in the Python code img.dim(), to get items from a list, and so on.
Look for wherever PyArg_ParseTuple is defined (Ctrl-click it) and look for other functions that start with the prefix Py, with names like PyList_GetItem. That is the pseudo-OOP naming convention used by Python (PyCATEGORY_MethodName). It's all pretty easy once you see some sample code. Sadly, most of that sample code is in C.
You could probably even use a tool to auto-convert your Python code above into sample C code, then try translating it into Delphi, line by line. But it all sounds like a waste of time to me.
Some more Python API functions to look up and learn:
Py_BuildValue - useful for return values
Py_INCREF and Py_DECREF - necessary for object reference counting.
You will need to know all the memory rules, and ownership rules here.
Related
I'm working on Python wrapper classes for Matlab's dynamic libraries to read Matlab MAT files in Python, and I'm encountering an odd behavior that I cannot explain from the ctypes interface.
The C function signature looks like this:
const mwSize *mxGetDimensions(const mxArray *);
Here, mwSize is a renamed size_t, and mxArray* is an opaque pointer. This function returns the "shape" of the Matlab array. The returned pointer points to a size_t array, which is stored internally within the mxArray object and is not null-terminated (its size is obtained via another function).
To call this function from Python, I set up the library as follows:
libmx = ctypes.cdll.LoadLibrary('libmx.dll')
libmx.mxGetDimensions.restype = ctypes.POINTER(ctypes.c_size_t)
libmx.mxGetDimensions.argtypes = [ctypes.c_void_p]
and after obtaining the mxArray* in VAR, I called:
dims = libmx.mxGetDimensions(VAR)
print(dims[0],dims[1])
VAR is known to be 2-D and has a shape of (1, 13) (validated with a C program), but my Python code returns (55834574849, 0) as c_ulonglong values. The results are consistently garbage across all the variables stored in the test MAT file.
What am I doing wrong? Other library calls using VAR seem to be working properly, so VAR points to a valid object. As stated above, mxGetDimensions() called from a C program works as expected.
Any inputs would be appreciated! Thanks
@Neitsa solved my immediate issue in a comment under the OP, and further investigation of libmx.dll resolved the remaining discrepancy between the Python & C versions.
Because Matlab's libmx.dll goes back a really long time (it was first written in the 32-bit era), the DLL contains multiple versions of its functions for backward compatibility. As it turned out, const mwSize *mxGetDimensions(const mxArray *); is the oldest version of the function, and the associated C header file (matrix.h) has the line #define mxGetDimensions mxGetDimensions_800 to override the function with its newest version. Obviously, Python's ctypes doesn't read the C header file; so it was left to me to sift through the header file to figure out which version of the function to use.
In the end, the proper behavior with POINTER(c_size_t) was obtained when I changed my code to:
libmx = ctypes.cdll.LoadLibrary('libmx.dll')
libmx.mxGetDimensions_800.restype = ctypes.POINTER(ctypes.c_size_t)
libmx.mxGetDimensions_800.argtypes = [ctypes.c_void_p]
dims = libmx.mxGetDimensions_800(VAR)
So, there you have it: Study the associated header file thoroughly if you are wrapping a 3rd-party dynamic/shared library.
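As an aside, the restype/argtypes discipline shown above applies to any pointer-returning function you wrap with ctypes. A minimal sketch, using libc's strchr as a stand-in for a third-party function (assumes a POSIX system, where ctypes.CDLL(None) exposes the C runtime's symbols):

```python
import ctypes

# Load symbols from the current process; on POSIX this includes libc.
libc = ctypes.CDll = ctypes.CDLL(None)

# strchr returns char*; without setting restype, ctypes would assume a
# C int return value and could truncate the 64-bit pointer.
libc.strchr.restype = ctypes.c_char_p
libc.strchr.argtypes = [ctypes.c_char_p, ctypes.c_int]

print(libc.strchr(b"hello", ord("l")))  # b'llo'
```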
I'm just getting started with ctypes and would like to use a C++ class that I have exported in a dll file from within python using ctypes.
So lets say my C++ code looks something like this:
class MyClass {
public:
int test();
...
I would now create a .dll file that contains this class and then load the .dll file in Python using ctypes.
Now how would I create an Object of type MyClass and call its test function? Is that even possible with ctypes? Alternatively I would consider using SWIG or Boost.Python but ctypes seems like the easiest option for small projects.
Besides Boost.Python (which is probably a more friendly solution for larger projects that require one-to-one mapping of C++ classes to Python classes), you could provide a C interface on the C++ side. It's one solution of many, so it has its own trade-offs, but I will present it for the benefit of those who aren't familiar with the technique. For full disclosure, with this approach one wouldn't be interfacing C++ to Python, but C++ to C to Python. Below I include an example that meets your requirements, to show you the general idea of the extern "C" facility of C++ compilers.
//YourFile.cpp (compiled into a .dll or .so file)
#include <new> // For std::nothrow
//Either include a header defining your class, or define it here.

extern "C" // Tells the compiler to use C linkage for the next scope.
{
    // Note: The interface in this linkage region needs to use C only.
    void * CreateInstanceOfClass( void )
    {
        // Note: Inside the function body, I can use C++.
        return new(std::nothrow) MyClass;
    }

    // Thanks Chris.
    void DeleteInstanceOfClass (void *ptr)
    {
        delete reinterpret_cast<MyClass *>(ptr);
    }

    int CallMemberTest(void *ptr)
    {
        // Note: A downside here is the lack of type safety.
        // You could always internally (in the C++ library) save a reference
        // to all pointers created of type MyClass and verify it is an
        // element in that structure.
        //
        // Per comments with Andre, we should avoid throwing exceptions.
        try
        {
            MyClass * ref = reinterpret_cast<MyClass *>(ptr);
            return ref->test();
        }
        catch(...)
        {
            return -1; // assuming -1 is an error condition.
        }
    }
} // End C linkage scope.
You can compile this code with
g++ -fPIC -shared -o test.so test.cpp
#creates test.so in your current working directory.
In your python code you could do something like this (interactive prompt from 2.7 shown):
>>> from ctypes import cdll
>>> stdc=cdll.LoadLibrary("libc.so.6") # or similar to load c library
>>> stdcpp=cdll.LoadLibrary("libstdc++.so.6") # or similar to load c++ library
>>> myLib=cdll.LoadLibrary("/path/to/test.so")
>>> spam = myLib.CreateInstanceOfClass()
>>> spam
[outputs the pointer address of the element]
>>> value=myLib.CallMemberTest(spam)
[does whatever Test does to the spam reference of the object]
I'm sure Boost.Python does something similar under the hood, but perhaps understanding the lower-level concepts is helpful. I would be more excited about this method if you were attempting to access functionality of a C++ library and a one-to-one mapping was not required.
For more information on C/C++ interaction check out this page from Sun: http://dsc.sun.com/solaris/articles/mixing.html#cpp_from_c
The short story is that there is no standard binary interface for C++ in the way that there is for C. Different compilers output different binaries for the same C++ dynamic libraries, due to name mangling and different ways to handle the stack between library function calls.
So, unfortunately, there really isn't a portable way to access C++ libraries in general. But, for one compiler at a time, it's no problem.
This blog post also has a short overview of why this currently won't work. Maybe after C++0x comes out, we'll have a standard ABI for C++? Until then, you're probably not going to have any way to access C++ classes through Python's ctypes.
The answer by AudaAero is very good but not complete (at least for me).
On my system (Debian Stretch x64 with GCC and G++ 6.3.0, Python 3.5.3) I get segfaults as soon as I call a member function that accesses a member value of the class.
By printing pointer values to stdout, I diagnosed that the void* pointer, which is 64 bits wide in the wrappers, was being represented on 32 bits in Python. Thus big problems occur when it is passed back to a member function wrapper.
The solution I found is to change:
spam = myLib.CreateInstanceOfClass()
into:
Class_ctor_wrapper = myLib.CreateInstanceOfClass
Class_ctor_wrapper.restype = c_void_p
spam = c_void_p(Class_ctor_wrapper())
So two things were missing: setting the return type to c_void_p (the default is int) and then creating a c_void_p object (not just an integer).
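The same two steps apply to any pointer-returning C function. A small sketch using libc's malloc (assumes a POSIX system; ctypes.CDLL(None) loads symbols from the current process):

```python
import ctypes

libc = ctypes.CDLL(None)

# Without restype = c_void_p, ctypes would treat the returned pointer as
# a C int -- the 32-bit truncation described above.
libc.malloc.restype = ctypes.c_void_p
libc.malloc.argtypes = [ctypes.c_size_t]
libc.free.argtypes = [ctypes.c_void_p]

ptr = libc.malloc(64)   # a full-width pointer wrapped as c_void_p
assert ptr is not None
libc.free(ptr)
```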
I wish I could have written a comment but I still lack 27 rep points.
Extending AudaAero's and Gabriel Devillers' answers, I would complete the class object instance creation with:
stdc=c_void_p(cdll.LoadLibrary("libc.so.6"))
Using the ctypes c_void_p data type ensures the proper representation of the class object pointer within Python.
Also make sure that the dll's memory management is handled by the dll (memory allocated in the dll should be deallocated in the dll as well, not in Python)!
I ran into the same problem. From trial and error and some internet research (not necessarily from knowing the g++ compiler or C++ very well), I came across this particular solution that seems to be working quite well for me.
//model.hpp
class Model{
public:
    static Model* CreateModel(char* model_name) asm("CreateModel"); // static method, creates an instance of the class
    double GetValue(uint32_t index) asm("GetValue"); // object method
};
#model.py
from ctypes import ...

if __name__ == '__main__':
    # load dll as model_dll

    # Static Method Signature
    fCreateModel = getattr(model_dll, 'CreateModel')  # or model_dll.CreateModel
    fCreateModel.argtypes = [c_char_p]
    fCreateModel.restype = c_void_p

    # Object Method Signature
    fGetValue = getattr(model_dll, 'GetValue')  # or model_dll.GetValue
    fGetValue.argtypes = [c_void_p, c_uint32]  # Notice two params
    fGetValue.restype = c_double

    # Calling the Methods
    obj_ptr = fCreateModel(c_char_p(b"new_model"))
    val = fGetValue(obj_ptr, c_uint32(0))  # pass in obj_ptr as first param of obj method
$ nm -Dg libmodel.so
                 U cbrt@GLIBC_2.2.5
                 U close@GLIBC_2.2.5
00000000000033a0 T CreateModel    # <----- Static Method
                 U __cxa_atexit@GLIBC_2.2.5
                 w __cxa_finalize@GLIBC_2.2.5
                 U fprintf@GLIBC_2.2.5
0000000000002b40 T GetValue       # <----- Object Method
                 w __gmon_start__
...
...
... # Mangled Symbol Names Below
0000000000002430 T _ZN12SHMEMWrapper4HashEPKc
0000000000006120 B _ZN12SHMEMWrapper8info_mapE
00000000000033f0 T _ZN5Model12DestroyModelEPKc
0000000000002b20 T _ZN5Model14GetLinearIndexElll
First, I was able to avoid the extern "C" directive completely by instead using the asm keyword which, to my knowledge, asks the compiler to use a given name instead of the generated one when exporting the function to the shared object lib's symbol table. This allowed me to avoid the weird symbol names that the C++ compiler generates automatically; they look something like the _ZN1... pattern you see above. Then, in a program using Python ctypes, I was able to access the class functions directly using the custom name I gave them. The program looks like fhandle = mydll.myfunc or fhandle = getattr(mydll, 'myfunc') instead of fhandle = getattr(mydll, '_ZN12...myfunc...'). Of course, you could just use the long name; it would make no difference, but I figure the shorter name is a little cleaner and doesn't require using nm to read the symbol table and extract the names in the first place.
Second, in the spirit of Python's style of object-oriented programming, I decided to try passing my class's object pointer in as the first argument of the class object method, just like we pass self in as the first argument in Python object methods. To my surprise, it worked! See the Python section above. Apparently, if you set the first entry in the fhandle.argtypes list to c_void_p and pass in the ptr you get from your class's static factory method, the program should execute cleanly. Class static methods seem to work as one would expect in Python; just use the original function signature.
I'm using g++ 12.1.1, python 3.10.5 on Arch Linux. I hope this helps someone.
I have a bunch of functions that I've written in C and I'd like some code I've written in Python to be able to access those functions.
I've read several questions on here that deal with a similar problem (here and here for example) but I'm confused about which approach I need to take.
One question recommends ctypes and another recommends cython. I've read a bit of the documentation for both, and I'm completely unclear about which one will work better for me.
Basically I've written some python code to do some two dimensional FFTs and I'd like the C code to be able to see that result and then process it through the various C functions I've written. I don't know if it will be easier for me to call the Python from C or vice versa.
You should call C from Python by writing a ctypes wrapper. Cython is for making Python-like code run faster; ctypes is for making C functions callable from Python. What you need to do is the following:
Write the C functions you want to use. (You probably did this already)
Create a shared object (.so, for linux, os x, etc) or dynamically loaded library (.dll, for windows) for those functions. (Maybe you already did this, too)
Write the ctypes wrapper (It's easier than it sounds, I wrote a how-to for that)
Call a function from that wrapper in Python. (This is just as simple as calling any other python function)
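For step 3, the wrapper itself can be only a few lines. As a sketch, here is libc's strlen standing in for your own shared object (assumes POSIX; your own .so from step 2 would be loaded with something like ctypes.CDLL("./mylib.so") instead):

```python
import ctypes

lib = ctypes.CDLL(None)  # stand-in for ctypes.CDLL("./mylib.so")

# Declare the C signature so ctypes converts arguments correctly.
lib.strlen.argtypes = [ctypes.c_char_p]
lib.strlen.restype = ctypes.c_size_t

# Step 4: call it like any other Python function.
print(lib.strlen(b"hello world"))  # 11
```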
If I understand correctly, you have no preference between the two directions, C => Python and Python => C.
In that case I would recommend Cython. It is quite open to many kinds of manipulation, especially, in your case, calling a function that has been written in Python from C.
Here is how it works (public API):
The following example assumes that you have a Python class (self is an instance of it), and that this class has a method (named method) you want to call on it, dealing with the result (here, a double) from C. This function, written in a Cython extension, helps you make that call.
cdef public api double cy_call_func_double(object self, char* method, bint *error):
    if hasattr(self, method):
        error[0] = 0
        return getattr(self, method)()
    else:
        error[0] = 1
        return 0.0
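For intuition, stripped of the C out-parameter, the helper above boils down to this pure-Python logic (the Sim class is a hypothetical stand-in for the simulation object):

```python
# Pure-Python sketch of the Cython helper; the real version writes the
# error flag through a C bint* out-parameter instead of returning it.
def call_func_double(obj, method):
    if hasattr(obj, method):
        return 0, getattr(obj, method)()
    return 1, 0.0

# Hypothetical object exposing the method name used in this answer.
class Sim:
    def initSimulation(self):
        return 3.5

error, result = call_func_double(Sim(), "initSimulation")
print(error, result)  # 0 3.5
```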
On the C side, you'll then be able to perform the call like so:
PyObject *py_obj = ....
...
if (py_obj) {
int error;
double result;
result = cy_call_func_double(py_obj, (char*)"initSimulation", &error);
cout << "Do something with the result : " << result << endl;
}
Where PyObject is a struct provided by the Python/C API.
After obtaining py_obj (by casting a regular Python object in your Cython extension, like this: <PyObject *>my_python_object), you would finally be able to call the initSimulation method on it and do something with the result.
(Here a double, but Cython can deal easily with vectors, sets, ...)
Well, I am aware that what I just wrote can be confusing if you have never written anything using Cython, but it aims to be a short demonstration of the numerous things it can do for you in terms of bridging the two languages.
On the other hand, this approach can take more time than recoding your Python code into C, depending on the complexity of your algorithms.
In my opinion, investing time into learning Cython is worthwhile only if you expect to have this kind of need quite often...
Hope this was at least informative...
Well, you are referring to the two things below:
How to call c function within from python (Extending python)
How to call python function/script from C program (Embedding Python)
For #2, that is, 'Embedding Python',
you may use the code segment below:
#include <Python.h>

int main(int argc, char *argv[]) {
    Py_SetProgramName(argv[0]);  /* optional but recommended */
    Py_Initialize();
    PyRun_SimpleString("from time import time,ctime\n"
                       "print 'Today is',ctime(time())\n");
    /* Or, if you want to run a python file from the C code: */
    //PyRun_SimpleFile(fp, "filename.py");
    Py_Finalize();
    return 0;
}
For #1, that is, 'Extending Python',
the best bet would be to use ctypes (portable across all variants of Python).
>>> from ctypes import *
>>> libc = cdll.msvcrt
>>> print libc.time(None)
1438069008
>>> printf = libc.printf
>>> printf("Hello, %s\n", "World!")
Hello, World!
14
>>> printf("%d bottles of beer\n", 42)
42 bottles of beer
19
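The session above is Python 2 on Windows (cdll.msvcrt). A rough Python 3 equivalent on a POSIX system, calling libc's time() the same way, might look like this:

```python
import ctypes
import time

libc = ctypes.CDLL(None)        # POSIX: symbols from the current process
libc.time.argtypes = [ctypes.c_void_p]
libc.time.restype = ctypes.c_long

t = libc.time(None)
print(t)                        # seconds since the epoch, e.g. 1438069008
assert abs(t - int(time.time())) <= 2
```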
For a detailed guide, you may want to refer to my blog article:
It'll be easier to call C from Python. Your scenario sounds unusual: normally people write most of the code in Python except for the processor-intensive portion, which is written in C. Is the two-dimensional FFT the computationally intensive part of your code?
There's a nice and brief tutorial on this from Digital Ocean here. Short version:
1. Write C Code
You've already done this, so super short example:
#include <stdio.h>
int addFive(int i) {
return i + 5;
}
2. Create Shared Library File
Assuming the above C file is saved as c_functions.c, then to generate the .so file to call from Python, type in your terminal:
cc -fPIC -shared -o c_functions.so c_functions.c
3. Use Your C Code in Python!
Within your python module:
# Access your C code
from ctypes import *
so_file = "./c_functions.so"
c_functions = CDLL(so_file)
# Use your C code
c_functions.addFive(10)
That last line will output 15. You're done!
The answer from BLimitless quoting Digital Ocean is fine for simple cases, but it defaults to allowing int type arguments only. If you need a different type for your input argument, for example a float, you need to add this:
c_functions.addFive.argtypes=[ctypes.c_float]
And if you change the return type, for example to a float, you need this:
c_functions.addFive.restype=ctypes.c_float
I am working on a system which is embedding a Python interpreter, and I need to construct a PyObject* given a string from the C API.
I have a const char* representing a dictionary, in the proper format for eval() to work properly from within Python, ie: "{'bar': 42, 'baz': 50}".
Currently, this is being passed into Python as a PyObject* using the PyUnicode_ API (representing a string), so in my Python interpreter I can successfully write:
foo = eval(myObject.value)
print(foo['bar']) # prints 42
I would like to change this to automatically "eval" the const char* on the C side, and return a PyObject* representing a completed dictionary. How do I go about converting this string into a dictionary in the C API?
There are two basic ways to do this.
The first is to simply call eval the same way you do in Python. The only trick is that you need a handle to the builtins module, because you don't get that for free in the C API. There are a number of ways to do this, but one really easy way is to just import it:
/* or PyEval_GetBuiltins() if you know you're at the interpreter's top level */
PyObject *builtins = PyImport_ImportModule("builtins");
PyObject *eval = PyObject_GetAttrString(builtins, "eval");
PyObject *args = Py_BuildValue("(s)", expression_as_c_string);
PyObject *result = PyObject_Call(eval, args, NULL);
(This is untested code, and it at least leaks references, and doesn't check for NULL return if you want to handle exceptions on the C side… But it should be enough to get the idea across.)
One nice thing about this is that you can use ast.literal_eval in exactly the same way as eval (which means you get some free validation); just change "builtins" to "ast", and "eval" to "literal_eval". But the real win is that you're doing exactly what eval does in Python, which you already know is exactly what you wanted.
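Seen from the Python side, the ast.literal_eval variant does exactly what the question's eval call does, with the free validation that the string is a literal:

```python
import ast

src = "{'bar': 42, 'baz': 50}"  # the string held in the const char*

foo = ast.literal_eval(src)      # safe: only literals are accepted
print(foo['bar'])                # 42

# Arbitrary expressions are rejected rather than executed:
try:
    ast.literal_eval("__import__('os')")
except ValueError:
    print("rejected")
```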
The alternative is to use the compilation APIs. At the really high level, you can just build a Python statement out of "foo = eval(%s)" and PyRun_SimpleString it. Below that, use Py_CompileString to parse and compile the expression (you can also parse and compile in separate steps, but that isn't useful here), then PyEval_EvalCode to evaluate it in the appropriate globals and locals. (If you're not tracking globals yourself, use the interpreter-reflection APIs PyEval_GetLocals and PyEval_GetGlobals.) Note that I'm giving the super-simplified version of each function; often you want to use one of the sibling functions. But you can find them easily in the docs.
I am running to some problems and would like some help. I have a piece code, which is used to embed a python script. This python script contains a function which will expect to receive an array as an argument (in this case I am using numpy array within the python script).
I would like to know how can I pass an array from C to the embedded python script as an argument for the function within the script. More specifically can someone show me a simple example of this.
Really, the best answer here is probably to use numpy arrays exclusively, even from your C code. But if that's not possible, then you have the same problem as any code that shares data between C types and Python types.
In general, there are at least five options for sharing data between C and Python:
Create a Python list or other object to pass.
Define a new Python type (in your C code) to wrap and represent the array, with the same methods you'd define for a sequence object in Python (__getitem__, etc.).
Cast the pointer to the array to intptr_t, or to an explicit ctypes type, or just leave it un-cast; then use ctypes on the Python side to access it.
Cast the pointer to the array to const char * and pass it as a str (or, in Py3, bytes), and use struct or ctypes on the Python side to access it.
Create an object matching the buffer protocol, and again use struct or ctypes on the Python side.
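Option 4 can be seen from the Python side with the standard struct module; here struct.pack stands in for the raw bytes the C code would hand over:

```python
import struct

# Stand-in for the const char* buffer the C side would pass: three
# little-endian 32-bit ints, matching a C int array on most platforms.
raw = struct.pack("<3i", 10, 20, 30)

values = struct.unpack("<3i", raw)
print(values)  # (10, 20, 30)
```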
In your case, you want to use numpy.arrays in Python. So, the general cases become:
Create a numpy.array to pass.
(probably not appropriate)
Pass the pointer to the array as-is, and from Python, use ctypes to get it into a type that numpy can convert into an array.
Cast the pointer to the array to const char * and pass it as a str (or, in Py3, bytes), which is already a type that numpy can convert into an array.
Create an object matching the buffer protocol, and which again I believe numpy can convert directly.
For 1, here's how to do it with a list, just because it's a very simple example (and I already wrote it…):
PyObject *makelist(int array[], size_t size) {
    PyObject *l = PyList_New(size);
    for (size_t i = 0; i != size; ++i) {
        PyList_SET_ITEM(l, i, PyInt_FromLong(array[i]));
    }
    return l;
}
And here's the numpy.array equivalent (assuming you can rely on the C array not to be deleted—see Creating arrays in the docs for more details on your options here):
PyObject *makearray(int array[], size_t size) {
    npy_intp dim = size;
    return PyArray_SimpleNewFromData(1, &dim, NPY_INT, (void *)array);
}
At any rate, however you do this, you will end up with something that looks like a PyObject * from C (and has a single refcount), so you can pass it as a function argument, while on the Python side it will look like a numpy.array, list, bytes, or whatever else is appropriate.
Now, how do you actually pass function arguments? Well, the sample code in Pure Embedding that you referenced in your comment shows how to do this, but doesn't really explain what's going on. There's actually more explanation in the extending docs than the embedding docs, specifically, Calling Python Functions from C. Also, keep in mind that the standard library source code is chock full of examples of this (although some of them aren't as readable as they could be, either because of optimization, or just because they haven't been updated to take advantage of new simplified C API features).
Skip the first example about getting a Python function from Python, because presumably you already have that. The second example (and the paragraph right above it) shows the easy way to do it: creating an argument tuple with Py_BuildValue. So, let's say we want to call a function you've got stored in myfunc with the list mylist returned by that makelist function above. Here's what you do:
if (!PyCallable_Check(myfunc)) {
    PyErr_SetString(PyExc_TypeError, "function is not callable?!");
    return NULL;
}
PyObject *arglist = Py_BuildValue("(O)", mylist);
PyObject *result = PyObject_CallObject(myfunc, arglist);
Py_DECREF(arglist);
return result;
You can skip the callable check if you're sure you've got a valid callable object, of course. (And it's usually better to check when you first get myfunc, if appropriate, because you can give both earlier and better error feedback that way.)
If you want to actually understand what's going on, try it without Py_BuildValue. As the docs say, the second argument to PyObject_CallObject is a tuple, and PyObject_CallObject(callable_object, args) is equivalent to apply(callable_object, args), which is equivalent to callable_object(*args). So, if you wanted to call myfunc(mylist) in Python, you have to turn that into, effectively, myfunc(*(mylist,)) so you can translate it to C. You can construct a tuple like this:
PyObject *arglist = PyTuple_Pack(1, mylist);
But usually, Py_BuildValue is easier (especially if you haven't already packed everything up as Python objects), and the intention in your code is clearer (just as using PyArg_ParseTuple is simpler and clearer than using explicit tuple functions in the other direction).
So, how do you get that myfunc? Well, if you've created the function from the embedding code, just keep the pointer around. If you want it passed in from the Python code, that's exactly what the first example does. If you want to, e.g., look it up by name from a module or other context, the APIs for concrete types like PyModule and abstract types like PyMapping are pretty simple, and it's generally obvious how to convert Python code into the equivalent C code, even if the result is mostly ugly boilerplate.
Putting it all together, let's say I've got a C array of integers, and I want to import mymodule and call a function mymodule.myfunc(mylist) that returns an int. Here's a stripped-down example (not actually tested, and no error handling, but it should show all the parts):
int callModuleFunc(int array[], size_t size) {
    PyObject *mymodule = PyImport_ImportModule("mymodule");
    PyObject *myfunc = PyObject_GetAttrString(mymodule, "myfunc");
    PyObject *mylist = PyList_New(size);
    for (size_t i = 0; i != size; ++i) {
        PyList_SET_ITEM(mylist, i, PyInt_FromLong(array[i]));
    }
    PyObject *arglist = Py_BuildValue("(O)", mylist);
    PyObject *result = PyObject_CallObject(myfunc, arglist);
    int retval = (int)PyInt_AsLong(result);
    Py_DECREF(result);
    Py_DECREF(arglist);
    Py_DECREF(mylist);
    Py_DECREF(myfunc);
    Py_DECREF(mymodule);
    return retval;
}
If you're using C++, you probably want to look into some kind of scope-guard/janitor/etc. to handle all those Py_DECREF calls, especially once you start doing proper error handling (which usually means early return NULL calls peppered through the function). If you're using C++11 or Boost, std::unique_ptr<PyObject, decltype(&Py_DecRef)> may be all you need.
But really, a better way to reduce all that ugly boilerplate, if you plan to do a lot of C<->Python communication, is to look at all of the familiar frameworks designed for improving extending Python—Cython, boost::python, etc. Even though you're embedding, you're effectively doing the same work as extending, so they can help in the same ways.
For that matter, some of them also have tools to help the embedding part, if you search around the docs. For example, you can write your main program in Cython, using both C code and Python code, and cython --embed. You may want to cross your fingers and/or sacrifice some chickens, but if it works, it's amazingly simple and productive. Boost isn't nearly as trivial to get started, but once you've got things together, almost everything is done in exactly the way you'd expect, and just works, and that's just as true for embedding as extending. And so on.
The Python function will need a Python object to be passed in. Since you want that Python object to be a NumPy array, you should use one of the NumPy C-API functions for creating arrays; PyArray_SimpleNewFromData() is probably a good start. It will use the buffer provided, without copying the data.
That said, it is almost always easier to write the main program in Python and use a C extension module for the C code. This approach makes it easier to let Python do the memory management, and the ctypes module together with Numpy's cpython extensions make it easy to pass a NumPy array to a C function.