I'm trying to use the C library libnfc for NFC devices (http://nfc-tools.org/) from Python. With C I can run the example program and it works fine. I downloaded the pynfc package (https://code.google.com/p/pynfc/), which should allow the library to be used from Python, and ran the command "sudo python setup.py build_ext" as instructed in the README, but I get the following error:
running build_ext
building '_nfc' extension
swigging nfc.i to nfc_wrap.c
swig -python -I/usr/include -module nfc -interface _nfc -O -o nfc_wrap.c nfc.i
nfc/nfc.h:1489: Error: Syntax error in input(3).
error: command 'swig' failed with exit status 1
This is the content of nfc.h around the reported line 1489:
typedef struct {
  PyObject_HEAD
  void *ptr; // <- line 1489
  swig_type_info *ty;
  int own;
  PyObject *next;
#ifdef SWIGPYTHON_BUILTIN
  PyObject *dict;
#endif
} SwigPyObject;
I'm using Linux Mint 15.
Thanks to everyone who takes the time to read this!
SWIG is a software development tool that connects programs written in C and C++ with a variety of high-level programming languages.
So, just try:
apt-get install swig
I have a C++ program that is already pretty far along in doing its own job, and now we'd like to add a piece of functionality to it. We thought that implementing that functionality in Python and calling the Python script with the required inputs from the C++ code when needed would be the best way to go, since it keeps the two separated and lets us use the Python script from elsewhere too.
As a first step I decided to make a test program to see how this would work, and that seems to have been a good idea, because I can't get it to work.
How do I run a separate Python script from C++?
I have tried following this guide, and while it seems good, it doesn't give any information on which compiler options I should build this with.
I have two files, cpp.cpp and python.py
This is my cpp.cpp file:
#include <stdio.h>
#include <stdlib.h>
#include <iostream>
#include <ncurses.h>
#include <Python.h>
using namespace std;
int main() {
    std::cout << "C++ program started!\n";

    char filename[] = "python.py";
    FILE* fp;

    Py_Initialize();

    fp = _Py_fopen(filename, "r");
    PyRun_SimpleFile(fp, filename);

    Py_Finalize();

    std::cout << "C++ program is ending!\n";
    return 0;
}
and my Python file is just two print lines:
print('External Python program running...')
print('Hello World from Python program')
I then try to compile this, giving it all the includes it seems to want, and then execute the output file:
g++ -I . -I /home/ahomm/python3.6/Include -I /home/ahomm/python3.6/release cpp.cpp && ./a.out
This is the output I get:
/tmp/cccQsh1p.o: In function `main':
cpp.cpp:(.text+0x3f): undefined reference to `Py_Initialize'
cpp.cpp:(.text+0x52): undefined reference to `_Py_fopen'
cpp.cpp:(.text+0x70): undefined reference to `PyRun_SimpleFileExFlags'
cpp.cpp:(.text+0x75): undefined reference to `Py_Finalize'
collect2: error: ld returned 1 exit status
What am I missing? Is something just slightly off, or completely wrong?
The .cpp and .py files are located in the same directory.
And how do I then read the output of the Python script in C++? I haven't even got to that yet...
You have to link your code with libpython3.x.a / python3.x.lib (x being the version of Python you use). Which file to link, *.a or *.lib, depends on your OS. The files come with the Python distribution.
Here is a CMake configuration that works for me:
cmake_minimum_required(VERSION 2.8.9)
project (embpy)
add_executable(embpy embpy.cpp)
target_include_directories(embpy PRIVATE /path-to-python/Python38/include/python3.8)
target_link_libraries(embpy /path-to-python/Python38/lib/libpython3.8.a)
the embpy.cpp is the same as yours
Figured it out myself in the end; the problem was incomplete compiler arguments.
This is what I got it to work with:
g++ -fPIC $(python3.6-config --cflags) cpp.cpp $(python3.6-config --ldflags)
The key missing parts were $(python3.6-config --cflags) before and $(python3.6-config --ldflags) after the file to be compiled. The first gives g++ the compile options and the latter gives the flags for linking.
Found the solution in the Python docs, section 1.6.
I'm trying to embed Cython code into C following the O'Reilly Cython book, chapter 8. I found this paragraph in Cython's documentation but still don't know what I should do:
If the C code wanting to use these functions is part of more than one shared library or executable, then import_modulename() function needs to be called in each of the shared libraries which use these functions. If you crash with a segmentation fault (SIGSEGV on linux) when calling into one of these api calls, this is likely an indication that the shared library which contains the api call which is generating the segmentation fault does not call the import_modulename() function before the api call which crashes.
I'm running Python 3.4, Cython 0.23 and GCC 5 on OS X. The source files are transcendentals.pyx and main.c:
main.c
#include "transcendentals_api.h"
#include <math.h>
#include <stdio.h>
int main(int argc, char **argv)
{
    Py_SetPythonHome(L"/Users/spacegoing/anaconda");
    Py_Initialize();
    import_transcendentals();
    printf("pi**e: %f\n", pow(get_pi(), get_e()));
    Py_Finalize();
    return 0;
}
transcendentals.pyx
cdef api double get_pi():
    return 3.1415926

cdef api double get_e():
    print("calling get_e()")
    return 2.718281828
I'm compiling those files using setup.py and Makefile:
setup.py:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize
setup(
    ext_modules=cythonize([
        Extension("transcendentals", ["transcendentals.pyx"])
    ])
)
Makefile
python-config=/Users/spacegoing/anaconda/bin/python3-config
ldflags:=$(shell $(python-config) --ldflags)
cflags:=$(shell $(python-config) --cflags)
a.out: main.c transcendentals.so
	gcc-5 $(cflags) $(ldflags) transcendentals.c main.c

transcendentals.so: setup.py transcendentals.pyx
	python setup.py build_ext --inplace
	cython transcendentals.pyx

clean:
	rm -r a.out a.out.dSYM build transcendentals.[ch] transcendentals.so transcendentals_api.h
However, I get the error Segmentation fault: 11. Any ideas that could help with this? Thanks!
In that Makefile there is
transcendentals.so: setup.py transcendentals.pyx
	python setup.py build_ext --inplace
Unless python refers to /Users/spacegoing/anaconda/bin/python3, it should be replaced, since the module may otherwise be compiled for the wrong Python version and thus cannot be loaded.
In main.c there is a call to import_transcendentals() that does not check the return value, i.e. whether the import fails or succeeds. In case of failure, get_pi() and get_e() point to invalid memory locations and trying to call them causes a segmentation fault.
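A minimal sketch of such a check, assuming the usual convention that the generated import helper returns 0 on success and a negative value on failure:
if (import_transcendentals() < 0) {
    /* the import failed; report why instead of crashing later */
    PyErr_Print();
    fprintf(stderr, "import_transcendentals() failed\n");
    Py_Finalize();
    return 1;
}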
Also, the module has to be located somewhere it can be found. It seems that when embedding, the current directory is not searched for Python modules. The PYTHONPATH environment variable could be changed to include the directory where transcendentals.so is located.
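As a sketch of one alternative that avoids touching the environment, the embedded interpreter's module search path can also be extended from the C side, after Py_Initialize() and before the import call; the "." entry here is just an example directory:
/* make the current directory visible to the embedded interpreter */
PyRun_SimpleString("import sys; sys.path.insert(0, '.')");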
The following is an alternative way of embedding the code in the C program; it sidesteps the import issues since the module code is linked into the executable.
Essentially, a call to PyInit_transcendentals() is missing.
The file transcendentals.h will be generated when the Cython functions are declared public, i.e.
cdef public api double get_pi():
    ...

cdef public api double get_e():
    ...
Your main.c should have the include directives
#include <Python.h>
#include "transcendentals.h"
and then in main
Py_Initialize();
PyInit_transcendentals();
There should be no #include "transcendentals_api.h" and no import_transcendentals()
The first reason is that according to the documentation
However, note that you should include either modulename.h or
modulename_api.h in a given C file, not both, otherwise you may get
conflicting dual definitions.
The second reason is that, since transcendentals.c is linked into the program in
gcc $(cflags) $(ldflags) transcendentals.c main.c
there is no reason to import the transcendentals module. The module has to be initialized, though; PyInit_transcendentals() does that for Python 3.
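Putting the pieces together, main.c for this linked-in variant could look roughly like the sketch below; the NULL check on the value returned by PyInit_transcendentals() is optional extra care, and Py_SetPythonHome() is kept from the original code:
#include <Python.h>
#include "transcendentals.h"
#include <math.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    Py_SetPythonHome(L"/Users/spacegoing/anaconda");
    Py_Initialize();                        /* start the embedded interpreter */
    if (PyInit_transcendentals() == NULL) { /* initialize the linked-in module (Python 3) */
        PyErr_Print();
        Py_Finalize();
        return 1;
    }
    printf("pi**e: %f\n", pow(get_pi(), get_e()));
    Py_Finalize();
    return 0;
}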
I'm trying to compile some SWIG bindings from a wireless communications library (http://www.yonch.com/wireless) that also uses the IT++ library. I am using SWIG version 2.0.11 on Ubuntu 14.04.
This is the error I am getting when trying to build:
/usr/include/itpp/base/binary.h:162: Error: Syntax error in input(1)
Here is line 162 from binary.h:
ITPP_EXPORT std::ostream &operator<<(std::ostream &output, const bin &inbin);
If the rest of that file is needed it can be found here: http://montecristo.co.it.pt/itpp/binary_8h_source.html
This is the SWIG command line call that is being used:
/usr/bin/swig -c++ -python -I/home/user/anaconda/include/python2.7 -I../../../include -I/usr/include -I../../../bindings/itpp -I../../../bindings/itpp/.. -DHAVE_CONFIG_H -o base_sparse.cpp ../../../bindings/itpp/base_sparse.i
I have almost no experience with SWIG and can't seem to see what about the code would be causing that syntax error. Any insights would be greatly appreciated!
Export macros like ITPP_EXPORT are not understood by SWIG.
I usually add a
#define ITPP_EXPORT
to the .i file after the %{ ... %} block that includes the C/C++ headers and before they are pulled in for wrapping with
%include "Someheader.h"
Can I use Cython to create a shared library with exported C functions that have Python code at their core? Like wrapping Python with C??
It is to be used in plugins.
Using Cython, you can write functions declared as C ones with the cdef keyword (and public... important!), with Python code inside:
yourext.pyx
cdef public int func1(unsigned long l, float f):
    print(f)  # some python code
    return 0  # return a C int, as declared
Note: in the following it is assumed that we are working in the root of drive D:\
Building (setup.py)
from distutils.core import setup
from Cython.Distutils import build_ext
from Cython.Build import cythonize

setup(
    cmdclass = {'build_ext': build_ext},
    name = 'My app',
    ext_modules = cythonize("yourext.pyx"),
)
Then run python setup.py build_ext --inplace
After running the setup.py (if you are using distutils), you'll get 2 files of interest:
yourext.h
yourext.c
Looking into the .c file will show you that func1 is, in the end, a C function.
Those two files are all we need to do the rest.
C main program for testing
// test.c
#include "Python.h"
#include "yourext.h"

int main()
{
    Py_Initialize();  // start the Python interpreter
    inityourext();    // initialize the yourext module (Python 2 naming)
    func1(12, 3.0);   // use the function from the shared library...
    Py_Finalize();
    return 0;
}
As we don't use the extension (.pyd) by itself, we need to make a little trick/hack in the header file to disable the "DLL behavior". Add the following at the beginning of "yourext.h":
#undef DL_IMPORT         // undefine the DL_IMPORT macro
#define DL_IMPORT(t) t   // redefine it to do nothing...

__PYX_EXTERN_C DL_IMPORT(int) func1(unsigned long, float);
Compiling "yourext" as a shared library
gcc -shared yourext.c -IC:\Python27\include -LC:\Python27\libs -lpython27 -o libyourext.dll
Then compiling our test program (linking to the DLL)
gcc test.c -IC:\Python27\include -LC:\Python27\libs -LD:\ -lpython27 -lyourext -o test.exe
Finally, run the program
$ test
3.0
This is not obvious, and there are many other ways to achieve the same thing, but this works (have a look at boost::python, ...; other solutions may better fit your needs).
I hope this answers your question a little bit or, at least, gives you an idea...
I am trying to compile a simple C extension on a Mac to use with Python, and everything works well from the command line. The code and the gcc command that work are presented below.
Now I am trying to build the same extension in Xcode 4.5 (Mac OS X 10.8), and I tried several target settings for either a dylib or a static library, but I always get a file that cannot be loaded in Python, showing the error:
./myModule.so: unknown file type, first eight bytes: 0x21 0x3C 0x61 0x72 0x63 0x68 0x3E 0x0A
My ultimate goal is to create a workspace in Xcode with the source code of a C/C++ extension and a Python script that calls it, so that if I need to debug the C/C++ extension I have Xcode's debugging capabilities. I am aware that Xcode does not step into the Python script, but it can run it, correct?
gcc -shared -arch i386 -arch x86_64 -L/usr/lib/python2.7 -framework python -I/usr/include/python2.7 -o myModule.so myModule.c -v
#include <Python.h>
/*
* Function to be called from Python
*/
static PyObject* py_myFunction(PyObject* self, PyObject* args)
{
    char *s = "Hello from C!";
    return Py_BuildValue("s", s);
}

/*
 * Another function to be called from Python
 */
static PyObject* py_myOtherFunction(PyObject* self, PyObject* args)
{
    double x, y;
    PyArg_ParseTuple(args, "dd", &x, &y);
    return Py_BuildValue("d", x*y);
}

/*
 * Bind Python function names to our C functions
 */
static PyMethodDef myModule_methods[] = {
    {"myFunction", py_myFunction, METH_VARARGS},
    {"myOtherFunction", py_myOtherFunction, METH_VARARGS},
    {NULL, NULL}
};

/*
 * Python calls this to let us initialize our module
 */
void initmyModule()
{
    (void) Py_InitModule("myModule", myModule_methods);
}
This guy seems to be having the same problem.
I've figured out the problem. Even though I changed the setting in Xcode to specify the output type "dynamic library" or "bundle", Xcode was ignoring the setting. Starting a new BSD dynamic library project solved the issues I was seeing. Thanks for the help!
I've had success debugging unit-tested C extensions in Xcode 4.6 using setuptools, virtualenv, unittest and GDB as the debugger.
I use virtualenvwrapper to create a virtualenv for the project and then set ~/.virtualenvs/module_name/bin/python as the executable to debug.
The single argument to pass to the virtualenv python interpreter in the Run configuration is the path to your test.py.
I then set GDB, rather than None, as the debugger, launching it automatically.
The last step is to pass "setup.py install" as the arguments to your build tool (~/.virtualenvs/module_name/bin/python) on your test target's External Build Tool Configuration pane. The virtualenv provides a fairly simple way for you to install the shared object for your C extension into the test script python interpreter's library path without actually installing it into the global site-packages for your host.
With this setup I can call the extension code from a Python script (the ultimate aim) and still debug the C code using Xcode's GUI debug support.
If I haven't described this clearly please let me know and I'll share an example project.