So I have a C program that I am running from Python, but I am getting a segmentation fault. When I run the C program on its own, it runs fine. The C program interfaces with a fingerprint sensor using the fprint lib.
#include <poll.h>
#include <stdlib.h>
#include <sys/time.h>
#include <stdio.h>
#include <libfprint/fprint.h>
int main(){
struct fp_dscv_dev **devices;
struct fp_dev *device;
struct fp_img **img;
int r;
r=fp_init();
if(r<0){
printf("Error");
return 1;
}
devices=fp_discover_devs();
if(devices){
device=fp_dev_open(*devices);
fp_dscv_devs_free(devices);
}
if(device==NULL){
printf("NO Device\n");
return 1;
}else{
printf("Yes\n");
}
int caps;
caps=fp_dev_img_capture(device,0,img);
printf("bloody status %i \n",caps);
//save the fingerprint image to file. ** this is the block that causes the segmentation fault.
int imrstx;
imrstx=fp_img_save_to_file(*img,"enrolledx.pgm");
fp_img_free(*img);
fp_exit();
return 0;
}
The Python code:
from ctypes import *
so_file = "/home/arkounts/Desktop/pythonsdk/capture.so"
my_functions = CDLL(so_file)
a=my_functions.main()
print(a)
print("Done")
The capture.so is built and accessed in Python, but when calling it from Python I get a segmentation fault. What could be my problem?
Thanks a lot.
Although I am unfamiliar with libfprint, after taking a look at your code and comparing it with the documentation, I see two issues that can both cause a segmentation fault:
First issue:
According to the documentation of the function fp_discover_devs, NULL is returned on error. On success, a NULL-terminated list is returned, which may be empty.
In the following code, you check for failure/success, but don't check for an empty list:
devices=fp_discover_devs();
if(devices){
device=fp_dev_open(*devices);
fp_dscv_devs_free(devices);
}
If devices is non-NULL, but empty, then devices[0] (which is equivalent to *devices) is NULL. In that case, you pass this NULL pointer to fp_dev_open. This may cause a segmentation fault.
I don't think that this is the reason for your segmentation fault though, because this error in your code would only be triggered if an empty list were returned.
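For completeness, a minimal sketch of what the empty-list check could look like (same functions and variables as in your code; the error messages are just placeholders):
devices = fp_discover_devs();
if (devices == NULL) {
    printf("Discovery failed\n");
    return 1;
}
if (devices[0] == NULL) {          /* non-NULL but empty list */
    printf("No device found\n");
    fp_dscv_devs_free(devices);
    return 1;
}
device = fp_dev_open(devices[0]);  /* devices[0] is known to be valid here */
fp_dscv_devs_free(devices);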
Second issue:
The last parameter of fp_dev_img_capture should be a pointer to an allocated variable of type struct fp_img *. This tells the function the address of the variable that it should write to. However, with the code
struct fp_img **img;
[...]
caps=fp_dev_img_capture(device,0,img);
you are passing that function a wild pointer, because img does not point to any valid object. This can cause a segmentation fault as soon as the wild pointer is dereferenced by the function or cause some other kind of undefined behavior, such as overwriting other variables in your program.
I suggest you write the following code instead:
struct fp_img *img;
[...]
caps=fp_dev_img_capture(device,0,&img);
Now the third parameter is pointing to a valid object (to the variable img).
Since img is now a single pointer and not a double pointer, you must pass img instead of *img to the functions fp_img_save_to_file and fp_img_free.
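Put together, the capture block could then look like this (just a sketch; the success check on the return value and the fp_dev_close call are additions of mine, not taken from your original code):
struct fp_img *img = NULL;
int caps = fp_dev_img_capture(device, 0, &img);
printf("bloody status %i \n", caps);
if (caps == 0 && img != NULL) {
    fp_img_save_to_file(img, "enrolledx.pgm");   /* img, not *img */
    fp_img_free(img);                            /* img, not *img */
}
fp_dev_close(device);
fp_exit();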
This second issue is probably the reason for your segmentation fault. It seems that you were just "lucky" that your program did not segfault as a standalone program.
Related
I'm stuck wrapping a C DLL with Python.
The DLL side is provided as a black box, and I made an additional wrapper class.
Python gives the C DLL a numpy ndarray; the C DLL processes it and returns it.
When I delete the array in C, the whole program dies, with or without a deep copy. Below is brief code to reproduce the problem.
Python main:
import mypyd, numpy
myobject = mypyd.myimpro()
image = readimg() #ndarray
result = myobject.process(image)  # the process dies here with exit code -1073741819
DLL main header:
BOOST_PYTHON_MODULE(mypyd){
class_<mywrapclass>("myimpro")
.def("process", &mywrapclass::process);
}
C wrapper header:
namespace np = boost::python::numpy;
class mywrapclass{
dllhandler m_handler;
public:
mywrapclass();
np::ndarray process(np::ndarray);
};
C wrapper code:
mywrapclass::mywrapclass(){
Py_Initialize();
np::initialize();
m_handler = initInDLL();
}
np::ndarray mywrapclass::process(np::ndarray input){
myimagestruct image = ndarray2struct(input); // deep copy of the data array
myimagestruct result = processInDLL(m_handler, image);
np::ndarray ret = struct2ndarray(result); // also a deep copy of the data array, just in case
return ret;
}
C header for the DLL (I can't modify this code; it comes with the DLL):
typedef struct {
void *data;
/* ... size, type, etc. */
} myimagestruct;
typedef void * dllhandler;
dllhandler initInDLL();
myimagestruct processInDLL(dllhandler, myimagestruct);
Sorry for the rough code. Initially I naively thought boost.python or deep copying would solve this heap memory problem. I have tried almost every keyword I could think of, but couldn't even find a point to start from.
The DLL works perfectly without the boost wrapping, so the problem must be on my side.
Thanks for reading anyway.
I'm relatively new to Python and this is my first attempt at writing a C extension.
Background
In my Python 3.X project I need to load and parse large binary files (10-100 MB) to extract data for further processing. Due to the low performance of doing this in Python, I decided to go for a C extension to speed up the loading part.
The standalone C code outperforms Python by a factor of 20x-500x, so I am pretty satisfied with it.
The problem: the memory keeps growing when I invoke the function from my C-extension multiple times within the same Python module.
my_c_ext.c
#include <Python.h>
#include <numpy/arrayobject.h>
#include "my_c_ext.h"
static unsigned short *X, *Y;
static PyObject* c_load(PyObject* self, PyObject* args)
{
char *filename;
if(!PyArg_ParseTuple(args, "s", &filename))
return NULL;
PyObject *PyX, *PyY;
__load(filename);
npy_intp dims[1] = {n_events};
PyX = PyArray_SimpleNewFromData(1, dims, NPY_UINT16, X);
PyArray_ENABLEFLAGS((PyArrayObject*)PyX, NPY_ARRAY_OWNDATA);
PyY = PyArray_SimpleNewFromData(1, dims, NPY_UINT16, Y);
PyArray_ENABLEFLAGS((PyArrayObject*)PyY, NPY_ARRAY_OWNDATA);
PyObject *xy = Py_BuildValue("NN", PyX, PyY);
return xy;
}
...
//More Python C-extension boilerplate (methods, etc..)
...
void __load(char *filename) {
// open file, extract frame header and compute new_size
X = realloc(X, new_size * sizeof(*X));
Y = realloc(Y, new_size * sizeof(*Y));
X[i] = ...
Y[i] = ...
return;
}
test.py
import my_c_ext as ce
binary_files = ['file1.bin',...,'fileN.bin']
for f in binary_files:
x,y = ce.c_load(f)
del x,y
Here I am deleting the returned objects in the hope of lowering the memory usage.
After reading several posts (e.g. this, this and this), I am still stuck.
I tried adding/removing the PyArray_ENABLEFLAGS call that sets the NPY_ARRAY_OWNDATA flag without seeing any difference. It is not yet clear to me whether NPY_ARRAY_OWNDATA implies a free(X) in C. If I explicitly free the arrays in C, I run into a segfault when trying to load the second file in the for loop in test.py.
Any idea of what am I doing wrong?
This looks like a memory management disaster. NPY_ARRAY_OWNDATA should cause it to call free on the data (or at least PyArray_free which isn't necessarily the same thing...).
However, once this is done, you still have the global variables X and Y pointing to a now-invalid area of memory. You then call realloc on those invalid pointers. At this point you're well into undefined behaviour, so anything could happen.
If it's a global variable, then the memory needs to be managed globally, not by NumPy. If the memory is managed by the NumPy array, then you need to make sure you keep no other way of accessing it except through that NumPy array. Anything else is going to cause you problems.
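One way to reconcile that is to drop the globals entirely and allocate fresh buffers on every call, so each returned array really is the sole owner of its data. A rough sketch (the helper __load_into is hypothetical: a variant of your __load that fills caller-provided pointers with malloc'd buffers and returns the element count):
/* in my_c_ext.c, after the existing #include lines */
static npy_intp __load_into(const char *filename, unsigned short **X, unsigned short **Y); /* hypothetical helper */

static PyObject* c_load(PyObject* self, PyObject* args)
{
    char *filename;
    if (!PyArg_ParseTuple(args, "s", &filename))
        return NULL;

    /* freshly malloc'd on every call, no static state */
    unsigned short *X = NULL, *Y = NULL;
    npy_intp n_events = __load_into(filename, &X, &Y);

    npy_intp dims[1] = {n_events};
    PyObject *PyX = PyArray_SimpleNewFromData(1, dims, NPY_UINT16, X);
    PyObject *PyY = PyArray_SimpleNewFromData(1, dims, NPY_UINT16, Y);
    /* each array now owns its buffer and releases it when garbage collected */
    PyArray_ENABLEFLAGS((PyArrayObject*)PyX, NPY_ARRAY_OWNDATA);
    PyArray_ENABLEFLAGS((PyArrayObject*)PyY, NPY_ARRAY_OWNDATA);
    return Py_BuildValue("NN", PyX, PyY);
}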
After following this answer, I wrote a C library which I call from Python using ctypes. The C library contains a function which takes a char * and modifies its contents. I would like to read the contents in Python; however, after calling the library function in Python, I get an empty string, even though I see the correct output in the terminal if I include a printf statement in the C code. What am I doing wrong?
C code:
void somefunction(char * pythonbuffer)
{
/* do something */
printf("%s", pythonbuffer);
}
Python code:
lib.somefunction.argtypes = [ctypes.c_char_p]
lib.somefunction.restype = ctypes.c_void_p
buffer = ctypes.create_string_buffer(upper_limit)
lib.somefunction(buffer)
print(buffer.value)
Output:
b''
Found the problem. It turns out that the code in the /* do something */ part somehow changed the address of pythonbuffer. I solved it by using a temporary char * and then copying it into pythonbuffer with strcpy.
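A minimal sketch of that fix (the local buffer size and the placeholder content are made up for illustration; the result must of course fit within the upper_limit used for the ctypes buffer):
#include <stdio.h>
#include <string.h>

void somefunction(char *pythonbuffer)
{
    char temp[256];                     /* must fit in the ctypes buffer */

    /* do something: write into temp instead of reassigning pythonbuffer */
    snprintf(temp, sizeof temp, "some result");

    strcpy(pythonbuffer, temp);         /* copy back into the caller's buffer */
    printf("%s", pythonbuffer);
}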
In Python, you can get the memory location of a variable by using the id function, so:
X = "Hello world!"
print(id(X)) # Output is equal to 139806692112112 (0x7F27483876F0)
I tried to access the variable with a pointer in C (the other program is, of course, still running):
#include <stdio.h>
int main(void){
char *x = (char *) 0x7F27483876F0;
printf("%s\n", x);
return 0;
}
The code compiles with no errors or warnings, but when I run the program, the OS gives a segmentation fault. How can I solve this problem?
Or is it possible at all?
Doing something like this is increasingly impossible these days. With features like address space layout randomization you can't really tell where a given program, let alone a variable, will end up in actual memory. More fundamentally, each process has its own virtual address space, so an address obtained from id() in the Python process means nothing inside your separate C program.
Your best bet is to use some type of message passing. Not sure why all the downvotes on your question; it seems like a reasonably put question, even if it is not technically feasible these days.
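As one possible form of message passing (just a sketch, assuming the Python side creates a FIFO at a made-up path /tmp/py2c with os.mkfifo and writes the string into it), the C side could read the data instead of dereferencing a foreign address:
#include <stdio.h>

int main(void){
    char buf[256];
    FILE *fifo = fopen("/tmp/py2c", "r");   /* blocks until the Python side opens it for writing */
    if (fifo == NULL) {
        perror("fopen");
        return 1;
    }
    if (fgets(buf, sizeof buf, fifo) != NULL)
        printf("received: %s\n", buf);
    fclose(fifo);
    return 0;
}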
I am writing a Python Extension in C (on Linux (Ubuntu 14.04)) and ran into an issue with dynamic memory allocation. I searched through SO and found several posts on free() calls causing similar errors because free() tries to free memory that is not dynamically allocated. But I don't know if/how that is a problem in the code below:
#include <Python.h>
#include <stdio.h>
#include <stdlib.h>
static PyObject* tirepy_process_data(PyObject* self, PyObject *args)
{
FILE* rawfile = NULL;
char* rawfilename = (char *) malloc(128*sizeof(char));
if(rawfilename == NULL)
printf("malloc() failed.\n");
memset(rawfilename, 0, 128);
int num_datapts = 0; /* just some integer variable */
if (!PyArg_ParseTuple(args, "si", &rawfilename, &num_datapts)) {
return NULL;
}
/* Here I am going top open the file, read the contents and close it again */
printf("Raw file name is: %s \n", rawfilename);
free(rawfilename);
return Py_BuildValue("i", num_datapts);
}
The output is:
Raw file name is: \home\location_to_file\
*** Error in `/usr/bin/python': free(): invalid pointer: 0xb7514244 ***
According to the documentation:
These formats allow to access an object as a contiguous chunk of memory. You don’t have to provide raw storage for the returned unicode or bytes area. Also, you won’t have to release any memory yourself, except with the es, es#, et and et# formats.
(Emphasis added by me.)
So you do not need to allocate memory with malloc() first, and you do not need to free() the memory afterwards.
Your error occurs because PyArg_ParseTuple with the s format overwrites your rawfilename pointer with a pointer to a buffer managed by Python. Calling free() on that pointer fails because it was never returned by malloc(), and the buffer you allocated beforehand is simply leaked.
Please consult the API docs for PyArg_ParseTuple: https://docs.python.org/2/c-api/arg.html
You shall NOT pass a pointer to allocated memory, nor shall you free it afterwards.
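A sketch of the corrected function under that rule (using num_datapts in place of the undeclared num_profiles from the original snippet):
static PyObject* tirepy_process_data(PyObject* self, PyObject *args)
{
    const char *rawfilename = NULL;     /* will point into Python-owned memory */
    int num_datapts = 0;

    if (!PyArg_ParseTuple(args, "si", &rawfilename, &num_datapts))
        return NULL;

    /* open the file, read the contents, close it again */
    printf("Raw file name is: %s \n", rawfilename);

    /* no free(rawfilename): the buffer belongs to the Python string object */
    return Py_BuildValue("i", num_datapts);
}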