Why doesn't the NumPy C API warn me about failed allocations?

I've been writing a Python extension that writes into a NumPy array from C. During testing, I noticed that certain very large arrays would generate a segfault when I tried to access some of their elements.
Specifically, the last line of the following code segment fails with a segfault:
// Size of the buffer we will write to
npy_intp buffer_len_alt = BUFFER_LENGTH;
// Create a 1-D, zero-filled byte array of that length
PyArray_Descr * dtype;
dtype = PyArray_DescrFromType(NPY_BYTE);
PyObject* column = PyArray_Zeros(1, &buffer_len_alt, dtype, 0);
// Check that array creation succeeded
if (column == NULL){
    // This exit point is not reached, so it looks like everything is OK
    return (PyObject *) NULL;
}
// Get the array's internal buffer so we can write to it
char * output_buffer = PyArray_BYTES((PyArrayObject *)column);
// Try writing to the buffer
output_buffer[0] = 'x'; // No segfault
output_buffer[((int) buffer_len_alt) - 1] = 'x'; // Segfault here
I checked and found that the error occurs only when I try to allocate an array of about 3 GB (i.e. BUFFER_LENGTH is about 3*2^30). It's not surprising that an allocation of this size might fail, even if Python is using its custom allocator. What really concerns me is that NumPy did not raise an error or otherwise indicate that the array creation did not go as planned.
I have already tried checking PyArray_ISCONTIGUOUS on the returned array, and using PyArray_GETCONTIGUOUS to ensure it is a single memory segment, but the segfault would still occur. NPY_ARRAY_DEFAULT creates contiguous arrays, so this shouldn't be necessary anyway.
Is there some error flag I should be checking? How can I detect/prevent this situation in the future? Setting BUFFER_LENGTH to a smaller value obviously works, but this value is determined at runtime and I would like to know the exact bounds.
EDIT:
As @DavidW pointed out, the error stems from casting buffer_len_alt to an int, since npy_intp can be a 64-bit number. Replacing the cast to int with a cast to unsigned long fixes the problem for me.

The issue (diagnosed in the comments) was actually with the array lookup rather than the allocation of the array. Your code contained the line
output_buffer[((int) buffer_len_alt) - 1] = 'x'
When buffer_len_alt (approximate value 3000000000) was cast to a (32-bit) int (maximum value 2147483647), you ended up with an invalid address, probably a large negative number.
The solution is just to use
output_buffer[buffer_len_alt - 1] = 'x'
(i.e. I don't see why you should need a cast at all).
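To make the failure mode concrete, here is a minimal standalone sketch (not the original extension code) showing what the truncating cast does on a typical 64-bit platform:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* npy_intp is 64 bits wide on a 64-bit platform */
    int64_t buffer_len_alt = 3000000000LL; /* about 3 * 2^30 */

    /* Truncating to a 32-bit int wraps around on two's-complement machines */
    int truncated = (int) buffer_len_alt;
    printf("%d\n", truncated); /* prints -1294967296 */

    /* buffer[truncated - 1] therefore indexes far outside the allocation */
    return 0;
}

So the allocation itself succeeded; the index computed from the truncated value is what pointed outside the buffer.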

Related

Python `ctypes` - How to copy buffer returned by C function into a bytearray

A pointer to a buffer of type POINTER(c_ubyte) is returned by the C function (the image_data variable in the following code). I want this data to be managed by Python, so I want to copy it into a bytearray. Here's the function call:
image_data = stb_image.stbi_load(filename_cstr, byref(width),
byref(height), byref(num_channels),
c_int(expected_num_channels))
We get to know the width and height of the image only after that call, so we can't pre-allocate a bytearray.
I would have used
array_type = c.c_ubyte * (num_channels.value * width.value * height.value)
image_data_bytearray = bytearray(cast(image_data, array_type))
But the type to cast to must be a pointer type, not an array type, so I get an error:
TypeError: cast() argument 2 must be a pointer type, not c_ubyte_Array_262144
What should I do?
OK, reading the answer to the question linked to in the comments (thanks, @John Zwinck and @eryksun), there are two ways of storing the data, either in a bytearray or a numpy array. In all these snippets, image_data is of type POINTER(c_ubyte), and we have array_type defined as:
array_type = c_ubyte * (num_channels.value * width.value * height.value)
We can create a bytearray first and then loop over and set the bytes:
array_size = num_channels.value * width.value * height.value
arr_bytes = bytearray(array_size)
for i in range(array_size):
    arr_bytes[i] = image_data[i]
Or a better way is to create a C array instance using from_address and then initialize a bytearray with it:
image_data_carray = array_type.from_address(addressof(image_data.contents))
# Copy into bytearray
image_data_bytearray = bytearray(image_data_carray)
And when writing the image out (nobody asked about this, just sharing for completeness), we can obtain a pointer to the bytearray's data like this and hand it to stbi_write_png:
image_data_carray = array_type.from_buffer(image_data_bytearray)
image_data = cast(image_data_carray, POINTER(c_ubyte))
The numpy-based way of doing it is as answered in the linked question:
address = addressof(image_data.contents)
image_data_ptr = np.ctypeslib.as_array(array_type.from_address(address))
This alone, however, only points at the memory returned by the C function; it doesn't copy it into a Python-managed array object. We can copy by creating a numpy array:
image_data = np.array(image_data_ptr)
To confirm the two approaches agree, I added an assert all(arr_np == arr_bytes) there, and arr_np.dtype is uint8.
And when writing the image out, we can obtain a pointer to the numpy array's data like this:
image_data = image_data_numpy.ctypes.data_as(POINTER(c_ubyte))
Your variable array_type is misleadingly named: it is not an initialized C array, just a Python (ctypes) object prepared for constructing one.
(Though an initialized array wouldn't deserve that name either. :D)
What you want there is the equivalent of:
unsigned char array[channels*width*height];
in C. There, array refers to the first byte (index 0) of N unsigned chars.
cast() expects a pointer type, not an array type, as its second argument. So doing:
array = (c.c_ubyte*(channels*width*height))()
should do the trick, but that allocates extra memory you don't need, so you can create a pointer type instead, as suggested in a comment.
But I suggest you use:
image_data = bytearray(c.string_at(image_data))
It should work, assuming, of course, that the returned image is null-terminated. (string_at() nominally deals in chars, but the signedness doesn't matter here.)
If you wrote the C portion, just allocate one extra byte for the memory that will hold the image and set that last byte to 0.
Then leave the algorithm to work as before. If you do not null-terminate it, string_at() will stop at the first zero byte it finds, or read past the end of the buffer, so you won't reliably get the whole image. Very undesirable.
I used this trick in my C module for colorspace conversion. It works extremely fast, as there are no loops, nothing extra: string_at() just pulls in the buffer and creates a Python string wrapper around it.
Then you can use numpy.fromstring(...), or array.array("B", image_data), or bytearray() as above, etc.
Otherwise, well, I saw your answer just now. You can do it as you wrote as well, but I think my dirty trick is better (if you can change the C code, of course).
P.S. Whoops! I just saw in a docstring that string_at() takes an optional size argument. Using it bypasses null termination entirely, so nothing would be lost. I am asking myself now why I didn't use it in my project instead of messing with null termination.
Perhaps out of laziness. Using size shouldn't require any modifications to the C code. So it would be:
image_data = bytearray(c.string_at(image_data, num_channels.value * width.value * height.value))

C++ Pointer to Numpy Array

Briefly:
Is there an efficient way to make a numpy array given a pointer in memory to the array, its type, and the number of elements?
More detail:
I am working with a python framework which has an object.GetData() command that is supposed to return a pointer to the data (an array of 35,000 int8) of this object.
I'm supposed to be able to efficiently load these integers into a numpy array through
arr = numpy.frombuffer(object.GetData(),count=35000,dtype="int8")
but this doesn't seem to work. I get an error message: ValueError: buffer is smaller than requested size. Changing the length, I can get it to output an array, but typically fewer than 20 integers long (usually 0 or 1 integers).
I believe I can access the pointer to the start of the array, in hex form, through
hex(id(object.GetData()))
which looks like it gives addresses (e.g. 0x10fd8c670) but I don't know if this is the actual address.
I'm more comfortable in python than c++, but there could be a bug in the c++ code. The c++ code for GetData is:
const _Tp* GetData() const
{
// Return a const pointer to the internal data
return (fData.size() > 0 ) ? &(fData)[0] : NULL;
}
where fData is declared as a VecType through:
VecType fData;
Right now I can access each element of the object's data through an object.At(i) command, where i is the index into the object's data array, but it is very slow to load each element into a numpy array this way, and I'm dealing with a lot of data. For reference, the At command in the c++ code does this:
_Tp At(size_t i) const
{
return fData.at(i);
}
Any help would be appreciated. I don't have a ton of experience with pointers, and even less with pointers in python, but I would like to figure this out in python rather than re-write all my code in c++. Thanks!
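For what it's worth, a minimal sketch of one common approach, assuming the binding can hand you the buffer's integer address (note that hex(id(...)) gives the address of the CPython wrapper object, not of the underlying C buffer):

import ctypes
import numpy as np

# Hypothetical: 'address' holds the integer value of the const pointer
# returned by GetData(), exposed by whatever binding the framework uses.
buf_type = ctypes.c_int8 * 35000
buf = buf_type.from_address(address)

arr = np.frombuffer(buf, dtype=np.int8, count=35000) # zero-copy view
arr = arr.copy() # copy if Python should own the memory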

Parse C Array Size with Python & LibClang

I am currently using Python to parse a C file using LibClang. I've encountered a problem while reading a C array whose size is defined by a #define directive.
With node.get_children I can read the following array perfectly well:
int myarray[20][30][10];
As soon as the array size is replaced with a macro, the array won't be read correctly. The following code can't be read:
#define MAX 60;
int myarray[MAX][30][10];
The parser actually stops at MAX, and the dump contains the error: invalid sloc.
How can I solve this?
Thanks
Run the code through a C preprocessor before trying to parse it. That will cause all preprocessor-symbols to be replaced by their values, i.e. your [MAX] will become [60].
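A minimal sketch of that approach (the filenames are hypothetical; cpp -P suppresses linemarker output so the result is plain C):

import subprocess
import clang.cindex

# Expand all preprocessor symbols first, e.g. [MAX] becomes [60]
preprocessed = subprocess.run(
    ["cpp", "-P", "myfile.c"], capture_output=True, text=True, check=True
).stdout

with open("myfile_pp.c", "w") as f:
    f.write(preprocessed)

# Parse the preprocessed source with libclang as before
index = clang.cindex.Index.create()
tu = index.parse("myfile_pp.c")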
Note that C code can also do this:
const int three[] = { 1, 2, 3 };
i.e. let the compiler deduce the length of the array from the number of initializer values given.
Or, from C99, even this:
const int hundred[] = { [99] = 4711 };
So a naive approach might still break, but I don't know anything about the capabilities of the parser you're using, of course.
The semicolon in the #define directive is what causes the error: #define MAX 60; makes MAX expand to 60;, so the declaration becomes int myarray[60;][30][10];, which is invalid.

convert PyInt to C Int

I need to convert a PyInt to a C int. In my code:
count=PyInt_FromSsize_t(PyList_Size(pValue))
pValue is a PyObject and a PyList. The problem I was having is that PyList_Size was not returning the correct list size (count is supposed to be 5, but it gave me 6 million), or there is a problem with data types, since I'm in C code interfacing with Python scripts. Ideally, I want count to be a C int.
I've found Python/C APIs that return C long values, which is not what I want. Can anybody point me to the correct method or API?
PyInt_FromSsize_t() builds a full-fledged Python int object in memory and returns its address; that is where the 6-million number is coming from. You just want to take the scalar returned by PyList_Size() and cast it to a C integer, I think:
count = (int) PyList_Size(pValue)
If the list could be very long, you might want to think about making count a long instead, in which case you could cast to that specific type instead.
Note: a count of -1 means that Python encountered an exception while trying to measure the list length. Here are the docs you should read to know how to handle exceptions:
http://docs.python.org/c-api/intro.html#exceptions
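A minimal sketch of the error-checked version, using only standard Python/C API calls:

/* PyList_Size returns -1 and sets an exception if pValue is not a list */
Py_ssize_t size = PyList_Size(pValue);
if (size < 0) {
    return NULL; /* propagate the exception to the caller */
}
int count = (int) size;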

Using python ctypes to get buffer of floats from shared library into python string

I'm trying to use python ctypes to use these two C functions from a shared library:
bool decompress_rgb(unsigned char *data, long dataLen, int scale)
float* getRgbBuffer()
The first function is working fine. I can tell by putting some debug code in the shared library and checking the input.
The problem is getting the data out. The RGB buffer is a pointer to a float (obviously) and this pointer stays constant during the life of the application. Therefore whenever I want to decompress an image, I call decompress_rgb and then need to see what's at the location pointed to by getRgbBuffer. I know that the buffer size is (720 * 288 * sizeof(float)) so I guess this has to come into play somewhere.
There's no c_float_p type so I thought I'd try this:
getRgbBuffer.restype = c_char_p
Then I do:
ptr = getRgbBuffer()
print "ptr is ", ptr
which just outputs:
ptr is  3078746120
I'm guessing that's the actual address rather than the content, but even if I was successfully dereferencing the pointer and getting the contents, it would only be the first char.
How can I get the contents of the entire buffer into a python string?
Edit: Had to change:
getRgbBuffer.restype = c_char_p
to
getRgbBuffer.restype = c_void_p
but then BastardSaint's answer worked.
Not fully tested, but I think it's something along these lines:
buffer_size = 720 * 288 * ctypes.sizeof(ctypes.c_float)
rgb_buffer = ctypes.create_string_buffer(buffer_size)
ctypes.memmove(rgb_buffer, getRgbBuffer(), buffer_size)
The key is the ctypes.memmove() function. From the ctypes documentation:
memmove(dst, src, count)
Same as the standard C memmove library function: copies count bytes from src to dst. dst and src must be integers or ctypes instances that can be converted to pointers.
After the above snippet is run, rgb_buffer.value will return the content up until the first '\0'. To get all bytes as a Python string, you can slice the whole thing: buffer_contents = rgb_buffer[:].
It's been a while since I used ctypes and I don't have something which returns a "double *" handy enough to test this out, but if you want a c_float_p:
c_float_p = ctypes.POINTER(ctypes.c_float)
Reading BastardSaint's answer, it seems you just want the raw data, but I wasn't sure whether you were using c_char_p as a workaround for not having a c_float_p.
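For completeness, a sketch of how that pointer type could be used directly (the library name is hypothetical; ctypes pointer instances support slicing, which copies the elements out into a Python list):

import ctypes

lib = ctypes.CDLL("./libdecoder.so") # hypothetical shared library

c_float_p = ctypes.POINTER(ctypes.c_float)
lib.getRgbBuffer.restype = c_float_p

ptr = lib.getRgbBuffer()
floats = ptr[:720 * 288] # copies 720*288 floats into a Python list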
