How to create variable sized __local memory in pyopencl? - python

In my C OpenCL code I use clSetKernelArg to create 'variable size' __local memory for use in my kernels, which is not available in OpenCL per se. See my example:
clSetKernelArg(clKernel, ArgCounter++, sizeof(cl_mem), (void *)&d_B);
...
clSetKernelArg(clKernel, ArgCounter++, sizeof(float)*block_size*block_size, NULL);
...
kernel = "
__kernel void matrixMul(__global float* C,
                        ...
                        __local float* A_temp,
                        ...)
{...
My question is now, how to do the same in pyopencl?
I looked through the examples that come with PyOpenCL, but the only thing I could find was an approach using templates, which, as I understand it, seems like overkill. See the example:
kernel = """
__kernel void matrixMul(__global float* C,...){
...
__local float A_temp[ %(mem_size) ];
...
}
What do you recommend?

It is similar to C: you pass a fixed-size allocation as the local argument. Here is an example from Enja's radix sort. Notice the last argument is the local memory allocation (cl.LocalMemory).
def naive_scan(self, num):
    nhist = num / 2 / self.cta_size * 16
    global_size = (nhist,)
    local_size = (nhist,)
    extra_space = nhist / 16  # NUM_BANKS defined as 16 in RadixSort.cpp
    shared_mem_size = self.uintsz * (nhist + extra_space)
    scan_args = (self.mCountersSum,
                 self.mCounters,
                 np.uint32(nhist),
                 cl.LocalMemory(2 * shared_mem_size))
    self.radix_prg.scanNaive(self.queue, global_size, local_size,
                             *scan_args).wait()
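Applied to the matrixMul kernel from the question, the same pattern might look like the following. This is a minimal sketch, not the question's actual host code: the matrix size n, the tile width block_size, the argument order, and the kernel source string kernel_src are all assumptions.

import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

n = 64           # matrix dimension (assumed)
block_size = 16  # tile width (assumed)

a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

mf = cl.mem_flags
d_a = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
d_b = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
d_c = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prg = cl.Program(ctx, kernel_src).build()  # kernel_src holds the matrixMul source
# The cl.LocalMemory argument plays the role of the NULL clSetKernelArg call:
prg.matrixMul(queue, (n, n), (block_size, block_size),
              d_c, d_a, d_b,
              cl.LocalMemory(4 * block_size * block_size))  # sizeof(float) == 4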

I am not familiar with Python and its OpenCL implementation, but local memory can also be created within the kernel with a fixed size (similar to what you did):
__kernel void matrixMul(...) {
    __local float A_templ[1024];
}
Instead of 1024, a preprocessor symbol can be defined and set during compilation to change the size:
#define SIZE 1024

__kernel void matrixMul(...) {
    __local float A_templ[SIZE];
}
SIZE can be defined within the same source, as a compiler parameter to clBuildProgram, or via an additional source passed to clCreateProgramWithSource.
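In PyOpenCL the same define can be injected through the build options. A minimal sketch, assuming the SIZE symbol from the snippet above (the kernel body here is a placeholder):

import pyopencl as cl

src = """
__kernel void matrixMul(__global float* C) {
    __local float A_templ[SIZE];  // SIZE comes from the build options
    // ... use A_templ as scratch space ...
}
"""

ctx = cl.create_some_context()
# "-D SIZE=1024" plays the role of the #define, set at build time
prg = cl.Program(ctx, src).build(options="-D SIZE=1024")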
EDIT: Found something with Google ;-): http://linksceem.eu/joomla/files/PRACE_Winter_School/LinkSCEMM_pyOpenCL.pdf

Related

I can't get output numbers with ctypes cuda

cuda1.cu
#include <iostream>
using namespace std;

#define DELLEXPORT extern "C" __declspec(dllexport)

__global__ void kernel(long* answer = 0){
    *answer = threadIdx.x + (blockIdx.x * blockDim.x);
}

DELLEXPORT void resoult(long* h_answer){
    long* d_answer = 0;
    cudaMalloc(&d_answer, sizeof(long));
    kernel<<<10,1000>>>(d_answer);
    cudaMemcpy(&h_answer, d_answer, sizeof(long), cudaMemcpyDeviceToHost);
    cudaFree(d_answer);
}
main.py
import ctypes
import numpy as np
add_lib = ctypes.CDLL(".\\a.dll")
resoult= add_lib.resoult
resoult.argtypes = [ctypes.POINTER(ctypes.c_long)]
x = ctypes.c_long()
print("R:",resoult(x))
print("RV: ",x.value)
print("RB: ",resoult(ctypes.byref(x)))
Output in Python: 0
Output in CUDA: 2096
I implemented this based on C without any problems, but in CUDA mode I have a problem: how can I get the correct output value?
Thanks
cudaMemcpy is expecting pointers for dst and src.
In your function resoult, h_answer is a pointer to a long allocated by the caller.
Since it's already the pointer where the data should be copied to, you should use it as-is and not take its address with &h_answer.
Therefore you need to change your cudaMemcpy from:
cudaMemcpy(&h_answer, d_answer, sizeof(long), cudaMemcpyDeviceToHost);
To:
cudaMemcpy(h_answer, d_answer, sizeof(long), cudaMemcpyDeviceToHost);
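On the Python side it then suffices to pass the address and read the value afterwards. A minimal sketch of the corrected call (the DLL name is taken from the question; note that resoult itself returns nothing, so the result is read from x):

import ctypes

add_lib = ctypes.CDLL(".\\a.dll")
resoult = add_lib.resoult
resoult.argtypes = [ctypes.POINTER(ctypes.c_long)]
resoult.restype = None          # the C function returns void

x = ctypes.c_long()
resoult(ctypes.byref(x))        # pass the address of x, not its value
print("RV:", x.value)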

Freeing memory when using ctypes

I am using ctypes to try and speed up my code.
My problem is similar to the one in this tutorial : https://cvstuff.wordpress.com/2014/11/27/wraping-c-code-with-python-ctypes-memory-and-pointers/
As pointed out in the tutorial I should free the memory after using the C function. Here is my C code
// C functions
double* getStuff(double *R_list, int items){
    double results[items];
    double* results_p;
    for(int i = 0; i < items; i++){
        res = calculation; // do some calculation
        results[i] = res;
    }
    results_p = results;
    printf("C allocated address %p \n", results_p);
    return results_p;
}

void free_mem(double *a){
    printf("freeing address: %p\n", a);
    free(a);
}
Which I compile with gcc -shared -Wl,-lgsl,-soname, simps -o libsimps.so -fPIC simps.c
And the Python:
# Python
from ctypes import *
import numpy as np
mydll = CDLL("libsimps.so")
mydll.getStuff.restype = POINTER(c_double)
mydll.getStuff.argtypes = [POINTER(c_double),c_int]
mydll.free_mem.restype = None
mydll.free_mem.argtypes = [POINTER(c_double)]
R = np.logspace(np.log10(0.011),1, 100, dtype = float) #input
tracers = c_int(len(R))
R_c = R.ctypes.data_as(POINTER(c_double))
for_list = mydll.getStuff(R_c,tracers)
print 'Python allocated', hex(for_list)
for_list_py = np.array(np.fromiter(for_list, dtype=np.float64, count=len(R)))
mydll.free_mem(for_list)
Up to the last line the code does what I want, and the for_list_py values are correct. However, when I try to free the memory, I get a segmentation fault, and on closer inspection the address associated with for_list (hex(for_list)) is different from the one allocated to results_p in the C part of the code.
As pointed out in this question, Python ctypes: how to free memory? Getting invalid pointer error, for_list will return the same address if mydll.getStuff.restype is set to c_void_p. But then I struggle to put the actual values I want into for_list_py. This is what I've tried:
cast(for_list, POINTER(c_double) )
for_list_py = np.array(np.fromiter(for_list, dtype=np.float64, count=len(R)))
mydll.free_mem(for_list)
where the cast operation seems to change for_list into an integer. I'm fairly new to C and very confused. Do I need to free that chunk of memory? If so, how do I do that while also keeping the output in a NumPy array? Thanks!
Edit: It appears that the address allocated in C and the one I'm trying to free are the same, though I still receive a segmentation fault.
C allocated address 0x7ffe559a3960
freeing address: 0x7ffe559a3960
Segmentation fault
If I do print for_list I get <__main__.LP_c_double object at 0x7fe2fc93ab00>
Conclusion
Just to let everyone know, I've struggled with ctypes for a bit.
I've ended up opting for SWIG instead of ctypes, and I've found that the code runs faster on the whole (compared to the version presented here). I found this documentation on memory deallocation in SWIG very useful: https://scipy-cookbook.readthedocs.io/items/SWIG_Memory_Deallocation.html. SWIG also gives you a very easy way of dealing with NumPy n-dimensional arrays.
After the getStuff function exits, the memory occupied by the results array (which lives on the stack) is no longer valid, so when you try to free it, the program crashes.
Try this instead:
double* getStuff(double *R_list, int items)
{
    double* results_p = malloc(sizeof(*results_p) * (items + 1));
    if (results_p == NULL)
    {
        // handle error
    }
    for(int i = 0; i < items; i++)
    {
        res = calculation; // do some calculation
        results_p[i] = res;
    }
    printf("C allocated address %p \n", results_p);
    return results_p;
}
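For the Python side, here is a minimal sketch of the c_void_p variant the question mentioned, assuming the fixed getStuff above: keep the raw address around for free_mem, and cast a copy of it to POINTER(c_double) for reading.

from ctypes import CDLL, POINTER, cast, c_double, c_int, c_void_p
import numpy as np

mydll = CDLL("libsimps.so")
mydll.getStuff.restype = c_void_p            # raw address, no automatic conversion
mydll.getStuff.argtypes = [POINTER(c_double), c_int]
mydll.free_mem.restype = None
mydll.free_mem.argtypes = [c_void_p]

R = np.logspace(np.log10(0.011), 1, 100)
raw = mydll.getStuff(R.ctypes.data_as(POINTER(c_double)), c_int(len(R)))
ptr = cast(raw, POINTER(c_double))           # a readable view of the same address
for_list_py = np.fromiter(ptr, dtype=np.float64, count=len(R))
mydll.free_mem(raw)                          # free exactly the address C returned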

returning numpy arrays via pybind11

I have a C++ function computing a large tensor which I would like to return to Python as a NumPy array via pybind11.
From the pybind11 documentation, it seems that using an STL unique_ptr is desirable.
In the following example, the commented-out version works, whereas the given one compiles but fails at runtime ("Unable to convert function return value to a Python type!").
Why is the smart-pointer version failing? What is the canonical way to create and return a NumPy array?
PS: Due to program structure and size of the array, it is desirable to not copy memory but create the array from a given pointer. Memory ownership should be taken by Python.
typedef typename py::array_t<double, py::array::c_style | py::array::forcecast> py_cdarray_t;

// py_cdarray_t _test()
std::unique_ptr<py_cdarray_t> _test()
{
    double * memory = new double[3]; memory[0] = 11; memory[1] = 12; memory[2] = 13;

    py::buffer_info bufinfo(
        memory,                                   // pointer to memory buffer
        sizeof(double),                           // size of underlying scalar type
        py::format_descriptor<double>::format(),  // Python struct-style format descriptor
        1,                                        // number of dimensions
        { 3 },                                    // buffer dimensions
        { sizeof(double) }                        // strides (in bytes) for each index
    );

    //return py_cdarray_t(bufinfo);
    return std::unique_ptr<py_cdarray_t>( new py_cdarray_t(bufinfo) );
}
A few comments (then a working implementation).
pybind11's C++ object wrappers around Python types (like pybind11::object, pybind11::list, and, in this case, pybind11::array_t<T>) are really just wrappers around an underlying Python object pointer. In this respect they are already taking on the role of a shared pointer wrapper, so there's no point in wrapping that in a unique_ptr: returning the py::array_t<T> object directly is already essentially just returning a glorified pointer.
pybind11::array_t can be constructed directly from a data pointer, so you can skip the py::buffer_info intermediate step and just give the shape and strides directly to the pybind11::array_t constructor. A numpy array constructed this way won't own its own data, it'll just reference it (that is, the numpy owndata flag will be set to false).
Memory ownership can be tied to the life of a Python object, but you're still on the hook for doing the deallocation properly. Pybind11 provides a py::capsule class to help you do exactly this. What you want to do is make the numpy array depend on this capsule as its parent class by specifying it as the base argument to array_t. That will make the numpy array reference it, keeping it alive as long as the array itself is alive, and invoke the cleanup function when it is no longer referenced.
The c_style flag in the older (pre-2.2) releases only had an effect on new arrays, i.e. when not passing a value pointer. That was fixed in the 2.2 release to also affect the automatic strides if you specify only shapes but not strides. It has no effect at all if you specify the strides directly yourself (as I do in the example below).
So, putting the pieces together, this code is a complete pybind11 module that demonstrates how you can accomplish what you're looking for (and includes some C++ output to demonstrate that it is indeed working correctly):
#include <iostream>
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>
namespace py = pybind11;

PYBIND11_PLUGIN(numpywrap) {
    py::module m("numpywrap");
    m.def("f", []() {
        // Allocate and initialize some data; make this big so
        // we can see the impact on the process memory use:
        constexpr size_t size = 100*1000*1000;
        double *foo = new double[size];
        for (size_t i = 0; i < size; i++) {
            foo[i] = (double) i;
        }

        // Create a Python object that will free the allocated
        // memory when destroyed:
        py::capsule free_when_done(foo, [](void *f) {
            double *foo = reinterpret_cast<double *>(f);
            std::cerr << "Element [0] = " << foo[0] << "\n";
            std::cerr << "freeing memory # " << f << "\n";
            delete[] foo;
        });

        return py::array_t<double>(
            {100, 1000, 1000},        // shape
            {1000*1000*8, 1000*8, 8}, // C-style contiguous strides for double
            foo,                      // the data pointer
            free_when_done);          // numpy array references this parent
    });
    return m.ptr();
}
Compiling that and invoking it from Python shows it working:
>>> import numpywrap
>>> z = numpywrap.f()
>>> # the python process is now taking up a bit more than 800MB memory
>>> z[1,1,1]
1001001.0
>>> z[0,0,100]
100.0
>>> z[99,999,999]
99999999.0
>>> z[0,0,0] = 3.141592
>>> del z
Element [0] = 3.14159
freeing memory # 0x7fd769f12010
>>> # python process memory size has dropped back down
I recommend using ndarray. A foundational principle of that library is that the underlying data is never copied unless explicitly requested (otherwise you quickly end up with huge inefficiencies). Below is an example of it in use, but there are other features I haven't shown, including conversion to Eigen arrays (ndarray::asEigen(array)), which makes it pretty powerful.
Header:
#ifndef MYTENSORCODE_H
#define MYTENSORCODE_H

#include "ndarray_fwd.h"

namespace myTensorNamespace {

ndarray::Array<double, 2, 1> myTensorFunction(int param1, double param2);

}  // namespace myTensorNamespace

#endif  // include guard
Lib:
#include "ndarray.h"
#include "myTensorCode.h"
namespace myTensorNamespace {
ndarray::Array<double, 2, 1> myTensorFunction(int param1, double param2) {
std::size_t const size = calculateSize();
ndarray::Array<double, 2, 1> array = ndarray::allocate(size, size);
array.deep() = 0; // initialise
for (std::size_t ii = 0; ii < size; ++ii) {
array[ii][ndarray::view(ii, ii + 1)] = 1.0;
}
return array;
}
} // namespace myTensorNamespace
Wrapper:
#include "pybind11/pybind11.h"
#include "ndarray.h"
#include "ndarray/pybind11.h"
#include "myTensorCode.h"
namespace py = pybind11;
using namespace pybind11::literals;
namespace myTensorNamespace {
namespace {
PYBIND11_MODULE(myTensorModule, mod) {
mod.def("myTensorFunction", &myTensorFunction, "param1"_a, "param2"_a);
}
} // anonymous namespace
} // namespace myTensorNamespace

Python to C for loop conversion

I have the following python code:
r = range(1,10)
r_squared = []
for item in r:
print item
r_squared.append(item*item)
How would I convert this code to C? Is there something like a mutable array in C or how would I do the equivalent of the python append?
A simple array in C. Arrays in C are homogeneous:
int arr[10];
int i = 0;
for (i = 0; i < sizeof(arr) / sizeof(arr[0]); i++)
{
    arr[i] = i;  // initializing each element separately
}
Try using vectors in C; go through this link.

// vector-usage.c
#include <stdio.h>
#include "vector.h"

int main() {
    // declare and initialize a new vector
    Vector vector;
    vector_init(&vector);

    // fill it up with 150 arbitrary values
    // this should expand capacity up to 200
    int i;
    for (i = 200; i > -50; i--) {
        vector_append(&vector, i);
    }

    // set a value at an arbitrary index
    // this will expand and zero-fill the vector to fit
    vector_set(&vector, 4452, 21312984);

    // print out an arbitrary value in the vector
    printf("Heres the value at 27: %d\n", vector_get(&vector, 27));

    // we're all done playing with our vector,
    // so free its underlying data array
    vector_free(&vector);
}
Arrays in C are mutable by default, in that you can write a[i] = 3, just like Python lists.
However, they're fixed-length, unlike Python lists.
For your problem, that should actually be fine. You know the final size you want; just create an array of that size, and assign to the members.
But of course there are problems for which you do need append.
Writing a simple library for appendable arrays (just like Python lists) is a pretty good learning project for C. You can also find plenty of ready-made implementations if that's what you want, but not in the standard library.
The key is to not use a stack array, but rather memory allocated on the heap with malloc. Keep track of the pointer to that memory, the capacity, and the used size. When the used size reaches the capacity, multiply it by some number (play with different numbers to get an idea of how they affect performance), then realloc. That's just about all there is to it. (And if you look at the CPython source for the list type, that's basically the same thing it's doing.)
Here's an example. You'll want to add some error handling (malloc and realloc can return NULL) and of course the rest of the API beyond append (especially a delete function, which will call free on the allocated memory), but this should be enough to show you the idea:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    int *i;
    size_t len;
    size_t capacity;
} IntArray;

IntArray int_array_make() {
    IntArray a = {
        .i = malloc(10 * sizeof(int)),
        .len = 0,
        .capacity = 10
    };
    return a;
}

void int_array_append(IntArray *a, int value) {
    if (a->len + 1 == a->capacity) {
        size_t new_capacity = (int)(a->capacity * 1.6);
        a->i = realloc(a->i, new_capacity * sizeof(int));
        a->capacity = new_capacity;
    }
    a->i[a->len++] = value;
}

int main(int argc, char *argv[]) {
    IntArray a = int_array_make();
    for (int i = 0; i != 50; i++)
        int_array_append(&a, i);
    for (int i = 0; i != a.len; ++i)
        printf("%d ", a.i[i]);
    printf("\n");
}
C doesn't have any way of dynamically increasing the size of an array like Python does; arrays here are of fixed length.
If you know the size of the array you will be using, you can use a declaration like this:
int arr[10];
Or, if you want to add memory on the fly (at runtime), use malloc along with a structure (e.g. linked lists).

Passing struct with pointer members to OpenCL kernel using PyOpenCL

Let's suppose I have a kernel to compute the element-wise sum of two arrays. Rather than passing a, b, and c as three parameters, I make them structure members as follows:
typedef struct
{
    __global uint *a;
    __global uint *b;
    __global uint *c;
} SumParameters;

__kernel void compute_sum(__global SumParameters *params)
{
    uint id = get_global_id(0);
    params->c[id] = params->a[id] + params->b[id];
    return;
}
There is information on structures if you RTFM of PyOpenCL [1], and others have addressed this question too [2] [3] [4]. But none of the OpenCL struct examples I've been able to find have pointers as members.
Specifically, I'm worried about whether host/device address spaces match, and whether host/device pointer sizes match. Does anyone know the answer?
[1] http://documen.tician.de/pyopencl/howto.html#how-to-use-struct-types-with-pyopencl
[2] Struct Alignment with PyOpenCL
[3] http://enja.org/2011/03/30/adventures-in-opencl-part-3-constant-memory-structs/
[4] http://acooke.org/cute/Somesimple0.html
No, there is no guarantee that address spaces match. For the basic types (float, int, …) you have alignment requirements (section 6.1.5 of the standard) and you have to use the cl_ type names of the OpenCL implementation (when programming in C; PyOpenCL does the job under the hood, I'd say).
For pointers it's even simpler, due to this mismatch. The very beginning of section 6.9 of the standard v1.2 (section 6.8 in version 1.1) states:
Arguments to kernel functions declared in a program that are pointers
must be declared with the __global, __constant or __local qualifier.
And in point p.:
Arguments to kernel functions that are declared to be a struct or
union do not allow OpenCL objects to be passed as elements of the
struct or union.
Note also point d.:
Variable length arrays and structures with flexible (or unsized)
arrays are not supported.
So there is no way to make your kernel run as described in your question, and that's why you haven't been able to find examples of OpenCL structs with pointers as members.
I can still propose a workaround that takes advantage of the fact that the kernel is compiled JIT. It still requires that you pack your data properly, pay attention to alignment, and ensure the size doesn't change during execution of the program. I honestly would go for a kernel taking 3 buffers as arguments, but anyhow, here it is.
The idea is to use the preprocessor option -D, as in the following example in Python:
Kernel:
typedef struct {
    uint a[SIZE];
    uint b[SIZE];
    uint c[SIZE];
} SumParameters;

kernel void foo(global SumParameters *params){
    int idx = get_global_id(0);
    params->c[idx] = params->a[idx] + params->b[idx];
}
Host code:
import numpy as np
import pyopencl as cl

def bar():
    mf = cl.mem_flags
    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)
    prog_f = open('kernels.cl', 'r')

    # a = (1, 2, 3), b = (4, 5, 6)
    ary = np.array([(1, 2, 3), (4, 5, 6), (0, 0, 0)], dtype='uint32, uint32, uint32')
    cl_ary = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=ary)

    # Here we should compute the size, but it is hardcoded for the example
    size = 3

    # The important part follows, using the -D option
    prog = cl.Program(ctx, prog_f.read()).build(options="-D SIZE={0}".format(size))
    prog.foo(queue, (size,), None, cl_ary)

    result = np.zeros_like(ary)
    cl.enqueue_copy(queue, result, cl_ary).wait()
    print result
And the result:
[(1L, 2L, 3L) (4L, 5L, 6L) (5L, 7L, 9L)]
I don't know the answer to my own question, but there are 3 workarounds I can come up with off the top of my head. I consider Workaround 3 the best option.
Workaround 1: We only have 3 parameters here, so we could just make a, b, and c kernel parameters. But I've read there's a limit on the number of parameters you can pass to a kernel, and I personally like to refactor any function that takes more than 3-4 arguments to use structs (or, in Python, tuples or keyword arguments). So this solution makes the code harder to read, and doesn't scale.
Workaround 2: Dump everything in a single giant array. Then the kernel would look like this:
typedef struct
{
    uint ai;
    uint bi;
    uint ci;
} SumParameters;

__kernel void compute_sum(__global SumParameters *params, __global uint *data)
{
    uint id = get_global_id(0);
    data[params->ci + id] = data[params->ai + id] + data[params->bi + id];
    return;
}
In other words, instead of using pointers, use offsets into a single array. This looks an awful lot like the beginnings of implementing my own memory model, and it feels like it's reinventing a wheel that exists somewhere in PyOpenCL, or OpenCL, or both.
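For illustration, a host-side sketch of that offset-based packing (the names, sizes, and structured dtype are hypothetical, and kernel_src stands for the compute_sum source above):

import numpy as np
import pyopencl as cl

n = 1024  # elements per logical array (assumed)

# One flat buffer holding a, b and c back to back
data = np.zeros(3 * n, dtype=np.uint32)
data[0:n] = np.arange(n, dtype=np.uint32)          # a
data[n:2 * n] = 2 * np.arange(n, dtype=np.uint32)  # b

# The offsets struct: ai, bi, ci as three uints
params = np.array([(0, n, 2 * n)], dtype='u4, u4, u4')

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
d_params = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=params)
d_data = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=data)

prg = cl.Program(ctx, kernel_src).build()  # kernel_src: compute_sum above
prg.compute_sum(queue, (n,), None, d_params, d_data)
cl.enqueue_copy(queue, data, d_data).wait()
# data[2*n:] now holds c = a + b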
Workaround 3: Make setter kernels. Like this:
__kernel void set_a(__global SumParameters *params, __global uint *a)
{
    params->a = a;
    return;
}
and ditto for set_b, set_c. Then execute these kernels with worksize 1 to set up the data structure. You still need to know how big a block to allocate for params, but if it's too big, nothing bad will happen (except a little wasted memory), so I'd say just assume the pointers are 64 bits.
This workaround's performance is probably awful (I imagine a kernel call has enormous overhead), but fortunately that shouldn't matter too much for my application (my kernel is going to run for seconds at a time, it's not a graphics thing that has to run at 30-60 fps, so I imagine that the time taken by extra kernel calls to set parameters will end up being a tiny fraction of my workload, no matter how high the per-kernel-call overhead is).
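For what it's worth, the host side of this workaround might look as follows. This is a sketch only: it leaves aside the portability objections raised in the other answer, and kernel_src stands for the setters plus compute_sum above.

import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

n = 1024
a = np.arange(n, dtype=np.uint32)
b = 2 * np.arange(n, dtype=np.uint32)
c = np.zeros(n, dtype=np.uint32)
d_a = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
d_b = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
d_c = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=c)

d_params = cl.Buffer(ctx, mf.READ_WRITE, 3 * 8)  # room for three assumed-64-bit pointers

prg = cl.Program(ctx, kernel_src).build()  # kernel_src: set_a/set_b/set_c + compute_sum
for setter, buf in [(prg.set_a, d_a), (prg.set_b, d_b), (prg.set_c, d_c)]:
    setter(queue, (1,), None, d_params, buf)  # worksize 1, as described

prg.compute_sum(queue, (n,), None, d_params)
cl.enqueue_copy(queue, c, d_c).wait()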
