How do I cimport scipy.spatial.ckdtree?

I am rewriting some of my code in Cython and want to type a ckdtree variable. I am currently doing

cimport scipy.spatial.ckdtree as ckdtree

cdef tuple remove_labels(double[:, :] x_train, double[:, :] y_train):
    cdef ckdtree.KDTree tree
    # ...
    tree = ckdtree.KDTree(x_rest_view)
but I get several errors:
When importing, I get the error

'scipy/spatial/ckdtree.pxd' not found

On the line

    cdef ckdtree.KDTree tree

I get

'KDTree' is not a type identifier

On the line

    tree = ckdtree.KDTree(x_rest_view)

I get

cimported module has no attribute 'KDTree'
What am I doing wrong?
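A likely explanation for the first error is that SciPy does not ship a ckdtree.pxd file, so there is no C-level declaration for Cython to cimport. Under that assumption, a minimal sketch of the usual fallback is a regular Python-level import with an untyped variable:

import numpy as np
from scipy.spatial import cKDTree  # regular import; there is no .pxd to cimport

cdef tuple remove_labels(double[:, :] x_train, double[:, :] y_train):
    cdef object tree  # cKDTree exposes no C-level type, so a plain Python object
    # ...
    tree = cKDTree(np.asarray(x_rest_view))  # x_rest_view as in the question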

Related

Cython fails to recognize overloaded constructor

I'm trying to compile the cyrand project using cython, but am running into a bizarre compile error when testing overloaded constructors. See this gist for the files in question.
From the gist, I can compile and run example.pyx just fine, which uses the default constructor:
import numpy as np
cimport numpy as np
cimport cython
from cython.operator cimport dereference as deref  # needed for deref below; may already come from random.pyx

include "random.pyx"

@cython.boundscheck(False)
def example(n):
    cdef int N = n
    cdef rng r
    cdef rng_sampler[double] * rng_p = new rng_sampler[double](r)
    cdef rng_sampler[double] rng = deref(rng_p)
    cdef np.ndarray[np.double_t, ndim=1] result = np.empty(N, dtype=np.double)
    for i in range(N):
        result[i] = rng.normal(0.0, 2.0)
    print result
    return result
This compiles and runs fine. An example run produces the following output:
$ python test_example.py
[ 0.47237842 3.153744849 3.6854932057 ]
Yet when I try to compile and run the test which uses a constructor that takes an unsigned long as an argument:
import numpy as np
cimport numpy as np
cimport cython
from cython.operator cimport dereference as deref  # as above

include "random.pyx"

@cython.boundscheck(False)
def example_seed(n, seed):
    cdef int N = n
    cdef unsigned long Seed = seed
    cdef rng r
    cdef rng_sampler[double] * rng_p = new rng_sampler[double](Seed)
    cdef rng_sampler[double] rng = deref(rng_p)
    cdef np.ndarray[np.double_t, ndim=1] result = np.empty(N, dtype=np.double)
    for i in range(N):
        result[i] = rng.normal(0.0, 2.0)
    print result
    return result
I get the following cython compiler error:
Error compiling Cython file:
------------------------------------------------------------
...
    cdef int N = n
    cdef unsigned long Seed = seed
    cdef rng_sampler[double] * rng_p = new rng_sampler[double](Seed)
------------------------------------------------------------
example/example_seed.pyx:15:67 Cannot assign type 'unsigned long' to 'mt19937'
I interpret this message, together with the fact that example.pyx compiles into a working example.so, to mean that cython cannot find (or cannot handle) the rng_sampler constructor that takes an unsigned long as input. I've not used cython before, and my C++ is middling at best. Can anyone shed light on how to fix this simple problem?
python: 2.7.10 (Anaconda 2.0.1)
cython: 0.22.1
I resolved the error; it was down to how Boost was installed. I had originally installed Boost via apt-get. After downloading and untarring Boost myself and changing the paths pointing to Boost in setup.py, it works.
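For reference, a hypothetical sketch of the kind of setup.py change meant here; the Boost path is an assumption and must point at wherever the tarball was unpacked, not at the apt-get location:

from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

ext = Extension(
    "example",
    sources=["example.pyx"],
    language="c++",
    include_dirs=["/path/to/boost_1_58_0"],  # hypothetical: the untarred Boost root
)
setup(cmdclass={'build_ext': build_ext}, ext_modules=[ext])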

Why does numba have worse optimization than Cython in this code?

I am trying to optimize some code with numba. The problem is that a simple Cython optimization (just specifying data types) is six times faster than using autojit, so I don't know if I'm doing something wrong.
The function to optimize is:
from numba import autojit

@autojit(nopython=True)
def get_energy(system, i, j, m):
    # system is an array, (i, j) some indices and m the size of the array
    up = i - 1; down = i + 1; left = j - 1; right = j + 1
    if up < 0: total = system[m, j]
    else: total = system[up, j]
    if down > m: total += system[0, j]
    else: total += system[down, j]
    if left < 0: total += system[i, m]
    else: total += system[i, left]
    if right > m: total += system[i, 0]
    else: total += system[i, right]
    return 2 * system[i, j] * total
A simple run would be something like this:
import numpy as np
x=np.random.rand(50,50)
get_energy(x, 3, 5, 50)
I've understood that numba is good at loops but may not optimize other things very well. Still, I would expect performance similar to Cython's: is numba slower at accessing arrays, or at conditional statements?
The .pyx file in Cython is:
import numpy as np
cimport cython
cimport numpy as np
def get_energy(np.ndarray[np.float64_t, ndim=2] system, int i, int j, unsigned int m):
    cdef int up
    cdef int down
    cdef int left
    cdef int right
    cdef np.float64_t total
    up = i - 1; down = i + 1; left = j - 1; right = j + 1
    if up < 0: total = system[m, j]
    else: total = system[up, j]
    if down > m: total += system[0, j]
    else: total += system[down, j]
    if left < 0: total += system[i, m]
    else: total += system[i, left]
    if right > m: total += system[i, 0]
    else: total += system[i, right]
    return 2 * system[i, j] * total
Please comment if I need to give further information.
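For reference, a minimal timing harness along these lines (the module names are hypothetical) is how such a comparison is usually checked; note the warm-up call, so that numba's compilation on first use is excluded from the timing:

import timeit
import numpy as np

# hypothetical module names for the two compiled variants
from energy_numba import get_energy as get_energy_nb
from energy_cython import get_energy as get_energy_cy

x = np.random.rand(50, 50)
get_energy_nb(x, 3, 5, 50)  # warm-up: triggers JIT compilation once

print(timeit.timeit(lambda: get_energy_nb(x, 3, 5, 50), number=100000))
print(timeit.timeit(lambda: get_energy_cy(x, 3, 5, 50), number=100000))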

Cython: how to resolve TypeError: Cannot convert memoryviewslice to numpy.ndarray?

Inside a random_stuff_printer.pyx file, I have a cdef function that looks something like this:
cdef np.ndarray[np.float64_t, ndim=4] randomizer():
    return np.random.random((4, 4, 4, 4))
Then I have a def function that looks like this, inside the same random_stuff_printer.pyx:
def random_printer():
    random_stuff = randomizer()
    print random_stuff
I compile the file, and call random_printer, but I get the following error:
TypeError: Cannot convert random_stuff_printer._memoryviewslice to numpy.ndarray
How can I fix this issue?
I believe this is an issue of keeping your def/cdef functions and your cimport/import statements straight. Here's some code that works for me:
import numpy as np
cimport numpy as cnp

cdef cnp.ndarray[cnp.float64_t, ndim=4] randomizer():
    return np.random.random((4, 4, 4, 4))

def random_printer():
    cdef cnp.ndarray foo = randomizer()
    print(foo)
See for example this notebook: http://nbviewer.ipython.org/gist/arokem/6fa00ceb17e16c367c8a
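A quick way to build and exercise this, assuming the code lives in random_stuff_printer.pyx, is via pyximport; the include_dirs entry is needed because the module cimports numpy:

import numpy as np
import pyximport
pyximport.install(setup_args={'include_dirs': np.get_include()})

import random_stuff_printer
random_stuff_printer.random_printer()  # prints the 4x4x4x4 random array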

cython vs python different results within scipy.optimize.fsolve

I cythonized a function that I call many times in my code. The cython version and the original python code give me the same answers (to within 1e-7, which I understand has something to do with cython vs. python types; not the question here, but it might be important).
I attempt to find the root of the function using scipy.optimize.fsolve(). The python version works fine, but the cython version diverges.
The code is pretty involved and has a big external file to prepare some of the arguments, so I can't post everything. I post the cython code below. Full code is here.
def euler_outside(float b_prime, int index_b,
                  np.ndarray[np.double_t, ndim=1] b_grid, int index_y,
                  np.ndarray[np.double_t, ndim=1] y_grid,
                  np.ndarray[np.double_t, ndim=1] y_vec,
                  np.ndarray[np.double_t, ndim=2] pol_mat_b, float q,
                  np.ndarray[np.double_t, ndim=2] pol_mat_q,
                  np.ndarray[np.double_t, ndim=2] P, float beta,
                  int n_ygrid, int check=0):
    '''
    b_prime - the variable of interest. want to find b_prime that solves this
    function
    '''
    cdef double b, y, c, uc, e_ucp, eul_val
    cdef int i
    cdef np.ndarray[np.float64_t, ndim=1] uct, c_prime = np.zeros((n_ygrid,))

    b = b_grid[index_b]
    y = y_grid[index_y]

    # Get value of consumption today
    c = b + y - b_prime/q

    # Get possible values of consumption tomorrow
    if check:
        c_prime = b_prime + y_vec - b_grid[0]/q
    else:
        for i in range(n_ygrid):
            c_prime[i] = (b_prime + y_vec[i] -
                          (np.interp(b_prime, b_grid, pol_mat_b[:,i]) /
                           np.interp(b_prime, b_grid, pol_mat_q[:,i])))

    if c < 0:
        return 1e10

    uc = utility_prime(c)
    uct = utility_prime(c_prime)
    e_ucp = np.inner(uct, P[index_y,:])
    eul_val = uc - beta*q * e_ucp
    return eul_val
The python code is the same but without the cdef statements and the type info on the arguments. I've checked that the two give the same output for the same input values, and they do. My question is why scipy's fsolve goes off the deep end for one and not the other. I assume it's a problem with my cython?
Running python 2.7 from Anaconda. Compiling the extension module via pyximport.
As mentioned in the comments above, the reason for the discrepancy between the results from the Python and Cython versions is that in the Cython function, several of the inputs are declared as float, whereas the actual Python variables are double precision.
The resulting increase in round-off error in the Cython function seems to be the reason why fsolve fails to converge: when these inputs are declared as double instead, the Python and Cython versions yield the exact same result, and fsolve converges correctly for both.
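Concretely, the fix amounts to widening the offending declarations in the signature (a sketch based on the function above; the body is unchanged):

def euler_outside(double b_prime, int index_b,
                  np.ndarray[np.double_t, ndim=1] b_grid, int index_y,
                  np.ndarray[np.double_t, ndim=1] y_grid,
                  np.ndarray[np.double_t, ndim=1] y_vec,
                  np.ndarray[np.double_t, ndim=2] pol_mat_b, double q,
                  np.ndarray[np.double_t, ndim=2] pol_mat_q,
                  np.ndarray[np.double_t, ndim=2] P, double beta,
                  int n_ygrid, int check=0):
    pass  # body exactly as in the question; only float -> double in the signature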
As an aside, cases where round-off error in the objective function prevents convergence are indicative of ill-conditioned problems. You might want to think about whether it's possible to re-formulate your model in order to improve its numerical stability.

Force NumPy ndarray to take ownership of its memory in Cython

Following this answer to "Can I force a numpy ndarray to take ownership of its memory?" I attempted to use the Python C API function PyArray_ENABLEFLAGS through Cython's NumPy wrapper and found it is not exposed.
The following attempt to expose it manually (this is just a minimum example reproducing the failure)
from libc.stdlib cimport malloc
import numpy as np
cimport numpy as np

np.import_array()

ctypedef np.int32_t DTYPE_t

cdef extern from "numpy/ndarraytypes.h":
    void PyArray_ENABLEFLAGS(np.PyArrayObject *arr, int flags)

def test():
    cdef int N = 1000
    cdef DTYPE_t *data = <DTYPE_t *>malloc(N * sizeof(DTYPE_t))
    cdef np.ndarray[DTYPE_t, ndim=1] arr = np.PyArray_SimpleNewFromData(1, &N, np.NPY_INT32, data)
    PyArray_ENABLEFLAGS(arr, np.NPY_ARRAY_OWNDATA)
fails with a compile error:
Error compiling Cython file:
------------------------------------------------------------
...
def test():
    cdef int N = 1000
    cdef DTYPE_t *data = <DTYPE_t *>malloc(N * sizeof(DTYPE_t))
    cdef np.ndarray[DTYPE_t, ndim=1] arr = np.PyArray_SimpleNewFromData(1, &N, np.NPY_INT32, data)
    PyArray_ENABLEFLAGS(arr, np.NPY_ARRAY_OWNDATA)
                        ^
------------------------------------------------------------
/tmp/test.pyx:19:27: Cannot convert Python object to 'PyArrayObject *'
My question: Is this the right approach to take in this case? If so, what am I doing wrong? If not, how do I force NumPy to take ownership in Cython, without going down to a C extension module?
You just have some minor errors in the interface definition. The following worked for me:
from libc.stdlib cimport malloc
import numpy as np
cimport numpy as np

np.import_array()

ctypedef np.int32_t DTYPE_t

cdef extern from "numpy/arrayobject.h":
    void PyArray_ENABLEFLAGS(np.ndarray arr, int flags)

cdef data_to_numpy_array_with_spec(void *ptr, np.npy_intp N, int t):
    cdef np.ndarray[DTYPE_t, ndim=1] arr = np.PyArray_SimpleNewFromData(1, &N, t, ptr)
    PyArray_ENABLEFLAGS(arr, np.NPY_OWNDATA)
    return arr

def test():
    N = 1000
    cdef DTYPE_t *data = <DTYPE_t *>malloc(N * sizeof(DTYPE_t))
    arr = data_to_numpy_array_with_spec(data, N, np.NPY_INT32)
    return arr
This is my setup.py file:
from distutils.core import setup, Extension
from Cython.Distutils import build_ext

ext_modules = [Extension("_owndata", ["owndata.pyx"])]
setup(cmdclass={'build_ext': build_ext}, ext_modules=ext_modules)
Build with python setup.py build_ext --inplace. Then verify that the data is actually owned:
import _owndata
arr = _owndata.test()
print arr.flags
Among others, you should see OWNDATA : True.
And yes, this is definitely the right way to deal with this, since numpy.pxd does exactly the same thing to export all the other functions to Cython.
@Stefan's solution works for most scenarios, but it is somewhat fragile. NumPy uses PyDataMem_NEW/PyDataMem_FREE for memory management, and it is an implementation detail that these calls are mapped to the usual malloc/free plus some memory tracing (I don't know what effect Stefan's solution has on the memory tracing; at least it doesn't seem to crash).
There are also more esoteric cases possible, in which free from the numpy library doesn't use the same memory allocator as malloc in the cython code (for example when they are linked against different runtimes, as in this github issue or this SO post).
The right tool to pass/manage the ownership of the data is PyArray_SetBaseObject.
First we need a python object that is responsible for freeing the memory. I'm using a self-made cdef class here (mostly for logging/demonstration purposes), but there are obviously other possibilities as well:
%%cython
from libc.stdlib cimport free

cdef class MemoryNanny:
    cdef void* ptr  # set to NULL by "constructor"

    def __dealloc__(self):
        print("freeing ptr=", <unsigned long long>(self.ptr))  # just for debugging
        free(self.ptr)

    @staticmethod
    cdef create(void* ptr):
        cdef MemoryNanny result = MemoryNanny()
        result.ptr = ptr
        print("nanny for ptr=", <unsigned long long>(result.ptr))  # just for debugging
        return result
...
Now, we use a MemoryNanny object as a sentinel for the memory, which gets freed as soon as the parent numpy array is destroyed. The code is a little awkward, because PyArray_SetBaseObject steals the reference, which Cython does not handle automatically:
%%cython
...
from cpython.object cimport PyObject
from cpython.ref cimport Py_INCREF
cimport numpy as np

# needed to initialize PyArray_API in order to be able to use it
np.import_array()

cdef extern from "numpy/arrayobject.h":
    # a little bit awkward: the reference to obj will be stolen
    # using PyObject* to signal that Cython cannot handle it automatically
    int PyArray_SetBaseObject(np.ndarray arr, PyObject *obj) except -1  # -1 means there was an error

cdef array_from_ptr(void *ptr, np.npy_intp N, int np_type):
    cdef np.ndarray arr = np.PyArray_SimpleNewFromData(1, &N, np_type, ptr)
    nanny = MemoryNanny.create(ptr)
    Py_INCREF(nanny)  # a reference will get stolen, so prepare nanny
    PyArray_SetBaseObject(arr, <PyObject*>nanny)
    return arr
...
And here is an example of how this functionality can be called:
%%cython
...
from libc.stdlib cimport malloc

def create():
    cdef double *ptr = <double*>malloc(sizeof(double)*8)
    ptr[0] = 42.0
    return array_from_ptr(ptr, 8, np.NPY_FLOAT64)
which can be used as follows:
>>> m = create()
nanny for ptr= 94339864945184
>>> m.flags
...
OWNDATA : False
...
>>> m[0]
42.0
>>> del m
freeing ptr= 94339864945184
with results/output as expected.
Note: the resulting array doesn't really own the data (i.e. its flags show OWNDATA : False), because the memory is owned by the memory-nanny, but the result is the same: the memory gets freed as soon as the array is deleted (because nobody holds a reference to the nanny anymore).
MemoryNanny doesn't have to guard a raw C pointer. It can be anything else, for example a std::vector:
%%cython -+
from libcpp.vector cimport vector

cdef class VectorNanny:
    # automatically default-initialized/destructed by Cython:
    cdef vector[double] vec

    @staticmethod
    cdef create(vector[double]& vec):
        cdef VectorNanny result = VectorNanny()
        result.vec.swap(vec)  # swap and not copy
        return result

# for testing:
def create_vector(int N):
    cdef vector[double] vec
    vec.resize(N, 2.0)
    return VectorNanny.create(vec)
The following test shows that the nanny works:

nanny = create_vector(10**8)  # top shows an additional 800MB of memory in use
del nanny                     # top shows the additional memory is no longer in use
The latest Cython version allows you to do this with minimal syntax, albeit with slightly more overhead than the lower-level solutions suggested above.
numpy_array = np.asarray(<np.int32_t[:10, :10]> my_pointer)
https://cython.readthedocs.io/en/latest/src/userguide/memoryviews.html#coercion-to-numpy
This alone does not pass ownership.
Notably, a Cython array is generated with this call, via array_cwrapper.
This generates a cython.array without allocating memory. The cython.array uses stdlib.h's malloc and free by default, so it is expected that you use the default malloc here as well, instead of any special CPython/NumPy allocators.
free is only called on a cython.array if ownership is set, which by default happens only when it allocates the data itself. For our case, we can set it manually via:
my_cyarr.free_data = True
So to return a 1D array, it would be as simple as:
from cython.view cimport array as cvarray
# ...
cdef cvarray cvarr = <np.int32_t[:N]> data
cvarr.free_data = True
return np.asarray(cvarr)
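A sanity check along the lines of the nanny example above (make_array here is a hypothetical wrapper around the snippet): the returned NumPy array again reports OWNDATA : False, because the cython.array owns the buffer and is kept alive as the result's base, and the memory is freed once that base is collected.

arr = make_array()           # hypothetical wrapper around the snippet above
print(arr.flags['OWNDATA'])  # False: the underlying cython.array owns the buffer
del arr                      # last reference dropped; free_data lets the buffer be freed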
