Cython prange with an array of strings - python

I'm trying to use prange in order to process multiple strings.
As it is not possible to do this with a python list, I'm using a numpy array.
With an array of floats, this function works:
from cython.parallel import prange
cimport numpy as np
from numpy cimport ndarray as ar

cpdef func_float(ar[np.float64_t, cast=True] x, double alpha):
    cdef int i
    for i in prange(x.shape[0], nogil=True):
        x[i] = alpha * x[i]
    return x
When I try this simple one:
cpdef func_string(ar[np.str, cast=True] x):
    cdef int i
    for i in prange(x.shape[0], nogil=True):
        x[i] = x[i] + str(i)
    return x
I'm getting this:
>>> func_string(x = np.array(["apple","pear"], dtype=np.str))
  File "processing.pyx", line 8, in processing.func_string
    cpdef func_string(ar[np.str, cast=True] x):
ValueError: Item size of buffer (20 bytes) does not match size of 'str object' (8 bytes)
I'm probably missing something and I can't find an alternative to str.
Is there a way to properly use prange with an array of strings?

Leaving aside the fact that your code should fail to cythonize, because you try to create a Python object (i.e. str(i)) without the GIL, your code isn't doing what you think it does.
In order to analyse what is going on, let's take a look at a much simpler Cython version:
%%cython -2
cimport numpy as np
from numpy cimport ndarray as ar

cpdef func_string(ar[np.str, cast=True] x):
    print(len(x))
From your error message one can deduce that you use Python 3 and that the Cython extension is built with the (still default) language_level=2; thus I'm using -2 in the %%cython magic cell.
And now:
>>> x = np.array(["apple", "pear"], dtype=np.str)
>>> func_string(x)
ValueError: Item size of buffer (20 bytes) does not match size of 'str object' (8 bytes)
What is going on?
x is not what you think it is
First, let's take a look at x:
>>> x.dtype
dtype('<U5')
So x isn't a collection of unicode objects. An element of x consists of 5 unicode characters, and those elements are stored contiguously in memory, one after another. What is important: it is the same information as in unicode objects, but stored in a different memory layout.
This is one of numpy's quirks and how np.array works: every element in the list is converted to a unicode object, then the maximal size of the elements is determined, and from it the dtype (in this case <U5) is calculated and used.
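For instance (a quick check of my own, not part of the original answer), the widest string determines the dtype:
>>> np.array(["a", "pear"]).dtype
dtype('<U4')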
np.str is interpreted differently in Cython code (ar[np.str] x) - twice!
First difference: in your Python 3 code np.str stands for unicode, but in your Cython code, which is cythonized with language_level=2, np.str stands for bytes (see the docs).
Second difference: seeing np.str, Cython will interpret it as an array of Python objects (maybe this should be seen as a Cython bug) - it is almost the same as if the dtype were np.object; actually, the only difference from np.object is slightly different error messages.
With this information we can understand the error message. At runtime, the input array is checked (before the first line of the function is executed!):
- expected: an array of Python objects, i.e. 8-byte pointers, i.e. an element size of 8 bytes
- received: an array with an element size of 5*4 = 20 bytes (one unicode character is 4 bytes)
Thus the cast cannot be done and the observed exception is raised.
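One can verify the two item sizes directly (a small check of my own, not part of the original answer):
>>> x = np.array(["apple", "pear"])
>>> x.itemsize
20
>>> np.array(["apple", "pear"], dtype=object).itemsize
8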
You cannot change the size of an element in an <U..-numpy array:
Now let's take a look at the following:
>>> x = np.array(["apple", "pear"], dtype=np.str)
>>> x[0] = x[0]+str(0)
>>> x[0]
'apple'
The element didn't change, because the string x[0]+str(0) was truncated while being written back to the x array: there is only room for 5 characters! It would work (to some degree, as long as the resulting string has no more than 5 characters) with "pear" though:
>>> x[1] = x[1]+str(1)
>>> x[1]
'pear0'
Where does this all leave you?
- you probably want to use bytes and not unicode (i.e. dtype=np.bytes_)
- given that you don't know the element size of your numpy array at compile time, you should declare the input array x as plain ar x in the signature and roll out the runtime checks yourself, similar to what is done in Cython's "deprecated" numpy tutorial.
- if the changes should be done in-place, the elements of the input array must be big enough for the resulting strings.
None of the above has anything to do with prange. To use prange you cannot use str(i), because it operates on Python objects and therefore needs the GIL.
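As an illustration of the bytes-based route, here is a minimal, hedged sketch of my own (append_index is an illustrative name, not code from the answer): it appends a digit in-place to each element of a fixed-width bytes array by working on the raw uint8 buffer, so the loop body stays free of Python objects and can run under prange. It assumes a contiguous array (e.g. dtype='S6') whose elements have spare room, and a module compiled with OpenMP:

import numpy as np
cimport numpy as np
from cython.parallel import prange

cpdef append_index(x):
    cdef Py_ssize_t n = x.shape[0]
    cdef Py_ssize_t width = x.dtype.itemsize
    # reinterpret the fixed-width bytes as an (n, width) uint8 buffer
    cdef unsigned char[:, ::1] buf = x.view(np.uint8).reshape(n, width)
    cdef Py_ssize_t i, j
    for i in prange(n, nogil=True):
        for j in range(width):
            if buf[i, j] == 0:             # first NUL byte marks the end
                buf[i, j] = 48 + (i % 10)  # write an ASCII digit '0'-'9'
                break                      # done with this element
    return x

Calling append_index(np.array([b"apple", b"pear"], dtype="S6")) would then yield array([b'apple0', b'pear1'], dtype='|S6'); elements that already fill the full width are left unchanged.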

Related

int64 vs array(int64, 0d, C) in numba

I define a jitted function returning a tuple using numba. It's something like the following.
import numba as nb
from numba.types import Tuple
import numpy as np

FOO_T = Tuple.from_types((nb.types.NPDatetime('M'), nb.int64))

@nb.jit([FOO_T(nb.types.NPDatetime('M'), nb.types.NPDatetime('M'))], nopython=True)
def foo(input1, input2):
    temp1 = input1
    temp2 = np.array(input1 - input2).view(nb.int64)
    output = (temp1, temp2)
    return output
A TypingError is reported, as shown below. The second element of the output tuple is declared as int64; however, it is actually compiled as array(int64, 0d, C).
TypingError: No conversion from Tuple(datetime64[M], array(int64, 0d, C)) to Tuple(datetime64[M], int64) for '$38return_value.15', defined at None
I have no idea how to make them consistent. Thanks for your help.
np.array(input1 - input2).view(nb.int64) is an array of int64 and not a scalar. This is why Numba reports an error. Note that np.array(input1 - input2) results in a weird type: an array of dimension 0. AFAIK, this is what Numpy uses to represent scalars, but such an array can neither be indexed in Numba nor converted to a scalar.
You could subtract the two scalars and build an array with np.array([input1 - input2]) and then call view. That being said, view is probably not what you want here, as it reinterprets the binary representation of an NPDatetime as an integer. This is really unsafe, and AFAIK there is no reason to assume it can work. You can just take the difference and cast the result with np.uint64(input1 - input2).
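Putting that suggestion together, a hedged sketch (foo_fixed is my own name; I use np.int64 so the result matches the asker's declared FOO_T, and whether this exact cast compiles may depend on your Numba version):

import numba as nb
import numpy as np

@nb.jit(nopython=True)
def foo_fixed(input1, input2):
    # cast the timedelta difference to a scalar int64 instead of
    # wrapping it in a 0d array
    return (input1, np.int64(input1 - input2))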

never allocate an output of numpy.ufunc

This question has info on using an input as an output to compute something in place with a numpy.ufunc:
Numpy passing input array as `out` argument to ufunc
Is it possible to avoid allocating space for an unwanted output of a numpy.ufunc? For example, say I only want one of the two outputs from modf. Can I ensure that the other, unwanted array is never allocated at all?
I thought passing _ to out might do it, but it throws an error:
import numpy as np
ar = np.arange(6) / 3
np.modf(ar, out=(ar, _))

TypeError: return arrays must be of ArrayType
As it says in the docs, passing None means that the output array is allocated in the function and returned. I can ignore the returned values, but it still has to be allocated and populated inside the function.
You can minimize allocation by passing a "fake" array:
ar = np.arange(6) / 3
np.modf(ar, ar, np.broadcast_arrays(ar.dtype.type(0), ar)[0])
This dummy array is only as big as a single double, and modf will not allocate internally.
EDIT: following suggestions from @Eric and @hpaulj, a more general and long-term solution would be
np.lib.stride_tricks._broadcast_to(np.empty(1, ar.dtype), ar.shape, False, False)
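For example, a hedged usage sketch of my own for the trick above (note that _broadcast_to is a private numpy helper, so it may change between versions):

import numpy as np

ar = np.arange(6) / 3
# one real element, broadcast to ar's shape with stride 0, writable
dummy = np.lib.stride_tricks._broadcast_to(np.empty(1, ar.dtype), ar.shape, False, False)
np.modf(ar, ar, dummy)  # ar now holds only the fractional parts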

Why do numpy arrays turn ints into floats

I'm trying to fill an array with integers, but it seems like the numpy array keeps turning the integers into floats. Why is this happening, and how do I stop it?
>>> arr = np.empty(9)
>>> arr[3] = 7
>>> print(arr[3])
7.0
NumPy arrays, unlike Python lists, can contain only a single type, which (as far as I know) is set at creation time. Everything you put into the array gets converted to that type.
By default, the data type is assumed to be float. To set another type, you can pass dtype to the empty function like this:
>>> arr = np.empty(9, dtype=int)
>>> arr[3] = 7
>>> arr[3]
7
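A side note, not part of the original answer: np.empty leaves the memory uninitialized, so unassigned slots hold arbitrary values. If you want a defined starting point, np.zeros is the safer choice:
>>> arr = np.zeros(9, dtype=int)
>>> arr[3] = 7
>>> arr
array([0, 0, 0, 7, 0, 0, 0, 0, 0])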

Pass numpy array of list of integers in Cython method from Python

I would like to pass the following array of lists of integers (i.e., it's not a two-dimensional array) to a Cython method from Python code.
Python Sample Code:
import numpy as np
import result
a = np.array([[1], [2,3]])
process_result(a)
The output of a is array([list([1]), list([2, 3])], dtype=object)
Cython Sample Code:
def process_result(int[:,:] a):
    pass
The above code gives the following error:
ValueError: Buffer has wrong number of dimensions (expected 2, got 1)
When I tried to pass a plain list instead of a numpy array, I got the following error:
a = [[1], [2,3]]
process_result(a)

TypeError: a bytes-like object is required, not 'list'
Kindly assist me with how to pass the value of a into the Cython method process_result, and what exact datatype needs to be used to receive this value in the Cython method.
I think you're using the wrong data type. Instead of a numpy array of lists, you should be using a list of numpy arrays. There is very little benefit in using numpy arrays of Python objects (such as lists) - unlike numeric types, they aren't stored particularly efficiently, they aren't quick to do calculations on, and you can't accelerate them in Cython. Therefore the outermost level may as well be a normal Python list.
However, the inner levels all look to be homogeneous arrays of integers, and so would be ideal candidates for numpy arrays (especially if you want to process them in Cython).
Therefore, build your list as:
a = [ np.array([1], dtype=np.intc), np.array([2,3], dtype=np.intc) ]  # np.intc matches a C int
(Or use tolist on a numpy array)
For your function you can define it like:
def process_result(list a):
    cdef int[:] item
    for item in a:
        # operations on the inner arrays are fast!
        pass
Here I've assumed that you most likely want to iterate over the list. Note that there's pretty little benefit in typing a as list, so you could just leave it untyped (to accept any Python object); then you could pass it other iterables too, like your original numpy array.
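For example, a fuller hedged sketch along those lines (sum_inner is my own illustrative name) that sums each inner array with a typed loop:

def sum_inner(list a):
    cdef int[:] item
    cdef Py_ssize_t i
    cdef long total
    result = []
    for item in a:
        total = 0
        for i in range(item.shape[0]):
            total += item[i]   # fast, typed access
        result.append(total)
    return result

With the list built above, sum_inner(a) returns [1, 5].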
Convert the array of lists of integers to a list of objects (i.e., a list of lists of integers - it's not a two-dimensional array).
Python Code:
import numpy as np
import result
a = np.array([[1], [2,3]]).tolist()
process_result(a)
The output of a is [[1], [2,3]]
Cython Sample Code:
def process_result(list a):
    pass
Change the int[:, :] to list, and it works fine.
Note: if anyone knows a more optimal answer, kindly post it; it will be helpful.
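For completeness, a hedged sketch of my own (not part of the answers above) of iterating such nested lists in Cython; each inner element is unboxed to a C int because the loop variable is typed:

def process_result(list a):
    cdef list inner
    cdef int v
    for inner in a:
        for v in inner:
            pass  # work on v here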

Python numpy array vs list

I need to perform some calculations on a large list of numbers.
Do array.array or numpy.array offer significant performance boost over typical arrays?
I don't have to do complicated manipulations on the arrays; I just need to be able to access and modify values, e.g.:
import numpy
x = numpy.array([0] * 1000000)
for i in range(1, len(x)):
    x[i] = x[i-1] + i
So I will not really be needing concatenation, slicing, etc.
Also, it looks like array throws an error if I try to assign values that don't fit in a C long:
import numpy
a = numpy.array([0])
a[0] += 1232234234234324353453453
print(a)

On the console I get:

a[0] += 1232234234234324353453453
OverflowError: Python int too large to convert to C long
Is there a variation of array that lets me put in unbounded Python integers?
Or would doing it that way take away the point of having arrays in the first place?
You first need to understand the difference between arrays and lists.
An array is a contiguous block of memory consisting of elements of some type (e.g. integers).
You cannot change the size of an array once it is created.
It therefore follows that each integer element in an array has a fixed size, e.g. 4 bytes.
On the other hand, a list is merely an "array" of addresses (which also have a fixed size).
But then each element holds the address of something else in memory, which is the actual integer that you want to work with. Of course, the size of this integer is irrelevant to the size of the array. Thus you can always create a new (bigger) integer and "replace" the old one without affecting the size of the array, which merely holds the address of an integer.
Of course, this convenience of a list comes at a cost: performing arithmetic on the integers now requires a memory access to the list, plus a memory access to the integer itself, plus the time it takes to allocate more memory (if needed), plus the time required to delete the old integer (if needed). So yes, it can be slower, so you have to be careful what you're doing with each integer inside a list.
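A small, hedged illustration of the layout difference described above (exact sizes vary by platform and Python version):

import sys
import numpy as np

arr = np.array([1, 2, 3], dtype=np.int32)
print(arr.itemsize)      # 4: each element is a fixed-size 4-byte slot
print(sys.getsizeof(1))  # ~28 on 64-bit CPython: a complete Python int object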
Your first example could be sped up. Python loops and access to individual items of a numpy array are slow. Use vectorized operations instead:
import numpy as np
x = np.arange(1000000).cumsum()
You can put unbounded Python integers into a numpy array:
a = np.array([0], dtype=object)
a[0] += 1232234234234324353453453
Arithmetic operations would be slower in this case than with fixed-size C integers.
For most uses, lists are sufficient. Sometimes, however, working with numpy arrays is more convenient. Consider, for example:
a = [1,2,3,4,5,6,7,8,9,10]
b = [5,8,9]
If you want to access the elements of list a at the discrete indices given in list b, writing
a[b]
will not work. But when you use them as numpy arrays, you can simply write
a[b]
to get the output array([6, 9, 10]).
Do array.array or numpy.array offer significant performance boost over typical arrays?
I tried to test this a bit with the following code:
import timeit, math, array
from functools import partial
import numpy as np

# from the question
def calc1(x):
    for i in range(1, len(x)):
        x[i] = x[i-1] + 1

# a floating point operation
def calc2(x):
    for i in range(0, len(x)):
        x[i] = math.sin(i)

L = int(1e5)

# np
print('np 1: {:.5f} s'.format(timeit.timeit(partial(calc1, np.array([0] * L)), number=20)))
print('np 2: {:.5f} s'.format(timeit.timeit(partial(calc2, np.array([0] * L)), number=20)))

# np but with vectorized form
vfunc = np.vectorize(math.sin)
print('np 2 vectorized: {:.5f} s'.format(timeit.timeit(partial(vfunc, np.arange(0, L)), number=20)))

# with list
print('list 1: {:.5f} s'.format(timeit.timeit(partial(calc1, [0] * L), number=20)))
print('list 2: {:.5f} s'.format(timeit.timeit(partial(calc2, [0] * L), number=20)))

# with array
print('array 1: {:.5f} s'.format(timeit.timeit(partial(calc1, array.array("f", [0] * L)), number=20)))
print('array 2: {:.5f} s'.format(timeit.timeit(partial(calc2, array.array("f", [0] * L)), number=20)))
And the results were that list executes fastest here (Python 3.3, NumPy 1.8):
np 1: 2.14277 s
np 2: 0.77008 s
np 2 vectorized: 0.44117 s
list 1: 0.29795 s
list 2: 0.66529 s
array 1: 0.66134 s
array 2: 0.88299 s
This seems counterintuitive. There doesn't seem to be any advantage in using numpy or array over list for these simple examples.
To OP: For your use case use lists.
My rules for when to use which, considering robustness and speed:
list: (most robust, fastest for mutable cases)
Ex. when your list is constantly mutating, as in a physics simulation, or when you are "creating" data from scratch that may be unpredictable in nature.
np.array: (less robust, fastest for linear algebra & data post-processing)
Ex. when you are "post-processing" a data set that you have already collected via sensors or a simulation, performing operations that can be vectorized.
Do array.array or numpy.array offer significant performance boost over typical arrays?
It can, depending on what you're doing.
Or would doing it that way take away the point of having arrays in the first place?
Pretty much, yeah.
Use a = numpy.zeros(number_of_elements, dtype=numpy.int64) (note: numpy.array expects the data itself, not an element count), which gives you an array of 64-bit integers. These can store any integer between -2^63 and 2^63 - 1 (approximately between -10^19 and 10^19), which is usually more than enough.
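As a quick check of the quoted range (a hedged addition of my own):
>>> import numpy as np
>>> np.iinfo(np.int64).min, np.iinfo(np.int64).max
(-9223372036854775808, 9223372036854775807)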
