math.fsum for arrays of multiple dimensions - python

I have a numpy array of shape (i, j) in which I would like to sum over the first dimension to get an array of shape (j,). Normally, I'd use NumPy's own sum:
import numpy
a = numpy.random.rand(100, 77)
numpy.sum(a, axis=0)
but in my case it doesn't cut it: some of the sums are very ill-conditioned, so the computed results have only a few correct digits.
math.fsum is fantastic at keeping the errors at bay, but it only applies to one-dimensional iterables. numpy.vectorize doesn't do the job either.
How can I efficiently apply math.fsum to an array of multiple dimensions?
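To see the problem in isolation, here is a minimal cancellation example (a classic textbook case, not the actual data):
import math

numbers = [1e16, 1.0, -1e16]
sum(numbers)        # 0.0, the 1.0 is lost to rounding
math.fsum(numbers)  # 1.0, exact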

This one works fast enough for me.
import numpy
import math
a = numpy.random.rand(100, 77)
a = numpy.swapaxes(a, 0, 1)
a = numpy.array([math.fsum(row) for row in a])
Hopefully it's the axis you are looking for (returns 77 sums).
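Incidentally, for a 2-D array numpy.swapaxes(a, 0, 1) is the same as a.T, so the loop can also be written without the intermediate step (an equivalent variant, not from the original answer):
sums = numpy.array([math.fsum(col) for col in a.T])  # the same 77 column sums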

Check out the signature keyword to vectorize.
_math_fsum_vec = numpy.vectorize(math.fsum, signature='(m)->()')
Unfortunately, it's slower than the for solution:
Code to reproduce the plot:
import math
import numpy
import perfplot
_math_fsum_vec = numpy.vectorize(math.fsum, signature='(m)->()')
def fsum_vectorize(a):
    return _math_fsum_vec(a.T).T

def fsum_for(a):
    return numpy.array([math.fsum(row) for row in a.T])

perfplot.save(
    'fsum.png',
    setup=lambda n: numpy.random.rand(n, 100),
    kernels=[fsum_vectorize, fsum_for],
    n_range=[2**k for k in range(12)],
    logx=True,
    logy=True,
)

Fast Bitwise Get Column in Python

Is there an efficient way to get an array of the boolean values that sit at the n-th bit position of a bit-packed array in Python?
Create a numpy array with values 0 or 1:
import numpy as np
array = np.array(
    [
        [1, 0, 1],
        [1, 1, 1],
        [0, 0, 1],
    ]
)
Compress it with np.packbits:
pack_array = np.packbits(array, axis=1)
The expected result is some function that gets all values of the n-th column from the bit-packed array. For example, if I wanted the second column, I would like to get the same as the call array[:,1]:
array([0, 1, 0])
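As a sanity check (illustrative only, since it expands the whole array and gives up the memory savings), np.unpackbits reverses the packing and recovers the columns:
unpacked = np.unpackbits(pack_array, axis=1)  # shape (3, 8): each packed byte expands to 8 bits
unpacked[:, 1]  # array([0, 1, 0], dtype=uint8), matches array[:, 1]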
I have tried numba with the following function. It returns the right results but it is very slow:
import numpy as np
from numba import njit
@njit(nopython=True, fastmath=True)
def getVector(packed, j):
    n = packed.shape[0]
    res = np.zeros(n, dtype=np.int32)
    for i in range(n):
        res[i] = bool(packed[i, j//8] & (128 >> (j % 8)))
    return res
How to test it?
import numpy as np
import time
from numba import njit
array = np.random.choice(a=[False, True], size=(100000000,15))
pack_array = np.packbits(array, axis=1)
start = time.time()
array[:,10]
print('np array')
print(time.time()-start)
@njit(nopython=True, fastmath=True)
def getVector(packed, j):
    n = packed.shape[0]
    res = np.zeros(n, dtype=np.int32)
    for i in range(n):
        res[i] = bool(packed[i, j//8] & (128 >> (j % 8)))
    return res
# To initialize
getVector(pack_array, 10)
start = time.time()
getVector(pack_array, 10)
print('getVector')
print(time.time()-start)
It returns:
np array
0.00010132789611816406
getVector
0.15648770332336426
Besides some micro-optimisations, I don't believe that there is much that can be optimised here. There are also a few small mistakes in your code:
@njit(nopython=True) says the same thing twice (the n in njit already stands for nopython mode); simply @njit or @jit(nopython=True) should be used.
fastmath is for "cutting corners" when doing floating-point math; since we are only working with integers and booleans, it can be safely removed because it does nothing for us here.
My updated code (seeing a meagre 40% performance increase on my machine):
import numba as nb
import numpy as np
np.random.seed(0)
array = np.random.choice(a=[False, True], size=(10000000,15))
pack_array = np.packbits(array, axis=1)
@nb.njit(locals={'res': nb.boolean[:]})
def getVector(packed, j):
    n = packed.shape[0]
    res = np.zeros(n, dtype=nb.boolean)
    byte = j // 8
    bit = 128 >> (j % 8)
    for i in range(n):
        res[i] = bool(packed[i, byte] & bit)
    return res
getVector(pack_array, 10)
In your answer, res is an array of 32-bit integers; by giving np.zeros() the numba (NOT numpy) boolean datatype, we can swap it to the more efficient booleans. This is where most of the performance improvement comes from. On my machine, hoisting the byte and bit computations out of the loop had no noticeable effect, but it did have an effect for the commenter @Michael Szczesny, so it might help you as well.
I would not try to use strides, which @Nick ODell is suggesting, because they can be quite dangerous if used incorrectly (see the numpy documentation).
Edit: I have made a few small changes that were suggested by Michael. (Thanks!)

How to use numpy.eye function with custom diagonal values?

I am trying to use a single line of code to make a matrix of zeros except for a custom value on the diagonal. I am able to do it with the code below, but am wondering if I can do it using only np.eye?
import numpy as np
a = np.eye(4, 4, k=0)
np.fill_diagonal(a, 4)
print(a)
Try the identity matrix in the numpy module:
a = np.identity(10) * 4
import numpy as np
a = np.eye(4)*4
print(a)
I would avoid np.eye() altogether and just use np.fill_diagonal() on a zeroed matrix, if you are not using any of its features:
import numpy as np
def value_eye_fill(value, n):
    result = np.zeros((n, n))
    np.fill_diagonal(result, value)
    return result
That should be the fastest approach for larger inputs, within NumPy.
Of course you can also use np.eye() and avoid np.fill_diagonal() by just multiplying the value by the result of np.eye():
import numpy as np
def value_eye_fill(value, n):
    return value * np.eye(n)
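Both functions produce the same matrix; for example:
print(value_eye_fill(4, 3))
# [[4. 0. 0.]
#  [0. 4. 0.]
#  [0. 0. 4.]]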

Invert particular bits in a byte array [duplicate]

What is the most efficient way to map a function over a numpy array? I am currently doing:
import numpy as np
x = np.array([1, 2, 3, 4, 5])
# Obtain array of square of each element in x
squarer = lambda t: t ** 2
squares = np.array([squarer(xi) for xi in x])
However, this is probably very inefficient, since I am using a list comprehension to construct the new array as a Python list before converting it back to a numpy array. Can we do better?
I've tested all suggested methods plus np.array(list(map(f, x))) with perfplot (a small project of mine).
Message #1: If you can use numpy's native functions, do that.
If the function you're trying to vectorize is already vectorized (like the x**2 example in the original post), using it directly is much faster than anything else (note the log scale of the plot):
If you actually need vectorization, it doesn't really matter much which variant you use.
Code to reproduce the plots:
import numpy as np
import perfplot
import math
def f(x):
    # return math.sqrt(x)
    return np.sqrt(x)

vf = np.vectorize(f)

def array_for(x):
    return np.array([f(xi) for xi in x])

def array_map(x):
    return np.array(list(map(f, x)))

def fromiter(x):
    return np.fromiter((f(xi) for xi in x), x.dtype)

def vectorize(x):
    return np.vectorize(f)(x)

def vectorize_without_init(x):
    return vf(x)

b = perfplot.bench(
    setup=np.random.rand,
    n_range=[2 ** k for k in range(20)],
    kernels=[
        f,
        array_for,
        array_map,
        fromiter,
        vectorize,
        vectorize_without_init,
    ],
    xlabel="len(x)",
)
b.save("out1.svg")
b.show()
How about using numpy.vectorize?
import numpy as np
x = np.array([1, 2, 3, 4, 5])
squarer = lambda t: t ** 2
vfunc = np.vectorize(squarer)
vfunc(x)
# Output : array([ 1, 4, 9, 16, 25])
TL;DR
As noted by @user2357112, a "direct" method of applying the function is always the fastest and simplest way to map a function over NumPy arrays:
import numpy as np
x = np.array([1, 2, 3, 4, 5])
f = lambda x: x ** 2
squares = f(x)
Generally avoid np.vectorize, as it does not perform well, and has (or had) a number of issues. If you are handling other data types, you may want to investigate the other methods shown below.
Comparison of methods
Here are some simple tests to compare the different methods of mapping a function, this example using Python 3.6 and NumPy 1.15.4. First, the set-up functions for testing:
import timeit
import numpy as np
f = lambda x: x ** 2
vf = np.vectorize(f)
def test_array(x, n):
    t = timeit.timeit(
        'np.array([f(xi) for xi in x])',
        'from __main__ import np, x, f', number=n)
    print('array: {0:.3f}'.format(t))

def test_fromiter(x, n):
    t = timeit.timeit(
        'np.fromiter((f(xi) for xi in x), x.dtype, count=len(x))',
        'from __main__ import np, x, f', number=n)
    print('fromiter: {0:.3f}'.format(t))

def test_direct(x, n):
    t = timeit.timeit(
        'f(x)',
        'from __main__ import x, f', number=n)
    print('direct: {0:.3f}'.format(t))

def test_vectorized(x, n):
    t = timeit.timeit(
        'vf(x)',
        'from __main__ import x, vf', number=n)
    print('vectorized: {0:.3f}'.format(t))
Testing with five elements (sorted from fastest to slowest):
x = np.array([1, 2, 3, 4, 5])
n = 100000
test_direct(x, n) # 0.265
test_fromiter(x, n) # 0.479
test_array(x, n) # 0.865
test_vectorized(x, n) # 2.906
With 100s of elements:
x = np.arange(100)
n = 10000
test_direct(x, n) # 0.030
test_array(x, n) # 0.501
test_vectorized(x, n) # 0.670
test_fromiter(x, n) # 0.883
And with 1000s of array elements or more:
x = np.arange(1000)
n = 1000
test_direct(x, n) # 0.007
test_fromiter(x, n) # 0.479
test_array(x, n) # 0.516
test_vectorized(x, n) # 0.945
Different versions of Python/NumPy and compiler optimization will have different results, so do a similar test for your environment.
There are numexpr, numba and cython around; the goal of this answer is to take these possibilities into consideration.
But first let's state the obvious: no matter how you map a Python function onto a numpy array, it stays a Python function, which means that for every evaluation:
the numpy array element must be converted to a Python object (e.g. a float);
all calculations are done with Python objects, which means the overhead of the interpreter, dynamic dispatch and immutable objects.
So which machinery is used to actually loop through the array doesn't play a big role; because of the overhead mentioned above, any approach stays much slower than numpy's built-in functionality.
Let's take a look at the following example:
# numpy functionality
def f(x):
    return x + 2*x*x + 4*x*x*x

# python function as ufunc
import numpy as np

vf = np.vectorize(f)
vf.__name__ = "vf"
np.vectorize is picked as a representative of the pure-Python class of approaches. Using perfplot (see the code in the appendix of this answer) we get the following running times:
We can see that the numpy approach is 10x-100x faster than the pure Python version. The drop in performance for bigger array sizes is probably because the data no longer fits the cache.
It is also worth mentioning that vectorize uses a lot of memory, so memory usage is often the bottleneck (see this related SO question). Also note that numpy's documentation on np.vectorize states that it is "provided primarily for convenience, not for performance".
Other tools should be used when performance is desired; besides writing a C extension from scratch, there are the following possibilities:
One often hears that numpy's performance is as good as it gets, because it is pure C under the hood. Yet there is a lot of room for improvement!
The vectorized numpy version uses a lot of additional memory and memory accesses. The numexpr library tries to tile the numpy arrays and thus get better cache utilization:
# fewer cache misses than the numpy functionality
import numexpr as ne

def ne_f(x):
    return ne.evaluate("x+2*x*x+4*x*x*x")
Leads to the following comparison:
I cannot explain everything in the plot above: we can see bigger overhead for the numexpr library at the beginning, but because it utilizes the cache better, it is about 10 times faster for bigger arrays!
Another approach is to jit-compile the function and thus get a real pure-C ufunc. This is numba's approach:
# runtime-generated C function as ufunc
import numba as nb

@nb.vectorize(target="cpu")
def nb_vf(x):
    return x + 2*x*x + 4*x*x*x
It is 10 times faster than the original numpy-approach:
However, the task is embarrassingly parallel, so we could also use prange to calculate the loop in parallel:
@nb.njit(parallel=True)
def nb_par_jitf(x):
    y = np.empty(x.shape)
    for i in nb.prange(len(x)):
        y[i] = x[i] + 2*x[i]*x[i] + 4*x[i]*x[i]*x[i]
    return y
As expected, the parallel function is slower for smaller inputs, but faster (almost factor 2) for larger sizes:
While numba specializes in optimizing operations with numpy arrays, Cython is a more general tool. Extracting the same performance as with numba is more complicated; often it comes down to llvm (numba) vs the local compiler (gcc/MSVC):
%%cython -c=/openmp -a
import numpy as np
import cython

# single core:
@cython.boundscheck(False)
@cython.wraparound(False)
def cy_f(double[::1] x):
    y_out = np.empty(len(x))
    cdef Py_ssize_t i
    cdef double[::1] y = y_out
    for i in range(len(x)):
        y[i] = x[i] + 2*x[i]*x[i] + 4*x[i]*x[i]*x[i]
    return y_out

# parallel:
from cython.parallel import prange

@cython.boundscheck(False)
@cython.wraparound(False)
def cy_par_f(double[::1] x):
    y_out = np.empty(len(x))
    cdef double[::1] y = y_out
    cdef Py_ssize_t i
    cdef Py_ssize_t n = len(x)
    for i in prange(n, nogil=True):
        y[i] = x[i] + 2*x[i]*x[i] + 4*x[i]*x[i]*x[i]
    return y_out
Cython results in somewhat slower functions:
Conclusion
Obviously, testing only one function doesn't prove anything. One should also keep in mind that for the chosen example function, memory bandwidth was the bottleneck for sizes larger than 10^5 elements, which is why numba, numexpr and cython all showed the same performance in this region.
In the end, the ultimate answer depends on the type of function, hardware, Python distribution and other factors. For example, the Anaconda distribution uses Intel's VML for numpy's functions and thus easily outperforms numba (unless it uses SVML, see this SO post) for transcendental functions like exp, sin, cos and similar; see e.g. the following SO post.
Yet from this investigation and from my experience so far, I would say that numba seems to be the easiest tool with the best performance, as long as no transcendental functions are involved.
Plotting running times with the perfplot package:
import perfplot

perfplot.show(
    setup=lambda n: np.random.rand(n),
    n_range=[2**k for k in range(0, 24)],
    kernels=[
        f,
        vf,
        ne_f,
        nb_vf,
        nb_par_jitf,
        cy_f,
        cy_par_f,
    ],
    logx=True,
    logy=True,
    xlabel='len(x)',
)
squares = squarer(x)
Arithmetic operations on arrays are automatically applied elementwise, with efficient C-level loops that avoid all the interpreter overhead that would apply to a Python-level loop or comprehension.
Most of the functions you'd want to apply to a NumPy array elementwise will just work, though some may need changes. For example, if statements don't work elementwise. You'd want to convert those to use constructs like numpy.where:
def using_if(x):
    if x < 5:
        return x
    else:
        return x**2
becomes
def using_where(x):
    return numpy.where(x < 5, x, x**2)
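For example, applied to a small array, values below 5 pass through and the rest are squared:
import numpy

x = numpy.array([1, 3, 5, 7])
using_where(x)  # array([ 1,  3, 25, 49])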
It seems that no one has mentioned numpy's built-in factory method for producing ufuncs: np.frompyfunc. I have tested it against np.vectorize and it outperforms it by about 20-30%. Of course it will not perform as well as prescribed C code or even numba (which I have not tested), but it can be a better alternative than np.vectorize:
f = lambda x, y: x * y
f_arr = np.frompyfunc(f, 2, 1)
vf = np.vectorize(f)
arr = np.linspace(0, 1, 10000)
%timeit f_arr(arr, arr) # 307ms
%timeit vf(arr, arr) # 450ms
I have also tested larger samples, and the improvement is proportional. See also the documentation here.
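One caveat worth knowing (a general property of np.frompyfunc, not mentioned above): the resulting ufunc returns arrays of dtype object, so a cast is usually needed to get a numeric array back:
out = f_arr(arr, arr)
out.dtype                     # dtype('O')
out = out.astype(np.float64)  # convert back to a numeric dtype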
Edit: the original answer was misleading; np.sqrt was applied directly to the array, just with a small overhead.
In multidimensional cases where you want to apply a builtin function that operates on a 1d array, numpy.apply_along_axis is a good choice, also for more complex function compositions from numpy and scipy.
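A minimal sketch of that pattern (the function and array here are just placeholders for illustration):
import numpy as np

a = np.random.rand(3, 4)
# np.sort operates on 1-D arrays; apply it to each row of the 2-D array
rows_sorted = np.apply_along_axis(np.sort, axis=1, arr=a)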
Previous misleading statement:
Adding the method:
def along_axis(x):
    return np.apply_along_axis(f, 0, x)
to the perfplot code gives performance results close to np.sqrt.
I believe that in newer versions of numpy (I use 1.13) you can simply call the function by passing the numpy array to the function that you wrote for scalars; it will automatically apply the function call to each element of the numpy array and return another numpy array:
>>> import numpy as np
>>> squarer = lambda t: t ** 2
>>> x = np.array([1, 2, 3, 4, 5])
>>> squarer(x)
array([ 1, 4, 9, 16, 25])
As mentioned in this post, just use generator expressions like so:
numpy.fromiter((<some_func>(x) for x in <something>),<dtype>,<size of something>)
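For instance, with the squaring example from the question:
import numpy

x = numpy.array([1, 2, 3, 4, 5])
squares = numpy.fromiter((xi ** 2 for xi in x), x.dtype, count=len(x))
# array([ 1,  4,  9, 16, 25])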
All the answers above compare well, but if you need to use a custom function for mapping, and you have a numpy.ndarray, you also need to retain the shape of the array.
I have compared just two, but both retain the shape of the ndarray. I have used an array with 1 million entries for the comparison. Here I use the square function, which is also built into numpy and gives a great performance boost; where a built-in exists, use it, otherwise you can use a function of your choice.
import numpy, time
def timeit():
    y = numpy.arange(1000000)

    now = time.time()
    numpy.array([x * x for x in y.reshape(-1)]).reshape(y.shape)
    print(time.time() - now)

    now = time.time()
    numpy.fromiter((x * x for x in y.reshape(-1)), y.dtype).reshape(y.shape)
    print(time.time() - now)

    now = time.time()
    numpy.square(y)
    print(time.time() - now)
Output
>>> timeit()
1.162431240081787 # list comprehension and then building numpy array
1.0775556564331055 # from numpy.fromiter
0.002948284149169922 # using inbuilt function
Here you can clearly see that numpy.fromiter works great compared to the simple approach, and if a built-in function is available, please use that.
Use numpy.fromfunction(function, shape, **kwargs).
See https://docs.scipy.org/doc/numpy/reference/generated/numpy.fromfunction.html

Apply bincount to each row of a 2D numpy array

Is there a way to apply bincount with "axis = 1"? The desired result would be the same as the list comprehension:
import numpy as np
A = np.array([[1,0],[0,0]])
np.array([np.bincount(r, minlength=np.max(A) + 1) for r in A])
# array([[1, 1],
#        [2, 0]])
np.bincount doesn't work with a 2D array along a certain axis. To get the desired effect with a single vectorized call to np.bincount, one can create a 1D array of IDs such that different rows get different IDs even if the elements are the same. This keeps elements from different rows from binning together when np.bincount is called once with those IDs. Such an ID array can be created with linear indexing in mind, like so -
N = A.max()+1
id = A + (N*np.arange(A.shape[0]))[:,None]
Then, feed the IDs to np.bincount and finally reshape back to 2D -
np.bincount(id.ravel(),minlength=N*A.shape[0]).reshape(-1,N)
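Applied to the example from the question, the intermediate values work out as follows:
A = np.array([[1, 0], [0, 0]])
N = A.max() + 1                                # 2
id = A + (N * np.arange(A.shape[0]))[:, None]  # [[1, 0], [2, 2]]
np.bincount(id.ravel(), minlength=N * A.shape[0]).reshape(-1, N)
# array([[1, 1],
#        [2, 0]])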
If the data is too large for this to be efficient, then the issue is more likely the memory usage of the dense matrix rather than the numerical operations themselves. Here is an example of using sklearn's HashingVectorizer on a matrix which is too large to use the bincount method (the results are a sparse matrix):
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer
h = HashingVectorizer()
A = np.random.randint(100,size=(1000,100))*10000
A_str = [" ".join([str(v) for v in i]) for i in A]
%timeit h.fit_transform(A_str)
#10 loops, best of 3: 110 ms per loop
You can use apply_along_axis. Here is an example:
import numpy as np
test_array = np.array([[0, 0, 1], [0, 0, 1]])
print(test_array)
np.apply_along_axis(np.bincount, axis=1, arr=test_array,
                    minlength=np.max(test_array) + 1)
Note that the final shape of this array depends on the number of bins; you can also pass other arguments through apply_along_axis.

Iterate over matrices in numpy

How can you iterate over all 2^(n^2) binary n-by-n matrices (or 2D arrays) in numpy? I would like something like:
for M in ....:
Do you have to use itertools.product([0,1], repeat = n**2) and then convert to a 2d numpy array?
This code will give me a random 2D binary matrix, but that isn't what I need.
np.random.randint(2, size=(n,n))
Note that 2**(n**2) is a big number even for relatively small n, so your loop might run for a very long time.
That being said, one possible way to iterate over the matrices you need is, for example:
nxn = np.arange(n**2).reshape(n, -1)
for i in range(2**(n**2)):
    arr = (i >> nxn) % 2
    # do something with arr
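For n = 2 this enumerates all 2^4 = 16 matrices; the first few iterations look like this:
import numpy as np

n = 2
nxn = np.arange(n**2).reshape(n, -1)
for i in range(3):
    print((i >> nxn) % 2)
# [[0 0]    [[1 0]    [[0 1]
#  [0 0]]    [0 0]]    [0 0]]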
np.array(list(itertools.product([0,1], repeat = n**2))).reshape(-1,n,n)
produces a (2^(n^2),n,n) array.
There may be some numpy 'grid' function that does the same, but my recollection from other discussions is that itertools.product is pretty fast.
g = (np.array(x).reshape(n, n) for x in itertools.product([0, 1], repeat=n**2))
is a generator that produces the n-by-n arrays one at a time:
next(g)
# array([[0, 0],
#        [0, 0]])
Or to produce the same 3d array:
np.array(list(g))
