Can I parallelize this small Python script with a global dict? - python

I have this problem that takes about 2.8 seconds on my MBA with Python 3. Since at its core we have a caching dictionary, I figure that it doesn't matter which call hits the cache first, so maybe I can get some gains from threading. I can't quite figure it out, though. This is a bit higher level than the questions I normally ask, but can someone walk me through the parallelization process for this problem?
import time
import threading

even = lambda n: n % 2 == 0
next_collatz = lambda n: n // 2 if even(n) else 3 * n + 1

cache = {1: 1}

def collatz_chain_length(n):
    if n not in cache:
        cache[n] = 1 + collatz_chain_length(next_collatz(n))
    return cache[n]

if __name__ == '__main__':
    valid = range(1, 1000000)
    for n in valid:
        # t = threading.Thread(target=collatz_chain_length, args=[n])
        # t.start()
        collatz_chain_length(n)
    print(max(valid, key=cache.get))
Or, if it is a bad candidate, why?

You won't get a good boost out of threading in Python if your workload is CPU intensive. That's because only one thread will actually be using the processor at a time due to the GIL (Global Interpreter Lock).
However, if your workload was I/O bound (waiting for responses from a network request for example), threads would give you a bit of a boost, because if your thread is blocked waiting for a network response, another thread can do useful work.
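For the I/O-bound case the pattern looks roughly like the sketch below, where sleep stands in for a network or disk wait (illustrative only, not the OP's workload):
import time
from concurrent.futures import ThreadPoolExecutor

def fake_io(n):
    time.sleep(0.5)  # stands in for a blocking network/disk call, which releases the GIL
    return n

start = time.time()
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(fake_io, range(8)))
print('8 waits overlapped in', round(time.time() - start, 2), 's')  # ~0.5 s rather than ~4 s
The eight waits overlap because each sleeping thread gives up the GIL; a CPU-bound function in place of fake_io would not get this benefit.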
As HDN mentioned, using multiprocessing will help - this uses multiple Python interpreters to get the work done.
The way I would approach this is to divide the number of iterations by the number of processes you plan to create. For example if you create 4 processes, give each process a 1000000/4 slice of the work.
At the end you will need to aggregate the results of each process and apply your max() to get the result.
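A hedged sketch of that splitting approach with multiprocessing.Pool (the chunking and helper names here are illustrative, not a drop-in replacement for the original script):
import multiprocessing as mp

def best_in_range(bounds):
    # each process builds its own private cache for its sub-range
    start, stop = bounds
    cache = {1: 1}
    def length(n):
        if n not in cache:
            cache[n] = 1 + length(n // 2 if n % 2 == 0 else 3 * n + 1)
        return cache[n]
    best = max(range(start, stop), key=length)
    return best, length(best)

if __name__ == '__main__':
    limit, nproc = 1000000, 4
    step = limit // nproc
    chunks = [(max(1, i), i + step) for i in range(0, limit, step)]
    with mp.Pool(nproc) as pool:
        results = pool.map(best_in_range, chunks)
    n, chain_len = max(results, key=lambda r: r[1])
    print(n, chain_len)
Note that each process rebuilds its own cache, so chain values get recomputed across processes; that overhead is exactly what the Numba answer below points out.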

Threading won't give you much in terms of performance gains because it won't get around the Global Interpreter Lock, which will only run one thread at any given moment. It might actually even slow you down because of the context switching.
If you want to leverage parallelization for performance purposes in Python, you're going to have to use multiprocessing to actually leverage more than one core at a time.

I managed to speed up your code 16.5x on a single core; read on for the details.
As said before, multi-threading won't give any improvement in pure Python because of the Global Interpreter Lock.
Regarding multi-processing, there are two options: 1) implement a shared dictionary and read/write to it directly from different processes; 2) cut the range of values into parts, solve the task for separate sub-ranges in different processes, then just take the maximum over all of the per-process answers.
The first option will be very slow, because in your code reading/writing the dictionary is the main time-consuming operation, and a dictionary shared between processes is roughly 5x slower again, so multiple cores buy you nothing (a minimal sketch of what that looks like follows below).
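For completeness, a sketch of option 1 using multiprocessing.Manager (hypothetical code, shown only to make the overhead concrete; every lookup and store on the proxy dictionary is an IPC round trip to the manager process):
import multiprocessing as mp

def worker(cache, start, stop):
    for n in range(start, stop):
        chain = []
        while n not in cache:          # each membership test goes to the manager process
            chain.append(n)
            n = n // 2 if n % 2 == 0 else 3 * n + 1
        length = cache[n]
        for m in reversed(chain):
            length += 1
            cache[m] = length          # each store is another round trip

if __name__ == '__main__':
    limit = 1000000
    with mp.Manager() as mgr:
        cache = mgr.dict({1: 1})
        step = limit // 4
        ps = [mp.Process(target=worker, args=(cache, 1 + i * step, 1 + (i + 1) * step))
              for i in range(4)]
        for p in ps: p.start()
        for p in ps: p.join()
        local = dict(cache)            # snapshot the proxy into a plain dict
        print(max((k for k in local if k < limit), key=local.get))
This runs correctly but, as stated above, far slower than the single-process version.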
The second option will give some improvement, but it is also not great, because different processes recompute the same values many times. It gives a considerable improvement only if you have very many cores or use many separate machines in a cluster.
I decided to implement another way to improve your task (option 3): use Numba and do some other optimizations. My solution is then also suitable for option 2 (parallelization over sub-ranges).
Numba is a just-in-time compiler and optimizer: it compiles pure Python code down to optimized machine code (via LLVM). Numba can usually give a 10x-100x speedup.
To run the code with Numba you just need to pip install numba (at the time of writing Numba supported Python <= 3.8, with 3.9 support coming soon).
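If you haven't used it before, the usage pattern is just a decorator; a minimal sketch (illustrative only, not the solution below):
import numba

@numba.njit
def count_steps(n):
    # naive Collatz step count, compiled to machine code on the first call
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

count_steps(27)  # first call triggers compilation; later calls run at native speed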
All the improvements together gave a 16.5x speedup on a single core (e.g. where your algorithm took 64 seconds for some range, my code takes 4 seconds).
I had to rewrite your code; the algorithm and idea are the same as yours, but I made the algorithm non-recursive (because Numba doesn't deal well with recursion) and used a list instead of a dictionary for the not-too-large values.
My single-core Numba-based version may sometimes use a lot of memory. That is only because of the cs parameter, which controls the threshold for using a list instead of a dictionary; currently cs is set to stop * 10 (search for this in the code). If you don't have much memory, just set it to e.g. stop * 2 (but not less than stop * 1). I have 16GB of memory and the program runs correctly even for an upper limit of 64000000.
Besides the Numba code I also implemented a C++ solution; it turned out to be the same speed as Numba, which means Numba did a good job! The C++ code is located after the Python code.
I measured timings of your algorithm (solve_py()) and mine (solve_nm()) and compared them. Timings are listed after the code.
Just for reference, I also made a multi-core version using my Numba solution, but it didn't give any improvement over the single-core version; there was even a slow-down. That happened because the multi-core version computes the same values many times. Maybe a multi-machine version would give a noticeable improvement, but probably not multi-core.
The try-it-online links below only allow running small ranges, because of the limited memory on those free online servers!
Try it online!
import time, numba

def solve_py(start, stop):
    even = lambda n: n % 2 == 0
    next_collatz = lambda n: n // 2 if even(n) else 3 * n + 1
    cache = {1: 1}
    def collatz_chain_length(n):
        if n not in cache:
            cache[n] = 1 + collatz_chain_length(next_collatz(n))
        return cache[n]
    for n in range(start, stop):
        collatz_chain_length(n)
    r = max(range(start, stop), key = cache.get)
    return r, cache[r]

@numba.njit(cache = True, locals = {'n': numba.int64, 'l': numba.int64, 'zero': numba.int64})
def solve_nm(start, stop):
    zero, l, cs = 0, 0, stop * 10
    ns = [zero] * 10000
    cache_lo = [zero] * cs
    cache_lo[1] = 1
    cache_hi = {zero: zero}
    for n in range(start, stop):
        if cache_lo[n] != 0:
            continue
        nsc = 0
        while True:
            if n < cs:
                cg = cache_lo[n]
            else:
                cg = cache_hi.get(n, zero)
            if cg != 0:
                l = 1 + cg
                break
            ns[nsc] = n
            nsc += 1
            n = (n >> 1) if (n & 1) == 0 else 3 * n + 1
        for i in range(nsc - 1, -1, -1):
            if ns[i] < cs:
                cache_lo[ns[i]] = l
            else:
                cache_hi[ns[i]] = l
            l += 1
    maxn, maxl = 0, 0
    for k in range(start, stop):
        v = cache_lo[k]
        if v > maxl:
            maxn, maxl = k, v
    return maxn, maxl

if __name__ == '__main__':
    solve_nm(1, 100000) # heat-up, precompile numba
    for stop in [1000000, 2000000, 4000000, 8000000, 16000000, 32000000, 64000000]:
        tr, resr = None, None
        for is_nm in [False, True]:
            if stop > 16000000 and not is_nm:
                continue
            tb = time.time()
            res = (solve_nm if is_nm else solve_py)(1, stop)
            te = time.time()
            print(('py', 'nm')[is_nm], 'limit', stop, 'time', round(te - tb, 2), 'secs', end = '')
            if not is_nm:
                resr, tr = res, te - tb
                print(', n', res[0], 'len', res[1])
            else:
                if tr is not None:
                    print(', boost', round(tr / (te - tb), 2))
                    assert resr == res, (resr, res)
                else:
                    print(', n', res[0], 'len', res[1])
Output:
py limit 1000000 time 3.34 secs, n 837799 len 525
nm limit 1000000 time 0.19 secs, boost 17.27
py limit 2000000 time 6.72 secs, n 1723519 len 557
nm limit 2000000 time 0.4 secs, boost 16.76
py limit 4000000 time 13.47 secs, n 3732423 len 597
nm limit 4000000 time 0.83 secs, boost 16.29
py limit 8000000 time 27.32 secs, n 6649279 len 665
nm limit 8000000 time 1.68 secs, boost 16.27
py limit 16000000 time 55.42 secs, n 15733191 len 705
nm limit 16000000 time 3.48 secs, boost 15.93
nm limit 32000000 time 7.38 secs, n 31466382 len 706
nm limit 64000000 time 16.83 secs, n 63728127 len 950
A C++ version of the same algorithm as the Numba one is below:
Try it online!
#include <cstdint>
#include <vector>
#include <unordered_map>
#include <tuple>
#include <iostream>
#include <stdexcept>
#include <chrono>
typedef int64_t i64;
static std::tuple<i64, i64> Solve(i64 start, i64 stop) {
i64 cs = stop * 10, n = 0, l = 0, nsc = 0;
std::vector<i64> cache_lo(cs), ns(10000);
cache_lo[1] = 1;
std::unordered_map<i64, i64> cache_hi;
for (i64 i = start; i < stop; ++i) {
if (cache_lo[i] != 0)
continue;
n = i;
nsc = 0;
while (true) {
i64 cg = 0;
if (n < cs)
cg = cache_lo[n];
else {
auto it = cache_hi.find(n);
if (it != cache_hi.end())
cg = it->second;
}
if (cg != 0) {
l = 1 + cg;
break;
}
ns.at(nsc) = n;
++nsc;
n = (n & 1) ? 3 * n + 1 : (n >> 1);
}
for (i64 i = nsc - 1; i >= 0; --i) {
i64 n = ns[i];
if (n < cs)
cache_lo[n] = l;
else
cache_hi[n] = l;
++l;
}
}
i64 maxn = 0, maxl = 0;
for (size_t i = start; i < stop; ++i)
if (cache_lo[i] > maxl) {
maxn = i;
maxl = cache_lo[i];
}
return std::make_tuple(maxn, maxl);
}
int main() {
try {
for (auto stop: std::vector<i64>({1000000, 2000000, 4000000, 8000000, 16000000, 32000000, 64000000})) {
auto tb = std::chrono::system_clock::now();
auto r = Solve(1, stop);
auto te = std::chrono::system_clock::now();
std::cout << "cpp limit " << stop
<< " time " << double(std::chrono::duration_cast<std::chrono::milliseconds>(te - tb).count()) / 1000.0 << " secs"
<< ", n " << std::get<0>(r) << " len " << std::get<1>(r) << std::endl;
}
return 0;
} catch (std::exception const & ex) {
std::cout << "Exception: " << ex.what() << std::endl;
return -1;
}
}
Output:
cpp limit 1000000 time 0.17 secs, n 837799 len 525
cpp limit 2000000 time 0.357 secs, n 1723519 len 557
cpp limit 4000000 time 0.757 secs, n 3732423 len 597
cpp limit 8000000 time 1.571 secs, n 6649279 len 665
cpp limit 16000000 time 3.275 secs, n 15733191 len 705
cpp limit 32000000 time 7.112 secs, n 31466382 len 706
cpp limit 64000000 time 17.165 secs, n 63728127 len 950

Related

accelerated FFT to be invoked from Python Numba CUDA kernel

I need to calculate the Fourier transform of a 256-element float64 signal. The requirement is that I need to invoke these FFTs from inside a cuda.jit-compiled section and each must complete within 25 usec. Alas, cuda.jit-compiled functions cannot call external libraries, so I wrote my own. Alas, my single-core code is still way too slow (~250 usec on a Quadro P4000). Is there a better way?
I created a single-core FFT function that gives correct results, but is alas 10x too slow. I don't understand how to make good use of multiple cores.
---fft.py
from numba import cuda, boolean, void, int32, float32, float64, complex128
import math, sys, cmath

def _transform_radix2(vector, inverse, out):
    n = len(vector)
    levels = int32(math.log(float32(n))/math.log(float32(2)))
    assert 2**levels == n  # error: Length is not a power of 2

    # uncomment either Numba.Cuda or Numpy memory allocation (intelligent conditional compilation??)
    exptable = cuda.local.array(1024, dtype=complex128)
    #exptable = np.zeros(1024, np.complex128)

    assert (n // 2) <= len(exptable)  # error: FFT length > MAXFFTSIZE

    coef = complex128((2j if inverse else -2j) * math.pi / n)
    for i in range(n // 2):
        exptable[i] = cmath.exp(i * coef)

    for i in range(n):
        x = i
        y = 0
        for j in range(levels):
            y = (y << 1) | (x & 1)
            x >>= 1
        out[i] = vector[y]

    size = 2
    while size <= n:
        halfsize = size // 2
        tablestep = n // size
        for i in range(0, n, size):
            k = 0
            for j in range(i, i + halfsize):
                temp = out[j + halfsize] * exptable[k]
                out[j + halfsize] = out[j] - temp
                out[j] += temp
                k += tablestep
        size *= 2

    scale = float64(n if inverse else 1)
    for i in range(n):
        out[i] = out[i] / scale  # the inverse requires a scaling

# now create the Numba.cuda version to be called by a GPU
gtransform_radix2 = cuda.jit(device=True)(_transform_radix2)
---test.py
from numba import cuda, void, float64, complex128, boolean
import cupy as cp
import numpy as np
import timeit
import fft
@cuda.jit(void(float64[:], boolean, complex128[:]))
def fftbench(y, inverse, FT):
    Y = cuda.local.array(256, dtype=complex128)
    for i in range(len(y)):
        Y[i] = complex128(y[i])
    fft.gtransform_radix2(Y, False, FT)
str='\nbest [%2d/%2d] iterations, min:[%9.3f], max:[%9.3f], mean:[%9.3f], std:[%9.3f] usec'
a=[127.734375 ,130.87890625 ,132.1953125 ,129.62109375 ,118.6015625
,110.2890625 ,106.55078125 ,104.8203125 ,106.1875 ,109.328125
,113.5 ,118.6640625 ,125.71875 ,127.625 ,120.890625
,114.04296875 ,112.0078125 ,112.71484375 ,110.18359375 ,104.8828125
,104.47265625 ,106.65625 ,109.53515625 ,110.73828125 ,111.2421875
,112.28125 ,112.38671875 ,112.7734375 ,112.7421875 ,113.1328125
,113.24609375 ,113.15625 ,113.66015625 ,114.19921875 ,114.5
,114.5546875 ,115.09765625 ,115.2890625 ,115.7265625 ,115.41796875
,115.73828125 ,116. ,116.55078125 ,116.5625 ,116.33984375
,116.63671875 ,117.015625 ,117.25 ,117.41015625 ,117.6640625
,117.859375 ,117.91015625 ,118.38671875 ,118.51171875 ,118.69921875
,118.80859375 ,118.67578125 ,118.78125 ,118.49609375 ,119.0078125
,119.09375 ,119.15234375 ,119.33984375 ,119.31640625 ,119.6640625
,119.890625 ,119.80078125 ,119.69140625 ,119.65625 ,119.83984375
,119.9609375 ,120.15625 ,120.2734375 ,120.47265625 ,120.671875
,120.796875 ,120.4609375 ,121.1171875 ,121.35546875 ,120.94921875
,120.984375 ,121.35546875 ,120.87109375 ,120.8359375 ,121.2265625
,121.2109375 ,120.859375 ,121.17578125 ,121.60546875 ,121.84375
,121.5859375 ,121.6796875 ,121.671875 ,121.78125 ,121.796875
,121.8828125 ,121.9921875 ,121.8984375 ,122.1640625 ,121.9375
,122. ,122.3515625 ,122.359375 ,122.1875 ,122.01171875
,121.91015625 ,122.11328125 ,122.1171875 ,122.6484375 ,122.81640625
,122.33984375 ,122.265625 ,122.78125 ,122.44921875 ,122.34765625
,122.59765625 ,122.63671875 ,122.6796875 ,122.6171875 ,122.34375
,122.359375 ,122.7109375 ,122.83984375 ,122.546875 ,122.25390625
,122.06640625 ,122.578125 ,122.7109375 ,122.83203125 ,122.5390625
,122.2421875 ,122.06640625 ,122.265625 ,122.13671875 ,121.8046875
,121.87890625 ,121.88671875 ,122.2265625 ,121.63671875 ,121.14453125
,120.84375 ,120.390625 ,119.875 ,119.34765625 ,119.0390625
,118.4609375 ,117.828125 ,117.1953125 ,116.9921875 ,116.046875
,115.16015625 ,114.359375 ,113.1875 ,110.390625 ,108.41796875
,111.90234375 ,117.296875 ,127.0234375 ,147.58984375 ,158.625
,129.8515625 ,120.96484375 ,124.90234375 ,130.17578125 ,136.47265625
,143.9296875 ,150.24609375 ,141. ,117.71484375 ,109.80859375
,115.24609375 ,118.44140625 ,120.640625 ,120.9921875 ,111.828125
,101.6953125 ,111.21484375 ,114.91015625 ,115.2265625 ,118.21875
,125.3359375 ,139.44140625 ,139.76953125 ,135.84765625 ,137.3671875
,141.67578125 ,139.53125 ,136.44921875 ,135.08203125 ,135.7890625
,137.58203125 ,138.7265625 ,154.33203125 ,172.01171875 ,152.24609375
,129.8046875 ,125.59375 ,125.234375 ,127.32421875 ,132.8984375
,147.98828125 ,152.328125 ,153.7734375 ,155.09765625 ,156.66796875
,159.0546875 ,151.83203125 ,138.91796875 ,138.0546875 ,140.671875
,143.48046875 ,143.99609375 ,146.875 ,146.7578125 ,141.15234375
,141.5 ,140.76953125 ,140.8828125 ,145.5625 ,150.78125
,148.89453125 ,150.02734375 ,150.70703125 ,152.24609375 ,148.47265625
,131.95703125 ,125.40625 ,123.265625 ,123.57421875 ,129.859375
,135.6484375 ,144.51171875 ,155.05078125 ,158.4453125 ,140.8125
,100.08984375 ,104.29296875 ,128.55078125 ,139.9921875 ,143.38671875
,143.69921875 ,137.734375 ,124.48046875 ,116.73828125 ,114.84765625
,113.85546875 ,117.45703125 ,122.859375 ,125.8515625 ,133.22265625
,139.484375 ,135.75 ,122.69921875 ,115.7734375 ,116.9375
,127.57421875]
y1 =cp.zeros(len(a), cp.complex128)
FT1=cp.zeros(len(a), cp.complex128)
for i in range(len(a)):
    y1[i] = a[i]  # convert to complex to feed the FFT
r=1000
series=sorted(timeit.repeat("fftbench(y1, False, FT1)", number=1, repeat=r, globals=globals()))
series=series[0:r-5]
print(str % (len(series), r, 1e6*np.min(series), 1e6*np.max(series), 1e6*np.mean(series), 1e6*np.std(series)));
a faster implementation t<<25usec
The drawback of your algorithm is that, even on the GPU, it runs on a single thread.
In order to understand how to design algorithms for Nvidia GPGPUs, I recommend looking at the CUDA C Programming Guide and at the Numba documentation to apply the code in Python.
Moreover, to understand what's wrong with your code, I recommend using the Nvidia profiler.
The following parts of the answer explain how to apply the basics to your example.
Run multiple threads
To improve performance, you will first need to launch multiple threads that can run in parallel. CUDA handles threads as follows:
Threads are grouped into blocks of n threads (n < 1024)
Each thread within the same block can be synchronized with the others and has access to a fast common memory space called "shared memory".
You can run multiple blocks in parallel in a "grid", but you lose the synchronization mechanism across blocks.
The syntax to run multiple threads is the following:
fftbench[griddim, blockdim](y1, False, FT1)
To simplify, I will use only one block of 256 threads:
fftbench[1, 256](y1, False, FT1)
Memory
To improve GPU performance it's important to look at where the data is stored. There are three main spaces:
global memory: the "RAM" of your GPU; it is slow and has high latency; this is where all your arrays are placed when you send them to the GPU.
shared memory: a small, fast-access memory; all the threads of a block have access to the same shared memory.
local memory: physically the same as global memory, but each thread accesses its own local memory.
Typically, if you use the same data several times, you should try to store it in shared memory to avoid the latency of global memory.
In your code, you can store exptable in shared memory:
exptable = cuda.shared.array(1024, dtype=complex128)
and if n is not too big, you may want to use a working array in shared memory instead of out:
working = cuda.shared.array(256, dtype=complex128)
Assign tasks to each thread
Of course, if you don't change your function, all threads will do the same job and it will just slow down your program.
In this example we will assign each thread to one cell of the array. To do so, we have to get the unique id of the thread within a block:
idx = cuda.threadIdx.x
Now we will be able to speed up the for loops; let's handle them one by one:
exptable = cuda.shared.array(1024, dtype=complex128)
...
for i in range(n // 2):
exptable[i] = cmath.exp(i * coef)
Here is the goal: we want the first n/2 threads to fill this array, and then every thread will be able to use it.
So in this case, just replace the for loop with a condition on the thread idx:
if idx < n // 2:
exptable[idx] = cmath.exp(idx * coef)
For the last two loops it's easier: each thread will deal with one cell of the array:
for i in range(n):
x = i
y = 0
for j in range(levels):
y = (y << 1) | (x & 1)
x >>= 1
out[i] = vector[y]
becomes
x = idx
y = 0
for j in range(levels):
y = (y << 1) | (x & 1)
x >>= 1
working[idx] = vector[y]
and
for i in range(n):
out[i]=out[i]/scale # the inverse requires a scaling
becomes
out[idx]=working[idx]/scale # the inverse requires a scaling
I used the shared array working, but you can replace it with out if you want to use global memory.
Now, let's look at the while loop. We said that we want each thread to deal with only one cell of the array, so we can try to parallelize the two for loops inside.
...
for i in range(0, n, size):
k = 0
for j in range(i, i + halfsize):
temp = out[j + halfsize] * exptable[k]
out[j + halfsize] = out[j] - temp
out[j] += temp
k += tablestep
...
To simplify, I will only use half of the threads: we take the first 128 threads and determine j as follows:
...
if idx < 128:
j = (idx%halfsize) + size*(idx//halfsize)
...
k is:
k = tablestep*(idx%halfsize)
so we get the loop:
size = 2
while size <= n:
halfsize = size // 2
tablestep = n // size
if idx < 128:
j = (idx%halfsize) + size*(idx//halfsize)
k = tablestep*(idx%halfsize)
temp = working[j + halfsize] * exptable[k]
working[j + halfsize] = working[j] - temp
working[j] += temp
size *= 2
Synchronization
Last but not least, we need to synchronize all these threads. In fact, the program will not work if we do not synchronize. On the GPU, threads may not run at the same time, so you can get issues when data produced by one thread is used by another one, for example:
exptable[0] is used by thread_2 before thread_0 has stored its value
working[j + halfsize] is modified by another thread before you store it in temp
To prevent this we can use the function:
cuda.syncthreads()
All the threads in the same block will finish this line before executing the rest of the code.
In this example, you need to synchronize at two points: after the working initialization and after each iteration of the while loop.
Your code then looks like this:
def _transform_radix2(vector, inverse, out):
    n = len(vector)
    levels = int32(math.log(float32(n))/math.log(float32(2)))
    assert 2**levels == n  # error: Length is not a power of 2

    exptable = cuda.shared.array(1024, dtype=complex128)
    working = cuda.shared.array(256, dtype=complex128)
    assert (n // 2) <= len(exptable)  # error: FFT length > MAXFFTSIZE

    idx = cuda.threadIdx.x  # thread index within the block (introduced above)

    coef = complex128((2j if inverse else -2j) * math.pi / n)
    if idx < n // 2:
        exptable[idx] = cmath.exp(idx * coef)

    x = idx
    y = 0
    for j in range(levels):
        y = (y << 1) | (x & 1)
        x >>= 1
    working[idx] = vector[y]
    cuda.syncthreads()

    size = 2
    while size <= n:
        halfsize = size // 2
        tablestep = n // size
        if idx < 128:
            j = (idx % halfsize) + size*(idx // halfsize)
            k = tablestep*(idx % halfsize)
            temp = working[j + halfsize] * exptable[k]
            working[j + halfsize] = working[j] - temp
            working[j] += temp
        size *= 2
        cuda.syncthreads()

    scale = float64(n if inverse else 1)
    out[idx] = working[idx] / scale  # the inverse requires a scaling
I feel like your question is a good way to introduce some basics about GPGPU computing, and I have tried to answer it in a didactic way. The final code is far from perfect and can be optimized a lot; I highly recommend you read the Programming Guide if you want to learn more about GPU optimizations.

Better way to compute Top N closest cities in Python/Numba using GPU

I have M~200k points with X,Y coordinates of cities packed in a Mx2 numpy array.
The intent is, for each city, to compute the top N closest cities and return their indices and distances to that city in an MxN numpy matrix.
Numba does a great job speeding up my serial Python code on the CPU and making it multithreaded using the prange generator, such that on a 16-core SSE 4.2/AVX machine a call to compute the 30 closest cities completes in 6 min 26 s while saturating all the cores:
import numpy as np
import numba
from numba import prange

@numba.njit(fastmath=True)
def get_closest(n, N_CLOSEST, coords, i):
    dist = np.empty(n, np.float32)
    for j in range(n):
        dist[j] = (coords[i,0]-coords[j,0])**2 + (coords[i,1]-coords[j,1])**2
    indices = np.argsort(dist)[1:N_CLOSEST+1]
    return indices, dist[indices]

@numba.njit(fastmath=True, parallel=True)
def get_top_T_closest_cities(coords, T):
    n = len(coords)
    N_CLOSEST = min(T, n-1)
    closest_distances = np.empty((n, N_CLOSEST), np.float32)
    closest_cities = np.empty((n, N_CLOSEST), np.int32)
    for i in prange(n):
        closest_cities[i,:], closest_distances[i,:] = get_closest(n, N_CLOSEST, coords, i)
    return closest_cities, closest_distances
closest_cities,closest_distances=get_top_T_closest_cities(data,30)
Note: I wanted to use
mrange=np.arange(N_CLOSEST)
and
indices=np.argpartition(dist,mrange)
later to save some cycles, but unfortunately Numba did not support np.argpartition yet.
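For reference, outside of Numba the np.argpartition pattern would look roughly like the following sketch (illustrative data; a single kth plus a small sort of the selected subset gives the same effect as the full argsort):
import numpy as np

dist = np.random.rand(200000).astype(np.float32)
N_CLOSEST = 30
# move the N_CLOSEST+1 smallest distances to the front (unordered), then sort only those
part = np.argpartition(dist, N_CLOSEST + 1)[:N_CLOSEST + 1]
order = part[np.argsort(dist[part])]
indices = order[1:N_CLOSEST + 1]  # drop the first entry, mirroring the [1:N_CLOSEST+1] slice above
closest = dist[indices]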
Then I decided to put my newly bought RTX 2070 to some good use and try offloading these computations, which are very parallel by nature, to the GPU, using Numba again and possibly CuPy.
After some thinking I came up with a relatively dumb rewrite, where a GPU kernel handling one city at a time was called consecutively for each of the M cities. In that kernel, each GPU thread computed its distance and saved it to a specific location in the dist array. All arrays were allocated on the device to minimize PCI data transfers:
import cupy as cp
from numba import cuda

def get_top_T_closest_cities_gpu(coords, T):
    n = len(coords)
    N_CLOSEST = min(T, n-1)
    closest_distances = cp.empty((n, N_CLOSEST), cp.float32)
    closest_cities = cp.empty((n, N_CLOSEST), cp.int32)
    device_coords = cp.asarray(coords)
    dist = cp.ndarray(n, cp.float32)
    for i in range(n):
        if (i % 1000) == 0:
            print(i)
        closest_cities[i,:], closest_distances[i,:] = get_closest(n, N_CLOSEST, device_coords, i, dist)
    return cp.asnumpy(closest_cities), cp.asnumpy(closest_distances)

@cuda.jit()
def get_distances(coords, i, n, dist):
    stride = cuda.gridsize(1)
    start = cuda.grid(1)
    for j in range(start, n, stride):
        dist[j] = (coords[i,0]-coords[j,0])**2 + (coords[i,1]-coords[j,1])**2

def get_closest(n, N_CLOSEST, coords, i, dist):
    get_distances[512,512](coords, i, n, dist)
    indices = cp.argsort(dist)[1:N_CLOSEST+1]
    return indices, dist[indices]
Now, computing the results on the GPU took almost the same 6 minutes, but GPU load was barely 1% (yes, I made sure the results returned by the CPU and GPU versions are the same). I played a bit with the block size and did not see any significant change. Interestingly, cp.argsort and get_distances were consuming approximately the same share of processing time.
I feel like it has something to do with streams - how do I properly init a lot of them? That would be the most straightforward way to reuse my code: treating not 1 city at a time but, say, 16, or whatever my compute capability allows, ideally maybe 1000.
What would those of you experienced in GPU coding with Numba/CuPy recommend to fully exploit the power of the GPU in my case?
Advice from pure C++ CUDA apologists would also be very welcome, as I haven't seen a comparison of a native CUDA solution with a Numba/CuPy CUDA solution.
Python version: ['3.6.3 |Anaconda, Inc']
platform: AMD64
system: Windows-10-10.0.14393-SP0
Numba version: 0.41.0
You seem to be interested in an answer based on CUDA C++, so I'll provide one. It should be straightforward to convert this kernel to an equivalent numba cuda jit kernel; the translation process is fairly mechanical.
An outline of the method I chose is as follows:
assign one city per thread (the reference city for that thread)
each thread traverses through all the cities computing the pairwise distance to its reference city
as each distance is computed, it is checked to see if it is within the current "closest cities". The approach here is to keep a running list for each thread. As each thread computes a new distance, it checks to see if that distance is less than the current maximum distance in the running list. If it is, the current maximum distance/city is replaced with the one just computed.
The list of "closest cities" along with their distances is maintained in shared memory.
At the completion of computing all distances, each thread then sorts and writes out the values in its shared buffer to global memory
There are several "optimizations" to this basic plan. One is that each warp reads 32 consecutive values at a time, and then uses a shuffle operation to pass the values around among threads, so as to reduce global memory traffic.
A single kernel call performs the entire operation for all cities.
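A small CPU-side Python sketch of the running-list bookkeeping described above, per reference city (illustration only; the actual kernel keeps one such list per thread in shared memory and fills it warp by warp):
import math

def closest_n(i, cities, n_closest):
    # running list: start with "infinite" distances, replace the current maximum
    # whenever a smaller pairwise distance is found
    best_dist = [float('inf')] * n_closest
    best_id = [-1] * n_closest
    xi, yi = cities[i]
    for j, (xj, yj) in enumerate(cities):
        if j == i:
            continue
        d = (xi - xj) ** 2 + (yi - yj) ** 2
        worst = max(range(n_closest), key=best_dist.__getitem__)
        if d < best_dist[worst]:
            best_dist[worst], best_id[worst] = d, j
    # sort the list once at the end, as the kernel does before writing to global memory
    order = sorted(range(n_closest), key=best_dist.__getitem__)
    return [best_id[k] for k in order], [math.sqrt(best_dist[k]) for k in order]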
The performance of this kernel will vary quite a bit depending on the GPU you are running it on. For example on a tesla P100 it runs in about 0.5s. On a Tesla K40 it takes ~6s. On a Quadro K2000 it takes ~40s.
Here is a full test case on the Tesla P100:
$ cat t364.cu
#include <iostream>
#include <math.h>
#include <stdlib.h>
#include <algorithm>
#include <vector>
#include <thrust/sort.h>
const int ncities = 200064; // should be divisible by nTPB
const int nclosest = 32; // maximum 48
const int nTPB = 128; // should be multiple of 32
const float FLOAT_MAX = 999999.0f;
__device__ void init_shared(float *d, int n){
int idx = threadIdx.x;
for (int i = 0; i < n; i++){
d[idx] = FLOAT_MAX;
idx += nTPB;}
}
__device__ int find_max(float *d, int n){
int idx = threadIdx.x;
int max = 0;
float maxd = d[idx];
for (int i = 1; i < n; i++){
idx += nTPB;
float next = d[idx];
if (maxd < next){
max = i;
maxd = next;}}
return max;
}
__device__ float compute_sqdist(float2 c1, float2 c2){
return (c1.x - c2.x)*(c1.x - c2.x) + (c1.y - c2.y)*(c1.y - c2.y);
}
__device__ void sort_and_store_sqrt(int idx, float *closest_dist, int *closest_id, float *dist, int *city, int n, int n_cities)
{
for (int i = n-1; i >= 0; i--){
int max = find_max(dist, n);
closest_dist[idx + i*n_cities] = sqrtf(dist[max*nTPB+threadIdx.x]);
closest_id[idx + i*n_cities] = city[max*nTPB+threadIdx.x];
dist[max*nTPB+threadIdx.x] = 0.0f;}
};
__device__ void shuffle(int &city_id, float2 &next_city_xy, int my_warp_lane){
int src_lane = (my_warp_lane+1)&31;
city_id = __shfl_sync(0xFFFFFFFF, city_id, src_lane);
next_city_xy.x = __shfl_sync(0xFFFFFFFF, next_city_xy.x, src_lane);
next_city_xy.y = __shfl_sync(0xFFFFFFFF, next_city_xy.y, src_lane);
}
__global__ void k(const float2 * __restrict__ cities, float * __restrict__ closest_dist, int * __restrict__ closest_id, const int n_cities){
__shared__ float dist[nclosest*nTPB];
__shared__ int city[nclosest*nTPB];
int idx = threadIdx.x+blockDim.x*blockIdx.x;
int my_warp_lane = (threadIdx.x & 31);
init_shared(dist, nclosest);
float2 my_city_xy = cities[idx];
float last_max = FLOAT_MAX;
for (int i = 0; i < n_cities; i+=32){
int city_id = i+my_warp_lane;
float2 next_city_xy = cities[city_id];
for (int j = 0; j < 32; j++){
if (j > 0) shuffle(city_id, next_city_xy, my_warp_lane);
if (idx != city_id){
float my_dist = compute_sqdist(my_city_xy, next_city_xy);
if (my_dist < last_max){
int max = find_max(dist, nclosest);
last_max = dist[max*nTPB+threadIdx.x];
if (my_dist < last_max) {
dist[max*nTPB+threadIdx.x] = my_dist;
city[max*nTPB+threadIdx.x] = city_id;}}}}}
sort_and_store_sqrt(idx, closest_dist, closest_id, dist, city, nclosest, n_cities);
}
void on_cpu(float2 *cities, float *dists, int *ids){
thrust::host_vector<int> my_ids(ncities);
thrust::host_vector<float> my_dists(ncities);
for (int i = 0; i < ncities; i++){
for (int j = 0; j < ncities; j++){
my_ids[j] = j;
if (i != j) my_dists[j] = sqrtf((cities[i].x - cities[j].x)*(cities[i].x - cities[j].x) + (cities[i].y - cities[j].y)*(cities[i].y - cities[j].y));
else my_dists[j] = FLOAT_MAX;}
thrust::sort_by_key(my_dists.begin(), my_dists.end(), my_ids.begin());
for (int j = 0; j < nclosest; j++){
dists[i+j*ncities] = my_dists[j];
ids[i+j*ncities] = my_ids[j];}
}
}
int main(){
float2 *h_cities, *d_cities;
float *h_closest_dist, *d_closest_dist, *h_closest_dist_cpu;
int *h_closest_id, *d_closest_id, *h_closest_id_cpu;
cudaMalloc(&d_cities, ncities*sizeof(float2));
cudaMalloc(&d_closest_dist, nclosest*ncities*sizeof(float));
cudaMalloc(&d_closest_id, nclosest*ncities*sizeof(int));
h_cities = new float2[ncities];
h_closest_dist = new float[ncities*nclosest];
h_closest_id = new int[ncities*nclosest];
h_closest_dist_cpu = new float[ncities*nclosest];
h_closest_id_cpu = new int[ncities*nclosest];
for (int i = 0; i < ncities; i++){
h_cities[i].x = (rand()/(float)RAND_MAX)*10.0f;
h_cities[i].y = (rand()/(float)RAND_MAX)*10.0f;}
cudaMemcpy(d_cities, h_cities, ncities*sizeof(float2), cudaMemcpyHostToDevice);
k<<<ncities/nTPB, nTPB>>>(d_cities, d_closest_dist, d_closest_id, ncities);
cudaMemcpy(h_closest_dist, d_closest_dist, ncities*nclosest*sizeof(float),cudaMemcpyDeviceToHost);
cudaMemcpy(h_closest_id, d_closest_id, ncities*nclosest*sizeof(int),cudaMemcpyDeviceToHost);
for (int i = 0; i < nclosest; i++){
for (int j = 0; j < 5; j++) std::cout << h_closest_id[j+i*ncities] << "," << h_closest_dist[j+i*ncities] << " ";
std::cout << std::endl;}
if (ncities < 5000){
on_cpu(h_cities, h_closest_dist_cpu, h_closest_id_cpu);
for (int i = 0; i < ncities*nclosest; i++)
if (h_closest_id_cpu[i] != h_closest_id[i]) {std::cout << "mismatch at: " << i << " was: " << h_closest_id[i] << " should be: " << h_closest_id_cpu[i] << std::endl; return -1;}
}
return 0;
}
$ nvcc -o t364 t364.cu
$ nvprof ./t364
==31871== NVPROF is profiling process 31871, command: ./t364
33581,0.00466936 116487,0.0163371 121419,0.0119542 138585,0.00741395 187892,0.0205568
56138,0.0106946 105637,0.0195111 175565,0.0132018 121719,0.0090809 198333,0.0269794
36458,0.014851 6942,0.0244329 67920,0.013367 140919,0.014739 91533,0.0348142
48152,0.0221216 146867,0.0250192 14656,0.020469 163149,0.0247639 162442,0.0354359
3681,0.0226946 127671,0.0265515 32841,0.0219539 109520,0.0359874 21346,0.0424094
20903,0.0313963 3075,0.0279635 79787,0.0220388 106161,0.0365807 24186,0.0453916
191226,0.0316818 4168,0.0297535 126726,0.0246147 154598,0.0374218 62106,0.0459001
185573,0.0349628 76270,0.030849 122878,0.0249695 124897,0.0447656 38124,0.0463704
71252,0.037517 18879,0.0350544 112410,0.0299296 102006,0.0462593 12361,0.0464608
172641,0.0399721 134288,0.035077 39252,0.031506 164570,0.0470057 81105,0.0502873
18960,0.0433545 53266,0.0360357 195467,0.0334281 36715,0.0470069 153375,0.0508115
163568,0.0442928 95544,0.0361058 151839,0.0398384 114041,0.0490263 8524,0.0511531
182326,0.047519 59919,0.0362906 90810,0.0408069 52013,0.0494515 16793,0.0525569
169860,0.048417 77412,0.036694 12065,0.0414496 137863,0.0494703 197500,0.0537481
40158,0.0492621 34592,0.0377113 54812,0.041594 58792,0.049532 70501,0.0541114
121444,0.0501154 102800,0.0414865 96238,0.0433548 34323,0.0531493 161527,0.0551868
118564,0.0509228 82832,0.0449587 167965,0.0439488 104475,0.0534779 66968,0.0572788
60136,0.0528873 88318,0.0455667 118562,0.0462537 129853,0.0535594 7544,0.0588875
95975,0.0587857 65792,0.0479467 124929,0.046828 116360,0.0537344 37341,0.0594454
62542,0.0592229 57399,0.0509665 186583,0.0470843 47571,0.0541045 81275,0.0596965
11499,0.0607943 28314,0.0512354 23730,0.0518801 176089,0.0543222 562,0.06527
131594,0.0611795 23120,0.0515408 25933,0.0547776 117474,0.0557752 194499,0.0657885
23959,0.0623019 137283,0.0533193 173000,0.0577509 157537,0.0566689 193091,0.0666895
5660,0.0629772 15498,0.0555821 161025,0.0596721 123148,0.0589929 147928,0.0672529
51033,0.063036 15456,0.0565314 145859,0.0601408 3012,0.0601779 107646,0.0687241
26790,0.0643055 99659,0.0576798 33153,0.0603556 48388,0.0617377 47566,0.0689055
178826,0.0655352 143209,0.058413 101960,0.0604994 22146,0.0620504 91118,0.0705487
32108,0.0672866 172089,0.058676 17885,0.0638383 137318,0.0624543 138223,0.0716578
38125,0.0678566 42847,0.0609811 10879,0.0681518 154360,0.0633921 96195,0.0723226
96583,0.0683073 169703,0.0621889 100839,0.0721133 28595,0.0661302 20422,0.0731882
98329,0.0683718 50432,0.0636618 84204,0.0733909 181919,0.066552 75375,0.0736715
75814,0.0694582 161714,0.0674298 89049,0.0749184 151275,0.0679411 37849,0.0739173
==31871== Profiling application: ./t364
==31871== Profiling result:
Type Time(%) Time Calls Avg Min Max Name
GPU activities: 95.66% 459.88ms 1 459.88ms 459.88ms 459.88ms k(float2 const *, float*, int*, int)
4.30% 20.681ms 2 10.340ms 10.255ms 10.425ms [CUDA memcpy DtoH]
0.04% 195.55us 1 195.55us 195.55us 195.55us [CUDA memcpy HtoD]
API calls: 58.18% 482.42ms 3 160.81ms 472.46us 470.80ms cudaMemcpy
41.05% 340.38ms 3 113.46ms 322.83us 339.73ms cudaMalloc
0.55% 4.5490ms 384 11.846us 339ns 503.79us cuDeviceGetAttribute
0.16% 1.3131ms 4 328.27us 208.85us 513.66us cuDeviceTotalMem
0.05% 407.13us 4 101.78us 93.796us 118.27us cuDeviceGetName
0.01% 61.719us 1 61.719us 61.719us 61.719us cudaLaunchKernel
0.00% 23.965us 4 5.9910us 3.9830us 8.9510us cuDeviceGetPCIBusId
0.00% 6.8440us 8 855ns 390ns 1.8150us cuDeviceGet
0.00% 3.7490us 3 1.2490us 339ns 2.1300us cuDeviceGetCount
0.00% 2.0260us 4 506ns 397ns 735ns cuDeviceGetUuid
$
CUDA 10.0, Tesla P100, CentOS 7
The code includes the possibility for verification, but it verifies the results against a CPU-based naive code, which takes quite a bit longer. So I've restricted verification to smaller test cases, eg. 4096 cities.
Here's a "translated" approximation of the above code using numba cuda. It seems to run about 2x slower than the CUDA C++ version, which shouldn't really be the case. (One of the contributing factors seems to be the use of the sqrt function - it is having a significant performance impact in the numba code but no perf impact in the CUDA C++ code. It may be using a double-precision realization under the hood in numba CUDA.) Anyway it gives a reference for how it could be realized in numba:
import numpy as np
import numba as nb
import math
from numba import cuda,float32,int32
# number of cities, should be divisible by threadblock size
N = 200064
# number of "closest" cities
TN = 32
#number of threads per block
threadsperblock = 128
#shared memory size of each array in elements
smSize=TN*threadsperblock
@cuda.jit('int32(float32[:])',device=True)
def find_max(dist):
idx = cuda.threadIdx.x
mymax = 0
maxd = dist[idx]
for i in range(TN-1):
idx += threadsperblock
mynext = dist[idx]
if maxd < mynext:
mymax = i+1
maxd = mynext
return mymax
@cuda.jit('void(float32[:], float32[:], float32[:], int32[:], int32)')
def k(citiesx, citiesy, td, ti, nc):
dist = cuda.shared.array(smSize, float32)
city = cuda.shared.array(smSize, int32)
idx = cuda.grid(1)
tid = cuda.threadIdx.x
wl = tid & 31
my_city_x = citiesx[idx];
my_city_y = citiesy[idx];
last_max = 99999.0
for i in range(TN):
dist[tid+i*threadsperblock] = 99999.0
for i in range(0,nc,32):
city_id = i+wl
next_city_x = citiesx[city_id]
next_city_y = citiesy[city_id]
for j in range(32):
if j > 0:
src_lane = (wl+1)&31
city_id = cuda.shfl_sync(0xFFFFFFFF, city_id, src_lane)
next_city_x = cuda.shfl_sync(0xFFFFFFFF, next_city_x, src_lane)
next_city_y = cuda.shfl_sync(0xFFFFFFFF, next_city_y, src_lane)
if idx != city_id:
my_dist = (next_city_x - my_city_x)*(next_city_x - my_city_x) + (next_city_y - my_city_y)*(next_city_y - my_city_y)
if my_dist < last_max:
mymax = find_max(dist)
last_max = dist[mymax*threadsperblock+tid]
if my_dist < last_max:
dist[mymax*threadsperblock+tid] = my_dist
city[mymax*threadsperblock+tid] = city_id
for i in range(TN):
mymax = find_max(dist)
td[idx+i*nc] = math.sqrt(dist[mymax*threadsperblock+tid])
ti[idx+i*nc] = city[mymax*threadsperblock+tid]
dist[mymax*threadsperblock+tid] = 0
# perform test
cx = np.random.rand(N).astype(np.float32)
cy = np.random.rand(N).astype(np.float32)
td = np.zeros(N*TN, dtype=np.float32)
ti = np.zeros(N*TN, dtype=np.int32)
d_cx = cuda.to_device(cx)
d_cy = cuda.to_device(cy)
d_td = cuda.device_array_like(td)
d_ti = cuda.device_array_like(ti)
k[N//threadsperblock, threadsperblock](d_cx, d_cy, d_td, d_ti, N)
d_td.copy_to_host(td)
d_ti.copy_to_host(ti)
for i in range(TN):
for j in range(1):
print(ti[i*N+j])
print(td[i*N+j])
I haven't included validation in this numba cuda variant, so it may contain defects.
EDIT: By switching the above code examples from 128 threads per block to 64 threads per block, the codes will run noticeably faster. This is due to the increase in the theoretical and achieved occupancy that this allows, in light of the shared memory limiter to occupancy.
As pointed out in the comments by max9111, a better algorithm exists. Here's a comparison (on a very slow GPU) between the above python code and the tree-based algorithm (borrowing from the answer here):
$ cat t27.py
import numpy as np
import numba as nb
import math
from numba import cuda,float32,int32
from scipy import spatial
import time
#Create some coordinates and indices
#It is assumed that the coordinates are unique (only one entry per hydrant)
ncities = 200064
Coords=np.random.rand(ncities*2).reshape(ncities,2)
Coords*=100
Indices=np.arange(ncities) #Indices
nnear = 33
def get_indices_of_nearest_neighbours(Coords,Indices):
tree=spatial.cKDTree(Coords)
#k=4 because the first entry is the nearest neighbour
# of a point with itself
res=tree.query(Coords, k=nnear)[1][:,1:]
return Indices[res]
start = time.time()
a = get_indices_of_nearest_neighbours(Coords, Indices)
end = time.time()
print (a.shape[0],a.shape[1])
print
print(end -start)
# number of cities, should be divisible by threadblock size
N = ncities
# number of "closest" cities
TN = nnear - 1
#number of threads per block
threadsperblock = 64
#shared memory size of each array in elements
smSize=TN*threadsperblock
@cuda.jit('int32(float32[:])',device=True)
def find_max(dist):
idx = cuda.threadIdx.x
mymax = 0
maxd = dist[idx]
for i in range(TN-1):
idx += threadsperblock
mynext = dist[idx]
if maxd < mynext:
mymax = i+1
maxd = mynext
return mymax
@cuda.jit('void(float32[:], float32[:], float32[:], int32[:], int32)')
def k(citiesx, citiesy, td, ti, nc):
dist = cuda.shared.array(smSize, float32)
city = cuda.shared.array(smSize, int32)
idx = cuda.grid(1)
tid = cuda.threadIdx.x
wl = tid & 31
my_city_x = citiesx[idx];
my_city_y = citiesy[idx];
last_max = 99999.0
for i in range(TN):
dist[tid+i*threadsperblock] = 99999.0
for i in range(0,nc,32):
city_id = i+wl
next_city_x = citiesx[city_id]
next_city_y = citiesy[city_id]
for j in range(32):
if j > 0:
src_lane = (wl+1)&31
city_id = cuda.shfl_sync(0xFFFFFFFF, city_id, src_lane)
next_city_x = cuda.shfl_sync(0xFFFFFFFF, next_city_x, src_lane)
next_city_y = cuda.shfl_sync(0xFFFFFFFF, next_city_y, src_lane)
if idx != city_id:
my_dist = (next_city_x - my_city_x)*(next_city_x - my_city_x) + (next_city_y - my_city_y)*(next_city_y - my_city_y)
if my_dist < last_max:
mymax = find_max(dist)
last_max = dist[mymax*threadsperblock+tid]
if my_dist < last_max:
dist[mymax*threadsperblock+tid] = my_dist
city[mymax*threadsperblock+tid] = city_id
for i in range(TN):
mymax = find_max(dist)
td[idx+i*nc] = math.sqrt(dist[mymax*threadsperblock+tid])
ti[idx+i*nc] = city[mymax*threadsperblock+tid]
dist[mymax*threadsperblock+tid] = 0
# perform test
cx = np.zeros(N).astype(np.float32)
cy = np.zeros(N).astype(np.float32)
for i in range(N):
cx[i] = Coords[i,0]
cy[i] = Coords[i,1]
td = np.zeros(N*TN, dtype=np.float32)
ti = np.zeros(N*TN, dtype=np.int32)
start = time.time()
d_cx = cuda.to_device(cx)
d_cy = cuda.to_device(cy)
d_td = cuda.device_array_like(td)
d_ti = cuda.device_array_like(ti)
k[N//threadsperblock, threadsperblock](d_cx, d_cy, d_td, d_ti, N)
d_td.copy_to_host(td)
d_ti.copy_to_host(ti)
end = time.time()
print(a)
print
print(end - start)
print(np.flip(np.transpose(ti.reshape(nnear-1,ncities)), 1))
$ python t27.py
(200064, 32)
1.32144594193
[[177220 25281 159413 ..., 133366 45179 27508]
[ 56956 163534 90723 ..., 135081 33025 167104]
[ 88708 42958 162851 ..., 115964 77180 31684]
...,
[186540 52500 124911 ..., 157102 83407 38152]
[151053 144837 34487 ..., 171148 37887 12591]
[ 13820 199968 88667 ..., 7241 172376 51969]]
44.1796119213
[[177220 25281 159413 ..., 133366 45179 27508]
[ 56956 163534 90723 ..., 135081 33025 167104]
[ 88708 42958 162851 ..., 115964 77180 31684]
...,
[186540 52500 124911 ..., 157102 83407 38152]
[151053 144837 34487 ..., 171148 37887 12591]
[ 13820 199968 88667 ..., 7241 172376 51969]]
$
On this very slow Quadro K2000 GPU, the CPU/scipy-based kd-tree algorithm is much faster than this GPU implementation. However, at ~1.3s, the scipy implementation is still a bit slower than the brute-force CUDA C++ code running on a Tesla P100, and faster GPUs exist today. A possible explanation for this disparity is given here. As pointed out there, the brute force approach is easily parallelizable, whereas the kd-tree approach may be non-trivial to parallelize. In any event, a faster solution yet might be a GPU-based kd-tree implementation.

Why is my Python NumPy code faster than C++?

Why is this Python NumPy code,
import numpy as np
import time
k_max = 40000
N = 10000
data = np.zeros((2,N))
coefs = np.zeros((k_max,2),dtype=float)
t1 = time.time()
for k in xrange(1,k_max+1):
    cos_k = np.cos(k*data[0,:])
    sin_k = np.sin(k*data[0,:])
    coefs[k-1,0] = (data[1,-1]-data[1,0]) + np.sum(data[1,:-1]*(cos_k[:-1] - cos_k[1:]))
    coefs[k-1,1] = np.sum(data[1,:-1]*(sin_k[:-1] - sin_k[1:]))
t2 = time.time()
print('Time:')
print(t2-t1)
faster than the following C++ code?
#include <cstdio>
#include <iostream>
#include <cmath>
#include <time.h>
using namespace std;
// consts
const unsigned int k_max = 40000;
const unsigned int N = 10000;
int main()
{
time_t start, stop;
double diff;
// table with data
double data1[ N ];
double data2[ N ];
// table of results
double coefs1[ k_max ];
double coefs2[ k_max ];
// main loop
time( & start );
for( unsigned int j = 1; j<N; j++ )
{
for( unsigned int i = 0; i<k_max; i++ )
{
coefs1[ i ] += data2[ j-1 ]*(cos((i+1)*data1[ j-1 ]) - cos((i+1)*data1[ j ]));
coefs2[ i ] += data2[ j-1 ]*(sin((i+1)*data1[ j-1 ]) - sin((i+1)*data1[ j ]));
}
}
// end of main loop
time( & stop );
// speed result
diff = difftime( stop, start );
cout << "Time: " << diff << " seconds";
return 0;
}
The first one shows: "Time: 8 seconds"
while the second: "Time: 11 seconds"
I know that NumPy is written in C, but I would still expect the C++ example to be faster. Am I missing something? Is there a way to improve the C++ code (or the Python one)?
Version 2 of the code
I have changed the C++ code (from dynamically allocated arrays to static arrays) as suggested in one of the comments. The C++ code is faster now, but still much slower than the Python version.
Version 3 of the code
I have changed from debug to release mode and increased 'k' from 4000 to 40000. Now NumPy is just slightly faster (8 seconds to 11 seconds).
I found this question interesting, because every time I encountered a similar topic about the speed of NumPy (compared to C/C++) there were always answers like "it's a thin wrapper, its core is written in C, so it's fast", but this doesn't explain why C should be slower than C with an additional layer (even a thin one).
The answer is: your C++ code is not slower than your Python code when properly compiled.
I've done some benchmarks, and at first it seemed that NumPy is surprisingly faster. But I forgot about optimizing the compilation with GCC.
I've computed everything again and also compared results with a pure C version of your code. I am using GCC version 4.9.2, and Python 2.7.9 (compiled from the source with the same GCC). To compile your C++ code I used g++ -O3 main.cpp -o main, to compile my C code I used gcc -O3 main.c -lm -o main. In all examples I filled data variables with some numbers (0.1, 0.4), as it changes results. I also changed np.arrays to use doubles (dtype=np.float64), because there are doubles in C++ example. My pure C version of your code (it's similar):
#include <math.h>
#include <stdio.h>
#include <time.h>
const int k_max = 100000;
const int N = 10000;
int main(void)
{
clock_t t_start, t_end;
double data1[N], data2[N], coefs1[k_max], coefs2[k_max], seconds;
int z;
for( z = 0; z < N; z++ )
{
data1[z] = 0.1;
data2[z] = 0.4;
}
int i, j;
t_start = clock();
for( i = 0; i < k_max; i++ )
{
for( j = 0; j < N-1; j++ )
{
coefs1[i] += data2[j] * (cos((i+1) * data1[j]) - cos((i+1) * data1[j+1]));
coefs2[i] += data2[j] * (sin((i+1) * data1[j]) - sin((i+1) * data1[j+1]));
}
}
t_end = clock();
seconds = (double)(t_end - t_start) / CLOCKS_PER_SEC;
printf("Time: %f s\n", seconds);
return coefs1[0];
}
For k_max = 100000, N = 10000 the results were the following:
Python 70.284362 s
C++ 69.133199 s
C 61.638186 s
Python and C++ take basically the same time, but note that there is a Python loop of length k_max, which should be much slower compared to the C/C++ one. And it is.
For k_max = 1000000, N = 1000 we have:
Python 115.42766 s
C++ 70.781380 s
For k_max = 1000000, N = 100:
Python 52.86826 s
C++ 7.050597 s
So the difference increases with the ratio k_max/N, but Python is not faster even for N much bigger than k_max, e.g. k_max = 100, N = 100000:
Python 0.651587 s
C++ 0.568518 s
Obviously, the main speed difference between C/C++ and Python is in the for loop. But I wanted to find out the difference between simple operations on arrays in NumPy and in C. The advantages of using NumPy in your code consist of: 1. multiplying the whole array by a number, 2. calculating sin/cos of the whole array, 3. summing all elements of the array, instead of doing those operations on every single item separately. So I prepared two scripts to compare only these operations.
Python script:
import numpy as np
from time import time

N = 10000
x_len = 100000

def main():
    x = np.ones(x_len, dtype=np.float64) * 1.2345

    start = time()
    for i in xrange(N):
        y1 = np.cos(x, dtype=np.float64)
    end = time()
    print('cos: {} s'.format(end-start))

    start = time()
    for i in xrange(N):
        y2 = x * 7.9463
    end = time()
    print('multi: {} s'.format(end-start))

    start = time()
    for i in xrange(N):
        res = np.sum(x, dtype=np.float64)
    end = time()
    print('sum: {} s'.format(end-start))

    return y1, y2, res

if __name__ == '__main__':
    main()

# results
# cos: 22.7199969292 s
# multi: 0.841291189194 s
# sum: 1.15971088409 s
C script:
#include <math.h>
#include <stdio.h>
#include <time.h>
const int N = 10000;
const int x_len = 100000;
int main()
{
clock_t t_start, t_end;
double x[x_len], y1[x_len], y2[x_len], res, time;
int i, j;
for( i = 0; i < x_len; i++ )
{
x[i] = 1.2345;
}
t_start = clock();
for( j = 0; j < N; j++ )
{
for( i = 0; i < x_len; i++ )
{
y1[i] = cos(x[i]);
}
}
t_end = clock();
time = (double)(t_end - t_start) / CLOCKS_PER_SEC;
printf("cos: %f s\n", time);
t_start = clock();
for( j = 0; j < N; j++ )
{
for( i = 0; i < x_len; i++ )
{
y2[i] = x[i] * 7.9463;
}
}
t_end = clock();
time = (double)(t_end - t_start) / CLOCKS_PER_SEC;
printf("multi: %f s\n", time);
t_start = clock();
for( j = 0; j < N; j++ )
{
res = 0.0;
for( i = 0; i < x_len; i++ )
{
res += x[i];
}
}
t_end = clock();
time = (double)(t_end - t_start) / CLOCKS_PER_SEC;
printf("sum: %f s\n", time);
return y1[0], y2[0], res;
}
// results
// cos: 20.910590 s
// multi: 0.633281 s
// sum: 1.153001 s
Python results:
cos: 22.7199969292 s
multi: 0.841291189194 s
sum: 1.15971088409 s
C results:
cos: 20.910590 s
multi: 0.633281 s
sum: 1.153001 s
As you can see NumPy is incredibly fast, but always a bit slower than pure C.
I am actually surprised that no one has mentioned linear algebra libraries like BLAS, LAPACK, MKL and so on...
NumPy is using complex linear algebra libraries!
Essentially, NumPy is most of the time not built on plain C/C++/Fortran code; it is actually built on complex libraries that take advantage of the most performant algorithms and ideas to optimize the code. These complex libraries are hardly matched by naive implementations of classic linear algebra computations. The simplest first example of improvement is the blocking trick.
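As a toy illustration of the blocking idea (a sketch in NumPy terms; the tiles here still end up calling BLAS, so this only shows the access pattern, not a faster routine):
import numpy as np

def blocked_matmul(A, B, bs=64):
    # accumulate C tile by tile so each bs x bs tile stays in cache while it is reused
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m), dtype=A.dtype)
    for i in range(0, n, bs):
        for j in range(0, m, bs):
            for p in range(0, k, bs):
                C[i:i+bs, j:j+bs] += A[i:i+bs, p:p+bs] @ B[p:p+bs, j:j+bs]
    return C

A = np.random.rand(512, 512)
B = np.random.rand(512, 512)
print(np.allclose(blocked_matmul(A, B), A @ B))  # same result as the library call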
I took the following image from the CSE lab at ETH, where they compare matrix-vector multiplication for different implementations. The y-axis represents the intensity of computation (in GFLOPs); long story short, it shows how fast the computations are done. The x-axis is the dimension of the matrix.
C and C++ are fast languages, but if you actually want to match the speed of these libraries, you might have to go one step deeper and use either Fortran or intrinsics instructions (which are perhaps the closest to assembly code you can get from C++).
Consider the question Benchmarking (python vs. c++ using BLAS) and (numpy), with the very good answer from @Jfs, where we observe: "There is no difference between C++ and numpy on my machine."
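One way to check this on your own machine (output depends entirely on how your NumPy was built, so treat this as a sketch) is to look at which BLAS NumPy is linked against and time an operation that the library dominates:
import time
import numpy as np

np.show_config()  # prints the BLAS/LAPACK implementation NumPy was built against

A = np.random.rand(2000, 2000)
B = np.random.rand(2000, 2000)
t0 = time.time()
C = A @ B  # dispatched to the underlying BLAS (dgemm)
print('2000x2000 matmul:', round(time.time() - t0, 3), 's')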
Some more references:
Why is a naïve C++ matrix multiplication 100 times slower than BLAS?
On my computer, your (current) Python code runs in 14.82 seconds (yes, my computer's quite slow).
I rewrote your C++ code to something I'd consider halfway reasonable (basically, I almost ignored your C++ code and just rewrote your Python into C++). That gave me this:
#include <cstdio>
#include <iostream>
#include <cmath>
#include <chrono>
#include <vector>
#include <assert.h>
const unsigned int k_max = 40000;
const unsigned int N = 10000;
template <class T>
class matrix2 {
std::vector<T> data;
size_t cols;
size_t rows;
public:
matrix2(size_t y, size_t x) : cols(x), rows(y), data(x*y) {}
T &operator()(size_t y, size_t x) {
assert(x <= cols);
assert(y <= rows);
return data[y*cols + x];
}
T operator()(size_t y, size_t x) const {
assert(x <= cols);
assert(y <= rows);
return data[y*cols + x];
}
};
int main() {
matrix2<double> data(N, 2);
matrix2<double> coeffs(k_max, 2);
using namespace std::chrono;
auto start = high_resolution_clock::now();
for (int k = 0; k < k_max; k++) {
for (int j = 0; j < N - 1; j++) {
coeffs(k, 0) += data(j, 1) * (cos((k + 1)*data(j, 0)) - cos((k + 1)*data(j+1, 0)));
coeffs(k, 1) += data(j, 1) * (sin((k + 1)*data(j, 0)) - sin((k + 1)*data(j+1, 0)));
}
}
auto end = high_resolution_clock::now();
std::cout << duration_cast<milliseconds>(end - start).count() << " ms\n";
}
This ran in about 14.4 seconds, so it's a slight improvement over the Python version--but given that the Python is mostly a pretty thin wrapper around some C code, getting only a slight improvement is pretty much what we should expect.
The next obvious step would be to use multiple cores. To do that in C++, we can add this line:
#pragma omp parallel for
...before the outer for loop:
#pragma omp parallel for
for (int k = 0; k < k_max; k++) {
for (int j = 0; j < N - 1; j++) {
coeffs(k, 0) += data(j, 1) * (cos((k + 1)*data(j, 0)) - cos((k + 1)*data(j+1, 0)));
coeffs(k, 1) += data(j, 1) * (sin((k + 1)*data(j, 0)) - sin((k + 1)*data(j+1, 0)));
}
}
With -openmp added to the compiler's command line (though the exact flag depends on the compiler you're using, of course), this ran in about 4.8 seconds. If you have more than 4 cores, you can probably expect a larger improvement than that (conversely, if you have fewer than 4 cores, expect a smaller improvement - but nowadays, more than 4 is a lot more common than fewer).
I tried to understand your Python code and reproduce it in C++. I found that you didn't arrange the for-loops correctly to do the right calculation of the coefs, so you should switch your for-loops. If this is the case, you should have the following:
#include <iostream>
#include <cmath>
#include <time.h>
const int k_max = 40000;
const int N = 10000;
double cos_k, sin_k;
int main(int argc, char const *argv[])
{
time_t start, stop;
double data[2][N];
double coefs[k_max][2];
time(&start);
for(int i=0; i<k_max; ++i)
{
for(int j=1; j<N; ++j) // start at 1 so that data[...][j-1] stays in bounds
{
coefs[i][0] += data[1][j-1] * (cos((i+1) * data[0][j-1]) - cos((i+1) * data[0][j]));
coefs[i][1] += data[1][j-1] * (sin((i+1) * data[0][j-1]) - sin((i+1) * data[0][j]));
}
}
// End of main loop
time(&stop);
// Speed result
double diff = difftime(stop, start);
std::cout << "Time: " << diff << " seconds" << std::endl;
return 0;
}
Switching the for-loops gives me 3 seconds for the C++ code, optimized with -O3, while the Python code runs in 7.816 seconds.
The Python code can't be faster than properly-coded C++ code, since NumPy is coded in C, which is often slower than C++ because C++ can do more optimizations. The two will only be close - with Python running somewhere between the same time as C++ and about twice as long - when the majority of your computation happens in large operations that Python pushes off to compiled binaries. Most anything beyond large matrix multiplication, addition, scalar-on-matrix multiplication, etc. will perform much worse in Python. For example, look at the Benchmark Game, where people submit solutions to various algorithms in various languages, and the website keeps track of the fastest submissions for each (algorithm, language) pair. You can even view the source code for each submission. For most test cases, Python is 2-15 times slower than C++. That makes sense if you do anything other than simple math operations - anything with linked lists, binary search trees, procedural code, etc. The interpreted nature of Python, combined with it storing metadata for each object (even int, double, float, etc.), significantly bogs things down in a way that no Python programmer can fix.

python function(or a code block) runs much slower with a time interval in a loop

I have noticed a case in Python where a block of code nested in a loop runs much faster when the loop runs continuously than when there is a .sleep() interval between iterations.
I wonder about the reason and a possible solution.
I guess it's related to the CPU cache or some mechanism of the CPython VM.
'''
Created on Aug 22, 2015
@author: doge
'''
import numpy as np
import time
import gc
gc.disable()
t = np.arange(100000)
for i in xrange(100):
    #np.sum(t)
    time.sleep(1)  # --> if you comment this line, the following lines will be much faster

    st = time.time()
    np.sum(t)
    print (time.time() - st)*1e6
result:
without sleep in loop, time consumed: 50us
with a sleep in loop, time consumed: >150us
One disadvantage of .sleep() is that it releases the CPU, so below I provide an exactly equivalent version where the delay comes from a counting loop instead (with a C counterpart further down):
'''
Created on Aug 22, 2015
#author: doge
'''
import numpy as np
import time
import gc
gc.disable()
t = np.arange(100000)
count = 0
for i in xrange(100):
    count += 1
    if ( count % 1000000 != 0 ):
        continue
    # --> these three lines make the following lines much slower

    st = time.time()
    np.sum(t)
    print (time.time() - st)*1e6
another experiment: (we remove the for loop)
st = time.time()
np.sum(t)
print (time.time() - st)*1e6
st = time.time()
np.sum(t)
print (time.time() - st)*1e6
st = time.time()
np.sum(t)
print (time.time() - st)*1e6
...
st = time.time()
np.sum(t)
print (time.time() - st)*1e6
Result: the execution time decreased gradually from 150us to 50us, and then stayed stable at 50us.
To find out whether this is a CPU-cache problem, I wrote a C counterpart, and found that this kind of phenomenon does not happen there:
#include <iostream>
#include <sys/time.h>
#define num 100000
using namespace std;

long gus()
{
    struct timeval tm;
    gettimeofday(&tm, NULL);
    return ( (tm.tv_sec % 86400 + 28800) % 86400 )*1000000 + tm.tv_usec;
}

double vec_sum(double *v, int n){
    double result = 0;
    for(int i = 0; i < n; ++i){
        result += v[i];
    }
    return result;
}

int main(){
    double a[num];
    for(int i = 0; i < num; ++i){
        a[i] = (double)i;
    }
    //for(int i = 0; i < 1000; ++i){
    //    cout<<a[i]<<"\n";
    //}

    int count = 0;
    long st;
    while(1){
        ++count;
        if(count%100000000 != 0){ //---> i use this line to create a delay, we can do the same way in python, result is the same
        //if(count%1 != 0){
            continue;
        }
        st = gus();
        vec_sum(a, num);
        cout << gus() - st << endl;
    }
    return 0;
}
result:
time stays stable at about 250us, no matter whether the delay check is count%100000000 or count%1
(Not an answer, but too long to post as a comment.)
I did some experimentation and ran something slightly simpler through timeit.
from timeit import timeit
import time

n_loop = 15
n_timeit = 10
sleep_sec = 0.1
t = range(100000)

def with_sleep():
    for i in range(n_loop):
        s = sum(t)
        time.sleep(sleep_sec)

def without_sleep():
    for i in range(n_loop):
        s = sum(t)

def sleep_only():
    for i in range(n_loop):
        time.sleep(sleep_sec)

wo = timeit(setup='from __main__ import without_sleep',
            stmt='without_sleep()',
            number=n_timeit)
w = timeit(setup='from __main__ import with_sleep',
           stmt='with_sleep()',
           number=n_timeit)
so = timeit(setup='from __main__ import sleep_only',
            stmt='sleep_only()',
            number=n_timeit)

print(so - n_timeit*n_loop*sleep_sec, so)
print(w - n_timeit*n_loop*sleep_sec, w)
print(wo)
the result is:
0.031275457000447204 15.031275457000447
1.0220358229998965 16.022035822999896
0.41462676399987686
The first line is just a check that the sleep-only function takes about n_timeit*n_loop*sleep_sec seconds; if that value is small, the measurement should be OK.
But as you can see, your finding remains: the loop with the sleep call (after subtracting the time spent sleeping) takes more time than the loop without sleep...
I don't think Python optimizes away the loop without sleep (a C compiler might, since the variable s is never used).

Python code optimization (20x slower than C)

I've written this very badly optimized C code that does a simple math calculation:
#include <stdio.h>
#include <math.h>
#include <stdlib.h>

#define MIN(a, b) (((a) < (b)) ? (a) : (b))
#define MAX(a, b) (((a) > (b)) ? (a) : (b))

unsigned long long int p(int);
float fullCheck(int);

int main(int argc, char **argv){
    int i, g, maxNumber;
    unsigned long long int diff = 1000;
    if(argc < 2){
        fprintf(stderr, "Usage: %s maxNumber\n", argv[0]);
        return 0;
    }
    maxNumber = atoi(argv[1]);
    for(i = 1; i < maxNumber; i++){
        for(g = 1; g < maxNumber; g++){
            if(i == g)
                continue;
            if(p(MAX(i,g)) - p(MIN(i,g)) < diff && fullCheck(p(MAX(i,g)) - p(MIN(i,g))) && fullCheck(p(i) + p(g))){
                diff = p(MAX(i,g)) - p(MIN(i,g));
                printf("We have a couple %llu %llu with diff %llu\n", p(i), p(g), diff);
            }
        }
    }
    return 0;
}

float fullCheck(int number){
    float check = (-1 + sqrt(1 + 24 * number))/-6;
    float check2 = (-1 - sqrt(1 + 24 * number))/-6;
    if(check/1.00 == (int)check)
        return check;
    if(check2/1.00 == (int)check2)
        return check2;
    return 0;
}

unsigned long long int p(int n){
    return n * (3 * n - 1 ) / 2;
}
And then I tried (just for fun) to port it to Python to see how it would react. My first version was almost a 1:1 conversion that ran terribly slowly (120+ seconds in Python vs <1 second in C).
I've done a bit of optimization, and this is what I obtained:
#!/usr/bin/env python
from cmath import sqrt
import cProfile
from pstats import Stats

def quickCheck(n):
    partial_c = (sqrt(1 + 24 * (n)))/-6
    c = 1/6 + partial_c
    if int(c.real) == c.real:
        return True
    c = c - 2*partial_c
    if int(c.real) == c.real:
        return True
    return False

def main():
    maxNumber = 5000
    diff = 1000
    for i in range(1, maxNumber):
        p_i = i * (3 * i - 1 ) / 2
        for g in range(i, maxNumber):
            if i == g:
                continue
            p_g = g * (3 * g - 1 ) / 2
            if p_i > p_g:
                ma = p_i
                mi = p_g
            else:
                ma = p_g
                mi = p_i
            if ma - mi < diff and quickCheck(ma - mi):
                if quickCheck(ma + mi):
                    print ('New couple ', ma, mi)
                    diff = ma - mi

cProfile.run('main()', 'script_perf')
perf = Stats('script_perf').sort_stats('time', 'calls').print_stats(10)
This runs in about 16 seconds, which is better, but still almost 20 times slower than C.
Now, I know C is better than Python for this kind of calculation, but what I would like to know is whether there is something I've missed (Python-wise, like a horribly slow function or some such) that could have made this version faster.
Please note that I'm using Python 3.1.1, if that makes a difference.
Since quickCheck is being called close to 25,000,000 times, you might want to use memoization to cache the answers (a combined sketch follows this answer).
You can do memoization in C as well as in Python. Things will be much faster in C, too.
You're computing 1/6 in each call to quickCheck. I'm not sure whether Python will optimize this out, but if you can avoid recomputing constant values, you'll find things are faster. C compilers do this for you.
Doing things like if condition: return True; else: return False is silly -- and time consuming. Simply do return condition.
In Python 3.x, /2 creates floating-point values. You appear to need integers here, so you should be using //2 division. It will be closer to the C version in terms of what it does, but I don't think it's significantly faster.
Finally, Python is generally interpreted. The interpreter will always be significantly slower than C.
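Here is a rough sketch of what those suggestions might look like combined (my illustration, not the asker's final code; it assumes a Python version with functools.lru_cache, which the 3.1.1 mentioned in the question does not have -- on 3.1 a plain dict cache like the one in the next answer does the same job):
from functools import lru_cache   # assumption: Python 3.2+ (not available in 3.1)
from math import sqrt             # 1 + 24*n is never negative here, so math.sqrt suffices

ONE_SIXTH = 1 / 6                  # hoisted constant instead of recomputing 1/6 per call

@lru_cache(maxsize=None)           # memoize: quickCheck is called ~25,000,000 times
def quickCheck(n):
    root = sqrt(1 + 24 * n) / 6
    c1 = ONE_SIXTH + root
    c2 = ONE_SIXTH - root
    # return the condition directly instead of if/return True/return False
    return int(c1) == c1 or int(c2) == c2

def p(n):
    return n * (3 * n - 1) // 2    # // keeps everything an integer, as suggested above
main() can keep calling quickCheck and p exactly as before; only the helpers change.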
I made it go from ~7 seconds to ~3 seconds on my machine:
Precomputed i * (3 * i - 1) / 2 for each value; in your version it was recomputed many times.
Cached calls to quickCheck.
Removed if i == g by starting the inner range at i+1.
Removed if p_i > p_g, since p_i is always smaller than p_g.
Also put the quickCheck function inside main, to make all its variables local (locals have faster lookup than globals).
I'm sure there are more micro-optimizations available.
from cmath import sqrt   # needed if this replaces main() in the script above

def main():
    maxNumber = 5000
    diff = 1000
    p = {}
    quickCache = {}
    for i in range(maxNumber):
        p[i] = i * (3 * i - 1 ) / 2

    def quickCheck(n):
        if n in quickCache: return quickCache[n]
        partial_c = (sqrt(1 + 24 * (n)))/-6
        c = 1/6 + partial_c
        if int(c.real) == c.real:
            quickCache[n] = True
            return True
        c = c - 2*partial_c
        if int(c.real) == c.real:
            quickCache[n] = True
            return True
        quickCache[n] = False
        return False

    for i in range(1, maxNumber):
        mi = p[i]
        for g in range(i+1, maxNumber):
            ma = p[g]
            if ma - mi < diff and quickCheck(ma - mi) and quickCheck(ma + mi):
                print('New couple ', ma, mi)
                diff = ma - mi
Because the function p() is monotonically increasing, you can avoid comparing the values: g > i implies p(g) > p(i). Also, the inner loop can be broken out of early, because p(g) - p(i) >= diff implies p(g+1) - p(i) >= diff.
Also for correctness, I changed the equality comparison in quickCheck to compare difference against an epsilon because exact comparison with floating point is pretty fragile.
On my machine this reduced the runtime to 7.8ms using Python 2.6. Using PyPy with JIT reduced this to 0.77ms.
This shows that before turning to micro-optimization it pays to look for algorithmic optimizations. Micro-optimizations make spotting algorithmic changes much harder for relatively tiny gains.
from math import sqrt   # 1 + 24*n is never negative here, so math.sqrt suffices

EPS = 0.00000001

def quickCheck(n):
    partial_c = sqrt(1 + 24*n) / -6
    c = 1.0/6 + partial_c   # 1.0/6 so this also works under Python 2's integer division
    if abs(int(c) - c) < EPS:
        return True
    c = 1.0/6 - partial_c
    if abs(int(c) - c) < EPS:
        return True
    return False

def p(i):
    return i * (3 * i - 1 ) / 2

def main(maxNumber):
    diff = 1000
    for i in range(1, maxNumber):
        for g in range(i+1, maxNumber):
            if p(g) - p(i) >= diff:
                break
            if quickCheck(p(g) - p(i)) and quickCheck(p(g) + p(i)):
                print('New couple ', p(g), p(i), p(g) - p(i))
                diff = p(g) - p(i)
There are some Python compilers that might actually do a good bit for you. Have a look at Psyco.
Another way of dealing with math-intensive programs is to rewrite the majority of the work into a math kernel, such as NumPy, so that heavily optimized code does the work and your Python code only guides the calculation. To get the most out of this strategy, avoid doing calculations in loops and instead let the math kernel do all of that; a sketch follows below.
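As a sketch of that strategy for this particular problem (entirely my own, assuming NumPy is available; none of the answers here posted a vectorised version): the inner loop over g can be replaced by whole-array operations, and the pentagonality test can be made exact by inverting P(k) = k(3k-1)/2 and verifying the rounded candidate index.
import numpy as np

def is_pentagonal(x):
    # invert P(k) = k*(3k - 1)/2, round to the nearest index, then verify exactly
    k = np.round((1 + np.sqrt(1 + 24 * x)) / 6).astype(np.int64)
    return (k > 0) & (k * (3 * k - 1) // 2 == x)

def main(maxNumber=5000):
    idx = np.arange(1, maxNumber, dtype=np.int64)
    pent = idx * (3 * idx - 1) // 2            # all pentagonal numbers P(1)..P(maxNumber-1)
    best = None
    for a in range(len(pent) - 1):             # only the outer loop stays in Python
        rest = pent[a + 1:]
        d = rest - pent[a]                     # differences with every later pentagonal
        s = rest + pent[a]                     # sums with every later pentagonal
        ok = is_pentagonal(d) & is_pentagonal(s)
        if ok.any():
            cand = int(d[ok].min())
            if best is None or cand < best:
                best = cand
                print('New couple, diff', cand)
    return best

if __name__ == '__main__':
    print(main())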
The other respondents have already mentioned several optimizations that will help. However, ultimately, you're not going to be able to match the performance of C in Python. Python is a nice tool, but since it's interpreted, it isn't really suited for heavy number crunching or other apps where performance is key.
Also, even in your C version, your inner loop could use quite a bit of help. Updated version:
for(i = 1; i < maxNumber; i++){
    for(g = 1; g < maxNumber; g++){
        if(i == g)
            continue;
        max = i;
        min = g;
        if (max < min) {
            // xor swap - could use swap(max, min) instead.
            max = max ^ min;
            min = max ^ min;
            max = max ^ min;
        }
        p_max = P(max);
        p_min = P(min);
        p_i = P(i);
        p_g = P(g);
        if(p_max - p_min < diff && fullCheck(p_max - p_min) && fullCheck(p_i + p_g)){
            diff = p_max - p_min;
            printf("We have a couple %llu %llu with diff %llu\n", p_i, p_g, diff);
        }
    }
}

///////////////////////////

float fullCheck(int number){
    float den = sqrt(1 + 24 * number)/6.0;
    float check = 1/6.0 - den;
    float check2 = 1/6.0 + den;
    if(check == (int)check)
        return check;
    if(check2 == (int)check2)
        return check2;
    return 0.0;
}
Division, function calls, etc. are costly. Also, calculating values once and storing them in variables, as I've done here, can make things a lot more readable.
You might consider declaring P() as inline or rewrite as a preprocessor macro. Depending on how good your optimizer is, you might want to perform some of the arithmetic yourself and simplify its implementation.
Your implementation of fullCheck() would return what appear to be invalid results, since 1/6 == 0 in C, whereas 1/6.0 returns the 0.166... you would expect.
This is a very brief take on what you can do to your C code to improve performance. This will, no doubt, widen the gap between C and Python performance.
A 20x difference between Python and C for a number-crunching task seems quite good to me.
Check the usual performance differences for some CPU intensive tasks (keep in mind that the scale is logarithmic).
But look on the bright side, what's 1 minute of CPU time compared with the brain and typing time you saved writing Python instead of C? :-)
