I have a Python algorithm that takes two strings as input and runs various tests on their characters to return a score.
This often involves hundreds of pairs of strings, and since nothing is written to shared memory, concurrency problems shouldn't be an issue.
The thing is, from my (limited) GPU programming experience with OpenGL shaders, I recall that you are required to write simple loops and give every array a fixed length, which is annoying because strings are effectively arrays of variable length.
I could consider turning the Python strings into C-like char arrays, but that seems like a tedious solution, and it doesn't solve the problem of writing simple loops.
My question is this: is there any way to achieve significant performance gains by parallelizing Python code like this on a GPU? Is it even possible?
def evaluator(baseStr, listOfStr):
    scoreList = []
    for word in listOfStr:  # PARALLELIZE THIS
        scoreList.append(evaluateTwoWords(baseStr, word))
    return scoreList

def evaluateTwoWords(baseStr, otherStr):
    # SOME WORD-WISE COMPARISON
    i = 0
    j = 0
    while i < len(baseStr) and j < len(otherStr):
        ...
    return someScore
For the code provided above, yes, you could achieve a significant speedup on a GPU if every thread/worker on the GPU is assigned one string comparison as its task.
But there are a few constraints with a GPU.
1) If the string list to be loaded into device memory is too large, a lot of system bandwidth is used to copy it from the host to device memory. This transfer is one of the biggest setbacks of using a GPU.
2) A GPU is most effective at algorithms with strong SIMD (Single Instruction, Multiple Data) characteristics. See https://en.wikipedia.org/wiki/SIMD for more info on SIMD. The more you deviate from SIMD, the bigger the penalty on the speedup.
Below is a sample PyCUDA version of your program.
I've used PyCUDA, but there are OpenCL Python bindings that do the job as well. I haven't tested the GPU code below due to hardware constraints, but I've written it primarily from these examples: http://wiki.tiker.net/PyCuda/Examples.
This is what the code does.
1) Copy the string list to GPU device memory.
2) Copy the base string to device memory.
3) Call the kernel function to compute a score per string.
4) Finally, reduce the returned values using summation or any reduce function of your choice.
The code below is a good example of SIMD, where the result of one thread is independent of the result of another thread. But that's just an ideal case; you may have to decide whether your algorithm is a good candidate for a GPU or not.
import numpy
import pycuda.driver as cuda
import pycuda.autoinit
from pycuda.compiler import SourceModule

string_list = ['Apple', 'Microsoft', 'Google', 'Facebook', 'Twitter']
# Pack the strings into a fixed-width byte array so the GPU sees one flat buffer
string_list_lines = numpy.array(string_list, dtype='S16')

# Allocate memory for the list of strings on the GPU device
string_list_linesGPU = cuda.mem_alloc(string_list_lines.size * string_list_lines.dtype.itemsize)
# After allocating, copy it to GPU device memory
cuda.memcpy_htod(string_list_linesGPU, string_list_lines)
## ****** Now the GPU device has the list of strings loaded into it

## The same process applies to the base string
baseStr = numpy.array([b"Seagate"], dtype='S16')
baseStrGPU = cuda.mem_alloc(baseStr.nbytes)
cuda.memcpy_htod(baseStrGPU, baseStr)

# Num of blocks (one block per string)
blocks = len(string_list)
# Threads per block
threadsPerBlock = 1

# The actual kernel function: it writes one score per string into `scores`
# (a toy character comparison -- replace the loop body with your own scoring logic)
mod = SourceModule("""
__global__ void evaluateTwoWords(char *base, char *words, int width, int *scores)
{
    int idx = blockIdx.x;
    char *word = words + idx * width;
    int score = 0;
    for (int i = 0; i < width && base[i] != 0 && word[i] != 0; ++i) {
        if (base[i] == word[i]) {
            ++score;
        }
    }
    scores[idx] = score;
}
""")

# Run the kernel and reduce the per-string scores on the host
gpusin = mod.get_function("evaluateTwoWords")
scores = numpy.zeros(blocks, dtype=numpy.int32)
gpusin(baseStrGPU, string_list_linesGPU, numpy.int32(string_list_lines.dtype.itemsize),
       cuda.Out(scores), grid=(blocks, 1), block=(threadsPerBlock, 1, 1))
result = int(scores.sum())
Hope this helps!
I have been trying to debug a program that uses vast amounts of memory and have distilled it into the following example:
# Caution, use carefully, this can utilise all available memory on your computer
# and render it effectively unresponsive, to the point where you cannot access
# the shell to kill the process; thus requiring reboot.
import numpy as np
import collections
import torch

# q = collections.deque(maxlen=1500) # Uses around 6.4GB
# q = collections.deque(maxlen=3000) # Uses around 12GB
q = collections.deque(maxlen=5000)   # Uses around 18GB

def f():
    nparray = np.zeros([4, 84, 84], dtype=np.uint8)
    q.append(nparray)
    nparray1 = np.zeros([32, 4, 84, 84], dtype=np.float32)
    tens = torch.tensor(nparray1, dtype=torch.float32)

while True:
    f()
Please note the cautionary message in the 1st line of this program. If you set maxlen to a level where it uses too much of your available RAM, it can crash your computer.
I measured the memory using top (VIRT column), and its memory use seems wildly excessive (details in the commented lines above). From previous experience with my original program, if maxlen is high enough it will crash my computer.
Why is it using so much memory?
I calculate the increase in expected memory from maxlen=1500 to maxlen=3000 to be:
4 * 84 * 84 * 15000 / (1024**2) == 403MB.
But we see an increase of 6GB.
There seems to be some sort of interaction between using collections and the tensor allocation, since commenting out either one brings memory use back to expected levels; e.g. commenting out the tensor line leads to a total memory use of 2GB, which seems much more reasonable.
Thanks for any help or insight,
Julian.
I think PyTorch stores and updates the computational graph each time you call f(), and thus the graph just keeps getting bigger and bigger.
Can you try freeing the memory with del tens (deleting the reference to the variable after use) and let me know how it works? (See the PyTorch documentation here: https://pytorch.org/docs/stable/notes/faq.html)
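For reference, a minimal sketch of the suggested change, assuming the f() from the question:

import collections
import numpy as np
import torch

q = collections.deque(maxlen=5000)

def f():
    nparray = np.zeros([4, 84, 84], dtype=np.uint8)
    q.append(nparray)
    nparray1 = np.zeros([32, 4, 84, 84], dtype=np.float32)
    tens = torch.tensor(nparray1, dtype=torch.float32)
    del tens  # drop the reference so the tensor's storage can be released right away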
I am trying to speed up Python programs using Rust, a language in which I am a total beginner. I wrote a function that counts the occurrences of each possible string of length n within a larger string. For instance, if the main string is "AAAAT" and n=3, the outcome would be a hashmap {"AAA":2,"AAT":1}. I use pyo3 to call the Rust function from Python. The code of the Rust function is:
use std::collections::HashMap;
use pyo3::prelude::*;

fn count_nmers(seq: &str, n: usize) -> PyResult<HashMap<&str, u64>> {
    let mut current_pos: usize = 0;
    let mut counts: HashMap<&str, u64> = HashMap::new();
    while current_pos + n <= seq.len() {
        //print!("{}\n", &seq[current_pos..current_pos+n]);
        match counts.get(&seq[current_pos..current_pos + n]) {
            Some(repeats) => counts.insert(&seq[current_pos..current_pos + n], repeats + 1),
            None => counts.insert(&seq[current_pos..current_pos + n], 1),
        };
        current_pos += 1;
    }
    //print!("{:?}", counts)
    Ok(counts)
}
When I use small values of n (n<10), Rust is about an order of magnitude faster than Python, but as n increases the gap shrinks towards zero, with both functions running at the same speed by n=200 (see graph).
[Graph: times to count for different n-mer lengths (Python in black, Rust in red)]
I must be doing something wrong with the strings, but I can't find the mistake.
The python code is:
def nmer_freq_table(sequence, nmer_length=6):
    nmer_dict = dict()
    for nmer in seq_win(sequence, window_size=nmer_length):
        if str(nmer) in nmer_dict.keys():
            nmer_dict[str(nmer)] = nmer_dict[str(nmer)] + 1
        else:
            nmer_dict[str(nmer)] = 1
    return nmer_dict

def seq_win(seq, window_size=2):
    length = len(seq)
    i = 0
    while i + window_size <= length:
        yield seq[i:i + window_size]
        i += 1
You are computing the hash function multiple times; this may matter for large n values. Try using the entry function instead of manual inserts:
while current_pos + n <= seq.len() {
    let en = counts.entry(&seq[current_pos..current_pos + n]).or_default();
    *en += 1;
    current_pos += 1;
}
Complete code here
Next, make sure you are running --release compiled code, e.g. cargo run --release.
One more thing to keep in mind is discussed here: Rust may use a non-optimal hash function for your case, which you can change.
And finally, on large data, most of the time is spent in HashMap/dict internals, which are compiled code rather than Python. So don't expect it to scale well.
Could it be that, as n gets larger, the number of iterations through the loop gets smaller?
Fewer iterations through the loop would reduce the performance gain from using Rust. I'm sure there is a small per-call performance cost for transitioning/marshalling from Python to Rust. This would explain how the performance of pure Python and Python/Rust eventually becomes the same.
I was randomly comparing the computation times of an explicit for-loop with a vectorized implementation in numpy. I ran exactly 1 million iterations and found some astounding differences: the for-loop took about 646 ms, while np.exp() computed the same result in less than 20 ms.
import time
import math
import numpy as np
iter = 1000000
x = np.zeros((iter,1))
v = np.random.randn(iter,1)
before = time.time()
for i in range(iter):
    x[i] = math.exp(v[i])
after = time.time()
print(x)
print("Non vectorized= " + str((after-before)*1000) + "ms")
before = time.time()
x = np.exp(v)
after = time.time()
print(x)
print("Vectorized= " + str((after-before)*1000) + "ms")
The result I got:
[[0.9256753 ]
[1.2529006 ]
[3.47384978]
...
[1.14945181]
[0.80263805]
[1.1938528 ]]
Non vectorized= 646.1577415466309ms
[[0.9256753 ]
[1.2529006 ]
[3.47384978]
...
[1.14945181]
[0.80263805]
[1.1938528 ]]
Vectorized= 19.547224044799805ms
My questions are:
1) What exactly is happening in the second case? The first one uses an explicit for-loop, so its computation time is justified; what is happening "behind the scenes" in the second case?
2) How can one implement such computations (the second case) without using numpy (in plain Python)?
What is happening is that NumPy is calling high-quality numerical libraries (BLAS, for instance) which are very good at vector arithmetic.
I imagine you could call the exact libraries used by NumPy directly; however, NumPy would likely know best which to use.
NumPy is a Python wrapper over libraries and code written in C. This is a large part of the efficiency of NumPy. C code compiles directly to instructions which are executed by your processor or GPU. On the other hand, Python code must be interpreted as it executes. Despite the ever increasing speed we can get from interpreted languages with advances like Just In Time Compilers, for some tasks they will never be able to approach the speed of compiled languages.
It comes down to the fact that Python does not have direct access to the hardware level.
Python can't use the SIMD (Single Instruction, Multiple Data) assembly instructions that most modern CPUs and GPUs have. These SIMD instructions allow a single operation to execute on a vector of data all at once (within a single clock cycle) at the hardware level.
NumPy on the other hand has functions built in C, and C is a language capable of running SIMD instructions. Therefore NumPy can take advantage of the vectorization hardware in your processor.
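Regarding the second question: without numpy you can only loop at the interpreter level, e.g. with math.exp and a list comprehension. This avoids numpy but cannot reach the SIMD hardware, so it stays in the same ballpark as the explicit for-loop (a minimal sketch; timings are illustrative):

import math
import time

v = [0.5] * 1000000  # plain list standing in for the random vector

before = time.time()
x = [math.exp(val) for val in v]
after = time.time()
print("Plain Python = " + str((after - before) * 1000) + "ms")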
I use the following code to load 24-bit binary data into a 16-bit numpy array:
temp = numpy.zeros((len(data) / 3, 4), dtype='b')
temp[:, 1:] = numpy.frombuffer(data, dtype='b').reshape(-1, 3)
temp2 = temp.view('<i4').flatten() >> 16 # >> 16 because I need to divide by 2**16 to load my data into 16-bit array, needed for my (audio) application
output = temp2.astype('int16')
I imagine that it's possible to improve the speed, but how?
It seems like you are being very roundabout here. Won't this do the same thing?
output = np.frombuffer(data,'b').reshape(-1,3)[:,1:].flatten().view('i2')
This saves some time by not zero-filling a temporary array, skipping the bit shift and avoiding some unnecessary data moves. I haven't actually benchmarked it yet, though, and I expect the savings to be modest.
Edit: I have now performed the benchmark. For len(data) of 12 million, I get 80 ms for your version and 39 ms for mine, so pretty much exactly a factor 2 speedup. Not a very big improvement, as expected, but then your starting point was already pretty fast.
Edit2: I should mention that I have assumed little endian here. However, the original question's code is also implicitly assuming little endian, so this is not a new assumption on my part.
(For big endian (data and architecture), you would replace 1: by :-1. If the data had a different endianness than the CPU, then you would also need to reverse the order of the bytes (::-1).)
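If you want to reproduce such a benchmark yourself, here is a minimal sketch using synthetic little-endian data (the byte values are random, so only the timings and the equivalence check are meaningful):

import numpy as np
import timeit

# 12 million bytes = 4 million 24-bit little-endian samples of synthetic data
data = np.random.randint(0, 256, size=12_000_000, dtype=np.uint8).tobytes()

def original(data):
    temp = np.zeros((len(data) // 3, 4), dtype='b')
    temp[:, 1:] = np.frombuffer(data, dtype='b').reshape(-1, 3)
    return (temp.view('<i4').flatten() >> 16).astype('int16')

def sliced(data):
    return np.frombuffer(data, 'b').reshape(-1, 3)[:, 1:].flatten().view('i2')

assert np.array_equal(original(data), sliced(data))
print(timeit.timeit(lambda: original(data), number=10))
print(timeit.timeit(lambda: sliced(data), number=10))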
Edit3: For even more speed, I think you will have to go outside Python. This Fortran function, which also uses OpenMP, gets me a factor 2+ speedup compared to my version (so 4+ times faster than yours).
subroutine f(a,b)
    implicit none
    integer*1, intent(in)  :: a(:)
    integer*1, intent(out) :: b(size(a)*2/3)
    integer :: i

    !$omp parallel do
    do i = 1, size(a)/3
        b(2*(i-1)+1) = a(3*(i-1)+2)
        b(2*(i-1)+2) = a(3*(i-1)+3)
    end do
    !$omp end parallel do
end subroutine
Compile with FOPT="-fopenmp" f2py -c -m basj{,.f90} -lgomp. You can then import and use it in python:
import basj
import numpy as np

def convert(data):
    return basj.f(np.frombuffer(data, 'b')).view('i2')
You can control the number of cores to use via the environment variable OMP_NUM_THREADS; it defaults to using all available cores.
Inspired by @amaurea's answer, here is a Cython version (I already used Cython in my original code, so I'll continue with Cython instead of mixing Cython and Fortran):
import cython
import numpy as np
cimport numpy as np

def binary24_to_int16(char *data):
    cdef int i
    res = np.zeros(len(data)/3, np.int16)
    b = <char *>((<np.ndarray>res).data)
    for i in range(len(data)/3):
        b[2*i] = data[3*i+1]
        b[2*i+1] = data[3*i+2]
    return res
There is a factor 4 speed gain :)
I am comparing the performance of numpy vs. Matlab, and in several cases I have observed that numpy is significantly slower (indexing, simple operations on arrays such as absolute value, multiplication, sum, etc.). Let's look at the following example, which is somewhat striking, involving the function digitize (which I plan to use for synchronizing timestamps):
import numpy as np
import time
scale=np.arange(1,1e+6+1)
y=np.arange(1,1e+6+1,10)
t1=time.time()
ind=np.digitize(scale,y)
t2=time.time()
print 'Time passed is %2.2f seconds' %(t2-t1)
The result is:
Time passed is 55.91 seconds
Let's now try the same example in Matlab, using the equivalent function histc:
scale=[1:1e+6];
y=[1:10:1e+6];
tic
[N,bin]=histc(scale,y);
t=toc;
display(['Time passed is ',num2str(t), ' seconds'])
The result is:
Time passed is 0.10237 seconds
That's 560 times faster!
As I'm learning to extend Python with C++, I implemented my own version of digitize (using boost libraries for the extension):
import analysis # my C++ module implementing digitize
t1=time.time()
ind2=analysis.digitize(scale,y)
t2=time.time()
print 'Time passed is %2.2f seconds' %(t2-t1)
np.all(ind==ind2) #ok
The result is:
Time passed is 0.02 seconds
There is a bit of cheating, as my version of digitize assumes the inputs are all monotonic; this might explain why it is even faster than Matlab. However, sorting an array of size 1e+6 takes 0.16 seconds (with numpy.sort), which therefore makes my function slower overall (by a factor of approximately 1.6) than the Matlab function histc.
So the questions are:
Why is numpy.digitize so slow? Is this function not supposed to be written in compiled and optimized code?
Why is my own version of digitize much faster than numpy.digitize, but still slower than Matlab (I am quite confident I use the fastest algorithm possible, given that I assume inputs are already sorted)?
I am using Fedora 16 and I recently installed the ATLAS and LAPACK libraries (but there has been no change in performance). Should I perhaps rebuild numpy? I am not sure if my installation of numpy uses the appropriate libraries to gain maximum speed; perhaps Matlab is using better libraries.
Update
Based on the answers so far, I would like to stress that the Matlab function histc is not equivalent to numpy.histogram if someone (like me in this case) does not care about the histogram. I need the second output of histc, which is a mapping from input values to the index of the provided input bins. Such an output is provided by the numpy functions digitize and searchsorted. As one of the answers says, searchsorted is much faster than digitize. However, searchsorted is still slower than Matlab by a factor of 2:
t1=time.time()
ind3=np.searchsorted(y,scale,"right")
t2=time.time()
print 'Time passed is %2.2f seconds' %(t2-t1)
np.all(ind==ind3) #ok
The result is
Time passed is 0.21 seconds
So the questions are now:
What is the sense of having numpy.digitize if there is an equivalent function numpy.searchsorted which is 280 times faster?
Why is the Matlab function histc (which also provides the output of numpy.searchsorted) 2 times faster than numpy.searchsorted?
First, let's look at why numpy.digitize is slow. If your bins are found to be monotonic, then one of these functions is called depending on whether the bins are nondecreasing or nonincreasing (the code for this is found in numpy/lib/src/_compiled_base.c in the numpy git repo):
static npy_intp
incr_slot_(double x, double *bins, npy_intp lbins)
{
    npy_intp i;

    for ( i = 0; i < lbins; i ++ ) {
        if ( x < bins [i] ) {
            return i;
        }
    }
    return lbins;
}

static npy_intp
decr_slot_(double x, double * bins, npy_intp lbins)
{
    npy_intp i;

    for ( i = lbins - 1; i >= 0; i -- ) {
        if (x < bins [i]) {
            return i + 1;
        }
    }
    return 0;
}
As you can see, it is doing a linear search. Linear search is much, much slower than binary search so there is your answer as to why it is slow. I will open a ticket for this on the numpy tracker.
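(To get a rough feel for the gap in pure Python, here is a small sketch comparing a linear scan, mirroring incr_slot_, against bisect on the question's bins; the exact numbers are machine-dependent and only illustrative.)

import bisect
import timeit
import numpy as np

scale = np.arange(1, 1e6 + 1)
y = np.arange(1, 1e6 + 1, 10)
bins = y.tolist()
sample = scale[::10000].tolist()  # small subsample so the linear version finishes quickly

def linear_slot(x, bins):
    # same idea as incr_slot_: scan until the first bin greater than x
    for i, b in enumerate(bins):
        if x < b:
            return i
    return len(bins)

print(timeit.timeit(lambda: [linear_slot(x, bins) for x in sample], number=1))
print(timeit.timeit(lambda: [bisect.bisect_right(bins, x) for x in sample], number=1))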
Second, I think your C++ code is faster than Matlab because it assumes that the bins are monotonically nondecreasing, whereas Matlab cannot make that assumption.
I can't answer why numpy.digitize() is so slow, but I could confirm your timings on my machine.
The function numpy.searchsorted() does basically the same thing as numpy.digitize(), but efficiently.
ind = np.searchsorted(y, scale, "right")
takes about 0.15 seconds on my machine and gives exactly the same result as your code.
Note that your Matlab code does something different from both of those functions -- it is the equivalent of numpy.histogram().
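(For illustration, here is a quick check of what each call returns for the question's data; only the shapes are printed, since the arrays are large.)

import numpy as np

scale = np.arange(1, 1e6 + 1)
y = np.arange(1, 1e6 + 1, 10)

counts, edges = np.histogram(scale, y)    # bin counts, like the first output of histc
ind = np.searchsorted(y, scale, "right")  # per-element bin index, like the second output of histc
print(counts.shape, ind.shape)            # (99999,) and (1000000,)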
Before the question can be answered, several subquestions need to be addressed:
1) To get more reliable results, you should run several iterations of the tests and average their results. This eliminates startup effects, which have nothing to do with the algorithm. Also, try to use larger data for the same purpose. (A small timeit sketch follows this answer.)
2) Use the same algorithms across the frameworks. This has already been addressed in other answers here.
3) Make sure the algorithms are really similar enough. How do they utilize system resources? How do they iterate over memory? If (just an example) a Matlab algorithm uses repmat and the numpy one does not, the comparison is not fair.
4) How does the corresponding framework parallelize? This possibly depends on your individual machine/processor configuration. Matlab parallelizes some (but by far not all) builtin functions. I don't know about numpy/CPython.
5) Use a memory profiler to find out how both implementations behave from that performance point of view.
Afterwards (this is only a guess) we will probably find out that numpy often behaves slower than Matlab. Many questions here at SO come to the same conclusion. One explanation could be that Matlab has an easier job optimizing array access, because it does not need to take into account a whole collection of general-purpose objects (like CPython). The requirements on mathematical arrays are much lower than those on general arrays. numpy, on the other hand, is built on CPython, which must serve the full Python library, not only numpy. However, according to this comparison test (among many others), Matlab is still pretty slow ...
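As an example of point 1, a minimal sketch of how one might repeat and average the question's digitize benchmark with timeit (numbers are machine-dependent; the lambda wrapper is just for convenience):

import timeit
import numpy as np

scale = np.arange(1, 1e6 + 1)
y = np.arange(1, 1e6 + 1, 10)

# run the measurement five times and report the best and the mean
times = timeit.repeat(lambda: np.digitize(scale, y), number=1, repeat=5)
print("best %.2fs, mean %.2fs" % (min(times), sum(times) / len(times)))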
I don't think you are comparing the same functions in numpy and matlab. The equivalent to histc is np.histogram as far as I can tell from looking at the documentation. I don't have matlab to do a comparison, but when I do the following on my machine:
In [7]: import numpy as np
In [8]: scale=np.arange(1,1e+6+1)
In [9]: y=np.arange(1,1e+6+1,10)
In [10]: %timeit np.histogram(scale,y)
10 loops, best of 3: 135 ms per loop
I get a number that is approximately equivalent to what you get for histc.