def count_occurrences(string):
    count = 0
    for text in GENERIC_TEXT_STORE:
        count += text.count(string)
    return count
GENERIC_TEXT_STORE is a list of strings. For example:
GENERIC_TEXT_STORE = ['this is good', 'this is a test', "that's not a test"]
Given a string 'text', I want to find how many times the text, e.g. 'this', occurs in GENERIC_TEXT_STORE. If my GENERIC_TEXT_STORE is huge, this is very slow. What are the ways to make this search and count much faster? For instance, if I split the big GENERIC_TEXT_STORE list into multiple smaller lists, would that be faster?
If the multiprocessing module is useful here, how to make it possible for this purpose?
First, check that your algorithm is actually doing what you want, as suggested in the comments above. The count() method checks for substring matches, so assuming you only want to count complete words, you could probably get a big improvement by refactoring your code to test for those instead. Something like this could work as your condition:
any(word == string for word in text.split())
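For example, a whole-word variant of the original function could look like this (a sketch, reusing the GENERIC_TEXT_STORE from the question):

```python
GENERIC_TEXT_STORE = ['this is good', 'this is a test', "that's not a test"]

def count_whole_words(string):
    # Count only complete-word matches, so 'this' does not
    # match inside a longer word such as 'thistle'.
    return sum(
        word == string
        for text in GENERIC_TEXT_STORE
        for word in text.split()
    )
```

With the three sample sentences above, count_whole_words('this') returns 2, whereas the substring version would also count occurrences inside longer words.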
Multiprocessing would probably help: you could split the list into smaller lists (one per core), then add up all the results when each process finishes, avoiding inter-process communication during the execution. I've found from testing that multiprocessing in Python varies quite a bit between operating systems: Windows and macOS can take quite a long time to actually spawn the processes, whereas Linux seems to do it much faster. Some people have said that setting a CPU affinity for each process using pstools is important, but I didn't find this made much difference in my case.
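A minimal sketch of that chunk-and-sum approach with multiprocessing.Pool might look like this (the worker count and chunking are illustrative):

```python
from multiprocessing import Pool

GENERIC_TEXT_STORE = ['this is good', 'this is a test', "that's not a test"] * 1000

def count_in_chunk(args):
    # Each worker counts substring occurrences in its own chunk.
    chunk, needle = args
    return sum(text.count(needle) for text in chunk)

def parallel_count(store, needle, n_workers=4):
    # One chunk per worker; the partial counts are summed at the end,
    # so there is no inter-process communication during the work itself.
    size = max(1, len(store) // n_workers)
    chunks = [store[i:i + size] for i in range(0, len(store), size)]
    with Pool(n_workers) as pool:
        return sum(pool.map(count_in_chunk, [(c, needle) for c in chunks]))

if __name__ == '__main__':
    print(parallel_count(GENERIC_TEXT_STORE, 'this'))  # prints 2000
```

Whether this beats the single-process version depends on how large the store is relative to the process start-up cost mentioned above.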
Another option would be to use Cython to compile your Python into a C program, or alternatively to rewrite the whole thing in a faster language, but as you've tagged this question Python I assume you're not so keen on that.
You can use the re module.
In [2]: GENERIC_TEXT_STORE = ['this is good', 'this is a test', 'that\'s not a test']
In [3]: def count_occurrences(string):
   ...:     count = 0
   ...:     for text in GENERIC_TEXT_STORE:
   ...:         count += text.count(string)
   ...:     return count
In [6]: import re
In [7]: def count(_str):
...: return len(re.findall(_str,''.join(GENERIC_TEXT_STORE)))
...:
In [28]: def count1(_str):
...: return ' '.join(GENERIC_TEXT_STORE).count(_str)
...:
Now use timeit to analyse the execution times. (Note that count joins the store with an empty separator, which can create spurious matches when the search string spans two adjacent entries; count1 joins with a space, which avoids that.)
When the size of GENERIC_TEXT_STORE is 3:
In [9]: timeit count('this')
1.27 µs ± 57.1 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
In [10]: timeit count_occurrences('this')
697 ns ± 25.5 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
In [33]: timeit count1('this')
385 ns ± 22.9 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
When the size of GENERIC_TEXT_STORE is 15000:
In [17]: timeit count('this')
1.07 ms ± 118 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [18]: timeit count_occurrences('this')
3.35 ms ± 279 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [37]: timeit count1('this')
275 µs ± 18.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
When the size of GENERIC_TEXT_STORE is 150000:
In [20]: timeit count('this')
5.7 ms ± 2.39 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [21]: timeit count_occurrences('this')
33 ms ± 3.26 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [40]: timeit count1('this')
3.98 ms ± 211 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
When the size of GENERIC_TEXT_STORE is 1500000 (over a million):
In [23]: timeit count('this')
50.3 ms ± 7.21 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [24]: timeit count_occurrences('this')
283 ms ± 12.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [43]: timeit count1('this')
40.7 ms ± 1.09 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In terms of runtime: count1 < count < count_occurrences.
When the size of GENERIC_TEXT_STORE is large, count and count1 are almost 4 to 5 times faster than count_occurrences.
They both end up doing the same thing, but which one is more efficient? I want to know the difference between them.
You should use df['class'].value_counts().
pd.value_counts is undocumented, and thus not guaranteed to remain accessible in the long term.
The two calls are equally fast:
s = pd.Series(np.random.choice(list('ABCD'), size=100000))
%%timeit
pd.value_counts(s)
# 8.59 ms ± 640 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%%timeit
s.value_counts()
# 8.48 ms ± 357 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
I have to do a lot of dot products in my data processing pipeline. So, I was experimenting with the following two pieces of code, where one is about 3 times faster (in terms of runtime) than the other.
Slower method (with arrays created on the fly):
In [33]: %timeit np.dot(np.arange(200000), np.arange(200000, 400000))
352 µs ± 958 ns per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Faster method (with static arrays):
In [34]: vec1_arr = np.arange(200000)
In [35]: vec2_arr = np.arange(200000, 400000)
In [36]: %timeit np.dot(vec1_arr, vec2_arr)
121 µs ± 90.3 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Why is the first method, which generates the arrays dynamically, 3x slower than the second? Is it because much of the extra time in the first method is spent allocating memory for the elements, or are other factors contributing to the degradation?
To gain a little more understanding, I also replicated the setup in pure Python. Surprisingly, there is no performance difference between doing it one way or the other, although it is slower than the numpy implementation, which is obvious and expected.
In [42]: %timeit sum(map(operator.mul, range(200000), range(200000, 400000)))
12.5 ms ± 71.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [38]: vec1 = range(200000)
In [39]: vec2 = range(200000, 400000)
In [40]: %timeit sum(map(operator.mul, vec1, vec2))
12.5 ms ± 27.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
The behaviour in the pure Python case is clear, because the range function doesn't actually create all those elements up front; it evaluates lazily (the values are generated on the fly).
Note: the pure Python implementation is only meant to convince myself that array allocation might be the factor causing the drag; it's not meant as a comparison with the NumPy implementation.
The pure Python test is not fair, because np.arange(200000) really returns an array while range(200000) only returns a lazy range object. So both pure Python variants generate their values on the fly.
import operator
%timeit sum(map(operator.mul, range(200000), range(200000, 400000)))
# 15.1 ms ± 45.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
vec1 = range(200000)
vec2 = range(200000, 400000)
%timeit sum(map(operator.mul, vec1, vec2))
# 15.2 ms ± 117 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
vec1 = list(range(200000))
vec2 = list(range(200000, 400000))
%timeit sum(map(operator.mul, vec1, vec2))
# 12.4 ms ± 716 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
And we can see the time cost of allocation:
import numpy as np
%timeit np.arange(200000), np.arange(200000, 400000)
# 632 µs ± 9.45 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit np.dot(np.arange(200000), np.arange(200000, 400000))
# 703 µs ± 5.27 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
vec1_arr = np.arange(200000)
vec2_arr = np.arange(200000, 400000)
%timeit np.dot(vec1_arr, vec2_arr)
# 77.7 µs ± 427 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
That makes sense.
The difference in speed is due to allocating the arrays in the slower case. Below is %timeit output that accounts for array allocation in both cases; the OP's timeit commands only included allocation in the slower case, not in the faster one.
%timeit np.dot(np.arange(200000), np.arange(200000, 400000))
# 524 µs ± 11.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit vec1_arr = np.arange(200000); vec2_arr = np.arange(200000, 400000); np.dot(vec1_arr, vec2_arr)
# 523 µs ± 17.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
The allocation of both arrays takes about 360 microseconds on my machine, and the np.dot operation takes 169 microseconds. The sum of those two durations is 529 microseconds, which matches the %timeit output above.
%timeit vec1_arr = np.arange(200000); vec2_arr = np.arange(200000, 400000)
# 360 µs ± 16.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
vec1_arr = np.arange(200000)
vec2_arr = np.arange(200000, 400000)
%timeit np.dot(vec1_arr, vec2_arr)
# 169 µs ± 5.24 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
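The same split can be measured programmatically with the timeit module; a sketch (the absolute numbers will differ per machine):

```python
import timeit
import numpy as np

ns = {'np': np}
# Time allocation alone, then allocation plus dot; the difference
# approximates the cost of np.dot itself.
alloc = timeit.timeit("np.arange(200000); np.arange(200000, 400000)",
                      globals=ns, number=200) / 200
total = timeit.timeit("np.dot(np.arange(200000), np.arange(200000, 400000))",
                      globals=ns, number=200) / 200
print(f"allocation: {alloc * 1e6:.0f} µs, allocation + dot: {total * 1e6:.0f} µs")
```

On any machine, total should exceed alloc by roughly the cost of the dot product alone.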
Is there a way to speed up the following two lines of code?
choice = np.argmax(cust_profit, axis=0)
taken = np.array([np.sum(choice == i) for i in range(n_pr)])
%timeit np.argmax(cust_profit, axis=0)
37.6 µs ± 222 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit np.array([np.sum(choice == i) for i in range(n_pr)])
40.2 µs ± 206 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
n_pr == 2
cust_profit.shape == (n_pr+1, 2000)
Solutions:
%timeit np.unique(choice, return_counts=True)
53.7 µs ± 190 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit np.histogram(choice, bins=np.arange(n_pr + 2))
70.5 µs ± 205 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit np.bincount(choice)
7.4 µs ± 17.2 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
These microseconds worry me, because this code sits under two layers of scipy.optimize.minimize(method='Nelder-Mead'), which in turn sit inside a doubly nested loop, so 40 µs adds up to 4 hours. And I'm thinking of wrapping it all in a genetic search.
The first line seems pretty straightforward: unless you can sort the data or something like that, you are stuck with the linear scan in np.argmax. The second line can be sped up simply by using numpy instead of vanilla Python to implement it:
v, counts = np.unique(choice, return_counts=True)
Alternatively:
counts, _ = np.histogram(choice, bins=np.arange(n_pr + 2))
A version of histogram optimized for integers also exists:
count = np.bincount(choice)
The latter two options are better if you want to guarantee a bin for every possible value of choice, regardless of whether it actually appears in the array (for np.bincount, pass minlength=n_pr + 1).
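A small sketch of the bincount variant with a guaranteed bin count (the shapes follow the question; the data here is random for illustration):

```python
import numpy as np

n_pr = 2
rng = np.random.default_rng(0)
cust_profit = rng.random((n_pr + 1, 2000))

choice = np.argmax(cust_profit, axis=0)
# minlength reserves a slot for every product,
# even one that is never chosen by any customer.
taken = np.bincount(choice, minlength=n_pr + 1)
```

taken.sum() always equals the number of customers (2000 here), and len(taken) is always n_pr + 1 regardless of which products are chosen.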
That being said, you probably shouldn't worry about something that takes microseconds.
I was comparing the performance of counting how many letters 'C' appear in a very long string, using a numpy array of characters versus the string count method.
genome is a very long string.
g1 = genome
g2 = np.array([i for i in genome])
%timeit np.sum(g2=='C')
4.43 s ± 230 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit g1.count('C')
955 ms ± 6.42 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
I expected that a numpy array would compute it faster, but I was wrong.
Can someone explain to me how the count method works and why it is faster than using a numpy array?
Thank you!
Let's explore some variations on the problem. I won't try to make as large a string as yours.
In [393]: astr = 'ABCDEF'*10000
First the string count:
In [394]: astr.count('C')
Out[394]: 10000
In [395]: timeit astr.count('C')
70.2 µs ± 115 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Now try a 1 element array with that string:
In [396]: arr = np.array(astr)
In [397]: arr.shape
Out[397]: ()
In [398]: np.char.count(arr, 'C')
Out[398]: array(10000)
In [399]: timeit np.char.count(arr, 'C')
200 µs ± 2.97 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [400]: arr.dtype
Out[400]: dtype('<U60000')
My experience with other np.char functions is that they iterate over the array elements and apply the string method, so they can't be faster than applying the string method directly. I suppose the rest of the time is some sort of numpy overhead.
Make a list from the string - one character per list element:
In [402]: alist = list(astr)
In [403]: alist.count('C')
Out[403]: 10000
In [404]: timeit alist.count('C')
955 µs ± 18.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
The list count has to loop through the elements and test against 'C' each time. Still, it is faster than sum(i=='C' for i in alist) (and variants).
Now make an array from that list - single character elements:
In [405]: arr1 = np.array(alist)
In [406]: arr1.shape
Out[406]: (60000,)
In [407]: timeit arr1=='C'
634 µs ± 12.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [408]: timeit np.sum(arr1=='C')
740 µs ± 23.3 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
The np.sum is relatively fast. It's the check against 'C' that takes the most time.
If I construct a numeric array of the same size, the count time is quite a bit faster. The equality test against a number is faster than the equivalent string test.
In [431]: arr2 = np.resize(np.array([1,2,3,4,5,6]),arr1.shape[0])
In [432]: np.sum(arr2==3)
Out[432]: 10000
In [433]: timeit np.sum(arr2==3)
155 µs ± 1.66 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
numpy does not promise to be faster for all Python operations. For the most part, when working with string elements, it depends heavily on Python's own string code.
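One way to get numpy's numeric speed for this kind of counting is to compare bytes instead of string elements; a sketch, assuming the text is pure ASCII:

```python
import numpy as np

genome = 'ABCDEF' * 10000

# View the encoded string as a uint8 array: the numeric equality test
# is much cheaper than comparing '<U1' string elements.
arr = np.frombuffer(genome.encode('ascii'), dtype=np.uint8)
count = int((arr == ord('C')).sum())
```

This gives the same result as genome.count('C') while avoiding both the per-character string array and the string comparison.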
Why would I use reset_index(drop=True), when the alternative is much faster? I am sure there is something I am missing. (Or my timings are bad somehow...)
import pandas as pd
l = pd.Series(range(int(1e7)))
%timeit l.reset_index(drop=True)
# 35.9 ms +- 1.29 ms per loop (mean +- std. dev. of 7 runs, 10 loops each)
%timeit l.index = range(int(1e7))
# 13 us +- 455 ns per loop (mean +- std. dev. of 7 runs, 100000 loops each)
The costly part of resetting the index is not creating the new index (as you showed, that is super fast) but returning a copy of the series. Compare:
%timeit l.reset_index(drop=True)
22.6 ms ± 172 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit l.index = range(int(1e7))
14.7 µs ± 348 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit l.reset_index(inplace=True, drop=True)
13.7 µs ± 121 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
You can see that the inplace operation (where no copy is returned) is roughly as fast as your method. However, it is generally discouraged to perform inplace operations.
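The copy-versus-in-place distinction can be seen on a tiny example (a sketch):

```python
import pandas as pd

s = pd.Series([10, 20, 30], index=[5, 6, 7])

t = s.reset_index(drop=True)       # returns a copy with a fresh RangeIndex
print(list(t.index))               # [0, 1, 2]
print(list(s.index))               # [5, 6, 7] -- the original is untouched

s.reset_index(drop=True, inplace=True)  # mutates s, returns None
print(list(s.index))               # [0, 1, 2]
```

For a 1e7-element series, producing that copy is what dominates the 35 ms, while re-binding the index is nearly free.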