Implementing Python multithreading into the calculation for a progress bar - Python

I am trying to create a progress bar similar to tqdm's. Everything works just fine, but I noticed that the calculation for every step of the progress bar (for big iterables, len > 50) takes a lot of time. This is my code:
def progressbar(iterable):
    def new(index):
        # ... print the progress bar
    for i in range(len(iterable)):
        new(i)
        yield iterable[i]
The problem is that, while on small iterables the time that new() takes to execute is negligible, on larger iterables it becomes a problem (which does not occur with the tqdm library). For example, the following code takes a few seconds to execute. It should be instant!
iterator = progressbar(range(1000))
for i in iterator: pass
Can you tell me a way to remedy this? Maybe by implementing multithreading?

It's not clear what the issue is (you are not showing all of your calculations), but I believe your approach can be improved in the way your progress bar handles the iterable it is passed:
First, you are assuming that the iterable is indexable, which may not always be the case.
If it is a generator, then the length cannot be determined with the len function, and converting the generator to a list just to get its length is not necessarily efficient and would probably defeat the purpose of having a progress bar, as in the example below. Your interface should therefore allow the user to pass an optional total parameter (as tqdm does) to explicitly specify the length of the iterable.
You can do some upfront calculations outside of function new so that new can quickly calculate, based on the value of its index argument, how wide the bar should be.
I would suggest the following changes:
def progressbar(iterable, total=None):
    def new(index):
        # ... print the progress bar
        from math import floor
        nonlocal division, width
        n_division = floor(index / division + .5)
        remainder = width - n_division
        print('|', '.' * n_division, ' ' * remainder, '|', sep='', end='\r')

    if total is None:
        # we must convert to a list to determine its length
        iterable = list(iterable)
        total = len(iterable)
    it = iter(iterable)
    width = 60  # width of the progress bar
    division = total / 60  # each division represents this many completions
    try:
        for i in range(total):
            # ensure the next value exists before printing it:
            yield next(it)
            new(i)
    except StopIteration:
        pass
    print()

def fun():
    import time
    for i in range(1000):
        time.sleep(.03)
        yield i

iterator = progressbar(fun(), total=1000)
values = [i for i in iterator]
print(values[0], values[-1])
Multithreading
Incorporating multithreading as a way of speeding up processing is problematic. The following is a (naive) attempt to do so that fails: although multithreading is being used to get the values from the generator function fun, the generator function is still producing values only once every .03 seconds. It should also be clear that if the iterable is, for example, a simple list, then multithreading is not going to iterate the list any more quickly than a single thread would:
from multiprocessing.pool import ThreadPool

def progressbar(iterable, total=None):
    def new(index):
        # ... print the progress bar
        from math import floor
        nonlocal division, width
        n_division = floor(index / division + .5)
        remainder = width - n_division
        print('|', '.' * n_division, ' ' * remainder, '|', sep='', end='\r')

    if total is None:
        # we must convert to a list to determine its length
        iterable = list(iterable)
        total = len(iterable)
    it = iter(iterable)
    width = 60  # width of the progress bar
    division = total / 60  # each division represents this many completions
    with ThreadPool(20) as pool:
        for i, result in enumerate(pool.imap(lambda x: x, iterable)):
            yield result
            new(i)
    print()

def fun():
    import time
    for i in range(1000):
        time.sleep(.03)
        yield i

iterator = progressbar(fun(), total=1000)
values = [i for i in iterator]
print(values[0], values[-1])
Processing would have been sped up if the generator function itself had used multithreading. But, of course, one has no control over how the iterable is being created:
from multiprocessing.pool import ThreadPool

def progressbar(iterable, total=None):
    def new(index):
        # ... print the progress bar
        from math import floor
        nonlocal division, width
        n_division = floor(index / division + .5)
        remainder = width - n_division
        print('|', '.' * n_division, ' ' * remainder, '|', sep='', end='\r')

    if total is None:
        # we must convert to a list to determine its length
        iterable = list(iterable)
        total = len(iterable)
    it = iter(iterable)
    width = 60  # width of the progress bar
    division = total / 60  # each division represents this many completions
    try:
        for i in range(total):
            # ensure the next value exists before printing it:
            yield next(it)
            new(i)
    except StopIteration:
        pass
    print()

def fun():
    import time

    def fun2(i):
        time.sleep(.03)
        return i

    with ThreadPool(20) as pool:
        for i in pool.imap(fun2, range(1000)):
            yield i

iterator = progressbar(fun(), total=1000)
values = [i for i in iterator]
print(values[0], values[-1])

Related

Python: How can I send an iterator to two different consumers without loading the entire thing into memory?

I have an iterator that is consumed by two functions (mean_summarizer and std_summarizer in example below). I want both functions to process the iterator, WITHOUT ever having to load the entire iterator into memory at once.
Below is a minimal example (also in Colab) that provides the correct result, EXCEPT that it involves loading the entire input into memory at once. No need to understand the fancy code inside mean_summarizer, std_summarizer, and last - it's mainly like that for brevity.
Question is: What is the cleanest way to re-implement summarize_input_stream without changing the function signature (just the inside), such that its memory usage does not scale with length of the input stream?
I have a feeling coroutines are involved, but I don't know how to use them.
import numpy as np
from typing import Iterable, Mapping, Callable, Any

def summarize_input_stream(  # Run the input stream through multiple summarizers and collect results
        input_stream: Iterable[float],
        summarizers: Mapping[str, Callable[[Iterable[float]], float]]
) -> Mapping[str, float]:
    inputs = list(input_stream)  # PROBLEM IS HERE <-- We load entire stream into memory at once
    return {name: summarizer(inputs) for name, summarizer in summarizers.items()}

def last(iterable: Iterable[Any]) -> Any:  # Just returns last element of iterable
    return max(enumerate(iterable))[1]

def mean_summarizer(stream: Iterable[float]) -> float:  # Just computes mean online and returns final value
    return last(avg for avg in [0] for i, x in enumerate(stream) for avg in [avg*i/(i+1) + x/(i+1)])

def std_summarizer(stream: Iterable[float]) -> float:  # Just computes standard deviation online and returns final value
    return last(cumsum_of_sq/(i+1) - (cumsum/(i+1))**2 for cumsum_of_sq, cumsum in [(0, 0)] for i, x in enumerate(stream) for cumsum_of_sq, cumsum in [(cumsum_of_sq+x**2, cumsum+x)])**.5

summary_stats = summarize_input_stream(
    input_stream=(np.random.randn()*2+3 for _ in range(1000)),
    summarizers={'mean': mean_summarizer, 'std': std_summarizer}
)
print(summary_stats)
# e.g. {'mean': 3.020903422847062, 'std': 1.943724669289156}
I found a solution that does not involve changing the signature of summarize_input_stream. It launches one thread per summarizer and feeds each one incrementally via a separate blocking queue (link to Colab).
import numpy as np
from typing import Iterable, Mapping, Callable, Any
from threading import Thread
from queue import Queue
from functools import partial

def summarize_input_stream(  # Run the input stream through multiple summarizers and collect results
        input_stream: Iterable[float],
        summarizers: Mapping[str, Callable[[Iterable[float]], float]]
) -> Mapping[str, float]:
    POISON_PILL = object()

    def run_summarizer(summarizer: Callable[[Iterable[float]], float], queue: Queue) -> float:
        result = summarizer(iter(queue.get, POISON_PILL))  # Waits until the food is ready to eat
        queue.put(result)  # Use the queue the other way around to return the result

    # Note: we could probably be more time-efficient if we increased maxsize, which should cause
    # less thread switching at the cost of more memory usage
    queues = [Queue(maxsize=1) for _ in summarizers]
    threads = [Thread(target=partial(run_summarizer, summarizer, queue)) for summarizer, queue in zip(summarizers.values(), queues)]
    for t in threads:
        t.start()
    for inp in input_stream:
        for queue in queues:
            queue.put(inp)  # Waits until the summarizer is hungry to feed it
    for queue in queues:
        queue.put(POISON_PILL)  # Stop the iteration
    for t in threads:
        t.join()
    results = [queue.get() for queue in queues]
    return {name: result for name, result in zip(summarizers, results)}

def last(iterable: Iterable[Any]) -> Any:  # Just returns last element of iterable
    return max(enumerate(iterable))[1]

def mean_summarizer(stream: Iterable[float]) -> float:  # Just computes mean online and returns final value
    return last(avg for avg in [0] for i, x in enumerate(stream) for avg in [avg * i / (i + 1) + x / (i + 1)])

def std_summarizer(stream: Iterable[float]) -> float:  # Just computes standard deviation online and returns final value
    return last(cumsum_of_sq / (i + 1) - (cumsum / (i + 1)) ** 2 for cumsum_of_sq, cumsum in [(0, 0)]
                for i, x in enumerate(stream) for cumsum_of_sq, cumsum in
                [(cumsum_of_sq + x ** 2, cumsum + x)]) ** .5

summary_stats = summarize_input_stream(
    input_stream=(np.random.randn() * 2 + 3 for _ in range(1000)),
    summarizers={'mean': mean_summarizer, 'std': std_summarizer}
)
print(summary_stats)
# e.g. {'mean': 3.020903422847062, 'std': 1.943724669289156}
You can't do this. A generalized iterator can only be processed once, and to make it possible to process it twice, you need to store it in some way, either by listifying it as you're doing, or using itertools.tee (which, if one of the tee-d iterators is consumed completely before the other pulls any items, is morally equivalent; it has to store all of the data internally).
The only way to make this work is if you use a single summarizer that processes the input once and computes all relevant summaries at the same time.
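For illustration, a minimal single-pass summarizer along those lines could look like the sketch below (not part of either answer; the name combined_summarizer is made up). It folds the same online mean and population standard deviation calculations into one loop over the stream, so nothing has to be buffered:
import numpy as np
from typing import Iterable, Mapping

def combined_summarizer(stream: Iterable[float]) -> Mapping[str, float]:
    # Accumulate count, sum, and sum of squares in a single pass (assumes a
    # non-empty stream), then derive mean and population std at the end.
    n = 0
    total = 0.0
    total_sq = 0.0
    for x in stream:
        n += 1
        total += x
        total_sq += x * x
    mean = total / n
    std = (total_sq / n - mean ** 2) ** 0.5
    return {'mean': mean, 'std': std}

print(combined_summarizer(np.random.randn() * 2 + 3 for _ in range(1000)))
# e.g. {'mean': 3.02, 'std': 1.94}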
As @ShadowRanger says, it seems (unless proven otherwise) that there is no solution to the question as stated that does not involve multithreading.
However, with a minor change to the signature of summarize_input_stream (which admittedly violates the rules which I myself wrote), we can get the result without paying the (memory) price.
The trick (which I'll call "genfunctrification" unless it has a pre-existing name) is:
We turn our summarizers from functions of type: Callable[[Iterable[float]], float] ...
... into generator functions of type Callable[[Iterable[float]], Iterable[Callable[[], float]]]
Note that we could have just made them return the results directly (Callable[[Iterable[float]], Iterable[float]]), but this would involve wastefully recomputing things like (cumsum_of_sq/(i+1) - (cumsum/(i+1))**2)**.5 on each iteration, when we only actually need it on the last, so instead we make our iterators yield Callables, which can compute the result only when needed (after the last iteration).
The modified code (and Colab link).
import numpy as np
from typing import Iterable, Mapping, Callable, Any, Sequence
import itertools

def summarize_input_stream(  # Run the input stream through multiple summarizers and collect results
        input_stream: Iterable[float],
        summarizers: Mapping[str, Callable[[Iterable[float]], Iterable[Callable[[], float]]]]
) -> Mapping[str, float]:
    input_streams_teed = itertools.tee(input_stream, len(summarizers))
    result_getter_streams: Sequence[Iterable[Callable[[], float]]] = [summarizer(stream_copy) for summarizer, stream_copy in zip(summarizers.values(), input_streams_teed)]
    final_results = last([f() for f in func_tup] for func_tup in zip(*result_getter_streams))
    return {name: r for name, r in zip(summarizers, final_results)}

def last(iterable: Iterable[Any]) -> Any:  # Just returns last element of iterable
    return max(enumerate(iterable))[1]

def mean_summarizer(stream: Iterable[float]) -> Iterable[Callable[[], float]]:  # Just computes mean online and returns final value
    return ((lambda: avg) for avg in [0] for i, x in enumerate(stream) for avg in [avg*i/(i+1) + x/(i+1)])

def std_summarizer(stream: Iterable[float]) -> Iterable[Callable[[], float]]:  # Just computes standard deviation online and returns final value
    return ((lambda: (cumsum_of_sq/(i+1) - (cumsum/(i+1))**2)**.5) for cumsum_of_sq, cumsum in [(0, 0)] for i, x in enumerate(stream) for cumsum_of_sq, cumsum in [(cumsum_of_sq+x**2, cumsum+x)])

summary_stats = summarize_input_stream(
    input_stream=(np.random.randn()*2+3 for _ in range(1000)),
    summarizers={'mean': mean_summarizer, 'std': std_summarizer}
)
print(summary_stats)
# e.g. {'mean': 3.020903422847062, 'std': 1.943724669289156}
Note: since this particular implementation depends on dicts maintaining their order, it will only work reliably in Python 3.7+ - though it could easily be backported if needed.

Can't use Python multiprocessing with large amount of calculations

I have to speed up my current code to do around 10^6 operations in a feasible time. Before I used multiprocessing in the actual code, I tried to do it in a mock case. Following is my attempt:
import time
import concurrent.futures
import numpy as np

def chunkIt(seq, num):
    avg = len(seq) / float(num)
    out = []
    last = 0.0
    while last < len(seq):
        out.append(seq[int(last):int(last + avg)])
        last += avg
    return out

def do_something(List):
    # in the real case this function takes about 0.5 seconds to finish for each iteration
    turn = []
    for e in List:
        turn.append((e[0]**2, e[1]**2, e[2]**2))
    return turn

t1 = time.time()
List = []
# in the real case these 20's can go as high as 150
for i in range(1, 20-2):
    for k in range(i+1, 20-1):
        for j in range(k+1, 20):
            List.append((i, k, j))
t3 = time.time()
test = []
List = chunkIt(List, 3)
if __name__ == '__main__':
    with concurrent.futures.ProcessPoolExecutor() as executor:
        results = executor.map(do_something, List)
        for result in results:
            test.append(result)
test = np.array(test)
t2 = time.time()
T = t2 - t1
T2 = t3 - t1
However, when I increase the size of my "List", my computer tries to use all of my RAM and CPU and freezes. I even cut my "List" into 3 pieces, so it will only use 3 of my cores. However, nothing changed. Also, when I tried to use it on a smaller data set, I noticed the code ran much slower than when it ran on a single core.
I am still very new to multiprocessing in Python. Am I doing something wrong? I would appreciate it if you could help me.
To reduce memory usage, I suggest you instead use the multiprocessing module, and specifically its imap method (or the imap_unordered method). Unlike the map method of either multiprocessing.Pool or concurrent.futures.ProcessPoolExecutor, the iterable argument is processed lazily. What this means is that if you use a generator function or generator expression for the iterable argument, you do not need to create the complete list of arguments in memory; as a processor in the pool becomes free and ready to execute more tasks, the generator will be called upon to generate the next argument for the imap call.
By default a chunksize value of 1 is used, which can be inefficient for a large iterable. When using map with the default value of None for the chunksize argument, the pool will look at the length of the iterable (first converting it to a list if necessary) and then compute what it deems to be an efficient chunksize based on that length and the size of the pool. When using imap or imap_unordered, converting the iterable to a list would defeat the whole purpose of using that method. But if you know (more or less) what that size would be if the iterable were converted to a list, then there is no reason not to apply the same chunksize calculation that the map method would have, and that is what is done below.
The following benchmarks perform the same processing, first as a single process and then with multiprocessing using imap, where each invocation of do_something takes approximately .5 seconds on my desktop. do_something has now been modified to process just a single i, k, j tuple, as there is no longer any need to break anything up into smaller lists:
from multiprocessing import Pool, cpu_count
import time

def half_second():
    HALF_SECOND_ITERATIONS = 10_000_000
    sum = 0
    for _ in range(HALF_SECOND_ITERATIONS):
        sum += 1
    return sum

def do_something(tpl):
    # in the real case this function takes about 0.5 seconds to finish for each iteration
    half_second()  # on my desktop
    return tpl[0]**2, tpl[1]**2, tpl[2]**2

"""
def generate_tpls():
    for i in range(1, 20-2):
        for k in range(i+1, 20-1):
            for j in range(k+1, 20):
                yield i, k, j
"""

# Use a smaller number of tuples so we finish in a reasonable amount of time:
def generate_tpls():
    # 64 tuples:
    for i in range(1, 5):
        for k in range(1, 5):
            for j in range(1, 5):
                yield i, k, j

def benchmark1():
    """ single processing """
    t = time.time()
    for tpl in generate_tpls():
        result = do_something(tpl)
    print('benchmark1 time:', time.time() - t)

def compute_chunksize(iterable_size, pool_size):
    """ This is more-or-less the function used by the Pool.map method """
    chunksize, remainder = divmod(iterable_size, 4 * pool_size)
    if remainder:
        chunksize += 1
    return chunksize

def benchmark2():
    """ multiprocessing """
    t = time.time()
    pool_size = cpu_count()  # 8 logical cores (4 physical cores)
    N_TUPLES = 64  # number of tuples that will be generated
    pool = Pool(pool_size)
    chunksize = compute_chunksize(N_TUPLES, pool_size)
    for result in pool.imap(do_something, generate_tpls(), chunksize=chunksize):
        pass
    print('benchmark2 time:', time.time() - t)

if __name__ == '__main__':
    benchmark1()
    benchmark2()
Prints:
benchmark1 time: 32.261038303375244
benchmark2 time: 8.174998044967651
The nested for loops creating the array before the if __name__ == '__main__' guard appear to be the problem. Moving that part underneath the guard clears up the memory problems.
import time
import concurrent.futures
import numpy as np

def chunkIt(seq, num):
    avg = len(seq) / float(num)
    out = []
    last = 0.0
    while last < len(seq):
        out.append(seq[int(last):int(last + avg)])
        last += avg
    return out

def do_something(List):
    # in the real case this function takes about 0.5 seconds to finish for each iteration
    turn = []
    for e in List:
        turn.append((e[0]**2, e[1]**2, e[2]**2))
    return turn

if __name__ == '__main__':
    t1 = time.time()
    List = []
    # in the real case these 20's can go as high as 150
    for i in range(1, 20-2):
        for k in range(i+1, 20-1):
            for j in range(k+1, 20):
                List.append((i, k, j))
    t3 = time.time()
    test = []
    List = chunkIt(List, 3)
    with concurrent.futures.ProcessPoolExecutor() as executor:
        results = executor.map(do_something, List)
        for result in results:
            test.append(result)
    test = np.array(test)
    t2 = time.time()
    T = t2 - t1
    T2 = t3 - t1

Using Multithreading or Multiprocessing to improve computational speed

I am iterating through a very large mesh. Since the iterations are independent, I would like to split my mesh into smaller pieces and run them all at the same time in order to lower the computation time. Below is a sample code. For example, if mesh is of length 50000, I would like to divide the mesh into 100 pieces and run fnc on each mesh/100 at the same time.
import numpy as np

def fnc(data, mesh):
    d = []
    for i, dummy_val in enumerate(mesh):
        d.append(np.sqrt((data[:, 0]-mesh[i, 0])**2.0 + (data[:, 1]-mesh[i, 1])**2.0))
    return d

interpolate = fnc(mydata, mymesh)
I would like to know how to achieve this using multiprocessing or multithreading, as I'm unable to reconcile it with the execution of my loop.
This will give you the general idea. I couldn't test this since I do not have your data. The default constructor for ProcessPoolExecutor will use the number of processors on your computer. But since that determines the level of multiprocessing you can have, it will probably be more efficient to set the N_CHUNKS parameter to the number of simultaneous processes you can support. That is, if you have a processing pool size of 6, then it is better to just divide your array into 6 large chunks and have 6 processes do the work, rather than breaking it up into smaller pieces where processes will have to wait to run. So you should probably specify a max_workers value for the ProcessPoolExecutor that is not greater than the number of processors you have, and set N_CHUNKS to the same value.
from concurrent.futures import ProcessPoolExecutor, as_completed
import numpy as np

def fnc(data, mesh):
    d = []
    for i, dummy_val in enumerate(mesh):
        d.append(np.sqrt((data[:, 0]-mesh[i, 0])**2.0 + (data[:, 1]-mesh[i, 1])**2.0))
    return d

def main(data, mesh):
    #N_CHUNKS = 100
    N_CHUNKS = 6  # assuming you have 6 processors; see the max_workers parameter
    n = len(mesh)
    assert n != 0
    if n <= N_CHUNKS:
        N_CHUNKS = 1
        chunk_size = n
        last_chunk_size = n
    else:
        chunk_size = n // N_CHUNKS
        last_chunk_size = n - chunk_size * (N_CHUNKS - 1)
    with ProcessPoolExecutor(max_workers=N_CHUNKS) as executor:  # assuming you have 6 processors
        the_futures = {}
        start = 0
        for i in range(N_CHUNKS - 1):
            future = executor.submit(fnc, data, mesh[start:start+chunk_size])  # pass a slice
            the_futures[future] = (start, start+chunk_size)  # map the future to its request parameters
            start += chunk_size
        if last_chunk_size:
            future = executor.submit(fnc, data, mesh[start:n])  # pass the final slice
            the_futures[future] = (start, n)
        for future in as_completed(the_futures):
            (start, end) = the_futures[future]  # the original range
            d = future.result()  # do something with the results

if __name__ == '__main__':
    # the call to main must be done in a block governed by if __name__ == '__main__' or you will get
    # into a recursive loop where each subprocess calls main again
    main(data, mesh)

Optimizing a parallel implementation of a list comprehension

I have a dataframe, where each row contains a list of integers. I also have a reference-list that I use to check what integers in the dataframe appear in this list.
I have made two implementations of this, one single-threaded and one multi-threaded. The single-threaded implementation is quite fast (it takes roughly 0.1s on my machine), whereas the multithreaded one takes roughly 5s.
My question is: Is this due to my implementation being poor, or is this merely a case where the overhead due to multithreading is so large that it doesn't make sense to use multiple threads?
The example is below:
import time
from random import randint
import pandas as pd
import multiprocessing
from functools import partial

class A:
    def __init__(self, N):
        self.ls = [[randint(0, 99) for i in range(20)] for j in range(N)]
        self.ls = pd.DataFrame({'col': self.ls})
        self.lst_nums = [randint(0, 99) for i in range(999)]

    @classmethod
    def helper(cls, lst_nums, col):
        return any([s in lst_nums for s in col])

    def get_idx_method1(self):
        method1 = self.ls['col'].apply(lambda nums: any(x in self.lst_nums for x in nums))
        return method1

    def get_idx_method2(self):
        pool = multiprocessing.Pool(processes=1)
        method2 = pool.map(partial(A.helper, self.lst_nums), self.ls['col'])
        pool.close()
        return method2

if __name__ == "__main__":
    a = A(50000)
    start = time.time()
    m1 = a.get_idx_method1()
    end = time.time()
    print(end - start)
    start = time.time()
    m2 = a.get_idx_method2()
    end = time.time()
    print(end - start)
First of all, multiprocessing is useful when the cost of communicating data between the main process and the workers is small compared to the time cost of the function.
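As a rough illustration of that trade-off (a hypothetical sketch, not part of the original code; the function names and sizes are made up), a trivial per-item function tends to run slower under a process pool because the inter-process communication dominates, while a CPU-heavy function benefits:
import time
from multiprocessing import Pool

def cheap(x):
    return x * x  # almost no work per item, so IPC overhead dominates

def expensive(x):
    total = 0
    for _ in range(10**6):  # simulate a CPU-heavy task (tens of ms per item)
        total += 1
    return x * x

if __name__ == '__main__':
    cases = [(cheap, list(range(200_000))), (expensive, list(range(100)))]
    with Pool() as pool:
        for func, data in cases:
            t = time.time()
            list(map(func, data))
            single = time.time() - t
            t = time.time()
            pool.map(func, data)
            parallel = time.time() - t
            print(f'{func.__name__}: single={single:.3f}s, pool={parallel:.3f}s')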
Another thing is that you made an error in your code:
def helper(cls, lst_nums, col):
    return any([s in lst_nums for s in col])
vs.
any(x in self.lst_nums for x in nums)
You have that list comprehension in brackets [] in the helper method, which makes any() wait for the entire list to be built, while the second any() just stops at the first True value.
In conclusion, if you remove the list brackets from the helper method and maybe increase the randint range for the lst_nums initializer, you will notice an increase in speed when using multiple processes:
self.lst_nums = [randint(0, 10000) for i in range(999)]
and
def helper(cls, lst_nums, col):
    return any(s in lst_nums for s in col)
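To see the difference directly, here is a small illustrative snippet (not part of the original answer): with a list comprehension the entire list must be built before any() can inspect it, whereas a generator expression lets any() short-circuit at the first match:
import timeit

big = range(10_000_000)

# Builds the full 10-million-element list before any() can look at it:
print(timeit.timeit(lambda: any([x == 0 for x in big]), number=1))

# Generator expression: any() short-circuits at the very first element:
print(timeit.timeit(lambda: any(x == 0 for x in big), number=1))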

Python - measure function time

I am having a problem with measuring the time of a function.
My function is a "linear search":
def linear_search(obj, item):
    for i in range(0, len(obj)):
        if obj[i] == item:
            return i
    return -1
And I made another function that measures the time 100 times and adds all the results to a list:
def measureTime(a):
nl=[]
import random
import time
for x in range(0,100): #calculating time
start = time.time()
a
end =time.time()
times=end-start
nl.append(times)
return nl
When I'm using measureTime(linear_search(list,random.choice(range(0,50)))), the function always returns [0.0].
What can cause this problem? Thanks.
You are actually passing the result of linear_search into the measureTime function; you need to pass in the function and its arguments instead, so that they can be executed inside measureTime, as in @martijnn2008's answer.
Or, better, you can consider using the timeit module to do the job for you:
from functools import partial
import timeit

def measureTime(n, f, *args):
    # return the average runtime over n runs
    # (use a for loop with number=1 to get all n individual runtimes)
    return timeit.timeit(partial(f, *args), number=n)

# running within the module
measureTime(100, linear_search, list, random.choice(range(0, 50)))

# if running interactively outside the module, use the following instead (say your module is named mymodule)
mymodule.measureTime(100, mymodule.linear_search, mymodule.list, mymodule.random.choice(range(0, 50)))
Take a look at the following example; I don't know exactly what you are trying to achieve, so I guessed it ;)
import random
import time

def measureTime(method, n, *args):
    start = time.time()
    for _ in range(n):
        method(*args)
    end = time.time()
    return (end - start) / n

def linear_search(lst, item):
    for i, o in enumerate(lst):
        if o == item:
            return i
    return -1

lst = [random.randint(0, 10**6) for _ in range(10**6)]
repetitions = 100
for _ in range(10):
    item = random.randint(0, 10**6)
    print('average runtime =', measureTime(linear_search, repetitions, lst, item) * 1000, 'ms')
