Not sure if it is relevant, but I use Spyder (version 3.3.3) as my Python IDE, and the Python version is 3.7.3.
The following is my code:
import os
import numpy as np
import multiprocessing as mp
from itertools import product

def func1(a_tup):
    return (a_tup[0][0], *a_tup[1]), (a_tup[0][1] + a_tup[1][0]) * a_tup[1][1]

if __name__ == '__main__':
    lst1 = np.arange(4)
    lst2 = np.arange(2, 8, 3)
    lst3 = product(enumerate(lst2),
                   (item for item in product(lst1, lst1) if item[0] < item[1])
                   )
    result1 = np.zeros((len(lst2), len(lst1) - 1, len(lst1)), dtype=np.float32)
    with mp.Pool(os.cpu_count() - 1) as pool:
        mp_result1 = pool.imap(func1, lst3)  #<--this line right here
    for item in mp_result1:
        print(item)
The problem is that as soon as I run this code, it will just sit there and never finish.
I have found that as soon as I change the line mp_result1 = pool.imap(func1, lst3) to mp_result1 = list(pool.imap(func1, lst3)), the code will run perfectly. But the thing is, I am looking for a way to make the code work without converting mp_result1 to a list, because when the generator mp_result1 becomes big enough in terms of memory usage, converting it to a list object will instantly consume all of the available memory, which makes the system freeze. So it is definitely not an ideal situation to be in.
So here is my question:
How to iterate over the result of pool.imap without converting it to a list object first?
Thank you in advance
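(A minimal sketch of the usual pattern, assuming the for loop above really ended up outside the with block when the original indentation was lost: exiting with mp.Pool(...) terminates the pool, so the lazy imap iterator has to be consumed while the pool is still open, and no list is needed.)
import multiprocessing as mp

def square(x):  # hypothetical stand-in for func1
    return x * x

if __name__ == '__main__':
    with mp.Pool(2) as pool:
        for item in pool.imap(square, range(10)):  # iterate inside the with block
            print(item)  # results stream in one by one; nothing is materialised as a list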
I am attempting to use timeit to keep track of how long a sorting algorithm takes to finish. However, I can't seem to find an answer online for how exactly to run timeit with functions that were originally written in another module. I've tried various setups and string inputs but find myself lost on this.
I tried using "t = timeit.Timer('sort.bubble(temp_array)')", but printing the Timer object only gives me its memory address, and it cannot be converted to an integer...
In this case, I am calling bubble sort from another module.
#This section is on a timetests.py file
import random
import sort
import timeit

test_array1 = [random.randint(0, 500) for i in range(10)]
arrays_to_sort = [test_array1]
bubble_times = []

for a in range(len(arrays_to_sort)):
    temp_array = arrays_to_sort[a]
    t = timeit(sort.bubble(temp_array))  #code is definitely not correct here
    bubble_times.append(t)
#This file is on sort.py
def bubble(list):
    for current_pass in range(len(list) - 1, 0, -1):
        for element in range(current_pass):
            #Swap if the current element is larger than the next one
            if list[element] > list[element + 1]:
                temp = list[element]
                list[element] = list[element + 1]
                list[element + 1] = temp
    return list
You need the function bubble and the variable temp_array to be available in the local environment.
Try:
t = timeit.timeit("bubble(temp_array)", setup = "from sort import bubble; from __main__ import temp_array"),
bubble_times.append(t)
Explanation:
So we use bubble() since that's the way to access an imported function
you would use sort.bubble() if you had imported the sort module rather than just a function from the module
We have to also bring in temp_array (assuming we are running timetests.py as the main module)
Using a lambda function
Another option is to use a lambda to create a zero-argument function which we pass to timeit. Note: check out how to pass parameters of a function when using timeit.Timer()
t = timeit.timeit(lambda: sort.bubble(temp_array))
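(A minimal sketch of how this could slot into the original loop; the number=1 argument is an assumption so each measurement sorts the array exactly once, and sorting a copy keeps the timed call from receiving an already-sorted list.)
import random
import timeit
import sort

arrays_to_sort = [[random.randint(0, 500) for i in range(10)]]
bubble_times = []
for temp_array in arrays_to_sort:
    # time a single run of the sort on a fresh copy of the data
    t = timeit.timeit(lambda: sort.bubble(list(temp_array)), number=1)
    bubble_times.append(t)
print(bubble_times)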
The following for loop is part of an iterative simulation process and is the main bottleneck regarding computational time:
import numpy as np

class Simulation(object):
    def __init__(self, n_int):
        self.n_int = n_int

    def loop(self):
        for itr in range(self.n_int):
            #some preceding code which updates rows_list and diff with every itr
            cols_red_list = []
            rows_list = list(range(2500))  #row idx for diff where negative element is known to appear
            diff = np.random.uniform(-1.323, 3.780, (2500, 300))  #np.random.uniform is just used as toy example
            for row in rows_list:
                col = next(idx for idx, val in enumerate(diff[row, :]) if val < 0)
                cols_red_list.append(col)
            # some subsequent code which uses the cols_red_list data

sim1 = Simulation(n_int=10)
sim1.loop()
Hence, I tried to parallelize it by using the multiprocessing package, in the hope of reducing computation time:
import numpy as np
from multiprocessing import Pool, cpu_count
from functools import partial

def crossings(row, diff):
    return next(idx for idx, val in enumerate(diff[row, :]) if val < 0)

class Simulation(object):
    def __init__(self, n_int):
        self.n_int = n_int

    def loop(self):
        for itr in range(self.n_int):
            #some preceding code which updates rows_list and diff with every itr
            rows_list = list(range(2500))
            diff = np.random.uniform(-1, 1, (2500, 300))
            if __name__ == '__main__':
                num_of_workers = cpu_count()
                print('number of CPUs : ', num_of_workers)
                pool = Pool(num_of_workers)
                cols_red_list = pool.map(partial(crossings, diff=diff), rows_list)
                pool.close()
                print(len(cols_red_list))
            # some subsequent code which uses the cols_red_list data

sim1 = Simulation(n_int=10)
sim1.loop()
Unfortunately, the parallelization turns out to be much slower compared to the sequential piece of code.
Hence my question: Did I use the multiprocessing package properly in that particular example? Are there alternative ways to parallelize the above-mentioned for loop?
Disclaimer: As you're trying to reduce the runtime of your code through parallelisation, this doesn't strictly answer your question but it might still be a good learning opportunity.
As a golden rule, before moving to multiprocessing to improve performance (execution time), one should first optimise the single-threaded case.
Your
rows_list = list(range(2500))
generates the numbers 0 to 2499 (that's the range) and stores them in memory (list), which takes time both to allocate the required memory and to do the actual writes. You then use each of these predictable values only once, by reading them from memory (which also takes time), in a predictable order:
for row in rows_list:
This is particularly relevant to the runtime of your loop function as you do it repeatedly (for itr in range(n_int):).
Instead, consider generating the number only when you need it, without an intermediate store (which conceptually removes any need to access RAM):
for row in range(2500):
Secondly, on top of sharing the same issue (unnecessary accesses to memory), the following:
diff = np.random.uniform(-1, 1, (2500, 300))
# ...
col = next(idx for idx, val in enumerate(diff[row,:]) if val < 0)
seems to me to be optimisable at the level of math (or logic).
What you're trying to do is get a random variable (that col index) by defining it as "the first time I encounter a random variable in [-1;1] that is lower than 0". But notice that figuring out if a random variable with a uniform distribution over [-α;α] is negative, is the same as having a random variable over {0,1} (i.e. a bool).
Therefore, you're now working with bools instead of floats and you don't even have to do the comparison (val < 0) as you already have a bool. This potentially makes the code much faster. Using the same idea as for rows_list, you can generate that bool only when you need it, testing until it is True (or False, choose one, it obviously doesn't matter). By doing so, you only generate as many random bools as you need, not more and not less (BTW, what happens in your code if all 300 elements in the row are positive? ;) ):
import itertools
import random

for _ in range(n_int):
    cols_red_list = []
    for row in range(2500):
        col = next(i for i in itertools.count() if random.getrandbits(1))
        cols_red_list.append(col)
or, with a list comprehension (using from itertools import count and from random import getrandbits):
cols_red_list = [next(i for i in count() if getrandbits(1))
                 for _ in range(2500)]
I'm sure that, through proper statistical analysis, you can even express that col random variable as a non-uniform variable over [0;limit[, allowing you to compute it much faster.
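(A hedged sketch of that statistical shortcut: with a fair coin per element, the index of the first success follows a geometric distribution, so all column indices can be drawn in one vectorised call. np.random.geometric returns the number of trials up to and including the first success, hence the - 1.)
import numpy as np

# draw all 2500 column indices at once; same distribution as scanning
# random coin flips until the first success
cols_red_list = (np.random.geometric(p=0.5, size=2500) - 1).tolist()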
Please test the performance of an "optimized" version of your single-threaded implementation first. If the runtime is still not acceptable, you should then look into multiprocessing.
multiprocessing uses system processes (not threads!) for parallelization, which require expensive IPC (inter-process communication) to share data.
This bites you in two spots:
diff = np.random.uniform(-1, 1, (2500, 300)) creates a large matrix which is expensive to pickle/copy to another process
rows_list = list(range(2500)) creates a smaller list, but the same applies here.
To avoid this expensive IPC, you have one and a half choices:
If on a POSIX-compliant system, initialize your variables on the module level, that way each process gets a quick-and-dirty copy of the required data. This is not scalable as it requires POSIX, weird architecture (you probably don't want to put everything on the module level), and doesn't support sharing changes to that data.
Use shared memory. This only supports fairly primitive data types, but mp.Array should cover your needs.
The second problem is that setting up a pool is expensive, as num_cpu processes need to be started. Your workload is small enough to be negligible compared to this overhead. A good practice is to only create one pool and reuse it.
Here is a quick-and-dirty example of the POSIX only solution:
import numpy as np
from multiprocessing import Pool, cpu_count
from functools import partial

n_int = 10
rows_list = np.array(range(2500))
diff = np.random.uniform(-1, 1, (2500, 300))

def crossings(row, diff):
    return next(idx for idx, val in enumerate(diff[row, :]) if val < 0)

def workload(_):
    cols_red_list = [crossings(row, diff) for row in rows_list]
    print(len(cols_red_list))

class Simulation(object):
    def loop(self):
        num_of_workers = cpu_count()
        with Pool(num_of_workers) as pool:
            pool.map(workload, range(10))
            pool.close()

sim1 = Simulation()
sim1.loop()
For me (and my two cores) this is roughly twice as fast as the sequential version.
Update with shared memory:
import numpy as np
from multiprocessing import Pool, cpu_count, Array
from functools import partial

n_int = 10
ROW_COUNT = 2500

### WORKER

diff = None
result = None

def init_worker(*args):
    global diff, result
    (diff, result) = args

def crossings(i):
    result[i] = next(idx for idx, val in enumerate(diff[i*300:(i+1)*300]) if val < 0)

### MAIN

class Simulation():
    def loop(self):
        num_of_workers = cpu_count()
        diff = Array('d', range(ROW_COUNT*300), lock=False)
        result = Array('i', ROW_COUNT, lock=False)
        # Shared memory needs to be passed when workers are spawned
        pool = Pool(num_of_workers, initializer=init_worker, initargs=(diff, result))
        for i in range(n_int):
            # SLOW, I assume you use a different source of values anyway.
            diff[:] = np.random.uniform(-1, 1, ROW_COUNT*300)
            pool.map(partial(crossings), range(ROW_COUNT))
            print(len(result))
        pool.close()

sim1 = Simulation()
sim1.loop()
A few notes:
Shared memory needs to be set up at worker creation, so it's global anyway.
This still isn't faster than the sequential version, but that's mainly because the np.random.uniform output needs to be copied entirely into shared memory. I assume those are just values for testing, and in reality you'd fill it differently anyway.
I only pass indices to the worker, and use them to read and write values to the shared memory.
I am quite new to Python. I have been thinking of making the below code into parallel calls, where a list of doj values is formatted with the help of a lambda:
from datetime import datetime

m_df[['doj']] = m_df[['doj']].apply(lambda x: formatdoj(*x), axis=1)

def formatdoj(doj):
    doj = str(doj).split(" ")[0]
    doj = datetime.strptime(doj, '%Y' + "-" + '%m' + "-" + "%d")
    return doj
Since the list has a million records, formatting all of them takes a lot of time.
How do I make parallel function calls in Python, similar to Parallel.ForEach in C#?
I think that in your case using parallel computation is a bit of overkill. The slowness comes from the code, not from using a single processor. I'll show you in a few steps how to make it faster, guessing a bit about what you're working with (a Pandas dataframe) and what it contains (please stick to SO guidelines and include a complete working example!!)
For my test, I've used the following random dataframe with 100k rows (scale times up to get to your case):
import numpy as np
import pandas as pd

N = int(1e5)
m_df = pd.DataFrame([['{}-{}-{}'.format(y, m, d)]
                     for y, m, d in zip(np.random.randint(2007, 2019, N),
                                        np.random.randint(1, 13, N),
                                        np.random.randint(1, 28, N))],
                    columns=['doj'])
Now this is your code:
from time import time

tstart = time()
m_df[['doj']] = m_df[['doj']].apply(lambda x: formatdoj(*x), axis=1)
print("Done in {:.3f}s".format(time() - tstart))
On my machine it runs in around 5.1s. It has several problems. The first one is that you're using dataframes instead of series, even though you work on only one column, and creating a useless lambda function. Simply doing:
m_df['doj'].apply(formatdoj)
cuts the time down to 1.6s. Also, joining strings with '+' is slow in Python; you can change your formatdoj to:
def faster_formatdoj(doj):
return datetime.strptime(doj.split()[0], '%Y-%m-%d')
m_df['doj'] = m_df['doj'].apply(faster_formatdoj)
This is not a great improvement but does cut it down a bit, to 1.5s. If you need to join the strings for real (because e.g. they are not fixed), rather use '-'.join(['%Y', '%m', '%d']); that's faster.
But the true bottleneck comes from using datetime.strptime a lot of times. It is intrinsically a slow command - dates are a bulky thing. On the other hand, if you have millions of dates, and assuming they're not uniformly spread since the beginning of humankind, chances are they are massively duplicated. So the following is how you should truly do it:
tstart = time()
# Create a new column with only the first word
m_df['doj_split'] = m_df['doj'].apply(lambda x: x.split()[0])
converter = {
    x: faster_formatdoj(x) for x in m_df['doj_split'].unique()
}
m_df['doj'] = m_df['doj_split'].apply(lambda x: converter[x])
# Drop the column we added
m_df.drop(['doj_split'], axis=1, inplace=True)
print("Done in {:.3f}s".format(time()-tstart))
This works in around 0.2/0.3s, more than 10 times faster than your original implementation.
After all this, if you are still running too slow, you can consider working in parallel (rather, parallelizing separately the first "split" instruction and, maybe, the apply-lambda part, otherwise you'd be creating many different "converter" dictionaries and nullifying the gain); a sketch follows the edit note below. But I'd take that as a last step rather than the first solution...
[EDIT]: Originally in the first step of the last code box I used m_df['doj_split'] = m_df['doj'].str.split().apply(lambda x: x[0]) which is functionally equivalent but a bit slower than m_df['doj_split'] = m_df['doj'].apply(lambda x: x.split()[0]). I'm not entirely sure why, probably because it's essentially applying two functions instead of one.
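(A hedged sketch of that last-resort parallel step, not the answer's own code: the "split" work is farmed out to worker processes in chunks, while the converter dictionary is still built once in the parent; faster_formatdoj and m_df are assumed to be defined as above, and the chunk count is arbitrary.)
import numpy as np
import pandas as pd
from multiprocessing import Pool, cpu_count

def split_chunk(chunk):
    # parallelized version of the "split" step, applied to one chunk of the column
    return chunk.apply(lambda x: x.split()[0])

if __name__ == '__main__':
    chunks = np.array_split(m_df['doj'], cpu_count())
    with Pool(cpu_count()) as pool:
        doj_split = pd.concat(pool.map(split_chunk, chunks))
    # the converter is built once, in the parent, so it is not duplicated per worker
    converter = {x: faster_formatdoj(x) for x in doj_split.unique()}
    m_df['doj'] = doj_split.map(converter)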
Your best bet is to use Dask. Dask has a dataframe type which you can use to create a similar dataframe, and when executing its compute function you can specify the number of cores with the num_workers argument; this will parallelize the task.
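(A hedged sketch of that suggestion, assuming the faster_formatdoj function from the previous answer and an arbitrary partition count.)
import dask.dataframe as dd

# split the pandas dataframe into partitions that can be processed in parallel
ddf = dd.from_pandas(m_df, npartitions=8)
ddf['doj'] = ddf['doj'].apply(faster_formatdoj, meta=('doj', 'object'))
# num_workers controls how many workers the scheduler uses
m_df = ddf.compute(scheduler='processes', num_workers=4)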
Since I'm not sure about your example, I will give you another one using the multiprocessing library:
# -*- coding: utf-8 -*-
import multiprocessing as mp

input_list = ["str1", "str2", "str3", "str4"]

def format_str(str_input):
    str_output = str_input + "_test"
    return str_output

if __name__ == '__main__':
    with mp.Pool(processes=2) as p:
        result = p.map(format_str, input_list)
    print(result)
Now, let's say you want to map a function with several arguments, you should then use starmap():
# -*- coding: utf-8 -*-
import multiprocessing as mp

input_list = ["str1", "str2", "str3", "str4"]

def format_str(str_input, i):
    str_output = str_input + "_test" + str(i)
    return str_output

if __name__ == '__main__':
    with mp.Pool(processes=2) as p:
        result = p.starmap(format_str, [(input_list[i], i) for i in range(len(input_list))])
    print(result)
Do not forget to place the Pool inside the if __name__ == '__main__': block, and note that multiprocessing will not work inside an IDE such as Spyder (or others), so you'll need to run the script from the cmd.
To keep the results, you can either save them to a file, or keep the cmd open at the end with os.system("pause") (Windows) or an input() on Linux.
It's a fairly simple way to use multiprocessing with python.
I understand from simple examples that Pool.map is supposed to behave identically to the 'normal' python code below except in parallel:
def f(x):
    # complicated processing
    return x + 1

y_serial = []
x = range(100)
for i in x: y_serial += [f(i)]
y_parallel = pool.map(f, x)
# y_serial == y_parallel!
However I have two bits of code that I believe should follow this example:
#Linear version
price_datas = []
for csv_file in loop_through_zips(data_directory):
price_datas += [process_bf_data_csv(csv_file)]
#Parallel version
p = Pool()
price_data_parallel = p.map(process_bf_data_csv, loop_through_zips(data_directory))
However the Parallel code doesn't work whereas the Linear code does. From what I can observe, the parallel version appears to be looping through the generator (it's printing out log lines from the generator function) but then not actually performing the "process_bf_data_csv" function. What am I doing wrong here?
.map tries to pull all values from your generator to build them into a full list before actually starting the work.
Try waiting longer (until the generator runs out) or use multithreading and a queue instead.
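(A minimal sketch illustrating this, with hypothetical gen/work functions: every line printed by the generator appears before any result, because map drains the generator up front.)
import multiprocessing as mp

def gen():
    for i in range(3):
        print("yielding", i)   # printed up front, while map collects all the inputs
        yield i

def work(x):
    return x * 2

if __name__ == '__main__':
    with mp.Pool(2) as pool:
        print(pool.map(work, gen()))  # all "yielding" lines appear before the results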
I have an iterable object Z in Python which is too large to fit into memory. I would like to perform a parallel calculation over this object and write the results, in the order that they appear in Z, to a file. Consider this silly example:
import numpy as np
import multiprocessing as mp
import itertools as itr

FOUT = open("test", 'w')

def f(x):
    val = hash(np.random.random())
    FOUT.write("%s\n" % val)

N = 10**9
Z = itr.repeat(0, N)

P = mp.Pool()
P.map(f, Z, chunksize=50)
P.close()
P.join()

FOUT.close()
There are two major problems with this:
multiple results can be written to the same line
a result is returned with N objects in it - this will be too big to hold in memory (and we don't need it!).
What I've tried:
Using a global lock mp.Lock() to share the FOUT resource: doesn't help, because I think each worker creates its own namespace.
Using apply_async instead of map: while having a callback fixes 1) and 2), it doesn't accept an iterable object.
Using imap instead of map and iterating over the results:
Something like:
def f(x):
    val = hash(np.random.random())
    return val

P = mp.Pool()
C = P.imap(f, Z, chunksize=50)
for x in C:
    FOUT.write("%s\n" % x)
This still uses inordinate amounts of memory, though I'm not sure why.