PBS cluster - multiple Python simulations

I have to run multiple simulations of the same model with varying parameters (or random number generator seeds). Previously I worked on a server with many cores, where I used Python's multiprocessing library with apply_async. This was very handy, as I could decide the maximum number of cores to occupy and the simulations would just go into a queue.
Now I have moved to a place with an HPC cluster running PBS. From trial and error and from different answers, it seems that multiprocessing only works inside a single node. Is there a way to make it work across many nodes, or any other library that achieves the same functionality with the same ease of use in a few lines?
To let you understand my kind of code:
import multiprocessing as mp

import numpy as np
import pandas as pd

import functions_library as L

if __name__ == "__main__":
    N = 100
    proc = 50
    pool = mp.Pool(processes=proc)

    seed = 342
    np.random.seed(seed)
    seeds = np.random.randint(low=1, high=100000, size=N)

    resul = []
    for SEED in seeds:
        SEED = int(SEED)
        # some_args stands in for whatever arguments the simulation needs (e.g. SEED)
        resul.append(pool.apply_async(L.some_function, args=(some_args,)))
        print(SEED)

    results = [p.get() for p in resul]

    database = pd.DataFrame(results)
    database.to_csv("prova.csv")
EDIT
As I understand it, mpi4py might be helpful, as it interacts naturally with PBS. Is that correct? How can I adapt my code to mpi4py?

I have found that the schwimmbad package is quite handy for running code written for multiprocessing on an MPI cluster with minimal changes.
I hope it helps!
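For anyone landing here, below is a minimal sketch of what the adaptation could look like with schwimmbad's MPIPool (which is built on top of mpi4py, so it also speaks to the mpi4py part of the question). The worker function run_simulation and the call L.some_function(seed) are assumptions standing in for whatever functions_library actually provides.

# Launched under MPI, e.g. from a PBS job script:
#   mpiexec -n 50 python run_sims.py
import sys

import numpy as np
import pandas as pd
from schwimmbad import MPIPool

import functions_library as L


def run_simulation(seed):
    # assumption: the simulation only needs the seed
    return L.some_function(seed)


if __name__ == "__main__":
    np.random.seed(342)
    seeds = [int(s) for s in np.random.randint(low=1, high=100000, size=100)]

    pool = MPIPool()
    # worker ranks block here and wait for tasks; only the master rank continues
    if not pool.is_master():
        pool.wait()
        sys.exit(0)

    results = pool.map(run_simulation, seeds)
    pool.close()

    pd.DataFrame(results).to_csv("prova.csv")

The number of simulations running concurrently is then controlled by the number of MPI ranks requested in the PBS script rather than by the processes argument of a pool.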

Related

Multiprocessing on PBS cluster node

I have to run multiple simulations of the same model with varying parameters (or random number generator seeds). Previously I worked on a server with many cores, where I used Python's multiprocessing library with apply_async. This was very handy, as I could decide the maximum number of cores to occupy and the simulations would just go into a queue.
As I understand from other questions, multiprocessing works on PBS clusters as long as you stay on just one node, which can be fine for now. However, my code doesn't always work.
To let you understand my kind of code:
import multiprocessing as mp

import numpy as np
import pandas as pd

import functions_library as L

if __name__ == "__main__":
    N = 100
    proc = 50
    pool = mp.Pool(processes=proc)

    seed = 342
    np.random.seed(seed)
    seeds = np.random.randint(low=1, high=100000, size=N)

    resul = []
    for SEED in seeds:
        SEED = int(SEED)
        # some_args stands in for whatever arguments the simulation needs (e.g. SEED)
        resul.append(pool.apply_async(L.some_function, args=(some_args,)))
        print(SEED)

    results = [p.get() for p in resul]

    database = pd.DataFrame(results)
    database.to_csv("prova.csv")
The function creates three N=10000 networkx graphs, performs some computations on them, and then returns a short Python dictionary.
The weird thing I cannot debug is the following error message:
multiprocessing.pool.MaybeEncodingError: Error sending result: ''. Reason: 'RecursionError('maximum recursion depth exceeded while calling a Python object')'
What's strange is that I ran multiple instances of the code on different nodes. Three times the code worked correctly, whereas most of the time it returns the previous error. I tried launching different numbers of parallel simulations, from 7 to 20 (the number of cores of the nodes), but there doesn't seem to be a pattern, so I guess it's not a memory issue.
In other questions a similar error seems to be related to pickling strange or big objects, but in this case the only thing that comes out of the function is a short dictionary, so it shouldn't be related to that.
I also tried increasing the allowed recursion depth with the sys library at the beginning of the script, but it didn't work even up to 15000.
Any idea how to solve, or at least understand, this behavior?
It was related to eigenvector_centrality() not converging.
When run outside of multiprocessing it correctly raises a networkx error, whereas inside multiprocessing only this recursion error is returned.
I don't know whether this is a quirk of this particular function or whether multiprocessing sometimes cannot handle certain library errors.
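For others hitting the same thing, one workaround (just a sketch, assuming a worker that takes a seed like in the code above) is to catch exceptions inside the function passed to apply_async and return them as data, so the original error (here networkx's PowerIterationFailedConvergence) is not hidden behind the pickling of the result:

import traceback

def safe_some_function(seed):
    # wrap the real worker so any exception raised inside it comes back
    # to the parent process as an ordinary, picklable dictionary
    try:
        return L.some_function(seed)
    except Exception as exc:
        return {"seed": seed, "error": repr(exc), "traceback": traceback.format_exc()}

pool.apply_async(safe_some_function, args=(SEED,)) then always returns something the pool can send back, and the real traceback can be inspected afterwards.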

Python: the processes limit of multiprocessing.Pool doesn't work

When I use multiprocessing.Pool for research on a quantitative strategy, a problem shows up on the Linux server monitor (htop). It seems that the processes parameter doesn't work and all CPUs are occupied.
For example, if I set multiprocessing.Pool(processes=8) and use pool.apply_async to run my backtest strategy over a parameter list of length 6000, everything seems right at the beginning of the backtest, but after some time all 24 CPUs are occupied by this program; it looks as if processes=8 has no effect.
Pseudo code like this:
import multiprocessing

def strategy(paras):
    # some trading logic
    return indicator

if __name__ == '__main__':
    # paras_list has 6000 different parameter sets
    # a_list: stores the results of strategy
    paras_list = [[..], [..], ...[..]]
    a_list = []
    pool = multiprocessing.Pool(processes=8)  # create 8 worker processes
    for i in range(len(paras_list)):
        a_list.append(pool.apply_async(func=strategy, args=(paras_list[i],)))
    pool.close()
    pool.join()
(htop screenshot of the Linux server omitted)
Can anyone help me with this problem?
Thanks a lot!!

Python Multiprocessing pool.map unresponsive with too many worker processes

First question on Stack Overflow, so please bear with me. I am looking to calculate the variance of group ratings (long numpy arrays). Running the program without parallel processing works fine, but given that each process can run independently and there are 32 groups, I am looking to use multiprocessing to speed things up. This works OK for small numbers of groups (< 10), but beyond that the program will often just seemingly stop running, with no error messages, at an unspecified number of groups (usually between 20 and 30), although less frequently it will run all the way through. The arrays are quite large (21451 x 11462 user-item ratings), so I am wondering if the problem is caused by not having enough memory, although no error messages are printed.
import numpy as np
from functools import partial
import multiprocessing

def variance_parallel(extra_matrices, group_num):
    # do some variation calculation
    # print confirmation that we have entered function, and group number
    return single_group_var

def variance(extra_matrices, num_groups):
    variance_partial = partial(variance_parallel, extra_matrices)
    for g in list(range(num_groups)):
        group_var = pool.map(variance_partial, range(g))
    return group_var

num_cores = multiprocessing.cpu_count() - 1
pool = multiprocessing.Pool(processes=num_cores)
variance(extra_matrices, num_groups)
Running the above code shows the program progressively building up the number of groups it is checking variance on ([0], [0,1], [0,1,2], ...) before eventually printing nothing.
Thanks in advance for any help, and apologies if my formatting/question is a bit off!
Multiple processes do not share data
Data sent to processes needs to be copied
Since the arrays are large, the issue is very likely to do with copying those large arrays to the processes. Furthermore, in Python's multiprocessing, sending data to processes is done by serialisation, which is (a) CPU intensive and (b) takes extra memory in and of itself.
In short, multiprocessing is not a good fit for your use case. Since numpy is a native code extension (to which the GIL does not apply) and is thread safe, it is best to use threading instead of multiprocessing. With threading, the worker threads can share data via their parent process's address space, which does away with having to copy it.
That should stop the program from running out of memory.
However, for threads to share an address space, the data they share needs to be bound to an object, as in a Python class.
Something like the below - untested, as the code sample is incomplete.
import numpy as np
from functools import partial
from threading import Thread
from multiprocessing import cpu_count

class Variance(Thread):
    def __init__(self, extra_matrices, group_num):
        Thread.__init__(self)
        self.extra_matrices = extra_matrices
        self.group_num = group_num
        self.output = None

    def run(self):
        # do some variation calculation
        # print confirmation that we have entered function, and group number
        self.output = single_group_var

num_cores = cpu_count() - 1
results = []
for g in list(range(num_groups)):
    workers = [Variance(extra_matrices, range(g))
               for _ in range(num_cores)]
    # Start threads
    for worker in workers:
        worker.start()
    # Wait for completion
    for worker in workers:
        worker.join()
    results.extend([w.output for w in workers])
print(results)

ThreadPool in Python is not as fast as expected

I'm a beginner in Python and machine learning. I'm trying to reproduce the functionality of CountVectorizer() using multi-threading. I'm working with the Yelp dataset to do sentiment analysis using LogisticRegression. This is what I've written so far:
Code snippet:
from multiprocessing.dummy import Pool as ThreadPool
from threading import Thread, current_thread
from functools import partial

data = df['text']
rev = df['stars']
y = []

def product_helper(args):
    return featureExtraction(*args)

def featureExtraction(p, t):
    temp = [0] * len(bag_of_words)
    for word in p.split():
        if word in bag_of_words:
            temp[bag_of_words.index(word)] += 1
    return temp

# function to be mapped over
def calculateParallel(threads):
    pool = ThreadPool(threads)
    job_args = [(item_a, rev[i]) for i, item_a in enumerate(data)]
    l = pool.map(product_helper, job_args)
    pool.close()
    pool.join()
    return l

temp_X = calculateParallel(12)
This is just part of the code.
Explanation:
df['text'] has all the reviews and df['stars'] has the ratings (1 through 5). I'm trying to compute the word count vectors temp_X using multi-threading. bag_of_words is a list of some chosen frequent words.
Question:
Without multi-threading I was able to compute temp_X in around 24 minutes, while the above code took 33 minutes for a dataset of 100k reviews. My machine has 128 GB of DRAM and 12 logical cores (6 physical cores with hyperthreading, i.e., 2 threads per core).
What am I doing wrong here?
Your whole code seems CPU-bound rather than IO-bound. You are just using threads, which are subject to the GIL, so you are effectively running just one thread plus overhead. It runs on only one core. To run on multiple cores, use multiprocessing:
import multiprocessing

pool = multiprocessing.Pool()
# map_async returns an AsyncResult; call .get() to obtain the list of results
l = pool.map_async(product_helper, job_args).get()
from multiprocessing.dummy import Pool as ThreadPool is just a wrapper over the threading module. It utilises just one core and not more than that.
Python and threads don't really work well together. There is a known issue called the GIL (global interpreter lock). Basically, there is a lock in the interpreter that prevents threads from running in parallel (even if you have multiple CPU cores). Python simply gives each thread a few milliseconds of CPU time, one after another (and the reason your code became slower is the overhead from context switching between those threads).
Here is a really good document explaining how it works: http://www.dabeaz.com/python/UnderstandingGIL.pdf
To fix your problem I suggest you try multiprocessing:
https://pymotw.com/2/multiprocessing/basics.html
Note: multiprocessing is not 100% equivalent to multithreading. Multiprocessing runs in parallel, but the different processes won't share memory, so if you change a variable in one of them it will not change in the other process.
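To make that suggestion concrete, here is a minimal sketch of the same feature extraction with a real process pool instead of multiprocessing.dummy; it assumes data, rev, bag_of_words and featureExtraction exist exactly as in the question, and the name calculateParallelProcesses is made up for illustration.

from multiprocessing import Pool

def product_helper(args):
    # same helper as in the question: unpack (review, rating) into featureExtraction
    return featureExtraction(*args)

def calculateParallelProcesses(processes):
    job_args = [(item_a, rev[i]) for i, item_a in enumerate(data)]
    # on Linux the child processes inherit bag_of_words and featureExtraction
    # from the parent via fork, so the globals do not need to be re-sent
    with Pool(processes=processes) as pool:
        return pool.map(product_helper, job_args)

temp_X = calculateParallelProcesses(12)

Whether this actually beats the single-threaded version depends on how expensive featureExtraction is compared to the cost of pickling each review and its result.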

Writing a parallel loop

I am trying to run a parallel loop on a simple example.
What am I doing wrong?
from joblib import Parallel, delayed
import multiprocessing

def processInput(i):
    return i * i

if __name__ == '__main__':
    # what are your inputs, and what operation do you want to
    # perform on each input. For example...
    inputs = range(1000000)
    num_cores = multiprocessing.cpu_count()
    results = Parallel(n_jobs=4)(delayed(processInput)(i) for i in inputs)
    print(results)
The problem with the code is that, when executed under Windows with Python 3, it opens num_cores instances of Python to execute the parallel jobs, but only one is active. This should not be the case, since processor activity should be 100% instead of 14% (on an i7 with 8 logical cores).
Why are the extra instances not doing anything?
Continuing on your request to provide working multiprocessing code, I suggest that you use pool.map (if the delayed functionality is not important). I'll give you an example; if you're working on Python 3 it's worth mentioning that you can use starmap.
Also worth mentioning, you can use map_async/starmap_async if the order of the returned results does not have to correspond to the order of the inputs.
import multiprocessing as mp

def processInput(i):
    return i * i

if __name__ == '__main__':
    # what are your inputs, and what operation do you want to
    # perform on each input. For example...
    inputs = range(1000000)
    # removing the processes argument makes the code run on all available cores
    pool = mp.Pool(processes=4)
    results = pool.map(processInput, inputs)
    print(results)
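And, since starmap was mentioned above, a tiny sketch with a made-up two-argument function:

import multiprocessing as mp

def add(a, b):
    # hypothetical two-argument worker
    return a + b

if __name__ == '__main__':
    pairs = [(1, 2), (3, 4), (5, 6)]
    with mp.Pool(processes=4) as pool:
        # starmap unpacks each tuple into the worker's arguments
        print(pool.starmap(add, pairs))  # [3, 7, 11]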
On Windows, the multiprocessing module uses the 'spawn' method to start up multiple python interpreter processes. This is relatively slow. Parallel tries to be smart about running the code. In particular, it tries to adjust batch sizes so a batch takes about half a second to execute. (See the batch_size argument at https://pythonhosted.org/joblib/parallel.html)
Your processInput() function runs so fast that Parallel determines that it is faster to run the jobs serially on one processor than to spin up multiple python interpreters and run the code in parallel.
If you want to force your example to run on multiple cores, try setting batch_size to 1000 or making processInput() more complicated so it takes longer to execute.
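For reference, the batch_size suggestion above would look something like this (untimed sketch):

from joblib import Parallel, delayed

def processInput(i):
    return i * i

if __name__ == '__main__':
    inputs = range(1000000)
    # larger batches amortise the cost of dispatching work to the
    # spawned interpreters, so the extra processes get real work to do
    results = Parallel(n_jobs=4, batch_size=1000)(
        delayed(processInput)(i) for i in inputs)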
Edit: Working example on Windows that shows multiple processes in use (I'm using Windows 7):
from joblib import Parallel, delayed
from os import getpid

def modfib(n):
    # print the process id to see that multiple processes are used, and
    # re-used during the job.
    if n % 400 == 0:
        print(getpid(), n)
    # fibonacci sequence mod 1000000
    a, b = 0, 1
    for i in range(n):
        a, b = b, (a + b) % 1000000
    return b

if __name__ == "__main__":
    Parallel(n_jobs=-1, verbose=5)(delayed(modfib)(j) for j in range(1000, 4000))
