I've been playing around with Python a lot lately, and in comparing numerous parallelization packages, I noticed that the performance increase from serial to parallel seems to top out at 6 processes rather than at 8, the number of cores my MacBook Pro (OS X 10.8.2) has.
The attached plot compares the timing of different tasks as a function of the number of processes (parallel or sequential). This example uses Python's built-in 'multiprocessing' package. 'Memory' vs. 'Processor' refers to memory-intensive functions (just allocating large arrays) vs. computationally intensive functions (many operations).
What causes the speedup to top out below 8 processes?
(The times are averaged over 100 function calls for each number of processes.)
import multiprocessing as mp
import time
import numpy as np
import matplotlib as mpl
from matplotlib import pyplot as plt
iters = 100
mem_num = 1000
pro_num = 20000
max_procs = 10
line_width = 2.0
legend_size = 10
fig_name = 'timing.pdf'
def UseMemory(num):
    test1 = np.zeros([num,num])
    test2 = np.arange(num*num)
    test3 = np.array(test2).reshape([num, num])
    test4 = np.empty(num, dtype=object)
    return
def UseProcessor(num):
    test1 = np.arange(num)
    test1 = np.cos(test1)
    test1 = np.sqrt(np.fabs(test1))
    test2 = np.zeros(num)
    for i in range(num): test2[i] = test1[i]
    return np.std(test2)
def MemJob(its):
    for ii in range(its): UseMemory(mem_num)

def ProJob(its):
    for ii in range(its): UseProcessor(pro_num)  # was `range(iters)`: loop over the argument, not the global
if __name__ == "__main__":

    print '\nParTest\n'

    proc_range = np.arange(1, max_procs+1, step=1)
    test_times = np.zeros([len(proc_range),2,2])  # test_times[num_procs][0-ser,1-par][0-mem,1-pro]
    tot_times = np.zeros([len(proc_range),2])     # tot_times[num_procs][0-ser,1-par]

    print ' Testing %2d numbers of processors between [%d,%d]' % (len(proc_range), 1, max_procs)
    print ' Iterations %d, Memory Length %d, Processor Length %d' % (iters, mem_num, pro_num)

    for it in range(len(proc_range)):
        procs = proc_range[it]
        job_arg = procs*[iters]
        print '\n - %2d, Processes = %3d' % (it, procs)

        # --- Test Serial ---
        print ' - - Serial'

        # Test Memory
        all_start = time.time()
        start = time.time()
        map(MemJob, [procs*iters])
        ser_mem_time = time.time() - start

        # Test Processor
        start = time.time()
        map(ProJob, job_arg)
        ser_pro_time = time.time() - start

        ser_time = time.time() - all_start

        # --- Test Parallel : multiprocessing ---
        print ' - - Parallel: multiprocessing'
        pool = mp.Pool(processes=procs)

        # Test Memory
        all_start = time.time()
        start = time.time()
        pool.map(MemJob, job_arg)
        par_mem_time = time.time() - start

        # Test Processor
        start = time.time()
        pool.map(ProJob, job_arg)
        par_pro_time = time.time() - start

        par_time = time.time() - all_start

        print ' - - Collecting'
        ser_mem_time /= procs
        ser_pro_time /= procs
        par_mem_time /= procs
        par_pro_time /= procs
        ser_time /= procs
        par_time /= procs

        test_times[it][0] = [ ser_mem_time, ser_pro_time ]
        test_times[it][1] = [ par_mem_time, par_pro_time ]
        tot_times[it] = [ ser_time, par_time ]

    fig = plt.figure()
    ax = fig.add_subplot(111)
    ax.set_xlabel('Number of Processes')
    ax.set_ylabel('Time [s]')
    ax.xaxis.grid(True)
    ax.yaxis.grid(True)

    lines = []
    names = []
    l1, = ax.plot(proc_range, test_times[:,0,0], linewidth=line_width)
    lines.append(l1)
    names.append('Serial Memory')
    l1, = ax.plot(proc_range, test_times[:,0,1], linewidth=line_width)
    lines.append(l1)
    names.append('Serial Processor')
    l1, = ax.plot(proc_range, tot_times[:,0], linewidth=line_width)
    lines.append(l1)
    names.append('Serial')
    l1, = ax.plot(proc_range, test_times[:,1,0], linewidth=line_width)
    lines.append(l1)
    names.append('Parallel Memory')
    l1, = ax.plot(proc_range, test_times[:,1,1], linewidth=line_width)
    lines.append(l1)
    names.append('Parallel Processor')
    l1, = ax.plot(proc_range, tot_times[:,1], linewidth=line_width)
    lines.append(l1)
    names.append('Parallel')

    plt.legend(lines, names, ncol=2, prop={'size':legend_size}, fancybox=True, shadow=True, bbox_to_anchor=(1.10, 1.10))

    fig.savefig(fig_name, dpi=fig.get_dpi())
    print ' - Saved to ', fig_name
    plt.show(block=True)
From the discussion above I think you have the information you need, but I'm adding an answer to collect the facts in case it benefits others (plus I wanted to work it through myself). (Due credit to @bamboon, who mentioned some of this first.)
First, your MacBook has a CPU with four physical cores, but the design of the chip is such that each core's hardware has the ability to run two threads. This is called "simultaneous multithreading" (SMT) and in this case is embodied by Intel's hyperthreading feature. So taken all together you have 8 "virtual cores" (4 + 4 = 8).
Note that the OS treats all the virtual cores the same, i.e. it does not distinguish between the two SMT threads offered by a physical core, and that's why sysctl returns 8 when you query it. Python will do the same thing:
>>> import multiprocessing
>>> multiprocessing.cpu_count()
8
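(If you ever need to tell the physical cores apart from the logical ones, one option, shown here as an illustrative sketch assuming the third-party psutil package is installed, is:)

>>> import psutil
>>> psutil.cpu_count(logical=False)  # physical cores
4
>>> psutil.cpu_count(logical=True)   # logical (SMT) cores
8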
Second, the speedup limit you're encountering is a well-known phenomenon in which parallel performance saturates and does not improve with the addition of more processors working on the problem. This effect is described by Amdahl's Law, a quantitative statement about how much speedup to expect from multiple processors depending on how much code can be parallelized and how much runs serially.
Typically a number of factors limit relative speedup, including details of the OS and even the computer's architecture (e.g. how SMT works in a hardware core), so that even if you parallelize as much of your code as you can, your performance will not scale indefinitely. Understanding where the serial bottleneck is can require very detailed analysis of your program and its running environment.
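As a concrete illustration, here is a minimal sketch of Amdahl's Law; the parallel fraction p = 0.9 below is an arbitrary assumption for illustration, not a value measured from your code:

def amdahl_speedup(p, n):
    # Amdahl's Law: speedup with n processors when a fraction p of the
    # work is parallelizable and the remaining (1 - p) stays serial.
    return 1.0 / ((1.0 - p) + p / n)

for n in (1, 2, 4, 6, 8):
    print('n = %d -> speedup %.2fx' % (n, amdahl_speedup(0.9, n)))
# Even with 90% of the work parallelized, the speedup is capped at
# 1 / (1 - p) = 10x, and efficiency (speedup / n) falls as n grows.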
You can find a good example with discussion in this question.
I hope this helps.
The following Python code measures the speedup as the number of processes increases. The task being multiprocessed is just multiplication of a random matrix; the matrix size is also varied and the corresponding elapsed time is measured.
Note that the processes do not share any objects and are completely independent. So I expected the performance curve over the number of processes to be almost the same for every matrix size. However, when plotting the results (see below), I found that expectation to be false. Specifically, when the matrix size becomes large (80, 160), performance hardly improves as the number of processes increases. Note: the figure's legend indicates the matrix sizes.
Could you explain why performance does not improve when the matrix size is large?
For your information, here is the spec of my CPU:
https://www.amd.com/en/products/cpu/amd-ryzen-9-3900x
Product Family: AMD Ryzen™ Processors
Product Line: AMD Ryzen™ 9 Desktop Processors
# of CPU Cores: 12
# of Threads: 24
Max. Boost Clock: Up to 4.6GHz
Base Clock: 3.8GHz
L1 Cache: 768KB
L2 Cache: 6MB
L3 Cache: 64MB
main script
import numpy as np
import pickle
from dataclasses import dataclass
import time
import multiprocessing
import os
import subprocess
def split_number(n_total, n_split):
    return [n_total // n_split + (1 if x < n_total % n_split else 0) for x in range(n_split)]

def task(args):
    n_iter, idx, matrix_size = args
    #cores = "{},{}".format(2 * idx, 2 * idx+1)
    #os.system("taskset -p -c {} {}".format(cores, os.getpid()))
    for _ in range(n_iter):
        A = np.random.randn(matrix_size, matrix_size)
        for _ in range(100):
            A = A.dot(A)

def measure_time(n_process: int, matrix_size: int) -> float:
    n_total = 100
    assigne_list = split_number(n_total, n_process)
    pool = multiprocessing.Pool(n_process)
    ts = time.time()
    pool.map(task, zip(assigne_list, range(n_process), [matrix_size] * n_process))
    elapsed = time.time() - ts
    return elapsed
if __name__ == "__main__":
    n_experiment_sample = 5
    n_logical = os.cpu_count()
    n_physical = int(0.5 * n_logical)
    result = {}
    for mat_size in [5, 10, 20, 40, 80, 160]:
        subresult = {}
        result[mat_size] = subresult
        for n_process in range(1, n_physical + 1):
            elapsed = np.mean([measure_time(n_process, mat_size) for _ in range(n_experiment_sample)])
            subresult[n_process] = elapsed
            print("{}, {}, {}".format(mat_size, n_process, elapsed))
    with open("result.pkl", "wb") as f:
        pickle.dump(result, f)
plot script
import numpy as np
import matplotlib.pyplot as plt
import pickle
with open("result.pkl", "rb") as f:
    result = pickle.load(f)

fig, ax = plt.subplots()
for matrix_size in result.keys():
    subresult = result[matrix_size]
    n_process_list = list(subresult.keys())
    elapsed_time_list = np.array(list(subresult.values()))
    speedups = elapsed_time_list[0] / elapsed_time_list
    ax.plot(n_process_list, speedups, label=matrix_size)

ax.set_xlabel("number of process")
ax.set_ylabel("speed up compared to single process")
ax.legend(loc="upper left", borderaxespad=0, fontsize=10, framealpha=1.0)
plt.show()
Background: I'm trying to create a simple bootstrap function for sampling means with replacement. I want to parallelize the function since I will eventually be deploying this on data with millions of data points, and the sample sizes will be much larger. I've run other examples, such as the Mandelbrot example. In the code below you'll see that I have a CPU version of the code, which runs fine as well.
I've read several resources to get this up and running:
Random Numbers with CUDA
Writing Kernels in CUDA
The issue: This is my first foray into CUDA programming, and I believe I have everything set up correctly. I'm getting this one error that I cannot seem to figure out:
TypingError: cannot determine Numba type of <class 'object'>
I believe the LOC in question is:
bootstrap_rand_gpu[threads_per_block, blocks_per_grid](rng_states, dt_arry_device, n_samp, out_mean_gpu)
Attempts to resolve the issue: I won't go into full detail, but here is what I have tried:
Thought it might have something to do with cuda.to_device(). I changed it around, and I also called cuda.to_device_array_like(). I've used to_device() for all parameters and for just a few. I've seen code samples where it's used for every parameter and sometimes not, so I'm not sure what should be done.
I've removed the random number generator for GPUs (create_xoroshiro128p_states) and just used a static value to test.
Explicitly assigning integers with int() (and not). Not sure why I tried this. I read that Numba only supports a limited set of data types, so I made sure that they were ints.
Numba Supported Datatypes
A few other things I don't recall...
Apologies for messy code. I'm a bit at wits' end on this.
Below is the full code:
import numpy as np
from numpy import random
from numpy.random import randn
import pandas as pd
from timeit import default_timer as timer
from numba import cuda
from numba.cuda.random import create_xoroshiro128p_states, xoroshiro128p_uniform_float32
from numba import *
def bootstrap_rand_cpu(dt_arry, n_samp, boot_samp, out_mean):
    for i in range(boot_samp):
        rand_idx = random.randint(n_samp-1, size=(50))  # get random array of indices 0-49, with replacement
        out_mean[i] = dt_arry[rand_idx].mean()

@cuda.jit
def bootstrap_rand_gpu(rng_states, dt_arry, n_samp, out_mean):
    thread_id = cuda.grid(1)
    stride = cuda.gridsize(1)
    for i in range(thread_id, dt_arry.shape[0], stride):
        for k in range(0, n_samp-1, 1):
            rand_idx_arry[k] = int(xoroshiro128p_uniform_float32(rng_states, thread_id) * 49)
        out_mean[thread_id] = dt_arry[rand_idx_arry].mean()
mean = 10
rand_fluc = 3
n_samp = int(50)
boot_samp = int(1000)
dt_arry = (random.rand(n_samp)-.5)*rand_fluc + mean
out_mean_cpu = np.empty(boot_samp)
out_mean_gpu = np.empty(boot_samp)
##################
# RUN ON CPU
##################
start = timer()
bootstrap_rand_cpu(dt_arry, n_samp, boot_samp, out_mean_cpu)
dt = timer() - start
print("CPU Bootstrap mean of " + str(boot_samp) + " mean samples: " + str(out_mean_cpu.mean()))
print("Bootstrap CPU in %f s" % dt)
##################
# RUN ON GPU
##################
threads_per_block = 64
blocks_per_grid = 24
#create random state for each state in the array
rng_states = create_xoroshiro128p_states(threads_per_block * blocks_per_grid, seed=1)
start = timer()
dt_arry_device = cuda.to_device(dt_arry)
out_mean_gpu_device = cuda.to_device(out_mean_gpu)
bootstrap_rand_gpu[threads_per_block, blocks_per_grid](rng_states, dt_arry_device, n_samp, out_mean_gpu_device)
out_mean_gpu_device.copy_to_host()
dt = timer() - start
print("GPU Bootstrap mean of " + str(boot_samp) + " mean samples: " + str(out_mean_gpu.mean()))
print("Bootstrap GPU in %f s" % dt)
You seem to have at least 4 issues:
In your kernel code, rand_idx_arry is undefined.
You can't do .mean() in cuda device code
Your kernel launch config parameters are reversed.
Your kernel had an incorrect range for the grid-stride loop. dt_arry.shape[0] is 50, so you were only populating the first 50 locations in your GPU output array. Just like your host code, the range for this grid-stride loop should be the size of the output array (which is boot_samp).
There may be other issues as well, but when I refactor your code like this to address those issues, it seems to run without error:
$ cat t65.py
#import matplotlib.pyplot as plt
import numpy as np
from numpy import random
from numpy.random import randn
from timeit import default_timer as timer
from numba import cuda
from numba.cuda.random import create_xoroshiro128p_states, xoroshiro128p_uniform_float32
from numba import *
def bootstrap_rand_cpu(dt_arry, n_samp, boot_samp, out_mean):
    for i in range(boot_samp):
        rand_idx = random.randint(n_samp-1, size=(50))  # get random array of indices 0-49, with replacement
        out_mean[i] = dt_arry[rand_idx].mean()

@cuda.jit
def bootstrap_rand_gpu(rng_states, dt_arry, n_samp, out_mean):
    thread_id = cuda.grid(1)
    stride = cuda.gridsize(1)
    for i in range(thread_id, out_mean.shape[0], stride):
        my_sum = 0.0
        for k in range(0, n_samp-1, 1):
            my_sum += dt_arry[int(xoroshiro128p_uniform_float32(rng_states, thread_id) * 49)]
        out_mean[thread_id] = my_sum/(n_samp-1)
mean = 10
rand_fluc = 3
n_samp = int(50)
boot_samp = int(1000)
dt_arry = (random.rand(n_samp)-.5)*rand_fluc + mean
#plt.plot(dt_arry)
#figureData = plt.figure(1)
#plt.title('Plot ' + str(n_samp) + ' samples')
#plt.plot(dt_arry)
#figureData.show()
out_mean_cpu = np.empty(boot_samp)
out_mean_gpu = np.empty(boot_samp)
##################
# RUN ON CPU
##################
start = timer()
bootstrap_rand_cpu(dt_arry, n_samp, boot_samp, out_mean_cpu)
dt = timer() - start
print("CPU Bootstrap mean of " + str(boot_samp) + " mean samples: " + str(out_mean_cpu.mean()))
print("Bootstrap CPU in %f s" % dt)
#figureMeanCpu = plt.figure(2)
#plt.title('Plot '+ str(boot_samp) + ' bootstrap means - CPU')
#plt.plot(out_mean_cpu)
#figureData.show()
##################
# RUN ON GPU
##################
threads_per_block = 64
blocks_per_grid = 24
#create random state for each state in the array
rng_states = create_xoroshiro128p_states(threads_per_block * blocks_per_grid, seed=1)
start = timer()
dt_arry_device = cuda.to_device(dt_arry)
out_mean_gpu_device = cuda.to_device(out_mean_gpu)
bootstrap_rand_gpu[blocks_per_grid, threads_per_block](rng_states, dt_arry_device, n_samp, out_mean_gpu_device)
out_mean_gpu = out_mean_gpu_device.copy_to_host()
dt = timer() - start
print("GPU Bootstrap mean of " + str(boot_samp) + " mean samples: " + str(out_mean_gpu.mean()))
print("Bootstrap GPU in %f s" % dt)
$ python t65.py
CPU Bootstrap mean of 1000 mean samples: 10.148048544038735
Bootstrap CPU in 0.037496 s
GPU Bootstrap mean of 1000 mean samples: 10.145088765532936
Bootstrap GPU in 0.416822 s
$
Notes:
I've commented out a bunch of stuff that doesn't seem to be relevant. You might want to do something like that in the future when posting code (remove stuff that is not relevant to your question).
I've also fixed some things in your final GPU printout at the end.
I haven't studied your code carefully. I'm not suggesting anything is defect free. I'm just pointing out some issues and providing a guide for how they might be addressed. I can see the results don't match between CPU and GPU, but since I don't know what you're doing, and also because the random number generators don't match between CPU and GPU code, it's not obvious to me that things should match.
I have recently played around with Python's multiprocessing module to speed up the forward-backward algorithm for Hidden Markov Models as forward filtering and backward filtering can run independently. Seeing the run-time halve was awe-inspiring stuff.
I now attempt to include some multiprocessing in my iterative Viterbi algorithm. In this algorithm, the two computations I am trying to run in parallel are not independent. The val_max part can run on its own, but arg_max[t] depends on val_max[t-1]. So I played with the idea that one can run val_max as a separate process and arg_max as another separate process that is fed by val_max.
I admit to being a bit out of my depth here; I do not know much about multiprocessing beyond watching some basic videos on it and browsing blogs. I provide my attempt below, but it does not work.
import numpy as np
from time import time,sleep
import multiprocessing as mp
class Viterbi:
    def __init__(self,A,B,pi):
        self.M = A.shape[0]    # number of hidden states
        self.A = A             # Transition Matrix
        self.B = B             # Observation Matrix
        self.pi = pi           # Initial distribution
        self.T = None          # time horizon
        self.val_max = None
        self.arg_max = None
        self.obs = None
        self.sleep_time = 1e-6
        self.output = mp.Queue()

    def get_path(self,x):
        # returns the most likely state sequence given observed sequence x
        # using the Viterbi algorithm
        self.T = len(x)
        self.val_max = np.zeros((self.T, self.M))
        self.arg_max = np.zeros((self.T, self.M))
        self.val_max[0] = self.pi*self.B[:,x[0]]
        for t in range(1, self.T):
            # Independent Process
            self.val_max[t] = np.max(self.A*np.outer(self.val_max[t-1], self.B[:,x[t]]), axis=0)
            # Dependent Process
            self.arg_max[t] = np.argmax(self.val_max[t-1]*self.A.T, axis=1)
        # BACKTRACK
        states = np.zeros(self.T, dtype=np.int32)
        states[self.T-1] = np.argmax(self.val_max[self.T-1])
        for t in range(self.T-2, -1, -1):
            states[t] = self.arg_max[t+1, states[t+1]]
        return states

    def get_val(self):
        '''Independent Process'''
        for t in range(1, self.T):
            self.val_max[t] = np.max(self.A*np.outer(self.val_max[t-1], self.B[:,self.obs[t]]), axis=0)
        self.output.put(self.val_max)

    def get_arg(self):
        '''Dependent Process'''
        for t in range(1, self.T):
            while 1:
                # Process info if available
                if self.val_max[t-1].any():
                    self.arg_max[t] = np.argmax(self.val_max[t-1]*self.A.T, axis=1)
                    break
                # Else sleep and wait for info to arrive
                sleep(self.sleep_time)
        self.output.put(self.arg_max)

    def get_path_parallel(self,x):
        self.obs = x
        self.T = len(x)
        self.val_max = np.zeros((self.T, self.M))
        self.arg_max = np.zeros((self.T, self.M))
        val_process = mp.Process(target=self.get_val)
        arg_process = mp.Process(target=self.get_arg)
        # get first initial value for val_max which can feed arg_process
        self.val_max[0] = self.pi*self.B[:,x[0]]
        arg_process.start()
        val_process.start()
        arg_process.join()
        val_process.join()
Note: get_path_parallel does not have backtracking yet.
It would seem that val_process and arg_process never really run. I am really not sure why this happens. You can run the code on the Wikipedia example for the Viterbi algorithm:
obs = np.array([0,1,2]) # normal then cold and finally dizzy
pi = np.array([0.6,0.4])
A = np.array([[0.7,0.3],
              [0.4,0.6]])
B = np.array([[0.5,0.4,0.1],
              [0.1,0.3,0.6]])
viterbi = Viterbi(A,B,pi)
path = viterbi.get_path(obs)
I also tried using Ray; however, I had no clue what I was really doing there. Could you please recommend what I should do to get the parallel version to run? I must be doing something very wrong, but I do not know what.
Your help would be much appreciated.
I have managed to get my code working thanks to @SıddıkAçıl. The producer-consumer pattern is what does the trick. I also realised that the processes can run successfully, but if one does not store the final results in a "result queue" of sorts, then they vanish. By this I mean that I filled in values in my numpy arrays val_max and arg_max by letting the processes start(), but when I inspected the arrays afterwards, they were still all-zero. I verified that they did fill up correctly by printing them just as each process was about to terminate (at the last iteration, t = self.T - 1). So instead of printing them, I just added them to a multiprocessing Queue object on the final iteration to capture the entire filled-up array.
I provide my updated working code below. NOTE: it is working, but it takes twice as long to complete as the serial version. My thoughts on why this might be so are as follows:
I can get it to run as two processes, but I don't actually know how to do it properly. Experienced programmers might know how to fix it with the chunksize parameter.
The two processes I am separating are numpy matrix operations. These already execute so fast that the overhead of concurrency (multiprocessing) is not worth the theoretical improvement. Had the two processes been the two original for loops (as used in Wikipedia and most implementations), multiprocessing might have given gains (perhaps I should investigate this). Furthermore, because we have a producer-consumer pattern and not two independent processes (a producer-producer pattern), the pipeline can only run as fast as the slower of the two stages (in this case the producer takes twice as long as the consumer). We cannot expect the run time to halve as in the producer-producer scenario (which is what happened with my parallel forward-backward HMM filtering algorithm).
My computer has 4 cores, and numpy already does built-in multithreading of its operations (via the BLAS library it is linked against). By attempting to use cores to make the code faster, I may be depriving numpy of cores that it could use more effectively. To figure this out, I am going to time the numpy operations and see whether they are slower in my concurrent version than in my serial version.
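One quick way to check that last point is to time a numpy operation with its internal threading pinned to a single thread. Here is a rough sketch, assuming the third-party threadpoolctl package is installed; the 2000x2000 matrix is an arbitrary choice for illustration:

import numpy as np
from time import time
from threadpoolctl import threadpool_limits

A = np.random.randn(2000, 2000)

start = time()
A.dot(A)  # numpy/BLAS free to use all of its internal threads
print('unrestricted:', round(time() - start, 4))

# Confine BLAS to a single thread, mimicking what happens when your
# own worker processes are competing for the same cores.
with threadpool_limits(limits=1):
    start = time()
    A.dot(A)
    print('single-thread:', round(time() - start, 4))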
I will update if I learn anything new. If you know the real reason why my concurrent code is so much slower, please do let me know. Here is the code:
import numpy as np
from time import time
import multiprocessing as mp
class Viterbi:
    def __init__(self,A,B,pi):
        self.M = A.shape[0]    # number of hidden states
        self.A = A             # Transition Matrix
        self.B = B             # Observation Matrix
        self.pi = pi           # Initial distribution
        self.T = None          # time horizon
        self.val_max = None
        self.arg_max = None
        self.obs = None
        self.intermediate = mp.Queue()
        self.result = mp.Queue()

    def get_path(self,x):
        '''Sequential/serial Viterbi algorithm with backtracking.'''
        self.T = len(x)
        self.val_max = np.zeros((self.T, self.M))
        self.arg_max = np.zeros((self.T, self.M))
        self.val_max[0] = self.pi*self.B[:,x[0]]
        for t in range(1, self.T):
            # Independent Process
            self.val_max[t] = np.max(self.A*np.outer(self.val_max[t-1], self.B[:,x[t]]), axis=0)
            # Dependent Process
            self.arg_max[t] = np.argmax(self.val_max[t-1]*self.A.T, axis=1)
        # BACKTRACK
        states = np.zeros(self.T, dtype=np.int32)
        states[self.T-1] = np.argmax(self.val_max[self.T-1])
        for t in range(self.T-2, -1, -1):
            states[t] = self.arg_max[t+1, states[t+1]]
        return states

    def get_val(self, initial_val_max):
        '''Independent producer process.'''
        val_max = initial_val_max
        for t in range(1, self.T):
            val_max = np.max(self.A*np.outer(val_max, self.B[:,self.obs[t]]), axis=0)
            #print('Transfer: ', val_max)
            self.intermediate.put(val_max)
            if t == self.T-1:
                self.result.put(val_max)  # we only need the last val_max value for backtracking

    def get_arg(self):
        '''Dependent consumer process.'''
        t = 1
        while t < self.T:
            val_max = self.intermediate.get()
            #print('Receive: ', val_max)
            self.arg_max[t] = np.argmax(val_max*self.A.T, axis=1)
            if t == self.T-1:
                self.result.put(self.arg_max)
                #print('Processed: ', self.arg_max[t])
            t += 1

    def get_path_parallel(self,x):
        '''Multiprocessing producer-consumer implementation of the Viterbi algorithm.'''
        self.obs = x
        self.T = len(x)
        self.arg_max = np.zeros((self.T, self.M))  # we don't tabulate val_max anymore
        initial_val_max = self.pi*self.B[:,x[0]]
        producer_process = mp.Process(target=self.get_val, args=(initial_val_max,), daemon=True)
        consumer_process = mp.Process(target=self.get_arg, daemon=True)
        self.intermediate.put(initial_val_max)  # initial production put into pipeline for consumption
        consumer_process.start()                # we can already consume initial_val_max
        producer_process.start()
        #val_process.join()
        #arg_process.join()
        #self.output.join()
        return self.backtrack(self.result.get(), self.result.get())  # backtrack takes the last row of val_max and the entire arg_max

    def backtrack(self, val_max_last_row, arg_max):
        '''Backtracking the dynamic programming solution (actually a Trellis diagram)
        produced by the multiprocessing Viterbi algorithm.'''
        states = np.zeros(self.T, dtype=np.int32)
        states[self.T-1] = np.argmax(val_max_last_row)
        for t in range(self.T-2, -1, -1):
            states[t] = arg_max[t+1, states[t+1]]
        return states
if __name__ == '__main__':
    obs = np.array([0,1,2])  # normal then cold and finally dizzy
    T = 100000
    obs = np.random.binomial(2, 0.3, T)
    pi = np.array([0.6,0.4])
    A = np.array([[0.7,0.3],
                  [0.4,0.6]])
    B = np.array([[0.5,0.4,0.1],
                  [0.1,0.3,0.6]])

    t1 = time()
    viterbi = Viterbi(A,B,pi)
    path = viterbi.get_path(obs)
    t2 = time()
    print('Iterative Viterbi')
    print('Path: ', path)
    print('Run-time: ', round(t2-t1, 6))

    t1 = time()
    viterbi = Viterbi(A,B,pi)
    path = viterbi.get_path_parallel(obs)
    t2 = time()
    print('\nParallel Viterbi')
    print('Path: ', path)
    print('Run-time: ', round(t2-t1, 6))
Welcome to SO. Consider taking a look at the producer-consumer pattern, which is heavily used in multiprocessing.
Beware that on Windows, multiprocessing in Python re-instantiates your code for every process you create. So your Viterbi objects, and therefore their Queue fields, are not the same.
Observe this behaviour through:
import os

def get_arg(self):
    '''Dependent Process'''
    print("Dependent ", self)
    print("Dependent ", self.output)
    print("Dependent ", os.getpid())

def get_val(self):
    '''Independent Process'''
    print("Independent ", self)
    print("Independent ", self.output)
    print("Independent ", os.getpid())

if __name__ == "__main__":
    print("Hello from main process", os.getpid())
    obs = np.array([0,1,2])  # normal then cold and finally dizzy
    pi = np.array([0.6,0.4])
    A = np.array([[0.7,0.3],
                  [0.4,0.6]])
    B = np.array([[0.5,0.4,0.1],
                  [0.1,0.3,0.6]])
    viterbi = Viterbi(A,B,pi)
    print("Main viterbi object", viterbi)
    print("Main viterbi object queue", viterbi.output)
    path = viterbi.get_path_parallel(obs)
There are three different Viterbi objects as there are three different processes. So, what you need in terms of parallelism is not processes. You should explore the threading library that Python offers.
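For example, here is a minimal producer-consumer sketch using threads instead of processes; because threads share one address space, a single queue.Queue behaves as you would expect. This is an illustrative skeleton, not a drop-in replacement for the Viterbi class:

import threading
import queue

q = queue.Queue()
T = 5  # stand-in for the time horizon

def producer():
    for t in range(1, T):
        q.put(t * t)  # stand-in for computing val_max[t]

def consumer():
    for t in range(1, T):
        item = q.get()  # blocks until the producer delivers val_max[t]
        print('consumed', item)  # stand-in for computing arg_max[t]

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
c.start()
p.start()
p.join()
c.join()

And since the heavy lifting in the Viterbi recursion is done by numpy, which releases the GIL during many of its array operations, threads can still overlap useful work here.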
This is my first Python MPI program, and I would really appreciate some help optimizing the code. Specifically, I have two questions regarding scattering and gathering, if anyone can help. This program is much slower than a traditional approach without MPI.
I am trying to scatter one array, do some work on the nodes which fills another set of arrays, and gather those. My questions are primarily in the setup and gather sections of code.
Is it necessary to allocate memory for every array (A, my_A, xset, yset, my_xset, my_yset) on all nodes? Some of these can get large.
Is an array the best structure to gather for the data I am using? When I scatter A, it is relatively small. However, xset and yset can get very large (over a million elements at least).
Here is the code:
#!/usr/bin/env python
#Libraries
import numpy as py
import matplotlib.pyplot as plt
from mpi4py import MPI
comm = MPI.COMM_WORLD
print "%d nodes running."% (comm.size)
#Variables
cmin = 0.0
cmax = 4.0
cstep = 0.005
run = 300
disc = 100
#Setup
if comm.rank == 0:
    A = py.arange(cmin, cmax + cstep, cstep)
else:
    A = py.arange((cmax - cmin) / cstep, dtype=py.float64)
my_A = py.empty(len(A) / comm.size, dtype=py.float64)
xset = py.empty(len(A) * (run - disc) * comm.size, dtype=py.float64)
yset = py.empty(len(A) * (run - disc) * comm.size, dtype=py.float64)
my_xset = py.empty(0, dtype=py.float64)
my_yset = py.empty(0, dtype=py.float64)
#Scatter
comm.Scatter( [A, MPI.DOUBLE], [my_A, MPI.DOUBLE] )
#Work
for i in my_A:
    x = 0.5
    for j in range(0, run):
        x = i * x * (1 - x)
        if j >= disc:
            my_xset = py.append(my_xset, i)
            my_yset = py.append(my_yset, x)
#Gather
comm.Allgather( [my_xset, MPI.DOUBLE], [xset, MPI.DOUBLE])
comm.Allgather( [my_yset, MPI.DOUBLE], [yset, MPI.DOUBLE])
#Export Results
if comm.rank == 0:
    f = open('points.3d', 'w+')
    for k in range(0, len(xset)-1):
        f.write('(' + str(round(xset[k],2)) + ',' + str(round(yset[k],2)) + ',0)\n')
    f.close()
You do not need to allocate A on the non-root processes. If you were not using Allgather, but a simple Gather, you could also omit xset and yset. Basically you only need to allocate data that is actually used by the collectives; the other parameters are only significant on root.
Yes, a numpy array is an appropriate data structure for such large arrays. For small data where it is not performance-critical, it can be more convenient and Pythonic to use the all-lowercase methods and communicate with Python objects (lists, etc.).
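For instance, here is a sketch of the Gather variant (illustrative only; the chunk length 4 is arbitrary, and Gather assumes every rank sends the same count). Only root allocates the receive buffer; the other ranks pass None:

#!/usr/bin/env python
import numpy as py
from mpi4py import MPI

comm = MPI.COMM_WORLD

# each rank's chunk; Gather requires the same count on every rank
my_xset = py.full(4, comm.rank, dtype=py.float64)

xset = None
if comm.rank == 0:
    xset = py.empty(4 * comm.size, dtype=py.float64)  # allocated on root only

recv = [xset, MPI.DOUBLE] if comm.rank == 0 else None
comm.Gather([my_xset, MPI.DOUBLE], recv, root=0)

if comm.rank == 0:
    print(xset)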
I am trying to run a multiprocessing task on all 4 cores of my processor. When I run just one of the four tasks that I actually want to run, the code takes about 3 seconds per 1000 iterations. However, if I set it up to run all 4 processes, the run time roughly quadruples to 13 seconds per 1000 iterations. I'm going to attach part of my code below. I'm not sure why this is happening. I've tried to do my own investigation, and it doesn't seem to be a memory or CPU issue. If I monitor the run with one task, one of the processors is active at 100% and only 0.8% of the memory is in use. And when I run 4 tasks, all 4 processors are active at 100%, each with 0.8% of the memory used.
I am not sure why this is happening. I've used multiprocessing many times before and have never noticed a run-time increase. Anyway, here is my messy code:
import numpy as np
import multiprocessing
from astropy.io import fits
import os
import time
def SmoothAllSpectra(ROOT, range_lo, range_hi):
    file_names = os.listdir(ROOT)
    for i in range(len(file_names)):
        file_names[i] = ROOT + file_names[i]
    pool = multiprocessing.Pool(4)
    for filename in pool.map(work, file_names[range_lo:range_hi]):
        print filename+' complete'
    return ROOT

def work(filename):
    data = ImportSpectrum(filename)  # data is a 2d numpy array; ImportSpectrum is defined elsewhere in the full script
    smooth_data = SmoothSpectrum(data, 6900, h0=1)
    return filename

def SmoothSpectrum(data, max_wl, h0=1):
    wl_data = data[0]
    count_data = data[1]
    hi_idx = np.argmin(np.abs(wl_data - max_wl))
    smoothed = np.empty((2, len(wl_data[:hi_idx])))
    smoothed[0] = wl_data[:hi_idx]
    temp = np.exp(-.5 * np.power(smoothed[0,int(len(smoothed[0])/2.)] - smoothed[0], 2) / h0**2)
    idx = np.where(temp>1e-10)[0]
    window = int(np.ceil(len(idx)/2.))  # cast to int so it can be used as a slice bound below
    numer = np.zeros(len(smoothed[0]))
    denom = np.zeros(len(smoothed[0]))
    for i in range(len(numer)):
        K1 = np.zeros(len(numer))
        if (i-window >= 0) and (i+window <= len(smoothed[0])):
            K1[i-window:i+window] = np.exp(-.5 * np.power(smoothed[0,i] - smoothed[0,i-window:i+window], 2) / h0**2)
        elif i-window < 0:
            K1[:i+window] = np.exp(-.5 * np.power(smoothed[0,i] - smoothed[0,:i+window], 2) / h0**2)
        else:
            K1[i-window:] = np.exp(-.5 * np.power(smoothed[0,i] - smoothed[0,i-window:], 2) / h0**2)
        numer += count_data[i] * K1
        denom += K1
    smoothed[1] = numer / denom
    return smoothed
There's nothing fancy going on in the code, just smoothing some data. I'm sure there are loads of ways to optimize this code, but I'm interested in why I'm seeing a computation time increase going from 1 task to 4 tasks.
Cheers!