"Processes" or "Threads" strategy to fill independant blocks into global array? - python

In Python 2, I would like to fill a global array by having parallel processes (or threads) fill different sub-arrays (there are 16 blocks in total). I must point out that the blocks are independent of each other: the assignment of the cells of one block does not depend on any other block.
1) From what I have found, I would get a great benefit from a multi-core CPU by using different "processes", but it seems a little complicated to share the global array between all the processes.
2) From another point of view, I can use "threads" instead of "processes", since the implementation is easier. I found out that the "ThreadPool" class from "multiprocessing.dummy" allows this global array to be shared by all the concurrent threads.
For example, with Python 2.7, the following code works:
import numpy as np
from multiprocessing.dummy import Pool as ThreadPool

# Number of blocks along each axis of the global array (N total blocks = dimBlocks*dimBlocks = 16 here)
dimBlocks = 4
# Size of the discretization along the k and mu axes
dimPoints = 100
# Dimension along one axis of the flattened global arrayFullCross
dimMatCovCross = dimBlocks*dimPoints
# Discretization along the k-axis and mu-axis for each block
arrayCross_k = np.linspace(kMIN, kMAX, dimPoints)
arrayCross_mu = np.linspace(-1, 1, dimPoints)
# Build the big matrix holding all the blocks
arrayFullCross = np.zeros((dimBlocks, dimBlocks, arrayCross_k.size, arrayCross_mu.size))
# Build cross-correlation matrix
def buildCrossMatrix_loop(params_array):
    # Row index of the block
    xb = params_array[0]
    # Column index of the block
    yb = params_array[1]
    # Current redshift
    z = zrange[params_array[2]]
    # Loop inside the block
    for ub in range(dimPoints):
        for vb in range(dimPoints):
            # Diagonal blocks
            if (xb == yb):
                # Fill the (xb,yb) sub-block of the global array
                arrayFullCross[xb][xb][ub][vb] = 2*P_obs_cross(arrayCross_k[ub], arrayCross_mu[vb], z, 10**P_m(np.log10(arrayCross_k[ub])),
                ...
                ...
# End of function buildCrossMatrix_loop
# Main loop over redshifts
i = 0
while i < len(zrange):
    def generatorCrossMatrix(index):
        for igen in range(dimBlocks):
            for lgen in range(dimBlocks):
                yield igen, lgen, index
    if __name__ == '__main__':
        # Use 20 threads
        pool = ThreadPool(20)
        pool.map(buildCrossMatrix_loop, generatorCrossMatrix(i))
    # Increment index "i"
    i = i+1
But unfortunately, even with 20 threads, I notice that the cores of my CPU are not fully used (with the 'top' or 'htop' command, I only see a single process at 100%).
3) What strategy should I choose if I want to fully exploit the 16 cores of my CPU (as I do with pool.map(function, generator)) while also sharing the global array?
4) Some people told me to do I/O for each sub-array (basically, write each block to a file, then read all the sub-arrays back and gather them into the full array). This solution is handy, but I would like to avoid I/O (unless there is really no other solution).
5) I have used the MPI library with C, and the operation of filling sub-arrays and finally gathering them to build a big array is not very complicated. However, I would rather not use MPI with Python (I don't even know whether that exists).
6) I also tried to use Process with the target set to my filling function (buildCrossMatrix_loop), like this inside the main while loop above:
from multiprocessing import Process

# Main loop on the z range
i = 0
while i < len(zrange):
    params_p = []
    for ip in range(4):
        for jp in range(4):
            params_p.append(ip)
            params_p.append(jp)
            params_p.append(i)
            p = Process(target=buildCrossMatrix_loop, args=(params_p,))
            params_p = []
            p.start()
    # Finished: wait for everybody
    p.join()
    ...
    ...
    i = i+1
# End of main while loop
But the final 2D global array is filled only with zeros. So I must conclude that the Process function does not share the array to be filled?
7) So which strategy should I look for?
1. Using a "pool of processes" and finding a way to share the global array, knowing that all my 16 cores will be running.
2. Using "threads" and sharing the global array, but performance, at first sight, seems to be worse than with a "pool of processes". Maybe there is a way to increase the power of each "thread", I mean like with a "pool of processes"?
I tried to follow the different examples on https://docs.python.org/2/library/multiprocessing.html but without success, that is to say, without any relevant speed-up.
I think that in my case the major issue is either the gathering of all the sub-arrays OR the fact that the global array arrayFullCross is not shared by the other processes or threads.
If someone had a simple example of sharing a global variable (here an array) in a multi-threading context, it would be nice to post it here.
UPDATE 1: I tested with the threading module (and not multiprocessing), but performance remains rather poor. The GIL is apparently not released, i.e. only one process appears in the htop command (maybe the threading version used is not the right one).
So I am going to try to handle my issue with the "return" approach.
Naively, I tried to return the whole array at the end of the function to which I apply the map function, like this:
# Build the cross-correlation matrix
def buildCrossMatrix_loop(params_array):
    # Row index of the block
    xb = params_array[0]
    # Column index of the block
    yb = params_array[1]
    # Current redshift
    z = zrange[params_array[2]]
    # Loop inside the block
    for ub in range(dimPoints):
        for vb in range(dimPoints):
            # Diagonal blocks
            if (xb == yb):
                arrayFullCross[xb][xb][ub][vb] = 2*P_obs_cross(arrayCross_k[ub], arrayCross_mu[vb])
                ...
                ... # other assignments on arrayFullCross elements
    # Return the global array to the main process
    return arrayFullCross
Then I tried to receive this global array from map like this:
if __name__ == '__main__':
    pool = Pool(16)
    outputArray = pool.map(buildCrossMatrix_loop, generatorCrossMatrix(i))
    pool.terminate()
    ## Print outputArray
    print 'outputArray = ', outputArray
    ## Reshape the 4D outputArray to a 2D array
    arrayFullCross2D_swap = np.array(outputArray).swapaxes(1,2).reshape(dimMatCovCross, dimMatCovCross)
Unfortunately, when I print outputArray, I get:
outputArray = [None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None]
This is not the 4D outputArray expected, just a list of 16 None (I think that the number 16 corresponds to the number of tasks generated by generatorCrossMatrix(i)).
How can I get back the whole 4D array once map has been launched and has finished?

First of all, I believe multiprocessing.pool.ThreadPool is an undocumented/private API, so you should avoid relying on it. More importantly, multiprocessing.dummy only wraps the threading module, so for pure-Python work the GIL keeps everything on a single core, which is why you don't see any benefit. You should use the "plain" multiprocessing module.
The second code does not work because it uses multiple processes. Processes do not share memory, so the changes you make in a subprocess are not reflected in the other subprocesses or in the main process. You either want to:
Return the values and combine them in the main process, for example using multiprocessing.Pool.map (a short sketch is given below);
Use threading instead of multiprocessing: just replace import multiprocessing with import threading and multiprocessing.Process with threading.Thread, and the code should work.
Note that the threading version will only work because numpy releases the GIL during its computations; otherwise it would be stuck at 1 CPU.
You may want to look at this similar question which I answered a couple of minutes ago.
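To illustrate the first option concretely, here is a minimal, hedged sketch (build_block and its dummy filler are placeholders, not the asker's actual P_obs_cross computation): each worker builds and returns its own sub-block, and the main process writes it into the global array, so nothing has to be shared between processes.
import numpy as np
from multiprocessing import Pool

dimBlocks, dimPoints = 4, 100

def build_block(params):
    xb, yb, z_index = params
    block = np.empty((dimPoints, dimPoints))
    # ... the real code would fill `block` with the P_obs_cross values for (xb, yb, z_index) ...
    block[:] = xb + yb  # dummy filler so this sketch runs
    return xb, yb, block

if __name__ == '__main__':
    arrayFullCross = np.zeros((dimBlocks, dimBlocks, dimPoints, dimPoints))
    tasks = [(xb, yb, 0) for xb in range(dimBlocks) for yb in range(dimBlocks)]
    pool = Pool(16)
    for xb, yb, block in pool.map(build_block, tasks):
        arrayFullCross[xb, yb] = block  # assembled in the main process
    pool.close()
    pool.join()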

Related

Vectorizing code that includes conditional statement

I've been using for loops for a script I've been working on, and I think the for loops are making the script run too slow. I think I could pick up a lot of speed with vectorization, but I'm having a little trouble figuring out how to vectorize my code. I'm copying some simple sample code that more or less mimics what I'm trying to accomplish in my actual code to make the question easier to follow. (Sorry if it's not the most elegant or technically sound. I'm still getting experience with Python.)
import numpy as np

def create_combo(input1, input2):
    combo = input1 + input2
    return combo

def another_combo(monitor, scalar_1, scalar_2):
    add_more = monitor + scalar_1 + scalar_2
    return add_more

# Initialize monitor
monitor = 0
# Initialize a threshold variable
threshold = 15
# Create input arrays
primary_1 = [1, 3, 5, 7, 9]
primary_2 = [2, 4, 6, 8, 10]
primary_1 = np.array(primary_1)
primary_2 = np.array(primary_2)
storage_vec = []
for i in range(5):
    # Create input variables
    scalar_1 = 0.5
    scalar_2 = 2
    # Call the create_combo function
    combination = create_combo(primary_1[i], primary_2[i])
    # Call the another_combo function
    add_on = another_combo(monitor, scalar_1, scalar_2)
    monitor = combination + add_on
    # Check if monitor exceeds threshold
    if monitor > threshold:
        # Store the index if so
        storage_vec.append(i)
        # Reset the variable, monitor
        monitor = 0
I can understand how it would be easy to vectorize the function named create_combo. And I can also see that it would be simple to make the variable named monitor a vector as well.
What I'm having trouble wrapping my head around is how I could reset that monitor variable at specific points in the vector and continue doing computations with monitor after it has been reset. So, I've been working with monitor as a scalar and doing computations on each element of the inputs to the functions. However, this seems to be way too slow, and I know vectorizing typically speeds things up.
I read that perhaps the np.where() function could help, but I'm also not sure how to apply that in this situation.
Any guidance?
Your example is probably over-complicated (or not complete).
But if we want to create storage_vec easily, we can notice that monitor increases by 21.5 after each step until it reaches the threshold.
So the vector storage_vec can simply be computed with:
# Increase value
inc = primary_1[-1] + primary_2[-1] + scalar_1 + scalar_2  # = 21.5
# Floor division
fd = threshold // inc
# Ceil division
cd = -(-threshold // inc)
# storage_vec
storage_vec = np.r_[fd:10:cd]

Adding arrays to global array using multiprocessing

I've a global NumPy array ys_final and have defined a function that generates an array ys. The ys array will be generated based on an input parameter and I want to add these ys arrays to the global array, i.e ys_final = ys_final + ys.
The order of addition doesn't matter so I want to use Pool.apply_async() from the multiprocessing library but I can't write to the global array. The code for reference is:
import multiprocessing as mp

ys_final = np.zeros(len)

def ys_genrator(i):
    #code to generate ys array
    return ys

pool = mp.Pool(mp.cpu_count())
for i in range(3954):
    ys_final = ys_final + pool.apply_async(ys_genrator, args=(i)).get()
pool.close()
pool.join()
The above block of code keeps running forever and nothing happens. I've also tried mp.Process and I face the same problem: there I defined a target function that directly adds to the global array, but it is also not working, as the block keeps running forever. Reference:
def func(i):
    #code to generate ys
    global ys_final
    ys_final = ys_final + ys

for i in range(3954):
    p = mp.Process(target=func, args=(i,))
    p.start()
    p.join()
Any suggestions will be really helpful.
EDIT:
My ys_genrator is a function for linear interpolation. Based on the parameter i, which is an index for rows in a 2D image, the function creates an array of interpolated amplitudes that will be superimposed with all the interpolated amplitudes from the image, so ys needs to be added to ys_final.
The variable len is the length of the interpolated array, which is the same for all rows.
For reference, a simpler version of ys_genrator(i) is as follows:
def ys_genrator(i):
    ys = np.ones(10)*i
    return ys
A few points:
pool.apply_async(ys_genrator, args=(i)) needs to be pool.apply_async(ys_genrator, args=(i,)). Note the comma after the i.
pool.apply_async(ys_genrator, args=(i,)).get() is exactly equivalent to pool.apply(ys_genrator, args=(i,)). That is, you will block because of your immediate call to get and you will have absolutely no parallelism. You would need to make all your calls to pool.apply_async first, save the returned AsyncResult instances and only then call get on these instances (see the short sketch after this list).
If you are running under Windows, you will have a problem. The code that creates new processes must be within a block governed by if __name__ == '__main__':
If you are running under something like Jupyter Notebook or iPython you will have a problem. The worker function, ys_genrator, would need to be in an external file and imported.
Using apply_async for submitting a lot of tasks is inefficient. You are better off using imap or imap_unordered, where the tasks get submitted in "chunks" and you can process the results one by one as they become available. But you must choose a "suitable" chunksize argument.
Any code you have at the global level, such as ys_final = np.zeros(len) will be executed by every sub-process if you are running under Windows, and this can be wasteful if the subprocesses do not need to "see" this variable. If they do need to see this variable, be aware that each process in the pool will be working with its own copy of the variable so it better be a read-only usage. Even then, it can be very wasteful of storage if the variable is large. There are ways of sharing such a variable across the processes but it is not perfectly clear whether you need to (you haven't even defined variable len). So it is difficult to give you improved code. However, it appears that your worker function does not need to "see" ys_final, so I will take a shot at an improved solution.
But be aware that if your function ys_genrator is very trivial, nothing will be gained by using multiprocessing, because there is overhead both in creating the processing pool and in passing arguments from one process to another. Also, if ys_genrator uses numpy, this can be another source of problems, since numpy can use multiple threads for some of its own functions and you are better off not mixing numpy's internal parallelism with your own multiprocessing.
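A minimal sketch of the deferred-get pattern from point 2, with a stand-in ys_genrator that simply returns a length-3 array (the imap_unordered solution the answer actually recommends follows right after):
import multiprocessing as mp
import numpy as np

def ys_genrator(i):
    # stand-in worker; the real one would do the interpolation
    return np.ones(3) * i

if __name__ == '__main__':
    pool = mp.Pool(mp.cpu_count())
    # submit everything first and keep the AsyncResult objects...
    async_results = [pool.apply_async(ys_genrator, args=(i,)) for i in range(3954)]
    ys_final = np.zeros(3)
    # ...and only then call get() on them, so the tasks can run in parallel
    for r in async_results:
        ys_final += r.get()
    pool.close()
    pool.join()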
import multiprocessing as mp
import numpy as np

SIZE = 3

def ys_genrator(i):
    #code to generate ys array
    # for this dummy example all SIZE entries will end up with the same result:
    ys = [i] * SIZE  # for example: [1, 1, 1]
    return ys

def compute_chunksize(poolsize, iterable_size):
    chunksize, remainder = divmod(iterable_size, 4 * poolsize)
    if remainder:
        chunksize += 1
    return chunksize

if __name__ == '__main__':
    ys_final = np.zeros(SIZE)
    n_iterations = 3954
    poolsize = min(mp.cpu_count(), n_iterations)
    chunksize = compute_chunksize(poolsize, n_iterations)
    print('poolsize =', poolsize, 'chunksize =', chunksize)
    pool = mp.Pool(poolsize)
    for result in pool.imap_unordered(ys_genrator, range(n_iterations), chunksize):
        ys_final += result
    print(ys_final)
Prints:
poolsize = 8 chunksize = 124
[7815081. 7815081. 7815081.]
Update
You can also just use:
for result in pool.map(ys_genrator, range(n_iterations)):
    ys_final += result
The issue is that when you use the map method, it wants to compute an efficient chunksize argument based on the size of its iterable argument (see my compute_chunksize function above, which is essentially what pool.map will use). But to do this, it will first have to convert the iterable to a list to get its size. If n_iterations is very large, this is not very efficient, although it's probably not a major issue for a size of 3954. Still, you would be better off using my compute_chunksize function in this case, since you know the size of the iterable, and passing the chunksize argument explicitly to map, as I have done in the code using imap_unordered.
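That is, roughly (a short sketch reusing poolsize, n_iterations and compute_chunksize from the code above):
# Pass an explicit chunksize so map does not have to size the iterable itself
for result in pool.map(ys_genrator, range(n_iterations),
                       chunksize=compute_chunksize(poolsize, n_iterations)):
    ys_final += result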

Best way to simultaneously run this loop?

I have the following code:
data = [2,5,3,16,2,5]

def f(x):
    return 2*x

f_total = 0
for x in data:
    f_total += f(x)
print(f_total/len(data))
and I want to speed up the for loop. (In reality the code is more complex and I want to run it on a supercomputer with many processing cores.) I have read that I can do this with the multiprocessing library, where I can get python3 to run different chunks of the loop simultaneously, but I am a bit lost with it.
Could you explain to me how to do it with this minimal version of my program?
Thanks!
import multiprocessing
from numpy import random

"""
This sets the number of worker processes that you want to run in parallel.
Depending on the number of cores in your system you should choose the appropriate
number of workers. When you call the 'map' function it will distribute the input
values into that many parts.
"""
NUM_CORES = 6
data = random.rand(100, 1)

"""
+2 so that the cores are not left idle in case a worker is waiting for I/O.
Choose by performing an empirical analysis depending on the function you are trying to compute.
It could match NUM_CORES as well. You can vary the chunksize as well depending on the size of 'data' that you have.
"""
NUM_THREADS = NUM_CORES + 2
CHUNKSIZE = int(len(data)/(NUM_THREADS))

def f(x):
    return 2*x

# This takes care of creating the pool of workers which will be assigned the jobs
pool = multiprocessing.Pool(NUM_THREADS)
# map vs imap: if the data is large go for imap, else map is also good.
it = pool.imap(f, data, chunksize=CHUNKSIZE)
f_total = 0
# Iterate and sum up the results
for value in it:
    f_total += sum(value)
print(f_total/len(data))
Why choose imap over map?

Python for loop: Proper implementation of multiprocessing

The following for loop is part of an iterative simulation process and is the main bottleneck regarding computational time:
import numpy as np

class Simulation(object):

    def __init__(self, n_int):
        self.n_int = n_int

    def loop(self):
        for itr in range(self.n_int):
            # some preceding code which updates rows_list and diff with every itr
            cols_red_list = []
            rows_list = list(range(2500))  # row idx for diff where a negative element is known to appear
            diff = np.random.uniform(-1.323, 3.780, (2500, 300))  # np.random.uniform is just used as a toy example
            for row in rows_list:
                col = next(idx for idx, val in enumerate(diff[row,:]) if val < 0)
                cols_red_list.append(col)
            # some subsequent code which uses the cols_red_list data

sim1 = Simulation(n_int=10)
sim1.loop()
Hence, I tried to parallelize it using the multiprocessing package, in the hope of reducing the computation time:
import numpy as np
from multiprocessing import Pool, cpu_count
from functools import partial

def crossings(row, diff):
    return next(idx for idx, val in enumerate(diff[row,:]) if val < 0)

class Simulation(object):

    def __init__(self, n_int):
        self.n_int = n_int

    def loop(self):
        for itr in range(self.n_int):
            # some preceding code which updates rows_list and diff with every itr
            rows_list = list(range(2500))
            diff = np.random.uniform(-1, 1, (2500, 300))
            if __name__ == '__main__':
                num_of_workers = cpu_count()
                print('number of CPUs : ', num_of_workers)
                pool = Pool(num_of_workers)
                cols_red_list = pool.map(partial(crossings, diff=diff), rows_list)
                pool.close()
                print(len(cols_red_list))
            # some subsequent code which uses the cols_red_list data

sim1 = Simulation(n_int=10)
sim1.loop()
Unfortunately, the parallelization turns out to be much slower than the sequential piece of code.
Hence my question: did I use the multiprocessing package properly in this particular example? Are there alternative ways to parallelize the above-mentioned for loop?
Disclaimer: As you're trying to reduce the runtime of your code through parallelisation, this doesn't strictly answer your question, but it might still be a good learning opportunity.
As a golden rule, before moving to multiprocessing to improve performance (execution time), one should first optimise the single-threaded case.
Your
rows_list = list(range(2500))
Generates the numbers 0 to 2499 (that's the range) and stores them in memory (list), which requires time to do the allocation of the required memory and the actual write. You then only use these predictable values once each, by reading them from memory (which also takes time), in a predictable order:
for row in rows_list:
This is particularly relevant to the runtime of your loop function as you do it repeatedly (for itr in range(n_int):).
Instead, consider generating the number only when you need it, without an intermediate store (which conceptually removes any need to access RAM):
for row in range(2500):
Secondly, on top of sharing the same issue (unnecessary accesses to memory), the following:
diff = np.random.uniform(-1, 1, (2500, 300))
# ...
col = next(idx for idx, val in enumerate(diff[row,:]) if val < 0)
seems to me to be optimisable at the level of math (or logic).
What you're trying to do is get a random variable (that col index) by defining it as "the first time I encounter a random variable in [-1;1] that is lower than 0". But notice that figuring out if a random variable with a uniform distribution over [-α;α] is negative, is the same as having a random variable over {0,1} (i.e. a bool).
Therefore, you're now working with bools instead of floats and you don't even have to do the comparison (val < 0) as you already have a bool. This potentially makes the code much faster. Using the same idea as for rows_list, you can generate that bool only when you need it, testing it until it is True (or False, choose one, it doesn't matter obviously). By doing so, you only generate as many random bools as you need, not more and not less (BTW, what happens in your code if none of the 300 elements in the row is negative? ;) ):
import itertools
import random

for _ in range(n_int):
    cols_red_list = []
    for row in range(2500):
        col = next(i for i in itertools.count() if random.getrandbits(1))
        cols_red_list.append(col)
or, with list comprehension:
from itertools import count
from random import getrandbits

cols_red_list = [next(i for i in count() if getrandbits(1))
                 for _ in range(2500)]
I'm sure that, through proper statistical analysis, you could even express that col random variable as a non-uniform variable over [0; limit[, allowing you to compute it much faster.
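For instance, under the assumption above (probability 1/2 per element, and ignoring the 300-element cutoff, which only matters with vanishing probability), a hedged sketch of that statistical shortcut could be:
import numpy as np

# np.random.geometric returns the number of trials until the first "success",
# so subtracting 1 gives the 0-based index of the first negative element
cols_red_list = np.random.geometric(p=0.5, size=2500) - 1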
Please test the performance of an "optimized" version of your single-threaded implementation first. If the runtime is still not acceptable, you should then look into multithreading.
multiprocessing uses system processes (not threads!) for parallelization, which requires expensive IPC (inter-process communication) to share data.
This bites you in two spots:
diff = np.random.uniform(-1, 1, (2500, 300)) creates a large matrix which is expensive to pickle/copy to another process
rows_list = list(range(2500)) creates a smaller list, but the same applies here.
To avoid this expensive IPC, you have one and a half choices:
If on a POSIX-compliant system, initialize your variables on the module level, that way each process gets a quick-and-dirty copy of the required data. This is not scalable as it requires POSIX, weird architecture (you probably don't want to put everything on the module level), and doesn't support sharing changes to that data.
Use shared memory. This only supports mostly primitive data types, but mp.Array should cover your needs.
The second problem is that setting up a pool is expensive, as num_cpu processes need to be started. Your workload is small enough to be negligible compared to this overhead. A good practice is to only create one pool and reuse it.
Here is a quick-and-dirty example of the POSIX only solution:
import numpy as np
from multiprocessing import Pool, cpu_count
from functools import partial

n_int = 10
rows_list = np.array(range(2500))
diff = np.random.uniform(-1, 1, (2500, 300))

def crossings(row, diff):
    return next(idx for idx, val in enumerate(diff[row,:]) if val < 0)

def workload(_):
    cols_red_list = [crossings(row, diff) for row in rows_list]
    print(len(cols_red_list))

class Simulation(object):

    def loop(self):
        num_of_workers = cpu_count()
        with Pool(num_of_workers) as pool:
            pool.map(workload, range(10))
            pool.close()

sim1 = Simulation()
sim1.loop()
For me (and my two cores) this is roughly twice as fast as the sequential version.
Update with shared memory:
import numpy as np
from multiprocessing import Pool, cpu_count, Array
from functools import partial

n_int = 10
ROW_COUNT = 2500

### WORKER

diff = None
result = None

def init_worker(*args):
    global diff, result
    (diff, result) = args

def crossings(i):
    result[i] = next(idx for idx, val in enumerate(diff[i*300:(i+1)*300]) if val < 0)

### MAIN

class Simulation():

    def loop(self):
        num_of_workers = cpu_count()
        diff = Array('d', range(ROW_COUNT*300), lock=False)
        result = Array('i', ROW_COUNT, lock=False)
        # Shared memory needs to be passed when workers are spawned
        pool = Pool(num_of_workers, initializer=init_worker, initargs=(diff, result))
        for i in range(n_int):
            # SLOW, I assume you use a different source of values anyway.
            diff[:] = np.random.uniform(-1, 1, ROW_COUNT*300)
            pool.map(partial(crossings), range(ROW_COUNT))
            print(len(result))
        pool.close()

sim1 = Simulation()
sim1.loop()
A few notes:
Shared memory needs to be set up at worker creation, so it's global anyway.
This still isn't faster than the sequential version, but that's mainly because np.random.uniform's output needs to be copied entirely into shared memory. I assume those are just values for testing, and in reality you'd fill it differently anyway.
I only pass indices to the worker, and use them to read and write values to the shared memory.
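A possible refinement of the second note (my suggestion, not part of the original answer): because both Arrays are created with lock=False, numpy can view them through np.frombuffer without copying, which avoids the element-wise copy into shared memory:
import numpy as np

# Views onto the shared buffers; no data is copied (valid because lock=False)
diff_np = np.frombuffer(diff, dtype='d').reshape(ROW_COUNT, 300)
# result_np likewise gives zero-copy read access to the workers' results
result_np = np.frombuffer(result, dtype='i')
# Writing through the view fills the shared memory directly
diff_np[:] = np.random.uniform(-1, 1, (ROW_COUNT, 300))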

Speed up Python eval when reading and evaluating list of equations from file

I have put together a simple Python script which reads a large list of algebraic expressions from a text file on separate lines, evaluates the mathematics on each line and puts it into a numpy array. The eigenvalues of this matrix are then found. The parameters A,B,C will then be changed and the program run again, hence a function is used to achieve this.
Some of these text files will have millions of lines of equations, so after profiling the code I found that the eval command accounts for approximately 99% of the execution time. I am aware of the dangers of using eval but this code will only ever be used by myself. All other parts of the code are fast, except the call to eval.
Here is the code where mat_size is set to 500 which represents a 500*500 array meaning 250,000 lines of equations are being read in from the file. I cannot provide the file as it is ~ 0.5GB in size, but have provided an example of what it looks like below and it only uses basic mathematical operations.
import numpy as np
from numpy import *
from scipy.linalg import eigvalsh

mat_size = 500

# Read the file line by line
with open("test_file.txt", 'r') as f:
    lines = f.readlines()

# Function to evaluate the maths and build the numpy array
def my_func(A,B,C):
    lst = []
    for i in lines:
        # Strip the \n
        new = eval(i.rstrip())
        lst.append(new)
    # Build the numpy array
    AA = np.array(lst,dtype=np.float64)
    # Resize it to mat_size
    matt = np.resize(AA,(mat_size,mat_size))
    return matt

# Function to find eigenvalues of matrix
def optimise(x):
    A,B,C = x
    test = my_func(A,B,C)
    ev = -1*eigvalsh(test)
    return ev[-(1)]

# Define what A,B,C are, this can be changed each time the program is run
x0 = [7.65,5.38,4.00]
# Print result
print(optimise(x0))
A few lines of an example input text file: (mat_size can be changed to 2 to run this file)
.5/A**3*B**5+C
35.5/A**3*B**5+3*C
.8/C**3*A**5+C**9
.5/A*3+B**5-C/45
I am aware eval is usually bad practice and slow, so I looked for other means of achieving a speed-up. I tried the methods outlined here, but none of these appeared to work. I also tried applying sympy to the problem, but that caused a massive slowdown. What is a better way of going about this problem?
EDIT
Following the suggestion to use numexpr instead, I have come across an issue where it grinds to a halt compared to standard eval. In some instances the matrix elements contain quite long algebraic expressions. Here is an example of just one matrix element, i.e. one of the equations in the file (it contains a few more terms not defined in the code above, but they can easily be defined at the top of the code):
-71*A**3/(A+B)**7-61*B**3/(A+B)**7-3/2/B**2/C**2*A**6/(A+B)**7-7/4/B**3/m3*A**6/(A+B)**7-49/4/B**2/C*A**6/(A+B)**7+363/C*A**3/(A+B)**7*z3+451*B**3/C/(A+B)**7*z3-3/2*B**5/C/A**2/(A+B)**7-3/4*B**7/C/A**3/(A+B)**7-1/B/C**3*A**6/(A+B)**7-3/2/B**2/C*A**5/(A+B)**7-107/2/C/m3*A**4/(A+B)**7-21/2/B/C*A**4/(A+B)**7-25/2*B/C*A**2/(A+B)**7-153/2*B**2/C*A/(A+B)**7-5/2*B**4/C/m3/(A+B)**7-B**6/C**3/A/(A+B)**7-21/2*B**4/C/A/(A+B)**7-7/4/B**3/C*A**7/(A+B)**7+86/C**2*A**4/(A+B)**7*z3+90*B**4/C**2/(A+B)**7*z3-1/4*B**6/m3/A**3/(A+B)**7-149/4/B/C*A**5/(A+B)**7-65*B**2/C**3*A**4/(A+B)**7-241/2*B/C**2*A**4/(A+B)**7-38*B**3/C**3*A**3/(A+B)**7+19*B**2/C**2*A**3/(A+B)**7-181*B/C*A**3/(A+B)**7-47*B**4/C**3*A**2/(A+B)**7+19*B**3/C**2*A**2/(A+B)**7+362*B**2/C*A**2/(A+B)**7-43*B**5/C**3*A/(A+B)**7-241/2*B**4/C**2*A/(A+B)**7-272*B**3/C*A/(A+B)**7-25/4*B**6/C**2/A/(A+B)**7-77/4*B**5/C/A/(A+B)**7-3/4*B**7/C**2/A**2/(A+B)**7-23/4*B**6/C/A**2/(A+B)**7-11/B/C**2*A**5/(A+B)**7-13/B**2/m3*A**5/(A+B)**7-25*B/C**3*A**4/(A+B)**7-169/4/B/m3*A**4/(A+B)**7-27*B**2/C**3*A**3/(A+B)**7-47*B/C**2*A**3/(A+B)**7-27*B**3/C**3*A**2/(A+B)**7-38*B**2/C**2*A**2/(A+B)**7-131/4*B/m3*A**2/(A+B)**7-25*B**4/C**3*A/(A+B)**7-65*B**3/C**2*A/(A+B)**7-303/4*B**2/m3*A/(A+B)**7-5*B**5/C**2/A/(A+B)**7-49/4*B**4/m3/A/(A+B)**7-1/2*B**6/C**2/A**2/(A+B)**7-5/2*B**5/m3/A**2/(A+B)**7-1/2/B/C**3*A**7/(A+B)**7-3/4/B**2/C**2*A**7/(A+B)**7-25/4/B/C**2*A**6/(A+B)**7-45*B/C**3*A**5/(A+B)**7-3/2*B**7/C**3/A/(A+B)**7-123/2/C*A**4/(A+B)**7-37/B*A**4/(A+B)**7-53/2*B*A**2/(A+B)**7-75/2*B**2*A/(A+B)**7-11*B**6/C**3/(A+B)**7-39/2*B**5/C**2/(A+B)**7-53/2*B**4/C/(A+B)**7-7*B**4/A/(A+B)**7-7/4*B**5/A**2/(A+B)**7-1/4*B**6/A**3/(A+B)**7-11/C**3*A**5/(A+B)**7-43/C**2*A**4/(A+B)**7-363/4/m3*A**3/(A+B)**7-11*B**5/C**3/(A+B)**7-45*B**4/C**2/(A+B)**7-451/4*B**3/m3/(A+B)**7-5/C**3*A**6/(A+B)**7-39/2/C**2*A**5/(A+B)**7-49/4/B**2*A**5/(A+B)**7-7/4/B**3*A**6/(A+B)**7-79/2/C*A**3/(A+B)**7-207/2*B**3/C/(A+B)**7+22/B/C**2*A**5/(A+B)**7*z3+94*B/C**2*A**3/(A+B)**7*z3+76*B**2/C**2*A**2/(A+B)**7*z3+130*B**3/C**2*A/(A+B)**7*z3+10*B**5/C**2/A/(A+B)**7*z3+B**6/C**2/A**2/(A+B)**7*z3+3/B**2/C**2*A**6/(A+B)**7*z3+7/B**3/C*A**6/(A+B)**7*z3+52/B**2/C*A**5/(A+B)**7*z3+169/B/C*A**4/(A+B)**7*z3+131*B/C*A**2/(A+B)**7*z3+303*B**2/C*A/(A+B)**7*z3+49*B**4/C/A/(A+B)**7*z3+10*B**5/C/A**2/(A+B)**7*z3+B**6/C/A**3/(A+B)**7*z3-3/4*B**7/C/m3/A**3/(A+B)**7-7/4/B**3/C/m3*A**7/(A+B)**7-49/4/B**2/C/m3*A**6/(A+B)**7-149/4/B/C/m3*A**5/(A+B)**7-293*B/C/m3*A**3/(A+B)**7+778*B**2/C/m3*A**2/(A+B)**7-480*B**3/C/m3*A/(A+B)**7-77/4*B**5/C/m3/A/(A+B)**7-23/4*B**6/C/m3/A**2/(A+B)**7
numexpr completely chokes when the matrix elements are of this form, whereas eval evaluates it instantaneously. For just a 10*10 matrix (100 equations in the file), numexpr takes about 78 seconds to process the file, whereas eval takes 0.01 seconds. Profiling the code that uses numexpr reveals that the getExprNames and precompile functions of numexpr are the cause of the issue, with precompile taking 73.5 seconds of the total time and getExprNames taking 3.5 seconds. Why would precompile cause such a bottleneck in this particular calculation, along with getExprNames? Is this module just not well suited to long algebraic expressions?
I found a way to speed eval() up in this particular instance by making use of the multiprocessing library. I read the file in as usual, but then break the list into equal-sized sub-lists which can be processed separately on different CPUs, and the evaluated sub-lists are recombined at the end. This offers a nice speed-up over the original method. I am sure the code below can be simplified/optimised, but for now it works (for instance, what if there is a prime number of list elements? this would mean unequal lists). Some rough benchmarks show it is ~3 times faster using the 4 CPUs of my laptop. Here is the code:
from multiprocessing import Process, Queue

with open("test.txt", 'r') as h:
    linesHH = h.readlines()

# Get the number of list elements
size = len(linesHH)
# Break apart the list into the desired number of chunks
chunk_size = size/4
chunks = [linesHH[x:x+chunk_size] for x in xrange(0, len(linesHH), chunk_size)]

# Declare variables
A = 0.1
B = 2
C = 2.1
m3 = 1
z3 = 2

# Declare all the functions that process the substrings
def my_funcHH1(A,B,C,que):  # add an argument to the function for assigning a queue to each chunk function
    lstHH1 = []
    for i in chunks[0]:
        HH1 = eval(i)
        lstHH1.append(HH1)
    que.put(lstHH1)

def my_funcHH2(A,B,C,que):
    lstHH2 = []
    for i in chunks[1]:
        HH2 = eval(i)
        lstHH2.append(HH2)
    que.put(lstHH2)

def my_funcHH3(A,B,C,que):
    lstHH3 = []
    for i in chunks[2]:
        HH3 = eval(i)
        lstHH3.append(HH3)
    que.put(lstHH3)

def my_funcHH4(A,B,C,que):
    lstHH4 = []
    for i in chunks[3]:
        HH4 = eval(i)
        lstHH4.append(HH4)
    que.put(lstHH4)

queue1 = Queue()
queue2 = Queue()
queue3 = Queue()
queue4 = Queue()

# Declare the processes
p1 = Process(target=my_funcHH1, args=(A,B,C,queue1))
p2 = Process(target=my_funcHH2, args=(A,B,C,queue2))
p3 = Process(target=my_funcHH3, args=(A,B,C,queue3))
p4 = Process(target=my_funcHH4, args=(A,B,C,queue4))

# Start them
p1.start()
p2.start()
p3.start()
p4.start()

# Get the results from the queues (before joining, to avoid blocking on full queues)
HH1 = queue1.get()
HH2 = queue2.get()
HH3 = queue3.get()
HH4 = queue4.get()

p1.join()
p2.join()
p3.join()
p4.join()

# Obtain the final result by combining the lists together again.
mergedlist = HH1 + HH2 + HH3 + HH4
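As the answer itself notes, the chunking boilerplate can probably be simplified. One hedged possibility (eval_chunk is a made-up name; this relies on a POSIX fork so that the workers inherit A, B, C, m3, z3 and the chunks) is to let a Pool handle the four chunks:
from multiprocessing import Pool

def eval_chunk(chunk):
    # Evaluate one chunk of lines; A, B, C, m3, z3 are inherited from the parent on fork
    return [eval(line) for line in chunk]

if __name__ == '__main__':
    pool = Pool(4)
    results = pool.map(eval_chunk, chunks)
    pool.close()
    pool.join()
    # Flatten the per-chunk lists back into one list, preserving order
    mergedlist = [value for sub in results for value in sub]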
