Hello, I've been working on a huge CSV file that needs similarity tests done. There are 1.16 million rows, and testing the similarity between every pair of rows takes approximately 7 hours. I want to use multiple threads to reduce the time it takes. My function that does the similarity test is:
def similarity():
    for i in range(0, 1000):
        for j in range(i+1, 1000):
            longestSentence = 0
            commonWords = 0
            row1 = dff['Product'].iloc[i]
            row2 = dff['Product'].iloc[j]
            wordsRow1 = row1.split()
            wordsRow2 = row2.split()
            # words that appear in both sentences
            common = list(set(wordsRow1).intersection(wordsRow2))
            if len(wordsRow1) > len(wordsRow2):
                longestSentence = len(wordsRow1)
                commonWords = calculate(common, wordsRow1)
            else:
                longestSentence = len(wordsRow2)
                commonWords = calculate(common, wordsRow2)
            print(i, j, (commonWords / longestSentence) * 100)

def calculate(common, longestRow):  # count how often the common words occur
    sum = 0
    for word in common:
        sum += longestRow.count(word)
    return sum
I am using ThreadPoolExecutor for the multithreading, and this is the code:
with ThreadPoolExecutor(max_workers=500) as executor:
    for result in executor.map(similarity()):
        print(result)
But even if I set max_workers to a huge number, the code runs just as slowly. How can I make the code run faster? Is there another way?
I also tried it with the threading library, but it doesn't work because it just starts threads that do the same job over and over again: if I start 10 threads, it simply runs the whole function 10 times. Thanks in advance for any help.
ThreadPoolExecutor will not actually help much, because a thread pool is meant for I/O-bound tasks. If you were making 500 API calls it would work, but since you are doing heavy CPU-bound work it does not. You should use ProcessPoolExecutor instead, but also note that setting max_workers higher than the number of cores you have will not gain you anything either.
Also, your call is incorrect: executor.map(similarity()) calls similarity() immediately in the main thread, so the pool never distributes any work. map expects a function plus an iterable of arguments.
But I think you also need to change your algorithm to make this work properly. Comparing every row to every other row is O(n²), which for 1.16 million rows is roughly 6.7 * 10^11 pairs, so there is definitely a problem with your time complexity.
from concurrent.futures import ProcessPoolExecutor

values = [3, 4, 5, 6]

def cube(x):
    print(f'Cube of {x}: {x*x*x}')
    return x * x * x

if __name__ == '__main__':
    with ProcessPoolExecutor(max_workers=5) as exe:
        # Submit a single call
        exe.submit(cube, 2)
        # Map the function 'cube' over an iterable
        result = exe.map(cube, values)
        for r in result:
            print(r)
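Applied to the original similarity question, the same pattern might look roughly like the sketch below. This is only a sketch under assumptions, not a drop-in fix: it assumes the dff['Product'] column from the question, picks 4 worker processes arbitrarily, strides the row indices so each worker gets a similar amount of work, and returns the scores instead of printing them; the helper name similarity_for_rows is made up for illustration.

import pandas as pd
from concurrent.futures import ProcessPoolExecutor

# Hypothetical stand-in for the questioner's data; in the real code dff comes from the CSV.
dff = pd.DataFrame({'Product': ['red apple', 'green apple', 'red car', 'blue car']})

def similarity_for_rows(i_chunk):
    # Compare every row index in i_chunk against all later rows and
    # return (i, j, score) tuples instead of printing them.
    products = dff['Product'].tolist()
    scores = []
    for i in i_chunk:
        words_i = products[i].split()
        for j in range(i + 1, len(products)):
            words_j = products[j].split()
            common = set(words_i).intersection(words_j)
            longer = words_i if len(words_i) > len(words_j) else words_j
            common_count = sum(longer.count(w) for w in common)
            scores.append((i, j, (common_count / len(longer)) * 100))
    return scores

if __name__ == '__main__':
    n = len(dff)
    chunks = [range(k, n, 4) for k in range(4)]  # stride the indices so the work is roughly balanced
    with ProcessPoolExecutor(max_workers=4) as exe:
        for chunk_scores in exe.map(similarity_for_rows, chunks):
            for i, j, score in chunk_scores:
                print(i, j, score)

Each worker process gets its own copy of dff, so for the real 1.16-million-row file the CSV load happens once per worker; the O(n²) pair count remains the dominant cost regardless.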
There is a Stack Overflow post that shows a really nice way to calculate the proximity matrix of a RandomForestClassifier():
Proximity Matrix in sklearn.ensemble.RandomForestClassifier
Nevertheless, the for-loop in that script is quite slow if you have a large dataframe. I tried to parallelize this for-loop, but unsuccessfully: I only get 'None' as output.
How can I parallelize this for-loop in Spyder 4 running Python 3.8.5 on Windows 10?
proxMat = 1*np.equal.outer(a, a)
for i in range(1, nTrees):
    a = terminals[:,i]
    proxMat += 1*np.equal.outer(a, a)
Here you want to perform a reduce operation, so parallelization is not obvious.
You did not specify how you tried to parallelize the loop.
A simple way to parallelize it:
import multiprocessing
import numpy as np

pool = multiprocessing.Pool(processes=4)

def get_outer(i):
    return np.equal.outer(terminals[:,i], terminals[:,i])

todo = list(range(1, nTrees))
results = pool.map(get_outer, todo)

proxMat = 1*np.equal.outer(a, a)
for res in results:
    proxMat += res
I'm not sure this one would help, but possibly you'd have fewer pickling problems:
import multiprocessing
import numpy as np

pool = multiprocessing.Pool(processes=4)

def get_outer(t):
    return np.equal.outer(t, t)

# This part might be costly!
terms = [terminals[:,i] for i in range(1, nTrees)]
results = pool.map(get_outer, terms)

proxMat = 1*np.equal.outer(a, a)
for res in results:
    proxMat += res
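One caveat for Spyder 4 on Windows 10, as in the question: multiprocessing there starts workers with the spawn method, so the Pool should be created under an if __name__ == '__main__': guard and the worker function has to live at module level in an importable file. A minimal sketch of the first variant with that guard, still assuming terminals, nTrees and a from the question:

import multiprocessing
import numpy as np

# 'terminals', 'nTrees' and 'a' are assumed to be built at module level
# (outside the guard) so that spawned worker processes can see them.

def get_outer(i):
    # Worker: one outer-equality matrix for column i of 'terminals'.
    return np.equal.outer(terminals[:, i], terminals[:, i])

if __name__ == '__main__':
    with multiprocessing.Pool(processes=4) as pool:
        results = pool.map(get_outer, range(1, nTrees))

    proxMat = 1 * np.equal.outer(a, a)
    for res in results:
        proxMat += res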
I have the following code:
data = [2, 5, 3, 16, 2, 5]

def f(x):
    return 2*x

f_total = 0
for x in data:
    f_total += f(x)
print(f_total/len(data))
and I want to speed up the for loop. (In reality the code is more complex, and I want to run it on a supercomputer with many processing cores.) I have read that I can do this with the multiprocessing library, which lets Python 3 run different chunks of the loop simultaneously, but I am a bit lost with it.
Could you explain to me how to do it with this minimal version of my program?
Thanks!
import multiprocessing
from numpy import random

"""
This sets the number of worker processes that you want to run in parallel.
Depending on the number of cores in your system you should choose an appropriate
number of workers. When you call the 'map' function it will distribute the input
values across that many parts.
"""
NUM_CORES = 6
data = random.rand(100, 1)

"""
+2 so that the cores are not left idle in case a worker is waiting for I/O.
Choose it by performing an empirical analysis depending on the function you are trying to compute.
It could match NUM_CORES as well. You can vary the chunksize too, depending on the size of 'data'.
"""
NUM_WORKERS = NUM_CORES + 2
CHUNKSIZE = int(len(data) / NUM_WORKERS)

def f(x):
    return 2*x

# This creates the pool of worker processes which will be assigned the jobs
pool = multiprocessing.Pool(NUM_WORKERS)

# map vs imap: if the data is large go for imap, otherwise map is also fine.
it = pool.imap(f, data, chunksize=CHUNKSIZE)

f_total = 0
# Iterate over the results and sum them up
for value in it:
    f_total += sum(value)

print(f_total / len(data))
Why choose imap over map?
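Roughly speaking, pool.map blocks until the entire result list has been computed and holds it all in memory, while pool.imap returns a lazy iterator that yields results in order as chunks finish, so you can start consuming them immediately and keep memory flat for large inputs. A minimal sketch of the difference (double is just a toy function):

from multiprocessing import Pool

def double(x):
    return 2 * x

if __name__ == '__main__':
    with Pool(4) as pool:
        # map: returns a complete list, nothing is available until all of it is done
        as_list = pool.map(double, range(1000))

        # imap: returns an iterator, results arrive lazily and in order
        for y in pool.imap(double, range(1000), chunksize=100):
            pass  # consume each result as it becomes available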
I use multiprocessing Pool to run in parallel. I first tried 4 cores on an HPC cluster, submitted with sub. When it uses 4 cores, the time is reduced by a factor of 4 compared to 1 core. But when I check with qstat, it sometimes uses 4 cores and other times just 1 core, with exactly the same code.
Could you please give some advice on what is wrong with my code or the system?
import pandas as pd
import numpy as np
from multiprocessing import Pool
from datetime import datetime
t1 = pd.read_csv("template.csv",header=None)
s1 = pd.read_csv("/home/donp/dude_1000_raw_raw/dude_1000_raw_raw_adfr.csv")
s2 = pd.read_csv("/home/donp/dude_1000_raw_raw/dude_1000_raw_raw_dock.csv")
s3 = pd.read_csv("/home/donp/dude_1000_raw_raw/dude_1000_raw_raw_gemdock.csv")
s4 = pd.read_csv("/home/donp/dude_1000_raw_raw/dude_1000_raw_raw_ledock.csv")
s5 = pd.read_csv("/home/donp/dude_1000_raw_raw/dude_1000_raw_raw_plants.csv")
s6 = pd.read_csv("/home/donp/dude_1000_raw_raw/dude_1000_raw_raw_psovina.csv")
s7 = pd.read_csv("/home/donp/dude_1000_raw_raw/dude_1000_raw_raw_quickvina2.csv")
s8 = pd.read_csv("/home/donp/dude_1000_raw_raw/dude_1000_raw_raw_smina.csv")
s9 = pd.read_csv("/home/donp/dude_1000_raw_raw/dude_1000_raw_raw_vina.csv")
s10 = pd.read_csv("/home/donp/dude_1000_raw_raw/dude_1000_raw_raw_vinaxb.csv")
#number of core and arrays
n = 4
m = (len(t1) // n)+1
g= m*n - len(t1)
for g1 in range(g):
    t1.loc[len(t1)] = 0

results = []

def block_linear(i):
    temp = pd.DataFrame(np.zeros((m, 29)))
    for a in range(0, m):
        sum_matrix = (t1.iloc[a,0]*s1) + (t1.iloc[a,1]*s2) + (t1.iloc[a,2]*s3) + (t1.iloc[a,3]*s4) + (t1.iloc[a,4]*s5) + (t1.iloc[a,5]*s6) + (t1.iloc[a,6]*s7) + (t1.iloc[a,7]*s8) + (t1.iloc[a,8]*s9) + (t1.iloc[a,9]*s10)
        rank_sum = pd.DataFrame.rank(sum_matrix, axis=0, ascending=True, method='min')  # real-True
        temp.iloc[a,:] = rank_sum.iloc[999].values
    temp['median'] = temp.median(axis=1)
    temp.index = range(i*m, (i+1)*m)
    return temp

start = datetime.now()
if __name__ == '__main__':
    pool = Pool(processes=n)
    results = pool.map(block_linear, range(0, n))
print(datetime.now() - start)
out = pd.concat(results)
out.drop(out.tail(g).index, inplace=True)
out.to_csv('test_10dock_4core.csv', index=False)
The main idea is to cut the large table into smaller ones, run the calculations, and combine the results.
Without more detail on how you use multiprocessing's Pool it is really difficult to understand the problem and help. Please note that Pool does not guarantee parallelization: the apply function, for example, only uses one worker of the Pool and blocks until it finishes. You can check out more details about it here and there.
But assuming you are using the library properly, you should make sure your code is fully parallelizable: an I/O operation on disk, for example, can bottleneck your parallelization and make your code effectively run in only one process at a time.
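To illustrate the point about apply blocking, a tiny self-contained comparison (a toy example, not related to the dock-score code):

from multiprocessing import Pool
import time

def work(x):
    time.sleep(1)
    return x * x

if __name__ == '__main__':
    with Pool(4) as pool:
        # apply: each call runs on one worker and blocks until it returns (~4 s total here)
        serial = [pool.apply(work, (i,)) for i in range(4)]

        # map: the four tasks run on four workers at once (~1 s total)
        parallel = pool.map(work, range(4))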
I hope it helped.
[Edit]
Since you provided more details about your problem, I can give more specific tips:
The first thing is that your code is not parallel at all: you are just calling the same function N times (block_linear does the same work regardless of i, apart from setting the index). This is not how multiprocessing should be used.
Instead, the part that should be parallelized is the work that usually sits inside a for loop, like the one you have inside block_linear().
So, here is what I recommend:
You should change your code to first calculate all your weighted sums and only after that do the rest of the operations. This will help a lot with parallelization.
So, put this operation in a function:
def weighted_sum(column, df2):
    temp = pd.DataFrame(np.zeros(m))
    for a in range(0, m):
        result = (t1.iloc[a,column]*df2)
        temp.iloc[a] = result
    return temp
Then you use pool.starmap to parallelize the function over the 10 dataframes you have, something like this:
results = pool.starmap(weighted_sum, [(0,s1), (1,s2), (2,s3), ..., (9,s10)])
PS: pool.starmap is similar to pool.map but accepts a list of argument tuples. You can find more details about it here.
Last but not least, you should operate on your results to finish the calculation. Since you will have one weighted_sum per column, you can sum them together and then compute the rank_sum.
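A rough sketch of that combining step, assuming results is the list of same-shaped dataframes returned by pool.starmap above:

# Sum the 10 weighted contributions element-wise, then rank as in the original code.
sum_matrix = results[0]
for partial in results[1:]:
    sum_matrix = sum_matrix + partial

rank_sum = sum_matrix.rank(axis=0, ascending=True, method='min')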
This is not fully runnable code that solves your problem, but a general guide on how you should restructure your code to benefit from multiprocessing. I recommend testing it on a subsample of the data frames to make sure it works properly before you run it on all your data.
I am using the threading library to accelerate the calculation of each point's neighborhood in a point cloud, by calling the function CalculateAllPointsNeighbors shown at the bottom of the post.
The function receives a search radius, a maximum number of neighbors, and a number of threads to split the work across. No changes are made to any of the points, and each point stores its data in its own np.ndarray cell, accessed by its own index.
The following function times how long it takes N threads to finish calculating all point neighborhoods:
def TimeFuncThreads(classObj, uptothreads):
    listTimers = []
    startNum = 1
    EndNum = uptothreads + 1
    for i in range(startNum, EndNum):
        print("Current Number of Threads to Test: ", i)
        tempT = time.time()
        classObj.CalculateAllPointsNeighbors(searchRadius=0.05, maxNN=25, maxThreads=i)
        tempT = time.time() - tempT
        listTimers.append(tempT)
    PlotXY(np.arange(startNum, EndNum), listTimers)
The problem is that I've been getting very different results on each run. Here are the plots from 5 consecutive runs of TimeFuncThreads (the X axis is the number of threads, Y is the runtime). First, they look totally random, and second, there is no significant speedup.
I'm now confused about whether I'm using the threading library wrong, and what this behavior I'm seeing means.
The function that handles the threading and the function that is being called from each thread:
def CalculateAllPointsNeighbors(self, searchRadius=0.20, maxNN=50, maxThreads=8):
    threadsList = []
    pointsIndices = np.arange(self.numberOfPoints)
    splitIndices = np.array_split(pointsIndices, maxThreads)
    for i in range(maxThreads):
        threadsList.append(threading.Thread(target=self.GetPointsNeighborsByID,
                                            args=(splitIndices[i], searchRadius, maxNN)))
    [t.start() for t in threadsList]
    [t.join() for t in threadsList]

def GetPointsNeighborsByID(self, idx, searchRadius=0.05, maxNN=20):
    if isinstance(idx, int):
        idx = [idx]
    for currentPointIndex in idx:
        currentPoint = self.pointsOpen3D.points[currentPointIndex]
        pointNeighborhoodObject = self.GetPointNeighborsByCoordinates(currentPoint, searchRadius, maxNN)
        self.pointsNeighborsArray[currentPointIndex] = pointNeighborhoodObject
        self.__RotatePointNeighborhood(currentPointIndex)
It pains me to be the one to introduce you to the Python GIL (Global Interpreter Lock). It is a very nice feature that makes parallelism using threads in Python a nightmare: only one thread can execute Python bytecode at a time, so CPU-bound threads do not actually run concurrently.
If you really want to improve your code's speed, you should be looking at the multiprocessing module.
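For the neighborhood search in the question, a process-based version could look roughly like the sketch below. Everything here is illustrative: neighbors_for_chunk is a made-up stand-alone worker doing a brute-force radius search on a plain coordinate array, because the bound methods and the Open3D-backed object from the question would have to be made picklable before a Pool could use them, and each worker receives its own copy of the points.

import multiprocessing
import numpy as np

def neighbors_for_chunk(args):
    # Hypothetical worker: brute-force neighborhood search for a chunk of point indices.
    points, chunk, search_radius, max_nn = args
    result = {}
    for i in chunk:
        d = np.linalg.norm(points - points[i], axis=1)
        nearest = np.argsort(d)[1:max_nn + 1]  # skip the point itself
        result[i] = nearest[d[nearest] <= search_radius]
    return result

if __name__ == '__main__':
    points = np.random.rand(10000, 3)
    chunks = np.array_split(np.arange(len(points)), 8)
    tasks = [(points, chunk, 0.05, 25) for chunk in chunks]
    with multiprocessing.Pool(processes=8) as pool:
        neighbors = {}
        for partial in pool.map(neighbors_for_chunk, tasks):
            neighbors.update(partial)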
I have to solve a complex network optimization problem using a heuristic algorithm (e.g. an ant algorithm). The algorithm is decomposed into two steps: 1) calculate new solutions using a random component, 2) evaluate the new solutions. These two parts are very time-consuming, so I run them in parallel using multiprocessing in subprograms.

With an increasing number of iterations, the duration of the steps grows very fast. I localized the delay to the gap between the initialization of the parallel processes (labels timeMainCalculate and timeMainEvaluate) and the start of the first subprogram (labels timeSubCalculate and timeSubEvaluate). In the first iteration the difference timeMainCalculate - timeSubCalculate (and likewise timeMainEvaluate - timeSubEvaluate) is nearly 0; after 100 iterations it is about 10 seconds for both steps, and after 200 iterations it is about 20 seconds. The difference increases linearly, while the time spent on calculation and evaluation inside the subprograms stays constant. So it might be a problem in the communication between the main program and the subprograms via multiprocessing.Pool.
For your information: I'm using Python 3.3 on an eight-core computer.
opt_heuristic.py:
import multiprocessing
import datetime
import calculate, evaluate

epsilon = 1e-10
nbOfCpusForParallel = 6
nbCalculation = 6

def calculate_bound_update_information(result):
    # Do some calculation using result
    return [bound, x, y, z]

if __name__ == '__main__':
    # Initialize bound, x, y, z
    while bound > epsilon:
        # Calculate new solutions
        pool = multiprocessing.Pool(processes=nbOfCpusForParallel)
        result_parallel = list()
        for i in range(nbCalculation):
            result_parallel.append(pool.apply_async(calculate.main, [x, y, z]))
        timeMainCalculate = datetime.datetime.now()
        pool.close()
        pool.join()
        resultCalculation = [result_parallel[i].get() for i in range(nbCalculation)]

        # Evaluate solutions
        pool = multiprocessing.Pool(processes=nbOfCpusForParallel)
        argsEvaluate = [[resultCalculation[i][0], resultCalculation[i][1]] for i in range(len(resultCalculation))]
        result_evaluate = list()
        for i in range(len(resultCalculation)):
            result_evaluate.append(pool.apply_async(evaluate.main, argsEvaluate[i]))
        timeMainEvaluate = datetime.datetime.now()
        pool.close()
        pool.join()
        resultEvaluation = [result_evaluate[i].get() for i in range(len(resultCalculation))]

        [bound, x, y, z] = calculate_bound_update_information(resultEvaluation)
calculate.py:
import datetime

def main(x, y, z):
    timeSubCalculate = datetime.datetime.now()
    # Do some random calculation using x, y, z
    return result
evaluate.py:
import datetime

def main(x, y):
    timeSubEvaluate = datetime.datetime.now()
    # Do some evaluation using x, y
    return result
It seems to me that the main program stores some information about the parallel processes. I tried a few things, like deleting the pool variable, but without success.
Does somebody have an idea what the technical problem is and how it could be solved? Thanks.