I'm brand new to multi-threaded processing, so please forgive me if I butcher terms or miss something obvious.
The code below offers no time advantage over code that calls the same two functions one after the other.
import time
import threading
start_time = time.clock()
def fibonacci(nth): #can be ignored
    first = 0
    second = 1
    for i in range(nth):
        third = first + second
        first = second
        second = third
    print "Fibonacci number", i + 1, "is", len(str(first)), "digits long"

def collatz(collatz_max): #can be ignored
    for n in range(collatz_max):
        n = n + 1 #avoid entering 0
        solution = []
        solution.append(n)
        while n != 1:
            if n % 2 == 0:
                n = n / 2
            else:
                n = (n*3) + 1
            solution.append(n)
    print "Path for Collatz number", collatz_max, "is", solution

def scripts():
    thread_fibonacci = threading.Thread(target=fibonacci, args=(800000,))
    thread_collatz = threading.Thread(target=collatz, args=(400000,))

    thread_fibonacci.start()
    thread_collatz.start()

    return thread_fibonacci, thread_collatz

all_scripts = scripts()

#wait until both threads are finished
for script in all_scripts:
    script.join()

print time.clock() - start_time, "seconds"
What do I need to do to make the threads run simultaneously? Does the GIL mean concurrency can only be achieved through separate processes? If so, what is the point of multithreading?
Using Python 2.7.5 on Windows 8.1, quad-core processor. Any help would be appreciated.
There are good answers regarding the GIL you can look at.
In short, if your tasks are CPU-bound (like the ones you posted), threads are not going to help you. Python threads are good for IO-bound tasks, like retrieving a web page.
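If you do want the two CPU-bound functions above to run on separate cores, multiprocessing is the usual route. Here is a minimal sketch, assuming the fibonacci and collatz functions from the question are already defined:

import multiprocessing

if __name__ == '__main__':
    # each Process gets its own interpreter, so the GIL no longer serializes the work
    p1 = multiprocessing.Process(target=fibonacci, args=(800000,))
    p2 = multiprocessing.Process(target=collatz, args=(400000,))
    p1.start()
    p2.start()
    p1.join()
    p2.join()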
Related
Recently started learning parallel programming on my own and I have next to no idea what I'm doing. I tried applying what I've learnt, but I think I'm doing something wrong because my parallel code takes longer to execute than my serial code. My PC is running an i7-9700. This is the original serial code in question:
def getMatrix(name):
    matrixCreated = []
    i = 0
    while True:
        i += 1
        row = input('\nEnter elements in row %s of Matrix %s (separated by commas)\nOr -1 to exit: ' %(i, name))
        if row == '-1':
            break
        else:
            strList = row.split(',')
            matrixCreated.append(list(map(int, strList)))
    return matrixCreated

def getColAsList(matrixToManipulate, col):
    myList = []
    numOfRows = len(matrixToManipulate)
    for i in range(numOfRows):
        myList.append(matrixToManipulate[i][col])
    return myList

def getCell(matrixA, matrixB, r, c):
    matrixBCol = getColAsList(matrixB, c)
    lenOfList = len(matrixBCol)
    productList = [matrixA[r][i]*matrixBCol[i] for i in range(lenOfList)]
    return sum(productList)

matrixA = getMatrix('A')
matrixB = getMatrix('B')

rowA = len(matrixA)
colA = len(matrixA[0])
rowB = len(matrixB)
colB = len(matrixB[0])

result = [[0 for p in range(colB)] for q in range(rowA)]

if (colA != rowB):
    print('The two matrices cannot be multiplied')
else:
    print('\nThe result is')
    for i in range(rowA):
        for j in range(colB):
            result[i][j] = getCell(matrixA, matrixB, i, j)
        print(result[i])
EDIT: This is the parallel code with the time library. I initially didn't include it as I thought it was wrong, so I just wanted to see if anyone had ideas on how to parallelize it instead.
import multiprocessing as mp
pool = mp.Pool(mp.cpu_count())

def getMatrix(name):
    matrixCreated = []
    i = 0
    while True:
        i += 1
        row = input('\nEnter elements in row %s of Matrix %s (separated by commas)\nOr -1 to exit: ' %(i, name))
        if row == '-1':
            break
        else:
            strList = row.split(',')
            matrixCreated.append(list(map(int, strList)))
    return matrixCreated

def getColAsList(matrixToManipulate, col):
    myList = []
    numOfRows = len(matrixToManipulate)
    for i in range(numOfRows):
        myList.append(matrixToManipulate[i][col])
    return myList

def getCell(matrixA, matrixB, r, c):
    matrixBCol = getColAsList(matrixB, c)
    lenOfList = len(matrixBCol)
    productList = [matrixA[r][i]*matrixBCol[i] for i in range(lenOfList)]
    return sum(productList)

matrixA = getMatrix('A')
matrixB = getMatrix('B')

rowA = len(matrixA)
colA = len(matrixA[0])
rowB = len(matrixB)
colB = len(matrixB[0])

import time
start_time = time.time()

result = [[0 for p in range(colB)] for q in range(rowA)]

if (colA != rowB):
    print('The two matrices cannot be multiplied')
else:
    print('\nThe result is')
    for i in range(rowA):
        for j in range(colB):
            result[i][j] = getCell(matrixA, matrixB, i, j)
        print(result[i])

print(" %s seconds " % (time.time() - start_time))

results = [pool.apply(getMatrix, getColAsList, getCell)]
pool.close()
So I would agree that you are doing something wrong. I would say that your code is not parallelizable.
For code to be parallelizable it has to be divisible into smaller pieces, and it has to be either:
1. Independent, meaning that when it runs it doesn't rely on other processes to do its job.
For example, say I have a list with 1,000,000 objects that need to be processed, and 4 workers to process them with. I give each worker 1/4 of the objects to process, and when they all finish, every object has been processed. Worker 3 doesn't care whether worker 1, 2 or 4 completed before or after it did, nor does worker 3 care about what worker 1, 2 or 4 returned or did. It actually shouldn't even know that there are any other workers out there.
2. Managed, meaning there are dependencies between workers, but that's OK because you have a main thread that coordinates them. Still, the workers shouldn't know or care about each other. Think of them as mindless muscle: they only do what you tell them to do, not think for themselves.
For example, I have a list with 1,000,000 objects that need to be processed. First, all objects need to go through func1, which returns something. Once ALL objects are done with func1, those results should then go into func2. So I create 4 workers, give each worker 1/4 of the objects, and have them process them with func1 and return the results. I wait for all workers to finish processing the objects. Then I give each worker 1/4 of the results returned by func1 and have them process those with func2. I can keep doing this as many times as I want; all I have to do is have the main thread coordinate the workers so they don't start when they aren't supposed to, and tell them what and when to process (see the sketch after the note below).
Take this with a grain of salt as this is a simplified version of parallel processing.
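A minimal sketch of that managed, two-phase pattern using multiprocessing.Pool; the func1/func2 bodies and the example data are made up purely for illustration:

from multiprocessing import Pool

def func1(obj):      # stage 1: each worker processes its objects independently
    return obj * 2

def func2(obj):      # stage 2: only runs after every func1 result has been collected
    return obj + 1

if __name__ == '__main__':
    objects = list(range(1_000_000))
    with Pool(4) as pool:                   # 4 workers
        stage1 = pool.map(func1, objects)   # main thread waits for ALL of func1 to finish
        stage2 = pool.map(func2, stage1)    # then hands those results to func2
    print(stage2[:5])

The main process is the only coordinator here: the workers never talk to each other, they just receive chunks and return results.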
Tips for parallelism and concurrency
You shouldn't get user input in parallel. Only the main thread should handle that.
If your work load is light then you shouldn't use parallel processing.
If your task can't be divided up into smaller pieces then it's not parallelizable. But it can still be run on a background thread as a way of running something concurrently.
Concurrency Example:
If your task is long-running and not parallelizable, let's say it takes 10 minutes to complete, and it requires a user to give input, then when the user gives input, start the task on a worker. If the user gives input again 1 minute later, take that input and start the 2nd task on worker 2. Input at 5 minutes starts task 3 on worker 3. At the 10-minute mark task 1 is complete. Because everything is running concurrently, by the 15-minute mark all tasks are complete. That's 2x faster than running the tasks in serial, which would take 30 minutes. However, this is concurrency, not parallelism.
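A minimal sketch of that kind of concurrency with concurrent.futures; the long_task function and the shortened 5-second duration are made up for illustration, and only the main thread reads input:

from concurrent.futures import ThreadPoolExecutor
import time

def long_task(job_id):
    time.sleep(5)                # stands in for the long-running, non-parallelizable job
    return "task %d done" % job_id

if __name__ == '__main__':
    futures = []
    with ThreadPoolExecutor(max_workers=3) as pool:
        for job_id in range(3):
            input("Press Enter to start task %d: " % job_id)  # main thread handles input
            futures.append(pool.submit(long_task, job_id))    # work runs concurrently on a worker
        for f in futures:
            print(f.result())                                 # collect results as tasks finish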
In order to get more speed on my Mac using Python I use the fork feature. My Mac has 4 cores, and I noticed that if I use 4 forks, my program speeds up by 3.7 times; any additional forks do not make the program faster. I'm not even certain whether successful forking relies on the number of cores; in fact I know very little about what is really going on, I just know that it works. I realize that on gcloud vCPU = 5 does not necessarily mean there are 5 cores, but I was hoping that more CPUs would somehow help the forking process go faster. In any case, I put the following Python program on a 16 vCPU gcloud machine and saw no increase in speed. The program just counts up to 250,000,000. Using 4 forks on my Mac it takes 16 seconds, but using 16 forks on the 16 vCPU gcloud it takes 18 seconds.
import functools, time, os

p = print

def print_intervals(number, interval, fork=None, total=0, print=True):
    if number > 0 and number % interval == 0 and number >= interval:
        if total:
            per = int((number / total) * 100)
            number = f"{number} - {per}%"
        if fork == None:
            p(number)
        else:
            p(f"fork {fork}")
            p(number)
    return

def timer(func):
    """Print the runtime of the decorated function"""
    @functools.wraps(func)
    def wrapper_timer(*args, **kwargs):
        start_time = time.perf_counter()
        value = func(*args, **kwargs)
        end_time = time.perf_counter()
        run_time = end_time - start_time
        run_time = round(run_time, 2)
        print(f"Finished {func.__name__!r} in {run_time} secs")
        return value
    return wrapper_timer

@timer
def temp1(**kwargs):
    start = kwargs['start']
    stop = kwargs['stop']
    fork_num = kwargs['fork_num']
    for x in range(start, stop):
        print_intervals(x, 10_000_000)
        z = x + 1
    p(f'done fork {fork_num}')

def divide_range(divisions: int, total: int, idx: int):
    sec = total // divisions
    start = idx * sec
    if total % divisions != 0 and idx == divisions:
        stop = total
    else:
        stop = start + sec
    return start, stop

def main_fork(func, total, **kwargs):
    forks = 16
    fake = kwargs.get("fake")
    for i in range(forks):
        start1, stop1 = 0, 0
        if total != -1:
            start1, stop1 = divide_range(forks, total, i)
        p(f'fork num {i} {start1} {stop1}')
        if not fake:
            newpid = os.fork()
        kwargs['start'] = start1
        kwargs['stop'] = stop1
        kwargs['fork_num'] = i
        if fake and i > 0:
            pass
        elif fake:
            func(**kwargs)
        elif newpid == 0:
            child(func, **kwargs)
    return

def child(func, **kwargs):
    func(**kwargs)
    os._exit(0)

main_fork(temp1, 250_000_000, **{})
I tried it again and this time it worked. A 16 vCPU N1 machine sped up the program by a factor of 5. I'm not sure what I did different but the only thing I can think is that I was actually using the 1 vCPU computer when I thought I was using the 16 vCPU computer.
Very late to this, however, you may want to see the built-in multiprocessing library.
Python was created before CPUs with multiple cores were common, so the interpreter only executes one thread at a time, no matter how many threading.Thread instances you start.
When using the multiprocessing library, you're effectively running Python x times. For example, you'd be running a different Python instance on each core of the CPU to speed things up.
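A minimal multiprocessing sketch of the same range-splitting idea; the count_range helper and the chunking are made up for illustration and are not part of the original fork code:

from multiprocessing import Pool

def count_range(bounds):
    start, stop = bounds
    for _ in range(start, stop):    # stands in for the counting loop in the question
        pass

if __name__ == '__main__':
    total, workers = 250_000_000, 16
    chunk = total // workers
    bounds = [(i * chunk, (i + 1) * chunk) for i in range(workers)]
    with Pool(workers) as pool:     # one process per chunk, no manual os.fork bookkeeping
        pool.map(count_range, bounds)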
I wrote this program to properly learn how to use multi-threading. I want to implement something similar to this in my own program:
import numpy as np
import time
import os
import math
import random
from threading import Thread

def powExp(x, r):
    for c in range(x.shape[1]):
        x[r][c] = math.pow(100, x[r][c])

def main():
    print()
    rows = 100
    cols = 100

    x = np.random.random((rows, cols))
    y = x.copy()

    start = time.time()
    threads = []
    for r in range(x.shape[0]):
        t = Thread(target = powExp, args = (x, r))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()
    end = time.time()
    print("Multithreaded calculation took {n} seconds!".format(n = end - start))

    start = time.time()
    for r in range(y.shape[0]):
        for c in range(y.shape[1]):
            y[r][c] = math.pow(100, y[r][c])
    end = time.time()
    print("Singlethreaded calculation took {n} seconds!".format(n = end - start))
    print()

    randRow = random.randint(0, rows - 1)
    randCol = random.randint(0, cols - 1)
    print("Checking random indices in x and y:")
    print("x[{rR}][{rC}]: = {n}".format(rR = randRow, rC = randCol, n = x[randRow][randCol]))
    print("y[{rR}][{rC}]: = {n}".format(rR = randRow, rC = randCol, n = y[randRow][randCol]))
    print()

    for r in range(x.shape[0]):
        for c in range(x.shape[1]):
            if(x[r][c] != y[r][c]):
                print("ERROR NO WORK WAS DONE")
                print("x[{r}][{c}]: {n} == y[{r}][{c}]: {ny}".format(
                    r = r,
                    c = c,
                    n = x[r][c],
                    ny = y[r][c]
                ))
                quit()

    assert(np.array_equal(x, y))

if __name__ == main():
    main()
As you can see from the code, the goal here is to parallelize the operation math.pow(100, x[r][c]) by creating a thread for every row. However, this code is extremely slow, a lot slower than the single-threaded version.
Output:
Multithreaded calculation took 0.026447772979736328 seconds!
Singlethreaded calculation took 0.006798267364501953 seconds!
Checking random indices in x and y:
x[58][58]: = 9.792315687115973
y[58][58]: = 9.792315687115973
I searched through stackoverflow and found some info about the GIL forcing python bytecode to be executed on a single core only. However I'm not sure that this is in fact what is limiting my parallelization. I tried rearranging the parallelized for-loop using pools instead of threads. Nothing seems to be working.
Python code performance decreases with threading
EDIT: This thread discusses the same issue. Is it completely impossible to increase performance using multithreading in Python because of the GIL? Is the GIL causing my slowdowns?
EDIT 2 (2017-01-18): So from what I can gather after searching online for quite a bit, it seems like Python is really bad for parallelism. What I'm trying to do is parallelize a Python function used in a neural network implemented in TensorFlow...it seems like adding a custom op is the way to go.
The number of issues here is quite... numerous. Too many (system!) threads that do too little work, the GIL, etc. This is what I consider a really good introduction to parallelism in Python:
https://www.youtube.com/watch?v=MCs5OvhV9S4
Live coding is awesome.
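For element-wise work like this, a vectorized NumPy call is usually the first thing to try, since it pushes the loop into C and sidesteps both the thread overhead and the GIL. A minimal sketch (not from the original post):

import numpy as np

x = np.random.random((100, 100))
y = np.power(100, x)    # one vectorized call replaces the per-element math.pow loop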
I am trying to come up with a way to have threads work on the same goal without interfering. In this case I am using 4 threads to add up every number between 0 and 90,000. This code runs but it ends almost immediately (Runtime: 0.00399994850159 sec) and only outputs 0. Originally I wanted to do it with a global variable, but I was worried about the threads interfering with each other (i.e. the small chance that two threads double count or skip a number due to strange timing of the reads/writes). So instead I distributed the workload beforehand. If there is a better way to do this please share. This is my simple way of trying to get some experience with multithreading. Thanks
import threading
import time
start_time = time.time()
tot1 = 0
tot2 = 0
tot3 = 0
tot4 = 0
def Func(x,y,tot):
    tot = 0
    i = y-x
    while z in range(0,i):
        tot = tot + i + z

# class Tester(threading.Thread):
#     def run(self):
#         print(n)
w = threading.Thread(target=Func, args=(0,22499,tot1))
x = threading.Thread(target=Func, args=(22500,44999,tot2))
y = threading.Thread(target=Func, args=(45000,67499,tot3))
z = threading.Thread(target=Func, args=(67500,89999,tot4))
w.start()
x.start()
y.start()
z.start()
w.join()
x.join()
y.join()
z.join()
# while (w.isAlive() == False | x.isAlive() == False | y.isAlive() == False | z.isAlive() == False): {}
total = tot1 + tot2 + tot3 + tot4
print total
print("--- %s seconds ---" % (time.time() - start_time))
You have a bug that makes this program end almost immediately. Look at while z in range(0,i): in Func. z isn't defined in the function, and it's only by luck (bad luck really) that you happen to have a global variable z = threading.Thread(target=Func, args=(67500,89999,tot4)) that masks the problem. You are testing whether the thread object is in a list of integers... and it's not!
The next problem is with the global variables. First, you are absolutely right that using a single global variable is not thread safe. The threads would mess with each other's calculations. But you misunderstand how globals work. When you do threading.Thread(target=Func, args=(67500,89999,tot4)), Python passes the object currently referenced by tot4 to the function, but the function has no idea which global it came from. You only update the local variable tot and discard it when the function completes.
A solution is to use a global container to hold the calculations, as shown in the example below. Unfortunately, this is actually slower than just doing all the work in one thread. The Python global interpreter lock (GIL) only lets one thread run at a time, so threading only slows down CPU-intensive tasks implemented in pure Python.
You could look at the multiprocessing module to split this into multiple processes. That works well if the cost of running the calculation is large compared to the cost of starting the process and passing it data.
Here is a working copy of your example:
import threading
import time
start_time = time.time()
tot = [0] * 4
def Func(x,y,tot_index):
    my_total = 0
    i = y-x
    for z in range(0,i):
        my_total = my_total + i + z
    tot[tot_index] = my_total

# class Tester(threading.Thread):
#     def run(self):
#         print(n)
w = threading.Thread(target=Func, args=(0,22499,0))
x = threading.Thread(target=Func, args=(22500,44999,1))
y = threading.Thread(target=Func, args=(45000,67499,2))
z = threading.Thread(target=Func, args=(67500,89999,3))
w.start()
x.start()
y.start()
z.start()
w.join()
x.join()
y.join()
z.join()
# while (w.isAlive() == False | x.isAlive() == False | y.isAlive() == False | z.isAlive() == False): {}
total = sum(tot)
print total
print("--- %s seconds ---" % (time.time() - start_time))
You can pass in a mutable object to collect your results, either one with an identifier (e.g. a dict) or just a list that you append() the results to, e.g.:
import threading
def Func(start, stop, results):
    results.append(sum(range(start, stop+1)))
rngs = [(0, 22499), (22500, 44999), (45000, 67499), (67500, 89999)]
results = []
jobs = [threading.Thread(target=Func, args=(start, stop, results)) for start, stop in rngs]
for j in jobs:
    j.start()
for j in jobs:
    j.join()
print(sum(results))
# 4049955000
# 100 loops, best of 3: 2.35 ms per loop
As others have noted, you could look at multiprocessing in order to split the work across multiple processes that can run in parallel. This would especially benefit CPU-intensive tasks, assuming there isn't a huge amount of data to pass between the processes.
Here's a simple implementation of the same functionality using multiprocessing:
from multiprocessing import Pool
POOL_SIZE = 4
NUMBERS = 90000
def func(_range):
    tot = 0
    for z in range(*_range):
        tot += z
    return tot

with Pool(POOL_SIZE) as pool:
    chunk_size = int(NUMBERS / POOL_SIZE)
    chunks = ((i, i + chunk_size) for i in range(0, NUMBERS, chunk_size))
    print(sum(pool.imap(func, chunks)))
In the above, chunks is a generator that produces the same ranges that were hardcoded in the original version. It's given to imap, which works the same as the standard map except that it executes the function in the processes within the pool.
A lesser-known fact about multiprocessing is that you can easily convert the code to use threads instead of processes by using the undocumented multiprocessing.pool.ThreadPool. In order to convert the above example to use threads, just change the import to:
from multiprocessing.pool import ThreadPool as Pool
I'm slowly switching to Python and I wanted to make a simple test for comparing the performance of a simple array summation. I generate a random 1000x1000 array and add one to each of the values in this array.
Here my script in Python :
import time
import numpy
from numpy.random import random
def testAddOne(data):
    """
    Test addOne
    """
    return data + 1

i = 1000
data = random((i,i))

start = time.clock()
for x in xrange(1000):
    testAddOne(data)
stop = time.clock()

print stop - start
And my function in MATLAB:
function test
%parameter declaration
c = rand(1000);

tic
for t = 1:1000
    testAddOne(c);
end
fprintf('Structure: \n')
toc
end

function testAddOne(c)
c = c + 1;
end
The Python version takes 2.77-2.79 seconds, about the same as the MATLAB function (I'm actually quite impressed by NumPy!). What would I have to change in my Python script to use multithreading? I can't in MATLAB since I don't have the toolbox.
Multithreading in Python is only useful for situations where threads get blocked, e.g. waiting on input, which is not the case here (see the answers to this question for more details). However, multiprocessing is easy to do in Python. Multiprocessing in general is covered here.
A program taking a similar approach to your example is below
import time
import numpy
from numpy.random import random
from multiprocessing import Process
def testAddOne(data):
    return data + 1

def testAddN(data,N):
    # print "testAddN", N
    for x in xrange(N):
        testAddOne(data)

if __name__ == '__main__':
    matrix_size = 1000
    num_adds = 10000
    num_processes = 4

    data = random((matrix_size,matrix_size))

    start = time.clock()
    if num_processes > 1:
        processes = [Process(target=testAddN, args=(data,num_adds/num_processes))
                     for i in range(num_processes)]
        for p in processes:
            p.start()
        for p in processes:
            p.join()
    else:
        testAddN(data,num_adds)
    stop = time.clock()

    print "Elapsed", stop - start
A more useful example using a pool of worker processes to successively add 1 to different matrices is below.
import time
import numpy
from numpy.random import random
from multiprocessing import Pool
def testAddOne(data):
    return data + 1

def testAddN(dataN):
    data, N = dataN
    for x in xrange(N):
        data = testAddOne(data)
    return data

if __name__ == '__main__':
    num_matrices = 4
    matrix_size = 1000
    num_adds_per_matrix = 2500
    num_processes = 4

    inputs = [(random((matrix_size,matrix_size)), num_adds_per_matrix)
              for i in range(num_matrices)]
    #print inputs # test using, e.g., matrix_size = 2

    start = time.clock()
    if num_processes > 1:
        proc_pool = Pool(processes=num_processes)
        outputs = proc_pool.map(testAddN, inputs)
    else:
        outputs = map(testAddN, inputs)
    stop = time.clock()

    #print outputs # test using, e.g., matrix_size = 2
    print "Elapsed", stop - start
In this case the code in testAddN actually does something with the result of calling testAddOne. And you can uncomment the print statements to check that some useful work is being done.
In both cases I've changed the total number of additions to 10000; with fewer additions the cost of starting up processes becomes more significant. You can experiment with the parameters, including num_processes. On my machine, compared to running in the same process with num_processes=1, I got just under a 2x speedup spawning four processes with num_processes=4.