(Closed as a duplicate of: Is there a simple process-based parallel map for python?)
I wrote code like this:
def process(data):
    # create file using data

all = ["data1", "data2", "data3"]
I want to run the process function on every item of my all list in parallel. Each call only creates a small file, so disk writes aren't my concern, but the processing itself takes a long time and I want to use all of my cores.
How can I do this using only the standard modules in Python 2.7?
Assuming CPython and the GIL here.
If your task is I/O-bound, threading is generally more efficient: the threads simply hand the work to the operating system and idle until the I/O operation finishes. Spawning processes is a heavyweight way to babysit I/O.
However, most file systems don't handle concurrent writers well, so multithreading or multiprocessing the writes may not be any faster than writing synchronously.
Nonetheless, here's a contrived example of multiprocessing.Pool.map which may help with your CPU-bound work:
from multiprocessing import cpu_count, Pool

def process(data):
    # best to do heavy CPU-bound work here...
    # file write for demonstration
    with open("%s.txt" % data, "w") as f:
        f.write(data)
    # example of returning a result to the map
    return data.upper()

tasks = ["data1", "data2", "data3"]

pool = Pool(cpu_count() - 1)
print(pool.map(process, tasks))
A similar setup for threading can be found in concurrent.futures.ThreadPoolExecutor.
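For example, a minimal thread-based sketch (assuming the same process function and tasks list as above; on 2.7 you'd need the futures backport):

# sketch only: assumes Python 3 (or the "futures" backport on 2.7)
# and the process() and tasks defined above
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(process, tasks))
    print(results)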
As an aside, all is a builtin function and isn't a great variable name choice.
Or:
from threading import Thread

def process(data):
    print("processing {}".format(data))

l = ["data1", "data2", "data3"]

for task in l:
    t = Thread(target=process, args=(task,))
    t.start()
Or, on Python 3.6+ (f-strings):
from threading import Thread

def process(data):
    print(f"processing {data}")

l = ["data1", "data2", "data3"]

for task in l:
    t = Thread(target=process, args=(task,))
    t.start()
Here is a template using multiprocessing.dummy (the thread-based counterpart of multiprocessing); hope it is helpful.
from multiprocessing.dummy import Pool as ThreadPool

def process(data):
    print("processing {}".format(data))

alldata = ["data1", "data2", "data3"]

pool = ThreadPool()
results = pool.map(process, alldata)
pool.close()
pool.join()
Related
My script loops through each line of an input file and performs some actions using the string in each line. Since the tasks performed on each line are independent of each other, I decided to separate the task into threads so that the script doesn't have to wait for the task to complete to continue with the loop. The code is given below.
def myFunction(line, param):
    # Does something with line and param
    # Sends multiple HTTP requests, parses the responses and produces output
    # Returns nothing

param = arg[1]
with open(targets, "r") as listfile:
    for line in listfile:
        print("Starting a thread for: ", line)
        t = threading.Thread(target=myFunction, args=(line, param,))
        threads.append(t)
        t.start()
I realized that this is a bad idea as the number of lines in the input file grows large: with this code there are as many threads as there are lines. I researched a bit and figured that queues would be the way to go.
I want to understand the best way to use queues for this scenario, and whether there are any alternatives I could use.
To get around this problem you can use a thread pool: you define a fixed number of threads/workers, for example 5, and whenever a thread finishes executing a task, the next submitted task takes its place automatically.
Example:
import concurrent.futures

def myFunction(line, param):
    print("Done with :", line, param)

param = "param_example"

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    futures = []
    with open("targets", "r") as listfile:
        for line in listfile:
            print("Starting a thread for: ", line)
            futures.append(executor.submit(myFunction, line=line, param=param))

    # waiting for the threads to finish and maybe print a result:
    for future in concurrent.futures.as_completed(futures):
        print(future.result())  # an Exception should be handled here!
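As a hedged sketch of what that exception handling could look like (just reporting failures and moving on):

# sketch: same loop as above, but catching a failure raised inside myFunction
for future in concurrent.futures.as_completed(futures):
    try:
        print(future.result())
    except Exception as exc:
        print("a task failed:", exc)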
Queues are one way to do it. The way to use them is to put function parameters on a queue, and use threads to get them and do the processing.
The queue size doesn't matter much in this case because reading the next line is fast. In other cases, a more robust choice is to set the queue size to at least twice the number of threads: that way, if all threads finish processing an item at the same time, each will find the next item already waiting in the queue.
To keep the code simple, the threads can be made daemonic so that they don't keep the program alive after the processing is done; they are terminated when the main process finishes.
The alternative is to put a special item (such as None) on the queue for each thread, make each thread exit after getting it from the queue, and then join the threads (a sketch of this variant follows the queue example below).
For the examples below, the number of worker threads is set using the workers variable.
Here is an example of a solution using a queue.
from queue import Queue
from threading import Thread

queue = Queue(workers * 2)

def work():
    while True:
        myFunction(*queue.get())
        queue.task_done()

for _ in range(workers):
    Thread(target=work, daemon=True).start()

with open(targets, 'r') as listfile:
    for line in listfile:
        queue.put((line, param))
queue.join()
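For completeness, a sketch of the sentinel variant mentioned above (non-daemonic threads that exit when they see None; same assumptions about myFunction, targets, param and workers):

from queue import Queue
from threading import Thread

queue = Queue(workers * 2)

def work():
    while True:
        item = queue.get()
        if item is None:       # sentinel: time to exit
            queue.task_done()
            break
        myFunction(*item)
        queue.task_done()

threads = [Thread(target=work) for _ in range(workers)]
for t in threads:
    t.start()

with open(targets, 'r') as listfile:
    for line in listfile:
        queue.put((line, param))

for _ in range(workers):
    queue.put(None)            # one sentinel per worker

for t in threads:
    t.join()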
A simpler solution might be using ThreadPoolExecutor. It is especially simple in this case because the function being called doesn't return anything that needs to be used in the main thread.
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=workers) as executor:
    with open(targets, 'r') as listfile:
        for line in listfile:
            executor.submit(myFunction, line, param)
Also, if it's not a problem to have all lines stored in memory, there is a solution which doesn't use anything other than threads. The work is split in such a way that the threads read some lines from a list and ignore other lines. A simple example with two threads is where one thread reads odd lines and the other reads even lines.
from threading import Thread

with open(targets, 'r') as listfile:
    lines = listfile.readlines()

def work_split(n):
    for line in lines[n::workers]:
        myFunction(line, param)

threads = []
for n in range(workers):
    t = Thread(target=work_split, args=(n,))
    t.start()
    threads.append(t)

for t in threads:
    t.join()
I have done a quick benchmark and the Queue is slightly faster than the ThreadPoolExecutor, but the solution with the split work is faster than both.
From the code you posted, using threads makes no sense.
That is because there are no I/O operations, so the threads run one after another rather than in parallel. The GIL (Global Interpreter Lock) is never released by a thread in this case, so the application only appears to be multithreaded; in reality the interpreter uses one CPU and runs one thread at a time.
So you gain nothing from threads here; on the contrary, you can see a performance degradation in this scenario due to context switching and the overhead of starting each thread.
The only way to get better performance in this scenario, if it applies to your case, is a multiprocess program. But pay attention to the number of processes you start, and remember that every process has its own interpreter.
It was a good answer by GitFront. This answer just adds one more option using the multiprocessing package.
Whether to use concurrent.futures or multiprocessing depends on your particular requirements. Multiprocessing has comparatively more options, but for the given question the results should be nearly identical in the simplest case.
from multiprocessing import cpu_count, Pool

PROCESSES = cpu_count()  # Warning: uses all cores

def pool_method(listfile, param):
    p = Pool(processes=PROCESSES)
    checker = [p.apply_async(myFunction, (line, param)) for line in listfile]
    ...
There are various other methods besides apply_async, but this one should work well for your needs.
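For instance, a sketch of the same idea with Pool.starmap, which blocks until all results are in (Python 3.3+; myFunction is the function from the question, and listfile is assumed to be an open file or a list of lines):

# sketch: starmap unpacks each (line, param) tuple into myFunction(line, param)
from multiprocessing import Pool, cpu_count

def pool_method(listfile, param):
    with Pool(processes=cpu_count()) as p:
        results = p.starmap(myFunction, [(line, param) for line in listfile])
    return results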
I have a python generator that returns lots of items, for example:
import itertools

def generate_random_strings():
    chars = "ABCDEFGH"
    for item in itertools.product(chars, repeat=10):
        yield "".join(item)
I then iterate over this and perform various tasks, the issue is that I'm only using one thread/process for this:
my_strings = generate_random_strings()
for string in my_strings:
    # do something with string...
    print(string)
This works great, I'm getting all my strings, but it's slow. I would like to harness the power of Python multiprocessing to "divide and conquer" this for loop. However, of course, I want each string to be processed only once. While I've found much documentation on multiprocessing, I'm trying to find the most simple solution for this with the least amount of code.
I'm assuming each thread should take a big chunk of items every time and process them before coming back and getting another big chunk etc...
Many thanks,
Simplest solution with the least code? The multiprocessing Pool context manager.
I assume you can put "do something with string" into a function called do_something.
from multiprocessing import Pool as ProcessPool

number_of_processes = 4

with ProcessPool(number_of_processes) as pool:
    pool.map(do_something, my_strings)
If you want to get the results of "do_something" back again, easy!
with ProcessPool(number_of_processes) as pool:
    results = pool.map(do_something, my_strings)
You'll get them in a list.
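If you want each worker to grab a big chunk of items at a time, as you describe in the question, pool.map also accepts a chunksize argument; a sketch (the chunk size here is an arbitrary choice to tune for your workload):

with ProcessPool(number_of_processes) as pool:
    # each worker takes items in batches of 1000 instead of one at a time
    results = pool.map(do_something, my_strings, chunksize=1000)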
multiprocessing.dummy wraps the threading module in the multiprocessing API, so the pool syntax stays the same. If you want threads instead of processes, just do this:
from multiprocessing.dummy import Pool as ThreadPool
You may use multiprocessing.
import multiprocessing

def string_fun(string):
    # do something with string...
    print(string)

my_strings = generate_random_strings()

num_of_processes = 7
pool = multiprocessing.Pool(num_of_processes)
pool.map(string_fun, my_strings)
Assuming you're using the latest version of Python, you may want to read about the asyncio module. Multithreading is not easy to implement because of the GIL: "In CPython, the global interpreter lock, or GIL, is a mutex that protects access to Python objects, preventing multiple threads from executing Python bytecodes at once. This lock is necessary mainly because CPython's memory management is not thread-safe."
So you can switch to multiprocessing or, as mentioned above, take a look at the asyncio module.
asyncio - Asynchronous I/O: https://docs.python.org/3/library/asyncio.html
I'll integrate this answer with some code as soon as possible.
Hope it helps,
Hele
As #Hele mentioned, asyncio is best of all, here is an example
Code
#!/usr/bin/python3
# -*- coding: utf-8 -*-
# python 3.7.2
from asyncio import ensure_future, gather, run
import random

alphabet = 'ABCDEFGH'
size = 1000

async def generate():
    tasks = list()
    result = None
    for el in range(1, size):
        task = ensure_future(generate_one())
        tasks.append(task)
    result = await gather(*tasks)
    return list(set(result))

async def generate_one():
    return ''.join(random.choice(alphabet) for i in range(8))

if __name__ == '__main__':
    my_strings = run(generate())
    print(my_strings)
Output
['CHABCGDD', 'ACBGAFEB', ...
Of course, you need to improve generate_one, this variant is very slow.
You can see source code here.
I have tested multiprocessing and threading in Python, but the multiprocessing version is slower than the threaded one. I compute edit distances with the editdistance package; my code looks like this:
import time
from multiprocessing import Pool

import editdistance

def calc_dist(kw, trie_word):
    dists = []
    while len(trie_word) != 0:
        w = trie_word.pop()
        dist = editdistance.eval(kw, w)
        dists.append((w, dist))
    return dists

if __name__ == "__main__":
    word_list = [str(i) for i in range(1, 10000001)]
    key_word = '2'
    print("calc")
    s = time.time()
    with Pool(processes=4) as pool:
        result = pool.apply_async(calc_dist, (key_word, word_list))
        print(len(result.get()))
    print("elapsed", time.time() - s)
Using threading:
import threading

class DistThread(threading.Thread):
    def __init__(self, func, args):
        super(DistThread, self).__init__()
        self.func = func
        self.args = args
        self.dists = None

    def run(self):
        self.dists = self.func(*self.args)

    def join(self):
        super().join()
        return self.dists
On my computer, the multiprocessing version takes about 118 s while the threaded one takes about 36 s. What is wrong with it?
A couple of issues:
a significant amount of time will be spent serialising the data so it can be sent to the other process, whereas threads share the same address space and can just use pointers
your current code only uses one process to do all the calculation with multiprocessing. You need to separate your array into "chunks" somehow so that it can be processed by multiple workers
For example:
import time
from multiprocessing import Pool

import editdistance

def calc_one(trie_word):
    return editdistance.eval(key_word, trie_word)

if __name__ == "__main__":
    word_list = [str(i) for i in range(1, 10000001)]
    key_word = '2'
    print("calc")

    s = time.time()
    with Pool(processes=4) as pool:
        result = pool.map(calc_one, word_list, chunksize=10000)
    print(len(result))
    print("time", time.time() - s)

    s = time.time()
    result = list(calc_one(w) for w in word_list)
    print(len(result))
    print("time", time.time() - s)
This relies on key_word being a global variable. For me, the version using multiple processes takes ~5.3 seconds while the second version takes ~16.9 seconds. It isn't 4 times as quick because the data still needs to be sent back and forth, but that's pretty good.
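If you would rather not rely on a global, one option is to bind key_word with functools.partial; a sketch (calc_pair is a hypothetical two-argument variant of calc_one):

from functools import partial
from multiprocessing import Pool

import editdistance  # third-party package from the question

def calc_pair(kw, trie_word):
    return editdistance.eval(kw, trie_word)

if __name__ == "__main__":
    word_list = [str(i) for i in range(1, 10000001)]
    key_word = '2'
    with Pool(processes=4) as pool:
        # partial() fixes the first argument, so workers only receive trie_word
        result = pool.map(partial(calc_pair, key_word), word_list, chunksize=10000)
    print(len(result))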
I had a similar experience with threading and multiprocessing in Python when consuming CSVs that contained a large amount of data. I looked into this a little and found that multiprocessing spawns multiple processes to perform the tasks, which can be slower than running a single threaded process, since threading runs in one place. There is a more definitive answer here: Multiprocessing vs Threading Python.
Pasting the answer from the link in case the link disappears:
The threading module uses threads, the multiprocessing module uses processes. The difference is that threads run in the same memory space, while processes have separate memory. This makes it a bit harder to share objects between processes with multiprocessing. Since threads use the same memory, precautions have to be taken or two threads will write to the same memory at the same time. This is what the global interpreter lock is for.
Spawning processes is a bit slower than spawning threads. Once they are running, there is not much difference.
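As a small illustration of that memory difference (a sketch, not a benchmark): a thread can mutate a list in the parent's memory, while a child process only changes its own copy:

from multiprocessing import Process
from threading import Thread

data = []

def append_one():
    data.append(1)

if __name__ == "__main__":
    t = Thread(target=append_one)
    t.start(); t.join()
    print(len(data))   # 1 -- the thread mutated this process's list

    p = Process(target=append_one)
    p.start(); p.join()
    print(len(data))   # still 1 -- the child process only changed its own copy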
I'm trying to execute a function on every line of a CSV file as fast as possible. My code works, but I know it could be faster if I make better use of the multiprocessing library.
import csv
from multiprocessing import Process

processes = []

def execute_task(task_details):
    # work is done here, may take 1 second, may take 10
    # send output to another function

with open('twentyThousandLines.csv', 'rb') as file:
    r = csv.reader(file)
    for row in r:
        p = Process(target=execute_task, args=(row,))
        processes.append(p)
        p.start()

for p in processes:
    p.join()
I'm thinking I should put the tasks into a Queue and process them with a Pool but all the examples make it seem like Queue doesn't work the way I assume, and that I can't map a Pool to an ever expanding Queue.
I've done something similar using a Pool of workers.
import csv
from multiprocessing import Pool, cpu_count

def initializer(arg1, arg2):
    # Do something to initialize (if necessary)

def process_csv_data(data):
    # Do something with the data

pool = Pool(cpu_count(), initializer=initializer, initargs=(arg1, arg2))

with open("csv_data_file.csv", "rb") as f:
    csv_obj = csv.reader(f)
    for row in csv_obj:
        pool.apply_async(process_csv_data, (row,))
However, as pvg commented under your question, you might want to consider how to batch your data; going row by row may not be the right level of granularity (a rough batching sketch follows below).
You might also want to profile/test to figure out the bottle-neck. For example, if disk access is limiting you, you might not benefit from parallelizing.
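To illustrate the batching point above, a rough sketch that hands the pool groups of rows instead of single rows (process_csv_batch is a hypothetical function, and the batch size of 1000 is an arbitrary choice):

import csv
from itertools import islice
from multiprocessing import Pool, cpu_count

def process_csv_batch(rows):
    for row in rows:
        pass  # do something with each row

if __name__ == "__main__":
    pool = Pool(cpu_count())
    with open("csv_data_file.csv") as f:
        reader = csv.reader(f)
        while True:
            batch = list(islice(reader, 1000))  # take up to 1000 rows at a time
            if not batch:
                break
            pool.apply_async(process_csv_batch, (batch,))
    pool.close()
    pool.join()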
multiprocessing.Queue is a means of exchanging objects among processes, so it's not something you'd put a task into.
To me it looks like you are actually trying to speed up
def check(row):
    # do the checking
    return (row, result_of_check)

with open('twentyThousandLines.csv', 'rb') as file:
    r = csv.reader(file)
    for row, result in map(check, r):
        print(row, result)
which can be done with
import csv

#from multiprocessing import Pool  # if CPU-bound (but even then not always)
from multiprocessing.dummy import Pool  # if IO-bound

def check(row):
    # do the checking
    return (row, result_of_check)

if __name__ == "__main__":  # in case you are using processes on windows
    with open('twentyThousandLines.csv', 'rb') as file:
        r = csv.reader(file)
        with Pool() as p:  # before python 3.3 you should do close() and join() explicitly
            # chunksize is just a guess - you have to experiment a bit to find the best value
            for row, result in p.imap_unordered(check, r, chunksize=10):
                print(row, result)
Creating processes takes some time (especially on Windows), so in most cases using threads via multiprocessing.dummy is faster (and multiprocessing is not totally trivial anyway; see the programming guidelines in its documentation).
I have a list dataframe_chunk which contains chunks of a very large pandas dataframe. I would like to write every single chunk into a different CSV, and to do so in parallel. However, I see the files being written sequentially and I'm not sure why. Here's the code:
import concurrent.futures as cfu
import time

def write_chunk_to_file(chunk, fpath):
    chunk.to_csv(fpath, sep=',', header=False, index=False)

pool = cfu.ThreadPoolExecutor(N_CORES)

futures = []
for i in range(N_CORES):
    fpath = '/path_to_files_' + str(i) + '.csv'
    futures.append(pool.submit(write_chunk_to_file(dataframe_chunk[i], fpath)))

for f in cfu.as_completed(futures):
    print("finished at ", time.time())
Any clues?
One thing that is stated in the Python 2.7.x threading docs but not in the 3.x docs is that Python cannot achieve true parallelism using the threading library - only one thread will execute at a time.
You should try using concurrent.futures with the ProcessPoolExecutor which uses separate processes for each job and therefore can achieve true parallelism on a multi-core CPU.
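A hedged sketch of that change, reusing write_chunk_to_file, dataframe_chunk and N_CORES from the question (in a script, wrap this in an if __name__ == "__main__": guard; note that each chunk gets pickled to its worker process, which has its own cost):

import concurrent.futures as cfu

with cfu.ProcessPoolExecutor(max_workers=N_CORES) as pool:
    futures = [
        pool.submit(write_chunk_to_file,
                    dataframe_chunk[i],
                    '/path_to_files_' + str(i) + '.csv')
        for i in range(N_CORES)
    ]
    for f in cfu.as_completed(futures):
        f.result()  # re-raises any exception from a worker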
Update
Here is your program adapted to use the multiprocessing library instead:
#!/usr/bin/env python3
from multiprocessing import Process
import os
import time

N_CORES = 8

def write_chunk_to_file(chunk, fpath):
    with open(fpath, "w") as f:
        for x in range(10000000):
            f.write(str(x))

futures = []

print("my pid:", os.getpid())
input("Hit return to start:")

start = time.time()
print("Started at:", start)

for i in range(N_CORES):
    fpath = './tmp/file-' + str(i) + '.csv'
    p = Process(target=write_chunk_to_file, args=(i, fpath))
    futures.append(p)

for p in futures:
    p.start()

print("All jobs started.")

for p in futures:
    p.join()

print("All jobs finished at ", time.time())
You can monitor the jobs with this shell command in another window:
while true; do clear; pstree 12345; ls -l tmp; sleep 1; done
(Replace 12345 with the pid emitted by the script.)
Your code probably works: it starts creating the 2nd and later files while the 1st chunk is still being written, and so on. It will be slightly faster than the simple synchronous version because the syscalls follow each other sooner.
But from the kernel's perspective the I/O syscalls still arrive one after another from a single Python process, so the files are created serially, just at a faster rate.