We have a batch processing system which we are looking to modify to use multiple threads. The process takes in a delimited file and performs calculations on it via pandas.
I would like to split the DataFrame into N chunks if the total number of records exceeds a threshold. Each chunk would then be fed to a thread from a ThreadPoolExecutor to run the calculations, and at the end I would wait for the threads to sync and concatenate the resulting DataFrames into one.
The problem is that I'm not sure how to split a pandas DataFrame like this. Say there's an arbitrary number of threads, 2 as an example, and I want to start splitting once the record count exceeds 200000.
So the idea would be: if I send a file with 200001 records, thread 1 would get 100000 and thread 2 would get 100001. If I send one with 1000000 records, each thread would get 500000.
(If the total records don't exceed this threshold, I'd just execute the process on a single thread)
I have seen related solutions, but none have applied to my case.
import threading

def do_something(df):
    if len(df) > some_threshold:
        pivot = len(df) // 2
        # hand one half to a worker thread and recurse on the other half
        t = threading.Thread(target=do_something, args=(df[pivot:],))
        t.start()
        return do_something(df[:pivot])
    actually_do_something_with_smallish_df(df)

maybe?
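For concreteness, here's a minimal sketch of the flow I'm describing (calculate is a placeholder for the real work, and the chunk sizes match the 200001-row example above):

import numpy as np
import pandas as pd
from concurrent.futures import ThreadPoolExecutor

THRESHOLD = 200_000
NUM_THREADS = 2

def calculate(chunk):
    # placeholder for the real per-chunk calculations
    return chunk

def run(df):
    if len(df) <= THRESHOLD:
        return calculate(df)
    # split the row range into NUM_THREADS nearly equal slices,
    # e.g. 200001 rows -> 100000 + 100001
    bounds = np.linspace(0, len(df), NUM_THREADS + 1, dtype=int)
    chunks = [df.iloc[a:b] for a, b in zip(bounds[:-1], bounds[1:])]
    with ThreadPoolExecutor(max_workers=NUM_THREADS) as ex:
        results = list(ex.map(calculate, chunks))
    return pd.concat(results)

If the per-chunk work is mostly pure-Python pandas calls that hold the GIL, a ProcessPoolExecutor may parallelize better than threads.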
Below is example code showing how to do the split. Then, using ThreadPoolExecutor, it runs the work with eight threads in my case (you could also use the threading module directly). The process_pandas function is just a dummy; plug in whatever calculation you want:
import pandas as pd
from concurrent.futures import ThreadPoolExecutor as th

threshold = 300
block_size = 100
num_threads = 8

big_list = pd.read_csv('pandas_list.csv', delimiter=';', header=None)

# split into fixed-size blocks once the threshold is exceeded
blocks = []
if len(big_list) > threshold:
    num_full_blocks = len(big_list) // block_size
    for i in range(num_full_blocks):
        blocks.append(big_list[block_size * i:block_size * (i + 1)])
    # leftover rows that don't fill a whole block
    if num_full_blocks * block_size < len(big_list):
        blocks.append(big_list[num_full_blocks * block_size:])
else:
    blocks.append(big_list)
def process_pandas(df):
    print('Doing calculations...')
    indexes = list(df.index.values)
    df.loc[indexes[0], 2] = 'changed'
    return df

# each block is processed by a worker thread, then the results are recombined
with th(num_threads) as ex:
    results = ex.map(process_pandas, blocks)

final_dataframe = pd.concat(results, axis=0)
I have the following code:
data = [2, 5, 3, 16, 2, 5]

def f(x):
    return 2 * x

f_total = 0
for x in data:
    f_total += f(x)
print(f_total / len(data))
and I want to speed up the for loop. (In reality the code is more complex, and I want to run it on a supercomputer with many processing cores.) I have read that I can do this with the multiprocessing library, which can get Python 3 to run different chunks of the loop simultaneously, but I am a bit lost with it.
Could you explain how to do it with this minimal version of my program?
Thanks!
import multiprocessing
from numpy import random

# Number of cores you want to use. When you call 'map' or 'imap', the input is
# distributed across that many worker processes.
NUM_CORES = 6

data = random.rand(100, 1)

# +2 so that cores are not left idle while a worker is waiting on I/O.
# Choose this empirically for the function you are computing; it could just as
# well equal NUM_CORES. You can also vary the chunksize depending on how much
# data you have.
NUM_THREADS = NUM_CORES + 2
CHUNKSIZE = int(len(data) / NUM_THREADS)

def f(x):
    return 2 * x

if __name__ == '__main__':
    # create the pool of worker processes that will be assigned the jobs
    pool = multiprocessing.Pool(NUM_THREADS)
    # map vs imap: if the data is large, go for imap; otherwise map is also fine
    it = pool.imap(f, data, chunksize=CHUNKSIZE)
    # iterate and sum up the results
    f_total = 0
    for value in it:
        f_total += sum(value)
    print(f_total / len(data))
Why choose imap over map?
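Roughly: map materialises the full result list before returning, while imap returns an iterator that yields results as the workers finish each chunk, so it holds less in memory for large inputs. A tiny sketch of the difference (square is just a stand-in function):

import multiprocessing

def square(x):
    return x * x

if __name__ == '__main__':
    data = range(1_000_000)
    with multiprocessing.Pool(4) as pool:
        # map: builds the full list of 1,000,000 results before returning
        all_results = pool.map(square, data, chunksize=1000)
        # imap: returns an iterator; results stream back chunk by chunk,
        # so they can be consumed without holding them all at once
        total = sum(pool.imap(square, data, chunksize=1000))
    print(len(all_results), total)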
I'm trying to revisit this slightly older question and see if there's a better answer these days.
I'm using python3 and I'm trying to share a large dataframe with the workers in a pool. My function reads the dataframe, generates a new array using data from the dataframe, and returns that array. Example code below (note: in the example below I do not actually use the dataframe, but in my code I do).
import numpy as np
import pandas as pd
from multiprocessing import Pool

def func(i):
    return i * 2

def par_func_dict(mydict):
    values = mydict['values']
    df = mydict['df']   # the large dataframe rides along in every dict
    return pd.Series([func(i) for i in values])

N = 10000
cores = 3
arr = list(range(N))
data_split = np.array_split(arr, 3)
df = pd.DataFrame(np.random.randn(10, 10))

pool = Pool(cores)
gen = ({'values': i, 'df': df} for i in data_split)
data = pd.concat(pool.map(par_func_dict, gen), axis=0)
pool.close()
pool.join()
I'm wondering whether there's a way to avoid feeding the generator a copy of the dataframe for every chunk, since that takes up so much memory.
The answer in the linked question suggests using multiprocessing.Process(), but from what I can tell it's awkward to use with functions that return values (you need to wire up signals/events), and the comments indicate that each process still ends up using a large amount of memory.
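One commonly suggested pattern for this (just a sketch, not the approach from the linked answer) is to hand the DataFrame to the workers once via the Pool initializer rather than packing it into every dict the generator yields; each worker still gets one copy at startup (or shares memory copy-on-write under fork), but it is not re-pickled for every chunk:

import numpy as np
import pandas as pd
from multiprocessing import Pool

def func(i):
    return i * 2

def init_worker(shared_df):
    # each worker process receives the DataFrame once, at startup
    global worker_df
    worker_df = shared_df

def par_func(values):
    # the shared frame is available here without being re-sent per task
    assert worker_df is not None
    return pd.Series([func(i) for i in values])

if __name__ == '__main__':
    N = 10000
    cores = 3
    data_split = np.array_split(list(range(N)), 3)
    df = pd.DataFrame(np.random.randn(10, 10))

    with Pool(cores, initializer=init_worker, initargs=(df,)) as pool:
        data = pd.concat(pool.map(par_func, data_split), axis=0)
    print(data.shape)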
I am trying to read 3 different files in Python and extract data from each of them. Then I want to merge the data into one big file.
Since each individual file is already big and the data processing takes some time, I am wondering whether I can:
read all three files at once (in multiple threads or processes),
wait for the processing of all files to finish,
and, when all the outputs are ready, pipe the data to a downstream function to merge it.
Can someone suggest improvements to this code to do what I want?
import pandas as pd

file01_output = ''
file02_output = ''
file03_output = ''

# I want to do all three of these "with open(...)" blocks at once.
with open('file01.txt', 'r') as file01:
    for line in file01:
        something01 = do_something(line)   # placeholder: "do something" with the line
        file01_output += something01

with open('file02.txt', 'r') as file02:
    for line in file02:
        something02 = do_something(line)
        file02_output += something02

with open('file03.txt', 'r') as file03:
    for line in file03:
        something03 = do_something(line)
        file03_output += something03
from functools import reduce

def merge(a, b, c):
    # compile the list of dataframes you want to merge
    data_frames = [a, b, c]
    df_merged = reduce(
        lambda left, right: pd.merge(left, right, on=['common_column'], how='outer'),
        data_frames,
    ).fillna('.')
    return df_merged

merge(file01_output, file02_output, file03_output)
There are many ways to use multiprocessing for your problem, so I'll just propose one. Since, as you mentioned, the processing of the data in each file is CPU bound, you can run it in a separate process and expect some improvement (how much, if any, depends on the problem, the algorithm, the number of cores, etc.). For example, the overall structure could be a pool over which you map the list of filenames to process, with the computation happening inside the mapped function.
It's easier with a concrete example. Let's pretend we have a list of CSVs 'file01.csv', 'file02.csv', 'file03.csv', each with a NUMBER column, and we want to compute whether each number is prime (CPU bound). For example, file01.csv:
NUMBER
1
2
3
...
And the other files look similar but with different numbers to avoid duplicating work. The code to compute the primes could then look like this:
import pandas as pd
from multiprocessing import Pool
from sympy import isprime

def compute(filename):
    # IO (probably not faster in parallel)
    my_data_df = pd.read_csv(filename)
    # do some computing (CPU bound)
    my_data_df['IS_PRIME'] = my_data_df.NUMBER.map(isprime)
    return my_data_df

if __name__ == '__main__':
    filenames = ['file01.csv', 'file02.csv', 'file03.csv']
    # construct the pool and map the filenames to the workers
    with Pool(2) as pool:
        results = pool.map(compute, filenames)
    print(pd.concat(results))
I've used the sympy package for its convenient isprime function, and I'm sure your data is structured quite differently, but hopefully the example illustrates a structure you could use too. The plan of performing your CPU-bound computations in a pool (or a list of Processes) and then merging/reducing/concatenating the results is a reasonable approach to the problem.
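If you need a key-based merge rather than a simple concat, the reduce-based merge from your question could be applied to results in place of pd.concat; here's a small sketch with stand-in frames and an assumed common_column:

from functools import reduce
import pandas as pd

# stand-ins for the per-file outputs returned by pool.map above
results = [
    pd.DataFrame({'common_column': [1, 2], 'a': ['x', 'y']}),
    pd.DataFrame({'common_column': [2, 3], 'b': ['u', 'v']}),
    pd.DataFrame({'common_column': [1, 3], 'c': ['p', 'q']}),
]

df_merged = reduce(
    lambda left, right: pd.merge(left, right, on=['common_column'], how='outer'),
    results,
).fillna('.')
print(df_merged)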
I have a time series dataframe with about 10 columns on which I perform manipulations to produce strategy results. I would like to test 2 parameters, as they may or may not affect each other. When tested independently, each run takes over 10 seconds per unit (over 6.5 hours for the total run), and I'm looking to speed this up. I have been reading about dask, and it seems like the right module to use.
My current code iterates over the parameter ranges with nested loops. I know it can be parallelized, as the data per day is mutually exclusive.
Here is the code:
import numpy as np
from tqdm import tqdm

amount1 = np.arange(.001, .03, .0005)
amount2 = np.arange(.001, .03, .0005)

def getResults(df, amount1, amount2):
    final_results = []
    for x in tqdm(amount1):
        for y in amount2:
            df1 = function1(df.copy(), x, y)   # takes about 2 sec
            df1 = function2(df1)               # takes about 2 sec
            df1 = function3(df1)               # takes about 3 sec
            final_results.append([x, y, df1['results'].iloc[-1]])
    return final_results
UPDATE:
So it looks like the improvement should come from adjusting the function to remove the iteration from the calls and to create a list of jobs (as I understand it). Here is where I am so far. I probably need to move my df to a dask dataframe so that the data can be chunked into smaller pieces. The question is: do I leave function1, function2, and function3 as pandas vector manipulations, or do they need to become full dask functions?
import dask as dsk

def getResults(df, amount):
    df1 = dsk.delayed(function1)(df, amount[0], amount[1])
    df1 = dsk.delayed(function2)(df1)
    df1 = dsk.delayed(function3)(df1)
    return [amount[0], amount[1], df1['results'].iloc[-1]]

# Create a list of delayed tasks from jobs. jobs is a list of
# (amount1, amount2) tuples that replaces the nested iteration.
processes = [getResults(df, items) for items in jobs]

# Collect the results
results = []
for i in range(len(processes)):
    results.append(processes[i])
You probably want to use either dask.delayed or the concurrent.futures interface.
Something like the following would probably work well (untested, I recommend that you read the docs referenced above to understand what it's doing).
import dask

def getResults(df, amount1, amount2):
    final_results = []
    for x in amount1:
        for y in amount2:
            df1 = dask.delayed(function1)(df.copy(), x, y)
            df1 = dask.delayed(function2)(df1)
            df1 = dask.delayed(function3)(df1)
            final_results.append([x, y, df1['results'].iloc[-1]])
    return final_results

out = getResults(df, amount1, amount2)
result = dask.delayed(out).compute()
Also, I would avoid calling df.copy() if you can. Ideally, function1 would not mutate its input data.
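For completeness, here is a rough sketch of the concurrent.futures route mentioned at the start of this answer; the function1/function2/function3 bodies below are trivial stand-ins for the real ones, and the real df and parameter ranges would come from your code:

import numpy as np
import pandas as pd
from concurrent.futures import ProcessPoolExecutor
from itertools import product

amount1 = np.arange(.001, .03, .0005)
amount2 = np.arange(.001, .03, .0005)
df = pd.DataFrame({'results': np.random.randn(100)})   # stand-in frame

# stand-ins for the real functions from the question
def function1(d, x, y):
    return d.assign(results=d['results'] * x + y)

def function2(d):
    return d

def function3(d):
    return d

def run_one(args):
    x, y = args
    d = function1(df, x, y)   # assumes function1 does not mutate df
    d = function2(d)
    d = function3(d)
    return [x, y, d['results'].iloc[-1]]

if __name__ == '__main__':
    # one task per (x, y) combination, spread across worker processes
    with ProcessPoolExecutor() as ex:
        final_results = list(ex.map(run_one, product(amount1, amount2)))
    print(len(final_results))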
I have a very large list of strings (originally from a text file) that I need to process using Python. Eventually I am aiming for a map-reduce style of parallel processing.
I have written a "mapper" function and fed it to multiprocessing.Pool.map(), but it takes the same amount of time as simply calling the mapper function with the full set of data. I must be doing something wrong.
I have tried multiple approaches, all with similar results.
from multiprocessing import Pool

def initial_map(lines):
    results = []
    for line in lines:
        processed = ...  # process line (an O(1) operation)
        results.append(processed)
    return results

def chunks(l, n):
    for i in xrange(0, len(l), n):
        yield l[i:i+n]

if __name__ == "__main__":
    lines = list(open("../../log.txt", 'r'))
    pool = Pool(processes=8)
    partitions = chunks(lines, len(lines)/8)
    results = pool.map(initial_map, partitions, 1)
So the chunks function splits the original set of lines into sublists to give to pool.map(), which should hand these 8 sublists to 8 different processes and run them through the mapper function. When I run this I can see all 8 of my cores peak at 100%, yet it takes 22-24 seconds.
When I simply run this (single process/thread):
lines = list(open("../../log.txt", 'r'))
results = initial_map(lines)
It takes about the same amount of time. ~24 seconds. I only see one process getting to 100% CPU.
I have also tried letting the pool split up the lines itself and have the mapper function only handle one line at a time, with similar results.
def initial_map(line):
    processed = ...  # process line (an O(1) operation)
    return processed

if __name__ == "__main__":
    lines = list(open("../../log.txt", 'r'))
    pool = Pool(processes=8)
    pool.map(initial_map, lines)
~22 seconds.
Why is this happening? Parallelizing this should result in faster results, shouldn't it?
If the amount of work done in one iteration is very small, you're spending a big proportion of the time just communicating with your subprocesses, which is expensive. Instead, try to pass bigger slices of your data to the processing function. Something like the following:
slices = (data[i:i+100] for i in range(0, len(data), 100))

def process_slice(data):
    return [initial_map(x) for x in data]

pool.map(process_slice, slices)
# and then itertools.chain the output to flatten it
(I don't have my computer with me, so I can't give you a full working solution or verify what I said.)
Edit: or see the 3rd comment on your question by @ubomb.
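For reference, a fuller self-contained version of the slicing idea might look like this (still a sketch; initial_map here is a trivial stand-in for your real per-line processing):

import itertools
from multiprocessing import Pool

def initial_map(line):
    # stand-in for the real O(1) per-line processing
    return line.strip().upper()

def process_slice(chunk):
    return [initial_map(line) for line in chunk]

if __name__ == '__main__':
    with open('../../log.txt', 'r') as f:
        lines = f.readlines()

    # hand each worker a few thousand lines per task instead of one line per task
    slices = [lines[i:i + 5000] for i in range(0, len(lines), 5000)]

    with Pool(processes=8) as pool:
        chunked_results = pool.map(process_slice, slices)

    # flatten the per-chunk lists back into one result list
    results = list(itertools.chain.from_iterable(chunked_results))
    print(len(results))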