mpi4py: dynamic data processing - python

I have a vector that contains stock tickers, like tickers = ['AAPL','XOM','GOOG']. In my "traditional" Python program I would loop over this tickers vector, select one ticker string such as AAPL, import a csv file that contains the AAPL stock returns, use the returns as input to a common function, and finally generate a csv file as output. I have over 4000 tickers, and the function applied to each ticker takes time to process. I have access to a computer cluster with the mpi4py package and about 100 processors per job. I understand well (and was able to implement) this MPI example in Python:
from mpi4py import MPI

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

if rank == 0:
    data = [i for i in range(8)]
    # dividing data into chunks
    chunks = [[] for _ in range(size)]
    for i, chunk in enumerate(data):
        chunks[i % size].append(chunk)
else:
    data = None
    chunks = None

data = comm.scatter(chunks, root=0)
print(str(rank) + ': ' + str(data))
[cha#cluster] ~/utils> mpirun -np 3 ./mpi.py
2: [2, 5]
0: [0, 3, 6]
1: [1, 4, 7]
So in this example, we have a data vector of size 8 and assign each processor (3 in total) a roughly equal number of elements of the data. How can I adapt the example above to assign each processor one stock ticker and apply the function that needs to be run for each ticker? How can I tell Python that, once a processor gets free, it should go back to the tickers vector and process a ticker that has not yet been processed?

There's another way to think of this. You have 100 processors processing 4000 chunks of data. One way you can look at this is that each processor gets a block of data on which to operate. Evenly split, each processor will get 40 tickers to process: rank 0 gets tickers 0-39, rank 1 gets 40-79, and so on.
Thinking this way, you don't need to worry about what happens when a processor finishes its tasks. Just have a loop:
def process(ticker):
    # load data
    # process data
    # output data
    pass

block_size = len(tickers) // size  # this will be 40 in your example
for i in range(block_size):
    ticker = tickers[rank * block_size + i]
    process(ticker)
Does this make sense?
[edit]
If you want to read more, this is really just a variation on row-major order indexing, a common method for accessing multidimensional data that's stored in a single dimension of memory.
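For a concrete picture, a minimal sketch that scatters the ticker list across the ranks (in the spirit of the scatter example from the question) could look like this; process_ticker is a placeholder for the load-CSV / compute / write-CSV work:

from mpi4py import MPI

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

def process_ticker(ticker):
    # placeholder: load the ticker's csv, run your function, write the output csv
    print(str(rank) + ' processed ' + ticker)

if rank == 0:
    tickers = ['AAPL', 'XOM', 'GOOG']  # in practice, your ~4000 tickers
    # deal the tickers round-robin into one chunk per rank
    chunks = [tickers[i::size] for i in range(size)]
else:
    chunks = None

my_tickers = comm.scatter(chunks, root=0)
for ticker in my_tickers:
    process_ticker(ticker)

With this static split there is no need to hand out work dynamically; if per-ticker run times vary a lot, a master/worker scheme (rank 0 sending one ticker at a time with comm.send/comm.recv and dispatching the next ticker when a worker reports back) is the usual alternative.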

Related

Hadoop stuck on reduce 67% (only with large data)

I'm a beginner at Hadoop and Linux.
The Problem
Hadoop reduce gets stuck (or moves really, really slowly) when the input data is large (e.g. 600k rows or 6M rows), even though the Map and Reduce functions are quite simple: 2021-08-08 22:53:12,350 INFO mapreduce.Job: map 100% reduce 67%.
In the Linux System Monitor I can see that when reduce hits 67%, only one CPU keeps running at 100% while the rest of them are sleeping :) see this picture
What ran successfully
I ran the MapReduce job with small input data (600 rows) quickly and successfully without any issue: 2021-08-08 19:44:13,350 INFO mapreduce.Job: map 100% reduce 100%.
Mapper (Python)
#!/usr/bin/env python3
import sys
from itertools import islice
from operator import itemgetter

def read_input(file):
    # read file except first line
    for line in islice(file, 1, None):
        # split the line into words
        yield line.split(',')

def main(separator='\t'):
    # input comes from STDIN (standard input)
    data = read_input(sys.stdin)
    for words in data:
        # for each row we take only the needed columns
        data_row = list(itemgetter(*[1, 2, 4, 5, 6, 9, 10, 18])(words))
        data_row[7] = data_row[7].replace('\n', '')
        # taking year and month No. from the first column to create the
        # key that will be sent to the reducer
        date = data_row[0].split(' ')[0].split('-')
        key = str(date[0]) + '_' + str(date[1])
        # value that will be sent to the reducer
        value = ','.join(data_row)
        # print here will send the output pair (key, value)
        print('%s%s%s' % (key, separator, value))

if __name__ == "__main__":
    main()
Reducer (Python)
#!/usr/bin/env python3
from itertools import groupby
from operator import itemgetter
import sys
import pandas as pd
import numpy as np
import time

def read_mapper_output(file):
    for line in file:
        yield line

def main(separator='\t'):
    all_rows_2015 = []
    all_rows_2016 = []
    start_time = time.time()
    names = ['tpep_pickup_datetime', 'tpep_dropoff_datetime', 'trip_distance',
             'pickup_longitude', 'pickup_latitude', 'dropoff_longitude',
             'dropoff_latitude', 'total_amount']
    df = pd.DataFrame(columns=names)
    # input comes from STDIN (standard input)
    data = read_mapper_output(sys.stdin)
    for words in data:
        # get key & value from Mapper
        key, value = words.split(separator)
        row = value.split(',')
        # split data belonging to 2015 from data belonging to 2016
        if key in '2015_01 2015_02 2015_03':
            all_rows_2015.append(row)
            if len(all_rows_2015) >= 10:
                df = df.append(pd.DataFrame(all_rows_2015, columns=names))
                all_rows_2015 = []
        elif key in '2016_01 2016_02 2016_03':
            all_rows_2016.append(row)
            if len(all_rows_2016) >= 10:
                df = df.append(pd.DataFrame(all_rows_2016, columns=names))
                all_rows_2016 = []
    print(df.to_string())
    print("--- %s seconds ---" % (time.time() - start_time))

if __name__ == "__main__":
    main()
More Info
I'm using Hadoop v3.2.1 on Linux (installed on VMware) to run MapReduce jobs in Python.
Reduce Job in Numbers:
Input Data Size    Number of rows     Reduce job time        Result
~98 Kb             600 rows           ~0.1 sec               good
~953 Kb            6,000 rows         ~1 sec                 good
~9.5 Mb            60,000 rows        ~52 sec                good
~94 Mb             600,000 rows       ~5647 sec (~94 min)    very slow
~11 Gb             76,000,000 rows    ??                     impossible
The goal is to run on ~76M rows of input data, which is impossible while this issue remains.
"when reduce hit the 67% only one CPU keep running at the time at 100% and the rest of them are sleeping" - you have skew. One key has far more values than any other key.
I see some problems here.
In the reduce phase you don't do any summarization, you just filter Q1 of 2015 and Q1 of 2016 - reduce is supposed to be used for summarization, like grouping by key or doing calculations based on the keys.
If you just need to filter data, do it in the map phase to save cycles (assume you're billed for all the data you process).
You store a lot of stuff in RAM inside a DataFrame. Since you don't know how big each key is, you are experiencing thrashing. Combined with heavy keys, this will make your process take a page fault on every DataFrame.append after some time.
There are some fixes:
Do you really need a reduce phase? Since you are just filtering the first three months of 2015 and 2016, you can do this in the map phase. This will also make the process go a bit faster if you need a reduce later, since less data will reach the reduce phase:
def main(separator='\t'):
    # input comes from STDIN (standard input)
    data = read_input(sys.stdin)
    for words in data:
        # for each row we take only the needed columns
        data_row = list(itemgetter(*[1, 2, 4, 5, 6, 9, 10, 18])(words))
        # Find out first if you are filtering this data
        # taking year and month No. from the first column to create the
        # key that will be sent to the reducer
        date = data_row[0].split(' ')[0].split('-')
        # Filter out everything except Q1 of 2015 and 2016
        # (note: the date parts are strings, so compare against strings)
        if (date[1] in ['01', '02', '03']) and (date[0] in ['2015', '2016']):
            # We keep this data. Calculate the key and clean up data_row[7]
            key = str(date[0]) + '_' + str(date[1])
            data_row[7] = data_row[7].replace('\n', '')
            # value that will be sent to the reducer
            value = ','.join(data_row)
            # print here will send the output pair (key, value)
            print('%s%s%s' % (key, separator, value))
Try not to store data in memory during the reduce. Since you are filtering, print() the results as soon as you have them. If your source data is not sorted, the reduce will serve as a way to gather all data from the same month together.
You've also got a bug in your reduce phase: you lose the last (number_of_records_per_key modulo 10) rows of each key, because the leftover rows in all_rows_2015 / all_rows_2016 are never appended to the DataFrame. Don't append to the DataFrame at all; print each result as soon as possible.
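For illustration, here is a minimal sketch of a reducer that streams instead of buffering rows in a DataFrame, assuming the modified mapper already emits only the 2015/2016 Q1 keys; Hadoop Streaming delivers reducer input sorted by key, so rows for the same month arrive together:

#!/usr/bin/env python3
import sys

def main(separator='\t'):
    # emit each record immediately instead of accumulating rows in pandas
    for line in sys.stdin:
        key, value = line.rstrip('\n').split(separator, 1)
        # redundant safety filter; the mapper should already restrict the keys
        if key in ('2015_01', '2015_02', '2015_03',
                   '2016_01', '2016_02', '2016_03'):
            print('%s%s%s' % (key, separator, value))

if __name__ == "__main__":
    main()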

Append simulation data using HDF5

I currently run a simulation several times and want to save the results of these simulations so that they can be used for visualizations.
The simulation is run 100 times and each of these simulations generates about 1 million data points (i.e. 1 million values for 1 million episodes), which I now want to store efficiently. The goal within each of these episodes would be to generate an average of each value across all 100 simulations.
My main file looks like this:
# Defining the test simulation environment
def test_simulation():
    environment = environment(
        periods = 1000000,
        parameter_x = ...,
        parameter_y = ...,
    )

    # Defining the simulation
    environment.simulation()

    # Save simulation data
    hf = h5py.File('runs/simulation_runs.h5', 'a')
    hf.create_dataset('data', data=environment.value_history, compression='gzip', chunks=True)
    hf.close()

# Run the simulation 100 times
for i in range(100):
    print(f'--- Iteration {i} ---')
    test_simulation()
The value_history is generated within simulation(), i.e. the values are continuously appended to an empty list according to:
def simulation(self):
    for episode in range(periods):
        value = doSomething()
        self.value_history.append(value)
Now I get the following error message when going to the next simulation:
ValueError: Unable to create dataset (name already exists)
I am aware that the current code keeps trying to create a dataset with the same name and generates an error because that dataset already exists. Now I am looking to reopen the file created in the first simulation, append the data from the next simulation, and save it again.
The example below shows how to pull all these ideas together. It creates 2 files:
1. Create one resizable dataset with the maxshape parameter on the first loop, then use dataset.resize() on subsequent loops -- output is simulation_runs1.h5
2. Create a unique dataset for each simulation -- output is simulation_runs2.h5
I created a simple 100x100 NumPy array of randoms for the "simulation data", and ran the simulation 10 times. These sizes are variables, so you can increase them to larger values to determine which method is better (faster) for your data. You may also discover memory limitations when saving 1M data points for 1M time periods.
Note 1: If you can't save all the data in system memory, you can incrementally save simulation results to the H5 file. It's just a little more complicated.
Note 2: I added a mode variable to control whether a new file is created for the first simulation (i==0) or the existing file is opened in append mode for subsequent simulations.
import h5py
import numpy as np

# Create some pseudo-test data
def test_simulation(i):
    periods = 100
    times = 100
    # Define the simulation with some random data
    val_hist = np.random.random(periods*times).reshape(periods, times)
    a0, a1 = val_hist.shape[0], val_hist.shape[1]

    if i == 0:
        mode = 'w'
    else:
        mode = 'a'

    # Save simulation data (resize dataset)
    with h5py.File('runs/simulation_runs1.h5', mode) as hf:
        if 'data' not in list(hf.keys()):
            print('create new dataset')
            hf.create_dataset('data', shape=(1, a0, a1), maxshape=(None, a0, a1), data=val_hist,
                              compression='gzip', chunks=True)
        else:
            print('resize existing dataset')
            d0 = hf['data'].shape[0]
            hf['data'].resize((d0+1, a0, a1))
            hf['data'][d0:d0+1, :, :] = val_hist

    # Save simulation data (unique datasets)
    with h5py.File('runs/simulation_runs2.h5', mode) as hf:
        hf.create_dataset(f'data_{i:03}', data=val_hist,
                          compression='gzip', chunks=True)

# Run the simulation 10 times
for i in range(10):
    print(f'--- Iteration {i} ---')
    test_simulation(i)
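Since the stated goal is an average of each value across all simulations, a small follow-up sketch for reading the resizable dataset back and averaging over the simulation axis could look like this (assuming the simulation_runs1.h5 layout created above):

import h5py
import numpy as np

# read the stacked dataset back and average across the simulation axis (axis 0)
with h5py.File('runs/simulation_runs1.h5', 'r') as hf:
    data = hf['data'][:]                  # shape: (n_simulations, periods, times)
    mean_per_episode = data.mean(axis=0)  # average of each value across simulations

print(mean_per_episode.shape)             # (periods, times)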

improve performance fetching data from mongodb

earnings = self.collection.find({}) #return 60k documents
----
data_dic = {'score': [], "reading_time": [] }
for earning in earnings:
    data_dic['reading_time'].append(earning["reading_time"])
    data_dic['score'].append(earning["score"])
----
df = pd.DataFrame()
df['reading_time'] = data_dic["reading_time"]
df['score'] = data_dic["score"]
The code between the ---- markers takes 4 seconds to complete. How can I improve this function?
The time consists of these parts: MongoDB query time, time used to transfer the data, network round trips, and Python list operations. You can optimize each of them.
First, reduce the amount of data to transfer. Since you only need reading_time and score, you can fetch just those fields (a projection). If your average document size is big, this approach is very effective.
earnings = self.collection.find({}, {'reading_time': True, 'score': True})
Second, Mongo transfers a limited amount of data per batch. With up to 60k rows, it takes multiple round trips to transfer the data. You can adjust the cursor's batch size to reduce the round-trip count.
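For example, with PyMongo (assuming that is the driver in use) the batch size can be set directly on the cursor; 10000 here is just an illustrative value:

# combine the projection with a larger batch size to cut down round trips
earnings = self.collection.find(
    {}, {'reading_time': True, 'score': True}
).batch_size(10000)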
Third, increase network bandwidth if you can.
Fourth, you can speed things up by leveraging NumPy arrays. They are C-like array data structures that are faster than Python lists. Pre-allocate a fixed-length array and assign values by index; this avoids the internal resizing that happens on every list.append call.
count = earnings.count()
score = np.empty((count,), dtype=float)
reading_time = np.empty((count,), dtype='datetime64[us]')
for i, earning in enumerate(earnings):
    score[i] = earning["score"]
    reading_time[i] = earning["reading_time"]
df = pd.DataFrame()
df['reading_time'] = reading_time
df['score'] = score

accessing many grib messages at once with pygrib

I need to access some grib files. I have figured out how to do it, using pygrib.
However, the only way I figured out how to do it is painstakingly slow.
I have 34 years of 3-hourly data, organized in ~36 files per year (one every 10 days, more or less), for a total of about 1000 files.
Each file has ~80 "messages" (8 values per day for 10 days). They are spatial data, so they have (x, y) dimensions.
To read all my data I write:
grbfile = pygrib.index(filename, 'shortName', 'typeOfLevel', 'level')
var1 = grbfile.select(typeOfLevel='pressureFromGroundLayer', level=180, shortName='unknown')
for it in np.arange(len(var1)):
    var_values, lat1, lon1 = var1[it].data()
    if (it == 0):
        tot_var = np.expand_dims(var_values, axis=0)
    else:
        tot_var = np.append(tot_var, np.expand_dims(var_values, axis=0), axis=0)
and repeat this for each of the 1000 files.
Is there a quicker way, like loading all ~80 layers per grib file at once? Something like:
var_values, lat1, lon1 = var1[:].data()
If I understand you correctly, you want the data from all 80 messages in each file stacked up in one array.
I have to warn you that that array will get very large and may cause NumPy to throw a MemoryError (it has happened to me before), depending on your grid size etc.
That being said, you can do something like this:
import pygrib
import numpy as np

# substitute with a list of your file names
# glob is a standard-library module that can help accomplish this
files = list_of_files

grib = pygrib.open(files[0])  # start with the first one
# grib message numbering starts at 1
data, lats, lons = grib.message(1).data()
# while np.expand_dims works, the following is shorter
# syntax-wise and will accomplish the same thing
data = data[None, ...]  # add an empty dimension as axis 0
for m in range(2, grib.messages + 1):
    data = np.vstack((data, grib.message(m).values[None, ...]))
grib.close()  # good practice

# now data has all the values from each message in the first file stacked up
# time to stack the rest on there
for file_ in files[1:]:  # all except the first file, which we've done
    grib = pygrib.open(file_)
    for msg in grib:
        data = np.vstack((data, msg.values[None, ...]))
    grib.close()

print(data.shape)  # should be (80 * len(files), nlats, nlons)
This may gain you some speed. pygrib.open objects act like generators, so they hand you each pygrib.gribmessage object as it is requested, instead of building a list of them the way the select() method of a pygrib.index does. If you need all the messages in a particular file, then this is the way I would access them.
Hope it helps!
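As a small illustration of the comment about building the file list with glob (the path pattern below is only a placeholder):

import glob

# build a sorted list of the ~1000 GRIB files
files = sorted(glob.glob('/path/to/grib/files/*.grb'))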

Add two Pandas Series or DataFrame objects in-place?

I have a dataset where we record the electrical power demand from each individual appliance in the home. The dataset is quite large (2 years of data; 1 sample every 6 seconds; 50 appliances). The data is in a compressed HDF file.
We need to add the power demand for every appliance to get the total aggregate power demand over time. Each individual meter might have a different start and end time.
The naive approach (using a simple model of our data) is to do something like this:
LENGTH = 2**25
N = 30
cumulator = pd.Series()
for i in range(N):
    # change the index for each new_entry to mimic the fact
    # that our appliance meters have different start and end times.
    new_entry = pd.Series(1, index=np.arange(i, LENGTH+i))
    cumulator = cumulator.add(new_entry, fill_value=0)
This works fine for small amounts of data. It also works OK with large amounts of data as long as every new_entry has exactly the same index.
But, with large amounts of data, where each new_entry has a different start and end index, Python quickly gobbles up all the available RAM. I suspect this is a memory fragmentation issue. If I use multiprocessing to fire up a new process for each meter (to load the meter's data from disk, load the cumulator from disk, do the addition in memory, then save the cumulator back to disk, and exit the process) then we have fine memory behaviour but, of course, all that disk IO slows us down a lot.
So, I think what I want is an in-place Pandas add function. The plan would be to initialise cumulator to have an index which is the union of all the meters' indices. Then allocate memory once for that cumulator. Hence no more fragmentation issues.
I have tried two approaches but neither is satisfactory.
I tried using numpy.add to allow me to set the out argument:
# Allocate enough space for the cumulator
cumulator = pd.Series(0, index=np.arange(0, LENGTH+N))
for i in range(N):
    new_entry = pd.Series(1, index=np.arange(i, LENGTH+i))
    cumulator, aligned_new_entry = cumulator.align(new_entry, copy=False, fill_value=0)
    del new_entry
    np.add(cumulator.values, aligned_new_entry.values, out=cumulator.values)
    del aligned_new_entry
But this gobbles up all my RAM too and doesn't seem to do the addition. If I change the penultimate line to cumulator.values = np.add(cumulator.values, aligned_new_entry.values, out=cumulator.values) then I get an error about not being able to assign to cumulator.values.
This second approach appears to have the correct memory behaviour but is far too slow to run:
for i in range(N):
    new_entry = pd.Series(1, index=np.arange(i, LENGTH+i))
    for index in cumulator.index:
        try:
            cumulator[index] += new_entry[index]
        except KeyError:
            pass
I suppose I could write this function in Cython. But I'd rather not have to do that.
So: is there any way to do an 'inplace add' in Pandas?
Update
In response to comments below, here is a toy example of our meter data and the sum we want. All values are watts.
time       meter1   meter2   meter3   sum
09:00:00       10                      10
09:00:06       10       20             30
09:00:12       10       20             30
09:00:18       10       20       30    50
09:00:24       10       20       30    50
09:00:30                10       30    40
If you want to see more details then here's the file format description of our data logger, and here's the 4TByte archive of our entire dataset.
After messing around a lot with multiprocessing, I think I've found a fairly simple and efficient way to do an in-place add without using multiprocessing:
import numpy as np
import pandas as pd

LENGTH = 2**26
N = 10
DTYPE = np.int

# Allocate memory *once* for a Series which will hold our cumulator
cumulator = pd.Series(0, index=np.arange(0, N+LENGTH), dtype=DTYPE)

# Get a numpy array from the Series' buffer
cumulator_arr = np.frombuffer(cumulator.data, dtype=DTYPE)

# Create lots of dummy data. Each new_entry has a different start
# and end index.
for i in range(N):
    new_entry = pd.Series(1, index=np.arange(i, LENGTH+i), dtype=DTYPE)
    aligned_new_entry = np.pad(new_entry.values, pad_width=((i, N-i)),
                               mode='constant', constant_values=((0, 0)))
    # np.pad could be replaced by new_entry.reindex(index, fill_value=0)
    # but np.pad is faster and more memory efficient than reindex
    del new_entry
    np.add(cumulator_arr, aligned_new_entry, out=cumulator_arr)
    del aligned_new_entry

del cumulator_arr

print(cumulator.head(N*2))
which prints:
0 1
1 2
2 3
3 4
4 5
5 6
6 7
7 8
8 9
9 10
10 10
11 10
12 10
13 10
14 10
15 10
16 10
17 10
18 10
19 10
assuming that your dataframe looks something like:
df.index.names == ['time']
df.columns == ['meter1', 'meter2', ..., 'meterN']
then all you need to do is:
df['total'] = df.fillna(0).sum(axis=1)
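If the meters start out as separate Series with different indices, as in the question, one way to get a frame of that shape is an index-aligning concat; this is only a sketch with toy values, not part of the answer above:

import numpy as np
import pandas as pd

# toy per-meter Series with different start/end indices, as in the question
meter1 = pd.Series(10, index=np.arange(0, 5))
meter2 = pd.Series(20, index=np.arange(1, 6))
meter3 = pd.Series(30, index=np.arange(3, 6))

# pd.concat with axis=1 aligns on the index (outer join), one column per meter
df = pd.concat({'meter1': meter1, 'meter2': meter2, 'meter3': meter3}, axis=1)
df['total'] = df.fillna(0).sum(axis=1)
print(df)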
