JModelica and Concurrent Futures - python

I am using JModelica to optimize a model using IPOPT in the background.
I would like to run many optimizations in parallel. At the moment I am doing
this using the multiprocessing module.
Right now, the code is as follows. It performs a parameter sweep over the
variables T and So and writes the results to output files named for these
parameters. The output files also contain a list of the parameters used in the
model along with the run results.
#!/usr/local/jmodelica/bin/jm_python.sh
import itertools
import multiprocessing
import numpy as np
import time
import sys
import signal
import traceback
import StringIO
import random
import cPickle as pickle
def PrintResToFile(filename,result):
    def StripMX(x):
        return str(x).replace('MX(','').replace(')','')

    varstr = '#Variable Name={name: <10}, Unit={unit: <7}, Val={val: <10}, Col={col:< 5}, Comment="{comment}"\n'

    with open(filename,'w') as fout:
        #Print all variables at the top of the file, along with relevant information
        #about them.
        for var in result.model.getAllVariables():
            if not result.is_variable(var.getName()):
                val = result.initial(var.getName())
                col = -1
            else:
                val = "Varies"
                col = result.get_column(var.getName())

            unit = StripMX(var.getUnit())
            if not unit:
                unit = "X"

            fout.write(varstr.format(
                name    = var.getName(),
                unit    = unit,
                val     = val,
                col     = col,
                comment = StripMX(var.getAttribute('comment'))
            ))

        #Ensure that time variable is printed
        fout.write(varstr.format(
            name    = 'time',
            unit    = 's',
            val     = 'Varies',
            col     = 0,
            comment = 'None'
        ))

        #The data matrix contains only time-varying variables. So fetch all of
        #these, couple them in tuples with their column number, sort by column
        #number, and then extract the name of the variable again. This results in a
        #list of variable names which are guaranteed to be in the same order as the
        #data matrix.
        vkeys_in_order = [(result.get_column(x),x) for x in result.keys() if result.is_variable(x)]
        vkeys_in_order = map(lambda x: x[1], sorted(vkeys_in_order))

        for vk in vkeys_in_order:
            fout.write("{0:>13},".format(vk))
        fout.write("\n")

        sio = StringIO.StringIO()
        np.savetxt(sio, result.data_matrix, delimiter=',', fmt='%13.5f')
        fout.write(sio.getvalue())
def RunModel(params):
    T  = params[0]
    So = params[1]

    try:
        import pyjmi
        signal.signal(signal.SIGINT, signal.SIG_IGN)

        #For testing what happens if an error occurs
        #import random
        #if random.randint(0,100)<50:
        #    raise "Test Exception"

        op = pyjmi.transfer_optimization_problem("ModelClass", "model.mop")
        op.set('a',  0.20)
        op.set('b',  1.00)
        op.set('f',  0.05)
        op.set('h',  0.05)
        op.set('S0', So)
        op.set('finalTime', T)

        # Set options, see: http://www.jmodelica.org/api-docs/usersguide/1.13.0/ch07s06.html
        opt_opts                                   = op.optimize_options()
        opt_opts['n_e']                            = 40
        opt_opts['IPOPT_options']['tol']           = 1e-10
        opt_opts['IPOPT_options']['output_file']   = '/z/err_'+str(T)+'_'+str(So)+'_info.dat'
        opt_opts['IPOPT_options']['linear_solver'] = 'ma27' #See: http://www.coin-or.org/Ipopt/documentation/node50.html

        res = op.optimize(options=opt_opts)

        result_file_name = 'out_'+str(T)+'_'+str(So)+'.dat'
        PrintResToFile(result_file_name, res)

        return (True, (T,So))
    except:
        ex_type, ex, tb = sys.exc_info()
        return (False, (T,So), traceback.extract_tb(tb))
try:
    fstatus = open('status','w')
except:
    print("Could not open status file!")
    sys.exit(-1)

T       = map(float,[10,20,30,40,50,60,70,80,90,100,110,120,130,140])
So      = np.arange(0.1,30.1,0.1)
tspairs = list(itertools.product(T,So))

random.shuffle(tspairs)

pool  = multiprocessing.Pool()
mapit = pool.imap_unordered(RunModel,tspairs)
pool.close()

completed = 0

while True:
    try:
        res = mapit.next(timeout=2)
        pickle.dump(res,fstatus)
        fstatus.flush()
        completed += 1
        print(res)
        print "{0: >4} of {1: >4} ({2: >4} left)".format(completed,len(tspairs),len(tspairs)-completed)
    except KeyboardInterrupt:
        pool.terminate()
        pool.join()
        sys.exit(0)
    except multiprocessing.TimeoutError:
        print "{0: >4} of {1: >4} ({2: >4} left)".format(completed,len(tspairs),len(tspairs)-completed)
    except StopIteration:
        break
Using the model:
optimization ModelClass(objective=-S(finalTime), startTime=0, finalTime=100)
  parameter Real S0 = 2;
  parameter Real F0 = 0;
  parameter Real a  = 0.2;
  parameter Real b  = 1;
  parameter Real f  = 0.05;
  parameter Real h  = 0.05;
  output Real F(start=F0, fixed=true, min=0, max=100, unit="kg");
  output Real S(start=S0, fixed=true, min=0, max=100, unit="kg");
  input Real u(min=0, max=1);
equation
  der(F) = u*(a*F+b);
  der(S) = f*F/(1+h*F)-u*(a*F+b);
end ModelClass;
Is this safe?

No, it is not safe. op.optimize() will store the optimization results with a file name derived from the model name, and then load the results to return the data, so when you try to run several optimizations at once you will get a race condition. To circumvent this, you can provide distinct result file names in opt_opts['result_file_name'].
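For illustration only, a minimal sketch of what that might look like inside RunModel, reusing the T/So naming scheme already used for the other output files (the exact file name here is just an example):
# ...inside RunModel, after opt_opts = op.optimize_options():
# Give every optimization run its own result file so that parallel runs
# cannot overwrite or read each other's intermediate results.
opt_opts['result_file_name'] = 'ModelClass_result_' + str(T) + '_' + str(So) + '.txt'
res = op.optimize(options=opt_opts)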

No. It does not seem to be safe as of 02015-11-09.
The code above names output files according to the input parameters. The output files also contain the input parameters used to run the model.
With 4 cores two situations arise:
Occasionally the error "Inconsistent number of lines in the result data." is raised in the file /usr/local/jmodelica/Python/pyjmi/common/io.py.
Output files show one set of parameters internally but are named for a different set of parameters, which indicates disagreement between the parameters the script thinks it is processing and the parameters that are actually being processed.
With 24 cores:
The error "The result does not seem to be of a supported format." is repeatedly raised by /usr/local/jmodelica/Python/pyjmi/common/io.py.
Together, this information suggests that intermediate files are being used by JModelica, but that there is overlap in the names of the intermediate files resulting in errors in the best case and incorrect results in the worst case.
One might hypothesize that this is the result of bad random number generation in a tempfile function somewhere, but a bug relating to that was resolved on 02011-11-25. Perhaps the PRNGs are being seeded based on a system clock or a constant and therefore progress in sync?
However, this does not seem to be the case since the following does not produce collisions:
#!/usr/bin/env python
import time
import tempfile
import os
import collections
from multiprocessing import Pool

def f(x):
    tf = tempfile.NamedTemporaryFile(delete=False)
    print(tf.name)
    return tf.name

p   = Pool(24)
ret = p.map(f, range(2000))

counts = collections.Counter(ret)
print(counts)
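If any temporary file name had been generated twice, it would show up in counts with a count greater than 1; running this shows every name exactly once, so the collisions observed above do not appear to come from tempfile itself.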

Related

Running different function parallel in python3.7 using multiprocessing

I want to run two different functions in parallel in Python. I have used the code below:
def remove_special_char(data):
    # Change cell values which start with '=' sign, leading to Excel formula issues
    data['Description'] = data['Description'].apply(lambda val: re.sub(r'^=', "'=", str(val)))
    return(data)

file_path1 = '.\file1.xlsx'
file_path2 = '.\file2.xlsx'

def method1(file_path1):
    data = pd.read_excel(file_path1)
    data = remove_special_char(data)
    return data

def method2(file_path2):
    data = pd.read_excel(file_path2)
    data = remove_special_char(data)
    return data
I am using the Pool setup below, but it's not working.
from multiprocessing import Pool
p = Pool(3)
result1 = p.map(method1(file_path1), args=file_path1)
result2 = p.map(method2(file_path1), args=file_path2)
I want to run both of these methods in parallel to save execution time, and also get their return values.
I don't know why you are defining the same method twice with different parameter names, but in any case the map method of Pool takes a function as its first argument and an iterable as its second. map calls the function on each item of the iterable and returns a list with all the results. So what you want to do is something more like:
from multiprocessing import Pool

file_paths = ('.\file1.xlsx', '.\file2.xlsx')

def method(file_path):
    data = pd.read_excel(file_path)
    data = remove_special_char(data)
    return data

with Pool(3) as p:
    result = p.map(method, file_paths)
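Since map preserves the order of its input iterable, the two DataFrames come back in the same order as file_paths and can be unpacked directly:
result1, result2 = result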

Unable to send multiple arguments to concurrent.futures.Executor.map()

I am trying to combine the solutions provided in both of these SO answers: Using threading to slice an array into chunks and perform calculation on each chunk and reassemble the returned arrays into one array, and Pass multiple parameters to concurrent.futures.Executor.map?. I have a numpy array that I split into chunks, and I want each chunk to be sent to a separate thread along with an additional argument. This additional argument is a constant and will not change. performCalc is a function that takes two arguments: the chunk of the original numpy array and the constant.
First solution I tried
import psutil
import numpy as np
import sys
from concurrent.futures import ThreadPoolExecutor
from functools import partial

def main():
    testThread()

def testThread():
    minLat = -65.76892
    maxLat = 66.23587
    minLon = -178.81404
    maxLon = 176.2949

    latGrid = np.arange(minLat,maxLat,0.05)
    lonGrid = np.arange(minLon,maxLon,0.05)

    gridLon,gridLat = np.meshgrid(latGrid,lonGrid)
    grid_points = np.c_[gridLon.ravel(),gridLat.ravel()]

    n_jobs = psutil.cpu_count(logical=False)
    chunk = np.array_split(grid_points,n_jobs,axis=0)
    x = ThreadPoolExecutor(max_workers=n_jobs)
    maxDistance = 4.3
    func = partial(performCalc,chunk)
    args = [chunk,maxDistance]

    # This prints 4.3 twice although there are four cores in the system
    results = x.map(func,args)

    # This prints 4.3 four times correctly
    results1 = x.map(performTest,chunk)

def performCalc(chunk,maxDistance):
    print(maxDistance)
    return chunk

def performTest(chunk):
    print("test")

main()
So performCalc() prints 4.3 only twice even though the system has four cores, while performTest() correctly prints "test" four times. I am not able to figure out the reason for this.
I am also sure the way I set up the functools.partial call is incorrect.
1) There are four chunks of the original numpy array.
2) Each chunk is to be paired with maxDistance and sent to performCalc()
3) There will be four threads, each of which will print maxDistance and return its part of the total result, to be combined into one array.
Where am I going wrong?
UPDATE
I tried using the lambda approach as well
results = x.map(lambda p:performCalc(*p),args)
but this prints nothing.
Using the solution provided by user mkorvas in How to pass a function with more than one argument to python concurrent.futures.ProcessPoolExecutor.map()?, I was able to solve my problem, as shown here:
import psutil
import numpy as np
import sys
from concurrent.futures import ThreadPoolExecutor
from functools import partial

def main():
    testThread()

def testThread():
    minLat = -65.76892
    maxLat = 66.23587
    minLon = -178.81404
    maxLon = 176.2949

    latGrid = np.arange(minLat,maxLat,0.05)
    lonGrid = np.arange(minLon,maxLon,0.05)
    print(latGrid.shape,lonGrid.shape)

    gridLon,gridLat = np.meshgrid(latGrid,lonGrid)
    grid_points = np.c_[gridLon.ravel(),gridLat.ravel()]
    print(grid_points.shape)

    n_jobs = psutil.cpu_count(logical=False)
    chunk = np.array_split(grid_points,n_jobs,axis=0)
    x = ThreadPoolExecutor(max_workers=n_jobs)
    maxDistance = 4.3
    func = partial(performCalc,maxDistance)
    results = x.map(func,chunk)

def performCalc(maxDistance,chunk):
    print(maxDistance)
    return chunk

main()
What one apparently needs to do (and I do not know why, so maybe somebody can clarify in another answer) is to switch the order of the inputs to the function performCalc(), as shown here:
def performCalc(maxDistance,chunk):
    print(maxDistance)
    return chunk
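The reason is that functools.partial binds the pre-supplied value to the leftmost positional parameter, so the value supplied by map() always arrives after it. If reordering the parameters is undesirable, the constant can instead be bound by keyword; a minimal sketch using the same names as above:
func = partial(performCalc, maxDistance=maxDistance)  # performCalc keeps its original (chunk, maxDistance) signature
results = x.map(func, chunk)                          # each call becomes performCalc(chunk_i, maxDistance=4.3)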

Can a high CPU load (from other applications) affect python performance/accuracy?

I'm working on a code to read and display the results of a Finite Element Analysis (FEA) calculation. The results are stored in several (relatively big) text files that contain a list of nodes (ID number, location in space) and lists for the physical fields of relevance (ID of node, value of the field on that point).
However, I have noticed that when an FEA case is running in the background and I try to run my code at the same time, it returns errors: not always the same one and not always at the same iteration, seemingly at random and without any modification to the code or the input files whatsoever, just from hitting the RUN button seconds apart between runs.
Examples of the errors that I'm getting are:
keys[key] = np.round(np.asarray(keys[key]),7)
TypeError: can't multiply sequence by non-int of type 'float'
#-------------------------------------------------------------------------
triang = tri.Triangulation(x, y)
ValueError: x and y arrays must have a length of at least 3
#-------------------------------------------------------------------------
line = [float(n) for n in line]
ValueError: could not convert string to float: '0.1225471E'
In case you are curious, this is my code (keep in mind that it is not finished yet and that I'm a mechanical engineer, not a programmer). Any feedback on how to make it better is also appreciated:
import matplotlib.pyplot as plt
import matplotlib.tri as tri
import numpy as np
import os

triangle_max_radius = 0.003
respath = 'C:/path'
fields = ['TEMPERATURE']

# Plot figure definition --------------------------------------------------------------------------------------
fig, ax1 = plt.subplots()
fig.subplots_adjust(left=0, right=1, bottom=0.04, top=0.99)
ax1.set_aspect('equal')
# -------------------------------------------------------------------------------------------------------------

# Read outputfiles --------------------------------------------------------------------------------------------
resfiles = [f for f in os.listdir(respath) if (os.path.isfile(os.path.join(respath,f)) and f[:3]=='csv')]
resfiles = [[f,int(f[4:])] for f in resfiles]
resfiles = sorted(resfiles,key=lambda x: (x[1]))
resfiles = [os.path.join(respath,f[:][0]).replace("\\","/") for f in resfiles]
# -------------------------------------------------------------------------------------------------------------

# Read data inside outputfile ---------------------------------------------------------------------------------
for result_file in resfiles:
    keys = {}
    keywords = []
    with open(result_file, 'r') as res:
        for line in res:
            if line[0:2] == '##':
                if len(line) >= 5:
                    line = line[:3] + line[7:]
            line = line.replace(';',' ')
            line = line.split()
            if line:
                if line[0] == '##':
                    if len(line) >= 3:
                        keywords.append(line[1])
                        keys[line[1]] = []
                elif line[0] in keywords:
                    curr_key = line[0]
                else:
                    line = [float(n) for n in line]
                    keys[curr_key].append(line)

    for key in keys:
        keys[key] = np.round(np.asarray(keys[key]),7)

    for item in fields:
        gob_temp = np.empty((0,4))
        for node in keys[item]:
            temp_coords, = np.where(node[0] == keys['COORDINATES'][:,0])
            gob_temp_coords = [node[0], keys['COORDINATES'][temp_coords,1], keys['COORDINATES'][temp_coords,2], node[1]]
            gob_temp = np.append(gob_temp,[gob_temp_coords],axis=0)

        x = gob_temp[:,1]
        y = gob_temp[:,2]
        z = gob_temp[:,3]

        triang = tri.Triangulation(x, y)
        triangles = triang.triangles

        xtri = x[triangles] - np.roll(x[triangles], 1, axis=1)
        ytri = y[triangles] - np.roll(y[triangles], 1, axis=1)
        maxi = np.max(np.sqrt(xtri**2 + ytri**2), axis=1)
        triang.set_mask(maxi > triangle_max_radius)

        ax1.tricontourf(triang,z,100,cmap='plasma')
        ax1.triplot(triang,color="black",lw=0.2)

plt.show()
So, back to the question: is it possible for the accuracy/performance of Python to be affected by CPU load or other 'external' factors? Or is that not a possibility, and there is definitely something wrong with my code (which, by the way, works well under other circumstances)?
No, other processes only affect how often your process gets time slots to execute -- i.e., from a user's perspective, how quickly it completes its job.
If you're having errors under load, this means there are errors in your program's logic -- most probably, race conditions. They basically boil down to making assumptions about your environment that are no longer true when there's other activity in it. E.g.:
Your program is multithreaded, and the logic makes assumptions about which order threads are executed in. (This includes assumptions about how long some task would take to complete.)
Your program is using shared resources (files, streams etc) that other processes are also using at the same time. (E.g. some other program is in the process of (over)writing a file while you're trying to read it. Or, if you're reading from a stream, not all data are available yet.)
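If you control the program that writes the files, one common mitigation is to write each result to a temporary file and rename it into place, so a reader never sees a half-written file; if you only control the reader, you can at least restrict yourself to files that are no longer being modified. A minimal sketch of the write-then-rename pattern (the function name is made up for illustration):
import os
import tempfile

def write_atomically(path, text):
    # Write to a temporary file in the same directory, then rename it into
    # place. On POSIX systems the rename is atomic, so a reader sees either
    # the old complete file or the new complete file, never a partial one.
    dirname = os.path.dirname(path) or '.'
    fd, tmp_path = tempfile.mkstemp(dir=dirname)
    with os.fdopen(fd, 'w') as f:
        f.write(text)
    os.rename(tmp_path, path)  # on Windows, prefer os.replace (Python 3.3+)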

Optimizing Algorithm of large dataset calculations

Once again I find myself stumped with pandas and how best to perform a 'vector operation'. My code works; however, it will take a long time to iterate through everything.
What the code is trying to do is loop through shapes.csv and determine which shape_pt_sequence is a stop_id, then assign the stop_lat and stop_lon to shape_pt_lat and shape_pt_lon, while also marking the shape_pt_sequence as is_stop.
GISTS
stop_times.csv LINK
trips.csv LINK
shapes.csv LINK
Here is my code:
import pandas as pd
from haversine import *

'''
iterate through shapes and match stops along a shape_pt_sequence within
x amount of distance. for shape_pt_sequence that is closest, replace the stop
lat/lon to the shape_pt_lat/shape_pt_lon, and mark is_stop column with 1.
'''

# readability assignments for shapes.csv
shapes = pd.read_csv('csv/shapes.csv')
shapes_index = list(set(shapes['shape_id']))
shapes_index.sort(key=int)
shapes.set_index(['shape_id', 'shape_pt_sequence'], inplace=True)

# readability assignments for trips.csv
trips = pd.read_csv('csv/trips.csv')
trips_index = list(set(trips['trip_id']))
trips.set_index(['trip_id'], inplace=True)

# readability assignments for stops_times.csv
stop_times = pd.read_csv('csv/stop_times.csv')
stop_times.set_index(['trip_id','stop_sequence'], inplace=True)
print(len(stop_times.loc[1423492]))

# readability assginments for stops.csv
stops = pd.read_csv('csv/stops.csv')
stops.set_index(['stop_id'], inplace=True)

# for each trip_id
for i in trips_index:
    print('******NEW TRIP_ID******')
    print(i)
    i = i.astype(int)

    # for each stop_sequence in stop_times
    for x in range(len(stop_times.loc[i])):
        stop_lat = stop_times.loc[i,['stop_lat','stop_lon']].iloc[x,[0,1]][0]
        stop_lon = stop_times.loc[i,['stop_lat','stop_lon']].iloc[x,[0,1]][1]
        stop_coordinate = (stop_lat, stop_lon)
        print(stop_coordinate)

        # shape_id that matches trip_id
        print('**SHAPE_ID**')
        trips_shape_id = trips.loc[i,['shape_id']].iloc[0]
        trips_shape_id = int(trips_shape_id)
        print(trips_shape_id)

        smallest = 0
        for y in range(len(shapes.loc[trips_shape_id])):
            shape_lat = shapes.loc[trips_shape_id].iloc[y,[0,1]][0]
            shape_lon = shapes.loc[trips_shape_id].iloc[y,[0,1]][1]
            shape_coordinate = (shape_lat, shape_lon)
            haversined = haversine_mi(stop_coordinate, shape_coordinate)
            if smallest == 0 or haversined < smallest:
                smallest = haversined
                smallest_shape_pt_indexer = y
            else:
                pass
            print(haversined)
            print('{0:.20f}'.format(smallest))

        print('{0:.20f}'.format(smallest))
        print(smallest_shape_pt_indexer)

        # mark is_stop as 1
        shapes.iloc[smallest_shape_pt_indexer,[2]] = 1

        # replace coordinate value
        shapes.loc[trips_shape_id].iloc[y,[0,1]][0] = stop_lat
        shapes.loc[trips_shape_id].iloc[y,[0,1]][1] = stop_lon

shapes.to_csv('csv/shapes.csv', index=False)
One thing you could do to optimize this code is to spread the outer loop over several worker processes instead of running those for loops serially.
I recommend using a pool of workers, as it is very simple to use.
Instead of iterating directly in:
for i in trips_index:
you could use something like:
from multiprocessing import Pool

pool = Pool(processes=4)
results = pool.map(func, trips_index)
where the method func would look like:
def func(i):
    # put the body of the original "for i in trips_index" loop here
In other words, you simply move the body of the for loop into this method.
This would run the work in 4 subprocesses in this example, giving it a nice improvement.
One thing to consider is that a collection of trips will often have the same sequence of stops and the same shape data (the only difference between trips is the timing). So it might make sense to cache the find-closest-point-on-shape operation for each (stop_id, shape_id) pair, as sketched below. I bet that would reduce your runtime by an order of magnitude.
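A minimal sketch of such a cache, using a plain dict keyed on (stop_id, shape_id); find_closest_shape_pt stands in for whatever closest-point computation you end up with and is only a placeholder name:
closest_cache = {}

def closest_shape_pt(stop_id, shape_id, stop_coordinate, shape_points):
    # Reuse the result if this (stop_id, shape_id) pair was already solved.
    key = (stop_id, shape_id)
    if key not in closest_cache:
        closest_cache[key] = find_closest_shape_pt(stop_coordinate, shape_points)
    return closest_cache[key]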

Python Multiprocessing calling multiple methods

I have a Python class that uses a multiprocessing pool to process and clean a large dataset. The method that does most of the cleaning is 'dataCleaner', which needs to call a second method 'processObservation'.
I am quite new to Python multiprocessing, and I cannot seem to figure out how to ensure that 'processObservation' gets called from 'cleanData' when a new process is spawned. How can I do this? My preference would be to keep all of these methods in the class. I suspect this has to do with the '__call__' definition, but I am not sure how to modify it appropriately.
def processData(self, dataset, num_procs = mp.cpu_count()):
    dataSize = len(dataset)
    outputDict = dict()
    procs = mp.Pool(processes = num_procs, maxtasksperchild = 1)

    # Generate data chunks for processing.
    chunk = dataSize / num_procs
    dataChunk = [(i, i + chunk) for i in range(0, dataSize, chunk)]
    count = 1

    print 'Number of data chunks %d' %len(dataChunk)

    for i in dataChunk:
        procs.apply_async(self.dataCleaner, args = (dataset[i[0]:i[1]], count, ))
        count += 1

    procs.close()
    procs.join()

def cleanData(self, data, procNumber):
    print 'Spawning new process: %d' %os.getpid()
    tempDict = dict()
    print len(data)
    for obs in data:
        key, value = processObservation(obs)
        tempDict[key] = value
    cPickle.dump(tempDict, open( '../dataMP/cleanedData_' + str(procNumber) + '.p', 'wb'))

def __call__(self, dataset, count):
    return self.cleanData(dataset, count)
It's hard to tell what's going on because you haven't given reproducible code or an error.
However, your issue is very likely due to using multiprocessing from inside a class.
See: Using multiprocessing in a class
and Multiprocessing: How to use Pool.map on a function defined in a class?
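The underlying problem, on Python 2 in particular, is that Pool has to pickle the callable it sends to the workers, and bound methods such as self.dataCleaner cannot be pickled. A common workaround, sketched here under the assumption that the instance itself is picklable, is to dispatch through a module-level helper function:
import multiprocessing as mp

def _run_clean(args):
    # Module-level functions are picklable, unlike bound methods on Python 2.
    # The instance is pickled along with its data and the method is looked up
    # again inside the worker process.
    cleaner, data, count = args
    return cleaner.cleanData(data, count)

# Hypothetical use inside processData (names follow the question's code):
# jobs  = [(self, dataset[start:end], count)
#          for count, (start, end) in enumerate(dataChunk, 1)]
# procs = mp.Pool(processes=num_procs)
# procs.map(_run_clean, jobs)
# procs.close()
# procs.join()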
