Parallel execution of a function in Python using the multiprocessing module

I am using the multiprocessing module in Python to run a function in parallel.
The function's signature is:
Parallel_Solution_Combination_Method(subset, i)
The subset parameter is a list element made up of a tuple of chromosomes; chromosome is a class I defined within the same script.
I am running on Lubuntu, a Linux-based OS. The code that I'm using to try to run the function in parallel is:
pool = mp.Pool(processes=2)
results = [pool.apply_async(Parallel_Solution_Combination_Method,
                            args=(subsets[i], i,))
           for i in range(len(subsets))]
However, the problem I'm encountering is that whenever I specify more than one process, the results are not as expected. Say I'm passing a list of subsets of size 10 and I'm using:
processes=2
then the first two outputs have exactly the same values, outputs 3 and 4 are the same, and so on. Whereas if I specify:
processes=1
which is essentially a sequential run, the outcome is correct, as expected (the same as a normal for loop without multiprocessing).
I don't know why my results are getting mixed up even though I'm explicitly sending a different tuple from the set, selected by the index i of the for loop:
args=(subsets[i],i,)
I am running on hardware with two cores, so I was hoping I could run two instances of the function in parallel, but instead it's producing duplicate results. I can't figure out what I'm doing wrong.
Please help! Thank you.
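For completeness, the list comprehension gives me AsyncResult objects, so I collect the actual return values with .get(); this is only a minimal sketch of that step (not my exact code), and the full worker function follows right below:

# Minimal sketch: each AsyncResult.get() blocks until its task finishes
# and re-raises any exception hit inside the worker process.
collected_children = [r.get() for r in results]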
def Parallel_Solution_Combination_Method(subset, counter):
    print 'entered parallel sol comb'
    child_chromosome = chromosome()
    combination_model_offset = 300
    attempts = 0
    while True:
        template1 = subset[0].record_template
        template2 = subset[1].record_template
        template_child = template1
        template_gap1 = find_allIndices(template1, '-')
        template_gap2 = find_allIndices(template2, '-')
        if(len(template_gap1) != 0 and len(template_gap2) != 0):
            template_gap_difference = find_different_indicies(template_gap1, template_gap2)
            if(len(template_gap_difference) != 0):
                template_slice_point = random.choice(template_gap_difference)
                if(template_gap2[template_slice_point - 1] < template_gap1[template_slice_point]):
                    #swap template1 template2 values as well as their respective gap indices
                    #so that in crossover the gaps would not collide with each other.
                    temp_template = template1
                    temp_gap = template_gap1
                    template1 = template2
                    template2 = temp_template
                    template_gap1 = template_gap2
                    template_gap2 = temp_gap
                #the crossing over takes the first part of the child sequence to be up until
                #the crossing point without including it. this way it ensures that the resulting
                #child sequence is different from both of the parents by at least one point.
                child_template_gap = template_gap1[:template_slice_point] + template_gap2[template_slice_point:]
                child_gap_part1 = child_template_gap[:template_slice_point]
                child_gap_part2 = child_template_gap[template_slice_point:]
                if template_slice_point == 0:
                    template_child = template2
                else:
                    template_child = template1[:template_gap1[template_slice_point]]
                    template_residues_part1 = str(template_child).translate(None, '-')
                    template_residues_part2 = str(template2).translate(None, '-')
                    template_residues_part2 = template_residues_part2[len(template_residues_part1):]
                    for i in range(template_gap1[template_slice_point - 1], len(template1)):
                        if i in child_gap_part2:
                            template_child = template_child + '-'
                        else:
                            template_child = template_child + template_residues_part2[0:1]
                            template_residues_part2 = template_residues_part2[1:]
        target1 = subset[0].record_target
        target2 = subset[1].record_target
        target_child = target1
        target_gap1 = find_allIndices(target1, '-')
        target_gap2 = find_allIndices(target2, '-')
        if(len(target_gap1) != 0 and len(target_gap2) != 0):
            target_gap_difference = find_different_indicies(target_gap1, target_gap2)
            if(len(target_gap_difference) != 0):
                target_slice_point = random.choice(target_gap_difference)
                if(target_gap2[target_slice_point - 1] < target_gap1[target_slice_point]):
                    #swap template1 template2 values as well as their respective gap indices
                    #so that in crossover the gaps would not collide with each other.
                    temp_target = target1
                    temp_gap = target_gap1
                    target1 = target2
                    target2 = temp_target
                    target_gap1 = target_gap2
                    target_gap2 = temp_gap
                #the crossing over takes the first part of the child sequence to be up until
                #the crossing point without including it. this way it ensures that the resulting
                #child sequence is different from both of the parents by at least one point.
                child_target_gap = target_gap1[:target_slice_point] + target_gap2[target_slice_point:]
                child_gap_part1 = child_target_gap[:target_slice_point]
                child_gap_part2 = child_target_gap[target_slice_point:]
                if target_slice_point == 0:
                    target_child = target2
                else:
                    target_child = target1[:target_gap1[target_slice_point]]
                    target_residues_part1 = str(target_child).translate(None, '-')
                    target_residues_part2 = str(target2).translate(None, '-')
                    target_residues_part2 = target_residues_part2[len(target_residues_part1):]
                    for i in range(target_gap1[target_slice_point - 1], len(target1)):
                        if i in child_gap_part2:
                            target_child = target_child + '-'
                        else:
                            target_child = target_child + target_residues_part2[0:1]
                            target_residues_part2 = target_residues_part2[1:]
        if not [False for y in Reference_Set if y.record_template == template_child and y.record_target == target_child] or attempts <= 100:
            break
        attempts += 1
    child_chromosome.record_template = template_child
    #print template_child
    child_chromosome.record_target = target_child
    #print target_child
    generate_PIR(template_header, template_description, child_chromosome.record_template, target_header, target_description, child_chromosome.record_target)
    output_values = start_model(template_id, target_id, 'PIR_input.ali', combination_model_offset + counter)
    child_chromosome.molpdf_score = output_values['molpdf']
    #print output_values['molpdf']
    mdl = complete_pdb(env, '1BBH.B99990' + str(combination_model_offset + counter) + '.pdb')
    child_chromosome.normalized_dope_score = mdl.assess_normalized_dope()
    #print mdl.assess_normalized_dope()
    return child_chromosome
This is the code for Parallel_Solution_Combination_Method. In case it's handy, I'm also including the chromosome class that I defined:
class chromosome():
    """basic solution representation that holds alignments and their evaluations"""
    def __init__(self):
        self.record_template = ''
        self.record_target = ''
        self.molpdf_score = 0.0
        self.ga341_score = 0.0
        self.dope_score = 0.0
        self.normalized_dope_score = 0.0
        self.flag_value = 0
        self.distance_value = 0
    def add_molpdf(self, molpdf):
        self.molpdf_score = molpdf
    def add_ga341(self, ga341):
        self.ga341_score = ga341
    def add_dope(self, dope):
        self.dope_score = dope
    def add_normalized_dope(self, normalized_dope):
        self.normalized_dope_score = normalized_dope
    def add_records(self, records):
        self.seq_records = records
        for rec in self.seq_records:
            if rec.id == template_id:
                self.record_template = rec.seq
            elif rec.id == target_id:
                self.record_target = rec.seq
    def set_flag(self, flag):
        self.flag_value = flag
    def add_distance(self, distance):
        self.distance_value = distance
Please note that all of this is within the same Python script.
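One more structural note: the multiprocessing docs recommend wrapping the pool setup in a main guard so the module can be imported safely by worker processes. A minimal sketch of that layout (illustrative only, not my exact script):

import multiprocessing as mp

if __name__ == '__main__':
    # Pool setup guarded so that importing this module does not re-run it.
    pool = mp.Pool(processes=2)
    results = [pool.apply_async(Parallel_Solution_Combination_Method,
                                args=(subsets[i], i,))
               for i in range(len(subsets))]
    pool.close()
    pool.join()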

Related

Parallelized tasks are not distributed across available CPUs

As shown in the code posted below in the DecoupleGridCellsProfilerLoopsPool section, run() is called as many times as there are items in self.__listOfLoopDecouplers, and it works as it is supposed to; that is, the parallelization works properly there.
As shown in the same section, DecoupleGridCellsProfilerLoopsPool.pool.map returns results and I populate several lists. Let's focus on the list named self.__iterablesOfZeroCoverageCell: it contains a number of objects of type gridCellInnerLoopsIteratorsForZeroCoverageModel.
After that, I create the pool ZeroCoverageCellsProcessingPool with the code posted below as well.
The problem I am facing is that the parallelized code in ZeroCoverageCellsProcessingPool is very slow, and the visualization of the CPU tasks shows that no processes are working in parallel, as shown in the video at the URL posted below.
I suspected pickling issues when parallelizing the code in ZeroCoverageCellsProcessingPool, so I removed the entire body of run() in ZeroCoverageCellsProcessingPool; however, that made no difference to the behaviour of the parallelized code.
The URL posted below also shows how the parallelized method of ZeroCoverageCellsProcessingPool behaves.
Given the code posted below, please let me know why the parallelization does not work for the code in ZeroCoverageCellsProcessingPool.
Output URL: (link in the original post to the CPU-usage video)
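Before the full listings, here is a stripped-down sketch of the pattern both pools follow (class, method, and variable names in the sketch are illustrative, not the real ones):

from multiprocessing import Pool

class CellsProcessingPool:
    def __init__(self, items):
        self.items = items                     # e.g. the model objects built earlier

    def run(self, item):
        # per-item work executed in a worker process;
        # both `self` and `item` must be pickled to get there
        return item

    def postTask(self):
        with Pool(processes=4) as pool:
            for res in pool.map(self.run, self.items, chunksize=8):
                pass                           # results are collected in the parent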
DecoupleGridCellsProfilerLoopsPool
def postTask(self):
self.__postTaskStartTime = time.time()
with Pool(processes=int(config['MULTIPROCESSING']['proceses_count'])) as DecoupleGridCellsProfilerLoopsPool.pool:
self.__chunkSize = PoolUtils.getChunkSize(lst=self.__listOfLoopDecouplers,cpuCount=int(config['MULTIPROCESSING']['cpu_count']))
logger.info(f"DecoupleGridCellsProfilerLoopsPool.self.__chunkSize(task per processor):{self.__chunkSize}")
for res in DecoupleGridCellsProfilerLoopsPool.pool.map(self.run,self.__listOfLoopDecouplers,chunksize=self.__chunkSize):
if res[0] is not None and res[1] is None and res[2] is None:
self.__iterablesOfNoneZeroCoverageCell.append(res[0])
elif res[1] is not None and res[0] is None and res[2] is None:
self.__iterablesOfZeroCoverageCell.append(res[1])
elif res[2] is not None and res[0] is None and res[1] is None:
self.__iterablesOfNoDataCells.append(res[2])
else:
raise Exception (f"WTF.")
DecoupleGridCellsProfilerLoopsPool.pool.join()
assert len(self.__iterablesOfNoneZeroCoverageCell)+len(self.__iterablesOfZeroCoverageCell)+len(self.__iterablesOfNoDataCells) == len(self.__listOfLoopDecouplers)
zeroCoverageCellsProcessingPool = ZeroCoverageCellsProcessingPool(self.__devModeForWSAWANTIVer2,self.__iterablesOfZeroCoverageCell)
zeroCoverageCellsProcessingPool.postTask()
def run(self,param:LoopDecoupler):
row = param.getRowValue()
col = param.getColValue()
elevationsTIFFWindowedSegmentContents = param.getElevationsTIFFWindowedSegment()
verticalStep = param.getVericalStep()
horizontalStep = param.getHorizontalStep()
mainTIFFImageDatasetContents = param.getMainTIFFImageDatasetContents()
NDVIsTIFFWindowedSegmentContentsInEPSG25832 = param.getNDVIsTIFFWindowedSegmentContentsInEPSG25832()
URLOrFilePathForElevationsTIFFDatasetInEPSG25832 = param.getURLOrFilePathForElevationsTIFFDatasetInEPSG25832()
threshold = param.getThreshold()
rowsCnt = 0
colsCnt = 0
pixelsValuesSatisfyThresholdInTIFFImageDatasetCnt = 0
pixelsValuesDoNotSatisfyThresholdInTIFFImageDatasetCnt = int(config['window']['width']) * int(config['window']['height'])
pixelsWithNoDataValueInTIFFImageDatasetCnt = int(config['window']['width']) * int(config['window']['height'])
_pixelsValuesSatisfyThresholdInNoneZeroCoverageCell = []
_pixelsValuesDoNotSatisfyThresholdInZeroCoverageCell = []
_pixelsValuesInNoDataCell = []
gridCellInnerLoopsIteratorsForNoneZeroCoverageModel = None
gridCellInnerLoopsIteratorsForZeroCoverageModel = None
gridCellInnerLoopsIteratorsForNoDataCellsModel = None
for x in range(row,row + verticalStep):
if rowsCnt == verticalStep:
rowsCnt = 0
for y in range(col,col + horizontalStep):
if colsCnt == horizontalStep:
colsCnt = 0
pixelValue = mainTIFFImageDatasetContents[0][x][y]
# windowIOUtils.writeContentsToFile(windowIOUtils.getPathToOutputDir()+"/"+config['window']['file_name']+".{0}".format(config['window']['file_extension']), "pixelValue:{0}\n".format(pixelValue))
if pixelValue >= float(threshold):
pixelsValuesSatisfyThresholdInTIFFImageDatasetCnt+=1
_pixelsValuesSatisfyThresholdInNoneZeroCoverageCell.append(elevationsTIFFWindowedSegmentContents[0][rowsCnt][colsCnt])
elif ((pixelValue < float(threshold)) and (pixelValue > float(config['TIFF']['no_data_value']))):
pixelsValuesDoNotSatisfyThresholdInTIFFImageDatasetCnt-=1
_pixelsValuesDoNotSatisfyThresholdInZeroCoverageCell.append(elevationsTIFFWindowedSegmentContents[0][rowsCnt][colsCnt])
elif (pixelValue <= float(config['TIFF']['no_data_value'])):
pixelsWithNoDataValueInTIFFImageDatasetCnt-=1
_pixelsValuesInNoDataCell.append(elevationsTIFFWindowedSegmentContents[0][rowsCnt][colsCnt])
else:
raise Exception ("WTF.Exception: unhandled condition for pixel value: {0}".format(pixelValue))
# _pixelCoordinatesInWindow.append([x,y])
colsCnt+=1
rowsCnt+=1
'''Grid-cell classfication'''
if (pixelsValuesSatisfyThresholdInTIFFImageDatasetCnt > 0):
gridCellInnerLoopsIteratorsForNoneZeroCoverageModel = GridCellInnerLoopsIteratorsForNoneZeroCoverageModel()
gridCellInnerLoopsIteratorsForNoneZeroCoverageModel.setRowValue(row)
gridCellInnerLoopsIteratorsForNoneZeroCoverageModel.setColValue(col)
gridCellInnerLoopsIteratorsForNoneZeroCoverageModel.setVericalStep(verticalStep)
gridCellInnerLoopsIteratorsForNoneZeroCoverageModel.setHorizontalStep(horizontalStep)
gridCellInnerLoopsIteratorsForNoneZeroCoverageModel.setMainTIFFImageDatasetContents(mainTIFFImageDatasetContents)
gridCellInnerLoopsIteratorsForNoneZeroCoverageModel.setNDVIsTIFFWindowedSegmentContentsInEPSG25832(NDVIsTIFFWindowedSegmentContentsInEPSG25832)
gridCellInnerLoopsIteratorsForNoneZeroCoverageModel.setURLOrFilePathForElevationsTIFFDatasetInEPSG25832(URLOrFilePathForElevationsTIFFDatasetInEPSG25832)
gridCellInnerLoopsIteratorsForNoneZeroCoverageModel.setPixelsValuesSatisfyThresholdInTIFFImageDatasetCnt(pixelsValuesSatisfyThresholdInTIFFImageDatasetCnt)
gridCellInnerLoopsIteratorsForNoneZeroCoverageModel.setPixelsValuesSatisfyThresholdInNoneZeroCoverageCell(_pixelsValuesSatisfyThresholdInNoneZeroCoverageCell)
elif (pixelsValuesDoNotSatisfyThresholdInTIFFImageDatasetCnt < (int(config['window']['width']) * int(config['window']['height'])) and pixelsValuesDoNotSatisfyThresholdInTIFFImageDatasetCnt >= 0):
gridCellInnerLoopsIteratorsForZeroCoverageModel = GridCellInnerLoopsIteratorsForZeroCoverageModel()
gridCellInnerLoopsIteratorsForZeroCoverageModel.setRowValue(row)
gridCellInnerLoopsIteratorsForZeroCoverageModel.setColValue(col)
gridCellInnerLoopsIteratorsForZeroCoverageModel.setVericalStep(verticalStep)
gridCellInnerLoopsIteratorsForZeroCoverageModel.setHorizontalStep(horizontalStep)
gridCellInnerLoopsIteratorsForZeroCoverageModel.setMainTIFFImageDatasetContents(mainTIFFImageDatasetContents)
gridCellInnerLoopsIteratorsForZeroCoverageModel.setNDVIsTIFFWindowedSegmentContentsInEPSG25832(NDVIsTIFFWindowedSegmentContentsInEPSG25832)
gridCellInnerLoopsIteratorsForZeroCoverageModel.setURLOrFilePathForElevationsTIFFDatasetInEPSG25832(URLOrFilePathForElevationsTIFFDatasetInEPSG25832)
gridCellInnerLoopsIteratorsForZeroCoverageModel.setPixelsValuesDoNotSatisfyThresholdInTIFFImageDatasetCnt(pixelsValuesDoNotSatisfyThresholdInTIFFImageDatasetCnt)
gridCellInnerLoopsIteratorsForZeroCoverageModel.setPixelsWithNoDataValueInTIFFImageDatasetCnt(pixelsWithNoDataValueInTIFFImageDatasetCnt)
gridCellInnerLoopsIteratorsForZeroCoverageModel.setPixelsValuesDoNotSatisfyThresholdInZeroCoverageCell(_pixelsValuesDoNotSatisfyThresholdInZeroCoverageCell)
elif (pixelsWithNoDataValueInTIFFImageDatasetCnt == 0):
gridCellInnerLoopsIteratorsForNoDataCellsModel = GridCellInnerLoopsIteratorsForNoDataCellsModel()
gridCellInnerLoopsIteratorsForNoDataCellsModel.setRowValue(row)
gridCellInnerLoopsIteratorsForNoDataCellsModel.setColValue(col)
gridCellInnerLoopsIteratorsForNoDataCellsModel.setVericalStep(verticalStep)
gridCellInnerLoopsIteratorsForNoDataCellsModel.setHorizontalStep(horizontalStep)
gridCellInnerLoopsIteratorsForNoDataCellsModel.setMainTIFFImageDatasetContents(mainTIFFImageDatasetContents)
gridCellInnerLoopsIteratorsForNoDataCellsModel.setNDVIsTIFFWindowedSegmentContentsInEPSG25832(NDVIsTIFFWindowedSegmentContentsInEPSG25832)
gridCellInnerLoopsIteratorsForNoDataCellsModel.setURLOrFilePathForElevationsTIFFDatasetInEPSG25832(URLOrFilePathForElevationsTIFFDatasetInEPSG25832)
gridCellInnerLoopsIteratorsForNoDataCellsModel.setPixelsWithNoDataValueInTIFFImageDatasetCnt(pixelsWithNoDataValueInTIFFImageDatasetCnt)
gridCellInnerLoopsIteratorsForNoDataCellsModel.setPixelsValuesInNoDataCell(_pixelsValuesInNoDataCell)
if gridCellInnerLoopsIteratorsForZeroCoverageModel is not None:
gridCellInnerLoopsIteratorsForZeroCoverageModel.setPixelsWithNoDataValueInTIFFImageDatasetCnt(pixelsWithNoDataValueInTIFFImageDatasetCnt)
else:
raise Exception (f"WTF.")
return gridCellInnerLoopsIteratorsForNoneZeroCoverageModel,gridCellInnerLoopsIteratorsForZeroCoverageModel,gridCellInnerLoopsIteratorsForNoDataCellsModel
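PoolUtils.getChunkSize above is a project helper; for context, the heuristic Pool.map itself applies when no chunksize is passed looks roughly like the sketch below (illustrative only, not my helper):

def default_chunksize(n_items, n_workers):
    # Roughly the heuristic CPython's Pool.map uses when chunksize is None:
    # split the iterable into about four chunks per worker process.
    chunksize, extra = divmod(n_items, n_workers * 4)
    if extra:
        chunksize += 1
    return max(chunksize, 1)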
ZeroCoverageCellsProcessingPool:
def postTask(self):
self.__postTaskStartTime = time.time()
"""to collect results per each row
"""
resAllCellsForGridCellsClassifications = []
# NDVIs
resAllCellsForNDVITIFFDetailsForZeroCoverageCell = []
# area of coverage
resAllCellsForAreaOfCoverageForZeroCoverageCell = []
# interception
resAllCellsForInterceptionForZeroCoverageCell = []
# fourCornersOfWindowInEPSG25832
resAllCellsForFourCornersOfWindowInEPSG25832ZeroCoverageCell = []
# outFromEPSG25832ToEPSG4326-lists
resAllCellsForOutFromEPSG25832ToEPSG4326ForZeroCoverageCells = []
# fourCornersOfWindowsAsGeoJSON
resAllCellsForFourCornersOfWindowsAsGeoJSONInEPSG4326ForZeroCoverageCell = []
# calculatedCenterPointInEPSG25832
resAllCellsForCalculatedCenterPointInEPSG25832ForZeroCoverageCell = []
# centerPointsOfWindowInImageCoordinatesSystem
resAllCellsForCenterPointsOfWindowInImageCoordinatesSystemForZeroCoverageCell = []
# pixelValuesOfCenterPoints
resAllCellsForPixelValuesOfCenterPointsForZeroCoverageCell = []
# centerPointOfKeyWindowAsGeoJSONInEPSG4326
resAllCellsForCenterPointOfKeyWindowAsGeoJSONInEPSG4326ForZeroCoverageCell = []
# centerPointInEPSG4326
resAllCellsForCenterPointInEPSG4326ForZeroCoveringCell = []
# average heights
resAllCellsForAverageHeightsForZeroCoverageCell = []
# pixels values
resAllCellsForPixelsValuesDoNotSatisfyThresholdInTIFFImageDatasetCell = []
# area Of Coverage
resAllCellsForAreaOfCoverageForZeroCoverageCell = []
resAllCellsForPixelsValuesDoNotSatisfyThresholdInTIFFImageDatasetCnt = []
noneKeyWindowCnt=0
# center points as string
centerPointsAsStringForZeroCoverageCell = ""
with Pool(processes=int(config['MULTIPROCESSING']['proceses_count'])) as ZeroCoverageCellsProcessingPool.pool:
self.__chunkSize = PoolUtils.getChunkSize(lst=self.__iterables,cpuCount=int(config['MULTIPROCESSING']['cpu_count']))
logger.info(f"ZeroCoverageCellsProcessingPool.self.__chunkSize(task per processor):{self.__chunkSize}")
for res in ZeroCoverageCellsProcessingPool.pool.map(func=self.run,iterable=self.__iterables,chunksize=self.__chunkSize):
resAllCellsForGridCellsClassifications.append(res[0])
# NDVIs
resAllCellsForNDVITIFFDetailsForZeroCoverageCell.append(res[1])
# area of coverage
resAllCellsForAreaOfCoverageForZeroCoverageCell.append(res[2])
# interception
resAllCellsForInterceptionForZeroCoverageCell.append(res[3])
# fourCornersOfWindowInEPSG25832
resAllCellsForFourCornersOfWindowInEPSG25832ZeroCoverageCell.append(res[4])
# outFromEPSG25832ToEPSG4326-lists
resAllCellsForOutFromEPSG25832ToEPSG4326ForZeroCoverageCells.append(res[5])
# fourCornersOfWindowsAsGeoJSONInEPSG4326
resAllCellsForFourCornersOfWindowsAsGeoJSONInEPSG4326ForZeroCoverageCell.append(res[6])
# calculatedCenterPointInEPSG25832
resAllCellsForCalculatedCenterPointInEPSG25832ForZeroCoverageCell.append(res[7])
# centerPointsOfWindowInImageCoordinatesSystem
resAllCellsForCenterPointsOfWindowInImageCoordinatesSystemForZeroCoverageCell.append(res[8])
# pixelValuesOfCenterPoints
resAllCellsForPixelValuesOfCenterPointsForZeroCoverageCell.append(res[9])
# centerPointInEPSG4326
resAllCellsForCenterPointInEPSG4326ForZeroCoveringCell.append(res[10])
# centerPointOfKeyWindowAsGeoJSONInEPSG4326
resAllCellsForCenterPointOfKeyWindowAsGeoJSONInEPSG4326ForZeroCoverageCell.append(res[11])
# average heights
resAllCellsForAverageHeightsForZeroCoverageCell.append(res[12])
# pixels values
resAllCellsForPixelsValuesDoNotSatisfyThresholdInTIFFImageDatasetCell.append(res[13])
# pixelsValues cnt
resAllCellsForPixelsValuesDoNotSatisfyThresholdInTIFFImageDatasetCnt.append(res[14])
noneKeyWindowCnt +=res[15]
# centerPoints-As-String
if (res[16] is not None):
centerPointsAsStringForZeroCoverageCell+=str(res[16])
assert noneKeyWindowCnt == len(self.__iterables)
ZeroCoverageCellsProcessingPool.pool.close()
ZeroCoverageCellsProcessingPool.pool.terminate()
ZeroCoverageCellsProcessingPool.pool.join()
return
def run(self,params:GridCellInnerLoopsIteratorsForZeroCoverageModel):
if params is not None:
logger.info(f"Processing zero coverage cell #(row{params.getRowValue()},col:{params.getColValue()})")
row = params.getRowValue()
col = params.getColValue()
mainTIFFImageDatasetContents = params.getMainTIFFImageDatasetContents()
NDVIsTIFFWindowedSegmentContentsInEPSG25832 = params.getNDVIsTIFFWindowedSegmentContentsInEPSG25832()
URLOrFilePathForElevationsTIFFDatasetInEPSG25832 = params.getURLOrFilePathForElevationsTIFFDatasetInEPSG25832()
datasetElevationsTIFFInEPSG25832 = rasterio.open(URLOrFilePathForElevationsTIFFDatasetInEPSG25832,'r')
_pixelsValuesDoNotSatisfyThresholdInZeroCoverageCell = params.getPixelsValuesDoNotSatisfyThresholdInZeroCoverageCell()
pixelsValuesDoNotSatisfyThresholdInTIFFImageDatasetCnt = params.getPixelsValuesDoNotSatisfyThresholdInTIFFImageDatasetCnt()
countOfNoDataCells = params.getPixelsWithNoDataValueInTIFFImageDatasetCnt()
outFromEPSG25832ToEPSG4326ForZeroCoverageCells = []
fourCornersOfWindowsAsGeoJSONInEPSG4326ForZeroCoverageCell = []
ndviTIFFDetailsForZeroCoverageCell = NDVITIFFDetails(None,None,None).getNDVIValuePer10mX10m()
"""area of coverage per grid-cell"""
areaOfCoverageForZeroCoverageCell = None
""""interception"""
interceptionForZeroCoverageCell = None
CntOfNDVIsWithNanValueInZeroCoverageCell = 0
fourCornersOfWindowInEPSG25832ZeroCoverageCell = None
outFromEPSG25832ToEPSG4326ForZeroCoverageCells = []
fourCornersOfWindowsAsGeoJSONInEPSG4326ForZeroCoverageCell = []
calculatedCenterPointInEPSG25832ForZeroCoverageCell = None
centerPointsOfWindowInImageCoordinatesSystemForZeroCoverageCell = None
pixelValuesOfCenterPointsOfZeroCoverageCell = None
centerPointInEPSG4326ForZeroCoveringCell = None
centerPointOfKeyWindowAsGeoJSONInEPSG4326ForZeroCoverageCell = None
centerPointsAsStringForZeroCoverageCell = None
"""average heights"""
averageHeightsForZeroCoverageCell = None
gridCellClassifiedAs = GridCellClassifier.ZERO_COVERAGE_CELL.value
cntOfNoneKeyWindow = 1
ndviTIFFDetailsForZeroCoverageCell = NDVITIFFDetails(ulX=row//int(config['ndvi']['resolution_height']),ulY=col//int(config['ndvi']['resolution_width']),dataset=NDVIsTIFFWindowedSegmentContentsInEPSG25832).getNDVIValuePer10mX10m()
"""area of coverage per grid-cell"""
areaOfCoverageForZeroCoverageCell = round(AreaOfCoverageDetails(pixelsCount=(int(config['window']['width']) * int(config['window']['height'])) - (pixelsValuesDoNotSatisfyThresholdInTIFFImageDatasetCnt + countOfNoDataCells)).getPercentageOfAreaOfCoverage(),2)
""""interception"""
if math.isnan(ndviTIFFDetailsForZeroCoverageCell):
# ndviTIFFDetailsForZeroCoverageCell = 0
CntOfNDVIsWithNanValueInZeroCoverageCell = 1
interceptionForZeroCoverageCell = config['sentinel_values']['interception']
else:
Indvi = INDVI()
Ic = Indvi.calcInterception(ndviTIFFDetailsForZeroCoverageCell)
Pc=areaOfCoverageForZeroCoverageCell,"""percentage of coverage"""
Pnc=float((int(config['window']['width'])*int(config['window']['height'])) - areaOfCoverageForZeroCoverageCell),"""percentage of non-coverage"""
Inc=float(config['interception']['noneCoverage']),"""interception of none-coverage"""
I=(float(Pc[0])*(Ic))+float((Pnc[0]*Inc[0]))
interceptionForZeroCoverageCell = round(I,2)
if I != 10 and I != float('nan'):
logger.error(f"ndviTIFFDetailsForZeroCoverageCell:{ndviTIFFDetailsForZeroCoverageCell}")
logger.error(f"I:{I}")
fourCornersOfWindowInEPSG25832ZeroCoverageCell = RasterIOPackageUtils.convertFourCornersOfWindowFromImageCoordinatesToCRSByCoordinatesOfCentersOfPixelsMethodFor(row,col,int(config['window']['height']),int(config['window']['width']),datasetElevationsTIFFInEPSG25832)
for i in range(0,len(fourCornersOfWindowInEPSG25832ZeroCoverageCell)):
# fourCornersOfKeyWindowInEPSG4326.append(RasterIOPackageUtils.convertCoordsToDestEPSGForDataset(fourCornersOfWindowInEPSG25832[i],datasetElevationsTIFFInEPSG25832,destEPSG=4326))
outFromEPSG25832ToEPSG4326ForZeroCoverageCells.append(OSGEOUtils.fromEPSG25832ToEPSG4326(fourCornersOfWindowInEPSG25832ZeroCoverageCell[i])) # resultant coords order is in form of lat,lon and it must be in lon,lat.thus, out[1]-lat out[0]-lon
"""fourCornersOfWindowsAsGeoJSONInEPSG4326"""
fourCornersOfWindowInEPSG4326 = []
for i in range(0,len(outFromEPSG25832ToEPSG4326ForZeroCoverageCells)):
fourCornersOfWindowInEPSG4326.append(([outFromEPSG25832ToEPSG4326ForZeroCoverageCells[i][1]],[outFromEPSG25832ToEPSG4326ForZeroCoverageCells[i][0]]))
fourCornersOfWindowsAsGeoJSONInEPSG4326ForZeroCoverageCell.append(jsonUtils.buildFeatureCollectionAsGeoJSONForFourCornersOfKeyWindow(fourCornersOfWindowInEPSG4326[0],fourCornersOfWindowInEPSG4326[1],fourCornersOfWindowInEPSG4326[2],fourCornersOfWindowInEPSG4326[3],areaOfCoverageForZeroCoverageCell))
# debugIOUtils.writeContentsToFile(debugIOUtils.getPathToOutputDir()+"/"+"NDVIsPer10mX10mForKeyWindow"+config['window']['file_name']+".{0}".format(config['window']['file_extension']),"{0}\n".format(NDVIsPer10mX10mForKeyWindow))
"""
building geojson object for a point "center-point" to visualize it.
"""
calculatedCenterPointInEPSG25832ForZeroCoverageCell = MiscUtils.calculateCenterPointsGivenLLOfGridCell(fourCornersOfWindowInEPSG25832ZeroCoverageCell[1])#lower-left corner
centerPointsOfWindowInImageCoordinatesSystemForZeroCoverageCell = RasterIOPackageUtils.convertFromCRSToImageCoordinatesSystemFor(calculatedCenterPointInEPSG25832ForZeroCoverageCell[0],calculatedCenterPointInEPSG25832ForZeroCoverageCell[1],datasetElevationsTIFFInEPSG25832)
pixelValuesOfCenterPointsOfZeroCoverageCell = mainTIFFImageDatasetContents[0][centerPointsOfWindowInImageCoordinatesSystemForZeroCoverageCell[0]][centerPointsOfWindowInImageCoordinatesSystemForZeroCoverageCell[1]]
centerPointInEPSG4326ForZeroCoveringCell = RasterIOPackageUtils.convertCoordsToDestEPSGForDataset(calculatedCenterPointInEPSG25832ForZeroCoverageCell,datasetElevationsTIFFInEPSG25832,destEPSG=4326)
centerPointOfKeyWindowAsGeoJSONInEPSG4326ForZeroCoverageCell = jsonUtils.buildGeoJSONForPointFor(centerPointInEPSG4326ForZeroCoveringCell)
"""average heights"""
averageHeightsForZeroCoverageCell = round(MiscUtils.getAverageFor(_pixelsValuesDoNotSatisfyThresholdInZeroCoverageCell),2)
assert len(_pixelsValuesDoNotSatisfyThresholdInZeroCoverageCell) > 0 and (len(_pixelsValuesDoNotSatisfyThresholdInZeroCoverageCell) <= (int(config['window']['width']) * int(config['window']['height'])) )
"""the following code block is for assertion only"""
if self.__devModeForWSAWANTIVer2 == config['DEVELOPMENT_MODE']['debug']:
assert pixelsValuesDoNotSatisfyThresholdInTIFFImageDatasetCnt >= 0 and (pixelsValuesDoNotSatisfyThresholdInTIFFImageDatasetCnt < (int(config['window']['width']) * int(config['window']['height'])) )
assert (pixelsValuesDoNotSatisfyThresholdInTIFFImageDatasetCnt+countOfNoDataCells) == (int(config['window']['width']) * int(config['window']['height']))
print(f"profiling for gridCellClassifiedAs:{gridCellClassifiedAs}....>pixelsValuesDoNotSatisfyThresholdInTIFFImageDatasetCnt:{pixelsValuesDoNotSatisfyThresholdInTIFFImageDatasetCnt}")
print(f"profiling for gridCellClassifiedAs:{gridCellClassifiedAs}....>countOfNoDataCells:{countOfNoDataCells}")
pixelsValuesDoNotSatisfyThresholdInTIFFImageDatasetCnt = (int(config['window']['width']) * int(config['window']['height'])) - (pixelsValuesDoNotSatisfyThresholdInTIFFImageDatasetCnt + countOfNoDataCells)
assert pixelsValuesDoNotSatisfyThresholdInTIFFImageDatasetCnt == 0, (f"WTF.")
print(f"profiling for gridCellClassifiedAs:{gridCellClassifiedAs}....>computed pixelsValuesDoNotSatisfyThresholdInTIFFImageDatasetCnt:{pixelsValuesDoNotSatisfyThresholdInTIFFImageDatasetCnt}")
print(f"\n")
centerPointAsTextInWKTInEPSG3857 = CoordinatesUtils.buildWKTPointFormatForSinglePointFor(calculatedCenterPointInEPSG25832ForZeroCoverageCell[0],calculatedCenterPointInEPSG25832ForZeroCoverageCell[1])
s = centerPointAsTextInWKTInEPSG3857.replace("POINT","")
s = s.replace("(","")
s = s.replace(")","")
s = s.strip()
s = s.split(" ")
centerPointsAsStringForZeroCoverageCell = s[0] + "\t" + s[1] + "\n"
centerPointsAsStringForZeroCoverageCell = centerPointsAsStringForZeroCoverageCell.replace('\'',"")
return gridCellClassifiedAs,ndviTIFFDetailsForZeroCoverageCell,areaOfCoverageForZeroCoverageCell,interceptionForZeroCoverageCell,fourCornersOfWindowInEPSG25832ZeroCoverageCell,outFromEPSG25832ToEPSG4326ForZeroCoverageCells,fourCornersOfWindowsAsGeoJSONInEPSG4326ForZeroCoverageCell,calculatedCenterPointInEPSG25832ForZeroCoverageCell,centerPointsOfWindowInImageCoordinatesSystemForZeroCoverageCell,pixelValuesOfCenterPointsOfZeroCoverageCell,centerPointInEPSG4326ForZeroCoveringCell,centerPointOfKeyWindowAsGeoJSONInEPSG4326ForZeroCoverageCell,averageHeightsForZeroCoverageCell,np.array(_pixelsValuesDoNotSatisfyThresholdInZeroCoverageCell).tolist(),pixelsValuesDoNotSatisfyThresholdInTIFFImageDatasetCnt,cntOfNoneKeyWindow,centerPointsAsStringForZeroCoverageCell,CntOfNDVIsWithNanValueInZeroCoverageCell
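Regarding the pickling suspicion above: a quick way to see how much data map has to serialize per task is to time pickle.dumps on one of the items (a sketch; the variable name for the item list is illustrative):

import pickle
import time

item = iterables_of_zero_coverage_cell[0]   # one GridCellInnerLoopsIteratorsForZeroCoverageModel
t0 = time.time()
payload = pickle.dumps(item)
print(len(payload), "bytes, pickled in", time.time() - t0, "seconds")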

Python Return Statement Not Providing Expected Results

Just attempting to return values from a defined function. When calling the function first and then attempting to print the return values, I receive "[variable] not defined". However, if I run print(qb_stat_filler()) it prints the results as a tuple. I need the individual variables returned so I can use them in a separate function.
For example,
print(qb_stat_filler())
outputs: (0, 11, 24, 24.2024, 39.1143, 293.0, 1.9143000000000001, 0.2262, 97.84333355313255)
but when trying
qb_stat_filler()
print(cmp_avg)
print(att_avg)
outputs: NameError: name 'cmp_avg' is not defined
Process finished with exit code 1
I've tried establishing the variables outside of the function, then passing and returning them, and that did not work either. Any thoughts?
def qb_stat_filler():
    n_input = input('Player name: ')
    t_input = input('Players team: ')
    loc_input = input('H or #: ')
    o_input = input('Opponent: ')
    # convert index csv to dictionary of player values
    q = pd.read_csv('Models\\QB Indexes\\QBname.csv')
    q = q[['Player', 'Num']]
    qb_dict = dict(q.values)
    name = qb_dict.get('{}'.format(n_input))
    t = pd.read_csv('Models\\QB Indexes\\Tmname.csv')
    t = t[['Tm', 'Num']]
    tm_dict = dict(t.values)
    team = tm_dict.get('{}'.format(t_input))
    loc = 0
    if loc_input == '#':
        loc = 0
    elif loc_input == 'H':
        loc = 1
    z = pd.read_csv('Models\\QB Indexes\\Oppname.csv')
    z = z[['Opp', 'Num']]
    opp_dict = dict(z.values)
    opp = opp_dict.get('{}'.format(o_input))
    # *there are several lines of code here that involve SQL
    #  queries and data cleansing*
    cmp_avg = (cmp_match + cmpL4) / 2
    att_avg = (patt_match + pattL4) / 2
    pyds_avg = (py_match + pydsL4) / 2
    ptd_avg = (ptdL4 + ptd_match) / 2
    int_avg = (intL4 + int_match) / 2
    qbr_avg = (qbr_match + qbrL4) / 2
    return (name, team, opp, cmp_avg, att_avg, pyds_avg, ptd_avg,
            int_avg, qbr_avg)

qb_stat_filler()
You might consider:
def qb_stat_filler():
    stats = {}
    ...
    stats['name'] = name
    z = z[['Opp', 'Num']]
    opp_dict = dict(z.values)
    stats['opp'] = opp_dict.get('{}'.format(o_input))
    ...
    stats['cmp_avg'] = (cmp_match + cmpL4) / 2
    stats['att_avg'] = (patt_match + pattL4) / 2
    stats['pyds_avg'] = (py_match + pydsL4) / 2
    stats['ptd_avg'] = (ptdL4 + ptd_match) / 2
    stats['int_avg'] = (intL4 + int_match) / 2
    stats['qbr_avg'] = (qbr_match + qbrL4) / 2
    return stats

...
stats = qb_stat_filler()
print(stats['cmp_avg'])
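Alternatively, if you keep the current tuple return, remember that the names only exist inside the function; the caller has to bind the returned values itself, for example:

# Unpack the returned tuple into names in the caller's scope.
(name, team, opp, cmp_avg, att_avg, pyds_avg,
 ptd_avg, int_avg, qbr_avg) = qb_stat_filler()
print(cmp_avg)
print(att_avg)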

How can I run this code with two loops faster? Can I run it without using for?

I want to run this code over a much wider range than this one, so I want to make it run faster.
Is it possible to use something other than these for loops?
z1 = 3
z2 = HEIGHT - 1

def myfunction(z1, z2):
    for l in range(z1):
        vector = np.zeros(WIDTH)
        vector[WIDTH//2] = 1
        result = []
        result.append(vector)
        for i in range(z2):
            vector = doPercolationStep(vector, PROP, i)
            result.append(vector)
        result = np.array(result)
        ss = result.astype(int)
        ss = np.where(ss == 0, -1, ss)
        ww = (ss + (ss.T)) / 2
        re_size = ww / (np.sqrt(L))
        matr5 = re_size
        np.savetxt('F:/folder/matr5/' + str(l) + '.csv', matr5)
and doPercolationStep is:
WIDTH = 5
HEIGHT = 5
L = 5
PROP = 0.6447

def doPercolationStep(vector, PROP, time):
    even = time % 2  # even is 1 or 0
    vector_copy = np.copy(vector)
    WIDTH = len(vector)
    for i in range(even, WIDTH, 2):
        if vector[i] == 1:
            pro1 = random.random()
            pro2 = random.random()
            if pro1 < PROP:
                vector_copy[(i + WIDTH - 1) % WIDTH] = 1  # left neighbour of i
            if pro2 < PROP:
                vector_copy[(i + 1) % WIDTH] = 1  # right neighbour of i
            vector_copy[i] = 0
    return vector_copy
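One way to drop the inner for loop is to operate on whole index arrays at once. The sketch below is not step-for-step equivalent to the original (it draws from np.random instead of random, and when WIDTH is odd the wrap-around cell can resolve differently because all neighbour writes happen before the active sites are cleared), but it shows the idea:

import numpy as np

def doPercolationStep_vectorized(vector, PROP, time):
    # Same update rule as above, expressed with index arrays: every occupied
    # site on the chosen parity spreads to each neighbour with probability PROP,
    # then becomes empty.
    width = len(vector)
    vector_copy = np.copy(vector)
    idx = np.arange(time % 2, width, 2)
    active = idx[vector[idx] == 1]
    spread_left = active[np.random.random(active.size) < PROP]
    spread_right = active[np.random.random(active.size) < PROP]
    vector_copy[(spread_left - 1) % width] = 1    # left neighbours
    vector_copy[(spread_right + 1) % width] = 1   # right neighbours
    vector_copy[active] = 0                       # occupied sites are emptied
    return vector_copy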

Problems with calculating GC skews using Python

Hi everyone,
I have some problems with calculating GC skews in Python.
My two major inputs are a FASTA file and a BED file.
The BED file has the columns gn (0), gene_type (1), gene name (2), chromosome (3), strand (4), num (5), and start (6); the numbers are the index numbers in Python. I am trying to use functions that calculate the GC skew of the sense and antisense strand from the start site of each gene, with a 100 bp window. These are the functions:
import re
import sys
import os

# opening bed file
content = []
with open("gene_info.full.tsv") as new:
    for line in new:
        content.append(line.strip().split())
content = content[1:]

def fasta2dict(fil):
    dic = {}
    scaf = ''
    seq = []
    for line in open(fil):
        if line.startswith(">") and scaf == '':
            scaf = line.split(' ')[0].lstrip(">").replace("\n", "")
        elif line.startswith(">") and scaf != '':
            dic[scaf] = ''.join(seq)
            scaf = line.split(' ')[0].lstrip(">").replace("\n", "")
            seq = []
        else:
            seq.append(line.rstrip())
    dic[scaf] = ''.join(seq)
    return dic

dic_file = fasta2dict("full.fa")

# functions for gc skew
def GC_skew_up(strand, loc, seq, window=100):  # need -1 for index
    values_up = []
    loc = loc - 1
    if strand == "+":
        sp_up = seq[loc - window : loc]
        g_up = sp_up.count('G') + sp_up.count('g')
        c_up = sp_up.count('C') + sp_up.count('c')
        try:
            skew_up = (g_up - c_up) / float(g_up + c_up)
        except ZeroDivisionError:
            skew_up = 0.0
        values_up.append(skew_up)
    elif strand == "-":
        sp_up = seq[loc : loc + window]
        g_up = sp_up.count('G') + sp_up.count('g')
        c_up = sp_up.count('C') + sp_up.count('c')
        try:
            skew_up = (c_up - g_up) / float(g_up + c_up)
        except ZeroDivisionError:
            skew_up = 0.0
        values_up.append(skew_up)
    return values_up

def GC_skew_dw(strand, loc, seq, window=100):
    values_dw = []
    loc = loc - 1
    if strand == "+":
        sp_dw = seq[loc : loc + window]
        g_dw = sp_dw.count('G') + sp_dw.count('g')
        c_dw = sp_dw.count('C') + sp_dw.count('c')
        try:
            skew_dw = (g_dw - c_dw) / float(g_dw + c_dw)
        except ZeroDivisionError:
            skew_dw = 0.0
        values_dw.append(skew_dw)
    elif strand == "-":
        sp_dw = seq[loc - window : loc]
        g_dw = sp_dw.count('G') + sp_dw.count('g')
        c_dw = sp_dw.count('C') + sp_dw.count('c')
        try:
            skew_dw = (c_dw - g_dw) / float(g_dw + c_dw)
        except ZeroDivisionError:
            skew_dw = 0.0
        values_dw.append(skew_dw)
    return values_dw
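Both functions compute the standard skew, (G - C) / (G + C), and flip the sign for the antisense strand; a tiny worked example of the value being calculated (illustrative only):

window_seq = "GGGCAT"                       # 3 G and 1 C in this toy window
g = window_seq.upper().count('G')
c = window_seq.upper().count('C')
print((g - c) / float(g + c))               # 0.5  -> sense-strand skew
print((c - g) / float(g + c))               # -0.5 -> antisense value for the same window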
As I said, I want to calculate the GC skews over 100 bp of each strand from the start site of the genes.
Therefore, I wrote code that gets the chromosome name from the BED file and the sequence data from the FASTA file.
Then, according to the gene name and strand information, I expected the code to find the correct start site and calculate the GC skew for the 100 bp window.
However, when I run this code, the GC skew of the - strand is wrong while the + strand is correct (I have a reference set of correct GC skew values that I compare against).
The - strand skews differ from the correct data, but I don't know what the problem is.
Could anyone tell me what is wrong with this code?
Thanks in advance!
window = 100
gname = []
up = []
dw = []
for match in content:
    seq_chr = dic_file[str(match[3])]
    if match[4] == "+":
        strand = match[4]
        new = int(match[6])
        sen_up = GC_skew_up(strand, new, seq_chr, window=100)
        sen_dw = GC_skew_dw(strand, new, seq_chr, window=100)
        gname.append(match[2])
        up.append(str(sen_up[0]))
        dw.append(str(sen_dw[0]))
    if match[4] == "-":
        strand = match[4]
        new = int(match[6])
        an_up = GC_skew_up(strand, new, seq_chr, window=100)
        an_dw = GC_skew_dw(strand, new, seq_chr, window=100)
        gname.append(match[2])
        up.append(str(an_up[0]))
        dw.append(str(an_dw[0]))
tot = zip(gname, up, dw)

tkinter execution dies after about 140 iterations with no error message (mem leak?)

My code dies after about 140+ iterations, and I don't know why. I guess a memory leak is a possibility, but I couldn't find one. I also found out that changing some arithmetic constants can prolong the time until the crash.
I have a genetic algorithm that tries to find the best (i.e. minimal-steps) route from point A (src) to point B (dst).
I create a list of random chromosomes, where each chromosome has:
src + dst [always the same]
a list of directions (random)
I then run the algorithm:
find the best route and draw it (for visualization purposes)
with probability P, replace chromosomes with cross-overs (i.e. pick 2, take the "end" of one's directions, and replace the "end" of the second's)
with probability Q, mutate (replace the next direction with a random direction)
This all goes well, and most of the time I do find a route (usually not the ideal one), but sometimes, when it searches for a long time (say, about 140+ iterations), it just crashes. No warning. No error.
How can I prevent that? (A simple iteration limit would work, but I do want the algorithm to be able to run for a long time [~2000+ iterations].)
I think the relevant parts of the code are the update function inside the GUI class, which calls cross_over.
When playing with the update_fitness() score values (changing score -= (weight+1)*2000*(shift_x + shift_y) to score -= (weight+1)*2*(shift_x + shift_y)) it runs for a longer time. Could this be some kind of arithmetic overflow?
import tkinter as tk
from enum import Enum
from random import randint, sample
from copy import deepcopy
from time import sleep
from itertools import product
debug_flag = False
class Direction(Enum):
Up = 0
Down = 1
Left = 2
Right = 3
def __str__(self):
return str(self.name)
def __repr__(self):
return str(self.name)[0]
# A chromosome is a list of directions that should lead the way from src to dst.
# Each step in the chromosome is a direction (up, down, right ,left)
# The chromosome also keeps track of its route
class Chromosome:
def __init__(self, src = None, dst = None, length = 10, directions = None):
self.MAX_SCORE = 1000000
self.route = [src]
if not directions:
self.directions = [Direction(randint(0,3)) for i in range(length)]
else:
self.directions = directions
self.src = src
self.dst = dst
self.fitness = self.MAX_SCORE
def __str__(self):
return str(self.fitness)
def __repr__(self):
return self.__str__()
def set_src(self, pixel):
self.src = pixel
def set_dst(self, pixel):
self.dst = pixel
def set_directions(self, ls):
self.directions = ls
def update_fitness(self):
# Higher score - a better fitness
score = self.MAX_SCORE - len(self.route)
score += 4000*(len(set(self.route)) - len(self.route)) # penalize returning to the same cell
score += (self.dst in self.route) * 500 # bonus routes that get to dst
for weight,cell in enumerate(self.route):
shift_x = abs(cell[0] - self.dst[0])
shift_y = abs(cell[1] - self.dst[1])
score -= (weight+1)*2000*(shift_x + shift_y) # penalize any wrong turn
self.fitness = max(score, 0)
def update(self, mutate_chance = 0.9):
# mutate #
self.mutate(chance = mutate_chance)
# move according to direction
last_cell = self.route[-1]
try:
direction = self.directions[len(self.route) - 1]
except IndexError:
print('No more directions. Halting')
return
if direction == Direction.Down:
x_shift, y_shift = 0, 1
elif direction == Direction.Up:
x_shift, y_shift = 0, -1
elif direction == Direction.Left:
x_shift, y_shift = -1, 0
elif direction == Direction.Right:
x_shift, y_shift = 1, 0
new_cell = last_cell[0] + x_shift, last_cell[1] + y_shift
self.route.append(new_cell)
self.update_fitness()
def cross_over(p1, p2, loc = None):
# find the cross_over point
if not loc:
loc = randint(0,len(p1.directions))
# choose one of the parents randomly
x = randint(0,1)
src_parent = (p1, p2)[x]
dst_parent = (p1, p2)[1 - x]
son = deepcopy(src_parent)
son.directions[loc:] = deepcopy(dst_parent.directions[loc:])
return son
def mutate(self, chance = 1):
if 100*chance > randint(0,99):
self.directions[len(self.route) - 1] = Direction(randint(0,3))
class GUI:
def __init__(self, rows = 10, cols = 10, iteration_timer = 100, chromosomes = [], cross_over_chance = 0.5, mutation_chance = 0.3, MAX_ITER = 100):
self.rows = rows
self.cols = cols
self.canv_w = 800
self.canv_h = 800
self.cell_w = self.canv_w // cols
self.cell_h = self.canv_h // rows
self.master = tk.Tk()
self.canvas = tk.Canvas(self.master, width = self.canv_w, height = self.canv_h)
self.canvas.pack()
self.rect_dict = {}
self.iteration_timer = iteration_timer
self.iterations = 0
self.MAX_ITER = MAX_ITER
self.chromosome_list = chromosomes
self.src = chromosomes[0].src # all chromosomes share src + dst
self.dst = chromosomes[0].dst
self.prev_best_route = []
self.cross_over_chance = cross_over_chance
self.mutation_chance = mutation_chance
self.no_obstacles = True
# init grid #
for r in range(rows):
for c in range(cols):
self.rect_dict[(r, c)] = self.canvas.create_rectangle(r *self.cell_h, c *self.cell_w,
(1+r)*self.cell_h, (1+c)*self.cell_w,
fill="gray")
# init grid #
# draw src + dst #
self.color_src_dst()
# draw src + dst #
# after + mainloop #
self.master.after(iteration_timer, self.start_gui)
tk.mainloop()
# after + mainloop #
def start_gui(self):
self.start_msg = self.canvas.create_text(self.canv_w // 2,3*self.canv_h // 4, fill = "black", font = "Times 25 bold underline",
text="Starting new computation.\nPopulation size = %d\nCross-over chance = %.2f\nMutation chance = %.2f" %
(len(self.chromosome_list), self.cross_over_chance, self.mutation_chance))
self.master.after(2000, self.update)
def end_gui(self, msg="Bye Bye!"):
self.master.wm_attributes('-alpha', 0.9) # transparency
self.canvas.create_text(self.canv_w // 2,3*self.canv_h // 4, fill = "black", font = "Times 25 bold underline", text=msg)
cell_ls = []
for idx,cell in enumerate(self.prev_best_route):
if cell in cell_ls:
continue
cell_ls.append(cell)
self.canvas.create_text(cell[0]*self.cell_w, cell[1]*self.cell_h, fill = "purple", font = "Times 16 bold italic", text=str(idx+1))
self.master.after(3000, self.master.destroy)
def color_src_dst(self):
r_src = self.rect_dict[self.src]
r_dst = self.rect_dict[self.dst]
c_src = 'blue'
c_dst = 'red'
self.canvas.itemconfig(r_src, fill=c_src)
self.canvas.itemconfig(r_dst, fill=c_dst)
def color_route(self, route, color):
for cell in route:
try:
self.canvas.itemconfig(self.rect_dict[cell], fill=color)
except KeyError:
# out of bounds -> ignore
continue
# keep the src + dst
self.color_src_dst()
# keep the src + dst
def compute_shortest_route(self):
if self.no_obstacles:
return (1 +
abs(self.chromosome_list[0].dst[0] - self.chromosome_list[0].src[0]) +
abs(self.chromosome_list[0].dst[1] - self.chromosome_list[0].src[1]))
else:
return 0
def create_weighted_chromosome_list(self):
ls = []
for ch in self.chromosome_list:
tmp = [ch] * (ch.fitness // 200000)
ls.extend(tmp)
return ls
def cross_over(self):
new_chromosome_ls = []
weighted_ls = self.create_weighted_chromosome_list()
while len(new_chromosome_ls) < len(self.chromosome_list):
try:
p1, p2 = sample(weighted_ls, 2)
son = Chromosome.cross_over(p1, p2)
if son in new_chromosome_ls:
continue
else:
new_chromosome_ls.append(son)
except ValueError:
continue
return new_chromosome_ls
def end_successfully(self):
self.end_gui(msg="Got to destination in %d iterations!\nBest route length = %d" % (len(self.prev_best_route), self.compute_shortest_route()))
def update(self):
# first time #
self.canvas.delete(self.start_msg)
# first time #
# end #
if self.iterations >= self.MAX_ITER:
self.end_gui()
return
# end #
# clean the previously best chromosome route #
self.color_route(self.prev_best_route[1:], 'gray')
# clean the previously best chromosome route #
# cross over #
if 100*self.cross_over_chance > randint(0,99):
self.chromosome_list = self.cross_over()
# cross over #
# update (includes mutations) all chromosomes #
for ch in self.chromosome_list:
ch.update(mutate_chance=self.mutation_chance)
# update (includes mutations) all chromosomes #
# show all chromsome fitness values #
if debug_flag:
fit_ls = [ch.fitness for ch in self.chromosome_list]
print(self.iterations, sum(fit_ls) / len(fit_ls), fit_ls)
# show all chromsome fitness values #
# find and display best chromosome #
best_ch = max(self.chromosome_list, key=lambda ch : ch.fitness)
self.prev_best_route = deepcopy(best_ch.route)
self.color_route(self.prev_best_route[1:], 'gold')
# find and display best chromosome #
# check if got to dst #
if best_ch.dst == best_ch.route[-1]:
self.end_successfully()
return
# check if got to dst #
# after + update iterations #
self.master.after(self.iteration_timer, self.update)
self.iterations += 1
# after + update iterations #
def main():
iter_timer, ITER = 10, 350
r,c = 20,20
s,d = (13,11), (7,8)
population_size = [80,160]
cross_over_chance = [0.2,0.4,0.5]
for pop_size, CO_chance in product(population_size, cross_over_chance):
M_chance = 0.7 - CO_chance
ch_ls = [Chromosome(src=s, dst=d, directions=[Direction(randint(0,3)) for i in range(ITER)]) for i in range(pop_size)]
g = GUI(rows=r, cols=c, chromosomes = ch_ls, iteration_timer=iter_timer,
cross_over_chance=CO_chance, mutation_chance=M_chance, MAX_ITER=ITER-1)
del(ch_ls)
del(g)
if __name__ == "__main__":
main()
I do not know if you know the Python profiling tool in Visual Studio, but it is quite useful in cases like yours (though I usually program with editors, like VS Code).
I have run your program and, as you said, it sometimes crashes. I analyzed the code with the profiling tool and it seems that the problem is in the cross_over function, specifically the calls to random.
I would strongly suggest reviewing your cross_over and mutation functions. The random functions should not be called so many times (about 2 million calls).
I have programmed genetic algorithms before and, to me, it seems that your program is falling into a local minimum. What is usually suggested in these cases is playing with the mutation percentage: try increasing it a little so that you can get out of the local minimum.
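If you do not have the Visual Studio profiler at hand, the standard library's cProfile gives the same kind of per-function call counts; a sketch (run it instead of calling main() directly, assuming main() is defined as in your code):

import cProfile
import pstats

# Profile one full run and list the most frequently called functions;
# heavy use of random.randint / random.random shows up in the ncalls column.
cProfile.run('main()', 'ga_profile.out')
pstats.Stats('ga_profile.out').sort_stats('ncalls').print_stats(15)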
