Recursion to iteration in Python

We are trying to run a cluster analysis on a large amount of data. We are fairly new to Python and have found that an iterative function is much more efficient than a recursive one. Now we are trying to make that change, but it is much harder than we thought.
The code below is the heart of our clustering function and takes over 90 percent of the running time. Can you help us change it into an iterative one?
Some extra information: the taunach function collects the neighbours of a point, which will later form the clusters. The problem is that we have very many points.
def taunach(tau, delta, i, s, nach, anz):
    dis = tabelle[s].dist
    # delta = tau
    x = data[i]
    y = Skalarprodukt(data[tabelle[s].index] - x)
    a = tau - abs(dis)
    # LA.norm(data[tabelle[s].index] - x)
    if y < a * abs(a):
        nach.update({item.index for item in tabelle[tabelle[s].inner:tabelle[s].outer - 1]})
        anz = anzahl(delta, i, tabelle[s].inner, anz)
        if dis > -1:
            b = dis - tau
            if y >= b * abs(b):  # *(1-0.001):
                nach, anz = taunach(tau, delta, i, tabelle[s].outer, nach, anz)
    else:
        if y < tau**2:
            nach.add(tabelle[s].index)
            if y < delta:
                anz += 1
        if tabelle[s].dist > -4:
            b = dis - tau
            if y >= b * abs(b):  # *(1-0.001)):
                nach, anz = taunach(tau, delta, i, tabelle[s].outer, nach, anz)
        if tabelle[s].dist > -1:
            if y <= (dis + tau)**2:
                nach, anz = taunach(tau, delta, i, tabelle[s].inner, nach, anz)
    return nach, anz
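A standard way to remove this kind of recursion is to keep an explicit stack (a plain Python list) of the node indices that the recursive calls would have visited. Below is a minimal sketch of that transformation, reusing the same global tabelle, data, Skalarprodukt and anzahl objects and following the structure of the code shown above; it is an illustration of the pattern, not a verified drop-in replacement.

def taunach_iter(tau, delta, i, s, nach, anz):
    x = data[i]
    stack = [s]  # node indices still to process; replaces the call stack
    while stack:
        s = stack.pop()
        dis = tabelle[s].dist
        y = Skalarprodukt(data[tabelle[s].index] - x)
        a = tau - abs(dis)
        if y < a * abs(a):
            nach.update({item.index for item in tabelle[tabelle[s].inner:tabelle[s].outer - 1]})
            anz = anzahl(delta, i, tabelle[s].inner, anz)
            if dis > -1:
                b = dis - tau
                if y >= b * abs(b):
                    stack.append(tabelle[s].outer)  # was a recursive call
        else:
            if y < tau**2:
                nach.add(tabelle[s].index)
                if y < delta:
                    anz += 1
            if dis > -4:
                b = dis - tau
                if y >= b * abs(b):
                    stack.append(tabelle[s].outer)  # was a recursive call
            if dis > -1:
                if y <= (dis + tau)**2:
                    stack.append(tabelle[s].inner)  # was a recursive call
    return nach, anz

The visiting order differs from the recursive version, but since nach is a set and anz a counter the result should not depend on it, assuming anzahl itself does not depend on visiting order.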

Related

Optimizing python function for Numba

The following is the Python function which I am trying to rewrite for Numba:
def freeoh_count_nojit(coord=np.array([[]]),
                       molInterfaceIndex=np.array([]),
                       hNeighbourList=np.array([]),
                       topol=np.array([[]]),
                       cos_HAngle=0.0,
                       cellsize=np.array([]),
                       is_orig_def=False, is_new_def=True):
    labelArray = []
    freeOHcosDA = []; freeOHcosDAA = []
    mol1Coord = np.zeros((3, 3), dtype=float)
    labelArray = np.empty(molInterfaceIndex.shape[0], dtype="U10")
    for i in range(molInterfaceIndex.shape[0]):  # loop over selected molecules
        mol2CoordList = []; timesave = []
        mol1Coord = np.array([coord[k] for k in topol[molInterfaceIndex[i]]])  # extract center molecule
        gen = np.array([index for index in hNeighbourList[i] if index != -1])  # remove padding
        for j in range(gen.shape[0]):
            mol2CoordList.append([coord[k] for k in topol[gen[j]]])  # extract neighbors
        mol2Coord = np.array(mol2CoordList).reshape(-1, 3)
        if is_orig_def:
            acceptor, donor, cosAngle = interface_hbonding_orig(mol1Coord, mol2Coord, cos_HAngle, cellsize)
            labelArray[i] = "D" * np.abs(2 - np.sum(donor)) + "A" * np.clip(np.array([np.sum(acceptor)]), 1, 2)[0]
        elif is_new_def:
            acceptor, donor, cosAngle = interface_hbonding_new(mol1Coord, mol2Coord, cos_HAngle, cellsize)
            labelArray[i] = "D" * np.abs(2 - np.sum(donor)) + "A" * np.sum(acceptor)
        if labelArray[i] in "DA":
            freeOHcosDA.append(cosAngle)
        elif labelArray[i] in "DAA":
            freeOHcosDAA.append(cosAngle)
    freeOHcos = freeOHcosDA + freeOHcosDAA
    return labelArray, freeOHcos
The function takes in a coordinate frame of the simulation molecules. The code selects a central molecule from an index list molInterfaceIndex and extracts its neighbouring molecules' coordinates from a pre-generated neighbour list (built with scipy.spatial.KDTree, which cannot be called from a jitted function). The central molecule and its neighbours are sent to a jitted function which returns two [1,1] arrays and a scalar, which are then used to label the central molecule.
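For context, a padded neighbour list of the kind hNeighbourList describes can be produced outside the jitted code roughly as sketched below; the helper name and the -1 padding convention are assumptions taken from the snippet above, not the asker's actual code.

import numpy as np
from scipy.spatial import KDTree

def build_padded_neighbour_list(centers, points, radius):
    # Query all neighbours within `radius` of each center; KDTree returns a
    # ragged list of index lists, which is padded with -1 into a rectangular
    # int array so it can be handed to a jitted function.
    tree = KDTree(points)
    neigh = tree.query_ball_point(centers, r=radius)
    width = max((len(n) for n in neigh), default=0)
    padded = np.full((len(neigh), width), -1, dtype=np.int64)
    for row, indices in enumerate(neigh):
        padded[row, :len(indices)] = indices
    return padded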
My attempt at rewriting the above Python function is below:
@njit(cache=True, parallel=True)
def freeoh_count_jit(coord=np.array([[]]),
                     molInterfaceIndex=np.array([]),
                     hNeighbourList=np.array([]),
                     topol=np.array([[]]),
                     cos_HAngle=0.0,
                     cellsize=np.array([]),
                     is_orig_def=False, is_new_def=True):
    NAtomsMol = 3  # no. of atoms in a molecule
    _M = molInterfaceIndex.shape[0]
    _N = hNeighbourList.shape[1]
    mol1Coord = np.zeros((NAtomsMol, 3), dtype=np.float64)
    mol2Coord = np.zeros((_N * NAtomsMol, 3), dtype=np.float64)
    acceptor = np.zeros((_M, 2), dtype=int)
    donor = np.zeros((_M, 2), dtype=int)
    cosAngle = np.zeros(_M, dtype=np.float64)
    gen = np.zeros(_M, dtype=int)
    freeOHMask = np.zeros(_M, dtype=int) == 0
    labelArray = np.empty(_M, dtype="U10")
    for i in range(_M):  # loop over selected molecules
        for index, j in enumerate(topol[molInterfaceIndex[i]]):
            mol1Coord[index] = coord[j]  # extract center molecule
        for indexJ, j in enumerate(hNeighbourList[i]):
            for indexK, k in enumerate(topol[j]):
                mol2Coord[indexK + topol[j].shape[0] * indexJ] = coord[k]  # extract neighbors
        gen[i] = len(np.array([index for index in hNeighbourList[i] if index != -1])) * NAtomsMol  # actual number of neighbor atoms
        if is_orig_def:
            acceptor[i], donor[i], cosAngle[i] = interface_hbonding_orig(mol1Coord, mol2Coord[:gen[i]], cos_HAngle, cellsize)
            labelArray[i] = "D" * np.abs(2 - np.sum(donor[i])) + "A" * np.clip(np.array([np.sum(acceptor[i])]), 1, 2)[0]
        elif is_new_def:
            acceptor[i], donor[i], cosAngle[i] = interface_hbonding_new(mol1Coord, mol2Coord[:gen[i]], cos_HAngle, cellsize)
            labelArray[i] = "D" * np.abs(2 - np.sum(donor[i])) + "A" * np.sum(acceptor[i])
    freeOHMask[np.where(cosAngle > 1.0)] = False
    return acceptor, donor, labelArray, freeOHMask
The main issue is that the @njit function seems to provide incorrect results when using numba.prange for the outer loop. Also, the execution time of the function increases per call, which is a bit confusing. The functions interface_hbonding_orig() and interface_hbonding_new() are already jitted, so I think they are out of scope of the discussion here. One of the bigger questions is whether I even need to jit this function at all, as the most time-consuming part is supposed to be the array selection in the first few lines of the outer loop. If anyone has any suggestions for rewriting this function, or even for the algorithm design, it would be really helpful.
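One plausible cause of the wrong results under numba.prange, worth checking against the code above: mol1Coord and mol2Coord are allocated once outside the loop and written by every iteration, so parallel iterations race on the shared buffers. The toy example below (hypothetical, not the asker's code) shows the difference between a shared scratch buffer and one allocated inside the prange body, which is private to each iteration.

import numpy as np
from numba import njit, prange

@njit(parallel=True)
def racy_sum(data):
    out = np.zeros(data.shape[0])
    scratch = np.zeros(data.shape[1])  # one buffer shared by all threads
    for i in prange(data.shape[0]):
        scratch[:] = data[i] * 2.0     # iterations overwrite each other: race
        out[i] = scratch.sum()
    return out

@njit(parallel=True)
def safe_sum(data):
    out = np.zeros(data.shape[0])
    for i in prange(data.shape[0]):
        scratch = data[i] * 2.0        # allocated per iteration: private
        out[i] = scratch.sum()
    return out

# racy_sum(np.random.rand(10000, 8)) may give nondeterministic results,
# while safe_sum(...) matches (data * 2.0).sum(axis=1).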

Finding the global minimum of a noisy function via simulated annealing in python

I'm trying to find the global minimum of the function from the hundred-digit, hundred-dollar challenge, problem #4, as an exercise in simulated annealing.
As the basis of my understanding and approach to writing the code, I refer to the Global Optimization Algorithms (version 3) book, which is available for free online.
Consequently, I've initially come up with the following code:
The noisy func:
def noisy_func(x, y):
    return (math.exp(math.sin(50*x)) +
            math.sin(60*math.exp(y)) +
            math.sin(70*math.sin(x)) +
            math.sin(math.sin(80*y)) -
            math.sin(10*(x + y)) +
            0.25*(math.pow(x, 2) +
                  math.pow(y, 2)))
The function used to mutate the values:
def mutate(X_Value, Y_Value):
    mutationResult_X = X_Value + randomNumForInput()
    mutationResult_Y = Y_Value + randomNumForInput()
    while mutationResult_X > 4 or mutationResult_X < -4:
        mutationResult_X = X_Value + randomNumForInput()
    while mutationResult_Y > 4 or mutationResult_Y < -4:
        mutationResult_Y = Y_Value + randomNumForInput()
    mutationResults = [mutationResult_X, mutationResult_Y]
    return mutationResults
randomNumForInput simply returns a random number between -4 and 4 (the interval limits for the search); it is equivalent to random.uniform(-4, 4).
This is the central function of the program.
def simulated_annealing(f):
    """Performs simulated annealing to find a solution"""
    # Start by initializing the current state with the initial state
    # acquired by a random generation of a number and then using it
    # in the noisy func; also set the solution (best_state) to current_state
    # for a start
    pCurSelect = [randomNumForInput(), randomNumForInput()]
    current_state = f(pCurSelect[0], pCurSelect[1])
    best_state = current_state
    # Begin time monitoring; this will represent the
    # number of steps over time
    TimeStamp = 1
    # Init current temp via the func, using such values as to get the initial temp
    initial_temp = 100
    final_temp = .1
    alpha = 0.001
    num_of_steps = 1000000
    # Calculates by how much the temperature should be tweaked
    # each iteration
    # (suppose the number of steps is linear, we'll send in 100)
    temp_Delta = calcTempDelta(initial_temp, final_temp, num_of_steps)
    # Set current_temp via the initial temp
    current_temp = getTemperature(initial_temp, temp_Delta)
    # max_iterations = 100
    # initial_temp = get_Temperature_Poly(TimeStamp)
    # current_temp > final_temp
    while current_temp > final_temp:
        # Get a mutated value from the current value,
        # hence being a 'neighbour' value;
        # with it, acquire the neighbouring state
        # to the current state
        neighbour_values = mutate(pCurSelect[0], pCurSelect[1])
        neighbour_state = f(neighbour_values[0], neighbour_values[1])
        # Calculate the difference between the newly mutated
        # neighbour state and the current state
        delta_E_Of_States = neighbour_state - current_state
        # Check if neighbour_state is the best state so far
        # if the new solution is better (lower), accept it
        if delta_E_Of_States <= 0:
            pCurSelect = neighbour_values
            current_state = neighbour_state
            if current_state < best_state:
                best_state = current_state
        # if the new solution is not better, accept it with a probability of e^(-cost/temp)
        else:
            if random.uniform(0, 1) < math.exp(-(delta_E_Of_States) / current_temp):
                pCurSelect = neighbour_values
                current_state = neighbour_state
        # Here, we'd decrement the temperature or increase the timestamp, normally
        """current_temp -= alpha"""
        # print("Run number: " + str(TimeStamp) + " current_state = " + str(current_state))
        # increment TimeStamp
        TimeStamp = TimeStamp + 1
        # calc temp for next iteration
        current_temp = getTemperature(current_temp, temp_Delta)
    # print("Iteration Count: " + str(TimeStamp))
    return best_state
alpha is not used in this implementation; instead, the temperature is moderated linearly using the following functions:
def calcTempDelta(T_Initial, T_Final, N):
    return (T_Initial - T_Final) / N

def getTemperature(T_old, T_new):
    return T_old - T_new
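For reference, the most common alternative to this linear schedule is geometric (exponential) cooling, where the temperature is multiplied by a constant factor each step rather than reduced by a constant amount. A minimal sketch (not part of the original code):

def geometric_temperature(T_initial, cooling_factor, step):
    # Geometric cooling: T_k = T_0 * cooling_factor**k with 0 < cooling_factor < 1,
    # so the temperature falls quickly at first and slowly near the end.
    return T_initial * (cooling_factor ** step)

# To go from T=100 down to T=0.1 in 1,000,000 steps you would pick
# cooling_factor = (0.1 / 100) ** (1 / 1_000_000), roughly 0.9999931.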
This is how I implemented the solution described on page 245 of the book. However, this implementation does not return the global minimum of the noisy function, but rather one of its nearby local minima.
The reasons I implemented the solution this way are twofold:
It was provided to me as a working example of linear temperature moderation, and thus a working template.
Although I have tried to understand the other forms of temperature moderation laid out on pages 248-249 of the book, it is not entirely clear to me how the variable "Ts" is calculated, and even after looking through some of the sources the book cites, it remains esoteric to me. So I figured I'd rather make this "simple" solution work correctly first, before attempting other approaches to temperature quenching (logarithmic, exponential, etc.).
Since then I have tried numerous ways to reach the global minimum of the noisy function through various iterations of the code, which would be too much to post here all at once. I've tried different rewrites of this code:
Decreasing the randomly rolled number on each iteration so as to search a smaller scope every time; this gave more consistent but still incorrect results.
Mutating by different increments, say between -1 and 1, etc. Same effect.
Rewriting mutate so that it examines the points neighbouring the current point via some step size, adding/subtracting that step size from the current point's x/y values, checking the difference between each newly generated point and the current point (the delta of E's, basically), and returning the values that produced the lowest value, i.e. the closest-proximity neighbour.
Reducing the interval limits over which the search occurs.
In the solutions involving step sizes, reduced limits, and checking neighbours by quadrant, I have used movements of some constant alpha times the time_stamp.
These and other attempted solutions have not worked, either producing even less accurate results (albeit in some cases more consistent ones) or, in one case, not working at all.
Therefore I must be missing something, whether it's to do with the temperature moderation or the precise formula by which I'm supposed to make the next step (mutate) in the algorithm.
I know it's a lot to take in and look at, but I'd appreciate any constructive criticism/help/advice you can provide.
If it would help to show code from the other solution attempts, I'll post them if asked.
It is important that you keep track of what you are doing.
I have put a few important tips below, using frigidum.
The alpha cooling generally works well; it makes sure you don't speed through the interesting sweet spot, where about 10% of the proposals are accepted.
Make sure your proposals are not too coarse. I have included an example where I only change x or y, but never both; the idea is that the annealing will take what's best, or take a tour, and let the scheme decide.
I use the package frigidum for the algorithm, but it's pretty much the same as your code. Also notice I have two proposals, a large change and a small change; combinations usually work well.
Finally, I noticed it hops around a lot. A small variation would be to pick the best-so-far before you go into the last 5% of your cooling.
I installed frigidum with
!pip install frigidum
and made a small change so that noisy_func takes a numpy array:
import math

def noisy_func(X):
    x, y = X
    return (math.exp(math.sin(50*x)) +
            math.sin(60*math.exp(y)) +
            math.sin(70*math.sin(x)) +
            math.sin(math.sin(80*y)) -
            math.sin(10*(x + y)) +
            0.25*(math.pow(x, 2) +
                  math.pow(y, 2)))
import frigidum
import numpy as np
import random

def random_start():
    return np.random.random(2) * 4

def random_small_step(x):
    if np.random.random() < .5:
        return np.clip(x + np.array([0, 0.02 * (random.random() - .5)]), -4, 4)
    else:
        return np.clip(x + np.array([0.02 * (random.random() - .5), 0]), -4, 4)

def random_big_step(x):
    if np.random.random() < .5:
        return np.clip(x + np.array([0, 0.5 * (random.random() - .5)]), -4, 4)
    else:
        return np.clip(x + np.array([0.5 * (random.random() - .5), 0]), -4, 4)

local_opt = frigidum.sa(random_start=random_start,
                        neighbours=[random_small_step, random_big_step],
                        objective_function=noisy_func,
                        T_start=10**2,
                        T_stop=0.00001,
                        repeats=10**4,
                        copy_state=frigidum.annealing.copy)
The output of the above was
---
Neighbour Statistics:
(proportion of proposals which got accepted *and* changed the objective function)
random_small_step : 0.451045
random_big_step : 0.268002
---
(Local) Minimum Objective Value Found:
-3.30669277
With the above code I sometimes get below -3, but I also noticed that it sometimes finds something around -2 and then gets stuck in the last phase.
So a small tweak would be to re-anneal the last phase of the annealing, starting from the best found so far.
Hope that helps; let me know if you have any questions.
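A sketch of that re-annealing tweak, assuming local_opt[0] holds the best coordinates returned by the run above (this is an assumption, not part of the original answer; check the return value of frigidum.sa in your installed version):

# Hypothetical second, cooler run that restarts from the best point found
# so far instead of from a fresh random start.
best_so_far = local_opt[0]  # assumption: first element is the best state

def restart_from_best():
    return np.array(best_so_far, dtype=float)

refined = frigidum.sa(random_start=restart_from_best,
                      neighbours=[random_small_step],  # only small moves now
                      objective_function=noisy_func,
                      T_start=1,
                      T_stop=0.00001,
                      repeats=10**4,
                      copy_state=frigidum.annealing.copy)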

Why does my Python script run slower than it should on my HeapSort implementation?

I've got an assignment to implement the heap sort algorithm in either Python or Java (or any other language). Since I'm not really that "fluent" in Python or Java, I decided to do both.
But here I ran into a problem: the running time of the program is way higher than it "should" be.
By that I mean that heap sort is supposed to run in O(n * log n), and on a current processor running at a clock rate of several GHz I didn't expect the algorithm to take over 2000 seconds for an array of size 320k.
So for what I've done, I implemented the algorithm from its pseudo code in Python and in Java (I also tried the Julia code from Rosetta Code to see if the running time was similar; why Julia? Random pick).
I checked the output for small input sizes, such as arrays of size 10, 20 and 30. It appears that the array is correctly sorted in both languages/implementations.
Then I used the heapq library, which implements this same algorithm, to check once again whether the running time was similar. It surprised me that it actually was... But after a few tries I tried one last thing, which was updating Python, and then the program using heapq ran much faster than the previous ones: it went from around 2k seconds for the 320k array to around 1.5 seconds or so.
I retried my algorithm and the problem was still there.
So here are the Heapsort class that I implemented:
class MaxHeap:
    heap = []

    def __init__(self, data=None):
        if data is not None:
            self.buildMaxHeap(data)

    @classmethod
    def toString(cls):
        return str(cls.heap)

    @classmethod
    def add(cls, elem):
        cls.heap.insert(len(cls.heap), elem)
        cls.buildMaxHeap(cls.heap)

    @classmethod
    def remove(cls, elem):
        try:
            cls.heap.pop(cls.heap.index(elem))
        except ValueError:
            print("The value you tried to remove is not in the heap")

    @classmethod
    def maxHeapify(cls, heap, i):
        left = 2 * i + 1
        right = 2 * i + 2
        largest = i
        n = len(heap)
        if left < n and heap[left] > heap[largest]:
            largest = left
        if right < n and heap[right] > heap[largest]:
            largest = right
        if largest != i:
            heap[i], heap[largest] = heap[largest], heap[i]
            cls.maxHeapify(heap, largest)

    @classmethod
    def buildMaxHeap(cls, heap):
        for i in range(len(heap) // 2, -1, -1):
            cls.maxHeapify(heap, i)
        cls.heap = heap

    @staticmethod
    def heapSort(table):
        heap = MaxHeap(table)
        output = []
        i = len(heap.heap) - 1
        while i >= 0:
            heap.heap[0], heap.heap[i] = heap.heap[i], heap.heap[0]
            output = [heap.heap[i]] + output
            heap.remove(heap.heap[i])
            heap.maxHeapify(heap.heap, 0)
            i -= 1
        return output
To log the runtime for each array size (10000 - 320000) I use this loop in the main function:
i = 10000
while i <= 320000:
    tab = [0] * i
    j = 0
    while j < i:
        tab[j] = randint(0, i)
        j += 1
    start = time()
    MaxHeap.heapSort(tab)
    end = time()
    pprint.pprint("Size of the array " + str(i))
    pprint.pprint("Total execution time: " + str(end - start) + "s")
    i *= 2
If you need the rest of the code to see where the error could be, don't hesitate to ask and I'll provide it. I just didn't want to share the whole file for no reason.
As said earlier, the running time I expected follows the worst-case running time of O(n * log n);
with a modern architecture and a 2.6 GHz processor I would expect something around 1 second or even less (since the running time is asked for in nanoseconds, I suppose that even 1 second is still too long).
Here are the results:

Python (own)               Java (own)
Time           Size        Time           Size
593 ms         10k         243 ms         10k
2344 ms        20k         600 ms         20k
9558 ms        40k         1647 ms        40k
38999 ms       80k         6666 ms        80k
233811 ms      160k        62789 ms       160k
1724926 ms     320k        473177 ms      320k

Python (heapq)             Julia (Rosetta Code)
Time           Size        Time           Size
6 ms           10k         21 ms          10k
14 ms          20k         21 ms          20k
15 ms          40k         23 ms          40k
34 ms          80k         28 ms          80k
79 ms          160k        39 ms          160k
168 ms         320k        60 ms          320k
And according to the O(n * log n) formula this gives me:
40000       10k
86021       20k
184082      40k
392247      80k
832659      160k
1761648     320k
I think these results could be used to determine (theoretically) how much time it should take depending on the machine.
As you can see, the high running times come from my algorithm, but I can't tell where in the code, and that's why I'm asking for help here. (It runs slowly in both Java and Python.) (I didn't try a heap sort from the Java standard library, if there is one, to compare with my implementation; my bad.)
Thanks a lot.
Edit: I forgot to add that I run this program on a MacBook Pro (latest macOS version, i7 2.6 GHz), in case the problem comes from anything other than the code.
Edit 2: Here are the modifications I made to the algorithm, following the answer I received. The program runs approximately 200 times faster than previously, so now it takes barely 2 seconds for the array of size 320k.
class MaxHeap:
    def __init__(self, data=None):
        self.heap = []
        self.size = 0
        if data is not None:
            self.size = len(data)
            self.buildMaxHeap(data)

    def toString(self):
        return str(self.heap)

    def add(self, elem):
        self.heap.insert(self.size, elem)
        self.size += 1
        self.buildMaxHeap(self.heap)

    def remove(self, elem):
        try:
            self.heap.pop(self.heap.index(elem))
        except ValueError:
            print("The value you tried to remove is not in the heap")

    def maxHeapify(self, heap, i):
        left = 2 * i + 1
        right = 2 * i + 2
        largest = i
        if left < self.size and heap[left] > heap[largest]:
            largest = left
        if right < self.size and heap[right] > heap[largest]:
            largest = right
        if largest != i:
            heap[i], heap[largest] = heap[largest], heap[i]
            self.maxHeapify(heap, largest)

    def buildMaxHeap(self, heap):
        for i in range(self.size // 2, -1, -1):
            self.maxHeapify(heap, i)
        self.heap = heap

    @staticmethod
    def heapSort(table):
        heap = MaxHeap(table)
        i = len(heap.heap) - 1
        while i >= 0:
            heap.heap[0], heap.heap[i] = heap.heap[i], heap.heap[0]
            heap.size -= 1
            heap.maxHeapify(heap.heap, 0)
            i -= 1
        return heap.heap
It runs using the same main loop as given before.
It's interesting that you posted the clock speed of your computer: you COULD calculate the actual number of steps your algorithm requires, but you would need to know an awful lot about the implementation. For example, in Python, every time an object is created or goes out of scope, the interpreter updates counters on the underlying object and frees the memory if those ref counts reach 0. Instead, you should look at the relative speed.
The third-party examples you posted show the speed as less than doubling when the input array length doubles. That doesn't seem right, does it? It turns out that for those examples the initial work of building the array probably dominates the time spent sorting it!
In your code, there is already one line that calls out what I was going to say...
heap.remove(heap.heap[i])
This operation goes through your list (starting at index 0) looking for a value that matches, and then deletes it. This is already bad (if it works as intended, you are doing up to 320k comparisons on that line alone!). But it gets worse: deleting an object from a list is not an in-place modification; every object after the deleted one has to be moved forward in the list. Finally, there is no guarantee that you are actually removing the last object there... duplicate values could exist!
Here is a useful page that lists the complexity of various operations in Python: https://wiki.python.org/moin/TimeComplexity. To implement an algorithm as efficiently as possible, you want as many of your data structure operations as possible to be O(1). Here is an example... here is some original code, presumably with heap.heap being a list...
output = [heap.heap[i]] + output
heap.remove(heap.heap[i])
doing
output.append(heap.heap.pop())
would avoid allocating a new list AND use a constant-time operation to mutate the old one. (It's much better to just build the output backwards than to use the O(n) insert(0) method! You could use a collections.deque for output, which has an appendleft method, if you really need the order.)
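To see why the original pattern is slow, here is a small standalone comparison (hypothetical, not the asker's code) of building a result list with output = [x] + output versus appending and reversing, or using collections.deque.appendleft; the first copies the whole list on every iteration, the others do O(1) work per element.

from collections import deque

n = 10000

# Quadratic: a brand-new list is allocated and copied on every iteration.
output = []
for x in range(n):
    output = [x] + output

# Linear: append to the end, then reverse once at the end.
output2 = []
for x in range(n):
    output2.append(x)
output2.reverse()

# Also linear: deque supports O(1) prepends via appendleft.
output3 = deque()
for x in range(n):
    output3.appendleft(x)

assert output == output2 == list(output3)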
If you posted your whole code there are probably lots of other little things we could help with. Hopefully this helped!

Finding minimum value of a function with 11,390,625 variable combinations

I am working on code to solve for the optimum combination of diameter sizes for a number of pipelines. The objective function is to find the least sum of pressure drops in six pipelines.
As I have 15 choices of discrete diameter sizes, [2,4,6,8,12,16,20,24,30,36,40,42,50,60,80], which can be used for any of the six pipelines in the system, the number of possible solutions becomes 15^6, which is equal to 11,390,625.
To solve the problem, I am using Mixed-Integer Linear Programming with the PuLP package. I am able to find the solution for combinations of the same diameter (e.g. [2,2,2,2,2,2] or [4,4,4,4,4,4]), but what I need is to go through all combinations (e.g. [2,4,2,2,4,2] or [4,2,4,2,4,2]) to find the minimum. I attempted to do this, but the process takes a very long time to go through all combinations. Is there a faster way to do this?
Note that I cannot calculate the pressure drop for each pipeline in isolation, as the choice of diameter will affect the total pressure drop in the system. Therefore, at any time, I need to calculate the pressure drop of each combination in the system as a whole.
I also need to constrain the problem such that rate / pipeline cross-sectional area > 2.
Your help is much appreciated.
The first attempt for my code is the following:
from pulp import *
import random
import itertools
import numpy

rate = 5000
numberOfPipelines = 15

def pressure(diameter):
    diameterList = numpy.tile(diameter, numberOfPipelines)
    pressure = 0.0
    for pipeline in range(numberOfPipelines):
        pressure += rate / diameterList[pipeline]
    return pressure

diameterList = [2,4,6,8,12,16,20,24,30,36,40,42,50,60,80]
pipelineIds = range(0, numberOfPipelines)
pipelinePressures = {}
for diameter in diameterList:
    pressures = []
    for pipeline in range(numberOfPipelines):
        pressures.append(pressure(diameter))
    pressureList = dict(zip(pipelineIds, pressures))
    pipelinePressures[diameter] = pressureList
print('pipepressure', pipelinePressures)

prob = LpProblem("Warehouse Allocation", LpMinimize)
use_diameter = LpVariable.dicts("UseDiameter", diameterList, cat=LpBinary)
use_pipeline = LpVariable.dicts("UsePipeline", [(i, j) for i in pipelineIds for j in diameterList], cat=LpBinary)

## Objective function:
prob += lpSum(pipelinePressures[j][i] * use_pipeline[(i, j)] for i in pipelineIds for j in diameterList)

## Each pipeline must be connected to exactly one diameter:
for i in pipelineIds:
    prob += lpSum(use_pipeline[(i, j)] for j in diameterList) == 1

## The diameter is activated if at least one pipeline is assigned to it:
for j in diameterList:
    for i in pipelineIds:
        prob += use_diameter[j] >= lpSum(use_pipeline[(i, j)])

## Run the solution
prob.solve()
print("Status:", LpStatus[prob.status])
for i in diameterList:
    if use_diameter[i].varValue > pressureTest:
        print("Diameter Size", i)
for v in prob.variables():
    print(v.name, "=", v.varValue)
This is what I did for the combination part, which took a really long time.
xList = np.array(list(itertools.product(diameterList, repeat=numberOfPipelines)))
print(len(xList))
for combination in xList:
    pressures = []
    for pipeline in range(numberOfPipelines):
        pressures.append(pressure(combination))
    pressureList = dict(zip(pipelineIds, pressures))
    pipelinePressures[combination] = pressureList
print('pipelinePressures', pipelinePressures)
I would iterate through all combinations; I think you would otherwise run into memory problems trying to model ALL combinations in a MIP.
If you iterate through the combinations, perhaps using the multiprocessing library to use all cores, it shouldn't take long. Just remember to hold information only about the best combination so far, and not to try to generate all combinations at once and then evaluate them.
If the problem gets bigger you should consider dynamic programming algorithms, or use pulp with column generation.
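A minimal sketch of that suggestion: generate the combinations lazily with itertools.product, farm the evaluations out over all cores, and keep only the best result. The total_pressure_drop function here is a hypothetical placeholder for the real system-wide pressure calculation, and numberOfPipelines is set to 6 as in the problem description.

import itertools
from multiprocessing import Pool

diameterList = [2, 4, 6, 8, 12, 16, 20, 24, 30, 36, 40, 42, 50, 60, 80]
numberOfPipelines = 6
rate = 5000

def total_pressure_drop(combination):
    # Placeholder objective: replace with the real pressure-drop model,
    # and reject combinations violating the rate/area > 2 constraint here.
    return sum(rate / d for d in combination)

def evaluate(combination):
    return total_pressure_drop(combination), combination

if __name__ == "__main__":
    combos = itertools.product(diameterList, repeat=numberOfPipelines)  # lazy, nothing stored
    with Pool() as pool:
        best = min(pool.imap_unordered(evaluate, combos, chunksize=10000))
    print("best pressure drop:", best[0], "diameters:", best[1])

With the trivial placeholder the run is dominated by inter-process overhead; with a real pressure model the parallel evaluation is where the time saving comes from.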

put stockprices into groups when they are within 0.5% of each other

Thanks for the answers. I have not used Stack Overflow before, so I was surprised by the number of answers and the speed of them; it's fantastic.
I have not been through the answers properly yet, but I thought I should add some information to the problem specification. See the image below.
I can't post an image here because I don't have enough points, but you can see one
at http://journal.acquitane.com/2010-01-20/image003.jpg
This image may describe more closely what I'm trying to achieve. The horizontal lines across the page are price points on the chart. Where you get a clustering of lines within 0.5% of each other, this is considered a good thing, and that is why I want to identify those clusters automatically. You can see on the chart that there is a cluster at S2 & MR1, and at R2 & WPP1.
So every day I produce these price points, and I can identify manually those that are within 0.5%; but the purpose of this question is how to do it with a Python routine.
I have reproduced the list again (see below) with labels. Just be aware that these price points don't match the price points in the image because they are from two different days.
[YR3,175.24,8]
[SR3,147.85,6]
[YR2,144.13,8]
[SR2,130.44,6]
[YR1,127.79,8]
[QR3,127.42,5]
[SR1,120.94,6]
[QR2,120.22,5]
[MR3,118.10,3]
[WR3,116.73,2]
[DR3,116.23,1]
[WR2,115.93,2]
[QR1,115.83,5]
[MR2,115.56,3]
[DR2,115.53,1]
[WR1,114.79,2]
[DR1,114.59,1]
[WPP,113.99,2]
[DPP,113.89,1]
[MR1,113.50,3]
[DS1,112.95,1]
[WS1,112.85,2]
[DS2,112.25,1]
[WS2,112.05,2]
[DS3,111.31,1]
[MPP,110.97,3]
[WS3,110.91,2]
[50MA,110.87,4]
[MS1,108.91,3]
[QPP,108.64,5]
[MS2,106.37,3]
[MS3,104.31,3]
[QS1,104.25,5]
[SPP,103.53,6]
[200MA,99.42,7]
[QS2,97.05,5]
[YPP,96.68,8]
[SS1,94.03,6]
[QS3,92.66,5]
[YS1,80.34,8]
[SS2,76.62,6]
[SS3,67.12,6]
[YS2,49.23,8]
[YS3,32.89,8]
I did make a mistake with the original list in that Group C is wrong and should not be included. Thanks for pointing that out.
Also, the 0.5% is not fixed; this value will change from day to day, but I have just used 0.5% as an example for spec'ing the problem.
Thanks again.
Mark
PS. I will get cracking on checking the answers now.
Hi:
I need to do some manipulation of stock prices. I have just started using Python (though I think I would have trouble implementing this in any language). I'm looking for some ideas on how to implement this nicely in Python.
Thanks
Mark
Problem:
I have a list of lists (FloorLevels, see below) where each sublist has two items (stockprice, weight). I want to put the stock prices into groups when they are within 0.5% of each other. A group's strength will be determined by its total weight. For example:
Group-A
115.93,2
115.83,5
115.56,3
115.53,1
-------------
TotalWeight:12
-------------
Group-B
113.50,3
112.95,1
112.85,2
-------------
TotalWeight:6
-------------
FloorLevels = [
    [175.24, 8],
    [147.85, 6],
    [144.13, 8],
    [130.44, 6],
    [127.79, 8],
    [127.42, 5],
    [120.94, 6],
    [120.22, 5],
    [118.10, 3],
    [116.73, 2],
    [116.23, 1],
    [115.93, 2],
    [115.83, 5],
    [115.56, 3],
    [115.53, 1],
    [114.79, 2],
    [114.59, 1],
    [113.99, 2],
    [113.89, 1],
    [113.50, 3],
    [112.95, 1],
    [112.85, 2],
    [112.25, 1],
    [112.05, 2],
    [111.31, 1],
    [110.97, 3],
    [110.91, 2],
    [110.87, 4],
    [108.91, 3],
    [108.64, 5],
    [106.37, 3],
    [104.31, 3],
    [104.25, 5],
    [103.53, 6],
    [99.42, 7],
    [97.05, 5],
    [96.68, 8],
    [94.03, 6],
    [92.66, 5],
    [80.34, 8],
    [76.62, 6],
    [67.12, 6],
    [49.23, 8],
    [32.89, 8],
]
I suggest a repeated use of k-means clustering -- let's call it KMC for short. KMC is a simple and powerful clustering algorithm... but it needs to "be told" how many clusters, k, you're aiming for. You don't know that in advance (if I understand you correctly) -- you just want the smallest k such that no two items "clustered together" are more than X% apart from each other. So, start with k equal 1 -- everything bunched together, no clustering pass needed;-) -- and check the diameter of the cluster (a cluster's "diameter", from the use of the term in geometry, is the largest distance between any two members of a cluster).
If the diameter is > X%, set k += 1, perform KMC with k as the number of clusters, and repeat the check, iteratively.
In pseudo-code:
def markCluster(items, threshold):
    k = 1
    clusters = [items]
    maxdist = diameter(items)
    while maxdist > threshold:
        k += 1
        clusters = Kmc(items, k)
        maxdist = max(diameter(c) for c in clusters)
    return clusters
assuming of course we have suitable diameter and Kmc Python functions.
Does this sound like the kind of thing you want? If so, then we can move on to show you how to write diameter and Kmc (in pure Python if you have a relatively limited number of items to deal with, otherwise maybe by exploiting powerful third-party add-on frameworks such as numpy) -- but it's not worthwhile to go to such trouble if you actually want something pretty different, whence this check!-)
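One possible way to fill in those two helpers, as a sketch only: diameter measures the percentage spread of the prices in a cluster, and Kmc wraps scipy.cluster.vq.kmeans2 (an assumption; any k-means routine would do) on the one-dimensional price data, regrouping the original [price, weight] items by the labels it returns.

import numpy as np
from scipy.cluster.vq import kmeans2

def diameter(cluster):
    # Largest percentage spread between any two prices in the cluster,
    # comparable with a threshold such as 0.5 (i.e. 0.5%).
    prices = [item[0] for item in cluster]
    return (max(prices) / min(prices) - 1.0) * 100.0

def Kmc(items, k):
    # 1-D k-means on the prices, then regroup the original items by label.
    prices = np.array([[item[0]] for item in items], dtype=float)
    _, labels = kmeans2(prices, k, minit="points")
    clusters = [[] for _ in range(k)]
    for item, label in zip(items, labels):
        clusters[label].append(item)
    return [c for c in clusters if c]  # drop any empty clusters

With those in place, markCluster(FloorLevels, 0.5) would keep increasing k until every cluster spans less than 0.5%.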
A stock s belongs in a group G if for each stock t in G, s * 1.05 >= t and s / 1.05 <= t, right?
How do we add the stocks to each group? If we have the stocks 95, 100, 101, and 105, and we start a group with 100, then add 101, we will end up with {100, 101, 105}. If we did 95 after 100, we'd end up with {100, 95}.
Do we just need to consider all possible permutations? If so, your algorithm is going to be inefficient.
You need to specify your problem in more detail. Just what does "put the stockprices into groups when they are within 0.5% of each other" mean?
Possibilities:
(1) each member of the group is within 0.5% of every other member of the group
(2) sort the list and split it where the gap is more than 0.5%
Note that 116.23 is within 0.5% of 115.93 -- abs((116.23 / 115.93 - 1) * 100) < 0.5 -- but you have put one number in Group A and one in Group C.
Simple example: a, b, c = (0.996, 1, 1.004) ... Note that a and b fit, b and c fit, but a and c don't fit. How do you want them grouped, and why? Is the order in the input list relevant?
Possibility (1) produces ab,c or a,bc ... tie-breaking rule, please
Possibility (2) produces abc (no big gaps, so only one group)
You won't be able to classify them into hard "groups". If you have prices (1.0,1.05, 1.1) then the first and second should be in the same group, and the second and third should be in the same group, but not the first and third.
A quick, dirty way to do something that you might find useful:
def make_group_function(tolerance=0.05):
    from math import log10, floor
    # I forget why this works.
    tolerance_factor = -1.0 / (-log10(1.0 + tolerance))
    # well ... since you might ask
    # we want: log(x)*tf - log(x*(1+t))*tf = -1,
    # so every 5% change has a different group. The minus is just so groups
    # are ascending .. it looks a bit nicer.
    #
    # tf = -1/(log(x)-log(x*(1+t)))
    # tf = -1/(log(x/(x*(1+t))))
    # tf = -1/(log(1/(1*(1+t))))  # solved .. but let's just be more clever
    # tf = -1/(0-log(1*(1+t)))
    # tf = -1/(-log((1+t))
    def group_function(value):
        # don't just use int - it rounds up below zero, and down above zero
        return int(floor(log10(value) * tolerance_factor))
    return group_function
Usage:
group_function = make_group_function()

import random

groups = {}
for i in range(50):
    v = random.random() * 500 + 1000
    group = group_function(v)
    if group in groups:
        groups[group].append(v)
    else:
        groups[group] = [v]

for group in sorted(groups):
    print('Group', group)
    for v in sorted(groups[group]):
        print(v)
    print()
For a given set of stock prices, there is probably more than one way to group stocks that are within 0.5% of each other. Without some additional rules for grouping the prices, there's no way to be sure an answer will do what you really want.
Apart from the proper way to pick which values fit together, this is a problem where a little object orientation dropped in can make it a lot easier to deal with.
I made two classes here, with a minimum of desirable behaviors, but they can make the classification a lot easier -- you get a single point to play with it in the Group class.
I can see the code below is incorrect, in the sense that the limits for group inclusion vary as new members are added -- even if the separation criterion remains the same, you would have to rewrite the get_groups method to use a multi-pass approach. It should not be hard, but the code would be too long to be helpful here, and I think this snippet is enough to get you going:
from copy import copy

class Group(object):
    def __init__(self, data=None, name=""):
        if data:
            self.data = data
        else:
            self.data = []
        self.name = name

    def get_mean_stock(self):
        return sum(item[0] for item in self.data) / len(self.data)

    def fits(self, item):
        if 0.995 < abs(item[0]) / self.get_mean_stock() < 1.005:
            return True
        return False

    def get_weight(self):
        return sum(item[1] for item in self.data)

    def __repr__(self):
        return "Group-%s\n%s\n---\nTotalWeight: %d\n\n" % (
            self.name,
            "\n".join("%.02f, %d" % tuple(item) for item in self.data),
            self.get_weight())

class StockGrouper(object):
    def __init__(self, data=None):
        if data:
            self.floor_levels = data
        else:
            self.floor_levels = []

    def get_groups(self):
        groups = []
        floor_levels = copy(self.floor_levels)
        name_ord = ord("A") - 1
        while floor_levels:
            seed = floor_levels.pop(0)
            name_ord += 1
            group = Group([seed], chr(name_ord))
            groups.append(group)
            to_remove = []
            for i, item in enumerate(floor_levels):
                if group.fits(item):
                    group.data.append(item)
                    to_remove.append(i)
            for i in reversed(to_remove):
                floor_levels.pop(i)
        return groups
testing:
floor_levels = [[stock, weight], ...]  # <paste the data above>
s = StockGrouper(floor_levels)
s.get_groups()
For the grouping element, could you use itertools.groupby()? As the data is sorted, a lot of the work of grouping it is already done; you could then test whether the current value in the iteration differs from the last by less than 0.5%, and have itertools.groupby() break into a new group every time your function returns False.
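A sketch of that idea, since itertools.groupby() takes a key function rather than a pairwise test: the hypothetical GapKey class below remembers the previous price and starts a new group id whenever the gap to it exceeds the tolerance. This implements the "split on gaps" interpretation and assumes the list is sorted in descending order, as in the question.

from itertools import groupby

class GapKey:
    def __init__(self, tolerance=0.005):
        self.tolerance = tolerance  # 0.005 == 0.5%
        self.prev = None
        self.group_id = 0

    def __call__(self, item):
        price = item[0]
        if self.prev is not None and (self.prev - price) / self.prev > self.tolerance:
            self.group_id += 1  # gap too large: start a new group
        self.prev = price
        return self.group_id

prices = [[115.93, 2], [115.83, 5], [115.56, 3], [115.53, 1], [113.50, 3], [112.95, 1]]
for gid, members in groupby(prices, key=GapKey()):
    print('Group', gid, list(members))
# prints two groups, matching Group-A and part of Group-B above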
