OR-Tools: limit visits between two nodes - Python

For example, let's say I have something like this:
solver().Add(this.solver.ActiveVar(start) == this.solver.ActiveVar(end));
for a specific route. This means that the start index must end on the end index.
What if I want to limit the number of visits that can happen in between?
For example, if the limit is 2, then only solutions that look like the following would be valid:
start-> n1 -> n2 -> end
start -> n1 -> end
start -> end
Normally I would try something involving vehicle constraints, but in this case one vehicle can have multiple starts and ends.

A few things:
1.
solver().Add(this.solver.ActiveVar(start) == this.solver.ActiveVar(end));
just means that both locations must be active (i.e. visited) or both inactive (i.e. 0, because they are part of a disjunction).
What about creating a counter dimension and then restricting the difference between both nodes?
In Python it should be more or less:
routing.AddConstantDimension(
    1,     # increase by one at each visit
    42,    # max count
    True,  # force start cumul to zero
    'Counter')
counter_dim = routing.GetDimensionOrDie('Counter')

start = manager.NodeToIndex(start_node)
end = manager.NodeToIndex(end_node)
solver = routing.solver()
# start must be visited at most two nodes before the end node
solver.Add(counter_dim.CumulVar(start) + 3 >= counter_dim.CumulVar(end))
# start must be visited before end
solver.Add(counter_dim.CumulVar(start) <= counter_dim.CumulVar(end))
I don't get your "vehicle multiple start"; each vehicle has only one start node...
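More generally, for a limit of k intermediate visits the difference between the two cumuls must not exceed k + 1, since the end node itself adds one step. A minimal sketch, reusing the 'Counter' dimension above:

k = 2  # allowed number of visits between start and end
# counter values: start(c), n1(c+1), ..., nk(c+k), end(c+k+1)
solver.Add(counter_dim.CumulVar(end) - counter_dim.CumulVar(start) <= k + 1)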

Related

binary heap as tree structure searching algorithm

So I guess you are all familiar with a binary heap data structure; if not, Brilliant.org says:
i.e. a binary tree which obeys the property that the root of any tree is greater than or equal to (or smaller than or equal to) all its children (heap property). The primary use of such a data structure is to implement a priority queue.
One of the properties of a binary heap is that it must be filled from top to bottom (from the root) and from left to right.
I coded this algorithm to find the next available spot to insert the next number I add (I hard-coded the first nodes so I can track things further down the tree).
This search method is inspired by the BFS (Breadth-First Search) algorithm.
Note that in this code I only care about finding the next empty node, without needing to keep the heap property.
I tested the code, but I don't think I tested it enough, so if you spot problems or bugs or have any ideas, every comment is welcome:
# Minimal Node and Min_heap definitions implied by the usage below
# (they were not included in the original post):
class Node:
    def __init__(self, data=None):
        self.data = data
        self.left_child = None
        self.right_child = None

class Min_heap:
    def __init__(self, max_size):
        self.max_size = max_size
        self.root = Node()

    def insert(self, data):
        if self.root.data is None:
            self.root.data = data
            print('root', self.root.data)
        else:
            self.search()

    def search(self):
        print('search..')
        queue = [self.root]
        while queue:
            curr = queue.pop(0)
            print(curr.data)
            # the first node in level order missing a child is the insertion
            # point (checking only right_child, as originally written, works
            # for a properly filled tree, but checking both is safer)
            if curr.left_child is None or curr.right_child is None:
                print('made it')
                return curr
            queue.append(curr.left_child)
            queue.append(curr.right_child)

h = Min_heap(10)
h.insert(2)
h.root.left_child = Node(3)
h.root.right_child = Node(5)
h.root.left_child.left_child = Node(8)
h.root.left_child.right_child = Node(7)
h.root.right_child.left_child = Node(9)
# The tree I am building...
#        __2__
#       /     \
#      3       5
#     / \     / \
#    8   7   9   ⨂
#                ↑
#            what I am
#           looking for
h.search()
There is another way of figuring this out, which is basically translating the tree into an array/list using special formulas, then assuming the next value to insert goes just past the last element of the array, and working back through the same formulas. But I already know that algorithm, and I thought: why not try to solve it as a graph? So...
You would be better off implementing a binary heap as a list (array). But if you want to do it with node objects that have left/right attributes, then the position of the next node can be derived from the size of the tree.
So if you enrich your heap class instances with a size attribute, and maintain that attribute so it reflects the current number of nodes in the tree, then the following method will tell you where the next insertion point is, in O(log n) time:
Take the binary representation of the current size plus 1. So if the tree currently has 4 nodes, take the binary representation of 5, i.e. 101. Then drop the leftmost (most significant) bit. The bits that then remain are an encoding of the path towards the new spot: 0 means "left", 1 means "right".
Here is an implementation of a method that will return the parent node of where the new insertion spot is, and whether it would become the "left" or the "right" child of it:
def next_spot(self):
    if not self.root:
        raise ValueError("empty tree")
    node = self.root
    path = self.size + 1
    sides = bin(path)[3:-1]  # skip the "0b1" prefix and the final bit
    for side in sides:
        if side == "0":
            node = node.left
        else:
            node = node.right
    # use the final bit to say "left" or "right"
    return node, ("left", "right")[path % 2]
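For completeness, here is a minimal sketch of an insert built on next_spot (this assumes a Node(data) constructor and left/right child attributes, which are not spelled out above):

def insert(self, data):
    if not self.root:
        self.root = Node(data)
    else:
        parent, side = self.next_spot()
        setattr(parent, side, Node(data))  # attach on the computed side
    self.size += 1
    # to restore the heap property you would now sift the new node up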
If you want to guarantee a balanced tree, just add to each node a count of how many items are at or below it. Maintain that along with the heap, and when placing an element always descend to where there are the fewest nodes, as sketched below.
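A minimal sketch of that idea (the count attribute is an assumption here, not something the classes above carry):

def spot_by_counts(root):
    # Descend toward the child with the smaller subtree count;
    # assumes every node has a .count of the nodes in its subtree.
    node = root
    while True:
        if node.left is None:
            return node, "left"
        if node.right is None:
            return node, "right"
        node = node.left if node.left.count <= node.right.count else node.right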
If you just want a simple way to place the node, place it randomly. You don't have to be perfect: you will still descend O(log n) levels on average, just with a worse constant.
(Of course your constants are better with the array approach, but you say you know that one and are deliberately not implementing it.)

How to set up this program idea: Elliott Wave Counter on Stock Charts by finding Minima and Maxima and how they relate to each other

My idea is as follows, and I really want to learn more about programming and how to structure a program:
I want to count waves on a stock chart.
Within the Elliott Wave Rules there are some specifications, like these (most basic):
Wave 2 never retraces more than 100% of wave 1.
Wave 3 cannot be the shortest of the three impulse waves, namely waves 1, 3 and 5.
Wave 4 does not overlap with the price territory of wave 1, except in the
rare case of a diagonal triangle formation.
(from Wikipedia https://en.wikipedia.org/wiki/Elliott_wave_principle#Wave_rules_and_guidelines)
There are more sophisticated rules, of course, but I imagine they could be addressed by the same iterative logic with which I want to apply my rules.
Please give me feedback on whether my thoughts make sense in structure and layout for setting up a program, because I lack experience here:
I want to find the minima and maxima and give them a wave count depending on the minima and maxima before them.
Therefore I would check every candle (every closing price: day, hour, etc.) to see whether its value is below or above the previous values. For example:
If there are two candles going up, then one down, then three up, then two down, then two up, this could be a complete impulse wave according to the above-listed rules. In total I would have 10 candles, and the following rules must apply:
The third candle (or the first that goes down after the two going up) must not close below the starting price of the initial candle. Also, the following candles (however many that turns out to be) must all go up in a row until they overcome the price of the previous maximum (the second candle).
When the price starts to drop again, it could be counted as wave 4 (the second minimum in the sequence), and when it goes up again, this would indicate wave 5.
It also must hold that, if the price starts to go down again, it does not close below the first maximum (in this case the second candle).
And so on and so on (a minimal sketch of such a rule check follows below).
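As a minimal sketch of how the three basic rules could be checked once candidate pivot prices are found (the pivots layout here is a hypothetical illustration, not part of any existing code):

def satisfies_basic_rules(pivots):
    # pivots: prices [p0, p1, p2, p3, p4, p5] at the start of wave 1
    # and at the ends of waves 1 through 5 of a candidate upward impulse
    p0, p1, p2, p3, p4, p5 = pivots
    w1, w3, w5 = p1 - p0, p3 - p2, p5 - p4
    rule1 = p2 > p0                    # wave 2 retraces less than 100% of wave 1
    rule2 = not (w3 < w1 and w3 < w5)  # wave 3 is not the shortest of 1, 3, 5
    rule3 = p4 > p1                    # wave 4 stays out of wave 1's price territory
    return rule1 and rule2 and rule3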
My question now is: is this kind of looping through certain data points even an appropriate way to approach this kind of project? Or am I totally wrong here?
I just thought: because of the fractal character of Elliott waves, I would only need very basic rules that depend on what the same iterative process spits out in its previous passes over the data points.
What do you think?
Is there a better, smarter way to realise what I am planning to do?
And also, how could I do this in a good way?
Maybe there is also a way to just feed some patterns into a predefined execution structure and then let it run over data points such as price charts.
What would your approach look like?
Thanks a lot and best wishes, Benjamin
Here is my idea/code for finding highs and lows. It doesn't work standalone. If you have any ideas about how it can help to find waves, let me know.
import pandas as pd
import config.Text

class AnalyzerHighLow(object):
    def __init__(self, df):
        self.high_low = None
        self.df = df.close.values
        self.highs = pd.DataFrame(columns=[config.Text.date, config.Text.extrema, config.Text.type])
        self.lows = pd.DataFrame(columns=[config.Text.date, config.Text.extrema, config.Text.type])

    def highlow(self):
        idx_start = 0
        self.find_high(self.df, idx_start)
        self.find_low(self.df, idx_start)
        self.high_low = pd.concat([self.highs, self.lows], ignore_index=True, sort=True, axis=0)
        self.high_low = self.high_low.sort_values(by=[config.Text.date])
        self.high_low = self.high_low.reset_index(drop=True)
        return self.high_low

    def find_high(self, high_low, idx_start):
        pvt_high = high_low[idx_start]
        reached = False
        for i in range(idx_start + 1, len(high_low)):
            act_high = high_low[i]
            if act_high > pvt_high:
                reached = True
                pvt_high = act_high
            elif act_high < pvt_high and reached is True:
                self.highs.loc[i - 1] = [i - 1, pvt_high, config.Text.maxima]
                return self.find_high(high_low, i)
            elif act_high < pvt_high:
                pvt_high = high_low[i]
            # the loop index never reaches len(high_low), so the last-bar check
            # must compare against len(high_low) - 1 (a bug in the original)
            if (reached is True) and (i == (len(high_low) - 1)):
                self.highs.loc[i - 1] = [i - 1, pvt_high, config.Text.maxima]

    def find_low(self, high_low, idx_start):
        pvt_low = high_low[idx_start]
        reached = False
        for i in range(idx_start + 1, len(high_low)):
            act_low = high_low[i]
            if act_low < pvt_low:
                reached = True
                pvt_low = act_low
            elif act_low > pvt_low and reached is True:
                self.lows.loc[i - 1] = [i - 1, pvt_low, config.Text.minima]
                return self.find_low(high_low, i)
            elif act_low > pvt_low:
                pvt_low = high_low[i]
            if (reached is True) and (i == (len(high_low) - 1)):
                self.lows.loc[i - 1] = [i - 1, pvt_low, config.Text.minima]
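A compact alternative for finding the local extrema themselves is scipy.signal.argrelextrema (this is a suggested substitute, not part of the code above):

import numpy as np
from scipy.signal import argrelextrema

close = df.close.values                         # df: a DataFrame as in the code above
high_idx = argrelextrema(close, np.greater)[0]  # indices of local maxima
low_idx = argrelextrema(close, np.less)[0]      # indices of local minima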

How can I find the maximum number of cities that I can visit given a travel budget (in minutes) using a travel time matrix

I have a list of 12 cities connected to each other without exception. The only thing of concern is travel time. The name of each city is here.  The distance matrix (representing travel time in minutes) between city pairs is here. 
How can I find out how many cities I can visit given a certain travel budget (say 800 minutes) from a city of origin (it can be any of the 12)?
You can't visit the same city twice during the trip, and you don't need to worry about returning to your origin. I can't go above my travel budget.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def find_cities(dist, budget):
    # dist: a 12x12 matrix of travel times in minutes between city pairs
    # budget: max travel time allowed for the trip (in minutes)
    assert len(dist) == 12  # 12 cities, each with a pairwise cost to the other 11
    clusters = []  # list of city ids to visit
    dists = [0] + [row[1:] for row in dist]  # exclude start-to-start costs
    linkage = 'complete'  # complete linkage: find the minimum number of clusters required
    # affinity must be 'precomputed', otherwise Euclidean distance is used by default;
    # compute_full_tree=True ensures all possible clusters are computed, which is
    # needed to decide how many clusters the budget allows
    ac = AgglomerativeClustering(affinity='precomputed', linkage=linkage,
                                 compute_full_tree=True)
    Z = ac.fit_predict(dists).tolist()  # list of cluster labels, one per city
    while budget >= min(dists):  # while the budget still covers the cheapest hop
        if len(set(Z)) > 1:  # at least 2 clusters are needed to form a tour
            # the cluster with the most cities becomes the next destination
            c1 = np.argmax([sum(i == j for j in Z) for i in set(Z)])
            # first city belonging to that cluster
            c2 = [j for j, val in enumerate(Z) if val == Z[c1]][0]
            clusters += [c2 + 1]     # store city ids counted from 1, not 0
            dists += [dist[c1][c2]]  # record the cost of the newly added leg
            budget -= dists[-1]      # subtract that cost from the budget
        else:
            break  # only one cluster left: stop
    return clusters  # city ids in visiting order

def main():
    with open('uk12_dist.txt', 'r') as f:  # travel time matrix between cities
        dist = [[int(num) for num in line.split()] for line in f]
    with open('uk12_name.txt', 'r') as f:  # names of the 12 cities
        name = [line[:-1].lower().replace(" ", "") for line in f]
    budget = 800  # max travel budget allowed (in minutes)
    print(find_cities(dist, budget), "\n")  # list of city ids to visit
    # total travel time, summing the legs between consecutive visited cities
    print("Total distance travelled:",
          sum(dist[i][j] for i, j in enumerate([0] + find_cities(dist, budget))), "\n")
    while True:
        try:
            budget = int(input("\nEnter your travel budget (in minutes): "))
            if budget <= 800:
                break
        except ValueError:  # keep asking until a valid number is entered
            pass
    route = find_cities(dist, budget)
    print(name[route[1]], "->", name[route[2]], "-> ...", name[route[-1]])

if __name__ == '__main__':
    main()
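Note that for only 12 cities an exact search is perfectly feasible, so the clustering approximation above is not required. A minimal brute-force sketch (assuming dist is the 12x12 list of travel times in minutes read as above):

def max_cities(dist, budget):
    # Depth-first search over partial tours, pruning any branch that
    # would exceed the budget; returns the longest feasible tour as a
    # list of 0-based city indices.
    n = len(dist)
    best = []

    def dfs(city, remaining, visited, path):
        nonlocal best
        if len(path) > len(best):
            best = path[:]
        for nxt in range(n):
            if nxt not in visited and dist[city][nxt] <= remaining:
                visited.add(nxt)
                path.append(nxt)
                dfs(nxt, remaining - dist[city][nxt], visited, path)
                path.pop()
                visited.remove(nxt)

    for start in range(n):  # the origin can be any of the 12 cities
        dfs(start, budget, {start}, [start])
    return best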

How to set minimum locations per route in Google OR-Tools?

I am trying to set a minimum number of locations visited per vehicle. I have implemented the maximum-locations constraint successfully but am having issues figuring out the minimum. My code for the maximum:
def counter_callback(from_index):
    """Returns 1 for any location except the depot."""
    # Convert from routing variable Index to user NodeIndex.
    from_node = manager.IndexToNode(from_index)
    return 1 if from_node != 0 else 0

counter_callback_index = routing.RegisterUnaryTransitCallback(counter_callback)
routing.AddDimensionWithVehicleCapacity(
    counter_callback_index,
    0,  # null slack
    [16, 16, 16],  # maximum locations per vehicle
    True,  # start cumul to zero
    'Counter')
You should not put a hard limit on the number of nodes, as that easily makes the model infeasible.
The recommended way is to create a new dimension which just counts the number of visits (the evaluator always returns 1), then push a soft lower bound on the CumulVar of this dimension at the end of each vehicle, as sketched below.
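A minimal sketch of that soft lower bound, reusing a 'Counter' dimension like the one above (min_visits and penalty are example values, not prescribed ones):

counter_dim = routing.GetDimensionOrDie('Counter')
min_visits = 5     # desired minimum number of locations per vehicle (example)
penalty = 100000   # cost per unit below the bound (example)
for vehicle_id in range(manager.GetNumberOfVehicles()):
    end_index = routing.End(vehicle_id)
    counter_dim.SetCumulVarSoftLowerBound(end_index, min_visits, penalty)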

Pair items of a list depending on value

I have an xml file like the following:
<edge from="0/0" to="0/1" speed="10"/>
<edge from="0/0" to="1/0" speed="10"/>
<edge from="0/1" to="0/0" speed="10"/>
<edge from="0/1" to="0/2" speed="10"/>
...
Note that there exist pairs of from-to and vice versa. (In the example above, only the pair ("0/0","0/1") and ("0/1","0/0") is visible; however, there is a partner for every entry.) Also note that those pairs are not ordered.
The file describes edges within a SUMO network simulation. I want to assign new speeds randomly to the different streets. However, every <edge> entry only describes one direction (lane) of a street. Hence, I need to find its "partner".
The following code distributes the speed values lane-wise only:
import xml.dom.minidom as dom
import random
edgexml = dom.parse("plain.edg.xml")
MAX_SPEED_OPTIONS = ["8","9","10"]
for edge in edgexml.getElementsByTagName("edge"):
x = random.randint(0,2)
edge.setAttribute("speed", MAX_SPEED_OPTIONS[x])
Is there a simple (pythonic) way to gather those pairs in tuples and then assign the same value to both?
If you know a better way to solve my problem using SUMO tools, I'd be happy too. However, I'm still interested in how to solve the given abstract list problem in Python, as it is not just a simple zip like in related questions.
Well, you can walk the list of edges and nest another iteration over all edges to search for possible partners. Since this is of quadratic complexity, we can at least reduce calculation time by only walking over not-yet-visited edges in the nested run.
Solution
(for a detailed description, scroll down)
import xml.dom.minidom as dom
import random

edgexml = dom.parse('sampledata/tmp.xml')
MSO = ["8", "9", "10"]

edge_groups = []
passed = []
for idx, edge in enumerate(edgexml.getElementsByTagName('edge')):
    if edge in passed:
        continue
    partners = []
    for partner in edgexml.getElementsByTagName('edge')[idx:]:
        if partner.getAttribute('from') == edge.getAttribute('to') \
                and partner.getAttribute('to') == edge.getAttribute('from'):
            partners.append(partner)
    edge_groups.append([edge] + partners)
    passed.extend([edge] + partners)

for e in edge_groups:
    print('NEW EDGE GROUP')
    x = random.choice(MSO)
    for p in e:
        p.setAttribute('speed', x)  # setAttribute expects a string value
        print('  E from "%s" to "%s" at "%s"' % (p.getAttribute('from'), p.getAttribute('to'), x))
Yields the output:
NEW EDGE GROUP
E from "0/0" to "0/1" at "8"
E from "0/1" to "0/0" at "8"
NEW EDGE GROUP
E from "0/0" to "1/0" at "10"
NEW EDGE GROUP
E from "0/1" to "0/2" at "9"
Detailed description
edge_groups = []
passed = []
Initialize the result structure edge_groups, which will be a list of lists holding partnered edges in groups. The additional list passed will help us to avoid redundant edges in our result.
for idx, edge in enumerate(edgexml.getElementsByTagName('edge')):
Start iterating over the list of all edges. I use enumerate here to obtain the index at the same time, because our nested iteration will only iterate over a sub-list starting at the current index to reduce complexity.
if edge in passed:
    continue
Stop if we have visited this edge at any point before. This only happens if the edge has already been recognized as a partner of another edge (due to the index-based sublisting). If it has been taken as the partner of another edge, we can safely omit it.
partners = []
for partner in edgexml.getElementsByTagName('edge')[idx:]:
    if partner.getAttribute('from') == edge.getAttribute('to') \
            and partner.getAttribute('to') == edge.getAttribute('from'):
        partners.append(partner)
Initialize a helper list to store identified partner edges. Then walk through all edges in the remaining list, starting from the current index, i.e. do not iterate over edges that have already been passed in the outer iteration. If a potential partner is an actual partner (from/to matches), append it to our partners list.
edge_groups.append([edge] + partners)
passed.extend([edge] + partners)
The nested iteration has finished, and partners holds all identified partners for the current edge. Put them into one list together with the current edge and append it to the result variable edge_groups. Since it is unnecessarily complex to check against the two-level list edge_groups to see whether we have already traversed an edge in the next run, we additionally keep a flat list of already used nodes and call it passed.
for e in edge_groups:
    print('NEW EDGE GROUP')
    x = random.choice(MSO)
    for p in e:
        p.setAttribute('speed', x)
        print('  E from "%s" to "%s" at "%s"' % (p.getAttribute('from'), p.getAttribute('to'), x))
Finally, we walk over all groups of edges in our result edge_groups, randomly draw a speed from MSO (hint: use random.choice() to choose randomly from a list), and assign it to all edges in the group.
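A more pythonic alternative is to group the edges in a single pass with a dictionary keyed by the unordered endpoint pair, which avoids the quadratic nested scan (a sketch reusing the minidom parsing and MSO list from above):

from collections import defaultdict

groups = defaultdict(list)
for edge in edgexml.getElementsByTagName('edge'):
    key = frozenset((edge.getAttribute('from'), edge.getAttribute('to')))
    groups[key].append(edge)  # both directions of a street share the same key

for pair in groups.values():
    x = random.choice(MSO)
    for p in pair:
        p.setAttribute('speed', x)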
