Calculating a custom cost function within A* Search - python

Problem Background
I am working on a problem where using an A* search algorithm, I need to find the fastest route between two nodes in a graph given a specific cost function. Here is an overview of the cost function:
The package cost function finds the fastest route, in expectation, for a certain delivery driver. Whenever this driver drives on a road with a speed limit >= 50 mph, there is a chance that a package will fall out of their truck and be destroyed. This mistake adds an extra 2*(t_road + t_trip) hours to the trip, where t_trip is the time it took to get from the start city to the beginning of the road, and t_road is the time it takes to drive the length of the road segment.
For a road of length l miles, the probability p of this mistake happening is equal to tanh(l/1000) if the speed limit is >= 50 mph, and 0 otherwise. This means that, in expectation, it will take t_road + p * 2*(t_road + t_trip) hours to drive on this road.
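To make the formula concrete, here is a minimal sketch of the expected time for one road segment (expected_edge_time is a hypothetical helper, not the poster's code; it assumes distances in miles and speeds in mph):

import math

def expected_edge_time(length_mi, speed_mph, t_trip):
    # Expected hours to traverse one road segment under the package rules
    t_road = length_mi / speed_mph
    p = math.tanh(length_mi / 1000) if speed_mph >= 50 else 0.0
    return t_road + p * 2 * (t_road + t_trip)

# e.g. a 100-mile segment at 50 mph, 1 hour into the trip:
# t_road = 2.0, p = tanh(0.1) ~ 0.0997, expected time ~ 2.0 + 0.0997 * 6 ~ 2.60 hours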
So, I have constructed my graph: every city in a given dataset is a node, and each road that connects two cities is an edge. Each edge has a dictionary of weights defined as: {'distance': int, 'speed': int, 'time': float}
Problem Overview
So, I am trying to figure out where in my A* search algorithm I am going wrong, because when I try calculating the package cost function there are some edge cases (see below for an example) where my cost function does not perform properly.
My Attempt(s)
Here is a method I wrote to calculate the package cost:
def delivery_calculation(path, G):
    result = 0
    for i in range(0, len(path)-1):
        edge = G.get_edge(path[i], path[i+1])
        if not edge:
            continue
        if edge[1]['speed'] >= 50:
            prob = math.tanh(edge[1]['distance']/1000)
        else:
            prob = 0
        t_road = edge[1]['time']
        t_trip = total_metric(path[0:i+1], G, 'time')
        result += t_road + (prob*(2*(t_road+t_trip)))
    return result
where path is a list of nodes, G is a graph, and get_edge() returns an edge and its weight between two nodes. It should be noted that the graph is undirected.
So within my A*, when I check all adjacent nodes and look to add them to the fringe (implemented as a priority queue), I calculate what the package cost would be for a given adjacent node. This, to me, seems like correct logic, but I could be wrong. Here is the code for that:
for move in G.get_adjacent(curr_node):
    neighbor = move[0]
    weight = move[1]
    path = [neighbor]
    node = curr_node
    while node is not None:
        path.append(node)
        node = tracked_nodes[node]
    path.reverse()
    ncost = float(delivery_calculation(path, G))
where get_adjacent() gets all nodes that are adjacent to a given node.
So ncost is then used as the priority metric within my priority queue. My thinking behind that is that if I want the fastest route, the smallest package cost should lead me there.
Results
So, the problem here is that for some searches, other cost functions turn up faster routes for package costs than the package cost function itself. An example:
I want to find the fastest route between Bloomington, Indiana, and Chicago, Illinois based on time
Total hours: 3.900 (cost function)
Total hours for package delivery: 4.725
I want to find the fastest route between Bloomington, Indiana, and Chicago, Illinois based on package time
Total hours: 4.115
Total hours for package delivery: 4.736 (cost function)
I've spent hours looking at my code and trying to rework the logic, but I cannot figure out why this happens. Could it be how the float type works and rounds values? This is one of the only edge cases that I've found (another being Bloomington, Indiana to Buffalo, New York). A working example looks like:
I want to find the fastest route between Denver, Colorado, and Milwaukee, Wisconsin based on time
Total hours: 16.387 (cost function)
Total hours for package delivery: 33.704
I want to find the fastest route between Denver, Colorado, and Milwaukee, Wisconsin based on package time
Total hours: 21.513
Total hours for package delivery: 23.037 (cost function)
Any ideas? I am absolutely stuck. Thank you in advance and I appreciate you taking the time to read this entire post.

Related

Python Networkx route though all nodes

I'm trying to write a small route planning application in Python, just to learn about graphs. In the end, the user should be able to pass in their "home" location and enter some locations they want to stop by. The application will then calculate the optimal path that starts and ends at their home while visiting every location. So far I've got the API requests all sorted out, and a network with all possible routes between all nodes and corresponding weights is automatically created. Now I'm stuck with a graph 'G' and don't know how to proceed. I've looked into the NetworkX documentation about shortest paths and cannot find a function that seems to do what I want. The best results I got when searching were Wikipedia articles about Dijkstra and the all_pairs_shortest_path() function, which, too, do not yield the answers I'm searching for.
Maybe there is someone out there who stumbled upon the same problem as I have and knows a solution that I cannot find.
If you have a graph G and want to find the route from A (home) to B to C to D (final destination) in order, you'd call dijkstra_path on it for (A, B), (B, C) and (C, D), and concatenate the paths generated.
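A minimal sketch of that approach (the graph contents and node names here are placeholders):

import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 2), ("B", "C", 1), ("C", "D", 4), ("A", "D", 10)])

stops = ["A", "B", "C", "D"]          # visit in this fixed order
full_path = [stops[0]]
for u, v in zip(stops, stops[1:]):
    leg = nx.dijkstra_path(G, u, v)   # shortest path for this leg
    full_path.extend(leg[1:])         # drop each leg's repeated start node
print(full_path)                      # ['A', 'B', 'C', 'D']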
This is the "pickup problem".
A delivery driver must pick up passengers at several locations and deliver them to a destination.
I have a C++ implementation of an application that calculates reasonable solutions to this problem when there is a link between every location costing the Euclidean distance between the locations. Documentation at https://github.com/JamesBremner/PathFinder/wiki/Pickup
You will have to modify this for your problem, by calculating the cost of the cheapest path between every location ( Dijkstra ) and linking every pair of locations with that cost.
Note that the algorithm will fail if the direct distance between any two nodes is greater than the distance between them via a third node.
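A minimal sketch of that preprocessing step using networkx (the road graph here is a placeholder):

import networkx as nx

road_graph = nx.Graph()
road_graph.add_weighted_edges_from([("A", "B", 2), ("B", "C", 1), ("A", "C", 9)])

# cheapest-path cost between every pair of locations (Dijkstra)
lengths = dict(nx.all_pairs_dijkstra_path_length(road_graph))

# complete graph linking every pair of locations with that cost
complete = nx.Graph()
for u in road_graph:
    for v in road_graph:
        if u != v:
            complete.add_edge(u, v, weight=lengths[u][v])
print(complete["A"]["C"])  # {'weight': 3}: via B, rather than the direct 9

Costs built this way satisfy the triangle inequality by construction, so the failure mode noted above cannot occur.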
Example: One pickup driver has to pick up 6 cargoes and deliver them to a destination. The input file looks like this:
format pickup
d 1 3 start
e 6 5 end
c 1 1 c1
c 5 5 c2
c 3 3 c3
c 4 4 c4
c 2 2 c5
c 6 6 c6
The resulting route is shown as a plot (not reproduced here).
The example is very simple, just to show how this works. However, this is a high-performance application using an efficient implementation of the travelling salesman algorithm (no brute-force searching through permutations!). I have used it for the allocation and routing of drivers to restaurant deliveries in a big city, where the requirement was to handle thousands of orders in a few seconds. A plot (not reproduced here) shows the performance that was achieved.
Like @AKX mentioned, analyzing the different permutations of the stops I want to make was the way to go. For anybody who might encounter a similar problem in the future, I post my approach, although it's far from optimized or even moderately clean code.
First of all, instead of saving all possible routes into the graph, I saved them to an array called routes = []. To calculate the sub-routes, I then create a copy of the routes array which does not include routes to the starting (or ending) location:
strippedRoutes = []
for route in routes:
    if route[0] == 0 or route[1] == 0:
        pass
    else:
        strippedRoutes.append(route)
subroutes = sortAllPermutations(strippedRoutes, len(addressen) - 1)
I pass all the subroutes into a new function, which will calculate the 'cost' of all the permutations:
def findCost(nodesToTraverce, startNode, endNode):
    ## lookup the cost
    cost = 0
    for edge in nodesToTraverce:
        if edge[0] == startNode and edge[1] == endNode:
            cost = edge[2]['weight']
            break
    return cost

def sortAllPermutations(nodesToTraverce, numberOfNodes):
    nodes = []
    for i in range(1, numberOfNodes + 1):
        nodes.append(i)
    perms = list(itertools.permutations(nodes))
    buffer = []
    for p in perms:  # calculate cost of route
        cost = 0
        for s in range(0, numberOfNodes-1):  # calculate cost of subroute
            cost += findCost(nodesToTraverce, p[s], p[s+1])
        toAdd = []
        for j in range(0, len(nodes)):
            toAdd.append(p[j])
        toAdd.append(cost)
        buffer.append(toAdd)
    # sort buffer by cost
    buffer.sort(key=lambda x: x[4])
    return buffer
Because I fetch all the costs for my routes from an API, finding the cost of each sub-route is relatively easy: no more than a search in a lookup table. The first lines of the sortAllPermutations() function are used to create a permutation table. In the for-loop, the cost of each permutation is calculated by looking up the cost of the different edges that are saved inside the nodesToTraverce array, which is passed into the function. The first nested for-loop that iterates s is used for exactly that. The last lines of the perm for-loop store the permutation and its cost in the buffer, which is sorted (unnecessarily) once the for-loop terminates and then returned. Back in the main function, merely the cost of getting to each starting point of the sub-route and of going back home from the last stop are calculated, and the route with the overall lowest cost is saved:
subroutes = sortAllPermutations(strippedRoutes, len(addressen) - 1)
mostEfficient = []
iteration = 0
for perm in subroutes:
    cost = perm[4]
    cost += findCost(routes, 0, perm[0])
    cost += findCost(routes, perm[3], 0)
    if iteration == 0 or cost < mostEfficient[6]:
        mostEfficient = []
        mostEfficient.append(0)
        for i in range(0, len(perm)-1):
            mostEfficient.append(perm[i])
        mostEfficient.append(0)
        mostEfficient.append(cost)
    iteration += 1
print(mostEfficient)
And with that, the most efficient sightseeing roundtrip is calculated:
>>> [0, 1, 2, 4, 3, 0, 164044.1]
I'm not sure if here is the right place to discuss this, but I've got some closing thoughts on the project. If the project were used not to plan sightseeing roundtrips but in a professional environment, where you want to calculate the most efficient routes, you would not optimize for shortest distance or least time needed, but for both criteria and more. I really want to try this, with some route stops meeting deadlines or having a higher priority than others, but I have absolutely no clue how I would go about implementing several dimensions of optimization. I'm closing this thread now, but if someone who reads this has articles on this, feel free to answer or ping me.
Thanks for all your help!

How to implement a cost minimization objective function correctly in Gurobi?

Given transport costs, per single unit of delivery, for a supermarket from three distribution centers to ten separate stores.
Note: Please look in the #data section of my code to see the data that I'm not allowed to post in photo form. Also note that while my costs are a vector with 30 entries, each distribution centre can only access 10 of them: DC1 costs = entries 1-10, DC2 costs = entries 11-20, etc.
I want to minimize the transport cost subject to each of the ten stores demand (in units of delivery).
This can be done by inspection, the minimum cost being $150313. The problem is implementing the solution with Python and Gurobi and producing the same result.
What I've tried is a somewhat sloppy model of the problem in Gurobi so far. I'm not sure how to correctly index and iterate through my sets that are required to produce a result.
This is my main problem: The objective function I define to minimize transport costs is not correct as I produce a non-answer.
The code "runs" though. If I change to maximization I just get an unbounded problem. So I feel like I am definitely not calling the correct data/iterations through sets into play.
My solution so far is quite small, so I feel like I can format it into the question and comment along the way.
from gurobipy import *
#Sets
Distro = ["DC0","DC1","DC2"]
Stores = ["S0", "S1", "S2", "S3", "S4", "S5", "S6", "S7", "S8", "S9"]
D = range(len(Distro))
S = range(len(Stores))
Here I define my sets of distribution centres and set of stores. I am not sure where or how to exactly define the D and S iteration variables to get a correct answer.
#Data
Demand = [10,16,11,8,8,18,11,20,13,12]
Costs = [1992,2666,977,1761,2933,1387,2307,1814,706,1162,
         2471,2023,3096,2103,712,2304,1440,2180,2925,2432,
         1642,2058,1533,1102,1970,908,1372,1317,1341,776]
Just a block of my relevant data. I am not sure if my cost data should be 3 separate sets, considering each distribution centre only has access to 10 costs and not 30; or whether there is a way to keep my costs as one set but make sure each centre can only access the costs relevant to itself.
m = Model("WonderMarket")
#Variables
X = {}
for d in D:
    for s in S:
        X[d,s] = m.addVar()
Declaring my decision variables. Again, I'm blindly iterating at this point to produce something that works. I've never programmed before, but I'm learning and putting as much thought into this question as possible.
#set objective
m.setObjective(quicksum(Costs[s] * X[d, s] * Demand[s] for d in D for s in S), GRB.MINIMIZE)
My objective function attempts to multiply the cost of each delivery from a centre to a store, weighted by the store's demand, and then make that value as small as possible. I do not have a non-zero constraint yet. I will need one eventually?! But right now I have bigger fish to fry.
m.optimize()
I produce a model with 0 rows, 30 columns, and 0 nonzero entries, which gives me a solution of 0. I need to set up my program so that I get the value that can be calculated easily by hand. I believe the issue is my general declaring of variables and my low knowledge of iteration and general "what goes where" issues. A lot of thinking for just a study exercise!
Appreciate anyone who has read all the way through. Thank you for any tips or help in advance.
Your objective is 0 because you have not defined any constraints. By default all variables have a lower bound of 0, and hence minimizing an unconstrained problem puts all variables at this lower bound.
A few comments:
Unless you need the names for the distribution centers and stores, you could define them as follows:
D = 3
S = 10
Distro = range(D)
Stores = range(S)
You could define the costs as a 2-dimensional array, e.g.
Costs = [[1992,2666,977,1761,2933,1387,2307,1814,706,1162],
         [2471,2023,3096,2103,712,2304,1440,2180,2925,2432],
         [1642,2058,1533,1102,1970,908,1372,1317,1341,776]]
Then the cost of transportation from distribution center d to store s are stored in Costs[d][s].
You can add all variables at once and I assume you want them to be binary:
X = m.addVars(D, S, vtype=GRB.BINARY)
(or use Distro and Stores instead of D and S if you need to use the names).
Your definition of the objective function then becomes:
m.setObjective(quicksum(Costs[d][s] * X[d, s] * Demand[s] for d in Distro for s in Stores), GRB.MINIMIZE)
(This is all assuming that each store can only be delivered from one distribution center, but since your distribution centers do not have a maximal capacity this seems to be a fair assumption.)
You need constraints ensuring that the stores' demands are actually satisfied. For this it suffices to ensure that each store is being delivered from one distribution center, i.e., that for each s one X[d, s] is 1.
m.addConstrs(quicksum(X[d, s] for d in Distro) == 1 for s in Stores)
When I optimize this, I indeed get an optimal solution with value 150313.
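For reference, here is a minimal end-to-end sketch assembling the pieces above (not run here; it requires a working Gurobi installation and license):

from gurobipy import Model, GRB, quicksum

Demand = [10, 16, 11, 8, 8, 18, 11, 20, 13, 12]
Costs = [[1992, 2666, 977, 1761, 2933, 1387, 2307, 1814, 706, 1162],
         [2471, 2023, 3096, 2103, 712, 2304, 1440, 2180, 2925, 2432],
         [1642, 2058, 1533, 1102, 1970, 908, 1372, 1317, 1341, 776]]
Distro, Stores = range(3), range(10)

m = Model("WonderMarket")
# X[d, s] = 1 if store s is served by distribution centre d
X = m.addVars(3, 10, vtype=GRB.BINARY)
m.setObjective(quicksum(Costs[d][s] * X[d, s] * Demand[s]
                        for d in Distro for s in Stores), GRB.MINIMIZE)
# every store is delivered from exactly one centre
m.addConstrs(quicksum(X[d, s] for d in Distro) == 1 for s in Stores)
m.optimize()
print(m.objVal)  # should print 150313.0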

Weighted Traversal Algorithm (Breadth first is better?)

I'm having trouble designing an algorithm for a traversal problem.
I have a Ship that I control on a 2D grid and it starts on the very bottom of the grid. Each tile of the grid has a value (between 0 and 1000) equal to how much 'resource' is in that tile.
The Ship can go_left(), go_up(), go_right() or stay_still()
If the ship stay_still() it collects 25% of its current tile's resource (rounded up to the nearest int).
If the ship uses a move command, it needs to spend 10% of its current tile's resource value, rounded down. Moves that cost more than the ship has collected are illegal. (So if a ship is on a 100, it costs 10 to move off the 100; if it's on a 9 or less, moving is free.)
The goal is to find a relatively short path that legally collects 1000 resource, returning a list of the moves that corresponds to the path.
I naturally tried a recursive approach:
In pseudo-code the algorithm is:
alg(position, collected, best_path):
    if ship has 1000:
        return best_path
    alg(stay still)
    if ship has enough to move:
        alg(try left)
        alg(try up)
        alg(try right)
If you want a closer look at the actual syntax in python3 here it is:
def get_path_to_1000(self, current_position, collected_resource, path, game_map):
    if collected_resource >= 1000:
        return path
    path_stay = path.copy().append(stay_still())
    self.get_path_to_1000(current_position, collected_resource +
                          math.ceil(0.25 * game_map[current_position].value),
                          path_stay, game_map.copy().collect(current_position))
    cost = math.floor(0.1 * game_map[current_position].value)
    if collected_resource >= cost:
        direction_list = [Direction.West, Direction.North, Direction.East]
        move_list = [go_left(), go_up(), go_right()]
        for i in range(3):
            new_path = path.copy().append(move_list[i])
            self.get_path_to_1000(
                current_position.offset(direction_list[i]),
                collected_resource - cost, new_path, game_map)
The problem with my approach is that the algorithm never completes because it keeps trying longer and longer lists of the ship staying still.
How can I alter my algorithm so it actually tries more than one option, returning a relatively short (or shortest) path to 1000?
The nature of this problem (ignoring the exact mechanics of the rounding down/variable cost of moving) is to find the shortest number of nodes in order to acquire 1,000 resources. Another way to look at this goal is that the ship is trying to find the most efficient move with each turn.
This issue can be solved with a slightly modified version of Dijkstra's algorithm. Instead of greedily choosing the move with the least weight, we will choose the move with the most weight (greatest number of resources) and add this value to a running counter that ensures we reach 1000 resources total. By greedily adding the most efficient edge weights (while below 1000), we'll find the least number of moves to get a total of 1000 resources.
Simply keep a list of the moves made by the algorithm and return that list when the resource counter reaches 1000.
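A minimal sketch of that greedy loop (the grid layout, depleting a harvested tile, and scoring a move by its destination's harvest minus the move cost are my assumptions, not part of the question):

import math

def greedy_route(grid, start, target=1000):
    # grid: mutable 2D list of tile values; start: (row, col) on the bottom row
    # assumes the grid holds enough resource to ever reach `target`
    (row, col), collected, moves = start, 0, []
    rows, cols = len(grid), len(grid[0])
    while collected < target:
        cost = math.floor(0.1 * grid[row][col])       # cost to move off this tile
        stay_gain = math.ceil(0.25 * grid[row][col])  # harvest for staying put
        options = [(stay_gain, "stay", row, col)]
        if collected >= cost:                         # moves must be affordable
            for name, r, c in [("left", row, col - 1), ("up", row - 1, col), ("right", row, col + 1)]:
                if 0 <= r < rows and 0 <= c < cols:
                    # score a move by the destination's next harvest minus the move cost
                    options.append((math.ceil(0.25 * grid[r][c]) - cost, name, r, c))
        _, name, row, col = max(options)              # greedy: best-scoring action
        if name == "stay":
            collected += stay_gain
            grid[row][col] -= stay_gain               # harvesting depletes the tile
        else:
            collected -= cost
        moves.append(name)
    return moves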
Here's a helpful resource on how to best implement Dijkstra's algorithm:
https://www.geeksforgeeks.org/dijkstras-shortest-path-algorithm-greedy-algo-7/
With the few modifications, it should be your best bet!

Using fitness sharing on a minimization function

I'm trying to use fitness/function sharing on a minimization function. I'm using the standard definition of the sharing function found here (linked paper), which then divides the fitness by the niche count. This will lower the fitness, proportional to the number of individuals in its niche. However, in my case the lower the fitness the more fit the individual is. How can I make my fitness sharing function increase the fitness proportionally to the number of individuals in its niche?
Here's the code:
def evalTSPsharing(individual, radius, pop):
    individualFitness = evalTSP(individual)[0]
    nicheCount = 0
    for ind in pop:
        distance = abs(individualFitness - evalTSP(ind)[0])
        if distance < radius:
            nicheCount += (1-(distance/radius))
    return (individualFitness/nicheCount,)
I couldn't find a non-PDF version of the paper; the relevant parts are in a picture (not reproduced here) from the link above.
Question is two years old now, but I'll give it a try:
You can try replacing the niche_count division penalty with a multiplication, i.e.:
individualFitness * nicheCount
instead of:
individualFitness / nicheCount
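Applied to the question's own evaluation function, the multiplicative version would look like this (a sketch based on the code above; evalTSP is assumed to exist as in the question):

def evalTSPsharingMin(individual, radius, pop):
    individualFitness = evalTSP(individual)[0]
    nicheCount = 0
    for ind in pop:
        distance = abs(individualFitness - evalTSP(ind)[0])
        if distance < radius:
            nicheCount += (1 - (distance / radius))
    # multiply: crowded niches now get *higher* (worse) fitness,
    # which acts as a penalty under minimization
    return (individualFitness * nicheCount,)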

How to approach a number guessing game (with a twist) algorithm?

Update (July 2020): The question is 9 years old but still one that I'm deeply interested in. In the time since, machine learning (RNNs, CNNs, GANs, etc.), new approaches, and cheap GPUs have risen that enable new approaches. I thought it would be fun to revisit this question to see if there are new approaches.
I am learning programming (Python and algorithms) and was trying to work on a project that I find interesting. I have created a few basic Python scripts, but I’m not sure how to approach a solution to a game I am trying to build.
Here’s how the game will work:
Users will be given items with a value. For example,
Apple = 1
Pears = 2
Oranges = 3
They will then get a chance to choose any combo of them they like (i.e. 100 apples, 20 pears, and one orange). The only output the computer gets is the total value (in this example, it's currently $143). The computer will try to guess what they have, which it obviously won't be able to get correct on the first turn.
         Value   Quantity (day 1)   Value (day 1)
Apple      1           100               100
Pears      2            20                40
Orange     3             1                 3
Total                  121               143
The next turn the user can modify their numbers, but by no more than 5% of the total quantity (or some other percent we may choose; I'll use 5% for the example). The prices of fruit can change (at random), so the total value may change based on that as well (for simplicity I am not changing fruit prices in this example). Using the above example, on day 2 of the game the user returns a value of $152, and $164 on day 3. Here's an example:
Quantity (day 2)   % change (day 2)   Value (day 2)   Quantity (day 3)   % change (day 3)   Value (day 3)
      104                                  104              106                                  106
       21                                   42               23                                   46
        2                                    6                4                                   12
      127              4.96%               152              133              4.72%               164
*(I hope the tables show up right, I had to manually space them so hopefully it's not just doing it on my screen, if it doesn't work let me know and I'll try to upload a screenshot.)
I am trying to see if I can figure out what the quantities are over time (assuming the user has the patience to keep entering numbers). I know that right now my only restriction is that the total quantity cannot change by more than 5%, so I cannot get within 5% accuracy right now, and the user would be entering numbers forever.
What I have done so far
Here’s my solution so far (not much). Basically, I take all the values and figure out all the possible combinations of them (I am done this part). Then I take all the possible combos and put them in a database as a dictionary (so for example for $143, there could be a dictionary entry {apple:143, Pears:0, Oranges :0}..all the way to {apple:0, Pears:1, Oranges :47}. I do this each time I get a new number so I have a list of all possibilities.
Here’s where I’m stuck. In using the rules above, how can I figure out the best possible solution? I think I’ll need a fitness function that automatically compares the two days data and removes any possibilities that have more than 5% variance of the previous days data.
Questions:
So my question is: with the user changing the total and me having a list of all the possibilities, how should I approach this? What do I need to learn? Are there any algorithms or theories out there that I can use? Or, to help me understand my mistake, can you suggest what rules I can add to make this goal feasible (if it's not in its current state; I was thinking of adding more fruits and saying they must pick at least 3, etc.)? Also, I only have a vague understanding of genetic algorithms, but I thought I could use them here; is there something I can use?
I'm very very eager to learn so any advice or tips would be greatly appreciated (just please don't tell me this game is impossible).
UPDATE: I'm getting feedback that this is hard to solve. So I thought I'd add another condition to the game that won't interfere with what the player is doing (the game stays the same for them): every day the value of the fruits changes price (randomly). Would that make it easier to solve? Because within a 5% movement and certain fruit value changes, only a few combinations are probable over time.
On day 1, anything is possible and getting a close enough range is almost impossible, but as the prices of fruits change and the user can only choose a 5% change, shouldn't the range (over time) become narrower and narrower? In the above example, if prices are volatile enough, I think I could brute-force a solution that gave me a range to guess in, but I'm trying to figure out if there's a more elegant solution or other solutions to keep narrowing this range over time.
UPDATE2: After reading and asking around, I believe this is a hidden Markov/Viterbi problem that tracks the changes in fruit prices as well as total sum (weighting the last data point the heaviest). I'm not sure how to apply the relationship though. I think this is the case and could be wrong but at the least I'm starting to suspect this is a some type of machine learning problem.
Update 3: I created a test case (with smaller numbers) and a generator to help automate the user-generated data, and I am trying to create a graph from it to see what's more likely.
Here's the code, along with the total values and comments on what the user's actual fruit quantities are.
#!/usr/bin/env python
import itertools

# Fruit price data
fruitPriceDay1 = {'Apple':1, 'Pears':2, 'Oranges':3}
fruitPriceDay2 = {'Apple':2, 'Pears':3, 'Oranges':4}
fruitPriceDay3 = {'Apple':2, 'Pears':4, 'Oranges':5}

# Generate possibilities for testing (warning...will not scale with large numbers)
def possibilityGenerator(target_sum, apple, pears, oranges):
    allDayPossible = {}
    counter = 1
    apple_range = range(0, target_sum + 1, apple)
    pears_range = range(0, target_sum + 1, pears)
    oranges_range = range(0, target_sum + 1, oranges)
    for i, j, k in itertools.product(apple_range, pears_range, oranges_range):
        if i + j + k == target_sum:
            currentPossible = {}
            #print counter
            #print 'Apple', ':', i/apple, ',', 'Pears', ':', j/pears, ',', 'Oranges', ':', k/oranges
            currentPossible['apple'] = i/apple
            currentPossible['pears'] = j/pears
            currentPossible['oranges'] = k/oranges
            #print currentPossible
            allDayPossible[counter] = currentPossible
            counter = counter + 1
    return allDayPossible

# Total sum being returned by user for value of fruits
totalSumDay1 = 26  # Computer does not know this but users quantities are apple: 20, pears 3, oranges 0 at the current prices of the day
totalSumDay2 = 51  # Computer does not know this but users quantities are apple: 21, pears 3, oranges 0 at the current prices of the day
totalSumDay3 = 61  # Computer does not know this but users quantities are apple: 20, pears 4, oranges 1 at the current prices of the day

graph = {}
graph['day1'] = possibilityGenerator(totalSumDay1, fruitPriceDay1['Apple'], fruitPriceDay1['Pears'], fruitPriceDay1['Oranges'])
graph['day2'] = possibilityGenerator(totalSumDay2, fruitPriceDay2['Apple'], fruitPriceDay2['Pears'], fruitPriceDay2['Oranges'])
graph['day3'] = possibilityGenerator(totalSumDay3, fruitPriceDay3['Apple'], fruitPriceDay3['Pears'], fruitPriceDay3['Oranges'])

# Sample of dict = 1 : {'oranges': 0, 'apple': 0, 'pears': 0}..70 : {'oranges': 8, 'apple': 26, 'pears': 13}
print graph
We'll combine graph theory and probability:
On the 1st day, build a set of all feasible solutions. Let's denote the solution set as A1 = {a1(1), a1(2), ..., a1(n)}.
On the second day you can again build the solution set A2.
Now, for each element in A2, you'll need to check whether it can be reached from each element of A1 (given x% tolerance). If so, connect A2(n) to A1(m). If it can't be reached from any node in A1, you can delete this node.
Basically we are building a connected directed acyclic graph.
All paths in the graph are equally likely. You can find an exact solution only when there is a single edge from Am to Am+1 (from a node in Am to a node in Am+1).
Sure, some nodes appear in more paths than other nodes. The probability for each node can be directly deduced from the number of paths that contain it.
By assigning each node a weight equal to the number of paths leading to it, there is no need to keep all the history, only the previous day (see the sketch below).
Also, have a look at non-negative-value linear Diophantine equations, a question I asked a while ago. The accepted answer is a great way to enumerate all combos in each step.
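A minimal sketch of that weight propagation, representing each solution as a tuple of quantities (propagate_weights is a made-up name):

def propagate_weights(prev_weights, next_solutions, tolerance=0.05):
    # prev_weights: {solution_tuple: number of paths leading to it}
    # a next-day solution survives only if reachable from some previous
    # solution; its weight is the sum of its predecessors' weights
    next_weights = {}
    for nxt in next_solutions:
        weight = sum(w for prev, w in prev_weights.items()
                     if abs(sum(nxt) - sum(prev)) <= tolerance * sum(prev))
        if weight:  # unreachable nodes are dropped
            next_weights[nxt] = weight
    return next_weights

# day 1: every feasible solution starts with weight 1, e.g.
# weights = {sol: 1 for sol in day1_solutions}
# weights = propagate_weights(weights, day2_solutions), and so on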
Disclaimer: I changed my answer dramatically after temporarily deleting it and re-reading the question carefully, as I had misread some critical parts. While still referencing similar topics and algorithms, the answer was greatly improved after I attempted to solve some of the problem in C# myself.
Hollywood version
The problem is a Dynamic constraint satisfaction problem (DCSP), a variation on Constraint satisfaction problems (CSP.)
Use Monte Carlo to find potential solutions for a given day if values and quantity ranges are not tiny. Otherwise, use brute force to find every potential solution.
Use Constraint Recording (related to DCSP), applied in cascade to previous days to restrict the potential solution set.
Cross your fingers, aim and shoot (Guess), based on probability.
(Optional) Bruce Willis wins.
Original version
First, I would like to state what I see as the two main problems here:
The sheer number of possible solutions. Knowing only the number of items and the total value, let's say 3 and 143 for example, will yield a lot of possible solutions. Plus, it is not easy to have an algorithm pick valid solutions without inevitably trying invalid ones (total not equal to 143).
When possible solutions are found for a given day Di, one must find a way to eliminate potential solutions using the added information given by { Di+1 .. Di+n }.
Let's lay down some bases for the upcoming examples:
Let's keep the same item values for the whole game. They can either be random or chosen by the user.
The possible item values are bound to the very limited range of [1-10], where no two items can have the same value.
No item can have a quantity greater than 100. That means: [0-100].
In order to solve this more easily, I took the liberty of changing one constraint, which makes the algorithm converge faster:
The "total quantity" rule is overridden by this rule: You can add or remove any number of items within the [1-10] range, total, in one day. However, you cannot add or remove the same number of items, total, more than twice. This also gives the game a maximum lifecycle of 20 days.
This rule enables us to rule out solutions more easily. And, with non-tiny ranges, it renders backtracking algorithms still useless, just like with your original problem and rules.
In my humble opinion, this rule is not the essence of the game but only a facilitator, enabling the computer to solve the problem.
Problem 1: Finding potential solutions
For starters, problem 1 can be solved using a Monte Carlo algorithm to find a set of potential solutions. The technique is simple: generate random numbers for item values and quantities (within their respective accepted ranges). Repeat the process for the required number of items. Verify whether or not the solution is acceptable; that means verifying that items have distinct values and that the total is equal to our target total (say, 143).
While this technique has the advantage of being easy to implement it has some drawbacks:
The user's solution is not guaranteed to appear in our results.
There are a lot of "misses". For instance, it takes more or less 3,000,000 tries to find 1,000 potential solutions given our constraints.
It takes a lot of time: around 4 to 5 seconds on my lazy laptop.
How to get around these drawbacks? Well...
Limit the range to smaller values and
Find an adequate number of potential solutions so there is a good chance the user's solution appears in your solution set.
Use heuristics to find solutions more easily (more on that later.)
Note that the more you restrict the ranges, the less useful the Monte Carlo algorithm becomes, since there will be few enough valid solutions to iterate over them all in reasonable time. For constraints { 3, [1-10], [0-100] } there are around 741,000,000 valid solutions (not constrained to a target total value). Monte Carlo is usable there. For { 3, [1-5], [0-10] }, there are only around 80,000. No need to use Monte Carlo; brute-force for loops will do just fine.
I believe problem 1 is what you would call a Constraint Satisfaction Problem (or CSP).
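A minimal sketch of that sampling loop under the example constraints (the function shape is my own, not from the answer's C# code):

import random

def monte_carlo_solutions(num_items, value_hi, qty_hi, target_total, wanted, max_tries=3_000_000):
    # sample random (values, quantities) pairs and keep the acceptable ones:
    # distinct item values, and the basket summing to the target total
    found = set()
    for _ in range(max_tries):
        values = tuple(random.sample(range(1, value_hi + 1), num_items))  # distinct
        quantities = tuple(random.randint(0, qty_hi) for _ in range(num_items))
        if sum(v * q for v, q in zip(values, quantities)) == target_total:
            found.add((values, quantities))
            if len(found) >= wanted:
                break
    return found

# e.g. monte_carlo_solutions(3, 10, 100, 143, 1000)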
Problem 2: Restrict the set of potential solutions
Given the fact that problem 1 is a CSP, I would go ahead and call problem 2, and the problem in general, a Dynamic CSP (or DCSP.)
[DCSPs] are useful when the original formulation of a problem is altered in some way, typically because the set of constraints to consider evolves because of the environment. DCSPs are viewed as a sequence of static CSPs, each one a transformation of the previous one in which variables and constraints can be added (restriction) or removed (relaxation).
One technique used with CSPs that might be useful to this problem is called Constraint Recording:
With each change in the environment (user-entered values for Di+1), find information about the new constraint: what are the possibly "used" quantities for the add-remove constraint?
Apply the constraint to every preceding day in cascade. Rippling effects might significantly reduce the possible solutions.
For this to work, you need to get a new set of possible solutions every day; use either brute force or Monte Carlo. Then compare the solutions of Di to Di-1 and keep only the solutions that can succeed previous days' solutions without violating constraints.
You will probably have to keep a history of which solutions lead to which other solutions (probably in a directed graph). Constraint recording enables you to remember possible add-remove quantities and reject solutions based on that.
There are a lot of other steps that could be taken to further improve your solution. Here are some ideas:
Record constraints for item-value combinations found in previous days' solutions. Reject other solutions immediately (as item values must not change). You could even find smaller solution sets for each existing solution, using solution-specific constraints to reject invalid solutions earlier.
Generate some "mutant", full-history solutions each day in order to "repair" the case where the D1 solution set doesn't contain the user's solution. You could use a genetic algorithm to find a mutant population based on an existing solution set.
Use heuristics in order to find solutions easily (e.g. when a valid solution is found, try to find variations of this solution by substituting quantities around).
Use behavioral heuristics in order to predict some user actions (e.g. same quantity for every item, extreme patterns, etc.)
Keep making some computations while the user is entering new quantities.
Given all of this, try to figure out a ranking system based on occurrence of solutions and heuristics to determine a candidate solution.
This problem is impossible to solve.
Let's say that you know exactly by what ratio the number of items was increased, not just the maximum possible ratio.
A user has N fruits and you have D days of guessing.
Each day you get N new variables, so you have D*N variables in total.
For each day you can generate only two equations. One equation is the sum of n_item*price and the other is based on the known ratio. In total you have at most 2*D equations, if they are all independent.
2*D < N*D for all N > 2
I wrote a program to play the game. Of course, I had to automate the human side, but I believe I did it all in such a way that I shouldn't invalidate my approach when played against a real human.
I approached this from a machine learning perspective and treated the problem as a hidden Markov model where the total price was the observation. My solution is to use a particle filter. This solution is written in Python 2.7 using NumPy and SciPy.
I stated any assumptions I made either explicitly in the comments or implicitly in the code. I also set some additional constraints for the sake of getting the code to run in an automated fashion. It's not particularly optimized, as I tried to err on the side of comprehensibility rather than speed.
Each iteration outputs the current true quantities and the guess. I just pipe the output to a file so I can review it easily. An interesting extension would be to plot the output on a graph either 2D (for 2 fruits) or 3D (for 3 fruits). Then you would be able to see the particle filter hone in on the solution.
Update:
Edited the code to include updated parameters after tweaking. Included plotting calls using matplotlib (via pylab). Plotting works on Linux/GNOME; your mileage may vary. Defaulted NUM_FRUITS to 2 for plotting support. Just comment out all the pylab calls to remove plotting and to be able to change NUM_FRUITS to anything.
It does a good job estimating the current function represented by UnknownQuantities × Prices = TotalPrice. In 2D (2 fruits) this is a line; in 3D (3 fruits) it would be a plane. There seems to be too little data for the particle filter to reliably hone in on the correct quantities. A little more smarts is needed on top of the particle filter to really bring the historical information together; you could try converting the particle filter to 2nd- or 3rd-order.
Update 2:
I've been playing around with my code, a lot. I tried a bunch of things and now present the final program that I'll be making (starting to burn out on this idea).
Changes:
The particles now use floating points rather than integers. Not sure if this had any meaningful effect, but it is a more general solution. Rounding to integers is done only when making a guess.
Plotting shows true quantities as green square and current guess as red square. Currently believed particles shown as blue dots (sized by how much we believe them). This makes it really easy to see how well the algorithm is working. (Plotting also tested and working on Win 7 64-bit).
Added parameters for turning off/on quantity changing and price changing. Of course, both 'off' is not interesting.
It does a pretty dang good job, but, as has been noted, it's a really tough problem, so getting the exact answer is hard. Turning off CHANGE_QUANTITIES produces the simplest case. You can get an appreciation for the difficulty of the problem by running with 2 fruits with CHANGE_QUANTITIES off. See how quickly it hones in on the correct answer, then see how much harder it gets as you increase the number of fruits.
You can also get a perspective on the difficulty by keeping CHANGE_QUANTITIES on, but adjusting the MAX_QUANTITY_CHANGE from very small values (.001) to "large" values (.05).
One situation where it struggles is if one dimension (one fruit quantity) gets close to zero. Because it uses an average of particles to guess, it will always skew away from a hard boundary like zero.
In general this makes a great particle filter tutorial.
from __future__ import division
import random
import numpy
import scipy.stats
import pylab

# Assume Guesser knows prices and total
# Guesser must determine the quantities
# All of pylab is just for graphing, comment out if undesired
# Graphing only graphs first 2 FRUITS (first 2 dimensions)

NUM_FRUITS = 3
MAX_QUANTITY_CHANGE = .01  # Maximum percentage change that total quantity of fruit can change per iteration
MAX_QUANTITY = 100         # Bound for the sake of instantiating variables
MIN_QUANTITY_TOTAL = 10    # Prevent degenerate conditions where quantities all hit 0
MAX_FRUIT_PRICE = 1000     # Bound for the sake of instantiating variables
NUM_PARTICLES = 5000
NEW_PARTICLES = 500        # Num new particles to introduce each iteration after guessing
NUM_ITERATIONS = 20        # Max iterations to run
CHANGE_QUANTITIES = True
CHANGE_PRICES = True
'''
Change individual fruit quantities for a random amount of time
Never exceed changing fruit quantity by more than MAX_QUANTITY_CHANGE
'''
def updateQuantities(quantities):
    old_total = max(sum(quantities), MIN_QUANTITY_TOTAL)
    new_total = old_total
    max_change = int(old_total * MAX_QUANTITY_CHANGE)
    while random.random() > .005:  # Stop Randomly
        change_index = random.randint(0, len(quantities)-1)
        change_val = random.randint(-1*max_change, max_change)
        if quantities[change_index] + change_val >= 0:  # Prevent negative quantities
            quantities[change_index] += change_val
            new_total += change_val
            if abs((new_total / old_total) - 1) > MAX_QUANTITY_CHANGE:
                quantities[change_index] -= change_val  # Reverse the change
                new_total -= change_val                 # keep the running total consistent with the revert
def totalPrice(prices, quantities):
    return sum(prices*quantities)

def sampleParticleSet(particles, fruit_prices, current_total, num_to_sample):
    # Assign weight to each particle using observation (observation is current_total)
    # Weight is the probability of that particle (guess) given the current observation
    # Determined by looking up the distance from the hyperplane (line, plane, hyperplane) in a
    # probability density fxn for a normal distribution centered at 0
    variance = 2
    distances_to_current_hyperplane = [abs(numpy.dot(particle, fruit_prices)-current_total)/numpy.linalg.norm(fruit_prices) for particle in particles]
    weights = numpy.array([scipy.stats.norm.pdf(distances_to_current_hyperplane[p], 0, variance) for p in range(0,NUM_PARTICLES)])
    weight_sum = sum(weights)  # No need to normalize, as relative weights are fine, so just sample un-normalized
    # Create new particle set weighted by weights
    belief_particles = []
    belief_weights = []
    for p in range(0, num_to_sample):
        sample = random.uniform(0, weight_sum)
        # sum across weights until we exceed our sample, the weight we just summed is the index of the particle we'll use
        p_sum = 0
        p_i = -1
        while p_sum < sample:
            p_i += 1
            p_sum += weights[p_i]
        belief_particles.append(particles[p_i])
        belief_weights.append(weights[p_i])
    return belief_particles, numpy.array(belief_weights)

'''
Generates new particles around the equation of the current prices and total (better particle generation than uniformly random)
'''
def generateNewParticles(current_total, fruit_prices, num_to_generate):
    new_particles = []
    max_values = [int(current_total/fruit_prices[n]) for n in range(0,NUM_FRUITS)]
    for p in range(0, num_to_generate):
        new_particle = numpy.array([random.uniform(1,max_values[n]) for n in range(0,NUM_FRUITS)])
        new_particle[-1] = (current_total - sum([new_particle[i]*fruit_prices[i] for i in range(0, NUM_FRUITS-1)])) / fruit_prices[-1]
        new_particles.append(new_particle)
    return new_particles

# Initialize our data structures:
# Represents users first round of quantity selection
fruit_prices = numpy.array([random.randint(1,MAX_FRUIT_PRICE) for n in range(0,NUM_FRUITS)])
fruit_quantities = numpy.array([random.randint(1,MAX_QUANTITY) for n in range(0,NUM_FRUITS)])
current_total = totalPrice(fruit_prices, fruit_quantities)
success = False

particles = generateNewParticles(current_total, fruit_prices, NUM_PARTICLES)  #[numpy.array([random.randint(1,MAX_QUANTITY) for n in range(0,NUM_FRUITS)]) for p in range(0,NUM_PARTICLES)]
guess = numpy.average(particles, axis=0)
guess = numpy.array([int(round(guess[n])) for n in range(0,NUM_FRUITS)])

print "Truth:", str(fruit_quantities)
print "Guess:", str(guess)

pylab.ion()
pylab.draw()
pylab.scatter([p[0] for p in particles], [p[1] for p in particles])
pylab.scatter([fruit_quantities[0]], [fruit_quantities[1]], s=150, c='g', marker='s')
pylab.scatter([guess[0]], [guess[1]], s=150, c='r', marker='s')
pylab.xlim(0, MAX_QUANTITY)
pylab.ylim(0, MAX_QUANTITY)
pylab.draw()

if not (guess == fruit_quantities).all():
    for i in range(0,NUM_ITERATIONS):
        print "------------------------", i
        if CHANGE_PRICES:
            fruit_prices = numpy.array([random.randint(1,MAX_FRUIT_PRICE) for n in range(0,NUM_FRUITS)])
        if CHANGE_QUANTITIES:
            updateQuantities(fruit_quantities)
            map(updateQuantities, particles)  # Particle Filter Prediction
        print "Truth:", str(fruit_quantities)
        current_total = totalPrice(fruit_prices, fruit_quantities)

        # Guesser's Turn - Particle Filter:
        # Prediction done above if CHANGE_QUANTITIES is True

        # Update
        belief_particles, belief_weights = sampleParticleSet(particles, fruit_prices, current_total, NUM_PARTICLES-NEW_PARTICLES)
        new_particles = generateNewParticles(current_total, fruit_prices, NEW_PARTICLES)

        # Make a guess:
        guess = numpy.average(belief_particles, axis=0, weights=belief_weights)  # Could optimize here by removing outliers or try using median
        guess = numpy.array([int(round(guess[n])) for n in range(0,NUM_FRUITS)])  # convert to integers
        print "Guess:", str(guess)

        pylab.cla()
        #pylab.scatter([p[0] for p in new_particles], [p[1] for p in new_particles], c='y')  # Plot new particles
        pylab.scatter([p[0] for p in belief_particles], [p[1] for p in belief_particles], s=belief_weights*50)  # Plot current particles
        pylab.scatter([fruit_quantities[0]], [fruit_quantities[1]], s=150, c='g', marker='s')  # Plot truth
        pylab.scatter([guess[0]], [guess[1]], s=150, c='r', marker='s')  # Plot current guess
        pylab.xlim(0, MAX_QUANTITY)
        pylab.ylim(0, MAX_QUANTITY)
        pylab.draw()

        if (guess == fruit_quantities).all():
            success = True
            break

        # Attach new particles to existing particles for next run:
        belief_particles.extend(new_particles)
        particles = belief_particles
else:
    success = True

if success:
    print "Correct Quantities guessed"
else:
    print "Unable to get correct answer within", NUM_ITERATIONS, "iterations"

pylab.ioff()
pylab.show()
For your initial rules:
From my school years, I would say that if we make an abstraction of the 5% changes, we have every day an equation with three unknown values (sorry, I don't know the maths vocabulary in English), which are the same values as the previous day.
At day 3, you have three equations and three unknown values, and the solution should be direct (see the sketch below).
I guess the 5% change each day may be forgotten if the values of the three elements are different enough, because, as you said, we will use approximations and round the numbers.
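A quick sketch of that three-equations idea, under the assumption that the quantities stay fixed across the three days (the prices are taken from the question's Update 3; the totals are ones such a fixed basket of 100 apples, 20 pears, and 1 orange would produce, computed here rather than taken from the post):

import numpy as np

prices = np.array([[1.0, 2.0, 3.0],   # day 1 prices
                   [2.0, 3.0, 4.0],   # day 2 prices
                   [2.0, 4.0, 5.0]])  # day 3 prices
totals = np.array([143.0, 264.0, 285.0])

quantities = np.linalg.solve(prices, totals)
print(quantities)  # -> [100.  20.   1.]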
For your adapted rules:
Too many unknown, and changing, values in this case, so there is no direct solution I know of. I would trust Lior on this; his approach looks fine! (If you have a limited range for prices and quantities.)
I realized that my answer was getting quite lengthy, so I moved the code to the top (which is probably what most people are interested in). Below it there are two things:
an explanation why (deep) neural networks are not a good approach to this problem, and
an explanation why we can't uniquely determine the human's choices with the given information.
For those of you interested in either topic, please see below. For the rest of you, here is the code.
Code that finds all possible solutions
As I explain further down in the answer, your problem is under-determined. In the average case there are many possible solutions, and this number grows at least exponentially as the number of days increases. This is true for both the original and the extended problem. Nevertheless, we can (sort of) efficiently find all solutions (it's NP-hard, so don't expect too much).
Backtracking (from the 1960s, so not exactly modern) is the algorithm of choice here. In Python, we can write it as a recursive generator, which is actually quite elegant:
def backtrack(pos, daily_total, daily_item_value, allowed_change, iterator_bounds, history=None):
    if pos == len(daily_total):
        yield np.array(history)
        return

    it = [range(start, stop, step) for start, stop, step in iterator_bounds[pos][:-1]]
    for partial_basket in product(*it):
        if history is None:
            history = [partial_basket]
        else:
            history.append(partial_basket)

        # ensure we only check items that match the total basket value
        # for that day
        partial_value = np.sum(np.array(partial_basket) * daily_item_value[pos, :-1])
        if (daily_total[pos] - partial_value) % daily_item_value[pos, -1] != 0:
            history.pop()
            continue

        last_item = (daily_total[pos] - partial_value) // daily_item_value[pos, -1]
        if last_item < 0:
            history.pop()
            continue

        basket = np.array([*partial_basket] + [int(last_item)])
        basket_value = np.sum(basket * daily_item_value[pos])
        history[-1] = basket

        if len(history) > 1:
            # ensure that today's basket stays within yesterday's range
            previous_basket = history[-2]
            previous_basket_count = np.sum(previous_basket)
            current_basket_count = np.sum(basket)
            if (np.abs(current_basket_count - previous_basket_count)
                    > allowed_change * previous_basket_count):
                history.pop()
                continue

        yield from backtrack(pos + 1, daily_total, daily_item_value,
                             allowed_change, iterator_bounds, history)
        history.pop()
This approach essentially structures all possible candidates into a large tree and then performs depth first search with pruning whenever a constraint is violated. Whenever a leaf node is encountered, we yield the result.
Tree search (in general) can be parallelized, but that is out of scope here. It would make the solution less readable without much additional insight. The same goes for reducing the constant overhead of the code, e.g., working the if ...: continue constraints into the iterator_bounds variable and doing fewer checks.
I put the full code example (including a simulator for the human side of the game) at the bottom of this answer.
Modern Machine Learning for this problem
Question is 9 years old but still one that I'm deeply interested in. In the time since, machine learning(RNN's, CNN's, GANS,etc), new approaches and cheap GPU's have risen that enable new approaches. I thought it would be fun to revisit this question to see if there are new approaches.
I really like your enthusiasm for the world of deep neural networks; unfortunately, they simply do not apply here, for a few reasons:
(Exactness) If you need an exact solution, like for your game, NNs can't provide that.
(Integer Constraint) The currently dominant NN training methods are gradient descent based, so the problem has to be differentiable or you need to be able to reformulate it in such a way that it becomes differentiable; constraining yourself to integers kills GD methods in the cradle. You could try evolutionary algorithms to search for a parameterization. This does exist, but those methods are currently a lot less established.
(Non-Convexity) In the typical formulation, training a NN is a local method, which means you will find exactly 1 (locally optimal) solution if your algorithm converges. In the average case, your game has many possible solutions, for both the original and the extended version. This not only means that - on average - you can't figure out the human's choice (basket), but also that you have no control over which of the many solutions the NN will find. Current NN success stories suffer the same fate, but tend not to care, because they only want some solution instead of a specific one. Some okay-ish solution beats the hell out of no solution at all.
(Expert Domain Knowledge) For this game, you have a lot of domain knowledge that can be exploited to improve the optimization/learning. Taking full advantage of arbitrary domain knowledge in NNs is not trivial and for this game building a custom ML model (not a neural network) would be easier and more efficient.
Why the game can not be uniquely solved - Part 1
Let's consider a substitute problem first and lift the integer requirement, i.e., the basket (human choice of N fruits for a given day) can have fractional fruits (0.3 oranges).
The total value constraint np.dot(basket, daily_price) == total_value limits the possible solutions for the basket; it reduces the problem by one dimension. Freely pick amounts for N-1 fruits, and you can always find a value for the N-th fruit to satisfy the constraint. So while it seems that there are N choices to make for a day, there are actually only N-1 that we can make freely, and the last one will be fully determined by our previous choices. So for each day the game goes on, we need to estimate an additional N-1 choices/variables.
We might want to enforce that all the choices are greater than 0, but that only reduces the interval from which we can choose a number; any open interval of real numbers has infinitely many numbers in it, so we will never run out of options because of this. Still N-1 choices to make.
Between two days, the total basket volume np.sum(basket) only changes by at most some_percent of the previous day, i.e. np.abs(np.sum(previous_basket) - np.sum(basket)) <= some_percent * np.sum(previous_basket). Some of the choices we could make on a given day will change the basket by more than some_percent of the previous day. To make sure we never violate this, we can freely make N-2 choices and then have to pick the N-1-th variable so that adding it and adding the N-th variable (which is fixed from our previous choices) stays within some_percent. (Note: this is an inequality constraint, so it will only reduce the number of choices if we have equality, i.e., the basket changes by exactly some_percent. In optimization theory this is known as the constraint being active.)
We can again think about the constraint that all choices should be greater than 0, but the argument remains that this simply changes the interval from which we can now freely choose N-2 variables.
So after D days we are left with N-1 choices to estimate from the first day (no change constraint) and (D-1)*(N-2) choices to estimate for each following day. Unfortunately, we have run out of constraints to further reduce this number, and the number of unknowns grows by at least N-2 each day. This is essentially what Luka Rahne meant with "2*D < N*D for all N > 2". We will likely find many candidates which are all equally probable.
The exact food prices each day don't matter for this. As long as they are of some value, they will constrain one of the choices. Hence, if you extend your game in the way you specify, there is always a chance for infinitely many solutions; regardless of the number of days.
Why the game can still not be uniquely solved - Part 2
There is one constraint we didn't look at which might help fix this: only allow integer solutions for choices. The problem with integer constraints is that they are very complex to deal with. However, our main concern here is whether adding this constraint will allow us to uniquely solve the problem given enough days. For this, there is a rather intuitive counter-example. Suppose you have 3 consecutive days, and for the 1st and 3rd day the total value constraint only allows one basket. In other words, we know the basket for day 1 and day 3, but not for day 2. Here, we only know its total value, that it is within some_percent of day 1, and that day 3 is within some_percent of day 2. Is this enough information to always work out what is in the basket on day 2?
some_percent = 0.05
Day 1: basket: [3 2] prices: [10 7] total_value: 44
Day 2: basket: [x y] prices: [5 5] total_value: 25
Day 3: basket: [2 3] prices: [9 5] total_value: 33
Possible Solutions Day 2: [2 3], [3 2]
Above is one example where we know the baskets for two days thanks to the total value constraint, but that still won't allow us to work out the exact composition of the basket on day 2. Thus, while it may be possible to work it out in some cases, it is not possible in general. Adding more days after day 3 doesn't help figure out day 2 at all. It might help in narrowing the options for day 3 (which would then narrow the options for day 2), but we already have just 1 choice left for day 3, so it's no use.
Full Code
import numpy as np
from itertools import product
import tqdm


def sample_uniform(n, r):
    # check out: http://compneuro.uwaterloo.ca/files/publications/voelker.2017.pdf
    sample = np.random.rand(n + 2)
    sample_norm = np.linalg.norm(sample)
    unit_sample = (sample / sample_norm)
    change = np.floor(r * unit_sample[:-2]).astype(np.int)
    return change


def human(num_fruits, allowed_change=0.05, current_distribution=None):
    allowed_change = 0.05
    if current_distribution is None:
        current_distribution = np.random.randint(1, 50, size=num_fruits)
    yield current_distribution.copy()

    # rejection sample a suitable change
    while True:
        current_total = np.sum(current_distribution)
        maximum_change = np.floor(allowed_change * current_total)

        change = sample_uniform(num_fruits, maximum_change)
        while np.sum(change) > maximum_change:
            change = sample_uniform(num_fruits, maximum_change)

        current_distribution += change
        yield current_distribution.copy()


def prices(num_fruits, alter_prices=False):
    current_prices = np.random.randint(1, 10, size=num_fruits)
    while True:
        yield current_prices.copy()
        if alter_prices:
            current_prices = np.random.randint(1, 10, size=num_fruits)


def play_game(num_days, num_fruits=3, alter_prices=False):
    human_choice = human(num_fruits)
    price_development = prices(num_fruits, alter_prices=alter_prices)

    history = {
        "basket": list(),
        "prices": list(),
        "total": list()
    }
    for day in range(num_days):
        choice = next(human_choice)
        price = next(price_development)
        total_price = np.sum(choice * price)

        history["basket"].append(choice)
        history["prices"].append(price)
        history["total"].append(total_price)

    return history


def backtrack(pos, daily_total, daily_item_value, allowed_change, iterator_bounds, history=None):
    if pos == len(daily_total):
        yield np.array(history)
        return

    it = [range(start, stop, step) for start, stop, step in iterator_bounds[pos][:-1]]
    for partial_basket in product(*it):
        if history is None:
            history = [partial_basket]
        else:
            history.append(partial_basket)

        # ensure we only check items that match the total basket value
        # for that day
        partial_value = np.sum(np.array(partial_basket) * daily_item_value[pos, :-1])
        if (daily_total[pos] - partial_value) % daily_item_value[pos, -1] != 0:
            history.pop()
            continue

        last_item = (daily_total[pos] - partial_value) // daily_item_value[pos, -1]
        if last_item < 0:
            history.pop()
            continue

        basket = np.array([*partial_basket] + [int(last_item)])
        basket_value = np.sum(basket * daily_item_value[pos])
        history[-1] = basket

        if len(history) > 1:
            # ensure that today's basket stays within relative tolerance
            previous_basket = history[-2]
            previous_basket_count = np.sum(previous_basket)
            current_basket_count = np.sum(basket)
            if (np.abs(current_basket_count - previous_basket_count)
                    > allowed_change * previous_basket_count):
                history.pop()
                continue

        yield from backtrack(pos + 1, daily_total, daily_item_value,
                             allowed_change, iterator_bounds, history)
        history.pop()


if __name__ == "__main__":
    np.random.seed(1337)

    num_fruits = 3
    allowed_change = 0.05
    alter_prices = False
    history = play_game(15, num_fruits=num_fruits, alter_prices=alter_prices)

    total_price = np.stack(history["total"]).astype(np.int)
    daily_price = np.stack(history["prices"]).astype(np.int)
    basket = np.stack(history["basket"]).astype(np.int)

    maximum_fruits = np.floor(total_price[:, np.newaxis] / daily_price).astype(np.int)
    iterator_bounds = [[[0, maximum_fruits[pos, fruit], 1] for fruit in range(num_fruits)]
                       for pos in range(len(basket))]
    # iterator_bounds = np.array(iterator_bounds)

    # import pdb; pdb.set_trace()

    pbar = tqdm.tqdm(backtrack(0, total_price, daily_price,
                               allowed_change, iterator_bounds),
                     desc="Found Solutions")
    for solution in pbar:
        # test price guess
        calculated_price = np.sum(np.stack(solution) * daily_price, axis=1)
        assert np.all(calculated_price == total_price)

        # test basket change constraint
        change = np.sum(np.diff(solution, axis=0), axis=1)
        max_change = np.sum(solution[:-1, ...], axis=1) * allowed_change
        assert np.all(change <= max_change)

        # indicate that we found the original solution
        if not np.any(solution - basket):
            pbar.set_description("Found Solutions (includes original)")
When the player selects a combination that reduces the number of possibilities to 1, the computer will win. Otherwise, the player can keep picking combinations, with the total varying within the allowed percentage, such that the computer may never win.
import itertools
import numpy as np


def gen_possible_combination(total, prices):
    """
    Generates all possible combinations of numbers of items for
    given prices constraint by total
    """
    nitems = [range(total//p + 1) for p in prices]
    prices_arr = np.array(prices)
    combo = [x for x in itertools.product(
        *nitems) if np.dot(np.array(x), prices_arr) == total]
    return combo


def reduce(combo1, combo2, pct):
    """
    Filters impossible transitions which are greater than pct
    """
    combo = {}
    for x in combo1:
        for y in combo2:
            if abs(sum(x) - sum(y))/sum(x) <= pct:
                combo[y] = 1
    return list(combo.keys())


def gen_items(n, total):
    """
    Generates a list of items
    """
    nums = [0] * n
    t = 0
    i = 0
    while t < total:
        if i < n - 1:
            n1 = np.random.randint(0, total-t)
            nums[i] = n1
            t += n1
            i += 1
        else:
            nums[i] = total - t
            t = total
    return nums


def main():
    pct = 0.05
    i = 0
    done = False
    n = 3
    total_items = 26  # np.random.randint(26)
    combo = None
    while not done:
        prices = [np.random.randint(1, 10) for _ in range(n)]
        items = gen_items(n, total_items)
        total = np.dot(np.array(prices), np.array(items))
        combo1 = gen_possible_combination(total, prices)
        if combo:
            combo = reduce(combo, combo1, pct)
        else:
            combo = combo1
        i += 1
        print(i, 'Items:', items, 'Prices:', prices, 'Total:',
              total, 'No. Possibilities:', len(combo))
        if len(combo) == 1:
            print('Solution', combo)
            break
        if np.random.random() < 0.5:
            total_items = int(total_items * (1 + np.random.random()*pct))
        else:
            total_items = int(
                np.ceil(total_items * (1 - np.random.random()*pct)))


if __name__ == "__main__":
    main()
