How to get a vehicle's spare parts at minimum cost? - python

Given a data file containing shop id, cost, and items, what algorithm should I use to get all requested spare parts from a single shop at minimum price? If no single shop can be found, 'None' should be printed. Also, there's no harm in purchasing extra spare parts within the minimum price.
Shop ID, Cost($), Items List
1, 4.00, E
1, 8.00, F
2, 5.00, E
2, 6.50, F
5, 4.00, A
5, 8.00, D
6, 5.00, D
6, 6.00, A, B, C ; here three items can be obtained for $6
7, 2.20, B
7, 3.00, B,C
7, 2.00, B
7, 2.50, C
a) Spare Parts : A,D
Output:
Shop ID-6
Cost - 11.0$
b) Spare Parts : E,F
Output:
Shop ID-2
Cost - 11.5$
My approach (which doesn't work):
a) Get the common shop IDs for the given input:
shop_id_list = []
for items in input_list:
    shop_id_list = getCommonShopIds(items.strip(), shop_id_list)
    all_items.append(items)
b) For each item in all_items, get the minimum cost of that item across all shops in shop_id_list (0 if the item was already included in a previous iteration):
res = [0 for x in range(len(shop_id_list))]
for items in all_items:
    all_cost = getMinShopCost(shop_id_list, items)
    res = map(operator.add, all_cost, res)  # add the lists element-wise
c) Find the index of the minimum element in res (say i) and print the corresponding shop_id_list[i] and res[i].
My logic doesn't work for cases like:
Input: B C
It prints 7, $4.50
Expected output is 7, $3.00
Is this a standard problem, or a variation of some graph-theory problem, etc.?
I am not able to figure out an optimized approach; any help will be appreciated.
PS: Python is only tagged since the question has a Python code snippet; I'm just interested in the approach. Also, this is not a problem from any ongoing online contest, as far as I'm aware.

There are two steps here.
First step: You first need to find which places can satisfy the constraint of having all of the required parts in stock. Call the set of places that meets this constraint S. If S is empty then you return None. If S is not empty then you go to the second step.
Second step: You have to calculate the cost of getting the parts from each of the places in S. Calculating the cost for an individual place is a constraint satisfaction problem: http://en.wikipedia.org/wiki/Constraint_satisfaction_problem. There are a few ways in which you might go about solving this. One way is to use a mixed-integer LP based approach with the following formulation:
Let X be the set of item groups bought
Let y_1, ..., y_n be the items you require
min F(X) = \sum cost(X)
subject to:
y_i \in X, for i \in {1, ..., n}
Essentially you have some binary constraints. There are most likely better ways of formulating this, but hopefully this gives you the general idea.
You could possibly solve this with an LP solver using, e.g., the simplex method.
If you are using Python have a look at these solver libraries:
http://www.scipy.org/
https://software.sandia.gov//trac/coopr
https://code.google.com/p/pulp-or/
After the costs are calculated you return the place that has the lowest cost of the places in S

If the problem is small then simple backtracking would do:
For each shop:
    select an offer that is as yet unselected and covers at least a subset of the unpurchased items.
    mark the offer as selected and mark all items covered by it as purchased.
    recursively select offers until all items are covered or no viable offer exists.
    record the minimum cost over all possible purchases.
Further, you can prune the recursion if the cost already exceeds the current minimum cost.
If the number of offers is small then you can also do brute force:
    select a subset of offers and mark all items it covers.
    generate an n-bit number from the covered items.
    put it in a hashmap: n-bit number => min cost.
This is very effective if you have multiple orders, as then you just look up the hashmap using the n-bit number built from the order.
You can also try a branch-and-bound technique with an estimate such as the sum of the items' individual minimum costs.
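For small inputs like the sample data, the brute-force idea above can be sketched directly in Python by enumerating every subset of each shop's offers (the `offers` table is transcribed from the question; `cheapest_shop` is my own name, and this is only a sketch, exponential in the number of offers per shop):

```python
import itertools

# Offers transcribed from the question: (shop_id, cost, set of items)
offers = [
    (1, 4.00, {'E'}), (1, 8.00, {'F'}),
    (2, 5.00, {'E'}), (2, 6.50, {'F'}),
    (5, 4.00, {'A'}), (5, 8.00, {'D'}),
    (6, 5.00, {'D'}), (6, 6.00, {'A', 'B', 'C'}),
    (7, 2.20, {'B'}), (7, 3.00, {'B', 'C'}),
    (7, 2.00, {'B'}), (7, 2.50, {'C'}),
]

def cheapest_shop(wanted):
    """Return (shop_id, cost) for the cheapest single shop whose offers
    cover every wanted item, or None if no shop can. Extra items are
    allowed, as the question permits."""
    wanted = set(wanted)
    best = None
    for shop in {s for s, _, _ in offers}:
        shop_offers = [o for o in offers if o[0] == shop]
        # try every non-empty subset of this shop's offers
        for r in range(1, len(shop_offers) + 1):
            for combo in itertools.combinations(shop_offers, r):
                covered = set().union(*(items for _, _, items in combo))
                if wanted <= covered:
                    cost = sum(c for _, c, _ in combo)
                    if best is None or cost < best[1]:
                        best = (shop, cost)
    return best
```

For instance, cheapest_shop({'B', 'C'}) returns (7, 3.0), which is exactly the case the original per-item-minimum approach got wrong.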

Related

Python Networkx route though all nodes

I'm trying to write a small route-planning application in Python just to learn about graphs. In the end, the user should be able to pass in his "home" location and enter some locations he wants to stop by. The application will then calculate the optimal path that starts and ends at his home while visiting every location. So far I've got the API requests all sorted out, and a network with all possible routes between all nodes and corresponding weights is automatically created. Now I'm stuck with a graph 'G' and don't know how to proceed. I've looked into the networkx documentation about shortest paths and cannot find a function that seems to do what I want. The best results I got when searching were Wikipedia articles about Dijkstra and the all_pairs_shortest_path() function, which, too, do not yield the answers I'm searching for.
Maybe there is someone out there who stumbled upon the same problem as I have and knows a solution that I cannot find.
If you have a graph G and want to find the route from A (home) to B to C to D (final destination) in order, you'd call dijkstra_path on it for (A, B), (B, C) and (C, D), and concatenate the paths generated.
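The concatenation idea can be sketched as follows, here with a hand-rolled Dijkstra over a plain adjacency-dict graph so the snippet is self-contained (with a real networkx graph you would call networkx.dijkstra_path instead; the helper names are mine):

```python
import heapq

def dijkstra_path(graph, src, dst):
    """Shortest path from src to dst in a graph given as
    {node: {neighbour: weight}}."""
    dist, prev = {src: 0}, {}
    pq = [(0, src)]
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == dst:
            break
        for v, w in graph[u].items():
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (d + w, v))
    # walk predecessors back from the destination
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

def route_through(graph, stops):
    """Concatenate shortest paths stop-to-stop, dropping duplicated joins."""
    full = [stops[0]]
    for a, b in zip(stops, stops[1:]):
        full += dijkstra_path(graph, a, b)[1:]
    return full

# Toy graph: going A -> C is cheaper via B (1 + 1) than directly (4)
graph = {'A': {'B': 1, 'C': 4},
         'B': {'A': 1, 'C': 1},
         'C': {'A': 4, 'B': 1}}
```

Note that this only fixes the visiting order; choosing the best order of the stops themselves is the travelling-salesman part discussed below.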
This is the "pickup problem".
A delivery driver must pick up passengers at several locations and deliver them to a destination.
I have a C++ implementation of an application that calculates reasonable solutions to this problem when there is a link between every pair of locations costing the Euclidean distance between them. Documentation at https://github.com/JamesBremner/PathFinder/wiki/Pickup
You will have to modify this for your problem, by calculating the cost of the cheapest path between every location ( Dijkstra ) and linking every pair of locations with that cost.
Note that the algorithm will fail if the direct distance between any two nodes is greater than the distance between them via a third node.
Example: One pickup driver has to pick up 6 cargoes and deliver them to a destination. The input file looks like this:
format pickup
d 1 3 start
e 6 5 end
c 1 1 c1
c 5 5 c2
c 3 3 c3
c 4 4 c4
c 2 2 c5
c 6 6 c6
and the result looks like this (plot not reproduced here):
The example is very simple, just to show how this works. However, this is a high-performance application using an efficient implementation of the travelling-salesman algorithm (no brute-force searching through permutations!). I have used it for the allocation and routing of drivers for restaurant deliveries in a big city, where the requirement was to handle thousands of orders in a few seconds. A plot (not reproduced here) shows the performance that was achieved.
Like @AKX mentioned, analyzing the different permutations of the stops I want to make was the way to go. For anybody who might encounter a similar problem in the future, I post my approach, although it's far from optimized or even moderately clean code.
First of all, instead of saving all possible routes into the graph, I saved them to an array called routes = []. To calculate the sub-routes, I then create a copy of the routes array which does not include routes to the starting (or ending) location:
strippedRoutes = []
for route in routes:
    if route[0] == 0 or route[1] == 0:
        pass
    else:
        strippedRoutes.append(route)
subroutes = sortAllPermutations(strippedRoutes, len(addressen) - 1)
I pass all the subroutes into a new function, which will calculate the 'cost' of all the permutations:
def findCost(nodesToTraverce, startNode, endNode):
    ## lookup the cost
    cost = 0
    for edge in nodesToTraverce:
        if edge[0] == startNode and edge[1] == endNode:
            cost = edge[2]['weight']
            break
    return cost

def sortAllPermutations(nodesToTraverce, numberOfNodes):
    nodes = []
    for i in range(1, numberOfNodes + 1):
        nodes.append(i)
    perms = list(itertools.permutations(nodes))
    buffer = []
    for p in perms:  # calculate cost of route
        cost = 0
        for s in range(0, numberOfNodes - 1):  # calculate cost of subroute
            cost += findCost(nodesToTraverce, p[s], p[s+1])
        toAdd = []
        for j in range(0, len(nodes)):
            toAdd.append(p[j])
        toAdd.append(cost)
        buffer.append(toAdd)
    # sort buffer by cost
    buffer.sort(key=lambda x: x[4])
    return buffer
Because I fetch all the costs for my routes from an API, finding the cost of each sub-route is no more than a lookup-table search. The first lines of the sortAllPermutations() function create a permutation table. In the for-loop, the cost of each permutation is calculated by looking up the cost of the different edges saved inside the "nodesToTraverce" array, which is passed into the function; the nested for-loop over s does exactly that. The last lines of the perm for-loop store the permutation and its cost in the buffer, which is sorted (unnecessarily) once the loop terminates and then returned. Back in the main function, merely the cost of getting to the starting point of each sub-route and of going back home from the last stop is added, and the route with the overall lowest cost is saved:
subroutes = sortAllPermutations(strippedRoutes, len(addressen) - 1)
mostEfficient = []
iteration = 0
for perm in subroutes:
    cost = perm[4]
    cost += findCost(routes, 0, perm[0])
    cost += findCost(routes, perm[3], 0)
    if iteration == 0 or cost < mostEfficient[6]:
        mostEfficient = []
        mostEfficient.append(0)
        for i in range(0, len(perm) - 1):
            mostEfficient.append(perm[i])
        mostEfficient.append(0)
        mostEfficient.append(cost)
    iteration += 1
print(mostEfficient)
And with that, the most efficient sightseeing roundtrip is calculated:
>>> [0, 1, 2, 4, 3, 0, 164044.1]
I'm not sure if this is the right place to discuss this, but I've got some closing thoughts on the project. If the project were used not to plan roundtrips to sights but in a professional environment, where you want to calculate the most efficient routes, you would not optimize for shortest distance or least time alone, but for both criteria and more. I really want to try this, with some route stops meeting deadlines or having a higher priority than others, but I have absolutely no clue how I would go about implementing several dimensions of optimization. I'm closing this thread now, but if someone who reads this has articles on this, feel free to answer or ping me.
Thanks for all your help!

Python find max under constraint

I'm learning python on my own and I'm unable to find the right solution for a specific problem:
I get x $.
I can buy from a list of different items which each have a certain price (cost) and provide a particular gain (gain).
I want to maximize the gain for the x $.
There is only 1 of each item.
let's say:
dollars = 10
cost = [5, 4, 1, 10]
gain = [7, 6, 4, 12]
here => the max gain is 17
With a naïve solution based on permutations, I managed to find a solution when the number of items is low.
But when the number of items grows, the running time increases and the computer crashes.
Is there a typical algorithm to solve this kind of problem?
You mentioned not being interested in the solution's code in one of your comments, so I'll only be explaining the algorithm. This problem is better known as the 0-1 knapsack problem.
A typical approach to solving it is using dynamic programming:
let's define a value that we'll call m(i, c), which is the max gain you can get by spending up to c $ and only buying items among the first i of your list.
You've got:
m(0, c) = 0 (If you can't buy any item, you won't be getting any gain).
m(i, c) = m(i-1, c) if cost[i] > c (if the new item costs more than the budget c, you won't be able to buy it anyway)
m(i, c) = max(m(i-1, c), m(i-1, c-cost[i]) + gain[i]) if cost[i]<=c (you're now able to buy item i. Either you do buy it, or you don't, and the best gain you can get out of it is the maximum of those two alternatives)
To get the best price, all you have to do is compute m(len(cost), dollars). You can for example do so with a for loop where you'll compute m(i, dollars) for every i up to len(cost), by filling a list of m values. To figure out which items were actually bought and not only the max gain, you'll have to save them in a separate list as you're filling out m.
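The recurrence above translates almost line for line into Python; this is only a sketch (the function name is mine), with m built bottom-up as an (n+1) x (dollars+1) table and the 1-based item index i mapped to the 0-based lists via i-1:

```python
def max_gain(dollars, cost, gain):
    """0-1 knapsack: m[i][c] = best gain using the first i items with budget c."""
    n = len(cost)
    m = [[0] * (dollars + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(dollars + 1):
            if cost[i - 1] > c:
                # item i doesn't fit in budget c
                m[i][c] = m[i - 1][c]
            else:
                # either skip item i, or buy it and solve the rest with c - cost[i]
                m[i][c] = max(m[i - 1][c],
                              m[i - 1][c - cost[i - 1]] + gain[i - 1])
    return m[n][dollars]
```

With the question's data, max_gain(10, [5, 4, 1, 10], [7, 6, 4, 12]) gives 17 (the items costing 5, 4 and 1).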
This sounds like a LeetCode problem, but I'll give you a decent answer (not the best, can definitely be optimized):
Problem
Assuming you are trying to find the max amount of gain from any n items strung together without repeating any item, the following algorithm could work.
Solution
You would take the highest ratio of the zipped cost and gain and remove that index from the zipped variable. Then, you'd redo the problem until you don't have enough money for any purchase:
Code:
#!/usr/bin/env python3
dollars = 10
cost = [5, 4, 1, 10]
gain = [7, 6, 4, 12]
result_gain = 0
zipped = [i for i in zip(cost, gain)]
largestgain = []
# create ratio of cost to gain and pick from the smallest to the largest
ratios = []
for x in zipped:
    # divide the gain by the cost
    ratios.append(x[1] / x[0])
# create a largest variable to grab the largest ratio from a for loop for every updated index
largest = 0
for x in range(0, len(zipped)):
    for index, ratio in enumerate(ratios):
        if index == 0:
            largest = ratio
        else:
            if ratio > largest:
                largest = ratio  # let largest be the new largest ratio
    # get the index of the largest ratio
    largest = ratios.index(largest)
    print(largest)
    # append the largest gain to a list of what should be added up later
    largestgain.append(zipped[largest])
    # check if dollars, when subtracted from the first index, yield less than 0
    if dollars - zipped[largest][0] < 0:
        break
    # if not, subtract dollars from total and update resulted gain
    else:
        dollars = dollars - zipped[largest][0]
        result_gain = result_gain + zipped[largest][1]
    # delete the largest zipped variable in order to redo this process
    del zipped[largest]
    # delete the largest ratio as well in order to redo this process
    del ratios[largest]
# print the list that would yield you the largest ratios in order from largest to smallest, but in the form of a zipped list
print(largestgain)
# The max amount of gain earned
print(result_gain)
Explanation:
I added the shebang so that you can test it yourself, but it should work perfectly. I've commented my code so that you can read the algorithm's process. Test it with larger lists if you want to.
Be aware that there are no checks on the lengths of the cost and gain lists; if they differ, zip will silently truncate to the shorter one.
If this algorithm is too slow, feel free to check this resource for other knapsack algorithm solutions. This one is quite elegant, but others, not so much.
EDIT:
This is a very greedy algorithm and doesn't work for all values, as noted by a commenter. Refer to theory for better explanations.

Adding a binary constraint in Pulp python without violating the linearity constraint

Dear fellow python users,
I am building a multi period multi product planning model using Pulp.
What the model should do is rather simple: plan production against minimal holding and production costs while meeting demand.
I have the following data:
periods = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
products = ['A', 'B', 'C']
And I create the following variables:
prod_vars = pulp.LpVariable.dicts('production', [(i,j) for i in products for j in periods],0)
inv_vars = pulp.LpVariable.dicts('inventory', [(i,j) for i in products for j in periods],0)
There are 2 constraints, 1 to always meet demand and 1 to stay below production capacity.
Please note that there is a dataframe (input_data) that retrieves the value of the given demand for that period.
for i in products:
    for j in periods[1:]:
        model.addConstraint(pulp.LpConstraint(
            e=inv_vars[(i, j-1)] + prod_vars[(i, j)] - inv_vars[(i, j)],
            sense=pulp.LpConstraintEQ,
            name='inv_balance_' + str(i) + str(j),
            rhs=input_data[i][j-1]))

for j in periods:
    model.addConstraint(pulp.LpConstraint(
        e=pulp.lpSum(prod_vars[(i, j)] for i in products),
        sense=pulp.LpConstraintLE,
        name='total_production_capacity' + str(j),
        rhs=input_data['production_capacity'][j-1]))
Then I add cost function and set the objective:
total_production_cost = production_cost * pulp.lpSum(prod_vars)
total_holding_cost = holding_cost * pulp.lpSum(inv_vars)
objective = total_holding_cost + total_production_cost + activation_cost
model.setObjective(objective)
This model works all fine and gives me an output like this.
prod_vars: (A,1) =5, (B,1)=10, (C,1)=15 and so on for all periods.
However: I want to penalize the system for producing multiple products, i.e., add fixed costs when it chooses to produce a second or third product. It would then be more beneficial to produce more of product A and hold inventory for some months than to produce A every month. I tried to do so by adding another variable:
use_vars = pulp.LpVariable.dicts('uselocation', [(i,j) for i in products for j in periods] , 0,1,pulp.LpBinary)
And add fixed costs for using the variable:
activation_cost = pulp.lpSum(activation_cost*use_vars[(i,j)] for i in products for j in periods)
I think that I would need to multiply all my prod_vars by my use_vars in the two constraints. However, if I do this in my first inventory constraint, Pulp gives the error that my constraint is not linear anymore.
Does someone know how I can make this happen?
Thanks in advance :)
I want to penalize the system for producing multiple products. I.e.,
adding fixed costs when choosing to produce a second or third product.
It is best to step away from code and look at it mathematically.
Let x(i,t)>=0 be the production of item i in period t
As we need to count, we need binary variables. So, introduce:
y(i,t) = 1 if item i is produced in period t
0 otherwise
Then we can add
x(i,t) <= M*y(i,t) (M large enough constant: i.e. capacity)
This implements y(i,t)=0 => x(i,t)=0. We don't have to worry about the other way around x(i,t)=0 => y(i,t)=0 as that is taken care of by the objective (minimize cost).
To add a special cost for producing 2 or 3 products, we need one more binary variable:
count(k,t) = 1 if the number of products produced is k (k=0,1,2,3)
= 0 otherwise
This can be calculated as:
y(A,t)+y(B,t)+y(C,t) = 1*count(1,t)+2*count(2,t)+3*count(3,t)
count(1,t)+count(2,t)+count(3,t) <= 1
Now you can add say: 100*count(2,t)+200*count(3,t) to your cost calculation. (Note: just for completeness: I assumed the cost for 3 products is at least as large as the cost for 2 products).
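As a quick sanity check of those two counting constraints (plain Python, no solver needed; the helper name is mine), one can enumerate all 0/1 assignments and confirm they force exactly the right indicator:

```python
from itertools import product

def counts_for(ys):
    """Return the unique (count1, count2, count3) satisfying
    y_A + y_B + y_C == 1*count1 + 2*count2 + 3*count3
    with count1 + count2 + count3 <= 1."""
    total = sum(ys)
    for c1, c2, c3 in product((0, 1), repeat=3):
        if c1 + c2 + c3 <= 1 and c1 + 2 * c2 + 3 * c3 == total:
            return (c1, c2, c3)
```

For example, counts_for((1, 1, 0)) yields (0, 1, 0): producing two products switches on count(2,t), which then incurs the extra 100*count(2,t) cost term.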
To achieve what you intend you can add additional constraints connecting prod_vars and use_vars like (pseudo-code):
prod_vars[(i, j)] >= use_vars[(i, j)] forall i, j
prod_vars[(i, j)] <= M * use_vars[(i, j)] forall i, j
where M can be set to max(input_data['production_capacity']).
Doing this, you don't need to modify the original constraints.

python: algorithm - to gather items from mean

Not sure whether this is the right place, but I have a question related to algorithms and I can't think of an efficient one.
So I thought of sharing my problem statement. :)
To ease up what I am trying to explain, let me create a hypothetical example.
Suppose I have a list which contains objects, each holding two things:
let's say a product id and a price.
Now, this is a long, long list, sort of like an inventory.
Out of this I have defined three price segments: lowprice, midprice and highprice,
and then k1, k2, k3, where k1, k2 and k3 are ratios.
So the job now is to gather products from this huge inventory in such a way that there are n1 products from the lowprice range, n2 products from the midprice range and n3 products from the highprice range, where n1:n2:n3 == k1:k2:k3.
Now, how do I efficiently achieve the following?
I target the lowprice point at 100 dollars
and I have to gather 20 products from this range.
The midprice range is probably 500 dollars,
and so on.
So I start with 100 dollars and then look for items between 90 and 100 and also between 100 and 110.
Let's say I found 5 products in interval 1 low (90, 100) and 2 products in interval 1 high (100, 110).
Then I go to the next low interval and next high interval.
I keep on doing this until I get the required number of products for this segment.
How do I do this? Also, there might be a case when the number of products in a particular price range is less than what I need (maybe the midprice range is 105 dollars...), so what should I do in that case?
Please pardon me if this is not the right platform, as from the question you can tell that this is more of a discussion question than the "I am getting this error" type of question.
Thanks
You are probably looking for a selection algorithm.
First find the n1-th smallest element, let it be e1; the lower-bound list is then all elements such that element <= e1.
Do the same for the other ranges.
Pseudo code for the lower-bound list:
getLowerRange(list, n):
    e <- select(list, n)
    result <- []
    for each element in list:
        if element <= e:
            result.append(element)
    return result
Note that this solution fails if there are many "identical" items [result will be a bigger list], but finding those items and removing them from the result list is not hard.
Note that a selection algorithm is O(n), so this algorithm consumes time linear in your list's size.
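The pseudocode can be sketched in Python with a randomized quickselect standing in for select (expected linear time, quadratic worst case; the names are mine):

```python
import random

def select(lst, n):
    """Return the n-th smallest element of lst (1-based), expected O(len(lst))."""
    pivot = random.choice(lst)
    lows = [x for x in lst if x < pivot]
    pivots = [x for x in lst if x == pivot]
    highs = [x for x in lst if x > pivot]
    if n <= len(lows):
        return select(lows, n)                       # answer is below the pivot
    if n <= len(lows) + len(pivots):
        return pivot                                 # answer is the pivot itself
    return select(highs, n - len(lows) - len(pivots))  # answer is above the pivot

def getLowerRange(lst, n):
    """All elements no larger than the n-th smallest one."""
    e = select(lst, n)
    return [x for x in lst if x <= e]
```

For example, with prices [120, 95, 300, 88, 101, 99, 510], getLowerRange(prices, 3) picks out 88, 95 and 99.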
Approach 1
If the assignment what products belong to the three price segments never changes, why don't you simply build 3 lists, one for the products in each price segment (assuming these sets are disjoint).
Then you may pick from these lists randomly (either with or without replacement - as you like). The number of items for each class is given by the ratios.
Approach 2
If the product-price-segment assignment is intended to be pre-specified, e.g., by passing corresponding price values for each segment on function call, you may want to have the products sorted by price and use a binary search to select the m-nearest-neighbors (for example). The parameter m could be specified according to the ratios. If you specify a maximum distance you may reject products that are outside the desired price range.
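A sketch of Approach 2, assuming the product prices are already sorted (the function and parameter names are my own invention): a binary search locates the target, then the m nearest prices are collected by walking outwards, optionally rejecting anything farther than max_dist:

```python
import bisect

def m_nearest_by_price(prices, target, m, max_dist=None):
    """Up to m prices nearest to target; prices must be sorted ascending."""
    i = bisect.bisect_left(prices, target)
    lo, hi = i - 1, i
    out = []
    # expand outwards, always taking the closer of the two candidates
    while len(out) < m and (lo >= 0 or hi < len(prices)):
        if lo < 0 or (hi < len(prices) and prices[hi] - target <= target - prices[lo]):
            out.append(prices[hi])
            hi += 1
        else:
            out.append(prices[lo])
            lo -= 1
    if max_dist is not None:
        out = [p for p in out if abs(p - target) <= max_dist]
    return out
```

For instance, with sorted prices [80, 90, 100, 110, 200] and a target of 100, the three nearest prices are 90, 100 and 110.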
Approach 3
If the product-price-segment assignment needs to be determined autonomously, you could apply your clustering algorithm of choice, e.g., k-means, to assign your products to, say, k = 3 price segments. For the actual product selection you may proceed similarly as described above.
It seems like you should try a database solution rather than using a list. Check out sqlite; it's included with Python by default.

How to approach a number guessing game (with a twist) algorithm?

Update (July 2020): This question is 9 years old but still one that I'm deeply interested in. In the time since, machine learning (RNNs, CNNs, GANs, etc.), new approaches and cheap GPUs have risen that enable new approaches. I thought it would be fun to revisit this question to see if there are new approaches.
I am learning programming (Python and algorithms) and was trying to work on a project that I find interesting. I have created a few basic Python scripts, but I’m not sure how to approach a solution to a game I am trying to build.
Here’s how the game will work:
Users will be given items with a value. For example,
Apple = 1
Pears = 2
Oranges = 3
They will then get a chance to choose any combo of them they like (i.e. 100 apples, 20 pears, and one orange). The only output the computer gets is the total value (in this example, it's currently $143). The computer will try to guess what they have. Which obviously it won’t be able to get correctly the first turn.
         Value   Quantity (day 1)   Value (day 1)
Apple      1          100                100
Pears      2           20                 40
Orange     3            1                  3
Total                 121                143
The next turn, the user can modify their numbers, but by no more than 5% of the total quantity (or some other percentage we may choose; I'll use 5% for this example). The prices of fruit can change (at random), so the total value may change based on that as well (for simplicity I am not changing fruit prices in this example). Using the above example, on day 2 of the game the user returns a value of $152, and $164 on day 3. Here's an example:
         Quantity (day 2)   % change (day 2)   Value (day 2)   Quantity (day 3)   % change (day 3)   Value (day 3)
Apple          104                                  104              106                                 106
Pears           21                                   42               23                                  46
Orange           2                                    6                4                                  12
Total          127              4.96%               152              133              4.72%              164
*(I hope the tables show up right, I had to manually space them so hopefully it's not just doing it on my screen, if it doesn't work let me know and I'll try to upload a screenshot.)
I am trying to see if I can figure out what the quantities are over time (assuming the user has the patience to keep entering numbers). I know that right now my only restriction is that the total quantity cannot change by more than 5%, so I cannot get within 5% accuracy right away, and the user would be entering numbers forever.
What I have done so far
Here's my solution so far (not much). Basically, I take all the values and figure out all the possible combinations of them (I am done with this part). Then I take all the possible combos and put them in a database as dictionaries (so for example for $143, there could be a dictionary entry {Apples: 143, Pears: 0, Oranges: 0} ... all the way to {Apples: 0, Pears: 1, Oranges: 47}). I do this each time I get a new number, so I have a list of all possibilities.
Here's where I'm stuck. Using the rules above, how can I figure out the best possible solution? I think I'll need a fitness function that automatically compares two days' data and removes any possibilities that vary by more than 5% from the previous day's data.
Questions:
So my question is: with the user changing the total, and me having a list of all the possibilities, how should I approach this? What do I need to learn? Are there any algorithms or theories out there that are applicable? Or, to help me understand my mistake, can you suggest what rules I can add to make this goal feasible (if it's not in its current state; I was thinking of adding more fruits and saying they must pick at least 3, etc.)? Also, I only have a vague understanding of genetic algorithms, but I thought I could use them here; is there something I can use?
I'm very very eager to learn so any advice or tips would be greatly appreciated (just please don't tell me this game is impossible).
UPDATE: Getting feedback that this is hard to solve. So I thought I'd add another condition to the game that won't interfere with what the player is doing (game stays the same for them) but everyday the value of the fruits change price (randomly). Would that make it easier to solve? Because within a 5% movement and certain fruit value changes, only a few combinations are probable over time.
Day 1, anything is possible and getting a close enough range is almost impossible, but as the prices of fruits change and the user can only make a 5% change, shouldn't the range (over time) become narrower and narrower? In the above example, if prices are volatile enough I think I could brute-force a solution that gave me a range to guess in, but I'm trying to figure out if there's a more elegant solution or other solutions to keep narrowing this range over time.
UPDATE 2: After reading and asking around, I believe this is a hidden Markov/Viterbi problem that tracks the changes in fruit prices as well as the total sum (weighting the last data point the heaviest). I'm not sure how to apply the relationship, though. I think this is the case, and could be wrong, but at the least I'm starting to suspect this is some type of machine-learning problem.
Update 3: I have created a test case (with smaller numbers) and a generator to help automate the user-generated data, and I am trying to create a graph from it to see what's more likely.
Here's the code, along with the total values and comments on what the users actually fruit quantities are.
#!/usr/bin/env python
import itertools

# Fruit price data
fruitPriceDay1 = {'Apple': 1, 'Pears': 2, 'Oranges': 3}
fruitPriceDay2 = {'Apple': 2, 'Pears': 3, 'Oranges': 4}
fruitPriceDay3 = {'Apple': 2, 'Pears': 4, 'Oranges': 5}

# Generate possibilities for testing (warning: will not scale with large numbers)
def possibilityGenerator(target_sum, apple, pears, oranges):
    allDayPossible = {}
    counter = 1
    apple_range = range(0, target_sum + 1, apple)
    pears_range = range(0, target_sum + 1, pears)
    oranges_range = range(0, target_sum + 1, oranges)
    for i, j, k in itertools.product(apple_range, pears_range, oranges_range):
        if i + j + k == target_sum:
            currentPossible = {}
            currentPossible['apple'] = i // apple
            currentPossible['pears'] = j // pears
            currentPossible['oranges'] = k // oranges
            allDayPossible[counter] = currentPossible
            counter = counter + 1
    return allDayPossible

# Total sum returned by the user for the value of the fruits
totalSumDay1 = 26  # Computer does not know this, but the user's quantities are apple: 20, pears: 3, oranges: 0 at the current prices of the day
totalSumDay2 = 51  # Computer does not know this, but the user's quantities are apple: 21, pears: 3, oranges: 0 at the current prices of the day
totalSumDay3 = 61  # Computer does not know this, but the user's quantities are apple: 20, pears: 4, oranges: 1 at the current prices of the day

graph = {}
graph['day1'] = possibilityGenerator(totalSumDay1, fruitPriceDay1['Apple'], fruitPriceDay1['Pears'], fruitPriceDay1['Oranges'])
graph['day2'] = possibilityGenerator(totalSumDay2, fruitPriceDay2['Apple'], fruitPriceDay2['Pears'], fruitPriceDay2['Oranges'])
graph['day3'] = possibilityGenerator(totalSumDay3, fruitPriceDay3['Apple'], fruitPriceDay3['Pears'], fruitPriceDay3['Oranges'])
# Sample of dict = 1 : {'oranges': 0, 'apple': 0, 'pears': 0} .. 70 : {'oranges': 8, 'apple': 26, 'pears': 13}
print(graph)
We'll combine graph theory and probability:
On the 1st day, build a set of all feasible solutions. Let's denote the solution set as A1 = {a1(1), a1(2), ..., a1(n)}.
On the second day you can again build the solution set, A2.
Now, for each element in A2, you'll need to check if it can be reached from each element of A1 (given the x% tolerance). If so, connect A2(n) to A1(m). If it can't be reached from any node in A1, you can delete this node.
Basically we are building a connected directed acyclic graph.
All paths in the graph are equally likely. You can find an exact solution only when there is a single edge from Am to Am+1 (from a node in Am to a node in Am+1).
Sure, some nodes appear in more paths than other nodes. The probability for each node can be directly deduced from the number of paths that contain it.
By assigning a weight to each node, equal to the number of paths that lead to it, there is no need to keep all the history, only the previous day.
Also, have a look at non-negative-value linear Diophantine equations - a question I asked a while ago. The accepted answer is a great way to enumerate all combos in each step.
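The first two steps can be sketched on the test case from Update 3, assuming the 5% rule applies to the total quantity (the function names and the brute-force enumeration are mine):

```python
def feasible(total, prices):
    """All (apple, pears, oranges) triples whose value at the given
    (price_apple, price_pears, price_oranges) equals total.
    Brute force, so only for small totals."""
    pa, pp, po = prices
    sols = []
    for a in range(total // pa + 1):
        for p in range((total - a * pa) // pp + 1):
            rest = total - a * pa - p * pp
            if rest % po == 0:
                sols.append((a, p, rest // po))
    return sols

def reachable(s1, s2, pct=0.05):
    """Day-to-day rule: total quantity may change by at most pct."""
    q1, q2 = sum(s1), sum(s2)
    return q1 > 0 and abs(q2 - q1) <= pct * q1

# Days 1 and 2 of the test case (prices change, totals are observed)
day1 = feasible(26, (1, 2, 3))
day2 = feasible(51, (2, 3, 4))
# keep only day-2 candidates reachable from at least one day-1 candidate
survivors = [s for s in day2 if any(reachable(a, s) for a in day1)]
```

The user's true quantities ((20, 3, 0) on day 1, (21, 3, 0) on day 2) survive the pruning; repeating this day after day is exactly the edge-building step described above.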
Disclaimer: I changed my answer dramatically after temporarily deleting my answer and re-reading the question carefully as I misread some critical parts of the question. While still referencing similar topics and algorithms, the answer was greatly improved after I attempted to solve some of the problem in C# myself.
Hollywood version
The problem is a Dynamic constraint satisfaction problem (DCSP), a variation on Constraint satisfaction problems (CSP.)
Use Monte Carlo to find potential solutions for a given day if the values and quantity ranges are not tiny. Otherwise, use brute force to find every potential solution.
Use Constraint Recording (related to DCSP), applied in cascade to previous days to restrict the potential solution set.
Cross your fingers, aim and shoot (Guess), based on probability.
(Optional) Bruce Willis wins.
Original version
First, I would like to state the two main problems I see here:
The sheer number of possible solutions. Knowing only the number of items and the total value, let's say 3 and 143 for example, will yield a lot of possible solutions. Plus, it is not easy to have an algorithm pick valid solutions without inevitably trying invalid ones (total not equal to 143).
When possible solutions are found for a given day Di, one must find a way to eliminate potential solutions using the added information given by { Di+1 .. Di+n }.
Let's lay down some bases for the upcoming examples:
Let's keep the same item values for the whole game. They can either be random or chosen by the user.
The possible item values are bound to the very limited range [1-10], where no two items can have the same value.
No item can have a quantity greater than 100. That means: [0-100].
In order to solve this more easily I took the liberty to change one constraint, which makes the algorithm converge faster:
The "total quantity" rule is overridden by this rule: You can add or remove any number of items within the [1-10] range, total, in one day. However, you cannot add or remove the same number of items, total, more than twice. This also gives the game a maximum lifecycle of 20 days.
This rule enables us to rule out solutions more easily. And, with non-tiny ranges, it still renders backtracking algorithms useless, just like your original problem and rules.
In my humble opinion, this rule is not the essence of the game but only a facilitator, enabling the computer to solve the problem.
Problem 1: Finding potential solutions
For starters, problem 1 can be solved using a Monte Carlo algorithm to find a set of potential solutions. The technique is simple: generate random numbers for item values and quantities (within their respective accepted ranges). Repeat the process for the required number of items. Verify whether or not the solution is acceptable. That means verifying that items have distinct values and that the total is equal to our target total (say, 143).
While this technique has the advantage of being easy to implement it has some drawbacks:
The user's solution is not guaranteed to appear in our results.
There is a lot of "misses". For instance, it takes more or less 3,000,000 tries to find 1,000 potential solutions given our constraints.
It takes a lot of time: around 4 to 5 seconds on my lazy laptop.
How to get around these drawbacks? Well...
Limit the ranges to smaller values and
Find an adequate number of potential solutions so there is a good chance the user's solution appears in your solution set.
Use heuristics to find solutions more easily (more on that later).
Note that the more you restrict the ranges, the less useful the Monte Carlo algorithm becomes, since there will be few enough valid solutions to iterate over them all in reasonable time. For constraints { 3, [1-10], [0-100] } there are around 741,000,000 valid solutions (not constrained to a target total value). Monte Carlo is usable there. For { 3, [1-5], [0-10] }, there are only around 80,000. No need to use Monte Carlo; brute force for loops will do just fine.
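To make the rejection-sampling step concrete, here is a minimal sketch of the Monte Carlo generation for the { 3, [1-10], [0-100] } constraints (function and parameter names are mine, not from any library):

```python
import random

def monte_carlo_solutions(num_items=3, target_total=143,
                          value_range=(1, 10), qty_range=(0, 100),
                          num_wanted=100, max_tries=3000000):
    """Rejection-sample (values, quantities) pairs until enough
    candidates hit the target total; item values must be distinct."""
    found = set()
    for _ in range(max_tries):
        values = tuple(random.randint(*value_range) for _ in range(num_items))
        if len(set(values)) != num_items:   # reject duplicate item values
            continue
        qtys = tuple(random.randint(*qty_range) for _ in range(num_items))
        if sum(v * q for v, q in zip(values, qtys)) == target_total:
            found.add((values, qtys))
            if len(found) >= num_wanted:
                break
    return found

candidates = monte_carlo_solutions(num_wanted=20)
# every candidate matches the target total of 143
```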
I believe problem 1 is what you would call a Constraint satisfaction problem (or CSP).
Problem 2: Restrict the set of potential solutions
Given the fact that problem 1 is a CSP, I would go ahead and call problem 2, and the problem in general, a Dynamic CSP (or DCSP.)
[DCSPs] are useful when the original formulation of a problem is altered in some way, typically because the set of constraints to consider evolves because of the environment. DCSPs are viewed as a sequence of static CSPs, each one a transformation of the previous one in which variables and constraints can be added (restriction) or removed (relaxation).
One technique used with CSPs that might be useful to this problem is called Constraint Recording:
With each change in the environment (user entered values for Di+1), find information about the new constraint: What are the possibly "used" quantities for the add-remove constraint.
Apply the constraint to every preceding day in cascade. Rippling effects might significantly reduce possible solutions.
For this to work, you need to get a new set of possible solutions every day; use either brute force or Monte Carlo. Then, compare solutions of Di to Di-1 and keep only the solutions that can follow from previous days' solutions without violating constraints.
You will probably have to keep a history of which solutions lead to which other solutions (probably in a directed graph). Constraint recording enables you to remember possible add-remove quantities and to reject solutions based on that.
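A minimal sketch of that cascade, using only the total-quantity add/remove rule (the solution sets and the `max_change` value are made up for illustration):

```python
def filter_by_transition(prev_solutions, next_solutions, max_change=10):
    """Keep next-day solutions reachable from some previous-day solution
    under the add/remove rule, then prune previous-day solutions that
    have no surviving successor (the rippling effect)."""
    viable_next = [s for s in next_solutions
                   if any(abs(sum(s) - sum(p)) <= max_change
                          for p in prev_solutions)]
    viable_prev = [p for p in prev_solutions
                   if any(abs(sum(s) - sum(p)) <= max_change
                          for s in viable_next)]
    return viable_prev, viable_next

day1 = [(10, 20, 5), (1, 2, 3)]      # hypothetical quantity solution sets
day2 = [(12, 20, 5), (50, 60, 70)]
prev, nxt = filter_by_transition(day1, day2)
# (50, 60, 70) is unreachable from day 1; (1, 2, 3) has no successor
# -> prev == [(10, 20, 5)], nxt == [(12, 20, 5)]
```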
There are a lot of other steps that could be taken to further improve your solution. Here are some ideas:
Record constraints for item-value combinations found in previous days' solutions. Reject other solutions immediately (as item values must not change). You could even find smaller solution sets for each existing solution, using solution-specific constraints to reject invalid solutions earlier.
Generate some "mutant", full-history, solutions each day in order to "repair" the case where the D1 solution set doesn't contain the user's solution. You could use a genetic algorithm to find a mutant population based on an existing solution set.
Use heuristics in order to find solutions more easily (e.g. when a valid solution is found, try and find variations of this solution by substituting quantities around.)
Use behavioral heuristics in order to predict some user actions (e.g. same quantity for every item, extreme patterns, etc.)
Keep making some computations while the user is entering new quantities.
Given all of this, try and figure out a ranking system based on occurrence of solutions and heuristics to determine a candidate solution.
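The quantity-substitution heuristic above can be sketched as follows: trading x units of an item of value v_i for y units of an item of value v_j preserves the total exactly when x*v_i == y*v_j, so the smallest such trade is x = v_j/gcd, y = v_i/gcd (the example values and quantities are mine):

```python
from math import gcd

def neighbors(quantities, values):
    """Yield variations of a valid solution that keep the same total value:
    remove x units of item i and add y units of item j with x*v_i == y*v_j."""
    n = len(quantities)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            g = gcd(values[i], values[j])
            x, y = values[j] // g, values[i] // g  # smallest exact trade
            if quantities[i] >= x:
                variant = list(quantities)
                variant[i] -= x
                variant[j] += y
                yield variant

values = [3, 5, 7]
solution = [10, 10, 10]  # total value 150
variants = list(neighbors(solution, values))
# all six variants still total 150
```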
This problem is impossible to solve.
Let's say that you know exactly by what ratio the number of items was increased, not just the maximum possible ratio.
A user has N fruits and you have D days of guessing.
In each day you get N new variables and then you have in total D*N variables.
For each day you can generate only two equations. One equation is the sum of n_item*price and the other is based on the known ratio. In total you have at most 2*D equations, if they are all independent.
2*D < N*D for all N > 2
I wrote a program to play the game. Of course, I had to automate the human side, but I believe I did it all in such a way that I shouldn't invalidate my approach when played against a real human.
I approached this from a machine learning perspective and treated the problem as a hidden Markov model where the total price was the observation. My solution is to use a particle filter. This solution is written in Python 2.7 using NumPy and SciPy.
I stated any assumptions I made either explicitly in the comments or implicitly in the code. I also set some additional constraints for the sake of getting the code to run in an automated fashion. It's not particularly optimized as I tried to err on the side of comprehensibility rather than speed.
Each iteration outputs the current true quantities and the guess. I just pipe the output to a file so I can review it easily. An interesting extension would be to plot the output on a graph either 2D (for 2 fruits) or 3D (for 3 fruits). Then you would be able to see the particle filter hone in on the solution.
Update:
Edited the code to include updated parameters after tweaking. Included plotting calls using matplotlib (via pylab). Plotting works on Linux-Gnome, your mileage may vary. Defaulted NUM_FRUITS to 2 for plotting support. Just comment out all the pylab calls to remove plotting and be able to change NUM_FRUITS to anything.
Does a good job estimating the current fxn represented by UnknownQuantities X Prices = TotalPrice. In 2D (2 Fruits) this is a line, in 3D (3 Fruits) it'd be a plane. Seems to be too little data for the particle filter to reliably hone in on the correct quantities. Need a little more smarts on top of the particle filter to really bring together the historical information. You could try converting the particle filter to 2nd- or 3rd-order.
Update 2:
I've been playing around with my code, a lot. I tried a bunch of things and now present the final program that I'll be making (starting to burn out on this idea).
Changes:
The particles now use floating-point values rather than integers. Not sure if this had any meaningful effect, but it is a more general solution. Rounding to integers is done only when making a guess.
Plotting shows true quantities as green square and current guess as red square. Currently believed particles shown as blue dots (sized by how much we believe them). This makes it really easy to see how well the algorithm is working. (Plotting also tested and working on Win 7 64-bit).
Added parameters for turning off/on quantity changing and price changing. Of course, both 'off' is not interesting.
It does a pretty dang good job, but, as has been noted, it's a really tough problem, so getting the exact answer is hard. Turning off CHANGE_QUANTITIES produces the simplest case. You can get an appreciation for the difficulty of the problem by running with 2 fruits with CHANGE_QUANTITIES off. See how quickly it hones in on the correct answer, then see how much harder it gets as you increase the number of fruits.
You can also get a perspective on the difficulty by keeping CHANGE_QUANTITIES on, but adjusting the MAX_QUANTITY_CHANGE from very small values (.001) to "large" values (.05).
One situation where it struggles is if one dimension (one fruit quantity) gets close to zero. Because it's using an average of particles to guess, it will always skew away from a hard boundary like zero.
In general this makes a great particle filter tutorial.
from __future__ import division
import random
import numpy
import scipy.stats
import pylab

# Assume Guesser knows prices and total
# Guesser must determine the quantities
# All of pylab is just for graphing, comment out if undesired
# Graphing only graphs first 2 FRUITS (first 2 dimensions)

NUM_FRUITS = 3
MAX_QUANTITY_CHANGE = .01  # Maximum percentage change that total quantity of fruit can change per iteration
MAX_QUANTITY = 100         # Bound for the sake of instantiating variables
MIN_QUANTITY_TOTAL = 10    # Prevent degenerate conditions where quantities all hit 0
MAX_FRUIT_PRICE = 1000     # Bound for the sake of instantiating variables
NUM_PARTICLES = 5000
NEW_PARTICLES = 500        # Num new particles to introduce each iteration after guessing
NUM_ITERATIONS = 20        # Max iterations to run
CHANGE_QUANTITIES = True
CHANGE_PRICES = True

'''
Change individual fruit quantities for a random amount of time
Never exceed changing fruit quantity by more than MAX_QUANTITY_CHANGE
'''
def updateQuantities(quantities):
    old_total = max(sum(quantities), MIN_QUANTITY_TOTAL)
    new_total = old_total
    max_change = int(old_total * MAX_QUANTITY_CHANGE)

    while random.random() > .005:  # Stop Randomly
        change_index = random.randint(0, len(quantities)-1)
        change_val = random.randint(-1*max_change, max_change)

        if quantities[change_index] + change_val >= 0:  # Prevent negative quantities
            quantities[change_index] += change_val
            new_total += change_val

            if abs((new_total / old_total) - 1) > MAX_QUANTITY_CHANGE:
                quantities[change_index] -= change_val  # Reverse the change
                new_total -= change_val                 # Keep the running total consistent

def totalPrice(prices, quantities):
    return sum(prices*quantities)

def sampleParticleSet(particles, fruit_prices, current_total, num_to_sample):
    # Assign weight to each particle using observation (observation is current_total)
    # Weight is the probability of that particle (guess) given the current observation
    # Determined by looking up the distance from the hyperplane (line, plane, hyperplane) in a
    # probability density fxn for a normal distribution centered at 0
    variance = 2

    distances_to_current_hyperplane = [abs(numpy.dot(particle, fruit_prices)-current_total)/numpy.linalg.norm(fruit_prices) for particle in particles]
    weights = numpy.array([scipy.stats.norm.pdf(distances_to_current_hyperplane[p], 0, variance) for p in range(0,NUM_PARTICLES)])

    weight_sum = sum(weights)  # No need to normalize, as relative weights are fine, so just sample un-normalized

    # Create new particle set weighted by weights
    belief_particles = []
    belief_weights = []
    for p in range(0, num_to_sample):
        sample = random.uniform(0, weight_sum)
        # sum across weights until we exceed our sample, the weight we just summed is the index of the particle we'll use
        p_sum = 0
        p_i = -1
        while p_sum < sample:
            p_i += 1
            p_sum += weights[p_i]
        belief_particles.append(particles[p_i])
        belief_weights.append(weights[p_i])

    return belief_particles, numpy.array(belief_weights)

'''
Generates new particles around the equation of the current prices and total (better particle generation than uniformly random)
'''
def generateNewParticles(current_total, fruit_prices, num_to_generate):
    new_particles = []
    max_values = [int(current_total/fruit_prices[n]) for n in range(0,NUM_FRUITS)]
    for p in range(0, num_to_generate):
        new_particle = numpy.array([random.uniform(1,max_values[n]) for n in range(0,NUM_FRUITS)])
        new_particle[-1] = (current_total - sum([new_particle[i]*fruit_prices[i] for i in range(0, NUM_FRUITS-1)])) / fruit_prices[-1]
        new_particles.append(new_particle)
    return new_particles

# Initialize our data structures:
# Represents users first round of quantity selection
fruit_prices = numpy.array([random.randint(1,MAX_FRUIT_PRICE) for n in range(0,NUM_FRUITS)])
fruit_quantities = numpy.array([random.randint(1,MAX_QUANTITY) for n in range(0,NUM_FRUITS)])
current_total = totalPrice(fruit_prices, fruit_quantities)
success = False

particles = generateNewParticles(current_total, fruit_prices, NUM_PARTICLES) #[numpy.array([random.randint(1,MAX_QUANTITY) for n in range(0,NUM_FRUITS)]) for p in range(0,NUM_PARTICLES)]

guess = numpy.average(particles, axis=0)
guess = numpy.array([int(round(guess[n])) for n in range(0,NUM_FRUITS)])
print "Truth:", str(fruit_quantities)
print "Guess:", str(guess)

pylab.ion()
pylab.draw()
pylab.scatter([p[0] for p in particles], [p[1] for p in particles])
pylab.scatter([fruit_quantities[0]], [fruit_quantities[1]], s=150, c='g', marker='s')
pylab.scatter([guess[0]], [guess[1]], s=150, c='r', marker='s')
pylab.xlim(0, MAX_QUANTITY)
pylab.ylim(0, MAX_QUANTITY)
pylab.draw()

if not (guess == fruit_quantities).all():
    for i in range(0,NUM_ITERATIONS):
        print "------------------------", i

        if CHANGE_PRICES:
            fruit_prices = numpy.array([random.randint(1,MAX_FRUIT_PRICE) for n in range(0,NUM_FRUITS)])

        if CHANGE_QUANTITIES:
            updateQuantities(fruit_quantities)
            map(updateQuantities, particles)  # Particle Filter Prediction

        print "Truth:", str(fruit_quantities)
        current_total = totalPrice(fruit_prices, fruit_quantities)

        # Guesser's Turn - Particle Filter:
        # Prediction done above if CHANGE_QUANTITIES is True

        # Update
        belief_particles, belief_weights = sampleParticleSet(particles, fruit_prices, current_total, NUM_PARTICLES-NEW_PARTICLES)
        new_particles = generateNewParticles(current_total, fruit_prices, NEW_PARTICLES)

        # Make a guess:
        guess = numpy.average(belief_particles, axis=0, weights=belief_weights)  # Could optimize here by removing outliers or try using median
        guess = numpy.array([int(round(guess[n])) for n in range(0,NUM_FRUITS)])  # convert to integers
        print "Guess:", str(guess)

        pylab.cla()
        #pylab.scatter([p[0] for p in new_particles], [p[1] for p in new_particles], c='y')  # Plot new particles
        pylab.scatter([p[0] for p in belief_particles], [p[1] for p in belief_particles], s=belief_weights*50)  # Plot current particles
        pylab.scatter([fruit_quantities[0]], [fruit_quantities[1]], s=150, c='g', marker='s')  # Plot truth
        pylab.scatter([guess[0]], [guess[1]], s=150, c='r', marker='s')  # Plot current guess
        pylab.xlim(0, MAX_QUANTITY)
        pylab.ylim(0, MAX_QUANTITY)
        pylab.draw()

        if (guess == fruit_quantities).all():
            success = True
            break

        # Attach new particles to existing particles for next run:
        belief_particles.extend(new_particles)
        particles = belief_particles
else:
    success = True

if success:
    print "Correct Quantities guessed"
else:
    print "Unable to get correct answer within", NUM_ITERATIONS, "iterations"

pylab.ioff()
pylab.show()
For your initial rules:
From my school years, I would say that if we abstract away the 5% changes, we have every day an equation with three unknowns, which hold the same values as the previous day.
At day 3, you have three equations, three unknown values, and the solution should be direct.
I guess the 5% change each day may be ignored if the values of the three elements are different enough, because, as you said, we will use approximations and round the numbers.
For your adapted rules:
Too many unknowns - and changing - values in this case, so there is no direct solution I know of. I would trust Lior on this; his approach looks fine! (If you have a limited range for prices and quantities.)
I realized that my answer was getting quite lengthy, so I moved the code to the top (which is probably what most people are interested in). Below it there are two things:
an explanation why (deep) neural networks are not a good approach to this problem, and
an explanation why we can't uniquely determine the human's choices with the given information.
For those of you interested in either topic, please see below. For the rest of you, here is the code.
Code that finds all possible solutions
As I explain further down in the answer, your problem is under-determined. In the average case, there are many possible solutions, and this number grows at least exponentially as the number of days increases. This is true for both the original and the extended problem. Nevertheless, we can (sort of) efficiently find all solutions (it's NP-hard, so don't expect too much).
Backtracking (from the 1960s, so not exactly modern) is the algorithm of choice here. In Python, we can write it as a recursive generator, which is actually quite elegant:
def backtrack(pos, daily_total, daily_item_value, allowed_change, iterator_bounds, history=None):
    if pos == len(daily_total):
        yield np.array(history)
        return

    it = [range(start, stop, step) for start, stop, step in iterator_bounds[pos][:-1]]
    for partial_basket in product(*it):
        if history is None:
            history = [partial_basket]
        else:
            history.append(partial_basket)

        # ensure we only check items that match the total basket value
        # for that day
        partial_value = np.sum(np.array(partial_basket) * daily_item_value[pos, :-1])
        if (daily_total[pos] - partial_value) % daily_item_value[pos, -1] != 0:
            history.pop()
            continue

        last_item = (daily_total[pos] - partial_value) // daily_item_value[pos, -1]
        if last_item < 0:
            history.pop()
            continue

        basket = np.array([*partial_basket] + [int(last_item)])
        basket_value = np.sum(basket * daily_item_value[pos])
        history[-1] = basket

        if len(history) > 1:
            # ensure that today's basket stays within yesterday's range
            previous_basket = history[-2]
            previous_basket_count = np.sum(previous_basket)
            current_basket_count = np.sum(basket)
            if (np.abs(current_basket_count - previous_basket_count) > allowed_change * previous_basket_count):
                history.pop()
                continue

        yield from backtrack(pos + 1, daily_total, daily_item_value, allowed_change, iterator_bounds, history)
        history.pop()
This approach essentially structures all possible candidates into a large tree and then performs depth first search with pruning whenever a constraint is violated. Whenever a leaf node is encountered, we yield the result.
Tree search (in general) can be parallelized, but that is out of scope here. It will make the solution less readable without much additional insight. The same goes for reducing constant overhead of the code, e.g., working the constraints if ...: continue into the iterator_bounds variable and do less checks.
I put the full code example (including a simulator for the human side of the game) at the bottom of this answer.
Modern Machine Learning for this problem
The question is 9 years old, but still one that I'm deeply interested in. In the time since, machine learning (RNNs, CNNs, GANs, etc.) and cheap GPUs have risen that enable new approaches. I thought it would be fun to revisit this question to see if there are new approaches.
I really like your enthusiasm for the world of deep neural networks; unfortunately they simply do not apply here for a few reasons:
(Exactness) If you need an exact solution, like for your game, NNs can't provide that.
(Integer Constraint) The currently dominant NN training methods are gradient descent based, so the problem has to be differentiable or you need to be able to reformulate it in such a way that it becomes differentiable; constraining yourself to integers kills GD methods in the cradle. You could try evolutionary algorithms to search for a parameterization. This does exist, but those methods are currently a lot less established.
(Non-Convexity) In the typical formulation, training a NN is a local method, which means you will find exactly 1 (locally optimal) solution if your algorithm converges. In the average case, your game has many possible solutions for both the original and extended version. This not only means that - on average - you can't figure out the human's choice (basket), but also that you have no control over which of the many solutions the NN will find. Current NN success stories suffer the same fate, but tend not to care, because they only want some solution instead of a specific one. Some okay-ish solution beats the hell out of no solution at all.
(Expert Domain Knowledge) For this game, you have a lot of domain knowledge that can be exploited to improve the optimization/learning. Taking full advantage of arbitrary domain knowledge in NNs is not trivial and for this game building a custom ML model (not a neural network) would be easier and more efficient.
Why the game can not be uniquely solved - Part 1
Let's consider a substitute problem first and lift the integer requirement, i.e., the basket (human choice of N fruits for a given day) can have fractional fruits (0.3 oranges).
The total value constraint np.dot(basket, daily_price) == total_value limits the possible solutions for the basket; it reduces the problem by one dimension. Freely pick amounts for N-1 fruits, and you can always find a value for the N-th fruit to satisfy the constraint. So while it seems that there are N choices to make for a day, there are actually only N-1 that we can make freely, and the last one will be fully determined by our previous choices. So for each day the game goes on, we need to estimate an additional N-1 choices/variables.
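A tiny numerical illustration of this dimension reduction (the prices and the free choices below are made up):

```python
import numpy as np

daily_price = np.array([10.0, 7.0, 3.0])
total_value = 100.0

# Freely choose fractional quantities for the first N-1 fruits...
free_choices = np.array([4.5, 2.0])

# ...and the last quantity is fully determined by the total-value constraint
last = (total_value - free_choices @ daily_price[:-1]) / daily_price[-1]
basket = np.append(free_choices, last)

assert np.isclose(basket @ daily_price, total_value)
```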
We might want to enforce that all the choices are greater than 0, but that only reduces the interval from which we can choose a number; any open interval of real numbers has infinitely many numbers in it, so we will never run out of options because of this. Still N-1 choices to make.
Between two days, the total basket volume np.sum(basket) only changes by at most some_percent of the previous day, i.e. np.abs(np.sum(previous_basket) - np.sum(basket)) <= some_percent * np.sum(previous_basket). Some of the choices we could make at a given day will change the basket by more than some_percent of the previous day. To make sure we never violate this, we can freely make N-2 choices and then have to pick the (N-1)-th variable so that adding it and the N-th variable (which is fixed by our previous choices) stays within some_percent. (Note: this is an inequality constraint, so it will only reduce the number of choices if we have equality, i.e., the basket changes by exactly some_percent. In optimization theory this is known as the constraint being active.)
We can again think about the constraint that all choices should be greater than 0, but the argument remains that this simply changes the interval from which we can now freely choose N-2 variables.
So after D days we are left with N-1 choices to estimate from the first day (no change constraint) and (D-1)*(N-2) choices to estimate for the following days. Unfortunately, we have run out of constraints to further reduce this number, and the number of unknowns grows by at least N-2 each day. This is essentially what Luka Rahne meant with "2*D < N*D for all N > 2". We will likely find many candidates which are all equally probable.
The exact food prices each day don't matter for this. As long as they are of some value, they will constrain one of the choices. Hence, if you extend your game in the way you specify, there is always a chance for infinitely many solutions; regardless of the number of days.
Why the game can still not be uniquely solved - Part 2
There is one constraint we didn't look at which might help fix this: only allow integer solutions for choices. The problem with integer constraints is that they are very complex to deal with. However, our main concern here is whether adding this constraint will allow us to uniquely solve the problem given enough days. For this, there is a rather intuitive counter-example. Suppose you have 3 consecutive days, and for the 1st and 3rd day, the total value constraint only allows one basket. In other words, we know the basket for day 1 and day 3, but not for day 2. Here, we only know its total value, that it is within some_percent of day 1, and that day 3 is within some_percent of day 2. Is this enough information to always work out what is in the basket on day 2?
some_percent = 0.05
Day 1: basket: [3 2] prices: [10 7] total_value: 44
Day 2: basket: [x y] prices: [5 5] total_value: 25
Day 3: basket: [2 3] prices: [9 5] total_value: 33
Possible Solutions Day 2: [2 3], [3 2]
Above is one example, where we know the baskets for two days thanks to the total value constraint, but that still won't allow us to work out the exact composition of the basket on day 2. Thus, while it may be possible to work it out in some cases, it is not possible in general. Adding more days after day 3 doesn't help figure out day 2 at all. It might help in narrowing the options for day 3 (which would then narrow the options for day 2), but we already have just 1 choice left for day 3, so it's no use.
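The ambiguity in the counter-example is easy to verify mechanically; the check below confirms that both candidate baskets for day 2 satisfy every constraint:

```python
def consistent(basket, prices, total, prev_basket, some_percent=0.05):
    """True if basket matches the day's total value and the
    day-over-day total-quantity change constraint."""
    value_ok = sum(b * p for b, p in zip(basket, prices)) == total
    change_ok = (abs(sum(basket) - sum(prev_basket))
                 <= some_percent * sum(prev_basket))
    return value_ok and change_ok

day1, day3 = [3, 2], [2, 3]
for day2 in ([2, 3], [3, 2]):
    assert consistent(day2, [5, 5], 25, day1)   # day 2 reachable from day 1
    assert consistent(day3, [9, 5], 33, day2)   # day 3 reachable from day 2
```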
Full Code
import numpy as np
from itertools import product
import tqdm


def sample_uniform(n, r):
    # check out: http://compneuro.uwaterloo.ca/files/publications/voelker.2017.pdf
    sample = np.random.rand(n + 2)
    sample_norm = np.linalg.norm(sample)
    unit_sample = (sample / sample_norm)
    change = np.floor(r * unit_sample[:-2]).astype(np.int)
    return change


def human(num_fruits, allowed_change=0.05, current_distribution=None):
    allowed_change = 0.05
    if current_distribution is None:
        current_distribution = np.random.randint(1, 50, size=num_fruits)
    yield current_distribution.copy()

    # rejection sample a suitable change
    while True:
        current_total = np.sum(current_distribution)
        maximum_change = np.floor(allowed_change * current_total)

        change = sample_uniform(num_fruits, maximum_change)
        while np.sum(change) > maximum_change:
            change = sample_uniform(num_fruits, maximum_change)

        current_distribution += change
        yield current_distribution.copy()


def prices(num_fruits, alter_prices=False):
    current_prices = np.random.randint(1, 10, size=num_fruits)
    while True:
        yield current_prices.copy()
        if alter_prices:
            current_prices = np.random.randint(1, 10, size=num_fruits)


def play_game(num_days, num_fruits=3, alter_prices=False):
    human_choice = human(num_fruits)
    price_development = prices(num_fruits, alter_prices=alter_prices)

    history = {
        "basket": list(),
        "prices": list(),
        "total": list()
    }

    for day in range(num_days):
        choice = next(human_choice)
        price = next(price_development)
        total_price = np.sum(choice * price)

        history["basket"].append(choice)
        history["prices"].append(price)
        history["total"].append(total_price)

    return history


def backtrack(pos, daily_total, daily_item_value, allowed_change, iterator_bounds, history=None):
    if pos == len(daily_total):
        yield np.array(history)
        return

    it = [range(start, stop, step) for start, stop, step in iterator_bounds[pos][:-1]]
    for partial_basket in product(*it):
        if history is None:
            history = [partial_basket]
        else:
            history.append(partial_basket)

        # ensure we only check items that match the total basket value
        # for that day
        partial_value = np.sum(np.array(partial_basket) * daily_item_value[pos, :-1])
        if (daily_total[pos] - partial_value) % daily_item_value[pos, -1] != 0:
            history.pop()
            continue

        last_item = (daily_total[pos] - partial_value) // daily_item_value[pos, -1]
        if last_item < 0:
            history.pop()
            continue

        basket = np.array([*partial_basket] + [int(last_item)])
        basket_value = np.sum(basket * daily_item_value[pos])
        history[-1] = basket

        if len(history) > 1:
            # ensure that today's basket stays within relative tolerance
            previous_basket = history[-2]
            previous_basket_count = np.sum(previous_basket)
            current_basket_count = np.sum(basket)
            if (np.abs(current_basket_count - previous_basket_count) > allowed_change * previous_basket_count):
                history.pop()
                continue

        yield from backtrack(pos + 1, daily_total, daily_item_value, allowed_change, iterator_bounds, history)
        history.pop()


if __name__ == "__main__":
    np.random.seed(1337)

    num_fruits = 3
    allowed_change = 0.05
    alter_prices = False
    history = play_game(15, num_fruits=num_fruits, alter_prices=alter_prices)

    total_price = np.stack(history["total"]).astype(np.int)
    daily_price = np.stack(history["prices"]).astype(np.int)
    basket = np.stack(history["basket"]).astype(np.int)

    maximum_fruits = np.floor(total_price[:, np.newaxis] / daily_price).astype(np.int)
    iterator_bounds = [[[0, maximum_fruits[pos, fruit], 1] for fruit in range(num_fruits)] for pos in range(len(basket))]
    # iterator_bounds = np.array(iterator_bounds)
    # import pdb; pdb.set_trace()

    pbar = tqdm.tqdm(backtrack(0, total_price,
                               daily_price, allowed_change, iterator_bounds), desc="Found Solutions")
    for solution in pbar:
        # test price guess
        calculated_price = np.sum(np.stack(solution) * daily_price, axis=1)
        assert np.all(calculated_price == total_price)

        # test basket change constraint
        change = np.sum(np.diff(solution, axis=0), axis=1)
        max_change = np.sum(solution[:-1, ...], axis=1) * allowed_change
        assert np.all(change <= max_change)

        # indicate that we found the original solution
        if not np.any(solution - basket):
            pbar.set_description("Found Solutions (includes original)")
When the player selects a combination that reduces the number of possibilities to 1, the computer wins. Otherwise, if the player keeps picking combinations whose total varies only within a certain percentage, the computer may never win.
import itertools
import numpy as np


def gen_possible_combination(total, prices):
    """
    Generates all possible combinations of numbers of items for
    given prices constraint by total
    """
    nitems = [range(total//p + 1) for p in prices]
    prices_arr = np.array(prices)
    combo = [x for x in itertools.product(
        *nitems) if np.dot(np.array(x), prices_arr) == total]
    return combo


def reduce(combo1, combo2, pct):
    """
    Filters impossible transitions which are greater than pct
    """
    combo = {}
    for x in combo1:
        for y in combo2:
            if abs(sum(x) - sum(y))/sum(x) <= pct:
                combo[y] = 1
    return list(combo.keys())


def gen_items(n, total):
    """
    Generates a list of items
    """
    nums = [0] * n
    t = 0
    i = 0
    while t < total:
        if i < n - 1:
            n1 = np.random.randint(0, total-t)
            nums[i] = n1
            t += n1
            i += 1
        else:
            nums[i] = total - t
            t = total
    return nums


def main():
    pct = 0.05
    i = 0
    done = False
    n = 3
    total_items = 26  # np.random.randint(26)
    combo = None
    while not done:
        prices = [np.random.randint(1, 10) for _ in range(n)]
        items = gen_items(n, total_items)
        total = np.dot(np.array(prices), np.array(items))

        combo1 = gen_possible_combination(total, prices)
        if combo:
            combo = reduce(combo, combo1, pct)
        else:
            combo = combo1

        i += 1
        print(i, 'Items:', items, 'Prices:', prices, 'Total:',
              total, 'No. Possibilities:', len(combo))

        if len(combo) == 1:
            print('Solution', combo)
            break

        if np.random.random() < 0.5:
            total_items = int(total_items * (1 + np.random.random()*pct))
        else:
            total_items = int(
                np.ceil(total_items * (1 - np.random.random()*pct)))


if __name__ == "__main__":
    main()
