Simulating multiple Poisson processes - python

I have N processes, each with a different Poisson rate. I would like to simulate arrival times from all N processes. If N = 1 I can do this:
t = 0
N = 1
for i in range(1,10):
t+= random.expovariate(15)
print N, t
However if I have N = 5 and a list of rates
rates = [10,1,15,4,2]
I would like somehow for the loop to output the arrival times of all N processes in the right order. That is I would still like only two numbers per line (the ID of the process and the arrival time) but globally sorted by arrival time.
I could just make N lists and merge them afterwards but I would like the arrival times to be outputted in the right order in the first place.
Update. One problem is that if you just sample a fixed number of arrivals from each process, you get only early times from the high rate processes. So I think I need to sample from a fixed time interval for each process so the number of samples varies depending on the rate.

If I'm understanding you correctly:
import random
import itertools
def arrivalGenerator(rate):
t = 0
while True:
t += random.expovariate(rate)
yield t
rates = [10, 1, 15, 4, 2]
t = [(i, 0) for i in range(0, len(rates))]
arrivals = []
for i in range(len(rates)):
t = 0
generator = arrivalGenerator(rates[i])
arrivals += [(i, arrival) \
for arrival in itertools.takewhile(lambda t: t < 100, generator)]
sorted_arrivals = sorted(arrivals, key=lambda x: x[1])
for arrival in sorted_arrivals:
print arrival[0], arrival[1]
Note that your initial logic was generating a fixed number of arrivals for each process. What you really want is a specific time window, and to keep generating for a given process until you exceed that time window.
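If you would rather have the arrivals come out already in global time order, without building and sorting one big list, another option (my own sketch, not part of the answer above; Python 3 print syntax) is to merge the per-process generators lazily with heapq.merge, since each generator is individually sorted by time:
import heapq
import random

def tagged_arrivals(process_id, rate, horizon):
    """Yield (time, process_id) pairs for one Poisson process until `horizon`."""
    t = 0.0
    while True:
        t += random.expovariate(rate)
        if t >= horizon:
            return
        yield (t, process_id)

rates = [10, 1, 15, 4, 2]
streams = [tagged_arrivals(i, r, horizon=100) for i, r in enumerate(rates)]
# heapq.merge consumes the already-sorted streams lazily and yields
# (time, process_id) pairs in global time order
for t, i in heapq.merge(*streams):
    print(i, t)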

Following http://www.columbia.edu/~ks20/4703-Sigman/4703-07-Notes-PP-NSPP.pdf I think there is a more efficient answer.
You do roughly:
total_rate = sum(rates)
probabilities = [ r / float(total_rate) for r in rates ]  # float() guards against integer division on Python 2
arrivals = []
t = 0
while t < T:
t += random.expovariate(total_rate)
i = weighted_random(probabilities)
    arrivals.append((i, t))
This method eliminates the need to keep coroutine state around for a large number of different arrival processes. There's just a single "net" arrival process. The distribution will be the same.
Note that I have not given an implementation for weighted_random above, but I assume my intention is clear. It is left as an exercise for the reader ;-) -- or see e.g. http://eli.thegreenplace.net/2010/01/22/weighted-random-generation-in-python.
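For what it's worth, here is one plausible weighted_random based on bisecting the cumulative probabilities, wired into the snippet above as a runnable sketch (the horizon T is my own choice, not something specified in the answer):
import bisect
import random

def weighted_random(probabilities):
    """Pick an index i with probability probabilities[i] (assumed to sum to 1)."""
    # For many draws you would precompute the cumulative sums once; rebuilding
    # them on every call keeps the signature of the pseudocode above.
    cumulative = []
    running = 0.0
    for p in probabilities:
        running += p
        cumulative.append(running)
    return bisect.bisect_left(cumulative, random.random() * running)

T = 100.0                                  # simulation horizon, chosen only for illustration
rates = [10, 1, 15, 4, 2]
total_rate = float(sum(rates))
probabilities = [r / total_rate for r in rates]

arrivals = []
t = 0.0
while t < T:
    t += random.expovariate(total_rate)
    if t < T:                              # discard the arrival that overshoots the horizon
        arrivals.append((weighted_random(probabilities), t))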
You can also do:
arrivals = []
t = 0
while t < T:
dt_list = [ random.expovariate(r) for r in rates ]
(dt,i) = min((tau,i) for i,tau in enumerate(dt_list))
t += dt
    arrivals.append((i, t))
i.e., you actually do generate separate interarrival times for all processes, but you do not need to "remember" the states of the processes. Note that the minimum of two independent exponentially-distributed random variables with rates r1 and r2 is itself exponentially distributed with rate r1+r2 (per http://ocw.mit.edu/courses/mathematics/18-440-probability-and-random-variables-spring-2011/lecture-notes/MIT18_440S11_Lecture20.pdf), so this is actually quite similar to the previous code snippet.
Of the two methods I have given here, I think the first is better:
The first is O( len(arrivals) * log(len(rates)) ) whereas the second is O( len(arrivals) * len(rates) )
The first requires 2 random numbers from the underlying generator per arrival, whereas the second requires len(rates) random numbers per arrival.
The first requires 1 evaluation of log (I assume this is how the exponential random variable is generated) per arrival, whereas the second requires O(len(rates)) evaluations of log per arrival.
Also, take all of the above Python syntax with a grain of salt (I have not run it, and I am rusty with Python), and eliminate temporary lists if you like. This is meant as "pseudocode" really; for a fast Monte Carlo simulation you'd probably use C++ (and/or CUDA) anyway.
I know you're probably well past the point of needing this answer, but I hope it can be helpful to others who find this post.

Related

Parallel algorithm for set splitting

I'm trying to solve a problem with splitting a set into subsets.
The input data is a list and an integer.
The task is to divide the set into N-element subsets whose element sums are almost equal. As this is an NP-hard problem, I have tried two approaches:
a) iterate over all possibilities and distribute the work to many machines using mpi4py (for a list of more than 100 elements and 20-element subsets this takes too long)
b) using mpi4py, send the list to nodes with different seeds, but in this case I potentially compute the same subsets many times. For instance, with 100 numbers and 5 subsets of 20 elements each, after 60 s my result could easily be beaten by a human simply looking at the table.
In short, I'm looking for a heuristic algorithm that can run on a distributed system and create N-element subsets of a bigger set whose sums are almost equal.
a = range(1, 13)  # the numbers 1..12
k = 3
One possible solution:
[1,2,11,12] [3,4,9,10] [5,6,7,8]
because the sums are 26, 26, 26.
It is not always possible to create exactly equal sums or numbers of
elements. The difference between the maximum and minimum number of elements in
the subsets should be 0 (if len(a)/k is an integer) or 1.
edit 1:
I have investigated two options: 1. The parent generates all combinations and then sends them to the parallel algorithm (but this is too slow for me). 2. The parent sends the list and each node generates its own subsets and calculates the subset sums within a restricted time, then sends its best result to the parent. The parent receives these results and chooses the one that minimizes the difference between the subset sums. I think the second option has the potential to be faster.
Best regards,
Szczepan
I think you're trying to do something more complicated than necessary - do you actually need an exact solution (global optimum)? Regarding the heuristic solution, I had to do something along these lines in the past so here's my take on it:
Reformulate the problem as follows: You have a vector with given mean ('global mean') and you want to break it into chunks such that means of each individual chunk will be as close as possible to the 'global mean'.
Just divide it into chunks randomly and then iteratively swap elements between the chunks until you get acceptable results. You can experiment with different ways of doing it; here I'm just reshuffling the elements of the two chunks with the minimum and maximum 'chunk mean'.
In general, the bigger the chunks are, the easier it becomes, because the first random split already gives you a not-so-bad solution (think sample means).
How big is your input list? I tested this with an input of 100000 elements (uniformly distributed integers). With 50 chunks of 2000 elements you get the result instantly; with 2000 chunks of 50 elements you need to wait <1 min.
import numpy as np
my_numbers = np.random.randint(10000, size=100000)
chunks = 50
iter_limit = 10000
desired_mean = my_numbers.mean()
acceptable_range = 0.1
split = np.array_split(my_numbers, chunks)
for i in range(iter_limit):
split_means = np.array([array.mean() for array in split]) # this can be optimized, some of the means are known
current_min = split_means.min()
current_max = split_means.max()
mean_diff = split_means.ptp()
    if(i % 100 == 0 or mean_diff <= acceptable_range):
        print("Iter: {}, Desired: {}, Min {}, Max {}, Range {}".format(i, desired_mean, current_min, current_max, mean_diff))
    if mean_diff <= acceptable_range:
print('Acceptable solution found')
break
min_index = split_means.argmin()
max_index = split_means.argmax()
if max_index < min_index:
merged = np.hstack((split.pop(min_index), split.pop(max_index)))
else:
merged = np.hstack((split.pop(max_index), split.pop(min_index)))
reshuffle_range = mean_diff+1
while reshuffle_range > mean_diff:
# this while just ensures that you're not getting worse split, either the same or better
np.random.shuffle(merged)
modified_arrays = np.array_split(merged, 2)
reshuffle_range = np.array([array.mean() for array in modified_arrays]).ptp()
split += modified_arrays

create number of requests for a day having Poission arrival on an hourly basis in python

Let's say we have a service receiving requests, and we aggregate those requests on an hourly basis, e.g. from 12-1, 1-2, etc. What I want to do is generate these numbers of requests so that they follow a Poisson arrival process, and then add them to a dictionary representing a day of the week
monday = [hour_range, number_of_clients_in_that_hour]
Then at the end we will have these 7 dictionaries, named from Monday to Sunday, on which some linear regression can be used to predict the number of clients for the next hour of a given day.
So basically, as I am simulating this scenario in Python, I need to generate arrivals that represent this kind of scenario. In the following code I generate the number of clients in an hour using a uniform distribution. How can I do it for Poisson arrivals, or any other arrival process that truly represents such a scenario? My code is as follows:
day_names = ['mon','tue','wed','thurs','fri','sat','sun']
time_values = np.linspace(1,23,23,dtype='int') # print from 1,2...23
for day_iterator in range(1,7+1):
number_of_clients = [] # create empty list that will hold number of clients
for i in range(1,24,1): # lets create no. of clients for a day on an hourly basis in this for loop
rand_value = random.randint(1,20) # generate number of clients
number_of_clients.append(rand_value) # append the number of clients to this list
# a single day data is generated after this for
locals() [day_names[day_iterator-1]] = dict(zip(time_values,number_of_clients)) # create dict for each day of a week
# print each day
print "monday = %s"%mon
print "tuesday = %s"%tue
print "wed = %s"%wed
print "thurs = %s"%thurs
print "fri = %s"%fri
print "sat = %s"%sat
print "sun = %s"%sun
plt.plot(mon.keys(),mon.values())
The path of least resistance is to use the built-in Poisson generator from numpy.
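For example (a small sketch; the hourly rate of 5 is just an assumed value), numpy.random.poisson can produce a whole day of hourly request counts in one call, matching the 23 hourly slots used in the question:
import numpy as np

hourly_rate = 5                                         # assumed average number of requests per hour
counts = np.random.poisson(lam=hourly_rate, size=23)    # one Poisson count per hourly slot
monday = dict(zip(range(1, 24), counts.tolist()))       # same shape as the question's per-day dicts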
However, if you want to roll your own the following code will do the trick:
import math
import random
def poisson(rate):
x = 0
product = random.random()
threshold = math.exp(-rate)
while product >= threshold:
product *= random.random()
x += 1
return x
This is based on the fact that Poisson events have exponentially distributed interarrival times, so you can generate unit-rate exponentials until their sum exceeds your specified rate. This implementation is slightly more clever, though: by exponentiating both sides of the summation/threshold relationship, the sum of logarithm evaluations turns into a simple multiplication, and the result can be compared to a pre-calculated exponentiated threshold. This is algebraically identical to summing exponential random variates, but it performs a single exponentiation and, on average, lambda multiplications, rather than summing an average of lambda log evaluations.
Finally, whichever generator you use you need to know the rate. Bearing in mind that poisson is the French word for fish, one of the worst jokes in prob & stats is the statement "the Poisson scales." This means that the hourly rate can be converted to a daily rate by simply multiplying by 24, the number of hours in a day. For example, if you have an average of 3 per hour, you will have an average of 72 per day.
The inter-arrival times for a Poisson process (with the usual simplifying assumptions) are exponentially distributed. In this kind of modelling work then, it's the inter-arrival times that are often used rather than the parent process.
Here's how you can get a count for each hour of a Poisson process using a well-known Python library. Notice that scale is the inverse of the Poisson rate parameter.
>>> from scipy.stats import expon
>>> def hourly_arrivals(scale=1):
...     count = 0
...     t = expon.rvs(scale=scale)        # first inter-arrival time
...     while t < 1:                      # count the arrivals that fall within one hour
...         count += 1
...         t += expon.rvs(scale=scale)
...     return count
...
>>> hourly_arrivals()
0
>>> hourly_arrivals()
8
>>> hourly_arrivals()
0
>>> hourly_arrivals()
1
>>> hourly_arrivals()
4
>>> hourly_arrivals()
0
>>> hourly_arrivals()
2
You have also asked about 'any other arrival which truly represents such a scenario'. This is an empirical question. I would say, gather as many steady-state inter-arrival times as you can for the system you are studying and try to fit a cumulative distribution function to them. If you would like to discuss that, please post another question.
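As a rough sketch of that suggestion (the file name is hypothetical; the point is fitting an exponential to observed inter-arrival times and checking the fit):
import numpy as np
from scipy import stats

data = np.loadtxt("interarrival_times.txt")    # hypothetical file of observed inter-arrival times
loc, scale = stats.expon.fit(data, floc=0)     # fix the location at 0; scale is the mean inter-arrival time
print("Estimated rate: %.3f arrivals per unit time" % (1.0 / scale))

# A quick goodness-of-fit check against the fitted exponential
d_stat, p_value = stats.kstest(data, "expon", args=(loc, scale))
print("KS statistic %.3f, p-value %.3f" % (d_stat, p_value))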

Python random sample generator (comfortable with huge population sizes)

As you might know, random.sample(population, sample_size) quickly returns a random sample, but what if you don't know in advance the size of the sample? You end up sampling the entire population, or shuffling it, which is the same thing. But this can be wasteful (if most sample sizes turn out to be small compared to the population size) or even unfeasible (if the population size is huge and you run out of memory). Also, what if your code needs to jump from here to there before picking the next element of the sample?
P.S. I bumped into the need to optimize random sampling while working on simulated annealing for the TSP. In my code, sampling is restarted hundreds of thousands of times, and each time I don't know if I will need to pick 1 element or 100% of the elements of the population.
To start with, I would split the population into blocks. The function that does the block sampling can easily be a generator, able to process samples of arbitrary size.
Imagine an infinite population, a population block of 512 and a sample size of 8. This means you can gather as many samples as you need, and for further reduction you can sample the already-sampled space again (for 1024 blocks this means 8192 samples, from which you can sample again).
At the same time, this allows for parallel processing which may be feasible in case of very large samples.
Example considering in-memory population
import random
population = [random.randint(0, 1000) for i in range(0, 150000)]
def sample_block(population, block_size, sample_size):
block_number = 0
while 1:
try:
yield random.sample(population[block_number * block_size:(block_number + 1) * block_size], sample_size)
block_number += 1
except ValueError:
break
sampler = sample_block(population, 512, 8)
samples = []
try:
while 1:
samples.extend(sampler.next())
except StopIteration:
pass
print random.sample(samples, 200)
If the population were external to the script (file, block), the only modification is that you would have to load the appropriate chunk into memory. Proof of concept of how sampling an infinite population could look:
import random
import time
def population():
while 1:
yield random.randint(0, 10000)
def reduced_population(samples):
for sample in samples:
yield sample
def sample_block(generator, block_size, sample_size):
block_number = 0
block = []
while 1:
block.append(generator.next())
if len(block) == block_size:
s = random.sample(block, sample_size)
block_number += 1
block = []
print 'Sampled block {} with result {}.'.format(block_number, s)
yield s
samples = []
result = []
reducer = sample_block(population(), 512, 12)
try:
while 1:
samples.append(reducer.next())
if len(samples) == 1000:
sampler = sample_block(reduced_population(samples), 1000, 15)
result.append(list(sampler))
time.sleep(5)
except StopIteration:
pass
Ideally, you'd also gather the samples and again sample them.
That's what generators are for, I believe. Here is an example of Fisher-Yates-Knuth sampling via a generator/yield: you get events one by one and stop when you want to.
Code updated
import random
import numpy
import array
class populationFYK(object):
"""
Implementation of the Fisher-Yates-Knuth shuffle
"""
def __init__(self, population):
self._population = population # reference to the population
self._length = len(population) # lengths of the sequence
self._index = len(population)-1 # last unsampled index
self._popidx = array.array('i', range(0,self._length))
# array module vs numpy
#self._popidx = numpy.empty(self._length, dtype=numpy.int32)
#for k in range(0,self._length):
# self._popidx[k] = k
def swap(self, idx_a, idx_b):
"""
Swap two elements in population
"""
temp = self._popidx[idx_a]
self._popidx[idx_a] = self._popidx[idx_b]
self._popidx[idx_b] = temp
def sample(self):
"""
Yield one sampled case from population
"""
while self._index >= 0:
idx = random.randint(0, self._index) # index of the sampled event
if idx != self._index:
self.swap(idx, self._index)
sampled = self._population[self._popidx[self._index]] # yielding it
self._index -= 1 # one less to be sampled
yield sampled
def index(self):
return self._index
def restart(self):
self._index = self._length - 1
for k in range(0,self._length):
self._popidx[k] = k
if __name__=="__main__":
population = [1,3,6,8,9,3,2]
gen = populationFYK(population)
for k in gen.sample():
print(k)
You can get a sample of size K out of a population of size N by picking K non-repeating random numbers in the range [0, N) and treating them as indexes.
Option a)
You could generate such an index sample using the well-known sample method.
random.sample(xrange(N), K)
From the Python docs about random.sample:
To choose a sample from a range of integers, use an xrange() object as an argument. This is especially fast and space efficient for sampling from a large population
Option b)
If you don't like the fact that random.sample already returns a list instead of a lazy generator of non-repeating random numbers, you can go fancy with Format-Preserving Encryption to encrypt a counter.
This way you get a real generator of random indexes, and you can pick as many as you want and stop at any time, without getting any duplicates, which gives you dynamically sized sample sets.
The idea is to construct an encryption scheme to encrypt the numbers from 0 to N. Now, for each time you want to get a sample from your population, you pick a random key for your encryption and start to encrypt the numbers from 0, 1, 2, ... onwards (this is the counter). Since every good encryption creates a random-looking 1:1 mapping you end up with non-repeating random integers you can use as indexes.
The storage requirements during this lazy generation are just the initial key plus the current value of the counter.
The idea was already discussed in Generating non-repeating random numbers in Python. There even is a python snippet linked: formatpreservingencryption.py
A sample code using this snippet could be implemented like this:
def itersample(population):
# Get the size of the population
N = len(population)
# Get the number of bits needed to represent this number
bits = (N-1).bit_length()
# Generate some random key
key = ''.join(random.choice(string.ascii_letters + string.digits) for _ in range(32))
# Create a new crypto instance that encrypts binary blocks of width <bits>
# Thus, being able to encrypt all numbers up to the nearest power of two
crypter = FPEInteger(key=key, radix=2, width=bits)
# Count up
for i in xrange(1<<bits):
# Encrypt the current counter value
x = crypter.encrypt(i)
# If it is bigger than our population size, just skip it
# Since we generate numbers up to the nearest power of 2,
# we have to skip up to half of them, and on average up to one at a time
if x < N:
# Return the randomly chosen element
yield population[x]
I wrote (in Python 2.7.9) a random sampler generator (of indexes) whose speed depends only on the sample size (it should be O(ns log(ns)) where ns is the sample size). So it is fast when the sample size is small compared to the population size, because it does NOT depend at all on the population size. It doesn't build any population collection; it just picks random indexes and uses a kind of bisect method on the sampled indexes to avoid duplicates and keep them sorted. Given an iterable population, here's how to use the itersample generator:
import random
sampler=itersample(len(population))
next_pick=sampler.next() # pick the next random (index of) element
or
import random
sampler=itersample(len(population))
sample=[]
for index in sampler:
# do something with (index of) picked element
sample.append(index) # build a sample
if some_condition: # stop sampling when needed
break
If you need the actual elements and not just the indexes, just apply population iterable to the index when needed (population[sampler.next()] and population[index] respectively for first and second example).
The results of some tests show that speed does NOT depend on population size, so if you need to randomly pick only 10 elements from a population of 100 billions, you pay only for 10 (remember, we don't know in advance how many elements we'll pick, otherwise you'd better use random.sample).
Sampling 1000 from 1000000
Using itersample 0.0324 s
Sampling 1000 from 10000000
Using itersample 0.0304 s
Sampling 1000 from 100000000
Using itersample 0.0311 s
Sampling 1000 from 1000000000
Using itersample 0.0329 s
Other tests confirm that running time is slightly more than linear with sample size:
Sampling 100 from 1000000000
Using itersample 0.0018 s
Sampling 1000 from 1000000000
Using itersample 0.0294 s
Sampling 10000 from 1000000000
Using itersample 0.4438 s
Sampling 100000 from 1000000000
Using itersample 8.8739 s
Finally, here is the generator function itersample:
import random
def itersample(c): # c: population size
sampled=[]
def fsb(a,b): # free spaces before middle of interval a,b
fsb.idx=a+(b+1-a)/2
fsb.last=sampled[fsb.idx]-fsb.idx if len(sampled)>0 else 0
return fsb.last
while len(sampled)<c:
sample_index=random.randrange(c-len(sampled))
a,b=0,len(sampled)-1
if fsb(a,a)>sample_index:
yielding=sample_index
sampled.insert(0,yielding)
yield yielding
elif fsb(b,b)<sample_index+1:
yielding=len(sampled)+sample_index
sampled.insert(len(sampled),yielding)
yield yielding
else: # sample_index falls inside sampled list
while a+1<b:
if fsb(a,b)<sample_index+1:
a=fsb.idx
else:
b=fsb.idx
yielding=a+1+sample_index
sampled.insert(a+1,yielding)
yield yielding
Here is another idea. For a huge population we would like to keep some information about the selected records. In your case you keep one integer index per selected record (a 32-bit or 64-bit integer), plus some code to do a reasonable search for selected/not selected. With a large number of selected records this record keeping can be prohibitive. What I would propose is to use a Bloom filter for the set of selected indices. False positive matches are possible, but false negatives are not, so there is no risk of getting duplicated records. It does introduce a slight bias: false-positive records will be excluded from sampling. But the memory efficiency is good; fewer than 10 bits per element are required for a 1% false-positive probability. So if you select 5% of the population and have 1% false positives, you miss 0.0005 of your population, which depending on requirements might be OK. If you want a lower false-positive rate, use more bits. The memory efficiency would be a lot better, though I expect there is more code to execute per sampled record.
Sorry, no code
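For illustration only (the answer above deliberately gives no code), here is a minimal Bloom-filter sketch of the idea; the bit-array size and hash count are arbitrary choices, and a real implementation would size them from the expected number of selections and the target false-positive rate:
import hashlib
import random

class BloomFilter(object):
    """Tracks 'already selected' indices approximately: no false negatives,
    a small false-positive rate depending on size and number of hashes."""

    def __init__(self, num_bits, num_hashes):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8 + 1)

    def _positions(self, item):
        for seed in range(self.num_hashes):
            digest = hashlib.md5(("%d:%d" % (seed, item)).encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

def sample_indices(population_size, bloom):
    """Lazily yield random indices, skipping ones the filter says were already used."""
    while True:
        idx = random.randrange(population_size)
        if idx not in bloom:          # a false positive just means an index gets skipped
            bloom.add(idx)
            yield idx

bloom = BloomFilter(num_bits=10 * 1000 * 1000, num_hashes=7)   # roughly 10 bits per expected element
sampler = sample_indices(10**9, bloom)
print([next(sampler) for _ in range(10)])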

python: improve performance and/or method to avoid memory error creating, saving and deleting variable variables

I have been fighting against a function giving me a memory error and thanks to your support (Python: how to split and return a list from a function to avoid memory error) I managed to sort the issue; however, since I am not a pro-programmer I would like to ask for your opinion on my method and how to improve its performance (if possible).
The function is a generator function returning all cycles from an n-nodes digraph. However, for a 12 nodes digraph, there are about 115 million cycles (each defined as a list of nodes, e.g. [0,1,2,0] is a cycle). I need all cycles available for further processing even after I have extracted some of their properties when they were first generated, so they need to be stored somewhere. So, the idea is to cut the result array every 10 million cycles to avoid memory error (when an array is too big, python runs out of RAM) and create a new array to store the following results. In the 12 node digraph, I would then have 12 result arrays, 11 full ones (containing 10 million cycles each) and the last containing 5 million cycles.
However, splitting the result array is not enough since the variables stay in RAM. So, I still need to write each one to the disk and delete it afterwards to clear the RAM.
As stated in How do I create a variable number of variables?, using 'exec' to create variable variable names is not very "clean" and dictionary solutions are better. However, in my case, if I store the results in a single dictionary, it will run out of memory due to the size of the arrays. Hence, I went for the 'exec' way. I would be grateful if you could comment on that decision.
Also, to store the arrays I use numpy.savez_compressed which gives me a 43 Mb file for each 10million cycles array. If it is not compressed it creates a 500 Mb file. However, using the compressed version slows the writing process. Any idea how to speed the writing and/or compressing process?
A simplified version of the code I wrote is as follows:
nbr_result_arrays=0
result_array_0=[]
result_length = 10000000
tmp=result_array_0 # I use tmp to avoid using exec within the for loop (exec slows down code execution)
for cycle in generator:
tmp.append(cycle)
    if len(tmp) == result_length:
exec 'np.savez_compressed(\'results_' +str(nbr_result_arrays)+ '\', tmp)'
exec 'del result_array_'+str(nbr_result_arrays)
nbr_result_arrays+=1
exec 'result_array_'+str(nbr_result_arrays)+'=[]'
exec 'tmp=result_array_'+str(nbr_result_arrays)
Thanks for reading,
Aleix
How about using itertools.islice?
import itertools
import numpy as np
for i in itertools.count():
tmp = list(itertools.islice(generator, 10000000))
if not tmp:
break
np.savez_compressed('results_{}'.format(i), tmp)
del tmp
Thanks to all for your suggestions.
As suggested by #Aya, I believe that to improve performance (and avoid possible space issues) I should avoid storing the results on the HD, because storing them takes about half as long as creating them, so loading and processing them again would take nearly as long as creating the results again. Additionally, if I do not store any results, I save space, which can become a big issue for bigger digraphs (a 12-node complete digraph has about 115 million cycles but a 29-node one has about 848E27 cycles... and increasing at a factorial rate).
The idea is that I first need to go through all cycles passing through the weakest arc to find the total probability of all cycles going through it. Then, with this total probability, I must go through all those cycles again to subtract them from the original array according to the weighted probability (I needed the total probability to be able to calculate the weighted probability: weighted_prob = prob_of_this_cycle/total_prob_through_this_edge).
Thus, I believe that this is the best approach to do that (but I am open to more discussion! :) ).
However, I have doubts regarding the processing speed of two sub-functions:
1st: finding whether a sequence contains a specific (smaller) sequence. I am doing that with the function "contains_sequence", which relies on the generator function "window" (as suggested in Is there a Python builtin for determining if an iterable contained a certain sequence?). However, I have been told that doing it with a deque would be up to 33% faster. Any other ideas? (See the sketch just after this list.)
2nd: I am currently finding the probability of a cycle by sliding through the cycle nodes (represented as a list) to find the probability at each arc of staying within the cycle, and then multiplying them all together to get the cycle probability (the function name is find_cycle_probability). Any performance suggestions on this function would be appreciated, since I need to run it for each cycle, i.e. countless times.
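Regarding the first point, here is a deque-based variant of contains_sequence (a sketch of the suggestion above, not benchmarked here); deque(maxlen=n) drops the oldest element automatically, so the window never has to be rebuilt as a new list:
from collections import deque
from itertools import islice

def contains_sequence_deque(all_values, seq):
    """Return True if the iterable all_values contains the contiguous subsequence seq."""
    seq = list(seq)
    it = iter(all_values)
    window = deque(islice(it, len(seq)), maxlen=len(seq))
    if list(window) == seq:
        return True
    for elem in it:
        window.append(elem)        # the oldest element falls out automatically
        if list(window) == seq:
            return True
    return False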
Any other tips/suggestion/comments will be most welcome! And thanks again for your help.
Aleix
Below follows the simplified code:
import itertools
import sys
import numpy
import networkx
import new_cycles  # the question's own module providing simple_cycles_generator

def simple_cycles_generator_w_filters(working_array_digraph, arc):
'''Generator function generating all cycles containing a specific arc.'''
generator=new_cycles.simple_cycles_generator(working_array_digraph)
for cycle in generator:
if contains_sequence(cycle, arc):
yield cycle
return
def find_smallest_arc_with_cycle(working_array,working_array_digraph):
'''Find the smallest arc through which at least one cycle flows.
Returns:
- if such arc exist:
smallest_arc_with_cycle = [a,b] where a is the start of arc and b the end
smallest_arc_with_cycle_value = x where x is the weight of the arc
- if such arc does not exist:
smallest_arc_with_cycle = []
smallest_arc_with_cycle_value = 0 '''
smallest_arc_with_cycle = []
smallest_arc_with_cycle_value = 0
sparse_array = []
for i in range(numpy.shape(working_array)[0]):
for j in range(numpy.shape(working_array)[1]):
if working_array[i][j] !=0:
sparse_array.append([i,j,working_array[i][j]])
sorted_array=sorted(sparse_array, key=lambda x: x[2])
for i in range(len(sorted_array)):
smallest_arc=[sorted_array[i][0],sorted_array[i][1]]
generator=simple_cycles_generator_w_filters(working_array_digraph,smallest_arc)
if any(generator):
smallest_arc_with_cycle=smallest_arc
smallest_arc_with_cycle_value=sorted_array[i][2]
break
return smallest_arc_with_cycle,smallest_arc_with_cycle_value
def window(seq, n=2):
"""Returns a sliding window (of width n) over data from the iterable
s -> (s0,s1,...s[n-1]), (s1,s2,...,sn), ... """
it = iter(seq)
result = list(itertools.islice(it, n))
if len(result) == n:
yield result
for elem in it:
result = result[1:] + [elem]
yield result
def contains_sequence(all_values, seq):
return any(seq == current_seq for current_seq in window(all_values, len(seq)))
def find_cycle_probability(cycle, working_array, total_outputs):
'''Finds the cycle probability of a given cycle within a given array'''
output_prob_of_each_arc=[]
for i in range(len(cycle)-1):
weight_of_the_arc=working_array[cycle[i]][cycle[i+1]]
output_probability_of_the_arc=float(weight_of_the_arc)/float(total_outputs[cycle[i]])#NOTE:total_outputs is an array, thus the float
output_prob_of_each_arc.append(output_probability_of_the_arc)
circuit_probabilities_of_the_cycle=numpy.prod(output_prob_of_each_arc)
return circuit_probabilities_of_the_cycle
def clean_negligible_values(working_array):
''' Cleans the array by rounding negligible values to 0 according to a
pre-defined threeshold.'''
zero_threeshold=0.000001
for i in range(numpy.shape(working_array)[0]):
for j in range(numpy.shape(working_array)[1]):
if working_array[i][j] == 0:
continue
elif 0 < working_array[i][j] < zero_threeshold:
working_array[i][j] = 0
elif -zero_threeshold <= working_array[i][j] < 0:
working_array[i][j] = 0
elif working_array[i][j] < -zero_threeshold:
sys.exit('Error')
return working_array
original_array= 1000 * numpy.random.random_sample((5, 5))
total_outputs=numpy.sum(original_array,axis=0) + 100 * numpy.random.random_sample(5)
working_array=original_array.__copy__()
straight_array= working_array.__copy__()
cycle_array=numpy.zeros(numpy.shape(working_array))
iteration_counter=0
working_array_digraph=networkx.DiGraph(working_array)
[smallest_arc_with_cycle, smallest_arc_with_cycle_value]= find_smallest_arc_with_cycle(working_array, working_array_digraph)
while smallest_arc_with_cycle: # using implicit true value of a non-empty list
cycle_flows_to_be_subtracted = numpy.zeros(numpy.shape((working_array)))
# FIRST run of the generator to calculate each cycle probability
# note: the cycle generator ONLY provides all cycles going through
# the specified weakest arc
generator = simple_cycles_generator_w_filters(working_array_digraph, smallest_arc_with_cycle)
nexus_total_probs = 0
for cycle in generator:
cycle_prob = find_cycle_probability(cycle, working_array, total_outputs)
nexus_total_probs += cycle_prob
# SECOND run of the generator
# using the nexus_prob_sum calculated before, I can allocate the weight of the
# weakest arc to each cycle going through it
generator = simple_cycles_generator_w_filters(working_array_digraph,smallest_arc_with_cycle)
for cycle in generator:
cycle_prob = find_cycle_probability(cycle, working_array, total_outputs)
allocated_cycle_weight = cycle_prob / nexus_total_probs * smallest_arc_with_cycle_value
# create the array to be substracted
for i in range(len(cycle)-1):
cycle_flows_to_be_subtracted[cycle[i]][cycle[i+1]] += allocated_cycle_weight
working_array = working_array - cycle_flows_to_be_subtracted
clean_negligible_values(working_array)
cycle_array = cycle_array + cycle_flows_to_be_subtracted
straight_array = straight_array - cycle_flows_to_be_subtracted
clean_negligible_values(straight_array)
# find the next weakest arc with cycles.
working_array_digraph=networkx.DiGraph(working_array)
[smallest_arc_with_cycle, smallest_arc_with_cycle_value] = find_smallest_arc_with_cycle(working_array,working_array_digraph)

How to approach a number guessing game (with a twist) algorithm?

Update (July 2020): This question is 9 years old but still one that I'm deeply interested in. In the time since, machine learning (RNNs, CNNs, GANs, etc.), new techniques and cheap GPUs have arisen that enable new approaches. I thought it would be fun to revisit this question to see if there are new approaches.
I am learning programming (Python and algorithms) and was trying to work on a project that I find interesting. I have created a few basic Python scripts, but I’m not sure how to approach a solution to a game I am trying to build.
Here’s how the game will work:
Users will be given items with a value. For example,
Apple = 1
Pears = 2
Oranges = 3
They will then get a chance to choose any combo of them they like (e.g. 100 apples, 20 pears, and one orange). The only output the computer gets is the total value (in this example, it's currently $143). The computer will try to guess what they have, which obviously it won't be able to do correctly on the first turn.
         Value   Quantity (day 1)   Value (day 1)
Apple      1           100              100
Pears      2            20               40
Orange     3             1                3
Total                  121              143
The next turn the user can modify their numbers, but by no more than 5% of the total quantity (or some other percentage we may choose; I'll use 5% for the example). The prices of the fruit can change (at random), so the total value may change based on that as well (for simplicity I am not changing fruit prices in this example). Using the above example, on day 2 of the game the user returns a value of $152, and $164 on day 3. Here's an example:
Quantity (day 2)   %change (day 2)   Value (day 2)   Quantity (day 3)   %change (day 3)   Value (day 3)
      104                                104               106                                106
       21                                 42                23                                 46
        2                                  6                 4                                 12
      127              4.96%             152               133              4.72%             164
*(I hope the tables show up right, I had to manually space them so hopefully it's not just doing it on my screen, if it doesn't work let me know and I'll try to upload a screenshot.)
I am trying to see if I can figure out what the quantities are over time (assuming the user will have the patience to keep entering numbers). I know that right now my only restriction is that the total cannot change by more than 5%, so I cannot be within 5% accuracy right away, and the user will be entering numbers forever.
What I have done so far
Here's my solution so far (not much). Basically, I take all the values and figure out all the possible combinations of them (I have done this part). Then I take all the possible combos and put them in a database as a dictionary (so for example for $143, there could be a dictionary entry {Apple: 143, Pears: 0, Oranges: 0} ... all the way to {Apple: 0, Pears: 1, Oranges: 47}). I do this each time I get a new number, so I have a list of all possibilities.
Here's where I'm stuck. Using the rules above, how can I figure out the best possible solution? I think I'll need a fitness function that automatically compares the two days' data and removes any possibilities that have more than a 5% variance from the previous day's data.
Questions:
So my question is: with the user changing the total, and me having a list of all the possibilities, how should I approach this? What do I need to learn? Are there any algorithms or theories out there that I can use that are applicable? Or, to help me understand my mistake, can you suggest what rules I can add to make this goal feasible (if it's not in its current state; I was thinking of adding more fruits and saying they must pick at least 3, etc.)? Also, I only have a vague understanding of genetic algorithms, but I thought I could use them here. Is there something I can use?
I'm very very eager to learn so any advice or tips would be greatly appreciated (just please don't tell me this game is impossible).
UPDATE: I'm getting feedback that this is hard to solve. So I thought I'd add another condition to the game that won't interfere with what the player is doing (the game stays the same for them), but every day the value of the fruits changes price (randomly). Would that make it easier to solve? Because within a 5% movement and certain fruit value changes, only a few combinations are probable over time.
On day 1, anything is possible and getting a close enough range is almost impossible, but as the prices of the fruits change and the user can only make a 5% change, then shouldn't the range (over time) get narrower and narrower? In the above example, if prices are volatile enough, I think I could brute-force a solution that gave me a range to guess in, but I'm trying to figure out if there's a more elegant solution or other solutions to keep narrowing this range over time.
UPDATE 2: After reading and asking around, I believe this is a hidden Markov/Viterbi problem that tracks the changes in fruit prices as well as the total sum (weighting the last data point the heaviest). I'm not sure how to apply the relationship though. I think this is the case and could be wrong, but at the least I'm starting to suspect this is some type of machine learning problem.
UPDATE 3: I have created a test case (with smaller numbers) and a generator to help automate the user-generated data, and I am trying to create a graph from it to see what's more likely.
Here's the code, along with the total values and comments on what the user's actual fruit quantities are.
#!/usr/bin/env python
import itertools
# Fruit price data
fruitPriceDay1 = {'Apple':1, 'Pears':2, 'Oranges':3}
fruitPriceDay2 = {'Apple':2, 'Pears':3, 'Oranges':4}
fruitPriceDay3 = {'Apple':2, 'Pears':4, 'Oranges':5}
# Generate possibilities for testing (warning...will not scale with large numbers)
def possibilityGenerator(target_sum, apple, pears, oranges):
allDayPossible = {}
counter = 1
apple_range = range(0, target_sum + 1, apple)
pears_range = range(0, target_sum + 1, pears)
oranges_range = range(0, target_sum + 1, oranges)
for i, j, k in itertools.product(apple_range, pears_range, oranges_range):
if i + j + k == target_sum:
currentPossible = {}
#print counter
#print 'Apple', ':', i/apple, ',', 'Pears', ':', j/pears, ',', 'Oranges', ':', k/oranges
currentPossible['apple'] = i/apple
currentPossible['pears'] = j/pears
currentPossible['oranges'] = k/oranges
#print currentPossible
allDayPossible[counter] = currentPossible
counter = counter +1
return allDayPossible
# Total sum being returned by user for value of fruits
totalSumDay1=26 # Computer does not know this but users quantities are apple: 20, pears 3, oranges 0 at the current prices of the day
totalSumDay2=51 # Computer does not know this but users quantities are apple: 21, pears 3, oranges 0 at the current prices of the day
totalSumDay3=61 # Computer does not know this but users quantities are apple: 20, pears 4, oranges 1 at the current prices of the day
graph = {}
graph['day1'] = possibilityGenerator(totalSumDay1, fruitPriceDay1['Apple'], fruitPriceDay1['Pears'], fruitPriceDay1['Oranges'] )
graph['day2'] = possibilityGenerator(totalSumDay2, fruitPriceDay2['Apple'], fruitPriceDay2['Pears'], fruitPriceDay2['Oranges'] )
graph['day3'] = possibilityGenerator(totalSumDay3, fruitPriceDay3['Apple'], fruitPriceDay3['Pears'], fruitPriceDay3['Oranges'] )
# Sample of dict = 1 : {'oranges': 0, 'apple': 0, 'pears': 0}..70 : {'oranges': 8, 'apple': 26, 'pears': 13}
print graph
We'll combine graph-theory and probability:
On the 1st day, build a set of all feasible solutions. Let's denote the solution set as A1 = {a1(1), a1(2), ..., a1(n)}.
On the second day you can again build the solution set A2.
Now, for each element in A2, you'll need to check if it can be reached from some element of A1 (given the x% tolerance). If so, connect A2(n) to A1(m). If it can't be reached from any node in A1, you can delete this node.
Basically we are building a connected directed acyclic graph.
All paths in the graph are equally likely. You can find an exact solution only when there is a single edge from Am to Am+1 (from a node in Am to a node in Am+1).
Sure, some nodes appear in more paths than other nodes. The probability of each node can be deduced directly from the number of paths that contain it.
By assigning to each node a weight that equals the number of paths leading to it, there is no need to keep all the history, only the previous day.
Also, have a look at non-negative-value linear Diophantine equations - a question I asked a while ago. The accepted answer is a great way to enumerate all combos in each step.
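As a rough sketch of the reachability check used to delete nodes (my own illustration, using a 5% tolerance on the total quantity as in the question; the dict-of-quantities format matches the possibilityGenerator output above):
def prune(prev_solutions, new_solutions, tolerance=0.05):
    """Keep only the day-(i+1) solutions reachable from at least one day-i solution."""
    kept = []
    for candidate in new_solutions:
        new_total = sum(candidate.values())
        for previous in prev_solutions:
            old_total = sum(previous.values())
            if abs(new_total - old_total) <= tolerance * old_total:
                kept.append(candidate)
                break
    return kept

# e.g. with the question's generator output:
# day2_feasible = prune(graph['day1'].values(), graph['day2'].values())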
Disclaimer: I changed my answer dramatically after temporarily deleting my answer and re-reading the question carefully as I misread some critical parts of the question. While still referencing similar topics and algorithms, the answer was greatly improved after I attempted to solve some of the problem in C# myself.
Hollywood version
The problem is a Dynamic constraint satisfaction problem (DCSP), a variation on Constraint satisfaction problems (CSP.)
Use Monte Carlo to find potential solutions for a given day if the value and quantity ranges are not tiny. Otherwise, use brute force to find every potential solution.
Use Constraint Recording (related to DCSP), applied in cascade to previous days to restrict the potential solution set.
Cross your fingers, aim and shoot (Guess), based on probability.
(Optional) Bruce Willis wins.
Original version
First, I would like to state what I see as the two main problems here:
The sheer number of possible solutions. Knowing only the number of items and the total value, let's say 3 and 143 for example, will yield a lot of possible solutions. Plus, it is not easy to have an algorithm pick valid solutions without inevitably trying invalid ones (total not equal to 143).
When possible solutions are found for a given day Di, one must find a way to eliminate potential solutions with the added information given by { Di+1 .. Di+n }.
Let's lay down some bases for the upcoming examples:
Let's keep the same item values for the whole game. They can be either random or chosen by the user.
The possible item values are bound to the very limited range of [1-10], where no two items can have the same value.
No item can have a quantity greater than 100. That means: [0-100].
In order to solve this more easily I took the liberty to change one constraint, which makes the algorithm converge faster:
The "total quantity" rule is overridden by this rule: You can add or remove any number of items within the [1-10] range, total, in one day. However, you cannot add or remove the same number of items, total, more than twice. This also gives the game a maximum lifecycle of 20 days.
This rule enables us to rule out solutions more easily. And, with non-tiny ranges, renders Backtracking algorithms still useless, just like your original problem and rules.
In my humble opinion, this rule is not the essence of the game but only a facilitator, enabling the computer to solve the problem.
Problem 1: Finding potential solutions
For starters, problem 1. can be solved using a Monte Carlo algorithm to find a set of potential solutions. The technique is simple: Generate random numbers for item values and quantities (within their respective accepted range). Repeat the process for the required number of items. Verify whether or not the solution is acceptable. That means verifying if items have distinct values and the total is equal to our target total (say, 143.)
While this technique has the advantage of being easy to implement it has some drawbacks:
The user's solution is not guaranteed to appear in our results.
There are a lot of "misses". For instance, it takes roughly 3,000,000 tries to find 1,000 potential solutions given our constraints.
It takes a lot of time: around 4 to 5 seconds on my lazy laptop.
How do you get around these drawbacks? Well...
Limit the range to smaller values and
Find an adequate number of potential solutions so there is a good chance the user's solution appears in your solution set.
Use heuristics to find solutions more easily (more on that later.)
Note that the more you restrict the ranges, the less useful the Monte Carlo algorithm becomes, since there will be few enough valid solutions to iterate over them all in reasonable time. For the constraints { 3, [1-10], [0-100] } there are around 741,000,000 valid solutions (not constrained to a target total value). Monte Carlo is usable there. For { 3, [1-5], [0-10] }, there are only around 80,000. No need to use Monte Carlo; brute-force for loops will do just fine.
I believe the problem 1 is what you would call a Constraint satisfaction problem (or CSP.)
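A small Monte Carlo sketch of that search for problem 1 (my own illustration, using the { 3 items, distinct values in [1-10], quantities in [0-100] } constraints described above):
import random

def monte_carlo_solutions(total, num_items=3, max_value=10, max_qty=100,
                          wanted=1000, max_tries=3000000):
    """Randomly guess (values, quantities) pairs and keep the ones hitting the target total."""
    solutions = []
    for _ in range(max_tries):
        values = random.sample(range(1, max_value + 1), num_items)         # distinct item values
        quantities = [random.randint(0, max_qty) for _ in range(num_items)]
        if sum(v * q for v, q in zip(values, quantities)) == total:
            solutions.append((values, quantities))
            if len(solutions) >= wanted:
                break
    return solutions

candidates = monte_carlo_solutions(143)
print(len(candidates), candidates[:3])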
Problem 2: Restrict the set of potential solutions
Given the fact that problem 1 is a CSP, I would go ahead and call problem 2, and the problem in general, a Dynamic CSP (or DCSP.)
[DCSPs] are useful when the original formulation of a
problem is altered in some way, typically because the set of
constraints to consider evolves because of the environment. DCSPs
are viewed as a sequence of static CSPs, each one a transformation of
the previous one in which variables and constraints can be added
(restriction) or removed (relaxation).
One technique used with CSPs that might be useful to this problem is called Constraint Recording:
With each change in the environment (user entered values for Di+1), find information about the new constraint: What are the possibly "used" quantities for the add-remove constraint.
Apply the constraint to every preceding day in cascade. Rippling effects might significantly reduce possible solutions.
For this to work, you need to get a new set of possible solutions every day; Use either brute force or Monte Carlo. Then, compare solutions of Di to Di-1 and keep only solutions that can succeed to previous days' solutions without violating constraints.
You will probably have to keep an history of what solutions lead to what other solutions (probably in a directed graph.) Constraint recording enables you to remember possible add-remove quantities and rejects solutions based on that.
There are a lot of other steps that could be taken to further improve your solution. Here are some ideas:
Record constraints for item-value combinations found in previous days' solutions. Reject other solutions immediately (as item values must not change). You could even find smaller solution sets for each existing solution using solution-specific constraints to reject invalid solutions earlier.
Generate some "mutant", full-history solutions each day in order to "repair" the case where the D1 solution set doesn't contain the user's solution. You could use a genetic algorithm to find a mutant population based on an existing solution set.
Use heuristics in order to find solutions easily (e.g. when a valid solution is found, try to find variations of this solution by shifting quantities around).
Use behavioral heuristics in order to predict some user actions (e.g. same quantity for every item, extreme patterns, etc.)
Keep making some computations while the user is entering new quantities.
Given all of this, try and figure out a ranking system based on occurrence of solutions and heuristics to determine a candidate solution.
This problem is impossible to solve.
Let's say that you know exactly by what ratio the number of items was increased, not just the maximum possible ratio.
A user has N fruits and you have D days of guessing.
Each day you get N new variables, so in total you have D*N variables.
For each day you can generate only two equations. One equation is the sum of n_item*price, and the other is based on the known ratio. In total you have at most 2*D equations, if they are all independent.
2*D < N*D for all N > 2
I wrote a program to play the game. Of course, I had to automate the human side, but I believe I did it all in such a way that I shouldn't invalidate my approach when played against a real human.
I approached this from a machine learning perspective and treated the problem as a hidden markov model where the total price was the observation. My solution is to use a particle filter. This solution is written in Python 2.7 using NumPy and SciPy.
I stated any assumptions I made either explicitly in the comments or implicitly in the code. I also set some additional constraints for the sake of getting the code to run in an automated fashion. It's not particularly optimized, as I tried to err on the side of comprehensibility rather than speed.
Each iteration outputs the current true quantities and the guess. I just pipe the output to a file so I can review it easily. An interesting extension would be to plot the output on a graph either 2D (for 2 fruits) or 3D (for 3 fruits). Then you would be able to see the particle filter hone in on the solution.
Update:
Edited the code to include updated parameters after tweaking. Included plotting calls using matplotlib (via pylab). Plotting works on Linux-Gnome, your mileage may vary. Defaulted NUM_FRUITS to 2 for plotting support. Just comment out all the pylab calls to remove plotting and be able to change NUM_FRUITS to anything.
Does a good job estimating the current fxn represented by UnknownQuantities X Prices = TotalPrice. In 2D (2 Fruits) this is a line, in 3D (3 Fruits) it'd be a plane. Seems to be too little data for the particle filter to reliably hone in on the correct quantities. Need a little more smarts on top of the particle filter to really bring together the historical information. You could try converting the particle filter to 2nd- or 3rd-order.
Update 2:
I've been playing around with my code, a lot. I tried a bunch of things and now present the final program that I'll be making (starting to burn out on this idea).
Changes:
The particles now use floating points rather than integers. Not sure if this had any meaningful effect, but it is a more general solution. Rounding to integers is done only when making a guess.
Plotting shows true quantities as green square and current guess as red square. Currently believed particles shown as blue dots (sized by how much we believe them). This makes it really easy to see how well the algorithm is working. (Plotting also tested and working on Win 7 64-bit).
Added parameters for turning off/on quantity changing and price changing. Of course, both 'off' is not interesting.
It does a pretty dang good job, but, as has been noted, it's a really tough problem, so getting the exact answer is hard. Turning off CHANGE_QUANTITIES produces the simplest case. You can get an appreciation for the difficulty of the problem by running with 2 fruits with CHANGE_QUANTITIES off. See how quickly it hones in on the correct answer, then see how much harder it gets as you increase the number of fruits.
You can also get a perspective on the difficulty by keeping CHANGE_QUANTITIES on, but adjusting the MAX_QUANTITY_CHANGE from very small values (.001) to "large" values (.05).
One situation where it struggles is when one dimension (one fruit quantity) gets close to zero. Because it uses an average of particles to guess, it will always skew away from a hard boundary like zero.
In general this makes a great particle filter tutorial.
from __future__ import division
import random
import numpy
import scipy.stats
import pylab
# Assume Guesser knows prices and total
# Guesser must determine the quantities
# All of pylab is just for graphing, comment out if undesired
# Graphing only graphs first 2 FRUITS (first 2 dimensions)
NUM_FRUITS = 3
MAX_QUANTITY_CHANGE = .01 # Maximum percentage change that total quantity of fruit can change per iteration
MAX_QUANTITY = 100 # Bound for the sake of instantiating variables
MIN_QUANTITY_TOTAL = 10 # Prevent degenerate conditions where quantities all hit 0
MAX_FRUIT_PRICE = 1000 # Bound for the sake of instantiating variables
NUM_PARTICLES = 5000
NEW_PARTICLES = 500 # Num new particles to introduce each iteration after guessing
NUM_ITERATIONS = 20 # Max iterations to run
CHANGE_QUANTITIES = True
CHANGE_PRICES = True
'''
Change individual fruit quantities for a random amount of time
Never exceed changing fruit quantity by more than MAX_QUANTITY_CHANGE
'''
def updateQuantities(quantities):
old_total = max(sum(quantities), MIN_QUANTITY_TOTAL)
new_total = old_total
max_change = int(old_total * MAX_QUANTITY_CHANGE)
while random.random() > .005: # Stop Randomly
change_index = random.randint(0, len(quantities)-1)
change_val = random.randint(-1*max_change,max_change)
if quantities[change_index] + change_val >= 0: # Prevent negative quantities
quantities[change_index] += change_val
new_total += change_val
if abs((new_total / old_total) - 1) > MAX_QUANTITY_CHANGE:
quantities[change_index] -= change_val # Reverse the change
def totalPrice(prices, quantities):
return sum(prices*quantities)
def sampleParticleSet(particles, fruit_prices, current_total, num_to_sample):
# Assign weight to each particle using observation (observation is current_total)
# Weight is the probability of that particle (guess) given the current observation
# Determined by looking up the distance from the hyperplane (line, plane, hyperplane) in a
# probability density fxn for a normal distribution centered at 0
variance = 2
distances_to_current_hyperplane = [abs(numpy.dot(particle, fruit_prices)-current_total)/numpy.linalg.norm(fruit_prices) for particle in particles]
weights = numpy.array([scipy.stats.norm.pdf(distances_to_current_hyperplane[p], 0, variance) for p in range(0,NUM_PARTICLES)])
weight_sum = sum(weights) # No need to normalize, as relative weights are fine, so just sample un-normalized
# Create new particle set weighted by weights
belief_particles = []
belief_weights = []
for p in range(0, num_to_sample):
sample = random.uniform(0, weight_sum)
# sum across weights until we exceed our sample, the weight we just summed is the index of the particle we'll use
p_sum = 0
p_i = -1
while p_sum < sample:
p_i += 1
p_sum += weights[p_i]
belief_particles.append(particles[p_i])
belief_weights.append(weights[p_i])
return belief_particles, numpy.array(belief_weights)
'''
Generates new particles around the equation of the current prices and total (better particle generation than uniformly random)
'''
def generateNewParticles(current_total, fruit_prices, num_to_generate):
new_particles = []
max_values = [int(current_total/fruit_prices[n]) for n in range(0,NUM_FRUITS)]
for p in range(0, num_to_generate):
new_particle = numpy.array([random.uniform(1,max_values[n]) for n in range(0,NUM_FRUITS)])
new_particle[-1] = (current_total - sum([new_particle[i]*fruit_prices[i] for i in range(0, NUM_FRUITS-1)])) / fruit_prices[-1]
new_particles.append(new_particle)
return new_particles
# Initialize our data structures:
# Represents users first round of quantity selection
fruit_prices = numpy.array([random.randint(1,MAX_FRUIT_PRICE) for n in range(0,NUM_FRUITS)])
fruit_quantities = numpy.array([random.randint(1,MAX_QUANTITY) for n in range(0,NUM_FRUITS)])
current_total = totalPrice(fruit_prices, fruit_quantities)
success = False
particles = generateNewParticles(current_total, fruit_prices, NUM_PARTICLES) #[numpy.array([random.randint(1,MAX_QUANTITY) for n in range(0,NUM_FRUITS)]) for p in range(0,NUM_PARTICLES)]
guess = numpy.average(particles, axis=0)
guess = numpy.array([int(round(guess[n])) for n in range(0,NUM_FRUITS)])
print "Truth:", str(fruit_quantities)
print "Guess:", str(guess)
pylab.ion()
pylab.draw()
pylab.scatter([p[0] for p in particles], [p[1] for p in particles])
pylab.scatter([fruit_quantities[0]], [fruit_quantities[1]], s=150, c='g', marker='s')
pylab.scatter([guess[0]], [guess[1]], s=150, c='r', marker='s')
pylab.xlim(0, MAX_QUANTITY)
pylab.ylim(0, MAX_QUANTITY)
pylab.draw()
if not (guess == fruit_quantities).all():
    for i in range(0,NUM_ITERATIONS):
        print "------------------------", i

        if CHANGE_PRICES:
            fruit_prices = numpy.array([random.randint(1,MAX_FRUIT_PRICE) for n in range(0,NUM_FRUITS)])
        if CHANGE_QUANTITIES:
            updateQuantities(fruit_quantities)
            map(updateQuantities, particles) # Particle Filter Prediction

        print "Truth:", str(fruit_quantities)
        current_total = totalPrice(fruit_prices, fruit_quantities)

        # Guesser's Turn - Particle Filter:
        # Prediction done above if CHANGE_QUANTITIES is True

        # Update
        belief_particles, belief_weights = sampleParticleSet(particles, fruit_prices, current_total, NUM_PARTICLES-NEW_PARTICLES)
        new_particles = generateNewParticles(current_total, fruit_prices, NEW_PARTICLES)

        # Make a guess:
        guess = numpy.average(belief_particles, axis=0, weights=belief_weights) # Could optimize here by removing outliers or try using median
        guess = numpy.array([int(round(guess[n])) for n in range(0,NUM_FRUITS)]) # convert to integers
        print "Guess:", str(guess)

        pylab.cla()
        #pylab.scatter([p[0] for p in new_particles], [p[1] for p in new_particles], c='y') # Plot new particles
        pylab.scatter([p[0] for p in belief_particles], [p[1] for p in belief_particles], s=belief_weights*50) # Plot current particles
        pylab.scatter([fruit_quantities[0]], [fruit_quantities[1]], s=150, c='g', marker='s') # Plot truth
        pylab.scatter([guess[0]], [guess[1]], s=150, c='r', marker='s') # Plot current guess
        pylab.xlim(0, MAX_QUANTITY)
        pylab.ylim(0, MAX_QUANTITY)
        pylab.draw()

        if (guess == fruit_quantities).all():
            success = True
            break

        # Attach new particles to existing particles for next run:
        belief_particles.extend(new_particles)
        particles = belief_particles
else:
    success = True

if success:
    print "Correct Quantities guessed"
else:
    print "Unable to get correct answer within", NUM_ITERATIONS, "iterations"
pylab.ioff()
pylab.show()
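As a side note on the resampling loop in sampleParticleSet above: the linear scan per draw works fine for a modest number of particles, but the same un-normalized scheme can be vectorized with a cumulative sum. A sketch of that idea (the helper name is mine, not part of the answer's code):
import numpy
def resampleVectorized(particles, weights, num_to_sample):
    # Same idea as the while-loop above: draw uniform numbers up to the total
    # (un-normalized) weight and locate each draw in the running sum of weights.
    cumulative = numpy.cumsum(weights)
    draws = numpy.random.uniform(0, cumulative[-1], size=num_to_sample)
    indices = numpy.searchsorted(cumulative, draws)
    return [particles[i] for i in indices], weights[indices]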
For your initial rules:
From my school years, I would say that if we set the 5% changes aside, each day gives us one equation in three unknowns (sorry, I don't know the maths vocabulary in English), and those unknowns are the same values as the previous day.
By day 3, you have three equations in three unknowns, and the solution should follow directly.
I suspect the 5% change each day can be ignored if the values of the three items are different enough, because, as you said, we will use approximations and round the numbers.
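A minimal sketch of that idea (made-up numbers; it assumes the quantities really do stay fixed over the three days and that the three daily price vectors are linearly independent):
import numpy as np
prices = np.array([[10, 7, 4],    # day 1 prices (hypothetical)
                   [9, 5, 6],     # day 2 prices
                   [8, 8, 3]])    # day 3 prices
quantities = np.array([3, 2, 5])           # the hidden basket
totals = prices @ quantities               # what the guesser observes: [64, 67, 55]
print(np.linalg.solve(prices, totals))     # recovers [3. 2. 5.]
With the 5% changes back in play this becomes approximate, which is where the rounding mentioned above comes in.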
For your adapted rules:
There are too many unknown, and changing, values in this case, so there is no direct solution that I know of. I would trust Lior on this; his approach looks fine, provided you have a limited range for prices and quantities.
I realized that my answer was getting quite lengthy, so I moved the code to the top (which is probably what most people are interested in). Below it there are two things:
an explanation why (deep) neural networks are not a good approach to this problem, and
an explanation why we can't uniquely determine the human's choices with the given information.
For those of you interested in either topic, please see below. For the rest of you, here is the code.
Code that finds all possible solutions
As I explain further down in the answer, your problem is under-determined. In the average case there are many possible solutions, and this number grows at least exponentially as the number of days increases. This is true for both the original and the extended problem. Nevertheless, we can (sort of) efficiently find all solutions (it's NP-hard, so don't expect too much).
Backtracking (from the 1960s, so not exactly modern) is the algorithm of choice here. In Python, we can write it as a recursive generator, which is actually quite elegant:
def backtrack(pos, daily_total, daily_item_value, allowed_change, iterator_bounds, history=None):
    if pos == len(daily_total):
        yield np.array(history)
        return

    it = [range(start, stop, step) for start, stop, step in iterator_bounds[pos][:-1]]
    for partial_basket in product(*it):
        if history is None:
            history = [partial_basket]
        else:
            history.append(partial_basket)

        # ensure we only check items that match the total basket value
        # for that day
        partial_value = np.sum(np.array(partial_basket) * daily_item_value[pos, :-1])
        if (daily_total[pos] - partial_value) % daily_item_value[pos, -1] != 0:
            history.pop()
            continue

        last_item = (daily_total[pos] - partial_value) // daily_item_value[pos, -1]
        if last_item < 0:
            history.pop()
            continue

        basket = np.array([*partial_basket] + [int(last_item)])
        basket_value = np.sum(basket * daily_item_value[pos])
        history[-1] = basket

        if len(history) > 1:
            # ensure that today's basket stays within yesterday's range
            previous_basket = history[-2]
            previous_basket_count = np.sum(previous_basket)
            current_basket_count = np.sum(basket)
            if (np.abs(current_basket_count - previous_basket_count) > allowed_change * previous_basket_count):
                history.pop()
                continue

        yield from backtrack(pos + 1, daily_total, daily_item_value, allowed_change, iterator_bounds, history)
        history.pop()
This approach essentially structures all possible candidates into a large tree and then performs depth-first search, pruning whenever a constraint is violated. Whenever a leaf node is encountered, we yield the result.
Tree search (in general) can be parallelized, but that is out of scope here; it would make the solution less readable without much additional insight. The same goes for reducing the constant overhead of the code, e.g., working the if ...: continue constraints into the iterator_bounds variable and doing fewer checks.
I put the full code example (including a simulator for the human side of the game) at the bottom of this answer.
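For completeness, here is a minimal driver for the generator above (made-up two-fruit, two-day data; it assumes the backtrack definition and its imports from the snippet above are in scope, and that iterator_bounds holds one (start, stop, step) triple per fruit and day):
import numpy as np
daily_total = np.array([21, 26])              # hypothetical observed totals
daily_item_value = np.array([[3, 5],          # hypothetical day 1 prices
                             [4, 6]])         # hypothetical day 2 prices
iterator_bounds = [[(0, int(total // price) + 1, 1) for price in prices]
                   for total, prices in zip(daily_total, daily_item_value)]
for solution in backtrack(0, daily_total, daily_item_value, 0.05, iterator_bounds):
    print(solution)  # one row per day; for these numbers only [[2 3], [2 3]] is consistent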
Modern Machine Learning for this problem
The question is 9 years old, but it is still one that I'm deeply interested in. In the time since, machine learning (RNNs, CNNs, GANs, etc.) and cheap GPUs have risen to prominence and enable new approaches. I thought it would be fun to revisit this question to see if there are new approaches.
I really like your enthusiasm for the world of deep neural networks; unfortunately they simply do not apply here for a few reasons:
(Exactness) If you need an exact solution, like for your game, NNs can't provide that.
(Integer Constraint) The currently dominant NN training methods are gradient-descent based, so the problem has to be differentiable, or you need to be able to reformulate it in such a way that it becomes differentiable; constraining yourself to integers kills GD methods in the cradle. You could try evolutionary algorithms to search for a parameterization. Such methods do exist, but they are currently a lot less established.
(Non-Convexity) In the typical formulation, training a NN is a local method, which means you will find exactly 1 (locally optimal) solution if your algorithm converges. In the average case, your game has many possible solutions for both the original and extended version. This not only means that, on average, you can't figure out the human's choice (basket), but also that you have no control over which of the many solutions the NN will find. Current NN success stories suffer the same fate, but they tend not to care, because they only want some solution instead of a specific one. Some okay-ish solution beats the hell out of no solution at all.
(Expert Domain Knowledge) For this game, you have a lot of domain knowledge that can be exploited to improve the optimization/learning. Taking full advantage of arbitrary domain knowledge in NNs is not trivial, and for this game building a custom ML model (not a neural network) would be easier and more efficient.
Why the game cannot be uniquely solved - Part 1
Let's consider a substitute problem first and lift the integer requirement, i.e., the basket (human choice of N fruits for a given day) can have fractional fruits (0.3 oranges).
The total value constraint np.dot(basket, daily_price) == total_value limits the possible solutions for the basket; it reduces the problem by one dimension. Freely pick amounts for N-1 fruits, and you can always find a value for the N-th fruit to satisfy the constraint. So while it seems that there are N choices to make for a day, there are actually only N-1 that we can make freely, and the last one will be fully determined by our previous choices. So for each day the game goes on, we need to estimate an additional N-1 choices/variables.
We might want to enforce that all the choices are greater than 0, but that only reduces the interval from which we can choose a number; any open interval of real numbers has infinitely many numbers in it, so we will never run out of options because of this. Still N-1 choices to make.
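To make this concrete, here is a tiny sketch (with made-up prices and a made-up total) showing that any N-1 fractional amounts can be completed to a valid basket:
import numpy as np
daily_price = np.array([10.0, 7.0, 4.0])    # hypothetical prices for N = 3 fruits
total_value = 44.0                          # hypothetical observed total
free_choices = np.array([1.7, 2.3])         # any N-1 amounts we like
last = (total_value - np.dot(free_choices, daily_price[:-1])) / daily_price[-1]
basket = np.append(free_choices, last)      # the N-th amount is forced by the constraint
assert np.isclose(np.dot(basket, daily_price), total_value)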
Between two days, the total basket volume np.sum(basket) changes by at most some_percent of the previous day, i.e. np.abs(np.sum(previous_basket) - np.sum(basket)) <= some_percent * np.sum(previous_basket). Some of the choices we could make on a given day would change the basket by more than some_percent of the previous day. To make sure we never violate this, we can freely make N-2 choices and then have to pick the (N-1)-th variable so that adding it and adding the N-th variable (which is fixed by our previous choices) stays within some_percent. (Note: This is an inequality constraint, so it will only reduce the number of choices if we have equality, i.e., the basket changes by exactly some_percent. In optimization theory this is known as the constraint being active.)
We can again think about the constraint that all choices should be greater than 0, but the argument remains that this simply changes the interval from which we can now freely choose N-2 variables.
So after D days we are left with N-1 choices to estimate from the first day (no change constraint) and (D-1)*(N-2) choices to estimate for each following day. Unfortunately, we have run out of constraints to further reduce this number, and the number of unknowns grows by at least N-2 each day. This is essentially what Luka Rahne meant by "2*D < N*D for all N > 2". We will likely find many candidates that are all equally probable.
The exact food prices each day don't matter for this. As long as they have some value, they will constrain one of the choices. Hence, if you extend your game in the way you specify, there is always a chance of infinitely many solutions, regardless of the number of days.
Why the game still cannot be uniquely solved - Part 2
There is one constraint we didn't look at which might help fix this: only allowing integer solutions for the choices. The problem with integer constraints is that they are very complex to deal with. However, our main concern here is whether adding this constraint allows us to uniquely solve the problem given enough days. For this, there is a rather intuitive counter-example. Suppose you have 3 consecutive days, and for the 1st and 3rd day the total value constraint only allows one basket. In other words, we know the basket for day 1 and day 3, but not for day 2. Here, we only know its total value, that it is within some_percent of day 1, and that day 3 is within some_percent of day 2. Is this enough information to always work out what is in the basket on day 2?
some_percent = 0.05
Day 1: basket: [3 2] prices: [10 7] total_value: 44
Day 2: basket: [x y] prices: [5 5] total_value: 25
Day 3: basket: [2 3] prices: [9 5] total_value: 33
Possible Solutions Day 2: [2 3], [3 2]
Above is one example where we know the baskets for two days thanks to the total value constraint, but that still won't allow us to work out the exact composition of the basket on day 2. Thus, while it may be possible to work it out in some cases, it is not possible in general. Adding more days after day 3 doesn't help figure out day 2 at all. It might help narrow the options for day 3 (which would then narrow the options for day 2), but we already have just 1 choice left for day 3, so it's no use.
Full Code
import numpy as np
from itertools import product
import tqdm


def sample_uniform(n, r):
    # check out: http://compneuro.uwaterloo.ca/files/publications/voelker.2017.pdf
    sample = np.random.rand(n + 2)
    sample_norm = np.linalg.norm(sample)
    unit_sample = (sample / sample_norm)
    change = np.floor(r * unit_sample[:-2]).astype(np.int)
    return change


def human(num_fruits, allowed_change=0.05, current_distribution=None):
    allowed_change = 0.05
    if current_distribution is None:
        current_distribution = np.random.randint(1, 50, size=num_fruits)
    yield current_distribution.copy()

    # rejection sample a suitable change
    while True:
        current_total = np.sum(current_distribution)
        maximum_change = np.floor(allowed_change * current_total)

        change = sample_uniform(num_fruits, maximum_change)
        while np.sum(change) > maximum_change:
            change = sample_uniform(num_fruits, maximum_change)

        current_distribution += change
        yield current_distribution.copy()


def prices(num_fruits, alter_prices=False):
    current_prices = np.random.randint(1, 10, size=num_fruits)
    while True:
        yield current_prices.copy()
        if alter_prices:
            current_prices = np.random.randint(1, 10, size=num_fruits)


def play_game(num_days, num_fruits=3, alter_prices=False):
    human_choice = human(num_fruits)
    price_development = prices(num_fruits, alter_prices=alter_prices)

    history = {
        "basket": list(),
        "prices": list(),
        "total": list()
    }
    for day in range(num_days):
        choice = next(human_choice)
        price = next(price_development)
        total_price = np.sum(choice * price)

        history["basket"].append(choice)
        history["prices"].append(price)
        history["total"].append(total_price)

    return history


def backtrack(pos, daily_total, daily_item_value, allowed_change, iterator_bounds, history=None):
    if pos == len(daily_total):
        yield np.array(history)
        return

    it = [range(start, stop, step) for start, stop, step in iterator_bounds[pos][:-1]]
    for partial_basket in product(*it):
        if history is None:
            history = [partial_basket]
        else:
            history.append(partial_basket)

        # ensure we only check items that match the total basket value
        # for that day
        partial_value = np.sum(np.array(partial_basket) * daily_item_value[pos, :-1])
        if (daily_total[pos] - partial_value) % daily_item_value[pos, -1] != 0:
            history.pop()
            continue

        last_item = (daily_total[pos] - partial_value) // daily_item_value[pos, -1]
        if last_item < 0:
            history.pop()
            continue

        basket = np.array([*partial_basket] + [int(last_item)])
        basket_value = np.sum(basket * daily_item_value[pos])
        history[-1] = basket

        if len(history) > 1:
            # ensure that today's basket stays within relative tolerance
            previous_basket = history[-2]
            previous_basket_count = np.sum(previous_basket)
            current_basket_count = np.sum(basket)
            if (np.abs(current_basket_count - previous_basket_count) > allowed_change * previous_basket_count):
                history.pop()
                continue

        yield from backtrack(pos + 1, daily_total, daily_item_value, allowed_change, iterator_bounds, history)
        history.pop()


if __name__ == "__main__":
    np.random.seed(1337)

    num_fruits = 3
    allowed_change = 0.05
    alter_prices = False
    history = play_game(15, num_fruits=num_fruits, alter_prices=alter_prices)

    total_price = np.stack(history["total"]).astype(np.int)
    daily_price = np.stack(history["prices"]).astype(np.int)
    basket = np.stack(history["basket"]).astype(np.int)

    maximum_fruits = np.floor(total_price[:, np.newaxis] / daily_price).astype(np.int)
    iterator_bounds = [[[0, maximum_fruits[pos, fruit], 1] for fruit in range(num_fruits)] for pos in range(len(basket))]
    # iterator_bounds = np.array(iterator_bounds)
    # import pdb; pdb.set_trace()

    pbar = tqdm.tqdm(backtrack(0, total_price,
                               daily_price, allowed_change, iterator_bounds), desc="Found Solutions")
    for solution in pbar:
        # test price guess
        calculated_price = np.sum(np.stack(solution) * daily_price, axis=1)
        assert np.all(calculated_price == total_price)

        # test basket change constraint
        change = np.sum(np.diff(solution, axis=0), axis=1)
        max_change = np.sum(solution[:-1, ...], axis=1) * allowed_change
        assert np.all(change <= max_change)

        # indicate that we found the original solution
        if not np.any(solution - basket):
            pbar.set_description("Found Solutions (includes original)")
When the player selects a combination that reduces the number of possibilities to 1, the computer wins. Otherwise, the player can keep picking combinations, with the total constrained to vary only within a certain percentage, such that the computer may never win.
import itertools
import numpy as np


def gen_possible_combination(total, prices):
    """
    Generates all possible combinations of numbers of items for
    given prices constraint by total
    """
    nitems = [range(total//p + 1) for p in prices]
    prices_arr = np.array(prices)
    combo = [x for x in itertools.product(
        *nitems) if np.dot(np.array(x), prices_arr) == total]

    return combo


def reduce(combo1, combo2, pct):
    """
    Filters impossible transitions which are greater than pct
    """
    combo = {}
    for x in combo1:
        for y in combo2:
            if abs(sum(x) - sum(y))/sum(x) <= pct:
                combo[y] = 1

    return list(combo.keys())


def gen_items(n, total):
    """
    Generates a list of items
    """
    nums = [0] * n
    t = 0
    i = 0
    while t < total:
        if i < n - 1:
            n1 = np.random.randint(0, total-t)
            nums[i] = n1
            t += n1
            i += 1
        else:
            nums[i] = total - t
            t = total

    return nums


def main():
    pct = 0.05
    i = 0
    done = False
    n = 3
    total_items = 26  # np.random.randint(26)
    combo = None

    while not done:
        prices = [np.random.randint(1, 10) for _ in range(n)]
        items = gen_items(n, total_items)
        total = np.dot(np.array(prices), np.array(items))

        combo1 = gen_possible_combination(total, prices)
        if combo:
            combo = reduce(combo, combo1, pct)
        else:
            combo = combo1
        i += 1
        print(i, 'Items:', items, 'Prices:', prices, 'Total:',
              total, 'No. Possibilities:', len(combo))

        if len(combo) == 1:
            print('Solution', combo)
            break

        if np.random.random() < 0.5:
            total_items = int(total_items * (1 + np.random.random()*pct))
        else:
            total_items = int(
                np.ceil(total_items * (1 - np.random.random()*pct)))


if __name__ == "__main__":
    main()
