So I'm working on a calculator for a game I play, just for fun. It takes various abilities, each with its own cooldown, usage time, the percentage threshold at which it may be used, etc.
So far I am doing this by enumerating numbers in base N, where N is the number of abilities. For example, assuming I have 5 abilities used over 4 seconds:
0000: 60 damage (using ability 0, then trying to use it again but failing - each failed use contributes 0 damage)
0001: 60 damage
Skip a few ...
0101: 200 damage
and again ...
4444: 70 damage.
The process then terminates. Hope that made sense.
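In code, the enumeration amounts to something like this sketch (simulate here is just a stub standing in for my real damage calculation):

import itertools

def simulate(seq):
    # stub: the real calculator applies cooldowns, failed-use rules, etc.
    return sum(seq)

n_abilities, n_slots = 5, 4
best = max(itertools.product(range(n_abilities), repeat=n_slots), key=simulate)
print(best, simulate(best))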
Problem is, this brute force works well for small times (like above) and a small number of abilities, but at much higher times and ability counts it ends up analyzing trillions of simulations, so brute force is no longer an option.
Question is, considering the data is mostly random, are there any heuristic algorithms that (although they may not return the optimal) will return a relatively good result?
Thanks for any responses :)
Let me rephrase to make sure I understand correctly: you want to find the best sequencing of skills, given their individual damage and cooldowns, such that only one skill is used at each time, and no skill is used more often than its cooldown allows. If so, it is a kind of scheduling problem, and one way to approach it would be through linear programming.
The rough idea is to introduce n_skills * simulation_length variables x[skill][time], each constrained between 0 and 1, with the interpretation "use skill s at time t if x[s][t] == 1, don't use it if x[s][t] == 0". You then maximize the sum of all variables weighted by the damage their skill does, sum(x[skill][:] * damage[skill] for skill in skills), under additional linear constraints (explained through numpy-like pseudocode):
for each time t: sum(x[:][t]) <= 1 (at each time you can use at most one ability)
for each ability a and each time t0: sum(x[a][t0:t0+cooldown(a)]) <= 1 (within any window the length of its cooldown, an ability can be used at most once)
Now the tricky part: while this gives you a solution that is optimal in some sense, it will most likely not be physical, that is, you'll get fractional xs. This is where the heuristic part kicks in: you have to find some way to "round" the solution to integers, losing objective value in the process, to make it physically (in game terms) meaningful. One way is to keep only the entries where x[a][t] == 1 and round all other values down to zero. That gives a meaningful solution, but it may not be very satisfying (i.e. your character will do almost nothing). Given that my model of the problem is quite simple, I would expect there are theoretical results on how to do a good rounding.
While I can suggest the scipy package for solving the linear program once it's formulated, the whole task of building the constraint matrix and rounding the results (even trivially) is not a beginner-level programming exercise.
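For illustration, here is a minimal sketch of that formulation using scipy.optimize.linprog; the damage values, cooldowns, and simulation length are made-up placeholders:

import numpy as np
from scipy.optimize import linprog

damage = [60, 100, 75, 70]      # hypothetical per-ability damage
cooldown = [3, 5, 4, 2]         # hypothetical cooldowns, in ticks
T = 10                          # simulation length in ticks
S = len(damage)

def idx(s, t):                  # flatten (skill, time) into a variable index
    return s * T + t

n = S * T
c = np.zeros(n)
for s in range(S):
    for t in range(T):
        c[idx(s, t)] = -damage[s]   # linprog minimizes, so negate damage

A, b = [], []
for t in range(T):              # at most one ability per tick
    row = np.zeros(n)
    for s in range(S):
        row[idx(s, t)] = 1
    A.append(row); b.append(1)
for s in range(S):              # each ability at most once per cooldown window
    for t0 in range(T - cooldown[s] + 1):
        row = np.zeros(n)
        row[idx(s, t0):idx(s, t0) + cooldown[s]] = 1
        A.append(row); b.append(1)

res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=(0, 1))
x = res.x.reshape(S, T)
print("relaxed objective:", -res.fun)
print((x > 1 - 1e-6).astype(int))   # trivially rounded schedule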
It is well known that pseudo-random numbers are not cryptographically secure.
An extremely basic way I can think of generating a pseudo-random number would be to get the timestamp at the moment the code runs and return its least significant digits.
For example, the outcome of import time; time.time_ns() / 100 % 1000 is a number between 0 and 1000 that should be almost impossible to predict unless you know exactly the time at which the code ran (with nanosecond precision) and all the overhead execution times of the code.
We could then use one or more numbers generated this way to run a chaotic function (such as a logistic map) to generate numbers that should be extremely hard to predict.
One extremely naïve implementation could be:
import time

def random():
    return time.time_ns() / 100 % 1000 / 1000

def logistic():
    r = 3.9 + random() / 10
    N = 1000 + int(random() * 100)
    x = random()
    for _ in range(N):
        x = r * x * (1 - x)
    return x

print(logistic())
However, I'm quite sure that no one would consider this to be cryptographically secure.
Why is this? How could one predict/attack such method?
EDIT:
My question is theoretical: I want to understand why building a true RNG is so difficult. I would never use an RNG I wrote myself in real code, even less if it had to be cryptographically secure. However, it would be nice to have a bit more detail on WHY this is so hard to achieve that hundreds or thousands of researchers have spent their lives working on this topic.
It is well known that pseudo-random numbers are not cryptographically secure.
Is it really? All cryptographic systems I know of do use pseudo-random generators. There are two major points for a cryptographically secure pseudo-random sequence:
the probability of any value should be the same (as much as possible) to keep entropy high. If on a 16 bit number the generation algorithm consistently sets 8 bits to zero, you only have an 8 bit generator...
knowledge of a number of consecutive values shall not allow one to predict the next one(s) - this is really the tricky part...
Relying blindly on the nanoseconds part of the time assumes that the internal clock of the system will not have preferred values for the low-order bits... and nothing ensures that!
Common systems only rely on the hardware randomness of the time to build a seed.
And when it comes to security and cryptography, the rule is do not roll your own unless you are an established specialist (and if you are, you already know that any new algorithm or implementation should be carefully reviewed by peers). Hell always hides in the details, and something that looks very clever at first sight can introduce a major flaw. Here, relying on the randomness of the system clock is not secure.
The fact is that building good algorithms and implementations is very hard. And getting others to trust them takes even more time. There is nothing wrong with experimenting with new ideas, and studying how they are validated is even more interesting - you would learn a lot. But my advice is to not use your brand-new algorithm for anything other than tests, and never for mission-critical operations.
For cryptography, it is not only desirable that individual numbers are hard to predict but also that multiple numbers are hard to predict – that is, numbers should (appear to) be independent.
Notably, they should be independent even if an attacker knows the algorithm.
That is problematic with time based "randomness", since by design the next time is after the previous time. Worse, there is a direct relation "how much" one number is after the other – namely how much time has elapsed since fetching the previous number.
For example, drawing numbers in any predictable manner such as a loop gives a significant correlation between numbers:
>>> import time
>>>
>>> def random():
... return time.time_ns()/100 % 1000
...
>>> # some example samples
>>> [random() for _ in range(5)]
[220.0, 250.0, 250.0, 260.0, 260.0]
>>> [random() for _ in range(5)]
[370.0, 390.0, 400.0, 410.0, 410.0]
>>> # how much spread there is in samples
>>> import statistics
>>> [statistics.stdev(random() for _ in range(5)) for _ in range(5)]
[16.431676725154983, 11.40175425099138, 11.40175425099138, 8.366600265340756, 11.40175425099138]
>>> [statistics.stdev(random() for _ in range(5)) for _ in range(5)]
[16.431676725154983, 11.40175425099138, 11.40175425099138, 11.40175425099138, 11.40175425099138]
Notably, the critical part is actually not that the spread is low but that it is predictable.
The other problem is that there really is not much that can be done about it: The "randomness" contained in time is inherently limited.
On one end, time randomness is constrained by resolution. As can be seen by the output, my system clock only has sufficient resolution for multiples of 10 – so it can only draw 100 distinct numbers.
On the other end, time randomness is constrained by the program duration. A program that, say, draws during 1 ms only can draw randomness from that duration – that's only 1 000 000 distinct nanoseconds.
Notably, no algorithm can increase that randomness – one can shift it or even it out across some range, but not create more randomness from it.
Now, in practice one could say that it is still somewhat difficult to predict the values drawn this way. The point however is that other means are more difficult to predict.
Consider that some attacker knew your algorithm and tried to brute-force a result:
With a true random generator for actual 3*[0, 1000) they must try 1000*1000*1000 numbers.
With the time based random generator for seemingly 3*[0, 1000) they must try roughly 100*5 numbers (low resolution, low spread).
That means time based randomness makes the approach 2_000_000 times less robust against a brute force attack.
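A quick back-of-the-envelope check of those numbers:

true_random_space = 1000 ** 3   # three independent draws from [0, 1000)
time_based_space = 100 * 5      # ~100 resolvable values, ~5 ticks of spread
print(true_random_space // time_based_space)   # 2000000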
After my failed attempt at asking this question a few months back (I wasn't detailed enough), I'm going to try again.
Basically I'm attempting to develop a calculator for a game (RuneScape) that will determine the best order to use abilities in. Each ability may have its own unique variables; some examples (these are actual abilities):
Order: ability name; average (ability) damage; ability cooldown (time before it may be used again); the adrenaline it may be used at (a bar from 0 to 100 - some abilities may be used at any level and add 8 to the bar, while others require 50 or even 100 to be used and will remove 15 from the bar or reset it to zero); the time it takes to use the ability; and finally any other special details.
Sever: 112.8 average damage, 15 second cooldown, used at any adrenaline giving +8, takes 1.8 seconds to execute.
Assault: 525.6 average damage, 30 second cooldown, used at 50 (or greater) adrenaline removing 15, takes 4.2 seconds to execute.
Barge: 75 average damage, 20.4 second cooldown, used at any adrenaline giving +8, takes 1.8 seconds to execute, binds the target for 6.6 seconds.
Slice: 70 average damage, 3 second cooldown, used at any adrenaline giving +8, takes 1.8 seconds to execute, deals extra damage (assume 1.5x for simplicity) if the target is bound.
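For concreteness, here is a simplified sketch of how these stats might be laid out as data (the field names are my own shorthand, not game terminology):

abilities = {
    "Sever":   {"damage": 112.8, "cooldown": 15.0, "adrenaline_req": 0,
                "adrenaline_delta": +8, "cast_time": 1.8},
    "Assault": {"damage": 525.6, "cooldown": 30.0, "adrenaline_req": 50,
                "adrenaline_delta": -15, "cast_time": 4.2},
    "Barge":   {"damage": 75.0, "cooldown": 20.4, "adrenaline_req": 0,
                "adrenaline_delta": +8, "cast_time": 1.8, "binds_for": 6.6},
    "Slice":   {"damage": 70.0, "cooldown": 3.0, "adrenaline_req": 0,
                "adrenaline_delta": +8, "cast_time": 1.8, "bound_multiplier": 1.5},
}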
Now, let's say I wanted only a 6 second rotation (for simplicity) and started adrenaline at 50; there would be multiple ways to execute the abilities, for example:
Assault, followed by a sever
Barge, followed by a slice, followed by a sever, followed by a small assault.
etc ...
In this case, an algorithm could easily find which method is best by brute force, storing the best solution. However, this is a simple example; if I wanted, say, 10 abilities over 18 seconds, brute force simply takes too long (unless anyone has a spare quantum computer lying around). Other than checking random scenarios, would anyone know of a heuristic solution that may not always give the best results, but gives close to the best?
Further, here is a graph showing one simulation I did (not very pretty)
Thanks to anyone who actually managed to read and understand my essay.
Don't forget that for different enemies the best sequence will differ. For example, if the enemy has medium defence, many HP, and medium attacks, you need maximum damage per time over a long period. If another enemy has small HP and a great attack, you should burst for maximum damage, to kill him before he hits you. Great defense with medium attacks needs very strong attacks from your side, collecting adrenaline for them, and defensive tactics in between.
So, according to the enemy, you should define the best one or two attacks, or pre-prepared combinations of actions (let's name them attacks, too). This part should be solved by some complex if-then-else/case combination. Around them you should plan the intermediate actions, by brute-force combinatorics. There is only one hard limit - you should not die first (of course, this could change in a more complicated situation). What gets optimized - your adrenaline, your HP at the end, or the duration of the battle - is up to you.
If the combinatorics takes too much time, arrange the optimization targets into several levels. Optimize only the highest level for one time interval, fix the result as the next starting point, re-rank the optimization targets, optimize for the next time interval, and so on (see the sketch below). This is the way we think in real life, and it works fast and not so badly.
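A rough sketch of that leveled idea in Python - brute-force only a short window at a time, commit the best sequence, then slide forward (simulate here is a hypothetical helper returning (damage, end_state) for a candidate sequence):

import itertools

def windowed_plan(abilities, simulate, state, horizon, window):
    plan, t = [], 0
    while t < horizon:
        best_seq, best_dmg, best_state = None, -1.0, None
        for seq in itertools.product(abilities, repeat=window):
            dmg, new_state = simulate(seq, state)
            if dmg > best_dmg:
                best_seq, best_dmg, best_state = seq, dmg, new_state
        plan.extend(best_seq)   # commit this window's best sequence
        state = best_state      # its end state becomes the next start point
        t += window
    return plan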
Each sample is an array of features (ints). I need to split my samples into two separate groups by figuring out the best feature, and the best splitting value for that feature. By "best", I mean the split that gives me the greatest entropy difference between the pre-split set and the weighted average of the entropy values on the left and right sides. I need to try all (2^m − 2)/2 possible ways to partition these items into two nonempty lists, where m is the number of distinct values (all samples with the same value for a feature move together as a group).
The following is extremely slow, so I need a more reasonable/faster way of doing this.
sorted_by_feature is a list of (feature_value, 0_or_1) tuples.
import itertools

same_vals = {}
for ele in sorted_by_feature:
    if ele[0] not in same_vals:
        same_vals[ele[0]] = [ele]
    else:
        same_vals[ele[0]].append(ele)

l = same_vals.keys()
orderings = list(itertools.permutations(l))
for ordering in orderings:
    list_tups = []
    for dic_key in ordering:
        list_tups += same_vals[dic_key]
    left_1 = 0
    left_0 = 0
    right_1 = num_one
    right_0 = num_zero
    for index, tup in enumerate(list_tups):
        # 0's or 1's on the left +/- 1
        # calculate entropy on left/right, calculate entropy drop, etc.
Trivial details (continuing inside the inner loop above):
        if index == len(sorted_by_feature) - 1:
            break
        if tup[1] == 1:
            left_1 += 1
            right_1 -= 1
        if tup[1] == 0:
            left_0 += 1
            right_0 -= 1
        # only calculate entropy if values to left and right of split are different
        if list_tups[index][0] != list_tups[index+1][0]:
tl;dr
You're asking for a miracle. No programming language can help you out of this one. Use a better approach than the one you're considering!
Your Solution has Exponential Time Complexity
Let's assume a perfect algorithm: one that can give you a new partition in constant O(1) time. In other words, no matter what the input, a new partition can be generated in a guaranteed constant amount of time.
Let's in fact go one step further and assume that your algorithm is only CPU-bound and is operating under ideal conditions. Under ideal circumstances, a high-end CPU can process upwards of 100 billion instructions per second. Since this algorithm takes O(1) time, we'll say, oh, that every new partition is generated in a hundred billionth of a second. So far so good?
Now you want this to perform well. You say you want it to handle an input of size m. You know that means you need about pow(2,m) iterations of your algorithm - that's the number of partitions you need to generate - and since generating each partition takes a finite amount of time O(1), the total time is just pow(2,m) times O(1). Let's take a quick look at the numbers here:
m = 20 means your time taken is pow(2,20)*10^-11 seconds = 0.00001 seconds. Not bad.
m = 40 means your time taken is pow(2,40)*10^-11 seconds ≈ 1 trillion / 100 billion = 10 seconds. Also not bad, but note how small m = 40 is. In the vast panoply of numbers, 40 is nothing. And remember we're assuming ideal conditions.
m = 100 means pow(2,100)*10^-11 ≈ 10^19 seconds - hundreds of billions of years! What happened?
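You can reproduce these figures directly:

for m in (20, 40, 100):
    print(m, 2**m * 1e-11, "seconds")   # iterations * time per iteration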
You're a victim of algorithmic theory. Simply put, a solution that has exponential time complexity - any solution that takes 2^m time to complete - cannot be sped up by better programming. Generating or producing pow(2,m) outputs is always going to take you the same proportion of time.
Note further that 100 billion instructions/sec is an ideal for high-end desktop computers - your CPU also has to worry about processes other than this program you're running, in which case kernel interrupts and context switches eat into processing time (especially when you're running a few thousand system processes, which you no doubt are). Your CPU also has to read and write from disk, which is I/O bound and takes a lot longer than you think. Interpreted languages like Python also eat into processing time since each line is dynamically converted to bytecode, forcing additional resources to be devoted to that. You can benchmark your code right now and I can pretty much guarantee your numbers will be way higher than the simplistic calculations I provide above. Even worse: storing 2^40 permutations requires 1000 GBs of memory. Do you have that much to spare? :)
Switching to a lower-level language, using generators, etc. is all a pointless affair: they're not the main bottleneck, which is simply the large and unreasonable time complexity of your brute force approach of generating all partitions.
What You Can Do Instead
Use a better algorithm. Generating pow(2,m) partitions and investigating all of them is an unrealistic ambition. You want, instead, to consider a dynamic programming approach. Instead of walking through the entire space of possible partitions, you want to only consider walking through a reduced space of optimal solutions only. That is what dynamic programming does for you. An example of it at work in a problem similar to this one: unique integer partitioning.
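As a toy sketch of the contrast, in the spirit of the linked integer-partitioning example (not your exact problem): memoization lets you answer questions about an exponentially large space without enumerating it.

from functools import lru_cache

@lru_cache(maxsize=None)
def partitions(n, max_part):
    # number of ways to write n as a sum of parts, each part <= max_part
    if n == 0:
        return 1
    return sum(partitions(n - p, p) for p in range(min(n, max_part), 0, -1))

print(partitions(100, 100))   # 190569292, computed without generating them all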
Dynamic programming approaches work best on problems that can be formulated as linearized directed acyclic graphs (Google it if you're not sure what I mean!).
If a dynamic approach is out, consider investing in parallel processing with a GPU instead. Your computer already has a GPU - it's what your system uses to render graphics - and GPUs are built to be able to perform large numbers of calculations in parallel. A parallel calculation is one in which different workers can do different parts of the same calculation at the same time - the net result can then be joined back together at the end. If you can figure out a way to break this into a series of parallel calculations - and I think there is good reason to suggest you can - there are good tools for GPU interfacing in Python.
Other Tips
Be very explicit on what you mean by best. If you can provide more information on what best means, we folks on Stack Overflow might be of more assistance and write such an algorithm for you.
Using a bare-metal compiled language might help reduce the amount of real time your solution takes in ordinary situations, but the difference in this case is going to be marginal. Compiled languages are useful when you have to do operations like searching through an array efficiently, since there's no instruction-compilation overhead at each iteration. They're not all that more useful when it comes to generating new partitions, because that's not something that removing the dynamic bytecode generation barrier actually affects.
A couple of minor improvements I can see:
Use try/except instead of if not in to avoid a double lookup of keys:
if ele[0] not in same_vals:
    same_vals[ele[0]] = [ele]
else:
    same_vals[ele[0]].append(ele)

# Should be changed to

try:
    same_vals[ele[0]].append(ele)  # most of the time this will work
except KeyError:
    same_vals[ele[0]] = [ele]
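A third option, arguably cleaner than either, is collections.defaultdict:

from collections import defaultdict

same_vals = defaultdict(list)
for ele in sorted_by_feature:
    same_vals[ele[0]].append(ele)   # missing keys get a fresh list automatically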
Don't explicitly convert a generator to a list if you don't have to. I don't immediately see any need for the cast to a list, which would slow things down:
orderings = list(itertools.permutations(l))
for ordering in orderings:
# Should be changed to
for ordering in itertools.permutations(l):
But, like I said, these are only minor improvements. If you really needed this to be faster, consider using a different language.
I'm a data analysis student starting to explore Genetic Algorithms. I'm trying to solve a problem with a GA, but I'm not sure about the formulation of the problem.
Basically I have the state of a variable being 0 or 1 (0 means it's in the normal range of values, 1 means it's in a critical state). When the state is 1 I can apply 3 solutions (let's call them Solutions A, B and C), and for each solution I know the time at which it was applied and the time at which the variable's state returned to 0.
So for each critical event in my data set I have the solution applied, the time interval (in minutes) from the critical event to the application of the solution, and the time interval (in minutes) from the application of the solution until the event goes back to 0.
I want to use a genetic algorithm to find which solution is best and fastest for a critical event, and if possible to rank the solutions, so that if in the future one solution can't be applied I can always apply the second best, for example.
I'm thinking of developing the solution in Python since I'm new to GA.
Edit: Specifying the problem (responding to AMack)
Yes, it's more or less that, but with some nuances. For example, function A can be more suitable for making the variable go to 0, but because other problems exist with the variable, more than one solution gets applied. So in the data I receive for an event of V, sometimes 3 or 4 functions were applied, but only 1 or 2 of them are specialized for the problem I want to analyze. My objective is to build decision support for which solution to use when a given problem appears. But the optimal solution can be more than one, because for some events function A acts very fast, while in other cases of the same event function A doesn't produce a fast response and function C is better. So in the end I intend to produce a ranking of the best solutions to the problem, not only the single fastest, because the fastest in the majority of cases is sometimes not the fastest for the same issue with a different background.
I'm unsure of what your question is, but here are the elements you need for any GA (a bare-bones skeleton follows the list):
A population of initial "genomes"
A ranking function
Some form of mutation, crossing over within the genome
and reproduction.
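A skeleton of those four elements; the genome, fitness function, and all parameters here are toy placeholders, just to show the shape:

import random

def evolve(pop_size=50, genome_len=10, generations=100):
    def fitness(genome):            # toy ranking function: count the 1s
        return sum(genome)
    # initial population of random bit-string genomes
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)   # reproduction
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]            # crossover
            if random.random() < 0.1:            # mutation
                i = random.randrange(genome_len)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

print(evolve())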
If the critical event is always the same, your GA should work very well. That being said, if you have a different critical event but the same genome, you will run into trouble. GAs evolve functions towards the best possible solution for a set of conditions. If you constantly re-run the GA so that it may adapt to each unique situation, you will get a greater degree of adaptability, but at a cost in speed.
You have a distinct advantage using Python because string manipulation (what you'll probably use for the genome) is easy. However...
Python is slow.
If the genome is short, the initial population is small, and there are very few generations, this shouldn't be a problem. You may lose some better solutions that way, but it will be significantly faster.
have fun...
You should take a look at GARAGe at Michigan State. They are a GA research group with a fair number of resources in terms of theory, papers, and software that should provide inspiration.
To start, let's make sure I understand your problem.
You have a set of sample data, each element containing a time series of a binary variable (we'll call it V). When V is set to True, a function (A, B, or C) is applied which returns V to its False state. You would like to apply a genetic algorithm to determine which function (or solution) will return V to False in the least amount of time.
If this is the case, I would stay away from GAs. GAs are typically used for some kind of function optimization / tuning. In general, the underlying assumption is that what you permute is under your control during the algorithm's application (i.e., you are modifying parameters used by the algorithm that are independent of the input data). In your case, my impression is that you just want to find out which of your (I assume) static functions perform best in a wide variety of cases. If you don't feel your current dataset provides a decent approximation of your true input distribution, you can always sample from it and permute the values to see what happens; however, this would not be a GA.
Having said all of this, I could be wrong. If anyone has used GAs in verification like this, please let me know. I'd certainly be interested in learning about it.
I have a problem with a game I am making. I think I know the solution (or what solution to apply), but I'm not sure how all the ‘pieces’ fit together.
How the game works:
(from How to approach number guessing game(with a twist) algorithm? )
Users will be given items with a value (values change every day, and the program is aware of the change in price). For example:
Apple = 1
Pears = 2
Oranges = 3
They will then get a chance to choose any combo of them they like (i.e. 100 apples, 20 pears, and 1 orange). The only output the computer gets is the total value (in this example, it's currently $143). The computer will try to guess what they have, which obviously it won't be able to get correct on the first turn.
         Value   Quantity (day 1)   Value (day 1)
Apple    1       100                100
Pears    2       20                 40
Orange   3       1                  3
Total            121                143
The next turn the user can modify their numbers, but by no more than 5% of the total quantity (or some other percentage we may choose; I'll use 5% for this example). The prices of fruit can change at random, so the total value may change based on that as well (for simplicity I am not changing fruit prices in this example). Using the above example, on day 2 of the game the user returns a value of $152, and $164 on day 3. Here's an example:
         Quantity (day 2)   % change (day 2)   Value (day 2)   Quantity (day 3)   % change (day 3)   Value (day 3)
Apple    104                                   104             106                                   106
Pears    21                                    42              23                                    46
Orange   2                                     6               4                                     12
Total    127                4.96%              152             133                4.72%              164
(I hope the tables show up right - I had to manually space them. If they don't, let me know and I'll try to upload a screenshot.)
I am trying to see if I can figure out what the quantities are over time (assuming the user has the patience to keep entering numbers). Right now my only restriction is that the change cannot exceed 5% of the total quantity, so I cannot pin the answer down to within 5% accuracy; the user would be entering numbers forever.
What I have done so far:
I have taken all the values of the fruit and the total value of the fruit basket that's given to me, and created a large table of all the possibilities. Once I have a list of all the possibilities, I use graph theory to create nodes for each possible solution. I then create edges (links) between nodes from each day (for example day 1 to day 2) if the change is within 5%. I then delete all nodes that have no edges, and as the user keeps playing I also delete entire paths when they become dead ends.
This is great because it narrows the choices down, but now I'm stuck because I want to narrow the choices even further. I've been told this is a hidden Markov problem, but a trickier version because the states are changing (as you can see above, new nodes are added every turn and old/improbable ones are removed).
(If it helps, I got an amazing answer (with sample code) on a Python implementation of Baum-Welch (it's used to train the data) here: Example of implementation of Baum-Welch.)
What I think needs to be done(this could be wrong):
Now that I've narrowed the results down, I'm basically trying to let the program predict the correct basket from the narrowed result base. I thought this was not possible, but several people are suggesting it can be solved with a hidden Markov model. I think I can run several iterations over the data (using a Baum-Welch model) until the probabilities stabilize (and they should get better with more turns from the user).
This would be the way hidden Markov models are able to check spelling or handwriting and improve as they make errors (an error in this case being to pick a basket that is deleted on the next turn as improbable).
Two questions:
How do I figure out the transition and emission matrices if all states are at first equally likely? Something must be used to set the initial probabilities of states changing. I was thinking of using the graph I made, weighting the nodes with the highest number of edges as part of the calculation of the transition/emission states. Does that make sense, or is there a better approach?
How can I keep track of all the changes in states? As new baskets are added and old ones are removed, tracking the baskets becomes an issue. I thought a Hierarchical Dirichlet Process hidden Markov model (HDP-HMM) might be what I need, but I'm not exactly sure how to apply it.
(Sorry if I sound a bit frustrated... it's a bit hard knowing a problem is solvable but not being able to conceptually grasp what needs to be done.)
As always, thanks for your time and any advice/suggestions would be greatly appreciated.
Like you've said, this problem can be described with an HMM. You are essentially interested in maintaining a distribution over latent, or hidden, states, which would be the true quantities at each time point. However, it seems you are confusing the problem of learning the parameters of an HMM with simply doing inference in a known HMM. You have the latter problem, but propose employing a solution (Baum-Welch) designed to do the former. That is, you have the model already; you just have to use it.
Interestingly, if you go through coding a discrete HMM for your problem, you get an algorithm very similar to what you describe in your graph-theory solution. The big difference is that your solution is tracking what is possible, whereas a correct inference algorithm, like the Viterbi algorithm, will track what is likely. The difference is clear when there is overlap in the 5% range on a domain, that is, when multiple possible states could potentially transition to the same state. Your algorithm might add two edges to a point, but I doubt that has an effect when you compute the next day (it should count twice, essentially).
Anyway, you could use the Viterbi algorithm if you are only interested in the best guess for the most recent day. Instead, I'll give you a brief idea of how you can modify your graph-theory solution. Instead of maintaining edges between states, maintain a fraction representing the probability that each state is the correct one (this distribution is sometimes called the belief state). At each new day, propagate your belief state forward by incrementing each bucket by the probability of its parents (instead of adding an edge, you're adding a floating point number). You also have to make sure your belief state is properly normalized (sums to 1), so just divide by its sum after each update. After that, you can weight each state by your observation; but since you don't have a noisy observation, you can just set all the impossible states to zero probability and re-normalize. You now have a distribution over the underlying quantities conditioned on your observations.
I'm skipping over a lot of statistical details here, just to give you the idea.
Edit (re: questions):
The answer to your question really depends on what you want. If you want only the distribution for the most recent day, then you can get away with a one-pass algorithm like the one I've described. If, however, you want the correct distribution over the quantities at every single day, you're going to have to do a backward pass as well - hence the aptly named forward-backward algorithm. I get the sense that since you are looking to go back a step and delete edges, you probably want the distribution for all days (unlike I originally assumed). Of course, you noticed there is information that can be used so that the "future can inform the past", so to speak, and this is exactly why you need the backward pass as well. It's not really complicated; you just run the exact same algorithm starting at the end of the chain. For a good overview, check out Christopher Bishop's 6-part tutorial on videolectures.net.
Because you mentioned adding/deleting edges, let me clarify the algorithm I described previously; keep in mind this is for a single forward pass. Let there be a total of N possible permutations of quantities, so you will have a belief state that is a sparse vector of N elements (call it v_0). On the first step you receive an observation of the sum, and you populate the vector by setting all the possible values to probability 1.0, then re-normalize. On each subsequent step you create a new sparse vector (v_1) of all 0s, iterate over all non-zero entries in v_0, and increment (by the probability in v_0) all entries in v_1 that are within 5%. Then zero out all the entries in v_1 that are not possible according to the new observation, re-normalize v_1, and throw away v_0. Repeat forever; v_1 will always be the correct distribution of possibilities.
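A minimal sketch of that forward pass, using a dict as the sparse belief vector (candidates_for and within_5pct are hypothetical helpers standing in for your game-specific enumeration and adjacency checks):

def forward_step(belief, observed_total, candidates_for, within_5pct):
    new_belief = {}
    for state, p in belief.items():
        for nxt in candidates_for(observed_total):   # states consistent with the new sum
            if within_5pct(state, nxt):              # reachable from the old state
                new_belief[nxt] = new_belief.get(nxt, 0.0) + p
    total = sum(new_belief.values())                 # re-normalize so it sums to 1
    return {s: p / total for s, p in new_belief.items()}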
By the way, things can get way more complex than this, if you have noisy observations or very large states or continuous states. For this reason it's pretty hard to read some of the literature on statistical inference; it's quite general.