I am experimenting with Monte Carlo simulations and I have come up with this interesting problem. Suppose we generate random values from a Normal distribution with st. dev. = 2 and mean equal to the last value generated (a Markov process). We start at the value 5, but every time we generate a value greater than 9 we switch to a second Normal distribution with st. dev. = 3. If we generate a value greater than 15 or less than 0 we start from 5 again. We want to find the expected value of this random process. One way would be to just generate a very large number of samples, but since this would be impractical for a more complicated process, my question is: what is the smart way to estimate the expected value (as well as the probability distribution and other standard characteristics of this random process)?
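For concreteness, here is how I would simulate the process directly (the naive approach I want to improve on). All names are mine, and I am assuming the walk stays in the wider regime until a reset, since the description leaves that point open:

```python
import random

def simulate(n_steps, start=5.0, seed=0):
    """Two-regime Gaussian walk: mean = previous value; st. dev. 2 until a
    value above 9 is seen, then st. dev. 3 until the next reset."""
    rng = random.Random(seed)
    x, sigma = start, 2.0
    samples = []
    for _ in range(n_steps):
        x = rng.gauss(x, sigma)
        if x > 15 or x < 0:        # out of bounds: restart from 5
            x, sigma = start, 2.0  # back to the first regime (assumption)
        elif x > 9:                # crossed 9: switch to the wider regime
            sigma = 3.0
        samples.append(x)
    return samples

samples = simulate(100_000)
print(sum(samples) / len(samples))  # crude estimate of the expected value
```

This is exactly the brute-force sampling I would like to avoid for more complicated processes, but it pins down the process being asked about.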
I have looked into variations of Monte Carlo like Markov Chain Monte Carlo (MCMC), yet I cannot seem to think of a good approach to solving this problem.
Any advice or sources would be helpful :)
PS I am working in Python, but any reference would be helpful, be it a code implementation in some other language, a theoretical explanation or even just the right term to search for on the Internet.
My question is very specific. Given a k-dimensional Gaussian distribution with known mean and standard deviation, say I wish to sample 10 points from it. But the 10 samples should be very different from each other. For example, I do not wish to sample 5 of them very close to the mean (by "very close" we may assume, for this example, within 1 sigma), which may happen if I do random sampling. Let us also add the constraint that all drawn samples should be at least 1 sigma away from each other. Is there a known way to sample in this fashion methodically? Is there any such module in PyTorch which can do so?
Sorry if this thought is ill posed but I am trying to understand if such a thing is possible.
To my knowledge there is no such library, but the problem you are trying to solve is straightforward with rejection: draw a candidate and check whether it is 'far enough' from the mean (and from the points you have already kept), keeping it only if so. Each check is cheap. The probability of a point falling more than one sigma from the mean is ~32%, so it is not that unlikely.
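A minimal rejection-sampling sketch of that check, extended to the pairwise 1-sigma constraint from the question (plain NumPy; all function names are mine, and the scalar "sigma" used for the distance threshold is a crude simplification of the full covariance):

```python
import numpy as np

def sample_spread(mean, cov, n=10, min_dist=1.0, seed=0, max_tries=100_000):
    """Rejection sampling: draw from N(mean, cov), keep a candidate only if
    it lies at least min_dist * sigma from every point kept so far."""
    rng = np.random.default_rng(seed)
    mean = np.atleast_1d(np.asarray(mean, dtype=float))
    cov = np.atleast_2d(cov)
    sigma = np.sqrt(np.mean(np.diag(cov)))  # crude scalar scale for "1 sigma"
    kept = []
    for _ in range(max_tries):
        x = rng.multivariate_normal(mean, cov)
        if all(np.linalg.norm(x - y) >= min_dist * sigma for y in kept):
            kept.append(x)
        if len(kept) == n:
            return np.array(kept)
    raise RuntimeError("could not place all points; relax the constraints")

pts = sample_spread(mean=[0.0, 0.0], cov=np.eye(2), n=10)
```

Note that the acceptance rate drops as more points are kept, so for many points or tight spacing a deterministic scheme (e.g. quasi-random or Poisson-disk sampling) would scale better.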
I am trying to investigate the distribution of maximum likelihood estimates, specifically for a large number of covariates p in a high-dimensional regime (meaning that p/n, with n the sample size, is about 1/5). I am generating the data and then using statsmodels.api.Logit to fit the parameters of my model.
The problem is, this only seems to work in a low-dimensional regime (like 300 covariates and 40,000 observations). Specifically, I get that the maximum number of iterations has been reached, the log-likelihood is inf, i.e. it has diverged, and a 'singular matrix' error.
I am not sure how to remedy this. Initially, when I was still working with smaller values (say 80 covariates, 4000 observations) and got this error occasionally, I raised the maximum from 35 to 70 iterations. This seemed to help.
However, it clearly will not help now, because my log-likelihood function is diverging. It is not just a matter of non-convergence within the maximum number of iterations.
It would be easy to answer that these packages are simply not meant to handle such numbers, but there have been papers specifically investigating this high-dimensional regime, for example here, where p=800 covariates and n=4000 observations are used.
Granted, that paper used R rather than Python. Unfortunately I do not know R, but I should think that Python's optimisation is of comparable 'quality'.
My questions:
Might it be the case that R is better suited to handle data in this high p/n regime than python statsmodels? If so, why and can the techniques of R be used to modify the python statsmodels code?
How could I modify my code to work for numbers around p=800 and n=4000?
In the code you currently use (from several other questions), you implicitly use the Newton-Raphson method, which is the default for the sm.Logit model. It computes and inverts the Hessian matrix at every iteration, which is incredibly costly for large matrices and often results in numerical instability when the matrix is near singular, as you have already witnessed. This is briefly explained on the relevant Wikipedia entry.
You can work around this by using a different solver, e.g. bfgs (or lbfgs), like so:
model = sm.Logit(y, X)
result = model.fit(method='bfgs')
This runs perfectly well for me even with n = 10000, p = 2000.
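If you want to see what switching away from Newton-Raphson amounts to, here is a minimal self-contained sketch that fits the same logistic likelihood with SciPy's L-BFGS solver instead of statsmodels (this is my own illustration, not statsmodels' internals; sizes are kept modest so it runs quickly):

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(beta, X, y):
    """Negative log-likelihood of a logistic model, numerically stabilized
    via logaddexp: sum(log(1 + exp(X @ beta)) - y * X @ beta)."""
    z = X @ beta
    return np.sum(np.logaddexp(0.0, z) - y * z)

def grad(beta, X, y):
    """Gradient: X^T (sigmoid(X @ beta) - y). No Hessian needed."""
    return X.T @ (1.0 / (1.0 + np.exp(-(X @ beta))) - y)

rng = np.random.default_rng(0)
n, p = 2000, 100
X = rng.standard_normal((n, p))
beta_true = rng.standard_normal(p) / np.sqrt(p)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(X @ beta_true)))).astype(float)

# L-BFGS maintains a low-rank Hessian approximation from gradients alone,
# so it never forms or inverts the full (p x p) Hessian
res = minimize(neg_log_lik, np.zeros(p), args=(X, y),
               method="L-BFGS-B", jac=grad)
print(res.success)
```

The same idea is what `method='bfgs'` selects inside statsmodels, which is why it sidesteps the singular-Hessian failure mode.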
Aside from estimation, and more problematically, your code for generating samples results in data that suffer from a large degree of quasi-separability, in which case the whole MLE approach is questionable at best. You should urgently look into this, as it suggests your data may not be as well-behaved as you might like them to be. Quasi-separability is very well explained here.
I use a Python package called emcee to fit a function to some data points. The fit looks great, but when I want to plot the value of each parameter at each step I get this:
In their example (with a different function and data points) they get this:
Why is my function converging so fast, and why does it have that weird shape in the beginning? I apply MCMC using the likelihood and the posterior probability. Even though the fit looks very good, the errors on the function's parameters are very small (10^10 times smaller than the actual values), and I think it is because of the random walks. Any idea how to fix it? Here is their code for fitting: http://dan.iel.fm/emcee/current/user/line/ I used the same code with the obvious modifications for my data points and fitting function.
I would not say that your function is converging faster than the emcee line-fitting example you've linked to. In the example, the walkers start exploring the most likely values in the parameter space almost immediately, whereas in your case it takes more than 200 iterations to reach the high probability region.
The jump in your trace plots looks like burn-in, a common feature of MCMC sampling algorithms, where the walkers are given starting points away from the bulk of the posterior and must then find their way to it. It looks like in your case the likelihood function is fairly smooth, so you only need a hundred or so iterations to achieve this, giving your trace the "weird shape" you're talking about.
If you can constrain the starting points better, do so; if not, you might consider discarding this initial stretch before further analysis (see here and here for discussions of burn-in lengths).
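Discarding the burn-in is just a matter of slicing the chain array. A minimal sketch, assuming an emcee-v3-style chain shaped (n_steps, n_walkers, n_params) as returned by `get_chain()` (the helper name is mine; newer emcee versions can also do this directly via `get_chain(discard=...)`):

```python
import numpy as np

def discard_burn_in(chain, n_burn):
    """Drop the first n_burn steps of an MCMC chain shaped
    (n_steps, n_walkers, n_params) and flatten walkers into one sample set."""
    trimmed = chain[n_burn:]
    return trimmed.reshape(-1, trimmed.shape[-1])

# toy chain: 500 steps, 32 walkers, 3 parameters
chain = np.random.default_rng(1).standard_normal((500, 32, 3))
flat = discard_burn_in(chain, n_burn=200)
print(flat.shape)  # (9600, 3)
```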
As for whether the errors are realistic or not, you need to inspect the resulting posterior models for that; the ratio of the actual parameter value to its uncertainty does not have a say in this. For example, if we take your linked example and change the true value of b to 10^10, the resulting errors will be ten orders of magnitude smaller relative to the value while remaining perfectly valid.
I read somewhere that the Python library function random.expovariate produces intervals equivalent to Poisson process events.
Is that really the case or should I impose some other function on the results?
On a strict reading of your question, yes, that is what random.expovariate does.
expovariate gives you random floating point numbers, exponentially distributed. In a Poisson process the size of the interval between consecutive events is exponential.
However, there are two other ways I could imagine modelling Poisson processes:
Generate uniformly distributed random numbers over the observation window and sort them; given the total number of events, these are the event times.
Generate integers which have a Poisson distribution (i.e. they are distributed like the number of events within a fixed interval in a Poisson process). Use numpy.random.poisson to do this.
Of course all three things are quite different. The right choice depends on your application.
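The three approaches can be sketched side by side with NumPy (the rate and window length are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
rate, T = 15.0, 1000.0  # 15 events per second, observed for T seconds

# 1. Exponential inter-arrival times (what random.expovariate gives you):
gaps = rng.exponential(scale=1 / rate, size=int(rate * T * 1.2))
arrivals = np.cumsum(gaps)
arrivals = arrivals[arrivals < T]

# 2. Draw the total count, then scatter that many points uniformly and sort:
n_events = rng.poisson(rate * T)
arrivals2 = np.sort(rng.uniform(0, T, size=n_events))

# 3. Poisson-distributed event counts per unit interval:
counts = rng.poisson(rate, size=int(T))

print(len(arrivals) / T, len(arrivals2) / T, counts.mean())  # all close to 15
```

Methods 1 and 2 produce event times (and are statistically equivalent for a homogeneous process), while method 3 only produces counts; which one you want depends on whether your application needs the individual arrival times.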
https://stackoverflow.com/a/10250877/1587329 gives a nice explanation of why this works (not only in Python), and some code. In short, you can simulate the first 10 events in a Poisson process with an average rate of 15 arrivals per second like this:
import random
for _ in range(10):  # range(1, 10) would only give 9 events
    print(random.expovariate(15))
I have a problem with a game I am making. I think I know the solution (or what solution to apply), but I'm not sure how all the 'pieces' fit together.
How the game works:
(from How to approach number guessing game(with a twist) algorithm? )
users will be given items with a value (values change every day, and the program is aware of the change in price). For example:
Apple = 1
Pears = 2
Oranges = 3
They will then get a chance to choose any combo of them they like (i.e. 100 apples, 20 pears, and 1 orange). The only output the computer gets is the total value (in this example, it's currently $143). The computer will try to guess what they have, which obviously it won't be able to get correctly on the first turn.
         Value   Quantity (day 1)   Value (day 1)
Apple      1          100                100
Pears      2           20                 40
Orange     3            1                  3
Total                 121                143
The next turn the user can modify their numbers, but by no more than 5% of the total quantity (or some other percentage we may choose; I'll use 5% for this example). The prices of the fruit can change (at random), so the total value may change for that reason as well (for simplicity I am not changing fruit prices in this example). Using the above example, on day 2 of the game the user returns a value of $152, and $164 on day 3. Here's an example.
         Qty (day 2)   % change (day 2)   Value (day 2)   Qty (day 3)   % change (day 3)   Value (day 3)
Apple        104                              104             106                              106
Pears         21                               42              23                               46
Orange         2                                6               4                               12
Total        127            4.96%             152             133            4.72%             164
*(I hope the tables show up right, I had to manually space them so hopefully its not just doing it on my screen, if it doesn't work let me know and I'll try to upload a screenshot).
I am trying to see if I can figure out what the quantities are over time (assuming the user has the patience to keep entering numbers). I know right now my only restriction is that the change cannot be more than 5% of the total, so I can never narrow it down beyond 5% accuracy, and the user would be entering numbers forever.
What I have done so far:
I have taken all the values of the fruit and the total value of the fruit basket that's given to me and created a large table of all the possibilities. Once I have a list of all the possibilities, I use graph theory and create nodes for each possible solution. I then create edges (links) between nodes from each day (for example day 1 to day 2) if the change is within 5%. I then delete all nodes that have no edges (links to other nodes), and as the user keeps playing I also delete entire paths when a path becomes a dead end.
This is great because it narrows the choices down, but now I'm stuck because I want to narrow these choices even further. I've been told this is a hidden Markov problem, but a trickier version, because the states are changing (as you can see above, new nodes are added every turn and old/improbable ones are removed).
** If it helps, I got an amazing answer (with sample code) on a Python implementation of the Baum-Welch algorithm (it's used to train the model) here: Example of implementation of Baum-Welch **
What I think needs to be done (this could be wrong):
Now that I have narrowed the results down, I am basically trying to let the program predict the correct basket from the narrowed result set. I thought this was not possible, but several people are suggesting it can be solved with a hidden Markov model. I think I can run several iterations over the data (using a Baum-Welch model) until the probabilities stabilize (and they should get better with more turns from the user).
Much like the way hidden Markov models are able to check spelling or handwriting and improve as they make errors (an error in this case would be picking a basket that gets deleted on the next turn as improbable).
Two questions:
How do I figure out the transition and emission matrices if all states are at first equally likely? Something must be used to set the probabilities of states changing. I was thinking of using the graph I made, weighting the nodes with the highest number of edges more heavily in the calculation of the transition/emission probabilities. Does that make sense, or is there a better approach?
How can I keep track of all the changes in states? As new baskets are added and old ones are removed, tracking the baskets becomes an issue. I thought a Hierarchical Dirichlet Process hidden Markov model (HDP-HMM) might be what I need, but I'm not exactly sure how to apply it.
(Sorry if I sound a bit frustrated... it's a bit hard knowing a problem is solvable but not being able to conceptually grasp what needs to be done.)
As always, thanks for your time and any advice/suggestions would be greatly appreciated.
Like you've said, this problem can be described with an HMM. You are essentially interested in maintaining a distribution over latent, or hidden, states, which would be the true quantities at each time point. However, it seems you are confusing the problem of learning the parameters of an HMM with simply doing inference in a known HMM. You have the latter problem, but propose employing a solution (Baum-Welch) designed for the former. That is, you have the model already; you just have to use it.
Interestingly, if you go through coding a discrete HMM for your problem, you get an algorithm very similar to the graph-theory solution you describe. The big difference is that your solution is tracking what is possible, whereas a correct inference algorithm, like the Viterbi algorithm, will track what is likely. The difference is clear when there is overlap in the 5% range on a domain, that is, when multiple possible states could transition to the same state. Your algorithm might add two edges to a point, but I doubt that has an effect when you compute the next day (it should count twice, essentially).
Anyway, you could use the Viterbi algorithm if you are only interested in the single best guess at the most recent day, but I'll just give you a brief idea of how you can modify your graph-theory solution. Instead of maintaining edges between states, maintain a fraction representing the probability that a state is the correct one (this distribution is sometimes called the belief state). At each new day, propagate your belief state forward by incrementing each bucket by the probability of its parent (instead of adding an edge, you're adding a floating-point number). You also have to make sure your belief state is properly normalized (sums to 1), so just divide by its sum after each update. After that, you can weight each state by your observation; but since you don't have a noisy observation, you can simply set all the impossible states to zero probability and then re-normalize. You now have a distribution over the underlying quantities conditioned on your observations.
I'm skipping over a lot of statistical details here, just to give you the idea.
Edit (re: questions):
The answer to your question really depends on what you want. If you want only the distribution for the most recent day, then you can get away with the one-pass algorithm I've described. If, however, you want the correct distribution over the quantities at every single day, you're going to have to do a backward pass as well; hence the aptly named forward-backward algorithm. I get the sense that since you are looking to go back a step and delete edges, you probably want the distribution for all days (unlike I originally assumed). As you noticed, there is information that can be used so that the "future can inform the past", so to speak, and this is exactly why you need the backward pass as well. It's not really complicated; you just run the same algorithm starting at the end of the chain. For a good overview, check out Christopher Bishop's 6-piece tutorial on videolectures.net.
Because you mentioned adding/deleting edges, let me clarify the algorithm I described previously; keep in mind this is for a single forward pass. Let there be a total of N possible permutations of quantities, so you will have a belief state that is a sparse vector N elements long (call it v_0). In the first step you receive an observation of the sum, and you populate the vector by setting all the possible values to probability 1.0, then re-normalize. In the next step you create a new sparse vector (v_1) of all 0s, iterate over all non-zero entries in v_0, and increment (by the probability in v_0) all entries in v_1 that are within 5%. Then zero out all the entries in v_1 that are not possible according to the new observation, re-normalize v_1, and throw away v_0. Repeat forever; v_1 will always be the correct distribution of possibilities.
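A toy sketch of that forward pass, using the fruit numbers from the question (the function names and brute-force enumeration are mine; a real implementation would want a smarter state representation than enumerating every quantity tuple):

```python
def candidate_states(prices, value):
    """All non-negative (apples, pears, oranges) tuples whose total value
    matches the observed sum exactly -- the 'zero out impossible states'
    step, done up front since the observation is noiseless."""
    a_price, p_price, o_price = prices
    states = []
    for o in range(value // o_price + 1):
        for p in range((value - o * o_price) // p_price + 1):
            a, rem = divmod(value - o * o_price - p * p_price, a_price)
            if rem == 0:
                states.append((a, p, o))
    return states

def forward_step(belief, candidates, pct=0.05):
    """One forward-pass update: each state spreads its probability mass to
    every new candidate within pct of its total quantity, then renormalize."""
    cand = [(s, sum(s)) for s in candidates]
    new_belief = {}
    for state, prob in belief.items():
        total = sum(state)
        for nxt, nxt_total in cand:
            if abs(nxt_total - total) <= pct * total:
                new_belief[nxt] = new_belief.get(nxt, 0.0) + prob
    z = sum(new_belief.values())
    return {s: q / z for s, q in new_belief.items()}

prices = (1, 2, 3)                    # apple, pear, orange prices
day1 = candidate_states(prices, 143)  # everything consistent with $143
belief = {s: 1.0 / len(day1) for s in day1}  # uniform v_0
belief = forward_step(belief, candidate_states(prices, 152))  # day 2: $152
```

After the day-2 update, `belief` is the distribution over quantity tuples conditioned on both observations, and the true basket (104, 21, 2) is among the states with non-zero probability.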
By the way, things can get far more complex than this if you have noisy observations, very large state spaces, or continuous states. For this reason it's pretty hard to read some of the literature on statistical inference; it's quite general.