I am trying to understand the idea behind optimize.minimize, but I am stuck on a small point: why do we need an initial guess (x0) when using optimize.minimize? Can anyone help me out?
Thank you so much.
Numerical optimisation basically says: here's a function f.
Let's say we're trying to find a minimum. Let's add a bit to our starting variable: what is f(x + a)? Does it go down? Let's add a bit more: is f(x + a + b) smaller than that? Eventually, after trying a ton of different inputs, going up and down, you'll have a good idea of how the function behaves and where it is minimised.
To do this, you need to start someplace so that you can add or subtract from the x part of f(x).
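As a minimal sketch (the quadratic f here is just an illustration, not from the question):

from scipy.optimize import minimize

# f has its minimum at x = 3; x0 is where the search starts probing
f = lambda x: (x[0] - 3.0)**2

result = minimize(f, x0=[0.0])
print(result.x)  # close to [3.], for any reasonable starting guess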
I would like to ask you something regarding a linear program for optimization. I have already successfully set up the model. However, I am having problems setting up a metaheuristic to reduce the computation time. The basic optimization model can be seen here:
In the metaheuristic algorithm there is a while loop with the following condition:
while $ \sum_{i=1}^I b_i y_i \leq \sum_{k=1}^K q_k $ do
I tried to implement this condition with the following code:
while lpSum(b[i]*y[i] for i in I)<=lpSum(q[k] for k in K):
If I calculate the two sums separately, I get the right results for both. However, when I put them into this condition, the code runs into an endless loop, even when the condition is fulfilled and it should break the loop. I guess it has to do with the data type, and that the condition can't be an LpAffineExpression. However, I am really struggling to understand this problem.
I hope you understand my problem, and I would really appreciate your ideas and explanations! Please tell me if you need more information on anything specific - sorry for being a beginner.
Thanks a lot in advance and best regards,
Bernhard
lpSums do not have a value the way a regular sum does.
Any Python object can be compared to other objects using the built-in comparison methods like __eq__ and __lt__. That is how I can say date(2000, 1, 1) < date(2000, 1, 2). However, LpAffineExpressions (which lpSums are) are meant to be used in constraints. Their contents are variables, which are solved by the LP solver, so they do not yet have any values.
Thus the return value of lpSum(x) <= lpSum(y) is not True or False, as with normal comparisons, but a constraint object. And a constraint is not None, nor False, nor any other falsy value. What you are writing is equivalent to while <some object>:, which is always true. Hence your infinite loop.
I don't know what "using a metaheuristic to reduce computation time" means in this context - maybe you run a few iterations of the LP solver and then apply your metaheuristic to the result.
If that is the case, use b[i].value() to get the value the variable b[i] was given in that solution, and be sure to compute the total with a regular sum.
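A minimal sketch of that fix, assuming the model has already been solved once so the variables carry values; the names b, y, q, I, K come from the question, prob is a hypothetical stand-in for your LpProblem, and whether each of b[i] and y[i] needs .value() depends on which of them are decision variables in your model:

# plain Python sums over solved values, not lpSum
while sum(b[i].value() * y[i].value() for i in I) <= sum(q[k] for k in K):
    # ... one metaheuristic step that modifies the model ...
    prob.solve()  # re-solve so the variable values change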
The question is:
Determine at which point the function $y = x e^{x^2}$ crosses the unit circle in the x-positive, y-positive quadrant.
Rewrite the problem as a fixed-point problem, i.e. in the form $x = F(x)$.
This equation can be solved iteratively: $x_n = F(x_{n-1})$.
Implement the above equation in a function fixpoint that takes as arguments the initial guess x0 and the tolerance tol and returns the sequence $x_n$ of approximations to x.
I'm very new to python, and I've rewritten the equations as
xn=1/(np.sqrt(1+np.exp(2(x0)**2)))
and created a function, but I'm genuinely not too sure how to go about this.
It genuinely is not our issue if you don't understand the language or the problem you are trying to solve.
This looks like homework. You don't learn anything if somebody here does it for you.
Try to solve an iteration or two by hand with calculator, pencil, and paper before you program anything.
Your first equation looks wrong to me.
xn=1/(np.sqrt(1+np.exp(2*(x0)**2)))
It looks like you forgot a multiplication sign between the two factors in the argument of the exponential function. You should check.
I would prefer x0*x0 to x0**2. Personal taste.
I would expect to see an equation that takes in x(n) and returns x(n+1). Yours never uses the new value x(n) to get x(n+1); you're stuck with x(0) as written.
I would expect to see a loop where the initial value of x(n) is x(0). Inside the loop I'd calculate x(n+1) from x(n) and check to see if it's converged to a desired tolerance. If it has, I'd exit the loop. If it has not, I'd update x(n) to equal x(n+1) and loop again.
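A minimal sketch of that loop, using the corrected iteration F(x) = 1/sqrt(1 + exp(2*x**2)), which follows from substituting y = x*exp(x**2) into the unit-circle equation x**2 + y**2 = 1:

import numpy as np

def fixpoint(x0, tol, max_iter=100):
    xn = [x0]
    for _ in range(max_iter):
        xn.append(1.0 / np.sqrt(1.0 + np.exp(2.0 * xn[-1]**2)))
        if abs(xn[-1] - xn[-2]) < tol:
            break  # converged to the requested tolerance
    return xn

print(fixpoint(0.5, 1e-8)[-1])  # x-coordinate of the crossing point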
My math skills are really poor.
I'm trying to handle accelerometer data (tri-axis) with Python
I need to compute the norm
That is easy; I did something like this:
import math
norm = math.sqrt(x_value**2 + y_value**2 + z_value**2)
but now I have to compute the time integral of that, and for that I have no clue.
Can somebody help me on that?
Edit: Adding more info due to the negative votes (??)
I know there are tools for integration in Python, but this integral has no bounds (there are no limits in the formula), so I don't understand how to make it work.
Integrating the modulus of the acceleration will lead you nowhere. You must integrate the three components separately to get the components of the velocity (and integrate those a second time to get the position vector).
To perform the numerical integration, use Simpson's rule incrementally. Note that because Simpson's rule consumes two intervals at a time, you will only get every other value of the velocity.
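A minimal sketch of that incremental Simpson integration, assuming equally spaced samples; ax, ay, az and the time step dt are hypothetical names for your data:

import numpy as np

def integrate_simpson(a, dt):
    # Advance two intervals at a time: the integral over [t_i, t_{i+2}]
    # is dt/3 * (a_i + 4*a_{i+1} + a_{i+2}), so velocity values come out
    # at every other sample.
    v = [0.0]  # assuming zero initial velocity
    for i in range(0, len(a) - 2, 2):
        v.append(v[-1] + dt * (a[i] + 4.0 * a[i + 1] + a[i + 2]) / 3.0)
    return np.array(v)

# vx = integrate_simpson(ax, dt), likewise vy and vz; the speed is then
# np.sqrt(vx**2 + vy**2 + vz**2)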
I'm trying to find roots of a 2D problem of the form below (this is not the actual equation, which is very long; it's just an example of the style of problem).
from numpy import exp

def my_function(a, b):
    c = exp(a) + b
    d = a + 2 - exp(b)
    return c, d
I want to know a and b, for which c and d are zero.
So far, I'm using fsolve from the scipy optimize library, passing seed values that I know are close to the solution. This works well, although it occasionally fails and I get the error about the solver "not making good progress over the last 10 iterations".
I wonder if there's a way / general good practice for making root finding more robust?
Otherwise, I'd like to try bounded-root-finding. In 1D, fminbound can be used, but I can't find a function that will let me specify bounds for a 2D problem.
Any help appreciated.
Thanks
In the big picture, there is no general way to make root finding more robust - there is a reason there are so many different functions in scipy.optimize!
One trick: rather than finding roots of f(x), you can instead try to find minima of f(x)^2 (for a vector-valued function like yours, the sum of the squared components c^2 + d^2). Finding minima is often more robust because the algorithm just needs to keep going downhill to the bottom. The downside is that the minimum found might not be at f(x) = 0 (i.e. might not be a root).
So, you could try scipy.optimize.fmin_tnc which is a minimizer with bounds and see what happens.
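A minimal sketch of that approach on the example function, with illustrative bounds (pick ones that bracket where you expect the root):

import numpy as np
from scipy.optimize import fmin_tnc

def squared_residual(x):
    # zero exactly at a root of the original system
    c = np.exp(x[0]) + x[1]
    d = x[0] + 2 - np.exp(x[1])
    return c**2 + d**2

x_opt, nfeval, rc = fmin_tnc(squared_residual, x0=[0.0, 0.0],
                             approx_grad=True,
                             bounds=[(-5, 5), (-5, 5)])
# check that squared_residual(x_opt) is near zero; otherwise you have
# found a minimum, not a root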
Good guesses are always helpful, but 'close' might not always be best. Look more deeply at the function and figure out what the landscape near your guess really looks like, and whether starting 'close' (or far!) in a different direction might be easier for the solver: one direction might be quite choppy, requiring the solver to go over mountains to find the valley, while another has a beautiful broad path down to the bottom.
First, let me say that I lack experience with scientific math and statistics - so this might be a very well-known problem, but I don't know where to start.
I have a function f(x1, x2, ..., xn) where I need to guess the x values and find the highest value of f. The function has the following properties:
the total number of parameters is usually around 40 to 60, so a brute-force approach is impossible.
the possible values for each x range from 0.01 to 2.99
the function is continuous (steady), and a higher f value means the guess for the parameters is better, and vice versa.
So far, I have implemented a pretty basic method in Python. It initially sets all parameters to 1, randomly guesses new values, and checks whether f is higher than before. If not, it rolls back to the previous values.
In a loop with 10,000 iterations this seems to work somehow, but the result is probably far from perfect.
Any suggestions on how to improve the search for the optimal parameters would be appreciated. When googling this issue, things like MCMC came up, but that seems like a very advanced method and I would need a lot of time to even understand it.
Basic hints or concepts would help me more than elaborated methods and algorithms.
Don't do it yourself. Install SciPy and use its optimization routines. scipy.optimize.minimize looks like a good fit.
I think you want to take a look at scipy.optimize (http://docs.scipy.org/doc/scipy-0.10.0/reference/tutorial/optimize.html). A maximization is the minimization of -1 times the function.
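A minimal sketch of that, with a toy f standing in for the real function (f and n here are placeholders):

import numpy as np
from scipy.optimize import minimize

def f(x):
    return -np.sum((x - 1.5)**2)  # toy objective with a known maximum

n = 50
x0 = np.ones(n)              # the "all parameters start at 1" guess
bounds = [(0.01, 2.99)] * n  # the stated range for each x

# maximize f by minimizing -f; L-BFGS-B supports box bounds
result = minimize(lambda x: -f(x), x0, method="L-BFGS-B", bounds=bounds)
print(result.x, -result.fun)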