The question is:
Determine at which point the function y = x * e^(x^2) crosses the unit circle in the x-positive, y-positive quadrant.
Rewrite the problem as a fixed-point problem, i.e. in the form x = F(x).
This equation can be solved iteratively: x_n = F(x_(n-1)).
Implement the above iteration in a function fixpoint that takes the initial guess x0 and the tolerance tol as arguments and returns the sequence xn of approximations to x.
I'm very new to Python, and I've rewritten the equation as
xn=1/(np.sqrt(1+np.exp(2(x0)**2)))
and created a function, but I'm genuinely not too sure how to go about this.
It genuinely is not our issue if you don't understand the language or the problem you are trying to solve.
This looks like homework. You don't learn anything if somebody here does it for you.
Try to solve an iteration or two by hand with calculator, pencil, and paper before you program anything.
Your first equation looks wrong to me.
xn=1/(np.sqrt(1+np.exp(2*(x0)**2)))
I don't know if you forgot a multiplication sign between the 2 and the (x0)**2 inside the exponential. You should check.
I would prefer x0*x0 to x0**2. Personal taste.
I would expect to see an equation that would take in x(n) and return x(n+1). Yours will never use the new value x(n) to get x(n+1). You're stuck with x(0) as written.
I would expect to see a loop where the initial value of x(n) is x(0). Inside the loop I'd calculate x(n+1) from x(n) and check to see if it's converged to a desired tolerance. If it has, I'd exit the loop. If it has not, I'd update x(n) to equal x(n+1) and loop again.
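Putting that loop into code, a minimal sketch might look like the following. The form F(x) = 1/sqrt(1 + exp(2*x^2)) is your own rewriting of the problem; the maximum-iteration guard is just a safety net I added.

import numpy as np

def fixpoint(x0, tol, maxit=1000):
    """Return the sequence x_n of iterates x_{n+1} = F(x_n), starting from x0."""
    def F(x):
        # from x^2 + (x*exp(x^2))^2 = 1  =>  x = 1/sqrt(1 + exp(2*x^2))
        return 1.0 / np.sqrt(1.0 + np.exp(2.0 * x * x))

    xn = [x0]
    for _ in range(maxit):
        xn.append(F(xn[-1]))
        if abs(xn[-1] - xn[-2]) < tol:   # successive iterates agree to within tol
            break
    return xn

Called as, say, fixpoint(0.5, 1e-8), the iterates should settle somewhere around x ≈ 0.58; the y coordinate of the crossing then follows from y = x * e^(x^2).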
Let's assume we have a two-dimensional function
def f(x, y):
    # some calculations here
    return value
we know from observations the following
f(x, 0.9y) = 10
f(x, 0.8y) = 15
f(x, 0.8y) = 23
...
How can I find the right values for x, y to get the best fit to the observations? Which procedure is recommended for such an optimization problem?
It seems you have a typo in your question. Anyhow, given that we effectively only have two data points, optimization is not really the right tool: a set of simultaneous equations already gives you an exact result for many different types of functions.
If you meant to give three data points, there is a different approach:
Note that x is constant. This means that whatever the function is, we cannot say anything about the x portion of the 2D function. So it's really a one-dimensional problem: how does y behave?
Given that we know nothing about the data in question, and given the values you gave, my gut instinct would be to go with an exponential/logarithmic function (assuming the final value is meant to read 0.7). It might also be a linear function, though whichever function you fit, you would have to calculate the error of the fit (here is how you would do this). Beyond this, in my experience, there is not much that you can do.
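Just as an illustration of what such a fit could look like: the sketch below assumes the third observation was meant to be f(x, 0.7y) = 23 and that an exponential in the scaling factor is the model to try; both are assumptions on my part, not something derived from your actual f.

import numpy as np

# scaling factors applied to y, and the observed outputs
# (third factor assumed to be 0.7 rather than the duplicated 0.8)
scales = np.array([0.9, 0.8, 0.7])
observed = np.array([10.0, 15.0, 23.0])

# fit observed ≈ a * exp(b * s) via a straight-line fit in log space
b, log_a = np.polyfit(scales, np.log(observed), 1)
a = np.exp(log_a)

fitted = a * np.exp(b * scales)
residuals = observed - fitted        # the fit error at the data points
print(a, b, residuals)

A linear model fitted the same way (np.polyfit on the raw values) gives a second set of residuals to compare against, which is how you would decide between the two shapes.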
I would like to ask you something regarding a linear program for optimization. I have already successfully set up the model. However, I have problems setting up a metaheuristic to reduce the computation time. The basic optimization model can be seen here:
In the metaheuristic algorithm there is a while loop with a condition as follows:
while $ \sum_{i=1}^I b_i y_i \leq \sum_{k=1}^K q_k $ do
I tried to realize this condition with the following code:
while lpSum(b[i]*y[i] for i in I)<=lpSum(q[k] for k in K):
If I calculate the two sums separately, I get the right results for both. However, when I put them into this condition, the code runs into an endless loop, even when the condition is fulfilled and it should break out of the loop. I guess it has to do with the data type, and that the argument can't be an LpAffineExpression. However, I am really struggling to understand this problem.
I hope you understood my problem; I would really appreciate your ideas and explanations a lot! Please tell me if you need more information on something specific - sorry for being a beginner.
Thanks a lot in advance and best regards,
Bernhard
lpSums do not have a value, like a regular sum has.
Any Python object can be compared to other objects via special methods like __lt__ and __eq__. That is how I can say date(2000, 1, 1) < date(2000, 1, 2). However, LpAffineExpressions (which lpSums are) are meant to be used in constraints. Their contents are variables, which are solved by the LP solver, so they do not yet have any values.
Thus the return value of lpSum(x) <= lpSum(y) is not True or False, like with normal comparisons; it's a constraint expression. And a constraint is not None, or False, or any other falsey value. What you are writing is equivalent to while <some object>:, which is always true. Hence your infinite loop.
I don't know what "using a metaheuristic to reduce computation time" implies in this context - maybe you run a few iterations of the LP solver and then employ your metaheuristic on the result.
If that is the case, use b[i].value() to get the value the variable b[i] was given in that solution, and be sure to compute the total in a regular sum.
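As a sketch, reusing the names b, y, q, I and K from your snippet and assuming b and q are plain numbers while y holds LpVariables that have already been solved once, the check could look like this:

from pulp import value

# after prob.solve(), the variables carry concrete numbers
lhs = sum(b[i] * value(y[i]) for i in I)   # ordinary float, not an LpAffineExpression
rhs = sum(q[k] for k in K)                 # q is plain data, so a normal sum is fine

if lhs <= rhs:
    ...   # run another metaheuristic iteration, then re-solve and re-check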
I want to solve the following optimization problem with Python:
I have a black box function f with multiple variables as input.
The execution of the black-box function is quite time-consuming, therefore I would like to avoid a brute-force approach.
I would like to find the optimum input parameters for that black box function f.
In the following, for simplicity I just write the dependency for one dimension x.
An optimum parameter x is defined as one for which the cost function cost(x) is maximized, where cost is the sum of:
- the f(x) value
- a maximum standard deviation of f(x)
cost(x) = A * f(x) + B * max(standardDeviation(f(x)))
The parameters A and B are fixed.
E.g., for the picture below, the value of x at the position 'U' would be preferred over the value of x at the position 'V'.
My question is:
Is there any easily adaptable framework or process that I could utilize (similar to e.g. simulated annealing or Bayesian optimisation)?
As mentioned, I would like to avoid a brute force approach.
I’m still not 100% sure of your approach, but does this formula ring true to you:
A * max(f(x)) + B * max(standardDeviation(f(x)))
?
If it does, then I guess you may want to consider that maximizing f(x) may (or may not) be compatible with maximizing the standard deviation of f(x), which means you may be facing a multi-objective optimization problem.
Again, you haven’t specified what f(x) returns - is it a vector? I hope it is, otherwise I’m unclear on what you can calculate the standard deviation on.
The picture you posted is not so obvious to me. f(x) is the entire black curve; it has a maximum at the point v, but what can you say about the standard deviation? To calculate the standard deviation you have to take into account the entire f(x) curve (including the point u), not just the neighbourhood of u and v. If you only want the standard deviation in an interval around a maximum of f(x), then I think you're out of luck when it comes to frameworks. The best thing that comes to my mind is to use a local (or maybe global, better) optimization algorithm to hunt for the maximum of f(x) - simulated annealing, differential evolution, tunnelling, and so on - and then, when you have found a maximum of f(x), sample a few points to the left and right of your optimum and calculate the standard deviation of those evaluations. Then you'll have to decide whether the combination of the maximum of f(x) and this standard deviation is good enough compared to any previous "optimal" point found.
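To make that two-step idea concrete, here is a rough sketch using SciPy's differential_evolution; the toy f, the bounds, the sampling width and the weights A and B are all placeholders I made up, not values from your problem.

import numpy as np
from scipy.optimize import differential_evolution

def f(x):
    # placeholder for the expensive black-box function (one dimension here)
    x = np.atleast_1d(x)[0]
    return np.exp(-(x - 2.0) ** 2) + 0.5 * np.exp(-(x + 1.0) ** 2)

# step 1: hunt for a maximum of f by minimising -f
result = differential_evolution(lambda z: -f(z), bounds=[(-5.0, 5.0)], seed=0)
x_opt = result.x[0]

# step 2: sample a few points around the optimum and estimate the local spread
neighbours = x_opt + np.linspace(-0.2, 0.2, 9)      # assumed sampling width
local_std = np.std([f(x) for x in neighbours])

A, B = 1.0, 1.0                                      # assumed weights
cost = A * f(x_opt) + B * local_std
print(x_opt, f(x_opt), local_std, cost)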
This is all speculation, as I'm unsure whether your problem is really an optimization one or simply a "peak finding" exercise, for which there are many different - and more powerful and more adequate - methods.
Andrea.
I am trying to optimize a certain function using the Nelder-Mead method and I need help understanding some of the arguments. I am fairly new to the world of numerical optimization, so please forgive my ignorance of what might be obvious to more experienced users. I note that I already looked at minimize(method='Nelder-Mead') and at scipy.optimize.minimize, but they were not as much help as I had hoped. I am trying to optimize a function $f$ under two conditions: (i) I want the optimization to stop once the $f$ value is below a certain value, and (ii) once the argument is around the optimal value, I don't want the optimizer to increase the step again (i.e., once it gets below the threshold value and stays below for a couple of iterations, I would like the optimization to terminate). Here is the optimization code I use:
scipy.optimize.minimize(fun=f, x0=init_pos, method="nelder-mead",
                        options={"initial_simplex": simplex,
                                 "disp": True, "maxiter": 25,
                                 "fatol": 0.50, "adaptive": True})
where f is my function (f : RxR -> [0,sqrt(2))). I understand that x0=init_pos are initial values for f, "initial_simplex": simplex is the initial triangle (in my 2D case), "maxiter" : 25 means that the optimizer will run up to 25 iterations before terminating.
Here are things I do not understand/I am not sure about:
The documentation says "fatol: Absolute error in func(xopt) between iterations that is acceptable for convergence." Since the optimal value for my function is f(xopt) = 0, does "fatol": 0.50 mean that the optimization will terminate once f(x) reaches a value of 0.5 or less? If not, how do I modify the condition for termination (in my case, how do I ensure that it stops once f(x) <= 0.5)? I am OK with the optimizer running a few more iterations around the region giving < 0.5, but right now it tends to jump out of the near-optimal region in a seemingly random way, and I would like to be able to prevent that (if possible).
Likewise, as far as I understand, "xatol: Absolute error in xopt between iterations that is acceptable for convergence." means that the optimization will terminate once the difference between the optimal and the present arguments is at most xatol. Since in principle I do not know a priori what xopt is, does it mean in practice that the optimizer will stop once |x_n - x_(n+1)| < xatol? If not, is there a way of adding a constraint to stop the function once it is near the optimal point?
I would appreciate it if someone could either answer these questions or give me a better reference than the SciPy documentation.
fatol: this condition stops the algorithm as soon as |f(x_n) - f(x_(n+1))| < fatol.
xatol: same idea - this condition stops the algorithm as soon as |x_n - x_(n+1)| < xatol.
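If you also want the hard stop from condition (i) - terminate as soon as f(x) <= 0.5 - one crude trick is to raise an exception from inside the objective. This is only a sketch, not an official SciPy feature, and the toy objective below stands in for your real f:

import numpy as np
from scipy.optimize import minimize

class GoodEnough(Exception):
    """Raised by the wrapped objective once the target value is reached."""

def stop_below(f, threshold):
    # wrap f so that evaluation aborts the optimisation once f(x) <= threshold
    def wrapped(x):
        val = f(x)
        if val <= threshold:
            raise GoodEnough(x, val)
        return val
    return wrapped

def f(x):
    # toy stand-in for the real objective
    return np.sqrt((x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2)

try:
    res = minimize(stop_below(f, 0.5), x0=[0.5, -1.5], method="Nelder-Mead",
                   options={"fatol": 0.5, "xatol": 1e-3, "maxiter": 25})
    x_best, f_best = res.x, res.fun
except GoodEnough as hit:
    x_best, f_best = hit.args

print(x_best, f_best)

Because the run aborts at the first value at or below the threshold, it also avoids the behaviour where the optimizer later wanders back out of the near-optimal region.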
I am trying to understand the idea of optimize.minimize. However, I am stuck on a small problem: I do not understand why we need an initial guess (x0) when using optimize.minimize. Can anyone help me out?
Thank you so much.
Numerical optimisation basically says: here's a function f.
Let's say we're trying to find a minimum. Let's add a bit to our starting variable: what is f(x + a)? Does it go down? And if we add a bit more, is f(x + a + b) smaller than that? Eventually, after trying a ton of different inputs, going up and down, you'll have a good idea of how the function behaves and where it is minimised.
To do this, you need to start someplace so that you can add or subtract from the x part of f(x).
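A tiny illustration with a made-up one-dimensional function that has two valleys: the same optimiser lands in a different minimum depending on where it starts.

from scipy.optimize import minimize

def f(x):
    # made-up function with one valley near x = -1 and another near x = +1
    return (x[0] ** 2 - 1.0) ** 2 + 0.05 * x[0]

# same optimiser, different starting guesses, different minima found
left = minimize(f, x0=[-2.0], method="Nelder-Mead")
right = minimize(f, x0=[2.0], method="Nelder-Mead")
print(left.x, right.x)   # one run ends near x = -1, the other near x = +1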