I am trying to maximize a function f(a, b, c, d) by minimizing its negative with scipy.optimize.minimize. d is a numpy.array of guess variables.
I want to put bounds on each element of d, and also a constraint on d such that (d1 * a1 + d2 * a2 + ... + dn * an) < some_value (a being another argument of the function f).
My problem is how to pass this constraint as an argument to the minimize call.
I could not find any maximize function in the library, so I am using the negative of minimize; the minimize documentation is here.
Please consider asking for clarifications if the question is not clear enough.
It's not totally clear from your description which of the parameters of f you are optimizing over. For the purposes of this example I'm going to use x to refer to the vector of parameters you are optimizing over, and a to refer to another parameter vector of the same length which is held constant.
Now let's suppose you wanted to enforce the following inequality constraint:
10 <= x[0] * a[0] + x[1] * a[1] + ... + x[n] * a[n]
First you must define a function that accepts x and a and returns a value that is non-negative when the constraint is met. In this case we could use:
lambda x, a: (x * a).sum() - 10
or equivalently:
lambda x, a: x.dot(a) - 10
Constraints are passed to minimize in a dict (or a sequence of dicts if you have multiple constraints to apply):
con = {'type': 'ineq',
       'fun': lambda x, a: a.dot(x) - 10,
       'jac': lambda x, a: a,
       'args': (a,)}
For greater efficiency I've also defined a function that returns the Jacobian (the sequence of partial derivatives of the constraint function w.r.t. each parameter in x), although this is not essential - if unspecified it will be estimated via first-order finite differences.
Your call to minimize would then look something like:
res = minimize(f, x0, args=(a,), method='SLSQP', constraints=con)
You can find another complete example of constrained optimization using SLSQP in the official documentation here.
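Putting it all together for the original maximization problem, a minimal self-contained sketch might look like the following (the objective, bounds, and numbers are made up for illustration; maximization is done by minimizing the negative of f):

import numpy as np
from scipy.optimize import minimize

a = np.array([1.0, 2.0, 3.0])    # fixed parameter vector
x0 = np.array([1.0, 1.0, 1.0])   # initial guess for the optimized vector

def f(x, a):
    # toy objective to maximize; replace with your own
    return -np.sum((x - a) ** 2)

con = {'type': 'ineq',
       'fun': lambda x, a: a.dot(x) - 10,  # enforces a.dot(x) >= 10
       'jac': lambda x, a: a,
       'args': (a,)}

bounds = [(0, 5)] * len(x0)  # example bounds on each element of x

# maximize f by minimizing its negative
res = minimize(lambda x, a: -f(x, a), x0, args=(a,),
               method='SLSQP', bounds=bounds, constraints=con)
print(res.x, -res.fun)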
Given a SymPy function f(x) and values a, b (a != b), is there a way to find the minimum and maximum value of f(x) on this interval? I've found some code for finding extrema that can be adapted for this purpose (split them into a min and max array, find the lowest and highest values respectively with lambdify, and use those), but surely there must be an easier way?
An alternative option would be using np.linspace, but then I might miss out on exact values, which would be bad for the things I have to do with them next.
As now noted in the cited page, since this PR you should be able to do the following:
from sympy import symbols, Interval
from sympy.calculus.util import minimum, maximum

x = symbols('x')
f = (x**3 / 3) - (2 * x**2) - 3 * x + 1
ivl = Interval(0, 3)  # e.g. your (a, b)

print(minimum(f, x, ivl))
print(maximum(f, x, ivl))
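For this particular f, the derivative x**2 - 4*x - 3 has no roots inside [0, 3], so these calls should print -17 (the value at x = 3) and 1 (the value at x = 0).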
This code is supposed to compute the derivatives for a Taylor expansion that is 5 derivatives long. ds(i) is supposed to replace the zero entries of the array with the new values (the derivative values). I keep getting the error "cannot assign to function call".
def derivatives(f, x, a, n):
    f = f(x)
    x = var
    a = 1.0
    n = 5
    ds = np.zeros(n)
    exp = f(x)
    for i in range(n):
        exp = sp.diff(exp, x)
        ds(i) = exp.replace(x, a)
    return ds
You probably meant ds[i], not ds(i): square brackets for indexing vs. round parentheses for function calls. That said, the code has other issues, from the undefined var to using a NumPy array (?) to store SymPy objects. In general, keep in mind that SymPy works primarily with expressions, not with functions. Expressions do not "take arguments"; they are not like callable functions in Python.
And all of this is unnecessary, because SymPy computes the n-th derivative on its own. For example, the 5th derivative of exp(2*x) at 0:
import sympy as sp

x = sp.symbols('x')
f = sp.exp(2*x)  # an expression, not a function
n = 5
a = 0
print(f.diff(x, n).subs(x, a))  # take the derivative n times, then plug in a for x
prints 32. Or, if you want a Taylor expansion up to and including x**n:
print(f.series(x, a, n + 1))
prints 1 + 2*x + 2*x**2 + 4*x**3/3 + 2*x**4/3 + 4*x**5/15 + O(x**6).
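If you still want a helper like the one in the question, a minimal sketch could look like this (using a plain Python list instead of a NumPy array, and subs instead of replace; the function name is just illustrative):

import sympy as sp

def derivative_values(expr, x, a, n):
    # return the first n derivatives of expr evaluated at x = a
    ds = []
    d = expr
    for _ in range(n):
        d = sp.diff(d, x)        # differentiate once more
        ds.append(d.subs(x, a))  # evaluate at the expansion point
    return ds

x = sp.symbols('x')
print(derivative_values(sp.exp(2*x), x, 0, 5))  # [2, 4, 8, 16, 32]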
This may be a simple question, but I looked for it on the net and found nothing that explains it well (I'm not a great Python programmer...).
I want to minimize a function with constraints, so I use the minimize function from scipy.optimize. I want to minimize over the variable x (the only variable), but I also have other parameters (that I could change easily if needed).
For example, here a basic function that I want to minimize:
import scipy.optimize as so
def goal(x, a):  # the variable is x, a is just a parameter
    return 2 * (x - 6) ** 2 + a
Constraint:
cons = ({'type': 'ineq', 'fun': lambda x: x})
Initial:
x0 = 10
res = so.minimize(goal, x0, method = 'SLSQP', constraints = cons)
res
Of course if I put something like:
goal = lambda x: 2 * (x - 6) ** 2 + 3  # so a = 3
I get something, but I have to change a "manually"...
I think it involves *args or something like that? (I'm not that good :( )
Thanks if someone could give me a bit of code for this example ;)
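As shown in the constrained-optimization answer earlier on this page, extra parameters can be passed through minimize's args argument. A minimal sketch for this example (choosing a = 3 arbitrarily):

import scipy.optimize as so

def goal(x, a):  # x is the optimization variable, a is just a parameter
    return 2 * (x - 6) ** 2 + a

cons = ({'type': 'ineq', 'fun': lambda x: x},)
x0 = 10

# pass a through args so it can be changed without editing goal
res = so.minimize(goal, x0, args=(3,), method='SLSQP', constraints=cons)
print(res.x)  # should be close to 6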
I've been looking through the minimize function declaration files, and I am really confused as to how the function works. So for example, if I have something like this:
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize
encoderdistance = 2.53141952655
Dx = lambda t: -3.05 * np.sin(t)
Dy = lambda t: 2.23 * np.cos(t)
def func(x): return np.sqrt(Dx(x)**2 + Dy(x)**2)
print minimize(lambda x: abs(quad(func, 0, x)[0] - encoderdistance), 1).x
print minimize(lambda x: abs(4.24561823393 - encoderdistance), 1).x
The second print statement at the bottom yields a different result than the one above it, even though I substituted the value the quad function produced in place of the call. If this is due to the lambda x part, can you explain how that affects that line of code exactly? Also, how would you type the second-to-last line into a calculator such as Wolfram Alpha? Thanks!
The optimizer needs a function to minimize -- that's what the lambda x: is about.
In the second-to-last line, you're asking the optimizer to find a value of x such that the integral of func from 0 to x is close to encoderdistance.
In the last line, the function to be minimized is just a scalar value, with no dependence on x, and the optimizer bails out because it can't change that.
How scipy.minimize works is described here but that isn't your issue. You have two lambda functions that are definitely not the same:
lambda x: abs(quad(func, 0, x)[0] - encoderdistance)
lambda x: abs(4.24561823393 - encoderdistance)
The first is a 'V'-shaped function while the second is a horizontal line. scipy finds the minimum of the 'V' at about 1.02 and cannot perform any minimization on a horizontal line so it returns your initial guess: 1.
Here is how you could do it in Mathematica:
Dx[t_] := -3.05*Sin[t]
Dy[t_] := 2.23*Cos[t]
func[x_] := Sqrt[Dx[x]^2 + Dy[x]^2]
encoderdistance = 2.53141952655;
fmin[x_?NumberQ] := Abs[NIntegrate[func[t], {t, 0, x}] - encoderdistance]
NMinimize[fmin[x], x][[2]][[1]][[2]]
With regard to your first question, in the statement:
print minimize(lambda x: abs(4.24561823393 - encoderdistance), 1).x
your lambda function is a constant independent of the argument x. minimize quits immediately after observing that the function does not decrease after several variations of the argument.
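You can reproduce this behaviour with any constant objective; a tiny sketch:

from scipy.optimize import minimize

# the objective never changes, so the optimizer returns the initial guess
print(minimize(lambda x: 5.0, 1).x)  # [1.]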
If I do
import sympy
t, k = sympy.symbols('t k')
V, Vprime = sympy.symbols('V Vprime', cls=sympy.Function)
print sympy.diff(k + V(t), t)
I get Derivative(V(t), t) as I expect - the derivative distributes and the constant term has zero derivative.
However, if I construct an equivalent expression via substitution, simplify does not distribute the derivative. How can I get the same result via substitution as when I evaluate the expression directly?
sympy.diff(Vprime(t)).subs({Vprime(t): k + V(t)}).simplify()
returns Derivative(k + V(t), t).
The solution to this problem is provided by the doit method, which says it "Evaluate(s) objects that are not evaluated by default like limits, integrals, sums and products.":
sympy.diff(Vprime(t)).subs({Vprime(t): k + V(t)}).doit()
yields
Derivative(V(t), t)