I've been looking through the minimize function declaration files, and I am really confused as to how the function works. So for example, if I have something like this:
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize
encoderdistance = 2.53141952655
Dx = lambda t: -3.05 * np.sin(t)
Dy = lambda t: 2.23 * np.cos(t)
def func(x): return np.sqrt(Dx(x)**2 + Dy(x)**2)
print(minimize(lambda x: abs(quad(func, 0, x)[0] - encoderdistance), 1).x)
print(minimize(lambda x: abs(4.24561823393 - encoderdistance), 1).x)
The second print statement will yield a different result than the first, even though I substituted the quad call with the value it produces. If this is due to the lambda x part, can you explain how that affects that line of code exactly? Also, how would you type the second-to-last line into a calculator such as Wolfram Alpha? Thanks!
The optimizer needs a function to minimize -- that's what the lambda x: is about.
In the second-to-last line, you're asking the optimizer to find a value of x such that the integral of func from 0 to x is close to encoderdistance.
In the last line, the function to be minimized is just a scalar constant with no dependency on x, and the optimizer bails out because it has nothing to vary.
How scipy.optimize.minimize works is described here, but that isn't your issue. You have two lambda functions that are definitely not the same:
lambda x: abs(quad(func, 0, x)[0] - encoderdistance)
lambda x: abs(4.24561823393 - encoderdistance)
The first is a 'V'-shaped function while the second is a horizontal line. scipy finds the minimum of the 'V' at about 1.02, and since it cannot perform any minimization on a horizontal line, it returns your initial guess: 1.
Here is how you could do it in Mathematica:
Dx[t_] := -3.05*Sin[t]
Dy[t_] := 2.23*Cos[t]
func[x_] := Sqrt[Dx[x]^2 + Dy[x]^2]
encoderdistance = 2.53141952655;
fmin[x_?NumberQ] :=
  Abs[NIntegrate[func[t], {t, 0, x}] - encoderdistance]
NMinimize[fmin[x], x][[2]][[1]][[2]]
With regard to your first question, in the statement:
print(minimize(lambda x: abs(4.24561823393 - encoderdistance), 1).x)
your lambda function is a constant, independent of the argument x. minimize quits immediately after observing that the function does not decrease as it varies the argument.
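To see the difference concretely, here is a minimal sketch (the objective functions and numbers are made up for illustration):
from scipy.optimize import minimize

# An objective that depends on x: the optimizer can make progress.
print(minimize(lambda x: (x[0] - 3.0) ** 2, [1.0]).x)  # converges to ~[3.]

# A constant objective: nothing depends on x, so the initial guess comes back.
print(minimize(lambda x: 1.7, [1.0]).x)                # stays at [1.]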
Related
Why doesn't the following function pick up the inner h value defined in the function body, and why does it give weird results (as if h had some arbitrary value)?
import math

def diff(f):
    h = 0.001
    return lambda x: (f(x + h) - f(x)) / h

def sin_by_million(x):
    return math.sin(10 ** 6 * x)

>>> diff(sin_by_million)(0)
826.8795405320026
Instead of 1000000?
As per @ThierryLathuille's comment, your step h is too big. In real life, you should adapt it to the function and to the value at which you want the derivative.
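For example, a sketch of the same diff with a much smaller, configurable step (1e-9 is an arbitrary choice that happens to work here):
import math

def diff(f, h=1e-9):
    # Forward-difference derivative with a configurable step size
    return lambda x: (f(x + h) - f(x)) / h

def sin_by_million(x):
    return math.sin(10 ** 6 * x)

print(diff(sin_by_million)(0))  # ~999999.8, close to the true derivative 1e6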
Check out jax instead:
import jax
import jax.numpy as np
def sin_by_million(x):
    return np.sin(1e6 * x)
Then:
>>> g = jax.grad(sin_by_million)
>>> g(0.0)
DeviceArray(1000000., dtype=float32)
The beauty of jax is that it actually compiles your call tree using the chain rule and produces optimized code (calls after the first one are much, much faster). It also works on multivariate functions and complex code (with some restrictions, though), and it runs wonderfully well and fast on GPUs.
This might be a simple question, but I looked for it on the net and found nothing that explains it well (I'm not a great pythoner...).
I want to minimize a function with constraints, so I use the minimize function from scipy.optimize. I want to minimize over the variable x (the only variable), but I would also like the function to take other parameters (that I could change easily if needed).
For example, here a basic function that I want to minimize:
import scipy.optimize as so
def goal(x, a):  # The variable is x, a is just a parameter
    return 2 * (x - 6) ** 2 + a
Constraint:
cons = ({'type': 'ineq', 'fun': lambda x: x})
Initial:
x0 = 10
res = so.minimize(goal, x0, method = 'SLSQP', constraints = cons)
res
Of course if I put something like:
goal = lambda x: 2 * (x - 6) ** 2 + 3  # so a = 3
I get something, but then I have to change the a "manually"...
I think it's something with *a or similar? (I'm not that good :( )
Thanks if someone could give me a bit of code for that example ;)
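For what it's worth, minimize passes anything in its args keyword through to the objective, so a minimal sketch for this exact example would be:
import scipy.optimize as so

def goal(x, a):  # The variable is x, a is just a parameter
    return 2 * (x - 6) ** 2 + a

cons = {'type': 'ineq', 'fun': lambda x: x}
res = so.minimize(goal, 10, args=(3,), method='SLSQP', constraints=cons)  # a = 3
print(res.x)  # ~[6.]
Changing a is then just a matter of changing the args tuple, e.g. args=(5,).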
In python, I have two functions f1(x) and f2(x) returning a number. I would like to calculate a definite integral after their multiplication, i.e., something like:
scipy.integrate.quad(f1*f2, 0, 1)
What is the best way to do it? Is it even possible in python?
I found out just a second ago that I can use lambda :)
scipy.integrate.quad(lambda x: f1(x)*f2(x), 0, 1)
Anyway, I'm leaving it here. Maybe it will help somebody out.
When I had the same problem, I used this (based on the suggestion above):
from scipy.integrate import quad

def f1(x):
    return x

def f2(x):
    return x**2

ans, err = quad(lambda x: f1(x)*f2(x), 0, 1)
print("the result is", ans)
I am trying to maximize a function f(a, b, c, d) with scipy.optimize.minimize by minimizing its negative. d is a numpy.array of guess variables.
I am trying to put bounds on each element of d, and also a constraint on d such that (d1 * a1 + d2 * a2 + ... + dn * an) < some_value (a being the other argument to the function f).
My problem is how to define this constraint as an argument to the minimize call.
I could not find any maximize function in the library, so I'm minimizing the negative of f; the minimize documentation is here.
Please consider asking for clarifications if the question is not clear enough.
It's not totally clear from your description which of the parameters of f you are optimizing over. For the purposes of this example I'm going to use x to refer to the vector of parameters you are optimizing over, and a to refer to another parameter vector of the same length which is held constant.
Now let's suppose you wanted to enforce the following inequality constraint:
10 <= x[0] * a[0] + x[1] * a[1] + ... + x[n] * a[n]
First you must define a function that accepts x and a and returns a value that is non-negative when the constraint is met. In this case we could use:
lambda x, a: (x * a).sum() - 10
or equivalently:
lambda x, a: x.dot(a) - 10
Constraints are passed to minimize in a dict (or a sequence of dicts if you have multiple constraints to apply):
con = {'type': 'ineq',
       'fun': lambda x, a: a.dot(x) - 10,
       'jac': lambda x, a: a,
       'args': (a,)}
For greater efficiency I've also defined a function that returns the Jacobian (the sequence of partial derivatives of the constraint function w.r.t. each parameter in x), although this is not essential - if unspecified it will be estimated via first-order finite differences.
Your call to minimize would then look something like:
res = minimize(f, x0, args=(a,), method='SLSQP', constraints=con)
You can find another complete example of constrained optimization using SLSQP in the official documentation here.
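Putting the pieces together with the maximize-by-negation trick from the question, here is a sketch with made-up data (f, a, and the bounds are all invented for illustration):
import numpy as np
from scipy.optimize import minimize

a = np.array([1.0, 2.0, 3.0])           # fixed parameter vector (made up)

def f(x, a):
    # Toy objective to maximize: it peaks at x == a
    return -np.sum((x - a) ** 2)

con = {'type': 'ineq',
       'fun': lambda x, a: a.dot(x) - 10,
       'jac': lambda x, a: a,
       'args': (a,)}

bnds = [(0, 5)] * 3                     # example bounds on each element of x

res = minimize(lambda x, a: -f(x, a),   # minimize the negative to maximize f
               np.ones(3), args=(a,), method='SLSQP',
               bounds=bnds, constraints=con)
print(res.x)                            # ~[1. 2. 3.]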
Just started learning python, and I was asked to define a python function that integrates a math function.
We were instructed that the python function must be in the following form: (for example, to calculate the area of y = 2x + 3 between x=1 and x=2 )
integrate( 2 * x + 3, 1, 2 )
(it should return the area below)
and we are not allowed to use/import any libraries other than math (and the built-in integration tool is not allowed either).
Any idea how I should go about it? When I wrote the program, I always got "x is undefined", but if I define x as a value (let's say 0), then the 2*x+3 in the parameters is always taken as a value instead of a math expression, so I can't really use it inside.
It would be very helpful, not just for this assignment but for many in the future, if I knew how a python function can take a math expression as a parameter, so thanks a lot.
Let's say your integration function looks like this:
def integrate(func, lo_x, hi_x):
    #... Stuff to perform the integral, which will need to evaluate
    # the passed function for various values of x, like this
    y = func(x)
    #... more stuff
    return value
Then you can call it like this:
value = integrate(lambda x: 2 * x + 3, 1, 2)
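For instance, a minimal concrete version of that skeleton using the midpoint rule (the step count of 1000 is an arbitrary choice):
def integrate(func, lo_x, hi_x, steps=1000):
    # Midpoint-rule approximation of the definite integral of func
    width = (hi_x - lo_x) / steps
    total = 0.0
    for i in range(steps):
        x = lo_x + (i + 0.5) * width    # midpoint of the i-th slice
        total += func(x) * width
    return total

print(integrate(lambda x: 2 * x + 3, 1, 2))  # 6.0, exact for linear functions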
edit
However, if the call to the integration function has to look exactly like
integrate( 2 * x + 3, 1, 2 )
then things are a bit trickier. If you know that the function is only going to be called with a polynomial function you could do it by making x an instance of a polynomial class, as suggested by M. Arthur Vaïsse in his answer.
Or, if the integrate( 2 * x + 3, 1, 2 ) comes from a string, eg from a command line argument or a raw_input() call, then you could extract the 2 * x + 3 (or whatever) from the string using standard Python string methods and then build a lambda function from that using exec.
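A sketch of that string-based approach with eval (expr here is a made-up example, and the usual caveat applies: eval on untrusted input is unsafe):
expr = "2 * x + 3"                # e.g. extracted from the input string
func = eval("lambda x: " + expr)  # build a callable from the text
print(func(1))                    # 5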
Here comes an implementation that fills the needs, I think. It allows you to define mathematical functions such as 2x+3, and proposes an implementation of integral calculation by steps as described here [http://en.wikipedia.org/wiki/Darboux_integral].
import math

class PolynomialEquation():
    """ Allow to create functions that are polynomial """

    def __init__(self, coef):
        """
        coef : coefficients of the polynomial.
        An equation initialized with [1,2,3] as parameters is equivalent to:
        y = 1 + 2X + 3X²
        """
        self.coef = coef

    def __call__(self, x):
        """
        Make the object callable like a function.
        Return the value of the equation for x.
        """
        return sum([self.coef[i] * (x ** i) for i in range(len(self.coef))])

def step_integration(function, start, end, steps=100):
    """
    Proceed to a step integration of the function.
    The more steps there are, the better the approximation.
    Note: math.fabs means absolute values are summed, i.e. this returns
    the unsigned area between the curve and the x axis.
    """
    step_size = (end - start) / steps
    values = [start + i * step_size for i in range(1, steps + 1)]
    return sum([math.fabs(function(value) * step_size) for value in values])

if __name__ == "__main__":
    # Check that PolynomialEquation.__call__ works properly.
    # assert makes the program crash if the test is False.
    # y = 2x+3 -> y = 3+2x -> PolynomialEquation([3,2])
    eq = PolynomialEquation([3, 2])
    assert eq(0) == 3
    assert eq(1) == 5
    assert eq(2) == 7
    # y = 1 + 2X + 3X² -> PolynomialEquation([1,2,3])
    eq2 = PolynomialEquation([1, 2, 3])
    assert eq2(0) == 1
    assert eq2(1) == 6
    assert eq2(2) == 17
    print(step_integration(eq, 0, 10))
    print(step_integration(math.sin, 0, 10))
EDIT: in truth, this implementation is only the upper Darboux integral. The true Darboux integral could be computed, if really needed, by also computing the lower Darboux integral (replacing range(1, steps+1) by range(steps) in step_integration gives you the lower Darboux sum), and then increasing the steps parameter while the difference between the two Darboux sums is greater than some small value that depends on your precision needs (0.001, for example). Thus a 100-step integration is supposed to give you a decent approximation of the integral value.
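For reference, the lower-sum variant that the EDIT describes would look like this (same assumptions as step_integration above, including the absolute values):
def lower_step_integration(function, start, end, steps=100):
    # Same as step_integration, but samples the left end of each step
    step_size = (end - start) / steps
    values = [start + i * step_size for i in range(steps)]
    return sum([math.fabs(function(value) * step_size) for value in values])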