This might be a simple question, but I searched the net and found nothing that explains it well (I'm not a great Python programmer..)
I want to minimize a function with constraints, so I use the minimize function from scipy.optimize. I want to minimize over the variable x (the only variable), but the function also takes other parameters (which I'd like to be able to change easily if needed).
For example, here a basic function that I want to minimize:
import scipy.optimize as so
def goal(x, a):  # the variable is x; a is just a parameter
    return 2 * (x - 6) ** 2 + a
Constraint:
cons = ({'type': 'ineq', 'fun': lambda x: x})
Initial:
x0 = 10
res = so.minimize(goal, x0, method = 'SLSQP', constraints = cons)
res
Of course if I put something like:
goal = lambda x: 2 * (x - 6) ** 2 + 3  # so a = 3
I get something, but then I have to change a "manually"...
I think it's something with *a or similar? (I'm not that good :( )
Thanks if someone could give me a bit of code for that example ;)
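For reference, minimize accepts an args tuple of extra parameters that it forwards to the objective on every call; a minimal sketch based on the function above:

```python
import scipy.optimize as so

def goal(x, a):  # x is the optimization variable, a is a fixed parameter
    return 2 * (x - 6) ** 2 + a

cons = ({'type': 'ineq', 'fun': lambda x: x},)
x0 = 10

# args=(3,) forwards a=3 to goal; change the tuple to change a
res = so.minimize(goal, x0, args=(3,), method='SLSQP', constraints=cons)
print(res.x)  # close to [6.]
```

Changing a is then just a matter of changing the tuple passed to args, without touching the function definition.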
Related
I'm making a solver of cubic equations in Python that includes division of polynomials.
from sympy import symbols, sqrt

# Coefficients
a = int(input("1st coef: "))
b = int(input("2nd coef: "))
c = int(input("3rd coef: "))
d = int(input("Const: "))

# Polynomial of x
def P(x):
    return a*x**3 + b*x**2 + c*x + d

x = symbols('x')

# Find 1 root by Cardano
R = (3*a*c - b**2) / (9*a**2)
Q = (3*b**3 - 9*a*b*c + 27*a**2*d) / (54*a**3)
Delta = R**3 + Q**2
y = (-Q + sqrt(Delta))**(1/3) + (-Q - sqrt(Delta))**(1/3)
x_1 = y - b/(3*a)

# Division
P(x) / (x - x_1) = p(x)
print(p(x))  # Just a placeholder
The program returns the error "cannot assign to operator" and highlights the P(x) after the # Division comment.
What I tried doing was to assign a variable to a polynomial and then dividing:
z = P(x)
w = x - x_1
p = z / w
print(p)
But alas: it just returns a plain old quotient (with a = 1, b = 4, c = -9, d = -36):
(x**3 + 4*x**2 - 9*x - 36)/(x - 2.94254537742264)
Does anyone out here know what to do in this situation? (Not to mention the non-exact value of x_1: the roots of x^3 + 4x^2 - 9x - 36 = 0 are 3, -4, and -3, with no floaty-irrational-messy-ugly things in sight.)
tl;dr: Polynomial division confusion and non-exact roots
I am not sure what exactly your question is, but here is an attempt at an answer.
The line
P(x) / (x - x_1) = p(x)
is problematic for multiple reasons. First of all, it's important to know that the = operator in Python (and in a lot of other modern programming languages) is an assignment operator. You seem to come from more of a math background, so consider it to be something like the := operator. The direction is always fixed: with a = b you are always assigning the value of b to the variable a. In your case you are trying to assign the value of p(x) to an expression, which does not make much sense:
Python can't assign anything to an expression (At least not as far as I know)
p(x) is not yet defined
The second problem is that you are mixing python functions with math functions.
A python function looks something like this:
def some_function(some_parameter):
    print("Some important thing!: ", some_parameter)
    some_return_value = 42
    return some_return_value
It (can) take some variable(s) as input, do a bunch of things with them, and then (can) return something else. They are generally called with the bracket operator (). I.e. some_function(42) translates to execute some_function and substitute the first parameter with the value 42. An expression in sympy however is as far as python is concerned just an object/variable.
So basically you could have just written P = a*x**3 + b*x**2 + c*x + d. What your P(x) function is doing is taking the expression a*x**3 + b*x**2 + c*x + d, substituting x with whatever you have put in the brackets, and then giving it back as a sympy expression. (It's important to understand that the x in your P python function has nothing to do with the x you define later! Because of that, one usually tries to avoid such "false friends" in coding.)
Also, a math function in sympy is really just an expression formed from sympy symbols. As far as sympy is concerned, the return value of the P function is a (mathematical) function of the symbols a,b,c,d and the symbol you put into the brackets. This is why, whenever you want to integrate or differentiate, you will need to specify by which symbol to do that.
So the line should have looked something like this.
p = P(x) / (x - x_1)
Or you replace the P(x) function with P = a*x**3 + b*x**2 + c*x + d and end up with
p = P / (x - x_1)
Thirdly, if you would like to have the expression simplified, you should take a look here (https://docs.sympy.org/latest/tutorial/simplification.html). There are multiple ways of simplifying expressions, depending on what sort of expression you want as a result. To keep code fast, sympy only simplifies your expression if you specifically ask for it.
You might however be disappointed with the results, as the line
y = (-Q + sqrt(Delta))**(1/3) + (-Q - sqrt(Delta))**(1/3)
will do an implicit conversion to floating point numbers, and you are going to end up with rounding problems. To blame is the (1/3) part which will evaluate to 0.33333333 before ever seeing sympy. One possible fix for this would be
y = (-Q + sqrt(Delta))**(sympy.Rational(1,3)) + (-Q - sqrt(Delta))**(sympy.Rational(1,3))
(You might need to add import sympy at the top)
Generally, it might be worth learning a bit more about python. It's a language that mostly tries to get out of your way with annoying technical details. This unfortunately however also means that things can get very confusing when using libraries like sympy, that heavily rely on stuff like classes and operator overloading. Learning a bit more python might give you a better idea about what's going on under the hood, and might make the distinction between python stuff and sympy specific stuff easier. Basically, you want to make sure to read and understand this (https://docs.sympy.org/latest/gotchas.html).
Let me know if you have any questions, or need some resources :)
Why doesn't the following function seem to use the inner h value defined in the function body, and instead gives weird results (as if h had some arbitrary value)?
import math

def diff(f):
    h = 0.001
    return lambda x: (f(x + h) - f(x)) / h

def sin_by_million(x):
    return math.sin(10 ** 6 * x)
>>> diff(sin_by_million)(0)
826.8795405320026
Instead of 1000000?
As per @ThierryLathuille's comment, your step h is too big. In real life, you should adapt it based on the function and the value at which you want the derivative.
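To illustrate (a sketch, not the only possible fix): shrinking h and switching to a central difference brings the result close to the expected 10**6:

```python
import math

def diff(f, h=1e-9):
    # Central difference: truncation error shrinks as O(h**2) instead of O(h)
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

def sin_by_million(x):
    return math.sin(10 ** 6 * x)

print(diff(sin_by_million)(0))  # close to 1000000
```

The closure itself was never the problem; h = 0.001 is simply enormous relative to the wavelength of sin(10**6 * x).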
Check out jax instead:
import jax
import jax.numpy as np

def sin_by_million(x):
    return np.sin(1e6 * x)
Then:
>>> g = jax.grad(sin_by_million)
... g(0.0)
DeviceArray(1000000., dtype=float32)
The beauty of jax is that it actually differentiates your call tree using the chain rule and produces compiled code (the calls after the first one are much, much faster). It also works on multivariate functions and complex code (with some rules, though). And it works wonderfully well and fast on GPUs.
I've been looking through the minimize function declaration files, and I am really confused as to how the function works. So for example, if I have something like this:
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize
encoderdistance = 2.53141952655
Dx = lambda t: -3.05 * np.sin(t)
Dy = lambda t: 2.23 * np.cos(t)
def func(x): return np.sqrt(Dx(x)**2 + Dy(x)**2)
print(minimize(lambda x: abs(quad(func, 0, x)[0] - encoderdistance), 1).x)
print(minimize(lambda x: abs(4.24561823393 - encoderdistance), 1).x)
the second print statement at the bottom will yield a different result than the one on the top even though I subbed out the quad function for the value it produced. If this is due to the lambda x part, can you explain how that affects that line of code exactly? Also, how would you type the second to last line into a calculator such as wolfram alpha? Thanks!
The optimizer needs a function to minimize -- that's what the lambda x: is about.
In the second-to-last line, you're asking the optimizer to find a value of x such that the integral of func from 0 to x is close to encoderdistance.
In the last line, the function to be minimized is just a constant, with no dependency on x, and the optimizer bails out because there is nothing it can change.
How scipy.minimize works is described here but that isn't your issue. You have two lambda functions that are definitely not the same:
lambda x: abs(quad(func, 0, x)[0] - encoderdistance)
lambda x: abs(4.24561823393 - encoderdistance)
The first is a 'V'-shaped function while the second is a horizontal line. scipy finds the minimum of the 'V' at about 1.02 and cannot perform any minimization on a horizontal line so it returns your initial guess: 1.
Here is how you could do it in Mathematica:
Dx[t_] := -3.05*Sin[t]
Dy[t_] := 2.23*Cos[t]
func[x_] := Sqrt[Dx[x]^2 + Dy[x]^2]
encoderdistance = 2.53141952655;
fmin[x_?NumberQ] :=
Abs[NIntegrate[func[t], {t, 0, x}] - encoderdistance]
NMinimize[fmin[x], x][[2]][[1]][[2]]
With regard to your first question, in statement:
print(minimize(lambda x: abs(4.24561823393 - encoderdistance), 1).x)
your lambda function is a constant independent of the argument x. minimize quits immediately after observing that the function does not decrease after several variations of the argument.
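This "quits immediately" behaviour is easy to reproduce with a deliberately constant objective (a small sketch): every gradient estimate is zero, so no step is ever taken and the initial guess comes back unchanged:

```python
from scipy.optimize import minimize

# The objective ignores x entirely, so every finite-difference
# gradient estimate is zero and the optimizer stops at x0
res = minimize(lambda x: 42.0, x0=1.0)
print(res.x)  # the initial guess, [1.]
```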
I am trying to maximize a function f(a, b, c, d) by applying scipy.optimize.minimize to its negative. d is a numpy array of guess variables.
I am trying to put bounds on each element of d, and also a constraint on d such that (d1 * a1 + d2 * a2 + ... + dn * an) < some_value (a being another argument to the function f).
My problem is how to define this constraint as an argument to the minimize function.
I could not find any maximize function in the library, so I'm minimizing the negative of f instead; the minimize documentation is over here.
Please consider asking for clarifications if the question is not clear enough.
It's not totally clear from your description which of the parameters of f you are optimizing over. For the purposes of this example I'm going to use x to refer to the vector of parameters you are optimizing over, and a to refer to another parameter vector of the same length which is held constant.
Now let's suppose you wanted to enforce the following inequality constraint:
10 <= x[0] * a[0] + x[1] * a[1] + ... + x[n] * a[n]
First you must define a function that accepts x and a and returns a value that is non-negative when the constraint is met. In this case we could use:
lambda x, a: (x * a).sum() - 10
or equivalently:
lambda x, a: x.dot(a) - 10
Constraints are passed to minimize in a dict (or a sequence of dicts if you have multiple constraints to apply):
con = {'type': 'ineq',
'fun': lambda x, a: a.dot(x) - 10,
'jac': lambda x, a: a,
'args': (a,)}
For greater efficiency I've also defined a function that returns the Jacobian (the sequence of partial derivatives of the constraint function w.r.t. each parameter in x), although this is not essential - if unspecified it will be estimated via first-order finite differences.
Your call to minimize would then look something like:
res = minimize(f, x0, args=(a,), method='SLSQP', constraints=con)
You can find another complete example of constrained optimization using SLSQP in the official documentation here.
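Putting the pieces together, here is a hypothetical self-contained sketch (the objective sum(sqrt(x)), the vector a, and some_value are made up for illustration) that maximizes by minimizing the negative, with bounds on each element and the a·x <= some_value constraint from the question:

```python
import numpy as np
from scipy.optimize import minimize

a = np.array([1.0, 2.0, 3.0])
some_value = 10.0

def neg_f(x, a):
    # Maximize sum(sqrt(x)) by minimizing its negative
    return -np.sum(np.sqrt(x))

con = {'type': 'ineq',
       'fun': lambda x, a: some_value - a.dot(x),  # non-negative iff a.x <= some_value
       'jac': lambda x, a: -a,
       'args': (a,)}
bounds = [(0.1, 10.0)] * len(a)
x0 = np.full(len(a), 1.0)

res = minimize(neg_f, x0, args=(a,), method='SLSQP',
               constraints=con, bounds=bounds)
print(res.x, a.dot(res.x))  # the constraint is active at the optimum
```

Note the sign flip relative to the 10 <= a·x example above: for an upper bound a·x <= some_value, the 'ineq' function is some_value - a.dot(x), and its Jacobian is -a.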
Just started learning python, and was asked to define a python function that integrate a math function.
We were instructed that the python function must be in the following form: (for example, to calculate the area of y = 2x + 3 between x=1 and x=2 )
integrate( 2 * x + 3, 1, 2 )
(it should return the area below)
and we are not allowed to use/import any libraries other than math (and the built in integration tool is not allowed either).
Any idea how I should go about it? When I wrote the program, I always got "x is undefined", but if I define x as a value (let's say 0), then the 2*x+3 part in the parameters is always evaluated to a value instead of being kept as a math expression, so I can't really use it inside.
It would be very helpful, not just for this assignment but for many in the future, to know how a Python function can take a math expression as a parameter, so thanks a lot.
Let's say your integration function looks like this:
def integrate(func, lo_x, hi_x, steps=1000):
    # Midpoint rule: evaluate the passed function at the middle of
    # each sub-interval and sum the slice areas
    step = (hi_x - lo_x) / steps
    return sum(func(lo_x + (i + 0.5) * step) * step for i in range(steps))
Then you can call it like this:
value = integrate(lambda x: 2 * x + 3, 1, 2)
edit
However, if the call to the integration function has to look exactly like
integrate( 2 * x + 3, 1, 2 )
then things are a bit trickier. If you know that the function is only going to be called with a polynomial function you could do it by making x an instance of a polynomial class, as suggested by M. Arthur Vaïsse in his answer.
Or, if the integrate( 2 * x + 3, 1, 2 ) comes from a string, e.g. from a command line argument or a raw_input() call, then you could extract the 2 * x + 3 (or whatever) from the string using standard Python string methods and then build a lambda function from it using eval.
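A sketch of that string-based approach (the expr string here is hypothetical):

```python
# Suppose the expression arrived as a string, e.g. from input()
expr = "2 * x + 3"

# Build a callable from the string; eval of a lambda expression is
# enough here (exec would also work, with an explicit assignment)
f = eval("lambda x: " + expr)

print(f(1))  # 5
print(f(2))  # 7
```

The usual caveat applies: eval executes arbitrary code, so this is only safe for trusted input.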
Here is an implementation that I think fills the need. It allows you to define mathematical functions such as 2x+3 and proposes an implementation of integral calculation by steps, as described here: http://en.wikipedia.org/wiki/Darboux_integral
import math

class PolynomialEquation():
    """ Allow to create functions that are polynomial """

    def __init__(self, coef):
        """
        coef : coefficients of the polynomial.
        An equation initialized with [1,2,3] as parameters is equivalent to:
        y = 1 + 2X + 3X²
        """
        self.coef = coef

    def __call__(self, x):
        """
        Make the object callable like a function.
        Return the value of the equation for x
        """
        return sum([self.coef[i] * (x ** i) for i in range(len(self.coef))])


def step_integration(function, start, end, steps=100):
    """
    Proceed to a step integration of the function.
    The more steps there are, the better the approximation.
    """
    step_size = (end - start) / steps
    values = [start + i * step_size for i in range(1, steps + 1)]
    return sum([math.fabs(function(value) * step_size) for value in values])


if __name__ == "__main__":
    # Check that PolynomialEquation works properly.
    # assert makes the program crash if the test is False.
    # y = 2x+3 -> y = 3+2x -> PolynomialEquation([3,2])
    eq = PolynomialEquation([3, 2])
    assert eq(0) == 3
    assert eq(1) == 5
    assert eq(2) == 7
    # y = 1 + 2X + 3X² -> PolynomialEquation([1,2,3])
    eq2 = PolynomialEquation([1, 2, 3])
    assert eq2(0) == 1
    assert eq2(1) == 6
    assert eq2(2) == 17
    print(step_integration(eq, 0, 10))
    print(step_integration(math.sin, 0, 10))
EDIT: in truth, this implementation is only the upper Darboux integral. The true Darboux integral could be computed, if really needed, by also computing the lower Darboux integral (replacing range(1, steps+1) with range(steps) in step_integration gives you the lower Darboux sum), then increasing the steps parameter while the difference between the two Darboux sums is greater than some small value depending on your precision needs (0.001, for example). A 100-step integration is thus supposed to give you a decent approximation of the integral value.
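A sketch of that refinement loop (upper vs. lower sums, without the fabs so the signed integral is approximated; math.exp on [0, 1] is just an arbitrary increasing test function):

```python
import math

def darboux_sum(function, start, end, steps, lower=False):
    # For an increasing function, range(steps) (left endpoints) gives the
    # lower sum, and range(1, steps + 1) (right endpoints) the upper sum
    step_size = (end - start) / steps
    indices = range(steps) if lower else range(1, steps + 1)
    return sum(function(start + i * step_size) * step_size for i in indices)

# Double the step count until the two sums agree to within the tolerance
steps = 100
while (darboux_sum(math.exp, 0, 1, steps)
       - darboux_sum(math.exp, 0, 1, steps, lower=True)) > 0.001:
    steps *= 2

# Both sums now bracket the true value e - 1 to within 0.001
print(steps, darboux_sum(math.exp, 0, 1, steps))
```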