Minimize function subject to constraint - python

I am trying to solve a constrained optimization problem using scipy.optimize.minimize but so far have had no success.
Specifically, I want to minimize the objective function over y1 and y2:
f(y1,y2) = (x1(y1,y2) - x1)^2 + (x2(y1,y2) - x2)^2
where x1(y1,y2) and x2(y1,y2) are calculated values and the bare x1, x2 are given targets.
Subject to the constraint:
y1*y2>0
The goal is to find the values of y1 and y2 for different pairs of x1 and x2.
This is what I have so far:
def f(x1, x2):
    k = (x1(y1, y2) - x1)**2 + (x2(y1, y2) - x2)**2
    return k
But I am not sure how to set up the function holding the aforementioned constraint:
def constraint(x):
    ....
Once I have the constraint, is the following syntax correct?
optimize.minimize(f, np.array([0, 0]), method="SLSQP",
                  constraints={"fun": constraint, "type": "ineq"})
I am new to Python, so any help would be appreciated.

For the constraints, from the docs:
Equality constraint means that the constraint function result is to be zero whereas inequality means that it is to be non-negative. Note that COBYLA only supports inequality constraints.
Therefore, your constraint is simply a function that must be non-negative. In your case:
def constraint(y):
    return y[0] * y[1]
Note that the function must take a vector as input, e.g.:
def f(x):
    x1, x2 = x
    return x1**2 + x2**2
EDIT: an example using a function that tries to fit calculated vs. observed data.
def calculated_x(y):
    """ example """
    y1, y2 = y
    x1 = 0.5 + 0.2 * y1 + 0.3 * y2
    x2 = 0.4 + 0.1 * y1 + 0.3 * y2
    return x1, x2
def f(y, x1, x2):
    x1_calc, x2_calc = calculated_x(y)
    return (x1 - x1_calc)**2 + (x2 - x2_calc)**2
m = minimize(f, [0, 0], args=(3, 2), constraints=({'fun': lambda y: y[0] * y[1], 'type': 'ineq'},))
print(m.x)
>> [3. 1.999999]
You can also build a function based on your minimization (example above):
def minimize_y(x1, x2):
    # note that x1 and x2 become arguments
    m = minimize(f, [0, 0], args=(x1, x2),
                 constraints=({'fun': lambda y: y[0] * y[1], 'type': 'ineq'},))
    return m.x
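Putting it all together, a minimal end-to-end sketch (assuming the illustrative linear calculated_x model above; the target pair (3, 2) is arbitrary):

import numpy as np
from scipy.optimize import minimize

def calculated_x(y):
    y1, y2 = y
    return 0.5 + 0.2*y1 + 0.3*y2, 0.4 + 0.1*y1 + 0.3*y2

def f(y, x1, x2):
    x1_calc, x2_calc = calculated_x(y)
    return (x1 - x1_calc)**2 + (x2 - x2_calc)**2

def minimize_y(x1, x2):
    # start strictly inside the feasible region y1*y2 >= 0
    m = minimize(f, np.array([0.5, 0.5]), args=(x1, x2),
                 constraints=({'fun': lambda y: y[0]*y[1], 'type': 'ineq'},))
    return m.x

print(minimize_y(3, 2))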

Related

Common tangent using python

I am trying to find a common tangent to two curves using Python, but I am not able to solve it.
The equations of the two curves are complicated and involve logarithms.
Is there a way in Python to compute the x-coordinates of a tangent that is common to both curves in general? If I have two curves f(x) and g(x), I want to find the x-coordinates x1 and x2 on a common tangent, where x1 lies on f(x) and x2 on g(x). I am trying f'(x1) = g'(x2) and f'(x1) = (f(x1) - g(x2)) / (x1 - x2) to get x1 and x2, but I am not able to get values using nonlinsolve because the equations are too complicated.
I just want to find the x-coordinates of the common tangent.
Can anyone suggest a better way?
import numpy as np
import sympy
from sympy import *
from matplotlib import pyplot as plt

x = symbols('x')
a, b, c, d, e, f = -99322.50019502985, -86864.87072433547, -96876.05627516498, -89703.35055202093, -3390.863799999999, -20942.518

def func(x):
    y1_1 = a - a*x + b*x
    y1_2 = c - c*x + d*x
    c1 = (1 - x) ** (1 - x)
    c2 = (x ** x)
    y2 = 12471 * (sympy.log((c1*c2)))
    y3 = 2*f*x**3 - x**2*(e + 3*f) + x*(e + f)
    eqn1 = y1_1 + y2 + y3
    eqn2 = y1_2 + y2 + y3
    return eqn1, eqn2

val = np.linspace(0, 1)
f1 = sympy.lambdify(x, func(x)[0])(val)
f2 = sympy.lambdify(x, func(x)[1])(val)
plt.plot(val, f1)
plt.plot(val, f2)
plt.show()
I am trying this:
x1, x2 = sympy.symbols('x1 x2')
fun1 = func(x1)[0]
fun2 = func(x2)[1]
diff1 = diff(fun1,x1)
diff2 = diff(fun2,x2)
eq1 = diff1 - diff2
eq2 = diff1 - ((fun1 - fun2) / (x1 - x2))
sol = nonlinsolve([eq1, eq2], [x1, x2])
The first thing that needs to be done is to reduce the formulas. For example, the first formula is actually this:
formula = x*(1 - x)*(17551.6542 - 41885.036*x) + x*(1 - x)*(41885.036*x - 24333.3818) + 12457.6294706944*x + log((x/(1 - x))**(12000*x)*(1 - x)**12000) - 99322.5001950298
formula = (x-x^2)*(17551.6542 - 41885.036*x) + (x-x^2)*(41885.036*x - 24333.3818) + 12457.6294706944*x + log((x/(1 - x))**(12000*x)*(1 - x)**12000) - 99322.5001950298
# constants
a = 41885.036
b = 17551.6542
c = 24333.3818
d = 12457.6294706944
e = 99322.5001950298
f = 12000
formula = (x-x^2)*(b - a*x) + (x-x^2)*(a*x - c) + d*x + log((x/(1 - x))**(f*x)*(1 - x)**f) - e
formula = (ax^3 -bx^2 + bx - ax^2) + (x-x^2)*(a*x - c) + d*x + log((x/(1 - x))**(f*x)*(1 - x)**f) - e
formula = ax^3 -bx^2 + bx - ax^2 -ax^3 + ax^2 + cx^2 -cx + d*x + log((x/(1 - x))**(f*x)*(1 - x)**f) - e
# collect x terms by power (note how the x^3 term drops out, so it's easier)
formula = (c-b)*x^2 + (b-c+d)*x + log((x/(1 - x))**(f*x)*(1 - x)**f) - e
which is much cleaner and is a quadratic with a log term.
I expect that you can do some work on the log term too, but this is an exercise for the original poster.
Likewise, the second formula can be reduced in the same way, which is again an exercise for the original poster.
From this, both equations need to be differentiated with respect to x to find the tangent. Then set both formulas equal to each other (for a common tangent).
This would completely solve the question.
I actually wonder if this is a Python question at all, or a pure maths question.
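As a sketch of those closing steps in sympy (using the reduced first formula and the constants above; the second curve would be handled the same way):

import sympy as sp

x = sp.symbols('x')
b, c = 17551.6542, 24333.3818
d, e, f = 12457.6294706944, 99322.5001950298, 12000

# reduced first formula from above
formula1 = (c - b)*x**2 + (b - c + d)*x + sp.log((x/(1 - x))**(f*x) * (1 - x)**f) - e
slope1 = sp.diff(formula1, x)  # tangent slope of the first curve at any x
print(sp.simplify(slope1))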
The important point to note is that, since the derivatives are monotonic, for any value of the derivative of fun1 there is a solution for fun2. This can easily be seen if you plot both derivatives.
Thus, we want a function that, given an x1, returns an x2 that matches it. I'll use a numerical solution because the system is too cumbersome for an analytical one.
import scipy.optimize

def find_equal_value(f1, f2, x, x1):
    goal = f1.subs(x, x1)
    to_solve = sympy.lambdify(x, (f2 - goal)**2)  # quadratic functions tend to be better behaved, and the result is the same
    sol = scipy.optimize.fmin(func=to_solve, x0=x1, ftol=1e-8, disp=False)  # the value for f1 is a good starting guess
    return sol[0]
I used fmin as the solver above because it worked and I knew how to use it by heart. Maybe root_scalar can give better results.
Using the function above, let's get some pairs (x1, x2) where the derivatives are equal:
df1 = sympy.diff(func(x)[0])
df2 = sympy.diff(func(x)[1])
x1 = 0.25236537 # Close to the zero derivative
x2 = find_equal_value(df1, df2, x, x1)
print(f'Derivative of f1 in x1: {df1.subs(x, x1)}')
print(f'Derivative of f2 in x2: {df2.subs(x, x2)}')
print(f'Error: {df1.subs(x, x1) - df2.subs(x, x2)}')
This results in:
Derivative of f1 in x1: 0.0000768765858083498
Derivative of f2 in x2: 0.0000681969431752805
Error: 0.00000867964263306931
If you want an x2 for several x1s (beware that in some cases the solver hits a value where the logs are invalid; always check your result for validity):
x1s = np.linspace(0.2, 0.8, 50)
x2s = [find_equal_value(df1, df2, x, x1) for x1 in x1s]
plt.plot(x1s, x2s); plt.grid(); plt.show()
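To actually finish the common-tangent search, here is a sketch that is not part of the original answer: the pairs above only enforce equal derivatives, and a common tangent additionally requires the secant slope between the two touch points to equal that derivative. That leaves one equation in x1, which can be solved numerically (the scan range below is only illustrative; plot the residual first to locate a sign change):

f1, f2 = func(x)

def tangent_residual(x1_val):
    x2_val = find_equal_value(df1, df2, x, x1_val)
    secant = (f1.subs(x, x1_val) - f2.subs(x, x2_val)) / (x1_val - x2_val)
    return float(df1.subs(x, x1_val) - secant)

# scan for a sign change, then refine with a bracketing root finder
grid = np.linspace(0.2, 0.8, 25)
vals = [tangent_residual(g) for g in grid]
for lo, hi, v_lo, v_hi in zip(grid, grid[1:], vals, vals[1:]):
    if v_lo * v_hi < 0:
        x1_t = scipy.optimize.brentq(tangent_residual, lo, hi)
        x2_t = find_equal_value(df1, df2, x, x1_t)
        print(f'Common tangent touches at x1={x1_t}, x2={x2_t}')
        break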

Scipy.optimize violating/not respecting constraint

The problem definition is as follows:
Objective function: maximize Z = 45x1 + 20x2
Constraint 1 (material): 20x1 + 5x2 ≤ 9500
Constraint 2 (time): 0.04x1 + 0.12x2 ≤ 40
Constraint 3 (storage): x1 + x2 ≤ 550
Positivity: x1, x2 ≥ 0
This is done as follows in Python:
# import libraries
import numpy as np
import scipy.optimize as so

# Initialization
bnds = ((0, 550), (0, 550))
initial = np.asarray([400, 150])

# objective function
def maxZ(x_in, sign=1.0):
    x1, x2 = x_in[0:2]
    return sign * (45 * x1 + 20 * x2)

# constraints
def Cmaterial(x_in):
    x1, x2 = x_in[0:2]
    return 20 * x1 + 5 * x2 - 9500

def Ctime(x_in):
    x1, x2 = x_in[0:2]
    return 0.04 * x1 + 0.12 * x2 - 40

def Cstorage(x_in):
    x1, x2 = x_in[0:2]
    return x1 + x2 - 550

# constraint terms
con1 = {'type':'ineq', 'fun':Cmaterial}
con2 = {'type':'ineq', 'fun':Ctime}
con3 = {'type':'ineq', 'fun':Cstorage}
cons = [con1, con2, con3]

# Optimization
out = so.minimize(maxZ, initial, method='SLSQP', bounds=bnds, constraints=cons, args=(1.0,))
print(f"Optimum solution occurs at {out.x}, where Z = {out.fun}")
print(f"Material: {Cmaterial(out.x) + 9500}")
print(f"Production time: {Ctime(out.x) + 40}")
print(f"Storage: {Cstorage(out.x) + 550}")
The outcome is:
Optimum solution occurs at [427.27272727 190.90909091], where Z = 23045.45454545614
Material: 9500.00000000076
Production time: 40.00000000000009
Storage: 618.1818181818464
I have verified through graphical method and Excel verification that the expected result should be x1 = 450, x2 = 100.
The result from Scipy.optimize is x1 = 427.27, x2 = 190.91.
My question: the storage constraint of x1 + x2 ≤ 550 is clearly violated since the result is 618.18. What could be the reason for this?
First, you need to transform your maximization problem into a minimization problem, i.e. the sign argument inside maxZ should be -1.0, not 1.0. Note also that scipy.optimize.minimize expects inequality constraints with g(x) >= 0, not g(x) <= 0. Hence, you have to transform your constraints accordingly:
con1 = {'type':'ineq', 'fun': lambda x: -1.0*Cmaterial(x)}
con2 = {'type':'ineq', 'fun': lambda x: -1.0*Ctime(x)}
con3 = {'type':'ineq', 'fun': lambda x: -1.0*Cstorage(x)}
cons = [con1, con2, con3]
out = so.minimize(maxZ, initial, method='SLSQP', bounds=bnds, constraints=cons, args=(-1.0,))
yields your expected solution. Last but not least, this is a linear optimization problem (LP) and thus should be solved with scipy.optimize.linprog. However, this requires that you formulate the problem in standard LP form.
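For reference, a sketch of that linprog formulation: linprog minimizes, and takes the inequality constraints directly as A_ub @ x <= b_ub, so only the objective needs its sign flipped.

import scipy.optimize as so

c = [-45, -20]                 # maximize 45*x1 + 20*x2 -> minimize the negative
A_ub = [[20, 5],               # material
        [0.04, 0.12],          # time
        [1, 1]]                # storage
b_ub = [9500, 40, 550]
res = so.linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)                   # expected: [450. 100.]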

How to Solve for the Motion of a Double Pendulum

I want to plot the motion of a double pendulum with a spring in Python. I need to plot theta1, theta2, r, and their first derivatives. I have found my equations of motion, which are second-order ODEs, so I converted them to first-order ODEs where x1 = theta1, x2 = theta1-dot, y1 = theta2, y2 = theta2-dot, z1 = r, and z2 = r-dot. [Figure: the pendulum, with rigid rod OA and spring AB.]
Here is my code:
from scipy.integrate import solve_ivp
from numpy import pi, sin, cos, linspace

g = 9.806  # gravitational acceleration
l0 = 1     # natural length of the spring
k = 2      # spring constant
OA = 2     # length OA
m = 1      # mass of the particles

def pendulumDynamics1(t, x):  # function to solve for theta-1 double-dot
    x1 = x[0]
    x2 = x[1]
    y1 = y[0]
    y2 = y[1]
    z1 = z[0]
    z2 = z[1]
    Fs = -k*(z1-l0)
    T = m*(x2**2)*OA + m*g*cos(x1) + Fs*cos(y1-x1)
    x1dot = x2
    x2dot = (Fs*sin(y1-x1) - m*g*sin(x1))/(m*OA)  # angles are in radians
    return [x1dot, x2dot]

def pendulumDynamics2(t, y):  # function to solve for theta-2 double-dot
    x1 = x[0]
    x2 = x[1]
    y1 = y[0]
    y2 = y[1]
    z1 = z[0]
    z2 = z[1]
    Fs = -k*(z1-l0)
    y1dot = y2
    y2dot = (-g*sin(y1) - (Fs*cos(y1-x1)*sin(x1))/m + g*cos(y1-x1)*sin(x1) - x2*z1*sin(x1))/z1
    return [y1dot, y2dot]

def pendulumDynamics3(t, z):  # function to solve for r double-dot (the length AB, which is the spring)
    x1 = x[0]
    x2 = x[1]
    y1 = y[0]
    y2 = y[1]
    z1 = z[0]
    z2 = z[1]
    Fs = -k*(z1-l0)
    z1dot = z2
    z2dot = g*cos(y1) - Fs/m + (y2**2)*z1 + x2*OA*cos(y1-x1) - (Fs*(sin(y1-x1))**2)/m + g*sin(x1)*sin(y1-x1)
    return [z1dot, z2dot]

# Define initial conditions, etc
d2r = pi/180
x0 = [30*d2r, 0]  # start from 30 deg, with zero velocity
y0 = [60*d2r, 0]  # start from 60 deg, with zero velocity
z0 = [1, 0]       # start from r=1
t0 = 0
tf = 10

# Integrate dynamics, initial value problem
sol1 = solve_ivp(pendulumDynamics1, [t0,tf], x0, dense_output=True)  # save as a continuous solution
sol2 = solve_ivp(pendulumDynamics2, [t0,tf], y0, dense_output=True)  # save as a continuous solution
sol3 = solve_ivp(pendulumDynamics3, [t0,tf], z0, dense_output=True)  # save as a continuous solution

t = linspace(t0, tf, 200)  # determine solution at these times
dt = t[1]-t[0]
x = sol1.sol(t)
y = sol2.sol(t)
z = sol3.sol(t)
I have three functions in my code, one each to solve for x, y, and z, and I use solve_ivp on each of them. The error in the code is:
File "C:\Users\omora\OneDrive\Dokument\AERO 211\project.py", line 13, in pendulumDynamics1
    y1 = y[0]
NameError: name 'y' is not defined
I don't understand why it is saying that y is not defined, because I defined it in my functions.
Your system is closed and frictionless, and thus can be captured by the Lagrange or Hamiltonian formalism. You have 3 position variables, so a 6-dimensional dynamical state, complemented either by the velocities or by the momenta.
Let q_k be theta_1, theta_2, r, let Dq_k be their time derivatives, and let p_k be the momentum variables conjugate to q_k; then the dynamics can be realized by
import numpy as np
from numpy import sin, cos

def DoublePendulumSpring(u, t, params):
    m_1, l_1, m_2, l_2, k, g = params
    q_1, q_2, q_3 = u[:3]
    p = u[3:]
    A = [[l_1**2*(m_1 + m_2), l_1*m_2*q_3*cos(q_1 - q_2), -l_1*m_2*sin(q_1 - q_2)],
         [l_1*m_2*q_3*cos(q_1 - q_2), m_2*q_3**2, 0],
         [-l_1*m_2*sin(q_1 - q_2), 0, m_2]]
    Dq = np.linalg.solve(A, p)
    Dq_1, Dq_2, Dq_3 = Dq
    T1 = Dq_2*q_3*sin(q_1 - q_2) + Dq_3*cos(q_1 - q_2)
    T3 = Dq_1*l_1*cos(q_1 - q_2) + Dq_2*q_3
    Dp = [-l_1*(m_2*Dq_1*T1 + g*(m_1+m_2)*sin(q_1)),
          l_1*m_2*Dq_1*T1 - g*m_2*q_3*sin(q_2),
          m_2*Dq_2*T3 + g*m_2*cos(q_2) + k*(l_2 - q_3)]
    return [*Dq, *Dp]
For a derivation see the Euler-Lagrange equations and their connection to the Hamilton equations. You might get asked about such a derivation.
This, after suitably defining the parameter tuple and initial conditions, can be fed to odeint and produces a solution that can then be plotted, animated, or otherwise examined. The lower bob traces a path that is neither periodic nor very regular, as is typical of chaotic dynamics. [Figure: trace of the lower bob; the fulcrum and the arc of the upper bob are also shown, but are less interesting.]
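A sketch of that last step (parameter values taken from the question's constants; starting from rest makes the initial momenta p = A @ Dq zero):

from scipy.integrate import odeint

params = (1.0, 2.0, 1.0, 1.0, 2.0, 9.806)   # m_1, l_1, m_2, l_2, k, g
u0 = [np.radians(30), np.radians(60), 1.0,  # theta_1, theta_2, r
      0.0, 0.0, 0.0]                        # at rest, so all momenta are zero
t = np.linspace(0, 10, 1000)
sol = odeint(DoublePendulumSpring, u0, t, args=(params,))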
def pendulumDynamics1(t, x):
    x1 = x[0]
    x2 = x[1]
    y1 = y[0]
    y2 = y[1]
    z1 = z[0]
    z2 = z[1]
You only pass x as a parameter, so the code inside the function has no idea what y and z refer to.
You will need to change the function signature to also include those variables:
def pendulumDynamics1(t, x, y, z):
Note, however, that solve_ivp only supplies t and the state you hand it, and the three subsystems are coupled, so in practice all six state variables should be integrated together as one system.
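A sketch of that combined system, reusing the question's right-hand sides unchanged (so any physics errors in them carry over) and assuming the constants and initial conditions from the question are in scope:

def pendulumDynamics(t, u):
    x1, x2, y1, y2, z1, z2 = u
    Fs = -k*(z1 - l0)
    x2dot = (Fs*sin(y1-x1) - m*g*sin(x1))/(m*OA)
    y2dot = (-g*sin(y1) - (Fs*cos(y1-x1)*sin(x1))/m + g*cos(y1-x1)*sin(x1) - x2*z1*sin(x1))/z1
    z2dot = g*cos(y1) - Fs/m + (y2**2)*z1 + x2*OA*cos(y1-x1) - (Fs*(sin(y1-x1))**2)/m + g*sin(x1)*sin(y1-x1)
    return [x2, x2dot, y2, y2dot, z2, z2dot]

sol = solve_ivp(pendulumDynamics, [t0, tf], [*x0, *y0, *z0], dense_output=True)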

Quadratic to Chaining or Connected Multiple Piecewise in Python

I've been digging through Stack Overflow for a while and can't find any example of multiple piecewise curve fitting. I want to convert a quadratic function into a chain of connected piecewise functions (I don't know the exact name for it, but I need every tail connected to the head of the next piece, simply "connected"). This is my code so far, using scipy.optimize to convert a quadratic into two pieces of a piecewise linear function.
import scipy.optimize as opt
import numpy as np
import copy

def func_2piecewise(x, m_0, x_1, y_1, m_1):
    y = np.piecewise(x, [x <= x_1, x > x_1],
                     [lambda x: m_0*(x-x_1) + y_1, lambda x: m_1*(x-x_1) + y_1])
    return y
xmin=0
xmax=100
a=0.1
a0=1
a00=10
piece_number=2
sigma=np.ones(numberOfStep)
if piece_number==2:
    lower_bounds=[-np.inf,xmin,-np.inf,-np.inf]
    upper_bounds=[np.inf,xmax,np.inf,np.inf]
    w, _ = opt.curve_fit(func_2piecewise, x_sample, y_sample, bounds=(lower_bounds,upper_bounds), sigma=sigma)
x_0=copy.deepcopy(xmin)
y_0=func_2piecewise(x_0, *w).tolist()
[m_0, x_1, y_1, m_1]=w
result=[x_0,y_0,m_0,x_1,y_1,m_1]
The problem is, I can't implement the same approach for three pieces (I don't know how to enforce x_2 > x_1):
def func_gradients(x_list, y_list):
    len_x_list = len(x_list)
    if len_x_list == 1:
        m_list = y_list/x_list
        return m_list
    m_list = []
    for idx in range(len_x_list-1):
        m_list.append((y_list[idx+1]-y_list[idx])/(x_list[idx+1]-x_list[idx]))
    return m_list

def func_3piecewise(x, m_0, x_1, y_1, x_2, y_2, m_2):
    y = np.piecewise(x, [x <= x_1, (x > x_1) & (x <= x_2), x > x_2],
                     [lambda x: m_0*(x-x_1) + y_1, lambda x: y_1+(y_2-y_1)*(x-x_1)/(x_2-x_1), lambda x: m_2*(x-x_2) + y_2])
    return y
if piece_number==3:
    lower_bounds=[-np.inf,xmin,-np.inf,xmin,-np.inf,-np.inf]
    upper_bounds=[np.inf,xmax,np.inf,xmax,np.inf,np.inf]
    w, _ = opt.curve_fit(func_3piecewise, x_sample, y_sample, bounds=(lower_bounds,upper_bounds), sigma=sigma)
x_0=copy.deepcopy(xmin)
y_0=func_3piecewise(x_0, *w).tolist()
[m_0, x_1, y_1, x_2, y_2, m_2]=w
m_1=func_gradients(x_2-x_1,y_2-y_1)
result=[x_0,y_0,m_0,x_1,y_1,m_1, x_2, y_2, m_2]
The full code can be seen in pastebin
So, the question is:
How do I make a chained (every tail of the piecewise function connected to the head of the next piece, or simply "connected") piecewise function in Python for a general n pieces? Another algorithm or solver is acceptable.
Edit: I added my result so far for two pieces.
Update: I found that my code (for three pieces) was not working because of a small typo (sorry about this; just tell me if I should delete this question). Now it's working, and I have updated the pastebin. But if you have a general (flexible, no need to write a function for each number of pieces) function that can generate n pieces, I'll gladly accept the answer.
You can parametrize on the distance x2-x1 instead of parametrizing on x2. Because you can give the optimizer bounds, you can set the distance to be greater than 0.
For example, to make a general piecewise-linear function with 4 intervals, define the following:
The points which separate the intervals are x0, x1 and x2. The slopes in the 4 intervals are m0, m1, m2 and m3. The value of the function at x0 is y0.
Define d1 = x1 - x0, d2 = x2 - x1. From here:
x1 = x0 + d1
x2 = x0 + d1 + d2
Then, you have 8 optimization parameters: x0, y0, d1, d2, m0, m1, m2 and m3. By nature of your optimization problem, all except x0 and y0 are non-negative.
Equation for the first interval:
y = m0 * (x - x0) + y0
Equation for the second interval:
y = m1 * (x - x0) + y0
Now you can get the rest of the equations in a recursive way, by applying the previous equation at the rightmost point of its interval. For the x1 point, the value of the function is:
y1 = m1 * d1 + y0
So the third equation is
y =
m2 * (x - x1) + y1 =
m2 * (x - x0 - d1) + m1 * d1 + y0
For the x2 point, this gives
y2 = m2 * d2 + y1
So the fourth equation is
y =
m3 * (x - x2) + y2 =
m3 * (x - x0 - d1 - d2) + m2 * d2 + m1 * d1 + y0
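Building on that parametrization, here is a sketch of a general n-piece model that can be handed to curve_fit; make_piecewise is a hypothetical helper, not from the original post, and the bounds keep the distances non-negative so the breakpoints stay ordered:

import numpy as np
import scipy.optimize as opt

def make_piecewise(n):
    # parameter vector: [x0, y0, d_1 .. d_{n-2}, m_0 .. m_{n-1}]
    def model(x, *p):
        x0, y0 = p[0], p[1]
        d = np.asarray(p[2:n])                                    # n-2 distances
        m = np.asarray(p[n:])                                     # n slopes
        xk = np.concatenate(([x0], x0 + np.cumsum(d)))            # n-1 breakpoints
        yk = np.concatenate(([y0], y0 + np.cumsum(m[1:n-1]*d)))   # values there
        idx = np.searchsorted(xk, x)         # segment index of each sample
        anchor = np.maximum(idx - 1, 0)      # breakpoint each segment starts at
        return m[idx]*(x - xk[anchor]) + yk[anchor]
    return model

# usage, assuming x_sample and y_sample as in the question
n = 3
model = make_piecewise(n)
lb = [-np.inf, -np.inf] + [0.0]*(n-2) + [-np.inf]*n   # distances d_i >= 0
ub = [np.inf]*(2*n)
p0 = [50.0, 0.0] + [10.0]*(n-2) + [0.0]*n             # rough, domain-dependent guess
w, _ = opt.curve_fit(model, x_sample, y_sample, p0=p0, bounds=(lb, ub))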

invalid value encountered in double_scalars def f(t, x): return np.power(x, 2) - x

I have been doing a lot of graphing of slope fields and ODE solutions recently, and I decided to try my hand at making a little function that automatically graphs solutions with a vector field overlay.
This function takes a set of initial conditions and plots that many solutions. It works pretty well, but for some initial values I get the error in the title:
invalid value encountered in double_scalars
def f(t, x): return np.power(x, 2) - x
Here is the code for the function:
import numpy as np
import matplotlib.pyplot as plt

def grapher(fn, t_0, t_n, dt, y_0):
    """
    Takes a first order ODE and solves it for initial conditions
    provided by y_0
    :param fn: y' = f(t,y)
    :param t_0: start time
    :param t_n: end time
    :param dt: step size
    :param y_0: iterable containing initial conditions
    :return:
    """
    t = np.arange(t_0, t_n, dt)
    y_min = .0
    y_max = .0
    for iv in np.asarray(y_0):
        soln = rk4(dt, t, fn, iv)
        plt.plot(t, soln, '-r')
        if y_min > np.min(soln):
            y_min = np.min(soln)
        if y_max < np.max(soln):
            y_max = np.max(soln)
    x = np.linspace(t_0, t_n + dt, 11)
    y = np.linspace(y_min, y_max, 11)
    X, Y = np.meshgrid(x, y)
    theta = np.arctan(fn(X, Y))  # use the fn argument, not a global f
    U = np.cos(theta)
    V = np.sin(theta)
    plt.quiver(X, Y, U, V, angles='xy')
    plt.xlim((t_0, t_n - dt))
    plt.ylim((y_min - .1*y_min, y_max + .1*y_max))
    plt.show()
And here is the application that fails:
def f(t, x): return x**2 - x
grapher(f, 0, 4, 0.1, (-0.9, 0.9, 1.1))
It produces a graph that is missing the solution associated with the initial condition 1.1. However, if I choose a value less than or equal to 1, I get the correct graph.
I don't see an opportunity for division by zero here, so I'm a bit confused. Also, the qualitative characteristics of the ODE are not fully on display unless I can choose an initial condition higher than 1.
I'd also like to note that, when I did not have a function to automate this process, the function f(x) = x^2 - x gave me no trouble at all. Any clue why this might be?
If it helps, here is the rk4 algorithm I wrote in a different module:
def rk4(dt, t, field, y_0):
    """
    :param dt: float - the timestep
    :param t: array - the time mesh
    :param field: method - the vector field y' = f(t, y)
    :param y_0: array - contains initial conditions
    :return: ndarray - solution
    """
    # Initialize solution matrix. Each row is the solution to the system
    # for a given time step. Each column is the full solution for a single
    # equation.
    y = np.asarray(len(t) * [y_0])
    for i in np.arange(len(t) - 1):
        k1 = dt * field(t[i], y[i])
        k2 = dt * field(t[i] + 0.5 * dt, y[i] + 0.5 * k1)
        k3 = dt * field(t[i] + 0.5 * dt, y[i] + 0.5 * k2)
        k4 = dt * field(t[i] + dt, y[i] + k3)
        y[i + 1] = y[i] + (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return y
I think that there is no error in the code; the solution just gets too large. If you called grapher with
grapher(f, 0, 4, 0.1, (-0.9, 0.9, 1.01))
you would get: [figure: solution curves for initial conditions -0.9, 0.9, 1.01]
With:
grapher(f, 0, 4, 0.1, (-0.9, 0.9, 1.02))
[figure: the third solution now grows much faster]
and when y_0 gets to be 1.1, the values in soln stop being reported: the solution overflows to inf, after which evaluating x**2 - x gives inf - inf, which is nan, and matplotlib does not know how to plot nan.
If you changed
def f(t, x):
    return x**2 - x
to:
def f(t, x):
    return x * (x - 1)
you would get an (ugly, but "correct") plot also of the solution for y_0 == 1.1, because instead of producing nan the values now saturate at inf: once x is inf, x * (x - 1) stays inf. Matplotlib, of course, also does not know how to handle infs when generating the axes.
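A quick check of the floating-point arithmetic involved (assuming IEEE-754 semantics, which numpy follows) makes the difference visible:

import numpy as np

x = np.float64(1e200)
print(x**2 - x)      # x**2 overflows to inf; inf minus a finite number is inf
x = np.inf
print(x**2 - x)      # inf - inf is undefined -> nan ("invalid value")
print(x * (x - 1))   # inf * inf is still inf, so no nan appears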
