I'm trying to minimize a functional (the integral of a function) subject to constraints:
import numpy as np
from scipy import integrate
from scipy.optimize import minimize

def s(y, a, b, c, d):
    v = [1, y, y**2, y**3]
    alpha = [a, b, c, d]
    q = np.inner(v, alpha)
    return -q*np.exp(-q)

def p(y, a, b, c, d):
    v = [1, y, y**2, y**3]
    alpha = [a, b, c, d]
    q = np.inner(v, alpha)
    return np.exp(-q)

def Q(u):
    a, b, c, d = u
    d = integrate.quad(lambda y: s(y, a, b, c, d), 0, 1)
    return d[0]
cons = ({'type': 'eq', 'fun': integrate.quad(lambda y: p(y,a,b,c,d), 0, 1)[0] - 1},
        {'type': 'eq', 'fun': integrate.quad(lambda y: p(y,a,b,c,d)*y, 0, 1)[0] - 0.483523521402009},
        {'type': 'eq', 'fun': integrate.quad(lambda y: p(y,a,b,c,d)*y**2, 0, 1)[0] - 0.300458990347083},
        {'type': 'eq', 'fun': integrate.quad(lambda y: p(y,a,b,c,d)*y**3, 0, 1)[0] - 0.209996591802522})
res = minimize(Q, x0 = (0, 0, 0, 0), method='BFGS', constraints=cons)
print(res)
I get this output:
fun: -0.36787942624169967
hess_inv: array([[ 17.98311921, -49.74794121, 2.50822967, 36.21942131],
[ -49.74794121, 191.70720321, -23.14586623, -158.65310285],
[ 2.50822967, -23.14586623, 8.1640543 , 25.72129091],
[ 36.21942131, -158.65310285, 25.72129091, 142.59127393]])
jac: array([ -3.54647636e-06, -1.94460154e-06, -1.75461173e-06,
3.24100256e-07])
message: 'Optimization terminated successfully.'
nfev: 126
nit: 19
njev: 21
status: 0
success: True
x: array([ 0.99920744, 0.0092224 , -0.02276881, 0.0150456 ])
However, this array x does not satisfy the constraint:
x = (0.99920744, 0.0092224 , -0.02276881, 0.0150456)
integrate.quad(lambda y: p(y,x[0],x[1],x[2],x[3]), 0, 1)[0]
0.3678829742546207
which is not 1, as the constraint specifically requires. How can it claim convergence when it clearly has not converged?
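Two things are going on here. First, method='BFGS' does not support constraints at all; SciPy emits a warning and then simply ignores them, so the "successful" convergence is for the unconstrained problem. Second, each constraint's 'fun' must be a callable of the parameter vector, re-evaluated at every iterate; here the quad calls run once at definition time and the dicts hold plain numbers. A minimal sketch of a corrected setup, assuming a constraint-aware method such as SLSQP (my rearrangement, not a vetted solution):

import numpy as np
from scipy import integrate
from scipy.optimize import minimize

def p(y, a, b, c, d):
    q = np.inner([1, y, y**2, y**3], [a, b, c, d])
    return np.exp(-q)

def s(y, a, b, c, d):
    q = np.inner([1, y, y**2, y**3], [a, b, c, d])
    return -q * np.exp(-q)

def Q(u):
    return integrate.quad(lambda y: s(y, *u), 0, 1)[0]

# Target moments taken from the constraints above.
moments = [1.0, 0.483523521402009, 0.300458990347083, 0.209996591802522]

def moment_con(k, m):
    # Build a callable that integrates p(y)*y**k at the current parameters.
    return lambda u: integrate.quad(lambda y: p(y, *u) * y**k, 0, 1)[0] - m

cons = [{'type': 'eq', 'fun': moment_con(k, m)} for k, m in enumerate(moments)]

# SLSQP handles equality constraints; BFGS does not.
res = minimize(Q, x0=(0, 0, 0, 0), method='SLSQP', constraints=cons)
print(res)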
I would like to compute the integral of a summation:
import sympy as sp
t = sp.Symbol("t")
n = sp.Symbol("n", integer=True, positive=True)
sum_term = sp.Sum(sp.exp(-(n*sp.pi)**2 * t), (n, 1, sp.oo))
sp.integrate(sum_term, (t, 0, t)).doit()
However, this doesn't calculate the integral:
Integral(Sum(exp(-pi**2*n**2*t), (n, 1, oo)), (t, 0, t))
I'm not sure why this doesn't work, but if you interchange the order of the sum and the integral, it does:
In [5]: import sympy as sym
In [6]: t, n = sym.symbols('t, n')
In [7]: f = sym.exp(-(n*sym.pi)**2 * t)
In [8]: f
Out[8]: exp(-pi**2*n**2*t)
In [11]: e1 = sym.Integral(sym.Sum(f, (n, 1, sym.oo)), (t, 0, sym.oo))
In [12]: e2 = sym.Sum(sym.Integral(f, (t, 0, sym.oo)), (n, 1, sym.oo))
In [13]: e1
Out[13]: Integral(Sum(exp(-pi**2*n**2*t), (n, 1, oo)), (t, 0, oo))
In [14]: e2
Out[14]: Sum(Integral(exp(-pi**2*n**2*t), (t, 0, oo)), (n, 1, oo))
In [15]: e1.doit()
Out[15]: Integral(Sum(exp(-pi**2*n**2*t), (n, 1, oo)), (t, 0, oo))
In [17]: e2.doit()
Out[17]: 1/6
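As a further sketch: the original snippet used t both as the integration variable and as the upper limit, which is ambiguous. If the goal is the integral up to a finite time t, the same sum/integral swap works with a separate dummy variable (tau below is my addition, and this is a sketch rather than a vetted solution):

import sympy as sp

t = sp.Symbol('t', positive=True)
tau = sp.Symbol('tau', positive=True)  # dummy integration variable
n = sp.Symbol('n', integer=True, positive=True)

# Integrate each term over tau in (0, t), then sum over n.
inner = sp.integrate(sp.exp(-(n*sp.pi)**2 * tau), (tau, 0, t))
result = sp.Sum(inner, (n, 1, sp.oo))
print(result)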
Consider this:
import numpy as np

X = [1, 2, 3]
p = np.poly1d(X)
print('x: ', X, 'y: ', p(X))
output >> x: [1, 2, 3] y: [ 6 11 18]
what if I want to find x based on y?
x: [?, ?, ?] y: [ 6 11 18]
np.poly1d(X) creates a polynomial of the required degree whose coefficients are the entries of X. Let's call them a, b, and c. So in practice you have the expression
a*x**2 + b*x + c
When you then evaluate the polynomial at those same three values, you get the following 3 equations:
a**3 + b*a + c = 6
a*b**2 + b**2 + c = 11
a*c**2 + b*c + c = 18
There might be an algebraic way you can solve them yourself, but after a quick think I didn't come up with anything. However, sympy will happily solve this system of equations for you.
import numpy as np
import sympy as sym

def generate_y(X):
    return np.poly1d(X)(X)

def solve_x(Y):
    a, b, c = sym.symbols('a b c')
    e1 = sym.Eq(a**3 + b*a + c, Y[0])
    e2 = sym.Eq(a*b**2 + b**2 + c, Y[1])
    e3 = sym.Eq(a*c**2 + b*c + c, Y[2])
    return sym.solvers.solve([e1, e2, e3], [a, b, c])
For example
>>> solve_x(generate_y([1, 2, 3]))
[(1, 2, 3)]
>>> solve_x(generate_y([-5, 105, 2]))
[(-5, 105, 2)]
You could generalise this for nth order polynomials by creating the symbolic expressions dynamically, but for higher order you'll run into problems (such is life) and for 1st order you'll have multiple solutions.
def solve_x(Y):
    symbols = sym.symbols('a:z')[:len(Y)]
    X = sym.symbols('X')
    expr = sum(s*X**i for i, s in enumerate(symbols[::-1]))
    eqns = [sym.Eq(expr.subs({X: s}), y) for s, y in zip(symbols, Y)]
    return sym.solvers.solve(eqns, symbols)
Usage
>>> solve_x(generate_y([1, 2]))
[(1, 2), (-1 + sqrt(2), 2*sqrt(2)), (-sqrt(2) - 1, -2*sqrt(2))]
>>> solve_x(generate_y([1, 2, 3, 4]))
# still computing...
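If the symbolic solve stalls like this for higher orders, a numeric fallback is possible. The sketch below (solve_x_numeric is my hypothetical helper, not part of the answer above) uses sympy's nsolve, which returns a single solution near a starting guess rather than all solutions, and whether it converges depends on that guess:

import sympy as sym

def solve_x_numeric(Y, guess):
    # Same equations as solve_x, but written as expressions equal to zero
    symbols = sym.symbols('a:z')[:len(Y)]
    X = sym.symbols('X')
    expr = sum(s*X**i for i, s in enumerate(symbols[::-1]))
    eqns = [expr.subs({X: s}) - y for s, y in zip(symbols, Y)]
    return sym.nsolve(eqns, list(symbols), guess)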
The following code needs to be solved for x using a SciPy optimizer.
The issue is that it works with a single argument, but when the function takes multiple values, it can't handle them.
This code works well.
from scipy.optimize import root

b = 1

def func(x):
    # we want result == 0; the equation also depends on the value of b
    result = x + b
    return result

sol = root(func, 0.1)
print(sol.x, sol.fun)
But this is not working:
b = [1, 2, 3, 4, 5]

def func(x, b):
    # we want result == 0; the equation also depends on the value of b
    result = x + b
    return result

for B in b:
    sol = root(lambda x, B: func(x, B), 0.1)
    print(sol.x, sol.fun)
How can the result be obtained by iterating through b?
As @hpaulj mentioned, root accepts an args parameter that will be passed on to func. So we can make the script more flexible:
from scipy.optimize import root

def func(x, *args):
    # polynomial with coefficients passed via args: args[i] is the
    # coefficient of x**i
    result = 0
    for i, a in enumerate(args):
        result += a * x ** i
    return result

coeff_list = [(6, 3), (-3, 2, 1), (-6, 1, 2)]
for coeffs in coeff_list:
    sol = root(func, [-4, 4][:len(coeffs)-1], args=coeffs)
    print(*coeffs, sol.x, sol.fun)
Output:
6 3 [-2.] [8.8817842e-16]
-3 2 1 [-3. 1.] [ 1.46966528e-09 -4.00870892e-10]
-6 1 2 [-2. 1.5] [-6.83897383e-14 4.97379915e-14]
Initial answer
I don't understand the need for your lambda function:
from scipy.optimize import root

def func(x):
    # we want result == 0; the equation also depends on the value of b
    result = x + b
    return result

B = [1, 2, 3, 4, 5]
for b in B:
    sol = root(func, 0.1)
    print(b, sol.x, sol.fun)
Output:
1 [-1.] [0.]
2 [-2.] [0.]
3 [-3.] [0.]
4 [-4.] [0.]
5 [-5.] [0.]
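This works because func looks up b in the enclosing scope each time it is called, so every iteration of the loop sees the current value of b.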
I don't see in the scipy documentation any hint of how to pass parameters to func. But this approach also works for multiple parameters:
from scipy.optimize import root

def func(x):
    # depending on the parameters, this has 0, 1 or 2 solutions
    result = a * x ** 2 + b * x + c
    return result

A = range(3)
B = [3, 2, 1]
C = [6, -3, -6]
for a, b, c in zip(A, B, C):
    sol = root(func, [-4, 4])
    print(a, b, c, sol.x, sol.fun)
Output:
0 3 6 [-2. -2.] [ 8.8817842e-16 0.0000000e+00]
1 2 -3 [-3. 1.] [ 1.46966528e-09 -4.00870892e-10]
2 1 -6 [-2. 1.5] [-6.83897383e-14 4.97379915e-14]
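An alternative that avoids module-level globals (my suggestion, not part of the answers above) is functools.partial, which bakes the coefficients into the callable handed to root:

from functools import partial
from scipy.optimize import root

def func(x, a, b, c):
    # quadratic with explicit coefficient parameters
    return a * x ** 2 + b * x + c

for a, b, c in zip([0, 1, 2], [3, 2, 1], [6, -3, -6]):
    sol = root(partial(func, a=a, b=b, c=c), [-4, 4])
    print(a, b, c, sol.x)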
I'm trying to write a simple LP in Python to solve rock paper scissors. Here is my code:
from scipy.optimize import linprog
obj = [0, 0, 0, -1]
A = [[0, 1, -1, -1], [-1, 0, 1, -1], [1, -1, 0, -1], [1, 1, 1, 0]]
b = [0, 0, 0, 1]
pb = (0.0, 1)
wb = (None, None)
res = linprog(obj, A_ub=A, b_ub=b, bounds=(pb,pb,pb,wb),options={"disp": True})
print(res)
Unfortunately when I run this, I get the following message:
'Optimization failed. The problem appears to be unbounded.'
But considering my LP is as follows:
f = -w
pp - ps - w = 0
-pr + ps - w = 0
pr - pp - w = 0
pr + pp + ps = 1
0 < pr, pp, ps < 1
I don't see why this is unbounded. If I'm messing up the construction of my LP, or there's a syntax error, could someone let me know?
You write in your call:
res = linprog(obj, A_ub=A, b_ub=b, bounds=(pb,pb,pb,wb),options={"disp": True})
This means that you specify A_ub * x <= b_ub constraints instead of A_eq * x == b_eq. As a result, your program looks like:
minimize f = -w
wrt.
pp - ps - w <= 0
-pr + ps - w <= 0
pr - pp - w <= 0
pr + pp + ps = 1
0 < pr, pp, ps < 1
Since the first three rows read ... - w <= 0 and we aim to minimize -w, the optimum sends w to positive infinity.
Your program suggests, however, that you want equality constraints. So we can write this with the A_eq and b_eq parameters:
res = linprog(obj, A_eq=A, b_eq=b, bounds=(pb,pb,pb,wb),options={"disp": True})
This then gives us:
>>> res = linprog(obj, A_eq=A, b_eq=b, bounds=(pb,pb,pb,wb),options={"disp": True})
Optimization terminated successfully.
Current function value: -0.000000
Iterations: 4
>>> print(res)
fun: -0.0
message: 'Optimization terminated successfully.'
nit: 4
slack: array([0.66666667, 0.66666667, 0.66666667])
status: 0
success: True
x: array([0.33333333, 0.33333333, 0.33333333, 0. ])
This thus means that we have:
pr = 1/3
pp = 1/3
ps = 1/3
w = 0
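For reference, a compact restatement that should behave the same on a current SciPy (which defaults to the HiGHS solver); the row comments are my reading of the equalities:

from scipy.optimize import linprog

obj = [0, 0, 0, -1]            # maximize w  <=>  minimize -w
A_eq = [[ 0,  1, -1, -1],      # pp - ps - w = 0
        [-1,  0,  1, -1],      # -pr + ps - w = 0
        [ 1, -1,  0, -1],      # pr - pp - w = 0
        [ 1,  1,  1,  0]]      # pr + pp + ps = 1
b_eq = [0, 0, 0, 1]
bounds = [(0, 1), (0, 1), (0, 1), (None, None)]

res = linprog(obj, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)                   # expect [1/3, 1/3, 1/3, 0]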
I'm trying to solve a nonlinear programming task using scipy.optimize.minimize:
max r
x1**2 + y1**2 <= (1-r)**2
(x1-x2)**2 + (y1-y2)**2 >= 4*r**2
0 <= r <= 1
So I wrote the following code:
import numpy as np
from scipy.optimize import minimize

r = np.linspace(0, 1, 100)
x1 = np.linspace(0, 1, 100)
y1 = np.linspace(0, 1, 100)
x2 = np.linspace(0, 1, 100)
y2 = np.linspace(0, 1, 100)

fun = lambda r: -r

cons = ({'type': 'ineq',
         'fun': lambda x1, r: [x1[0] ** 2 + x1[1] ** 2 - (1 - r) ** 2],
         'args': (r,)},
        {'type': 'ineq',
         'fun': lambda x2, r: [x2[0] ** 2 + x2[1] ** 2 - (1 - r) ** 2],
         'args': (r,)},
        {'type': 'ineq',
         'fun': lambda x1, x2, r: [(x1[0] - x2[0]) ** 2 + (x1[1] - x2[1]) ** 2 - 4 * r ** 2],
         'args': (x2, r,)})

bnds = ((0, 1), (-1, 1), (-1, 1), (-1, 1), (-1, 1))
x0 = [0, 0, 0, 0, 0]
minimize(fun, x0, bounds=bnds, constraints=cons)
But I get the following error:
File "C:\Anaconda2\lib\site-packages\scipy\optimize\slsqp.py", line 377, in _minimize_slsqp
c = concatenate((c_eq, c_ieq))
ValueError: all the input arrays must have same number of dimensions
Please help me find my mistakes and write the correct code.
UPD:
Thanks to @unutbu, I've understood how to build it correctly:
fun = lambda x: -x[0]
cons = ({'type': 'ineq',
         'fun': lambda x: -x[1] ** 2 - x[2] ** 2 + (1 - x[0]) ** 2},
        {'type': 'ineq',
         'fun': lambda x: -x[3] ** 2 - x[4] ** 2 + (1 - x[0]) ** 2},
        {'type': 'ineq',
         'fun': lambda x: (x[1] - x[3]) ** 2 + (x[2] - x[4]) ** 2 - 4 * x[0] ** 2})
bnds = ((0, 1), (-1, 1), (-1, 1), (-1, 1), (-1, 1))
x0 = [0.5, 0.3, 0.5, 0.3, 0.5]
answer = minimize(fun, x0, bounds=bnds, constraints=cons)
For a minimization, the constraints have to be brought into the form
g(x) >= 0
which is why the constraints are written the way they are.
Your parameter space appears to be 5-dimensional. A point in your parameter
space would be z = (r, x1, y1, x2, y2). Therefore the function to be minimized
-- and also the constraint functions -- should accept a point z and
return a scalar value.
Thus instead of
fun = lambda r: -r
use
def func(z):
    r, x1, y1, x2, y2 = z
    return -r
and instead of
lambda x1, r: [x1[0] ** 2 + x1[1] ** 2 - (1 - r) ** 2]
use
def con1(z):
    r, x1, y1, x2, y2 = z
    return x1**2 + y1**2 - (1-r)**2
and so on.
Note that simple constraints such as 0 <= r <= 1 can be handled by setting the bounds parameter instead of defining a constraint. And if the bounds for x1, y1, x2, y2 are from -1 to 1, then you might also want to change
x1 = np.linspace(0, 1, 100)
...
to
x1 = np.linspace(-1, 1, 100)
...
However, the arrays r, x1, y1, x2, y2 are not needed to minimize func, so you could just as well eliminate them from the script entirely.
import numpy as np
import scipy.optimize as optimize

"""
max r

x1**2 + y1**2 <= (1-r)**2
(x1-x2)**2 + (y1-y2)**2 >= 4*r**2
0 <= r <= 1
"""

def func(z):
    r, x1, y1, x2, y2 = z
    return -r

def con1(z):
    r, x1, y1, x2, y2 = z
    return x1**2 + y1**2 - (1-r)**2

def con2(z):
    r, x1, y1, x2, y2 = z
    return 4*r**2 - (x1-x2)**2 - (y1-y2)**2

cons = ({'type': 'ineq', 'fun': con1},
        {'type': 'ineq', 'fun': con2})

bnds = ((0, 1), (-1, 1), (-1, 1), (-1, 1), (-1, 1))
guess = [0, 0, 0, 0, 0]
result = optimize.minimize(func, guess, bounds=bnds, constraints=cons)
print(result)
yields
fun: -1.0
jac: array([-1., 0., 0., 0., 0., 0.])
message: 'Optimization terminated successfully.'
nfev: 14
nit: 2
njev: 2
status: 0
success: True
x: array([ 1., 0., 0., 0., 0.])
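One footnote: under SciPy's 'ineq' convention the constraint function must be non-negative at a feasible point, and the two constraint functions above keep the question's original (flipped) inequality directions, which is why r = 1 comes out. A sketch with the signs matching the stated problem (my rearrangement, not part of the answer, and including the second disk constraint from the question's code):

import scipy.optimize as optimize

def func(z):
    r, x1, y1, x2, y2 = z
    return -r

def con1(z):   # x1**2 + y1**2 <= (1-r)**2
    r, x1, y1, x2, y2 = z
    return (1-r)**2 - x1**2 - y1**2

def con2(z):   # (x2, y2) inside the shrunken disk as well
    r, x1, y1, x2, y2 = z
    return (1-r)**2 - x2**2 - y2**2

def con3(z):   # (x1-x2)**2 + (y1-y2)**2 >= 4*r**2
    r, x1, y1, x2, y2 = z
    return (x1-x2)**2 + (y1-y2)**2 - 4*r**2

cons = [{'type': 'ineq', 'fun': c} for c in (con1, con2, con3)]
bnds = ((0, 1), (-1, 1), (-1, 1), (-1, 1), (-1, 1))
result = optimize.minimize(func, [0.4, 0.5, 0.0, -0.5, 0.0],
                           bounds=bnds, constraints=cons)
# Two radius-1/2 circles touching at the origin are the known optimum
# for this packing, so r should come out near 0.5.
print(result.x)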