I'm trying to maximize/minimize a function of two variables using the Lagrange multiplier method; below is my code:
import numpy as np
from scipy.optimize import fsolve

Sa = 200
Sm = 100
n = 90
mu1 = 500
mu2 = 150
sigma1 = 25
sigma2 = 10
f = 0.9

def func(X):
    u1 = X[0]
    u2 = X[1]
    L = X[2]  # this is the multiplier; lambda is a reserved keyword in Python
    # function   --> f(u1,u2) = u1**2 + u2**2
    # constraint --> g(u1,u2) = (Snf/a)**1/b - n = 0
    Snf = Sa/(1 - Sm/(sigma1*u1 + mu1))
    a = (f*(sigma1*u1 + mu1)**2)/(sigma2*u2 + mu2)
    b = -1/3*(f*(sigma1*u1 + mu1))/(sigma2*u2 + mu2)
    return (u1**2 + u2**2 - L * ((Snf/a)**1/b) - n)

def dfunc(X):
    dLambda = np.zeros(len(X))
    h = 1e-3  # this is the step size used in the finite difference
    for i in range(len(X)):
        dX = np.zeros(len(X))
        dX[i] = h
        dLambda[i] = (func(X + dX) - func(X - dX))/(2*h)
    return dLambda

# this is the max
X1 = fsolve(dfunc, [1, 1, 0])
print(X1, func(X1))

# this is the min
X2 = fsolve(dfunc, [-1, -1, 0])
print(X2, func(X2))
When I try a simple constraint such as u1+u2 = 4 or u1^2+u2^2 = 20, it works just fine, but when I use my actual constraint function it always gives the error below. Is there a reason why? Thanks for the help.
C:\Python34\lib\site-packages\scipy\optimize\minpack.py:161:
RuntimeWarning: The iteration is not making good progress, as measured by the
improvement from the last five Jacobian evaluations.
warnings.warn(msg, RuntimeWarning)
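A first thing to check (not part of the original post) is whether the Lagrangian and its finite-difference gradient are even finite at and around the starting point; if the constraint expression produces NaN or inf somewhere along the solver's path, fsolve can stall with exactly this kind of warning. A minimal diagnostic sketch, reusing func and dfunc from above:

# diagnostic sketch: evaluate the Lagrangian and its gradient at the start point
X0 = np.array([1.0, 1.0, 0.0])
print(func(X0))                        # should be a finite scalar
print(dfunc(X0))                       # should be three finite numbers
print(np.all(np.isfinite(dfunc(X0))))  # False would explain the poor progress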
I use solve_ivp to solve an ODE:
import numpy as np
from scipy.integrate import solve_ivp

def test_ode(t, y):
    dydt = C - y + (y ** 8 / (1 + y ** 8))
    return dydt

steady_state = []
for C in np.linspace(0, 1, 1001):
    sol = solve_ivp(test_ode, [0, 1e06], [0], method='BDF')
    steady_state.append(sol.y[0][-1])
This gives me a RuntimeWarning:
ETA: --:--:--/anaconda3/lib/python3.6/site-packages/scipy/integrate/_ivp/bdf.py:418:
RuntimeWarning: divide by zero encountered in power
factors = error_norms ** (-1 / np.arange(order, order + 3))
But even worse, the run basically freezes (or at least gets incredibly slow). Replacing the initial value [0] by [1e-08] does not solve the problem. How can I fix this?
You can use NumbaLSODA: https://github.com/Nicholaswogan/NumbaLSODA . It's like solve_ivp, but all the code can be compiled, so it is very fast:
from NumbaLSODA import lsoda_sig, lsoda
import numpy as np
import numba as nb
import time

@nb.cfunc(lsoda_sig)
def test_ode(t, y_, dydt, p):
    y = y_[0]
    C = p[0]
    dydt[0] = C - y + (y ** 8 / (1 + y ** 8))

funcptr = test_ode.address

@nb.njit()
def main():
    steady_state = np.empty((1001,), np.float64)
    CC = np.linspace(0, 1, 1001)
    y0 = np.array([0.0])
    for i in range(len(CC)):
        data = np.array([CC[i]], np.float64)
        t_eval = np.array([0.0, 1.0e6])
        sol, success = lsoda(funcptr, y0, t_eval, data=data)
        steady_state[i] = sol[-1, 0]
    return steady_state

main()  # first call triggers compilation; time the second call
start = time.time()
steady_state = main()
end = time.time()
print(end - start)
The result is 0.013 seconds.
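For reference (my addition, not part of the answer), the plain-SciPy version of the same sweep, with C passed through solve_ivp's args keyword instead of a module-level global, could look like the sketch below. It assumes SciPy >= 1.4 (when args was added) and uses a much smaller sweep of C values purely to keep the runtime manageable; the point of the compiled version above is precisely that the full 1001-value sweep runs far faster.

import numpy as np
from scipy.integrate import solve_ivp

def test_ode(t, y, C):
    return C - y + (y ** 8 / (1 + y ** 8))

steady_state = []
for C in np.linspace(0, 1, 11):            # reduced sweep, for illustration only
    sol = solve_ivp(test_ode, [0, 1e6], [0], method='BDF', args=(C,))
    steady_state.append(sol.y[0][-1])
print(steady_state)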
given several vectors:
x1 = [3 4 6]
x2 = [2 8 1]
x3 = [5 5 4]
x4 = [6 2 1]
I want to find weights w1, w2, w3 for the three components and take the weighted sum of each vector, yi = w1*xi1 + w2*xi2 + w3*xi3 (for example, y1 = 3*w1 + 4*w2 + 6*w3),
so that the variance of these values (y1, y2, y3, y4) is minimized.
Note: w1, w2, w3 should be > 0, and w1 + w2 + w3 = 1.
I don't know what kind of problem this is, or how to solve it in Python or MATLAB.
You can start by building a loss function consisting of the variance plus the constraint on the w's. The mean is m = (1/4)*(y1 + y2 + y3 + y4), the variance is (1/4)*((y1-m)^2 + (y2-m)^2 + (y3-m)^2 + (y4-m)^2), and the constraint term is a*(w1 + w2 + w3 - 1), where a is the Lagrange multiplier. This looks like a convex optimization problem with convex constraints, since the loss function is quadratic in the target variables (w1, w2, w3) and the constraints are linear. You can look into projected gradient descent algorithms, which respect the constraints provided; see http://www.ifp.illinois.edu/~angelia/L5_exist_optimality.pdf. In general there is no straightforward analytic solution to this kind of problem.
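For concreteness (my addition, not from the answer above), a minimal sketch of the same problem handed to scipy.optimize.minimize with the SLSQP solver, the equality constraint w1 + w2 + w3 = 1 and the bounds w >= 0, instead of a hand-written projected gradient loop:

import numpy as np
from scipy.optimize import minimize

# the four vectors from the question, as rows of X
X = np.array([[3, 4, 6],
              [2, 8, 1],
              [5, 5, 4],
              [6, 2, 1]], dtype=float)

def variance_of_weighted_sums(w):
    y = X @ w                  # y_i = w1*x_i1 + w2*x_i2 + w3*x_i3
    return np.var(y)

w0 = np.full(3, 1.0 / 3.0)     # start from uniform weights
res = minimize(variance_of_weighted_sums, w0,
               method='SLSQP',
               bounds=[(0.0, 1.0)] * 3,
               constraints=[{'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0}])
print(res.x, variance_of_weighted_sums(res.x))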
w = [5, 6, 7]
x1 = [3, 4, 6]
x2 = [2, 8, 1]
x3 = [5, 5, 4]

y1, y2, y3 = 0, 0, 0
for index, i in enumerate(w):
    y1 = y1 + i * x1[index]
    y2 = y2 + i * x2[index]
    y3 = y3 + i * x3[index]

print(min(y1, y2, y3))
I think I get the purpose of your problem, and if what you want is the smallest value, I hope this can help you.
I just made the values fixed; you can turn this into a function once you see that it is one way to approach your question.
I don't know much about optimization problems, but I get the idea of gradient descent, so I tried to reduce the gap between the max score and the min score by shifting weight between them. My script is below:
# coding: utf-8
import numpy as np
# 7.72
# 7.6
# 8.26

def get_max(alist):
    max_score = max(alist)
    idx = alist.index(max_score)
    return max_score, idx

def get_min(alist):
    min_score = min(alist)
    idx = alist.index(min_score)
    return min_score, idx

def get_weighted(alist, aweight):
    res = []
    for i in range(0, len(alist)):
        res.append(alist[i] * aweight[i])
    return res

def get_sub(list1, list2):
    res = []
    for i in range(0, len(list1)):
        res.append(list1[i] - list2[i])
    return res

def grad_dec(w, dist, st=0.001):
    max_item, max_item_idx = get_max(dist)
    min_item, min_item_idx = get_min(dist)
    w[max_item_idx] = w[max_item_idx] - st
    w[min_item_idx] = w[min_item_idx] + st

def cal_score(w, x):
    score = []
    print('weight', w, x)
    for i in range(0, len(x)):
        score_i = 0
        for j in range(0, 5):
            score_i = w[j] * x[i][j] + score_i
        score.append(score_i)
    # check variance is small enough
    print('score', score)
    return score

# cal_score(w,x)
if __name__ == "__main__":
    init_w = [0.2, 0.2, 0.2, 0.2, 0.2, 0.2]
    x = [[7.3, 10, 8.3, 8.8, 4.2], [6.8, 8.9, 8.4, 9.7, 4.2], [6.9, 9.9, 9.7, 8.1, 6.7]]
    score = cal_score(init_w, x)
    variance = np.var(score)
    for round in range(0, 100):
        if variance < 0.012:
            print('ok')
            break
        max_score, idx = get_max(score)
        min_score, idx2 = get_min(score)
        weighted_1 = get_weighted(x[idx], init_w)
        weighted_2 = get_weighted(x[idx2], init_w)
        dist = get_sub(weighted_1, weighted_2)
        # print(max_score, idx, min_score, idx2, dist)
        grad_dec(init_w, dist)
        score = cal_score(init_w, x)
        variance = np.var(score)
        print('variance', variance)
    print(score)
In my tests it really does reduce the variance. I'm glad about that, but I don't know whether my solution is mathematically sound.
My full solution can be viewed in the PDF.
The trick is to put the vectors x_i as the columns of a matrix X.
Written that way, the problem becomes a convex problem with the constraint that the solution lies on the unit simplex.
I solved it using the Projected Sub Gradient method: I calculated the gradient of the objective function and built a projection onto the unit simplex, and then all that is needed is to iterate them.
I validated my solution against CVX.
% StackOverflow 44984132
% How to calculate weight to minimize variance?
% Remarks:
%   1. sa
% TODO:
%   1. ds
% Release Notes
% - 1.0.000     08/07/2017
%   *   First release.

%% General Parameters

run('InitScript.m');

figureIdx           = 0; %<! Continue from Question 1
figureCounterSpec   = '%04d';

generateFigures = OFF;

%% Simulation Parameters

dimOrder   = 3;
numSamples = 4;

mX = randi([1, 10], [dimOrder, numSamples]);
vE = ones([dimOrder, 1]);

%% Solve Using CVX

cvx_begin('quiet')
    cvx_precision('best');
    variable vW(numSamples)
    minimize( (0.5 * sum_square_abs( mX * vW - (1 / numSamples) * (vE.' * mX * vW) * vE )) )
    subject to
        sum(vW) == 1;
        vW >= 0;
cvx_end

disp([' ']);
disp(['CVX Solution - [ ', num2str(vW.'), ' ]']);

%% Solve Using Projected Sub Gradient

numIterations = 20000;
stepSize      = 0.001;
simplexRadius = 1; %<! Unit Simplex Radius
stopThr       = 1e-6;

hKernelFun = @(vW) ((mX * vW) - ((1 / numSamples) * ((vE.' * mX * vW) * vE)));
hObjFun    = @(vW) 0.5 * sum(hKernelFun(vW) .^ 2);
hGradFun   = @(vW) (mX.' * hKernelFun(vW)) - ((1 / numSamples) * vE.' * (hKernelFun(vW)) * mX.' * vE);

vW = rand([numSamples, 1]);
vW = vW(:) / sum(vW);

for ii = 1:numIterations
    vGradW = hGradFun(vW);
    vW     = vW - (stepSize * vGradW);

    % Projecting onto the Unit Simplex
    % sum(vW) == 1, vW >= 0.
    vW = ProjectSimplex(vW, simplexRadius, stopThr);
end

disp([' ']);
disp(['Projected Sub Gradient Solution - [ ', num2str(vW.'), ' ]']);

%% Restore Defaults
% set(0, 'DefaultFigureWindowStyle', 'normal');
% set(0, 'DefaultAxesLooseInset', defaultLoosInset);
You can see the full code in StackOverflow Q44984132 (PDF is available as well).
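The script above calls ProjectSimplex, which is not listed. For readers who prefer Python, a minimal sketch of the same projected (sub)gradient idea follows; the sort-based simplex projection is the standard Euclidean projection onto {w : w >= 0, sum(w) = 1}, and the matrix layout (the four vectors as rows, one weight per component, as in the question) and all names are my own illustration rather than the author's code.

import numpy as np

def project_simplex(v):
    # Euclidean projection of v onto {w : w >= 0, sum(w) == 1}
    u = np.sort(v)[::-1]                        # sort descending
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + theta, 0.0)

# projected (sub)gradient descent on the variance of X @ w
X = np.array([[3, 4, 6], [2, 8, 1], [5, 5, 4], [6, 2, 1]], dtype=float)
n = X.shape[0]
w = project_simplex(np.random.rand(X.shape[1]))
step = 1e-3
for _ in range(20000):
    r = X @ w - np.mean(X @ w)                  # centred weighted sums
    grad = (X - X.mean(axis=0)).T @ r / n       # gradient of 0.5 * var(X @ w)
    w = project_simplex(w - step * grad)
print(w, np.var(X @ w))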
I am trying to find out the cause of the artifacts that appear after convolution; they can be seen in the plot around x = -0.0016 and x = 0.0021 (please see the code below). I am convolving the "Lorentzian" function (the derivative of the Langevin function), which I define in the code, with two Dirac impulses in the function "distrib".
I would appreciate your help.
Thank you.
Here is my code:
import numpy as np
import matplotlib.pyplot as plt

def Lorentzian(xx):
    if not hasattr(xx, '__iter__'):
        xx = [xx]
    res = np.zeros(len(xx))
    for i in range(len(xx)):
        x = xx[i]
        if np.fabs(x) < 0.1:
            res[i] = 1./3. - x**2/15. + 2.*x**4/189. - x**6/675. + 2.*x**8/10395. - 1382.*x**10/58046625. + 4.*x**12/1403325.
        else:
            res[i] = (1./x**2 - 1./np.sinh(x)**2)
    return res

amp = 18e-3
a = 1/.61e3
b = 5.5

t_min = 0
dt = 1/5e6
t_max = (10772) * dt
t = np.arange(t_min, t_max, dt)

x_min = -amp/b
x_max = amp/b
dx = dt*(x_min - x_max)/(t_min - t_max)
x = np.arange(x_min, x_max, dx)

func1 = lambda x: Lorentzian(b*(x/a))

def distrib(x):
    res = np.zeros(np.size(x))
    res[int(np.floor(np.size(x)/3))] = 1
    res[int(3*np.floor(np.size(x)/4))] = 3
    return res

func2 = lambda x, xs: np.convolve(distrib(x), func1(xs), 'same')

plt.plot(x, func2(x, x))
plt.xlabel('x (m)')
plt.ylabel('normalized signal')
Try removing the "pedestal" of func1:
func1(x)[0], func1(x)[-1]
Out[7]: (0.0082945964013920719, 0.008297677313152443)
Just subtract it:
func2 = lambda x, xs: np.convolve(distrib(x), func1(xs) - func1(x)[0], 'same')
which gives a smooth convolution curve.
Depending on the result you want, you may have to add it back in afterwards, weighted by the Dirac sum.
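A small self-contained sketch (my illustration, not from the answer) of why the kernel's constant offset produces those steps: with mode='same', NumPy zero-pads the signal, so wherever a shifted copy of the kernel is truncated inside the window its nonzero baseline turns into a jump, and subtracting that baseline removes it.

import numpy as np

kernel = np.full(101, 0.01)          # a flat "pedestal" standing in for func1's tails
kernel[40:61] += np.hanning(21)      # plus a bump in the middle

impulses = np.zeros(101)
impulses[30] = 1.0                   # two Dirac-like impulses, as in distrib()
impulses[75] = 3.0

raw      = np.convolve(impulses, kernel, 'same')
no_pedal = np.convolve(impulses, kernel - kernel[0], 'same')

# away from the bumps, 'raw' sits at three different baseline levels
# (roughly 0.01, 0.04 and 0.03) -- those are the step artifacts --
# while the pedestal-subtracted version is flat zero there
print(raw[[5, 50, 95]])
print(no_pedal[[5, 50, 95]])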
I am trying to use a Lagrange multiplier to optimize a function, and I am trying to loop over the function to get a list of numbers, but I get the error
ValueError: setting an array element with a sequence.
Here is my code; where did I go wrong? If n is not an array, I get the result correctly.
import numpy as np
from scipy.optimize import fsolve

n = np.arange(10000, 100000, 10000)

def func(X):
    x = X[0]
    y = X[1]
    L = X[2]
    return (x + y + L * (x**2 + y**2 - n))

def dfunc(X):
    dLambda = np.zeros(len(X))
    h = 1e-3
    for i in range(len(X)):
        dX = np.zeros(len(X))
        dX[i] = h
        dLambda[i] = (func(X + dX) - func(X - dX))/(2*h)
    return dLambda

X1 = fsolve(dfunc, [1, 1, 0])
print(X1)
Help would be appreciated, thank you very much.
First, check what you pass to fsolve().
Second, print(func([1, 1, 0])): the result is not a single number (it is [2 2 2 2 2 2 2 2 2]), because n is an array. If you want to iterate over n, try:
import numpy as np
from scipy.optimize import fsolve

n = np.arange(10000, 100000, 10000)

def func(X, n):
    x = X[0]
    y = X[1]
    L = X[2]
    return (x + y + L * (x**2 + y**2 - n))

def dfunc(X, n):
    dLambda = np.zeros(len(X))
    h = 1e-3
    for i in range(len(X)):
        dX = np.zeros(len(X))
        dX[i] = h
        dLambda[i] = (func(X + dX, n) - func(X - dX, n))/(2*h)
    return dLambda

for iter_n in n:
    print("for n = {0} dfunc = {1}".format(iter_n, dfunc([0.8, 0.4, 0.3], iter_n)))
The following is as far as I could boil it down.
I'm trying to solve a system of 18 equations in 18 variables. For the moment, I hold 4 of these variables fixed. Originally, I got weird results, so I simplified the problem to the point where the first 9 and the last 9 equations are separate and identical. Moreover, the problem is exactly identified: there should be one unique solution.
The solution vector contains 14 elements (18 minus the 4 fixed variables). Since these are ordered properly, the first 7 solution variables should be identical to the last 7. However, they are not.
I checked that the equations are identical by plugging in a symmetric vector with x[:7] = x[7:] and verifying that res[:9] == res[9:] was true everywhere.
The following is the output that I get:
Optimization terminated successfully. (Exit mode 0)
Current function value: 0.125393271845
Iterations: 18
Function evaluations: 297
Gradient evaluations: 18
Out[223]:
J W w v U u Y
0 0.663134 0.237578 0.251245 10.00126 0.165647 0.093939 0.906657
1 0.022635 0.825547 1.000000 10.00340 0.512898 0.089790 0.909918
Here I have stacked the first 7 variables into the first row and the next 7 variables into the second row. Clearly, they are not identical.
Code for reproduction follows
import numpy as np
import pandas as pd
from scipy import optimize

# parameters
class Parameters(object):
    r = 1.03
    sBar = 0.1
    sB = 0.1
    c = 0.1
    z = 0.001
    H = 1
    epsilon = 1
    beta = 0.1

def q(theta):
    if theta <= 0:
        return 999
    return float(1)/theta

def f(theta):
    if theta < 1:
        return 0
    return 1 - float(1)/theta

# sum_all allows to see residual as vector, not summed
def twoSectorFake(x, Param, sum_all=True):
    JBar, WBar, wBar, vBar, UBar, uBar, YBar = x[:7]
    JB, WB, wB, vB, UB, uB, YB = x[7:]
    VBar = 0
    VB = 0
    pB = 1
    pBar = 1
    #theta = float(vB + vBar)/u
    thetaBar = float(vBar)/uBar
    thetaB = float(vB)/uB
    res = np.empty(18,)
    res[0] = Param.r*JBar - (pBar - wBar - Param.sBar*(JBar - VBar))
    res[1] = Param.r*VBar - (-Param.c + q(thetaBar)*(JBar - VBar))
    res[2] = Param.r*WBar - (wBar - Param.sBar*(WBar - UBar))
    res[3] = Param.r*UBar - (Param.z + f(thetaBar)*(WBar - UBar))
    res[4] = Param.sBar*YBar - vBar*q(thetaBar)
    res[5] = Param.sBar*YBar - uBar*f(thetaBar)
    res[6] = JBar - (1 - Param.beta)*(JBar + WBar - UBar)
    res[7] = Param.H - YBar - uBar
    res[8] = thetaBar*uBar - vBar
    res[9] = Param.r*JB - (pB - wB - Param.sB*(JB - VB))
    res[10] = Param.r*VB - (-Param.c + q(thetaB)*(JB - VB))
    res[11] = Param.r*WB - (wB - Param.sB*(WB - UB))
    res[12] = Param.r*UB - (Param.z + f(thetaB)*(WB - UB))
    res[13] = Param.sB*YB - vB*q(thetaB)
    res[14] = Param.sB*YB - uB*f(thetaB)
    res[15] = JB - (1 - Param.beta)*(JB + WB - UB)
    res[16] = Param.H - YB - uB
    res[17] = thetaB*uB - vB
    idx = np.abs(res) > 10000
    # don't square too big numbers, they may become INF and the problem won't solve
    res[idx] = np.abs(res[idx])
    res[~idx] = res[~idx]**2
    if sum_all == False:
        return res
    return sum(res)

Param = Parameters()
x2 = np.empty(0,)
boundaries2 = []

# JBar
x2 = np.append(x2, 1)
boundaries2.append([0, 100])
# WBar
x2 = np.append(x2, 1)
boundaries2.append([0, 100])
# wBar
x2 = np.append(x2, 0.5)
boundaries2.append([0.01, 100])
# vBar
x2 = np.append(x2, 10)
boundaries2.append([0.01, 100000])
# UBar
x2 = np.append(x2, float(Param.z)/(Param.r-1) + 1)
boundaries2.append([float(Param.z)/(Param.r-1) - 0.1, 100])
# uBar
x2 = np.append(x2, 0.5*Param.H)
boundaries2.append([0.0000001, Param.H])
# YBar
x2 = np.append(x2, 0.5*Param.H)
boundaries2.append([0.0001, Param.H])
# JB
x2 = np.append(x2, 1)
boundaries2.append([0, 100])
# WB
x2 = np.append(x2, 1)
boundaries2.append([0, 100])
# wB
x2 = np.append(x2, 0.5)
boundaries2.append([1, 100])
# vB
x2 = np.append(x2, 10)
boundaries2.append([0.01, 100])
# UB
x2 = np.append(x2, float(Param.z)/(Param.r-1) + 1)
boundaries2.append([float(Param.z)/(Param.r-1) - 0.1, 100])
# uB
x2 = np.append(x2, 0.5*Param.H)
boundaries2.append([0.0000001, Param.H])
# YB
x2 = np.append(x2, 0.5*Param.H)
boundaries2.append([0.0001, Param.H])

result = optimize.fmin_slsqp(func=twoSectorFake, x0=x2, bounds=boundaries2, args=(Param,), iter=200)

res1 = result[:7]
res2 = result[7:]
df = pd.DataFrame(np.reshape(res1, (1, 7)), columns=['J', 'W', 'w', 'v', 'U', 'u', 'Y'])
df.loc[1] = np.reshape(res2, (1, 7))
print(df)
EDIT:
Your variable boundaries2 isn't symmetric, i.e. boundaries2[7:] != boundaries2[:7].
Try writing
boundaries2 = boundaries2[7:]*2
just before your call to fmin_slsqp and you get a symmetric local minimum. I leave the previous general comments on your setup below, because they apply in any case.
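A quick sanity check along these lines (my addition, reusing x2 and boundaries2 from the question's script) makes the asymmetry visible: the second assertion fails on the original bounds and passes once boundaries2 = boundaries2[7:]*2 has been applied.

# sanity check: the two sectors should be exact mirror images of each other
assert list(x2[:7]) == list(x2[7:]), "initial guess is not symmetric"
assert boundaries2[:7] == boundaries2[7:], "bounds are not symmetric across the two sectors"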
First of all, are you sure that there exists only one solution to your problem? If not, I wouldn't necessarily expect the scipy numerical routine to return the solution you have in mind; it can converge to any other, non-symmetric solution.
Secondly, you are not actually solving the system of equations. If you evaluate twoSectorFake(result, Param) at the end of your code, you get 0.15, not 0. You may be better off with a proper root solver; see the root finding section of scipy.optimize.
This means that what you have found is a local minimum of your target function, i.e. not a zero, and again there is no reason why the numerical routine should necessarily find a symmetric local minimum.
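To illustrate that last point (my sketch, not part of the original answer): if the abs/square transformation at the end of twoSectorFake is removed so that it returns the raw residual vector when sum_all=False, the system can be handed to scipy.optimize.root. With 18 residuals and 14 unknowns, the least-squares method 'lm' is the natural choice, since the default solver requires a square system.

from scipy import optimize

# assumes twoSectorFake returns the raw residuals (no abs/**2 step) for sum_all=False
sol = optimize.root(lambda x: twoSectorFake(x, Param, sum_all=False),
                    x2, method='lm')
print(sol.success)
print(np.reshape(sol.x[:7], (1, 7)))
print(np.reshape(sol.x[7:], (1, 7)))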