I have some data of the form
x1[i], x2[i], x3[i], z[i],
where z[i] is an unknown deterministic function of x1[i], x2[i], and x3[i]. I would like to find a quadratic function u(x1, x2, x3)= a11*x1^2 + a22*x2^2 + a33*x3^2 + a12*x1*x2 + ... + a0 that overbounds the data, i.e., u(x1[i], x2[i], x3[i]) >= z[i] for all i, and that minimizes the sum of the squared errors subject to the constraints.
Is there a computationally efficient solution approach in either Python or Matlab?
This is a quadratic programming problem with linear constraints: the unknowns are the polynomial coefficients, the objective is least squares in those coefficients, and each data point contributes one linear inequality. There are efficient algorithms to solve these, and they are implemented in both Matlab and Python; see quadprog and CVXOPT respectively.
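As a rough sketch of how this could look with CVXOPT in Python (the data arrays here are made-up placeholders; the ten columns of Phi are the quadratic monomials whose coefficients you want):
import numpy as np
from cvxopt import matrix, solvers

# Placeholder data; replace with your x1, x2, x3, z arrays.
rng = np.random.default_rng(0)
x1, x2, x3 = rng.standard_normal((3, 50))
z = x1**2 + x2*x3 + 0.1*rng.standard_normal(50)

# Design matrix of quadratic monomials; columns correspond to
# a11, a22, a33, a12, a13, a23, a1, a2, a3, a0.
Phi = np.column_stack([x1**2, x2**2, x3**2, x1*x2, x1*x3, x2*x3,
                       x1, x2, x3, np.ones_like(x1)])

# Least squares objective ||Phi a - z||^2 in standard QP form (1/2) a'Pa + q'a.
P = matrix(2.0 * Phi.T @ Phi)
q = matrix(-2.0 * Phi.T @ z)
# Overbounding constraints Phi a >= z, written as -Phi a <= -z for cvxopt.
G = matrix(-Phi)
h = matrix(-z)

sol = solvers.qp(P, q, G, h)
a = np.array(sol['x']).ravel()  # fitted coefficients of the overbounding quadratic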
There is a very simple solution. Just use polynomial regression in Matlab (http://www.matrixlab-examples.com/polynomial-regression.html).
You will get a certain function P(x1[i],x2[i],x3[i]).
1. Then for each i compute expression Diff[i] = P(x1[i],x2[i],x3[i]) - z[i].
You will get some array Diff.
2. Select all the negative values.
3. Find the minimum value in Diff: M = Min(Diff).
4. The desired function is F(x1[i],x2[i],x3[i]) = P(x1[i],x2[i],x3[i]) + Abs(M), where Abs(M) is the absolute value of M (its value with the sign dropped).
But if you're not just limited to quadratic functions, you can vary the degree of the polynomial and eventually get a more precise solution.
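A minimal Python sketch of these steps (the data arrays are placeholders; in Matlab the same idea works with a hand-built design matrix of monomials):
import numpy as np

# Placeholder data; replace with your x1, x2, x3, z arrays.
rng = np.random.default_rng(1)
x1, x2, x3 = rng.standard_normal((3, 100))
z = x1*x2 + x3**2

# Quadratic monomial design matrix and ordinary least squares fit P.
Phi = np.column_stack([x1**2, x2**2, x3**2, x1*x2, x1*x3, x2*x3,
                       x1, x2, x3, np.ones_like(x1)])
coeffs, *_ = np.linalg.lstsq(Phi, z, rcond=None)

diff = Phi @ coeffs - z              # step 1: Diff[i] = P(...) - z[i]
M = diff.min()                       # steps 2-3: the most negative residual
shift = abs(M) if M < 0 else 0.0     # step 4: shift P up by |M|
# The overbounding function is F(x1, x2, x3) = P(x1, x2, x3) + shift.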
I want to solve the following (convex) minimization problem:
min ||x||_1 subject to sgn(A[x,R]) = y and ||x||_2 = 1
where A is an m x (N+1) matrix, x in R^N a vector, and [x,R] the vector created by appending a given number R to x. The objective is to find the optimal value for x.
A is a Fourier matrix and there are fast matrix-vector, inversion, etc. algorithms available. Since this matrix is really big, I need to use an optimization algorithm that utilizes this.
Currently, I use the following implementation in cvxpy, which is way too slow:
import cvxpy as cvx

# A (the m x (N+1) Fourier matrix), y and R are given
# rewrite the problem in the form x = x^+ - x^-
n = A.shape[1] - 1
vx = cvx.Variable(2*n)
objective = cvx.Minimize(cvx.pnorm(vx, 1))  # min ||x||_1
constraints = [vx >= 0,
               cvx.multiply(A[:, :n] @ vx[:n] - A[:, :n] @ vx[n:] + A[:, n]*R, y) >= 0,
               cvx.norm(vx, 2) <= R]  # sgn(A[x,R]) = y, ||x||_2 <= R
prob = cvx.Problem(objective, constraints)
prob.solve()
x = vx.value
solution = x[:n] - x[n:]
Is there a way to use fast matrix computations in cvxpy? Or is there a better library? I found a few implementations that can do this for one special algorithm, but not in the general case, so I was not able to adapt them to my problem.
No. The solver will not call your matrix multiplication code. Solvers do their own linear algebra, which works very differently in many ways. In a sense, your matrix multiplication is just notation for the problem statement.
Regarding performance, it depends heavily on where the bottleneck is. Is it in generating the model (in cvxpy itself) or in the solver? What solver are you using? Consider using a different solver. Obviously, we don't have enough information (and no reproducible example) to answer this question.
I am trying to minimize a function of a vector of length 20, but I want to constrain the solution to be monotonic, i.e.
x[1] <= x[2]... <= x[20]
I have tried to implement this in the following way using "constraints" for this routine:
cons = tuple([{'type':'ineq', 'fun': lambda x: x[i]- x[i-1]} for i in range(1, len(node_vals))])
res = sp.optimize.minimize(localisation, b, args=(d), constraints = cons) #optimize
However, the results I get are not monotonic, even when the initial guess b is; it seems that the optimizer is completely ignoring the constraints. What could be going wrong? I have also tried changing the constraint to x[i]**3 - x[i+1]**3 to make it "smoother", but it didn't help at all. My objective function, localisation, is the integral of the solution to an eigenvalue problem whose parameters are defined beforehand:
def localisation(node_vals, domain): #calculate localisation for solutions with piecewise linear grading
    f = piecewise(node_vals, domain) #create piecewise linear function using given values at nodes
    #plt.plot(domain, f(domain))
    M = diff_matrix(f(domain)) #differentiation matrix created from piecewise linear function
    m = np.concatenate(([0], get_solutions(M)[1][:, 0], [0]))
    integral = num_int(domain, m)
    return integral
You didn't post a minimal reproducible example that we can run. However, did you try to specify which optimization algorithm to use in SciPy? Something like this:
res = sp.optimize.minimize(localisation, b, args=(d), constraints = cons, method='SLSQP')
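A self-contained sketch of that suggestion with a made-up quadratic objective (note the i=i default argument in each lambda, which pins the loop index so every constraint refers to its own pair of entries):
import numpy as np
from scipy.optimize import minimize

n = 20
# made-up objective just for illustration: pull x toward a non-monotone target
target = np.sin(np.linspace(0, 3 * np.pi, n))

def objective(x):
    return np.sum((x - target) ** 2)

# x[i] - x[i-1] >= 0 for every i
cons = tuple({'type': 'ineq', 'fun': lambda x, i=i: x[i] - x[i - 1]}
             for i in range(1, n))

b = np.linspace(0, 1, n)  # monotone initial guess
res = minimize(objective, b, constraints=cons, method='SLSQP')
print(np.all(np.diff(res.x) >= -1e-6))  # True: the result is nondecreasing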
I'm having a very similar problem but with additional upper and lower bounds on the monotonicity property. I'm tackling the problem like this (maybe it helps you):
Use the trust-region constrained algorithm provided by scipy. It lets you express linear constraints in matrix form:
lb <= A.dot(x) <= ub
where lb and ub are the lower and upper bounds of the constraints, and A is the matrix representing the linear constraints.
Every row of matrix A is a linear term which defines one constraint.
If, for example, x[0] <= x[1], then this can be transformed into x[0] - x[1] <= 0, which as a row of the constraint matrix A looks like [1, -1, 0, ...], provided the corresponding entry of the upper bound vector is 0 (the other direction works too; having at least one of the two bounds, lower or upper, makes this easy).
Setting up enough of these inequalities, and where possible merging several of them into a single row, gives a constraint matrix sufficient to solve the problem (see the sketch below).
Hope this helps a bit; it did the job for my problem.
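A minimal sketch of this approach with scipy's LinearConstraint and a placeholder objective (the real objective would be your own function):
import numpy as np
from scipy.optimize import minimize, LinearConstraint

n = 20
# Row i of A encodes x[i] - x[i+1]; bounding A @ x above by 0 enforces monotonicity.
A = np.eye(n - 1, n) - np.eye(n - 1, n, k=1)
monotone = LinearConstraint(A, -np.inf, 0.0)

def objective(x):  # placeholder objective, stands in for the real one
    return np.sum((x - np.sin(np.linspace(0, 3 * np.pi, n))) ** 2)

x0 = np.linspace(0, 1, n)  # monotone initial guess
res = minimize(objective, x0, method='trust-constr', constraints=[monotone])
print(np.all(np.diff(res.x) >= -1e-8))  # True: the solution is nondecreasing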
I'm trying to solve this equation using the "nsolve" function. Unfortunately, this error appears:
ValueError: Could not find root within given tolerance. (435239733.760000060718 > 2.16840434497100886801e-19)
Try another starting point or tweak arguments.
The code is:
import sympy
d=[0.3, 32.6, 33.4, 241.7, 396.2, 444.4, 480.8, 588.9, 1043.9, 1136.1, 1288.1, 1408.1, 1439.4, 1604.8]
N=len(d)
x = sympy.Symbol('x', real=True)
expr2 = sympy.Eq(d[13] + N * sympy.Pow(x, -1) - N * d[13] * sympy.Pow(1 - sympy.exp(-d[13] * N), -1), 0)
expr_2 = sympy.simplify(expr=expr2)
solution = sympy.nsolve(expr_2, -0.01)
s = round(solution, 6)
print(s)
The equation you are trying to solve has large derivatives and very abrupt changes, including a singularity at x == 0 (plotting the left-hand side, e.g. in Mathematica, makes this visible).
Numerical solvers struggle with such functions because most of them assume some amount of smoothness around the solution and can be confused near singularities. Almost all of them (solvers in general, not just SymPy's) benefit from regularization or a reformulation of the problem.
I would suggest simplifying the equation by multiplying both sides by x. This removes the division by x and leaves a smoother function (linear in this case, since the exponential term does not involve x), which numerical solvers handle well.
With this reformulation, you should find that the solution is approximately 0.000671064.
Moreover, I would also suggest rescaling the coefficients so that they all lie in [-1, 1]. This also generally helps solvers. In your case the solver will find the solution easily anyway since the equation is linear, but more complex equations might cause problems.
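A quick sketch of that reformulation with SymPy, using the same data as in the question:
import sympy

d = [0.3, 32.6, 33.4, 241.7, 396.2, 444.4, 480.8, 588.9, 1043.9,
     1136.1, 1288.1, 1408.1, 1439.4, 1604.8]
N = len(d)
x = sympy.Symbol('x', real=True)

# the original equation multiplied through by x: the 1/x singularity disappears
# and the expression becomes linear in x (the exponential does not involve x)
expr = sympy.Eq(d[13] * x + N - N * d[13] * x / (1 - sympy.exp(-d[13] * N)), 0)
solution = sympy.nsolve(expr, x, 1.0)  # any starting point works for a linear equation
print(round(solution, 9))              # approximately 0.000671064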
I would like to know how to define a complex objective function using or-tools (if it is possible).
The basic example below shows how to set up a simple linear problem with OR-Tools in Python:
solver = pywraplp.Solver('lp_pricing_problem', pywraplp.Solver.GLOP_LINEAR_PROGRAMMING)
# Define variables with a range from 0 to 1000.
x = solver.NumVar(0, 1000, 'Variable_x')
y = solver.NumVar(0, 1000, 'Variable_y')
# Define some constraints.
solver.Add(x >= 17)
solver.Add(x <= 147)
solver.Add(y >= 61)
solver.Add(y <= 93)
# Minimize 0.5*x + 2*y
objective = solver.Objective()
objective.SetCoefficient(x, 0.5)
objective.SetCoefficient(y, 2)
objective.SetMinimization()
status = solver.Solve()
# Print the solution
if status == solver.OPTIMAL:
    print("x: {}, y: {}".format(x.solution_value(), y.solution_value()))  # x: 17.0, y: 61.0
In this very basic example the objective function is Minimize(0.5*x + 2*y).
What would be the syntax to obtain, for example, the least squares Minimize(x^2 + y^2) or the absolute value of a variable Minimize(abs(x) + y)?
Is it possible to define a sub-function and call it into the objective function? Or should I proceed another way?
Many thanks in advance,
Romain
You've tagged this question with linear-programming, so you already have the ingredients to figure out the answer here.
If you check out this page, you'll see that OR-Tools solves linear programs, as well as a few other families of optimization problems.
So the first objective function you mention, Minimize(0.5*x + 2*y) is solvable because it is linear.
The second objective you mention---Minimize(x^2 + y^2)---cannot be solved with OR-Tools because it is nonlinear: those squared terms make it quadratic. To solve this problem you need something that can do quadratic programming, second-order cone programming, or quadratically constrained quadratic programming. All of these methods include linear programming as a special case. The tool I recommend for solving these sorts of problems is cvxpy, which offers a powerful and elegant interface. (Alternatively, you can approximate the quadratic with piecewise-linear terms, but you will incur many more constraints.)
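For instance, the quadratic objective from the question can be written in cvxpy in a few lines (a sketch, not OR-Tools syntax):
import cvxpy as cp

x = cp.Variable()
y = cp.Variable()
constraints = [x >= 17, x <= 147, y >= 61, y <= 93]
problem = cp.Problem(cp.Minimize(cp.square(x) + cp.square(y)), constraints)
problem.solve()
print(x.value, y.value)  # roughly 17 and 61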
The last objective you mention, Minimize(c*abs(x) + y), can be solved as a linear program even though abs(x) itself is nonlinear. To do so, introduce t1, t2 >= 0, substitute x = t1 - t2, and rewrite the objective as min(c*(t1 + t2) + y). This works as long as c is positive and you are minimizing (or c is negative and you are maximizing), because at the optimum at least one of t1, t2 will be zero, so t1 + t2 equals abs(x). A longer explanation is here.
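Here is a sketch of that trick with the same OR-Tools API as in the question, for Minimize(abs(x) + y), with x constrained to a negative range so the absolute value actually matters (the bounds are made up for illustration):
from ortools.linear_solver import pywraplp

solver = pywraplp.Solver('abs_value_example', pywraplp.Solver.GLOP_LINEAR_PROGRAMMING)

# Split x into x = t1 - t2 with t1, t2 >= 0, so abs(x) = t1 + t2 at the optimum.
t1 = solver.NumVar(0, 1000, 't1')
t2 = solver.NumVar(0, 1000, 't2')
y = solver.NumVar(0, 1000, 'Variable_y')

# Made-up constraints: x in [-147, -17], y >= 61.
solver.Add(t1 - t2 >= -147)
solver.Add(t1 - t2 <= -17)
solver.Add(y >= 61)

# Minimize abs(x) + y  ==  Minimize t1 + t2 + y
objective = solver.Objective()
objective.SetCoefficient(t1, 1)
objective.SetCoefficient(t2, 1)
objective.SetCoefficient(y, 1)
objective.SetMinimization()

status = solver.Solve()
if status == solver.OPTIMAL:
    x_val = t1.solution_value() - t2.solution_value()
    print("x: {}, y: {}".format(x_val, y.solution_value()))  # expect x: -17.0, y: 61.0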
There are many such transformations you can perform and one of the skills of a mathematical programmer/operations researcher is to have many of them memorized.
I am trying to find a solution to a system of equations using scipy.optimize.fsolve in Python 2.7. The goal is to calculate equilibrium concentrations for a chemical system. Due to the nature of the problem, some of the constants are very small. For some combinations of parameters I do get a proper solution; for others I don't. Either the solutions are negative, which is not reasonable from a physical point of view, or fsolve produces:
ier = 3, 'xtol=0.000000 is too small, no further improvement in the approximate\n solution is possible.')
ier = 4, 'The iteration is not making good progress, as measured by the \n improvement from the last five Jacobian evaluations.')
ier = 5, 'The iteration is not making good progress, as measured by the \n improvement from the last ten iterations.')
It seems to me, based on my research, that the failure to find proper solutions of the equation system is connected to the datatype float64 not being precise enough. As a friend pointed out, the system is not well conditioned, with parameters differing by several orders of magnitude.
So I tried to use fsolve with the mpfr type provided by the gmpy2 module, but that resulted in the following error:
TypeError: Cannot cast array data from dtype('O') to dtype('float64') according to the rule 'safe'
Now here is a small example with parameters which lead to a solution if the randomized starting values happen to be good. However, if the constant C_HCL is chosen to be something like 1e-4 or bigger, then I never find a proper solution.
from numpy import *
from scipy.optimize import *
K_1 = 1e-8
K_2 = 1e-8
K_W = 1e-30
C_HCL = 1e-11
C_NAOH = K_W/C_HCL
C_HL = 1e-6
if C_HCL-C_NAOH > 0:
    Saeure_Base = C_HCL-C_NAOH+sqrt(K_W)
    OH_init = K_W/(Saeure_Base)
elif C_HCL-C_NAOH < 0:
    OH_init = C_NAOH-C_HCL+sqrt(K_W)
    Saeure_Base = K_W/OH_init
# some randomized start parameters
G1 = random.uniform(0, 2)*Saeure_Base
G2 = random.uniform(0, 2)*OH_init
G3 = random.uniform(1, 2)*C_HL*(sqrt(K_W))/(Saeure_Base+OH_init)
G4 = random.uniform(0.1, 1)*(C_HL - G3)/2
G5 = C_HL - G3 - G4
zGuess = array([G1,G2,G3,G4,G5])
#equation system / 5 variables --> H3O, OH, HL, H2L, L
def myFunction(z):
    H3O = z[0]
    OH = z[1]
    HL = z[2]
    H2L = z[3]
    L = z[4]
    F = empty((5))
    F[0] = H3O*L/HL - K_1
    F[1] = OH*H2L/HL - K_2
    F[2] = K_W - OH*H3O
    F[3] = C_HL - HL - H2L - L
    F[4] = OH+L+C_HCL-H2L-H3O-C_NAOH
    return F
z = fsolve(myFunction,zGuess, maxfev=10000, xtol=1e-15, full_output=1,factor=0.1)
print z
So the questions are: Is this problem caused by the precision of float64, and if yes, (how) can it be solved with Python? Is fsolve the way to go? Would I need to change the fsolve function so that it accepts a different data type?
The root of your problem is either theoretical or numerical.
The scipy.optimize.fsolve function is based on the MINPACK Fortran library (http://www.netlib.org/minpack/). This solver uses a Newton-type root-finding algorithm (Powell's hybrid method) to compute the solution.
There are underlying assumptions about the smoothness of the function when you use this algorithm. For example, the Jacobian matrix at the solution point x is supposed to be invertible. The one you should be most concerned about here is the basin of attraction.
In order to converge, the starting point of the algorithm needs to be near the actual solution, i.e. in the basin of attraction. This condition is always met for convex functions, but it is easy to find functions for which the algorithm behaves badly. Your function is one of them, since it contains ratios of the input parameters.
To address this issue you should change the starting point. The starting point also becomes very important for functions with multiple solutions: the picture in the Wikipedia article shows which solution is found depending on the starting point (five colours for five solutions), so you should be careful with your result and actually check that it makes "physical" sense.
For the numerical aspects, the Newton-type algorithm needs the value of the Jacobian matrix (the matrix of derivatives). If it is not provided, the MINPACK solver estimates the Jacobian with a finite-difference formula. The perturbation step of that formula is controlled by the epsfcn argument (its default, None, effectively means machine precision); alternatively, you can supply the Jacobian directly through fprime, in which case no finite-difference estimate is needed. So first you should look at these options; you could also derive the Jacobian of your function by hand and pass it in.
However, the smallest useful step size is the machine precision, also called machine epsilon. For your problem, the input values are very small, which is a problem here. I would suggest multiplying all of them by the same factor (like 10^6); this is equivalent to a change of units but avoids rounding errors and machine-precision issues.
The same issue shows up in the parameter xtol=1e-15 you provided: the error message prints it as xtol=0.000000 only because of formatting, but such a tight tolerance is at the edge of what double precision can deliver. Also, if you look at your line F[2] = K_W - OH*H3O: given the machine precision, it hardly matters whether K_W is 1e-15 or 1e-30; compared to machine precision, 0 is a solution in both cases. To avoid this problem, just multiply everything by a bigger value.
So to sum up:
For a Newton-type algorithm, the initialisation point matters!
For this algorithm, you should specify how the Jacobian is computed!
In numerical computation, never work with very small values. You can easily rescale the problem: it is a basic unit conversion, like working in grams instead of kilograms. A toy sketch of both points follows.
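To illustrate the last two points, here is a toy sketch (a made-up two-equation system, not your chemistry system) showing rescaling to order-one variables and passing an analytic Jacobian via fprime:
import numpy as np
from scipy.optimize import fsolve

# Toy system: a*b = 1e-15 and a = 1000*b, whose solution a = 1e-6, b = 1e-9
# involves badly scaled unknowns. Work with rescaled variables u = a*1e6 and
# w = b*1e9, both of order 1, and rescale the residuals to order 1 as well.
def scaled_equations(v):
    u, w = v
    return [u * w - 1.0,   # (a*b - 1e-15) * 1e15
            u - w]         # (a - 1000*b) * 1e6

# Analytic Jacobian of the scaled system, passed via fprime so fsolve
# does not have to estimate it by finite differences.
def scaled_jacobian(v):
    u, w = v
    return np.array([[w, u],
                     [1.0, -1.0]])

u, w = fsolve(scaled_equations, [0.5, 0.5], fprime=scaled_jacobian)
a, b = u * 1e-6, w * 1e-9
print(a, b)  # close to 1e-6 and 1e-9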