Minimizing the sum of 3 variables subject to equality and integrality constraints - python

I am working on a programming (using Python) problem where I have to solve the following type of linear equation in 3 variables:
x, y, z are all integers.
Equation example: 2x + 5y + 8z = 14
Condition: Minimize x + y + z
I have been searching for an algorithm that finds a solution in an optimal way. If anybody has any idea, please point me to an algorithm or code sources.
Just out of curiosity: what can be done if this problem is extrapolated to n variables?
I don't want to use hit & trial loops to keep checking values. Also, there may be a scenario where the equation has no solution.
UPDATE
Adding lower bounds condition:
x, y, z >= 0
x, y, z are natural

Any triple (x, y, z) with z = (14 - 2x - 5y) / 8 satisfies your constraint.
Note that x + y + (14 - 2x - 5y) / 8 = 7/4 + (3/4) x + (3/8) y is unbounded from below: it decreases as x and y decrease, so (without the lower bounds added in the update) there is no finite minimum.

You have an equality-constrained integer program (IP) in just 3 dimensions. The equality constraint 2 x + 5 y + 8 z = 14 defines a plane in 3-dimensional space. Parametrizing it,
x = 7 - 2.5 u - 4 v
y = u
z = v
we obtain an unconstrained IP in 2 dimensions. Given the integrality and nonnegativity constraints, u must be even (otherwise x is not an integer) and 2.5 u + 4 v <= 7, which leaves u ∈ {0, 2} and v ∈ {0, 1}. Enumerating all four (u, v) pairs (note that (u, v) = (2, 1) gives x = -2 and is infeasible), we conclude that the minimum is 4 and that it is attained at (u, v) = (2, 0) and (u, v) = (0, 1), which correspond to (x, y, z) = (2, 2, 0) and (x, y, z) = (3, 0, 1), respectively.
Using PuLP to solve the integer program:
from pulp import *
# decision variables
x = LpVariable("x", 0, None, LpInteger)
y = LpVariable("y", 0, None, LpInteger)
z = LpVariable("z", 0, None, LpInteger)
# define integer program (IP)
prob = LpProblem("problem", LpMinimize)
prob += x+y+z # objective function
prob += 2*x + 5*y + 8*z == 14 # equality constraint
# solve IP
prob.solve()
# print results
print(LpStatus[prob.status])
print(value(x))
print(value(y))
print(value(z))
which produces x = 3, y = 0 and z = 1.

Another tool for solving this type of problem is SCIP. There is also an easy-to-use Python interface available on GitHub: PySCIPOpt.
In general, (mixed) integer programming problems are very hard to solve (NP-hard), and even simple-looking instances with only a few variables and constraints can take hours to prove optimality.
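For completeness, here is a minimal sketch of what the same small model could look like with PySCIPOpt (a sketch, not a definitive implementation; it assumes an installation where SCIP is available and uses the basic Model/addVar/addCons API):
from pyscipopt import Model

model = Model("min_sum")  # hypothetical model name
x = model.addVar("x", vtype="I", lb=0)  # "I" = integer variable
y = model.addVar("y", vtype="I", lb=0)
z = model.addVar("z", vtype="I", lb=0)
model.addCons(2*x + 5*y + 8*z == 14)   # equality constraint
model.setObjective(x + y + z, "minimize")
model.optimize()
if model.getStatus() == "optimal":
    print(model.getVal(x), model.getVal(y), model.getVal(z))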

From your first equation:
x = (14 - 5y - 8z) / 2
so, you now only need to minimize
(14 - 5y - 8z) / 2 + y + z
which is
(14 - 3y - 6z) / 2
But we can ignore the ' / 2' part for minimization purposes.
Presumably, there must be some other constraints on your problem, since as described the solution is that both y and z may grow without bound.

I do not know a general fast solution for n variables that avoids hit & trial loops. But for the given specific equation 2x + 5y + 8z = 14, there may be a shortcut based on observation.
Notice that the range is very small for any possible solutions:
0 <= x <= 7, 0 <= y <= 2, 0 <= z <= 1
Also, apart from x = 7 (the only single-variable solution), you have to use at least 2 variables.
(x + y + z = 7 for this case.)
Let's see what we get using only 2 variables:
If you choose (x, z) or (y, z): since z can only be 1, x or y is then determined trivially.
(x + y + z = 4 for (x, z); no solution for (y, z).)
If you choose (x, y): since x's coefficient is even and y's coefficient is odd, you must choose an even value of y to achieve an even R.H.S. (14). That means y must be 2, and x is then determined trivially.
(x + y + z = 4 for this case.)
Now let's see what we get using all 3 variables:
Similarly, z must be 1, so it reduces to using the 2 variables (x, y) to achieve 14 - 8 = 6, which is even.
By the same parity argument, y must be even and nonzero, so y = 2; but then 5y + 8z = 10 + 8 = 18 > 14 already, which means there is no solution using all 3 variables.
Therefore, simply by reducing the equation to 1 or 2 variables, we find that the minimum x + y + z is 4 (x = 3, y = 0, z = 1 or x = 2, y = 2, z = 0).
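If you do accept a tiny brute-force check (this is exactly the kind of loop the question wanted to avoid, used here only to verify the reasoning), the bounds derived above make the search space so small (8 x 3 x 2 candidates) that exhaustive enumeration is instant:
# enumerate within the hand-derived bounds 0 <= x <= 7, 0 <= y <= 2, 0 <= z <= 1
solutions = [(x, y, z)
             for x in range(8)   # 2x <= 14
             for y in range(3)   # 5y <= 14
             for z in range(2)   # 8z <= 14
             if 2*x + 5*y + 8*z == 14]
best = min(solutions, key=sum)
print(best, sum(best))  # (2, 2, 0) 4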


Symbolic Calculus and Integration in Python

I am trying to numerically compute a double integral.
The issue is that (I think) I need a mix of symbolic integration and numerical integration.
The integral looks something like this (the image from the original post, reconstructed from the code below):
integral from -oo to oo of ( integral from -oo to oo of exp(-w^2/(2*sigmaw) - alpha*(x-w)^2/(2*sigma)) dw )^(1/alpha) dx
I cannot use an off-the-shelf double-integration routine (e.g. scipy.integrate.dblquad) because it is not just a double integral: there is the power (1/a) between the two integrals.
I cannot get a number for the innermost integral (to then raise to the power) because it ends up being a function that depends on x which I would then need to integrate.
I tried symbolic calculus, using a nested sym.integrate like this (symbol definitions added for completeness):
import sympy as sym
w, x = sym.symbols('w x')
alpha, sigma, sigmaw = sym.symbols('alpha sigma sigmaw', positive=True)
sym.integrate((sym.integrate(sym.exp(-(w**2)/(2*sigmaw)-alpha*((x-w)**2)/(2*sigma)),(w,-sym.oo, sym.oo)))**(1/alpha),(x,-sym.oo, sym.oo))
however, it just spits back the expression itself and no number.
I think I would need to get a symbolic expression for the inner integral to use as a function for numerical integration.
Is it even possible?
If not in python, with another language like R?
Any experience with things of this sort?
I worked with Maxima (https://maxima.sourceforge.io) since OP seems to be saying the exact system used isn't too important.
The integrand is just a product of Gaussian bumps, so its integral over the real line is not too hard. Maxima doesn't have the strongest integrator in the world, but anyway it seems to handle this problem okay.
Start by assuming all the parameters are positive; if not specified, Maxima will ask for the sign during the calculation.
(%i2) assume (alpha > 0, sigmaw > 0, sigma > 0);
(%o2) [alpha > 0, sigmaw > 0, sigma > 0]
Define the inner integrand.
(%i3) I: exp(-(w**2)/(2*sigmaw)-alpha*((x-w)**2)/(2*sigma));
(%o3) %e^(-(alpha*(x-w)^2)/(2*sigma)-w^2/(2*sigmaw))
Compute the inner integral.
(%i4) I1: integrate (I, w, minf, inf);
The pretty-printer (ASCII art) display is hard to read here, so the 1-d representation produced by grind makes more sense:
(%i5) grind(%);
(sqrt(2)*sqrt(%pi)*sqrt(sigma)*sqrt(sigmaw)
 *%e^-((alpha*x^2)/(2*alpha*sigmaw+2*sigma)))
 /sqrt(alpha*sigmaw+sigma)$
(%o5) done
Define the outer integrand.
(%i7) I2: I1^(1/alpha);
(%o7) (2^(1/(2*alpha))*%pi^(1/(2*alpha))*sigma^(1/(2*alpha))*sigmaw^(1/(2*alpha))
 *%e^-(x^2/(2*alpha*sigmaw+2*sigma)))/(alpha*sigmaw+sigma)^(1/(2*alpha))
Compute the outer integral. The final result is named foo here.
(%i9) foo: integrate (I2, x, minf, inf);
(%o9) (%pi^(1/(2*alpha)+1/2)*2^(1/(2*alpha))*sigma^(1/(2*alpha))*sigmaw^(1/(2*alpha))
 *sqrt(2*alpha*sigmaw+2*sigma))/(alpha*sigmaw+sigma)^(1/(2*alpha))
Evaluate the outer integral for specific values of the parameters.
(%i10) ev (foo, alpha = 3, sigma = 3/7, sigmaw = 7/4);
(%o10) (2^(1/6)*3^(1/6)*7^(1/6)*159^(1/3)*%pi^(2/3))/sqrt(14)
(%i11) float(%);
(%o11) 5.790416728790489
Compute a numerical approximation. Note quad_qagi is suitable for infinite intervals.
(%i12) ev (quad_qagi (lambda([x], quad_qagi (I, w, minf, inf)[1]^(1/alpha)), x, minf, inf),
alpha = 3, sigma = 3/7, sigmaw = 7/4);
(%o12) [5.790416728790598, 7.216782674725913E-9, 270, 0]
Looks like that supports the symbolic result.
(%i13) first(%) - %o11;
(%o13) 1.092459456231154E-13
The outer integral again, in 1-d display which might be useful for copying into another program:
(%i14) grind(foo);
(%pi^(1/(2*alpha)+1/2)*2^(1/(2*alpha))*sigma^(1/(2*alpha))
*sigmaw^(1/(2*alpha))
*sqrt(2*alpha*sigmaw+2*sigma))
/(alpha*sigmaw+sigma)^(1/(2*alpha))$
(%o14) done
I pretty strongly recommend trying to get a symbolic result if possible; numerical integration is often tricky. In the example given, if it turned out that you could only do the inner integral but not the outer one, that would still be a pretty big win: you could plug the symbolic solution for the inner integral into a numerical approximation for the outer one.
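Back in Python, a sketch of that hybrid strategy with SymPy and SciPy (this assumes SymPy manages the inner Gaussian integral for concrete positive parameter values; the numbers mirror the Maxima session above):
import numpy as np
import sympy as sym
from scipy.integrate import quad

w, x = sym.symbols('w x', real=True)
alpha, sigma, sigmaw = 3, sym.Rational(3, 7), sym.Rational(7, 4)

# symbolic inner integral over w
inner = sym.exp(-w**2/(2*sigmaw) - alpha*(x - w)**2/(2*sigma))
inner_integral = sym.integrate(inner, (w, -sym.oo, sym.oo))

# compile the outer integrand to a numeric function and integrate numerically
outer = sym.lambdify(x, inner_integral**sym.Rational(1, alpha), 'numpy')
value, abserr = quad(outer, -np.inf, np.inf)
print(value)  # should be close to the 5.7904... computed above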
This doesn't answer your question directly, but it will surely help, alongside the other tools already pointed out.
For the integration at hand, you don't really need symbolic integration. Numerical integration is simply summing over a defined finite grid: integrating over w is summing along the w axis, and the same goes for x.
The main problem is choosing the integration grid, since it cannot be infinite. For Gaussians I'd extend it to at least 10 times their sigma to keep the error low; as for the grid spacing, make it as small as you can afford to wait for.
The code below is equivalent to the above integration. Make sure you don't increase the grid steps before estimating how much memory it will need, or else your PC will hang.
import numpy as np
# define constants
sigmaw = 0.1
sigma = 0.1
alpha = 0.2
# define grid
max_w = 2
min_w = -max_w
min_x = -3
max_x = -min_x
steps_w = 2000 # don't increase this too much or you'll run out of memory
steps_x = 1000 # don't increase this too much or you'll run out of memory
dw = (max_w - min_w) / steps_w
dx = (max_x - min_x) / steps_x
x_vec = np.linspace(min_x, max_x, steps_x)
w_vec = np.linspace(min_w, max_w, steps_w)
x, w = np.meshgrid(x_vec, w_vec, sparse=True)
# do integration
inner_term = np.exp(-(w ** 2) / (2 * sigmaw) - alpha * ((x - w) ** 2) / (2 * sigma))
inner_integral = np.sum(inner_term, axis=0) * dw
del inner_term # to free some memory
inner_integral_powered = inner_integral ** (1 / alpha)
del inner_integral # to free some memory
outer_integral = np.sum(inner_integral_powered) * dx
print(outer_integral)
Numerical integration works by sampling the integrand at some values of the argument. In particular, the Newton-Cotes formulas sample uniformly, while different flavors of Gaussian integration sample irregularly.
So in your case, the integrator will require an evaluation of the inner integral for various values of x to integrate on x, implying each time a numerical integration on w with known x.
Note that as your domain is unbounded, you will have to use a change of variable to make it finite.
If the inner integral has an analytical expression, you can of course use it and integrate numerically on x.
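A sketch of that nested scheme using SciPy's quad, which performs such a change of variable internally when given infinite limits (the constants are the same illustrative ones as in the grid example above):
import numpy as np
from scipy.integrate import quad

sigmaw, sigma, alpha = 0.1, 0.1, 0.2

def inner(x):
    # inner numerical integration over w, for a fixed x
    val, _ = quad(lambda w: np.exp(-w**2/(2*sigmaw)
                                   - alpha*(x - w)**2/(2*sigma)),
                  -np.inf, np.inf)
    return val

# each evaluation of the outer integrand triggers a full inner quadrature
outer, _ = quad(lambda x: inner(x)**(1/alpha), -np.inf, np.inf)
print(outer)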

How to create the equivalent of Excel Solver valueof function?

I have the following equation: x/0.2 * (0.2+1) + y/0.1 * (0.1+1) = 26.34
The initial values of X and Y are set as 4.085 and 0.17 respectively.
I need to find the values of X and Y which satisfy the equation and have the lowest common deviation from initially set values. In other words, sum of |4.085 - x| and |0.17 - y| is minimized.
With Excel Solver Valueof Function this easy to find:
we insert x and y as variables to be changed to reach 26 in the formula result
Here is my Python code (I am trying to use SymPy for this):
from sympy import symbols, Eq, solve
x, y = symbols('x y')
eqn = solve([Eq(x/0.2*(0.2+1) + y/0.1*(0.1+1), 26)], x, y)
print(eqn)
However, I am getting a strange result: {x: 4.33333333333333 - 1.83333333333333*y}
Can anyone help me solve this equation?
The answer you are obtaining is not strange; it is just the answer to what you asked. You have one equation in two variables x and y, so the solution is in general not unique (sometimes there are infinitely many). Now, you can either add an extra condition (an inequality, for example) or change the numeric domain in which solutions are sought (as in Diophantine equations). You can do either in SymPy. In the following example I find the solution for x in the Real domain, using solveset:
from sympy import symbols, Eq, solveset
x,y = symbols('x y')
eqn = solveset(Eq(1.2 * x / 0.2 + 1.1 * y / 0.1, 26), x, Reals)
print(eqn)
Output:
Intersection(FiniteSet(4.33333333333333 - 1.83333333333333*y), Reals)
As you can see, the solution for x is a finite set: the intersection between a straight line in y and the Reals. Any particular solution can be found by direct evaluation at a value of y.
This is equivalent to saying x = 4.33333333333333 - 1.83333333333333 * y; if you evaluate this at the guess value y = 0.17, you obtain x = 4.0216 (close to your x = 4.085 guess value).
Edit:
After analyzing the new information added to your question, I think I have finally understood it: your problem is a constrained optimization. Now, I don't use Excel frequently, but it would be my bet that under the hood this optimization is carried out there using Lagrange multipliers. In your particular case, the target function represents the deviation of the solution (x, y) from the point (4.085, 0.17). For convenience, I have chosen this function to be the Euclidean distance between them (absolute values as you suggested can be problematic due to discontinuity of the derivatives). The constraint function is simply the equation you provided. To solve this problem with Sympy, one could use something like this:
import sympy as sp
# Define symbols and functions
x, y, lamb = sp.symbols('x, y, lamb', real=True)
func = sp.sqrt((x - 4.085) ** 2 + (y - 0.17) ** 2) # Target function
const = 1.2 * x / 0.2 + 1.1 * y / 0.1 - 26 # Constraint function
# Define Lagrangian
lagrang = func - lamb * const
# Compute gradient of Lagrangian
grad_lagrang = [sp.diff(lagrang, var) for var in [x, y, lamb]]
# Solve the resulting system of equations
spoints = sp.solve(grad_lagrang, [x, y, lamb], dict=True)
# Print stationary points
print(spoints)
Output:
[{x: 4.07047770700637, lamb: -0.0798086884467563, y: 0.143375796178345}]
Since in our case only one stationary point was found, this is the optimal solution (although this is only a necessary condition). The value of the lamb multiplier can be ditched, so x, y = 4.070, 0.1434. Hope this helps.
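If you prefer not to form the Lagrangian by hand, a numerical alternative (a sketch, using the same target and constraint) is scipy.optimize.minimize, which selects the SLSQP method when equality constraints are supplied:
import numpy as np
from scipy.optimize import minimize

# Euclidean distance from the initial point (4.085, 0.17)
target = lambda p: np.hypot(p[0] - 4.085, p[1] - 0.17)
cons = [{'type': 'eq',
         'fun': lambda p: 1.2*p[0]/0.2 + 1.1*p[1]/0.1 - 26}]
res = minimize(target, x0=[4.085, 0.17], constraints=cons)
print(res.x)  # should land near (4.070, 0.143), matching the SymPy result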

Can the CP solver be initialised at a specific point?

I am using the CP-Sat solver to optimise a timetable I am making. However, this now takes a long time to solve. Is it possible to seed the solver with an old result, to act as a starting point, with the goal of reducing the time required to find the optimal result?
Take a look at this solution hinting example:
https://github.com/google/or-tools/blob/stable/ortools/sat/docs/model.md#solution-hinting
from ortools.sat.python import cp_model

model = cp_model.CpModel()  # import and model creation added for completeness
num_vals = 3
x = model.NewIntVar(0, num_vals - 1, 'x')
y = model.NewIntVar(0, num_vals - 1, 'y')
z = model.NewIntVar(0, num_vals - 1, 'z')
model.Add(x != y)
model.Maximize(x + 2 * y + 3 * z)
# Solution hinting: x <- 1, y <- 2
model.AddHint(x, 1)
model.AddHint(y, 2)
Edit: you should also try to
Reduce the number of variables.
Reduce the domains of the integer variables.
Run the solver with multiple threads using solver.parameters.num_search_workers = 8 (see the sketch after this list).
Prefer boolean over integer variables/constraints.
Add redundant constraints and/or symmetry-breaking constraints.
Decompose your problem and merge the results.
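A self-contained sketch combining the docs' hinting example with the multi-threading tip (this rebuilds the toy model from above, not the OP's timetable):
from ortools.sat.python import cp_model

model = cp_model.CpModel()
num_vals = 3
x = model.NewIntVar(0, num_vals - 1, 'x')
y = model.NewIntVar(0, num_vals - 1, 'y')
z = model.NewIntVar(0, num_vals - 1, 'z')
model.Add(x != y)
model.Maximize(x + 2 * y + 3 * z)
model.AddHint(x, 1)  # seed the search with a previously found solution
model.AddHint(y, 2)

solver = cp_model.CpSolver()
solver.parameters.num_search_workers = 8  # multi-threaded search
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print(solver.Value(x), solver.Value(y), solver.Value(z))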

ODEs with infinite initial condition in Python

I have a second-order differential equation that I want to solve in Python. The problem is that for one of the variables I don't have the initial condition at 0, only its value at infinity. Can anyone tell me what parameters I should provide for scipy.integrate.odeint? Can it be solved?
Equation (the image from the original post, written here in the notation of the answer below):
I theta'' + g theta' + k theta = f(theta, t)
Theta needs to be found as a function of time. Its first derivative is equal to zero at t = 0. Theta is not known at t = 0, but it goes to zero at sufficiently large times. All the rest is known. As an approximation, I can be set to zero, thus removing the second-order derivative, which should make the problem easier.
This is far from being a full answer, but is posted here on the OP's request.
The method I described in the comment is what is known as a shooting method, which allows converting a boundary value problem into an initial value problem. For convenience, I am going to rename your function theta as y. To solve your equation numerically, you would first turn it into a first-order system, using two auxiliary functions, z1 = y and z2 = y', so your current equation
I y'' + g y' + k y = f(y, t)
would be rewritten as the system
z1' = z2
z2' = f(z1, t) - g z2 - k z1
and your boundary conditions are
z1(inf) = 0
z2(0) = 0
So first we set up the function to compute the derivative of your new vectorial function:
def deriv(z, t):
    return np.array([z[1],
                     f(z[0], t) - g * z[1] - k * z[0]])
If we had a condition z1(0) = a, we could solve this numerically between t = 0 and t = 1000 and get the value of y at the final time with something like
def y_at_inf(a):
    return scipy.integrate.odeint(deriv, np.array([a, 0]),
                                  np.linspace(0, 1000, 10000))[-1, 0]
So now all we need to know is what value of a makes y = 0 at t = 1000, our poor man's infinity, with
a = scipy.optimize.root(y_at_inf, [1]).x[0]
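Putting the pieces together, here is a runnable sketch with made-up coefficients (g = 0, k = -2) and a made-up forcing f(y, t) = exp(-t), chosen deliberately so that generic trajectories blow up and the condition at infinity actually selects one; for these values the exact answer is y(0) = 1/sqrt(2) - 1, about -0.293:
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import root

g, k = 0.0, -2.0          # made-up coefficients, purely for illustration
def f(y, t):
    return np.exp(-t)     # made-up forcing term

def deriv(z, t):
    return np.array([z[1], f(z[0], t) - g * z[1] - k * z[0]])

def y_at_inf(a):
    # y(0) = a, y'(0) = 0; return y at t = 20, far enough out here
    ts = np.linspace(0, 20, 2001)
    return odeint(deriv, np.array([float(a), 0.0]), ts)[-1, 0]

a = root(lambda v: y_at_inf(v[0]), [1.0]).x[0]
print(a)  # about -0.293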

numpy.poly1d , root-finding optimization, shifting polynom on x-axis

It is commonly an easy task to build an n-th order polynomial
and find its roots with numpy:
import numpy
f = numpy.poly1d([1, 2, 3])
print(numpy.roots(f))
# array([-1.+1.41421356j, -1.-1.41421356j])
However, suppose you want a polynomial of the type:
f(x) = a*(x-x0)**0 + b*(x-x0)**1 + ... + n*(x-x0)**n
Is there a simple way to construct a numpy.poly1d-type function
and find the roots? I've tried scipy.optimize.fsolve, but it is very unstable, as it depends highly on the choice of starting values
in my particular case.
EDIT: Changed "polygon"(wrong) to "polynomial"(correct)
First of all, surely you mean polynomial, not polygon?
In terms of providing an answer, are you using the same value of "x0" in all the terms? If so, let y = x - x0, solve for y and get x using x = y + x0.
You could even wrap it in a lambda function if you want. Say, you want to represent
f(x) = 1 + 3(x-1) + (x-1)**2
Then,
>>> g = numpy.poly1d([1,3,1])
>>> f = lambda x:g(x-1)
>>> f(0.0)
-1.0
The roots of f are given by:
>>> f_roots = numpy.roots(g) + 1
In case x0 are different by power, such as:
f(x) = 3*(x-0)**0 + 2*(x-2)**1 + 3*(x-1)**2 + 2*(x-2)**3
You can use polynomial operation to calculate the finally expanded polynomial:
import numpy as np
import operator
from functools import reduce  # needed on Python 3

ks = [3, 2, 3, 2]
offsets = [0, 2, 1, 2]
p = reduce(operator.add, [np.poly1d([1, -x0])**i * c
                          for i, (c, x0) in enumerate(zip(ks, offsets))])
print(p)
The result is:
2*x**3 - 9*x**2 + 20*x - 14
and np.roots(p) will then find its roots, answering the original question.
