Using Python to Find Integer Solutions to a System of Linear Equations

I am trying to write a Python program to find all the non-negative integer solutions of an equation of the form x_1 + 2x_2 + ... + Tx_T = a subject to x_0 + x_1 + ... + x_T = b.
For example, by brute force I can show that the only non-negative integer solution to x_1 + 2x_2 = 4 subject to x_0 + x_1 + x_2 = 2 is (0,0,2). However, I would like a program to do this on its own.
Any suggestions?

You can use NumPy's linear algebra module to compute the least-squares solution to a linear matrix equation. In your case, you can consider the following arrays:
import numpy as np
# range(T + 1): the coefficients 0, 1, ..., T of the first equation
# np.ones(T + 1): only 'ones' as the coefficients of the second equation
A = np.array([range(T + 1), np.ones(T + 1)])  # coefficient matrix
B = np.array([a, b])  # ordinate or "dependent variable" values
And then find the solution as follows:
x = np.linalg.lstsq(A, B, rcond=None)[0]
Finally, you can wrap this in a solve function taking the variables T, a, and b:
import numpy as np

def solve(T, a, b):
    A = np.array([range(T + 1), np.ones(T + 1)])
    B = np.array([a, b])
    return np.linalg.lstsq(A, B, rcond=None)[0]
Integer solutions:
If you want only integer solutions, then you are looking at a system of linear Diophantine equations.
Every system of linear Diophantine equations may be written
AX = C, where A is an m×n matrix of integers, X is an n×1 column matrix of unknowns and C is an m×1 column matrix of integers.
Every such system may be solved by computing the Smith normal form of its matrix.
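For the small instances in the question, a brute-force search is the simplest way to get all solutions, and it needs no linear-algebra machinery at all. A minimal sketch (the function name integer_solutions is mine; it treats "positive" as non-negative, matching the example solution (0,0,2)):
from itertools import product

def integer_solutions(T, a, b):
    """All non-negative integer tuples (x_0, ..., x_T) with
    0*x_0 + 1*x_1 + ... + T*x_T == a and x_0 + ... + x_T == b."""
    sols = []
    # Each x_i is at most b, because the x_i are non-negative and sum to b.
    for xs in product(range(b + 1), repeat=T + 1):
        if sum(xs) == b and sum(i * x for i, x in enumerate(xs)) == a:
            sols.append(xs)
    return sols

print(integer_solutions(2, 4, 2))  # [(0, 0, 2)]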

Related

Python: Find roots of 2d polynomial

I have a 2D numpy array C which contains the coefficients of a 2d polynomial, such that the polynomial is given by the sum over all coefficients:
C[i,j] * x^i * y^j
How can I find the roots of this 2d polynomial?
It seems that numpy.roots only works for 1d polynomials.
This is a polynomial in two variables. In general there will be infinitely many roots (think about all the values of x and y that will yield xy=0), so an algorithm that gives you all the roots cannot exist.
As Jussi Nurminen pointed out, a multivariate polynomial does not have a finite number of roots in the general case; however, it is still possible to find expressions that yield all the (infinitely many) roots of a multivariate polynomial.
One solution would be to use sympy:
import numpy as np
import sympy as sp

np.random.seed(1234)
deg = (2, 3)  # degrees of polynomial
x = sp.symbols(f'x:{len(deg)}')  # free variables
c = np.random.uniform(-1, 1, deg)  # coefficients of polynomial
# build sum_{i,j} c[i,j] * x0**i * x1**j from the coefficient grid
eq = (np.power(np.array(x), np.indices(deg).transpose((*range(1, len(deg) + 1), 0))).prod(-1) * c).sum()
sol = sp.solve(eq, x, dict=True)
for i, s in enumerate(sol):
    print(f'solution {i}:')
    for k, v in s.items():
        print(f'  {k} = {sp.simplify(v).evalf(3)}')
# solution 0:
# x0 = (1.25e+14*x1**2 - 2.44e+14*x1 + 6.17e+14)/(-4.55e+14*x1**2 + 5.6e+14*x1 + 5.71e+14)

How to solve AX = B equation with Python (NumPy, SciPy etc.), where A, X, B are matrices and all elements of X must be non-negative

I need to solve an equation AX = B using Python where A, X, B are matrices and all values of X must be non-negative.
The best solution I've found is
X = np.linalg.lstsq(A, B, rcond=None)[0]
but as a result X contains negative values. Is it possible to get a solution without negative values? Thanks in advance!
In general, this is not mathematically possible. If A is invertible, X = A^(-1) B is the unique solution. If you don't like the elements that X has, you can't simply ask for another solution: there isn't one. You'll have to change A or B to get a different result.
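A toy example makes the point (values chosen purely for illustration):
import numpy as np

A = np.eye(2)              # invertible, so the solution is unique
B = np.array([[-1.0, 0.0],
              [0.0, 1.0]])

X = np.linalg.solve(A, B)  # the one and only X with AX = B
print(X)                   # contains -1: no non-negative X exists here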
You could solve it with cvxpy:
import cvxpy

def solve(A, B):
    """
    Minimizes |AX - B|**2, assuming A and B are
    square matrices for simplicity. If this optimized
    error is zero, this corresponds to solving AX = B.
    """
    n = A.shape[0]
    X = cvxpy.Variable((n, n))
    # Set objective
    obj_fun = cvxpy.sum_squares(A @ X - B)
    objective = cvxpy.Minimize(obj_fun)
    # Set constraints
    constraints = [X >= 0]
    prob = cvxpy.Problem(objective, constraints)
    result = prob.solve(solver="ECOS")
    return X.value
EDIT: I believe the answer by Prune is correct. You can check whether the residual error found by the solver is non-zero by inspecting result.
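If B is a single column, or you are happy to process it column by column, SciPy's non-negative least-squares routine is a lighter-weight alternative to cvxpy. A minimal sketch (the sample matrices are mine, purely for illustration):
import numpy as np
from scipy.optimize import nnls

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 1.0],
              [6.0, 2.0]])

# nnls solves min ||A x - b||_2 subject to x >= 0 for a single column b
X = np.column_stack([nnls(A, B[:, j])[0] for j in range(B.shape[1])])
print(X)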

Solve overdetermined system with QR decomposition in Python

I'm trying to solve an overdetermined system with QR decomposition and linalg.solve but the error I get is
LinAlgError: Last 2 dimensions of the array must be square.
This happens when the R array is not square, right? The code looks like this
import numpy as np
A = np.random.rand(2,3)
b = np.random.rand(2,1)
Q, R = np.linalg.qr(A)
Qb = np.matmul(Q.T,b)
x_qr = np.linalg.solve(R,Qb)
Is there a way to write this in a more efficient way for arbitrary A dimensions? If not, how do I make this code snippet work?
The reason is indeed that the matrix R is not square, which happens whenever A itself is not square. You can try np.linalg.lstsq instead; it finds the solution that minimizes the squared error (and returns the exact solution if one exists).
import numpy as np

A = np.random.rand(2, 3)
b = np.random.rand(2, 1)
x_qr = np.linalg.lstsq(A, b, rcond=None)[0]
Call QR in reduced (economic) mode, mode='reduced', which is the NumPy default. In complete mode the Q and R matrices are returned as M x M and M x N, so if M is greater than N your matrix R will be non-square. In reduced mode they are M x N and N x N, in which case the solve routine works fine.
However, you also have equations/unknowns backwards for an overdetermined system. Your code snippet should be
import numpy as np
A = np.random.rand(3,2)
b = np.random.rand(3,1)
Q, R = np.linalg.qr(A, mode='reduced')
#print(Q.shape, R.shape)
Qb = np.matmul(Q.T,b)
x_qr = np.linalg.solve(R,Qb)
As noted by other contributors, you could also call lstsq directly, but sometimes it is more convenient to have Q and R at hand (e.g. if you are also planning on computing the projection matrix).
As shown in the documentation of numpy.linalg.solve:
Computes the “exact” solution, x, of the well-determined, i.e., full rank, linear matrix equation ax = b.
Your system of equations is underdetermined not overdetermined. Notice that you have 3 variables in it and 2 equations, thus fewer equations than unknowns.
Also notice how it also mentions that in numpy.linalg.solve(a,b), a must be an MxM matrix. The reason behind this is that solving the system of equations Ax=b involves computing the inverse of A, and only square matrices are invertible.
In these cases a common approach is to use the Moore-Penrose pseudoinverse, which computes a best-fit (least-squares) solution of the system. So instead of trying to solve for an exact solution, use numpy.linalg.lstsq:
x_qr = np.linalg.lstsq(R, Qb, rcond=None)[0]
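Since the pseudoinverse came up, you can also apply it directly to the original system. A minimal sketch (for a full-row-rank underdetermined A this returns the minimum-norm exact solution):
import numpy as np

A = np.random.rand(2, 3)  # 2 equations, 3 unknowns: underdetermined
b = np.random.rand(2, 1)

x = np.linalg.pinv(A) @ b     # minimum-norm least-squares solution
print(np.allclose(A @ x, b))  # True when A has full row rank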

need to improve accuracy in fsolve to find multiple roots

I'm using this code to get the zeros of a nonlinear function.
Depending on the parameters, the function should have either 1 or 3 zeros.
import numpy as np
import matplotlib.pylab as plt
from scipy.optimize import fsolve

[a, b, c] = [5, 10, 0]

def func(x):
    return -(x + a) + b / (1 + np.exp(-(x + c)))

x = np.linspace(-10, 10, 1000)
print(fsolve(func, [-10, 0, 10]))
plt.plot(x, func(x))
plt.show()
In this case the code gives the 3 expected roots without any problem.
But with c = -1.5 the code misses a root, and with c = -3 it finds a non-existent root.
I want to calculate the roots for many different parameter combinations, so changing the initial guesses manually is not a practical solution.
I appreciate any solution, trick or advice.
What you need is an automatic way to obtain good initial estimates of the roots of the function. This is in general a difficult task, however, for univariate, continuous functions, it is rather simple. The idea is to note that (a) this class of functions can be approximated to an arbitrary precision by a polynomial of appropriately large order, and (b) there are efficient algorithms for finding (all) the roots of a polynomial. Fortunately, Numpy provides functions for both performing polynomial approximation and finding polynomial roots.
Let's consider a specific function
[a, b, c] = [5, 10, -1.5]

def func(x):
    return -(x + a) + b / (1 + np.exp(-(x + c)))
The following code uses polyfit and poly1d to approximate func over the range of interest (-10<x<10) by a polynomial function f_poly of order 10.
x_range = np.linspace(-10,10,100)
y_range = func(x_range)
pfit = np.polyfit(x_range,y_range,10)
f_poly = np.poly1d(pfit)
Plotting f_poly against func over this range shows that it is indeed a good approximation (plot not reproduced here). Even greater accuracy can be obtained by increasing the order. However, there is no point in pursuing extreme accuracy in the polynomial approximation, since we are looking for approximate estimates of the roots that will later be refined by fsolve.
The roots of the polynomial approximation can be simply obtained as
roots = np.roots(pfit)
roots
array([-10.4551+1.4893j, -10.4551-1.4893j,  11.0027+0.j,
         8.6679+2.482j,    8.6679-2.482j,   -5.7568+3.2928j,
        -5.7568-3.2928j,  -4.9269+0.j,       4.7486+0.j,      2.9158+0.j])
As expected, NumPy returns 10 complex roots. However, we are only interested in the real roots within the interval [-10, 10]. These can be extracted as follows:
x0 = roots[(roots.imag == 0) & (roots.real > -10) & (roots.real < 10)].real
x0
array([-4.9269, 4.7486, 2.9158])
Array x0 can serve as the initialization for fsolve:
fsolve(func, x0)
array([-4.9848, 4.5462, 2.7192])
Remark: The pychebfun package provides a function that directly gives all the roots of a function within an interval. It is also based on polynomial approximation, but uses a more sophisticated (and more efficient) approach. It automatically chooses the polynomial order of the approximation (no user input), and the resulting polynomial roots are practically equal to the true ones (no need to refine them via fsolve).
This simple code gives the same roots as those by fsolve
import pychebfun
f_cheb = pychebfun.Chebfun.from_function(func, domain = (-10,10))
f_cheb.roots()
Between two stationary points (points where df/dx = 0) the function is monotone, so you have one root or none. In your case it is possible to calculate the two stationary points analytically:
[-c + log(1/(b - sqrt(b*(b - 4)) - 2)) + log(2),
-c + log(1/(b + sqrt(b*(b - 4)) - 2)) + log(2)]
So you have three intervals where you need to look for a zero. Using SymPy saves you from doing the calculations by hand. Its sy.nsolve() allows you to robustly find a zero within an interval:
import sympy as sy

a, b, c, x = sy.symbols("a, b, c, x", real=True)
# The function:
f = -(x + a) + b / (1 + sy.exp(-(x + c)))
df = f.diff(x)  # calculate f' = df/dx
xxs = sy.solve(df, x)  # solving for f' = 0 gives two solutions
# numerical values:
pp = {a: 5, b: 10, c: .5}  # values for a, b, c
fpp = f.subs(pp)
xxs_pp = [xpr.subs(pp).evalf() for xpr in xxs]  # numerical stationary points
xxs_pp.sort()  # in ascending order
# resulting intervals:
xx_low = [-1e9, xxs_pp[0], xxs_pp[1]]
xx_hig = [xxs_pp[0], xxs_pp[1], 1e9]
# calculate roots for each interval:
xx0 = []
for xl_, xh_ in zip(xx_low, xx_hig):
    try:
        x0 = sy.nsolve(fpp, (xl_, xh_), solver="bisect")  # find a zero in [xl_, xh_]
    except ValueError:  # no solution found in this interval
        continue
    xx0.append(x0)
print("The zeros are:")
print(xx0)
sy.plot(fpp)  # plot the function

Python solve nonlinear (transcendental) equations

I have an equation a*x + log(x) - b = 0 (a and b are constants), and I want to solve for x. The problem is that I have numerous values of a (and correspondingly numerous values of b). How do I solve this equation using Python?
You could check out something like
http://docs.scipy.org/doc/scipy-0.13.0/reference/optimize.nonlin.html
which has tools specifically designed for these kinds of equations.
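For a single scalar equation like this one, scipy.optimize.root_scalar is a convenient entry point. A minimal sketch (the sample values for a and b are mine):
import numpy as np
from scipy.optimize import root_scalar

a, b = 2.0, 3.0
# f is negative near 0 (the log term dominates) and positive for large x,
# so (1e-10, 10) brackets a sign change
sol = root_scalar(lambda x: a * x + np.log(x) - b,
                  bracket=(1e-10, 10), method='brentq')
print(sol.root)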
Cool - today I learned about Python's numerical solver.
from math import log
from scipy.optimize import brentq

def f(x, a, b):
    return a * x + log(x) - b

for a in range(1, 5):
    for b in range(1, 5):
        result = brentq(lambda x: f(x, a, b), 1e-10, 20)
        print(a, b, result)
brentq estimates where the function crosses the x-axis. You need to give it two points: one where the function is definitely negative and one where it is definitely positive. For the negative point, choose a number smaller than exp(-B), where B is the maximum value of b. For the positive point, choose a number bigger than B.
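A quick sanity check of that bracketing rule for the parameter ranges used above (a sketch):
import numpy as np

B = 4                            # maximum value of b in the loops above
lo, hi = np.exp(-B) / 2, B + 1   # below exp(-B), above B
for a in range(1, 5):
    for b in range(1, 5):
        f = lambda x: a * x + np.log(x) - b
        assert f(lo) < 0 < f(hi)  # a sign change, so brentq will converge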
If you cannot predict the range of b values, you can use a root finder like fsolve instead. This will probably produce a solution, but it is not guaranteed.
from scipy.optimize import fsolve

for a in range(1, 5):
    for b in range(1, 5):
        result = fsolve(f, 1, args=(a, b))
        print(a, b, result)
