Python: Find roots of a 2D polynomial

I have a 2D numpy array C which contains the coefficients of a 2d polynomial, such that the polynomial is given by the sum over all coefficients:
c[i,j]*x^i*y^j
How can I find the roots of this 2d polynomial?
It seems that numpy.roots only works for 1d polynomials.

This is a polynomial in two variables. In general it has infinitely many roots (think of all the pairs (x, y) satisfying xy = 0), so an algorithm that gives you all the roots cannot exist.
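For example (a sketch with a made-up coefficient array), you can only pin roots down after fixing one variable; each choice of y then yields its own set of roots in x, which numpy.roots can find:
import numpy as np
C = np.array([[0.0, 1.0],    # c[i, j] are the coefficients of x**i * y**j,
              [1.0, 0.0]])   # so this made-up array encodes the polynomial x + y
y0 = 2.0                     # fix y at an arbitrary value
p = C @ (y0 ** np.arange(C.shape[1]))  # p[i] = sum_j c[i, j] * y0**j
print(np.roots(p[::-1]))     # np.roots expects the highest power first -> [-2.]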

It is correct, as Jussi Nurminen pointed out, that a multivariate polynomial does not have a finite number of roots in the general case; however, it is still possible to find expressions that describe all the (infinitely many) roots of a multivariate polynomial.
One solution would be to use sympy:
import numpy as np
import sympy as sp

np.random.seed(1234)

deg = (2, 3)  # shape of the coefficient array (one entry per variable)
x = sp.symbols(f'x:{len(deg)}')  # free variables x0, x1
c = np.random.uniform(-1, 1, deg)  # coefficients of the polynomial

# build sum_{i,j} c[i, j] * x0**i * x1**j as a sympy expression
eq = (np.power(np.array(x), np.indices(deg).transpose((*range(1, len(deg) + 1), 0))).prod(-1) * c).sum()
sol = sp.solve(eq, x, dict=True)

for i, s in enumerate(sol):
    print(f'solution {i}:')
    for k, v in s.items():
        print(f'  {k} = {sp.simplify(v).evalf(3)}')
# solution 0:
# x0 = (1.25e+14*x1**2 - 2.44e+14*x1 + 6.17e+14)/(-4.55e+14*x1**2 + 5.6e+14*x1 + 5.71e+14)
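As a quick sanity check (a sketch, not part of the original answer), substitute a numeric value for x1 into the returned expression for x0 and confirm that the polynomial evaluates to approximately zero:
x1_val = 0.5                                  # arbitrary test value
x0_val = sol[0][x[0]].subs(x[1], x1_val)      # x0 expressed through x1, evaluated at x1_val
print(eq.subs({x[0]: x0_val, x[1]: x1_val}).evalf())  # ~0 up to rounding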

Related

Given A = S*D*S.T (D a diagonal matrix, S an arbitrary n×n matrix), shouldn't the eigenvalues of A correspond to the diagonal entries of D?

I was asked to write a function that generates a random symmetric positive definite 2D matrix.
Here is my attempt:
import numpy as np
from numpy import linalg as la

def random_spd(n):
    """Generates a random n x n SPD matrix (symmetric positive definite)."""
    while True:
        S = np.random.rand(n, n)
        if la.matrix_rank(S) == n:  # make sure that S has full rank
            break
    D = np.diag(np.random.randint(0, 10, size=n))
    print(f"S:\n{S}\n\nD:\n{D}\n")  # only for debugging
    return S @ D @ S.T

A = random_spd(2)
print(f"A:\n{A}\n")
ei_vals, ei_vecs = la.eig(A)
print(f"Eigenvalues:\n{ei_vals}\n\nEigenvectors:\n{ei_vecs}")
Output:
D:
[[6 0]
[0 5]]
A:
[[1.97478191 1.71620628]
[1.71620628 2.37372465]]
Eigenvalues:
[0.4464938 3.90201276]
Eigenvectors:
[[-0.74681018 -0.66503726]
[ 0.66503726 -0.74681018]]
As far as I know, the function works.
Now, if I try to calculate the eigenvalues of a randomly generated matrix, shouldn't they be the same as
the diagonal entries of the matrix D?
Can someone help me understand my misconception or mistake?
Thank you very much!
Best regards, Max :)
What you are applying is a congruence transform; it preserves definiteness.
A positive definite matrix P is one for which, for every non-null vector x of shape (N, 1), x.T @ P @ x > 0.
Now if you substitute x = S @ y in the above condition, you get y.T @ S.T @ P @ S @ y > 0; comparing the two, you conclude that S.T @ P @ S is positive definite as well (positive semidefinite if S is not full-rank).
Similarly, the eigenvalues are defined by the equation
A @ v = lambda * v
If you substitute v = S @ u, the equation you get is
A @ S @ u = lambda * S @ u
To put this in the same form as the eigenvalue equation, left-multiply it by inv(S):
(inv(S) @ A @ S) @ u = lambda * u
The matrix inv(S) @ A @ S obtained this way is said to be similar to A, and this is called a similarity transformation. It is similarity, not congruence, that preserves eigenvalues, which is why the entries of D do not show up as eigenvalues of S @ D @ S.T.
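To make the distinction concrete, here is a minimal sketch (not from the original answer): a congruence transform of D changes the eigenvalues, while a similarity transform of D preserves them.
import numpy as np
np.random.seed(0)
n = 3
S = np.random.rand(n, n)
D = np.diag(np.random.randint(1, 10, size=n))
congruent = S @ D @ S.T                # congruence transform of D (what random_spd builds)
similar = S @ D @ np.linalg.inv(S)     # similarity transform of D
print(np.diag(D))
print(np.sort(np.linalg.eigvals(congruent)))  # generally different from diag(D)
print(np.sort(np.linalg.eigvals(similar)))    # diag(D) again, up to rounding and order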
There are simpler ways to create a positive definite matrix. One simple way:
S = np.random.rand(n, n)
A = S.T @ S + eps * np.eye(n)
S.T @ S can be seen as a congruence transform of the identity matrix and is thus positive semidefinite; adding eps * np.eye(n) ensures that all eigenvalues are at least eps. No matrix inversions, no eigendecomposition.
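A small sketch of that construction (n and eps are arbitrary here):
import numpy as np
np.random.seed(0)
n, eps = 4, 1e-3
S = np.random.rand(n, n)
A = S.T @ S + eps * np.eye(n)        # symmetric positive definite by construction
print(np.allclose(A, A.T))           # True: A is symmetric
print(np.linalg.eigvalsh(A).min())   # smallest eigenvalue is eps or larger (up to rounding)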

Normalized Cross-Correlation in Python

I have been struggling for the last few days trying to compute the degrees of freedom of a pair of vectors (x and y) following the reference of Chelton (1983):
[equation image: degrees of freedom according to Chelton (1983)]
I can't find a proper way to calculate the normalized cross-correlation function using np.correlate; I always get an output that isn't between -1 and 1.
Is there any easy way to get the cross-correlation function normalized so that I can compute the degrees of freedom of the two vectors?
Nice question. There is no direct way, but you can "normalize" the input vectors before using np.correlate like this, and reasonable values will be returned within the range [-1, 1]:
Here I define the correlation as it is generally defined in signal processing textbooks:
c'_{ab}[k] = sum_n a[n] * conj(b[n + k])
CODE: If a and b are the vectors:
a = (a - np.mean(a)) / (np.std(a) * len(a))
b = (b - np.mean(b)) / (np.std(b))
c = np.correlate(a, b, 'full')
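For example, with made-up signals (a quick sketch), the resulting values stay within [-1, 1]:
import numpy as np
rng = np.random.default_rng(0)
a = rng.standard_normal(500)
b = rng.standard_normal(500)
a = (a - np.mean(a)) / (np.std(a) * len(a))
b = (b - np.mean(b)) / np.std(b)
c = np.correlate(a, b, 'full')
print(c.min(), c.max())   # both within [-1, 1]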
References:
https://docs.scipy.org/doc/numpy/reference/generated/numpy.correlate.html
https://en.wikipedia.org/wiki/Cross-correlation
The MATLAB equivalent is xcorr(a, b, 'normalized'). Here is a MATLAB-style normalized cross-correlation implementation in Python:
import numpy as np

a = np.array([1, 2, 3, 4], dtype=float)
b = np.array([2, 4, 6, 8], dtype=float)

a = a / np.linalg.norm(a)   # scale each vector to unit L2 norm
b = b / np.linalg.norm(b)

c = np.correlate(a, b, mode='full')
If you are interested in the normalized correlation when the sequences are aligned (rather than the full correlation function over all time offsets), the function numpy.corrcoef does this directly: it computes the covariance of x and y and normalizes it by the product of the standard deviations of x and y.
https://numpy.org/doc/stable/reference/generated/numpy.corrcoef.html#numpy.corrcoef
This is the Pearson correlation coefficient and has a range of +/-1.
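A minimal sketch (the values here are made up):
import numpy as np
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 8.0])
r = np.corrcoef(x, y)[0, 1]   # Pearson correlation coefficient, in [-1, 1]
print(r)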
This is my idea, but it will normalize it to the 0-1 range:
a = np.dot(abs(var1), abs(var2))
b = np.correlate(var1, var2, 'full')
c = b / a

need to improve accuracy in fsolve to find multiple roots

I'm using this code to get the zeros of a nonlinear function.
Most certainly, the function should have 1 or 3 zeros
import numpy as np
import matplotlib.pylab as plt
from scipy.optimize import fsolve

[a, b, c] = [5, 10, 0]

def func(x):
    return -(x + a) + b / (1 + np.exp(-(x + c)))

x = np.linspace(-10, 10, 1000)
print(fsolve(func, [-10, 0, 10]))
plt.plot(x, func(x))
plt.show()
In this case the code gives the 3 expected roots without any problem.
But with c = -1.5 the code misses a root, and with c = -3 it finds a non-existent root.
I want to calculate the roots for many different parameter combinations, so changing the initial guesses manually is not a practical solution.
I appreciate any solution, trick or advice.
What you need is an automatic way to obtain good initial estimates of the roots of the function. This is in general a difficult task, however, for univariate, continuous functions, it is rather simple. The idea is to note that (a) this class of functions can be approximated to an arbitrary precision by a polynomial of appropriately large order, and (b) there are efficient algorithms for finding (all) the roots of a polynomial. Fortunately, Numpy provides functions for both performing polynomial approximation and finding polynomial roots.
Let's consider a specific function
[a, b, c] = [5, 10, -1.5]

def func(x):
    return -(x + a) + b / (1 + np.exp(-(x + c)))
The following code uses polyfit and poly1d to approximate func over the range of interest (-10<x<10) by a polynomial function f_poly of order 10.
x_range = np.linspace(-10,10,100)
y_range = func(x_range)
pfit = np.polyfit(x_range,y_range,10)
f_poly = np.poly1d(pfit)
As the following plot shows, f_poly is indeed a good approximation of func. Even greater accuracy can be obtained by increasing the order. However, there is no point in pursuing extreme accuracy in the polynomial approximation, since we are looking for approximate estimates of the roots that will later be refined by fsolve.
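A short sketch to reproduce that comparison plot (matplotlib assumed; not part of the original answer):
import matplotlib.pyplot as plt
plt.plot(x_range, y_range, label='func')
plt.plot(x_range, f_poly(x_range), '--', label='10th-order polynomial fit')
plt.legend()
plt.show()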
The roots of the polynomial approximation can be simply obtained as
roots = np.roots(pfit)
roots
array([-10.4551+1.4893j, -10.4551-1.4893j, 11.0027+0.j ,
8.6679+2.482j , 8.6679-2.482j , -5.7568+3.2928j,
-5.7568-3.2928j, -4.9269+0.j , 4.7486+0.j , 2.9158+0.j ])
As expected, Numpy returns 10 complex roots. However, we are only interested in real roots within the interval [-10, 10]. These can be extracted as follows:
x0 = roots[np.where(np.logical_and(np.logical_and(roots.imag==0, roots.real>-10), roots.real<10))].real
x0
array([-4.9269, 4.7486, 2.9158])
Array x0 can serve as the initialization for fsolve:
fsolve(func, x0)
array([-4.9848, 4.5462, 2.7192])
Remark: The pychebfun package provides a function that directly gives all the roots of a function within an interval. It is also based on the idea of polynomial approximation; however, it uses a more sophisticated (yet more efficient) approach. It automatically chooses the polynomial order of the approximation (no user input needed), and the resulting polynomial roots are practically equal to the true ones (no need to refine them via fsolve).
This simple code gives the same roots as those found by fsolve:
import pychebfun
f_cheb = pychebfun.Chebfun.from_function(func, domain = (-10,10))
f_cheb.roots()
Between two stationary points (i.e., points where df/dx = 0), the function is monotonic, so each such interval contains one root or none. In your case it is possible to calculate the two stationary points analytically:
[-c + log(1/(b - sqrt(b*(b - 4)) - 2)) + log(2),
-c + log(1/(b + sqrt(b*(b - 4)) - 2)) + log(2)]
So you have three intervals in which to look for a zero. Using SymPy saves you from doing the calculations by hand; its sy.nsolve() can robustly find a zero within an interval:
import sympy as sy
a, b, c, x = sy.symbols("a, b, c, x", real=True)
# The function:
f = -(x+a) + b / (1 + sy.exp(-(x + c)))
df = f.diff(x) # calculate f' = df/dx
xxs = sy.solve(df, x) # Solving for f' = 0 gives two solutions
# numerical values:
pp = {a: 5, b: 10, c: .5} # values for a, b, c
fpp = f.subs(pp)
xxs_pp = [xpr.subs(pp).evalf() for xpr in xxs] # numerical stationary points
xxs_pp.sort() # in ascending order
# resulting intervals:
xx_low = [-1e9, xxs_pp[0], xxs_pp[1]]
xx_hig = [xxs_pp[0], xxs_pp[1], 1e9]
# calculate roots for each interval:
xx0 = []
for xl_, xh_ in zip(xx_low, xx_hig):
    try:
        x0 = sy.nsolve(fpp, (xl_, xh_), solver="bisect")  # calculate zero in this interval
    except ValueError:  # no solution found
        continue
    xx0.append(x0)
print("The zeros are:")
print(xx0)
sy.plot(fpp)  # plot function

Using Python to Find Integer Solutions to System of Linear Equations

I am trying to write a python program to solve for all the positive integer solutions of a polynomial of the form x_1 + 2x_2 + ... + Tx_T = a subject to x_0 + x_1 + ... + x_T = b.
For example, by brute-force I can show that the only positive integer solution to x_1 + 2x_2 = 4 subject to x_0 + x_1 + x_2 = 2 is (0,0,2). However, I would like a program to do this on its own.
Any suggestions?
You can use NumPy's linear algebra module to compute the least-squares solution of a linear matrix equation. In your case, you can build the following arrays:
import numpy as np

# range(T): coefficients of the first equation
# np.ones(T): only ones as the coefficients of the second equation
A = np.array([range(T), np.ones(T)])  # coefficient matrix
B = np.array([a, b])                  # ordinate or "dependent variable" values
And then find the solution as follows:
x = np.linalg.lstsq(A, B, rcond=None)[0]
Finally, you can wrap this in a solve function taking T, a, and b:
import numpy as np

def solve(T, a, b):
    A = np.array([range(T), np.ones(T)])
    B = np.array([a, b])
    return np.linalg.lstsq(A, B, rcond=None)[0]
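For example (a quick sketch), note that lstsq returns a real-valued least-squares solution, which in general will not be integer-valued:
print(solve(3, 4, 2))   # minimum-norm real solution of the 2x3 system, generally not integers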
Integer solutions:
If you want only integer solutions, then you are looking at a system of linear Diophantine equations.
Every system of linear Diophantine equations may be written AX = C, where A is an m×n matrix of integers, X is an n×1 column matrix of unknowns, and C is an m×1 column matrix of integers.
Every such system can be solved by computing the Smith normal form of its matrix.
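If a and b are small, a brute-force enumeration is also straightforward (a sketch, not taken from the answers above; it lists all nonnegative integer solutions):
from itertools import product

def integer_solutions(T, a, b):
    """All nonnegative integer (x_0, ..., x_T) with sum(i * x_i) == a and sum(x_i) == b."""
    return [xs for xs in product(range(b + 1), repeat=T + 1)
            if sum(xs) == b and sum(i * xi for i, xi in enumerate(xs)) == a]

print(integer_solutions(2, 4, 2))   # [(0, 0, 2)], as in the example above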

scipy.odeint returning incorrect values for second order non-linear differential equation

I have been trying to solve the second order non-linear differential equation for Newton's Law of Universal Gravitation (inverse square law):
x''(t) = -GM/(x**2)
for the motion of a satellite approaching the earth (in this case a point mass) in one dimension,
using scipy.integrate.odeint with a system of first-order differential equations, but it has been yielding incorrect results when compared to Mathematica or to simplified forms of the law (∆x = (1/2)at^2).
This is the code for the program:
import numpy as np
from scipy.integrate import odeint

def deriv(x, t):  # x[0] is position x, x[1] is x' (velocity); x2 is x'' (acceleration)
    x2 = -mu / (x[0]**2)
    return x[1], x2

init = 6371000, 0              # initial values for x and x'
a_t = np.linspace(0, 20, 100)  # time scale
mu = 398600000000000           # standard gravitational parameter GM of the earth

x, _ = odeint(deriv, init, a_t).T
sol = np.column_stack([a_t, x])
which yields an array pairing each time in a_t with the corresponding position x as the satellite approaches the earth from an initial distance of 6371000 m (the average radius of the earth). One would expect, for instance, the object to take approximately 10 seconds to fall 1000 m at the surface, from 6371000 m to 6370000 m (because the solution to 1000 = 1/2(9.8)(t^2) is almost exactly 10), and the Mathematica solution to the differential equation puts the value slightly above 10 s as well.
Yet according to the odeint solution and the sol array, that value is nearly 14.4.
t x
[ 1.41414141e+01, 6.37001801e+06],
[ 1.43434343e+01, 6.36998975e+06],
Is there significant error in the odeint solution, or is there a major problem in my function/odeint usage? Thanks!
(because the solution to 1000 = 1/2(9.8)(t^2) is almost exactly 10),
This is the right sanity check, but something's off with your arithmetic. Using this approximation, we get a t of
>>> (1000 / (1/2 * 9.8))**0.5
14.285714285714285
as opposed to a t of ~10, which would give us only a distance of
>>> 1/2 * 9.8 * 10**2
490.00000000000006
This expectation of ~14.29 is very close to the result you observe:
>>> sol[abs((sol[:,1] - sol[0,1]) - -1000).argmin()]
array([ 1.42705427e+01, 6.37000001e+06])
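Equivalently (a small check, not part of the original answer), using the surface gravity implied by the chosen mu instead of 9.8 gives essentially the same number:
mu, r0 = 398600000000000, 6371000
g = mu / r0**2               # local gravitational acceleration, roughly 9.82 m/s^2
t = (2 * 1000 / g) ** 0.5    # constant-acceleration time to fall 1000 m, roughly 14.27 s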
