I want to solve this system of linear equations in Python:
import numpy as np

x1, x2, x3 = 266, 264, 294
y1, y2, y3 = 240, 270, 227

fract = (x2 - x1)*(y3 - y1) - (y2 - y1)*(x3 - x1)
A = np.matrix([[fract - (y3 - y1)*(x3 - x1) + (y2 - y1)*(x2 - x1), (x3 - x1)**2 - (x2 - x1)**2],
               [(y2 - y1)**2 - (y3 - y1)**2, fract + (y3 - y1)*(x3 - x1) - (y2 - y1)*(x2 - x1)]])
B = np.matrix([[fract + (y3 - y1)*(x3 - x1) - (y2 - y1)*(x2 - x1)],
               [y1*fract + (y2 - y1)*(x1*y2 - y1*x2) + (y3 - y1)*(x3*y1 - y3*x1)]])
A_inverse = np.linalg.inv(A)
X = A_inverse * B
print(X)
LinAlgError: Singular matrix
This is explained simply by printing A:
[[ -510   780]
 [  731 -1118]]
Both cross products, (-510)·(-1118) and 780·731, equal 570180, so the determinant is 0.
As the error message tells you, the matrix is singular, which means there is no unique solution: either no solution or infinitely many, depending on the right-hand side.
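If you still need a numeric result in this situation, np.linalg.lstsq returns a minimum-norm least-squares solution even when A is singular. A minimal sketch (the right-hand side b here is hypothetical, purely for illustration):

import numpy as np

A = np.array([[-510, 780],
              [731, -1118]], dtype=float)
b = np.array([1.0, 2.0])  # hypothetical right-hand side, for illustration only

print(np.linalg.matrix_rank(A))  # 1, confirming A is rank-deficient

# lstsq returns a minimum-norm least-squares solution even for singular A
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(x)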
I have a matrix that comes out looking like this:
a =
[[ 3.14333470e-02 3.11644303e-02 3.03622814e-02 2.90406252e-02
2.72220757e-02 2.49377488e-02 2.22267299e-02 1.91354055e-02
1.57166688e-02 1.20290155e-02 8.13554227e-03 4.10286765e-03
-8.19802426e-09 -4.10288390e-03 -8.13555810e-03 -1.20290306e-02
-1.57166830e-02 -1.91354185e-02 -2.22267415e-02 -2.49377588e-02
-2.72220839e-02 -2.90406315e-02 -3.03622856e-02 -3.11644324e-02
-3.14333470e-02]
[ 0.00000000e+00 2.90117128e-03 5.75270270e-03 8.50580375e-03
1.11133681e-02 1.35307796e-02 1.57166756e-02 1.76336548e-02
1.92489172e-02 2.05348252e-02 2.14693765e-02 2.20365808e-02
2.22267328e-02 2.20365792e-02 2.14693735e-02 2.05348208e-02
1.92489114e-02 1.76336477e-02 1.57166674e-02 1.35307704e-02
1.11133581e-02 8.50579304e-03 5.75269150e-03 2.90115979e-03
-1.15937571e-08]
[ 0.00000000e+00 2.90117128e-03 5.75270270e-03 8.50580375e-03
1.11133681e-02 1.35307796e-02 1.57166756e-02 1.76336548e-02
1.92489172e-02 2.05348252e-02 2.14693765e-02 2.20365808e-02
2.22267328e-02 2.20365792e-02 2.14693735e-02 2.05348208e-02
1.92489114e-02 1.76336477e-02 1.57166674e-02 1.35307704e-02
1.11133581e-02 8.50579304e-03 5.75269150e-03 2.90115979e-03
-1.15937571e-08]]
and I want to calculate the eigenvalues and eigenvectors
w, v = numpy.linalg.eig(a)
How can I do this?
You cannot directly compute the eigenvalues of the matrix since it is not square. In order to find the eigenvalues and eigenvectors, the matrix has to be diagonalized, which involves taking a matrix inversion at an intermediate step, and only square matrices are invertible.
In order to find the eigenvalues of a non-square matrix you can compute the singular value decomposition (in numpy: np.linalg.svd). You can then relate the singular values to the eigenvalues as explained here, or here. Quoting one of the answers:
Definition: The singular values of an m×n matrix A are the positive square roots of the nonzero eigenvalues of the corresponding matrix A.T*A. The corresponding eigenvectors are called the singular vectors.
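As a quick numerical check of this relation (a small sketch using a random matrix with the same 3×25 shape as the array above):

import numpy as np

a = np.random.rand(3, 25)
# singular values of a
s = np.linalg.svd(a, compute_uv=False)
# eigenvalues of a @ a.T, which shares its nonzero
# eigenvalues with the larger matrix a.T @ a
eigvals = np.linalg.eigvalsh(a @ a.T)
# the squared singular values match those eigenvalues
print(np.allclose(np.sort(s**2), np.sort(eigvals)))  # True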
Your array is not square; you can pad it with zeros to make it square.
import numpy
a = numpy.array(([1,7,3,9],[3,1,5,1],[4,2,6,3]))
# fill with zeros to get a square matrix
z = numpy.zeros((max(a.shape), max(a.shape)))
z[:a.shape[0],:a.shape[1]] = a
a = z
w, v = numpy.linalg.eig(a)
print(w)
print(v)
Out:
[10.88979431 -2.23132083 -0.65847348 0. ]
[[-0.55662903 -0.89297739 -0.8543584 -0.58834841]
[-0.50308806 0.25253601 -0.0201359 -0.58834841]
[-0.66111007 0.37258146 0.51929401 0.39223227]
[ 0. 0. 0. 0.39223227]]
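Note that the zero padding makes the matrix singular, which is why one of the eigenvalues above is exactly 0.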
I was trying to implement the matrix exponential function as in scipy.linalg.expm. I gained inspiration from kaityo256's GitHub repository. I thus wrote down the following.
from scipy.linalg import expm
from scipy.linalg import eigh
from scipy.linalg import inv
from math import exp as math_exp
from numpy import array, zeros
from numpy.random import random_sample
from numpy.testing import assert_allclose
def diag2sqr(x):
    '''Makes a square matrix from a diagonal one.

    Takes a 1d matrix. Determines its data type.
    Finds out the shape of the 1d matrix.
    Makes an empty square matrix with both
    dimensions equal to the largest (nonzero) dimension of
    the 1d matrix. It then fills the elements of the
    1d matrix into the diagonal slots of the empty
    square one.

    Parameters
    ----------
    x : ndarray
        ndarray to be converted to a square ndarray

    Returns
    -------
    xsqr : ndarray
        ndarray with diagonal same as that of x,
        all other elements zero,
        dtype same as that of x
    '''
    x_flat = x.ravel()
    # Make the empty square matrix
    xsqr = zeros((x_flat.shape[0], x_flat.shape[0]), dtype=x.dtype)
    for i in range(x_flat.shape[0]):
        # Fill the ith diagonal slot
        xsqr[i, i] = x_flat[i]
    print('xsqr', xsqr)
    return xsqr
def kaityo_expm(x):
    '''Exponentiates an ndarray (kaityo).

    Exponentiates an ndarray in the most naive way.

    Parameters
    ----------
    x : ndarray
        The ndarray to be exponentiated

    Returns
    -------
    kexpm : ndarray
        x after exponentiating
    '''
    # Find eigenvalues and eigenvectors;
    # the eigenvectors are composed to form a unitary
    rx, ux = eigh(x)
    # Inverse of the unitary
    ux_inv = inv(ux)
    # Construct the diagonal matrix of exponentiated eigenvalues
    # tx = diag([array([math_exp(i) for i in rx]).ravel()])
    # tx = array([math_exp(i) for i in rx])
    tx = diag2sqr(array([math_exp(i) for i in rx]))
    kexpm1 = tx @ ux_inv
    kexpm = ux @ kexpm1
    return kexpm
Afterwards, I tried to test the above code versus scipy.linalg.expm.
x = random_sample((10, 10))
assert_allclose(expm(x), kaityo_expm(x))
This leads to the following output.
AssertionError:
Not equal to tolerance rtol=1e-07, atol=0
Mismatch: 100%
Max absolute difference: 7.04655733
Max relative difference: 0.59875635
x: array([[18.032424, 16.224408, 12.432163, 16.614248, 12.85653 , 13.705387,
15.096966, 10.577946, 18.399573, 17.938062],
[16.352809, 17.525898, 12.79079 , 16.295562, 13.512996, 14.407979,...
y: array([[18.649103, 13.157682, 11.264763, 16.099163, 15.2293 , 17.854499,
11.691586, 13.412066, 15.023189, 15.598455],
[13.157682, 13.612502, 9.628261, 12.659313, 13.559437, 13.382417,..
Obviously, the two implementations differ.
The questions are as follows:
Is it acceptable for them to differ?
Is my implementation wrong?
If my implementation is wrong, how do I fix it?
If my implementation is correct, when is it safe to use scipy.linalg.expm?
I have seen the following questions:
Matrix exponentiation with scipy: expm, expm2 and expm3
From a mathematical standpoint, the exponential of a matrix is defined via the Taylor series of the exponential:

exp(A) = Σ_{k=0}^{∞} A^k / k!

Let A be a diagonal square matrix; then the series reduces to exponentiating the diagonal entries one by one:

exp(diag(λ_1, ..., λ_n)) = diag(e^{λ_1}, ..., e^{λ_n})

The problem arises when A is a generic square matrix, so before taking the exponential you need to diagonalize it using its eigenvalues and eigenvectors:

A = U Λ U^{-1}

with U the matrix of eigenvectors and Λ (Lambda) the matrix with the eigenvalues on the diagonal. At this point we are close to finding the exponential of a matrix:

exp(A) = U exp(Λ) U^{-1}

Now let's implement this result in a simple script:
>>> import numpy as np
>>> import scipy.linalg as ln
>>> A = [[2/3, -4/3, 2],
[5/6, 4/3, -2],
[5/6, -2/3, 0]]
>>> A = np.matrix(A)
>>> print(A)
[[ 0.66666667 -1.33333333 2. ]
[ 0.83333333 1.33333333 -2. ]
[ 0.83333333 -0.66666667 0. ]]
>>> eigvalue, eigvectors = np.linalg.eig(A)
>>> print("eigvalue: ", eigvalue)
>>> print("eigvectors:")
>>> print(eigvectors)
eigvalue: [ 1. -1. 2.]
eigvectors:
[[ 0.81649658 0.27216553 0.87287156]
[ 0.40824829 -0.68041382 -0.21821789]
[ 0.40824829 -0.68041382 0.43643578]]
>>> e_Lambda = np.eye(np.size(A, 0))*(np.exp(eigvalue))
>>> print(e_Lambda)
[[2.71828183 0. 0. ]
[0. 0.36787944 0. ]
[0. 0. 7.3890561 ]]
>>> e_A = eigvectors*e_Lambda*eigvectors.I
>>> print(e_A)
[[ 2.3265481 -6.22769903 7.01116649]
[ 0.97933433 4.27520659 -3.51559341]
[ 0.97933433 -3.11384951 3.87346269]]
>>> e_A2 = ln.expm(A)
>>> print(e_A2)
[[ 2.3265481 -6.22769903 7.01116649]
[ 0.97933433 4.27520659 -3.51559341]
[ 0.97933433 -3.11384951 3.87346269]]
>>> np.testing.assert_allclose(e_A, e_A2)
>>> print(e_A - e_A2)
[[-1.77635684e-15 1.77635684e-15 -8.88178420e-16]
[ 4.44089210e-16 -1.77635684e-15 8.88178420e-16]
[-2.22044605e-16 0.00000000e+00 4.44089210e-16]]
We see that the result is basically the same, so I think it's safe to use scipy.linalg.expm for matrix exponentiation.
I created a repo with the notebook for further testing.
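One caveat worth noting for the question above: scipy.linalg.eigh assumes a symmetric (Hermitian) input, while random_sample((10, 10)) is generally not symmetric, so the eigh-based reconstruction cannot match expm there. A minimal sketch of the same comparison on a symmetrized input (reusing kaityo_expm from the question):

import numpy as np
from scipy.linalg import expm
from numpy.random import random_sample
from numpy.testing import assert_allclose

x = random_sample((10, 10))
xs = (x + x.T) / 2  # symmetrize so that eigh's assumption holds

# with a symmetric input, the eigendecomposition-based exponential
# should agree with scipy.linalg.expm within tolerance
assert_allclose(expm(xs), kaityo_expm(xs), rtol=1e-7, atol=1e-10)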
I want to write a function that uses SVD decomposition to solve a system of equations ax=b, where a is a square matrix and b is a vector of values. The scipy function scipy.linalg.svd() should decompose a into the matrices U, W, V. For U and V, I can simply take the transpose to find their inverses. But for W the function gives me a 1-D array of values that I need to place along the diagonal of a matrix and then take one over each value.
def solveSVD(a, b):
    U, s, V = sp.svd(a, compute_uv=True)
    Ui = np.transpose(a)
    Vi = np.transpose(V)

    W = np.diag(s)
    Wi = np.empty(np.shape(W)[0], np.shape(W)[1])
    for i in range(np.shape(Wi)[0]):
        if W[i, i] != 0:
            Wi[i, i] = 1/W[i, i]

    ai = np.matmul(Ui, np.matmul(Wi, Vi))
    x = np.matmul(ai, b)
    return(x)
However, I get a "TypeError: data type not understood" error. I think part of the issue is that
W=np.diag(s)
is not producing a square diagonal matrix.
This is my first time working with this library so apologies if I've done something very stupid, but I cannot work out why this line hasn't worked. Thanks all!
To be short, using singular value decomposition lets you replace your initial problem, A x = b, with U diag(s) Vh x = b. A bit of algebra on the latter gives you the following three-step function, which is really easy to read:
import numpy as np
from scipy.linalg import svd

def solve_svd(A, b):
    # compute svd of A
    U, s, Vh = svd(A)
    # U diag(s) Vh x = b <=> diag(s) Vh x = U.T b = c
    c = np.dot(U.T, b)
    # diag(s) Vh x = c <=> Vh x = diag(1/s) c = w
    # (trivial inversion of a diagonal matrix)
    w = np.dot(np.diag(1/s), c)
    # Vh x = w <=> x = Vh.H w (where .H stands for hermitian = conjugate transpose)
    x = np.dot(Vh.conj().T, w)
    return x
Now, let's test it with
A = np.random.random((100,100))
b = np.random.random((100,1))
and compare it with the LU-decomposition-based np.linalg.solve function
x_svd = solve_svd(A,b)
x_lu = np.linalg.solve(A,b)
which gives
np.allclose(x_lu,x_svd)
>>> True
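As a side check (not part of the original answer), np.linalg.pinv computes the Moore-Penrose pseudo-inverse via SVD, so it should reproduce the same solution on this example:

# reuses A, b, and x_svd from the snippets above
x_pinv = np.linalg.pinv(A) @ b
print(np.allclose(x_pinv, x_svd))  # True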
Please feel free to ask for more explanation in the comments if needed. Hope this helps.
I'm trying to use solve_ivp but I don't understand how it deals with the initial values in the argument. The documentation on solve_ivp states:
scipy.integrate.solve_ivp(fun, t_span, y0, method='RK45', t_eval=None, dense_output=False, events=None, vectorized=False, **options)
with
y0 : array_like, shape (n,)
Initial state. For problems in the complex domain, pass y0 with a complex data type (even if the initial guess is purely real)
However, I don't understand the example
>>> from scipy.integrate import solve_ivp
>>> def exponential_decay(t, y): return -0.5 * y
>>> sol = solve_ivp(exponential_decay, [0, 10], [2, 4, 8])
>>> print(sol.t)
[ 0. 0.11487653 1.26364188 3.06061781 4.85759374
6.65456967 8.4515456 10. ]
>>> print(sol.y)
[[2. 1.88836035 1.06327177 0.43319312 0.17648948 0.0719045
0.02929499 0.01350938]
[4. 3.7767207 2.12654355 0.86638624 0.35297895 0.143809
0.05858998 0.02701876]
[8. 7.5534414 4.25308709 1.73277247 0.7059579 0.287618
0.11717996 0.05403753]]
Why do they give an array of 3 initial values here when the differential equation only has one component?
the differential equation only has one component
It doesn't. The function exponential_decay can accept an array as y, and perform operations on that array in a vectorized fashion, as is typical in NumPy.
The initial value determines how many components the unknown function has. In this case, three.
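For instance, passing a single initial value produces a single-component solution (a minimal sketch reusing the decay function from the documentation example):

from scipy.integrate import solve_ivp

def exponential_decay(t, y):
    return -0.5 * y

# a single initial value -> the solution has one component
sol = solve_ivp(exponential_decay, [0, 10], [2])
print(sol.y.shape)  # (1, number_of_time_points)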
Part of my code inverts a matrix (really an ndarray) using numpy.linalg.inv. However, this frequently errors out as follows:
numpy.linalg.linalg.LinAlgError: Singular matrix
That would be fine if the matrix was actually singular. But that doesn't seem to be the case.
For example, I'm printing the matrix before trying to invert it. So right before the error it prints this:
[[ 0.76400334 0.22660491]
[ 0.22660491 0.06721147]]
... and then returns the above singularity error when it tries to invert that matrix. But from what I can tell this matrix is invertible. Numpy even seems to agree when asked later.
>>> numpy.linalg.inv([[0.76400334, 0.22660491], [0.22660491, 0.06721147]])
array([[ 2.88436275e+07, -9.72469076e+07],
[ -9.72469076e+07, 3.27870046e+08]])
Here's the exact code snippet:
print np.dot(np.transpose(X), X)
print np.linalg.inv(np.dot(np.transpose(X),X))
The first line prints the matrix above; the second line fails.
So what distinguishes the two actions above? Why does the stand-alone code work even though it errors out in my script?
EDIT: Per Colonel Beauvel's request, if I do
try:
    print np.dot(np.transpose(X), X)
    z = np.linalg.inv(np.dot(np.transpose(X), X))
except:
    z = "whoops"
print z
it outputs
[[ 0.01328185 0.1092696 ]
[ 0.1092696 0.89895982]]
whoops
but trying this on its own I get
>>> numpy.linalg.inv([[0.01328185, 0.1092696], [0.1092696, 0.89895982]])
array([[ 2.24677775e+08, -2.73098420e+07],
[ -2.73098420e+07, 3.31954382e+06]])
It's a matter of printing precision. The IEEE 754 doubles that you're most likely using have about 16 decimal digits of precision, and you need to write out 17 to preserve the binary value exactly.
Here's a small example. First create a singular matrix:
In [1]: import numpy as np
In [2]: np.random.seed(0)
In [3]: a, b, c = np.random.rand(3)
In [4]: d = b*c / a
In [5]: X = np.array([[a, b],[c, d]])
Print and try to invert it:
In [6]: X
Out[6]:
array([[ 0.5488135 , 0.71518937],
[ 0.60276338, 0.78549444]])
In [7]: np.linalg.inv(X)
LinAlgError: Singular matrix
Try to invert the printed matrix:
In [8]: Y = np.array([[ 0.5488135 , 0.71518937],
...: [ 0.60276338, 0.78549444]])
In [9]: np.linalg.inv(Y)
Out[9]:
array([[-85805775.2940297 , 78125795.99532071],
[ 65844615.19517545, -59951242.76033063]])
Success!
Increase printing precision and try again:
In [10]: np.set_printoptions(precision=17)
In [11]: X
Out[11]:
array([[ 0.54881350392732475, 0.71518936637241948],
[ 0.60276337607164387, 0.78549444195576024]])
In [12]: Z = np.array([[ 0.54881350392732475, 0.71518936637241948],
...: [ 0.60276337607164387, 0.78549444195576024]])
In [13]: np.linalg.inv(Z)
LinAlgError: Singular matrix
I just compute the determinant:
In [130]: m = np.array([[ 0.76400334, 0.22660491],[ 0.22660491,0.06721147]])
In [131]: np.linalg.det(m)
Out[131]: 2.3302017068132921e-09
# which is in fact for a 2D matrix 0.76400334*0.06721147 - 0.22660491*0.22660491
That is already quite close to 0.
If a matrix m can be inverted, mathematically you can compute the adjoint and divide by the determinant to get the inverted matrix.
Numerically, if the determinant is too small, this can produce exactly the kind of error you are seeing ...
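If you want to diagnose this kind of near-singularity up front (a suggestion beyond the original answer), the condition number is a handy check:

import numpy as np

m = np.array([[0.76400334, 0.22660491],
              [0.22660491, 0.06721147]])

# a huge condition number (here roughly 3e8) warns that
# inversion results will be numerically unreliable
print(np.linalg.cond(m))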