np.linalg.solve() but when A (first matrix) is unknown - python

np.linalg.solve() works great when you have an equation of the form Ax = b.
My problem is that I actually have an equation of the form xC = D, where x is the 2x2 matrix I want to find, and C and D are given 2x2 matrices.
And because matrix multiplication is generally not commutative, I can't just swap the two around.
Is there an efficient way to solve this in numpy (or another Python library)?

x @ C = D is the same as D^-1 @ x @ C @ C^-1 = D^-1 @ D @ C^-1, which is D^-1 @ x = C^-1. That is in the form Ax = b, where A is np.linalg.pinv(D) and b is np.linalg.pinv(C).
This boils down to
x = D @ np.linalg.pinv(C)
which you could have gotten by just multiplying both sides of the original equation on the right by the inverse of C.
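A quick numerical check of that identity (the 2x2 matrices below are made up for illustration; the transposed-system variant at the end is a standard alternative that avoids forming an explicit inverse):
import numpy as np

# made-up invertible example: construct D so that x_true @ C = D
C = np.array([[1.0, 2.0],
              [3.0, 5.0]])
x_true = np.array([[2.0, 0.0],
                   [1.0, 1.0]])
D = x_true @ C

# recover x from C and D
x = D @ np.linalg.pinv(C)
print(np.allclose(x, x_true))  # True

# equivalently: x @ C = D  <=>  C.T @ x.T = D.T, which np.linalg.solve handles
x_alt = np.linalg.solve(C.T, D.T).T
print(np.allclose(x_alt, x_true))  # True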

Related

Python/transposed vector/multiplication

I want to define the following matrix:
A = b * c^T
That is, the outer product of a column vector b and a transposed column vector c, which yields a matrix. In this case b and c have the same number of components, so the multiplication is well-defined.
The np.transpose(c) command did not really help me, because when I did
import numpy as np
b = np.array([1,1])
c = np.array([0,1])
d = np.transpose(c)
A = b * d
print(A)
I received the vector [0, 1], but I should be receiving a matrix, because a column vector multiplied by a transposed column vector yields a matrix.
What could I do instead?
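For reference, np.transpose is a no-op on 1-D arrays, which is why b * d multiplies elementwise instead of forming the outer product. A minimal sketch of two standard ways to get the outer product in numpy:
import numpy as np

b = np.array([1, 1])
c = np.array([0, 1])

# np.outer treats its first argument as a column and its second as a row
A = np.outer(b, c)
print(A)  # [[0 1]
          #  [0 1]]

# equivalently, add explicit axes so broadcasting produces a 2x2 result
A2 = b[:, None] * c[None, :]
print(np.array_equal(A, A2))  # True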

broadcasted lstsq (least squares)

I have a bunch of 3x2 matrices, let's say 777 of them, and just as many right-hand sides of size 3. For each of them, I would like to know the least-squares solution, so I'm doing
import numpy
A = numpy.random.rand(3, 2, 777)
b = numpy.random.rand(3, 777)
for k in range(777):
    numpy.linalg.lstsq(A[..., k], b[..., k], rcond=None)
That works, but is slow. I'd much rather compute all the solutions in one go, but upon
numpy.linalg.lstsq(A, b)
I'm getting
numpy.linalg.linalg.LinAlgError: 3-dimensional array given. Array must be two-dimensional
Any hints on how to broadcast numpy.linalg.lstsq?
One can make use of the fact that if A = U Σ V^T is the singular value decomposition of A, then
x = V Σ^+ U^T b
is the least-squares solution to Ax = b. SVD is broadcast in numpy, but over leading dimensions, which is why the batch axis moves to the front below. It then only requires a bit of fiddling with einsums to get it all right:
import numpy

A = numpy.random.rand(7, 3, 2)
b = numpy.random.rand(7, 3)

# reference: solve each system individually
for k in range(7):
    x, res, rank, sigma = numpy.linalg.lstsq(A[k], b[k], rcond=None)
    print(x)
print()

# broadcast SVD solution; note that numpy's svd returns v = V^T
u, s, v = numpy.linalg.svd(A, full_matrices=False)
uTb = numpy.einsum('ijk,ij->ik', u, b)       # U^T b for each system
xx = numpy.einsum('ijk,ij->ik', v, uTb / s)  # V (Σ^+ U^T b)
print(xx)
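As a side note, np.linalg.pinv accepts stacked matrices in modern numpy (it broadcasts over leading axes), so the same batch solve can be written without einsum. A sketch, assuming the same shapes as above:
import numpy as np

A = np.random.rand(7, 3, 2)
b = np.random.rand(7, 3)

# pinv broadcasts over the leading axis; matmul then solves all systems at once
x = (np.linalg.pinv(A) @ b[:, :, None])[:, :, 0]
print(x.shape)  # (7, 2)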

Loops to minimize function of arrays in python

I have some large arrays, each with i elements; call them X, Y, Z. I need to find some values a, b (real numbers between 0 and 1) such that, for
r = X - a*Y - b*Z
r_av = sum(r) / i
rms = sum((r - r_av)^2), summing over the i pixels,
the rms is minimized. Basically I'm looking to minimize the scatter in r, and thus need to find the right a and b to do that. So far I have thought to do this with nested loops in one of two ways: either (1) looping through a range of possible (a, b) values and then selecting the smallest rms, or (2) inserting a while statement so that the loop terminates once rms stops decreasing as a and b decrease. Here's some pseudocode for both:
1) List

for a = 1
    for b = 1
        calculate rms
        b = b - .001
    a = a - .001
loop 1000 times
sort rms values, from smallest
print the (a, b) corresponding to the smallest rms

2) Terminate

for a = 1
    for b = 1
        calculate rms
        while rms > previous step:
            b = b - .001
            a = a - .001
Is one of these preferable? Or is there yet another, better way to go about this? Any tips would be greatly appreciated.
There is already a handy formula for least-squares fitting.
I came up with two different ways to solve your problem.
For the first one, consider the matrix K:
import numpy as np

L = len(X)
# K is the centering matrix: K.dot(v) equals v - mean(v),
# which accounts for the (r - r_av) part of the objective
K = np.identity(L) - np.ones((L, L)) / L
In your case, A and B are defined as:
A = K.dot(np.array([Y, Z]).transpose())
B = K.dot(np.array([X]).transpose())
Apply the normal-equations formula C = (A^T A)^-1 A^T B to find the C that minimizes the error A * C - B:
C = np.linalg.inv(np.transpose(A).dot(A))
C = C.dot(np.transpose(A)).dot(B)
Then the result is:
a, b = C.reshape(2)
Also, note that numpy already provides linalg.lstsq, which does the exact same thing:
a, b = np.linalg.lstsq(A, B, rcond=None)[0].reshape(2)
A simpler way is to define A as:
A = np.array([Y, Z, [1]*len(X)]).transpose()
Then solve it against X to get the coefficients and the mean:
a, b, mean = np.linalg.lstsq(A, X, rcond=None)[0]
If you need a proof of this result, have a look at this post.
Example:
>>> import numpy as np
>>> X = [5, 7, 9, 5]
>>> Y = [2, 0, 4, 1]
>>> Z = [7, 2, 4, 6]
>>> A = np.array([Y, Z, [1] * len(X)]).transpose()
>>> a, b, mean = np.linalg.lstsq(A, X, rcond=None)[0]
>>> print(a, b, mean)
0.860082304527 -0.736625514403 8.49382716049
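As a quick sanity check tying this back to the original objective (a sketch reusing the example values above): if the fit is optimal, nudging a or b in any direction should never decrease the centered sum of squares.
import numpy as np

X = np.array([5, 7, 9, 5], dtype=float)
Y = np.array([2, 0, 4, 1], dtype=float)
Z = np.array([7, 2, 4, 6], dtype=float)

A = np.array([Y, Z, [1.0] * len(X)]).transpose()
a, b, mean = np.linalg.lstsq(A, X, rcond=None)[0]

r = X - a * Y - b * Z
rms = np.sum((r - r.mean()) ** 2)

# perturbing a or b must not decrease the objective at a least-squares optimum
for da, db in [(1e-3, 0), (-1e-3, 0), (0, 1e-3), (0, -1e-3)]:
    r2 = X - (a + da) * Y - (b + db) * Z
    assert np.sum((r2 - r2.mean()) ** 2) >= rms
print(a, b, mean)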

PIL perspective transform, work out the (a, b, c, d, e, f, g, h)

I am trying to use PIL to do a perspective transformation on an image. I have the coordinates of the corners of the image and the coordinates of where the corners should end up, but I am not sure how to obtain (a, b, c, d, e, f, g, h) for the 'data' parameter.
I know it has something to do with this:
http://bishopw.loni.ucla.edu/AIR5/2Dperspective.html
but I am not sure what this page means.
You can get the parameters by solving the equation T.x1 + v = x2, where x1 is the point's coordinates in coordinate system 1 (the original picture) and x2 is in the new coordinate system (tilted, rotated, or 3d). x1, x2 and v are 2-by-1 vectors and T is a 2-by-2 matrix. For example x1 = (x1x, x1y), x2 = (x2x, x2y), v = (c, f) and

T = | a b |
    | d e |
If you do not know matrix algebra, you can solve this by eliminating variables. For each point you get two equations like:
a*x1x + b*x1y + c = x2x
d*x1x + e*x1y + f = x2y
Now plug in one of the corner points, let's say x1 = (0, 1) and x2 = (0, 4), and you get:
a*0 + b*1 + c = 0
d*0 + e*1 + f = 4
From that you get:
b = -c
e = 4-f
Now repeat this for the other corner points (using the knowledge that b = -c, and so on), and you can solve numeric values for all variables.
Hint: scale your original picture coordinates to the unit square (0,0), (0,1), (1,0), (1,1) before calculating the transformation. That way you have lots of ones and zeros. The mathematical method is called Gaussian elimination (use Google or Wikipedia -> Gaussian elimination -> example of the algorithm).
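Equivalently, you can let numpy do the elimination. A sketch for the affine case described above (the point pairs below are placeholders; three correspondences determine the six parameters, and for im.transform you would set up the pairs in the output-to-input direction the docs below describe):
import numpy as np

src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # original corner coordinates
dst = [(2.0, 3.0), (3.0, 3.5), (1.5, 4.0)]  # where they should end up

# each point pair gives two linear equations in (a, b, c, d, e, f)
M, rhs = [], []
for (x, y), (X, Y) in zip(src, dst):
    M.append([x, y, 1, 0, 0, 0]); rhs.append(X)
    M.append([0, 0, 0, x, y, 1]); rhs.append(Y)
a, b, c, d, e, f = np.linalg.solve(np.array(M), np.array(rhs))
print(a, b, c, d, e, f)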
Note that the data in im.transform has six parameters for an AFFINE (2d -> 2d) transformation:

Data is a 6-tuple (a, b, c, d, e, f) which contain the first two rows from an affine transform matrix. For each pixel (x, y) in the output image, the new value is taken from a position (a x + b y + c, d x + e y + f) in the input image, rounded to nearest pixel.
EDIT: Oops, the above was for the AFFINE transformation. You were asking about the PERSPECTIVE transformation. The function is the same but the parameters are different. Data should be like:
Data is a 8-tuple (a, b, c, d, e, f, g, h) which contains the coefficients for a perspective transform. For each pixel (x, y) in the output image, the new value is taken from a position ((a x + b y + c)/(g x + h y + 1), (d x + e y + f)/(g x + h y + 1)) in the input image, rounded to nearest pixel.
So your equation is Q.x3 = w.x4, where the original coordinate x3 is (x3x, x3y, 1), the transformed coordinate x4 is (x4x, x4y, 1), and w = g*x3x + h*x3y + 1 is the perspective scale factor (the denominator in the docs above). For Q:

Q = | a b c |
    | d e f |
    | g h 1 |

Compared to the AFFINE one, you embed the constant v into the matrix. Multiplying the docs' formulas through by the denominator, your equations become:

a*x3x + b*x3y + c = x4x * (g*x3x + h*x3y + 1)
d*x3x + e*x3y + f = x4y * (g*x3x + h*x3y + 1)

These are still linear in the eight unknowns a..h, so four corner correspondences give eight equations. Solve by Gaussian elimination as in the AFFINE case.
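Putting that together in numpy (a sketch; find_coeffs and the corner lists are illustrative, not part of PIL). Note the direction: PIL maps each output pixel back to an input position, so pass the output-image corners first and the matching input-image corners second:
import numpy as np

def find_coeffs(out_pts, in_pts):
    # out_pts, in_pts: four (x, y) pairs each
    # each pair gives two rows: a*x + b*y + c - X*(g*x + h*y) = X, same for Y
    M, rhs = [], []
    for (x, y), (X, Y) in zip(out_pts, in_pts):
        M.append([x, y, 1, 0, 0, 0, -X * x, -X * y]); rhs.append(X)
        M.append([0, 0, 0, x, y, 1, -Y * x, -Y * y]); rhs.append(Y)
    return np.linalg.solve(np.array(M), np.array(rhs))  # (a, b, c, d, e, f, g, h)

coeffs = find_coeffs(
    [(0, 0), (500, 0), (500, 500), (0, 500)],    # corners in the output image
    [(10, 20), (480, 5), (470, 490), (5, 510)])  # matching input-image corners
# im.transform((500, 500), Image.PERSPECTIVE, coeffs) would then apply it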

numpy linear algebra basic help

This is what I need to do:
I have this equation:
Ax = y
where A is a rational m*n matrix (m <= n), and x and y are vectors of the right size. I know A and y; I don't know what x is equal to. I also know that there is no x for which Ax equals exactly y.
I want to find the vector x' such that Ax' is as close as possible to y, meaning that (Ax' - y) is as close as possible to (0, 0, ..., 0).
I know that I need to use either the lstsq function:
http://www.scipy.org/doc/numpy_api_docs/numpy.linalg.linalg.html#lstsq
or the svd function:
http://www.scipy.org/doc/numpy_api_docs/numpy.linalg.linalg.html#svd
I don't understand the documentation at all. Can someone please show me how to use these functions to solve my problem?
Thanks a lot!!!
The updated documentation may be a bit more helpful... looks like you want
numpy.linalg.lstsq(A, y)
SVD is for the case of m < n, because you don't really have enough degrees of freedom.
The docs for lstsq don't look very helpful. I believe that's least-squares fitting, for the case where m > n.
If m < n, you'll want SVD.
The SVD of matrix A gives you orthogonal matrices U and V and a diagonal matrix Σ such that

A = U Σ V^T

where

U^T U = I ;
V^T V = I

Hence, if

A x = y

then

U Σ V^T x = y
Σ V^T x = U^T y       (multiply both sides by U^T on the left)
V^T x = Σ^-1 U^T y    (Σ is diagonal, so it is easy to invert)
x = V Σ^-1 U^T y      (multiply both sides by V on the left)

So given the SVD of A you can get x. If A is not square or not full rank, replace Σ^-1 with the pseudo-inverse Σ^+ (invert the nonzero singular values and leave the zeros), which gives the minimum-norm least-squares solution x = V Σ^+ U^T y, matching the formula in the broadcasted-lstsq answer above.
As a side remark on switching between row- and column-vector conventions: although for general matrices A B != B A, for a row vector x it is true that x U has the same components as U^T x^T. For example, consider x = (x, y) and U = (a, b; c, d):

x U = (x, y) (a, b; c, d)
    = (xa + yc, xb + yd)
    = (ax + cy, bx + dy)
    = (a, c; b, d) (x; y)
    = U^T x^T

It's fairly obvious when you look at it this way: the values in x U are the dot products of x with the columns of U, and the values in U^T x^T are the dot products of x with the rows of U^T, which are exactly the columns of U.
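A minimal sketch of both approaches (the sizes here are made up for illustration; np.linalg.lstsq and np.linalg.svd are the relevant functions):
import numpy as np

# made-up example with m = 2 equations and n = 3 unknowns (m <= n)
A = np.random.rand(2, 3)
y = np.random.rand(2)

# least squares in one call; rcond=None silences the old-default warning
x_lstsq = np.linalg.lstsq(A, y, rcond=None)[0]

# the same minimum-norm solution via the SVD: x = V Σ^+ U^T y
U, s, Vh = np.linalg.svd(A, full_matrices=False)  # note: numpy returns V^T
x_svd = Vh.T @ ((U.T @ y) / s)

print(np.allclose(x_lstsq, x_svd))  # True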
