numpy linear algebra basic help - python

This is what I need to do:
I have this equation:
Ax = y
where A is a rational m×n matrix (m <= n), and x and y are vectors of
the appropriate sizes. I know A and y; I don't know what x is equal to. I
also know that there is no x for which Ax equals y exactly.
I want to find the vector x' such that Ax' is as close as possible to
y, meaning that (Ax' - y) is as close as possible to (0, 0, 0, ..., 0).
I know that I need to use either the lstsq function:
http://www.scipy.org/doc/numpy_api_docs/numpy.linalg.linalg.html#lstsq
or the svd function:
http://www.scipy.org/doc/numpy_api_docs/numpy.linalg.linalg.html#svd
but I don't understand the documentation at all. Can someone please show
me how to use these functions to solve my problem?
Thanks a lot!!!

The updated documentation may be a bit more helpful... looks like you want
numpy.linalg.lstsq(A, y)
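A minimal sketch of that call, with made-up A and y just to show the shapes (rcond=None silences a deprecation warning in recent numpy versions):

import numpy as np

# Made-up example data: A is m x n with m <= n, y has length m
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
y = np.array([7.0, 8.0])

# x_hat minimizes ||A @ x - y||; for m < n it is the minimum-norm solution
x_hat, residuals, rank, singular_values = np.linalg.lstsq(A, y, rcond=None)
print(x_hat)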

SVD is for the case of m < n, because you don't really have enough degrees of freedom.
The docs for lstsq don't look very helpful. I believe that's least-squares fitting, for the case where m > n.
If m < n, you'll want SVD.

The SVD of the matrix A gives you orthogonal matrices U and V and a diagonal matrix Σ such that
A = U Σ V^T
where
U^T U = I and V^T V = I.
Hence, if
A x = y
then
U Σ V^T x = y
Σ V^T x = U^T y        (left-multiply by U^T)
V^T x = Σ^{-1} U^T y   (Σ is diagonal, so it is trivial to invert)
x = V Σ^{-1} U^T y     (left-multiply by V)
So given the SVD of A you can get x. When some singular values are zero (or A is not square), replace Σ^{-1} by the pseudoinverse Σ^+, which inverts only the nonzero diagonal entries; the resulting x is the least-squares solution.
A side note on row versus column vectors: although for general matrices A B != B A, for a vector x the row vector x U and the column vector U^T x contain the same entries.
For example, consider x = (x, y) and U = (a, b; c, d):
x U = (x, y) (a, b; c, d)
    = (xa + yc, xb + yd)
    = (ax + cy, bx + dy)
    = (a, c; b, d) (x; y)
    = U^T x
It's fairly obvious when you look at the entries of x U being the dot products of x with the columns of U, the entries of U^T x being the dot products of x with the rows of U^T, and the relation between rows and columns under transposition. This is what lets you move between a row-vector product like x U and the column-vector product U^T x without explicitly transposing.
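A minimal numpy sketch of this recipe, reusing the made-up A and y from the lstsq example above (np.linalg.svd returns V^T directly, and dividing by s applies Σ^{-1} entrywise):

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
y = np.array([7.0, 8.0])

# Thin SVD: A == U @ np.diag(s) @ Vt
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# x = V Σ^{-1} U^T y; this should agree with np.linalg.lstsq(A, y)
x_hat = Vt.T @ ((U.T @ y) / s)
print(x_hat)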

Related

How to use sympy to find the relationship between X and Y

I am new to sympy. My ultimate goal is to plot y with respect to x.
y is a known formula in k, m and ω,
and x = ω*(m/k)**0.5 is also known.
I want to know how I can plot y as a function of x.
I am not sure which direction I should proceed from. I have tried simplification: in the handwritten calculation, the m/k factors in the numerator and denominator should cancel, but with sympy I have only managed to get the same variables on top and bottom, which leaves me at a loss. I hope you can give me a solution.
This is an implicit definition of y and x in terms of parameters omega, k and m. Since omega and x are directly proportional, I would recommend solving for omega in terms of x and then replacing omega in y with that solution. That will give you y as a function of x which you can then plot. Here is a toy example:
>>> from sympy import solve, plot
>>> from sympy.abc import x, w, k
>>> yi = w
>>> xi = w*k
>>> yx = yi.subs(w, solve(x - xi, w)[0]); yx
x/k
>>> plot(yx.subs(k, 1), ylim=(-1, 3))
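Applying the same idea to the relation in the question: the formula for y was not given there, so the y below is only an assumed example expression in k, m and ω, chosen so that the m/k cancellation is visible; the substitution pattern is the point, not the formula.

import sympy as sp

w, k, m, x = sp.symbols('omega k m x', positive=True)
y = 1/(1 - m*w**2/k)   # assumed example formula, not from the question
xi = w*sp.sqrt(m/k)    # the known relation x = ω*(m/k)**0.5

w_of_x = sp.solve(x - xi, w)[0]       # ω = x*sqrt(k/m)
yx = sp.simplify(y.subs(w, w_of_x))   # -> 1/(1 - x**2); m and k cancel
sp.plot(yx, (x, 0, 0.9))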

How to calculate the divergence of a vector in sympy?

I want to calculate the divergence of a given vector with sympy. Is there any function in Python responsible for this? I looked through the functions of einsteinpy, but I haven't found any that help.
Basically I want to calculate \nabla_\mu (n v^\mu) = 0 for a given vector v, where n is a constant number.
\nabla_\mu (n v^\mu) = 0 represents a divergence, where \mu takes the derivative with respect to x, y or z of the corresponding component of the vector. For example:
\nabla_\mu (n v^\mu) = \partial_x (u^x) + \partial_y (u^y) + \partial_z (u^z)
u can be something like (2x, 4y, 6z).
I appreciate any help.
As shown by @mikuszefski, you can use the module sympy.vector, which has an implementation of the divergence in a given coordinate system (a sketch of that route follows the code below).
Another way to do what you want is to use the function derive_by_array to get a tensor and then take an Einstein contraction.
import sympy as sp

x, y, z = sp.symbols("x y z")  # dim = 3

# The concrete functions from the question:
u, v, w = 2*x, 4*y, 6*z

# In a more general way, you can use undefined functions:
u = sp.Function("u")(x, y, z)
v = sp.Function("v")(x, y, z)
w = sp.Function("w")(x, y, z)

U = sp.Array([u, v, w])  # U is a vector of dim = 3 (a sympy.Array)
X = sp.Array([x, y, z])  # X is a vector of dim = 3 (a sympy.Array)

dUdX = sp.derive_by_array(U, X)  # dUdX is a tensor of dim = 3 and order = 2

# First way: the divergence is the trace of the Jacobian (limited to matrices)
divU = sp.trace(sp.Matrix(dUdX))
# Second way: contract the two indices of the derivative tensor (more general)
divU = sp.tensorcontraction(dUdX, (0, 1))

This solution also works when dim = 2, for example, but you must have len(X) == len(U).
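And here is a minimal sketch of the sympy.vector route mentioned above, using the question's example vector (2x, 4y, 6z) and a symbolic constant n:

import sympy as sp
from sympy.vector import CoordSys3D, divergence

N = CoordSys3D('N')
n = sp.symbols('n')  # the constant from the question

# The example vector (2x, 4y, 6z), written in the coordinate system N
v = 2*N.x*N.i + 4*N.y*N.j + 6*N.z*N.k
print(divergence(n*v))  # -> 12*n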

np.linalg.solve() but when A (first matrix) is unknown

np.linalg.solve() works great when you have an equation of the form Ax = b.
My problem is that I actually have an equation of the form xC = D, where x is a 2x2 matrix I want to find, and C and D are 2x2 matrices I'm given.
And because matrix multiplication is generally not commutative, I can't just swap the two around.
Is there an efficient way to solve this in numpy (or another library in Python)?
x @ C = D is the same as D^-1 @ x @ C @ C^-1 = D^-1 @ D @ C^-1, which is D^-1 @ x = C^-1, which is in the form A x = b where A is np.linalg.pinv(D) and b is np.linalg.pinv(C).
This boils down to
x = D @ np.linalg.pinv(C)
which you could have gotten by just multiplying both sides of the equation by the inverse of C on the right.
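An alternative that avoids computing any explicit (pseudo)inverse, sketched here on made-up C and D: transposing both sides turns x @ C = D into C.T @ x.T = D.T, which is exactly the A x = b shape that np.linalg.solve expects.

import numpy as np

C = np.array([[1.0, 2.0],
              [3.0, 4.0]])
D = np.array([[5.0, 6.0],
              [7.0, 8.0]])

x = np.linalg.solve(C.T, D.T).T  # solves C.T @ x.T = D.T, then transposes back
print(np.allclose(x @ C, D))     # True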

How to succinctly map over a plane with numpy

I have written code to plot the average squared error of a linear function over a given dataset, to visualise progress during gradient-descent training towards the optimum regression line.
The relevant bits are these:
def compute_error(f, X, Y):
    e = lambda x, y: (y - f(x))**2
    return sum(e(x, y) for (x, y) in zip(X, Y)) / len(X)

mn, bn, density = abs(target_slope)*1.5, abs(target_intercept)*1.5, 20
M, B = map(list, zip(*[(m, b) for m in np.linspace(-mn, +mn, density)
                              for b in np.linspace(-bn, +bn, density)]))
E = [compute_error(lambda x: m*x + b, X, Y) for m, b in zip(M, B)]
This works, but it is very messy. I suspect there is a much more succinct way to pull off the same thing with numpy. So far I have gotten this:
M, B = map(np.ndarray.flatten, np.mgrid[-mn:+mn:1/density, -bn:+bn:1/density])
I still don't know how to improve the construction of E, and for some reason it is currently a lot slower than the messy version.
So, what would be a good way to map over a plane like M×B with numpy?
If you want to run the above code you can build X and Y like so:
import numpy as np
from numpy.random import normal

target_slope = 3
target_intercept = 15

def generate_random_data(slope=1, minx=0, maxx=100, n=200, intercept=0):
    f = lambda x: normal(slope*x, maxx/5) + intercept
    X = np.linspace(minx, maxx, n)
    Y = [f(x) for x in X]
    return X, Y

X, Y = generate_random_data(slope=target_slope, intercept=target_intercept)
def compute_error(f, X, Y):
    return np.mean((Y - f(X))**2)

MB = np.mgrid[-mn:+mn:2*mn/density, -bn:+bn:2*bn/density]
MB = MB.reshape((2, -1)).T
E = [compute_error(lambda x: m*x + b, X, Y) for m, b in MB]
It is possible to write a full numpy solution:
Y = np.array(Y)
M, B = np.mgrid[-mn:+mn:2*mn/density, -bn:+bn:2*bn/density]
mx = M.reshape((-1,1))*X
b = B.reshape((-1,1))*np.ones_like(X)
E = np.mean( (mx+b - Y)**2, axis=1 )
It may also be possible to write a solution that avoids flattening the arrays altogether and obtains the error as a 2-D array, as sketched below.
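A sketch of that flatten-free variant, using broadcasting over a trailing axis (this assumes X and Y are numpy arrays of length n, as built above):

M, B = np.mgrid[-mn:+mn:2*mn/density, -bn:+bn:2*bn/density]
# M[..., None] has shape (density, density, 1), which broadcasts against X of
# shape (n,); the squared errors then have shape (density, density, n), and
# averaging over the last axis leaves E as a (density, density) grid
E = np.mean((M[..., None]*X + B[..., None] - Y)**2, axis=-1)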
I don't fully follow what you're trying to achieve here. However, this may help get you started with a numpy solution:
X, Y = generate_random_data(slope=target_slope, intercept=target_intercept, n=180)
M, B = np.mgrid[-mn:+mn:1/density, -bn:+bn:1/density]
f = M.T*X + B.T
error = np.sum((f-Y)**2)
Note I've had to alter the default number of X, Y values so that the shapes broadcast correctly.

linear algebra in python

Given a tall m-by-n matrix X, I need to calculate s = 1 + x (X.T X)^{-1} x.T. Here, x is a row vector and s is a scalar. Is there an efficient (or recommended) way to compute this in Python?
Needless to say, X.T X will be symmetric positive definite.
My attempt:
If we consider the QR decomposition of X, i.e., X = QR, where Q is orthogonal, R is upper triangular, then X.T X = R.T R.
QR decomposition can be easily obtained using numpy.linalg.qr, that is
Q,R = numpy.linalg.qr(X)
But then again, is there a particularly efficient way to calculate inv(R.T R)?
If you are doing the QR factorization of X, giving X.T X = R.T R, you can avoid np.linalg.inv (and np.linalg.solve) by using forward and backward substitution instead (R.T is lower triangular!) with scipy.linalg.solve_triangular:
import numpy as np
import scipy.linalg as LA

Q, R = np.linalg.qr(X)

# Solve R.T R z = x in two triangular steps, (a) then (b):
# step (a): solve R.T y = x (forward substitution; trans='T' uses R.T)
y = LA.solve_triangular(R, x, trans='T')
# step (b): solve R z = y (backward substitution)
z = LA.solve_triangular(R, y)

s = 1 + x @ z
where @ is the Python 3 matrix multiplication operator.
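As a quick sanity check of this recipe (with made-up data; the inverse-based formula at the end is computed only to verify the result):

import numpy as np
import scipy.linalg as LA

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))  # tall m x n matrix
x = rng.standard_normal(5)         # the row vector from the question

Q, R = np.linalg.qr(X)
y = LA.solve_triangular(R, x, trans='T')
z = LA.solve_triangular(R, y)
s = 1 + x @ z

s_direct = 1 + x @ np.linalg.inv(X.T @ X) @ x
print(np.isclose(s, s_direct))  # True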
