I'm pulling my hair out here trying to port a matrix calculation from Octave to NumPy. This is specifically in regards to multivariate regression.
My arbitrary data is as follows, where the array 'x' holds my input values:
x = [[1, 1, 2],
     [1, 3, 4],
     [1, 5, 6],
     [1, 7, 8],
     [1, 9, 10],
     [1, 11, 12]]
And 'y' holds my output values (simply the sum of the two feature columns; the first column of x is the intercept term):
y = [[3],[7],[11],[15],[19],[23]]
In Octave the following code calculates the correct coefficients (where pinv(A) is the Moore-Penrose pseudo-inverse of matrix A):
pinv(x' * x) * x' * y
In NumPy I am performing the following:
import numpy as np
from numpy.linalg import inv

x = np.array(x)
y = np.array(y)
x_T = x.transpose()
x_theta = inv(np.dot(x_T, x))
x_theta = np.dot(x_theta, x_T)
x_theta = np.dot(x_theta, y)
However this outputs:
[[-330.5],[36.875],[-3.875]]
Which is obviously incorrect. I presume I'm just missing something simple, but any help would be appreciated.
Many thanks!
Posting this as an answer so your question doesn't still show up as unanswered - use np.linalg.pinv (pseudo-inverse) where you would use pinv in Octave.
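The underlying problem is that the third column of x is the sum of the first two, so x' * x is singular and its plain inverse is numerically meaningless; the pseudo-inverse handles this rank-deficient case gracefully. A minimal sketch of the corrected NumPy version, using the data from the question:

import numpy as np

x = np.array([[1, 1, 2],
              [1, 3, 4],
              [1, 5, 6],
              [1, 7, 8],
              [1, 9, 10],
              [1, 11, 12]])
y = np.array([[3], [7], [11], [15], [19], [23]])

# np.linalg.pinv computes the Moore-Penrose pseudo-inverse, matching
# Octave's pinv, so the normal-equation form carries over directly.
theta = np.linalg.pinv(x.T @ x) @ x.T @ y
print(theta)  # approximately [[0.], [1.], [1.]], the minimum-norm solution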
I came across a calculation of Euclidean distance using NumPy vectorization, here. The calculation done is:
>>> tri = np.array([[1, 1],
... [3, 1],
... [2, 3]])
>>> np.sum(tri**2, axis=1) ** 0.5 # Or: np.sqrt(np.sum(np.square(tri), 1))
array([1.4142, 3.1623, 3.6056])
So, to understand, I tried:
>>> np.sum(tri**2, axis=1)
array([ 2, 10, 13])
So basically, tri**2 squares each element: [[1, 1], [9, 1], [4, 9]]. Next, we sum within each row to get [1+1, 9+1, 4+9] = [2, 10, 13].
Then we take the square root of each of them.
But I didn't get where we are doing the subtraction qi - pi as in the formula. Also, I felt we should be getting a single value: √((1-1)^2 + (9-1)^2 + (4-9)^2) = 9.43.
Am I missing some maths here, or some Python/NumPy understanding?
Assuming you have two vectors p and q represented as np.array:
dist = np.sqrt(np.sum((q - p) ** 2))
There is also np.linalg.norm which computes the same thing:
assert np.isclose(dist, np.linalg.norm(q - p))
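For instance (vectors chosen here just for illustration):

import numpy as np

p = np.array([1, 1])
q = np.array([4, 5])

# Difference element-wise, then square, sum, and take the root:
dist = np.sqrt(np.sum((q - p) ** 2))
print(dist)  # 5.0, since sqrt(3**2 + 4**2) = 5
assert np.isclose(dist, np.linalg.norm(q - p))

Note that in the tri example no subtraction appears because each row's norm is simply its distance from the origin, i.e. p = (0, 0).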
import numpy as np

def lin_eqn(a, b):
    '''
    Solve the system of linear equations of the form ax = b

    Eg.
    x + 2*y = 8
    3*x + 4*y = 18

    Given inputs a and b represent the coefficients and constants
    of the linear equations respectively.

    coefficients:
    a = np.array([[1, 2], [3, 4]])

    constants:
    b = np.array([8, 18])

    Desired Output: [2,3]
    '''
    # YOUR CODE HERE
    inv = np.linalg.inv(a)
    return np.matmul(inv, b)
print(lin_eqn(np.array([[1, 2], [3, 4]]),np.array([8, 18])))
#It prints [2. 3.]
assert lin_eqn(np.array([[1, 2], [3, 4]]),np.array([8, 18])).tolist() == [2.0000000000000004, 2.9999999999999996]
This assert statement was given in my assignment, and it is why my answer does not match.
It throws an error because [2. 3.] is not equal to [2.0000000000000004, 2.9999999999999996]. I am not able to resolve this issue. Please help.
Do not invert matrices to solve systems of linear equations; use np.linalg.solve instead.
There will be round-off errors, so instead of checking for equality you should check that the norm of the difference between your solution and a reference solution is smaller than a given (small) tolerance, i.e.:
assert np.linalg.norm(lin_eqn(a, b) - reference_sol) < 1e-12
reference_sol would also be an array.
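A minimal sketch combining both points (the reference solution here is just the assignment's expected [2, 3]):

import numpy as np

def lin_eqn(a, b):
    # np.linalg.solve is faster and numerically more stable
    # than computing the inverse explicitly.
    return np.linalg.solve(a, b)

a = np.array([[1, 2], [3, 4]])
b = np.array([8, 18])
reference_sol = np.array([2.0, 3.0])

# Compare with a tolerance rather than exact equality:
assert np.linalg.norm(lin_eqn(a, b) - reference_sol) < 1e-12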
Consider the linear regression of Y on X, where (xi, yi) = (2, 7), (0, 2), (5, 14) for i = 1, 2, 3. The solution is (a, b) = (2.395, 2.079), obtained using the regression function on a hand-held calculator.
I want to calculate the slope and the intercept of a linear fit using
the pykalman module. I'm getting
ValueError: The shape of all parameters is not consistent. Please re-check their values.
I'd really appreciate it if someone could help me. Here is my code:
from pykalman import KalmanFilter
import numpy as np

measurements = np.asarray([[7], [2], [14]])
initial_state_matrix = [[1], [1]]
transition_matrix = [[1, 0], [0, 1]]
observation_covariance_matrix = [[1, 0], [0, 1]]
observation_matrix = [[2, 1], [0, 1], [5, 1]]

kf1 = KalmanFilter(n_dim_state=2, n_dim_obs=6,
                   transition_matrices=transition_matrix,
                   observation_matrices=observation_matrix,
                   initial_state_mean=initial_state_matrix,
                   observation_covariance=observation_covariance_matrix)
kf1 = kf1.em(measurements, n_iter=0)
(smoothed_state_means, smoothed_state_covariances) = kf1.smooth(measurements)
print(smoothed_state_means)
Here's the code snippet:
from pykalman import KalmanFilter
import numpy as np

kf = KalmanFilter()

observation = np.asarray([[7], [2], [14]])
transition_matrix = np.asarray([[1, 0], [0, 1]])
observation_matrix = np.asarray([[2, 1], [0, 1], [5, 1]])
observation_covariance = np.asarray([[.1622, 0, 0],
                                     [0, .1622, 0],
                                     [0, 0, .1622]])

(filtered_state_means, filtered_state_covariances) = kf.filter_update(
    filtered_state_mean=[[0], [0]],
    filtered_state_covariance=[[90000, 0], [0, 90000]],
    observation=observation,
    transition_matrix=transition_matrix,
    observation_matrix=observation_matrix,
    observation_covariance=observation_covariance)
print(filtered_state_means)
print(filtered_state_covariances)

for x in range(0, 1000):
    (filtered_state_means, filtered_state_covariances) = kf.filter_update(
        filtered_state_mean=filtered_state_means,
        filtered_state_covariance=filtered_state_covariances,
        observation=observation,
        transition_matrix=transition_matrix,
        observation_matrix=observation_matrix,
        observation_covariance=observation_covariance)
print(filtered_state_means)
print(filtered_state_covariances)
filtered_state_covariance was chosen large because we have no idea where our filtered_state_mean lies initially, and the observations are just [[y1], [y2], [y3]]. observation_matrix is [[x1, 1], [x2, 1], [x3, 1]], which makes the second state element our intercept. Imagine it like this: y1 = m*x1 + c, where m and c are the slope and intercept respectively. In our case filtered_state_mean = [[m], [c]].
Notice that the new filtered_state_means is used as filtered_state_mean in the next kf.filter_update() call (in the loop), because we now know where the mean lies, together with filtered_state_covariance = filtered_state_covariances. Iterating 1000 times converges the mean to the real value. If you want to know more about the function/method used, see: https://pykalman.github.io/
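As a sanity check (my own addition, not part of the pykalman workflow), the converged state should agree with the ordinary least-squares fit quoted earlier, (a, b) = (2.395, 2.079):

import numpy as np

# Design matrix [[x1, 1], [x2, 1], [x3, 1]] and observations [y1, y2, y3]
X = np.array([[2, 1], [0, 1], [5, 1]])
y = np.array([7, 2, 14])

# Least-squares solution [m, c]; rcond=None silences a deprecation warning.
m, c = np.linalg.lstsq(X, y, rcond=None)[0]
print(m, c)  # approximately 2.395 and 2.079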
If the system state does not change between measurements (also called a vacuous movement step), then the transition matrix is φ = I.
I'm not sure if what I'm going to say now is true or not, so please correct me if I am wrong.
The observation_covariance matrix must be of size m x m, where m is the number of observations (in our case 3). The diagonal elements are just the variances, variance_y1, variance_y2 and variance_y3, and the off-diagonal elements are covariances: for example, element (1,2) of the matrix is the covariance of y1 and y2 (a single covariance, not a product of standard deviations) and is equal to element (2,1). Similarly for the other elements. Can someone help me include uncertainty in x1, x2 and x3? I mean, how do you implement uncertainties in x in the above code?
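Regarding the covariance structure described above: under the common assumption of independent measurement errors, the off-diagonal covariances vanish and the matrix is just a diagonal of variances. A sketch with made-up standard deviations (note 0.4**2 = 0.16, close to the .1622 used in the code above):

import numpy as np

# Hypothetical measurement standard deviations for y1, y2, y3
sigma_y = np.array([0.4, 0.4, 0.4])

# Independent errors: diagonal covariance matrix of the variances sigma**2
observation_covariance = np.diag(sigma_y ** 2)
print(observation_covariance)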
I want to calculate the eigenvectors x of a system A by using this: A x = λ x
The problem is that I don't know how to solve for the eigenvalues using SymPy.
Here is my code. I want to get some values for x1 and x2 from matrix A.
from sympy import *

x1, x2, Lambda = symbols('x1 x2 Lambda')
I = eye(2)  # 2x2 identity (note: this shadows SymPy's imaginary unit I)
A = Matrix([[0, 2], [1, -3]])
equation = Eq(det(Lambda*I - A), 0)
D = solve(equation)
print([N(element, 4) for element in D])  # Eigenvalues in decimal form
print(pretty(D))                         # Eigenvalues in exact form

X = Matrix([[x1], [x2]])  # Eigenvectors
T = A*X - D[0]*X          # A*X = Lambda*X with the first eigenvalue Lambda = D[0]
print(pretty(solve(T, x1, x2)))
The methods eigenvals and eigenvects are what one would normally use here.
A.eigenvals() returns {-sqrt(17)/2 - 3/2: 1, -3/2 + sqrt(17)/2: 1} which is a dictionary of eigenvalues and their multiplicities. If you don't care about multiplicities, use list(A.eigenvals().keys()) to get a plain list of eigenvalues.
The output of eigenvects is a bit more complicated, and consists of triples (eigenvalue, multiplicity of this eigenvalue, basis of the eigenspace). Note that the multiplicity is algebraic multiplicity, while the number of eigenvectors returned is the geometric multiplicity, which may be smaller. The eigenvectors are returned as 1-column matrices for some reason...
For your matrix, A.eigenvects() returns the eigenvector [-2/(-sqrt(17)/2 + 3/2), 1] for the eigenvalue -3/2 + sqrt(17)/2, and eigenvector [-2/(3/2 + sqrt(17)/2), 1] for eigenvalue -sqrt(17)/2 - 3/2.
If you want the eigenvectors presented as plain lists of coordinates, the following
[list(tup[2][0]) for tup in A.eigenvects()]
would output [[-2/(-sqrt(17)/2 + 3/2), 1], [-2/(3/2 + sqrt(17)/2), 1]]. (Note this just picks one eigenvector for each eigenvalue, which is not always what you want)
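To make the triple structure concrete, here is a small sketch of unpacking it (variable names are mine):

from sympy import Matrix

A = Matrix([[0, 2], [1, -3]])
for eigenvalue, multiplicity, basis in A.eigenvects():
    # basis is a list of 1-column matrices spanning the eigenspace
    print(eigenvalue, multiplicity, [list(v) for v in basis])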
sympy has a very convenient way of getting eigenvalues and eigenvectors: sympy-doc
Your example would simply become:
from sympy import *
A = Matrix([[0, 2], [1, -3]])
print(A.eigenvals())   # returns eigenvalues and their algebraic multiplicity
print(A.eigenvects())  # returns (eigenvalue, multiplicity, eigenvectors) triples
This answer will help when you want all eigenvectors: the solution above doesn't always give you all of them, for example for the matrix A used below.
from sympy import Matrix

# the matrix
A = Matrix([
    [4, 0, 1],
    [2, 3, 2],
    [1, 0, 4]
])

sym_eignvects = []
for tup in A.eigenvects():
    for v in tup[2]:
        sym_eignvects.append(list(v))
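For this A, the loop should collect three eigenvectors: [0, 1, 0] and [-1, 0, 1] for the repeated eigenvalue 3, and [1, 2, 1] for the eigenvalue 5. Picking only tup[2][0], as in the list comprehension from the earlier answer, would drop one of the eigenvectors for the eigenvalue 3.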
What is the easiest and fastest way to interpolate between two arrays to get a new array?
For example, I have 3 arrays:
x = np.array([0,1,2,3,4,5])
y = np.array([5,4,3,2,1,0])
z = np.array([0,5])
x and y correspond to data points, and z is an argument: at z=0 the x array is valid, and at z=5 the y array is valid. But I need to get a new array for z=1. That case could easily be solved by:
a = (y-x)/(z[1]-z[0])*1+x
The problem is that the data is not linearly dependent, and there are more than 2 arrays with data. Maybe it is possible to somehow use spline interpolation?
This is a univariate-to-multivariate interpolation problem. SciPy supports univariate-to-univariate and multivariate-to-univariate interpolation, but you can iterate over the outputs, so this is not such a big problem. Below is an example of how it can be done. I've changed the variable names a bit and added a new point:
import numpy as np
from scipy.interpolate import interp1d

X = np.array([0, 5, 10])
Y = np.array([[0, 1, 2, 3, 4, 5],
              [5, 4, 3, 2, 1, 0],
              [8, 6, 5, 1, -4, -5]])

XX = np.array([0, 1, 5])  # Find YY for these
YY = np.zeros((len(XX), Y.shape[1]))

for i in range(Y.shape[1]):
    f = interp1d(X, Y[:, i])
    for j in range(len(XX)):
        YY[j, i] = f(XX[j])
So YY holds the results for XX. Hope it helps.
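In fact the loop can be avoided entirely: interp1d accepts an axis argument, so all columns can be interpolated in one call, and its kind parameter enables spline interpolation (quadratic is the highest order possible with only three data points). A sketch:

import numpy as np
from scipy.interpolate import interp1d

X = np.array([0, 5, 10])
Y = np.array([[0, 1, 2, 3, 4, 5],
              [5, 4, 3, 2, 1, 0],
              [8, 6, 5, 1, -4, -5]])
XX = np.array([0, 1, 5])

# Interpolate along the first axis (down the rows of Y) in one call.
f = interp1d(X, Y, axis=0, kind='quadratic')
YY = f(XX)
print(YY)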