Is there a direct way to calculate the state transition matrix (i.e. e^(A*t), where A is a matrix)?
I planned to calculate it in this way, but failed. And if I calculate A*t first and then apply expm(), it still does not work, since expm() cannot accept a symbolic variable.
I hope I have illustrated my problem clearly :)
EDIT: Here is the code that I think should be useful for solving my problem:
import numpy as np
import sympy
import scipy.linalg
from scipy.integrate import quad

Ts = 0.02
s = sympy.symbols('s')
t = sympy.symbols('t')

T0 = np.matrix([[1, 0, 0],
                [0, 1, 0],
                [0, -1, 1]])
M0 = np.matrix([[1.735, 0.15851, 0.042262],
                [0.123728, 0.07019322, 0.02070838],
                [0.042262, 0.0243628, 0.014375212]])
F0 = np.matrix([[-22.915, 0, 0],
                [0, -0.00969, 0.00264],
                [0, 0.00264, -0.00264]])
N0 = np.matrix([[0, 0, 0],
                [0, 1.553398, 0],
                [0, 0, 0.4141676]])
G0 = np.matrix([[11.887], [0], [0]])
Ky = np.matrix([1.0121, 4.5728, 6.3652, 0.9117, 1.5246, 0.9989])

A21 = T0 * (M0.I) * N0 * (T0.I)
A22 = T0 * (M0.I) * F0 * (T0.I)
Z = np.zeros((3, 3))
Y = (np.matrix([0, 0, 0])).T

# assemble the 6x6 system matrix A from the 3x3 blocks
by1 = np.vstack((Z, A21))
by2 = np.vstack((np.identity(3), A22))
A = np.column_stack((by1, by2))

G = scipy.linalg.expm(A * Ts)

B2 = T0 * (M0.I) * G0
B = np.vstack((Y, B2))

# symbolic (s*I - A)^-1, then entry-wise inverse Laplace transform
S1 = sympy.Matrix((s * np.identity(6)) - A)
S = S1.inv()
for (i, j), orinm in np.ndenumerate(S):
    S[i, j] = sympy.inverse_laplace_transform(orinm, s, t)

# integrate each entry of exp(A*t) from 0 to Ts
H = np.zeros(S.shape, dtype=float)
for (i, j), func_sympy in np.ndenumerate(S):
    func = sympy.lambdify(t, func_sympy, 'math')
    H[i, j] = quad(func, 0, 0.02)[0]
print(H)
You can directly calculate the matrix exponential using scipy.
import numpy as np
from scipy.linalg import expm
A = np.random.random((10, 10))
exp_A = expm(A)
The documentation for this is here. It uses the Padé approximation.
Here is an example using the 2x2 identity matrix.
>>> expm(np.eye(2))
array([[2.71828183, 0.        ],
       [0.        , 2.71828183]])
If you need the matrix exponential of a symbolic matrix (as per your comment) then you can do this with Sympy:
t = sympy.symbols('t')
A = sympy.Matrix([[t, 0], [0, t]])
>>> sympy.exp(A)
Matrix([
[exp(t),      0],
[     0, exp(t)]])
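Tying this back to the original question, here is a minimal sketch (using an illustrative 2x2 matrix of my own, not the 6x6 system above) of building the symbolic state transition matrix Phi(t) = exp(A*t) and evaluating it at the sampling time:
import sympy

t = sympy.symbols('t')
A = sympy.Matrix([[0, 1],
                  [-2, -3]])   # illustrative system matrix
Phi = (A * t).exp()            # symbolic matrix exponential exp(A*t)
print(Phi)
print(Phi.subs(t, 0.02))       # state transition matrix at Ts = 0.02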
Related
Take in two 3-dimensional vectors, each represented as an array, and tell whether they are linearly independent. I tried to use np.linalg.solve() to get the solution x and to check whether x is trivial or nontrivial, but it raises 'LinAlgError: Last 2 dimensions of the array must be square'. Can anyone help me figure this out?
from sympy import *
import numpy as np
from scipy import linalg
from numpy import linalg
v1 = np.array([0, 5, 0])
v2 = np.array([0, -10, 0])
a = np.array([v1,v2])
b = np.zeros(3)
x = np.linalg.solve(a, b)
Since your final matrix is rectangular, a simple eigenvalue-based approach will not work. You can use the sympy library instead:
import sympy
import numpy as np

matrix = np.array([
    [0, 5, 0],
    [0, -10, 0]
])
_, indexes = sympy.Matrix(matrix).T.rref()  # T is for transpose
print(indexes)
This will print the indexes of linearly independent rows. To further print them from the matrix, use
print(matrix[indexes,:])
To answer your specific question — whether two vectors are linearly dependent or not — you can simply check the number of independent rows afterwards if it is always two vectors you are checking:
if len(indexes) == 2:
    print("linearly independent")
else:
    print("linearly dependent")
If the matrix has a zero eigenvalue, its rows are linearly dependent. So the following code works for a simple (square) case:
import numpy as np

matrix = np.array([[0, 1, 0, 0],
                   [0, 0, 1, 0],
                   [0, 1, 1, 0],
                   [1, 0, 0, 1]])
lambdas, V = np.linalg.eig(matrix.T)
print(matrix[lambdas == 0, :])
Output: [[0 1 1 0]]
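As a hedged aside (not part of the original answers): np.linalg.matrix_rank also works for rectangular matrices, so it applies directly to the two stacked 3-dimensional vectors from the question:
import numpy as np

v1 = np.array([0, 5, 0])
v2 = np.array([0, -10, 0])

# Two vectors are linearly independent iff the stacked 2x3 matrix has rank 2.
rank = np.linalg.matrix_rank(np.array([v1, v2]))
print("linearly independent" if rank == 2 else "linearly dependent")  # dependent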
Is it possible to retrieve the Lagrange multipliers from scipy's linprog, like in MATLAB's linprog? If so, how?
I read the documentation but didn't find it. There is a return parameter called slack, but I think this is something different, because it is only related to the inequality constraints:
slack: 1D array
The (nominally positive) values of the slack variables, b_ub - A_ub @ x.
Thanks for the help!
Not implemented yet. See the scipy issue "How to get Lagrange / lambda multipliers out of 'linprog' optimize subroutine in scipy module?" (#11848).
Although my question was already answered by Arraval, I found a workaround that I want to share, also using scipy: linprog hasn't implemented this yet, but the minimize function can return the Lagrange multipliers when using method='trust-constr':
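The code from the original post is not shown here; the following is only a sketch of the idea on a small illustrative problem of my own (for method='trust-constr', the multipliers come back in result.v, one array per constraint):
import numpy as np
from scipy.optimize import minimize, LinearConstraint, Bounds

c = np.array([300.0, 500.0])
# maximize c @ x  ->  minimize -c @ x, subject to x1 + x2 == 80 and x >= 0
con = LinearConstraint([[1.0, 1.0]], lb=80, ub=80)
result = minimize(lambda x: -c @ x, x0=np.array([40.0, 40.0]),
                  jac=lambda x: -c,
                  constraints=[con],
                  bounds=Bounds(0, np.inf),
                  method="trust-constr")
print(result.v)  # list of Lagrange-multiplier arrays, one per constraint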
I hope this helps.
Starting from scipy 1.7.0, one can also receive the Lagrange multipliers (also known as dual values or shadow prices) by using the HiGHS dual simplex solver:
import numpy as np
from scipy.optimize import linprog
c = -1*np.array([300, 500])
A_ub = np.array([[1, 2], [1, 1], [0, 3]])
b_ub = np.array([170, 150, 180])
A_eq = np.array([[1, 1]])
b_eq = np.array([80])
# minimize c @ x  s.t.  A_ub @ x <= b_ub,  A_eq @ x == b_eq,  x >= 0
result = linprog(c=c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs-ds")
# Lagrange multipliers (dual values)
λ_ineq = result['ineqlin']['marginals']
λ_eq = result['eqlin']['marginals']
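Note that with the HiGHS solvers the fields can also be read as attributes, e.g. result.eqlin.marginals and result.ineqlin.marginals (based on my reading of the scipy >= 1.7 API).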
I want to solve a matrix differential equation, like this one:
import numpy as np
from scipy.integrate import odeint
def deriv(A, t, Ab):
    return np.dot(Ab, A)

Ab = np.array([[-0.25, 0, 0],
               [0.25, -0.2, 0],
               [0, 0.2, -0.1]])
time = np.linspace(0, 25, 101)
A0 = np.array([10, 20, 30])
MA = odeint(deriv, A0, time, args=(Ab,))
However, this does not work when the matrix elements are complex. I am looking for something similar to scipy.integrate.complex_ode, but for odeint. If this is not possible, what other library should I use to perform the integration? I appreciate your help!
The odeintw wrapper for odeint can be used in the same fashion as odeint in the question; however, the initial value A0 must be a complex-valued vector.
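A minimal sketch, assuming the odeintw package is installed (pip install odeintw); the only changes from the question's code are the complex dtypes and the odeintw call:
import numpy as np
from odeintw import odeintw

def deriv(A, t, Ab):
    return np.dot(Ab, A)

# complex-valued system matrix and initial condition
Ab = np.array([[-0.25, 0, 0],
               [0.25, -0.2, 0],
               [0, 0.2, -0.1]], dtype=complex)
time = np.linspace(0, 25, 101)
A0 = np.array([10, 20, 30], dtype=complex)

MA = odeintw(deriv, A0, time, args=(Ab,))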
I have an array of values a = (2,3,0,0,4,3)
y = 0
for x in a:
    y = (y + x) * .95
Is there any way to use cumsum in numpy and apply the .95 decay to each row before adding the next value?
You're asking for a simple IIR filter; SciPy's lfilter() is made for that. Rearranging the recurrence y[n] = .95*(y[n-1] + x[n]) into y[n] - .95*y[n-1] = .95*x[n] gives the filter coefficients directly: numerator b = [.95] and denominator a = [1, -.95].
import numpy as np
from scipy.signal import lfilter

data = np.array([2, 3, 0, 0, 4, 3], dtype=float)  # lfilter wants floats

# Conventional approach:
result_conv = []
last_value = 0
for elmt in data:
    last_value = (last_value + elmt) * .95
    result_conv.append(last_value)

# IIR filter:
result_IIR = lfilter([.95], [1, -.95], data)

if np.allclose(result_IIR, result_conv, 1e-12):
    print("Values are equal.")
If you're only dealing with a 1D array, then short of scipy conveniences or writing a custom reduce ufunc for numpy, in Python 3.3+ you can use itertools.accumulate, e.g.:
from itertools import accumulate

a = (2, 3, 0, 0, 4, 3)
y = list(accumulate(a, lambda x, y: (x + y) * 0.95))
# [2, 4.75, 4.5125, 4.286875, 7.87253125, 10.3289046875]
# Note: accumulate passes the first element through unchanged, so the first
# value is 2 here, whereas the loop in the question would yield 1.9.
Numba provides an easy way to vectorize a function, creating a universal function (thus providing ufunc.accumulate):
import numpy
from numba import vectorize, float64

@vectorize([float64(float64, float64)])
def f(x, y):
    return 0.95 * (x + y)

>>> a = numpy.array([2, 3, 0, 0, 4, 3])
>>> f.accumulate(a)
array([ 2.        ,  4.75      ,  4.5125    ,  4.286875  ,
        7.87253125, 10.32890469])
I don't think that this can be done easily in NumPy alone, without using a loop.
One array-based idea would be to calculate the matrix M_ij = .95**i * a[N-j] (where N is the number of elements in a). The numbers that you are looking for are found by summing entries diagonally (with i-j constant). You could thus use multiple numpy.diagonal(…).sum() calls.
The good old algorithm that you outline is clearer and probably quite fast already (otherwise you can use Cython).
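For what it's worth, a loop-free variant of the same idea can be had by rescaling before and after a plain cumsum; a hedged sketch (numerically fragile for long arrays, since the powers of .95 can under- or overflow):
import numpy as np

a = np.array([2, 3, 0, 0, 4, 3], dtype=float)
k = np.arange(len(a))
# y[n] = sum_{j<=n} .95**(n-j+1) * a[j]  ==  .95**(n+1) * cumsum(a[j] / .95**j)
y = 0.95 ** (k + 1) * np.cumsum(a / 0.95 ** k)
# [1.9, 4.655, 4.42225, 4.2011375, 7.791080625, 10.25152659375]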
Doing what you want through NumPy without a single loop sounds like wizardry to me. Hats off to anybody who can pull this off.
I'm trying to create two random variables which are correlated with one another, and I believe the best way is to draw from a bivariate normal distribution with given parameters (open to other ideas). The uncorrelated version looks like this:
import numpy as np
sigma = np.random.uniform(.2, .3, 80)
theta = np.random.uniform( 0, .5, 80)
However, for each one of the 80 draws, I want the sigma value to be related to the theta value. Any thoughts?
Use the built-in numpy.random.multivariate_normal: http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.multivariate_normal.html
>>> import numpy as np
>>> mymeans = [13,5]
>>> # stdevs = sqrt(5), sqrt(2)
>>> # corr = .3 / (sqrt(5)*sqrt(2)) ≈ .095
>>> mycov = [[5,.3], [.3,2]]
>>> np.cov(np.random.multivariate_normal(mymeans,mycov,500000).T)
array([[ 4.99449936,  0.30506976],
       [ 0.30506976,  2.00213264]])
>>> np.corrcoef(np.random.multivariate_normal(mymeans,mycov,500000).T)
array([[ 1.        ,  0.09629313],
       [ 0.09629313,  1.        ]])
As shown, things get a little hairier if you have to adjust for non-unit variances.
More reference: http://www.riskglossary.com/link/correlation.htm
To be real-world meaningful, the covariance matrix must be symmetric and positive semidefinite (and positive definite if it is to be invertible). Particular anti-correlation structures might not be possible.
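A quick way to verify this, as a hedged aside: a symmetric matrix is positive definite exactly when a Cholesky factorization succeeds.
import numpy as np

mycov = np.array([[5.0, 0.3],
                  [0.3, 2.0]])
try:
    np.linalg.cholesky(mycov)  # succeeds iff mycov is positive definite
    print("usable as a (nondegenerate) covariance matrix")
except np.linalg.LinAlgError:
    print("not positive definite")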
multivariate_normal from scipy.stats can also be used. Suppose we create random variables x and y:
from scipy.stats import multivariate_normal
rv_mean = [0, 1] # mean of x and y
rv_cov = [[1.0,0.5], [0.5,2.0]] # covariance matrix of x and y
rv = multivariate_normal.rvs(rv_mean, rv_cov, size=10000)
You have x from rv[:,0] and y from rv[:,1]. Correlation coefficients can be obtained from
import numpy as np
np.corrcoef(rv.T)  # off-diagonal should come out near 0.5/sqrt(1.0*2.0) ≈ 0.35