Is it possible to retrieve the Lagrange multipliers from scipy's linprog, like in Matlab's linprog? If so, how?
I read the documentation but didn't find it. There is a return parameter called slack, but I think this is something different, because it relates only to the inequality constraints:
slack: 1D array
The (nominally positive) values of the slack variables, b_ub - A_ub @ x.
Thanks for the help!
Not implemented yet. See How to get Lagrange / lambda multipliers out of 'linprog' optimize subroutine in scipy module ? #11848.
Although my question was already answered by Arraval, I found a workaround that I want to share, also using scipy. linprog hasn't implemented this yet, but the minimize function can return the Lagrange multipliers when using method='trust-constr':
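A minimal sketch of this workaround (the quadratic objective and linear constraint below are made up for illustration; the multipliers are reported in res.v, one array per constraint object):

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint

# Toy problem: minimize x0^2 + x1^2 subject to x0 + x1 >= 1.
# At the optimum (0.5, 0.5) the constraint is active with multiplier magnitude 1.
objective = lambda x: x[0]**2 + x[1]**2
con = LinearConstraint([[1, 1]], 1, np.inf)

res = minimize(objective, [2.0, 0.0], method="trust-constr", constraints=[con])

print(res.x)  # approximately [0.5, 0.5]
print(res.v)  # list of Lagrange multiplier arrays, one per constraint
```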
I hope this helps.
Starting from scipy 1.7.0, one can also retrieve the Lagrange multipliers (also known as dual values or shadow prices) by using the HiGHS dual simplex solver:
import numpy as np
from scipy.optimize import linprog
c = -1*np.array([300, 500])
A_ub = np.array([[1, 2], [1, 1], [0, 3]])
b_ub = np.array([170, 150, 180])
A_eq = np.array([[1, 1]])
b_eq = np.array([80])
# solve min c'x s.t. A_ub*x <= b_ub, A_eq*x == b_eq, x >= 0
result = linprog(c=c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs-ds")
# Lagrange multipliers
λ_ineq = result['ineqlin']['marginals']
λ_eq = result['eqlin']['marginals']
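As a sanity check on the marginals (repeating the setup here so the snippet is self-contained), strong duality says the dual objective built from the multipliers reproduces the primal optimum:

```python
import numpy as np
from scipy.optimize import linprog

c = -1 * np.array([300, 500])
A_ub = np.array([[1, 2], [1, 1], [0, 3]])
b_ub = np.array([170, 150, 180])
A_eq = np.array([[1, 1]])
b_eq = np.array([80])

result = linprog(c=c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                 method="highs-ds")
λ_ineq = result.ineqlin.marginals  # sensitivity of the objective to b_ub
λ_eq = result.eqlin.marginals      # sensitivity of the objective to b_eq

# Strong duality: the primal optimum equals b_ub·λ_ineq + b_eq·λ_eq.
dual_obj = b_ub @ λ_ineq + b_eq @ λ_eq
print(result.fun, dual_obj)  # both -36000.0
```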
I am trying to find the covariance matrix of all possible images (flattened), with each pixel in {0,1}.
I have written the following code using numpy:
import numpy as np
a = np.array(np.meshgrid([1,0], [1, 0], [1, 0],[1,0],[0,1])).T.reshape(-1,5)
a = np.transpose(a)
covariance = np.cov(a)
print(covariance)
I get 0.25806452 on the diagonal, but I think the diagonal should be exactly 0.25.
Can anyone explain why it isn't 0.25?
It is being normalised by 1/(N-1), not 1/N. Set the ddof parameter to change this behaviour.
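For instance, with the same 32 binary vectors as in the question:

```python
import numpy as np

# All 32 binary vectors of length 5, one observation per column.
a = np.array(np.meshgrid(*[[1, 0]] * 5)).T.reshape(-1, 5).T

cov_default = np.cov(a)          # normalised by 1/(N-1), N = 32
cov_biased = np.cov(a, ddof=0)   # normalised by 1/N

print(cov_default[0, 0])  # 0.25806... (= 0.25 * 32/31)
print(cov_biased[0, 0])   # 0.25
```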
Is there a direct way to calculate the state transition matrix (i.e. e^(A*t), where A is a matrix)?
I planned to calculate it in this way:
but failed:
And if I directly calculate A*t first and then use expm(), it still doesn't work, since expm() cannot handle a symbolic variable.
I hope I illustrate my problem clearly :)
EDIT: Here is the code I think should be useful to solve my problem:
import numpy as np
import scipy.linalg
import sympy
from scipy.integrate import quad

Ts = 0.02
s = sympy.symbols('s')
t = sympy.symbols('t')
T0 = np.matrix([[1, 0, 0],
                [0, 1, 0],
                [0, -1, 1]])
M0 = np.matrix([[1.735, 0.15851, 0.042262],
                [0.123728, 0.07019322, 0.02070838],
                [0.042262, 0.0243628, 0.014375212]])
F0 = np.matrix([[-22.915, 0, 0],
                [0, -0.00969, 0.00264],
                [0, 0.00264, -0.00264]])
N0 = np.matrix([[0, 0, 0],
                [0, 1.553398, 0],
                [0, 0, 0.4141676]])
G0 = np.matrix([[11.887], [0], [0]])
Ky = np.matrix([1.0121, 4.5728, 6.3652, 0.9117, 1.5246, 0.9989])
A21 = T0*(M0.I)*N0*(T0.I)
A22 = T0*(M0.I)*F0*(T0.I)
Z = np.zeros((3, 3))
Y = (np.matrix([0, 0, 0])).T
by1 = np.row_stack((Z, A21))
by2 = np.row_stack((np.identity(3), A22))
A = np.column_stack((by1, by2))
G = scipy.linalg.expm(A*Ts)  # needs `import scipy.linalg`, not just `import scipy`
B2 = T0*(M0.I)*G0
B = np.row_stack((Y, B2))
S1 = sympy.Matrix((s*np.identity(6)) - A)
S2 = S1.inv()
S = S2.copy()  # copy, so the s-domain matrix S2 is not overwritten by the loop
for (i, j), orinm in np.ndenumerate(S2):  # np.ndenumerate; scipy has no ndenumerate
    S[i, j] = sympy.inverse_laplace_transform(orinm, s, t)
# integrate the time-domain entries over one sampling period
H = np.zeros(S.shape, dtype=float)
for (i, j), func_sympy in np.ndenumerate(S):
    func = sympy.lambdify(t, func_sympy, 'math')
    H[i, j] = quad(func, 0, Ts)[0]
print(H)
You can directly calculate the matrix exponential using scipy.
import numpy as np
from scipy.linalg import expm
A = np.random.random((10, 10))
exp_A = expm(A)
The documentation for this is here. It uses the Padé approximation.
Here is an example using the 2x2 identity matrix.
>>> expm(np.eye(2))
array([[2.71828183, 0. ],
[0. , 2.71828183]])
If you need the matrix exponential of a symbolic matrix (as per your comment) then you can do this with Sympy:
t = sympy.symbols('t')
A = sympy.Matrix([[t, 0], [0, t]])
>>> sympy.exp(A)
Matrix([
[exp(t),      0],
[     0, exp(t)]])
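If a numeric matrix is needed afterwards (as in the question), the symbolic exponential can be lambdified and evaluated at a concrete t. A small sketch with a made-up diagonal matrix:

```python
import numpy as np
import sympy
from scipy.linalg import expm

t = sympy.symbols('t')
A_sym = sympy.Matrix([[t, 0], [0, 2*t]])  # made-up symbolic matrix
E = A_sym.exp()                           # Matrix([[exp(t), 0], [0, exp(2*t)]])
f = sympy.lambdify(t, E, 'numpy')         # symbolic result -> numeric function

# Evaluate at t = 0.5 and compare with scipy's numeric expm.
numeric = f(0.5)
reference = expm(np.array([[0.5, 0.0], [0.0, 1.0]]))
print(np.allclose(numeric, reference))
```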
Using numpy.linalg.solve to solve a linear algebra equation, but receiving _assertNdSquareness and "last 2 dimensions of the array must be square" errors:
Any help is appreciated; here's my code:
from numpy import array, linalg

c = array([[1, 1, 1], [.07, .08, .09]])
d = array([24000, 1870])
z = linalg.solve(c, d)
print(z)
You cannot use numpy.linalg.solve for non-square matrices. As the documentation mentions, "a must be square and of full-rank, i.e., all rows (or, equivalently, columns) must be linearly independent". Your matrix is not square, but the documentation also covers this case: "if either is not true, use lstsq for the least-squares best 'solution' of the system/equation".
An example of this is below, and should work for you;
import numpy
from numpy import array, linalg

c = array([[1, 1, 1], [.07, .08, .09]])
d = array([24000, 1870])
z = linalg.lstsq(c, d, rcond=None)[0]
print(z)
# compare d and c*z to be sure
print(numpy.allclose(d,numpy.dot(c,z))) # should be true
I want to solve a matrix differential equation, like this one:
import numpy as np
from scipy.integrate import odeint
def deriv(A, t, Ab):
    return np.dot(Ab, A)
Ab = np.array([[-0.25, 0, 0],
[ 0.25, -0.2, 0],
[ 0, 0.2, -0.1]])
time = np.linspace(0, 25, 101)
A0 = np.array([10, 20, 30])
MA = odeint(deriv, A0, time, args=(Ab,))
However, this does not work in the case of having complex matrix elements. I am looking for something similar to scipy.integrate.complex_ode but for odeint. If this is not possible, what other library should I use to perform the integration? I appreciate your help!
The odeintw wrapper for odeint can be used in the same fashion as odeint in the question; however, the initial value A0 must then be a complex-valued vector.
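If pulling in odeintw is not an option, the standard workaround is to split the system into real and imaginary parts so that plain odeint only ever sees real numbers. A sketch with a made-up complex Ab (writing Ab = P + iQ and A = u + iv gives d/dt [u; v] = [[P, -Q], [Q, P]] [u; v]):

```python
import numpy as np
from scipy.integrate import odeint
from scipy.linalg import expm

# Made-up complex system matrix and initial state for illustration.
Ab = np.array([[-0.25 + 0.5j, 0, 0],
               [0.25, -0.2 + 0.1j, 0],
               [0, 0.2, -0.1j]])
A0 = np.array([10, 20, 30], dtype=complex)
time = np.linspace(0, 25, 101)

# Equivalent real system: stack real and imaginary parts.
M = np.block([[Ab.real, -Ab.imag],
              [Ab.imag, Ab.real]])
y0 = np.concatenate([A0.real, A0.imag])

def deriv_real(y, t, M):
    return M @ y

sol = odeint(deriv_real, y0, time, args=(M,))
MA = sol[:, :3] + 1j * sol[:, 3:]  # reassemble the complex solution

# Cross-check the final state against the exact solution expm(Ab*t) @ A0.
exact = expm(Ab * time[-1]) @ A0
print(np.allclose(MA[-1], exact, rtol=1e-4, atol=1e-4))
```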
I have an array of values a = (2,3,0,0,4,3)
y = 0
for x in a:
    y = (y + x)*.95
Is there any way to use cumsum in numpy and apply the .95 decay to each row before adding the next value?
You're asking for a simple IIR Filter. Scipy's lfilter() is made for that:
import numpy as np
from scipy.signal import lfilter
data = np.array([2, 3, 0, 0, 4, 3], dtype=float) # lfilter wants floats
# Conventional approach:
result_conv = []
last_value = 0
for elmt in data:
    last_value = (last_value + elmt)*.95
    result_conv.append(last_value)

# IIR Filter:
result_IIR = lfilter([.95], [1, -.95], data)

if np.allclose(result_IIR, result_conv, 1e-12):
    print("Values are equal.")
If you're only dealing with a 1D array, then short of scipy conveniences or writing a custom reduce ufunc for numpy, then in Python 3.3+, you can use itertools.accumulate, eg:
from itertools import accumulate
a = (2,3,0,0,4,3)
y = list(accumulate(a, lambda x,y: (x+y)*0.95))
# [2, 4.75, 4.5125, 4.286875, 7.87253125, 10.3289046875]
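Note that accumulate seeds the running value with the first element, so the first term above is not scaled by 0.95 as in the original loop. On Python 3.8+ you can pass initial=0 and drop the leading seed to match the loop exactly:

```python
from itertools import accumulate

a = (2, 3, 0, 0, 4, 3)
# initial=0 seeds the accumulator like the loop's y = 0;
# [1:] drops the seed value itself from the output.
y = list(accumulate(a, lambda acc, x: (acc + x) * 0.95, initial=0))[1:]
print(y[0])  # 1.9, matching y = (0 + 2)*.95 from the loop
```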
Numba provides an easy way to vectorize a function, creating a universal function (thus providing ufunc.accumulate):
import numpy
from numba import vectorize, float64
@vectorize([float64(float64, float64)])
def f(x, y):
    return 0.95 * (x + y)
>>> a = numpy.array([2, 3, 0, 0, 4, 3])
>>> f.accumulate(a)
array([ 2. , 4.75 , 4.5125 , 4.286875 ,
7.87253125, 10.32890469])
I don't think that this can be done easily in NumPy alone, without using a loop.
One array-based idea would be to calculate the matrix M_ij = .95**i * a[N-j] (where N is the number of elements in a). The numbers that you are looking for are found by summing entries diagonally (with i-j constant). You could thus use multiple numpy.diagonal(…).sum() calls.
The good old algorithm that you outline is clearer and probably quite fast already (otherwise you can use Cython).
Doing what you want through NumPy without a single loop sounds like wizardry to me. Hats off to anybody who can pull this off.
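For completeness, a loop-free sketch of the array-based idea: unrolling the recurrence gives y_n = sum over k <= n of 0.95**(n-k+1) * a[k], which is a lower-triangular matrix-vector product (O(n^2) memory, so only sensible for short arrays):

```python
import numpy as np

a = np.array([2, 3, 0, 0, 4, 3], dtype=float)
n = len(a)
i = np.arange(n)
# W[n, k] = 0.95**(n - k + 1) for k <= n; np.tril zeroes the k > n entries.
W = np.tril(0.95 ** (i[:, None] - i[None, :] + 1))
y = W @ a
print(y)  # y[0] = 1.9, same as the loop
```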