Context: I'm doing a bunch of simulations that require me to implement different Hamiltonians. These Hamiltonians are just matrices, built out of Kronecker products of some common elements, with prefactors that I have to calculate from the system parameters. E.g., using ⊗ for the Kronecker product:
H = w1(a,b,c) * sigmax ⊗ I + w2(x,y,z)*I ⊗ sigmay
I was hoping I could make a simple parser that could read in the values of a,b,c,x,y,z and an expression for the Hamiltonian and construct the necessary matrix. Sympy seems like an obvious candidate, but I can't get a matrix expression to build using strings.
from sympy import symbols, Matrix, MatrixSymbol, MatrixExpr, eye, sympify
from sympy.physics.matrices import msigma
from sympy.physics.quantum import TensorProduct
w1,w2 = symbols('w1 w2')
X1 = MatrixSymbol('X1',4,4)
X2 = MatrixSymbol('X2',4,4)
x = msigma(1)
x_1 = TensorProduct(eye(2),x)
x_2 = TensorProduct(x,eye(2))
exp = w1*X1 + w2*X2
exp.subs([(w1,0.5),(w2,2),(X1,x_1),(X2,x_2)]).as_explicit()
will work. But, trying
exp = MatrixExpr('w1*X1+w2*X2')
or
exp = MatrixExpr(sympify('w1*X1+w2*X2'))
or even
exp = sympify('w1*X1 + w2*X2')
exp.subs([(w1,0.5),(w2,2),(X1,x_1),(X2,x_2)])
won't.
It also won't work if I change w1 or w2 to be 1x1 instances of a MatrixSymbol.
What am I doing wrong here? This is my first time using sympy, so I'm well aware that I may just be missing something.
Let's look at what's going on in a simpler case:
exp = sympify('w1*X1'); right_exp = w1*X1
type(exp), type(right_exp)
Out[47]: (sympy.core.mul.Mul, sympy.matrices.expressions.matmul.MatMul)
Looks like sympify doesn't understand that X1 is a matrix. So if we state this explicitly, everything will be all right:
exp = sympify("w1*MatrixSymbol('X1',4,4)")
exp.subs([(w1,0.5),(X1,x_1)]).as_explicit()
Out[49]:
Matrix([
[ 0, 0.5, 0, 0],
[0.5, 0, 0, 0],
[ 0, 0, 0, 0.5],
[ 0, 0, 0.5, 0]])
right_exp.subs([(w1,0.5),(X1,x_1)]).as_explicit()
Out[50]:
Matrix([
[ 0, 0.5, 0, 0],
[0.5, 0, 0, 0],
[ 0, 0, 0, 0.5],
[ 0, 0, 0.5, 0]])
And the final statement:
exp = sympify("w1*MatrixSymbol('X1',4,4)+w2*MatrixSymbol('X2',4,4)")
exp.subs([(w1,0.5),(w2,2),(X1,x_1),(X2,x_2)]).as_explicit()
Out[63]:
Matrix([
[ 0, 0.5, 2, 0],
[0.5, 0, 0, 2],
[ 2, 0, 0, 0.5],
[ 0, 2, 0.5, 0]])
What's going on? If you read Basics of expressions in SymPy, you'll find the statement that "matrices aren't sympifiable", so sympify interprets X1 as an ordinary Symbol.
It's hard to say how it will behave in other situations. There is a note in the docs that warns:
Sometimes autosimplification during sympification results in
expressions that are very different in structure than what was
entered.
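If the goal is still to build the whole Hamiltonian from a string, another option (a minimal sketch, not the only way) is to tell sympify which names are matrices via its locals argument, so the string itself stays free of MatrixSymbol calls:
from sympy import symbols, eye, sympify, MatrixSymbol
from sympy.physics.matrices import msigma
from sympy.physics.quantum import TensorProduct
w1, w2 = symbols('w1 w2')
X1 = MatrixSymbol('X1', 4, 4)
X2 = MatrixSymbol('X2', 4, 4)
# Telling the parser that X1 and X2 are matrices makes it build a MatAdd of
# MatMuls instead of a plain Mul of Symbols.
expr = sympify('w1*X1 + w2*X2', locals={'X1': X1, 'X2': X2})
x = msigma(1)
x_1 = TensorProduct(eye(2), x)
x_2 = TensorProduct(x, eye(2))
print(expr.subs([(w1, 0.5), (w2, 2), (X1, x_1), (X2, x_2)]).as_explicit())
This keeps the Hamiltonian itself as a plain string, so it can be read from a file together with the parameter values.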
I am playing around with spectral properties of differential operators. To get a feel for things
I decided to start out with computing the eigenvalues and eigenvectors of the 1-D Laplacian with periodic boundary conditions
Lap =
[[-2, 1, 0, 0, ..., 1],
[ 1,-2, 1, 0, ..., 0],
[ 0, 1,-2, 1, ..., 0],
...
...
[ 0, 0, ..., 1,-2, 1],
[ 1, 0, ..., 0, 1,-2]]
So I run the following
import numpy as np
import scipy.linalg as scilin
N = 12
Lap = np.zeros((N, N))
for i in range(N):
    Lap[i, i] = -2
    Lap[i, (i+1)%N] = 1
    Lap[i, (i-1)%N] = 1
eigvals, eigvecs = scilin.eigh(Lap)
where
> print(eigvals)
[-4.00000000e+00 -3.73205081e+00 -3.73205081e+00 -3.00000000e+00
-3.00000000e+00 -2.00000000e+00 -2.00000000e+00 -1.00000000e+00
-1.00000000e+00 -2.67949192e-01 -2.67949192e-01 9.43689571e-16]
which is what I expect. However, I decided to verify that these eigenvalues and eigenvectors
are actually correct. What I end up with is
> (Lap - eigvals[0]*np.identity(N)).dot(eigvecs[0])
array([ 0.28544445, 0.69044928, 0.83039882, 0.03466493, -0.79854101,
-0.81598463, -0.78119579, -0.7445237 , -0.769496 , -0.79741997,
-1.09625463, -0.69683007])
I expect to get the zero vector. So what is going on here?
As mentioned in the comment by @Warren, the eigenvectors are the columns of eigvecs, whereas in NumPy indexing eigvecs[0] is the first row of eigvecs. To fix it:
print((Lap - eigvals[0]*np.eye(N)) @ eigvecs[:, 0])
[-6.66133815e-16 2.55351296e-15 -1.77635684e-15 1.11022302e-16
5.55111512e-16 -2.22044605e-16 -3.66373598e-15 -4.44089210e-16
7.77156117e-16 -1.11022302e-16 -1.66533454e-15 2.22044605e-15]
which is essentially the zero vector (the tiny nonzero entries are due to floating-point precision).
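As a quick sanity check, the column convention means Lap @ eigvecs should equal eigvecs with each column scaled by its eigenvalue, so every eigenpair can be verified at once. A minimal sketch reusing the setup from the question:
import numpy as np
import scipy.linalg as scilin
N = 12
Lap = np.zeros((N, N))
for i in range(N):
    Lap[i, i] = -2
    Lap[i, (i+1)%N] = 1
    Lap[i, (i-1)%N] = 1
eigvals, eigvecs = scilin.eigh(Lap)
# eigvecs[:, j] is the j-th eigenvector, and eigvecs * eigvals scales
# column j by eigvals[j], so both sides should agree up to rounding.
print(np.allclose(Lap @ eigvecs, eigvecs * eigvals))   # True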
I am new to using the library SymPy. I need to extract all coefficients of the characteristic polynomial to be used later.
For example, my code is:
import sympy as sp
M = sp.Matrix([[0, 0, 0, 1, 0, 1], [0, 0, 0, 0, 1, 0], [0, 1, 0, 1, 0, -1], [1, 0, -1, 0, 1, 0], [0, 0, 0, 1, 0, 0], [-1, 0, 1, 0, 0, 0]])
lamda = sp.symbols('lamda')
p = M.charpoly(lamda)
print(p)
print(p.coeffs())
which gives output:
PurePoly(lamda**6 + lamda**4 - lamda**2, lamda, domain='ZZ')
[1, 1, -1]
However, I need [1, 0, 1, 0, -1, 0, 0], which also includes the zero coefficients for the lamda**5, lamda**3, lamda**1, and lamda**0 terms. I would normally use a for loop to iterate over the polynomial to see which terms are missing, so a zero can be inserted into the appropriate spot in the array of coefficients. However, when I attempted to do so, I received an error saying the PurePoly type doesn't support indexing. So, I was wondering if anyone knows how to make SymPy include the zeros, or a way to do it myself? I will eventually have to incorporate this code into a loop over lots of matrices, so I can't do it manually.
Thanks.
When I have questions like this, I hope for some sort of intelligent naming of methods and look through the object's directory:
>>> print([w for w in dir(p) if 'coeff' in w])
['all_coeffs', 'as_coeff_Add', 'as_coeff_Mul', ...]
That all_coeffs is the one you want:
>>> help(p.all_coeffs)
Help on method all_coeffs in module sympy.polys.polytools:
all_coeffs(f) method of sympy.polys.polytools.PurePoly instance
Returns all coefficients from a univariate polynomial ``f``.
>>> p.all_coeffs()
[1, 0, 1, 0, -1, 0, 0]
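Since this will eventually run inside a loop over many matrices, here is a minimal sketch of how that could look (the matrices below are just placeholders):
import sympy as sp
lamda = sp.symbols('lamda')
# Hypothetical collection of matrices to process.
matrices = [
    sp.Matrix([[0, 1], [-1, 0]]),
    sp.Matrix([[2, 0], [0, 3]]),
]
# all_coeffs() returns the dense coefficient list, zeros included, ordered
# from the highest power of lamda down to the constant term.
coeff_lists = [M.charpoly(lamda).all_coeffs() for M in matrices]
print(coeff_lists)   # [[1, 0, 1], [1, -5, 6]]
Because the characteristic polynomial of an N x N matrix is monic of degree N, each list always has N + 1 entries.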
A rotation matrix can be written in either of two ways.
With row unit vectors,
[ ex_0 ex_1 ex_2]
[ ey_0 ey_1 ey_2]
[ ez_0 ez_1 ez_2]
Or with column unit vectors,
[ ex_0 ey_0 ez_0]
[ ex_1 ey_1 ez_1]
[ ex_2 ey_2 ez_2]
In the following code, which definition is being used? Is the DCM in row or column format?
from sympy.vector import CoordSysCartesian
from sympy import symbols, pi, Matrix
theta = symbols('theta')
N = CoordSysCartesian('N')
A = N.orient_new_axis('A', theta, N.k)
dcm = N.rotation_matrix(A).subs({'theta':pi/2})
Output,
Matrix([
[0, -1, 0],
[1, 0, 0],
[0, 0, 1]])
I tried to use hmmlearn from GitHub to run a binary hidden Markov model. This does not work:
import numpy as np
import hmmlearn.hmm as hmm
transmat = np.array([[0.7, 0.3],
[0.3, 0.7]])
emitmat = np.array([[0.9, 0.1],
[0.2, 0.8]])
obs = np.array([0, 0, 1, 0, 0])
startprob = np.array([0.5, 0.5])
h = hmm.MultinomialHMM(n_components=2, startprob=startprob,
transmat=transmat)
h.emissionprob_ = emitmat
# fails
h.fit([0, 0, 1, 0, 0])
# fails
h.decode([0, 0, 1, 0, 0])
print h
I get this error:
ValueError: zero-dimensional arrays cannot be concatenated
What is the right way to use this module? Note I am using the version of hmmlearn that was separated from sklearn, because apparently sklearn doesn't maintain hmmlearn anymore.
fit accepts a list of sequences, not a single sequence (in general you can have multiple independent sequences, observed from different runs of your experiment). Thus, simply put your list inside another list:
import hmmlearn.hmm as hmm
import numpy as np
transmat = np.array([[0.7, 0.3],
[0.3, 0.7]])
emitmat = np.array([[0.9, 0.1],
[0.2, 0.8]])
startprob = np.array([0.5, 0.5])
h = hmm.MultinomialHMM(n_components=2, startprob=startprob,
transmat=transmat)
h.emissionprob_ = emitmat
# works fine
h.fit([[0, 0, 1, 0, 0]])
# h.fit([[0, 0, 1, 0, 0], [0, 0], [1,1,1]]) # this is the reason for such
# syntax, you can fit to multiple
# sequences
print h.decode([0, 0, 1, 0, 0])
print h
gives
(-4.125363362578882, array([1, 1, 1, 1, 1]))
MultinomialHMM(algorithm='viterbi',
init_params='abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ',
n_components=2, n_iter=10,
params='abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ',
random_state=<mtrand.RandomState object at 0x7fe245ac7510>,
startprob=None, startprob_prior=1.0, thresh=0.01, transmat=None,
transmat_prior=1.0)
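Note that the constructor arguments above come from an older hmmlearn release. A rough sketch assuming a 0.2.x-era hmmlearn, where parameters are assigned as attributes after construction and observations are passed as a 2-D column of symbol indices (in the newest releases this categorical behaviour has moved to CategoricalHMM):
import numpy as np
from hmmlearn import hmm
h = hmm.MultinomialHMM(n_components=2)
# Parameters are set as trailing-underscore attributes, not constructor args.
h.startprob_ = np.array([0.5, 0.5])
h.transmat_ = np.array([[0.7, 0.3],
                        [0.3, 0.7]])
h.emissionprob_ = np.array([[0.9, 0.1],
                            [0.2, 0.8]])
# One sequence as a column of symbol indices; multiple sequences would be
# concatenated and described with a lengths argument, e.g. h.fit(X, lengths=[5, 3]).
X = np.array([[0], [0], [1], [0], [0]])
print(h.decode(X))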
If I ask SymPy to row-reduce the singular matrix
from sympy import Symbol, Matrix
nu = Symbol('nu')
lamb = Symbol('lambda')
A3 = Matrix([[-3*nu, 1, 0, 0],
[3*nu, -2*nu-1, 2, 0],
[0, 2*nu, (-1 * nu) - lamb - 2, 3],
[0, 0, nu + lamb, -3]])
print A3.rref()
then it returns the identity matrix
(Matrix([
[1, 0, 0, 0],
[0, 1, 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1]]), [0, 1, 2, 3])
which it shouldn't do, since the matrix is singular. Why is SymPy giving me the wrong answer and how can I get it to give me the right answer?
I know SymPy knows the matrix is singular, because when I ask for A3.inv(), it gives
raise ValueError("Matrix det == 0; not invertible.")
Furthermore, when I remove lamb from the matrix (equivalent to setting lamb = 0), SymPy gives the correct answer:
(Matrix([
[1, 0, 0, -1/nu**3],
[0, 1, 0, -3/nu**2],
[0, 0, 1, -3/nu],
[0, 0, 0, 0]]), [0, 1, 2])
which leads me to believe that this problem only happens with more than one variable.
EDIT: Interestingly, I just got the correct answer when I pass rref() the argument "simplify=True". I still have no idea why that is though.
The rref algorithm fundamentally requires the ability to tell whether the elements of the matrix are identically zero. In SymPy, the simplify=True option instructs SymPy to simplify the entries at the relevant stage of the algorithm. With symbolic entries this is necessary, since you can easily have symbolic expressions that are identically zero but don't simplify to zero automatically, like x*(x - 1) - x**2 + x. The option is off by default because such simplification can be expensive in general, though this can be controlled by passing in a less general simplify function than simplify (for rational functions, use cancel). The defaults here could probably be smarter.
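For reference, a short sketch of both variants applied to the matrix from the question (full simplification, and the cheaper cancel that suffices for rational-function entries):
from sympy import Symbol, Matrix, cancel
nu = Symbol('nu')
lamb = Symbol('lambda')
A3 = Matrix([[-3*nu, 1, 0, 0],
             [3*nu, -2*nu - 1, 2, 0],
             [0, 2*nu, -nu - lamb - 2, 3],
             [0, 0, nu + lamb, -3]])
# Simplify intermediate entries so identically-zero pivots are recognized...
print(A3.rref(simplify=True))
# ...or pass a cheaper zero test suited to rational functions.
print(A3.rref(simplify=cancel))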