I need to find a matrix A such that A*B*A = B*A*B and A^2 = I, where B = [[1,0,0], [0,-1,0], [0,0,1]].
I tried to do it in SymPy with sympy.solve(A*B*A - B*A*B):
import sympy as sp

a, b, c, d, e, f, g, h, k = sp.symbols('a b c d e f g h k')
B = sp.Matrix([[1, 0, 0], [0, -1, 0], [0, 0, 1]])
A = sp.Matrix([[a, b, c], [d, e, f], [g, h, k]])
sp.solve([A*B*A - B*A*B, A**2 - sp.eye(3)])
I also tried numpy.linalg, but I didn't get any results.
First of all, linalg will not help, as this is not a linear problem: the unknown A gets multiplied by itself. You want to solve a system of 18 quadratic equations in 9 unknowns. For a generic system of that shape we'd expect no solutions at all, but there is a lot of structure here.
In my version of SymPy (1.1.1) direct attempts to solve even one of the matrix equations A*B*A=B*A*B or A*A=I fail to finish in reasonable time. So let's follow the advice of saintsfan342000 and approach the problem numerically, as a minimization problem. This is how I did it:
import numpy as np
from scipy.optimize import minimize

B = np.array([[1, 0, 0], [0, -1, 0], [0, 0, 1]])

def func(A, B):
    # A comes in as a flat vector of the 9 unknowns
    A = A.reshape((3, 3))
    # sum of squared Frobenius norms of the two residuals
    return (np.linalg.norm(A.dot(B).dot(A) - B.dot(A).dot(B))**2
            + np.linalg.norm(A.dot(A) - np.eye(3))**2)

while True:
    guess = np.random.uniform(-2, 2, size=(9,))
    res = minimize(func, guess, args=(B,))
    if res.fun < 1e-15:   # discard runs that got stuck in a local minimum
        A = res.x.reshape((3, 3))
        print(A)
The function to minimize is the sum of squares of the Frobenius norms of A*B*A - B*A*B and A*A - I. I put the minimization in a loop because there are some local minima where minimize will get stuck; when the minimal value found is not sufficiently close to zero, I ignore the result and start over. After running for a while, the script will print a bunch of matrices like
[[ 0.70386835 0.86117949 -1.40305355]
[ 0.17193376 0.49999999 0.81461157]
[-0.25409118 0.73892171 -0.20386834]]
which all share two important features:
the central element A[1,1] is 1/2
the trace of the matrix (the sum of diagonal elements) is 1.
Let's use this information to help SymPy solve the system. I still don't want to throw both equations at it, so I try to get one at a time.
from sympy import *
var('a:h') # a quick way to declare a bunch of symbols
B = Matrix([[1, 0, 0], [0, -1, 0], [0, 0, 1]])
A = Matrix([[a, b, c], [d, S(1)/2, f], [g, h, S(1)/2-a]]) # S(1)/2 is a way to get rational 1/2 instead of decimal 0.5
print(solve(A*B*A - B*A*B))
print(solve(A*A - eye(3)))
Now solve succeeds and prints the following:
[{b: h*(4*f*h - 3)/(2*g), d: -g/(2*h), a: 2*f*h - 1/2, c: -f*h*(4*f*h - 3)/g}]
[{b: h*(4*f*h - 3)/(2*g), d: -g/(2*h), a: 2*f*h - 1/2, c: -f*h*(4*f*h - 3)/g}]
Whoa! With the two constraints that we found numerically, both matrix equations are equivalent! I did not expect that. So we already have the solution:
A = Matrix([[2*f*h - S(1)/2, h*(4*f*h - 3)/(2*g), -f*h*(4*f*h - 3)/g], [-g/(2*h), S(1)/2, f], [g, h, 1 - 2*f*h]])
for arbitrary f, g, h.
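As a sanity check, we can substitute this A back into both equations and let SymPy verify that the residuals vanish identically (a quick sketch, assuming f, g, h are nonzero so that all the divisions are defined):

from sympy import *

f, g, h = symbols('f g h')
B = Matrix([[1, 0, 0], [0, -1, 0], [0, 0, 1]])
A = Matrix([[2*f*h - S(1)/2, h*(4*f*h - 3)/(2*g), -f*h*(4*f*h - 3)/g],
            [-g/(2*h), S(1)/2, f],
            [g, h, 1 - 2*f*h]])
print(simplify(A*B*A - B*A*B))  # expect the zero matrix
print(simplify(A*A - eye(3)))   # expect the zero matrix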
Note: A=B is a trivial solution, which was excluded above by the requirement of A[1,1]=1/2. I imagine this was the intent; it appears you were looking for faithful 3-dimensional representations of the symmetric group S3.
Related
I am trying to find the right python or R package/function to approximate x in the equation a + xb = c.
a, b, and c are tuples/vectors, so if I have:
a = (1,2,1)
b = (2,3,2)
c = (5,8,5)
then I would like the function to give me x = 2.
I feel like some form of least squares approach might be the right way to go, but I cannot seem to find a function that does this. Maybe I am looking with the wrong terms, because it seems like such an obvious thing, I don't know.
You can use SymPy for Python; here is how it works:
from sympy.solvers import solve
from sympy import Symbol
x = Symbol('x')
solve(x**2 - 1, x)
[-1, 1] # output
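The snippet above shows the general SymPy workflow; for the vector equation as actually posed, a least-squares solve may be closer to what you want. Here is a minimal sketch using numpy.linalg.lstsq (assuming the goal is to minimize ||a + x*b - c|| over the scalar x):

import numpy as np

a = np.array([1, 2, 1])
b = np.array([2, 3, 2])
c = np.array([5, 8, 5])

# rewrite a + x*b = c as the overdetermined system x*b = c - a
# and solve it in the least-squares sense (b as a single-column matrix)
x, residuals, rank, sv = np.linalg.lstsq(b[:, np.newaxis], c - a, rcond=None)
print(x[0])  # 2.0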
I am trying to solve a system of linear equations A * x = b for the unknown x using scipy's linalg.solve function. Here is an example that works fine:
import numpy as np
import scipy.linalg as linalg
A = np.array([[ 0.18666667, 0.06222222, -0.01777778],
[ 0.01777778, 0.18666667, 0.01777778],
[-0.01777778, 0.06222222, 0.18666667]])
b = np.array([0.26666667, -0.26666667, -0.4])
x = linalg.solve(A, b, assume_a='gen')
It results in x = [1.77194417, -1.4555256, -1.48892533], which is correct: computing A.dot(x) gives [0.26666667, -0.26666667, -0.4], the same as b.
However, the matrix of coefficients A is symmetric, i.e., the values above and below the main diagonal are the same. If I understand the documentation correctly, the solve function allows setting the argument assume_a='sym' to solve such a problem more efficiently. Unfortunately, using the following code (given the same A and b) results in an incorrect solution being found:
x = linalg.solve(A, b, assume_a='sym')
It results in x = [1.88811181, -1.88811181, -1.78321672], which is different from the solution above. Computing A.dot(x) results in [0.26666667, -0.35058274, -0.48391607]. As this is different from b, the solution seems to be incorrect.
I am wondering if there is a problem with my code, or if my understanding of symmetric matrices or of the expected result is simply wrong. Maybe the matrix must satisfy additional constraints to be used with assume_a='sym'?
I appreciate your answers. Thanks in advance!
I don't think there is a bug here. Here is a short explanation.
Non-symmetric A
import numpy as np
import scipy.linalg as linalg
A = np.array([[ 0.18666667, 0.06222222, -0.01777778],
[ 0.01777778, 0.18666667, 0.01777778],
[-0.01777778, 0.06222222, 0.18666667]])
b = np.array([0.26666667, -0.26666667, -0.4])
x = linalg.solve(A, b, assume_a='gen')
np.allclose(A @ x, b)
Out:
True
Which shows the solver works well.
Symmetric A
# use the upper triangle of A to build a symmetric matrix
A_symm = (np.triu(A) + np.triu(A).T -np.diag(A.diagonal()))
# solve the equations
x = linalg.solve(A_symm, b, assume_a='sym')
np.allclose(A_symm @ x, b)
Out:
True
It still works.
If you pass a non-symmetric matrix A to the solver and specify assume_a='sym', the solver will only use the upper triangular part of A, see below:
x = linalg.solve(A, b, assume_a='sym')
np.allclose(A @ x, b), x
Out:
(False, array([ 1.88811181, -1.88811181, -1.78321672]))
The result shows that the solver seems to work "wrong", but this x is the same as the result of linalg.solve(A_symm, b, assume_a='sym').
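To make the point explicit, here is a small cross-check (a sketch reusing A, A_symm, b and the imports from above): if assume_a='sym' reads only the upper triangle of A, it should agree with a general solve on the explicitly symmetrized matrix.

# assumption: assume_a='sym' reads only the upper triangle of A
x_sym = linalg.solve(A, b, assume_a='sym')
x_gen = linalg.solve(A_symm, b, assume_a='gen')
print(np.allclose(x_sym, x_gen))  # expect True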
The only example/docs I can find for scipy.integrate.solve_bvp are on the SciPy docs page.
To test, I'm looking at the time-independent Schrödinger equation in a 1d infinite potential well. This has a neat analytic solution, found by solving the DE and inserting the boundary conditions ψ(0) = 0, ψ(L) = 0, plus the requirement that the wavefunction normalizes to 1; but this question applies to solving any DE where the BCs we know aren't initial values.
You can solve it numerically with SciPy's solve_ivp by starting with ψ(0) = 0 and cheating to set ψ'(0) using the analytic solution; a shooting method can then be used to find an E value that satisfies, e.g., the normalization condition above.
These are two different sets of BCs: ψ(0) = 0 and normalization for both approaches, plus a second value of ψ for the analytic approach versus an initial value of ψ' for the ivp approach. SciPy's solve_bvp seems to offer a solution using the first set of BCs numerically (avoiding the cheat of inserting ψ'), but I can't get it working. This pseudocode describes the problem, and is how I expect the API to behave:
bcs = {0: (0, None), L: (0, None)} # Two BCs on ψ; no BCs on derivative
x_span = (0, L)
sol = solve_bvp(rhs, bcs, x_span)
In reality, the code looks something like this, and I can't get it to work:
def bc(ψ_a, ψ_b):
    # residuals of the two boundary conditions: ψ(0) = 0 and ψ(L) = 0
    return np.array([ψ_a[0], ψ_b[0]])

x_span = (0, L)
x_eval = np.linspace(x_span[0], x_span[1], int(1e5))
x_guess = np.array([0, L])
ψ_guess = np.array([[0, 1], [0, -1]])

res = solve_bvp(rhs_1d, bc, x_guess, ψ_guess)
I've no idea how to build the bc function, and I don't know why the guesses are set up the way they are. I'm also unsure how I can guess a value for ψ without also inserting a guess for ψ' (the docs imply you can). Also of note, the docs show an example implying you can use solve_bvp for a normalization BC as well, but I'm not sure how to approach that (the example is too sparse).
The equivalent and working ivp code, for reference (compare to my solve_bvp pseudocode above):
ψ_0 = (0, np.sqrt(2/L) * n*np.pi/L)  # ψ(0) = 0; ψ'(0) taken from the analytic solution
x_span = (0, L)
sol = solve_ivp(rhs_1d, x_span, ψ_0)
For the eigenvalue problem
-u''+V(x)u = c*u
with boundary conditions
u(0)=0=u(L)
and normalization
int(u(x)^2, x=0 to L)=1
set up the integral as a third component of the state. With the eigenvalue as a parameter these are 4 dimensions, allowing for 4 boundary conditions; the additional 2 are that the running integral is zero at 0 and has value 1 at L.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_bvp

# some length
L = 10
# some potential function
def V(x): return 1 + (2*x - L)**2
# the ODE system: y = [u, u', running integral of u^2]
def odesys(x, y, p):
    u, v, S = y
    c = p[0]
    return [v, (V(x) - c)*u, u**2]
# the boundary conditions: u(0)=0, u(L)=0, integral starts at 0 and ends at 1
def boundary(y0, yL, p):
    return [y0[0], yL[0], y0[2], yL[2] - 1]
The initial guess selects, more or less, which eigenfunction/eigenvalue you will get.
n = 11
w = (np.pi*n)/L
x_init = np.linspace(0, L, 4*n + 1)
u_init = np.sin(w*x_init)
v_init = np.cos(w*x_init)*w
y_init = [u_init, v_init, x_init/L]   # guesses for u, u', and the running integral
There is no need to put too many points into the guess, just enough that the structure of the first component is faithfully represented.
Then call the solver with the prepared data. Take notice that the default tolerance is 1e-3; if you want better, you have to allow a finer subdivision (tol and max_nodes below). If everything runs fine, plot the solution.
res = solve_bvp(odesys, boundary, x_init, y_init, p=[w**2], max_nodes=10000, tol=1e-6)
print(res.message)
if res.success:
    x_disp = np.linspace(0, L, 3001)
    y_disp = res.sol(x_disp)
    plt.plot(x_disp, y_disp[0])
    plt.title(r"eigenfunction to eigenvalue $\lambda=%.6f$" % res.p[0])
    plt.grid()
    plt.show()
In the following code I am trying to implement the following:
- Write a function naturalSpline that implements cubic spline interpolation with natural boundary conditions.
- Use a tridiagonal solver to solve the arising tridiagonal system for the first derivatives.
- The prototype of the function should read yy = naturalSpline(x, y, xx), where (x, y) are the input points and data, and xx are the points where the data should be interpolated.
I figured I would start with the second bullet point and create the tridiagonal solver, which is just the Thomas algorithm. I spent some time on this part of the code and have formatted it below. Now I am trying to finish the first and third bullet points, but I am not sure how to use what I have done already to finish them. Looking for some help with this! Thanks in advance.
import numpy as np

def TDMA(a, b, c, d):
    # Thomas algorithm: a = sub-diagonal, b = main diagonal,
    # c = super-diagonal, d = right-hand side
    n = len(d)
    w = np.zeros(n - 1, float)
    g = np.zeros(n, float)
    p = np.zeros(n, float)

    w[0] = c[0]/b[0]
    g[0] = d[0]/b[0]

    # forward sweep
    for i in range(1, n - 1):
        w[i] = c[i]/(b[i] - a[i-1]*w[i-1])
    for i in range(1, n):
        g[i] = (d[i] - a[i-1]*g[i-1])/(b[i] - a[i-1]*w[i-1])

    # back substitution
    p[n-1] = g[n-1]
    for i in range(n-1, 0, -1):
        p[i-1] = g[i-1] - w[i-1]*p[i]
    return p

# the full matrix, for reference/checking
A = np.array([[10, 2, 0, 0], [3, 10, 4, 0], [0, 1, 7, 5], [0, 0, 3, 4]], dtype=float)
a = np.array([3., 1, 3])          # sub-diagonal of A
b = np.array([10., 10., 7., 4.])  # main diagonal of A
c = np.array([2., 4., 5.])        # super-diagonal of A
d = np.array([3, 4, 5, 6.])
print(TDMA(a, b, c, d))
This gives the correct output; I even tested it against np.linalg.solve(A, d) to make sure it was correct:
[ 0.14877589 0.75612053 -1.00188324 2.25141243]
For each interval [x_k, x_(k+1)], you can solve the four equations
p_k(x_k) = f(x_k) = y_k
p_k'(x_k) = f'(x_k) = d_k
p_k(x_(k+1)) = f(x_(k+1)) = y_(k+1)
p_k'(x_(k+1)) = f'(x_(k+1)) = d_(k+1)
(without checking your code, I assume that this is what you did).
From this, you can construct a dict
{'polynomials': [ [a_0, ..., d_0], ..., [a_24, ..., d_24] ],
'knots': [x_0, ..., x_24]}
For each x of your 250 points, you check for which k the point x lies in the interval [x_k, x_(k+1)] and evaluate p_k(x).
All of this is straightforward mathematics and Python coding. If something is not clear, you are better off learning more about both fields, instead of getting specialized advice on this website.
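That said, for concreteness, here is a minimal sketch of how naturalSpline could be assembled on top of the TDMA function from the question (my own wiring of the equations above, so treat it as a sketch rather than a reference implementation). It builds the tridiagonal system for the first derivatives d_k from the natural conditions p'' = 0 at both ends plus continuity of p'' at the interior knots, then evaluates the cubic Hermite form directly instead of storing a dict of coefficients:

import numpy as np

def naturalSpline(x, y, xx):
    # cubic spline with natural BCs; TDMA (from above) solves for the
    # first derivatives d_k at the knots
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    n = len(x)
    h = np.diff(x)                     # interval lengths h_k = x_{k+1} - x_k

    a = np.zeros(n - 1)                # sub-diagonal
    b = np.zeros(n)                    # main diagonal
    c = np.zeros(n - 1)                # super-diagonal
    r = np.zeros(n)                    # right-hand side

    # natural boundary conditions: p''(x_0) = 0 and p''(x_{n-1}) = 0
    b[0], c[0], r[0] = 2.0, 1.0, 3.0*(y[1] - y[0])/h[0]
    b[-1], a[-1], r[-1] = 2.0, 1.0, 3.0*(y[-1] - y[-2])/h[-1]

    # continuity of p'' at the interior knots
    for k in range(1, n - 1):
        a[k-1] = 1.0/h[k-1]
        b[k] = 2.0*(1.0/h[k-1] + 1.0/h[k])
        c[k] = 1.0/h[k]
        r[k] = 3.0*((y[k] - y[k-1])/h[k-1]**2 + (y[k+1] - y[k])/h[k]**2)

    d = TDMA(a, b, c, r)               # first derivatives at the knots

    # locate the interval of each evaluation point, then evaluate the
    # cubic Hermite form p = h00*y_k + h10*h*d_k + h01*y_{k+1} + h11*h*d_{k+1}
    xx = np.asarray(xx, float)
    idx = np.clip(np.searchsorted(x, xx) - 1, 0, n - 2)
    t = (xx - x[idx])/h[idx]
    h00 = (1 + 2*t)*(1 - t)**2
    h10 = t*(1 - t)**2
    h01 = t**2*(3 - 2*t)
    h11 = t**2*(t - 1)
    return h00*y[idx] + h10*h[idx]*d[idx] + h01*y[idx+1] + h11*h[idx]*d[idx+1]

# example: 26 knots (25 polynomials), interpolated at 250 points
xk = np.linspace(0, 2*np.pi, 26)
yy = naturalSpline(xk, np.sin(xk), np.linspace(0, 2*np.pi, 250))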
I want to integrate the product of two time- and frequency-shifted Hermite functions using scipy.integrate.quad.
However, since polynomials of large order are involved, numerical errors occur. Here's my code:
import numpy as np
import scipy.integrate
import scipy.special as sp
from math import pi

def makeFuncs():
    # Create the 0th, 4th, 8th, 12th and 16th order Hermite functions
    return [lambda t, n=n: np.exp(-0.5*t**2)*sp.hermite(n)(t) for n in np.arange(5)*4]

def ambgfun(funcs, i, k, tau, f):
    # Integrate f1(t)*f2(t+tau)*exp(-j*2*pi*f*t) over t from -inf to inf
    f1 = funcs[i]
    f2 = funcs[k]
    func = lambda t: np.real(f1(t) * f2(t+tau) * np.exp(-1j*(2*pi)*f*t))
    return scipy.integrate.quad(func, -np.inf, np.inf)

def main():
    f = makeFuncs()
    print("A00(0,0):", ambgfun(f, 0, 0, 0, 0))
    print("A01(0,0):", ambgfun(f, 0, 1, 0, 0))
    print("A34(0,0):", ambgfun(f, 3, 4, 0, 0))

if __name__ == '__main__':
    main()
The Hermite functions are orthogonal, so all the integrals between functions of different order should be zero. However, they are not, as the output shows:
A00(0,0): (1.7724538509055159, 1.4202636805184462e-08)
A01(0,0): (8.465450562766819e-16, 8.862237123626351e-09)
A34(0,0): (-10.1875, 26.317246925873935)
How can I make this calculation more accurate? The Hermite functions from SciPy contain a weights variable which should be usable for Gaussian quadrature, as given in the documentation (http://docs.scipy.org/doc/scipy/reference/special.html#orthogonal-polynomials). However, I have not found a hint in the docs on how to use these weights.
I hope you can help :)
Thanks, Max
The answer is that the result you get is numerically as close to zero as it gets. I don't think it's really possible to get much better results if you work with floating-point numbers: you are facing a general problem in numerical integration.
Consider this:
import numpy as np
from scipy import integrate, special

f = lambda t: np.exp(-t**2) * special.eval_hermite(12, t) * special.eval_hermite(16, t)

# compare the signed integral with the integral of the absolute value
abs_ig, abs_err = integrate.quad(lambda t: abs(f(t)), -np.inf, np.inf)
ig, err = integrate.quad(f, -np.inf, np.inf)

print(ig)                         # -10.203125
print(abs_ig)                     # 2.22488114805e+15
print(ig / abs_ig, err / abs_ig)  # -4.58591912155e-15 1.18053770382e-14
The value of the integral has therefore been computed to a relative accuracy comparable to the floating-point epsilon. Because of the rounding error in subtracting values of a large-magnitude oscillating integrand, it's not really possible to get better results.
So how to proceed? In my experience, what you'd need to do now is to approach the problem not numerically, but analytically. Importantly, the Fourier transform of Hermite polynomials times their weight function is known, so you can work in Fourier space the whole time.
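For instance, in the zero-shift case (tau = 0, f = 0) the inner products can be computed exactly with SymPy, sidestepping the floating-point cancellation entirely. A minimal sketch (exact, but slow for high orders):

import sympy as sp

t = sp.symbols('t')
# exact inner product of the 12th- and 16th-order Hermite functions;
# the two Gaussian factors exp(-t**2/2) combine into the weight exp(-t**2)
expr = sp.exp(-t**2) * sp.hermite(12, t) * sp.hermite(16, t)
print(sp.integrate(expr, (t, -sp.oo, sp.oo)))  # 0, by orthogonality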