I have a simultaneous equation calculator that takes in a list containing the strings of two equations and is supposed to output the solutions for x and y. Here is the code for it.
from sympy import symbols, Eq, solve
from sympy.parsing.sympy_parser import (parse_expr, standard_transformations,
                                        implicit_multiplication_application)

x, y = symbols('x, y')
transformations = standard_transformations + (implicit_multiplication_application,)
# build an Eq from the left- and right-hand side of each equation string
eqs_sympy = [Eq(parse_expr(e.split('=')[0], transformations=transformations),
                parse_expr(e.split('=')[1], transformations=transformations))
             for e in final_lst]
sol = solve(eqs_sympy)
An example of final_lst: ["5x^2 + 1y^5 = 12", "5x^3 + 18y^2 = 42"] (replace ^ with **).
However, sol is just an empty list. Why is this so?
These highly nonlinear equations are of too high an order for SymPy to give an explicit solution. You can get a numerical solution, however, if you have a good initial guess for it. You can get a reasonable guess by graphing the equations or by solving the related equations that drop the highest-powered terms. Use nsolve (e.g. here). Here is one of the 3 roots:
>>> from sympy import nsolve
>>> tuple(nsolve(eqs_sympy, (x, y), (-1, 1)))
(-0.731937744431011, 1.56277202986898)
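For reference, here is a self-contained version of the whole pipeline, parsing included (the starting guess (-1, 1) is the one used above; other guesses converge to the other roots):

from sympy import symbols, Eq, nsolve
from sympy.parsing.sympy_parser import (parse_expr, standard_transformations,
                                        implicit_multiplication_application)

x, y = symbols('x y')
transformations = standard_transformations + (implicit_multiplication_application,)
final_lst = ["5x**2 + 1y**5 = 12", "5x**3 + 18y**2 = 42"]
eqs_sympy = [Eq(parse_expr(e.split('=')[0], transformations=transformations),
                parse_expr(e.split('=')[1], transformations=transformations))
             for e in final_lst]
print(tuple(nsolve(eqs_sympy, (x, y), (-1, 1))))  # (-0.7319..., 1.5627...)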
I am trying to solve this second-order linear ordinary differential equation with SymPy and get an unexpected result.
import sympy as sym
k, t = sym.symbols('k, t')
s = sym.Function('s')
diff_eq = sym.Eq(s(t).diff(t, 2) + s(t) * k**2, 0) # everything fine here, when I print this I get what I expected.
solution_diff_eq = sym.dsolve(diff_eq, s(t))
print(solution_diff_eq)
Which prints
Eq(s(t), C1*exp(-I*k*t) + C2*exp(I*k*t))
However, the solution I expected is s(t) = C1*sin(k*t) + C2*cos(k*t).
Any ideas what I have done wrong?
The result prints as
Eq(s(t), C1*exp(-I*k*t) + C2*exp(I*k*t))
which is correct, as I is the imaginary unit. You might prefer the real form, but SymPy was not notified of that and produced the simplest form, a sum of exponential terms, especially as it is not clear whether k is actually real.
If you make it explicit that k is a positive real number via
k = sym.Symbol('k', real=True, positive=True)
the solution is actually in the real form you were expecting:
Eq(s(t), C1*sin(k*t) + C2*cos(k*t))
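Putting it together, a minimal complete version with the real, positive k (otherwise the same code as in the question):

import sympy as sym

k = sym.Symbol('k', real=True, positive=True)  # tell SymPy that k is real and positive
t = sym.Symbol('t')
s = sym.Function('s')
diff_eq = sym.Eq(s(t).diff(t, 2) + s(t) * k**2, 0)
print(sym.dsolve(diff_eq, s(t)))  # Eq(s(t), C1*sin(k*t) + C2*cos(k*t))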
I have this line of MATLAB code:
a/b
I am using these inputs:
a = [1,2,3,4,5,6,7,8,9,1,2,3,4,5,6,7,8,9]
b = ones(25, 18)
This is the result (a 1x25 matrix):
[5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
What is MATLAB doing? I am trying to duplicate this behavior in Python, and the mrdivide documentation in MATLAB was unhelpful. Where does the 5 come from, and why are the rest of the values 0?
I have tried this with other inputs and receive similar results, usually just a different first element and zeros filling the remainder of the matrix. In Python when I use linalg.lstsq(b.T,a.T), all of the values in the first matrix returned (i.e. not the singular one) are 0.2. I have already tried right division in Python and it gives something completely off with the wrong dimensions.
I understand what a least squares approximation is; I just need to know what mrdivide is doing.
Related:
Array division - translating from MATLAB to Python
MRDIVIDE or the / operator actually solves the xb = a linear system, as opposed to MLDIVIDE or the \ operator which will solve the system bx = a.
To solve a system xb = a with a non-symmetric, non-invertible matrix b, you can rely either on mrdivide(), which is done via factorization of b with Gauss elimination, or on pinv(), which is done via singular value decomposition and zeroing of the singular values below a (default) tolerance level.
Here is the difference (for the case of mldivide): What is the difference between PINV and MLDIVIDE when I solve A*x=b?
When the system is overdetermined, both algorithms provide the same answer. When the system is underdetermined, PINV will return the solution x, that has the minimum norm (min NORM(x)). MLDIVIDE will pick the solution with least number of non-zero elements.
In your example:
% solve xb = a
a = [1,2,3,4,5,6,7,8,9,1,2,3,4,5,6,7,8,9];
b = ones(25, 18);
the system is underdetermined, and the two different solutions will be:
x1 = a/b; % MRDIVIDE: sparsest solution (min L0 norm)
x2 = a*pinv(b); % PINV: minimum norm solution (min L2)
>> x1 = a/b
Warning: Rank deficient, rank = 1, tol = 2.3551e-014.
x1 =
    5.0000         0         0   ...        0
>> x2 = a*pinv(b)
x2 =
    0.2000    0.2000    0.2000   ...   0.2000
In both cases the approximation error of x*b - a is non-negligible (non-exact solution) and the same, i.e. norm(x1*b - a) and norm(x2*b - a) will return the same result.
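You can check that claim numerically in Python (a quick sketch; x1 is MATLAB's sparse answer entered by hand):

import numpy as np

a = np.array([1,2,3,4,5,6,7,8,9,1,2,3,4,5,6,7,8,9], dtype=float)
b = np.ones((25, 18))
x1 = np.zeros(25)
x1[0] = 5.0                  # MRDIVIDE's sparse solution from above
x2 = a @ np.linalg.pinv(b)   # PINV's minimum-norm solution (all 0.2)
print(np.linalg.norm(x1 @ b - a), np.linalg.norm(x2 @ b - a))  # identical residuals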
What is MATLAB doing?
A great break-down of the algorithms (and checks on properties) invoked by the '\' operator, depending upon the structure of matrix b, is given in this post on scicomp.stackexchange.com. I am assuming similar options apply for the / operator.
For your example, MATLAB is most probably doing Gaussian elimination, giving the sparsest solution amongst an infinitude of them (that's where the 5 comes from).
What is Python doing?
Python's linalg.lstsq uses the pseudo-inverse/SVD, as demonstrated above (that's why you get a vector of 0.2's). In effect, the following will both give you the same result as MATLAB's pinv():
import numpy as np

a = np.array([1,2,3,4,5,6,7,8,9,1,2,3,4,5,6,7,8,9])
b = np.ones((25, 18))

# xb = a: solve b.T x.T = a.T instead
x2 = np.linalg.lstsq(b.T, a.T, rcond=None)[0]  # least squares via SVD
x2 = a @ np.linalg.pinv(b)                     # explicit pseudo-inverse, same result
TL;DR: A/B = np.linalg.solve(B.conj().T, A.conj().T).conj().T
I did not find the earlier answers to be a satisfactory substitute, so I dug further into MATLAB's reference documentation for mrdivide and found the solution. I cannot explain the actual mathematics here or take credit for coming up with the answer; I'm just following MATLAB's explanation. Additionally, I wanted to post the actual detail from MATLAB to give credit. If it's a copyright issue, someone tell me and I'll remove the quoted text.
%/ Slash or right matrix divide.
% A/B is the matrix division of B into A, which is roughly the
% same as A*INV(B) , except it is computed in a different way.
% More precisely, A/B = (B'\A')'. See MLDIVIDE for details.
%
% C = MRDIVIDE(A,B) is called for the syntax 'A / B' when A or B is an
% object.
%
% See also MLDIVIDE, RDIVIDE, LDIVIDE.
% Copyright 1984-2005 The MathWorks, Inc.
Note that the ' symbol indicates the complex conjugate transpose. In python using numpy, that requires .conj().T chained together.
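A minimal NumPy sketch of that identity (assuming B is square and invertible, since np.linalg.solve needs a square system; for a rectangular or rank-deficient B you would fall back to lstsq or pinv as above):

import numpy as np

def mrdivide(A, B):
    # MATLAB's A/B solves x*B = A; per the docs above, A/B = (B'\A')'
    return np.linalg.solve(B.conj().T, A.conj().T).conj().T

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))   # square and (almost surely) invertible
A = rng.standard_normal((2, 4))
assert np.allclose(mrdivide(A, B), A @ np.linalg.inv(B))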
Per this handy "cheat sheet" of NumPy for MATLAB users, the equivalent call is linalg.lstsq(b, a) -- linalg here is numpy.linalg.linalg, a light-weight version of the full scipy.linalg.
a/b finds the least squares solution to the system of linear equations x*b = a.
If b is invertible, this is a*inv(b); but if it isn't, then it is the x which minimises norm(x*b - a).
You can read more about least squares on wikipedia.
According to the MATLAB documentation, mrdivide will return at most k non-zero values, where k is the computed rank of b. My guess is that MATLAB in your case solves the least squares problem obtained by replacing b with b(1,:) (which has the same rank). In this case the Moore-Penrose inverse b2 = b(1,:); inv(b2*b2')*b2*a' is defined and gives the same answer.
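That guess is easy to check numerically (a hypothetical translation to NumPy; b2 stands in for b(1,:)):

import numpy as np

a = np.array([1,2,3,4,5,6,7,8,9,1,2,3,4,5,6,7,8,9], dtype=float)
b2 = np.ones((1, 18))                    # b(1,:), same rank-1 row space as b
x = a @ b2.T @ np.linalg.inv(b2 @ b2.T)  # inv(b2*b2')*b2*a', transposed
print(x)                                 # [5.] -- the 5 that MATLAB returns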
I have a function whose roots I'd like to find. So far, even Mathematica was unable to find the roots analytically, so a numerical approach is fine (but please, I'd be happy to be surprised on this matter).
The examples in the documentation all refer to "real" functions, lambda functions, and don't address this issue sufficiently (or I'm just too slow to understand). Here's a simple use case:
from sympy import *
import mpmath
p, r, c, y, lam, f = symbols('p r c y lambda f')
priceCDF = (c*lam*p + c*r - lam*p*r - p*r + r*(c - p)*LambertW(-exp((-c*lam*p - c*r + lam*p*r + lam*r*(c - p) + p*r)/(r*(c - p))), -1))/(lam*r*(c - p))
priceCDFplot = priceCDF.subs(r, 2).subs(c, 0.5).subs(lam, 1)
mpmath.findroot(priceCDFplot, 0.8)
which gives me TypeError: 'Mul' object is not callable. What am I doing wrong, and how do I numerically find the root -- and how could I find it analytically?
If you want to use mpmath.findroot, you'll need to convert the SymPy expression to a mpmath expression. The easiest way to do this is with lambdify(p, priceCDF, 'mpmath') (I'm assuming p is the variable you want to solve for).
Another solution would be to use sympy.nsolve, which works directly on SymPy expressions.
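A sketch of both routes, reusing the question's expression (whether findroot converges depends on the starting point; 0.8 is the guess from the question):

import mpmath
from sympy import symbols, lambdify, nsolve, LambertW, exp

p, r, c, lam = symbols('p r c lambda')
priceCDF = (c*lam*p + c*r - lam*p*r - p*r + r*(c - p)*LambertW(-exp((-c*lam*p - c*r + lam*p*r + lam*r*(c - p) + p*r)/(r*(c - p))), -1))/(lam*r*(c - p))
priceCDFplot = priceCDF.subs(r, 2).subs(c, 0.5).subs(lam, 1)

f = lambdify(p, priceCDFplot, 'mpmath')  # plain mpmath-callable function of p
root = mpmath.findroot(f, 0.8)           # numeric root via mpmath
root = nsolve(priceCDFplot, p, 0.8)      # same thing, staying inside SymPy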
Trying to compute the following lines, I'm getting a really complex result.
from sympy import *
s = symbols("s")
t = symbols("t")
h = 1/(s**3 + s**2/5 + s)
inverse_laplace_transform(h,s,t)
The result is the following:
(-(I*exp(-t/10)*sin(3*sqrt(11)*t/10) - exp(-t/10)*cos(3*sqrt(11)*t/10))*gamma(-3*sqrt(11)*I/5)*gamma(-1/10 - 3*sqrt(11)*I/10)/(gamma(9/10 - 3*sqrt(11)*I/10)*gamma(1 - 3*sqrt(11)*I/5)) + (I*exp(-t/10)*sin(3*sqrt(11)*t/10) + exp(-t/10)*cos(3*sqrt(11)*t/10))*gamma(3*sqrt(11)*I/5)*gamma(-1/10 + 3*sqrt(11)*I/10)/(gamma(9/10 + 3*sqrt(11)*I/10)*gamma(1 + 3*sqrt(11)*I/5)) + gamma(1/10 - 3*sqrt(11)*I/10)*gamma(1/10 + 3*sqrt(11)*I/10)/(gamma(11/10 - 3*sqrt(11)*I/10)*gamma(11/10 + 3*sqrt(11)*I/10)))*Heaviside(t)
However, the answer should be simpler; WolframAlpha proves it.
Is there any way to simplify this result?
I tried a bit with this one, and the way I could find a simpler solution is using something like:
from sympy import *
s = symbols("s")
t = symbols("t", positive=True)
h = 1/(s**3 + s**2/5 + s)
inverse_laplace_transform(h,s,t).evalf().simplify()
Notice that I define t as a positive variable; otherwise the sympy function returns a large term followed by the Heaviside function. The result still contains many gamma functions that I could not reduce to the expression returned by Wolfram. Using evalf(), some of those are converted to their numeric value, and then after simplification you get an expression similar to the one in Wolfram, but with floating-point numbers.
Unfortunately this part of SymPy is not quite mature. I also tried with Maxima, and the result is quite close to the one from Wolfram. So it seems that Wolfram is not doing anything really special there.
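For reference, you can sidestep the gamma functions by doing the partial fraction decomposition first; a sketch (the printed form may be rearranged depending on the SymPy version):

from sympy import symbols, apart

s = symbols("s")
h = 1/(s**3 + s**2/5 + s)
print(apart(h, s))   # equivalent to 1/s - (s + 1/5)/(s**2 + s/5 + 1)

Transforming the two terms by hand (1/s -> 1, and the shifted quadratic via the standard damped sine/cosine pairs) gives, for t > 0, h(t) = 1 - exp(-t/10)*(cos(3*sqrt(11)*t/10) + (sqrt(11)/33)*sin(3*sqrt(11)*t/10)), which matches the frequencies appearing in the gamma-laden output above.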