I'm trying to solve a system with three nonlinear equations in Python 3.8. I'm using the function sympy.nonlinsolve(). However, I received the error message "convergence to root failed; try n < 15 or maxsteps > 50".
This is my code:
import sympy as sp
x_1 = 0.0
z_1 = 1.0
x_2 = 15.81
z_2 = 0.99
x_3 = 23.8
z_3 = 0.98
r, x_m, z_m = sp.symbols('r, x_m, z_m', real=True)
Eq_1 = sp.Eq((x_1 - x_m) ** 2 + (z_1 - z_m) ** 2 - r ** 2, 0)
Eq_2 = sp.Eq((x_2 - x_m) ** 2 + (z_2 - z_m) ** 2 - r ** 2, 0)
Eq_3 = sp.Eq((x_3 - x_m) ** 2 + (z_3 - z_m) ** 2 - r ** 2, 0)
ans = sp.nonlinsolve([Eq_1, Eq_2, Eq_3], [r, x_m, z_m])
I would welcome any help. Thanks in advance.
I get an answer from solve:
In [56]: sp.solve([Eq_1, Eq_2, Eq_3], [r, x_m, z_m])
Out[56]:
[(-5.71609538434502e+18, -4.80343980343979e+15, -5.71609336609336e+18),
 (-19222.9235141152, -4.25370843989772, -19221.9230434783),
 (19222.9235141152, -4.25370843989772, -19221.9230434783),
 (5.71609538434502e+18, -4.80343980343979e+15, -5.71609336609336e+18)]
I'm not sure why nonlinsolve fails, but from the large numbers in the answer I guess that this isn't well conditioned.
If you use exact rational numbers then you can get the same solution from both solve and nonlinsolve:
In [59]: import sympy as sp
...:
...: x_1 = 0
...: z_1 = 1
...: x_2 = sp.Rational('15.81')
...: z_2 = sp.Rational('0.99')
...: x_3 = sp.Rational('23.8')
...: z_3 = sp.Rational('0.98')
...:
...: r, x_m, z_m = sp.symbols('r, x_m, z_m', real=True)
...: Eq_1 = sp.Eq((x_1 - x_m) ** 2 + (z_1 - z_m) ** 2 - r ** 2, 0)
...: Eq_2 = sp.Eq((x_2 - x_m) ** 2 + (z_2 - z_m) ** 2 - r ** 2, 0)
...: Eq_3 = sp.Eq((x_3 - x_m) ** 2 + (z_3 - z_m) ** 2 - r ** 2, 0)
...: ans = sp.solve([Eq_1, Eq_2, Eq_3], [r, x_m, z_m])
In [60]: ans
Out[60]:
[(-√564927076558939081/39100, -8316/1955, -44210423/2300),
 (√564927076558939081/39100, -8316/1955, -44210423/2300)]
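As a quick numeric cross-check (a small sketch reusing the ans from the rational-input run above), evalf() recovers the same finite solutions that solve returned for the float-input system:
# numeric check of the exact answer above; ans is the list of tuples returned by solve
for sol in ans:
    print(tuple(v.evalf(15) for v in sol))
# both tuples should match the finite solutions from the float run,
# roughly (+/-19222.92, -4.2537, -19221.92)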
This is another of those cases where it is good to emphasize the A of CAS and let it help you as you work through the problem by hand:
Solve the first equation for r**2:
>>> from sympy import solve, Eq
>>> r2 = solve(Eq_1, r**2)
Substitute into the other two equations and expand them:
>>> eqs = [i.subs(r**2, r2[0]).expand() for i in (Eq_2, Eq_3)]
See what you get:
>>> eqs
[Eq(-31.62*x_m + 0.02*z_m + 249.9362, 0), Eq(-47.6*x_m + 0.04*z_m + 566.4004, 0)]
That's two linear equations. Solve them with solve -- nonlinsolve is not needed:
>>> xz = solve(eqs); xz
{x_m: -4.25370843989770, z_m: -19221.9230434783}
Substitute into r2, set the result equal to r**2, and solve for r:
>>> ris = solve(Eq(r**2, r2[0].subs(xz))); ris
[-19222.9235141152, 19222.9235141152]
Collect the solutions:
>>> soln = []
>>> for i in ris:
...     xz[r] = i
...     soln.append(dict(xz))
...
>>> soln
[{x_m: -4.25370843989770, z_m: -19221.9230434783, r: -19222.9235141152},
{x_m: -4.25370843989770, z_m: -19221.9230434783, r: 19222.9235141152}]
[print out has been edited for viewing pleasure]
When solving nonlinear systems, try to reduce the number of equations that you have to deal with. Eliminate linear variables for sure -- and others (r**2 in this case) if possible -- before trying to solve the nonlinear parts.
The very large numbers obtained when solving all 3 at once might be a reflection of the ill-posed nature of the system ("not well conditioned" as Oscar noted). Perhaps the problem was designed to teach that point.
Related
I want to solve a system of equations symbolically, such as A = ax + by and B = cx + dy, for x and y explicitly in sympy.
I tried the solve function of sympy as
solve([A, B], [x, y]), but it isn't working. It returns an empty list, [].
How can I solve it using sympy?
This is the actual equation I'm trying to solve:
from sympy import*
i,j,phi, p, e_phi, e_rho = symbols(r'\hat{i} \hat{j} \phi \rho e_\phi e_\rho')
e_rho = cos(phi)*i + sin(phi)*j
e_phi = -p*sin(phi)*i + p*cos(phi)*j
solve([e_rho,e_phi], [i,j])
I don't know what version of SymPy you're using but I just tried with the latest version and I get an answer:
In [4]: from sympy import*
...: i,j,phi, p, e_phi, e_rho = symbols(r'i j phi rho e_phi e_rho')
...: e_rho = cos(phi)*i + sin(phi)*j
...: e_phi = -p*sin(phi)*i + p*cos(phi)*j
...: solve([e_rho,e_phi], [i,j])
Out[4]: {i: 0, j: 0}
That's the correct answer to your equations (provided rho is nonzero):
In [5]: e_rho
Out[5]: i⋅cos(φ) + j⋅sin(φ)
In [6]: e_phi
Out[6]: -i⋅ρ⋅sin(φ) + j⋅ρ⋅cos(φ)
If you meant for e_rho and e_phi to be equal to something other than zero, then you should include a right-hand side, either by subtracting it from the expressions or by using Eq:
In [2]: A, B = symbols('A, B')
In [3]: solve([Eq(e_rho, A), Eq(e_phi, B)], [i, j])
Out[3]:
{i: A⋅ρ⋅cos(φ)/(ρ⋅sin²(φ) + ρ⋅cos²(φ)) - B⋅sin(φ)/(ρ⋅sin²(φ) + ρ⋅cos²(φ)),
 j: A⋅ρ⋅sin(φ)/(ρ⋅sin²(φ) + ρ⋅cos²(φ)) + B⋅cos(φ)/(ρ⋅sin²(φ) + ρ⋅cos²(φ))}
In [4]: solve([Eq(e_rho, A), Eq(e_phi, B)], [i, j], simplify=True)
Out[4]:
{i: A⋅cos(φ) - B⋅sin(φ)/ρ, j: A⋅sin(φ) + B⋅cos(φ)/ρ}
Again that's the correct answer (assuming rho != 0).
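If you want SymPy to know up front that rho is nonzero, a small sketch (declaring it with positive=True is my assumption here; any nonzero assumption would do):
from sympy import symbols, cos, sin, Eq, solve
i, j, phi, A, B = symbols('i j phi A B')
rho = symbols('rho', positive=True)   # assumed positive, hence nonzero
e_rho = cos(phi)*i + sin(phi)*j
e_phi = -rho*sin(phi)*i + rho*cos(phi)*j
print(solve([Eq(e_rho, A), Eq(e_phi, B)], [i, j], simplify=True))
# expect something like {i: A*cos(phi) - B*sin(phi)/rho, j: A*sin(phi) + B*cos(phi)/rho}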
I need an algorithm that solves systems like this:
Example 1:
5x - 6y = 0 <--- line
(10- x)**2 + (10- y)**2 = 2 <--- circle
Solution:
find y:
(10- 6/5*y)**2 + (10- y)**2 = 2
100 - 24y + 1.44y**2 + 100 - 20y + y**2 = 2
2.44y**2 - 44y + 198 = 0
D = b**2 - 4ac
D = 44*44 - 4*2.44*198 = 3.52
y[1,2] = (-b+-sqrt(D))/2a
y[1,2] = (44+-1.8761)/4.88 = 9.4008 , 8.6319
find x:
(10 - x)**2 + (10 - 5/6*x)**2 = 2
100 - 20x + x**2 + 100 - 50/3*x + (5/6*x)**2 = 2
1.6944x**2 - 36.6666x + 198 = 0
D = b**2 - 4ac
D = 36.6666*36.6666 - 4*1.6944*198 = 2.4747
x[1,2] = (-b+-sqrt(D))/2a
x[1,2] = (36.6666+-1.5731)/3.3888 = 11.2841 , 10.3557
My skills are not enough to write this algorithm; please help.
I also need another algorithm that solves this system:
5x - 6y = 0 <--- line
|-10 - x| + |-10 - y| = 2 <--- rhomb
As the answer here I need two x values and two y values.
You can use sympy, Python's symbolic math library.
Solutions for fixed parameters
from sympy import symbols, Eq, solve
x, y = symbols('x y', real=True)
eq1 = Eq(5 * x - 6 * y, 0)
eq2 = Eq((10 - x) ** 2 + (10 - y) ** 2, 2)
solutions = solve([eq1, eq2], (x, y))
print(solutions)
for x, y in solutions:
    print(f'{x.evalf()}, {y.evalf()}')
This leads to two solutions:
[(660/61 - 6*sqrt(22)/61, 550/61 - 5*sqrt(22)/61),
(6*sqrt(22)/61 + 660/61, 5*sqrt(22)/61 + 550/61)]
10.3583197613288, 8.63193313444070
11.2810245009662, 9.40085375080520
The other equations work very similarly (note that Abs needs to be imported from sympy as well):
eq1 = Eq(5 * x - 6 * y, 0)
eq2 = Eq(Abs(-10 - x) + Abs(-10 - y), 2)
leading to:
[(-12, -10),
(-108/11, -90/11)]
-12.0000000000000, -10.0000000000000
-9.81818181818182, -8.18181818181818
Dealing with arbitrary parameters
For your new question about how to deal with arbitrary parameters, sympy can help to find formulas, at least when the structure of the equations is fixed:
from sympy import symbols, Eq, Abs, solve
x, y = symbols('x y', real=True)
a, b, xc, yc = symbols('a b xc yc', real=True)
r = symbols('r', real=True, positive=True)
eq1 = Eq(a * x - b * y, 0)
eq2 = Eq((xc - x) ** 2 + (yc - y) ** 2, r ** 2)
solutions = solve([eq1, eq2], (x, y))
Studying the generated solutions, some complicated expressions are repeated. Those can be substituted by auxiliary variables. Note that this step isn't necessary, but it helps a lot in making sense of the solutions. Also note that substitution in sympy often only considers quite literal replacements. That's why the introduction of c below is done in two steps:
c, d = symbols('c d', real=True)
for xi, yi in solutions:
    print(xi.subs(a ** 2 + b ** 2, c)
            .subs(r ** 2 * a ** 2 + r ** 2 * b ** 2, c * r ** 2)
            .subs(-a ** 2 * xc ** 2 + 2 * a * b * xc * yc - b ** 2 * yc ** 2 + c * r ** 2, d)
            .simplify())
    print(yi.subs(a ** 2 + b ** 2, c)
            .subs(r ** 2 * a ** 2 + r ** 2 * b ** 2, c * r ** 2)
            .subs(-a ** 2 * xc ** 2 + 2 * a * b * xc * yc - b ** 2 * yc ** 2 + c * r ** 2, d)
            .simplify())
Which gives the formulas:
x1 = b*(a*yc + b*xc - sqrt(d))/c
y1 = a*(a*yc + b*xc - sqrt(d))/c
x2 = b*(a*yc + b*xc + sqrt(d))/c
y2 = a*(a*yc + b*xc + sqrt(d))/c
These formulas can then be converted to regular Python code without the need for sympy. That code will only work for a line and a circle (with arbitrary parameters). Some tests need to be added, such as whether c == 0 (a degenerate line equation), and whether d is zero, positive or negative.
The stand-alone code could look like:
import math
def give_solutions(a, b, xc, yc, r):
    # intersection between the line a*x - b*y == 0 and the circle with center (xc, yc) and radius r
    c = a ** 2 + b ** 2
    if c == 0:
        print("degenerate line equation given")
    else:
        d = -a**2 * xc**2 + 2*a*b * xc*yc - b**2 * yc**2 + c * r**2
        if d < 0:
            print("no solutions")
        elif d == 0:
            print("1 solution:")
            print(f"  x1 = {b*(a*yc + b*xc)/c}")
            print(f"  y1 = {a*(a*yc + b*xc)/c}")
        else:  # d > 0
            print("2 solutions:")
            sqrt_d = math.sqrt(d)
            print(f"  x1 = {b*(a*yc + b*xc - sqrt_d)/c}")
            print(f"  y1 = {a*(a*yc + b*xc - sqrt_d)/c}")
            print(f"  x2 = {b*(a*yc + b*xc + sqrt_d)/c}")
            print(f"  y2 = {a*(a*yc + b*xc + sqrt_d)/c}")
For the rhombus with arbitrary parameters, sympy doesn't seem to be able to work well with the absolute values in the equations. However, you could use equations for the 4 sides, and test whether the obtained intersections are inside the range of the rhombus. (The four sides are obtained by replacing the absolute values with either + or -, giving four combinations; see the sketch below.)
Working this out further is far beyond the reach of a typical Stack Overflow answer, especially as you seem to ask for an even more general solution.
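That said, here is a rough sketch of the four-sides idea for the fixed rhombus from the question (the helper code and the final membership check are my own, not part of the original answer):
from itertools import product
from sympy import symbols, Eq, solve

x, y = symbols('x y', real=True)
found = set()
for sx, sy in product((1, -1), repeat=2):
    # one sign choice per absolute value gives one of the four sides
    eqs = [Eq(5*x - 6*y, 0), Eq(sx*(-10 - x) + sy*(-10 - y), 2)]
    for sol in solve(eqs, (x, y), dict=True):
        xv, yv = sol[x], sol[y]
        if abs(-10 - xv) + abs(-10 - yv) == 2:   # keep only points that really lie on the rhombus
            found.add((xv, yv))
print(found)   # expect {(-12, -10), (-108/11, -90/11)}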
I have an equation describing an ellipse. I want to use Python to find the minimum and maximum angle theta and the length of the axis Lc of this ellipse for further calculations.
The parameters Lc1, theta1, etc. are the axis lengths for the respective angles.
I tried sympy.solve without success (the result didn't make any sense) and am now trying sympy.solveset.
This is my current script. It is followed by further calculations which need theta.
import math
import sympy as sym
from sympy import solve, Eq
from sympy import *
import matplotlib.pyplot as plt
#define parameters
#Lc modelled with matlab before
Lc1 = 0.67 / 1000
Lc2 = 0.36/ 1000
Lc3 = 0.7 / 1000
#orientation of the cut section
theta1 = 89
theta2 = -35
theta3 = 25
#calculation from Beaudoin et al. (2016) and Ebner et al. (2010)
dL = ((Lc2 - Lc1)/(Lc3 - Lc1))
c = math.atan(- (((dL * (math.sin(2 * theta3) - math.sin(2 * theta1))) - (math.sin(2 * theta2) - math.sin(2* theta1)))/ ((dL * (math.cos(2 * theta3) - math.cos(2 * theta1))) - (math.cos(2 * theta2) - math.cos(2 * theta1)))))
b = ((Lc2 - Lc1)/((math.sin(2 * theta2 + c)) - (math.sin(2 * theta1 + c))))
a = Lc1 - (b * math.sin(2 * theta1 + c))
print('Done calculating a, b and c')
##try to find theta at min and max Lc
#theta = sym.symbols('theta')
theta = var('theta')
#define eqation that give us our crossover length
Lc = Eq(a + b * sym.sin(2 * theta +c))
dLc = Eq(2 * b * sym.cos(2 * theta +c) + sym.sin(2 * theta + c))
print('Busy finding minimum and maximum.')
sol = solveset(dLc, theta)
sol
#now we have the derivation we can go find min and max of Lc
Using solveset I have received no result so far, because Python doesn't stop running.
I am not sure whether I can get a reliable result with my current script or not. What is wrong? Is there a more efficient way? I'd be glad if anyone could help me!
Thanks in advance!
Since this equation has no symbolic parameters and has floating point coefficients I guess that you just want a numeric solution so you can use nsolve:
In [4]: nsolve(dLc, theta, 1.2)
Out[4]: 1.22165851958244
The initial guess (1.2) comes from looking at a plot of the function (plot(dLc.lhs)).
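Putting those two steps together (a minimal sketch; dLc and theta are assumed to be defined exactly as in the question's script):
from sympy import plot, nsolve
plot(dLc.lhs, (theta, 0, 3.2))   # eyeball where the curve crosses zero
root = nsolve(dLc, theta, 1.2)   # refine that guess numerically
print(root)                      # about 1.2217 with the question's numbers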
Note that your equation looks like this:
In [26]: dLc
Out[26]: sin(2⋅θ + 0.697151824925716) + 0.0011237899722686⋅cos(2⋅θ + 0.697151824925716) = 0
We can solve this in terms of arbitrary symbols rather than particular numbers:
In [27]: a, b = symbols('a, b', real=True)
In [28]: eq = sin(2*theta + a) + b*cos(2*theta + a)
In [29]: solve(eq, theta)
Out[29]:
[-a/2 - atan((√(b² + 1) - 1)/b), -a/2 + atan((√(b² + 1) + 1)/b)]
That gives two solutions, one of which is negative and one positive. The other solutions come from adding multiples of pi.
The solution was much easier than I expected:
Having the ellipse parameters a, b and c, the maximum is simply L_maximum = a + b at the angle theta = math.radians(45) - c/2.
Plus: the input angles theta1, ... have to be converted to radians as well, with math.radians().
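In code, that observation would look something like this (a sketch reusing the a, b and c computed in the script above; the minimum case and the b > 0 assumption are additions of mine):
import math
# Lc(theta) = a + b*sin(2*theta + c) is largest when the sine term is +1
Lc_max = a + b
theta_max = math.radians(45) - c / 2     # from 2*theta + c = pi/2
# and smallest when the sine term is -1
Lc_min = a - b
theta_min = -math.radians(45) - c / 2    # from 2*theta + c = -pi/2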
I've been thinking about this problem, but I can't seem to wrap my head around it.
I want to solve a matrix with three equations with unknowns x, y, z so that they all equal the same number.
Lets say my equations are:
x + 3 = A
y(2y - 2) = 2A
z(4z - 1) = A
So I can construct a matrix looking like:
[(x + 3)     0         0    ] [0]   [ A]
[   0     (2y - 2)     0    ] [y] = [2A]
[   0        0      (4z - 1)] [z]   [ A]
I know numpy has a linear algebra module, but that only works when the answer (A) is already known.
My question is, would I have to construct a loop to brute-force the answer for A, or is there a more Pythonic way of solving this series of equations?
Linear algebra can only solve for multiples of your variables, not powers (that is why it is called linear, i.e. the equation of a straight line, Ax + By + Cz = 0).
For this set of equations you can use the quadratic formula to solve in terms of a:
x + 3 = a            =>  x = a - 3

y * (2*y - 2) = 2*a  =>  y * (y - 1) = a
                     =>  y**2 - y - a = 0
                     =>  y = (1 +/- (1 + 4*a) ** 0.5) / 2
                           = 0.5 +/- (0.25 + a) ** 0.5
                     (a >= -0.25 for real roots)

z * (4*z - 1) = a    =>  4 * z**2 - z - a = 0
                     =>  z = (1 +/- (1 + 16*a) ** 0.5) / 8
                           = 0.125 +/- (0.015625 + 0.25*a) ** 0.5
                     (a >= -0.0625 for real roots)
then
def solve(a):
    assert a >= -0.0625, "No real solution"
    x = a - 3
    yoffs = (0.25 + a) ** 0.5
    ylo = 0.5 - yoffs
    yhi = 0.5 + yoffs
    zoffs = (0.015625 + 0.25 * a) ** 0.5
    zlo = 0.125 - zoffs
    zhi = 0.125 + zoffs
    return [
        (x, ylo, zlo),
        (x, ylo, zhi),
        (x, yhi, zlo),
        (x, yhi, zhi),
    ]
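For example, plugging the solutions back into the original equations (a quick check with a = 1 chosen arbitrarily):
for x, y, z in solve(1):
    print(x + 3, y * (2 * y - 2), z * (4 * z - 1))
# every line should print values close to (1, 2, 1), i.e. (A, 2*A, A) for A = 1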
You do not have a system of 3 equations with 3 unknowns. You have a system of 3 equations with 4 unknowns: x, y, z and A.
That means your answer will be parameterized on A, because you do not have enough equations to solve for all unknowns.
Solving a general system of polynomial equations can be done by the so-called Groebner basis approach, which is what sympy uses. Here is a snippet on how to use the library to solve this or similar problems:
from sympy.solvers.polysys import solve_poly_system
from sympy.abc import x, y, z, A
f1 = x + 3 - A
f2 = y * (2 * y - 2) - 2 * A
f3 = z * (4 * z - 1) - A
solve_poly_system([f1, f2, f3], x, y, z)
# Outputs:
# [(A - 3, -sqrt(4*A + 1)/2 + 1/2, -sqrt(16*A + 1)/8 + 1/8),
# (A - 3, -sqrt(4*A + 1)/2 + 1/2, sqrt(16*A + 1)/8 + 1/8),
# (A - 3, sqrt(4*A + 1)/2 + 1/2, -sqrt(16*A + 1)/8 + 1/8),
# (A - 3, sqrt(4*A + 1)/2 + 1/2, sqrt(16*A + 1)/8 + 1/8)]
As you can see, the result requires to fix the value of A to be fully determined.
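If you later pick a value for A, the parametrized result can be evaluated directly (a small sketch, with A = 4 chosen arbitrarily):
sols = solve_poly_system([f1, f2, f3], x, y, z)
for sol in sols:
    print(tuple(expr.subs(A, 4) for expr in sol))
# prints (1, 1/2 - sqrt(17)/2, 1/8 - sqrt(65)/8) and the other sign combinations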
Up to now I have always used Mathematica for solving analytical equations. Now, however, I need to solve a few hundred equations of this type (characteristic polynomials)
a_20*x^20 + a_19*x^19 + ... + a_1*x + a_0 = 0 (constant floats a_0, ..., a_20)
at once, which yields awfully long calculation times in Mathematica.
Is there a ready-to-use command in numpy or any other package to solve an equation of this type? (Up to now I have used Python only for simulations, so I don't know much about analytical tools and I couldn't find anything useful in the numpy tutorials.)
You already use numpy (apparently), so you can try numpy.roots, though I've never tried it myself: http://docs.scipy.org/doc/numpy/reference/generated/numpy.roots.html#numpy.roots.
Numpy also provides a polynomial class... numpy.poly1d.
This finds the roots numerically -- if you want the analytical roots, I don't think numpy can do that for you.
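A minimal sketch of that (the cubic x**3 + 2*x + 3 is just an illustration; note that the coefficients are ordered from the highest power down):
import numpy as np
coeffs = [1, 0, 2, 3]      # x**3 + 0*x**2 + 2*x + 3
print(np.roots(coeffs))    # roots are approximately -1 and 0.5 +/- 1.658j (order may vary)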
Here is an example from the sympy docs:
>>> from sympy import *
>>> x, y = symbols('x y')
>>> from sympy import roots, solve_poly_system
>>> solve(x**3 + 2*x + 3, x)
[-1, 1/2 - sqrt(11)*I/2, 1/2 + sqrt(11)*I/2]
>>> p = Symbol('p')
>>> q = Symbol('q')
>>> sorted(solve(x**2 + p*x + q, x))
[-p/2 + sqrt(p**2 - 4*q)/2, -p/2 - sqrt(p**2 - 4*q)/2]
>>> solve_poly_system([y - x, x - 5], x, y)
[(5, 5)]
>>> solve_poly_system([y**2 - x**3 + 1, y*x], x, y)
[(0, I), (0, -I), (1, 0), (-1/2 + sqrt(3)*I/2, 0), (-1/2 - sqrt(3)*I/2, 0)]
(a link to the docs with this example)
You may want to look at SAGE, which is a complete Python distribution designed for mathematical processing. Beyond that, I have used SymPy for somewhat similar matters, as Marcin highlighted.
import decimal as dd

degree = int(input('What is the highest power of x? '))
coeffs = [0] * (degree + 1)
coeffs1 = {}
dd.getcontext().prec = 10

# read the coefficients, highest power first
for ii in range(degree, -1, -1):
    if ii != 0:
        res = dd.Decimal(input('what is the coefficient of x^ %s ? ' % ii))
        coeffs[ii] = res
        coeffs1.setdefault('x^ %s ' % ii, res)
    else:
        res = dd.Decimal(input('what is the constant term ? '))
        coeffs[ii] = res
        coeffs1.setdefault('CT', res)
coeffs = coeffs[::-1]

def contextmg(start, stop, step):
    # generator yielding start, start + step, ... up to (but not including) stop
    r = start
    while r < stop:
        yield r
        r += step

def ell(a, b, c):
    vals = contextmg(a, b, c)
    context = ['%.10f' % it for it in vals]
    return context

labels = [0] * degree
for ll in range(degree):
    labels[ll] = 'x%s' % (ll + 1)

roots = {}
# brute-force scan of x from -20 to 20 in steps of 0.0001
context = ell(-20, 20, 0.0001)
for x in context:
    # evaluate the polynomial at x with Horner's scheme
    for xx in range(degree):
        if xx == 0:
            calculatoR = (coeffs[xx] * dd.Decimal(x)) + coeffs[xx + 1]
        else:
            calculatoR = calculatoR * dd.Decimal(x) + coeffs[xx + 1]
    func = round(float(calculatoR), 2)
    xp = round(float(x), 3)
    # record a root whenever the value is (approximately) zero
    if func == 0 and roots == {}:
        roots[labels[0]] = xp
        labels = labels[1:]
        p = xp
    elif func == 0 and xp > (0.25 + p):
        roots[labels[0]] = xp
        labels = labels[1:]
        p = xp

print(roots)