Up to now I have always used Mathematica for solving equations analytically. Now, however, I need to solve a few hundred equations of this type (characteristic polynomials)
a_20*x^20 + a_19*x^19 + ... + a_1*x + a_0 = 0 (constant floats a_0, ..., a_20)
at once, which leads to awfully long calculation times in Mathematica.
Is there a ready-to-use command in numpy or any other package to solve an equation of this type? (Up to now I have used Python only for simulations, so I don't know much about its analytical tools, and I couldn't find anything useful in the numpy tutorials.)
Apparently you are already looking at numpy; numpy.roots should do what you want, though I've never tried it myself: http://docs.scipy.org/doc/numpy/reference/generated/numpy.roots.html#numpy.roots.
NumPy also provides a polynomial class, numpy.poly1d.
This finds the roots numerically -- if you want the analytical roots, I don't think numpy can do that for you.
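If the coefficients are plain floats, a minimal sketch (the coefficient values below are placeholders, not taken from the question):
import numpy as np

# Coefficients ordered from the highest power down to the constant term:
# a_20*x^20 + ... + a_1*x + a_0
coeffs = np.random.rand(21)   # placeholder values for a_20, ..., a_0
print(np.roots(coeffs))       # the 20 (possibly complex) roots

# The same polynomial as a poly1d object, if that interface is more convenient:
p = np.poly1d(coeffs)
print(p.r)                    # .r also returns the roots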
Here is an example from the SymPy docs:
>>> from sympy import *
>>> x, y = symbols('x y')
>>> from sympy import roots, solve_poly_system
>>> solve(x**3 + 2*x + 3, x)
[-1, 1/2 - sqrt(11)*I/2, 1/2 + sqrt(11)*I/2]
>>> p = Symbol('p')
>>> q = Symbol('q')
>>> sorted(solve(x**2 + p*x + q, x))
[-p/2 + sqrt(p**2 - 4*q)/2, -p/2 - sqrt(p**2 - 4*q)/2]
>>> solve_poly_system([y - x, x - 5], x, y)
[(5, 5)]
>>> solve_poly_system([y**2 - x**3 + 1, y*x], x, y)
[(0, I), (0, -I), (1, 0), (-1/2 + sqrt(3)*I/2, 0), (-1/2 - sqrt(3)*I/2, 0)]
(a link to the docs with this example)
You may want to look at SAGE, which is a complete Python distribution designed for mathematical computing. Beyond that, I have used SymPy for somewhat similar matters, as Marcin highlighted.
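If you only need numerical roots from SymPy (the usual case with float coefficients), something along these lines should work; the polynomial here is an arbitrary example:
import sympy as sp

x = sp.symbols('x')
# Coefficients listed from the highest power down to the constant term.
poly = sp.Poly([2.0, 0.0, -3.5, 1.0], x)   # 2.0*x**3 - 3.5*x + 1.0
print(poly.nroots())                       # numerical roots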
import decimal as dd

degree = int(input('What is the degree of the polynomial (the highest power of x)? '))
coeffs = [0] * (degree + 1)
coeffs1 = {}
dd.getcontext().prec = 10

# Read the coefficients, from the highest power down to the constant term.
for ii in range(degree, -1, -1):
    if ii != 0:
        res = dd.Decimal(input('what is the coefficient of x^ %s ? ' % ii))
        coeffs[ii] = res
        coeffs1.setdefault('x^ %s ' % ii, res)
    else:
        res = dd.Decimal(input('what is the constant term ? '))
        coeffs[ii] = res
        coeffs1.setdefault('CT', res)
coeffs = coeffs[::-1]

def contextmg(start, stop, step):
    # Generate the grid points start, start+step, ... up to stop.
    r = start
    while r < stop:
        yield r
        r += step

def ell(a, b, c):
    vals = contextmg(a, b, c)
    context = ['%.10f' % it for it in vals]
    return context

labels = [0] * degree
for ll in range(degree):
    labels[ll] = 'x%s' % (ll + 1)

# Scan the interval [-20, 20) and record the points where the polynomial
# (evaluated with Horner's scheme) rounds to zero.
roots = {}
context = ell(-20, 20, 0.0001)
for x in context:
    for xx in range(degree):
        if xx == 0:
            calculatoR = (coeffs[xx] * dd.Decimal(x)) + coeffs[xx + 1]
        else:
            calculatoR = calculatoR * dd.Decimal(x) + coeffs[xx + 1]
    func = round(float(calculatoR), 2)
    xp = round(float(x), 3)
    if func == 0 and roots == {}:
        roots[labels[0]] = xp
        labels = labels[1:]
        p = xp
    elif func == 0 and xp > (0.25 + p):
        roots[labels[0]] = xp
        labels = labels[1:]
        p = xp
print(roots)
I want to calculate everything about a quadratic function. Equations for reference:
ax^2 + bx + c
a(x-p)^2 + q
I made 8 possible inputs in tkinter, and I want my program to try to calculate everything it can; otherwise it should return something like "not enough data".
Equations:
delta = b^2-4ac
p = -b/(2a)
p = (x1+x2)/2
q = -delta/(4a)
#if delta>0
x2 = (-b-sqrt(delta))/(2a)
x1 = (-b+sqrt(delta))/(2a)
#if delta=0
x0 = -b/(2a)
#if delta<0 no solutions
#a, b, c are the coefficients.
b = -2ap
c = p^2*a+q
Example:
Input:
p = 3
q = -9
x1 = 0.87868
a = 2
Output:
b = -12
c = 9
x2 = 5.12132
delta = 72
So, for example, I give it [x1, x2, a] and it will calculate [q, p, b, c, and delta] if possible.
Is there a function that I can give all different formulas to and it will try to calculate everything?
For now, my only idea is to brute force it in 'try' or with 'ifs', but I feel like it would take thousands of lines of code, so I won't do that.
I found out that you can use SymPy's solve. This is my solution: I used Eq to define the equations and then solved the system.
from sympy import Eq, solve, sqrt
from sympy import Symbol as sym


def mergeDicts(d1, d2):
    # Assumed helper: user-supplied values override the bare symbols.
    return {**d1, **d2}


def missingVariables(dictOfVariables):
    symbols = dict(a=sym('a'), b=sym('b'), c=sym('c'), p=sym('p'), q=sym('q'),
                   x1=sym('x1'), x2=sym('x2'), delta=sym('delta'))
    var = mergeDicts(symbols, dictOfVariables)
    deltaEQ = Eq((var['b'] ** 2 - 4 * var['a'] * var['c']), var['delta'])
    x1EQ = Eq(((-var['b'] - sqrt(var['delta'])) / (2 * var['a'])), var['x1'])
    x2EQ = Eq(((-var['b'] + sqrt(var['delta'])) / (2 * var['a'])), var['x2'])
    pEQ = Eq((-var['b']) / (2 * var['a']), var['p'])
    pEQ2 = Eq(((var['x1'] + var['x2']) / 2), var['p'])
    qEQ = Eq(((-var['delta']) / (4 * var['a'])), var['q'])
    bEQ = Eq((-2 * var['a'] * var['p']), var['b'])
    cEQ = Eq((var['a'] * var['p'] ** 2 + var['q']), var['c'])
    solution = solve((deltaEQ, x1EQ, x2EQ, pEQ, pEQ2, qEQ, bEQ, cEQ))
    solution = solution[0]
    new_dict = {}
    for k, v in solution.items():
        try:
            new_dict[str(k)] = round(float(v), 4)
        except TypeError:
            new_dict[str(k)] = v
    return new_dict
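For example, supplying the values known from the example in the question (the exact formatting depends on what SymPy returns, so treat this as a sketch):
known = {'a': 2, 'p': 3, 'q': -9}
print(missingVariables(known))
# roughly: {'b': -12.0, 'c': 9.0, 'delta': 72.0, 'x1': 0.8787, 'x2': 5.1213}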
I'm trying to solve a system of three nonlinear equations in Python 3.8 using sympy.nonlinsolve(), but I get the error message "convergence to root failed; try n < 15 or maxsteps > 50".
This is my code:
import sympy as sp
x_1 = 0.0
z_1 = 1.0
x_2 = 15.81
z_2 = 0.99
x_3 = 23.8
z_3 = 0.98
r, x_m, z_m = sp.symbols('r, x_m, z_m', real=True)
Eq_1 = sp.Eq((x_1 - x_m) ** 2 + (z_1 - z_m) ** 2 - r ** 2, 0)
Eq_2 = sp.Eq((x_2 - x_m) ** 2 + (z_2 - z_m) ** 2 - r ** 2, 0)
Eq_3 = sp.Eq((x_3 - x_m) ** 2 + (z_3 - z_m) ** 2 - r ** 2, 0)
ans = sp.nonlinsolve([Eq_1, Eq_2, Eq_3], [r, x_m, z_m])
Any help would be welcome. Thanks in advance.
I get an answer from solve:
In [56]: sp.solve([Eq_1, Eq_2, Eq_3], [r, x_m, z_m])
Out[56]:
[(-5.71609538434502e+18, -4.80343980343979e+15, -5.71609336609336e+18),
 (-19222.9235141152, -4.25370843989772, -19221.9230434783),
 (19222.9235141152, -4.25370843989772, -19221.9230434783),
 (5.71609538434502e+18, -4.80343980343979e+15, -5.71609336609336e+18)]
I'm not sure why nonlinsolve fails, but from the large numbers in the answer I would guess that this system isn't well conditioned.
If you use exact rational numbers then you can get the same solution from both solve and nonlinsolve:
In [59]: import sympy as sp
...:
...: x_1 = 0
...: z_1 = 1
...: x_2 = sp.Rational('15.81')
...: z_2 = sp.Rational('0.99')
...: x_3 = sp.Rational('23.8')
...: z_3 = sp.Rational('0.98')
...:
...: r, x_m, z_m = sp.symbols('r, x_m, z_m', real=True)
...: Eq_1 = sp.Eq((x_1 - x_m) ** 2 + (z_1 - z_m) ** 2 - r ** 2, 0)
...: Eq_2 = sp.Eq((x_2 - x_m) ** 2 + (z_2 - z_m) ** 2 - r ** 2, 0)
...: Eq_3 = sp.Eq((x_3 - x_m) ** 2 + (z_3 - z_m) ** 2 - r ** 2, 0)
...: ans = sp.solve([Eq_1, Eq_2, Eq_3], [r, x_m, z_m])
In [60]: ans
Out[60]:
[(-sqrt(564927076558939081)/39100, -8316/1955, -44210423/2300),
 (sqrt(564927076558939081)/39100, -8316/1955, -44210423/2300)]
This is another of those cases where it is good to emphasize the A of CAS and let it help you as you work through the problem by hand:
Solve the first equation for r**2:
>>> from sympy import solve, Eq
>>> r2 = solve(Eq_1, r**2)
Substitute into the other two equations and expand them:
>>> eqs = [i.subs(r**2, r2[0]).expand() for i in (Eq_2, Eq_3)]
See what you've got:
>>> eqs
[Eq(-31.62*x_m + 0.02*z_m + 249.9362, 0), Eq(-47.6*x_m + 0.04*z_m + 566.4004, 0)]
Those are two linear equations. Solve them with solve -- nonlinsolve is not needed:
>>> xz = solve(eqs); xz
{x_m: -4.25370843989770, z_m: -19221.9230434783}
Substitute into r2, set it equal to r**2, and solve for r:
>>> ris = solve(Eq(r**2, r2[0].subs(xz))); ris
[-19222.9235141152, 19222.9235141152]
Collect the solutions (copying xz so each solution gets its own dict):
>>> soln = []
>>> for i in ris:
...     soln.append({**xz, r: i})
...
>>> soln
[{x_m: -4.25370843989770, z_m: -19221.9230434783, r: -19222.9235141152},
 {x_m: -4.25370843989770, z_m: -19221.9230434783, r: 19222.9235141152}]
[print out has been edited for viewing pleasure]
When solving nonlinear systems, try to reduce the number of equations you have to solve simultaneously. Eliminate linear variables for sure -- and other quantities (r**2 in this case) if possible -- before trying to solve the nonlinear parts.
The very large numbers obtained when solving all three equations at once might be a reflection of the ill-conditioned nature of the system ("not well conditioned", as Oscar noted). Perhaps the problem was designed to teach that point.
I'm trying to calculate sin(x) using Taylor series without using factorials.
import math, time
import matplotlib.pyplot as plot

def sin3(x, i=30):
    x %= 2 * math.pi
    n = 0
    dn = x**2 / 2
    for c in range(4, 2 * i + 4, 2):
        n += dn
        dn *= -x**2 / ((c + 1) * (c + 2))
    return x - n

def draw_graph(start=-800, end=800):
    y = [sin3(i/100) for i in range(start, end)]
    x = [i/100 for i in range(start, end)]
    y2 = [math.sin(i/100) for i in range(start, end)]
    x2 = [i/100 for i in range(start, end)]
    plot.fill_between(x, y, facecolor="none", edgecolor="red", lw=0.7)
    plot.fill_between(x2, y2, facecolor="none", edgecolor="blue", lw=0.7)
    plot.show()
When you run the draw_graph function it uses matplotlib to draw a graph; the red line is the output of my sin3 function, and the blue line is the correct output of math.sin.
As you can see, the curve is not quite right: it doesn't go high or low enough (it seems to peak at about 0.5), and it also shows strange behavior, producing a small peak around 0.25 and then dropping back down. How can I adjust my function to match the output of math.sin?
You have the wrong formula for sin(x), and you also have a broken loop invariant.
The series for sin(x) is x/1! - x^3/3! + x^5/5! - x^7/7! + ..., so I really don't know why you're initializing dn to something involving x^2.
You also want to ask yourself: what is my loop invariant? What is the value of dn when I reach the start of my loop? It is clear from the way you update dn that you expect it to be something of the form x^c / c!. Yet on the very first iteration of the loop, c = 4, while dn involves x^2.
Here is what you meant to write:
def sin3(x, i=30):
    x %= 2 * math.pi
    n = 0
    dn = x
    for c in range(1, 2 * i + 4, 2):
        n += dn
        dn *= -x**2 / ((c + 1) * (c + 2))
    return n
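A quick way to sanity-check the corrected version against the standard library (the sample points are arbitrary):
import math

for t in [0.0, 0.5, 1.0, math.pi / 2, 3.0, 6.0]:
    print(f"{t:5.3f}  sin3={sin3(t):+.6f}  math.sin={math.sin(t):+.6f}")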
I took a cryptography course this semester in graduate school, and one of the topics we covered was NTRU. I am trying to code this in pure Python, purely as a hobby. When I attempt to find a polynomial's inverse modulo p (in this example p = 3), SymPy always returns negative coefficients, when I want strictly positive coefficients. Here is the code I have; I'll explain what I mean.
import sympy as sym
from sympy import GF
def make_poly(N, coeffs):
    """Create a polynomial in x."""
    x = sym.Symbol('x')
    coeffs = list(reversed(coeffs))
    y = 0
    for i in range(N):
        y += (x**i) * coeffs[i]
    y = sym.poly(y)
    return y
N = 7
p = 3
q = 41
f = [1,0,-1,1,1,0,-1]
f_poly = make_poly(N,f)
x = sym.Symbol('x')
Fp = sym.polys.polytools.invert(f_poly,x**N-1,domain=GF(p))
Fq = sym.polys.polytools.invert(f_poly,x**N-1,domain=GF(q))
print('\nf =',f_poly)
print('\nFp =',Fp)
print('\nFq =',Fq)
In this code, f_poly is a polynomial with degree at most 6 (its degree is at most N-1), whose coefficients come from the list f (the first entry in f is the coefficient on the highest power of x, continuing in descending order).
Now, I want to find the inverse polynomial of f_poly in the convolution polynomial ring Rp = (Z/pZ)[x]/(x^N - 1)(Z/pZ)[x] (similarly for q). The output of the print statements at the bottom are:
f = Poly(x**6 - x**4 + x**3 + x**2 - 1, x, domain='ZZ')
Fp = Poly(x**6 - x**5 + x**3 + x**2 + x + 1, x, modulus=3)
Fq = Poly(8*x**6 - 15*x**5 - 10*x**4 - 20*x**3 - x**2 + 2*x - 4, x, modulus=41)
These polynomials are correct modulo p and q, but I would like to have positive coefficients everywhere, since later on in the algorithm there is some centerlifting involved. The results should be
Fp = x^6 + 2x^5 + x^3 + x^2 + x + 1
Fq = 8x^6 + 26x^5 + 31x^4 + 21x^3 + 40x^2 + 2x + 37
The answers I'm getting are correct modulo p and q, but I think that SymPy's invert is changing some of the coefficients to their negative equivalents instead of staying inside the modulus.
Is there any way I can update the coefficients of this polynomial to have only positive coefficients in modulus, or is this just an artifact of SymPy's function? I want to keep the SymPy Poly format so I can use some of its embedded functions later on down the line. Any insight would be much appreciated!
This seems to be down to how the finite field object implemented by GF "wraps" integers around the given modulus. The default behavior is symmetric, which means that any integer x for which x % modulus <= modulus // 2 maps to x % modulus, and otherwise maps to (x % modulus) - modulus. So GF(10)(5) == 5, whereas GF(10)(6) == -4. You can make GF always map to nonnegative numbers instead by passing the symmetric=False argument:
import sympy as sym
from sympy import GF
def make_poly(N, coeffs):
    """Create a polynomial in x."""
    x = sym.Symbol('x')
    coeffs = list(reversed(coeffs))
    y = 0
    for i in range(N):
        y += (x**i) * coeffs[i]
    y = sym.poly(y)
    return y
N = 7
p = 3
q = 41
f = [1,0,-1,1,1,0,-1]
f_poly = make_poly(N,f)
x = sym.Symbol('x')
Fp = sym.polys.polytools.invert(f_poly,x**N-1,domain=GF(p, symmetric=False))
Fq = sym.polys.polytools.invert(f_poly,x**N-1,domain=GF(q, symmetric=False))
print('\nf =',f_poly)
print('\nFp =',Fp)
print('\nFq =',Fq)
Now you'll get the polynomials you wanted. The output from the print(...) statements at the end of the example should look like:
f = Poly(x**6 - x**4 + x**3 + x**2 - 1, x, domain='ZZ')
Fp = Poly(x**6 + 2*x**5 + x**3 + x**2 + x + 1, x, modulus=3)
Fq = Poly(8*x**6 + 26*x**5 + 31*x**4 + 21*x**3 + 40*x**2 + 2*x + 37, x, modulus=41)
Mostly as a note for my own reference, here's how you would get Fp using Mathematica:
Fp = PolynomialMod[Algebra`PolynomialPowerMod`PolynomialPowerMod[x^6 - x^4 + x^3 + x^2 - 1, -1, x, x^7 - 1], 3]
output:
1 + x + x^2 + x^3 + 2 x^5 + x^6
Updated: How do I find the minimum of a function on a closed interval [0, 3.5] in Python? So far I have found the stationary points and their function values, but I am unsure how to pick out the minimum from here.
import sympy as sp
x = sp.symbols('x')
f = (x**3 / 3) - (2 * x**2) + (3 * x) + 1
fprime = f.diff(x)
all_solutions = [(xx, f.subs(x, xx)) for xx in sp.solve(fprime, x)]
print (all_solutions)
Since this PR you should be able to do the following:
from sympy import symbols, Interval
from sympy.calculus.util import minimum, maximum, stationary_points

x = symbols('x')
f = (x**3 / 3) - (2 * x**2) - 3 * x + 1
ivl = Interval(0, 3)
print(minimum(f, x, ivl))
print(maximum(f, x, ivl))
print(stationary_points(f, x, ivl))
Perhaps something like this
from sympy import solveset, symbols, Interval, Min

x = symbols('x')
lower_bound = 0
upper_bound = 3.5
function = (x**3/3) - (2*x**2) - 3*x + 1

# Candidate interior minima are the zeros of the derivative (the critical points).
critical_points = solveset(function.diff(x), x, domain=Interval(lower_bound, upper_bound))
# For a polynomial this is a finite (possibly empty) set; if solveset ever returned an
# infinite set, the comprehension below would hang.
ans = Min(function.subs(x, lower_bound), function.subs(x, upper_bound),
          *[function.subs(x, cp) for cp in critical_points])
Here's a possible solution using sympy:
import sympy as sp
x = sp.Symbol('x', real=True)
f = (x**3 / 3) - (2 * x**2) - 3 * x + 1
#f = 3 * x**4 - 4 * x**3 - 12 * x**2 + 3
fprime = f.diff(x)
all_solutions = [(xx, f.subs(x, xx)) for xx in sp.solve(fprime, x)]
interval = [0, 3.5]
interval_solutions = [s for s in all_solutions
                      if interval[0] <= s[0] <= interval[1]]
print(all_solutions)
print(interval_solutions)
all_solutions gives you all the points where the first derivative is zero; interval_solutions constrains those solutions to the closed interval. This should give you some good clues for finding minima and maxima :-)
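One way to finish from here (a sketch reusing the names above): compare the function's value at the interval endpoints with its value at the critical points that fall inside the interval.
candidates = list(interval_solutions) + [(b, f.subs(x, b)) for b in interval]
x_min, f_min = min(candidates, key=lambda item: item[1])
print(x_min, f_min)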
As a side note, f.subs can display the value of the function at x = 3.5 in two ways: as a floating-point approximation or as an exact fraction.
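For instance (a small reconstruction, since the original commands are not shown):
import sympy as sp

x = sp.Symbol('x')
f = (x**3 / 3) - (2 * x**2) - 3 * x + 1

print(f.subs(x, 3.5))                # -19.7083333333333 (floating-point value)
print(f.subs(x, sp.Rational(7, 2)))  # -473/24 (exact fraction)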