I wrote the program below in Python with the hope of performing a Helmholtz decomposition of a vector V(x,z) = [f(x,z), 0, 0], where f(x,z) is a function defined earlier. The aim is to get the solenoidal and harmonic parts of V as S(x,z) = [S1(x,z), S2(x,z), S3(x,z)] and H(x,z) = [H1(x,z), H2(x,z), H3(x,z)], with S and H satisfying V = S + H, which translates to (S1 + H1 = f, S2 + H2 = 0, S3 + H3 = 0).
Please help, I can't get anywhere with this problem. The output of the code isn't what I wanted; it is the following:
Solenoidal:
[[-22.6179559436889 + 41.14742726254I, 33.243161684442 - 99.9416505604629I, -22.6179559436889 + 41.14742726254I], [0.000151144774536593 + 0.000222403457962539I, 0, -0.000151144774536593 - 0.000222403457962539I], [22.6210744289585 - 41.1540953247099I, -41.2442631673893 + 88.1909008014634I, 6.6295316668479 - 64.6849359328842I]]
Harmonic:
[[26.6155393446675 - 35.2651619174123I, -33.243161684442 + 99.9416505604629I, 18.6203725427103 - 47.0296926076676I], [-0.000151144774536593 - 0.000222403457962539I, 0, 0.000151144774536593 + 0.000222403457962539I], [-18.6231887384308 + 47.0368054767535I, 41.2442631673893 - 88.1909008014634I, -10.6274173573755 + 58.8022257808406I]]
import math
import numpy as np
from sympy import symbols, simplify, lambdify
# Define x and z as symbolic variables
x, z = symbols('x, z')
# Define the function f
def f(x, z):
    term1 = 171.05 * 10**(-18) * ((1.00 * x**4 + 2.00 * x**2 * z**2 + 1.00 * z**4) * math.atan(z*x) - 1.00 * x**3 * z - 1.00 * x * z**3)
    term2 = -3.17 * 10**6 * x**4 - 6.36 * 10**6 * x**2 * z**2 - 3.19 * 10**6 * z**4 + 1.00 * x**4 * z + 2.00 * x**2 * z**3 + 1.00 * z**5
    term3 = (z - 44.33 * 10**3)
    term4 = ((-2.00 * 10**3) / (576.30 * 10**3 + 13.00 * z))**2.69 * (x**2 + z**2)**7.00 / 2.00 * z
    return term1 * term2 * term3 / (term4 + 1e-15)  # add a small value to term4 to avoid division by zero
# Sample f on a 3x3 grid (the loop variables x and z shadow the symbols here, so the entries are plain numbers)
vector = np.array([[f(x, z) for x in range(-1, 2)] for z in range(-1, 2)])
def helmholtz_hodge_decomposition(vector):
    # Compute the gradient of the vector field
    gradient = np.gradient(vector)
    # Compute the curl of the vector field
    curl = np.cross(gradient[0], gradient[1])
    # Compute the divergence of the vector field
    divergence = np.sum(gradient, axis=0)
    # Compute the harmonic part of the vector field
    harmonic = -curl - divergence
    # Compute the solenoidal part of the vector field
    solenoidal = vector - harmonic
    return solenoidal, harmonic
# Print the solenoidal and harmonic parts as functions of x and z
solenoidal, harmonic = helmholtz_hodge_decomposition(vector)
print("Solenoidal:")
print(simplify(solenoidal))
print("Harmonic:")
print(simplify(harmonic))
# Create functions from the solenoidal and harmonic parts
solenoidal_part = lambdify((x, z), simplify(solenoidal), 'numpy')
harmonic_part = lambdify((x, z), simplify(harmonic), 'numpy')
Expecting: a Helmholtz decomposition of the vector V(x,z) = [f(x,z), 0, 0], where f(x,z) is the function defined earlier, i.e. the solenoidal and harmonic parts of V as S(x,z) = [S1(x,z), S2(x,z), S3(x,z)] and H(x,z) = [H1(x,z), H2(x,z), H3(x,z)], with S and H satisfying V = S + H, which translates to (S1 + H1 = f, S2 + H2 = 0, S3 + H3 = 0).
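For reference, np.gradient followed by np.cross and np.sum does not compute a Helmholtz decomposition, and mixing SymPy symbols into a 3x3 NumPy array will not yield functions of x and z. Below is a minimal numerical sketch of the standard FFT-based split into a divergence-free (solenoidal) part and a curl-free remainder, assuming the field is sampled on a uniform periodic grid; helmholtz_fft and the grid assumptions are illustrative, not part of the original code:
import numpy as np

def helmholtz_fft(vx, vz):
    # Split a 2D field (vx, vz) into a divergence-free part and a curl-free part.
    nx, nz = vx.shape
    kx = np.fft.fftfreq(nx).reshape(nx, 1)
    kz = np.fft.fftfreq(nz).reshape(1, nz)
    k2 = kx**2 + kz**2
    k2[0, 0] = 1.0  # avoid 0/0; the k = 0 (mean) mode carries no direction

    vx_h, vz_h = np.fft.fft2(vx), np.fft.fft2(vz)
    # projecting v_hat onto k gives the curl-free (compressive) part;
    # the factor i from the spectral derivative cancels in the projection
    proj = (kx * vx_h + kz * vz_h) / k2
    cx_h, cz_h = kx * proj, kz * proj
    # the remainder is divergence-free (solenoidal)
    sx = np.real(np.fft.ifft2(vx_h - cx_h))
    sz = np.real(np.fft.ifft2(vz_h - cz_h))
    cx = np.real(np.fft.ifft2(cx_h))
    cz = np.real(np.fft.ifft2(cz_h))
    return (sx, sz), (cx, cz)
Sampling f on a grid, setting vx = f_grid and vz = 0, and calling helmholtz_fft then returns two fields whose sum reproduces V up to floating-point error. Note the usual spectral split is solenoidal plus irrotational, not solenoidal plus harmonic.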
I am trying to simplify
cos(phi) + cos(phi - 2*pi/3)*e^(I*2*pi/3) + cos(phi - 4*pi/3)*e^(I*4*pi/3)
which I know reduces down to 1.5e^(I*phi)
I cannot get SymPy to recognize this. I have tried simplify, trigsimp, expand, etc. But nothing seems to work. Any suggestions?
Here is my code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import sympy as sp
from sympy import I
sp.init_printing()
phi = sp.symbols(r'\phi', real=True)
vec = sp.cos(phi) + sp.cos(phi - 2*sp.pi/3)*sp.exp(I*2*sp.pi/3) + sp.cos(phi - 4*sp.pi/3)*sp.exp(I*4*sp.pi/3)
vec.simplify()
vec.rewrite(sp.exp).simplify()
vec.rewrite(sp.exp).expand().simplify()
None of these produce the expected result.
I can confirm my result manually, by substituting values in for phi like this:
sp.simplify(vec.rewrite(sp.exp).simplify() - 3/2*sp.exp(I*phi)).evalf(subs={phi:3})
It's not obvious but you can get there like this:
In [40]: phi = symbols('phi', real=True)
In [41]: e = cos(phi) + cos(phi - 2*pi/3)*E**(I*2*pi/3) + cos(phi - 4*pi/3)*E**(I*4*pi/3)
In [42]: e
Out[42]:
-exp(-2*I*pi/3)*sin(phi + pi/6) + cos(phi) - exp(2*I*pi/3)*cos(phi + pi/3)
In [43]: e.rewrite(exp).expand().rewrite(sin).expand().rewrite(exp)
Out[43]:
3*exp(I*phi)/2
This is definitely an example in which doing the computation by hand might be faster than exploring how to do it with SymPy.
Anyway, this is how I achieve that:
# assuming the names were imported: from sympy import exp, root, I, pi
vec.rewrite(exp).simplify().subs(
    root(-1, 6),
    root(-1, 6).rewrite(exp)
).expand().subs(
    I * exp(-5 * I * pi / 6),
    (I * exp(-5 * I * pi / 6)).simplify().rewrite(exp)
).simplify().expand()
# out: 3*exp(I*\phi)/2
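Either way, a quick numerical spot check of the closed form can confirm it (a small sketch; the sample points are arbitrary):
from sympy import symbols, cos, exp, pi, I

phi = symbols('phi', real=True)
vec = cos(phi) + cos(phi - 2*pi/3)*exp(2*I*pi/3) + cos(phi - 4*pi/3)*exp(4*I*pi/3)
target = 3*exp(I*phi)/2

# the difference should be numerically zero for any phi
for val in (0.3, 1.7, 2.9):
    print(complex((vec - target).subs(phi, val).evalf()))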
I need an algorithm that solves systems like this:
Example 1:
5x - 6y = 0 <--- line
(10- x)**2 + (10- y)**2 = 2 <--- circle
Solution:
find y:
(10- 6/5*y)**2 + (10- y)**2 = 2
100 - 24y + 1.44y**2 + 100 - 20y + y**2 = 2
2.44y**2 - 44y + 198 = 0
D = b**2 - 4ac
D = 44*44 - 4*2.44*198 = 3.52
y[1,2] = (-b+-sqrt(D))/2a
y[1,2] = (44+-1.8761)/4.88 = 9.4008 , 8.6319
find x (substitute y = 5/6*x):
(10 - x)**2 + (10 - 5/6*x)**2 = 2
100 - 20x + x**2 + 100 - 50/3*x + 25/36*x**2 = 2
1.6944x**2 - 36.6666x + 198 = 0
D = b**2 - 4ac
D = 36.6666*36.6666 - 4*1.6944*198 = 2.4444
x[1,2] = (-b+-sqrt(D))/2a
x[1,2] = (36.6666+-1.5635)/3.3888 = 11.2810 , 10.3583
My skills are not enough to write this algorithm; please help.
I also need another algorithm that solves this system:
5x - 6y = 0 <--- line
|-10 - x| + |-10 - y| = 2 <--- rhombus
As the answer I need two x values and two y values.
You can use sympy, Python's symbolic math library.
Solutions for fixed parameters
from sympy import symbols, Eq, solve
x, y = symbols('x y', real=True)
eq1 = Eq(5 * x - 6 * y, 0)
eq2 = Eq((10 - x) ** 2 + (10 - y) ** 2, 2)
solutions = solve([eq1, eq2], (x, y))
print(solutions)
for x, y in solutions:
    print(f'{x.evalf()}, {y.evalf()}')
This leads to two solutions:
[(660/61 - 6*sqrt(22)/61, 550/61 - 5*sqrt(22)/61),
(6*sqrt(22)/61 + 660/61, 5*sqrt(22)/61 + 550/61)]
10.3583197613288, 8.63193313444070
11.2810245009662, 9.40085375080520
The other equations work very similarly:
eq1 = Eq(5 * x - 6 * y, 0)
eq2 = Eq(Abs(-10 - x) + Abs(-10 - y), 2)
leading to :
[(-12, -10),
(-108/11, -90/11)]
-12.0000000000000, -10.0000000000000
-9.81818181818182, -8.18181818181818
Dealing with arbitrary parameters
For your new question, how to deal with arbitrary parameters, sympy can help to find formulas, at least when the structure of the equations is fixed:
from sympy import symbols, Eq, Abs, solve
x, y = symbols('x y', real=True)
a, b, xc, yc = symbols('a b xc yc', real=True)
r = symbols('r', real=True, positive=True)
eq1 = Eq(a * x - b * y, 0)
eq2 = Eq((xc - x) ** 2 + (yc - y) ** 2, r ** 2)
solutions = solve([eq1, eq2], (x, y))
Studying the generated solutions, some complicated expressions are repeated. Those could be substituted by auxiliary variables. Note that this step isn't necessary, but helps a lot in making sense of the solutions. Also note that substitution in sympy often only considers quite literal replacements. That's why the introduction of c below is done in two steps:
c, d = symbols('c d', real=True)
for xi, yi in solutions:
    print(xi.subs(a ** 2 + b ** 2, c)
            .subs(r ** 2 * a ** 2 + r ** 2 * b ** 2, c * r ** 2)
            .subs(-a ** 2 * xc ** 2 + 2 * a * b * xc * yc - b ** 2 * yc ** 2 + c * r ** 2, d)
            .simplify())
    print(yi.subs(a ** 2 + b ** 2, c)
            .subs(r ** 2 * a ** 2 + r ** 2 * b ** 2, c * r ** 2)
            .subs(-a ** 2 * xc ** 2 + 2 * a * b * xc * yc - b ** 2 * yc ** 2 + c * r ** 2, d)
            .simplify())
Which gives the formulas:
x1 = b*(a*yc + b*xc - sqrt(d))/c
y1 = a*(a*yc + b*xc - sqrt(d))/c
x2 = b*(a*yc + b*xc + sqrt(d))/c
y2 = a*(a*yc + b*xc + sqrt(d))/c
These formulas can then be converted to regular Python code that no longer needs sympy; it will work for an arbitrary line and circle. Some tests need to be added, such as whether c == 0 (meaning the line equation is degenerate), and whether d is zero, positive or negative.
The stand-alone code could look like:
import math

def give_solutions(a, b, xc, yc, r):
    # intersection between a line a*x - b*y == 0 and a circle with center (xc, yc) and radius r
    c = a ** 2 + b ** 2
    if c == 0:
        print("degenerate line equation given")
    else:
        d = -a**2 * xc**2 + 2*a*b * xc*yc - b**2 * yc**2 + c * r**2
        if d < 0:
            print("no solutions")
        elif d == 0:
            print("1 solution:")
            print(f"  x1 = {b*(a*yc + b*xc)/c}")
            print(f"  y1 = {a*(a*yc + b*xc)/c}")
        else:  # d > 0
            print("2 solutions:")
            sqrt_d = math.sqrt(d)
            print(f"  x1 = {b*(a*yc + b*xc - sqrt_d)/c}")
            print(f"  y1 = {a*(a*yc + b*xc - sqrt_d)/c}")
            print(f"  x2 = {b*(a*yc + b*xc + sqrt_d)/c}")
            print(f"  y2 = {a*(a*yc + b*xc + sqrt_d)/c}")
For the rhombus, sympy doesn't seem to be able to work well with Abs in the equations. However, you could use equations for the 4 sides and test whether the obtained intersections actually lie on the rhombus. (The four sides are obtained by replacing each Abs with either + or -, giving four combinations; a sketch follows below.)
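A minimal sketch of that approach for the rhombus in the question (the filtering step and variable names are my own illustration):
from sympy import symbols, Eq, Abs, solve

x, y = symbols('x y', real=True)

# the four candidate side lines of |x + 10| + |y + 10| == 2,
# obtained by replacing each absolute value with + or -
points = set()
for sx in (1, -1):
    for sy in (1, -1):
        side = Eq(sx * (x + 10) + sy * (y + 10), 2)
        for s in solve([Eq(5 * x - 6 * y, 0), side], (x, y), dict=True):
            # keep only intersections that actually lie on the rhombus
            if Abs(s[x] + 10) + Abs(s[y] + 10) == 2:
                points.add((s[x], s[y]))

print(points)  # {(-12, -10), (-108/11, -90/11)}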
Working this out further is far beyond the reach of a typical Stack Overflow answer, especially as you seem to ask for an even more general solution.
I want to equate these differential equations. I know I can solve them easily on paper, but I want to know how to do it in Python:
from sympy import symbols, Function, Symbol, Eq, Derivative, solve
from IPython.display import display
P = Function("P")
Q = Symbol('Q')
Q_d = Symbol("Q_d")
Q_s = Symbol("Q_s")
t = Symbol("t")
dy2 = 3 * Derivative(P(t), t,2)
dy1 = Derivative(P(t), t)
eq1 = Eq(dy2 + dy1 - P(t) + 9,Q_d)
display(eq1)
dy2_ = 5 * Derivative(P(t), t,2)
dy1_ = -Derivative(P(t), t)
eq2 = Eq(dy2_ + dy1_ +4* P(t) -1 ,Q_s)
display(eq2)
-P(t) + d/dt P(t) + 3*d^2/dt^2 P(t) + 9 = Q_d
4*P(t) - d/dt P(t) + 5*d^2/dt^2 P(t) - 1 = Q_s
These are basically "supply and demand" equations; the result is:
2*d^2/dt^2 P(t) = 2*d/dt P(t) - 5*P(t) + 10
How can I find this result? I know Sympy "Solve" can do such a thing:
solve((eq1,eq2), (x, y))
But in this case, I don't know how to apply it.
I assume you get what you call the result by setting Qs = Qd and subtracting the equations? It can be rewritten as
(2*d/dt P(t) - 5*P(t) + 10) - 2*d^2/dt^2 P(t) = 0
which you can obtain in sympy doing
>>> eq1.lhs - eq2.lhs
-5*P(t) + 2*Derivative(P(t), t) - 2*Derivative(P(t), (t, 2)) + 10
where lhs returns the left-hand side of the equation.
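If you additionally want P(t) itself, you could hand the combined equation to dsolve; a small sketch, assuming Q_d = Q_s so the right-hand sides cancel:
from sympy import Function, Eq, Derivative, symbols, dsolve

t = symbols('t')
P = Function('P')

# eq1.lhs - eq2.lhs == 0, written out explicitly
combined = Eq(-5 * P(t) + 2 * Derivative(P(t), t)
              - 2 * Derivative(P(t), (t, 2)) + 10, 0)
print(dsolve(combined, P(t)))
# P(t) = (C1*sin(3*t/2) + C2*cos(3*t/2))*exp(t/2) + 2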
What's the (best) way to solve a pair of nonlinear equations using Python (NumPy, SciPy, or SymPy)?
eg:
x+y^2 = 4
e^x + xy = 3
A code snippet which solves the above pair will be great
For a numerical solution, you can use fsolve:
http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fsolve.html#scipy.optimize.fsolve
from scipy.optimize import fsolve
import math
def equations(p):
    x, y = p
    return (x + y**2 - 4, math.exp(x) + x*y - 3)

x, y = fsolve(equations, (1, 1))
print(equations((x, y)))
If you prefer sympy you can use nsolve.
>>> nsolve([x+y**2-4, exp(x)+x*y-3], [x, y], [1, 1])
[0.620344523485226]
[1.83838393066159]
The first argument is a list of equations, the second is list of variables and the third is an initial guess.
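For completeness, the snippet above assumes x and y were already created; a self-contained version might look like this:
from sympy import symbols, exp, nsolve

x, y = symbols('x y')
sol = nsolve([x + y**2 - 4, exp(x) + x*y - 3], [x, y], [1, 1])
print(sol)
# Matrix([[0.620344523485226], [1.83838393066159]])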
Short answer: use fsolve
As mentioned in other answers the simplest solution to the particular problem you have posed is to use something like fsolve:
from scipy.optimize import fsolve
from math import exp
def equations(vars):
    x, y = vars
    eq1 = x + y**2 - 4
    eq2 = exp(x) + x*y - 3
    return [eq1, eq2]
x, y = fsolve(equations, (1, 1))
print(x, y)
Output:
0.6203445234801195 1.8383839306750887
Analytic solutions?
You say how to "solve" but there are different kinds of solution. Since you mention SymPy, I should point out the biggest difference between what this could mean, namely analytic versus numeric solutions. The particular example you have given has no (easy) analytic solution, but other systems of nonlinear equations do. When analytic solutions are readily available, SymPy can often find them for you:
from sympy import *
x, y = symbols('x, y')
eq1 = Eq(x+y**2, 4)
eq2 = Eq(x**2 + y, 4)
sol = solve([eq1, eq2], [x, y])
Output:
[(-1/2 - sqrt(17)/2, -1/2 - sqrt(17)/2),
 (-1/2 + sqrt(17)/2, -1/2 + sqrt(17)/2),
 (1/2 - sqrt(13)/2, 1/2 + sqrt(13)/2),
 (1/2 + sqrt(13)/2, 1/2 - sqrt(13)/2)]
Note that in this example SymPy finds all solutions and does not need to be given an initial estimate.
You can evaluate these solutions numerically with evalf:
soln = [tuple(v.evalf() for v in s) for s in sol]
[(-2.56155281280883, -2.56155281280883), (1.56155281280883, 1.56155281280883), (-1.30277563773199, 2.30277563773199), (2.30277563773199, -1.30277563773199)]
Precision of numeric solutions
However, most systems of nonlinear equations do not have a suitable analytic solution, so using SymPy as above is great when it works but not generally applicable. That is why we end up looking for numeric solutions, even though with numeric solutions:
1) We have no guarantee that we have found all solutions or the "right" solution when there are many.
2) We have to provide an initial guess which isn't always easy.
Having accepted that we want numeric solutions, something like fsolve will normally do all you need. For this kind of problem SymPy will probably be much slower, but it can offer something else: finding the (numeric) solutions more precisely:
from sympy import *
x, y = symbols('x, y')
nsolve([Eq(x+y**2, 4), Eq(exp(x)+x*y, 3)], [x, y], [1, 1])
⎡0.620344523485226⎤
⎢ ⎥
⎣1.83838393066159 ⎦
With greater precision:
nsolve([Eq(x+y**2, 4), Eq(exp(x)+x*y, 3)], [x, y], [1, 1], prec=50)
⎡0.62034452348522585617392716579154399314071550594401⎤
⎢ ⎥
⎣ 1.838383930661594459049793153371142549403114879699 ⎦
Try this one, I assure you that it will work perfectly.
import scipy.optimize as opt
from numpy import exp
import timeit
st1 = timeit.default_timer()
def f(variables):
    (x, y) = variables
    first_eq = x + y**2 - 4
    second_eq = exp(x) + x*y - 3
    return [first_eq, second_eq]
solution = opt.fsolve(f, (0.1,1) )
print(solution)
st2 = timeit.default_timer()
print("RUN TIME : {0}".format(st2-st1))
->
[ 0.62034452 1.83838393]
RUN TIME : 0.0009331008900937708
FYI: as mentioned above, you can also use Broyden's approximation by replacing fsolve with broyden1. It works; I tried it.
I don't know exactly how Broyden's approximation works, but it took 0.02 s.
And I recommend not using SymPy's functions: convenient indeed, but in terms of speed they're quite slow. You will see.
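A sketch of that broyden1 variant (same equations as above; the starting guess is mine, and a poor one can raise a NoConvergence error):
from scipy.optimize import broyden1
from math import exp

def equations(p):
    x, y = p
    return [x + y**2 - 4, exp(x) + x*y - 3]

x, y = broyden1(equations, [1.0, 1.0])
print(x, y)  # approximately 0.62034 1.83838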
An alternative to fsolve is root:
import numpy as np
from scipy.optimize import root
def your_funcs(X):
    x, y = X
    # all RHS have to be 0
    f = [x + y**2 - 4,
         np.exp(x) + x * y - 3]
    return f
sol = root(your_funcs, [1.0, 1.0])
print(sol.x)
This will print
[0.62034452 1.83838393]
If you then check
print(your_funcs(sol.x))
you obtain
[4.4508396968012676e-11, -1.0512035686360832e-11]
confirming that the solution is correct.
I got Broyden's method to work for coupled non-linear equations (generally involving polynomials and exponentials) in IDL, but I haven't tried it in Python:
http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.broyden1.html#scipy.optimize.broyden1
scipy.optimize.broyden1(F, xin, iter=None, alpha=None, reduction_method='restart', max_rank=None, verbose=False, maxiter=None, f_tol=None, f_rtol=None, x_tol=None, x_rtol=None, tol_norm=None, line_search='armijo', callback=None, **kw)
Find a root of a function, using Broyden’s first Jacobian approximation.
This method is also known as “Broyden’s good method”.
You can use the openopt package and its NLP method. It offers many solver algorithms for nonlinear problems, including:
goldenSection, scipy_fminbound, scipy_bfgs, scipy_cg, scipy_ncg, amsg2p, scipy_lbfgsb, scipy_tnc, bobyqa, ralg, ipopt, scipy_slsqp, scipy_cobyla, lincher, algencan, which you can choose from.
Some of the latter algorithms can solve constrained nonlinear programming problems.
So, you can introduce your system of equations to openopt.NLP() with a function like this:
lambda x: [x[0] + x[1]**2 - 4, np.exp(x[0]) + x[0]*x[1] - 3]
from scipy.optimize import fsolve
def double_solve(f1, f2, x0, y0):
    func = lambda x: [f1(x[0], x[1]), f2(x[0], x[1])]
    return fsolve(func, [x0, y0])

def n_solve(functions, variables):
    func = lambda x: [f(*x) for f in functions]
    return fsolve(func, variables)

f1 = lambda x, y: x**2 + y**2 - 1
f2 = lambda x, y: x - y

res = double_solve(f1, f2, 1, 0)
res = n_solve([f1, f2], [1.0, 0.0])
You can use nsolve from sympy, its numerical solver.
Example snippet:
from sympy import *
L = 4.11 * 10 ** 5
nu = 1
rho = 0.8175
mu = 2.88 * 10 ** -6
dP = 20000
eps = 4.6 * 10 ** -5
Re, D, f = symbols('Re, D, f')
nsolve((Eq(Re, rho * nu * D / mu),
        Eq(dP, f * L / D * rho * nu**2 / 2),
        Eq(1 / sqrt(f), -1.8 * log((eps / D / 3.)**1.11 + 6.9 / Re))),
       (Re, D, f), (1123, -1231, -1000))
where (1123, -1231, -1000) is the initial guess for the root. The imaginary parts of the result are very small, around 10^(-20), so we can consider them zero, which means the roots are all real: Re ≈ 13602.938, D ≈ 0.047922 and f ≈ 0.0057.