Harder equations (with Derivatives and Integrals) and ConditionSet in SymPy - python

I want to calculate b (a point on the x-axis) for which the length of the curve (function) from 0 to b is equal to 1.
By knowing: https://www.mathsisfun.com/calculus/arc-length.html
(integral from 0 to b) ∫ (1 + (f'(x))^2)^(1/2) dx = 1
and that:
(integral from a to b) ∫ f(x)dx = F(b) - F(a)
We can calculate b by solving
F(b) - F(0) - 1 = 0 (equivalently, 1 + F(0) - F(b) = 0, the form used in the code below), where this is now an equation in terms of x, because b, as I said, is a point on the x-axis.
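Written out, the condition on b is (in LaTeX notation):
\int_0^b \sqrt{1 + \left(f'(x)\right)^2}\, dx = F(b) - F(0) = 1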
So now I tried it for f(x) = x**3 (full code will be below)
F(b) is equal to this monster: https://www.wolframalpha.com/input/?i=integral&assumption=%7B%22C%22%2C+%22integral%22%7D+-%3E+%7B%22Calculator%22%7D&assumption=%7B%22F%22%2C+%22Integral%22%2C+%22integrand%22%7D+-%3E%22%281+%2B+9x%5E4%29%5E%281%2F2%29%22
All I get from SymPy is a ConditionSet, not a number. Of course, a ConditionSet cannot be evaluated by evalf().
So here are my questions:
Did I make a mistake in math?
Is my code wrong and how to improve it?
Is SymPy enough to calculate this?
Did I misunderstand documentation?
Code:
from __future__ import division
import matplotlib.pyplot as plt
from sympy import *
x, y, z = symbols('x y z', real=True)
function1 = x**3
Antiderivative1 = integrate((1+(diff(function1))**2)**(1/2), x)
b = solveset(Eq(1 + Antiderivative1.subs(x, 0).evalf() - Antiderivative1, 0), x)
print(b)
That's the output:
ConditionSet(x, Eq(x*hyper((-0.5, 1/4), (5/4,), 9*x**4*exp_polar(I*pi)) - 4.0*gamma(5/4)/gamma(1/4), 0), Complexes)
Thanks in advance and sorry for mistakes in grammar.

Note that you should use S(1)/2 or Rational(1, 2) (or sqrt) rather than 1/2, which will give you a float in Python. With that we have:
In [16]: integrand = sqrt(1 + ((x**3).diff(x))**2)
In [17]: integrand
Out[17]: sqrt(9*x**4 + 1)
In [18]: antiderivative = integrand.integrate(x)
In [19]: antiderivative
Out[19]: x*gamma(1/4)*hyper((-1/2, 1/4), (5/4,), 9*x**4*exp_polar(I*pi))/(4*gamma(5/4))
While that isn't the same form as the result from Wolfram Alpha it could easily be the same function (up to an additive constant). From this result or the one on Wolfram Alpha I very much doubt that you will find an analytic solution (using SymPy or anything else).
You can however find a numerical solution. First define the equation for the unknown endpoint (still written in terms of x); note that the antiderivative vanishes at 0:
In [20]: equation = Eq(antiderivative - antiderivative.subs(x, 0) - 1, 0)
Unfortunately there is a bug in SymPy's lambdify function that means nsolve doesn't work with this function:
In [22]: nsolve(equation, x, 1)
...
NameError: name 'exp_polar' is not defined
We can do it ourselves with Newton steps though:
In [76]: f = equation.lhs
In [77]: fd = f.diff(x)
In [78]: newton = lambda xi: (xi - f.subs(x, xi)/fd.subs(x, xi)).evalf()
In [79]: xj = 1.0
In [80]: xj = newton(xj); print(xj)
0.826749667942050
In [81]: xj = newton(xj); print(xj)
0.791950624620750
In [82]: xj = newton(xj); print(xj)
0.790708415511451
In [83]: xj = newton(xj); print(xj)
0.790706893629886
In [84]: xj = newton(xj); print(xj)
0.790706893627605
In [85]: xj = newton(xj); print(xj)
0.790706893627605
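For convenience the same iteration can be wrapped in a small loop. This is just a sketch: the helper name newton_solve and the tolerances are illustrative, and equation and x are as defined above.
def newton_solve(f, x, x0, tol=1e-12, maxiter=50):
    # plain Newton iteration on the SymPy expression f in the symbol x
    fd = f.diff(x)
    xi = x0
    for _ in range(maxiter):
        xn = (xi - f.subs(x, xi)/fd.subs(x, xi)).evalf()
        if abs(xn - xi) < tol:
            return xn
        xi = xn
    return xi

b = newton_solve(equation.lhs, x, 1.0)
print(b)  # 0.790706893627605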


Why won't SymPy directly solve for the normalization of this wavefunction?

Here is some minimal code demonstrating the problem.
from sympy import *
x = Symbol('x', real=True)
t = Symbol('t', real=True)
A = Symbol('A', real=True, positive=True)
λ = Symbol('λ', real=True, positive=True)
ω = Symbol('ω', real=True, positive=True)
# Define wavefunction
psi_x_t = A * exp(-λ * Abs(x)) * exp(-I*ω*t)
# Normalize
print(solve(integrate(psi_x_t**2, (x, -oo, oo)) - 1))
print(solve(integrate(psi_x_t**2, (x, -oo, oo)) - 1, A))
When I simply call solve, it returns [{t: 0, λ: A**2}]. What I am looking for is actually A = sqrt(λ). But when I include A as the symbol I am solving for, I don't get plus/minus sqrt(λ) but rather an empty list [].
Why is solve giving me an empty list of solutions, and is there a way to make it directly return the solution for A?
You are not getting a solution because you have declared assumptions on your variables and those assumptions are inconsistent with the solution. Let's try without those assumptions:
In [20]: from sympy import *
...:
...: x = Symbol('x')
...: t = Symbol('t')
...: A = Symbol('A')
...: λ = Symbol('λ')
...: ω = Symbol('ω')
...:
...: # Define wavefunction
...: psi_x_t = A * exp(-λ * Abs(x)) * exp(-I*ω*t)
In [21]: psi_x_t
Out[21]: A*exp(-λ*Abs(x))*exp(-I*t*ω)
In [22]: eq = integrate(psi_x_t**2, (x, -oo, oo)) - 1
In [23]: eq
Out[23]: Piecewise((A**2*exp(-2*I*t*ω)/λ, Abs(arg(λ)) < pi/2),
                   (Integral(A**2*exp(-2*λ*Abs(x))*exp(-2*I*t*ω), (x, -oo, oo)), True)) - 1
I'm guessing that this Piecewise is the reason that you set those assumptions. Let's just manually select the first case of the Piecewise and work from there:
In [26]: eq1 = piecewise_fold(eq).args[0][0]
In [27]: eq1
Out[27]: A**2*exp(-2*I*t*ω)/λ - 1
Now we solve this:
In [28]: solve([eq1], [A])
Out[28]: [(-sqrt(λ)*exp(I*t*ω),), (sqrt(λ)*exp(I*t*ω),)]
Now, for almost all possible values of omega this is not going to be real, which violates your stated assumption that A should be positive.
Presumably the equation you actually wanted comes from integrating abs(psi)^2:
In [34]: from sympy import *
...:
...: x = Symbol('x', real=True)
...: t = Symbol('t', real=True)
...: A = Symbol('A', real=True, positive=True)
...: λ = Symbol('λ', real=True, positive=True)
...: ω = Symbol('ω', real=True, positive=True)
...:
...: # Define wavefunction
...: psi_x_t = A * exp(-λ * Abs(x)) * exp(-I*ω*t)
In [35]: eq = integrate(abs(psi_x_t)**2, (x, -oo, oo)) - 1
In [36]: eq
Out[36]: A**2/λ - 1
In [37]: solve([eq], [A])
Out[37]: [(√λ,)]
Here only the positive root is returned because we assumed A to be positive.
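As a quick sanity check (a sketch under the same assumptions as above), substituting A = sqrt(λ) back into the normalization integral does give 1:
from sympy import symbols, exp, Abs, I, integrate, sqrt, oo

x, t = symbols('x t', real=True)
lam, w = symbols('lamda omega', positive=True)

psi = sqrt(lam) * exp(-lam*Abs(x)) * exp(-I*w*t)
# |psi|**2 drops the complex phase, leaving lam*exp(-2*lam*|x|)
print(integrate(abs(psi)**2, (x, -oo, oo)))  # 1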
Why is solve giving me an empty list of solutions, and is there a way to make it directly return the solution for A?
I can't answer this question. However, when solve fails I usually give solveset a try:
print(solveset(integrate(psi_x_t**2, (x, -oo, oo)) - 1, A))
# {-sqrt(λ)*exp(I*t*ω), sqrt(λ)*exp(I*t*ω)}
solve did not return A as expected because the equation still depends on time; it doesn't pick a particular time at which to solve. If we first set a time, such as t = 0, then it will find a suitable solution.
from sympy import *
x = Symbol('x', real=True)
t = Symbol('t', real=True)
A = Symbol('A', real=True, positive=True)
λ = Symbol('λ', real=True, positive=True)
ω = Symbol('ω', real=True, positive=True)
# Define wavefunction
psi_x_t = A * exp(-λ *Abs(x)) * exp(-I*ω*t)
# Integrate
result = integrate(psi_x_t**2, (x, -oo, oo)) - 1
result = result.subs(t, 0)
# Normalize
print(solve(result))
print(solve(result, A))

SymPy not solving an ODE correctly?

I want to solve the ODE f''(x) + k*f(x) = 0,
which is a trivial ODE to solve (https://www.wolframalpha.com/input?i=f%60%60%28x%29+%2B+kf%28x%29%3D0)
my code is
from sympy import *
x,t,k,L,C1,C2 = symbols("x,t,k,L,C1,C2")
f=symbols('f', cls=Function)
g=symbols('g', cls=Function)
Fx = f(x).diff(x)
Fxx = f(x).diff(x,x)
Gtt = g(t).diff(t,t)
Gt = g(t).diff(t)
BC1 = 0
BC2 = L
Eq1 = Eq(Fxx + k*f(x), 0)
Eq1_k_positive = dsolve(Eq1.subs(k, -k))
display(Eq1_k_positive)
Not really sure why I don't get the solution that I should get. And no, it's not the same when I use BCs: with the exponential form I just get 0, since I don't get the sin/cos equation. Any tips on what's not correct?
This is your differential equation:
In [18]: k, x = symbols('k, x')
In [19]: f = Function('f')
In [20]: eq = Eq(f(x).diff(x, 2) + k*f(x), 0)
In [21]: eq
Out[21]: k*f(x) + Derivative(f(x), (x, 2)) = 0
This is the solution returned by SymPy:
In [22]: dsolve(eq)
Out[22]: f(x) = C1*exp(-x*sqrt(-k)) + C2*exp(x*sqrt(-k))
That solution is correct for any nonzero complex number k.
There can be many equivalent forms to represent the general solution of an ODE. SymPy will choose a different form here if you specify something about the symbol k such as that it is positive:
In [24]: k = symbols('k', positive=True)
In [25]: eq = Eq(f(x).diff(x, 2) + k*f(x), 0)
In [26]: eq
Out[26]: k*f(x) + Derivative(f(x), (x, 2)) = 0
In [27]: dsolve(eq)
Out[27]: f(x) = C₁⋅sin(√k⋅x) + C₂⋅cos(√k⋅x)
This solution is also correct for any nonzero complex number k but will only be returned if k is declared positive because it is only for positive k that there is any reason to prefer the sin/cos form to the exp form.
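If in doubt, you can ask SymPy to verify a returned solution with checkodesol; a result of (True, 0) means the solution satisfies the ODE. A minimal sketch:
from sympy import symbols, Function, Eq, dsolve, checkodesol

x = symbols('x')
k = symbols('k', positive=True)
f = Function('f')

eq = Eq(f(x).diff(x, 2) + k*f(x), 0)
sol = dsolve(eq)
print(checkodesol(eq, sol))  # (True, 0)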

SymPy not simplifying "enough" while differentiating an implicit function

I am trying to find the 1st and 2nd derivatives of the implicit function y = f(x)
which is defined by the equation: exp(sin(x)) - x * exp(sin(y)) = 0
SymPy calculates the 1st derivative, but the answer it gives is a long, complicated expression (omitted here). This expression can be written much more simply as:
(x * cos(x) - 1) / (x * cos(y))
using the fact that x = exp(sin(x)-sin(y))
The answer given for the 2nd derivative is also quite complicated (again a long expression, omitted here). Of course the second derivative can also be simplified quite a lot using the same fact x = exp(sin(x) - sin(y)).
How can I make/force SymPy apply these additional simplifications?
Is that possible even?
Here is my script.
#!/usr/bin/env python
# coding: utf-8
# ### Differentiating an implicit function using SymPy
# In[1]:
import sympy as sp
# In[2]:
sp.__version__
# In[3]:
sp.init_printing(use_latex='mathjax') # use pretty mathjax output
# In[4]:
sp.var('x y z')
F = sp.exp(sp.sin(x)) - x * sp.exp(sp.sin(y))
# In[5]:
f1 = sp.idiff( F, y, x ) # First derivative of y w.r.t. x
f1
# In[6]:
sp.simplify(f1)
# In[7]:
f2 = sp.idiff( F, y, x, 2) # Second derivative of y w.r.t. x
f2
sp.simplify(f2)
# In[ ]:
And also, here is an even simpler example which shows this undesired behavior.
#!/usr/bin/env python
# coding: utf-8
# ### Differentiating an implicit function using SymPy
# In[1]:
import sympy as sp
# In[2]:
sp.__version__
# In[3]:
sp.init_printing(use_latex='mathjax') # use pretty mathjax output
# In[4]:
sp.var('x y')
F = sp.ln(sp.sqrt(x**2 + y**2)) - sp.atan(y / x)
# In[5]:
f1 = sp.idiff( F, y, x ) # First derivative of y w.r.t. x
f1
# In[6]:
sp.simplify(f1)
# In[7]:
f2 = sp.idiff( F, y, x, 2) # Second derivative of y w.r.t. x
f2
sp.simplify(f2)
# In[ ]:
The second derivative here is also given as a complicated expression (omitted here), which obviously can be simplified further even without using any special facts.
You can simplify the expressions yourself. In the first example you can just choose a term to eliminate and solve F for that:
In [42]: F
Out[42]: -x*exp(sin(y)) + exp(sin(x))
In [43]: solve(F, exp(sin(y)))
Out[43]: [exp(sin(x))/x]
In [44]: [esy] = solve(F, exp(sin(y)))
In [45]: f2.subs(exp(sin(y)), esy)
Out[45]: (-x*sin(x) + x*sin(y)*cos(x)**2/cos(y)**2 - 2*sin(y)*cos(x)/cos(y)**2 + sin(y)/(x*cos(y)**2) + 1/x)/(x*cos(y))
You can apply further simplification operations from there.
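For example, applying the same substitution to the first derivative recovers the simple form quoted in the question. A sketch (recomputing f1 with idiff as in the original script):
from sympy import symbols, exp, sin, solve, idiff, simplify

x, y = symbols('x y')
F = exp(sin(x)) - x*exp(sin(y))

f1 = idiff(F, y, x)                # dy/dx from the implicit equation
[esy] = solve(F, exp(sin(y)))      # exp(sin(y)) == exp(sin(x))/x
print(simplify(f1.subs(exp(sin(y)), esy)))  # (x*cos(x) - 1)/(x*cos(y))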
In the second example you can just call factor:
In [47]: f2
Out[47]: 2*(x**2 + y**2)/(x**3 - 3*x**2*y + 3*x*y**2 - y**3)
In [48]: factor(f2)
Out[48]: 2*(x**2 + y**2)/(x - y)**3

Find min and max angle of an ellipse in Python

I have an equation describing an ellipse. I want to use Python to find the minimum and maximum angle theta and the length of the axis Lc of the ellipse for further calculations.
The parameters Lc1, theta1, etc. are the axis lengths for the respective angles.
I tried sympy.solve without success (the result didn't make any sense) and am now trying sympy.solveset.
This is my current script; it is followed by further calculations which need theta.
import math
import sympy as sym
from sympy import solve, Eq
from sympy import *
import matplotlib.pyplot as plt
#define parameters
#Lc modelled with matlab before
Lc1 = 0.67 / 1000
Lc2 = 0.36/ 1000
Lc3 = 0.7 / 1000
#orientation of the cut section
theta1 = 89
theta2 = -35
theta3 = 25
#calculation from Beaudoin et al. (2016) and Ebner et al. (2010)
dL = ((Lc2 - Lc1)/(Lc3 - Lc1))
c = math.atan(- (((dL * (math.sin(2 * theta3) - math.sin(2 * theta1))) - (math.sin(2 * theta2) - math.sin(2* theta1)))/ ((dL * (math.cos(2 * theta3) - math.cos(2 * theta1))) - (math.cos(2 * theta2) - math.cos(2 * theta1)))))
b = ((Lc2 - Lc1)/((math.sin(2 * theta2 + c)) - (math.sin(2 * theta1 + c))))
a = Lc1 - (b * math.sin(2 * theta1 + c))
print('Done calculating a, b and c')
##try to find theta at min and max Lc
#theta = sym.symbols('theta')
theta = var('theta')
#define the equation that gives us our crossover length
Lc = Eq(a + b * sym.sin(2 * theta + c), 0)
dLc = Eq(2 * b * sym.cos(2 * theta + c) + sym.sin(2 * theta + c), 0)
print('Busy finding minimum and maximum.')
sol = solveset(dLc, theta)
sol
#now we have the derivation we can go find min and max of Lc
Using solveset I received no result so far, because Python doesn't stop running.
I am not sure if I can get a reliable result with my current script or not. What is wrong? Is there a more efficient way? I'd be glad if anyone could help me!
Thx in advance!
Since this equation has no symbolic parameters and has floating point coefficients I guess that you just want a numeric solution so you can use nsolve:
In [4]: nsolve(dLc, theta, 1.2)
Out[4]: 1.22165851958244
The initial guess (1.2) comes from looking at a plot of the function (plot(dLc.lhs)).
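That is (a sketch, assuming dLc and theta as defined in the question):
from sympy import pi
from sympy.plotting import plot

# visually locate where the derivative crosses zero to pick an initial guess
plot(dLc.lhs, (theta, -pi, pi))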
Note that your equation looks like this:
In [26]: dLc
Out[26]: sin(2⋅θ + 0.697151824925716) + 0.0011237899722686⋅cos(2⋅θ + 0.697151824925716) = 0
We can solve this in terms of arbitrary symbols rather than particular numbers:
In [27]: a, b = symbols('a, b', real=True)
In [28]: eq = sin(2*theta + a) + b*cos(2*theta + a)
In [29]: solve(eq, theta)
Out[29]: [-a/2 - atan((sqrt(b**2 + 1) - 1)/b), -a/2 + atan((sqrt(b**2 + 1) + 1)/b)]
That gives two solutions, one of which is negative and one positive. The other solutions come from adding multiples of pi.
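As a quick check (a sketch using the numeric phase and coefficient from the printed equation above), evaluating the second, positive root numerically reproduces the nsolve result:
from math import atan, sqrt

a_val = 0.697151824925716    # the phase printed in the equation above
b_val = 0.0011237899722686   # the coefficient printed in the equation above

theta_max = -a_val/2 + atan((sqrt(b_val**2 + 1) + 1)/b_val)
print(theta_max)  # 1.2216..., matching nsolve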
The solution was much easier than I expected:
Having the ellipse parameters a, b and c, the maximum is simply L_maximum = a + b, at the angle theta = math.radians(45) - c/2.
Plus: the input angles theta1, ... have to be converted to radians as well, with math.radians().

How to solve a pair of nonlinear equations using Python?

What's the (best) way to solve a pair of nonlinear equations using Python (NumPy, SciPy or SymPy)?
eg:
x + y^2 = 4
e^x + xy = 3
A code snippet which solves the above pair would be great.
For a numerical solution, you can use fsolve:
http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fsolve.html#scipy.optimize.fsolve
from scipy.optimize import fsolve
import math

def equations(p):
    x, y = p
    return (x + y**2 - 4, math.exp(x) + x*y - 3)

x, y = fsolve(equations, (1, 1))
print(equations((x, y)))  # residuals; should be close to (0, 0)
If you prefer sympy you can use nsolve.
>>> from sympy import symbols, exp, nsolve
>>> x, y = symbols('x y')
>>> nsolve([x+y**2-4, exp(x)+x*y-3], [x, y], [1, 1])
[0.620344523485226]
[1.83838393066159]
The first argument is a list of equations, the second is list of variables and the third is an initial guess.
Short answer: use fsolve
As mentioned in other answers the simplest solution to the particular problem you have posed is to use something like fsolve:
from scipy.optimize import fsolve
from math import exp
def equations(vars):
    x, y = vars
    eq1 = x + y**2 - 4
    eq2 = exp(x) + x*y - 3
    return [eq1, eq2]
x, y = fsolve(equations, (1, 1))
print(x, y)
Output:
0.6203445234801195 1.8383839306750887
Analytic solutions?
You say how to "solve" but there are different kinds of solution. Since you mention SymPy I should point out the biggest difference in what this could mean, which is between analytic and numeric solutions. The particular example you have given is one that does not have an (easy) analytic solution, but other systems of nonlinear equations do. When there are readily available analytic solutions, SymPy can often find them for you:
from sympy import *
x, y = symbols('x, y')
eq1 = Eq(x+y**2, 4)
eq2 = Eq(x**2 + y, 4)
sol = solve([eq1, eq2], [x, y])
Output:
[(-(-5/2 - sqrt(17)/2)*(3/2 - sqrt(17)/2), -sqrt(17)/2 - 1/2),
 (-(-5/2 + sqrt(17)/2)*(3/2 + sqrt(17)/2), -1/2 + sqrt(17)/2),
 (-(-3/2 + sqrt(13)/2)*(sqrt(13)/2 + 5/2), 1/2 + sqrt(13)/2),
 (-(5/2 - sqrt(13)/2)*(-sqrt(13)/2 - 3/2), 1/2 - sqrt(13)/2)]
Note that in this example SymPy finds all solutions and does not need to be given an initial estimate.
You can evaluate these solutions numerically with evalf:
soln = [tuple(v.evalf() for v in s) for s in sol]
[(-2.56155281280883, -2.56155281280883), (1.56155281280883, 1.56155281280883), (-1.30277563773199, 2.30277563773199), (2.30277563773199, -1.30277563773199)]
Precision of numeric solutions
However most systems of nonlinear equations will not have a suitable analytic solution, so using SymPy as above is great when it works but not generally applicable. That is why we end up looking for numeric solutions, even though numeric solutions come with caveats:
1) We have no guarantee that we have found all solutions or the "right" solution when there are many.
2) We have to provide an initial guess which isn't always easy.
Having accepted that we want numeric solutions, something like fsolve will normally do all you need. For this kind of problem SymPy will probably be much slower, but it can offer something else: finding the (numeric) solutions more precisely:
from sympy import *
x, y = symbols('x, y')
nsolve([Eq(x+y**2, 4), Eq(exp(x)+x*y, 3)], [x, y], [1, 1])
⎡0.620344523485226⎤
⎢ ⎥
⎣1.83838393066159 ⎦
With greater precision:
nsolve([Eq(x+y**2, 4), Eq(exp(x)+x*y, 3)], [x, y], [1, 1], prec=50)
⎡0.62034452348522585617392716579154399314071550594401⎤
⎢ ⎥
⎣ 1.838383930661594459049793153371142549403114879699 ⎦
Try this one, I assure you that it will work perfectly.
import scipy.optimize as opt
from numpy import exp
import timeit

st1 = timeit.default_timer()

def f(variables):
    (x, y) = variables
    first_eq = x + y**2 - 4
    second_eq = exp(x) + x*y - 3
    return [first_eq, second_eq]

solution = opt.fsolve(f, (0.1, 1))
print(solution)

st2 = timeit.default_timer()
print("RUN TIME : {0}".format(st2 - st1))
->
[ 0.62034452 1.83838393]
RUN TIME : 0.0009331008900937708
FYI, as mentioned above, you can also use Broyden's approximation by replacing fsolve with broyden1. It works; I did it. I don't know exactly how Broyden's approximation works, but it took 0.02 s.
And I recommend that you do not use SymPy's functions here: convenient indeed, but in terms of speed they are quite slow. You will see.
An alternative to fsolve is root:
import numpy as np
from scipy.optimize import root
def your_funcs(X):
    x, y = X
    # all RHS have to be 0
    f = [x + y**2 - 4,
         np.exp(x) + x * y - 3]
    return f
sol = root(your_funcs, [1.0, 1.0])
print(sol.x)
This will print
[0.62034452 1.83838393]
If you then check
print(your_funcs(sol.x))
you obtain
[4.4508396968012676e-11, -1.0512035686360832e-11]
confirming that the solution is correct.
I got Broyden's method to work for coupled non-linear equations (generally involving polynomials and exponentials) in IDL, but I haven't tried it in Python:
http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.broyden1.html#scipy.optimize.broyden1
scipy.optimize.broyden1
scipy.optimize.broyden1(F, xin, iter=None, alpha=None, reduction_method='restart', max_rank=None, verbose=False, maxiter=None, f_tol=None, f_rtol=None, x_tol=None, x_rtol=None, tol_norm=None, line_search='armijo', callback=None, **kw)[source]
Find a root of a function, using Broyden’s first Jacobian approximation.
This method is also known as “Broyden’s good method”.
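For reference, here is a minimal sketch of broyden1 applied to the system from the question (the starting point [1.0, 1.0] is just a guess):
import numpy as np
from scipy.optimize import broyden1

def F(v):
    x, y = v
    return [x + y**2 - 4, np.exp(x) + x*y - 3]

print(broyden1(F, [1.0, 1.0]))  # expected to land near [0.6203, 1.8384]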
You can use the openopt package and its NLP method. It has many algorithms to solve nonlinear algebraic equations, including: goldenSection, scipy_fminbound, scipy_bfgs, scipy_cg, scipy_ncg, amsg2p, scipy_lbfgsb, scipy_tnc, bobyqa, ralg, ipopt, scipy_slsqp, scipy_cobyla, lincher, algencan.
Some of these algorithms can solve constrained nonlinear programming problems.
So, you can introduce your system of equations to openopt.NLP() with a function like this:
lambda x: [x[0] + x[1]**2 - 4, np.exp(x[0]) + x[0]*x[1] - 3]
from scipy.optimize import fsolve
def double_solve(f1, f2, x0, y0):
    func = lambda x: [f1(x[0], x[1]), f2(x[0], x[1])]
    return fsolve(func, [x0, y0])

def n_solve(functions, variables):
    func = lambda x: [f(*x) for f in functions]
    return fsolve(func, variables)

f1 = lambda x, y: x**2 + y**2 - 1
f2 = lambda x, y: x - y

res = double_solve(f1, f2, 1, 0)
res = n_solve([f1, f2], [1.0, 0.0])
You can use nsolve of sympy, meaning numerical solver.
Example snippet:
from sympy import *
L = 4.11 * 10 ** 5
nu = 1
rho = 0.8175
mu = 2.88 * 10 ** -6
dP = 20000
eps = 4.6 * 10 ** -5
Re, D, f = symbols('Re, D, f')
nsolve((Eq(Re, rho * nu * D / mu),
        Eq(dP, f * L / D * rho * nu ** 2 / 2),
        Eq(1 / sqrt(f), -1.8 * log((eps / D / 3.) ** 1.11 + 6.9 / Re))),
       (Re, D, f), (1123, -1231, -1000))
where (1123, -1231, -1000) is the initial vector for the root search. The imaginary parts of the result are very small, around 10^(-20), so we can treat them as zero, which means the roots are all real: Re ≈ 13602.938, D ≈ 0.047922 and f ≈ 0.0057.
