I am using scipy.optimize's fsolve function to solve for two unknowns in two equations. The equations I ultimately want to solve are (much) more complex, but I already struggle to understand the following basic example.
import scipy.optimize as scopt

def fun(variables):
    (x, y) = variables
    eqn_1 = x ** 2 + y - 4
    eqn_2 = x + y ** 2 + 3
    return [eqn_1, eqn_2]

result = scopt.fsolve(fun, (0.1, 1))
print(result)
This gives the result [-2.08470396 -0.12127194]. However, when I plug those numbers back into the equations (once assuming the first value is x, once assuming it is y), I get results far from zero.
print((-2.08470396)**2 - 0.12127194 - 4)
print((-2.08470396) + (- 0.12127194) ** 2 + 3)
Results: 0.22 and 0.93.
print((-0.12127194)**2 -2.08470396 - 4)
print((-0.12127194) + (-2.08470396) ** 2 + 3)
Results: -6.06 and 7.22.
What am I missing here?
Did you notice the warning that is generated when you run result = scopt.fsolve(fun, (0.1, 1))? That warning tells you that something failed:
In [35]: result = scopt.fsolve(fun, (0.1, 1))
/Users/warren/a202111/lib/python3.9/site-packages/scipy/optimize/minpack.py:175: RuntimeWarning: The iteration is not making good progress, as measured by the
improvement from the last ten iterations.
warnings.warn(msg, RuntimeWarning)
The problem is that there is no solution to fun(variables) = (0, 0). The first equation gives y = 4-x**2, and then the second equation can be written x + (4-x**2)**2 + 3 = 0, which has no real solution (you can plot the left side or do some algebra to convince yourself of that).
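For instance, a quick numerical check (a sketch, not part of the original answer) shows that the left side stays well above zero:
import numpy as np

# g(x) = x + (4 - x**2)**2 + 3 is the second equation after eliminating y.
x = np.linspace(-5, 5, 100001)
g = x + (4 - x**2)**2 + 3
print(g.min())  # about 0.98, attained near x ~ -2.03, so g never reaches 0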
If you use, say, eqn_2 = x + y ** 2 - 3, fsolve gives a valid numerical solution:
In [36]: def fun(variables):
    ...:     (x, y) = variables
    ...:     eqn_1 = x ** 2 + y - 4
    ...:     eqn_2 = x + y ** 2 - 3
    ...:     return [eqn_1, eqn_2]
    ...:
In [37]: result = scopt.fsolve(fun, (0.1, 1))
In [38]: result
Out[38]: array([-1.38091841, 2.09306436])
In [39]: fun(result)
Out[39]: [0.0, 0.0]
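Incidentally, rather than relying on the RuntimeWarning, you can detect a failed solve programmatically; a sketch using fsolve's full_output option:
sol, info, ier, mesg = scopt.fsolve(fun, (0.1, 1), full_output=True)
if ier != 1:
    # For the original (unsolvable) system this branch is taken and
    # mesg contains the "not making good progress" message.
    print("fsolve did not converge:", mesg)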
I have x vs y data. I made a non-linear estimation of that data, and the estimated function is: 25342x^9 - 155900x^8 + 409218x^7 - 599317x^6 + 537190x^5 - 303116x^4 + ... + 274
I used this answer's suggestion to find the upper limit of my integral by using the below code:
p_optimal = estimate_function_from_data_points()
from sympy import integrate, solve
from sympy.abc import x, u
f = 25342.695344882944*x**9 - 155900.56387247072*x**8 + 409218.9290579793*x**7 - 599317.5264117827*x**6 + 537190.6784517929*x**5 - 303116.0648042093*x**4 + 105493.81468203208*x**3 - 20422.11374996729*x**2 + 1263.9293528900394*x + 274.55521542679185
lower = 0.0925 # Initial
upper = u
eq = integrate(f, (x, lower, upper))
eq, solve(eq + 100, u)
Out: [-0.114399781774514 - 0.112912224139529*I,
-0.114399781774514 + 0.112912224139529*I,
0.145632609024802 - 0.532284754794354*I,
0.145632609024802 + 0.532284754794354*I,
0.646926125977188 - 0.679233975801008*I,
0.646926125977188 + 0.679233975801008*I,
1.20499184950745 - 0.534200757552949*I,
1.20499184950745 + 0.534200757552949*I,
1.53445822404458 - 0.201823934360761*I,
1.53445822404458 + 0.201823934360761*I]
I get 10 results (the integral of a 9th-order polynomial is a 10th-order polynomial), and all of them are complex numbers because I don't have a nice function like the one in the linked answer. What methods can I use to get a real-number solution?
Edit: When I run the below code, I get 100 as a result of the integral. So, the upper limit should be around 0.945
lower = 0.0925 # Initial
upper = 0.945
integrate(f, (x, lower, upper)).evalf()
Out: 100.016292426307
If you're looking for eq to be equal to 100, then you should solve Eq(eq, 100) or eq - 100 rather than eq + 100 (since that asks for eq to equal -100). With that:
In [14]: solve(eq - 100, u)
Out[14]:
[-0.212948010713551, 0.944530756107237,
 0.0760056875453759 - 0.410894283883493⋅ⅈ, 0.0760056875453759 + 0.410894283883493⋅ⅈ,
 0.530254482189976 - 0.556134814597481⋅ⅈ, 0.530254482189976 + 0.556134814597481⋅ⅈ,
 1.07772724094452 - 0.347706425047349⋅ⅈ, 1.07772724094452 + 0.347706425047349⋅ⅈ,
 1.3678302434028 - 0.115266894498628⋅ⅈ, 1.3678302434028 + 0.115266894498628⋅ⅈ]
Note that the first two roots are real (one negative and one positive).
You can ask solve to return only real or only positive roots by setting assumptions on u:
In [23]: u = symbols('u', positive=True)
In [24]: eq = integrate(f, (x, 0.0925, u))
In [25]: solve(eq - 100, u)
Out[25]: [0.944530756107237]
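Similarly, real=True should keep both real roots (a sketch, reusing f and the same lower limit):
In [26]: u = symbols('u', real=True)
In [27]: solve(integrate(f, (x, 0.0925, u)) - 100, u)
Out[27]: [-0.212948010713551, 0.944530756107237]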
Here the equation you are solving is really numerical, but you are using SymPy's solve function, which is intended to find exact analytic solutions. Here are some faster ways to solve this using SymPy's real_roots, nroots and nsolve functions:
In [15]: [r.n(3) for r in real_roots(eq - 100)]
Out[15]: [-0.213, 0.945]
In [16]: nroots(eq - 100)
Out[16]:
[-0.212948010713551, 0.944530756105865,
 0.0760056875453764 - 0.410894283883493⋅ⅈ, 0.0760056875453764 + 0.410894283883493⋅ⅈ,
 0.530254482189946 - 0.556134814597486⋅ⅈ, 0.530254482189946 + 0.556134814597486⋅ⅈ,
 1.07772724094385 - 0.34770642504758⋅ⅈ, 1.07772724094385 + 0.34770642504758⋅ⅈ,
 1.36783024340417 - 0.115266894502763⋅ⅈ, 1.36783024340417 + 0.115266894502763⋅ⅈ]
In [17]: nsolve(eq - 100, u, 0.9)
Out[17]: 0.944530756105865
I have code that calculates some mathematical equations, and when I look at the simplified results, SymPy cannot equate 2.0 with 2 inside a power, which is logical since one is a float and the other is an integer. But it was SymPy's decision where to put these two values, not mine.
Here is the expression in my results that SymPy is not simplifying:
from sympy import *
x = symbols('x')
y = -exp(2.0*x) + exp(2*x)
print(simplify(y)) # output is -exp(2.0*x) + exp(2*x)
y = -exp(2*x) + exp(2*x)
print(simplify(y)) # output is 0
y = -2.0*x + 2*x
print(simplify(y)) # output is 0
y = -x**2.0 + x**2
print(simplify(y)) # output is -x**2.0 + x**2
Is there any way to work around this problem? I am looking for a way to make SymPy treat every number other than symbols as a float, preventing it from deciding which is a float and which is an integer.
This problem has been asked before by Gerardo Suarez, but without a satisfactory answer.
There is another sympy function you can use called nsimplify. When I run your examples they all return zero:
from sympy import *
x = symbols("x")
y = -exp(2.0 * x) + exp(2 * x)
print(nsimplify(y)) # output is 0
y = -exp(2 * x) + exp(2 * x)
print(nsimplify(y)) # output is 0
y = -2.0 * x + 2 * x
print(nsimplify(y)) # output is 0
y = -(x ** 2.0) + x ** 2
print(nsimplify(y)) # output is 0
Update
As @Shoaib Mirzaei mentioned, you can also use the rational argument of the simplify() function, like this:
simplify(y, rational=True)
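A minimal sketch of that on the first example from the question:
from sympy import symbols, exp, simplify

x = symbols("x")
y = -exp(2.0 * x) + exp(2 * x)
print(simplify(y, rational=True))  # output is 0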
I am trying to apply numpy to this code I wrote for trapezium rule integration:
import numpy as np

def integral(a, b, n):
    delta = (b - a) / float(n)
    s = 0.0
    s += np.sin(a) / (a * 2)
    for i in range(1, n):
        s += np.sin(a + i * delta) / (a + i * delta)
    s += np.sin(b) / (b * 2.0)
    return s * delta
I am trying to get the return value from the new function something like this:
return delta *((2 *np.sin(x[1:-1])) +np.sin(x[0])+np.sin(x[-1]) )/2*x
I am trying for a long time now to make any breakthrough but all my attempts failed.
One thing I attempted, and do not understand, is why the following code gives a "too many indices for array" error:
def integral(a, b, n):
    d = (b - a) / float(n)
    x = np.arange(a, b, d)
    J = np.where(x[:, 1] < np.sin(x[:, 0]) / x[:, 0])[0]
Every hint/advice is very much appreciated.
You forgot to sum over sin(x):
>>> def integral(a, b, n):
...     x, delta = np.linspace(a, b, n+1, retstep=True)
...     y = np.sin(x)
...     y[0] /= 2
...     y[-1] /= 2
...     return delta * y.sum()
...
>>> integral(0, np.pi / 2, 10000)
0.9999999979438324
>>> integral(0, 2 * np.pi, 10000)
0.0
>>> from scipy.integrate import quad
>>> quad(np.sin, 0, np.pi / 2)
(0.9999999999999999, 1.1102230246251564e-14)
>>> quad(np.sin, 0, 2 * np.pi)
(2.221501482512777e-16, 4.3998892617845996e-14)
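As a side note on the "too many indices for array" error from the question: np.arange returns a 1-D array, so 2-D indexing like x[:, 1] fails. A minimal sketch:
>>> x = np.arange(0.0, 1.0, 0.1)
>>> x.ndim
1
>>> x[:, 1]
Traceback (most recent call last):
  ...
IndexError: too many indices for array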
In the meantime I tried this, too.
import numpy as np

def T_n(a, b, n, fun):
    delta = (b - a) / float(n)               # step width
    x_i = lambda a, i, delta: a + i * delta  # calculate x_i
    return 0.5 * delta * \
        (2 * sum(fun(x_i(a, np.arange(0, n + 1), delta)))
         - fun(x_i(a, 0, delta))
         - fun(x_i(a, n, delta)))
I reconstructed the code using the formulas at the bottom of this page: https://matheguru.com/integralrechnung/trapezregel.html
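For reference, the rule in question is the standard trapezoid rule, T_n = (delta / 2) * (f(x_0) + 2*f(x_1) + ... + 2*f(x_{n-1}) + f(x_n)), with delta = (b - a)/n.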
The summation over range(0, n+1) - which gives [0, 1, ..., n] - is implemented with numpy. Usually you would collect the values in a for loop in plain Python, but numpy's vectorized behaviour can be used here: np.arange(0, n+1) gives np.array([0, 1, ..., n]). When that array is passed to the function (here abstracted as fun), the formula is evaluated for x_0 through x_n and the results are collected in a numpy array. So fun(x_i(...)) returns a numpy array of the function applied to x_0 through x_n, which is then summed up by sum().
The entire sum() is multiplied by 2, and then the function values at x_0 and x_n are subtracted, since in the trapezoid formula only the middle summands, but not the first and the last, are multiplied by 2. This was kind of a hack.
The linked German page uses fun(x) = x^2 + 3 as its example function, which can be nicely defined on the fly using a lambda expression:
fun = lambda x: x ** 2 + 3
a = -2
b = 3
n = 6
You could use a normal function definition instead: def fun(x): return x ** 2 + 3.
So I tested by typing the command:
T_n(a, b, n, fun)
Which correctly returned:
Out[172]: 27.24537037037037
For your case, just assign np.sin to fun and pass your values for a, b, and n into this function call.
Like:
fun = np.sin  # everywhere `fun` appears inside the function, it behaves
              # as if `np.sin` stood there - this is possible because
              # Python treats its functions as first-class citizens
a = #your value
b = #your value
n = #your value
Finally, you can call:
T_n(a, b, n, fun)
And it will work!
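For instance, with values assumed here just for illustration (the integral of sin from 0 to pi/2 is exactly 1):
fun = np.sin
a = 0
b = np.pi / 2
n = 10000
print(T_n(a, b, n, fun))  # close to 1.0, matching quad earlier in the thread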
I'm trying to get SymPy to solve a system of equations but it gives me an error saying:
NotImplementedError: could not solve 3*sin(3*t0/2)*tan(t0) + 2*cos(3*t0/2) - 4
Is there another way for me to be able to solve the system of equations:
sin(x)+(y-x)cos(x) = 0
-1.5(y-x)sin(1.5x)+cos(1.5x) = 2
I used:
from sympy import *

x, y = symbols('x y')
solve([sin(x) + (y - x)*cos(x), -1.5*(y - x)*sin(1.5*x) + cos(1.5*x) - 2], x, y)
SymPy could do better with this equation, but ultimately it's equivalent to some 10th degree polynomial the roots of which can only be represented abstractly. I'll describe the steps one can take and show how far SymPy can go. It's a semi-manual solution process which should be more automatic.
First of all, don't put 1.5, or other floating point numbers, in the equations. Instead, introduce a coefficient a = Rational(3, 2) and use that:
eq = [sin(x) + (y-x)*cos(x), -a*(y-x)*sin(a*x) + cos(a*x) - 2]
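For reference, a minimal setup under which the steps below run (a sketch; the answer itself introduces u and v along the way via var):
from sympy import (symbols, sin, cos, tan, Rational, expand_trig,
                   solveset, S, N, atan)

x, y, u, v = symbols('x y u v')
a = Rational(3, 2)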
Variable y can be eliminated using the first equation: y=x-tan(x), which is easy for us to see, but SymPy sometimes misses the opportunity. Let's help it:
eq1 = eq[1].subs(y, x-tan(x)) # 3*sin(3*x/2)*tan(x)/2 + cos(3*x/2) - 2
As is, solve and solveset (an alternative SymPy solver) give up on the equation because of this mix of trigonometric functions of different arguments. Some of us remember from school days that trigonometric functions can be expressed as rational functions of the tangent of half-argument, so let's do that: rewrite the equation in terms of tan.
eq2 = eq1.rewrite(tan) # (-tan(3*x/4)**2 + 1)/(tan(3*x/4)**2 + 1) - 2 + 3*tan(3*x/4)*tan(x)/(tan(3*x/4)**2 + 1)
As mentioned, this halves the argument. Having fractions like x/4 in trig functions is bad. Introduce a new symbol, var('u'), and make u = x/4:
eq3 = eq2.subs(x, 4*u) # (-tan(3*u)**2 + 1)/(tan(3*u)**2 + 1) - 2 + 3*tan(3*u)*tan(4*u)/(tan(3*u)**2 + 1)
Now we can expand all these tangents in terms of tan(u), using expand_trig. The equation gets longer:
eq4 = expand_trig(eq3) # (1 - (-tan(u)**3 + 3*tan(u))**2/(-3*tan(u)**2 + 1)**2)/(1 + (-tan(u)**3 + 3*tan(u))**2/(-3*tan(u)**2 + 1)**2) - 2 + 3*(-4*tan(u)**3 + 4*tan(u))*(-tan(u)**3 + 3*tan(u))/((1 + (-tan(u)**3 + 3*tan(u))**2/(-3*tan(u)**2 + 1)**2)*(-3*tan(u)**2 + 1)*(tan(u)**4 - 6*tan(u)**2 + 1))
But it's also simpler because tan(u) can be treated as another unknown, say v.
eq5 = eq4.subs(tan(u), v) # (1 - (-v**3 + 3*v)**2/(-3*v**2 + 1)**2)/(1 + (-v**3 + 3*v)**2/(-3*v**2 + 1)**2) - 2 + 3*(-4*v**3 + 4*v)*(-v**3 + 3*v)/((1 + (-v**3 + 3*v)**2/(-3*v**2 + 1)**2)*(-3*v**2 + 1)*(v**4 - 6*v**2 + 1))
Great, now we have a rational function. It can be handled with solveset(eq5, v). By default solveset gives all complex solutions and we need only real roots among them, so let's specify the domain as Reals:
vsol = list(solveset(eq5, v, domain=S.Reals))
There is no algebraic formula for these, so they are recorded somewhat abstractly but these are actual numbers we can work with:
[CRootOf(3*v**10 + 9*v**8 - 78*v**6 + 22*v**4 - 21*v**2 + 1, 0),
CRootOf(3*v**10 + 9*v**8 - 78*v**6 + 22*v**4 - 21*v**2 + 1, 1),
CRootOf(3*v**10 + 9*v**8 - 78*v**6 + 22*v**4 - 21*v**2 + 1, 2),
CRootOf(3*v**10 + 9*v**8 - 78*v**6 + 22*v**4 - 21*v**2 + 1, 3)]
For example, we can go back to x and y now, and evaluate the solutions:
xsol = [4*atan(v) for v in vsol]
ysol = [x - tan(x) for x in xsol]
numsol = [(N(x), N(y)) for x, y in zip(xsol, ysol)]
Numeric values are
[(-4.35962510714700, -1.64344290066272),
(-0.877886785847899, 0.326585146723377),
(0.877886785847899, -0.326585146723377),
(4.35962510714700, 1.64344290066272)]
Of course there are infinitely more because the tangent is periodic. Finally, let's check these actually work:
residuals = [[e.subs({x: xv, y: yv}) for e in eq] for xv, yv in numsol]
These are a bunch of numbers of order 1e-15 or less, so yes, the equations hold within machine precision.
Unlike a purely numeric solution we'd get from SciPy or other numeric solvers, these can be evaluated with any accuracy without repeating the process. For example, 50 digits of the first x-solution:
xsol[0].evalf(50) # -4.3596251071470021258397061103704574594477338857831
Just for the fun of it here is a manual solution that only needs solving a polynomial of degree 5:
Write t = x/2, a = y-x, s = sin t, c = cos t, S = sin x and
C = cos x.
Then the given equations can be rewritten
(1) 2 sc + a (c^2 - s^2) = 0
(2) 3 a s^3 - 9 a c^2 s - 6 c s^2 + 2 c^3 = 4
Multiplying (1) by 3 s and adding to (2):
(3) -6 a c^2 s + 2 c^3 = 4
Next we substitute a = -S / C and use S = 2sc and s^2 = 1 - c^2:
(4) 12 c^3 (1 - c^2) / C + 2 c^3 = 4
Multiply with C = 2 c^2 - 1:
(5) c^3 (12 - 12 c^2 + 4 c^2 - 2) = 8 c^2 - 4
Finally,
(6) 4 c^5 - 5 c^3 + 4 c^2 - 2 = 0
This has a pair of complex solutions, one real solution outside the domain of the cosine and another two solutions which give the four principal solutions for x.
(7) c_1/2 = 0.90520121, -0.57206084
(8) x_1/2/3/4 = +/- 2 arccos(c_1/2)
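A quick numerical check of this derivation (a sketch using NumPy, not part of the original answer):
import numpy as np

# Real roots of (6): 4c^5 - 5c^3 + 4c^2 - 2 = 0.
roots = np.roots([4, 0, -5, 4, 0, -2])
c = roots[np.abs(roots.imag) < 1e-9].real
c = c[np.abs(c) <= 1]  # drop the real root outside the range of cosine
x = np.sort(np.concatenate([-2 * np.arccos(c), 2 * np.arccos(c)]))
print(x)  # approximately [-4.3596, -0.8779, 0.8779, 4.3596]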
I am solving a system of transcendental equations:
cos(x) / x = 0.48283 + a*3.46891
cos(y) / y = 0.47814 + b*28.6418
a + b = 1
1.02 * sinc(x) = 1.03 * sinc(y)
And it just so happens that I tried to solve the above system in two separate programming languages (Mathematica and Python).
Mathematica
Running the code
FindRoot[{Cos[x]/x == 0.482828 + a*3.46891,
          Cos[y]/y == 0.47814 + b*28.6418,
          a + b == 1,
          1.02*Sinc[x] == 1.03*Sinc[y]},
         {{x, .2}, {y, .2}, {a, 0.3}, {b, 0.3}}, PrecisionGoal -> 6]
returns
{x -> 0.261727, y -> 0.355888, a -> 0.924737, b -> 0.0752628}
Python
Running the code:
import numpy as np
from scipy.optimize import root
def fuuu(X, en, dv, tri, sti):
    x, y, a, b = X
    F = [np.cos(x) / x - en - a*dv,
         np.cos(y) / y - tri - b*sti,
         a + b - 1,
         1.02 * np.sinc(x) - 1.03 * np.sinc(y)]
    return F
root(fuuu, [0.2, 0.2, 0.3, 0.3], args=(0.482828,3.46891,0.47814,28.6418)).x
returns
array([ 0.26843418, 0.27872813, 0.89626625, 0.10373375])
Comparison
Let's say that the x value is the same; let's just ignore the small difference. But the y values differ by miles! The physical meaning completely changes. For some reason I believe the values from Mathematica more than I believe the values from Python.
Questions:
Why do the calculations differ?
Which one is now correct? What do I have to change in python (assuming python is the problematic one)?
The calculations differ because of the sinc function.
(* Mathematica *)
In[1] := Sinc[0.26843418]
Out[1] = 0.988034
# Python
>>> np.sinc(0.26843418)
0.88561519683835599
>>> np.sin(0.26843418) / 0.26843418
0.98803370932709034
Huh? Well, let's RTFM:
numpy.sinc(x)
Return the sinc function.
The sinc function is sin(πx)/(πx).
Oops. NumPy's sinc has a different definition than Mathematica's Sinc.
Mathematica's Sinc uses the unnormalized definition sin(x)/x. This definition is usually used in mathematics and physics.
NumPy's sinc uses the normalized version sin(πx)/(πx). This definition is usually used in digital signal processing and information theory. It is called normalized because
∫_{-∞}^{∞} sin(πx)/(πx) dx = 1.
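Indeed, rescaling the argument recovers the unnormalized definition (a quick check):
>>> x = 0.26843418
>>> np.isclose(np.sinc(x / np.pi), np.sin(x) / x)
True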
Therefore, if you want NumPy to produce the same result as Mathematica, you need to divide x and y by np.pi.
def fuuu(X, en, dv, tri, sti):
    x, y, a, b = X
    F = [np.cos(x) / x - en - a*dv,
         np.cos(y) / y - tri - b*sti,
         a + b - 1,
         1.02 * np.sinc(x/np.pi) - 1.03 * np.sinc(y/np.pi)]  # <---
    return F
>>> root(fuuu, [0.2, 0.2, 0.3, 0.3], args=(0.482828,3.46891,0.47814,28.6418)).x
array([ 0.26172691, 0.3558877 , 0.92473722, 0.07526278])
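As a sanity check (a sketch, reusing the corrected fuuu), the residuals at the returned solution should be essentially zero:
>>> sol = root(fuuu, [0.2, 0.2, 0.3, 0.3], args=(0.482828, 3.46891, 0.47814, 28.6418))
>>> print(np.abs(sol.fun).max())  # maximum residual; ~0 up to machine precision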