I am trying to find the root of a function on the interval [0, pi/2]. All the bracketing algorithms in scipy have this condition: f(a) and f(b) must have opposite signs.
In my case f(0)*f(pi/2) > 0. Is there any solution? To be precise, I don't need solutions outside [0, pi/2].
The function:
from numpy import sin, cos

def dG(thetaf, psi, gamma):
    return 0.35*((cos(psi))**2)*(2*sin(3*thetaf/2 + 2*gamma) + (1 + 4*sin(gamma)**2)*sin(thetaf/2) - sin(3*thetaf/2)) + (sin(psi)**2)*sin(thetaf/2)
Based on the comments and on @Mike Graham's answer, you can do something that checks where the changes of sign are. Given y = dG(x, psi, gamma):
x[y[:-1]*y[1:] < 0]
will return the positions where there is a change of sign. You can use an iterative process to find the roots numerically, up to the error tolerance that you need:
import numpy as np
from numpy import sin, cos

def find_roots(f, a, b, args=(), errTOL=1e-6):
    x = np.linspace(a, b, 100)
    while True:
        y = f(x, *args)
        pos = y[:-1]*y[1:] < 0          # True where the sign changes
        if not np.any(pos):
            print('No roots in this interval')
            return np.array([])
        err = np.abs(y[pos]).max()
        if err <= errTOL:
            roots = 0.5*x[:-1][pos] + 0.5*x[1:][pos]
            return roots
        # Refine: resample each bracketing subinterval more finely
        inf_sup = zip(x[:-1][pos], x[1:][pos])
        x = np.hstack([np.linspace(inf, sup, 10) for inf, sup in inf_sup])
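For instance, you can call it with the dG above; the parameter values psi = 0.5 and gamma = 0.1 here are purely illustrative, not from the question:

roots = find_roots(dG, 0, np.pi/2, args=(0.5, 0.1))
print(roots)  # find_roots reports if dG never changes sign on the interval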
There is a root only if, between a and b, there are values with different signs. If this happens there are almost certainly going to be multiple roots. Which one of those do you want to find?
You're going to have to use what you know about f to figure out how to deal with this. If you know there is exactly one root, you can just find the local minimum. If you know there are two, you can find the minimum and use its x-coordinate c to bracket them (one root between a and c, the other between c and b).
You need to know what you're looking for to be able to find it.
According to this graph (desmos):
from sympy import solve
print(solve('x**2 + x - 1/x'))
# [-1/3 + (-1/2 - sqrt(3)*I/2)*(sqrt(69)/18 + 25/54)**(1/3) + 1/(9*(-1/2 - sqrt(3)*I/2)*(sqrt(69)/18 + 25/54)**(1/3)), -1/3 + 1/(9*(-1/2 + sqrt(3)*I/2)*(sqrt(69)/18 + 25/54)**(1/3)) + (-1/2 + sqrt(3)*I/2)*(sqrt(69)/18 + 25/54)**(1/3), -1/3 + 1/(9*(sqrt(69)/18 + 25/54)**(1/3)) + (sqrt(69)/18 + 25/54)**(1/3)]
I was expecting [0.755, 0.57], but I got something I cannot use in my later program. I want a list of floats as the result, so, referring to this post, I did the following, but I got something even weirder:
from sympy import core

def solver(solved, rit=3):
    res = []
    for val in solved:
        if isinstance(val, core.numbers.Add):
            flt = val.as_two_terms()[0]
            flt = round(flt, rit)
        else:
            flt = round(val, rit)
        if not isinstance(flt, core.numbers.Add):
            res.append(flt)
    return res
print(solver(solve('x**2 + x - 1/x')))
# [-0.333, -0.333, -0.333]
Now I am really disappointed with sympy. I wonder if there is an accurate way to get a list of floats as the result, or whether I should code my own gradient-descent algorithm to find the roots and the intersection.
sym.solve solves an equation for the independent variable. If you provide an expression, it'll assume the equation sym.Eq(expr, 0). But this only gives you the x values. You have to substitute said solutions to find the y value.
Your equation has 3 solutions: a conjugate pair of complex solutions and a real one. The latter is where your two graphs meet.
import sympy as sym

x = sym.Symbol('x')
# better to represent it like the equation it is
eq = sym.Eq(x**2, 1/x - x)
sol = sym.solve(eq)
for s in sol:
    if s.is_real:
        s = s.evalf()
        print(s, eq.lhs.subs({x: s}))  # eq.rhs works too; prints ~0.754877666 ~0.569840291
There are a variety of things you can do to get the solution. If you know the approximate root location and you want a numerical answer, nsolve is simplest since it has no requirements on the type of expression:
>>> from sympy import nsolve, symbols
>>> x = symbols('x')
>>> eq = x**2 + x - 1/x
>>> nsolve(eq, 1)
0.754877666246693
You can try a guess near 0.57, but it will converge to the same solution. So is there really a second real root? You can't use real_roots on this expression because it isn't in polynomial form. But if you split it into numerator and denominator, you can check for the roots of the numerator:
>>> n, d = eq.as_numer_denom()
>>> from sympy import real_roots
>>> real_roots(n)
[CRootOf(x**3 + x**2 - 1, 0)]
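Evaluating that root numerically recovers the same value nsolve found:

>>> real_roots(n)[0].evalf()
0.754877666246693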
So there is only one real root for that expression, the one that nsolve gave you.
Note: the answer that solve gives is an exact solution to the cubic equation, and solve can't figure out definitively which of the three is a solution to the original equation, so it returns all of them. If you evaluate them, you will find that only one of them is real. But since you don't need the symbolic solution, just stick to nsolve.
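You can confirm this by evaluating the three symbolic solutions numerically (the order in which solve returns them may differ):

>>> from sympy import solve
>>> [s.evalf(6) for s in solve(eq)]
[-0.877439 - 0.744862*I, -0.877439 + 0.744862*I, 0.754878]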
Does anyone know why the below doesn't equal 0?
import numpy as np
np.sin(np.radians(180))
or:
np.sin(np.pi)
When I enter it into python it gives me 1.22e-16.
The number π cannot be represented exactly as a floating-point number. So, np.radians(180) doesn't give you π, it gives you 3.1415926535897931.
And sin(3.1415926535897931) is in fact something like 1.22e-16.
So, how do you deal with this?
You have to work out, or at least guess at, appropriate absolute and/or relative error bounds, and then instead of x == y, you write:
abs(y - x) < abs_bounds and abs(y - x) < rel_bounds * y
(This also means that you have to organize your computation so that the relative error is measured relative to y rather than to x. In your case, because y is the constant 0, that's trivial: just do it backward.)
Numpy provides a function that does this for you across a whole array, allclose:
np.allclose(x, y, rel_bounds, abs_bounds)
(This actually checks abs(y - x) <= abs_bounds + rel_bounds * abs(y), but that's almost always sufficient, and you can easily reorganize your code when it's not.)
In your case:
np.allclose(0, np.sin(np.radians(180)), rel_bounds, abs_bounds)
So, how do you know what the right bounds are? There's no way to teach you enough error analysis in an SO answer. Propagation of uncertainty at Wikipedia gives a high-level overview. If you really have no clue, you can use the defaults, which are 1e-5 relative and 1e-8 absolute.
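With those defaults, the check does the right thing for this question's example:

>>> import numpy as np
>>> np.allclose(0, np.sin(np.radians(180)))  # rtol=1e-05, atol=1e-08 by default
True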
One solution is to switch to sympy when calculating sines and cosines, then switch back to a numeric value using the sp.N(...) function:
>>> # Numpy: not exactly zero
>>> import numpy as np
>>> np.cos(np.pi/2)
6.123233995736766e-17
>>> # Sympy workaround
>>> import sympy as sp
>>> def scos(x): return sp.N(sp.cos(x))
>>> def ssin(x): return sp.N(sp.sin(x))
>>> scos(sp.pi/2)
0
Just remember to use sp.pi instead of np.pi when using the scos and ssin functions.
I faced the same problem:
import math
import numpy as np

print(np.cos(math.radians(90)))
>> 6.123233995736766e-17
and tried this,
print(np.around(np.cos(math.radians(90)), decimals=5))
>> 0
It worked in my case. I set decimals=5 so as not to lose too much information; as you'd expect, the rounding discards everything past the fifth decimal place.
Try this: it zeros anything below a given tininess threshold.
import numpy as np

def zero_tiny(x, threshold):
    # Zero out any real/imaginary part smaller than the threshold
    if x.dtype == complex:
        x_real = x.real
        x_imag = x.imag
        if np.abs(x_real) < threshold: x_real = 0
        if np.abs(x_imag) < threshold: x_imag = 0
        return x_real + 1j*x_imag
    else:
        return x if np.abs(x) > threshold else 0
value = np.cos(np.pi/2)
print(value)
value = zero_tiny(value, 10e-10)
print(value)
value = np.exp(-1j*np.pi/2)
print(value)
value = zero_tiny(value, 10e-10)
print(value)
Python evaluates its trig functions using series approximations such as the Taylor expansion, and since that expansion has infinitely many terms, the result can only approximate the exact value.
For example:
sin(x) = x - x³/3! + x⁵/5! - ...
so sin(π) = π - π³/3! + ... approaches 0 but never reaches it exactly.
That is my own reasoning, for what it's worth.
Simple.
np.sin(np.pi).astype(int)
np.sin(np.pi/2).astype(int)
np.sin(3 * np.pi / 2).astype(int)
np.sin(2 * np.pi).astype(int)
returns
0
1
-1
0
(Note that astype(int) truncates toward zero, so this only works because each result is already within rounding error of an integer.)
I'm having difficulty coding the following problem:
Higher-order derivatives.
a) Given n ∈ N and x0, a0, a1, ..., an ∈ R with an ≠ 0, write a program that determines the coefficients A0, A1, ..., An of the unique polynomial function P of degree n that satisfies the conditions
P(x0) = a0, P'(x0) = a1, ..., P^(n)(x0) = an.
I know the math part very well, but I'm stuck on the coding part. Please help!
Perhaps this will work:
def poly(x0, *a):
    # The coefficients come out in powers of (x - x0), so x0 is not
    # actually needed inside the recursion.
    assert len(a) > 0
    if len(a) == 1:
        return [a[0]]
    return [a[0]] + [c/(j + 1) for j, c in enumerate(poly(x0, *a[1:]))]
What it does is reduce the problem to a smaller instance of itself: the derivative P' is determined by the same kind of problem with one coefficient fewer (the values a1, ..., an). The case with only one coefficient is handled explicitly to end the recursion.
The [c/(j + 1) for j, c in enumerate(poly(x0, *a[1:]))] construct divides each coefficient by one more than its index, which corresponds to taking the antiderivative of the polynomial. Note that the resulting coefficients are in powers of (x - x0).
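As a quick sanity check with made-up inputs: for x0 = 1 and derivative values (2, 3, 4), the result encodes P(x) = 2 + 3*(x - 1) + 2*(x - 1)**2, which indeed satisfies P(1) = 2, P'(1) = 3, P''(1) = 4:

>>> poly(1, 2, 3, 4)
[2, 3.0, 2.0]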
Now I have managed to write a bit of code that determines the first derivative for a given degree. For example, if n = 2, then P'(x0) = a1 + 2*a2*x0**1 is the first derivative, but I cannot manage to calculate from here the second derivative, which should be: P''(x0) = 2*a2.
So here is my code so far:
def deriv(a, x0, n):
    # First derivative of P(x) = a[0] + a[1]*x + ... + a[n]*x**n, at x0
    s = 0
    for i in range(1, n + 1):
        s += a[i] * i * x0**(i - 1)
    return s
Now I know that I would need to call the deriv function again to get the second derivative, but I don't know how. I welcome any answers or suggestions.
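One possible way to generalize this to the k-th derivative (a sketch of mine, not from the original post): differentiate the coefficient list k times, then evaluate the result at x0.

def kth_deriv(a, x0, k):
    # Differentiate the coefficient list of P(x) = sum(a[i] * x**i) k times
    coeffs = list(a)
    for _ in range(k):
        coeffs = [i * c for i, c in enumerate(coeffs)][1:]
    # then evaluate the resulting polynomial at x0
    return sum(c * x0**i for i, c in enumerate(coeffs))

print(kth_deriv([1, 1, 3], 2.0, 1))  # a1 + 2*a2*x0 = 1 + 12.0 = 13.0
print(kth_deriv([1, 1, 3], 2.0, 2))  # 2*a2 = 6.0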
To start off, I have already solved this problem, so it's not a big deal; I'm just asking to satisfy my own curiosity. The question is how to solve a series of simultaneous equations given a set of constraints. The equations are:
tau = 62.4*d*0.0007
A = (b + 1.5*d)*d
P = b + 2*d*sqrt(1 + 1.5**2)
R = A/P
Q = (1.486/0.03)*A*(R**(2.0/3.0))*(0.0007**0.5)
and the conditions are:
tau <= 0.29, Q = 10000 +- say 3, and minimize b
As I mentioned I was already able to come up with a solution using a series of nested loops:
from numpy import linspace, sqrt

b = linspace(320, 330, 1000)
d = linspace(0.1, 6.6392, 1000)

ansQ = []
ansv = []
anstau = []
i_index = []
j_index = []

for i in range(len(b)):
    for j in range(len(d)):
        tau = 62.4*d[j]*0.0007
        A = (b[i] + 1.5*d[j])*d[j]
        P = b[i] + 2*d[j]*sqrt(1 + 1.5**2)
        R = A/P
        Q = (1.486/0.03)*A*(R**(2.0/3.0))*(0.0007**0.5)
        if Q >= 10000 and tau <= 0.29:
            ansQ.append(Q)
            ansv.append(Q/A)
            anstau.append(tau)
            i_index.append(i)
            j_index.append(j)
This takes a while, and there is something in the back of my head saying that there must be an easier/more elegant solution to this problem. Thanks (Linux Mint 13, Python 2.7.x, scipy 0.11.0)
You only seem to have two degrees of freedom here: you can rewrite everything in terms of b and d, or b and tau (pick your two favorites). Your constraint on tau directly implies a constraint on d, and you can use your constraint on Q to imply a constraint on b.
And it doesn't look to me (at least, I still haven't finished my coffee) like your code is doing anything other than evaluating some two-dimensional functions over a grid you've defined, NOT solving a system of equations. I normally understand "solving" to involve setting something equal to something else and writing one variable as a function of another variable.
It does appear you've only posted a snippet, though, so I'll assume you do something else with your data downstream.
OK, I see. I think this isn't really a minimization problem; it's a plotting problem. The first thing I'd do is work out what range your constraint on tau implies for d. Then you can mesh those points with meshgrid (as you mentioned) and run over all combinations.
Since you're applying the constraint before you build the mesh (as opposed to after, as in your code), you'll only be sampling the region of parameter space that you're interested in. In your code you generate a bunch of junk you're not interested in and then pick out the gems; if you apply your constraints first, you'll only be left with gems!
I'd define my functions like:
P = lambda b, d: b + 2*d*np.sqrt(1 + 1.5**2)
which works like
>>> import numpy as np
>>> P = lambda b, d: b + 2*d*np.sqrt(1 + 1.5**2)
>>> P(1,2)
8.2111025509279791
Then you can write another function to serve up b and d for you, so you can do something like:
def get_func_vals(b, d):
    pvals.append(P(b, d))
or, better yet, store b and d as tuples in a function that doesn't return but yields:
pvals = [P(b,d) for (b,d) in thing_that_yields_b_and_d_tuples]
I didn't test this last line of code, and I always screw up these parentheses, but I think it's right.
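Putting the constraints-first idea together, here is a minimal vectorized sketch of mine, reusing the question's constants (the b range and the Q window of 10000 +- 3 come straight from the question; d's upper limit follows from tau <= 0.29):

import numpy as np

# tau = 62.4*0.0007*d <= 0.29 directly bounds d
d_max = 0.29 / (62.4 * 0.0007)          # ~6.6392, matching the question's upper limit

b = np.linspace(320, 330, 1000)
d = np.linspace(0.1, d_max, 1000)
B, D = np.meshgrid(b, d)                # every (b, d) combination at once

A = (B + 1.5*D) * D
Pw = B + 2*D*np.sqrt(1 + 1.5**2)        # wetted perimeter
R = A / Pw
Q = (1.486/0.03) * A * R**(2.0/3.0) * 0.0007**0.5

mask = np.abs(Q - 10000) <= 3           # Q = 10000 +- 3
if mask.any():
    print('smallest feasible b:', B[mask].min())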
I'm trying to solve the equation f(x) = x - sin(x) - n*t - m0.
In this equation, n and m0 are attributes defined in my class. Further, t is a constant integer in the equation, but it has to change each time.
I've rearranged the equation so I get a 'new equation'. I've imported scipy.optimize:
def f(x, self):
    return (x - math.sin(x) - self.M0 - self.n*t)

def test(self, t):
    return fsolve(self.f, 1, args=(t))
Any corrections and suggestions to make it work?
I can see at least two problems: you've mixed up the order of arguments to f, and you're not giving f access to t. Something like this should work:
import math
from scipy.optimize import fsolve

class Fred(object):
    M0 = 5.0
    n = 5

    def f(self, x, t):
        return x - math.sin(x) - self.M0 - self.n*t

    def test(self, t):
        return fsolve(self.f, 1, args=(t,))
[note that I was lazy and made M0 and n class members]
which gives:
>>> fred = Fred()
>>> fred.test(10)
array([ 54.25204733])
>>> import numpy
>>> [fred.f(x, 10) for x in numpy.linspace(54, 55, 10)]
[-0.44121095114838482, -0.24158955381855662, -0.049951288133726734,
0.13271070588400136, 0.30551399241764443, 0.46769772292130796,
0.61863201965219616, 0.75782574394219182, 0.88493255340251409,
0.99975517335862207]
You need to define f() like so:
def f(self, x, t):
    return (x - math.sin(x) - self.M0 - self.n * t)
In other words:
self comes first (it always does);
then comes the current value of x;
then come the arguments you supply to fsolve().
You're using a root finding algorithm of some kind. There are several in common use, so it'd be helpful to know which one.
You need to know three things:
The algorithm you're using
The equation you're finding the roots for
The initial guess and range over which you're looking
You need to know that some combinations may not have any roots.
Visualizing the functions of interest can be helpful. You have two: a linear function and a sinusoid. If you were to plot the two, which sets of constants would give you intersections? An intersection is the root you're looking for.
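For instance, here is one quick way to eyeball the crossing for a single t; this sketch reuses the constants M0 = 5.0, n = 5, t = 10 from the fsolve answer above:

import numpy as np
import matplotlib.pyplot as plt

M0, n, t = 5.0, 5, 10
x = np.linspace(0, 100, 1000)
plt.plot(x, x, label='x')                                      # the linear part
plt.plot(x, np.sin(x) + M0 + n*t, label='sin(x) + M0 + n*t')   # the shifted sinusoid
plt.legend()
plt.show()  # the curves cross near x ~ 54.25, matching fred.test(10)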