Solving an equation with scipy's fsolve - python

I'm trying to solve the equation f(x) = x - sin(x) - n*t - M0.
In this equation, n and M0 are attributes defined in my class. Further, t is a constant in the equation, but it has to change each time the equation is solved.
I've rearranged the equation so I get a 'new equation' to find the root of. I've imported scipy.optimize and written:

def f(x, self):
    return (x - math.sin(x) - self.M0 - self.n*t)

def test(self, t):
    return fsolve(self.f, 1, args=(t))
Any corrections and suggestions to make it work?

I can see at least two problems: you've mixed up the order of arguments to f, and you're not giving f access to t. Something like this should work:
import math
from scipy.optimize import fsolve

class Fred(object):
    M0 = 5.0
    n = 5

    def f(self, x, t):
        return (x - math.sin(x) - self.M0 - self.n*t)

    def test(self, t):
        return fsolve(self.f, 1, args=(t,))
[note that I was lazy and made M0 and n class members]
which gives:
>>> fred = Fred()
>>> fred.test(10)
array([ 54.25204733])
>>> import numpy
>>> [fred.f(x, 10) for x in numpy.linspace(54, 55, 10)]
[-0.44121095114838482, -0.24158955381855662, -0.049951288133726734,
0.13271070588400136, 0.30551399241764443, 0.46769772292130796,
0.61863201965219616, 0.75782574394219182, 0.88493255340251409,
0.99975517335862207]

You need to define f() like so:
def f(self, x, t):
    return (x - math.sin(x) - self.M0 - self.n * t)
In other words:
self comes first (it always does);
then comes the current value of x;
then come the arguments you supply to fsolve().

You're using a root finding algorithm of some kind. There are several in common use, so it'd be helpful to know which one.
You need to know three things:
The algorithm you're using
The equation you're finding the roots for
The initial guess and range over which you're looking
You need to know that some combinations may not have any roots.
Visualizing the functions of interest can be helpful. You have two: a linear function and a sinusoid. If you were to plot the two (as in the sketch below), which sets of constants would give you intersections? An intersection is the root you're looking for.
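For instance, a minimal matplotlib sketch, taking M0, n, and t from the example answer above and rearranging x - sin(x) - M0 - n*t = 0 as x - M0 - n*t = sin(x):

import numpy as np
import matplotlib.pyplot as plt

M0, n, t = 5.0, 5, 10                              # example constants

x = np.linspace(0, 100, 1000)
plt.plot(x, x - M0 - n*t, label='x - M0 - n*t')    # the linear part
plt.plot(x, np.sin(x), label='sin(x)')             # the sinusoid
plt.legend()
plt.show()

Since the line has slope 1 and sin(x) never leaves [-1, 1], the two curves cross exactly once here, near x = 54, which matches the fsolve result above.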

How to calculate a sigmoid function without using an exp() function in Python?

I'm working in a somewhat limited development environment. I'm writing a neural network in Python, but I don't have access to numpy and, as it is, I can't even import the math module. So my options are limited. I need to calculate the sigmoid function; however, I'm not sure how the exp() function works under the hood. I understand exponents, and that I can use code like:
base = .57
exp = base ** exponent
However, I'm not sure what the exponent should be. How do functions like numpy.exp() calculate it? This is what I need to replicate.
The exponential function exp(a) is equivalent to e ** a, where e is Euler's number.
>>> e = 2.718281828459045
>>> def exp(a):
... return e ** a
...
>>> import math # accuracy test
>>> [math.exp(i) - exp(i) for i in range(1, 12, 3)]
[0.0, 7.105427357601002e-15, 2.2737367544323206e-13, 1.4551915228366852e-11]
def sigmoid(z):
    e = 2.718281828459
    return 1.0/(1.0 + e**(-1.0*z))
# This is the formula for sigmoid in pure Python,
# where z = hypothesis. You have to find the value of the hypothesis.
You can use ** just fine for your use case; it works with both float and integer input:

print(2**3)
8
print(2**0.5)
1.4142135623730951

If you really need a drop-in replacement for numpy.exp(), you can just write a function that behaves the way the docs describe: https://numpy.org/doc/stable/reference/generated/numpy.exp.html
from typing import List, Union

def not_numpy_exp(x: Union[List[float], float]):
    e = 2.718281828459045  # close enough to math.e
    if isinstance(x, list):
        return [e ** _x for _x in x]
    else:
        return e ** x
how the exp() function works under the hood

If you mean math.exp from the built-in math module, then under the hood it simply does what its docstring says:

exp(x, /)
    Return e raised to the power of x.

where e should be understood as math.e (2.718281828459045). If import math is not allowed, you might do

pow(2.718281828459045, x)

instead of exp(x).
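A minimal no-import sketch along those lines (the constant E is just a local name, not part of any library):

E = 2.718281828459045   # Euler's number, hard-coded since math.e is unavailable

def exp(x):
    # pow() is a built-in, so no imports are needed
    return pow(E, x)

print(exp(0))   # 1.0
print(exp(1))   # 2.718281828459045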

Why doesn't the function use the variable value defined inside it in Python?

Why doesn't the following function use the inner h value that is defined in the function body, and instead give weird results (as if h were arbitrary)?

import math

def diff(f):
    h = 0.001
    return (lambda x: (f(x+h) - f(x)) / h)

def sin_by_million(x):
    return math.sin(10 ** 6 * x)

>>> diff(sin_by_million)(0)
826.8795405320026
Instead of 1000000?
As per @ThierryLathuille's comment, your step h is too big. In real life, you should adapt it based on the function and the value at which you want the derivative.
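To see why, here is a quick sketch using a variant of the question's diff that takes h as a parameter. With h = 0.001, the argument of sin jumps by 1000 radians between f(x) and f(x+h), roughly 159 full periods, so the difference quotient is essentially noise; a much smaller step recovers the true slope of 10**6:

import math

def diff(f, h=0.001):
    return lambda x: (f(x + h) - f(x)) / h

def sin_by_million(x):
    return math.sin(10 ** 6 * x)

print(diff(sin_by_million)(0))         # 826.879... (the "weird" result)
print(diff(sin_by_million, 1e-9)(0))   # 999999.83..., close to 10**6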
Check out jax instead:
import jax
import jax.numpy as np

def sin_by_million(x):
    return np.sin(1e6 * x)
Then:
>>> g = jax.grad(sin_by_million)
>>> g(0.0)
DeviceArray(1000000., dtype=float32)
The beauty of jax is that it actually compiles your call tree using the chain rule and produces real code (calls after the first one are much, much faster). It also works on multivariate functions (see the sketch below) and on complex code, with some rules though. And it works wonderfully well and fast on GPUs.
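For example, a small sketch of the multivariate case; the function f here is made up for illustration:

import jax
import jax.numpy as np

def f(x, y):
    return np.sin(x) * y**2

df_dx = jax.grad(f, argnums=0)   # partial derivative with respect to x
df_dy = jax.grad(f, argnums=1)   # partial derivative with respect to y
print(df_dx(0.0, 2.0))   # cos(0) * 2**2 = 4.0
print(df_dy(0.0, 2.0))   # 2 * sin(0) * 2 = 0.0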

Symbolically multiply infinite sums in SymPy

I'm writing software which uses SymPy to symbolically write code, and I've encountered multiplied sums that I need to simplify. The algorithm I'm using calls for the use of the Cauchy product to convert two sums multiplied by each other into a double sum. Below is an example of what I'm trying to accomplish:
from sympy import Sum, Function, Symbol, oo
# Define variables
n = Symbol('n')
x = Symbol('x')
t = Symbol('t')
# Define functions
theta = Function('theta')(t)
p = Function('p')(n,x)
q = Function('q')(n,x)
# Create Summations
pSum = Sum(p*theta**n, (n,0,oo))
qSum = Sum(q*theta**n, (n,0,oo))
# Multiply
out = pSum * qSum
print(out)
>>> Sum(p(n, x)*theta(t)**n, (n, 0, oo))*Sum(q(n, x)*theta(t)**n, (n, 0, oo))
I need to convert this to
print(out)
>>> Sum(Sum((p(i, x)*q(n-i, x))*theta**n, (i, 0, n)), (n, 0, oo))
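(For reference, the target double sum can at least be constructed directly in SymPy; this sketch just writes the Cauchy-product form out by hand, with i as a fresh summation index:)

from sympy import Sum, Function, Symbol, oo

n = Symbol('n')
i = Symbol('i')
x = Symbol('x')
t = Symbol('t')
theta = Function('theta')(t)
p = Function('p')
q = Function('q')

# the Cauchy-product form of pSum*qSum, written out by hand
cauchy = Sum(Sum(p(i, x)*q(n - i, x)*theta**n, (i, 0, n)), (n, 0, oo))
print(cauchy)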
My approach was to import Sum and define a class that inherits from it, overriding the __mul__ operator to do what I want. This works for simple cases, but in more complicated ones it won't. In the example below, the first multiplication works, but the second doesn't, because * never calls my __mul__ once the objects are wrapped inside other SymPy expressions.
import sympy
from sympy import expand, Function, Symbol, oo, diff, Sum, Derivative
class Sum2(Sum):
    # Override the __mul__ method to apply the Cauchy product.
    def __mul__(self, other):
        if isinstance(other, Sum2):
            i = Symbol('i')
            n = Symbol('n')
            return Sum2(Sum(self.args[0].subs(n, i)*other.args[0].subs(n, n-i), (i, 0, n)), (n, 0, oo))
        else:
            return super().__mul__(other)
x = Symbol('x')
t = Symbol('t')
n = Symbol('n')
f = Function('f')(n, x)
a = Sum2(f*t**n, (n,0,oo))
# Works
print(a*a)
# Doesn't work.
c = (Derivative(a,x)*a).doit()
print(c)
print(c.doit())
print(expand(c))
I've tried a similar approach, inheriting from Function instead. Same problem. Perhaps __mul__ wasn't the right function to redefine? How can I allow infinite sums to be multiplied in this way?

Python - the definite integral of a product of functions

In Python, I have two functions f1(x) and f2(x), each returning a number. I would like to calculate the definite integral of their product, i.e., something like:
scipy.integrate.quad(f1*f2, 0, 1)
What is the best way to do it? Is it even possible in python?
I found out just a second ago that I can use a lambda :)
scipy.integrate.quad(lambda x: f1(x)*f2(x), 0, 1)
Anyway, I'm leaving it here. Maybe it will help somebody out.
When I had the same problem, I used this (based on the suggestion above):

from scipy.integrate import quad

def f1(x):
    return x

def f2(x):
    return x**2

# integrate x * x**2 = x**3 from 0 to 1; the exact answer is 1/4
ans, err = quad(lambda x: f1(x)*f2(x), 0, 1)
print("the result is", ans)

Find root of a function in a given interval

I am trying to find the root of a function in the interval [0, pi/2], but all the bracketing algorithms in scipy have this condition: f(a) and f(b) must have opposite signs.
In my case f(0)*f(pi/2) > 0. Is there any solution? I should point out that I don't need solutions outside [0, pi/2].
The function:
def dG(thetaf, psi, gamma):
    return (0.35*(cos(psi)**2)*(2*sin(3*thetaf/2 + 2*gamma)
            + (1 + 4*sin(gamma)**2)*sin(thetaf/2) - sin(3*thetaf/2))
            + (sin(psi)**2)*sin(thetaf/2))
Based on the comments and on @Mike Graham's answer, you can do something that will check where the changes of sign are. Given y = dG(x, psi, gamma):

x[:-1][y[:-1]*y[1:] < 0]

will return the positions where there is a change of sign. You can then use an iterative process to find the roots numerically, up to the error tolerance that you need:
import numpy as np
from numpy import sin, cos

def find_roots(f, a, b, args=(), errTOL=1e-6):
    x = np.linspace(a, b, 100)
    while True:
        y = f(x, *args)
        # a sign change between consecutive samples brackets a root
        pos = y[:-1]*y[1:] < 0
        if not np.any(pos):
            print('No roots in this interval')
            return np.array([])
        err = np.abs(y[:-1][pos]).max()
        if err <= errTOL:
            # return the midpoints of the bracketing intervals
            roots = 0.5*x[:-1][pos] + 0.5*x[1:][pos]
            return roots
        # otherwise, refine the grid inside each bracketing interval
        inf_sup = zip(x[:-1][pos], x[1:][pos])
        x = np.hstack([np.linspace(inf, sup, 10) for inf, sup in inf_sup])
A bracketing method can find a root only if, between a and b, there are values with different signs. If f(a) and f(b) have the same sign and roots do exist in between, there are almost certainly going to be several of them. Which one of those do you want to find?
You're going to have to use what you know about f to figure out how to deal with this. If you know there is exactly one root, you can just find the local minimum. If you know there are two, you can find the minimum and use its coordinate c to bracket each of the two roots (one between a and c, the other between c and b), as in the sketch below.
You need to know what you're looking for to be able to find it.
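A minimal sketch of the two-root case; the function f here is a made-up example with f(a) > 0, f(b) > 0, and two roots in between:

from scipy.optimize import minimize_scalar, brentq

def f(x):
    return (x - 0.5)*(x - 1.0)   # hypothetical example: roots at 0.5 and 1.0

a, b = 0.0, 1.5                  # f(a) > 0 and f(b) > 0
res = minimize_scalar(f, bounds=(a, b), method='bounded')
c = res.x                        # interior minimum, near x = 0.75
if f(c) < 0:                     # the minimum dips below zero: two brackets
    print(brentq(f, a, c))       # ~0.5
    print(brentq(f, c, b))       # ~1.0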
