Python - the integral of a product of two functions

In Python, I have two functions f1(x) and f2(x), each returning a number. I would like to calculate the definite integral of their product, i.e., something like:
scipy.integrate.quad(f1*f2, 0, 1)
What is the best way to do it? Is it even possible in python?

I found out just a second ago that I can use a lambda :)
scipy.integrate.quad(lambda x: f1(x)*f2(x), 0, 1)
Anyway, I'm leaving it here. Maybe it will help somebody out.

When I had the same problem, I used this (based on the suggestion above):
from scipy.integrate import quad

def f1(x):
    return x

def f2(x):
    return x**2

ans, err = quad(lambda x: f1(x)*f2(x), 0, 1)
print("the result is", ans)

Related

Finding N roots using an iterative procedure with fsolve (python)

I wonder how I could implement an iterative root finder using fsolve over an interval, stopping once it has found N roots?
(Assuming I know how small the steps should be to catch every root during the procedure.)
Is there a way to do this with a simple double for loop?
Here is what it would look like for a "simple" function (cos(x)*x):
import numpy as np
from scipy.optimize import fsolve

def f(x):
    return np.cos(x)*x

for n in range(1, 10):
    a = 0
    k = 0
    while k < 1000:
        k = fsolve(f, a)
        if k == a:
            a = a + 0.01
            k = fsolve(f, a)
        else:
            print(k)
But I can't make it work this way. I can't use chebpy because my real function is more complex (it involves a Bessel function) and chebpy doesn't seem to accept such a function as an argument.
Edit: Corrected the indentation; this program yields 0 (the first solution) an infinite number of times without stopping.
Can you share your error?
It may be something related to the indentation of your f(x) function; try changing your code to this:
def f(x):
    return np.cos(x)*x
I found a solution that does the job for now. It consists of passing an array corresponding to the search interval, and then sorting out the solutions (in my case, only keeping positive solutions, removing duplicates, etc.).
It might not be the best way, but it works for me.
In this example I'm looking for the first 10 positive solutions of cos(x)*x = 0, assuming they lie in [0, 100].
import numpy as np
from scipy.optimize import fsolve

def f(x):
    return np.cos(x)*x

# Array of initial guesses covering the search interval [0, 100]
guesses = np.arange(0, 100, 1)
roots = fsolve(f, guesses)
# print(roots)
roots = np.around(roots, decimals=5, out=None)
a = roots[roots >= 0]   # keep only the non-negative solutions
b = np.unique(a)        # remove duplicates
b = b[:10]              # first 10 roots
print(b)
Result:
[ 0. 1.5708 4.71239 7.85398 10.99557 14.13717 17.27876 20.42035
23.56194 26.70354]
I had to use np.around, otherwise np.unique would not work.
Thanks again.
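As an aside (this is not from the original answers, just a sketch of an alternative): instead of filtering the fsolve output, you can scan the interval for sign changes and refine each bracket with scipy.optimize.brentq. Something like this should reproduce the same first 10 roots of cos(x)*x:
import numpy as np
from scipy.optimize import brentq

def f(x):
    return np.cos(x)*x

# Scan [0, 100] on a fine grid and refine every sign change with brentq.
xs = np.linspace(0, 100, 10001)
ys = f(xs)

roots = []
for i in range(len(xs) - 1):
    if ys[i] == 0.0:                 # a grid point happens to be an exact root (x = 0 here)
        roots.append(xs[i])
    elif ys[i]*ys[i + 1] < 0:        # a sign change brackets a simple root
        roots.append(brentq(f, xs[i], xs[i + 1]))
    if len(roots) >= 10:
        break

print(np.around(roots, 5))
# [ 0.       1.5708   4.71239  7.85398 10.99557 14.13717 17.27876 20.42035
#  23.56194 26.70354]
This assumes the roots are simple and separated by more than the grid spacing.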

How exactly does scipy.optimize's minimize function work?

I've been looking through the minimize function declaration files, and I am really confused as to how the function works. So for example, if I have something like this:
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize
encoderdistance = 2.53141952655
Dx = lambda t: -3.05 * np.sin(t)
Dy = lambda t: 2.23 * np.cos(t)
def func(x): return np.sqrt(Dx(x)**2 + Dy(x)**2)
print(minimize(lambda x: abs(quad(func, 0, x)[0] - encoderdistance), 1).x)
print(minimize(lambda x: abs(4.24561823393 - encoderdistance), 1).x)
The second print statement at the bottom will yield a different result than the one on top, even though I subbed out the quad function for the value it produced. If this is due to the lambda x part, can you explain how that affects that line of code exactly? Also, how would you type the second-to-last line into a calculator such as Wolfram Alpha? Thanks!
The optimizer needs a function to minimize -- that's what the lambda x: is about.
In the second-to-last line, you're asking the optimizer to find a value of x such that the integral of func from 0 to x is close to encoderdistance.
In the last line, the function to be minimized is just a scalar value with no dependency on x, and the optimizer bails out because there is nothing it can change.
How scipy.minimize works is described here but that isn't your issue. You have two lambda functions that are definitely not the same:
lambda x: abs(quad(func, 0, x)[0] - encoderdistance)
lambda x: abs(4.24561823393 - encoderdistance)
The first is a 'V'-shaped function while the second is a horizontal line. scipy finds the minimum of the 'V' at about 1.02 and cannot perform any minimization on a horizontal line so it returns your initial guess: 1.
Here is how you could do it in Mathematica:
Dx[t_] := -3.05*Sin[t]
Dy[t_] := 2.23*Cos[t]
func[x_] := Sqrt[Dx[x]^2 + Dy[x]^2]
encoderdistance = 2.53141952655;
fmin[x_?NumberQ] :=
Abs[NIntegrate[func[t], {t, 0, x}] - encoderdistance]
NMinimize[fmin[x], x][[2]][[1]][[2]]
With regard to your first question, in the statement:
print(minimize(lambda x: abs(4.24561823393 - encoderdistance), 1).x)
your lambda function is a constant, independent of the argument x. minimize quits immediately after observing that the function does not decrease after several variations of the argument.
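For what it's worth, here is a small runnable sketch of the two behaviours described above, using the constants from the question (the only adjustment is taking x[0] inside quad, since minimize passes an array):
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

encoderdistance = 2.53141952655
Dx = lambda t: -3.05 * np.sin(t)
Dy = lambda t: 2.23 * np.cos(t)

def func(x):
    return np.sqrt(Dx(x)**2 + Dy(x)**2)

# 'V'-shaped objective: arc length from 0 to x minus the target distance.
arc_objective = lambda x: abs(quad(func, 0, x[0])[0] - encoderdistance)
# Constant objective: no dependence on x at all.
flat_objective = lambda x: abs(4.24561823393 - encoderdistance)

print(minimize(arc_objective, 1).x)   # should land near 1.02, as described above
print(minimize(flat_objective, 1).x)  # returns the initial guess, [1.]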

Error when integrating more than one function in Python

Let's say I define my function G:
def G(k, P, W):
    return k**2*P*W**2
where P and W are two functions with k as their independent variable, and k is a defined number.
I am trying to integrate this from 0 to infinity:
I = scipy.integrate.quad(G, 0, np.Inf)
Inputting this into my console gives me the error:
G() takes exactly 3 arguments (2 given)
I tried using the args argument, but it does not seem to change anything and the code remains stubborn. What am I doing wrong, and what am I missing?
If I understand correctly, k is a constant. Then you can write:
from scipy import integrate
import numpy as np

k = 10
I = integrate.dblquad(lambda p, w: G(k, p, w), 0, np.Inf, lambda x: 0, lambda x: np.Inf)
Found it in the scipy documentation.
Besides, your integral looks divergent.
For symbolic integrals see sympy.integrate. It is a different library.
from sympy import *
k, P, W = symbols('k P W')
integrate(G(k, P, W), P, W)
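For reference, a self-contained version of that symbolic sketch (with G redefined exactly as in the question); sympy integrates over P first and then over W:
from sympy import symbols, integrate

def G(k, P, W):
    return k**2*P*W**2

k, P, W = symbols('k P W')
print(integrate(G(k, P, W), P, W))   # k**2*P**2*W**3/6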

Find root of a function in a given interval

I am trying to find the root of a function in the interval [0, pi/2], but all the algorithms in scipy have this condition: f(a) and f(b) must have opposite signs.
In my case f(0)*f(pi/2) > 0. Is there any solution? To be precise, I don't need solutions outside [0, pi/2].
The function:
def dG(thetaf, psi, gamma):
    return 0.35*((cos(psi))**2)*(2*sin(3*thetaf/2 + 2*gamma) + (1 + 4*sin(gamma)**2)*sin(thetaf/2) - sin(3*thetaf/2)) + (sin(psi)**2)*sin(thetaf/2)
Based on the comments and on @Mike Graham's answer, you can do something that checks where the changes of sign are. Given y = dG(x, psi, gamma):
x[y[:-1]*y[1:] < 0]
will return the positions where there is a change of sign. You can use an iterative process to find the roots numerically up to the error tolerance that you need:
import numpy as np
from numpy import sin, cos

def find_roots(f, a, b, args=(), errTOL=1e-6):
    x = np.linspace(a, b, 100)
    while True:
        y = f(x, *args)
        pos = y[:-1]*y[1:] < 0          # intervals with a change of sign
        if not np.any(pos):
            print('No roots in this interval')
            return np.array([])
        err = np.abs(y[pos]).max()
        if err <= errTOL:
            roots = 0.5*x[:-1][pos] + 0.5*x[1:][pos]
            return roots
        # refine the grid inside each bracketing interval
        inf_sup = zip(x[:-1][pos], x[1:][pos])
        x = np.hstack([np.linspace(inf, sup, 10) for inf, sup in inf_sup])
There is a root only if, between a and b, there are values with different signs. If this happens, there are almost certainly going to be multiple roots. Which one of those do you want to find?
You're going to have to use what you know about f to figure out how to deal with this. If you know there is exactly one root, you can just find the local minimum. If you know there are two, you can find the minimum and use its coordinate c to find one of the two roots (one between a and c, the other between c and what used to be called b).
You need to know what you're looking for to be able to find it.
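To illustrate that two-step idea, here is a minimal sketch with a toy function (not the dG above) whose values at both endpoints are positive: first locate the interior minimum, then bracket one root on each side of it:
import numpy as np
from scipy.optimize import minimize_scalar, brentq

# Toy function: f(0) > 0 and f(4) > 0, but there are two roots (x = 1 and x = 3).
def f(x):
    return (x - 1.0)*(x - 3.0)

a, b = 0.0, 4.0

# Find the interior minimum; its coordinate c splits [a, b] into two sub-intervals.
c = minimize_scalar(f, bounds=(a, b), method='bounded').x

if f(c)*f(a) < 0:                     # each side of c now brackets a sign change
    print(brentq(f, a, c))            # about 1.0
    print(brentq(f, c, b))            # about 3.0
else:
    print("f does not dip below zero, so there is no root in [a, b]")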

Solving an equation with scipy's fsolve

I'm trying to solve the equation f(x) = x - sin(x) - n*t - M0.
In this equation, n and M0 are attributes defined in my class. Further, t is a constant integer in the equation, but it has to change each time.
I've solved the equation so I get a 'new equation'. I've imported scipy.optimize:
def f(x, self):
    return (x - math.sin(x) - self.M0 - self.n*t)

def test(self, t):
    return fsolve(self.f, 1, args=(t))
Any corrections and suggestions to make it work?
I can see at least two problems: you've mixed up the order of arguments to f, and you're not giving f access to t. Something like this should work:
import math
from scipy.optimize import fsolve

class Fred(object):
    M0 = 5.0
    n = 5

    def f(self, x, t):
        return (x - math.sin(x) - self.M0 - self.n*t)

    def test(self, t):
        return fsolve(self.f, 1, args=(t))
[note that I was lazy and made M0 and n class members]
which gives:
>>> fred = Fred()
>>> fred.test(10)
array([ 54.25204733])
>>> import numpy
>>> [fred.f(x, 10) for x in numpy.linspace(54, 55, 10)]
[-0.44121095114838482, -0.24158955381855662, -0.049951288133726734,
0.13271070588400136, 0.30551399241764443, 0.46769772292130796,
0.61863201965219616, 0.75782574394219182, 0.88493255340251409,
0.99975517335862207]
You need to define f() like so:
def f(self, x, t):
    return (x - math.sin(x) - self.M0 - self.n * t)
In other words:
self comes first (it always does);
then comes the current value of x;
then come the arguments you supply to fsolve().
You're using a root finding algorithm of some kind. There are several in common use, so it'd be helpful to know which one.
You need to know three things:
The algorithm you're using
The equation you're finding the roots for
The initial guess and range over which you're looking
You need to know that some combinations may not have any roots.
Visualizing the functions of interest can be helpful. You have two: a linear function and a sinusoid. If you were to plot the two, which sets of constants would give you intersections? The intersection is the root you're looking for.
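To make that picture concrete, here is a small sketch reusing the constants from the Fred example above (M0 = 5.0, n = 5, t = 10): the root is where the line y = x meets the sinusoid y = sin(x) + M0 + n*t, and brentq finds it once the interval brackets a sign change:
import math
from scipy.optimize import brentq

# Constants reused from the Fred example above.
M0, n, t = 5.0, 5, 10

def f(x):
    return x - math.sin(x) - M0 - n*t

# f(0) < 0 and f(100) > 0, so [0, 100] brackets the root.
print(brentq(f, 0, 100))   # roughly 54.252, matching fred.test(10) above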
